--- abstract: | Let $X$ and $t$ be real numbers, $X\ge 1$ and $t>1$. The sets $$\left\{\left\lfloor \frac{X}{n^t} \right\rfloor ~:~ 1\leq n\leq X\right\}$$ are very sparse and satisfy the prime number theorem as $X$ increases. We give an exact formula for the cardinality of these sets. author: - "[RANDELL HEYMAN School of Mathematics and Statistics, University of New South Wales, Sydney, Australia `randell@unsw.edu.au`]{.smallcaps}" - "[MD RAHIL MIRAJ Theoretical Statistics and Mathematics Unit, Indian Statistical Institute, Bengaluru, Karnataka, India `rahilmiraj@gmail.com`]{.smallcaps}" title: "**An exact formula for a family of floor function sets**" --- Keywords: floor function sets, prime number theorem AMS Classification (2020): 11A41, 11A67, 11B05 # Introduction Sets and sequences utilising the floor function, and in particular the primality of their elements, have been studied for many decades. The two most commonly researched are probably Beatty sequences (see, for example,  [@ABS; @BaBa; @BaLi; @GuNe; @Harm]) and Piatetski-Shapiro sequences (see, for example, [@Akb; @BBBSW; @BBGY; @BGS; @LSZ; @Morg; @RivW]). Recent research interest has also focused on 'hyperbolic sets' utilising the floor function. For a positive integer $X$ let $$S(X):= \left\{\left\lfloor \frac{X}{n} \right\rfloor ~:~ 1\leq n\leq X\right\}.$$ In [@Hey] it was shown that $$|S(X)|=2\sqrt{X}+O(1).$$ An exact formula was also given for integer values of $X$. Specifically, $$|S(X)|= \left\lfloor\sqrt{4X+1}\right\rfloor-1.$$ Using exponential sum techniques, it was shown in [@Hey2] that the number of primes in the set $S(X)$ could be estimated. 
That is, $$|\{p \in S(X): p \text{ is prime}\}|=\frac{4\sqrt{X}}{\log X}+O\left(\frac{\sqrt{X}}{(\log X)^2}\right),$$ which was improved by Ma and Wu [@Ma3] as follows: $$\begin{aligned} |\{p \in S(X): p \text{ is prime}\}| &=\int_2^{\sqrt{X}}\frac{dt}{\log t}+\int_2^{\sqrt{X}}\frac{dt}{\log (X/t)}\cr&+O\left(\sqrt{X}\exp\left(-c(\log X)^{3/5}(\log \log X)^{-1/5}\right)\right),\end{aligned}$$ where $c>0$ is an absolute constant. In that paper Ma and Wu observed that $S(X)$ is probably the first example of such a sparse set that satisfies the prime number theorem, in the sense that $$|\{p \in S(X): p \text{ is prime}\}|\sim\frac{|S(X)|}{\log |S(X)| }.$$ More recently, in [@Bor] it was shown that there exist even sparser sets that satisfy the prime number theorem. For real $t>1$ the sets $$S_t(X):= \left\{\left\lfloor \frac{X}{n^t} \right\rfloor ~:~ 1\leq n\leq X\right\}$$ also satisfy the prime number theorem. An estimate of the cardinality of $S_t(X)$ is given in that paper by $$|S_t(X)|=X^{\frac1{t+1}}\left(t^{\frac{t}{t+1}}+t^{\frac1{t+1}}\right)+O(1).$$ Whilst asymptotic formulas with error terms are the norm in number theory, it is interesting to extend the exact formula from $|S(X)|$ to $|S_t(X)|$. In this short paper we prove the following theorem: **Theorem 1**. *For given $X\geq 1$ and $t>1$, let $a=(tX)^{1/(t+1)}$. For $1\leq X< 2$ we have $|S_t(X)|=1$. If $X\geq 2$, then $$|S_t(X)|= \begin{cases} a+\left\lfloor{\frac{X}{a^t}}\right\rfloor & \text{if $a$ is an integer},\cr \lfloor{a}\rfloor+\left\lfloor{\frac{X}{\lfloor{a+1}\rfloor^t}}\right\rfloor+\varepsilon(X,t) & \text{if $a$ is not an integer},\cr \end{cases}$$ where $$\varepsilon(X,t)= \begin{cases} 0 & \text{if $X\ge \left\lfloor{\frac{X}{\lfloor{a}\rfloor^t}}\right\rfloor \lfloor{a+1}\rfloor^t$},\cr 1 & \text{otherwise.} \end{cases}$$* # Proof of theorem It is easy to see that for $1\leq X< 2$ we have $S_t(X)=\{1\}$ for any $t>1$. 
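Before turning to the proof, the formula of Theorem 1 can be checked numerically against a brute-force enumeration of $S_t(X)$. The sketch below is illustrative only: it decides whether $a=(tX)^{1/(t+1)}$ is an integer by testing whether $a^{t+1}=tX$, which is exact for the integer values of $X$ and $t$ used here.

```python
import math

def card_brute(X, t):
    # |S_t(X)| by direct enumeration of floor(X / n^t) for 1 <= n <= X.
    return len({math.floor(X / n**t) for n in range(1, math.floor(X) + 1)})

def card_formula(X, t):
    # The exact formula of Theorem 1.
    if X < 2:
        return 1
    a = (t * X) ** (1.0 / (t + 1))
    n = round(a)
    if n ** (t + 1) == t * X:  # a is an integer, i.e. a^(t+1) = tX (exact for int X, t)
        return n + math.floor(X / n**t)
    fa = math.floor(a)  # note floor(a+1) = floor(a) + 1 since a is not an integer
    eps = 0 if X >= (X // fa**t) * (fa + 1) ** t else 1
    return fa + X // (fa + 1) ** t + eps

# Agreement over a range of inputs, covering both cases of the theorem.
assert all(card_formula(X, t) == card_brute(X, t)
           for t in (2, 3) for X in range(1, 400))
```

For instance $X=10$, $t=2$ gives $a=20^{1/3}\approx 2.71$, $\varepsilon=1$ and $|S_2(10)|=4$, matching the set $\{10,2,1,0\}$.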
We base our proof on the graph of $f(n)=X/n^t$, noting that the values $\left\lfloor X/n^t\right\rfloor$ can be read off the graph. Let $a$ be the (unique) value of $n$ for which $f'(n)=-1$. A simple calculation shows that $a=(tX)^{1/(t+1)}$. Consider the case when $a$ is an integer. For $1 \le n < a$, utilising the mean value theorem, we see that $$\frac{X}{(n+1)^t}-\frac{X}{n^t}<-1$$ and thus $$\left\lfloor\frac{X}{(n+1)^t}\right\rfloor-\left\lfloor\frac{X}{n^t}\right\rfloor<\frac{X}{(n+1)^t}-\frac{X}{n^t}+1<0.$$ It follows that $\left\lfloor\frac{X}{n^t}\right\rfloor$ and $\left\lfloor\frac{X}{(n+1)^t}\right\rfloor$ are distinct. Therefore $$\left\lfloor\frac{X}{1^t}\right\rfloor,\left\lfloor\frac{X}{2^t}\right\rfloor,\ldots,\left\lfloor\frac{X}{(a-1)^t}\right\rfloor \in S_t(X).$$ So, for $1 \le n < a$ the contribution to the cardinality of $S_t(X)$ is $a-1$. For $a \le n \le X$, the maximum element contributed to $S_t(X)$ is $\left\lfloor\frac{X}{a^t}\right\rfloor$ and the minimum element is $0$. Moreover, by the mean value theorem, for $n \ge a$ we have $\frac{X}{n^t}-\frac{X}{(n+1)^t}<1$, so consecutive floor values decrease by at most $1$ and no integer in between is skipped. So in this case we obtain $$\left\lfloor\frac{X}{a^t}\right\rfloor, \left\lfloor\frac{X}{a^t}\right\rfloor-1,\ldots,1,0 \in S_t(X).$$ So, for $a \le n \le X$ the contribution to the cardinality of $S_t(X)$ is $\left\lfloor\frac{X}{a^t}\right\rfloor+1.$ Again using the mean value theorem, $\left\lfloor\frac{X}{a^t}\right\rfloor\ne \left\lfloor\frac{X}{(a-1)^t}\right\rfloor$. So there is no overcounting. More precisely, $$\left\{\left\lfloor\frac{X}{1^t}\right\rfloor,\left\lfloor\frac{X}{2^t}\right\rfloor,\ldots,\left\lfloor\frac{X}{(a-1)^t}\right\rfloor\right\} \cap \left \{\left\lfloor\frac{X}{a^t}\right\rfloor, \left\lfloor\frac{X}{a^t}\right\rfloor-1,\ldots,1,0\right \} = \emptyset.$$ We conclude that if $a$ is an integer then $$|S_t(X)|=a+\left\lfloor\frac{X}{a^t}\right\rfloor.$$ Next we examine the other case, in which $a$ is not an integer. Using a similar process we see that for $1 \le n \le \left\lfloor a\right\rfloor$ we have $f'(n) < -1$. 
So $$\left\lfloor\frac{X}{1^t}\right\rfloor,\left\lfloor\frac{X}{2^t}\right\rfloor, \ldots , \left\lfloor\frac{X}{\left\lfloor a\right\rfloor^t}\right\rfloor \in S_t(X).$$ Therefore the contribution to the cardinality of $S_t(X)$ from the values of $n$ with $1\le n \le \left\lfloor a\right\rfloor$ is $\left\lfloor a\right\rfloor$. For $\left\lfloor a\right\rfloor < n \le X$ we have $$\left\lfloor\frac{X}{\left\lfloor a+1\right\rfloor^t}\right\rfloor,\left\lfloor\frac{X}{\left\lfloor a+1\right\rfloor^t}\right\rfloor-1, \ldots , 1,0\in S_t(X).$$ So the contribution to the cardinality of $S_t(X)$ from the values of $n$ with $\left\lfloor a\right\rfloor < n \le X$ is $\left\lfloor\frac{X}{\left\lfloor a+1\right\rfloor^t}\right\rfloor+1$. But it may be that $$\left\lfloor\frac{X}{\left\lfloor a\right\rfloor^t}\right\rfloor=\left\lfloor\frac{X}{\left\lfloor a+1\right\rfloor^t}\right\rfloor.$$ So we correct for this possible overlap, subtracting 1 if $$\left\lfloor\frac{X}{\left\lfloor a\right\rfloor^t}\right\rfloor=\left\lfloor\frac{X}{\left\lfloor a+1\right\rfloor^t}\right\rfloor,$$ and making no change otherwise. Checking this directly requires two divisions, each of which takes $O(n^2)$ time for $n$-digit numbers using schoolbook division. We will show that 1 should be subtracted from the cardinality if, and only if, $X\ge \left\lfloor{\frac{X}{\lfloor{a}\rfloor^t}}\right\rfloor \lfloor{a+1}\rfloor^t$. This criterion requires only one division and one multiplication, so checking for the overlap is slightly faster than before, as multiplication takes $O(n^{1.58})$ time using Karatsuba's algorithm [@Kars]. 
If 1 should be subtracted from the cardinality then an overlap exists and, for some non-negative integer $k$, we have $$\begin{aligned} \label{eq:k} \left\lfloor\frac{X}{\left\lfloor a\right\rfloor^t}\right\rfloor&=\left\lfloor\frac{X}{\left\lfloor a+1\right\rfloor^t}\right\rfloor=k.\end{aligned}$$ Then $$\left\lfloor\frac{X}{\left\lfloor a\right\rfloor^t}\right\rfloor=k=\left\lfloor\frac{X}{\left\lfloor a+1\right\rfloor^t}\right\rfloor\leq \frac{X}{\left\lfloor a+1\right\rfloor^t},$$ and so $$\left\lfloor\frac{X}{\left\lfloor a\right\rfloor^t}\right\rfloor\leq \frac{X}{\left\lfloor a+1\right\rfloor^t}.$$ Multiplying both sides by $\left\lfloor a+1\right\rfloor^t$ shows that if 1 should be deducted then $$X\ge \left\lfloor{\frac{X}{\lfloor{a}\rfloor^t}}\right\rfloor \lfloor{a+1}\rfloor^t.$$ Conversely, $$X\ge \left\lfloor{\frac{X}{\lfloor{a}\rfloor^t}}\right\rfloor \lfloor{a+1}\rfloor^t$$ implies $$\left\lfloor\frac{X}{\left\lfloor a\right\rfloor^t}\right\rfloor\leq \frac{X}{\left\lfloor a+1\right\rfloor^t},$$ which means that $$\left\lfloor\frac{X}{\left\lfloor a\right\rfloor^t}\right\rfloor\leq \frac{X}{\left\lfloor a+1\right\rfloor^t}<\frac{X}{\left\lfloor a\right\rfloor^t}<\left\lfloor\frac{X}{\left\lfloor a\right\rfloor^t}\right\rfloor+1.$$ Setting $k:=\left\lfloor\frac{X}{\left\lfloor a\right\rfloor^t}\right\rfloor$, we therefore have $$k \le \frac{X}{\left\lfloor a+1\right\rfloor^t}<\left\lfloor\frac{X}{\left\lfloor a\right\rfloor^t}\right\rfloor+1=k+1,$$ which implies that $$k=\left\lfloor\frac{X}{\left\lfloor a+1\right\rfloor^t}\right\rfloor,$$ and so an overlap exists, and 1 should be subtracted from the cardinality. 
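The equivalence just established can also be checked by machine for small parameters. The following sketch (illustrative only; integer $X$ and $t$, with the floors computed by exact integer division) compares the two-division test against the one-division-one-multiplication test.

```python
import math

def overlap_criteria_agree(X, t):
    # Compare the two overlap tests from the proof when a is not an integer.
    a = (t * X) ** (1.0 / (t + 1))
    fa = math.floor(a)
    if round(a) ** (t + 1) == t * X:  # a is an integer; the criterion is not used
        return True
    direct = (X // fa**t) == (X // (fa + 1) ** t)        # two divisions
    via_mult = X >= (X // fa**t) * (fa + 1) ** t         # one division, one multiplication
    return direct == via_mult

# The proof shows the two tests always agree.
assert all(overlap_criteria_agree(X, t) for t in (2, 3) for X in range(2, 2000))
```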
We conclude that if $a$ is not an integer then $$|S_t(X)|=\lfloor{a}\rfloor+\left\lfloor{\frac{X}{\lfloor{a+1}\rfloor^t}}\right\rfloor+\varepsilon(X,t),$$ where $$\varepsilon(X,t)= \begin{cases} 0 & \text{if $X\ge \left\lfloor{\frac{X}{\lfloor{a}\rfloor^t}}\right\rfloor \lfloor{a+1}\rfloor^t$},\cr 1 & \text{otherwise,} \end{cases}$$ concluding the proof. A. G. Abercrombie, W. D. Banks and I. E. Shparlinski, Arithmetic functions on Beatty sequences, *Acta Arith.* **136** (2009), 81--89. Y. Akbal, Friable values of Piatetski-Shapiro sequences, *Proc. Amer. Math. Soc.* **145** (2017), 4255--4268. R. C. Baker and W. D. Banks, Character sums with Piatetski-Shapiro sequences, *Quart. J. Math.* **66** (2015), 393--416. R. C. Baker, W. D. Banks, J. Brüdern, I. E. Shparlinski and A. Weingartner, Piatetski-Shapiro sequences, *Acta Arith.* **157** (2013), 37--68. R. C. Baker, W. D. Banks, V. Z. Guo and A. M. Yeager, Piatetski-Shapiro primes from almost primes, *Monatsh. Math.* **174** (2014), 357--370. R. C. Baker and L. Zhao, Gaps between primes in Beatty sequences, *Acta Arith.* **172** (2016), 207--242. W. D. Banks, V. Z. Guo and I. E. Shparlinski, Almost primes of the form $\left\lfloor p^c\right\rfloor$, *Indag. Math.* **27** (2016), 423--436. O. Bordellès, R. Heyman and D. Nikolic, Sparse sets that satisfy the prime number theorem, *preprint*, available at arXiv:2304.08736 \[math.NT\]. A. M. Güloǧlu and C. W. Nevans, Sums with multiplicative functions over a Beatty sequence, *Bull. Austral. Math. Soc.* **78** (2008), 327--334. G. Harman, Primes in Beatty sequences in short intervals, *Mathematika* **62** (2016), 572--586. R. Heyman, Cardinality of a floor function set, *Integers* **19** (2019), A67. R. Heyman, Primes in the floor function set, *Integers* **22** (2022), A59. K. Liu, I. E. Shparlinski and T. Zhang, Squares in Piatetski-Shapiro sequences, *Acta Arith.* **181** (2017), 239--252. R. Ma and J. 
Wu, On the primes in floor function sets, *preprint*, available at arXiv:2112.12426 \[math.NT\]. J. F. Morgenbesser, The sum of digits of $\left\lfloor n^c\right\rfloor$, *Acta Arith.* **148** (2011), 367--393. J. Rivat and J. Wu, Prime numbers of the form $\left\lfloor n^c\right\rfloor$, *Glasg. Math. J.* **43** (2001), 414--433. A. Karatsuba, The complexity of computations, *Proc. Steklov Inst. Math.* **211** (1995), 169--183.
--- abstract: | The anti-Ramsey number $\mathrm{ar}(n,F)$ of an $r$-graph $F$ is the minimum number of colors needed to color the complete $n$-vertex $r$-graph to ensure the existence of a rainbow copy of $F$. We prove a general upper bound for $\mathrm{ar}(n,F)$ when $F$ is the expansion of a hypergraph of smaller uniformity, which refines the general bound $\mathrm{ar}(n,F) = \mathrm{ex}(n,F_{-}) + o(n^r)$ by Erdős--Simonovits--Sós. Here $F_{-}$ is the family of $r$-graphs obtained from $F$ by removing one edge. We also determine the exact value of $\mathrm{ar}(n,F)$ for large $n$ when $F$ is the expansion of a complete graph, extending a result of Erdős--Simonovits--Sós from graphs to hypergraphs. **Keywords:** anti-Ramsey problem, hypergraph Turán problem, expansion of hypergraphs, splitting hypergraphs, stability. author: - "Xizhi Liu[^1]" - "Jialei Song[^2]" title: "**Hypergraph anti-Ramsey theorems**" --- # Introduction {#SEC:Introduction} Fix an integer $r\ge 2$. An $r$-graph $\mathcal{H}$ is a collection of $r$-subsets of some finite set $V$. We identify a hypergraph $\mathcal{H}$ with its edge set and use $V(\mathcal{H})$ to denote its vertex set. The **anti-Ramsey number** $\mathrm{ar}(n,F)$ of an $r$-graph $F$ is the minimum number $m$ such that any surjective map $\chi \colon K_n^r \to [m]$ yields a rainbow copy of $F$, i.e. a copy of $F$ whose edges receive pairwise distinct values under $\chi$. Given a family $\mathcal{F}$ of $r$-graphs, we say $\mathcal{H}$ is **$\mathcal{F}$-free** if it does not contain any member of $\mathcal{F}$ as a subgraph. The **Turán number** $\mathrm{ex}(n,\mathcal{F})$ of $\mathcal{F}$ is the maximum number of edges in an $\mathcal{F}$-free $r$-graph on $n$ vertices. 
The study of $\mathrm{ex}(n,\mathcal{F})$ and its variants has been a central topic in extremal graph and hypergraph theory since the seminal work of Turán [@T41], and we refer the reader to the surveys [@Fu91; @Caen94; @Sid95; @Kee11] for more related results. In [@ESS75], Erdős--Simonovits--Sós initiated the study of anti-Ramsey problems and proved various results for graphs and hypergraphs. In particular, they proved that for every $r$-graph $F$ with $r\ge 2$, $$\begin{aligned} \label{equ:ESS-hypergraph} \mathrm{ar}(n,F) \le \mathrm{ex}(n,F_{-}) + o(n^r), \end{aligned}$$ where $F_{-}$ denotes the family of $r$-graphs obtained from $F$ by removing exactly one edge. For complete graphs, they proved that for fixed $\ell \ge 2$ and sufficiently large $n$, $$\begin{aligned} \label{equ:ESS-complete-graph} \mathrm{ar}(n,K_{\ell+1}) = \mathrm{ex}(n,K_{\ell}) + 2. \end{aligned}$$ Later, Montellano-Ballesteros and Neumann-Lara [@MN02] refined their result by showing that [\[equ:ESS-complete-graph\]](#equ:ESS-complete-graph){reference-type="eqref" reference="equ:ESS-complete-graph"} holds for all integers $n \ge \ell$. Determining the value of $\mathrm{ar}(n,F)$ for graphs $F$ has received a lot of attention, and there has been substantial progress since the work of Erdős--Simonovits--Sós. Taking a more comprehensive perspective, Jiang [@Jiang02] showed that $\mathrm{ar}(n,F) \le \mathrm{ex}(n,F_{-}) +O(n)$ for subdivided[^3] graphs $F$. For further results on various classes of graphs, we refer the reader to a survey [@FMO10] by Fujita--Magnant--Ozeki for more details. In contrast, the value of $\mathrm{ar}(n,F)$ is known only for a few classes of hypergraphs, such as matchings (see e.g. [@OY13; @FK19; @Jin21; @GLP23]), linear paths and cycles (see e.g. [@GLS20; @TLY22; @TLG23]), and the augmentation of certain linear trees (see e.g. [@LS23a]). 
In this note, we contribute to hypergraph anti-Ramsey theory by refining [\[equ:ESS-hypergraph\]](#equ:ESS-hypergraph){reference-type="eqref" reference="equ:ESS-hypergraph"} and extending [\[equ:ESS-complete-graph\]](#equ:ESS-complete-graph){reference-type="eqref" reference="equ:ESS-complete-graph"} to hypergraph expansions. Let $r > k \ge 2$ be integers. The **expansion** $H_{F}^{r}$ of a $k$-graph $F$ is the $r$-graph obtained from $F$ by adding a set of $r-k$ new vertices to each edge, ensuring that different edges receive disjoint sets (see Figure [\[fig:Path-Cycle-Expansion\]](#fig:Path-Cycle-Expansion){reference-type="ref" reference="fig:Path-Cycle-Expansion"}). The expansion $H_{\mathcal{F}}^{r}$ of a family $\mathcal{F}$ is simply the collection of the expansions of all members of $\mathcal{F}$. Expansions are important objects in extremal set theory and hypergraph Turán problems. They were introduced by Mubayi in [@M06] as a way to extend Turán's theorem to hypergraphs. There has been a lot of progress in the study of expansions over the last few decades, and we refer the reader to the survey [@MV15Survey] by Mubayi--Verstraëte for more related results. ## Expansion of hypergraphs In this subsection, we present a refinement of [\[equ:ESS-hypergraph\]](#equ:ESS-hypergraph){reference-type="eqref" reference="equ:ESS-hypergraph"}. The following definitions will play a crucial role in our results. Given a $k$-graph $F$, $k \ge 2$, and a vertex $u\in V(F)$, the **$u$-splitting** of $F$ is a $k$-graph, denoted by $F\vee u$, obtained from $F$ by - removing the vertex $u$ and all edges containing $u$ from $F$, and then - adding a $d_{F}(u)$-set $\{v_e \colon e \in L_{F}(u)\}$ of new vertices to the vertex set and adding $\left\{e\cup \{v_e\} \colon e \in L_{F}(u)\right\}$ to the edge set. 
In other words, $$\begin{aligned} F\vee u := \left\{e\cup \{v_e\} \colon e \in L_{F}(u)\right\} \cup \left(F-u\right), \end{aligned}$$ where $\{v_e \colon e \in L_{F}(u)\}$ is a $d_{F}(u)$-set of new vertices outside $V(F)$. Given an independent set $I:= \{u_1, \ldots, u_{\ell}\}$ in $F$, the **$I$-splitting** of $F$, denoted by $F\vee I$, is defined inductively by letting $F_0 := F$ and $F_i:= F_{i-1}\vee u_i$ for $i\in [\ell]$ (see Figure [\[fig:splitting\]](#fig:splitting){reference-type="ref" reference="fig:splitting"}). It is easy to see that since $I$ is independent, the ordering of vertices in $I$ does not affect the resulting $k$-graph. The **splitting family** $\mathrm{Split}(F)$ of $F$ is defined as $$\begin{aligned} \mathrm{Split}(F) := \left\{\hat{F} \colon \text{$\exists$ an independent set $I$ in $F$ such that $\hat{F} = F\vee I$}\right\}. \end{aligned}$$ In the definition above, we allow $I$ to be empty. Hence, $F\in \mathrm{Split}(F)$. Note that $|\hat{F}| = |F|$ for all $\hat{F} \in \mathrm{Split}(F)$. Given a coloring $\chi\colon K_{n}^{r} \to \mathbb{N}$, we say a subgraph $\mathcal{H} \subseteq K_{n}^{r}$ is **rainbow** if no two edges in $\mathcal{H}$ have the same color. The coloring $\chi$ is **rainbow-$\mathcal{F}$-free** for some family $\mathcal{F}$ if every rainbow subgraph of $K_{n}^r$ is $\mathcal{F}$-free. The main result of this subsection is as follows. **Theorem 1**. *Let $n \ge r > k \ge 2$ be integers and $F$ be a $k$-graph. Suppose that $\chi\colon K_{n}^{r} \to \mathbb{N}$ is a rainbow-$H_{F}^{r}$-free coloring. Then every rainbow subgraph $\mathcal{H} \subseteq K_{n}^{r}$ can be made $H_{F_{-}}^{r}$-free by removing at most $\left(|F|-1\right)\cdot \mathrm{ex}\left(n, \mathrm{Split}(F)\right)$ edges. In particular, for every integer $n \ge r$, $$\begin{aligned} \mathrm{ar}(n, H_{F}^{r}) \le \mathrm{ex}(n, H_{F_{-}}^{r}) + \left(|F|-1\right)\cdot \mathrm{ex}\left(n, \mathrm{Split}(F)\right). 
\end{aligned}$$* Since $\mathrm{Split}(F)$ is a family of $k$-graphs (and $k \le r-1$), we have $\mathrm{ar}(n, H_{F}^{r}) \le \mathrm{ex}(n, H_{F_{-}}^{r}) + O(n^k)$, which improves the bound given by [\[equ:ESS-hypergraph\]](#equ:ESS-hypergraph){reference-type="eqref" reference="equ:ESS-hypergraph"}. Observe that if the graph $F$ is obtained from a bipartite graph by adding a forest to one part, then the family $\mathrm{Split}(F)$ contains a forest (obtained by splitting the other part in $F$). Therefore, we have the following corollary. **Corollary 2**. *Let $n \ge r\ge 3$ be integers and $F$ be a graph obtained from a bipartite graph by adding a forest to one part. Suppose that $\chi\colon K_{n}^{r} \to \mathbb{N}$ is a rainbow-$H_{F}^{r}$-free coloring. There exists a constant $C_F$ depending only on $F$ such that every rainbow subgraph $\mathcal{H} \subseteq K_{n}^{r}$ can be made $H_{F_{-}}^{r}$-free by removing at most $C_F \cdot n$ edges. In particular, for all integers $n \ge r$, $$\begin{aligned} \mathrm{ar}(n, H_{F}^{r}) \le \mathrm{ex}(n, H_{F_{-}}^{r}) + C_F \cdot n. \end{aligned}$$* Corollary [Corollary 2](#CORO:antiRamsey-expansion-hypergraph-splittingk=2){reference-type="ref" reference="CORO:antiRamsey-expansion-hypergraph-splittingk=2"} together with stability theorems in [@FJS14path; @FJ15; @FJ15cycle; @KMV15a; @KMV17b] implies [@GLS20 Lemma 2.7], [@TLG23 Theorem 5], and [@LS23a Lemma 4.2]. Notice that if a $k$-graph $F$ is $p$-partite, then $\mathrm{Split}(F)$ contains a $(p-1)$-partite $k$-graph (obtained by splitting an arbitrary part in $F$). Combined with the (hypergraph) Kövari--Sós--Turán theorem [@KST54; @E64], we obtain the following corollary. **Corollary 3**. *Let $n \ge r> k \ge 2$ be integers and $F$ be a $(k+1)$-partite $k$-graph. Suppose that $\chi\colon K_{n}^{r} \to \mathbb{N}$ is a rainbow-$H_{F}^{r}$-free coloring. 
There exist constants $C_F, \alpha_F >0$ depending only on $F$ such that every rainbow subgraph $\mathcal{H} \subseteq K_{n}^{r}$ can be made $H_{F_{-}}^{r}$-free by removing at most $C_F \cdot n^{k-\alpha_F}$ edges. In particular, for all integers $n \ge r$, $$\begin{aligned} \mathrm{ar}(n,H_{F}^{r}) \le \mathrm{ex}(n,H_{F_{-}}^{r}) + C_{F}\cdot n^{k-\alpha_F}. \end{aligned}$$* Theorem [Theorem 1](#THM:antiRamsey-expansion-hypergraph-splitting){reference-type="ref" reference="THM:antiRamsey-expansion-hypergraph-splitting"} will be proved in Section [2](#SEC:proof-antiRamsey-expansion-hypergraph-splitting){reference-type="ref" reference="SEC:proof-antiRamsey-expansion-hypergraph-splitting"}. ## Expansion of graphs In this subsection, we present an extension of [\[equ:ESS-complete-graph\]](#equ:ESS-complete-graph){reference-type="eqref" reference="equ:ESS-complete-graph"} to the expansion of graphs. For convenience, we let $H_{\ell}^{r} := H_{K_{\ell}}^{r}$. Let $t \ge 1$ be an integer. We use $F[t]$ to denote the **$t$-blowup** of $F$, i.e. $F[t]$ is the $r$-graph obtained from $F$ by replacing vertices with disjoint $t$-sets, and replacing each edge in $F$ with the corresponding complete $r$-partite $r$-graph. Given a family $\mathcal{F}$ of $r$-graphs, we let $$\begin{aligned} \mathcal{F}[t] := \left\{F[t] \colon F\in \mathcal{F}\right\}. \end{aligned}$$ Let $\ell \ge 2$ and $t \ge 4$ be integers. - Let $K_{\ell}^{\alpha}[t]$ denote the graph obtained from the blowup $K_{\ell}[t]$ by adding a path of length two into one part (see Figure [\[fig:edge-critical\]](#fig:edge-critical){reference-type="ref" reference="fig:edge-critical"} (a)). - Let $K_{\ell}^{\beta}[t]$ denote the graph obtained from the blowup $K_{\ell}[t]$ by adding two disjoint edges into one part (see Figure [\[fig:edge-critical\]](#fig:edge-critical){reference-type="ref" reference="fig:edge-critical"} (b)). 
- Let $K_{\ell}^{\gamma}[t]$ denote the graph obtained from the blowup $K_{\ell}[t]$ by adding two edges into two different parts (see Figure [\[fig:edge-critical\]](#fig:edge-critical){reference-type="ref" reference="fig:edge-critical"} (c)). The motivation for these definitions is that if $F$ is obtained from an edge-critical graph by adding one edge, then $F$ can be found within one of the previously defined graphs for sufficiently large $t$. Given integers $n \ge \ell \ge r \ge 2$, let $V_1 \sqcup \cdots \sqcup V_{\ell} = [n]$ be a partition such that $|V_{\ell}|+1\ge |V_1| \ge |V_2| \ge \cdots \ge |V_{\ell}|$. Let $T_{r}(n,\ell)$ denote the $r$-graph whose edge set consists of all $r$-subsets of $[n]$ that contain at most one vertex from each $V_i$. Let $t_{r}(n,\ell)$ denote the number of edges in $T_{r}(n,\ell)$ and notice that $t_{r}(n,\ell) \sim \binom{\ell}{r}\left(\frac{n}{\ell}\right)^r$. Extending the classical Turán theorem to hypergraphs, Mubayi proved in [@M06] that $\mathrm{ex}(n,H_{\ell+1}^{r}) = (1+o(1))t_{r}(n,\ell)$ for all $\ell \ge r \ge 3$. Building on a stability theorem of Mubayi, Pikhurko [@Pik13] later proved that for fixed $\ell \ge r \ge 3$, $\mathrm{ex}(n,H_{\ell+1}^{r}) = t_{r}(n,\ell)$ holds for all sufficiently large $n$. The main results in this subsection are as follows. **Theorem 4**. *Let $\ell \ge 3$ and $t \ge 4$ be fixed integers. For all sufficiently large $n$, $$\begin{aligned} \mathrm{ar}(n,H_{F}^{3}) = \begin{cases} t_{3}(n,\ell) + \ell+1, & \quad\text{if}\quad F \in \left\{K_{\ell}^{\alpha}[t],\ K_{\ell}^{\beta}[t]\right\}, \\ t_{3}(n,\ell) + 2, & \quad\text{if}\quad F = K_{\ell}^{\gamma}[t]. \end{cases} \end{aligned}$$* The situation for $r \ge 4$ is simpler. **Theorem 5**. *Let $\ell \ge r \ge 4$ and $t \ge 4$ be fixed integers. For all sufficiently large $n$, $$\begin{aligned} \mathrm{ar}(n,H_{F}^{r}) = t_{r}(n,\ell) + 2 \quad\text{for all}\quad F\in \left\{K_{\ell}^{\alpha}[t],\ K_{\ell}^{\beta}[t],\ K_{\ell}^{\gamma}[t]\right\}. 
\end{aligned}$$* We would like to remind the reader that the case $r=2$ can be handled by a result of Jiang--Pikhurko in [@JP09]. Observe that for every $r\ge 3$, the $r$-graph $H_{\ell+1}^{r}$ is contained in $H_{K_{\ell}^{\gamma}[4]}^{r}$. Hence, we obtain the following corollary, which is an extension of [\[equ:ESS-complete-graph\]](#equ:ESS-complete-graph){reference-type="eqref" reference="equ:ESS-complete-graph"} to hypergraphs. **Corollary 6**. *Let $\ell \ge r \ge 3$ be fixed integers. For all sufficiently large $n$, $$\begin{aligned} \mathrm{ar}(n,H_{\ell+1}^{r}) = t_{r}(n,\ell) + 2. \end{aligned}$$* Proofs for Theorems [Theorem 4](#THM:antiRamsey-expansion-edge-criticalr=3){reference-type="ref" reference="THM:antiRamsey-expansion-edge-criticalr=3"} and [Theorem 5](#THM:antiRamsey-expansion-edge-criticalr>3){reference-type="ref" reference="THM:antiRamsey-expansion-edge-criticalr>3"} are presented in Section [3](#SEC:proof-antiRamsey-expansion-edge-criticalr=3){reference-type="ref" reference="SEC:proof-antiRamsey-expansion-edge-criticalr=3"}. # Proof of Theorem [Theorem 1](#THM:antiRamsey-expansion-hypergraph-splitting){reference-type="ref" reference="THM:antiRamsey-expansion-hypergraph-splitting"} {#SEC:proof-antiRamsey-expansion-hypergraph-splitting} *Proof of Theorem [Theorem 1](#THM:antiRamsey-expansion-hypergraph-splitting){reference-type="ref" reference="THM:antiRamsey-expansion-hypergraph-splitting"}.* Let $n \ge r > k \ge 2$ be integers and $F$ be a $k$-graph. Let $\chi\colon K_{n}^{r} \to \mathbb{N}$ be a rainbow-$H_{F}^{r}$-free coloring and let $\mathcal{H} \subseteq K_{n}^{r}$ be a rainbow subgraph. Let $\mathcal{C}$ be a maximal collection of pairwise edge-disjoint copies in $\mathcal{H}$ of members of $H_{F_{-}}^{r}$. In other words, the members of $\mathcal{C}$ are pairwise edge-disjoint and, by maximality, if $H \subseteq \mathcal{H}$ is a copy of some member of $H_{F_{-}}^{r}$, then $H$ contains at least one edge from some member of $\mathcal{C}$. 
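Since the argument below repeatedly splits vertices of $F$, it may help to fix ideas with a small computational sketch of the $u$-splitting operation from the introduction (illustrative only; a $k$-graph is represented as a set of frozensets of integer vertices).

```python
from itertools import count

def u_splitting(F, u):
    # F ∨ u: each edge through u is replaced by a copy in which u is
    # replaced by its own brand-new vertex; edges avoiding u are kept (F - u).
    fresh = count(max(v for e in F for v in e) + 1)  # labels for new vertices v_e
    kept = {e for e in F if u not in e}
    split = {frozenset(e - {u}) | {next(fresh)} for e in F if u in e}
    return kept | split

# Splitting the middle vertex of the path 1-2-3 yields two disjoint edges
# on four vertices; in particular |F ∨ u| = |F|, as noted above.
P = {frozenset({1, 2}), frozenset({2, 3})}
M = u_splitting(P, 2)
assert len(M) == len(P)
assert all(2 not in e for e in M)
assert len(set().union(*M)) == 4
```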
For convenience, let us assume that $\mathcal{C} = \{Q_1, \ldots, Q_m\}$, where $m \ge 0$ is an integer and $Q_i \subseteq \mathcal{H}$ is a copy of some member in $H_{F_{-}}^{r}$ for $i\in [m]$. Let $i\in [m]$ and $S\subseteq [n]\setminus V(Q_i)$ be an $(r-k)$-set. Since $Q_i$ is a copy of some member in $H_{F_{-}}^{r}$, there exists a $k$-set $e_i \subseteq V(Q_i)$ such that $Q_i \cup \{e_i \cup S\}$ is a copy of $H_{F}^{r}$. We let $\mathcal{A}$ be an auxiliary multi-$k$-graph whose edge set is the collection of $e_i$ for all $i\in [m]$. For $i\in [m]$ let $\chi(Q_i) := \{\chi(e)\colon e\in Q_i\}$. Since $\mathcal{H}$ is rainbow and $Q_i \subseteq \mathcal{H}$ for $i\in [m]$, we know that $$\begin{aligned} \label{equ:anti-Ramsey-splitting-disjoint-color-set} \chi(Q_i) \cap \chi(Q_j) = \emptyset \quad\text{for all}\quad 1\le i < j \le m. \end{aligned}$$ **Claim 7**. *For every $i\in [m]$ and for every $(r-k)$-set $S\subseteq [n]\setminus V(Q_i)$, we have $\chi(e_i \cup S) \in \chi(Q_i)$.* *Proof.* Suppose to the contrary that $\chi(e_i \cup S) \not\in \chi(Q_i)$, then the $r$-graph $Q_i \cup \{e_i \cup S\}$ would be a rainbow copy of $H_{F}^{r}$, a contradiction. ◻ **Claim 8**. *The multi-$k$-graph $\mathcal{A}$ contains no repeated edges. In other words, $e_i \neq e_j$ for all $1 \le i < j \le m$.* *Proof.* Suppose to the contrary that $e_i = e_j =: e$ for some $i\neq j$. Let $S\subseteq [n]\setminus \left(V(Q_i)\cup V(Q_j)\right)$ be an $(r-k)$-set. It follows from Claim [Claim 7](#CLAIM:anti-Ramsey-splitting-e_i-cup-S){reference-type="ref" reference="CLAIM:anti-Ramsey-splitting-e_i-cup-S"} and [\[equ:anti-Ramsey-splitting-disjoint-color-set\]](#equ:anti-Ramsey-splitting-disjoint-color-set){reference-type="eqref" reference="equ:anti-Ramsey-splitting-disjoint-color-set"} that $\chi(e \cup S) \in \chi(Q_i) \cap \chi(Q_j) = \emptyset$, a contradiction. ◻ **Claim 9**. *The $k$-graph $\mathcal{A}$ is $\mathrm{Split}(F)$-free. 
In particular, $m \le \mathrm{ex}(n, \mathrm{Split}(F))$.* *Proof.* Suppose to the contrary that $\mathcal{A}$ contains some member $\hat{F} \in \mathrm{Split}(F)$. By the definition of $\mathrm{Split}(F)$, there exists an independent set $I= \{v_1, \ldots, v_{p}\}$ in $F$ such that $\hat{F} = F\vee I$, where $p\ge 0$ is an integer. Let $d_i:= d_{F}(v_i)$ for $i\in [p]$, and let $d:= \sum_{i\in[p]}d_i$. Assume that $\hat{F} = \{f_1, \ldots, f_{\ell}\}$, where $\ell:= |\hat{F}|$. Let $\psi\colon \hat{F} \to \mathcal{A}$ be an embedding. By relabelling members in $\mathcal{C}$ if necessary, we may assume that $\psi(f_i) = e_i$ for $i\in [\ell]$. Let $U:= \bigcup_{i\in [\ell]}V(Q_i)$. - For $i\in [p]$, choose a $d_i$-star[^4] $S_{d_i}$ from $\binom{[n]\setminus U}{r-k}$, and - for $j \in \left[d+1, \ell\right]$, choose an $(r-k)$-subset $e'_j$ of $[n]\setminus U$ such that elements in $\left\{S_{d_1}, \ldots, S_{d_p}, e'_{d+1}, \ldots, e'_{\ell}\right\}$ are pairwise vertex-disjoint. We will use $\{e_1, \ldots, e_{\ell}\}$ and $\left\{S_{d_1}, \ldots, S_{d_p}, e'_{d+1}, \ldots, e'_{\ell}\right\}$ to build a rainbow copy of $H_{F}^{r}$. By relabelling members in $\hat{F}$ if necessary, we may assume that $$\begin{aligned} \{f_1, \ldots, f_{d_1}\}, \ldots, \{f_{d_1+\cdots+d_{p-1}+1}, \ldots, f_{d}\} \subseteq \hat{F} \end{aligned}$$ are edge sets obtained by splitting $v_1, \ldots, v_p$, respectively. In other words, $$\begin{aligned} \{f_{d_1+\cdots+d_{i-1}+1},\ldots,f_{d_1+\cdots+d_i}\} = (F\vee v_{i})\setminus F \quad\text{for}\quad i\in [p]. \end{aligned}$$ We further assume that $S_{d_i} = \{e'_{d_1+\cdots+d_{i-1}+1},\ldots,e'_{d_1+\cdots+d_i}\}$ for $i\in [p]$. Now let $E_j := e_j \cup e'_j$ for $j\in [\ell]$. It is easy to observe that $\{E_1, \ldots, E_{\ell}\}$ is a copy of $H_{F}^{r}$ with the center of $S_{d_i}$ playing the role of $v_i$ for $i\in [p]$ (see Figure [\[fig:splitting\]](#fig:splitting){reference-type="ref" reference="fig:splitting"}). 
In addition, it follows from Claim [Claim 7](#CLAIM:anti-Ramsey-splitting-e_i-cup-S){reference-type="ref" reference="CLAIM:anti-Ramsey-splitting-e_i-cup-S"} and [\[equ:anti-Ramsey-splitting-disjoint-color-set\]](#equ:anti-Ramsey-splitting-disjoint-color-set){reference-type="eqref" reference="equ:anti-Ramsey-splitting-disjoint-color-set"} that $\{E_1, \ldots, E_{\ell}\}$ is rainbow, which contradicts the rainbow-$H_{F}^{r}$-freeness of the coloring $\chi$. ◻ Let $\mathcal{H}':= \mathcal{H}\setminus\left(\bigcup_{i\in [m]}Q_i\right)$. It follows from the maximality of $\mathcal{C}$ that $\mathcal{H}'$ is $H_{F_{-}}^{r}$-free. In addition, since each $Q_i$ has $|F|-1$ edges, it follows from Claim [Claim 9](#CLAIM:antiRamsey-splitting-free){reference-type="ref" reference="CLAIM:antiRamsey-splitting-free"} that $|\mathcal{H}'| \ge |\mathcal{H}| - \left(|F|-1\right)\cdot m \ge |\mathcal{H}|-\left(|F|-1\right)\cdot \mathrm{ex}(n, \mathrm{Split}(F))$, completing the proof of Theorem [Theorem 1](#THM:antiRamsey-expansion-hypergraph-splitting){reference-type="ref" reference="THM:antiRamsey-expansion-hypergraph-splitting"}. ◻ # Proofs of Theorems [Theorem 4](#THM:antiRamsey-expansion-edge-criticalr=3){reference-type="ref" reference="THM:antiRamsey-expansion-edge-criticalr=3"} and [Theorem 5](#THM:antiRamsey-expansion-edge-criticalr>3){reference-type="ref" reference="THM:antiRamsey-expansion-edge-criticalr>3"} {#SEC:proof-antiRamsey-expansion-edge-criticalr=3} We prove Theorems [Theorem 4](#THM:antiRamsey-expansion-edge-criticalr=3){reference-type="ref" reference="THM:antiRamsey-expansion-edge-criticalr=3"} and [Theorem 5](#THM:antiRamsey-expansion-edge-criticalr>3){reference-type="ref" reference="THM:antiRamsey-expansion-edge-criticalr>3"} in this section. Before that, we prove some useful lemmas. **Lemma 10**. *Let $r \ge 2$, $F$ be an $r$-graph, and $\chi\colon K_{n}^{r} \to \mathbb{N}$ be a rainbow-$F$-free coloring. 
Every rainbow subgraph $\mathcal{H} \subseteq K_{n}^{r}$ can be made $F_{-}$-free by removing $o(n^r)$ edges.* *Proof.* The lemma follows easily from the Hypergraph Removal Lemma (see e.g. [@NRS06; @RS06; @Tao06; @Gow07]) and the observation of Erdős--Simonovits--Sós [@ESS75] that every rainbow subgraph $\mathcal{H} \subseteq K_{n}^{r}$ is $F_{-}[2]$-free. ◻ We also need the following strengthening of [@LMR1 Lemma 4.5]. Let $\mathcal{G}$ be an $r$-graph with vertex set $[m]$ and let $V_1 \sqcup \cdots \sqcup V_{m} = V$ be a partition of some vertex set $V$. We use $\mathcal{G}[V_1,\ldots,V_{m}]$ to denote the $r$-graph obtained by replacing vertex $i$ in $\mathcal{G}$ with the set $V_i$ for $i\in [m]$, and by replacing each edge in $\mathcal{G}$ with a corresponding complete $r$-partite $r$-graph. We call $\mathcal{G}[V_1,\ldots,V_{m}]$ a **blowup** of $\mathcal{G}$. **Lemma 11**. *Fix a real $\eta \in (0, 1)$ and integers $m, n, q\ge 1$. Let $\mathcal{G}$ be an $r$-graph with vertex set $[m]$ and let $\mathcal{H}$ be a further $r$-graph with $v(\mathcal{H})=n$. Consider a vertex partition $V(\mathcal{H}) = \bigcup_{i\in[m]}V_i$ and the associated blow-up $\widehat{\mathcal{G}} := \mathcal{G}[V_1,\ldots,V_{m}]$ of $\mathcal{G}$. Suppose that two sets $T \subseteq [m]$ and $S\subseteq V(\mathcal{H})$ have the following properties* 1. *[\[it:47a\]]{#it:47a label="it:47a"} $|V_{j}'| \ge 2q(|S|+1)|T|\eta^{1/r} n$ for all $j \in T$,* 2. *[\[it:47b\]]{#it:47b label="it:47b"} $|\mathcal{H}[V_{j_1}',\ldots,V_{j_r}']| \ge |\widehat{\mathcal{G}}[V_{j_1}',\ldots,V_{j_r}']| - \eta n^r$ for all $\{j_1,\ldots,j_r\} \in \binom{T}{r}$, and* 3. *[\[it:47c\]]{#it:47c label="it:47c"} $|L_{\mathcal{H}}(v)[V_{j_1}',\ldots,V_{j_{r-1}}']| \ge |L_{\widehat{\mathcal{G}}}(v)[V_{j_1}',\ldots,V_{j_{r-1}}']| - \eta n^{r-1}$ for all $v\in S$ and for all $\{j_1,\ldots,j_{r-1}\} \in \binom{T}{r-1}$,* *where $V_i' := V_i\setminus S$ for $i\in [m]$. 
Then there exists a selection of $q$-sets $U_j \subseteq V_j'$ for all $j\in T$ such that $U := \bigcup_{j\in T}U_j$ satisfies $\widehat{\mathcal{G}}[U] \subseteq \mathcal{H}[U]$ and $L_{\widehat{\mathcal{G}}}(v)[U] \subseteq L_{\mathcal{H}}(v)[U]$ for all $v\in S$. In particular, if $\mathcal{H} \subseteq \widehat{\mathcal{G}}$, then $\widehat{\mathcal{G}}[U] = \mathcal{H}[U]$ and $L_{\widehat{\mathcal{G}}}(v)[U] = L_{\mathcal{H}}(v)[U]$ for all $v\in S$.* *Proof.* By shrinking $V_j'$ if necessary, we may assume that $|V_j'| = n_1:= 2q(|S|+1)|T|\eta^{1/r} n$. Choose for each $j\in T$ a $q$-set $U_{j} \subseteq V_j'$ independently and uniformly at random. Let $U:= \bigcup_{j\in T}U_j$. For every $\{j_1, \ldots, j_{r}\} \in \mathcal{G}$, let $\mathbb{P}_{j_1, \ldots, j_{r}}$ denote the probability that $\mathcal{H}[U_{j_1}, \ldots, U_{j_r}]\neq\widehat{\mathcal{G}}[U_{j_1}, \ldots, U_{j_r}]$. Then it follows from Assumption [\[it:47b\]](#it:47b){reference-type="ref" reference="it:47b"} that $$\begin{aligned} % \mathbb{P}\left(\mathcal{H}[U_{j_1}, \ldots, U_{j_r}]\neq\widehat{\mathcal{G}}[U_{j_1}, \ldots, U_{j_r}]\right) \mathbb{P}_{j_1, \ldots, j_{r}} = 1- \frac{N\left(K_{q,\ldots,q}, \mathcal{H}[U_{j_1}, \ldots, U_{j_r}]\right)}{N\left(K_{q,\ldots,q}, \widehat{\mathcal{G}}[U_{j_1}, \ldots, U_{j_r}]\right)} & \le {\eta n^r \left\{\binom{n_1-1}{q-1}\right\}^r}/{\left\{\binom{n_1}{q}\right\}^r} \\ %& \le \eta n^r \left(\frac{q}{n_1}\right)^r & = \eta n^r \left(\frac{q}{2q(|S|+1)|T|\eta^{1/r} n}\right)^r \le \frac{1}{2|T|^r}. \end{aligned}$$ For every $v\in S$ and $\{j_1, \ldots, j_{r-1}\} \in L_{\mathcal{G}}(v)$, let $\mathbb{P}_{v; j_1, \ldots, j_{r-1}}$ denote the probability that $L_{\mathcal{H}}(v)[U_{j_1}, \ldots, U_{j_{r-1}}]\neq L_{\widehat{\mathcal{G}}}(v)[U_{j_1}, \ldots, U_{j_{r-1}}]$. 
Then it follows from Assumption [\[it:47c\]](#it:47c){reference-type="ref" reference="it:47c"} that $$\begin{aligned} % \mathbb{P}\left(L_{\mathcal{H}}(v)[U_{j_1}, \ldots, U_{j_{r-1}}]\neq L_{\widehat{\mathcal{G}}}(v)[U_{j_1}, \ldots, U_{j_{r-1}}]\right) \mathbb{P}_{v; j_1, \ldots, j_{r-1}} = 1- \frac{N\left(K_{q,\ldots,q}, L_{\mathcal{H}}[U_{j_1}, \ldots, U_{j_{r-1}}]\right)}{N\left(K_{q,\ldots,q}, L_{\widehat{\mathcal{G}}}[U_{j_1}, \ldots, U_{j_{r-1}}]\right)} & \le {\eta n^{r-1} \left\{\binom{n_1-1}{q-1}\right\}^{r-1}}/{\left\{\binom{n_1}{q}\right\}^{r-1}} \\ % & \le \eta n^{r-1} \left(\frac{q}{n_1}\right)^{r-1} & = \eta n^{r-1} \left(\frac{q}{2q(|S|+1)|T|\eta^{1/r} n}\right)^{r-1} \\ & \le \frac{1}{2(|S|+1)|T|^{r-1}}. \end{aligned}$$ Therefore, the probability that $U$ fails to have the desired properties is at most $$\begin{aligned} \binom{|T|}{r}\times \frac{1}{2|T|^r} + |S|\binom{|T|}{r-1} \times \frac{1}{2(|S|+1)|T|^{r-1}} \le \frac{1}{4} + \frac{1}{2} = \frac{3}{4}. \end{aligned}$$ So the probability that $U$ has these properties is positive. ◻ Another ingredient that we need is the following stability result for $H_{\ell+1}^{r}[t]$-free $r$-graphs. **Lemma 12**. *Let $\ell \ge r \ge 3$ and $t \ge 1$ be fixed integers. For every $\epsilon>0$ there exist $\delta >0$ and $n_0$ such that the following holds for all $n \ge n_0$. Suppose that $\mathcal{H}$ is an $H_{\ell+1}^{r}[t]$-free $r$-graph with $n$ vertices and at least $t_{r}(n,\ell) - \delta n^r$ edges. Then $\mathcal{H}$ can be made $\ell$-partite by removing at most $\epsilon n^r$ edges.* *Proof.* This lemma follows easily from the Hypergraph Removal Lemma and the stability theorem of Mubayi on $H_{\ell+1}^{r}$-free $r$-graphs [@M06 Theorem 3]. ◻ Now we are ready to prove Theorem [Theorem 4](#THM:antiRamsey-expansion-edge-criticalr=3){reference-type="ref" reference="THM:antiRamsey-expansion-edge-criticalr=3"}. 
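Before turning to the proof, we remark that the final union bound in the proof of Lemma 11 is easy to verify numerically. The following sketch is ours (the helper name `failure_bound` and the parameter ranges are illustrative, not from the paper); it evaluates the two failure terms and confirms that their sum never exceeds $3/4$:

```python
from math import comb

def failure_bound(T, S, r):
    # First term: each of the C(T, r) r-subsets of T fails with
    # probability at most 1/(2*T**r).
    # Second term: each of the S marked link vertices contributes at most
    # C(T, r-1) failing (r-1)-subsets, each with probability at most
    # 1/(2*(S+1)*T**(r-1)).
    return (comb(T, r) / (2 * T**r)
            + S * comb(T, r - 1) / (2 * (S + 1) * T**(r - 1)))

# The total failure probability stays at most 3/4, so a good choice
# of U exists with positive probability.
assert all(failure_bound(T, S, r) <= 0.75
           for r in range(2, 6)
           for T in range(r, 25)
           for S in range(0, 12))
```

Indeed, $\binom{|T|}{r}\le |T|^r/r!$ makes the first term at most $1/(2r!)\le 1/4$, and similarly the second term is at most $1/2$.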
*Proof of Theorem [Theorem 4](#THM:antiRamsey-expansion-edge-criticalr=3){reference-type="ref" reference="THM:antiRamsey-expansion-edge-criticalr=3"}.* Fix integers $\ell \ge 3$ and $t \ge 4$. The lower bound for the case $F \in \left\{ K_{\ell}^{\beta}[t],\ K_{\ell}^{\gamma}[t]\right\}$ follows from the well-known fact $$\begin{aligned} \mathrm{ar}(n,F) \ge \mathrm{ex}(n,F_{-}) +2 \quad\text{holds for all $n \ge 1$ and for all $r$-graphs $F$}. \end{aligned}$$ The lower bound for the case $F=K_{\ell}^{\alpha}[t]$ follows from the following construction. Fix $n \ge \ell \ge 3$. Let $m := t_{3}(n,\ell)+\ell$ (the total number of colors used in the construction below). Let $V_1 \sqcup \cdots \sqcup V_{\ell} = [n]$ be a partition such that $|V_{\ell}|+1 \ge |V_1| \ge |V_2| \ge \cdots \ge |V_{\ell}|$. - For $i\in [\ell]$ color all triples that contain at least two vertices from $V_i$ using color $i$, and - fix an arbitrary rainbow coloring for the remaining $t_{3}(n,\ell)$ triples using colors that were not used in the previous step. We leave the verification of the rainbow-$H_{F}^{3}$-freeness of the coloring defined above to interested readers. Now we focus on the upper bound. Fix $F\in \left\{K_{\ell}^{\alpha}[t],\ K_{\ell}^{\beta}[t],\ K_{\ell}^{\gamma}[t]\right\}$. Let $\delta >0$ be sufficiently small and $n$ be sufficiently large. Let $\chi\colon K_{n}^{3} \to \mathbb{N}$ be a rainbow $H_{F}^{3}$-free coloring, and let $\mathcal{H} \subseteq K_{n}^{3}$ be a maximum rainbow subgraph. Suppose to the contrary that $$\begin{aligned} |\mathcal{H}| \ge \begin{cases} t_{3}(n,\ell) + \ell+1, & \quad\text{if}\quad F \in \left\{K_{\ell}^{\alpha}[t],\ K_{\ell}^{\beta}[t]\right\}, \\ t_{3}(n,\ell) + 2, & \quad\text{if}\quad F = K_{\ell}^{\gamma}[t]. \end{cases}\end{aligned}$$ Let $F_1:= K_{\ell}^{+}[t]$, where $K_{\ell}^{+}[t]$ denotes the graph obtained from $K_{\ell}[t]$ by adding one edge into some part. Note that $F_1 \in F_{-}$. Let $\mathcal{H}' \subseteq \mathcal{H}$ be a maximum $H_{F_1}^{3}$-free subgraph. 
It follows from Lemma [Lemma 10](#LEMMA:ESS-removal){reference-type="ref" reference="LEMMA:ESS-removal"} that $$\begin{aligned} \label{equ:edge-critical-H'r=3} |\mathcal{H}'| \ge |\mathcal{H}| - \frac{\delta n^3}{100} % \ge t_{\ell}(n,3) - o(n^3) \ge \binom{\ell}{3}\left(\frac{n}{\ell}\right)^3 - \frac{\delta n^3}{50}. \end{aligned}$$ Let $V_1 \sqcup \cdots \sqcup V_{\ell} = [n]$ be a partition such that the number of edges in the induced $\ell$-partite subgraph[^5] $\mathcal{H}'' := \mathcal{H}[V_1, \ldots, V_{\ell}]$ is maximized. Since $H_{F_1}^3$ is contained in $H_{\ell+1}^{3}[t]$, it follows from [\[equ:edge-critical-H\'r=3\]](#equ:edge-critical-H'r=3){reference-type="eqref" reference="equ:edge-critical-H'r=3"} and Lemma [Lemma 12](#LEMMA:stability-expansion-clique){reference-type="ref" reference="LEMMA:stability-expansion-clique"} that $$\begin{aligned} \label{equ:edge-critical-H''r=3} |\mathcal{H}''| \ge |\mathcal{H}'| - \frac{\delta n^3}{25} \ge \binom{\ell}{3}\left(\frac{n}{\ell}\right)^3 - \frac{\delta n^3}{18}. \end{aligned}$$ Combined with the inequality (see [@LMR2 Lemma 2.2]) $$\begin{aligned} |\mathcal{H}''| \le \sum_{1\le i < j < k \le \ell}|V_i||V_j||V_k| \le \binom{\ell}{3}\left(\frac{n}{\ell}\right)^3 - \frac{\ell-2}{6\ell}\sum_{i\in [\ell]}\left(|V_i|- \frac{n}{\ell}\right)^2n, \end{aligned}$$ we obtain $$\begin{aligned} \label{equ:edge-critical-Vi-sizer=3} \frac{n}{\ell} - \sqrt{\delta}n \le |V_i| \le \frac{n}{\ell} + \sqrt{\delta}n \quad\text{for all}\quad i\in [\ell]. \end{aligned}$$ Let $\mathcal{G}$ denote the complete $r$-graph with $\ell$ vertices, and let $\widehat{\mathcal{G}}:= \mathcal{G}[V_1, \ldots, V_{\ell}]$ denote the complete $\ell$-partite $r$-graph[^6] with parts $V_1, \ldots, V_{\ell}$. Let $$\begin{aligned} \mathcal{B} := \mathcal{H} \setminus \widehat{\mathcal{G}}, \quad\text{and}\quad \mathcal{M} := \widehat{\mathcal{G}} \setminus \mathcal{H} = \widehat{\mathcal{G}} \setminus \mathcal{H}''. 
\end{aligned}$$ It follows from [\[equ:edge-critical-H\'\'r=3\]](#equ:edge-critical-H''r=3){reference-type="eqref" reference="equ:edge-critical-H''r=3"} that $$\begin{aligned} \label{equ:edge-critical-Mr=3} |\mathcal{M}| = |\widehat{\mathcal{G}}| - |\mathcal{H}''| \le \binom{\ell}{3}\left(\frac{n}{\ell}\right)^3 - \left(\binom{\ell}{3}\left(\frac{n}{\ell}\right)^3 - \frac{\delta n^3}{18}\right) \le \delta n^3. \end{aligned}$$ For $i\in [\ell]$ let $$\begin{aligned} D_i := \left\{v\in V_i \colon |L_{\widehat{\mathcal{G}}}(v)| - |L_{\mathcal{H}''}(v)| \le 3\delta^{1/3} n^2\right\}, \quad\text{and}\quad \overline{D}_i := V_i \setminus D_i. \end{aligned}$$ For convenience, let $D:= \bigcup_{i\in [\ell]} D_i$ and $\overline{D} := \bigcup_{i\in [\ell]} \overline{D}_i$. **Claim 13**. *We have $|\mathcal{M}| \ge \delta^{1/3} n^2 |\overline{D}|$ and $|\overline{D}| \le \delta^{2/3} n$.* *Proof.* It follows from the definition that every vertex in $\overline{D}$ contributes at least $3\delta^{1/3} n^2$ elements of $\mathcal{M}$, and each element of $\mathcal{M}$ is counted at most three times in this way. Therefore, $$\begin{aligned} |\mathcal{M}| \ge \frac{1}{3} \times |\overline{D}| \times 3\delta^{1/3} n^2 = \delta^{1/3} n^2 |\overline{D}|. \end{aligned}$$ Combined with [\[equ:edge-critical-Mr=3\]](#equ:edge-critical-Mr=3){reference-type="eqref" reference="equ:edge-critical-Mr=3"}, we obtain $|\overline{D}| \le \delta n^3/(\delta^{1/3} n^2) = \delta^{2/3} n$, which completes the proof of Claim [Claim 13](#CLAIM:edge-critical-missing-edgesr=3){reference-type="ref" reference="CLAIM:edge-critical-missing-edgesr=3"}. ◻ The most crucial part in the proof is the following claim. **Claim 14**. *If $F\in \left\{K_{\ell}^{\alpha}[t],\ K_{\ell}^{\beta}[t]\right\}$, then $\mathcal{B}$ does not contain two edges $e, e'$ such that $$\begin{aligned} \label{equ:CLAIM:edge-critical-two-bad-edgesr=3-case-1} \min\left\{|e\cap D_i|,\ |e'\cap D_i|\right\} \ge 2 \quad\text{holds for some }i\in [\ell]. 
\end{aligned}$$ If $F = K_{\ell}^{\gamma}[t]$, then the set $\mathcal{B}$ does not contain two edges $e, e'$ such that $$\begin{aligned} \label{equ:CLAIM:edge-critical-two-bad-edgesr=3} \max\left\{|e\cap D_i|\colon i\in [\ell]\right\} \ge 2 \quad\text{and}\quad \max\left\{|e'\cap D_i| \colon i\in [\ell]\right\} \ge 2. \end{aligned}$$* *Proof.* First consider the case $F = K_{\ell}^{\alpha}[t]$. Suppose to the contrary that there exist two distinct edges $e, e' \in \mathcal{B}$ such that [\[equ:CLAIM:edge-critical-two-bad-edgesr=3-case-1\]](#equ:CLAIM:edge-critical-two-bad-edgesr=3-case-1){reference-type="eqref" reference="equ:CLAIM:edge-critical-two-bad-edgesr=3-case-1"} holds. By symmetry, we may assume that $\min\left\{|e\cap D_1|,\ |e'\cap D_1|\right\} \ge 2$. Choose a $3$-set $f \subseteq D_1$ such that $|f\cap e| = |f\cap e'| = 1$. By symmetry, we may assume that $\chi(f) \neq \chi(e)$. Let $\{v_1\}:= f\cap e$. Fix $v_2 \in \left(e\cap D_1\right)\setminus \{v_1\}$ and $v_3 \in f\setminus\{v_1\}$. Let $\mathcal{H}'$ be the $3$-graph obtained from $\mathcal{H}$ by removing an edge (if there exists such an edge) with color $\chi(f)$. Then, applying Lemma [Lemma 11](#LEMMA:greedily-embedding-Gi){reference-type="ref" reference="LEMMA:greedily-embedding-Gi"} to $\mathcal{H}'$ with $V_i' = D_i\setminus (e \cup f)$ for $i\in [\ell]$, $T = [\ell]$, $S = \{v_1, v_2, v_3\}$, and $q = 2\binom{\ell}{2}t^2$, we obtain a $q$-set $U_i\subseteq D_i\setminus (e \cup f)$ for each $i\in [\ell]$ such that the induced $\ell$-partite $3$-graph of $\mathcal{H}'$ on $U_1, \ldots, U_{\ell}$ is complete, and for every $v\in \{v_1, v_2, v_3\}$ the induced $(\ell-1)$-partite graph of $L_{\mathcal{H}'}(v)$ on $U_2, \ldots, U_{\ell}$ is also complete. 
Note that Lemma [Lemma 11](#LEMMA:greedily-embedding-Gi){reference-type="ref" reference="LEMMA:greedily-embedding-Gi"} [\[it:47a\]](#it:47a){reference-type="ref" reference="it:47a"} is guaranteed by [\[equ:edge-critical-Vi-sizer=3\]](#equ:edge-critical-Vi-sizer=3){reference-type="eqref" reference="equ:edge-critical-Vi-sizer=3"}, Lemma [Lemma 11](#LEMMA:greedily-embedding-Gi){reference-type="ref" reference="LEMMA:greedily-embedding-Gi"} [\[it:47b\]](#it:47b){reference-type="ref" reference="it:47b"} is guaranteed by the definition of $D_i$, and Lemma [Lemma 11](#LEMMA:greedily-embedding-Gi){reference-type="ref" reference="LEMMA:greedily-embedding-Gi"} [\[it:47c\]](#it:47c){reference-type="ref" reference="it:47c"} is guaranteed by [\[equ:edge-critical-Mr=3\]](#equ:edge-critical-Mr=3){reference-type="eqref" reference="equ:edge-critical-Mr=3"}. Let $U:= \{v_1, v_2, v_3\} \cup \bigcup_{i\in [\ell]}U_i$. It is easy to see that the expansion of $K_{\ell}^{\alpha}[t]$ is contained in $\{e,f\} \cup \mathcal{H}'[U]$. This is a contradiction, since $\{e,f\} \cup \mathcal{H}'[U]$ is rainbow. The case $F = K_{\ell}^{\beta}[t]$ can be proved similarly by choosing a $3$-set $f \subseteq D_1$ such that $|f\cap e| = |f\cap e'| = 0$. So we omit the details here. Now we consider the case $F = K_{\ell}^{\gamma}[t]$. Suppose to the contrary that there exist two distinct edges $e, e' \in \mathcal{B}$ such that [\[equ:CLAIM:edge-critical-two-bad-edgesr=3\]](#equ:CLAIM:edge-critical-two-bad-edgesr=3){reference-type="eqref" reference="equ:CLAIM:edge-critical-two-bad-edgesr=3"} holds. Let us assume that $|e\cap D_{i_1}|\ge 2$ and $|e'\cap D_{i_2}|\ge 2$, where $i_1, i_2 \in [\ell]$. Fix $\{u_1,v_1\} \subset e\cap D_{i_1}$ and $\{u_1',v_1'\} \subset e'\cap D_{i_2}$. 
Then, applying Lemma [Lemma 11](#LEMMA:greedily-embedding-Gi){reference-type="ref" reference="LEMMA:greedily-embedding-Gi"} to $\mathcal{H}$ with $V_i' = D_i\setminus (e \cup e')$ for $i\in [\ell]$, $T = [\ell]$, $S = \{u_1, v_1, u_1', v_1'\}$, and $q = 2\binom{\ell}{2}t^2$, we obtain a $q$-set $U_i\subseteq D_i\setminus (e \cup e')$ for each $i\in [\ell]$ such that - the induced $\ell$-partite $3$-graph of $\mathcal{H}$ on $U_1, \ldots, U_{\ell}$ is complete, - for every $v\in \{u_1, v_1\}$ the induced $(\ell-1)$-partite graph of $L_{\mathcal{H}}(v)$ on $\{U_i \colon i\in [\ell]\setminus\{i_1\}\}$ is complete, - and for every $v\in \{u_1', v_1'\}$ the induced $(\ell-1)$-partite graph of $L_{\mathcal{H}}(v)$ on $\{U_i \colon i\in [\ell]\setminus\{i_2\}\}$ is complete. Choose $i^{\ast} \in [\ell]\setminus\{i_1,i_2\}$ and fix a $3$-set $f \subseteq D_{i^{\ast}}$ such that $|f\cap U_{i^{\ast}}| = 2$. By symmetry, we may assume that $\chi(f) \neq \chi(e)$. Let $\{u_2, v_2\} := f\cap U_{i^{\ast}}$. Fix a $t$-set $W_i \subseteq U_i$ for $i \in [\ell]\setminus\{i_1, i^{\ast}\}$, fix a $t$-set $W_{i_1} \subseteq U_{i_1} \cup \{u_1, v_1\}$ with $\{u_1, v_1\} \subseteq W_{i_1}$, and fix a $t$-set $W_{i^{\ast}} \subseteq U_{i^{\ast}}$ with $\{u_2, v_2\} \subseteq W_{i^{\ast}}$. Let $K$ denote the complete $\ell$-partite graph with parts $W_1, \ldots, W_{\ell}$. Observe from the choice of $U_i$'s that every pair in $K$ is contained in at least $q = 2\binom{\ell}{2}t^2$ edges in $\mathcal{H}$. Since $\mathcal{H}$ is rainbow, it is easy to greedily extend $K$ to a rainbow copy of $H_{K}^{3}$ avoiding the color $\chi(f)$. However, this copy of $H_{K}^{3}$ together with edges $e$ and $f$ is a rainbow copy of $H_{F}^{3}$, contradicting the rainbow-$H_{F}^{3}$-freeness of $\chi$. ◻ For $i\in \{0,1,2,3\}$ let $$\begin{aligned} \mathcal{B}_i := \left\{e\in \mathcal{B} \colon |e\cap \overline{D}| = i\right\}. 
\end{aligned}$$ **Case 1**: $F\in \left\{K_{\ell}^{\alpha}[t],\ K_{\ell}^{\beta}[t]\right\}$. For every triple $e\in \mathcal{B}_0 \cup \mathcal{B}_1$ we can fix a pair $e'\subseteq e\cap D_i$ for some $i\in [\ell]$. It follows from Claim [Claim 14](#CLAIM:edge-critical-two-bad-edgesr=3){reference-type="ref" reference="CLAIM:edge-critical-two-bad-edgesr=3"} that no two triples will share the same pair and no two pairs lie in the same part. Therefore, $|\mathcal{B}_0| + |\mathcal{B}_1| \le \ell$. Combined with Claim [Claim 13](#CLAIM:edge-critical-missing-edgesr=3){reference-type="ref" reference="CLAIM:edge-critical-missing-edgesr=3"} and the trivial bound $|\mathcal{B}_2| + |\mathcal{B}_3| \le n|\overline{D}|^2$, we obtain $$\begin{aligned} |\mathcal{H}| \le |\widehat{\mathcal{G}}| + \sum_{i=0}^{3}|\mathcal{B}_i| - |\mathcal{M}| & \le t_{3}(n,\ell) + \ell + n|\overline{D}|^2 - \delta^{1/3} n^2 |\overline{D}| \\ & \le t_{3}(n,\ell) + \ell - \left(\delta^{1/3} - \delta^{2/3}\right)n^2|\overline{D}| \le t_{3}(n,\ell) + \ell, \end{aligned}$$ a contradiction. **Case 2**: $F = K_{\ell}^{\gamma}[t]$. Similarly, for every triple $e\in \mathcal{B}_0 \cup \mathcal{B}_1$ we can fix a pair $e'\subseteq e\cap D_i$ for some $i\in [\ell]$. It follows from Claim [Claim 14](#CLAIM:edge-critical-two-bad-edgesr=3){reference-type="ref" reference="CLAIM:edge-critical-two-bad-edgesr=3"} that no two triples will share the same pair and the number of pairs is at most one. Therefore, $|\mathcal{B}_0| + |\mathcal{B}_1| \le 1$. 
Similarly, by Claim [Claim 13](#CLAIM:edge-critical-missing-edgesr=3){reference-type="ref" reference="CLAIM:edge-critical-missing-edgesr=3"} and the trivial bound $|\mathcal{B}_2| + |\mathcal{B}_3| \le n|\overline{D}|^2$, we have $$\begin{aligned} |\mathcal{H}| \le |\widehat{\mathcal{G}}| + \sum_{i=0}^{3}|\mathcal{B}_i| - |\mathcal{M}| & \le t_{3}(n,\ell) + 1 + n|\overline{D}|^2 - \delta^{1/3} n^2 |\overline{D}| \\ & \le t_{3}(n,\ell) + 1 - \left(\delta^{1/3} - \delta^{2/3}\right)n^2|\overline{D}| \le t_{3}(n,\ell) + 1, \end{aligned}$$ a contradiction. ◻ The proof for Theorem [Theorem 5](#THM:antiRamsey-expansion-edge-criticalr>3){reference-type="ref" reference="THM:antiRamsey-expansion-edge-criticalr>3"} is similar to the proof of Theorem [Theorem 4](#THM:antiRamsey-expansion-edge-criticalr=3){reference-type="ref" reference="THM:antiRamsey-expansion-edge-criticalr=3"}. So we omit the details and only sketch the proof for the most crucial claim. **Claim 15**. *The set $\mathcal{B}$ does not contain two edges $e, e'$ such that $$\begin{aligned} \label{equ:CLAIM:edge-critical-two-bad-edgesr>3} \max\left\{|e\cap D_i|\colon i\in [\ell]\right\} \ge 2 \quad\text{and}\quad \max\left\{|e'\cap D_i| \colon i\in [\ell]\right\} \ge 2. \end{aligned}$$* *Proof.* Suppose to the contrary that there exist two distinct edges $e, e' \in \mathcal{B}$ such that [\[equ:CLAIM:edge-critical-two-bad-edgesr\>3\]](#equ:CLAIM:edge-critical-two-bad-edgesr>3){reference-type="eqref" reference="equ:CLAIM:edge-critical-two-bad-edgesr>3"} holds. Let us assume that $|e\cap D_{i_1}|\ge 2$ and $|e'\cap D_{i_2}|\ge 2$, where $i_1, i_2 \in [\ell]$. If $F = K_{\ell}^{\alpha}[t]$, then we choose an $r$-set $f \subseteq D_{i_1} \cup D_{i_2}$ such that $|f\cap e| = |f\cap e'| = 1$ and such that $\min\left\{|f\cap D_{i_1}|,\ |f\cap D_{i_2}|\right\} \ge 2$. Since $r\ge 4$, such $f$ exists. 
If $F = K_{\ell}^{\beta}[t]$, then we choose an $r$-set $f \subseteq D_{i_1} \cup D_{i_2}$ such that $|f\cap e| = |f\cap e'| = 0$ and such that $\min\left\{|f\cap D_{i_1}|,\ |f\cap D_{i_2}|\right\} \ge 2$. Since $r\ge 4$, such $f$ exists. The case $F = K_{\ell}^{\gamma}[t]$ can be handled in the same way as in the proof of Claim [Claim 14](#CLAIM:edge-critical-two-bad-edgesr=3){reference-type="ref" reference="CLAIM:edge-critical-two-bad-edgesr=3"}. The rest of the argument is similar to the proof of Claim [Claim 14](#CLAIM:edge-critical-two-bad-edgesr=3){reference-type="ref" reference="CLAIM:edge-critical-two-bad-edgesr=3"}, so we omit the details. ◻ # Concluding remarks {#SEC:remark} $\bullet$ Given an $r$-graph $F$ and an integer $1\le k < r$, we say an edge $e\in F$ is **$k$-pendant** if $e$ contains a $k$-subset $e'$ such that $$\begin{aligned} e'\cap f = \emptyset \quad\text{for all}\quad f\in F\setminus \{e\}.\end{aligned}$$ For convenience, let $F_{k-}$ denote the family of $r$-graphs that can be obtained from $F$ by removing one $k$-pendant edge, i.e. $$\begin{aligned} F_{k-} := \left\{F\setminus \{e\} \colon \text{$e\in F$ is $k$-pendant}\right\}. \end{aligned}$$ The argument in the proof of Theorem [Theorem 1](#THM:antiRamsey-expansion-hypergraph-splitting){reference-type="ref" reference="THM:antiRamsey-expansion-hypergraph-splitting"} (see Claim [Claim 8](#CLAIM:anti-Ramsey-splitting-no-multiset){reference-type="ref" reference="CLAIM:anti-Ramsey-splitting-no-multiset"}) yields the following result. **Theorem 16**. *Let $r > k \ge 1$ be integers and $F$ be an $r$-graph. Suppose that $\chi\colon K_{n}^{r} \to \mathbb{N}$ is a rainbow-$F$-free coloring. Then every rainbow subgraph $\mathcal{H} \subseteq K_{n}^{r}$ can be made $F_{k-}$-free by removing at most $(|F|-1)\binom{n}{k}$ edges. In particular, for all integers $n \ge r$, $$\begin{aligned} \mathrm{ar}(n,F) \le \mathrm{ex}(n,F_{k-}) + (|F|-1) \binom{n}{k}. 
\end{aligned}$$* $\bullet$ The following question seems interesting, and we are uncertain whether it has already been addressed in the literature. **Problem 17**. *Let $r \ge 2$ be an integer. Is it true that for every $\delta >0$ there exists an $r$-graph $F$ such that $$\begin{aligned} \label{equ:antiRamsey-prob} \mathrm{ar}(n,F) - \mathrm{ex}(n,F_{-}) = \Omega(n^{r-\delta})? \end{aligned}$$ On the other hand, does there exist an $r$-graph $F$ such that [\[equ:antiRamsey-prob\]](#equ:antiRamsey-prob){reference-type="eqref" reference="equ:antiRamsey-prob"} holds for all $\delta >0$?* D. de Caen. The current status of Turán's problem on hypergraphs. In *Extremal problems for finite sets (Visegrád, 1991)*, volume 3 of *Bolyai Soc. Math. Stud.*, pages 187--197. János Bolyai Math. Soc., Budapest, 1994. P. Erdős. On extremal problems of graphs and generalized graphs. , 2:183--190, 1964. P. Erdős, M. Simonovits, and V. T. Sós. Anti-Ramsey theorems. In *Infinite and finite sets (Colloq., Keszthely, 1973; dedicated to P. Erdős on his 60th birthday), Vol. II*, Colloq. Math. Soc. János Bolyai, Vol. 10, pages 633--643. 1975. P. Frankl and A. Kupavskii. Two problems on matchings in set families---in the footsteps of Erdős and Kleitman. , 138:286--313, 2019. S. Fujita, C. Magnant, and K. Ozeki. Rainbow generalizations of Ramsey theory: a survey. , 26(1):1--30, 2010. Z. Füredi. Turán type problems. In *Surveys in combinatorics, 1991 (Guildford, 1991)*, volume 166 of *London Math. Soc. Lecture Note Ser.*, pages 253--300. Cambridge Univ. Press, Cambridge, 1991. Z. Füredi and T. Jiang. Hypergraph Turán numbers of linear cycles. , 123:252--270, 2014. Z. Füredi and T. Jiang. Turán numbers of hypergraph trees. , 2015. Z. Füredi, T. Jiang, and R. Seiver. Exact solution of the hypergraph Turán problem for $k$-uniform linear paths. , 34(3):299--322, 2014. W. T. Gowers. Hypergraph regularity and the multidimensional Szemerédi theorem. , 166(3):897--946, 2007. R. Gu, J. Li, and Y. Shi. 
Anti-Ramsey numbers of paths and cycles in hypergraphs. , 34(1):271--307, 2020. M. Guo, H. Lu, and X. Peng. Anti-Ramsey Number of Matchings in 3-uniform Hypergraphs. , 37(3):1970--1987, 2023. T. Jiang. Anti-Ramsey numbers of subdivided graphs. , 85(2):361--366, 2002. T. Jiang and O. Pikhurko. Anti-Ramsey numbers of doubly edge-critical graphs. , 61(3):210--218, 2009. Z. Jin. Anti-Ramsey number of matchings in a hypergraph. , 344(12):Paper No. 112594, 10, 2021. P. Keevash. Hypergraph Turán problems. In *Surveys in combinatorics 2011*, volume 392 of *London Math. Soc. Lecture Note Ser.*, pages 83--139. Cambridge Univ. Press, Cambridge, 2011. A. Kostochka, D. Mubayi, and J. Verstraëte. Turán problems and shadows I: Paths and cycles. , 129:57--79, 2015. A. Kostochka, D. Mubayi, and J. Verstraëte. Turán problems and shadows II: Trees. , 122:457--478, 2017. T. Kövari, V. T. Sós, and P. Turán. On a problem of K. Zarankiewicz. , 3:50--57, 1954. X. Liu, D. Mubayi, and C. Reiher. Hypergraphs with many extremal configurations. to appear. X. Liu, D. Mubayi, and C. Reiher. A unified approach to hypergraph stability. , 158:36--62, 2023. X. Liu and J. Song. Exact results for some extremal problems on expansions I. In preparation. J. J. Montellano-Ballesteros and V. Neumann-Lara. An anti-Ramsey theorem. , 22(3):445--449, 2002. D. Mubayi. A hypergraph extension of Turán's theorem. , 96(1):122--134, 2006. D. Mubayi and J. Verstraëte. A survey of Turán problems for expansions. In *Recent trends in combinatorics*, volume 159 of *IMA Vol. Math. Appl.*, pages 117--143. Springer, \[Cham\], 2016. B. Nagle, V. Rödl, and M. Schacht. The counting lemma for regular $k$-uniform hypergraphs. , 28(2):113--179, 2006. L. Özkahya and M. Young. Anti-Ramsey number of matchings in hypergraphs. , 313(20):2359--2364, 2013. O. Pikhurko. Exact computation of the hypergraph Turán function for expanded complete 2-graphs. , 103(2):220--225, 2013. V. Rödl and J. Skokan. 
Applications of the regularity lemma for uniform hypergraphs. , 28(2):180--194, 2006. A. Sidorenko. What we know and what we do not know about Turán numbers. , 11(2):179--199, 1995. Y. Tang, T. Li, and G. Yan. Anti-Ramsey number of disjoint union of star-like hypergraphs. Submitted. Y. Tang, T. Li, and G. Yan. Anti-Ramsey number of expansions of paths and cycles in uniform hypergraphs. , 101(4):668--685, 2022. T. Tao. A variant of the hypergraph removal lemma. , 113(7):1257--1280, 2006. P. Turán. On an extremal problem in graph theory. , 48:436--452, 1941. [^1]: Research was supported by ERC Advanced Grant 101020255 and Leverhulme Research Project Grant RPG-2018-424. Email: `xizhi.liu.ac@gmail.com` [^2]: Research was supported by Science and Technology Commission of Shanghai Municipality (No. 22DZ2229014). Email: `jlsong@math.ecnu.edu.cn` [^3]: Here, 'subdivided' means every edge in $F$ is incident to a vertex of degree two. [^4]: A $d$-star is a collection of $d$ edges such that there is a unique vertex (called center) that is contained in all edges and two edges intersect only on this vertex. [^5]: Here, $\mathcal{H}[V_1, \ldots, V_{\ell}]$ consists of all edges in $\mathcal{H}$ that contain at most one vertex from each $V_i$. [^6]: In other words, $\widehat{\mathcal{G}}$ consists of all $r$-subsets of $[n]$ that contain at most one vertex from each $V_i$.
{ "id": "2310.01186", "title": "Hypergraph anti-Ramsey theorems", "authors": "Xizhi Liu and Jialei Song", "categories": "math.CO", "license": "http://creativecommons.org/licenses/by-nc-sa/4.0/" }
--- abstract: | The representation theory of a commutative noetherian ring is tightly controlled by its prime spectrum. In this article we use the prime spectrum to describe mutation of cosilting objects in the derived category of a commutative noetherian ring. address: Jorge Vitória, Dipartimento di Matematica "Tullio Levi-Civita", Università degli Studi di Padova, via Trieste 63, 35131 Padova, Italy author: - Jorge Vitória title: | Mutations and derived equivalences\ for commutative noetherian rings --- # Introduction Arguably the most popular use of the word "mutation" in science is in genetics. The same word has been used in algebra since the 80s for a process that, figuratively, comes close to that used in biology. A DNA mutation is a process that turns a DNA sequence into another one through various changes, three of which are dubbed *deletion*, *substitution* and *insertion*. In algebra, mutation often stands for a transformation of an algebraic object (exceptional collections, cluster variables, cluster-tilting objects, silting or cosilting complexes, support $\tau$-tilting modules, etc\...), encompassing deletion, substitution and insertion in a sequential way in order to produce a new object of the same nature. Our focus is on the recently developed concept of mutation for the class of bounded cosilting complexes in the derived category of a ring. These complexes parametrise an important class of t-structures, and it is through this lens that the operation of mutation is conceptually better understood. Nevertheless, it is often difficult to describe mutations explicitly. This is due to the "substitution" step of the process: it relies on approximation theory, and it may be challenging to pin down such approximations. This can be overcome for some particular rings. For finite-dimensional algebras over a field there is a wide range of combinatorial and homological techniques to help with this task (see, for example, [@AI; @AIR; @ALSV]). 
In this article, after reviewing the necessary tools to understand mutation conceptually, we aim to describe this process as accurately as possible in the setting of the derived category of a commutative noetherian ring. In this setting, we do not have the combinatorial techniques of finite-dimensional algebras, but we have a topological tool instead: the prime spectrum. It is well-known that the prime spectrum of a commutative noetherian ring $R$ controls much of its representation theory via the notion of support. Celebrated results include Gabriel's classification of hereditary torsion classes of $\mathsf{Mod}(R)$ ([@G]) and Neeman's analogous classification result for localising and smashing subcategories of the derived category $\mathsf{D}(\mathsf{Mod}(R))$ ([@N]). More relevant for us is the classification of compactly generated t-structures of [@AJS]. This is done in terms of certain decreasing sequences of subsets of $\mathsf{Spec}(R)$. Recently, it was shown in [@HN] that these same sequences classify pure-injective cosilting objects, opening the door to a concrete approach to mutation in this setting. This paper is structured as follows: we start in Section [2](#prelim){reference-type="ref" reference="prelim"} by setting up some notation and reviewing some facts about t-structures and torsion pairs; in Section [3](#der cat){reference-type="ref" reference="der cat"} we survey the classification of (intermediate) compactly generated t-structures for a commutative noetherian ring and its relation with cosilting theory; in Section [4](#mutation section){reference-type="ref" reference="mutation section"} we review the main principles of the theory of mutation for cosilting objects, based on [@ALSV]. 
In Section [5](#der eq section){reference-type="ref" reference="der eq section"} we review some ideas concerning derived equivalences and survey results from [@PaV] concerning t-structures inducing derived equivalences in the derived category of a commutative noetherian ring. Finally, in Section [6](#comm mutation){reference-type="ref" reference="comm mutation"} we use a classification result made available in Section [5](#der eq section){reference-type="ref" reference="der eq section"} to discuss some explicit computations of cosilting mutation in the derived category of a commutative noetherian ring. The material surveyed in this article is essentially contained in [@PaV] and [@ALSV]. We refer to the article of Rosanna Laking in this same volume ([@L-Proc]) concerning other aspects of this same concept of mutation in the context of finite-dimensional algebras. # Preliminaries {#prelim} ## Notation All subcategories considered are strict and full. In an additive category $\mathcal{A}$ with arbitrary (set-indexed) products, given an object $X$ we denote by $\mathsf{Prod}(X)$ the subcategory formed by all summands of products of $X$. In the following let $\mathcal{X}$ and $\mathcal{Y}$ be subcategories of $\mathcal{A}$. We denote by $\mathcal{X}^\perp$ the subcategory of $\mathcal{A}$ formed by the objects $Y$ for which $\mathsf{Hom}_\mathcal{A}(X,Y)=0$ for all $X$ in $\mathcal{X}$. The notation ${}^\perp\mathcal{X}$ stands for the dual definition. If $\mathcal{A}$ is triangulated and $I$ is a subset of $\mathbb{Z}$, then we define $$\nonumber \begin{split} \mathcal{X}^{\perp_I}:=&\{Y\in\mathcal{A}\colon \mathsf{Hom}(X,Y[i])=0,\ \forall i\in I\}\\ {}^{\perp_I}\mathcal{X}:=&\{Y\in\mathcal{A}\colon \mathsf{Hom}(Y,X[i])=0,\ \forall i\in I\}. \end{split}$$ We often replace $I$ by the symbols $\geq 0$, $\leq 0$, $>0$, $<0$, $\neq 0$, with the obvious meaning. 
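As a small illustration of this notation (our example, not taken from the survey), take $\mathcal{A}=\mathsf{D}(\mathbb{Z})$ and $\mathcal{X}=\{\mathbb{Z}/p\}$ for a prime $p$. Since $\mathbb{Z}$ is hereditary, $$\nonumber \mathsf{Hom}_{\mathsf{D}(\mathbb{Z})}\big(\mathbb{Z}/p,(\mathbb{Z}/p)[i]\big)\cong \begin{cases} \mathbb{Z}/p & \text{if } i\in\{0,1\},\\ 0 & \text{otherwise,} \end{cases}$$ so $\mathbb{Z}/p$ lies in $\mathcal{X}^{\perp_{>1}}$ and in $\mathcal{X}^{\perp_{<0}}$, but not in $\mathcal{X}^{\perp_{\geq 0}}$.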
We write $\mathcal{X}\ast \mathcal{Y}$ for the subcategory formed by all objects $A$ of $\mathcal{A}$ for which there are $X$ in $\mathcal{X}$, $Y$ in $\mathcal{Y}$ and a triangle $$\nonumber \xymatrix{X\ar[r]&A\ar[r]&Y\ar[r]&X[1].}$$ If $\mathcal{A}$ is abelian, we denote its derived category by $\mathsf{D}(\mathcal{A})$ and its bounded derived category by $\mathsf{D}^b(\mathcal{A})$. If $\mathcal{A}$ is cocomplete, we consider the subcategory of **finitely presented objects** $\mathsf{fp}(\mathcal{A})$ formed by those $X$ for which $\mathsf{Hom}_\mathcal{A}(X,-)$ commutes with direct limits. Particular focus will be given to **Grothendieck abelian categories**, i.e., cocomplete abelian categories with exact direct limits and a generator. Recall that a Grothendieck category $\mathcal{A}$ is said to be **locally coherent** if there is a generating set of finitely presented objects and $\mathsf{fp}(\mathcal{A})$ is abelian. For a ring $R$, we denote by $\mathsf{Mod}(R)$ the category of right $R$-modules and, if $R$ is coherent, we denote by $\mathsf{mod}(R)$ the subcategory $\mathsf{fp}(\mathsf{Mod}(R))$. We will use the shorter version $\mathsf{D}(R)$ for the derived category $\mathsf{D}(\mathsf{Mod}(R))$, and $\mathsf{D}^b(R)$ for the bounded derived category $\mathsf{D}^b(\mathsf{mod}(R))$. ## Comments on the setup We will mostly be working in the derived category of a commutative noetherian ring $R$. Nevertheless, some of our results (mostly introductory ones) hold in much larger generality and, when this generality comes at very little cost to the reader, we shall state the result in that general setting. At points, nevertheless, we may choose to state a result in a more restricted setting, should that ease the reading of some technical points. We will refer to the literature when appropriate. A general framework that we commonly use is that of compactly generated triangulated categories.
Recall that a triangulated category $\mathcal{D}$ is **compactly generated** if it admits set-indexed coproducts and the full subcategory of compact objects, denoted by $\mathcal{D}^c$, is a generating subcategory, i.e., if $(\mathcal{D}^c)^\perp=0$. In a compactly generated triangulated category we have a natural notion of pure-injective objects: an object $X$ is said to be **pure-injective** if the functor $\mathbf{y}X:=\mathsf{Hom}_\mathcal{D}(-,X)_{|\mathcal{D}^c}$ is an injective object in the locally coherent Grothendieck category $\mathsf{Mod}(\mathcal{D}^c)$ of contravariant additive functors from $\mathcal{D}^c$ to the category of abelian groups. For more details on this functor category, pure-injective objects and the theory of purity in compactly generated triangulated categories in general, we refer to [@Herzog] and [@Krause-TC]. Note that the derived category of the category of left/right modules over any ring is a compactly generated triangulated category, whose compact objects are precisely those isomorphic to bounded complexes of finitely generated projective modules. ## A glossary of torsion pairs and t-structures {#glossary} This paper is focused on the interaction of t-structures in the derived category of a commutative noetherian ring with torsion pairs in the hearts of such t-structures. For the definitions of these three concepts (t-structure; heart; torsion pair) and basic properties (the heart is abelian; a t-structure induces a cohomological functor to the heart) we refer to [@BBDG]. In this subsection we set up a brief list of qualifiers (tagged with some initials in square brackets) that we apply to either t-structures, their hearts or torsion pairs in their hearts throughout the paper. ### t-structures and cosilting Let $\mathcal{D}$ be a compactly generated triangulated category with $\mathcal{D}^c$ its full subcategory of compact objects.
- A t-structure $\mathbb{T}:=(\mathcal{U},\mathcal{V})$ in $\mathcal{D}$ is **compactly generated** if $(\mathcal{U}\cap \mathcal{D}^c)^\perp=\mathcal{V}$. - A t-structure $\mathbb{T}:=(\mathcal{U},\mathcal{V})$ in $\mathcal{D}$ is **nondegenerate** if $\cap_{n\in\mathbb{Z}}\mathcal{U}[n]=0=\cap_{n\in\mathbb{Z}}\mathcal{V}[n]$. - If $\mathcal{D}=\mathsf{D}(R)$ for a ring $R$ or if $\mathcal{D}=\mathsf{D}^b(R)$ for a coherent ring $R$, we say that $\mathbb{T}$ is **intermediate** if there are integers $a<b$ such that $\mathsf{D}^{\leq a}\subseteq \mathcal{U}\subseteq \mathsf{D}^{\leq b}$, where $\mathbb{D}:=(\mathsf{D}^{< 0},\mathsf{D}^{\geq 0})$ is the standard t-structure in $\mathsf{D}(R)$ (respectively, in $\mathsf{D}^b(R)$). Two particular sources of t-structures are cosilting/cotilting objects. - An object $C$ of $\mathcal{D}$ is said to be **cosilting** if $\mathbb{T}_C:=({}^{\perp_{\leq 0}}C, {}^{\perp_{>0}}C)$ is a t-structure. Such a t-structure is then said to be a **cosilting t-structure**. The assignment sending a cosilting object $C$ to its t-structure $\mathbb{T}_C$ is denoted by $\theta$. - An object $C$ of $\mathcal{D}$ is said to be **cotilting** if it is cosilting and $\mathsf{Prod}(C)$ is contained in the heart of $\theta(C)$. Such a t-structure is then said to be a **cotilting t-structure**. Two cosilting objects $C_1$ and $C_2$ are said to be **equivalent** if $\mathsf{Prod}(C_1)=\mathsf{Prod}(C_2)$ or, equivalently, if they yield the same t-structure ([@PV; @NSZ]). This means that the assignment $\theta$ described above can be defined on equivalence classes of cosilting objects. Of particular interest to us are pure-injective cosilting objects. In fact, it remains an open question whether every cosilting object is pure-injective.
If $\mathcal{D}=\mathsf{D}(R)$, it is shown in [@MV Proposition 3.10] (based on [@B] and [@S]) that any cosilting object isomorphic to a bounded complex of injective $R$-modules (such cosilting complexes will be called **bounded**) is automatically pure-injective. These will, in fact, be the main cosilting objects we will be interested in and so, the pure-injectivity condition can be safely assumed and, for simplicity, we will avoid (unless strictly necessary) referring to it. Let us fix some notation regarding these definitions, where $\mathcal{D}$ denotes a compactly generated triangulated category and $R$ denotes a ring. - We denote by $\mathsf{Cosilt}^*(\mathcal{D})$ (respectively, $\mathsf{Cotilt}^*(\mathcal{D})$) the set of equivalence classes of pure-injective cosilting (respectively, cotilting) objects. It is not obvious that these are indeed sets: this follows from a bijection between equivalence classes of cosilting objects and certain ideals of morphisms of $\mathcal{D}^c$ ([@SS20 Theorem 8.16 and Proposition 9.6]). - We denote the class of t-structures in $\mathcal{D}$ by $\mathsf{t\mbox{-}str}_\mathcal{D}$. We consider the following subsets of this class: - $\mathsf{t\mbox{-}str}_\mathcal{D}[\mathsf{CG}]$, whose elements are the compactly generated t-structures in $\mathcal{D}$ - $\mathsf{t\mbox{-}str}_\mathcal{D}[\mathsf{Cosilt^*}]$, whose elements are the t-structures $\theta(C)$ for $C$ in $\mathsf{Cosilt}^*(\mathcal{D})$. 
The restriction of $\theta$ to $\mathsf{Cosilt}^*(\mathcal{D})$ induces, tautologically, a bijection $$\nonumber \theta\colon \mathsf{Cosilt}^*(\mathcal{D})\longrightarrow \mathsf{t\mbox{-}str}_\mathcal{D}[\mathsf{Cosilt^*}].$$ - If $\mathcal{D}=\mathsf{D}(R)$, we consider the following subclasses of the classes defined above: - $\mathsf{Cosilt}^b(\mathsf{D}(R))$: the subset of bounded cosilting objects; - $\mathsf{Cotilt}^b(\mathsf{D}(R))$: the subset of bounded cotilting objects; - $\mathsf{t\mbox{-}str}_{\mathsf{D}(R)}[\mathsf{Int}]$: the subclass of intermediate t-structures; - $\mathsf{t\mbox{-}str}_{\mathsf{D}(R)}[\mathsf{CG,Int}]:=\mathsf{t\mbox{-}str}_{\mathsf{D}(R)}[\mathsf{Int}]\cap \mathsf{t\mbox{-}str}_{\mathsf{D}(R)}[\mathsf{CG}]$; - $\mathsf{t\mbox{-}str}_{\mathsf{D}(R)}[\mathsf{Cosilt^*,Int}]:=\mathsf{t\mbox{-}str}_{\mathsf{D}(R)}[\mathsf{Int}]\cap \mathsf{t\mbox{-}str}_{\mathsf{D}(R)}[\mathsf{Cosilt^*}]$. The following theorem summarises some useful properties of compactly generated t-structures, cosilting t-structures and their relation. **Theorem 1** ([@AMV3; @L; @MV; @SS20]). *Let $\mathcal{D}$ be a compactly generated triangulated category and let $\mathbb{T}=(\mathcal{U},\mathcal{V})$ be a t-structure in $\mathcal{D}$. The following statements hold.* 1. *The t-structure $\mathbb{T}$ lies in $\mathsf{t\mbox{-}str}_\mathcal{D}[\mathsf{Cosilt^*}]$ if and only if $\mathbb{T}$ is nondegenerate, $\mathcal{V}$ is closed under coproducts and the heart of $\mathbb{T}$ is a Grothendieck category.* 2. *If $\mathbb{T}$ is a nondegenerate and compactly generated t-structure in $\mathcal{D}$, then there is a (unique, up to equivalence) pure-injective cosilting object $C$ such that $\mathsf{Prod}(C)=\mathcal{V}^{\perp}[-1]\cap \mathcal{V}$. We denote by $\alpha$ the injective function sending $\mathbb{T}$ to $C$; it has $\theta$ as a left inverse, i.e.
$\mathbb{T}=\theta(\alpha(\mathbb{T}))=({}^{\perp_{\leq 0}}\alpha(\mathbb{T}),{}^{\perp_{> 0}}\alpha(\mathbb{T}))$.* 3. *If $\mathcal{D}=\mathsf{D}(R)$, then the assignment $\alpha$ defined above restricts to an injective function (also denoted by $\alpha$) $$\nonumber \alpha\colon \mathsf{t\mbox{-}str}_\mathcal{D}[\mathsf{CG, Int}]\longrightarrow \mathsf{Cosilt}^b(\mathcal{D}) (\subseteq \mathsf{Cosilt}^*(\mathcal{D})).$$* *Proof.* Statement (1) is shown in [@AMV3 Corollary 3.8], using the Brown Representability Theorem for compactly generated triangulated categories. Statement (2) was first proved in [@AMV3] when $\mathcal{D}$ is, in addition, algebraic; it was later extended in [@L] to the case when $\mathcal{D}$ lies at the base of a stable derivator, and it was finally proved for all compactly generated triangulated categories in [@SS20 Theorem 8.31]. We also refer to [@PV] and [@NSZ] for details on how to obtain the cosilting object out of a cosilting t-structure. Finally, statement (3) is an easy observation that follows, for example, from [@MV Theorem 3.14]. The fact that $\mathsf{Cosilt}^b(\mathcal{D}) \subseteq \mathsf{Cosilt}^*(\mathcal{D})$ was already mentioned (see [@MV Proposition 3.10]). ◻ ### Hearts We denote by $\mathcal{H}$ the assignment that sends a t-structure $\mathbb{T}=(\mathcal{U},\mathcal{V})$ in a triangulated category $\mathcal{D}$ to its heart $\mathcal{H}_\mathbb{T}$, that is, the subcategory $\mathcal{U}[-1]\cap\mathcal{V}$ of $\mathcal{D}$, whose inclusion functor we denote by $\epsilon_\mathbb{T}\colon \mathcal{H}_\mathbb{T}\longrightarrow \mathcal{D}$. We denote the class of hearts in $\mathcal{D}$ by $\mathsf{Heart}_\mathcal{D}$. By definition, the assignment $\mathcal{H}\colon \mathsf{t\mbox{-}str}_{\mathcal{D}}\longrightarrow \mathsf{Heart}_\mathcal{D}$ is surjective, but it is well known that a heart $\mathcal{H}_\mathbb{T}$ does not uniquely determine $\mathbb{T}$. For example, any stable t-structure (i.e.
a t-structure for which both classes are triangulated subcategories) has a zero heart, and yet there are multiple such t-structures in any non-trivial $\mathcal{D}$ (for example, the pairs $(0,\mathcal{D})$ and $(\mathcal{D},0)$). In other words, the assignment $\mathcal{H}\colon \mathsf{t\mbox{-}str}_\mathcal{D} \longrightarrow \mathsf{Heart}_\mathcal{D}$ is surjective by definition, but usually highly non-injective. Still, we have the following easy lemma (analogous to the reconstruction of a bounded t-structure from its heart, see for example [@Br Lemma 2.3]). **Lemma 2**. *Let $R$ be a ring and $\mathbb{T}$ an intermediate t-structure in $\mathsf{D}(R)$. Then the subcategory $\mathcal{H}_\mathbb{T}$ of $\mathcal{D}$ determines the intermediate t-structure $\mathbb{T}$.* *Proof.* Let $\mathbb{T}=(\mathcal{U},\mathcal{V})$. Without loss of generality, suppose that $\mathsf{D}^{\leq -n}\subseteq \mathcal{U}\subseteq \mathsf{D}^{\leq 0}$ for some $n\geq 0$. Let $X$ be an object in $\mathsf{D}^{\geq -n+1}\cap \mathcal{U}$ (and, therefore, in $\mathcal{V}[n]\cap \mathcal{U}$). By iterated truncations (with respect to $\mathbb{T}$), we obtain that $X$ is an iterated finite extension of shifts of objects in $\mathcal{H}_\mathbb{T}$. Indeed, notice that this iteration stops at the truncation with respect to $\mathcal{U}[n]\subseteq \mathsf{D}^{\leq -n}$, since objects in this subcategory have no maps to $X$. It then follows that every object in $\mathcal{U}$ is an extension of an object in $\mathsf{D}^{\leq -n}$ with an object built from a finite sequence of extensions of objects in shifts of the heart $\mathcal{H}_\mathbb{T}$. Finally, note that $\mathbb{T}$ is completely determined by $\mathcal{U}$. ◻ We denote by $\mathsf{Heart}_{\mathsf{D}(R)}[\mathsf{CG, Int}]$ the image of $\mathsf{t\mbox{-}str}_{\mathsf{D}(R)}[\mathsf{CG, Int}]$ under $\mathcal{H}$.
### Torsion pairs A torsion pair $\mathbf{t}:=(\mathcal{T},\mathcal{F})$ in an abelian category $\mathcal{A}$ is said to be - **hereditary** if $\mathcal{T}$ is closed under subobjects; - **of finite type** if $\mathcal{A}$ is cocomplete and $\mathcal{F}$ is closed under direct limits. We denote the class of torsion pairs in an abelian category $\mathcal{A}$ by $\mathsf{Tors}_\mathcal{A}$ and we write - $\mathsf{Tors}_\mathcal{A}[\mathsf{Her}]$ for the subclass of hereditary torsion pairs; - $\mathsf{Tors}_\mathcal{A}[\mathsf{FT}]$ for the subclass of torsion pairs of finite type (if $\mathcal{A}$ is cocomplete); - $\mathsf{Tors}_\mathcal{A}[\mathsf{Her, FT}]:=\mathsf{Tors}_\mathcal{A}[\mathsf{Her}]\cap \mathsf{Tors}_\mathcal{A}[\mathsf{FT}]$ (if $\mathcal{A}$ is cocomplete). If $\mathcal{A}$ is a locally coherent Grothendieck category, we may wonder whether the classes $\mathsf{Tors}_\mathcal{A}$ and $\mathsf{Tors}_{\mathsf{fp}(\mathcal{A})}$ are related in any sensible way. **Theorem 3**. *[@CBlfp Lemma 4.4][\[torsion pairs prelim\]]{#torsion pairs prelim label="torsion pairs prelim"} Let $\mathcal{A}$ denote a locally finitely presented Grothendieck category. There is an injective assignment $$\begin{aligned} \mathsf{Lift}\colon \mathsf{Tors}_{\mathsf{fp}(\mathcal{A})}\longrightarrow& \mathsf{Tors}_{\mathcal{A}}[FT]\\ (\mathsf{t},\mathsf{f})\mapsto&({}^\perp(\mathsf{t}^\perp),(\mathsf{t}^\perp))=(\varinjlim \mathsf{t},\varinjlim \mathsf{f})\end{aligned}$$ where $\varinjlim \mathsf{t}$ and $\varinjlim \mathsf{f}$ denote the closure under direct limits in $\mathcal{A}$ of the subclasses $\mathsf{t}$ and $\mathsf{f}$ of $\mathsf{fp}(\mathcal{A})$, respectively. Furthermore, if $\mathcal{A}$ is a locally noetherian Grothendieck category (i.e. 
if all objects in $\mathsf{fp}(\mathcal{A})$ are noetherian), then $\mathsf{Lift}$ is a bijection.* # A guide to compactly generated t-structures for a commutative noetherian ring {#der cat} For a commutative noetherian ring $R$, consider its prime spectrum, which we denote by $\mathsf{Spec}(R)$. This is a poset ordered by inclusion, and it is endowed with two natural topologies, naturally dual to each other. Indeed, for a prime ideal $\mathfrak{p}$, let $V(\mathfrak{p})$ denote the set of prime ideals of $R$ that contain $\mathfrak{p}$. Then - The Zariski topology is the unique topology for which the closed sets are exactly the finite unions of sets of the form $V(\mathfrak{p})$; - The Hochster dual topology is the unique topology for which the sets of the form $V(\mathfrak{p})$ are a basis of open subsets. Note that the open sets of the Hochster dual topology are therefore the upper sets of the poset $\mathsf{Spec}(R)$, i.e. the subsets $V$ of $\mathsf{Spec}(R)$ for which if $\mathfrak{p}$ lies in $V$ and $\mathfrak{p}\subseteq \mathfrak{q}$, then also $\mathfrak{q}$ lies in $V$. We will denote the set of these subsets by $\mathscr{O}^H(\mathsf{Spec}(R))$, and they are called **specialisation-closed**. These subsets of $\mathsf{Spec}(R)$ play an important role in classifying hereditary torsion pairs in $\mathsf{Mod}(R)$ (see Example [Example 15](#standard){reference-type="ref" reference="standard"}). When studying t-structures in $\mathsf{D}(R)$, the following slightly more intricate combinatorial objects are useful. **Lemma 4**. *[@Sta Proposition 4.1][@Tak Proposition 4.3][@HNS Remark 2.10] Let $R$ be a commutative noetherian ring. There is a bijection between* 1. *Decreasing functions $\varphi\colon \mathbb{Z}\longrightarrow 2^{\mathsf{Spec}(R)}$ such that $\varphi(n)$ lies in $\mathscr{O}^H(\mathsf{Spec}(R))$ for all $n$ in $\mathbb{Z}$ (these are known as **sp-filtrations** of $\mathsf{Spec}(R)$).* 2. 
*Increasing functions $f\colon \mathsf{Spec}(R)\longrightarrow \overline{\mathbb{Z}}:=\mathbb{Z}\cup\{\pm \infty\}$.* *Moreover, this bijection restricts to a bijection between sp-filtrations $\varphi$ of $\mathsf{Spec}(R)$ for which there are integers $a<b$ such that $\varphi(a)=\mathsf{Spec}(R)$ and $\varphi(b)=\emptyset$ (we will call them **bounded**) and **bounded** increasing functions $f\colon \mathsf{Spec}(R)\longrightarrow \mathbb{Z}$, i.e. those whose image is contained in an interval $[a,b]$ ($a<b$ integers).* The set formed by the bounded sp-filtrations will be denoted by $\mathsf{sp\mbox{-}filt}^b(R)$; the bounded increasing functions mentioned above are bounded poset homomorphisms from $\mathsf{Spec}(R)$ to $\mathbb{Z}$, and we denote the set of those by $\mathsf{Hom}^b_\mathsf{pos}(\mathsf{Spec}(R),\mathbb{Z})$. Below, we define the inverse assignments giving the bijection set out in the lemma: $$\nonumber \begin{split} \mathsf{f}\colon \mathsf{sp\mbox{-}filt}^b(R)\longrightarrow \mathsf{Hom}^b_\mathsf{pos}(\mathsf{Spec}(R),\mathbb{Z}), &\ \ \ \varphi\mapsto \mathsf{f}_\varphi\\ \Phi\colon \mathsf{Hom}^b_\mathsf{pos}(\mathsf{Spec}(R),\mathbb{Z})\longrightarrow \mathsf{sp\mbox{-}filt}^b(R), &\ \ \ f\mapsto\Phi_f. \end{split}$$ Given a bounded sp-filtration $\varphi$, note that $\mathsf{Spec}(R)$ is the $\mathbb{Z}$-indexed disjoint union of the subsets $\varphi(n-1)\setminus \varphi(n)$. Define a function $\mathsf{f}_\varphi\colon \mathsf{Spec}(R)\longrightarrow \mathbb{Z}$ by assigning to each prime ideal $\mathfrak{p}$ the (unique) integer $\mathsf{f}_\varphi(\mathfrak{p}):=n$ for which $\mathfrak{p}$ lies in $\varphi(n-1)\setminus \varphi(n)$. It can then be checked that $\mathsf{f}_\varphi$ is an increasing function (essentially due to the fact that $\varphi(n)$ is specialisation-closed for all $n$ in $\mathbb{Z}$).
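On a toy model, the two assignments $\varphi\mapsto \mathsf{f}_\varphi$ and $f\mapsto\Phi_f$ are easy to compute by hand. The following sketch is purely illustrative (a three-element chain stands in for $\mathsf{Spec}(R)$, and all names in it are our own assumptions, not data from the surveyed results); it checks on such a chain that the two constructions are mutually inverse.

```python
# Illustrative sketch only: a three-element chain P0 < P1 < P2 stands in
# for Spec(R); all names here are assumptions, not from the survey.

SPEC = ["P0", "P1", "P2"]                      # listed in increasing order
LEQ = {(p, q) for i, p in enumerate(SPEC) for q in SPEC[i:]}

def is_sp_closed(subset):
    """Specialisation-closed = upper set: closed under passing to larger primes."""
    return all(q in subset for (p, q) in LEQ if p in subset)

def f_of(phi, a, b):
    """f_phi(p) is the unique n with p in phi(n-1) but not in phi(n)."""
    return {p: next(n for n in range(a + 1, b + 1)
                    if p in phi(n - 1) and p not in phi(n))
            for p in SPEC}

def phi_of(f):
    """Phi_f(n) = f^{-1}( ]n, +infinity[ )."""
    return lambda n: {p for p in SPEC if f[p] > n}

# A bounded sp-filtration with phi(-1) = Spec(R) and phi(2) = empty:
def phi(n):
    if n <= -1:
        return set(SPEC)
    return {"P1", "P2"} if n == 0 else ({"P2"} if n == 1 else set())

assert all(is_sp_closed(phi(n)) for n in range(-3, 4))
f = f_of(phi, a=-1, b=2)
print(f)                                       # {'P0': 0, 'P1': 1, 'P2': 2}
assert all(phi_of(f)(n) == phi(n) for n in range(-3, 4))  # mutually inverse
```

Note that the computed `f` is increasing along the chain, as the lemma predicts.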
Note that if $\varphi(a)=\mathsf{Spec}(R)$ and $\varphi(b)=\emptyset$, then the image of $\mathsf{f}_\varphi$ is contained in the integer interval $]a,b]$. For the inverse assignment, given an increasing function $f\colon \mathsf{Spec}(R)\longrightarrow \mathbb{Z}$, define a filtration $\Phi_f$ of $\mathsf{Spec}(R)$ as follows: $\Phi_f(n):=f^{-1}(]n,+\infty[)$. It can then be easily checked that this is an sp-filtration and, moreover, that since $f$ is assumed to have a bounded image, say contained in $]a,b]$ for some integers $a<b$, then $\Phi_f(a)=f^{-1}(]a,+\infty[)=\mathsf{Spec}(R)$ and $\Phi_f(b)=f^{-1}(]b,+\infty[)=\emptyset$. This shows that $\Phi_f$ is, indeed, a bounded sp-filtration. **Example 5**. Note that if $R$ has finite Krull dimension $n$, the **height function** sending each prime in $\mathsf{Spec}(R)$ to its height is a bounded poset homomorphism, whose image is precisely the interval $[0,n]$. The above combinatorial data turns out to classify compactly generated t-structures in the derived category of a commutative noetherian ring $R$. Denoting by $k(\mathfrak{p})$ the residue field of $R$ at $\mathfrak{p}$, we define the **support** of an object $X$ of $\mathsf{D}(R)$ as $$\mathsf{supp}(X):=\{\mathfrak{p}\in\mathsf{Spec}(R)\colon k(\mathfrak{p})\otimes_R^\mathbb{L}X\neq 0\}.$$ For any subcategory $\mathcal{X}$ of $\mathsf{D}(R)$, we write $\mathsf{supp}(\mathcal{X})$ to denote the union, over all $X$ in $\mathcal{X}$, of $\mathsf{supp}(X)$. The following theorem combines a series of results from the literature. **Theorem 6**. *Let $R$ be a commutative noetherian ring.
The diagram $$\nonumber \xymatrix{\mathsf{t\mbox{-}str}_{\mathsf{D}^b(R)}[\mathsf{Int}]\ar[rrrr]^{\alpha\ \circ \ \mathsf{t\mbox{-}Lift}}\ar[dd]^{\mathsf{t\mbox{-}Lift}}&&&&\mathsf{Cosilt}^b(\mathsf{D}(R))\ar@/^1.3pc/[ddllll]^{\theta}\ar[dd]_{\mathcal{H}\circ \theta}\\ \\ \mathsf{t\mbox{-}str}_{\mathsf{D}(R)}[\mathsf{CG, Int}]\ar@/^1.3pc/[uurrrr]^{\alpha}\ar[rrrr]_{\mathcal{H}}\ar[d]^{\beta}&&&& \mathsf{Heart}_{\mathsf{D}(R)}[\mathsf{CG,Int}]\ar[d]_{\gamma}\\ \mathsf{sp\mbox{-}filt}^b(R)\ar[rrrr]^{\mathsf{f}}&&&&\mathsf{Hom}^b_\mathsf{pos}(\mathsf{Spec}(R),\mathbb{Z})}$$ commutes, $\mathsf{t\mbox{-}Lift}$ is an injection and $\alpha$, $\beta$, $\gamma$, $\theta$, $\mathcal{H}$ and $\mathsf{f}$ are bijections defined by:* - *$\alpha$, $\theta$, $\mathcal{H}$ and $\mathsf{f}$ are the assignments already defined:* - *$\alpha$ sends a compactly generated intermediate t-structure $\mathbb{T}$ to the (unique up to equivalence) bounded cosilting complex $\alpha(\mathbb{T})$ such that $\mathbb{T}=({}^{\perp_{\leq 0}}\alpha(\mathbb{T}),{}^{\perp_{>0}}\alpha(\mathbb{T}))$;* - *$\theta$ sends a cosilting complex $C$ to the associated cosilting t-structure, and it is the inverse of $\alpha$;* - *$\mathcal{H}$ sends a t-structure $\mathbb{T}$ to its heart $\mathcal{H}_\mathbb{T}$;* - *$\mathsf{f}$ sends a bounded sp-filtration $\varphi$ to a bounded morphism of posets $\mathsf{f}_\varphi$;* - *$\mathsf{t\mbox{-}Lift}(u,v):=({}^\perp(u^{\perp}),u^{\perp})=(\mathsf{hocolim}(u),\mathsf{hocolim}(v))$;* - *$\beta(\mathcal{U},\mathcal{V}):=(\varphi\colon \mathbb{Z}\longrightarrow 2^{\mathsf{Spec}(R)}; n\mapsto \mathsf{supp}(H^n(\mathcal{U})))$;* - *$\gamma(\mathcal{H}_\mathbb{T}):=(\psi\colon \mathsf{Spec}(R)\longrightarrow \mathbb{Z}; \mathfrak{p}\mapsto \psi(\mathfrak{p}))$, where $\psi(\mathfrak{p})$ is the unique integer for which $k(\mathfrak{p})[-\psi(\mathfrak{p})]$ lies in $\mathcal{H}_\mathbb{T}$.* *Proof.* We articulate the results in the literature that yield the
diagram. Note that $\mathcal{H}$ is a bijection from Lemma [Lemma 2](#int t-str){reference-type="ref" reference="int t-str"}. 1. $\mathsf{t\mbox{-}Lift}$ is well-defined (i.e., the image is indeed a compactly generated and intermediate t-structure) by [@MZ]. It is an injection with a left inverse given by the intersection with $\mathsf{D}^b(R)$. Note that $\mathsf{t\mbox{-}Lift}$ is a t-structure version of the assignment $\mathsf{Lift}$ defined for torsion pairs in Theorem [\[torsion pairs prelim\]](#torsion pairs prelim){reference-type="ref" reference="torsion pairs prelim"}. 2. The map $\alpha$ is well-defined and injective by Theorem [Theorem 1](#t-structures prelim){reference-type="ref" reference="t-structures prelim"}(3) and its surjectivity is a version of the so-called *telescope problem for t-structures*. Starting with a pure-injective cosilting object $C$, it is well-known that the associated t-structure $\theta(C)$ is homotopically smashing (see [@SSV] for the relevant concept). There is a priori no reason for it to be compactly generated, but it is shown in [@HN Theorem 1.1] that for a commutative noetherian ring this is the case. The intermediate condition follows from the boundedness of the cosilting complex. 3. The fact that $\beta$ is a bijection follows from [@AJS Theorem 3.10]. The inverse assignment is the obvious one: for each sp-filtration $\varphi$ consider the class $$\nonumber \mathcal{U}_\varphi:=\{X\in \mathsf{D}(R)\colon \mathsf{supp}(H^n(X))\subseteq \varphi(n),\ \forall n\in\mathbb{Z}\}.$$ It is shown in [@AJS] that $(\mathcal{U}_\varphi,\mathcal{U}_\varphi^\perp)$ is indeed a compactly generated t-structure in $\mathsf{D}(R)$. Once again, if we restrict ourselves to intermediate t-structures it is easy to see that we get under $\beta$ a bijection with bounded sp-filtrations. 4. The fact that $\gamma$ is well-defined follows from [@PaV Proposition 4.7], based on results of [@HN].
It is indeed the case that for the heart $\mathcal{H}_\mathbb{T}$ of a compactly generated intermediate t-structure $\mathbb{T}$ in $\mathsf{D}(R)$, for each residue field of $R$ there is one and only one shift of it that lies in $\mathcal{H}_\mathbb{T}$. Moreover, as shown in [@PaV], that shift is precisely determined by the symmetric value of $\mathsf{f}_\varphi$, where $\varphi$ is the sp-filtration associated to $\mathbb{T}$ and $\mathsf{f}_\varphi$ is its corresponding poset homomorphism under the correspondence of Lemma [Lemma 4](#combinatorics){reference-type="ref" reference="combinatorics"}. Note that this is the essence of the commutativity of the bottom square of the diagram. Since all the other maps in the square are bijections, then so is $\gamma$.  ◻ *Remark 7*. It is also shown in [@MZ] that if $\mathbb{S}$ is an intermediate t-structure in $\mathsf{D}^b(R)$ for a coherent ring $R$ and $\mathbb{T}=\mathsf{t\mbox{-}Lift}(\mathbb{S})$, then the heart $\mathcal{H}_\mathbb{T}$ is a locally coherent Grothendieck category, whose subcategory of finitely presented objects satisfies $\mathsf{fp}(\mathcal{H}_\mathbb{T})=\mathcal{H}_\mathbb{S}$. We shall use the diagram in the theorem above as a guide for the next sections. We will define an operation of mutation in $\mathsf{Cosilt}^b(\mathsf{D}(R))$, and attempt to translate it to combinatorial data in $\mathsf{Spec}(R)$ following the diagram. **Example 8**. Consider $\mathbb{D}$ to be the standard t-structure in $\mathsf{D}^b(R)$.
Then we have that - $\mathsf{t\mbox{-}Lift}(\mathbb{D})$ is the standard t-structure in $\mathsf{D}(R)$; - $\mathcal{H} \circ \mathsf{t\mbox{-}Lift}(\mathbb{D})$ consists of the complexes $X$ in $\mathsf{D}(R)$ for which $H^k(X)=0$ for all $k\neq 0$, and it is equivalent to $\mathsf{Mod}(R)$; - $\alpha \circ \mathsf{t\mbox{-}Lift}(\mathbb{D})$ is the (equivalence class of the) injective cogenerator of $\mathsf{Mod}(R)$; - $\beta\circ \mathsf{t\mbox{-}Lift}(\mathbb{D})$ is the sp-filtration defined by $$\nonumber \varphi(n)=\begin{cases} \mathsf{Spec}(R)& {\rm if\ } n\leq -1\\ \emptyset & {\rm otherwise;}\end{cases}$$ - $\gamma\circ \mathcal{H}\circ \mathsf{t\mbox{-}Lift}(\mathbb{D})$ is the bounded homomorphism of posets $\psi\colon \mathsf{Spec}(R)\longrightarrow \mathbb{Z}$ which is constant equal to $0$. # Mutation {#mutation section} In this paper we discuss right mutation only. Left mutation is an inverse operation to right mutation, and it is also discussed in detail in [@ALSV]. As mentioned in the introduction, mutation is a process by which we change a component of an object of a certain nature to produce an object of the same nature. We introduce mutation of cosilting objects from two points of view and claim in Theorem [\[pathways to mutation\]](#pathways to mutation){reference-type="ref" reference="pathways to mutation"} that they are equivalent. ## Mutation via approximation theory We begin with a traditional point of view on mutation: for a fixed object pick a part of it that we want to remain unchanged, (right or left) approximate the object with respect to the part, and consider the triangle associated to such a map. The direct sum of the two new objects appearing in that triangle is then called the left/right mutation of the fixed object with respect to the part.
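The approximation step of this recipe can be tried out concretely in a very small additive category. The sketch below is purely illustrative (the category of finite cyclic groups and all choices in it are our own assumptions, not objects from the surveyed results): it brute-forces the defining property of a right approximation, i.e. a precover in the terminology recalled in the next paragraph, and checks right minimality.

```python
# Illustrative sketch only: in finite cyclic groups, a homomorphism
# Z/a -> Z/b is "multiply by k" for any k with a*k = 0 (mod b).  We check
# that u: Z/2 -> Z/4, 1 |-> 2, is an add(Z/2)-precover of Z/4.

def homs(a, b):
    """All homomorphisms Z/a -> Z/b, each recorded as the image of 1."""
    return [k for k in range(b) if (a * k) % b == 0]

u = 2  # the candidate precover Z/2 -> Z/4, multiplication by 2

# Precover condition: every map g: Z/2 -> Z/4 factors as g = u o h for
# some h: Z/2 -> Z/2 (checking Y = Z/2 suffices: surjectivity of
# Hom(Y, u) on finite sums can be tested componentwise).
for g in homs(2, 4):
    assert any((u * h) % 4 == g for h in homs(2, 2)), g

# Right minimality (cover): the only endomorphism h of Z/2 with
# u o h = u is h = 1, which is an isomorphism.
minimal = [h for h in homs(2, 2) if (u * h) % 4 == u]
print(minimal)  # [1]
```

The triangle over such a cover is exactly the ingredient used in the mutation procedure that follows.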
This is essentially the recipe followed for mutating exceptional collections in algebraic geometry (see, for example, [@Bondal]), or silting/cluster-tilting objects in representation theory (see, for example, [@BMRRT; @AI]). We will discuss right mutations and, thus, we will only care about right approximations or precovers. Given a subcategory $\mathcal{X}$ of an additive category $\mathcal{A}$, a map $f\colon X\longrightarrow A$ is said to be an **$\mathcal{X}$-precover** of $A$ if $X$ lies in $\mathcal{X}$ and $\mathsf{Hom}_\mathcal{A}(Y,f)$ is surjective for all $Y$ in $\mathcal{X}$. Such an $\mathcal{X}$-precover $f$ is said to be an **$\mathcal{X}$-cover** if $f$ is right minimal, i.e. if every endomorphism $g$ of $X$ satisfying $f\circ g=f$ is an isomorphism. If every object of $\mathcal{A}$ admits an $\mathcal{X}$-(pre)cover, then we say that $\mathcal{X}$ is **(pre)covering** in $\mathcal{A}$. We can now describe mutation of pure-injective cosilting objects via approximations. Let $\mathcal{D}$ be a compactly generated triangulated category and $C$ an object in $\mathsf{Cosilt}^*(\mathcal{D})$. Consider the following procedure that, out of an input subject to a mutability condition, produces an output. - **Input:** Consider a subcategory $\mathscr{E}$ of $\mathsf{Prod}(C)$ satisfying $\mathsf{Prod}(\mathscr{E})=\mathscr{E}$. - **Mutability condition:** We say that $C$ admits a right mutation with respect to $\mathscr{E}$ if $C$ admits an $\mathscr{E}$-cover. - **Output:** Let $\Phi\colon E_0\longrightarrow C$ be an $\mathscr{E}$-cover for $C$, and consider the triangle $$\nonumber \xymatrix{E_1\ar[r]&E_0\ar[r]^\Phi&C\ar[r]&E_1[1].}$$ Then we define the **right mutation of $C$ at $\mathscr{E}$** to be $\mu^{-}_\mathscr{E}(C):=E_0\oplus E_1$. The following theorem justifies why this operation is well-defined within $\mathsf{Cosilt}^*(\mathcal{D})$. **Theorem 9**.
*[@ALSV Proposition 4.5] Given $C$ in $\mathsf{Cosilt}^*(\mathcal{D})$ and $\mathscr{E}\subseteq\mathsf{Prod}(C)$ as above satisfying the mutability condition, the output $\mu^{-}_\mathscr{E}(C)$ lies in $\mathsf{Cosilt}^*(\mathcal{D})$.* ## Mutation via torsion pairs We now shift our focus from objects to categorical decompositions, namely t-structures and torsion pairs. The idea of creating a t-structure out of an old one, keeping part of the heart and changing some other part (these two parts forming a torsion pair, and in a right mutation we will want to *keep* the torsionfree class and *change* the torsion class) follows from [@HRS]. This process is called (right) **HRS-tilt**, and can be succinctly described as follows. **Theorem 10**. *[@HRS][\[HRS\]]{#HRS label="HRS"} Let $\mathbb{T}=(\mathcal{U},\mathcal{V})$ be a t-structure in a triangulated category $\mathcal{D}$ and let $\mathbf{t}:=(\mathcal{T},\mathcal{F})$ be a torsion pair in $\mathcal{H}_\mathbb{T}$. Then the pair $(\mathcal{U}\ast \mathcal{T},\mathcal{F}\ast (\mathcal{V}[-1]))$ is a t-structure with heart $\mathcal{F}\ast(\mathcal{T}[-1])$, called the **right HRS-tilt of $\mathbb{T}$ at $\mathbf{t}$**.* If we want this process to restrict to a given class of t-structures, say cosilting t-structures, then a mutability condition must be imposed. We can indeed describe mutation of cosilting t-structures (coming from pure-injective cosilting objects) via HRS-tilts. Let $\mathcal{D}$ be a compactly generated triangulated category and $\mathbb{T}$ a t-structure in $\mathsf{t\mbox{-}str}_\mathcal{D}[\mathsf{Cosilt^*}]$. Consider the following procedure that, out of an input subject to a mutability condition, produces an output. - **Input:** Consider a torsion pair $\mathbf{t}=(\mathcal{T},\mathcal{F})$ in $\mathsf{Tors}_{\mathcal{H}_\mathbb{T}}[\mathsf{Her}]$.
- **Mutability condition:** We say that $\mathbb{T}$ admits a right mutation with respect to $\mathbf{t}$ if $\mathbf{t}$ is of finite type, i.e. if it lies in $\mathsf{Tors}_{\mathcal{H}_\mathbb{T}}[\mathsf{Her, FT}]$. - **Output:** Consider the right HRS-tilt of $\mathbb{T}$ at $\mathbf{t}$ (see Theorem [\[HRS\]](#HRS){reference-type="ref" reference="HRS"}) given by the pair $\mu^-_\mathbf{t}(\mathbb{T}):=(\mathcal{U}\ast \mathcal{T},\mathcal{F}\ast (\mathcal{V}[-1]))$, which we define to be **the right mutation of $\mathbb{T}$ at $\mathbf{t}$**. The following theorem justifies why this operation is well-defined within $\mathsf{t\mbox{-}str}_\mathcal{D}[\mathsf{Cosilt}^*]$. This result relies on a generalisation of [@PS Theorem 1.2], which discusses when the right HRS-tilt of a Grothendieck heart is still Grothendieck. **Theorem 11**. *[@ALSV Theorem 4.3, Proposition 4.5] Given the input $(\mathbb{T},\mathbf{t})$ as above satisfying the mutability condition, the output $\mu^-_\mathbf{t}(\mathbb{T})$ lies in $\mathsf{t\mbox{-}str}_\mathcal{D}[\mathsf{Cosilt}^*]$.* ## The two approaches are equivalent As announced in the preamble of this section, we discuss why the approaches to mutation defined above are two points of view on essentially the same transformation. For a fixed pure-injective cosilting complex $C$, denote by $\theta(C)$ the associated cosilting t-structure, with $\mathcal{H}_C:=\mathcal{H}_{\theta(C)}={}^{\perp_{\neq 0}}C$ the associated (Grothendieck) heart and $\mathsf{H}^0_C\colon \mathcal{D}\longrightarrow \mathcal{H}_C$ the associated cohomological functor. **Theorem 12**. *[@ALSV Theorems 3.5 and 4.9][\[pathways to mutation\]]{#pathways to mutation label="pathways to mutation"} In a compactly generated triangulated category $\mathcal{D}$, the above pathways to cosilting mutation are equivalent.
More precisely, given $C$ in $\mathsf{Cosilt}^*(\mathcal{D})$ and $\mathscr{E}=\mathsf{Prod}(\mathscr{E})$ in $\mathsf{Prod}(C)$, we have that* 1. *$C$ admits a right mutation with respect to $\mathscr{E}$ if and only if $\theta(C)$ admits a right mutation with respect to $\mathbf{t}_\mathscr{E}:=({}^\perp H^0_C(\mathscr{E}),\mathsf{Cogen}(H^0_C(\mathscr{E})))$.* 2. *If $C$ admits a right mutation with respect to $\mathscr{E}$, then $\theta(\mu^-_\mathscr{E}(C))=\mu^{-}_{\mathbf{t}_\mathscr{E}}(\theta(C))$.* *Proof.* We begin by recalling that the assignment $\theta$ is a bijection. Fix $C$ a pure-injective cosilting object in $\mathcal{D}$ and $\mathbb{T}:=\theta(C)$ its associated t-structure. **Equivalence of inputs**: The cohomological functor $\mathsf{H}^0_C$ associated to $\mathbb{T}$ induces an equivalence of categories between $\mathsf{Prod}(C)$ and the subcategory $\mathsf{Inj}(\mathcal{H}_{\mathbb{T}})$ of injective objects in $\mathcal{H}_\mathbb{T}$, which we know to be a Grothendieck category. Given a subcategory $\mathscr{E}=\mathsf{Prod}(\mathscr{E})$ of $\mathsf{Prod}(C)$, $H^0_\mathbb{T}(\mathscr{E})$ is then a class of injective objects in $\mathcal{H}_\mathbb{T}$ that, therefore, cogenerates a hereditary torsion pair $({}^\perp H^0_\mathbb{T}(\mathscr{E}),\mathsf{Cogen}(H^0_\mathbb{T}(\mathscr{E})))$, where $\mathsf{Cogen}(H^0_\mathbb{T}(\mathscr{E}))$ stands for the class of subobjects in $\mathcal{H}_\mathbb{T}$ of products of objects in $H^0_\mathbb{T}(\mathscr{E})$. **Equivalence of mutability conditions**: This is Statement (1) of our theorem; we refer to the cited reference for a complete proof. **Equivalence of outputs**: It is shown in [@ALSV Theorem 4.9] that the t-structure associated to the cosilting object $\mu^{-}_\mathscr{E}(C)$ is precisely the right HRS-tilt of $\mathbb{T}$ at the torsion pair $\mathbf{t}_\mathscr{E}$.
Moreover, if $\mathscr{E}_1$ and $\mathscr{E}_2$ are subcategories of $\mathsf{Prod}(C)$ satisfying the mutability condition such that $\mathbf{t}_{\mathscr{E}_1}=\mathbf{t}_{\mathscr{E}_2}$, then $\mu^-_{\mathscr{E}_1}(C)$ and $\mu^-_{\mathscr{E}_2}(C)$ are equivalent. ◻ *Remark 13*. The condition for a subcategory $\mathscr{E}=\mathsf{Prod}(\mathscr{E})$ of $\mathsf{Prod}(C)$ to satisfy the mutability condition can be phrased topologically (in the Ziegler spectrum) if $\mathcal{H}_{\theta(C)}$ is a locally coherent Grothendieck category. We refer to [@ALS] for details. ## The commutative noetherian case Let $R$ be a commutative noetherian ring and $\mathcal{D}=\mathsf{D}(R)$. We try to understand how the action of mutation translates to actions on the vertices of the diagram of Theorem [Theorem 6](#diagram){reference-type="ref" reference="diagram"}, in particular to those of combinatorial nature: $\mathsf{sp\mbox{-}filt}^b(R)$ and $\mathsf{Hom}^b_{\mathsf{pos}}(\mathsf{Spec}(R),\mathbb{Z})$. To make this translation possible, we first ensure that the input for a mutation is determined by a subset of $\mathsf{Spec}(R)$. The proof of the following lemma relies on Neeman's classification of localising subcategories in $\mathsf{D}(R)$ (see [@N] for details). **Lemma 14**. *[@PaV Proposition 4.1 and Theorem 4.5][\[basics on support for hearts\]]{#basics on support for hearts label="basics on support for hearts"} Let $R$ be a commutative noetherian ring. Let $C$ be a bounded cosilting object in $\mathsf{D}(R)$ with associated heart $\mathcal{H}_{\theta(C)}$ and associated bounded homomorphism of posets $\psi_C:=\gamma(\mathcal{H}_{\theta(C)})\colon \mathsf{Spec}(R)\longrightarrow \mathbb{Z}$. 
The following statements hold for the assignment $$\nonumber \hat{\mathfrak{s}}\colon \mathsf{Tors}_{\mathcal{H}_{\theta(C)}}[\mathsf{Her}]\longrightarrow 2^{\mathsf{Spec}(R)}$$ sending a hereditary torsion pair $\mathbf{t}=(\mathcal{T},\mathcal{F})$ in $\mathcal{H}_{\theta(C)}$ to $\mathsf{supp}(\mathcal{T})$.* 1. *$\hat{\mathfrak{s}}$ is injective, with left inverse given by the formula $$\nonumber \mathsf{supp}^{-1}(\hat{\mathfrak{s}}(\mathbf{t}))\cap \mathcal{H}_{\theta(C)}=\mathcal{T}.$$* 2. *Given $\mathbf{t}=(\mathcal{T},\mathcal{F})$ in $\mathsf{Tors}_{\mathcal{H}_{\theta(C)}}[\mathsf{Her}]$, we have $$\nonumber \hat{\mathfrak{s}}(\mathbf{t})=\mathsf{supp}(\mathcal{T}):=\{\mathfrak{p}\in\mathsf{Spec}(R)\colon k(\mathfrak{p})[-\psi_C(\mathfrak{p})]\in\mathcal{T}\}.$$* 3. *$\mathscr{O}^H(\mathsf{Spec}(R))$ is contained in the image of $\hat{\mathfrak{s}}$, and for $V$ in $\mathscr{O}^H(\mathsf{Spec}(R))$ we have that $\mathsf{supp}^{-1}(V)\cap\mathcal{H}_{\theta(C)}$ is a hereditary torsion class in $\mathcal{H}_{\theta(C)}$.* Note that the Lemma above does not tell us precisely what the image of $\hat{\mathfrak{s}}$ is. Nevertheless, the first statement helpfully asserts that a hereditary torsion class in $\mathcal{H}_{\theta(C)}$ is determined by its support. **Example 15**. If $E$ is an injective cogenerator of $\mathsf{Mod}(R)$, then it is a pure-injective cosilting object in $\mathsf{D}(R)$ and $\theta(E)$ is nothing but the standard t-structure. In this case, $\psi_E$ is the constant function sending every prime in $\mathsf{Spec}(R)$ to zero. It is a celebrated theorem of Gabriel ([@G]) that the assignment of support considered above establishes a bijection between hereditary torsion classes in $\mathsf{Mod}(R)$ and specialisation-closed subsets of $\mathsf{Spec}(R)$. A recent result of Angeleri Hügel and Hrbek ([@AH1 Lemma 4.2]) shows that a torsion class in $\mathsf{Mod}(R)$ is hereditary if and only if it is of finite type.
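To make the correspondence side of Gabriel's theorem concrete in the simplest case, the following Python sketch (ours, not part of the formal development; the helper names are hypothetical) enumerates the specialisation-closed subsets of a finite truncation of $\mathsf{Spec}(\mathbb{Z})$. In $\mathsf{Spec}(\mathbb{Z})$ the closure of the generic point $(0)$ is the whole spectrum, while each $(p)$ is closed, so a subset is specialisation-closed exactly when it either avoids $(0)$ or is everything.

```python
# Toy model of Spec(Z), truncated to finitely many primes, illustrating
# which subsets are specialisation-closed. (A sketch under the stated
# identification of Spec(Z) with {0} ∪ P; helper names are ours.)
from itertools import combinations

PRIMES = (2, 3, 5)          # finite stand-in for the set of primes P
SPEC = (0,) + PRIMES        # Spec(Z) = {(0)} ∪ {(p) : p prime}

def specialisations(p):
    """All primes q with p ⊆ q, i.e. the closure of {p} in Spec(Z)."""
    return set(SPEC) if p == 0 else {p}

def is_specialisation_closed(W):
    return all(specialisations(p) <= set(W) for p in W)

# The specialisation-closed subsets are exactly the subsets of PRIMES
# together with all of SPEC, matching O^H(Spec(Z)) = {Spec(Z)} ∪ 2^P.
sp_closed = [frozenset(W)
             for r in range(len(SPEC) + 1)
             for W in combinations(SPEC, r)
             if is_specialisation_closed(W)]
assert all(0 not in W or W == frozenset(SPEC) for W in sp_closed)
print(len(sp_closed))  # → 9, i.e. 2^|PRIMES| + 1
```

In this toy model the count $2^{|\mathsf{PRIMES}|}+1$ mirrors the description $\mathscr{O}^H(\mathsf{Spec}(\mathbb{Z}))=\{\mathsf{Spec}(\mathbb{Z})\}\cup 2^{\mathbb{P}}$ appearing in the final example of the paper.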
We can therefore state that the assignment $$\nonumber \mathfrak{s}\colon \mathsf{Tors}_{\mathsf{Mod}(R)}[\mathsf{Her}]=\mathsf{Tors}_{\mathsf{Mod}(R)}[\mathsf{Her,FT}]\longrightarrow \mathscr{O}^H(\mathsf{Spec}(R))$$ sending a hereditary torsion pair of finite type $\mathbf{t}:=(\mathcal{T},\mathcal{F})$ to the specialisation-closed subset $\mathsf{supp}(\mathcal{T})$ is a bijection. We will be able to state an analogous result for hearts of certain t-structures later on (see Theorem [Theorem 20](#der equiv and mut condition){reference-type="ref" reference="der equiv and mut condition"}(1)). We are now able to carry right mutation, in a way compatible with what was described in the previous subsections, over to the sets $\mathsf{sp\mbox{-}filt}^b(R)$ and $\mathsf{Hom}^b_{\mathsf{pos}}(\mathsf{Spec}(R),\mathbb{Z})$. It is more convenient for us to establish this transformation using mutation via torsion pairs. **Theorem 16**. *Let $\mathbb{T}$ be a t-structure in $\mathsf{t\mbox{-}str}_{\mathsf{D}(R)}[\mathsf{Cosilt}^*,\mathsf{Int}]=\mathsf{t\mbox{-}str}_{\mathsf{D}(R)}[\mathsf{CG},\mathsf{Int}]$, and consider the associated bounded sp-filtration $\varphi:=\beta(\mathbb{T})$ and bounded homomorphism of posets $\psi:=\mathsf{f}_\varphi$. Take as input $\mathbf{t}$ in $\mathsf{Tors}_{\mathcal{H}_\mathbb{T}}[\mathsf{Her,FT}]$ (i.e. $\mathbf{t}$ satisfies the mutability condition), with $\mathsf{supp}(\mathcal{T})=:W$. Consider the output $\mu^{-}_\mathbf{t}(\mathbb{T})$.* 1. *The bounded homomorphism of posets $\mu^-_W(\psi):=\gamma \mathcal{H}_{\mu^{-}_\mathbf{t}(\mathbb{T})}$ is defined by $$\nonumber \mu^-_W(\psi)(\mathfrak{p})=\begin{cases}\psi(\mathfrak{p})+1&{\rm if\ \mathfrak{p}\in W}\\ \psi(\mathfrak{p})&{\rm if\ \mathfrak{p}\notin W}\end{cases}$$* 2.
*The sp-filtration $\mu^{-}_W(\varphi):=\beta(\mu^{-}_\mathbf{t}(\mathbb{T}))$ is defined by $$\nonumber \mu^{-}_W(\varphi)(n)=(W\cap \varphi(n-1))\cup \varphi(n)$$* *Proof.* (1) By the construction of $\mu^{-}_\mathbf{t}(\mathbb{T})$ we know that $\mathcal{H}_ {\mu^{-}_\mathbf{t}(\mathbb{T})}=\mathcal{F}\ast\mathcal{T}[-1]$. It is known (see [@PV Proposition 4.1]) that for any given prime ideal $\mathfrak{p}$, the shift of $k(\mathfrak{p})$ that lies in $\mathcal{H}_\mathbb{T}$ either lies in $\mathcal{T}$ (and this happens if and only if $\mathfrak{p}$ lies in $W$) or it lies in $\mathcal{F}$ (and this happens if and only if $\mathfrak{p}$ does not lie in $W$). By definition of the assignment $\gamma$, we obtain the description of $\mu^-_W(\psi)(\mathfrak{p})$ as claimed. \(2\) We determine $\mu^{-}_W(\varphi)$ by determining $\mathsf{f}^{-1}(\mu^-_W(\psi))$. As described in Lemma [Lemma 4](#combinatorics){reference-type="ref" reference="combinatorics"}, $\mu^{-}_W(\varphi)(n)$ can be calculated as $\mu^-_W(\psi)^{-1}(]n,+\infty[)$, i.e. the set of primes $\mathfrak{p}$ for which $\mu^-_W(\psi)(\mathfrak{p})> n$. These are precisely the primes $\mathfrak{p}$ that lie in $W$ and for which $\psi(\mathfrak{p})> n-1$ or the primes $\mathfrak{p}$ that do not lie in $W$ and for which $\psi(\mathfrak{p})> n$. In other words, we have $$\begin{aligned} \mu^{-}_W(\varphi)(n)&=(\varphi(n-1)\cap W)\cup (\varphi(n)\cap (\mathsf{Spec}(R)\setminus W))\\ &=(W\cap \varphi(n-1))\cup \varphi(n).\qedhere\end{aligned}$$ ◻ *Remark 17*. Note that there is a mild discrepancy in the notations $\mu^-_{\mathscr{E}}$, $\mu^-_\mathbf{t}$ and $\mu^-_W$. 
In the first case, the class in subscript indicates the part of the cosilting object we want mutation to keep; in the second case we write the whole torsion pair in subscript, indicating therefore both the part to be kept (the torsionfree class) and the part to change (the torsion class); in the third case we indicate the part of $\mathsf{Spec}(R)$ whose values (of a given bounded homomorphism of posets) will be changed. Despite the discrepancy, we hope that these choices make the notation more manageable in each setting. Note that in order to fully describe the mutation process in $\mathsf{Hom}_{\mathsf{pos}}(\mathsf{Spec}(R),\mathbb{Z})$ or in $\mathsf{sp\mbox{-}filt}^b(R)$, we need to be able to identify the mutability condition for a subset $W$ of $\mathsf{Spec}(R)$, i.e. to describe the properties of a subset $W$ of $\mathsf{Spec}(R)$ which is the support of a torsion class $\mathcal{T}$ of a torsion pair $\mathbf{t}$ in $\mathsf{Tors}_{\mathcal{H}_\mathbb{T}}[\mathsf{Her,FT}]$, where $\mathbb{T}$ is a t-structure in $\mathsf{t\mbox{-}str}_{\mathsf{D}(R)}[\mathsf{Cosilt}^*,\mathsf{Int}]$. We will see that this can be done under some assumptions on $\mathbb{T}$. # Derived equivalences {#der eq section} ## t-structures inducing derived equivalences Our current understanding of how mutation acts on all vertices of the diagram of Theorem [Theorem 6](#diagram){reference-type="ref" reference="diagram"} is entangled with the study of t-structures inducing derived equivalences. We say that a t-structure $\mathbb{T}$ in a triangulated category $\mathcal{D}$ **induces a derived equivalence** if the heart $\mathcal{H}_\mathbb{T}$ of $\mathbb{T}$ is such that the inclusion $\epsilon_\mathbb{T}\colon \mathcal{H}_\mathbb{T}\longrightarrow \mathcal{D}$ extends to a fully faithful triangle functor $F_\mathbb{T}\colon \mathsf{D}^b(\mathcal{H}_\mathbb{T})\longrightarrow \mathcal{D}$.
Note that the terminology may be slightly misleading: $F_\mathbb{T}$ need not be an equivalence, but it of course induces an equivalence between $\mathsf{D}^b(\mathcal{H}_\mathbb{T})$ and the essential image of $F_\mathbb{T}$. It is well-known from [@BBDG] that if $\mathbb{T}$ is a bounded t-structure and $F_\mathbb{T}$ is faithful, then the essential image of $F_\mathbb{T}$ is $\mathcal{D}$ itself. Note that all t-structures in $\mathsf{t\mbox{-}str}_{\mathsf{D}^b(R)}[\mathsf{Int}]$ are bounded. **Example 18**. Let us discuss how derived equivalences of rings (derived Morita theory) fit into this framework. Let $R$ be a ring and consider a tilting complex $T$ in the sense of Rickard ([@Rick]), i.e., $T$ is a bounded complex of finitely generated projective $R$-modules such that $\mathsf{Hom}_{\mathsf{D}(R)}(T,T[n])=0$ for all $n\neq 0$ and for which the smallest thick subcategory containing $T$ is precisely the category of compact objects in $\mathsf{D}(R)$. Then we know that: 1. The pair $\mathbb{T}:=(\mathsf{T}^{\perp_{\geq 0}},\mathsf{T}^{\perp_{<0}})$ is a t-structure in $\mathsf{D}(R)$; 2. The heart $\mathcal{H}_\mathbb{T}$ of $\mathbb{T}$ is cocomplete and has a small projective generator, namely $T$ itself. This means that $\mathcal{H}_\mathbb{T}$ is equivalent to the category of right modules over $\mathsf{End}_{\mathsf{D}(R)}(T)$; 3. There is a fully faithful triangle functor $F\colon \mathsf{D}^b(\mathcal{H}_\mathbb{T})\longrightarrow \mathsf{D}(R)$ extending the embedding $\epsilon_{\mathbb{T}}$ of $\mathcal{H}_\mathbb{T}$ into $\mathsf{D}(R)$; 4. The essential image of the functor $F$ is precisely $\mathsf{D}^b(\mathsf{Mod}(R))$ and, thus, $F$ induces a triangle equivalence $\mathsf{D}^b(\mathsf{Mod}(\mathsf{End}_{\mathsf{D}(R)}(T)))\longrightarrow \mathsf{D}^b(\mathsf{Mod}(R))$; 5.
It turns out that $F$ can also be extended to a triangle equivalence $$\nonumber \hat{F}\colon \mathsf{D}(\mathsf{End}_{\mathsf{D}(R)}(T))\longrightarrow\mathsf{D}(R).$$ As a consequence of Rickard's work in [@Rick], whenever two rings $R$ and $S$ have equivalent derived categories, there is an equivalence functor obtained by the (fairly involved) outline described above (based on [@BBDG], see also [@AJSS Section 6] and [@AI; @PV]). It is not clear whether such an equivalence functor is uniquely determined by the t-structure. In a compactly generated triangulated category, pure-injective cosilting objects parametrise *nice enough* t-structures with Grothendieck hearts (see Theorem [Theorem 1](#t-structures prelim){reference-type="ref" reference="t-structures prelim"}(1)). Therefore, by determining which cosilting t-structures induce derived equivalences, we generalise the derived Morita theory of Rickard discussed above, in the sense that every module category is a Grothendieck category. This generalisation encompasses known examples of derived equivalences that do not fit into Rickard's framework. For example, if we want to discuss how the celebrated derived equivalence shown by Beilinson in [@Bei] between quasi-coherent sheaves on the $1$-dimensional projective space and the representations of the Kronecker quiver fits in this framework, we should bring cosilting t-structures into the picture (see for example [@S]). **Proposition 19**. *[@PV Proposition 5.1] Let $\mathcal{G}$ be a Grothendieck abelian category, $\mathcal{D}(\mathcal{G})$ its derived category, and $C$ a cosilting object in $\mathcal{D}(\mathcal{G})$. Then $\theta(C)$ induces a derived equivalence if and only if $C$ is cotilting.* In this proposition, the assumption on the triangulated category (that it is the derived category of a Grothendieck category) serves the purpose of guaranteeing the existence of a so-called realisation functor, as defined in [@BBDG].
The result holds for other triangulated categories that admit an enhancement suitable for the construction of such a functor. We refer to [@PV] for a detailed discussion of the use of $f$-categories for this purpose (following Beilinson's appendix in [@Bei2]) or to [@Vi] for a discussion of the (significant) advantages of working in a triangulated category that lies at the base of a stable derivator (and, in fact, $\mathcal{D}(\mathcal{G})$ is one such category). ## The commutative noetherian case We now turn to the case of $\mathsf{D}(R)$ for a commutative noetherian ring $R$. Looking at the diagram of Theorem [Theorem 6](#diagram){reference-type="ref" reference="diagram"}, we consider the subset $\mathsf{Cotilt}^*(\mathsf{D}(R))$ of $\mathsf{Cosilt}^*(\mathsf{D}(R))$. At present, we do not know how to characterise the sp-filtrations or the bounded homomorphisms of posets that are associated to $\mathsf{Cotilt}^*(\mathsf{D}(R))$. Some progress in that direction is made in [@HNS]. Nevertheless, what we can say (and shall, in the following theorem) is that the operation $\alpha\circ \mathsf{t\mbox{-}Lift}$ which associates a bounded (pure-injective) cosilting complex to an intermediate t-structure in $\mathsf{D}^b(R)$ factors via $\mathsf{Cotilt}^*(\mathsf{D}(R))$. In other words, this means that every t-structure in the image of $\mathsf{t\mbox{-}Lift}$ induces a derived equivalence. Moreover, for these t-structures we can parametrise precisely the torsion pairs that give rise to right mutations.
Recall the assignment $$\nonumber \hat{\mathfrak{s}}\colon \mathsf{Tors}_{\mathcal{H}_{\theta(C)}}[\mathsf{Her}]\longrightarrow 2^{\mathsf{Spec}(R)}$$ introduced in Lemma [\[basics on support for hearts\]](#basics on support for hearts){reference-type="ref" reference="basics on support for hearts"} sending a hereditary torsion pair $\mathbf{t}=(\mathcal{T},\mathcal{F})$ in $\mathcal{H}_{\theta(C)}$ (with $C$ in $\mathsf{Cosilt}^b(\mathsf{D}(R))$) to $\mathsf{supp}(\mathcal{T})$. **Theorem 20**. *Let $R$ be a commutative noetherian ring, $\mathbb{S}$ an intermediate t-structure in $\mathsf{D}^b(R)$ and $\mathbb{T}:=\mathsf{t\mbox{-}Lift}(\mathbb{S})$. The following statements hold.* 1. *[@PaV Theorem 6.16] The t-structure $\mathbb{T}$ induces a derived equivalence.* 2. *[@PaV Corollary 6.18] The assignment $\hat{\mathfrak{s}}$ restricts to a bijection $$\nonumber \mathfrak{s}\colon \mathsf{Tors}_{\mathcal{H}_\mathbb{T}}[\mathsf{Her},\mathsf{FT}]\longrightarrow \mathscr{O}^H(\mathsf{Spec}(R)).$$* Note that part (2) of the theorem generalises the statement of Gabriel in [@G], reviewed in Example [Example 15](#standard){reference-type="ref" reference="standard"}. *Proof.* We provide an outline of the steps of the proof, pointing the reader to some of the main ingredients (for further details we refer to [@PaV]). 1. *$\mathbb{T}$ is an iterated right mutation of a shift of the standard t-structure, and each hereditary torsion pair of finite type involved in this iteration is in the image of the $\mathsf{Lift}$ assignment in the corresponding heart.* Recall from Lemma [\[basics on support for hearts\]](#basics on support for hearts){reference-type="ref" reference="basics on support for hearts"}(3) that $\mathsf{supp}^{-1}(V)$ is a hereditary torsion class in $\mathcal{H}_\mathbb{T}$, for any $V\subseteq \mathsf{Spec}(R)$ specialisation-closed.
Using the classification of compactly generated intermediate t-structures in $\mathsf{D}(R)$ in terms of sp-filtrations, we get a precise recipe for how to iteratively build $\mathbb{T}$ via HRS-tilts at hereditary torsion pairs (since each step of the filtration is specialisation-closed). Since at each step of the iteration we still have a compactly generated t-structure, it follows from [@SSV Proposition 6.1] that the hereditary torsion pairs we are tilting at are of finite type. Note that this argument, so far, applies to any intermediate compactly generated t-structure. It remains to see that each torsion pair in this iteration restricts to the subcategory of finitely presented objects in the corresponding heart. This can be proved inductively on the length of the (bounded) sp-filtration, by using the fact that $\mathbb{T}=\mathsf{t\mbox{-}Lift}(\mathbb{S})$. 2. *Assertion (1) holds.* We again use an inductive argument on the length of the bounded sp-filtration associated to $\mathbb{T}$. The key for the inductive step is the criterion established in [@CHZ] for when an HRS-tilt at a torsion pair induces a derived equivalence. This criterion turns out to be verified precisely because the iterated right mutations occur at torsion pairs that are lifted from torsion pairs in the subcategory of finitely presented objects, as discussed in Step 1. Consider now the diagram induced by the injective map $\hat{\mathfrak{s}}$ (see Lemma [\[basics on support for hearts\]](#basics on support for hearts){reference-type="ref" reference="basics on support for hearts"}), $$\nonumber \xymatrix{\mathsf{Tors}_{\mathcal{H}_\mathbb{T}}[\mathsf{Her},\mathsf{FT}]\ar@{^{(}->}[d]_{\mathsf{inc}}\ar@{-->}[rr]^{\mathfrak{s}}&&\mathscr{O}^H(\mathsf{Spec}(R))\ar@{^{(}->}[d]_{\mathsf{inc}}\\ \mathsf{Tors}_{\mathcal{H}_\mathbb{T}}[\mathsf{Her}]\ar[rr]^{\hat{\mathfrak{s}}}&&2^{\mathsf{Spec}(R)}}$$ where $\mathsf{inc}$ denotes inclusion maps.
As recalled above, $\mathsf{supp}^{-1}(V)\cap \mathcal{H}_\mathbb{T}$ is a hereditary torsion class in $\mathcal{H}_\mathbb{T}$, for any $V$ in $\mathscr{O}^H(\mathsf{Spec}(R))$. To prove (2) we want the restriction $\mathfrak{s}$ of $\hat{\mathfrak{s}}$ to be well-defined and surjective. 3. *$\hat{\mathfrak{s}}(\mathsf{Tors}_{\mathcal{H}_\mathbb{T}}[\mathsf{Her},\mathsf{FT}])\subseteq \mathscr{O}^H(\mathsf{Spec}(R))$, and $\mathfrak{s}$ is well-defined.* If $\mathbf{t}:=(\mathcal{T},\mathcal{F})$ lies in $\mathsf{Tors}_{\mathcal{H}_\mathbb{T}}[\mathsf{Her},\mathsf{FT}]$, since $\mathcal{H}_\mathbb{T}$ is a locally coherent category with $\mathsf{fp}(\mathcal{H}_\mathbb{T})=\mathcal{H}_\mathbb{S}$ (see Remark [Remark 7](#loc coherent heart){reference-type="ref" reference="loc coherent heart"}), we can argue that $\mathsf{supp}(\mathcal{T})=\mathsf{supp}(\mathcal{T}\cap \mathcal{H}_\mathbb{S})$. Then we observe that $\mathcal{T}$ and $\mathcal{T}\cap \mathcal{H}_\mathbb{S}$ generate the same localising subcategory in $\mathsf{D}(R)$ and that such a localising subcategory must be compactly generated by [@AJS Proposition 3.10]. The result then follows from Neeman's classification of smashing subcategories ([@N]). 4. *$\mathfrak{s}$ is surjective.* It remains to argue that given a specialisation-closed subset $V$ of $\mathsf{Spec}(R)$, the hereditary torsion class $\mathsf{supp}^{-1}(V)\cap \mathcal{H}_\mathbb{T}$ in $\mathcal{H}_\mathbb{T}$ is indeed of finite type. For this purpose we again use the classification of smashing subcategories in terms of specialisation-closed subsets proved by Neeman in [@N], and we are able to draw the desired conclusion using the fact that $\mathbb{T}$ induces a derived equivalence by Statement (1). ◻ *Remark 21*. Let $\mathbb{S}$ be an intermediate t-structure in $\mathsf{D}^b(R)$.
If $\psi\colon \mathsf{Spec}(R)\longrightarrow \mathbb{Z}$ and $\varphi \colon \mathbb{Z}\longrightarrow 2^{\mathsf{Spec}(R)}$ are, respectively, the associated bounded homomorphism of posets and bounded sp-filtration for $\mathbb{T}:=\mathsf{t\mbox{-}Lift}(\mathbb{S})$, then the mutability condition on a subset $W$ of $\mathsf{Spec}(R)$ is made explicit in part (2) of the theorem: $\psi$ (or $\varphi$) admits a right mutation at $W$ if and only if $W$ lies in $\mathscr{O}^H(\mathsf{Spec}(R))$. The fact that $\mathbb{T}=\mathsf{t\mbox{-}Lift}(\mathbb{S})$ induces a derived equivalence ($\mathbb{S}$ being an intermediate t-structure in $\mathsf{D}^b(R)$) yields two triangle equivalences, namely $\mathsf{D}^b(\mathcal{H}_\mathbb{T})\longrightarrow \mathsf{D}^b(\mathsf{Mod}(R))$ and its extension $\mathsf{D}(\mathcal{H}_\mathbb{T})\longrightarrow \mathsf{D}(R)$ (see [@Vi] for the construction of this functor, an *unbounded realisation functor*). It is natural to ask whether these functors restrict to a triangle equivalence at the level of the bounded derived category of $\mathcal{H}_\mathbb{S}$ and the bounded derived category of finitely generated $R$-modules. The answer is positive, and the key to this restriction is to identify $\mathsf{D}^b(\mathcal{H}_\mathbb{S})=\mathsf{D}^b(\mathsf{fp}(\mathcal{H}_\mathbb{T}))$ intrinsically inside $\mathsf{D}^b(\mathcal{H}_\mathbb{T})$, as was done in [@HP Lemma 3.11] (indeed $\mathsf{D}^b(\mathcal{H}_\mathbb{S})$ turns out to be the subcategory of compact objects in $\mathsf{D}^b(\mathcal{H}_\mathbb{T})$). **Corollary 22**. *Let $R$ be a commutative noetherian ring and $\mathbb{S}$ an intermediate t-structure in $\mathsf{D}^b(R)$ with heart $\mathcal{H}_\mathbb{S}$. Then* 1. *[@HP Theorem 3.10] there is an equivalence $\mathsf{D}^b(\mathcal{H}_\mathbb{S})\longrightarrow \mathsf{D}^b(R)$;* 2.
*[@PaV Proposition 6.20] the assignment of support induces a bijection between Serre subcategories of $\mathcal{H}_\mathbb{S}$ and specialisation-closed subsets of $\mathsf{Spec}(R)$;* 3. *$\mathsf{t\mbox{-}Lift}(\mathbb{S})$ is an iterated right mutation of a shift of the standard t-structure.* *Proof.* Note that (3) follows from Step 1 in the previous proof. We comment briefly on (2). Since we have $\mathcal{H}_\mathbb{S}=\mathsf{fp}(\mathcal{H}_\mathbb{T})$, where $\mathbb{T}:=\mathsf{t\mbox{-}Lift}(\mathbb{S})$, statement (2) follows from Statement (2) of the previous theorem combined with the fact that in a locally coherent Grothendieck category there is always a bijection between Serre subcategories of $\mathcal{H}_\mathbb{S}$ and hereditary torsion classes of finite type in $\mathcal{H}_\mathbb{T}$ (see [@Herzog] and [@KraLoc]). ◻ # Mutations for bounded t-functions {#comm mutation} In the last section we saw that we have a combinatorial theory of right mutations ready to be applied to the bounded homomorphisms of posets $\mathsf{Spec}(R)\longrightarrow \mathbb{Z}$ which are associated to (lifts of) intermediate t-structures in $\mathsf{D}^b(R)$. **Definition 23**. A function $\psi\colon \mathsf{Spec}(R)\longrightarrow \overline{\mathbb{Z}}$ is said to be a **t-function** if for a minimal inclusion of prime ideals $\mathfrak{p}\subsetneq \mathfrak{q}$ we have $\psi(\mathfrak{p})\leq \psi(\mathfrak{q})\leq \psi(\mathfrak{p})+1$. A t-function $\psi$ is said to be **bounded** if the image of $\psi$ is contained in an integer interval $[-n,n]$ for some integer $n$. We denote the set of bounded t-functions $\mathsf{Spec}(R)\longrightarrow \mathbb{Z}$ by $\mathsf{t\mbox{-}Fun}^b(R)$. Note that, in particular, $\mathsf{t\mbox{-}Fun}^b(R)\subseteq \mathsf{Hom}_\mathsf{pos}^b(\mathsf{Spec}(R),\mathbb{Z})$ and it turns out that they often correspond precisely to the t-structures mentioned above.
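For readers who like to experiment, the defining inequality of a t-function and the right-mutation formula $\mu^-_W(\psi)$ of Theorem 16 can be checked mechanically over a truncation of $\mathsf{Spec}(\mathbb{Z})$, where the only minimal inclusions of primes are $(0)\subsetneq (p)$. The following Python sketch is ours (hypothetical helper names), not part of the formal development.

```python
# Minimal sketch over a truncated Spec(Z): psi is a t-function iff
# psi(0) <= psi(p) <= psi(0) + 1 for every prime p. Helper names are ours.
PRIMES = (2, 3, 5, 7)
MIN_INCLUSIONS = [(0, p) for p in PRIMES]   # the minimal inclusions (0) ⊊ (p)

def is_t_function(psi):
    return all(psi[p] <= psi[q] <= psi[p] + 1 for p, q in MIN_INCLUSIONS)

def right_mutation(psi, W):
    """mu^-_W(psi): add 1 on the (specialisation-closed) subset W."""
    return {p: v + (1 if p in W else 0) for p, v in psi.items()}

# psi corresponds to the pair (n, U) = (0, {2, 3}): value n+1 on U, n elsewhere.
psi = {0: 0, 2: 1, 3: 1, 5: 0, 7: 0}
assert is_t_function(psi)

# Mutating at W = {5, 7}, disjoint from U, yields the t-function (0, U ∪ W)...
assert is_t_function(right_mutation(psi, {5, 7}))
# ...but mutating at W = {2}, which meets U, creates the value n+2 at (2),
# so the result is a bounded poset homomorphism that is no longer a t-function.
assert not is_t_function(right_mutation(psi, {2}))
```

This is exactly the phenomenon discussed below: the mutation of a bounded t-function is always a bounded homomorphism of posets, but it fails to be a t-function as soon as the mutation creates a jump of two.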
Note also that, for a commutative noetherian ring of finite Krull dimension, any integer-valued t-function $\mathsf{Spec}(R)\longrightarrow \mathbb{Z}$ is bounded: once the values of the (finitely many) minimal prime ideals are chosen (and these choices are not in general independent of each other), the Krull dimension of $R$ determines an interval in which the values of a t-function can lie. The following theorem states precisely the correspondence between t-functions and t-structures for a large class of rings: CM-excellent rings of finite Krull dimension. Since the purpose of this article is to illustrate and simplify cosilting mutation for a large class of examples, it will suffice to say that any commutative noetherian ring of finite Krull dimension that is a quotient of a regular (or, more generally, of a Cohen-Macaulay) ring is CM-excellent (see [@Tak]). We refer to [@Sta Theorem 6.12], as well as [@AJS Theorem 6.9], for some predecessors of this theorem. **Theorem 24**. *[@Tak Theorem 5.5] Let $R$ be a commutative noetherian ring, and suppose $R$ is CM-excellent of finite Krull dimension. The assignment $\mathsf{f}\circ \beta\circ \mathsf{t\mbox{-}Lift}$ determines a bijection between $\mathsf{t\mbox{-}str}_{\mathsf{D}^b(R)}[\mathsf{Int}]$ and $\mathsf{t\mbox{-}Fun}^b(R)$.* *Remark 25*. Note that, in comparison to the reference cited, we have added the words *intermediate* and *bounded* to the t-structure side and to the t-function side, respectively. This restriction of the cited bijection is easy to observe, and it can also be seen in Theorem [Theorem 6](#diagram){reference-type="ref" reference="diagram"}, where the correspondence between t-structures and sp-filtrations or homomorphisms of posets associates the intermediate condition (on t-structures) to a boundedness condition (on morphisms of posets). As a consequence of this theorem and of the last section, we have a complete description of right mutations for t-functions as follows.
Let $R$ be a commutative noetherian ring of finite Krull dimension which is CM-excellent and let $\psi\colon\mathsf{Spec}(R)\longrightarrow \mathbb{Z}$ be a t-function. Consider the following procedure that, out of an input subject to a mutability condition, produces an output. - **Input:** Consider a subset $W$ of $\mathsf{Spec}(R)$. - **Mutability condition:** We say that $\psi$ admits a right mutation with respect to $W$ if $W$ is specialisation-closed. - **Output:** Consider the bounded homomorphism of posets $\mu^-_W(\psi)\colon \mathsf{Spec}(R)\longrightarrow \mathbb{Z}$ defined by $$\nonumber \mu^-_W(\psi)(\mathfrak{p})=\begin{cases}\psi(\mathfrak{p})+1&{\rm if\ \mathfrak{p}\in W}\\ \psi(\mathfrak{p})&{\rm if\ \mathfrak{p}\notin W}\end{cases}$$ We saw in Theorems [Theorem 16](#mutation of combinatorial data){reference-type="ref" reference="mutation of combinatorial data"} and [Theorem 20](#der equiv and mut condition){reference-type="ref" reference="der equiv and mut condition"} that this does indeed correspond to a mutation of the t-structure associated to $\psi$. Note, nevertheless, that the mutation of a t-function is not necessarily a t-function. This reflects the fact that not all intermediate cosilting t-structures restrict to $\mathsf{D}^b(R)$, despite the fact that all of them are iterated mutations of a shift of the standard t-structure (see Theorem [Theorem 20](#der equiv and mut condition){reference-type="ref" reference="der equiv and mut condition"}). We end the paper by exploring this theory of right mutations in $\mathsf{D}^b(\mathbb{Z})$. Note that $\mathbb{Z}$ is regular (thus CM-excellent) and has finite Krull dimension (equal to $1$). **Example 26**. Let $\mathbb{P}$ denote the set of prime natural numbers, and recall that, as sets, $\mathsf{Spec}(\mathbb{Z})=\{0\}\cup \mathbb{P}$ (identifying each such integer with the ideal generated by it).
Observe that a bounded t-function $\psi\colon \mathsf{Spec}(\mathbb{Z})\longrightarrow \mathbb{Z}$ is completely determined by two pieces of information: $\psi(0)$ and the subset $U:=\psi^{-1}(\psi(0)+1)$ of $\mathbb{P}$. It is clear (by definition of a t-function) that $\psi(q)=\psi(0)$ whenever $q$ is in $\mathbb{P}\setminus U$. Hence we have a bijection $$\nonumber \mathsf{t\mbox{-}Fun}^b(\mathbb{Z})\longrightarrow \mathbb{Z}\times 2^{\mathbb{P}}$$ and we will identify a bounded t-function $\psi$ with the pair $(\psi(0),\psi^{-1}(\psi(0)+1))$. Note now that the specialisation-closed subsets of $\mathsf{Spec}(\mathbb{Z})$ are $$\nonumber \mathscr{O}^H(\mathsf{Spec}(\mathbb{Z}))=\{\mathsf{Spec}(\mathbb{Z})\}\cup 2^{\mathbb{P}}.$$ We compute all right mutations of a bounded t-function $(n,U)$ (with $n$ in $\mathbb{Z}$ and $U$ a subset of $\mathbb{P}$) with respect to a specialisation-closed subset $W$ of $\mathsf{Spec}(\mathbb{Z})$. First observe that $\mu_{\mathsf{Spec}(\mathbb{Z})}^-(n,U)=(n+1,U)$ since we change all the values of the original t-function. Now, if $W$ is a subset of $\mathbb{P}$, then the right mutation of $(n,U)$ with respect to $W$ gives us the following bounded homomorphism of posets: $$\nonumber \mu^-_W(n,U)(p)=\begin{cases} n & {\rm if\ } p\notin W\cup U,\\ n+1 & {\rm if\ } p\in (W\setminus U)\cup (U\setminus W),\\ n+2 & {\rm if\ } p\in W\cap U,\end{cases} \qquad p\in\{0\}\cup \mathbb{P}.$$ From this result, it is worth observing that - If $W=\emptyset$, then $\mu^-_\emptyset(n,U)=(n,U)$; - If $W\cap U=\emptyset$, then $\mu^-_W(n,U)=(n,U\cup W)$; - If $W\cap U\neq \emptyset$, then $\mu^-_W(n,U)$ is no longer a t-function. [^1] , *$\tau$-tilting theory*, Compos. Math. **150** (2014), 415--452. T. Aihara, O. Iyama, *Silting mutation in triangulated categories*, J. Lond. Math. Soc. **85** (2012), no. 3, 633--668. L. Alonso Tarrío, A. Jeremías López and M. Saorín, *Compactly generated t-structures on the derived category of a noetherian ring*, J. Algebra **324** (2010), 313--346. L.
Alonso Tarrío, A. Jeremías López and M. Souto Salorio, *Construction of t-structures and equivalences of derived categories*, Trans. Amer. Math. Soc. **355** (2003), 2523--2543. , *Silting modules over commutative rings*, Int. Math. Res. Not. IMRN **2017(13)** (2017), 4131--4151. L. Angeleri Hügel, R. Laking and F. Sentieri, *Mutation of torsion pairs for finite-dimensional algebras*, in preparation. L. Angeleri Hügel, R. Laking, J. Šťovíček and J. Vitória, *Mutations and torsion pairs*, preprint (2022), [arXiv:2201.02147](https://arxiv.org/abs/2201.02147). L. Angeleri Hügel, F. Marks and J. Vitória, *Torsion pairs in silting theory*, Pacific J. Math. **291** (2017), 257--278. , *Cotilting modules are pure-injective*, Proc. Amer. Math. Soc. **131** (2003), no. 12, 3665--3672. A. A. Beilinson, *Coherent sheaves on $\mathbb{P}^n$ and problems of linear algebra*, Func. Anal. Appl. **12** (1978), 214--216. A. A. Beilinson, *On the derived category of perverse sheaves*, In: K-theory, arithmetic and geometry (Moscow, 1984--1986), Lecture Notes in Math., **1289**, Springer, Berlin (1987). A. Beilinson, J. Bernstein, P. Deligne and O. Gabber, *Analyse et topologie sur les espaces singuliers*, Astérisque, vol. **100** (1982), Soc. Math. France. A. I. Bondal, *Representations of associative algebras and coherent sheaves* (Russian), Izv. Akad. Nauk SSSR Ser. Mat. **53**(1) (1989), 25--44; English translation, Math. USSR-Izv. **34**(1) (1990), 23--42. , *t-structures on some local Calabi-Yau varieties*, J. Algebra **289** (2005), 453--483. A. Buan, B. R. Marsh, I. Reiten, M. Reineke and G. Todorov, *Tilting theory and cluster combinatorics*, Adv. Math. **204** (2006), 572--618. X.-W. Chen, Z. Han and Y. Zhou, *Derived equivalence via HRS-tilting*, Adv. Math. **354** (2019). , *Locally finitely presented additive categories*, Comm. Algebra **22** (1994), 1641--1674. P. Gabriel, *Des catégories abéliennes*, Bull. Soc. Math. France **90** (1962), 323--448. D. Happel, I.
Reiten, S.Smalø, *Tilting in abelian categories and quasitilted algebras*, Mem. Amer. Math. Soc. **120** (1996), no. 575, viii+ 88. , *The Ziegler spectrum of a locally coherent Grothendieck category*, Proc. Lond. Math. Soc. **74** (1997), 503--558. M. Hrbek and T. Nakamura, *Telescope conjecture for homotopically smashing t-structures over commutative noetherian rings*, J. Pure Appl. Algebra **225**(4) (2021). and J. Šťovı́ček, *Tilting complexes and codimension functions over commutative noetherian rings*, preprint (2022), arXiv:2207.01309. and S. Pavon, *Singular equivalences to locally coherent hearts of commutative noetherian rings*, J. Algebra **632** (2023), 117--153. , *The spectrum of a locally coherent category*, J. Pure Appl. Algebra **114** (1997), 259--271. H. Krause, *Smashing subcategories and the telescope conjecture---an algebraic approach*, Invent. Math. **139** (2000), no. 1, 99--133. R. Laking, *Purity in compactly generated derivators and $t$-structures with Grothendieck hearts*, Math. Z. **295** (2020), 1615--1641. R. Laking, *Bricks and mutation*, in preparation. [F. Marks, J. Vitória]{.smallcaps}, *Silting and cosilting classes in derived categories*, J. Algebra **501** (2018), 526-544. [F. Marks]{.smallcaps} and [A. Zvonareva]{.smallcaps}, *Lifting and restricting t-structures*, preprint (2021), Bull. Lond. Math. Soc. 55 (2) (2023), 640--657. A. Neeman, *The chromatic tower for $\mathsf{D}(R)$*, Topology **31** (1992), 519--532. , *Silting theory in triangulated categories with coproducts*, J. Pure Appl. Algebra **223** (2019), 2273--2319. and M. Saorı́n, *Direct limits in the heart of a $t$-structure: the case of a torsion pair*, J. Pure Appl. Algebra **219** (9) (2015), 4117--4143. C. E. Parra and M. Saorı́n, *Addendum to "Direct limits in the heart of a $t$-structure: the case of a torsion pair"* \[J. Pure Appl. Algebra **219** (9) (2015), 4117--4143\], J. Pure Appl. Algebra **220** (6) (2016), 2467--2469. S. Pavon and J. 
Vitória, *Hearts for commutative noetherian rings: torsion pairs and derived equivalences*, Doc. Math. **26** (2021), 829--871. C. Psaroudakis and J. Vitória, *Realisation functors in tilting theory*, Math. Z. **288** (3) (2018), 965--1028. J. Rickard, *Morita Theory for Derived Categories*, J. London Math. Soc.  **39** (1989), 436--456. M. Saorín and J. Šťovíček, *$t$-Structures with Grothendieck hearts via functor categories*, preprint (2020), arXiv:2003.01401. M. Saorín, J. Šťovíček and S.Virili, *$t$-structures on stable derivators and Grothendieck categories*, Adv. Math. **429** (2023), 109--139. D. Stanley, *Invariants of t-structures and classifications of nullity classes*, Adv. Math. **224** (2010), 2662--2689. , *All $n$-cotilting modules are pure-injective*, Proc. Amer. Math. Soc. **134**(7) (2006), 1891--1897. , *Faltings' annihilator theorem and t-structures of derived categories*, Math. Z. **304** (2023). , *Morita theory for stable derivators*, preprint (2018), arXiv:1807.01505 , *Stability conditions, torsion theories and tilting*, J. Lond. Math. Soc. (2) **82** (2010), no. 3, 663--682. [^1]: **Acknowledgements:** The author thanks Lidia Angeleri Hügel, Michal Hrbek, Rosanna Laking, Sergio Pavon and Jan Šťovíček for inspiring and clarifying discussions, helpful insights and comments on earlier versions of this paper, the Scientific Committee of the International Conference on Representations of Algebras (ICRA) 2022 for the opportunity of presenting this work at the conference, and the editors of this volume for their invitation to write this article. The author acknowledges financial support from *PRIN - Progetto di Rilevante Interesse Nazionale* through the project *Categories, Algebras: Ring Theoretical and Homological Approaches (CARTHA)* that allowed him to attend ICRA 2022. 
This work was partially supported by the Department of Mathematics of the University of Padova via its *BIRD - Budget Integrato per la Ricerca dei Dipartimenti 2022*, through the project *Representations of quivers with commutative coefficients*.
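The right mutation rule for bounded t-functions described above can be sketched in code. This is our own representation, not from the paper: a bounded t-function is the pair `(n, U)` with `U` a finite set of primes, and a `whole_spectrum` flag stands for the case $W=\mathsf{Spec}(\mathbb{Z})$.

```python
def right_mutate(n, U, W, whole_spectrum=False):
    """Right mutation of the bounded t-function (n, U) at a
    specialisation-closed subset W of Spec(Z).

    W is a set of primes (or ignored when whole_spectrum=True).
    Returns the mutated pair, or None when the result is no longer
    a t-function (the case where W meets U)."""
    if whole_spectrum:
        # mu^-_{Spec(Z)} shifts every value of the t-function by one
        return (n + 1, set(U))
    if U & W:
        # some prime would take the value n + 2: not a t-function
        return None
    # W and U disjoint: the subset U is enlarged by W
    return (n, U | W)

# the three observations from the text
assert right_mutate(0, {2, 3}, set()) == (0, {2, 3})
assert right_mutate(0, {2, 3}, {5, 7}) == (0, {2, 3, 5, 7})
assert right_mutate(0, {2, 3}, {3, 11}) is None
```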
--- author: - | Hazhe Ye, Yingzhi Tian[^1]\ College of Mathematics and System Sciences, Xinjiang University, Urumqi, Xinjiang 830046, China title: "Characterizing the forbidden pairs for graphs to be super-edge-connected[^2]" --- Let $\mathcal{H}$ be a set of given connected graphs. A graph $G$ is said to be $\mathcal{H}$-free if $G$ contains no $H$ as an induced subgraph for any $H\in \mathcal{H}$. The graph $G$ is super-edge-connected if each minimum edge-cut isolates a vertex in $G$. In this paper, except for some special graphs, we characterize all forbidden subgraph sets $\mathcal{H}$ such that every $\mathcal{H}$-free graph is super-edge-connected, for $|\mathcal{H}|=1$ and $2$. Forbidden subgraphs; Super-edge-connectedness; Edge-connectivity # Introduction In this paper, we consider only finite simple graphs. For notation and graph-theoretical terminology not defined here, we follow [@Bondy]. Let $G=(V,E)$ be a finite simple graph, where $V=V(G)$ is the vertex set and $E=E(G)$ is the edge set. For a vertex $u\in V(G)$, the $neighborhood$ of $u$ in $G$ is $N_{G}(u)=\{v\in V(G)|\ v$ is adjacent to $u \}$, and the $degree$ of $u$ in $G$ is $d_G(u)=|N_{G}(u)|$. The $minimum$ $degree$ and $maximum$ $degree$ of $G$ are denoted by $\delta(G)$ and $\triangle(G)$, respectively. For a vertex set $A\subseteq V(G)$, the $induced$ $subgraph$ of $A$ in $G$, denoted by $G[A]$, is the graph with vertex set $A$ in which two vertices $u$ and $v$ of $A$ are adjacent if and only if they are adjacent in $G$. The $distance$ $d_{G}(u,v)$ between two vertices $u,v\in V(G)$ is the length of a shortest path between them in the graph $G$. An edge set $F\subseteq E(G)$ is called an $edge$-$cut$ if $G-F$ is disconnected. The $edge$-$connectivity$ $\kappa^{\prime}(G)$ of a graph $G$ is defined as the minimum cardinality of an edge-cut over all edge-cuts of $G$. The $vertex$-$connectivity$ $\kappa(G)$ of $G$ can be defined similarly. The graph $G$ is connected if $\kappa^{\prime}(G)\geq1$.
If $G$ is not connected, then each maximal connected subgraph is called a $component$ of $G$. It is well known that $\kappa (G)\leq\kappa^{\prime}(G)\leq\delta(G)$. If $\kappa^{\prime}(G)=\delta(G)$, then the graph $G$ is said to be $maximally$ $edge$-$connected$. In 1981, Bauer, Suffel, Boesch, and Tindell [@Bauer] proposed the concept of super-edge-connectedness. A graph $G$ is said to be $super$-$edge$-$connected$ if each minimum edge-cut isolates a vertex of $G$, that is, each minimum edge-cut of $G$ is the set of edges incident with some vertex in $G$. Clearly, if $G$ is super-edge-connected, then it is also maximally edge-connected; the converse is not true: for example, the cycle $C_n$ ($n\geq4$) is maximally edge-connected but not super-edge-connected. There are many sufficient conditions for a graph to be maximally edge-connected, such as the minimum degree condition [@Chartrand], the Ore condition [@Lesniak] and the diameter condition [@Plesnik]. Similar sufficient conditions for a digraph to be super-edge-connected can be found in [@Fiol]. For more results, please refer to the survey paper [@Hellwig] and the references therein. Let $\mathcal{H}$ be a set of given connected graphs. A graph $G$ is said to be $\mathcal{H}$-$free$ if there is no induced subgraph in $G$ isomorphic to some graph $H\in \mathcal{H}$. Each graph $H$ in $\mathcal{H}$ is called a $forbidden$ $subgraph$. If $|\mathcal{H}|=2$, then the two graphs in $\mathcal{H}$ are called a $forbidden$ $pair$. For two sets of connected graphs $\mathcal{H}_{1}$ and $\mathcal{H}_{2}$, if for each graph $H_{2}$ in $\mathcal{H}_{2}$ there exists a graph $H_{1}$ in $\mathcal{H}_{1}$ such that $H_{1}$ is an induced subgraph of $H_{2}$, then we denote this relation by $\mathcal{H}_{1}\preceq \mathcal{H}_{2}$. Obviously, $\mathcal{H}_{1}\preceq \mathcal{H}_{2}$ implies that an $\mathcal{H}_{1}$-free graph is also $\mathcal{H}_{2}$-free. Figure 1 gives some classes of forbidden subgraphs used in this paper.
![Some classes of forbidden subgraphs.](fig1.pdf "fig:"){#1 width="10cm"}\ In [@Faudree], Faudree and Gould determined the forbidden pairs for hamiltonicity of 2-connected graphs, except for a finite number of exceptions. Wang, Tsuchiya and Xiong [@Wang] characterized all the forbidden pairs $R,S$ such that every connected $\{R,S\}$-free graph $G$ has $\kappa(G)=\kappa^{\prime}(G)$. Recently, Du, Huang and Xiong [@Du] characterized all the forbidden pairs $R,S$ such that every connected $\{R,S\}$-free graph $G$ is maximally edge-connected. **Theorem 1**. *([@Wang]) Let $S$ be a connected graph. Then $G$ being a connected $S$-free graph implies $\kappa(G)=\kappa^{\prime}(G)$ if and only if $S$ is an induced subgraph of $P_3$.* **Theorem 2**. *([@Wang]) Let $\mathcal{H}$ be a set of two connected graphs such that each member of $\mathcal{H}$ is not an induced subgraph of $P_3$. Then $G$ being a connected $\mathcal{H}$-free graph implies $\kappa(G)=\kappa^{\prime}(G)$ if and only if $\mathcal{H} \preceq\left\{Z_1, P_5\right\}$, $\mathcal{H} \preceq\left\{Z_1, K_{1,4}\right\}$, $\mathcal{H} \preceq\left\{Z_1, T_{1,1,2}\right\}$, $\mathcal{H} \preceq\left\{H_0, P_4\right\}$, or $\mathcal{H} \preceq\left\{ H_0,K_{1,3}\right\}$.* **Theorem 3**. *([@Du]) Let $S$ be a connected graph. Then $G$ being a connected $S$-free graph implies $\kappa^{\prime}(G)=\delta(G)$ if and only if $S$ is an induced subgraph of $P_4$.* **Theorem 4**. *([@Du]) Let $\mathcal{H}=\{R, S\}$ be a set of two connected graphs such that both $R$ and $S$ are not an induced subgraph of $P_4$. Then $G$ being a connected $\mathcal{H}$-free graph implies $\kappa^{\prime}(G)=\delta(G)$ if and only if $\mathcal{H} \preceq\left\{H_1, P_5\right\}$, $\mathcal{H} \preceq\left\{Z_2, P_6\right\}$, or $\mathcal{H} \preceq\left\{Z_2, T_{1,1,3}\right\}$.* Motivated by the results above, we will characterize the forbidden pairs for a graph to be super-edge-connected in this paper.
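Not part of the paper, but a useful sanity check: the definition of super-edge-connectedness can be tested by brute force on small graphs. The sketch below finds the edge-connectivity $\kappa^{\prime}(G)$ by exhaustive search and then tests whether every minimum edge-cut leaves a singleton component (equivalently, isolates a vertex). It confirms the earlier observation that $C_4$ is maximally edge-connected but not super-edge-connected, while the complete graph $K_4$ is super-edge-connected.

```python
from itertools import combinations

def components(vertices, edges):
    """Connected components of the graph (vertices, edges), by DFS."""
    adj = {v: set() for v in vertices}
    for a, b in edges:
        adj[a].add(b); adj[b].add(a)
    seen, comps = set(), []
    for s in vertices:
        if s in seen:
            continue
        stack, comp = [s], set()
        while stack:
            x = stack.pop()
            if x not in comp:
                comp.add(x)
                stack.extend(adj[x] - comp)
        seen |= comp
        comps.append(comp)
    return comps

def is_super_edge_connected(vertices, edges):
    """Brute-force test (exponential in |E|; small graphs only):
    find the edge-connectivity k, then check that every edge-cut of
    size k leaves a singleton component, i.e. isolates a vertex."""
    for k in range(1, len(edges) + 1):
        min_cuts = [F for F in combinations(edges, k)
                    if len(components(vertices,
                                      [e for e in edges if e not in F])) > 1]
        if min_cuts:  # k equals the edge-connectivity kappa'(G)
            return all(any(len(c) == 1 for c in
                           components(vertices,
                                      [e for e in edges if e not in F]))
                       for F in min_cuts)
    return False  # no edge-cut at all

# C_4 is maximally edge-connected but not super-edge-connected
assert not is_super_edge_connected([0, 1, 2, 3],
                                   [(0, 1), (1, 2), (2, 3), (3, 0)])
# complete graphs, e.g. K_4, are super-edge-connected
assert is_super_edge_connected([0, 1, 2, 3],
                               [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)])
```

For a minimum cut, a singleton component $\{w\}$ forces the cut to equal the star of $w$, since $|F|=\kappa^{\prime}(G)\leq\delta(G)\leq d_G(w)$, so the singleton test matches the definition.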
In the next section, the main results will be presented. The proof of the main theorem will be given in the last section. # Main results **Theorem 5**. *Let $S$ be a connected graph. Then $G$ being a connected $S$-free graph implies that $G$ is super-edge-connected if and only if $S$ is an induced subgraph of $P_3$.* *Proof.* Let $G$ be a connected $P_3$-free graph. Then $G$ is a complete graph, and thus $G$ is super-edge-connected. Conversely, let $S$ be a connected graph such that every connected $S$-free graph is super-edge-connected. Since $G_{i}$ in Figure 2 is not super-edge-connected for $i\in \{1, \cdots, 6\}$, we know that $G_{i}$ is not $S$-free and contains $S$ as an induced subgraph. Since the common induced subgraph of the graphs in ${G}_3$ and ${G}_6$ is a path and the longest induced path of the graphs in ${G}_1$ is $P_3$, the graph $S$ must be an induced subgraph of $P_3$. The proof is complete. ◻ The $Cartesian$ $product$ of two graphs $G_1$ and $G_2$, denoted by $G_1 \square G_2$, is defined on the vertex set $V(G_1) \times V(G_2)$, and $(x_1, y_1)(x_2, y_2)$ is an edge in $G_1 \square G_2$ if and only if one of the following is true: ($i$) $x_1=x_2$ and $y_1 y_2 \in E(G_2)$; ($ii$) $y_1=y_2$ and $x_1 x_2 \in E(G_1)$. In the following, we try to characterize the forbidden pairs for a graph to be super-edge-connected. By Theorem 5, a connected $P_3$-free graph is super-edge-connected. Thus for any connected graph $S$, we obtain that if $G$ is $\left\{P_{3} , S\right\}$-free, then $G$ is super-edge-connected. So we assume that both $R$ and $S$ are not an induced subgraph of $P_3$ in the following main theorem of this paper. **Theorem 6**. *Let $\mathcal{H}=\{R, S\}$ be a set of two connected graphs such that both $R$ and $S$ are not an induced subgraph of $P_3$.
Then $G$ being a connected $\mathcal{H}$-free graph implies $G$ is super-edge-connected if and only if (i) $G\ncong C_{4}$ and $\mathcal{H} \preceq\left\{H_0, P_4\right\}$, or (ii) $G\ncong P_{2}\Box P_{3}$, $G\ncong P_{n}, C_{n}\ (n\geq4)$ and $\mathcal{H} \preceq\left\{Z_1, T_{1,1,2}\right\}$.* # The Proof of Theorem 6 In Figure 2, we construct some classes of non-super-edge-connected graphs $G_{i}$ (where $i\in \{1, \cdots, 6\}$) on $n$ vertices, which will be used in the proof of the main result. To distinguish these graphs, we assume $n\geq8$. $C_n$ denotes the cycle on $n$ vertices and $K_n$ denotes the complete graph on $n$ vertices. ![Some classes of non-super-edge-connected graphs.](fig2.pdf "fig:"){#2 width="10cm"}\ *Proof.* **We first prove the necessity.** Assume $\mathcal{H}=\{R,S\}$ is a set of connected graphs such that both $R$ and $S$ are not an induced subgraph of $P_{3}$, and every connected $\mathcal{H}$-free graph is super-edge-connected. Let $G$ be a connected $\mathcal{H}$-free graph. Then $G\ncong P_2\Box P_3$ and $G\ncong P_n,C_n\ (n\geq4)$. Since $G_{i}$ in Figure 2 is not super-edge-connected, we obtain that $G_{i}$ is not $\mathcal{H}$-free and contains at least one of $R$ and $S$ as an induced subgraph for $i\in \{1, \cdots, 6\}$. **Claim 1.** Either $R$ or $S$ is an induced subgraph of $H_{0}$. By contradiction, assume that neither $R$ nor $S$ is an induced subgraph of $H_{0}$. Without loss of generality, assume that $R$ is an induced subgraph of $G_{1}$. Since $R$ is not an induced subgraph of $H_{0}$, we observe that $R$ must contain some $K_{t}$ with $t\geq4$ as an induced subgraph. For $i\in\{2,3,4,5,6\}$, $G_{i}$ is $R$-free, and hence contains $S$ as an induced subgraph. We observe that the common induced subgraph of the graphs in ${G}_3$ and ${G}_6$ is a path and the longest induced path of $G_{2}$ is $P_{3}$, so $S$ must be an induced subgraph of $P_{3}$, a contradiction. Claim 1 is thus proved.
By Claim 1, assume, without loss of generality, that $R$ is an induced subgraph of $H_{0}$. We distinguish two cases as follows. **Case 1.** $R$ is $H_{0}$. For $i\in\{3,4,5,6\}$, $G_{i}$ is $R$-free, and hence contains $S$ as an induced subgraph. We observe that the common induced subgraph of the graphs in ${G}_3$ and ${G}_6$ is a path and the longest induced path of $G_{4}$ is $P_{4}$, so $S$ must be an induced subgraph of $P_{4}$. So $\mathcal{H}=\{R,S\}\preceq\left\{H_{0},P_{4}\right\}$. **Case 2.** $R$ is an induced subgraph of $Z_{1}$. Since $R$ is not an induced subgraph of $P_3$, we get that $R$ must contain a triangle. For $i\in\{3,4,5\}$, $G_{i}$ is $R$-free, and hence contains $S$ as an induced subgraph. We observe that the maximal common induced subgraph of the graphs in ${G}_4$ and ${G}_5$ is a $T_{1,1,2}$, so $S$ must be an induced subgraph of $T_{1,1,2}$. So $\mathcal{H}=\{R,S\}\preceq\left\{Z_{1},T_{1,1,2}\right\}$. **Now we are going to prove the sufficiency.** By contradiction, assume $G$ is a connected $\mathcal{H}$-free graph, but $G$ is not super-edge-connected, where ($i$) $G\ncong C_{4}$ and $\mathcal{H} \preceq\left\{H_0, P_4\right\}$, or ($ii$) $G\ncong P_{2}\Box P_{3}$, $G\ncong P_{n}, C_{n}\ (n\geq4)$ and $\mathcal{H} \preceq\left\{Z_1, T_{1,1,2}\right\}$. Since $G$ is not super-edge-connected, there is a minimum edge-cut $F$ such that $G-F$ has no isolated vertices. Clearly, $G-F$ has only two components, say $X$ and $Y$. Let $X_1=V\left(X\right) \cap V(F)$ and $Y_1=V\left(Y\right) \cap V(F)$. Denote $X_2=V(X)\setminus X_1$ and $Y_2=V(Y)\setminus Y_1$. Note that $|V(X)|, |V(Y)|\geq2$ and $|X_1|, |Y_1|\leq |F|=\kappa^{\prime}(G)\leq\delta(G)$.
**Claim 2.** If $X_2=\emptyset$, then $X$ is a complete graph of order $\delta(G)$, each vertex in $X$ is incident with exactly one edge in $F$, and $|X|=|X_1|=|F|=\kappa^{\prime}(G)=\delta(G)$; similarly, if $Y_2=\emptyset$, then $Y$ is a complete graph of order $\delta(G)$, each vertex in $Y$ is incident with exactly one edge in $F$, and $|Y|=|Y_1|=|F|=\kappa^{\prime}(G)=\delta(G)$. By $\kappa^{\prime}(G)\leq\delta(G)$, $|X_1|\leq|V(X)|$ and $|X_1|\leq |F|=\kappa^{\prime}(G)$, we have the following inequalities. $$\begin{aligned} \left|E\left(X\right)\right| & =\frac{1}{2}\left(\sum_{x \in V\left(X\right)} d_G(x)-\kappa^{\prime}(G)\right) \\ & \geq \frac{1}{2}\left(\delta(G)\left|V\left(X\right)\right|-\kappa^{\prime}(G)\right) \\ & \geq \frac{1}{2}\left(\delta(G) |X_1|-\kappa^{\prime}(G)\right) \\ & \geq\frac{1}{2} \kappa^{\prime}(G)\left(|X_1|-1\right) \\ & \geq \frac{1}{2} |X_1|\left(|X_1|-1\right). \end{aligned}$$ If $X_2=\emptyset$, then $|V(X)|=|X_1|$ and $|E(X)|\leq\binom{|X_1|}{2}=\frac{1}{2}|X_1|\left(|X_1|-1\right)$, so all the inequalities above must be equalities. Thus we obtain that $X$ is a complete graph of order $\delta(G)$, each vertex in $X$ is incident with exactly one edge in $F$, and $|X|=|X_1|=|F|=\kappa^{\prime}(G)=\delta(G)$. By Claim 2, we consider two cases in the following. **Case 1.** At least one of $X_2$ and $Y_2$ is an empty set. Assume, without loss of generality, that $X_2=\emptyset$. Then, by Claim 2, $X$ is a complete graph of order $\delta(G)$, each vertex in $X$ is incident with exactly one edge in $F$, and $|X|=|X_1|=|F|=\kappa^{\prime}(G)=\delta(G)$. In addition, by $|V(X)|\geq2$, we have $\delta(G)\geq2$. **Subcase 1.1.** $G\ncong C_{4}$ and $\mathcal{H} \preceq\left\{H_0, P_4\right\}$. Suppose $Y_2=\emptyset$. Then $G$ is the union of two complete graphs on $\delta(G)$ vertices, together with the perfect matching $F$; that is, $G$ is isomorphic to the Cartesian product graph $K_{\delta(G)}\Box K_2$. If $\delta(G)\geq3$, then there exists an induced $P_{4}$, a contradiction.
Otherwise, if $\delta(G)=2$, then $G\cong C_4$, also a contradiction. Suppose $Y_2\neq\emptyset$. Then, since $Y$ is connected, there is a path $x_{1}x_{2}y_{1}y_{2}$, where $x_{1},$ $x_{2}\in V\left(X\right)$, $y_{1}\in Y_1$ and $y_{2}\in Y_2$. If $x_1y_1\notin E(G)$, then $x_{1}x_{2}y_{1}y_{2}$ is an induced path of $G$, a contradiction. Thus we assume $x_1y_1\in E(G)$. Since $d_{G}\left(y_{2}\right)\geq\delta(G)\geq2$, $y_2$ has a neighbor $y_3$ different from $y_1$. If $y_1y_3\notin E(G)$, then $G\left[\{x_1,y_1,y_2,y_3\}\right]\cong P_4$, a contradiction. Otherwise, if $y_1y_3\in E(G)$, then $G\left[\{x_1,x_2,y_1,y_2,y_3\}\right]\cong H_0$, also a contradiction. **Subcase 1.2.** $G\ncong P_{2}\Box P_{3}$, $G\ncong P_{n}, C_{n}\ (n\geq4)$ and $\mathcal{H} \preceq\left\{Z_1, T_{1,1,2}\right\}$. Clearly, $|X_1|=|F|\geq|Y_1|$. If $|X_1|>|Y_1|$, then some vertex in $Y_1$, say $y_1$, has at least two neighbors in $X_1$, say $x_1$ and $x_2$. Since $Y$ is connected, $y_1$ has at least one neighbor, say $y_2$, in $V(Y)$. Since $X$ is a complete graph and each vertex in $X$ is incident with exactly one edge in $F$, we know that $G\left[\{x_1,x_2,y_1,y_2\}\right]\cong Z_1$, a contradiction. Thus, we assume $|X_1|=|Y_1|$ in the following. Since $|X_1|=|Y_1|=|F|$, we know that each vertex in $X_1$ has only one neighbor in $Y_1$ and each vertex in $Y_1$ has only one neighbor in $X_1$. If $|X_1|=|Y_1|\geq3$, then the subgraph induced by any three vertices in $X_1$, say $x_1,x_2,x_3$, together with the neighbor of $x_1$ in $Y_1$, is isomorphic to $Z_{1}$, a contradiction. It remains to consider $|X_1|=|Y_1|=2$. Assume $X_1=\{x_1,x_2\}$, $Y_1=\{y_1,y_2\}$ and $F=\{x_1y_1,x_2y_2\}$. Suppose $y_1y_2\in E(G)$. Since $G$ is not a cycle, $y_1$ or $y_2$, say $y_2$, has a neighbor $y_3$ in $Y_2$. If $y_1y_3\in E(G)$, then $G[\{x_1,y_1,y_2,y_3\}]\cong Z_1$, a contradiction. So $y_1y_3\notin E(G)$. Since $\delta(G)\geq2$, $y_3$ has another neighbor $y_4$ different from $y_2$.
If $y_2y_4\in E(G)$, then $G[\{x_2,y_2,y_3,y_4\}]\cong Z_1$, a contradiction. So $y_2y_4\notin E(G)$. If $y_1y_4\notin E(G)$, then $G[\{x_2,y_1,y_2,y_3,y_4\}]\cong T_{1,1,2}$, a contradiction. So $y_1y_4\in E(G)$. If $V(G)=\{x_1,x_2,y_1,y_2,y_3,y_4\}$, then $G\cong P_{2}\Box P_{3}$, a contradiction. Thus, since $G$ is connected, there is a vertex $y_5$ adjacent to $y_1$, $y_2$, $y_3$, or $y_4$. We only consider the case $y_1y_5\in E(G)$ here; the others can be analysed similarly. If $y_4y_5\in E(G)$, then $G[\{x_1,y_1,y_4,y_5\}]\cong Z_1$, a contradiction. Otherwise, if $y_4y_5\notin E(G)$, then $G[\{x_1,x_2,y_1,y_4,y_5\}]\cong T_{1,1,2}$, also a contradiction. Suppose $y_1y_2\notin E(G)$. Since $Y$ is connected, there is a path $z_1z_2\cdots z_t$ connecting $y_1$ and $y_2$ in $Y$, where $z_1=y_1$, $z_t=y_2$ and $t\geq3$. Furthermore, we choose $z_1z_2\cdots z_t$ to be a shortest path connecting $y_1$ and $y_2$ in $Y$. Since $G$ is not a cycle, there is a vertex $w\in V(Y)\setminus\{z_1,z_2,\cdots,z_t\}$ such that $w$ is adjacent to some vertex in $\{z_1,z_2,\cdots,z_t\}$. Assume $wz_i\in E(G)$ and $wz_j\notin E(G)$ for $j<i$. Let $z_0=x_1$ and $z_{-1}=x_2$. If $wz_{i+1}\in E(G)$, then $G[\{w,z_{i-1},z_{i},z_{i+1}\}]\cong Z_1$, a contradiction. Otherwise, if $wz_{i+1}\notin E(G)$, then $G[\{w,z_{i-2},z_{i-1},z_{i},z_{i+1}\}]\cong T_{1,1,2}$, also a contradiction. **Case 2.** Neither $X_2$ nor $Y_2$ is an empty set. Since the distance between a vertex in $X_2$ and a vertex in $Y_2$ is at least three, $G$ must contain an induced $P_4$. So the case $G\ncong C_{4}$ and $\mathcal{H} \preceq\left\{H_0, P_4\right\}$ cannot occur. Thus we only consider the case that $G\ncong P_{2}\Box P_{3}$, $G\ncong P_{n}, C_{n}\ (n\geq4)$ and $\mathcal{H} \preceq\left\{Z_1, T_{1,1,2}\right\}$ in the following. We distinguish two subcases as follows. **Subcase 2.1.** $G$ contains a path $x_{2}x_{1}y_{1}y_{2}$, where $x_i\in X_i$ and $y_i\in Y_i$ for $i=1,2$.
**Subcase 2.1.1.** $\left|N_{G}\left(x_{2}\right)\cap X_2\right|\geq2$ or $\left|N_{G}\left(y_{2}\right)\cap Y_2\right|\geq2$. Assume $\left|N_{G}\left(x_{2}\right)\cap X_2\right|\geq2$. Then there exist two vertices $x_{2}^{\prime}$, $x_{2}^{\prime\prime}\in X_2$ such that $x_{2}x_{2}^{\prime}$, $x_{2}x_{2}^{\prime\prime}\in E(G)$. If $x_{2}^{\prime}x_{2}^{\prime\prime}$, $x_{2}^{\prime}x_{1}$, $x_{2}^{\prime\prime}x_{1}\notin E(G)$, then $G\left[\{x_1,x_2,x_2^{\prime},x_2^{\prime\prime},y_1\}\right]\cong T_{1,1,2}$, a contradiction. If at least one of these three edges exists, then $G$ contains an induced subgraph isomorphic to $Z_1$, also a contradiction. For example, if $x_{2}^{\prime}x_{1}\in E(G)$, then $G\left[\{x_1,x_2,x_2^{\prime},y_1\}\right]\cong Z_1$. The case $\left|N_{G}\left(y_{2}\right)\cap Y_2\right|\geq2$ can be proved similarly. **Subcase 2.1.2.** $\left|N_{G}\left(x_{2}\right)\cap X_2\right|=1$ and $\left|N_{G}\left(y_{2}\right)\cap Y_2\right|=1$. Let $x_3\in N_{G}\left(x_{2}\right)\cap X_2$ and $y_3\in N_{G}\left(y_{2}\right)\cap Y_2$. If $x_1x_3\in E(G)$ or $y_1y_3\in E(G)$, then $G$ contains an induced $Z_{1}$. For example, if $x_1x_3\in E(G)$, then $G\left[\{x_1,x_2,x_3,y_1\}\right]\cong Z_1$. Hence, $x_1x_3\notin E(G)$ and $y_1y_3\notin E(G)$. Suppose $d_{G}\left(x_{1}\right)\geq3$. Let $w\in N_{G}\left(x_{1}\right)\setminus \{x_2,y_1\}$. If $w\in X_2$, then $G\left[\{x_1,x_2,y_1,w\}\right]\cong Z_1$ when $wx_2\in E(G)$, and $G\left[\{x_1,x_2,y_1,y_2,w\}\right]\cong T_{1,1,2}$ when $wx_2\notin E(G)$, contradicting the assumption. By a similar argument, we can obtain contradictions for $w\in X_1$ and $w\in Y_1$. Thus $d_{G}\left(x_{1}\right)=2$. Similarly, $d_{G}\left(y_{1}\right)=2$. Since $G\ncong P_{n}, C_{n}\ (n\geq4)$, there exists a vertex of degree at least three in $G$. Assume, without loss of generality, that $V(X)$ has a vertex of degree at least three.
Choose such a vertex $z_1$ whose distance to $x_1$ is minimum in the component $X$. Let $z_1z_2\cdots z_r$ (with $z_r=x_1$) be the shortest path between $z_1$ and $x_1$. Then $d_G(z_i)=2$ for $2\leq i\leq r$. Let $z_{1}^{\prime}$, $z_{1}^{\prime\prime}\in N_{G}\left(z_{1}\right)\setminus \{z_2\}$. Then we can find an induced $Z_1$ or $T_{1,1,2}$ according to whether $z_{1}^{\prime}z_{1}^{\prime\prime}$ is an edge of $G$ or not. **Subcase 2.1.3.** $\left|N_{G}\left(x_{2}\right)\cap X_2\right|=0$ and $\left|N_{G}\left(y_{2}\right)\cap Y_2\right|\leq1$, or $\left|N_{G}\left(x_{2}\right)\cap X_2\right|\leq1$ and $\left|N_{G}\left(y_{2}\right)\cap Y_2\right|=0$. Without loss of generality, assume that $\left|N_{G}\left(x_{2}\right)\cap X_2\right|=0$ and $\left|N_{G}\left(y_{2}\right)\cap Y_2\right|\leq1$. Since $N_{G}\left(x_{2}\right)\subseteq X_1$ and $|X_1|\leq|F|=\kappa^{\prime}(G)\leq\delta(G)$, we have $d_{G}\left(x_{2}\right)=|X_1|=|F|=\kappa^{\prime}(G)=\delta(G)$. Note that $|Y_1|\leq|F|=|X_1|$. Suppose $|X_1|>|Y_1|$. Then $\left|N_{G}\left(y_{2}\right)\cap Y_2\right|\geq1$. Let $y_3\in N_{G}\left(y_{2}\right)\cap Y_2$. If $y_1y_3\in E(G)$, then $G\left[\{x_1,y_1,y_2,y_3\}\right]\cong Z_1$, a contradiction. So assume $y_1y_3\notin E(G)$. If $\left|N_{G}\left(y_{1}\right)\cap X_1\right|\geq2$, then we can find an induced $Z_1$ or $T_{1,1,2}$ according to whether any two vertices in $N_{G}\left(y_{1}\right)\cap X_1$ are adjacent or not. So assume $y_1$ has exactly one neighbor $x_1$ in $X_1$. By $|X_1|>|Y_1|\geq1$, we get $|X_1|\geq2$. Since $y_1$ has exactly one neighbor $x_1$ in $X_1$, we have $|Y_1|\geq2$, and thus $|X_1|\geq3$. If there is at least one edge in $G[X_1]$, then we can find an induced $Z_1$, a contradiction. Otherwise, we can find an induced $T_{1,1,2}$ rooted at $x_2$, also a contradiction. Suppose $|X_1|=|Y_1|$.
Then by $|X_1|=|F|=\kappa^{\prime}(G)=\delta(G)$, we know that each vertex in $X_1$ has only one neighbor in $Y_1$ and each vertex in $Y_1$ has only one neighbor in $X_1$. That is, $F$ is a perfect matching in $G[X_1\cup Y_1]$ connecting $X_1$ and $Y_1$. Suppose $|X_1|=|Y_1|\geq3$. Then, according to whether there is an edge in $G[X_1]$ or not, we can find an induced $Z_1$ or $T_{1,1,2}$, a contradiction. Suppose $|X_1|=|Y_1|=2$. By a similar argument as in Subcase 1.2, we can obtain a contradiction. Suppose $|X_1|=|Y_1|=1$. Since $G\ncong P_{n}, C_{n}\ (n\geq4)$, there exists a vertex of degree at least three in $G$. By a similar argument as in Subcase 2.1.2, we can also obtain a contradiction. **Subcase 2.2.** $G$ contains no path $x_{2}x_{1}y_{1}y_{2}$ satisfying $x_i\in X_i$ and $y_i\in Y_i$ for $i=1,2$. Let $X_1^1=\left\{x \in X_1: N_G(x) \cap X_2 \neq \emptyset\right\}$ and $Y_1^1=\left\{y \in Y_1: N_G(y) \cap Y_2 \neq \emptyset\right\}$. Denote $X_1^2=X_1-X_1^1$ and $Y_1^2=Y_1-Y_1^1$. Since $G$ contains no path $x_{2}x_{1}y_{1}y_{2}$ satisfying $x_i\in X_i$ and $y_i\in Y_i$ for $i=1,2$, we obtain that $X_1^2\neq\emptyset$, $Y_1^2\neq\emptyset$ and there is no edge connecting $X_1^1$ and $Y_1^1$. Since $X$ is connected, there is an edge connecting $X_1^1$ and $X_1^2$, say $x_1^1x_1^2\in E(G)$, where $x_1^1\in X_1^1$ and $x_1^2\in X_1^2$. Let $x_2$ be a neighbor of $x_1^1$ in $X_2$ and $y_1$ be a neighbor of $x_1^1$ in $Y_1$. Since $|X_1^1|<\delta(G)$, $x_2$ has a neighbor, say $x_3$, in $X_2$. If $x_1^1x_3\in E(G)$, then $G\left[\{x_1^1,x_2,x_3,y_1\}\right]\cong Z_1$, a contradiction. So assume $x_1^1x_3\notin E(G)$. If $x_1^2y_1\in E(G)$, then $G\left[\{x_1^1,x_1^2,x_2,y_1\}\right]\cong Z_1$, a contradiction. So assume $x_1^2y_1\notin E(G)$. Thus $G\left[\{x_1^1,x_1^2,x_2,x_3,y_1\}\right]\cong T_{1,1,2}$, also a contradiction. Since all cases lead to contradictions, the proof is thus complete. ◻ D. Bauer, C. Suffel, F. Boesch, R.
Tindell, Connectivity extremal problems and the design of reliable probabilistic networks, in: The Theory and Application of Graphs, Kalamazoo MI, 1980, Wiley, New York, 1981, pp. 45-54. J. A. Bondy, U. S. R. Murty, Graph Theory, Graduate Texts in Mathematics 244, Springer, Berlin, 2008. G. Chartrand, A graph-theoretic approach to a communications problem, SIAM J. Appl. Math. 14 (1966) 778-781. J. Du, Z. Huang, L. Xiong, Characterizing forbidden pairs for the edge-connectivity of a connected graph to be its minimum degree, Axioms 11 (2022) 219. R. J. Faudree, R. J. Gould, Characterizing forbidden pairs for Hamiltonian properties, Discrete Math. 173 (1997) 45-60. M. A. Fiol, On super-edge-connected digraphs and bipartite digraphs, J. Graph Theory 16 (1992) 545-555. A. Hellwig, L. Volkmann, Maximally edge-connected and vertex-connected graphs and digraphs: A survey, Discrete Math. 308 (2008) 3265-3296. L. Lesniak, Results on the edge-connectivity of graphs, Discrete Math. 8 (1974) 351-354. J. Plesnik, Critical graphs of given diameter, Acta Fac. Rerum Natur. Univ. Commenian Math. 30 (1975) 71-93. S. Wang, S. Tsuchiya, L. Xiong, Forbidden pairs for equality of connectivity and edge-connectivity of graphs, Graphs Comb. 35 (2019) 419-426. [^1]: Corresponding author. E-mail: tianyzhxj\@163.com (Y. Tian). [^2]: The research is supported by National Natural Science Foundation of China (12261086).
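As a small supplement (ours, not from the paper), the Cartesian product used in Theorem 6 can be constructed explicitly. The sketch below builds $P_2\Box P_3$, one of the exceptional graphs of that theorem, and checks its vertex count, edge count and degree sequence.

```python
def cartesian_product(V1, E1, V2, E2):
    """Cartesian product G1 [] G2: (x1, y1)(x2, y2) is an edge iff
    x1 = x2 and y1y2 in E(G2), or y1 = y2 and x1x2 in E(G1)."""
    adj = lambda E, a, b: (a, b) in E or (b, a) in E
    V = [(x, y) for x in V1 for y in V2]
    E = [(p, q) for p in V for q in V if p < q and
         ((p[0] == q[0] and adj(E2, p[1], q[1])) or
          (p[1] == q[1] and adj(E1, p[0], q[0])))]
    return V, E

# P_2 [] P_3: a "ladder" with 6 vertices and 7 edges
V, E = cartesian_product([0, 1], [(0, 1)], [0, 1, 2], [(0, 1), (1, 2)])
assert len(V) == 6 and len(E) == 7
# four corner vertices of degree 2 and two middle vertices of degree 3
assert sorted(sum(v in e for e in E) for v in V) == [2, 2, 2, 2, 3, 3]
```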
--- abstract: | The *echo index* counts the number of simultaneously stable asymptotic responses of a nonautonomous (i.e. input-driven) dynamical system. It generalizes the well-known *echo state property* for recurrent neural networks - this corresponds to the echo index being equal to one. In this paper, we investigate how the echo index depends on parameters that govern typical responses to a finite-state ergodic external input that forces the dynamics. We consider the echo index for a nonautonomous system that switches between a finite set of maps, where we assume that each map possesses a finite set of hyperbolic equilibrium attractors. We find that the minimum and maximum repetitions of each map are crucial for the resulting echo index. Casting our theoretical findings in the RNN computing framework, we obtain that for small amplitude forcing the echo index corresponds to the number of attractors for the input-free system, while for large amplitude forcing, the echo index reduces to one. The intermediate regime is the most interesting; in this region the echo index depends not just on the amplitude of forcing but also on more subtle properties of the input. address: - Department of Mathematics, University of Exeter, Exeter EX4 4QF, UK - Department of Computer Science, University of Pisa, Largo Bruno Pontecorvo, 3 - 56127, IT author: - Peter Ashwin - Andrea Ceni bibliography: - nesp-refs-dec22.bib title: Transitions in echo index and dependence on input repetitions --- Nonautonomous dynamical system, Input-driven system, Multistability, Recurrent neural network, Echo state property. # Introduction One of the most pressing questions for artificial intelligence systems is whether one can understand and query the reasons behind a decision made by such a system. The difficulty of answering this *explainability problem* [@Explain2022] is reflected in the fact that a trained neural network is commonly referred to as a black box.
This suggests it is important to try and understand the functioning of neural networks in decision making under input - it is important to *open the black box* [@sussillo2013opening] and nonlinear dynamics gives tools that can be used for this. As an example, [@ceni2020interpreting] show that excitable network attractors can be used to understand function and malfunction of trained RNNs for certain tasks. Recurrent neural networks (RNNs) such as echo state networks [@jaeger2001echo; @lukovsevivcius2009reservoir] can retain memory of internal states. In this case, it is important to view the system in an input-driven context [@manjunath2012theory]. A useful criterion for successful computation is the Echo State Property (ESP) [@jaeger2001echo], which holds if there is asymptotic loss of information about the internal state of the system and only the input is important to determine the output. However, as discussed in [@manjunath2013echo], the presence of the ESP depends not just on the system but on the particular input considered. As the input streams to an input-driven RNN will never be fully deterministic, there is no guarantee that the ESP will be satisfied in practice. Recent work has highlighted that RNNs that do not have the ESP may still be a useful model for understanding errors in neural networks [@ceni2020echo], or in order to design multifunction RNNs that switch between different tasks [@ceni2021phd; @multifunction2022recurrent]. In this paper we develop ideas in [@ceni2020echo] in more depth by considering parametrizations of input by repetition properties of the inputs. The remainder of this section discusses shift dynamics and introduces the echo index for discrete time dynamical systems. We discuss parametrization of inputs by symbol repetition. Section [2](#sec:minecho){reference-type="ref" reference="sec:minecho"} presents some conditions that give bounds on the echo index and the ESP.
Section [3](#sec:rnns){reference-type="ref" reference="sec:rnns"} turns to a numerical example of an echo state RNN where we show how the echo index changes depending on min-max block-length and parameters of the input to the RNN. Section [4](#sec:conclusions){reference-type="ref" reference="sec:conclusions"} concludes with a discussion, including barriers to strengthening the theoretical results in Section [3](#sec:rnns){reference-type="ref" reference="sec:rnns"}.

## Input-driven dynamics and shift dynamics

We consider properties of a discrete time input-driven dynamical system [@manjunath2012theory] of the form $$\label{eq:driven_dynamics} x[k+1] = f(x[k],u[k]),$$ where time is indexed by $k\in\mathbb{Z}$, states are $x[k]\in X\subset \mathbb{R}^n$ and $u[k]\in U\subset \mathbb{R}^p$ is an input sequence. We denote sequences in bold font, i.e. $$\mathbf{x}= \{ x[k] \}_{k \in \mathbb{Z}}, ~~\mathbf{u}= \{ u[k] \}_{k \in \mathbb{Z}},$$ and spaces of sequences in calligraphic font, i.e. $\mathcal{X}= X^\mathbb{Z}$, $\mathcal{U}= U^\mathbb{Z}$, so that $\mathbf{x}\in \mathcal{X}$, $\mathbf{u}\in \mathcal{U}$. We assume that $X$ and $U$ are compact sets and assume that $f:X\times U\rightarrow X$ is an update function that is continuous in both arguments, so that a forward orbit is defined. Note that for a given input sequence $u[k]$, ([\[eq:driven_dynamics\]](#eq:driven_dynamics){reference-type="ref" reference="eq:driven_dynamics"}) can be thought of as a nonautonomous dynamical system [@kloeden2011nonautonomous] $$\label{eq:nad_dynamics} x[k+1] = F(x[k],k),$$ where $F:X\times \mathbb{R}\rightarrow X$ is defined by $F(x,k)=f(x,u[k])$. The system ([\[eq:driven_dynamics\]](#eq:driven_dynamics){reference-type="ref" reference="eq:driven_dynamics"}) can also be viewed as a skew product dynamical system [@kloeden2011nonautonomous] over shift dynamics on the input sequence $\mathbf{u}\in \mathcal{U}$.
We consider the special case where $U$ is a finite set of $M$ input values. Without loss of generality we denote $U=\{0,\ldots,M-1\}$ and call elements of $U$ *symbols*. We recall some standard concepts from the symbolic dynamics of shifts; see for example [@adler1992symbolic; @lind1995introduction; @bruin2022topological] for background and more details. We define the shift operator $\sigma:\mathcal{U}\rightarrow \mathcal{U}$ by $$[\sigma({\bf u})][k]=u[k+1].$$ Consider a given subset of input sequences $\mathcal{V}\subseteq \mathcal{U}$. We say $\mathcal{V}$ is *shift-invariant* if $\sigma(\mathcal{V}) = \mathcal{V}$ and call $(\sigma,\mathcal{U})$ the full shift on $U$. We consider the product topology on $\mathcal{V}$ induced by the metric $$d(\mathbf{u},\mathbf{v})=\sum_{k\in\mathbb{Z}} \frac{d_U(u[k],v[k])}{2^{|k|}} \label{eq:metric}$$ where $d_U(u,v)$ is a metric on $U$. Note that $\sigma$ acts continuously under such a topology. If we assume that $\mathcal{V}$ is shift-invariant and closed then one can lift the nonautonomous system [\[eq:driven_dynamics\]](#eq:driven_dynamics){reference-type="eqref" reference="eq:driven_dynamics"} to a continuous *autonomous* dynamical system on an extended space $\mathcal{F}: X \times \mathcal{V}\rightarrow X \times \mathcal{V}$. This is given by $$\label{eq:extended-autonomous} (x[k+1],\mathbf{u}[k+1])=\mathcal{F}(x[k],{\bf u}[k]) = \left( f(x[k],u[k]) , \sigma({\bf u}[k]) \right)$$ where we write $\mathbf{u}[k]=\sigma^{k}\mathbf{u}$.
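As a concrete illustration, the shift operator and a truncated version of the metric ([\[eq:metric\]](#eq:metric){reference-type="ref" reference="eq:metric"}) can be written in a few lines. Here we model sequences as functions $\mathbb{Z}\to U$ and take $d_U$ to be the discrete metric; both are illustrative assumptions for this sketch rather than choices made in the text.

```python
# Sketch: the shift operator sigma and a truncated version of the metric d,
# assuming the discrete metric d_U(u, v) = 1 if u != v else 0 on symbols.

def shift(u):
    """sigma: [sigma(u)][k] = u(k+1); sequences are modelled as functions Z -> U."""
    return lambda k: u(k + 1)

def d(u, v, K=40):
    """Truncation of d(u, v) = sum_{k in Z} d_U(u[k], v[k]) / 2^{|k|} to |k| <= K."""
    return sum((u(k) != v(k)) / 2.0 ** abs(k) for k in range((-K), K + 1))

u = lambda k: 0                      # the constant-0 sequence
v = lambda k: 1 if k == 0 else 0     # differs from u only at position k = 0
```

Shifting `v` moves its single difference from position $0$ to position $-1$, which halves its weight in the metric; this is how the product topology makes $\sigma$ continuous.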
The skew product nature of ([\[eq:extended-autonomous\]](#eq:extended-autonomous){reference-type="ref" reference="eq:extended-autonomous"}) means that composition can be written $$\label{eq:with-cocycle} \mathcal{F}^n(x,{\bf u}) = \left(\Phi_{n,{\bf u}}(x), \sigma^n(\mathbf{u}) \right)$$ for all $n\in\mathbb{Z}_0^+$, where $\Phi$ is a cocycle, namely $\Phi_{0,{\mathbf{u}}}$ is the identity map on $X$ and $\Phi_{n+k,\mathbf{u}}= \Phi_{n,\sigma^k(\mathbf{u})}\circ \Phi_{k,\mathbf{u}}$ for any $n,k\in\mathbb{Z}^+_0$. We write the forward orbit of $x[0]$ driven by the input sequence $\mathbf{u}$ in terms of this cocycle, as $$x[k]=\Phi_{k,\mathbf{u}}(x[0])$$ for any $k\in\mathbb{Z}_0^+$. This can be extended to negative $k$ if $f$ is invertible. There are many choices of $\mathcal{V}\subset \mathcal{U}$ that may be used to characterise a set of possible input sequences; for convenience we only consider closed and shift-invariant subsets. Given any $\mathbf{u}\in\mathcal{U}$ one can define a closed invariant subset in terms of its orbit closure $$\mathcal{U}(\mathbf{u})=\overline{\{\sigma^{k}(\mathbf{u})~:~k\in\mathbb{Z}\}}$$ where $\overline{\mathcal{V}}$ denotes the closure of $\mathcal{V}$ in the topology of ([\[eq:metric\]](#eq:metric){reference-type="ref" reference="eq:metric"}). An important class of closed and shift-invariant subsets is that of *subshifts of finite type*, defined as follows: For any $m\geq k$ we write $u[k,m]=(u[k],u[k+1],\ldots,u[m])$ to denote a finite string or *word* of symbols. Given a finite set $\mathcal{B}$ of words we define $$\mathcal{U}_{\mathcal{B}}=\{\mathbf{u}\in\mathcal{U}~:~u[k,m]\notin \mathcal{B}\mbox{ for all } k,m \in \mathbb{Z}\mbox{ with } k\leq m\},$$ namely the set of sequences that contain no word in the set $\mathcal{B}$. This is called a *subshift of finite type* with *forbidden words* $\mathcal{B}$.
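Testing whether a finite word can occur in $\mathcal{U}_{\mathcal{B}}$ is a simple substring check. A minimal sketch follows; the golden-mean-style example $\mathcal{B}=\{11\}$ is an illustrative choice, not one used elsewhere in the paper.

```python
def allowed(word, forbidden):
    """Check whether a finite word (a tuple/list of symbols) contains no
    forbidden word as a contiguous block, i.e. whether it can occur as a
    subword of some sequence in U_B."""
    w = tuple(word)
    for b in forbidden:
        b = tuple(b)
        if any(w[i:i + len(b)] == b for i in range(len(w) - len(b) + 1)):
            return False
    return True

# Illustrative example: forbidding the word 11 gives the "golden mean" shift,
# whose sequences have no two consecutive 1s.
B = [(1, 1)]
```

For an actual subshift of finite type one would apply this check to every window of length up to the longest forbidden word, which is exactly the $k$-step structure discussed next.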
Without loss of generality we can assume that $\mathcal{B}$ is *minimal* in the sense there is no proper subset $\mathcal{B}'\subset \mathcal{B}$ such that $\mathcal{U}_{\mathcal{B}}=\mathcal{U}_{\mathcal{B}'}$. If $\mathcal{U}_{\mathcal{B}}$ has a set of forbidden words of length at most $k+1$ then we say it is a *$k$-step subshift*: knowing $k$ consecutive symbols provides the only constraints on the next symbol. Note that by considering a higher block shift on overlapping words of length $m$ it is possible to express such a $\mathcal{U}$ as a 1-step subshift of finite type [@bruin2022topological]. We write system ([\[eq:driven_dynamics\]](#eq:driven_dynamics){reference-type="ref" reference="eq:driven_dynamics"}) in the form of an iterated function system $$\label{eq:switching_dynamics} x[k+1] = f_{u[k]}( x[k]),$$ for $k\in\mathbb{Z}$, where [\[eq:switching_dynamics\]](#eq:switching_dynamics){reference-type="eqref" reference="eq:switching_dynamics"} describes the dynamics of switching between one of $M$ autonomous dynamical systems.

## Local attractors and the echo index

We recall a nonautonomous notion of local attractor of [\[eq:driven_dynamics\]](#eq:driven_dynamics){reference-type="eqref" reference="eq:driven_dynamics"} from [@ceni2020echo] for fixed input sequence; this was used to propose a generalization of the notion of ESP for input-driven systems with finitely many local attractors. For a given input sequence $\mathbf{v}\in\mathcal{U}$, we say $\mathbf{x}=\{x[k]\}_{k\in\mathbb{Z}}$ is an *entire solution* if it is a trajectory for the input $\mathbf{v}$ in forwards and backwards time; i.e. it satisfies ([\[eq:driven_dynamics\]](#eq:driven_dynamics){reference-type="ref" reference="eq:driven_dynamics"}) for all $k\in\mathbb{Z}$. Note that we do not require $f_i$ to be invertible or surjective, and hence given some $x[0]\in X$ there may be (a) multiple choices for $x[-1]$ or (b) no choice for $x[-1]$ that lies within $X$.
Note also that whether (a) or (b) hold will typically depend on $v[k]$ for $k<0$. Note that there will always be a point $x[0]$ that is on an entire solution $\mathbf{x}$. In particular, the set $$X_{-n, 0}(\mathbf{v}):=\Phi_{n,\sigma^{-n}(\mathbf{v})}(X)$$ is well-defined, compact and non-empty as $f(X,v)\subset X$ is a continuous image of a compact set. Moreover $$X_{-n-1, 0}(\mathbf{v})=\Phi_{n,\sigma^{-n}(\mathbf{v})}\circ f_{v[-n-1]} (X)\subset X_{-n, 0}(\mathbf{v})$$ and hence the set $$X_0(\mathbf{v}):=\bigcap_{n>0} X_{-n, 0}(\mathbf{v})$$ consists of points that lie on entire solutions. An entire solution $\mathbf{x}$ is *globally (pullback) attracting* for input $\mathbf{v}$ if $$\lim_{n\rightarrow \infty} h( X_{-n, 0}(\mathbf{v}), x[0])=0$$ where $h(A,B)=\sup_{y\in A} \inf_{z\in B} d(y,z)$. Following [@manjunath2013echo], we say ([\[eq:driven_dynamics\]](#eq:driven_dynamics){reference-type="ref" reference="eq:driven_dynamics"}) for a given input $\mathbf{v}$ has the *ESP* if it has a unique entire solution $\mathbf{x}$ that is globally attracting for this input. An equivalent condition is that there is a single point in $X_0(\mathbf{v})$. However, just as an autonomous dynamical system may be multistable, there may be more than one locally attracting entire solution. Moreover, the system ([\[eq:driven_dynamics\]](#eq:driven_dynamics){reference-type="ref" reference="eq:driven_dynamics"}) may have the echo state property for some inputs but not for others. This is explored in [@ceni2020echo] where we define a notion of echo index as the smallest number of uniformly attracting entire solutions (UAESs) that attract almost all initial states of the system.
If there are a number $m$ of UAESs $\{ \mathbf{x}_1, \ldots, \mathbf{x}_m \}$ such that they decompose[^1] the whole phase space into sets that are uniformly attracted to them, then we say the system has *echo index* $m$ and write $\mathcal{I}(\mathbf{v})=m$; see [@ceni2020echo Definitions 3.3 and 3.4] for formal definitions. Note that the echo index is shift-invariant, that is $\mathcal{I}(\sigma({\bf v}))=\mathcal{I}({\bf v})$. We note that the echo index $\mathcal{I}$ may take a number of values on a given closed shift-invariant set $\mathcal{V}\subset \mathcal{U}$. In particular, we say $\mathcal{V}$ has *consistent echo* $n$ if $\mathcal{I}(\mathbf{v})=n$ for all $\mathbf{v}\in\mathcal{V}$.

# Sufficient conditions to determine minimum echo index {#sec:minecho}

We show that certain assumptions on the behaviour of individual maps and input sequences can be used to guarantee a minimum echo index in terms of symbol repetitions for the input.

## Subshifts with min-max repetitions of input sequences {#sec:inputs}

Suppose we have a finite set of symbols $U$ and consider a subshift $\mathcal{U}_{\mathcal{B}}$ of finite type defined by a set of prohibited words $\mathcal{B}$. We consider a particular class of subshifts $\mathcal{U}_{\mathcal{B}}$ that are characterised by minimum and maximum numbers of repetitions. We say there is *min repetition $m^-_i$ of symbol $i$* if $\mathcal{B}$ contains all words of the form $$j\,i^{m}\,k$$ for all $m<m^-_i$ and all $j,k\neq i$. Similarly, we say there is *max repetition $m^+_i$ of symbol $i$* if $\mathcal{B}$ contains the word $$i^{m^+_i+1}.$$ We say $m_i^+=\infty$ if there is no such word in $\mathcal{B}$. We say the subshift of finite type $\mathcal{U}_{\mathcal{B}}$ has *min-max repetitions* $m^{\pm}_i$ in this case.
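The min-max repetition structure of a finite word can be read off from its maximal runs of equal symbols. The following sketch extracts them; the example word is an illustrative choice, not taken from the text.

```python
from itertools import groupby

def run_lengths(word):
    """Maximal runs of equal symbols in a finite word: [(symbol, length), ...]."""
    return [(s, len(list(g))) for s, g in groupby(word)]

def observed_repetitions(word):
    """Observed min and max run length for each symbol. For a finite window of
    an infinite sequence one would discard the possibly-truncated end runs;
    we keep all runs here for simplicity."""
    stats = {}
    for s, n in run_lengths(word):
        lo, hi = stats.get(s, (n, n))
        stats[s] = (min(lo, n), max(hi, n))
    return stats

# An illustrative word with runs 0^3, 1^4, 0^5, 1^6:
word = [0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1]
```

A window of a sequence in a subshift with min-max repetitions $m_i^\pm$ will have all its interior run lengths within $[m_i^-, m_i^+]$.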
For example, for the two symbols $0$ and $1$ consider the subshift with prohibited words $$\mathcal{B}=\{010,0110,01110,1111111,101,1001\}.$$ In this case $\mathcal{U}_{\mathcal{B}}$ has minimum $3$ and maximum $\infty$ repetitions of symbol $0$, and minimum $4$ and maximum $6$ repetitions of symbol $1$. Note that this subshift can be expressed as a 1-step subshift on blocks of $7$ consecutive symbols, which in its most general form will have $2^7$ states and at most two arrows from each state. One can represent $\mathcal{U}_{\mathcal{B}}$ using a smaller number of states using multiple representations of the same state; this is shown in Figure [1](#fig:minmaxexample){reference-type="ref" reference="fig:minmaxexample"}. ![Graphical representation of a subshift that has minimum $3$ repeats of $0$ and between $4$ and $6$ repeats of $1$. For this case we have min-max repetitions $m_0^-=3$, $m_0^+=\infty$, $m_1^-=4$ and $m_1^+=6$.](figure/fig_minmaxexample.pdf){#fig:minmaxexample width="8cm"} If the subshift $\mathcal{V}$ supports an ergodic shift-invariant measure then one can apply tools from ergodic theory [@lind2021introduction; @adler1992symbolic] or random dynamical systems [@arnold1995random]. In particular, if sequences are chosen with respect to an ergodic shift-invariant probability measure $\mu$ on $\mathcal{U}$ then one can use this to ignore atypical sequences in $\mathcal{U}$, as long as they lie in a set that can be shown to have zero measure. For example, given any set of non-zero probabilities $P$ assigned to the symbols $U$, the Bernoulli product measure $\mu=P^{\mathbb{Z}}$ assumes the probability of a symbol is given, independent of location, by the same distribution $P$. This can be shown to be an ergodic measure that is zero for any set of sequences where the frequency of each symbol does not match the probability $P$.
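One way to produce sequences respecting the min-max repetitions of this example is the following stochastic rule, a hedged sketch anticipating the Markov processes discussed next; the repeat probabilities `p` are illustrative parameters of the sketch.

```python
import random

def sample_minmax(n, m_minus, m_plus, p, seed=0):
    """Sample a two-symbol sequence whose runs respect min-max repetitions:
    emit the minimum run of the current symbol, extend it one step at a time
    with probability p[s] (never beyond the maximum), then switch symbol."""
    rng = random.Random(seed)
    seq, s = [], 0
    while len(seq) < n:
        run = m_minus[s]
        while run < m_plus[s] and rng.random() < p[s]:
            run += 1
        seq.extend([s] * run)
        s = 1 - s
    return seq[:n]

# Min-max repetitions of the example above:
# m_0^- = 3, m_0^+ = infinity, m_1^- = 4, m_1^+ = 6.
seq = sample_minmax(400, m_minus={0: 3, 1: 4},
                    m_plus={0: float('inf'), 1: 6},
                    p={0: 0.5, 1: 0.5}, seed=1)
```

By construction the sampled sequences avoid every word in $\mathcal{B}$, since interior runs of $0$ have length at least $3$ and runs of $1$ have length between $4$ and $6$.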
Note that there are many ergodic invariant measures corresponding to the topological subshift shown in Figure [1](#fig:minmaxexample){reference-type="ref" reference="fig:minmaxexample"}. By allocating transition probabilities one can define a Markov process that explores this subshift and induces such a measure. A useful example of this is the family of ergodic processes on two symbols $$\mathcal{U}(m_0^{\pm},m_1^{\pm},p_0,p_1)$$ with minimum and maximum repeats $m_i^{\pm}$ and repeat probability $0<p_i<1$ for each additional repeat of $i$ after the minimum number. Figure [2](#fig:minmaxexample_markov){reference-type="ref" reference="fig:minmaxexample_markov"} illustrates an example of a family of measures that correspond to the topological subshift in Figure [1](#fig:minmaxexample){reference-type="ref" reference="fig:minmaxexample"} and depends on the parameters $p_0$ and $p_1$. ![A Markov process for the topological subshift in Figure [1](#fig:minmaxexample){reference-type="ref" reference="fig:minmaxexample"}. For any choice of parameters $0<p_0<1$ and $0<p_1<1$ the transition probabilities shown generate a Markov process with min-max repetitions $m_0^-=3$, $m_0^+=\infty$, $m_1^-=4$ and $m_1^+=6$. For example, after an initial 3 repeats, symbol $0$ will be repeated with probability $p_0$ for each further repeat.](figure/fig_minmax_markov_example.pdf){#fig:minmaxexample_markov width="8cm"}

## Minimum echo for multiple attractor maps {#subsec:minecho}

We now discuss some testable (but restrictive) assumptions on the maps ([\[eq:switching_dynamics\]](#eq:switching_dynamics){reference-type="ref" reference="eq:switching_dynamics"}) that can be used to give bounds on the echo index. Suppose $X \subset \mathbb{R}^{n}$ is a compact $n$-dimensional manifold, write $\ell(\cdot)$ to denote Lebesgue measure on this, and consider the finite set of $M$ maps $f_i: X\rightarrow X$ for $i=0, \ldots,M-1$. **Assumption 1**.
*Suppose that for each $i=0,\ldots, M-1$ we have:*

- *$f_i$ is a continuously differentiable map that is almost everywhere a local diffeomorphism.*

- *$f_i$ has a finite number of hyperbolic stable fixed points $x_i^{0}, \ldots, x_i^{L(i)-1}$.*

- *The basins of attraction $\mathcal{B}_i^{j}$ of the $x_i^j$ exhaust the state space in measure, i.e. $$\ell( X \setminus \bigcup_j \mathcal{B}_i^{j} ) = 0.$$*

- *There is a $P(i,j,k)\in\{0,\ldots,L(k)-1\}$ such that $$x_i^j\in \mathcal{B}_k^{P(i,j,k)}$$ for all $k=0,\ldots,M-1$ and $j=0,\ldots,L(i)-1$.*

**Remark 1**. *Condition (i) implies in particular that the pre-image of any set of zero Lebesgue measure under $f_i$ also has zero measure. Condition (ii) implies that near each $x_i^j$ some iterate of $f_i$ is locally a contraction in some neighbourhood; we characterise this in Lemma [Lemma 1](#lem:iteratecontraction){reference-type="ref" reference="lem:iteratecontraction"}. Condition (iii) means that $\{\mathcal{B}_i^j\}_{j=0}^{L(i)-1}$ is a full measure partition for each $i$. Condition (iv) means that $x_i^j$ lies within the basin of an attractor for $f_k$; moreover it is a non-degeneracy assumption that means that no attractor for $f_i$ is on the basin boundary for $f_k$.*

We characterise the local contraction property more precisely:

**Lemma 1**. *Suppose that Assumption [Assumption 1](#assum:uasps){reference-type="ref" reference="assum:uasps"} is satisfied. Then for each $i,j$ and any choice of $0<\rho<1$ there is a neighbourhood $N_{i}^j$ of $x_i^j$ and $n_i^j\geq 1$ such that if $$F:=(f_i|_{N_i^j})^{k}$$ for any $k\geq n_i^j$ then $F:N_i^j\rightarrow N_i^j$ has a unique fixed point at $x_i^j$ and moreover $F$ contracts by $\rho$: $$\|F(x)-F(y)\|<\rho\|x-y\|.$$*

 The assumption of a hyperbolic attracting fixed point $x_i^j$ implies linear stability, and hence that in any neighbourhood contained in $\mathcal{B}_i^j$ some iterate of $f_i$ will be a contraction.
◻

Now consider a specific input sequence $\mathbf{v}=\{v[k]\}_{k\in\mathbb{Z}}$. Given some choice of $A[0]\in\{0,\ldots,L(v[0])-1\}$ and applying Assumption [Assumption 1](#assum:uasps){reference-type="ref" reference="assum:uasps"}(iv) we have $$x_{v[0]}^{A[0]}\in \mathcal{B}_{v[1]}^{P(v[0],A[0],v[1])}.$$ Hence, associated with a sequence $\mathbf{v}$ and initial choice of attractor $x_{v[0]}^{A[0]}$ there will be a unique sequence $$\left\{x_{v[k]}^{A[k]}\right \}_{k\geq 0}$$ such that $x_{v[k]}^{A[k]}$ is an attractor for $f_{v[k]}$ contained in the basin of $x_{v[k+1]}^{A[k+1]}$ for the map $f_{v[k+1]}$, namely $$\label{eq:attractorseq} A[k]=P(v[k-1],A[k-1],v[k]).$$ We call such a sequence $A[k]$ a *forward attractor sequence* for $\mathbf{v}$ starting at $A[0]$. Since $A[k]$ is determined by ([\[eq:attractorseq\]](#eq:attractorseq){reference-type="ref" reference="eq:attractorseq"}) there will be only finitely many forward attractor sequences, and the number of these is bounded above by $L(v[0])$, the number of choices of $A[0]$. An *entire attractor sequence* is a sequence $A[k]$ satisfying ([\[eq:attractorseq\]](#eq:attractorseq){reference-type="ref" reference="eq:attractorseq"}) that is associated with a bi-infinite $\mathbf{v}$; this depends not only on the $v[k]$ for $k\geq 0$ but also on those for $k<0$. **Lemma 2**. *Suppose that Assumption [Assumption 1](#assum:uasps){reference-type="ref" reference="assum:uasps"} holds for the system ([\[eq:switching_dynamics\]](#eq:switching_dynamics){reference-type="ref" reference="eq:switching_dynamics"}).
Then there is an $m_{\min}$ such that for any $\mathbf{v}$ with $m_i^-\geq m_{\min}$ for all $i$ and any entire attractor sequence $A[k]$ for this $\mathbf{v}$, there is a pullback attracting entire solution $x^{A[k]}_{v[k]}$ such that $x^{A[k]}_{v[k]}\in \mathcal{B}_{v[k+1]}^{A[k+1]}$.*

 Choose any $0<\rho<1$; by applying Lemma [Lemma 1](#lem:iteratecontraction){reference-type="ref" reference="lem:iteratecontraction"} for all choices of $i,j$ we can find an $\epsilon>0$ and an $m_{\min}$ such that $m_{\min}>n_i^j$ and $B_{\epsilon}(x_i^j) \subset N_i^j$ for all $i,j$. Pick any attractor sequence $A[k]$ for a $\mathbf{v}$ that satisfies $m_i^-\geq m_{\min}$ and define $$x^{[A]}[k,n]:= \Phi_{n,\sigma^{k-n}(\mathbf{v})} B_{\epsilon}(x_{v[k-n]}^{A[k-n]}).$$ This is a nested sequence in increasing $n$ for fixed $k$ and there is contraction by $\rho$ over blocks of the same symbol. Hence the set is non-empty and has diameter that shrinks to zero as $n\rightarrow \infty$. This means that $$x^{[A]}[k]:= \bigcap_{n>0} x^{[A]}[k,n]$$ consists of a single entire solution that pullback attracts an $\epsilon$-neighbourhood of itself. ◻

A consequence of this is that, for long enough minimum block-lengths, we can get a lower bound for the echo index from the number of distinct entire attractor sequences.

**Theorem 3**. *Suppose that Assumption [Assumption 1](#assum:uasps){reference-type="ref" reference="assum:uasps"} holds for ([\[eq:switching_dynamics\]](#eq:switching_dynamics){reference-type="ref" reference="eq:switching_dynamics"}) and choose $m_{\min}$ and $\mathbf{v}$ such that the conclusion of Lemma [Lemma 2](#lem:entire){reference-type="ref" reference="lem:entire"} holds. Suppose that there are $E$ distinct entire attractor sequences for $\mathbf{v}$. Then the echo index is at least $E$ for this sequence.*

 By Lemma [Lemma 2](#lem:entire){reference-type="ref" reference="lem:entire"} each entire attractor sequence $A[k]$ has a pullback attractor $x^{[A]}[k]$.
These are distinct as long as the entire attractor sequences are distinct. ◻

Under additional assumptions one can pin down the echo index exactly, in particular in the following case where we assume there is a uniform bound on how long it takes points to enter a given neighbourhood of a single attractor.

**Theorem 4**. *Suppose that Assumption [Assumption 1](#assum:uasps){reference-type="ref" reference="assum:uasps"} holds for ([\[eq:switching_dynamics\]](#eq:switching_dynamics){reference-type="ref" reference="eq:switching_dynamics"}) and in addition assume that there is an $i$ such that $f_i$ has a single attracting fixed point $x_i^0$ such that $h(f_i^m(X),x_i^0)\rightarrow 0$ as $m\rightarrow \infty$. Then there is a single attractor sequence and an $m_{\min}$ such that for every $\mathbf{v}$ with $m_i^-\geq m_{\min}$ the system has echo index one.*

 In this case note that whenever $v[k]=i$ we have $A[k]=0$. Hence there is only one entire attractor sequence. Moreover, given any $\epsilon$ there is an $m$ such that all points must enter the neighbourhood $B_{\epsilon}(x_i^0)$ after at most $m$ iterates and hence the entire solution pullback attracts all points in $X$. ◻

## Obstructions to conditions for echo consistency

Theorem [Theorem 3](#thm:suff){reference-type="ref" reference="thm:suff"} gives sufficient conditions to guarantee a minimum echo index for the forced system in terms of the number $E$ of distinct attractor sequences for the system with input $\mathbf{v}$. Conversely, Theorem [Theorem 4](#thm:echoone){reference-type="ref" reference="thm:echoone"} shows under stronger conditions that the echo index is precisely one. One might naively expect that an even stronger result than Theorem [Theorem 3](#thm:suff){reference-type="ref" reference="thm:suff"} may follow, namely that the echo index is in fact $E$ for cases where $E\geq 2$. This is not the case because under composition new attractors may appear near a basin boundary.
As an example, define $$F(x)=x+x^2\sin\frac{\pi}{x},~~F(0)=0$$ and let $f(x)$ be the continuous function such that $f(x)=F(x)$ for $|F(x)|<1$ and $f(x)$ is a linear function with constant slope $0.5$ for $|F(x)|>1$. Now consider the maps $$\label{eq:f0f1} f_0(x) = f(x)+1,~~~ f_1(x) = f(x-1).$$ One can verify that $f_0(x)$ and $f_1(x)$ each have a unique attracting fixed point, namely $x_0^0=3$, $x_1^0=2$. However, $f_1\circ f_0=f^2$ has infinitely many attracting fixed points separated by repelling fixed points; it also has neutrally stable and linearly unstable fixed points; see Figure [3](#fig:diabolic){reference-type="ref" reference="fig:diabolic"}. In summary, this system satisfies Assumption [Assumption 1](#assum:uasps){reference-type="ref" reference="assum:uasps"} and for any input there is only one attractor sequence $A[k]=0$ for all $k$, but for an input $\mathbf{v}$ that alternates between $0$ and $1$ there are infinitely many attracting entire solutions. Hence for this input there is infinite echo index, even though each individual map has a unique attracting fixed point. Nonetheless, applying Theorem [Theorem 4](#thm:echoone){reference-type="ref" reference="thm:echoone"} there is a minimum block-length such that there is only one attracting entire solution. In this case, numerical simulations suggest this minimum is $2$, which would suggest that the system has echo index one for all input sequences except the periodic sequence that is an infinite repeat of $01$. ![Maps $f_0$ and $f_1$ ([\[eq:f0f1\]](#eq:f0f1){reference-type="ref" reference="eq:f0f1"}) such that each individual map has a unique globally attracting linearly stable fixed point.
However the composition $f_1\circ f_0$ has infinitely many stable fixed points.](figure/fig_diabolic.pdf){#fig:diabolic width="12cm"}

# Echo index dependence for RNNs {#sec:rnns}

One of the main motivations for this study is the need to understand how many responses there are for RNNs such as (a) trained discrete-time RNNs of the form [@tallec2018can; @bianchi2017recurrent] $$\begin{aligned} \label{eq:rnn} x[k+1] & = \phi( W_r x[k] + W_i u[k+1] + W_{f} z[k] )\end{aligned}$$ driven by inputs $u[k]$, or (b) trained ESNs of leaky-integrator neurons: $$\label{eq:leaky_rnn} x[k+1] = G(x[k],u[k],z[k]),~~G(x,u,z)=(1-\alpha) x + \alpha \phi( W_r x + W_i u + W_{f} z ),$$ where $\alpha \in (0,1)$ quantifies the leakage and $\phi$ is a nonlinear function; in both cases the input sequence is given by $u[k]$ and $z[k]$ represents the output feedback $$\label{eq:output} z[k] = \psi(x[k]).$$ The nonlinear function $\phi$ is called an *activation function*; we assume it is bounded, monotonically increasing, and differentiable, e.g. a scaling of $\tanh$. By contrast, $\psi$ is usually the identity function or a softmax function.

## Switching dynamics for RNNs

We consider an ESN with leaky-integrator neurons [@jaeger2007optimization] and no output feedback[^2] for implementing the input-driven state update rule [\[eq:switching_dynamics\]](#eq:switching_dynamics){reference-type="eqref" reference="eq:switching_dynamics"}. We consider a map of the form ([\[eq:leaky_rnn\]](#eq:leaky_rnn){reference-type="ref" reference="eq:leaky_rnn"}) but with $$\label{eq:map} G_{\alpha}(x, u):= (1-\alpha)x + \alpha \tanh(W_r x + W_{in} u).$$ A finite set of input values $U = \{ u_0, u_1, \ldots, u_{M-1} \}$ defines a number $M$ of autonomous maps $f_i: X \rightarrow X$, where $X=[-1,1]^r$ and $r$ is the dimension of the internal state of the RNN, i.e. the number of neurons.
In this RNN framework, we can see that for small enough input values, the echo index of the nonautonomous switching system is the number of attractors of the input-free ESN. On the other hand, Theorem [Theorem 4](#thm:echoone){reference-type="ref" reference="thm:echoone"} has an interpretation in terms of large amplitude forcing for RNN-like systems. In fact, we know that large amplitude inputs drive the system into the saturating tails regime of $\tanh$, characterised by a single attracting fixed point [@ceni2020interpreting]. Thus Theorem [Theorem 4](#thm:echoone){reference-type="ref" reference="thm:echoone"} implies that on forcing an RNN with a large amplitude input for long enough, the resulting nonautonomous RNN switching dynamics are characterised by echo index one.

## An example of input-driven RNN with multiple attractors {#sec:example}

We provide here an example with a two dimensional ESN to better illustrate the concepts. We choose $$W_r = \begin{bmatrix} \frac{1}{2} & 0 \\ 0 & \frac{7}{4} \end{bmatrix}, \quad W_{in} = I_2,$$ with $I_2$ the identity matrix. We consider input sequences $\mathcal{U}= \{ u_0, u_1 \}^{\mathbb{Z}}$ where $$\label{eq:inputs} u_0:= \begin{pmatrix} \frac{1}{4} \\ \frac{1}{20} \end{pmatrix}~\mbox{ and }~u_1:= \begin{pmatrix} -\frac{1}{4} \\ -\frac{1}{2} \end{pmatrix}.$$ What follows can be observed for any value of $\alpha \in (0,1]$. We chose $\alpha=\dfrac{1}{4}$ because a small leak rate highlights the transient dynamics.
The nonautonomous dynamics driven by some input sequence $\mathbf{u}\in \mathcal{U}$ consists of a sequence of applications of the two maps $$f_0(x):= G(x,u_0),~~f_1(x):= G(x,u_1).$$ One can verify that the autonomous system $x[k+1] = f_0(x[k])$ has two asymptotically stable fixed points, with a saddle between them, all lying close to the vertical line $x_1 \approx 0.45$ (see Figure [4](#fig:autonomous_maps){reference-type="ref" reference="fig:autonomous_maps"}), while the autonomous system $x[k+1] = f_1(x[k])$ has only one (asymptotically stable) fixed point, lying in the quadrant where both variables are negative. Note that [@ceni2020echo Theorem 4.1] can be applied in this example to prove the existence of a local point attractor lying in a strip of negative values of the $x_2$ variable. Nevertheless, it is not straightforward to prove the existence of additional local point attractors. ![ (a) Phase portrait of the map $f_0$. The red line represents the stable manifold of the saddle. Some initial conditions have been evolved and plotted (as black points) in order to visualise the vector field. (b) Phase portrait of the map $f_1$. Note that the purple point is not a fixed point but a *slow point* [@sussillo2013opening; @ceni2020interpreting]. In fact, the map $f_1$ is close to a saddle-node bifurcation which occurs near the position of this slow point. ](figure/fig_map1map2.png){#fig:autonomous_maps width="16cm"} \(a\) (b) The stable manifold of the saddle of the autonomous map $f_0$ is a horizontal line dividing the phase space into two sets. Let us denote by $x^*$ the upper stable node of $f_0$ lying in the quadrant where both variables are positive. Thanks to [@ceni2020echo Proposition B.1], we can consider the phase space to be $X=[-1,1]^2$. Let us call $X^{up}$ the upper half where $x^*$ lies (including the stable manifold line) and $X^{down}$ the remaining part.
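The attractor structure just described can be checked numerically. The sketch below iterates the two maps from a few initial states; the specific initial conditions and iteration counts are our own illustrative choices, not taken from the text.

```python
import math

# Parameters of the example above: W_r = diag(1/2, 7/4), W_in = I_2, alpha = 1/4.
ALPHA = 0.25
W = (0.5, 1.75)
U0 = (0.25, 0.05)
U1 = (-0.25, -0.5)

def G(x, u):
    """Leaky ESN map G_alpha; since W_r is diagonal, the coordinates decouple."""
    return tuple((1 - ALPHA) * xi + ALPHA * math.tanh(w * xi + ui)
                 for xi, w, ui in zip(x, W, u))

def limit(x, u, n=5000):
    """Iterate long enough to approach an attracting fixed point (if any)."""
    for _ in range(n):
        x = G(x, u)
    return x

up = limit((0.0, 0.9), U0)       # expected: upper attractor of f_0
down = limit((0.0, -0.9), U0)    # expected: lower attractor of f_0
fp1 = limit((0.0, 0.9), U1)      # expected: the unique attractor of f_1
```

Because $W_r$ is diagonal, the bistability of $f_0$ lives entirely in the $x_2$ coordinate, while both fixed points share the same $x_1$ value close to $0.45$.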
On the other hand, the global attractor of the autonomous map $f_1$ consists of only an asymptotically stable fixed point lying in $X^{down}$. Lemma [Lemma 1](#lem:iteratecontraction){reference-type="ref" reference="lem:iteratecontraction"} can be applied to show there exists a (minimum) positive integer $m_{\min}$ for which $f_1^{m_{\min}}( X^{up} )$ is mapped into the interior of $X^{down}$. For the particular choice of parameters it turns out that $m_{\min}\approx 30$.

## Bifurcations of echo index {#sec:bif-2esp-1esp}

In this section we perform numerical simulations to compute the echo index of the ESN example of Section [3.2](#sec:example){reference-type="ref" reference="sec:example"}. For each choice of parameters determining the sequence $\mathbf{v}$, we choose $50$ uniformly distributed initial conditions and iterate $T$ steps before using a clustering algorithm to numerically estimate the number of clusters in the final state - this is an estimate of the echo index. Random sequences of length $T$ with varying $m_0^-$ and $m_1^+$ are chosen, and we fix $m_0^+=40$ and $m_1^-=1$. Figure [8](#fig:random_inputs){reference-type="ref" reference="fig:random_inputs"} highlights that for $m_0^-$ large enough there will be echo index $2$ for small enough $m_1^+$. Panel (a) shows $T=100$, $p_0=0.9$ and $p_1=0.95$. Note that the short length of the timeseries is not enough to collapse the initial conditions down to one of two values. Panel (b) is as for (a) but with $T=1000$; in this case we have index one or two, though some cases where $m_1^+$ is large have not been sampled for long enough, giving a spurious echo index of $2$. Panel (c) is as for (a) but with $T=2000$; we find what is presumably a clear boundary emerging between different values of the echo index.
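A simplified version of this estimation procedure can be sketched as follows. We replace the clustering algorithm by coarse rounding of final states, and drive the ESN with periodic inputs built from blocks of the two symbols; the specific block lengths below are our own illustrative choices, guided by the $m_{\min}\approx 30$ found above.

```python
import math

# The ESN of Section 3.2: W_r = diag(1/2, 7/4), W_in = I_2, alpha = 1/4.
ALPHA = 0.25
W = (0.5, 1.75)
INPUT = {0: (0.25, 0.05), 1: (-0.25, -0.5)}

def step(x, s):
    u = INPUT[s]
    return tuple((1 - ALPHA) * xi + ALPHA * math.tanh(w * xi + ui)
                 for xi, w, ui in zip(x, W, u))

def estimate_echo_index(seq, n_ic=20):
    """Drive a spread of initial conditions with the same input sequence and
    count clusters of final states; rounding stands in for the clustering
    algorithm used in the text."""
    finals = set()
    for i in range(n_ic):
        x = (0.0, -1.0 + 2.0 * i / (n_ic - 1))   # ICs spread over x_2 in [-1, 1]
        for s in seq:
            x = step(x, s)
        finals.add((round(x[0], 3), round(x[1], 3)))
    return len(finals)

# Periodic inputs in the style of panel (d): blocks of zeros then ones.
short_ones = ([0] * 10 + [1] * 2) * 40    # short 1-blocks: two responses expected
long_ones = ([0] * 10 + [1] * 40) * 20    # 1-blocks longer than m_min ~ 30
```

With short bursts of symbol $1$ the two responses of $f_0$ both survive, while bursts longer than $m_{\min}$ push all of $X^{up}$ into $X^{down}$, collapsing the estimate to one.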
Finally (d) shows the deterministic case where $p_0=0$ and $p_1=1$ which corresponds to the periodic orbit where there are $m_0^-$ repeats of $0$ and $m_1^+$ repeats of $1$ - in this case there a clear boundary between regions with echo index one and two; this presumably corresponds to a bifurcation of the periodically forced map where a second attractor appears. Note that for $m_1^+>30$ and large enough $T$, we always find echo index one as suggested by the discussion at the end of Section [3.2](#sec:example){reference-type="ref" reference="sec:example"}, but for $m_0^-$ small it is possible to find echo index one for some values of $m_1^+<30$. ![Estimates of echo index for the system ([\[eq:map\]](#eq:map){reference-type="ref" reference="eq:map"}) with parameters as in the text for a range of random input sequences. Random sequences of length $T$ with varying $m_0^-$ and $m_1^+$ are chosen, fixing $m_0^+=40$ and $m_1^-=1$. Note that the echo index is apparently 2 or more for small enough $m_1^+$, or for small $T$. (a-c) show estimates of the echo index for randomly generate inputs with the given min-max block-lengths and probabilities of repetition $p_0$ and $p_1$ as in Figure [2](#fig:minmaxexample_markov){reference-type="ref" reference="fig:minmaxexample_markov"}. Examining longer timeseries when estimating the echo index leads to a clear boundary between regions of different echo index. (d) shows a special case where there are only periodic inputs. We conjecture that the cases (a-c) will limit to (d) for arbitrarily large $T$. ](figure/fig-modelA.pdf "fig:"){#fig:random_inputs width="8cm"}  ![Estimates of echo index for the system ([\[eq:map\]](#eq:map){reference-type="ref" reference="eq:map"}) with parameters as in the text for a range of random input sequences. Random sequences of length $T$ with varying $m_0^-$ and $m_1^+$ are chosen, fixing $m_0^+=40$ and $m_1^-=1$. 
Note that the echo index is apparently 2 or more for small enough $m_1^+$, or for small $T$. (a-c) show estimates of the echo index for randomly generated inputs with the given min-max block-lengths and probabilities of repetition $p_0$ and $p_1$ as in Figure [2](#fig:minmaxexample_markov){reference-type="ref" reference="fig:minmaxexample_markov"}. Examining longer timeseries when estimating the echo index leads to a clear boundary between regions of different echo index. (d) shows a special case where there are only periodic inputs. We conjecture that the cases (a-c) will limit to (d) for arbitrarily large $T$. ](figure/fig-modelB.pdf "fig:"){#fig:random_inputs width="8cm"} (a)(b)   ![Estimates of echo index for the system ([\[eq:map\]](#eq:map){reference-type="ref" reference="eq:map"}) with parameters as in the text for a range of random input sequences. Random sequences of length $T$ with varying $m_0^-$ and $m_1^+$ are chosen, fixing $m_0^+=40$ and $m_1^-=1$. Note that the echo index is apparently 2 or more for small enough $m_1^+$, or for small $T$. (a-c) show estimates of the echo index for randomly generated inputs with the given min-max block-lengths and probabilities of repetition $p_0$ and $p_1$ as in Figure [2](#fig:minmaxexample_markov){reference-type="ref" reference="fig:minmaxexample_markov"}. Examining longer timeseries when estimating the echo index leads to a clear boundary between regions of different echo index. (d) shows a special case where there are only periodic inputs. We conjecture that the cases (a-c) will limit to (d) for arbitrarily large $T$. ](figure/fig-modelC.pdf "fig:"){#fig:random_inputs width="8cm"}  ![Estimates of echo index for the system ([\[eq:map\]](#eq:map){reference-type="ref" reference="eq:map"}) with parameters as in the text for a range of random input sequences. Random sequences of length $T$ with varying $m_0^-$ and $m_1^+$ are chosen, fixing $m_0^+=40$ and $m_1^-=1$.
Note that the echo index is apparently 2 or more for small enough $m_1^+$, or for small $T$. (a-c) show estimates of the echo index for randomly generated inputs with the given min-max block-lengths and probabilities of repetition $p_0$ and $p_1$ as in Figure [2](#fig:minmaxexample_markov){reference-type="ref" reference="fig:minmaxexample_markov"}. Examining longer timeseries when estimating the echo index leads to a clear boundary between regions of different echo index. (d) shows a special case where there are only periodic inputs. We conjecture that the cases (a-c) will limit to (d) for arbitrarily large $T$. ](figure/fig-modelF.pdf "fig:"){#fig:random_inputs width="8cm"} (c)(d) # Conclusions {#sec:conclusions} In this paper we have gone beyond work in [@ceni2020echo] to highlight specific ways in which the echo index varies with input signal (and hence whether the echo state property holds for a given input). We present this for an iterated function system on a compact space with an example application to an echo state network. We have so far only considered min-max block-lengths and repetition probabilities in determining bounds on the echo index, but the actual value may depend on much more subtle properties of words appearing in the input and properties of the individual maps. It remains a challenge to better understand this relationship between input set, system properties and echo index, even if we restrict only to functions where the only attractors are fixed points. Clearly, responses for cases where the autonomous dynamics of the maps include more complex attractors (such as chaotic, quasiperiodic or periodic) will be more challenging, not least because a simple generalization of Assumption [Assumption 1](#assum:uasps){reference-type="ref" reference="assum:uasps"}(iv) is unreasonable - a single attractor of one map can stably straddle several basins of attraction for attractors of another map.
One interpretation of our results is that minimum block-lengths are a proxy for the input rate to the system - a long enough minimum block-length corresponds to a slow rate of input. Our results, Theorems [Theorem 3](#thm:suff){reference-type="ref" reference="thm:suff"} and [Theorem 4](#thm:echoone){reference-type="ref" reference="thm:echoone"}, imply that the expected behaviour can be characterised by attractors and basins of the individual maps for slow enough rates of input. For shorter minimum block-lengths the picture becomes more complex and the transient or nonautonomous behaviour of each map features more strongly in the response to input - there will be an analogy to rate-induced critical transitions [@ashwin2012tipping] in the response for such cases. A future direction that seems worthy of study is the role of transients in determining the echo index. The computations in Figure [8](#fig:random_inputs){reference-type="ref" reference="fig:random_inputs"} show that even for quite long (but finite) computations the number of responses may apparently exceed the echo index, which is attained asymptotically. A thorough understanding of responses will be needed to explain such transient behaviour of the echo index. This clearly depends not only on transients in the map dynamics but also on waiting times to see certain words within the inputs. ## Acknowledgements {#acknowledgements .unnumbered} We thank EPSRC for support via EP/W52265X/1. We thank Lorenzo Livi, Muhammed Fadera and Claire Postlethwaite for very informative discussions in relation to this work. ## Data Access {#data-access .unnumbered} The Matlab code for Figure [8](#fig:random_inputs){reference-type="ref" reference="fig:random_inputs"} is available from `https://github.com/peterashwin/ashwin-ceni-2023`.     [^1]: apart from a subset of zero Lebesgue measure. [^2]: As discussed in [@ceni2020interpreting], whenever the readout is linear the feedback term can be formally incorporated in the reservoir term.
Therefore an ESN state-update rule without the feedback term can represent the state-update rule after the training session, where $W_r$ represents the "effective reservoir" after training.
--- abstract: | We investigate the distortion of the Assouad dimension and (regularized) spectrum of sets under planar quasiregular maps. While the respective results for the Hausdorff and upper box-counting dimension follow immediately from their quasiconformal counterparts by employing elementary properties of these dimension notions (e.g. countable stability and Lipschitz stability), the Assouad dimension and spectrum do not share such properties. We obtain upper bounds on the Assouad dimension and spectrum of images of compact sets under planar quasiregular maps by studying their behavior around their critical points. As an application the invariance of porosity of compact subsets of the plane under quasiregular maps is established. address: | Department of Mathematics\ University of Tennessee, Knoxville\ 1403 Circle Dr\ Knoxville, TN 37966 author: - Efstathios-K. Chrontsios-Garitsis title: Quasiregular distortion of dimensions --- # Introduction Quasiregular maps are considered a natural generalization of holomorphic maps, especially in higher dimensions. However, their significance is not restricted to the higher dimensional setting. They often play a central role in complex dynamics and are an essential part of a set of techniques called "quasiconformal surgery\", which allows us to "glue\" otherwise rigid maps together while preserving some of their properties. Quasiconformal surgery was first introduced by Douady and Hubbard for polynomial-like maps ([@Douady], [@DouadyHub]) and later generalized further by Shishikura in [@Shishikura] for rational maps in order to study the number of stable regions and Herman rings associated to their dynamics (see also [@NuriaBook] for a thorough exposition of these techniques). The plethora of fractals encountered in dynamics and other areas has been a strong motive to determine quantities that help in classifying these complicated sets. 
It is in fact a core element of Fractal Geometry to study different dimension notions for fractals (see [@falconer2014]). One notion that has been recently used in a range of different areas is the Assouad dimension, which was first introduced by Assouad in [@Assouad83] as a tool to investigate the embeddability of metric spaces into Euclidean spaces. Moreover, Fraser and Yu in [@fy:assouad-spectrum] introduced the notion of Assouad spectrum, a collection of dimension values interpolating between the upper box-counting dimension and the Assouad dimension. This notion has been recently studied extensively, since it provides more information on sets which exhibit a "gap\" between their box and Assouad dimensions. The book by Fraser [@Fraser2020] is one of the most complete references to date for properties and applications of the Assouad dimension and spectrum. The study of how dimension notions change under quasiconformal mappings has been an interesting problem at the intersection of Analysis and Fractal Geometry. Gehring and Väisälä first established how quasiconformal maps distort the Hausdorff dimension in [@GV] and Kaufman showed the same bounds hold for the quasiconformal distortion of the upper box-counting dimension in [@Kaufman]. Tyson and the author proved that the Assouad dimension and regularized Assouad spectrum of sets change similarly under quasiconformal maps in [@Chronts], where an application on quasiconformally classifying polynomial spirals is also provided. One way to define quasiregular maps is with the same dilatation bound condition as in quasiconformal maps, but without the assumption of the map being a homeomorphism (see Section 2 for definitions). It is a natural question to ask whether the "bounded stretching\" properties of quasiregular and quasiconformal maps are enough for these dimension distortion bounds, or whether the fact that quasiconformal maps are homeomorphisms plays an important role as well.
In the cases of the Hausdorff and upper box-counting dimensions the analytic properties of quasiconformal maps, which the class of quasiregular maps also shares, are indeed enough to establish the same upper bounds on the dimensions of images of sets. This can be seen even more clearly in the case of a planar quasiregular map, which can be written as a composition of a holomorphic and a quasiconformal map (see [@Rickman], [@LehtoVir]). Indeed, the Hausdorff dimension does not change under holomorphic maps, since it is countably stable and the critical points are at most countably many, hence a quasiregular map only changes the Hausdorff dimension of a set by however much the respective quasiconformal map changes it, which is established by the result of Gehring and Väisälä in [@GV]. Similarly, the upper box-counting dimension is Lipschitz stable, i.e., it does not increase under Lipschitz maps, implying that the upper box-counting dimension of a compact set would only increase by at most the amount the respective quasiconformal map increases it, which is bounded by the result of Kaufman in [@Kaufman]. However, things are different in the case of the Assouad dimension and spectrum, which are neither countably stable nor Lipschitz stable (see [@Fraser2020] p. 18, 48). The requirement of these notions to check all points and all appropriate pairs of scales in the set creates the need for more information when one tries to use counting arguments in the source of a map to bound the dimension of the image (see Section 2 for definitions). Indeed, this piece of information was essential in the proof of the quasiconformal distortion bounds of the Assouad dimension and spectrum in [@Chronts] (see Corollaries 2.2 and 2.3 for rigorous statements of these properties). However, understanding how planar quasiregular maps behave around their critical points provides information of similar nature. 
By doing so, we were able to show the following two main results for the quasiregular distortion of the Assouad dimension and spectrum, respectively. **Theorem 1**. *Given $K>1$, let $f:\Omega \rightarrow \mathbb{C}$ be a non-constant $K$-quasiregular mapping defined on a domain $\Omega \subset \mathbb{C}$ and let $E\subset \Omega$ be a compact set with $\dim_A E=\alpha\in (0,2)$. Then we have $$\dim_A f(E)\leq \frac{2K\alpha}{2+(K-1)\alpha}<2.$$* **Theorem 2**. *Let $f:\Omega\to\mathbb{C}$ be a $K$-quasiregular map as in Theorem [Theorem 1](#thm:Adim){reference-type="ref" reference="thm:Adim"}. For $t>0$ and a compact set $E\subset\Omega$, we have $$\label{eq:main2} \dim_A^{\theta(t)} f(E) \le \frac{2K\alpha_{t/K}}{2+(K-1)\alpha_{t/K}},$$ where $\theta(t):=\frac{1}{1+t}$ and $\alpha_{t}:=\dim_{A,reg}^{\theta(t)}(E)$.* An immediate Corollary of Theorem [Theorem 1](#thm:Adim){reference-type="ref" reference="thm:Adim"} is the following. **Corollary 3**. *Porosity of compact subsets of $\mathbb{C}$ is preserved under planar quasiregular mappings.* This can be seen as the quasiregular version of Väisälä's result on quasisymmetric invariance of porosity in [@vai:porosity]. The proof of Corollary [Corollary 3](#cor:porous){reference-type="ref" reference="cor:porous"} is immediate by Theorem [Theorem 1](#thm:Adim){reference-type="ref" reference="thm:Adim"} and the characterization of porosity proved by Luukkainen in [@Luukkainen], i.e., a subset of $\mathbb{R}^n$ is porous if, and only if, its Assouad dimension is strictly less than $n$. This paper is organized as follows. Section 2 reviews the definitions of the Assouad dimension and spectrum and the respective results on their distortion under quasiconformal maps.
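For a quick numerical reading of the bound in Theorem 1, the function $\alpha \mapsto 2K\alpha/(2+(K-1)\alpha)$ can be tabulated directly. The following sketch (an illustration, not part of the paper) checks that the bound stays below $2$ on $(0,2)$, is at least $\alpha$, and reduces to the identity when $K=1$, i.e., in the conformal case.

```python
def assouad_bound(K, alpha):
    """Upper bound of Theorem 1 on dim_A f(E) for a K-quasiregular map,
    given dim_A E = alpha in (0, 2)."""
    return 2 * K * alpha / (2 + (K - 1) * alpha)

# conformal case K = 1: the bound is just alpha
assert assouad_bound(1, 1.3) == 1.3

# for K > 1 the bound lies in [alpha, 2): distortion can raise the
# dimension, but never up to the dimension of the plane
for K in (1.5, 2.0, 10.0, 100.0):
    for alpha in (0.1, 0.5, 1.0, 1.5, 1.9):
        assert alpha <= assouad_bound(K, alpha) < 2
```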
In Section 3 we prove Theorems [Theorem 1](#thm:Adim){reference-type="ref" reference="thm:Adim"} and [Theorem 2](#th:Aspec){reference-type="ref" reference="th:Aspec"} by investigating the distortion of dimensions under holomorphic maps. Section 4 contains examples and further remarks on our main results. #### **Acknowledgments.** Part of this research was conducted during the author's visit to University of Jyväskylä. The author wishes to thank Sylvester Eriksson-Bique for the invitation and the Jenny and Arttu Wihuri Foundation for partial support through the "Quasiworld Network\" during this visit. The author also wishes to thank Kai Rajala and Pekka Pankka for the very interesting discussions. # Background A map $f:\Omega \to \mathbb{C}$ defined in a domain $\Omega\subset \mathbb{C}$ is said to be *$K-$quasiregular* for $K\geq1$, if $f$ lies in the local Sobolev space $W^{1,2}_{\scriptstyle{loc}}(\Omega)$ and the inequality $$\label{eq:qc-defn} |Df|^2 \le K \det Df$$ holds a.e. in $\Omega$. Here $Df$ denotes the (a.e. defined) differential matrix and $|{\mathbf A}|=\max\{|{\mathbf A}({\mathbf v})|:|{\mathbf v}|=1\}$ denotes the operator norm of a matrix ${\mathbf A}$. Moreover, if $f$ is a homeomorphism onto its image, then it is a *$K-$quasiconformal* map. The dimension notions we are focusing on are the Assouad dimension and the (regularized) Assouad spectrum. We will first establish the notation we are following. For a point $z\in \mathbb{C}$ and a positive scale $R>0$ we denote by $D(z,R)$ the open disc centered at $z$ of radius $R$. For an arbitrary set $E \subset \mathbb{C}$ and a smaller scale $r\in (0,R)$ we denote by $N(D(z,R) \cap E,r)$ the smallest number of sets of diameter at most $r$ needed to cover $D(z,R) \cap E$. We can now define the *Assouad dimension* of a set $E\subset \mathbb{C}$ as follows $$\dim_A(E) = \inf \left\{\alpha>0 \,:\, {\exists\,C>0\mbox{ s.t. 
} N(D(z,R) \cap E,r) \le C (R/r)^{\alpha} \atop \mbox{ for all $0<r\le R$ and all $z \in E$}} \right\}.$$ Furthermore, given $\theta\in (0,1)$ we can define the *regularized Assouad $\theta$-spectrum* of $E$ as follows $$\label{def:Assouad-spectrum} \dim_{A,reg}^\theta(E) = \inf \left\{\alpha>0 \,:\, {\exists\,C>0\mbox{ s.t. } N(D(z,R) \cap E,r) \le C (R/r)^{\alpha} \atop \mbox{ for all $0<r\le R^{1/\theta}< R< 1$ and all $z \in E$}} \right\}.$$ It should be noted that this is a slight modification of the original spectrum defined by Fraser and Yu in [@fy:assouad-spectrum], where they used the relation $r=R^{1/\theta}$ for the pair of scales instead of $r\leq R^{1/\theta}$. The regularized Assouad spectrum can also be found as the *upper* Assouad spectrum in the literature and is closely related to the one defined in [@fy:assouad-spectrum] (see [@Fraser2020] p. 32). However, since we are only using the notion where the scales are related by an inequality, we will be calling the set of all dimension values in [\[def:Assouad-spectrum\]](#def:Assouad-spectrum){reference-type="eqref" reference="def:Assouad-spectrum"} for all $\theta\in (0,1)$ the *Assouad spectrum* of $E$ and will use the simplified notation $\dim_A^\theta E$. It is often easier to work with specific sets, and especially dyadic squares, instead of arbitrary sets of diameter at most $r$ to cover a set. In particular, for a set $E \subset \mathbb{C}$ and $z \in E, R>0$ we consider the axes-parallel square $Q:=Q(z,R) \subset \mathbb{C}$, we subdivide $Q$ into $2^2$ essentially disjoint sub-squares, each with side length equal to half of the side length of $Q$, and then we subdivide each of those squares in the same fashion, and so on. Let ${\mathcal W}(Q)$ denote the collection of all such squares obtained at any level of the construction, and let ${\mathcal W}_m(Q)$ denote the collection of all squares obtained after $m$ steps. 
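A one-dimensional caricature of these dyadic covering counts can be computed directly (an illustration only, not part of the paper's argument): here the level-$m$ dyadic subintervals of $[0,1]$ meeting the set $E=\{1/n\}\cup\{0\}$, a standard example which reappears in the Final Remarks, play the role of the squares in ${\mathcal W}_m(Q)$.

```python
def dyadic_cover_count(points, R, m):
    """Number of level-m dyadic subintervals of [0, R] that meet the set:
    a 1-d stand-in for counting the squares of W_m(Q) needed to cover E."""
    side = R / 2 ** m
    # clamp so the right endpoint R falls in the last interval
    return len({min(int(p // side), 2 ** m - 1) for p in points if 0 <= p <= R})

E = [1 / n for n in range(1, 100_000)] + [0.0]
counts = [dyadic_cover_count(E, 1.0, m) for m in range(1, 12)]
print(counts)
```

The counts grow roughly like $2^{m/2}$ rather than $2^m$, reflecting the accumulation of $E$ at the origin; this intermediate growth between pairs of scales is exactly what the Assouad spectrum is designed to detect.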
We will denote by $N_d(D(z,R) \cap E,m)$ the number of dyadic squares in ${\mathcal W}_m(Q)$ needed to cover $D(z,R) \cap E$. The following property of the Assouad spectrum proved in [@Chronts] allows us to interchange between squares and arbitrary sets in the definition of $\dim_A^\theta E$. **Proposition 4**. *Let $E \subset \mathbb{C}$ be a bounded subset and let $0<\theta<1$. The Assouad spectrum value $\dim_A^\theta(E)$ is equal to the infimum of all $\alpha>0$ for which there exists $C>0$ so that $$N_d(D(z,R) \cap E,m) \le C 2^{m\alpha}$$ for all $z \in E$ and all $m\in \mathbb{N}$ such that $0<2^{-m}R\le R^{1/\theta} < R < 1$.* We now recall the results on quasiconformal distortion of the Assouad dimension and spectrum, which will be used in Section 3. **Theorem 5** ([@Chronts], Theorem 1.2). *Given $K>1$, let $f:\Omega \rightarrow \Omega'$ be a $K$-quasiconformal mapping between domains $\Omega, \Omega' \subset \mathbb{C}$ and $E\subset \Omega$ be a compact set with $\dim_A E=\alpha\in (0,2)$. Then we have $$\dim_A f(E)\leq \frac{2K\alpha}{2+(K-1)\alpha}<2.$$* **Theorem 6** ([@Chronts], Theorem 1.3). *Let $f:\Omega\to\Omega'$ be a $K$-quasiconformal map as in Theorem [Theorem 5](#thQC:Adim){reference-type="ref" reference="thQC:Adim"}. For $t>0$ and a compact set $E\subset\Omega$, we have $$\label{eq:QCmain2} \dim_A^{\theta(t)} f(E) \le \frac{2K\alpha_{t/K}}{2+(K-1)\alpha_{t/K}},$$ where $\theta(t):=\frac{1}{1+t}$ and $\alpha_{t}:=\dim_{A,reg}^{\theta(t)}(E)$.* It should be noted that the two theorems above were proved for quasiconformal maps defined between domains in $\mathbb{R}^n$, $n\geq 2$. In that setting, however, the upper bounds only implicitly depend on the dilatation $K$ for $n>2$, through the higher integrability exponent of $K$-quasiconformal maps.
The upper bounds are explicit in the two-dimensional case only thanks to Astala's result in [@Astala] calculating the higher integrability exponent for planar quasiconformal maps. We refer the reader to the discussion in the introduction of [@Chronts] for more details. # Distortion of Assouad dimension and spectrum Suppose $f:\Omega \rightarrow \mathbb{C}$ is a non-constant quasiregular map and let $E$ be a subset of $\Omega$ with $\alpha= \dim_A E$. To provide an upper bound on $\dim_A f(E)$ we need a suitable upper bound for $N(D(w,R') \cap f(E),r')$, where $w\in f(E)$, $0<r'<R'$ are arbitrary. As mentioned in the introduction, however, it can be challenging to find the appropriate disc $D(z,R)\subset \Omega$ in the source in order to use our knowledge of coverings of $D(z,R)\cap E$ and count the number of sets that we need to map with $f$ to cover $D(w,R')\cap f(E)$, especially close to points where $f$ fails to be a local homeomorphism. A fundamental property of planar quasiregular maps, which is called the *Stoïlow factorization* of quasiregular maps, is used to simplify this task. More specifically, if $f$ is $K$-quasiregular for some $K\geq 1$, then there is a holomorphic map $h$ and a $K$-quasiconformal map $g$ such that $f=h\circ g$ (see for instance [@LehtoVir] p. 247). We will use this property to reduce the problem to the distortion of the Assouad dimension and spectrum under holomorphic maps. Indeed, in view of Theorem [Theorem 6](#thQC:Aspec){reference-type="ref" reference="thQC:Aspec"} it is enough for the quasiregular distortion of the Assouad spectrum to prove the following: **Theorem 7**. *Let $h:\Omega \rightarrow \mathbb{C}$ be a non-constant holomorphic map in a domain $\Omega \subset \mathbb{C}$ and $E \subset \Omega$ be a compact set. Then $\dim_A^\theta h(E)\leq \dim_A^\theta E$ for all $\theta\in (0,1)$.* *Proof.* Since $E$ is compact, it contains at most finitely many critical points of $h$.
If $E$ contains no critical points of $h$, then there are $m_1, m_2>0$ such that $|h'(z)|\in [m_1, m_2]$ for all $z\in E$, which implies $\dim_A^\theta h(E)= \dim_A^\theta E$ for all $\theta\in (0,1)$ by bi-Lipschitz stability of the Assouad spectrum (see [@Fraser2020] p. 18, 49). Suppose $E$ contains critical points of $h$, which we denote by $c_1, \dots, c_m$. For any $c_j$, there are $\epsilon_j<1/10$ and conformal maps $\phi_j$, $\psi_j$ such that $$\label{eq:conformal} (\phi_j\circ h\circ \psi_j)(z)=z^{d_j},$$ for all $z\in D(c_j, \epsilon_j)$, where $d_j\geq2$ is the degree of $h$ at $c_j$ (see for instance [@Palka] p. 346). Fix $\theta\in (0,1)$. By finite stability of the Assouad spectrum we have $$\label{eq:finite_stab} \dim_A^\theta h(E)= \max\{\dim_A^\theta h(\tilde{E}), \dim_A^\theta h(E\cap D(c_1, \epsilon_1)), \dots, \dim_A^\theta h(E\cap D(c_m, \epsilon_m))\},$$ where $\tilde{E}=E\setminus \cup_{j=1}^m D(c_j,\epsilon_j)$. However, there are $\tilde{m}_1, \tilde{m}_2>0$ such that $|h'(z)|\in [\tilde{m}_1, \tilde{m}_2]$ for all $z\in \tilde{E}$, which implies that $\dim_A^\theta h(\tilde{E})=\dim_A^\theta\tilde{E}\leq \dim_A^\theta E$. Hence, it is enough to show that $\dim_A^\theta h(E\cap D(c_j, \epsilon_j))\leq \dim_A^\theta(E\cap D(c_j, \epsilon_j))$ for all $j= 1, \dots, m$. In fact, by bi-Lipschitz invariance of the Assouad spectrum and [\[eq:conformal\]](#eq:conformal){reference-type="eqref" reference="eq:conformal"}, [\[eq:finite_stab\]](#eq:finite_stab){reference-type="eqref" reference="eq:finite_stab"}, it is enough to prove that the map $h(z)=z^d: D(0,\epsilon)\rightarrow \mathbb{C}$ with $d\geq 2$ and $\epsilon\in(0,1/10)$ satisfies $\dim_A^\theta h(E) \leq \dim_A^\theta E$ for compact $E\subset D(0,\epsilon)$ with $0\in E$. Fix arbitrary $\alpha > \dim_A^\theta E$ and $p>2$. We will show that $\dim_A^\theta h(E)\leq \beta := \frac{p \alpha}{p-2+\alpha}$ and take $\alpha \searrow \dim_A^\theta E$, $p\rightarrow \infty$ to finish the proof.
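As an aside, the role of the auxiliary exponent $\beta$ can be verified with exact rational arithmetic (a side computation, not part of the proof): the identity $\alpha p/\beta = p-2+\alpha$ is the cancellation used at the end of the proof, and $\beta \to \alpha$ as $p \to \infty$ is why letting $p\to\infty$ recovers the claimed bound.

```python
from fractions import Fraction as F

def beta(p, alpha):
    """The exponent beta = p*alpha / (p - 2 + alpha) from the proof."""
    return F(p) * alpha / (F(p) - 2 + alpha)

for p in (3, 5, 17, 1001):
    for alpha in (F(1, 3), F(1), F(3, 2)):
        # alpha*p/beta == p - 2 + alpha, hence
        # -j(p-2) + j*alpha*p/beta == j*alpha for every j
        assert alpha * p / beta(p, alpha) == p - 2 + alpha

# beta approaches alpha from above as p grows
assert beta(10**9, F(3, 2)) > F(3, 2)
assert beta(10**9, F(3, 2)) - F(3, 2) < F(1, 10**8)
```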
Let $w\in h(E)$ and $R'>0$. In fact, by choice of $\epsilon$ it is enough to consider $R'<1/5$. If $0\notin \overline{D(w,R')}\cap E$ then $\dim_A^\theta(h(E)\cap D(w,R'))=\dim_A^\theta(E\cap h^{-1}(D(w,R')))\leq \dim_A^\theta E <\alpha<\beta$ and it follows that there is $C_0>0$ such that $$N(D(w,R')\cap h(E), r')\leq C_0 \left( \frac{R'}{r'} \right)^\beta$$ for all $r'\in (0, (R')^{1/\theta})$. A relation of this kind in the case $0\in \overline{D(w,R')}\cap E$ would finish the proof. Suppose now that $0\in \overline{D(w,R')}$ and set $$r_j':= 2^{\frac{-j\alpha}{\beta}}R'$$ for all $j\geq j_0$, where $j_0\in \mathbb{N}$ is the smallest positive integer such that $2^{\frac{-j_0\alpha}{\beta}}R'\leq (R')^{1/\theta}<2^{\frac{(-j_0+1)\alpha}{\beta}}R'$, and $$\begin{aligned} R &:= (2R')^{1/d} \\ r_j&:= 2^{-j}R. \end{aligned}$$ It is enough to show that there is a constant $C>0$ such that for all $j\geq j_0$ we have $$\label{eq:N_target_desired} N(D(w,R')\cap h(E), r_j')\leq C \left( \frac{R'}{r_j'} \right)^\beta = C 2^{j \alpha},$$ which implies that $\dim_A^\theta h(E)\leq \beta$. Fix $j\geq j_0$. Note that the images of sets in a covering of $D(0,R)\cap E$ naturally form a covering of $h(D(0,R)\cap E)$, which is also a covering of $D(w,R')\cap h(E)$, since $D(w,R')\cap h(E)\subset h(D(0,R)\cap E)$ by choice of $R$ and the fact that $0\in \overline{D(w,R')}$. Hence, we will cover $D(0,R)\cap E$ with dyadic sub-squares $\{ Q_k^j \}$ of $Q(0,R)=[-R,R]\times [-R,R]$ of side length $r_j$ and take their images under $h$ to cover $D(w,R')\cap h(E)$. Moreover, by $j\geq j_0$ we have the inequality $r_j'\leq (R')^{1/ \theta}$, which implies $$\begin{aligned} 2^{\frac{-j\alpha}{\beta}}R'&\leq (R')^{1/\theta}\\ -\frac{j\alpha}{\beta}&\leq \left(\frac{1}{\theta}-1\right)\log_2(R') \\ j&\geq \left(\frac{1}{\theta}-1\right) \frac{\beta}{\alpha}(-\log_2(R')) \\ &\geq\left(\frac{1}{\theta}-1\right) \frac{1}{d}(-\log_2(R')-1).
\end{aligned}$$ The last inequality is important, since it ensures that for $j\geq j_0$ we have $$r_j=2^{-j} R= 2^{-j} (2R')^{1/d}\leq ((2R')^{1/d})^{1/\theta}=R^{1/\theta}.$$ This allows us to use $\alpha>\dim_A^\theta E$ and Proposition [Proposition 4](#prop:Assouad-spectrum-technical-proposition){reference-type="ref" reference="prop:Assouad-spectrum-technical-proposition"} to count how many sub-squares $\{ Q_k^j\}$ of $Q(0,R)$ we actually need. In particular, there is $C>0$ such that $$\label{eq:dyad_source} N_d(D(0,R)\cap E, j)\leq C 2^{j\alpha}.$$ If all squares $Q_k^j$ have images under $h$ of diameter at most $r_j'$, i.e., $\mathop{\mathrm{diam}}h(Q_k^j)\leq r_j'$ for all $k$, then by [\[eq:dyad_source\]](#eq:dyad_source){reference-type="eqref" reference="eq:dyad_source"} the covering $\{h(Q_k^j)\}$ of $D(w,R')\cap h(E)$ consists of at most $C 2^{j\alpha}=C \left( \frac{R'}{r_j'} \right)^\beta$ sets, as needed for the upper bound on $\dim_A^\theta h(E)$. Suppose this is not the case and there are squares $Q_k^j$ for which $\mathop{\mathrm{diam}}h(Q_k^j)>r_j'$. The plan is to sub-divide these squares and their "children\" until eventually their images under $h$ have diameter at most $r_j'$. We know this is possible by uniform continuity of $h$, but we need to count how many times we have to sub-divide, i.e., how many images of such squares we will end up including in our covering. For $\ell\geq j$ we call a sub-square $Q_{k,n}^{\ell}$ of $Q_k^j$ of side length $r_{\ell}$ $j$-**major** if $\mathop{\mathrm{diam}}h(Q_{k,n}^{\ell})>r_j'$ and $j$-**minor** otherwise. We need to count all $j$-major sub-squares of all levels $\ell$, subdivide them all once and count the number of resulting $j$-minor squares, which is an upper bound for $N(D(w,R')\cap h(E), r_j')$. For all $j$-major $Q_{k,n}^{\ell}$, we have that $h\in W^{1,p}(Q_{k,n}^{\ell})$.
Applying the Morrey-Sobolev inequality to $h$ on such a major sub-square gives $$\mathop{\mathrm{diam}}h(Q_{k,n}^{\ell}) \leq C_S (\mathop{\mathrm{diam}}(Q_{k,n}^{\ell}))^{1-2/p} \left( \int_{Q_{k,n}^{\ell}} |Dh|^p \right)^{1/p}.$$ For what follows, we will not keep track of multiplying constants that do not depend on the scales $R$, $R'$ or the levels $j$, $\ell$ and keep denoting all of them by $C_S$. Since $Q_{k,n}^{\ell}$ is $j$-major, the above inequality implies $$r_j'\leq C_S\, r_\ell^{1-2/p} \left( \int_{Q_{k,n}^{\ell}} |Dh|^p \right)^{1/p},$$ which by definition of $r_j'$ and $r_\ell$ leads to $$2^{-\frac{j\alpha p}{\beta}} (R')^p\leq C_S\, 2^{-\ell(p-2)}R^{p-2} \int_{Q_{k,n}^{\ell}} |Dh|^p .$$ Since all $j$-major $\ell$-level sub-squares are essentially disjoint, we can sum over all such squares in the above inequality and have $$2^{-\frac{j\alpha p}{\beta}} (R')^p M(\ell)\leq C_S\, 2^{-\ell(p-2)}R^{p-2} \int_{\bigcup_{k,n}Q_{k,n}^{\ell}} |Dh|^p ,$$ where $M(\ell)$ is the number of all $\ell$-level $j$-major squares. Note that $k$ runs through all $j$-major sub-squares $Q_k^j$ of $Q(0,R)$ and $n$ runs through all $j$-major sub-squares of $Q_k^j$ of side length $r_\ell$.
Summing over all levels $\ell\geq j$ and noting that $R=(2R')^{1/d}$ results in $$2^{-\frac{j\alpha p}{\beta}} (R')^p \sum_{\ell\geq j}M(\ell)\leq C_S\, \sum_{\ell\geq j} 2^{-\ell(p-2)}(R')^{\frac{p-2}{d}} \left( \int_{\bigcup_{k,n,\ell}Q_{k,n}^{\ell}} |Dh|^p \right).$$ However, the above inequality along with $\sum_{\ell\geq j} 2^{-\ell(p-2)}\leq C_1 2^{-j(p-2)}$ and $$\left( \int_{\bigcup_{k,n,\ell}Q_{k,n}^{\ell}} |Dh|^p \right)\leq \left( \int_{D(0, \sqrt{2} (2R')^{1/d})} |Dh|^p \right) \leq C_2 (R')^{2/d} \left((R')^{1/d}\right)^{p(d-1)}$$ implies that $$2^{-\frac{j\alpha p}{\beta}} (R')^p \sum_{\ell\geq j}M(\ell)\leq C_S\, 2^{-j(p-2)} (R')^{p/d-2/d} (R')^{2/d} \left((R')^{1/d}\right)^{p(d-1)}.$$ It is now clear that all terms with $R'$ cancel out and because $$-j(p-2)+j\alpha p/\beta=-j(p-2)+j(p-2+\alpha)=j \alpha$$ by definition of $\beta=p\alpha/(p-2+\alpha)$, we have $$\sum_{\ell\geq j}M(\ell)\leq C_S \, 2^{j\alpha}.$$ Since $\sum_{\ell\geq j}M(\ell)$ is the number of all $j$-major sub-squares of $Q(0,R)$, subdividing all of them once will lead to a covering consisting of only $j$-minor squares, which are at most $$4\sum_{\ell\geq j}M(\ell)\leq 4C_S \, 2^{j\alpha}.$$ As a result, their images form a covering of $D(w,R')\cap h(E)$ by sets of diameter at most $r_j'$, which implies $$N(D(w,R')\cap h(E), r_j')\leq 4C_S \, 2^{j\alpha}= C \left( \frac{R'}{r_j'} \right)^\beta$$ as needed to complete the proof.
◻ *Proof of Theorem [Theorem 2](#th:Aspec){reference-type="ref" reference="th:Aspec"}:* By the decomposition $f=h\circ g$, where $h$ is holomorphic and $g$ is $K$-quasiconformal, the proof is immediate from Theorems [Theorem 7](#thm:Holom){reference-type="ref" reference="thm:Holom"} and [Theorem 6](#thQC:Aspec){reference-type="ref" reference="thQC:Aspec"}.\ ◻\ Note that Theorem [Theorem 1](#thm:Adim){reference-type="ref" reference="thm:Adim"} does not immediately follow by Theorem [Theorem 2](#th:Aspec){reference-type="ref" reference="th:Aspec"} itself, since taking $t\rightarrow 0^+$, i.e., $\theta \rightarrow 1^-$, in [\[eq:main2\]](#eq:main2){reference-type="eqref" reference="eq:main2"} would only imply the desired upper bound for the so-called *quasi-Assouad dimension* $\dim_{qA} h(E):=\lim_{\theta\rightarrow 1^-}\dim_A^\theta h(E)$, which can differ from the Assouad dimension (see [@Fraser2020], Section 5.3 for instance). However, the proof of Theorem [Theorem 1](#thm:Adim){reference-type="ref" reference="thm:Adim"} also relies on the fact that holomorphic maps do not increase the Assouad dimension, which can be proved almost identically to how Theorem [Theorem 7](#thm:Holom){reference-type="ref" reference="thm:Holom"} was proved. **Theorem 8**. *Let $h:\Omega \rightarrow \mathbb{C}$ be a non-constant holomorphic map in a domain $\Omega \subset \mathbb{C}$ and $E \subset \Omega$ be a compact set. Then $\dim_A h(E)\leq \dim_A E$.* *Proof.* Following the reductions in the proof of Theorem [Theorem 7](#thm:Holom){reference-type="ref" reference="thm:Holom"}, it is enough to prove that the map $h(z)=z^d: D(0,\epsilon)\rightarrow \mathbb{C}$ with $d\geq 2$ and $\epsilon\in(0,1/10)$ satisfies $\dim_A h(E) \leq \dim_A E$ for compact $E\subset D(0,\epsilon)$ with $0\in E$. Let $\alpha>\dim_A E$, $p>2$ and set $\beta=\frac{p\alpha}{p-2+\alpha}$. Let $w\in h(E)$ and $R'\in(0,1/5)$.
The proof follows exactly the same way as the proof of Theorem [Theorem 7](#thm:Holom){reference-type="ref" reference="thm:Holom"}, with the only difference being that the scales $$\begin{aligned} r_j'&:= 2^{\frac{-j\alpha}{\beta}}R'\\ R &:= (2R')^{1/d} \\ r_j&:= 2^{-j}R. \end{aligned}$$ are now defined for all $j\in \mathbb{N}$. This does not change any of the arguments in the proof of Theorem [Theorem 7](#thm:Holom){reference-type="ref" reference="thm:Holom"} and we can similarly prove that there is $C>0$ such that $$N(D(w,R')\cap h(E),r_j')\leq C 2^{j\alpha}= C \left( \frac{R'}{r_j'} \right)^\beta.$$ Since $w$ and $R'$ were arbitrary, this shows that $\dim_A h(E)\leq \beta$ and taking $\alpha\searrow \dim_A E$, $p\rightarrow \infty$ completes the proof. ◻ *Proof of Theorem [Theorem 1](#thm:Adim){reference-type="ref" reference="thm:Adim"}:* It follows directly by the decomposition $f=h\circ g$ and Theorems [Theorem 8](#thm:HolomA){reference-type="ref" reference="thm:HolomA"} and [Theorem 5](#thQC:Adim){reference-type="ref" reference="thQC:Adim"}. ◻ # Final Remarks It is important to note that in both Theorem [Theorem 1](#thm:Adim){reference-type="ref" reference="thm:Adim"} and Corollary [Corollary 3](#cor:porous){reference-type="ref" reference="cor:porous"}, the restriction to compact sets is necessary. Consider the principal logarithm $h(z)=-\text{Log} z$ defined on $\mathbb{C}\setminus \{ t\in \mathbb{R}: t\leq0 \}$. This is a holomorphic map that takes the family of punctured circles $$E=\{ e^{-n} e^{it}: n\in \mathbb{N}, \, t\in \mathbb{R}\setminus \{ (2k+1) \pi: k\in \mathbb{Z}\} \}$$ onto the family of countably punctured vertical lines $$h(E)=\left\{ n-it: n\in \mathbb{N}, \, t\in \mathbb{R}\setminus \{ (2k+1) \pi: k\in \mathbb{Z}\} \right\} .$$ However, $\dim_A E=1<2$, so it is a porous set, while $h(E)$ is not, since $\dim_A h(E)=2$. Another remark is regarding the strictness of the quasiregular (and holomorphic) distortion of the Assouad spectrum.
Consider the set $E=\{ 1/n: n\in \mathbb{N}\}\cup \{0\}$ and the holomorphic map $h(z)=z^2$. We have $h(E)=\{ 1/n^2: n\in \mathbb{N}\}\cup \{0\}$ and (by [@Fraser2020] p. 33, 42) $$\dim_A^\theta E= \min\left\{ \frac{1}{2(1-\theta)}, 1 \right\},$$ $$\dim_A^\theta h(E)= \min\left\{ \frac{1}{3(1-\theta)}, 1 \right\}.$$ As a result, we have the strict inequality $\dim_A^{1/2} h(E)=2/3<1=\dim_A^{1/2} E$. We finish the paper by recalling that the quasiconformal distortion Theorems [Theorem 5](#thQC:Adim){reference-type="ref" reference="thQC:Adim"} and [Theorem 6](#thQC:Aspec){reference-type="ref" reference="thQC:Aspec"} actually hold in higher dimensions. However, the techniques used to prove Theorems [Theorem 1](#thm:Adim){reference-type="ref" reference="thm:Adim"} and [Theorem 2](#th:Aspec){reference-type="ref" reference="th:Aspec"} heavily relied on properties of holomorphic maps and the fundamental fact that planar quasiregular maps can be written as a composition of holomorphic and quasiconformal mappings. In higher dimensions, while the "analytic" part of the proof could be reproduced similarly to the arguments used for Theorems [Theorem 5](#thQC:Adim){reference-type="ref" reference="thQC:Adim"} and [Theorem 6](#thQC:Aspec){reference-type="ref" reference="thQC:Aspec"} in [@Chronts], the "topological" part of going back from the image to the source in an appropriate way is greatly hindered by the unpredictable properties of the branch set of quasiregular maps, which can be of higher Hausdorff dimension (see Theorems 1.1 and 1.3 in [@BonkQR]). 
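The closed-form spectra quoted above can be evaluated directly; the following is a quick numerical check of ours (the helper names `spec_E` and `spec_hE` are hypothetical, not from [@Fraser2020]):

```python
# Evaluate the closed-form Assouad spectra of E = {1/n} u {0} and
# h(E) = {1/n^2} u {0} quoted above (helper names are ours).

def spec_E(theta):
    # dim_A^theta E = min(1 / (2 (1 - theta)), 1)
    return min(1.0 / (2.0 * (1.0 - theta)), 1.0)

def spec_hE(theta):
    # dim_A^theta h(E) = min(1 / (3 (1 - theta)), 1)
    return min(1.0 / (3.0 * (1.0 - theta)), 1.0)

# At theta = 1/2 the map h(z) = z^2 strictly decreases the spectrum:
print(spec_E(0.5), spec_hE(0.5))  # 1.0 versus 2/3
```

Both spectra increase to $1$ as $\theta \rightarrow 1^-$, consistent with $\dim_A E = \dim_A h(E) = 1$, while the spectrum of $h(E)$ lies below that of $E$ for every $\theta \in (0,1)$ with $\frac{1}{2(1-\theta)} < 1$.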
While there are results on the behavior of higher-dimensional quasiregular maps around the branch set (see for instance [@BlochEremenko], [@BlochKai1], [@BlochKai2], [@EggYolkQR]), and even partial results regarding the porosity invariance around the branch set (see [@KaiOnninenBranchImagePorous] and [@SarvasDimHofBranch]), it can still be difficult to establish a strong enough connection between the coverings we need in the image and the ones we can count in the source. This was not a problem in two dimensions because of the discreteness of the critical set and the well-understood local behavior of holomorphic maps around critical points. Assouad, P. Plongements lipschitziens dans ${\bf R}^{n}$. , 4 (1983), 429--448. Astala, K. Area distortion of quasiconformal mappings. , 1 (1994), 37--60. Bonk, M., and Heinonen, J. Smooth quasiregular mappings with branching. , 100 (2004), 153--170. Branner, B., and Fagella, N. , vol. 141 of *Cambridge Studies in Advanced Mathematics*. Cambridge University Press, Cambridge, 2014. With contributions by Xavier Buff, Shaun Bullett, Adam L. Epstein, Peter Haïssinsky, Christian Henriksen, Carsten L. Petersen, Kevin M. Pilgrim, Tan Lei and Michael Yampolsky. Chrontsios Garitsis, E. K., and Tyson, J. T. Quasiconformal distortion of the Assouad spectrum and classification of polynomial spirals. , 1 (2023), 282--307. Douady, A. Systèmes dynamiques holomorphes. In *Bourbaki seminar, Vol. 1982/83*, vol. 105-106 of *Astérisque*. Soc. Math. France, Paris, 1983, pp. 39--63. Douady, A., and Hubbard, J. H. On the dynamics of polynomial-like mappings. , 2 (1985), 287--343. Eremenko, A. Bloch radius, normal families and quasiregular mappings. , 2 (2000), 557--560. Falconer, K. , third ed. John Wiley & Sons, Ltd., Chichester, 2014. Mathematical foundations and applications. Fraser, J. M. , vol. 222 of *Cambridge Tracts in Mathematics*. Cambridge University Press, 2020. Fraser, J. M., and Yu, H. 
New dimension spectra: finer information on scaling and homogeneity. (2018), 273--328. Gehring, F. W., and Väisälä, J. Hausdorff dimension and quasiconformal mappings. (1973), 504--512. Kaufman, R. P. Sobolev spaces, dimension, and random series. , 2 (2000), 427--431. Lehto, O., and Virtanen, K. I. , second ed., vol. Band 126 of *Die Grundlehren der mathematischen Wissenschaften*. Springer-Verlag, New York-Heidelberg, 1973. Translated from the German by K. W. Lucas. Luukkainen, J. Assouad dimension: antifractal metrization, porous sets, and homogeneous measures. , 1 (1998), 23--76. Onninen, J., and Rajala, K. Quasiregular mappings to generalized manifolds. (2009), 33--79. Palka, B. P. . Undergraduate Texts in Mathematics. Springer-Verlag, New York, 1991. Poggi-Corradini, P., and Rajala, K. An egg-yolk principle and exponential integrability for quasiregular mappings. , 2 (2007), 531--544. Rajala, K. A lower bound for the Bloch radius of $K$-quasiregular mappings. , 9 (2004), 2593--2601. Rajala, K. Bloch's theorem for mappings of bounded and finite distortion. , 2 (2007), 445--460. Rickman, S. , vol. 26 of *Ergebnisse der Mathematik und ihrer Grenzgebiete (3) \[Results in Mathematics and Related Areas (3)\]*. Springer-Verlag, Berlin, 1993. Sarvas, J. The Hausdorff dimension of the branch set of a quasiregular mapping. , 2 (1975), 297--307. Shishikura, M. On the quasiconformal surgery of rational functions. , 1 (1987), 1--29. Väisälä, J. Porous sets and quasisymmetric maps. (1987), 525--533.
--- abstract: | A system of a first-order history-dependent evolutionary variational-hemivariational inequality with unilateral constraints coupled with a nonlinear ordinary differential equation in a Banach space is studied. Based on a fixed point theorem for history-dependent operators, results on the well-posedness of the system are proved. Existence, uniqueness, continuous dependence of the solution on the data, and the solution regularity are established. Two applications to dynamic problems from contact mechanics illustrate the abstract results. The first application is a unilateral viscoplastic frictionless contact problem which leads to a hemivariational inequality for the velocity field, and the second deals with a viscoelastic frictional contact problem which is described by a variational inequality. **Key words.** Variational-hemivariational inequality, history-dependent operator, unilateral constraint, evolution triple, inclusion, subgradient, contact problem. **2010 Mathematics Subject Classification.** 35K86, 35M86, 35R70, 47J20, 49J40, 74M10, 74M15. author: - "Stanisław Migórski [^1]" title: | Well-posedness of Constrained Evolutionary Differential Variational--Hemivariational Inequalities\ with Applications [^2] --- # Introduction {#Intro} In this paper we study the following system of an evolutionary variational-hemivariational inequality coupled with an ordinary differential equation in a Banach space. Find ${\mathtt x} \colon (0, T) \to E$ and $w \colon (0, T) \to V$ with $w(t) \in K$ for a.e. 
$t \in (0, T)$ such that $$\begin{aligned} &&\hspace{-0.5cm} {\mathtt x}'(t) = F(t,{\mathtt x}(t), w(t), (Sw)(t)) \ \ \mbox{\rm for a.e.} \ \ t \in (0,T), \label{001} \\[1mm] &&\hspace{-0.5cm} \langle w'(t) + A(t, {\mathtt x}(t), w(t)) + (R_1 w)(t), v - w(t) \rangle + \, j^0 (t, {\mathtt x}(t), (R_2w)(t), Mw(t); Mv - Mw(t)) \nonumber \\[1mm] && \quad \ \, + \, \varphi (t, {\mathtt x}(t), (R_3w)(t), Mv) - \varphi(t, {\mathtt x}(t), (R_3w)(t), Mw(t)) \ge \langle f(t, {\mathtt x}(t)), v - w(t) \rangle \nonumber \\[1mm] &&\qquad \ \ \ \ \mbox{\rm for all} \ v \in K, \ \mbox{\rm a.e.} \ t \in (0,T), \label{002} \\ &&\hspace{-0.5cm} {\mathtt x}(0) = {\mathtt x}_0, \ \ w(0) = w_0 . \label{003}\end{aligned}$$ Here, $E$ is a Banach space, and $(V, H, V^*)$ is an evolution triple of spaces. Problem ([\[001\]](#001){reference-type="ref" reference="001"})--([\[003\]](#003){reference-type="ref" reference="003"}) represents a system which couples the ordinary differential equation ([\[001\]](#001){reference-type="ref" reference="001"}) with the constrained variational-hemivariational inequality ([\[002\]](#002){reference-type="ref" reference="002"}), supplemented with the initial conditions ([\[003\]](#003){reference-type="ref" reference="003"}). Following the terminology in [@Anh; @LMZ2017; @Liu1; @MZJOGO2018], we refer to ([\[001\]](#001){reference-type="ref" reference="001"})--([\[003\]](#003){reference-type="ref" reference="003"}) as an evolutionary differential variational-hemivariational inequality. 
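To fix ideas (a toy finite-dimensional illustration of ours, not taken from the paper), consider a scalar analogue of ([\[001\]](#001){reference-type="ref" reference="001"})--([\[003\]](#003){reference-type="ref" reference="003"}) with $E = V = \mathbb{R}$, $K = [0, \infty)$, no history-dependent terms, $A(w) = w$, $F(x, w) = -x + w$ and right-hand side $f(x) = x - 1$. An implicit Euler step for the inequality then reduces to a projection onto $K$; all parameter choices below are ours:

```python
# Toy scalar analogue (ours, not from the paper) of the coupled system:
#   x'(t) = -x(t) + w(t),                                   x(0) = 1,
#   w(t) >= 0,  (w'(t) + w(t) - (x(t) - 1)) (v - w(t)) >= 0 for all v >= 0,
# with w(0) = 1/2.  An implicit Euler step for the inequality amounts to a
# projection onto K = [0, inf):
#   w_{n+1} = max(0, (w_n + dt * (x_n - 1)) / (1 + dt)).

dt, T = 1e-3, 10.0
N = int(T / dt)
x, w = 1.0, 0.5
xs, ws = [x], [w]
for n in range(N):
    w_new = max(0.0, (w + dt * (x - 1.0)) / (1.0 + dt))  # projected implicit step
    x_new = x + dt * (-x + w)                            # explicit Euler for the ODE
    x, w = x_new, w_new
    xs.append(x)
    ws.append(w)

print(min(ws) >= 0.0, w)  # the constraint w(t) in K holds; w settles at 0
```

Since $x(t) - 1 \le 0$ along the trajectory, the "reaction" produced by the constraint keeps $w$ at the boundary of $K$ once it reaches it, and $x$ then decays through the uncoupled equation $x' = -x$.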
The inequality ([\[002\]](#002){reference-type="ref" reference="002"}) is coupled with equation ([\[001\]](#001){reference-type="ref" reference="001"}) through the solution $w$ and its history value $Sw$. Here $j^0$ denotes the generalized directional derivative in the last variable of a nonconvex and nondifferentiable function $j \colon (0, T) \times E \times Z \times X \to \mathbb{R}$, $\varphi \colon (0, T) \times E \times Y \times X \to \mathbb{R}$ is a function convex in the last variable, $M \colon V\rightarrow X$ is an affine operator, $S$, $R_1$, $R_2$, $R_3$ are the so-called history-dependent operators, and $K \subset V$ represents a set of constraints. For a history-dependent operator, the current value for a given function at the time instant $t$ depends on the values of the function at the time instants from $0$ to $t$. For this reason, the problem ([\[001\]](#001){reference-type="ref" reference="001"})--([\[003\]](#003){reference-type="ref" reference="003"}) is also called a history-dependent differential variational-hemivariational inequality. The main features of the system are the presence of the constraint set $K$ and the strong coupling between the differential equation and the inequality, which leads to complex time-dependent dynamics. Problem ([\[001\]](#001){reference-type="ref" reference="001"})--([\[003\]](#003){reference-type="ref" reference="003"}) is new and, to the best of our knowledge, has not been studied in the literature. Our main existence and uniqueness result, in a particular case, solves an open problem stated in [@SOFMIG Section 10.4], and extends the recent result in [@MigorskiBiao Theorem 20] obtained for a purely variational-hemivariational inequality. There are two main goals of the paper. 
First, we establish a well-posedness result which includes existence, uniqueness and continuous dependence of the solution to the system ([\[001\]](#001){reference-type="ref" reference="001"})--([\[003\]](#003){reference-type="ref" reference="003"}) on the data. In particular, if $f$ is assumed to be independent of ${\mathtt x}$, then we show, see Theorem [Theorem 4](#Theorem1){reference-type="ref" reference="Theorem1"}, that the map $({\mathtt x}_0, w_0, f) \mapsto ({\mathtt x}, w)$ is Lipschitz continuous. The second goal of the paper is to provide two applications to new classes of dynamic problems in contact mechanics. In the first application we study a dynamic unilateral viscoplastic frictionless contact problem involving a nonmonotone Clarke subdifferential boundary condition. Its weak formulation is a hemivariational inequality coupled with an ordinary differential equation. The second application concerns a dynamic frictional viscoelastic contact problem with adhesion which leads to a variational inequality combined with a differential equation on the contact surface. Only special versions of the system ([\[001\]](#001){reference-type="ref" reference="001"})--([\[003\]](#003){reference-type="ref" reference="003"}) have been explored in the literature. For instance, the evolutionary variational-hemivariational inequalities without a constraint set $K$, and with and without history-dependent operators and their variants are discussed in [@SOFMIG Chapter 7]. If $K=V$, $\varphi = 0$, and $A$, $j$, $f$ are independent of ${\mathtt x}$, then ([\[002\]](#002){reference-type="ref" reference="002"})--([\[003\]](#003){reference-type="ref" reference="003"}) reduces to the evolution hemivariational inequality considered in [@MOgorzaly]. 
If the operator $A$ and the locally Lipschitz function $j$ are assumed to be independent of ${\mathtt{x}}$ and $R_1$, then the problem reduces to the parabolic hemivariational inequality studied, for example, in [@Liu2008; @Miettinen; @MO2004]. Very recently the system has been treated in [@ZM2021] with $K=V$, $\varphi = 0$, and $F$, $j$ independent of the history-dependent operators, and $f$ independent of the variable ${\mathtt x}$. A general dynamic variational-hemivariational inequality with history-dependent operators without constraints can be found in [@HMS2017]. All aforementioned papers treat the special case of ([\[002\]](#002){reference-type="ref" reference="002"}) with $K = V$. In contact mechanics, the ordinary differential equation ([\[001\]](#001){reference-type="ref" reference="001"}) supplementing ([\[002\]](#002){reference-type="ref" reference="002"}) appears naturally. For example, the system is met in rate-type viscoplastic constitutive laws, including the internal state variables in viscoplasticity, see [@SOFMIG Chapter 3], and in modeling of additional effects like wear and adhesion phenomena in contact problems, see [@SST Chapter 11], [@SHS Chapter 2] and the references therein. We remark that the notion of a history-dependent operator was introduced in [@SM1] and used in several recent papers [@HMS2017; @Migorski2020; @MOS13; @MOS18; @MOgorzaly; @SHM; @SM1; @SMH2018; @SP; @SX]. The study of differential variational inequalities was initiated and first systematically developed in [@Pang] in finite-dimensional spaces. Note also that a related result on a class of differential hemivariational inequalities which consists of a hemivariational inequality of parabolic type combined with a nonlinear evolution equation has been delivered by the Rothe method in [@MZJOGO2018]. The differential parabolic-parabolic variational inequalities are examined in [@Anh] by using the technique of measure of noncompactness. 
Results on partial differential variational inequalities with nonlocal boundary conditions can be found in [@LMZ2017]. The literature on hemivariational inequalities has been significantly enlarged in the last forty years; see the monographs [@CLM; @Go11.I; @MOSBOOK; @NP; @SOFMIG] and, for the analysis of various classes of such inequalities, see e.g. [@Bartosz; @Gwinner; @HMS2017; @KULIG; @Liu2008; @LMZ2017; @Liu1; @MMM; @MHZ2020; @Migorski2021; @MZJOGO2018; @ZM2021]. By our approach, we relax the assumptions on similar problems treated in the aforementioned papers and considerably improve some results by allowing the history-dependent operators to appear in the convex term and in the generalized directional derivative of a nonconvex potential. The paper is structured as follows. In Section [2](#Prelim){reference-type="ref" reference="Prelim"}, we recall some prerequisites needed throughout this paper. In Section [3](#s1){reference-type="ref" reference="s1"}, we state and prove the main results on the well-posedness of the system ([\[001\]](#001){reference-type="ref" reference="001"})--([\[003\]](#003){reference-type="ref" reference="003"}). Next, in Section [4](#Application1){reference-type="ref" reference="Application1"}, we consider a model of contact which leads to a differential hemivariational inequality for the velocity field of the form ([\[001\]](#001){reference-type="ref" reference="001"})--([\[003\]](#003){reference-type="ref" reference="003"}). Finally, for the second contact problem in Section [5](#Application2){reference-type="ref" reference="Application2"}, under appropriate hypotheses on the data, we prove the well-posedness for a differential variational inequality with constraints which is a weak form of the contact model. # Preliminaries {#Prelim} Let $(X, \| \cdot \|_X)$ be a normed space, $X^*$ be the dual of $X$ and $\langle\cdot,\cdot\rangle_{X^*\times X}$ denote the duality brackets for the pair $(X^*, X)$. 
For simplicity, when no confusion arises, we often skip the subscripts. For Banach spaces $X$, $Y$, we will use the notation ${\mathcal L}(X, Y)$ for the set of all linear continuous operators from $X$ to $Y$. The notation $\| A \|$ stands for the operator norm of $A \colon X \to Y$ in ${\mathcal L(X, Y)}$. Given a set $D \subset X$, we write $\| D \|_X = \sup \{ \| x \|_X \mid x \in D \}$. We denote by $C([0, T]; X)$ the space of continuous functions on $[0, T]$ with values in $X$. It is well known that if $X$ is a Banach space, then $C([0, T]; X)$ is also a Banach space. Let $h \colon X \to \mathbb{R}$ be a locally Lipschitz function. The generalized (Clarke) directional derivative of $h$ at the point $x \in X$ in the direction $v \in X$ is defined by $$h^{0}(x; v) = \limsup_{y \to x, \ \lambda \downarrow 0} \frac{h(y + \lambda v) - h(y)}{\lambda}.$$ The generalized subgradient of $h$ at $x$ is a subset of the dual space $X^*$ given by $$\partial h (x) = \{\, \zeta \in X^* \mid h^{0}(x; v) \ge {\langle \zeta, v \rangle} \ \mbox{for all} \ v \in X \, \}.$$ A locally Lipschitz function $h$ is said to be regular (in the sense of Clarke) at the point $x \in X$ if for all $v \in X$ the directional derivative $h' (x; v)$ exists and $h^0(x; v) = h'(x; v)$. Throughout the paper, the generalized derivative and the generalized subgradient are always taken with respect to the last variable of a given function. We say that a map $A \colon X \to X^*$ is demicontinuous if the function $u \mapsto \langle A u, v \rangle_{X^* \times X}$ is continuous for all $v \in X$, i.e., $A$ is continuous as a mapping from $X$ to $X^*$ endowed with the $w^*$-topology. It is hemicontinuous if for all $u$, $v$, $w \in X$, the function $t \mapsto \langle A(u+tv), w \rangle_{X^* \times X}$ is continuous on $[0,1]$. A map $A$ is monotone if $\langle Au - Av, u - v \rangle_{X^* \times X} \ge 0$ for all $u$, $v \in X$. 
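To make the definition of $h^0$ concrete (an illustration of ours, not from the references): for $h(x) = |x|$ one has $h^0(0; v) = |v|$ and $\partial h(0) = [-1, 1]$. The generalized derivative can be approximated by maximizing the difference quotients over sampled $y$ near $x$ and small $\lambda > 0$; a minimal Python sketch (the helper name is ours):

```python
# Approximate the Clarke generalized directional derivative
#   h^0(x; v) = limsup_{y -> x, lam -> 0+} (h(y + lam*v) - h(y)) / lam
# by maximizing the quotient over sampled y near x and small lam > 0.

def clarke_upper(h, x, v, eps=1e-4, n=200):
    best = float("-inf")
    for i in range(1, n + 1):
        y = x - eps + (2 * eps) * i / n      # y ranges over (x - eps, x + eps]
        for k in range(1, 6):
            lam = eps / 10**k                # lam decreasing to 0+
            best = max(best, (h(y + lam * v) - h(y)) / lam)
    return best

# For h(x) = |x| at x = 0 the quotient is maximized at y with sign(y) = sign(v),
# giving h^0(0; v) = |v|:
print(clarke_upper(abs, 0.0, 1.0))   # ~ 1
print(clarke_upper(abs, 0.0, -2.0))  # ~ 2
```

Note that $h(x) = |x|$ is not regular-free of interest here: it is in fact regular at every point, and $h^0(0; \cdot)$ is the support function of the subgradient interval $[-1, 1]$.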
It is strongly monotone with constant $c > 0$ if $\langle Au - Av, u - v \rangle_{X^* \times X} \ge c \, \|u-v\|^2_X$ for all $u$, $v \in X$. For more details, we refer to [@Clarke; @DMP1; @DMP2; @MOSBOOK]. The following definition and a fixed point result (being a consequence of the Banach contraction principle) can be found in [@SOFMIG Definition 30 and Theorem 67], respectively. **Definition 1**. *Let ${\mathbb X}$ and ${\mathbb Y}$ be normed spaces. An operator ${\mathbb S} \colon L^2(0,T; {\mathbb X}) \to L^2(0,T; {\mathbb Y})$ is called a history-dependent operator if there exists $M > 0$ such that $$\| ({\mathbb S} u_1)(t)- ({\mathbb S}u_2)(t) \|_{{\mathbb Y}} \le M \int_0^t \| u_1(s) -u_2(s)\|_{{\mathbb X}} \, ds$$ for all $u_1$, $u_2 \in L^2(0,T; {\mathbb X})$, a.e. $t \in (0, T)$.* **Lemma 2**. *Let ${\mathbb X}$ be a Banach space and $0 < T < \infty$. Let $F \colon L^2(0,T; {\mathbb X}) \to L^2(0,T; {\mathbb X})$ be an operator such that $$\| (F \eta_1)(t) - (F \eta_2)(t)\|^2_{\mathbb X} \le c \int_0^t \| \eta_1(s) - \eta_2(s)\|^2_{\mathbb X} \, ds$$ for all   $\eta_1$, $\eta_2 \in L^2(0,T;{\mathbb X})$, a.e. $t \in (0,T)$ with a constant $c > 0$. Then $F$ has a unique fixed point in $L^2(0, T; {\mathbb X})$, i.e., there exists a unique $\eta^* \in L^2(0,T;{\mathbb X})$ such that $F \eta^* = \eta^*$.* # Well-posedness of the abstract problem {#s1} In this section we provide results on the well-posedness of the differential variational-hemivariational inequalities. We shall establish existence and uniqueness of the solution and prove its continuous dependence on the initial conditions and the function $f$. Let $E$, $U$, $X$, $Y$ and $Z$ be Banach spaces and $(V, H, V^*)$ be an evolution triple of spaces. The latter means that $V$ is a separable reflexive Banach space and $H$ is a separable Hilbert space such that the embedding $V \subset H$ is continuous and dense. 
Then $H$ is embedded continuously and densely in $V^*$, and the duality brackets $\langle \cdot, \cdot \rangle$ for the pair $(V^*, V)$ and the inner product $\langle \cdot, \cdot \rangle_H$ on $H$ coincide on $H \times V$. In what follows we denote by $\| \cdot \|$ the norm in $V$. We set $${\mathbb{W}} = \{ \, w \in L^2(0, T; V) \mid w' \in L^2(0,T; V^*) \, \},$$ where the time derivative $w'$ is understood in the distributional sense. It is known that $L^2(0,T;V)^* \simeq L^2(0,T; V^*)$ and the space ${\mathbb{W}}$ endowed with the norm $\| w \|_{{\mathbb{W}}} = \| w \|_{L^2(0, T; V)} + \| w' \|_{L^2(0, T; V^*)}$ is a separable reflexive Banach space. We deal with the following system. **Problem 3**. *Find ${\mathtt x}\in H^1(0,T; E)$ and $w \in {\mathbb{W}}$ such that $w(t) \in K$ for a.e. $t \in (0, T)$ and $$\begin{cases} \displaystyle {\mathtt x}'(t) = F(t,{\mathtt x}(t), w(t), (Sw)(t)) \ \ \mbox{\rm for a.e.} \ \, t \in (0,T), \\[1mm] \langle w'(t) + A(t, {\mathtt x}(t), w(t)) + (R_1 w)(t), v - w(t) \rangle + \, j^0 (t, {\mathtt x}(t), (R_2w)(t), Mw(t); Mv - Mw(t)) \\[1mm] \ \ \quad \qquad + \, \varphi (t, {\mathtt x}(t), (R_3w)(t), Mv) - \varphi(t, {\mathtt x}(t), (R_3w)(t), Mw(t)) \ge \langle f(t, {\mathtt x}(t)), v - w(t) \rangle \\[1mm] \qquad\qquad\qquad \qquad \ \ \mbox{\rm for all} \ v \in K, \ \mbox{\rm a.e.} \ t \in (0,T), \\ {\mathtt x}(0) = {\mathtt x}_0, \ \ w(0) = w_0 . \end{cases}$$* We need the following hypotheses on the data of Problem [Problem 3](#DVHI){reference-type="ref" reference="DVHI"}. 
$\underline{H({F})}:$ $\displaystyle F \colon (0, T) \times E \times V \times U \to E$ is such that $\underline{H({A})}:$ $\displaystyle A \colon (0, T) \times E \times V \to V^*$ is such that $\underline{H(j)}:$ $j \colon (0, T) \times E \times Z \times X \to \mathbb{R}$ is such that $\underline{H(\varphi)}:$ $\varphi \colon (0, T) \times E \times Y \times X \to \mathbb{R}$ is such that $\underline{H(K)}:$ $K$ is a closed and convex subset of $V$ with $0 \in K$. $\underline{H(M)}:$ $M \colon V\rightarrow X$ is such that $\underline{H(f)}:$ $f \colon (0, T) \times E \to V^*$ is such that $\underline{(H_1)}:$ $m_j \| M_1 \|^2 < m_A$, where $M_1 \colon V \to X$ is defined by $M_1 v = M v - M0$ for $v \in V$. $\underline{(H_2)}:$ ${\mathtt x}_0 \in E$, $w_0 \in V$. $\underline{(H_3)}:$ $S \colon L^2(0,T;V) \to L^2(0,T;U)$, ${R}_1 \colon L^2(0, T; V) \to L^2(0, T; V^*)$, ${R}_2 \colon L^2(0, T; V) \to L^2(0, T; Z)$, ${R}_3 \colon L^2(0, T; V) \to L^2(0, T; Y)$ are such that for all $v_1$, $v_2 \in L^2(0, T; V)$, a.e. $t\in (0, T)$ with $c_S$, $c_{R_1}$, $c_{R_2}$, $c_{R_3} > 0$. **Remark 1**. * In hypothesis $H(M)$, the operator $M$ is an affine operator if and only if the operator $M_1$ defined in $(H_1)$ is linear. The operator $M_1$ is called the linear part of $M$. In $(H_1)$ and in what follows, the constant $\| M_1 \|$ denotes the norm in ${\mathcal L}(V, X)$ of the linear part $M_1$ of $M$. It is clear that if $M \in {\mathcal L}(V, X)$, then $M_1 = M$. * **Remark 2**. * The following conditions are useful while checking the hypothesis $H(A)$(d). 
If $A(t, {\mathtt x}, \cdot)$ is strongly monotone with $m_A>0$ and $A(t,\cdot, v)$ is Lipschitz with $L_A>0$, i.e., $$\begin{aligned} && \langle A (t, {\mathtt x}, v_1) - A(t, {\mathtt x}, v_2), v_1 - v_2 \rangle \ge m_A \| v_1 - v_2 \|^2 \ \ \mbox{for all} \ \ {\mathtt x} \in E, \ v_1, v_2 \in V , \\[2mm] && \| A (t, {\mathtt x}_1, v) - A(t, {\mathtt x}_2, v) \|_{V^*} \le L_A\, \| {\mathtt x}_1 - {\mathtt x}_2\|_E \ \ \mbox{for all} \ \ {\mathtt x}_1, {\mathtt x}_2 \in E, \ v \in V, \end{aligned}$$ for a.e. $t \in (0,T)$, then $$\langle A (t, {\mathtt x}_1, v_1) - A(t, {\mathtt x}_2, v_2), v_1 - v_2 \rangle \ge m_A \, \| v_1 - v_2 \|^2 - L_A \, \|{\mathtt x}_1 - {\mathtt x}_2 \|_E \| v_1 - v_2\|$$ for all ${\mathtt x}_1$, ${\mathtt x}_2 \in E$, $v_1$, $v_2 \in V$, a.e. $t \in (0, T)$. Indeed, it suffices to write $\langle A(t, {\mathtt x}_1, v_1) - A(t, {\mathtt x}_2, v_2), v_1 - v_2 \rangle = \langle A(t, {\mathtt x}_1, v_1) - A(t, {\mathtt x}_1, v_2), v_1 - v_2 \rangle + \langle A(t, {\mathtt x}_1, v_2) - A(t, {\mathtt x}_2, v_2), v_1 - v_2 \rangle$ and to apply the strong monotonicity to the first term, and the Lipschitz condition together with the duality estimate to the second. * The main result on the well-posedness of Problem $\ref{DVHI}$ reads as follows. **Theorem 4**. *Under hypotheses $H(F)$, $H(A)$, $H(j)$, $H(\varphi)$, $H(K)$, $H(M)$, $H(f)$, $(H_1)$--$(H_3)$, for each $({\mathtt x}_0, w_0) \in E \times V$, Problem $\ref{DVHI}$ has a unique solution $({\mathtt x}, w) \in H^1(0,T; E) \times {\mathbb{W}}$ with $w(t) \in K$ for a.e. $t \in (0, T)$. Further, for any initial conditions $({\mathtt x}_0, w_0)$, $(\widetilde{\mathtt x}_0, {\widetilde{w}}_0) \in E \times V$, there is a constant $c > 0$ such that $$\label{EST333} \| {\mathtt x} - {\widetilde{{\mathtt x}}} \|_{H^1(0, T; E)} + \| w - {\widetilde{w}} \|_{L^2(0, T; V)} \le c \, (\| {\mathtt x}_0- \widetilde{\mathtt x}_0 \|_E + \| w_0 - \widetilde {w}_0 \|),$$ where $({\mathtt x}, w)$ and $({\widetilde{{\mathtt x}}}, {\widetilde{w}})$ denote the unique solutions to Problem $\ref{DVHI}$ corresponding to $({\mathtt x}_0, w_0)$ and $(\widetilde{\mathtt x}_0, {\widetilde{w}}_0)$, respectively. 
Further, the solution has the regularity $({\mathtt x}, w) \in C([0,T]; E \times H)$.* **Proof.**  The proof is carried out in several steps and it is inspired by a recent result from [@MigorskiBiao Theorem 20] combined with the fixed-point argument of Lemma [Lemma 2](#CONTR){reference-type="ref" reference="CONTR"}. **Step 1**. Let $\lambda \in L^2(0, T; E)$, $\xi \in L^2(0,T; V^*)$, $\zeta \in L^2(0, T; Z)$, and $\eta \in L^2(0, T; Y)$ be fixed and consider the following auxiliary problem. **Problem 5**. *Find $w = w_{\lambda\xi\zeta\eta} \in {\mathbb{W}}$ with $w(t) \in K$ for a.e. $t \in (0,T)$ such that $$\label{Problem1b} \begin{cases} \displaystyle \langle w'(t) + A(t, \lambda(t), w(t)) + \xi(t), v - w(t) \rangle + \, j^0 (t, \lambda(t),\zeta(t), Mw(t); Mv - Mw(t)) \\[1mm] \ \ \quad \qquad + \, \varphi (t, \lambda (t),\eta(t), Mv) - \varphi(t, \lambda(t), \eta(t), Mw(t)) \ge \langle f(t, \lambda(t)), v-w(t) \rangle \\[1mm] \qquad\qquad\qquad \qquad \ \ \mbox{\rm for all} \ v \in K, \ \mbox{\rm a.e.} \ t \in (0,T), \\ w(0) = w_0 . \end{cases}$$* We define the operator ${\widetilde{A}} \colon (0,T) \times V \to V^*$, the functions ${\widetilde{j}} \colon (0, T) \times X \to \mathbb{R}$, ${\widetilde{\varphi}} \colon (0, T) \times X \to \mathbb{R}$, and ${\widetilde{f}} \colon (0, T) \to V^*$ by $$\begin{aligned} && {\widetilde{A}} (t, v) = A(t, \lambda(t), v) \ \ \mbox{for} \ \ v \in V, \, \mbox{a.e.} \ \, t \in (0, T), \\[1mm] && \quad {\widetilde{j}}(t,v) = j(t, \lambda(t), \zeta(t), v) \ \ \mbox{for} \ \ v \in X, \, \mbox{a.e.} \ \, t \in (0, T), \\[1mm] &&\qquad {\widetilde{\varphi}} (t, v) = \varphi(t, \lambda(t), \eta(t), v) \ \ \mbox{for} \ \ v \in X, \, \mbox{a.e.} \ \, t \in (0, T), \\[1mm] &&\qquad\quad {\widetilde{f}}(t) = f(t, \lambda(t)) - \xi(t) \ \ \mbox{for a.e.} \ \, t \in (0, T).\end{aligned}$$ With this notation Problem [Problem 5](#ProblemAUX1){reference-type="ref" reference="ProblemAUX1"} can be equivalently reformulated as follows. 
Find an element $w = w_{\lambda\xi\zeta\eta} \in {\mathbb{W}}$ with $w(t) \in K$ for a.e. $t \in (0,T)$ such that $$\label{ProblemEQUIV} \begin{cases} \displaystyle \langle w'(t) + {\widetilde{A}}(t, w(t)), v - w(t) \rangle + \, {\widetilde{j}}^0 (t, Mw(t); Mv - Mw(t)) \\[1mm] \quad \qquad + \, {\widetilde{\varphi}} (t, Mv) - {\widetilde{\varphi}} (t, Mw(t)) \ge \langle {\widetilde{f}}(t), v - w(t) \rangle \ \ \mbox{\rm for all} \ v \in K, \ \mbox{\rm a.e.} \ t \in (0,T), \\[1mm] w(0) = w_0 . \end{cases}$$ We shall establish the properties of the data in Problem [\[ProblemEQUIV\]](#ProblemEQUIV){reference-type="ref" reference="ProblemEQUIV"}. By hypothesis $H(A)$(a), (b), it is clear that $A(\cdot,\cdot, v)$ is a Carathéodory function for all $v\in V$. Therefore, the measurability of $t\mapsto\lambda(t)$ entails, see [@DMP2 Corollary 2.5.24], that ${\widetilde{A}}(\cdot, v)$ is measurable for all $v\in V$. We use $H(A)$(c), (d) to get that ${\widetilde{A}}(t, \cdot)$ is demicontinuous for a.e. $t \in (0, T)$ and $$\| {\widetilde{A}}(t, v) \|_{V^*} \le {\widetilde{a}}_0(t) + a_2 \| v \| \ \ \mbox{for all} \ \ v \in V, \ \mbox{a.e.} \ \, t \in (0, T),$$ where ${\widetilde{a}}_{0} \in L^2(0, T)$. Further, we employ $H(A)$(d) to infer that ${\widetilde{A}}(t, \cdot)$ is strongly monotone with constant $m_A > 0$ for a.e. $t \in (0, T)$. The function ${\widetilde{j}}(\cdot, v)$ is measurable because of $H(j)$(c) and of the measurability of $t\mapsto \lambda(t)$ and $t\mapsto \zeta(t)$. It is clear that ${\widetilde{j}}(t,\cdot)$ is locally Lipschitz for a.e. $t \in (0,T)$ and $$\| \partial {\widetilde{j}} (t, v)\|_{X^*} \le {\widetilde{c}}_{0j}(t) + c_{3j} \| v \|_X \ \ \mbox{for all} \ \ v \in X, \ \mbox{a.e.} \ \, t \in (0, T),$$ with ${\widetilde{c}}_{0j} \in L^2(0, T)$. 
From $H(j)$(e), we know that $${\widetilde{j}}^0(t, v_1; v_2-v_1) + {\widetilde{j}}^0 (t, v_2; v_1-v_2) \le m_j \, \| v_1 - v_2 \|_X^2 \ \ \mbox{for all} \ \ v_1, v_2 \in X, \ \mbox{a.e.} \ \, t \in (0, T).$$ By $H(\varphi)$, we deduce that ${\widetilde{\varphi}}(\cdot, v)$ is measurable for all $v \in X$, ${\widetilde{\varphi}}(t, \cdot)$ is convex and lower semicontinuous for a.e. $t \in (0,T)$, and $$\| \partial {\widetilde{\varphi}}(t, v) \|_{X^*} \le {\widetilde{c}}_{0\varphi}(t) + c_{3\varphi} \| v \|_X \ \ \mbox{for all} \ \ v \in X, \ \mbox{a.e.} \ \, t \in (0, T)$$ with ${\widetilde{c}}_{0\varphi} \in L^2(0,T)$. Since $\lambda \in L^2(0, T; E)$ and $\xi \in L^2(0,T; V^*)$, we use $H(f)$ to obtain ${\widetilde{f}} \in L^2(0,T; V^*)$. Having verified the above properties of the data, and having in mind hypotheses $H(K)$, $(H_1)$--$(H_3)$, we are in a position to apply [@MigorskiBiao Theorem 20] to deduce that the problem ([\[ProblemEQUIV\]](#ProblemEQUIV){reference-type="ref" reference="ProblemEQUIV"}), and equivalently Problem [Problem 5](#ProblemAUX1){reference-type="ref" reference="ProblemAUX1"}, is uniquely solvable. **Step 2**. Let $(\lambda_i, \xi_i, \zeta_i, \eta_i) \in L^2(0,T;E \times V^* \times Z \times Y)$, $i=1$, $2$ and $w_1 = w_{\lambda_1\xi_1\zeta_1\eta_1}$, $w_2 = w_{\lambda_2\xi_2\zeta_2\eta_2} \in {\mathbb{W}}$ with $w_1(t)$, $w_2(t) \in K$ for a.e. $t \in (0,T)$, be the unique solutions to ([\[ProblemEQUIV\]](#ProblemEQUIV){reference-type="ref" reference="ProblemEQUIV"}) corresponding to $(\lambda_1,\xi_1, \zeta_1, \eta_1)$ and $(\lambda_2, \xi_2, \zeta_2, \eta_2)$, respectively. 
We shall prove the following estimate $$\begin{aligned} \label{888} && \| w_1 - w_2 \|_{L^2(0, t; V)} \le c \, \big( \| \lambda_1-\lambda_2\|_{L^2(0,t;E)} + \| \xi_1 - \xi_2 \|_{L^2(0, t; V^*)} \nonumber \\[1mm] && \qquad \qquad + \| \zeta_1 - \zeta_2 \|_{L^2(0, t; Z)} + \| \eta_1 - \eta_2 \|_{L^2(0, t; Y)} \big)\end{aligned}$$ for all $t \in [0,T]$, where $c >0$ is a constant. From Problem [Problem 5](#ProblemAUX1){reference-type="ref" reference="ProblemAUX1"} it follows that $$\begin{aligned} &&\hspace{-0.6cm} \displaystyle \langle w_1'(t) + A(t, \lambda_1(t), w_1(t)) + \xi_1 (t), w_2(t) - w_1(t) \rangle \\[1mm] &&\hspace{-0.6cm} \quad + \, j^0 (t, \lambda_1(t),\zeta_1(t), Mw_1(t); Mw_2(t) - Mw_1(t)) \\[1mm] &&\hspace{-0.6cm} \qquad + \, \varphi (t, \lambda_1 (t), \eta_1(t), Mw_2(t)) - \varphi(t, \lambda_1(t), \eta_1(t), Mw_1(t)) \ge \langle f(t, \lambda_1(t)), w_2(t)-w_1(t) \rangle \end{aligned}$$ for a.e. $t \in (0, T)$ and $$\begin{aligned} &&\hspace{-0.6cm} \displaystyle \langle w_2'(t) + A(t, \lambda_2(t), w_2(t)) + \xi_2 (t), w_1(t) - w_2(t) \rangle \\[1mm] &&\hspace{-0.6cm} \quad + \, j^0 (t, \lambda_2(t),\zeta_2(t), Mw_2(t); Mw_1(t) - Mw_2(t)) \\[1mm] &&\hspace{-0.6cm} \qquad + \, \varphi (t, \lambda_2 (t), \eta_2(t), Mw_1(t)) - \varphi(t, \lambda_2(t), \eta_2(t), Mw_2(t)) \ge \langle f(t, \lambda_2(t)), w_1(t)-w_2(t) \rangle \end{aligned}$$ for a.e. $t \in (0, T)$, and $w_1(0) - w_2 (0) = 0$. 
Next, we sum up the last two inequalities to get $$\begin{aligned} &&\hspace{-0.7cm} \langle w'_1(t) - w'_2(t), w_1(t) - w_2(t) \rangle + \langle A(t, \lambda_1(t), w_1(t)) - A(t, \lambda_2(t), w_2(t)), w_1(t) - w_2(t)\rangle \\ [1mm] &&\hspace{-0.5cm} \le \langle \xi_1(t) -\xi_2(t), w_2(t) - w_1(t) \rangle + \langle f(t, \lambda_1(t)) - f(t, \lambda_2(t)), w_1(t) - w_2(t) \rangle \\[1mm] &&\hspace{-0.3cm} + j^0(t, \lambda_1(t), \zeta_1(t), Mw_1(t); Mw_2(t) - Mw_1(t)) + j^0(t, \lambda_2(t), \zeta_2(t), Mw_2(t); Mw_1(t) - Mw_2(t)) \\ [1mm] &&\hspace{-0.1cm} + \, \varphi(t, \lambda_1(t), \eta_1(t), Mw_2(t)) - \varphi (t, \lambda_1(t), \eta_1(t), Mw_1 (t)) \\[1mm] &&\hspace{0.1cm} + \, \varphi(t, \lambda_2(t), \eta_2(t), Mw_1(t)) - \varphi (t, \lambda_2(t), \eta_2(t), Mw_2 (t)) \end{aligned}$$ for a.e. $t \in (0,T)$. We integrate the above inequality on $(0, t)$, apply the integration by parts formula, see [@DMP2 Proposition 3.4.14], to the first term, and use hypotheses $H(A)$(d), $H(j)$(e), $H(\varphi)$(e) to obtain $$\begin{aligned} &&\hspace{-0.5cm} \frac{1}{2}\| w_1(t) - w_2(t) \|^2_H - \frac{1}{2}\| w_1(0) - w_2(0) \|^2_H + m_A \, \int_{0}^{t} \| w_1(s) - w_2(s) \|^2 \, ds \\ &&\hspace{-0.3cm} \le {\overline{m}}_A \int_0^t \| \lambda_1(s)-\lambda_2(s)\|_E \, \| w_1(s)-w_2(s)\| \, ds + \int_0^t \| \xi_1(s)-\xi_2(s)\|_{V^*} \, \|w_1(s)-w_2(s)\| \, ds \\ &&\hspace{-0.1cm} + \int_0^t \| f(s, \lambda_1(s))-f(s, \lambda_2(s)) \|_{V^*} \, \| w_1(s) - w_2(s) \| \, ds + m_j \int_0^t \| Mw_2(s) - Mw_1(s) \|_X^2 \, ds \\ &&\hspace{0.1cm} + \, {\overline{m}}_{j} \int_{0}^{t} \Big( \| \lambda_1(s)-\lambda_2(s)\|_E + \| \zeta_1 (s) - \zeta_2(s) \|_Z \Big) \| w_1(s) - w_2(s)\| \, ds \\ &&\hspace{0.3cm} + \, m_\varphi \int_{0}^{t} \Big( \| \lambda_1(s)-\lambda_2(s)\|_E + \| \eta_1 (s) - \eta_2(s) \|_Y \Big) \| w_1(s) - w_2(s)\| \, ds\end{aligned}$$ for all $t \in [0,T]$. 
Next, using hypothesis $H(f)$(b), condition $w_1(0) - w_2(0) = 0$, the inequality $\| M v \|_X \le \| M_1\|\, \| v \|$ for all $v \in V$, and the Hölder inequality, we have $$\begin{aligned} && \big( m_A - m_j \|M_1\|^2 \big) \, \| w_1 - w_2 \|_{L^2(0, t; V)}^2 \le \| \xi_1 - \xi_2 \|_{L^2(0, t; V^*)} \| w_1 - w_2 \|_{L^2(0, t; V)} \\[2mm] &&\quad + \, \Big( {\overline{m}}_A + L_f+{\overline{m}}_j \| M_1\| + m_\varphi \| M_1\| \Big) \, \| \lambda_1 - \lambda_2 \|_{L^2(0, t; E)} \| w_1 - w_2 \|_{L^2(0, t; V)} \\[2mm] &&\qquad + \, \Big( {\overline{m}}_j \| M_1\| \, \| \zeta_1 - \zeta_2 \|_{L^2(0, t; Z)} + m_\varphi \| M_1\| \| \eta_1 - \eta_2 \|_{L^2(0, t; Y)} \Big) \, \| w_1 - w_2 \|_{L^2(0, t; V)}\end{aligned}$$ for all $t \in [0,T]$. Hence, by $(H_1)$, the inequality ([\[888\]](#888){reference-type="ref" reference="888"}) follows. **Step 3.** Let $w \in L^2(0, T; V)$ be fixed. We claim that there exists a unique function ${\mathtt x} \in H^1(0, T; E)$ that solves the Cauchy problem $$\begin{aligned} \label{ODE} \begin{cases} \displaystyle {\mathtt x}'(t) = F(t, {\mathtt x}(t), w(t), (Sw)(t)) \ \ \mbox{for a.e.} \ \ t \in (0, T) \\ {\mathtt x}(0) = {\mathtt x}_0. \end{cases}\end{aligned}$$ Further, if $w_1$, $w_2 \in L^2(0, T; V)$ and ${\mathtt x}_1$, ${\mathtt x}_2 \in H^1(0,T;E)$ denote the unique solutions to ([\[ODE\]](#ODE){reference-type="ref" reference="ODE"}) corresponding to $w_1$ and $w_2$, respectively, then $$\label{EST1} \| {\mathtt x}_1(t) - {\mathtt x}_2(t)\|_E \le m \int_0^t \| w_1(s) - w_2(s)\| \, ds \ \ \mbox{for all} \ \ t \in (0, T)$$ with a constant $m > 0$. 
In fact, let $w \in L^2(0, T; V)$ and define the function ${\widetilde{F}} \colon (0, T) \times E \to E$ by $${\widetilde{F}} (t, {\mathtt x}) = F(t, {\mathtt x}, w(t), (S w)(t)) \ \ \mbox{for} \ \ {\mathtt x} \in E, \ \mbox{a.e.} \ \ t \in (0, T).$$ By hypotheses $H(F)$, we obtain $$\| {\widetilde{F}}(t, {\mathtt x}_1) - {\widetilde{F}}(t, {\mathtt x}_2)\|_E = \| F(t, {\mathtt x}_1, w(t), (S w)(t)) - F(t, {\mathtt x}_2, w(t), (S w)(t)) \|_E \le L_F \| {\mathtt x}_1 - {\mathtt x}_2\|_E$$ for all ${\mathtt x}_1$, ${\mathtt x}_2 \in E$, a.e. $t\in (0, T)$, and $$(0, T) \ni t \mapsto {\widetilde{F}}(t, {\mathtt x}) \in E$$ is an $L^2$-function for any ${\mathtt x} \in E$. Hence, we can apply the classical Cauchy-Lipschitz theorem, see, e.g., [@SST Theorem 2.30], to deduce the unique solvability of ([\[ODE\]](#ODE){reference-type="ref" reference="ODE"}). Next, for $w_1$, $w_2 \in L^2(0,T; V)$, we integrate the equation in ([\[ODE\]](#ODE){reference-type="ref" reference="ODE"}) to get $${\mathtt x}_i(t) = {\mathtt x}_0 + \int_0^t F(s, {\mathtt x}_i(s), w_i(s), (S w_i)(s))\, ds \ \ \mbox{for} \ \ i=1,2.$$ Thus $$\begin{aligned} &&\hspace{-0.6cm} \| {\mathtt x}_1(t) -{\mathtt x}_2(t) \|_E \le \int_0^t \|F(s, {\mathtt x}_1(s), w_1(s), (S w_1)(s)) -F(s, {\mathtt x}_2(s), w_2(s), (S w_2)(s)) \|_E \, ds \\[1mm] &&\hspace{-0.6cm} \quad \le L_F \int_0^t \big( \|{\mathtt x}_1(s)-{\mathtt x}_2(s)\|_E + \| w_1(s)-w_2(s)\| + \| (S w_1)(s)-(S w_2)(s)\|_U \big) \, ds \\[1mm] && \hspace{-0.6cm} \qquad \le L_F \int_0^t \big( \|{\mathtt x}_1(s)-{\mathtt x}_2(s)\|_E + \| w_1(s)-w_2(s)\| \big) \, ds + c_S L_F T \int_0^t \| w_1(s)-w_2(s)\| \, ds\end{aligned}$$ for all $t \in [0, T]$. Finally, we use the standard argument based on the Gronwall inequality to deduce inequality ([\[EST1\]](#EST1){reference-type="ref" reference="EST1"}) with a constant $m>0$. 
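The Gronwall step just invoked can be sketched as follows (a standard argument; the explicit form of the constant $m$ below is one admissible choice and is not fixed in the text):

```latex
\begin{aligned}
& r(t) := \| {\mathtt x}_1(t) - {\mathtt x}_2(t) \|_E , \qquad
  g(t) := (1 + c_S T)\, L_F \int_0^t \| w_1(s) - w_2(s) \| \, ds . \\
& \text{The last estimate reads } \
  r(t) \le g(t) + L_F \int_0^t r(s) \, ds , \\
& \text{and since $g$ is nondecreasing, the Gronwall inequality yields } \
  r(t) \le g(t)\, e^{L_F t} , \\
& \text{so that the claimed bound holds with, e.g., } \ m := (1 + c_S T)\, L_F\, e^{L_F T} .
\end{aligned}
```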
We denote by $R_0 \colon L^2(0,T;V) \to H^1(0,T;E) \subset L^2(0,T;E)$ the operator $R_0 (w) = {\mathtt x}$ which to any $w \in L^2(0,T;V)$ assigns the unique solution ${\mathtt x} \in L^2(0,T;E)$ to the problem ([\[ODE\]](#ODE){reference-type="ref" reference="ODE"}). The inequality ([\[EST1\]](#EST1){reference-type="ref" reference="EST1"}) can be restated as follows $$\label{EST2} \| (R_0 w_1)(t) - (R_0 w_2)(t)\|_E \le m \int_0^t \| w_1(s) - w_2(s)\| \, ds \ \ \mbox{for all} \ \ t \in (0, T),$$ i.e., $R_0$ is a history-dependent operator. **Step 4**. In this part of the proof we apply a fixed point argument. We define the operator $\Lambda \colon L^2(0,T; E \times V^* \times Z \times Y) \to L^2(0,T;E \times V^* \times Z \times Y)$ by $$\Lambda(\lambda, \xi, \zeta, \eta) = (R_0 w_{\lambda\xi\zeta\eta}, {R}_1 w_{\lambda\xi\zeta\eta}, {R_2} w_{\lambda\xi\zeta\eta}, {R_3} w_{\lambda\xi\zeta\eta})$$ for all $(\lambda,\xi, \zeta, \eta) \in L^2(0,T;E \times V^* \times Z \times Y)$, where $w_{\lambda\xi\zeta\eta} \in {\mathbb{W}}$ denotes the unique solution to Problem [Problem 5](#ProblemAUX1){reference-type="ref" reference="ProblemAUX1"} corresponding to $(\lambda, \xi, \zeta, \eta)$. 
From hypotheses $(H_3)$(b)--(d), inequalities ([\[888\]](#888){reference-type="ref" reference="888"}) and ([\[EST2\]](#EST2){reference-type="ref" reference="EST2"}), and by the Hölder inequality, we find a constant $c > 0$ such that, with $w_i := w_{\lambda_i \xi_i \zeta_i \eta_i}$ for $i=1$, $2$, $$\begin{aligned} &&\hspace{-0.6cm} \| \Lambda(\lambda_1,\xi_1,\zeta_1,\eta_1)(t) - \Lambda(\lambda_2, \xi_2, \zeta_2,\eta_2)(t) \|^2_{E\times V^* \times Z \times Y} \\[2mm] && \hspace{-0.4cm} = \| (R_0 w_1)(t) - (R_0 w_2)(t) \|_{E}^2 + \| ({R}_1 w_1)(t) - ({R}_1 w_2)(t) \|_{V^*}^2 \\[2mm] &&\hspace{-0.2cm} + \| ({R_2} w_1)(t) - ({R_2} w_2)(t) \|_Z^2 + \| ({R_3} w_1)(t) - ({R_3} w_2)(t) \|_Y^2 \\[1mm] && \hspace{-0.0cm} \le \Big( m \int_0^t \| w_1(s) - w_2(s) \| \, ds \Big)^2 + \Big( c_{R_1} \int_0^t \| w_1(s) - w_2(s) \| \, ds \Big)^2 \\[2mm] && \hspace{0.2cm} + \Big( c_{R_2} \int_0^t \| w_1(s) - w_2(s) \| \, ds \Big)^2 + \Big( c_{R_3} \int_0^t \| w_1(s) - w_2(s) \| \, ds \Big)^2 \le c \, \| w_1 - w_2 \|_{L^2(0, t; V)}^2 \\ &&\hspace{0.4cm} \le c \, \Big( \| \lambda_1-\lambda_2\|^2_{L^2(0,t;E)} + \| \xi_1 - \xi_2 \|^2_{L^2(0, t; V^*)} + \| \zeta_1 - \zeta_2 \|^2_{L^2(0, t; Z)} + \| \eta_1 - \eta_2 \|^2_{L^2(0, t; Y)} \Big),\end{aligned}$$ which implies $$\begin{aligned} &&\| \Lambda(\lambda_1,\xi_1, \zeta_1, \eta_1)(t) - \Lambda(\lambda_2, \xi_2, \zeta_2, \eta_2)(t) \|^2_{E\times V^* \times Z \times Y} \nonumber \\ &&\qquad\qquad \le c \, \int_{0}^{t} \, \| (\lambda_1, \xi_1, \zeta_1, \eta_1)(s) - (\lambda_2, \xi_2, \zeta_2, \eta_2)(s) \|^2_{E \times V^* \times Z \times Y} \, ds \label{pstaly}\end{aligned}$$ for a.e. $t \in (0,T)$. 
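The passage from an estimate of the form ([\[pstaly\]](#pstaly){reference-type="ref" reference="pstaly"}) to a unique fixed point rests on the classical iterated-kernel argument, presumably the content of Lemma [Lemma 2](#CONTR){reference-type="ref" reference="CONTR"}; a sketch, writing $u := (\lambda, \xi, \zeta, \eta)$ and $\|\cdot\|$ for the norm of $E \times V^* \times Z \times Y$:

```latex
\begin{aligned}
& \text{Iterating the estimate } n \text{ times gives, for a.e. } t \in (0,T), \\
& \quad \| \Lambda^n u_1(t) - \Lambda^n u_2(t) \|^2
  \le c^n \int_0^t \frac{(t-s)^{n-1}}{(n-1)!}\, \| u_1(s) - u_2(s) \|^2 \, ds , \\
& \text{and integrating over } (0,T): \quad
  \| \Lambda^n u_1 - \Lambda^n u_2 \|^2_{L^2(0,T)}
  \le \frac{(cT)^n}{n!}\, \| u_1 - u_2 \|^2_{L^2(0,T)} . \\
& \text{Since } (cT)^n/n! < 1 \text{ for } n \text{ large, } \Lambda^n \text{ is a contraction, and the} \\
& \text{Banach principle applied to } \Lambda^n \text{ yields a unique fixed point of } \Lambda .
\end{aligned}
```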
By Lemma [Lemma 2](#CONTR){reference-type="ref" reference="CONTR"}, we deduce that the operator $\Lambda$ has a unique fixed point $(\lambda^*, \xi^*,\zeta^*,\eta^*)$, that is, $$(\lambda^*, \xi^*, \zeta^*,\eta^*) \in L^2(0,T;E \times V^* \times Z \times Y) \ \ {\rm and}\ \ \Lambda(\lambda^*, \xi^*, \zeta^*, \eta^*) = (\lambda^*, \xi^*, \zeta^*, \eta^*).$$ Let $w_{\lambda^* \xi^* \zeta^* \eta^*} \in {\mathbb{W}}$ be the unique solution to Problem [Problem 5](#ProblemAUX1){reference-type="ref" reference="ProblemAUX1"} corresponding to $(\lambda^*, \xi^*,\zeta^*, \eta^*)$. From the definition of the operator $\Lambda$, we have $$\lambda^* = R_0(w_{\lambda^* \xi^* \zeta^* \eta^*}), \ \ \xi^* = {R}_1(w_{\lambda^* \xi^* \zeta^* \eta^*}), \ \ \zeta^* = {R_2}(w_{\lambda^* \xi^* \zeta^* \eta^*}) \ \ \mbox{and} \ \ \eta^* = {R_3}(w_{\lambda^* \xi^* \zeta^* \eta^*}) .$$ Finally, we use these relations in Problem [Problem 5](#ProblemAUX1){reference-type="ref" reference="ProblemAUX1"}, and conclude that $w_{\lambda^* \xi^* \zeta^* \eta^*}\in {\mathbb{W}}$ is the unique solution to the inequality ([\[ProblemEQUIV\]](#ProblemEQUIV){reference-type="ref" reference="ProblemEQUIV"}). **Step 5.** Finally, we shall prove ([\[EST333\]](#EST333){reference-type="ref" reference="EST333"}). Let $({\mathtt x}, w)$ and $({\widetilde{{\mathtt x}}}, {\widetilde{w}})$ be the unique solutions to Problem $\ref{DVHI}$ corresponding to $({\mathtt x}_0, w_0)$, $(\widetilde{\mathtt x}_0, {\widetilde{w}}_0) \in E \times V$, respectively. We apply arguments similar to those used in Steps 2 and 3. 
We write two inequalities for $w$ and ${\widetilde{w}}$ as in ([\[ProblemEQUIV\]](#ProblemEQUIV){reference-type="ref" reference="ProblemEQUIV"}), then we choose $v = {\widetilde{w}}(t)$ and $v = w(t)$ in the inequality satisfied by $w$ and ${\widetilde{w}}$, respectively, and add the resulting inequalities to get $$\begin{aligned} &&\hspace{-0.7cm} \langle w'(t) - {\widetilde{w}}'(t), w(t) - {\widetilde{w}}(t) \rangle + \langle A(t, {\mathtt x}(t), w(t)) - A(t, {\widetilde{{\mathtt x}}}(t), {\widetilde{w}}(t)), w(t) - {\widetilde{w}}(t)\rangle \\ [1mm] &&\hspace{-0.5cm} \le \langle (R_1w)(t) - (R_1{\widetilde{w}})(t), {\widetilde{w}}(t) - w(t) \rangle + \langle f(t,{\mathtt x}(t)) - f(t, {\widetilde{{\mathtt x}}}(t)), {\widetilde{w}}(t) - w(t) \rangle \\[1mm] &&\hspace{-0.3cm} + \, j^0(t, {\mathtt x}(t), (R_2w)(t); M{\widetilde{w}}(t) - M w(t)) + j^0(t, {\widetilde{{\mathtt x}}}(t), (R_2{\widetilde{w}})(t); Mw(t) - M{\widetilde{w}}(t)) \\ [1mm] &&\hspace{-0.1cm} + \, \varphi(t, {\mathtt x}(t), (R_3w)(t), M{\widetilde{w}}(t)) - \varphi (t, {\mathtt x}(t), (R_3w)(t), Mw (t)) \\[1mm] &&\hspace{0.1cm} + \, \varphi(t, {\widetilde{{\mathtt x}}}(t), (R_3{\widetilde{w}})(t), Mw(t)) - \varphi (t, {\widetilde{{\mathtt x}}}(t), (R_3{\widetilde{w}})(t), M{\widetilde{w}}(t)) \end{aligned}$$ for a.e. $t \in (0,T)$. 
Integrating the latter over $(0, t)$, we apply the integration by parts formula to the first term and use hypotheses $H(A)$(d), $H(j)$(e), $H(\varphi)$(e), $H(f)$(b), and the Young inequality $ab \le \frac{\varepsilon^2}{2} a^2 + \frac{1}{2\varepsilon^2}b^2$ for $a$, $b \in \mathbb{R}$ to obtain $$\begin{aligned} &&\hspace{-0.5cm} (m_A - m_j \| M_1 \|^2)\, \int_{0}^{t} \| w(s) - {\widetilde{w}}(s) \|^2 \, ds \le \frac{1}{2}\| w_0 - \widetilde w_0 \|^2_H \\ &&\hspace{-0.3cm} + \, \frac{\varepsilon^2}{2} \left( {\overline{m}}_A + 2 {\overline{m}}_j \| M_1\| + 2m_\varphi \| M_1\| +2 \right) \int_{0}^{t} \| w(s) - {\widetilde{w}}(s) \|^2 \, ds \\ && + \, \frac{1}{2\varepsilon^2} \left( {\overline{m}}_A + {\overline{m}}_j \| M_1 \| + m_\varphi \| M_1 \| + L_f^2 \right) \int_{0}^{t} \| {\mathtt x}(s) -{\widetilde{{\mathtt x}}}(s) \|_E^2 \, ds \\ && + \, \frac{1}{2\varepsilon^2} \int_{0}^{t} \| (R_1w)(s)-(R_1{\widetilde{w}})(s) \|_{V^*}^2 \, ds \\ && + \, \frac{1}{2\varepsilon^2} {\overline{m}}_j \| M_1\| \int_{0}^{t} \| (R_2w)(s)-(R_2{\widetilde{w}})(s) \|_{Z}^2 \, ds \\ && + \, \frac{1}{2\varepsilon^2} m_\varphi \| M_1\| \int_{0}^{t} \| (R_3w)(s)-(R_3{\widetilde{w}})(s) \|_{Y}^2 \, ds\end{aligned}$$ for all $t \in [0,T]$. By the smallness condition $(H_1)$, we can choose $\varepsilon > 0$ such that $$m_A - m_j \| M_1 \|^2 - \frac{\varepsilon^2}{2} \left( {\overline{m}}_A + 2 {\overline{m}}_j \| M_1\| + 2m_\varphi \| M_1\| +2 \right) > 0.$$ Next, we employ $(H_3)$(b)--(d) and get $$\begin{aligned} &&\label{EST88} d_1 \int_{0}^{t} \| w(s) - {\widetilde{w}}(s) \|^2 \, ds \le \frac{1}{2}\| w_0 - \widetilde w_0 \|^2_H \\ &&\quad + \, d_2 \int_{0}^{t} \| {\mathtt x}(s)-{\widetilde{{\mathtt x}}}(s) \|_E^2 \, ds + \, d_3 \int_0^t \left( \int_{0}^{s} \| w(\tau) - {\widetilde{w}}(\tau) \|^2 \, d\tau \right)\, ds \nonumber \end{aligned}$$ with some positive constants $d_i$, $i=1$, $2$, $3$. 
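An inequality of the shape ([\[EST88\]](#EST88){reference-type="ref" reference="EST88"}) is subsequently closed by the standard Gronwall pattern; a minimal sketch, writing $\phi(t) := \int_0^t \| w(s) - {\widetilde{w}}(s) \|^2 \, ds$ and assuming the right-hand side has been reduced (as is done next) to a constant $a \ge 0$ plus the term $d_3 \int_0^t \phi(s)\, ds$:

```latex
d_1\, \phi(t) \le a + d_3 \int_0^t \phi(s)\, ds \ \ (t \in [0,T])
\qquad \Longrightarrow \qquad
\phi(t) \le \frac{a}{d_1}\, e^{(d_3/d_1)\, t} \le \frac{a}{d_1}\, e^{(d_3/d_1)\, T} .
```

Taking square roots then produces a bound of the type obtained in the next step.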
On the other hand, we use the ordinary differential equation in Problem [Problem 3](#DVHI){reference-type="ref" reference="DVHI"} and, analogously to ([\[EST1\]](#EST1){reference-type="ref" reference="EST1"}), by the Gronwall inequality, we have $$\label{EST99} \| {\mathtt x}(t) - {\widetilde{{\mathtt x}}}(t)\|_E \le d_4 \left( \| {\mathtt x}_0 - \widetilde{\mathtt x}_0\|_E + \int_0^t \| w(s) - {\widetilde{w}}(s)\| \, ds \right) \ \ \mbox{for all} \ \ t \in (0, T)$$ with a constant $d_4 > 0$. We combine ([\[EST88\]](#EST88){reference-type="ref" reference="EST88"}) and ([\[EST99\]](#EST99){reference-type="ref" reference="EST99"}), and again use the Gronwall inequality to deduce $$\label{EST901} \| w - {\widetilde{w}} \|_{L^2(0, t; V)} \le d_5 \left( \| {\mathtt x}_0 - \widetilde{\mathtt x}_0\|_E + \| w_0 - \widetilde w_0 \| \right)$$ for all $t \in [0, T]$ with $d_5 > 0$. By $H(F)$(b), it follows that $$\label{INEQ902} \| {\mathtt x}' - {\widetilde{{\mathtt x}}}'\|_{L^2(0, t; E)} \le d_6 \left( \| {\mathtt x} - {\widetilde{{\mathtt x}}}\|_{L^2(0, t; E)} + \| w - {\widetilde{w}}\|_{L^2(0, t; V)} \right) \ \ \mbox{for all} \ \ t \in [0, T]$$ with a constant $d_6 > 0$. The latter, together with ([\[EST99\]](#EST99){reference-type="ref" reference="EST99"}) and ([\[EST901\]](#EST901){reference-type="ref" reference="EST901"}), implies ([\[EST333\]](#EST333){reference-type="ref" reference="EST333"}). The regularity of the solution follows from the embeddings $W^{1,2}(0, T; E) \subset C([0, T]; E)$ and ${\mathbb{W}} \subset C([0,T]; H)$. This completes the proof of the theorem. $\Box$ We conclude this section with a corollary of Theorem [Theorem 4](#Theorem1){reference-type="ref" reference="Theorem1"} when the term $f$ is independent of the second variable and $f \in L^2(0, T; V^*)$. **Corollary 1**. *Let hypotheses $H(F)$, $H(A)$, $H(j)$, $H(\varphi)$, $H(K)$, $H(M)$, $(H_1)$--$(H_3)$ hold, and $f \in L^2(0, T; V^*)$. 
Then Problem $\ref{DVHI}$ is uniquely solvable with $({\mathtt x}, w) \in H^1(0,T; E) \times {\mathbb{W}}$ and $w(t) \in K$ for a.e. $t \in (0, T)$. Moreover, for every $({\mathtt x}_0, w_0, f)$, $(\widetilde{\mathtt x}_0, {\widetilde{w}}_0, {\widetilde{f}}) \in E \times V \times L^2(0, T; V^*)$, there exists a constant $c > 0$ such that $$\label{EST444} \| {\mathtt x} - {\widetilde{{\mathtt x}}} \|_{H^1(0, T; E)} + \| w - {\widetilde{w}} \|_{L^2(0, T; V)} \le c \, (\| {\mathtt x}_0- \widetilde{\mathtt x}_0 \|_E + \| w_0 - \widetilde {w}_0\| + \| f - {\widetilde{f}} \|_{L^2(0, T; V^*)}),$$ where $({\mathtt x}, w)$ and $({\widetilde{{\mathtt x}}}, {\widetilde{w}})$ are the unique solutions to Problem $\ref{DVHI}$ corresponding to $({\mathtt x}_0, w_0, f)$ and $(\widetilde{\mathtt x}_0, {\widetilde{w}}_0, \widetilde{f})$, respectively.* # A dynamic frictionless viscoplastic contact problem {#Application1} In this section we illustrate the applicability of the results of Section [3](#s1){reference-type="ref" reference="s1"}. We examine an example of a dynamic unilateral viscoplastic frictionless contact problem. We describe the physical setting, give the classical formulation of the contact problem, provide its variational formulation, and prove a result on its well-posedness. A viscoplastic body occupies a bounded domain $\Omega$ of $\mathbb{R}^{d}$, $d=1$, $2$, $3$. The boundary $\Gamma$ of $\Omega$ is assumed to be Lipschitz continuous and split into three mutually disjoint and measurable parts $\Gamma= \overline{\Gamma}_{D} \cup \overline{\Gamma}_{N} \cup \overline{\Gamma}_{C}$ with $|\Gamma_{D}| > 0$. The outward unit normal at $\Gamma$, denoted by $\mbox{\boldmath{$\nu$}}$, is defined a.e. on $\Gamma$. Let $Q = \Omega \times (0, T)$, where $0 < T < \infty$. The symbol $\mathbb{S}^{d}$ denotes the space of $d\times d$ symmetric matrices. 
We use the standard notation for inner products and norms on $\mathbb{R}^{d}$ and $\mathbb{S}^{d}$, i.e., $\mbox{\boldmath{$u$}}\cdot \mbox{\boldmath{$v$}}=u_{i}v_{i}$, $\|\mbox{\boldmath{$u$}}\|^2=\mbox{\boldmath{$u$}}\cdot \mbox{\boldmath{$u$}}$ for $\mbox{\boldmath{$u$}}=(u_{i})$, $\mbox{\boldmath{$v$}}=(v_{i})\in \mathbb{R}^{d}$, and $\mbox{\boldmath{$\sigma$}}\cdot \mbox{\boldmath{$\tau$}}=\sigma_{ij}\tau_{ij}$, $\|\mbox{\boldmath{$\sigma$}}\|^2= \mbox{\boldmath{$\sigma$}}\cdot \mbox{\boldmath{$\sigma$}}$ for $\mbox{\boldmath{$\sigma$}}= (\sigma_{ij})$, $\mbox{\boldmath{$\tau$}}=(\tau_{ij})\in \mathbb{S}^{d}$, respectively. The normal and tangential components of vectors and tensors at the boundary are expressed by the following notation $$v_{\nu}=\mbox{\boldmath{$v$}}\cdot \mbox{\boldmath{$\nu$}}, \ \ \ \mbox{\boldmath{$v$}}_{\tau}=\mbox{\boldmath{$v$}}-v_{\nu}\, \mbox{\boldmath{$\nu$}}, \ \ \ \sigma_{\nu}=(\mbox{\boldmath{$\sigma$}}\mbox{\boldmath{$\nu$}})\cdot\mbox{\boldmath{$\nu$}}, \ \ \ \mbox{\boldmath{$\sigma$}}_{\tau}=\mbox{\boldmath{$\sigma$}}\, \mbox{\boldmath{$\nu$}}- \sigma_{\nu}\, \mbox{\boldmath{$\nu$}}.$$ Since $\mbox{\boldmath{$v$}}_\tau \cdot \mbox{\boldmath{$\nu$}}= \mbox{\boldmath{$\sigma$}}_\tau \cdot \mbox{\boldmath{$\nu$}}= 0$, the following decomposition formula holds $$\label{DEC} \mbox{\boldmath{$\sigma$}}\mbox{\boldmath{$\nu$}}\cdot \mbox{\boldmath{$v$}} = (\sigma_{\nu} \, \mbox{\boldmath{$\nu$}}+ \mbox{\boldmath{$\sigma$}}_{\tau}) \cdot (v_\nu \, \mbox{\boldmath{$\nu$}}+ \mbox{\boldmath{$v$}}_\tau) = \sigma_{\nu} \, v_\nu + \mbox{\boldmath{$\sigma$}}_{\tau} \cdot \mbox{\boldmath{$v$}}_{\tau}.$$ The classical model for the contact process has the following formulation. **Problem 6**. 
*Find a displacement field $\mbox{\boldmath{$u$}}\colon Q \to\mathbb{R}^d$ and a stress field $\mbox{\boldmath{$\sigma$}}\colon Q \rightarrow \mathbb{S}^d$ such that for all $t\in (0,T)$, $$\begin{aligned} \label{equation1} \mbox{\boldmath{$\sigma$}}(t) &={\mathscr A}\mbox{\boldmath{$\varepsilon$}}({\mbox{\boldmath{$u$}}}'(t)) +{\mathscr B}\mbox{\boldmath{$\varepsilon$}}({\mbox{\boldmath{$u$}}}(t)) +\int_0^t{\mathscr G}\big( \mbox{\boldmath{$\sigma$}}(s) - {\mathscr A}\mbox{\boldmath{$\varepsilon$}}({\mbox{\boldmath{$u$}}}'(s)), \mbox{\boldmath{$\varepsilon$}}({\mbox{\boldmath{$u$}}}(s)) \big)\,ds \quad &{\rm in}\ &\Omega,\\[1mm] \label{equation2} {\mbox{\boldmath{$u$}}}''(t)&={\rm Div}\,\mbox{\boldmath{$\sigma$}}(t)+\mbox{\boldmath{$f$}}_0(t)\quad&{\rm in}\ &\Omega,\\[1mm] \label{equation3} \mbox{\boldmath{$u$}}(t)&=\mbox{\boldmath{$0$}}&{\rm on}\ &\Gamma_D,\\[1mm] \label{equation4} \mbox{\boldmath{$\sigma$}}(t)\mbox{\boldmath{$\nu$}}&=\mbox{\boldmath{$f$}}_N(t)\quad&{\rm on}\ &\Gamma_N,\\[1mm] \nonumber u'_{\nu}(t)&\le g, \ \sigma_{\nu}(t)+\xi(t) \le 0, \ (u'_{\nu}(t)-g)(\sigma_{\nu}(t)+\xi(t))=0 \\[1mm] \label{equation5} &\ \ \ {\rm with} \ \xi(t) \in k(u_{\nu}(t))\, \partial j_{\nu}(u'_{\nu}(t)) \quad&{\rm on}\ &\Gamma_C,\\[1mm] \label{equation6} \mbox{\boldmath{$\sigma$}}_\tau(t) &=\mbox{\boldmath{$0$}}\quad&{\rm on}\ &\Gamma_C, \end{aligned}$$ and $$\label{equation7} \mbox{\boldmath{$u$}}(0)=\mbox{\boldmath{$u$}}_0,\qquad {\mbox{\boldmath{$u$}}}'(0)=\mbox{\boldmath{$w$}}_0\quad \ {\rm in}\quad \Omega.$$* In this problem, equation ([\[equation1\]](#equation1){reference-type="ref" reference="equation1"}) represents a general viscoplastic constitutive law in which $\mathscr{A}$, 
$\mathscr{B}$ and $\mathscr{G}$ stand for the viscosity operator, the elasticity operator and the viscoplastic constitutive function, respectively. Recall that the components of the linearized strain tensor $\mbox{\boldmath{$\varepsilon$}}(\mbox{\boldmath{$u$}})$ are given by $$\mbox{\boldmath{$\varepsilon$}}_{ij}(\mbox{\boldmath{$u$}}) = (\mbox{\boldmath{$\varepsilon$}}(\mbox{\boldmath{$u$}}))_{ij} = \frac{1}{2} (u_{i,j} + u_{j,i}) \ \ \mbox{in} \ \ \Omega,$$ where $u_{i,j} = \partial u_i/\partial x_j$. Equation ([\[equation2\]](#equation2){reference-type="ref" reference="equation2"}) is the equation of motion, which governs the evolution of the mechanical state of the body. Here $\mbox{\boldmath{$u$}}''(t) = \partial^2\mbox{\boldmath{$u$}}/ \partial t^2$ represents the acceleration field, ${\rm Div} \, \mbox{\boldmath{$\sigma$}}= (\sigma_{ij,j})$, and $\mbox{\boldmath{$f$}}_{0}$ denotes the density of body forces. The homogeneous displacement boundary condition ([\[equation3\]](#equation3){reference-type="ref" reference="equation3"}) means that the body is fixed on $\Gamma_{D}$, while ([\[equation4\]](#equation4){reference-type="ref" reference="equation4"}) is the traction boundary condition with surface tractions of density $\mbox{\boldmath{$f$}}_N$ acting on $\Gamma_{N}$. Condition ([\[equation5\]](#equation5){reference-type="ref" reference="equation5"}) is the Signorini unilateral contact boundary condition for the normal velocity, in which $g > 0$ and $\partial j_\nu$ denotes the Clarke subgradient of a prescribed function $j_\nu$. Condition $\xi(t) \in k(u_{\nu}(t))\, \partial j_{\nu}(u'_{\nu}(t))$ on $\Gamma_C$ represents the normal damped response condition, where $k$ is a given damping coefficient depending on the normal displacement. Condition ([\[equation6\]](#equation6){reference-type="ref" reference="equation6"}) is called the frictionless condition, in which the tangential part of the stress vanishes. 
Examples and details on the mechanical interpretation of conditions ([\[equation5\]](#equation5){reference-type="ref" reference="equation5"}) and ([\[equation6\]](#equation6){reference-type="ref" reference="equation6"}) can be found in [@HMS2017; @MOSBOOK] and references therein. Finally, equations ([\[equation7\]](#equation7){reference-type="ref" reference="equation7"}) are the initial conditions in which $\mbox{\boldmath{$u$}}_0$ and $\mbox{\boldmath{$w$}}_0$ represent the initial displacement and the initial velocity, respectively. To derive the weak formulation of Problem [Problem 6](#Plastic){reference-type="ref" reference="Plastic"}, we introduce the spaces $$\label{SPACESVH} V=\{\, \mbox{\boldmath{$v$}}\in H^{1}(\Omega;\mathbb{R}^{d})\mid \mbox{\boldmath{$v$}}={\bf{0}}~\textrm{on}~\Gamma_{D}\, \} \ \ \mbox{and} \ \ \mathcal{H}=L^{2}(\Omega;\mathbb{S}^{d}).$$ The inner product and the corresponding norm on $V$ are given by $$\label{NORM} (\mbox{\boldmath{$u$}},\mbox{\boldmath{$v$}})_{V} = (\mbox{\boldmath{$\varepsilon$}}(\mbox{\boldmath{$u$}}), \mbox{\boldmath{$\varepsilon$}}(\mbox{\boldmath{$v$}}))_{\mathcal{H}},~~\|\mbox{\boldmath{$v$}}\|_{V}=\| \mbox{\boldmath{$\varepsilon$}}(\mbox{\boldmath{$v$}})\|_{\mathcal{H}} \ \ \mbox{for all} \ \ \mbox{\boldmath{$u$}},\, \mbox{\boldmath{$v$}}\in V.$$ The space $\mathcal{H}$ is a Hilbert space endowed with the inner product $$\langle \mbox{\boldmath{$\sigma$}}, \mbox{\boldmath{$\tau$}}\rangle_{\mathcal{H}}= \int_{\Omega}\mbox{\boldmath{$\sigma$}}(\mbox{\boldmath{$x$}}) \cdot \mbox{\boldmath{$\tau$}}(\mbox{\boldmath{$x$}})\, dx \ \ \mbox{for} \ \ \mbox{\boldmath{$\sigma$}}, \mbox{\boldmath{$\tau$}}\in {\mathcal H},$$ and the associated norm $\|\cdot\|_{\mathcal{H}}$. Since $|\Gamma_D| > 0$, the Korn inequality implies that the norm $\| \cdot \|_V$ defined by ([\[NORM\]](#NORM){reference-type="ref" reference="NORM"}) is equivalent to the usual norm $\| \cdot \|_{H^1 (\Omega;\mathbb{R}^d)}$. 
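For clarity, we recall a standard form of the Korn inequality behind this equivalence (the name $c_K$ is ours; the constant depends only on $\Omega$ and $\Gamma_D$): since $|\Gamma_D| > 0$,

```latex
\| \mbox{\boldmath{$v$}} \|_{H^1(\Omega;\mathbb{R}^d)}
\le c_K\, \| \mbox{\boldmath{$\varepsilon$}}(\mbox{\boldmath{$v$}}) \|_{\mathcal{H}}
\quad \text{for all } \mbox{\boldmath{$v$}} \in V .
```

Combined with the elementary bound $\| \mbox{\boldmath{$\varepsilon$}}(\mbox{\boldmath{$v$}}) \|_{\mathcal{H}} \le \| \mbox{\boldmath{$v$}} \|_{H^1(\Omega;\mathbb{R}^d)}$, this gives the equivalence of the two norms on $V$.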
Recall that the trace operator $\gamma\colon V\to L^{2}(\Gamma_C;\mathbb{R}^{d})$ is linear and continuous, i.e., $\|\mbox{\boldmath{$v$}}\|_{L^{2}(\Gamma_C;\mathbb{R}^{d})} \le \|\gamma\|\|\mbox{\boldmath{$v$}}\|_{V}$ for $\mbox{\boldmath{$v$}}\in V$, where $\|\gamma\|$ denotes the norm of the trace operator in ${\cal L}(V,L^2(\Gamma_C;\mathbb{R}^d))$. We need the following hypotheses on the data of Problem [Problem 6](#Plastic){reference-type="ref" reference="Plastic"}.

$\underline{H({\mathscr{A}})}:$ ${\mathscr{A}} \colon Q \times \mathbb{S}^{d}\rightarrow \mathbb{S}^{d}$ is such that

$\underline{H({\mathscr B})}:$ $\mathscr {B}\colon Q \times \mathbb{S}^{d} \rightarrow \mathbb{S}^{d}$ is such that

$\underline{H({\mathscr G})}:$ $\mathscr {G}\colon Q \times \mathbb{S}^{d} \times \mathbb{S}^d \rightarrow \mathbb{S}^{d}$ is such that

$\underline{H({k})}:$ $k \colon \Gamma_C \times \mathbb{R}\to \mathbb{R}$ is such that

$\underline{H({j_\nu})}:$ $j_\nu \colon \Gamma_C \times \mathbb{R}\to \mathbb{R}$ is such that

$\underline{(H_4)}:$ $\mbox{\boldmath{$f$}}_0 \in L^2(0,T;L^2(\Omega;\mathbb{R}^d))$, $\mbox{\boldmath{$f$}}_N \in L^2(0,T;L^2(\Gamma_N;\mathbb{R}^d))$, $\mbox{\boldmath{$u$}}_0$, $\mbox{\boldmath{$w$}}_0\in V$.

We introduce the set $K$ of admissible velocity fields defined by $$\label{SETK} K=\{\, \mbox{\boldmath{$v$}}\in V \mid v_{\nu} \leq g \ \ \mbox{\rm on} \ \ \Gamma_{C}\, \},$$ and an element $\mbox{\boldmath{$f$}}\in L^2(0,T;V^*)$ given by $$\label{fXXX} \langle \mbox{\boldmath{$f$}}(t),\mbox{\boldmath{$v$}}\rangle = \langle \mbox{\boldmath{$f$}}_{0} (t), \mbox{\boldmath{$v$}}\rangle_{L^{2}(\Omega;\mathbb{R}^{d})} +\langle \mbox{\boldmath{$f$}}_{N}(t),\mbox{\boldmath{$v$}}\rangle_{L^{2}(\Gamma_{N};\mathbb{R}^{d})}$$ for all $\mbox{\boldmath{$v$}}\in V$, a.e. $t \in (0, T)$. We now turn to the weak formulation of Problem [Problem 6](#Plastic){reference-type="ref" reference="Plastic"}. 
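The derivation below uses Green's formula in the following form, see [@MOSBOOK Theorem 2.25] (stated here for sufficiently regular $\mbox{\boldmath{$\sigma$}}$ and arbitrary $\mbox{\boldmath{$v$}} \in H^1(\Omega;\mathbb{R}^d)$):

```latex
\int_\Omega \mbox{\boldmath{$\sigma$}} \cdot \mbox{\boldmath{$\varepsilon$}}(\mbox{\boldmath{$v$}}) \, dx
+ \int_\Omega {\rm Div}\, \mbox{\boldmath{$\sigma$}} \cdot \mbox{\boldmath{$v$}} \, dx
= \int_\Gamma \mbox{\boldmath{$\sigma$}} \mbox{\boldmath{$\nu$}} \cdot \mbox{\boldmath{$v$}} \, d\Gamma .
```

Together with the decomposition ([\[DEC\]](#DEC){reference-type="ref" reference="DEC"}), this is what splits the boundary term into its normal and tangential contributions.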
Let $(\mbox{\boldmath{$u$}}, \mbox{\boldmath{$\sigma$}})$ be a smooth solution to this problem, that is, assume the data are smooth functions and all the derivatives and conditions are satisfied in the usual sense at each point. Let $\mbox{\boldmath{$v$}}\in K$ and $t\in (0,T)$. We multiply ([\[equation2\]](#equation2){reference-type="ref" reference="equation2"}) by $\mbox{\boldmath{$v$}}-\mbox{\boldmath{$u$}}'(t)$, use Green's formula, see [@MOSBOOK Theorem 2.25], and apply the boundary conditions ([\[equation3\]](#equation3){reference-type="ref" reference="equation3"}) and ([\[equation4\]](#equation4){reference-type="ref" reference="equation4"}) to obtain $$\begin{aligned} &&\hspace{-0.5cm} \int_\Omega\mbox{\boldmath{$u$}}''(t)\cdot(\mbox{\boldmath{$v$}}-\mbox{\boldmath{$u$}}'(t)) \,dx +\int_\Omega \mbox{\boldmath{$\sigma$}}(t)\cdot \big( \mbox{\boldmath{$\varepsilon$}}(\mbox{\boldmath{$v$}}) -\mbox{\boldmath{$\varepsilon$}}(\mbox{\boldmath{$u$}}'(t)) \big) \, dx \\ [1mm] &&\hspace{-0.5cm} \ \ =\int_\Omega \mbox{\boldmath{$f$}}_0(t)\cdot(\mbox{\boldmath{$v$}}-\mbox{\boldmath{$u$}}'(t))dx +\int_{\Gamma_N} \mbox{\boldmath{$f$}}_N(t)\cdot(\mbox{\boldmath{$v$}}-\mbox{\boldmath{$u$}}'(t))\, d\Gamma +\int_{\Gamma_C} \mbox{\boldmath{$\sigma$}}(t)\mbox{\boldmath{$\nu$}}\cdot(\mbox{\boldmath{$v$}}-\mbox{\boldmath{$u$}}'(t))\, d\Gamma.\end{aligned}$$ By ([\[equation5\]](#equation5){reference-type="ref" reference="equation5"}) and the definition of the Clarke subgradient, we have $$\begin{aligned} && \sigma_{\nu}(t) (v_\nu - u_\nu'(t)) = (\sigma_{\nu}(t) + \xi(t))(v_\nu - g) - (\sigma_{\nu}(t) + \xi(t))(u_\nu'(t) - g) \nonumber \\[2mm] &&\qquad - \xi(t) (v_\nu - u_\nu'(t)) \ge - k(u_\nu(t)) \, j_\nu^0(u_\nu'(t); v_\nu - u_\nu'(t)) \ \ \mbox{on} \ \ \Gamma_C. 
\label{normal}\end{aligned}$$ On the other hand, ([\[DEC\]](#DEC){reference-type="ref" reference="DEC"}) and the frictionless condition ([\[equation6\]](#equation6){reference-type="ref" reference="equation6"}) imply $$\label{tangent} \mbox{\boldmath{$\sigma$}}(t)\mbox{\boldmath{$\nu$}}\cdot (\mbox{\boldmath{$v$}}- \mbox{\boldmath{$u$}}'(t)) = \sigma_{\nu}(t) (v_\nu - u_\nu'(t)) \ \ \mbox{on} \ \ \Gamma_C.$$ Combining ([\[fXXX\]](#fXXX){reference-type="ref" reference="fXXX"}), ([\[normal\]](#normal){reference-type="ref" reference="normal"}) and ([\[tangent\]](#tangent){reference-type="ref" reference="tangent"}), we have $$\begin{aligned} && \int_\Omega\mbox{\boldmath{$u$}}''(t)\cdot(\mbox{\boldmath{$v$}}-\mbox{\boldmath{$u$}}'(t))\, dx +\langle \mbox{\boldmath{$\sigma$}}(t), \mbox{\boldmath{$\varepsilon$}}(\mbox{\boldmath{$v$}})-\mbox{\boldmath{$\varepsilon$}}(\mbox{\boldmath{$u$}}'(t)) \rangle_{\mathcal{H}} \nonumber \\ [1mm] && \qquad\qquad +\int_{\Gamma_{C}}k(u_{\nu}(t))\, j_{\nu}^0(u'_{\nu}(t); v_{\nu}-u'_{\nu}(t))\, d\Gamma \ge \langle \mbox{\boldmath{$f$}}(t), \mbox{\boldmath{$v$}}- \mbox{\boldmath{$u$}}'(t) \rangle. 
\label{NEW4}\end{aligned}$$ From the constitutive law ([\[equation1\]](#equation1){reference-type="ref" reference="equation1"}), we get $$\label{ETA1} \mbox{\boldmath{$\sigma$}}(t) = {\mathscr A}\mbox{\boldmath{$\varepsilon$}}({\mbox{\boldmath{$u$}}}'(t)) +{\mathscr B}\mbox{\boldmath{$\varepsilon$}}({\mbox{\boldmath{$u$}}}(t)) + \mbox{\boldmath{$\eta$}}(t)$$ with $$\label{ETA2} \mbox{\boldmath{$\eta$}}(t) = \int_0^t{\mathscr G}\big( \mbox{\boldmath{$\sigma$}}(s) - {\mathscr A}\mbox{\boldmath{$\varepsilon$}}({\mbox{\boldmath{$u$}}}'(s)), \mbox{\boldmath{$\varepsilon$}}({\mbox{\boldmath{$u$}}}(s)) \big)\,ds .$$ Inserting ([\[ETA1\]](#ETA1){reference-type="ref" reference="ETA1"}) into ([\[NEW4\]](#NEW4){reference-type="ref" reference="NEW4"}) and using ([\[ETA2\]](#ETA2){reference-type="ref" reference="ETA2"}), we obtain the following variational formulation of Problem [Problem 6](#Plastic){reference-type="ref" reference="Plastic"}. **Problem 7**. *Find $\mbox{\boldmath{$\eta$}}\colon (0, T) \to {\mathcal{H}}$ and $\mbox{\boldmath{$u$}}\colon (0,T)\to V$ such that $\mbox{\boldmath{$u$}}'(t) \in K$ a.e. 
$t\in (0, T)$ and $$\begin{aligned} &&\hspace{-0.4cm} \mbox{\boldmath{$\eta$}}'(t) = {\mathscr{G}} (\mbox{\boldmath{$\eta$}}(t) + \mathscr{B}(\mbox{\boldmath{$\varepsilon$}}(\mbox{\boldmath{$u$}}(t))), \mbox{\boldmath{$\varepsilon$}}(\mbox{\boldmath{$u$}}(t))) \ \ \mbox{\rm for all} \ \ t \in (0, T), \\[2mm] &&\hspace{-0.4cm} \int_\Omega\mbox{\boldmath{$u$}}''(t)\cdot(\mbox{\boldmath{$v$}}-\mbox{\boldmath{$u$}}'(t)) \, dx +\langle \mathscr{A}(\mbox{\boldmath{$\varepsilon$}}(\mbox{\boldmath{$u$}}'(t))) + \mathscr{B}(\mbox{\boldmath{$\varepsilon$}}(\mbox{\boldmath{$u$}}(t))) + \mbox{\boldmath{$\eta$}}(t), \mbox{\boldmath{$\varepsilon$}}(\mbox{\boldmath{$v$}})-\mbox{\boldmath{$\varepsilon$}}(\mbox{\boldmath{$u$}}'(t)) \rangle_{\mathcal{H}} \\ &&\hspace{-0.4cm} \quad +\int_{\Gamma_{C}} k(u_{\nu}(t)) \, j_{\nu}^0(u'_{\nu}(t); v_{\nu}-u'_{\nu}(t))\, d\Gamma \ge \langle \mbox{\boldmath{$f$}}(t), \mbox{\boldmath{$v$}}- \mbox{\boldmath{$u$}}'(t) \rangle \ \ \mbox{\rm for all} \ \ \mbox{\boldmath{$v$}}\in K, \ \mbox{\rm a.e.} \ t \in (0,T), \\[1mm] &&\hspace{-0.4cm} \mbox{\boldmath{$\eta$}}(0) = \mbox{\boldmath{$0$}}, \ \mbox{\boldmath{$u$}}(0) = \mbox{\boldmath{$u$}}_0, \ \mbox{\boldmath{$u$}}'(0) = \mbox{\boldmath{$w$}}_0.\end{aligned}$$* Problem [Problem 7](#CONTACTX1){reference-type="ref" reference="CONTACTX1"} couples the ordinary differential equation with a hemivariational inequality with constraints. The result below concerns the unique solvability, continuous dependence, and regularity of solution to Problem [Problem 7](#CONTACTX1){reference-type="ref" reference="CONTACTX1"}. **Theorem 8**. 
*Assume hypotheses $H({\mathscr A})$, $H({\mathscr B})$, $H({\mathscr G})$, $H(k)$, $H(j_\nu)$, $(H_4)$ and the smallness condition $$\label{CONTACT_smallness} \alpha_{j_{\nu}}k_2\|\gamma\|^{2} < m_{\mathscr{A}}.$$ Then Problem $\ref{CONTACTX1}$ has a solution $\mbox{\boldmath{$\eta$}}\in H^1(0,T; {\mathcal{H}})$ and $\mbox{\boldmath{$u$}}\in C([0, T]; V)$ such that $\mbox{\boldmath{$u$}}' \in {\mathbb{W}}$ with $\mbox{\boldmath{$u$}}'(t) \in K$ for a.e. $t \in (0, T)$. If, in addition, $$\label{EXTRA} \mbox{\rm either} \ \ j_\nu (\mbox{\boldmath{$x$}}, \cdot) \ \ \mbox{\rm or} \ \ -j_\nu (\mbox{\boldmath{$x$}}, \cdot) \ \ \mbox{\rm is regular for a.e.} \ \mbox{\boldmath{$x$}}\in \Gamma_C,$$ then the solution to Problem $\ref{CONTACTX1}$ is unique, and there exists a constant $c > 0$ such that $$\begin{aligned} \label{EST555} &&\hspace{-0.2cm} \| \mbox{\boldmath{$\eta$}}_1 - \mbox{\boldmath{$\eta$}}_2 \|_{H^1(0, T; {\mathcal{H}})} + \| \mbox{\boldmath{$u$}}_1 - \mbox{\boldmath{$u$}}_2 \|_{C([0, T]; V)} + \| \mbox{\boldmath{$u$}}'_1 - \mbox{\boldmath{$u$}}'_2 \|_{L^2(0, T; V)} \\[1mm] && \hspace{-0.2cm} \quad \le c \, ( \| \mbox{\boldmath{$u$}}_0 - \widetilde {\mbox{\boldmath{$u$}}}_0 \| + \| \mbox{\boldmath{$w$}}_0 - \widetilde {\mbox{\boldmath{$w$}}}_0\| + \| \mbox{\boldmath{$f$}}_0 - {\widetilde{\mbox{\boldmath{$f$}}}_0} \|_{L^2(0, T; L^2(\Omega;\mathbb{R}^d))} + \| \mbox{\boldmath{$f$}}_N - {\widetilde{\mbox{\boldmath{$f$}}}_N} \|_{L^2(0, T; L^2(\Gamma_N; \mathbb{R}^d))}), \nonumber \end{aligned}$$ where $(\mbox{\boldmath{$\eta$}}_1, \mbox{\boldmath{$u$}}_1)$ and $(\mbox{\boldmath{$\eta$}}_2, \mbox{\boldmath{$u$}}_2)$ are the unique solutions to Problem $\ref{CONTACTX1}$ corresponding to $$(\mbox{\boldmath{$u$}}_0, \mbox{\boldmath{$w$}}_0, \mbox{\boldmath{$f$}}_0, \mbox{\boldmath{$f$}}_N), (\widetilde{\mbox{\boldmath{$u$}}}_0, {\widetilde{\mbox{\boldmath{$w$}}}}_0, \widetilde{\mbox{\boldmath{$f$}}}_0, \widetilde{\mbox{\boldmath{$f$}}}_N) \in V\times V \times L^2(0, T; 
L^2(\Omega;\mathbb{R}^d) \times L^2(\Gamma_N; \mathbb{R}^d)),$$ respectively. Moreover, we have the regularity $(\mbox{\boldmath{$\eta$}}, \mbox{\boldmath{$u$}}') \in C([0,T]; {\mathcal{H}} \times L^2(\Omega;\mathbb{R}^d))$.* **Proof**.  Let $\mbox{\boldmath{$w$}}(t)=\mbox{\boldmath{$u$}}'(t)$ for all $t\in (0,T)$. Then $\mbox{\boldmath{$u$}}(t) = \mbox{\boldmath{$u$}}_0 + \int_0^t \mbox{\boldmath{$w$}}(s) \, ds$ for all $t\in (0,T)$ and Problem [Problem 7](#CONTACTX1){reference-type="ref" reference="CONTACTX1"} can be reformulated as follows. **Problem 9**. *Find $\mbox{\boldmath{$\eta$}}\colon (0, T) \to {\mathcal{H}}$ and $\mbox{\boldmath{$w$}}\colon (0,T)\to V$ such that $\mbox{\boldmath{$w$}}(t) \in K$ a.e. $t\in (0, T)$, $\mbox{\boldmath{$w$}}(0)=\mbox{\boldmath{$w$}}_0$, $\mbox{\boldmath{$\eta$}}(0) = \mbox{\boldmath{$0$}}$ and $$\begin{aligned} &&\hspace{-0.3cm} \mbox{\boldmath{$\eta$}}'(t) = {\mathscr{G}} \left( \mbox{\boldmath{$\eta$}}(t) + \mathscr{B} \left( \mbox{\boldmath{$\varepsilon$}}(\mbox{\boldmath{$u$}}_0 + \int_0^t \mbox{\boldmath{$w$}}(s) \, ds) \right), \mbox{\boldmath{$\varepsilon$}}(\mbox{\boldmath{$u$}}_0 + \int_0^t \mbox{\boldmath{$w$}}(s) \, ds ) \right) \ \ \mbox{\rm for all} \ \ t \in (0, T), \\[2mm] &&\hspace{-0.3cm} \int_\Omega\mbox{\boldmath{$w$}}'(t)\cdot(\mbox{\boldmath{$v$}}-\mbox{\boldmath{$w$}}(t)) \, dx + \langle \mathscr{A}(\mbox{\boldmath{$\varepsilon$}}(\mbox{\boldmath{$w$}}(t))) + \mathscr{B}(\mbox{\boldmath{$\varepsilon$}}(\mbox{\boldmath{$u$}}_0 + \int_0^t \mbox{\boldmath{$w$}}(s) \, ds)) + \mbox{\boldmath{$\eta$}}(t), \mbox{\boldmath{$\varepsilon$}}(\mbox{\boldmath{$v$}})-\mbox{\boldmath{$\varepsilon$}}(\mbox{\boldmath{$w$}}(t)) \rangle_{\mathcal{H}} \\ &&\hspace{-0.3cm} \quad +\int_{\Gamma_{C}} k\left( u_{0\nu} + \int_0^t w_\nu (s) \, ds \right) \, j_{\nu}^0(w_{\nu}(t); v_{\nu}-w_{\nu}(t))\, d\Gamma \ge \langle \mbox{\boldmath{$f$}}(t), \mbox{\boldmath{$v$}}- \mbox{\boldmath{$w$}}(t) \rangle \\[2mm] && \ \ \ \ \mbox{\rm 
for all} \ \ \mbox{\boldmath{$v$}}\in K, \ \mbox{\rm a.e.} \ t \in (0,T). \end{aligned}$$* Let $E := {\mathcal{H}}$, $U := V$, $Z = X := L^2(\Gamma_C)$, and $A \colon (0,T) \times E \times V \to V^*$, $S \colon L^2(0,T;V) \to L^2(0,T; U)$, $R_1 \colon L^2(0,T;V) \to L^2(0,T; V^*)$, $R_2 \colon L^2(0,T;V) \to L^2(0,T; Z)$, $j \colon Z \times X \to \mathbb{R}$, $F \colon (0,T) \times E \times U \to E$, $M \colon V \to X$ be defined by $$\begin{aligned} && \langle A (t, \mbox{\boldmath{$\eta$}}, \mbox{\boldmath{$v$}}), \mbox{\boldmath{$z$}}\rangle := \langle\mathscr{A}(t,\mbox{\boldmath{$\varepsilon$}}(\mbox{\boldmath{$v$}})) + \mbox{\boldmath{$\eta$}}, \mbox{\boldmath{$\varepsilon$}}(\mbox{\boldmath{$z$}}) \rangle_{\mathcal{H}} \ \ \mbox{for} \ \ \mbox{\boldmath{$v$}}, \mbox{\boldmath{$z$}}\in V, \ \mbox{\boldmath{$\eta$}}\in {\mathcal{H}}, \ \mbox{a.e.} \ t \in (0,T), \\ && (S\mbox{\boldmath{$w$}})(t) := \mbox{\boldmath{$u$}}_0 + \int_0^t \mbox{\boldmath{$w$}}(s) \, ds \ \ \mbox{for} \ \ \mbox{\boldmath{$w$}}\in L^2(0,T; V), \ \mbox{a.e.} \ t \in (0,T), \\ && \langle (R_1\mbox{\boldmath{$w$}})(t), \mbox{\boldmath{$v$}}\rangle := \langle {\mathscr{B}} \mbox{\boldmath{$\varepsilon$}} ((S\mbox{\boldmath{$w$}})(t)), \mbox{\boldmath{$\varepsilon$}}(\mbox{\boldmath{$v$}})\rangle_{\mathcal{H}} \ \ \mbox{for} \ \ \mbox{\boldmath{$w$}}\in L^2(0,T; V), \ \mbox{\boldmath{$v$}}\in V, \ \mbox{a.e.} \ t \in (0,T), \\ && (R_2\mbox{\boldmath{$w$}})(t) := ((S\mbox{\boldmath{$w$}})(t))_\nu = u_{0\nu} + \int_0^t w_\nu(s) \, ds \ \ \mbox{for} \ \ \mbox{\boldmath{$w$}}\in L^2(0,T; V), \ \mbox{a.e.} \ t \in (0,T), \\ && j(z, v) = \int_{\Gamma_C} k(\mbox{\boldmath{$x$}}, z) \, j_\nu (\mbox{\boldmath{$x$}}, v) \, d\Gamma \ \ \mbox{for} \ \ z \in Z, \ v \in X, \ \mbox{a.e.} \ t \in (0,T), \\ && F(t, \mbox{\boldmath{$\eta$}}, \mbox{\boldmath{$w$}})(\mbox{\boldmath{$x$}}) := {\mathscr{G}}
(\mbox{\boldmath{$x$}}, t, \mbox{\boldmath{$\eta$}}(\mbox{\boldmath{$x$}}) + {\mathscr{B}}(\mbox{\boldmath{$\varepsilon$}}(\mbox{\boldmath{$w$}}(\mbox{\boldmath{$x$}}))), \mbox{\boldmath{$\varepsilon$}}(\mbox{\boldmath{$w$}}(\mbox{\boldmath{$x$}}))) \ \mbox{for} \ \mbox{\boldmath{$\eta$}}\in E, \ \mbox{\boldmath{$w$}}\in U, \ \mbox{a.e.} \ t \in (0,T), \\[1mm] && M \mbox{\boldmath{$v$}}:= v_\nu \ \ \mbox{for} \ \ \mbox{\boldmath{$v$}}\in V, \end{aligned}$$ and $\varphi \equiv 0$. With the above notation, we consider the following auxiliary system associated with Problem [Problem 9](#CONTACTX33){reference-type="ref" reference="CONTACTX33"}. **Problem 10**. *Find $\mbox{\boldmath{$\eta$}}\in H^1(0,T; E)$ and $\mbox{\boldmath{$w$}}\in \mathbb{W}$ such that $\mbox{\boldmath{$w$}}(t) \in K$ for a.e. $t \in (0, T)$, $\mbox{\boldmath{$w$}}(0)=\mbox{\boldmath{$w$}}_0$, $\mbox{\boldmath{$\eta$}}(0) = \mbox{\boldmath{$0$}}$ and $$\begin{cases} \displaystyle \mbox{\boldmath{$\eta$}}'(t) = F(t, \mbox{\boldmath{$\eta$}}(t), (S\mbox{\boldmath{$w$}})(t)) \ \ \mbox{\rm for all} \ \ t \in (0,T), \\[1mm] \langle \mbox{\boldmath{$w$}}'(t) + A(t, \mbox{\boldmath{$\eta$}}(t), \mbox{\boldmath{$w$}}(t)) + (R_1 \mbox{\boldmath{$w$}})(t) - \mbox{\boldmath{$f$}}(t), \mbox{\boldmath{$v$}}- \mbox{\boldmath{$w$}}(t) \rangle \\[1mm] \qquad \ \ \, + \, j^0 ((R_2\mbox{\boldmath{$w$}})(t), M \mbox{\boldmath{$w$}}(t); M\mbox{\boldmath{$v$}} - M\mbox{\boldmath{$w$}}(t)) \ge 0 \ \ \mbox{\rm for all} \ \mbox{\boldmath{$v$}}\in K, \ \mbox{\rm a.e.} \ t \in (0,T). \end{cases}$$* First, we apply Corollary [Corollary 1](#Corollary2){reference-type="ref" reference="Corollary2"} to deduce that Problem [Problem 10](#CONTACTX2){reference-type="ref" reference="CONTACTX2"} is solvable. We will verify hypotheses $H(F)$, $H(A)$, $H(j)$, $H(\varphi)$, $H(K)$, $H(M)$, and $(H_1)$--$(H_3)$. 
We use assumptions $H(\mathscr{G})$ and $H(\mathscr{B})$ to obtain the estimate $$\begin{aligned} && \| F(t, \mbox{\boldmath{$\eta$}}, \mbox{\boldmath{$w$}}) \|_{\mathcal{H}} \le \| F(t, \mbox{\boldmath{$\eta$}}, \mbox{\boldmath{$w$}}) - F(t, \mbox{\boldmath{$0$}}, \mbox{\boldmath{$0$}}) \|_{\mathcal{H}} + \| F(t, \mbox{\boldmath{$0$}}, \mbox{\boldmath{$0$}}) \|_{\mathcal{H}} \\[2mm] && \quad \le \sqrt{3} \, L_{\mathscr{G}} \, (\| \mbox{\boldmath{$\eta$}}\|_{\mathcal{H}} + L_{\mathscr{B}} \| \mbox{\boldmath{$w$}}\| + \| \mbox{\boldmath{$w$}}\| ) + \sqrt{2} \, (L_{\mathscr{G}} \, \| {\mathscr{B}}(\cdot, t,\mbox{\boldmath{$0$}}) \|_{\mathcal{H}} + \| {\mathscr{G}}(\cdot, t,\mbox{\boldmath{$0$}},\mbox{\boldmath{$0$}}) \|_{\mathcal{H}})\end{aligned}$$ for all $\mbox{\boldmath{$\eta$}}\in {\mathcal{H}}$, $\mbox{\boldmath{$w$}}\in V$, a.e. $t \in (0, T)$. Hence, the function $t \mapsto F(t, \mbox{\boldmath{$\eta$}}, \mbox{\boldmath{$w$}})$ belongs to $L^2(0,T;{\mathcal{H}})$ for all $\mbox{\boldmath{$\eta$}}\in {\mathcal{H}}$, $\mbox{\boldmath{$w$}}\in V$. Similarly, we find that $$\| F(t, \mbox{\boldmath{$\eta$}}_1, \mbox{\boldmath{$w$}}_1)-F(t,\mbox{\boldmath{$\eta$}}_2, \mbox{\boldmath{$w$}}_2) \|_{\mathcal{H}} \le \sqrt{2} \, L_{\mathscr{G}} \, (\sqrt{2}\, \| \mbox{\boldmath{$\eta$}}_1-\mbox{\boldmath{$\eta$}}_2 \|_{\mathcal{H}} + \sqrt{2} \, L_{\mathscr{B}} \| \mbox{\boldmath{$w$}}_1-\mbox{\boldmath{$w$}}_2\| + \| \mbox{\boldmath{$w$}}_1-\mbox{\boldmath{$w$}}_2 \| )$$ for all $\mbox{\boldmath{$\eta$}}_1$, $\mbox{\boldmath{$\eta$}}_2 \in {\mathcal{H}}$, $\mbox{\boldmath{$w$}}_1$, $\mbox{\boldmath{$w$}}_2 \in V$, a.e. $t \in (0, T)$. We deduce that the map $F$ satisfies $H(F)$. Next, we show that operator ${\mathscr{A}}$ satisfies $H(A)$. 
It is clear from $H({\mathscr{A}})$ that $A(\cdot, \mbox{\boldmath{$\eta$}}, \mbox{\boldmath{$v$}})$ is measurable for all $\mbox{\boldmath{$\eta$}}\in E$, $\mbox{\boldmath{$v$}}\in V$, and $A(t,\cdot, \mbox{\boldmath{$v$}})$ is continuous for all $\mbox{\boldmath{$v$}}\in V$, a.e. $t \in (0, T)$. From $H({\mathscr{A}})$(3), we have $\| {\mathscr{A}}(t,\mbox{\boldmath{$\varepsilon$}}(\mbox{\boldmath{$v$}})) \|_{\mathcal H} \le \sqrt{2} \, (\| {\widetilde{a}}_0(t)\|_{L^2(\Omega)} + {\widetilde{a}}_2 \, \| \mbox{\boldmath{$\varepsilon$}}(\mbox{\boldmath{$v$}}) \|_{\mathcal H})$ for all $\mbox{\boldmath{$v$}}\in V$, a.e. $t \in (0, T)$, and consequently $$\begin{aligned} && \| A(t, \mbox{\boldmath{$\eta$}}, \mbox{\boldmath{$v$}}) \|_{V^*} = \sup_{\| z\|\le 1} \langle {\mathscr{A}}(t, \mbox{\boldmath{$\varepsilon$}}(\mbox{\boldmath{$v$}}))+\mbox{\boldmath{$\eta$}}, \mbox{\boldmath{$\varepsilon$}}(\mbox{\boldmath{$z$}}) \rangle_{\mathcal H} \le \sup_{\| z \|\le 1} \left( \| {\mathscr{A}}(t, \mbox{\boldmath{$\varepsilon$}}(\mbox{\boldmath{$v$}}))\|_{\mathcal H} + \| \mbox{\boldmath{$\eta$}}\|_{\mathcal H} \right) \, \| \mbox{\boldmath{$z$}}\| \\ && \qquad \le \sqrt{2} \, \| {\widetilde{a}}_0(t)\|_{L^2(\Omega)} + \| \mbox{\boldmath{$\eta$}}\|_{\mathcal H} + \sqrt{2} \, {\widetilde{a}}_2 \, \| \mbox{\boldmath{$v$}}\|, \end{aligned}$$ which implies condition $H(A)$(c) with $a_0(t) = \sqrt{2} \, \| {\widetilde{a}}_0(t)\|_{L^2(\Omega)}$, $a_1 = 1$ and $a_2 = \sqrt{2} \, {\widetilde{a}}_2$. 
To demonstrate $H(A)$(d), we observe that $$\| A(t, \mbox{\boldmath{$\eta$}}_1, \mbox{\boldmath{$v$}}) - A(t, \mbox{\boldmath{$\eta$}}_2, \mbox{\boldmath{$v$}}) \|_{V^*} = \sup_{\| z\|\le 1} \langle \mbox{\boldmath{$\eta$}}_1 - \mbox{\boldmath{$\eta$}}_2, \mbox{\boldmath{$\varepsilon$}}(\mbox{\boldmath{$z$}}) \rangle_{\mathcal H} \le \sup_{\| z \|\le 1} \| \mbox{\boldmath{$\eta$}}_1 - \mbox{\boldmath{$\eta$}}_2\|_{\mathcal H} \| \mbox{\boldmath{$\varepsilon$}}(\mbox{\boldmath{$z$}}) \|_{\mathcal H} \le \| \mbox{\boldmath{$\eta$}}_1 - \mbox{\boldmath{$\eta$}}_2\|_{\mathcal H}$$ for all $\mbox{\boldmath{$\eta$}}_1$, $\mbox{\boldmath{$\eta$}}_2 \in {\mathcal H}$, $\mbox{\boldmath{$v$}}\in V$, a.e. $t \in (0, T)$, and from $H({\mathscr{A}})$(4), we deduce $$\begin{aligned} && \langle A(t, \mbox{\boldmath{$\eta$}}, \mbox{\boldmath{$v$}}_1) - A(t, \mbox{\boldmath{$\eta$}}, \mbox{\boldmath{$v$}}_2) \rangle = \langle {\mathscr{A}}(t, \mbox{\boldmath{$\varepsilon$}}(\mbox{\boldmath{$v$}}_1))- {\mathscr{A}}(t, \mbox{\boldmath{$\varepsilon$}}(\mbox{\boldmath{$v$}}_2)), \mbox{\boldmath{$\varepsilon$}}(\mbox{\boldmath{$v$}}_1)-\mbox{\boldmath{$\varepsilon$}}(\mbox{\boldmath{$v$}}_2) \rangle_{\mathcal H} \\ && \qquad \ge m_{\mathscr{A}} \int_{\Omega} \| \mbox{\boldmath{$\varepsilon$}}(\mbox{\boldmath{$v$}}_1)-\mbox{\boldmath{$\varepsilon$}}(\mbox{\boldmath{$v$}}_2)\|^2_{\mathbb{S}^d} \, dx = m_{\mathscr{A}} \, \| \mbox{\boldmath{$v$}}_1-\mbox{\boldmath{$v$}}_2\|^2\end{aligned}$$ for all $\mbox{\boldmath{$\eta$}}\in {\mathcal H}$, $\mbox{\boldmath{$v$}}_1$, $\mbox{\boldmath{$v$}}_2 \in V$, a.e. $t \in (0, T)$. Exploiting the last two inequalities, by Remark [Remark 2](#REM2){reference-type="ref" reference="REM2"}, we obtain $H(A)$(d) with $m_A = m_{\mathscr{A}}$ and ${\overline{m}}_A = 0$. Hence $H(A)$ holds. 
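For the reader's convenience, we record how the last two estimates combine: inserting the intermediate term $A(t, \mbox{\boldmath{$\eta$}}_2, \mbox{\boldmath{$v$}}_1)$ and using the Lipschitz continuity in the second variable together with the strong monotonicity in the third variable gives $$\langle A(t, \mbox{\boldmath{$\eta$}}_1, \mbox{\boldmath{$v$}}_1) - A(t, \mbox{\boldmath{$\eta$}}_2, \mbox{\boldmath{$v$}}_2), \mbox{\boldmath{$v$}}_1 - \mbox{\boldmath{$v$}}_2 \rangle \ge m_{\mathscr{A}} \, \| \mbox{\boldmath{$v$}}_1 - \mbox{\boldmath{$v$}}_2 \|^2 - \| \mbox{\boldmath{$\eta$}}_1 - \mbox{\boldmath{$\eta$}}_2 \|_{\mathcal H} \, \| \mbox{\boldmath{$v$}}_1 - \mbox{\boldmath{$v$}}_2 \|$$ for all $\mbox{\boldmath{$\eta$}}_1$, $\mbox{\boldmath{$\eta$}}_2 \in {\mathcal H}$, $\mbox{\boldmath{$v$}}_1$, $\mbox{\boldmath{$v$}}_2 \in V$, a.e. $t \in (0, T)$.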
From hypotheses $H(k)$, $H(j_\nu)$, and [@MOSBOOK Theorem 3.47], it follows that the function $j$ satisfies $H(j)$(a)--(d), and $$\label{j-inequality} j^0(z, v; w) \le \int_{\Gamma_{C}} k(\mbox{\boldmath{$x$}}, z) \, j_\nu^0 (\mbox{\boldmath{$x$}}, v; w) \, d\Gamma \ \ \mbox{for all} \ \ z, v, w \in X.$$ We use ([\[j-inequality\]](#j-inequality){reference-type="ref" reference="j-inequality"}) with $H(k)$ and $H(j_\nu)$ to get $$\begin{aligned} &&\hspace{-0.5cm} j^0(z_1, v_1; v_2 - v_1) + j^0(z_2, v_2; v_1 - v_2) \\ [1mm] && \le \int_{\Gamma_C} \Big( k(z_1) j_\nu^0 (v_{1}; v_{2} - v_{1}) + k(z_2) j_\nu^0 (v_{2}; v_{1} - v_{2}) \Big) \, d\Gamma \\ [1mm] && \le \int_{\Gamma_C} \big( k(z_1) - k(z_2) \big) j_\nu^0 (v_{1}; v_{2} - v_{1}) + \, k(z_2) \Big( j_\nu^0 (v_{1}; v_{2} - v_{1}) + j_\nu^0 (v_{2}; v_{1} - v_{2}) \Big) \, d\Gamma \\ [1mm] && \le c_0 \, L_k \int_{\Gamma_C} | z_1(\mbox{\boldmath{$x$}}) - z_2(\mbox{\boldmath{$x$}})| \, \| v_1(\mbox{\boldmath{$x$}}) - v_2(\mbox{\boldmath{$x$}})\| \, d\Gamma + \alpha_{j_{\nu}} \, k_2 \int_{\Gamma_C} \| v_1(\mbox{\boldmath{$x$}}) - v_2(\mbox{\boldmath{$x$}}) \|^2 \, d\Gamma \\ [1mm] && \le {\overline{m}}_{j} \| z_1 - z_2 \|_Z \| v_1 - v_2 \|_X + m_{j} \| v_1 - v_2 \|^2_X,\end{aligned}$$ where $m_{j} = \alpha_{j_{\nu}} k_2$ and ${\overline{m}}_{j} = c_0 L_k$. Hence $H(j)$(e) follows, and therefore, $j$ satisfies $H(j)$. Condition $H(\varphi)$ holds trivially. It is clear that the set $K$ is a closed and convex subset of $V$ with $\mbox{\boldmath{$0$}}\in K$, i.e., $H(K)$ holds. By the linearity and continuity of the normal trace operator, it is obvious that $M_1 = M$ and $H(M)$(a) is satisfied. Condition $H(M)$(b) holds as a consequence of the proof of [@AMMA2 Theorem 2.18, p.59].
Using the regularity of $\mbox{\boldmath{$f$}}_0$ and $\mbox{\boldmath{$f$}}_N$ in $(H_4)$, we readily obtain that the functional defined by ([\[fXXX\]](#fXXX){reference-type="ref" reference="fXXX"}) satisfies $\mbox{\boldmath{$f$}}\in L^2(0, T; V^*)$. Condition $(H_1)$ is a consequence of the smallness hypothesis ([\[CONTACT_smallness\]](#CONTACT_smallness){reference-type="ref" reference="CONTACT_smallness"}) while condition $(H_2)$ follows from $(H_4)$. Finally, we argue as in [@AMMA14 Theorem 14.2, p.367] to demonstrate that the operators $S$, $R_1$ and $R_2$ satisfy the inequalities $(H_3)$(a)--(c), respectively. Having verified all hypotheses of Corollary [Corollary 1](#Corollary2){reference-type="ref" reference="Corollary2"}, we deduce from it that Problem [Problem 10](#CONTACTX2){reference-type="ref" reference="CONTACTX2"} has a unique solution $\mbox{\boldmath{$\eta$}}\in H^1(0,T; E)$, $\mbox{\boldmath{$w$}}\in \mathbb{W}$ such that $\mbox{\boldmath{$w$}}(t) \in K$ for a.e. $t \in (0, T)$. By the inequality ([\[j-inequality\]](#j-inequality){reference-type="ref" reference="j-inequality"}) it follows that any solution to Problem [Problem 10](#CONTACTX2){reference-type="ref" reference="CONTACTX2"} is a solution to Problem [Problem 9](#CONTACTX33){reference-type="ref" reference="CONTACTX33"}. Moreover, from the relation $\mbox{\boldmath{$u$}}(t) = \int_0^t \mbox{\boldmath{$w$}}(s) \, ds + \mbox{\boldmath{$u$}}_0$ for all $t \in (0, T)$, we conclude that $\mbox{\boldmath{$u$}}\in C([0, T]; V)$, $\mbox{\boldmath{$u$}}' \in \mathbb{W}$ with $\mbox{\boldmath{$u$}}'(t) \in K$ for a.e. $t \in (0, T)$. This completes the proof of the existence of a solution to Problem [Problem 7](#CONTACTX1){reference-type="ref" reference="CONTACTX1"}. Next, we suppose the regularity hypothesis ([\[EXTRA\]](#EXTRA){reference-type="ref" reference="EXTRA"}).
Invoking [@MOSBOOK Theorem 3.47(vii)], we deduce that ([\[j-inequality\]](#j-inequality){reference-type="ref" reference="j-inequality"}) holds with equality. This implies that Problems [Problem 9](#CONTACTX33){reference-type="ref" reference="CONTACTX33"} and [Problem 10](#CONTACTX2){reference-type="ref" reference="CONTACTX2"} are equivalent. Therefore, in this case, Problem [Problem 7](#CONTACTX1){reference-type="ref" reference="CONTACTX1"} is uniquely solvable. Let $(\mbox{\boldmath{$\eta$}}_1, \mbox{\boldmath{$u$}}_1)$ and $(\mbox{\boldmath{$\eta$}}_2, \mbox{\boldmath{$u$}}_2)$ be the unique solutions to Problem $\ref{CONTACTX1}$ corresponding to the data $$(\mbox{\boldmath{$u$}}_0, \mbox{\boldmath{$w$}}_0, \mbox{\boldmath{$f$}}_0, \mbox{\boldmath{$f$}}_N), (\widetilde{\mbox{\boldmath{$u$}}}_0, {\widetilde{\mbox{\boldmath{$w$}}}}_0, \widetilde{\mbox{\boldmath{$f$}}}_0, \widetilde{\mbox{\boldmath{$f$}}}_N) \in V\times V \times L^2(0, T; L^2(\Omega;\mathbb{R}^d) \times L^2(\Gamma_N; \mathbb{R}^d)),$$ respectively.
From ([\[fXXX\]](#fXXX){reference-type="ref" reference="fXXX"}) and $(H_4)$, we readily have $$\label{*3} \| \mbox{\boldmath{$f$}}- \widetilde{\mbox{\boldmath{$f$}}} \|_{L^2(0, T; V^*)} \le c \, \left( \| \mbox{\boldmath{$f$}}_0 - {\widetilde{\mbox{\boldmath{$f$}}}_0} \|_{L^2(0, T; L^2(\Omega;\mathbb{R}^d))} + \| \mbox{\boldmath{$f$}}_N - {\widetilde{\mbox{\boldmath{$f$}}}_N} \|_{L^2(0, T; L^2(\Gamma_N; \mathbb{R}^d))} \right)$$ with a constant $c > 0$, which by ([\[EST444\]](#EST444){reference-type="ref" reference="EST444"}) of Corollary [Corollary 1](#Corollary2){reference-type="ref" reference="Corollary2"} implies $$\begin{aligned} \label{*4} &&\hspace{-0.2cm} \| \mbox{\boldmath{$\eta$}}_1 - \mbox{\boldmath{$\eta$}}_2 \|_{H^1(0, T; E)} + \| \mbox{\boldmath{$w$}}_1 - \mbox{\boldmath{$w$}}_2 \|_{L^2(0, T; V)} \\[1mm] && \hspace{-0.2cm} \quad \le c \, ( \| \mbox{\boldmath{$w$}}_0 - \widetilde {\mbox{\boldmath{$w$}}}_0\| + \| \mbox{\boldmath{$f$}}_0 - {\widetilde{\mbox{\boldmath{$f$}}}_0} \|_{L^2(0, T; L^2(\Omega;\mathbb{R}^d))} + \| \mbox{\boldmath{$f$}}_N - {\widetilde{\mbox{\boldmath{$f$}}}_N} \|_{L^2(0, T; L^2(\Gamma_N; \mathbb{R}^d))}). \nonumber \end{aligned}$$ Using again the equation $\mbox{\boldmath{$u$}}(t) = \mbox{\boldmath{$u$}}_0 + \int_0^t \mbox{\boldmath{$w$}}(s) \, ds$ for all $t\in (0,T)$, we obtain $$\label{*55} \| \mbox{\boldmath{$u$}}_1 - \mbox{\boldmath{$u$}}_2 \|_{C(0, T; V)} \le \| \mbox{\boldmath{$u$}}_0 - \widetilde{\mbox{\boldmath{$u$}}}_0 \| + \sqrt{T} \, \| \mbox{\boldmath{$w$}}_1 - \mbox{\boldmath{$w$}}_2 \|_{L^2(0, T; V)}.$$ Adding ([\[\*4\]](#*4){reference-type="ref" reference="*4"}) and ([\[\*55\]](#*55){reference-type="ref" reference="*55"}), and combining with $\mbox{\boldmath{$w$}}_i = \mbox{\boldmath{$u$}}'_i$ for $i=1$, $2$, we deduce the inequality ([\[EST555\]](#EST555){reference-type="ref" reference="EST555"}). 
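For completeness, we note that ([\[\*55\]](#*55){reference-type="ref" reference="*55"}) follows from the triangle and Hölder inequalities: for all $t \in [0, T]$, $$\| \mbox{\boldmath{$u$}}_1(t) - \mbox{\boldmath{$u$}}_2(t) \| \le \| \mbox{\boldmath{$u$}}_0 - \widetilde{\mbox{\boldmath{$u$}}}_0 \| + \int_0^t \| \mbox{\boldmath{$w$}}_1(s) - \mbox{\boldmath{$w$}}_2(s) \| \, ds \le \| \mbox{\boldmath{$u$}}_0 - \widetilde{\mbox{\boldmath{$u$}}}_0 \| + \sqrt{T} \, \| \mbox{\boldmath{$w$}}_1 - \mbox{\boldmath{$w$}}_2 \|_{L^2(0, T; V)},$$ and it remains to take the supremum over $t \in [0, T]$.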
Finally, we use the continuity of the embeddings $$H^1(0, T; {\mathcal{H}}) \subset C([0, T]; {\mathcal{H}}) \ \ \mbox{and} \ \ {\mathbb{W}} \subset C([0,T]; L^2(\Omega;\mathbb{R}^d))$$ to conclude the desired regularity of the solution $(\mbox{\boldmath{$\eta$}}, \mbox{\boldmath{$u$}}') \in C([0,T]; {\mathcal{H}} \times L^2(\Omega;\mathbb{R}^d))$. This completes the proof. $\Box$

# A dynamic frictional viscoelastic contact problem {#Application2}

In this section we consider the dynamic contact problem for viscoelastic materials with friction and adhesion. The physical framework is similar to the one considered in the previous section. The difference is that now the body $\Omega \subset \mathbb{R}^d$ is assumed to be viscoelastic with long memory, the process is frictional and governed by a convex potential, and the adhesion process is modeled by an ordinary differential equation for the bonding variable on the contact surface $\Gamma_C$. The classical formulation of the contact problem we study in this section is the following.

**Problem 11**.
*Find a displacement field $\mbox{\boldmath{$u$}}\colon Q \to\mathbb{R}^d$, a stress field $\mbox{\boldmath{$\sigma$}}\colon Q \rightarrow \mathbb{S}^d$, and a bonding field $\beta \colon \Gamma_C \times (0, T) \to [0,1]$ such that for all $t\in (0,T)$, $$\begin{aligned} \label{equation1x} \mbox{\boldmath{$\sigma$}}(t) &={\mathscr A}\mbox{\boldmath{$\varepsilon$}}({\mbox{\boldmath{$u$}}}'(t)) +{\mathscr B}\mbox{\boldmath{$\varepsilon$}}({\mbox{\boldmath{$u$}}}(t)) + \int_0^t {\mathscr{C}}(t-s) \mbox{\boldmath{$\varepsilon$}}(\mbox{\boldmath{$u$}}'(s))\, ds \quad &{\rm in}\ &\Omega,\\[1mm] \label{equation2x} {\mbox{\boldmath{$u$}}}''(t)&={\rm Div}\,\mbox{\boldmath{$\sigma$}}(t)+\mbox{\boldmath{$f$}}_0(t)\quad&{\rm in}\ &\Omega,\\[1mm] \label{equation3x} \mbox{\boldmath{$u$}}(t)&=\mbox{\boldmath{$0$}}&{\rm on}\ &\Gamma_D,\\[1mm] \label{equation4x} \mbox{\boldmath{$\sigma$}}(t)\mbox{\boldmath{$\nu$}}&=\mbox{\boldmath{$f$}}_N(t)\quad&{\rm on}\ &\Gamma_N,\\[1mm] \label{equation5x} u'_{\nu}(t)&\le g, \ \sigma_{\nu}(t) \le 0, \ (u'_{\nu}(t)-g) \, \sigma_{\nu}(t)=0 \quad&{\rm on}\ &\Gamma_C,\\[1mm] \label{equation6x} -\mbox{\boldmath{$\sigma$}}_\tau(t) &\in \mu(u_\nu(t)) \, h_1(\beta(t)) \, \partial h_2 (\mbox{\boldmath{$u$}}'_\tau(t)) \quad&{\rm on}\ &\Gamma_C,\\[1mm] \label{equation7x} \beta'(t)&= G(t, \beta(t), \mbox{\boldmath{$u$}}(t)) \quad&{\rm on}\ &\Gamma_C,\\[1mm] \label{equation8x} \beta(0)&= \beta_0 \quad&{\rm on}\ &\Gamma_C,\\[1mm] \label{equation9x} \mbox{\boldmath{$u$}}(0)&= \mbox{\boldmath{$u$}}_0, \ \ \mbox{\boldmath{$u$}}'(0) = \mbox{\boldmath{$w$}}_0 \quad&{\rm in}\ &\Omega.
\end{aligned}$$* In this problem, the viscoelastic constitutive law is of the form ([\[equation1x\]](#equation1x){reference-type="ref" reference="equation1x"}), where $\mathscr{A}$, $\mathscr{B}$ and $\mathscr C$ denote the viscosity operator, the elasticity operator, and the relaxation tensor, respectively. In the study of Problem [Problem 11](#VISCO2){reference-type="ref" reference="VISCO2"} we suppose that the viscosity operator ${\mathscr{A}}$ and the elasticity operator ${\mathscr{B}}$ satisfy conditions $H({\mathscr{A}})$ and $H({\mathscr{B}})$, respectively. We also assume that the densities of the body forces ${\mbox{\boldmath{$f$}}_0}$ and tractions ${\mbox{\boldmath{$f$}}_N}$, and the initial data $\mbox{\boldmath{$u$}}_0$ and $\mbox{\boldmath{$w$}}_0$ satisfy hypothesis $(H_4)$. The condition ([\[equation6x\]](#equation6x){reference-type="ref" reference="equation6x"}) represents the friction condition, a law relating the tangential velocity to the tangential stress corresponding to the friction force. This law involves the product of a function $h_1$ which depends on the adhesion field $\beta$ and the subdifferential $\partial h_2$ of a convex potential $h_2 (\mbox{\boldmath{$x$}}, \cdot)$. The coefficient $\mu$ is a given function which depends on the normal displacement. Since condition ([\[equation5x\]](#equation5x){reference-type="ref" reference="equation5x"}) is the Signorini unilateral contact boundary condition for the normal velocity with a constant $g > 0$, as in the previous section we need the set $K$ of admissible unilateral velocity constraints defined by ([\[SETK\]](#SETK){reference-type="ref" reference="SETK"}).
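We recall that, in accordance with the Signorini condition ([\[equation5x\]](#equation5x){reference-type="ref" reference="equation5x"}), the admissible velocity fields are those whose normal component does not exceed the bound $g$; that is, the set $K$ is of the form $$K = \left\{ \mbox{\boldmath{$v$}}\in V ~:~ v_\nu \le g \ \ \mbox{\rm a.e. on} \ \ \Gamma_C \right\}.$$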
The unknown surface variable $\beta$ is called the adhesion field, see [@SHS]. It describes the intensity of adhesion on the contact surface and is governed by an ordinary differential equation ([\[equation7x\]](#equation7x){reference-type="ref" reference="equation7x"}) with an initial condition ([\[equation8x\]](#equation8x){reference-type="ref" reference="equation8x"}). The value $\beta = 1$ means that the adhesion is complete, $\beta = 0$ means that there is no adhesion, and $0<\beta<1$ means that the adhesion is partial. Further, we need the following hypotheses on the relaxation tensor ${\mathscr{C}}$, the potentials $h_1$ and $h_2$, the coefficient $\mu$, and the adhesive evolution rate function $G$.

$\underline{H({\mathscr C})}:$ $\mathscr {C}\colon Q \times \mathbb{S}^{d} \rightarrow \mathbb{S}^{d}$ is such that

$\underline{H(h_1)}:$ $h_1 \colon \Gamma_C \times \mathbb{R}\rightarrow \mathbb{R}$ is such that

$\underline{H(h_2)}:$ $h_2 \colon \Gamma_C \times \mathbb{R}^{d} \rightarrow \mathbb{R}$ is such that

$\underline{H(\mu)}:$ $\mu \colon \Gamma_C \times \mathbb{R}\rightarrow \mathbb{R}$ is such that

$\underline{H(G)}:$ $G \colon \Gamma_C \times (0, T) \times \mathbb{R}\times \mathbb{R}^d \rightarrow \mathbb{R}$ is such that

Let $V$ and ${\mathcal H}$ be the spaces defined by ([\[SPACESVH\]](#SPACESVH){reference-type="ref" reference="SPACESVH"}), and $\mbox{\boldmath{$f$}}\in L^2(0,T;V^*)$ be given by ([\[fXXX\]](#fXXX){reference-type="ref" reference="fXXX"}). We use the standard procedure as in Section [4](#Application1){reference-type="ref" reference="Application1"} to obtain the following weak formulation of Problem [Problem 11](#VISCO2){reference-type="ref" reference="VISCO2"}.

**Problem 12**. *Find $\beta \colon (0, T) \to [0, 1]$ and $\mbox{\boldmath{$u$}}\colon (0,T)\to V$ such that $\mbox{\boldmath{$u$}}'(t) \in K$ a.e.
$t\in (0, T)$ and $$\begin{aligned} &&\hspace{-0.4cm} \beta'(t) = G(t, \beta(t), \mbox{\boldmath{$u$}}(t)) \ \ \mbox{\rm on} \ \ \Gamma_C \ \mbox{\rm for all} \ \, t \in (0, T), \\[1mm] &&\hspace{-0.4cm} \langle \mbox{\boldmath{$u$}}''(t), \mbox{\boldmath{$v$}}-\mbox{\boldmath{$u$}}'(t) \rangle +\langle \mathscr{A}(\mbox{\boldmath{$\varepsilon$}}(\mbox{\boldmath{$u$}}'(t))) + \mathscr{B}(\mbox{\boldmath{$\varepsilon$}}(\mbox{\boldmath{$u$}}(t))) + \int_0^t {\mathscr{C}}(t-s) \mbox{\boldmath{$\varepsilon$}}(\mbox{\boldmath{$u$}}'(s)) \, ds, \mbox{\boldmath{$\varepsilon$}}(\mbox{\boldmath{$v$}})-\mbox{\boldmath{$\varepsilon$}}(\mbox{\boldmath{$u$}}'(t)) \rangle_{\mathcal{H}} \\ &&\hspace{-0.4cm} \quad +\int_{\Gamma_{C}} \mu(u_{\nu}(t)) \, h_1(\beta(t)) \, \left( h_2(\mbox{\boldmath{$v$}}_\tau)-h_2(\mbox{\boldmath{$u$}}'_\tau(t)) \right) \, d\Gamma \ge \langle \mbox{\boldmath{$f$}}(t), \mbox{\boldmath{$v$}}- \mbox{\boldmath{$u$}}'(t) \rangle \\[2mm] &&\hspace{-0.4cm} \qquad \ \ \mbox{\rm for all} \ \, \mbox{\boldmath{$v$}}\in K, \ \mbox{\rm a.e.} \ t \in (0,T), \\[1mm] &&\hspace{-0.4cm} \beta (0) = \beta_0, \ \mbox{\boldmath{$u$}}(0) = \mbox{\boldmath{$u$}}_0, \ \mbox{\boldmath{$u$}}'(0) = \mbox{\boldmath{$w$}}_0. \end{aligned}$$* Problem [Problem 12](#CONTACTX88){reference-type="ref" reference="CONTACTX88"} couples the ordinary differential equation on the contact surface with a dynamic variational inequality with constraints. The following result deals with the unique solvability, continuous dependence, and regularity of the solution to Problem [Problem 12](#CONTACTX88){reference-type="ref" reference="CONTACTX88"}. **Theorem 13**. *Assume hypotheses $H({\mathscr A})$, $H({\mathscr B})$, $H({\mathscr C})$, $H(h_1)$, $H(h_2)$, $H(\mu)$, $H(G)$, $(H_4)$ and $\beta_0 \in L^2(\Gamma_C)$ with $0 \le \beta_0(\mbox{\boldmath{$x$}}) \le 1$ for a.e. $\mbox{\boldmath{$x$}}\in \Gamma_C$.
Then Problem $\ref{CONTACTX88}$ has a unique solution $\beta \in H^1(0,T; L^2(\Gamma_C))$ and $\mbox{\boldmath{$u$}}\in C([0, T]; V)$ such that $0 \le \beta(\mbox{\boldmath{$x$}}, t) \le 1$ for all $t \in (0, T)$, a.e. $\mbox{\boldmath{$x$}}\in \Gamma_C$, $\mbox{\boldmath{$u$}}' \in {\mathbb{W}}$ with $\mbox{\boldmath{$u$}}'(t) \in K$ for a.e. $t \in (0, T)$. Moreover, let $(\beta_1, \mbox{\boldmath{$u$}}_1)$ and $(\beta_2, \mbox{\boldmath{$u$}}_2)$ be the unique solutions to Problem $\ref{CONTACTX88}$ corresponding to the data $$(\beta_0, \mbox{\boldmath{$u$}}_0, \mbox{\boldmath{$w$}}_0, \mbox{\boldmath{$f$}}_0, \mbox{\boldmath{$f$}}_N), (\widetilde{\beta}_0, \widetilde{\mbox{\boldmath{$u$}}}_0, {\widetilde{\mbox{\boldmath{$w$}}}}_0, \widetilde{\mbox{\boldmath{$f$}}}_0, \widetilde{\mbox{\boldmath{$f$}}}_N) \in L^2(\Gamma_C) \times V\times V \times L^2(0, T; L^2(\Omega;\mathbb{R}^d) \times L^2(\Gamma_N; \mathbb{R}^d)),$$ respectively. Then, there is a constant $c > 0$ such that $$\begin{aligned} \label{EST789} &&\hspace{-0.6cm} \| \beta_1 - \beta_2 \|_{H^1(0, T;L^2(\Gamma_C))} + \| \mbox{\boldmath{$u$}}_1 - \mbox{\boldmath{$u$}}_2 \|_{C(0, T; V)} + \| \mbox{\boldmath{$u$}}'_1 - \mbox{\boldmath{$u$}}'_2 \|_{L^2(0, T; V)} \le c \, ( \| \beta_0 - \widetilde{\beta}_0 \|_{L^2(\Gamma_C)} \\[1mm] && \hspace{-0.4cm} \quad + \, \| \mbox{\boldmath{$u$}}_0 - \widetilde {\mbox{\boldmath{$u$}}}_0 \| + \| \mbox{\boldmath{$w$}}_0 - \widetilde {\mbox{\boldmath{$w$}}}_0\| + \| \mbox{\boldmath{$f$}}_0 - {\widetilde{\mbox{\boldmath{$f$}}}_0} \|_{L^2(0, T; L^2(\Omega;\mathbb{R}^d))} + \| \mbox{\boldmath{$f$}}_N - {\widetilde{\mbox{\boldmath{$f$}}}_N} \|_{L^2(0, T; L^2(\Gamma_N; \mathbb{R}^d))}). \nonumber \end{aligned}$$ Moreover, $(\beta, \mbox{\boldmath{$u$}}') \in C([0,T]; L^2(\Gamma_C) \times L^2(\Omega;\mathbb{R}^d))$.* **Proof**. 
Let $\mbox{\boldmath{$w$}}(t)=\mbox{\boldmath{$u$}}'(t)$ for all $t\in (0,T)$, hence $\mbox{\boldmath{$u$}}(t) = \mbox{\boldmath{$u$}}_0 + \int_0^t \mbox{\boldmath{$w$}}(s) \, ds$ for all $t\in (0,T)$. We reformulate Problem $\ref{CONTACTX88}$ in terms of the velocity $\mbox{\boldmath{$w$}}$ using the notation below. Let $E := Y = L^2(\Gamma_C)$, $U := V$, $X = L^2(\Gamma_C;\mathbb{R}^d)$, $A \colon (0,T) \times V \to V^*$, $I \colon L^2(0, T; V) \to L^2(0, T; V)$, $S \colon L^2(0,T;V) \to L^2(0,T; U)$, $R_1 \colon L^2(0,T;V) \to L^2(0,T; V^*)$, $R_3 \colon L^2(0,T;V) \to L^2(0,T; Y)$, $\varphi \colon (0, T) \times E \times Y \times X \to \mathbb{R}$, $F \colon (0,T) \times E \times U \to E$, $M \colon V \to X$ be defined by $$\begin{aligned} && \langle A (t, \mbox{\boldmath{$v$}}), \mbox{\boldmath{$z$}}\rangle := \langle\mathscr{A}(t,\mbox{\boldmath{$\varepsilon$}}(\mbox{\boldmath{$v$}})), \mbox{\boldmath{$\varepsilon$}}(\mbox{\boldmath{$z$}}) \rangle_{\mathcal{H}} \ \ \mbox{for} \ \ \mbox{\boldmath{$v$}}, \mbox{\boldmath{$z$}}\in V, \ \mbox{a.e.} \ t \in (0,T), \\ && (I\mbox{\boldmath{$w$}})(t) := \mbox{\boldmath{$u$}}_0 + \int_0^t \mbox{\boldmath{$w$}}(s) \, ds \ \ \mbox{for} \ \ \mbox{\boldmath{$w$}}\in L^2(0,T; V), \ \mbox{a.e.} \ t \in (0,T), \\ && (S\mbox{\boldmath{$w$}})(t) := ((I\mbox{\boldmath{$w$}})(t))_\tau = \mbox{\boldmath{$u$}}_{0\tau} + \int_0^t \mbox{\boldmath{$w$}}_\tau(s) \, ds \ \ \mbox{for} \ \ \mbox{\boldmath{$w$}}\in L^2(0,T; V), \ \mbox{a.e.} \ t \in (0,T), \\ && \langle (R_1\mbox{\boldmath{$w$}})(t), \mbox{\boldmath{$v$}}\rangle := \langle {\mathscr{B}} \mbox{\boldmath{$\varepsilon$}}((I\mbox{\boldmath{$w$}})(t)) + \int_0^t {\mathscr{C}}(t-s) \mbox{\boldmath{$\varepsilon$}}(\mbox{\boldmath{$w$}}(s)) \, ds, \mbox{\boldmath{$\varepsilon$}}(\mbox{\boldmath{$v$}})\rangle_{\mathcal{H}} \\ &&\qquad\qquad\qquad\qquad \ \ \mbox{for} \ \ \mbox{\boldmath{$w$}}\in L^2(0,T; V), \ \mbox{\boldmath{$v$}}\in V, \ \mbox{a.e.} \ t \in (0,T), \\ && 
(R_3\mbox{\boldmath{$w$}})(t) := ((I\mbox{\boldmath{$w$}})(t))_\nu = u_{0\nu} + \int_0^t w_\nu(s) \, ds \ \ \mbox{for} \ \ \mbox{\boldmath{$w$}}\in L^2(0,T; V), \ \mbox{a.e.} \ t \in (0,T), \\ && \varphi(t, \beta, y, \mbox{\boldmath{$v$}}) := \int_{\Gamma_{C}} \mu (y) \, h_1(\beta) \, h_2 (\mbox{\boldmath{$v$}}_\tau) \, d\Gamma \ \ \mbox{for} \ \ \beta \in E, \ y \in Y, \ \mbox{\boldmath{$v$}}\in X, \ \mbox{a.e.} \ t \in (0,T), \\ && F(t, \beta, \mbox{\boldmath{$w$}})(\mbox{\boldmath{$x$}}) := G(\mbox{\boldmath{$x$}}, t, \beta (\mbox{\boldmath{$x$}}, t), \mbox{\boldmath{$w$}}_\tau (\mbox{\boldmath{$x$}},t)) \ \mbox{for} \ \beta \in E, \ \mbox{\boldmath{$w$}}\in U, \ \mbox{a.e.} \ t \in (0,T), \\[1mm] && M \mbox{\boldmath{$v$}}:= \mbox{\boldmath{$v$}}_\tau \ \ \mbox{for} \ \ \mbox{\boldmath{$v$}}\in V, \end{aligned}$$ and $j \equiv 0$. Under this notation Problem $\ref{CONTACTX88}$ is equivalently written as **Problem 14**. *Find $\beta \in H^1(0,T; E)$ and $\mbox{\boldmath{$w$}}\in \mathbb{W}$ such that $\mbox{\boldmath{$w$}}(t) \in K$ for a.e. $t \in (0, T)$, $\mbox{\boldmath{$w$}}(0)=\mbox{\boldmath{$w$}}_0$, $\beta(0) = \beta_0$ and $$\begin{cases} \displaystyle \beta'(t) = F(t, \beta(t), (S\mbox{\boldmath{$w$}})(t)) \ \ \mbox{\rm for all} \ \ t \in (0,T), \\[1mm] \langle \mbox{\boldmath{$w$}}'(t) + A(t, \mbox{\boldmath{$w$}}(t)) + (R_1 \mbox{\boldmath{$w$}})(t) - \mbox{\boldmath{$f$}}(t), \mbox{\boldmath{$v$}}- \mbox{\boldmath{$w$}}(t) \rangle + \, \varphi(t, \beta(t), (R_3\mbox{\boldmath{$w$}})(t), M \mbox{\boldmath{$v$}}) \\[1mm] \qquad \ \ \, - \, \varphi(t, \beta(t), (R_3\mbox{\boldmath{$w$}})(t), M \mbox{\boldmath{$w$}}(t)) \ge 0 \ \ \mbox{\rm for all} \ \mbox{\boldmath{$v$}}\in K, \ \mbox{\rm a.e.} \ t \in (0,T).
\end{cases}$$* Now we apply Corollary [Corollary 1](#Corollary2){reference-type="ref" reference="Corollary2"} to deduce the well-posedness of Problem [Problem 14](#CONTACTX88a){reference-type="ref" reference="CONTACTX88a"}. To this end, we are able to verify hypotheses $H(F)$, $H(A)$, $H(j)$, $H(\varphi)$, $H(K)$, $H(M)$, $\mbox{\boldmath{$f$}}\in L^2(0, T; V^*)$, and $(H_1)$--$(H_3)$. Note that hypotheses $H(j)$ and $(H_1)$ hold trivially and no smallness condition is needed. Condition $H(F)$ is a consequence of $H(G)$ and can be proved analogously as in the proof of Theorem [Theorem 8](#existence3){reference-type="ref" reference="existence3"}. Analogously, we can also show that $H(A)$, $H(K)$, $H(M)$, and $(H_3)$ are satisfied, while $(H_2)$ follows from $(H_4)$. Additionally, a careful examination of [@MO2008 Lemma 5] implies that $0 \le \beta(\mbox{\boldmath{$x$}}, t) \le 1$ for all $t \in (0, T)$, a.e. $\mbox{\boldmath{$x$}}\in \Gamma_C$. Next, we use $H(h_1)$, $H(h_2)$ and $H(\mu)$ to check that $H(\varphi)$(a)--(d) holds with $c_{0\varphi} = h_0 \mu_0 \| \rho\|_{L^2(\Gamma_C)}$, $c_{1\varphi} = c_{2\varphi} = 0$ and $c_{3\varphi} = h_0 \mu_0 \rho_2$. Condition $H(\varphi)$(e) follows from the inequality $$\begin{aligned} &&\hspace{-0.4cm} \varphi (t, \beta_1, y_1, v_2) - \varphi (t, \beta_1, y_1, v_1) + \varphi (t, \beta_2, y_2, v_1) - \varphi (t, \beta_2, y_2, v_2) \\ && \le \mu_0 \int_{\Gamma_C} |h_1(\beta_1)-h_1(\beta_2)| \, | h_2(\mbox{\boldmath{$v$}}_{1\tau}) - h_2(\mbox{\boldmath{$v$}}_{2\tau}) | \, d\Gamma \le c \, \| \beta_1 - \beta_2\|_{E} \, \| \mbox{\boldmath{$v$}}_1 - \mbox{\boldmath{$v$}}_2\|_X\end{aligned}$$ for all $\beta_1$, $\beta_2 \in E$, $\mbox{\boldmath{$v$}}_1$, $\mbox{\boldmath{$v$}}_2 \in X$ with $c > 0$. Hence hypothesis $H(\varphi)$ is verified. 
By Corollary [Corollary 1](#Corollary2){reference-type="ref" reference="Corollary2"} we deduce the unique solvability of Problem [Problem 14](#CONTACTX88a){reference-type="ref" reference="CONTACTX88a"}. Moreover, due to ([\[EST444\]](#EST444){reference-type="ref" reference="EST444"}) in Corollary [Corollary 1](#Corollary2){reference-type="ref" reference="Corollary2"}, we have $$\begin{aligned} \label{**4} &&\hspace{-0.2cm} \| \beta_1 - \beta_2 \|_{H^1(0, T;L^2(\Gamma_C))} + \| \mbox{\boldmath{$w$}}_1 - \mbox{\boldmath{$w$}}_2 \|_{L^2(0, T; V)} \le c \, (\| \beta_0 - \widetilde{\beta}_0 \|_{L^2(\Gamma_C)} \\[1mm] && \hspace{-0.2cm} \quad + \, \| \mbox{\boldmath{$w$}}_0 - \widetilde {\mbox{\boldmath{$w$}}}_0\| + \| \mbox{\boldmath{$f$}}_0 - {\widetilde{\mbox{\boldmath{$f$}}}_0} \|_{L^2(0, T; L^2(\Omega;\mathbb{R}^d))} + \| \mbox{\boldmath{$f$}}_N - {\widetilde{\mbox{\boldmath{$f$}}}_N} \|_{L^2(0, T; L^2(\Gamma_N; \mathbb{R}^d))}). \nonumber \end{aligned}$$ We combine ([\[\*55\]](#*55){reference-type="ref" reference="*55"}) with ([\[\*\*4\]](#**4){reference-type="ref" reference="**4"}) to deduce the inequality in the thesis of the theorem. As in Theorem [Theorem 8](#existence3){reference-type="ref" reference="existence3"}, we use the continuous embeddings $H^1(0, T; L^2(\Gamma_C)) \subset C([0, T]; L^2(\Gamma_C))$ and ${\mathbb{W}} \subset C([0,T]; L^2(\Omega;\mathbb{R}^d))$ to conclude the desired regularity of the solution $(\beta, \mbox{\boldmath{$u$}}') \in C([0,T]; L^2(\Gamma_C) \times L^2(\Omega;\mathbb{R}^d))$. This completes the proof. $\Box$ We refer to [@SST Chapter 11.4], [@SHS Chapter 5], and [@Bartosz; @HLM2015; @MO2008; @MShengda2018] for examples of the adhesive evolution rate function $G$ which depends on both the bonding field $\beta$ and the displacement and may change sign. This allows for rebonding to take place after debonding, and it allows for possible cycles of debonding and rebonding. 
Note also that the choice $h_2(\mbox{\boldmath{$\xi$}}) = \| \mbox{\boldmath{$\xi$}}\|$ for $\mbox{\boldmath{$\xi$}}\in \mathbb{R}^d$ leads to a modified version of Coulomb's law which is usually used to model the frictional contact. # Final comments We note that similar well-posedness results for more complicated contact models can be obtained when various conditions on different parts of the contact boundary are considered. Furthermore, by the application of Theorem [Theorem 4](#Theorem1){reference-type="ref" reference="Theorem1"} and Corollary [Corollary 1](#Corollary2){reference-type="ref" reference="Corollary2"}, one can deal with the well-posedness of dynamic frictional and frictionless contact models involving the internal state variables in viscoplasticity, see [@SOFMIG Chapter 3], and the wear phenomena in contact problems, see e.g. [@SST Chapter 11], [@SHS Chapter 2]. It would be interesting to address the following open issues related to the results of this paper. First it is of interest to examine the penalty methods for the system ([\[001\]](#001){reference-type="ref" reference="001"})--([\[003\]](#003){reference-type="ref" reference="003"}) which will improve earlier results obtained recently in [@Cen; @SOFMIG; @SMH2018; @SP]. Also, an interesting topic is to study optimal control problems for the system ([\[001\]](#001){reference-type="ref" reference="001"})--([\[003\]](#003){reference-type="ref" reference="003"}) including the necessary conditions of optimality for the control problems, see [@Migorski2020]. Finally, it is an extensive and important project to provide the numerical analysis of the system, see [@SHM] and the references therein. 99 N.T.V. Anh, T.D. Ke, On the differential variational inequalities of parabolic-parabolic type, *Acta Appl. Math.* **176** (2021), 5. K. Bartosz, Hemivariational inequalities modeling dynamic contact problems with adhesion, *Nonlinear Anal. Theory Methods Appl.* **71** (2009), 1747--1762. S. Carl, V.K. Le, D. 
Motreanu, *Nonsmooth Variational Problems and their Inequalities*, Springer, New York, 2007. J.X. Cen, L. Li, S. Migórski, Van Thien Nguyen, Convergence of a generalized penalty and regularization method for quasi-variational-hemivariational inequalities, *Communications in Nonlinear Science and Numerical Simulation* **103** (2021), 105998. F.H. Clarke, *Optimization and Nonsmooth Analysis*, Wiley, New York, 1983. Z. Denkowski, S. Migórski, N.S. Papageorgiou, *An Introduction to Nonlinear Analysis: Theory*, Kluwer Academic/Plenum Publishers, Boston, Dordrecht, London, New York, 2003. Z. Denkowski, S. Migórski, N.S. Papageorgiou, *An Introduction to Nonlinear Analysis: Applications*, Kluwer Academic/Plenum Publishers, Boston, Dordrecht, London, New York, 2003. D. Goeleven, D. Motreanu, Y. Dumont, M. Rochdi, *Variational and Hemivariational Inequalities, Theory, Methods and Applications, Volume I: Unilateral Analysis and Unilateral Mechanics*, Kluwer Academic Publishers, Boston, Dordrecht, London, 2003. J. Gwinner, On a new class of differential variational inequalities and a stability result, *Math. Program.* **139** (2013), 205--221. J. Han, Y. Li, S. Migórski, Analysis of an adhesive contact problem for viscoelastic materials with long memory, *Journal of Mathematical Analysis and Applications* **427** (2015), 646--668. W. Han, S. Migórski, M. Sofonea, Analysis of a general dynamic history-dependent variational-hemivariational inequality, *Nonlinear Analysis: Real World Applications* **36** (2017), 69--88. W. Han, M. Sofonea, *Quasistatic Contact Problems in Viscoelasticity and Viscoplasticity*, Studies in Advanced Mathematics **30**, American Mathematical Society, Providence, RI--International Press, Somerville, MA, 2002. A. Kulig, S. Migórski, Solvability and continuous dependence results for second order nonlinear inclusion with Volterra-type operator, *Nonlinear Analysis* **75** (2012), 4729--4746. Z.H.
Liu, Existence results for quasilinear parabolic hemivariational inequalities, *J. Differential Equations* **244** (2008), 1395--1409. Z.H. Liu, S. Migórski, S.D. Zeng, Partial differential variational inequalities involving nonlocal boundary conditions in Banach spaces, *J. Differential Equations* **263** (2017), 3989--4006. Z.H. Liu, S.D. Zeng, D. Motreanu, Partial differential hemivariational inequalities, *Adv. Nonlinear Anal.* **7** (2017), 571--586. M. Miettinen, A parabolic hemivariational inequality, *Nonlinear Analysis: Theory, Methods and Applications* **26** (1996), 725--734. S. Migórski, Evolution hemivariational inequality for a class of dynamic viscoelastic nonmonotone frictional contact problems, *Computers & Mathematics with Applications* **52** (2006), 677--698. S. Migórski, Optimal control of history-dependent evolution inclusions with applications to frictional contact, *J. Optimization Theory and Applications* **185** (2020), 574--596. S. Migórski, A class of history-dependent systems of evolution inclusions with applications, *Nonlinear Analysis: Real World Applications* **59** (2021), 103246. S. Migórski, W. Han, S.D. Zeng, A new class of hyperbolic variational-hemivariational inequalities driven by non-linear evolution equations, *European J. Appl. Math.* **32** (2021), 59--88. S. Migórski, A. Ochal, Boundary hemivariational inequality of parabolic type, *Nonlinear Analysis: Theory Methods and Appl.* **57** (2004), 579--596. S. Migórski, A. Ochal, Dynamic bilateral contact problem for viscoelastic piezoelectric materials with adhesion, *Nonlinear Analysis: Theory, Methods and Applications* **69** (2008), 495--509. S. Migórski, A. Ochal, M. Sofonea, History-dependent subdifferential inclusions and hemivariational inequalities in contact mechanics, *Nonlinear Analysis: Real World Applications* **12** (2011), 3384--3396. S. Migórski, A. Ochal, M. Sofonea, *Nonlinear Inclusions and Hemivariational Inequalities. 
Models and Analysis of Contact Problems*, Advances in Mechanics and Mathematics **26**, Springer, New York, 2013. S. Migórski, A. Ochal, M. Sofonea, History-dependent variational-hemivariational inequalities in contact mechanics, *Nonlinear Analysis: Real World Applications* **22** (2015), 604--618. S. Migórski, A. Ochal, M. Sofonea, Evolutionary inclusions and hemivariational inequalities, Chapter 2 in *Advances in Variational and Hemivariational Inequalities: Theory, Numerical Analysis, and Applications*, edited by W. Han, et al., Advances in Mechanics and Mathematics Series, vol. 33 (2015), 39--64, Springer. S. Migórski, J. Ogorzaly, Dynamic history-dependent variational-hemivariational inequalities with applications to contact mechanics, *Zeitschrift für angewandte Mathematik und Physik* **68** (2017), Article ID.15, 22p. S. Migórski, B. Zeng, A new class of history--dependent evolutionary variational--hemivariational inequalities with unilateral constraints, *Applied Mathematics & Optimization* **84** (2021), 2671--2697. S. Migórski, S. Zeng, A class of differential hemivariational inequalities in Banach spaces, *J. Global Optimization* **72** (2018), 761--779. S. Migórski, S.D. Zeng, Hyperbolic hemivariational inequalities controlled by evolution equations with application to adhesive contact model, *Nonlinear Analysis: Real World Applications* **43** (2018), 121--143. Z. Naniewicz, P.D. Panagiotopoulos, *Mathematical Theory of Hemivariational Inequalities and Applications*, Marcel Dekker, Inc., New York, Basel, Hong Kong, 1995. J.S. Pang, D.E. Stewart, Differential variational inequalities, *Math. Program.* **113** (2008), 345--424. M. Shillor, M. Sofonea, J.J. Telega, *Models and Analysis of Quasistatic Contact*, Lect. Notes Phys. **655**, Springer, Berlin, Heidelberg, 2004. M. Sofonea, W. Han, S.
Migórski, Numerical analysis of history-dependent variational-hemivariational inequalities with applications to contact problems, *European Journal of Applied Mathematics* **26** (2015), 427--452. M. Sofonea, W. Han, M. Shillor, *Analysis and Approximation of Contact Problems with Adhesion or Damage*, Pure and Applied Mathematics **276**, Chapman-Hall/CRC Press, New York, 2006. M. Sofonea, A. Matei, History-dependent quasivariational inequalities arising in Contact Mechanics, *European Journal of Applied Mathematics* **22** (2011), 471--491. M. Sofonea, S. Migórski, *Variational-Hemivariational Inequalities with Applications*, Chapman & Hall/CRC, Boca Raton, 2018. M. Sofonea, S. Migórski, W. Han, A penalty method for history-dependent variational-hemivariational inequalities, *Computers & Mathematics with Applications* **75** (2018), 2561--2573. M. Sofonea, S. Migórski, A. Ochal, Two history-dependent contact problems, Chapter 14 in *Advances in Variational and Hemivariational Inequalities: Theory, Numerical Analysis, and Applications*, edited by W. Han, et al., Advances in Mechanics and Mathematics Series, vol. 33 (2015), 355--380, Springer. M. Sofonea, F. Pǎtrulescu, Penalization of history-dependent variational inequalities, *European Journal of Applied Mathematics* **25** (2014), 155--176. M. Sofonea, Y. Xiao, Fully history-dependent quasivariational inequalities in contact mechanics, *Applicable Analysis* **95** (2016), 2464--2484. S.D. Zeng, S. Migórski, Dynamic history-dependent hemivariational inequalities controlled by evolution equations with application to contact mechanics, *Journal of Dynamics and Differential Equations*, 2021, in press, https://doi.org/10.1007/s10884-021-10088-0. [^1]:   College of Applied Mathematics, Chengdu University of Information Technology, Chengdu 610225, Sichuan Province, P.R. China, and Jagiellonian University in Krakow, Chair of Optimization and Control, ul. Lojasiewicza 6, 30348 Krakow, Poland. Tel.: +48-12-6646666. 
E-mail address: stanislaw.migorski\@uj.edu.pl. [^2]:   The project has received funding from the European Union's Horizon 2020 Research and Innovation Programme under the Marie Skłodowska-Curie grant agreement No. 823731 CONMECH. It is supported by Natural Science Foundation of Guangxi (Grant No: 2018GXNSFAA281353), Beibu Gulf University Project No. 2018KYQD06, and the projects financed by the Ministry of Science and Higher Education of Republic of Poland under Grants Nos. 4004/GGPJII/H2020/2018/0 and 440328/PnH2/2019, and the National Science Centre of Poland under Project No. 2021/41/B/ST1/01636.
--- abstract: | In this paper, we study the existence and uniqueness of free boundary constant mean curvature hypersurfaces in rotational domains. These are domains whose boundary is generated by the rotation of a graph. Under some conditions on the function that generates the graph and a gap condition on the umbilicity tensor, we classify the CMC free boundary hypersurfaces as topological disks or annuli. Also, we construct some examples of free boundary minimal surfaces in the rotational ellipsoid that, in particular, satisfy our gap condition. address: - | $^1$Departamento de Matemática\ Universidade Federal da Paraíba\ 58.051-900 João Pessoa, Paraíba, Brazil - $^2$Università degli Studi di Torino, Dipartimento di Matematica "Giuseppe Peano", Torino, TO, Italy. author: - Allan Freitas$^{1, 2, \ast}$, Márcio S. Santos$^1$ and Joyce S. Sindeaux$^1$ title: Gap results and existence of CMC free boundary hypersurfaces in rotational domains --- # Introduction In this work, we consider an $n$-dimensional constant mean curvature (CMC, in short) hypersurface $\Sigma$ with a smooth boundary that is compact, oriented, and immersed in a Riemannian manifold $M^{n+1}$ with smooth boundary $\partial M$, such that $\partial\Sigma \subset \partial M$ and the boundary of $\Sigma$ meets the boundary of $M$ orthogonally. In this situation, we say that $\Sigma$ is a *free boundary CMC hypersurface in $M$*. Such hypersurfaces are stationary for the area functional under variations preserving the enclosed volume (see, for example, [@Ros:1995 Section 1]). In particular, when $H=0$, we say that $\Sigma$ is a free boundary minimal hypersurface. In the particular case where the domain $M$ is the unit ball $\mathbb{B}^3$ in Euclidean space, the simplest examples of CMC free boundary surfaces are the equatorial disk, the critical catenoid (minimal surfaces) and the spherical caps.
We recall that, after its initial motivation by Courant [@courant1940] and preliminary developments (for example, [@nitsche1985], [@struwe1984] and [@Ros:1995]), this topic has received plenty of attention, mainly since the 2010s and the undeniable contributions of Fraser and Schoen ([@fraser2011], [@fraser2016]). These underlying works reveal, in particular, several similarities between free boundary minimal surfaces in the Euclidean unit ball and closed minimal surfaces in the sphere. In this sense, the classical results and strategies used to obtain rigidity results in the latter setting suggest promising directions for related developments in the free boundary case. In this direction, the so-called gap results on the second fundamental form give an important characterization of CMC surfaces in the sphere. A series of contributions by Simons [@Simons], Lawson [@L], and Chern, do Carmo and Kobayashi [@CdCK] led to the following gap result for the second fundamental form $A$ of the immersion: ****Theorem** 1** (Chern-do Carmo-Kobayashi [@CdCK], Lawson [@L], Simons [@Simons]). *Let $\Sigma$ be a closed minimal hypersurface in the unit sphere $\mathbb{S}^{n+1}$. Assume that the second fundamental form $A$ on $\Sigma$ satisfies $$|A|^2 \leq n.$$ Then* 1. *either $|A|^2=0$ and $\Sigma$ is an equator;* 2. *or $|A|^2=n$ and $\Sigma$ is a Clifford minimal hypersurface.* In the study of CMC hypersurfaces in the sphere, Alencar and do Carmo [@hilario] also obtained a gap result, but now considering the umbilicity tensor $\phi=A-\frac{H}{n}g$. ****Theorem** 2** (Alencar-do Carmo [@hilario]). *Let $\Sigma$ be a closed, CMC hypersurface in the unit sphere $\mathbb{S}^{n+1}$. If $$\|\phi\|^2\leq C_H,$$ then* 1. *either $\|\phi\|^2\equiv 0$ and $\Sigma^n$ is totally umbilical in $\mathbb{S}^{n+1}$,* 2.
*or $\|\phi\|^2\equiv C_H$ and $\Sigma^n$ is an $H(r)$-torus in $\mathbb{S}^{n+1}$.* *Here, $C_{H}$ is related to a root of a polynomial whose coefficients depend on the mean curvature $H$ and the dimension $n$[^1].* We now describe some contributions that start from these two characterizations and study similar phenomena for free boundary CMC hypersurfaces in the ball. In [@Ambrozio:2021], Ambrozio and Nunes proved that if $\Sigma$ is a compact free boundary minimal surface in $\mathbb{B}^3$ and for all points $x$ in $\Sigma$, $$\label{gapambnunes} |A|^2(x)\langle x, N(x)\rangle ^2\leq 2,$$ then $\Sigma$ is a flat equatorial disk or a critical catenoid. In higher dimensions, gap results similar to [\[gapambnunes\]](#gapambnunes){reference-type="eqref" reference="gapambnunes"} can be obtained for $2$-dimensional surfaces in the ball (see [@barbosaviana1]) and, with a topological rigidity, in submanifolds of any codimension in higher dimensional balls (see [@barbosaviana2 Theorem 3.7]). Also, some gap results involving only the second fundamental form, as in Theorem [**Theorem** 1](#thm:Simons){reference-type="ref" reference="thm:Simons"}, were obtained in [@CMV], [@Barbosa:2021] and [@BarbosaFreitas:2023]. The question arises: does an analogous result hold in the context of non-minimal free boundary CMC surfaces? Barbosa, Cavalcante, and Pereira answered this question in [@Barbosa:2023]. More specifically, in this work they proved that if $\Sigma$ is a compact free boundary CMC surface in $\mathbb{B}^3$ and for all points $x$ in $\Sigma$, $$|\phi|^2\langle \vec{x}, N\rangle ^2\leq\frac{1}{2}(2+H\langle \vec{x},N\rangle)^2,$$ then $\Sigma$ is a totally umbilical disc or a part of a Delaunay surface. In addition to the studies involving free boundary surfaces in the unit ball, investigations of this kind have also been conducted in other domains.
Examples include a wedge (López [@Lopez:2014]), a slab (Ainouz and Souam [@Aiounz:2016]), a cone (Choe [@Choe:2011]) and a cylinder (López and Pyo [@Lopez-Pyo:2014]). We also cite the work [@Maximo:2017] by Máximo, Nunes, and Smith, where they study free boundary minimal annuli inside convex subsets of $3$-dimensional Riemannian manifolds with nonnegative Ricci curvature. Regarding rigidity conclusions starting from a gap condition, Andrade, Barbosa, and Pereira [@Andrade:2021] established some results for balls conformal to the Euclidean ball. More recently, when the ambient space is a strictly convex domain in a $3$-dimensional Riemannian manifold with sectional curvature bounded above, and $\Sigma$ is a CMC free boundary surface in this region, Min and Seo [@Min:2022] established a pinching condition on the length of the umbilicity tensor on $\Sigma$. This criterion ensures that the surface is topologically equivalent to a disk or an annulus. In the particular case where the domain is a geodesic ball of a $3$-dimensional space form, they concluded that $\Sigma$ is a spherical cap or a Delaunay surface. In [@BarbosaFreitas:2023], the first author, jointly with Barbosa, Melo, and Vitório, investigated the existence of compact free boundary minimal hypersurfaces immersed in domains whose boundary is a regular level set, in particular giving some gap results for free boundary minimal hypersurfaces immersed in a Euclidean ball and in a rotational ellipsoid. This work explores some gap results for CMC free boundary surfaces in the rotational domains described below. By considering a curve $\alpha(t)=(f(t),t)$, where $f$ is a positive real-valued smooth function, we generate a hypersurface $\partial\Omega$ from the revolution of this curve about an appropriate axis.
In this sense, we can describe a domain $\Omega$ such that $\partial \Omega\subset F^{-1}(1)$ is a hypersurface of revolution and $F:\mathbb{R}^n\times I\rightarrow \mathbb{R}$ is a smooth function given by $$F(x, y) = \frac{1}{2}\left(|x|^2-f^2(y)\right) + 1.$$ Furthermore, we consider a hypersurface $\Sigma$, which is a free boundary CMC surface in $\Omega$. In our first results, we use an auxiliary function $g$ given by $$g(x,y)=\langle \Bar{\nabla}F,N\rangle$$ to get a topological characterization for CMC free boundary surfaces in $(n+1)$-dimensional rotational domains, with $n\geq 2$. In particular, we obtain the following gap result for CMC surfaces in these domains: ****Theorem** 3**. *Let $\Sigma^2$ be a compact CMC surface with free boundary in $F^{-1}(1)$. If $(f')^2+ff''+1\leq 0$ and $$|\phi|^2g(x,y)^2\leq\frac{1}{2}(2+Hg(x,y))^2$$ on $\Sigma$, then $\Sigma$ is homeomorphic to a disk or an annulus.* Regarding the higher-dimensional case, we can prove the following result for minimal free boundary hypersurfaces. ****Theorem** 4**. *Let $\Sigma^n$ be an $n$-dimensional free boundary minimal hypersurface in a domain $\Omega$ with boundary $\partial\Omega\subset F^{-1}(1)$. Assume that $(f')^2+ff''+1\leq 0$. If $$\begin{aligned} |A|^2 g(x,y)^2\leq \frac{n}{n-1},\end{aligned}$$ for every $(x,y)\in\Sigma^n$, then one of the following is true:\ 1. $\Sigma^n$ is diffeomorphic to a disk $\mathbb{D}^n$.\ 2. $\Sigma^n$ is diffeomorphic to $\mathbb{S}^1\times\mathbb{D}^{n-1}$ and $C(\Sigma^n)$ is a closed geodesic.* Furthermore, we construct new examples of CMC surfaces that are free boundary on the rotational ellipsoid, exploring the technique from [@Barbosa:2023]. This yields examples of catenoids, nodoids, and unduloids in this domain. This paper is organized as follows.
In the second section, we present some preliminaries and obtain some auxiliary lemmas that lead to the main results of Section 3, about CMC free boundary surfaces in $3$-dimensional rotational domains and minimal free boundary surfaces in $(n + 1)$-dimensional rotational domains. Finally, in the last section, we give some examples of Delaunay surfaces that are free boundary and satisfy our pinching condition in the particular case where the ambient space is a rotational ellipsoid. # Preliminaries {#sec:Background} Throughout this paper, we consider $\Omega\subset\mathbb{R}^{n+1}$, with $n\geq 2$, a rotational domain with smooth boundary $\partial\Omega\subset F^{-1}(1)$, where $F:\mathbb{R}^n\times I\rightarrow\mathbb{R}$ is a smooth function for some interval $I\subset\mathbb{R}$. We denote by $\Bar{N}:=\frac{\nabla F}{|\nabla F|}$ the outward unit normal to $\partial\Omega$. Let $\Sigma^{n}\hookrightarrow\Omega$ be a hypersurface with boundary such that $\partial\Sigma\subset\partial\Omega$. We denote by $N$ the outward unit normal to $\Sigma$ and by $\nu$ the outward conormal along $\partial\Sigma$ in $\Sigma$. In this scope, a hypersurface $\Sigma$ is called *free boundary* if $\Sigma$ meets $\partial\Omega$ orthogonally, that is, $\nu = \Bar{N}$ along $\partial\Sigma$ or, equivalently, $\langle N, \Bar{N}\rangle = 0$ along $\partial\Sigma$. More specifically, for $n=2$, let us consider a rotational hypersurface in the following sense. Let $\alpha(t) =(f(t), t)$ be a plane curve that is the graph of a positive real-valued smooth function $f : I \rightarrow \mathbb{R}$ in the $x_1x_3$-plane. Let $\theta$ be a parametrization of the unit circle in the plane $x_3 = 0$.
The hypersurface of revolution with generatrix $\alpha$ can be parametrized by $$X(\theta, t) = (\theta f(t), t)=(\cos\theta f(t),\sin\theta f(t),t).$$ In this scope, we study free boundary surfaces $\Sigma$ in domains $\Omega$ whose boundary is the hypersurface of revolution given above. Let $F : \mathbb{R}^2\times I \rightarrow \mathbb{R}$ be the smooth function defined by $$\begin{aligned} \label{def F} F(x, y) = \frac{1}{2}\left(|x|^2-f^2(y)\right) + 1, \end{aligned}$$ where $x = (x_1,x_2)$ and $y = x_3$; we have that $\partial\Omega\subset F^{-1}(1).$ Notice that $1$ is a regular value of $F$. Observe that $$\nabla F(x,y)=(x,-f(y)f'(y))=(x,y)+(0,-y-f(y)f'(y)),$$ where $y=\langle (x,y),E_3\rangle.$ Then, $$\begin{aligned} D^2F= \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0\\ 0 & 0 & -(f'(y))^2-f(y)f''(y) \end{pmatrix}=\text{Id}_{3\times 3}+ \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0\\ 0 & 0 & -(f'(y))^2-f(y)f''(y)-1 \end{pmatrix}.\end{aligned}$$ Therefore, for all $X,Y \in T(\Sigma)$ we have $$\begin{aligned} \label{hess} \text{Hess}_\Sigma F(X,Y)&=\langle\bar{\nabla}_X(\bar{\nabla} F)^\top,Y\rangle\nonumber\\ &=\langle\bar{\nabla}_X(\bar{\nabla} F -\langle\bar{\nabla} F,N\rangle N),Y\rangle\nonumber\\ &=D^2F(X,Y)+\langle\bar{\nabla} F,N\rangle\langle A_NX,Y\rangle\nonumber\\ &=\langle X,Y\rangle+g(x,y)\langle A_NX,Y\rangle-((f'(y))^2+f(y)f''(y)+1)\langle TX,Y\rangle,\end{aligned}$$ where $T:T_{(x,y)}\Sigma\rightarrow T_{(x,y)}\Sigma$ is given by $TX=\langle X,E_3^\top\rangle E_3^\top$ and $$g(x,y)=\langle\bar{\nabla} F,N\rangle=\langle(x,y),N\rangle+\langle N,E_3\rangle(-y-f(y)f'(y)).$$ It is easy to check that $T$ is a self-adjoint operator for which $E_3^\top$ is an eigenvector associated with the eigenvalue $|E_3^\top|^2$. Besides, we can take any nonzero vector in $T\Sigma$, orthogonal to $E_3^\top$, to verify that zero is also an eigenvalue of $T$.
Therefore $$\begin{aligned} \label{positivo definidoo} 0 \leq \langle TX, X\rangle \leq |E_3^\top|^2|X|^2,\ \forall X \in T_{(x,y)}\Sigma.\end{aligned}$$ ****Lemma** 1**. *Suppose that $(f')^2+ff''+1\leq 0$. Then for each ${(x,y)}\in \Sigma$, the eigenvalues of Hess$_\Sigma F(x,y)$ are greater than or equal to $$1+k_1g(x,y) \text{ and } 1+k_2g(x,y),$$ where $k_1\leq k_2$ are the principal curvatures of $\Sigma$ with respect to the normal vector $N$.* *Proof.* Suppose that $(f')^2+ff''+1\leq 0$. Then, using ([\[hess\]](#hess){reference-type="ref" reference="hess"}) and ([\[positivo definidoo\]](#positivo definidoo){reference-type="ref" reference="positivo definidoo"}), we have that $$\begin{aligned} \text{Hess}_\Sigma F(X,X)&=\langle X,X\rangle+g(x,y)\langle A_NX,X\rangle-((f'(y))^2+f(y)f''(y)+1)\langle TX,X\rangle\\ &\geq \langle X+g(x,y)AX,X\rangle.\end{aligned}$$ But the eigenvalues of $X\rightarrow X+g(x,y)AX$ are $$1+k_1g(x,y) \text{ and } 1+k_2g(x,y),$$ where $k_1\leq k_2$ are the eigenvalues of $A$. Then $k_1$ and $k_2$ are the principal curvatures of $\Sigma$ and the eigenvalues $\lambda_1\leq\lambda_2$ of Hess$_\Sigma F(x,y)$ satisfy $$\lambda_1\geq1+k_1g(x,y) \text{ and } \lambda_2\geq1+k_2g(x,y).$$ ◻ **Remark 1**. *Observe that if $(f')^2+ff''+1=0$, we get $$0=(f'(y))^2+f(y)f''(y)+1=(y+f(y)f'(y))'.$$ Then, $$y+f(y)f'(y)=c_1,$$ where $c_1$ is a constant. Thus, $$(f^2(y))'=2f(y)f'(y)=2(c_1-y)=(2c_1y-y^2)'.$$ Therefore, $$f^2(y)=2c_1y-y^2+c_2,$$ where $c_2$ is a constant. It implies that $$F(x,y)=\frac{1}{2}(|x|^2+y^2-2c_1y-c_2)+1.$$ Then, the set $F^{-1}(1)$ is the sphere $$x_1^2+x_2^2+(y-c_1)^2=c_2+c_1^2.$$* # Gap results {#sec:3 dimensional} This section aims to give a topological classification of CMC free boundary hypersurfaces in the rotational domains defined earlier. We employ a gap condition on the umbilicity tensor and on the graph function whose rotation generates the boundary of the domain.
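The equality case treated in Remark 1 can be checked symbolically. The following SymPy sketch (an illustration, not part of the paper's argument) verifies that $f^2(y)=2c_1y-y^2+c_2$ satisfies $(f')^2+ff''+1=0$ and that the corresponding level set $F^{-1}(1)$ is the sphere $x_1^2+x_2^2+(y-c_1)^2=c_2+c_1^2$:

```python
import sympy as sp

y, c1, c2, x1, x2 = sp.symbols('y c1 c2 x1 x2', positive=True)

# Equality case of Remark 1: f^2(y) = 2*c1*y - y**2 + c2
f = sp.sqrt(2*c1*y - y**2 + c2)

# The generating function satisfies (f')^2 + f*f'' + 1 = 0 identically
ode = sp.diff(f, y)**2 + f*sp.diff(f, y, 2) + 1
print(sp.simplify(ode))  # -> 0

# F(x, y) = (|x|^2 - f^2(y))/2 + 1, so F = 1 is the sphere of Remark 1
F = sp.Rational(1, 2)*(x1**2 + x2**2 - f**2) + 1
sphere = x1**2 + x2**2 + (y - c1)**2 - (c2 + c1**2)
print(sp.expand(2*(F - 1) - sphere))  # -> 0
```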
We subdivide our analysis into the three-dimensional and higher-dimensional cases. ## CMC Free Boundary Surfaces in $3$-dimensional rotational domains In this subsection, we get a topological characterization for CMC free boundary surfaces in $3$-dimensional rotational domains. The next proposition shows that the gap condition given below implies the convexity of $F$ on $\Sigma$; the proof follows the same steps as in [@Barbosa:2023 Lemma 2.1]. ****Proposition** 1**. *Let $\Sigma$ be a compact free boundary CMC surface in $\Omega$. Assume that $(f')^2+ff''+1\leq 0$ and for all points $(x,y)$ in $\Sigma$, $$\begin{aligned} \label{hipotese de phi} |\phi|^2g(x,y)^2\leq\frac{1}{2}(2+Hg(x,y))^2,\end{aligned}$$ where $\phi=A-\frac{H}{2}\langle\cdot,\cdot\rangle$ is the umbilicity tensor. Then, $$Hess_\Sigma F(X,X)\geq 0,$$ for all $(x,y)\in \Sigma$ and $X\in T_{(x,y)}\Sigma.$* *Proof.* By Lemma [**Lemma** 1](#autovalores){reference-type="ref" reference="autovalores"}, we have that the eigenvalues $\lambda_1\leq\lambda_2$ of Hess$_\Sigma F(x,y)$ satisfy $$\lambda_1\geq1+k_1g(x,y):=\tilde{\lambda_1} \text{ and } \lambda_2\geq1+k_2g(x,y):=\tilde{\lambda_2},$$ where $k_1$ and $k_2$ are the principal curvatures of $\Sigma$. In order to prove Hess$_\Sigma F(X,X)\geq 0$, we need to show that $\lambda_1$ and $\lambda_2$ are nonnegative. Using condition ([\[hipotese de phi\]](#hipotese de phi){reference-type="ref" reference="hipotese de phi"}), we have $$\begin{aligned} \label{eq lambda/gap}4\tilde{\lambda_1}\tilde{\lambda_2}&=4(1+k_1g(x,y))(1+k_2g(x,y))\nonumber\\ &=4+4k_2g(x,y)+4k_1g(x,y)+4k_1k_2g(x,y)^2\nonumber\\ &=4+4Hg(x,y)+2(H^2-|A|^2)g(x,y)^2\\ &=(2+Hg(x,y))^2-2|\phi|^2g(x,y)^2\geq0\nonumber.\end{aligned}$$ Therefore, it suffices to show that at least one $\tilde{\lambda_i}$ is non-negative. For this, we will show that the function $v$ defined on $\Sigma$ by $$v := \tilde{\lambda_1} + \tilde{\lambda_2} = 2 + Hg(x,y)$$ is nonnegative.
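The chain of identities in the display above can be verified symbolically. In the sketch below (an illustration, not part of the proof), we use $H=k_1+k_2$ and $|\phi|^2=|A|^2-\frac{H^2}{2}$, in accordance with the convention $\phi=A-\frac{H}{2}\langle\cdot,\cdot\rangle$:

```python
import sympy as sp

k1, k2, g = sp.symbols('k1 k2 g', real=True)

H = k1 + k2            # mean curvature (trace convention, n = 2)
A2 = k1**2 + k2**2     # |A|^2
phi2 = A2 - H**2/2     # |phi|^2 for phi = A - (H/2)<.,.>

# 4*lambda1~*lambda2~ with lambda_i~ = 1 + k_i*g
lhs = 4*(1 + k1*g)*(1 + k2*g)
rhs = (2 + H*g)**2 - 2*phi2*g**2

print(sp.simplify(lhs - rhs))  # -> 0
```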
Note that we can assume that $\Sigma$ is not totally umbilical; otherwise the claim is immediate. Let us suppose that $v(p) < 0$ at some point $p\in\Sigma$. The free boundary condition ensures that $$v = 2 + Hg(x,y)= 2$$ along $\partial\Sigma$. Choose $q\in\partial\Sigma$ and let $\alpha: [0, 1] \rightarrow \Sigma$ be a continuous curve such that $\alpha(0) = p$ and $\alpha(1) = q$. Since $v$ changes sign along $\alpha$, there is a point $p_0 = \alpha(t_0),\ t_0\in(0, 1)$, such that $v(p_0) = 0$. The condition ([\[hipotese de phi\]](#hipotese de phi){reference-type="ref" reference="hipotese de phi"}) implies that $$|\phi|^2(p_0)=0,$$ and hence $p_0$ is an umbilical point. Since $\Sigma$ is not a totally umbilical surface, we have that $p_0$ is an isolated umbilical point. So there is $\epsilon>0$ such that $v(\alpha(t)) < 0$ if $t\in[t_0-\epsilon, t_0)$ and $v(\alpha(t)) > 0$ if $t \in(t_0, t_0 +\epsilon]$, or vice-versa. On the other hand, since $0 = v(p_0)= 2 + H g(x,y)$, we have $g(x,y)(p_0)\neq 0$. Let $D_{r_0}(p_0)$ be a geodesic disk with radius $r_0$ centered at $p_0$ such that $p_0$ is the only umbilical point of $\Sigma$ on $D_{r_0}(p_0)$. We can choose $r_0$ and $\epsilon$ in such a way that $\alpha(t)\in D_{r_0}(p_0)$ for all $t\in [t_0-\epsilon, t_0 +\epsilon]$. Choose $\bar{r_0}<r_0$ such that $\alpha(t_0 -\epsilon),\ \alpha(t_0 +\epsilon)\notin D_{\bar{r_0}}(p_0)$. Let $\mathcal{A} = D_{r_0}(p_0) \setminus D_{\bar{r}_0}(p_0)$ be the annulus determined by these two disks and let $\beta$ denote a path in $\mathcal{A}$ joining the points $\alpha(t_0 -\epsilon)$ and $\alpha(t_0 +\epsilon)$. Again, $v$ changes sign along $\beta$, and therefore there is a point $\tilde{q}\in D_{r_0}(p_0)$ such that $v(\tilde{q}) = 0$. But, as above, this implies that $\tilde{q}$ is another umbilical point in $D_{r_0}(p_0)$, which is a contradiction. We conclude that $v \geq0$, as desired. Then, $\lambda_i\geq\tilde{\lambda_i}\geq0$ for all $i$.
Therefore $Hess_\Sigma F(X,X)\geq 0.$ ◻ **Remark 2**. *Observe that, from ([\[eq lambda/gap\]](#eq lambda/gap){reference-type="ref" reference="eq lambda/gap"}), if we want to prove that the gap ([\[hipotese de phi\]](#hipotese de phi){reference-type="ref" reference="hipotese de phi"}) is valid, it is enough to show that $\tilde{\lambda_i}$ is non-negative for $i=1,2.$* ****Lemma** 2**. *Suppose that $(f')^2+ff''+1\leq 0$. Then the Weingarten operator $A^{\mathbb{R}^3}_{\partial\Omega}$ of $F^{-1}(1) = \partial\Omega$ in $\mathbb{R}^3$ with respect to the inward unit normal satisfies $$\langle A^{\mathbb{R}^3}_{\partial\Omega}X,X\rangle\geq k_1|X|^2>0,\ \forall X\in T\partial\Omega,\ X\neq0.$$* *Proof.* We claim that both eigenvalues $k_1 \leq k_2$ of $A^{\mathbb{R}^3}_{\partial\Omega}$ are positive. Let $U\subset\mathbb{R}^2$ be an open set and $x : U \subset\mathbb{R}^2\rightarrow V \subset\partial\Omega$ the immersion $$x(\theta, t) = (\cos \theta f(t),\sin \theta f(t),t),\ (\theta, t) \in U.$$ A straightforward calculation shows that the Gaussian curvature of $\partial\Omega$ at $x(\theta, t)$ is $$K(\theta, t) =-\frac{ff''}{(1+(f')^2)^2f^2}>0,$$ where the positivity follows since the hypothesis $(f')^2+ff''+1\leq 0$ gives $ff''\leq -(1+(f')^2)<0$. Hence, $K$ is strictly positive on $\partial\Omega$. In particular, $k_1$ and $k_2$ have the same sign. Furthermore, a simple calculation gives us $$H=\frac{1+(f')^2-ff''}{2f(1+(f')^2)^{\frac{3}{2}}}>0.$$ Therefore $k_2 > 0$ and $k_1 > 0$. Thus, for all $X\in T \partial\Omega$ with $X\neq0$, $$\langle A^{\mathbb{R}^3}_{\partial\Omega}X,X\rangle\geq k_1|X|^2 > 0.$$ ◻ Now we are in a position to prove Theorem [**Theorem** 3](#gap rotacional cmc n=3){reference-type="ref" reference="gap rotacional cmc n=3"}. *Proof of Theorem [**Theorem** 3](#gap rotacional cmc n=3){reference-type="ref" reference="gap rotacional cmc n=3"}.* First, we claim that the geodesic curvature $k_g$ of $\partial\Sigma$ in $\Sigma$ is positive.
In fact, given $X,Y\in T\partial\Sigma$, we have on $\partial\Sigma$ that $$\nabla_{X}^{\mathbb{R}^3}Y=\nabla_X^{\partial\Omega}Y+\langle A_{\partial\Omega}^{\mathbb{R}^3} X,Y\rangle\bar{N}=\nabla_X^{\partial\Sigma}Y+\langle A_{\partial\Sigma}^{\partial\Omega} X,Y\rangle N+\langle A_{\partial\Omega}^{\mathbb{R}^3} X,Y\rangle\bar{N}$$ and $$\nabla_{X}^{\mathbb{R}^3}Y=\nabla_X^{\Sigma}Y+\langle A_{\Sigma}^{\mathbb{R}^3} X,Y\rangle N=\nabla_X^{\Sigma}Y+\langle A_{\partial\Sigma}^{\Sigma} X,Y\rangle \bar{N}+\langle A_{\Sigma}^{\mathbb{R}^3} X,Y\rangle N.$$ Comparing the components in the direction of $\bar{N}$, we obtain $A^\Sigma_{\partial\Sigma}= A^{\mathbb{R}^3}_{\partial\Omega}$ on $\partial\Sigma$, where $\partial\Omega= F^{ -1}(1)$. Hence, if $X\in T \partial\Sigma$ is a unit vector, it follows from Lemma [**Lemma** 2](#lemma weingarten){reference-type="ref" reference="lemma weingarten"} that $$\begin{aligned} \label{Curvatura geodésica} k_g = \langle A^\Sigma_{\partial\Sigma}X,X\rangle=\langle A^{\mathbb{R}^3}_{\partial\Omega}X,X\rangle> 0.\end{aligned}$$ Now, observe that if either $\Sigma$ is totally umbilical or $\Sigma$ has nonnegative Gaussian curvature everywhere, then $\Sigma$ is homeomorphic to a disk. In fact, if $\Sigma$ is totally umbilical, the Gaussian curvature $K_\Sigma$ of $\Sigma$ satisfies $$K_\Sigma= H ^2 \geq 0.$$ Hence, in either case, $\Sigma$ has nonnegative Gaussian curvature everywhere. From the Gauss-Bonnet theorem and [\[Curvatura geodésica\]](#Curvatura geodésica){reference-type="eqref" reference="Curvatura geodésica"}, it follows that $$\int_\Sigma K_\Sigma+\int_{\partial\Sigma}k_g=2\pi\mathcal{X}(\Sigma)>0,$$ which shows that $$\mathcal{X}(\Sigma)=2-2\hat{g}-r>0,$$ where $\hat{g}$ and $r$ are respectively the genus and the number of boundary components of $\Sigma$. Then, $\hat{g}=0$ and $r=1$. Therefore, $\mathcal{X}(\Sigma) = 1$, $\Sigma$ is orientable and has exactly one boundary component. Thus, $\Sigma$ is homeomorphic to a disk.
Therefore, from now on, let us assume that $\Sigma$ is not a totally umbilical surface and has negative Gaussian curvature at some point of $\Sigma$, and consider $$\mathcal{C} = \{p\in\Sigma; F(p) = \min_{ x\in\Sigma} F(x)\}.$$ Given $p, q\in \mathcal{C}$, let $\gamma: [0, 1] \rightarrow\Sigma$ be a geodesic such that $\gamma(0) = p$ and $\gamma(1) = q$. It follows from Proposition [**Proposition** 1](#Hess positiva){reference-type="ref" reference="Hess positiva"} that $\text{Hess}_\Sigma F \geq 0$ on $\Sigma$. Then, $$\frac{d^2}{dt^2}(F\circ\gamma)=\text{Hess}_\Sigma F\left(\frac{d\gamma}{dt},\frac{d\gamma}{dt}\right) \geq 0$$ for all $t\in [0, 1]$. Since $p,q\in \mathcal{C}$, we have $$\frac{d}{dt}(F\circ\gamma)(0)=\frac{d}{dt}(F\circ\gamma)(1)=0,$$ which implies, by convexity, that $F$ is constant along $\gamma$. Then, we conclude that $(F \circ\gamma)(t) \equiv \text{min}_{\Sigma} F.$ Therefore, $\gamma([0, 1]) \subset\mathcal{C}$ and $\mathcal{C}$ must be a totally convex subset of $\Sigma$. In particular, the total convexity of $\mathcal{C}$ also ensures that $\gamma([0, 1]) \subset\mathcal{C}$ for every geodesic loop $\gamma: [0, 1]\rightarrow\Sigma$ based at a point $p\in\mathcal{C}$. Moreover, using [\[Curvatura geodésica\]](#Curvatura geodésica){reference-type="eqref" reference="Curvatura geodésica"} we see that each geodesic $\gamma$ connecting two points of $\mathcal{C}$ lies entirely in the interior of $\Sigma$, that is, the trace of $\gamma$ does not meet $\partial\Sigma$. Hence, $\mathcal{C}$ is contained in the interior of $\Sigma$. Finally, we claim that $\Sigma$ is homeomorphic to either a disk or an annulus. To see this, we consider two cases:\ Case $1$: $\mathcal{C}$ consists of a single point. Case $2$: $\mathcal{C}$ contains more than one point.\ For Case $1$, let $p\in\Sigma\setminus\partial\Sigma$ be the only point of $\mathcal{C}$.
Suppose that there is a non-trivial homotopy class $[\alpha] \in\pi_1(\Sigma, p)$. Then we can find a geodesic loop $\gamma: [0, 1]\rightarrow\Sigma$, $\gamma(0) = \gamma(1) = p$, with $\gamma\in [\alpha]$. But, since $\mathcal{C}$ is totally convex, $\gamma([0, 1])\subset\mathcal{C}$ and, in particular, $\mathcal{C}$ has more than one point, which is a contradiction. This implies that $\pi_1(\Sigma, p)$ is trivial. Thus, $\Sigma$ is simply connected and we conclude that $\Sigma$ is homeomorphic to a disk. For Case $2$, we may assume that $\Sigma$ is not homeomorphic to a disk. Given $p\in \mathcal{C}$ we can find a geodesic loop $\gamma : [0,1]\rightarrow\Sigma$, $\gamma(0) = \gamma(1)=p$, belonging to a non-trivial homotopy class $[\alpha]\in \pi_1(\Sigma, p)$. The total convexity of $\mathcal{C}$ ensures that $\gamma([0,1])\subset\mathcal{C}.$ We claim that $\gamma$ is a regular curve. Indeed, if $\gamma'(0)\neq \gamma'(1)$, we can choose $\epsilon_0 > 0$ small and for each $\epsilon<\epsilon_0$ consider the minimizing geodesic $\tilde{\gamma}_\epsilon$ joining $\gamma(1 - \epsilon)$ and $\gamma(0 + \epsilon)$. Since $\mathcal{C}$ is totally convex and $\gamma\subset\mathcal{C}$, we conclude that $\tilde{\gamma}_\epsilon\subset \mathcal{C}.$ Now, we can choose a nonempty open subset $U\subset\{\tilde{\gamma}_\epsilon\}_{\epsilon<\epsilon_0}$ of $\mathcal{C}$. Thus, since $F$ is constant on $\mathcal{C}$, for any geodesic $\beta(t)\in U$, $$0=\frac{d^2}{dt^2}(F\circ\beta)=\text{Hess}_\Sigma F\left(\frac{d\beta}{dt},\frac{d\beta}{dt}\right)\geq 0.$$ Therefore, $\text{Hess}_\Sigma F\left(\frac{d\beta}{dt},\frac{d\beta}{dt}\right)=0$ in $U$.
By the proof of Lemma [**Lemma** 1](#autovalores){reference-type="ref" reference="autovalores"} and Proposition [**Proposition** 1](#Hess positiva){reference-type="ref" reference="Hess positiva"}, we have $$0=\text{Hess}_\Sigma F(e_i,e_i)\geq 1+\langle\Bar{\nabla}F,N\rangle k_i\geq 0.$$ Then, $$1+\langle\Bar{\nabla}F,N\rangle k_1=1+\langle\Bar{\nabla}F,N\rangle k_2=0,$$ and we get that $k_1=k_2$ in $U$. Thus, the open subset $U$ is totally umbilical and, since the umbilical points of a non-totally umbilical CMC surface are isolated, $\Sigma$ must be totally umbilical, which is a contradiction. Therefore $\mathcal{C}$ must coincide with the closed geodesic $\gamma$. Since $[\alpha]$ was chosen arbitrarily, this implies that $\pi_1(\Sigma, p) \approx \mathbb{Z}$ and $\Sigma$ is homeomorphic to an annulus. ◻ ## Minimal Free Boundary Surfaces in $(n+1)$-dimensional rotational domains {#sec:n+1 dimensional} In this subsection, let us consider rotational hypersurfaces in the following sense. Let $\alpha(t) =(f(t), t)$ be the plane curve given by the graph of a positive real valued smooth function $f : I \rightarrow \mathbb{R}$ in the $x_1x_{n+1}$-plane. Let $\theta$ be a parametrization of the $(n-1)$-dimensional unit sphere in the hyperplane $x_{n+1} = 0$. The hypersurface of revolution with generatrix $\alpha$ can be parametrized by $$X(\theta, t) = (\theta f(t), t).$$ In this setting, we study minimal free boundary hypersurfaces in domains $\Omega$ whose boundary is a hypersurface of revolution. Let us denote $x = (x_1,x_2,...,x_n)$ and $y = x_{n+1}$.
Let $F : \mathbb{R}^{n+1} = \mathbb{R}^n\times\mathbb{R} \rightarrow \mathbb{R}$ be the smooth function defined by $$F(x, y) = \frac{1}{2}\left(|x|^2-f^2(y)\right) + 1.$$ Then we have that $\partial\Omega\subset F^{-1}(1).$ Observe that, analogously to what was done in the previous section for dimension 3, denoting by $\Sigma$ a minimal free boundary surface in $\Omega$, we have $$\nabla F(x,y)=(x,-f(y)f'(y))=(x,y)+(0,-y-f(y)f'(y)),$$ where $y=\langle (x,y),E_{n+1}\rangle.$ Then, for all $X,Y \in T(\Sigma)$ we have $$\begin{aligned} \text{Hess}_\Sigma F(X,Y)&=\langle X,Y\rangle+g(x,y)\langle A_NX,Y\rangle-((f'(y))^2+f(y)f''(y)+1)\langle TX,Y\rangle,\end{aligned}$$ where $T:T_{(x,y)}\Sigma\rightarrow T_{(x,y)}\Sigma$ is given by $TX=\langle X,E_{n+1}^\top\rangle E_{n+1}^\top$ and $$g(x,y)=\langle\bar{\nabla} F,N\rangle=\langle(x,y),N\rangle+\langle N,E_{n+1}\rangle(-y-f(y)f'(y)).$$ We can write $$\begin{aligned} \label{hess n dimensional} \text{Hess}_\Sigma F(X,X)&=\langle X,X\rangle+\langle A(X,X),(\nabla F)^\bot\rangle-((f'(y))^2+f(y)f''(y)+1)\langle TX,X\rangle.\end{aligned}$$ It is easy to check that $T$ is a self-adjoint operator for which $E_{n+1}^\top$ is an eigenvector associated with the eigenvalue $|E_{n+1}^\top|^2$. Besides, we can take any nonzero vector in $T\Sigma$, orthogonal to $E_{n+1}^\top$, to verify that zero is also an eigenvalue of $T$. Therefore $$\begin{aligned} 0 \leq \langle TX, X\rangle \leq |E_{n+1}^\top|^2|X|^2,\ \forall X \in T_{(x,y)}\Sigma.\end{aligned}$$ ****Lemma** 3**. *[@Chen:1973 Chen] [\[lema algebrico\]]{#lema algebrico label="lema algebrico"} Let $a_1,..., a_n$ and $b$ be real numbers. If $$\sum_{i=1}^n a_i^2\leq\frac{(\sum_{i=1}^n a_i)^2}{n-1} -\frac{b}{n-1},$$ then $2a_ia_j\geq\frac{b}{n-1}$ for every $i, j\in\{1, . . . , n\}$.* The next proposition shows that the gap condition given below implies the convexity of $F$ on $\Sigma$. ****Proposition** 2**.
*Let $\Sigma^n$ be an $n$-dimensional minimal free boundary hypersurface in $\Omega$, with $n\geq3$, and assume that $(f')^2+ff''+1\leq 0$. If $$\begin{aligned} \label{gap dimensão alta 1} |\nabla F^\bot|^2|A(x,y)|^2\leq \frac{n}{n-1},\end{aligned}$$ for every $(x,y)\in\Sigma^n$, then $$Hess_\Sigma F(X,X)\geq 0,$$ for all $(x,y)\in \Sigma$ and $X\in T_{(x,y)}\Sigma.$* *Proof.* Since $(f')^2+ff''+1\leq 0$, using ([\[hess n dimensional\]](#hess n dimensional){reference-type="ref" reference="hess n dimensional"}) we get $$\begin{aligned} \label{hess caso particular} \text{Hess}_\Sigma F(X,X)&\geq\langle X,X\rangle+\langle A(X,X),(\nabla F)^\bot\rangle.\end{aligned}$$ Let $\{e_1, \ldots , e_n\}$ be an orthonormal basis of eigenvectors of $\text{Hess}_\Sigma F$ at $(x,y)\in\Sigma$ with respective eigenvalues $\lambda_1, \ldots , \lambda_n$. We want to show that $\lambda_i \geq 0$ for every $i$. By ([\[hess caso particular\]](#hess caso particular){reference-type="ref" reference="hess caso particular"}), $\lambda_i\geq \tilde{\lambda_i}:= 1+\langle A(e_i,e_i),(\nabla F)^\bot\rangle$. Since $\Sigma$ is minimal, $\sum_{i=1}^n A(e_i,e_i)=0$, so combining with ([\[gap dimensão alta 1\]](#gap dimensão alta 1){reference-type="ref" reference="gap dimensão alta 1"}) we get $$\begin{aligned} \sum_{i=1}^n \tilde{\lambda_i}^2&=n+2\sum_{i=1}^n\langle A(e_i,e_i),(\nabla F)^\bot\rangle+\sum_{i=1}^n\langle A(e_i,e_i),(\nabla F)^\bot\rangle^2\\ &\leq n+|\nabla F^\bot|^2\sum_{i=1}^n|A(e_i,e_i)|^2\\ &\leq n+|\nabla F^\bot|^2|A|^2\leq n+\frac{n}{n-1}=\frac{n^2}{n-1}.\end{aligned}$$ On the other hand, we have that $( \sum_{i=1}^n \tilde{\lambda_i})^2=n^2$, again because $\Sigma^n$ is minimal. Then $$\sum_{i=1}^n \tilde{\lambda_i}^2\leq \frac{( \sum_{i=1}^n \tilde{\lambda_i})^2}{n-1}.$$ By Lemma [\[lema algebrico\]](#lema algebrico){reference-type="ref" reference="lema algebrico"}, with $a_i=\tilde{\lambda_i}$ and $b = 0$, we get that $2\tilde{\lambda_i}\tilde{\lambda_j} \geq 0$. Consequently, the eigenvalues $\tilde{\lambda_i}$, $i = 1, \ldots , n$, all have the same sign. Since $\sum_{i=1}^n \tilde{\lambda_i}=n$, we conclude that $\tilde{\lambda_i} \geq 0$ for every $i$. Therefore, $\lambda_i\geq\tilde{\lambda_i}\geq0$ for every $i$, and $$Hess_\Sigma F(X,X)\geq 0,$$ for all $(x,y)\in \Sigma$ and $X\in T_{(x,y)}\Sigma.$ ◻ *Proof of Theorem [**Theorem** 4](#gaphighdim){reference-type="ref" reference="gaphighdim"}.* Firstly, let us define $\mathcal{C} = \{(x,y) \in\Sigma: F(x,y) = \text{min}_\Sigma F\}.$ From Proposition [**Proposition** 2](#Hess positiva dim alta){reference-type="ref" reference="Hess positiva dim alta"}, $$Hess_\Sigma F(X,X)\geq 0,$$ for all $(x,y)\in \Sigma$ and $X\in T_{(x,y)}\Sigma.$ This convexity of $F$ along $\Sigma$ strongly restricts the set $\mathcal{C}$ and the topology of $\Sigma$. As in the proof of Theorem [**Theorem** 3](#gap rotacional cmc n=3){reference-type="ref" reference="gap rotacional cmc n=3"}, the convexity of $\text{Hess}F$ restricted to $\Sigma$ implies that $\mathcal{C}$ is a totally convex subset of $\Sigma$. From now on, the proof follows the same lines as [@barbosaviana2 Theorem 3.7], which uses standard Morse theory. If $\mathcal{C}= \{(x_0,y_0)\}$ for some $(x_0,y_0)\in\Sigma$, then $\Sigma$ is diffeomorphic to a disk $\mathbb{D}^n.$ If $\mathcal{C}$ contains more than one point, we can show that $\text{dim}(\mathcal{C}) = 1$ and $\mathcal{C}$ is a geodesic. In this case, either $\mathcal{C}$ is not a closed geodesic (which would imply that $\Sigma$ is diffeomorphic to a disk) or it is a closed geodesic (which would force $\Sigma$ to be diffeomorphic to $\mathbb{S}^1\times\mathbb{D}^{n-1}$).
◻ # Examples of CMC free boundary surfaces in the rotational ellipsoid In this section, we show that there are a catenoid and some portions of Delaunay surfaces that are free boundary on the rotational ellipsoid $$\begin{aligned} \label{elipsoide rotacional} a^2x^2+a^2y^2+b^2z^2=R^2,\end{aligned}$$ with $a^2\leq b^2$ and some constant $R^2$, and satisfy the pinching condition ([\[hipotese de phi\]](#hipotese de phi){reference-type="ref" reference="hipotese de phi"}). **Remark 3**. *Let us consider $$f(y)=\frac{b}{a}\sqrt{\left(\frac{R}{b}\right)^2-y^2},$$ in ([\[def F\]](#def F){reference-type="ref" reference="def F"}). Then, we obtain the rotational ellipsoid given by ([\[elipsoide rotacional\]](#elipsoide rotacional){reference-type="ref" reference="elipsoide rotacional"}). In this case, the hypothesis $(f')^2+ff''+1\leq 0$ is automatically satisfied. In fact, we have $$f'(y)=-\frac{yb}{a\sqrt{\left(\frac{R}{b}\right)^2-y^2}}$$ and $$f''(y)=-\frac{R^2}{ab\left(\left(\frac{R}{b}\right)^2-y^2\right)^{\frac{3}{2}}}.$$ Therefore, $$\begin{aligned} (f')^2+ff''+1&=\frac{y^2b^2}{a^2\left(\left(\frac{R}{b}\right)^2-y^2\right)}+\frac{b\sqrt{\left(\frac{R}{b}\right)^2-y^2}}{a}\left(-\frac{R^2}{ab\left(\left(\frac{R}{b}\right)^2-y^2\right)^{\frac{3}{2}}}\right)+1\\ &=\frac{(a^2-b^2)\left(\left(\frac{R}{b}\right)^2-y^2\right)}{a^2\left(\left(\frac{R}{b}\right)^2-y^2\right)}=\frac{a^2-b^2}{a^2}\leq 0.\end{aligned}$$* First, let us consider a smooth curve parametrized by arc length in the $xz$-plane $\beta(s) = (x(s), 0, z(s))$, with $x(s) > 0$, and denote by $\Sigma$ the surface obtained by rotation of $\beta$ around the $z$-axis. We start by presenting a lemma with sufficient conditions for a general rotational surface to satisfy the pinching condition ([\[hipotese de phi\]](#hipotese de phi){reference-type="ref" reference="hipotese de phi"}) in the rotational ellipsoid. ****Lemma** 4**.
*Suppose that the curve $\beta$ satisfies the following conditions $$\begin{aligned} \label{h1} -1\leq x''(s)\left(x(s)-\frac{x'(s)}{z'(s)}z(s)\frac{b^2}{a^2}\right),\ \text{if } z'(s)\neq0, \end{aligned}$$ $$\begin{aligned} \label{h2} -1\leq z(s)z''(s)\frac{b^2}{a^2},\ \text{if } z'(s)=0,\end{aligned}$$ $$\begin{aligned} \label{h3} -x(s)x'(s)^2\leq z'(s)x'(s)z(s)\frac{b^2}{a^2},\end{aligned}$$ with $a^2\leq b^2$. Then, $\Sigma$ satisfies the pinching condition $$|\phi|^2g(x,y)^2\leq\frac{1}{2}(2+Hg(x,y))^2,$$ on the rotational ellipsoid given in ([\[elipsoide rotacional\]](#elipsoide rotacional){reference-type="ref" reference="elipsoide rotacional"}).* *Proof.* From Remark [Remark 2](#obs vale a volta){reference-type="ref" reference="obs vale a volta"}, it suffices to show that $$\Tilde{\lambda}_1=1+k_1g(x,y)\geq 0 \text{ and }\Tilde{\lambda}_2=1+k_2g(x,y)\geq0.$$ Let us consider $X:[s_1,s_2]\times \mathbb{S}^1\rightarrow \mathbb{R}^3$ given by $$X(s,\theta)=(x(s)\cos(\theta),x(s)\sin(\theta),z(s)),$$ obtained by rotation of $\beta$ around the $z$-axis. Therefore, $$X_s(s,\theta)=(x'(s)\cos(\theta),x'(s)\sin(\theta),z'(s))$$ and $$X_\theta(s,\theta)=(-x(s)\sin(\theta),x(s)\cos(\theta),0).$$ Then, a straightforward computation shows that $$N=(-z'(s)\cos(\theta),-z'(s)\sin(\theta),x'(s)).$$ Thus, $$\begin{aligned} \label{posição com N} \langle (x,y),N\rangle&=-x(s)z'(s)\cos^2(\theta)-x(s)z'(s)\sin^2(\theta)+x'(s)z(s)\nonumber\\ &=-x(s)z'(s)+x'(s)z(s).\end{aligned}$$ From ([\[posição com N\]](#posição com N){reference-type="ref" reference="posição com N"}) and Remark [Remark 3](#obs para f no elipsoide){reference-type="ref" reference="obs para f no elipsoide"}, we get $$\begin{aligned} \label{g(x,y)} g(x,y)&=\langle \nabla F,N\rangle\nonumber\\ &=\langle (x,y),N\rangle -\langle N,E_3\rangle(y+f(y)f'(y))\nonumber\\ &=-x(s)z'(s)+x'(s)z(s) -x'(s)\left(z(s)-\frac{b^2}{a^2}z(s)\right)\nonumber\\ &=-x(s)z'(s)+x'(s)z(s)\frac{b^2}{a^2}.
\end{aligned}$$ A straightforward computation shows that $$\begin{aligned} \label{curvaturas k1 e k2} k_1=x'(s)z''(s)-x''(s)z'(s)\text{ and }k_2=\frac{z'(s)}{x(s)}.\end{aligned}$$ If $z'(s)\neq 0,$ we can write $$\begin{aligned} \label{curvatura k1 z' não nulo} k_1(s)=-\frac{x''(s)}{z'(s)}.\end{aligned}$$ Then, using ([\[g(x,y)\]](#g(x,y)){reference-type="ref" reference="g(x,y)"}) and ([\[h1\]](#h1){reference-type="ref" reference="h1"}) $$\begin{aligned} \Tilde{\lambda}_1&=1+k_1g(x,y)\\ &= 1-\frac{x''(s)}{z'(s)}\left(-x(s)z'(s)+x'(s)z(s)\frac{b^2}{a^2}\right)\\ &=1+x''(s)\left(x(s)-\frac{x'(s)}{z'(s)}z(s)\frac{b^2}{a^2}\right)\geq0. \end{aligned}$$ If $z'(s)=0,$ since the curve is parametrized by arc length, then $x'(s)^2=1$. Using ([\[g(x,y)\]](#g(x,y)){reference-type="ref" reference="g(x,y)"}) and ([\[h2\]](#h2){reference-type="ref" reference="h2"}), we get $$\begin{aligned} \Tilde{\lambda}_1&=1+k_1g(x,y)\\ &= 1+\left(x'(s)z''(s)-x''(s)z'(s)\right)\left(-x(s)z'(s)+x'(s)z(s)\frac{b^2}{a^2}\right)\\ &=1+z''(s)z(s)\frac{b^2}{a^2}\geq0. \end{aligned}$$ Finally, using again that the curve is parametrized by arc length, together with ([\[g(x,y)\]](#g(x,y)){reference-type="ref" reference="g(x,y)"}) and ([\[h3\]](#h3){reference-type="ref" reference="h3"}), we obtain $$\begin{aligned} \Tilde{\lambda}_2(s)&= 1+k_2g(x,y)\\ &= 1+\frac{z'(s)}{x(s)}\left(-x(s)z'(s)+x'(s)z(s)\frac{b^2}{a^2}\right)\\ &=\frac{x(s)-x(s)z'(s)^2+z'(s)x'(s)z(s)\frac{b^2}{a^2}}{x(s)}\\ &=\frac{x(s)x'(s)^2+z'(s)x'(s)z(s)\frac{b^2}{a^2}}{x(s)}\geq0.\end{aligned}$$ Therefore, $\tilde{\lambda}_1(s)\geq0$ and $\tilde{\lambda}_2(s)\geq0$ as desired. ◻ The function $$\begin{aligned} \label{funçao rho} \rho(s)=x(s)-\frac{x'(s)}{z'(s)}z(s)\frac{b^2}{a^2} \end{aligned}$$ that appears in ([\[h1\]](#h1){reference-type="ref" reference="h1"}) has an important geometric meaning.
In fact, if $\rho(s_0) = 0$, then one can prove that $\Sigma$ is orthogonal to the rotational ellipsoid $E$ given by $$\begin{aligned} a^2x^2+a^2y^2+b^2z^2=R^2, \end{aligned}$$ where $R^2:=a^2x(s_0)^2+b^2z(s_0)^2.$ In particular, we have the following lemma. ****Lemma** 5**. *Assume that $\beta(s)$ is defined for $s\in[c,d]$ and consider $\mathcal{Z} =\{s\in[c,d]; z'(s) = 0\}.$ Let $a$ and $b$ be positive real numbers such that $a^2\leq b^2$, and define the function $\rho: [c,d] \setminus \mathcal{Z}\rightarrow\mathbb{R}$ by $$\begin{aligned} \rho(s)=x(s)-\frac{x'(s)}{z'(s)}z(s)\frac{b^2}{a^2}. \end{aligned}$$ Let $s_1<s_2$ be two values in $[c,d]$ such that:* *$\rho(s_1)=\rho(s_2)=0$,* *$a^2x(s_1)^2+b^2z(s_1)^2=a^2x(s_2)^2+b^2z(s_2)^2:=R^2$ and* *$a^2x(s)^2+b^2z(s)^2<R^2$ for all $s\in(s_1,s_2).$* *Then, the rotation of $\beta_{|_{[s_1,s_2]}}$ produces a free boundary surface $\Sigma$ inside the rotational ellipsoid $E$ given by $$\begin{aligned} \label{elipsoide} a^2x^2+a^2y^2+b^2z^2=R^2. \end{aligned}$$* *Proof.* The ellipsoid given in ([\[elipsoide\]](#elipsoide){reference-type="ref" reference="elipsoide"}) can be parametrized by $$\Bar{X}(s,\theta)=\left(\frac{R}{a}\cos(s)\cos(\theta),\frac{R}{a}\cos(s)\sin(\theta),\frac{R}{b}\sin(s)\right).$$ A straightforward calculation shows that $$\Bar{N}=\frac{(\frac{1}{b}\cos(s)\cos(\theta),\frac{1}{b}\cos(s)\sin(\theta),\frac{1}{a}\sin(s))}{\sqrt{\frac{\cos^2(s)}{b^2}+\frac{\sin^2(s)}{a^2}}}.$$ Now, observe that if $\rho(s_1)=\rho(s_2)=0$, then $$\begin{aligned} 0=\rho(s_i)=x(s_i)-\frac{x'(s_i)}{z'(s_i)}z(s_i)\frac{b^2}{a^2}.\end{aligned}$$ We have $z(s_i)\neq0$: in fact, if $z(s_i)=0$, then $x(s_i)=0$ as well, which does not happen.
Thus, we can write $$x'(s_i)=\frac{a^2}{b^2}\frac{x(s_i)}{z(s_i)}z'(s_i),$$ $i=1,2.$ Therefore, $$\begin{aligned} \beta'(s_i)&=\left(\frac{a^2}{b^2}\frac{x(s_i)}{z(s_i)}z'(s_i),0,z'(s_i)\right)\\ &=\frac{z'(s_i)}{z(s_i)}\frac{a}{b}\left(\frac{a}{b}x(s_i),0,\frac{b}{a}z(s_i)\right)\end{aligned}$$ On the other hand, using (ii) we have that the curve $\beta$ intersects the ellipsoid at the points $\beta(s_i)$. The normal at these points is given by $$\Bar{N}(\beta(s_i))=\frac{\left(\frac{a}{b}x(s_i),0,\frac{b}{a}z(s_i)\right)}{R\sqrt{\frac{\cos^2(s)}{b^2}+\frac{\sin^2(s)}{a^2}}}.$$ Then, $$\beta'(s_i)=\frac{z'(s_i)}{z(s_i)}\frac{a}{b}R\sqrt{\frac{\cos^2(s)}{b^2}+\frac{\sin^2(s)}{a^2}}\Bar{N}(\beta(s_i)).$$ Thus, the rotation of $\beta_{|_{[s_1,s_2]}}$ is orthogonal to the ellipsoid in ([\[elipsoide\]](#elipsoide){reference-type="ref" reference="elipsoide"}). Since, by hypothesis, we have $a^2x(s)^2+b^2z(s)^2<R^2$ for all $s\in(s_1,s_2)$, we get that $\Sigma$ is contained in the closed region bounded by $E.$ ◻ Before presenting examples of CMC free boundary surfaces, let us introduce an example in the case where $H=0$, that is, a minimal free boundary surface in the rotational ellipsoid. ****Example** 1**. *Consider $\Sigma$ the catenoid obtained by revolving the curve $\beta(s)=(\cosh(s),0,s)$ around the $z$-axis. Parametrizing by arc length we obtain the curve $\Bar{\beta}(s)=(\cosh(\sinh^{-1}(s)),0,\sinh^{-1}(s)).$ Taking $a^2=1$ and $b^2=2$ in ([\[funçao rho\]](#funçao rho){reference-type="ref" reference="funçao rho"}), we get that $\rho(s)=0$ if and only if $$\frac{1}{2\sinh^{-1}(s)}=\tanh(\sinh^{-1}(s)).$$ Solving the equation we get that $s_1=-0.755...$ and $s_2=0.755...$ are such that $\rho(s_i)=0$ for $i=1,2.$ The parity of the functions $\cosh(s)$ and $\sinh^{-1}$ ensures that $$(\cosh(\sinh^{-1}(s_1)))^2+2(\sinh^{-1}(s_1))^2=(\cosh(\sinh^{-1}(s_2)))^2+2(\sinh^{-1}(s_2))^2,$$ since $s_1=-s_2$.
Then, let us define $$R^2:=(\cosh(\sinh^{-1}(s_1)))^2+2(\sinh^{-1}(s_1))^2=(\cosh(\sinh^{-1}(s_2)))^2+2(\sinh^{-1}(s_2))^2.$$ In this way, since $(\cosh(\sinh^{-1}(s)))^2$ and $2(\sinh^{-1}(s))^2$ both decrease on $(s_1,0)$ and increase on $(0,s_2)$, we guarantee that $(\cosh(\sinh^{-1}(s)))^2+2(\sinh^{-1}(s))^2<R^2$ for all $s\in(s_1,s_2).$ Then, $\Sigma$ is a free boundary surface in the ellipsoid $E$ given by $$x^2+y^2+2z^2=R^2.$$* *Furthermore, with some calculations we get that $$\begin{aligned} -1-x''(s)\left(x(s)-\frac{x'(s)}{z'(s)}z(s)\frac{b^2}{a^2}\right)=-1-\frac{1}{(1+s^2)^{\frac{3}{2}}}\left(\cosh(\sinh^{-1}(s))-2s\sinh^{-1}(s)\right)\leq0 \end{aligned}$$ and $$\begin{aligned} -x(s)x'(s)^2-z'(s)x'(s)z(s)\frac{b^2}{a^2}=-\frac{s}{1+s^2}\left(s\cosh(\sinh^{-1}(s))+2\sinh^{-1}(s)\right)\leq 0, \end{aligned}$$ for all $s\in[s_1,s_2].$ Then, from Lemma [**Lemma** 4](#lema do gap){reference-type="ref" reference="lema do gap"}, $\Sigma$ satisfies the condition $$|\phi|^2g(x,y)^2\leq\frac{1}{2}(2+Hg(x,y))^2.$$* ![Catenoid free boundary in the ellipsoid](Catenoide_free_boundary.png){#fig:Catenoid width="10cm"} Now, let us consider a smooth curve parametrized by arc length in the $xz$-plane $\beta(s) = (x(s), 0, z(s))$, with $x(s) > 0$, where $$\begin{aligned} \label{x} x(s)=\frac{1}{H}\sqrt{1+B^2+2B\sin(Hs+\frac{3\pi}{2})}\end{aligned}$$ and $$\begin{aligned} \label{z} z(s)=\int^{s+\frac{3\pi}{2H}}_{\frac{3\pi}{2H}}\frac{1+B\sin(Ht)}{\sqrt{1+B^2+2B\sin(Ht)}}dt\end{aligned}$$ are given by the solution of Kenmotsu [@Kenmotsu:1980 Section 2, Equation (11)], where $B, H \in \mathbb{R},$ with $H > 0,\ B \geq 0$ and $B\neq 1$. Let us denote by $\Sigma$ the surface obtained by rotation of $\beta$ around the $z$-axis. From Delaunay's Theorem, we know that any complete surface of revolution with constant mean curvature is a sphere, a catenoid, or a surface whose generating curve is given by $\beta$.
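The curve $\beta$ above is parametrized by arc length, that is, $x'(s)^2+z'(s)^2=1$; this can be checked numerically from the expressions for $x$ and $z$. The following Python sketch (the function name and the sampled parameter values are our own illustrative choices, not from Kenmotsu's paper) evaluates $x'(s)$ and $z'(s)$ and verifies the identity:

```python
import math

def profile(s, B, H):
    """Return (x, x', z') of the Delaunay generating curve at parameter s."""
    theta = H * s + 1.5 * math.pi
    w = 1 + B * B + 2 * B * math.sin(theta)   # w = H^2 x(s)^2 > 0
    x = math.sqrt(w) / H
    xp = B * math.cos(theta) / math.sqrt(w)
    zp = (1 + B * math.sin(theta)) / math.sqrt(w)
    return x, xp, zp

# x'^2 + z'^2 = 1 for every s: beta is parametrized by arc length
for B, H in [(0.5, 1.0), (0.9, 0.1), (2.0, 1.0)]:   # two unduloids and a nodoid
    for k in range(13):
        s = -3.0 + 0.5 * k
        _, xp, zp = profile(s, B, H)
        assert abs(xp * xp + zp * zp - 1.0) < 1e-12
```

Algebraically, with $\theta=Hs+\frac{3\pi}{2}$ one has $x'^2+z'^2=\frac{B^2\cos^2\theta+(1+B\sin\theta)^2}{1+B^2+2B\sin\theta}=1$, which is what the assertions confirm.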
A surface whose generating curve is given by $\beta$ is called a Delaunay surface, with parameters $H$ and $B$, which can be of different types. If $B=0$ we get right cylinders. If $0 < B < 1$, Delaunay surfaces are embedded and they are called unduloids. If $B > 1$ they are only immersed and called nodoids. Observe that the components of the velocity vector of the curve $\beta(s)$ in the $xz$-plane are given by $$\begin{aligned} x'(s)=\frac{B\cos(Hs+\frac{3\pi}{2})}{\sqrt{1+B^2+2B\sin(Hs+\frac{3\pi}{2})}} \text{ and } z'(s)=\frac{1+B\sin(Hs+\frac{3\pi}{2})}{\sqrt{1+B^2+2B\sin(Hs+\frac{3\pi}{2})}}.\end{aligned}$$ The acceleration components are given by $$\begin{aligned} x''(s)=\frac{-BH(B+\sin(Hs+\frac{3\pi}{2}))(B\sin(Hs+\frac{3\pi}{2})+1)}{(1+B^2+2B\sin(Hs+\frac{3\pi}{2}))^\frac{3}{2}}\end{aligned}$$ and $$\begin{aligned} z''(s)=\frac{HB^2\cos(Hs+\frac{3\pi}{2})(B+\sin(Hs+\frac{3\pi}{2}))}{(1+B^2+2B\sin(Hs+\frac{3\pi}{2}))^\frac{3}{2}}.\end{aligned}$$ Let us assume that $0 < B < 1$. The key observation in this case is that the function $z$ satisfies $z'(s) > 0$ for all $s$. Let $s_0$ be the smallest positive value such that $x''(s_0) = 0$. One can easily check that $s_0 = s_0(H, B) = \frac{1}{H}\sin^{-1}(-B) +\frac{\pi}{2H},$ where $\sin^{-1} : [-1, 1] \rightarrow [-\frac{\pi}{2},\frac{\pi}{2} ]$. Thus, given $s \in(-s_0, s_0)$ we have $z'(s) > 0$ and $x''(s) > 0.$ **Remark 4**. *In this case, we only have $x' = 0$ at the point $s=0$, so the tangent line is vertical only at this point. Therefore, we only have one wave of the unduloid inside the ellipsoid.* Now, let us see some properties of the function $\rho$ that we will need later. ****Lemma** 6**. *Fix $0< B < 1$, $H > 0$, and consider the function $\rho : [-s_0, s_0] \rightarrow \mathbb{R}$ given by ([\[funçao rho\]](#funçao rho){reference-type="ref" reference="funçao rho"}).
Then,\ i) $\rho(0)>0$.\ ii) $\rho'(0)=0$ and $\rho'(s_0)\leq0$.\ iii) $\rho$ is increasing in $(- s_0,0)$ and decreasing in $(0, s_0).$* *Proof.* Observe that i) follows directly since $$\begin{aligned} \rho(0)=x(0)-\frac{x'(0)}{z'(0)}z(0)\frac{b^2}{a^2}=\frac{1-B}{H}>0. \end{aligned}$$ To prove ii), observe that, since $\beta$ is parametrized by arc length, $$\begin{aligned} \rho'(s)&=x'(s)-\frac{b^2}{a^2}\frac{((x'(s)z(s))'z'(s)-x'(s)z(s)z''(s))}{z'(s)^2}\\ &=x'(s)+\frac{b^2}{a^2}\frac{x'(s)z(s)z''(s)-x''(s)z(s)z'(s)-x'(s)z'(s)^2}{z'(s)^2}\\ &=\left(1-\frac{b^2}{a^2}\right)x'(s)+\frac{b^2}{a^2}z(s)\left(\frac{x'(s)z''(s)-x''(s)z'(s)}{z'(s)^2}\right)\\ &=\frac{(a^2-b^2)}{a^2}x'(s)-\frac{b^2}{a^2}z(s)\left(\frac{x'(s)}{z'(s)}\right)'.\end{aligned}$$ As $x'(0)=0$ and $z(0)=0$ it follows that $\rho'(0)=0.$ On the other hand, using the expressions for $k_1$ given in ([\[curvaturas k1 e k2\]](#curvaturas k1 e k2){reference-type="ref" reference="curvaturas k1 e k2"}) and ([\[curvatura k1 z\' não nulo\]](#curvatura k1 z' não nulo){reference-type="ref" reference="curvatura k1 z' não nulo"}) we get $$\begin{aligned} \rho'(s)&=\frac{(a^2-b^2)}{a^2}x'(s)-\left(\frac{-x'(s)z''(s)+x''(s)z'(s)}{z'(s)^2}\right)z(s)\frac{b^2}{a^2}\\ &=\frac{(a^2-b^2)}{a^2}x'(s)+\frac{k_1(s)}{z'(s)^2}z(s)\frac{b^2}{a^2}\\ &=\frac{(a^2-b^2)}{a^2}x'(s)-\frac{x''(s)}{z'(s)^3}z(s)\frac{b^2}{a^2}.\end{aligned}$$ Then, since $x''(s_0)=0$ we have that $$\begin{aligned} \rho'(s_0)&=\frac{(a^2-b^2)}{a^2}x'(s_0)\\ &=\frac{(a^2-b^2)}{a^2}\frac{B\cos(Hs_0+\frac{3\pi}{2})}{\sqrt{1+B^2+2B\sin(Hs_0+\frac{3\pi}{2})}}\\ &=\frac{(a^2-b^2)}{a^2}\frac{B\sqrt{1-B^2}}{{\sqrt{1-B^2}}}\\ &=\frac{(a^2-b^2)}{a^2}B\leq0.\end{aligned}$$ Finally, since $x''(s)>0$ and $x'(0)=0$ we get that $x'(s)>0$ for all $s\in (0,s_0)$ and $x'(s)<0$ for all $s\in(-s_0, 0).$ In the same way, we have $z(s)>0$ in $(0,s_0)$ and $z(s)<0$ in $(-s_0,0)$, then we obtain $$\begin{aligned}
\rho'(s)=\frac{(a^2-b^2)}{a^2}x'(s)-\frac{x''(s)}{z'(s)^3}z(s)\frac{b^2}{a^2}<0\end{aligned}$$ in $(0,s_0),$ and $$\begin{aligned} \rho'(s)=\frac{(a^2-b^2)}{a^2}x'(s)-\frac{x''(s)}{z'(s)^3}z(s)\frac{b^2}{a^2}>0\end{aligned}$$ in $(-s_0,0)$. Therefore, $\rho$ is increasing in $(- s_0,0)$ and decreasing in $(0, s_0).$ ◻ The next lemma gives us conditions to have an unduloid that is a free boundary surface on the rotational ellipsoid. ****Lemma** 7**. *Fix $0 < B < 1$, $H > 0$, and set $z_0 = \frac{1-B^2}{HB}$. If $z(s_0) \geq z_0$, then $\rho(\Bar{s}) = 0$ for some $\bar{s} \in(0, s_0]$. In particular, the surface obtained by rotation of $\beta|_{[-\bar{s},\bar{s}]}$ is a free boundary CMC surface inside the rotational ellipsoid $E$ given by $$\begin{aligned} a^2x^2+a^2y^2+b^2z^2=\Bar{R}^2, \end{aligned}$$ where $\Bar{R}^2:=a^2x(\bar{s})^2+b^2z(\bar{s})^2$.* *Proof.* If $z(s_0) \geq z_0$, then we get $$\begin{aligned} \rho(s_0)&=x(s_0)-\frac{x'(s_0)}{z'(s_0)}z(s_0)\frac{b^2}{a^2}\\ &\leq x(s_0)-\frac{x'(s_0)}{z'(s_0)}z_0\frac{b^2}{a^2}\\ &=\frac{(a^2-b^2)}{a^2}\frac{\sqrt{1-B^2}}{H}\leq 0. \end{aligned}$$ By assertion i) of Lemma [**Lemma** 6](#lema rho decrescente){reference-type="ref" reference="lema rho decrescente"}, $\rho(0) > 0$, and then by continuity there is $\bar{s}\in (0, s_0]$ such that $\rho(\bar{s}) = 0$. Using the parity of the functions $\sin(Ht+\frac{3\pi}{2})$ and $\sin(Ht)$, we get $$\begin{aligned} x(-s)=x(s),\ x'(-s)=-x'(s),\ z(-s)=-z(s) \text{ and } z'(-s)=z'(s),\end{aligned}$$ and thus, $$\rho(-\bar{s}) = \rho(\bar{s}) = 0.$$ Moreover, $x'(0) = 0$ and $x''(s) > 0$ imply that $x'(s) > 0$ for all $s \in (0, \bar{s}]$. Therefore, $x'(s) > 0$ and $z'(s) > 0$ in $(0,\Bar{s})$, and it ensures $a^2x^2(s) + b^2z^2(s) < \Bar{R}^2 := a^2x^2(\bar{s}) + b^2z^2(\bar{s})$ for all $s \in(0, \bar{s})$.
Because the curve $\beta$ is symmetric with respect to the $x$-axis we get $a^2x^2(s) + b^2z^2(s) \leq \Bar{R}^2$ for all $s\in[-\bar{s},\bar{s}]$ and we conclude that the surface is free boundary by Lemma [**Lemma** 5](#lemma Free boundary){reference-type="ref" reference="lemma Free boundary"}. ◻ ****Example** 2**. *Fix $B = 0.9$ and $H = 0.1$, so we have $z_0 =\frac{1-B^2}{HB}= 2.111...$ and $s_0 = 10\sin^{-1}(-0.9) + 5\pi \approx 4.51026.$ Then, we get $$z(s_0)=\int_{15\pi}^{4.51026+15\pi}\left(\frac{1+0.9\sin(0.1t)}{\sqrt{1+(0.9)^2+1.8\sin(0.1t)}}\right)dt\approx 2.71697.$$ Therefore, $z(s_0)\geq z_0$. From Lemma [**Lemma** 7](#free boundary cmc no E){reference-type="ref" reference="free boundary cmc no E"}, there is $\bar{s} \in(0, s_0]$ such that the surface obtained by rotation of $\beta|_{[-\bar{s},\bar{s}]}$ is a free boundary CMC surface inside the rotational ellipsoid $E$ given by $$\begin{aligned} \label{elipsoide2} a^2x^2+a^2y^2+b^2z^2=\Bar{R}^2, \end{aligned}$$ where $\Bar{R}^2:=a^2x(\bar{s})^2+b^2z(\bar{s})^2$.* ![Unduloid free boundary in the ellipsoid](Onduloide_no_elipsoide.png){#fig:Unduloid width="12cm"} The next example says essentially that there are portions of unduloids that are free boundary in the ellipsoid given by ([\[elipsoide2\]](#elipsoide2){reference-type="ref" reference="elipsoide2"}) and satisfy the conditions of Lemma [**Lemma** 4](#lema do gap){reference-type="ref" reference="lema do gap"}, that is, they satisfy $$|\phi|^2g(x,y)^2\leq\frac{1}{2}(2+Hg(x,y))^2,$$ on $\Sigma.$ ****Example** 3**. *Fix $0<B < 1$ and $H > 0$ and consider $\beta(s)=(x(s),0, z(s))$ as above and set $z_0=\frac{1-B^2}{HB}$.
Let $s_0$ be the smallest positive value such that $x''(s_0) = 0$, in other words, $s_0= \frac{1}{H}\sin^{-1}(-B)+\frac{\pi}{2H}.$ Suppose $z(s_0)\geq z_0.$ From Lemma [**Lemma** 7](#free boundary cmc no E){reference-type="ref" reference="free boundary cmc no E"}, the surface $\Sigma$ obtained by rotation of $\beta|_{[-\Bar{s},\Bar{s}]}$, for some $\Bar{s}\in (0,s_0],$ is a free boundary CMC surface inside the rotational ellipsoid $E$ given by ([\[elipsoide\]](#elipsoide){reference-type="ref" reference="elipsoide"}). Moreover, in this case, for all $s\in[-\Bar{s},\Bar{s}]$ we have* *(i) $x''(s)\geq0.$ In fact, we have $[-\Bar{s},\Bar{s}]\subset[-s_0,s_0],$ where $(-s_0,s_0)$ is the largest neighborhood of $0$ on which $x''(s)\geq0.$* *(ii) $\rho(s):=x(s)-\frac{x'(s)}{z'(s)}z(s)\frac{b^2}{a^2}\geq 0.$ Indeed, from Lemma [**Lemma** 7](#free boundary cmc no E){reference-type="ref" reference="free boundary cmc no E"}, $\rho(\Bar{s})=0$. From Lemma [**Lemma** 6](#lema rho decrescente){reference-type="ref" reference="lema rho decrescente"}, $\rho$ is increasing in $(-s_0, 0)$ and decreasing in $(0, s_0)$. Therefore, $\rho(s)\geq0.$* *(iii) $z(s)x'(s)\geq0.$ In fact, since $z'(s)>0$ and $x''(s)>0$ in $(-s_0,s_0)$, we get that $z$ and $x'$ are both increasing in $(-s_0,s_0).$ Since $z(0)=x'(0)=0$, we conclude that $x'$ and $z$ have the same sign.* *The items (i), (ii) and (iii) guarantee that the inequalities in Lemma [**Lemma** 4](#lema do gap){reference-type="ref" reference="lema do gap"} are satisfied. In fact, from (i) and (ii) we get ([\[h1\]](#h1){reference-type="ref" reference="h1"}). Since $z'>0$, we do not need to show the validity of ([\[h2\]](#h2){reference-type="ref" reference="h2"}). Using that $x>0$ and (iii) we get ([\[h3\]](#h3){reference-type="ref" reference="h3"}). Therefore, $$|\phi|^2g(x,y)^2\leq\frac{1}{2}(2+Hg(x,y))^2,$$ on $\Sigma.$* Now, let us assume that $B>1$.
Let $r_0$ be the smallest positive value such that $z'(r_0)=0.$ We can check that $r_0=r_0(H,B)= \frac{1}{H}\sin^{-1}\left(-\frac{1}{B}\right)+\frac{\pi}{2H}$, where $\sin^{-1}:[-1,1]\rightarrow[-\frac{\pi}{2},\frac{\pi}{2}].$ In this case, we have $z'(r)<0$ and $x''(r)>0$ for all $r\in (-r_0,r_0)$. **Remark 5**. *In this case, since $z' \neq 0$ for all $r\in(-r_0,r_0)$, we do not have horizontal tangents. Therefore, the node of the nodoids does not lie inside the ellipsoid.* In the next lemma we are going to show that there are portions of nodoids that are free boundary in the ellipsoid given by ([\[elipsoide2\]](#elipsoide2){reference-type="ref" reference="elipsoide2"}) and satisfy the conditions of Lemma [**Lemma** 4](#lema do gap){reference-type="ref" reference="lema do gap"}, that is, they satisfy $$|\phi|^2g(x,y)^2\leq\frac{1}{2}(2+Hg(x,y))^2,$$ on $\Sigma.$ ****Lemma** 8**. *Fix $B>1$ and $H>0$ and consider $\beta(r)=(x(r),0,z(r))$, with $x$ and $z$ given in ([\[x\]](#x){reference-type="ref" reference="x"}) and ([\[z\]](#z){reference-type="ref" reference="z"}), respectively. Let $r_0$ be as above. Then there is $\bar{r}\in (-r_0,r_0)$ such that $\rho(\bar{r})=0$ and the surface obtained by rotation of $\beta|_{[-\bar{r},\bar{r}]}$ is a free boundary CMC surface inside the rotational ellipsoid $E$ given by $$\begin{aligned} a^2x^2+a^2y^2+b^2z^2=\Bar{R}^2, \end{aligned}$$ where $\bar{R}^2:=a^2x(\bar{r})^2+b^2z(\bar{r})^2$. Furthermore, we have $$|\phi|^2g(x,y)^2\leq\frac{1}{2}(2+Hg(x,y))^2,$$ on $\Sigma.$* *Proof.* In fact, we have that $$\rho(0)=x(0)-\frac{x'(0)}{z'(0)}z(0)\frac{b^2}{a^2}=\frac{|1-B|}{H}>0,$$ and $\rho(r)\rightarrow -\infty$ when $r\rightarrow r_0$. Then, by continuity there is $\bar{r}\in(0,r_0)$ such that $\rho(\bar{r})=0.$ Using the parity of the function $\rho$ we have $$\rho(\bar{r})=\rho(-\bar{r})=0.$$ Moreover, $x'(0) = 0$ and $x''(r) > 0$ imply that $x'(r) < 0$ for all $r \in (-\bar{r}, 0)$.
Therefore, $x'(r) < 0$ and $z'(r) < 0$ in $(-\bar{r}, 0)$, and this ensures $a^2x^2(r) + b^2z^2(r) < \Bar{R}^2 := a^2x^2(-\bar{r}) + b^2z^2(-\bar{r})$ for all $r \in(-\bar{r}, 0)$. Because the curve $\beta$ is symmetric with respect to the $x$-axis we get $a^2x^2(r) + b^2z^2(r) \leq \Bar{R}^2$ for all $r\in[-\bar{r},\bar{r}]$ and we conclude that the surface is free boundary by Lemma [**Lemma** 5](#lemma Free boundary){reference-type="ref" reference="lemma Free boundary"}. Furthermore, in this case, for all $r\in [-\bar{r}, \bar{r}]$ we have \(i\) $\rho(r)\geq 0.$ Indeed, as already calculated in Lemma [**Lemma** 6](#lema rho decrescente){reference-type="ref" reference="lema rho decrescente"}, we have $$\rho'(r)=\frac{(a^2-b^2)}{a^2}x'(r)-\frac{x''(r)}{z'(r)^3}z(r)\frac{b^2}{a^2}.$$ Since $x''(r)>0$ and $x'(0)=0$ we get that $x'(r)<0$ for all $r\in(-r_0, 0)$ and $x'(r)>0$ for all $r\in (0,r_0)$. Similarly, we have $z(r)>0$ in $(-r_0,0)$ and $z(r)<0$ in $(0,r_0)$, then we obtain $$\rho'(r)>0\ \forall r\in (-\bar{r},0)$$ and $$\rho'(r)<0\ \forall r\in (0,\bar{r}).$$ Therefore, $\rho$ is increasing in $(- r_0,0)$ and decreasing in $(0, r_0).$ Since $\rho(0)>0$, we conclude that $\rho(r)\geq0,$ for all $r\in [-\bar{r}, \bar{r}]$. \(ii\) $x'(r)z(r)\leq 0.$ In fact, since $z'(r)<0$ and $x''(r)>0$ in $(-r_0,r_0)$, we get that $x'$ is increasing in $(-r_0,r_0)$ and $z$ is decreasing in $(-r_0,r_0)$. Since $z(0)=x'(0)=0$, we conclude that $x'$ and $z$ have opposite signs. The items (i) and (ii), together with $x''(r)>0$, guarantee that the inequalities in Lemma [**Lemma** 4](#lema do gap){reference-type="ref" reference="lema do gap"} are satisfied. In fact, from $x''(r)>0$ and (i) we get ([\[h1\]](#h1){reference-type="ref" reference="h1"}). Since $z'<0$, we do not need to show the validity of ([\[h2\]](#h2){reference-type="ref" reference="h2"}). Using that $x>0$ and (ii) we get ([\[h3\]](#h3){reference-type="ref" reference="h3"}).
Therefore, $$|\phi|^2g(x,y)^2\leq\frac{1}{2}(2+Hg(x,y))^2,$$ on $\Sigma.$ ◻ ****Example** 4**. *Fix $B=1.1$ and $H=0.1$. Then, we have $r_0= 10\sin^{-1}(-1/1.1)+5\pi\approx 4.297\ldots$, so that $z'(r_0)=0$, and from Lemma [**Lemma** 8](#lema nodoide){reference-type="ref" reference="lema nodoide"}, there is $\bar{r} \in(0, r_0]$ such that the surface obtained by rotation of $\beta|_{[-\bar{r},\bar{r}]}$ is a free boundary CMC surface inside the rotational ellipsoid $E$ given by $$\begin{aligned} a^2x^2+a^2y^2+b^2z^2=\Bar{R}^2, \end{aligned}$$ where $\Bar{R}^2:=a^2x(\bar{r})^2+b^2z(\bar{r})^2$.* ![Nodoid free boundary in the ellipsoid](Nodoide_no_elipsoide.png){#fig:nodoid width="12cm"} # Funding {#funding .unnumbered} The first and second authors have been partially supported by Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) of the Ministry of Science, Technology and Innovation of Brazil, Grants 316080/2021-7, 200261/2022-3, and 306524/2022-8. The authors were also supported by Paraíba State Research Foundation (FAPESQ), Grants 3025/2021 and 2021/3175 (A. Freitas). The third author was partially supported by Grant 2022/1963, Paraíba State Research Foundation (FAPESQ). # Acknowledgements {#acknowledgements .unnumbered} This work is a part of the Ph.D. thesis of the third author. The authors would like to thank Ezequiel Barbosa and Luciano Mari for their discussions about the object of this paper and several valuable suggestions. The first author would like to thank the hospitality of the Mathematics Department of Università degli Studi di Torino, where part of this work was carried out. The third author would like to express her gratitude for the hospitality and support during her visit to the Mathematics Department of Universidade Federal de Minas Gerais in April/May 2023. # Data availability statement {#data-availability-statement .unnumbered} This manuscript has no associated data. [^1]: *For details, see the Introduction of [@hilario].*
--- abstract: | In this article, we point out the connections between the distinguished varieties introduced by Agler and McCarthy and certain uniform algebras on the bidisc studied by Samuelsson and Wold. We also prove analogues of the Samuelsson-Wold result for domains in $\mathbb{C}^2$ that are images of the bidisc under certain proper polynomial maps on $\mathbb{C}^2$. We also give a description of the polynomially convex hull of the graph of an anti-holomorphic polynomial over the distinguished boundary of such domains. We discuss the case of the symmetrized bidisc as an example. address: - Department of Mathematics and Statistics, Indian Institute of Science Education and Research Kolkata, Mohanpur -- 741 246 - Department of Mathematics, Indian Institute of Science Education and Research Pune, Pune -- 411 008 author: - Sushil Gorai and Golam Mostafa Mondal title: Uniform algebras and distinguished varieties --- [^1] # Introduction {#S:intro} This article connects the theory of distinguished varieties--a well-explored topic in operator theory--with the notion of uniform algebras generated by holomorphic polynomials and certain pluriharmonic functions. The latter is also a very well-studied object in several complex variables. In particular, we observe that the obstruction to uniformly approximating every continuous function on the distinguished boundary of certain domains in $\mathbb{C}^2$ by elements of the algebra generated by the holomorphic polynomials in $z_1$ and $z_2$ and some pluriharmonic functions is the presence of a certain distinguished variety in the domain on which the pluriharmonic functions become holomorphic. Before making these precise, let us briefly mention the theory of distinguished varieties and the theory of uniform algebras one by one.
In a seminal paper [@AglerMcCarthy2005], Agler and McCarthy introduced the notion of distinguished variety in the bidisc $\mathbb{D}^2$ as follows: A non-empty set $V$ in $\mathbb{C}^2$ is said to be a *distinguished variety* if there exists a polynomial $p$ in $\mathbb{C}[z, w]$ such that $$V = \{(z, w) \in \mathbb{D}^2 : p(z, w) = 0\}$$ and such that $$\begin{aligned} \label{E:Distinguishd Variety} \overline{V} \cap \partial\mathbb{D}^2 = \overline{V} \cap \mathbb{T}^2. \end{aligned}$$ Here, $\partial \mathbb{D}^2$ represents the boundary of $\mathbb{D}^2$, and $\mathbb{T}^2$ is the distinguished boundary of $\mathbb{D}^2$. A distinguished variety is an algebraic variety that exits the bidisc through the distinguished boundary. The set $\overline{V}$ is the closure of $V$ within $\overline{\mathbb{D}}^2$. We will use $\partial V$ to denote the set described by ([\[E:Distinguishd Variety\]](#E:Distinguishd Variety){reference-type="ref" reference="E:Distinguishd Variety"}). From a topological standpoint, $\partial V$ represents the boundary of $V$ within the zero set of $p$, rather than in all of $\mathbb{C}^2$. One of the fundamental results in operator theory, known as Andô's inequality [@Ando1963], establishes that if $T_1$ and $T_2$ are commuting operators, each with norm not exceeding 1, then for any two-variable polynomial $p$ the inequality $$\begin{aligned} \label{E:Ando_Inequality} \|p(T_1, T_2)\| \leq \|p\|_{\mathbb{D}^2} \end{aligned}$$ holds. Agler and McCarthy [@AglerMcCarthy2005 Theorem 3.1] gave the following improvement of the inequality ([\[E:Ando_Inequality\]](#E:Ando_Inequality){reference-type="ref" reference="E:Ando_Inequality"}): if $T_1$ and $T_2$ are, in addition, matrices, then $$\|p(T_1, T_2)\| \leq \|p\|_V,$$ where $V$ is a distinguished variety that depends on $T_1$ and $T_2$.
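Andô's inequality can be checked numerically on toy examples. The sketch below is an illustration we add here; the matrices and the polynomial are arbitrary choices, not taken from [@AglerMcCarthy2005]. It evaluates $p(T_1,T_2)$ for a commuting pair of contractive $2\times 2$ matrices and compares its operator norm with $\|p\|_{\mathbb{D}^2}$, which by the maximum principle equals the maximum of $|p|$ on the torus $\mathbb{T}^2$:

```python
import numpy as np

# Illustrative commuting contractions: a nilpotent Jordan block and a scalar matrix.
T1 = np.array([[0.0, 1.0], [0.0, 0.0]])   # ||T1|| = 1
T2 = 0.5 * np.eye(2)                      # ||T2|| = 0.5, commutes with T1

def p(z, w):
    # Sample two-variable polynomial p(z, w) = z*w + z^2 + 0.3.
    return z * w + z**2 + 0.3

# p(T1, T2), with the constant term acting as 0.3 * identity.
pT = T1 @ T2 + T1 @ T1 + 0.3 * np.eye(2)
op_norm = np.linalg.norm(pT, 2)           # largest singular value

# By the maximum principle, sup over the closed bidisc is attained on the torus.
theta = np.linspace(0, 2 * np.pi, 400)
z, w = np.meshgrid(np.exp(1j * theta), np.exp(1j * theta))
sup_norm = np.abs(p(z, w)).max()

assert np.allclose(T1 @ T2, T2 @ T1)      # the pair commutes
assert op_norm <= sup_norm + 1e-9         # Ando's inequality holds
```

Here `sup_norm` is attained at $z=w=1$, and the operator norm of $p(T_1,T_2)$ is well below it; the Agler-McCarthy refinement replaces the torus by a smaller set $V$.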
Additionally, in [@AglerMcCarthy2005 Theorem 1.12], the authors have shown that all distinguished varieties in the bidisc can be expressed as $$\{(z, w) \in \mathbb{D}^2 : \det(\Psi(z) - wI) = 0\},$$ where $\Psi$ is an analytic matrix-valued function defined on the disc that is unitary on $\partial\mathbb{D}$. A similar description of distinguished varieties in the symmetrized bidisc is given in [@PalShalit2014]. Consider a compact subset $K$ of $\mathbb{C}^n$. The space of all continuous complex-valued functions on $K$ is denoted as $\mathcal{C}(K)$, equipped with the norm $\|g\| = \sup_{z\in K}|g(z)|$. We denote the closure of the set of polynomials in $\mathcal{C}(K)$ as $\mathcal{P}(K)$. For a collection of functions $g_1, \ldots, g_N \in \mathcal{C}(K)$, we use $[g_1, \ldots, g_N; K]$ to represent the uniform algebra generated by $g_1, \ldots, g_N$ on $K$. We define the set $X=\{(g_1(z), \ldots, g_N(z)): z\in K\}$ associated with the uniform algebra $[g_1, \ldots, g_N; K]$. By the Stone--Weierstrass theorem, $$\begin{aligned} [g_1, \ldots, g_N; K] = \mathcal{C}(K) \end{aligned}$$ if and only if $\mathcal{P}(X)=\mathcal{C}(X)$ and the generators $g_1, \ldots, g_N$ separate points on $K$. If we consider $\mathcal{P}(K)$ and $\mathcal{C}(K)$ as Banach algebras, the equality $\mathcal{P}(K)=\mathcal{C}(K)$ implies the equality of their corresponding maximal ideal spaces. The maximal ideal space of $\mathcal{C}(K)$ corresponds to $K$, and that of $\mathcal{P}(K)$ corresponds to $\widehat{K}$, where $\widehat{K}$ is the polynomial convex hull of $K$ (see [@Gamelin]). Here, the *polynomial convex hull* of $K$ is denoted as $\widehat{K}$ and is defined as follows: $$\begin{aligned} \widehat{K}:=\left\{\alpha\in\mathbb{C}^n: |p(\alpha)|\le\max_{K}|p|~~\forall p\in \mathbb{C}[z_1,z_2,\cdots,z_n]\right\}. \end{aligned}$$ We say $K$ is *polynomially convex* when $\widehat{K}= K$.
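As a toy illustration of this definition (our addition, not part of the paper's argument): the unit circle $\mathbb{T}\subset\mathbb{C}$ is not polynomially convex, since by the maximum modulus principle every point of the open unit disc lies in $\widehat{\mathbb{T}}$. A numerical sketch of the defining inequality:

```python
import numpy as np

rng = np.random.default_rng(0)
# Dense sample of the unit circle T.
circle = np.exp(1j * np.linspace(0, 2 * np.pi, 4096))

def eval_poly(coeffs, z):
    # Evaluate sum_k coeffs[k] * z**k.
    return sum(c * z**k for k, c in enumerate(coeffs))

# For random polynomials p and random points a in the open disc,
# |p(a)| <= max_T |p| by the maximum modulus principle, so a lies in
# the polynomially convex hull of T even though a is not in T.
for _ in range(100):
    coeffs = rng.normal(size=5) + 1j * rng.normal(size=5)
    a = 0.9 * np.sqrt(rng.uniform()) * np.exp(2j * np.pi * rng.uniform())
    assert abs(eval_poly(coeffs, a)) <= np.abs(eval_poly(coeffs, circle)).max() + 1e-6
```

The small tolerance only absorbs the discretization of the circle; the inequality itself is exact.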
Thus, polynomial convexity serves as a necessary condition for all compacts $K$ where $\mathcal{P}(K)=\mathcal{C}(K)$ holds. Recall that an *analytic disc* in $\mathbb{C}^n$ is a holomorphic map $\phi:\mathbb{D}\rightarrow\mathbb{C}^n$ which is non-constant and continuous on $\overline{\mathbb{D}}$. Let $K\subset\mathbb{C}^n.$ We say an analytic disc $\phi$ is present in $K$ if $\phi(\mathbb{D})\subset K.$ In view of Lavrentiev's [@Lv] result, if $K$ is a compact subset of $\mathbb{C},$ then $\mathscr{P}(K)=\mathcal{C}(K)$ if and only if $K = \widehat{K}$ and there does not exist any analytic disc in $K.$ But in higher dimensions this is far from being a sufficient condition for polynomially convex compacts. This article discusses some results in which the presence of an analytic disc is the only obstruction to $\mathscr{P}(K)=\mathcal{C}(K)$ for a polynomially convex compact $K.$ We now talk about the Wermer maximality theorem. Let $\mathbb{T}^{1}$ be the unit circle in the complex plane and $\mathcal{C}(\mathbb{T}^{1})$ be the set of all continuous complex-valued functions on $\mathbb{T}^{1}.$ Let $\mathcal{A}$ denote the set of all $f\in \mathcal{C}(\mathbb{T}^{1})$ which are boundary values of functions holomorphic on $\mathbb{D}$ and continuous on $\overline{\mathbb{D}}.$ In [@ZL52], the following question was asked: $$\textit{if } g\in \mathcal{C}(\mathbb{T}^{1})\setminus \mathcal{A}, \textit{ does the closed algebra generated by } g \textit{ and } \mathcal{A} \textit{ equal } \mathcal{C}(\mathbb{T}^{1})?$$ In [@ZL52], it is shown that if $g$ is real-valued or if $g$ satisfies a Lipschitz condition, then the algebra generated by $g$ and $\mathcal{A}$ equals $\mathcal{C}(\mathbb{T}^{1}).$ Wermer [@Wer53] settled this question by proving the following: **Result 1** (Wermer).
*Let $\mathcal{B}$ be any closed subalgebra of $\mathcal{C}(\mathbb{T}^{1})$ with $\mathcal{A}\subset \mathcal{B}\subset \mathcal{C}(\mathbb{T}^{1}).$ Then either $\mathcal{A}=\mathcal{B}$ or $\mathcal{B}=\mathcal{C}(\mathbb{T}^{1}).$* A uniform algebra $\mathcal{U}$ defined on a compact subset $K$ is said to be a *maximal subalgebra* of $\mathcal{C}(K)$ if, for any other subalgebra $\mathcal{B}$ of $\mathcal{C}(K)$ such that $\mathcal{U}\subset \mathcal{B}\subset \mathcal{C}(K)$, it holds that either $\mathcal{U}=\mathcal{B}$ or $\mathcal{B}=\mathcal{C}(K)$. Result 1 is known as the *Wermer Maximality Theorem.* A similar related result due to Wermer is the following [@Wer65]: Let $g\in C^{1}(\overline{\mathbb{D}}).$ Assume that the graph ${\sf Gr}_{\overline{\mathbb{D}}}(g)\subset\mathbb{C}^{2}$ of $g$ is polynomially convex. Let $E:=\{z\in\overline{\mathbb{D}}:\frac{\partial g}{\partial\bar{z}}(z)=0\}.$ Then $$\begin{aligned} [z,g;\overline{\mathbb{D}}]=\{f\in C(\overline{\mathbb{D}}):f|_{E}\in \mathcal{O}(E)\}. \end{aligned}$$ It is natural to ask for versions of these results in higher dimensions. In higher dimensions, there is no clean answer analogous to the Wermer maximality theorem. The natural objective is a generalization of the second result of Wermer, even when considering the algebra generated by polynomials and a pluriharmonic function. For a domain $\Omega\subset \mathbb{C}^n,$ let $PH(\Omega)$ denote the class of all pluriharmonic functions on $\Omega.$ The works of Čirka [@Cirka69], Izzo [@AIJAMS93; @AIJ95], Samuelsson and Wold [@SaW12], and Izzo, Samuelsson, and Wold [@ISW16] focused on the study of uniform algebras generated by holomorphic and pluriharmonic functions in higher dimensions. Samuelsson and Wold [@SaW12] proved the following results in the case of the bidisc $\mathbb{D}^2.$ **Result 2** (Samuelsson-Wold).
*Let $h_{j}\in PH(\mathbb{D}^{2})\cap\mathcal{C}^{1}(\overline{\mathbb{D}}^2)$ for $j=1,\cdots,N.$ Then either there exists a holomorphic disc in $\overline{\mathbb{D}}^2$ where all $h_{j}$'s are holomorphic, or $[z_1,z_2,h_1,\cdots,h_{N};\overline{\mathbb{D}}^2]=\mathcal{C}(\overline{\mathbb{D}}^2).$* The following result can be thought of as an analogue of the Wermer maximality theorem in the case of the bidisc. **Result 3** (Samuelsson-Wold). *Let $f_{j} \in \mathcal{C}(\mathbb{T}^2)$ for $j=1,\cdots,N$ with $N\ge 1$, and assume that each $f_{j}$ extends to a pluriharmonic function on $\mathbb{D}^{2}$. Then either $[z_1,z_2,f_1,\cdots,f_{N};\mathbb{T}^2]=\mathcal{C}(\mathbb{T}^2)$, or there exists a non-trivial algebraic variety $Z\subset \mathbb{C}^2$ with $\overline{V}\setminus V\subset \mathbb{T}^2,$ and the pluriharmonic extensions of the $f_{j}$'s are holomorphic on $Z,$ where $V=Z\cap (\overline{\mathbb{D}^2}\setminus \mathbb{T}^2).$* *Remark 4*. In Result 3, if for at least one $j$ the function $f_{j}$ is not holomorphic on any analytic disc that lies in $\partial\mathbb{D}^2$, and $[z_1,z_2,f_1,\cdots,f_{N};\mathbb{T}^2]\ne \mathcal{C}(\mathbb{T}^2)$, then the algebraic variety that exists is a distinguished variety. As mentioned earlier, by a result of Agler and McCarthy [@AglerMcCarthy2005], every distinguished variety in the bidisc is of the form $\{(z, w) \in \mathbb{D}^2 : \det(\Psi(z) - wI) = 0\}$ for some matrix-valued holomorphic function $\Psi$ on $\mathbb{D}$. Therefore, this variety is also of the above-mentioned determinant form. We do not know what connections there are between the matrix-valued function $\Psi$ in [@AglerMcCarthy2005] and the pluriharmonic functions in Result 3. *Remark 5*. It might occur that the variety in Result 3 appears in the boundary of the bidisc.
In this case, the variety is not a distinguished variety, but such a variety can also be explained from the operator theoretic point of view by a result due to Das and Sarkar [@BataJaydeb2017 Theorem 4.3]. From the proof of Result 3 it is clear that such a variety has the form $\{\lambda\}\times \mathbb{D}$ or $\mathbb{D}\times\{\lambda\}$ for some $\lambda\in\partial\mathbb{D}$, which matches the description in [@BataJaydeb2017 Theorem 4.3]. Consider the domain $\Omega=\phi(\mathbb{D}^2)$ in $\mathbb{C}^2,$ where $\phi$ is a proper polynomial map on $\mathbb{C}^2;$ we note that the distinguished boundary of $\Omega$ for the algebra $\mathcal{A}(\Omega)$ is $\Gamma_{\Omega}=\phi(\mathbb{T}^2).$ We prove the following generalizations of Result 2 and Result 3 for the above domain. **Theorem 6**. *Let $h_{j}\in PH(\Omega)\cap\mathcal{C}^{1}(\overline{\Omega})$ for $j=1,\cdots,N,$ and $\phi^{-1}(\overline{\Omega})\subset \overline{\mathbb{D}}^2.$ Then, either there exists a holomorphic disc in $\overline{\Omega}$ where all $h_{j}$'s are holomorphic, or $[z_1,z_2,h_1,\cdots,h_{N};\overline{\Omega}]=\mathcal{C}(\overline{\Omega}).$* **Theorem 7**. *Let $f_{j}\in \mathcal{C}(\Gamma_{\Omega})$ for $j=1,\cdots,N,$ $N\ge 1,$ and assume that each $f_{j}$ extends to a pluriharmonic function on $\Omega.$ Assume also that $\phi^{-1}(\Gamma_{\Omega})\subset \mathbb{T}^2$. If $f_{j}$ is not holomorphic on any analytic disc present in the boundary $\partial \Omega$ for at least one $j$, then either $$\begin{aligned} [z_1,z_2,f_1,\cdots,f_{N};\Gamma_{\Omega}]=\mathcal{C}(\Gamma_{\Omega}), \end{aligned}$$ or there exists a distinguished variety $V$ in $\Omega$ such that the pluriharmonic extensions of the $f_{j}$'s are holomorphic on $V.$* As a corollary, we can extend these results to the symmetrized bidisc. Recall that the symmetrized bidisc $\mathbb{G}_{2}$ is the image of the bidisc under the *symmetrization map* $\Pi:(z_1,z_2)\to (z_1+z_2,z_1z_2)$ i.e., $$\begin{aligned} \mathbb{G}_{2}=\{(z_1+z_2,z_1z_2):|z_1|<1,|z_2|< 1\}.
\end{aligned}$$Since $\Pi^{-1}(\Pi(\overline{\mathbb{D}}^2))=\Pi^{-1}(\overline{\mathbb{G}}_2)=\overline{\mathbb{D}}^2,$ by using Result 11, we get that $\overline{\mathbb{G}}_2$ is polynomially convex. If $f:\mathbb{G}_2\to \mathbb{C}$ is a holomorphic function on $\mathbb{G}_2,$ then $f\circ\Pi:\mathbb{D}^2\to\mathbb{C}$ is a symmetric function on $\mathbb{D}^{2}.$ Therefore, if $\mathcal{A}(\overline{\mathbb{G}}_2)$ is the algebra of functions that are holomorphic on $\mathbb{G}_2$ and continuous on $\overline{\mathbb{G}}_2,$ then the distinguished boundary $\Gamma_{{\mathbb{G}}_{2}}$ of $\mathbb{G}_2$ is the image $\Pi(\mathbb{T}^2)$ of the torus $\mathbb{T}^2$ (the distinguished boundary of $\mathbb{D}^2$). Since $\mathbb{G}_2$ is neither convex (not even biholomorphic to any convex domain [@Costa2004]) nor smooth (not even a Lipschitz domain [@DebGorai2015]), many results in the theory of several complex variables do not apply to $\mathbb{G}_2.$ Several authors have studied this domain over the last three decades, and it has been shown to be a domain with a highly rich complex geometry and function theory: see, among many other articles, [@Trybula2015; @KosiZwo2016; @PflZwo2005; @JarPfl2004; @EdiZwo2005; @Costa2004; @AglrYung2004; @AglrYngLyk2019; @AYL2019; @TirPalRoy2012; @Sarkar2015]. There are significant similarities and contrasts between its geometry and function theory and those of the bidisc. Here we observe that Result 2 and Result 3 continue to hold if the bidisc is replaced by the symmetrized bidisc. More precisely: **Corollary 8**. *Let $h_{j}\in PH(\mathbb{G}_2)\cap\mathcal{C}^{1}(\overline{\mathbb{G}}_2)$ for $j=1,\cdots,N.$ Then either there exists a holomorphic disc in $\overline{\mathbb{G}}_2$ where all $h_{j}$'s are holomorphic, or $$[z_1,z_2,h_1,\cdots,h_{N};\overline{\mathbb{G}}_2]=\mathcal{C}(\overline{\mathbb{G}}_2).$$* **Corollary 9**.
*Let $f_{j}\in \mathcal{C}(\Gamma_{\mathbb{G}_2})$ for $j=1,\cdots,N,$ $N\ge 1,$ and assume that each $f_{j}$ extends to a pluriharmonic function on $\mathbb{G}_2.$ If $f_{j}$ is not holomorphic on any analytic disc present in the boundary $\partial \mathbb{G}_{2}$ for at least one $j$, then either $$\begin{aligned} [z_1,z_2,f_1,\cdots,f_{N};\Gamma_{\mathbb{G}_2}]=\mathcal{C}(\Gamma_{\mathbb{G}_2}), \end{aligned}$$ or there exists a distinguished variety $V$ in $\mathbb{G}_2$ such that the pluriharmonic extensions of the $f_{j}$'s are holomorphic on $V.$* *Remark 10*. In view of a result by Pal and Shalit [@PalShalit2014], we see that the variety that appears in Corollary 9 has the form of the zero set of a certain determinant. However, we do not know whether a similar type of determinant form can also be given for the distinguished varieties that appear in Theorem 7. # Technical Results {#S:technical} In this section, we provide some known results and some preliminary lemmas that will be utilized to prove our results. **Result 11** ([@Sto07]). *If $F:\mathbb{C}^n\to \mathbb{C}^n$ is a proper holomorphic map, and if $K\subset \mathbb{C}^n$ is a compact set, then the set $K$ is polynomially convex if and only if the set $F^{-1}(K)$ is polynomially convex, and $\mathcal{P}(K) =\mathcal{C}(K)$ if and only if $\mathcal{P}(F^{-1}(K)) =\mathcal{C}(F^{-1}(K)).$* **Result 12** (Remmert Proper Mapping theorem [@Rem56; @Rem57]). *Let $M, N$ be complex spaces, and let $f:M\to N$ be a proper holomorphic map. If $Z$ is an analytic subvariety in $M$ then $f(Z)$ is also an analytic subvariety in $N.$ Moreover, if $Z$ is irreducible then $f(Z)$ is also an irreducible subvariety of $N.$* The following result is from the book [@Chirka_book89 Page 29]. **Result 13**.
*(Chirka)[\[R:Image_AlgVariety\]]{#R:Image_AlgVariety label="R:Image_AlgVariety"} Let $\Omega_1\subset \mathbb{C}^p$ and $\Omega_2\subset \mathbb{C}^m$ be open subsets such that $\Omega=\Omega_1\times\Omega_2,$ $p+m=n,$ and let $\textit{proj}_{1}:(z,w)\to z$ denote the projection. Let $V$ be an analytic subset in $\Omega$ such that $\textit{proj}_{1}:V\to \Omega_1$ is a proper map. Then $\textit{proj}_{1}(V)$ is an analytic subset in $\Omega_1.$ Moreover, if $\Omega=\mathbb{C}^n,$ $\Omega_1=\mathbb{C}^p,$ and $V$ is an algebraic subset in $\mathbb{C}^n,$ then $\textit{proj}_{1}(V)$ is also an algebraic subset in $\mathbb{C}^p.$* The following lemma is well-known to experts. Since we have not found an explicit mention of this lemma in the literature, we decided to put it here for completeness. **Lemma 14**. *Let $\Psi:\mathbb{C}^n\to \mathbb{C}^n$ be a proper polynomial map. Let $Z$ be an algebraic variety in $\mathbb{C}^n,$ then $\Psi(Z)$ is also an algebraic variety in $\mathbb{C}^n.$* *Proof.* Consider the algebraic variety $V=\{(\Psi(z),z):z\in Z\}$ in $\mathbb{C}^n\times \mathbb{C}^n$ and $\Omega_1=\Omega_2=\mathbb{C}^n.$ We now show that $\textit{proj}_{1}:V\to \Omega_1$ is a proper map. Let $K\subset \mathbb{C}^n$ be a compact subset of $\mathbb{C}^n.$ Then $\textit{proj}_{1}^{-1}\{K\}=(K\times \mathbb{C}^n)\cap V=\{(\xi,\eta)\in K\times\mathbb{C}^n: (\xi,\eta)\in V\}=\{(\Psi(\eta),\eta)\in K\times\mathbb{C}^n:\eta\in Z\},$ which is compact since $\Psi$ is a proper map. Therefore, $\textit{proj}_{1}:V\to \Omega_1$ is a proper map. Hence, by Result 13, we conclude that $\textit{proj}_{1}(V)=\Psi(Z)$ is an algebraic variety. ◻ *Remark 15*. The case $\Psi=\Pi$ is available in [@PalShalit2014 Lemma 3.1]. Let $\Psi:\mathbb{C}^n\to \mathbb{C}^n$ be a proper holomorphic polynomial map.
Let $\Omega:=\Psi(\mathbb{D}^n)$ be a domain such that $\Psi^{-1}(\Psi(\mathbb{D}^n))\subset \mathbb{D}^n,$ $\Psi^{-1}(\Psi(\partial \mathbb{D}^n))\subset \partial\mathbb{D}^n,$ and $\Psi^{-1}(\Psi(\mathbb{T}^n))\subset \mathbb{T}^n.$ The following lemma illustrates that every distinguished variety in $\Omega$ can be derived from a distinguished variety in $\mathbb{D}^n$. **Lemma 16**. *Let $Z\subset \Omega$. Then $Z$ is a distinguished variety in $\Omega$ if and only if there is a distinguished variety $V$ in $\mathbb{D}^n$ such that $\Psi(V)=Z.$* *Proof.* Since $\Psi$ is a proper map, it is onto, and therefore $\Psi(\Psi^{-1}(Z))=Z$. Additionally, it can be easily demonstrated that $\Psi^{-1}(Z)$ is an algebraic variety. Let us define $V:=\Psi^{-1}(Z)$. Now, we need to prove the following: $V\cap \partial \mathbb{D}^n\subset V\cap\mathbb{T}^n$. Consider an element $\alpha\in V\cap \partial \mathbb{D}^n.$ This implies that $\alpha\in \Psi^{-1}(Z)\cap \partial\mathbb{D}^n.$ Hence, we have $\Psi(\alpha)\in Z\cap \Psi(\partial \mathbb{D}^n)$. Since $Z$ is a distinguished variety, we can conclude that $\Psi(\alpha)\in Z\cap \Psi(\mathbb{T}^n)$. Consequently, we can deduce that $\alpha$ lies in $\Psi^{-1}(Z\cap \Psi(\mathbb{T}^n))=\Psi^{-1}(Z)\cap \Psi^{-1}(\Psi(\mathbb{T}^n))$. By our assumption, together with this, we get that $V\cap \partial \mathbb{D}^n\subset V\cap\mathbb{T}^n$. Conversely, assume that $V$ is a distinguished variety in $\mathbb{D}^n$. By using Lemma 14, we can conclude that $\Psi(V)$ is an algebraic variety in $\Omega$. Now, we claim that $Z=\Psi(V)$ is a distinguished variety in $\Omega$. Suppose $\alpha\in Z\cap \Psi(\partial \mathbb{D}^n)=\Psi(V)\cap \Psi(\partial \mathbb{D}^n).$ We need to show that $\alpha$ also lies in $\Psi(\mathbb{T}^n)$.
Since $\alpha\in Z\cap \Psi(\partial \mathbb{D}^n)$, there exist $\eta_1\in V$ and $\eta_2\in \partial \mathbb{D}^n$ such that $\Psi(\eta_1)=\Psi(\eta_2)=\alpha$. Consequently, $\eta_1$ belongs to $\Psi^{-1}(\Psi(\partial\mathbb{D}^n))$, which is a subset of $\partial\mathbb{D}^n$. Thus, we have $\eta_1\in V\cap \partial\mathbb{D}^n$, and as a result, $\Psi(\eta_1)\in \Psi(V\cap \partial\mathbb{D}^n)$. Since $V$ is a distinguished variety, $V\cap \partial\mathbb{D}^n\subset V\cap\mathbb{T}^n$, and hence $\alpha=\Psi(\eta_1)$ lies in $\Psi(V\cap \mathbb{T}^n)$. ◻ *Remark 17*. The case $\Omega=\mathbb{G}_2$ is available in [@PalShalit2014 Lemma 3.1]. **Lemma 18**. *Let $g:G\subset\mathbb{C}^{N}\to\mathbb{C}^{N}$ be a proper holomorphic mapping and $q:g(G)\to\mathbb{C}$ be a continuous function. If $q\circ g:G\to \mathbb{C}$ is holomorphic, then $q$ is holomorphic.* *Proof.* Let us define $\Omega:=g(G).$ Since $g$ is proper holomorphic, $\Omega$ is open. First, we assume $z\in G$ and $\det d{g}(z)\ne 0,$ where $\det dg(z)$ is the determinant of the complex Jacobian matrix of $g$ at $z.$ Then there exists a neighborhood $V$ of $z$ and a neighborhood $W$ of $g(z)$ such that $g^{-1}:W\to V$ is holomorphic. Therefore, $q\circ g\circ g^{-1}=q$ is holomorphic at $g(z).$ Next, we define $X:=\{z\in G:\det d{g}(z)=0\}.$ Hence, $q$ is holomorphic on $\Omega\setminus g(X).$ Clearly, $X$ is an analytic variety with $\dim_{\mathbb{C}}X\le (N-1).$ Since $g$ is a proper holomorphic mapping, by Result 12, $g(X)$ is also an analytic variety in $\Omega.$ Since $q$ is continuous on $\Omega$ and holomorphic on $\Omega\setminus g(X),$ by Riemann's removable singularity theorem, we can say that $q$ is holomorphic on $\Omega.$ ◻ Let $\Psi:\mathbb{C}^n\to \mathbb{C}^n$ be a proper holomorphic map.
Let $\Omega:=\Psi(\mathbb{D}^n)$ be a domain such that $\Psi^{-1}(\Psi(\mathbb{D}^n))\subset \mathbb{D}^n,$ $\Psi^{-1}(\Psi(\partial \mathbb{D}^n))\subset \partial\mathbb{D}^n,$ and $\Psi^{-1}(\Psi(\mathbb{T}^n))\subset \mathbb{T}^n.$ We denote the distinguished boundary of $\Omega$ for the algebra $\mathcal{A}(\Omega)$ by $\Gamma_{\Omega}.$ Clearly, $\Gamma_{\Omega}$ is equal to $\Psi(\mathbb{T}^n).$ The following theorem might be of independent interest. We will use this in our proofs. **Theorem 19**. *Let $N\ge 1$ and $f_1,\cdots,f_{N}\in \mathcal{C}(\Gamma_{\Omega}).$ Then $[z_1,\cdots,z_n,f_{1},\cdots,f_N;\Gamma_{\Omega}]=\mathcal{C}(\Gamma_{\Omega})$ if and only if ${\sf Gr}_{f}(\Gamma_{\Omega})$ is polynomially convex, where $f=(f_1,\cdots,f_{N}).$* *Proof.* We denote $X:={\sf Gr}_{f}(\Gamma_{\Omega}).$ Since $[z_1,\cdots,z_n,f_{1},\cdots,f_N;\Gamma_{\Omega}]=\mathcal{C}(\Gamma_{\Omega})$ implies $\mathscr{P}(X)=\mathcal{C}(X),$ we get $\widehat{X}=X.$ Conversely, suppose that $\widehat{X}=X.$ We consider the proper holomorphic map $\Phi:\mathbb{C}_{z}^{n}\times\mathbb{C}_{w}^{N}\to \mathbb{C}_{z}^{n}\times\mathbb{C}_{w}^{N},$ defined by $$\begin{aligned} \Phi(z,w)=(\Psi(z),w). \end{aligned}$$ Clearly, $$\begin{aligned} \Phi^{-1}(X)={\sf Gr}_{f\circ \Psi}(\mathbb{T}^{n})=:Y. \end{aligned}$$ Since $X$ is polynomially convex, $Y$ is also polynomially convex (by Result 11).
Let $U$ be a neighborhood of $\mathbb{T}^{n}$ such that $z_{1}\not =0~~\text{ on } U.$ Define $g(z_1,z_2,\cdots,z_n)=\frac{1}{z_1}.$ Then $g$ is holomorphic on $U$ and, viewed as a function independent of $w,$ on $U\times \mathbb{C}^{N}.$ Since $Y\subset U\times\mathbb{C}^{N},$ by the *Oka-Weil* approximation theorem, there exists a sequence of polynomials $P_{j}$ on $\mathbb{C}^{n}_{z}\times\mathbb{C}_{w}^{N}$ such that $P_{j}(z,w)\to g$ uniformly on $Y.$ This implies $P_{j}(z,(f\circ \Psi)(z))\to g=\frac{1}{z_{1}}=\overline{z}_{1}$ uniformly on $\mathbb{T}^{n}.$ Hence $\overline{z}_{1}\in [z_1,\cdots,z_n,f_{1}\circ\Psi,\cdots,f_N\circ\Psi;\mathbb{T}^{n}].$ By a similar method we can show that $\overline{z}_j\in [z_1,\cdots,z_n,f_{1}\circ\Psi,\cdots,f_N\circ\Psi;\mathbb{T}^{n}]$ for all $j\in\{1,\cdots,n\}.$ Hence, $[z_1,\cdots,z_n,\overline{z}_{1},\cdots,\overline{z}_n;\mathbb{T}^n]\subset [z_1,\cdots,z_n,f_{1}\circ\Psi,\cdots,f_N\circ\Psi;\mathbb{T}^{n}].$ Therefore, $$\begin{aligned} \label{E:Approx_Torus} [z_1,\cdots,z_n,\overline{z}_{1},\cdots,\overline{z}_n;\mathbb{T}^{n}]=\mathcal{C}(\mathbb{T}^{n}) =[z_1,\cdots,z_n,f_{1}\circ\Psi,\cdots,f_N\circ\Psi;\mathbb{T}^{n}]. \end{aligned}$$ Note that $\mathscr{P}(X)=\mathcal{C}(X)$ if and only if $\mathscr{P}(\Phi^{-1}(X))=\mathcal{C}(\Phi^{-1}(X))$ (see Result 11), i.e., $\mathscr{P}(Y)=\mathcal{C}(Y).$ Therefore, using ([\[E:Approx_Torus\]](#E:Approx_Torus){reference-type="ref" reference="E:Approx_Torus"}), we get that $$\begin{aligned} [z_1,\cdots,z_n,f_{1},\cdots,f_N;\Gamma_{\Omega}]=\mathcal{C}(\Gamma_{\Omega}). \end{aligned}$$ ◻ **Corollary 20**. *Let $N\ge 1,$ and $f_1,\cdots,f_{N}\in \mathcal{C}(\Gamma_{\mathbb{G}_{n}}).$ Then $[z_1,\cdots,z_n,f_{1},\cdots,f_N;\Gamma_{\mathbb{G}_{n}}]=\mathcal{C}(\Gamma_{\mathbb{G}_{n}})$ if and only if ${\sf Gr}_{f}(\Gamma_{\mathbb{G}_{n}})$ is polynomially convex, where $f=(f_1,\cdots,f_{N}).$* In [@Jimbo03; @Jimbo05], Jimbo explored the structure of polynomial hulls concerning graphs of antiholomorphic polynomials on the torus.
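The starting point of Jimbo's description, recalled below, is the identity $\overline{P}=K/(z_1^m z_2^n)$ on the torus, where $K$ is obtained by conjugating and reflecting the coefficients of $P$. A quick numerical sanity check of this identity; the sample polynomial here is a random choice of ours, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 2, 3
# Random coefficient matrix A[i, j] = a_ij of a polynomial of bidegree (m, n).
A = rng.normal(size=(m + 1, n + 1)) + 1j * rng.normal(size=(m + 1, n + 1))

def P(z1, z2):
    return sum(A[i, j] * z1**i * z2**j
               for i in range(m + 1) for j in range(n + 1))

def K(z1, z2):
    # K(z1, z2) = sum conj(a_ij) z1^(m-i) z2^(n-j)
    return sum(np.conj(A[i, j]) * z1**(m - i) * z2**(n - j)
               for i in range(m + 1) for j in range(n + 1))

# On |z1| = |z2| = 1 we have conj(z) = 1/z, hence conj(P) = K / (z1^m z2^n).
t = rng.uniform(0, 2 * np.pi, size=(2, 50))
z1, z2 = np.exp(1j * t[0]), np.exp(1j * t[1])
assert np.allclose(np.conj(P(z1, z2)), K(z1, z2) / (z1**m * z2**n))
```

Off the torus the two sides differ, which is exactly why the set $X$ in Jimbo's construction, where $\overline{P}=h$ continues to hold, is a thin set.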
For the sake of completeness, we include Jimbo's result from [@Jimbo05] here since we will use it multiple times in this paper. Let $\mathbb{T}^2$ be the torus in $\mathbb{C}^2$ and $P$ be an arbitrary polynomial in $\mathbb{C}^{2}.$ In [@Jimbo05], Jimbo gave a description for $\widehat{{\sf Gr}_{\overline{P}}(\mathbb{T}^2)}.$ Let the polynomial $P(z_1,z_2)$ be of degree $m$ in $z_1$ and of degree $n$ in $z_2.$ We write $$\begin{aligned} P(z_1,z_2)= \sum_{\substack{0\le i\le m\\ 0\le j\le n \\ }} a_{ij}z_1^{i}z_2^{j}. \end{aligned}$$ Therefore, on $\mathbb{T}^2,$ we have $$\begin{aligned} \overline{P(z_1,z_2)}& =\frac{1}{z^{m}_{1}z^{n}_{2}} \sum_{\substack{0\le i\le m\\ 0\le j\le n \\ }} \overline{a}_{ij}z_{1}^{m-i}{z_2}^{n-j}\\ &=\frac{K(z_1,z_2)}{z^{m}_{1}z^{n}_{2}}=h(z_1,z_2),\text{ where } K(z_1,z_2)=\sum_{\substack{0\le i\le m\\ 0\le j\le n \\ }} \overline{a}_{ij}z_1^{m-i}z_2^{n-j}. \end{aligned}$$ Hence on $\mathbb{T}^2,$ we get that $$\begin{aligned} \overline{P(z_1,z_2)}=h(z_1,z_2),\text{ where } h(z_1,z_2)=\frac{K(z_1,z_2)}{z_1^m z_2^n}. \end{aligned}$$ We define $L:=\{z_1=0,|z_2|\le 1\}\cup\{z_2=0,|z_1|\le 1\}$ and $$\begin{aligned} \label{E:hull_interior} X=\left\{(z_1,z_2)\in \overline{\mathbb{D}}^{2}\setminus (L\cup \mathbb{T}^2): \overline{P(z_1,z_2)}=h(z_1,z_2)\right\}. \end{aligned}$$ We set $$\begin{aligned} \triangle({z}):= \begin{vmatrix} \frac{\partial P(z)}{\partial{z_1}}& \frac{\partial P(z)}{\partial z_2}\\[1.5ex] \frac{\partial h(z)}{\partial{{z_1}}}& \frac{\partial h(z)}{\partial{z_2}}\\[1.5ex] \end{vmatrix}.
\end{aligned}$$ We can write $$\begin{aligned} \triangle(z) =\frac{1}{z^{m+1}_{1}z^{n+1}_{2}}\prod^{l}_{j=1}q_{j}(z), \end{aligned}$$ where each $q_{j}$ is an irreducible polynomial in $\mathbb{C}^2.$ We define the corresponding irreducible algebraic variety $Z_{j}:=Z(q_{j})=\{z\in \mathbb{C}^2:q_{j}(z)=0\}.$ We assume $\triangle({z})\not\equiv 0$ on $X.$ Therefore, each $q_{j}$ is a non-zero holomorphic polynomial in $\mathbb{C}^2.$ We denote $Q_{j}=Z_{j}\cap \mathbb{T}^2.$ **Result 21** (Jimbo). *We let $J=\{j\in \{1,\cdots,l\}: \emptyset\ne Q_{j}\ne \widehat{Q_{j}}, \widehat{Q_{j}}\setminus L\subset X\}.$* 1. *If $J=\emptyset,$ then $\widehat{{\sf Gr}_{\overline{P}}(\mathbb{T}^2)}={\sf Gr}_{\overline{P}}(\mathbb{T}^2),$ and $[z_1,z_2,\overline{P};\mathbb{T}^2]=\mathcal{C}(\mathbb{T}^2);$* 2. *If $J\ne \emptyset,$ then* *$$\begin{aligned} \widehat{{\sf Gr}_{\overline{P}}( \mathbb{T}^2)}={\sf Gr}_{\overline{P}}(\mathbb{T}^2)\cup\bigg(\cup_{j\in J} {\sf Gr}_{\overline{P}}(\widehat{Q_{j}})\bigg). 
\end{aligned}$$* # Proof of Note that the map $\phi:\mathbb{C}^2\to \mathbb{C}^2$ is defined as $\phi(z)=(p_{1}(z),p_{2}(z)).$ We consider the proper holomorphic map $\widetilde{\Psi}:\mathbb{C}^{2+N}\to \mathbb{C}^{2+N},$ defined as follows: $$\begin{aligned} \label{E:GenProperMap} \widetilde{\Psi}(z_1,z_2,w_1,\cdots,w_{N})=\left(\phi(z_1,z_2),w_1,\cdots,w_N\right), \end{aligned}$$ where $~ (z_1,z_2)\in\mathbb{C}^2,$ and $(w_1,\cdots,w_N)\in\mathbb{C}^N.$ Recall that $\Omega=\phi(\mathbb{D}^2)$ and $\Gamma_{\Omega}=\phi(\mathbb{T}^2).$\ * Proof of .* We claim that $\widetilde{\Psi}^{-1}({\sf Gr}_{h}(\overline{\Omega}))={\sf Gr}_{h\circ\phi}(\mathbb{\overline{D}}^2)$: let $$\begin{aligned} (\alpha,\beta)\in \widetilde{\Psi}^{-1}({\sf Gr}_{h}(\overline{\Omega}))& \implies \widetilde{\Psi}(\alpha,\beta)\in {\sf Gr}_{h}(\overline{\Omega})\\ & \implies (\phi(\alpha),\beta)\in {\sf Gr}_{h}(\overline{\Omega})\\ & \implies \beta=h(\phi(\alpha)) \text{ and } \phi(\alpha)\in \overline{\Omega}. \end{aligned}$$ Now $$\begin{aligned} \phi(\alpha)\in \overline{\Omega} & \implies \alpha \in \phi^{-1}(\phi(\alpha))\subset \phi^{-1}( \overline{\Omega})\subset \overline{\mathbb{D}}^2. \end{aligned}$$ Therefore $\widetilde{\Psi}^{-1}({\sf Gr}_{h}(\overline{\Omega}))\subset {\sf Gr}_{h\circ\phi}(\mathbb{\overline{D}}^2).$\ Conversely, let $$\begin{aligned} (p,q)\in {\sf Gr}_{h\circ \phi}(\overline{\mathbb{D}}^2) &\implies q=(h\circ \phi)(p) \text{ and } p\in \overline{\mathbb{D}}^2\\ &\implies q=h(\phi(p)) \text{ and } \phi(p)\in \overline{\Omega}\\ &\implies (\phi(p),q)\in {\sf Gr}_{h}(\overline{\Omega})\\ &\implies \widetilde{\Psi}(p,q)\in {\sf Gr}_{h}(\overline{\Omega})\\ &\implies (p,q)\in \widetilde{\Psi}^{-1}\left({\sf Gr}_{h}(\overline{\Omega})\right).
\end{aligned}$$ Hence ${\sf Gr}_{h\circ\phi}(\mathbb{\overline{D}}^2)\subset\widetilde{\Psi}^{-1}({\sf Gr}_{h}(\overline{\Omega})).$ Therefore, $\widetilde{\Psi}^{-1}({\sf Gr}_{h}(\overline{\Omega}))={\sf Gr}_{h\circ\phi}(\mathbb{\overline{D}}^2).$ Since $\widetilde{\Psi}$ is a proper holomorphic mapping and $\widetilde{\Psi}^{-1}({\sf Gr}_{h}(\overline{\Omega}))={\sf Gr}_{h\circ\phi}(\mathbb{\overline{D}}^2),$ by , we can say that $\mathscr{P}\left({\sf Gr}_{h}(\overline{\Omega})\right)=\mathcal{C}\left({\sf Gr}_{h}(\overline{\Omega})\right)$ if and only if $\mathscr{P}\left({\sf Gr}_{h\circ\phi}(\overline{\mathbb{D}}^2)\right)=\mathcal{C}\left({\sf Gr}_{h\circ\phi}(\overline{\mathbb{D}}^2)\right).$ We note that $h\circ \phi$ is pluriharmonic on $\mathbb{D}^2$ and continuous on $\overline{\mathbb{D}}^2.$ Therefore, two cases arise. **Case I:** $\mathscr{P}\left({\sf Gr}_{h\circ\phi}(\overline{\mathbb{D}}^2)\right)=\mathcal{C}\left({\sf Gr}_{h\circ\phi}(\overline{\mathbb{D}}^2)\right).$ In this case we have $\mathscr{P}\left({\sf Gr}_{h}(\overline{\Omega})\right)=\mathcal{C}\left({\sf Gr}_{h}(\overline{\Omega})\right).$\ **Case II:** $\mathscr{P}\left({\sf Gr}_{h\circ\phi}(\overline{\mathbb{D}}^2)\right)\ne\mathcal{C}\left({\sf Gr}_{h\circ\phi}(\overline{\mathbb{D}}^2)\right).$ Then, by , there exists an analytic disc $g:\mathbb{D}\hookrightarrow{} \overline{\mathbb{D}}^2$ such that $(h_{j}\circ\phi)\circ g$ is holomorphic on $\mathbb{D}$ for all $j=1,\cdots,N.$ If we take $\gamma:=\phi\circ g,$ then clearly $\gamma:\mathbb{D}\hookrightarrow{}\overline{\Omega}$ is an analytic disc in $\overline{\Omega}$ such that $h_j$ is holomorphic on $\gamma(\mathbb{D})$ (by ) for all $j=1,\cdots,N.$ This proves the theorem.
◻ *Proof of .* Let $h_{j}$ denote the pluriharmonic extension of $f_{j}$ to $\Omega$ and write $h=(h_1,\cdots,h_{N}):\overline{\Omega}\to \mathbb{C}^{N}.$ The map $\widetilde{\Psi}$ is a proper holomorphic mapping and $\widetilde{\Psi}^{-1}({\sf Gr}_{h}(\Gamma_{\Omega}))={\sf Gr}_{h\circ\phi}(\mathbb{T}^2).$ Therefore, by , ${\sf Gr}_{h}(\Gamma_{\Omega})$ is polynomially convex if and only if ${\sf Gr}_{h\circ\phi}(\mathbb{T}^2)$ is polynomially convex. We note that $h\circ \phi$ is pluriharmonic on $\mathbb{D}^2$ and continuous on $\overline{\mathbb{D}}^2.$ Therefore, two cases arise. **Case I:** ${\sf Gr}_{h}(\Gamma_{\Omega})$ is polynomially convex. In view of , we have $$[z_1,z_2,f_{1},\cdots,f_N;\Gamma_{\Omega}]=\mathcal{C}(\Gamma_{\Omega}).$$ **Case II:** ${\sf Gr}_{h}(\Gamma_{\Omega})$ is not polynomially convex. Consequently, ${\sf Gr}_{h\circ\phi}(\mathbb{T}^2)$ is not polynomially convex. Therefore, by , there exists a distinguished variety $Z\subset\mathbb{D}^2$ on which $h_{j}\circ\phi$ is holomorphic for all $j=1,\cdots,N.$ Since $\phi$ is a proper holomorphic mapping, by , $\phi(Z)$ is also an algebraic variety. Since $\phi$ is proper holomorphic and $h_{j}\circ\phi$ is holomorphic on $Z,$ the function $h_{j}$ is also holomorphic on $\phi(Z)$ (by ). Since $\phi$ sends distinguished varieties of $\mathbb{D}^2$ to distinguished varieties of $\Omega$ (), we have $\phi(Z)\cap b\Omega\subset \Gamma_{\Omega}.$ ◻ # Description of Polynomial Hull In this section, we provide a description of the polynomial convex hull of the graph of an anti-holomorphic polynomial over the distinguished boundary of the domain $\Omega,$ where $\Omega$ is the image of the bidisc under a certain proper polynomial map from $\mathbb{C}^2$ to $\mathbb{C}^2.$ Let $f=(f_1,f_2,\cdots,f_{n}):\mathbb{C}^n\to \mathbb{C}^n$ be a proper map.
Let $$J_{f}(z)= \begin{vmatrix} \frac{\partial f_{1}}{\partial{z_1}}(z) & \frac{\partial f_{1}}{\partial{z_2}}(z)&\cdots&\frac{\partial f_{1}}{\partial{z_n}}(z)\\[1.5ex] \vdots&\vdots&\cdots&\vdots\\[1.5ex] \frac{\partial f_{n}}{\partial{z_1}}(z) & \frac{\partial f_{n}}{\partial{z_2}}(z)&\cdots&\frac{\partial f_{n}}{\partial{z_n}}(z)\\[1.5ex] \end{vmatrix}.$$ The *critical locus* of $f$ is the complex analytic variety $Z(J_{f})=\{z\in \mathbb{C}^n:J_{f}(z)=0\}\subset \mathbb{C}^n.$ The *branch locus* $B(f)$ of $f$ is the image of the critical locus. Since $f$ is proper, $$\begin{aligned} f:\mathbb{C}^n\setminus f^{-1}(B(f))\to \mathbb{C}^n\setminus B(f) \end{aligned}$$ is a covering map of finite degree $d;$ $d$ is said to be the *topological degree* of $f.$ **Definition 22**. Two proper maps $\phi,\tilde{\phi}:\mathbb{C}^{2}\to \mathbb{C}^2$ are said to be *equivalent* if there exist $f,g\in \text{Aut}(\mathbb{C}^2)$ such that $\phi=f\circ\tilde{\phi}\circ g.$ Consider two holomorphic polynomials, $p_1$ and $p_2$, defined in $\mathbb{C}^2$. Let $\phi(z) = (p_1(z), p_2(z))$ represent a proper holomorphic mapping from $\mathbb{C}^2$ to $\mathbb{C}^2$, equivalent to $\tilde{\phi}(z_1, z_2) = (z_1^m, z_2^n)$ for some natural numbers $m$ and $n$. There is a characterization due to Lamy [@Lamy05] (see also Bisi and Polizzi [@BisiPolizzi2010]) for $m=1$ and $n=2$ as follows: a proper polynomial map $f:\mathbb{C}^2\to \mathbb{C}^2$ with a topological degree of 2 is equivalent to $g(z_1,z_2)=(z_1,z^{2}_2).$ Let $P(z_1, z_2)$ be any polynomial in $\mathbb{C}^2$. We aim to calculate $\widehat{{\sf Gr}_{\overline{P}}(\Gamma_{\Omega})}$. It is evident that $\widetilde{\Psi}^{-1}({\sf Gr}_{\overline{P}}(\Gamma_{\Omega})) = {\sf Gr}_{\overline{P}\circ\phi}(\mathbb{T}^2) = {\sf Gr}_{\overline{P\circ\phi}}(\mathbb{T}^2)$ ($\widetilde{\Psi}$ is given by ([\[E:GenProperMap\]](#E:GenProperMap){reference-type="ref" reference="E:GenProperMap"})).
Consequently, ${\sf Gr}_{\overline{P}}(\Gamma_{\Omega}) = \widetilde{\Psi}\left({\sf Gr}_{\overline{P\circ\phi}}(\mathbb{T}^2)\right)$. In this scenario, the following result holds. **Lemma 23**. *$\widehat{\widetilde{\Psi}(Y)}=\widetilde{\Psi}\left(\widehat{Y}\right),$ where $Y={\sf Gr}_{\overline{P\circ\phi}}(\mathbb{{T}}^2).$* *Proof.* Since $\widetilde{\Psi}$ is a proper holomorphic map, by using , we have that $\widetilde{\Psi}^{-1}\left(\widehat{\widetilde{\Psi}(Y)}\right)$ is polynomially convex. Therefore $$\begin{aligned} \widehat{Y}\subset \widehat{\widetilde{\Psi}^{-1}\left(\widehat{\widetilde{\Psi}(Y)}\right)}\subset \widetilde{\Psi}^{-1}\left(\widehat{\widetilde{\Psi}(Y)}\right). \end{aligned}$$ This implies $\widetilde{\Psi}(\widehat{Y})\subset \widehat{\widetilde{\Psi}(Y)}.$ Next, we show that $\widetilde{\Psi}^{-1}\left(\widetilde{{\Psi}}(\widehat{Y})\right)\subset \widehat{Y}.$ To prove this, let $(\alpha_1,\alpha_2,\beta)\in \widetilde{\Psi}^{-1}\left(\widetilde{{\Psi}}(\widehat{Y})\right).$ Then there exists $(\xi_1,\xi_2,\eta)\in \widehat{Y}$ such that $\widetilde{\Psi}(\alpha_1,\alpha_2,\beta)=\widetilde{\Psi}(\xi_1,\xi_2,\eta).$ This implies $\phi(\alpha_1,\alpha_2)=\phi(\xi_1,\xi_2)$ and $\beta=\eta.$ Since $\phi$ is a proper polynomial map equivalent to $\tilde{\phi}(z_1,z_2)=(z^m_1,z^n_{2}),$ there exist $f,g\in \text{Aut}(\mathbb{C}^2)$ such that $\phi=f\circ \tilde{\phi} \circ g.$ Then $$\begin{aligned} &\phi(\alpha_1,\alpha_2)=\phi(\xi_1,\xi_2)\\ \implies &(f\circ \tilde{\phi} \circ g)(\alpha_1,\alpha_2)=(f\circ \tilde{\phi} \circ g)(\xi_1,\xi_2)\\ \implies &(\tilde{\phi} \circ g)(\alpha_1,\alpha_2)=(\tilde{\phi} \circ g)(\xi_1,\xi_2)\\ \implies& g^{m}_{1}(\alpha_1,\alpha_2)=g^{m}_{1}(\xi_1,\xi_2) \text{ and } g^{n}_2(\alpha_1,\alpha_2)=g^{n}_2(\xi_1,\xi_2), \text{ where } g=(g_1,g_2).\\ \implies & (\alpha_1,\alpha_2)=g^{-1}\left(\lambda^{k}_{m}g_1(\xi_1,\xi_2),\lambda^{r}_{n}g_2(\xi_1,\xi_2)\right)=(a_{k},b_{r}), \end{aligned}$$ where
$\lambda_{l}=\cos \frac{2\pi}{l}+i\sin\frac{2\pi}{l},$ $k\in\{0,\cdots,m-1\}$ and $r\in \{0,\cdots,n-1\}.$ It remains to show that $(a_{k},b_{r},\eta)\in \widehat{Y}.$ If possible, assume that $(a_{k},b_{r},\eta)\notin \widehat{Y}$ for some $k\in \{0,\cdots,m-1\},r\in \{0,\cdots,n-1\}.$ Then there exists a polynomial $\chi$ in $\mathbb{C}^2_{z}\times\mathbb{C}_{w}$ such that $$\begin{aligned} \label{E:Other_pt_InHull} |\chi(a_{k},b_{r},\eta)|>\sup_{Y}|\chi(z,w)|. \end{aligned}$$ Let us define $F(z_1,z_2):=(\lambda^{k}_{m}z_1,\lambda^{r}_{n}z_{2}),$ and $\tilde{F}(z_1,z_2,w):=((g^{-1}\circ F\circ g)(z),w).$ Since $\phi^{-1}(\phi (\mathbb{T}^2))\subset \mathbb{T}^2$ (hence $(g^{-1}\circ F\circ g)(z)\in \mathbb{T}^2$ if $z\in \mathbb{T}^2$), using ([\[E:Other_pt_InHull\]](#E:Other_pt_InHull){reference-type="ref" reference="E:Other_pt_InHull"}), we get that $$\begin{aligned} \label{E:pt_InHull} |(\chi\circ \tilde{F})(\xi,\eta)|>\sup_{Y}|(\chi\circ \tilde{F})(z,w)|. \end{aligned}$$ Since $\tilde{F}\in \text{Aut}(\mathbb{C}^{3}),$ ([\[E:pt_InHull\]](#E:pt_InHull){reference-type="ref" reference="E:pt_InHull"}) says that $(\xi,\eta)\notin \widehat{Y},$ and this is a contradiction. Hence $(a_{k},b_{r},\eta)\in \widehat{Y}.$ Therefore, $\widetilde{\Psi}^{-1}\left(\widetilde{\Psi}(\widehat{Y})\right)=\widehat{Y}.$ Since $\widetilde{\Psi}$ is a proper holomorphic map, by using , we can say that $\widetilde{\Psi}(\widehat{Y})$ is polynomially convex. Therefore, $\widehat{\widetilde{\Psi}(Y)}\subset \widetilde{\Psi}(\widehat{Y}).$ This proves the lemma. ◻ By using , we can say that $$\begin{aligned} \widehat{{\sf Gr}_{\overline{P}}(\Gamma_{\Omega})}=\widetilde{\Psi}\left(\widehat{{\sf Gr}_{\overline{P\circ\phi}}(\mathbb{{T}}^2)}\right).
\end{aligned}$$ Therefore, to give a description for $\widehat{{\sf Gr}_{\overline{P}}(\Gamma_{\Omega})},$ it is enough to compute $\widehat{{\sf Gr}_{\overline{P\circ\phi}}(\mathbb{{T}}^2)}.$ ## Description of Hull on Symmetrized Bidisc Let $P(z_1,z_2)$ be any polynomial in $\mathbb{C}^{2}.$ By , we calculate $\widehat{{\sf Gr}_{\overline{P}}(\Gamma_{{\mathbb{G}}_{2}})}.$ If we take $p_{1}(z)=z_1+z_2$ and $p_{2}(z)=z_1z_2,$ then $\phi=\Pi$ and $\widetilde{\Psi}(z,w)=(\Pi(z),w)$ is a proper map from $\mathbb{C}^3$ to $\mathbb{C}^3.$ It is easy to show that $\Pi$ is a proper polynomial map of topological degree $2,$ and hence equivalent to $(z_1,z^2_{2}).$ Clearly, $\widetilde{\Psi}^{-1}({\sf Gr}_{\overline{P}}({\Gamma}_{\mathbb{G}_2}))={\sf Gr}_{\overline{P}\circ\Pi}(\mathbb{{T}}^2)={\sf Gr}_{\overline{P\circ\Pi}}(\mathbb{{T}}^2).$ Therefore, ${\sf Gr}_{\overline{P}}({\Gamma}_{\mathbb{G}_2})=\widetilde{\Psi}\left({\sf Gr}_{\overline{P\circ\Pi}}(\mathbb{{T}}^2)\right).$ By , we get the following. **Lemma 24**. *$\widehat{\widetilde{\Psi}\left(Y\right)}=\widetilde{\Psi}\left(\widehat{Y}\right),$ where $Y={\sf Gr}_{\overline{P\circ\Pi}}(\mathbb{{T}}^2).$* By using , we can say that $$\begin{aligned} \widehat{{\sf Gr}_{\overline{P}}(\Gamma_{\mathbb{G}_2})}=\widetilde{\Psi}\left(\widehat{{\sf Gr}_{\overline{P\circ\Pi}}(\mathbb{{T}}^2)}\right). \end{aligned}$$ Therefore, to give a description for $\widehat{{\sf Gr}_{\overline{P}}(\Gamma_{\mathbb{G}_2})},$ it is enough to compute $\widehat{{\sf Gr}_{\overline{P\circ\Pi}}(\mathbb{{T}}^2)}.$ # Examples **Example 25**. Let $P(z_1,z_2)=z_1-z_2.$ Then $[z_1,z_2,\overline{P};\Gamma_{\mathbb{G}_{2}}]\ne \mathcal{C}(\Gamma_{\mathbb{G}_{2}}).$ In view of , to demonstrate that $[z_1,z_2,\overline{P};\Gamma_{\mathbb{G}_{2}}]\ne \mathcal{C}(\Gamma_{\mathbb{G}_{2}}),$ it suffices to establish that the graph of $\overline{P}$ over $\Gamma_{\mathbb{G}_{2}}$ is not polynomially convex.
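The assertion above that $\Pi$ has topological degree $2$ can be illustrated numerically: away from the branch locus $\{z_1=z_2\}$, the fibre of $(s,p)$ under $\Pi$ consists of the two orderings of the roots of $t^2-st+p=0$. A small sketch (not part of the argument):

```python
import cmath

def Pi(z1, z2):
    """Symmetrization map: (z1, z2) -> (z1 + z2, z1 * z2)."""
    return (z1 + z2, z1 * z2)

def fibre(s, p):
    """Preimage of (s, p) under Pi: the roots of t^2 - s t + p = 0, in both orders."""
    d = cmath.sqrt(s * s - 4 * p)
    r1, r2 = (s + d) / 2, (s - d) / 2
    return [(r1, r2), (r2, r1)]

z1, z2 = 0.3 + 0.4j, -0.7 + 0.2j
s, p = Pi(z1, z2)
assert Pi(z2, z1) == (s, p)          # Pi is symmetric in (z1, z2)
for w1, w2 in fibre(s, p):           # each fibre point really maps to (s, p)
    ws, wp = Pi(w1, w2)
    assert abs(ws - s) < 1e-9 and abs(wp - p) < 1e-9
# Away from the branch locus {z1 = z2}, the fibre has exactly two points,
# matching topological degree 2.
assert len(set(fibre(s, p))) == 2
```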
To achieve this, it is sufficient to show that the graph of $\overline{P\circ \Pi}$ over $\mathbb{T}^{2}$ lacks polynomial convexity. Following the notation in , we define $h(z)=\frac{1}{z_1}+\frac{1}{z_2}-\frac{1}{z_1z_2}.$ Then $$\begin{aligned} \triangle({z})= \begin{vmatrix} \frac{\partial (P\circ\Pi)}{\partial{z_1}}& \frac{\partial (P\circ\Pi)}{\partial z_2}\\[1.5ex] \frac{\partial h}{\partial{{z_1}}}& \frac{\partial h}{\partial{z_2}}\\[1.5ex] \end{vmatrix} = \begin{vmatrix} 1-z_2& 1-z_1\\[1.5ex] \frac{-1}{z^2_1}+\frac{1}{z^2_1z_2}& \frac{-1}{z^2_2}+\frac{1}{z^2_2z_1}\\[1.5ex] \end{vmatrix} &=\frac{1}{z^2_{1}z^{2}_{2}}(z_1-z_2)(z_1-1)(z_2-1). \end{aligned}$$ We define $q_{1}:=z_1-1,~q_{2}:=z_2-1,~q_{3}:=z_{1}-z_{2},$ and $Z_{j}=\{z\in \mathbb{C}^{2}:q_{j}(z)=0\},j=1,2,3.$ Therefore, $$\begin{aligned} \Sigma&=\left\{z\in \overline{\mathbb{D}}^{2}\setminus (L\cup \mathbb{T}^2): \triangle(z)=0\right\}\\ &=\left\{z\in \overline{\mathbb{D}}^{2}\setminus (L\cup \mathbb{T}^2)\right\}\cap[\cup^{3}_{j=1} Z_{j}], \end{aligned}$$ and $$\begin{aligned} X&=\left\{z\in \overline{\mathbb{D}}^{2}\setminus (L\cup \mathbb{T}^2): \overline{(P\circ \Pi)(z)}=h(z)\right\}\\ &=\left\{z\in \overline{\mathbb{D}}^{2}\setminus (L\cup \mathbb{T}^2): \overline{z_1+z_2-z_1z_2}=\frac{1}{z_1}+\frac{1}{z_2}-\frac{1}{z_1z_2}\right\}. \end{aligned}$$ Here $Q_{j}=Z_{j}\cap \mathbb{{T}}^2.$ Clearly, $$\begin{aligned} \widehat{Q_1}&=\{z\in \mathbb{C}^2:z_1=1,|z_2|\le 1\}\ne Q_1;\\ \widehat{Q_2}&=\{z\in \mathbb{C}^2:z_2=1,|z_1|\le 1\}\ne Q_2;\\ \widehat{Q_3}&=\{z\in \mathbb{C}^2:z_1=z_2,|z_1|\le 1\}\ne Q_3.
\end{aligned}$$ It is evident that $\widehat{Q_{j}}\setminus (\mathbb{T}^2\cup L)\subset X$ holds true only for $j=1,2.$ On the other hand, we note that $(\frac{1}{2},\frac{1}{2})\in \widehat{Q_{3}}\setminus (\mathbb{T}^2\cup L),$ yet $(\frac{1}{2},\frac{1}{2})\notin X.$ Therefore, by , we deduce that: $$\begin{aligned} \widehat{{\sf Gr}_{\overline{P\circ \Pi}}(\mathbb{T}^2)}={\sf Gr}_{\overline{P\circ \Pi}}(\mathbb{T}^2)\cup {\sf Gr}_{\overline{P\circ \Pi}}(\widehat{Q_{1}})\cup {\sf Gr}_{\overline{P\circ \Pi}}(\widehat{Q_{2}}). \end{aligned}$$ Hence $$\begin{aligned} \widehat{{\sf Gr}_{\overline{P}}(\Gamma_{\mathbb{G}_{2}})}&=\widetilde{\Psi}\left({\sf Gr}_{\overline{P\circ \Pi}}(\mathbb{T}^2) \right) \cup \widetilde{\Psi}\left({\sf Gr}_{\overline{P\circ \Pi}}(\widehat{Q_{1}})\right)\cup \widetilde{\Psi}\left( {\sf Gr}_{\overline{P\circ \Pi}}(\widehat{Q_{2}})\right)\\ &= {\sf Gr}_{\overline{P}}(\Gamma_{\mathbb{G}_{2}})\cup \{(1+z,z,w):w=\overline{P(1+z,z)}, z\in \overline{\mathbb{D}}\}\\ &\cup \{(1+z,z,w):w=\overline{P(1+z,z)}, z\in \overline{\mathbb{D}}\}\\ &= {\sf Gr}_{\overline{P}}(\Gamma_{\mathbb{G}_{2}})\cup \{(1+z,z,w):w=\overline{P(1+z,z)}, z\in \overline{\mathbb{D}}\}\\ &= {\sf Gr}_{\overline{P}}(\Gamma_{\mathbb{G}_{2}})\cup \{(1+z,z,1):z\in \overline{\mathbb{D}}\}. \end{aligned}$$ **Example 26**. Let $P(z_1,z_2)=z_1-2z_{2}.$ Then $[z_1,z_2,\overline{P};\Gamma_{\mathbb{G}_{2}}]= \mathcal{C}(\Gamma_{\mathbb{G}_{2}}).$ In light of , in order to establish that $[z_1,z_2,\overline{P};\Gamma_{\mathbb{G}_{2}}]= \mathcal{C}(\Gamma_{\mathbb{G}_{2}}),$ it is sufficient to demonstrate the polynomial convexity of the graph of $\overline{P}$ over $\Gamma_{\mathbb{G}_{2}}$. To accomplish this, it is enough to prove that the graph of $\overline{P\circ \Pi}$ over $\mathbb{T}^2$ is polynomially convex.
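The inclusions used in Example 25 can be sanity-checked numerically: the defining equation of $X$ holds identically on $\widehat{Q_1}$ and $\widehat{Q_2}$, while the witness point $(\frac{1}{2},\frac{1}{2})\in\widehat{Q_3}$ fails it. A small illustrative script (a check of the algebra only, not of the polynomial convexity argument):

```python
import cmath, random

def PoPi(z1, z2):
    # (P ∘ Π)(z) = z1 + z2 - z1*z2, with P = z1 - z2 as in Example 25
    return z1 + z2 - z1 * z2

def h(z1, z2):
    # h(z) = 1/z1 + 1/z2 - 1/(z1*z2)
    return 1 / z1 + 1 / z2 - 1 / (z1 * z2)

def in_X(z1, z2, tol=1e-10):
    """Does z satisfy the defining equation conj(P∘Π) = h of the set X?"""
    return abs(PoPi(z1, z2).conjugate() - h(z1, z2)) < tol

for _ in range(100):
    w = random.uniform(0.1, 0.9) * cmath.exp(1j * random.uniform(0, 2 * cmath.pi))
    assert in_X(1, w)       # points of Q1-hat (z1 = 1) satisfy the equation,
    assert in_X(w, 1)       # as do points of Q2-hat (z2 = 1),
assert not in_X(0.5, 0.5)   # but the witness (1/2, 1/2) in Q3-hat does not.
```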
Following the notation in , we have $h(z)=\frac{1}{z_1}+\frac{1}{z_2}-\frac{2}{z_1z_2}.$ $$\begin{aligned} \triangle({z})=& \begin{vmatrix} \frac{\partial (P\circ\Pi)}{\partial{z_1}}& \frac{\partial (P\circ\Pi)}{\partial z_2}\\[1.5ex] \frac{\partial h}{\partial{{z_1}}}& \frac{\partial h}{\partial{z_2}}\\[1.5ex] \end{vmatrix} = \begin{vmatrix} 1-2z_2& 1-2z_1\\[1.5ex] \frac{-1}{z^2_1}+\frac{2}{z^2_1z_2}& \frac{-1}{z^2_2}+\frac{2}{z^2_2z_1}\\[1.5ex] \end{vmatrix}\\ =&\frac{1}{z^2_{1}z^{2}_{2}}(z_1+z_2-2-2z_1z_2)(z_2-z_1). \end{aligned}$$ We define $q_{1}:=z_1+z_2-2-2z_1z_2,~q_{2}:=z_{2}-z_{1},$ and $Z_{j}=\{z\in \mathbb{C}^{2}:q_{j}(z)=0\},j=1,2.$ Therefore, $$\begin{aligned} \Sigma&=\left\{z\in \overline{\mathbb{D}}^{2}\setminus (L\cup \mathbb{T}^2): \triangle(z)=0\right\}\\ &=\left\{z\in \overline{\mathbb{D}}^{2}\setminus (L\cup \mathbb{T}^2)\right\}\cap[\cup^{2}_{j=1} Z_{j}], \end{aligned}$$ and $$\begin{aligned} X&=\left\{z\in \overline{\mathbb{D}}^{2}\setminus (L\cup \mathbb{T}^2): \overline{(P\circ \Pi)(z)}=h(z)\right\}\\ &=\left\{z\in \overline{\mathbb{D}}^{2}\setminus (L\cup \mathbb{T}^2): \overline{z_1+z_2-2z_1z_2}=\frac{1}{z_1}+\frac{1}{z_2}-\frac{2}{z_1z_2}\right\}. \end{aligned}$$ Here $Q_{j}=Z_{j}\cap \mathbb{{T}}^2.$ We now claim that $$\begin{aligned} \widehat{Q_1}&=\{z\in \mathbb{C}^2:z_1+z_2-2z_1z_2-2=0,|z_1|= 1,|z_2|= 1\}= Q_1. \end{aligned}$$ Clearly, $\widehat{Q_1}\subset \{z\in \mathbb{C}^2:z_1+z_2-2z_1z_2-2=0,|z_1|\le 1,|z_2|\le 1\}.$ Let $(\alpha,\beta)\in \{z\in \mathbb{C}^2:z_1+z_2-2z_1z_2-2=0,|z_1|\le 1,|z_2|\le 1\}\setminus Q_1.$ First, we assume that $|\beta|<1.$ Since $\alpha+\beta-2\alpha\beta-2=0,$ we have $$\begin{aligned} \label{E:Pt_OutsideBidsc} |2-\alpha|= |\beta||1-2\alpha|<|1-2\alpha|.
\end{aligned}$$ Let $\alpha=u+iv.$ Then from ([\[E:Pt_OutsideBidsc\]](#E:Pt_OutsideBidsc){reference-type="ref" reference="E:Pt_OutsideBidsc"}), we get that $$\begin{aligned} &(2-u)^2+v^2<(1-2u)^2+4v^2\\ &\implies 4+u^2+v^2-4u<1+4(u^2+v^2)-4u\\ &\implies u^2+v^2>1 \text{ i.e., } |\alpha|>1. \end{aligned}$$ Hence, we conclude that $(\alpha, \beta) \notin \widehat{Q_1}.$ In the case where $|\alpha|<1,$ we can similarly demonstrate that $|\beta|>1,$ leading to the same conclusion, $(\alpha, \beta) \notin \widehat{Q_1}.$ As a result, we establish that $Q_{1}$ is polynomially convex. Furthermore, consider $\widehat{Q_2}=\{z\in \mathbb{C}^2:z_1=z_2,|z_1|\le 1\}\ne Q_2.$ Notably, $(\frac{1}{2},\frac{1}{2})\in \widehat{Q_{2}}\setminus (\mathbb{T}^2\cup L),$ while $(\frac{1}{2},\frac{1}{2})\notin X.$ Hence, by , we can deduce that: $$\begin{aligned} \widehat{{\sf Gr}_{\overline{P\circ \Pi}}(\mathbb{T}^2)}={\sf Gr}_{\overline{P\circ \Pi}}(\mathbb{T}^2). \end{aligned}$$ This implies: $$\begin{aligned} \widehat{{\sf Gr}_{\overline{P}}(\Gamma_{\mathbb{G}_{2}})}=\widetilde{\Psi}\left({\sf Gr}_{\overline{P\circ \Pi}}(\mathbb{T}^2)\right)={\sf Gr}_{\overline{P}}(\Gamma_{\mathbb{G}_{2}}). \end{aligned}$$ **Example 27**. Let $p_{1}(z_1,z_2)=2z_1+z^2_2,~~p_{2}(z_1,z_2)=z_1-z^2_2,~P(z_1,z_2)=z_1-z_{2}$ and $\phi(z_1,z_2)=(p_{1}(z_1,z_2),p_{2}(z_1,z_2)).$ Therefore $\Omega=\phi(\mathbb{D}^2).$ Then $[z_1,z_2,\overline{P};\Gamma_{\Omega}]=\mathcal{C}(\Gamma_{\Omega}).$ According to , it follows that $[z_1,z_2,\overline{P};\Gamma_{\Omega}]=\mathcal{C}(\Gamma_{\Omega})$ if, and only if, ${\sf Gr}_{\overline{P}}(\Gamma_{\Omega})$ exhibits polynomial convexity. Furthermore, the polynomial convexity of ${\sf Gr}_{\overline{P}}(\Gamma_{\Omega})$ is equivalent to the polynomial convexity of ${\sf Gr}_{\overline{P\circ \phi}}(\mathbb{T}^2)$.
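The inequality established for Example 26 admits a direct numerical check: on the curve $Z(q_1)$ one can solve $z_1+z_2-2z_1z_2-2=0$ for $z_1$, giving $z_1=(2-z_2)/(1-2z_2)$, and $|z_2|<1$ then forces $|z_1|>1$. A brief sketch (illustrative only):

```python
import cmath, random

def z1_on_curve(z2):
    """Solve z1 + z2 - 2*z1*z2 - 2 = 0 for z1 (defined whenever z2 != 1/2)."""
    return (2 - z2) / (1 - 2 * z2)

for _ in range(200):
    z2 = random.uniform(0, 0.999) * cmath.exp(1j * random.uniform(0, 2 * cmath.pi))
    if abs(1 - 2 * z2) < 1e-6:   # the curve has no point with z2 = 1/2
        continue
    # the curve meets {|z2| < 1} only with |z1| > 1, which is why
    # Q1 = Z(q1) ∩ T^2 is polynomially convex in Example 26
    assert abs(z1_on_curve(z2)) > 1
```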
Here $\overline{P\circ \phi}=\overline{z_1+2z^2_2}=\frac{1}{z_1}+\frac{2}{z^2_2}=:h(z)$ on $\mathbb{T}^2.$ $$\begin{aligned} \triangle({z})&= \begin{vmatrix} \frac{\partial (P\circ\phi)}{\partial{z_1}}& \frac{\partial (P\circ\phi)}{\partial z_2}\\[1.5ex] \frac{\partial h}{\partial{{z_1}}}& \frac{\partial h}{\partial{z_2}}\\[1.5ex] \end{vmatrix} = \begin{vmatrix} 1& 4z_2\\[1.5ex] \frac{-1}{z^2_1}& \frac{-4}{z^3_2}\\[1.5ex] \end{vmatrix} =\frac{4}{z^2_{1}z^{3}_{2}}(z_1+z^2_2)(z^2_2-z_1). \end{aligned}$$ We define $q_{1}:=z_1+z^2_2,~q_2:=z^2_2-z_1,$ and $Z_{j}=\{z\in \mathbb{C}^{2}:q_{j}(z)=0\},j=1,2.$ Therefore, $$\begin{aligned} \Sigma&=\left\{z\in \overline{\mathbb{D}}^{2}\setminus (L\cup \mathbb{T}^2): \triangle(z)=0\right\}\\ &=\left\{z\in \overline{\mathbb{D}}^{2}\setminus (L\cup \mathbb{T}^2)\right\}\cap[\cup^{2}_{j=1} Z_{j}], \end{aligned}$$ and $$\begin{aligned} X&=\left\{z\in \overline{\mathbb{D}}^{2}\setminus (L\cup \mathbb{T}^2): \overline{(P\circ \phi)(z)}=h(z)\right\}\\ &=\left\{z\in \overline{\mathbb{D}}^{2}\setminus (L\cup \mathbb{T}^2): \overline{z_1+2z^2_2}=\frac{1}{z_1}+\frac{2}{z^2_2}\right\}. \end{aligned}$$ Here $Q_{j}=Z_{j}\cap \mathbb{{T}}^2.$ Clearly, $$\begin{aligned} \widehat{Q_1}&=\{z\in \mathbb{C}^2:z_1+z^2_2=0, |z_1|\le 1,|z_2|\le 1\}\ne Q_1,~\text{ and}\\ \widehat{Q_2}&=\{z\in \mathbb{C}^2:z^2_2-z_1=0, |z_1|\le 1,|z_2|\le 1\}\ne Q_2. \end{aligned}$$ It is easy to see that $\widehat{Q_{j}}\setminus (\mathbb{T}^2\cup L)\nsubseteq X$ for $j=1,2.$ Therefore, by , we get that $$\begin{aligned} \widehat{{\sf Gr}_{\overline{P\circ \phi}}(\mathbb{T}^2)}={\sf Gr}_{\overline{P\circ \phi}}(\mathbb{T}^2). \end{aligned}$$ Hence $$\begin{aligned} \widehat{{\sf Gr}_{\overline{P}}(\Gamma_{\Omega})}&=\widetilde{\Psi}\left({\sf Gr}_{\overline{P\circ \phi}}(\mathbb{T}^2) \right)= {\sf Gr}_{\overline{P}}(\Gamma_{\Omega}). \end{aligned}$$ **Example 28**.
Let $p_{1}(z_1,z_2)=z_1+z_2,~~p_{2}(z_1,z_2)=z^2_1+z^2_2,~P(z_1,z_2)=z^2_1+z_{2}$ and $\phi(z_1,z_2)=(p_{1}(z_1,z_2),p_{2}(z_1,z_2)).$ Therefore $\Omega=\phi(\mathbb{D}^2).$ Then $[z_1,z_2,\overline{P};\Gamma_{\Omega}]\ne \mathcal{C}(\Gamma_{\Omega}).$ Based on , we can assert that $[z_1,z_2,\overline{P};\Gamma_{\Omega}]\ne \mathcal{C}(\Gamma_{\Omega})$ if, and only if, ${\sf Gr}_{\overline{P}}(\Gamma_{\Omega})$ lacks polynomial convexity. Furthermore, ${\sf Gr}_{\overline{P}}(\Gamma_{\Omega})$ possesses polynomial convexity if, and only if, ${\sf Gr}_{\overline{P\circ \phi}}(\mathbb{T}^2)$ is polynomially convex. Therefore, it is enough to show that ${\sf Gr}_{\overline{P\circ \phi}}(\mathbb{T}^2)$ is not polynomially convex. Here $P\circ \phi=2(z^2_1+z_1z_2+z^2_2).$ Hence, $$\begin{aligned} \overline{P\circ \phi}=\overline{2(z^2_1+z_1z_2+z^2_2)}=2\left(\frac{1}{z^2_1}+\frac{1}{z^2_2}+\frac{1}{z_1z_2}\right)=:h(z) \text{ on } \mathbb{T}^2. \end{aligned}$$ $$\begin{aligned} \triangle({z})&= \begin{vmatrix} \frac{\partial (P\circ\phi)}{\partial{z_1}}& \frac{\partial (P\circ\phi)}{\partial z_2}\\[1.5ex] \frac{\partial h}{\partial{{z_1}}}& \frac{\partial h}{\partial{z_2}}\\[1.5ex] \end{vmatrix} = \begin{vmatrix} 2(2z_1+z_2)& 2(2z_2+z_1)\\[1.5ex] 2(\frac{-2}{z^3_1}-\frac{1}{z^2_1z_2})& 2(\frac{-2}{z^3_2}-\frac{1}{z_1z^2_2})\\[1.5ex] \end{vmatrix}\\[1.5ex] &=\frac{-16\alpha^{-1}}{z^3_{1}z^{3}_{2}}(z_1+z_2)(z_2-z_1)(z_1-\alpha z_2)(z_2-\alpha z_1), \text{ where } \alpha=e^{\frac{2\pi i}{3}}.
\end{aligned}$$ We define $q_{1}:=z_1+z_2,~q_2:=z_2-z_1,~q_3:=z_1-\alpha z_2,~q_{4}:=z_2-\alpha z_1,$ and $Z_{j}=\{z\in \mathbb{C}^{2}:q_{j}(z)=0\},j=1,2,3,4.$ Therefore, $$\begin{aligned} \Sigma&=\left\{z\in \overline{\mathbb{D}}^{2}\setminus (L\cup \mathbb{T}^2): \triangle(z)=0\right\}\\ &=\left\{z\in \overline{\mathbb{D}}^{2}\setminus (L\cup \mathbb{T}^2)\right\}\cap[\cup^{4}_{j=1} Z_{j}], \end{aligned}$$ and $$\begin{aligned} X&=\left\{z\in \overline{\mathbb{D}}^{2}\setminus (L\cup \mathbb{T}^2): \overline{(P\circ \phi)(z)}=h(z)\right\}\\ &=\left\{z\in \overline{\mathbb{D}}^{2}\setminus (L\cup \mathbb{T}^2): \overline{2(z^2_1+z_1z_2+z^2_2)}=2\left(\frac{1}{z^2_1}+\frac{1}{z^2_2}+\frac{1}{z_1z_2}\right)\right\}. \end{aligned}$$ Here $Q_{j}=Z_{j}\cap \mathbb{{T}}^2.$ Clearly, $$\begin{aligned} \widehat{Q_1}&=\{z\in \mathbb{C}^2:z_1+z_2=0, |z_1|\le 1,|z_2|\le 1\}\ne Q_1;\\ \widehat{Q_2}&=\{z\in \mathbb{C}^2:z_2-z_1=0, |z_1|\le 1,|z_2|\le 1\}\ne Q_2;\\ \widehat{Q_3}&=\{z\in \mathbb{C}^2:z_1-\alpha z_2=0, |z_1|\le 1,|z_2|\le 1\}\ne Q_3;\\ \widehat{Q_4}&=\{z\in \mathbb{C}^2:z_2-\alpha z_1=0, |z_1|\le 1,|z_2|\le 1\}\ne Q_4. \end{aligned}$$ Again $\widehat{Q_{j}}\setminus (\mathbb{T}^2\cup L)\nsubseteq X$ for $j=1,2,$ and $\widehat{Q_{j}}\setminus (\mathbb{T}^2\cup L)\subset X$ for $j=3,4.$ Therefore, by , we get that $$\begin{aligned} \widehat{{\sf Gr}_{\overline{P\circ \phi}}(\mathbb{T}^2)}={\sf Gr}_{\overline{P\circ \phi}}(\mathbb{T}^2)\cup {\sf Gr}_{\overline{P\circ \phi}}(\widehat{Q_{3}})\cup {\sf Gr}_{\overline{P\circ \phi}}(\widehat{Q_{4}}). \end{aligned}$$ Hence $$\begin{aligned} \widehat{{\sf Gr}_{\overline{P}}(\Gamma_{\Omega})}&=\widetilde{\Psi}\left({\sf Gr}_{\overline{P\circ \phi}}(\mathbb{T}^2) \right)={\sf Gr}_{\overline{P}}(\Gamma_{\Omega})\cup \widetilde{\Psi}\left({\sf Gr}_{\overline{P\circ \phi}}(\widehat{Q_{3}})\right)\cup \widetilde{\Psi}\left({\sf Gr}_{\overline{P\circ \phi}}(\widehat{Q_{4}})\right).
\end{aligned}$$ We would like to express our sincere gratitude to Professor Franc Forstnerič for pointing out in [@Cirka69] and showing us the proof of . The first named author was partially supported by a Matrics Research Grant (MTR/2017/000974) of SERB, Dept. of Science and Technology, Govt. of India, for the beginning of this work and is supported by a Core Research Grant (CRG/2022/003560) of SERB, Dept. of Science and Technology, Govt. of India, for the later part of the work. The second named author's work received partial support from an INSPIRE Fellowship (IF 160487) provided by the Dept. of Science and Technology, Govt. of India, during the early stage of this work. Presently, this research is supported by a research grant from SERB (Grant No. CRG/2021/005884), Dept. of Science and Technology, Govt. of India. J. Agler, Z. Lykova, and N. Young. Geodesics, retracts, and the norm-preserving extension property in the symmetrized bidisc. , 258(1242):vii+108, 2019. J. Agler, Z. Lykova, and N. J. Young. A geometric characterization of the symmetrized bidisc. , 473(2):1377--1413, 2019. J. Agler and N. J. Young. The hyperbolic geometry of the symmetrized bidisc. , 14(3):375--403, 2004. Jim Agler and John E. McCarthy. Distinguished varieties. , 194(2):133--153, 2005. T. Andô. On a pair of commutative contractions. , 24:88--90, 1963. T. Bhattacharyya, S. Pal, and S. S. Roy. Dilations of $\Gamma$-contractions by solving operator equations. , 230(2):577--606, 2012. C. Bisi and F. Polizzi. On proper polynomial maps of $\Bbb C^2$. , 20(1):72--89, 2010. D. Chakrabarti and S. Gorai. Function theory and holomorphic maps on symmetric products of planar domains. , 25(4):2196--2225, 2015. E. M. Chirka. , volume 46 of *Mathematics and its Applications (Soviet Series)*. Kluwer Academic Publishers Group, Dordrecht, 1989. Translated from the Russian by R. A. M. Hoksbergen. C. Costara. The symmetrized bidisc and Lempert's theorem. , 36(5):656--662, 2004. B.
Krishna Das and Jaydeb Sarkar. Ando dilations, von Neumann inequality, and distinguished varieties. , 272(5):2114--2131, 2017. A. Edigarian and W. Zwonek. Geometry of the symmetrized polydisc. , 84(4):364--374, 2005. T. W. Gamelin. . Prentice-Hall, Inc., Englewood Cliffs, N. J., 1969. A. J. Izzo. Uniform algebras generated by holomorphic and pluriharmonic functions. , 339(2):835--847, 1993. A. J. Izzo. Uniform algebras generated by holomorphic and pluriharmonic functions on strictly pseudoconvex domains. , 171(2):429--436, 1995. A. J. Izzo, Samuelsson K. H., and E. F. Wold. Presence or absence of analytic structure in maximal ideal spaces. , 366(1-2):459--478, 2016. M. Jarnicki and P. Pflug. On automorphisms of the symmetrized bidisc. , 83(3):264--266, 2004. T. Jimbo. Polynomial hulls of graphs of antiholomorphic functions. volume 57, pages 157--163. 2003. Japanese Association of Mathematical Sciences 2001 Annual Meeting (Tennoji). T. Jimbo. Polynomial hulls of graphs on the torus in $C^2$. , 62(3):335--342, 2005. L. Kosiński and W. Zwonek. Nevanlinna-Pick problem and uniqueness of left inverses in convex domains, symmetrized bidisc and tetrablock. , 26(3):1863--1890, 2016. S. Lamy. Sur la structure du groupe d'automorphismes de certaines surfaces affines. , 49(1):3--20, 2005. M. A. Lavrentiev. Sur les fonctions d'une variable complexe, représentables par des séries de polynomes. . Z. L. Leı̆benzon. On the ring of continuous functions on a circle. , 7(4(50)):163--164, 1952. S Pal and O. M. Shalit. Spectral sets and distinguished varieties in the symmetrized bidisc. , 266(9):5779--5800, 2014. P. Pflug and W. Zwonek. Description of all complex geodesics in the symmetrized bidisc. , 37(4):575--584, 2005. R. Remmert. Projektionen analytischer Mengen. , 130:410--441, 1956. R. Remmert. Holomorphe und meromorphe Abbildungen komplexer Räume. , 133:328--370, 1957. H. Samuelsson and E. F. Wold. Uniform algebras and approximation on manifolds. , 188(3):505--523, 2012. J. 
Sarkar. Operator theory on symmetrized bidisc. , 64(3):847--873, 2015. E. L. Stout. , volume 261 of *Progress in Mathematics*. Birkhäuser Boston, Inc., Boston, MA, 2007. M. Trybuła. Invariant metrics on the symmetrized bidisc. , 60(4):559--565, 2015. E. M. Čirka. Approximation by holomorphic functions on smooth manifolds in ${\bf C}^{n}$. , 78 (120):101--123, 1969. J. Wermer. On algebras of continuous functions. , 4:866--869, 1953. J. Wermer. Polynomially convex disks. , 158:6--10, 1965.
--- abstract: | We discuss a central limit theorem in the framework of the group algebra of the Thompson group $F$. We consider the sequence of self-adjoint elements given by $a_n=\frac{g_n+g_n^{*}}{\sqrt{2}}$ in the noncommutative probability space $( {\mathbb{C}} (F),\varphi)$, where the expectation functional $\varphi$ is the trace associated to the left regular representation of $F$, and the $g_n$-s are the generators of $F$ in its standard infinite presentation. We show that the limit law of the sequence $s_n = \frac{a_0+\cdots+a_{n-1}}{\sqrt{n}}$ is the standard normal distribution. address: A. Krishnan, Department of Mathematics and Computer Studies, Mary Immaculate College, St. Patrick's Campus, Thurles, Ireland author: - Arundhathi Krishnan bibliography: - references.bib title: A Central Limit Theorem in the framework of the Thompson Group $F$ --- # Introduction and Preliminaries {#section:intro} It is well-known that simplified versions of certain central limit theorems -- for instance, the classical and free versions -- can be proved algebraically. Let $( {\mathcal A} ,\varphi)$ be a $*$-probability space, $(a_n)$ be a sequence of self-adjoint elements in ${\mathcal A}$, and $s_n= \frac{1}{\sqrt{n}}(a_0+\cdots +a_{n-1})$ for each $n \in {\mathbb{N}}$. The moment of order $d$ of the element $s_n$ is given by $\varphi(s_n^d)$. 
We are often interested in the existence of the limit of the sequence $(\varphi(s_n^d))$, and its value if it exists, for each $d \in {\mathbb{N}}$: $$\label{equation:limitmoment} \lim_{n \to \infty} \varphi(s_n^d)= \lim_{n \to \infty} \frac{1}{n^{d/2}} \sum_{ \underline{i} : [d] \to \{ 0, \ldots , n-1 \} } \ \varphi( a_{ \underline{i} (1)} \cdots a_{ \underline{i} (d)}).$$ Usually, the sum on the right hand side of [\[equation:limitmoment\]](#equation:limitmoment){reference-type="eqref" reference="equation:limitmoment"} is rewritten using some property of the sequence $(a_n)$ so that the number of terms being summed no longer depends on the value of $n$. For example, in the above-mentioned classical and free central limit theorems, the sum is taken over all pair partitions, and all non-crossing pair partitions, respectively, of $\{1,\ldots, d\}$ (see [@Sp90 Theorem 3] and [@NS06 Lecture 8]). Algebraic central limit theorems have been studied in various contexts (see [@Sk22] for a nice overview). In the setting of the infinite symmetric group $S_{\infty}$, Biane showed that the law of a normalized sequence of random variables coming from the star generators converges to the law of a semi-circular system (see [@Bi95 Theorem 1]). Further, Köstler and Nica, and Köstler, Nica and Campbell showed that the limit distribution of a sequence coming from the star generators is connected to the average empirical eigenvalue distribution of a random GUE matrix (see [@KN21 Theorem 1.1] and [@CKN22 Theorem 2.9]). We are interested in a limit theorem in which the sequence $(a_n)$ comes from the generators of the Thompson group $F$. The standard infinite presentation of $F$ is as follows: $$F= \langle g_0, g_1, \ldots \mid g_n g_k = g_k g_{n+1}, \ 0\leq k < n <\infty \rangle .$$ The Thompson group $F$ is one of three groups $F, V$ and $T$ introduced by Richard Thompson in 1965.
It can be described as a certain subgroup of the group of piece-wise linear homeomorphisms on the unit interval. Many of its unusual properties have been studied since then (see [@CFP96] and [@CF11]), in particular, due to the still open question of its non-amenability. Several aspects of its connections to subfactor theory and noncommutative stochastic processes have been studied, for instance in [@BJ19a; @Br20; @AJ21; @KK22]. Consider the noncommutative probability space $( {\mathbb{C}} (F), \varphi)$ where ${\mathbb{C}} (F)$ is the group $*$-algebra of $F$ and the expectation functional $\varphi$ is given by the trace associated to the left regular representation of $F$. That is, with $e$ representing the identity element of $F$, $$\varphi(x) = \begin{cases} 1, & x=e\\ 0, & x \neq e. \end{cases}$$ Each element $g$ of $F$ can be viewed as an element of the group algebra ${\mathbb{C}} (F)$; it is also written as $g$, and is unitary, with $g^* = g^{-1}$. Let $(a_n)$ be the sequence of self-adjoint elements of ${\mathbb{C}} (F)$ defined by $$a_n := \frac{g_n +g_n^*}{\sqrt{2}}, \ n \in {\mathbb{N}} _0.$$ Note that $\varphi(a_n)=0$ and $\varphi (a_n^2) = 1$ for each $n \in {\mathbb{N}} _0$, so that $(a_n)$ is a sequence of identically distributed, self-adjoint random variables which are centered and have variance $1$ in the probability space $( {\mathbb{C}} (F),\varphi)$. Before stating our main theorem, we remind the reader of the definition of a non-commutative probability space, and the notion of convergence in distribution of a sequence of random variables. **Definition 1** (Non-commutative probability space). A non-commutative probability space $( {\mathcal A} ,\varphi)$ consists of a unital $*$-algebra over ${\mathbb{C}}$ and a unital positive linear functional $\varphi$ on ${\mathcal A}$. The elements $a \in {\mathcal A}$ are called non-commutative random variables in $( {\mathcal A} ,\varphi)$. **Definition 2** (Convergence in distribution). 
Let $( {\mathcal A} _n,\varphi_n) \ (n \in {\mathbb{N}} )$ and $( {\mathcal A} ,\varphi)$ be non-commutative probability spaces and consider random variables $a_n \in {\mathcal A} _n$ for each $n \in {\mathbb{N}}$, and $a \in {\mathcal A}$. We say that $(a_n)$ converges in distribution to $a$ as $n \to \infty$, and denote this by $$a_n \stackrel{\text{distr}}{\longrightarrow} a,$$ if we have $$\lim_{n \to \infty} \varphi_n(a_n^d) = \varphi(a^d), \ \forall d \in {\mathbb{N}} .$$ We now state our main result -- a central limit theorem in the framework of the Thompson group $F$. **Theorem 3** (CLT for the sequence $a_n$). *Let $(a_n)$ be the sequence of self-adjoint random variables in $( {\mathbb{C}} (F), \varphi)$ given by $$a_n = \frac{g_n+g_n^*}{\sqrt{2}}, \ n \in {\mathbb{N}} _0$$ and $$s_n := \frac{1}{\sqrt{n}} (a_0+ \cdots +a_{n-1}), \ n \in {\mathbb{N}} .$$* *Then we have $$\lim_{n \to \infty} \varphi (s_n^d) = \begin{cases} (d-1)!! & \ \text{for } d \text{ even,} \\ 0 & \ \text{for } d \text{ odd.} \end{cases}$$* *That is, $$s_n \stackrel{\text{distr}}{\longrightarrow} x,$$ where $x$ is a normally distributed random variable of variance $1$.* The normal distribution is determined by its moments. As the random variables $s_n$ have moments of all orders, we can invoke Theorem 30.2 of [@Bil95] to conclude that the probability measures $\mu_n$ corresponding to the random variables $s_n$ converge weakly to $\mu= N(0,1)$, where $N(0,1)$ is the centered normal distribution of variance $1$.
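The value $(d-1)!!$ in Theorem [Theorem 3](#theorem:main){reference-type="ref" reference="theorem:main"} is exactly the number of pair partitions of $\{1,\ldots,d\}$, which is also the moment of order $d$ of $N(0,1)$. As an informal sanity check (not part of the paper's argument), the following Python sketch enumerates pair partitions by brute force and compares the count with the double factorial:

```python
def pair_partitions(elems):
    """Recursively enumerate all pair partitions of a list of elements.
    Yields nothing when the number of elements is odd."""
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for j in range(len(rest)):
        # pair `first` with rest[j], then pair up the remaining elements
        for pp in pair_partitions(rest[:j] + rest[j + 1:]):
            yield [(first, rest[j])] + pp

def double_factorial(m):
    """m!! = m (m-2) (m-4) ... down to 1 or 2."""
    result = 1
    while m > 1:
        result *= m
        m -= 2
    return result

for d in range(1, 9):
    count = len(list(pair_partitions(list(range(1, d + 1)))))
    expected = double_factorial(d - 1) if d % 2 == 0 else 0
    assert count == expected
    print(d, count)  # d = 2, 4, 6, 8 give 1, 3, 15, 105; odd d give 0
```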
## Methodology used to compute moments {#methodology-used-to-compute-moments .unnumbered} Given a positive integer $d$, the moment of order $d$ of $s_n$ can be expressed in terms of the expectation functional $\varphi$ evaluated on products of the generators of $F$ and their inverses as follows: $$\label{equation:moment} \hspace{1cm} \varphi (s_n^d) = \frac{1}{(2n)^{d/2}} \sum_{ \substack{ \underline{i} : [d] \to \{ 0, \ldots , n-1 \}, \\ \underline{ \varepsilon }: [d] \to \{ -1,1 \} } } \ \varphi \Bigl( g_{ \underline{i} (1)}^{\underline{ \varepsilon }(1)} \cdots g_{ \underline{i} (d)}^{\underline{ \varepsilon }(d)} \Bigr).$$ We will refer to tuples $( \underline{i} ,\underline{ \varepsilon })$ with $\underline{i} : [d] \to {\mathbb{N}} _0$ and $\underline{ \varepsilon }: [d] \to \{-1,1\}$ as *words* of length $d$. For $w=( \underline{i} ,\underline{ \varepsilon })$, we write $\mathrm{eval} _F(w)$ to mean the element $g_{ \underline{i} (1)}^{\underline{ \varepsilon }(1)}\cdots g_{ \underline{i} (d)}^{\underline{ \varepsilon }(d)}\in F$. We call a word $w$ *neutral* if $\mathrm{eval} _F(w)=e$. It is clear from the expansion of the moments of $s_n$ in [\[equation:moment\]](#equation:moment){reference-type="eqref" reference="equation:moment"} and the definition of the trace $\varphi$, that for each $d \in {\mathbb{N}}$, evaluating $\lim_{n \to \infty} \varphi(s_n^d)$ reduces to counting the number of neutral words $( \underline{i} ,\underline{ \varepsilon })$ of length $d$ with $\underline{i}$ taking values in $\{0,\ldots, n-1\}$. Our theorem does not fall into the category arising from the Speicher -- von Waldenfels general algebraic central limit theorem, as the sequence $(a_n)$ is not exchangeable, and does not satisfy the singleton vanishing property (see [@BS94] and [@SW94]). The sequence $(a_n)$ is also easily seen not to be spreadable. 
However, we will still use the combinatorics of pair partitions to compute our moments as there turns out to be a natural way to associate a pair partition to each neutral word. It is evident from the length-preserving relations of $F$ that no word $( \underline{i} ,\underline{ \varepsilon })$ of odd length can satisfy $\mathrm{eval} _F(( \underline{i} ,\underline{ \varepsilon }))=e$. Our approach will be to show that for every even integer $d$, each neutral word of length $d$ corresponds to a unique pair partition of $\{1,\ldots,d\}$. In order to show this, we will use the language of so-called abstract reduction systems to show that any neutral word of length $d$ in $F$ has a unique normal form, also of length $d$, of the following type: $$(g_{ \underline{j} (1)},\ldots,g_{ \underline{j} (\frac{d}{2})},g_{ \underline{j} (\frac{d}{2})}^{-1},\ldots, g_{ \underline{j} (1)}^{-1}) \ \text{ with }\ \underline{j} (1) +1\geq \underline{j} (2), \ldots, \underline{j} (\frac{d}{2}-1)+1 \geq \underline{j} (\frac{d}{2}).$$ The pattern of this normal form allows us to naturally pair up generators of $F$ and their inverses. Tracing back the steps that transform a word to its unique normal form allows us to assign a permutation $\tau$ in ${\mathcal S} _d$ and thereby, a pair partition $\pi$ of $\{1,\ldots,d\}$, to the word $( \underline{i} ,\underline{ \varepsilon })$. 
We will then arrive at the following expression for the large $n$ limit of the moment of order $d$ of $s_n$: $$\label{equation:pairpart} \lim_{n \to \infty} \varphi(s_n^d) = \sum_{ \pi \in {\mathcal P} _2(d)} \lim_{n\to \infty} \frac{1}{(2n)^{d/2}}\sum_{\substack{\tau \in {\mathcal S} _d, \\ \tau(\pi) =\pi_{\text{rain}}}} |\mathcal{W}_0(d,n,\tau)|.$$ Here, $\pi_{\text{rain}}$ is the so-called rainbow pair-partition, $\tau(\pi) = \pi_{\text{rain}}$ means that the pair partition $\pi$ is transformed to the rainbow pair-partition $\pi_{\text{rain}}$ via the permutation $\tau$, and $|\mathcal{W}_0(d,n,\tau)|$ denotes the number of neutral words $( \underline{i} ,\underline{ \varepsilon })$ of length $d$ with letters coming from $\{g_0,\ldots, g_{n-1}\}$, and whose assigned permutation is $\tau$. We will show that for every permutation $\tau$, this number is sandwiched between polynomials in $n$, of degree $\frac{d}{2}$, and with leading coefficient $\frac{1}{(\frac{d}{2})!}$. This will allow us to show that each pair partition contributes exactly $1$ to the outer sum in [\[equation:pairpart\]](#equation:pairpart){reference-type="ref" reference="equation:pairpart"}. Indeed, Theorem [Theorem 3](#theorem:main){reference-type="ref" reference="theorem:main"} will then follow, as the number of pair partitions of the set $\{1,\ldots, d\}$ is $0$ for odd $d$, and $(d-1)!!$ for even $d$. ## Organisation of the paper {#organisation-of-the-paper .unnumbered} It remains to outline the contents of this paper. Including the introduction, which forms Section [1](#section:intro){reference-type="ref" reference="section:intro"}, the paper consists of five sections. In Section [2](#section:normal){reference-type="ref" reference="section:normal"}, we introduce the set of words in the Thompson group $F$, and an abstract reduction system which will provide the framework for our counting results.
In particular, the reduction system gives us a normal form for each neutral word as described above. The language of abstract reduction systems is immensely helpful in showing that the normal form obtained is unique. In Section [3](#section:algorithm){reference-type="ref" reference="section:algorithm"}, we recap the basic definitions of pair partitions, and discuss how to assign a pair partition to each neutral word. In other words, we assign a "bin" to each neutral word in a natural way. As an intermediate step, we utilize a permutation $\tau$ in ${\mathcal S} _d$, where ${\mathcal S} _d$ is the set of permutations of the set $\{1,\ldots,d\}$. In Section [4](#section:count){reference-type="ref" reference="section:count"}, we approximate the size of each "bin". That is, for each pair partition $\pi$ of $[d]$, we find upper and lower bounds for the number of neutral words formed with letters from the set $\{g_0,\ldots, g_{n-1}, g_0^{-1}, \ldots, g_{n-1}^{-1}\}$ which are assigned to $\pi$ under the binning procedure described in Section [3](#section:algorithm){reference-type="ref" reference="section:algorithm"}. Finally, in Section [5](#section:main){reference-type="ref" reference="section:main"}, we prove our main theorem -- a central limit theorem for a sequence coming from the generators of the Thompson group $F$. # A normal form for words in $F$ {#section:normal} In this section, we briefly describe $\mathcal{W}(d)$, the set of words in $F$ of length $d$, and then devise an abstract reduction system on $\mathcal{W}(d)$ to obtain a particular normal form for each word. ## The Thompson group $F$ {#subsection:Thompson} In order to arrive at a central limit theorem, we recall that we work with the infinite presentation of $F$: $$F = \langle g_0, g_1, g_2, \ldots \mid g_lg_k = g_k g_{l+1}, \ 0\leq k<l<\infty\rangle.$$ **Definition 4** (Words in $F$). Let $d\in {\mathbb{N}}$ and $[d]$ denote the set $\{1,\ldots, d\}$.
A word of length $d$ in $F$ is a tuple $(g_{ \underline{i} (1)}^{\underline{ \varepsilon }(1)},\ldots, g_{ \underline{i} (d)}^{\underline{ \varepsilon }(d)})$ with $\underline{i} : [d] \to {\mathbb{N}} _0$ and $\underline{ \varepsilon }: [d] \to \{-1, 1\}$. - For brevity, we write $( \underline{i} ,\underline{ \varepsilon })$ to mean $(g_{ \underline{i} (1)}^{\underline{ \varepsilon }(1)},\ldots, g_{ \underline{i} (d)}^{\underline{ \varepsilon }(d)})$. - We will denote the set of words of length $d$ by $\mathcal{W}(d)$. - For a word $w= ( \underline{i} ,\underline{ \varepsilon }) \in \mathcal{W}(d)$, we will write $\mathrm{eval} _F(w)$ to mean the element $$g_{ \underline{i} (1)}^{\underline{ \varepsilon }(1)}\cdots g_{ \underline{i} (d)}^{\underline{ \varepsilon }(d)} \in F.$$ - If $\mathrm{eval} _F(w)=e$, the identity element of $F$, then we call $w$ a *neutral* word. - A word of length $1$ simply evaluates to a generator or the inverse of a generator, and is called a *letter*. - Two words $w_1 = (g_{ \underline{i} (1)}^{\underline{ \varepsilon }(1)},\ldots, g_{ \underline{i} (d_1)}^{\underline{ \varepsilon }(d_1)})$ and $w_2 = (g_{ \underline{i} '(1)}^{\underline{ \varepsilon }'(1)},\ldots, g_{ \underline{i} '(d_2)}^{\underline{ \varepsilon }'(d_2)})$ can be concatenated to give $w = w_1 w_2 := (g_{ \underline{i} (1)}^{\underline{ \varepsilon }(1)},\ldots, g_{ \underline{i} (d_1)}^{\underline{ \varepsilon }(d_1)}, g_{ \underline{i} '(1)}^{\underline{ \varepsilon }'(1)},\ldots, g_{ \underline{i} '(d_2)}^{\underline{ \varepsilon }'(d_2)})$. - We will call $w$ a *sub-word* of $w'$ if there exist words $w_1$ and $w_2$ such that $w'= w_1 w w_2$. ## Abstract reduction systems and Newman's lemma We establish some terminology as used in the theory of abstract reduction systems. We use notations and definitions from [@BN99 Chapter 2]. **Notation and Definition 5**.
An *abstract reduction system* is a pair $(A, \rightarrow)$, where $A$ is a non-empty set and $\rightarrow$ is a subset of $A \times A$ known as a binary relation, or as a *reduction* in the parlance of theoretical computer science and logic. It is common to write $x \rightarrow y$ rather than $(x,y) \in \rightarrow$, denoting that $x$ is *rewritten* as or *reduced* to $y$. 1. The symbol $\stackrel{i}{\rightarrow}$ is used to denote the $i$-fold composition of $\rightarrow$ for $i \in {\mathbb{N}}$. 2. The reflexive transitive closure of $\rightarrow$ is denoted by $\stackrel{*}{\rightarrow}$. 3. An element $x\in A$ is called *reducible* if there exists some $y \in A$ such that $x \rightarrow y$; otherwise it is called irreducible or said to be in *normal form*. The element $y$ is a normal form of $x$ if $x \stackrel{*}{\rightarrow} y$ and $y$ is in normal form. 4. Two elements $x, y \in A$ are said to be *joinable* if there exists some $z\in A$ with $x \stackrel{*}{\rightarrow} z$ and $y \stackrel{*}{\rightarrow} z$. This property is denoted by $x \downarrow y$. 5. A reduction $\rightarrow$ is said to be *confluent* if for all $w, x, y\in A$ with $w \stackrel{*}{\rightarrow} x$ and $w \stackrel{*}{\rightarrow} y$, we have $x \downarrow y$. It is said to be *locally confluent* if for all $w, x, y\in A$ with $w \rightarrow x$ and $w \rightarrow y$, we have $x \downarrow y$. 6. A reduction is said to be *terminating* if there is no infinite chain $x_0 \rightarrow x_1 \rightarrow x_2 \rightarrow \cdots$. 7. A reduction is said to be *normalizing* if every element has a normal form. 8. A reduction $\rightarrow$ is called finitely branching if every element has finitely many direct successors. Confluence and local confluence can be visualized pictorially as diamonds as seen in Figure [\[figure:conflocconf\]](#figure:conflocconf){reference-type="ref" reference="figure:conflocconf"}. 
Here, solid lines denote universal quantifiers and dashed arrows denote existential quantifiers. $$\begin{array}{cccc} \begin{tikzcd}[column sep=tiny] & x \ar[dr, "*", dashrightarrow] & \\ w \ar[ur, "*", rightarrow] \ar[dr,"*", rightarrow] & & z & \\ & y \ar[ur, "*", dashrightarrow] & \end{tikzcd} &&& \begin{tikzcd}[column sep=tiny] & x \ar[dr, "*", dashrightarrow] & \\ w \ar[ur, rightarrow] \ar[dr,rightarrow] & & z & \\ & y \ar[ur, "*", dashrightarrow] & \end{tikzcd} \\ \text{ Confluence } & & & \text{ Local Confluence } \\ \end{array}$$ **Remark 6**. It is clear from Notation and Definition [Notation and Definition 5](#notationdefinition:ARS){reference-type="ref" reference="notationdefinition:ARS"} that if a reduction is terminating, then every element has a normal form, and hence the reduction is normalizing. We will also state, for completeness, some standard results in the theory of abstract reduction systems. The following lemma describes when a normalizing reduction results in a *unique* normal form for every element. **Lemma 7**. *([@BN99 Lemma 2.1.8]) [\[lemma:uniquenormalform\]]{#lemma:uniquenormalform label="lemma:uniquenormalform"} If a reduction is normalizing and confluent, then every element has a *unique* normal form.* The following characterization is sometimes useful to show that a reduction $\rightarrow$ is terminating. **Lemma 8**. *([@BN99 Lemma 2.3.3]) [\[lemma:terminating\]]{#lemma:terminating label="lemma:terminating"} A finitely branching reduction terminates if and only if there is a monotone embedding into $( {\mathbb{N}} _0,>)$.* The following lemma is due to Newman and is called the "diamond" lemma. See Figure [\[figure:conflocconf\]](#figure:conflocconf){reference-type="ref" reference="figure:conflocconf"} for an illustrative explanation of the name. **Lemma 9**. 
*([@Ne42], [@BN99 Lemma 2.7.2]) [\[lemma:newman\]]{#lemma:newman label="lemma:newman"} A terminating reduction is confluent if and only if it is locally confluent.* Newman's lemma gives that a terminating reduction which is *locally confluent* in fact has the stronger property of being *confluent*. Altogether, Remark [Remark 6](#remark:normalizing){reference-type="ref" reference="remark:normalizing"}, Lemmas [\[lemma:uniquenormalform\]](#lemma:uniquenormalform){reference-type="ref" reference="lemma:uniquenormalform"} and [\[lemma:newman\]](#lemma:newman){reference-type="ref" reference="lemma:newman"} show that local confluence guarantees the existence of a *unique* normal form for every element in a terminating abstract reduction system. Two reductions can be composed to give a new reduction. **Definition 10**. Let $R \subseteq A \times B$ and $S \subseteq B \times C$ be two reductions. Their composition is defined by $$S \circ R : = \{(x,z) \mid \exists y \in B \text{ with } (x,y) \in R, (y,z) \in S\}.$$ The following are some results for composed reductions. **Lemma 11**. *[\[lemma:composition\]]{#lemma:composition label="lemma:composition"} Let $(A, \rightsquigarrow)$ be an abstract reduction system and $$A^{\text{irr}} := \{w \in A \mid w \text{ is irreducible with respect to } \rightsquigarrow\}.$$ Let $(A^{\text{irr}}, \rightarrowtail)$ be a second abstract reduction system and consider a new reduction given by the composition $\rightarrow \ : = \ \stackrel{*}{\rightarrowtail} \circ \stackrel{*}{\rightsquigarrow}$. Then $\rightarrow$ is transitive and reflexive. That is, $$\stackrel{*}\rightarrow \ = \ \rightarrow.$$* *Proof.* Clearly, $\rightarrow \subseteq \stackrel{*}\rightarrow$. On the other hand, suppose $x \stackrel{*}\rightarrow z$. If $x=z$, then $x \in A^{\text{irr}}$ and $x \stackrel{*}\rightarrow z$ implies that $x \stackrel{*}\rightarrowtail z$, hence $x \stackrel{*}\rightsquigarrow x \stackrel{*}\rightarrowtail z$, so that $x \rightarrow z$.
If $x \neq z$, then there exists $x_1 \in A^{\text{irr}}$ such that $$x \rightarrow x_1 \stackrel{*}\rightarrow z.$$ Now, as $x \rightarrow x_1$, there exists $y \in A^{\text{irr}}$ such that $x\stackrel{*}\rightsquigarrow y$ and $y \stackrel{*}\rightarrowtail x_1$. As $x_1 \in A^{\text{irr}}$, $x_1 \stackrel{*} \rightarrow z$ is the same as saying that $x_1 \stackrel{*} \rightarrowtail z$ and it then follows that $y \stackrel{*}\rightarrowtail z$. Altogether, we have $y \in A^{\text{irr}}$ such that $x \stackrel{*} \rightsquigarrow y$ and $y \stackrel{*}\rightarrowtail z$, so that $x \rightarrow z$. ◻ **Proposition 12**. *[\[proposition:compositionARS\]]{#proposition:compositionARS label="proposition:compositionARS"} Let $(A, \rightsquigarrow)$ be an abstract reduction system with a locally confluent and terminating reduction $\rightsquigarrow$ and let $A^{\text{irr}} := {\{w \in A \mid w \text{ is irreducible with respect to } \rightsquigarrow\}}$. Suppose $(A^{\text{irr}}, \rightarrowtail)$ is an abstract reduction system where $\rightarrowtail$ is locally confluent and terminating. Let $\rightarrow \ : = \ \stackrel{*}{\rightarrowtail} \circ \stackrel{*}{\rightsquigarrow}$. Then every element of $A$ has a unique normal form with respect to the reduction $\rightarrow$.* *Proof.* Let $w \in A$. By Lemma [\[lemma:uniquenormalform\]](#lemma:uniquenormalform){reference-type="ref" reference="lemma:uniquenormalform"} and (Newman's) Lemma [\[lemma:newman\]](#lemma:newman){reference-type="ref" reference="lemma:newman"}, there exists unique $W(w) \in A^{\text{irr}}$ such that $w \stackrel{*}{\rightsquigarrow} W(w)$. Further, there exists unique $N(W(w))\in A^{\text{irr}}$ such that $W(w)\stackrel{*}{\rightarrowtail} N(W(w))$ and $N(W(w))$ is irreducible with respect to the reduction $\rightarrowtail$. Hence $w \rightarrow N(W(w))$. We will show that $N(W(w))$ is the unique normal form of $w$ under the composed reduction $\rightarrow$.
First, to show that $N(W(w))$ is irreducible under $\rightarrow$, suppose that $N(W(w)) \rightarrow z$. Then there exists $y \in A^{\text{irr}}$ such that $N(W(w)) \stackrel{*}{\rightsquigarrow} y \stackrel{*}{\rightarrowtail} z$. But $N(W(w)) \in A^{\text{irr}}$ implies that $y= N(W(w))$, so that we have $N(W(w)) \stackrel{*}{\rightarrowtail} z$. As $N(W(w))$ is also irreducible with respect to $\rightarrowtail$, we must have $z= N(W(w))$, so that $N(W(w))$ is irreducible and a normal form of $w$ with respect to the reduction $\rightarrow$. It remains to show that $N(W(w))$ is the *unique* normal form of $w$ under $\rightarrow$. Suppose there exists another normal form $v \in A^{\text{irr}}$. Then $w \stackrel{*}{\rightarrow} v$, so by Lemma [\[lemma:composition\]](#lemma:composition){reference-type="ref" reference="lemma:composition"}, there exists $y \in A^{\text{irr}}$ with $w \stackrel{*}{\rightsquigarrow} y \stackrel{*}{\rightarrowtail} v$. By the uniqueness of $W(w) \in A^{\text{irr}}$ such that $w \stackrel{*}{\rightsquigarrow} W(w)$, we must have $y=W(w)$, so that we are left with $W(w) \stackrel{*}{\rightarrowtail} v$ and $W(w) \stackrel{*}{\rightarrowtail} N(W(w))$. As $\rightarrowtail$ is confluent, there exists $u \in A^{\text{irr}}$ with $N(W(w)) \stackrel{*}{\rightarrowtail} u$ and $v \stackrel{*}{\rightarrowtail} u$. Now, as $N(W(w))$ is irreducible with respect to $\rightarrowtail$, we must have $u= N(W(w))$ and $v \stackrel{*}{\rightarrowtail} N(W(w))$. But this gives $v \stackrel{*}{\rightsquigarrow} v \stackrel{*}{\rightarrowtail} N(W(w))$, so that $v \rightarrow N(W(w))$. By the irreducibility of $v$ under $\rightarrow$, we must have $v=N(W(w))$. ◻ ## The abstract reduction system for $\mathcal{W}(d)$ {#subsection:ARSF} We will use the relations of the Thompson group $F$ to formulate an abstract reduction system for $\mathcal{W}(d)$. 
To simplify the use of the relations of $F$, we will actually work with two abstract reduction systems in sequence, and show that each of the reductions is terminating and locally confluent. We will then consider the composition of these two reductions and use Proposition [\[proposition:compositionARS\]](#proposition:compositionARS){reference-type="ref" reference="proposition:compositionARS"} to show that a unique normal form exists with respect to the composed reduction. The goal of our first abstract reduction system is to separate generators and inverses of generators from each other in a given word. In particular, we would like to move the generators to the left of a word, and the inverses of generators to the right. The system is given by $(\mathcal{W}(d),\rightsquigarrow)$ with $$u=w_1ww_2 \rightsquigarrow v = w_1f(w)w_2 \text{ for } u, v \in \mathcal{W}(d),$$ where $w$, a sub-word of length $2$, is rewritten as $f(w)$, another sub-word of length $2$, in the following way: $$\label{equation:rewriting1} f(w) = f(g_{ \underline{i} (k)}^{-1},g_{ \underline{i} (k+1)}) = \begin{cases} (g_{ \underline{i} (k+1)},g_{ \underline{i} (k)}^{-1}) & \underline{i} (k)= \underline{i} (k+1),\\ (g_{ \underline{i} (k+1)}, g_{ \underline{i} (k)+1}^{-1}) & \underline{i} (k)> \underline{i} (k+1),\\ (g_{ \underline{i} (k+1)+1}, g_{ \underline{i} (k)}^{-1}) & \underline{i} (k+1)> \underline{i} (k).\\ \end{cases}$$ Note that $\mathrm{eval} _F(u) = \mathrm{eval} _F(v)$ for $u, v \in \mathcal{W}(d)$ whenever $u \rightsquigarrow v$. **Proposition 13**. *[\[proposition:rewriting1\]]{#proposition:rewriting1 label="proposition:rewriting1"} The abstract reduction system $(\mathcal{W}(d), \rightsquigarrow)$ is terminating and locally confluent.* *Proof.* The reduction system results in moving generators to the left of a given word.
From [\[equation:rewriting1\]](#equation:rewriting1){reference-type="eqref" reference="equation:rewriting1"}, we observe that while the index of a generator (or its inverse) might change in a step of the reduction, generators remain generators (and inverses remain inverses). It is thus clear that for any word $w \in \mathcal{W}(d)$, the reduction [\[equation:rewriting1\]](#equation:rewriting1){reference-type="eqref" reference="equation:rewriting1"} results in an irreducible word in at most $d^2$ steps (indeed, this is a very generous upper bound and the actual number of steps required to reach an irreducible word is less). Hence, there is no infinite chain $w_0 \rightsquigarrow w_1 \rightsquigarrow \cdots$, and the system $(\mathcal{W}(d),\rightsquigarrow)$ is terminating. For local confluence, we need that if $u \rightsquigarrow v$ and $u \rightsquigarrow w$, then there exists $x\in \mathcal{W}(d)$ such that $v \stackrel{*}{\rightsquigarrow} x$ and $w \stackrel{*}{\rightsquigarrow} x$. By definition, $v$ and $w$ are obtained from $u$ by replacing some length $2$ sub-word $(g_{ \underline{i} (k)}^{-1}, g_{ \underline{i} (k+1)})$ by another length $2$ sub-word as in [\[equation:rewriting1\]](#equation:rewriting1){reference-type="eqref" reference="equation:rewriting1"}. It is easy to see that if two disjoint sub-words are rewritten in the reduction $u \rightsquigarrow v$ and $u \rightsquigarrow w$, then there exists $x$ as required, namely the word obtained by applying the two (commuting) reductions to $u$ one after the other. It thus suffices to deal with the case of overlapping sub-words of length 2 in a word of length $3$. However, a quick check shows that a word $u$ of length $3$ cannot be reduced to $v$ and $w$ in two distinct ways under the relation $\rightsquigarrow$. Hence $(\mathcal{W}(d),\rightsquigarrow)$ is locally confluent. 
◻ As $(\mathcal{W}(d),\rightsquigarrow)$ is locally confluent and terminating, it is confluent by Newman's Lemma [\[lemma:newman\]](#lemma:newman){reference-type="ref" reference="lemma:newman"}. Hence every word $w \in \mathcal{W}(d)$ has a unique normal form. In particular, this means any word $w \in \mathcal{W}(d)$ can be rewritten uniquely as $$\label{equation:normal1} W(w) = (w_+,w_-),$$ where $w_+$ is a sub-word consisting only of generators of $F$, and $w_-$ is a sub-word consisting only of inverses of generators. Observe that $\mathrm{eval} _F(w) = \mathrm{eval} _F(W(w))$. Let $\mathcal{W}(d)^{\text{irr}}$ be the set of words of length $d$ which are given in the form $(w_+,w_-)$, i.e., it is the set of irreducible words of $\mathcal{W}(d)$ under the reduction $\rightsquigarrow$. **Example 14**. Let $w = (g^{-1}_3,g_6,g^{-1}_0, g_2,g_4^{-1})$. Then we get $$(g^{-1}_3,g_6,g^{-1}_0,g_2,g_4^{-1}) \rightsquigarrow (g_7,g^{-1}_3,g^{-1}_0,g_2,g_4^{-1}) \rightsquigarrow (g_7,g^{-1}_3,g_3,g^{-1}_0,g_4^{-1}) \rightsquigarrow (g_7,g_3,g_3^{-1},g_0^{-1},g_4^{-1}).$$ We observe here that applying the reduction $\rightsquigarrow$ to the disjoint sub-words $(g^{-1}_3,g_6)$ and $(g^{-1}_0,g_2)$ consecutively results in the same word, $(g_7,g^{-1}_3,g_3,g^{-1}_0,g_4^{-1})$, irrespective of the order in which the reductions are applied. The word $(g_7,g_3,g_3^{-1},g_0^{-1},g_4^{-1})$ is the unique normal form $W(w) \in \mathcal{W}(d)^{\text{irr}}$.
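The rewriting step [\[equation:rewriting1\]](#equation:rewriting1){reference-type="eqref" reference="equation:rewriting1"} is easy to implement. In the following Python sketch (the list-of-pairs encoding of a word is ours, not the paper's), a word is a list of pairs $(\underline{i}(k), \underline{\varepsilon}(k))$ and the leftmost reducible pair is rewritten until none remains; by confluence, the choice of rewrite order does not affect the resulting normal form $W(w)$:

```python
def separate(word):
    """Apply the reduction ~> of (2.1): repeatedly rewrite an adjacent pair
    (g_i^{-1}, g_j) so that generators move left and inverses move right.
    A word is a list of (index, epsilon) pairs with epsilon in {-1, +1}."""
    w = list(word)
    while True:
        for k in range(len(w) - 1):
            (i, e1), (j, e2) = w[k], w[k + 1]
            if e1 == -1 and e2 == 1:  # a reducible pair (g_i^{-1}, g_j)
                if i == j:
                    w[k], w[k + 1] = (j, 1), (i, -1)      # g_i^{-1} g_i = g_i g_i^{-1}
                elif i > j:
                    w[k], w[k + 1] = (j, 1), (i + 1, -1)  # g_i^{-1} g_j = g_j g_{i+1}^{-1}
                else:
                    w[k], w[k + 1] = (j + 1, 1), (i, -1)  # g_i^{-1} g_j = g_{j+1} g_i^{-1}
                break
        else:
            return w  # no reducible pair left: the word lies in W(d)^irr

# Example 14: w = (g_3^{-1}, g_6, g_0^{-1}, g_2, g_4^{-1})
w = [(3, -1), (6, 1), (0, -1), (2, 1), (4, -1)]
print(separate(w))  # [(7, 1), (3, 1), (3, -1), (0, -1), (4, -1)]
```

Running this on the word of Example [Example 14](#example:reduction1){reference-type="ref" reference="example:reduction1"} returns its normal form $W(w) = (g_7,g_3,g_3^{-1},g_0^{-1},g_4^{-1})$.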
Our second abstract reduction system is given by $(\mathcal{W}(d)^{\text{irr}}, \rightarrowtail)$ with $$u = w_1ww_2 \rightarrowtail v = w_1h(w)w_2,$$ where $w$, a sub-word of length $2$, is rewritten as $h(w)$, another sub-word of length $2$, in the following way: $$\label{equation:rewriting2} h(w) = h(g_{ \underline{i} (k)}^{\underline{ \varepsilon }(k)},g_{ \underline{i} (k+1)}^{\underline{ \varepsilon }(k+1)}) = \begin{cases} (g_{ \underline{i} (k+1)-1}, g_{ \underline{i} (k)}) & \underline{i} (k+1)-1 > \underline{i} (k), \ \underline{ \varepsilon }(k)=\underline{ \varepsilon }(k+1)=1 \\ (g_{ \underline{i} (k+1)}^{-1}, g_{ \underline{i} (k)-1}^{-1}) & \underline{i} (k)-1> \underline{i} (k+1), \ \underline{ \varepsilon }(k)=\underline{ \varepsilon }(k+1)=-1\\ \end{cases}$$ Note that $\mathrm{eval} _F(u)= \mathrm{eval} _F(v)$ for $u, v \in \mathcal{W}(d)^{\text{irr}}$ whenever $u \rightarrowtail v$. The goal of this reduction system is to start with a word $w=(w_+,w_-) \in \mathcal{W}(d)^{\text{irr}}$ and rewrite $w_+$ so that the letters are arranged in decreasing order of their indices (except for a possible increase of $0$ or $1$), and symmetrically, to rewrite $w_-$ so that the letters are arranged in increasing order of their indices (except for a possible decrease of $0$ or $1$). This rewriting is aimed at arriving at the so-called normal form [@DT19] (or by some authors, the anti-normal form [@Be04]) of an element of the Thompson monoid $F^+$. In the following proposition, we follow closely the argument in the proof of [@DT19 Lemma 2.1]. **Proposition 15**.
*[\[proposition:rewriting2\]]{#proposition:rewriting2 label="proposition:rewriting2"} The abstract reduction system $(\mathcal{W}(d)^{\text{irr}}, \rightarrowtail)$ is terminating and locally confluent.* *Proof.* Define $\text{sum}: \mathcal{W}(d)^{\text{irr}} \to {\mathbb{N}} _0$ by $$\label{equation:sumindices} \text{sum}(w) = \text{sum}(g_{ \underline{i} (1)}^{\underline{ \varepsilon }(1)},\ldots, g_{ \underline{i} (d)}^{\underline{ \varepsilon }(d)}) : = \sum_{k=1}^d \underline{i} (k).$$ Then from [\[equation:rewriting2\]](#equation:rewriting2){reference-type="eqref" reference="equation:rewriting2"}, it is clear that $w \rightarrowtail w' \implies \text{sum}(w) > \text{sum}(w')$, hence the reduction system is terminating by Lemma [\[lemma:terminating\]](#lemma:terminating){reference-type="ref" reference="lemma:terminating"}. Next, we will show that $\rightarrowtail$ is locally confluent. In particular, we will show that if $u \rightarrowtail v$ and $u \rightarrowtail w$, then there exists $x\in \mathcal{W}(d)^{\text{irr}}$ such that $v \stackrel{*}{\rightarrowtail} x$ and $w \stackrel{*}{\rightarrowtail} x$. By definition, $v$ and $w$ are obtained from $u$ by replacing some length $2$ sub-word $(g_{ \underline{i} (k)}^{\underline{ \varepsilon }(k)}, g_{ \underline{i} (k+1)}^{\underline{ \varepsilon }(k+1)})$ by another length $2$ sub-word. If the two reductions correspond to the rewriting of two disjoint sub-words, then the required $x$ is the word obtained by applying the two (commuting) reductions one after the other. So we only need to deal with the case of overlapping sub-words of length $2$ in a word of length $3$. This happens in the case that $u= (g_{ \underline{i} (k)},g_{ \underline{i} (k+1)},g_{ \underline{i} (k+2)})$ with $\underline{i} (k+1)-1 \geq \underline{i} (k)+1$, $\underline{i} (k+2) -1\geq \underline{i} (k+1)+1$. Writing $a = \underline{i} (k)$, $b = \underline{i} (k+1)$ and $c = \underline{i} (k+2)$, the two one-step reducts of $u$ are joinable: $$(g_a,g_b,g_c) \rightarrowtail (g_{b-1},g_a,g_c) \rightarrowtail (g_{b-1},g_{c-1},g_a) \rightarrowtail (g_{c-2},g_{b-1},g_a)$$ and $$(g_a,g_b,g_c) \rightarrowtail (g_a,g_{c-1},g_b) \rightarrowtail (g_{c-2},g_a,g_b) \rightarrowtail (g_{c-2},g_{b-1},g_a),$$ so both one-step reducts reduce to the common word $(g_{c-2},g_{b-1},g_a)$. A symmetric diamond can be drawn for inverses.
Hence $(\mathcal{W}(d)^{\text{irr}},\rightarrowtail)$ is locally confluent, and consequently, it is confluent by Newman's Lemma [\[lemma:newman\]](#lemma:newman){reference-type="ref" reference="lemma:newman"}. ◻ As $(\mathcal{W}(d)^{\text{irr}},\rightarrowtail)$ is confluent and terminating, every word $u$ has a unique normal form, which we write as $N(u)$. Moreover, $\mathrm{eval} _F(u) = \mathrm{eval} _F(N(u))$. **Example 16**. Let us continue working with the word $W(w)$ that was obtained as the normal form in Example [Example 14](#example:reduction1){reference-type="ref" reference="example:reduction1"}. We get $$(g_7,g_3,g_3^{-1},g_0^{-1},g_4^{-1}) \rightarrowtail (g_7,g_3,g_0^{-1},g_2^{-1},g_4^{-1}),$$ which is the required normal form $N(W(w)).$ Now, consider the abstract reduction system $(\mathcal{W}(d), \rightarrow)$, where $\rightarrow \ := \ \stackrel{*}{\rightarrowtail} \circ \stackrel{*}{\rightsquigarrow}$. By Proposition [\[proposition:compositionARS\]](#proposition:compositionARS){reference-type="ref" reference="proposition:compositionARS"}, as each of $(\mathcal{W}(d), \rightsquigarrow)$ and $(\mathcal{W}(d)^{\text{irr}}, \rightarrowtail)$ is terminating and locally confluent, the composed reduction system $(\mathcal{W}(d), \rightarrow)$ results in a unique normal form for each $w \in \mathcal{W}(d)$. Altogether, using Propositions [\[proposition:rewriting1\]](#proposition:rewriting1){reference-type="ref" reference="proposition:rewriting1"} and [\[proposition:rewriting2\]](#proposition:rewriting2){reference-type="ref" reference="proposition:rewriting2"}, we obtain for each word $w \in \mathcal{W}(d)$, a unique normal form $N(W(w)) \in \mathcal{W}(d)^{\text{irr}}$ as recorded in the proposition below. **Proposition 17**. *[\[prop:normalform\]]{#prop:normalform label="prop:normalform"} Let $( \underline{i} ,\underline{ \varepsilon }) \in \mathcal{W}(d)$ be a word of length $d$. 
Then there exists a unique word in $\mathcal{W}(d)$ given by $$\label{equation:normalform} ( \underline{j} , \underline{ \varepsilon }') = (g_{ \underline{j} (1)},\cdots, g_{ \underline{j} (r)},g_{ \underline{j} (r+1)}^{-1},\cdots, g_{ \underline{j} (d)}^{-1})$$ satisfying the following conditions:* 1. *$\mathrm{eval} _F( \underline{i} ,\underline{ \varepsilon }) = \mathrm{eval} _F( \underline{j} ,\underline{ \varepsilon }')$;* 2. *$r\leq d$;* 3. *$\underline{j} (1) +1\geq \underline{j} (2), \cdots, \underline{j} (r-1) +1 \geq \underline{j} (r)$;* 4. *$\underline{j} (r+1)\leq \underline{j} (r+2)+1,\cdots, \underline{j} (d-1) \leq \underline{j} (d)+1$.* *Proof.* For $w = ( \underline{i} ,\underline{ \varepsilon })$, let $( \underline{j} ,\underline{ \varepsilon }')$ be defined by $$( \underline{j} ,\underline{ \varepsilon }') := N(W(w)),$$ where $W(w)$ is the unique normal form of $w$ obtained in Proposition [\[proposition:rewriting1\]](#proposition:rewriting1){reference-type="ref" reference="proposition:rewriting1"} under the reduction system $(\mathcal{W}(d), \rightsquigarrow)$ and $N(W(w))$ is the unique normal form of $W(w)$ obtained in Proposition [\[proposition:rewriting2\]](#proposition:rewriting2){reference-type="ref" reference="proposition:rewriting2"} under the abstract reduction system $(\mathcal{W}(d)^{\text{irr}}, \rightarrowtail)$. The form of $N(W(w))$ follows from [\[equation:rewriting1\]](#equation:rewriting1){reference-type="eqref" reference="equation:rewriting1"} and [\[equation:rewriting2\]](#equation:rewriting2){reference-type="eqref" reference="equation:rewriting2"}. By Proposition [\[proposition:compositionARS\]](#proposition:compositionARS){reference-type="ref" reference="proposition:compositionARS"}, $( \underline{j} ,\underline{ \varepsilon }')$ is the unique normal form of the word $( \underline{i} ,\underline{ \varepsilon })$ under the reduction system $(\mathcal{W}(d), \rightarrow)$. ◻ **Remark 18**. 
[\[remark:interchangerules\]]{#remark:interchangerules label="remark:interchangerules"} Each step of the abstract reduction systems $(\mathcal{W}(d), \rightsquigarrow)$ and $(\mathcal{W}(d)^{\text{irr}},\rightarrowtail)$ results in two letters interchanging positions, causing a change in the index of one letter by at most $\pm 1$. Suppose that in a step of the reduction we have interchanged the positions of $g_{ \underline{i} (l)}^{\underline{ \varepsilon }(l)}$ and $g_{ \underline{i} (m)}^{\underline{ \varepsilon }(m)}$, with say, $\underline{i} (l) > \underline{i} (m)$. Then the change in index can be summarized as follows. - If $g_{ \underline{i} (l)}$ moves to the *left* of $g_{ \underline{i} (m)}^{-1}$, then $g_{ \underline{i} (l)}$ is transformed to $g_{ \underline{i} (l)+1}$. - If $g_{ \underline{i} (l)}^{-1}$ moves to the *right* of $g_{ \underline{i} (m)}$, then $g_{ \underline{i} (l)}^{-1}$ is transformed to $g_{ \underline{i} (l)+1}^{-1}$. - If $g_{ \underline{i} (l)}$ moves to the *left* of $g_{ \underline{i} (m)}$ with $\underline{i} (l) -1 > \underline{i} (m)$, then $g_{ \underline{i} (l)}$ is transformed to $g_{ \underline{i} (l)-1}$. - If $g_{ \underline{i} (l)}^{-1}$ moves to the *right* of $g_{ \underline{i} (m)}^{-1}$ with $\underline{i} (l) -1 > \underline{i} (m)$, then $g_{ \underline{i} (l)}^{-1}$ is transformed to $g_{ \underline{i} (l)-1}^{-1}$. We reiterate that while the indices of letters might change in each step of the reduction, generators remain generators (and inverses remain inverses). Moreover, the letter with the smaller index $g_{ \underline{i} (m)}^{\underline{ \varepsilon }(m)}$ remains unchanged. # A method for putting neutral words into bins {#section:algorithm} In the previous section, we described an abstract reduction system $(\mathcal{W}(d), \rightarrow)$ and showed that each word in $\mathcal{W}(d)$ has a unique normal form. 
In this section, we will use the structure of this normal form to assign a pair-partition -- a "bin" -- to each neutral word of $\mathcal{W}(d)$. We start this section with some notations and definitions reviewing permutation groups and pair-partitions. **Notation 19** (Review of notations about permutations). ${\mathcal S} _d$ will denote the group of permutations of the set $[d] = \{1, \ldots, d\}$ for $d \in {\mathbb{N}}$. **Notation 20** (Review of notations about pair-partitions). ${\mathcal P} _2(d)$ will denote the set of pair-partitions of the set $[d]$ for $d \in {\mathbb{N}}$. ${\mathcal P} _2(d)$ is non-empty only for $d$ even, and in that case, a pair-partition is of the form $\pi = \{V_1, \ldots, V_{\frac{d}{2}}\}$ with $V_i \subseteq [d]$ and $|V_i|=2$ for each $i \in [\frac{d}{2}]$. **Notation and Definition 21**. For $d$ an even integer, $\pi_{\text{rain}}\in {\mathcal P} _2(d)$ will denote the rainbow pair-partition given by $$\pi_{\text{rain}}= \{\{1, d\}, \ldots, \{\frac{d}{2}, \frac{(d+2)}{2}\}\}.$$ We will use the abstract reduction system $(\mathcal{W}(d), \rightarrow)$ discussed in Subsection [2.3](#subsection:ARSF){reference-type="ref" reference="subsection:ARSF"} to associate to each neutral word in $\mathcal{W}(d)$ a unique pair partition of the set $[d]$. The pair partitions of $[d]$ will be the "bins" into which we place our words. We will do this by first assigning to each word a permutation of $[d]$, and then assigning a pair partition to the permutation using the rainbow pair-partition as a tool. Let us now describe how to find a permutation for a given word using its normal form described by [\[equation:normalform\]](#equation:normalform){reference-type="eqref" reference="equation:normalform"}. 
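As a concrete sanity check on the reduction systems, the normal-form computation can be sketched in a few lines of code. The sketch below is ours and not part of the formal development: a word is a list of pairs `(index, eps)` with `eps = 1` for a generator $g_i$ and `eps = -1` for its inverse; the rules of the first reduction (moving generators left past inverses) are reconstructed from Remark 18, and the equal-index swap $(g_i^{-1}, g_i) \mapsto (g_i, g_i^{-1})$ is our assumption, harmless because both sub-words evaluate to the identity of $F$.

```python
# Sketch (ours) of the composed reduction: first the phase moving
# generators left past inverses, then the sorting phase of the rules in
# equation (rewriting2).  Words are lists of pairs (index, eps).

def step_phase1(w):
    """Apply one leftmost rewrite of a sub-word (g_x^{-1}, g_y)."""
    for k in range(len(w) - 1):
        (x, ex), (y, ey) = w[k], w[k + 1]
        if ex == -1 and ey == 1:
            if x < y:                                  # generator moves left, index +1
                w[k], w[k + 1] = (y + 1, 1), (x, -1)
            elif x > y:                                # inverse moves right, index +1
                w[k], w[k + 1] = (y, 1), (x + 1, -1)
            else:                                      # equal indices: plain swap
                w[k], w[k + 1] = (x, 1), (x, -1)
            return True
    return False

def step_phase2(w):
    """Apply one leftmost rewrite from equation (rewriting2)."""
    for k in range(len(w) - 1):
        (x, ex), (y, ey) = w[k], w[k + 1]
        if ex == ey == 1 and y - 1 > x:                # positive part
            w[k], w[k + 1] = (y - 1, 1), (x, 1)
            return True
        if ex == ey == -1 and x - 1 > y:               # negative part
            w[k], w[k + 1] = (y, -1), (x - 1, -1)
            return True
    return False

def normal_form(word):
    w = list(word)
    while step_phase1(w):
        pass
    while step_phase2(w):
        pass
    return w

# The word of Example 25 below reduces to the normal form stated there.
w = [(2, 1), (0, 1), (18, -1), (4, -1), (0, -1), (16, 1), (3, 1), (2, -1)]
print(normal_form(w))
# [(16, 1), (2, 1), (3, 1), (0, 1), (0, -1), (3, -1), (2, -1), (16, -1)]
```

By confluence, the leftmost strategy used here lands on the same normal form as any other strategy; the word of Example 16 similarly reduces to $(g_7,g_3,g_0^{-1},g_2^{-1},g_4^{-1})$.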
## Assigning permutations to words {#subsection:PermutationsGivenNeutralWords} The abstract reduction system $(\mathcal{W}(d), \rightarrow)$ results in a unique normal form [\[equation:normalform\]](#equation:normalform){reference-type="eqref" reference="equation:normalform"} for each word $w =( \underline{i} ,\underline{ \varepsilon }) \in \mathcal{W}(d)$. For each $l \in [d]$, $g_{ \underline{i} (l)}^{\underline{ \varepsilon }(l)}$ is transformed uniquely to some $g_{ \underline{j} (u_l)}^{\underline{ \varepsilon }(l)}$ in the normal form $( \underline{j} ,\underline{ \varepsilon }')$. The steps in the transformation of the index $\underline{i} (l)$ to $\underline{j} (u_l)$ are described by Remark [\[remark:interchangerules\]](#remark:interchangerules){reference-type="ref" reference="remark:interchangerules"}. Define the permutation $\tau( \underline{i} ,\underline{ \varepsilon }) \in {\mathcal S} _d$ by $$\label{equation:permdef} \tau( \underline{i} ,\underline{ \varepsilon })(l) := u_l, \ l\in [d].$$ **Lemma 22**. *[\[lemma:perm\]]{#lemma:perm label="lemma:perm"} Let $w = ( \underline{i} , \underline{ \varepsilon }) \in \mathcal{W}(d)$ and $( \underline{j} , \underline{ \varepsilon }')$ be its unique normal form as in [\[equation:normalform\]](#equation:normalform){reference-type="eqref" reference="equation:normalform"}. Let $\tau( \underline{i} ,\underline{ \varepsilon }) \in {\mathcal S} _d$ be the permutation defined as in [\[equation:permdef\]](#equation:permdef){reference-type="eqref" reference="equation:permdef"}.
Then we have the following inequalities: $$-d \leq \underline{j} (l) - \underline{i} (\tau( \underline{i} ,\underline{ \varepsilon })^{-1}(l)) \leq d, \quad l \in \{1, \ldots, d\}.$$ If $\sum_{l=1}^d \underline{ \varepsilon }(l) =0$, then $d$ is even, and $$\underline{ \varepsilon }' = \underline{ \varepsilon }_0 := (\underbrace{+1, \ldots, +1}_{\frac{d}{2} \text{ times }}, \underbrace{-1, \ldots, -1}_{\frac{d}{2} \text{ times }})$$ Further, in this case, we have the inequalities: $$-\frac{d}{2} \leq \underline{j} (l) - \underline{i} (\tau( \underline{i} ,\underline{ \varepsilon })^{-1}(l)) \leq \frac{d}{2}, \quad l \in \{1, \ldots, d\}.$$* *Proof.* The reduction of a word $( \underline{i} ,\underline{ \varepsilon })$ of length $d$ to its normal form $( \underline{j} ,\underline{ \varepsilon }')$ can result in at most $d$ position changes for a given letter, and correspondingly can cause a change of at most $\pm d$ in its index, as detailed in Remark [\[remark:interchangerules\]](#remark:interchangerules){reference-type="ref" reference="remark:interchangerules"}. As $\underline{j} (l)$ is the transformed index of a letter which was originally in the position $\tau( \underline{i} ,\underline{ \varepsilon })^{-1}(l)$ in the original word $( \underline{i} ,\underline{ \varepsilon })$, we have $$-d \leq \underline{j} (l) - \underline{i} (\tau( \underline{i} ,\underline{ \varepsilon })^{-1}(l)) \leq d, \ l \in \{1,\ldots, d\}$$ Next, suppose that $\sum_{t=1}^d \underline{ \varepsilon }(t) =0$. As $\underline{ \varepsilon }$ takes values in $\{-1, 1\}$, this means that $d$ must be even and that the given word is composed of exactly $\frac{d}{2}$ generators and $\frac{d}{2}$ inverses of generators. 
Hence $\underline{ \varepsilon }' = \underline{ \varepsilon }_0$, where $$\underline{ \varepsilon }_0 := (\underbrace{+1, \ldots, +1}_{\frac{d}{2} \text{ times }}, \underbrace{-1, \ldots, -1}_{\frac{d}{2} \text{ times }}).$$ Now, the abstract reduction system allows any letter to move to the right (or to the left) of at most $\frac{d}{2}$ generators and at most $\frac{d}{2}$ inverses of generators. Hence by Remark [\[remark:interchangerules\]](#remark:interchangerules){reference-type="ref" reference="remark:interchangerules"}, any index $\underline{i} (\tau( \underline{i} ,\underline{ \varepsilon })^{-1}(l))$ can change by at most $\pm \frac{d}{2}$ to some $\underline{j} (l)$ in the normal form $( \underline{j} ,\underline{ \varepsilon }_0)$. ◻ Let $d, n\in {\mathbb{N}}$. Consider the following sets of words $$\mathcal{W}_0(d) := \Bigl\{ ( \underline{i} , \underline{ \varepsilon }) \begin{array}{ll} \vline & \underline{i} : [d] \to {\mathbb{N}} , \ \underline{ \varepsilon }: [d] \to \{ -1,1 \}, \\ \vline & \mbox{such that} \ \mathrm{eval} _F( \underline{i} ,\underline{ \varepsilon }) = g_{ \underline{i} (1)}^{\underline{ \varepsilon }(1)} \cdots g_{ \underline{i} (d)}^{\underline{ \varepsilon }(d)} = e \end{array} \Bigr\},$$ and $$\mathcal{W}_{0}(d,n):= \{( \underline{i} ,\underline{ \varepsilon }) \in \mathcal{W}_0(d) \mid \underline{i} : [d] \to \{0,\ldots, n-1\}\}.$$ In other words, $\mathcal{W}_0(d)$ is the set of neutral words in $\mathcal{W}(d)$ and $\mathcal{W}_{0}(d,n)$ is the set of neutral words of length $d$ composed of the first $n$ generators $\{g_0, \ldots, g_{n-1}\}$ of $F$. It is immediate that $\mathcal{W}_0(d)$ and $\mathcal{W}_{0}(d,n)$ are non-empty if and only if $d$ is even. We record some easy corollaries of Lemma [\[lemma:perm\]](#lemma:perm){reference-type="ref" reference="lemma:perm"}. **Corollary 23**.
*[\[corollary:normalform\]]{#corollary:normalform label="corollary:normalform"} Let $( \underline{i} , \underline{ \varepsilon }) \in \mathcal{W}_0(d)$. Then the normal form of $( \underline{i} ,\underline{ \varepsilon })$ is given by* *$$\label{equation:neutralnormalform} ( \underline{j} ,\underline{ \varepsilon }_0) = (g_{ \underline{j} (1)},\ldots, g_{ \underline{j} (\frac{d}{2})},g_{ \underline{j} (\frac{d}{2})}^{-1},\ldots, g_{ \underline{j} (1)}^{-1})$$ with $\underline{j} (1)+1\geq \underline{j} (2), \ldots, \underline{j} (\frac{d}{2}-1)+1 \geq \underline{j} (\frac{d}{2})$. Further, the permutation $\tau( \underline{i} ,\underline{ \varepsilon }) \in {\mathcal S} _d$ satisfies $$\label{equation:ij-inequality} -\frac{d}{2} \leq \underline{j} (l) - \underline{i} (\tau( \underline{i} ,\underline{ \varepsilon })^{-1}(l)) \leq \frac{d}{2}, \ l \in \{1,\ldots, \frac{d}{2}\}.$$* **Corollary 24**. *[\[corollary:ineq\]]{#corollary:ineq label="corollary:ineq"} Let $( \underline{i} , \underline{ \varepsilon }) \in \mathcal{W}_0(d)$ and $\tau( \underline{i} ,\underline{ \varepsilon }) =\tau$.
Then for each $k \in [\frac{d}{2}]$, $$| \underline{i} (\tau^{-1}(k)) - \underline{i} (\tau^{-1}(d-k+1))| \leq d.$$* *Proof.* From Corollary [\[corollary:normalform\]](#corollary:normalform){reference-type="ref" reference="corollary:normalform"}, we have that given a word $( \underline{i} ,\underline{ \varepsilon }) \in \mathcal{W}_0(d)$ and its unique normal form $( \underline{j} ,\underline{ \varepsilon }_0)$, $$-\frac{d}{2} \leq \underline{j} (l) - \underline{i} (\tau^{-1}(l)) \leq \frac{d}{2}, \ l \in [d].$$ As $\underline{j} (k) = \underline{j} (d-k+1)$ for each $k \in [\frac{d}{2}]$, we arrive at $$\begin{aligned} | \underline{i} (\tau^{-1}(k)) - \underline{i} (\tau^{-1}(d-k+1))| & \leq | \underline{i} (\tau^{-1}(k)) - \underline{j} (k)|+ | \underline{j} (d-k+1) - \underline{i} (\tau^{-1}(d-k+1))| \\ & \leq \frac{d}{2}+\frac{d}{2} = d.\end{aligned}$$ ◻ ## Assigning pair partitions to neutral words {#subsection:NeutralWordsGivePairPartitions} We have already assigned to each word $( \underline{i} ,\underline{ \varepsilon })$ in $\mathcal{W}(d)$ (and in particular, in $\mathcal{W}_0(d)$) a permutation $\tau( \underline{i} ,\underline{ \varepsilon })$ corresponding to the normal form of $( \underline{i} ,\underline{ \varepsilon })$. The structure of the normal form $( \underline{j} ,\underline{ \varepsilon }_0)$ in [\[equation:neutralnormalform\]](#equation:neutralnormalform){reference-type="eqref" reference="equation:neutralnormalform"} makes it evident why $\mathrm{eval} _F( \underline{i} ,\underline{ \varepsilon }) =e$. Indeed, successive generators $g_{ \underline{j} (\frac{d}{2})}$, $g_{ \underline{j} (\frac{d}{2}-1)}, \ldots, g_{ \underline{j} (1)}$ and their inverses clearly cancel each other out in $\mathrm{eval} _F( \underline{j} ,\underline{ \varepsilon }_0)$. We use this observation to associate a pair partition of the set $[d]$ to the word $( \underline{i} ,\underline{ \varepsilon })$.
Let $\pi_{\text{rain}}$ denote the rainbow pair-partition as in Notation and Definition [Notation and Definition 21](#notdef:rainbow){reference-type="ref" reference="notdef:rainbow"}. For any permutation $\sigma \in {\mathcal S} _d$ we write $\sigma^{-1}(\pi_{\text{rain}})$ to mean the pair partition $$\{\{\sigma^{-1}(1),\sigma^{-1}(d)\}, \{\sigma^{-1}(2),\sigma^{-1}(d-1)\}, \ldots, \{\sigma^{-1}(\frac{d}{2}),\sigma^{-1}(\frac{d+2}{2})\}\}.$$ Define the map $\pi: \mathcal{W}_0(d) \to {\mathcal P} _2(d)$ as: $$\label{equation:permrainbow} \pi( \underline{i} ,\underline{ \varepsilon }) := (\tau( \underline{i} , \underline{ \varepsilon }))^{-1} (\pi_{\text{rain}}).$$ Hence, for each word $( \underline{i} ,\underline{ \varepsilon }) \in \mathcal{W}_0(d)$, we obtain a unique "bin" or pair partition, given by $\pi( \underline{i} ,\underline{ \varepsilon })$. This pair partition records which generator and inverse generator from the original neutral word $( \underline{i} ,\underline{ \varepsilon })$ eventually "pair up" in the normal form $( \underline{j} ,\underline{ \varepsilon }_0)$ to cancel each other out in the evaluation of the word as the identity element. Indeed, as the rainbow pair partition $\pi_{\text{rain}}$ consists of pairs of the form $\{k,d-k+1\}$ for $k \in [\frac{d}{2}]$, the pair partition $\pi( \underline{i} ,\underline{ \varepsilon })$ consists of pairs of the form $\{\tau( \underline{i} ,\underline{ \varepsilon })^{-1}(k), \tau( \underline{i} ,\underline{ \varepsilon })^{-1}(d-k+1)\}$ for $k \in [\frac{d}{2}]$. We illustrate with an example. **Example 25**.
Suppose $d=8$ and $( \underline{i} ,\underline{ \varepsilon }) \in \mathcal{W}_0(8,20)$ is given by $$( \underline{i} ,\underline{ \varepsilon }) = (g_{2},g_0,g_{18}^{-1},g_4^{-1},g_0^{-1},g_{16},g_3,g_2^{-1}).$$ Then the normal form $( \underline{j} ,\underline{ \varepsilon }_0)$ of $( \underline{i} ,\underline{ \varepsilon })$ is $$( \underline{j} ,\underline{ \varepsilon }_0) = (g_{16},g_2,g_3,g_0,g_0^{-1},g_3^{-1}, g_2^{-1}, g_{16}^{-1}).$$ We visualize the rainbow pair partition on the normal form as $$\begin{tikzpicture}[scale=0.5] \draw (0,0) node[below] {$g_{16}$} -- (0,4) -- (14,4) -- (14,0) node[below] {$g_{16}^{-1}$}; \draw (2,0) node[below] {$g_2$} -- (2,3) -- (12,3) -- (12,0) node[below] {$g_{2}^{-1}$}; \draw (4,0) node[below] {$g_{3}$} -- (4,2) -- (10,2) -- (10,0) node[below] {$g_{3}^{-1}$}; \draw (6,0) node[below] {$g_{0}$} -- (6,1) -- (8,1) -- (8,0) node[below] {$g_{0}^{-1}$}; \end{tikzpicture}$$ The permutation $\tau( \underline{i} ,\underline{ \varepsilon })$ is given by $$\tau( \underline{i} ,\underline{ \varepsilon }) = (1,2,4,6)(3,8,7).$$ For instance, we see that the letter $g_{ \underline{i} (1)}=g_2$ is transformed to $g_{ \underline{j} (2)}=g_2$ in $( \underline{j} ,\underline{ \varepsilon }_0)$ (so $\tau( \underline{i} ,\underline{ \varepsilon })(1)=2$), or the letter $g_{ \underline{i} (3)}^{-1}= g_{18}^{-1}$ is transformed to $g_{ \underline{j} (8)}^{-1}=g_{16}^{-1}$ in $( \underline{j} ,\underline{ \varepsilon }_0)$ (so $\tau( \underline{i} ,\underline{ \varepsilon })(3)=8$). 
We can also verify as given in Lemma [\[lemma:perm\]](#lemma:perm){reference-type="ref" reference="lemma:perm"}, for instance, that $$| \underline{j} (8)- \underline{i} (\tau( \underline{i} ,\underline{ \varepsilon })^{-1}(8))| = | \underline{j} (8)- \underline{i} (3)| = |16-18| =2 \leq 4=\frac{8}{2} =\frac{d}{2}.$$ The corresponding pair partition is then given by $$\pi( \underline{i} ,\underline{ \varepsilon }) = \left\{\{1,8\}, \{2,5\}, \{3,6\}, \{4,7\}\right\}.$$ The pair partition $\pi( \underline{i} ,\underline{ \varepsilon })$ is visualized below to indicate which generators and which inverses pair up in the evaluation of the normal form to cancel each other out. $$\begin{tikzpicture}[scale=0.5] \draw (0,0) node[below] {$g_{2}$} -- (0,4) -- (14,4) -- (14,0) node[below] {$g_{2}^{-1}$} ; \draw (2,0) node[below] {$g_{0}$} -- (2,3) -- (8,3) -- (8,0) node[below] {$g_{0}^{-1}$}; \draw (4,0) node[below] {$g_{18}^{-1}$} -- (4,2) -- (10,2) -- (10,0) node[below] {$g_{16}$} ; \draw (6,0) node[below] {$g_{4}^{-1}$} -- (6,1) -- (12,1) -- (12,0) node[below] {$g_{3}$}; \end{tikzpicture}$$ ## Obtaining neutral words from permutations and pair partitions {#subsection:PairPartitionsGiveNeutralWords} In Subsection [3.2](#subsection:NeutralWordsGivePairPartitions){reference-type="ref" reference="subsection:NeutralWordsGivePairPartitions"}, we described how to assign a unique permutation and a pair partition to every neutral word. In this subsection, we describe some constructions in the other direction. That is, given a permutation (or in some cases, only a pair partition) we can complete a partially filled word to give a neutral word which corresponds to the given permutation (or pair partition, respectively). We start with a simple but key lemma. **Lemma 26**. *[\[lemma:smallestindex\]]{#lemma:smallestindex label="lemma:smallestindex"} Let $( \underline{i} ,\underline{ \varepsilon }) \in \mathcal{W}_0(d)$ with $i_0 = \min \{ \underline{i} (l) \mid l \in [d]\}$.
For each pair $\{l,m\} \in \pi( \underline{i} ,\underline{ \varepsilon })$, $\underline{i} (l) = i_0$ if and only if $\underline{i} (m) = i_0$.* *Proof.* Remark [\[remark:interchangerules\]](#remark:interchangerules){reference-type="ref" reference="remark:interchangerules"} explains that the letter with the smaller index is unchanged in any sub-word of length $2$ when the reduction $\rightarrow$ is applied to a word. Hence, $\underline{i} (k) = i_0$ if and only if $\underline{j} (\tau( \underline{i} ,\underline{ \varepsilon })(k))=i_0$. Given the pair $\{l,m\} \in \pi( \underline{i} ,\underline{ \varepsilon })$, $\underline{i} (l)=i_0$ if and only if $\underline{i} (m) = \underline{j} (\tau( \underline{i} ,\underline{ \varepsilon })(m)) = \underline{j} (\tau( \underline{i} ,\underline{ \varepsilon })(l)) = \underline{i} (l) = i_0$. ◻ In the following proposition, we will use Lemma [\[lemma:smallestindex\]](#lemma:smallestindex){reference-type="ref" reference="lemma:smallestindex"} to show that a neutral word $( \underline{i} ,\underline{ \varepsilon })$ can be determined by $\pi( \underline{i} ,\underline{ \varepsilon })$ if $\underline{ \varepsilon }$ is completely known, and $\underline{i}$ is partially known. We use a new abstract reduction system with the goal of pushing out the generators with the smallest index to the left of a word, and the inverse generators with the smallest index to the right of a word. The reason we use this new reduction system rather than $(\mathcal{W}(d), \rightarrow)$ used earlier is that we are not guaranteed that the indices of the letters provided are sufficiently spread out. Compare this with Proposition [\[proposition:spaceddetermineneutral\]](#proposition:spaceddetermineneutral){reference-type="ref" reference="proposition:spaceddetermineneutral"} where the indices are spread out enough for us to know precisely how the reduction $\rightarrow$ changes an index, even when the actual value of the index is unknown.
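The peeling strategy just described can also be sketched in code; this is our illustration and not part of the formal argument. Unknown inverse indices are tracked symbolically as an offset from their (unknown) original value, the rewriting rules are transcribed from the reduction $\twoheadrightarrow$ defined in the proof of the following proposition, and we use Lemma 26 at each stage to decide which unknown letters carry the minimal current index.

```python
# Sketch (ours) of the completion argument: given eps, the pair partition,
# and the generator indices, recover the inverse indices by repeatedly
# peeling off the letters of smallest current index.  A letter is a list
# [slot, eps, off]: its current index is values[slot] + off, where
# values[slot] is absent (unknown) for the inverses to be determined.

def complete(word, values, partner):
    if not word:
        return values
    def cur(x):
        return values[x[0]] + x[2] if x[0] in values else None
    i0 = min(c for c in map(cur, word) if c is not None)
    ent = {x[0]: x for x in word}
    # Lemma 26: in each pair, one index is minimal iff the other one is.
    for x in word:
        if x[0] not in values and cur(ent[partner[x[0]]]) == i0:
            values[x[0]] = i0 - x[2]
    # the reduction: push minimal generators left, minimal inverses right
    changed = True
    while changed:
        changed = False
        for k in range(len(word) - 1):
            a, b = word[k], word[k + 1]
            a_min, b_min = cur(a) == i0, cur(b) == i0
            if a[1] == -1 and a_min and b[1] == 1 and b_min:
                word[k], word[k + 1] = b, a          # swap g_{i0}^{-1} g_{i0}
            elif b[1] == 1 and b_min and not (a[1] == 1 and a_min):
                a[2] += 1                            # g_{i0} passes a; index of a grows
                word[k], word[k + 1] = b, a
            elif a[1] == -1 and a_min and not (b[1] == -1 and b_min):
                b[2] += 1                            # g_{i0}^{-1} passes b
                word[k], word[k + 1] = b, a
            else:
                continue
            changed = True
    # strip the outer g_{i0} ... g_{i0}^{-1} letters and recurse on the rest
    lo, hi = 0, len(word)
    while lo < hi and word[lo][1] == 1 and cur(word[lo]) == i0:
        lo += 1
    while hi > lo and word[hi - 1][1] == -1 and cur(word[hi - 1]) == i0:
        hi -= 1
    return complete(word[lo:hi], values, partner)

# Data of Example 28: slots 1..6, generator indices known for slots 1, 5, 6.
values = {1: 2, 5: 1, 6: 0}
pairs = {1: 4, 4: 1, 2: 5, 5: 2, 3: 6, 6: 3}
eps = [1, -1, -1, -1, 1, 1]
word = [[s + 1, e, 0] for s, e in enumerate(eps)]
complete(word, values, pairs)
print(values[2], values[3], values[4])
# 2 0 1
```

Running this on the data of Example 28 recovers the inverse indices $\underline{i}(2)=2$, $\underline{i}(3)=0$, $\underline{i}(4)=1$ found there.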
**Proposition 27**. *[\[proposition:determineneutral\]]{#proposition:determineneutral label="proposition:determineneutral"} Let $d$ be an even integer. Suppose that $( \underline{i} ,\underline{ \varepsilon }) \in \mathcal{W}_0(d)$, and the tuple $\underline{ \varepsilon }: [d] \to \{-1,1\}$ is given such that $\underline{ \varepsilon }(l_+) = 1$ and $\underline{ \varepsilon }(l_-) = -1$ for each pair $\{l_+,l_-\} \in \pi( \underline{i} ,\underline{ \varepsilon })$. If the values of $\underline{i} (l_+)$ are known for each $\{l_+,l_-\} \in \pi( \underline{i} ,\underline{ \varepsilon })$, then the values of $\underline{i} (l_-)$ are uniquely determined.* *Proof.* We prove the proposition by induction on $d$, the length of the word. Suppose that $d=2$. The pair partition must be given by $\pi( \underline{i} ,\underline{ \varepsilon }) = \{\{1,2\}\}$. Then we must have $\underline{i} (l_-)= \underline{i} (l_+)$ by Lemma [\[lemma:smallestindex\]](#lemma:smallestindex){reference-type="ref" reference="lemma:smallestindex"}. Next, suppose the proposition holds for all (even) $d<d_0$. Suppose $( \underline{i} ,\underline{ \varepsilon }) \in \mathcal{W}_0(d_0)$ with the value of $\underline{i} (l_+)$ given for each $\{l_+,l_-\}\in \pi( \underline{i} ,\underline{ \varepsilon })$. Let $i_0= \min\{ \underline{i} (l) \mid l \in [d]\}$. Then by Lemma [\[lemma:smallestindex\]](#lemma:smallestindex){reference-type="ref" reference="lemma:smallestindex"}, we must have $\underline{i} (l_-)=i_0$ whenever $\underline{i} (l_+)= i_0$. To show that the remaining values of $\underline{i}$ can be uniquely determined, we will use a new abstract reduction system $(\mathcal{W}(d),\twoheadrightarrow)$. The goal is to push outwards those letters whose indices are the smallest for a given word, and thereby arrive at a shorter word on which the induction hypothesis can be used. 
The reduction $\twoheadrightarrow$ is given by $$u = w_1ww_2 \twoheadrightarrow v= w_1f(w)w_2.$$ Here $w$, a sub-word of length $2$, is rewritten as $f(w)$ with the following rules: $$\begin{aligned} f(g_{ \underline{i} (m)}^{\underline{ \varepsilon }(m)}, g_{i_0}) &= (g_{i_0},g_{ \underline{i} (m)+1}^{\underline{ \varepsilon }(m)}), & \text{ if } i_0 = \min\{ \underline{i} (l) \mid l \in [d]\} \text{ and } \underline{i} (m) > i_0 \\ f(g_{i_0}^{-1},g_{ \underline{i} (m)}^{\underline{ \varepsilon }(m)}) &= (g_{ \underline{i} (m)+1}^{\underline{ \varepsilon }(m)},g_{i_0}^{-1}), & \text{ if } i_0 = \min\{ \underline{i} (l) \mid l \in [d]\} \text{ and } \underline{i} (m) > i_0 \\ f(g_{i_0}^{-1},g_{i_0}) & = (g_{i_0},g_{i_0}^{-1}) & \text{ if } i_0 = \min\{ \underline{i} (l) \mid l \in [d]\}. \end{aligned}$$ We will show that $\twoheadrightarrow$ is locally confluent and terminating. It is terminating because in a given word $( \underline{i} ,\underline{ \varepsilon })$ with $\min\{ \underline{i} (l) \mid l \in [d]\} = i_0$, there are at most $d$ occurrences of $g_{i_0}$ (and $g_{i_0}^{-1}$) and each can be moved at most $(d-1)$ times so that the rewriting terminates in less than $2d(d-1)$ steps. For local confluence, overlapping applications of these rules can be resolved by a diamond argument similar to the one in the proof of Proposition [\[proposition:rewriting2\]](#proposition:rewriting2){reference-type="ref" reference="proposition:rewriting2"}. Hence, Newman's lemma gives a unique normal form of the following type: $$g_{i_0}\cdots g_{i_0} \tilde{w} g_{i_0}^{-1} \cdots g_{i_0}^{-1}.$$ Here $\tilde{w} =(\tilde{ \underline{i} },\tilde{\underline{ \varepsilon }})$ is a uniquely determined word of length strictly less than $d_0$, where $\tilde{\underline{ \varepsilon }}$ is known, and the values of $\tilde{ \underline{i} }$ are known in terms of the values of $\underline{i}$.
Further, $\mathrm{eval} _F(\tilde{w})=e$, and the pairs in $\pi(\tilde{ \underline{i} },\tilde{\underline{ \varepsilon }})$ correspond to the pairs in $\pi( \underline{i} ,\underline{ \varepsilon })$. By the induction hypothesis, $\tilde{ \underline{i} }$ can be determined completely. We have already determined $\underline{i} (l_-) = i_0$ for all pairs $\{l_+,l_-\} \in \pi( \underline{i} ,\underline{ \varepsilon })$ with $\underline{i} (l_+)=i_0$. For the remaining pairs $\{m_+,m_-\} \in \pi( \underline{i} ,\underline{ \varepsilon })$, the indices $\underline{i} (m_-)$ can now be recovered from the tuple $\tilde{ \underline{i} }$. ◻ We demonstrate with an example. **Example 28**. Suppose $( \underline{i} ,\underline{ \varepsilon }) \in \mathcal{W}_0(6)$ with pair partition $\pi( \underline{i} ,\underline{ \varepsilon }) = \{\{1,4\}, \{2,5\}, \{3,6\}\}$ and $\underline{ \varepsilon }=(1,-1,-1,-1,1,1)$. The value of $\underline{i} (l_+)$ is given for each $\{l_+,l_-\} \in \pi( \underline{i} ,\underline{ \varepsilon })$: $$\begin{tikzpicture}[scale=0.5] \draw (0,0) node[below] {$g_{2}$} -- (0,3) -- (6,3) -- (6,0) node[below] {\color{blue} $g_{ \underline{i} (4)}^{-1}$} ; \draw (2,0) node[below] {\color{blue} $g_{ \underline{i} (2)}^{-1}$} -- (2,2) -- (8,2) -- (8,0) node[below] {$g_{1}$} ; \draw (4,0) node[below] {\color{blue} $g_{ \underline{i} (3)}^{-1}$} -- (4,1) -- (10,1) -- (10,0) node[below] {$g_{0}$}; \end{tikzpicture}$$ We will fill in the missing inverse generators by using Proposition [\[proposition:determineneutral\]](#proposition:determineneutral){reference-type="ref" reference="proposition:determineneutral"}. We identify the smallest index $i_0=0 = \underline{i} (6)$ and fill in $\underline{i} (3)= \underline{i} (6)=0$ as $\{3,6\} \in \pi( \underline{i} ,\underline{ \varepsilon })$.
The unique normal form under the reduction $\twoheadrightarrow$ is $$(g_{0},g_3,g_{ \underline{i} (2)+1}^{-1}, g_{ \underline{i} (4)+2}^{-1},g_3,g_0^{-1}).$$ The shorter word is given by $\tilde{w} = (g_3,g_{ \underline{i} (2)+1}^{-1},g_{ \underline{i} (4)+2}^{-1},g_3)$ and shown below with the relevant pair partition: $$\begin{tikzpicture}[scale=0.5] \draw (0,0) node[below] {$g_{3}$} -- (0,2) -- (6,2) -- (6,0) node[below] {\color{blue} $g_{ \underline{i} (4)+2}^{-1}$} ; \draw (2,0) node[below] {\color{blue} $g_{ \underline{i} (2)+1}^{-1}$} -- (2,1) -- (8,1) -- (8,0) node[below] {$g_{3}$} ; \end{tikzpicture}$$ Now identifying the smallest index in $\tilde{w}$ as $3$, we get $\underline{i} (4)+2=3$ and $\underline{i} (2)+1=3$, so the completed word is as follows: $$\begin{tikzpicture}[scale=0.5] \draw (0,0) node[below] {$g_{2}$} -- (0,3) -- (6,3) -- (6,0) node[below] {\color{blue} $g_{1}^{-1}$}; \draw (2,0) node[below] {\color{blue}$g_{2}^{-1}$} -- (2,2) -- (8,2) -- (8,0) node[below] {$g_{1}$} ; \draw (4,0) node[below] {\color{blue}$g_{0}^{-1}$} -- (4,1) -- (10,1) -- (10,0) node[below] {$g_{0}$}; \end{tikzpicture}$$ Above, we demonstrated how a partially-filled word $( \underline{i} ,\underline{ \varepsilon })$ that is known to be neutral can be completed uniquely given its associated pair partition $\pi( \underline{i} ,\underline{ \varepsilon })$. We will use this in Proposition [\[proposition:upperbound\]](#proposition:upperbound){reference-type="ref" reference="proposition:upperbound"}. Next, we will show that given a permutation $\tau$, and letters with sufficiently spaced out indices, we can always fill in inverse generators uniquely to arrive at a neutral word $( \underline{i} ,\underline{ \varepsilon })$ such that $\tau( \underline{i} ,\underline{ \varepsilon }) =\tau$. Here, we will use our original reduction system $(\mathcal{W}(d), \rightarrow)$. **Proposition 29**.
*[\[proposition:spaceddetermineneutral\]]{#proposition:spaceddetermineneutral label="proposition:spaceddetermineneutral"} Let $d$ be an even integer and $\tau \in {\mathcal S} _d$ be a permutation. Let the tuple $\underline{ \varepsilon }: [d] \to \{-1,1\}$ be given such that $$\underline{ \varepsilon }(\tau^{-1}(l)) = \begin{cases} 1 & \text{ if } l \in \{1,\ldots, \frac{d}{2}\} \\ -1 & \text{ if } l \in \{\frac{d}{2}+1, \ldots, d\}. \end{cases}$$ Suppose we are given values $\underline{i} (\tau^{-1}(l))$ for $l \in \{1,\ldots, \frac{d}{2}\}$ such that $$\label{equation:spaced} \underline{i} (\tau^{-1}(l)) < \underline{i} (\tau^{-1}(l-1)) -3d, \ l \in \{2,\ldots, \frac{d}{2}\}.$$ Then there exist unique values $\underline{i} (\tau^{-1}(l))$ for $l \in \{\frac{d}{2}+1, \ldots, d\}$ such that $( \underline{i} ,\underline{ \varepsilon }) \in \mathcal{W}_0(d)$ and $\tau( \underline{i} ,\underline{ \varepsilon })= \tau$.* *Proof.* By Corollary [\[corollary:ineq\]](#corollary:ineq){reference-type="ref" reference="corollary:ineq"}, in order to ensure that $( \underline{i} ,\underline{ \varepsilon }) \in \mathcal{W}_0(d)$ with $\tau( \underline{i} ,\underline{ \varepsilon })=\tau$, a necessary condition is $$| \underline{i} (\tau^{-1}(k))- \underline{i} (\tau^{-1}(d-k+1))| \leq d, \ k \in [\frac{d}{2}].$$ Hence we will restrict our choice of $\underline{i} (\tau^{-1}(d-k+1))$ to satisfy $$\label{equation:restriction} \underline{i} (\tau^{-1}(k))-d \leq \underline{i} (\tau^{-1}(d-k+1)) \leq \underline{i} (\tau^{-1}(k))+d, \ k \in [\frac{d}{2}].$$ Recall the abstract reduction system $(\mathcal{W}(d), \rightarrow)$ described in Section [2.3](#subsection:ARSF){reference-type="ref" reference="subsection:ARSF"} where $\rightarrow$ is the composed reduction given by $\rightarrow \, = \, \stackrel{*}{\rightarrowtail} \circ \stackrel{*}\rightsquigarrow$. 
Let $( \underline{j} ,\underline{ \varepsilon }_0)$ be the unique normal form of $( \underline{i} ,\underline{ \varepsilon })$ under $(\mathcal{W}(d), \rightarrow)$. As described in the beginning of Subsection [3.1](#subsection:PermutationsGivenNeutralWords){reference-type="ref" reference="subsection:PermutationsGivenNeutralWords"} and in the proof of Lemma [\[lemma:perm\]](#lemma:perm){reference-type="ref" reference="lemma:perm"}, each letter $g_{ \underline{i} (\tau^{-1}(l))}^{\underline{ \varepsilon }(\tau^{-1}(l))}$ is transformed to a letter $g_{ \underline{j} (v_l)}^{\underline{ \varepsilon }(\tau^{-1}(l))}$, where $$\label{equation:transformed} v_l = \tau( \underline{i} ,\underline{ \varepsilon })(\tau^{-1}(l)), \ \underline{j} (v_l)= \underline{i} (\tau^{-1}(l)) +q(l), \ \text{and } q(l) \in \{-\frac{d}{2}, \ldots, 0, \ldots, \frac{d}{2}\}$$ We claim that the values of $q(l)$ for each $l \in [\frac{d}{2}]$ can be explicitly computed due to the inequalities [\[equation:spaced\]](#equation:spaced){reference-type="eqref" reference="equation:spaced"} and [\[equation:restriction\]](#equation:restriction){reference-type="eqref" reference="equation:restriction"}. Indeed, in each step of the first reduction $\rightsquigarrow$, we will rewrite a sub-word of length $2$ of the form: $$(g^{-1}_{ \underline{i} (\tau^{-1}(d-m+1))+p}, g_{ \underline{i} (\tau^{-1}(k))+q}), \ k, m \in [\frac{d}{2}], -d \leq p,q \leq d,$$ where $p,q$ are known quantities (for example, in the very first step, $p=q=0$). 
If $m>k$, then $$\underline{i} (\tau^{-1}(d-m+1)) +p \leq \underline{i} (\tau^{-1}(m)) +d+p < \underline{i} (\tau^{-1}(k))-2d +p \leq \underline{i} (\tau^{-1}(k))-d \leq \underline{i} (\tau^{-1}(k)) +q.$$ Hence the rewriting [\[equation:rewriting1\]](#equation:rewriting1){reference-type="eqref" reference="equation:rewriting1"} is given by $$(g^{-1}_{ \underline{i} (\tau^{-1}(d-m+1))+p}, g_{ \underline{i} (\tau^{-1}(k))+q}) \rightsquigarrow (g_{ \underline{i} (\tau^{-1}(k))+q+1}, g^{-1}_{ \underline{i} (\tau^{-1}(d-m+1))+p}).$$ Similarly, if $m<k$ or if $m=k$, the rewriting [\[equation:rewriting1\]](#equation:rewriting1){reference-type="eqref" reference="equation:rewriting1"} can be written precisely. We summarize as follows: $$\begin{aligned} (g^{-1}_{ \underline{i} (\tau^{-1}(d-m+1))+p}, g_{ \underline{i} (\tau^{-1}(k))+q}) & \rightsquigarrow (g_{ \underline{i} (\tau^{-1}(k))+q+1}, g^{-1}_{ \underline{i} (\tau^{-1}(d-m+1))+p}) & m>k,\\ (g^{-1}_{ \underline{i} (\tau^{-1}(d-m+1))+p}, g_{ \underline{i} (\tau^{-1}(k))+q}) & \rightsquigarrow (g_{ \underline{i} (\tau^{-1}(k))+q}, g^{-1}_{ \underline{i} (\tau^{-1}(d-m+1))+p+1}) & m < k,\\ (g^{-1}_{ \underline{i} (\tau^{-1}(d-m+1))+p}, g_{ \underline{i} (\tau^{-1}(k))+q}) & \rightsquigarrow (g_{ \underline{i} (\tau^{-1}(k))+q}, g^{-1}_{ \underline{i} (\tau^{-1}(d-m+1))+p}) & m=k. \end{aligned}$$ The second reduction $\rightarrowtail$ gives rewritings of sub-words of length $2$ of the form: $$\begin{aligned} (g_{ \underline{i} (\tau^{-1}(m))+p}, g_{ \underline{i} (\tau^{-1}(k))+q}) & \rightarrowtail (g_{ \underline{i} (\tau^{-1}(k))+q-1}, g_{ \underline{i} (\tau^{-1}(m))+p}) & k<m, \\ (g^{-1}_{ \underline{i} (\tau^{-1}(d-m+1))+p}, g^{-1}_{ \underline{i} (\tau^{-1}(d-k+1))+q}) & \rightarrowtail (g^{-1}_{ \underline{i} (\tau^{-1}(d-k+1))+q}, g^{-1}_{ \underline{i} (\tau^{-1}(d-m+1))+p-1}) & m<k.
\end{aligned}$$ Now, applying the composed reduction $\rightarrow \, = \, \stackrel{*}{\rightarrowtail} \circ \stackrel{*}{\rightsquigarrow}$ to $( \underline{i} ,\underline{ \varepsilon })$, we arrive at the normal form $( \underline{j} ,\underline{ \varepsilon }_0)$ with the value of each $q(l)$ known in [\[equation:transformed\]](#equation:transformed){reference-type="eqref" reference="equation:transformed"}. We recall that by definition of the permutation $\tau( \underline{i} ,\underline{ \varepsilon })$, each generator $g_{ \underline{i} (\tau( \underline{i} ,\underline{ \varepsilon })^{-1}(l))}$ is transformed to the generator $g_{ \underline{j} (l)}$. We also recall that the normal form $( \underline{j} ,\underline{ \varepsilon }_0)$ satisfies $$\label{equation:normalformrep} \underline{j} (l) \leq \underline{j} (l-1)+1, \ l \in \{2,\ldots, \frac{d}{2}\}.$$ By the hypothesis [\[equation:spaced\]](#equation:spaced){reference-type="eqref" reference="equation:spaced"}, we have $$\underline{i} (\tau^{-1}(l)) < \underline{i} (\tau^{-1}(l-1)) -3d, \ l \in \{2, \ldots, \frac{d}{2}\}.$$ Hence, for any choice of $q(l-1), q(l) \in \{-\frac{d}{2}, \ldots, \frac{d}{2}\}$, we have $$\begin{aligned} \underline{i} (\tau^{-1}(l)) +q(l) & < \underline{i} (\tau^{-1}(l-1)) +q(l)-3d \\ &\leq \underline{i} (\tau^{-1}(l-1)) +\frac{d}{2}-3d \\ &= \underline{i} (\tau^{-1}(l-1)) -\frac{5d}{2} \\ &\leq \underline{i} (\tau^{-1}(l-1)) +q(l-1) \\ &\leq \underline{i} (\tau^{-1}(l-1)) +q(l-1)+1.\end{aligned}$$ Thus we get $$\underline{j} (v_{l}) \leq \underline{j} (v_{l-1}) +1, \ l \in \{2,\ldots, \frac{d}{2}\}.$$ By the uniqueness of the normal form, and comparing with [\[equation:normalformrep\]](#equation:normalformrep){reference-type="eqref" reference="equation:normalformrep"}, we must have $l=v_l$ for every $l \in [\frac{d}{2}]$. A symmetric argument gives that $l=v_l$ for every $l \in\{\frac{d}{2}+1, \ldots, d\}$.
Altogether we get $\underline{j} (l) = \underline{i} (\tau^{-1}(l)) +q(l)$, where $q(l)$ is explicitly known for each $l \in [d]$. In order for $( \underline{j} ,\underline{ \varepsilon }_0)$, and therefore for $( \underline{i} ,\underline{ \varepsilon })$, to be a neutral word, we equate $\underline{j} (l)$ to $\underline{j} (d-l+1)$ for every $l \in [\frac{d}{2}]$ and uniquely determine the values of $\underline{i} (\tau^{-1}(d-l+1))$ for each $l \in [\frac{d}{2}]$. Finally, we have $\tau^{-1}(l) = \tau( \underline{i} ,\underline{ \varepsilon })^{-1}(v_l) = \tau( \underline{i} ,\underline{ \varepsilon })^{-1}(l)$ for all $l \in \{1, \ldots, d\}$, so that $\tau( \underline{i} ,\underline{ \varepsilon }) = \tau$ as required. ◻ We will use Proposition [\[proposition:spaceddetermineneutral\]](#proposition:spaceddetermineneutral){reference-type="ref" reference="proposition:spaceddetermineneutral"} in the proof of Proposition [\[proposition:lowerbound\]](#proposition:lowerbound){reference-type="ref" reference="proposition:lowerbound"}. Let us now look at an example that illustrates the filling in of missing letters. **Example 30**. Suppose $d=8$ and $\tau = (1,4,7,2) (3,6)(5,8)$. The pair partition in this case is $\pi = \tau^{-1} (\pi_{\text{rain}}) = \{\{2,5\}, \{4,7\}, \{3, 6\}, \{1,8\}\}$. Suppose $\underline{ \varepsilon }=(1,1,-1,-1,-1,1,1,-1)$ and $\underline{i} (1) = 0, \underline{i} (2) = 75, \underline{i} (6) = 25$ and $\underline{i} (7) = 50$. We represent the pair partition with the values of $\underline{i}$ that are already known and show that the other values can be filled in uniquely as shown in Proposition [\[proposition:spaceddetermineneutral\]](#proposition:spaceddetermineneutral){reference-type="ref" reference="proposition:spaceddetermineneutral"}.
$$\begin{tikzpicture}[scale=0.5] \draw (0,0) node[below] {$g_{0}$} -- (0,4) -- (14,4) -- (14,0) node[below] {\color{blue} $g_{ \underline{i} (8)}^{-1}$}; \draw (2,0) node[below] {$g_{75}$} -- (2,3) -- (8,3) -- (8,0) node[below] {\color{blue} $g_{ \underline{i} (5)}^{-1}$}; \draw (4,0) node[below] {\color{blue} $g_{ \underline{i} (3)}^{-1}$}-- (4,1) -- (10,1) -- (10,0) node[below] {$g_{25}$} ; \draw (6,0) node[below] {\color{blue} $g_{ \underline{i} (4)}^{-1}$} -- (6,2) -- (12,2) -- (12,0) node[below] {$g_{50}$}; \end{tikzpicture}$$ Observe that the indices $\underline{i} (1), \underline{i} (2), \underline{i} (6), \underline{i} (7)$ satisfy the inequalities: $$\underline{i} (\tau^{-1}(4)) < \underline{i} (\tau^{-1}(3)) - 3d, \ \underline{i} (\tau^{-1}(3)) < \underline{i} (\tau^{-1}(2))-3d, \ \underline{i} (\tau^{-1}(2)) < \underline{i} (\tau^{-1}(1))-3d.$$ We show explicitly some steps of the reduction $\rightsquigarrow$ with reasoning: $$\begin{aligned} (g_0,g_{75},g_{ \underline{i} (3)}^{-1},g_{ \underline{i} (4)}^{-1},g_{ \underline{i} (5)}^{-1},g_{25},g_{50},g_{ \underline{i} (8)}^{-1}) & \rightsquigarrow (g_0,g_{75},g_{25},g_{ \underline{i} (3)}^{-1},g_{ \underline{i} (4)+1}^{-1},g_{ \underline{i} (5)+1}^{-1},g_{50},g_{ \underline{i} (8)}^{-1}).\end{aligned}$$ Here we must have $\underline{i} (5) > 25$, $\underline{i} (4) >25$ and $\underline{i} (3) =25$ due to the equations [\[equation:spaced\]](#equation:spaced){reference-type="eqref" reference="equation:spaced"} and [\[equation:restriction\]](#equation:restriction){reference-type="eqref" reference="equation:restriction"}. Finally, we get that the normal form of $( \underline{i} ,\underline{ \varepsilon })$ is $( \underline{j} ,\underline{ \varepsilon }_0) = (g_{74},g_{49},g_{24},g_0,g_{ \underline{i} (8)}^{-1}, g_{ \underline{i} (3)-1}^{-1},g_{ \underline{i} (4)}^{-1},g_{ \underline{i} (5)+1}^{-1})$. 
Equating $\underline{j} (l)$ to $\underline{j} (d-l+1)$ for each $l \in [\frac{d}{2}]$ gives $$\begin{aligned} \underline{i} (8) = 0, & \ \underline{i} (3) = 24+1=25 \\ \underline{i} (4) =49, & \ \underline{i} (5) = 74-1=73.\end{aligned}$$ Our filled in word is visualized as: $$\begin{tikzpicture}[scale=0.5] \draw (0,0) node[below] {$g_{0}$} -- (0,4) -- (14,4) -- (14,0) node[below] {\color{blue} $g_{0}^{-1}$} ; \draw (2,0) node[below] {$g_{75}$} -- (2,3) -- (8,3) -- (8,0) node[below] {\color{blue} $g_{73}^{-1}$}; \draw (4,0) node[below] {\color{blue} $g_{25}^{-1}$} -- (4,1) -- (10,1) -- (10,0) node[below] {$g_{25}$} ; \draw (6,0) node[below] {\color{blue} $g_{49}^{-1}$} -- (6,2) -- (12,2) -- (12,0) node[below] {$g_{50}$}; \end{tikzpicture}$$ So we have $( \underline{i} ,\underline{ \varepsilon }) = (g_0, g_{75}, g_{25}^{-1}, g_{49}^{-1},g_{73}^{-1}, g_{25}, g_{50}, g_0^{-1})$ and $\tau( \underline{i} ,\underline{ \varepsilon }) = \tau$. # Counting the size of a bin {#section:count} In Section [3](#section:algorithm){reference-type="ref" reference="section:algorithm"}, we assigned to each word $( \underline{i} ,\underline{ \varepsilon }) \in \mathcal{W}_0(d)$ a unique permutation $\tau( \underline{i} ,\underline{ \varepsilon })$ that records the transformation of $( \underline{i} ,\underline{ \varepsilon })$ into its unique normal form $( \underline{j} ,\underline{ \varepsilon }_0)$ under the reduction $\rightarrow$ given by [\[equation:neutralnormalform\]](#equation:neutralnormalform){reference-type="eqref" reference="equation:neutralnormalform"}. Further, the permutation provides us with a "bin" -- that is, a pair partition $\pi( \underline{i} ,\underline{ \varepsilon }) \in {\mathcal P} _2(d)$ such that $\pi( \underline{i} ,\underline{ \varepsilon }) = \tau( \underline{i} ,\underline{ \varepsilon })^{-1} (\pi_{\text{rain}})$, where $\pi_{\text{rain}}$ is the rainbow pair partition on $d$ points. 
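As a sanity check (not part of the text's formalism), the reduction in Example 30 can be replayed mechanically. The helper `reduce_word` below is ad hoc: it applies the rewritings $\rightsquigarrow$ and $\rightarrowtail$ operationally on concrete indices, using the relations $g_a^{-1}g_bg_a = g_{b+1}$ for $a<b$ from the infinite presentation of $F$:

```python
def reduce_word(word):
    # word: list of (index, sign) pairs, sign = +1 for g_i, -1 for g_i^{-1}
    w = list(word)
    # first reduction: move positive letters left past inverse letters,
    # adjusting indices according to the three cases of the rewriting rules
    changed = True
    while changed:
        changed = False
        for p in range(len(w) - 1):
            (a, sa), (b, sb) = w[p], w[p + 1]
            if sa == -1 and sb == 1:
                if a < b:
                    w[p], w[p + 1] = (b + 1, 1), (a, -1)
                elif a > b:
                    w[p], w[p + 1] = (b, 1), (a + 1, -1)
                else:
                    w[p], w[p + 1] = (b, 1), (a, -1)
                changed = True
                break
    # second reduction: sort the positive letters (and the inverse letters)
    # into the normal-form shape j(l+1) <= j(l) + 1
    changed = True
    while changed:
        changed = False
        for p in range(len(w) - 1):
            (a, sa), (b, sb) = w[p], w[p + 1]
            if sa == sb == 1 and b > a + 1:
                w[p], w[p + 1] = (b - 1, 1), (a, 1)
                changed = True
            elif sa == sb == -1 and a > b + 1:
                w[p], w[p + 1] = (b, -1), (a - 1, -1)
                changed = True
    return w

# the completed word of Example 30
word = [(0, 1), (75, 1), (25, -1), (49, -1), (73, -1),
        (25, 1), (50, 1), (0, -1)]
nf = reduce_word(word)
```

The computed `nf` matches the normal form $(g_{74},g_{49},g_{24},g_0,g_0^{-1},g_{24}^{-1},g_{49}^{-1},g_{74}^{-1})$ obtained in Example 30 after substituting the filled-in values, and its positive indices mirror its inverse indices, confirming neutrality.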
We will now show that for every pair partition $\pi \in {\mathcal P} _2(d)$, the number of words $( \underline{i} ,\underline{ \varepsilon })$ in $\mathcal{W}_{0}(d,n)$ with $\pi( \underline{i} ,\underline{ \varepsilon })=\pi$ can be approximated in the limit as $n \to \infty$. We first count the number of permutations associated to each pair partition in this set-up. As before, if $\pi=\{\{k_1,l_1\}, \ldots,\{k_{\frac{d}{2}}, l_{\frac{d}{2}}\}\}$ and $\tau \in {\mathcal S} _d$, we will write $\tau(\pi)$ to mean the pair partition given by $$\{\{\tau(k_1), \tau(l_1)\}, \ldots,\{\tau(k_{\frac{d}{2}}), \tau(l_{\frac{d}{2}})\}\}.$$ **Lemma 31**. *[\[lemma:permnumber\]]{#lemma:permnumber label="lemma:permnumber"} Let $\pi \in {\mathcal P} _2(d)$ be any pair partition and $\pi_{\text{rain}}\in {\mathcal P} _2(d)$ be the rainbow pair partition. There exist $d!!$ permutations $\tau \in {\mathcal S} _d$ such that $\tau(\pi) = \pi_{\text{rain}}$.* *Proof.* Each pair in the pair partition $\pi$ must be sent to a pair in the rainbow partition by the permutation $\tau$. The images of the two elements of the first pair of $\pi$ can be chosen in $d$ ways ($\frac{d}{2}$ target pairs, each with two orderings), those of the next pair in $d-2$ ways, and so on. This gives a total of $d(d-2)\cdots 2 =d!!$ permutations $\tau$ such that $\tau(\pi)= \pi_{\text{rain}}$. ◻ Next, we will show that for each permutation $\tau \in {\mathcal S} _d$, the number of words $( \underline{i} ,\underline{ \varepsilon }) \in \mathcal{W}_{0}(d,n)$ such that $\tau( \underline{i} ,\underline{ \varepsilon }) = \tau$ (where $\tau( \underline{i} ,\underline{ \varepsilon })$ is the uniquely determined permutation that arises in Lemma [\[lemma:perm\]](#lemma:perm){reference-type="ref" reference="lemma:perm"}) is bounded by two polynomials in $n$ of degree $\frac{d}{2}$, each with leading coefficient $\frac{1}{(\frac{d}{2})!}$.
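The count in Lemma 31 can be confirmed by brute force for $d=4$, where $d!!=8$; the following is an ad hoc sanity check, not part of the proof:

```python
from itertools import permutations

def pairs_image(tau, pi):
    # image of the pair partition pi under the permutation tau,
    # where tau[i] is the image of the point i+1
    return frozenset(frozenset(tau[a - 1] for a in pair) for pair in pi)

d = 4
rainbow = frozenset([frozenset({1, 4}), frozenset({2, 3})])
all_pi = [frozenset([frozenset({1, 2}), frozenset({3, 4})]),
          frozenset([frozenset({1, 3}), frozenset({2, 4})]),
          rainbow]
# for each pair partition pi, count permutations tau with tau(pi) = rainbow
counts = [sum(1 for tau in permutations(range(1, d + 1))
              if pairs_image(tau, pi) == rainbow) for pi in all_pi]
```

Each count equals $8 = 4!!$, independently of the pair partition $\pi$, as the lemma asserts.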
For each $\tau \in {\mathcal S} _d$, the set whose cardinality we would like to find bounds on is: $$\mathcal{W}_0(d,n,\tau) : = \{( \underline{i} ,\underline{ \varepsilon }) \in \mathcal{W}_{0}(d,n)\mid \tau( \underline{i} ,\underline{ \varepsilon }) = \tau\}.$$ Let $$N(d,n,\tau) := |\mathcal{W}_0(d,n, \tau)|.$$ **Remark 32**. [\[remark:epsilon\]]{#remark:epsilon label="remark:epsilon"} It is a simple observation from the structure of $\underline{ \varepsilon }_0$ in the normal form $( \underline{j} ,\underline{ \varepsilon }_0)$ in [\[equation:neutralnormalform\]](#equation:neutralnormalform){reference-type="eqref" reference="equation:neutralnormalform"} that given a permutation $\tau \in {\mathcal S} _d$, any word $( \underline{i} ,\underline{ \varepsilon })$ in $\mathcal{W}_0(d,n,\tau)$ must satisfy the following rule for $\underline{ \varepsilon }$: $$l \in \{1,\ldots,\frac{d}{2}\} \implies \underline{ \varepsilon }(\tau^{-1}(l)) =+1; \quad l \in \{\frac{d}{2}+1,\ldots,d\} \implies \underline{ \varepsilon }(\tau^{-1}(l)) =-1.$$ **Proposition 33**. *[\[proposition:upperbound\]]{#proposition:upperbound label="proposition:upperbound"} Let $d$ be a fixed positive even integer and $n \in {\mathbb{N}}$. Let $\pi_{\text{rain}}\in {\mathcal P} _2(d)$ denote the rainbow pair-partition and $\pi \in {\mathcal P} _2(d)$ be any pair-partition. Then for each $\tau \in {\mathcal S} _d$ with $\tau(\pi) = \pi_{\text{rain}}$, the following inequality holds:* *$$\label{equation:upperbound} N(d,n,\tau) \leq {\binom{n+\frac{d^2}{2}-2}{\frac{d}{2}}}.$$* *Proof.* Define the set $${\mathcal U} (d,n) := \{ \underline{k} : [\frac{d}{2}] \to \{0, \ldots, n-1\} \mid \underline{k} (l+1) \leq \underline{k} (l) +d+1, l \in \{1, \ldots, \frac{d}{2}-1\} \}.$$ Note that the set ${\mathcal U} (d,n)$ is independent of the permutation $\tau$. We will provide an injection from the set $\mathcal{W}_0(d,n,\tau)$ into ${\mathcal U} (d,n)$. 
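The binomial coefficient in the bound above ultimately counts strictly decreasing tuples: the number of strictly decreasing $m$-tuples with entries drawn from $N$ given values is $\binom{N}{m}$. A quick numerical confirmation of this counting step (an ad hoc check; the helper `count_decreasing` is not from the text):

```python
from itertools import product
from math import comb

def count_decreasing(m, values):
    # count tuples (k(1), ..., k(m)) with entries in `values`
    # satisfying k(l+1) < k(l) for all l
    vals = list(values)
    return sum(1 for t in product(vals, repeat=m)
               if all(t[i + 1] < t[i] for i in range(m - 1)))

# strictly decreasing m-tuples from N values: exactly C(N, m) of them
checks = [(count_decreasing(m, range(N)), comb(N, m))
          for m, N in ((2, 5), (3, 7), (4, 9))]
```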
Let $( \underline{j} ,\underline{ \varepsilon }_0)$ be the unique normal form of a word $( \underline{i} ,\underline{ \varepsilon }) \in \mathcal{W}_0(d,n,\tau)$. Recall that [\[equation:ij-inequality\]](#equation:ij-inequality){reference-type="eqref" reference="equation:ij-inequality"} gives the inequality $$\underline{i} (\tau^{-1}(l))-\frac{d}{2} \leq \underline{j} (l) \leq \underline{i} (\tau^{-1}(l))+\frac{d}{2}, \ l \in \{1,\ldots, \frac{d}{2}\}.$$ Furthermore, by [\[equation:neutralnormalform\]](#equation:neutralnormalform){reference-type="eqref" reference="equation:neutralnormalform"}, $\underline{j}$ satisfies $$\underline{j} (l+1) \leq \underline{j} (l)+1, \ l \in \{1, \ldots, \frac{d}{2}-1\}.$$ Hence we get for $( \underline{i} ,\underline{ \varepsilon }) \in \mathcal{W}_0(d,n,\tau)$, $$\underline{i} (\tau^{-1}(l+1)) \leq \underline{i} (\tau^{-1}(l)) +d+1, \ l \in \{1, \ldots, \frac{d}{2}-1\}.$$ Let $$\iota(( \underline{i} ,\underline{ \varepsilon })) := \underline{k} ,$$ with $\underline{k} (l) = \underline{i} (\tau^{-1}(l))$ for $l\in \{1,\ldots, \frac{d}{2}\}$. Then $\iota$ defines a map from $\mathcal{W}_0(d,n,\tau)$ into ${\mathcal U} (d,n)$. Next we show that $\iota$ is injective. Suppose $( \underline{i} ,\underline{ \varepsilon }), (\tilde{ \underline{i} }, \tilde{\underline{ \varepsilon }}) \in \mathcal{W}_0(d,n,\tau)$ and $\underline{i} (\tau^{-1}(l)) = \tilde{ \underline{i} }(\tau^{-1}(l))$ for each $l \in \{1, \ldots, \frac{d}{2}\}$. By Remark [\[remark:epsilon\]](#remark:epsilon){reference-type="ref" reference="remark:epsilon"} it is clear that $\underline{ \varepsilon }= \tilde{\underline{ \varepsilon }}$. Hence it suffices to show that $$\underline{i} (\tau^{-1}(l))= \tilde{ \underline{i} }(\tau^{-1}(l)), \ l \in \{\frac{d}{2}+1, \ldots, d\}.$$ In other words, it suffices to show that $\underline{i} (l_-)=\tilde{ \underline{i} }(l_-)$ for every pair $\{l_+,l_-\} \in \pi$ given that $\underline{i} (l_+)=\tilde{ \underline{i} }(l_+)$.
But this is precisely the content of Proposition [\[proposition:determineneutral\]](#proposition:determineneutral){reference-type="ref" reference="proposition:determineneutral"}. Altogether this gives $( \underline{i} ,\underline{ \varepsilon }) = (\tilde{ \underline{i} },\tilde{\underline{ \varepsilon }})$ as desired. The inequalities in the definition of ${\mathcal U} (d,n)$ $$\underline{k} (l+1) \leq \underline{k} (l) +d+1, \ l \in [\frac{d}{2}-1]$$ give an injection from ${\mathcal U} (d,n)$ into the set $\{ \underline{k} :[\frac{d}{2}] \to \{\frac{-d^2}{2}+2,\ldots, n-1\} \mid \underline{k} (l+1) < \underline{k} (l), \ l \in [\frac{d}{2}-1]\}$, whose cardinality is given by $$\binom{n+\frac{d^2}{2}-2}{\frac{d}{2}}.$$ Hence $N(d,n,\tau) \leq \binom{n+\frac{d^2}{2}-2}{\frac{d}{2}}$, as required. ◻ **Proposition 34**. *[\[proposition:lowerbound\]]{#proposition:lowerbound label="proposition:lowerbound"} Let $d$ be a fixed positive even integer and $n \in {\mathbb{N}}$ be such that $n > \frac{3d}{2}(d-1)$. Let $\pi_{\text{rain}}\in {\mathcal P} _2(d)$ denote the rainbow pair-partition and $\pi \in {\mathcal P} _2(d)$ be any pair-partition. Then for each $\tau \in {\mathcal S} _d$ with $\tau(\pi) = \pi_{\text{rain}}$, the following inequality holds:* *$$\label{equation:lowerbound} {\binom{n+2d-\frac{3d^2}{2}}{\frac{d}{2}}}\leq N(d,n,\tau).$$* *Proof.* For $c$ a positive even integer and $r \in {\mathbb{N}} _0$, let $${\mathcal L} (c,r) := \{ \underline{k} : [\frac{c}{2}] \to \{0, \ldots, r-1\} \mid \underline{k} (m) < \underline{k} (m-1)-3c, m \in \{2, \ldots, \frac{c}{2}\}\}.$$ Given $\tau \in {\mathcal S} _d$ and $\underline{k} \in {\mathcal L} (d, n-d)$, define $$\underline{i} (\tau^{-1}(l)) := \underline{k} (l), \text{ if } l \in \{1, \ldots, \frac{d}{2}\}.$$ Define $\underline{ \varepsilon }$ by $$\underline{ \varepsilon }(\tau^{-1}(l)) = \begin{cases} 1 & l \in \{1,\ldots,\frac{d}{2}\} \\ -1 & l \in \{\frac{d}{2}+1,\ldots,d\}.
\end{cases}$$ As $$\underline{i} (\tau^{-1}(l)) < \underline{i} (\tau^{-1}(l-1)) -3d, \ l \in \{2,\ldots, \frac{d}{2}\},$$ by Proposition [\[proposition:spaceddetermineneutral\]](#proposition:spaceddetermineneutral){reference-type="ref" reference="proposition:spaceddetermineneutral"}, there exist unique values $\underline{i} (\tau^{-1}(l))$ for $l \in \{\frac{d}{2}+1,\ldots, d\}$ such that $( \underline{i} ,\underline{ \varepsilon }) \in \mathcal{W}_0(d)$ and $\tau( \underline{i} ,\underline{ \varepsilon })= \tau$. Further, by Corollary [\[corollary:ineq\]](#corollary:ineq){reference-type="ref" reference="corollary:ineq"}, as $\underline{i} (\tau^{-1}(l)) = \underline{k} (l)$ takes values in $\{0,\ldots, n-d-1\}$ for $l \in [\frac{d}{2}]$, we must have that $\underline{i} (\tau^{-1}(l))$ takes values in $\{0, \ldots, n-1\}$ for $l \in \{\frac{d}{2}+1,\ldots, d\}$, so that $( \underline{i} ,\underline{ \varepsilon }) \in \mathcal{W}_0(d,n,\tau)$. This allows us to define $\iota : { {\mathcal L} (d,n-d)} \to \mathcal{W}_0(d,n, \tau)$ by $\iota( \underline{k} ) = \underline{i}$, with $\underline{i}$ as described above. Indeed, $\iota$ is an injection from ${\mathcal L} (d,n-d)$ into the set $\mathcal{W}_0(d,n, \tau)$ because $\iota( \underline{k} _1) = \iota( \underline{k} _2)$ implies in particular that $\underline{k} _1(l) = \iota( \underline{k} _1)(\tau^{-1}(l)) = \iota( \underline{k} _2)(\tau^{-1}(l)) = \underline{k} _2(l)$ for all $l \in [\frac{d}{2}]$. The set ${\mathcal L} (d,n-d)$ is described by the inequalities $$\underline{k} (l+1) < \underline{k} (l) - 3d, \ l\in [\frac{d}{2}-1],$$ for tuples $\underline{k}$ taking values in $\{0, \ldots, n-d-1\}$. These inequalities give an injection from the set $\{ \underline{k} : [\frac{d}{2}] \to \{\frac{3d^2}{2}-3d,\ldots, n-d-1\} \mid \underline{k} (l+1) < \underline{k} (l), \ l \in [\frac{d}{2}-1]\}$ into ${\mathcal L} (d,n-d)$.
Hence $| {\mathcal L} (d,n-d)|$, and therefore $N(d,n,\tau)$, is bounded below by $\binom{n+2d-\frac{3d^2}{2}}{\frac{d}{2}}$. ◻ # A Central Limit Theorem for ${\mathbb{C}} (F)$ {#section:main} ## Main Result We are now ready to prove our main result, Theorem [Theorem 3](#theorem:main){reference-type="ref" reference="theorem:main"}, which we restate here for convenience: **Theorem 1** (CLT for the sequence $a_n$). *Let $(a_n)$ be the sequence of self-adjoint random variables in $( {\mathbb{C}} (F), \varphi)$ given by $$a_n = \frac{g_n+g_n^*}{\sqrt{2}}, \ n \in {\mathbb{N}} _0$$ and $$s_n := \frac{1}{\sqrt{n}} (a_0+ \cdots +a_{n-1}), \ n \in {\mathbb{N}} .$$* *Then we have $$\lim_{n \to \infty} \varphi (s_n^d) = \begin{cases} (d-1)!! & \text{for } d \text{ even,} \\ 0 & \text{for } d \text{ odd.} \end{cases}$$* *That is, $$s_n \stackrel{\text{distr}}{\longrightarrow} x,$$ where $x$ is a centered normally distributed random variable of variance $1$.* *Proof.* Recall the sets $\mathcal{W}_0(d,n)$ and $\mathcal{W}_0(d,n,\tau)$ defined as before for $d, n \in {\mathbb{N}} :$ $$\mathcal{W}_{0}(d,n):= \Bigl\{ ( \underline{i} , \underline{ \varepsilon }) \begin{array}{ll} \vline & \underline{i} : [d] \to \{ 0,1, \ldots , n-1 \}, \ \underline{ \varepsilon }: [d] \to \{ -1,1 \} \\ \vline & \mbox{such that} \ \mathrm{eval} _F( \underline{i} ,\underline{ \varepsilon }) = g_{ \underline{i} (1)}^{\underline{ \varepsilon }(1)} \cdots g_{ \underline{i} (d)}^{\underline{ \varepsilon }(d)} = e \end{array} \Bigr\}$$ and for each $\tau \in {\mathcal S} _d$, $$\mathcal{W}_0(d,n,\tau) = \{( \underline{i} ,\underline{ \varepsilon }) \in \mathcal{W}_{0}(d,n)\mid \tau( \underline{i} ,\underline{ \varepsilon })=\tau\}.$$ The moment of order $d$ of $s_n$ is given by $$\begin{aligned} \varphi (s_n^d) &= \frac{1}{(2n)^{d/2}} \sum_{ \substack{ \underline{i} : [d] \to \{ 0, \ldots , n-1 \}, \\ \underline{ \varepsilon }: [d] \to \{ -1,1 \} } } \ \varphi \Bigl( g_{ \underline{i} (1)}^{\underline{ \varepsilon }(1)} \cdots g_{
\underline{i} (d)}^{\underline{ \varepsilon }(d)} \Bigr) \\ &= \frac{1}{(2n)^{d/2}} \sum_{ \substack{ \underline{i} : [d] \to \{ 0, \ldots , n-1 \}, \\ \underline{ \varepsilon }: [d] \to \{ -1,1 \} \\ \mathrm{eval} _F( \underline{i} ,\underline{ \varepsilon }) =e } } \ \varphi \Bigl( g_{ \underline{i} (1)}^{\underline{ \varepsilon }(1)} \cdots g_{ \underline{i} (d)}^{\underline{ \varepsilon }(d)} \Bigr) \\ &= \frac{1}{(2n)^{d/2}} |\mathcal{W}_0(d,n)|. \end{aligned}$$ For odd $d \in {\mathbb{N}}$, $\mathcal{W}_0(d,n) = \emptyset$, so $\varphi(s_n^d)=0$ for every $n \in {\mathbb{N}}$, and thus $\lim_{n \to \infty} \varphi(s_n^d) =0$. For even $d \in {\mathbb{N}}$, by Propositions [\[proposition:upperbound\]](#proposition:upperbound){reference-type="ref" reference="proposition:upperbound"} and [\[proposition:lowerbound\]](#proposition:lowerbound){reference-type="ref" reference="proposition:lowerbound"}, we have for each $\tau \in {\mathcal S} _d$ and for $n > \frac{3d}{2}(d-1)$, $$\label{equation:lowerupper} {\binom{n+2d-\frac{3d^2}{2}}{\frac{d}{2}}} \leq N(d,n,\tau) \leq {\binom{n+\frac{d^2}{2}-2}{\frac{d}{2}}}.$$ Dividing each term in the inequalities [\[equation:lowerupper\]](#equation:lowerupper){reference-type="eqref" reference="equation:lowerupper"} by $n^{\frac{d}{2}}$ and taking limits as $n \to \infty$, we get for every $\tau \in {\mathcal S} _d$, $$\label{equation:ineq} \frac{1}{{(\frac{d}{2})!}} \leq \lim_{n\to \infty} \frac{1}{n^{\frac{d}{2}}} N(d,n,\tau) \leq \frac{1}{{(\frac{d}{2})!}}.$$ Now $$\begin{aligned} \lim_{n\to \infty} \varphi (s_n^d) &= \lim_{n\to \infty} \frac{1}{(2n)^{d/2}} |\mathcal{W}_0(d,n)| \\ &= \lim_{n\to \infty} \frac{1}{(2n)^{d/2}} \sum_{ \pi \in {\mathcal P} _2(d)} \sum_{\substack{\tau \in {\mathcal S} _d, \\ \tau(\pi) =\pi_{\text{rain}}}} |\mathcal{W}_0(d,n,\tau)| \\ &= \lim_{n\to \infty} \frac{1}{(2n)^{d/2}} \sum_{ \pi \in {\mathcal P} _2(d)} \sum_{\substack{\tau \in {\mathcal S} _d,\\ \tau(\pi) =\pi_{\text{rain}}}} N(d,n,\tau)
\\ &= \sum_{\pi \in {\mathcal P} _2(d)} \frac{1}{2^{\frac{d}{2}}}\sum_{\substack{\tau \in {\mathcal S} _d, \\ \tau(\pi)=\pi_{\text{rain}}} }\lim_{n \to \infty} \frac{1}{n^{\frac{d}{2}}} N(d,n, \tau).\end{aligned}$$ We get the following inequalities from [\[equation:ineq\]](#equation:ineq){reference-type="eqref" reference="equation:ineq"}: $$\begin{aligned} \sum_{\pi \in {\mathcal P} _2(d)} \frac{1}{2^{\frac{d}{2}}} \sum_{\substack{\tau \in {\mathcal S} _d, \\ \tau(\pi) =\pi_{\text{rain}}}} \frac{1}{(\frac{d}{2})!} & \leq \lim_{n \to \infty} \varphi(s_n^d) \\ &=\sum_{\pi \in {\mathcal P} _2(d)} \frac{1}{2^{\frac{d}{2}}} \sum_{\substack{\tau\in {\mathcal S} _d, \\ \tau(\pi)=\pi_{\text{rain}}}}\lim_{n \to \infty} \frac{1}{n^{\frac{d}{2}}} N(d,n, \tau) \\ & \leq \sum_{\pi \in {\mathcal P} _2(d)} \frac{1}{2^{\frac{d}{2}}} \sum_{\substack{\tau \in {\mathcal S} _d, \\ \tau(\pi)=\pi_{\text{rain}}} }\frac{1}{(\frac{d}{2})!}.\end{aligned}$$ By Lemma [\[lemma:permnumber\]](#lemma:permnumber){reference-type="ref" reference="lemma:permnumber"}, given the rainbow pair partition $\pi_{\text{rain}}$ and any pair partition $\pi \in {\mathcal P} _2(d)$, the number of permutations $\tau \in {\mathcal S} _d$ with $\tau(\pi) = \pi_{\text{rain}}$ is $d!!=2^{\frac{d}{2}}(\frac{d}{2})!$. 
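Two counting facts used at this stage, $d!!=2^{\frac{d}{2}}(\frac{d}{2})!$ and $| {\mathcal P} _2(d)|=(d-1)!!$, together with the growth rate $\binom{n+c}{d/2}/n^{d/2}\to 1/(\frac{d}{2})!$ underlying the limit computation, can be verified for small $d$; an ad hoc check, not part of the proof:

```python
from math import comb, factorial

def double_fact(n):
    # n!! = n(n-2)(n-4)...; empty product (= 1) once n <= 0
    r = 1
    while n > 0:
        r, n = r * n, n - 2
    return r

def pair_partitions(points):
    # enumerate all pair partitions of an even-sized list of points
    if not points:
        return [[]]
    first, rest = points[0], points[1:]
    return [[(first, p)] + tail
            for i, p in enumerate(rest)
            for tail in pair_partitions(rest[:i] + rest[i + 1:])]

for d in (2, 4, 6, 8):
    assert double_fact(d) == 2 ** (d // 2) * factorial(d // 2)
    assert len(pair_partitions(list(range(d)))) == double_fact(d - 1)

# both binomial bounds share the leading behavior n^(d/2) / (d/2)!
d, n = 6, 10 ** 7
for c in (2 * d - 3 * d * d // 2, d * d // 2 - 2):
    assert abs(comb(n + c, d // 2) / n ** (d // 2) - 1 / factorial(d // 2)) < 1e-5
```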
Hence $$\sum_{\pi \in {\mathcal P} _2(d)} 1 = \sum_{\pi \in {\mathcal P} _2(d)} \sum_{\substack{\tau\in {\mathcal S} _d, \\ \tau(\pi)=\pi_{\text{rain}}}} \frac{1}{2^{\frac{d}{2}}(\frac{d}{2})!} \leq \lim_{n \to \infty} \varphi(s_n^d) \leq \sum_{\pi \in {\mathcal P} _2(d)} \sum_{\substack{\tau\in {\mathcal S} _d, \\ \tau(\pi)=\pi_{\text{rain}}}} \frac{1}{2^{\frac{d}{2}}(\frac{d}{2})!}=\sum_{\pi \in {\mathcal P} _2(d)} 1.$$ The number of pair partitions $\pi$ in ${\mathcal P} _2(d)$ is $(d-1)!!$, so we arrive at $$\lim_{n \to \infty} \varphi(s_n^d) = | {\mathcal P} _2(d)| =(d-1)!!.$$ ◻ The following corollary is a consequence of Theorem [Theorem 3](#theorem:main){reference-type="ref" reference="theorem:main"} and the fact that the normal distribution is determined by its moments. See for instance Example 30.1 and Theorem 30.2 in [@Bil95]. **Corollary 35**. *Let $(a_n)$ and $(s_n)$ be the sequences described in Theorem [Theorem 3](#theorem:main){reference-type="ref" reference="theorem:main"}. For every $n \in {\mathbb{N}}$, let $\mu_n$ denote the law of $s_n$. As $n \to \infty$, the probability measures $\mu_n$ have a $w^*$-limit $\mu$, where $\mu= N(0,1)$ is the law of the centered normal distribution of variance $1$.* ## Further questions It would be natural to study a multi-dimensional version of the central limit theorem studied here and the combinatorics in that case; however, this question is beyond the scope of the current paper. Also of interest is the question of which groups given by infinite presentations lend themselves to a central limit theorem of the type described here. ## Acknowledgements {#acknowledgements .unnumbered} The author would like to thank Alexandru Nica for proposing the question of finding a central limit theorem for the Thompson group $F$, and for several fruitful discussions. The author also thanks Claus Köstler for the introduction to $F$, and for many helpful discussions.
{ "id": "2309.05626", "title": "A central limit theorem in the framework of the Thompson group $F$", "authors": "Arundhathi Krishnan", "categories": "math.OA math.CO math.GR math.PR", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | The special shadow-complexity is an invariant of closed $4$-manifolds defined by Costantino using Turaev's shadows. We show that for any positive integer $k$, the special shadow-complexity of the connected sum of $k$ copies of $S^1\times S^3$ is exactly $k+1$. address: Department of Mathematics, Chuo University, 1-13-27 Kasuga Bunkyo-ku, Tokyo, 112-8551, Japan author: - Hironobu Naoe title: "The special shadow-complexity of $\\#_k(S^1\\times S^3)$" --- # Introduction A *shadow* was introduced by Turaev in the 1990s for the study of quantum invariants of $3$-manifolds and links [@Tur94]. It is defined as a certain $2$-dimensional polyhedron embedded in a smooth $4$-manifold. Turaev showed that any (closed) smooth $4$-manifold can be described by a shadow together with a coloring on regions by half-integers, which is called the *gleam*. There are various studies concerning shadows, see [@CM17; @Cos06; @Cos06b; @Cos08; @CT08; @IK14; @KMN18; @KN20; @Mar11] for instance. A combinatorial description such as a shadow provides a framework for studying manifolds according to the "complexity" of the description. Costantino applied shadows to define invariants of $4$-manifolds, called *the shadow-complexity* and *the special shadow-complexity*, inspired by the Matveev complexity of $3$-manifolds. The (special) shadow-complexity of a (closed) $4$-manifold $W$ is defined as the minimum number of *true vertices* over all (special) shadows of $W$. Roughly speaking, it measures how complicated shadows of a given $4$-manifold need to be. In general, it is difficult to determine the value of an invariant defined as the minimum of something, especially to give a lower bound for it, and the (special) shadow-complexity is no exception. We refer the reader to [@CT08] for the only known result giving a lower bound for the shadow-complexity of $4$-manifolds with non-empty boundary.
In that paper, Costantino and Thurston established a relationship between shadows and the geometric structures of $3$-manifolds. On the other hand, the shadow-complexity of closed $4$-manifolds has not previously been bounded from below. Note that the unboundedness of the shadow-complexity of closed $4$-manifolds was shown in [@KMN18], but their proof does not provide a concrete lower bound. The following is our main theorem, which determines the explicit values of the special shadow-complexities for infinitely many closed $4$-manifolds. **Theorem 1**. *For any positive integer $k$, $\mathrm{sc}^{\mathrm{sp}}(\#_k(S^1\times S^3))=k+1$.* An upper bound of $\mathrm{sc}^{\mathrm{sp}}(\#_k(S^1\times S^3))$ can be obtained by an easy observation. Actually, $\mathrm{sc}^{\mathrm{sp}}(S^1\times S^3)=2$ is easily checked by constructing a concrete shadow and using the classification of the closed $4$-manifolds with special shadow-complexity at most $1$ in [@Cos06b]. As Costantino mentioned in [@Cos06b], the inequality $\mathrm{sc}^{\mathrm{sp}}(W\# W')\leq\mathrm{sc}^{\mathrm{sp}}(W)+\mathrm{sc}^{\mathrm{sp}}(W')+4$ holds for any closed $4$-manifolds $W$ and $W'$. Therefore, we obtain $\mathrm{sc}^{\mathrm{sp}}(\#_k(S^1\times S^3))\leq 6k-4$, but this bound is too rough. In order to give the upper bound $\mathrm{sc}^{\mathrm{sp}}(\#_k(S^1\times S^3))\leq k+1$, we construct a shadow of $\#_k(S^1\times S^3)$ concretely. For this construction, we will introduce two techniques, boundary-disposal and vertex-creation. On the other hand, in order to give the lower bound $\mathrm{sc}^{\mathrm{sp}}(\#_k(S^1\times S^3))\geq k+1$, we investigate presentations of the fundamental group arising from shadows of $\#_k(S^1\times S^3)$. As immediate consequences of the proof of Theorem [Theorem 1](#thm:mainthm){reference-type="ref" reference="thm:mainthm"}, we have the following. **Corollary 2** (Corollary [Corollary 9](#cor:surj){reference-type="ref" reference="cor:surj"}).
*The special shadow-complexity of closed $4$-manifolds is a surjection to $\mathbb{Z}_{\geq0}\setminus\{1\}$.* **Corollary 3** (Corollary [Corollary 10](#cor:lower_bound){reference-type="ref" reference="cor:lower_bound"}). *For any closed $4$-manifold $W$, $\mathrm{rank}(\pi_1(W))\leq\mathrm{sc}^{\mathrm{sp}}(W)$. Moreover, if $\pi_1(W)$ is a free group, then $\mathrm{rank}(\pi_1(W))+1\leq\mathrm{sc}^{\mathrm{sp}}(W)$.* In Section [2](#sec:Shadows){reference-type="ref" reference="sec:Shadows"}, we review the notion of shadows of smooth $4$-manifolds and the (special) shadow-complexity of $4$-manifolds. In Section [3](#sec:Results){reference-type="ref" reference="sec:Results"}, we first introduce two modifications of shadowed polyhedra, boundary-disposal and vertex-creation, to construct a special shadow $Z_k$ of $\#_k(S^1\times S^3)$, and then we show that $Z_k$ attains the special shadow-complexity of $\#_k(S^1\times S^3)$. Throughout this paper, any manifold is supposed to be compact, connected, oriented and smooth unless otherwise mentioned. # Shadows and complexity of $4$-manifolds {#sec:Shadows} ## Simple polyhedra and shadowed polyhedra We first introduce some terminology. A *simple polyhedron* is a connected compact space $X$ such that a regular neighborhood $\mathrm{Nbd}(x;X)$ of each point $x\in X$ is homeomorphic to one of (i)-(iv) shown in Figure [1](#fig:local_model){reference-type="ref" reference="fig:local_model"}. ![Local models of simple polyhedra.](local_model "fig:"){#fig:local_model width=".65\\hsize"} A point of type (iii) is called a *true vertex*. The set of all points of types (ii) and (iii) is called the *singular set* of $X$ and is denoted by $S(X)$. Note that each connected component of $S(X)$ is a circle or a quartic graph. A connected component of the complement of the true vertices in $S(X)$ is called a *triple line*.
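Since every true vertex of a quartic graph is $4$-valent, a quartic singular set with $n$ true vertices contains exactly $2n$ triple lines, a count that the proof of Theorem 1 relies on. A minimal Python sanity check of this handshake count, using the complete graph $K_5$ as a stand-in quartic graph (our illustration, not from the paper):

```python
from itertools import combinations

# K5 is quartic: every vertex has valence 4.
vertices = list(range(5))
edges = list(combinations(vertices, 2))

degree = {v: sum(v in e for e in edges) for v in vertices}
assert all(d == 4 for d in degree.values())  # quartic

# Handshake lemma: 4-valence on n vertices gives 4n/2 = 2n edges,
# i.e. n true vertices yield 2n triple lines.
n = len(vertices)
assert len(edges) == 2 * n

# Consequently, removing a spanning tree (n - 1 edges) and one further
# edge leaves 2n - (n - 1) - 1 = n edges, matching the later count of
# the triple lines a_2, ..., a_{n+1}.
assert 2 * n - (n - 1) - 1 == n
```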
Each connected component of $X\setminus S(X)$ is called a *region*. If every region of $X$ is an open disk, then $X$ is said to be *special* and is called a *special polyhedron*. The set of points of type (iv) is the *boundary* of $X$, which is denoted by $\partial X$. If $\partial X$ is empty, the simple polyhedron $X$ is said to be *closed*. Thus, a special polyhedron is closed. A region not intersecting $\partial X$ is called an *internal region*. A *boundary region* is a region that is not an internal region. We then define the $\mathbb{Z}_2$-*gleam* of a simple polyhedron $X$. Let $R$ be an internal region of $X$. Then $R$ is homeomorphic to the interior of some compact surface $F$, and the homeomorphism $\mathrm{Int}F\to R$ will be denoted by $f$. This $f$ extends to a local homeomorphism $\overline{f}:F\to X$. Moreover, there exists a simple polyhedron $\widetilde{F}$ obtained from $F$ by attaching an annulus or a Möbius band to each boundary component of $F$ along the core circle such that $\overline{f}$ extends to a local homeomorphism $\widetilde{f}:\widetilde{F}\to X$. Then the number of Möbius bands attached to $F$, taken modulo $2$, is called the $\mathbb{Z}_2$-*gleam* of $R$ and is denoted by $\mathfrak{gl}_2(R)\in\{0,1\}$. Note that for each region, its $\mathbb{Z}_2$-gleam is determined only by the combinatorial structure of $X$. A function mapping each internal region $R$ of $X$ to a half-integer $\mathfrak{gl}(R)$ satisfying $\mathfrak{gl}(R)+\frac12\mathfrak{gl}_2(R)\in\mathbb{Z}$ is called a *gleam function*, or simply a *gleam*, of $X$. We also call the value $\mathfrak{gl}(R)$ the *gleam* of $R$. A simple polyhedron equipped with a gleam is called a *shadowed polyhedron*. ## Shadows of $4$-manifolds Let $X$ be a simple polyhedron.
A $4$-manifold $M$ with boundary is called a *$4$-dimensional thickening* of $X$ if the following hold: - $X$ is embedded in $M$ locally flatly, that is, a regular neighborhood $\mathrm{Nbd}(x;X)$ of each point $x\in X$ is contained in a smooth $3$-ball in $M$, - $M$ collapses onto $X$ under some triangulations agreeing with the smooth structure of $M$, and - $X\cap\partial M=\partial X$. We call $X$ a *shadow* of $M$; this is the definition of shadows of $4$-manifolds with boundary. Shadows of closed $4$-manifolds will be defined later. Any simple polyhedron has a $4$-dimensional thickening. Although there might be non-diffeomorphic $4$-dimensional thickenings for a single simple polyhedron, Turaev showed that there exists a canonical way to associate to each shadowed polyhedron $X$ a unique $4$-dimensional thickening $M_X$ up to diffeomorphism. This correspondence is called *Turaev's reconstruction*. We give a brief review of Turaev's reconstruction. Let $X$ be a shadowed polyhedron equipped with a gleam $\mathfrak{gl}$. Let $R_1,\ldots,R_m$ denote the internal regions of $X$. Set $\bar R_i=R_i\setminus \mathrm{Int}\mathrm{Nbd}(S(X);X)$ for $i\in\{1,\ldots,m\}$ and $X_S=X\setminus \mathrm{Int}(\bar R_1\sqcup\cdots\sqcup \bar R_m)$. Let $\varphi:\partial \bar R_1\sqcup\cdots\sqcup\partial \bar R_m\to \partial X_S\setminus\partial X$ denote a homeomorphism reconstructing $X$ as $(\bar R_1\sqcup\cdots\sqcup \bar R_m\sqcup X_S)/\varphi$. It is easy to see that there exists a $3$-dimensional (possibly non-orientable) handlebody $N_S$ in which $X_S$ is properly embedded such that $N_S$ collapses onto $X_S$. Set $B_S=\mathrm{Nbd}(\partial X_S;\partial N_S)$, which consists of disjoint annuli or Möbius bands. Let $M_S$ be the subbundle of the determinant line bundle over $N_S$ with fibers of length $\leq1$ with respect to an auxiliary Riemannian metric. Note that $M_S$ is a $4$-dimensional thickening of $X_S$ with $B_S$ embedded in the boundary $\partial M_S$.
Set $N_i=\bar R_i\times [-1,1]$, and suppose $\bar R_i$ is embedded in $N_i$ by identifying $\bar R_i$ with $\bar R_i\times \{0\}$. Note that $N_i$ is a non-orientable $3$-manifold if $\bar R_i$ is non-orientable. Set $A_i=\partial \bar R_i\times [-1,1]\subset N_i$. Let $M_i$ be the subbundle of the determinant line bundle over $N_i$ with fibers of length $\leq1$ with respect to an auxiliary Riemannian metric, so that $M_i$ contains $\bar R_i$ properly and contains $A_i$ in the boundary $\partial M_i$. For each $i\in\{1,\ldots,m\}$, we glue $M_i$ to $M_S$ by an embedding $\Phi_i:\mathrm{Nbd}(\partial \bar R_i;\partial M_i)\to \partial M_S$ such that $\Phi_i|_{\partial \bar R_i}=\varphi|_{\partial \bar R_i}$ and $\Phi_i(A_i)$ is rotated $\mathfrak{gl}(R_i)$ times with respect to the corresponding components of $B_S$. The resulting $4$-manifold is the desired $4$-dimensional thickening $M_X$. We next define shadows of closed $4$-manifolds. **Definition 4**. Let $W$ be a closed $4$-manifold and $X$ a simple polyhedron embedded in $W$. We call $X$ a *shadow* of $W$ if the following hold: - $X$ is locally flat in $W$, and - $W\setminus\mathrm{Int}\mathrm{Nbd}(X;W)$ is diffeomorphic to $\natural_k(S^1\times B^3)$ for some $k\in\mathbb{Z}_{\geq0}$. By the second condition in the definition, $X$ can be regarded as a $2$-skeleton of $W$, since $W$ is obtained from $\mathrm{Nbd}(X;W)$ by attaching some $3$-handles and $4$-handles. Therefore, we have $H_1(X)\cong H_1(W)$ and $\pi_1(X)\cong \pi_1(W)$. The notion of shadows was introduced by Turaev in order to study quantum invariants of $3$-manifolds and links, and he showed the following. **Theorem 5** (Turaev [@Tur94]). *Any closed $4$-manifold admits a shadow.* Suppose a shadow $X$ of a closed $4$-manifold $W$ is given. Turaev showed that there exists a gleam of $X$ such that the $4$-dimensional thickening $M_X$ of the shadowed polyhedron $X$ is diffeomorphic to $\mathrm{Nbd}(X;W)$, see [@Cos05; @Tur94].
By the definition of shadows, $\partial \mathrm{Nbd}(X;W)$ is diffeomorphic to $\#_k(S^1\times S^2)$ for some $k\in\mathbb{Z}_{\geq0}$. By Laudenbach and Poénaru [@LP72], a closed $4$-manifold obtained from $\mathrm{Nbd}(X;W)$ by attaching $3$- and $4$-handles is unique up to diffeomorphism, which implies that any closed $4$-manifold can be described by a shadowed polyhedron. ## Complexity of $4$-manifolds The *complexity* of a simple polyhedron $X$ is defined as the number $c(X)$ of true vertices of $X$. Costantino defined two kinds of invariants of $4$-manifolds using shadows in [@Cos06b]. **Definition 6**. Let $W$ be a $4$-manifold with possibly non-empty boundary. 1. The *shadow-complexity*, denoted by $\mathrm{sc}(W)$, of $W$ is the minimum of $c(X)$ over all shadows $X$ of $W$. 2. The *special shadow-complexity*, denoted by $\mathrm{sc}^{\mathrm{sp}}(W)$, of $W$ is the minimum of $c(X)$ over all special shadows $X$ of $W$. There exist exotic (i.e. homeomorphic but not diffeomorphic) smooth structures in dimension $4$, and it is known that the shadow-complexity and the special shadow-complexity are essentially invariants of the smooth structure of a $4$-manifold [@Mar11]. Our main object is the special shadow-complexity, which has a remarkable property: **Theorem 7** ([@Cos06b], see also [@Mar05]). *The special shadow-complexity of closed $4$-manifolds is a finite-to-one invariant.* Costantino classified all the closed $4$-manifolds with special shadow-complexity at most $1$. **Theorem 8** ([@Cos06b Theorem 1.1]). *Let $W$ be a closed $4$-manifold.
The following are equivalent:* - *$\mathrm{sc}^{\mathrm{sp}}(W)=0$,* - *$\mathrm{sc}^{\mathrm{sp}}(W)\leq 1$, and* - *$W$ is diffeomorphic to either $S^4$, $\mathbb{CP}^2$, $\overline{\mathbb{CP}}^2$, $\mathbb{CP}^2\#\mathbb{CP}^2$, $\overline{\mathbb{CP}}^2\#\overline{\mathbb{CP}}^2$, $\mathbb{CP}^2\#\overline{\mathbb{CP}}^2$ or $S^2\times S^2$.* On the other hand, while the shadow-complexity is not finite-to-one, Martelli characterized all the closed $4$-manifolds with shadow-complexity $0$ in [@Mar11]. We also refer the reader to [@KMN18] for the characterization of closed $4$-manifolds with *connected shadow-complexity* at most $1$, where the connected shadow-complexity is another kind of invariant defined by using shadows. See [@KMN18] for more details. However, no results have answered the following question: for an arbitrarily given $n\in\mathbb{Z}_{\geq0}$, does there exist a closed $4$-manifold whose (ordinary, special or connected) shadow-complexity is exactly $n$? Note that we can easily give an example of a $4$-manifold with boundary whose (special) shadow-complexity is exactly $n$, see [@IK14]. # Results {#sec:Results} For a positive integer $k$, let $Z_k$ be the special polyhedron described in the lowermost part of Figure [5](#fig:Xk){reference-type="ref" reference="fig:Xk"}, which will be explained in more detail in Subsection [3.2](#subsec:The_special_polyhedra_$X_k$_and_$Z_k$){reference-type="ref" reference="subsec:The_special_polyhedra_$X_k$_and_$Z_k$"}. Our theorem determines the special shadow-complexity of $\#_k(S^1\times S^3)$, and in fact $Z_k$ is a shadow of $\#_k(S^1\times S^3)$ attaining the special shadow-complexity. We start this section by explaining how we find $Z_k$. ## Boundary-disposal and vertex-creation Here we introduce two kinds of modifications used to construct $Z_k$. We first explain a modification called a *boundary-disposal*.
Let $X$ be a shadowed polyhedron with $S(X)\ne\emptyset$ and with $\partial X$ consisting of one circle $C_0$, so that $X$ has a single boundary region, say $R_0$. Let us consider an arc $\gamma$ contained in $R_0$ that connects a point of $S(X)$ and a point of $C_0$. See the upper left part of Figure [2](#fig:specialization){reference-type="ref" reference="fig:specialization"}, which depicts $\mathrm{Nbd}(\gamma\cup C_0;X)$. ![Boundary-disposal.](specialization "fig:"){#fig:specialization width="1\\hsize"} ![Vertex-creation.](creation "fig:"){#fig:creation width=".8\\hsize"} The moves shown in Figures [2](#fig:specialization){reference-type="ref" reference="fig:specialization"}-(i) and -(ii) modify $X$ into another shadowed polyhedron without changing the corresponding $4$-manifold [@Tur94]. By the moves (i) and (ii), exactly $3$ true vertices are created in total. The move shown in Figure [2](#fig:specialization){reference-type="ref" reference="fig:specialization"}-(iii) is a collapsing that removes the annular boundary region, and it also removes one true vertex.
As a result, the shadowed polyhedron $X$ is modified into another one $X'$ such that - the corresponding $4$-manifold of $X'$ is the same as that of $X$, - exactly one disk region is newly created, - the homeomorphism types of the original regions are not changed except for $R_0$, which is changed into a surface homeomorphic to $R_0\cup_{C_0} D^2$; in particular, $\partial X'=\emptyset$, and - $c(X')=c(X)+2$. The composition of the moves (i), (ii) and (iii) is called a *boundary-disposal*. We stress that the resulting polyhedron is special if $X\cup_{\partial X}D^2$ is special. By this modification, the singular set is changed as shown in the upper part of Figure [4](#fig:moves){reference-type="ref" reference="fig:moves"}. ![Changes on the singular sets by the modifications boundary-disposal and vertex-creation, where we only illustrate how the regions are attached along the singular sets. The singular sets themselves are not pictured in the figure. ](moves "fig:"){#fig:moves width=".6\\hsize"} We next introduce another modification called a *vertex-creation*. Let $X$ be a special polyhedron. Then the homeomorphism type of $X$ is determined only by a regular neighborhood $\mathrm{Nbd}(S(X);X)$ of its singular set. Let $\delta$ be a short arc contained in a triple line of $X$, see the left part of Figure [3](#fig:creation){reference-type="ref" reference="fig:creation"}. Then we consider another special polyhedron $X'$ such that $\mathrm{Nbd}(S(X');X')$ is obtained from $\mathrm{Nbd}(S(X);X)$ by a modification near $\delta$ as shown in the figure. Although the simple polyhedron $X$ and the resulting one $X'$ do not necessarily share the same $4$-dimensional thickening, we emphasize the following properties: - the numbers of regions of $X$ and $X'$ are equal, and - $c(X')=c(X)+1$; in particular, $S(X')$ is homotopy equivalent to the wedge sum of $S(X)$ and a circle.
The modification to get $X'$ from $X$ is called a *vertex-creation*, by which the singular set is changed as shown in the lower part of Figure [4](#fig:moves){reference-type="ref" reference="fig:moves"}. ## The special polyhedra $X_k$ and $Z_k$ {#subsec:The_special_polyhedra_$X_k$_and_$Z_k$} First, let $X_1$ be a special polyhedron whose singular set is indicated in the uppermost part of Figure [5](#fig:Xk){reference-type="ref" reference="fig:Xk"}. It has a single disk region, and $S(X_1)$ is homeomorphic to a circle. We next define $X_2$ as the special polyhedron obtained from $X_1$ by a vertex-creation, see the second from the top in Figure [5](#fig:Xk){reference-type="ref" reference="fig:Xk"}. Then $X_2$ also has a single disk region, and $S(X_2)$ is homeomorphic to the bouquet of two circles. For $k\geq3$, let $X_k$ be the special polyhedron obtained from $X_{k-1}$ by a vertex-creation, see Figure [5](#fig:Xk){reference-type="ref" reference="fig:Xk"}. The special polyhedron $X_k$ has a single disk region, and $S(X_k)$ is homotopy equivalent to the bouquet of $k$ circles. Note that $c(X_k)=k-1$. ![The special polyhedra $X_1,X_2,X_3,X_k$ and $Z_k$, where we only illustrate how the regions are attached along the singular sets. The singular sets themselves are not pictured in the figure. ](Xk "fig:"){#fig:Xk width=".8\\hsize"} For each $k\in\mathbb{Z}_{\geq1}$, let $X_k^\circ$ be the simple polyhedron obtained from $X_k$ by removing an open disk from the single region of $X_k$. It is easy to see that $X_k^\circ$ has a single annular boundary region. Then $X_k^\circ$ collapses onto $S(X_k^\circ)$, which is homotopy equivalent to the bouquet of $k$ circles. Hence $X_k^\circ$ has a unique $4$-dimensional thickening $\natural_k(S^1\times B^3)$, and it can be regarded as a shadow of $\#_k(S^1\times S^3)$.
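The true-vertex counts follow mechanically from the two modifications: $c(X_1)=0$ since $S(X_1)$ is a circle, each vertex-creation adds exactly one true vertex, and a boundary-disposal adds exactly two (moves (i) and (ii) create three, the collapsing (iii) removes one). A short Python tally of this bookkeeping (function names are ours):

```python
def c_X(k):
    # X_1 has no true vertices; each of the k - 1 vertex-creations
    # used to build X_k adds exactly one true vertex.
    assert k >= 1
    return k - 1

def after_boundary_disposal(c):
    # moves (i) and (ii) create three true vertices in total,
    # and the collapsing (iii) removes one: net +2.
    return c + 3 - 1

assert [c_X(k) for k in (1, 2, 3)] == [0, 1, 2]          # c(X_k) = k - 1
assert all(after_boundary_disposal(c_X(k)) == k + 1      # c = k + 1 after
           for k in range(1, 50))                        # boundary-disposal
```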
We define $Z_k$ as the simple polyhedron obtained from $X_k^\circ$ by a boundary-disposal. By the construction, $Z_k$ is a special shadow of $\#_k(S^1\times S^3)$. Moreover, we have $c(Z_k)=k+1$, and $Z_k$ has exactly two disk regions. ## The proof of Theorem [Theorem 1](#thm:mainthm){reference-type="ref" reference="thm:mainthm"} {#the-proof-of-theorem-thmmainthm} We finally give the proof of Theorem [Theorem 1](#thm:mainthm){reference-type="ref" reference="thm:mainthm"}. *Proof of Theorem [Theorem 1](#thm:mainthm){reference-type="ref" reference="thm:mainthm"}.* We already have $\mathrm{sc}^{\mathrm{sp}}(\#_k(S^1\times S^3))\leq k+1$ due to the existence of $Z_k$, so it remains to show $\mathrm{sc}^{\mathrm{sp}}(\#_k(S^1\times S^3))\geq k+1$. Let $X$ be a special shadow of $\#_k(S^1\times S^3)$ with $c(X)=n$; it suffices to show that $n\geq k+1$. By the classification of closed $4$-manifolds with special shadow-complexity at most one [@Cos06b Theorem 1.1], we have $n\geq 2$. Let $m$ be the number of regions of $X$, which satisfies $m\geq2$ by [@Cos06b Corollary 3.17]. Since $X$ has at least $2$ regions, there does not exist a triple line through which only a single region passes. Therefore, there exist a triple line $a_1$ and a region $R_1$ passing through $a_1$ exactly once. Take a spanning tree $T$ of $S(X)$ such that $T$ does not contain $a_1$, which can be done since $S(X)$ is a quartic graph. It is easily seen that $S(X)\setminus (T\cup a_1)$ consists of $n$ triple lines, which will be denoted by $a_2,\ldots,a_{n+1}$. The triple lines in $T$ are denoted by $a_{n+2},\ldots,a_{2n}$. Let $R_2,\ldots,R_m$ denote the regions other than $R_1$. We give orientations to the triple lines and regions arbitrarily. For each $j\in\{1,\ldots,m\}$, we will consider the region $R_j$ as the image of the interior of a closed disk $D_j$ attached along $S(X)$ by some attaching map $\varphi_j$.
Then the preimage of the true vertices under $\varphi_j$ divides $\partial D_j$ into the preimages of some triple lines, which gives a word $\tilde r_j$ in $\{a_1,\ldots,a_{2n}\}$. Note that we will not distinguish two words of the form $a_i^{\pm1}w$ and $wa_i^{\pm1}$ for some $i\in\{1,\ldots,2n\}$ and some word $w$. We then remove the letters $a_{n+2},\ldots,a_{2n}$ and their inverses from $\tilde r_j$, so that we obtain a word in $\{a_1,\ldots,a_{n+1}\}$. Let $r_j$ denote the obtained word. The above removal corresponds to the quotient $X\to X/T$, and by the homotopy equivalence $X\simeq X/T$, $\pi_1(X)$ admits a presentation $$\pi_1(X)\cong \langle a_1,\ldots,a_{n+1}\mid r_1,\ldots,r_m \rangle.$$ We need the following three claims. *Claim 1*. For each $i\in\{1,\ldots,n+1\}$, the total number of times $a_i$ and $a_i^{-1}$ appear in $r_1,\ldots,r_m$ is exactly $3$. *Proof.* This follows from the fact that, for each triple line, the number of regions (counted with multiplicity) passing through it is exactly $3$. ◻ *Claim 2*. Each of the words $r_1,\ldots,r_m$ is reduced, that is, it does not contain a subword of the form $a_ia_i^{-1}$ or $a_i^{-1}a_i$ for any $i\in\{1,\ldots,n+1\}$. In particular, $r_1,\ldots,r_m$ are not the empty word. *Proof of Claim [Claim 2](#clm:all_reduced){reference-type="ref" reference="clm:all_reduced"}.* Let us fix a true vertex $x$. Let $a_{i_1}$, $a_{i_2}$, $a_{i_3}$ and $a_{i_4}$ be the four triple lines connecting to the true vertex $x$, and suppose their orientations are given so that $a_{i_1}^{\epsilon_1}$, $a_{i_2}^{\epsilon_2}$, $a_{i_3}^{\epsilon_3}$ and $a_{i_4}^{\epsilon_4}$ are directed toward $x$, for some $\epsilon_1,\epsilon_2,\epsilon_3,\epsilon_4\in\{-1,1\}$. Note that there may exist $i,i'\in\{i_1,i_2,i_3,i_4\}$ with $i\ne i'$ such that $a_{i}^{\epsilon_i}=a_{i'}^{-\epsilon_{i'}}$, see Figure [6](#fig:claim){reference-type="ref" reference="fig:claim"}.
![An example showing a part of the singular set near a true vertex $x$ such that $a_{i_1}^{\epsilon_1}$ coincides with $a_{i_2}^{-\epsilon_{2}}$. ](claim "fig:"){#fig:claim width=".4\\hsize"} Then a region passing through $x$ is attached along one of $a_{i_1}^{\epsilon_1}a_{i_2}^{-\epsilon_2}$, $a_{i_2}^{\epsilon_2}a_{i_3}^{-\epsilon_3}$, $a_{i_3}^{\epsilon_3}a_{i_4}^{-\epsilon_4}$, $a_{i_4}^{\epsilon_4}a_{i_1}^{-\epsilon_1}$, $a_{i_1}^{\epsilon_1}a_{i_3}^{-\epsilon_3}$ or $a_{i_2}^{\epsilon_2}a_{i_4}^{-\epsilon_4}$. Therefore, $\tilde r_1,\ldots,\tilde r_m$ are reduced. As already mentioned above, the removal of $a_{n+2}^{\pm1},\ldots,a_{2n}^{\pm1}$ corresponds to the quotient $X\to X/T$. Since $T$ is a tree, the words $r_1,\ldots,r_m$ are also reduced. ◻ *Claim 3*. For $j,j'\in\{1,\ldots,m\}$ with $j\ne j'$ and any subword $w$ of $r_j$ of length at least $2$, neither $w$ nor $w^{-1}$ is contained in $r_{j'}$. *Proof of Claim [Claim 3](#clm:no_common_subwords){reference-type="ref" reference="clm:no_common_subwords"}.* As seen in the proof of Claim [Claim 2](#clm:all_reduced){reference-type="ref" reference="clm:all_reduced"}, each of the six regions around a true vertex determines a pair of the four triple lines connected to that true vertex, and this correspondence is one-to-one. Hence, $\tilde r_1,\ldots,\tilde r_{m}$ do not contain common subwords. Since for any two vertices of $T$ there exists a unique path in $T$ connecting them, the words $r_1,\ldots,r_{m}$ also contain no common subwords.
◻ By the assumption on $a_1$ and $R_1$ and Claim [Claim 2](#clm:all_reduced){reference-type="ref" reference="clm:all_reduced"}, there exists a reduced word $w_1$ containing neither $a_1$ nor $a_1^{-1}$ such that $r_1=a_1w_1^{-1}$, so we have $$\pi_1(X)\cong \langle a_1,\ldots,a_{n+1}\mid a_1w_1^{-1},\,r_2,\ldots,r_m \rangle.$$ Note that $w_1$ is possibly the empty word. We simultaneously remove the generator $a_1$ and the relator $a_1w_1^{-1}$ from the presentation, changing the other relators suitably, and then we obtain a new presentation of $\pi_1(X)$ with $n$ generators. Since $\pi_1(X)$ is the free group $F_k$ of rank $k$, we have $n\geq k$. We want to show that $n\geq k+1$, so we suppose that $n=k$ and derive a contradiction. By Claim [Claim 1](#clm:triple_line){reference-type="ref" reference="clm:triple_line"} and by exchanging the orientations of regions if necessary, we can assume without loss of generality that one of the following holds: - $m\geq3$, and each of $r_2$ and $r_3$ contains $a_1$ exactly once, - $r_2$ contains $a_1$ and $a_1^{-1}$ exactly once each, or - $r_2$ contains $a_1$ exactly twice. Assume (i) holds. Then we can write $r_2=a_1w_2$ and $r_3=a_1w_3$ for some words $w_2$ and $w_3$ in $\{a_2,\ldots,a_{k+1}\}$, and $\pi_1(X)$ admits a presentation $$\pi_1(X)\cong \langle a_2,\ldots,a_{k+1}\mid w_1 w_2 ,\,w_1 w_3,\, r_4,\ldots,r_m \rangle.$$ Then there exists a surjection from the group presented by $$\langle a_2,\ldots,a_{k+1}\mid w_1 w_2 \rangle$$ onto $\pi_1(X)$. Therefore, $w_1w_2$ must reduce to the empty word since $\pi_1(X)\cong F_k$. If $w_1$ is the empty word, then $w_2$ reduces to the empty word. Then $r_2$ also reduces to the empty word, which contradicts Claim [Claim 2](#clm:all_reduced){reference-type="ref" reference="clm:all_reduced"}. Suppose $w_1$ is not the empty word. If $w_1=a_i$ for some $i\in\{2,\ldots,k+1\}$, then $w_2=a_i^{-1}$.
Hence $r_2=a_1a_i^{-1}=r_1$, which contradicts Claim [Claim 3](#clm:no_common_subwords){reference-type="ref" reference="clm:no_common_subwords"}. If $w_1$ is of length at least $2$, we can check that $w_1$ and $w_2^{-1}$ have a common subword. Then $r_1$ and $r_2^{-1}$ also have a common subword, a contradiction to Claim [Claim 3](#clm:no_common_subwords){reference-type="ref" reference="clm:no_common_subwords"}. Assume (ii) holds, and suppose that $r_2=a_1 w_2 a_1^{-1} w_2'$ for some words $w_2, w_2'$ in $\{a_2,\ldots,a_{k+1}\}$. Then we have $$\pi_1(X)\cong \langle a_2,\ldots,a_{k+1}\mid w_1 w_2 w_1^{-1} w_2',\,r_3,\ldots,r_m \rangle,$$ and there exists a surjection onto this group from the group presented by $$\langle a_2,\ldots,a_{k+1}\mid w_1 w_2 w_1^{-1} w_2' \rangle.$$ Thus, $w_1w_2w_1^{-1}w'_2$ must reduce to the empty word since $\pi_1(X)\cong F_k$. If $w_1$ is the empty word, then $r_1=a_1$ and $r_2=a_1w_2a_1^{-1}w'_2$. By Claim [Claim 1](#clm:triple_line){reference-type="ref" reference="clm:triple_line"}, the number of times that $a_i$ and $a_i^{-1}$ appear in $r_2$ is exactly $3$ for each $i\in\{2,\ldots,k+1\}$, and hence so is the number of times they appear in $w_2w'_2$. Then $w_2w'_2$ cannot reduce to the empty word, which is a contradiction. Supposing $w_1$ is not the empty word, we can check that $r_1$ and $r_2$ have a common subword, a contradiction to Claim [Claim 3](#clm:no_common_subwords){reference-type="ref" reference="clm:no_common_subwords"}. The case (iii) also leads to a contradiction in the same way as case (ii). In all cases, we derive a contradiction. Hence $n\geq k+1$, which completes the proof of Theorem [Theorem 1](#thm:mainthm){reference-type="ref" reference="thm:mainthm"}. ◻ **Corollary 9**.
*The special shadow-complexity of closed $4$-manifolds is a surjection to $\mathbb{Z}_{\geq0}\setminus\{1\}$.* *Proof.* As seen in Theorem [Theorem 8](#thm:sc^sp_at_most_one){reference-type="ref" reference="thm:sc^sp_at_most_one"}, there exist closed $4$-manifolds with special shadow-complexity $0$ but no closed $4$-manifolds with special shadow-complexity $1$. We have found a closed $4$-manifold with special shadow-complexity $k+1$ for each positive integer $k$. ◻ **Corollary 10**. *For any closed $4$-manifold $W$, $\mathrm{rank}(\pi_1(W))\leq\mathrm{sc}^{\mathrm{sp}}(W)$. Moreover, if $\pi_1(W)$ is a free group, then $\mathrm{rank}(\pi_1(W))+1\leq\mathrm{sc}^{\mathrm{sp}}(W)$.* *Proof.* Let $W$ be a closed $4$-manifold and $X$ a shadow of $W$ with $\mathrm{sc}^{\mathrm{sp}}(W)=c(X)=n$. In the same way as in the proof of Theorem [Theorem 1](#thm:mainthm){reference-type="ref" reference="thm:mainthm"}, we can always obtain a presentation of $\pi_1(W)$ with $n$ generators. Therefore, we have $\mathrm{rank}(\pi_1(W))\leq\mathrm{sc}^{\mathrm{sp}}(W)$. In order to show $\mathrm{sc}^{\mathrm{sp}}(\#_k(S^1\times S^3))\geq k+1$ in the proof of Theorem [Theorem 1](#thm:mainthm){reference-type="ref" reference="thm:mainthm"}, we only used that the group $\pi_1(\#_k(S^1\times S^3))$ is free. Hence, the latter statement also holds. ◻ A. Carrega and B. Martelli, *Shadows, ribbon surfaces, and quantum invariants*, Quantum Topol. **8** (2017), 249--294. F. Costantino, *Stein domains and branched shadows of $4$-manifolds*, Geom. Dedicata **121** (2006), 89--111. F. Costantino, *Shadows and branched shadows of $3$- and $4$-manifolds*, Scuola Normale Superiore, Edizioni della Normale, Pisa, Italy, 2005. F. Costantino, *Complexity of $4$-manifolds*, Experiment. Math. **15** (2006), no. 2, 237--249. F. Costantino, *Branched shadows and complex structures on $4$-manifolds*, J. Knot Theory Ramifications **17** (2008), 1429--1454. F. Costantino and D.
Thurston, *$3$-manifolds efficiently bound $4$-manifolds*, J. Topol. **1** (2008), no. 3, 703--745. M. Ishikawa and Y. Koda, *Stable maps and branched shadows of $3$-manifolds*, Math. Ann. **367** (2017), 1819--1863. Y. Koda, B. Martelli and H. Naoe, *Four-manifolds with shadow-complexity one*, Ann. Fac. Sci. Toulouse Math. (6) **31** (2022), no. 4, 1111--1212. Y. Koda and H. Naoe, *Shadows of acyclic $4$-manifolds with sphere boundary*, Algebr. Geom. Topol. **20** (2020), no. 7, 3707--3731. F. Laudenbach and V. Poénaru, *A note on $4$-dimensional handlebodies*, Bull. Soc. Math. France **100** (1972), 337--344. B. Martelli, *Links, two-handles, and four-manifolds*, Int. Math. Res. Not. IMRN **2005**, no. 58, 3595--3623. B. Martelli, *Four-manifolds with shadow-complexity zero*, Int. Math. Res. Not. IMRN **2011**, no. 6, 1268--1351. V. G. Turaev, *Quantum invariants of knots and $3$-manifolds*, De Gruyter Studies in Mathematics, vol. 18, Walter de Gruyter & Co., Berlin, 1994.
--- abstract: | In this note, we establish a bi-parameter linear localization of the one-dimensional stochastic wave equation with a multiplicative space-time white noise forcing. address: - | Jingyu Huang\ School of Mathematics\ Watson Building\ University of Birmingham\ Edgbaston\ Birmingham\ B15 2TT\ United Kingdom - | Tadahiro Oh, School of Mathematics\ The University of Edinburgh\ and The Maxwell Institute for the Mathematical Sciences\ James Clerk Maxwell Building\ The King's Buildings\ Peter Guthrie Tait Road\ Edinburgh\ EH9 3FD\ United Kingdom - | Mamoru Okamoto\ Department of Mathematics\ Graduate School of Science\ Osaka University\ Toyonaka\ Osaka\ 560-0043\ Japan author: - Jingyu Huang, Tadahiro Oh, and Mamoru Okamoto title: On the linear localization of the one-dimensional stochastic wave equation with a multiplicative space-time white noise forcing --- # Introduction We consider the following stochastic wave equation (SNLW) on $\mathbb{R}\times \mathbb{R}$: $$\begin{cases} \partial_t^2 u - \partial_x^2 u = F (u) \xi\\ (u,\partial_tu) |_{t = 0} = (u_0,u_1), \end{cases} \qquad (t, x) \in \mathbb{R}\times \mathbb{R}, \label{NLW1}$$ where $F : \mathbb{R}\to \mathbb{R}$ is a Lipschitz continuous function and $\xi$ denotes the (Gaussian) space-time white noise on $\mathbb{R}\times \mathbb{R}$ whose space-time covariance is formally given by $$\begin{aligned} \mathbb{E}\big[\xi(t_1, x_1)\xi(t_2, x_2) \big] = \delta(t_1 - t_2) \delta(x_1 - x_2). \label{white1}\end{aligned}$$ The expression [\[white1\]](#white1){reference-type="eqref" reference="white1"} is merely formal, but it can be made rigorous by testing the noise against test functions. **Definition 1**.
*A two-parameter white noise $\xi$ on $\mathbb{R}^2$ is a family of centered Gaussian random variables $\{ \xi(\varphi): \varphi \in L^2(\mathbb{R}^2)\}$ such that $$\begin{aligned} \mathbb{E}\big[ \xi(\varphi)^2 \big] = \| \varphi\|_{L^2(\mathbb{R}^2)}^2 \qquad \text{and} \qquad \mathbb{E}\big[ \xi(\varphi_1) \xi( \varphi_2)\big] = \langle \varphi_1, \varphi_2 \rangle_{L^2(\mathbb{R}^2)}. %\label{white2}\end{aligned}$$* In [@Walsh], Walsh studied the Itô solution theory for [\[NLW1\]](#NLW1){reference-type="eqref" reference="NLW1"} and proved its well-posedness. See, for example, [@Walsh p.323, Exercise 3.7] and [@Dalang p.45], where the fundamental properties of solutions to [\[NLW1\]](#NLW1){reference-type="eqref" reference="NLW1"} are stated (implicitly). For the readers' convenience, we state and prove basic properties of solutions to [\[NLW1\]](#NLW1){reference-type="eqref" reference="NLW1"} in Appendix [3](#SEC:A){reference-type="ref" reference="SEC:A"}. Our main goal in this note is to study the local fluctuation property of solutions to [\[NLW1\]](#NLW1){reference-type="eqref" reference="NLW1"}. Let us first consider the following stochastic heat equation: $$\begin{cases} \partial_tu - \partial_x^2 u = F (u) \xi\\ u|_{t = 0} = u_0, \end{cases} \qquad (t, x) \in \mathbb{R}\times \mathbb{R}.
\label{NLH1}$$ It is well known that, under suitable assumptions on $F$ and $u_0$, the solution to [\[NLH1\]](#NLH1){reference-type="eqref" reference="NLH1"} *locally linearizes*; namely by letting $Z_\textup{heat}$ denote the linear solution satisfying $\partial_tZ_\textup{heat}- \partial_x^2 Z_\textup{heat}= \xi$ with $Z_\textup{heat}|_{t= 0} =0$, the solution $u$ to [\[NLH1\]](#NLH1){reference-type="eqref" reference="NLH1"} satisfies $$\begin{aligned} u(t , x + \varepsilon) - u(t , x) = F(u(t , x))\big\{Z_\textup{heat}(t , x + \varepsilon) - Z_\textup{heat}(t , x)\big\} + R_\varepsilon(t,x), \label{fluc1}\end{aligned}$$ where, as $\varepsilon\to 0$, the remainder term $R_\varepsilon(t,x)$ tends to 0 much faster than $Z_\textup{heat}(t , x + \varepsilon) - Z_\textup{heat}(t , x)$. See, for example, [@Hairer1; @KSXZ; @FKM; @HP]. The relation [\[fluc1\]](#fluc1){reference-type="eqref" reference="fluc1"} states that, for fixed $t$, local fluctuations (in $x$) of the solution $u(t)$ are essentially given by those of $Z_\textup{heat}(t)$. In other words, if we ignore precise regularity conditions, then [\[fluc1\]](#fluc1){reference-type="eqref" reference="fluc1"} states that $u(t)$ is controlled by $Z_\textup{heat}(t)$ in the sense of controlled paths due to Gubinelli [@Gubi]; see [@FH Definition 4.6]. In [@HK], Khoshnevisan and the first author studied an analogous issue for SNLW [\[NLW1\]](#NLW1){reference-type="eqref" reference="NLW1"}. In particular, they showed that the solution to [\[NLW1\]](#NLW1){reference-type="eqref" reference="NLW1"} with initial data $(u_0,u_1) \equiv (0, 1)$ does *not* locally linearize (for fixed $t$), which shows a sharp contrast to the case of the stochastic heat equation. In this note, we change our viewpoint and study the local linearization issue for SNLW [\[NLW1\]](#NLW1){reference-type="eqref" reference="NLW1"} from a *bi-parameter* point of view. 
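Throughout, the driving noise is the two-parameter white noise of Definition [Definition 1](#DEF:white){reference-type="ref" reference="DEF:white"}; its defining isometry is easy to emulate numerically. On a grid, the white-noise mass of each cell is an independent centred Gaussian with variance equal to the cell area, and pairing the masses against a test function approximates $\xi(\varphi)$. The following Monte Carlo sketch checks $\mathbb{E}[\xi(\varphi)^2] = \|\varphi\|_{L^2}^2$; the grid size, test function, and sample count are arbitrary illustrative choices, not taken from the references.

```python
import numpy as np

rng = np.random.default_rng(0)

# Discretize [0,1]^2 into n x n cells.  The white-noise mass of each cell is an
# independent N(0, cell_area) variable, so xi(phi) is approximated by
# the sum over cells of phi(cell centre) * mass(cell).
n = 30
h = 1.0 / n
centres = (np.arange(n) + 0.5) * h
X, Y = np.meshgrid(centres, centres, indexing="ij")

phi = np.sin(2 * np.pi * X) * np.exp(-Y)           # a fixed test function
norm_sq = float(np.sum(phi**2)) * h**2             # ~ ||phi||_{L^2([0,1]^2)}^2

samples = 5000
masses = rng.normal(0.0, h, size=(samples, n, n))  # std = sqrt(cell area) = h
xi_phi = np.einsum("kij,ij->k", masses, phi)       # samples of xi(phi)

print(xi_phi.var(), norm_sq)  # the two numbers should agree up to Monte Carlo error
```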
In [@Walsh], Walsh studied the well-posedness issue of [\[NLW1\]](#NLW1){reference-type="eqref" reference="NLW1"} by first switching to the null coordinates: $$\begin{aligned} x_1 = \frac{x-t}{\sqrt 2}\qquad \text{and}\qquad x_2 = \frac{x+t}{\sqrt 2}. %\label{null1}\end{aligned}$$ In the null coordinates, the Cauchy problem [\[NLW1\]](#NLW1){reference-type="eqref" reference="NLW1"} becomes $$\begin{aligned} \begin{cases} \partial_{x_1}\partial_{x_2} v = - \frac 12 F(v)\widetilde\xi\\ v|_{x_1 = x_2} = u_0(\sqrt 2 \,\cdot\,), \quad (\partial_{x_2} - \partial_{x_1}) v|_{x_1 = x_2} = \sqrt 2 u_1(\sqrt 2\,\cdot\,), \end{cases} \label{SNLW1a} \end{aligned}$$ where $$\begin{aligned} v(x_1, x_2) = u \bigg(\frac{-x_1 + x_2}{\sqrt 2}, \frac{x_1 + x_2}{\sqrt 2}\bigg) \quad \text{and} \quad \widetilde\xi (x_1, x_2) = \xi \bigg(\frac{-x_1 + x_2}{\sqrt 2}, \frac{x_1 + x_2}{\sqrt 2}\bigg) \label{null2}\end{aligned}$$ with the latter interpreted in a suitable sense. Note that this change of coordinates is via an orthogonal transformation (which in particular preserves the $L^2$-inner product on $\mathbb{R}^2$) and thus $\widetilde\xi$ is also a two-parameter white noise in the sense of Definition [Definition 1](#DEF:white){reference-type="ref" reference="DEF:white"}. By integrating in $x_1$ and $x_2$, we can rewrite [\[SNLW1a\]](#SNLW1a){reference-type="eqref" reference="SNLW1a"} as $$\begin{aligned} v(\mathbf{x}) & = V_0(\mathbf{x}) + \frac 12 \int_{x_1}^{x_2} \int_{x_1}^{y_2} F(v(\mathbf{y})) \widetilde\xi (dy_1, dy_2), \label{SNLW1b}\end{aligned}$$ where $\mathbf{x}= (x_1, x_2)$, $\mathbf{y}= (y_1, y_2)$, and $$\begin{aligned} V_0(\mathbf{x}) & = \frac 12 \Big(u_0(\sqrt 2 x_1) + u_0(\sqrt 2 x_2)\Big) + \frac 12 \int_{\sqrt 2 x_1}^{\sqrt 2 x_2} u_1(y) dy. 
\label{V0}\end{aligned}$$ Under the Lipschitz assumption on $F$, one can then interpret the last term on the right-hand side of [\[SNLW1b\]](#SNLW1b){reference-type="eqref" reference="SNLW1b"} as a two-parameter stochastic integral ([@Cai1; @Cai2]) and prove well-posedness of [\[SNLW1b\]](#SNLW1b){reference-type="eqref" reference="SNLW1b"} (and hence of the original SNLW [\[NLW1\]](#NLW1){reference-type="eqref" reference="NLW1"}); see [@Walsh; @Dalang]. In the following, we study the local linearization property of the solution $v$ to [\[SNLW1a\]](#SNLW1a){reference-type="eqref" reference="SNLW1a"} in the variable $\mathbf{x}= (x_1, x_2)$ in a bi-parameter manner. For this purpose, let us introduce some notation. Let $\widetilde Z$ be the linearization of $v$ in [\[SNLW1a\]](#SNLW1a){reference-type="eqref" reference="SNLW1a"}; namely, $\widetilde Z$ is the solution to [\[SNLW1a\]](#SNLW1a){reference-type="eqref" reference="SNLW1a"} with $F(v) \equiv 1$ and $(u_0, u_1) = (0, 0)$: $$\begin{cases} \partial_{x_1} \partial_{x_2} \widetilde Z = -\frac 12 \widetilde\xi\\ \widetilde Z|_{x_1 = x_2} = 0, \quad (\partial_{x_2} - \partial_{x_1}) \widetilde Z|_{x_1 = x_2} = 0. \end{cases} %\label{SNLW2}$$ By direct integration, we then have $$\begin{aligned} \widetilde Z(\mathbf{x}) = \frac 12 \int_{x_1}^{x_2} \int_{x_1}^{y_2} \widetilde\xi (dy_1, dy_2), \label{SNLW3}\end{aligned}$$ which is to be interpreted as a two-parameter stochastic integral. Given $\varepsilon\in \mathbb{R}$, define the difference operator $\delta_{\varepsilon}^{(j)}$, $j = 1, 2$, by setting $$\begin{aligned} \begin{split} \delta^{(1)}_{\varepsilon} f(x_1, x_2) = f(x_1 + \varepsilon, x_2) - f(x_1, x_2),\\ \delta^{(2)}_{\varepsilon} f(x_1, x_2) = f(x_1 , x_2+ \varepsilon)- f(x_1, x_2).
\end{split} \label{Lip2}\end{aligned}$$ Then, from [\[Lip2\]](#Lip2){reference-type="eqref" reference="Lip2"} and [\[SNLW1b\]](#SNLW1b){reference-type="eqref" reference="SNLW1b"}, we have $$\begin{aligned} \begin{split} \delta_{\pm \varepsilon}^{(1)}\delta_\varepsilon^{(2)} v(\mathbf{x}) & = v(x_1\pm\varepsilon, x_2+\varepsilon) - v(x_1\pm\varepsilon,x_2) - v(x_1,x_2+\varepsilon) + v(x_1,x_2) \\ & = \delta_{\pm \varepsilon}^{(1)}\delta_\varepsilon^{(2)} V_0(\mathbf{x}) - \frac 12 \int_{x_2}^{x_2+\varepsilon} \int_{x_1}^{x_1\pm \varepsilon} F(v(\mathbf{y})) \widetilde\xi (dy_1, dy_2). \end{split} \label{Z1}\end{aligned}$$ Similarly, from [\[SNLW3\]](#SNLW3){reference-type="eqref" reference="SNLW3"}, we have $$\begin{aligned} \begin{split} \delta_{\pm\varepsilon}^{(1)}\delta_\varepsilon^{(2)} \widetilde Z(\mathbf{x}) & = \widetilde Z(x_1\pm\varepsilon, x_2+\varepsilon) -\widetilde Z(x_1\pm\varepsilon,x_2) - \widetilde Z(x_1,x_2+\varepsilon) + \widetilde Z(x_1,x_2) \\ & = - \frac 12 \int_{x_2}^{x_2+\varepsilon} \int_{x_1}^{x_1\pm \varepsilon} \widetilde\xi (dy_1, dy_2). \end{split} \label{Z2}\end{aligned}$$ Thus, from the Wiener isometry (see, for example, [@Kho09 (20) on p. 7]), we have $$\begin{aligned} \mathbb{E}\big[|\delta_{\pm \varepsilon}^{(1)}\delta_\varepsilon^{(2)}\widetilde Z(\mathbf{x})|^2\big] = \frac 14 \varepsilon^2, \label{Z3}\end{aligned}$$ which shows that the decay rate of $|\delta_{\pm \varepsilon}^{(1)}\delta_\varepsilon^{(2)} \widetilde Z(\mathbf{x})|$ is $\sim |\varepsilon|$ on average. The following lemma shows that the decay rate (as $\varepsilon\to 0$) of $|\delta_{\pm \varepsilon}^{(1)}\delta_\varepsilon^{(2)} \widetilde Z(\mathbf{x})|$ is almost surely slower than $|\varepsilon|^{1+\kappa}$ for any $\kappa > 0$. **Lemma 2**. *Fix $\mathbf{x}= (x_1, x_2) \in \mathbb{R}^2$. 
Then, for any $\kappa > 0$, we have $$\begin{aligned} \limsup_{\varepsilon\to 0} \frac{|\delta_{\pm \varepsilon}^{(1)}\delta_\varepsilon^{(2)} \widetilde Z(\mathbf{x})|}{|\varepsilon|^{1+\kappa}}= \infty, \label{decay1}\end{aligned}$$* *almost surely.* Lemma [Lemma 2](#LEM:decay){reference-type="ref" reference="LEM:decay"} follows from the large deviation estimate and the Borel-Cantelli lemma. We present the proof of Lemma [Lemma 2](#LEM:decay){reference-type="ref" reference="LEM:decay"} in the next section. Let us now turn to the local linearization property of solutions to SNLW [\[NLW1\]](#NLW1){reference-type="eqref" reference="NLW1"}. In [@HK], Khoshnevisan and the first author investigated the local linearization issue for SNLW [\[NLW1\]](#NLW1){reference-type="eqref" reference="NLW1"} by studying local fluctuations (in $x$) of $u(t)$ for fixed $t$. While such an approach is suitable for the stochastic heat equation [\[NLH1\]](#NLH1){reference-type="eqref" reference="NLH1"}, it does not seem to be appropriate for the wave equation. We instead propose to study *bi-parameter* fluctuations of $u$ with respect to the null characteristics $x_1 = \frac{x-t}{\sqrt 2}$ and $x_2 = \frac{x+t}{\sqrt 2}$. For this purpose, let us first state the linear localization result for SNLW [\[SNLW1a\]](#SNLW1a){reference-type="eqref" reference="SNLW1a"} in the null coordinates. 
Given $(x_1, x_2) \in \mathbb{R}^2$ and (small) $\varepsilon\in \mathbb{R}$, define the remainder terms $\widetilde R_\varepsilon^+ (x_1, x_2)$ and $\widetilde R_\varepsilon^- (x_1, x_2)$ by setting $$\begin{aligned} \begin{split} \widetilde R_\varepsilon^\pm (x_1, x_2) & = \delta_{\pm \varepsilon}^{(1)}\delta_\varepsilon^{(2)}v(x_1, x_2) - F(v(x_1, x_2)) \delta_{\pm \varepsilon}^{(1)}\delta_\varepsilon^{(2)}\widetilde Z(x_1, x_2)\\ & = \{ v(x_1\pm \varepsilon, x_2+\varepsilon) - v(x_1\pm \varepsilon,x_2) - v(x_1,x_2+\varepsilon) + v(x_1,x_2) \} \\ &\quad - F(v(x_1,x_2)) \{ \widetilde Z(x_1\pm \varepsilon, x_2+\varepsilon) - \widetilde Z(x_1\pm \varepsilon,x_2)\\ & \hphantom{XXXXXXXXX} - \widetilde Z(x_1,x_2+\varepsilon) + \widetilde Z(x_1,x_2) \}. \end{split} \label{Z4}\end{aligned}$$ **Theorem 3**. *Given $u_0 \in C^1_b(\mathbb{R})$ and $u_1 \in C_b(\mathbb{R})$, let $v$ be the solution to SNLW [\[SNLW1a\]](#SNLW1a){reference-type="eqref" reference="SNLW1a"} in the null coordinates and $\widetilde Z$ be as in [\[SNLW3\]](#SNLW3){reference-type="eqref" reference="SNLW3"}. Then, given any $\mathbf{x}= (x_1, x_2) \in \mathbb{R}^2$ and finite $p \ge 2$, we have $$\begin{aligned} \|\widetilde R_\varepsilon^\pm (\mathbf{x}) \|_{L^p(\Omega)} \le C(p, x_1, x_2) \varepsilon^{\frac 32}, \label{Z5}\end{aligned}$$* *uniformly in small $\varepsilon> 0$.* Theorem [Theorem 3](#THM:loc){reference-type="ref" reference="THM:loc"} establishes bi-parameter linear localization for the solution $v$ to [\[SNLW1a\]](#SNLW1a){reference-type="eqref" reference="SNLW1a"} in the following sense; the remainder term $\widetilde R_\varepsilon^\pm (x_1, x_2)$ decays like $\sim \varepsilon^\frac 32$ as $\varepsilon\to 0$ on average, and hence, in view of Lemma [Lemma 2](#LEM:decay){reference-type="ref" reference="LEM:decay"}, $\widetilde R_\varepsilon^\pm (x_1, x_2)$ tends to 0 much faster than $\delta_{\pm \varepsilon}^{(1)}\delta_\varepsilon^{(2)}\widetilde Z(x_1, x_2)$. 
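The two scales in this comparison can also be seen numerically: by [\[Z2\]](#Z2){reference-type="eqref" reference="Z2"}, $\delta_{\varepsilon}^{(1)}\delta_\varepsilon^{(2)} \widetilde Z(\mathbf{x})$ equals $-\frac 12$ times the white-noise mass of an $\varepsilon\times\varepsilon$ rectangle, which on a grid discretization is a sum of independent Gaussian cell masses. A Monte Carlo sanity check of the variance identity [\[Z3\]](#Z3){reference-type="eqref" reference="Z3"} (the grid resolution and sample count are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)

# By (Z2), delta^(1)_eps delta^(2)_eps Ztilde(x) = -(1/2) * xi(R) for an
# eps x eps rectangle R.  Discretizing R into m x m cells, xi(R) is a sum of
# independent N(0, cell_area) masses, so the increment is N(0, eps^2 / 4).
eps, m, samples = 0.3, 8, 50000
cell_std = eps / m                                   # sqrt of the cell area
masses = rng.normal(0.0, cell_std, size=(samples, m, m))
increments = -0.5 * masses.sum(axis=(1, 2))          # samples of the mixed difference

print(increments.var())   # should be close to eps**2 / 4 = 0.0225, as in (Z3)
```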
As an immediate corollary to Theorem [Theorem 3](#THM:loc){reference-type="ref" reference="THM:loc"} and [\[null2\]](#null2){reference-type="eqref" reference="null2"}, we obtain the following bi-parameter linear localization for the solution $u$ to SNLW [\[NLW1\]](#NLW1){reference-type="eqref" reference="NLW1"} in the original space-time coordinates. **Theorem 4**. *Given $u_0 \in C^1_b(\mathbb{R})$ and $u_1 \in C_b(\mathbb{R})$, let $u$ be the solution to SNLW [\[NLW1\]](#NLW1){reference-type="eqref" reference="NLW1"} and $Z$ be the linear solution, satisfying $$\begin{cases} \partial_t^2 Z- \partial_x^2 Z = \xi\\ (Z,\partial_tZ) |_{t = 0} = (0,0). \end{cases} %\label{NLW2}$$* *Then, given any $(t, x) \in \mathbb{R}^2$ and finite $p \ge 2$, we have $$\begin{aligned} & \| \Delta^{(1)}_\varepsilon u(t, x) - F(u(t, x)) \Delta^{(1)}_\varepsilon Z(t, x) \|_{L^p(\Omega)}\\ &\quad + \| \Delta^{(2)}_\varepsilon u(t, x) - F(u(t, x)) \Delta^{(2)}_\varepsilon Z(t, x) \|_{L^p(\Omega)} \leq C(p, t, x) \varepsilon^\frac{3}{2}, \end{aligned}$$* *uniformly in small $\varepsilon> 0$, where $\Delta^{(1)}_\varepsilon$ and $\Delta^{(2)}_\varepsilon$ are defined by $$\begin{aligned} \Delta^{(1)}_\varepsilon f(t, x) & = f(t, x+2\varepsilon) - f(t-\varepsilon, x+\varepsilon) - f(t+\varepsilon, x+\varepsilon) + f(t, x), \\ \Delta^{(2)}_\varepsilon f(t, x) & = f(t + 2\varepsilon, x) - f(t+\varepsilon, x-\varepsilon) - f(t+\varepsilon, x+\varepsilon) + f(t, x).\end{aligned}$$* We also have the following claim as a direct corollary to Lemma [Lemma 2](#LEM:decay){reference-type="ref" reference="LEM:decay"} and [\[null2\]](#null2){reference-type="eqref" reference="null2"}; given any $\kappa > 0$ and $(t, x) \in \mathbb{R}^2$, we have $$\begin{aligned} \limsup_{\varepsilon\to 0} \frac{| \Delta^{(1)}_\varepsilon Z(t, x)|}{|\varepsilon|^{1+\kappa}} = \limsup_{\varepsilon\to 0} \frac{| \Delta^{(2)}_\varepsilon Z(t, x)|}{|\varepsilon|^{1+\kappa}} = \infty, \label{decay5}\end{aligned}$$ almost surely. Hence, from Theorem [Theorem 4](#THM:2){reference-type="ref" reference="THM:2"} and [\[decay5\]](#decay5){reference-type="eqref" reference="decay5"}, we have $$\begin{aligned} \Delta^{(1)}_\varepsilon u(t, x) & = F(u(t, x)) \Delta^{(1)}_\varepsilon Z(t, x) + R^{(1)}_\varepsilon(t, x),\\ \Delta^{(2)}_\varepsilon u(t, x) & = F(u(t, x)) \Delta^{(2)}_\varepsilon Z(t, x) + R^{(2)}_\varepsilon(t, x), \end{aligned}$$ where the remainder term $R^{(j)}_\varepsilon(t, x)$ decays much faster than $\Delta^{(j)}_\varepsilon Z(t, x)$, $j = 1, 2$, thus establishing a bi-parameter linear localization for the solution $u$ to SNLW [\[NLW1\]](#NLW1){reference-type="eqref" reference="NLW1"} in the original space-time coordinates. **Remark 5**. *In Theorems [Theorem 3](#THM:loc){reference-type="ref" reference="THM:loc"} and [Theorem 4](#THM:2){reference-type="ref" reference="THM:2"}, we established local linearizability of SNLW in a bi-parameter sense. Such a bi-parameter point of view is natural in studying the one-dimensional (stochastic) wave equation. See, for example, [@QT; @CG; @BLS] and the references therein.* # Proofs of Lemma [Lemma 2](#LEM:decay){reference-type="ref" reference="LEM:decay"} and Theorem [Theorem 3](#THM:loc){reference-type="ref" reference="THM:loc"} {#proofs-of-lemma-lemdecay-and-theorem-thmloc} In this section, we present the proofs of Lemma [Lemma 2](#LEM:decay){reference-type="ref" reference="LEM:decay"} and Theorem [Theorem 3](#THM:loc){reference-type="ref" reference="THM:loc"}. We first prove Lemma [Lemma 2](#LEM:decay){reference-type="ref" reference="LEM:decay"} on a lower bound of the decay rate of $|\delta_{\pm \varepsilon}^{(1)}\delta_\varepsilon^{(2)} \widetilde Z(\mathbf{x})|$ as $\varepsilon\to 0$.
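As an aside, the operators $\Delta^{(1)}_\varepsilon$ and $\Delta^{(2)}_\varepsilon$ in Theorem [Theorem 4](#THM:2){reference-type="ref" reference="THM:2"} are precisely mixed differences along the null directions: under the null coordinates $x_1 = \frac{x-t}{\sqrt 2}$ and $x_2 = \frac{x+t}{\sqrt 2}$, the four space-time points entering $\Delta^{(1)}_\varepsilon f(t,x)$ map to the corners of the null rectangle in $\delta^{(1)}_{\sqrt 2 \varepsilon}\delta^{(2)}_{\sqrt 2 \varepsilon}$, and those of $\Delta^{(2)}_\varepsilon f(t,x)$ to the corners in $\delta^{(1)}_{-\sqrt 2 \varepsilon}\delta^{(2)}_{\sqrt 2 \varepsilon}$. A short check of this bookkeeping (the sample point and step size are arbitrary):

```python
import math

S2 = math.sqrt(2.0)

def to_null(t, x):
    # null coordinates x1 = (x - t)/sqrt2, x2 = (x + t)/sqrt2
    return ((x - t) / S2, (x + t) / S2)

t, x, eps = 0.4, 1.1, 0.05
x1, x2 = to_null(t, x)
e = S2 * eps

# The four points in Delta^(1)_eps f(t, x), mapped to null coordinates, should be
# the corners (x1+e, x2+e), (x1+e, x2), (x1, x2+e), (x1, x2) of a null rectangle:
corners1 = [to_null(t, x + 2 * eps), to_null(t - eps, x + eps),
            to_null(t + eps, x + eps), to_null(t, x)]
expected1 = [(x1 + e, x2 + e), (x1 + e, x2), (x1, x2 + e), (x1, x2)]

# Likewise for Delta^(2)_eps f(t, x), with the first null shift reversed:
corners2 = [to_null(t + 2 * eps, x), to_null(t + eps, x - eps),
            to_null(t + eps, x + eps), to_null(t, x)]
expected2 = [(x1 - e, x2 + e), (x1 - e, x2), (x1, x2 + e), (x1, x2)]

for got, exp in zip(corners1 + corners2, expected1 + expected2):
    assert all(math.isclose(g, v, abs_tol=1e-12) for g, v in zip(got, exp))
print("Delta^(1), Delta^(2) are mixed null-direction differences of size sqrt(2)*eps")
```

This is consistent with Theorem 4 being an immediate corollary of Theorem 3 under the null-coordinate change of variables.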
*Proof of Lemma [Lemma 2](#LEM:decay){reference-type="ref" reference="LEM:decay"}.* We only consider $\delta_{\varepsilon}^{(1)}\delta_\varepsilon^{(2)} \widetilde Z(\mathbf{x})$ since the same proof applies to $\delta_{-\varepsilon}^{(1)}\delta_\varepsilon^{(2)}\widetilde Z(\mathbf{x})$. We first recall the so-called Cramér condition; a random variable $X$ is said to satisfy the Cramér condition if there exists $\lambda> 0$ such that $$\begin{aligned} \varphi(\lambda) = \mathbb{E}\big[e^{\lambda|X|}\big] < \infty.\end{aligned}$$ When this condition holds, we define $\psi(\lambda) = \log \varphi(\lambda)$ if $\varphi(\lambda) < \infty$ and $\psi(\lambda) = \infty$ otherwise. For $a < \mathbb{E}[X]$, we define the Cramér transform: $$\begin{aligned} H(a) = \sup_{\lambda<0} \{ a \lambda- \psi(\lambda) \}. \label{Cr2}\end{aligned}$$ Then, we have $$\begin{aligned} P(X \le a) \le e^{-H(a)}. \label{Cr3}\end{aligned}$$ See Shiryaev's book [@SHIR Section IV.5]. Given $\varepsilon> 0$ and $\mathbf{x}\in \mathbb{R}^2$, we set $X = |\delta_\varepsilon^{(1)}\delta_\varepsilon^{(2)} \widetilde Z(\mathbf{x})|^2$. Then, recalling from [\[Z2\]](#Z2){reference-type="eqref" reference="Z2"} and [\[Z3\]](#Z3){reference-type="eqref" reference="Z3"} that $\delta_\varepsilon^{(1)}\delta_\varepsilon^{(2)} \widetilde Z(\mathbf{x})$ is a mean-zero Gaussian random variable with variance $\mathbb{E}[X] = \frac 14 \varepsilon^2$, we have $$\begin{aligned} \varphi(\lambda) = \mathbb{E}\big[e^{\lambda|X|}\big] = \sqrt{\frac{2}{2 - \lambda\varepsilon^2}}.
\label{Cr4}\end{aligned}$$ Then, given $0 < a < \mathbb{E}[X] = \frac 14 \varepsilon^2$, we see from [\[Cr2\]](#Cr2){reference-type="eqref" reference="Cr2"} and [\[Cr4\]](#Cr4){reference-type="eqref" reference="Cr4"} that $H(a) = \sup_{\lambda< 0} \big\{ a \lambda+ \frac 12 \log (1-\frac{1}{2}\lambda\varepsilon^2) \big\}$ has the maximum value $\frac{4a-\varepsilon^2}{2\varepsilon^2} + \frac 12 \log \frac{\varepsilon^2}{4a}$ at $\lambda=\frac{4a-\varepsilon^2}{2a\varepsilon^2} < 0$. Hence, given $\kappa > 0$ and $M \gg 1$, by choosing $a = M \varepsilon^{2+2\kappa}$, it follows from [\[Cr3\]](#Cr3){reference-type="eqref" reference="Cr3"} that there exists $C_M > 0$ such that $$\begin{aligned} \begin{split} P\Big(|\delta_\varepsilon^{(1)}\delta_\varepsilon^{(2)} \widetilde Z(\mathbf{x})|^2 \le M \varepsilon^{2+2\kappa}\Big) & \leq \exp\bigg(- \frac{4 M \varepsilon^{2+2\kappa} -\varepsilon^2}{\varepsilon^2} - \frac 12 \log \frac{\varepsilon^{-2\kappa}}{4M} \bigg)\\ & \leq \exp\big(\kappa \log \varepsilon + C_M \big) \end{split} \label{Cr5}\end{aligned}$$ for any sufficiently small $\varepsilon> 0$ (such that $M \varepsilon^{2\kappa} < \frac 14$). Given $n \in \mathbb{N}$, let $\varepsilon_n = e^{-n}$, which tends to $0$ as $n \to \infty$. Then, from [\[Cr5\]](#Cr5){reference-type="eqref" reference="Cr5"}, we have $$\begin{aligned} \sum_{n =1}^\infty P\Big(|\delta_{\varepsilon_n}^{(1)}\delta_{\varepsilon_n}^{(2)} \widetilde Z(\mathbf{x})|^2 \le M \varepsilon_n^{2+2\kappa}\Big) \leq C_M' \sum_{n =1}^\infty e^{- \kappa n} < \infty.\end{aligned}$$ Hence, by the Borel-Cantelli lemma, there exists an almost surely finite constant $N(\omega) >0$ such that $$\begin{aligned} |\delta_{\varepsilon_n}^{(1)}\delta_{\varepsilon_n}^{(2)} \widetilde Z(\mathbf{x})|^2 > M \varepsilon_n^{2+2\kappa}\end{aligned}$$ for any $n \ge N(\omega)$.
In particular, we obtain $$\begin{aligned} \limsup_{\varepsilon\to 0} \frac{|\delta_\varepsilon^{(1)}\delta_\varepsilon^{(2)} \widetilde Z(\mathbf{x})|}{|\varepsilon|^{1+\kappa}} >\sqrt M, \label{Cr6}\end{aligned}$$ almost surely. Since [\[Cr6\]](#Cr6){reference-type="eqref" reference="Cr6"} holds for any (integer) $M \gg1$, we conclude [\[decay1\]](#decay1){reference-type="eqref" reference="decay1"}. ◻ Next, we present the proof of Theorem [Theorem 3](#THM:loc){reference-type="ref" reference="THM:loc"}. *Proof of Theorem [Theorem 3](#THM:loc){reference-type="ref" reference="THM:loc"}.* We only consider $\widetilde R_\varepsilon^+$ since the same proof applies to $\widetilde R_\varepsilon^-$. As before, we use the short-hand notations $\mathbf{x}= (x_1, x_2)$ and $\mathbf{y}= (y_1, y_2)$. We first recall the Hölder continuity of the solution to [\[SNLW1b\]](#SNLW1b){reference-type="eqref" reference="SNLW1b"}. In particular, it follows from Proposition [Proposition 7](#PROP:Hol1){reference-type="ref" reference="PROP:Hol1"} and [\[null2\]](#null2){reference-type="eqref" reference="null2"} that, for $L>0$ and $2 \le p < \infty$, we have $$\mathbb{E}\big[ |v(\mathbf{x}) - v(\mathbf{y})|^p \big] \lesssim |x_1-y_1|^{\frac p2} + |x_2-y_2|^{\frac p2}, \label{vdif1}$$ uniformly for $\mathbf{x}, \mathbf{y}\in \mathbb{R}^2$ with $|x_1|, |x_2|, |y_1|, |y_2| \le L$. 
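A fact used momentarily is that the mixed difference annihilates the initial-data part exactly: $V_0$ in [\[V0\]](#V0){reference-type="eqref" reference="V0"} is the sum of a function of $x_1$ alone, a function of $x_2$ alone, and an integral term whose mixed difference telescopes to zero. A quick numerical confirmation; the datum $u_0$ and the antiderivative of $u_1$ below are placeholder choices:

```python
import math

# V0(x1, x2) = (u0(sqrt2 x1) + u0(sqrt2 x2))/2 + (1/2) int_{sqrt2 x1}^{sqrt2 x2} u1(y) dy,
# as in (V0).  Writing U1 for an antiderivative of u1, the integral term equals
# (U1(sqrt2 x2) - U1(sqrt2 x1))/2, so the mixed difference cancels termwise.
s2 = math.sqrt(2.0)

def u0(x):
    # placeholder C^1_b initial position
    return math.sin(x) / (1.0 + x * x)

def U1(x):
    # antiderivative of the placeholder bounded initial velocity u1(y) = -3 sin(3y)
    return math.cos(3.0 * x)

def V0(x1, x2):
    return 0.5 * (u0(s2 * x1) + u0(s2 * x2)) + 0.5 * (U1(s2 * x2) - U1(s2 * x1))

def mixed_diff(f, x1, x2, e1, e2):
    # delta^(1)_{e1} delta^(2)_{e2} f evaluated at (x1, x2)
    return f(x1 + e1, x2 + e2) - f(x1 + e1, x2) - f(x1, x2 + e2) + f(x1, x2)

# the mixed difference of V0 vanishes identically (up to floating-point rounding)
for eps in (0.5, 0.01):
    for sign in (1.0, -1.0):
        assert abs(mixed_diff(V0, 0.7, 1.3, sign * eps, eps)) < 1e-12

# contrast: a genuinely coupled function has a non-vanishing mixed difference
assert abs(mixed_diff(lambda a, b: a * b, 0.7, 1.3, 0.5, 0.5) - 0.25) < 1e-12
print("delta delta V0 = 0 for the tested data")
```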
From [\[Z4\]](#Z4){reference-type="eqref" reference="Z4"} with [\[Z1\]](#Z1){reference-type="eqref" reference="Z1"} and [\[Z2\]](#Z2){reference-type="eqref" reference="Z2"}, we have $$\begin{aligned} \begin{split} \widetilde R_\varepsilon^+ (x_1, x_2) & = \delta_{\varepsilon}^{(1)}\delta_\varepsilon^{(2)} V_0(\mathbf{x}) - \frac 12 \int_{x_2}^{x_2+\varepsilon} \int_{x_1}^{x_1+ \varepsilon} \big\{ F(v(\mathbf{y})) - F(v(\mathbf{x}))\big\} \widetilde\xi (dy_1, dy_2)\\ & =: \hspace{0.5mm}\text{I}\hspace{0.5mm}(\mathbf{x}) + \text{I \hspace{-2.8mm} I} (\mathbf{x}), \end{split} \label{X1}\end{aligned}$$ where $V_0$ is as in [\[V0\]](#V0){reference-type="eqref" reference="V0"}. It is easy to see from [\[V0\]](#V0){reference-type="eqref" reference="V0"} and [\[Lip2\]](#Lip2){reference-type="eqref" reference="Lip2"} that $$\begin{aligned} \hspace{0.5mm}\text{I}\hspace{0.5mm}(\mathbf{x}) %\dl_{\eps}^{(1)}\dl_\eps^{(2)} V_0(\xx) = 0. \label{X2}\end{aligned}$$ Next, we estimate the term $\text{I \hspace{-2.8mm} I}$ in [\[X1\]](#X1){reference-type="eqref" reference="X1"}. From the Burkholder--Davis--Gundy inequality ([@Kho09 Theorem 5.27]), Minkowski's inequality, the Lipschitz continuity of $F$, and [\[vdif1\]](#vdif1){reference-type="eqref" reference="vdif1"}, we have $$\begin{aligned} \begin{split} \mathbb{E}\big[ |\text{I \hspace{-2.8mm} I} (\mathbf{x}) |^p \big] &\lesssim \mathbb{E}\bigg[ \bigg( \int_{x_2}^{x_2+\varepsilon} \int_{x_1}^{x_1+\varepsilon} |F(v(\mathbf{y})) - F(v(\mathbf{x}))|^2 dy_1 dy_2 \bigg)^{\frac p2} \bigg] \\ &\lesssim \bigg( \int_{x_2}^{x_2+\varepsilon} \int_{x_1}^{x_1+\varepsilon} \mathbb{E}\big[ |v(\mathbf{y}) - v(\mathbf{x})|^p \big]^{\frac 2p} dy_1 dy_2 \bigg)^{\frac p2} \\ &\lesssim \bigg( \int_{x_2}^{x_2+\varepsilon} \int_{x_1}^{x_1+\varepsilon} \big\{ |x_1 - y_1| + |x_2 - y_2|\big\} dy_1 dy_2 \bigg)^{\frac p2} \sim \varepsilon^{\frac 32 p}. 
\end{split} \label{X3}\end{aligned}$$ Therefore, the desired bound [\[Z5\]](#Z5){reference-type="eqref" reference="Z5"} follows from [\[X1\]](#X1){reference-type="eqref" reference="X1"}, [\[X2\]](#X2){reference-type="eqref" reference="X2"}, and [\[X3\]](#X3){reference-type="eqref" reference="X3"}. ◻ # On the Cauchy problem for the stochastic wave equation {#SEC:A} In this appendix, we go over basic properties of solutions to [\[NLW1\]](#NLW1){reference-type="eqref" reference="NLW1"}. Our presentation follows closely that in Section 6 of [@Kho09] on the stochastic heat equation. In the remaining part of this note, we restrict our attention to positive times (i.e. $t \ge0$) for simplicity of the presentation. Let $G$ be the fundamental solution for the wave equation defined by $$G(t,x) = \frac 12 \cdot \mathbf 1_{\{ |x|<t \}} (t,x). \label{fundw}$$ Then, the Duhamel formulation (= mild formulation) of [\[NLW1\]](#NLW1){reference-type="eqref" reference="NLW1"} is given by $$\begin{aligned} u(t,x) &= \partial_t\int_\mathbb{R}G(t,x-y) u_0(y) dy + \int_\mathbb{R}G(t,x-y) u_1(y) dy \\ &\quad + \int_0^t \int_\mathbb{R}G(t-s, x-y) F (u(s,y)) \xi(dy ds). \end{aligned} \label{NLWA2}$$ Given $T>0$ and finite $p \ge 2$, set $$\begin{aligned} \| u \|_{\mathcal{X}_{T, p}} = \sup_{0 \le t \le T} \sup_{x \in \mathbb{R}} \|u(t,x)\|_{L^p(\Omega)}. \label{NLWA2a}\end{aligned}$$ Then, for $T> 0$, we define a solution space $\mathcal{X}$ by setting $$\begin{aligned} \begin{split} \mathcal{X}= \Big\{ u \text{ on } \mathbb{R}_+ \times \mathbb{R}: \, \| u \|_{\mathcal{X}_{T, p}} < \infty \text{ for any $T> 0$ and finite } p \ge 2 \Big\}. \end{split} \label{NLWA2b}\end{aligned}$$ Then, we have the following well-posedness result. **Proposition 6**. *Let $u_0 \in C^1_b(\mathbb{R})$ and $u_1 \in C_b(\mathbb{R})$. Suppose that $F$ is Lipschitz continuous. 
Then, there exists a unique global-in-time solution $u$ to [\[NLWA2\]](#NLWA2){reference-type="eqref" reference="NLWA2"}, belonging to the class $\mathcal{X}$.* *Proof.* First, we prove uniqueness. Let $u_1$ and $u_2$ be solutions to [\[NLWA2\]](#NLWA2){reference-type="eqref" reference="NLWA2"}, belonging to the class $\mathcal{X}$ defined in [\[NLWA2b\]](#NLWA2b){reference-type="eqref" reference="NLWA2b"}. By letting $w = u_1 -u_2$, we have $$w(t,x) = \int_0^t \int_\mathbb{R}G(t-s,x-y) \big\{ F(u_1(s,y)) - F (u_2(s,y)) \big\} \xi(dy ds).$$ Then, by the Wiener isometry and the Lipschitz continuity of $F$, we have $$\begin{aligned} \begin{split} \mathbb{E}\big[ |w(t,x)|^2 \big] & = \int_0^t \int_\mathbb{R}G(t-s,x-y)^2 \mathbb{E}\big[ |F(u_1(s,y)) - F (u_2(s,y))|^2 \big] dy ds\\ & \lesssim \int_0^t \int_\mathbb{R}G(t-s,x-y)^2 \mathbb{E}\big[ |w(s,y)|^2 \big] dy ds. \end{split} \label{AA1}\end{aligned}$$ By setting $$H(t) = \| w \|_{\mathcal{X}_{t, 2}}^2 = \sup_{0 \le s \le t} \sup_{x \in \mathbb{R}} \mathbb{E}\big[ |w(s,x)|^2 \big],$$ where the $\mathcal{X}_{t, 2}$-norm is as in [\[NLWA2a\]](#NLWA2a){reference-type="eqref" reference="NLWA2a"}, it follows from [\[fundw\]](#fundw){reference-type="eqref" reference="fundw"} and [\[AA1\]](#AA1){reference-type="eqref" reference="AA1"} that $$H(t) \lesssim \int_0^t (t-s) H(s) ds.$$ Since $H(0) = 0$, Gronwall's inequality yields that $H(t)= 0$ for any $t \in \mathbb{R}_+$. This proves uniqueness of a solution. Next, we prove existence. Define a sequence $\{ u^{(n)}\}_{n = 0}^\infty$ by setting $$u^{(0)} (t,x) = \partial_t\int_\mathbb{R}G(t,x-y) u_0(y) dy + \int_\mathbb{R}G(t,x-y) u_1(y) dy$$ and $$u^{(n)} (t,x) = u^{(0)}(t,x) + \int_0^t \int_\mathbb{R}G(t-s, x-y) F(u^{(n-1)}(s,y)) \xi(dy ds) \label{iter}$$ for $n \in \mathbb{N}$. Then, $d_n = u^{(n)} - u^{(n-1)}$ satisfies $$d_n(t,x) = \int_0^t \int_\mathbb{R}G(t-s, x-y) \big\{ F (u^{(n-1)}(s,y)) - F (u^{(n-2)}(s,y)) \big\} \xi(dy ds).$$ Let $T> 0$ and $2 \le p < \infty$. 
Then, from the Burkholder--Davis--Gundy inequality, Hölder's inequality, and the Lipschitz continuity of $F$, we have $$\begin{aligned} &\mathbb{E}\big[ |d_n(t,x)|^p \big] \\ &\lesssim \mathbb{E}\bigg[ \bigg( \int_0^t \int_\mathbb{R}G(t-s, x-y)^2 \big| F (u^{(n-1)}(s,y)) - F (u^{(n-2)}(s,y)) \big|^2 dy ds \bigg)^{\frac p2} \bigg] \\ &\lesssim \bigg( \int_0^t \int_\mathbb{R}G(t-s, x-y)^{\frac p{p-2}} dy ds \bigg)^{\frac{p-2}2} \\ &\qquad \times \mathbb{E}\bigg[ \int_0^t \int_\mathbb{R}G(t-s, x-y)^{\frac p2} |d_{n-1}(s,y)|^p dy ds \bigg]. \end{aligned} \label{AA2}$$ Hence, by defining $H_n$ by $$H_n(t) = \| d_n \|_{\mathcal{X}_{t, p}}^p = \sup_{0 \le s \le t} \sup_{x \in \mathbb{R}} \mathbb{E}\big[ |d_n(s,x)|^p \big],$$ where the $\mathcal{X}_{t, p}$-norm is as in [\[NLWA2a\]](#NLWA2a){reference-type="eqref" reference="NLWA2a"}, it follows from [\[fundw\]](#fundw){reference-type="eqref" reference="fundw"} and [\[AA2\]](#AA2){reference-type="eqref" reference="AA2"} that there exists a constant $C = C(T, p) >0$ such that $$H_n(t) \le C \int_0^t H_{n-1}(s) ds$$ for any $t \in [0,T]$. Then, a Gronwall-type argument (see Lemma 6.5 in [@Kho09], for example) yields $$H_n(t) \le H_1(T) \frac{(Ct)^{n-1}}{(n-1)!}$$ for any $n \in \mathbb{N}$ and $t \in [0,T]$. By summing over $n \in \mathbb{N}$, we conclude that, given any $T> 0$ and finite $p\ge 2$, there exists $C_0(T, p) > 0$ such that $$\sum_{n=1}^\infty H_n(t)^{\frac 1p} \le C_0(T, p)< \infty$$ for any $t \in [0,T]$. This implies that $u^{(n)}$ converges to some limit, denoted by $u$, with respect to the $\mathcal{X}_{T, p}$-norm for each $T > 0$ and finite $p \ge 2$. As a result, the limit $u$ belongs to the class $\mathcal{X}$ defined in [\[NLWA2b\]](#NLWA2b){reference-type="eqref" reference="NLWA2b"}. 
Furthermore, from [\[iter\]](#iter){reference-type="eqref" reference="iter"}, we conclude that the limit $u$ almost surely satisfies [\[NLWA2\]](#NLWA2){reference-type="eqref" reference="NLWA2"} for any $(t,x) \in \mathbb{R}_+ \times \mathbb{R}$. ◻ **Proposition 7**. *Let $T, L>0$ and $2 \le p < \infty$, and let $u$ be the solution to [\[NLWA2\]](#NLWA2){reference-type="eqref" reference="NLWA2"} constructed in Proposition [Proposition 6](#PROP:A1){reference-type="ref" reference="PROP:A1"}. Then, we have $$\mathbb{E}\big[ |u(t,x) - u(t',x')|^p \big] \lesssim |t-t'|^{\frac p2} + |x-x'|^{\frac p2}$$ for any $t,t' \in [0,T]$ and $x,x' \in \mathbb{R}$ with $| x|, | x'|\le L$.* *Proof.* We have $$\begin{aligned} u_\text{lin}(t, x) :\! & = \partial_t\int_\mathbb{R}G(t,x-y) u_0(y) dy + \int_\mathbb{R}G(t,x-y) u_1(y) dy\\ & = \frac 12 \big\{u_0(x+t) + u_0(x-t)\big\} + \frac 12 \int_{x-t}^{x+t} u_1(y) dy.\end{aligned}$$ Then, since $u_0 \in C^1_b(\mathbb{R})$ and $u_1 \in C_b(\mathbb{R})$, we have $$\begin{aligned} | u_\text{lin}(t, x) - u_\text{lin}(t', x') |^p \lesssim|t - t'|^p + |x - x'|^p \lesssim|t-t'|^{\frac p2} + |x-x'|^{\frac p2}\end{aligned}$$ for any $t,t' \in [0,T]$ and $x,x' \in \mathbb{R}$ with $| x|, | x'|\le L$. Next, we consider the third term on the right-hand side of [\[NLWA2\]](#NLWA2){reference-type="eqref" reference="NLWA2"}, which we denote by $U(t, x)$: $$U(t,x) = \int_0^t \int_\mathbb{R}G(t-s, x-y) F (u(s,y)) \xi(dy ds).$$ For $0 \le t' \le t \le T$, we have $$\begin{aligned} U(t,x) - U(t',x') &= \int_{t'}^t \int_\mathbb{R}G(t-s, x-y) F (u(s,y)) \xi(dy ds) \\ &\quad + \int_0^{t'} \int_\mathbb{R}\{ G(t-s, x-y) - G(t'-s, x-y) \} F (u(s,y)) \xi(dy ds) \\ &\quad + \int_0^{t'} \int_\mathbb{R}\{ G(t'-s, x-y) - G(t'-s, x'-y) \} F (u(s,y)) \xi(dy ds) \\ &=: \hspace{0.5mm}\text{I}\hspace{0.5mm}+ \text{I \hspace{-2.8mm} I} + \text{I \hspace{-2.9mm} I \hspace{-2.9mm} I}.\end{aligned}$$ Let $2 \le p < \infty$.
It follows from the Lipschitz continuity of $F$ and the fact that $u \in \mathcal{X}$ (see [\[NLWA2a\]](#NLWA2a){reference-type="eqref" reference="NLWA2a"} and [\[NLWA2b\]](#NLWA2b){reference-type="eqref" reference="NLWA2b"}) that $$\begin{aligned} \sup_{0 \le s \le T} \sup_{y \in \mathbb{R}} \| F (u(s,y))\|_{L^p(\Omega)} \lesssim \sup_{0 \le s \le T} \sup_{y \in \mathbb{R}} \|u(s,y)\|_{L^p(\Omega)} + |F (0)| < \infty. \label{FF1}\end{aligned}$$ Then, from the Burkholder--Davis--Gundy inequality, Minkowski's inequality, and [\[FF1\]](#FF1){reference-type="eqref" reference="FF1"} with [\[fundw\]](#fundw){reference-type="eqref" reference="fundw"}, we have $$\begin{aligned} \mathbb{E}\big[ |\text{I \hspace{-2.9mm} I \hspace{-2.9mm} I}|^p \big] &\lesssim \mathbb{E}\bigg[ \bigg( \int_0^{t'} \int_\mathbb{R} |G(t'-s, x-y)- G(t'-s,x'-y)|^2 |F (u(s,y))|^2 dy ds \bigg)^{\frac p2} \bigg] \\ &\lesssim \bigg( \int_0^{t'} \int_\mathbb{R} |G(t'-s, x-y)- G(t'-s,x'-y)|^2 \mathbb{E}\big[ |F (u(s,y))|^p \big]^{\frac 2p} dy ds \bigg)^{\frac p2}\\ &\lesssim \bigg( \int_0^{t'} \int_\mathbb{R} |G(t'-s, x-y)- G(t'-s,x'-y)|^2 dy ds \bigg)^{\frac p2} \\ &\lesssim |x-x'|^{\frac p2}.\end{aligned}$$ Similar computations yield $$\mathbb{E}\big[ |\hspace{0.5mm}\text{I}\hspace{0.5mm}|^p \big] + \mathbb{E}\big[ |\text{I \hspace{-2.8mm} I} |^p \big] \lesssim |t-t'|^{\frac p2}.$$ This concludes the proof of Proposition [Proposition 7](#PROP:Hol1){reference-type="ref" reference="PROP:Hol1"}. ◻ **Acknowledgements 1**. *T.O. was supported by the European Research Council (grant no. 864138 "SingStochDispDyn\"). M.O. was supported by JSPS KAKENHI Grant number JP23K03182.* 99 B. Bringmann, J. Lührmann, G. Staffilani, *The wave maps equation and Brownian paths*, arXiv:2111.07381 \[math.AP\]. R. Cairoli, *Sur une équation différentielle stochastique*, C. R. Acad. Sci. Paris Sér. A-B 274 (1972), A1739--A1742. R. Cairoli, J. Walsh, *Stochastic integrals in the plane*, Acta Math. 134 (1975), 111--183. K. Chouk, M.
Gubinelli, *Rough sheets*, arXiv:1406.7748 \[math.PR\]. R.C. Dalang, *The stochastic wave equation*, A minicourse on stochastic partial differential equations, 39--71, Lecture Notes in Math., 1962, Springer, Berlin, 2009. M. Foondun, D. Khoshnevisan, P. Mahboubi, *Analysis of the gradient of the solution to a stochastic heat equation via fractional Brownian motion*, Stoch. Partial Differ. Equ. Anal. Comput. 3 (2015), no. 2, 133--158. P. Friz, M. Hairer, *A course on rough paths. With an introduction to regularity structures.* Second edition. Universitext. Springer, Cham, \[2020\], ©2020. xvi+346 pp. M. Gubinelli, *Controlling rough paths*, J. Funct. Anal. 216 (2004), no. 1, 86--140. M. Hairer, *A theory of regularity structures,* Invent. Math. 198 (2014), no. 2, 269--504. M. Hairer, É. Pardoux, *A Wong-Zakai theorem for stochastic PDEs*, J. Math. Soc. Japan 67 (2015), no. 4, 1551--1604. J. Huang, D. Khoshnevisan, *Delocalization of a $(1+1)$-dimensional stochastic wave equation*, arXiv:1610.07727 \[math.PR\]. D. Khoshnevisan, *A primer on stochastic partial differential equations*, A minicourse on stochastic partial differential equations, 1--38, Lecture Notes in Math., 1962, Springer, Berlin, 2009. D. Khoshnevisan, J. Swanson, Y. Xiao, L. Zhang, *Weak existence of a solution to a differential equation driven by a very rough fBm*, arXiv:1309.3613 \[math.PR\]. L. Quer-Sardanyons, S. Tindel, *The 1-d stochastic wave equation driven by a fractional Brownian sheet*, Stochastic Process. Appl. 117 (2007), no. 10, 1448--1472. A. Shiryaev, *Probability,* Translated from the first (1980) Russian edition by R. P. Boas. Second edition. Graduate Texts in Mathematics, 95. Springer-Verlag, New York, 1996. xvi+623 pp. J. Walsh, *An introduction to stochastic partial differential equations*, École d'été de probabilités de Saint-Flour, XIV--1984, 265--439, Lecture Notes in Math., 1180, Springer, Berlin, 1986.
--- abstract: | Let $E=\mathbb{Q}\big(\sqrt{-d}\big)$ be an imaginary quadratic field for a square-free positive integer $d$, and let $\mathcal{O}$ be its ring of integers. For each positive integer $m$, let $I_m$ be the free Hermitian lattice over $\mathcal{O}$ with an orthonormal basis, let $\mathfrak{S}_d(1)$ be the set consisting of all positive definite integral unary Hermitian lattices over $\mathcal{O}$ that can be represented by some $I_m$, and let $g_d(1)$ be the least positive integer such that all Hermitian lattices in $\mathfrak{S}_d(1)$ can be uniformly represented by $I_{g_d(1)}$. The main results of this work provide an algorithm to calculate the explicit form of $\mathfrak{S}_d(1)$ and the exact value of $g_d(1)$ for every imaginary quadratic field $E$, which can be viewed as a natural extension of the Pythagoras number in the lattice setting. title: | **An algorithm for $g$-invariant on unary Hermitian lattices\ over imaginary quadratic fields** --- *Department of Computational, Engineering, and Mathematical Sciences\ Texas A&M University-San Antonio, San Antonio, Texas 78224, USA\ E-mail: <jliu@tamusa.edu>* *Mathematics Subject Classification 2020.* Primary 11E39. Secondary 11Y16, 11Y40. *Keywords.* Waring's problem, unary Hermitian lattices, sums of norms, algorithm. # Introduction {#Sec:Int} A positive integer $a$ is said to be represented by a quadratic form $f$, (often) written as $a\to f$, provided the Diophantine equation $f(\vec{x})=a$ has an integral solution. A typical question is to determine those positive integers which can be represented by the form $I_m:=x_1^2+\cdots+x_m^2$ with $m$ a positive integer. In 1770, Lagrange proved the famous Four-Square Theorem: The quadratic form $x_1^2+x_2^2+x_3^2+x_4^2$ can represent all positive integers; such quadratic forms are said to be *universal* in the literature. 
Recalling that $7$ cannot be a sum of any three integral squares, it is clear that the sum of four squares is best possible for the universality over $\mathbb{Z}$. This result has been generalized in several directions, and one important direction is to consider the representation of sums of squares in totally real number fields $F$: A positive definite integral quadratic form over a totally real number field $F$ is said to be universal provided it can represent all totally positive integers in $F$. In 1928, Götzky [@fG] proved that $x_1^2+x_2^2+x_3^2+x_4^2$ is universal over $\mathbb{Q}\big(\sqrt{5}\big)$ and later Maaß [@hM] further proved the universality of $x_1^2+x_2^2+x_3^2$ over $\mathbb{Q}\big(\sqrt{5}\big)$, which, however, will not occur in any other field $F$, as Siegel [@clS] proved that all totally positive integers in $F$ are sums of integral squares if and only if either $F=\mathbb{Q}$ or $F=\mathbb{Q}\big(\sqrt{5}\big)$. Recall the *Pythagoras number* $P(\mathcal{O}_F)$ of $F$ is defined as the smallest positive integer $p$ such that every sum of integral squares in $F$ is a sum of $p$ integral squares, where $\mathcal{O}_F$ denotes the ring of integers of $F$. Noticing Maaß [@hM] and that $1+1+\big(1+\sqrt{5}\big)^2/4$ cannot be a sum of any two integral squares in $F=\mathbb{Q}\big(\sqrt{5}\big)$, one has $P(\mathcal{O}_F)=3$. For other real quadratic fields $F=\mathbb{Q}\big(\sqrt{d}\big)$ with $d$ a square-free positive integer, it is known that $P(\mathcal{O}_F)$ was completely determined by the earlier works of Cohn and Pall, Dzewas, Kneser, Maaß, and Peters: $P(\mathcal{O}_F)=3$ for $d=2,3$, $P(\mathcal{O}_F)=4$ for $d=6,7$, and $P(\mathcal{O}_F)=5$ for all the remaining cases; see Theorem 3.1 of Krásenský, Raška and Sgallová [@KRS]. Some recent important contributions in this direction include Blomer and Kala [@BK], Chan and Icaza [@CI], Kala and Yatsyna [@KY1; @KY2], and Krásenský and Yatsyna [@KY], etc.
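The square-counting facts quoted above (four squares always suffice over $\mathbb{Z}$, and $7$ genuinely needs all four) are easy to confirm numerically. The following short Python sketch, an illustration of mine rather than anything from the original arguments, tabulates the least number of positive squares representing each $n$:

```python
def min_squares_table(N):
    """dp[n] = least number of positive integer squares summing to n."""
    dp = [0] + [5] * N          # 5 plays the role of "more than four"
    for n in range(1, N + 1):
        s = 1
        while s * s <= n:
            if dp[n - s * s] + 1 < dp[n]:
                dp[n] = dp[n - s * s] + 1
            s += 1
    return dp

dp = min_squares_table(500)
assert max(dp[1:]) == 4   # four squares always suffice (Lagrange)
assert dp[7] == 4         # and 7 genuinely needs four
```

By the three-square theorem, the $n$ with `dp[n] == 4` are exactly those of the form $4^s(8t+7)$, a fact used again later in the proofs below.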
In this paper, I shall consider the Hermitian analog of this representation problem over imaginary quadratic fields: Let $E=\mathbb{Q}\big(\sqrt{-d}\big)$ be an imaginary quadratic field with $d$ a square-free positive integer, and denote by $\mathcal{O}$ its ring of integers. For each positive integer $m$, let $I_m:=x_1\overline{x_1}+\cdots+x_m\overline{x_m}$ be the sum of $m$ norms. Define the *Pythagoras number* $P(\mathcal{O})$ of $E$ to be the smallest positive integer $p$ such that every sum of integral norms in $E$ is a sum of $p$ integral norms. Kim, Kim and Park [@KKP Table 10] and Lemma 2.2 in [@jL2] completely determined $P(\mathcal{O})$: $P(\mathcal{O})=2$ for $d=1,2,3,7,11$, $P(\mathcal{O})=3$ for $d=5,6,15,19,23,27$, and $P(\mathcal{O})=4$ for all the remaining cases. In view of this, a harder problem is addressed here. In joint work with Beli, Chan and Icaza [@BCIL] and in my own work [@jL1; @jL2], we studied unary Hermitian lattices, a more general concept than positive integers; as $\mathcal{O}$ need not be a principal ideal domain in general, a positive integer $a$ corresponds to a unary Hermitian lattice $\mathcal{O}v$ with the Hermitian map $h$ satisfying $h(v)=a$, but not vice versa. The standard correspondence between Hermitian forms and free Hermitian lattices implies that $I_m$ corresponds to the free Hermitian lattice $\mathcal{O}v_1+\cdots+\mathcal{O}v_m$ with $I_m$ its associated Hermitian form for the orthonormal basis $\{v_1,\ldots,v_m\}$; so, $I_m$ below denotes both the Hermitian form and its corresponding Hermitian lattice. A unary Hermitian lattice $L$ is said to be represented by some $I_m$ provided there is an injective linear map from $L$ to $I_m$ preserving the Hermitian maps.
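The quoted values of $P(\mathcal{O})$ can be probed computationally. The Python sketch below is my own illustration (the search bound $B$ and the cap `p_max` are assumptions of the sketch, not part of the paper): it enumerates norm values $N(a+b\omega)$ up to $B$ and finds the least $p$ for which every sum of norms in range is already a sum of $p$ norms.

```python
def norm_values(d, B):
    """All values <= B of the norm form on the ring of integers of Q(sqrt(-d))."""
    if d % 4 == 3:
        q = lambda a, b: a * a + a * b + ((d + 1) // 4) * b * b  # omega = (1+sqrt(-d))/2
    else:
        q = lambda a, b: a * a + d * b * b                       # omega = sqrt(-d)
    return {q(a, b) for a in range(-B, B + 1)
            for b in range(-B, B + 1) if q(a, b) <= B}

def empirical_pythagoras(d, B=300, p_max=6):
    """Least p such that, on [0, B], every sum of norms is a sum of p norms."""
    vals = norm_values(d, B)            # contains 0, so reach[p] = sums of <= p norms
    reach = [{0}]
    for _ in range(p_max):
        reach.append({s + v for s in reach[-1] for v in vals if s + v <= B})
    all_sums = reach[-1]
    for p in range(1, p_max + 1):
        if all_sums <= reach[p]:
            return p

# Consistent with the values quoted above:
assert empirical_pythagoras(1) == 2
assert empirical_pythagoras(5) == 3
assert empirical_pythagoras(10) == 4
```

For $d=5$, for instance, the search finds that $3=1+1+1$ is a sum of norms $a^2+5b^2$ but not a sum of two of them, matching $P(\mathcal{O})=3$.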
Denote by $\mathfrak{S}_d(1)$ the set consisting of all positive definite integral unary Hermitian lattices over $\mathcal{O}$ which can be represented by some $I_m$, and by $g_d(1)$ the smallest positive integer such that all the lattices in $\mathfrak{S}_d(1)$ can be uniformly represented by $I_{g_d(1)}$. Over the imaginary quadratic fields $E=\mathbb{Q}\big(\sqrt{-d}\big)$ with class number $1$, *i.e.*, those fields given by $d=1,2,3,7,11,19,43,67,163$, each positive definite integral unary Hermitian lattice is of the form $L=\mathcal{O}v$, where $h(v)$ is a positive integer. $L$ is represented by $I_m$ if and only if $h(v)$ can be written as a sum of $m$ norms of integers in $\mathcal{O}$. So, $\mathfrak{S}_d(1)$ contains all the positive definite integral unary Hermitian lattices with $g_d(1)=P(\mathcal{O})$. \[In fact, one has $g_d(1)=P(\mathcal{O})=2$ if $d=1,2,3,7,11$, $g_d(1)=P(\mathcal{O})=3$ if $d=19$, and $g_d(1)=P(\mathcal{O})=4$ if $d=43,67,163$.\] On the other hand, for imaginary quadratic fields with class number larger than $1$, [@jL2 Theorems 1 and 2] determined the explicit form of $\mathfrak{S}_d(1)$ and the exact value of $g_d(1)$ for every $E=\mathbb{Q}\big(\sqrt{-d}\big)$ with class number $2$ or $3$. \[Notice that $g_d(1)=P(\mathcal{O})$ in all cases except for $E=\mathbb{Q}\big(\sqrt{-907}\big)$ where $g_{907}(1)=5>P(\mathcal{O})=4$.\] This work, as an essential extension of [@jL2], provides an algorithm with rigorous mathematical proofs, which can determine the explicit form of $\mathfrak{S}_d(1)$ and the exact value of $g_d(1)$ for every imaginary quadratic field $E$ with any given class number.
This paper is organized as follows: In Section 2, I discuss the geometric language of Hermitian spaces and Hermitian lattices; in Section 3, I introduce an algorithm to determine $\mathfrak{S}_d(1)$ and $g_d(1)$ for all imaginary quadratic fields $E$ with rigorous proofs; and finally in Section 4, I provide SageMath codes and work out an example in detail using these codes as a demonstration. # Preliminaries {#Sec:Pre} Throughout this paper, the notations and terminologies for lattices from the classical monograph of O'Meara [@otO] will be adopted. For background information and terminologies specific to the Hermitian setting, one may consult the works of Shimura [@gS] and Gerstein [@ljG]. Let $E=\mathbb{Q}\big(\sqrt{-d}\big)$ be an imaginary quadratic field for a square-free positive integer $d$, and let $\mathcal{O}$ be its ring of integers. Then, one has $\mathcal{O}=\mathbb{Z}+\mathbb{Z}\omega$ with $\omega=\omega_d:=\sqrt{-d}$ if $d\equiv1,2~(\mathrm{mod}~4)$ and $\omega=\omega_d:=\big(1+\sqrt{-d}\big)/2$ if $d\equiv3~(\mathrm{mod}~4)$. For each $\alpha\in E$, denote by $\overline\alpha$ the complex conjugate of $\alpha$ and define the norm of $\alpha$ to be $N(\alpha):=\alpha\overline\alpha$. A Hermitian space $(V,h)$ is a vector space $V$ over $E$ that admits a Hermitian map $h:V\times V\to E$ satisfying, for all $\alpha,\beta\in E$ and $v,v_1,v_2,w\in V$,

(0.1) : $h(v,w)=\overline{h(w,v)}$;

(0.2) : $h(\alpha v_1+\beta v_2,w)=\alpha h(v_1,w)+\beta h(v_2,w)$.

Write $h(v):=h(v,v)$ for brevity. By condition **(0.1)**, one has $h(v)=\overline{h(v)}$, and thus, $h(v)\in\mathbb{Q}$ for every $v\in V$. A Hermitian lattice $L$ is defined as a finitely generated $\mathcal{O}$-module in the Hermitian space $V$, and $L$ is said to be *a lattice on* $V$ whenever $V=EL$.
We assume in the sequel that all Hermitian lattices $L$ are positive definite integral in the sense that $h(v)\in\mathbb{Z}_{>0}$ for all nonzero elements $v\in L$ and that $h(v,w)\in\mathcal{O}$ for all pairs of elements $v,w\in L$. $L$ is said to be represented by another Hermitian lattice $K$, written as $L\to K$, provided there exists an injective linear map $\sigma:L\to K$ which preserves Hermitian maps, *i.e.*, $h_L(v,w)=h_K(\sigma(v),\sigma(w))$ for all $v,w\in L$. We thus call $\sigma$ a representation from $L$ to $K$. In addition, $L$ and $K$ are said to be *isometric* if $\sigma$ is bijective, and the isometry class containing $L$ is denoted by $\mathrm{cls}(L)$. Let $V$ be an $n$-dimensional Hermitian space, and let $L$ be a Hermitian lattice on $V$; then, there are fractional $\mathcal{O}$-ideals $\mathfrak{A}_1,\ldots,\mathfrak{A}_n$ and a basis $\{v_1,\ldots,v_n\}$ of $V$ such that $L=\mathfrak{A}_1v_1+\cdots+\mathfrak{A}_nv_n$. In particular, if we have $\mathfrak{A}_1=\cdots=\mathfrak{A}_n=\mathcal{O}$ for some basis $\{v_1,\ldots,v_n\}$, then we call $L$ a *free* Hermitian lattice and associate to it a Gram matrix $M_L=(h(v_l,v_{l'}))$. For example, a free unary Hermitian lattice $L=\mathcal{O}v$ with $h(v)=a$ can be identified with the Gram matrix $M_L=\langle a\rangle$, and $L\to I_m$ if and only if one can write $a$ as a sum of $m$ norms of integers in $\mathcal{O}$. The *scale* $\mathfrak{s}L$ of $L$ is the fractional $\mathcal{O}$-ideal generated by the set $\{h(v,w):v,w\in L\}$, and the *volume* of $L$ is the fractional $\mathcal{O}$-ideal $\mathfrak{v}L:=\big(\mathfrak{A}_1\overline{\mathfrak{A}_1}\big)\cdots\big(\mathfrak{A}_n\overline{\mathfrak{A}_n}\big)\mathrm{det}(h(v_l,v_{l'}))$. Each $\mathfrak{A}_j$ can be written as a product of integral powers of prime ideals in $\mathcal{O}$. For each prime ideal $\mathfrak{P}$ in $\mathcal{O}$, denote by $p$ the prime number which lies below $\mathfrak{P}$.
Then, one has $\mathfrak{P}\overline{\mathfrak{P}}=p^2\mathcal{O}$ when $p$ is inert in $E$ and $\mathfrak{P}\overline{\mathfrak{P}}=p\mathcal{O}$ otherwise. In consequence, there is a unique positive rational number $\delta_L$ (said to be the *discriminant* of $L$) with $\mathfrak{v}L=\delta_L\mathcal{O}$. Suppose $\mathcal{I}$ is a fractional $\mathcal{O}$-ideal. $L$ is said to be $\mathcal{I}$-*modular* if $\mathfrak{s}L=\mathcal{I}$ and $\mathfrak{v}L=\mathcal{I}^n$; in particular, $L$ is said to be *unimodular* whenever $\mathcal{I}=\mathcal{O}$. In [@jL2], I determined the explicit form of $\mathfrak{S}_d(1)$ and the exact value of $g_d(1)$ for all $E=\mathbb{Q}\big(\sqrt{-d}\big)$ with class number $2$ or $3$ using the main strategy as follows. Choose a representative $\mathfrak{P}$ for each ideal class in $E$: $\mathfrak{P}$ either is $\mathcal{O}$ or is a non-principal prime ideal in $\mathcal{O}$. Every positive definite integral unary Hermitian lattice lies in the isometry class of $L^r=\mathfrak{P}v^r$ for a positive integer $r$. When $\mathfrak{P}=\mathcal{O}$, then $h(v^r)=r$ and $L^r\cong\langle r\rangle$ is represented by $I_4$; so, $L^r\in\mathfrak{S}_d(1)$ for all positive integers $r$. Otherwise, we know $\mathfrak{P}$ is a non-principal prime ideal with $\mathfrak{P}\overline{\mathfrak{P}}=p\mathcal{O}$ ($p$ lying below $\mathfrak{P}$) and $h(v^r)=r/p$. Write $\mathfrak{P}=\mathcal{O}p+\mathcal{O}(s+t\omega)$ for some $s+t\omega\in\mathcal{O}$. Using [@jL2 Lemma 2.1], $L^r$ can be represented by $I_m$ if and only if $r/p=\sum_{\ell=1}^mN(\gamma_\ell/p)$, where $\gamma_\ell:=a_\ell+b_\ell\omega\in\mathcal{O}$, with $a_\ell,b_\ell\in\mathbb{Z}$ for $1\leq\ell\leq m$ satisfying the conditions below

(1.1) : $p|(sa_\ell-dtb_\ell)$ and $p|(ta_\ell+sb_\ell)$ when $d\equiv1,2~(\mathrm{mod}~4)$;

(1.2) : $p\big|\big(sa_\ell-\frac{d+1}{4}tb_\ell\big)$ and $p|(ta_\ell+(s+t)b_\ell)$ when $d\equiv3~(\mathrm{mod}~4)$.
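This representation criterion is directly machine-checkable. The Python sketch below is an illustration of mine; the toy parameters at the end, the field $\mathbb{Q}\big(\sqrt{-5}\big)$ with the ramified prime $\mathfrak{P}=\mathcal{O}2+\mathcal{O}(1+\omega)$, so $s=t=1$, are my own choice and not an example worked in the paper. It lists the $r$ with $\mathfrak{P}v^r\to I_m$ in the case $d\equiv1,2~(\mathrm{mod}~4)$:

```python
def lattice_reps(d, p, s, t, m, r_max):
    """List the r in [1, r_max] with P v^r -> I_m, for d = 1, 2 (mod 4),
    where P = O p + O(s + t*omega) is a non-principal prime ideal above p.
    gamma = a + b*omega is admissible exactly when condition (1.1) holds:
    p | (s*a - d*t*b) and p | (t*a + s*b)."""
    B = p * r_max                  # each N(gamma) in the sum is at most p*r
    admissible = {a * a + d * b * b
                  for a in range(-B, B + 1) for b in range(-B, B + 1)
                  if a * a + d * b * b <= B
                  and (s * a - d * t * b) % p == 0
                  and (t * a + s * b) % p == 0}
    # r/p = sum_{l=1}^m N(gamma_l)/p^2  <=>  r*p is a sum of m admissible norms
    reach = {0}
    for _ in range(m):
        reach = {x + v for x in reach for v in admissible if x + v <= B}
    return [r for r in range(1, r_max + 1) if p * r in reach]

# Toy example (my choice): d = 5, p = 2, P = O*2 + O*(1 + omega), s = t = 1.
print(lattice_reps(5, 2, 1, 1, 2, 10))   # -> [2, 3, 4, 5, 6, 7, 8, 9, 10]
```

Here $2r$ must be a sum of $m$ admissible norms $a^2+5b^2$ with $a\equiv b~(\mathrm{mod}~2)$, whose small values are $0,4,6,14,16,20,\ldots$; hence $r=1$ fails while every $2\leq r\leq 10$ succeeds.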
Below, I extend this to imaginary quadratic fields $E=\mathbb{Q}\big(\sqrt{-d}\big)$ with class number $\mathtt{c}\geq4$. As Lemma 2.2 in [@jL2] and Lemma [Lemma 1](#Lem1){reference-type="ref" reference="Lem1"} below lead to $4\leq g_d(1)\leq5$, $\mathfrak{S}_d(1)$ consists of all positive definite integral unary Hermitian lattices, except for those that cannot be represented by $I_5$. For $g_d(1)$, I first observe that $L^r$ can be represented by $I_4$ when $r\geq C$ for a positive integer $C$ depending only on $d$ and $p$. When $r<C$, we only need to check the representations $\mathfrak{P}v^r\to I_5$ and $\mathfrak{P}v^r\to I_4$: If every positive definite integral unary Hermitian lattice that can be represented by $I_5$ can also be represented by $I_4$, then $g_d(1)=4$; otherwise, $g_d(1)=5$. I will design an algorithm to completely determine the explicit form of $\mathfrak{S}_d(1)$ and the exact value of $g_d(1)$. The result below, whose proof depends on the classical work of Mordell [@ljM], plays a pivotal role in the design of this algorithm. **Lemma 1**. *Let $E=\mathbb{Q}\big(\sqrt{-d}\big)$ be an imaginary quadratic field with $d$ a square-free positive integer, and let $\mathcal{O}$ be its ring of integers. Let $\mathfrak{S}_d(1)$ be the set of all positive definite integral unary Hermitian lattices over $\mathcal{O}$ that can be represented by some $I_m$, and let $g_d(1)$ be the smallest positive integer such that all the lattices in $\mathfrak{S}_d(1)$ can be uniformly represented by $I_{g_d(1)}$. Then, one has $g_d(1)\leq5$.* *Proof.* Assume $L\in\mathfrak{S}_d(1)$ is a lattice represented by the Hermitian $\mathcal{O}$-lattice $I_m$ for an $m\in\mathbb{N}$. Without loss of generality, suppose $L$ is a sublattice of $I_m$. Then, we can write $L=\mathfrak{U}^{-1}v$ with $\mathfrak{U}$ an integral ideal and $v\in\mathfrak{U}^m$, as $\mathfrak{U}^{-1}v\subseteq\mathcal{O}^m$.
Write $\mathfrak{U}=\mathbb{Z}\gamma_1+\mathbb{Z}\gamma_2$ for $\gamma_1,\gamma_2\in\mathfrak{U}$. So, $v=\gamma_1w_1+\gamma_2w_2$ for two vectors $w_1,w_2\in\mathbb{Z}^m$. Now, consider the quadratic $\mathbb{Z}$-lattice $L^{\mathbb{Z}}=\mathbb{Z}w_1+\mathbb{Z}w_2$, a rank (at most) $2$ sublattice of the quadratic $\mathbb{Z}$-lattice $I_m^{\mathbb{Z}}$, which clearly can be represented by $I_5^{\mathbb{Z}}$ noting Mordell [@ljM]. We claim there is a representation $\varphi$ sending $L$ to $I_5$. Let $w_1',w_2'$ be the images of $w_1,w_2$ in $I_5^{\mathbb{Z}}$, respectively. Then, $v'=\gamma_1w_1'+\gamma_2w_2'\in\mathfrak{U}^5$ is a vector in the Hermitian $\mathcal{O}$-lattice $I_5$. Set $\varphi(v)=v'$ to see, with $(~)^t$ being transpose, $$\label{Eq2.1} \begin{aligned} h(v)&=(\gamma_1w_1+\gamma_2w_2)^t\cdot\overline{(\gamma_1w_1+\gamma_2w_2)}\\ &=(\gamma_1~~\gamma_2)\bigg(\begin{array}{c} (w_1)^t \\ (w_2)^t \end{array}\bigg) (w_1~~w_2)\bigg(\begin{array}{c} \overline{\gamma_1} \\ \overline{\gamma_2} \end{array}\bigg)\\ &=(\gamma_1~~\gamma_2)\bigg(\begin{array}{c} \big(w_1'\big)^t \\ \big(w_2'\big)^t \end{array}\bigg) \big(w_1'~~w_2'\big)\bigg(\begin{array}{c} \overline{\gamma_1} \\ \overline{\gamma_2} \end{array}\bigg)\\ &=\big(\gamma_1w_1'+\gamma_2w_2'\big)^t\cdot\overline{\big(\gamma_1w_1'+\gamma_2w_2'\big)}=H(v'). \end{aligned}$$ We verified $h(v)=H(v')$ for Hermitian maps $h$ and $H$ on $L$ and $I_5$, respectively, and therefore, we have $L\to I_5$. Consequently, $g_d(1)\leq5$. ◻ The next lemma says that for any two prime ideals in $\mathcal{O}$ which lie above the same prime number $p$, we only need to consider one of them. **Lemma 2**. *Let $E=\mathbb{Q}\big(\sqrt{-d}\big)$ be an imaginary quadratic field with $d$ a square-free positive integer, let $p$ be a prime number splitting in $E$, and let $\mathfrak{P}_1,\mathfrak{P}_2$ be prime ideals in $\mathcal{O}$ lying above $p$. 
Then, the sets of values of $r_1$ and $r_2$ for which $\mathfrak{P}_1v_1^{r_1},\mathfrak{P}_2v_2^{r_2}\in\mathfrak{S}_d(1)$ coincide, and there is a transformation between their representations by $I_m$.* *Proof.* When $d\equiv3~(\mathrm{mod}~4)$, Proposition 8.3 in [@jN] leads to $\mathfrak{P}_1=\big(p,-\frac{n+1}{2}+\omega\big)$ and $\mathfrak{P}_2=\big(p,\frac{n-1}{2}+\omega\big)$ for the smallest positive odd integer $n$ with $-d\equiv n^2~(\mathrm{mod}~p)$; the proof of the stated result in Lemma [Lemma 2](#Lem2){reference-type="ref" reference="Lem2"} was given in [@jL2 Lemma 4.1]. Below, we discuss the case of $d\equiv1,2~(\mathrm{mod}~4)$. Notice that only odd prime numbers now split and that $p\hspace{-0.8mm}\nmid\hspace{-0.8mm}d$. By Theorem 25 in [@dM], $\mathfrak{P}_1=(p,n+\omega)$ and $\mathfrak{P}_2=(p,n-\omega)$ are prime ideals in $\mathcal{O}$ lying above $p$ for the least positive integer $n$ with $-d\equiv n^2~(\mathrm{mod}~p)$. Recall here that $\mathfrak{P}_jv_j^{r_j}\to I_m$ if and only if $r_j/p=\sum_{\ell=1}^mN(\gamma_\ell/p)$ for $j=1,2$, where $\gamma_\ell=a_\ell+b_\ell\omega\in\mathcal{O}$ with $a_\ell,b_\ell$ satisfying $p|(na_\ell-db_\ell)$ and $p|(a_\ell+nb_\ell)$ for $\mathfrak{P}_1$ and $p|(na_\ell+db_\ell)$ and $p|(a_\ell-nb_\ell)$ for $\mathfrak{P}_2$ by [@jL2 Lemma 2.1]. One easily verifies that $p|(a_\ell+nb_\ell)$ and $p|(a_\ell-nb_\ell)$ lead to $p|(na_\ell-db_\ell)$ and $p|(na_\ell+db_\ell)$, respectively. So, when calculating the values of $r_1,r_2$ where $\mathfrak{P}_1v_1^{r_1},\mathfrak{P}_2v_2^{r_2}\in\mathfrak{S}_d(1)$, we only need the conditions $p|(a_\ell+nb_\ell)$ for $\mathfrak{P}_1$ and $p|(a_\ell-nb_\ell)$ for $\mathfrak{P}_2$.
For every pair of integers $a_\ell,b_\ell$ with $p|(a_\ell+nb_\ell)$, take $a'_\ell=a_\ell$ and $b'_\ell=-b_\ell$ to derive $p\big|\big(a'_\ell-nb'_\ell\big)$ and $N\Big(\frac{a'_\ell+b'_\ell\omega}{p}\Big)=N\Big(\frac{a_\ell+b_\ell\omega}{p}\Big)$, where $N\big(\frac{a+b\omega}{p}\big):=\frac{a^2}{p^2}+\frac{db^2}{p^2}$ here. Conversely, choose $a_\ell=a'_\ell$ and $b_\ell=-b'_\ell$ for every pair of integers $a'_\ell,b'_\ell$ satisfying $p\big|\big(a'_\ell-nb'_\ell\big)$ to see $p|(a_\ell+nb_\ell)$ and $N\Big(\frac{a_\ell+b_\ell\omega}{p}\Big)=N\Big(\frac{a'_\ell+b'_\ell\omega}{p}\Big)$. Thus, the sets of values of $r_1$ and $r_2$ for which $\mathfrak{P}_1v_1^{r_1},\mathfrak{P}_2v_2^{r_2}$ belong to $\mathfrak{S}_d(1)$ coincide, and $\mathfrak{P}_1v_1^{r_1},\mathfrak{P}_2v_2^{r_2}$ are represented by the same $I_m$ when $r_1=r_2$. ◻ # An algorithm {#Sec:Alg} Let $E=\mathbb{Q}\big(\sqrt{-d}\big)$ be an imaginary quadratic field with class number $\mathtt{c}\geq4$, and let $\mathcal{O}$ be its ring of integers. When $\mathfrak{P}=\mathcal{O}$, we have $\mathfrak{P}v^r\to I_4$ for all positive integers $r$. So, we assume $\mathfrak{P}$ is a non-principal prime ideal in $\mathcal{O}$ and $p$ is the prime number lying below $\mathfrak{P}$, and prove, case by case, that $\mathfrak{P}v^r\to I_4$ provided $r$ is no less than a given integer $C$. As Lemma 2.2 in [@jL2] and Lemma [Lemma 1](#Lem1){reference-type="ref" reference="Lem1"} here yield $4\leq g_d(1)\leq5$, we will utilize software to check the representations $\mathfrak{P}v^r\to I_4$ and $\mathfrak{P}v^r\to I_5$ when $r<C$. Let $E(p)$ be the set of positive integers $r$ such that $\mathfrak{P}v^r$ cannot be represented by $I_5$, and write $g(p):=4$ if every unary Hermitian lattice $\mathfrak{P}v^r$ that can be represented by $I_5$ can also be represented by $I_4$; otherwise, write $g(p):=5$.
$\mathfrak{S}_d(1)$ consists of all the positive definite integral unary Hermitian lattices except for $\mathfrak{P}v^r$ with $r\in E(p)$, and $g_d(1)$ corresponds to the largest value of $g(p)$ when $p$ runs through all the primes lying below non-principal prime representatives of ideal classes of $E$. **Theorem 3**. *Assume that $\mathfrak{P}$ is a non-principal prime ideal in $E$ and $p$ is the prime number lying below $\mathfrak{P}$. When $p|d$, the unary Hermitian lattice $\mathfrak{P}v^r$ is represented by $I_4$ for every positive integer $r\geq C:=(p-1)d/p$.* *Proof.* $d\equiv1,2~(\mathrm{mod}~4)$. Then, $\mathcal{O}=\mathbb{Z}+\mathbb{Z}\omega$ for $\omega=\sqrt{-d}$. By Theorem 25 in [@dM], we have $\mathfrak{P}=\mathcal{O}p+\mathcal{O}\omega$. Thus, it follows from [@jL2 Lemma 2.1] that $$\omega\gamma_\ell=\omega(a_\ell+b_\ell\omega)=-db_\ell+a_\ell\omega$$ for each $1\leq\ell\leq4$, which leads to $p|a_\ell$ and $b_\ell$ arbitrary; so, one has $$h(v^{r})=\frac{r}{p}=\sum_{\ell=1}^4N\Big(\frac{\gamma_\ell}{p}\Big) =\sum_{\ell=1}^4\bigg(\frac{a_\ell^2}{p^2}+\frac{db_\ell^2}{p^2}\bigg)=:\sum_{\ell=1}^4P_d(a_\ell,b_\ell).$$ Choose $a_\ell=p\tilde{a}_\ell$ with $b_\ell$ an arbitrary integer for $1\leq\ell\leq4$ to observe $$\label{Eq3.1} r=p\sum_{\ell=1}^4P_d(a_\ell,b_\ell)=p\sum_{\ell=1}^4\bigg(\tilde{a}^2_\ell+\frac{db_\ell^2}{p^2}\bigg) =\sum_{\ell=1}^4\Big(p\tilde{a}^2_\ell+\frac{d}{p}b_\ell^2\Big).$$ $d\equiv 3~(\mathrm{mod}~4)$. Then, $\mathcal{O}=\mathbb{Z}+\mathbb{Z}\omega$ for $\omega=\big(1+\sqrt{-d}\big)/2$. By Theorem 25 in [@dM], we get $\mathfrak{P}=\mathcal{O}p+\mathcal{O}\sqrt{-d}$. 
So, it follows from [@jL2 Lemma 2.1] that $$(-1+2\omega)\gamma_\ell=(-1+2\omega)(a_\ell+b_\ell\omega)=-\Big(a_\ell+\frac{d+1}{2}b_\ell\Big)+(2a_\ell+b_\ell)\omega$$ for each $1\leq\ell\leq 4$, which leads to $p|(2a_\ell+b_\ell)$; therefore, one has $$h(v^{r})=\frac{r}{p}=\sum_{\ell=1}^4N\Big(\frac{\gamma_\ell}{p}\Big) =\sum_{\ell=1}^4\bigg(\frac{a_\ell^2}{p^2}+\frac{a_\ell b_\ell}{p^2}+\frac{(d+1)b_\ell^2}{4p^2}\bigg)=:\sum_{\ell=1}^4P_d(a_\ell,b_\ell).$$ Take $a_\ell=p\tilde{a}_\ell-\tilde{b}_\ell,b_\ell=2\tilde{b}_\ell$ with $\tilde{a}_\ell,\tilde{b}_\ell$ arbitrary integers for $1\leq\ell\leq4$ to observe $$\label{Eq3.2} \begin{aligned} r&=p\sum_{\ell=1}^4P_d(a_\ell,b_\ell)=p\sum_{\ell=1}^4\frac{1}{p^2}\Big(a_\ell^2+a_\ell b_\ell+\frac{d+1}{4}b_\ell^2\Big)\\ &=\sum_{\ell=1}^4\frac{1}{p}\big(\big(p\tilde{a}_\ell-\tilde{b}_\ell\big)^2+2\tilde{b}_\ell\big(p\tilde{a}_\ell-\tilde{b}_\ell\big)+(d+1)\tilde{b}^2_\ell\big) =\sum_{\ell=1}^4\Big(p\tilde{a}^2_\ell+\frac{d}{p}\tilde{b}^2_\ell\Big). \end{aligned}$$ In equalities [\[Eq3.1\]](#Eq3.1){reference-type="eqref" reference="Eq3.1"} and [\[Eq3.2\]](#Eq3.2){reference-type="eqref" reference="Eq3.2"}, $d/p$ is an integer relatively prime to $p$, and therefore, $\{0,d/p,2d/p,\ldots,(p-1)d/p\}$ is a complete set of representatives of $\mathbb{Z}/p\mathbb{Z}$, with $r$ represented by $\displaystyle{p\sum_{\ell=1}^4\tilde{a}^2_\ell+\frac{d}{p}\sum_{\ell=1}^4\tilde{b}^2_\ell}$ for all $r\geq(p-1)d/p$. ◻ **Theorem 4**. *Suppose that $\mathfrak{P}$ is a non-principal prime ideal in $E$ and $2$ is the prime number lying below $\mathfrak{P}$. When $d$ is odd, the unary Hermitian lattice $\mathfrak{P}v^r$ is represented by $I_4$ for every positive integer $r\geq C:=(d+1)/2$.* *Proof.* As a matter of fact, Lemma 2.3 in [@jL2] implies that $\mathfrak{P}v^r\to I_4$ for all positive even integers $r$; the discussion below will lead to $\mathfrak{P}v^r\to I_3$ when $r\geq(d+1)/2$ is a positive odd integer. $d\equiv1~(\mathrm{mod}~4)$. 
Then, $\mathcal{O}=\mathbb{Z}+\mathbb{Z}\omega$ for $\omega=\sqrt{-d}$. By [@dM Theorem 25], one has $\mathfrak{P}=\mathcal{O}2+\mathcal{O}(1+\omega)$. So, it follows from [@jL2 Lemma 2.1] that $$(1+\omega)\gamma_\ell=(1+\omega)(a_\ell+b_\ell\omega)=(a_\ell-db_\ell)+(a_\ell+b_\ell)\omega$$ for each $1\leq\ell\leq 3$, which leads to $2|(a_\ell+b_\ell)$; therefore, we have $$h(v^{r})=\frac{r}{2}=\sum_{\ell=1}^3N\Big(\frac{\gamma_\ell}{2}\Big) =\sum_{\ell=1}^3\bigg(\frac{a_\ell^2}{4}+\frac{db_\ell^2}{4}\bigg)=:\sum_{\ell=1}^3P_d(a_\ell,b_\ell).$$ Choose $a_1=2\tilde{a}_1+1,b_1=1$ and $a_\ell=2\tilde{a}_\ell,b_\ell=0$ for $2\leq\ell\leq3$ to observe $$\label{Eq3.3} \begin{aligned} r&=2\sum_{\ell=1}^3P_d(a_\ell,b_\ell)=\frac{1}{2}\big((2\tilde{a}_1+1)^2+d\big)+2\tilde{a}^2_2+2\tilde{a}^2_3\\ &=\frac{d+1}{2}+2\big(\tilde{a}^2_1+\tilde{a}_1+\tilde{a}^2_2+\tilde{a}^2_3\big). \end{aligned}$$ Notice that $(d+1)/2$ is an odd integer and $2\big(\tilde{a}^2_1+\tilde{a}_1+\tilde{a}^2_2+\tilde{a}^2_3\big)$ can represent all positive even integers in view of [@zS15 Page 1368]. Therefore, $\mathfrak{P}v^r\to I_3$ when $r\geq(d+1)/2$ is a positive odd integer. $d\equiv7~(\mathrm{mod}~8)$. Then, $\mathcal{O}=\mathbb{Z}+\mathbb{Z}\omega$ for $\omega=\big(1+\sqrt{-d}\big)/2$. By Theorem 25 in [@dM], one has either $\mathfrak{P}=\mathcal{O}2+\mathcal{O}\omega$ or $\mathfrak{P}=\mathcal{O}2+\mathcal{O}(1-\omega)$. By Lemma [Lemma 2](#Lem2){reference-type="ref" reference="Lem2"}, we take $\mathfrak{P}=\mathcal{O}2+\mathcal{O}\omega$ for discussion.
It follows from [@jL2 Lemma 2.1] that $$\omega\gamma_\ell=\omega(a_\ell+b_\ell\omega)=-\frac{d+1}{4}b_\ell+(a_\ell+b_\ell)\omega$$ for each $1\leq\ell\leq 3$, which leads to $2|(a_\ell+b_\ell)$; therefore, we have $$h(v^{r})=\frac{r}{2}=\sum_{\ell=1}^3N\Big(\frac{\gamma_\ell}{2}\Big) =\sum_{\ell=1}^3\bigg(\frac{a_\ell^2}{4}+\frac{a_\ell b_\ell}{4}+\frac{(d+1)b_\ell^2}{16}\bigg)=:\sum_{\ell=1}^3P_d(a_\ell,b_\ell).$$ Choose $a_\ell=2\tilde{a}_\ell+1$ and $b_\ell=-1$ for $1\leq\ell\leq3$ to observe $$\label{Eq3.4} \begin{aligned} r&=2\sum_{\ell=1}^3P_d(a_\ell,b_\ell)=\sum_{\ell=1}^3\frac{1}{2}\Big((2\tilde{a}_\ell+1)^2-(2\tilde{a}_\ell+1)+\frac{d+1}{4}\Big)\\ &=\sum_{\ell=1}^3\Big(2\tilde{a}_\ell^2+\tilde{a}_\ell+\frac{d+1}{8}\Big)=\frac{3(d+1)}{8}+\sum_{\ell=1}^3\tilde{a}_\ell(2\tilde{a}_\ell+1). \end{aligned}$$ Notice $3(d+1)/8$ is an integer and $\sum_{\ell=1}^3\tilde{a}_\ell(2\tilde{a}_\ell+1)$ represents all positive integers by [@zS17 Theorem 1.2]. Therefore, $\mathfrak{P}v^r\to I_3$ whenever $r\geq(d+1)/2\geq3(d+1)/8$ is a positive odd integer. Note that when $d\equiv3~(\mathrm{mod}~8)$, the prime ideal $\mathfrak{P}$ lying above $2$ is $2\mathcal{O}$, a principal ideal, which is in the same ideal class as $\mathcal{O}$. ◻ **Theorem 5**. *Let $\mathfrak{P}$ be a non-principal prime ideal in $E$, and let $p$ be the prime number lying below $\mathfrak{P}$. When $p$ is odd with $p\hspace{-0.8mm}\nmid\hspace{-0.8mm}d$, the unary Hermitian lattice $\mathfrak{P}v^r$ is represented by $I_4$ for every positive integer $r\geq C:=p(p-1)^2/4+(p-1)n+(d+n^2)/p+2pd$ if $d\equiv1,2~(\mathrm{mod}~4)$ and $r\geq C:=p(p-1)^2/4+(p-1)n+(d+n^2)/p+p(d+1)/4$ if $d\equiv3~(\mathrm{mod}~4)$.* *Proof.* $d\equiv1,2~(\mathrm{mod}~4)$. Then, $\mathcal{O}=\mathbb{Z}+\mathbb{Z}\omega$ for $\omega=\sqrt{-d}$.
By Theorem 25 in [@dM], one has either $\mathfrak{P}=\mathcal{O}p+\mathcal{O}(n+\omega)$ or $\mathfrak{P}=\mathcal{O}p+\mathcal{O}(n-\omega)$, where $n$ is the least positive integer with $-d\equiv n^2~(\mathrm{mod}~p)$. By Lemma [Lemma 2](#Lem2){reference-type="ref" reference="Lem2"}, we take $\mathfrak{P}=\mathcal{O}p+\mathcal{O}(n+\omega)$ for discussion. It follows from [@jL2 Lemma 2.1] that $$(n+\omega)\gamma_\ell=(n+\omega)(a_\ell+b_\ell\omega)=(na_\ell-db_\ell)+(a_\ell+nb_\ell)\omega$$ for each $1\leq\ell\leq 4$, which leads to $p|(a_\ell+nb_\ell)$; therefore, we have $$h(v^{r})=\frac{r}{p}=\sum_{\ell=1}^4N\Big(\frac{\gamma_\ell}{p}\Big) =\sum_{\ell=1}^4\bigg(\frac{a_\ell^2}{p^2}+\frac{db_\ell^2}{p^2}\bigg)=:\sum_{\ell=1}^4P_d(a_\ell,b_\ell).$$ Choose $a_1=p\tilde{a}_1-n,b_1=1$ and $a_\ell=p\tilde{a}_\ell,b_\ell=p\tilde{b}_\ell$ for $2\leq\ell\leq4$ to observe $$\label{Eq3.5} {\small\begin{aligned} r=&~p\sum_{\ell=1}^4P_d(a_\ell,b_\ell)=\bigg(p\tilde{a}_1^2-2\tilde{a}_1n+\frac{d+n^2}{p}\bigg)+ p\big(\tilde{a}_2^2+\tilde{a}_3^2+\tilde{a}_4^2+d\tilde{b}_2^2+d\tilde{b}_3^2+d\tilde{b}_4^2\big)\\ :=&~S_1+pS_2. \end{aligned}}$$ For $S_1$, note $(d+n^2)/p$ is an integer and $p\tilde{a}_1^2-2\tilde{a}_1n$ forms a complete representative set when $\tilde{a}_1$ runs through the integral set $\{-(p-1)/2,\ldots,-1,0,1,\ldots,(p-1)/2\}$. For $S_2$, recall $\tilde{a}_2^2+\tilde{a}_3^2+\tilde{a}_4^2$ can represent all positive integers except for those of the form $4^s(8t+7)$ with $s,t$ nonnegative integers. 
Given an integer $m\geq2d$ of the form $4^s(8t+7)$, we know that either $m-d$ or $m-2d$ can be represented by $\tilde{a}_2^2+\tilde{a}_3^2+\tilde{a}_4^2$; details are listed below:

|                      | $d\equiv1~(\mathrm{mod}~8)$ | $d\equiv2~(\mathrm{mod}~8)$ | $d\equiv5~(\mathrm{mod}~8)$ | $d\equiv6~(\mathrm{mod}~8)$ |
|----------------------|------------------|------------------|------------------|------------------|
| $m=4^0(8t+7)$        | $m-d$  | $m-d$  | $m-d$  | $m-d$  |
| $m=4^1(8t+7)$        | $m-d$  | $m-d$  | $m-2d$ | $m-d$  |
| $m=4^{s\geq2}(8t+7)$ | $m-2d$ | $m-d$  | $m-d$  | $m-d$  |

So, $S_2$ represents all the positive integers $m\geq2d$. Combine $S_1$ and $S_2$ to see $\mathfrak{P}v^r\to I_4$ for every positive integer $r\geq p(p-1)^2/4+(p-1)n+(d+n^2)/p+2pd$. $d\equiv3~(\mathrm{mod}~4)$. Then, $\mathcal{O}=\mathbb{Z}+\mathbb{Z}\omega$ for $\omega=\big(1+\sqrt{-d}\big)/2$, and Proposition 8.3 in [@jN] leads to $\mathfrak{P}_1=\big(p,-\frac{n+1}{2}+\omega\big)$ and $\mathfrak{P}_2=\big(p,\frac{n-1}{2}+\omega\big)$ for the least positive odd integer $n$ with $-d\equiv n^2~(\mathrm{mod}~p)$.
By Lemma 2.1 in [@jL2] and Lemma [Lemma 2](#Lem2){reference-type="ref" reference="Lem2"}, we take $\mathfrak{P}=\mathcal{O}p+\mathcal{O}\big(\frac{n-1}{2}+\omega\big)$ for discussion and have $$\Big(\frac{n-1}{2}+\omega\Big)\gamma_\ell=\Big(\frac{n-1}{2}+\omega\Big)(a_\ell+b_\ell\omega) =\Big(\frac{n-1}{2}a_\ell-\frac{d+1}{4}b_\ell\Big)+\Big(a_\ell+\frac{n+1}{2}b_\ell\Big)\omega$$ for each $1\leq\ell\leq 4$, which leads to $p\big|\big(a_\ell+\frac{n+1}{2}b_\ell\big)$; therefore, one has $$h(v^{r})=\frac{r}{p}=\sum_{\ell=1}^4N\Big(\frac{\gamma_\ell}{p}\Big) =\sum_{\ell=1}^4\bigg(\frac{a_\ell^2}{p^2}+\frac{a_\ell b_\ell}{p^2}+\frac{(d+1)b_\ell^2}{4p^2}\bigg)=:\sum_{\ell=1}^4P_d(a_\ell,b_\ell).$$ Take $a_1=p\tilde{a}_1-(n+1),b_1=2$, $a_2=p\tilde{a}_2,b_2=p$, and $a_\ell=p\tilde{a}_\ell,b_\ell=0$ for $3\leq\ell\leq4$ to observe $$\label{Eq3.6} {\small\begin{aligned} r=&~p\sum_{\ell=1}^4P_d(a_\ell,b_\ell)=\bigg(p\tilde{a}_1^2-2\tilde{a}_1n+\frac{d+n^2}{p}\bigg)+p\Big(\tilde{a}_2^2+\tilde{a}_2+\frac{d+1}{4}+\tilde{a}_3^2+\tilde{a}_4^2\Big)\\ :=&~S_1+pS_2. \end{aligned}}$$ For $S_1$, note $(d+n^2)/p$ is an integer and $p\tilde{a}_1^2-2\tilde{a}_1n$ forms a complete representative set when $\tilde{a}_1$ runs through the integral set $\{-(p-1)/2,\ldots,-1,0,1,\ldots,(p-1)/2\}$. For $S_2$, recall $\tilde{a}_2^2+\tilde{a}_2+\tilde{a}_3^2+\tilde{a}_4^2$ can represent all positive integers by [@zS15 Page 1368]. Thus, $S_2$ represents all the positive integers $m\geq(d+1)/4$. Combine $S_1$ and $S_2$ to see $\mathfrak{P}v^r\to I_4$ for every positive integer $r\geq p(p-1)^2/4+(p-1)n+(d+n^2)/p+p(d+1)/4$. ◻ **Input:** $p$, $d$ and other indispensable parameters $E(p)$ and $g(p)$ Rewrite $P_d(a_\ell,b_\ell)$ and remove the congruence restrictions on $a_\ell$ and $b_\ell$; see Section 4 for details. $E(p)\gets$ the set of values of $r$ that are less than $C$ and that cannot be represented by $\sum_{\ell=1}^5pP_d(a_\ell, b_\ell)$. 
- $F(p)\gets$ the set of values of $r$ that are less than $C$ and that cannot be represented by $\sum_{\ell=1}^4pP_d(a_\ell, b_\ell)$.
- If $E(p)=F(p)$, set $g(p)=4$; otherwise set $g(p)=5$.
- Return $g(p),E(p)$.

# SageMath codes for an example {#Sec:Ex}

In this section, I provide the SageMath codes for this algorithm and use the imaginary quadratic field $E=\mathbb{Q}\big(\sqrt{-87}\big)$ (with class number $6$) as a demonstration of the process of determining the set $\mathfrak{S}_d(1)$ and the value of the $g$-invariant $g_d(1)$. In order to determine the positive integers that can be represented by the quadratic forms $\sum_{\ell=1}^5pP_d(a_\ell,b_\ell)$ or $\sum_{\ell=1}^4pP_d(a_\ell,b_\ell)$, one needs to remove the restrictions on $a_\ell,b_\ell$ using the given congruence relation between them; for instance, when $p\nmid d$ and $d\equiv3~(\mathrm{mod}~4)$, we have $$pP_d(a_\ell,b_\ell)=\frac{1}{p}\Big(a_\ell^2+a_\ell b_\ell+\frac{d+1}{4}b_\ell^2\Big)$$ with $p\big|\big(a_\ell+\frac{n+1}{2}b_\ell\big)$. Choose $a_\ell=p\tilde{a}_\ell-\frac{n+1}{2}\tilde{b}_\ell,b_\ell=\tilde{b}_\ell$ for arbitrary integers $\tilde{a}_\ell,\tilde{b}_\ell$ to deduce $$pP_d(a_\ell,b_\ell)=p\tilde{a}_\ell^2-n\tilde{a}_\ell\tilde{b}_\ell+\frac{d+n^2}{4p}\tilde{b}_\ell^2,$$ a quadratic form with integral variables $\tilde{a}_\ell,\tilde{b}_\ell$. This new expression of $pP_d(a_\ell,b_\ell)$ is used in Code $6$. The following are the codes for the six cases discussed above.
**Code 1: $p|d$ and $d\equiv1,2~(\mathrm{mod}~4)$.** Input: $p;d$. Output: $g(p);E(p)$.

``` {.python language="Python"}
sage: p=?; d=?; C=(p-1)*d/p
sage: Q=QuadraticForm(ZZ, 10, [p,0,0,0,0,0,0,0,0,0,d/p,0,0,0,0,0,0,0,0,p,0,0,0,0,0,0,0,d/p,0,0,0,0,0,0,p,0,0,0,0,0,d/p,0,0,0,0,p,0,0,0,d/p,0,0,p,0,d/p])
sage: S=Q.representation_number_list(C)
sage: Q=QuadraticForm(ZZ, 8, [p,0,0,0,0,0,0,0,d/p,0,0,0,0,0,0,p,0,0,0,0,0,d/p,0,0,0,0,p,0,0,0,d/p,0,0,p,0,d/p])
sage: T=Q.representation_number_list(C)
sage: def u(l):
sage:     if S[l]==0: return l
sage:     else: return 0
sage: E=[u(l) for l in [0..C-1]]
sage: Ep=[value for value in E if value !=0]
sage: def v(l):
sage:     if T[l]==0: return l
sage:     else: return 0
sage: F=[v(l) for l in [0..C-1]]
sage: Fp=[value for value in F if value !=0]
sage: def g(p):
sage:     if Ep==Fp: return 4
sage:     else: return 5
sage: g(p);Ep
```

**Code 2: $p|d$ and $d\equiv3~(\mathrm{mod}~4)$.** Input: $p;d$. Output: $g(p);E(p)$.

``` {.python language="Python"}
sage: p=?; d=?; C=(p-1)*d/p
sage: Q=QuadraticForm(ZZ, 10, [d/p,-d,0,0,0,0,0,0,0,0,p*(1+d)/4,0,0,0,0,0,0,0,0,d/p,-d,0,0,0,0,0,0,p*(1+d)/4,0,0,0,0,0,0,d/p,-d,0,0,0,0,p*(1+d)/4,0,0,0,0,d/p,-d,0,0,p*(1+d)/4,0,0,d/p,-d,p*(1+d)/4])
sage: S=Q.representation_number_list(C)
sage: Q=QuadraticForm(ZZ, 8, [d/p,-d,0,0,0,0,0,0,p*(1+d)/4,0,0,0,0,0,0,d/p,-d,0,0,0,0,p*(1+d)/4,0,0,0,0,d/p,-d,0,0,p*(1+d)/4,0,0,d/p,-d,p*(1+d)/4])
sage: T=Q.representation_number_list(C)
sage: def u(l):
sage:     if S[l]==0: return l
sage:     else: return 0
sage: E=[u(l) for l in [0..C-1]]
sage: Ep=[value for value in E if value !=0]
sage: def v(l):
sage:     if T[l]==0: return l
sage:     else: return 0
sage: F=[v(l) for l in [0..C-1]]
sage: Fp=[value for value in F if value !=0]
sage: def g(p):
sage:     if Ep==Fp: return 4
sage:     else: return 5
sage: g(p);Ep
```

**Code 3: $2\nmid d$ and $d\equiv1~(\mathrm{mod}~4)$.** Input: $d$. Output: $g(p);E(p)$.

``` {.python language="Python"}
sage: p=2; d=?; C=(1+d)/2
sage: Q=QuadraticForm(ZZ, 10, [2,-2,0,0,0,0,0,0,0,0,(1+d)/2,0,0,0,0,0,0,0,0,2,-2,0,0,0,0,0,0,(1+d)/2,0,0,0,0,0,0,2,-2,0,0,0,0,(1+d)/2,0,0,0,0,2,-2,0,0,(1+d)/2,0,0,2,-2,(1+d)/2])
sage: S=Q.representation_number_list(C)
sage: Q=QuadraticForm(ZZ, 8, [2,-2,0,0,0,0,0,0,(1+d)/2,0,0,0,0,0,0,2,-2,0,0,0,0,(1+d)/2,0,0,0,0,2,-2,0,0,(1+d)/2,0,0,2,-2,(1+d)/2])
sage: T=Q.representation_number_list(C)
sage: def u(l):
sage:     if S[l]==0: return l
sage:     else: return 0
sage: E=[u(l) for l in [0..C-1]]
sage: Ep=[value for value in E if value !=0]
sage: def v(l):
sage:     if T[l]==0: return l
sage:     else: return 0
sage: F=[v(l) for l in [0..C-1]]
sage: Fp=[value for value in F if value !=0]
sage: def g(p):
sage:     if Ep==Fp: return 4
sage:     else: return 5
sage: g(p);Ep
```

**Code 4: $2\nmid d$ and $d\equiv7~(\mathrm{mod}~8)$.** Input: $d$. Output: $g(p);E(p)$.

``` {.python language="Python"}
sage: p=2; d=?; C=(1+d)/2
sage: Q=QuadraticForm(ZZ, 10, [2,-1,0,0,0,0,0,0,0,0,(1+d)/8,0,0,0,0,0,0,0,0,2,-1,0,0,0,0,0,0,(1+d)/8,0,0,0,0,0,0,2,-1,0,0,0,0,(1+d)/8,0,0,0,0,2,-1,0,0,(1+d)/8,0,0,2,-1,(1+d)/8])
sage: S=Q.representation_number_list(C)
sage: Q=QuadraticForm(ZZ, 8, [2,-1,0,0,0,0,0,0,(1+d)/8,0,0,0,0,0,0,2,-1,0,0,0,0,(1+d)/8,0,0,0,0,2,-1,0,0,(1+d)/8,0,0,2,-1,(1+d)/8])
sage: T=Q.representation_number_list(C)
sage: def u(l):
sage:     if S[l]==0: return l
sage:     else: return 0
sage: E=[u(l) for l in [0..C-1]]
sage: Ep=[value for value in E if value !=0]
sage: def v(l):
sage:     if T[l]==0: return l
sage:     else: return 0
sage: F=[v(l) for l in [0..C-1]]
sage: Fp=[value for value in F if value !=0]
sage: def g(p):
sage:     if Ep==Fp: return 4
sage:     else: return 5
sage: g(p);Ep
```

**Code 5: $p\nmid d$ and $d\equiv1,2~(\mathrm{mod}~4)$.** Input: $p;d;n$. Output: $g(p);E(p)$.

``` {.python language="Python"}
sage: p=?; d=?; n=?; C=p*(p-1)*(p-1)/4+(p-1)*n+(d+n*n)/p+2*p*d
sage: Q=QuadraticForm(ZZ, 10, [p,-2*n,0,0,0,0,0,0,0,0,(d+n*n)/p,0,0,0,0,0,0,0,0,p,-2*n,0,0,0,0,0,0,(d+n*n)/p,0,0,0,0,0,0,p,-2*n,0,0,0,0,(d+n*n)/p,0,0,0,0,p,-2*n,0,0,(d+n*n)/p,0,0,p,-2*n,(d+n*n)/p])
sage: S=Q.representation_number_list(C)
sage: Q=QuadraticForm(ZZ, 8, [p,-2*n,0,0,0,0,0,0,(d+n*n)/p,0,0,0,0,0,0,p,-2*n,0,0,0,0,(d+n*n)/p,0,0,0,0,p,-2*n,0,0,(d+n*n)/p,0,0,p,-2*n,(d+n*n)/p])
sage: T=Q.representation_number_list(C)
sage: def u(l):
sage:     if S[l]==0: return l
sage:     else: return 0
sage: E=[u(l) for l in [0..C-1]]
sage: Ep=[value for value in E if value !=0]
sage: def v(l):
sage:     if T[l]==0: return l
sage:     else: return 0
sage: F=[v(l) for l in [0..C-1]]
sage: Fp=[value for value in F if value !=0]
sage: def g(p):
sage:     if Ep==Fp: return 4
sage:     else: return 5
sage: g(p);Ep
```

**Code 6: $p\nmid d$ and $d\equiv3~(\mathrm{mod}~4)$.** Input: $p;d;n$. Output: $g(p);E(p)$.

``` {.python language="Python"}
sage: p=?; d=?; n=?; C=p*(p-1)*(p-1)/4+(p-1)*n+(d+n*n)/p+p*(d+1)/4
sage: Q=QuadraticForm(ZZ, 10, [p,-n,0,0,0,0,0,0,0,0,(d+n*n)/(4*p),0,0,0,0,0,0,0,0,p,-n,0,0,0,0,0,0,(d+n*n)/(4*p),0,0,0,0,0,0,p,-n,0,0,0,0,(d+n*n)/(4*p),0,0,0,0,p,-n,0,0,(d+n*n)/(4*p),0,0,p,-n,(d+n*n)/(4*p)])
sage: S=Q.representation_number_list(C)
sage: Q=QuadraticForm(ZZ, 8, [p,-n,0,0,0,0,0,0,(d+n*n)/(4*p),0,0,0,0,0,0,p,-n,0,0,0,0,(d+n*n)/(4*p),0,0,0,0,p,-n,0,0,(d+n*n)/(4*p),0,0,p,-n,(d+n*n)/(4*p)])
sage: T=Q.representation_number_list(C)
sage: def u(l):
sage:     if S[l]==0: return l
sage:     else: return 0
sage: E=[u(l) for l in [0..C-1]]
sage: Ep=[value for value in E if value !=0]
sage: def v(l):
sage:     if T[l]==0: return l
sage:     else: return 0
sage: F=[v(l) for l in [0..C-1]]
sage: Fp=[value for value in F if value !=0]
sage: def g(p):
sage:     if Ep==Fp: return 4
sage:     else: return 5
sage: g(p);Ep
```

Now, consider the imaginary quadratic field $E=\mathbb{Q}\big(\sqrt{-87}\big)$ with class number $6$. Identify $5$ prime ideals as representatives of the $5$ non-principal ideal classes.
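Codes 5 and 6 take as input the least positive odd integer $n$ with $-d\equiv n^2~(\mathrm{mod}~p)$. A small plain-Python helper (my own illustration, not part of the paper's algorithm) confirms the parameter $n=5$ used below for $p=7$, $d=87$:

```python
def least_odd_sqrt_of_minus_d(d, p):
    """Smallest positive odd n with n^2 = -d (mod p), or None if none exists."""
    # the odd numbers 1, 3, ..., 2p-1 hit every residue class modulo the odd prime p
    for n in range(1, 2 * p, 2):
        if (n * n + d) % p == 0:
            return n
    return None
```

For $d=87$ and $p=7$ this returns $5$, since $5^2+87=112=7\cdot16$; for a prime where $-d$ is not a square, such as $p=5$, it returns `None`.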
**Input:** $d$. **Output:** the $6$ nonequivalent prime ideal representatives.

``` {.python language="Python"}
sage: d=87
sage: K.<a>=NumberField(x^2+d)
sage: K.class_number()
6
sage: Cl=K.class_group()
sage: [c.representative_prime() for c in Cl]
[Fractional ideal (5), Fractional ideal (2,1/2*a-1/2), Fractional ideal (7,1/2*a+5/2), Fractional ideal (3,1/2*a+3/2), Fractional ideal (7,1/2*a+9/2), Fractional ideal (2,1/2*a+1/2)]
```

Here $a=\sqrt{-87}$ in the SageMath output. Denote $\mathfrak{P}_1=\mathcal{O}$, $\mathfrak{P}_2=\mathcal{O}2+\mathcal{O}(-1+\omega)$, $\mathfrak{P}_3=\mathcal{O}7+\mathcal{O}(2+\omega)$, $\mathfrak{P}_4=\mathcal{O}3+\mathcal{O}(1+\omega)$, $\mathfrak{P}_5=\mathcal{O}7+\mathcal{O}(4+\omega)$, and $\mathfrak{P}_6=\mathcal{O}2+\mathcal{O}\omega$. From the above result, $3$ is ramified in $E$ and $2$, $7$ split in $E$. By Lemma [Lemma 2](#Lem2){reference-type="ref" reference="Lem2"}, one only needs to check the representations of $\mathfrak{P}_2v_2^{r_2}$, $\mathfrak{P}_3v_3^{r_3}$ and $\mathfrak{P}_4v_4^{r_4}$ by $I_m$.
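The class number $6$ reported by Sage can also be double-checked without Sage by counting reduced positive definite binary quadratic forms of discriminant $-87$ (a standard fact about fundamental discriminants). The helper below is my own illustration and is not part of the paper's algorithm:

```python
def class_number(D):
    """Class number of discriminant D < 0 via reduced forms ax^2 + bxy + cy^2."""
    assert D < 0 and D % 4 in (0, 1)
    h = 0
    b = D % 2                       # b must have the same parity as D
    while 3 * b * b <= -D:          # |b| <= a <= c forces 3b^2 <= |D|
        if (b * b - D) % 4 == 0:
            ac = (b * b - D) // 4
            a = max(b, 1)
            while a * a <= ac:      # a <= c
                if ac % a == 0:
                    # reduced: -a < b <= a <= c, with b >= 0 if a == b or a == c
                    c = ac // a
                    h += 1 if (b == 0 or a == b or a == c) else 2
                a += 1
        b += 2
    return h
```

For example, `class_number(-87)` counts the six reduced forms $(1,1,22)$, $(2,\pm1,11)$, $(3,3,8)$, $(4,\pm3,6)$.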
**Input:** $d=87$. **Output:** $g(2);E(2)$.

``` {.python language="Python"}
sage: p=2; d=87; C=(1+d)/2
sage: Q=QuadraticForm(ZZ, 10, [2,-1,0,0,0,0,0,0,0,0,(1+d)/8,0,0,0,0,0,0,0,0,2,-1,0,0,0,0,0,0,(1+d)/8,0,0,0,0,0,0,2,-1,0,0,0,0,(1+d)/8,0,0,0,0,2,-1,0,0,(1+d)/8,0,0,2,-1,(1+d)/8])
sage: S=Q.representation_number_list(C)
sage: Q=QuadraticForm(ZZ, 8, [2,-1,0,0,0,0,0,0,(1+d)/8,0,0,0,0,0,0,2,-1,0,0,0,0,(1+d)/8,0,0,0,0,2,-1,0,0,(1+d)/8,0,0,2,-1,(1+d)/8])
sage: T=Q.representation_number_list(C)
sage: def u(l):
sage:     if S[l]==0: return l
sage:     else: return 0
sage: E=[u(l) for l in [0..C-1]]
sage: Ep=[value for value in E if value !=0]
sage: def v(l):
sage:     if T[l]==0: return l
sage:     else: return 0
sage: F=[v(l) for l in [0..C-1]]
sage: Fp=[value for value in F if value !=0]
sage: def g(p):
sage:     if Ep==Fp: return 4
sage:     else: return 5
sage: g(p);Ep
4
(1, 3, 5, 7, 9)
```

**Input:** $p=7;d=87;n=5$. **Output:** $g(7);E(7)$.

``` {.python language="Python"}
sage: p=7; d=87; n=5; C=p*(p-1)*(p-1)/4+(p-1)*n+(d+n*n)/p+p*(d+1)/4
sage: Q=QuadraticForm(ZZ, 10, [p,-n,0,0,0,0,0,0,0,0,(d+n*n)/(4*p),0,0,0,0,0,0,0,0,p,-n,0,0,0,0,0,0,(d+n*n)/(4*p),0,0,0,0,0,0,p,-n,0,0,0,0,(d+n*n)/(4*p),0,0,0,0,p,-n,0,0,(d+n*n)/(4*p),0,0,p,-n,(d+n*n)/(4*p)])
sage: S=Q.representation_number_list(C)
sage: Q=QuadraticForm(ZZ, 8, [p,-n,0,0,0,0,0,0,(d+n*n)/(4*p),0,0,0,0,0,0,p,-n,0,0,0,0,(d+n*n)/(4*p),0,0,0,0,p,-n,0,0,(d+n*n)/(4*p),0,0,p,-n,(d+n*n)/(4*p)])
sage: T=Q.representation_number_list(C)
sage: def u(l):
sage:     if S[l]==0: return l
sage:     else: return 0
sage: E=[u(l) for l in [0..C-1]]
sage: Ep=[value for value in E if value !=0]
sage: def v(l):
sage:     if T[l]==0: return l
sage:     else: return 0
sage: F=[v(l) for l in [0..C-1]]
sage: Fp=[value for value in F if value !=0]
sage: def g(p):
sage:     if Ep==Fp: return 4
sage:     else: return 5
sage: g(p);Ep
4
(1, 2, 3, 5, 9)
```

**Input:** $p=3;d=87$. **Output:** $g(3);E(3)$.

``` {.python language="Python"}
sage: p=3; d=87; C=(p-1)*d/p
sage: Q=QuadraticForm(ZZ, 10, [d/p,-d,0,0,0,0,0,0,0,0,p*(1+d)/4,0,0,0,0,0,0,0,0,d/p,-d,0,0,0,0,0,0,p*(1+d)/4,0,0,0,0,0,0,d/p,-d,0,0,0,0,p*(1+d)/4,0,0,0,0,d/p,-d,0,0,p*(1+d)/4,0,0,d/p,-d,p*(1+d)/4])
sage: S=Q.representation_number_list(C)
sage: Q=QuadraticForm(ZZ, 8, [d/p,-d,0,0,0,0,0,0,p*(1+d)/4,0,0,0,0,0,0,d/p,-d,0,0,0,0,p*(1+d)/4,0,0,0,0,d/p,-d,0,0,p*(1+d)/4,0,0,d/p,-d,p*(1+d)/4])
sage: T=Q.representation_number_list(C)
sage: def u(l):
sage:     if S[l]==0: return l
sage:     else: return 0
sage: E=[u(l) for l in [0..C-1]]
sage: Ep=[value for value in E if value !=0]
sage: def v(l):
sage:     if T[l]==0: return l
sage:     else: return 0
sage: F=[v(l) for l in [0..C-1]]
sage: Fp=[value for value in F if value !=0]
sage: def g(p):
sage:     if Ep==Fp: return 4
sage:     else: return 5
sage: g(p);Ep
4
(1, 2, 4, 5, 7, 10, 13)
```

The preceding results together yield $g_{87}(1)=4$ and $\mathfrak{S}_{87}(1)=\big\{\mathcal{O}v_1^{r_1}:r_1\geq1\big\}\bigcup\allowbreak \big\{\mathfrak{P}_2v_2^{r_2}:r_2\neq1,3,5,7,9\big\}\bigcup\big\{\mathfrak{P}_3v_3^{r_3}:r_3\neq1,2,3,5,9\big\} \bigcup\big\{\mathfrak{P}_4v_4^{r_4}:r_4\neq1,2,4,5,7,\allowbreak 10,13\big\}\bigcup\big\{\mathfrak{P}_5v_5^{r_5}:r_5\neq1,2,3,5,9\big\}\bigcup\big\{\mathfrak{P}_6v_6^{r_6}:r_6\neq1,3,5,7,9\big\}$.

- C.N. Beli, W.K. Chan, M.I. Icaza, and J. Liu. *On a Waring's problem for integral quadratic and Hermitian forms*. Trans. Amer. Math. Soc. **371** (2019), 5505--5527.
- V. Blomer and V. Kala. *On the rank of universal quadratic forms over real quadratic fields*. Doc. Math. **23** (2018), 15--34.
- W.K. Chan and M.I. Icaza. *Hermite reduction and a Waring's problem for integral quadratic forms over number fields*. Trans. Amer. Math. Soc. **374** (2021), 2967--2985.
- L.J. Gerstein. *Classes of definite Hermitian forms*. Amer. J. Math. **100** (1978), 81--97.
- F. Götzky. *Über eine zahlentheoretische Anwendung von Modulfunktionen zweier Veränderlicher*. Math. Ann. **100** (1928), 411--437.
- V. Kala and P. Yatsyna.
*Lifting problem for universal quadratic forms*. Adv. Math. **377** (2021), Paper No. 107497, 24 pp.
- V. Kala and P. Yatsyna. *On Kitaoka's conjecture and lifting problem for universal quadratic forms*. Bull. Lond. Math. Soc. **55** (2023), 854--864.
- B.M. Kim, J.Y. Kim, and P.S. Park. *The fifteen theorem for universal Hermitian lattices over imaginary quadratic fields*. Math. Comp. **79** (2010), 1123--1144.
- J. Krásenský, M. Raška, and E. Sgallová. *Pythagoras numbers of orders in biquadratic fields*. Expo. Math. **40** (2022), 1181--1228.
- J. Krásenský and P. Yatsyna. *On quadratic Waring's problem in totally real number fields*. Proc. Amer. Math. Soc. **151** (2023), 1471--1485.
- J. Liu. *On a Waring's problem for Hermitian lattices*. Bull. Sci. Math. **174** (2022), Paper No. 102970, 25 pp.
- J. Liu. *$g$-invariant on unary Hermitian lattices over imaginary quadratic fields with class number $2$ or $3$*. J. Algebra **622** (2023), 636--675.
- H. Maaß. *Über die Darstellung total positiver Zahlen des Körpers $R\big(\sqrt{5}\big)$ als Summe von drei Quadraten*. Abh. Math. Semin. Univ. Hambg. **14** (1941), 185--191.
- D. Marcus. **Number fields**. Springer-Verlag, New York-Heidelberg, 1977.
- L.J. Mordell. *A new Waring's problem with squares of linear forms*. Quart. J. Math. **1** (1930), 276--288.
- J. Neukirch. **Algebraic number theory**. Springer-Verlag, Berlin, 1999.
- O.T. O'Meara. **Introduction to quadratic forms**. Springer-Verlag, Berlin, 2000.
- G. Shimura. *Arithmetic of unitary groups*. Ann. of Math. **79** (1964), 369--409.
- C.L. Siegel. *Sums of $m^{th}$ powers of algebraic integers*. Ann. of Math. **46** (1945), 313--339.
- Z.-W. Sun. *On universal sums of polygonal numbers*. Sci. China Math. **58** (2015), 1367--1396.
- Z.-W. Sun. *On $x(ax+1)+y(by+1)+z(cz+1)$ and $x(ax+b)+y(ay+c)+z(az+d)$*. J. Number Theory **171** (2017), 275--283.
arXiv:2309.16138, Jingbo Liu, *An algorithm for $g$-invariant on unary Hermitian lattices over imaginary quadratic fields*, math.NT.
---
abstract: |
  In this paper, we obtain a classification theorem for $2$-dimensional complete Lagrangian self-expanders with constant squared norm of the second fundamental form in $\mathbb C^{2}$.
address:
- |
  Zhi Li\
  College of Mathematics and Information Science, Henan Normal University, Xinxiang, Henan, China.
- |
  Guoxin Wei\
  School of Mathematical Sciences, South China Normal University, Guangzhou, China.
author:
- Zhi Li and Guoxin Wei
title: "**Complete Lagrangian self-expanders in $\mathbb C^{2}$**"
---

# Introduction

An $n$-dimensional smooth immersed submanifold $x: M^{n}\to \mathbb{R}^{n+p}$ is called a self-expander of the mean curvature flow if its mean curvature vector $\vec{H}$ satisfies the non-linear elliptic equation $$\vec{H}=x^{\perp},$$ where $x^{\perp}$ is the orthogonal projection of the position vector $x$ in $\mathbb{R}^{n+p}$ to the normal bundle of $M^{n}$. It is well known that if $x$ is a self-expander, then $x_{t}=\sqrt{2t}\,x$, $t>0$, moves by the mean curvature flow. Notice that the mean curvature flow of a compact hypersurface always blows up at finite time, while for noncompact hypersurfaces the solution to the mean curvature flow may exist for all time. Self-expanders appear as singularity models of mean curvature flows which exist for long time. For the case of codimension one, motivation for the study of self-expanders goes back to the work of Ecker-Huisken [@EH] and Stavrou [@Sta]. In particular, rigidity theorems for self-expanders were studied by many geometers; specific references can be found in [@AC], [@CZ], [@H], [@Ish], [@Smo] and the references therein. In addition, the properties of self-expanders of higher codimension have been studied for the Lagrangian mean curvature flow. The first examples of Lagrangian self-expanders were constructed by Anciaux [@Anc] and by Lee and Wang [@LW1].
In [@LN], Lotay and Neves proved that zero-Maslov class Lagrangian self-expanders in $\mathbb{C}^{n}$ that are asymptotic to a pair of planes intersecting transversely are locally unique for $n>2$ and unique for $n=2$. Later, Nakahara [@Nak] constructed a smooth Lagrangian self-expander asymptotic to any pair of Lagrangian planes in $\mathbb{C}^{n}$ which intersect transversely at the origin and have sum of characteristic angles less than $\pi$. In 2016, Imagi, Joyce and dos Santos [@IJS] showed further uniqueness results for Lagrangian self-expanders asymptotic to the union of two transverse Lagrangian planes. More results on Lagrangian self-expanders can be found in [@GSSZ], [@JLT], [@LW], [@Nev]. Recently, Cheng, Hori and Wei [@CHW] established the following classification theorem for complete Lagrangian self-shrinkers in $\mathbb{C}^{2}$:

**Theorem 1**. *Let $x: M^{2}\to \mathbb{C}^{2}$ be a $2$-dimensional complete Lagrangian self-shrinker with constant squared norm of the second fundamental form in $\mathbb{C}^{2}$. Then $x(M^{2})$ is one of the following surfaces: $\mathbb {R}^{2}$, $\mathbb {S}^{1}(1)\times \mathbb {R}^{1}$ and $\mathbb {S}^{1}(1)\times \mathbb {S}^{1}(1)$.*

Motivated by the above theorem, in this paper we study complete Lagrangian self-expanders in $\mathbb{C}^{2}$ and obtain the following result:

**Theorem 2**. *Let $x: M^{2}\to \mathbb C^{2}$ be a $2$-dimensional complete Lagrangian self-expander with constant squared norm of the second fundamental form. Then $x(M^{2})$ is a plane $\mathbb {R}^{2}$ through the origin.*

# Preliminaries

Let $x: M^{2} \rightarrow\mathbb C^{2}$ be a $2$-dimensional Lagrangian surface of $\mathbb C^{2}$. Denote by $J$ the canonical complex structure on $\mathbb C^{2}$.
We choose orthonormal tangent vector fields $\{e_{1}, e_{2}\}$; then $\{e_{1^{\ast}}, e_{2^{\ast}}\}$, given by $$e_{1^{\ast}}=J e_{1}, \ e_{2^{\ast}}=Je_{2},$$ are normal vector fields, and $$\{e_{1}, e_{2}, e_{1^{\ast}}, e_{2^{\ast}}\}$$ is called an adapted Lagrangian frame field. The dual frame fields of $\{e_{1}, e_{2}\}$ are $\{\omega_{1}, \omega_{2}\}$; the Levi-Civita connection forms and the normal connection forms are $\omega_{ij}$ and $\omega_{i^{\ast}j^{\ast}}$, respectively. Since $x: M^{2} \rightarrow\mathbb C^{2}$ is a Lagrangian surface (see [@LV], [@LW2]), we have $$\label{2.1-1} h^{p^{\ast}}_{ij}=h^{p^{\ast}}_{ji}=h^{i^{\ast}}_{pj}, \ \ i,j,p=1,2.$$ The second fundamental form $h$ and the mean curvature vector $\vec{H}$ of $x$ are defined respectively by $$h=\sum_{ijp}h^{p^{\ast}}_{ij}\omega_{i}\otimes\omega_{j}\otimes e_{p^{\ast}},\ \ \vec{H}=\sum_{p}H^{p^{\ast}}e_{p^{\ast}}=\sum_{i,p}h^{p^{\ast}}_{ii}e_{p^{\ast}}.$$ Let $S=\sum_{i,j,p}(h^{p^{\ast}}_{ij})^{2}$ denote the squared norm of the second fundamental form and $H=|\vec{H}|$ the mean curvature of $x$.
If we denote the components of the curvature tensors of the Levi-Civita connection forms $\omega_{ij}$ and the normal connection forms $\omega_{i^{\ast}j^{\ast}}$ by $R_{ijkl}$ and $R_{i^{\ast}j^{\ast}kl}$, respectively, then for $i, j, k, l, p, q=1, 2$, the equations of Gauss, Codazzi and Ricci are given by $$\label{2.1-2} R_{ijkl}=\sum_{p}(h^{p^{\ast}}_{ik}h^{p^{\ast}}_{jl}-h^{p^{\ast}}_{il}h^{p^{\ast}}_{jk}),$$ $$\label{2.1-3} R_{ik}=\sum_{p}H^{p^{\ast}}h^{p^{\ast}}_{ik}-\sum_{j,p}h^{p^{\ast}}_{ij}h^{p^{\ast}}_{jk},$$ $$\label{2.1-4} h^{p^{\ast}}_{ijk}=h^{p^{\ast}}_{ikj},$$ $$\label{2.1-5} R_{p^{\ast}q^{\ast}kl}=\sum_{i}(h^{p^{\ast}}_{ik}h^{q^{\ast}}_{il} -h^{p^{\ast}}_{il}h^{q^{\ast}}_{ik}),$$ $$\label{2.1-6} R=H^{2}-S.$$ From [\[2.1-1\]](#2.1-1){reference-type="eqref" reference="2.1-1"} and [\[2.1-4\]](#2.1-4){reference-type="eqref" reference="2.1-4"}, we easily see that the components $h^{p^{\ast}}_{ijk}$ are totally symmetric in $i, j, k$ and $p$. In particular, $$\label{2.1-7} h^{p^{\ast}}_{ijk}=h^{p^{\ast}}_{kji}=h^{i^{\ast}}_{pjk}, \ \ i, j, k, p=1, 2.$$ By making use of [\[2.1-1\]](#2.1-1){reference-type="eqref" reference="2.1-1"}, [\[2.1-2\]](#2.1-2){reference-type="eqref" reference="2.1-2"} and [\[2.1-5\]](#2.1-5){reference-type="eqref" reference="2.1-5"}, we obtain $$\label{2.1-8} R_{ijkl}=K(\delta_{ik}\delta_{jl}-\delta_{il}\delta_{jk})=R_{i^{\ast}j^{\ast}kl}, \ \ K=\frac{1}{2}(H^{2}-S),$$ where $K$ is the Gaussian curvature of $x$.
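As a quick consistency check of the Gauss equation and the relation $K=\frac{1}{2}(H^{2}-S)$ in this two-dimensional Lagrangian setting, one can evaluate both sides on random coefficients satisfying the symmetry (2.1-1). This is an illustration only, not part of the paper's argument:

```python
import random

# Independent coefficients of a fully symmetric second fundamental form (2.1-1):
# h^{1*}_{11}=a, h^{1*}_{12}=h^{2*}_{11}=b, h^{1*}_{22}=h^{2*}_{12}=c, h^{2*}_{22}=e
for _ in range(1000):
    a, b, c, e = (random.randint(-10, 10) for _ in range(4))
    H2 = (a + c) ** 2 + (b + e) ** 2           # H^2 = sum_p (h^p_11 + h^p_22)^2
    S = a * a + 3 * b * b + 3 * c * c + e * e  # squared norm of h
    R1212 = (a * c - b * b) + (b * e - c * c)  # Gauss equation (2.1-2)
    assert H2 - S == 2 * R1212                 # K = (H^2 - S)/2
```

The identity $H^{2}-S=2\sum_{p}\big(h^{p^{\ast}}_{11}h^{p^{\ast}}_{22}-(h^{p^{\ast}}_{12})^{2}\big)$ holds term by term, which the loop confirms numerically.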
By defining $$\sum_{l}h^{p^{\ast}}_{ijkl}\omega_{l}=dh^{p^{\ast}}_{ijk}+\sum_{l}h^{p^{\ast}}_{ljk}\omega_{li} +\sum_{l}h^{p^{\ast}}_{ilk}\omega_{lj}+\sum_{l} h^{p^{\ast}}_{ijl}\omega_{lk}+\sum_{q} h^{q^{\ast}}_{ijk}\omega_{q^{\ast}p^{\ast}},$$ and $$\begin{aligned} \sum_{m}h^{p^{\ast}}_{ijklm}\omega_{m}&=dh^{p^{\ast}}_{ijkl}+\sum_{m}h^{p^{\ast}}_{mjkl}\omega_{mi} +\sum_{m}h^{p^{\ast}}_{imkl}\omega_{mj}+\sum_{m}h^{p^{\ast}}_{ijml}\omega_{mk}\\ &\ \ +\sum_{m}h^{p^{\ast}}_{ijkm}\omega_{ml}+\sum_{m}h^{m^{\ast}}_{ijkl}\omega_{m^{\ast}p^{\ast}}, \end{aligned}$$ we have the following Ricci identities $$\label{2.1-9} h^{p^{\ast}}_{ijkl}-h^{p^{\ast}}_{ijlk}=\sum_{m} h^{p^{\ast}}_{mj}R_{mikl}+\sum_{m} h^{p^{\ast}}_{im}R_{mjkl}+\sum_{m} h^{m^{\ast}}_{ij}R_{m^{\ast}p^{\ast}kl},$$ and $$\label{2.1-10} \begin{aligned} h^{p^{\ast}}_{ijkln}-h^{p^{\ast}}_{ijknl}=&\sum_{m} h^{p^{\ast}}_{mjk}R_{miln} +\sum_{m}h^{p^{\ast}}_{imk}R_{mjln}+ \sum_{m}h^{p^{\ast}}_{ijm}R_{mkln}\\ &+\sum_{m}h^{m^{\ast}}_{ijk}R_{m^{\ast}p^{\ast}ln}. \end{aligned}$$ The $\mathcal{L}$-operator is defined by $$\mathcal{L}f=\Delta f+\langle x,\nabla f\rangle,$$ where $\Delta$ and $\nabla$ denote the Laplacian and the gradient operator, respectively. For the mean curvature vector field $\vec{H}=\sum_{p}H^{p^{\ast}}e_{p^{\ast}}$, we define $$\label{2.1-11} \begin{aligned} |\nabla^{\perp}\vec{H}|^{2}=\sum_{i,p}(H^{p^{\ast}}_{,i})^{2}, \ \ \Delta^{\perp}H^{p^{\ast}}=\sum_{i}H^{p^{\ast}}_{,ii}. \end{aligned}$$ We next suppose that $x: M^{2}\rightarrow\mathbb{C}^{2}$ is a Lagrangian self-expander, that is, $H^{p^{\ast}}=\langle x, e_{p^{\ast}}\rangle$. By a simple calculation, we have the following basic formulas: $$\label{2.1-12} \aligned H^{p^{\ast}}_{,i} =&-\sum_{k}h^{p^{\ast}}_{ik}\langle x, e_{k}\rangle, \\ H^{p^{\ast}}_{,ij} =&-\sum_{k}h^{p^{\ast}}_{ijk}\langle x, e_{k}\rangle-h^{p^{\ast}}_{ij}-\sum_{k,q}h^{p^{\ast}}_{ik}h^{q^{\ast}}_{kj}H^{q^{\ast}}, \ \ i,j,p=1,2.
\endaligned$$ Using the above formulas and the Ricci identities, we can get the following lemmas; the computations are similar to those in [@CHW].

**Lemma 1**. *Let $x:M^{2}\rightarrow \mathbb{C}^{2}$ be a $2$-dimensional complete Lagrangian self-expander. We have $$\label{2.1-13} \aligned \frac{1}{2}\Delta H^{2}=&|\nabla^{\perp} \vec{H}|^{2}-\frac{1}{2}\langle x, \nabla H^{2}\rangle-H^{2}-\sum_{i,j,p,q}H^{p^{\ast}}h^{p^{\ast}}_{ij}H^{q^{\ast}}h^{q^{\ast}}_{ij}, \endaligned$$ $$\label{2.1-14} \frac{1}{2}\mathcal{L}S =\sum_{i,j,k,p}(h^{p^{\ast}}_{ijk})^{2}-S(\frac{3}{2}S+1)+2H^{2}S-\frac{1}{2}H^{4} -\sum_{i,j,p,q}H^{p^{\ast}}h^{p^{\ast}}_{ij}H^{q^{\ast}}h^{q^{\ast}}_{ij}.$$*

**Lemma 2**. *Let $x:M^{2}\rightarrow \mathbb{C}^{2}$ be a $2$-dimensional complete Lagrangian self-expander. If $S$ is constant, we infer that $$\label{2.1-15} \aligned &\frac{1}{2}\mathcal{L}\sum_{i,j,k,p}(h^{p^{\ast}}_{ijk})^{2}\\ =&\sum_{i,j,k,l,p}(h^{p^{\ast}}_{ijkl})^{2}+(10K-2)\sum_{i,j,k,p}(h^{p^{\ast}}_{ijk})^{2}-5K|\nabla^{\perp} \vec{H}|^{2}-\langle\nabla K, \nabla H^{2}\rangle\\ &-3\sum_{i,l,p}K_{,i}H^{p^{\ast}}_{,l}h^{p^{\ast}}_{il} -2\sum_{i,j,k,l,p,q}h^{p^{\ast}}_{ijk}h^{p^{\ast}}_{ijl}h^{q^{\ast}}_{kl}H^{q^{\ast}} -\sum_{i,j,k,l,p,q}h^{p^{\ast}}_{ijk}h^{p^{\ast}}_{il}h^{q^{\ast}}_{jl}H^{q^{\ast}}_{,k}\\ &-\sum_{i,j,k,l,p,q}h^{p^{\ast}}_{il}h^{p^{\ast}}_{ijk}h^{q^{\ast}}_{jkl}H^{q^{\ast}} \endaligned$$ and $$\label{2.1-16} \aligned &\frac{1}{2}\mathcal{L}\sum_{i,j,k,p}(h^{p^{\ast}}_{ijk})^{2}\\ =&(H^{2}-2S)\Big(|\nabla^{\perp} \vec{H}|^{2}-H^{2}\Big)+\frac{1}{2}|\nabla H^{2}|^{2}\\ &+(3K-2-H^{2}+2S)\sum_{j,k,p,q}H^{p^{\ast}}h^{p^{\ast}}_{jk}H^{q^{\ast}}h^{q^{\ast}}_{jk}\\ &-K\Big(H^{4}+\sum_{j,k,p}h^{p^{\ast}}_{jk}H^{p^{\ast}}H^{j^{\ast}}H^{k^{\ast}} \Big) -\sum_{i,j,k,l,p,q,m}H^{p^{\ast}}h^{p^{\ast}}_{ij}h^{q^{\ast}}_{ij}h^{q^{\ast}}_{kl}h^{m^{\ast}}_{kl}H^{m^{\ast}}\\ &-\sum_{j,k,l,p,q,m}H^{p^{\ast}}h^{p^{\ast}}_{jk}H^{q^{\ast}}h^{q^{\ast}}_{jl}H^{m^{\ast}}h^{m^{\ast}}_{kl}
+2\sum_{i,j,k,p,q}H^{p^{\ast}}_{,i}h^{p^{\ast}}_{ijk}h^{q^{\ast}}_{jk}H^{q^{\ast}}\\ &+\sum_{i,j,k}\Big(\sum_{p}(H^{p^{\ast}}_{,i}h^{p^{\ast}}_{jk}+H^{p^{\ast}}h^{p^{\ast}}_{ijk})\Big) \Big(\sum_{q}(H^{q^{\ast}}_{,i}h^{q^{\ast}}_{jk}+H^{q^{\ast}}h^{q^{\ast}}_{ijk})\Big). \endaligned$$*

The following generalized maximum principle due to Omori [@O] and Yau [@Y] will be used in this paper.

**Lemma 3**. *Let $(M^{n},g)$ be a complete Riemannian manifold with Ricci curvature bounded from below. For a $C^{2}$-function $f$ bounded from above, there exists a sequence of points $\{p_{t}\}\subset M^{n}$ such that $$\lim_{t\rightarrow\infty} f(p_{t})=\sup f,\quad \lim_{t\rightarrow\infty} |\nabla f|(p_{t})=0,\quad \limsup_{t\rightarrow\infty}\Delta f(p_{t})\leq 0.$$*

# Proof of Theorem 1.1

First of all, using [\[2.1-12\]](#2.1-12){reference-type="eqref" reference="2.1-12"}--[\[2.1-16\]](#2.1-16){reference-type="eqref" reference="2.1-16"}, we obtain the following proposition.

**Proposition 1**. *Let $x:M^{2}\rightarrow \mathbb{C}^{2}$ be a self-expander with constant squared norm $S$ of the second fundamental form. Then $H^{2}\equiv0$.*

*Proof.* Since $S$ is constant, taking the derivative of $S$ and using [\[2.1-14\]](#2.1-14){reference-type="eqref" reference="2.1-14"}, we obtain $$\label{3.1-1} \begin{aligned} &\sum_{i,j,p}h^{p^{\ast}}_{ij}h^{p^{\ast}}_{ijk}=0, \ \ \sum_{i,j,p}h^{p^{\ast}}_{ij}h^{p^{\ast}}_{ijkl} +\sum_{i,j,p}h^{p^{\ast}}_{ijk}h^{p^{\ast}}_{ijl}=0, \ \ k,l=1, 2, \\ &\sum_{i,j,k,p}(h^{p^{\ast}}_{ijk})^{2}=S(\frac{3}{2}S+1)-2H^{2}S+\frac{1}{2}H^{4} +\sum_{i,j,p,q}H^{p^{\ast}}h^{p^{\ast}}_{ij}H^{q^{\ast}}h^{q^{\ast}}_{ij}. \end{aligned}$$ If $\vec{H}=\sum_{p}H^{p^{\ast}}e_{p^{\ast}}=0$ at $p\in M^{2}$, then $H^{2}=0\leq S$ at $p$.
If $\vec{H}=\sum_{p}H^{p^{\ast}}e_{p^{\ast}}\neq 0$ at $p\in M^{2}$, we choose a local frame field $\{e_{1}, e_{2}\}$ such that $$\vec{H}=H^{1^{\ast}}e_{1^{\ast}}, \ \ H^{1^{\ast}}=|\vec{H}|=H, \ \ H^{2^{\ast}}=h^{2^{\ast}}_{11}+h^{2^{\ast}}_{22}=0.$$ Besides, denote $h^{1^{\ast}}_{11}$, $h^{1^{\ast}}_{22}$ and $h^{2^{\ast}}_{11}$ by $\lambda_{1}$, $\lambda_{2}$ and $\lambda$, respectively; then $$S=\lambda^{2}_{1}+3\lambda^{2}_{2}+4\lambda^{2}, \ \ H^{2}=(\lambda_{1}+\lambda_{2})^{2}\leq \frac{4}{3}(\lambda^{2}_{1}+3\lambda^{2}_{2})\leq \frac{4}{3}S,$$ with equality in the above inequality if and only if $$\lambda_{1}=3\lambda_{2}, \ \ \lambda=0.$$ Thus, $H^{2}$ is bounded from above since $S$ is constant. From the Gauss equations, we know that the Ricci curvature of $x:M^{2}\rightarrow \mathbb{C}^{2}$ is bounded from below. By applying the generalized maximum principle of Omori and Yau to the function $H^{2}$, there exists a sequence $\{p_{t}\} \subset M^{2}$ such that $$\lim_{t\rightarrow\infty} H^{2}(p_{t})=\sup H^{2},\quad \lim_{t\rightarrow\infty} |\nabla H^{2}|(p_{t})=0,\quad \limsup_{t\rightarrow\infty}\Delta H^{2}(p_{t})\leq 0.$$ Since $S$ is constant, by use of [\[2.1-14\]](#2.1-14){reference-type="eqref" reference="2.1-14"}, [\[2.1-15\]](#2.1-15){reference-type="eqref" reference="2.1-15"} and [\[2.1-16\]](#2.1-16){reference-type="eqref" reference="2.1-16"}, we know that $\{h^{p^{\ast}}_{ij}(p_{t})\}$, $\{h^{p^{\ast}}_{ijk}(p_{t})\}$ and $\{h^{p^{\ast}}_{ijkl}(p_{t})\}$ are bounded sequences for $i, j, k, l, p = 1,2$. One can assume $$\begin{aligned} &\lim_{t\rightarrow\infty}H^{2}(p_{t})=\sup H^{2}=\bar H^{2}, \ \ \lim_{t\rightarrow\infty}h^{p^{\ast}}_{ij}(p_{t})=\bar h^{p^{\ast}}_{ij}, \ \ \lim_{t\rightarrow\infty}h^{p^{\ast}}_{ijk}(p_{t})=\bar h^{p^{\ast}}_{ijk}, \\ &\lim_{t\rightarrow\infty}h^{p^{\ast}}_{ijkl}(p_{t})=\bar h^{p^{\ast}}_{ijkl}, \ \ i, j, k, l, p=1, 2.
\end{aligned}$$ Then it follows from [\[2.1-13\]](#2.1-13){reference-type="eqref" reference="2.1-13"}, the third equation of [\[3.1-1\]](#3.1-1){reference-type="eqref" reference="3.1-1"} and $\lim_{t\rightarrow\infty} |\nabla H^{2}(p_{t})|=0$ that $$\label{3.1-2} \aligned 0\geq&\lim_{t\rightarrow\infty}|\nabla^{\perp}\vec{H}|^{2}(p_{t})-\frac{1}{2}\lim_{t\rightarrow\infty}\langle x, \nabla H^{2}\rangle(p_{t})-\bar H^{2}-\sum_{i,j,p,q}\bar H^{p^{\ast}}\bar h^{p^{\ast}}_{ij}\bar H^{q^{\ast}}\bar h^{q^{\ast}}_{ij} \\ =&\lim_{t\rightarrow\infty}|\nabla^{\perp}\vec{H}|^{2}(p_{t})-\sum_{i,j,k,p}(\bar h^{p^{\ast}}_{ijk})^{2}+\frac{1}{2}(\bar H^{2}-S)(\bar H^{2}-3S-2). \endaligned$$ If $\sup H^{2}=0$, then $H^{2}\equiv 0$. From now on, we only consider the case $\sup H^{2}>0$; we will show that this case cannot occur. Without loss of generality, at each point $p_{t}$ we can assume $H^{2}(p_{t})\neq 0$. Since $\lim_{t\rightarrow\infty} |\nabla H^{2}(p_{t})|=0$ and $|\nabla H^{2}|^{2}=4\sum_{k}(\sum_{p}H^{p^{\ast}}H^{p^{\ast}}_{,k})^{2}$, we can see that $$\label{3.1-3} \bar H^{1^{\ast}}_{,k}=0, \ \bar h^{1^{\ast}}_{11k}+\bar h^{1^{\ast}}_{22k}=0, \ \ k=1, 2.$$ It follows from the first formula of [\[2.1-12\]](#2.1-12){reference-type="eqref" reference="2.1-12"} and $h^{2^{\ast}}_{11}+h^{2^{\ast}}_{22}=0$ that $$\label{3.1-4} \begin{aligned} &\bar H^{1^{\ast}}_{,1}=-\bar \lambda_{1}\lim_{t\rightarrow\infty} \langle x, e_{1} \rangle(p_{t})-\bar \lambda\lim_{t\rightarrow\infty} \langle x, e_{2} \rangle(p_{t}), \\ &\bar H^{1^{\ast}}_{,2}=-\bar \lambda\lim_{t\rightarrow\infty} \langle x, e_{1} \rangle(p_{t})-\bar \lambda_{2}\lim_{t\rightarrow\infty} \langle x, e_{2} \rangle(p_{t}), \\ \end{aligned}$$ and $$\label{3.1-5} \begin{aligned} &\bar H^{2^{\ast}}_{,1}=-\bar \lambda\lim_{t\rightarrow\infty} \langle x, e_{1} \rangle(p_{t})-\bar \lambda_{2}\lim_{t\rightarrow\infty} \langle x, e_{2} \rangle(p_{t}), \\ &\bar H^{2^{\ast}}_{,2}=-\bar \lambda_{2}\lim_{t\rightarrow\infty} \langle x,
e_{1} \rangle(p_{t})+\bar \lambda\lim_{t\rightarrow\infty} \langle x, e_{2} \rangle(p_{t}). \end{aligned}$$ By making use of the first equation of [\[3.1-1\]](#3.1-1){reference-type="eqref" reference="3.1-1"} and [\[3.1-3\]](#3.1-3){reference-type="eqref" reference="3.1-3"}, we obtain $$\label{3.1-6} (\bar \lambda_{1}-3\bar \lambda_{2})\bar h^{1^{\ast}}_{11k}+3\bar \lambda\bar h^{2^{\ast}}_{11k}-\bar \lambda\bar h^{2^{\ast}}_{22k}=0, \ \ k=1, 2.$$ By the above formulas, we can obtain the following claim: $$\bar \lambda=0.$$ In fact, we first assume that $\bar \lambda\neq0$ and then deduce a contradiction. By [\[3.1-3\]](#3.1-3){reference-type="eqref" reference="3.1-3"}, [\[3.1-6\]](#3.1-6){reference-type="eqref" reference="3.1-6"} and the symmetry of the indices, we infer $$\label{3.1-7} (\bar \lambda_{1}-3\bar \lambda_{2})\bar h^{1^{\ast}}_{111}+4\bar \lambda\bar h^{2^{\ast}}_{111}=0, \ \ -3\bar \lambda\bar h^{1^{\ast}}_{111}-\bar \lambda\bar h^{2^{\ast}}_{222}+(\bar \lambda_{1}-3\bar \lambda_{2})\bar h^{2^{\ast}}_{111}=0.$$ [\[3.1-3\]](#3.1-3){reference-type="eqref" reference="3.1-3"} and [\[3.1-4\]](#3.1-4){reference-type="eqref" reference="3.1-4"} imply that $$\label{3.1-8} (\bar \lambda_{1}\bar \lambda_{2}-\bar \lambda^{2})\lim_{t\rightarrow\infty} \langle x, e_{1} \rangle(p_{t})=0, \ \ (\bar \lambda_{1}\bar \lambda_{2}-\bar \lambda^{2})\lim_{t\rightarrow\infty} \langle x, e_{2} \rangle(p_{t})=0.$$ The proof of the claim is divided into two cases, according to whether $\bar \lambda_{1}\bar \lambda_{2}-\bar \lambda^{2}$ vanishes.
*Case 1: $\bar \lambda_{1}\bar \lambda_{2}-\bar \lambda^{2}\neq0$.* From [\[3.1-8\]](#3.1-8){reference-type="eqref" reference="3.1-8"}, we can see that $$\lim_{t\rightarrow\infty} \langle x, e_{1} \rangle(p_{t})=\lim_{t\rightarrow\infty} \langle x, e_{2} \rangle(p_{t})=0.$$ This together with [\[3.1-5\]](#3.1-5){reference-type="eqref" reference="3.1-5"} implies that $$\label{3.1-9} \bar H^{2^{\ast}}_{,k}=0, \ \bar h^{2^{\ast}}_{11k}+\bar h^{2^{\ast}}_{22k}=0, \ \ k=1, 2.$$ Combining [\[3.1-3\]](#3.1-3){reference-type="eqref" reference="3.1-3"}, [\[3.1-7\]](#3.1-7){reference-type="eqref" reference="3.1-7"} and [\[3.1-9\]](#3.1-9){reference-type="eqref" reference="3.1-9"}, we obtain $$(\bar \lambda_{1}-3\bar \lambda_{2})\bar h^{1^{\ast}}_{111}+4\bar \lambda\bar h^{2^{\ast}}_{111}=0, \ \ -4\bar \lambda\bar h^{1^{\ast}}_{111}+(\bar \lambda_{1}-3\bar \lambda_{2})\bar h^{2^{\ast}}_{111}=0.$$ Thus, $$\bar h^{1^{\ast}}_{111}=\bar h^{2^{\ast}}_{111}=0$$ since $(\bar \lambda_{1}-3\bar \lambda_{2})^{2}+16\bar \lambda^{2}\neq0$. Then [\[3.1-3\]](#3.1-3){reference-type="eqref" reference="3.1-3"} and [\[3.1-9\]](#3.1-9){reference-type="eqref" reference="3.1-9"} give that $$\bar h^{p^{\ast}}_{ijk}=0, \ \ i,j,k,p=1,2.$$ Consequently, by taking the limit of the third equation of [\[3.1-1\]](#3.1-1){reference-type="eqref" reference="3.1-1"}, we infer $$\begin{aligned} 0&=S(\frac{3}{2}S+1)-2\bar H^{2}S+\frac{1}{2}\bar H^{4} +\sum_{i,j,p,q}\bar H^{p^{\ast}}\bar h^{p^{\ast}}_{ij}\bar H^{q^{\ast}}\bar h^{q^{\ast}}_{ij}\\ &=S(\frac{3}{2}S+1)-2\bar H^{2}S+\frac{1}{2}\bar H^{4} +\bar H^{2}(\bar \lambda^{2}_{1}+\bar \lambda^{2}_{2}+2\bar \lambda^{2})\\ &=S(\frac{1}{2}S+1)+(S-\bar H^{2})^{2}+\frac{1}{2}\bar H^{2}(\bar \lambda_{1}-\bar \lambda_{2})^{2} +2\bar H^{2}\bar \lambda^{2}. \end{aligned}$$ Since every term in the last line is nonnegative, we get $$S=\bar H^{2}=0,$$ which contradicts the assumption that $\sup H^{2}>0$.
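The completed-square rearrangement in the last display above is purely algebraic. The sketch below (my own illustration, not part of the proof) checks it exactly at random integer values, using $\bar H^{2}=(\bar\lambda_{1}+\bar\lambda_{2})^{2}$ while treating $S$ as a free variable:

```python
import random
from fractions import Fraction

half = Fraction(1, 2)
for _ in range(1000):
    l1, l2, lam = (random.randint(-9, 9) for _ in range(3))
    S = random.randint(0, 99)        # S enters the identity as a free variable
    H2 = (l1 + l2) ** 2              # \bar H^2 = (\lambda_1 + \lambda_2)^2
    lhs = S * (3 * half * S + 1) - 2 * H2 * S + half * H2 ** 2 \
        + H2 * (l1 * l1 + l2 * l2 + 2 * lam * lam)
    rhs = S * (half * S + 1) + (S - H2) ** 2 \
        + half * H2 * (l1 - l2) ** 2 + 2 * H2 * lam * lam
    assert lhs == rhs
```

Exact `Fraction` arithmetic avoids any floating-point rounding; both sides are degree-four polynomials, so agreement over many random points strongly supports the identity.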
In the remaining case $\bar \lambda_{1}\bar \lambda_{2}-\bar \lambda^{2}=0$, [\[3.1-7\]](#3.1-7){reference-type="eqref" reference="3.1-7"} is equivalent to $$\label{3.1-10} (\bar \lambda_{1}+3\bar \lambda_{2})^{2}\bar h^{1^{\ast}}_{111}=-4\bar \lambda^{2}\bar h^{2^{\ast}}_{222}, \ \ (\bar \lambda_{1}+3\bar \lambda_{2})^{2}\bar h^{2^{\ast}}_{111}=\bar \lambda(\bar \lambda_{1}-3\bar \lambda_{2})\bar h^{2^{\ast}}_{222}.$$ Suppose first that $\bar \lambda_{1}+3\bar \lambda_{2}=0$. Then $\bar h^{2^{\ast}}_{222}=0$, and [\[3.1-7\]](#3.1-7){reference-type="eqref" reference="3.1-7"} yields $$-3\bar \lambda_{2}\bar h^{1^{\ast}}_{111}+2\bar \lambda\bar h^{2^{\ast}}_{111}=0, \ \ -\bar \lambda\bar h^{1^{\ast}}_{111}-2\bar \lambda_{2}\bar h^{2^{\ast}}_{111}=0.$$ Then $\bar h^{1^{\ast}}_{111}=\bar h^{2^{\ast}}_{111}=0$. That is, $$\lim_{t\rightarrow\infty}|\nabla^{\perp}\vec{H}|^{2}(p_{t})=0, \ \ \bar h^{p^{\ast}}_{ijk}=0, \ \ i,j,k,p=1,2.$$ It follows from [\[3.1-2\]](#3.1-2){reference-type="eqref" reference="3.1-2"} that $$(\bar H^{2}-S)(\bar H^{2}-3S-2)\leq0.$$ This is a contradiction since $\bar H^{2}-S=-8\bar \lambda^{2}_{2}-4\bar \lambda^{2}<0$. 
Suppose instead that $\bar \lambda_{1}+3\bar \lambda_{2}\neq0$. Then [\[3.1-10\]](#3.1-10){reference-type="eqref" reference="3.1-10"} implies that $$\bar h^{1^{\ast}}_{111}=-\bar h^{1^{\ast}}_{221}=-\frac{4\bar \lambda^{2}}{(\bar \lambda_{1}+3\bar \lambda_{2})^{2}}\bar h^{2^{\ast}}_{222}, \ \ \bar h^{2^{\ast}}_{111}=-\bar h^{1^{\ast}}_{222}=\frac{\bar \lambda(\bar \lambda_{1}-3\bar \lambda_{2})}{(\bar \lambda_{1}+3\bar \lambda_{2})^{2}}\bar h^{2^{\ast}}_{222}.$$ Using [\[3.1-3\]](#3.1-3){reference-type="eqref" reference="3.1-3"}, [\[3.1-4\]](#3.1-4){reference-type="eqref" reference="3.1-4"} and [\[3.1-5\]](#3.1-5){reference-type="eqref" reference="3.1-5"}, by a direct calculation, we have $$\begin{aligned} &\lim_{t\rightarrow\infty}|\nabla^{\perp}\vec{H}|^{2}(p_{t})=(\bar H^{2^{\ast}}_{,2})^{2}=(\bar h^{2^{\ast}}_{112}+\bar h^{2^{\ast}}_{222})^{2}=\frac{(\bar \lambda^{2}_{1}+9\bar \lambda^{2}_{2}+10\bar \lambda^{2})^{2}}{(\bar \lambda_{1}+3\bar \lambda_{2})^{4}}(\bar h^{2^{\ast}}_{222})^{2}, \\ &\sum_{i,j,k,p}(\bar h^{p^{\ast}}_{ijk})^{2}=7(\bar h^{1^{\ast}}_{111})^{2}+8(\bar h^{2^{\ast}}_{111})^{2}+(\bar h^{2^{\ast}}_{222})^{2}=\frac{(\bar \lambda^{2}_{1}+9\bar \lambda^{2}_{2}+10\bar \lambda^{2})^{2}}{(\bar \lambda_{1}+3\bar \lambda_{2})^{4}}(\bar h^{2^{\ast}}_{222})^{2}. \end{aligned}$$ Then it follows from [\[3.1-2\]](#3.1-2){reference-type="eqref" reference="3.1-2"} that $$(\bar H^{2}-S)(\bar H^{2}-3S-2)\leq0.$$ This is a contradiction since $\bar H^{2}-S=-2\bar \lambda^{2}_{2}-2\bar \lambda^{2}<0$. So the claim is proved. We now use $\bar \lambda =0$ to complete the proof of Proposition [Proposition 1](#proposition 3.1){reference-type="ref" reference="proposition 3.1"}. 
For $\bar \lambda =0$, [\[3.1-4\]](#3.1-4){reference-type="eqref" reference="3.1-4"} implies $$\label{3.1-11} \bar \lambda_{1}\lim_{t\rightarrow\infty} \langle x, e_{1} \rangle(p_{t})=0, \ \ \bar \lambda_{2}\lim_{t\rightarrow\infty} \langle x, e_{2} \rangle(p_{t})=0.$$ If $\bar \lambda_{1}=0$, we know that $S=3\bar H^{2}=3\bar \lambda^{2}_{2}$ and $\bar \lambda_{2}\neq0$ since $\bar H^{2}\neq0$. By making use of [\[3.1-3\]](#3.1-3){reference-type="eqref" reference="3.1-3"} and [\[3.1-6\]](#3.1-6){reference-type="eqref" reference="3.1-6"}, we infer $$\bar h^{1^{\ast}}_{11k}=\bar h^{1^{\ast}}_{22k}=0, \ \ k=1,2.$$ Namely, $$\bar h^{1^{\ast}}_{111}=\bar h^{1^{\ast}}_{112}=\bar h^{1^{\ast}}_{221}=\bar h^{1^{\ast}}_{222}=0,$$ and $$\lim_{t\rightarrow\infty}|\nabla^{\perp}\vec{H}|^{2}(p_{t})=\sum_{i,j,k,p}(\bar h^{p^{\ast}}_{ijk})^{2}=(\bar h^{2^{\ast}}_{222})^{2}.$$ Then it follows from [\[3.1-2\]](#3.1-2){reference-type="eqref" reference="3.1-2"} that $$(\bar H^{2}-S)(\bar H^{2}-3S-2)\leq0.$$ This is a contradiction since $\bar H^{2}-S=-2\bar \lambda^{2}_{2}<0$. If $\bar \lambda_{2}=0$, we infer that $S=\bar H^{2}=\bar \lambda^{2}_{1}$ and $\bar \lambda_{1}\neq0$ since $\bar H^{2}\neq0$. From [\[3.1-3\]](#3.1-3){reference-type="eqref" reference="3.1-3"}, [\[3.1-5\]](#3.1-5){reference-type="eqref" reference="3.1-5"} and [\[3.1-6\]](#3.1-6){reference-type="eqref" reference="3.1-6"}, we get $$\bar H^{2^{\ast}}_{,1}=\bar H^{2^{\ast}}_{,2}=0, \ \ \bar h^{1^{\ast}}_{11k}=\bar h^{1^{\ast}}_{22k}=0, \ \ k=1,2.$$ Then we conclude that $$\bar h^{p^{\ast}}_{ijk}=0, \ \ i,j,k,p=1,2.$$ By taking the limit of the third equation of [\[3.1-1\]](#3.1-1){reference-type="eqref" reference="3.1-1"}, we infer $$\begin{aligned} 0&=S(\frac{3}{2}S+1)-2\bar H^{2}S+\frac{1}{2}\bar H^{4} +\sum_{i,j,p,q}\bar H^{p^{\ast}}\bar h^{p^{\ast}}_{ij}\bar H^{q^{\ast}}\bar h^{q^{\ast}}_{ij}\\ &=S(\frac{3}{2}S+1)-2\bar H^{2}S+\frac{1}{2}\bar H^{4} +\bar H^{2}\bar \lambda^{2}_{1}\\ &=S(S+1). 
\end{aligned}$$ Thus, $S=\bar H^{2}=0$. This contradicts the hypothesis. If $\bar \lambda_{1}\bar \lambda_{2}\neq0$, then [\[3.1-11\]](#3.1-11){reference-type="eqref" reference="3.1-11"} immediately gives $$\label{3.1-12} \lim_{t\rightarrow\infty} \langle x, e_{1} \rangle(p_{t})=0, \ \ \lim_{t\rightarrow\infty} \langle x, e_{2} \rangle(p_{t})=0.$$ By making use of [\[3.1-3\]](#3.1-3){reference-type="eqref" reference="3.1-3"}, [\[3.1-5\]](#3.1-5){reference-type="eqref" reference="3.1-5"}, [\[3.1-6\]](#3.1-6){reference-type="eqref" reference="3.1-6"} and [\[3.1-12\]](#3.1-12){reference-type="eqref" reference="3.1-12"}, we obtain $$\label{3.1-13} \bar H^{p^{\ast}}_{,k}=\bar h^{p^{\ast}}_{11k}+\bar h^{p^{\ast}}_{22k}=0, \ \ k,p=1,2,$$ and $$\label{3.1-14} (\bar \lambda_{1}-3\bar \lambda_{2})\bar h^{1^{\ast}}_{11k}=0, \ \ k=1,2.$$ Suppose $\bar \lambda_{1}-3\bar \lambda_{2}\neq0$. Then from [\[3.1-13\]](#3.1-13){reference-type="eqref" reference="3.1-13"} and [\[3.1-14\]](#3.1-14){reference-type="eqref" reference="3.1-14"} we have $$\bar h^{p^{\ast}}_{ijk}=0, \ \ i,j,k,p=1,2.$$ By taking the limit of the third equation of [\[3.1-1\]](#3.1-1){reference-type="eqref" reference="3.1-1"}, we infer $$\begin{aligned} 0&=S(\frac{3}{2}S+1)-2\bar H^{2}S+\frac{1}{2}\bar H^{4} +\sum_{i,j,p,q}\bar H^{p^{\ast}}\bar h^{p^{\ast}}_{ij}\bar H^{q^{\ast}}\bar h^{q^{\ast}}_{ij}\\ &=S(\frac{3}{2}S+1)-2\bar H^{2}S+\frac{1}{2}\bar H^{4} +\bar H^{2}(\bar \lambda^{2}_{1}+\bar \lambda^{2}_{2})\\ &=S(\frac{1}{2}S+1)+(S-\bar H^{2})^{2}+\frac{1}{2}\bar H^{2}(\bar \lambda_{1}-\bar \lambda_{2})^{2}. \end{aligned}$$ Then $S=\bar H^{2}=0$. This contradicts the hypothesis. 
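For the reader's convenience, we note that the rearrangement in the last display is a purely algebraic identity, using only $\bar H^{2}=(\bar \lambda_{1}+\bar \lambda_{2})^{2}$ (valid here since $\bar \lambda=0$). Indeed, $$\begin{aligned} &S(\tfrac{3}{2}S+1)-2\bar H^{2}S+\tfrac{1}{2}\bar H^{4}+\bar H^{2}(\bar \lambda^{2}_{1}+\bar \lambda^{2}_{2}) -\Big[S(\tfrac{1}{2}S+1)+(S-\bar H^{2})^{2}+\tfrac{1}{2}\bar H^{2}(\bar \lambda_{1}-\bar \lambda_{2})^{2}\Big]\\ &\qquad=-\tfrac{1}{2}\bar H^{4}+\bar H^{2}(\bar \lambda^{2}_{1}+\bar \lambda^{2}_{2})-\tfrac{1}{2}\bar H^{2}(\bar \lambda_{1}-\bar \lambda_{2})^{2}\\ &\qquad=\tfrac{1}{2}\bar H^{2}\Big[2(\bar \lambda^{2}_{1}+\bar \lambda^{2}_{2})-(\bar \lambda_{1}+\bar \lambda_{2})^{2}-(\bar \lambda_{1}-\bar \lambda_{2})^{2}\Big]=0. \end{aligned}$$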
Suppose now that $\bar \lambda_{1}-3\bar \lambda_{2}=0$. Then we know that $$\label{3.1-15} \bar H=\frac{4}{3}\bar \lambda_{1}, \ \ \bar H^{2}=\frac{16}{9}\bar \lambda^{2}_{1}, \ \ S=\frac{4}{3}\bar \lambda^{2}_{1}, \ \ \bar H^{2}=\frac{4}{3}S.$$ Taking the limit of the third equation of [\[3.1-1\]](#3.1-1){reference-type="eqref" reference="3.1-1"} and by a direct calculation, we can see that $$\sum_{i,j,k,p}(\bar h^{p^{\ast}}_{ijk})^{2}=\frac{5}{6}S^{2}+S.$$ In addition, using [\[3.1-13\]](#3.1-13){reference-type="eqref" reference="3.1-13"} and the symmetry of indices, we infer $$\sum_{i,j,p}(\bar h^{p^{\ast}}_{ij1})^{2}=4\Big((\bar h^{1^{\ast}}_{111})^{2}+(\bar h^{1^{\ast}}_{112})^{2}\Big), \ \ \sum_{i,j,k,p}(\bar h^{p^{\ast}}_{ijk})^{2}=8\Big((\bar h^{1^{\ast}}_{111})^{2}+(\bar h^{1^{\ast}}_{112})^{2}\Big).$$ Then $$\label{3.1-16} \sum_{i,j,p}(\bar h^{p^{\ast}}_{ij1})^{2}=\frac{1}{2}\sum_{i,j,k,p}(\bar h^{p^{\ast}}_{ijk})^{2}=\frac{5}{12}S^{2}+\frac{1}{2}S.$$ Taking the limit of the second equation of [\[3.1-1\]](#3.1-1){reference-type="eqref" reference="3.1-1"} and choosing $k=l=1$, we obtain $$\bar h^{1^{\ast}}_{11}\bar h^{1^{\ast}}_{1111}+3\bar h^{1^{\ast}}_{22}\bar h^{1^{\ast}}_{2211} +3\bar h^{2^{\ast}}_{11}\bar h^{2^{\ast}}_{1111}+\bar h^{2^{\ast}}_{22}\bar h^{2^{\ast}}_{2222}=-\sum_{i,j,p}(\bar h^{p^{\ast}}_{ij1})^{2},$$ namely, $$\label{3.1-17} \bar \lambda_{1}\bar h^{1^{\ast}}_{1111}+3\bar \lambda_{2}\bar h^{1^{\ast}}_{2211} =-\sum_{i,j,p}(\bar h^{p^{\ast}}_{ij1})^{2}.$$ It follows from $\bar \lambda_{1}=3\bar \lambda_{2}$, [\[3.1-16\]](#3.1-16){reference-type="eqref" reference="3.1-16"} and [\[3.1-17\]](#3.1-17){reference-type="eqref" reference="3.1-17"} that $$\label{3.1-18} \bar \lambda_{1}(\bar h^{1^{\ast}}_{1111}+\bar h^{1^{\ast}}_{2211}) =-(\frac{5}{12}S^{2}+\frac{1}{2}S).$$ By means of the second equation of [\[2.1-12\]](#2.1-12){reference-type="eqref" reference="2.1-12"}, [\[3.1-12\]](#3.1-12){reference-type="eqref" reference="3.1-12"} and 
[\[3.1-15\]](#3.1-15){reference-type="eqref" reference="3.1-15"} and choosing $i=j=p=1$, we know $$\begin{aligned} \bar H^{1^{\ast}}_{,11} =&-\bar h^{1^{\ast}}_{11}-\sum_{k}\bar h^{1^{\ast}}_{1k}\bar h^{1^{\ast}}_{k1}\bar H^{1^{\ast}}\\ =&-\bar \lambda_{1}-\bar \lambda^{2}_{1}\bar H=-\frac{3}{4}\bar H(S+1), \end{aligned}$$ which implies that $$\label{3.1-19} \bar h^{1^{\ast}}_{1111}+\bar h^{1^{\ast}}_{2211}=-\frac{3}{4}\bar H(S+1).$$ Using [\[3.1-15\]](#3.1-15){reference-type="eqref" reference="3.1-15"}, [\[3.1-18\]](#3.1-18){reference-type="eqref" reference="3.1-18"} and [\[3.1-19\]](#3.1-19){reference-type="eqref" reference="3.1-19"}, we infer $$-\frac{3}{4}\bar \lambda_{1}\bar H(S+1)=-\frac{3}{4}S(S+1)=-(\frac{5}{12}S^{2}+\frac{1}{2}S).$$ Then $S=\bar H^{2}=0$. This contradicts the hypothesis. ◻ *Proof of Theorem [Theorem 1](#theorem 1.1){reference-type="ref" reference="theorem 1.1"}*. If $S$ is constant, from Proposition [Proposition 1](#proposition 3.1){reference-type="ref" reference="proposition 3.1"}, we have $H^{2}\equiv0$, which implies that $M^{2}$ is a plane $\mathbb{R}^{2}$ through the origin. In fact, it follows from the definition of Lagrangian self-expander and the first equation of [\[2.1-12\]](#2.1-12){reference-type="eqref" reference="2.1-12"} that $$\langle x, e_{p^{\ast}}\rangle=0, \ \ \sum_{k}h^{p^{\ast}}_{ik}\langle x, e_{k}\rangle=0, \ \ i,p=1,2.$$ Suppose $\langle x, e_{k}\rangle=0$. Then $$0=\langle x, e_{k}\rangle_{,k}=1+\sum_{p}h^{p^{\ast}}_{kk}\langle x, e_{p^{\ast}}\rangle=1.$$ This is impossible. Hence $$\langle x, e_{k}\rangle\neq0, \ \ h^{p^{\ast}}_{ik}=0, \ \ i,k,p=1,2.$$ So the main theorem of the present paper (Theorem [Theorem 2](#theorem 1.2){reference-type="ref" reference="theorem 1.2"}) is proved. $\square$ The first author was partially supported by the China Postdoctoral Science Foundation Grant No.2022M711074. 
The second author was partly supported by grant No.12171164 of NSFC, GDUPS (2018), and Guangdong Natural Science Foundation Grant No.2023A1515010510.
--- abstract: | Let $\mu$ and $\nu$ be compactly supported probability measures on the real line with densities with respect to Lebesgue measure. We show that for all large real $z$, if $\mu \boxplus \nu$ is their additive free convolution, we have $$\int_{-\infty}^\infty \log(z - x) \mu \boxplus \nu (\mathrm{d}x) = \sup_{\Pi} \left\{ \mathbb{E}_\Pi[\log(z - (X+Y))] - \mathcal{E}[\Pi]+\mathcal{E}[\mu]+\mathcal{E}[\nu] \right\},$$ where the supremum is taken over all probability laws $\Pi$ on $\mathbb{R}^2$ for a pair of real-valued random variables $(X,Y)$ with respective marginal laws $\mu$ and $\nu$, and given a probability law $P$ with density function $f$ on $\mathbb{R}^k$, $\mathcal{E}[P] := \int_{\mathbb{R}^k} f \log f$ is its classical entropy. We prove similar formulas for the multiplicative free convolution $\mu \boxtimes \nu$ and the free compression $[\mu]_\tau$ of probability laws. [The maximisers in our variational descriptions of these free operations on measures can be computed explicitly, and from these we can]{style="color: black"} then deduce the standard $R$- and $S$-transform descriptions of additive and multiplicative free convolution. We use our formulation to derive several new inequalities relating free and classical convolutions of random variables, such as $$\int_{-\infty}^\infty \log(z - x) \mu \boxplus \nu (\mathrm{d}x) \geq \mathbb{E}[\log(z - (X+Y))],$$ valid for all large $z$, where on the right-hand side $X,Y$ are independent classical random variables with respective laws $\mu,\nu$. Our approach is based on applying a large deviation principle on the symmetric group to the celebrated quadrature formulas of Marcus, Spielman and Srivastava. author: - Octavio Arizmendi and Samuel G. G. 
Johnston title: A variational approach to free probability --- # Introduction and overview ## The quadrature formulas Let $A$ and $B$ be diagonal matrices with real entries $a_1,\ldots,a_N$ and $b_1,\ldots,b_N$, and let $U$ be a Haar unitary random matrix of dimension $N$. There are three remarkable **quadrature** formulas characterising the expected characteristic polynomials of the matrices $$\begin{aligned} \label{eq:oper} A + UBU^{-1}, \qquad A \cdot UBU^{-1}, \qquad \text{and} \qquad [U A U^{-1}]_k, \end{aligned}$$ where the first two matrices above are $N \times N$ matrices, and the last matrix, $[U A U^{-1}]_k$, refers to the principal $k \times k$ minor (i.e. the top-left corner) of the random matrix $U A U^{-1}$. These formulas were developed by Marcus, Spielman and Srivastava in their resolution of the Kadison-Singer problem, as well as in their work on Ramanujan graphs [@MSS; @MSS2; @MSS3]. The quadrature formulas state that the expected characteristic polynomial of each of the random matrices in [\[eq:oper\]](#eq:oper){reference-type="eqref" reference="eq:oper"} is unchanged if the Haar unitary random matrix $U$ is replaced by a permutation matrix $\Sigma$ chosen uniformly from the symmetric group on $N$ letters. More specifically, in each of the following formulas, let $U$ be a Haar unitary matrix under $\mathbf{E}_{\mathcal{U}_N}$, and let $\Sigma$ be a uniform permutation matrix under $\mathbf{E}_{\mathcal{S}_N}$. 
Then the first quadrature formula, concerned with the addition of random matrices, states that $$\begin{aligned} \label{eq:GM} \mathbf{E}_{\mathcal{U}_N} [ \det (z - A - UBU^{-1} ) ] = \mathbf{E}_{\mathcal{S}_N} [ \det (z - A - \Sigma B \Sigma^{-1} ) ] .\end{aligned}$$ Likewise, if we replace the addition of the matrices $A$ and $UBU^{-1}$ with their multiplication, then provided $A$ and $B$ have only positive eigenvalues, we have the analogous formula $$\begin{aligned} \label{eq:GM2} \mathbf{E}_{\mathcal{U}_N} [ \det (z - A \cdot UBU^{-1} ) ] = \mathbf{E}_{\mathcal{S}_N} [ \det (z - A \cdot \Sigma B \Sigma^{-1} ) ] .\end{aligned}$$ Finally, the expected characteristic polynomial of the $k \times k$ principal minor (i.e. top-left corner) $[UAU^{-1}]_k$ of a Haar unitary conjugation $UAU^{-1}$ of an $N \times N$ matrix $A$ satisfies $$\begin{aligned} \label{eq:GM3} \mathbf{E}_{\mathcal{U}_N} [ \det (z - [UAU^{-1}]_k ) ] = \mathbf{E}_{\mathcal{S}_N}[ \det (z - [\Sigma A \Sigma^{-1}]_k ) ] ,\end{aligned}$$ where the determinants inside the expectations in [\[eq:GM3\]](#eq:GM3){reference-type="eqref" reference="eq:GM3"} are of $k \times k$ matrices. See e.g. Gorin and Marcus [@GM] for further information on statements of these three quadrature formulas. (We note that the formulation we use in [\[eq:GM3\]](#eq:GM3){reference-type="eqref" reference="eq:GM3"} is slightly different to equation (4) of Gorin and Marcus [@GM]; see the appendix for a proof of the equivalence of the two equations.) 
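For very small $N$, the symmetric-group side of these formulas can be evaluated exactly by enumeration. The following sketch (with hypothetical spectra $a=(1,2)$, $b=(3,4)$, not taken from the sources above) averages $\det(z - A - \Sigma B \Sigma^{-1})$ over both permutations of two letters and compares against the polynomial $z^2-(e_1(a)+e_1(b))z+e_2(a)+e_2(b)+\tfrac12 e_1(a)e_1(b)$ obtained by expanding the $N=2$ average by hand; by the first quadrature formula, the same value is also the Haar-unitary average.

```python
import itertools
from math import factorial, prod

def perm_average(z, a, b):
    """Average of det(z*I - A - Sigma B Sigma^{-1}) over all N! permutations.

    For diagonal A and B, conjugating B by a permutation matrix simply
    permutes its diagonal entries, so each determinant is a product over i.
    """
    N = len(a)
    total = 0.0
    for sigma in itertools.permutations(range(N)):
        total += prod(z - a[i] - b[sigma[i]] for i in range(N))
    return total / factorial(N)

# N = 2 check against the hand-expanded average, written in terms of the
# elementary symmetric polynomials of the two spectra.
a, b, z = [1.0, 2.0], [3.0, 4.0], 10.0
e1a, e2a = a[0] + a[1], a[0] * a[1]
e1b, e2b = b[0] + b[1], b[0] * b[1]
closed_form = z**2 - (e1a + e1b) * z + e2a + e2b + 0.5 * e1a * e1b
assert abs(perm_average(z, a, b) - closed_form) < 1e-9   # both equal 24.5
```

The two permutations contribute $(z-4)(z-6)=24$ and $(z-5)(z-5)=25$ at $z=10$, so the average is $24.5$, matching the closed form.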
## A quick overview of our approach In this article, we will be interested in studying the logarithmic asymptotics of the three formulas [\[eq:GM\]](#eq:GM){reference-type="eqref" reference="eq:GM"}, [\[eq:GM2\]](#eq:GM2){reference-type="eqref" reference="eq:GM2"} and [\[eq:GM3\]](#eq:GM3){reference-type="eqref" reference="eq:GM3"} as $N$ tends to infinity while the empirical spectra of the diagonal matrices $A_N$ and $B_N$ approximate probability measures $\mu$ and $\nu$ on the real line in the sense that $$\begin{aligned} \label{eq:conv0} \frac{1}{N} \sum_{i=1}^N \delta_{\lambda_i(A_N)} \to \mu \qquad \text{and} \qquad \frac{1}{N} \sum_{i=1}^N \delta_{\lambda_i(B_N)} \to \nu .\end{aligned}$$ We find that there is dramatically different behaviour contributing to the expectations over the unitary and symmetric groups on the left-hand side and right-hand side of each of the respective equations. Before stating our results in full in Section [2](#sec:results){reference-type="ref" reference="sec:results"}, in the remainder of the introduction we give a flavour of our results by focussing on the asymptotic analogue of the first (i.e. additive) of the quadrature formulas [\[eq:GM\]](#eq:GM){reference-type="eqref" reference="eq:GM"} and its consequences. In short, Theorem [Theorem 1](#thm:t1){reference-type="ref" reference="thm:t1"} gives the asymptotic version of [\[eq:GM\]](#eq:GM){reference-type="eqref" reference="eq:GM"}, Theorem [Theorem 2](#thm:t2){reference-type="ref" reference="thm:t2"} gives a characterisation of $\mu \boxplus \nu$ in terms of various transforms, and Theorem [Theorem 3](#thm:t3){reference-type="ref" reference="thm:t3"} gives an inequality relating expectations of classical and free convolutions. On the one hand, the (random) empirical spectrum $\mu \boxplus_N \nu$ of the random matrix $A_N+U_NB_NU_N^{-1}$ converges almost-surely to a probability measure $\mu \boxplus \nu$ known as the additive free convolution of $\mu$ and $\nu$. 
This convergence is well behaved as we take expectations over the unitary group, and in particular, noting that $\det (z - A_N - U_NB_NU_N^{-1} ) = \exp \{ N \int_{-\infty}^\infty \log(z - x) (\mu \boxplus_N \nu)(\mathrm{d}x) \}$ we find that under [\[eq:conv0\]](#eq:conv0){reference-type="eqref" reference="eq:conv0"} we have $$\begin{aligned} \label{eq:LGM} \lim_{N \to \infty} \frac{1}{N} \log \mathbf{E}_{\mathcal{U}_N} [ \det (z - A_N - U_NB_NU_N^{-1} ) ] = \int_{-\infty}^\infty \log(z - x) (\mu \boxplus \nu)(\mathrm{d}x).\end{aligned}$$ The asymptotic behaviour of the right-hand side of [\[eq:GM\]](#eq:GM){reference-type="eqref" reference="eq:GM"} on the other hand is more delicate, owing to a large deviation on the symmetric group. To give a very rough idea of how this occurs, if $\sigma$ is a permutation in the symmetric group on $N$ letters, we define its associated approximate coupling measure on $[0,1]^2$ by setting $$\begin{aligned} \pi_N := \frac{1}{N} \sum_{ i = 1}^N \delta_{(i/N,\sigma(i)/N)}.\end{aligned}$$ Loosely speaking, we find that if $\sigma$ is a uniform random permutation of $N$ letters, then the large deviation principle for the symmetric group [@trashorras; @wu] states that for density functions $f:[0,1]^2 \to \mathbb{R}$, we have $$\begin{aligned} \mathbb{P}( \pi_N \approx f \mathrm{d}s\mathrm{d}t) \approx e^{ - N I[f] } \qquad \text{where} \qquad I[f] := \begin{cases} \int_{[0,1]^2} f \log f \qquad &\text{$f$ is a coupling density}\\ +\infty \qquad &\text{otherwise}, \end{cases}\end{aligned}$$ where $f$ is a **coupling density** if $f$ satisfies $\int_0^1 f(s_0,t) \mathrm{d}t = \int_0^1 f(s,t_0)\mathrm{d}s = 1$ for all $s_0,t_0 \in [0,1]$. 
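The coupling constraint in this rate function is automatic at the discrete level: for every permutation $\sigma$, both marginals of $\pi_N$ are the uniform measure on $\{1/N,\ldots,1\}$. A minimal sketch (the permutation, seed and size below are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 500
sigma = rng.permutation(N)

# pi_N places mass 1/N at each point (i/N, sigma(i)/N); record it on an N x N grid.
pi = np.zeros((N, N))
pi[np.arange(N), sigma] = 1.0 / N

# Every row and every column carries total mass exactly 1/N, so both marginals
# of pi_N are uniform: the discrete analogue of the coupling-density constraint.
assert np.allclose(pi.sum(axis=1), 1.0 / N)
assert np.allclose(pi.sum(axis=0), 1.0 / N)
assert abs(pi.sum() - 1.0) < 1e-12
```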
We are ultimately able to use this large deviation principle to show that under [\[eq:conv0\]](#eq:conv0){reference-type="eqref" reference="eq:conv0"} we have $$\begin{aligned} \label{eq:RGM} \lim_{N \to \infty} \frac{1}{N} \log \mathbf{E}_{\mathcal{S}_N} [ \det (z - A_N - \Sigma_NB_N\Sigma_N^{-1} ) ] = \sup_f \mathcal{G}[ \log(z - \rho_\mu(s) - \rho_\nu(t)), f ],\end{aligned}$$ where for integrable $w:[0,1]^2 \to \mathbb{R}$, $\mathcal{G}[w,\cdot]$ is the energy functional $$\begin{aligned} \mathcal{G}[w,f] := \int_{[0,1]^2} (w - \log f) f,\end{aligned}$$ $\rho_\mu$ (resp. $\rho_\nu$) denotes the inverse of the distribution function of $\mu$ (resp. $\nu$), and the supremum is taken over all coupling densities $f$. By combining the asymptotic descriptions [\[eq:LGM\]](#eq:LGM){reference-type="eqref" reference="eq:LGM"} and [\[eq:RGM\]](#eq:RGM){reference-type="eqref" reference="eq:RGM"} of either side of [\[eq:GM\]](#eq:GM){reference-type="eqref" reference="eq:GM"}, we arrive at the following result: **Theorem 1**. *Given probability measures $\mu$ and $\nu$ with compact support, write $\rho_\mu$ (resp. $\rho_\nu$) for the inverse of the distribution function of $\mu$ (resp. $\nu$). Then for all sufficiently large $z \in \mathbb{R}$ we have $$\begin{aligned} \label{eq:mainopen} \int_{-\infty}^\infty \log(z-\lambda)( \mu \boxplus \nu)(\mathrm{d}\lambda) = \sup_f \mathcal{G}[ \log(z - \rho_\mu(s) - \rho_\nu(t)), f ].\end{aligned}$$* See Theorem [Theorem 7](#thm:main){reference-type="ref" reference="thm:main"} for a statement with precise conditions, as well as analogous statements for free multiplicative convolution and free compression. It is possible to give an alternative description of Theorem [Theorem 1](#thm:t1){reference-type="ref" reference="thm:t1"} in terms of the supremum of couplings of probability measures. 
Namely, we show in Section [6](#sec:entropy){reference-type="ref" reference="sec:entropy"} that as a consequence of Theorem [Theorem 1](#thm:t1){reference-type="ref" reference="thm:t1"}, in the special case where $\mu$ and $\nu$ have densities with respect to Lebesgue measure, we have $$\begin{aligned} \label{eq:fisher} \int_{-\infty}^\infty \log(z - \lambda) \mu \boxplus \nu (\mathrm{d}\lambda) = \sup_{\Pi} \left\{ \mathbb{E}_\Pi[\log(z - (X+Y))] - \mathcal{E}[\Pi]+\mathcal{E}[\mu]+\mathcal{E}[\nu] \right\},\end{aligned}$$ where the supremum is taken over all probability laws $\Pi$ on $\mathbb{R}^2$ for random variables $(X,Y)$ with marginal laws $\mu$ and $\nu$, and given a probability law $P$ with density function $f$ on $\mathbb{R}^k$, $\mathcal{E}[P] := \int_{\mathbb{R}^k} f \log f$ is its entropy. This recovers the result in the abstract. We prove similar formulas for the multiplicative free convolution and the free compression of probability laws. The variational characterisation [\[eq:mainopen\]](#eq:mainopen){reference-type="eqref" reference="eq:mainopen"} of the probability measure $\mu \boxplus \nu$ has several consequences. For one thing, the right-hand side of [\[eq:mainopen\]](#eq:mainopen){reference-type="eqref" reference="eq:mainopen"} has a unique $z$-dependent maximiser, which we compute explicitly in the sequel. By plugging this maximiser into the energy functional we prove the following result, which is a variation on well known results in the literature. **Theorem 2**. *Let $\omega,\omega_\mu,\omega_\nu$ be the functions of $z$ given by the unique solutions to the relations $$\begin{aligned} \omega(z) &= \omega_\mu(z) + \omega_\nu(z) \label{eq:add1} \\ \frac{1}{\omega(z) - z } &= G_\mu(\omega_\mu(z)) = G_\nu(\omega_\nu(z)). 
\label{eq:add2}\end{aligned}$$ Then $$\begin{aligned} \label{eq:mainopen2} \int_{-\infty}^\infty \log(z-\lambda)( \mu \boxplus \nu)(\mathrm{d}\lambda) = - \log (\omega-z) + \int_{-\infty}^\infty \log(\omega_\mu-\lambda)\mu(\mathrm{d}\lambda) + \int_{-\infty}^\infty \log(\omega_\nu -\lambda)\nu(\mathrm{d}\lambda).\end{aligned}$$* The equation [\[eq:mainopen2\]](#eq:mainopen2){reference-type="eqref" reference="eq:mainopen2"} may then be regarded as a new construction of $\mu \boxplus \nu$: it is the unique measure satisfying [\[eq:mainopen2\]](#eq:mainopen2){reference-type="eqref" reference="eq:mainopen2"} (with $\omega,\omega_\mu,\omega_\nu$ defined in [\[eq:add1\]](#eq:add1){reference-type="eqref" reference="eq:add1"} and [\[eq:add2\]](#eq:add2){reference-type="eqref" reference="eq:add2"}) for all sufficiently large $z$. This construction gives rise to the standard description of $\mu \boxplus \nu$ in terms of $R$-transforms. Namely, the $R$-transform of a measure $\mu$ is the unique function $R_\mu$ satisfying $$\begin{aligned} R_\mu(G_\mu(z)) + \frac{1}{G_\mu(z)} = z\end{aligned}$$ for all sufficiently large $z$. In the sequel we show that by differentiating [\[eq:mainopen2\]](#eq:mainopen2){reference-type="eqref" reference="eq:mainopen2"} with respect to $z$ and using the relations [\[eq:add1\]](#eq:add1){reference-type="eqref" reference="eq:add1"} and [\[eq:add2\]](#eq:add2){reference-type="eqref" reference="eq:add2"}, one can deduce that the $R$-transforms of $\mu \boxplus \nu, \mu$ and $\nu$ are related via $$\begin{aligned} R_{\mu \boxplus \nu } = R_\mu + R_\nu.\end{aligned}$$ See Section [3](#sec:free){reference-type="ref" reference="sec:free"} for a definition of the $R$-transform, as well as further details on this derivation. There are other consequences of [\[eq:mainopen\]](#eq:mainopen){reference-type="eqref" reference="eq:mainopen"} (and its analogues for free multiplication and compression) that we can deduce from taking the supremum at face value. 
For one thing, by plugging in any coupling density $f$ we obtain an associated inequality involving free additive convolution. In particular, by setting $f$ equal to the uniform density, we are able to prove the following inequality: **Theorem 3**. *Let $\mu,\nu$ have compact support. If $X$ and $Y$ are independent classical random variables with respective laws $\mu$ and $\nu$, then for all sufficiently large $z$ we have $$\begin{aligned} \label{eq:ads} \int_{-\infty}^\infty \log(z - \lambda) (\mu \boxplus \nu) (\mathrm{d}\lambda) \geq \mathbb{E}[ \log (z - (X+Y))].\end{aligned}$$* Theorem [Theorem 3](#thm:t3){reference-type="ref" reference="thm:t3"} alternatively follows from setting $\Pi$ equal to the (classical) product measure of $\mu$ and $\nu$ in [\[eq:fisher\]](#eq:fisher){reference-type="eqref" reference="eq:fisher"}, and then using the well known fact that for any measure $\Pi'$ on $\mathbb{R}^2$ with marginals $\mu$ and $\nu$, we have $$\begin{aligned} \mathcal{E}[\Pi'] \geq \mathcal{E}[\mu ] + \mathcal{E}[\nu],\end{aligned}$$ with equality if and only if $\Pi'$ is the product measure. Since the logarithm function is concave, the inequality [\[eq:ads\]](#eq:ads){reference-type="eqref" reference="eq:ads"} in some sense captures how free convolution creates measures with smaller tails than classical convolution while preserving mean and variance. In fact, we see in the sequel that the analogue of [\[eq:ads\]](#eq:ads){reference-type="eqref" reference="eq:ads"} with multiplication (i.e. $\mu \boxtimes \nu$ and $XY$) holds. Moreover, by applying a similar method to the analogue of [\[eq:mainopen\]](#eq:mainopen){reference-type="eqref" reference="eq:mainopen"} for free compression, we obtain a monotonicity result for free compression, which roughly speaking says that the expectation of $\log(z - \lambda)$ is monotone with respect to the free compression operation. ## Overview That concludes the brief taster of our methods. 
The remainder of this article is structured as follows: - In Section [2](#sec:results){reference-type="ref" reference="sec:results"} we state our results in full, and give a more detailed description of our methods. - In Section [3](#sec:free){reference-type="ref" reference="sec:free"} we further study the associated formal calculations in free probability, and relate these calculations to the free probability literature at large. - In Section [4](#sec:UN){reference-type="ref" reference="sec:UN"} we start with the proofs of our main results. In this section, we prove Theorem [Theorem 4](#thm:UN){reference-type="ref" reference="thm:UN"}, which characterises the convergence of the left-hand sides of each of [\[eq:GM\]](#eq:GM){reference-type="eqref" reference="eq:GM"}-[\[eq:GM3\]](#eq:GM3){reference-type="eqref" reference="eq:GM3"}. - In Section [5](#sec:SN){reference-type="ref" reference="sec:SN"}, we prove Theorem [Theorem 6](#thm:sup){reference-type="ref" reference="thm:sup"}, which characterises the convergence of the right-hand sides of each of [\[eq:GM\]](#eq:GM){reference-type="eqref" reference="eq:GM"}-[\[eq:GM3\]](#eq:GM3){reference-type="eqref" reference="eq:GM3"}. - In Section [6](#sec:entropy){reference-type="ref" reference="sec:entropy"}, we derive the alternative formulation Theorem [Theorem 8](#thm:main2){reference-type="ref" reference="thm:main2"} of Theorem [Theorem 7](#thm:main){reference-type="ref" reference="thm:main"} in terms of the classical notion of entropy from information theory. - In the final part, Section [7](#sec:char){reference-type="ref" reference="sec:char"}, we use standard techniques in calculus of variations to characterise the maximisers occurring on the right-hand side of [\[eq:mainopen\]](#eq:mainopen){reference-type="eqref" reference="eq:mainopen"}. 
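Before moving to the main results, we record a quick numerical sanity check of the relation $R_{\mu \boxplus \nu} = R_\mu + R_\nu$ in the one setting where every transform is fully explicit, namely semicircle laws. The sketch below assumes the standard formula $G(z) = (z - \sqrt{z^2 - 4\sigma^2})/(2\sigma^2)$ for the Cauchy transform of the mean-zero semicircle law of variance $\sigma^2$, for which $R(w) = \sigma^2 w$; since $w + w = 2w$, additivity of $R$-transforms says that the free convolution of two variance-one semicircles is the variance-two semicircle.

```python
from math import sqrt

def G_semicircle(z, var):
    # Cauchy transform of the mean-zero semicircle law of variance `var`,
    # evaluated at real z above the support [-2 sqrt(var), 2 sqrt(var)].
    return (z - sqrt(z * z - 4.0 * var)) / (2.0 * var)

# R(w) = var * w is equivalent to the defining relation R(G(z)) + 1/G(z) = z,
# which we verify pointwise for the variance-1 and variance-2 semicircles.
for var in (1.0, 2.0):
    for z in (3.0, 4.5, 7.0):
        g = G_semicircle(z, var)
        assert abs(var * g + 1.0 / g - z) < 1e-9

# Hence R_1(w) + R_1(w) = 2w = R_2(w): the free convolution of two
# variance-1 semicircles is the variance-2 semicircle (the free CLT law).
```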
# Main results {#sec:results} While the fixed-$N$ formulas in [\[eq:GM\]](#eq:GM){reference-type="eqref" reference="eq:GM"}, [\[eq:GM2\]](#eq:GM2){reference-type="eqref" reference="eq:GM2"} and [\[eq:GM3\]](#eq:GM3){reference-type="eqref" reference="eq:GM3"} are valid for all complex $z$, when working under a large deviation framework we will find it convenient to assume that $z$ is a sufficiently large real number. Specifically, given a measure $\mu$ on the real line, let us write $$\begin{aligned} \mathrm{E}_\mu := \mathrm{ess sup}( \mu) := \inf \{ r \in \mathbb{R} : \mu[r,\infty) = 0 \} \in (-\infty,+\infty]\end{aligned}$$ for the essential supremum of $\mu$. We will work under the following assumption: **Assumption 1**. *Throughout the article $z$ will denote a real number, and $\mu$ and $\nu$ will be probability measures with $\mathrm{E}_\mu,\mathrm{E}_\nu < \infty$.* *Furthermore, in every context involving additive free convolution of measures $\mu$ and $\nu$, we assume $$\begin{aligned} z > \mathrm{E}_\mu + \mathrm{E}_\nu.\end{aligned}$$ In every context involving multiplicative free convolution of measures $\mu$ and $\nu$, we assume that the measures $\mu$ and $\nu$ are supported on $(0,\infty)$, and we further assume $$\begin{aligned} z > \mathrm{E}_\mu \cdot \mathrm{E}_\nu.\end{aligned}$$ In every context involving free compression of a measure $\mu$, we assume $$\begin{aligned} z > \mathrm{E}_\mu.\end{aligned}$$* We emphasise that this assumption will be in force for the remainder of the article. Throughout the article, we will find it useful to formulate measures in terms of their quantile functions, which we mentioned in the introduction. The quantile function $\rho_\mu:[0,1] \to \mathbb{R}$ of a probability measure $\mu$ is the right-inverse of the distribution function associated with $\mu$. 
In other words, $\rho_\mu$ is the unique right-continuous function satisfying $$\begin{aligned} \int_{-\infty}^{\rho_\mu(t)} \mu(\mathrm{d}x) = t \qquad \text{for all $t \in [0,1]$}.\end{aligned}$$ Note then that $\int_{-\infty}^\infty f(x) \mu(\mathrm{d}x) = \int_0^1 f(\rho_\mu(t))\mathrm{d}t$ for suitable $f$. If $\mu$ contains an atom of mass $p$, then $\rho_\mu$ is constant on an interval of length $p$. If $\mu$ contains a gap in its support of length $p$, then $\rho_\mu$ has a jump of size $p$. In order to produce $N \times N$ diagonal matrices $A_N$ and $B_N$ with spectra approximating the measures $\mu$ and $\nu$, one can define the diagonal matrices $$\begin{aligned} \label{eq:conv1} A_N := \mathrm{diag}( \rho_\mu(i/N) : i = 1,\ldots,N) \qquad \text{and} \qquad B_N := \mathrm{diag}( \rho_\nu(i/N): i = 1,\ldots,N).\end{aligned}$$ Note that with $A_N$ and $B_N$ defined as in [\[eq:conv1\]](#eq:conv1){reference-type="eqref" reference="eq:conv1"}, the convergence [\[eq:conv0\]](#eq:conv0){reference-type="eqref" reference="eq:conv0"} holds. We also note that Assumption [Assumption 1](#as0){reference-type="ref" reference="as0"} ensures that each of the determinants occurring in [\[eq:GM\]](#eq:GM){reference-type="eqref" reference="eq:GM"}, [\[eq:GM2\]](#eq:GM2){reference-type="eqref" reference="eq:GM2"} and [\[eq:GM3\]](#eq:GM3){reference-type="eqref" reference="eq:GM3"} is positive almost-surely. ## Free probability and expectations over the unitary group {#sec:unitary} With a view to discussing the (easier) left-hand sides of each of the equations [\[eq:GM\]](#eq:GM){reference-type="eqref" reference="eq:GM"}, [\[eq:GM2\]](#eq:GM2){reference-type="eqref" reference="eq:GM2"}, and [\[eq:GM3\]](#eq:GM3){reference-type="eqref" reference="eq:GM3"}, we begin by introducing the key ideas of free probability. 
The central results of free probability state that if the empirical measures of $A$ and $B$ approximate probability measures $\mu$ and $\nu$ on the real line, then there are three operations on probability measures $$\begin{aligned} \label{eq:freeops} \mu \boxplus \nu, \qquad \mu \boxtimes \nu \qquad \text{and} \qquad [\mu]_\tau,\end{aligned}$$ (each taking one or more probability measures on $\mathbb{R}$ and creating a new probability measure on $\mathbb{R}$) characterising the leading order behaviour of the empirical spectral measures of the random matrices in [\[eq:oper\]](#eq:oper){reference-type="eqref" reference="eq:oper"}. More explicitly, write $\lambda_1(C) \leq \ldots \leq \lambda_p(C)$ for the eigenvalues of a $p \times p$ Hermitian matrix $C$. Suppose [\[eq:conv0\]](#eq:conv0){reference-type="eqref" reference="eq:conv0"} holds. Then there are probability measures $\mu \boxplus \nu, \mu \boxtimes \nu$ and $[\mu]_\tau$ on the real line such that as $N \to \infty$ we have the following almost-sure convergence of empirical spectra of random matrices: For addition, $$\begin{aligned} \frac{1}{N} \sum_{ i = 1}^N \delta_{\lambda_i( A_N + U_N B_N U_N^{-1}) } \to \mu \boxplus \nu .\end{aligned}$$ For multiplication (with $\mu$ and $\nu$ supported on $(0,\infty)$) we have $$\begin{aligned} \frac{1}{N} \sum_{ i = 1}^N \delta_{\lambda_i( A_N \cdot U_N B_N U_N^{-1}) } \to \mu \boxtimes \nu .\end{aligned}$$ For compression, if $k = \lfloor \tau N \rfloor$ is the greatest integer not exceeding $\tau N$ for some $\tau \in [0,1]$, we have $$\begin{aligned} \frac{1}{k} \sum_{ i = 1}^k \delta_{\lambda_i( [U_N A_N U_N^{-1}]_k) } \to [\mu]_\tau .\end{aligned}$$ We remark at this stage that, in the setting of multiplication of Hermitian matrices, although $A_N U_N B_N U_N^{-1}$ has all real eigenvalues, it is not guaranteed to be Hermitian. 
For this reason, some authors prefer considering, in the case where $A_N$ has positive eigenvalues only, the matrix $A_N^{1/2} U_N B_N U_N^{-1} A_N^{1/2}$, which has the same spectrum as $A_N U_N B_N U_N^{-1}$, but is Hermitian. Also, it transpires that up to a scaling factor, the compression measure $[\mu]_{1/k}$ and the $k$-fold additive free convolution $\mu \boxplus \ldots \boxplus \mu$ are identical. We explore this connection further in Section [3.4](#sec:compr){reference-type="ref" reference="sec:compr"}. Consider now that for suitable complex $z$, $\det(z-C) = \prod_{i=1}^N (z - \lambda_i(C))$ can be written in terms of the empirical spectral measure via the equation $$\begin{aligned} \frac{1}{N} \log \det( z- C) = \int_{-\infty}^\infty \log(z - \lambda) \mu_C(\mathrm{d}\lambda).\end{aligned}$$ As a foundation for our results in the sequel, we have the following result, which states that the normalised logarithms of each of the unitary expectations in [\[eq:GM\]](#eq:GM){reference-type="eqref" reference="eq:GM"}, [\[eq:GM2\]](#eq:GM2){reference-type="eqref" reference="eq:GM2"} and [\[eq:GM3\]](#eq:GM3){reference-type="eqref" reference="eq:GM3"} are well behaved as $N \to \infty$: **Theorem 4**. *Let $A_N$ and $B_N$ be as in [\[eq:conv1\]](#eq:conv1){reference-type="eqref" reference="eq:conv1"}. Then for addition we have $$\begin{aligned} \label{eq:UGM} \lim_{N \to \infty} \frac{1}{N } \log \mathbf{E}_{\mathcal{U}_N} [ \det (z - A_N - U_N B_N U_N^{-1} ) ] = \int_{-\infty}^\infty \log(z-\lambda) ( \mu \boxplus \nu )(\mathrm{d} \lambda) . \end{aligned}$$ For multiplication we have $$\begin{aligned} \label{eq:UGM2} \lim_{N \to \infty} \frac{1}{N } \log \mathbf{E}_{\mathcal{U}_N} [ \det (z - A_N \cdot U_NB_NU_N^{-1} ) ] = \int_{-\infty}^\infty \log(z-\lambda) (\mu \boxtimes \nu) (\mathrm{d} \lambda) . 
\end{aligned}$$ For compression we have $$\begin{aligned} \label{eq:UGM3} \lim_{N \to \infty} \frac{1}{\tau N } \log \mathbf{E}_{\mathcal{U}_N} [ \det (z - [U_NA_NU_N^{-1}]_{\lfloor \tau N \rfloor} ) ] = \int_{-\infty}^\infty \log(z-\lambda) [\mu]_\tau (\mathrm{d} \lambda) . \end{aligned}$$* In the next section, we turn to studying the more delicate asymptotics of the expectations over the symmetric groups. ## Large deviations and expectations over the symmetric group {#sec:symmetric} In this section we study the asymptotics of the right-hand sides of [\[eq:GM\]](#eq:GM){reference-type="eqref" reference="eq:GM"}-[\[eq:GM3\]](#eq:GM3){reference-type="eqref" reference="eq:GM3"}, finding in a certain sense that the contributions to the expectations in question come from *atypical* elements of the symmetric group. We turn to considering the random variables occurring in the right-hand sides of [\[eq:GM\]](#eq:GM){reference-type="eqref" reference="eq:GM"}-[\[eq:GM3\]](#eq:GM3){reference-type="eqref" reference="eq:GM3"}. We begin by considering the first equation, [\[eq:GM\]](#eq:GM){reference-type="eqref" reference="eq:GM"}, concerned with the addition of random matrices. Let $\sigma:\{1,\ldots,N\} \to \{1,\ldots,N\}$ be the bijection associated with a permutation matrix $\Sigma$. The determinant in [\[eq:GM\]](#eq:GM){reference-type="eqref" reference="eq:GM"} then reads $$\begin{aligned} \label{eq:xX} \det(z - A - \Sigma B \Sigma^{-1}) = \prod_{i = 1}^N (z - a_i - b_{\sigma(i)} ).\end{aligned}$$ Now we introduce the main idea. 
Given a permutation $\sigma$ we can define an atomic probability measure $\pi_N$ on the unit square $[0,1]^2$ by setting $$\begin{aligned} \pi_N := \frac{1}{N} \sum_{ i = 1}^N \delta_{(i/N,\sigma(i)/N)}.\end{aligned}$$ With the measure $\pi_N$ associated with $\sigma$ and $\Sigma_N$ at hand, with $A_N$ and $B_N$ as in [\[eq:conv1\]](#eq:conv1){reference-type="eqref" reference="eq:conv1"} we may write $$\begin{aligned} \label{eq:yY} \frac{1}{N} \log \det(z - A_N - \Sigma_N B_N \Sigma_N^{-1}) = \int_{[0,1]^2} \log (z - \rho_\mu(s) - \rho_\nu(t)) \pi_N(\mathrm{d}s,\mathrm{d}t).\end{aligned}$$ Similarly, we have $$\begin{aligned} \label{eq:AAA} \frac{1}{N} \log \det(z - A \cdot \Sigma B \Sigma^{-1}) = \frac{1}{N} \log \prod_{i = 1}^N (z - a_i b_{\sigma(i)} ) = \int_{[0,1]^2} \log (z - \rho_\mu(s) \rho_\nu(t)) \pi_N(\mathrm{d}s,\mathrm{d}t),\end{aligned}$$ and with $k = \lfloor \tau N \rfloor$ we have $$\begin{aligned} \label{eq:BBB} \frac{1}{\tau N} \log \det(z - [\Sigma A \Sigma^{-1}]_k) =\frac{1}{\tau N} \log \prod_{i \,:\, \sigma(i) \leq k} (z - a_i ) =\frac{1}{\tau} \int_{[0,1]^2} \log (z - \rho_\mu(s))\mathrm{1}_{\{t < \tau\}} \pi_N(\mathrm{d}s,\mathrm{d}t).\end{aligned}$$ It is of no cost to work more broadly from here on, and thus we will proceed by studying the wider problem of the asymptotics of $\mathbf{E}_{\mathcal{S}_N}[e^{NZ_N[w]}]$ where $$\begin{aligned} Z_N[w] := \int_{[0,1]^2} w \mathrm{d}\pi_N,\end{aligned}$$ for a measurable function $w:[0,1]^2 \to \mathbb{R}$. It is easily verified that $\pi_N$ converges weakly to the uniform measure on $[0,1]^2$, and as such, for all sufficiently regular (continuous, say) $w$, as $N \to \infty$ we have $$\begin{aligned} Z_N[w] \to \int_{[0,1]^2} w(s,t) \mathrm{d}s \mathrm{d}t.\end{aligned}$$ However, what we will find is that the asymptotics of $\mathbf{E}_{\mathcal{S}_N}[e^{N Z_N[w]}]$ are in fact dominated by rare events that nonetheless contribute overwhelmingly to the expectation. 
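This domination by rare permutations can already be seen at small $N$, where the expectation over $\mathcal{S}_N$ can be computed exactly by enumeration (indeed $\mathbf{E}_{\mathcal{S}_N}[e^{NZ_N[w]}]$ is precisely the permanent of the matrix $(e^{w(i/N,j/N)})_{i,j}$ divided by $N!$). A minimal numerical sketch, assuming only the Python standard library; the test function $w(s,t)=st$ and the value $N=6$ are our own illustrative choices:

```python
import itertools
import math

N = 6

def w(s, t):
    # a simple admissible test function (our choice)
    return s * t

def Z(sigma):
    # Z_N[w] for a permutation sigma: (1/N) * sum_i w(i/N, sigma(i)/N),
    # with the grid points i/N, i = 1, ..., N
    return sum(w((i + 1) / N, (j + 1) / N) for i, j in enumerate(sigma)) / N

# exact expectation of e^{N Z_N[w]} over the symmetric group S_N
vals = [math.exp(N * Z(sigma)) for sigma in itertools.permutations(range(N))]
lhs = math.log(sum(vals) / len(vals)) / N   # (1/N) log E_{S_N}[e^{N Z_N[w]}]

# typical value of Z_N[w]: pi_N -> uniform measure, so Z_N[w] -> integral of w
typical = sum(w((i + 1) / N, (j + 1) / N)
              for i in range(N) for j in range(N)) / N ** 2

print(lhs, typical)
```

Already at $N=6$ the quantity $\frac{1}{N}\log\mathbf{E}_{\mathcal{S}_N}[e^{NZ_N[w]}]$ is strictly larger than the typical value of $Z_N[w]$, consistent with the expectation being driven by atypical permutations.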
To describe these rare events, let us begin by noting that any candidate limiting density $f:[0,1]^2 \to [0,\infty)$ for $\pi_N$ must be a coupling density, i.e. it must satisfy the consistency condition $$\begin{aligned} \label{eq:jetski} \int_0^1 f(s,t) \mathrm{d}s = \int_0^1 f(s,t) \mathrm{d}t = 1,\end{aligned}$$ in each of the variables $s$ and $t$. This follows from the fact that $\sigma$ is a bijection. Of course, we note that [\[eq:jetski\]](#eq:jetski){reference-type="eqref" reference="eq:jetski"} implies $\int_{[0,1]^2} f(s,t)\mathrm{d}s\mathrm{d}t = 1$, i.e. $f$ is a probability density. Note that for measurable $A,B:[0,1] \to \mathbb{R}$, the equation [\[eq:jetski\]](#eq:jetski){reference-type="eqref" reference="eq:jetski"} entails $$\begin{aligned} \label{eq:throughfact} \int_0^1 \int_0^1 (A(s)+B(t))f(s,t) \mathrm{d}s \mathrm{d}t = \int_0^1 A(s) \mathrm{d}s + \int_0^1 B(t)\mathrm{d}t,\end{aligned}$$ a fact we will use frequently throughout the article. We now outline briefly the basic tenets of large deviation theory, following Dembo and Zeitouni [@DZ]. A **rate function** $I:\mathcal{X} \to [0,\infty]$ on a topological space $\mathcal{X}$ with Borel $\sigma$-algebra $\mathcal{B}$ is simply a lower semicontinuous function, i.e. a function such that for each $\alpha \in [0,\infty)$, the level set $\Psi_I(\alpha) := \{x : I(x) \leq \alpha\}$ is closed. A rate function is **good** if the level sets are compact. 
We say that a sequence of Borel probability measures $(P_N)_{N \geq 1}$ on a topological space $(\mathcal{X},\mathcal{B})$ satisfies a **large deviation principle** with rate function $I:\mathcal{X} \to [0,\infty]$ if for all $\Gamma \in \mathcal{B}$ we have $$\begin{aligned} - \inf_{x \in \Gamma^o} I(x) \leq \liminf_{N \to \infty} \frac{1}{N} \log P_N(\Gamma) \leq \limsup_{N \to \infty} \frac{1}{N} \log P_N(\Gamma) \leq - \inf_{x \in \overline{\Gamma}} I(x).\end{aligned}$$ Speaking very informally, if $X_N$ is a random variable with law $P_N$, the large deviation principle entails that $P_N(X_N \approx x) \approx e^{ - NI(x)}$. We turn to discussing a large deviation principle for the law $P_N$ of the random measure $\pi_N := \frac{1}{N} \sum_{i=1}^N \delta_{(i/N,\sigma(i)/N)}$ associated with a uniformly chosen permutation $\sigma$ from the symmetric group on $N$ letters. As far as we can see, the large deviation principle was first proved by Wu [@wu; @wu2], though the exact formulation we use is due to Trashorras [@trashorras] (see also [@KKRW]). Let $\mathcal{M}_1([0,1]^2)$ denote the set of probability measures on $[0,1]^2$, endowed with the topology of weak convergence. We write $\mathcal{M}_1^{\mathrm{coup}}([0,1]^2)$ for the subset of $\mathcal{M}_1([0,1]^2)$ consisting of probability measures $\pi$ on $[0,1]^2$ with uniform marginals, that is, measures on $[0,1]^2$ with the property that for all $0 \leq a \leq b \leq 1$ we have $$\begin{aligned} \pi( [a,b] \times [0,1] ) = \pi( [0,1] \times [a,b] ) = b-a.\end{aligned}$$ We call such measures coupling measures. When a coupling measure has a density with respect to Lebesgue measure, we call that density a coupling density. Note, however, that coupling measures need not have a density with respect to Lebesgue measure: take for example the measure $\pi$ satisfying $\pi \left( \cup_{s \in [a,b]} \{(s,s)\} \right) = b-a$ and $\pi([0,1]^2 - \cup_{s \in [0,1]} \{(s,s)\}) = 0$. 
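The uniform-marginal property of the permutation measures $\pi_N$ themselves is immediate from the fact that $\sigma$ is a bijection: the second coordinates $\sigma(i)/N$ of the atoms form exactly the same multiset as the first coordinates $i/N$, so any strip $[a,b]\times[0,1]$ and $[0,1]\times[a,b]$ receive the same mass, equal to $b-a$ up to $O(1/N)$. A minimal numerical check, assuming numpy; the values of $N$, the seed and the strip $[a,b]$ are our own choices:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 1000
sigma = rng.permutation(N)       # a uniform random permutation of {0, ..., N-1}

# pi_N puts mass 1/N at each atom (i/N, sigma(i)/N)
s = (np.arange(N) + 1) / N       # first coordinates of the atoms
t = (sigma + 1) / N              # second coordinates of the atoms

a, b = 0.25, 0.75
mass_s = np.mean((a <= s) & (s <= b))   # pi_N([a,b] x [0,1])
mass_t = np.mean((a <= t) & (t <= b))   # pi_N([0,1] x [a,b])

# both marginal masses equal b - a up to O(1/N), and they agree exactly
print(mass_s, mass_t)
```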
Note that if a measure $\pi$ on $[0,1]^2$ has density function $f$ with respect to Lebesgue measure satisfying [\[eq:jetski\]](#eq:jetski){reference-type="eqref" reference="eq:jetski"}, then $\pi$ is coupling. **Theorem 5** (Trashorras [@trashorras]). *Under the topology of weak convergence of probability measures on $[0,1]^2$, the random measures $\pi_N$ under $P_N$ satisfy a large deviation principle with good rate function $I(\pi)$ given by $$\begin{aligned} I(\pi) := \begin{cases} \int_{[0,1]^2} f(s,t) \log f(s,t) \mathrm{d}s \mathrm{d}t \qquad &\text{if $\pi(\mathrm{d}s,\mathrm{d}t) = f(s,t)\mathrm{d}s\mathrm{d}t \in \mathcal{M}_1^{\mathrm{coup}}([0,1]^2)$}\\ +\infty \qquad &\text{otherwise}. \end{cases}\end{aligned}$$* (To obtain Theorem [Theorem 5](#thm:trashorras){reference-type="ref" reference="thm:trashorras"} from the exact statement given in [@trashorras Theorem 1], set $x_i^N := i/N$, $\Sigma := [0,1]$, and $\mu$ as Lebesgue measure on $[0,1]$.) Throughout the article we will abuse notation and write $I(f) := I(\pi)$ whenever $f$ is the density function associated with a measure $\pi$. Again, interpreting this result very informally, it states that for $\pi \in \mathcal{M}_1^{\mathrm{coup}}$ with density $f$ with respect to Lebesgue measure, $$\begin{aligned} P_N( \pi_N \approx f(s,t) \mathrm{d}s \mathrm{d}t ) \approx e^{ - N\int_{[0,1]^2} f(s,t) \log f(s,t) \mathrm{d}s \mathrm{d}t }.\end{aligned}$$ With this theorem at hand, we are ready to study the asymptotics of $\mathbf{E}_{\mathcal{S}_N}[e^{N Z_N[w]}]$ at a heuristic level. 
With $Z_N[w] := \int_{[0,1]^2} w \mathrm{d}\pi_N$, as a rough calculation, we have $$\begin{aligned} \label{eq:rough} \mathbf{E}[ e^{ N Z_N[w]} ] \approx \sum_{ \pi} e^{ N \int w \mathrm{d} \pi } P_N( \pi_N \approx \pi) \approx \exp \left\{ N \sup_{f } \mathcal{G}[w,f] \right\},\end{aligned}$$ where, given a probability measure $\pi$ in $\mathcal{M}_1^{\mathrm{coup}}([0,1]^2)$ with density $f$, we have $$\begin{aligned} \label{eq:GF} \mathcal{G}[w,f] := \int_{[0,1]^2} (w(s,t) - \log f(s,t)) f(s,t) \mathrm{d}s \mathrm{d}t.\end{aligned}$$ We now make [\[eq:rough\]](#eq:rough){reference-type="eqref" reference="eq:rough"} precise. We say a function $w:[0,1]^2 \to \mathbb{R}$ is **admissible** if it is well approximated by rectangular step functions on intervals. More explicitly, a function $w$ is admissible if, for every $\delta > 0$, there are partitions $0 = s_0 \leq s_1 \leq \ldots \leq s_k = 1$ and $0 = t_0 \leq t_1 \leq \ldots \leq t_\ell = 1$ of $[0,1]$, and a function $\tilde{w}:[0,1]^2 \to \mathbb{R}$ that is constant on each $[s_i,s_{i+1})\times [t_j,t_{j+1})$ such that $|\tilde{w}(s,t) - w(s,t)| \leq \delta$ for all $s,t \in [0,1]$. Here, $\tilde{w}$ is a **rectangular step function**. We have the following result, which we prove in Section [5](#sec:SN){reference-type="ref" reference="sec:SN"} using the large deviation principle in Theorem [Theorem 5](#thm:trashorras){reference-type="ref" reference="thm:trashorras"}. **Theorem 6**. *Let $w$ be admissible, and as above, let $Z_N[w] = \int_{[0,1]^2} w \mathrm{d}\pi_N$, where $\pi_N := \frac{1}{N} \sum_{ i =1}^N \delta_{(i/N,\sigma(i)/N)}$ is associated with a uniform permutation $\sigma$ of $N$ letters. 
Then $$\begin{aligned} \lim_{N \to \infty} \frac{1}{N} \log \mathbf{E}[e^{ N Z_N[w]} ] = \sup_{ f } \mathcal{G}[w,f] ,\end{aligned}$$ where the supremum is taken over all coupling densities $f$.* In Section [5](#sec:SN){reference-type="ref" reference="sec:SN"} we prove that all functions we consider are admissible, and hence as a consequence of Theorem [Theorem 6](#thm:sup){reference-type="ref" reference="thm:sup"} we have our foundational result: **Theorem 7** (Hydrodynamic quadrature theorem). *Let $\mu$ and $\nu$ be probability measures on the real line with compact support, and let $\rho_\mu,\rho_\nu$ be the right-continuous inverses of their distribution functions. Suppose Assumption [Assumption 1](#as0){reference-type="ref" reference="as0"} holds. Then with $\mathcal{G}[w,f]$ as in [\[eq:GF\]](#eq:GF){reference-type="eqref" reference="eq:GF"} we have $$\begin{aligned} \label{eq:max1} \int_{-\infty}^\infty \log(z-\lambda) ( \mu \boxplus \nu )(\mathrm{d} \lambda) =\sup_{f} \mathcal{G}\left[ \log(z - \rho_\mu(s) - \rho_\nu(t)), f\right].\end{aligned}$$ Likewise, $$\begin{aligned} \label{eq:max2} \int_{-\infty}^\infty \log(z-\lambda) (\mu \boxtimes \nu) (\mathrm{d} \lambda) =\sup_{f} \mathcal{G}\left[ \log(z - \rho_\mu(s) \cdot \rho_\nu(t)), f \right],\end{aligned}$$ and $$\begin{aligned} \label{eq:max3} \tau \int_{-\infty}^\infty \log(z-\lambda) [\mu]_\tau (\mathrm{d} \lambda) =\sup_{f} \mathcal{G}\left[ \log(z-\rho_\mu(s)) \mathrm{1}_{[0,\tau]}(t) , f \right].\end{aligned}$$* *Proof.* We prove [\[eq:max1\]](#eq:max1){reference-type="eqref" reference="eq:max1"}; the proofs of [\[eq:max2\]](#eq:max2){reference-type="eqref" reference="eq:max2"} and [\[eq:max3\]](#eq:max3){reference-type="eqref" reference="eq:max3"} are very similar. 
Using [\[eq:UGM\]](#eq:UGM){reference-type="eqref" reference="eq:UGM"} to obtain the first equality below, [\[eq:GM\]](#eq:GM){reference-type="eqref" reference="eq:GM"} to obtain the second, setting $w(s,t) = \log(z-\rho_\mu(s)-\rho_\nu(t))$ and using the definition of $\pi_N$ to obtain the third, and then using Theorem [Theorem 6](#thm:sup){reference-type="ref" reference="thm:sup"} to obtain the fourth, we have $$\begin{aligned} \int_{-\infty}^\infty \log(z-\lambda) ( \mu \boxplus \nu )(\mathrm{d} \lambda) &= \lim_{N \to \infty} \frac{1}{N } \log \mathbf{E}_{\mathcal{U}_N} [ \det (z - A_N - U_NB_NU_N^{-1} ) ] \\ &= \lim_{N \to \infty} \frac{1}{N } \log \mathbf{E}_{\mathcal{S}_N} [ \det (z - A_N - \Sigma_NB_N\Sigma_N^{-1} ) ] \\ &= \lim_{N \to \infty} \frac{1}{N } \log \mathbf{E}_{\mathcal{S}_N} [ e^{NZ_N[w]} ] \\ &= \sup_{ f} \mathcal{G}[w, f ] ,\end{aligned}$$ where the use of Theorem [Theorem 6](#thm:sup){reference-type="ref" reference="thm:sup"} is justified by the fact that, thanks to Lemma [Lemma 26](#lem:adm){reference-type="ref" reference="lem:adm"}, the function $w(s,t) = \log(z-\rho_\mu(s)-\rho_\nu(t))$ is admissible. ◻ We now give an alternative formulation of Theorem [Theorem 7](#thm:main){reference-type="ref" reference="thm:main"} in terms of direct couplings of classical random variables. **Theorem 8** (Hydrodynamic quadrature theorem, coupling version). *Let $\mu$ and $\nu$ be probability measures on the real line with compactly supported density functions. 
Then $$\begin{aligned} \label{eq:cmax1} \int_{-\infty}^\infty \log(z-\lambda) ( \mu \boxplus \nu )(\mathrm{d} \lambda) =\sup_{\Pi} \left\{ \mathbf{E}_\Pi [ \log (z - (X+Y)) ] - \mathcal{E}[\Pi] + \mathcal{E}[\mu] + \mathcal{E}[\nu] \right\},\end{aligned}$$ where $\mathcal{E}[ \cdot ]$ is defined in [\[eq:caldef\]](#eq:caldef){reference-type="eqref" reference="eq:caldef"} and the supremum is taken over all couplings $\Pi$ on $\mathbb{R}^2$ of the measures $\mu$ and $\nu$.* *Likewise, $$\begin{aligned} \label{eq:cmax2} \int_{-\infty}^\infty \log(z-\lambda) (\mu \boxtimes \nu) (\mathrm{d} \lambda) =\sup_{\Pi} \left\{ \mathbf{E}_\Pi [ \log (z - XY) ] - \mathcal{E}[\Pi] + \mathcal{E}[\mu] + \mathcal{E}[\nu] \right\}.\end{aligned}$$* *Finally, $$\begin{aligned} \label{eq:cmax3} \tau \int_{-\infty}^\infty \log(z-\lambda) [\mu]_\tau (\mathrm{d}\lambda) =\sup_{\Pi} \left\{ \mathbf{E}_\Pi [ \log (z - X)\mathrm{1}_{\{Y \leq \tau\}} ] - \mathcal{E}[\Pi] + \mathcal{E}[\mu] \right\},\end{aligned}$$ where the supremum is taken over all couplings $\Pi$ of $\mu$ and the uniform measure on $[0,1]$.* In the majority of the sequel, we will continue working with the formulation Theorem [Theorem 7](#thm:main){reference-type="ref" reference="thm:main"} (as opposed to Theorem [Theorem 8](#thm:main2){reference-type="ref" reference="thm:main2"}) of our main result, as it does not require that the measures $\mu$ and $\nu$ have densities with respect to Lebesgue measure, and its formulation makes it straightforward to solve the variational problem. ## Shape of the maximisers Given a nice function $w$, it is a fairly straightforward computation using the standard techniques of variational calculus to compute the forms taken by maximisers of $\mathcal{G}[w,f]$: **Proposition 9**. *Let $w:[0,1]^2 \to \mathbb{R}$ be admissible. 
Then the coupling density $f_*$ maximising the energy functional $\mathcal{G}[w,f]$ takes the form $$\begin{aligned} f_*(s,t) = \alpha(s) \beta(t) e^{w(s,t)}\end{aligned}$$ for some functions $\alpha,\beta:[0,1] \to \mathbb{R}$. Moreover, for $f_*$ of this form we have $$\begin{aligned} \mathcal{G}[w,f_*] = - \int_0^1 \log \alpha(s) \mathrm{d}s - \int_0^1 \log \beta(t) \mathrm{d}t.\end{aligned}$$* In the three cases $w = \log(z - \rho_\mu(s) - \rho_\nu(t)), \log(z - \rho_\mu(s) \cdot \rho_\nu(t))$ and $\log(z-\rho_\mu(s)) \mathrm{1}_{[0,\tau]}(t)$ (respectively corresponding to addition, multiplication, and compression), we are able to use the characterisation in Proposition [Proposition 9](#prop:maxform){reference-type="ref" reference="prop:maxform"} to compute the maximisers explicitly in terms of the Cauchy transform $$\begin{aligned} G_\mu(z) := \int_{-\infty}^\infty \frac{1}{z-\lambda} \mu(\mathrm{d}\lambda) = \int_0^1 \frac{ \mathrm{d}t }{ z - \rho_\mu(t)}\end{aligned}$$ of a probability measure $\mu$. After computing the maximisers explicitly, we are able to plug them into the energy functional [\[eq:GF\]](#eq:GF){reference-type="eqref" reference="eq:GF"} and obtain explicit formulas for the Cauchy transforms of the measures in [\[eq:oper\]](#eq:oper){reference-type="eqref" reference="eq:oper"}. This leads to our next pair of results, which give implicit formulas for the expectations of $\log(z-\cdot)$ under free additive convolutions, free multiplicative convolutions, and free compressions of probability measures. The first of these two results is concerned with free additive convolution and free multiplicative convolution. 
This result takes shape in terms of objects known as subordination functions for free additive and free multiplicative convolution [@Bia; @Vent1; @BB; @Voi02], and is a variation on well-known results in the literature. **Theorem 10**. *With the symbol $\boxdot$ representing either $\boxplus$ or $\boxtimes$ we have $$\begin{aligned} \label{eq:apollo2} \int_{-\infty}^\infty \log(z - \lambda) \mu \boxdot \nu (\mathrm{d}\lambda) = - \log\omega + \int_{-\infty}^\infty \log(\omega_\mu - \lambda) \mu (\mathrm{d}\lambda) + \int_{-\infty}^\infty \log(\omega_\nu - \lambda) \nu (\mathrm{d}\lambda)\end{aligned}$$ where, in the case $\boxdot = \boxplus$ is free additive convolution, $\omega,\omega_\mu,\omega_\nu$ are functions of $z$ determined implicitly as the unique solutions to the equations $$\begin{aligned} \omega(z) + z &= \omega_\mu(z) + \omega_\nu(z) \label{eq:add1} \\ \frac{1}{\omega(z)} &= G_\mu(\omega_\mu(z)) = G_\nu(\omega_\nu(z)), \label{eq:add2}\end{aligned}$$ and in the case $\boxdot = \boxtimes$ is free multiplicative convolution, they are instead the functions of $z$ given by the unique solutions to the equations $$\begin{aligned} 1+ \frac{1}{\omega(z)} &= \omega_\mu(z) G_\mu(\omega_\mu(z)) = \omega_\nu(z) G_\nu(\omega_\nu(z)) \label{eq:mult1}\\ \omega_\mu(z)\omega_\nu(z) &= z(\omega(z)+1).\label{eq:mult2}\end{aligned}$$* Our next result is concerned with free compression, and is a variant on existing subordination results for free compression, see e.g. [@BN]. **Theorem 11**. 
*We have $$\begin{aligned} \label{eq:apollo3} \int_{-\infty}^\infty \log(z - \lambda) [\mu]_\tau (\mathrm{d}\lambda) = \frac{1}{\tau} \int_{-\infty}^\infty \log(\omega - \lambda)\mu(\mathrm{d}\lambda) + \log \tau - \frac{1-\tau}{\tau} \log \frac{\omega-z}{1-\tau},\end{aligned}$$ where $\omega$ is the function of $z$ (and $\tau$) given by the unique solution to $$\begin{aligned} \label{eq:comp1} 1 = \frac{\omega-z}{1-\tau} G_\mu(\omega ).\end{aligned}$$* Theorems [Theorem 10](#thm:explicit){reference-type="ref" reference="thm:explicit"} and [Theorem 11](#thm:explicit2){reference-type="ref" reference="thm:explicit2"} are proved in Section [7](#sec:char){reference-type="ref" reference="sec:char"}. It is possible to relate both of Theorems [Theorem 10](#thm:explicit){reference-type="ref" reference="thm:explicit"} and [Theorem 11](#thm:explicit2){reference-type="ref" reference="thm:explicit2"} to more standard formulations in free probability. We undertake this task in Section [3](#sec:free){reference-type="ref" reference="sec:free"}. ## A log-inequality relating classical and free random variables It is possible to read off from Theorem [Theorem 7](#thm:main){reference-type="ref" reference="thm:main"} the following simple inequalities relating classical and free convolution. **Corollary 12**. *Let $X,Y$ be independent classical random variables with compactly supported marginal laws $\mu,\nu$. 
Then for all $z$ sufficiently large (more specifically, satisfying Assumption [Assumption 1](#as0){reference-type="ref" reference="as0"}), we have $$\begin{aligned} \label{eq:maxa1} \int_{-\infty}^\infty \log(z-\lambda)(\mu \boxplus \nu)(\mathrm{d}\lambda) \geq \mathbf{E}[\log(z - (X+Y))]\end{aligned}$$ and $$\begin{aligned} \label{eq:maxa2} \int_{-\infty}^\infty \log(z-\lambda)(\mu \boxtimes \nu)(\mathrm{d}\lambda) \geq \mathbf{E}[\log(z - XY)].\end{aligned}$$ Likewise, $$\begin{aligned} \label{eq:maxa3} \int_{-\infty}^\infty \log(z-\lambda)[\mu]_\tau(\mathrm{d}\lambda) \geq \mathbf{E}[\log(z-X)].\end{aligned}$$* *Proof.* Each of these equations follows by setting $f$ equal to the uniform density in each of [\[eq:max1\]](#eq:max1){reference-type="eqref" reference="eq:max1"}, [\[eq:max2\]](#eq:max2){reference-type="eqref" reference="eq:max2"} and [\[eq:max3\]](#eq:max3){reference-type="eqref" reference="eq:max3"}, and using the fact that $\int_{[0,1]^2}W(\rho_\mu(s),\rho_\nu(t))\mathrm{d}s\mathrm{d}t = \mathbf{E}[W(X,Y)]$ for independent $X$ and $Y$ with respective laws $\mu$ and $\nu$. ◻ That completes the section on our main results. In the next section we perform some further free probability calculations. # Further discussion of free probability {#sec:free} For the sake of completeness, in this section we show how the usual formulas for free convolutions and free compression can be derived from scratch using Theorems [Theorem 10](#thm:explicit){reference-type="ref" reference="thm:explicit"} and [Theorem 11](#thm:explicit2){reference-type="ref" reference="thm:explicit2"}. 
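Before doing so, we record a quick numerical sanity check of the inequality [\[eq:maxa1\]](#eq:maxa1){reference-type="eqref" reference="eq:maxa1"}: for $\mu = \nu = \frac{1}{2}(\delta_0+\delta_1)$ the classical side is available in closed form, while the free side can be approximated via the Haar-unitary random matrix model of Section 2.1. A sketch assuming numpy; the parameters $N$, the number of Haar samples, the seed, and the value $z = 2.5 > \mathrm{E}_\mu + \mathrm{E}_\nu = 2$ are our own choices:

```python
import numpy as np

rng = np.random.default_rng(0)
N, samples, z = 300, 20, 2.5

# diagonal matrix with spectrum approximating mu = (delta_0 + delta_1)/2,
# built from the quantile function as in (eq:conv1)
a = np.repeat([0.0, 1.0], N // 2)
A = np.diag(a)

vals = []
for _ in range(samples):
    # Haar-distributed unitary via QR of a complex Ginibre matrix,
    # with the standard phase correction on the columns
    G = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
    Q, R = np.linalg.qr(G)
    Q = Q * (np.diagonal(R) / np.abs(np.diagonal(R)))
    lam = np.linalg.eigvalsh(A + Q @ A @ Q.conj().T)
    vals.append(np.mean(np.log(z - lam)))

# approximates the integral of log(z - .) against mu ⊞ mu (the free side)
free_side = float(np.mean(vals))

# classical side: X + Y ~ Binomial(2, 1/2) for independent X, Y with law mu
classical_side = 0.25 * np.log(z) + 0.5 * np.log(z - 1) + 0.25 * np.log(z - 2)
print(free_side, classical_side)
```

With these parameters the free side exceeds the classical side by a visible margin, in line with [\[eq:maxa1\]](#eq:maxa1){reference-type="eqref" reference="eq:maxa1"}.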
## Relation to standard formulations in free probability Let us note that by differentiating the equation [\[eq:apollo2\]](#eq:apollo2){reference-type="eqref" reference="eq:apollo2"} with respect to $z$ we obtain $$\begin{aligned} \label{eq:da} G_{\mu \boxdot \nu}(z) = - \frac{\omega' }{\omega} + \omega_\mu' G_\mu(\omega_\mu) + \omega_\nu'G_\nu(\omega_\nu),\end{aligned}$$ where $\omega',\omega_\mu',\omega_\nu'$ denote derivatives with respect to $z$, and in the case that $\boxdot = \boxplus$, $\omega_\mu,\omega_\nu,\omega$ satisfy [\[eq:add1\]](#eq:add1){reference-type="eqref" reference="eq:add1"} and [\[eq:add2\]](#eq:add2){reference-type="eqref" reference="eq:add2"}, and in the case that $\boxdot = \boxtimes$, they satisfy [\[eq:mult1\]](#eq:mult1){reference-type="eqref" reference="eq:mult1"} and [\[eq:mult2\]](#eq:mult2){reference-type="eqref" reference="eq:mult2"}. By manipulating the relevant system of equations (either [\[eq:add1\]](#eq:add1){reference-type="eqref" reference="eq:add1"}-[\[eq:add2\]](#eq:add2){reference-type="eqref" reference="eq:add2"} or [\[eq:mult1\]](#eq:mult1){reference-type="eqref" reference="eq:mult1"}-[\[eq:mult2\]](#eq:mult2){reference-type="eqref" reference="eq:mult2"}), we arrive at the following result, characterising the measures $\mu \boxplus \nu$ and $\mu \boxtimes \nu$. **Theorem 13**. 
*We have $$\begin{aligned} \label{eq:nattan0} G_{\mu \boxplus \nu } (z) = \frac{1}{\omega(z)},\end{aligned}$$ where $\omega(z)$ is the unique solution of the system of equations [\[eq:add1\]](#eq:add1){reference-type="eqref" reference="eq:add1"}-[\[eq:add2\]](#eq:add2){reference-type="eqref" reference="eq:add2"}.* *We have $$\begin{aligned} G_{\mu \boxtimes \nu } (z) = \frac{\omega+1}{z \omega }\end{aligned}$$ where $\omega(z)$ is the unique solution of the system of equations [\[eq:mult1\]](#eq:mult1){reference-type="eqref" reference="eq:mult1"}-[\[eq:mult2\]](#eq:mult2){reference-type="eqref" reference="eq:mult2"}.* *Proof.* In the case $\boxdot = \boxplus$, using [\[eq:add2\]](#eq:add2){reference-type="eqref" reference="eq:add2"}, [\[eq:da\]](#eq:da){reference-type="eqref" reference="eq:da"} simplifies to $$\begin{aligned} \label{eq:nattan} G_{\mu \boxplus \nu}(z) = - \frac{\omega' }{\omega} + \frac{\omega_\mu'}{\omega} + \frac{\omega_\nu'}{\omega}.\end{aligned}$$ Differentiating [\[eq:add1\]](#eq:add1){reference-type="eqref" reference="eq:add1"} with respect to $z$, we have $1 = - \omega' + \omega_\mu'+\omega_\nu'$, and plugging this into [\[eq:nattan\]](#eq:nattan){reference-type="eqref" reference="eq:nattan"} completes the proof of [\[eq:nattan0\]](#eq:nattan0){reference-type="eqref" reference="eq:nattan0"}. 
As for the case where $\boxdot = \boxtimes$, using [\[eq:mult1\]](#eq:mult1){reference-type="eqref" reference="eq:mult1"} in [\[eq:da\]](#eq:da){reference-type="eqref" reference="eq:da"} to obtain the first equality below, and rearranging to obtain the second, we have $$\begin{aligned} G_{\mu \boxtimes \nu}(z) &= - \frac{\omega' }{\omega} + \left( 1 + \frac{1}{\omega} \right) \left( \frac{\omega_\mu'}{\omega_\mu} + \frac{\omega_\nu'}{\omega_\nu} \right)\\ &= - \frac{\omega' }{\omega} + \left( 1 + \frac{1}{\omega} \right) \frac{\mathrm{d}}{\mathrm{d}z} \log(\omega_\mu \omega_\nu).\end{aligned}$$ Now appealing to [\[eq:mult2\]](#eq:mult2){reference-type="eqref" reference="eq:mult2"} we have $\omega_\mu \omega_\nu = z ( \omega+1)$ and thus $$\begin{aligned} G_{\mu \boxtimes \nu}(z) &= - \frac{\omega' }{\omega} + \left( 1 + \frac{1}{\omega} \right) \frac{\mathrm{d}}{\mathrm{d}z} \left[ \log z + \log ( \omega+1) \right]\\ &= - \frac{\omega' }{\omega} + \left( 1 + \frac{1}{\omega} \right) \left[ \frac{1}{z} + \frac{\omega'}{\omega+1} \right] = \frac{1}{z} \left( 1 + \frac{1}{\omega}\right),\end{aligned}$$ as required. ◻ In the case of free compression, differentiating through [\[eq:apollo3\]](#eq:apollo3){reference-type="eqref" reference="eq:apollo3"} with respect to $z$, we see that $$\begin{aligned} \label{eq:taumu} G_{ [\mu]_\tau }( z) = \frac{1}{\tau} \left\{ \omega' G_\mu(\omega) - (1-\tau) \frac{\omega'-1}{\omega-z} \right\},\end{aligned}$$ where the $1/\tau$ factor in front comes from the normalisation required to make $[\mu]_\tau$ a probability measure. Using [\[eq:comp1\]](#eq:comp1){reference-type="eqref" reference="eq:comp1"} in [\[eq:taumu\]](#eq:taumu){reference-type="eqref" reference="eq:taumu"} we obtain the following result. **Theorem 14**. 
*We have $$\begin{aligned} G_{[\mu]_\tau}(z) = \frac{1-\tau}{\tau} \frac{1}{\omega(z) - z},\end{aligned}$$ where $\omega(z)$ is the solution to the equation [\[eq:comp1\]](#eq:comp1){reference-type="eqref" reference="eq:comp1"}.* *Proof.* Differentiating [\[eq:apollo3\]](#eq:apollo3){reference-type="eqref" reference="eq:apollo3"} with respect to $z$ we obtain $$\begin{aligned} \label{eq:apollo4} G_{[\mu]_\tau}(z) = \frac{\omega'}{\tau} G_\mu(\omega) - \frac{1-\tau}{\tau} \frac{\omega'-1}{\omega-z}.\end{aligned}$$ Here $\omega(z)$ is the unique solution to $1 = \frac{\omega-z}{1-\tau} G_\mu(\omega )$; simplifying using this statement we obtain the result. ◻ ## $R$-transform Given a probability measure $\mu$ and its associated Cauchy transform $G_\mu(z) := \int_{-\infty}^\infty \frac{1}{z-\lambda}\mu(\mathrm{d}\lambda)$, we define its $R$-transform $R_\mu$ implicitly through the relation $$\begin{aligned} \label{eq:Rdef} z = \frac{1}{G_\mu(z)} + R_\mu(G_\mu(z)).\end{aligned}$$ We now give a formal proof of the following well-known result as a consequence of Theorem [Theorem 13](#thm:addmult){reference-type="ref" reference="thm:addmult"}. **Theorem 15**. 
*We have $$\begin{aligned} R_{\mu \boxplus \nu}(s) = R_\mu(s) + R_\nu(s).\end{aligned}$$* *Proof.* Substituting $\omega_\mu(z)$ for $z$ in [\[eq:Rdef\]](#eq:Rdef){reference-type="eqref" reference="eq:Rdef"} we obtain $$\begin{aligned} \omega_\mu(z) = \frac{1}{G_\mu(\omega_\mu(z))} + R_\mu(G_\mu(\omega_\mu(z))).\end{aligned}$$ Using [\[eq:add2\]](#eq:add2){reference-type="eqref" reference="eq:add2"} we obtain $$\begin{aligned} \omega_\mu(z) = \omega(z) + R_\mu(1/\omega(z)).\end{aligned}$$ By finding the analogous equation for $\nu$ rather than $\mu$, and adding the two equations together, we obtain $$\begin{aligned} \omega_\mu(z) +\omega_\nu(z) = 2 \omega(z) + R_\mu(1/\omega(z)) + R_\nu(1/\omega(z)).\end{aligned}$$ Using [\[eq:add1\]](#eq:add1){reference-type="eqref" reference="eq:add1"} this reduces to $$\begin{aligned} z = \omega(z) + R_\mu(1/\omega(z)) + R_\nu(1/\omega(z)).\end{aligned}$$ Finally, by [\[eq:nattan0\]](#eq:nattan0){reference-type="eqref" reference="eq:nattan0"} we obtain $$\begin{aligned} z = \frac{1}{G_{\mu \boxplus \nu}(z)} + R_\mu(G_{\mu \boxplus \nu}(z)) + R_\nu(G_{\mu \boxplus \nu}(z)).\end{aligned}$$ Thus, $R_\mu(s) + R_\nu(s) = R_{\mu \boxplus \nu}(s)$, completing the proof. ◻ ## $S$-transform Given a probability measure $\mu$, for sufficiently small $z$ define $$\begin{aligned} \psi_\mu(z) := \int_{-\infty}^\infty \sum_{n \geq 1} (zx)^n \mu(\mathrm{d}x).\end{aligned}$$ It is easily verified that $$\begin{aligned} \psi_\mu(z) = \frac{1}{z} G_\mu(1/z)-1.\end{aligned}$$ Let $\chi_\mu(z)$ denote the inverse function of $\psi_\mu(z)$.
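The identity $\psi_\mu(z) = \frac{1}{z} G_\mu(1/z) - 1$ and the inversion $\chi_\mu(\psi_\mu(z)) = z$ can be sanity-checked numerically. The following sketch is our own illustration (not part of the argument): it uses the two-atom measure $\mu = \tfrac12\delta_1 + \tfrac12\delta_2$, for which both $\psi_\mu$ and $G_\mu$ are computable directly.

```python
# Sanity check of psi_mu(z) = (1/z) G_mu(1/z) - 1 and of chi_mu inverting psi_mu,
# for the illustrative two-atom measure mu = (1/2) delta_1 + (1/2) delta_2.
# All names below are our own; nothing here is part of the proof in the text.

atoms = [1.0, 2.0]      # support of mu
weights = [0.5, 0.5]    # masses

def G(w):
    """Cauchy transform G_mu(w) = int (w - x)^{-1} mu(dx)."""
    return sum(p / (w - x) for p, x in zip(weights, atoms))

def psi(z, n_terms=200):
    """psi_mu(z) = int sum_{n >= 1} (z x)^n mu(dx), summed term by term."""
    return sum(p * sum((z * x) ** n for n in range(1, n_terms + 1))
               for p, x in zip(weights, atoms))

def chi(y, lo=0.0, hi=0.49, iters=100):
    """Inverse of psi by bisection; psi is increasing on [0, 1/2) here."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if psi(mid) < y else (lo, mid)
    return 0.5 * (lo + hi)

z = 0.1                                                # |z x| < 1 on the support
assert abs(psi(z) - (G(1.0 / z) / z - 1.0)) < 1e-12    # the displayed identity
assert abs(chi(psi(z)) - z) < 1e-9                     # chi inverts psi
```

Any compactly supported measure with $|zx|<1$ on its support would do equally well here; the bisection only uses that $\psi_\mu$ is increasing near the origin.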
The **$S$-transform** of the measure $\mu$ is the function $$\begin{aligned} \label{eq:Sdef} S_\mu(z) := \frac{1+z}{z} \chi_\mu(z).\end{aligned}$$ We now use Theorem [Theorem 13](#thm:addmult){reference-type="ref" reference="thm:addmult"} to give a formal proof of the following standard result relating the $S$-transform of the multiplicative free convolution of measures $\mu$ and $\nu$ to the respective $S$-transforms of $\mu$ and $\nu$. **Theorem 16**. *We have $$\begin{aligned} \label{eq:Smult} S_{\mu \boxtimes \nu}(z) = S_\mu(z) S_\nu(z).\end{aligned}$$* *Proof.* For probability measures $\mu$, set $J_\mu(z) := zG_\mu(z)-1$. Then $J_\mu(z) = \psi_\mu(1/z)$. In terms of $J_\mu$, the multiplicative part of Theorem [Theorem 13](#thm:addmult){reference-type="ref" reference="thm:addmult"} reads as saying $$\begin{aligned} \label{eq:Jeq} J_{\mu \boxtimes \nu}(z) = \frac{1}{\omega(z)},\end{aligned}$$ where, rephrasing [\[eq:mult1\]](#eq:mult1){reference-type="eqref" reference="eq:mult1"} and [\[eq:mult2\]](#eq:mult2){reference-type="eqref" reference="eq:mult2"} in terms of $J_\mu$, we have $$\begin{aligned} J_\mu (\omega_\mu(z))&=J_\nu(\omega_\nu(z)) = \frac{1}{\omega(z)} \label{eq:mult11}\\ \omega_\mu(z)\omega_\nu(z) &= z(\omega(z)+1)\label{eq:mult12}.\end{aligned}$$ We now compare the quantities $S_\mu \left(\frac{1}{\omega(z)}\right)S_\nu\left(\frac{1}{\omega(z)}\right)$ and $S_{\mu \boxtimes \nu}\left(\frac{1}{\omega(z)}\right)$, and show that they coincide. 
Considering the first of these quantities, by the definition [\[eq:Sdef\]](#eq:Sdef){reference-type="eqref" reference="eq:Sdef"} we have $$\begin{aligned} \label{eq:mozart} S_\mu \left(\frac{1}{\omega(z)}\right)S_\nu\left(\frac{1}{\omega(z)}\right) = (\omega(z)+1)^2 \chi_\mu \left(\frac{1}{\omega(z)}\right)\chi_\nu\left(\frac{1}{\omega(z)}\right).\end{aligned}$$ Plugging [\[eq:mult11\]](#eq:mult11){reference-type="eqref" reference="eq:mult11"} into [\[eq:mozart\]](#eq:mozart){reference-type="eqref" reference="eq:mozart"} we obtain $$\begin{aligned} \label{eq:mozart2} S_\mu \left(\frac{1}{\omega(z)}\right)S_\nu\left(\frac{1}{\omega(z)}\right) = (\omega(z)+1)^2 \chi_\mu \left(J_\mu(\omega_\mu(z)) \right)\chi_\nu\left(J_\nu(\omega_\nu(z)) \right) = \frac{(\omega(z)+1)^2}{\omega_\mu(z)\omega_\nu(z)},\end{aligned}$$ where, to obtain the final equality above, we have used the fact that since $J_\mu(z) = \psi_\mu(1/z)$, and $\chi_\mu$ is the inverse function of $\psi_\mu$, we have $\chi_\mu(J_\mu(z)) = 1/z$. Using [\[eq:mult12\]](#eq:mult12){reference-type="eqref" reference="eq:mult12"}, [\[eq:mozart2\]](#eq:mozart2){reference-type="eqref" reference="eq:mozart2"} subsequently reduces to $$\begin{aligned} \label{eq:mozart3} S_\mu \left(\frac{1}{\omega(z)}\right)S_\nu\left(\frac{1}{\omega(z)}\right) = \frac{(\omega(z)+1)}{z}.\end{aligned}$$ We turn to considering $S_{\mu \boxtimes \nu}\left(\frac{1}{\omega(z)}\right)$.
Using the definition [\[eq:Sdef\]](#eq:Sdef){reference-type="eqref" reference="eq:Sdef"} of the $S$-transform to obtain the first equality below, [\[eq:Jeq\]](#eq:Jeq){reference-type="eqref" reference="eq:Jeq"} to obtain the second equality below, and then $\chi_\mu(J_\mu(z)) = 1/z$ to obtain the third, we obtain $$\begin{aligned} \label{eq:mozart4} S_{\mu \boxtimes \nu}\left(\frac{1}{\omega(z)}\right) = (\omega(z)+1)\chi_{\mu \boxtimes \nu}\left( \frac{1}{\omega(z)} \right) = (\omega(z)+1)\chi_{\mu \boxtimes \nu}\left( J_{\mu \boxtimes \nu}(z) \right) = \frac{\omega(z)+1}{z}.\end{aligned}$$ Comparing [\[eq:mozart3\]](#eq:mozart3){reference-type="eqref" reference="eq:mozart3"} and [\[eq:mozart4\]](#eq:mozart4){reference-type="eqref" reference="eq:mozart4"}, we see that [\[eq:Smult\]](#eq:Smult){reference-type="eqref" reference="eq:Smult"} holds, completing the proof. ◻ ## Relationship between free additive convolution and compression {#sec:compr} Recall the $R$-transform defined in [\[eq:Rdef\]](#eq:Rdef){reference-type="eqref" reference="eq:Rdef"}. We now use Theorem [Theorem 14](#thm:compression){reference-type="ref" reference="thm:compression"} to give a formal proof of the following standard result concerning the $R$-transform of the compression of a measure. **Theorem 17**. *For $\tau \in (0,1]$ we have $$\begin{aligned} \label{eq:symph3} R_{[\mu]_\tau}(s) = R_\mu(\tau s).\end{aligned}$$* *Proof.* Noting that $[\mu]_1 = \mu$, throughout this proof we will use $G_\tau$ and $G_1$ as shorthand for $G_{[\mu]_\tau}$ and $G_\mu$, and do similarly with $R_\tau$ and $R_1$. According to Theorem [Theorem 14](#thm:compression){reference-type="ref" reference="thm:compression"}, $$\begin{aligned} \label{eq:miral} G_\tau(z) = \frac{1}{\tau} \frac{1 - \tau}{\omega(z) - z},\end{aligned}$$ where $\omega(z)$ is the solution to the equation $G_1(\omega(z)) = \frac{1-\tau}{\omega(z)-z}$.
Now on the one hand, according to the definition of the $R$-transform of $[\mu]_\tau$ we have $$\begin{aligned} \label{eq:miral2} R_\tau(G_\tau(z))=z-\frac{1}{G_\tau(z)}.\end{aligned}$$ Using [\[eq:miral\]](#eq:miral){reference-type="eqref" reference="eq:miral"}, [\[eq:miral2\]](#eq:miral2){reference-type="eqref" reference="eq:miral2"} reads $$\begin{aligned} \label{eq:symph} R_\tau \left( \frac{1}{\tau} \frac{1-\tau}{\omega(z)-z} \right) = \frac{1}{1-\tau} ( z - \tau \omega(z)).\end{aligned}$$ On the other hand, substituting $\omega(z)$ for $z$ in the definition of the $R$-transform of $\mu$, we have $$\begin{aligned} \label{eq:miral3} R_1 ( G_1( \omega(z) ) ) = \omega(z) - \frac{1}{G_1(\omega(z))}.\end{aligned}$$ Using $G_1(\omega(z)) = \frac{1-\tau}{\omega(z)-z}$, [\[eq:miral3\]](#eq:miral3){reference-type="eqref" reference="eq:miral3"} reads $$\begin{aligned} \label{eq:symph2} R_1 \left( \frac{1-\tau}{\omega(z)-z} \right) = \frac{1}{1-\tau} ( z- \tau \omega(z) ).\end{aligned}$$ Comparing [\[eq:symph\]](#eq:symph){reference-type="eqref" reference="eq:symph"} and [\[eq:symph2\]](#eq:symph2){reference-type="eqref" reference="eq:symph2"}, and substituting $s = \frac{1}{\tau} \frac{1-\tau}{\omega(z)-z}$, we obtain [\[eq:symph3\]](#eq:symph3){reference-type="eqref" reference="eq:symph3"}. ◻ Given a probability measure $\mu$, write $\lambda_* \mu$ for the pushforward measure under the map $x \mapsto \lambda x$. In other words, if $X$ has law $\mu$, $\lambda X$ has law $\lambda_*\mu$. It is easily verified that $R_{\lambda_* \mu}(s) = \lambda R_\mu(\lambda s)$. It follows in particular that $$\begin{aligned} R_{[\mu]_\tau}(s) = \frac{1}{\tau} R_{\tau_* \mu}(s).\end{aligned}$$ Recall now that $R_{\mu \boxplus \nu}(s) = R_\mu(s) +R_\nu(s)$. It follows that for integers $k \geq 1$ we have $R_{\mu^{\boxplus k}}(s) = kR_\mu(s)$, where $\mu^{\boxplus k} := \mu \boxplus \ldots \boxplus \mu$ denotes the $k$-fold additive free convolution of $\mu$ with itself.
Thus, setting $\tau = 1/k$, it follows that $$\begin{aligned} R_{[\mu]_{1/k}}(s) = k R_{\tau_* \mu}(s) = R_{ (\tau_* \mu)^{\boxplus k} } (s) = R_{ \tau_* \mu^{\boxplus k} } (s).\end{aligned}$$ By the uniqueness of $R$-transforms, we have arrived at the following known result (see e.g. [@ST]). **Theorem 18**. *For integers $k\geq 1$, we have the following identity in law relating free compression with additive free convolution $$\begin{aligned} \label{eq:compadd} [\mu]_{1/k} = (1/k)_* \mu^{\boxplus k} .\end{aligned}$$* With the $[\mu]_\tau$ defined for all $\tau \in (0,1]$, one can then use [\[eq:compadd\]](#eq:compadd){reference-type="eqref" reference="eq:compadd"} to define the fractional additive free convolution, valid for all real $k \geq 1$, by setting $$\begin{aligned} \mu^{ \boxplus k} := k_* [\mu]_{1/k};\end{aligned}$$ see e.g. [@BV] or [@NS]. For real $k \geq 1$, it is also natural to consider the scaling $\sqrt{k}_* [\mu]_{1/k} = (1/\sqrt{k})_* \mu^{\boxplus k}$, as this measure has the same variance as $\mu$. In particular, Shlyakhtenko and Tao [@ST] recently showed that if $$\begin{aligned} \chi(\mu) := \int_{-\infty}^\infty \int_{-\infty}^\infty \log | s - t | \mu(\mathrm{d}s) \mu (\mathrm{d}t) + \frac{3}{4} + \frac{1}{2} \log 2 \pi\end{aligned}$$ denotes the so-called free entropy of a probability measure $\mu$ [@Vent1], then $\chi((1/\sqrt{k})_* \mu^{\boxplus k})$ is monotone non-decreasing in $k$. ## Fiedler inequalities In this section we make a brief remark sketching how, by applying our method to Fiedler's inequality (in place of a quadrature equality), one obtains an inequality for free additive convolution.
Namely, Fiedler's inequality [@fiedler] states that for real diagonal matrices $A$ and $B$ with diagonal entries $a_1,\ldots,a_N$ and $b_1,\ldots,b_N$ we have $$\begin{aligned} \label{eq:fiedler} \min_{ \sigma \in \mathcal{S}_N } \prod_{i=1}^N (z - a_i - b_{\sigma(i)}) \leq \det(z - A - UBU^*) \leq \max_{ \sigma \in \mathcal{S}_N } \prod_{i=1}^N (z - a_i - b_{\sigma(i)}).\end{aligned}$$ Taking $\frac{1}{N}\log$ asymptotics with the empirical spectra of $A$ and $B$ approximating measures $\mu$ and $\nu$ on the real line, we obtain from the lower bound in [\[eq:fiedler\]](#eq:fiedler){reference-type="eqref" reference="eq:fiedler"} the hydrodynamic Fiedler inequality $$\begin{aligned} \inf_\pi \int_0^1 \int_0^1 \log(z - \rho_\mu(s) - \rho_\nu(t)) \pi(\mathrm{d}s,\mathrm{d}t) &\leq \int_{-\infty}^\infty \log(z - x) (\mu \boxplus \nu)(\mathrm{d}x),\end{aligned}$$ where the infimum is over all coupling measures $\pi$ on $[0,1]^2$, cf. [\[eq:mainopen\]](#eq:mainopen){reference-type="eqref" reference="eq:mainopen"}. (Note in this case that the infimum need not be attained by a measure with a density with respect to Lebesgue measure.) We note that the equivalent bound obtained from using the upper bound in [\[eq:fiedler\]](#eq:fiedler){reference-type="eqref" reference="eq:fiedler"} only yields a weaker version of [\[eq:max1\]](#eq:max1){reference-type="eqref" reference="eq:max1"}; this follows from the fact that $\int_0^1 \int_0^1 f \log f \geq 0$ for all coupling densities $f$. # Asymptotic expectations over the unitary group {#sec:UN} The main objective of this section is to prove Theorem [Theorem 4](#thm:UN){reference-type="ref" reference="thm:UN"}. We will consider the case of addition first, namely the statement that $$\begin{aligned} \label{eq:UGM B} \lim_{N \to \infty} \frac{1}{N } \log \mathbf{E}_{\mathcal{U}_N} [ \det (z - A_N - U_N B_N U_N^{-1} ) ] = \int_{-\infty}^\infty \log(z-\lambda) ( \mu \boxplus \nu )(\mathrm{d} \lambda) .
\end{aligned}$$ For a matrix $M$ define the characteristic polynomial of the matrix $M$ in the variable $x$ by $\chi_{M}(x) = \det (xI - M)$. For $d$-dimensional Hermitian matrices $A$ and $B$ with characteristic polynomials $p$ and $q$, we define the *finite free additive convolution* of $p$ and $q$ to be $$p (x) \boxplus_{d} q (x) = \mathbf{E}_{\mathcal{U}_d} [ \det (xI - A - UBU^{*} ) ],$$ where the expectation is taken over unitary matrices $U$ sampled according to the Haar measure on $\mathcal{U}_d$. These convolutions do not depend on the specific choice of $A$ and $B$, but only on $p$ and $q$, and for practical purposes, we assume that $A$ and $B$ are diagonal. The connection with free probability is that as $d\to\infty$, these polynomial convolutions approximate free additive convolution. We will use the following result to prove [\[eq:UGM B\]](#eq:UGM B){reference-type="eqref" reference="eq:UGM B"}. **Theorem 19** (Arizmendi, Perales [@AP]). *Let $\mu$ and $\nu$ be probability measures supported on a compact subset of the real line. Let $(p_d)_{d=1}^\infty$ and $(q_d)_{d=1}^\infty$ be sequences of monic real-rooted polynomials, where $p_d$ and $q_d$ have degree $d$.* *If the empirical root distributions of these sequences of polynomials converge weakly to $\mu$ and $\nu$ respectively, then the empirical root distributions of the sequence $(p_d\boxplus_d q_d)_{d=1}^\infty$ converge weakly to $\mu\boxplus \nu$.* It is well known that weak convergence implies pointwise convergence of the Stieltjes transform, i.e.
$$\begin{aligned} \label{eq:UGM C} \lim_{N \to \infty} \frac{1}{N } \frac{d}{dz} \log \mathbf{E}_{\mathcal{U}_N} [ \det (z - A_N - U_N B_N U_N^{-1} ) ] =\int^\infty_{-\infty}\frac{1}{z-s} (\mu \boxplus \nu) (\mathrm{d}s).\end{aligned}$$ However, it is not directly true that [\[eq:UGM C\]](#eq:UGM C){reference-type="eqref" reference="eq:UGM C"} is equivalent to [\[eq:UGM B\]](#eq:UGM B){reference-type="eqref" reference="eq:UGM B"}, and some care is needed to prove the latter. To do this, recall that a sequence of real probability measures $\mu_n$ converges in moments to a measure $\mu$ if for all $k\in\mathbb{N}$, we have the convergence $$\int t^k \mu_{n}(dt)\to \int t^k \mu(dt).$$ The proof of Theorem [Theorem 19](#thm finite to free){reference-type="ref" reference="thm finite to free"} in [@AP] implies the following corollary. **Corollary 20**. *Let $\mu$ and $\nu$ be probability measures supported on a compact subset of the real line. Let $(p_d)_{d=1}^\infty$ and $(q_d)_{d=1}^\infty$ be sequences of monic real-rooted polynomials, where $p_d$ and $q_d$ have degree $d$. If the empirical root distributions of these sequences of polynomials converge in moments to $\mu$ and $\nu$, respectively, then the empirical root distributions of the sequence $(p_d\boxplus_d q_d)_{d=1}^\infty$ converge to $\mu\boxplus \nu$ in moments.* We now prove the main result of this section. We assume that for all $N$, $||A_N||$ and $||B_N||$ are uniformly bounded by some constants $r_A$ and $r_B$. Let $\lambda_1,\ldots, \lambda_N$ be the roots of $p_{A_N}\boxplus_N p_{B_N}$, where $p_{A_N}$ and $p_{B_N}$ denote the characteristic polynomials of $A_N$ and $B_N$, and let $\mu_N := \frac{1}{N}\sum_{i=1}^N \delta_{\lambda_i}$ denote the associated empirical root distribution.
Then, for $z> r_A+r_B$ we have that $$\begin{aligned} \frac{1}{N }\log \mathbf{E}_{\mathcal{U}_N} [ \det (z - A_N - U_N B_N U_N^{-1} ) ] &=&\frac{1}{N} \sum^N_{i=1} \log( z-\lambda_i) \\ &=&\log(z)- \frac{1}{N}\sum^N_{i=1}\left(\sum^\infty_{k=1}\frac{\lambda_i^k}{kz^k} \right)\\&=&\log(z)- \sum^\infty_{k=1}\left(\frac{1}{N}\sum^N_{i=1}\frac{\lambda_i^k}{kz^k} \right)\\&=& \log(z)- \sum^\infty_{k=1}\left(\frac{m_k(\mu_N)}{kz^k} \right),\end{aligned}$$ where, for a measure $\mu$ and $k\in\mathbb{N}$, $m_k(\mu)=\int t^k \mu(dt)$ denotes the moment of order $k$. A similar derivation shows that, for $z>r_A+r_B$, $$\int^\infty_{-\infty} \log (z-s) (\mu \boxplus \nu) (\mathrm{d}s)=\log(z)- \sum^\infty_{k=1}\left(\frac{m_k(\mu\boxplus \nu)}{kz^k} \right).$$ Now, by Corollary [Corollary 20](#cor moments){reference-type="ref" reference="cor moments"}, we know that $m_k(\mu_N) \to m_k (\mu\boxplus \nu)$ and $|m_k(\mu_N)|\leq (r_A+r_B)^k$, thus by dominated convergence, for $z>r_A+r_B$ we have $$\begin{aligned} \lim_{N \to \infty}\frac{1}{N }\log \mathbf{E}_{\mathcal{U}_N} [ \det (z - A_N - U_N B_N U_N^{-1} ) ] &=& \log(z)- \lim_{N \to \infty} \sum^\infty_{k=1}\left(\frac{m_k(\mu_N)}{kz^k} \right)\\&=& \log(z)- \sum^\infty_{k=1}\left(\frac{m_k(\mu\boxplus \nu)}{kz^k} \right)\\ &=&\int^\infty_{-\infty} \log (z-s) (\mu \boxplus \nu) (\mathrm{d}s).\end{aligned}$$ This proves the result for additive convolution. To prove the case for multiplicative convolution and compression one uses the results of Arizmendi, Garza-Vargas and Perales [@AGP], where they prove the analogous results to Theorem [Theorem 19](#thm finite to free){reference-type="ref" reference="thm finite to free"} for these cases. **Theorem 21** ([@AGP]). *Let $\mu$ and $\nu$ be probability measures supported on a compact subset of the real line. Let $(p_d)_{d=1}^\infty$ and $(q_d)_{d=1}^\infty$ be sequences of monic real-rooted polynomials, where $p_d$ and $q_d$ have degree $d$, and assume that the $q_d$ have only non-negative roots.
If the empirical root distributions of these sequences of polynomials converge weakly to $\mu$ and $\nu$ respectively, then the empirical root distributions of the sequence $(p_d\boxtimes_d q_d)_{d=1}^\infty$ converge weakly to $\mu\boxtimes \nu$. Moreover, if the convergence of $(p_d)_{d=1}^\infty$ and $(q_d)_{d=1}^\infty$ holds in moments, then so does the convergence of $(p_d\boxtimes_d q_d)_{d=1}^\infty$ to $\mu\boxtimes \nu$.* **Theorem 22** (Hoskins and Kabluchko [@HK], Arizmendi, Garza-Vargas and Perales [@AGP]). *Fix $t \in (0, 1)$ and let $(p_d)_{d=1}^\infty$ be a sequence of polynomials. Assume that the empirical root distributions of $(p_d)_{d=1}^\infty$ converge weakly to a compactly supported probability measure $\mu$. Then, if we set $$r_d(x) := D^{\lfloor (1-t )d\rfloor } p_d(t x),$$ the empirical root distributions of $(r_d)_{d=1}^\infty$ converge weakly to $\mu^{\boxplus 1/t}$. Moreover, if the convergence of $(p_d)_{d=1}^\infty$ holds in moments, then so does the convergence of $(r_d)_{d=1}^\infty$ to $\mu^{\boxplus 1/t}$.* # Asymptotic expectations over the symmetric group {#sec:SN} ## Proof of Theorem [Theorem 6](#thm:sup){reference-type="ref" reference="thm:sup"} {#proof-of-theorem-thmsup} In this section we prove Theorem [Theorem 6](#thm:sup){reference-type="ref" reference="thm:sup"}. We recall that under a probability measure $\mathbf{P}$ with expectation $\mathbf{E}$ we have a sequence $(\sigma_N)_{N \geq 1}$ of permutations, where each $\sigma_N$ is uniformly distributed on the symmetric group $\mathcal{S}_N$. To $\sigma_N$ we associate a random measure $\pi_N$ on $[0,1]^2$ by setting $\pi_N := \frac{1}{N} \sum_{i=1}^N \delta_{(i/N,\sigma_N(i)/N)}$. We recall that a function $w:[0,1]^2 \to \mathbb{R}$ is called rectangular step if there exist $0 =s_0 \leq \ldots \leq s_k = 1$ and $0 = t_0 \leq \ldots \leq t_\ell = 1$ such that $w$ is constant on each $[s_i,s_{i+1})\times [t_j,t_{j+1})$.
A function $w$ is admissible if for every $\delta > 0$ there exists a rectangular step function $\tilde{w}$ such that $|\tilde{w}(s,t) -w(s,t)| \leq \delta$ for all $(s,t) \in [0,1]^2$. Theorem [Theorem 6](#thm:sup){reference-type="ref" reference="thm:sup"} is the statement that for all admissible $w$, we have $\lim_{N \to \infty} \frac{1}{N}\log\mathbf{E}[e^{NZ_N[w]}] = \sup_f \mathcal{G}[w,f]$, where for coupling densities $f$, $\mathcal{G}[w,f] := \int_{[0,1]^2} (w - \log f) f$. To go about proving Theorem [Theorem 6](#thm:sup){reference-type="ref" reference="thm:sup"}, we begin with the following reduction. **Lemma 23**. *If Theorem [Theorem 6](#thm:sup){reference-type="ref" reference="thm:sup"} holds for every rectangular step function $w$, then it holds for every admissible $w$.* *Proof.* Suppose the hypothesis of the lemma holds. Let $w$ be admissible and let $\delta > 0$ be arbitrary. Now let $\tilde{w}$ be a rectangular step function such that $|\tilde{w}(s,t) - w(s,t)| \leq \delta$ for all $(s,t) \in [0,1]^2$. Note then that with $Z_N[w] := \int_{[0,1]^2} w \mathrm{d}\pi_N$ we have $|Z_N[\tilde{w}] -Z_N[w]| \leq \delta$. Moreover, for every coupling density $f$ we have $| \mathcal{G}[\tilde{w},f] - \mathcal{G}[w,f]| \leq \delta$.
It follows that for every $N$ we have $$\begin{aligned} \left| \frac{1}{N} \log \mathbf{E}[ e^{ N Z_N[w]} ] - \frac{1}{N} \log \mathbf{E}[ e^{N Z_N[\tilde{w}]}] \right|\leq \delta,\end{aligned}$$ and $$\begin{aligned} |\sup_f \mathcal{G}[w,f] - \sup_f \mathcal{G}[\tilde{w},f]| \leq \delta.\end{aligned}$$ Finally, by the hypothesis of the lemma, $$\begin{aligned} \lim_{N \to \infty} \frac{1}{N} \log \mathbf{E}[ e^{N Z_N[\tilde{w}]} ] = \sup_f \mathcal{G}[\tilde{w},f].\end{aligned}$$ It follows that $$\begin{aligned} \sup_f \mathcal{G}[w,f] - 2\delta \leq \lim \inf_{N \to \infty} \frac{1}{N} \log \mathbf{E}[ e^{ N Z_N[w]} ] \leq \lim \sup_{N \to \infty} \frac{1}{N} \log \mathbf{E}[ e^{ N Z_N[w]} ] \leq \sup_f \mathcal{G}[w,f] + 2\delta.\end{aligned}$$ Since $\delta$ was arbitrary, the result holds. ◻ Thus our task in the remainder of this section is to establish Theorem [Theorem 6](#thm:sup){reference-type="ref" reference="thm:sup"} for rectangular step $w$. Our next step is to establish the following lower bound. **Lemma 24**. *Let $w:[0,1]^2 \to \mathbb{R}$ be rectangular step. Then $$\begin{aligned} \lim \inf_{N \to \infty} \frac{1}{N} \log \mathbf{E}[ e^{ N Z_N[w]}] \geq \sup_f \mathcal{G}[w,f].\end{aligned}$$* *Proof.* Fix $\delta > 0$ arbitrary. Let $f_0$ be chosen so that $\mathcal{G}[w,f_0] \geq \sup_f \mathcal{G}[w,f]-\delta$. Define the subset $$\begin{aligned} A_\delta := \{ \pi \in \mathcal{M}_1([0,1]^2) : | \pi([s_i,s_{i+1})\times[t_j,t_{j+1})) - \int_{[s_i,s_{i+1})\times[t_j,t_{j+1})} f_0 | \leq \delta ~~ \forall i,j \}\end{aligned}$$ of $\mathcal{M}_1([0,1]^2)$. Note that $A_\delta$ is a Borel subset of $\mathcal{M}_1([0,1]^2)$ endowed with the topology of weak convergence.
Then $\mathbf{E}[ e^{ N Z_N[w]}] \geq \mathbf{E}[ e^{ N Z_N[w]} \mathrm{1}\{ \pi_N \in A_\delta \}] \geq \mathbf{P}( \pi_N \in A_\delta ) \inf_{\pi \in A_\delta} e^{ N \int_{[0,1]^2} w ~\mathrm{d}\pi }$, and accordingly, $$\begin{aligned} \frac{1}{N} \log \mathbf{E}[ e^{ N Z_N[w]}] \geq \frac{1}{N} \log \mathbf{P}( \pi_N \in A_\delta ) + \inf_{\pi \in A_\delta} \int_{[0,1]^2} w ~\mathrm{d}\pi.\end{aligned}$$ There is a constant $R_w$ depending only on $w$ such that $$\begin{aligned} \label{eq:toucan1} \inf_{\pi \in A_\delta} \int_{[0,1]^2} w ~\mathrm{d}\pi \geq \int_{[0,1]^2} wf_0 - R_w\delta.\end{aligned}$$ Moreover, using Trashorras's large deviation principle, Theorem [Theorem 5](#thm:trashorras){reference-type="ref" reference="thm:trashorras"}, to obtain the first inequality below we have $$\begin{aligned} \label{eq:toucan2} \lim \inf_{N \to \infty} \frac{1}{N} \log \mathbf{P}( \pi_N \in A_\delta ) \geq - \inf_{ \pi \in A_\delta^{\circ} } I(\pi) \geq - \int_{[0,1]^2} f_0 \log f_0,\end{aligned}$$ where the last inequality follows from the fact that $f_0(s,t)\mathrm{d}s\mathrm{d}t$ is itself plainly an element of $A_\delta^{\circ}$. Combining [\[eq:toucan1\]](#eq:toucan1){reference-type="eqref" reference="eq:toucan1"} and [\[eq:toucan2\]](#eq:toucan2){reference-type="eqref" reference="eq:toucan2"} we have $$\begin{aligned} \lim \inf_{N \to \infty} \frac{1}{N} \log \mathbf{E}[ e^{ N Z_N[w]}] \geq \mathcal{G}[w,f_0]-R_w\delta.\end{aligned}$$ Since $\mathcal{G}[w,f_0]$ lies within $\delta$ of the supremum of $\mathcal{G}[w,f]$, it follows that $$\begin{aligned} \lim \inf_{N \to \infty} \frac{1}{N} \log \mathbf{E}[ e^{ N Z_N[w]}] \geq \sup_f \mathcal{G}[w,f]-(R_w+1)\delta.\end{aligned}$$ Since $\delta > 0$ was arbitrary, the result holds. ◻ Now we prove the corresponding upper bound. **Lemma 25**. *Let $w:[0,1]^2 \to \mathbb{R}$ be rectangular step. 
Then $$\begin{aligned} \lim \sup_{N \to \infty} \frac{1}{N} \log \mathbf{E}[ e^{ N Z_N[w]}] \leq \sup_f \mathcal{G}[w,f].\end{aligned}$$* *Proof.* Fix $\delta > 0$. Given a tuple $\mathbf{k} := (k_{i,j})_{ 1 \leq i \leq k, 1 \leq j \leq \ell}$ of non-negative integers, define the subset $$\begin{aligned} B_\delta(\mathbf{k}) := \{ \pi \in \mathcal{M}_1([0,1]^2) : \pi([s_i,s_{i+1})\times[t_j,t_{j+1})) \in [k_{i,j}\delta,(k_{i,j}+1)\delta) ~\forall 1 \leq i \leq k, 1 \leq j \leq \ell \}\end{aligned}$$ of $\mathcal{M}_1([0,1]^2)$. Like $A_\delta$ in the previous lemma's proof, each $B_\delta(\mathbf{k})$ is a Borel subset in the topology of weak convergence. Moreover, note that if $M$ is the smallest integer at least as large as $1/\delta$, then $$\begin{aligned} \mathcal{M}_1([0,1]^2) = \bigcup_{ \mathbf{k} : 0 \leq k_{i,j} \leq M } B_\delta(\mathbf{k}),\end{aligned}$$ where the union is taken over all tuples $\mathbf{k} = (k_{i,j})_{1 \leq i \leq k, 1 \leq j \leq \ell}$ with components between $0$ and $M$.
Now $$\begin{aligned} \mathbf{E}[ e^{ N Z_N[w]}] &\leq \sum_{ \mathbf{k} : 0 \leq k_{i,j} \leq M } \mathbf{E}[ e^{ N Z_N[w]} \mathrm{1}\{ \pi_N \in B_\delta(\mathbf{k}) \} ] \\ &\leq \sum_{ \mathbf{k} : 0 \leq k_{i,j} \leq M } \mathbf{P}( \pi_N \in B_\delta(\mathbf{k})) \sup_{\pi \in B_\delta(\mathbf{k}) }e^{N\int_{[0,1]^2}w\mathrm{d}\pi}\\ &\leq M^{k\ell} \max_{ \mathbf{k} : 0 \leq k_{i,j} \leq M } \left\{ \mathbf{P}( \pi_N \in B_\delta(\mathbf{k})) \sup_{\pi \in B_\delta(\mathbf{k}) }e^{N\int_{[0,1]^2}w\mathrm{d}\pi} \right\}.\end{aligned}$$ Taking limsups to obtain the first inequality below, and then appealing to the upper bound in Trashorras's large deviation principle, Theorem [Theorem 5](#thm:trashorras){reference-type="ref" reference="thm:trashorras"}, to obtain the second, we have $$\begin{aligned} \label{eq:cooper} \lim \sup_{N \to \infty} \frac{1}{N} \log \mathbf{E}[e^{NZ_N[w]}] &\leq \max_{ \mathbf{k} : 0 \leq k_{i,j} \leq M } \left\{ \lim \sup_{N \to \infty} \frac{1}{N} \log \mathbf{P}( \pi_N \in B_\delta(\mathbf{k})) + \sup_{\pi \in B_\delta(\mathbf{k}) }\int_{[0,1]^2}w\mathrm{d}\pi \right\} \nonumber \\ &\leq \max_{ \mathbf{k} : 0 \leq k_{i,j} \leq M } \left\{ - \inf_{ \pi \in \overline{B_\delta(\mathbf{k})} }I[\pi]+ \sup_{\pi \in B_\delta(\mathbf{k}) }\int_{[0,1]^2}w\mathrm{d}\pi \right\}.\end{aligned}$$ There is a constant $R_w$ depending only on $w$ such that for any $\mathbf{k}$, and for all $\pi,\pi' \in B_\delta(\mathbf{k})$, we have $$\begin{aligned} \left| \int_{[0,1]^2}w\mathrm{d}\pi - \int_{[0,1]^2}w\mathrm{d}\pi' \right| \leq R_w \delta.\end{aligned}$$ In particular from [\[eq:cooper\]](#eq:cooper){reference-type="eqref" reference="eq:cooper"} we have $$\begin{aligned} \lim \sup_{N \to \infty} \frac{1}{N} \log \mathbf{E}[e^{NZ_N[w]}] &\leq \max_{ \mathbf{k} : 0 \leq k_{i,j} \leq M } \left\{ \sup_{ f : f \mathrm{d}s\mathrm{d}t \in \overline{B_\delta(\mathbf{k})} } \mathcal{G}[w,f]+ R_w \delta \right\}\\ &\leq \sup_{ f} \mathcal{G}[w,f]+ R_w
\delta.\end{aligned}$$ Since $\delta > 0$ was arbitrary, that completes the proof. ◻ We now tie together our work. *Proof of Theorem [Theorem 6](#thm:sup){reference-type="ref" reference="thm:sup"}.* According to Lemma [Lemma 23](#lem:reduction){reference-type="ref" reference="lem:reduction"}, in proving Theorem [Theorem 6](#thm:sup){reference-type="ref" reference="thm:sup"} we may assume without loss of generality that $w$ is rectangular step. Combining the lower bound in Lemma [Lemma 24](#lem:lower){reference-type="ref" reference="lem:lower"} with the upper bound in Lemma [Lemma 25](#lem:upper){reference-type="ref" reference="lem:upper"}, we obtain the statement of Theorem [Theorem 6](#thm:sup){reference-type="ref" reference="thm:sup"}. ◻ ## Proof of Lemma [Lemma 26](#lem:adm){reference-type="ref" reference="lem:adm"} {#proof-of-lemma-lemadm} **Lemma 26**. *Let $\mu$ and $\nu$ be probability measures on the real line with compact support. Then for any $z$ satisfying Assumption [Assumption 1](#as0){reference-type="ref" reference="as0"} each of the functions $w_z(s,t) := \log(z - \rho_\mu(s) - \rho_\nu(t)), w_z(s,t) := \log(z - \rho_\mu(s)\rho_\nu(t) )$ and $w_z(s,t) := \log(z-\rho_\mu(s))\mathrm{1}_{[0,\tau]}(t)$ is admissible.* *Proof of Lemma [Lemma 26](#lem:adm){reference-type="ref" reference="lem:adm"}.* We prove the result for $\log(z - \rho_\mu(s) - \rho_\nu(t))$. The other proofs are similar. Note first that $w:[0,1]^2 \to \mathbb{R}$ given by $w(s,t) := \log(z - \rho_\mu(s)-\rho_\nu(t))$ is decreasing in both the $s$ and $t$ variable. Fix $\delta > 0$. Since $\mu$ and $\nu$ have bounded support, suppose their supports are both contained in an interval $[a,b]$. Let $a = u_0 \leq u_1 \leq \ldots \leq u_M = b$ be chosen so that $u_{i+1}-u_i \leq \delta$ for each $i$, and chosen so that no $u_i$ coincides with an atom of $\mu$ or $\nu$.
Define $$\begin{aligned} s_i := \int_{-\infty}^{u_i} \mu(\mathrm{d}x) \qquad \text{and} \qquad t_i := \int_{-\infty}^{u_i} \nu(\mathrm{d}x).\end{aligned}$$ Then $\rho_\mu(s_{i+1})-\rho_\mu(s_i)\leq \delta$ and $\rho_\nu(t_{i+1})-\rho_\nu(t_i) \leq \delta$. Let $z > b$. Then there is a constant $C_{z,b}$ such that $|\log(z - (x+y)) - \log(z-x)| \leq C_{z,b} \delta$ whenever $x \in [a,b]$ and $|y| < \delta$. It follows that there is a constant depending on $\mu, \nu$ and $z$ but independent of $\delta$ such that whenever $(s,t) \in [s_i,s_{i+1})\times [t_j,t_{j+1})$ we have $$\begin{aligned} |\log(z - \rho_\mu(s) - \rho_\nu(t) ) - \log(z - \rho_\mu(s_{i}) - \rho_\nu(t_{j}) )| \leq C_{\mu,\nu,z} \delta.\end{aligned}$$ It follows that the rectangular step function $\tilde{w}:[0,1]^2 \to \mathbb{R}$ defined by $$\begin{aligned} \tilde{w}(s,t) := \log(z - \rho_\mu(s_i) - \rho_\nu(t_j) ) \qquad \text{for $(s,t) \in [s_i,s_{i+1})\times [t_j,t_{j+1})$}\end{aligned}$$ satisfies $$\begin{aligned} |\tilde{w}(s,t) - \log(z - \rho_\mu(s)-\rho_\nu(t))| \leq C_{\mu,\nu,z}\delta \qquad \text{for all $(s,t) \in [0,1]^2$}.\end{aligned}$$ Since $\delta > 0$ was arbitrary, that completes the proof. ◻ # The entropy description {#sec:entropy} ## Classical entropy We now turn to an alternative interpretation, phrasing Theorem [Theorem 7](#thm:main){reference-type="ref" reference="thm:main"} directly in terms of couplings of probability measures. Given a probability measure $\pi$ on $\mathbb{R}^k$ with Radon-Nikodym derivative $g$ with respect to Lebesgue measure, we define the differential entropy of $\pi$ by $$\begin{aligned} \label{eq:caldef} \mathcal{E}[\pi] := \int_{\mathbb{R}^k} g(\mathbf{x}) \log g(\mathbf{x})\mathrm{d}\mathbf{x}.\end{aligned}$$ Suppose $\mu$ is a probability measure on $\mathbb{R}$ with density $g$ with respect to Lebesgue measure.
Differentiating through the equation $t = \int_{-\infty}^{\rho_\mu(t)} g(x)\mathrm{d}x$, it is easily seen that $g(\rho_\mu(t))=1/\rho_\mu'(t)$. Then by taking the change of variable $x = \rho_\mu(t)$ in [\[eq:caldef\]](#eq:caldef){reference-type="eqref" reference="eq:caldef"}, we have $$\begin{aligned} \label{eq:calnew} \mathcal{E}[\mu] = - \int_0^1 \log \rho_\mu'(t) \mathrm{d}t.\end{aligned}$$ Let $\mu$ and $\nu$ be probability measures on the real line. Recall that a probability measure $\pi$ on $[0,1]^2$ is a coupling measure if it has uniform marginals. Recall that a coupling of $\mu$ and $\nu$ is a probability measure $\Pi$ on $\mathbb{R}^2$ whose marginals are given by $\mu$ and $\nu$. Every coupling measure $\pi$ gives rise to a coupling of the measures $\mu$ and $\nu$: we let $\Pi$ be the law of the random variable $(\rho_\mu(S),\rho_\nu(T))$, where $(S,T)$ have law $\pi$. If either $\mu$ or $\nu$ (or both) has a density with respect to Lebesgue measure, the correspondence between $\pi$ and $\Pi$ is bijective. If both $\mu$ and $\nu$ contain atoms, then $\rho_\mu$ and $\rho_\nu$ both have intervals, say $[s,s']$ and $[t,t']$ on which they are constant. If $\pi$ and $\pi'$ are two different coupling measures that give the same mass to $[s,s']$ and $[t,t']$, and agree on $[0,1]^2 - [s,s'] \times [t,t']$, then these measures will give rise to the same coupling of $\mu$ and $\nu$. With this picture in mind, we say that a coupling measure $\pi$ is **minimal** for the measures $\mu$ and $\nu$ if, for each pair of intervals $[s,s')$ and $[t,t')$ on which $\rho_\mu$ and $\rho_\nu$ are constant, $\pi$ has a constant density with respect to Lebesgue measure on $[s,s') \times [t,t')$. There is a bijective correspondence between minimal coupling measures for $(\mu,\nu)$ and couplings of $\mu$ and $\nu$. We now seek to relate the quantity $\mathcal{G}[w,\pi]$ to the differential entropy $\mathcal{E}[\Pi]$. We begin with the following claim. **Lemma 27**. 
*Suppose $\mu$ and $\nu$ have densities with respect to Lebesgue measure. Let $\pi$ be the (minimal) coupling measure associated with a coupling $\Pi$ of $(\mu,\nu)$. Suppose $\pi$ has density $f$ with respect to Lebesgue measure. Then $$\begin{aligned} \int_0^1 \int_0^1 f(s,t) \log f(s,t) \mathrm{d}s \mathrm{d}t = - \mathcal{E}[\mu] -\mathcal{E}[\nu] + \mathcal{E}[\Pi],\end{aligned}$$ where $\mathcal{E}[\cdot]$ is defined in [\[eq:caldef\]](#eq:caldef){reference-type="eqref" reference="eq:caldef"}.* *Proof.* Note that $$\begin{aligned} \pi( [s,s') \times [t,t') ) := \Pi ( [\rho_\mu(s),\rho_\mu(s')) \times [ \rho_\nu(t),\rho_\nu(t')) ).\end{aligned}$$ Accordingly, if $F$ is the density of $\Pi$ with respect to Lebesgue measure, we have $$\begin{aligned} F(\rho_\mu(s),\rho_\nu(t)) \rho_\mu'(s) \rho_\nu'(t) = f(s,t)\end{aligned}$$ for all $s,t \in [0,1]$. In particular, $$\begin{aligned} \int_0^1 \int_0^1 f(s,t) \log f(s,t) \mathrm{d}s \mathrm{d}t &= \int_0^1 \int_0^1 (\log F(\rho_\mu(s),\rho_\nu(t)) + \log \rho_\mu'(s) + \log \rho_\nu'(t)) f(s,t) \mathrm{d}s \mathrm{d}t \\ &= - \mathcal{E}[\mu] - \mathcal{E}[\nu] + \int_0^1 \int_0^1 \log F(\rho_\mu(s),\rho_\nu(t)) f(s,t) \mathrm{d}s \mathrm{d}t,\end{aligned}$$ where the latter equality above follows from [\[eq:throughfact\]](#eq:throughfact){reference-type="eqref" reference="eq:throughfact"} and [\[eq:calnew\]](#eq:calnew){reference-type="eqref" reference="eq:calnew"}. The result follows from noting that $\int_0^1 \int_0^1 \log F(\rho_\mu(s),\rho_\nu(t)) f(s,t) \mathrm{d}s \mathrm{d}t = \mathbf{E}_\Pi[\log F(X,Y)] =: \mathcal{E}[\Pi]$. ◻ We now have the following corollary. **Corollary 28**. *Suppose $\mu$ and $\nu$ have densities with respect to Lebesgue measure. Let $w:[0,1]^2 \to \mathbb{R}$ be a function of the form $w(s,t) = W(\rho_\mu(s),\rho_\nu(t))$. 
Then if $\Pi$ is the coupling of $(\mu,\nu)$ associated with a coupling measure $\pi$, we have $$\begin{aligned} \mathcal{G}[w,\pi] = \mathbf{E}_\Pi [W(X,Y) ] - \mathcal{E}[\Pi] + \mathcal{E}[\mu] + \mathcal{E}[\nu].\end{aligned}$$* *Proof.* This follows immediately from Lemma [Lemma 27](#lem:pre1){reference-type="ref" reference="lem:pre1"} and the fact that $$\int_0^1 \int_0^1 w(s,t) \, \pi(\mathrm{d}s \, \mathrm{d}t) = \mathbf{E}_\Pi[W(X,Y)].$$ ◻ We are now equipped to give another account of Theorem [Theorem 7](#thm:main){reference-type="ref" reference="thm:main"} in terms of couplings of classical random variables. # Variational techniques {#sec:char} ## Proof of Proposition [Proposition 9](#prop:maxform){reference-type="ref" reference="prop:maxform"} {#proof-of-proposition-propmaxform} Let $w:[0,1]^2 \to \mathbb{R}$ be admissible. Then the coupling density $f$ maximising the energy functional $\mathcal{G}[w,f]$ takes the form $$\begin{aligned} f(s,t) = \alpha(s) \beta(t) e^{w(s,t)}\end{aligned}$$ for some functions $\alpha,\beta:[0,1] \to \mathbb{R}$. Moreover, for $f$ of this form we have $$\begin{aligned} \mathcal{G}[w,f] = - \int_0^1 \log \alpha(s) \mathrm{d}s - \int_0^1 \log \beta(t) \mathrm{d}t.\end{aligned}$$ *Proof of Proposition [Proposition 9](#prop:maxform){reference-type="ref" reference="prop:maxform"}.* Given an admissible $w:[0,1]^2 \to \mathbb{R}$, let $f$ be the maximiser of the energy functional $\mathcal{G}[w,f]$. For suitable $\psi$, we will use the stationarity condition $\frac{\mathrm{d}}{\mathrm{d}\varepsilon}\big|_{\varepsilon=0} \mathcal{G}[w,f+\varepsilon \psi] = 0$ to deduce the form taken by $f$. To characterise the suitable perturbation functions $\psi$, we recall that $f$ must satisfy the consistency condition $$\begin{aligned} \label{eq:cons2} 1 = \int_0^1 f(s,t_0)\mathrm{d}s = \int_0^1 f(s_0,t)\mathrm{d}t\end{aligned}$$ for all $s_0,t_0 \in [0,1]$.
In order for $f+\varepsilon \psi$ to also satisfy the consistency condition [\[eq:cons2\]](#eq:cons2){reference-type="eqref" reference="eq:cons2"}, it must be the case that $$\begin{aligned} \label{eq:cons3} 0 = \int_0^1 \psi(s,t_0)\mathrm{d}s = \int_0^1 \psi(s_0,t)\mathrm{d}t\end{aligned}$$ for all $s_0,t_0 \in [0,1]$. With this picture in mind, we will choose points $s_1,s_2,t_1,t_2$ of $(0,1)$, and consider functions $\psi$ that are positive at $(s_1,t_1)$ and $(s_2,t_2)$, and negative at $(s_1,t_2)$ and $(s_2,t_1)$, in such a way that [\[eq:cons3\]](#eq:cons3){reference-type="eqref" reference="eq:cons3"} holds. More specifically, let $\eta_0:\mathbb{R}^2 \to [0,\infty)$ be a bump function of radius $\delta$ centered at zero, i.e. a smooth function satisfying $\eta_0(x,y)=0$ whenever $x^2 + y^2 \geq \delta^2$ for some small $\delta$, and with the property $\int_{\mathbb{R}^2} \eta_0(x,y)\mathrm{d}x\mathrm{d}y = 1$. For $(s,t) \in (0,1)^2$, write $\eta_{(s,t)}$ for $\eta_0$ recentered at $(s,t)$. We assume that $\delta$ is taken sufficiently small such that each $\eta_{s_i,t_j}$ is supported in $[0,1]^2$. Define $$\begin{aligned} \label{eq:testf} \psi := \eta_{s_1,t_1} + \eta_{s_2,t_2} - \eta_{s_2,t_1} - \eta_{s_1,t_2}.\end{aligned}$$ Then $\psi$ satisfies [\[eq:cons3\]](#eq:cons3){reference-type="eqref" reference="eq:cons3"}. Let $f_*$ be the maximiser of $\mathcal{G}[w,f]$. Choose $\varepsilon$ sufficiently small such that $f_* + \varepsilon \psi$ is non-negative on $[0,1]^2$, and note that the condition [\[eq:testf\]](#eq:testf){reference-type="eqref" reference="eq:testf"} entails that $f_* + \varepsilon \psi$ satisfies [\[eq:jetski\]](#eq:jetski){reference-type="eqref" reference="eq:jetski"}. We now study the small $\varepsilon$ asymptotics of $\mathcal{G}[w,f_*+\varepsilon \psi]$.
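The cancellation that makes such a $\psi$ admissible can be checked numerically: $\eta_{s_1,t_1}$ and $\eta_{s_2,t_1}$ share the same $t$-centre, so their contributions to the integral along any horizontal line cancel, and similarly in the other variable. A minimal sketch (the bump profile and the centres are illustrative choices, not taken from the text):

```python
import math

def bump(x, y, cx, cy, delta=0.1):
    # smooth bump of radius delta centred at (cx, cy); identically zero outside
    r2 = ((x - cx) ** 2 + (y - cy) ** 2) / delta ** 2
    return math.exp(-1.0 / (1.0 - r2)) if r2 < 1.0 else 0.0

s1, s2, t1, t2 = 0.3, 0.7, 0.25, 0.65  # illustrative centres inside (0,1)

def psi(s, t):
    # psi = eta_{s1,t1} + eta_{s2,t2} - eta_{s2,t1} - eta_{s1,t2}
    return (bump(s, t, s1, t1) + bump(s, t, s2, t2)
            - bump(s, t, s2, t1) - bump(s, t, s1, t2))

N = 2000
grid = [(i + 0.5) / N for i in range(N)]

# every horizontal and every vertical line integral of psi vanishes,
# so f + eps * psi keeps uniform marginals
for t0 in (0.25, 0.3, 0.65, 0.9):
    assert abs(sum(psi(s, t0) for s in grid) / N) < 1e-9
for s0 in (0.3, 0.5, 0.7):
    assert abs(sum(psi(s0, t) for t in grid) / N) < 1e-9
```

The sign pattern of the four bumps is exactly what forces the row and column sums to zero while keeping $\psi$ non-trivial.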
We have $$\begin{aligned} \mathcal{G}[w,f_*+ \varepsilon \psi] &= \int_{[0,1]^2} (w - \log( f_*+\varepsilon \psi )) ( f_*+\varepsilon \psi ) \\ &= \mathcal{G}[w,f_*] - \int_{[0,1]^2} \log(1 + \varepsilon \psi/f_*) (f_* + \varepsilon \psi) + \varepsilon \int_{[0,1]^2} (w - \log(f_* + \varepsilon \psi)) \psi\\ & = \mathcal{G}[w,f_*] - \varepsilon \int_{[0,1]^2} \psi + \varepsilon \int_{[0,1]^2} (w - \log(f_*)) \psi + O(\varepsilon^2)\\ & = \mathcal{G}[w,f_*] + \varepsilon \int_{[0,1]^2} (w - \log(f_*)) \psi + O(\varepsilon^2),\end{aligned}$$ where the last equality above follows from the fact that $\psi$ integrates to zero. It follows that if $f_*$ is a stationary point of the functional $f \mapsto \mathcal{G}[w,f]$, it must be the case that $$\begin{aligned} \int_0^1 \int_0^1 \psi ( w - \log f_*) \mathrm{d}s \mathrm{d}t = 0 \qquad \text{for every test function $\psi$ of the form \eqref{eq:testf}}.\end{aligned}$$ In order for the last line to hold for arbitrary bump functions $\eta$ in the construction of $\psi$, it must be the case that, setting $G := w - \log f_*$, we have $$\begin{aligned} \label{eq:Glin} G(s_1,t_1 ) + G(s_2,t_2 ) - G(s_1,t_2) - G(s_2,t_1)= 0 \qquad \text{ for all $s_1,s_2,t_1,t_2$ in $[0,1]$}.\end{aligned}$$ We now show that this implies that $G(s,t)$ is a sum of autonomous functions of $s$ and $t$. To see this, differentiating [\[eq:Glin\]](#eq:Glin){reference-type="eqref" reference="eq:Glin"} with respect to $s_1$ we see that $$\begin{aligned} \frac{\partial G}{\partial s}(s_1,t_1) = \frac{\partial G}{\partial s}(s_1,t_2) \qquad \text{ for all $t_1,t_2$ in $[0,1]$}. \end{aligned}$$ In other words, $\frac{\partial G}{\partial s}(s,t) = \tilde{A}(s)$ for some function $\tilde{A}(s)$ not depending on $t$. Likewise $\frac{\partial G}{\partial t}(s,t) = \tilde{B}(t)$.
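The rectangle identity [\[eq:Glin\]](#eq:Glin){reference-type="eqref" reference="eq:Glin"} characterises separable functions; on a grid this can be seen with no differentiation at all, since the identity with $(s_2,t_2)=(0,0)$ reconstructs the decomposition as $G(s,t)=G(s,0)+G(0,t)-G(0,0)$. A quick numerical illustration (the particular $G$ below is an arbitrary separable example):

```python
import math
import random

random.seed(0)

A = lambda s: math.sin(3.0 * s) + s * s   # arbitrary smooth functions
B = lambda t: math.exp(t) - 2.0 * t
G = lambda s, t: A(s) + B(t)              # separable: G(s,t) = A(s) + B(t)

# the rectangle identity holds for every quadruple (s1, t1, s2, t2)
for _ in range(500):
    x1, x2, y1, y2 = (random.random() for _ in range(4))
    rect = G(x1, y1) + G(x2, y2) - G(x1, y2) - G(x2, y1)
    assert abs(rect) < 1e-12

# conversely, the identity with (s2, t2) = (0, 0) recovers the split
for _ in range(500):
    s, t = random.random(), random.random()
    assert abs(G(s, t) - (G(s, 0.0) + G(0.0, t) - G(0.0, 0.0))) < 1e-12
```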
It follows that there are functions $A(s)$ and $B(t)$ (with respective derivatives $\tilde{A}(s)$ and $\tilde{B}(t)$) such that $$\begin{aligned} G(s,t) = A(s) + B(t).\end{aligned}$$ Using $G := w- \log f_*$, this implies that $f_*$ takes the form (after relabelling $A$ and $B$) $$\begin{aligned} \label{eq:fform} f_*(s,t) = A(s)B(t)e^{w(s,t)},\end{aligned}$$ as required. We note that since $\mathcal{G}[w,f]$ is concave in the $f$ variable, it follows that any stationary point $f_*$ is a maximum. That completes the proof of the first part of the proposition. The second part of the proposition follows quickly by plugging $f_*(s,t) = A(s)B(t)e^{w(s,t)}$ into $\mathcal{G}[w,f_*]$ and recalling from the introduction that [\[eq:cons2\]](#eq:cons2){reference-type="eqref" reference="eq:cons2"} implies $$\begin{aligned} \label{eq:throughfact2} \int_0^1 \int_0^1 (A(s)+B(t))f(s,t) \mathrm{d}s \mathrm{d}t = \int_0^1 A(s) \mathrm{d}s + \int_0^1 B(t)\mathrm{d}t.\end{aligned}$$ Indeed, plugging [\[eq:fform\]](#eq:fform){reference-type="eqref" reference="eq:fform"} into $\mathcal{G}[w,f]$ to obtain the first equality below, and then using [\[eq:throughfact2\]](#eq:throughfact2){reference-type="eqref" reference="eq:throughfact2"} to obtain the second, we have $$\begin{aligned} \label{eq:sabitzer} \mathcal{G}[w,f_*] = - \int_{[0,1]^2 } (\log A(s) + \log B(t) ) f_*(s,t) \mathrm{d}s \mathrm{d}t = - \int_0^1 \log A(s) \mathrm{d}s - \int_0^1 \log B(t)\mathrm{d}t,\end{aligned}$$ completing the second part of the proof.
◻ ## Proof of Theorems [Theorem 10](#thm:explicit){reference-type="ref" reference="thm:explicit"} and [Theorem 11](#thm:explicit2){reference-type="ref" reference="thm:explicit2"} {#proof-of-theorems-thmexplicit-and-thmexplicit2} Before proving the body of Theorem [Theorem 10](#thm:explicit){reference-type="ref" reference="thm:explicit"}, we first prove the following lemma, confirming that the systems of equations appearing in the statement of Theorem [Theorem 10](#thm:explicit){reference-type="ref" reference="thm:explicit"} (i.e. [\[eq:add1\]](#eq:add1){reference-type="eqref" reference="eq:add1"} and [\[eq:add2\]](#eq:add2){reference-type="eqref" reference="eq:add2"} for additive convolution, and [\[eq:mult1\]](#eq:mult1){reference-type="eqref" reference="eq:mult1"} and [\[eq:mult2\]](#eq:mult2){reference-type="eqref" reference="eq:mult2"} for multiplicative convolution) indeed have unique solutions. **Lemma 29**. *Recall that $\mathrm{E}_\mu$ denotes the essential supremum of a measure $\mu$.
Then whenever $z > \mathrm{E}_\mu + \mathrm{E}_\nu$, there are unique real numbers $\omega(z),\omega_\mu(z),\omega_\nu(z)$ in $(\mathrm{E}_\mu + \mathrm{E}_\nu,\infty)$ and depending on $z$ such that $$\begin{aligned} \omega(z) + z &= \omega_\mu(z) + \omega_\nu(z) \label{eq:add1b} \\ \frac{1}{\omega(z)} &= G_\mu(\omega_\mu(z)) = G_\nu(\omega_\nu(z)), \label{eq:add2b}\end{aligned}$$ hold.* *Likewise, whenever $z > \mathrm{E}_\mu \cdot \mathrm{E}_\nu$, there are unique real numbers $\omega(z),\omega_\mu(z),\omega_\nu(z)$ in $(\mathrm{E}_\mu \cdot \mathrm{E}_\nu,\infty)$ and depending on $z$ such that $$\begin{aligned} 1+ \frac{1}{\omega(z)} &= \omega_\mu(z) G_\mu(\omega_\mu(z)) = \omega_\nu(z) G_\nu(\omega_\nu(z)) \label{eq:mult1b}\\ \omega_\mu(z)\omega_\nu(z) &= z(\omega(z)+1)\label{eq:mult2b}\end{aligned}$$ hold.* *Proof.* First we consider the additive case, proving that the equations [\[eq:add1b\]](#eq:add1b){reference-type="eqref" reference="eq:add1b"} and [\[eq:add2b\]](#eq:add2b){reference-type="eqref" reference="eq:add2b"} have unique solutions. In this case we may assume without loss of generality that $\mathrm{E}_\mu = \mathrm{E}_\nu = 0$ and $z > 0$, as translating the measures $\mu$ and $\nu$ is equivalent to translating $z$, $\omega_\mu$ and $\omega_\nu$. When $\mathrm{E}_\mu = 0$, the function $G_\mu(s) := \int_{-\infty}^\infty \frac{1}{s-x} \mu(\mathrm{d}x)$ is continuous and strictly decreasing for $s$ in $(0,\infty)$. The limit $G_\mu(0) := \lim_{s \downarrow 0}G_\mu(s)$ lies in $(0,+\infty]$. Moreover, $G_\mu(s) \to 0$ as $s \to \infty$. It follows that for every $0 < c < G_\mu(0)$, there exists $s$ such that $G_\mu(s) = c$. Since $\mu$ is supported on $(-\infty,0]$, we have the simple bound $G_\mu(s) \leq 1/s$. Moreover, since $\mu$ has compact support, it has finite expectation, and hence $G_\mu(s) = 1/s + O_\mu(1/s^2)$ as $s \to \infty$. Write $H_\mu(s) := 1/G_\mu(s)$ for the reciprocal.
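The system [\[eq:add1b\]](#eq:add1b){reference-type="eqref" reference="eq:add1b"}--[\[eq:add2b\]](#eq:add2b){reference-type="eqref" reference="eq:add2b"} can be solved numerically for concrete measures by exploiting this monotonicity. A sketch by nested bisection (the two atomic measures are illustrative assumptions, not taken from the text):

```python
def G_mu(s):  # Stieltjes transform of mu = (delta_0 + delta_{-1}) / 2
    return 0.5 * (1.0 / s + 1.0 / (s + 1.0))

def G_nu(s):  # Stieltjes transform of nu = (delta_0 + delta_{-2}) / 2
    return 0.5 * (1.0 / s + 1.0 / (s + 2.0))

def invert(G, c, lo=1e-12, hi=1e6):
    # solve G(s) = c on (0, oo); G is continuous and strictly decreasing there
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if G(mid) > c:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

z = 1.0
s_mu = lambda w: invert(G_mu, 1.0 / w)  # i.e. H_mu(s_mu(w)) = w
s_nu = lambda w: invert(G_nu, 1.0 / w)

# find omega with s_mu(w) + s_nu(w) = z + w, again by bisection
lo, hi = 1e-6, 100.0
for _ in range(100):
    w = 0.5 * (lo + hi)
    if s_mu(w) + s_nu(w) < z + w:
        lo = w
    else:
        hi = w
omega, omega_mu, omega_nu = w, s_mu(w), s_nu(w)

# the solution satisfies both defining relations
assert abs(omega + z - (omega_mu + omega_nu)) < 1e-6
assert abs(1.0 / omega - G_mu(omega_mu)) < 1e-8
assert abs(1.0 / omega - G_nu(omega_nu)) < 1e-8
```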
Then $H_\mu(s)$ is continuous and strictly increasing for $s \in (0,\infty)$, and satisfies $H_\mu(s) \geq s$, and $H_\mu(s) = s + O_\mu(1)$ as $s \to \infty$. Let $w \geq H_\mu(0) \vee H_\nu(0)$. Then there exist unique $s_\mu(w)$ and $s_\nu(w)$ such that $$\begin{aligned} H_\mu(s_\mu(w)) = H_\nu(s_\nu(w)) = w.\end{aligned}$$ The functions $s_\mu(w)$ and $s_\nu(w)$ are increasing in $w$. Moreover, $H_\mu(s) \geq s$ implies $s_\mu(w) \leq w$, and $H_\mu(s) = s + O_\mu(1)$ implies $s_\mu(w) = w + O_\mu(1)$ as $w \to \infty$. For $w \geq w_0 := H_\mu(0) \vee H_\nu(0)$, define $f(w) := s_\mu(w) + s_\nu(w)$, and $g(w) := z+w$. The functions $f$ and $g$ are both increasing functions of $w$. Without loss of generality let $H_\mu(0) \leq H_\nu(0)$. Then $s_\nu(w_0) = 0$, and, since $s_\mu(w_0) \leq w_0$, we have the bound $f(w_0) \leq w_0$. In particular, we have $$\begin{aligned} f(w_0) \leq w_0 < w_0 + z = g(w_0).\end{aligned}$$ However, since $s_\mu(w) = w + O_\mu(1)$, it follows that $f(w) = 2w + O_{\mu,\nu}(1)$ as $w \to \infty$. Plainly, $g(w) = z+w = w + O_z(1)$ as $w \to \infty$. In particular, there exists some $\omega$ such that $g(\omega) = f(\omega)$. Define $\omega_\mu := s_\mu(\omega)$ and $\omega_\nu := s_\nu(\omega)$. Then $\omega,\omega_\mu,\omega_\nu$ are solutions to [\[eq:add1b\]](#eq:add1b){reference-type="eqref" reference="eq:add1b"} and [\[eq:add2b\]](#eq:add2b){reference-type="eqref" reference="eq:add2b"}. We now show these are unique solutions. To this end, it is sufficient to establish that $s_\mu(w)$ and $s_\nu(w)$ are convex functions of $w$, since this would imply $f(w)$ is convex, and, since $f(w_0) < g(w_0)$, this would imply that $f(w)$ and $g(w)$ intersect at most once for $w \in (w_0,\infty)$. Note that $s_\mu(w)$ is the inverse function of $H_\mu(s)$. Thus to establish convexity of $s_\mu(w)$ we need to establish the concavity of $H_\mu(s)$.
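These properties of $H_\mu$ — the lower bound $H_\mu(s) \geq s$, the asymptotics $H_\mu(s) = s + O_\mu(1)$, and the concavity claimed here — can be checked numerically for a concrete atomic measure by finite differences. A sketch (the choice $\mu = \tfrac12(\delta_0+\delta_{-1})$ is illustrative):

```python
G = lambda s: 0.5 * (1.0 / s + 1.0 / (s + 1.0))  # mu = (delta_0 + delta_{-1}) / 2
H = lambda s: 1.0 / G(s)

h = 1e-4
for k in range(1, 60):
    s = 0.1 * k
    # central second difference: <= 0 up to rounding error if H is concave
    second = (H(s + h) - 2.0 * H(s) + H(s - h)) / (h * h)
    assert second < 1e-6

# H(s) >= s, and H(s) = s + O(1) at infinity
for s in (0.5, 5.0, 50.0):
    assert H(s) >= s
assert abs(H(1000.0) - 1000.0) < 1.0
```

For this measure $H_\mu(s) = 2s(s+1)/(2s+1)$ explicitly, so the concavity can also be confirmed by hand.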
A straightforward calculation using the fact that $H_\mu(s) := 1/G_\mu(s)$ tells us that for $s > 0$ $$\begin{aligned} H_\mu''(s) = \frac{1}{G_\mu(s)^3} \left( 2G_\mu'(s)^2 - G_\mu''(s)G_\mu(s)\right). \end{aligned}$$ We now show that $H_\mu''(s)$ is non-positive. To see this, first note the numerator may be written $$\begin{aligned} 2G_\mu'(s)^2 - G_\mu''(s)G_\mu(s) &= 2\left(\int_{-\infty}^\infty \frac{1}{(s-x)^2} \mu(\mathrm{d}x)\right)^2 - 2\int_{-\infty}^\infty \frac{1}{(s-x)^3} \mu(\mathrm{d}x) \int_{-\infty}^\infty \frac{1}{s-x} \mu(\mathrm{d}x) \\ &= 2\left(\mathbb{E}[Y_s^2Z_s^2] - \mathbb{E}[Y_s^3Z_s]\right),\end{aligned}$$ where $Y_s$ and $Z_s$ are independent and both have the law of $(s-X)^{-1}$, where $X$ has law $\mu$. Using symmetry in $Y_s$ and $Z_s$ now note that $$\begin{aligned} \mathbb{E}[Y_s^2Z_s^2] - \mathbb{E}[Y_s^3Z_s] = -\frac{1}{2} \mathbb{E}[Y_sZ_s(Y_s-Z_s)^2] \leq 0.\end{aligned}$$ It follows that $H_\mu''(s) \leq 0$, and hence $H_\mu(s)$ is concave, as required. The proof of the multiplicative case is similar, and we leave the details to the interested reader. ◻ *Proof of Theorem [Theorem 10](#thm:explicit){reference-type="ref" reference="thm:explicit"}.* By Theorem [Theorem 7](#thm:main){reference-type="ref" reference="thm:main"} we have $$\begin{aligned} \label{eq:max1b} \int_{-\infty}^\infty \log(z-\lambda) ( \mu \boxdot \nu )(\mathrm{d} \lambda) =\sup_{f} \mathcal{G}\left[ \log(z - \rho_\mu(s) \odot \rho_\nu(t)), f\right],\end{aligned}$$ where $\odot$ denotes either addition or multiplication. Now according to Proposition [Proposition 9](#prop:maxform){reference-type="ref" reference="prop:maxform"} with $w(s,t) = \log(z-\rho_\mu(s)\odot \rho_\nu(t))$, the coupling density $f$ maximising the energy functional $\mathcal{G}[w,f]$ takes the form $$\begin{aligned} f(s,t) = \alpha(s) \beta(t) (z - \rho_\mu(s) \odot \rho_\nu(t)),\end{aligned}$$ for some functions $\alpha,\beta:[0,1] \to \mathbb{R}$.
We now compute $\alpha(s)$ and $\beta(t)$ explicitly using the consistency condition [\[eq:jetski\]](#eq:jetski){reference-type="eqref" reference="eq:jetski"}. Integrating first against $s$, we see that for any $t \in [0,1]$ we have $$\begin{aligned} \label{eq:kole} 1 = \beta(t) \int_0^1 \alpha(s) (z - \rho_\mu(s) \odot \rho_\nu(t)) \mathrm{d}s.\end{aligned}$$ Regardless of whether $\odot$ is addition or multiplication, it follows that $\beta(t)$ takes the form $\beta(t) = (c_1 - c_2 \rho_\nu(t) )^{-1}$ for some constants $c_1,c_2$. Applying the same logic to $\alpha(s)$, and removing the spare degree of freedom, the maximiser $f$ of $\mathcal{G}[\log(z-\rho_\mu(s)\odot \rho_\nu(t)),f]$ takes the form $$\begin{aligned} \label{eq:lamp} f_*(s,t) = \omega \frac{ z- \rho_\mu(s) \odot \rho_\nu(t) }{ (\omega_\mu - \rho_\mu(s) )( \omega_\nu - \rho_\nu(t)) },\end{aligned}$$ for some $\omega,\omega_\mu,\omega_\nu$ depending on $z$. By the latter part of Proposition [Proposition 9](#prop:maxform){reference-type="ref" reference="prop:maxform"}, we have $$\begin{aligned} \mathcal{G}[\log(z-\rho_\mu(s)\odot \rho_\nu(t)),f_*] &= - \log \omega + \int_0^1 \log(\omega_\mu - \rho_\mu(s) )\mathrm{d}s + \int_0^1 \log (\omega_\nu - \rho_\nu(t) )\mathrm{d}t \\ &= - \log \omega + \int_{-\infty}^\infty \log( \omega_\mu - \lambda) \mu(\mathrm{d}\lambda) + \int_{-\infty}^\infty \log( \omega_\nu - \lambda) \nu (\mathrm{d} \lambda),\end{aligned}$$ thereby completing the first part of the proof of Theorem [Theorem 10](#thm:explicit){reference-type="ref" reference="thm:explicit"}. It remains to derive the relations [\[eq:add1\]](#eq:add1){reference-type="eqref" reference="eq:add1"} and [\[eq:add2\]](#eq:add2){reference-type="eqref" reference="eq:add2"} in the additive case, and [\[eq:mult1\]](#eq:mult1){reference-type="eqref" reference="eq:mult1"} and [\[eq:mult2\]](#eq:mult2){reference-type="eqref" reference="eq:mult2"} in the multiplicative case.
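As a sanity check on this formula in the additive case, one can take $\mu = \nu = \tfrac12(\delta_{-1}+\delta_{1})$, for which the free additive convolution $\mu \boxplus \mu$ is the arcsine law on $[-2,2]$ (a classical free-probability computation, used here as an assumption of the example); the log-potential of the arcsine law is $\log\big((z+\sqrt{z^2-4})/2\big)$ for $z > 2$. The system of Lemma 29 then has a closed-form solution, and the two sides agree:

```python
import math

z = 3.0                                   # any z > 2 works
G = lambda s: s / (s * s - 1.0)           # Stieltjes transform of (delta_{-1} + delta_1)/2

# closed-form solution of  omega + z = omega_mu + omega_nu,  1/omega = G(omega_mu) = G(omega_nu)
omega_mu = (z + math.sqrt(z * z - 4.0)) / 2.0
omega_nu = omega_mu                        # by symmetry, since mu = nu
omega = (omega_mu * omega_mu - 1.0) / omega_mu

assert abs(omega + z - (omega_mu + omega_nu)) < 1e-12
assert abs(1.0 / omega - G(omega_mu)) < 1e-12

# right-hand side of the formula above
rhs = (-math.log(omega)
       + 0.5 * (math.log(omega_mu - 1.0) + math.log(omega_mu + 1.0))
       + 0.5 * (math.log(omega_nu - 1.0) + math.log(omega_nu + 1.0)))

# left-hand side: log-potential of the arcsine law on [-2, 2],
# in closed form and by midpoint quadrature via x = 2 cos(theta)
lhs = math.log((z + math.sqrt(z * z - 4.0)) / 2.0)
M = 10000
quad = sum(math.log(z - 2.0 * math.cos((k + 0.5) * math.pi / M)) for k in range(M)) / M
assert abs(quad - lhs) < 1e-6
assert abs(lhs - rhs) < 1e-12
```

Here the right-hand side collapses algebraically to $\log \omega_\mu$, which is exactly the arcsine log-potential.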
We do this using the consistency condition [\[eq:jetski\]](#eq:jetski){reference-type="eqref" reference="eq:jetski"} for $f$ of the form [\[eq:lamp\]](#eq:lamp){reference-type="eqref" reference="eq:lamp"}. From this juncture onwards, we will treat the cases where $\odot$ is addition and multiplication separately. **Additive case.** Integrating [\[eq:lamp\]](#eq:lamp){reference-type="eqref" reference="eq:lamp"} with respect to $s$, and using the condition that $\int_0^1 f(s,t) \mathrm{d}s= 1$ for all $t \in [0,1]$, we see that for all $t \in [0,1]$ we have $$\begin{aligned} \label{eq:un1} 1 = \frac{\omega}{\omega_\nu - \rho_\nu(t)} \int_0^1 \frac{ z - \rho_\nu(t) - \rho_\mu(s)}{ \omega_\mu - \rho_\mu(s) } \mathrm{d}s = \frac{\omega}{\omega_\nu - \rho_\nu(t) } \left\{ 1 + ( z - \rho_\nu(t) - \omega_\mu) G_\mu(\omega_\mu) \right\} .\end{aligned}$$ Rearranging, we obtain $$\begin{aligned} \label{eq:un2} \omega_\nu - \rho_\nu(t) = \omega \left\{ 1 + ( z - \rho_\nu(t) - \omega_\mu) G_\mu(\omega_\mu) \right\} .\end{aligned}$$ Since $\rho_\nu(t)$ is non-constant, we can equate coefficients of $\rho_\nu(t)$ in [\[eq:un2\]](#eq:un2){reference-type="eqref" reference="eq:un2"}, as well as terms that are constant in $t$, to obtain the pair of equations $$\begin{aligned} 1 = \omega G_\mu(\omega_\mu) \qquad \text{and} \qquad \omega_\nu = \omega \left( 1 + (z - \omega_\mu) G_\mu(\omega_\mu) \right).\end{aligned}$$ Simplifying the latter of these equations using the former, and studying the analogous equation swapping $\mu$ and $\nu$, we obtain [\[eq:add1\]](#eq:add1){reference-type="eqref" reference="eq:add1"} and [\[eq:add2\]](#eq:add2){reference-type="eqref" reference="eq:add2"}.\ **Multiplicative case.** Integrating [\[eq:lamp\]](#eq:lamp){reference-type="eqref" reference="eq:lamp"} with respect to $s$, and using the condition that $\int_0^1 f(s,t) \mathrm{d}s= 1$ for all $t \in [0,1]$, we see that for all $t \in [0,1]$ we have $$\begin{aligned} \label{eq:vn1} 1 =
\frac{\omega}{\omega_\nu - \rho_\nu(t)} \int_0^1 \frac{ z - \rho_\nu(t) \rho_\mu(s)}{ \omega_\mu - \rho_\mu(s) } \mathrm{d}s = \frac{\omega}{\omega_\nu - \rho_\nu(t) } \left\{ \rho_\nu(t) + (z - \omega_\mu \rho_\nu(t)) G_\mu(\omega_\mu) \right\} .\end{aligned}$$ Rearranging, we obtain $$\begin{aligned} \label{eq:vn2} \omega_\nu - \rho_\nu(t) = \omega \left\{ \rho_\nu(t) + (z - \omega_\mu \rho_\nu(t)) G_\mu(\omega_\mu) \right\}.\end{aligned}$$ Since $\rho_\nu(t)$ is non-constant, we can equate coefficients of $\rho_\nu(t)$ in [\[eq:vn2\]](#eq:vn2){reference-type="eqref" reference="eq:vn2"}, as well as terms that are constant in $t$, to obtain the pair of equations $$\begin{aligned} -1 = \omega( 1 - \omega_\mu G_\mu(\omega_\mu)) \qquad \text{and} \qquad \omega_\nu = z \omega G_\mu(\omega_\mu).\end{aligned}$$ Simplifying the latter of these equations using the former, and studying the analogous equation swapping $\mu$ and $\nu$, we obtain [\[eq:mult1\]](#eq:mult1){reference-type="eqref" reference="eq:mult1"} and [\[eq:mult2\]](#eq:mult2){reference-type="eqref" reference="eq:mult2"}.
◻ *Proof of Theorem [Theorem 11](#thm:explicit2){reference-type="ref" reference="thm:explicit2"}.* By Theorem [Theorem 7](#thm:main){reference-type="ref" reference="thm:main"} we have $$\begin{aligned} \label{eq:bull} \tau \int_{-\infty}^\infty \log(z - \lambda) [\mu]_\tau(\mathrm{d}\lambda) = \sup_f \mathcal{G} [ \log(z -\rho_\mu(s)) \mathrm{1}_{[0,\tau]}(t), f].\end{aligned}$$ Now according to Proposition [Proposition 9](#prop:maxform){reference-type="ref" reference="prop:maxform"} with $w(s,t) =\log(z -\rho_\mu(s)) \mathrm{1}_{[0,\tau]}(t)$ it follows that the maximiser of the energy functional $\mathcal{G}[w,f]$ takes the form $$\begin{aligned} f(s,t) = \alpha(s) \beta(t) (z - \rho_\mu(s))^{\mathrm{1}_{[0,\tau]}(t)}.\end{aligned}$$ We now use the consistency condition [\[eq:jetski\]](#eq:jetski){reference-type="eqref" reference="eq:jetski"} to find the structure of $\alpha(s)$ and $\beta(t)$. Integrating $f(s,t)$ first against $t$, we see that for every $s \in [0,1]$ we have $$\begin{aligned} \label{eq:graz} 1 = \alpha(s) \left[ \int_0^\tau \beta(t) (z-\rho_\mu(s)) \mathrm{d}t + \int_\tau^1 \beta(t) \mathrm{d}t \right].\end{aligned}$$ In short, $\alpha(s)$ takes the form $(c_1 - c_2 \rho_\mu(s))^{-1}$, for some $c_1,c_2$ depending on $z, \tau$ and law $\mu$, but independent of $s$. Likewise, integrating $f(s,t)$ against $s$, we see that whenever $t \leq \tau$ we have $$\begin{aligned} 1 = \beta(t) \int_0^1 \alpha(s)(z- \rho_\mu(s)) \mathrm{d}s,\end{aligned}$$ whereas whenever $t > \tau$ we have $$\begin{aligned} 1 = \beta(t) \int_0^1 \alpha(s) \mathrm{d}s .\end{aligned}$$ In short, $\beta(t)$ takes the form $\beta(t) = c_3 \mathrm{1}_{t \leq \tau} + c_4 \mathrm{1}_{t > \tau}$ for some $c_3,c_4$ depending on $z, \tau$ and law $\mu$, but independent of $t$.
Combining this observation with the form $\alpha$ must take by virtue of [\[eq:graz\]](#eq:graz){reference-type="eqref" reference="eq:graz"}, we see that $f$ takes the form $$\begin{aligned} \label{eq:khat} f_*(s,t) = \frac{ C_1 \mathrm{1}_{t \leq \tau}(z - \rho_\mu(s)) + C_2 \mathrm{1}_{t > \tau} }{ \omega - \rho_\mu(s)},\end{aligned}$$ for some $C_1,C_2,\omega$ depending on $z$. We now use the consistency relations again to find these functions of $z$. Integrating $f_*$ in [\[eq:khat\]](#eq:khat){reference-type="eqref" reference="eq:khat"} with respect to $t$, we see that for any $s \in [0,1]$ we have $$\begin{aligned} 1 = \frac{ C_1 \tau( z - \rho_\mu(s) ) + C_2 (1 - \tau) }{ \omega - \rho_\mu(s) },\end{aligned}$$ which, since $\rho_\mu(s)$ is not constant in $s$, implies $$\begin{aligned} C_1 = 1/\tau \qquad \text{and} \qquad C_2 = \frac{\omega - z}{1- \tau}.\end{aligned}$$ In short, $f_*$ takes the more explicit form $$\begin{aligned} \label{eq:khat2} f_*(s,t) = \frac{ \frac{1}{\tau} \mathrm{1}_{t \leq \tau}(z - \rho_\mu(s)) +\frac{\omega - z}{1- \tau} \mathrm{1}_{t > \tau} }{ \omega - \rho_\mu(s)}.\end{aligned}$$ It remains to find a condition on $\omega$. To this end, we now integrate [\[eq:khat2\]](#eq:khat2){reference-type="eqref" reference="eq:khat2"} across $s \in [0,1]$ for a $t > \tau$. We obtain (again by virtue of the consistency condition [\[eq:jetski\]](#eq:jetski){reference-type="eqref" reference="eq:jetski"}) $$\begin{aligned} 1 = \frac{\omega-z}{1-\tau} G_\mu(\omega ),\end{aligned}$$ which is precisely [\[eq:comp1\]](#eq:comp1){reference-type="eqref" reference="eq:comp1"}. (Fixing a $t \leq \tau$ and integrating against $s \in [0,1]$ also leads to the same condition.)
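This characterisation of $\omega$ can be verified numerically. A sketch assuming $\mu$ uniform on $[0,1]$ (so $\rho_\mu(s) = s$ and $G_\mu(\omega) = \log(\omega/(\omega-1))$), with illustrative parameters $z = 2$ and $\tau = \tfrac12$: solve the displayed condition for $\omega$ by bisection and check that $f_*$ in [\[eq:khat2\]](#eq:khat2){reference-type="eqref" reference="eq:khat2"} has uniform marginals:

```python
import math

z, tau = 2.0, 0.5                              # illustrative parameters, z > E_mu = 1
G = lambda w: math.log(w / (w - 1.0))          # Stieltjes transform of Unif[0,1] at w > 1

# solve 1 = ((w - z)/(1 - tau)) G(w) for w > z by bisection
lo, hi = z + 1e-9, 1e6
for _ in range(200):
    w = 0.5 * (lo + hi)
    if (w - z) / (1.0 - tau) * G(w) < 1.0:
        lo = w
    else:
        hi = w
omega = w
assert abs((omega - z) / (1.0 - tau) * G(omega) - 1.0) < 1e-9

def f_star(s, t):
    rho = s  # quantile function of Unif[0,1]
    num = (z - rho) / tau if t <= tau else (omega - z) / (1.0 - tau)
    return num / (omega - rho)

# both marginals of f_star are uniform
N = 20000
grid = [(i + 0.5) / N for i in range(N)]
for t0 in (0.2, 0.8):
    assert abs(sum(f_star(s, t0) for s in grid) / N - 1.0) < 1e-4
for s0 in (0.3, 0.9):
    assert abs(sum(f_star(s0, t) for t in grid) / N - 1.0) < 1e-4
```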
Finally, we note that with $w(s,t) = \log( z - \rho_\mu(s)) \mathrm{1}_{t \leq \tau}$, $f_*(s,t)$ in [\[eq:khat2\]](#eq:khat2){reference-type="eqref" reference="eq:khat2"} may be written $$\begin{aligned} \label{eq:khat3} f_*(s,t) = \frac{ \frac{1}{\tau} \mathrm{1}_{t \leq \tau} +\frac{\omega - z}{1- \tau} \mathrm{1}_{t > \tau} }{ \omega - \rho_\mu(s)} e^w,\end{aligned}$$ and hence by the latter part of Proposition [Proposition 9](#prop:maxform){reference-type="ref" reference="prop:maxform"}, we have $$\begin{aligned} \mathcal{G}[w,f_*] &= \int_0^1 \log(\omega - \rho_\mu(s))\mathrm{d}s - \int_0^1 \log \left( \frac{1}{\tau} \mathrm{1}_{t \leq \tau} +\frac{\omega - z}{1- \tau} \mathrm{1}_{t > \tau} \right)\mathrm{d}t \\ &= \int_{-\infty}^\infty \log(\omega - \lambda)\mu(\mathrm{d}\lambda) + \tau \log \tau - (1 - \tau) \log \frac{\omega-z}{1-\tau}.\end{aligned}$$ It follows from [\[eq:bull\]](#eq:bull){reference-type="eqref" reference="eq:bull"} that $$\begin{aligned} \int_{-\infty}^\infty \log(z - \lambda) [\mu]_\tau(\mathrm{d}\lambda) = \frac{1}{\tau} \int_{-\infty}^\infty \log(\omega - \lambda)\mu(\mathrm{d}\lambda) + \log \tau - \frac{1-\tau}{\tau} \log \frac{\omega-z}{1-\tau},\end{aligned}$$ thereby completing the proof of Theorem [Theorem 11](#thm:explicit2){reference-type="ref" reference="thm:explicit2"}. ◻ # Appendix {#appendix .unnumbered} ## Equivalence between (4) of Gorin and Marcus [@GM] and [\[eq:GM3\]](#eq:GM3){reference-type="eqref" reference="eq:GM3"} {#equivalence-between-4-of-gorin-and-marcus-and-eqgm3 .unnumbered} According to equation (4) of [@GM], if $A$ is a matrix with eigenvalues $a_1,\ldots,a_N$, and $[A]_k$ is its $k \times k$ principal minor, then for Haar unitary $U$ we have $$\begin{aligned} \mathbb{E}[ \det( z - [U^*AU]_k) ] = \frac{1}{N(N-1)\ldots (k+1) } \left( \frac{\partial}{\partial z} \right)^{N-k} \prod_{i=1}^N (z - a_i).\end{aligned}$$ We now show this leads to [\[eq:GM3\]](#eq:GM3){reference-type="eqref" reference="eq:GM3"}. 
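The algebraic content of this reduction — that the normalised $(N-k)$-fold derivative of $\prod_{i=1}^N(z-a_i)$ equals the average of $\prod_{i \in T}(z - a_i)$ over size-$k$ subsets $T$ — can be checked numerically for a small example (the eigenvalues and the point $z$ below are arbitrary choices):

```python
import itertools
import math

a = [0.3, -1.2, 2.5, 0.7, -0.4]   # sample eigenvalues (illustrative)
N, k, z = len(a), 2, 1.7

# coefficients of p(z) = prod_i (z - a_i), lowest degree first
coeffs = [1.0]
for ai in a:
    new = [0.0] * (len(coeffs) + 1)
    for j, c in enumerate(coeffs):
        new[j + 1] += c        # multiply by z
        new[j] -= ai * c       # multiply by -a_i
    coeffs = new

# apply (d/dz)^{N-k} and evaluate at z
d = coeffs[:]
for _ in range(N - k):
    d = [j * d[j] for j in range(1, len(d))]
deriv = sum(c * z ** j for j, c in enumerate(d))

lhs = deriv / (math.factorial(N) // math.factorial(k))   # divide by N(N-1)...(k+1)

# average of prod_{i in T} (z - a_i) over all k-subsets T of [N]
rhs = sum(math.prod(z - a[i] for i in T)
          for T in itertools.combinations(range(N), k)) / math.comb(N, k)

assert abs(lhs - rhs) < 1e-9
```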
First note that one may easily prove by induction in $j$ that $$\begin{aligned} \left( \frac{\partial}{\partial z } \right)^j \prod_{i=1}^N (z-a_i) = j! \sum_{ S\subset [N] : \# S = N-j} \prod_{i \in S} (z-a_i).\end{aligned}$$ In particular, $$\begin{aligned} \mathbb{E}[ \det( z - [U^*AU]_k) ] = \frac{1}{\binom{N}{k}} \sum_{T \subset [N] : \# T = k } \prod_{i \in T} (z - a_i) = \mathbf{E} [ \prod_{i \in \mathcal{T}} ( z-a_i) ],\end{aligned}$$ where under $\mathbf{E}$, $\mathcal{T}$ is a uniformly chosen size-$k$ subset of $[N]$. To obtain [\[eq:GM3\]](#eq:GM3){reference-type="eqref" reference="eq:GM3"}, we now note that if $\sigma$ is a uniformly chosen permutation of $\{1,\ldots,N\}$, then its first $k$ values $\{\sigma(1),\ldots,\sigma(k)\}$ are a uniformly chosen size-$k$ subset of $[N]$. In particular, since $\prod_{i=1}^k (z-a_{\sigma(i)}) = \det(z - [\Sigma A \Sigma^{-1}]_k)$ whenever $\Sigma$ is the permutation matrix associated with $\sigma$, [\[eq:GM3\]](#eq:GM3){reference-type="eqref" reference="eq:GM3"} follows. Arizmendi, O., & Perales, D. (2018). Cumulants for finite free convolution. Journal of Combinatorial Theory, Series A, 155, 244-266. Arizmendi, O., Garza-Vargas, J., & Perales, D. (2023). Finite free cumulants: Multiplicative convolutions, genus expansion and infinitesimal distributions. Transactions of the American Mathematical Society, 376(06), 4383-4420. Belinschi, S. T., & Bercovici, H. (2007). A new approach to subordination results in free probability. Journal d'Analyse Mathématique, 101(1), 357-365. Belinschi, S. T., & Nica, A. (2008). On a remarkable semigroup of homomorphisms with respect to free multiplicative convolution. Indiana University Mathematics Journal, 1679-1713. Bercovici, H., & Voiculescu, D. (1995). Superconvergence to the central limit and failure of the Cramér theorem for free random variables. Probability Theory and Related Fields, 103(2), 215--222. Biane, P.
(1999). Processes with free increments. Mathematische Zeitschrift, 227, 143--174. Biane, P., Capitaine, M., & Guionnet, A. (2003). Large deviation bounds for matrix Brownian motion. Inventiones mathematicae, 152, 433-459. Cabanal-Duvillard, T., & Guionnet, A. (2003). Discussions around Voiculescu's free entropies. Advances in Mathematics, 174(2), 167-226. Collins, B. (2003). Moments and cumulants of polynomial random variables on unitary groups, the Itzykson-Zuber integral, and free probability. International Mathematics Research Notices, 17, 953--982. Dembo, A., & Zeitouni, O. (2009). Large deviations techniques and applications (Vol. 38). Springer Science & Business Media. Fiedler, M. (1971). Bounds for the determinant of the sum of Hermitian matrices. Proceedings of the American Mathematical Society, 30, 27--31. Fulton, W., & Harris, J. (2013). Representation theory: a first course (Vol. 129). Springer Science & Business Media. Gorin, V., & Marcus, A. W. (2020). Crystallization of random matrix orbits. International Mathematics Research Notices, 2020(3), 883-913. Hoskins, J., & Kabluchko, Z. (2021). Dynamics of zeroes under repeated differentiation. Experimental Mathematics, 1-27. Kenyon, R., Král', D., Radin, C., & Winkler, P. (2020). Permutations with fixed pattern densities. Random Structures and Algorithms, 56(1), 220-250. Marcus, A. W. (2021). Polynomial convolutions and (finite) free probability. arXiv preprint arXiv:2108.07054. Mingo, J. A., & Speicher, R. (2017). Free probability and random matrices (Vol. 35). New York: Springer. Marcus, A., Spielman, D. A., & Srivastava, N. (2013, October). Interlacing families I: Bipartite Ramanujan graphs of all degrees. 2013 IEEE 54th Annual Symposium on Foundations of Computer Science (pp. 529-537). IEEE. Marcus, A. W., Spielman, D. A., & Srivastava, N. (2015). Interlacing families II: Mixed characteristic polynomials and the Kadison--Singer problem. Annals of Mathematics, 327-350. Marcus, A. W., Spielman, D.
A., & Srivastava, N. (2022). Finite free convolutions of polynomials. Probability Theory and Related Fields, 182(3-4), 807-848. Nica, A., & Speicher, R. (1996). On the multiplication of free N-tuples of noncommutative random variables. American Journal of Mathematics, 118(4), 799--837. Shlyakhtenko, D., & Tao, T., with an appendix by Jekel, D. (2020). Fractional free convolution powers. arXiv preprint arXiv:2009.01882. Trashorras, J. (2008). Large deviations for symmetrised empirical measures. Journal of Theoretical Probability, 21, 397-412. Voiculescu, D. (2002). Analytic subordination consequences of free Markovianity. Indiana University Mathematics Journal, 51, 1161--1166. Voiculescu, D. (1993). The analogues of entropy and of Fisher's information measure in free probability theory I. Communications in Mathematical Physics, 155, 71-92. Voiculescu, D. (1994). The analogues of entropy and of Fisher's information measure in free probability theory II. Inventiones Mathematicae, 118, 411-440. Voiculescu, D. (1996). The analogues of entropy and of Fisher's information measure in free probability theory III: The absence of Cartan subalgebras. Geometric and Functional Analysis, 172-199. Wu, L. (1991). Large deviation for the empirical fields of symmetric measures. *Chinese Ann. Math.* 12B, 3, 348--357. Wu, L. (2004). Large Deviation Principle for Exchangeable Sequences: Necessary and Sufficient Condition. *J. Theor. Probab.* 17(4).
--- abstract: | Let $L$ be an even lattice of odd rank with discriminant group $L'/L$, and let $\alpha,\beta \in L'/L$. We prove the Weil bound for the Kloosterman sums $S_{\alpha,\beta}(m,n,c)$ of half-integral weight for the Weil representation attached to $L$. We obtain this bound by proving an identity that relates a divisor sum of Kloosterman sums to a sparse exponential sum. This identity generalizes Kohnen's identity for plus space Kloosterman sums with the theta multiplier system. address: - Mathematics Department, Brigham Young University, Provo, UT 84602 - Mathematics Department, Brigham Young University, Provo, UT 84602 - Mathematics Department, University of Illinois at Urbana-Champaign, Urbana, IL 61801 author: - Nickolas Andersen - Gradin Anderson - Amy Woodall bibliography: - bibliography.bib title: The Weil bound for generalized Kloosterman sums of half-integral weight --- # Introduction In 1926, Kloosterman [@kloosterman] introduced his eponymous exponential sum $$\label{eq:kloo-def-ordinary} S(m,n,c) = \sum_{d(c)^\times} e_c(m\overline d + n d), \qquad e_c(x) = e^{2\pi i x/c},$$ in order to apply the circle method to the problem of representations of integers by quaternary quadratic forms. Here the $\times$ superscript indicates that we sum over $d\in (\mathbb{Z}/c\mathbb{Z})^\times$, and $\overline d$ denotes the inverse of $d$ modulo $c$. Kloosterman proved that $S(m,n,p) \ll p^{3/4}$ for any prime $p$. This was subsequently improved by Weil in [@weil] to the sharp bound $$|S(m,n,p)|\leq 2p^{\frac 12}.$$ A theorem of Katz (see [@katz] and [@adolphson]) asserts that the Kloosterman angles $\theta_p(n)$ defined by the relation $S(1,n,p) = 2\sqrt p\cos \theta_p(n)$ are equidistributed with respect to the Sato--Tate measure.
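The classical sum [\[eq:kloo-def-ordinary\]](#eq:kloo-def-ordinary){reference-type="eqref" reference="eq:kloo-def-ordinary"} and the Weil bound are easy to check numerically; a minimal sketch (Python 3.8+, using `pow(d, -1, c)` for the modular inverse):

```python
import cmath
import math

def kloosterman(m, n, c):
    # S(m, n, c) = sum over d coprime to c of e_c(m * dbar + n * d)
    total = 0.0 + 0.0j
    for d in range(1, c):
        if math.gcd(d, c) == 1:
            dbar = pow(d, -1, c)  # inverse of d modulo c
            total += cmath.exp(2j * cmath.pi * (m * dbar + n * d) / c)
    return total

for p in (101, 211, 499):
    S = kloosterman(1, 1, p)
    assert abs(S.imag) < 1e-9                  # Kloosterman sums are real
    assert abs(S.real) <= 2.0 * math.sqrt(p)   # the Weil bound at primes
```

Pairing $d$ with $c-d$ conjugates each summand, which is why the imaginary parts cancel.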
For $c=p^\lambda$ with $\lambda \geq 2$, the Kloosterman sum can be evaluated explicitly (see Chapter 4 of [@iwaniec]) and for all $c>0$ we have $$|S(m,n,c)| \leq \tau(c) (m,n,c)^{\frac 12} c^{\frac 12},$$ where $\tau(c)$ is the number of divisors of $c$. This inequality is called the Weil bound. In this paper we prove a similar bound for Kloosterman sums of half-integral weight for the Weil representation, which are defined precisely in Section [3](#sec:background){reference-type="ref" reference="sec:background"}. Briefly, let $L$ be an even lattice with determinant $\Delta$ and discriminant group $L'/L$. Then $|L'/L|=|\Delta|$. For $\alpha,\beta\in L'/L$, let $\rho_{\alpha\beta}$ denote the coefficients of the Weil representation $\rho_L$. These coefficients are given by an explicit exponential sum [\[eq:rho-alpha-beta-formula\]](#eq:rho-alpha-beta-formula){reference-type="eqref" reference="eq:rho-alpha-beta-formula"} involving values of the quadratic form $q:L'/L\to \mathbb{Q}/\mathbb{Z}$. Suppose that $c \in \mathbb{Z}^+$, $\frac{m}{2\Delta}\in \mathbb{Z}+q(\alpha)$, and $\frac{n}{2\Delta}\in \mathbb{Z}+q(\beta)$, and let $k$ be a half-integer satisfying [\[eq:sigma-consistency\]](#eq:sigma-consistency){reference-type="eqref" reference="eq:sigma-consistency"}. We define the generalized Kloosterman sum as $$\label{eq:kloo-weil-def} S_{\alpha,\beta}(m,n,c) = e^{-\pi ik/2} \sum_{d(c)^\times} \overline\rho_{\alpha\beta}(\tilde\gamma) e_{2\Delta c}\left(ma+nd\right),$$ where $\gamma=\left( \begin{smallmatrix} a & b \\ c & d \end{smallmatrix} \right)\in \mathop{\mathrm{SL}}_2(\mathbb{Z})$ is any matrix with bottom row $(c\ d)$ and $\tilde\gamma = (\gamma,\sqrt{cz+d})$. The coefficients $\rho_{\alpha\beta}$ satisfy $\rho_{\alpha\beta}(\tilde\gamma)\ll_L 1$, so the trivial bound $S_{\alpha,\beta}(m,n,c) \ll_L c$ holds. Let $g$ denote the rank of $L$ and suppose that $g$ is odd. 
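Before moving to the half-integral weight setting, the classical sum and its Weil bound are easy to probe numerically. The following Python sketch (our own illustration; the names `kloosterman` and `tau` are not from the paper) computes $S(m,n,c)$ by brute force and checks $|S(m,n,c)| \leq \tau(c)(m,n,c)^{\frac 12}c^{\frac 12}$ for small moduli.

```python
import cmath
import math
from math import gcd

def kloosterman(m, n, c):
    """Classical Kloosterman sum S(m,n,c): sum of e_c(m*dbar + n*d) over units d mod c."""
    return sum(cmath.exp(2j * math.pi * (m * pow(d, -1, c) + n * d) / c)
               for d in range(1, c + 1) if gcd(d, c) == 1)

def tau(c):
    """Number of divisors of c."""
    return sum(1 for d in range(1, c + 1) if c % d == 0)

# Weil bound: |S(m,n,c)| <= tau(c) * (m,n,c)^{1/2} * c^{1/2}
for c in range(2, 60):
    for m, n in [(1, 1), (2, 3), (4, 6)]:
        assert abs(kloosterman(m, n, c)) <= tau(c) * math.sqrt(gcd(gcd(m, n), c) * c) + 1e-9
```

The computed sums are real, since the terms for $d$ and $-d$ are complex conjugates.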
In Section [3](#sec:background){reference-type="ref" reference="sec:background"} we show that $$(-1)^{(g-1)/2} m\equiv 0,1\pmod{4}.$$ In the following theorem, $\omega(c)$ denotes the number of distinct primes dividing $c$. **Theorem 1**. *Suppose that $g$ is odd. Let $\alpha,\beta\in L'/L$ and let $m,n$ be integers satisfying $\tfrac{m}{2\Delta}\in \mathbb{Z}+q(\alpha)$ and $\tfrac{n}{2\Delta}\in \mathbb{Z}+q(\beta)$. Write $m=m_0v^2$ where $(-1)^{(g-1)/2}m_0$ is a fundamental discriminant. If $(v,\Delta)=1$ then $$\label{eq:main-weil-bound} S_{\alpha,\beta}(m,n,c) \ll_L 2^{\omega(c)} \tau((v,c)) (m_0n,c)^{\frac 12} c^{\frac 12}.$$* **Remark 1**. The implied constant is of the form $|\Delta|^{Ag}$ for some absolute constant $A$, which can be computed explicitly using the results of Section [4.3](#sec:counting){reference-type="ref" reference="sec:counting"}. **Remark 2**. It is likely that a bound similar to [\[eq:main-weil-bound\]](#eq:main-weil-bound){reference-type="eqref" reference="eq:main-weil-bound"} holds when $g$ is even, but the methods of this paper are not suited to that case. We will deduce [\[eq:main-weil-bound\]](#eq:main-weil-bound){reference-type="eqref" reference="eq:main-weil-bound"} from an identity that relates a sum of Kloosterman sums to a sparse exponential sum. In Section [2](#sec:exp-sum-identity){reference-type="ref" reference="sec:exp-sum-identity"} we state this identity and use it to prove Theorem [Theorem 1](#thm:main-weil-bound){reference-type="ref" reference="thm:main-weil-bound"}. In Section [3](#sec:background){reference-type="ref" reference="sec:background"} we provide background on the discriminant group of a lattice, the Weil representation, and Gauss sums. The proof of the exponential sum identity is given in Section [4](#sec:proof-identity){reference-type="ref" reference="sec:proof-identity"}. 
# The exponential sum identity {#sec:exp-sum-identity} We begin by discussing Kohnen's identity [@kohnen Proposition 5] for plus space Kloosterman sums with the theta multiplier system and a similar identity for the eta multiplier system proved by the first author [@andersen Proposition 6]. Let $k\in \mathbb{R}$ and let $\nu$ be a multiplier system (see [@iwaniec Section 2.6]) of weight $k$ on a congruence subgroup $\Gamma\subseteq \mathop{\mathrm{SL}}_2(\mathbb{Z})$ containing $T=\left( \begin{smallmatrix} 1 & 1 \\ 0 & 1 \end{smallmatrix} \right)$. If $\Gamma_\infty=\langle \pm T \rangle$ denotes the stabilizer of $\infty$ in $\Gamma$, define the Kloosterman sum with multiplier system $\nu$ by $$S(m,n,c,\nu) = \sum_{\gamma = \left( \begin{smallmatrix} a & b \\ c & d \end{smallmatrix} \right)\in \Gamma_\infty \backslash \Gamma / \Gamma_\infty} \overline\nu(\gamma) e_c(ma+nd).$$ Here the sum is only well-defined if $m,n\in \mathbb{Q}$ satisfy a consistency condition involving the value of $\nu(T)$. These Kloosterman sums appear in the Fourier coefficients of Poincaré series of weight $k$ with multiplier system $\nu$. See [@selberg] for more details. Let $\nu_\theta$ denote the multiplier system of weight $\frac 12$ on $\Gamma_0(4)$ for the theta function $\theta(z) = \sum_{n\in \mathbb{Z}}e(n^2z)$. Kohnen identified a distinguished subspace of modular forms for $\nu_\theta$ which he called the plus space. Projection of Poincaré series to the plus space naturally introduces the modified Kloosterman sums $$S^+(m,n,4c,\nu_\theta) = (1-i) S(m,n,4c,\nu_\theta) \times \begin{cases} 1 & \text{ if } c \text{ is even}, \\ 2 & \text{ if } c \text{ is odd}. \end{cases}$$ Kohnen's identity relates these plus space Kloosterman sums to a sparse quadratic Weyl sum. To precisely state the identity, we first fix some notation. Suppose that $N$ is a positive integer and that $m$ and $n$ are squares modulo $4N$. Further suppose that $m$ is a fundamental discriminant. 
Then $m\equiv 0,1\pmod{4}$ and either $m$ is odd and squarefree, or $\frac m4 \equiv 2,3\pmod{4}$ and $\frac m4$ is squarefree. Following [@GKZ Section I.2], if $b^2-4Nac=mn$, define $$\chi_m(aN,b,c) = \begin{cases} \left(\frac {m}{r}\right) & \text{ if } (a,b,c,m)=1, \\ 0 & \text{ otherwise}, \end{cases}$$ where, in the first case, $r$ is any integer coprime to $m$ represented by the quadratic form $aN_1x^2+bxy+cN_2y^2$, for some splitting $N=N_1N_2$, $N_1>0$. Proposition 1 of [@GKZ] gives several properties of $\chi_m(aN,b,c)$, including that it is well-defined and independent of choice of splitting $N=N_1N_2$. **Proposition 1** (Kohnen, Proposition 5 of [@kohnen]). Suppose that $m,n\equiv 0,1\pmod{4}$ and that $m$ is a fundamental discriminant. Then for all $v\in \mathbb{Z}$ we have $$\label{eq:kohnen-id} \sum_{u\mid (v,c)} \left(\frac {m}{u}\right) \sqrt{\frac uc} \, S^+\!\left(\mfrac{mv^2}{u^2},n,\mfrac{4c}u,\nu_\theta\right) = 4\sum_{\substack {\ell(2c) \\ \ell^2 \equiv mn(4c)}} \chi_m\left(c,\ell,\mfrac{\ell^2-mn}{4c}\right)e_{2c}(\ell v).$$ The sum on the right-hand side of [\[eq:kohnen-id\]](#eq:kohnen-id){reference-type="eqref" reference="eq:kohnen-id"} is quite small in absolute value; in particular, if $(mn,4c)=1$, the sum is $\ll c^\varepsilon$. By Möbius inversion in two variables (see Corollary [Corollary 4](#cor:mobius-inv){reference-type="ref" reference="cor:mobius-inv"} below) it follows that $S^+(m,n,4c,\nu_\theta) \ll c^{1/2+\varepsilon}$. **Remark 3**. One notable use of Kohnen's identity is by Duke, Imamoḡlu, and Tóth [@DIT1; @DIT2] as a bridge connecting coefficients of half-integral weight forms and cycle integrals of weight zero forms. In [@andersen] the first author proved an analogue of Kohnen's identity for Kloosterman sums with the Dedekind eta multiplier system $\nu_\eta$ of weight $\frac 12$ on $\mathop{\mathrm{SL}}_2(\mathbb{Z})$. 
Up to a constant, $S(\frac{1}{24},\frac{1}{24}-n,c,\nu_\eta)$ equals the sum $A_c(n)$ appearing in the Hardy-Ramanujan-Rademacher formula for the partition function $p(n)$ (see [@rademacher; @AA]). **Proposition 2**. (Andersen, Proposition 6 of [@andersen]) Suppose that $m,n\equiv 1\pmod{24}$ and that $m$ is a fundamental discriminant. Then for all $v\in \mathbb{Z}$ with $(v,6)=1$ we have $$\begin{gathered} 2\sqrt{-3i}\sum_{u\mid (v,c)} \left(\frac {12}{v/u}\right) \left(\frac {m}{u}\right) \sqrt{\frac uc} \, S\left(\mfrac{mv^2}{24u^2}, \mfrac{n}{24}, \mfrac{c}{u},\nu_\eta\right) \\ = \sum_{\substack{\ell(12c) \\ \ell^2\equiv mn(24c)}} \left(\frac {12}{\ell}\right) \chi_m\left(6c,\ell,\mfrac{\ell^2-mn}{24c}\right) e_{12c}(\ell v). \end{gathered}$$ The Kloosterman sums appearing in these identities can be written as linear combinations of the sums [\[eq:kloo-weil-def\]](#eq:kloo-weil-def){reference-type="eqref" reference="eq:kloo-weil-def"}. We indicate how this is done for $\nu_\eta$; the construction is similar for $\nu_\theta$. Let $L$ denote the lattice $\mathbb{Z}$ with bilinear form $\left\langle x,y \right\rangle = 12xy$ (use $\left\langle x,y \right\rangle=2xy$ instead for $\nu_\theta$). The dual lattice is $L'=\frac{1}{12}\mathbb{Z}$, so for $\alpha,\beta\in L'/L$ we can write $\alpha = \frac{h}{12}$ and $\beta=\frac{j}{12}$ for $h,j\in \mathbb{Z}/12\mathbb{Z}$. In Section [3](#sec:background){reference-type="ref" reference="sec:background"} we will prove that for any $h\in \mathbb{Z}/12\mathbb{Z}$ with $(h,6)=1$ we have $$\label{eq:eta-weil-connection} S\left(\tfrac{m}{24},\tfrac{n}{24},c,\nu_\eta\right) = \left(\frac {12}{h}\right)\sum_{j(12)} \left(\frac {12}{j}\right) S_{\alpha,\beta}(m,n,c).$$ Our first version of the exponential sum identity is a direct generalization of Kohnen's identity for even lattices of rank $1$. 
In this case, without loss of generality we can take $L=\mathbb{Z}$ and $\left\langle x,y \right\rangle=\Delta xy$, where $\Delta=\pm 2N$ for some $N\in \mathbb{Z}^+$. Then $q(x) := \frac 12 \left\langle x,x \right\rangle = \frac 12 \Delta x^2$ and we can write $\alpha,\beta\in L'/L$ as $\alpha = \frac{a}{\Delta}$ and $\beta = \frac{b}{\Delta}$ for $a,b\in \mathbb{Z}/2N\mathbb{Z}$. Define $\sigma$ by [\[eq:sigma-consistency\]](#eq:sigma-consistency){reference-type="eqref" reference="eq:sigma-consistency"}. **Theorem 2**. *Suppose that $L$ has rank $1$. Let $\alpha=\frac{a}{\Delta},\beta=\frac{b}{\Delta}\in L'/L$, and let $m,n$ be integers satisfying $m\equiv a^2\pmod{4N}$ and $n \equiv b^2 \pmod{4N}$. Suppose that $m$ is a fundamental discriminant. Then for any $v\in \mathbb{Z}$ and any $c\geq 1$ we have $$\sum_{u\mid (v,c)} \left(\frac {m}{u}\right) \sqrt{\frac uc} \, S_{\alpha \frac vu, \beta}\left( mv^2/u^2, n, \mfrac cu \right) \\ = \frac{i^{-\sigma}}{\sqrt{2N}}\sum_{\substack{\ell(2Nc) \\ \ell \equiv ab(2N) \\ \ell^2\equiv mn(4Nc) }} \chi_{m}\left(Nc,\ell,\mfrac{\ell^2-mn}{4Nc}\right) e_{\Delta c}(\ell v). \label{eq:main-identity}$$* By [@GKZ Proposition 1] the character $\chi_m$ can be computed using the formula $$\chi_m \left(Nc, \ell, \mfrac{\ell^2-mn}{4Nc}\right) \\= \prod_{\substack{p^\lambda\parallel c \\ p\nmid m}} \left(\frac {m}{p^{\lambda}}\right) \prod_{\substack{p^\lambda\parallel c \\ p\mid m}} \left(\frac {m/p^*}{p^{\lambda+\nu}}\right) \left(\frac {p^*}{(\ell^2-mn)/p^{\lambda+\nu}}\right)$$ where $p^\nu\parallel 2\Delta$ and $$p^* = \begin{cases} (\frac{-1}{p})p & \text{ if $p$ is odd}, \\ (\frac{-1}{m'})2^\mu & \text{ if $p=2$ and $m=2^\mu m'$ with $m'$ odd}. \end{cases}$$ Our second version of the exponential sum identity holds for any lattice of odd rank at the cost of having less-precise information at the "bad" primes, i.e. primes dividing $2(m,\Delta,c)$. 
At these primes we will need to count the number of solutions to the quadratic congruence $$\tilde m x^2 - \left\langle \alpha,y \right\rangle x - q(y) + \left\langle \beta,y \right\rangle - \tilde \ell x + \tilde n \equiv 0\pmod{p^j},$$ where $x\in \mathbb{Z}/p^j\mathbb{Z}$, $y\in L/p^j L$, and $\tilde m = \frac{m}{2\Delta}-q(\alpha)$, $\tilde \ell = \frac{\ell}{\Delta}-\left\langle \alpha,\beta \right\rangle$, and $\tilde n = \frac{n}{2\Delta}-q(\beta)$. (The quantities $\tilde m$, $\tilde \ell$, and $\tilde n$ are integers in each context in which this congruence appears.) Let $N(p^j)$ denote the number of such solutions. We define a function $\xi_{\alpha,\beta}(\ell,m,n,c)$ at prime powers $c=p^\lambda$ and extend to all $c$ multiplicatively. Write $|\Delta|=2N$ and $$m_L = (-4)^{(g-1)/2}m.$$ (In Section [3](#sec:background){reference-type="ref" reference="sec:background"} we will show that $\Delta$ is even whenever $g$ is odd.) In particular, note that $m_L=m$ when $g=1$. 1. If $p$ is odd and $(m,\Delta,p)=1$ then $$\xi_{\alpha,\beta}(\ell,m,n,p^\lambda) = \begin{dcases} \left(\frac {m_L}{p^\lambda}\right) & \text{ if } p\nmid m \text{ and } \ell^2\equiv mn\pmod{p^{\lambda+\nu}}, \\ \left(\frac {m_L/p^*}{p^{\lambda}}\right) \left(\frac {p^*}{(\ell^2-mn)/p^{\lambda}}\right) & \text{ if } p \mid m \text{ and } \ell^2\equiv mn\pmod{p^{\lambda}}, \\ 0 & \text{ otherwise}. \end{dcases}$$ (Note that in the second case $\nu=0$.) 2. If $p=2$ or if $p$ is odd and $(m,\Delta,p)>1$ then $$\xi_{\alpha,\beta}(\ell,m,n,p^\lambda) = \begin{dcases} p^{-\lambda \frac{g+1}{2}} \left(N(p^\lambda) - p^gN(p^{\lambda-1})\right) & \text{ if }\ell^2 \equiv mn \pmod{2Np^{2\left\lfloor\frac{\lambda}{2}\right\rfloor}}, \\ 0 & \text{ otherwise}. \end{dcases}$$ Here $p^\nu \parallel 2\Delta$, as in the definition of $\chi_m$. **Theorem 3**. *Suppose that $g:=\operatorname{rank} L>1$ is odd. 
Let $\alpha,\beta \in L'/L$ and let $m,n$ be integers satisfying $\frac{m}{2\Delta}\in \mathbb{Z}+q(\alpha)$ and $\frac{n}{2\Delta} \in \mathbb{Z}+q(\beta)$. Suppose that $(-1)^{(g-1)/2}m$ is a fundamental discriminant. Then for any $v\in \mathbb{Z}$ and any $c\geq 1$ we have $$\label{eq:exp-sum-general} \sum_{u\mid (v,c)} \left(\frac {m_L}{u}\right) \sqrt{\frac uc} \, S_{\alpha \frac vu, \beta}\left( mv^2/u^2, n, \mfrac cu \right) \\ = \frac{i^{-\sigma}}{\sqrt{2N}} \sum_{\substack{\ell(2Nc) \\ \frac{\ell}{\Delta} \equiv \left\langle \alpha,\beta \right\rangle(1)}} \xi_{\alpha,\beta}(\ell,m,n,c) e_{\Delta c}(\ell v).$$ Furthermore, for all $\ell,m,n,c$ we have $$\label{eq:xi-L-bound} \xi_{\alpha,\beta}(\ell,m,n,c)\ll_L 1.$$* **Remark 4**. The Kloosterman sums $S_{\alpha,\beta}(0,n,c)$ appear naturally in the Fourier coefficients of Eisenstein series for the Weil representation, which are studied in [@bruinier-kuss] and [@schwagenscheidt]. In the formulas given in those papers, quantities analogous to $N(p^\lambda)-p^g N(p^{\lambda-1})$ also appear at the bad primes. **Corollary 4**. *With the assumptions of Theorem [Theorem 3](#thm:exp-sum-general){reference-type="ref" reference="thm:exp-sum-general"}, we have $$S_{\alpha v,\beta}\left(mv^2,n, c\right) \\= \frac{i^{-\sigma}\sqrt c}{\sqrt{2N}} \sum_{u\mid(v,c)} \mu(u) \left(\frac {m_L}{u}\right)\sum_{\substack{\ell(2Nc/u) \\ \frac{\ell}{\Delta} \equiv \left\langle \alpha,\beta \right\rangle(1)}} \xi_{\alpha,\beta}(\ell,m,n,c/u) e_{\Delta c}(\ell v).$$* *Proof.* We apply Möbius inversion in two variables. 
The identity [\[eq:exp-sum-general\]](#eq:exp-sum-general){reference-type="eqref" reference="eq:exp-sum-general"} can be written $$\sum_{u\mid (v,c)} \left(\frac {m_L}{u}\right) f(v/u,c/u) = g(v,c).$$ Therefore $$\begin{aligned} \sum_{u\mid (v,c)} \mu(u) \left(\frac {m_L}{u}\right) g(v/u,c/u) &= \sum_{u\mid (v,c)} \mu(u) \left(\frac {m_L}{u}\right) \sum_{w\mid (v/u,c/u)} \left(\frac {m_L}{w}\right) f(v/uw,c/uw) \\ &= \sum_{\substack{v=uwa \\ c=uwb}} \mu(u) \left(\frac {m_L}{uw}\right) f(a,b) \\ &= \sum_{\substack{a\mid v, \, b\mid c \\ v/a=c/b}} \left(\frac {m_L}{v/a}\right) f(a,b) \sum_{u\mid v/a} \mu(u) = f(v,c). \end{aligned}$$ The corollary follows immediately. ◻ We can now prove Theorem [Theorem 1](#thm:main-weil-bound){reference-type="ref" reference="thm:main-weil-bound"}, assuming the truth of Theorems [Theorem 2](#thm:exp-sum-identity-g=1){reference-type="ref" reference="thm:exp-sum-identity-g=1"} and [Theorem 3](#thm:exp-sum-general){reference-type="ref" reference="thm:exp-sum-general"}. *Proof of Theorem [Theorem 1](#thm:main-weil-bound){reference-type="ref" reference="thm:main-weil-bound"}.* Suppose that $\alpha,\beta\in L'/L$ and that $\frac{m}{2\Delta}-q(\alpha),\frac{n}{2\Delta}-q(\beta)\in \mathbb{Z}$. Write $m=m_0v^2$ with $(-1)^{(g-1)/2}m_0$ fundamental and $(v,\Delta)=1$. We assume here that $g>1$; the case $g=1$ is similar and a bit easier. Since $v$ and $|L'/L|$ are coprime, there exists an $\alpha'\in L'/L$ such that $\alpha=v\alpha'$. Since $m_0v^2-\left\langle \Delta v\alpha',v\alpha' \right\rangle\in 4N\mathbb{Z}$, we have $\frac{m_0}{2\Delta}-q(\alpha')\in \mathbb{Z}$. By Corollary [Corollary 4](#cor:mobius-inv){reference-type="ref" reference="cor:mobius-inv"} we have $$S_{\alpha' v,\beta}\left(m_0v^2,n, c\right) \ll_L \sqrt c \sum_{u\mid (v,c)} R(m_0n,c/u),$$ where $R(y,c)$ is the number of solutions to the quadratic congruence $x^2\equiv y\pmod{c}$. 
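(As an aside, the count $R(y,c)$ and the bound $R(y,c)\le 2^{\omega(c)+1}(y,c)^{\frac 12}$ obtained in the rest of this proof are easy to check by brute force. The Python sketch below is our own illustration and plays no role in the argument.)

```python
from math import gcd

def R(y, c):
    """Number of solutions x mod c of x^2 = y (mod c), by exhaustive search."""
    return sum(1 for x in range(c) if (x * x - y) % c == 0)

def omega(c):
    """Number of distinct prime factors of c."""
    count, p = 0, 2
    while p * p <= c:
        if c % p == 0:
            count += 1
            while c % p == 0:
                c //= p
        p += 1
    return count + (1 if c > 1 else 0)

# check R(y,c) <= 2^{omega(c)+1} * (y,c)^{1/2} on small moduli
for y in range(1, 40):
    for c in range(1, 80):
        assert R(y, c) <= 2 ** (omega(c) + 1) * gcd(y, c) ** 0.5 + 1e-9
```

The bound is attained, for example, at $y=1$, $c=24$, where all eight units are square roots of $1$.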
Since $R(y,c)$ is multiplicative as a function of $c$, it suffices to evaluate $R(y,p^\lambda)$ for each prime power $p^\lambda\parallel c$. Suppose that $p$ is odd. (In the case $p=2$ the estimates given below are correct if we multiply each upper bound by $2$.) If $p\nmid y$ then $R(y,p^\lambda)\leq 2$ by a simple argument using Hensel's lemma. Now suppose that $y=p^\mu y'$ with $p\nmid y'$ and $\mu\geq 1$. Then any solution to $x^2\equiv y\pmod{p^\lambda}$ can be written $x=p^\delta x'$, where $\delta = \min(\lceil \frac\mu 2 \rceil, \lceil \frac\lambda 2 \rceil)$. Then $$R(y,p^\lambda) \leq \begin{cases} 2p^{\mu-\delta} & \text{ if } \mu<\lambda, \\ p^{\lambda-\delta} & \text{ if } \mu\geq \lambda. \end{cases}$$ It follows that $R(y,p^\lambda) \leq 2p^{\min(\mu,\lambda)/2}$, so $$R(y,c) \leq 2^{\omega(c)+1} (y,c)^{\frac 12}.$$ Theorem [Theorem 1](#thm:main-weil-bound){reference-type="ref" reference="thm:main-weil-bound"} follows. ◻ # Background {#sec:background} ## Lattices and discriminant groups Let $L$ be an even lattice with nondegenerate symmetric bilinear form $\langle \cdot, \cdot \rangle$, and let $q(x) = \frac 12 \left\langle x,x \right\rangle$ denote the associated $\mathbb{Z}$-valued quadratic form. Let $L'$ denote the dual lattice $$\label{eq:L'-def} L' = \left\{ x\in L\otimes \mathbb{Q}: \left\langle x,y \right\rangle \in \mathbb{Z}\text{ for all }y\in L \right\};$$ then the quotient $L'/L$ is a finite abelian group. We denote the standard basis of $\mathbb{C}[L'/L]$ by $\{\mathfrak e_\alpha : \alpha \in L'/L\}$. By identifying $L$ with $\mathbb{Z}^g$ we may write $\left\langle x,y \right\rangle = x^TMy$ for all $x,y\in \mathbb{Z}^g$, for some symmetric integer matrix $M$ with even diagonal. Let $\Delta = \det M$; then $|L'/L|=|\Delta|$. If $\alpha\in L'$ then we can write $\alpha = M^{-1}a$ for some $a\in \mathbb{Z}^g$ and we have $\Delta \alpha\in L$. Here we give a few lemmas that will be useful in the following section. **Lemma 5**. 
*For all $\alpha,\beta,\gamma\in L'$ we have $$\Delta(\left\langle \gamma,\alpha \right\rangle\beta-\left\langle \gamma,\beta \right\rangle{\alpha})\in L.$$* *Proof.* We write $\alpha=M^{-1}a,\beta=M^{-1}b,\gamma=M^{-1}c$ for some $a,b,c\in \mathbb{Z}^g$. In this notation, it suffices to prove that the vector $$x = \det(M)(M^{-1}bc^TM^{-1}a-M^{-1}ac^TM^{-1}b)$$ is in $\mathbb{Z}^g$. Notice that this quantity is linear in $a,b,$ and $c$, so we may assume that $a=e_i$, $b=e_j$, and $c=e_k$, where $e_i$ is the $i$-th standard basis vector. If $x_\ell$ denotes the $\ell$-th component of $x$, then $$\frac{x_\ell}{\det (M)} = e_\ell^T M^{-1}e_je_k^TM^{-1}e_i-e_\ell^TM^{-1}e_ie_k^TM^{-1}e_j.$$ Note that $\det(M)e_\ell^TM^{-1}e_j=(-1)^{j+\ell}M_{j,\ell}$, where $M_{j,\ell}$ denotes the $j,\ell$-th minor of $M$, and similarly for the other products, so we obtain $$\det(M)x_\ell = (-1)^{i+j+k+\ell} \left(M_{j,\ell}M_{i,k}-M_{i,\ell}M_{j,k}\right).$$ Sylvester's determinant identity [@bareiss], also known as the Desnanot-Jacobi identity, shows that the expression on the right-hand side is divisible by $\det M$. It follows that $x_\ell \in \mathbb{Z}$. ◻ If $g$ is odd then $\Delta$ is even by Lemma 14.3.21 of [@cohen-stromberg]. As in the introduction, we write $|\Delta| = 2N$ with $N\in \mathbb{Z}^+$. **Lemma 6**. *Suppose that $g$ is odd. Let $\alpha,\beta\in L'/L$. Let $\ell,m,n\in \mathbb{Z}$ such that $$\tfrac{\ell}{\Delta}-\left\langle \alpha,\beta \right\rangle\in \mathbb{Z}, \quad \tfrac{m}{2\Delta}-q(\alpha)\in \mathbb{Z}, \ \text{ and } \ \tfrac{n}{2\Delta}-q(\beta) \in \mathbb{Z}.$$ Then $\ell^2\equiv mn \pmod{2N}$. If $g=1$ then $\ell^2\equiv mn\pmod{4N}$.* *Proof.* The assumptions on $\ell$, $m$, and $n$ are equivalent to $$\begin{aligned} \ell &\equiv \Delta\left\langle \alpha,\beta \right\rangle \pmod{2N}, \\ m &\equiv \Delta\left\langle \alpha,\alpha \right\rangle \pmod{4N}, \\ n &\equiv \Delta\left\langle \beta,\beta \right\rangle \pmod{4N}. 
\end{aligned}$$ It follows that $$\ell^2-mn \equiv \Delta^2(\left\langle \alpha,\beta \right\rangle^2-\left\langle \alpha,\alpha \right\rangle\left\langle \beta,\beta \right\rangle) \pmod{4N}.$$ If $g=1$ then $\left\langle \alpha,\beta \right\rangle^2=\left\langle \alpha,\alpha \right\rangle\left\langle \beta,\beta \right\rangle$ so $\ell^2\equiv mn\pmod{4N}$. If $g>1$ then the lemma will follow if we can show that $\Delta(\left\langle \alpha,\beta \right\rangle^2-\left\langle \alpha,\alpha \right\rangle\left\langle \beta,\beta \right\rangle)$ is an integer. By Lemma [Lemma 5](#lem:alpha-beta-gamma){reference-type="ref" reference="lem:alpha-beta-gamma"} we have $$x := \Delta (\left\langle \alpha,\beta \right\rangle\alpha - \left\langle \alpha,\alpha \right\rangle\beta) \in L.$$ Thus $\left\langle x,\beta \right\rangle\in \mathbb{Z}$, which completes the proof. ◻ **Lemma 7**. *Suppose that $g$ is odd. Let $\alpha \in L'/L$ and suppose that $\frac{m}{2\Delta}-q(\alpha)\in \mathbb{Z}$. Then $$(-1)^{(g-1)/2} m\equiv 0,1\pmod{4}.$$* *Proof.* Let $\alpha \in L'/L$ and write $\alpha = M^{-1}a$ for $a\in \mathbb{Z}^g$ so that $q(\alpha)=\frac 12 a^TM^{-1}a$. Then $-m=\frac m\Delta \det(-M)=\det(S)$, where $S$ is the block matrix $$S=\left( \begin{matrix} \frac{m}{\Delta}-a^TM^{-1}a & -a^T \\ -a & -M \end{matrix} \right).$$ Note that $S$ is a symmetric integer matrix with even diagonal, so the result follows from Lemma 14.3.20 of [@cohen-stromberg]. ◻ ## The Weil representation Good references for the background material in this subsection are [@bruinier Chapter 1] and [@cohen-stromberg Chapter 14]. Let $\mathop{\mathrm{Mp}}_2(\mathbb{R})$ be the metaplectic group, the elements of which are of the form $(\gamma,\phi)$, where $\gamma=\left( \begin{smallmatrix} a & b \\ c & d \end{smallmatrix} \right)\in \mathop{\mathrm{SL}}_2(\mathbb{R})$ and $\phi:\mathbb H\to\mathbb{C}$ is a holomorphic function with $\phi^2(\tau)=c\tau+d$. 
The group law on $\mathop{\mathrm{Mp}}_2(\mathbb{R})$ is given by $$(\gamma_1,\phi_1(\tau))(\gamma_2,\phi_2(\tau)) = (\gamma_1\gamma_2, \phi_1(\gamma_2\tau)\phi_2(\tau)).$$ Let $\mathop{\mathrm{Mp}}_2(\mathbb{Z})$ denote the inverse image of $\mathop{\mathrm{SL}}_2(\mathbb{Z})$ under the covering map $(\gamma,\phi)\mapsto \gamma$. Then $\mathop{\mathrm{Mp}}_2(\mathbb{Z})$ is generated by the elements $$T=\left(\left( \begin{smallmatrix} 1 & 1 \\ 0 & 1 \end{smallmatrix} \right), 1\right) \quad \text{ and } \quad S = \left(\left( \begin{smallmatrix} 0 & -1 \\ 1 & 0 \end{smallmatrix} \right),\sqrt\tau\right)$$ and the center of $\mathop{\mathrm{Mp}}_2(\mathbb{Z})$ is generated by $$Z = S^2 = (ST)^3 = \left(\left( \begin{smallmatrix} -1 & 0 \\ 0 & -1 \end{smallmatrix} \right),i\right).$$ The Weil representation associated with the lattice $L$ is the unitary representation $$\rho_L : \mathop{\mathrm{Mp}}_2(\mathbb{Z}) \to \mathbb{C}[L'/L]$$ given by $$\begin{aligned} \rho_L(T) \mathfrak{e}_\alpha &= e(q(\alpha))\mathfrak{e}_\alpha, \label{eq:T-transform} \\ \rho_L(S) \mathfrak{e}_\alpha &= \frac{i^{(b^--b^+)/2}}{\sqrt{|L'/L|}} \sum_{\beta \in L'/L} e(-\left\langle \alpha,\beta \right\rangle)\mathfrak{e}_\beta.\end{aligned}$$ Here $(b^+,b^-)$ is the signature of $L$. 
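These generator formulas can be tested numerically. The Python sketch below (our own illustration, not part of the text) builds $\rho_L(T)$ and $\rho_L(S)$ for the rank-one lattice $\mathbb{Z}$ with $\left\langle x,y \right\rangle = 12xy$ (signature $(1,0)$, $|L'/L| = 12$, $q(h/12) = h^2/24$) and checks that $\rho_L(S)$ is unitary and that the relation $S^2 = (ST)^3$ is respected.

```python
import cmath

def e(x):
    """e(x) = exp(2*pi*i*x)."""
    return cmath.exp(2j * cmath.pi * x)

D = 12  # |L'/L| for L = Z with <x,y> = 12xy
phase = cmath.exp(-1j * cmath.pi / 4)  # i^{(b^- - b^+)/2} for signature (1,0)

rhoT = [[e(h * h / 24) if h == j else 0 for j in range(D)] for h in range(D)]
rhoS = [[phase / cmath.sqrt(D) * e(-h * j / D) for j in range(D)] for h in range(D)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def close(A, B, tol=1e-9):
    return all(abs(A[i][j] - B[i][j]) < tol for i in range(len(A)) for j in range(len(A)))

# unitarity: rho(S) * rho(S)^dagger = identity
identity = [[1 if i == j else 0 for j in range(D)] for i in range(D)]
dag = [[rhoS[j][i].conjugate() for j in range(D)] for i in range(D)]
assert close(matmul(rhoS, dag), identity)

# the relation S^2 = (ST)^3 in Mp_2(Z) carries over to the representation
ST = matmul(rhoS, rhoT)
assert close(matmul(rhoS, rhoS), matmul(ST, matmul(ST, ST)))
```

Both sides of the last assertion equal $\rho_L(Z)$, which sends $\mathfrak e_\alpha$ to $-i\,\mathfrak e_{-\alpha}$ for this lattice.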
For $\frak g\in \mathop{\mathrm{Mp}}_2(\mathbb{Z})$ we define the coefficient $\rho_{\alpha\beta}(\frak g)$ of the representation $\rho_L$ by $$\rho_L(\frak g) \frak e_\beta = \sum_{\alpha\in L'/L} \rho_{\alpha\beta}(\frak g) \frak e_\alpha.$$ Shintani [@shintani] gave the following formula for the coefficients $\rho_{\alpha\beta}(\frak g)$: if $\frak g = (\left( \begin{smallmatrix} a & b \\ c & d \end{smallmatrix} \right), \sqrt{cz+d})$ and $c>0$, then $$\label{eq:rho-alpha-beta-formula} \rho_{\alpha\beta}(\frak g) = \frac{i^{\frac{b^--b^+}{2}}}{c^{(b^++b^-)/2}\sqrt{|L'/L|}} \sum_{r\in L/cL} e_c(aq(\alpha+r)-\left\langle \beta,\alpha+r \right\rangle+dq(\beta)).$$ Since $\rho_L$ factors through a double cover of the finite group $\mathop{\mathrm{SL}}_2(\mathbb{Z}/4N\mathbb{Z})$, we have the upper bound $\rho_{\alpha\beta}(\mathfrak g)\ll_L 1$. If $f:\mathbb H\to \mathbb{C}[L'/L]$ is a modular form for the Weil representation then $f$ satisfies the transformation law $$f(\gamma z) = \phi^{2k}(z) \rho_L(\mathfrak g) f(z) \quad \text{ for all } \mathfrak g = (\gamma,\phi)\in \mathop{\mathrm{Mp}}_2(\mathbb{Z}).$$ Setting $\mathfrak g=Z$, we find that such an $f$ satisfies $f = (-1)^{2k+b^--b^+} f$. Thus $f=0$ unless $k$ satisfies the consistency condition $$\label{eq:sigma-consistency} \sigma := k + \tfrac 12(b^--b^+) \in \mathbb{Z}.$$ Suppose that $k$ satisfies [\[eq:sigma-consistency\]](#eq:sigma-consistency){reference-type="eqref" reference="eq:sigma-consistency"}. 
Then for $c \in \mathbb{Z}^+$, $\frac{m}{2\Delta}\in \mathbb{Z}+q(\alpha)$, and $\frac{n}{2\Delta}\in \mathbb{Z}+q(\beta)$, we define the generalized Kloosterman sum as $$\label{eq:kloo-weil-def-2} S_{\alpha,\beta}(m,n,c) = e^{-\pi ik/2} \sum_{d(c)^\times} \overline\rho_{\alpha\beta}(\tilde\gamma) e_{2\Delta c}\left(ma+nd\right).$$ Here $\gamma=\left( \begin{smallmatrix} a & b \\ c & d \end{smallmatrix} \right)\in \mathop{\mathrm{SL}}_2(\mathbb{Z})$ is any matrix with bottom row $(c\ d)$ and $\tilde\gamma = (\gamma,\sqrt{cz+d})$ is a lift of $\gamma$ to $\mathop{\mathrm{Mp}}_2(\mathbb{Z})$. By [\[eq:T-transform\]](#eq:T-transform){reference-type="eqref" reference="eq:T-transform"} we have $\rho_{\alpha\beta}(T^r \frak g T^s) = e(rq(\alpha)+sq(\beta))\rho_{\alpha\beta}(\mathfrak g)$, so the sum [\[eq:kloo-weil-def-2\]](#eq:kloo-weil-def-2){reference-type="eqref" reference="eq:kloo-weil-def-2"} is independent of the choice of representatives for $(\mathbb{Z}/c\mathbb{Z})^\times$ and the choice of matrix $\gamma$. **Remark 5**. While the weight $k$ does not play a major role in the definition [\[eq:kloo-weil-def-2\]](#eq:kloo-weil-def-2){reference-type="eqref" reference="eq:kloo-weil-def-2"}, we refer to the sums as half-integral weight Kloosterman sums when $g=b^++b^-$ is odd because of condition [\[eq:sigma-consistency\]](#eq:sigma-consistency){reference-type="eqref" reference="eq:sigma-consistency"}. We conclude this subsection by proving equation [\[eq:eta-weil-connection\]](#eq:eta-weil-connection){reference-type="eqref" reference="eq:eta-weil-connection"}. Let $L$ denote the lattice $\mathbb{Z}$ with bilinear form $\left\langle x,y \right\rangle = 12xy$ (use $\left\langle x,y \right\rangle=2xy$ instead for $\nu_\theta$). The dual lattice is $L'=\frac{1}{12}\mathbb{Z}$, so we can write $\alpha = \frac{h}{12}$ and $\beta=\frac{j}{12}$ for $h,j\in \mathbb{Z}/12\mathbb{Z}$. 
If $F(z) = \sum_{h(12)} \left(\frac {12}{h}\right) \eta(z) \mathfrak{e}_{\alpha}$ (where we emphasize that $\mathfrak{e}_\alpha$ depends on $h$) then $$F(\gamma z) = \rho_L(\tilde\gamma) (cz+d)^{\frac 12} F(z) \qquad \text{ for all }\gamma = \left( \begin{smallmatrix} a & b \\ c & d \end{smallmatrix} \right)\in \mathop{\mathrm{SL}}_2(\mathbb{Z}),$$ where $\tilde \gamma=(\gamma,\sqrt{cz+d})\in \mathop{\mathrm{Mp}}_2(\mathbb{Z})$ (see [@BO Section 3.2]). Therefore $$\begin{aligned} \nu_\eta(\gamma) F(z) &= (cz+d)^{-\frac 12}F(\gamma z) \\ &= \sum_{h(12)} \left(\frac {12}{h}\right) \eta(z) \rho_L(\tilde \gamma)\mathfrak{e}_\alpha = \sum_{j(12)} \sum_{h(12)} \left(\frac {12}{h}\right)\rho_{\beta\alpha}(\tilde\gamma) \eta(z) \mathfrak{e}_\beta,\end{aligned}$$ from which it follows that $$\nu_\eta(\gamma) = \left(\frac {12}{h}\right) \sum_{j(12)} \left(\frac {12}{j}\right) \rho_{\alpha\beta}(\tilde\gamma) \qquad \text{ for all }\gamma \in \mathop{\mathrm{SL}}_2(\mathbb{Z}),$$ for any $h\in \mathbb{Z}/12\mathbb{Z}$ with $(h,6)=1$. Thus, for all such $h$ we have $$S\left(\tfrac{m}{24},\tfrac{n}{24},c,\nu_\eta\right) = \left(\frac {12}{h}\right)\sum_{j(12)} \left(\frac {12}{j}\right) S_{\alpha,\beta}(m,n,c).$$ ## Gauss sums Let $G(c)$ denote the Gauss sum $$G(c) = \sum_{x(c)} e_c(x^2).$$ The evaluation of these sums is a classical result; see Chapter 1 of [@BEW] for a thorough treatment. For odd $c$ we have $$\label{eq:gauss-odd} G(c) = \varepsilon_c \sqrt{c},$$ where $$\varepsilon_c = \begin{cases} 1 & \text{ if }c\equiv 1\pmod{4}, \\ i & \text{ if }c\equiv 3\pmod{4}. \end{cases}$$ Furthermore, if $(a,c)=1$ then $$\label{eq:Gauss-y} \sum_{x(c)} e_c(ax^2) = \left(\frac {a}{c}\right) G(c).$$ When $c=2^\lambda$ we will encounter the more general Gauss sums $$G(a,b,c) = \sum_{x(c)} e_{c}(ax^2+bx).$$ These are evaluated in Chapter 1 of [@BEW]. 
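Both [\[eq:gauss-odd\]](#eq:gauss-odd){reference-type="eqref" reference="eq:gauss-odd"} and [\[eq:Gauss-y\]](#eq:Gauss-y){reference-type="eqref" reference="eq:Gauss-y"} are easy to confirm numerically for small odd moduli. The Python sketch below is our own illustration, with `jacobi` implementing the Jacobi symbol $\left(\frac ac\right)$ via quadratic reciprocity.

```python
import cmath
import math

def gauss_sum(c):
    """G(c) = sum over x mod c of e_c(x^2)."""
    return sum(cmath.exp(2j * math.pi * x * x / c) for x in range(c))

def jacobi(a, n):
    """Jacobi symbol (a/n) for odd n > 0, via quadratic reciprocity."""
    a %= n
    result = 1
    while a:
        while a % 2 == 0:  # pull out factors of 2 using the supplementary law
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a        # reciprocity step
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

# G(c) = eps_c * sqrt(c) for odd c
for c in range(1, 40, 2):
    eps = 1 if c % 4 == 1 else 1j
    assert abs(gauss_sum(c) - eps * math.sqrt(c)) < 1e-8

# sum_x e_c(a x^2) = (a/c) G(c) whenever (a,c) = 1
for c in range(3, 24, 2):
    for a in range(1, c):
        if math.gcd(a, c) == 1:
            twisted = sum(cmath.exp(2j * math.pi * a * x * x / c) for x in range(c))
            assert abs(twisted - jacobi(a, c) * gauss_sum(c)) < 1e-8
```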
We have $G(a,b,c) = 0$ unless $(a,c)\mid b$, and if $a$ is odd then $$\label{eq:gauss-even} \sum_{x(2^\lambda)} e_{2^\lambda}(ax^2+bx) = \begin{cases} 2 & \text{ if } \lambda = 1 \text{ and $b$ is odd}, \\ e_{2^\lambda}\left(-\overline a (b/2)^2\right)(1+i)\varepsilon_a^{-1} \left(\frac {2}{a}\right)^\lambda 2^{\lambda/2} & \text{ if }\lambda\geq 2 \text{ and $b$ is even}, \\ 0 & \text{ otherwise}, \end{cases}$$ where $\overline a a \equiv 1\pmod{2^\lambda}$. More generally, suppose that $f$ is a quadratic form on $\mathbb{Z}^g$, given by $f(x) = \frac 12 x^T M x$, where $M$ is a symmetric $g\times g$ integer matrix with even diagonal. Let $\Delta = \det M$. If $c$ is odd and $(\Delta,c)=1$ then $$\label{eq:Gauss-sum-quadratic-form} \sum_{x \in (\mathbb{Z}/c\mathbb{Z})^g} e_c(f(x)) = \left(\frac {\overline 2^g\Delta}{c}\right) G(c)^{g},$$ where $\overline 2 2\equiv 1\pmod{c}$. This formula is proved by Weber in [@weber Section 6], see also [@cohen]. It can be proved by first reducing to the case where $c=p^\lambda$ is a prime power, then using the fact that $f$ can be diagonalized over $\mathbb{Z}_p$ when $p$ is odd. Lastly, we will encounter the sum $$T(n,p^\lambda) = \sum_{d(p^\lambda)^\times} \left(\frac {d}{p}\right) e_{p^\lambda}(dn)$$ for an odd prime $p$. By replacing $d$ by $d+p$, we see that $T(n,p^\lambda) = 0$ unless $n\equiv 0\pmod{p^{\lambda-1}}$. In that case, $T(n,p^\lambda) = p^{\lambda-1}T(n/p^{\lambda-1},p)$, and this latter sum is the Gauss sum attached to the character $(\frac{\cdot}{p})$, which is evaluated in [@BEW Chapter 1]. 
We conclude that $$\label{eq:twisted-exp-sum} \sum_{d(p^\lambda)^\times} \left(\frac {d}{p}\right) e_{p^\lambda}(d n) = \begin{dcases} \varepsilon_p p^{\lambda-\frac12} \left(\frac {n/p^{\lambda-1}}{p}\right) & \text{ if }n \equiv 0\pmod{p^{\lambda-1}}, \\ 0 & \text{ otherwise.} \end{dcases}$$ By a similar method (instead replacing $d$ by $d+4$) we have $$\label{eq:twisted-exp-sum-2} \sum_{d(2^\lambda)^\times} \left(\frac {-1}{d}\right) e_{2^\lambda}(d n) = \begin{dcases} 2^{\lambda-1}i \left(\frac {-4}{n/2^{\lambda-2}}\right) & \text{ if }n \equiv 0\pmod{2^{\lambda-2}}, \\ 0 & \text{ otherwise.} \end{dcases}$$ # Proof of Theorems [Theorem 2](#thm:exp-sum-identity-g=1){reference-type="ref" reference="thm:exp-sum-identity-g=1"} and [Theorem 3](#thm:exp-sum-general){reference-type="ref" reference="thm:exp-sum-general"} {#sec:proof-identity} Fix $\alpha,\beta,m,n$ satisfying $\frac{m}{2\Delta}-q(\alpha)\in \mathbb{Z}$ and $\frac{n}{2\Delta}-q(\beta)\in \mathbb{Z}$. Suppose that $(-1)^{(g-1)/2}m$ is a fundamental discriminant. Let $h = \frac{g+1}{2}$ and for convenience set $$\chi_m \left(Nc, \ell, \mfrac{\ell^2-mn}{4Nc}\right) = 0 \quad \text{ if }\ell^2\not\equiv mn\pmod{4Nc}.$$ Let $$L_v(c) = i^\sigma \sqrt{2N}\sum_{u\mid (v,c)} \left(\frac {m_L}{u}\right) \sqrt{\frac uc} \, S_{\alpha \frac vu, \beta}\left( mv^2/u^2, n, \mfrac cu \right)$$ and $$R_v(c) = \begin{dcases} \sum \chi_{m}(Nc,\ell,\tfrac{\ell^2-mn}{4Nc}) e_{\Delta c}(\ell v) & \text{ if }g=1, \\ \sum \xi_{\alpha,\beta}(\ell,m,n,c) e_{\Delta c}(\ell v) & \text{ if }g>1, \end{dcases}$$ where $\ell$ runs mod $2Nc$ with $\frac{\ell}{\Delta}-\left\langle \alpha,\beta \right\rangle\in \mathbb{Z}$ in both sums. 
Note that $R_v(c)$ is periodic in $v$ with period $2Nc$, and its Fourier transform equals $$\label{R-fourier} \frac{1}{2Nc}\sum_{v(2Nc)} e_{\Delta c}(-v\ell) R_v(c) = \begin{cases} \chi_m (Nc,\ell,\tfrac{\ell^2-mn}{4Nc}) & \text{ if } \tfrac{\ell}{\Delta} \equiv \left\langle \alpha,\beta \right\rangle (1) \text{ and } g=1, \\ \xi_{\alpha,\beta}(\ell,m,n,c) & \text{ if }\tfrac{\ell}{\Delta} \equiv \left\langle \alpha,\beta \right\rangle (1) \text{ and } g>1,\\ 0 & \text{ otherwise}. \end{cases}$$ We claim that $L_v(c)$ is also periodic in $v$ with period $2Nc$. After inserting the definition of the Kloosterman sum [\[eq:kloo-weil-def\]](#eq:kloo-weil-def){reference-type="eqref" reference="eq:kloo-weil-def"} and the formula for the coefficients of the Weil representation [\[eq:rho-alpha-beta-formula\]](#eq:rho-alpha-beta-formula){reference-type="eqref" reference="eq:rho-alpha-beta-formula"} into the definition of $L_v(c)$, we obtain $$\begin{gathered} \label{eq:Lv-three-sums} L_v(c) = \sum_{u\mid (v,c)} \left(\frac {m_L}{u}\right) (c/u)^{-h} \sum_{d(c/u)^\times} \\ \times \sum_{r\in L/ (c/u) L} e_{c/u}\left( \left(\mfrac{m(v/u)^2}{2\Delta}-q(\alpha v/u+r)\right)a +\left\langle \beta,\alpha v/u+r \right\rangle + \left(\mfrac n{2\Delta} - q(\beta)\right)d \right),\end{gathered}$$ where $ad\equiv 1\pmod{c/u}$. Since $4Nq(\alpha)\in \mathbb{Z}$ and $2N\alpha\in L$ we have $$\begin{aligned} \left\langle \alpha v/u+r,2N(c/u)\alpha \right\rangle &= (c/u)\left(4Nq(\alpha)(v/u)+2N\left\langle r,\alpha \right\rangle\right) \equiv 0\pmod{c/u}, \\ q(2N\alpha c/u) &= 4N^2q(\alpha)(c/u)^2 \equiv 0\pmod{c/u}, \\ \left\langle \beta,2N\alpha(c/u) \right\rangle &= \left\langle \beta,2N\alpha \right\rangle(c/u) \equiv 0\pmod{c/u}.\end{aligned}$$ Thus $L_v(c)$ is indeed periodic in $v$ with period $2Nc$. 
So it suffices to prove that $$\label{eq:cal-l} \mathcal L_\ell(c) := \frac{1}{2Nc}\sum_{v(2Nc)} e_{\Delta c}(-v\ell) L_v(c)$$ agrees with the right-hand side of [\[R-fourier\]](#R-fourier){reference-type="eqref" reference="R-fourier"} for all $\ell \in \mathbb{Z}$. By [\[eq:cal-l\]](#eq:cal-l){reference-type="eqref" reference="eq:cal-l"} and [\[eq:Lv-three-sums\]](#eq:Lv-three-sums){reference-type="eqref" reference="eq:Lv-three-sums"}, the quantity $\mathcal L_\ell(c)$ comprises four sums $$\sum_{v(2Nc)} \sum_{u\mid (v,c)} \sum_{d(c/u)^\times} \sum_{r\in L/ (c/u) L}$$ which we reorder as $$\sum_{u\mid c} \sum_{d(c/u)^\times} \sum_{\substack{v(2Nc) \\ u\mid v}} \sum_{r\in L/(c/u)L}.$$ We replace $v$ by $uv$, then $u$ by $c/u$, and rearrange terms to obtain $$\begin{gathered} \mathcal L_\ell(c) = \frac{1}{2Nc} \sum_{u\mid c} \left(\frac {m_L}{c/u}\right) u^{-h}\sum_{d(u)^\times} e_u\left(\tilde n d\right) \\ \times \sum_{v(2Nu)} \sum_{r\in L/ uL} e_{u}\left( a\tilde mv^2 - a\left\langle \alpha v,r \right\rangle - aq(r) +\left\langle \beta,r \right\rangle + (\left\langle \beta,\alpha \right\rangle - \tfrac {\ell}{\Delta})v\right), \end{gathered}$$ where $\tilde m = \tfrac{m}{2\Delta}-q(\alpha) \in \mathbb{Z}$ and $\tilde n = \tfrac{n}{2\Delta}-q(\beta)\in \mathbb{Z}$. If we make the change of variable $v\mapsto v+u$ we see that the $v$-sum equals zero unless $$\tfrac{\ell}{\Delta} \equiv \left\langle \alpha,\beta \right\rangle \pmod{1}.$$ For the remainder of this proof we make this assumption and we set $\tilde \ell = \tfrac{\ell}{\Delta} - \left\langle \alpha,\beta \right\rangle$. By Lemma [Lemma 6](#lem:l^2-mn){reference-type="ref" reference="lem:l^2-mn"} we have $\ell^2\equiv mn\pmod{2N}$. 
Now the summands in the $v$-sum are invariant under $v\mapsto v+u$, so we can write $$\label{eq:L-ell-c-u-d-sum} \mathcal L_\ell(c) = c^{-1} \sum_{u\mid c}\left(\frac {m_L}{c/u}\right) u^{-h}\sum_{d(u)^\times} e_u\left(\tilde n d\right) \mathcal S(d,u),$$ where $$\mathcal S(d,u) = \sum_{v(u)} \sum_{r\in L/ uL} e_{u}\left( a\tilde mv^2 - a\left\langle \alpha v,r \right\rangle - aq(r) +\left\langle \beta,r \right\rangle -\tilde \ell v\right).$$ Since $(d,u)=1$ we can replace $v$ by $dv$ and $r$ by $dr$ to get $$\mathcal S(d,u) = \sum_{v(u)} \sum_{r\in L/ uL} e_{u}(df(v,r)),$$ where $$f(v,r) = \tilde m v^2 - \left\langle \alpha,r \right\rangle v - q(r) + \left\langle \beta,r \right\rangle-\tilde \ell v.$$ **Remark 6**. In the case $g=1$, the two-dimensional quadratic Gauss sum $\mathcal S(d,u)$ is analogous to the sum appearing in Proposition 2 of [@GKZ]. **Lemma 8**. *We have $\mathcal S(d,u)=0$ unless $(m,u)\mid \ell$.* *Proof.* Let $w=\Delta u/(m,u)$ and note that $\alpha w\in L$. Then, since $\Delta$ is even, $$\begin{aligned} f(v+w,r-\alpha w) &= f(v,r) + \tfrac 12 \Delta mu^2/(m,u)^2 + (mv-\ell)u/(m,u) \\ &\equiv f(v,r) - \ell u/(m,u) \pmod{u}. \end{aligned}$$ Thus $\mathcal S(d,u) = e(-d\ell/(m,u)) \mathcal S(d,u)$, i.e. $\mathcal S(d,u) = 0$ unless $(m,u)\mid \ell$. ◻ Using the Ramanujan sum evaluation $$\sum_{d(u)^\times} e_u(yd) = \sum_{t\mid (u,y)} \mu(u/t) t$$ we find that $$\sum_{d(u)^\times} e_u(\tilde n d) \mathcal S(d,u) = \sum_{v,r} \sum_{t\mid (u,y)}\mu(u/t) t = \sum_{t\mid u} \mu(u/t)t \sum_{\substack{v,r \\ t\mid y}} 1,$$ where $y=f(v,r)+\tilde n$. 
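The Ramanujan sum evaluation used above follows from Möbius inversion and is classical; a quick self-contained numerical check (the helper names are our own):

```python
import cmath
from math import gcd

def mu(n):
    """Mobius function, by trial factorization."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0        # square factor
            result = -result
        p += 1
    if n > 1:
        result = -result
    return result

def ramanujan_sum(u, y):
    """Sum over d mod u with gcd(d,u)=1 of e(y*d/u); always a rational integer."""
    s = sum(cmath.exp(2j * cmath.pi * y * d / u) for d in range(u) if gcd(d, u) == 1)
    return round(s.real)

def ramanujan_formula(u, y):
    """Sum over t | (u, y) of mu(u/t) * t."""
    g = gcd(u, y)
    return sum(mu(u // t) * t for t in range(1, g + 1) if g % t == 0)

for u in range(1, 30):
    for y in range(30):
        assert ramanujan_sum(u, y) == ramanujan_formula(u, y)
```

Note the special case $y\equiv 0\pmod u$, where the sum degenerates to Euler's totient $\varphi(u)$.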
The inner sum is invariant under $v\mapsto v+t$ and $r\mapsto r+t$, so we get $$\sum_{d(u)^\times} e_u(\tilde n d) \mathcal S(d,u) = u^{g+1} \sum_{t\mid u} \mu(u/t) t^{-g} N(t),$$ where $$\label{eq:Nt-def} N(t) = \# \left\{(v,r)\in \mathbb{Z}/t\mathbb{Z}\times L/tL : f(v,r)+\tilde n \equiv 0 \pmod{t}\right\}.$$ By the Chinese remainder theorem, $N(t)$ is multiplicative; therefore $\mathcal L_\ell(c)$ is multiplicative as a function of $c$. For the remainder of this section, assume $c=p^\lambda$ where $p$ is prime and $p\mid \ell$ if $p\mid m$ (which we can assume by Lemma [Lemma 8](#lem:p-div-ell){reference-type="ref" reference="lem:p-div-ell"}). By the discussion above, we have two valid expressions for $\mathcal L_\ell(p^\lambda)$: $$\begin{aligned} \mathcal L_\ell(p^\lambda) &= p^{-\lambda} \sum_{j=0}^\lambda \left(\frac {m_L}{p^{\lambda-j}}\right) p^{-jh} \sum_{d(p^j)^\times} e_{p^j}(\tilde n d) \mathcal S(d,p^j) \label{eq:exp-1} \\ &= p^{-\lambda} \sum_{j=0}^\lambda \left(\frac {m_L}{p^{\lambda-j}}\right) p^{-j(h-1)} \left(N(p^j) - p^g N(p^{j-1})\right). \label{eq:exp-2}\end{aligned}$$ We proceed by cases as follows. 1. We first assume that $(m,\Delta,p)=1$ and that $p$ is odd, and we evaluate the Gauss sums $\mathcal S(d,u)$ in each of the cases $p\nmid m$ and $p\mid m$. 2. In the case $g=1$, we evaluate $\mathcal S(d,u)$ for the remaining "bad" primes. 3. When $g>1$ we approach the problem by studying the counting function $N(p^j)$. We first show that the quantity $N(p^j)-p^g N(p^{j-1})$ is frequently zero. We then estimate the size of $N(p^j)$. ## The case $(m,\Delta,p)=1$ with $p$ odd {#sec:complete-square} We would like to make a change of variables that eliminates the linear terms in $f(v,r)$ modulo $u$, where $u=p^j$. 
For $w\in \mathbb{Z}$ and $s\in L$ we have $$\begin{gathered} f(v+w,r+s) = \tilde m v^2 - \langle \alpha,r \rangle v - q(r) + \frac{mw^2-2\ell w}{2\Delta}+q(\beta) - q(\beta-\alpha w-s) \\ + v\left( \frac{mw-\ell}{\Delta} - \left\langle \alpha,\alpha w+s-\beta \right\rangle\right) - \langle \alpha w+ s-\beta,r \rangle.\end{gathered}$$ If we are to eliminate the terms on the second line, then a natural choice is $s=\beta-\alpha w$, but this is usually not an element of $L$. Note that either $p\nmid m$ or $p\parallel m$ because $(-1)^{(g-1)/2}m$ is a fundamental discriminant. **Lemma 9**. *Suppose that $p$ is odd and $(m,\Delta,p)=1$. Let $k\in \mathbb{Z}^+$ and let $$w = \begin{cases} \label{eq:w-def} \overline m \ell & \text{ if }p\nmid m, \\ \overline{(m/p)}(\ell/p) & \text{ if }p\mid m, \end{cases}$$ where $\overline m m\equiv 1\pmod{p^{k}}$ in the first case and $\overline{(m/p)}(m/p)\equiv 1\pmod{p^{k}}$ in the second case. Then $\beta-\alpha w\in L+p^{k} (L'/L)$.* *Proof.* We begin with the observation $$m\beta - \ell\alpha = \tilde m(2\Delta\beta) - \tilde \ell(\Delta\alpha) + \Delta(\left\langle \alpha,\alpha \right\rangle\beta - \left\langle \alpha,\beta \right\rangle\alpha) \in L$$ by Lemma [Lemma 5](#lem:alpha-beta-gamma){reference-type="ref" reference="lem:alpha-beta-gamma"}. By Lemma [Lemma 8](#lem:p-div-ell){reference-type="ref" reference="lem:p-div-ell"} we have $(m,p)\mid \ell$, so we can write $m=(m,p) m_1$ and $\ell=(m,p) \ell_1$. We claim that if $p\mid m$ then $m_1\beta-\ell_1\alpha\in L$. Indeed, the lattice element $$\Delta(m\beta-\ell\alpha) = p(m_1(\Delta\beta)-\ell_1(\Delta\alpha))$$ is an element of $\Delta L\cap pL=\Delta pL$ because, in this case, $p\nmid \Delta$. Thus $m_1\beta-\ell_1\alpha\in L$. Let $w=\overline{m}_1 \ell_1$, as in [\[eq:w-def\]](#eq:w-def){reference-type="eqref" reference="eq:w-def"}. 
Then $$\beta-\alpha w - \overline{m}_1(m_1\beta-\ell_1\alpha) = (1-m_1\overline{m}_1)\beta\in p^{k} L'/L.$$ The statement of the lemma follows. ◻ Write $2\Delta=p^\nu \Delta'$, with $p\nmid \Delta'$, and make the change of variable $d\mapsto \Delta'd$ in [\[eq:exp-1\]](#eq:exp-1){reference-type="eqref" reference="eq:exp-1"}. Choose $w$ as in [\[eq:w-def\]](#eq:w-def){reference-type="eqref" reference="eq:w-def"} with $k=\lambda+\nu$ and use Lemma [Lemma 9](#lem:w-def){reference-type="ref" reference="lem:w-def"} to choose $s\in L$ and $\gamma\in L'/L$ such that $s-\beta+\alpha w = p^{\lambda+\nu}\gamma$. Then $$\Delta'\left\langle \alpha,\alpha w+s-\beta \right\rangle = p^\lambda \left\langle 2\Delta\alpha,\gamma \right\rangle \equiv 0\pmod{p^\lambda},$$ and a similar statement holds for $\Delta'\left\langle \alpha w+s-\beta,r \right\rangle$ and $\Delta' q(\beta-\alpha w-s)$. Furthermore, we have $\Delta'(mw-\ell)/\Delta \equiv 0\pmod{p^\lambda}$. Therefore $$\Delta'(f(v+w,r+s)+\tilde n) \equiv \Delta'(\tilde m v^2 - \left\langle \alpha,r \right\rangle v-q(r)) + \hat n \pmod{p^\lambda},$$ where (recalling that $\nu=0$ if $p\mid m$) $$\begin{aligned} \hat n = \Delta' \frac{mw^2-2\ell w+n}{2\Delta} \equiv \begin{dcases} \frac{n-\overline m \ell^2}{p^\nu} \pmod{p^\lambda} & \text{ if } p \nmid m, \\ n-\overline{(m/p)}\ell^2/p \pmod{p^\lambda} & \text{ if }p\mid m. \end{dcases}\end{aligned}$$ In the case $p\nmid m$ this shows that $\ell^2\equiv mn\pmod{p^\nu}$ because $\hat n$ must be an integer. Thus $$\label{eq:n1-S1} \sum_{d(u)^\times} e_u(\Delta'd\tilde n) \mathcal S(\Delta'd,u) = \sum_{d(u)^\times} e_u(d\hat n) \mathcal S_1(d,u),$$ where $$\mathcal S_1(d,u) = \sum_{v(u)}\sum_{r\in L/uL} e_u(d\Delta'f_1(v,r)), \qquad f_1(v,r) = \tilde m v^2 - \left\langle \alpha,r \right\rangle v-q(r).$$ Let $M$ denote the Gram matrix of $L$ and identify $L$ with $\mathbb{Z}^g$ so that $\left\langle x,y \right\rangle = x^TMy$ and $\alpha = M^{-1}a$ for some $a\in \mathbb{Z}^g$. 
Then we can write $f_1(v,r) = \frac 12 x^T S x$, where $x=(v,r)\in \mathbb{Z}^{g+1}$ and $S$ is the block matrix $$S = \left( \begin{matrix} 2\tilde m & -a^T \\ -a & -M \end{matrix} \right).$$ The determinant of $S$ equals $\frac{m}{\Delta}\det(-M) = -m$. Suppose first that $p\nmid m$. Then by [\[eq:Gauss-sum-quadratic-form\]](#eq:Gauss-sum-quadratic-form){reference-type="eqref" reference="eq:Gauss-sum-quadratic-form"}, applied to the quadratic form $x\mapsto d\Delta' x^TSx$ on the lattice $\mathbb{Z}\oplus L$, together with [\[eq:gauss-odd\]](#eq:gauss-odd){reference-type="eqref" reference="eq:gauss-odd"}, we have $$\mathcal S_1(d,u) = \left(\frac {-m}{u}\right) G(u)^{g+1} = \left(\frac {m_L}{u}\right) u^{h},$$ where we have used the fact that $g$ is odd so $(\overline 2d\Delta')^{g+1}$ is a square. It follows that $$\begin{aligned} \mathcal L_\ell(c) &= \frac{1}{c} \left(\frac {m_L}{c}\right) \sum_{u\mid c} \sum_{d(u)^\times} e_u(\hat nd) = \frac{1}{c} \left(\frac {m_L}{c}\right) \sum_{u\mid c} \sum_{k\mid (u,\hat n)} \mu(u/k)k \\ &= \left(\frac {m_L}{p^\lambda}\right) \times \begin{cases} 1 & \text{ if }\hat n \equiv 0\pmod{p^\lambda}, \\ 0 & \text{ otherwise}. \end{cases}\end{aligned}$$ The condition $\hat n \equiv 0 \pmod{p^\lambda}$ is equivalent to $\ell^2\equiv mn\pmod{p^{\lambda+\nu}}$. Now suppose that $p\mid m$. Note that in this case many of the terms in [\[eq:L-ell-c-u-d-sum\]](#eq:L-ell-c-u-d-sum){reference-type="eqref" reference="eq:L-ell-c-u-d-sum"} are zero, so $$\mathcal L_\ell(c) = c^{-h-1} \sum_{d(c)^\times} e_c(d\hat n) \mathcal S_1(d,c).$$ By assumption we have that $p\nmid \Delta$, so $2\Delta=\Delta'$ and we can write $$\Delta'f_1(v,\overline{\Delta}r) \equiv mv^2 - 2\overline{\Delta}q(\hat \alpha v+r) \pmod{c},$$ where $\Delta \overline\Delta \equiv 1\pmod{c}$ and $\hat\alpha = \Delta\alpha \in L$. 
Thus $$\mathcal S_1(d,c) = \sum_{v(c)}e_c(dmv^2)\sum_{r\in L/cL} e_c(-2d \overline\Delta q(r)) = p\sum_{v(c/p)}e_{c/p}(d(m/p)v^2)\sum_{r\in L/cL} e_c(-2d \overline\Delta q(r)).$$ Note that $(m/p,c/p)=1$, so by [\[eq:Gauss-y\]](#eq:Gauss-y){reference-type="eqref" reference="eq:Gauss-y"} and [\[eq:gauss-odd\]](#eq:gauss-odd){reference-type="eqref" reference="eq:gauss-odd"}, the first Gauss sum evaluates to $$\sum_{v(c/p)}e_{c/p}(d(m/p)v^2) = \left(\frac {d(m/p)}{c/p}\right) \varepsilon_{c/p} (c/p)^{\frac 12}.$$ For the second, since $p\nmid \Delta$, [\[eq:Gauss-sum-quadratic-form\]](#eq:Gauss-sum-quadratic-form){reference-type="eqref" reference="eq:Gauss-sum-quadratic-form"} yields $$\sum_{r\in L/cL} e_c(-2d \overline\Delta q(r)) = \left(\frac {(-d\overline\Delta)^g\Delta}{c}\right) \varepsilon_c^g c^{\frac g2}.$$ Therefore, since $g$ is odd, $$\begin{aligned} \mathcal S_1(d,c) &= p^{\frac 12} \left(\frac {d}{p}\right) \left(\frac {-1}{c}\right) \left(\frac {m/p}{c/p}\right) \varepsilon_{c/p} \varepsilon_c^g c^h,\end{aligned}$$ from which it follows that $$\mathcal L_\ell(c) = p^{\frac 12}c^{-1} \left(\frac {m/p}{c/p}\right) \left(\frac {-1}{c}\right) \varepsilon_{c/p}\varepsilon_c^g \sum_{d(c)^\times} \left(\frac {d}{p}\right) e_c(d\hat n).$$ By [\[eq:twisted-exp-sum\]](#eq:twisted-exp-sum){reference-type="eqref" reference="eq:twisted-exp-sum"} we find that $\mathcal L_\ell(c)=0$ unless $\hat n\equiv 0\pmod{p^{\lambda-1}}$, which we now assume. Then $$\mathcal L_\ell(c) = \left(\frac {(-1)^{(g+1)/2}}{p^\lambda}\right) \left(\frac {m/p}{c/p}\right) \left(\frac {-\hat n/p^{\lambda-1}}{p}\right),$$ because $\varepsilon_p\varepsilon_{c/p}\varepsilon_c^g = (\frac{-1}{p})^{\lambda(g-1)/2+1}$. 
We have $$-\frac{\hat n}{p^{\lambda-1}} = -\frac{n-\overline{(m/p)}\ell^2/p}{p^{\lambda-1}} \equiv \overline{(m/p)} \frac{\ell^2-mn}{p^\lambda} \pmod{p},$$ so $$\mathcal L_\ell(c) = \left(\frac {-m_L/p}{p^\lambda}\right) \left(\frac {(\ell^2-mn)/p^{\lambda}}{p}\right) = \left(\frac {m_L/p^*}{p^\lambda}\right) \left(\frac {p^*}{(\ell^2-mn)/p^\lambda}\right).$$ Lastly, we note that the condition $\hat n \equiv 0\pmod{p^{\lambda-1}}$ is equivalent to $\ell^2\equiv mn\pmod{p^\lambda}$. ## The case $g=1$ {#sec:rank-1} In this section we evaluate $\mathcal L_\ell(c)$, with $c=p^\lambda$, in the remaining cases when $g=1$: $p=2$ or $(m,\Delta,p)>1$. To match the setup of Theorem [Theorem 2](#thm:exp-sum-identity-g=1){reference-type="ref" reference="thm:exp-sum-identity-g=1"}, we take $L=\mathbb{Z}$ with $\left\langle x,y \right\rangle = \Delta xy$ and $\alpha = \frac{a}{\Delta}$ and $\beta=\frac b{\Delta}$. Suppose first that $p$ is odd and $p\mid (m,\Delta)$. Then by [\[eq:exp-1\]](#eq:exp-1){reference-type="eqref" reference="eq:exp-1"} we have $$\mathcal L_\ell(c) = c^{-2} \sum_{d(c)^\times} e_c(d\tilde n) \mathcal S(d,c).$$ Since $m\equiv a^2 \pmod{4N}$, we have $p\mid a$, and since $p\parallel m$, we cannot have $p^2\mid \Delta$ (i.e. $\nu=1$). By replacing $r$ with $r+c/p$ in $\mathcal S(d,c)$, we find that $\mathcal S(d,c)=0$ unless $p\mid b$, and thus $p\mid n$ because $\frac n{2\Delta}-q(\beta) \in \mathbb{Z}$. In what follows we make these assumptions and write $$m = pm_1, \quad \Delta = p\Delta_1, \qquad (p,m_1)=(p,\Delta_1)=1,$$ and similarly define $\ell_1$, $a_1$, $b_1$, and $n_1$. We will need the following analogue of Lemma [Lemma 9](#lem:w-def){reference-type="ref" reference="lem:w-def"}. **Lemma 10**. *Suppose that $g=1$ and that $(m,\Delta,p)>1$. 
For $k\in \mathbb{Z}^+$, if $$w \equiv \overline m_1 \ell_1 \pmod{p^k}$$ then $\beta-\alpha w\in L+p^{k} (L'/L)$.* *Proof.* Since $g=1$ we have $\left\langle \alpha,\alpha \right\rangle\beta-\left\langle \alpha,\beta \right\rangle\alpha = 0$, so $m\beta-\ell\alpha = 2\tilde m b - \tilde \ell a \in pL$. It follows that $m_1\beta-\ell_1\alpha\in L$. The remainder of the proof follows the proof of Lemma [Lemma 9](#lem:w-def){reference-type="ref" reference="lem:w-def"}. ◻ The discussion following Lemma [Lemma 9](#lem:w-def){reference-type="ref" reference="lem:w-def"} shows that $$\mathcal L_\ell(c) = c^{-2} \sum_{d(c)^\times} e_c(d\hat n) \mathcal S_1(d,c),$$ with $\mathcal S_1(d,c)$ as in Section [4.1](#sec:complete-square){reference-type="ref" reference="sec:complete-square"} and $$\hat n \equiv n_1 - \overline m_1 \ell_1^2 \pmod{p^\lambda}.$$ Note that $\Delta'=2\Delta_1$ and $$f_1(v,r) = \tilde m v^2 - arv - \tfrac 12 \Delta r^2.$$ We have $\Delta' f_1(v,r) = m_1 v^2 - p(a_1 v + \Delta_1 r)^2$, so after a change of variables we obtain $$\mathcal L_\ell(c) = pc^{-2}\sum_{d(c)^\times} e_c(d\hat n)\sum_{v(c)} e_c(dm_1 v^2) \sum_{r(c/p)} e_{c/p}(-dr^2).$$ The rest of the computation resembles the case $p\mid m$ in Section [4.1](#sec:complete-square){reference-type="ref" reference="sec:complete-square"}. 
Using [\[eq:gauss-odd\]](#eq:gauss-odd){reference-type="eqref" reference="eq:gauss-odd"}, [\[eq:Gauss-y\]](#eq:Gauss-y){reference-type="eqref" reference="eq:Gauss-y"}, and [\[eq:twisted-exp-sum\]](#eq:twisted-exp-sum){reference-type="eqref" reference="eq:twisted-exp-sum"}, we find that $\mathcal L_\ell(c) = 0$ unless $p^{\lambda-1}\mid \hat n$, in which case we have $$\begin{aligned} \mathcal L_\ell(c) &= \left(\frac {m_1}{p^\lambda}\right) \left(\frac {-1}{p^{\lambda-1}}\right) \left(\frac {\hat n/p^{\lambda-1}}{p}\right) \varepsilon_{p^\lambda} \varepsilon_{p^{\lambda-1}} \varepsilon_p \\ % &= \pfrac{m_1}{p^\lambda} \pfrac{\ep}{p^{\lambda-1}} \pfrac{\hat n/p^{\lambda-1}}{p} \pfrac{-1}{p^\lambda} \\ &= \left(\frac {m_L/p^*}{p^{\lambda+1}}\right)\left(\frac {p^*}{(\ell^2-mn)/p^{\lambda+1}}\right).\end{aligned}$$ In this case the condition $p^{\lambda-1}\mid \hat n$ is equivalent to $\ell^2\equiv mn\pmod{p^{\lambda+1}}$. Now suppose that $p=2$ and $m$ is odd. Since $(m,\Delta,p)=1$, we follow Section [4.1](#sec:complete-square){reference-type="ref" reference="sec:complete-square"} to get $$\mathcal L_\ell(2^\lambda) = 2^{-\lambda} \left(\frac {m}{2^\lambda}\right) \sum_{j=0}^\lambda \left(\frac {m}{2}\right)^j 2^{-j} \sum_{d(2^j)^\times} e_{2^j}(d\hat n) \mathcal S_1(d,2^j),$$ where $\hat n = (n-\overline m \ell^2)/2^{\nu}$ and $$\mathcal S_1(d,2^j) = \sum_{v(2^j)}\sum_{r(2^j)} e_{2^j}(d\Delta'f_1(v,r)), \qquad f_1(v,r) = \tilde m v^2 - a r v - \tfrac 12 \Delta r^2.$$ The congruence $m\equiv a^2 \pmod{4N}$ shows that $a$ is odd and that $m \equiv 1\pmod{4}$. 
By Corollary 3.1 of [@alaca-doyle] we have[^1] $$\left(\frac {m}{2}\right)^j 2^{-j} \mathcal S_1(d,2^j) = 1$$ when $(d,2^j)=1$, therefore $$\begin{aligned} \mathcal L_\ell(2^\lambda) &= 2^{-\lambda} \left(\frac {m}{2^\lambda}\right) \sum_{j=0}^\lambda \sum_{d(2^j)^\times} e_{2^j}(d\hat n) = 2^{-\lambda} \left(\frac {m}{2^\lambda}\right) \sum_{d(2^\lambda)} e_{2^\lambda}(d\hat n) \\ &= \left(\frac {m_L}{2^\lambda}\right) \begin{dcases} 1 & \text{ if } \hat n \equiv 0 \pmod{2^\lambda}, \\ 0 & \text{ otherwise}. \end{dcases}\end{aligned}$$ Here the condition $\hat n \equiv 0 \pmod{2^\lambda}$ is equivalent to $\ell^2\equiv mn \pmod{2^{\lambda+\nu}}$. Finally, suppose that $p=2$ and that $m$ is even. Then $$\mathcal L_\ell(2^\lambda) = 2^{-2\lambda} \sum_{d(2^\lambda)^\times} e_{2^\lambda}(d\tilde n) \mathcal S(d,2^\lambda).$$ Define $\mu$ by $2^{\mu}\parallel m$ and recall that $\nu$ satisfies $2^\nu \parallel 2\Delta$. Since $m$ is a fundamental discriminant, we have $\mu\in \{2,3\}$. Furthermore, $a$ is even and $\frac m4 \equiv \frac{a^2}{4} \pmod {\frac 12 \Delta}$. Since $\frac m4 \equiv 2,3\pmod{4}$, we see that $4\nmid \frac 12 \Delta$. In other words, $\nu \in \{2,3\}$. In what follows, we assume that $\lambda\geq 3$; when $\lambda\in \{1,2\}$ there are only finitely many cases to check. Suppose first that $\mu=\nu=2$. Then $m_1:=\frac m4$ and $D:=\frac 12\Delta$ are odd. 
It follows from two applications of [\[eq:gauss-even\]](#eq:gauss-even){reference-type="eqref" reference="eq:gauss-even"} that $\mathcal S(d,2^\lambda)=0$ unless $b$ is even, in which case we have $$\begin{aligned} \mathcal S(d,2^\lambda) &= (1+i)\varepsilon_{-dD}^{-1}\left(\frac {2^\lambda}{-dD}\right) 2^{\frac \lambda2} e_{2^\lambda}(d\overline D b_1^2) \sum_{v(2^\lambda)} e_{2^\lambda}(d\overline D(m_1v^2-\ell_1 v)) \\ &= (1+i)^2 \varepsilon_{-dD}^{-1} \varepsilon_{dDm_1}^{-1} \left(\frac {2^\lambda}{-m_1}\right) e_{2^\lambda}(d\overline D (b_1^2-\overline m_1 \ell_2^2))2^\lambda,\end{aligned}$$ where $b=2b_1$ and $\ell=2\ell_1=4\ell_2$. (Note that $4\mid\ell$ by Lemma [Lemma 8](#lem:p-div-ell){reference-type="ref" reference="lem:p-div-ell"}.) Since $m_1\equiv 3\pmod{4}$ we have $\varepsilon_{-dD}^{-1} \varepsilon_{dDm_1}^{-1} = -\left(\frac {-1}{dD}\right)$. Also, $4\mid n$ because $b$ is even, so we can write $n=4n_1$. By replacing $d$ by $Dd$ and applying [\[eq:twisted-exp-sum-2\]](#eq:twisted-exp-sum-2){reference-type="eqref" reference="eq:twisted-exp-sum-2"}, we obtain $$\begin{aligned} \mathcal L_\ell(2^\lambda) &= -2^{1-\lambda}i \left(\frac {2^\lambda}{-m_1}\right) \sum_{d(2^\lambda)^\times} \left(\frac {-1}{d}\right) e_{2^\lambda}(d(n_1-\overline m_1\ell_2^2)) \\ &= \begin{dcases} \left(\frac {2^\lambda}{-m_1}\right) \left(\frac {-4}{(n_1-\overline m_1\ell_2^2)/2^{\lambda-2}}\right) & \text{ if } n_1 \equiv \overline m_1 \ell_2^2 \pmod{2^{\lambda-2}}, \\ 0 & \text{ otherwise}. 
\end{dcases}\end{aligned}$$ We conclude that $\mathcal L_\ell(2^\lambda)=0$ unless $\ell^2 \equiv mn \pmod{2^{\lambda+2}}$, in which case $$\mathcal L_\ell(2^\lambda) = \left(\frac {m/2^*}{2^\lambda}\right) \left(\frac {2^*}{(\ell^2-mn)/2^{\lambda+2}}\right).$$ If $(\mu,\nu)=(3,2)$ then by a similar argument we obtain $$\mathcal S_1(d,2^\lambda) = \varepsilon_{m_2}^{-1} \left(\frac {2^\lambda}{-m_2}\right) \left(\frac {2^*}{dD}\right) e_{2^\lambda}(d\overline D(b_1^2-2\overline m_2\ell_3^2)) 2^{\lambda+\frac 32},$$ where $m=8m_2$ and $\ell=8\ell_3$. Thus $$\mathcal L_\ell(2^\lambda) = \begin{dcases} \left(\frac {2^{\lambda+1}}{-m_2}\right) \left(\frac {2^*}{(n_1-2\overline m_2\ell_3^2)/2^{\lambda-3}}\right) & \text{ if } n_1\equiv 2\overline m_2 \ell_3^2 \pmod{2^{\lambda-3}}, \\ 0 & \text{ otherwise}. \end{dcases}$$ In other words, $\mathcal L_\ell(2^\lambda)=0$ unless $\ell^2 \equiv mn \pmod{2^{\lambda+2}}$, in which case we have $$\mathcal L_\ell(2^\lambda) = \left(\frac {m/2^*}{2^\lambda}\right) \left(\frac {2^*}{(\ell^2-mn)/2^{\lambda+2}}\right).$$ The remaining cases $(\mu,\nu)=(2,3)$ and $(\mu,\nu)=(3,3)$ are similar to the previous two, except that $\frac 12 \Delta$ is even and $\tilde m$ is odd, so we evaluate the $v$-sum first, then the $r$-sum. ## The case when $g>1$ and either $p=2$ or $p$ is odd and $(m,\Delta,p)>1$ {#sec:counting} In each of these cases we have $$\mathcal L_\ell(p^\lambda) = p^{-\lambda h} \left(N(p^\lambda)-p^g N(p^{\lambda-1})\right),$$ and we will show that 1. $\mathcal L_\ell(p^\lambda)=0$ unless $\ell^2\equiv mn\pmod{2Np^{2\left\lfloor\frac{\lambda}{2}\right\rfloor}}$, and 2. $|\xi_{\alpha,\beta}(\ell,m,n,p^\lambda)| \leq p^{A\nu g}$ for some absolute constant $A$. 
Then, because $|\xi_{\alpha,\beta}(\ell,m,n,p^\lambda)|\leq 1$ for all other primes, we conclude that $$|\xi_{\alpha,\beta}(\ell,m,n,c)| \leq \prod_{\substack{p\mid 2(\Delta,c) \\ p^\nu \parallel 2\Delta}} p^{A\nu g} \leq |2\Delta|^{Ag} \ll_L 1.$$ We begin with a simple observation to motivate our approach. If $k\geq 1$ and $(v_0,r_0)$ is a solution to the congruence $f(v,r)+\tilde n \equiv 0\pmod{p^{k-1}}$, then $(v_0+p^{k-1}x,r_0+p^{k-1}y)$ is a solution to $f(v,r)+\tilde n \equiv 0 \pmod{p^k}$ if and only if $$\label{eq:x-y-mod-p} (2\tilde m v_0 - \left\langle \alpha,r_0 \right\rangle-\tilde \ell)x + \left\langle \alpha v_0+r_0-\beta,y \right\rangle \equiv -\frac{f(v_0,r_0)+\tilde n}{p^{k-1}} \pmod{p}.$$ If $p\nmid (2\tilde mv_0-\left\langle \alpha,r_0 \right\rangle-\tilde \ell)$ then there are $p^g$ pairs $(x,y)$ satisfying [\[eq:x-y-mod-p\]](#eq:x-y-mod-p){reference-type="eqref" reference="eq:x-y-mod-p"}: for each $y \in L/pL$ there is exactly one $x\in \mathbb{Z}/p\mathbb{Z}$ for which [\[eq:x-y-mod-p\]](#eq:x-y-mod-p){reference-type="eqref" reference="eq:x-y-mod-p"} holds. A similar argument shows that there are $p^g$ pairs satisfying [\[eq:x-y-mod-p\]](#eq:x-y-mod-p){reference-type="eqref" reference="eq:x-y-mod-p"} as long as there exists a $y\in L/pL$ such that $\left\langle \alpha v_0+r_0-\beta,y \right\rangle$ is not divisible by $p$. Thus $N(p^k)-p^g N(p^{k-1})=0$ unless there is a solution $(v_0,r_0)$ for which $2\tilde mv_0-\left\langle \alpha,r_0 \right\rangle-\tilde \ell \equiv 0\pmod{p}$ and $\left\langle \alpha v_0+r_0-\beta,y \right\rangle \equiv 0\pmod{p}$ for all $y\in L/pL$. 
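To make the lifting observation concrete, here is a toy numerical check in the simplest case $g=1$. The quadratic $v^2+vr+r^2-1$ is our own illustrative choice, not a form appearing in the argument: its gradient $(2v+r,\,v+2r)$ vanishes modulo a prime $p\neq 3$ only at $(0,0)$, which is never a solution, so every solution mod $p^{k-1}$ lifts to exactly $p^g = p$ solutions mod $p^k$.

```python
from itertools import product

def f(v, r):
    # toy quadratic in g + 1 = 2 variables; nonsingular mod every prime p != 3
    return v * v + v * r + r * r - 1

def N(p, k):
    """Number of pairs (v, r) mod p^k with f(v, r) = 0 (mod p^k)."""
    q = p ** k
    return sum(1 for v, r in product(range(q), repeat=2) if f(v, r) % q == 0)

# Hensel lifting: every solution mod p is nonsingular, so each solution
# mod p^(k-1) lifts to exactly p solutions mod p^k, i.e.
# N(p^k) = p^g * N(p^(k-1)) with g = 1, and the difference
# N(p^k) - p^g * N(p^(k-1)) vanishes.
for p in (2, 5, 7):
    for k in (2, 3):
        assert N(p, k) == p * N(p, k - 1)
```

The singular solutions excluded by this toy example are exactly the ones the sets $\mathcal M_j(p^k)$ below are designed to track.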
Generalizing this idea, for fixed $\lambda$ and for $j\leq k\leq \lambda$ let $\mathcal M_j(p^k)$ be the set of pairs $(v,r)$ with $v\in \mathbb{Z}/p^\lambda\mathbb{Z}$ and $r\in L/p^\lambda L$ such that $(v,r)$ is a solution to the congruences $$\begin{aligned} f(v,r)+\tilde n &\equiv 0 \pmod{p^k}, \\ 2\Tilde{m}v-\left\langle \alpha,r \right\rangle-\Tilde{\ell} &\equiv 0\pmod{p^j}, \\ \left\langle \alpha v+r-\beta,y \right\rangle &\equiv 0 \pmod{p^j} \qquad \forall y\in L.\end{aligned}$$ Then $\mathcal M_{j+1}(p^k) \subseteq \mathcal M_j(p^k)$ for each $j\leq k-1$. Let $\mathcal M^*_k(p^k) = \mathcal M_k(p^k)$ and, for $j\leq k-1$, let $\mathcal M^*_j(p^k)= \mathcal M_j(p^k)\setminus \mathcal M_{j+1}(p^k)$. We write $$M_j(p^k) = \#\mathcal M_j(p^k) \quad \text{ and } \quad M_j^*(p^k) = \#\mathcal M_j^*(p^k).$$ Then we have $$N(p^k) = p^{-(g+1)(\lambda-k)}M_0(p^k),$$ so $$\label{eq:N-M-translation} N(p^\lambda) - p^g N(p^{\lambda-1}) = M_0(p^\lambda) - \tfrac 1p M_0(p^{\lambda-1}).$$ **Lemma 11**. *Notation as above, we have $$\label{eq:N=M-lambda/2} N(p^\lambda)-p^gN(p^{\lambda-1}) = M_{\left\lfloor\frac{\lambda}{2}\right\rfloor}(p^\lambda)-\tfrac{1}{p}M_{\left\lfloor\frac{\lambda}{2}\right\rfloor}(p^{\lambda-1}).$$* *Proof.* By [\[eq:N-M-translation\]](#eq:N-M-translation){reference-type="eqref" reference="eq:N-M-translation"} we have $$N(p^\lambda)-p^gN(p^{\lambda-1}) = \sum_{j=0}^{\lambda}\left(M^*_j(p^\lambda)-\tfrac{1}{p}M^*_j(p^{\lambda-1})\right),$$ so to prove [\[eq:N=M-lambda/2\]](#eq:N=M-lambda/2){reference-type="eqref" reference="eq:N=M-lambda/2"} it suffices to show that $$\label{eq:Mj-want} M_j^*(p^\lambda) = \tfrac 1p M_j^*(p^{\lambda-1}) \quad \text{ for all }j<\left\lfloor\tfrac{\lambda}{2}\right\rfloor.$$ Suppose that $j<\left\lfloor\frac{\lambda}{2}\right\rfloor$; then $2j+2\leq \lambda$. Let $(v,r)\in \mathcal M^*_j(p^{\lambda-1})$. 
We claim that for every $x\in \mathbb{Z}/p^{j+1}\mathbb{Z}$ and every $y\in L/p^{j+1}L$, $$\label{eq:new-xy-in-M-j} (v+xp^{\lambda-j-1},r+yp^{\lambda-j-1}) \in \mathcal M_j^*(p^{\lambda-1}).$$ Indeed, $$\begin{aligned} f(v+xp^{\lambda-j-1},r+yp^{\lambda-j-1}) +\tilde n &\equiv (2\Tilde{m}v-\left\langle \alpha,r \right\rangle-\Tilde{\ell})xp^{\lambda-j-1}-\left\langle \alpha v+r-\beta,y \right\rangle p^{\lambda-j-1} \\ &\equiv 0 \pmod{p^{\lambda-1}}\end{aligned}$$ and $$\begin{aligned} 2\Tilde{m}(v+xp^{\lambda-j-1})-\left\langle \alpha,r+yp^{\lambda-j-1} \right\rangle-\Tilde{\ell} &\equiv 2\Tilde{m}v-\left\langle \alpha,r \right\rangle-\Tilde{\ell} \pmod{p^{j+1}}, \\ \left\langle \alpha (v+xp^{\lambda-j-1})+r+yp^{\lambda-j-1}-\beta,z \right\rangle&\equiv \left\langle \alpha v+r-\beta,z \right\rangle \pmod{p^{j+1}}\end{aligned}$$ for all $z\in L$. A similar argument shows that $(v+xp^{\lambda-j-1},r+yp^{\lambda-j-1})\in \mathcal M^*_j(p^\lambda)$ if and only if $$f(v,r)+\tilde n+(2\Tilde{m}v-\left\langle \alpha,r \right\rangle-\Tilde{\ell})xp^{\lambda-j-1}-\left\langle \alpha v+r-\beta,y \right\rangle p^{\lambda-j-1} \equiv 0\pmod{p^\lambda},$$ which holds if and only if $$\label{eq:in-M-lambda} \frac{f(v,r)+\tilde n}{p^{\lambda-1}}+\frac{2\Tilde{m}v-\left\langle \alpha,r \right\rangle-\Tilde{\ell}}{p^j}x - \frac{\left\langle \alpha v+r-\beta,y \right\rangle}{p^j} \equiv 0\pmod{p}.$$ We consider two cases. 1. If $2\Tilde{m}v-\left\langle \alpha,r \right\rangle-\Tilde{\ell} \not\equiv 0\pmod{p^{j+1}}$, then $(2\tilde m v-\left\langle \alpha,r \right\rangle-\tilde \ell)/p^j$ is invertible mod $p$, so for each $y$ the pair $(x,y)$ satisfies [\[eq:in-M-lambda\]](#eq:in-M-lambda){reference-type="eqref" reference="eq:in-M-lambda"} for $x$ in exactly one residue class mod $p$. 2. Suppose that $2\Tilde{m}v-\left\langle \alpha,r \right\rangle-\Tilde{\ell} \equiv 0\pmod{p^{j+1}}$. 
We identify $L$ with $\mathbb{Z}^g$ as explained in Section [3](#sec:background){reference-type="ref" reference="sec:background"}. Then, by [\[eq:new-xy-in-M-j\]](#eq:new-xy-in-M-j){reference-type="eqref" reference="eq:new-xy-in-M-j"}, there exists a basis $\{e_1,\ldots,e_g\}\subseteq \mathbb{Z}^g$ such that $$\left\langle \alpha v+r-\beta,e_i \right\rangle \equiv 0 \pmod{p^j} \quad \text{ for } 1\leq i \leq g,$$ but $$\left\langle \alpha v+r-\beta,e_1 \right\rangle \not\equiv 0 \pmod{p^{j+1}}.$$ Write $y=\sum_i a_i e_i$, with $a_i\in \mathbb{Z}/p^{j+1}\mathbb{Z}$. Then [\[eq:in-M-lambda\]](#eq:in-M-lambda){reference-type="eqref" reference="eq:in-M-lambda"} holds if and only if $$a_1 \left\langle \alpha v+r-\beta,e_1 \right\rangle/p^j \equiv -\sum_{i=2}^g a_i \left\langle \alpha v+r-\beta,e_i \right\rangle/p^j + (f(v,r)+\tilde n)/p^{\lambda-1} \pmod{p}.$$ For each choice of $x$ and $a_2,\ldots,a_g$, the latter congruence holds for $a_1$ in exactly one residue class mod $p$. It follows that $M_j^*(p^\lambda) = \frac 1p M_j^*(p^{\lambda-1})$, and this proves [\[eq:Mj-want\]](#eq:Mj-want){reference-type="eqref" reference="eq:Mj-want"}. ◻ **Lemma 12**. *Suppose that $p=2$ or $p\mid m$. If $N(p^\lambda)\ne p^gN(p^{\lambda -1})$ then $$\ell^2\equiv mn\pmod{2Np^{2\lfloor\frac{\lambda}{2}\rfloor}}.$$* *Proof.* Let $j=\lfloor\tfrac{\lambda}{2}\rfloor.$ By Lemma [Lemma 11](#lem:N=M-lambda/2){reference-type="ref" reference="lem:N=M-lambda/2"}, $N(p^\lambda)=p^gN(p^{\lambda -1})$ unless $$M_j(p^\lambda) \ne \tfrac{1}{p} M_j(p^{\lambda-1}).$$ If this is the case, then $\mathcal M_j(p^{\lambda-1})$ is nonempty, so let $(v,r)\in \mathcal M_j(p^{\lambda-1})$. Then we have $$\begin{aligned} f(v,r) + \tilde n &\equiv 0 \pmod{p^{\lambda-1}}, \label{eq:M1} \\ 2\Tilde{m}v-\left\langle \alpha,r \right\rangle-\Tilde{\ell} &\equiv 0 \pmod{p^j}, \label{eq:M2} \\ \left\langle \alpha v+r-\beta,y \right\rangle & \equiv 0 \pmod{p^j} \quad \text{ for all } y\in L. 
\label{eq:M3}\end{aligned}$$ Let $\gamma=\alpha v+r-\beta \in L'$ and $\hat \gamma = \Delta \gamma \in L$. It will be convenient to rewrite the congruences [\[eq:M1\]](#eq:M1){reference-type="eqref" reference="eq:M1"}--[\[eq:M3\]](#eq:M3){reference-type="eqref" reference="eq:M3"} in terms of $\gamma$ and $\hat \gamma$. First, a calculation shows that $$mv^2-2\ell v+n - \left\langle \hat \gamma,\gamma \right\rangle = 2\Delta(f(v,r)+\tilde n),$$ so by [\[eq:M1\]](#eq:M1){reference-type="eqref" reference="eq:M1"} we have $$\label{eq:M1'} m(mv^2-2\ell v+n) \equiv m\left\langle \hat \gamma,\gamma \right\rangle \pmod{2Np^{\lambda}}.$$ Here we have used that $p=2$ or $p\mid m$. Second, if $\hat \alpha = \Delta\alpha \in L$ then we have $$\label{eq:mv-l} mv - \ell = \left\langle \hat \alpha, \gamma \right\rangle + X,$$ where, by [\[eq:M2\]](#eq:M2){reference-type="eqref" reference="eq:M2"}, $$\label{eq:M2'} X = \Delta(2\tilde m v - \left\langle \alpha,r \right\rangle - \tilde \ell) \equiv 0 \pmod{2Np^j}.$$ Lastly, $p^{-j}\gamma\in L\otimes \mathbb{Q}$ so, by [\[eq:M3\]](#eq:M3){reference-type="eqref" reference="eq:M3"}, we have $\left\langle p^{-j}\gamma, y \right\rangle\in \mathbb{Z}$ for all $y\in L$. 
It follows from this and [\[eq:L\'-def\]](#eq:L'-def){reference-type="eqref" reference="eq:L'-def"} that $$\label{eq:M3'} \gamma_1 := p^{-j}\gamma \in L'.$$ By [\[eq:M1\'\]](#eq:M1'){reference-type="eqref" reference="eq:M1'"} and $\eqref{eq:mv-l}$ we have $$\begin{aligned} mn-\ell^2 &= m(mv^2-2\ell v+n)-(mv-\ell)^2 \\ &\equiv m\left\langle \hat \gamma,\gamma \right\rangle - \left\langle \hat \alpha,\gamma \right\rangle^2 - 2\left\langle \hat \alpha,\gamma \right\rangle X \pmod{2Np^{2j}} \\ &\equiv \left\langle \hat \alpha,\alpha \right\rangle\left\langle \hat \gamma,\gamma \right\rangle - \left\langle \hat \alpha,\gamma \right\rangle^2 + 2\Delta \tilde m \left\langle \hat\gamma,\gamma \right\rangle - 2 \left\langle \hat \alpha,\gamma \right\rangle X \pmod{2Np^{2j}}.\end{aligned}$$ We now use [\[eq:M3\'\]](#eq:M3'){reference-type="eqref" reference="eq:M3'"} to obtain $$\begin{aligned} \left\langle \hat\alpha,\alpha \right\rangle \left\langle \hat\gamma,\gamma \right\rangle-\left\langle \hat\alpha,\gamma \right\rangle^2 &= \left\langle \hat\alpha, \left\langle \hat \gamma,\gamma \right\rangle\alpha - \left\langle \hat\alpha,\gamma \right\rangle\gamma \right\rangle \\ &= \Delta^2 p^{2j}\left\langle \alpha, \left\langle \gamma_1,\gamma_1 \right\rangle\alpha - \left\langle \alpha,\gamma_1 \right\rangle\gamma_1 \right\rangle \equiv 0 \pmod{2Np^{2j}}, \label{eq:aagg-agag}\end{aligned}$$ where we used Lemma [Lemma 5](#lem:alpha-beta-gamma){reference-type="ref" reference="lem:alpha-beta-gamma"} in the second line to show that $\Delta(\left\langle \gamma_1,\gamma_1 \right\rangle\alpha - \left\langle \alpha,\gamma_1 \right\rangle\gamma_1) \in L$. 
Using [\[eq:M3\'\]](#eq:M3'){reference-type="eqref" reference="eq:M3'"} again we find that $$\label{eq:gg} \left\langle \hat\gamma,\gamma \right\rangle = p^{2j}\left\langle \Delta\gamma_1,\gamma_1 \right\rangle \equiv 0\pmod{p^{2j}}.$$ Finally, $\left\langle \hat\alpha,\gamma \right\rangle = p^j\left\langle \hat\alpha,\gamma_1 \right\rangle \equiv 0\pmod{p^j}$. The lemma follows. ◻ To finish the proof of Theorem [Theorem 3](#thm:exp-sum-general){reference-type="ref" reference="thm:exp-sum-general"} we need to prove the upper bound for $\xi_{\alpha,\beta}(\ell,m,n,p^\lambda)$. This will follow quickly from Lemma [Lemma 11](#lem:N=M-lambda/2){reference-type="ref" reference="lem:N=M-lambda/2"} after we give an upper bound for $M_j(p^k)$. **Lemma 13**. *If $p^\nu\parallel 2\Delta$, $p^\mu\parallel m$, and $j\leq k\leq \lambda$ then $$M_j(p^k) \leq \begin{cases} p^{(g+1)(\lambda-j+\mu)+g\nu} & \text{ if } j\geq \nu+\mu, \\ p^{(g+1)\lambda - j+\mu} & \text{ if } j \leq \nu+\mu-1. \end{cases}$$* **Remark 7**. Since $(-1)^{(g-1)/2}m$ is fundamental, we have $\mu=1$ for odd $p$ and $\mu\leq 3$ for $p=2$. *Proof.* Suppose $(v,r)\in \mathcal M_j(p^k)$. If $(v+x,r+y)\in \mathcal M_j(p^k)$ as well, then $$\begin{aligned} 2\Tilde{m}x&\equiv \left\langle \alpha, y \right\rangle \pmod{p^j}, \\ \left\langle \alpha,z \right\rangle x&\equiv -\left\langle y,z \right\rangle \pmod{p^j} \quad \text{ for all } z\in L.\end{aligned}$$ We apply the second congruence with $z=\hat\alpha := \Delta\alpha$ to get $$2\Delta\tilde m x \equiv \left\langle \hat\alpha,y \right\rangle \equiv -\left\langle \hat\alpha,\alpha \right\rangle x \pmod{p^j},$$ i.e. $mx\equiv 0 \pmod{p^j}$. Thus $p^{j-\mu}\mid x$, so $$\left\langle \alpha,y \right\rangle\equiv \left\langle y,z \right\rangle\equiv 0 \pmod{p^{j-\mu}} \quad \text{ for all } z\in L.$$ Let $y_1 = p^{-j+\mu}y\in L\otimes \mathbb{Q}$. 
Then $\left\langle y_1,z \right\rangle\in \mathbb{Z}$ for all $z\in L$, so by [\[eq:L\'-def\]](#eq:L'-def){reference-type="eqref" reference="eq:L'-def"}, $y_1\in L'$. It follows that $\Delta y_1\in L$, so $y\in p^{j-\mu-\nu}L \cap L$. If $j\geq \mu+\nu$ then every element of $\mathcal M_j(p^k)$ is of the form $$(v+p^{j-\mu}x', r+p^{j-\mu-\nu}y')$$ for some $x'\in \mathbb{Z}$, $y'\in L$. Therefore $\mathcal M_j(p^k)$ has at most $p^{(g+1)\lambda-(j-\mu)-g(j-\mu-\nu)}$ elements. If $j<\mu+\nu$ then every element of $\mathcal M_j(p^k)$ is of the form $$(v+p^{j-\mu}x', r+y')$$ for some $x'\in \mathbb{Z}$, $y'\in L$, so $M_j(p^k)\leq p^{(g+1)\lambda-(j-\mu)}$. ◻ By Lemma [Lemma 11](#lem:N=M-lambda/2){reference-type="ref" reference="lem:N=M-lambda/2"} we have $$p^{-\lambda h}\left|N(p^\lambda) - p^g N(p^{\lambda-1})\right| \leq p^{-\lambda h}\max\left(M_{\left\lfloor\frac{\lambda}{2}\right\rfloor}(p^\lambda), \tfrac 1p M_{\left\lfloor\frac{\lambda}{2}\right\rfloor}(p^{\lambda-1})\right).$$ If $\left\lfloor\frac{\lambda}{2}\right\rfloor\geq \nu+\mu$ then Lemma [Lemma 13](#lem:M-bound){reference-type="ref" reference="lem:M-bound"} gives $$p^{-\lambda h}\left|N(p^\lambda) - p^g N(p^{\lambda-1})\right| \leq p^{\nu g + \frac 72(g+1)}$$ because $\mu\leq 3$. On the other hand, if $\left\lfloor\frac{\lambda}{2}\right\rfloor\leq \nu+\mu-1$ then $\lambda\leq 2\nu+5$, so Lemma [Lemma 13](#lem:M-bound){reference-type="ref" reference="lem:M-bound"} gives $$p^{-\lambda h}\left|N(p^\lambda) - p^g N(p^{\lambda-1})\right| \leq p^{\frac{g+1}2\lambda - \left\lfloor\frac{\lambda}{2}\right\rfloor+ 3} \leq p^{\nu g+\frac 52g+\frac 72}.$$ In either case we have $|\xi_{\alpha,\beta}(\ell,m,n,p^\lambda)|\leq p^{A\nu g}$ for some absolute constant $A$. ◻ [^1]: There are several cases to tediously check, but all yield the same result. 
Alternatively, one can prove this using several applications of [\[eq:gauss-even\]](#eq:gauss-even){reference-type="eqref" reference="eq:gauss-even"}.
--- abstract: | By a theorem of D. Wigner, an irreducible unitary representation with non-zero $(\frak{g},K)$-cohomology has trivial infinitesimal character, and hence up to unitary equivalence, these are finite in number. We have determined the number of equivalence classes of these representations and the Poincaré polynomial of cohomologies of these representations for the Lie group $SO_0(2,m)$ for any positive integer $m.$ We have also determined, among these, which are discrete series representations and holomorphic discrete series representations. address: Department of Mathematics, Presidency University, 86/1 College Street, Kolkata 700073, India author: - Ankita Pal, Pampa Paul title: Irreducible unitary representations with non-zero relative Lie algebra cohomology of the Lie group $SO_0(2,m)$ --- # Introduction Let $G$ be a connected semisimple Lie group with finite centre, and $K$ be a maximal compact subgroup of $G$ with Cartan involution $\theta.$ The differential of $\theta$ at identity is denoted by the same notation $\theta.$ Let $\frak{g}_0$ be the Lie algebra of $G, \frak{k}_0$ be the subalgebra of $\frak{g}_0$ corresponding to the Lie subgroup $K$ of $G, \frak{h}_0$ be a $\theta$-stable fundamental Cartan subalgebra of $\frak{g}_0,$ and $\frak{g}=\frak{g}_0^\mathbb{C}, \frak{h}=\frak{h}_0^\mathbb{C}.$ Corresponding to a $\theta$-stable parabolic subalgebra $\frak{q}$ of $\frak{g}_0$ containing $\frak{h}_0,$ and a linear function $\lambda$ on $\frak{h}$ in a certain good range, there is a cohomologically induced module $A_\frak{q}(\lambda),$ which is an irreducible unitary representation of $G$ with infinitesimal character $\chi_\lambda.$ These representations include all discrete series representations if rank$(G)=$rank$(K).$ We are interested in those cohomologically induced module $A_\frak{q}(\lambda)$ for which the infinitesimal character $\chi_\lambda$ is trivial, that is $\chi_\lambda$ is the infinitesimal character of the trivial 
representation of $G$, and we denote it by $A_\frak{q}.$ By a theorem of D. Wigner, an irreducible representation with non-zero $(\frak{g},K)$-cohomology has trivial infinitesimal character. Hence there are only finitely many irreducible unitary representations with non-zero $(\frak{g}, K)$-cohomology. In fact, the irreducible unitary representations with non-zero relative Lie algebra cohomology are exactly the irreducible unitary representations $A_\frak{q}.$ See §[2](#general){reference-type="ref" reference="general"} for more details. So the representations $A_\frak{q}$ are important in their own right. Apart from that, Borel [@borelc] has conjectured that an irreducible unitary representation with non-zero $(\frak{g},K)$-cohomology is an automorphic representation for a suitable uniform discrete subgroup of $G.$ Millson and Raghunathan [@mira] have proved this conjecture for the group $G=SO(n,1),$ by constructing geometric cycles and using Matsushima's isomorphism [@matsushima]. So the representations $A_\frak{q}$ are possible candidates for automorphic representations of $G.$ Collingwood [@collingwood] has determined the representations $A_\frak{q}$ and computed cohomologies of these representations of $Sp(n,1),$ and the real rank one real form of $F_4.$ Li and Schwermer [@lisch] have determined the representations $A_\frak{q}$ and cohomologies of these representations for the connected non-compact real Lie group of type $G_2.$ Mondal and Sankaran [@mondal-sankaran2] have determined certain representations $A_\frak{q}$ of Hodge type $(p,p)$ when $G/K$ is an irreducible Hermitian symmetric space. If $G$ is a complex simple Lie group, the number of equivalence classes of the representations $A_\frak{q},$ and Poincaré polynomials of cohomologies of some of these representations have been determined in [@paul]. 
In this article, we have determined the number of equivalence classes of the representations $A_\frak{q},$ and Poincaré polynomials of cohomologies of these representations, when $G=SO_0(2,m)$ for any positive integer $m.$ The main results are stated as follows: **Theorem 1**. *(i) If $A$ is the number of equivalence classes of irreducible unitary representations with non-zero $(\frak{g}, K)$-cohomology of the Lie group $SO_0(2,m)$ $(m \in \mathbb{N})$, then $$A = \begin{cases} l(l+2) & \textrm{if } m=2l-1, \\ l^2+4l-3 & \textrm{if } m =2l-2. \\ \end{cases}$$\ (ii) An $A_\frak{q}$ is unitarily equivalent to a discrete series representation of $SO_0(2,m)$ with trivial infinitesimal character *if and only if* $(-\Delta(\frak{u} \cap \frak{p}_-)) \cup \Delta(\frak{u} \cap \frak{p}_+) = \Delta_n^+.$ Also if $m \neq 2,$ an $A_\frak{q}$ is unitarily equivalent to a holomorphic discrete series representation of $SO_0(2,m)$ with trivial infinitesimal character *if and only if* $\Delta(\frak{u} \cap \frak{p}_+)=\phi \textrm{ or } \Delta_n^+.$ If $D$ (respectively, $D_h$) is the number of equivalence classes of discrete series representations (respectively, holomorphic discrete series representations) of $SO_0(2,m)$ with trivial infinitesimal character, then $D = 2l$ if $m=2l-1,$ or $2l-2.$ Also $D_h=2$ if $m \neq 2.$ If $m=2,$ then $D_h=4.$* We have also determined Poincaré polynomials of cohomologies of these representations in Table [\[b-table\]](#b-table){reference-type="ref" reference="b-table"} and Table [\[d-table\]](#d-table){reference-type="ref" reference="d-table"}. The proof of Th.[Theorem 1](#th1){reference-type="ref" reference="th1"} is given in §[4.1](#proof){reference-type="ref" reference="proof"}. 
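The closed-form counts in Theorem 1 are easy to evaluate, and the discrete series count $D=2l$ can be cross-checked against the number of Weyl chambers modulo the compact Weyl group. The following sketch is a numerical sanity check only; the Weyl group orders $|W(B_l)|=2^l\,l!$ and $|W(D_l)|=2^{l-1}\,l!$ are standard facts assumed here, not computations from this paper.

```python
from math import factorial

def A(m):
    # Counts from Theorem 1(i): equivalence classes of A_q for SO_0(2, m).
    if m % 2 == 1:
        l = (m + 1) // 2                  # m = 2l - 1
        return l * (l + 2)
    l = (m + 2) // 2                      # m = 2l - 2
    return l * l + 4 * l - 3

def weyl_B(l):
    # |W(B_l)| = 2^l * l!   (standard fact, assumed here)
    return 2**l * factorial(l)

def weyl_D(l):
    # |W(D_l)| = 2^(l-1) * l!, with the degenerate |W(D_1)| = 1
    return 1 if l <= 1 else 2**(l - 1) * factorial(l)

def D(m):
    # Discrete series with a fixed regular infinitesimal character are counted
    # by |W(g)| / |W(k)|; for SO_0(2, m) the compact part contributes W(so(m)).
    if m % 2 == 1:
        l = (m + 1) // 2                  # g of type B_l, so(m) of type B_{l-1}
        return weyl_B(l) // weyl_B(l - 1)
    l = (m + 2) // 2                      # g of type D_l, so(m) of type D_{l-1}
    return weyl_D(l) // weyl_D(l - 1)

for m in range(2, 12):
    l = (m + 1) // 2 if m % 2 == 1 else (m + 2) // 2
    assert D(m) == 2 * l                  # agrees with Theorem 1(ii)
print([(m, A(m), D(m)) for m in (2, 3, 4, 5)])
```

The chamber-counting identity reproduces $D=2l$ in both parities of $m$, including the degenerate case $m=2$.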
We have used Remark 3.3(iii) of [@mondal-sankaran2] to prove Th.[Theorem 1](#th1){reference-type="ref" reference="th1"}(i), and an alternative approach of $\theta$-stable parabolic subalgebras to determine the Poincaré polynomial of cohomologies of these representations. # Irreducible unitary representations with non-zero $(\frak{g}, K)$-cohomology {#general} Let $G$ be a connected semisimple Lie group with finite centre, and $K$ be a maximal compact subgroup of $G$ with Cartan involution $\theta.$ The differential of $\theta$ at identity is denoted by the same notation $\theta.$ Let $\frak{g}_0=$Lie$(G),\frak{k}_0=$Lie$(K),$ and $\frak{g}_0=\frak{k}_0 \oplus \frak{p}_0$ be the Cartan decomposition corresponding to $\theta.$ Let $\frak{g}=\frak{g}_0^\mathbb{C},\frak{k}=\frak{k}_0^\mathbb{C} \subset \frak{g}, \frak{p}=\frak{p}_0^\mathbb{C} \subset \frak{g}.$ A $\theta$-stable parabolic subalgebra of $\frak{g}_0$ is a parabolic subalgebra $\frak{q}$ of $\frak{g}$ such that (a) $\theta(\frak{q}) = \frak{q}$, and (b) $\bar{\frak{q}} \cap \frak{q}= \frak{l}$ is a Levi subalgebra of $\frak{q}$; where $\bar{\ }$ denotes the conjugation of $\frak{g}$ with respect to $\frak{g}_0$. By (b), $\frak{l}$ is the complexification of a real subalgebra $\frak{l}_0$ of $\frak{g}_0$. Also $\theta(\frak{l}_0) = \frak{l}_0$ and $\frak{l}_0$ contains a maximal abelian subalgebra $\frak{t}_0$ of $\frak{k}_0$. 
Then $\frak{h}_0 = \frak{z}_{\frak{g}_0} (\frak{t}_0)$ is a $\theta$-stable Cartan subalgebra of $\frak{g}_0.$ Let $\frak{t}=\frak{t}_0^\mathbb{C} \subset \frak{k},$ and $\frak{h}=\frak{h}_0^\mathbb{C} \subset \frak{g}.$ Note that $\frak{t},\frak{h}$ are Cartan subalgebras of $\frak{k},\frak{g}$ respectively and $\frak{h} \subset \frak{q}.$ Let $\frak{u}$ be the nilradical of $\frak{q}$ so that $\frak{q} = \frak{l} \oplus \frak{u}.$ Then $\frak{u}$ is $\theta$-stable and so $\frak{u} = (\frak{u} \cap \frak{k}) \oplus (\frak{u} \cap \frak{p}).$ If $V$ is a finite dimensional complex $A$-module, where $A$ is an abelian Lie algebra, we denote by $\Delta (V)$ (or by $\Delta (V , A)$), the set of all non-zero weights of $V;$ by $V^\alpha,$ the weight space of $V$ corresponding to a weight $\alpha \in \Delta(V);$ and by $\delta (V)$ (or by $\delta (V , A)$), $1/2$ of the sum of elements in $\Delta (V)$ counted with their respective multiplicities. Fix a maximal abelian subspace $\frak{t}_0$ of $\frak{k}_0$ and $\frak{t}=\frak{t}_0^\mathbb{C} \subset \frak{k}.$ Since $\frak{k},\frak{g}$ are $\frak{t}$-modules (under the adjoint action), we have $$\frak{k}=\frak{t}\oplus \sum_{\alpha \in \Delta(\frak{k},\frak{t})} \frak{k}^\alpha, \textrm{and } \frak{g}=\frak{h}\oplus \sum_{\alpha \in \Delta(\frak{g},\frak{t})} \frak{g}^\alpha.$$ Note that $\Delta(\frak{k},\frak{t})$ is actually the set of all non-zero roots of $\frak{k}$ relative to the Cartan subalgebra $\frak{t}.$ Choose a system of positive roots $\Delta_\frak{k}^+$ in $\Delta(\frak{k},\frak{t}).$ If $x \in i\frak{t}_0$ is such that $\alpha(x) \ge 0$ for all $\alpha \in \Delta_\frak{k}^+,$ then $\frak{q}_x= \frak{h}\oplus \sum_{\alpha \in \Delta(\frak{g},\frak{t}),\alpha(x) \ge 0} \frak{g}^\alpha$ is a $\theta$-stable parabolic subalgebra of $\frak{g}_0$, $\frak{l}_x= \frak{h}\oplus \sum_{\alpha \in \Delta(\frak{g},\frak{t}),\alpha(x)= 0} \frak{g}^\alpha$ is the Levi subalgebra of $\frak{q}_x,$ and $\frak{u}_x= 
\sum_{\alpha \in \Delta(\frak{g},\frak{t}),\alpha(x) > 0} \frak{g}^\alpha$ is the nilradical of $\frak{q}_x.$ If $\frak{q}$ is a $\theta$-stable parabolic subalgebra of $\frak{g}_0$, there exist $k \in K$ and such an $x$ with $Ad(k)(\frak{q})=\frak{q}_x.$ Now associated with a $\theta$-stable parabolic subalgebra $\frak{q}$, we have an irreducible unitary representation $\mathcal{R}^S _\frak{q} (\mathbb{C}) = A_\frak{q}$ of $G$ with trivial infinitesimal character, where $S = \textrm{dim} (\frak{u} \cap \frak{k}).$ The associated $(\frak{g}, K)$-module $A_{\frak{q}, K}$ contains an irreducible $K$-submodule $V$ of highest weight (with respect to $\Delta ^+ _\frak{k}$) $2 \delta (\frak{u} \cap \frak{p}, \frak{t}) = \sum_{\beta \in \Delta (\frak{u} \cap \frak{p}, \frak{t})} \beta$ and it occurs with multiplicity one in $A_{\frak{q}, K}$. Any other irreducible $K$-module that occurs in $A_{\frak{q}, K}$ has highest weight of the form $2 \delta (\frak{u} \cap \frak{p}, \frak{t}) + \sum_{\gamma \in \Delta (\frak{u} \cap \frak{p}, \frak{t})} n_\gamma \gamma,$ with $n_\gamma$ a non-negative integer [@voganz Th. 2.5]. The $(\frak{g}, K)$-modules $A_{\frak{q} , K}$ were first constructed, in general, by Parthasarathy [@parthasarathy1]. Vogan and Zuckerman [@voganz] gave a construction of the $(\frak{g}, K)$-modules $A_{\frak{q} , K}$ via cohomological induction and Vogan [@vogan] proved that these are unitarizable. 
Define an equivalence relation on the set of all $\theta$-stable parabolic subalgebras of $\frak{g}_0,$ by $\frak{q}$ is equivalent to $\frak{q}'$ if either $Ad(k)(\frak{q})=\frak{q}',$ for some $k \in K,$ or $\frak{u} \cap \frak{p} = \frak{u}' \cap \frak{p}.$ Also unitary equivalence is an equivalence relation on the set of all irreducible unitary representations $A_\frak{q}.$ Then the set of all equivalence classes of $\theta$-stable parabolic subalgebras is in one-to-one correspondence with the set of all equivalence classes of the irreducible unitary representations $A_\frak{q}$ [@riba Prop. 4.5]. If $\frak{q}$ is a $\theta$-stable parabolic subalgebra of $\frak{g}_0$, then the Levi subgroup $L = \{g \in G : \textrm{Ad}(g) (\frak{q}) = \frak{q} \}$ is a connected reductive Lie subgroup of $G$ with Lie algebra $\frak{l}_0.$ As $\theta(\frak{l}_0) = \frak{l}_0, L \cap K$ is a maximal compact subgroup of $L$. One has $$H^r (\frak{g}, K; A_{\frak{q}, K}) \cong H^{r-R(\frak{q})} (\frak{l}, L\cap K ; \mathbb{C}),$$ where $R(\frak{q}) := \textrm{dim}(\frak{u} \cap \frak{p})$. Let $Y_\frak{q}$ denote the compact dual of the Riemannian globally symmetric space $L/{L\cap K}$. Then $H^r (\frak{l}, L\cap K ; \mathbb{C}) \cong H^r (Y_\frak{q} ; \mathbb{C})$. And hence $$H^r (\frak{g}, K; A_{\frak{q}, K}) \cong H^{r-R(\frak{q})} (Y_\frak{q} ; \mathbb{C}).$$ If $P_\frak{q}(t)$ denotes the Poincaré polynomial of $H^* (\frak{g}, K; A_{\frak{q}, K})$, then by the above result, we have $$P_\frak{q}(t) = t^{R(\frak{q})} P(Y_\frak{q} , t).$$ Conversely, if $\pi$ is an irreducible unitary representation of $G$ with non-zero $(\frak{g},K)$-cohomology, then $\pi$ is unitarily equivalent to $A_\frak{q}$ for some $\theta$-stable parabolic subalgebra $\frak{q}$ of $\frak{g}_0$ [@voganz Th. 4.1]. 
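The formula $P_\frak{q}(t) = t^{R(\frak{q})} P(Y_\frak{q}, t)$ is just a degree shift of the Poincaré polynomial of the compact dual. A minimal sketch, representing polynomials by coefficient lists; the sample dual with Poincaré polynomial $1+t^2+t^4$ is hypothetical, for illustration only.

```python
def shift_poincare(R, coeffs):
    # Realize P_q(t) = t^{R(q)} * P(Y_q, t) on coefficient lists [a_0, a_1, ...]:
    # multiplying by t^R prepends R zero coefficients.
    return [0] * R + list(coeffs)

# Discrete series case (treated in the next section): Y_q is a point, so
# P(Y_q, t) = 1 and the (g, K)-cohomology sits in the single degree R(q).
assert shift_poincare(4, [1]) == [0, 0, 0, 0, 1]

# A hypothetical compact dual with P(Y_q, t) = 1 + t^2 + t^4 (illustrative
# only; not a Levi computed in this paper) gives t^3 + t^5 + t^7 for R(q) = 3.
print(shift_poincare(3, [1, 0, 1, 0, 1]))
```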
See also [@vogan97] for a beautiful description of the theory of $(\frak{g}, K)$-modules $A_{\frak{q} , K}.$ If rank$(G) =$ rank$(K)$ and $\frak{q}$ is a $\theta$-stable Borel subalgebra, that is, $\frak{q}$ is a Borel subalgebra of $\frak{g}$ containing a Cartan subalgebra of $\frak{k}$, then $A_\frak{q}$ is a discrete series representation of $G$ with trivial infinitesimal character. In this case, $R(\frak{q})= \frac{1}{2} \textrm{ dim}(G/K)$, $L$ is a maximal torus in $K$ and hence $$H^r (\frak{g}, K; A_{\frak{q}, K}) = \begin{cases} 0 & \textrm{if } r \neq R(\frak{q}), \\ \mathbb{C} & \textrm{if } r = R(\frak{q}). \end{cases}$$ If we take $\frak{q} = \frak{g}$, then $L=G$ and $A_\frak{q} = \mathbb{C}$, the trivial representation of $G$. If $G/K$ is Hermitian symmetric, choose a Borel-de Siebenthal positive root system in $\Delta(\frak{g},\frak{t})$ containing $\Delta_\frak{k}^+,$ with a unique non-compact simple root $\nu;$ and define $\frak{p}_+=\sum_{\beta \in \Delta(\frak{g},\frak{t}), n_\nu(\beta)=1}\frak{g}^\beta, \frak{p}_-=\sum_{\beta \in \Delta(\frak{g},\frak{t}),n_\nu(\beta)=-1} \frak{g}^\beta;$ where $n_{\nu}(\beta)$ is the coefficient of $\nu$ in the decomposition of $\beta$ into simple roots. Then $\frak{p}=\frak{p}_+\oplus\frak{p}_-, \frak{u}\cap\frak{p}=(\frak{u}\cap\frak{p}_+)\oplus(\frak{u}\cap\frak{p}_-).$ Define $R_+(\frak{q})=$dim$(\frak{u}\cap\frak{p}_+), R_-(\frak{q})=$dim$(\frak{u}\cap\frak{p}_-).$ So $R(\frak{q})=R_+(\frak{q})+R_-(\frak{q}).$ One has a Hodge decomposition $$H^r (\frak{g}, K; A_{\frak{q}, K}) = \oplus_{p+q=r} H^{p,q} (\frak{g}, K; A_{\frak{q}, K}) = H^{p,q} (\frak{g}, K; A_{\frak{q}, K}) \cong H^{p-R_+(\frak{q}),q-R_-(\frak{q})} (Y_\frak{q} ; \mathbb{C}) ;$$ where $p+q=r, p-q=R_+(\frak{q})-R_-(\frak{q}).$ See [@borel-wallach Ch. II, §4], [@gh], [@voganz]. 
The pair $(R_+(\frak{q}), R_-(\frak{q}))$ is referred to as the *Hodge type* of the representation $A_\frak{q}.$ # Discrete series representations We follow the notations from the previous section. Assume that rank$(G)=$ rank$(K),$ so that $G$ admits discrete series representations. A non-singular linear function $\lambda$ on $i\frak{t}_0$ relative to $\Delta(\frak{g},\frak{t})$, dominant with respect to $\Delta_\frak{k}^+,$ defines uniquely a positive root system $\Delta_\lambda^+$ of $\Delta(\frak{g},\frak{t})$ containing $\Delta_\frak{k}^+.$ Define $\delta_\frak{g} =\frac{1}{2} \sum_{\alpha \in \Delta_\lambda^+}\alpha, \delta_\frak{k} = \frac{1}{2}\sum_{\alpha \in \Delta_\frak{k}^+}\alpha.$ If $\lambda +\delta_\frak{g}$ is analytically integral (that is, $\lambda +\delta_\frak{g}$ is the differential of a Lie group homomorphism on the Cartan subgroup of $G$ corresponding to $\frak{t}_0$), then there exists a discrete series representation $\pi_\lambda$ with infinitesimal character $\chi_\lambda$ (it is the character of the Verma module of $\frak{g}$ with highest weight $\lambda-\delta_\frak{g}$); the associated $(\frak{g}, K)$-module $\pi_{\lambda,K}$ contains an irreducible $K$-submodule with highest weight $\Lambda=\lambda+\delta_\frak{g}-2\delta_\frak{k}$ and it occurs with multiplicity one in $\pi_{\lambda, K}.$ Any other irreducible $K$-module that occurs in $\pi_{\lambda, K}$ has highest weight of the form $\Lambda + \sum_{\alpha \in \Delta_\lambda^+} n_\alpha \alpha,$ with $n_\alpha$ a non-negative integer. 
Up to unitary equivalence, these are all the discrete series representations of $G.$ This $\lambda$ is called the *Harish-Chandra parameter*, and $\Lambda$ is called the *Blattner parameter* of the discrete series representation $\pi_\lambda.$ The positive root system $\Delta_\lambda^+$ is called the *Harish-Chandra root order* corresponding to $\lambda.$ If $G/K$ is Hermitian symmetric, then $\pi_\lambda$ is a holomorphic discrete series representation *if and only if* the Harish-Chandra root order corresponding to $\lambda$ is a Borel-de Siebenthal positive root system. See [@hc1], [@knapp]. # Irreducible unitary representations with non-zero $(\frak{g}, K)$-cohomology of the Lie group $SO_0(2,m)$ Let $I_{2,m} = \left( \begin{array}{ccc} -I_2 & 0 \\ 0 & I_m \\ \end{array} \right)$, where $I_m$ denotes the identity matrix of order $m$. Let $G$ be the group $SO_0(2, m)$, the connected component of the group $\{g \in SL(m+2, \mathbb{R}) : g^t I_{2,m} g = I_{2,m} \}$. Then $G$ is a Lie group with Lie algebra $\frak{g}_0 = \frak{so}(2,m)$ = $\{ \left( \begin{array}{ccc} X_1 & X_2 \\ X_2^t & X_3 \\ \end{array} \right): \textrm{all }X_i \textrm{ real}, X_1, X_3 \textrm{ skew symmetric of order } 2 \textrm{ and } m \textrm{ respectively}, X_2 \textrm{ arbitrary}\}$. The map $\theta : G \longrightarrow G$ given by $\theta(g) = I_{2,m} g I_{2,m}$ for all $g \in G$ is a Cartan involution with maximal compact subgroup $K= \{ \left( \begin{array}{ccc} A & 0 \\ 0 & B \\ \end{array} \right): A \in SO(2), B \in SO(m)\} \cong SO(2) \times SO(m)$. The differential of $\theta$ at the identity element of $G$ is the map $X \mapsto I_{2,m} X I_{2,m}$ for all $X \in \frak{g}_0$, and is denoted by the same notation $\theta : \frak{g}_0 \longrightarrow \frak{g}_0$. Then $\theta : \frak{g}_0 \longrightarrow \frak{g}_0$ is a Cartan involution and $\frak{g}_0 = \frak{k}_0 \oplus \frak{p}_0$ is the Cartan decomposition corresponding to $+1$ and $-1$-eigenspaces of $\theta$. 
Note that $\frak{k}_0 = \{ \left( \begin{array}{ccc} A & 0 \\ 0 & B \\ \end{array} \right): A \in \frak{so}(2), B \in \frak{so}(m)\} \cong \frak{so}(2) \oplus \frak{so}(m)$, and it is the Lie subalgebra of $\frak{g}_0$ corresponding to the connected Lie subgroup $K$ of $G$. Note that $G/K$ is an irreducible Hermitian symmetric space of non-compact type. The complexification of $\frak{g}_0$ is $\frak{g} = \frak{so}(m+2, \mathbb{C})$, and $$\frak{g} = \begin{cases} \frak{b}_l & \textrm{if } m = 2l-1, \\ \frak{\delta}_l & \textrm{if } m = 2l-2. \end{cases}$$ Let $\frak{k} = \frak{k}_0^\mathbb{C}\subset \frak{g}, \frak{p} = \frak{p}_0^\mathbb{C} \subset \frak{g},$ and $\frak{t}'_0$ be a maximal abelian subspace of $\frak{so}(m)$. Then $\frak{t}_0 = \frak{so}(2) \oplus \frak{t}'_0$ is a maximal abelian subspace of $\frak{k}_0$, and $\frak{h} = \frak{t}_0^\mathbb{C}$ is a Cartan subalgebra of $\frak{k}$ as well as of $\frak{g}$. Let $\Delta = \Delta(\frak{g}, \frak{h})$ be the set of all non-zero roots of $\frak{g}$ with respect to the Cartan subalgebra $\frak{h},$ similarly $\Delta_\frak{k} = \Delta(\frak{k}, \frak{h})$ be the set of all non-zero roots of $\frak{k}$ with respect to $\frak{h},$ and $\Delta_n = \Delta \setminus \Delta_\frak{k}=$ the set of all non-compact roots of $\frak{g}$ with respect to $\frak{h}$. Then $\frak{k}= \frak{h} + \sum_{\alpha \in \Delta_\frak{k}} \frak{g}^\alpha, \frak{p} = \sum_{\alpha \in \Delta_n} \frak{g}^\alpha,$ where $\frak{g}^\alpha$ is the root subspace of $\frak{g}$ of the root $\alpha \in \Delta$. Let $B$ denote the Killing form of $\frak{g}$. 
For any linear function $\lambda$ on $\frak{h},$ there exists a unique $H_\lambda \in \frak{h}$ such that $$\lambda (H) = B (H, H_\lambda ) \textrm{ for all } H \in \frak{h} .$$ Put $\langle \lambda , \mu \rangle = B(H_\lambda, H_\mu)$ for any linear functions $\lambda, \mu$ on $\frak{h}$, $H_\alpha ^* = 2 H_\alpha /\alpha (H_\alpha)$ for all $\alpha \in \Delta,$ and $\frak{h}_\mathbb{R} = \sum_{\alpha \in \Delta} \mathbb{R} H_\alpha$. Then $\frak{h}_\mathbb{R} = i\frak{t}_0.$ For $m \neq 2,$ let $\Delta^+$ be a Borel-de Siebenthal positive root system of $\Delta$ with a unique non-compact simple root $\phi_1$, that is $$n_{\phi_1}(\alpha) = \begin{cases} 0 & \textrm{if } \alpha \in \Delta_\frak{k}, \\ \pm 1 & \textrm{if } \alpha \in \Delta_n. \end{cases}$$ If $m=2,$ then $\Delta^+ = \{\phi_1, \phi_2\},$ where both $\phi_1$ and $\phi_2$ are non-compact and simple. Let $\Delta_\frak{k}^+ = \Delta^+ \cap \Delta_\frak{k}, \Delta_n^+ = \Delta^+ \cap \Delta_n,$ and $\Delta_n^- = -\Delta_n^+$. Write $\frak{p}_+ = \sum_{\alpha \in \Delta_n^+} \frak{g}^\alpha,$ and $\frak{p}_- = \sum_{\alpha \in \Delta_n^-} \frak{g}^\alpha$. Then $\frak{p} = \frak{p}_+ \oplus \frak{p}_-$ is the irreducible decomposition of $\frak{p}$ under the adjoint representation of $\frak{k},$ if $m \neq 2$. For $m \neq 2,$ let $\Phi_\frak{k} = \{ \phi_2, \phi_3, \ldots , \phi_l\}$ be the set of all simple roots in $\Delta_\frak{k}^+$. Then $\Phi = \{\phi_1, \phi_2, \ldots , \phi_l\}$ is the set of all simple roots in $\Delta$. In the diagrams of this article, the non-compact roots are represented by black vertices. Since $A_{\textrm{Ad}(k)(\frak{q})}$ is unitarily equivalent to $A_\frak{q}$ for all $k\in K,$ to determine all unitarily inequivalent $A_\frak{q},$ it is sufficient to determine all $\theta$-stable parabolic subalgebras $\frak{q}$ of $\frak{g}_0$ which contain $\frak{h} \oplus \sum_{\alpha \in \Delta_\frak{k}^+} \frak{g}^\alpha$. 
Let $\frak{q}$ be a $\theta$-stable parabolic subalgebra of $\frak{g}_0$ containing $\frak{h} \oplus \sum_{\alpha \in \Delta_\frak{k}^+} \frak{g}^\alpha$. Then there exists $x \in \frak{h}_\mathbb{R}$ such that $\frak{q} = \frak{q}_x = \frak{h} \oplus \sum_{\alpha(x) \ge 0, \alpha \in \Delta} \frak{g}^\alpha = \frak{l}_x \oplus \frak{u}_x,$ where $\frak{l}_x = \frak{h} \oplus \sum_{\alpha(x) = 0, \alpha \in \Delta} \frak{g}^\alpha$ is the Levi subalgebra of $\frak{q}_x,$ and $\frak{u}_x= \sum_{\alpha(x) > 0, \alpha \in \Delta} \frak{g}^\alpha$ is the nilradical of $\frak{q}_x$. Note that $\alpha (x) \ge 0$ for all $\alpha \in \Delta_\frak{k}^+.$ Write $\Delta(\frak{u}_x \cap \frak{p}_+) = \{ \beta \in \Delta_n^+ : \beta (x) > 0 \},$ and $\Delta(\frak{u}_x \cap \frak{p}_-) = \{ \beta \in \Delta_n^- : \beta (x) > 0 \}$. For $x, y \in \frak{h}_\mathbb{R}, A_{\frak{q}_x}$ is unitarily equivalent to $A_{\frak{q}_y}$ *iff* $\Delta(\frak{u}_x \cap \frak{p}_+) \cup \Delta(\frak{u}_x \cap \frak{p}_-) = \Delta(\frak{u}_y \cap \frak{p}_+) \cup \Delta(\frak{u}_y \cap \frak{p}_-)$. So we will determine all possible candidates of $\Delta(\frak{u}_x \cap \frak{p}_+) \cup \Delta(\frak{u}_x \cap \frak{p}_-),$ where $x \in \frak{h}_\mathbb{R}$ with $\alpha (x) \ge 0$ for all $\alpha \in \Delta_\frak{k}^+$. For $x \in \frak{h}_\mathbb{R}$ with $\alpha (x) \ge 0$ for all $\alpha \in \Delta_\frak{k}^+,$ we may write $x = H_\lambda$ for some linear function $\lambda$ on $\frak{h}_\mathbb{R}$ with $\langle \lambda, \alpha \rangle \ge 0$ for all $\alpha \in \Delta_\frak{k}^+$. We write $\frak{q}_\lambda = \frak{q}_x, \frak{l}_\lambda = \frak{l}_x,$ and $\frak{u}_\lambda = \frak{u}_x$. Thus $\Delta(\frak{u}_\lambda \cap \frak{p}_+) = \{ \beta \in \Delta_n^+ : \langle \lambda , \beta \rangle > 0 \},$ and $\Delta(\frak{u}_\lambda \cap \frak{p}_-) = \{ \beta \in \Delta_n^- : \langle \lambda, \beta \rangle > 0 \}$. 
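The upward closure of $\Delta(\frak{u}_\lambda \cap \frak{p}_+)$ in the root order, proved in the next lemma, can be verified by brute force in small rank. A sketch for $\frak{g}=\frak{b}_3$: roots are encoded by their coefficient vectors over the simple roots, $\lambda=\sum_i c_i\omega_i$ with $c_i\ge 0$ for $i\ge 2$, and the weights proportional to $|\phi_i|^2$ (with $\phi_l$ short in $B_l$) are this sketch's own normalization, not notation from the paper.

```python
from itertools import product

def roots_B(l):
    # Delta_n^+ for g = b_l, as coefficient vectors over (phi_1, ..., phi_l):
    # phi_1, phi_1+phi_2, ..., phi_1+...+phi_l, then doubling phi_l, ..., phi_2.
    up = [tuple(1 if j <= i else 0 for j in range(1, l + 1)) for i in range(1, l + 1)]
    doubled = [tuple(2 if j >= i else 1 for j in range(1, l + 1)) for i in range(l, 1, -1)]
    return up + doubled

l = 3
roots = roots_B(l)
assert len(roots) == 2 * l - 1            # |Delta_n^+| = m = 2l - 1

# <lambda, beta> is proportional to sum_i c_i n_i(beta) |phi_i|^2 / 2;
# in B_l the last simple root is short, so take weights (2, ..., 2, 1).
w = [2] * (l - 1) + [1]

def score(cs, beta):
    return sum(c * n * wi for c, n, wi in zip(cs, beta, w))

def leq(a, b):                            # componentwise root order
    return all(x <= y for x, y in zip(a, b))

# c_1 arbitrary, c_2, c_3 >= 0 (dominance for Delta_k^+); check upward closure.
for cs in product(range(-3, 4), range(0, 3), range(0, 3)):
    plus = {b for b in roots if score(cs, b) > 0}      # Delta(u_lambda ∩ p_+)
    assert all(g in plus for b in plus for g in roots if leq(b, g))
print("upward closure verified for b_3")
```

The same enumeration with `score(cs, b) < 0` checks that $-\Delta(\frak{u}_\lambda \cap \frak{p}_-)$ is closed downward.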
Clearly $\Delta(\frak{u}_\lambda \cap \frak{p}_+)$ and $-\Delta(\frak{u}_\lambda \cap \frak{p}_-)$ are disjoint subsets of $\Delta_n^+$. Now we begin our proofs with this elementary lemma. **Lemma 2**. *[@mondal-sankaran2 Remark 3.3(iii)][\[lemma\]]{#lemma label="lemma"} Let $\lambda$ be a linear function on $\frak{h}_\mathbb{R}$ such that $\langle \lambda, \alpha \rangle \ge 0$ for all $\alpha \in \Delta_\frak{k}^+$.\ (i) Let $\beta, \gamma \in \Delta_n$ be such that $\gamma > \beta,$ and both belong to $\Delta_n^+$ or $\Delta_n^-$. Then $\langle \lambda , \beta \rangle > 0 \implies \langle \lambda , \gamma \rangle >0$.\ (ii) Let $\phi \in \Delta_\frak{k}^+$ be simple and $\beta \in \Delta_n$. If $\beta -\phi \in \Delta, \langle \lambda , \beta - \phi \rangle = 0,$ and $\langle \lambda , \beta \rangle > 0,$ then $\langle \lambda , \phi \rangle > 0$.\ (iii) Let $\phi \in \Delta_\frak{k}^+$ be simple and $\beta \in \Delta_n$. If $\beta + \phi \in \Delta, \langle \lambda , \beta + \phi \rangle = 0,$ and $\langle \lambda , \beta \rangle = 0,$ then $\langle \lambda , \phi \rangle = 0$. If $\beta - \phi \in \Delta, \langle \lambda , \beta - \phi \rangle = 0,$ and $\langle \lambda , \beta \rangle = 0,$ then $\langle \lambda , \phi \rangle = 0$.* *Proof.* (i) Let $\beta, \gamma \in \Delta_n$ be such that $\gamma > \beta$ and both belong to $\Delta_n^+$ or $\Delta_n^-$. Then $\gamma = \beta + \sum_{2\le i \le l} n_i \phi_i,$ where $n_i \in \mathbb{N} \cup \{0\}$ for all $2 \le i \le l$. Since $\langle \lambda , \beta \rangle > 0,$ and $\langle \lambda , \phi_i \rangle \ge 0$ for all $2 \le i \le l,$ we have $\langle \lambda , \gamma \rangle > 0$. \(ii\) $\langle \lambda , \beta - \phi \rangle = 0 \implies \langle \lambda , \phi \rangle = \langle \lambda , \beta \rangle > 0$. 
\(iii\) $\langle \lambda , \beta + \phi \rangle = 0 \implies \langle \lambda , \phi \rangle = - \langle \lambda , \beta \rangle = 0,$ and\ $\langle \lambda , \beta - \phi \rangle = 0 \implies \langle \lambda , \phi \rangle = \langle \lambda , \beta \rangle = 0$. ◻ Lemma [\[lemma\]](#lemma){reference-type="ref" reference="lemma"}(i) says that $\Delta(\frak{u}_\lambda \cap \frak{p}_+)$ is either empty or a set of the form $\cup_{1 \le i \le r }\{ \beta \in \Delta_n^+ : \beta \ge \xi_i \},$ and $\Delta(\frak{u}_\lambda \cap \frak{p}_-)$ is either empty or a set of the form $\cup_{1\le j \le s} \{ -\beta \in \Delta_n^- : -\beta \ge -\eta_j \}= \cup_{1\le j \le s} (-\{ \beta \in \Delta_n^+ : \beta \le \eta_j \})$, where $\{ \xi_1, \xi_2, \ldots , \xi_r\}, \{ \eta_1, \eta_2, \ldots , \eta_s\}$ are sets of pairwise non-comparable roots in $\Delta_n^+$. If $\frak{g} = \frak{b}_l (l \ge 2)$, then $\Delta_n^+ = \{\phi_1, \phi_1 + \phi_2, \ldots , \phi_1 + \phi_2 + \cdots + \phi_l, \phi_1 + \phi_2 + \cdots + 2\phi_l, \phi_1 + \phi_2 + \cdots + 2\phi_{l-1} + 2\phi_l, \ldots , \phi_1 + 2\phi_2 + \cdots + 2\phi_l \}$. If $\frak{g} = \frak{b}_1$, then $\Delta_n^+ = \{\phi_1 \}$. If $\frak{g} = \frak{\delta}_l (l\ge 4)$, then $\Delta_n^+ = \{\phi_1, \phi_1 + \phi_2, \ldots , \phi_1 + \phi_2 + \cdots + \phi_{l-2}, \phi_1 + \phi_2 + \cdots + \phi_{l-2} + \phi_{l-1}, \phi_1 + \phi_2 + \cdots + \phi_{l-2} + \phi_l, \phi_1 + \phi_2 + \cdots + \phi_{l-2} + \phi_{l-1} + \phi_l, \phi_1 + \phi_2 + \cdots + 2\phi_{l-2} + \phi_{l-1} + \phi_l, \ldots , \phi_1 + 2\phi_2 + \cdots + 2\phi_{l-2} + \phi_{l-1} + \phi_l \}$. If $\frak{g} = \frak{\delta}_2$, then $\Delta_n^+ = \{ \phi_1, \phi_2 \}$. If $\frak{g} = \frak{\delta}_3$, then $\Delta_n^+ = \{\phi_1, \phi_1+\phi_2, \phi_1+\phi_3, \phi_1+\phi_2+\phi_3\}$. In the Figure [\[diagram\]](#diagram){reference-type="ref" reference="diagram"}, the vertices represent roots in $\Delta_n^+$. 
Two roots $\beta, \gamma \in \Delta_n^+$ are joined by a line with an arrow in the direction of $\gamma$ if $\gamma = \beta + \phi$ for some simple root $\phi \in \Delta_\frak{k}^+$. In this case, the simple root $\phi$ is given on one side of the line. ## Proof of Th.[Theorem 1](#th1){reference-type="ref" reference="th1"} {#proof} Let $\omega_1, \omega_2, \ldots, \omega_l$ be the fundamental weights of $\frak{g}$ corresponding to the simple roots $\phi_1, \phi_2, \ldots, \phi_l$ respectively. \(i\) **$\frak{g} = \frak{b}_l(l>1):$** Lemma [\[lemma\]](#lemma){reference-type="ref" reference="lemma"}(i) and the diagram of $\Delta_n^+$ in Figure [\[diagram\]](#diagram){reference-type="ref" reference="diagram"} show that $\Delta(\frak{u}_\lambda \cap \frak{p}_+)$ is either empty or a set of the form $\{ \beta \in \Delta_n^+ : \beta \ge \xi\},$ and $\Delta(\frak{u}_\lambda \cap \frak{p}_-)$ is either empty or a set of the form $\{ -\beta \in \Delta_n^- : -\beta \ge -\eta \}= -\{ \beta \in \Delta_n^+ : \beta \le \eta \}$, and $\Delta(\frak{u}_\lambda \cap \frak{p}_+), -\Delta(\frak{u}_\lambda \cap \frak{p}_-)$ are disjoint subsets of $\Delta_n^+$, where $\xi, \eta \in \Delta_n^+.$\ Let $\Delta(\frak{u}_\lambda \cap \frak{p}_-)$ be empty. Then $\Delta(\frak{u}_\lambda \cap \frak{p}_+)=\{ \beta \in \Delta_n^+ : \beta \ge \xi\},$ where $\xi > \phi_1 + \phi_2 + \cdots + \phi_l$ is not possible. For then $\xi = \phi_1 + \cdots + \phi_{i-1} + 2\phi_i + \cdots + 2\phi_l,$ where $2 \le i \le l$. So $\langle \lambda , \phi_i \rangle > 0,$ by Lemma [\[lemma\]](#lemma){reference-type="ref" reference="lemma"}(ii). Again $\langle \lambda , \phi_1+\phi_2+\cdots+\phi_i \rangle =0, \langle \lambda , \phi_1+\phi_2+\cdots+\phi_{i-1} \rangle =0.$ Thus $\langle \lambda , \phi_i \rangle =0,$ a contradiction. 
If $\xi \le \phi_1 + \phi_2 + \cdots +\phi_l,$ then $\xi = \phi_1+\cdots+\phi_i,$ for some $1 \le i \le l,$ and $\Delta(\frak{u}_\lambda \cap \frak{p}_+)=\{ \beta \in \Delta_n^+ : \beta \ge \xi\}, \Delta(\frak{u}_\lambda \cap \frak{p}_-)=\phi,$ where $\lambda= \omega_i.$ Also $\Delta(\frak{u}_\lambda \cap \frak{p}_+)=\phi, \Delta(\frak{u}_\lambda \cap \frak{p}_-)=\phi,$ for $\lambda =0.$ Thus the number of equivalence classes of irreducible unitary representations with non-zero $(\frak{g}, K)$-cohomology for which $\Delta(\frak{u}_\lambda \cap \frak{p}_-) = \phi,$ is $l+1$.\ Let $\Delta(\frak{u}_\lambda \cap \frak{p}_-)=-\{ \beta \in \Delta_n^+ : \beta \le \phi_1 +\cdots + \phi_i \},$ where $1 \le i \le l-1.$ Then $\Delta(\frak{u}_\lambda \cap \frak{p}_+)=\{ \beta \in \Delta_n^+ : \beta \ge \xi\},$ where $\xi > \phi_1+\cdots+\phi_l, \xi \neq \phi_1+\cdots+\phi_i+2\phi_{i+1}+\cdots+2\phi_l$ is not possible. Because $\xi > \phi_1+\cdots+\phi_l, \xi \neq \phi_1+\cdots+\phi_i+2\phi_{i+1}+\cdots+2\phi_l$ implies $\xi=\phi_1+\cdots+\phi_{j-1}+2\phi_j+\cdots+2\phi_l$ for some $2 \le j \le l, j \neq i+1.$ Then $\langle \lambda , \phi_{i+1} \rangle > 0, \langle \lambda , \phi_j \rangle > 0,$ by Lemma [\[lemma\]](#lemma){reference-type="ref" reference="lemma"}(ii). If $2 \le j \le i,$ then $\langle \lambda , \phi_1+\cdots+\phi_i+2\phi_{i+1}+\cdots+2\phi_l \rangle =0, \langle \lambda , \phi_1+\cdots+\phi_{i+1}+2\phi_{i+2}+\cdots+2\phi_l \rangle =0$. Thus $\langle \lambda , \phi_{i+1} \rangle =0$, a contradiction. If $i+2\le j \le l,$ then $\langle \lambda , \phi_1+\phi_2+\cdots+\phi_j \rangle =0, \langle \lambda , \phi_1+\phi_2+\cdots+\phi_{j-1} \rangle =0$. Thus $\langle \lambda , \phi_j \rangle =0$, a contradiction. 
If $\xi \le \phi_1 + \phi_2 + \cdots +\phi_l,$ then $\xi = \phi_1+\cdots+\phi_j,$ for some $i+1 \le j \le l,$ and $\Delta(\frak{u}_\lambda \cap \frak{p}_+)=\{ \beta \in \Delta_n^+ : \beta \ge \xi\},\Delta(\frak{u}_\lambda \cap \frak{p}_-)=-\{ \beta \in \Delta_n^+ : \beta \le \phi_1 +\cdots + \phi_i \},$ where $\lambda= \frac{\omega_{i+1}}{\langle \omega_{i+1},\phi_{i+1}\rangle}+\frac{\omega_j}{\langle \omega_j,\phi_j\rangle}-\frac{\omega_1}{\langle \omega_1,\phi_1\rangle}.$ If $\xi = \phi_1+\cdots+\phi_i+2\phi_{i+1}+\cdots+2\phi_l,$ then $\Delta(\frak{u}_\lambda \cap \frak{p}_+)=\{ \beta \in \Delta_n^+ : \beta \ge \xi\},\Delta(\frak{u}_\lambda \cap \frak{p}_-)=-\{ \beta \in \Delta_n^+ : \beta \le \phi_1 +\cdots + \phi_i \},$ for $\lambda= \frac{\omega_{i+1}}{\langle \omega_{i+1},\phi_{i+1}\rangle}-\frac{\omega_1}{\langle \omega_1,\phi_1\rangle}.$ Also $\Delta(\frak{u}_\lambda \cap \frak{p}_+)=\phi$ is not possible, for $\langle \lambda , \phi_{i+1} \rangle > 0,$ by Lemma [\[lemma\]](#lemma){reference-type="ref" reference="lemma"}(ii); and $\langle \lambda , \phi_1+\cdots+\phi_i+2\phi_{i+1}+\cdots+2\phi_l \rangle =0, \langle \lambda , \phi_1+\cdots+\phi_{i+1}+2\phi_{i+2}+\cdots+2\phi_l \rangle =0$. Thus $\langle \lambda , \phi_{i+1} \rangle =0$, a contradiction. 
Hence the number of equivalence classes of irreducible unitary representations with non-zero $(\frak{g}, K)$-cohomology for which $\Delta(\frak{u}_\lambda \cap \frak{p}_-) = -\{ \beta \in \Delta_n^+ : \beta \le \phi_1 +\cdots + \phi_i \},$ is $l - i+1$ for all $1 \le i \le l-1.$\ Let $\Delta(\frak{u}_\lambda \cap \frak{p}_-)=-\{ \beta \in \Delta_n^+ : \beta \le \phi_1 +\cdots + \phi_l \}.$ Since $\xi > \phi_1+\cdots+\phi_l, \xi=\phi_1+\cdots+\phi_{j-1}+2\phi_j+\cdots+2\phi_l$ for some $2 \le j \le l.$ Now $\Delta(\frak{u}_\lambda \cap \frak{p}_+)=\{ \beta \in \Delta_n^+ : \beta \ge \xi\}, \Delta(\frak{u}_\lambda \cap \frak{p}_-)=-\{ \beta \in \Delta_n^+ : \beta \le \phi_1 +\cdots + \phi_l \},$ for $\lambda= \frac{\omega_l}{\langle \omega_l,\phi_l \rangle}+\frac{\omega_j}{\langle \omega_j,\phi_j \rangle}-\frac{3\omega_1}{\langle \omega_1,\phi_1\rangle}.$ Also $\Delta(\frak{u}_\lambda \cap \frak{p}_+)=\phi, \Delta(\frak{u}_\lambda \cap \frak{p}_-)=-\{ \beta \in \Delta_n^+ : \beta \le \phi_1 +\cdots + \phi_l \},$ for $\lambda = \frac{\omega_l}{\langle \omega_l,\phi_l \rangle} -\frac{2\omega_1}{\langle \omega_1,\phi_1\rangle}.$ Thus the number of equivalence classes of irreducible unitary representations with non-zero $(\frak{g}, K)$-cohomology for which $\Delta(\frak{u}_\lambda \cap \frak{p}_-) = -\{ \beta \in \Delta_n^+ : \beta \le \phi_1 +\cdots + \phi_l \},$ is $l.$\ Let $\Delta(\frak{u}_\lambda \cap \frak{p}_-)=-\{ \beta \in \Delta_n^+ : \beta \le \phi_1 +\cdots + \phi_{i-1}+2\phi_i + \cdots+2\phi_l \}, 2 \le i \le l.$ Then $\Delta(\frak{u}_\lambda \cap \frak{p}_+)=\phi, \Delta(\frak{u}_\lambda \cap \frak{p}_-)=-\{ \beta \in \Delta_n^+ : \beta \le \phi_1 +\cdots + \phi_{i-1}+2\phi_i + \cdots+2\phi_l \},$ for $\lambda = \frac{\omega_{i-1}}{\langle \omega_{i-1},\phi_{i-1} \rangle} -\frac{2\omega_1}{\langle \omega_1,\phi_1\rangle}$ if $3 \le i \le l,$ and $\lambda = -\omega_1$ if $i=2.$ If $\xi=\phi_1+\cdots+\phi_{j-1}+2\phi_j+\cdots+2\phi_l$ for some $2 \le j < i,$ 
then $\Delta(\frak{u}_\lambda \cap \frak{p}_+)=\{ \beta \in \Delta_n^+ : \beta \ge \xi\}, \Delta(\frak{u}_\lambda \cap \frak{p}_-)=-\{ \beta \in \Delta_n^+ : \beta \le \phi_1 +\cdots + \phi_{i-1}+2\phi_i + \cdots+2\phi_l \},$ for $\lambda= \frac{\omega_{i-1}}{\langle \omega_{i-1},\phi_{i-1} \rangle}+\frac{\omega_j}{\langle \omega_j,\phi_j \rangle}-\frac{3\omega_1}{\langle \omega_1,\phi_1\rangle}.$ Hence the number of equivalence classes of irreducible unitary representations with non-zero $(\frak{g}, K)$-cohomology for which $\Delta(\frak{u}_\lambda \cap \frak{p}_-) = -\{ \beta \in \Delta_n^+ : \beta \le \phi_1 +\cdots + \phi_{i-1}+2\phi_i + \cdots+2\phi_l \},$ is $i-1$ for all $2 \le i \le l.$\ Thus the number of equivalence classes of irreducible unitary representations with non-zero $(\frak{g}, K)$-cohomology of the Lie group $SO_0(2,2l-1)(l > 1)$ is $A=(l+1)+l+\cdots+2+l+1+\cdots+(l-1)=3l + (l-1)l/2 + (l-1)l/2 = l(l+2).$ **$\frak{g} = \frak{b}_1:$** In this case $\Delta_n^+ =\{\phi_1\}.$ If $\lambda = 0,$ then $\Delta(\frak{u}_\lambda \cap \frak{p}_-), \Delta(\frak{u}_\lambda \cap \frak{p}_+)$ are empty. 
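The total $A = l(l+2)$ obtained above for $\frak{b}_l$ admits a quick numerical cross-check. The sketch below is our own bookkeeping, not part of the proof: it simply encodes the per-class counts derived in the case analysis — $l+1$ for $\Delta(\frak{u}_\lambda \cap \frak{p}_-)$ empty, $l-i+1$ for the classes bounded by $\phi_1+\cdots+\phi_i$ with $1 \le i \le l-1$, $l$ for the class bounded by $\phi_1+\cdots+\phi_l$, and $i-1$ for the classes bounded by $\phi_1+\cdots+\phi_{i-1}+2\phi_i+\cdots+2\phi_l$ with $2 \le i \le l$ — and sums them.

```python
# Cross-check (illustration only) of the count A = l(l+2) for g = b_l.

def count_b(l):
    total = l + 1                                  # Delta(u_lambda cap p_-) empty
    total += sum(l - i + 1 for i in range(1, l))   # bounded by phi_1+...+phi_i
    total += l                                     # bounded by phi_1+...+phi_l
    total += sum(i - 1 for i in range(2, l + 1))   # bounded by a doubled root
    return total

assert all(count_b(l) == l * (l + 2) for l in range(2, 100))
```

For $l = 1$ the sum degenerates to $3$, which agrees with the separate $\frak{b}_1$ computation.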
$\lambda = \omega_1$ implies $\Delta(\frak{u}_\lambda \cap \frak{p}_-)=\phi, \Delta(\frak{u}_\lambda \cap \frak{p}_+)=\{\phi_1\};$ and $\lambda = -\omega_1$ implies $\Delta(\frak{u}_\lambda \cap \frak{p}_+)=\phi, \Delta(\frak{u}_\lambda \cap \frak{p}_-)=\{-\phi_1\}.$ Thus $A=3=l(l+2).$ **$\frak{g} = \frak{\delta}_l(l \ge 3):$** Lemma [\[lemma\]](#lemma){reference-type="ref" reference="lemma"}(i) and the diagram of $\Delta_n^+$ in Figure [\[diagram\]](#diagram){reference-type="ref" reference="diagram"} show that $\Delta(\frak{u}_\lambda \cap \frak{p}_+)$ is either empty or a set of the form $\{ \beta \in \Delta_n^+ : \beta \ge \xi\},$ or $\{ \beta \in \Delta_n^+ : \beta \ge \xi_1 \textrm{ or } \xi_2 \},$ and $\Delta(\frak{u}_\lambda \cap \frak{p}_-)$ is either empty or a set of the form $\{ -\beta \in \Delta_n^- : -\beta \ge -\eta \}= -\{ \beta \in \Delta_n^+ : \beta \le \eta \},$ or $-\{ \beta \in \Delta_n^+ : \beta \le \xi_1 \textrm{ or } \xi_2 \},$ where $\xi, \eta \in \Delta_n^+; \xi_1=\phi_1+\cdots+\phi_{l-2}+\phi_{l-1},\xi_2=\phi_1+\cdots+\phi_{l-2}+\phi_l.$\ Let $\Delta(\frak{u}_\lambda \cap \frak{p}_-)$ be empty. Then $\Delta(\frak{u}_\lambda \cap \frak{p}_+)=\{ \beta \in \Delta_n^+ : \beta \ge \xi\},$ where $\xi \ge \phi_1 + \phi_2 + \cdots + \phi_l$ is not possible. For then $\xi = \phi_1+\cdots +\phi_l,$ or $\xi = \phi_1 + \cdots + \phi_{i-1} + 2\phi_i + \cdots + 2\phi_{l-2}+\phi_{l-1}+\phi_l,$ where $2 \le i \le l-2$. If $\xi = \phi_1+\cdots +\phi_l,$ then $\langle \lambda , \phi_{l-1} \rangle > 0, \langle \lambda , \phi_l \rangle > 0,$ by Lemma [\[lemma\]](#lemma){reference-type="ref" reference="lemma"}(ii). Again $\langle \lambda , \phi_1+\cdots+\phi_{l-1} \rangle =0, \langle \lambda , \phi_1+\cdots+\phi_{l-2}+\phi_l \rangle =0, \langle \lambda , \phi_1+\cdots+\phi_{l-2} \rangle =0$. Thus $\langle \lambda , \phi_{l-1} \rangle =0, \langle \lambda , \phi_l \rangle =0,$ a contradiction. 
If $\xi = \phi_1 + \cdots + \phi_{i-1} + 2\phi_i + \cdots + 2\phi_{l-2}+\phi_{l-1}+\phi_l (2 \le i \le l-2),$ then $\langle \lambda , \phi_i \rangle > 0,$ by Lemma [\[lemma\]](#lemma){reference-type="ref" reference="lemma"}(ii). Again $\langle \lambda , \phi_1+\phi_2+\cdots+\phi_i \rangle =0, \langle \lambda , \phi_1+\phi_2+\cdots+\phi_{i-1} \rangle =0$. Thus $\langle \lambda , \phi_i \rangle =0$, a contradiction. If $\xi < \phi_1 + \phi_2 + \cdots +\phi_l,$ then $\xi = \phi_1+\cdots+\phi_i,$ for some $1 \le i \le l-1,$ or $\xi = \xi_2$ and $\Delta(\frak{u}_\lambda \cap \frak{p}_+)=\{ \beta \in \Delta_n^+ : \beta \ge \xi\}, \Delta(\frak{u}_\lambda \cap \frak{p}_-)=\phi,$ for $\lambda= \omega_i, \omega_l$ respectively. Also $\Delta(\frak{u}_\lambda \cap \frak{p}_+)= \{ \beta \in \Delta_n^+ : \beta \ge \xi_1 \textrm{ or } \xi_2 \}, \Delta(\frak{u}_\lambda \cap \frak{p}_-)=\phi,$ for $\lambda= \omega_{l-1} +\omega_l,$ and $\Delta(\frak{u}_\lambda \cap \frak{p}_+)=\phi, \Delta(\frak{u}_\lambda \cap \frak{p}_-)=\phi,$ for $\lambda =0.$ Thus the number of equivalence classes of irreducible unitary representations with non-zero $(\frak{g}, K)$-cohomology for which $\Delta(\frak{u}_\lambda \cap \frak{p}_-) = \phi,$ is $l+2$.\ Let $\Delta(\frak{u}_\lambda \cap \frak{p}_-)=-\{ \beta \in \Delta_n^+ : \beta \le \phi_1 +\cdots + \phi_i \},$ where $1 \le i \le l-3, l \ge 4.$ Then $\Delta(\frak{u}_\lambda \cap \frak{p}_+)=\{ \beta \in \Delta_n^+ : \beta \ge \xi\},$ where $\xi \ge \phi_1+\cdots+\phi_l, \xi \neq \phi_1+\cdots+\phi_i+2\phi_{i+1}+\cdots+2\phi_{l-2}+\phi_{l-1}+\phi_l$ is not possible. For if $\xi = \phi_1+\cdots+\phi_l,$ then $\langle \lambda , \phi_{l-1} \rangle > 0, \langle \lambda , \phi_l \rangle > 0$ by Lemma [\[lemma\]](#lemma){reference-type="ref" reference="lemma"}(ii). 
Again $\langle \lambda , \phi_1+\phi_2+\cdots+\phi_{l-2} \rangle =0, \langle \lambda , \phi_1+\phi_2+\cdots+\phi_{l-1} \rangle =0, \langle \lambda , \phi_1+\phi_2+\cdots+\phi_{l-2}+\phi_l \rangle =0.$ Thus $\langle \lambda , \phi_{l-1} \rangle = 0, \langle \lambda , \phi_l \rangle = 0,$ a contradiction. Now $\xi > \phi_1+\cdots+\phi_l, \xi \neq \phi_1+\cdots+\phi_i+2\phi_{i+1}+\cdots+2\phi_{l-2}+\phi_{l-1}+\phi_l$ implies $\xi=\phi_1+\cdots+\phi_{j-1}+2\phi_j+\cdots+2\phi_{l-2}+\phi_{l-1}+\phi_l$ for some $2 \le j \le l-2, j \neq i+1.$ Then $\langle \lambda , \phi_{i+1} \rangle > 0, \langle \lambda , \phi_j \rangle > 0,$ by Lemma [\[lemma\]](#lemma){reference-type="ref" reference="lemma"}(ii). If $2 \le j \le i,$ then $\langle \lambda , \phi_1+\cdots+\phi_i+2\phi_{i+1}+\cdots+2\phi_{l-2}+\phi_{l-1}+\phi_l \rangle =0, \langle \lambda , \phi_1+\cdots+\phi_{i+1}+2\phi_{i+2}+\cdots+2\phi_{l-2}+\phi_{l-1}+\phi_l \rangle =0$. Thus $\langle \lambda , \phi_{i+1} \rangle =0$, a contradiction. If $i+2\le j \le l-2,$ then $\langle \lambda , \phi_1+\phi_2+\cdots+\phi_j \rangle =0, \langle \lambda , \phi_1+\phi_2+\cdots+\phi_{j-1} \rangle =0$. Thus $\langle \lambda , \phi_j \rangle =0$, a contradiction.
If $\xi < \phi_1 + \phi_2 + \cdots +\phi_l,$ then $\xi = \phi_1+\cdots+\phi_j,$ for some $i+1 \le j \le l-1,$ or $\xi = \xi_2.$ Now $\Delta(\frak{u}_\lambda \cap \frak{p}_+)=\{ \beta \in \Delta_n^+ : \beta \ge \phi_1+\cdots+\phi_j\}(i+1 \le j \le l-1),\Delta(\frak{u}_\lambda \cap \frak{p}_-)=-\{ \beta \in \Delta_n^+ : \beta \le \phi_1 +\cdots + \phi_i \}$ for $\lambda= \omega_{i+1}+\omega_j-\omega_1; \Delta(\frak{u}_\lambda \cap \frak{p}_+)=\{ \beta \in \Delta_n^+ : \beta \ge \xi_2 \}, \Delta(\frak{u}_\lambda \cap \frak{p}_-)=-\{ \beta \in \Delta_n^+ : \beta \le \phi_1 +\cdots + \phi_i \}$ for $\lambda= \omega_{i+1}+\omega_l-\omega_1;$ and $\Delta(\frak{u}_\lambda \cap \frak{p}_+)=\{ \beta \in \Delta_n^+ : \beta \ge \phi_1+\cdots+\phi_i+2\phi_{i+1}+\cdots+2\phi_{l-2}+\phi_{l-1}+\phi_l\}, \Delta(\frak{u}_\lambda \cap \frak{p}_-)=-\{ \beta \in \Delta_n^+ : \beta \le \phi_1 +\cdots + \phi_i \}$ for $\lambda= \omega_{i+1}-\omega_1.$ Also $\Delta(\frak{u}_\lambda \cap \frak{p}_+)=\{ \beta \in \Delta_n^+ : \beta \ge \xi_1 \textrm{ or } \xi_2 \}, \Delta(\frak{u}_\lambda \cap \frak{p}_-)=-\{ \beta \in \Delta_n^+ : \beta \le \phi_1 +\cdots + \phi_i \}$ for $\lambda= \omega_{i+1}+\omega_{l-1}+\omega_l -\omega_1.$ Again $\Delta(\frak{u}_\lambda \cap \frak{p}_+)=\phi$ is not possible, for $\langle \lambda , \phi_{i+1} \rangle > 0,$ by Lemma [\[lemma\]](#lemma){reference-type="ref" reference="lemma"}(ii); and $\langle \lambda , \phi_1+\cdots+\phi_i+2\phi_{i+1}+\cdots+2\phi_{l-2}+\phi_{l-1}+\phi_l \rangle =0, \langle \lambda , \phi_1+\cdots+\phi_{i+1}+2\phi_{i+2}+\cdots+2\phi_{l-2}+\phi_{l-1}+\phi_l \rangle =0$. Thus $\langle \lambda , \phi_{i+1} \rangle =0$, a contradiction.
Hence the number of equivalence classes of irreducible unitary representations with non-zero $(\frak{g}, K)$-cohomology for which $\Delta(\frak{u}_\lambda \cap \frak{p}_-) = -\{ \beta \in \Delta_n^+ : \beta \le \phi_1 +\cdots + \phi_i \},$ is $l - i+2$ for all $1 \le i \le l-3.$\ Let $\Delta(\frak{u}_\lambda \cap \frak{p}_-)=-\{ \beta \in \Delta_n^+ : \beta \le \phi_1 +\cdots + \phi_{l-2} \}.$ Then $\langle \lambda , \phi_{l-1} \rangle > 0, \langle \lambda , \phi_l \rangle > 0$ by Lemma [\[lemma\]](#lemma){reference-type="ref" reference="lemma"}(ii). Hence $\Delta(\frak{u}_\lambda \cap \frak{p}_+)= \phi,$ or $\{ \beta \in \Delta_n^+ : \beta \ge \xi\},$ where $\xi > \phi_1+\cdots+\phi_l$ is not possible. For then $\langle \lambda , \phi_1+\phi_2+\cdots+\phi_l \rangle =0, \langle \lambda , \phi_1+\phi_2+\cdots+\phi_{l-1} \rangle =0, \langle \lambda , \phi_1+\phi_2+\cdots+\phi_{l-2}+\phi_l \rangle =0.$ Thus $\langle \lambda , \phi_l \rangle = 0, \langle \lambda , \phi_{l-1} \rangle = 0,$ a contradiction. 
Now $\Delta(\frak{u}_\lambda \cap \frak{p}_+)=\{ \beta \in \Delta_n^+ : \beta \ge \xi_1 \}, \Delta(\frak{u}_\lambda \cap \frak{p}_-)=-\{ \beta \in \Delta_n^+ : \beta \le \phi_1 +\cdots + \phi_{l-2} \}$ for $\lambda= 2\omega_{l-1}+\omega_l-\omega_1; \Delta(\frak{u}_\lambda \cap \frak{p}_+)=\{ \beta \in \Delta_n^+ : \beta \ge \xi_2 \}, \Delta(\frak{u}_\lambda \cap \frak{p}_-)=-\{ \beta \in \Delta_n^+ : \beta \le \phi_1 +\cdots + \phi_{l-2} \}$ for $\lambda= \omega_{l-1}+2\omega_l-\omega_1;$ and $\Delta(\frak{u}_\lambda \cap \frak{p}_+)=\{ \beta \in \Delta_n^+ : \beta \ge \phi_1+\cdots+\phi_l\}, \Delta(\frak{u}_\lambda \cap \frak{p}_-)=-\{ \beta \in \Delta_n^+ : \beta \le \phi_1 +\cdots + \phi_{l-2} \}$ for $\lambda= \omega_{l-1}+\omega_l-\omega_1.$ Also $\Delta(\frak{u}_\lambda \cap \frak{p}_+)=\{ \beta \in \Delta_n^+ : \beta \ge \xi_1 \textrm{ or } \xi_2 \}, \Delta(\frak{u}_\lambda \cap \frak{p}_-)=-\{ \beta \in \Delta_n^+ : \beta \le \phi_1 +\cdots + \phi_{l-2} \}$ for $\lambda= 2\omega_{l-1}+2\omega_l-\omega_1.$ Hence the number of equivalence classes of irreducible unitary representations with non-zero $(\frak{g}, K)$-cohomology for which $\Delta(\frak{u}_\lambda \cap \frak{p}_-) = -\{ \beta \in \Delta_n^+ : \beta \le \phi_1 +\cdots + \phi_{l-2} \},$ is $4.$\ Let $\Delta(\frak{u}_\lambda \cap \frak{p}_-)=-\{ \beta \in \Delta_n^+ : \beta \le \phi_1+\cdots+\phi_{l-2}+\phi_a \},$ where $a=l-1,l;$ and $b=l \textrm{ if }a=l-1, \textrm{and } b=l-1 \textrm{ if } a=l.$ Then $\Delta(\frak{u}_\lambda \cap \frak{p}_+)=\{ \beta \in \Delta_n^+ : \beta \ge \phi_1+\cdots+\phi_{l-2}+\phi_b \}, \Delta(\frak{u}_\lambda \cap \frak{p}_-)=-\{ \beta \in \Delta_n^+ : \beta \le \phi_1+\cdots+\phi_{l-2}+\phi_a \}$ for $\lambda= 2\omega_b-\omega_1; \Delta(\frak{u}_\lambda \cap \frak{p}_+)=\{ \beta \in \Delta_n^+ : \beta \ge \phi_1+\cdots+\phi_{l-2}+\phi_{l-1}+\phi_l\}, \Delta(\frak{u}_\lambda \cap \frak{p}_-)=-\{ \beta \in \Delta_n^+ : \beta \le \phi_1 +\cdots + \phi_{l-2}+\phi_a \}$ for 
$\lambda=\omega_a + 2\omega_b-2\omega_1; \Delta(\frak{u}_\lambda \cap \frak{p}_+)=\{ \beta \in \Delta_n^+ : \beta \ge \phi_1+\cdots+\phi_{j-1}+2\phi_j+\cdots+2\phi_{l-2}+\phi_{l-1}+\phi_l\}(2\le j \le l-2), \Delta(\frak{u}_\lambda \cap \frak{p}_-)=-\{ \beta \in \Delta_n^+ : \beta \le \phi_1 +\cdots + \phi_{l-2}+\phi_a \}$ for $\lambda= \omega_j+\omega_b-2\omega_1 (a=l-1,l).$ Also $\Delta(\frak{u}_\lambda \cap \frak{p}_+)=\phi, \Delta(\frak{u}_\lambda \cap \frak{p}_-)=-\{ \beta \in \Delta_n^+ : \beta \le \phi_1 +\cdots + \phi_{l-2}+\phi_a \}$ for $\lambda= \omega_b-\omega_1.$ Thus the number of equivalence classes of irreducible unitary representations with non-zero $(\frak{g}, K)$-cohomology for which $\Delta(\frak{u}_\lambda \cap \frak{p}_-) = -\{ \beta \in \Delta_n^+ : \beta \le \phi_1 +\cdots + \phi_{l-2}+\phi_a \}(a=l-1,l),$ is $l.$\ Let $\Delta(\frak{u}_\lambda \cap \frak{p}_-)=-\{ \beta \in \Delta_n^+ : \beta \le \xi_1 \textrm{ or } \xi_2 \}.$ Then $\Delta(\frak{u}_\lambda \cap \frak{p}_+)=\{ \beta \in \Delta_n^+ : \beta \ge \phi_1+\cdots+\phi_{l-2}+\phi_{l-1}+\phi_l\}, \Delta(\frak{u}_\lambda \cap \frak{p}_-)=-\{ \beta \in \Delta_n^+ : \beta \le \xi_1 \textrm{ or } \xi_2 \}$ for $\lambda= 2\omega_{l-1}+2\omega_l-3\omega_1;$ $\Delta(\frak{u}_\lambda \cap \frak{p}_+)=\{ \beta \in \Delta_n^+ : \beta \ge \phi_1+\cdots+\phi_{j-1}+2\phi_j+\cdots+2\phi_{l-2}+\phi_{l-1}+\phi_l\}(2\le j \le l-2), \Delta(\frak{u}_\lambda \cap \frak{p}_-)=-\{ \beta \in \Delta_n^+ : \beta \le \xi_1 \textrm{ or } \xi_2 \}$ for $\lambda= \omega_j+\omega_{l-1}+\omega_l-3\omega_1;$ and $\Delta(\frak{u}_\lambda \cap \frak{p}_+)=\phi, \Delta(\frak{u}_\lambda \cap \frak{p}_-)=-\{ \beta \in \Delta_n^+ : \beta \le \xi_1 \textrm{ or } \xi_2 \}$ for $\lambda= \omega_{l-1}+\omega_l-2\omega_1.$ Thus the number of equivalence classes of irreducible unitary representations with non-zero $(\frak{g}, K)$-cohomology for which $\Delta(\frak{u}_\lambda \cap \frak{p}_-) = -\{\beta \in \Delta_n^+ : \beta \le 
\xi_1 \textrm{ or } \xi_2 \}$ is $l-1.$\ Let $\Delta(\frak{u}_\lambda \cap \frak{p}_-)=-\{ \beta \in \Delta_n^+ : \beta \le \phi_1 +\cdots + \phi_{l-2}+\phi_{l-1}+\phi_l \}.$ Then $\Delta(\frak{u}_\lambda \cap \frak{p}_+)=\{ \beta \in \Delta_n^+ : \beta \ge \phi_1+\cdots+\phi_{j-1}+2\phi_j+\cdots+2\phi_{l-2}+\phi_{l-1}+\phi_l\}(2\le j \le l-2), \Delta(\frak{u}_\lambda \cap \frak{p}_-)=-\{ \beta \in \Delta_n^+ : \beta \le \phi_1 +\cdots + \phi_{l-2}+\phi_{l-1}+\phi_l \}$ for $\lambda= \omega_j+\omega_{l-2}-3\omega_1; \Delta(\frak{u}_\lambda \cap \frak{p}_+)=\phi, \Delta(\frak{u}_\lambda \cap \frak{p}_-)=-\{ \beta \in \Delta_n^+ : \beta \le \phi_1 +\cdots + \phi_{l-2}+\phi_{l-1}+\phi_l \}$ for $\lambda= \omega_{l-2}-2\omega_1.$ Thus the number of equivalence classes of irreducible unitary representations with non-zero $(\frak{g}, K)$-cohomology for which $\Delta(\frak{u}_\lambda \cap \frak{p}_-) = -\{\beta \in \Delta_n^+ : \beta \le \phi_1 +\cdots + \phi_{l-2}+\phi_{l-1}+\phi_l \},$ is $l-2.$\ Let $\Delta(\frak{u}_\lambda \cap \frak{p}_-)=-\{ \beta \in \Delta_n^+ : \beta \le \phi_1+\cdots+\phi_{i-1}+2\phi_i+\cdots+2\phi_{l-2}+\phi_{l-1}+\phi_l\}(2\le i \le l-2), l \ge 4.$ Then $\Delta(\frak{u}_\lambda \cap \frak{p}_+)=\phi, \Delta(\frak{u}_\lambda \cap \frak{p}_-)=-\{ \beta \in \Delta_n^+ : \beta \le \phi_1+\cdots+\phi_{i-1}+2\phi_i+\cdots+2\phi_{l-2}+\phi_{l-1}+\phi_l\}$ for $\lambda= \omega_{i-1}-2\omega_1$ if $3\le i \le l-2,$ and $\lambda = -\omega_1$ if $i=2.$ Also $\Delta(\frak{u}_\lambda \cap \frak{p}_+)=\{ \beta \in \Delta_n^+ : \beta \ge \phi_1+\cdots+\phi_{j-1}+2\phi_j+\cdots+2\phi_{l-2}+\phi_{l-1}+\phi_l\}(2\le j \le i-1), \Delta(\frak{u}_\lambda \cap \frak{p}_-)=-\{ \beta \in \Delta_n^+ : \beta \le \phi_1+\cdots+\phi_{i-1}+2\phi_i+\cdots+2\phi_{l-2}+\phi_{l-1}+\phi_l\}$ for $\lambda= \omega_{i-1}+\omega_j-3\omega_1.$ Thus the number of equivalence classes of irreducible unitary representations with non-zero $(\frak{g}, K)$-cohomology for which 
$\Delta(\frak{u}_\lambda \cap \frak{p}_-) = -\{\beta \in \Delta_n^+ : \beta \le \phi_1+\cdots+\phi_{i-1}+2\phi_i+\cdots+2\phi_{l-2}+\phi_{l-1}+\phi_l\}(2\le i \le l-2),$ is $i-1.$\ Hence $A=(l+2)+(l+1)+\cdots+5+4+l+l+(l-1)+(l-2)+(l-3)+\cdots+1=l^2+4l-3.$ **$\frak{g} = \frak{\delta}_2:$** In this case $\Delta_n^+ =\{\phi_1,\phi_2\}.$ If $\lambda = 0,$ then $\Delta(\frak{u}_\lambda \cap \frak{p}_-), \Delta(\frak{u}_\lambda \cap \frak{p}_+)$ are empty. $\lambda = \omega_i(1\le i \le2)$ implies $\Delta(\frak{u}_\lambda \cap \frak{p}_-)=\phi, \Delta(\frak{u}_\lambda \cap \frak{p}_+)=\{\phi_i\}; \lambda = \omega_1+\omega_2$ implies $\Delta(\frak{u}_\lambda \cap \frak{p}_-)=\phi, \Delta(\frak{u}_\lambda \cap \frak{p}_+)=\{\phi_1,\phi_2\}; \lambda = -\omega_i(1\le i \le2)$ implies $\Delta(\frak{u}_\lambda \cap \frak{p}_-)=\{-\phi_i\}, \Delta(\frak{u}_\lambda \cap \frak{p}_+)=\phi; \lambda = \omega_1-\omega_2$ implies $\Delta(\frak{u}_\lambda \cap \frak{p}_-)=\{-\phi_2\}, \Delta(\frak{u}_\lambda \cap \frak{p}_+)=\{\phi_1\}; \lambda = \omega_2-\omega_1$ implies $\Delta(\frak{u}_\lambda \cap \frak{p}_-)=\{-\phi_1\}, \Delta(\frak{u}_\lambda \cap \frak{p}_+)=\{\phi_2\};$ and $\lambda = -\omega_1-\omega_2$ implies $\Delta(\frak{u}_\lambda \cap \frak{p}_+)=\phi, \Delta(\frak{u}_\lambda \cap \frak{p}_-)=\{-\phi_1,-\phi_2\}.$ Thus $A=9=l^2+4l-3.$ \(ii\) An irreducible unitary representation $\pi$ of $SO_0(2,m)$ with trivial infinitesimal character is a discrete series representation *if and only if* $\pi$ is unitarily equivalent to $A_\frak{b},$ where $\frak{b}$ is a Borel subalgebra of $\frak{g}$ containing $\frak{h} + \sum_{\alpha \in \Delta_\frak{k}^+} \frak{g}^\alpha.$ If $\frak{b}$ is a Borel subalgebra of $\frak{g}$ containing $\frak{h} + \sum_{\alpha \in \Delta_\frak{k}^+} \frak{g}^\alpha,$ then $\frak{b}=\frak{b}_\lambda=\frak{h} \oplus \frak{u}_\lambda$ for some linear function $\lambda$ on $\frak{h}_\mathbb{R}$ with $\langle \lambda,\alpha \rangle \neq 0$ for all $\alpha 
\in \Delta,$ and $\langle \lambda,\alpha \rangle > 0$ for all $\alpha \in \Delta_\frak{k}^+.$ Since $\langle \lambda,\beta \rangle \neq 0$ for all $\beta \in \Delta_n^+,$ for the irreducible unitary representation $A_\frak{b}$ we have $(-\Delta(\frak{u}_\lambda \cap \frak{p}_-)) \cup \Delta(\frak{u}_\lambda \cap \frak{p}_+) = \Delta_n^+.$ Conversely, suppose that $\lambda$ is a linear function on $\frak{h}_\mathbb{R}$ such that $\langle \lambda,\alpha \rangle \ge 0$ for all $\alpha \in \Delta_\frak{k}^+,$ and $(-\Delta(\frak{u}_\lambda \cap \frak{p}_-)) \cup \Delta(\frak{u}_\lambda \cap \frak{p}_+) = \Delta_n^+.$ Since $\langle \lambda,\alpha \rangle \ge 0$ for all $\alpha \in \Delta_\frak{k}^+,$ and $\langle \lambda,\beta \rangle \neq 0$ for all $\beta \in \Delta_n,$ we have $\lambda=\sum_{1\le i\le l} \frac{c_i\omega_i}{\langle \omega_i,\phi_i \rangle},$ where $c_1$ is a non-zero real number and $c_i$ is a non-negative real number for all $2 \le i \le l.$ If $c_1 >0,$ let $d_1=c_1;$ and $d_i =c_i \textrm{ if } c_i \neq 0,\textrm{and }d_i =1 \textrm{ if } c_i = 0;$ for all $2\le i \le l.$ If $c_1 <0,$ let $\{i: 2\le i \le l, c_i = 0\}=\{i_1,i_2, \ldots , i_k\},$ and $m_j =\textrm{max} \{ n_{\phi_{i_j}}(\beta): \beta \in \Delta_n^+, \langle \lambda,\beta \rangle < 0\}$ for all $1\le j \le k.$ Assume that $\frak{g} \neq \delta_2,$ and if $\frak{g}=\delta_l (l \ge 3),$ $-\Delta(\frak{u}_\lambda \cap \frak{p}_-) \neq \{\beta \in \Delta_n^+ : \beta \le \xi_i\},$ where $i=1,2.$ Then for $\beta \in -\Delta(\frak{u}_\lambda \cap \frak{p}_-)$ and $\beta' \in \Delta(\frak{u}_\lambda \cap \frak{p}_+)$ we have $\beta < \beta',$ and so $n_{\phi_{i_j}}(\beta) \le n_{\phi_{i_j}}(\beta')$ for all $1 \le j \le k.$ In this case, if $c_1<0,$ let $d_1=c_1-\sum_{1\le j \le k}m_j;$ and $d_i =c_i \textrm{ if } c_i \neq 0,\textrm{and }d_i =1 \textrm{ if } c_i = 0;$ for all $2\le i \le l.$ Let $\lambda'=\sum_{1\le i\le l} \frac{d_i\omega_i}{\langle \omega_i,\phi_i \rangle},$ for any $c_1 \in \mathbb{R}\setminus
\{0\}.$ Then $\langle \lambda',\alpha \rangle > 0$ for all $\alpha \in \Delta_\frak{k}^+;$ and if $\beta \in \Delta_n,\ \langle \lambda,\beta \rangle < 0 \implies \langle \lambda',\beta \rangle < 0,$ and $\langle \lambda,\beta \rangle > 0 \implies \langle \lambda',\beta \rangle > 0.$ Thus $\frak{q}_{\lambda'}$ is a Borel subalgebra of $\frak{g}$ containing $\frak{h} + \sum_{\alpha \in \Delta_\frak{k}^+} \frak{g}^\alpha,$ and $\Delta(\frak{u}_{\lambda'} \cap \frak{p}_-) = \Delta(\frak{u}_\lambda \cap \frak{p}_-), \Delta(\frak{u}_{\lambda'} \cap \frak{p}_+)=\Delta(\frak{u}_\lambda \cap \frak{p}_+).$ Thus $A_{\frak{q}_\lambda}$ is unitarily equivalent to $A_{\frak{q}_{\lambda'}},$ which is a discrete series representation with trivial infinitesimal character.\ Let $\frak{g}=\delta_l (l \ge 3),$ and $-\Delta(\frak{u}_\lambda \cap \frak{p}_-) = \{\beta \in \Delta_n^+ : \beta \le \xi_i\},$ where $i=1,2;$ that is $-\Delta(\frak{u}_\lambda \cap \frak{p}_-) = \{\beta \in \Delta_n^+ : \beta \le \phi_1 +\cdots + \phi_{l-2}+\phi_a \}(a=l-1,l).$ Then $\Delta(\frak{u}_\lambda \cap \frak{p}_+) = \{\beta \in \Delta_n^+ : \beta \ge \phi_1 +\cdots + \phi_{l-2}+\phi_b \},$ where $b=l \textrm{ if }a=l-1, \textrm{and } b=l-1 \textrm{ if }a=l.$ Clearly $\langle \lambda , \phi_b \rangle > 0,$ that is $c_b>0.$ If $c_a=0,$ let $d_1=c_1-\sum_{1\le j \le k}m_j; d_a =c_a+1; d_b=c_b+1;$ and for all $2\le i \le l-2,\ d_i =c_i \textrm{ if } c_i \neq 0,\textrm{and }d_i =1 \textrm{ if } c_i = 0.$ If $c_a \neq 0,$ let $d_1=c_1-\sum_{1\le j \le k}m_j;$ and $d_i =c_i \textrm{ if } c_i \neq 0,\textrm{and }d_i =1 \textrm{ if } c_i = 0;$ for all $2\le i \le l.$ Let $\lambda'=\sum_{1\le i\le l} \frac{d_i\omega_i}{\langle \omega_i,\phi_i \rangle}.$ Since $\beta \in -\Delta(\frak{u}_\lambda \cap \frak{p}_-), \beta' \in \Delta(\frak{u}_\lambda \cap \frak{p}_+) \implies n_{\phi_{i_j}}(\beta) \le n_{\phi_{i_j}}(\beta')$ for all $1 \le j \le k,i_j \neq l-1,l;$ we have $\langle \lambda',\alpha \rangle > 0$ for
all $\alpha \in \Delta_\frak{k}^+;$ and if $\beta \in \Delta_n,\ \langle \lambda,\beta \rangle < 0 \implies \langle \lambda',\beta \rangle < 0,$ and $\langle \lambda,\beta \rangle > 0 \implies \langle \lambda',\beta \rangle > 0.$ As above $A_{\frak{q}_\lambda}$ is unitarily equivalent to $A_{\frak{q}_{\lambda'}},$ which is a discrete series representation with trivial infinitesimal character.\ Let $\frak{g}=\frak{\delta}_2.$ Since $(-\Delta(\frak{u}_\lambda \cap \frak{p}_-)) \cup \Delta(\frak{u}_\lambda \cap \frak{p}_+) = \Delta_n^+,$ the candidates of $(-\Delta(\frak{u}_\lambda \cap \frak{p}_-), \Delta(\frak{u}_\lambda \cap \frak{p}_+))$ are $(\phi, \{\phi_1,\phi_2\}), (\{\phi_1\},\{\phi_2\}),(\{\phi_2\},\{\phi_1\}),$ and $(\{\phi_1,\phi_2\},\phi).$ The corresponding $\lambda'$ are $\omega_1+\omega_2,-\omega_1+\omega_2,\omega_1-\omega_2,-\omega_1-\omega_2$ respectively. Then $A_{\frak{q}_\lambda}$ is unitarily equivalent to $A_{\frak{q}_{\lambda'}},$ which is a discrete series representation with trivial infinitesimal character. The Blattner parameter of the discrete series representation $A_{\frak{b}_\lambda},$ where $\frak{b}_\lambda = \frak{h} \oplus \frak{u}_\lambda$ for some linear function $\lambda$ on $\frak{h}_\mathbb{R}$ with $\langle \lambda,\alpha \rangle \neq 0$ for all $\alpha \in \Delta,$ and $\langle \lambda,\alpha \rangle > 0$ for all $\alpha \in \Delta_\frak{k}^+,$ is $\sum_{\beta \in \Delta(\frak{u}_\lambda \cap \frak{p}_-) \cup \Delta(\frak{u}_\lambda \cap \frak{p}_+)} \beta.$ If $m \neq 2, \frak{g}$ is simple and in this case the discrete series representation $A_{\frak{b}_\lambda}$ is a holomorphic discrete series representation *if and only if* the Blattner parameter is $\sum_{\beta \in \Delta_n^+} \beta,$ or $\sum_{\beta \in \Delta_n^-} \beta;$ that is $\Delta(\frak{u}_\lambda \cap \frak{p}_+)$ is either $\Delta_n^+$ or empty.
For since $\frak{g}$ is simple, the only Borel-de Siebenthal positive root systems containing $\Delta_\frak{k}^+$ are $\Delta_\frak{k}^+ \cup \Delta_n^+,$ and $\Delta_\frak{k}^+ \cup \Delta_n^-.$ Hence the number of equivalence classes of holomorphic discrete series representations of $SO_0(2,m)(m \neq 2)$ with trivial infinitesimal character is $2.$ If $\frak{g}=\frak{\delta}_2,$ any positive root system is a Borel-de Siebenthal positive root system, and so any discrete series representation with trivial infinitesimal character is holomorphic. Let $\frak{g}=\frak{b}_l(l \ge 2).$ The candidates of $(\Delta(\frak{u}_\lambda \cap \frak{p}_-), \Delta(\frak{u}_\lambda \cap \frak{p}_+))$ for which $(-\Delta(\frak{u}_\lambda \cap \frak{p}_-)) \cup \Delta(\frak{u}_\lambda \cap \frak{p}_+) = \Delta_n^+,$ are $(\phi ,\{ \beta \in \Delta_n^+ : \beta \ge \phi_1\}), (-\{ \beta \in \Delta_n^+ : \beta \le \phi_1 +\cdots + \phi_i \}, \{ \beta \in \Delta_n^+ : \beta \ge \phi_1 +\cdots + \phi_{i+1}\}) (1\le i \le l-1), (-\{ \beta \in \Delta_n^+ : \beta \le \phi_1 +\cdots + \phi_l \}, \{ \beta \in \Delta_n^+ : \beta \ge \phi_1 +\cdots + \phi_{l-1}+2\phi_l \}), (-\{ \beta \in \Delta_n^+ : \beta \le \phi_1 +\cdots + \phi_{i-1}+2\phi_i + \cdots+2\phi_l \}, \{ \beta \in \Delta_n^+ : \beta \ge \phi_1 +\cdots + \phi_{i-2}+2\phi_{i-1} + \cdots+2\phi_l \}) (3 \le i \le l), (-\{ \beta \in \Delta_n^+ : \beta \le \phi_1+2\phi_2 + \cdots+2\phi_l \}, \phi ).$ Thus the number of equivalence classes of discrete series representations of $SO_0(2,m)$ with trivial infinitesimal character is $2l \textrm{ if }m=2l-1, l \ge 2.$\ If $\frak{g}=\frak{b}_1,$ the candidates of $(\Delta(\frak{u}_\lambda \cap \frak{p}_-), \Delta(\frak{u}_\lambda \cap \frak{p}_+))$ for which $(-\Delta(\frak{u}_\lambda \cap \frak{p}_-)) \cup \Delta(\frak{u}_\lambda \cap \frak{p}_+) = \Delta_n^+,$ are $(\phi ,\{\phi_1\}), (\{-\phi_1\},\phi)$.
Thus the number is $2.$\ Let $\frak{g}=\frak{\delta}_l(l \ge 3).$ The candidates of $(\Delta(\frak{u}_\lambda \cap \frak{p}_-), \Delta(\frak{u}_\lambda \cap \frak{p}_+))$ for which $(-\Delta(\frak{u}_\lambda \cap \frak{p}_-)) \cup \Delta(\frak{u}_\lambda \cap \frak{p}_+) = \Delta_n^+,$ are $(\phi ,\{ \beta \in \Delta_n^+ : \beta \ge \phi_1\}), (-\{ \beta \in \Delta_n^+ : \beta \le \phi_1 +\cdots + \phi_i \}, \{ \beta \in \Delta_n^+ : \beta \ge \phi_1 +\cdots + \phi_{i+1}\}) (1\le i \le l-3,l\ge 4), (-\{ \beta \in \Delta_n^+ : \beta \le \phi_1 +\cdots + \phi_{l-2} \}, \{ \beta \in \Delta_n^+ : \beta \ge \xi_1 \textrm{ or } \xi_2\}), (-\{ \beta \in \Delta_n^+ : \beta \le \xi_1 \}, \{ \beta \in \Delta_n^+ : \beta \ge \xi_2\}), (-\{ \beta \in \Delta_n^+ : \beta \le \xi_2 \}, \{ \beta \in \Delta_n^+ : \beta \ge \xi_1\}), (-\{ \beta \in \Delta_n^+ : \beta \le \xi_1 \textrm{ or } \xi_2\}, \{ \beta \in \Delta_n^+ : \beta \ge \phi_1 +\cdots + \phi_l \}), (-\{ \beta \in \Delta_n^+ : \beta \le \phi_1 +\cdots + \phi_l \}, \{ \beta \in \Delta_n^+ : \beta \ge \phi_1 +\cdots +\phi_{l-3}+2\phi_{l-2}+ \phi_{l-1}+\phi_l \}), (-\{ \beta \in \Delta_n^+ : \beta \le \phi_1 +\cdots + \phi_{i-1}+2\phi_i + \cdots+2\phi_{l-2}+\phi_{l-1}+\phi_l \}, \{ \beta \in \Delta_n^+ : \beta \ge \phi_1 +\cdots + \phi_{i-2}+2\phi_{i-1} + \cdots+2\phi_{l-2}+\phi_{l-1}+\phi_l \}) (3 \le i \le l-2,l\ge 4), (-\{ \beta \in \Delta_n^+ : \beta \le \phi_1+2\phi_2 + \cdots+2\phi_{l-2}+\phi_{l-1}+\phi_l \}, \phi ).$ Thus the number of equivalence classes of discrete series representations of $SO_0(2,m)$ with trivial infinitesimal character is $2l \textrm{ if }m=2l-2, l \ge 3.$\ If $\frak{g}=\frak{\delta}_2,$ then obviously the number is $4.$ **Remark 3**. 
*Note that the set of all Hodge types $(R_+(\frak{q}), R_-(\frak{q}))$ of irreducible unitary representations $A_\frak{q}$ is given by $$\ \begin{cases} \{(i,j): i,j \in \mathbb{N}\cup \{0\}, l \le i+j \le |\Delta_n^+|\}\cup\{(i,i): 0 \le i \le [\frac{|\Delta_n^+|}{2}]\} & \textrm{if } \frak{g}=\frak{b}_l, \\ \{(i,j): i,j \in \mathbb{N}\cup \{0\}, l-1 \le i+j \le |\Delta_n^+|\}\cup\{(i,i): 0 \le i \le [\frac{|\Delta_n^+|}{2}]\} & \textrm{if } \frak{g}=\frak{\delta}_l; \\ \end{cases}$$ where $|\Delta_n^+|$ is the number of roots in $\Delta_n^+.$ The discrete series representations with trivial infinitesimal character correspond to the set $\{(i,j): i,j \in \mathbb{N}\cup \{0\}, i+j = |\Delta_n^+|\}.$* # Poincaré polynomials of cohomologies of the irreducible unitary representations with non-zero $(\frak{g},K)$-cohomology of the Lie group $SO_0(2,m)$ To determine the Poincaré polynomial of $H^*(\frak{g},K;A_{\frak{q},K}),$ we need to determine the spaces $Y_\frak{q},$ and to do so we need to analyze the $\theta$-stable parabolic subalgebras containing $\frak{h} \oplus \sum_{\alpha \in \Delta_\frak{k}^+}\frak{g}^{\alpha}$ more closely. Note that a parabolic subalgebra of $\frak{g}$ is $\theta$-stable *if and only if* it contains a maximal abelian subspace of $\frak{k}_0.$ Thus a parabolic $\frak{q}$ subalgebra of $\frak{g}$ which contains $\frak{h} \oplus \sum_{\alpha \in \Delta_\frak{k}^+}\frak{g}^{\alpha},$ is $\theta$-stable. 
So there exists a positive root system $\Delta_\frak{q}^+$ of $\Delta$ containing $\Delta_\frak{k}^+$ and a subset $\Gamma$ of $\Phi_\frak{q},$ the set of all simple roots in $\Delta_\frak{q}^+,$ such that $\frak{q}= \frak{l} \oplus \frak{u},$ where $\frak{l} = \frak{h} \oplus \sum_{n_\phi (\alpha) =0 \textrm{ for all } \phi \in \Gamma} \frak{g}^\alpha$ is the Levi subalgebra of $\frak{q},$ and $\frak{u} = \sum_{n_\phi (\alpha) >0 \textrm{ for some } \phi \in \Gamma} \frak{g}^\alpha$ is the nilradical of $\frak{q};$ where $\alpha =\sum_{\phi \in \Phi_\frak{q}} n_\phi (\alpha) \phi \in \Delta.$ Note that the Levi subalgebra $\frak{l}$ is the direct sum of an $|\Gamma|$-dimensional centre and a semisimple Lie algebra whose Dynkin diagram is the subdiagram of the Dynkin diagram of $\frak{g}$ consisting of the vertices $\Phi_\frak{q} \setminus \Gamma.$ If $\frak{q}$ is a parabolic subalgebra which contains $\frak{h} \oplus \sum_{\alpha \in \Delta_\frak{k}^+}\frak{g}^{\alpha},$ there are many positive root systems of $\Delta$ containing $\Delta_\frak{k}^+ \cup \Delta(\frak{u} \cap \frak{p}_-) \cup \Delta(\frak{u} \cap \frak{p}_+).$ For example, $\Delta_\frak{k}^+ \cup \Delta(\frak{u} \cap \frak{p}_-) \cup (\Delta_n^+\setminus (-\Delta(\frak{u} \cap \frak{p}_-)))$ is a positive root system of $\Delta,$ as we have seen in the proof of Th.
[Theorem 1](#th1){reference-type="ref" reference="th1"}(ii) that there exists a non-singular linear function $\lambda'$ on $\frak{h}_\mathbb{R}$ such that $\lambda'$ is dominant with respect to $\Delta_\frak{k}^+ \cup \Delta(\frak{u} \cap \frak{p}_-) \cup (\Delta_n^+\setminus (-\Delta(\frak{u} \cap \frak{p}_-))).$ We define $\Delta_\frak{q}^+= \Delta_\frak{k}^+ \cup \Delta(\frak{u} \cap \frak{p}_-) \cup (\Delta_n^+\setminus (-\Delta(\frak{u} \cap \frak{p}_-))).$ In the Tables [\[b-table\]](#b-table){reference-type="ref" reference="b-table"} and [\[d-table\]](#d-table){reference-type="ref" reference="d-table"}, we have determined $\Phi_\frak{q}, \Gamma, Y_\frak{q}$ for each $\theta$-stable parabolic subalgebra containing $\frak{h} \oplus \sum_{\alpha \in \Delta_\frak{k}^+}\frak{g}^{\alpha}.$ In the Tables [\[b-table\]](#b-table){reference-type="ref" reference="b-table"} and [\[d-table\]](#d-table){reference-type="ref" reference="d-table"}, we can see that $Y_\frak{q}$ is either singleton, or $\frac{SU(k)}{S(U(1)\times U(k-1))}(k \ge 2),$ or $\frac{SO(2k+1)}{SO(2)\times SO(2k-1)} (k \ge 1),$ or $\frac{SO(2k)}{SO(2)\times SO(2k-2)}(k \ge 2).$ We have $P(\textrm{singleton},t)=1,$ $P(\frac{SU(k)}{S(U(1)\times U(k-1))},t)=1+t^2+t^4+\cdots+t^{2k-2} \textrm{ for all }k \ge 2, P(\frac{SO(2k+1)}{SO(2)\times SO(2k-1)},t)=1+t^2+t^4+\cdots+t^{4k-2} \textrm{ for all }k \ge 1, P(\frac{SO(2k)}{SO(2)\times SO(2k-2)},t)=1+t^2+t^4+\cdots+t^{2k-4}+2t^{2k-2}+t^{2k}+\cdots+t^{4k-4} \textrm{ for all }k \ge 2.$ See [@ghv]. 
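As a quick sanity check on the four Poincaré polynomials just listed (our verification, independent of [@ghv]): each coefficient sequence is palindromic, as Poincaré duality requires, and the total Betti numbers $P(Y_\frak{q},1)$ come out to $k$, $2k$ and $2k$ respectively:

```python
def cp_coeffs(k):
    # P(SU(k)/S(U(1)xU(k-1)), t): coefficients at degrees 0, 2, ..., 2k-2.
    return [1] * k

def odd_quadric_coeffs(k):
    # P(SO(2k+1)/(SO(2)xSO(2k-1)), t): coefficients at degrees 0, 2, ..., 4k-2.
    return [1] * (2 * k)

def even_quadric_coeffs(k):
    # P(SO(2k)/(SO(2)xSO(2k-2)), t): degrees 0, 2, ..., 4k-4,
    # with the middle coefficient (the t^{2k-2} term) equal to 2.
    c = [1] * (2 * k - 1)
    c[k - 1] = 2
    return c

for k in range(2, 10):
    for c in (cp_coeffs(k), odd_quadric_coeffs(k), even_quadric_coeffs(k)):
        assert c == c[::-1]                 # Poincare duality: palindromic
    assert sum(cp_coeffs(k)) == k           # total Betti number of CP^{k-1}
    assert sum(odd_quadric_coeffs(k)) == 2 * k
    assert sum(even_quadric_coeffs(k)) == 2 * k
```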
Since $H^r (\frak{g}, K; A_{\frak{q}, K}) = H^{p,q} (\frak{g}, K; A_{\frak{q}, K}) \cong H^{p-R_+(\frak{q}),q-R_-(\frak{q})} (Y_\frak{q} ;\mathbb{C}),$ for unique non-negative integers $p,q$ with $p+q=r, p-q=R_+(\frak{q})-R_-(\frak{q});$ we write two variable Poincaré polynomial $P_\frak{q}(x,t)$ for $H^*(\frak{g},K;A_{\frak{q},K}),$ and the coefficient of the term $x^pt^q$ in $P_\frak{q}(x,t)$ is dim$( H^{p,q} (\frak{g}, K; A_{\frak{q}, K})).$ ## Poincaré polynomials of cohomologies of the irreducible unitary representations $A_\frak{q}$ of $SO_0(2,2)$ Let $\frak{g}=\frak{\delta}_2.$\ Note that $\Delta_n^+=\{\phi_1,\phi_2\}.$ Now $\Delta(\frak{u}\cap\frak{p}_-)=\phi, \Delta(\frak{u}\cap \frak{p}_+)=\phi \implies Y_\frak{q}=\frac{SO(4)}{SO(2)\times SO(2)}, P_\frak{q}(x,t)=1+2xt+x^2t^2;$\ $\Delta(\frak{u}\cap\frak{p}_-)=\phi, \Delta(\frak{u}\cap \frak{p}_+)=\{\phi_i\}(i=1,2) \implies Y_\frak{q}=\frac{SU(2)}{S(U(1)\times U(1))}, P_\frak{q}(x,t)=x+x^2t;$\ $\Delta(\frak{u}\cap\frak{p}_-)=\phi, \Delta(\frak{u}\cap \frak{p}_+)=\{\phi_1,\phi_2\} \implies Y_\frak{q}=$ singleton, $P_\frak{q}(x,t)=x^2;$\ $\Delta(\frak{u}\cap\frak{p}_-)=\{-\phi_i\}(i=1,2), \Delta(\frak{u}\cap \frak{p}_+)=\phi \implies Y_\frak{q}=\frac{SU(2)}{S(U(1)\times U(1))}, P_\frak{q}(x,t)=t+xt^2;$\ $\Delta(\frak{u}\cap\frak{p}_-)=\{-\phi_i\}(i=1,2), \Delta(\frak{u}\cap \frak{p}_+)=\{\phi_j\}(j=1,2,j\neq i) \implies Y_\frak{q}=$ singleton, $P_\frak{q}(x,t)=xt;$\ $\Delta(\frak{u}\cap\frak{p}_-)=\{-\phi_1,-\phi_2\}, \Delta(\frak{u}\cap \frak{p}_+)=\phi \implies Y_\frak{q}=$ singleton, $P_\frak{q}(x,t)=t^2.$\ # acknowledgement {#acknowledgement .unnumbered} Both authors acknowledge the financial support from the Department of Science and Technology (DST), Govt. of India under the Scheme \"Fund for Improvement of S&T Infrastructure (FIST)\" \[File No. SR/FST/MS-I/2019/41\]. Ankita Pal acknowledges the financial support from Council of Scientific and Industrial Research (CSIR) \[File No. 
08/155(0091)/2021-EMR-I\].

# References {#references .unnumbered}

Borel, A. Cohomologie de sous-groupes discrets et représentations de groupes semisimples. Bull. Soc. Math. France **32-33** (1976) 73--112.

Borel, Armand; Wallach, Nolan. *Continuous Cohomology, Discrete Subgroups, and Representations of Reductive Groups.* Ann. Math. Stud. **94**, Princeton Univ. Press, 1980. Second Ed.

Collingwood, D.H. A note on continuous cohomology for semisimple Lie groups. Math. Z. **189** (1985) 65--70.

Griffiths, P.; Harris, J. *Principles of Algebraic Geometry.* John Wiley, New York.

Greub, W.; Halperin, S.; Vanstone, R. *Connections, Curvature, and Cohomology Volume III: Cohomology of Principal Bundles and Homogeneous Spaces.* Pure and Applied Mathematics. Academic Press, New York, 1976.

Harish-Chandra. Representations of semisimple Lie groups. VI. Integrable and square-integrable representations. Amer. J. Math. **78** (1956) 564--628.

Knapp, A. W. *Representation Theory of Semisimple Groups. An Overview Based on Examples.* Princeton Landmarks in Mathematics. Princeton University Press, 2001.

Li, J.-S.; Schwermer, J. Constructions of automorphic forms and related cohomology classes for arithmetic subgroups of $G_2$. Comp. Math. **87** (1993) 45--78.

Matsushima, Y. A formula for the Betti numbers of compact locally symmetric Riemannian manifolds. J. Diff. Geom. **1** (1967) 99--109.

Millson, John J.; Raghunathan, M. S. Geometric construction of cohomology for arithmetic groups. I. Proc. Indian Acad. Sci. (Math. Sci.) **90** (1981) 103--123.

Mondal, Arghya; Sankaran, Parameswaran. Geometric cycles in compact locally Hermitian symmetric spaces and automorphic representations. Transformation Groups **24** (2019) 913--948.

Parthasarathy, R. A generalization of Enright-Varadarajan modules. Compositio Math. **36** (1978) 53--73.

Paul, P. Geometric cycles in compact Riemannian locally symmetric spaces of type IV and automorphic representations of complex simple Lie groups. Journal of Lie Theory **30** (2020) 851--908.
Salamanca-Riba, S. On the unitary dual of some classical Lie groups. Comp. Math. **68** (1988) 251--303.

Vogan, David. Unitarizability of certain series of representations. Ann. Math. **120** (1984) 141--187.

Vogan, David. Cohomology and group representations. Proc. Symp. Pur. Math. **61** (1997) 219--234, Amer. Math. Soc., Providence, RI.

Vogan, David A., Jr.; Zuckerman, Gregg J. Unitary representations with nonzero cohomology. Compositio Math. **53** (1984) no. 1, 51--90.
--- author: - Kaushik Bal and Sanjit Biswas bibliography: - ref1.bib nocite: "[@*]" title: Semilinear degenerate elliptic equation in the presence of singular nonlinearity --- # ABSTRACT {#abstract .unnumbered} Let $\Omega\subseteq\mathbb{R}^{1+m}$ be a smooth bounded domain and let $f$ be a nonnegative measurable function defined on $\Omega$ with suitable summability. In this paper, we study the existence and regularity of solutions to the semilinear degenerate elliptic equation with a singular nonlinearity given by: $$\begin{aligned} -\Delta_\lambda u&=\frac{f}{u^{\nu}} \text{ in }\Omega\nonumber\\ &u>0 \text{ in } \Omega\nonumber\\ &u=0 \text{ on } \partial\Omega\nonumber\end{aligned}$$ where $\Delta_\lambda$, known as the Grushin operator, is given by $$\Delta_\lambda{u}=u_{xx}+|x|^{2\lambda}\Delta_y{u};\,(x,y)\in \mathbb{R}\times\mathbb{R}^m.$$ # INTRODUCTION {#sec1} In this paper, we are interested in the semilinear elliptic problem whose model is given by $$\begin{aligned} \label{maineq} -\Delta_\lambda u&=\frac{f}{u^\nu} \text{ in }\Omega\\ &u>0 \text{ in } \Omega\\ &u=0 \text{ on } \partial\Omega\end{aligned}$$ where $\Delta_\lambda$, known as the Grushin operator, is given by $$\Delta_\lambda{u}=u_{xx}+|x|^{2\lambda}\Delta_y{u};\;\lambda\geq 0,$$ and $\Delta_y$ denotes the Laplacian with respect to the $y$ variable. $\Omega\subseteq\mathbb{R}^{1+m}$ is a $\Lambda-$connected bounded open set (definition provided in the next section) and $X=(x,y)\in\Omega$, $x\in\mathbb{R}$, $y=(y_1,y_2,...,y_m)\in\mathbb{R}^m$, $m\geq1$. Here $\nu>0$ is a real number, and $f$ is a nonnegative measurable function lying in some Lebesgue space.\ To understand the context of our study, we start by looking at the available literature concerning ([\[maineq\]](#maineq){reference-type="ref" reference="maineq"}). Starting with the now classical work by Crandall et al.
[@CRT], where the case $\lambda=0$ with $f\in C^{\alpha}(\Omega)$ was considered and shown to have a unique solution in $C^2(\Omega)\cap C(\bar{\Omega})$ behaving like some power of the distance function near the boundary, a plethora of work followed. Of particular significance is the work of Lazer-McKenna, where the solution was shown to exist in $H_0^1(\Omega)$ if and only if $0<\nu<3$. When $f\in L^1(\Omega)$, Boccardo and Orsina [@BLO] proved that if $0<\nu\leq 1$ then there exists a solution of ([\[maineq\]](#maineq){reference-type="ref" reference="maineq"}) in $H^1_0(\Omega)$, and that for $\nu>1$ there exists a solution $u\in H^1_{loc}(\Omega)$ such that $u^\frac{\nu+1}{2}\in H^1_0(\Omega)$, among other regularity results. The $p$-Laplacian case was settled in [@CST], where existence, uniqueness, and some regularity results were proved. In this paper, we would like to revisit equation ([\[maineq\]](#maineq){reference-type="ref" reference="maineq"}) by replacing the Laplacian with a degenerate elliptic operator whose prototype is the Grushin Laplacian $\Delta_\lambda$. We will prove existence and regularity results analogous to [@BLO]. It is worth pointing out that there are several issues when degeneracy is introduced. If the distance between the domain $\Omega$ and the plane $x=0$ is positive, then the Grushin operator becomes uniformly elliptic in $\Omega$, and in this case, the problem is settled in [@BLO]. We assume the domain $\Omega$ intersects the $x=0$ plane, thus degenerating the operator in $\Omega$. To handle this kind of degeneracy, assuming that $\Delta_\lambda$ admits a uniformly elliptic direction, we discuss the solvability of ([\[maineq\]](#maineq){reference-type="ref" reference="maineq"}) in the weighted degenerate Sobolev space $H^{1,\lambda}(\Omega)$ which is defined in [@FS; @FL1].
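Before setting up the functional framework, it may help to see the degenerate problem numerically. The following finite-difference sketch is purely illustrative (the grid size, sweep count, and parameters $\lambda=1$, $m=1$, $\nu=1$, $n=1$, $f\equiv 1$ on $\Omega=(-1,1)^2$ are our assumptions, not choices made in this paper); it runs Jacobi sweeps for the approximating problem $-\Delta_\lambda u=f/(u+\tfrac1n)^\nu$ that appears in Section 4, and the iterates stay bounded and become strictly positive in the interior, in line with the results proved below:

```python
# Illustrative sketch only: lambda = 1, m = 1, nu = 1, f = 1, n = 1,
# on Omega = (-1,1)^2; Jacobi sweeps for  -u_xx - x^2 u_yy = f/(u + 1/n)^nu.
M = 21                               # grid points per direction (hypothetical)
h = 2.0 / (M - 1)
xs = [-1.0 + i * h for i in range(M)]
u = [[0.0] * M for _ in range(M)]    # u = 0 on the boundary, kept fixed

n, nu = 1.0, 1.0
for sweep in range(200):
    new = [row[:] for row in u]
    for i in range(1, M - 1):
        w = xs[i] ** 2               # degenerate coefficient |x|^{2 lambda}
        for j in range(1, M - 1):
            rhs = 1.0 / (u[i][j] + 1.0 / n) ** nu
            new[i][j] = (u[i + 1][j] + u[i - 1][j]
                         + w * (u[i][j + 1] + u[i][j - 1])
                         + h * h * rhs) / (2.0 + 2.0 * w)
    u = new

interior = [u[i][j] for i in range(1, M - 1) for j in range(1, M - 1)]
assert min(interior) > 0.0   # strict positivity inside the domain
assert max(interior) < 2.0   # the iterates stay bounded
```

On the line $x=0$ the $y$-coupling vanishes entirely, so the scheme there reduces to a one-dimensional problem in $x$; this is the discrete shadow of the degeneracy discussed above.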
We would also need a notion of convergence of sequences in the space $H^{1,\lambda}(\Omega)$, for which Monticelli-Payne [@MP] introduced the concept of a quasi-gradient, thereby providing a proper representation of elements of $H^{1,\lambda}(\Omega)$. Another issue is the lack of availability of the Strong Maximum Principle, which we show to hold using the weak Harnack inequality of Franchi-Lanconelli [@FL2 Theorem 4.3], valid for the $d$-metric on $\Omega$, provided $\lambda\geq 1$ and assuming that $\Omega$ is $\Lambda-$connected (the definition is provided in the next section). We conclude our study with a brief discussion of how a singular variable exponent for the Grushin Laplacian may be handled; the Laplacian counterpart can be found in Garain-Mukherjee [@PT]. For further reading into the topic, one may look at the papers [@BBG; @BG1; @BG; @BGM; @MD] and the references therein.\ **Notation 1**. *Throughout the paper, if not explicitly stated, $C$ will denote a positive real number depending only on $\Omega$ and $N$, whose value may change from line to line. We denote by $\langle.,.\rangle$ the Euclidean inner product on $\mathbb{R}^N$ and denote by $|A|:=\sup_{|\xi|=1}\langle A\xi,\xi\rangle$ the norm of a real, symmetric $N\times N$ matrix $A$. The Lebesgue measure of $S\subset \mathbb{R}^N$ is denoted by $|S|$. The Hölder conjugate of $r\geq 1$ is denoted by $r'$.* This paper is organized into seven sections. Section [2](#sec2){reference-type="ref" reference="sec2"} discusses the functional analytic setting related to our problem and a few related results. We state our main results in section [3](#sec3){reference-type="ref" reference="sec3"}. Sections [4](#sec4){reference-type="ref" reference="sec4"} and [5](#sec5){reference-type="ref" reference="sec5"} are devoted to proving a few auxiliary results. We prove our main results in section [6](#sec6){reference-type="ref" reference="sec6"}.
Finally, in section [7](#sec7){reference-type="ref" reference="sec7"}, we consider the variable singular exponent case. # PRELIMINARIES AND A FEW USEFUL RESULTS {#sec2} We define a few crucial notions and recall the metric introduced in Franchi-Lanconelli [@FL2]. **Definition 1**. *An open subset $\Omega(\subset\mathbb{R}^N)$ is said to be $\Lambda-$connected if for every $X,Y\in\Omega$, there exists a continuous curve lying in $\Omega$ which is piecewise an integral curve of the vector fields $\pm \partial_x,\pm|x|^{\lambda}\partial_{y_1},...,\pm|x|^{\lambda}\partial_{y_m}$ connecting $X$ and $Y$.* Note that every $\Lambda-$connected open set in $\mathbb{R}^N$ is connected. We denote by $P(\Lambda)$ the set of all continuous curves which are piecewise integral curves of the vector fields $\pm \partial_x,\pm|x|^{\lambda}\partial_{y_1},...,\pm|x|^{\lambda}\partial_{y_m}$. Let $\gamma:[0,T]\to\Omega$ be an element of $P(\Lambda)$, and define $l(\gamma)=T$. **Definition 2**. *Let $X,Y\in\Omega$. We define a metric $d$ on $\Omega$ by $d(X,Y)=\inf\{l(\gamma):\gamma\in P(\Lambda) \text{ connecting } X \text{ and } Y\}$.* The $d$-ball around $X\in\Omega$ with radius $r>0$ is denoted by $S_d(X,r)$ and is given by $S_d(X,r)=\{Y\in\Omega : d(X,Y)<r\}$. The results of [@FL1] ensure that the usual metric is equivalent to $d$ in $\Omega$. Let $N=k+m$ and $\Omega\subseteq\mathbb{R}^N$ be a bounded domain.
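A heuristic for the metric $d$ just introduced (our illustration; it is not used in the sequel): since motion in the $y$ directions has speed $|x|^{\lambda}$, which vanishes on $\{x=0\}$, joining $(0,0)$ to $(0,b)$ requires a dog-leg path — move to $|x|=\varepsilon$, travel in $y$, and return — at a cost of roughly $2\varepsilon+b\varepsilon^{-\lambda}$. Minimizing over $\varepsilon$ yields an upper bound of order $b^{1/(\lambda+1)}$, the familiar anisotropic scaling. A quick numerical check against the closed-form minimizer $\varepsilon_*=(\lambda b/2)^{1/(\lambda+1)}$:

```python
def dogleg_cost(eps, b, lam):
    # time for (0,0) -> (eps,0) -> (eps,b) -> back: y-speed is eps^lam
    return 2.0 * eps + b / eps ** lam

def grid_min(b, lam, lo=1e-3, hi=5.0, steps=200000):
    # brute-force minimization of the dog-leg cost over eps
    best = float("inf")
    for k in range(1, steps + 1):
        eps = lo + (hi - lo) * k / steps
        best = min(best, dogleg_cost(eps, b, lam))
    return best

lam, b = 1.0, 0.5
eps_star = (lam * b / 2.0) ** (1.0 / (lam + 1.0))   # closed-form optimum
exact = dogleg_cost(eps_star, b, lam)
approx = grid_min(b, lam)
assert abs(approx - exact) < 1e-3
# the optimal cost scales like b^{1/(lam+1)}: the ratio is constant in b
for bb in (0.1, 0.2, 0.4):
    ratio = grid_min(bb, lam) / bb ** (1.0 / (lam + 1.0))
    assert abs(ratio - exact / b ** 0.5) < 1e-2
```

This only bounds $d$ from above along one family of paths, but it already shows why balls $S_d(X,r)$ centred on $\{x=0\}$ are strongly anisotropic.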
Let $A=\left(\begin{array}{cc} I_k & O\\ O & |x|^{2\lambda}I_m \end{array}\right)$ and define the set $$V_A(\Omega)=\{u\in C^1(\Omega) | \int_\Omega |u|^p\,dX + \int_\Omega \langle A\nabla u,\nabla u\rangle^\frac{p}{2}\,dX <\infty\}$$ Consider the normed linear spaces $(V_A(\Omega),\|.\|)$ and $(C^1_0(\Omega),\|.\|_0)$ where $$\|u\|=(\int_\Omega |u|^p\,dX + \int_\Omega \langle A\nabla u,\nabla u\rangle^\frac{p}{2}\,dX)^\frac{1}{p}$$ and $$\|u\|_0=(\int_\Omega \langle A\nabla u,\nabla u\rangle^\frac{p}{2}\,dX)^\frac{1}{p}$$ Now $W^{1,\lambda,p}(\Omega)$ and $W^{1,\lambda,p}_0(\Omega)$ are defined as the completions of $(V_A(\Omega),\|.\|)$ and $(C^1_0(\Omega),\|.\|_0)$ respectively. Each element $[\{u_n\}]$ of the Banach space $W^{1,\lambda,p}(\Omega)$ is an equivalence class of Cauchy sequences in $(V_A(\Omega),\|.\|)$, and $\|[\{u_n\}]\|=\lim_{n\to\infty}\|u_n\|$. A function $u$ is said to be in $W^{1,\lambda,p}_{loc}(\Omega)$ if and only if $u\in W^{1,\lambda,p}(\Omega')$ for every $\Omega'\Subset\Omega$. For more information, one can look into Monticelli-Payne [@MP].\ The following theorem proves that $\|.\|_0$ and $\|.\|$ are equivalent norms on $W^{1,\lambda,p}_0(\Omega)$. **Theorem 1**. *(Monticelli-Payne [@MP]) Let $\Omega\subset \mathbb{R}^N$ be a bounded domain, and let $A$ be given as above. Then for any $1\leq p<\infty$ there exists a constant $C_p=C(N,p,\|A\|_\infty,d(\Omega))>0$ such that $$\|u\|_{L^p(\Omega)}^p \leq C_p\int_\Omega \langle A\nabla u,\nabla u\rangle^\frac{p}{2}\; dX\;\mbox{ for all $u\in C^1_0(\Omega)$}$$\ where $d(\Omega)$ denotes the diameter of $\Omega$.* Now the suitable representation of an element of $W^{1,\lambda,p}(\Omega)$ and $W^{1,\lambda,p}_0(\Omega)$ is given by the following theorem, whose proof follows exactly that of Monticelli-Payne, where it is done for $p=2$. **Theorem 2**. *(Monticelli-Payne [@MP])[\[REP\]]{#REP label="REP"} Let $\Omega \subset \mathbb{R}^N$ be a bounded open set, and let $A$ be given as above.
Then for every $[\{u_n\}]\in W^{1,\lambda,p}(\Omega)$ there exist unique $u\in L^p(\Omega)$ and $U\in (L^p(\Omega))^N$ such that the following properties hold* 1. *$u_n\rightarrow u$ in $L^p(\Omega)$ and $\sqrt A \nabla u_n\rightarrow U$ in $(L^p(\Omega))^N$.* 2. *$\sqrt{A}^{-1}U$ is the weak gradient of $u$ in each of the components of $\Omega\setminus\Sigma$.* 3. *If $|[\sqrt{A}]^{-1}|\in L^{p'}(\Omega)$ then $[\sqrt{A}]^{-1}U$ is the weak gradient of $u$ in $\Omega$.* 4. *One has $$\|[\{u_n\}]\|^p=\|u\|_{L^p(\Omega)}^p+\|U\|_{(L^p(\Omega))^N}^p$$* *where $\Sigma=\{X\in\Omega : \text{det}[A(X)]=0\}$, $p'=\frac{p}{p-1}$.* *Proof.* Let $[\{u_n\}]\in W^{1,\lambda,p}(\Omega)$. Then $\{u_n\}$ is a Cauchy sequence in $(V_A,\|.\|)$, so $\{u_n\}$ and $\{\sqrt{A}\nabla u_n\}$ are Cauchy in $L^p(\Omega)$ and $(L^p(\Omega))^N$ respectively. Hence there exist $u\in L^p(\Omega)$ and $U\in (L^p(\Omega))^N$ such that $u_n\to u$ in $L^p(\Omega)$ and $\sqrt{A}\nabla u_n\to U$ in $(L^p(\Omega))^N$ as $n\to\infty$. Suppose $[\{u_n\}]=[\{v_n\}]$ and $\sqrt{A}\nabla u_n\to U$, $\sqrt{A}\nabla v_n\to V$ in $(L^p(\Omega))^N$ as $n\to\infty$. Then $$\begin{aligned} \|U-V\|_{L^p(\Omega)^N}&\leq \|\sqrt{A}\nabla u_n-U\|_{L^p(\Omega)^N}+ \|\sqrt{A}\nabla u_n-\sqrt{A}\nabla v_n\|_{L^p(\Omega)^N}+ \|\sqrt{A}\nabla v_n-V\|_{L^p(\Omega)^N}\\ &\to 0 \text{ as $n\to\infty$} \end{aligned}$$ which implies $U=V$ a.e. in $\Omega$. So $U$ does not depend on the representative of the class $[\{u_n\}]$. Let $\phi\in C^\infty_0(\Omega)$. Since $u_n\to u$ in $L^p(\Omega)$, $u_n$ converges to $u$ in the distributional sense as well.
As $u_n\in C^1(\Omega)$, $$\int_\Omega u_n \nabla \phi\, dx=-\int_\Omega \phi\nabla u_n\, dx$$ Taking the limit $n\to\infty$, we have $$\int_\Omega u \nabla \phi\, dx=-\lim_{n\to \infty}\int_\Omega \phi\nabla u_n\, dx= -\lim_{n\to \infty}\int_\Omega \phi\sqrt{A}^{-1}\sqrt{A}\nabla u_n\, dx$$ Hence if $|\phi\sqrt{A}^{-1}|\in L^{p'}(\Omega)$ then $$\begin{aligned} \label{San1} \int_\Omega u \nabla \phi\, dx=-\int_\Omega \phi\sqrt{A}^{-1} U\, dx \end{aligned}$$ If the support of $\phi$ is contained in a component of $\Omega\setminus\Sigma$ then $|\phi\sqrt{A}^{-1}|\in L^{p'}(\Omega)$. By using ([\[San1\]](#San1){reference-type="ref" reference="San1"}) we can conclude that $\sqrt{A}^{-1}U$ is the weak gradient of $u$ in that component of $\Omega\setminus\Sigma$. Hence (ii) is proved. Also, if $|\sqrt{A}^{-1}|\in L^{p'}(\Omega)$ then ([\[San1\]](#San1){reference-type="ref" reference="San1"}) is true for every $\phi\in C^\infty_0(\Omega)$. So $\sqrt{A}^{-1}U$ is the weak gradient of $u$ in $\Omega$, which proves (iii).\ For $[\{u_n\}]\in W^{1,\lambda,p}(\Omega)$, $$\begin{aligned} \|[\{u_n\}]\|^p=\lim_{n\to\infty} (\|u_n\|_{L^p(\Omega)}^p+\|\sqrt{A}\nabla u_n\|_{L^p(\Omega)^N}^p)=(\|u\|_{L^p(\Omega)}^p+\|U\|_{L^p(\Omega)^N}^p) \end{aligned}$$ Hence (iv) is proved. ◻ Using the above theorem, we have the following embedding theorem. **Corollary 3**. *The space $W^{1,\lambda,p}(\Omega)$ is continuously embedded into $L^p(\Omega)$.* *Proof.* Define the map $T:W^{1,\lambda,p}(\Omega)\to L^p(\Omega)$ by $T([\{u_n\}])=u$. Then $T$ is a bounded linear map.\ Claim: $T$ is injective.\ Let $u=0$. If we can prove $U=0$, then we are done. Since $\Sigma$ has measure zero, it suffices to prove that $U=0$ a.e. in each component of $\Omega\setminus\Sigma$. Let $\Omega'$ be a component of $\Omega\setminus\Sigma$.
By the above theorem, for every $\phi\in C^\infty_0(\Omega')$ $$\begin{aligned} \int_{\Omega'} \phi\sqrt{A}^{-1} U dx=-\int_{\Omega'} u \nabla \phi dx=0 \end{aligned}$$ which ensures that $\sqrt{A}^{-1}U=0$ a.e. in $\Omega'$. So $U=0$ a.e. in $\Omega'$. ◻ Henceforth we use the notation $u$ for the element $[\{u_n\}]\in W^{1,\lambda,p}(\Omega)$ or $[\{u_n\}]\in W^{1,\lambda,p}_0(\Omega)$ which is determined in Theorem ([\[REP\]](#REP){reference-type="ref" reference="REP"}). Using the properties of $U\in(L^p(\Omega))^N$ in the theorem we introduce the following definition: **Definition 3**. *For $u\in W^{1,\lambda,p}(\Omega)$ we denote the weak quasi gradient of $u$ by $\nabla^* u$, defined by $$\nabla^* u:=(\sqrt A)^{-1} U$$ which is a vector-valued function defined almost everywhere in $\Omega$.* Also for $u\in W^{1,\lambda,p}(\Omega)$, $$\begin{aligned} \|u\|^p&=\|u\|_{L^p(\Omega)}^p+\|\sqrt{A}\nabla^*u\|_{L^p(\Omega)}^p\\ &=\int_\Omega|u|^p dx + \int_\Omega \langle A\nabla^*u,\nabla^*u\rangle^\frac{p}{2}. \end{aligned}$$ We define $H^{1,\lambda}(\Omega):=W^{1,\lambda,2}(\Omega)$ and $H^{1,\lambda}_0(\Omega):=W^{1,\lambda,2}_0(\Omega)$. $(H^{1,\lambda}(\Omega),\|.\|)$ and $(H^{1,\lambda}_0(\Omega),\|.\|_0)$ are Hilbert spaces. **Theorem 4**. *(Embedding Theorem)([@FL3 Theorem 2.6] and [@KAL Proposition 3.2]) Let $\Omega\subset \mathbb{R}^{k+m}$ be an open set. The embedding $$H^{1,\lambda}_0(\Omega)\hookrightarrow L^q(\Omega)$$ is continuous for every $q\in[1,2^*_\lambda]$ and compact for $q\in[1,2^*_\lambda)$, where $2^*_\lambda=\frac{2Q}{Q-2},\; Q=k+(\lambda+1)m$.* **Theorem 5**. *(Stampacchia-Kinderlehrer [@KDS])[\[stam\]]{#stam label="stam"} Let $\phi:[k_0,\infty)\to \mathbb{R}$ be nonnegative and nonincreasing, and suppose that for $k_0\leq k\leq h$,$$\phi(h)\leq [C/(h-k)^\alpha]|\phi(k)|^\beta$$ where $C,\alpha,\beta$ are positive constants with $\beta>1$.
Then $$\phi(k_0+d)=0$$ where $d^\alpha=C2^{\frac{\alpha\beta}{\beta-1}}|\phi(k_0)|^{\beta-1}$.* Now we will prove the Strong Maximum Principle for super-solutions of $-\Delta_\lambda u=0$. In this proof, we use the notation $\rho$ and $S_\rho$ defined in [@FL1 Definition 2.6]. The constants $a, c_1$ are introduced in [@FL1 Theorem 4.3]. Also, $c$ and $\epsilon_0$ are defined in [@FL1 Proposition 2.9]. **Theorem 6**. *(Strong Maximum Principle) Let $\Omega\subset \mathbb{R}^{1+m}$ be a $\Lambda-$connected, bounded open set and $\lambda\geq 1$. Let $u$ be a nonnegative (not identically zero) function in $H^{1,\lambda}_0(\Omega)$ such that $u$ is a super solution of $-\Delta_\lambda u=0$, i.e., for every nonnegative $v\in H^{1,\lambda}_0(\Omega)$, $$\int_\Omega \langle A\nabla^*u,\nabla^*v\rangle dX\geq 0.$$ If there exists a ball $B_r(X_0)\Subset\Omega$ with $\inf_{B_r(X_0)}u=0$, then $u$ is identically zero in $\Omega$.* *Proof.* Let $n_0$ be a natural number such that $n_0^{\epsilon_0}>2c_1$. We can choose $r>0$ such that $B(X_0,n_0r)\Subset \Omega$, $\inf_{B_r(X_0)}u=0$ and $S_\rho(X_0,ac(n_0r)^{\epsilon_0})\subset \Omega$. By using the results of [@FL1] we have $$B(X_0,r)\subset B(X_0,n_0r)\subset S_d(X_0,c(n_0r)^{\epsilon_0})\subset S_\rho(X_0,ac(n_0r)^{\epsilon_0})\subset\Omega$$ Put $R=\frac{ac(n_0r)^{\epsilon_0}}{c_1}$ and by [@FL1] with $p=1$, we have $$\label{HR1} \inf_{S_\rho(X_0,\frac{R}{2})}u\geq M |S_\rho(X_0,R)|^{-1}\int_{S_\rho(X_0,R)}|u|\ dX.$$ By using the results of [@FL1] we can easily show that $B(X_0,r)\subset S_\rho(X_0,\frac{R}{2})$. Hence, $\inf_{S_\rho(X_0,\frac{R}{2})}u=0$. By ([\[HR1\]](#HR1){reference-type="ref" reference="HR1"}) we have $u=0$ a.e. in $S_\rho(X_0,R)$ and hence, in $B(X_0,r)$. Let $Y\in\Omega$ and $r_0=r$.
Since $\Omega$ is a bounded domain, we can find a finite collection of balls $\{B(X_i,r_i)\}_{i=0}^{i=k}$ such that $B(X_i,n_0r_i)\Subset \Omega$, $S_\rho(X_i,ac(n_0r_i)^{\epsilon_0})\subset \Omega$, $B(X_{i-1},r_{i-1})\cap B(X_{i},r_{i})\neq\emptyset$ for $i=1,2,\ldots,k$ and $Y\in B(X_k,r_k)$. We can use the previous process to show that $u=0$ a.e. in $B(X_1,r_1)$. Iterating, we have $u=0$ a.e. in $B(X_k,r_k)$. Hence, $u=0$ a.e. in $\Omega$. ◻ Now we are ready to define the notion of solution of ([\[maineq\]](#maineq){reference-type="ref" reference="maineq"}). **Definition 4**. *A function $u\in H^{1,\lambda}_{loc}(\Omega)$ is said to be a weak solution of ([\[maineq\]](#maineq){reference-type="ref" reference="maineq"}) if for every $\Omega'\Subset\Omega$ there exists a positive constant $C(\Omega')$ such that $$\begin{aligned} u\geq C(\Omega')>0 \text{ a.e. in $\Omega'$}, \end{aligned}$$ $$\begin{aligned} \int_\Omega \langle A\nabla^*u,\nabla v\rangle\,dX=\int_\Omega \frac{fv}{u^\nu}\,dX\;\mbox{ for all $v\in C^1_0(\Omega)$} \end{aligned}$$ and* - *if $\nu\leq 1$ then $u\in H^{1,\lambda}_0(\Omega)$.* - *if $\nu>1$ then $u^{\frac{\nu+1}{2}}\in H^{1,\lambda}_0(\Omega)$.* # EXISTENCE AND REGULARITY RESULTS {#sec3} Henceforth, we will assume $N=1+m$, and $\Omega\subset\mathbb{R}^N$ is a $\Lambda-$connected, bounded open set. We will also assume $f$ is a nonnegative (not identically zero) function and $\lambda\geq 1$. Our main results are the following: ## The case $\nu=1$ **Theorem 7**. *Let $\nu=1$ and $f\in L^1(\Omega)$. Then the Dirichlet boundary value problem ([\[maineq\]](#maineq){reference-type="ref" reference="maineq"}) has a unique solution in the sense of definition ([Definition 4](#def2){reference-type="ref" reference="def2"}).* **Theorem 8**. *Let $\nu=1$ and $f\in L^r(\Omega),r\geq 1$. Then the solution given by Theorem [Theorem 7](#Th1){reference-type="ref" reference="Th1"} satisfies the following* 1. *If $r>\frac{Q}{2}$ then $u\in L^\infty(\Omega)$.* 2.
*If $1\leq r<\frac{Q}{2}$ then $u\in L^s(\Omega)$.* *where $Q=(m+1)+\lambda m$ and $s=\frac{2Qr}{Q-2r}$.* ## The case $\nu>1$ **Theorem 9**. *Let $\nu>1$ and $f\in L^1(\Omega)$. Then there exists $u\in H^{1,\lambda}_\text{loc}(\Omega)$ which satisfies equation ([\[maineq\]](#maineq){reference-type="ref" reference="maineq"}) in the sense of definition ([Definition 4](#def2){reference-type="ref" reference="def2"}).* **Theorem 10**. *Let $\nu>1$ and $f\in L^r(\Omega),\; r\geq1$. Then the solution $u$ of ([\[maineq\]](#maineq){reference-type="ref" reference="maineq"}) given by the above theorem is such that* 1. *If $r>\frac{Q}{2}$ then $u\in L^\infty(\Omega).$* 2. *If $1\leq r<\frac{Q}{2}$ then $u\in L^s(\Omega).$* *where $s=\frac{Qr(\nu+1)}{(Q-2r)}$ and $Q=(m+1)+\lambda m.$* ## The case $\nu<1$ **Theorem 11**. *Let $\nu<1$ and $f\in L^r(\Omega),r=(\frac{2^*_\lambda}{1-\nu})'$. Then ([\[maineq\]](#maineq){reference-type="ref" reference="maineq"}) has a unique solution in $H^{1,\lambda}_0(\Omega)$.* **Theorem 12**. *Let $\nu<1$ and $f\in L^r(\Omega),\; r\geq (\frac{2^*_\lambda}{1-\nu})'$. Then the solution $u$ of ([\[maineq\]](#maineq){reference-type="ref" reference="maineq"}) given by the above theorem is such that* 1. *If $r>\frac{Q}{2}$ then $u\in L^\infty(\Omega).$\ * 2. *If $(\frac{2^*_\lambda}{1-\nu})'\leq r<\frac{Q}{2}$ then $u\in L^s(\Omega).$* *where $s=\frac{Qr(\nu+1)}{(Q-2r)},\; Q=(m+1)+\lambda m$ and $r'$ denotes the Hölder conjugate of $r$.* **Theorem 13**. *Let $\nu<1$ and $f\in L^r(\Omega)$ for some $1\leq r<\frac{2Q}{(Q+2)+\nu(Q-2)}$.
Then there exists $u\in W^{1,\lambda,q}_0(\Omega)$ which is a solution of ([\[maineq\]](#maineq){reference-type="ref" reference="maineq"}) in the sense $$\begin{aligned} \int_\Omega\langle A\nabla^*u,\nabla v\rangle dX=\int_\Omega \frac{fv}{u^\nu}\;dX \mbox{ for all $v\in C^1_0(\Omega)$}\end{aligned}$$ where $q=\frac{Qr(\nu+1)}{Q-r(1-\nu)}$.* # APPROXIMATION OF THE EQUATION ([\[maineq\]](#maineq){reference-type="ref" reference="maineq"}) {#sec4} Let $f$ be a nonnegative (not identically zero) measurable function and $n\in \mathbb{N}$. Let us consider the equation $$\begin{aligned} \label{equ} -\Delta_\lambda u_n&=\frac{f_n}{(u_n+\frac{1}{n})^\nu} \text{ in }\Omega\\ u_n=0 &\text{ on } \partial\Omega\nonumber\end{aligned}$$ where $f_n:=\min\{f,n\}$.\ **Lemma 14**. *Equation ([\[equ\]](#equ){reference-type="ref" reference="equ"}) has a unique solution $u_n\in H^{1,\lambda}_0(\Omega)\cap L^\infty(\Omega)$.* *Proof.* Let $w\in L^2(\Omega)$ be a fixed element. Now consider the equation $$\begin{aligned} \label{P2} -\Delta_\lambda u&=g_n\text{ in }\Omega\\ u&=0 \text{ on } \partial\Omega\nonumber\end{aligned}$$ where $g_n=\frac{f_n}{(|w|+\frac{1}{n})^\nu}$. Since $|g_n(x)|\leq n^{\nu+1}$ one has $g_n\in L^2(\Omega)$. By [@MP Theorem 4.4], equation ([\[P2\]](#P2){reference-type="ref" reference="P2"}) has a unique solution $u_w\in H^{1,\lambda}_0(\Omega)$, and the map $T:L^2(\Omega)\to H^{1,\lambda}_0(\Omega)$ such that $T(w)=u_w$ is continuous. By Theorem [Theorem 4](#emb1){reference-type="ref" reference="emb1"}, we have the compact embedding $$H^{1,\lambda}_0(\Omega)\hookrightarrow L^2(\Omega).$$ Hence, $T:L^2(\Omega)\to L^2(\Omega)$ is continuous as well as compact.\ Let $S=\{w\in L^2(\Omega) : w=\sigma Tw \;\text{for some}\; 0\leq \sigma\leq 1\}$.\ **Claim:** The set $S$ is bounded.\ Let $w\in S$.
By the Poincaré inequality (see [@MP Theorem 2.1]), there exists a constant $C>0$ such that\ $$\begin{aligned} \|u_w\|_{L^2(\Omega)}^2&\leq C\int_\Omega \langle A\nabla^*u_w,\nabla^*u_w\rangle \,dX =C\int_\Omega g_n(x)u_w\,dX \leq Cn^{\nu+1}\int_\Omega u_w\,dX \leq Cn^{\nu+1}|\Omega|^\frac{1}{2} \|u_w\|_{L^2(\Omega)}\end{aligned}$$ Hence, we have $$\begin{aligned} \|u_w\|_{L^2(\Omega)}&\leq Cn^{\nu+1}|\Omega|^\frac{1}{2}\end{aligned}$$ where $C>0$ is independent of $w$. This proves $S$ is bounded. Hence by Schaefer's fixed point theorem, there exists $u_n\in H^{1,\lambda}_0(\Omega)$ such that $$\begin{aligned} -\Delta_\lambda u_n&=\frac{f_n}{(|u_n|+\frac{1}{n})^\nu} \text{ in }\Omega\\ u_n=0 &\text{ on } \partial\Omega\nonumber\end{aligned}$$ By the Weak Maximum Principle (see [@MP Theorem 4.4]), we have $u_n\geq 0$ in $\Omega$. So $u_n$ is a solution of ([\[equ\]](#equ){reference-type="ref" reference="equ"}). Hence, $$\label{weakd} \int_\Omega \langle A\nabla^*u_n,\nabla v\rangle dX=\int_\Omega \frac{f_nv}{(u_n+\frac{1}{n})^\nu}dX \text{ for every }v\in C^1_0(\Omega)$$ Now, we want to prove $u_n\in L^\infty(\Omega)$.\ Let $k>1$ and define $S(k)=\{x\in \Omega : u_n(x)\geq k\}$. The function $$v(x)= \begin{cases} u_n(x)-k & x\in S(k)\\ 0 & \text{otherwise} \end{cases}$$ can be used as a test function in ([\[weakd\]](#weakd){reference-type="ref" reference="weakd"}). Putting $v$ in ([\[weakd\]](#weakd){reference-type="ref" reference="weakd"}), we obtain $$\begin{aligned} \label{weakin} \int_{S(k)} \langle A\nabla^*v,\nabla^* v\rangle\,dX&=\int_{S(k)} \frac{f_nv}{(v+k+\frac{1}{n})^\nu}\,dX\nonumber \leq n^{\nu+1} \int_{S(k)}v\,dX\nonumber \leq n^{\nu+1} \|v\|_{L^{2^*_\lambda}(\Omega)} |S(k)|^{1-\frac{1}{2^*_\lambda}}\end{aligned}$$ Here, $2^*_\lambda=\frac{2Q}{Q-2}$ and $Q=(m+1)+\lambda m$.
Now, by Theorem [Theorem 4](#emb1){reference-type="ref" reference="emb1"} there exists $C>0$ such that $$\begin{aligned} \|v\|_{L^{2^*_\lambda}(\Omega)}^2&\leq C\int_{\Omega} \langle A\nabla^*v,\nabla^* v\rangle\,dX \nonumber =C\int_{S(k)} \langle A\nabla^*v,\nabla^* v\rangle\,dX \nonumber \leq Cn^{\nu+1}\|v\|_{L^{2^*_\lambda}(\Omega)} |S(k)|^{1-\frac{1}{2^*_\lambda}}. \nonumber\end{aligned}$$ We have $$\label{S1} \|v\|_{L^{2^*_\lambda}(\Omega)}\leq Cn^{\nu+1}|S(k)|^{1-\frac{1}{2^*_\lambda}}$$ Assume $1<k<h$. Using inequality ([\[S1\]](#S1){reference-type="ref" reference="S1"}), we get $$\begin{aligned} |S(h)|^\frac{1}{2^*_\lambda}(h-k)&=\left(\int_{S(h)}(h-k)^{2^*_\lambda}\,dX\right)^\frac{1}{2^*_\lambda} \leq \left(\int_{S(k)}(v(x))^{2^*_\lambda}\,dX\right)^\frac{1}{2^*_\lambda} \leq \|v\|_{L^{2^*_\lambda}(\Omega)} \leq Cn^{\nu+1}|S(k)|^{1-\frac{1}{2^*_\lambda}}\end{aligned}$$ The above two inequalities imply $$\begin{aligned} |S(h)|\leq \left(\frac{Cn^{\nu+1}}{h-k}\right)^{2^*_\lambda}|S(k)|^{2^*_\lambda-1}\end{aligned}$$ Let $d^{2^*_\lambda}=(Cn^{\nu+1})^{2^*_\lambda}2^\frac{2^*_\lambda(2^*_\lambda-1)}{2^*_\lambda-2} |S(1)|^{2^*_\lambda-2}$; then by Theorem [\[stam\]](#stam){reference-type="ref" reference="stam"}, we get $|S(1+d)|=0$. Hence, $u_n(x)\leq 1+d$ a.e. in $\Omega$. We get a positive constant $C(n)$ such that $u_n\leq C(n)$ a.e. in $\Omega$. Consequently, $u_n\in L^\infty(\Omega)$.\ Let $u_n$ and $v_n$ be two solutions of ([\[equ\]](#equ){reference-type="ref" reference="equ"}). The function $w=(u_n-v_n)^+\in H^{1,\lambda}_0(\Omega)$ can be used as a test function.
It is clear that $$\begin{aligned} \label{f1} [(v_n+\frac{1}{n})^\nu-(u_n+\frac{1}{n})^\nu]w\leq 0\end{aligned}$$ Since $u_n$ and $v_n$ are two solutions of ([\[equ\]](#equ){reference-type="ref" reference="equ"}), putting $w$ in ([\[weakd\]](#weakd){reference-type="ref" reference="weakd"}) we get $$\begin{aligned} \int_\Omega \langle A\nabla^*u_n,\nabla^* w\rangle dX=\int_\Omega \frac{f_nw}{(u_n+\frac{1}{n})^\nu}dX \\ \text{and}\quad \int_\Omega \langle A\nabla^*v_n,\nabla^* w\rangle dX=\int_\Omega \frac{f_nw}{(v_n+\frac{1}{n})^\nu}dX \end{aligned}$$ Therefore, $$\begin{aligned} \int_\Omega \langle A\nabla^*(u_n-v_n),\nabla^* w\rangle\,dX &=\int_\Omega \frac{f_n[(v_n+\frac{1}{n})^\nu-(u_n+\frac{1}{n})^\nu]}{(u_n+\frac{1}{n})^\nu(v_n+\frac{1}{n})^\nu}w\,dX\\\end{aligned}$$ Using ([\[f1\]](#f1){reference-type="ref" reference="f1"}) we have $$\begin{aligned} \int_\Omega \langle A\nabla^*w,\nabla^* w\rangle\,dX\leq 0\end{aligned}$$ Hence, $w=0$ and so $u_n-v_n\leq 0$. By a similar argument, we can prove that $v_n-u_n\leq 0$. Consequently, $u_n=v_n$ a.e. in $\Omega$. ◻ **Lemma 15**. *For each $n\in \mathbb{N}$, let $u_n$ be the solution of ([\[equ\]](#equ){reference-type="ref" reference="equ"}). Then the sequence $\{u_n\}$ is increasing, and for each $\Omega'\Subset \Omega$ there exists a constant $C(\Omega')>0$ such that $$u_n(x)\geq C(\Omega')>0\;\mbox{ a.e.}\; x\in\Omega'\;\mbox{ and for all}\; n\in\mathbb{N}$$* *Proof.* Let $n\in \mathbb{N}$ be fixed. Define $w=(u_n-u_{n+1})^+$. It is clear that $$[(u_{n+1}+\frac{1}{n+1})^\nu-(u_n+\frac{1}{n})^\nu]w\leq0.$$ The function $w$ can be used as a test function. Arguing as in the proof of the previous lemma, we obtain $w=0$. Hence $u_n-u_{n+1}\leq 0$, that is, $u_n\leq u_{n+1}$ a.e. in $\Omega$ and for all $n\in\mathbb{N}$. Since $f$ is not identically zero, $f_i$ is not identically zero for some $i\in \mathbb{N}$.
Without loss of generality, we may assume that $f_1$ is not identically zero.\ Consider the equation $$\begin{aligned} -\Delta_\lambda u_1&=\frac{f_1}{(u_1+1)^\nu}\text{ in }\Omega\\ u_1&=0 \text{ on } \partial\Omega\nonumber\end{aligned}$$ Since $f_1$ is not identically zero, $u_1$ is not identically zero. So by Theorem [Theorem 6](#smp){reference-type="ref" reference="smp"}, we have $u_1>0$ in $\Omega$. Hence, for every compact set $\Omega'\Subset \Omega$, there exists a constant $C(\Omega')>0$ such that $u_1\geq C(\Omega')$ a.e. in $\Omega'$. Monotonicity of the sequence implies that for every $n\in \mathbb{N}$, $$\begin{aligned} u_n\geq C(\Omega'). \end{aligned}$$ ◻ # A FEW AUXILIARY RESULTS {#sec5} We start this section with a priori estimates on $u_n$. **Lemma 16**. *Let $u_n$ be the solution of equation ([\[equ\]](#equ){reference-type="ref" reference="equ"}) with $\nu=1$, where $f\in L^1(\Omega)$ is a nonnegative function (not identically zero). Then the sequence $\{u_n\}$ is bounded in $H^{1,\lambda}_0(\Omega)$.* *Proof.* Since $u_n\in H^{1,\lambda}_0(\Omega)$ is a solution of ([\[equ\]](#equ){reference-type="ref" reference="equ"}), taking $v=u_n$ in ([\[weakd\]](#weakd){reference-type="ref" reference="weakd"}) (admissible by density) we obtain\ $$\begin{aligned} \int_\Omega \langle A\nabla^*u_n,\nabla^* u_n\rangle\,dX&=\int_\Omega \frac{f_nu_n}{(u_n+\frac{1}{n})}dX \leq \int_\Omega fdX =\|f\|_{L^1(\Omega)} \end{aligned}$$ Hence, $\{u_n\}$ is bounded in $H^{1,\lambda}_0(\Omega)$. ◻ **Lemma 17**. *Let $u_n$ be the solution of the equation ([\[equ\]](#equ){reference-type="ref" reference="equ"}) with $\nu>1$, where $f\in L^1(\Omega)$ is a nonnegative function (not identically zero).
Then $\{u_n^{\frac{\nu+1}{2}}\}$ is bounded in $H^{1,\lambda}_0(\Omega)$ and $\{u_n\}$ is bounded in $H^{1,\lambda}_\text{loc}(\Omega)$ and in $L^s(\Omega)$, where $s=\frac{(\nu+1)Q}{(Q-2)}$.* *Proof.* Since $\nu>1$ and $u_n\in H^{1,\lambda}_0(\Omega)\cap L^\infty(\Omega)$, putting $v=u_n^\nu$ in ([\[weakd\]](#weakd){reference-type="ref" reference="weakd"}) we have $$\begin{aligned} \int_\Omega\langle A\nabla^*u_n,\nabla^*u_n^\nu\rangle dX&=\int_\Omega \frac{f_nu_n^\nu}{(u_n+\frac{1}{n})^\nu}dX \leq\int_\Omega fdX. \end{aligned}$$ Now, $$\begin{aligned} \label{f2} \int_\Omega \langle A\nabla^*u_n^{\frac{\nu+1}{2}},\nabla^*u_n^{\frac{\nu+1}{2}}\rangle dX=\frac{(\nu+1)^2}{4\nu}\int_\Omega \nu u_n^{\nu-1}\langle A\nabla^*u_n,\nabla^*u_n\rangle dX&=\frac{(\nu+1)^2}{4\nu}\int_\Omega\langle A\nabla^*u_n,\nabla^*u_n^\nu\rangle dX\nonumber\\ \leq \frac{(\nu+1)^2}{4\nu}\int_\Omega fdX. \end{aligned}$$ Hence, $\{u_n^\frac{\nu+1}{2}\}$ is bounded in $H^{1,\lambda}_0(\Omega)$. By Theorem [Theorem 4](#emb1){reference-type="ref" reference="emb1"}, there exists a constant $C>0$ such that $$\begin{aligned} \|u_n^{\frac{\nu+1}{2}}\|_{L^{2^*_\lambda}(\Omega)}\leq C\|u_n^{\frac{\nu+1}{2}}\|_{H^{1,\lambda}_0(\Omega)} \end{aligned}$$ By using ([\[f2\]](#f2){reference-type="ref" reference="f2"}), we have $$\begin{aligned} (\int_\Omega u_n^{2^*_\lambda\frac{(\nu+1)}{2}} dX)^\frac{2}{2^*_\lambda}\leq C\frac{(\nu+1)^2}{4\nu}\|f\|_{L^1(\Omega)} \end{aligned}$$ Since $s=2^*_\lambda\frac{\nu+1}{2}$, we get $$\begin{aligned} \int_\Omega u_n^s dX\leq (C\frac{(\nu+1)^2}{4\nu}\|f\|_{L^1(\Omega)})^\frac{2^*_\lambda}{2}\end{aligned}$$ Hence, $\{u_n\}$ is bounded in $L^s(\Omega).$ To prove that $\{u_n\}$ is bounded in $H^{1,\lambda}_\text{loc}(\Omega)$, let $\Omega'\Subset\Omega$ and $\eta\in C^\infty_0(\Omega)$ be such that $0\leq\eta\leq1$ and $\eta=1$ in $\Omega'$. The function $u_n\eta^2\in H^{1,\lambda}_0(\Omega)$ is an admissible test function.
By Lemma [Lemma 15](#lem1){reference-type="ref" reference="lem1"}, there exists a constant $C>0$ such that $u_n\geq C$ a.e. in $\text{supp}(\eta)$. Putting $v=u_n\eta^2$ in ([\[weakd\]](#weakd){reference-type="ref" reference="weakd"}), we have $$\begin{aligned} \label{w0} \int_\Omega\langle A\nabla^*u_n,\nabla^*(u_n\eta^2)\rangle dX=\int_\Omega \frac{f_nu_n\eta^2}{(u_n+\frac{1}{n})^\nu}dX \end{aligned}$$ Also, $$\begin{aligned} \label{f5} \int_\Omega\langle A\nabla^*u_n,\nabla^*(u_n\eta^2)\rangle dX=\int_\Omega \{\eta^2\langle A\nabla^*u_n,\nabla^*u_n\rangle+2\eta u_n\langle A\nabla^*u_n,\nabla\eta\rangle\}\,dX \end{aligned}$$ From ([\[w0\]](#w0){reference-type="ref" reference="w0"}) and ([\[f5\]](#f5){reference-type="ref" reference="f5"}) we get $$\begin{aligned} \label{f6} \int_\Omega \eta^2\langle A\nabla^*u_n,\nabla^*u_n\rangle dX\leq \int_\Omega \frac{f_n\eta^2}{C^{(\nu-1)}}dX-\int_\Omega2\eta u_n\langle A\nabla^*u_n,\nabla\eta\rangle dX \end{aligned}$$ Choose $\epsilon>0$ and use Young's inequality; one has $$\begin{aligned} \label{w8} |\int_\Omega2\eta u_n\langle A\nabla^*u_n,\nabla\eta\rangle dX|&\leq \int_\Omega2|\langle\eta\sqrt{A}\nabla^*u_n,u_n\sqrt{A}\nabla\eta\rangle|dX\nonumber\\ &\leq \frac{1}{\epsilon}\int_\Omega \eta^2 |\sqrt{A}\nabla^*u_n|^2dX+\epsilon \int_\Omega u_n^2 |\sqrt{A}\nabla\eta|^2dX, \end{aligned}$$ Taking $\epsilon=2$, we get $$\begin{aligned} \label{w9} |\int_\Omega2\eta u_n\langle A\nabla^*u_n,\nabla\eta\rangle dX|&\leq\frac{1}{2}\int_\Omega \eta^2 |\sqrt{A}\nabla^*u_n|^2dX+2\int_\Omega u_n^2 |\sqrt{A}\nabla\eta|^2dX \nonumber\\ &=\frac{1}{2}\int_\Omega \eta^2 \langle A\nabla^*u_n,\nabla^*u_n\rangle dX+2\int_\Omega u_n^2 \langle A\nabla \eta,\nabla \eta\rangle dX \end{aligned}$$ Using ([\[f6\]](#f6){reference-type="ref" reference="f6"}) and ([\[w9\]](#w9){reference-type="ref" reference="w9"}), we have $$\begin{aligned} \int_\Omega \eta^2\langle A\nabla^*u_n,\nabla^*u_n\rangle dX&\leq 2\int_\Omega \frac{f\eta^2}{C^{(\nu-1)}}dX+4\int_\Omega u_n^2 \langle
A\nabla \eta,\nabla \eta\rangle dX\\ &\leq \frac{2\|\eta\|_\infty^2\|f\|_{L^1(\Omega)}}{C^{\nu-1}}+ 4\|\langle A\nabla \eta,\nabla \eta\rangle\|_\infty\int_\Omega u_n^2 dX \end{aligned}$$ Since $\{u_n\}$ is bounded in $L^s(\Omega)$ and $s>2$, it is also bounded in $L^2(\Omega)$, and therefore $$\begin{aligned} \int_\Omega \eta^2\langle A\nabla^*u_n,\nabla^*u_n\rangle dX\leq C(f,\eta)\end{aligned}$$ Now, $$\int_{\Omega'}\langle A\nabla^*u_n,\nabla^*u_n\rangle dX\leq \int_\Omega \eta^2\langle A\nabla^*u_n,\nabla^*u_n\rangle dX\leq C(f,\eta)$$ Hence, $\{u_n\}$ is bounded in $H^{1,\lambda}_\text{loc}(\Omega)$. ◻ **Lemma 18**. *Let $u_n$ be the solution of ([\[equ\]](#equ){reference-type="ref" reference="equ"}) with $\nu<1$, and let $f\in L^r(\Omega)$, $r=(\frac{2^*_\lambda}{1-\nu})'$, be a nonnegative (not identically zero) function. Then $\{u_n\}$ is bounded in $H^{1,\lambda}_0(\Omega)$.* *Proof.* Since $r=(\frac{2^*_\lambda}{1-\nu})'$, we can choose $v=u_n$ in ([\[weakd\]](#weakd){reference-type="ref" reference="weakd"}); using the Hölder inequality, one has $$\begin{aligned} \label{f3} \int_\Omega\langle A\nabla^*u_n,\nabla^*u_n\rangle dX=\int_\Omega \frac{f_nu_n}{(u_n+\frac{1}{n})^\nu}\leq \int_\Omega fu_n^{1-\nu} dX&\leq \|f\|_{L^r(\Omega)}(\int_\Omega u_n^{(1-\nu)r'} dX)^\frac{1}{r'}\nonumber\\ &\leq \|f\|_{L^r(\Omega)}(\int_\Omega u_n^{2^*_\lambda} dX)^\frac{1-\nu}{2^*_\lambda}. \end{aligned}$$ By Theorem [Theorem 4](#emb1){reference-type="ref" reference="emb1"} and using the above inequality, we get $$\begin{aligned} \label{w6} \int_\Omega u_n^{2^*_\lambda} dX&\leq C(\int_\Omega\langle A\nabla^*u_n,\nabla^*u_n\rangle dX)^\frac{2^*_\lambda}{2}\leq C (\|f\|_{L^r(\Omega)}(\int_\Omega u_n^{2^*_\lambda} dX)^\frac{1-\nu}{2^*_\lambda})^\frac{2^*_\lambda}{2}.
\end{aligned}$$ So we have $$\begin{aligned} \label{P1} \int_\Omega u_n^{2^*_\lambda} dX\leq C\|f\|_{L^r(\Omega)}^\frac{2^*_\lambda}{1+\nu}. \end{aligned}$$ Hence, $\{u_n\}$ is bounded in $L^{2^*_\lambda}(\Omega)$. Using ([\[f3\]](#f3){reference-type="ref" reference="f3"}) and ([\[P1\]](#P1){reference-type="ref" reference="P1"}), we can conclude that $\|u_n\|_{H^{1,\lambda}_0(\Omega)}\leq C\|f\|_{L^r(\Omega)}^\frac{1}{1+\nu}$, where $C$ is independent of $n$. Hence, $\{u_n\}$ is bounded in $H^{1,\lambda}_0(\Omega)$. ◻ # PROOF OF MAIN RESULTS {#sec6} ## The case $\nu=1$ **Proof of Theorem [Theorem 7](#Th1){reference-type="ref" reference="Th1"}:** *Proof.* Consider the above sequence $\{u_n\}$ and define $u$ as the pointwise limit of the (increasing) sequence $\{u_n\}$. Since $H^{1,\lambda}_0(\Omega)$ is a Hilbert space and $\{u_n\}$ is bounded in $H^{1,\lambda}_0(\Omega)$, it admits a weakly convergent subsequence. Assume $u_n$ converges weakly to $v$ in $H^{1,\lambda}_0(\Omega)$; hence $u_n$ converges to $v$ in $L^2(\Omega)$. So $\{u_n\}$ has a subsequence that converges to $v$ pointwise. Consequently, $u=v$. So we may assume that the sequence $\{u_n\}$ converges weakly to $u$ in $H^{1,\lambda}_0(\Omega)$. Choose $v'\in C^1_0(\Omega)$. By Lemma [Lemma 15](#lem1){reference-type="ref" reference="lem1"}, there exists $C>0$ such that $u\geq u_n\geq C$ a.e. in $\text{supp}(v')$ and for all $n\in \mathbb{N}$. So $$|\frac{f_nv'}{(u_n+\frac{1}{n})}|\leq \frac{\|v'\|_\infty|f|}{C}\;\mbox{ for all }n\in\mathbb{N}$$ By the Dominated Convergence Theorem, we have $$\begin{aligned} \label{DCT} \lim_{n\to \infty}\int_\Omega \frac{f_nv'}{(u_n+\frac{1}{n})}dX= \int_\Omega \lim_{n\to \infty} \frac{f_nv'}{(u_n+\frac{1}{n})}dX =\int_\Omega \frac{fv'}{u}dX.
\end{aligned}$$ As $u_n$ is a solution of ([\[equ\]](#equ){reference-type="ref" reference="equ"}), from ([\[weakd\]](#weakd){reference-type="ref" reference="weakd"}) we get $$\begin{aligned} &\int_\Omega \langle A\nabla^*u_n,\nabla v'\rangle dX=\int_\Omega \frac {f_nv'}{(u_n+\frac{1}n)} dX \end{aligned}$$ Taking $n\to \infty$ and using ([\[DCT\]](#DCT){reference-type="ref" reference="DCT"}), we obtain $$\begin{aligned} \int_\Omega \langle A\nabla^*u,\nabla v'\rangle dX=\int_\Omega \frac {fv'}{u} dX \end{aligned}$$ Hence, $u\in H^{1,\lambda}_0(\Omega)$ is a solution of ([\[maineq\]](#maineq){reference-type="ref" reference="maineq"}).\ Let $u$ and $v$ be two solutions of ([\[maineq\]](#maineq){reference-type="ref" reference="maineq"}). The function $w=(u-v)^+\in H^{1,\lambda}_0(\Omega)$ can be used as a test function. Since $u$ and $v$ are two solutions of ([\[maineq\]](#maineq){reference-type="ref" reference="maineq"}), we have $$\begin{aligned} \int_\Omega \langle A\nabla^*u,\nabla^* w\rangle dX&=\int_\Omega \frac{fw}{u}dX \\ & \text{and}\\ \int_\Omega \langle A\nabla^*v,\nabla^* w\rangle dX&=\int_\Omega \frac{fw}{v}dX \end{aligned}$$ By subtracting one from the other, we get $$\begin{aligned} \int_\Omega \langle A\nabla^*(u-v),\nabla^* w\rangle\,dX &=\int_\Omega \frac{f(v-u)}{uv}w dX\leq 0.\end{aligned}$$ This ensures that $$\begin{aligned} \int_\Omega \langle A\nabla^*w,\nabla^* w\rangle\,dX\leq 0.\end{aligned}$$ Hence, $w=0$ and so $u-v\leq 0$. By interchanging the roles of $u$ and $v$, we get $v-u\leq 0$. Consequently, $u=v$ a.e. in $\Omega$. ◻ **Proof of Theorem [Theorem 8](#Th2){reference-type="ref" reference="Th2"}:** *Proof.* $(i)$ Let $k>1$ and define $S(k)=\{x\in \Omega : u_n(x)\geq k\}$. The function $$v(x)= \begin{cases} u_n(x)-k & x\in S(k)\\ 0 & \text{otherwise} \end{cases}$$ can be used as a test function (by density).
So by ([\[weakd\]](#weakd){reference-type="ref" reference="weakd"}) we have $$\begin{aligned} \label{w1} \int_{S(k)} \langle A\nabla^*v,\nabla^* v\rangle\,dX&=\int_{S(k)} \frac{f_nv}{(v+k+\frac{1}{n})}dX \leq \int_{S(k)} fv\,dX \leq \|f\|_{L^r(\Omega)} \|v\|_{L^{2^*_\lambda}(\Omega)} |S(k)|^{1-\frac{1}{2^*_\lambda}-\frac{1}{r}} \end{aligned}$$ where $2^*_\lambda=\frac{2Q}{Q-2}$. By Theorem [Theorem 4](#emb1){reference-type="ref" reference="emb1"}, there exists $C>0$ such that $$\begin{aligned} \label{w2} \|v\|_{L^{2^*_\lambda}(\Omega)}^2&\leq C\int_{\Omega} \langle A\nabla^*v,\nabla^* v\rangle dX =C\int_{S(k)} \langle A\nabla^*v,\nabla^* v\rangle\,dX \leq C\|f\|_{L^r(\Omega)} \|v\|_{L^{2^*_\lambda}(\Omega)} |S(k)|^{1-\frac{1}{2^*_\lambda}-\frac{1}{r}}\end{aligned}$$ The last inequality follows from ([\[w1\]](#w1){reference-type="ref" reference="w1"}). Inequality ([\[w2\]](#w2){reference-type="ref" reference="w2"}) gives $$\|v\|_{L^{2^*_\lambda}(\Omega)}\leq C\|f\|_{L^r(\Omega)} |S(k)|^{1-\frac{1}{2^*_\lambda}-\frac{1}{r}}$$ Assume $1<k<h$. Using the last inequality, we obtain $$\begin{aligned} |S(h)|^\frac{1}{2^*_\lambda}(h-k)&=(\int_{S(h)}(h-k)^{2^*_\lambda}\,dX)^\frac{1}{2^*_\lambda} \leq (\int_{S(k)}(v(x))^{2^*_\lambda}\,dX)^\frac{1}{2^*_\lambda} \leq \|v\|_{L^{2^*_\lambda}(\Omega)} \leq C\|f\|_{L^r(\Omega)} |S(k)|^{1-\frac{1}{2^*_\lambda}-\frac{1}{r}}\end{aligned}$$ So, $$|S(h)|\leq \left(\frac{C\|f\|_{L^r(\Omega)}}{h-k}\right)^{2^*_\lambda} |S(k)|^{{2^*_\lambda}(1-\frac{1}{2^*_\lambda}-\frac{1}{r})}$$ As $r>\frac{Q}{2}$, we have $2^*_\lambda({1-\frac{1}{2^*_\lambda}-\frac{1}{r}})>1$. Let $$d^{2^*_\lambda}=(C\|f\|_{L^r(\Omega)})^{2^*_\lambda}2^\frac{(2^*_\lambda)^2(1-\frac{1}{2^*_\lambda}-\frac{1}{r})}{[{2^*_\lambda}(1-\frac{1}{2^*_\lambda}-\frac{1}{r})-1]} |S(1)|^{{2^*_\lambda}(1-\frac{1}{2^*_\lambda}-\frac{1}{r})-1}$$ By Theorem [\[stam\]](#stam){reference-type="ref" reference="stam"} we have $|S(1+d)|=0$. Hence, $u_n(x)\leq 1+d$ a.e. in $\Omega$.
We get a positive constant $C$ independent of $n$ such that $u_n\leq C\|f\|_{L^r(\Omega)}$ a.e. in $\Omega$ for all $n\in\mathbb{N}$. Hence, $\|u\|_{L^\infty(\Omega)}\leq C\|f\|_{L^r(\Omega)}$.\ $(ii)$ If $r=1$ then $s=2^*_\lambda$. Since $u\in H^{1,\lambda}_0(\Omega)$, by Theorem [Theorem 4](#emb1){reference-type="ref" reference="emb1"} we have $u\in L^s(\Omega)$.\ If $1<r<\frac{Q}{2}$, choose $\delta>1$ (to be determined later) and consider the function $w=u_n^{2\delta-1}$. By the density argument, $w$ can be treated as a test function. Putting $w$ in ([\[weakd\]](#weakd){reference-type="ref" reference="weakd"}), we have $$\begin{aligned} \int_\Omega (2\delta-1)u_n^{(2\delta-2)}\langle A\nabla^*u_n,\nabla^*u_n\rangle dX=\int_\Omega \frac{f_nw}{u_n+\frac{1}{n}}dX \leq \int_\Omega fu_n^{2\delta-2 } dX\end{aligned}$$ By using the Hölder inequality on the RHS of the above inequality, we get $$\label{w3} \int_\Omega\langle A\nabla^*u_n^\delta,\nabla^*u_n^\delta\rangle dX =\int_\Omega \delta^2 u_n^{(2\delta-2)}\langle A\nabla^*u_n,\nabla^*u_n\rangle dX\leq \frac{\delta^2}{(2\delta-1)}\|f\|_{L^r(\Omega)}(\int_\Omega u_n^{(2\delta-2)r'} dX)^\frac{1}{r'}$$ where $\frac{1}{r}+\frac{1}{r'}=1$. By Theorem [Theorem 4](#emb1){reference-type="ref" reference="emb1"}, we have\ $$\begin{aligned} \label{w4} \int_\Omega u_n^{2^*_\lambda\delta} &\leq C(\int_\Omega \langle A\nabla^*{u_n^\delta},\nabla^*{u_n^\delta}\rangle dX)^\frac{2^*_\lambda}{2}\nonumber\\ &\leq C\{\frac{\delta^2}{(2\delta-1)}\|f\|_{L^r(\Omega)}(\int_\Omega u_n^{(2\delta-2)r'} dX)^\frac{1}{r'}\}^\frac{2^*_\lambda}{2}, \text{ [by (\ref{w3})]}\end{aligned}$$ We choose $\delta$ such that $2^*_\lambda\delta=(2\delta-2)r'$, so $\delta=\frac{r(Q-2)}{(Q-2r)}$. Clearly, $\delta>1$ and $2^*_\lambda\delta=s$. By using ([\[w4\]](#w4){reference-type="ref" reference="w4"}), we have $$\begin{aligned} (\int_\Omega u_n^sdX)^{(1-\frac{2^*_\lambda}{2r'})}\leq C\end{aligned}$$ Also, $(1-\frac{2^*_\lambda}{2r'})>0$ as $r<\frac{Q}{2}$.
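For completeness, the stated value of $\delta$ can be checked by solving $2^*_\lambda\delta=(2\delta-2)r'$ directly (a routine verification, spelled out here):

```latex
\begin{aligned}
2^*_\lambda\delta=(2\delta-2)r'
&\iff \frac{2Q}{Q-2}\,\delta=\frac{2r}{r-1}\,(\delta-1)\\
&\iff Q(r-1)\,\delta=(Q-2)r\,(\delta-1)\\
&\iff \delta\,(2r-Q)=-(Q-2)r
\iff \delta=\frac{r(Q-2)}{Q-2r}.
\end{aligned}
```

Since $1<r<\frac{Q}{2}$, the inequality $\delta>1$ reduces to $Q(r-1)>0$, which holds, and then $s=2^*_\lambda\delta=\frac{2Qr}{Q-2r}$.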
So we get $$\int_\Omega u_n^sdX\leq C, \text{ $C>0$ is independent of $n$}.$$ By Fatou's lemma, we have $$\int_\Omega u^sdX\leq C.$$ Hence we are done. ◻ ## The Case $\nu>1$ **Proof of Theorem [Theorem 9](#Th3){reference-type="ref" reference="Th3"}:** *Proof.* Define $u$ as the pointwise limit of $\{u_n\}$. By Lemma [Lemma 17](#L2){reference-type="ref" reference="L2"}, $\{u_n\}$ and $\{u_n^{\frac{\nu+1}{2}}\}$ are bounded in $H^{1,\lambda}_{loc}(\Omega)$ and $H^{1,\lambda}_0(\Omega)$ respectively, so by an argument similar to the proof of Theorem [Theorem 7](#Th1){reference-type="ref" reference="Th1"} we can prove that $u\in H^{1,\lambda}_{loc}(\Omega)$ and $u^{\frac{\nu+1}{2}}\in H^{1,\lambda}_0(\Omega)$.\ Let $v\in C^1_0(\Omega)$ and $\Omega'=\text{supp}(v)$. Without loss of generality we can assume that $u_n$ converges weakly to $u$ in $H^{1,\lambda}(\Omega')$. By Lemma [Lemma 15](#lem1){reference-type="ref" reference="lem1"}, there exists $C>0$ such that $u_n(x)\geq C$ a.e. $x\in\Omega'$ and for all $n\in \mathbb{N}$. So, $u\geq C>0$ a.e. in $\Omega'$. Also, $$|\frac{f_nv}{(u_n+\frac{1}{n})^\nu}|\leq \frac{\|v\|_\infty|f|}{C^\nu},\mbox{ for all }n\in \mathbb{N}$$ By the Dominated Convergence Theorem, we have $$\begin{aligned} \label{DCT1} \lim_{n\to \infty}\int_{\Omega'} \frac{f_nv}{(u_n+\frac{1}{n})^\nu}dX&= \int_{\Omega'} \lim_{n\to \infty} \frac{f_nv}{(u_n+\frac{1}{n})^\nu}dX =\int_{\Omega'} \frac{fv}{u^\nu}dX.
\end{aligned}$$ As $u_n$ is a solution of ([\[equ\]](#equ){reference-type="ref" reference="equ"}), $$\begin{aligned} \int_{\Omega'}\langle A\nabla^*u_n,\nabla v\rangle dX=\int_{\Omega'}\frac{f_nv}{(u_n+\frac{1}{n})^\nu} dX \end{aligned}$$ Taking $n\to\infty$ and using ([\[DCT1\]](#DCT1){reference-type="ref" reference="DCT1"}), we get $$\begin{aligned} \int_{\Omega}\langle A\nabla^*u,\nabla v\rangle dX=\int_{\Omega}\frac{fv}{u^\nu} dX \end{aligned}$$ Hence, $u\in H^{1,\lambda}_{loc}(\Omega)$ is a solution of ([\[maineq\]](#maineq){reference-type="ref" reference="maineq"}). ◻ **Proof of Theorem [Theorem 10](#Th4){reference-type="ref" reference="Th4"}:** *Proof.* (i) The same proof as for Theorem [Theorem 8](#Th2){reference-type="ref" reference="Th2"} works.\ (ii) If $r=1$ then $s=\frac{2^*_\lambda(\nu+1)}{2}$. Also, $u^\frac{\nu+1}{2}\in H^{1,\lambda}_0(\Omega)$. By Theorem [Theorem 4](#emb1){reference-type="ref" reference="emb1"}, we have $u\in L^s(\Omega)$.\ If $1<r<\frac{Q}{2}$, choose $\delta>\frac{\nu+1}{2}$. By the density argument, $v=u_n^{2\delta-1}$ can be used as a test function.
From ([\[weakd\]](#weakd){reference-type="ref" reference="weakd"}), we have $$\int_\Omega\langle A\nabla^*u_n,\nabla^*u_n^{2\delta-1}\rangle\,dX=\int_\Omega \frac{f_nu_n^{2\delta-1}}{(u_n+\frac{1}{n})^\nu}\,dX$$ which gives us $$\begin{aligned} \label{w5} \int_\Omega (2\delta-1)u_n^{2\delta-2}\langle A\nabla^*u_n,\nabla^*u_n\rangle dX&\leq \int_\Omega fu_n^{2\delta-\nu-1}dX \leq \|f\|_{L^r(\Omega)}(\int_\Omega u_n^{(2\delta-\nu-1)r'}dX)^\frac{1}{r'} \end{aligned}$$ By Theorem [Theorem 4](#emb1){reference-type="ref" reference="emb1"}, there exists $C>0$ such that $$\begin{aligned} \label{g1} \int_\Omega u_n^{\delta2^*_\lambda} dX&\leq C(\int_\Omega \langle A\nabla^*u_n^\delta,\nabla^*u_n^\delta\rangle dX)^\frac{2^*_\lambda}{2} \leq C(\int_\Omega \delta^2 u_n^{2\delta-2} \langle A\nabla^*u_n,\nabla^*u_n\rangle dX)^\frac{2^*_\lambda}{2}\end{aligned}$$ By using ([\[w5\]](#w5){reference-type="ref" reference="w5"}) and ([\[g1\]](#g1){reference-type="ref" reference="g1"}), we get $$\begin{aligned} \int_\Omega u_n^{\delta2^*_\lambda} dX&\leq C\{\frac{\delta^2}{(2\delta-1)}\|f\|_{L^r(\Omega)}\}^\frac{2^*_\lambda}{2}(\int_\Omega u_n^{(2\delta-\nu-1)r'}dX)^\frac{2^*_\lambda}{2r'}\end{aligned}$$ Choose $\delta$ such that $\delta2^*_\lambda=(2\delta-\nu-1)r'$; then $2^*_\lambda\delta=s$. As $r<\frac{Q}{2}$, we have $1-\frac{2^*_\lambda}{2r'}>0$ and $\int_\Omega u_n^s dX\leq C$. Hence, by Fatou's lemma we get $u\in L^s(\Omega)$. ◻ ## The Case $\nu<1$ **Proof of Theorem [Theorem 11](#Th5){reference-type="ref" reference="Th5"}:** *Proof.* Since $\{u_n\}$ is bounded in $H^{1,\lambda}_0(\Omega)$, it has a subsequence which converges weakly to $u$ in $H^{1,\lambda}_0(\Omega)$. Without loss of generality we can assume $u_n\rightharpoonup u \text{ in }H^{1,\lambda}_0(\Omega)$. Let $v\in C^1_0(\Omega)$. By Lemma [Lemma 15](#lem1){reference-type="ref" reference="lem1"}, there exists $C>0$ such that $u_n(x)\geq C$ a.e. $x\in\text{supp}(v)$ and for all $n\in \mathbb{N}$.
So $$|\frac{f_nv}{(u_n+\frac{1}{n})^\nu}|\leq \frac{\|v\|_\infty|f|}{C^\nu}\;\mbox{for all}\;n\in\mathbb{N}$$ By the Dominated Convergence Theorem, we have $$\begin{aligned} \label{DCT2} \lim_{n\to \infty}\int_\Omega \frac{f_nv}{(u_n+\frac{1}{n})^\nu}dX&= \int_\Omega \lim_{n\to \infty} \frac{f_nv}{(u_n+\frac{1}{n})^\nu}dX =\int_\Omega \frac{fv}{u^\nu}dX. \end{aligned}$$ As $u_n$ is a solution of ([\[equ\]](#equ){reference-type="ref" reference="equ"}), $$\begin{aligned} \int_\Omega \langle A\nabla^*u_n,\nabla v\rangle dX=\int_\Omega \frac{f_nv}{(u_n+\frac{1}{n})^\nu} dX\end{aligned}$$ Taking $n\to\infty$ and using ([\[DCT2\]](#DCT2){reference-type="ref" reference="DCT2"}), we get $$\int_\Omega \langle A\nabla^*u,\nabla v\rangle dX=\int_\Omega \frac{fv}{u^\nu} dX$$ Hence, $u\in H^{1,\lambda}_0(\Omega)$ is a solution of ([\[maineq\]](#maineq){reference-type="ref" reference="maineq"}) with $\nu<1$. The proof of uniqueness is similar to that of Theorem [Theorem 7](#Th1){reference-type="ref" reference="Th1"}. ◻ **Proof of Theorem [Theorem 12](#Th6){reference-type="ref" reference="Th6"}:** *Proof.* (i) The proof is similar to the proof of Theorem [Theorem 8](#Th2){reference-type="ref" reference="Th2"}.\ (ii) If $r=(\frac{2^*_\lambda}{1-\nu})'$ then $s=2^*_\lambda$.
By the embedding theorem and ([\[weakd\]](#weakd){reference-type="ref" reference="weakd"}), we have $$\begin{aligned} (\int_\Omega u_n^{2^*_\lambda}dX)^\frac{1}{2^*_\lambda}\leq C(\int_\Omega \langle A\nabla^*u_n,\nabla^*u_n\rangle dX)^\frac{1}{2} =C(\int_\Omega \frac{f_nu_n}{(u_n+\frac{1}{n})^\nu}dX)^\frac{1}{2} &\leq C(\int_\Omega fu_n^{1-\nu} dX)^\frac{1}{2}\\ &\leq C \|f\|_{L^r(\Omega)}^\frac{1}{2}(\int_\Omega u_n^{(1-\nu)r'}dX)^\frac{1}{2r'} \end{aligned}$$ Since $r'=\frac{2^*_\lambda}{1-\nu}$, using the above inequality we get $$\begin{aligned} \int_\Omega u_n^{2^*_\lambda} dX&\leq C\|f\|_{L^r(\Omega)}^\frac{2^*_\lambda}{1+\nu}\end{aligned}$$ By Fatou's lemma we have $u\in L^{2^*_\lambda}(\Omega)$.\ Let $(\frac{2^*_\lambda}{1-\nu})'< r<\frac{Q}{2}$. Choose $\delta>1$ (to be determined later). We can treat the function $v=u_n^{2\delta-1}$ as a test function; putting it in ([\[weakd\]](#weakd){reference-type="ref" reference="weakd"}), we obtain $$\begin{aligned} \label{M1} \int_\Omega\langle A\nabla^*u_n,\nabla^*u_n^{2\delta-1}\rangle dX=\int_\Omega \frac{f_nu_n^{2\delta-1}}{(u_n+\frac{1}{n})^\nu}dX\leq \int_\Omega fu_n^{2\delta-\nu-1}dX\leq \|f\|_{L^r(\Omega)}(\int_\Omega u_n^{(2\delta-\nu-1)r'}dX)^\frac{1}{r'} \end{aligned}$$ Also, $$\begin{aligned} \label{M2} \int_\Omega\langle A\nabla^*u_n,\nabla^*u_n^{2\delta-1}\rangle dX=\int_\Omega (2\delta-1)u_n^{2\delta-2}\langle A\nabla^*u_n,\nabla^*u_n\rangle dX=\int_\Omega \frac{(2\delta-1)}{\delta^2}\langle A\nabla^*u_n^\delta,\nabla^*u_n^\delta\rangle dX \end{aligned}$$ Using ([\[M1\]](#M1){reference-type="ref" reference="M1"}) and ([\[M2\]](#M2){reference-type="ref" reference="M2"}) we have $$\int_\Omega \langle A\nabla^*u_n^\delta,\nabla^*u_n^\delta\rangle dX\leq \frac{\delta^2}{(2\delta-1)}\|f\|_{L^r(\Omega)}(\int_\Omega u_n^{(2\delta-\nu-1)r'}dX)^\frac{1}{r'}$$ By Theorem [Theorem 4](#emb1){reference-type="ref" reference="emb1"}, there exists $C>0$ such that $$\begin{aligned} \int_\Omega
u_n^{\delta2^*_\lambda} dX&\leq C(\int_\Omega \langle A\nabla^*u_n^\delta,\nabla^*u_n^\delta\rangle dX)^\frac{2^*_\lambda}{2}\\ &\leq C\{\frac{\delta^2}{(2\delta-1)}\|f\|_{L^r(\Omega)}\}^\frac{2^*_\lambda}{2}(\int_\Omega u_n^{(2\delta-\nu-1)r'}dX)^\frac{2^*_\lambda}{2r'}\end{aligned}$$ Choose $\delta$ such that $\delta2^*_\lambda=(2\delta-\nu-1)r'$; then $2^*_\lambda\delta=s$. As $(\frac{2^*_\lambda}{1-\nu})'< r<\frac{Q}{2}$, we get $\delta>1$ and $\frac{2^*_\lambda}{2r'}<1$. Hence we have $\int_\Omega u_n^s dX\leq C$, and by Fatou's lemma we get $u\in L^s(\Omega)$. ◻ **Proof of Theorem [Theorem 13](#Th7){reference-type="ref" reference="Th7"}:** *Proof.* Let $\epsilon<\frac{1}{n}$ and $v=(u_n+\epsilon)^{2\delta-1}-\epsilon^{2\delta-1}$ with $\frac{1+\nu}{2}\leq\delta<1$. We can treat $v$ as a test function. Putting $v$ in ([\[weakd\]](#weakd){reference-type="ref" reference="weakd"}), we obtain $$\begin{aligned} \int_\Omega\langle A\nabla^*u_n,\nabla^*u_n\rangle (u_n+\epsilon)^{2\delta-2} dX\leq \frac{1}{(2\delta-1)}\int_\Omega\frac{fv}{(u_n+\frac{1}{n})^\nu} \end{aligned}$$ As $\epsilon<\frac{1}{n}$, we have $$\begin{aligned} \label{T3} \int_\Omega\langle A\nabla^*u_n,\nabla^*u_n\rangle (u_n+\epsilon)^{2\delta-2} dX\leq \frac{1}{(2\delta-1)}\int_\Omega f(u_n+\epsilon)^{2\delta-1-\nu}\; dX \end{aligned}$$ Setting $w=(u_n+\epsilon)^{\delta}-\epsilon^{\delta}$, so that $\langle A\nabla^*w,\nabla^*w\rangle=\delta^2(u_n+\epsilon)^{2\delta-2}\langle A\nabla^*u_n,\nabla^*u_n\rangle$, from ([\[T3\]](#T3){reference-type="ref" reference="T3"}) we get $$\begin{aligned} \int_\Omega\langle A\nabla^*w,\nabla^*w\rangle dX\leq \frac{\delta^2}{(2\delta-1)}\int_\Omega f(u_n+\epsilon)^{2\delta-1-\nu} dX \end{aligned}$$ By Theorem [Theorem 4](#emb1){reference-type="ref" reference="emb1"}, we have $$\begin{aligned} (\int_\Omega w^{2^*_\lambda} dX)^\frac{2}{2^*_\lambda}\leq \frac{C\delta^2}{(2\delta-1)}\int_\Omega f(u_n+\epsilon)^{2\delta-1-\nu} \end{aligned}$$ Taking $\epsilon\rightarrow 0$ and using the Dominated Convergence Theorem, we have $$\begin{aligned} \label{T4} (\int_\Omega u_n^{2^*_\lambda\delta})^\frac{2}{2^*_\lambda}&\leq
\frac{C\delta^2}{(2\delta-1)}\int_\Omega fu_n^{2\delta-1-\nu} \end{aligned}$$ If $r=1$, choose $\delta=\frac{\nu+1}{2}$; from the previous inequality, $\{u_n\}$ is bounded in $L^s(\Omega)$ with $s=\frac{Q(\nu+1)}{(Q-2)}$.\ If $r>1$, choose $\delta$ in such a way that $(2\delta-1-\nu)r'=2^*_\lambda \delta$. Now, applying the Hölder inequality on the RHS of ([\[T4\]](#T4){reference-type="ref" reference="T4"}), we have $$\begin{aligned} (\int_\Omega u_n^{2^*_\lambda\delta})^\frac{2}{2^*_\lambda}&\leq \frac{C\delta^2}{(2\delta-1)}\|f\|_{L^r(\Omega)}(\int_\Omega u_n^{(2\delta-1-\nu)r'})^\frac{1}{r'}\\ &=\frac{C\delta^2}{(2\delta-1)}\|f\|_{L^r(\Omega)}(\int_\Omega u_n^{2^*_\lambda \delta})^\frac{1}{r'} \end{aligned}$$ As $1\leq r<\frac{2Q}{(Q+2)+\nu(Q-2)}<\frac{Q}{2}$, we get $\frac{2}{2^*_\lambda}>\frac{1}{r'}$. Hence, $\{u_n\}$ is bounded in $L^s(\Omega)$ with $s=2^*_\lambda \delta=\frac{Qr(\nu+1)}{(Q-2r)}$. Using the Hölder inequality in ([\[T3\]](#T3){reference-type="ref" reference="T3"}), we have $$\begin{aligned} \int_\Omega\langle A\nabla^*u_n,\nabla^*u_n\rangle (u_n+\epsilon)^{2\delta-2} dX\leq \frac{1}{(2\delta-1)}\|f\|_{L^r(\Omega)}(\int_\Omega (u_n+\epsilon)^{2^*_\lambda \delta})^\frac{1}{r'} \end{aligned}$$ Since $\{u_n\}$ is bounded in $L^s(\Omega)$, $$\int_\Omega\langle A\nabla^*u_n,\nabla^*u_n\rangle (u_n+\epsilon)^{2\delta-2} dX\leq C.$$ For $q=\frac{Qr(\nu+1)}{Q-r(1-\nu)}$, the above chosen $\delta$ satisfies the condition $(2-2\delta)q=(2-q)s$.\ So, $$\begin{aligned} \int_\Omega \langle A\nabla^*u_n,\nabla^*u_n\rangle^\frac{q}{2} dX&=\int_\Omega \frac{|\sqrt{A}\nabla^*u_n|^q}{(u_n+\epsilon)^{q-q\delta}}(u_n+\epsilon)^{q-\delta q} dX\\ &\leq(\int_\Omega\frac{|\sqrt{A}\nabla^*u_n|^2}{(u_n+\epsilon)^{2-2\delta}} dX)^\frac{q}{2} (\int_\Omega (u_n+\epsilon)^sdX)^{1-\frac{q}{2}} \end{aligned}$$ Since $\{u_n\}$ is bounded in $L^s(\Omega)$ and $\epsilon<\frac{1}{n}$, the sequence $\{u_n+\epsilon\}$ is bounded in $L^s(\Omega)$.
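The exponent identity $(2-2\delta)q=(2-q)s$ used above can be verified directly: dividing by $2q$, it is equivalent to $\delta=1-\frac{s}{q}+\frac{s}{2}$, and with $s=\frac{Qr(\nu+1)}{Q-2r}$ and $q=\frac{Qr(\nu+1)}{Q-r(1-\nu)}$ one computes (a routine check, added for completeness):

```latex
\frac{s}{q}=\frac{Q-r(1-\nu)}{Q-2r},
\qquad
1-\frac{s}{q}+\frac{s}{2}
  =\frac{-r(1+\nu)}{Q-2r}+\frac{Qr(1+\nu)}{2(Q-2r)}
  =\frac{r(1+\nu)(Q-2)}{2(Q-2r)},
```

which is precisely the $\delta$ determined by $(2\delta-1-\nu)r'=2^*_\lambda\delta$.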
Consequently, $\{u_n\}$ is bounded in $W^{1,\lambda,q}_0(\Omega)$. Hence $u\in W^{1,\lambda,q}_0(\Omega)$. ◻ # VARIABLE SINGULAR EXPONENT {#sec7} Consider the equation $$\begin{aligned} \label{Var1} -\Delta_\lambda u&=\frac{f}{u^{\nu(x)}} \text{ in }\Omega\nonumber\\ &u>0 \;\text{in}\; \Omega\\ &u=0 \;\text{on}\; \partial\Omega\nonumber\end{aligned}$$ where $\nu\in C^1(\overline{\Omega})$ is a positive function. **Theorem 19**. *Let $f\in L^{(2^*_\lambda)'}(\Omega)$ be a function. If there exists $K\Subset \Omega$ such that $0<\nu(x)\leq 1$ in $K^c$ (the complement of $K$), then ([\[Var1\]](#Var1){reference-type="ref" reference="Var1"}) has a unique solution in $H_0^{1,\lambda}(\Omega)$ provided $\lambda\geq 1$.* *Proof.* The same approximation used in the earlier sections yields the existence of a strictly positive function $u$, which is the increasing pointwise limit of the sequence $\{u_n\}\subset H^{1,\lambda}_0(\Omega)\cap L^\infty(\Omega)$. Moreover, Lemma [Lemma 15](#lem1){reference-type="ref" reference="lem1"} remains valid. As $K\Subset \Omega$, by Lemma [Lemma 15](#lem1){reference-type="ref" reference="lem1"} there exists $C>0$ such that $u_n(x)\geq C$ for a.e. $x\in K$ and for all $n\in\mathbb{N}$.
For each $n\in\mathbb{N}$, $u_n$ solves $$\begin{aligned} \label{var2} -\Delta_\lambda u_n&=\frac{f_n}{(u_n+\frac{1}{n})^{\nu(x)}} \text{ in }\Omega\nonumber\\ &u_n>0 \;\text{in}\; \Omega\\ &u_n=0 \;\text{on}\; \partial\Omega\nonumber\end{aligned}$$ By using the Hölder inequality and the embedding theorem, we have $$\begin{aligned} \int_\Omega \langle A\nabla^*u_n,\nabla^*u_n\rangle dX&=\int_\Omega \frac{f_nu_n}{(u_n+\frac{1}{n})^{\nu(x)}} dX\\ &=\int_K \frac{f_nu_n}{(u_n+\frac{1}{n})^{\nu(x)}} dX +\int_{\{K^c\cap\Omega\}} \frac{f_nu_n}{(u_n+\frac{1}{n})^{\nu(x)}} dX\\ &\leq ||\frac{1}{C^{\nu(x)}}||_\infty \int_K fu_n dX+ \int_{\{x\in K^c\cap\Omega: u_n(x)\leq 1\}} f u_n^{1-\nu(x)} dX + \int_{\{x\in K^c\cap\Omega: u_n(x)\geq 1\}} f u_n^{1-\nu(x)} dX\\ &\leq ||\frac{1}{C^{\nu(x)}}||_\infty \int_K fu_n dX+ \int_{\{x\in K^c\cap\Omega: u_n(x)\leq 1\}} f dX + \int_{\{x\in K^c\cap\Omega: u_n(x)\geq 1\}} f u_n dX\\ &\leq ||\frac{1}{C^{\nu(x)}}||_\infty ||f||_{L^{(2^*_\lambda)'}(\Omega)}||u_n||_{L^{2^*_\lambda}(\Omega)}+||f||_{L^1(\Omega)}+ ||f||_{L^{(2^*_\lambda)'}(\Omega)}||u_n||_{L^{2^*_\lambda}(\Omega)}\\ &\leq C||f||_{L^{(2^*_\lambda)'}(\Omega)}||u_n||_{H^{1,\lambda}_0(\Omega)}+ ||f||_{L^1(\Omega)}\end{aligned}$$ We obtain $$||u_n||_{H^{1,\lambda}_0(\Omega)}^2\leq C||f||_{L^{(2^*_\lambda)'}(\Omega)}||u_n||_{H^{1,\lambda}_0(\Omega)}+ ||f||_{L^1(\Omega)}.$$ Hence, $\{u_n\}$ is bounded in $H^{1,\lambda}_0(\Omega)$. Without loss of generality we can assume that $u_n$ converges weakly to $u$ in $H^{1,\lambda}_0(\Omega)$. Let $w\in C^1_c(\Omega)$. Using Lemma [Lemma 15](#lem1){reference-type="ref" reference="lem1"}, there exists $c>0$ such that $u_n\geq c$ for a.e. $x$ in $\text{supp}(w)$.
Since $u_n$ solves ([\[var2\]](#var2){reference-type="ref" reference="var2"}), we have $$\begin{aligned} \int_\Omega \langle A\nabla^*u_n,\nabla^* w\rangle dx&=\int_\Omega \frac{f_nw}{(u_n+\frac{1}{n})^{\nu(x)}} dx. \end{aligned}$$ Letting $n\to\infty$ and using the dominated convergence theorem, we get $$\int_\Omega \langle A\nabla^*u,\nabla^* w\rangle dx=\int_\Omega \frac{fw}{u^{\nu(x)}} dx.$$ Hence, $u$ is a solution of ([\[Var1\]](#Var1){reference-type="ref" reference="Var1"}). The proof of the uniqueness part is identical to the one given in Theorem [Theorem 7](#Th1){reference-type="ref" reference="Th1"}. ◻ **Theorem 20**. *Let $u$ be the solution of equation ([\[Var1\]](#Var1){reference-type="ref" reference="Var1"}) with $f\in L^r(\Omega)$, $r>\frac{Q}{2}$. Then $u\in L^\infty(\Omega)$, where $Q=(m+1)+\lambda m$.* *Proof.* The proof is similar to that of Theorem [Theorem 8](#Th2){reference-type="ref" reference="Th2"} and is omitted here. ◻
arXiv:2309.04857, "Semilinear degenerate elliptic equation in the presence of singular nonlinearity", by Kaushik Bal and Sanjit Biswas (math.AP). License: CC BY 4.0.
--- abstract: | We study various kinds of Grassmannians or Lagrangian Grassmannians over $\mathbb{R}$, $\mathbb{C}$ or $\mathbb{H}$, all of which can be expressed as $\mathbb{G}/\mathbb{P}$ where $\mathbb{G}$ is a classical group and $\mathbb{P}$ is a parabolic subgroup of $\mathbb{G}$ with abelian unipotent radical. The same Grassmannians can also be realized as (classical) compact symmetric spaces $G/K$. We give explicit generators and relations for the de Rham cohomology rings of $\mathbb{G}/\mathbb{P}\cong G/K$. At the same time we describe certain filtered deformations of these rings, related to Clifford algebras and spin modules. While the cohomology rings are of our primary interest, the filtered setting of $K$-invariants in the Clifford algebra actually provides a more conceptual framework for the results we obtain. address: - Department of Mathematics and Statistics, Fylde Building, Lancaster University, Lancaster, UK - Department of Mathematics, Aoyama Gakuin University, Fuchinobe 5-10-1, Chuo-ku, Sagamihara 252-5258, Japan - Department of Mathematics, Faculty of Science, University of Zagreb, Bijenička 30, 10000 Zagreb, Croatia author: - Kieran Calvert - Kyo Nishiyama - Pavle Pandžić bibliography: - references.bib title: Clifford algebras, symmetric spaces and cohomology rings of Grassmannians --- [^1] [^2] # Introduction {#sec intro} This paper is motivated by email correspondence between the third named author and Bert Kostant in 2004 [@kostantemail]. In his study of the action of Dirac operators on Harish-Chandra modules attached to a real reductive Lie group $G_\mathbb{R}$, the third named author was led to consider the algebra $C(\mathfrak{p})^K$. Here $K$ is a maximal compact subgroup of $G_\mathbb{R}$ corresponding to a Cartan decomposition $\mathfrak{g}=\mathfrak{k}\oplus\mathfrak{p}$ of the complexified Lie algebra of $G_\mathbb{R}$ and $C(\mathfrak{p})$ is the Clifford algebra of $\mathfrak{p}$ with respect to the (extended) Killing form. 
The third named author remembered hearing Kostant speak about the graded version of $C(\mathfrak{p})^K$, $(\textstyle{\bigwedge}\mathfrak{p})^K$, and its relation to cohomology, so he asked Kostant about the latter algebra, expecting that a description of the graded version of the algebra would give some information about the algebra $C(\mathfrak{p})^K$ he was interested in. Kostant replied as follows: From kostant\@math.mit.edu Mon Mar 15 21:07:54 2004 for \<pandzic\@math.hr\>; Mon, 15 Mar 2004 21:07:53 +0100 (MET) Subject: Re: a question Dear Pavle The following is known. In general (\\wedge p)k is the cochain complex whose cohomology is H(G/K). In the symmetric case (G/K is a symmetric space) the cobounbary operator is trivial so that (\\wedge p)k = H(G/K). If in addition rank K = rank G then all cohomology is even dimension and if e = Euler characteristic of G/K then of course dim (\\wedge p)k = e. Also in this case dim p is even and C(p) = End S where S is the spin module. Furthermore under the action of K S=V_1 + \.... + V_e where the V_i are irreducible K modules and all distinct. Thus C(p)K is an abelian (semisimple) algebra of dimension e. This is a special case of my result with Sternberg, Ramond and Gross (GKRS) in the PNAS. It is a nice unsolved problem to locate the 1-dimensional idempotents which project onto the various V\_ i. In case G/K is Hermitian symmetric these idempotents I believe correspond to the Schubert classes in (\\wedge p)k. If so one has some sort of generalization of Schubert classes when G/K is not Hermitian. Best regards Bert It turns out Kostant was not exactly right in thinking that the idempotents would correspond to the Schubert classes; in fact, they typically all have nonzero top degree term, and one must take their linear combinations in order to get a basis compatible with filtration. 
His intuition that this question is related to some sort of generalization of Schubert classes when $G/K$ is not Hermitian was however right; as we shall see, these cases correspond to various kinds of real or quaternionic Grassmannians which possess their own Schubert calculus. Our main goal in this paper is to describe the de Rham cohomology rings of these Grassmannians using their realization as compact symmetric spaces. The main tool is the representations of the Clifford algebras associated with symmetric spaces. When the Grassmannians are complex, the results we obtain here are well known. However, for the real and quaternionic Grassmannians, the results are not widely known. For example, the ring structure of the cohomology of the real Grassmannians was conjectured by Casian and Kodama [@Casian.Kodama.2013]. Recently, many related papers have appeared (see [@Rabelo.2016; @Lambert.Rabelo.2022; @Rabelo.SMartin.2019; @Matszangosz.2021] for integral cohomology, for example), but these papers treat the Grassmannians individually, and not in a uniform way. See also [@chen; @CHL; @esch; @HL] for identifications of compact symmetric spaces and Grassmannians. We study these cohomology rings systematically. The key ideas are the usage of Clifford algebras as mentioned above and the description of the Grassmannians as flag manifolds corresponding to maximal parabolic subgroups with abelian unipotent radicals. In fact, we start with the Grassmannian $\mathbb{G}/\mathbb{P}$, where $\mathbb{P}$ is a maximal parabolic subgroup with abelian unipotent radical in a reductive Lie group $\mathbb{G}$, then produce symmetric spaces of compact and noncompact type using three involutions $\theta, \sigma$ and $\tau = \theta \sigma$ of $\mathbb{G}$, which are mutually commuting. Our results describe the de Rham cohomology ring of each of the Grassmannians in our list by explicit generators and relations, and also give an explicit basis consisting of certain monomials. 
In most cases (including the well known complex Grassmannian cases) we show that our basis can be replaced by a basis consisting of certain Schur polynomials. These have the advantage of a rather well understood multiplication table, related to the Littlewood-Richardson coefficients. However our monomials with their clear structure of generators and relations also lead to an explicit multiplication table, as explained in Section [4](#coho symm){reference-type="ref" reference="coho symm"}. In this way we get an alternative approach to Schubert calculus on the Grassmannians in question. The paper is organized as follows. In Section [2](#sec gras){reference-type="ref" reference="sec gras"} we describe our Grassmannians $\mathbb{G}/\mathbb{P}$ and give their realizations as compact symmetric spaces. The cases are summarized in the table at the end of the introduction. In each case $\mathbb{P}$ is a maximal parabolic subgroup of $\mathbb{G}$ with abelian unipotent radical. Note that a Grassmannian is the set of certain subspaces of fixed dimension in a vector space $V$. On this set of subspaces, the automorphism group $\mathbb{G}$ of $V$, which typically preserves additional structure (quadratic or symplectic forms), acts transitively. This is justified by the following well known theorem by Witt. **Theorem 1**. *[@Witt], [@Bou Ch. 1-2], [@Die]. [\[witt\]]{#witt label="witt"} Let $V$ be a vector space over $\mathbb{R}$, $\mathbb{C}$ or $\mathbb{H}$ with a nondegenerate form $\langle\,,\rangle$ which is either bilinear symmetric, or bilinear skew symmetric, or Hermitian, or skew Hermitian. Let $U$ and $W$ be subspaces of $V$ and let $\varphi:U\to W$ be an isomorphism preserving $\langle\,,\rangle$. Then $\varphi$ extends to an automorphism of $V$ preserving $\langle\,,\rangle$.* Let $\mathbb{P}$ be the stabilizer in $\mathbb{G}$ of a standard subspace $U$ in our Grassmannian. (In some cases, $U$ is a Lagrangian or isotropic subspace. It depends on the situation.) 
Then $\mathbb{P}$ is a maximal parabolic subgroup and our Grassmannian is equal to $\mathbb{G}/\mathbb{P}$. So Grassmannians are naturally identified with (partial) flag manifolds $\mathbb{G}/\mathbb{P}$. As already mentioned, compact symmetric spaces are diffeomorphic to Grassmannians $\mathbb{G}/\mathbb{P}$ where $\mathbb{P}$ has abelian unipotent radical. In fact, these Grassmannians exhaust all such pairs $(\mathbb{G}, \mathbb{P})$ (see [@Wolf], [@howe § 5.5.1], and also [@RRS]). Then, in appropriate realizations, the Grassmannians are varieties of either ordinary subspaces of a vector space, or of Lagrangian subspaces with respect to a certain form. In this paper, the group $\mathbb{G}$ will be a classical group; in particular, it is a reductive matrix group, and we always consider the standard Cartan involution $$\label{cart inv} \theta(g)=(\bar g^t)^{-1},\quad g\in\mathbb{G}.$$ The corresponding maximal compact subgroup $\mathbb{G}^\theta$ will be denoted by $\mathbb{K}$ and also by $G$. We will use Proposition [Proposition 1](#k trans){reference-type="ref" reference="k trans"} below to see that $\mathbb{K}$ acts transitively on the Grassmannian, so that $\mathbb{G}/\mathbb{P}$ is diffeomorphic to $\mathbb{K}/\mathbb{P}\cap\mathbb{K}$. This follows easily from the fact that $\mathbb{P}$ contains a minimal parabolic subgroup $\mathbb{P}_0=\mathbb{M}\mathbb{A}\mathbb{N}_0$, and that $\mathbb{G}$ has an Iwasawa decomposition $\mathbb{G}=\mathbb{K}\mathbb{A}\mathbb{N}_0$, where $\mathbb{K}$ is as above. It is known by [@TK1968 §4] (see also [@kobayashi Lemma 7.3.1] and [@RRS]) that the following are equivalent: 1. $\mathbb{P}$ has abelian unipotent radical; 2. $\mathbb{P}$ has a Levi subgroup $\mathbb{L}$ which is a symmetric subgroup of $\mathbb{G}$; 3. $K := \mathbb{P}\cap\mathbb{K}=\mathbb{L}\cap\mathbb{K}$ is a symmetric subgroup of $G = \mathbb{K}$. 
We give a short and comprehensive proof of this result in Theorem [\[abel sym\]](#abel sym){reference-type="ref" reference="abel sym"} for the convenience of the reader. The involution $\sigma$ of $\mathbb{G}$ mentioned above is related to the above Levi subgroup $\mathbb{L}$ of $\mathbb{P}$: $\mathbb{L}$ is $\mathbb{G}^\sigma$, the subgroup of $\mathbb{G}$ consisting of points fixed by $\sigma$; we will see that $\mathbb{P}$ itself is also $\sigma$-stable. The involution $\sigma$, which we describe explicitly below, commutes with the Cartan involution $\theta$, and hence $K = \mathbb{L}^{\theta} = \mathbb{L}\cap \mathbb{K}$ is a maximal compact subgroup of $\mathbb{L}$. Let us denote by $\tau = \theta \sigma$ the third involution of $\mathbb{G}$. We denote the fixed point subgroup $\mathbb{G}^{\tau}$ by $G_{\mathbb{R}}$, so that $G_{\mathbb{R}}/ K$ is a noncompact Riemannian symmetric space with its compact dual equal to $G/K$. It turns out that in this way we get to cover the full list of compact classical symmetric spaces as listed e.g. in [@howe p.69]. We summarize the three involutions and symmetric spaces thus obtained in the following diagram. See also Table [\[tab:Grass_symspaces\]](#tab:Grass_symspaces){reference-type="ref" reference="tab:Grass_symspaces"} at the end of the Introduction. 
$$\vcenter{ \xymatrix @R-.3ex @M+.5ex @C-3ex @L+.5ex @H+1ex { & \ar@{-}[ld]_{\sigma} \ar@{-}[d]_{\theta} \makebox[3ex][c]{$\mathbb{G}$} \ar@{-}[rd]^{\tau = \theta \sigma} & \\ \ar@{-}[rd]_(.4){\tau} \makebox[3ex][r]{$\mathbb{L}= \mathbb{G}^{\sigma}$} & \ar@{-}[d]_{\sigma} \makebox[9ex][c]{$\mathbb{K}= \mathbb{G}^{\theta} = G$} & \makebox[3ex][l]{$ \mathbb{G}^{\tau} = G_{\mathbb{R}} $} \ar@{-}[ld]^(.3){\theta} \\ & \makebox[9ex][c]{$K = \mathbb{L}\cap \mathbb{K}= G \cap G_{\mathbb{R}}$} & }} \qquad\qquad \begin{aligned} \mathbb{P}= \mathbb{L}\mathbb{N}:\ &\text{$\sigma $-stable parabolic subgroup} \\ &\text{with abelian unipotent radical} \\ \tau = \theta & \text{ on } \mathbb{L} \\ \theta = \sigma & \text{ on } G_{\mathbb{R}} \\ \sigma = \tau & \text{ on } \mathbb{K}= G \end{aligned}$$ Since $\mathbb{P}$ is block upper triangular with two diagonal blocks, of sizes (say) $p$ and $q$, $\mathbb{L}$ can be taken as the block diagonal part of $\mathbb{P}$. Now if we denote by $I_{p,q}$ the matrix $\left(\begin{smallmatrix}I_p&0\cr 0&-I_q\end{smallmatrix}\right)$, then $$I_{p,q}\begin{pmatrix}A&B \cr C&D\end{pmatrix}I_{p,q}=\begin{pmatrix}A&-B \cr -C&D\end{pmatrix},$$ and we see that the involution we want is $$\label{def sigma} \sigma(g)=I_{p,q}g I_{p,q}.$$ It follows that $\mathbb{P}$ is $\sigma$-stable, and it will now be very easy to identify the groups $\mathbb{L}=\mathbb{G}^\sigma$. (In fact, we will see in Section 2 that $\sigma$ can be described in terms of $\mathbb{P}$ only; see Theorem [\[abel sym\]](#abel sym){reference-type="ref" reference="abel sym"}.) It turns out that the complexifications of these groups are exactly the groups listed in [@howe p. 70] as the groups acting in a skew-multiplicity free way on $\mathfrak{p}$, the complexified tangent space of $G/K \simeq \mathbb{G}/ \mathbb{P}$ at the base point $e K$. 
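The displayed identity for $\sigma(g)=I_{p,q}\,g\,I_{p,q}$ can be checked numerically in a few lines. The following sketch is our illustration (NumPy, with our own variable names); it verifies that $\sigma$ is an involution, fixes block-diagonal matrices, and changes the sign of the off-diagonal blocks:

```python
import numpy as np

p, q = 2, 3
I_pq = np.diag([1] * p + [-1] * q)  # the matrix I_{p,q} from the text

def sigma(g):
    # the involution sigma(g) = I_{p,q} g I_{p,q}
    return I_pq @ g @ I_pq

rng = np.random.default_rng(0)
g = rng.standard_normal((p + q, p + q))

# sigma is an involution: applying it twice gives back g
assert np.allclose(sigma(sigma(g)), g)

# a block-diagonal matrix (blocks of sizes p and q) is fixed by sigma
blk = np.zeros((p + q, p + q))
blk[:p, :p] = rng.standard_normal((p, p))
blk[p:, p:] = rng.standard_normal((q, q))
assert np.allclose(sigma(blk), blk)

# the off-diagonal blocks B and C change sign, as in the displayed identity
assert np.allclose(sigma(g)[:p, p:], -g[:p, p:])
assert np.allclose(sigma(g)[p:, :p], -g[p:, :p])
```

In particular, the fixed points of $\sigma$ among all matrices are exactly the block-diagonal ones, which is how the Levi subgroups $\mathbb{L}=\mathbb{G}^\sigma$ are identified below.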
Here "skew-multiplicity free\" means that $\mathbb{L}_\mathbb{C}$ acts on $\textstyle{\bigwedge}\mathfrak{p}$ in a multiplicity free way (note that $\mathfrak{p}$ can also be identified with the complexification of the Lie algebra of $\mathbb{N}$, so that $\mathbb{L}$ acts on it naturally). We will also describe the groups $G_\mathbb{R}=\mathbb{G}^{\tau}$, where $\tau = {\sigma\theta}$. Note that by [\[cart inv\]](#cart inv){reference-type="eqref" reference="cart inv"} and [\[def sigma\]](#def sigma){reference-type="eqref" reference="def sigma"}, we get $$\label{sigma theta} \sigma \theta(g)=I_{p,q}(\bar g^t)^{-1}I_{p,q}.$$ The group $G_\mathbb{R}$ is a noncompact reductive Lie group with maximal compact subgroup $K=\mathbb{K}^\sigma$, and $G/K=\mathbb{K}/\mathbb{K}^{\sigma}$ is the compact dual of the noncompact Riemannian symmetric space $G_\mathbb{R}/K$ as explained above. In this way, we get to cover the full list of noncompact classical symmetric spaces. As in the Hermitian symmetric case, the noncompact Riemannian symmetric space $G_\mathbb{R}/K$ is embedded into $\mathbb{G}/\mathbb{P}$ as an open subset (a generalization of the Borel embedding). The realizations of our Grassmannians as symmetric spaces are known in most (or all) cases, but the results are scattered in the literature. We will indicate some references when we get to the case by case analysis. Our view point is to produce the classical Riemannian symmetric spaces, both compact and noncompact ones, in terms of the pairs $(\mathbb{G}, \mathbb{P})$ on our list. In Section [3](#sec spin){reference-type="ref" reference="sec spin"} we collect some facts needed for our description of the cohomology rings of compact symmetric spaces $G/K$ as above (and thus also of the corresponding Grassmannians $\mathbb{G}/\mathbb{P}$). For $\sigma$ as above, its restriction to $G$ is an involution such that $K=G^\sigma$. 
Let $\mathfrak{g}=\mathfrak{k}\oplus\mathfrak{p}$ be the decomposition of the complexified Lie algebra $\mathfrak{g}$ of $G$ into eigenspaces of $\sigma$. We are assuming $G$ is connected, but $K$ need not be connected. As noted in Kostant's email message, if $\mathfrak{g}$ and $\mathfrak{k}$ have equal rank, then the algebras $C(\mathfrak{p})^K$ and $(\textstyle{\bigwedge}\mathfrak{p})^K$ can be expressed as $$C(\mathfrak{p})^K= \operatorname{Pr}(S);\qquad (\textstyle{\bigwedge}\mathfrak{p})^K= \operatorname{gr}\operatorname{Pr}(S),$$ where the algebra $\operatorname{Pr}(S)$ is spanned by the projections of the spin module $S$ to its isotypic components for the pin double cover $\widetilde{K}$ of $K$. As explained in Subsections [3.1](#K dec general){reference-type="ref" reference="K dec general"} and [3.3](#cpk eq rk){reference-type="ref" reference="cpk eq rk"}, one can use the natural map $\alpha:U(\mathfrak{k})\to C(\mathfrak{p})$ that gives the spin module its $\mathfrak{k}$-module structure, and the fact that the projections are given by the action of the center of $U(\mathfrak{k})$, to express the algebra $\operatorname{Pr}(S)$ as the quotient of $\mathbb{C}[\mathfrak{t}^*]^{W_K}$ by the ideal generated by the elements of $\mathbb{C}[\mathfrak{t}^*]^{W_G}$ vanishing at $\rho$. Here $\mathfrak{t}$ is the complexification of a Cartan subalgebra of the Lie algebra of $K$, $W_K$ is the Weyl group of $K$ (see [\[gp W gp\]](#gp W gp){reference-type="eqref" reference="gp W gp"}), $W_G=W_\mathfrak{g}$ is the Weyl group of $G$ or equivalently the Weyl group of the root system $\Delta(\mathfrak{g},\mathfrak{t})$, and $\rho$ is the half sum of roots in the (fixed) positive root system $\Delta^+(\mathfrak{g},\mathfrak{t})$. 
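To make the quotient description concrete, consider the smallest equal rank case $G/K=\operatorname{U}(2)/\operatorname{U}(1)\times\operatorname{U}(1)\cong\mathbb{C}P^1$. Here $W_K$ is trivial, $W_G=S_2$, and $\rho=(\tfrac12,-\tfrac12)$. The following sketch is our worked example (SymPy, our variable names): it computes $\operatorname{Pr}(S)$ as functions on the vanishing set of the ideal, recovering an abelian semisimple algebra of dimension $2$, the Euler characteristic of $\mathbb{C}P^1$, as in Kostant's email:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
rho = (sp.Rational(1, 2), -sp.Rational(1, 2))  # rho for u(2)

# W_G = S_2 permutes coordinates; its invariants are generated by e1, e2.
# W_K is trivial for K = U(1) x U(1), so we work in C[x1, x2].
e1, e2 = x1 + x2, x1 * x2
relations = [e1 - (rho[0] + rho[1]), e2 - rho[0] * rho[1]]

# Pr(S) = C[x1, x2] / (relations): the ring of functions on the solution set
points = sp.solve(relations, [x1, x2], dict=True)
print(len(points))  # 2 simple points => Pr(S) abelian semisimple of dim 2
```

Replacing $\rho$ by $0$ gives the graded version: the relations become $x_1+x_2=0$ and $x_1x_2=0$, so the quotient is $\mathbb{C}[x]/(x^2)$, the cohomology ring of $\mathbb{C}P^1$.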
The algebra $\operatorname{gr}\operatorname{Pr}(S)$ attached to the natural filtration of $\operatorname{Pr}(S)$ can be expressed as the quotient of $\mathbb{C}[\mathfrak{t}^*]^{W_K}$ by the ideal generated by the elements of $\mathbb{C}[\mathfrak{t}^*]^{W_G}$ vanishing at $0$. In the "almost equal rank case\" $(\mathfrak{g},\mathfrak{k})=(\mathfrak{so}(2p+2q+2),\mathfrak{so}(2p+1)\times\mathfrak{so}(2q+1))$, the algebras $C(\mathfrak{p})^K$ and $(\textstyle{\bigwedge}\mathfrak{p})^K$ are very close to $\operatorname{Pr}(S)$ and $\operatorname{gr}\operatorname{Pr}(S)$; one has to tensor with the Clifford respectively exterior algebra of a certain one-dimensional space. This is explained in Subsections [3.5](#subsec so odd){reference-type="ref" reference="subsec so odd"} and [3.7](#subsec so/soo){reference-type="ref" reference="subsec so/soo"}. On the opposite end are the primary and almost primary cases; in these cases the $\widetilde{K}$-module $S$ has only one isotypic component and the algebra of projections is trivial. These cases are described in Subsections [3.6](#subsec primary and aprim){reference-type="ref" reference="subsec primary and aprim"} and [3.8](#subsec u/o){reference-type="ref" reference="subsec u/o"}. The algebra $(\textstyle{\bigwedge}\mathfrak{p})^K$ is now equal to the exterior algebra of a certain subspace of $(\textstyle{\bigwedge}{\mathfrak p})^K$ (denoted by $\mathcal P_\wedge(\mathfrak{p})$), where $\mathcal P_\wedge(\mathfrak{p})$ is the subspace orthogonal to the square of the augmentation ideal (Definition [Definition 1](#def samspace){reference-type="ref" reference="def samspace"}). This result is due to Hopf and Samelson [@hopf1941; @Samelson41] for the group case and Theorem [Theorem 1](#thm prim and aprim alg){reference-type="ref" reference="thm prim and aprim alg"} for the remaining (almost) primary cases. Similar isomorphisms go way back to Cartan, Chevalley, Koszul and others; see [@CCCvol3 p. 568]. 
We believe that likewise $C(\mathfrak{p})^K$ is the Clifford algebra over $\mathcal P_\wedge(\mathfrak{p})$, but this is currently known (by the results of Kostant [@K97]) only in the group cases, i.e., when $\mathfrak{g}=\mathfrak{g}_1\oplus\mathfrak{g}_1$ and $\mathfrak{k}\cong\mathfrak{p}\cong\mathfrak{g}_1$. Finally, in Section [4](#coho symm){reference-type="ref" reference="coho symm"} we give a precise description, by generators and relations, of the algebras $(\textstyle{\bigwedge}\mathfrak{p})^K$ in each of the cases. In the equal rank and almost equal rank cases, we also describe the algebras $C(\mathfrak{p})^K$, which in these cases amounts to describing the algebras $\operatorname{Pr}(S)$. We also give explicit bases for these algebras. In this way we get to compute the de Rham cohomology of the symmetric spaces on our list, and thus also of the corresponding Grassmannians. Namely, as mentioned in Kostant's email message, the cohomology of the compact symmetric space $G/K$ is (after complexification) equal to $(\textstyle{\bigwedge}\mathfrak{p})^K$. This fact is quite well known, but it is not easy to find an appropriate reference. It is proved in [@taylor] (unpublished) using Hodge theory, partially proved in [@leung], and proved in [@CCCvol3] under the assumption $K$ is connected. It is also mentioned in passing in [@howe p. 69], and in [@BW § 1.6]. Borel and Wallach [@BW] attribute the result to É. Cartan and de Rham. We start Section [4](#coho symm){reference-type="ref" reference="coho symm"} by presenting a simple proof of this fact which we learned from Sebastian Goette [@goette]. We are thus led to study the appropriate quotients of the $W_K$-invariants in $\mathbb{C}[\mathfrak{t}^*]$ in each of the cases. The results often involve the following algebra. **Definition 1**. Let $p,q\in \mathbb{Z}$ with $1\leq p\leq q$, and let $c=(c_1,\dots,c_{p+q})\in\mathbb{C}^{p+q}$. 
We define $\mathfrak{H}(p,q;c)$ to be the algebra generated by $r_1,\dots,r_p$ and $s_1,\dots,s_q$ with relations generated by $$\sum_{i,j\geq 0;\ i+j=k}r_is_j=c_k$$ for $k=1,\dots,p+q$, where we set $r_0=s_0=1$ and $r_i=0$ if $i>p$, $s_j=0$ if $j>q$. We can use the first $q$ of the relations to express $s_1,\dots,s_q$ in terms of the $r_i$, so $\mathfrak{H}(p,q;c)$ is in fact generated by $r_1,\dots,r_p$ only. The remaining relations can be used to obtain the relations among the $r_i$, not involving the $s_j$. We do that in the proof of Theorem [Theorem 1](#gen rel basis){reference-type="ref" reference="gen rel basis"}; the relations among the $r_i$ are [\[rel explic 1\]](#rel explic 1){reference-type="eqref" reference="rel explic 1"} and [\[rel explic other\]](#rel explic other){reference-type="eqref" reference="rel explic other"} and they form another set of defining relations. From these relations one can obtain expressions for each monomial in $r_1,\dots,r_p$ of degree $q+1$ as a linear combination of lower degree monomials in $r_1,\dots,r_p$. We will also see that the monomials in the $r_i$ of degree at most $q$ form a basis of the algebra $\mathfrak{H}(p,q;c)$; so $\mathfrak{H}(p,q;c)$ can be identified with the space $$\mathbb{C}[r_1,\dots,r_p]_{\leq q}$$ of polynomials in the $r_i$ of degree $\leq q$. We show in Remark [Remark 1](#rem schur){reference-type="ref" reference="rem schur"} that the above monomials span the same subspace of $\mathbb{C}[r_1,\dots,r_p]$ as the Schur polynomials $s_\lambda$ attached to partitions $\lambda$ with Young diagrams contained in the $p\times q$ box. Moreover, our basis consisting of monomials and the basis consisting of Schur polynomials are connected by a triangular change of basis. In this way we get a connection with the usual Schubert calculus, where the multiplication of the Schur polynomials is given in terms of Littlewood-Richardson coefficients. 
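The elimination procedure just described is easy to carry out by machine in small cases. The following sketch is our illustration (SymPy, with $c=0$ and $p=1$): the first $q$ relations give $s_k=(-r)^k$, and the last relation then yields $r^{q+1}=0$ up to sign, so $\mathfrak{H}(1,q;0)\cong\mathbb{C}[r]/(r^{q+1})$, the familiar cohomology ring of $\mathbb{C}P^q$:

```python
import sympy as sp

p, q = 1, 3
r = sp.symbols('r')
rs = {0: sp.Integer(1), 1: r}  # r_0 = 1, r_1 = r; r_i = 0 for i > p

def rcoef(i):
    return rs.get(i, sp.Integer(0))

# first q relations (c = 0): solve sum_{i+j=k} r_i s_j = 0 for s_k
s = {0: sp.Integer(1)}
for k in range(1, q + 1):
    s[k] = sp.expand(-sum(rcoef(i) * s[k - i] for i in range(1, k + 1)))

# the remaining relation, k = q + 1, involves the r_i alone (s_{q+1} = 0)
k = q + 1
relation = sp.expand(sum(rcoef(i) * s.get(k - i, sp.Integer(0))
                         for i in range(0, k + 1)))
print(relation)  # prints -r**4, i.e. the relation r**(q+1) = 0
```

For general $p$ the same loop, run over symbols $r_1,\dots,r_p$, produces the defining relations among the $r_i$ mentioned above.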
For $G/K=\operatorname{U}(p+q)/\operatorname{U}(p)\times \operatorname{U}(q)$, we prove in Theorem [Theorem 1](#gen rel basis){reference-type="ref" reference="gen rel basis"} that $C(\mathfrak{p})^K$ is isomorphic to the algebra $\mathfrak{H}(p,q;c)$, with the $r_i$ being the elementary symmetric functions in the first $p$ coordinate functions $x_1,\dots,x_p$ on $\mathfrak{t}\cong\mathbb{C}^{p+q}$, the $s_j$ being the elementary symmetric functions in the last $q$ coordinate functions $x_{p+1},\dots,x_{p+q}$ on $\mathfrak{t}$, and $c=(t_1(\rho),\dots,t_{p+q}(\rho))$, where $t_k$ are the elementary symmetric functions on $x_1,\dots,x_{p+q}$. The natural filtration on $C(\mathfrak{p})^K$ coming from the filtration of $C(\mathfrak{p})$ corresponds to the filtration on $\mathfrak{H}(p,q;c)$ obtained by setting $$\label{set deg upq} \deg r_i=2i,\qquad i=1,\dots,p.$$ The cohomology ring $(\textstyle{\bigwedge}\mathfrak{p})^K$ is isomorphic to $\mathfrak{H}(p,q;0)$, and its natural grading is again obtained by [\[set deg upq\]](#set deg upq){reference-type="eqref" reference="set deg upq"}. For $G/K=\operatorname{Sp}(p+q)/\operatorname{Sp}(p)\times \operatorname{Sp}(q)$ (Theorem [Theorem 1](#gen rel basis B/BxB){reference-type="ref" reference="gen rel basis B/BxB"}), the algebra $C(\mathfrak{p})^K$ is again $\mathfrak{H}(p,q;c)$, but now the $r_i$, the $s_j$ and the $t_k$ are elementary symmetric functions on the squares of the appropriate variables. The parameter $c$ is again given by evaluating $t_k$ at $\rho$. The filtration is now obtained by setting $\deg r_i=4i$, $i=1,\dots,p$. The cohomology ring $(\textstyle{\bigwedge}\mathfrak{p})^K$ is again isomorphic to $\mathfrak{H}(p,q;0)$, and its natural grading is also obtained by setting $\deg r_i=4i$. 
For $G/K=\operatorname{SO}(k+m)/S(\operatorname{O}(k)\times \operatorname{O}(m))$ (Theorem [Theorem 1](#gen rel basis SOk+m){reference-type="ref" reference="gen rel basis SOk+m"}), the algebra $C(\mathfrak{p})^K$ is $\mathfrak{H}(p,q;c)$ if $(k,m)=(2p,2q)$ or $(2p,2q+1)$, with $\{r_i\}$, $\{s_j\}$, $\{t_k\}$ and $c$ defined similarly as above. If $(k,m)=(2p+1,2q+1)$ (the almost equal rank case), there is an extra generator $e$, of degree $2p+2q+1$, squaring to $1$. The filtration degrees of the $r_i$ are again equal to $4i$. The cohomology ring $(\textstyle{\bigwedge}\mathfrak{p})^K$ is isomorphic to $\mathfrak{H}(p,q;0)$ or $\mathfrak{H}(p,q;0)\oplus\mathfrak{H}(p,q;0)e$, and its natural grading is also obtained by setting $\deg r_i=4i$ and $\deg e=2p+2q+1$. In this case we get to prove the conjecture of Casian-Kodama [@Casian.Kodama.2013]. For $G/K=\operatorname{U}(n)/\operatorname{O}(n)$ (Theorem [Theorem 1](#thm prim and aprim alg){reference-type="ref" reference="thm prim and aprim alg"}), the situation is different: the algebra $(\textstyle{\bigwedge}\mathfrak{p})^K$ is the exterior algebra on the subspace $\mathcal P_\wedge(\mathfrak{p})$ (Definition [Definition 1](#def samspace){reference-type="ref" reference="def samspace"}), and the degrees are given in Table [1](#tab degrees prim){reference-type="ref" reference="tab degrees prim"}. For $G/K=\operatorname{Sp}(n)/\operatorname{U}(n)$ (Theorem [Theorem 1](#thm basis C/A){reference-type="ref" reference="thm basis C/A"}), we are back to elementary symmetric functions: $r_1,\dots,r_n$ are the elementary symmetric functions on the coordinate functions $x_1,\dots,x_n$ on the Cartan subalgebra $\mathfrak{t}\cong\mathbb{C}^n$ of $\mathfrak{k}$, while $t_1,\dots,t_n$ are the elementary symmetric functions on the squares of the $x_i$. 
The algebras $C(\mathfrak{p})^K$ and $(\textstyle{\bigwedge}\mathfrak{p})^K$ are generated by the $r_i$, but the relations are now different: $$r_k^2=t_k+2r_{k-1}r_{k+1}-2r_{k-2}r_{k+2}+\dots,\quad k=1,\dots,n,$$ where as usual we set $r_0=1$ and $r_i=0$ for $i>n$ or $i<0$, and where $t_k$ should be replaced by $t_k(\rho)$ if the algebra is $C(\mathfrak{p})^K$ and by $0$ if the algebra is $(\textstyle{\bigwedge}\mathfrak{p})^K$. This time a basis for each of our algebras is given by the monomials $$r_1^{{\varepsilon}_1}r_2^{{\varepsilon}_2}\dots r_n^{{\varepsilon}_n},\quad {\varepsilon}_i\in\{0,1\}.$$ The filtration degree of $C(\mathfrak{p})^K$ inherited from $C(\mathfrak{p})$, and the gradation degree of $(\textstyle{\bigwedge}\mathfrak{p})^K$ inherited from $\textstyle{\bigwedge}\mathfrak{p}$, are obtained by setting $\deg r_i=2i$ for $i=1,\dots,n$. For $G/K=\operatorname{SO}(2n)/\operatorname{U}(n)$ (Theorem [Theorem 1](#thm basis D/A){reference-type="ref" reference="thm basis D/A"}) the situation is entirely analogous to the case $G/K=\operatorname{Sp}(n)/\operatorname{U}(n)$, except that we get to eliminate $r_n$ from the list of generators. For the group cases $G\times G/\Delta G\cong G$ where $G$ is $\operatorname{SO}(n)$, $\operatorname{U}(n)$ or $\operatorname{Sp}(n)$ (Theorem [\[gen rel basis group\]](#gen rel basis group){reference-type="ref" reference="gen rel basis group"}), the algebras $C(\mathfrak{p})^K\cong C(\mathfrak{g})^\mathfrak{g}$ and $(\textstyle{\bigwedge}\mathfrak{p})^K\cong(\textstyle{\bigwedge}\mathfrak{g})^\mathfrak{g}$ are the Clifford respectively exterior algebras of the graded subspace $\mathcal P_\wedge(\mathfrak{p})$ (Definition [Definition 1](#def samspace){reference-type="ref" reference="def samspace"}). For the Clifford algebras, these cases were settled by Kostant in [@K97]. 
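The quadratic relations above rest on a classical identity of symmetric functions: if $r_i=e_i(x_1,\dots,x_n)$ and $t_k=e_k(x_1^2,\dots,x_n^2)$, then $r_k^2=t_k+2r_{k-1}r_{k+1}-2r_{k-2}r_{k+2}+\dots$ holds identically, as one sees by comparing coefficients of $t^{2k}$ in $E(t)E(-t)=\prod_i(1-x_i^2t^2)$. A quick symbolic check for small $n$ (our sketch, SymPy):

```python
from itertools import combinations
import sympy as sp

n = 3
xs = sp.symbols(f'x1:{n + 1}')

def e(k, vs):
    # elementary symmetric polynomial e_k(vs); e_0 = 1, e_k = 0 out of range
    if k == 0:
        return sp.Integer(1)
    if k < 0 or k > len(vs):
        return sp.Integer(0)
    return sp.Add(*[sp.Mul(*c) for c in combinations(vs, k)])

r = lambda i: e(i, xs)                   # r_i = e_i(x_1, ..., x_n)
t = lambda k: e(k, [x**2 for x in xs])   # t_k = e_k(x_1^2, ..., x_n^2)

for k in range(1, n + 1):
    # t_k + 2 r_{k-1} r_{k+1} - 2 r_{k-2} r_{k+2} + ...
    rhs = t(k) + sum(2 * (-1)**(m + 1) * r(k - m) * r(k + m)
                     for m in range(1, n + 1))
    assert sp.expand(r(k)**2 - rhs) == 0
print("identity verified for n =", n)
```

In $C(\mathfrak{p})^K$ one then substitutes $t_k(\rho)$ for $t_k$, and in $(\textstyle{\bigwedge}\mathfrak{p})^K$ one substitutes $0$.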
For the exterior algebras, the fact that $(\textstyle{\bigwedge}{\mathfrak p})^{\mathfrak k}$ is isomorphic to the exterior algebra of a graded subspace goes back to Cartan, Chevalley, Koszul and others [@CCCvol3]. The degrees are given in Table [1](#tab degrees prim){reference-type="ref" reference="tab degrees prim"}. For the case $G/K=\operatorname{U}(2n)/\operatorname{Sp}(n)$ (Theorem [Theorem 1](#gen rel basis A/C){reference-type="ref" reference="gen rel basis A/C"}) the algebra $(\textstyle{\bigwedge}\mathfrak{p})^K$ is the exterior algebra of the graded subspace $\mathcal P_\wedge(\mathfrak{p})$ (Definition [Definition 1](#def samspace){reference-type="ref" reference="def samspace"}). Again, the degrees are given in Table [1](#tab degrees prim){reference-type="ref" reference="tab degrees prim"}. Our results are well known for complex Grassmannians, i.e., for $\operatorname{Gr}_p(\mathbb{C}^{p+q})\cong\operatorname{U}(p+q)/\operatorname{U}(p)\times\operatorname{U}(q)$, $\operatorname{Gr}_2(\mathbb{R}^{2+q})\cong\operatorname{SO}(2+q)/S(\operatorname{O}(2)\times\operatorname{O}(q))$, $\operatorname{LGr}(\mathbb{C}^{2n})\cong\operatorname{Sp}(n)/\operatorname{U}(n)$ and $\operatorname{OLGr}^+(\mathbb{C}^{2n})\cong\operatorname{SO}(2n)/\operatorname{U}(n)$. Among many papers dealing with the complex Grassmannians and their Schubert calculus, we mention [@Fulton.YT.1997], [@Fulton.Pragacz.1998], [@Tamvakis2005; @TamvakisArbeitTalk2001], and [@Pragacz.1991; @Pragacz.1996; @Pragacz.2003]. 
$$\begin{array}{l l} &\text{Key}: \quad n = p + q, \\&\mathbb{P}: \text{ maximal parabolic subgroup with abelian nilpotent radical}, \quad \mathbb{P}= \mathbb{L}\ltimes \mathbb{N}: \text{ Levi decomposition}, \\ &\mathbb{L}= \mathbb{G}^\sigma : \text{ Levi of } \mathbb{P}, \quad G = \mathbb{K}= \mathbb{G}^{\theta} \subset \mathbb{G}: \text{ maximal compact}, \quad G_{\mathbb{R}} = \mathbb{G}^{\sigma\theta} : \text{ noncompact real group}, \\ & K = G_{\mathbb{R}}^\theta = \mathbb{G}^{\sigma, \theta} \subset G_{\mathbb{R}} : \text{ maximal compact subgroup,}\quad \mathrm{Lie}(\mathbb{N}) \simeq \mathfrak{p}_0 = \mathrm{Lie}(G)^{-\sigma}, \\ & K = \mathbb{K}\cap \mathbb{P}\:(\text{except for }\mathbb{G}= \operatorname{SO}_{n+2}(\mathbb{C}) \text{ when }K_e = \mathbb{K}\cap \mathbb{P}\text{ and } \mathbb{G}/\mathbb{P}\simeq G /K_e) , \\& \text{Compact symmetric space: } G / K = \mathbb{K}/K \simeq \mathbb{G}/ \mathbb{P}\text{ Grassmannian}, \\& L_0 : \text{ maximal Lagrangian subspace }, \quad P_1(Q_n(\mathbb{F})) : \text{ stabiliser of a point in quadric}, \\ &\mathrm{Her}_n(\mathbb{F}) : \text{ Hermitian matrices}, \quad \mathrm{SHer}_n(\mathbb{F}) : \text{ skew Hermitian matrices}, \\& \operatorname{U}_{p,q}(\mathbb{F}) = \operatorname{O}({p,q}) \text{ if } \mathbb{F}=\mathbb{R}, \:\: \operatorname{U}({p,q}) \text{ if } \mathbb{F} = \mathbb{C}, \: \: \operatorname{Sp}({p,q}) \text{ if } \mathbb{F} = \mathbb{H}. 
\end{array}$$ # Realization of certain Grassmannians as compact symmetric spaces {#sec gras} ## Some general facts {#structure} Let $\mathbb{G}$ be one of the groups in Table 1; note that the corresponding symmetric spaces $G/K$ and $G_\mathbb{R}/K$ exhaust the list of classical symmetric spaces given in [@hel Ch.9, Sec.4]. Let $\mathbb{P}$ be the parabolic subgroup of $\mathbb{G}$ described in Table 1. Then $\mathbb{P}$ has a Levi decomposition $\mathbb{P}=\mathbb{L}\mathbb{N}$ specified in Table 1. (As we shall see in the case by case analysis in the subsequent subsections, $\mathbb{P}$ consists of the block upper triangular matrices in $\mathbb{G}$ with two diagonal blocks, while $\mathbb{L}$ consists of the block diagonal matrices in $\mathbb{P}$.) Let $\mathfrak{P}=\mathfrak{L}\oplus \mathfrak{N}$ be the corresponding decomposition of the Lie algebra of $\mathbb{P}$. The opposite parabolic subalgebra is $\mathfrak{P}^-=\mathfrak{L}\oplus\mathfrak{N}^-=\mathfrak{L}\oplus\theta\mathfrak{N}$, where the differential of $\theta$ is still denoted by $\theta$. The parabolic subgroup $\mathbb{P}$ contains a minimal parabolic subgroup $\mathbb{P}_0=\mathbb{M}\mathbb{A}\mathbb{N}_0$ corresponding to an Iwasawa decomposition $\mathbb{G}=\mathbb{K}\mathbb{A}\mathbb{N}_0$. The Levi subgroup $\mathbb{L}$ of $\mathbb{P}$ contains $\mathbb{M}\mathbb{A}$, while the unipotent radical $\mathbb{N}$ of $\mathbb{P}$ is contained in $\mathbb{N}_0$. Let $\mathfrak{G},\mathfrak{A},\mathfrak{M}$ be the Lie algebras of $\mathbb{G},\mathbb{A},\mathbb{M}$. Recall that $\mathfrak{P}$ can be constructed by taking a subset of simple $(\mathfrak{G},\mathfrak{A})$ roots. Then one generates a root subsystem by these simple roots, which defines $\mathfrak{L}$ as the span of $\mathfrak{M}\oplus\mathfrak{A}$ and the root spaces for the roots in this subsystem, and defines $\mathfrak{N}$ to be the span of the root spaces for the remaining positive roots. **Lemma 1**. 
*(1) With the above notation, suppose that $\gamma,\delta$ are roots of $\mathfrak{N}$ such that $\gamma+\delta$ is a root (hence a root of $\mathfrak{N}$). Then $[\mathfrak{G}_\gamma,\mathfrak{G}_\delta]\neq 0$.* *(2) Suppose $\mathfrak{N}$ is abelian. Then $[\mathfrak{N},\mathfrak{N}^-]=[\mathfrak{N},\theta\mathfrak{N}]$ is contained in $\mathfrak{L}$.* *Proof.* (1) Let $\delta-k\gamma,\dots,\delta,\delta+\gamma,\dots,\delta+n\gamma$ be the $\gamma$-string of roots through $\delta$, with $k\geq 0$ and $n\geq 1$. Let $e\in\mathfrak{G}_\gamma$ be nonzero. By [@beyond Proposition 6.52] there is an $\mathfrak{sl}_2$-triple $e,h,f$ with $h\in\mathfrak{A}$ and $f\in\mathfrak{G}_{-\gamma}$. Now $\mathfrak{G}_{\delta-k\gamma}\oplus\dots\oplus\mathfrak{G}_\delta\oplus\mathfrak{G}_{\delta+\gamma}\oplus\dots\oplus\mathfrak{G}_{\delta+n\gamma}$ is a representation of the $\mathfrak{sl}_2$ spanned by $e,h,f$, and since $\mathfrak{G}_\delta$ and $\mathfrak{G}_{\delta+\gamma}$ are both nonzero, the action of $e$ between them cannot be zero. This implies (1). \(2\) Assume that $[\mathfrak{N},\theta\mathfrak{N}]$ is not contained in $\mathfrak{L}$. Then there are root vectors $x\in\mathfrak{G}_\alpha,\, y\in\mathfrak{G}_\beta$ in $\mathfrak{N}$ such that $[x,\theta y]\notin \mathfrak{L}$. Since $[x,\theta y]\in \mathfrak{G}_{\alpha-\beta}$, it follows that $\alpha-\beta$ is a root either of $\mathfrak{N}$ or of $\mathfrak{N}^-$. If $\alpha-\beta$ is a root of $\mathfrak{N}$, then since $\alpha=(\alpha-\beta)+\beta$ is a root, (1) implies that $[\mathfrak{G}_{\alpha-\beta},\mathfrak{G}_\beta]\neq 0$, so $[\mathfrak{N},\mathfrak{N}]\neq 0$ and $\mathfrak{N}$ is not abelian. If $\alpha-\beta$ is a root of $\mathfrak{N}^-$, then $\beta-\alpha$ is a root of $\mathfrak{N}$, so $\beta=(\beta-\alpha)+\alpha$ again implies that $[\mathfrak{N},\mathfrak{N}]\neq 0$ and so $\mathfrak{N}$ is not abelian. ◻ The following theorem was proved in [@TK1968 §4]. 
See also [@kobayashi Lemma 7.3.1] and [@RRS]. We present a short proof for the convenience of the reader. **Theorem 1**. *[@TK1968]. [\[abel sym\]]{#abel sym label="abel sym"} Let $\mathbb{G}$, $\mathbb{K}$ and $\mathbb{P}=\mathbb{L}\mathbb{N}$ be as above (i.e., as in Table 1). Then the following statements are equivalent:* (a) *$\mathbb{N}$ (or equivalently $\mathfrak{N}$) is abelian;* (b) *$\mathbb{L}$ is a symmetric subgroup of $\mathbb{G}$;* (c) *$\mathbb{P}\cap\mathbb{K}=\mathbb{L}\cap\mathbb{K}$ is a symmetric subgroup of $\mathbb{K}$.* *Proof.* **$\mathbf{(a)\Rightarrow (b)}$.** It is enough to show that in the decomposition $\mathfrak{G}=\mathfrak{L}\oplus(\mathfrak{N}\oplus\theta\mathfrak{N})$ we have $[\mathfrak{N}\oplus\theta\mathfrak{N},\mathfrak{N}\oplus\theta\mathfrak{N}]\subseteq\mathfrak{L}$. But Lemma [Lemma 1](#nn-){reference-type="ref" reference="nn-"}(2) implies that $$[\mathfrak{N}\oplus\theta\mathfrak{N},\mathfrak{N}\oplus\theta\mathfrak{N}]=[\mathfrak{N},\mathfrak{N}]+[\mathfrak{N},\theta\mathfrak{N}]+[\theta\mathfrak{N},\mathfrak{N}]+[\theta\mathfrak{N},\theta\mathfrak{N}]=0+[\mathfrak{N},\theta\mathfrak{N}]+0\subseteq \mathfrak{L}.$$ Note that the associated involution $\sigma$ is defined to be $+1$ on $\mathfrak{L}$ and $(-1)$ on $\mathfrak{N}\oplus\theta\mathfrak{N}$. In particular $\mathbb{P}$ is $\sigma$-stable. **$\mathbf{(b)\Rightarrow(c)}$.** Let $\sigma$ be an involution of $\mathbb{G}$ such that $\mathbb{G}^\sigma_e\subseteq \mathbb{L}\subseteq \mathbb{G}^\sigma$, where $\mathbb{G}^\sigma_e$ denotes the identity component of $\mathbb{G}^\sigma$. Since $\mathbb{P}$ is standard, we may assume $\sigma$ commutes with $\theta$. Then the restriction of $\sigma$ to $\mathbb{K}$ is an involution, and $\mathbb{K}^\sigma_e\subseteq \mathbb{L}\cap \mathbb{K}\subseteq \mathbb{K}^\sigma$. So $\mathbb{L}\cap\mathbb{K}$ is a symmetric subgroup of $\mathbb{K}$. 
(We remark that in all the examples we consider we will have $\mathbb{L}=\mathbb{G}^\sigma$ and $\mathbb{L}\cap\mathbb{K}=\mathbb{K}^\sigma$.) **$\mathbf{(c)\Rightarrow(a)}$.** Suppose that $\mathfrak{N}$ is not abelian. Then there are roots $\alpha,\beta$ of $\mathfrak{N}$ and $x\in\mathfrak{G}_\alpha$, $y\in\mathfrak{G}_\beta$ such that $[x,y]\neq 0$. Then $x+\theta x,y+\theta y\in (\mathfrak{N}\oplus\theta\mathfrak{N})^\theta$, and we have $$[x+\theta x,y+\theta y]=[x,y]+[x,\theta y]+[\theta x,y]+[\theta x,\theta y],$$ with $$[x,y]\in \mathfrak{G}_{\alpha+\beta},\ [x,\theta y] \in \mathfrak{G}_{\alpha-\beta},\ [\theta x,y]\in \mathfrak{G}_{-\alpha+\beta},\ [\theta x,\theta y]\in \mathfrak{G}_{-\alpha-\beta}.$$ Since the root $\alpha+\beta$ is strictly greater than $\alpha-\beta$, $-\alpha+\beta$ and $-\alpha-\beta$ (in the usual lexicographical order), we see that $[x,y]\in\mathfrak{N}\setminus 0$ implies $[x+\theta x,y+\theta y]\notin\mathfrak{L}\cap\mathfrak{K}$. It follows that $\mathbb{L}\cap\mathbb{K}$ is not a symmetric subgroup of $\mathbb{K}$. ◻ *Remark 1*. Suppose that $[\mathfrak{G},\mathfrak{G}]$ is simple and that $\mathfrak{P}=\mathfrak{L}\oplus\mathfrak{N}$ is a standard parabolic subalgebra as above. If $\mathfrak{N}$ is abelian, then $\mathfrak{P}$ is a maximal parabolic subalgebra. See [@RRS Lemma 2.2, p.651]. **Proposition 1**. *Let $\mathbb{G}$, $\mathbb{K}$ and $\mathbb{P}=\mathbb{L}\mathbb{N}$ be as above (i.e., as in Table 1). Then $\mathbb{K}$ acts transitively on $\mathbb{G}/\mathbb{P}$, and therefore $\mathbb{G}/\mathbb{P}$ is diffeomorphic to $\mathbb{K}/\mathbb{P}\cap\mathbb{K}=\mathbb{K}/\mathbb{L}\cap\mathbb{K}$. In particular, $\mathbb{G}/\mathbb{P}$ is diffeomorphic to a symmetric space.* *Proof.* It is clear from the Iwasawa decomposition that $\mathbb{K}$ acts transitively on $\mathbb{G}/\mathbb{P}_0$, where $\mathbb{P}_0$ is a minimal parabolic subgroup of $\mathbb{G}$ contained in $\mathbb{P}$. 
Since $\mathbb{P}\supseteq\mathbb{P}_0$, there is a natural projection from $\mathbb{G}/\mathbb{P}_0$ to $\mathbb{G}/\mathbb{P}$, sending $g\mathbb{P}_0$ to $g\mathbb{P}$. This projection intertwines the $\mathbb{G}$-actions, hence also the $\mathbb{K}$-actions. It follows that $\mathbb{K}$ acts transitively on $\mathbb{G}/\mathbb{P}$. Indeed, if $g\mathbb{P}\in\mathbb{G}/\mathbb{P}$, let $k\in\mathbb{K}$ be such that $k\mathbb{P}_0=g\mathbb{P}_0$. Taking the projection we see that $k\mathbb{P}=g\mathbb{P}$, which implies transitivity of the $\mathbb{K}$-action. ◻ ## Ordinary Grassmannians {#real gras} Let $\mathbb{F}$ be $\mathbb{R}$, $\mathbb{C}$ or $\mathbb{H}$. Let $\operatorname{Gr}_p(\mathbb{F}^{p+q})$ be the Grassmannian of $p$-dimensional subspaces of the vector space $\mathbb{F}^{p+q}$. The group $\mathbb{G}=\operatorname{GL}(p+q,\mathbb{F})$ clearly acts transitively on $\operatorname{Gr}_p(\mathbb{F}^{p+q})$, so $\operatorname{Gr}_p(\mathbb{F}^{p+q})=\mathbb{G}/\mathbb{P}$, where $\mathbb{P}$ is the stabilizer in $\mathbb{G}$ of the standard $p$-dimensional subspace $$\label{std rp} \mathbb{F}^p=\{(x_1,\dots,x_p,0,\dots,0)\,\big|\,x_1,\dots,x_p\in\mathbb{F}\} \subseteq \mathbb{F}^{p+q}.$$ In other words, $$\mathbb{P}=\left\{\begin{pmatrix} A & B\cr 0 & C \end{pmatrix}\,\big|\,A\in \operatorname{GL}(p,\mathbb{F}),\,C\in \operatorname{GL}(q,\mathbb{F}), \, B\in M_{pq}(\mathbb{F})\right\}.$$ Let $\sigma$ be the involution of $\mathbb{G}$ defined as in [\[def sigma\]](#def sigma){reference-type="eqref" reference="def sigma"}, i.e., $\sigma(g)=I_{p,q} g I_{p,q}$ where $I_{p,q}=\left(\begin{smallmatrix}I_p&0\cr 0&-I_q\end{smallmatrix}\right)$. 
Then $\mathbb{G}^\sigma$ is equal to the Levi subgroup $\mathbb{L}$ of $\mathbb{P}$ and it consists of block diagonal matrices in $\mathbb{G}$, i.e., $$\mathbb{G}^\sigma=\mathbb{L}=\operatorname{GL}(p,\mathbb{F})\times \operatorname{GL}(q,\mathbb{F}).$$ The maximal compact subgroup $\mathbb{K}$ of $\mathbb{G}$ is the unitary group of $\mathbb{F}^{p+q}$ with respect to the standard inner product, denoted as $\operatorname{U}(p+q,\mathbb{F})$. In other words, $\mathbb{K}$ is $\operatorname{O}(p+q)$ if $\mathbb{F}=\mathbb{R}$, $\operatorname{U}(p+q)$ if $\mathbb{F}=\mathbb{C}$, and $\operatorname{Sp}(p+q)$ if $\mathbb{F}=\mathbb{H}$. The subgroup $\mathbb{P}\cap\mathbb{K}=\mathbb{K}^\sigma$ is $\operatorname{U}(p,\mathbb{F})\times \operatorname{U}(q,\mathbb{F})$, embedded block diagonally. In other words, $\mathbb{K}^\sigma$ is $\operatorname{O}(p)\times \operatorname{O}(q)$ if $\mathbb{F}=\mathbb{R}$, $\operatorname{U}(p)\times \operatorname{U}(q)$ if $\mathbb{F}=\mathbb{C}$, and $\operatorname{Sp}(p)\times \operatorname{Sp}(q)$ if $\mathbb{F}=\mathbb{H}$. Now Proposition [Proposition 1](#k trans){reference-type="ref" reference="k trans"} implies the following well-known result which can be found for example in [@On Ch. I, §4]. **Proposition 1**. *$\operatorname{Gr}_p(\mathbb{F}^{p+q}) = \mathbb{G}/\mathbb{P}$ is diffeomorphic to $\operatorname{U}(p+q,\mathbb{F})/\operatorname{U}(p,\mathbb{F})\times \operatorname{U}(q,\mathbb{F})$. In other words, $\operatorname{Gr}_p(\mathbb{R}^{p+q})$ is diffeomorphic to $\operatorname{O}(p+q)/\operatorname{O}(p)\times \operatorname{O}(q)$, $\operatorname{Gr}_p(\mathbb{C}^{p+q})$ is diffeomorphic to $\operatorname{U}(p+q)/\operatorname{U}(p)\times \operatorname{U}(q)$, and $\operatorname{Gr}_p(\mathbb{H}^{p+q})$ is diffeomorphic to $\operatorname{Sp}(p+q)/\operatorname{Sp}(p)\times \operatorname{Sp}(q)$.* For cohomology computation we need $G=\mathbb{K}$ to be connected, and it is connected if $\mathbb{F}$ is $\mathbb{C}$ or $\mathbb{H}$. 
If $\mathbb{F}=\mathbb{R}$, we note that $$\mathbb{G}/\mathbb{P}\cong \operatorname{SO}(p+q)/S(\operatorname{O}(p)\times \operatorname{O}(q)).$$ This follows immediately from the fact that $\operatorname{SL}(p+q,\mathbb{R})$ acts transitively on $\operatorname{Gr}_p(\mathbb{R}^{p+q})$. We can now conclude **Corollary 1**. *The cohomology ring (with complex coefficients) of the Grassmannian $\operatorname{Gr}_p(\mathbb{F}^{p+q})$ is described by: Theorem [Theorem 1](#gen rel basis SOk+m){reference-type="ref" reference="gen rel basis SOk+m"} if $\mathbb{F}=\mathbb{R}$; Theorem [Theorem 1](#gen rel basis){reference-type="ref" reference="gen rel basis"} if $\mathbb{F}=\mathbb{C}$; Theorem [Theorem 1](#gen rel basis B/BxB){reference-type="ref" reference="gen rel basis B/BxB"} if $\mathbb{F}=\mathbb{H}$.* Since the involution $\sigma\theta$ of $\mathbb{G}$ is given by [\[sigma theta\]](#sigma theta){reference-type="eqref" reference="sigma theta"}, i.e., by $\sigma\theta(g)=I_{p,q}(\bar g^t)^{-1}I_{p,q}$, the group $G_\mathbb{R}=\mathbb{G}^{\sigma\theta}$ is equal to $\operatorname{U}(p,q;\mathbb{F})$. In other words, $G_\mathbb{R}$ is $\operatorname{O}(p,q)$ if $\mathbb{F}=\mathbb{R}$; $\operatorname{U}(p,q)$ if $\mathbb{F}=\mathbb{C}$; and $\operatorname{Sp}(p,q)$ if $\mathbb{F}=\mathbb{H}$. ## The symplectic Lagrangian Grassmannians {#real lagr grass} Let $\mathbb{F}=\mathbb{R}$ or $\mathbb{C}$, and let $\operatorname{LGr}(\mathbb{F}^{2n})$ be the (symplectic) Lagrangian Grassmannian, i.e., the manifold of all Lagrangian subspaces of $\mathbb{F}^{2n}$ with respect to the standard symplectic form $\langle\,,\rangle$ given by $$\langle x,y\rangle=x^t J_n y,\qquad\text{where}\quad J_n=\begin{pmatrix} 0&I_n\cr -I_n&0\end{pmatrix}.$$ Let $\mathbb{G}$ be the group $\operatorname{Sp}(2n,\mathbb{F})$ of $2n\times 2n$ matrices over $\mathbb{F}$ preserving the form $\langle\,,\rangle$, i.e., satisfying $g^tJ_ng=J_n$. 
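The defining condition $g^tJ_ng=J_n$ is easy to test numerically. The following minimal sketch (over $\mathbb{F}=\mathbb{R}$, assuming numpy; the helper name `is_symplectic` and the chosen generators are illustrative, not taken from the text) checks that block matrices $\operatorname{diag}(A,(A^t)^{-1})$ and $\left(\begin{smallmatrix}I_n&S\cr 0&I_n\end{smallmatrix}\right)$ with $S$ symmetric are symplectic:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
Z = np.zeros((n, n))
J = np.block([[Z, np.eye(n)], [-np.eye(n), Z]])

def is_symplectic(g, tol=1e-10):
    # the defining condition g^t J_n g = J_n
    return np.allclose(g.T @ J @ g, J, atol=tol)

A = rng.standard_normal((n, n)) + n * np.eye(n)   # a generic invertible block
S = rng.standard_normal((n, n))
S = S + S.T                                       # a symmetric block

g1 = np.block([[A, Z], [Z, np.linalg.inv(A.T)]])  # diag(A, (A^t)^{-1})
g2 = np.block([[np.eye(n), S], [Z, np.eye(n)]])   # block unipotent, S symmetric

assert is_symplectic(g1) and is_symplectic(g2)
assert is_symplectic(g1 @ g2)   # Sp(2n,R) is closed under products
```

Products of these two shapes are block upper triangular symplectic matrices, matching the block upper triangular description of the parabolic subgroups in Section 2.1.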
Then $\mathbb{G}$ acts on $\operatorname{LGr}(\mathbb{F}^{2n})$, and this action is transitive by Witt's Theorem [\[witt\]](#witt){reference-type="ref" reference="witt"}. Thus $\operatorname{LGr}(\mathbb{F}^{2n})=\mathbb{G}/\mathbb{P}$, where $\mathbb{P}$ is the (Siegel) parabolic subgroup of $\operatorname{Sp}(2n,\mathbb{F})$, defined as the stabilizer of the standard Lagrangian subspace $$L_0=\{(x_1,\dots,x_n,0,\dots,0)\,\big|\,x_1,\dots,x_n\in\mathbb{F}\} \subseteq \mathbb{F}^{2n}.$$ Writing $g\in \mathbb{G}$ as a block matrix $\left(\begin{smallmatrix} A&B\cr C& D \end{smallmatrix}\right)$ with $n\times n$ blocks, the condition $g^tJ_ng=J_n$ implies $$\label{cond sp2nR} \mathbb{G}=\operatorname{Sp}(2n,\mathbb{F})=\left\{\begin{pmatrix}A&B\cr C& D\end{pmatrix}\in \operatorname{GL}(2n,\mathbb{F})\,\big|\,A^t C=C^tA,\ B^t D=D^t B,\ A^tD-C^t B=I_n\right\}.$$ It follows that $$\label{cond P sp2nR} \mathbb{P}=\left\{\begin{pmatrix}A&B\cr 0& D\end{pmatrix}\in \operatorname{GL}(2n,\mathbb{F})\,\big|\,B^t D=D^t B,\ A^tD=I_n\right\}.$$ Let $\sigma$ be the involution of $\mathbb{G}$ given by [\[def sigma\]](#def sigma){reference-type="eqref" reference="def sigma"}, i.e., by $\sigma(g)=I_{n,n}gI_{n,n}$. Then $\mathbb{G}^\sigma$ is equal to the Levi subgroup $\mathbb{L}$ of $\mathbb{P}$ and it consists of block diagonal matrices in $\mathbb{G}$, i.e., $$\mathbb{G}^\sigma=\mathbb{L}=\left\{\begin{pmatrix}A&0\cr 0& (A^t)^{-1}\end{pmatrix}\,\big|\,A\in \operatorname{GL}(n,\mathbb{F})\right\}\cong \operatorname{GL}(n,\mathbb{F})\quad\text{via}\ \begin{pmatrix}A&0\cr 0& (A^t)^{-1}\end{pmatrix}\leftrightarrow A.$$ Since $\mathbb{K}$ consists of the fixed points of the Cartan involution $\theta (g)=(\bar g^t)^{-1}$, and since $g\in\mathbb{G}$ is equivalent to $(g^t)^{-1}=J_n gJ_n^{-1}$, we see that $\theta(g)=g$ is equivalent to $J_n\bar g=gJ_n$. 
This implies $$\label{descr K spn} \mathbb{K}=\left\{\begin{pmatrix} A&-\bar C\cr C& \bar A \end{pmatrix}\,\big|\,A^tC=C^tA,\ \bar A^tA+\bar C^tC=I_n\right\}.$$ If $\mathbb{F}=\mathbb{R}$ (so the bars can be omitted), this is exactly the standard description of $\operatorname{U}(n)$ inside $\operatorname{GL}(2n,\mathbb{R})$; more precisely, $\mathbb{K}\cong \operatorname{U}(n)$ via $\left(\begin{smallmatrix}A&-C\cr C& A \end{smallmatrix}\right)\leftrightarrow A+iC$. If $\mathbb{F}=\mathbb{C}$, then [\[descr K spn\]](#descr K spn){reference-type="eqref" reference="descr K spn"} is exactly the standard description of $\operatorname{Sp}(n)$ inside $\operatorname{GL}(2n,\mathbb{C})$. We now also see that $$\mathbb{P}\cap \mathbb{K}= \mathbb{K}^\sigma = \left\{\begin{pmatrix} A&0\cr 0& \bar A \end{pmatrix}\,\big|\,A^tA=I\right\}\cong \operatorname{U}(n,\mathbb{F})\quad\text{via}\ \begin{pmatrix} A&0\cr 0& \bar A \end{pmatrix}\leftrightarrow A.$$ In other words, if $\mathbb{F}=\mathbb{R}$, then $\mathbb{K}^\sigma =\operatorname{O}(n)$ via $\left(\begin{smallmatrix}A&0\cr 0& A \end{smallmatrix}\right)\leftrightarrow A$, and if $\mathbb{F}=\mathbb{C}$, then $\mathbb{K}^\sigma =\operatorname{U}(n)$ via $\left(\begin{smallmatrix}A&0\cr 0& \bar A \end{smallmatrix}\right)\leftrightarrow A$. Now Proposition [Proposition 1](#k trans){reference-type="ref" reference="k trans"} implies **Proposition 1**. *The real symplectic Lagrangian Grassmannian $\operatorname{LGr}(\mathbb{R}^{2n})$ is diffeomorphic to $\operatorname{U}(n)/\operatorname{O}(n)$, while the complex symplectic Lagrangian Grassmannian $\operatorname{LGr}(\mathbb{C}^{2n})$ is diffeomorphic to $\operatorname{Sp}(n)/\operatorname{U}(n)$.* **Corollary 1**. 
*The cohomology ring (with complex coefficients) of the symplectic Lagrangian Grassmannian $\operatorname{LGr}(\mathbb{F}^{2n})$ is described by: Theorem [Theorem 1](#thm prim and aprim alg){reference-type="ref" reference="thm prim and aprim alg"} if $\mathbb{F}=\mathbb{R}$; Theorem [Theorem 1](#thm basis C/A){reference-type="ref" reference="thm basis C/A"} if $\mathbb{F}=\mathbb{C}$.* Finally, we describe the group $\mathbb{G}^{\sigma\theta}$. Let first $\mathbb{F}=\mathbb{R}$. Then since $\sigma\theta(g)=I_{n,n}(g^t)^{-1}I_{n,n}$, and since any $g\in\mathbb{G}$ satisfies $(g^t)^{-1}=J_ngJ_n^{-1}$, $\sigma\theta(g)=g$ is equivalent to $gD_n=D_n g$ where $D_n=\left(\begin{smallmatrix}0&I_n\cr I_n&0\end{smallmatrix}\right)$. It follows that $$\mathbb{G}^{\sigma\theta}=\left\{g=\begin{pmatrix}A&B\cr B&A\end{pmatrix}\,\big|\,A^tB=B^tA,\,A^tA- B^tB=I_n \right\}.$$ Conjugating $\left(\begin{smallmatrix}A&B\cr B&A\end{smallmatrix}\right)$ by $\left(\begin{smallmatrix}I_n&I_n\cr I_n&-I_n\end{smallmatrix}\right)$ we get the matrix $\left(\begin{smallmatrix}A+B&0\cr 0&A-B\end{smallmatrix}\right)$, and the conditions $A^tB=B^tA,\, A^tA- B^tB=I_n$ imply $(A+B)^t(A-B)=I_n$, so $A-B=((A+B)^t)^{-1}$. Conversely, starting from the matrix $\left(\begin{smallmatrix}Z&0\cr 0& (Z^t)^{-1}\end{smallmatrix}\right)$ and setting $A=\frac{1}{2}(Z+(Z^t)^{-1})$, $B=\frac{1}{2}(Z-(Z^t)^{-1})$, we get $A^tB= B^tA,\,A^tA-B^tB=I_n$. Thus $$\mathbb{G}^{\sigma\theta}\cong \left\{\begin{pmatrix}Z&0\cr 0&(Z^t)^{-1}\end{pmatrix}\,\big|\,Z\in \operatorname{GL}(n,\mathbb{R})\right\} \cong \operatorname{GL}(n,\mathbb{R}) \ \text{via}\ \begin{pmatrix}Z&0\cr 0&(Z^t)^{-1}\end{pmatrix}\leftrightarrow Z.$$ Now let $\mathbb{F}=\mathbb{C}$. Since $\sigma\theta(g)=I_{n,n}(\bar g^t)^{-1}I_{n,n}$, and since any $g\in \mathbb{G}$ satisfies $(g^t)^{-1}=J_ngJ_n^{-1}$, we see that $\sigma\theta(g)=D_n\bar g D_n$. We claim that $\mathbb{G}^{\sigma\theta}\cong \operatorname{Sp}(2n,\mathbb{R})$. 
To see this, we note that $$\label{sqrt Dn} C_n=\frac{1}{2}\begin{pmatrix} (1+i)I_n& (1-i)I_n\cr (1-i)I_n&(1+i)I_n \end{pmatrix} \qquad\text{implies}\quad C_n^2=D_n\text{ and } C_n^{-1}=\bar C_n.$$ This implies that $$g\in\mathbb{G}^{\sigma\theta} \qquad\text{if and only if}\qquad \overline{C_ngC_n^{-1}}=C_n g C_n^{-1},$$ i.e., that $\mathbb{G}^{\sigma\theta}=C_n^{-1} \operatorname{Sp}(2n,\mathbb{R})C_n\cong \operatorname{Sp}(2n,\mathbb{R})$. ## Orthogonal Lagrangian Grassmannians {#real orth lagr grass} Let $\operatorname{OLGr}(\mathbb{C}^{2n})$ be the (complex) orthogonal Lagrangian Grassmannian. In other words, $\operatorname{OLGr}(\mathbb{C}^{2n})$ is the manifold of all Lagrangian subspaces of $\mathbb{C}^{2n}$ with respect to the symmetric bilinear form $\langle\,,\rangle$ defined by $$\langle x,y\rangle=\sum_{r=1}^n x_ry_{n+r} +\sum_{r=1}^n x_{n+r}y_r=x^t D_n y,$$ where as before, $D_n=\left(\begin{smallmatrix}0&I_n\cr I_n&0\end{smallmatrix}\right)$. Let $\mathbb{G}=\operatorname{O}(2n,\mathbb{C})$ be the group of $2n\times 2n$ complex matrices preserving the form $\langle\,,\rangle$, i.e., satisfying $g^tD_ng=D_n$. Writing $g=\left(\begin{smallmatrix}A&B\cr C&D\end{smallmatrix}\right)$ we see that $$\label{cond onn} \mathbb{G}=\operatorname{O}(2n,\mathbb{C})=\left\{\begin{pmatrix}A&B\cr C&D \end{pmatrix}\in \operatorname{GL}(2n,\mathbb{C})\,\big|\,C^tA=-A^tC,\, D^tB=- B^tD,\, A^tD+C^tB=I_n\right\}.$$ The group $\mathbb{G}$ acts on $\operatorname{OLGr}(\mathbb{C}^{2n})$, and this action is transitive by Witt's Theorem [\[witt\]](#witt){reference-type="ref" reference="witt"}. 
Thus $\operatorname{OLGr}(\mathbb{C}^{2n})=\mathbb{G}/\mathbb{P}$, where $\mathbb{P}$ is the parabolic subgroup of $\mathbb{G}$, defined as the stabilizer of the standard Lagrangian subspace $$L_0=\{(x_1,\dots,x_n,0,\dots,0)\,\big|\,x_1,\dots,x_n\in\mathbb{C}\}\subset \mathbb{C}^{2n}.$$ An element $g=\left(\begin{smallmatrix}A&B\cr C&D\end{smallmatrix}\right)$ of $\mathbb{G}$ stabilizes $L_0$ if and only if $C=0$, so $$\label{descr P o2nc} \mathbb{P}=\left\{\begin{pmatrix}A&B\cr 0&D\end{pmatrix}\in \operatorname{GL}(2n,\mathbb{C})\,\big|\,D^tB=- B^tD,\, A^tD=I_n\right\}.$$ Let $\sigma$ be the involution of $\mathbb{G}$ defined by [\[def sigma\]](#def sigma){reference-type="eqref" reference="def sigma"}, i.e., by $\sigma(g)=I_{n,n}gI_{n,n}$. Then $\mathbb{G}^\sigma$ is equal to the Levi subgroup $\mathbb{L}$ of $\mathbb{P}$ and it consists of block diagonal matrices in $\mathbb{G}$, i.e., $$\mathbb{G}^\sigma=\mathbb{L}=\left\{\begin{pmatrix}A&0\cr 0& (A^t)^{-1}\end{pmatrix}\,\big|\,A\in \operatorname{GL}(n,\mathbb{C})\right\}\cong \operatorname{GL}(n,\mathbb{C})\quad\text{via}\ \begin{pmatrix}A&0\cr 0& (A^t)^{-1}\end{pmatrix}\leftrightarrow A.$$ Since $\theta(g)=(\bar g^t)^{-1}$ and since $g\in\mathbb{G}$ is equivalent to $(g^t)^{-1}=D_ngD_n$, we see that $\theta(g)=g$ is equivalent to $D_n\bar gD_n=g$, or $D_n\bar g=gD_n$. Thus $$\label{descr K onn} \mathbb{K}=\left\{ \begin{pmatrix}A& \bar C\cr C& \bar A\end{pmatrix}\in \operatorname{GL}(2n,\mathbb{C})\,\big|\,C^tA=- A^tC,\ \bar A^tA+ \bar C^tC=I_n\right\}.$$ To identify this subgroup, we connect our $\mathbb{G}$ with the more usual group $\mathbb{G}'=\operatorname{O}(2n,\mathbb{C})'$ given by $g^tg=I_{2n}$. The maximal compact subgroup $\mathbb{K}'$ of $\mathbb{G}'$ is given by the condition $(\bar g^t)^{-1}=g$, or equivalently $\bar g=g$, so $\mathbb{K}'=\operatorname{O}(2n)$. Since $\mathbb{G}$ and $\mathbb{G}'$ are isomorphic, $\mathbb{K}\cong \operatorname{O}(2n)$. 
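The stated properties of the matrix $C_n$ from [\[sqrt Dn\]](#sqrt Dn){reference-type="eqref" reference="sqrt Dn"} can be confirmed numerically, together with the fact that conjugating a matrix $h$ with $h^th=I_{2n}$ by $C_n^{-1}$ produces a matrix $g$ with $g^tD_ng=D_n$. A small illustrative sketch (assuming numpy):

```python
import numpy as np

n = 3
I, Z = np.eye(n), np.zeros((n, n))
D = np.block([[Z, I], [I, Z]])
C = 0.5 * np.block([[(1 + 1j) * I, (1 - 1j) * I],
                    [(1 - 1j) * I, (1 + 1j) * I]])

# C_n^2 = D_n and C_n^{-1} = conj(C_n)
assert np.allclose(C @ C, D)
assert np.allclose(C @ C.conj(), np.eye(2 * n))

# conjugating a real orthogonal matrix h (so h^t h = I) by C_n^{-1}
# gives g with g^t D_n g = D_n, i.e. an element of our O(2n,C)
rng = np.random.default_rng(1)
h, _ = np.linalg.qr(rng.standard_normal((2 * n, 2 * n)))
g = C.conj() @ h @ C                     # g = C_n^{-1} h C_n
assert np.allclose(g.T @ D @ g, D)
```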
In fact, explicit isomorphisms $\mathbb{G}\cong\mathbb{G}'$ and $\mathbb{K}\cong\mathbb{K}'$ are given by conjugation by the matrix $C_n$ of [\[sqrt Dn\]](#sqrt Dn){reference-type="eqref" reference="sqrt Dn"}. We also see that $$\mathbb{P}\cap \mathbb{K}= \mathbb{K}^\sigma = \left\{ \begin{pmatrix}A& 0\cr 0&\bar A\end{pmatrix}\,\big|\,\bar A^tA=I_n\right\}\cong \operatorname{U}(n)\quad\text{via}\quad \begin{pmatrix}A& 0\cr 0&\bar A\end{pmatrix}\leftrightarrow A.$$ Now Proposition [Proposition 1](#k trans){reference-type="ref" reference="k trans"} implies **Proposition 1**. *The orthogonal Lagrangian Grassmannian $\operatorname{OLGr}(\mathbb{C}^{2n})$ is diffeomorphic to $\operatorname{O}(2n)/\operatorname{U}(n)$.* Since our computation of cohomology of a compact symmetric space $G/K$ requires $G$ to be connected, we replace $\operatorname{O}(2n)/\operatorname{U}(n)$ by $\operatorname{SO}(2n)/\operatorname{U}(n)$. The orbit of $\operatorname{SO}(2n)$ on $\operatorname{OLGr}(\mathbb{C}^{2n})$ is one of the two components of $\operatorname{OLGr}(\mathbb{C}^{2n})$, which we denote by $\operatorname{OLGr}^+(\mathbb{C}^{2n})$, and still call it the orthogonal Lagrangian Grassmannian. The other component of $\operatorname{OLGr}(\mathbb{C}^{2n})$ is diffeomorphic to $\operatorname{OLGr}^+(\mathbb{C}^{2n})$ and thus has the same cohomology. **Corollary 1**. *The cohomology ring (with complex coefficients) of the orthogonal Lagrangian Grassmannian $\operatorname{OLGr}^+(\mathbb{C}^{2n})\cong \operatorname{SO}(2n)/\operatorname{U}(n)$ is described by Theorem [Theorem 1](#thm basis D/A){reference-type="ref" reference="thm basis D/A"}.* Finally, we describe the group $G_\mathbb{R}=\mathbb{G}^{\sigma\theta}$ in case $\mathbb{G}=\operatorname{SO}(2n,\mathbb{C})$. 
Since $\sigma\theta(g)=I_{n,n}(\bar g^t)^{-1} I_{n,n}$, we see that $\mathbb{G}^{\sigma\theta}=\operatorname{O}(2n,\mathbb{C})\cap\operatorname{U}(n,n)=\operatorname{SO}(2n,\mathbb{C})\cap \operatorname{SU}(n,n)$, and this is exactly the description of $\operatorname{SO}^*(2n)$ given in [@beyond 1.141]. *Remark 1*. One could define the group $\operatorname{O}^*(2n)$ as $\operatorname{O}(2n,\mathbb{C})\cap U(n,n)$, or alternatively, as the group of automorphisms of $\mathbb{H}^n$ preserving a skew Hermitian form; see Section [2.6](#quat lagr grass){reference-type="ref" reference="quat lagr grass"}. Conceivably, an element of this group could have determinant equal to $\pm 1$. However, we prove in Section [2.6](#quat lagr grass){reference-type="ref" reference="quat lagr grass"} that the maximal compact subgroup of this group is $\operatorname{U}(n)$, so it follows that the group is connected and the determinant must be 1. In other words, $\operatorname{O}^*(2n)=\operatorname{SO}^*(2n)$. ## The Hermitian Lagrangian Grassmannians {#herm lagr grass} Let $\mathbb{F}$ be $\mathbb{R}$, $\mathbb{C}$ or $\mathbb{H}$ and let $\operatorname{HLGr}(\mathbb{F}^{2n})$ be the Hermitian Lagrangian Grassmannian. In other words, $\operatorname{HLGr}(\mathbb{F}^{2n})$ is the manifold of all Lagrangian subspaces of $\mathbb{F}^{2n}$ with respect to the Hermitian form $\langle\,,\rangle$ of signature $(n,n)$, defined by $$\langle x,y\rangle=\sum_{r=1}^n \bar x_ry_{n+r} +\sum_{r=1}^n \bar x_{n+r}y_r=\bar x^t D_n y,$$ where as before, $D_n=\left(\begin{smallmatrix}0&I_n\cr I_n&0\end{smallmatrix}\right)$. Let $\mathbb{G}=\operatorname{U}(n,n;\mathbb{F})$ be the group of $2n\times 2n$ matrices over $\mathbb{F}$ preserving the form $\langle\,,\rangle$, i.e., satisfying $\bar g^tD_ng=D_n$. So if $\mathbb{F}=\mathbb{R}$, $\mathbb{G}=\operatorname{O}(n,n)$; if $\mathbb{F}=\mathbb{C}$, $\mathbb{G}=\operatorname{U}(n,n)$; and if $\mathbb{F}=\mathbb{H}$, $\mathbb{G}=\operatorname{Sp}(n,n)$. 
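For $\mathbb{F}=\mathbb{C}$, the defining condition $\bar g^tD_ng=D_n$ can be spot-checked numerically as well. The sketch below (assuming numpy; the helper `preserves_form` and the two block shapes are illustrative choices) verifies that block diagonal matrices $\operatorname{diag}(A,(\bar A^t)^{-1})$ and block unipotent matrices with skew Hermitian upper block preserve the form:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
I, Z = np.eye(n), np.zeros((n, n))
D = np.block([[Z, I], [I, Z]])   # Gram matrix of the signature-(n,n) Hermitian form

def preserves_form(g, tol=1e-10):
    # the defining condition conj(g)^t D_n g = D_n
    return np.allclose(g.conj().T @ D @ g, D, atol=tol)

A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)) + n * I
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = B - B.conj().T               # a skew Hermitian block

g1 = np.block([[A, Z], [Z, np.linalg.inv(A.conj().T)]])  # diag(A, (conj(A)^t)^{-1})
g2 = np.block([[I, B], [Z, I]])                          # block unipotent
assert preserves_form(g1) and preserves_form(g2) and preserves_form(g1 @ g2)
```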
Writing $g=\left(\begin{smallmatrix}A&B\cr C&D\end{smallmatrix}\right)$ with $n\times n$ blocks, we see that $$\label{cond unn} \mathbb{G}=\operatorname{U}(n,n;\mathbb{F})=\left\{\begin{pmatrix}A&B\cr C&D \end{pmatrix}\in \operatorname{GL}(2n,\mathbb{F})\,\big|\,\bar C^tA=-\bar A^tC,\, \bar D^tB=-\bar B^tD,\, \bar A^tD+\bar C^tB=I_n\right\}.$$ The group $\mathbb{G}$ acts on $\operatorname{HLGr}(\mathbb{F}^{2n})$, and this action is transitive by Witt's Theorem [\[witt\]](#witt){reference-type="ref" reference="witt"}. Thus $\operatorname{HLGr}(\mathbb{F}^{2n})=\mathbb{G}/\mathbb{P}$, where $\mathbb{P}$ is the parabolic subgroup of $\mathbb{G}$ defined as the stabilizer of the standard Lagrangian subspace $$L_0=\{(x_1,\dots,x_n,0,\dots,0)\,\big|\,x_1,\dots,x_n\in\mathbb{F}\}\subset\mathbb{F}^{2n}.$$ It follows that $$\label{descr P unn} \mathbb{P}=\left\{\begin{pmatrix}A&B\cr 0&D\end{pmatrix}\in \operatorname{GL}(2n,\mathbb{F})\,\big|\,\bar D^tB=-\bar B^tD,\, \bar A^tD=I_n\right\}.$$ Let $\sigma$ be an involution of $\mathbb{G}$ defined by [\[def sigma\]](#def sigma){reference-type="eqref" reference="def sigma"}, i.e., by $\sigma(g)=I_{n,n}gI_{n,n}$. Then $\mathbb{G}^\sigma$ is equal to the Levi subgroup $\mathbb{L}$ of $\mathbb{P}$ and it consists of block diagonal matrices in $\mathbb{G}$, i.e., $$\mathbb{G}^\sigma=\mathbb{L}=\left\{\begin{pmatrix}A&0\cr 0& (\bar A^t)^{-1}\end{pmatrix}\,\big|\,A\in \operatorname{GL}(n,\mathbb{F})\right\}\cong \operatorname{GL}(n,\mathbb{F})\quad\text{via}\ \begin{pmatrix}A&0\cr 0& (\bar A^t)^{-1}\end{pmatrix}\leftrightarrow A.$$ Since the Cartan involution is $\theta(g)=(\bar g^t)^{-1}$ and since $g\in\mathbb{G}$ is equivalent to $(\bar g^t)^{-1}=D_ngD_n$, we see that $\theta(g)=g$ is equivalent to $D_ngD_n=g$, or $D_ng=gD_n$. 
It follows that $$\label{descr K unn} \mathbb{K}=\left\{ \begin{pmatrix}A& B\cr B& A\end{pmatrix}\in \operatorname{GL}(2n,\mathbb{F})\,\big|\,\bar B^tA=-\bar A^tB,\ \bar A^tA+\bar B^tB=I_n\right\}.$$ To identify this subgroup, we conjugate the matrix $\left(\begin{smallmatrix}A&B\cr B&A\end{smallmatrix}\right)$ by $\left(\begin{smallmatrix}I_n&I_n\cr I_n&-I_n\end{smallmatrix}\right)$ and get the matrix $\left(\begin{smallmatrix}A+B&0\cr 0&A-B\end{smallmatrix}\right)$. The conditions $\bar B^tA=- \bar A^tB,\, \bar A^tA+\bar B^tB=I_n$ imply $\overline{(A+B)}^t(A+B)=I_n$ and $\overline{(A-B)}^t(A-B)=I_n$, so $A+B$ and $A-B$ are in $\operatorname{U}(n,\mathbb{F})$ (i.e., in $\operatorname{O}(n)$ if $\mathbb{F}=\mathbb{R}$; in $\operatorname{U}(n)$ if $\mathbb{F}=\mathbb{C}$; and in $\operatorname{Sp}(n)$ if $\mathbb{F}=\mathbb{H}$). Conversely, starting from matrices $Z$ and $W$ in $\operatorname{U}(n,\mathbb{F})$, we can reconstruct $A$ and $B$ as $A=\frac{1}{2}(Z+W)$, $B=\frac{1}{2}(Z-W)$, and the matrix $\left(\begin{smallmatrix}A&B\cr B&A\end{smallmatrix}\right)$ will satisfy the conditions $\bar B^tA=- \bar A^tB,\, \bar A^tA+\bar B^tB=I_n$. So we found an explicit isomorphism $\mathbb{K}\cong \operatorname{U}(n,\mathbb{F})\times \operatorname{U}(n,\mathbb{F})$. Under this isomorphism, the subgroup $\mathbb{P}\cap\mathbb{K}=\mathbb{K}^\sigma$ corresponds to the diagonal $\Delta \operatorname{U}(n,\mathbb{F})\subset \operatorname{U}(n,\mathbb{F})\times \operatorname{U}(n,\mathbb{F})$. Now Proposition [Proposition 1](#k trans){reference-type="ref" reference="k trans"} implies **Proposition 1**. *The Hermitian Lagrangian Grassmannian $\operatorname{HLGr}(\mathbb{F}^{2n})$ is diffeomorphic to $\operatorname{U}(n,\mathbb{F})\times \operatorname{U}(n,\mathbb{F})/\Delta \operatorname{U}(n,\mathbb{F})$. 
In other words, $\operatorname{HLGr}(\mathbb{F}^{2n})$ is diffeomorphic to $\operatorname{O}(n)\times \operatorname{O}(n)/\Delta \operatorname{O}(n)$ if $\mathbb{F}=\mathbb{R}$; to $\operatorname{U}(n)\times \operatorname{U}(n)/\Delta \operatorname{U}(n)$ if $\mathbb{F}=\mathbb{C}$; and to $\operatorname{Sp}(n)\times \operatorname{Sp}(n)/\Delta \operatorname{Sp}(n)$ if $\mathbb{F}=\mathbb{H}$.* To compute the cohomology of $G/K$ we need $G$ to be connected, and in case $\mathbb{F}=\mathbb{R}$ the group $G=\mathbb{K}=\operatorname{O}(n)\times \operatorname{O}(n)$ is not connected. Thus in the real case we replace $\operatorname{O}(n)\times \operatorname{O}(n)/\Delta \operatorname{O}(n)$ by $\operatorname{SO}(n)\times \operatorname{SO}(n)/\Delta \operatorname{SO}(n)$. This amounts to replacing $\operatorname{HLGr}(\mathbb{R}^{2n})$ by the orbit $\operatorname{HLGr}^+(\mathbb{R}^{2n})$ of $\operatorname{SO}(n)\times \operatorname{SO}(n)$, which is one of the two connected components of $\operatorname{HLGr}(\mathbb{R}^{2n})$. **Corollary 1**. *The cohomology rings (with complex coefficients) of the Hermitian Lagrangian Grassmannians $\operatorname{HLGr}^+(\mathbb{R}^{2n})$, $\operatorname{HLGr}(\mathbb{C}^{2n})$, and $\operatorname{HLGr}(\mathbb{H}^{2n})$ are described by Theorem [\[gen rel basis group\]](#gen rel basis group){reference-type="ref" reference="gen rel basis group"}.* Finally, we describe the group $G_\mathbb{R}=\mathbb{G}^{\sigma\theta}$. 
Since $\sigma\theta(g)=I_{n,n}(\bar g^t)^{-1} I_{n,n}$ and since any $g\in\mathbb{G}$ satisfies $(\bar g^t)^{-1}=D_ngD_n$, we see that $$\sigma\theta(g)=I_{n,n}D_ngD_nI_{n,n}=J_ngJ_n^{-1}.$$ Thus $\sigma\theta(g)=g$ is equivalent to $J_ng=gJ_n$, and it follows that $$\label{cond gst} \mathbb{G}^{\sigma\theta}=\left\{\begin{pmatrix}A&-C\cr C&A\end{pmatrix}\,\big|\,\bar C^tA=-\bar A^tC,\,\bar A^tA-\bar C^tC=I_n\right\}.$$ If $\mathbb{F}=\mathbb{R}$, recall that $A+iC\mapsto\left(\begin{smallmatrix}A&-C\cr C&A\end{smallmatrix}\right)$ is the standard embedding of $\operatorname{GL}(n,\mathbb{C})$ into $\operatorname{GL}(2n,\mathbb{R})$, and note that the conditions $C^tA=-A^tC,\,A^tA-C^tC=I_n$ correspond to $(A+iC)^t(A+iC)=I_n$. It follows that $\mathbb{G}^{\sigma\theta}\cong\operatorname{O}(n,\mathbb{C})$. If $\mathbb{F}=\mathbb{C}$, we conjugate $\left(\begin{smallmatrix}A&-C\cr C&A\end{smallmatrix}\right)$ by $\left(\begin{smallmatrix}I_n&iI_n\cr I_n&-iI_n\end{smallmatrix}\right)$ and get $\left(\begin{smallmatrix}A+iC&0\cr 0&A-iC\end{smallmatrix}\right)$. The conditions $\bar C^tA=-\bar A^tC,\,\bar A^tA-\bar C^tC=I_n$ imply $\overline{(A+iC)}^t(A-iC)=I_n$. Conversely, given $\left(\begin{smallmatrix}Z&0\cr 0&(\bar Z^t)^{-1}\end{smallmatrix}\right)$ we can reconstruct $A$ and $C$ as $A=\frac{1}{2}(Z+(\bar Z^t)^{-1})$ and $C=\frac{1}{2i}(Z-(\bar Z^t)^{-1})$ and get $\left(\begin{smallmatrix}A&-C\cr C&A\end{smallmatrix}\right)\in\mathbb{G}^{\sigma\theta}$. It follows that $$\mathbb{G}^{\sigma\theta}\cong \operatorname{GL}(n,\mathbb{C}), \quad \text{via}\ \begin{pmatrix}A&-C\cr C&A\end{pmatrix}\leftrightarrow \begin{pmatrix}A+iC&0\cr 0&A-iC\end{pmatrix}\leftrightarrow A+iC.$$ If $\mathbb{F}=\mathbb{H}$, we claim that $\mathbb{G}^{\sigma\theta}$ is isomorphic to $\operatorname{Sp}(2n,\mathbb{C})$. 
To see this, we consider the map $$\begin{pmatrix}A&-C\cr C&A\end{pmatrix}=\begin{pmatrix}A_1+jA_2&-C_1-jC_2\cr C_1+jC_2&A_1+jA_2\end{pmatrix}\mapsto \begin{pmatrix}A_1+iC_1&-\bar A_2-i\bar C_2\cr A_2+iC_2&\bar A_1+i\bar C_1\end{pmatrix}.$$ It is a tedious but straightforward computation to check that the conditions [\[cond gst\]](#cond gst){reference-type="eqref" reference="cond gst"} imply the conditions in [\[cond sp2nR\]](#cond sp2nR){reference-type="eqref" reference="cond sp2nR"}, so our map sends $\mathbb{G}^{\sigma\theta}$ into $\operatorname{Sp}(2n,\mathbb{C})$. Conversely, if $\left(\begin{smallmatrix}X&Y\cr Z&T\end{smallmatrix}\right)\in\operatorname{Sp}(2n,\mathbb{C})$, then we can reconstruct an element of $\mathbb{G}^{\sigma\theta}$ mapping to $\left(\begin{smallmatrix}X&Y\cr Z&T\end{smallmatrix}\right)$; it is given by $$A_1=\frac{1}{2}(X+\bar T),\quad A_2=\frac{1}{2}(Z-\bar Y),\quad C_1=\frac{1}{2i}(X-\bar T),\quad C_2=\frac{1}{2i}(Z+\bar Y).$$ (Another tedious computation shows that this element does satisfy the conditions of [\[cond gst\]](#cond gst){reference-type="eqref" reference="cond gst"}.) ## The skew Hermitian quaternionic Lagrangian Grassmannians {#quat lagr grass} Consider the skew Hermitian form on $\mathbb{H}^{2n}$ given by $$\label{skew herm form} (x\,|\,y)=\bar x^t J_n y=\sum_{r=1}^{n} \bar x_r y_{n+r}-\sum_{r=1}^{n} \bar x_{n+r}y_r,$$ where as before, $J_n=\left(\begin{smallmatrix}0&I_n\cr -I_n&0\end{smallmatrix}\right)$. (Note that the form $(\,|\,)$ is different from, but equivalent to, the form considered in [@rossmann]. Also, recall that $\mathbb{H}$ acts on $\mathbb{H}^{2n}$ by right scalar multiplication, and that the form $(\,|\,)$ satisfies the condition $(x\alpha\,|\,y\beta)=\bar\alpha(x\,|\,y)\beta$ for $x,y\in\mathbb{H}^{2n}$ and $\alpha,\beta\in\mathbb{H}$.) 
The group $\mathbb{G}=\operatorname{SO}^*(4n)$ is the group of automorphisms of $\mathbb{H}^{2n}$ preserving the form $(\,|\,)$, i.e., $$\mathbb{G}=\left\{g\in \operatorname{GL}(2n,\mathbb{H})\,\big|\,\bar g^tJ_n g=J_n \right\}.$$ The operations bar and transpose are defined by passing to the complex matrices: if $X$ is any $2n\times 2n$ quaternionic matrix, we write it as $X=U+jV$ with $U,V$ complex and identify $X$ with the $4n\times 4n$ complex matrix $\left(\begin{smallmatrix}U&-\bar V\cr V&\bar U\end{smallmatrix}\right)$. Then $$\bar X= \overline{\begin{pmatrix}U&-\bar V\cr V&\bar U\end{pmatrix}}=\bar U+j\bar V;\ X^t=\begin{pmatrix}U&-\bar V\cr V&\bar U\end{pmatrix}^t=U^t-j\bar V^t.$$ Then one has $\overline{XY}=\bar X\bar Y$ and $(XY)^t=Y^tX^t$. The reader is cautioned that the operations bar and transpose cannot be performed directly on the quaternionic matrix in the usual way, but their composition can, since $$\overline{(U+jV)}^t=(\bar U+j\bar V)^t=\bar U^t-jV^t=\bar U^t +\bar V^t\bar j.$$ Upon writing $g\in\mathbb{G}$ as $\left(\begin{smallmatrix}A&B\cr C&D\end{smallmatrix}\right)$ with $n\times n$ (quaternionic) blocks and writing out the condition $\bar g^tJ_n g=J_n$, we see $$\label{cond so*} \mathbb{G}=\operatorname{SO}^*(4n)=\left\{\begin{pmatrix}A&B\cr C&D\end{pmatrix}\,\big|\,\bar A^tC=\bar C^tA,\,\bar B^tD=\bar D^tB,\,\bar A^tD-\bar C^t B=I_n \right\}.$$ Clearly, $\mathbb{G}$ acts on the skew Hermitian quaternionic Lagrangian Grassmannian $\operatorname{LGr}^*(\mathbb{H}^{2n})$, the manifold of all Lagrangian subspaces of $\mathbb{H}^{2n}$ with respect to $(\,|\,)$, and this action is transitive by Witt's Theorem [\[witt\]](#witt){reference-type="ref" reference="witt"}. 
Thus $\operatorname{LGr}^*(\mathbb{H}^{2n})=\mathbb{G}/\mathbb{P}$, where $\mathbb{P}$ is the stabilizer in $\mathbb{G}$ of the standard Lagrangian subspace $$L_0=\{(x_1,\dots,x_n,0,\dots,0)\,\big|\,x_1,\dots,x_n\in\mathbb{H}\}\subset \mathbb{H}^{2n}.$$ It follows that $$\label{cond P so*} \mathbb{P}=\left\{\begin{pmatrix}A&B\cr 0&D\end{pmatrix}\in \operatorname{GL}(2n,\mathbb{H})\,\big|\,\bar B^tD=\bar D^tB,\,\bar A^tD=I_n \right\}.$$ Let $\sigma$ be an involution of $\mathbb{G}$ given by [\[def sigma\]](#def sigma){reference-type="eqref" reference="def sigma"}, i.e., by $\sigma(g)=I_{n,n}gI_{n,n}$. Then $\mathbb{G}^\sigma$ is equal to the Levi subgroup $\mathbb{L}$ of $\mathbb{P}$ and it consists of block diagonal matrices in $\mathbb{G}$, i.e., $$\mathbb{G}^\sigma=\mathbb{L}=\left\{\begin{pmatrix}A&0\cr 0& (\bar A^t)^{-1}\end{pmatrix}\,\big|\,A\in \operatorname{GL}(n,\mathbb{H})\right\}\cong \operatorname{GL}(n,\mathbb{H})\quad\text{via}\ \begin{pmatrix}A&0\cr 0& (\bar A^t)^{-1}\end{pmatrix}\leftrightarrow A.$$ Since $\mathbb{K}$ consists of the fixed points of the Cartan involution $\theta(g)=(\bar g^t)^{-1}$, and since $g\in\mathbb{G}$ is equivalent to $(\bar g^t)^{-1}=J_ngJ_n^{-1}$, we see that $g\in\mathbb{K}$ if and only if $gJ_n=J_ng$. This implies $$\label{cond K so*} \mathbb{K}=\left\{\begin{pmatrix}A&-C\cr C&A\end{pmatrix}\in \operatorname{GL}(2n,\mathbb{H})\,\big|\,\bar A^tC=\bar C^tA,\,\bar A^tA+\bar C^tC=I_n \right\}.$$ We now also see that $$\mathbb{P}\cap\mathbb{K}=\mathbb{K}^\sigma=\left\{\begin{pmatrix}A&0\cr 0&A\end{pmatrix}\in \operatorname{GL}(2n,\mathbb{H})\,\big|\,\bar A^tA=I_n \right\}\cong \operatorname{Sp}(n)\quad\text{via}\ \begin{pmatrix}A&0\cr 0&A\end{pmatrix}\leftrightarrow A.$$ We claim that $\mathbb{K}\cong \operatorname{U}(2n)$. 
To see this, we consider a different copy $\mathbb{G}'$ of $\operatorname{SO}^*(4n)$ inside $\operatorname{GL}(2n,\mathbb{H})$, the one preserving the skew Hermitian form $$\langle x,y\rangle= \bar x^t i y=\sum_{r=1}^{2n}\bar x_riy_r.$$ So $\mathbb{G}'$ is the subgroup of $\operatorname{GL}(2n,\mathbb{H})$ consisting of matrices $g$ such that $\bar g^tig=iI_{2n}$, and upon writing $g=U+jV$ with $U,V$ complex, we see $$\label{cond so+ bis} \mathbb{G}'=\left\{U+jV\in \operatorname{GL}(2n,\mathbb{H})\,\big|\,\bar U^tU+\bar V^tV=I_{2n},\, U^tV=V^t U\right\}.$$ Since $g\in\mathbb{G}'$ is equivalent to $(\bar g^t)^{-1}=-igi$, the condition $\theta(g)=g$ is equivalent to $ig=gi$. Writing $g=U+jV$ with $U,V$ complex, we see $$\label{cond k so+ bis} \mathbb{K}'=\left\{U+j0\in \operatorname{GL}(2n,\mathbb{H})\,\big|\,\bar U^tU=I_{2n} \right\}.$$ So $\mathbb{K}'$ is the usual $\operatorname{U}(2n)$, embedded into $\operatorname{GL}(2n,\mathbb{H})$ as $\operatorname{U}(2n)+j0$. To show an explicit connection between $\mathbb{G}$ and $\mathbb{G}'$ and also between $\mathbb{K}$ and $\mathbb{K}'$, we note that the matrix $$T_n=\frac{1}{\sqrt{2}}\begin{pmatrix}I_n&-iI_n\cr jI_n&-kI_n\end{pmatrix}$$ satisfies $\bar T_n^t i T_n=J_n$. It follows that $$T_n\mathbb{G}T_n^{-1} = \mathbb{G}',$$ and since $\theta T_n=T_n$, also $T_n\mathbb{K}T_n^{-1} = \mathbb{K}'=\operatorname{U}(2n)$. Moreover, $T_n(\mathbb{P}\cap\mathbb{K}) T_n^{-1}$ is the standard $\operatorname{Sp}(n)$ inside $\operatorname{U}(2n)$, embedded as matrices of the form $\left(\begin{smallmatrix}U&-\bar V\cr V&\bar U\end{smallmatrix}\right)$. Now Proposition [Proposition 1](#k trans){reference-type="ref" reference="k trans"} implies the following result, which can also be found in [@CN]. **Proposition 1**. *The skew Hermitian quaternionic Lagrangian Grassmannian $\mathbb{G}/\mathbb{P}=\operatorname{LGr}^*(\mathbb{H}^{2n})$ is diffeomorphic to $\operatorname{U}(2n)/\operatorname{Sp}(n)$.* **Corollary 1**.
*The cohomology ring (with complex coefficients) of the skew Hermitian quaternionic Lagrangian Grassmannian $\operatorname{LGr}^*(\mathbb{H}^{2n})\cong \operatorname{U}(2n)/\operatorname{Sp}(n)$ is described by Theorem [Theorem 1](#gen rel basis A/C){reference-type="ref" reference="gen rel basis A/C"}.* Finally, we describe the group $G_\mathbb{R}=\mathbb{G}^{\sigma\theta}$. Since $\sigma\theta(g)=I_{n,n}(\bar g^t)^{-1}I_{n,n}$ and since $g\in\mathbb{G}$ is equivalent to $(\bar g^t)^{-1}=J_ngJ_n^{-1}$, $$\sigma\theta (g)=I_{n,n}J_ngJ_n^{-1}I_{n,n}=D_n g D_n,\qquad g\in\mathbb{G}.$$ Thus $\sigma\theta(g)=g$ is equivalent to $gD_n=D_ng$. It follows that $$\mathbb{G}^{\sigma\theta}=\left\{g=\begin{pmatrix}A&B\cr B&A\end{pmatrix}\,\big|\,\bar A^tB=\bar B^tA,\,\bar A^tA-\bar B^tB=I_n\right\}.$$ Conjugating $\left(\begin{smallmatrix}A&B\cr B&A\end{smallmatrix}\right)$ by $\left(\begin{smallmatrix}I_n&I_n\cr I_n&-I_n\end{smallmatrix}\right)$ we get the matrix $\left(\begin{smallmatrix}A+B&0\cr 0&A-B\end{smallmatrix}\right)$, and the conditions $\bar A^tB=\bar B^tA,\,\bar A^tA-\bar B^tB=I_n$ imply $\overline{(A+B)^t}(A-B)=I_n$, so $A-B=(\overline{(A+B)^t})^{-1}$. Conversely, starting from the matrix $\left(\begin{smallmatrix}Z&0\cr 0& (\bar Z^t)^{-1}\end{smallmatrix}\right)$ and setting $A=\frac{1}{2}(Z+(\bar Z^t)^{-1})$, $B=\frac{1}{2}(Z-(\bar Z^t)^{-1})$, we get $\bar A^tB=\bar B^tA,\,\bar A^tA-\bar B^tB=I_n$. Thus $$\mathbb{G}^{\sigma\theta}\cong \left\{\begin{pmatrix}Z&0\cr 0&(\bar Z^t)^{-1}\end{pmatrix}\,\big|\,Z\in \operatorname{GL}(n,\mathbb{H})\right\} \cong \operatorname{GL}(n,\mathbb{H})\quad \text{via}\ \begin{pmatrix}Z&0\cr 0&(\bar Z^t)^{-1}\end{pmatrix}\leftrightarrow Z.$$ ## The quadric cases In this subsection $\mathbb{F}$ is equal to $\mathbb{R}$ or $\mathbb{C}$. The following is taken from [@Thor Chapter 4.4]. Let $f$ be a nondegenerate symmetric bilinear form on $\mathbb{F}^{n+2}$.
The quadric $Q_f(\mathbb{F})$ is defined to be the subset of the projective space $P^{n+1}(\mathbb{F})$: $$Q_f(\mathbb{F}) = \{ x = (x_1 : \ldots : x_{n+2}) \in P^{n+1}(\mathbb{F}) \:| \:f(x,x) =0 \}.$$ If $\mathbb{F}=\mathbb{R}$ then $f$ has normal form with matrix $I_{p+1,q+1}$ (if the form is definite, $Q_f(\mathbb{R})$ does not contain projective lines) and we denote $Q_f(\mathbb{R})$ by $Q_{p,q}(\mathbb{R})$. If $\mathbb{F}=\mathbb{C}$ then all $f$ are equivalent and we denote $Q_f(\mathbb{C})$ by $Q_{n}(\mathbb{C})$. The group $\mathbb{G}=\operatorname{SO}(p+1,q+1)_e$ acts transitively on the quadric $Q_{p,q}(\mathbb{R})$ with parabolic $\mathbb{P}= \mathrm{Stab}(1:0: \ldots :1:0: \ldots:0 )$. The maximal compact subgroup of $\mathbb{G}$ is $G=\mathbb{K}= \operatorname{SO}(p+1) \times \operatorname{SO}(q+1)$ and $K =\mathbb{K}\cap \mathbb{P}= S(\operatorname{O}(p) \times \operatorname{O}(q))$. In this setting $Q_{p,q}(\mathbb{R})$ is equal to the symmetric space $$G/K = \operatorname{SO}(p+1) \times \operatorname{SO}(q+1) / S(\operatorname{O}(p) \times \operatorname{O}(q)), \qquad p+q > 2.$$ This symmetric space is not irreducible; it is doubly covered by the product of spheres $S^p \times S^q$, but it is indecomposable. When $\mathbb{F}=\mathbb{C}$, the group $\operatorname{SO}_{n+2}(\mathbb{C})$ acts transitively on $Q_{n}(\mathbb{C})$ and the parabolic $\mathbb{P}= \mathrm{Stab}(1:i: \ldots :0)$ has abelian unipotent radical; furthermore $$Q_n(\mathbb{C}) =\operatorname{SO}_{n+2}(\mathbb{C}) /\mathrm{Stab}(1:i: \ldots :0) \simeq \operatorname{SO}(n+2)/\operatorname{SO}(n) \times \operatorname{SO}(2), \quad n \geq 3.$$ As a compact symmetric space, $Q_n(\mathbb{C})$ coincides with $\operatorname{SO}(n + 2)/\operatorname{SO}(n) \times \operatorname{SO}(2)$, which is a double cover of the Grassmannian of $2$-planes in $\mathbb{R}^{n+2}$. We leave the calculation of the cohomology of double covers of Grassmannians and these quadrics to future work.
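The double cover by $S^p\times S^q$ can be seen concretely: for $f$ in its normal form $I_{p+1,q+1}$, a pair of unit vectors $(u,v)$ satisfies the quadric equation, and $\pm(u,v)$ determine the same projective point. The following numpy sketch illustrates this (not part of the argument; the choice $p=3$, $q=2$ is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
p, q = 3, 2

# The form f in its real normal form, with matrix I_{p+1,q+1}.
f = np.diag([1.0] * (p + 1) + [-1.0] * (q + 1))

# A point of S^p x S^q: unit vectors u in R^{p+1} and v in R^{q+1}.
u = rng.standard_normal(p + 1); u /= np.linalg.norm(u)
v = rng.standard_normal(q + 1); v /= np.linalg.norm(v)
x = np.concatenate([u, v])

# (u, v) lands on the quadric: f(x, x) = |u|^2 - |v|^2 = 0.
print(np.isclose(x @ f @ x, 0.0))

# (u, v) and (-u, -v) define the same point of P^{p+q+1}(R): the rank-one
# projector onto the spanned line is identical, so the map is 2-to-1.
print(np.allclose(np.outer(x, x), np.outer(-x, -x)))
```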
# The structure of $C(\mathfrak{p})^K$ and $(\textstyle{\bigwedge}\mathfrak{p})^K$ {#sec spin} ## The decomposition of the spin module {#K dec general} Let $G/K$ be a compact symmetric space corresponding to an involution $\sigma$ of $G$ and let $\mathfrak{g}=\mathfrak{k}\oplus\mathfrak{p}$ be the decomposition of the complexified Lie algebra $\mathfrak{g}$ of $G$ into eigenspaces of $\sigma$. In particular, $\mathfrak{k}$ is the complexified Lie algebra of $K$. We note that if $G_\mathbb{R}/K$ is the noncompact dual of $G/K$, then $\sigma$ corresponds to the Cartan involution of $G_\mathbb{R}$ ($\sigma$ coincides with $\theta$ on $G_{\mathbb{R}}$). We define the Clifford algebra $C(\mathfrak{p})$ using the nondegenerate invariant symmetric bilinear form $B$ on $\mathfrak{g}$ obtained by extending the Killing form over the center of $\mathfrak{g}$. Alternatively, for matrix groups from our table, we can replace $B$ by the trace form $\operatorname{tr}XY$. Recall that $C(\mathfrak{p})$ is the associative unital algebra generated by $\mathfrak{p}$, with relations $XY+YX=2B(X,Y)$, $X,Y\in\mathfrak{p}$. Let $S$ be a spin module for $C(\mathfrak{p})$. Recall that $S$ is constructed as follows. Let $\mathfrak{p}^+$ and $\mathfrak{p}^-$ be two maximal isotropic subspaces of $\mathfrak{p}$, dual under $B$. Let $S=\textstyle{\bigwedge}\mathfrak{p}^+$, with elements of $\mathfrak{p}^+\subset C(\mathfrak{p})$ acting by wedging, and elements of $\mathfrak{p}^-\subset C(\mathfrak{p})$ acting by contracting. If $\dim\mathfrak{p}$ is even, $\mathfrak{p}=\mathfrak{p}^+\oplus\mathfrak{p}^-$ and hence this determines $S$ completely. Moreover, $S$ is the only irreducible $C(\mathfrak{p})$-module, and $C(\mathfrak{p})=\operatorname{End}S$. If $\dim\mathfrak{p}$ is odd, then $\mathfrak{p}=\mathfrak{p}^+\oplus\mathfrak{p}^-\oplus\mathbb{C}Z$ where $Z$ is an element of $\mathfrak{p}$ not contained in $\mathfrak{p}^+\oplus\mathfrak{p}^-$, such that $B(Z,Z)=1$. 
Now we can make $Z$ act on $\textstyle{\bigwedge}\mathfrak{p}^+$ in two different ways; it can act by $1$ on $\textstyle{\bigwedge}^{\operatorname{even}}\mathfrak{p}^+$ and by $-1$ on $\textstyle{\bigwedge}^{\operatorname{odd}}\mathfrak{p}^+$, or by $-1$ on $\textstyle{\bigwedge}^{\operatorname{even}}\mathfrak{p}^+$ and by $1$ on $\textstyle{\bigwedge}^{\operatorname{odd}}\mathfrak{p}^+$. In this way we get two inequivalent $C(\mathfrak{p})$-modules $S_1$ and $S_2$. These are the only irreducible $C(\mathfrak{p})$-modules and $C(\mathfrak{p})=\operatorname{End}S_1\oplus\operatorname{End}S_2$. In the following, $S$ denotes either one of these two modules. For more details about Clifford algebras, spin modules, and also pin and spin groups, see [@HP Ch.2]. Since the pin group $\operatorname{Pin}(\mathfrak{p})$ is contained in $C(\mathfrak{p})$, the pin double cover $\widetilde{K}$ of $K$ acts on $S$. Recall that $\widetilde{K}$ is obtained from the following pullback diagram $$\begin{CD} \widetilde{K}@>>> \operatorname{Pin}(\mathfrak{p}) \\ @VVV @VVV \\ K @>>> \operatorname{O}(\mathfrak{p}) \end{CD}$$ where the map $K\to \operatorname{O}(\mathfrak{p})$ is given by the adjoint action of $K$ on $\mathfrak{p}$. It now follows that $$\label{end k s} C(\mathfrak{p})^K=C(\mathfrak{p})^{\widetilde{K}}=\begin{cases} \operatorname{End}_{\widetilde{K}}S,&\dim\mathfrak{p}\text{ even},\cr \operatorname{End}_{\widetilde{K}}S_1\oplus\operatorname{End}_{\widetilde{K}}S_2,&\dim\mathfrak{p}\text{ odd}. \end{cases}$$ Since the algebra $C(\mathfrak{p})^K$ and its graded version $(\textstyle{\bigwedge}\mathfrak{p})^K$ are our primary objects of interest, we are led to study the $\widetilde{K}$-decomposition of $S$. We first study the decomposition of $S$ under the complexified Lie algebra $\mathfrak{k}$ of $\widetilde{K}$.
This Lie algebra acts on $S$ through the map $\alpha:\mathfrak{k}\to C(\mathfrak{p})$, which is defined as the action map $\mathfrak{k}\to\mathfrak{so}(\mathfrak{p})$ followed by the Chevalley map (i.e., the skew symmetrization) $\mathfrak{so}(\mathfrak{p})\cong\textstyle{\bigwedge}^2\mathfrak{p}\hookrightarrow C(\mathfrak{p})$. Explicitly, if $b_i$ is a basis of $\mathfrak{p}$ with dual basis $d_i$ with respect to the form $B$, then $$\label{alpha def} \alpha(X)=\frac{1}{4}\sum_i[X,b_i]d_i,\quad X\in\mathfrak{k}.$$ (See [@HP §2.3.3]; the difference in sign comes from using different conventions to define the Clifford algebra.) Let $\mathfrak{t}_0$ be a Cartan subalgebra of the (real) Lie algebra $\mathfrak{k}_0$ of $K$ and let $\mathfrak{t}=(\mathfrak{t}_0)_\mathbb{C}$. Let $\Delta^+(\mathfrak{g},\mathfrak{t})\supseteq\Delta^+(\mathfrak{k},\mathfrak{t})$ be compatible choices of positive roots for $(\mathfrak{g},\mathfrak{t})$ respectively $(\mathfrak{k},\mathfrak{t})$. Let $\rho$ respectively $\rho_\mathfrak{k}$ be the corresponding half sums of positive roots. Let $W_\mathfrak{g}$ respectively $W_\mathfrak{k}$ be the Weyl groups of $\Delta(\mathfrak{g},\mathfrak{t})$ respectively $\Delta(\mathfrak{k},\mathfrak{t})$. Let $W^1_{\mathfrak{g},\mathfrak{k}}$ be the set of minimal length representatives of $W_{\mathfrak{k}}$-cosets in $W_{\mathfrak{g}}$. 
Alternatively, $$W^1_{\mathfrak{g},\mathfrak{k}}=\{\sigma\in W_\mathfrak{g}\,\big|\,\sigma\rho\text{ is $\mathfrak{k}$-dominant}\}.$$ It is well known ([@Par]; see also [@HP]) that the decomposition of $S$ under the action of $\mathfrak{k}$ is given by $$\label{decomp S k} S=m\cdot\bigoplus_{\sigma\in W^1_{\mathfrak{g},\mathfrak{k}}} E_{\sigma},$$ where $E_\sigma$ denotes the irreducible finite-dimensional $\mathfrak{k}$-module with highest weight $\sigma\rho-\rho_\mathfrak{k}$, and the multiplicity $m$ is equal to $2^{[\frac{1}{2}\dim\mathfrak{a}]}$ where $\mathfrak{a}$ is the centralizer of $\mathfrak{t}$ in $\mathfrak{p}$ (so that $\mathfrak{h}=\mathfrak{t}\oplus\mathfrak{a}$ is a Cartan subalgebra of $\mathfrak{g}$). Since $m$ is exactly the dimension of the spin module for the Clifford algebra $C(\mathfrak{a})$, [\[end k s\]](#end k s){reference-type="eqref" reference="end k s"} and [\[decomp S k\]](#decomp S k){reference-type="eqref" reference="decomp S k"} imply that $$\label{abstract dec} C(\mathfrak{p})^\mathfrak{k}\cong C(\mathfrak{a})\otimes\operatorname{Pr}(S),$$ where $\operatorname{Pr}(S)$ is the algebra spanned by the $\mathfrak{k}$-equivariant projections $\operatorname{pr}_\sigma:S\to m\cdot E_\sigma$, $\sigma\in W^1_{\mathfrak{g},\mathfrak{k}}$. If $K$ is connected, then the adjoint action maps $K$ into $\operatorname{SO}(\mathfrak{p})$, so $\widetilde{K}$ is the spin double cover of $K$, $$\begin{CD} \widetilde{K}@>>> \operatorname{Spin}(\mathfrak{p}) \\ @VVV @VVV \\ K @>>> \operatorname{SO}(\mathfrak{p}). \end{CD}$$ If the double covering map $\widetilde{K}\to K$ does not split, then $\widetilde{K}$ is connected and [\[decomp S k\]](#decomp S k){reference-type="eqref" reference="decomp S k"} gives a decomposition of $S$ with respect to $\widetilde{K}$. If the covering $\widetilde{K}\to K$ splits, then $\widetilde{K}=K\times\mathbb{Z}_2$, where the generator $z$ of $\mathbb{Z}_2$ maps to $1\in K$ under the covering map.
This implies that $z$ maps to the preimage in $\operatorname{Spin}(\mathfrak{p})$ of $1\in \operatorname{SO}(\mathfrak{p})$, that is to $\pm 1\in C(\mathfrak{p})$. Thus $z$ acts by the scalar $1$ or $-1$ on $S$, in particular it preserves the decomposition [\[decomp S k\]](#decomp S k){reference-type="eqref" reference="decomp S k"}, and hence this decomposition is also a decomposition of the $\widetilde{K}$-module $S$ into irreducibles. To conclude: **Proposition 1**. *If $K$ is connected, then the $\widetilde{K}$-decomposition of $S$ into irreducibles is the same as the $\mathfrak{k}$-decomposition [\[decomp S k\]](#decomp S k){reference-type="eqref" reference="decomp S k"}.* In general, the $\mathfrak{k}$-decomposition is the same as the decomposition under $\widetilde{K}_e$, the connected component of the identity in $\widetilde{K}$, but the $\widetilde{K}$-action may combine several irreducible $\mathfrak{k}$-modules into one irreducible $\widetilde{K}$-module. More precisely, the component group $\widetilde{K}/\widetilde{K}_e\cong K/K_e$ acts by permuting the components $E_\sigma$ of $S$, and the $E_\sigma$ combining to produce an irreducible $\widetilde{K}$-module belong to the same orbit of $K/K_e$. (Recall that $K_e$ is a normal subgroup of $K$ and that $K/K_e$ is a finite group.) We will treat the case of disconnected $K$ in Subsections [3.7](#subsec so/soo){reference-type="ref" reference="subsec so/soo"} and [3.8](#subsec u/o){reference-type="ref" reference="subsec u/o"} below. Before that we describe the structure of $C(\mathfrak{p})^\mathfrak{k}$ more precisely. ## Top degree element and Poincaré duality {#top elt} We identify $C(\mathfrak{p})$ and $\textstyle{\bigwedge}\mathfrak{p}$ using the Chevalley map, and think of them as one vector space with two different multiplications. **Proposition 1**. 
*Let $T$ be the unique (up to a scalar multiple) element of the top wedge of $\mathfrak{p}$; let $d=\dim\mathfrak{p}$ denote the degree of $T$.* *(1) $T$ squares to a nonzero constant with respect to Clifford multiplication; consequently we can rescale $T$ and assume that $T^2=1$.* *(2) If $d$ is odd, $T$ is in the center of $C(\mathfrak{p})$. If $d$ is even, $T$ commutes with $C(\mathfrak{p})_{\operatorname{even}}$.* *(3) $T$ is $\mathfrak{k}$-invariant (with respect to the adjoint action).* *(4) Clifford multiplication by $T$ from the left is a linear isomorphism from $\textstyle{\bigwedge}^j\mathfrak{p}$ to $\textstyle{\bigwedge}^{d-j}\mathfrak{p}$, for any $j=0,1,\dots,d$. This isomorphism, denoted by $*$, preserves the $\mathfrak{k}$-invariants and therefore gives an isomorphism from $(\textstyle{\bigwedge}^j\mathfrak{p})^\mathfrak{k}$ to $(\textstyle{\bigwedge}^{d-j}\mathfrak{p})^\mathfrak{k}$ for any $j$.* *(5) The isomorphism $*$ can up to sign be expressed as $x\mapsto \iota_xT$.* *(6) For any $x,y\in \textstyle{\bigwedge}^j\mathfrak{p}$, $$x\wedge *y = B(x,y)T.$$ In other words, $*$ is the usual Hodge star operator.* *Proof.* Let $Z_1,\dots,Z_d$ be an orthonormal basis of $\mathfrak{p}$. Then, up to a nonzero scalar, $T=Z_1\cdots Z_d$. It follows that $T^2$ is a nonzero constant, since $Z_1\cdots Z_d$ squares to $\pm 1$. This proves (1). (2) is a straightforward computation: one checks that $Z_1\cdots Z_d$ commutes with all $Z_j$ if $d$ is odd, and with all $Z_jZ_k$ if $d$ is even. To prove (3), we note that the adjoint action of $X\in\mathfrak{k}$ on $C(\mathfrak{p})$ is the same as the Clifford commutator with $\alpha(X)$. Since $\alpha$ maps $\mathfrak{k}$ into $C(\mathfrak{p})_{\operatorname{even}}$, the claim follows from (2). To prove (4), we note that if we set $Z_I=Z_{i_1}\dots Z_{i_a}$ for $I=\{i_1,\dots,i_a\}\subseteq\{1,\dots,d\},$ then $T Z_I=\pm Z_{I^c},$ where $I^c=\{1,\dots,d\}\setminus I$. This implies (4). 
(5) follows from the fact that Clifford multiplication by $y\in\mathfrak{p}$ equals $\iota_y+{\varepsilon}_y$ where ${\varepsilon}_y$ denotes wedging by $y$. Since $T$ is of top degree, it is annihilated by all ${\varepsilon}_y$, $y\in\mathfrak{p}$, and this implies the claim. To prove (6), we note that both sides of the equation are bilinear in $x$ and $y$, so we can assume $x=Z_I$, $y=Z_J$ for some $I,J\subseteq\{1,\dots,d\}$. If $I\neq J$, both sides of the equation are zero. Finally, if $I=J$, then we are to check that $Z_I\wedge *Z_I=T$, which is a straightforward computation. ◻ **Lemma 1**. *Let $x$ be any element of $C(\mathfrak{p})$ such that $x^2=1$ (with respect to Clifford multiplication). Then $B(x,x)=1$, where $B$ is the extended Killing form on $C(\mathfrak{p})\cong\textstyle{\bigwedge}\mathfrak{p}$. Consequently the elements $1$ and $x$ span a subalgebra of $C(\mathfrak{p})$ isomorphic to the Clifford algebra on the one-dimensional space $\mathbb{C}x$.* *In particular, if $T$ is the top element of $C(\mathfrak{p})$ as above, rescaled so that $T^2=1$, then $B(T,T)=1$ and $\operatorname{span}_\mathbb{C}(1,T)$ is a subalgebra of $C(\mathfrak{p})$ isomorphic to the Clifford algebra $C(\mathbb{C}T)$.* *Proof.* This follows from the fact that the constant term of $x^2$ is $\iota_x x$, so $$1=B(1,1)=B(\iota_x x,1)=B(x,x\wedge 1)=B(x,x).$$ ◻ ## $C(\mathfrak{p})^\mathfrak{k}$ and $(\textstyle{\bigwedge}\mathfrak{p})^\mathfrak{k}$ in the equal rank cases {#cpk eq rk} The equal rank cases on our list are: $$\begin{aligned} & G/K=\operatorname{U}(p+q)/\operatorname{U}(p)\times\operatorname{U}(q)\quad &\text{ (Subsection \ref{real gras})};\\ & G/K=\operatorname{Sp}(p+q)/\operatorname{Sp}(p)\times\operatorname{Sp}(q)\quad &\text{ (Subsection \ref{real gras})};\\ & G/K=\operatorname{SO}(2p+2q)/\mathrm{S}(\operatorname{O}(2p)\times\operatorname{O}(2q))\quad &\text{ (Subsection \ref{real gras})}; \\ &
G/K=\operatorname{SO}(2p+2q+1)/\mathrm{S}(\operatorname{O}(2p)\times\operatorname{O}(2q+1))\quad &\text{ (Subsection \ref{real gras})}; \\ & G/K=\operatorname{Sp}(n)/\operatorname{U}(n) \quad &\text{ (Subsection \ref{real lagr grass})}; \\ & G/K=\operatorname{SO}(2n)/\operatorname{U}(n) \quad &\text{ (Subsection \ref{real orth lagr grass})}. \end{aligned}$$ In each of these cases the situation is as in Kostant's email (see the introduction). In other words, the spin module $S$ is multiplicity free under $\mathfrak{k}$ and since $\dim\mathfrak{p}$ is even, $C(\mathfrak{p})=\operatorname{End}S$. Therefore Schur's lemma implies that $C(\mathfrak{p})^\mathfrak{k}=\operatorname{End}_\mathfrak{k}S=\operatorname{Pr}(S)$, the algebra spanned by the $\mathfrak{k}$-equivariant projections to the $\mathfrak{k}$-irreducible constituents of the spin module. The map $\alpha:\mathfrak{k}\to C(\mathfrak{p})$ from [\[alpha def\]](#alpha def){reference-type="eqref" reference="alpha def"} extends to $U(\mathfrak{k})$ and its restriction to the center $Z(\mathfrak{k})$ of $U(\mathfrak{k})$ is the algebra homomorphism $$\alpha_\mathfrak{k}:Z(\mathfrak{k})\to C(\mathfrak{p})^\mathfrak{k}.$$ (The notation $\alpha_\mathfrak{k}$ is to distinguish this map from the analogous map $\alpha_K$ on the level of $K$-invariants; for connected $K$, there is no difference between these two maps.) Since the $\mathfrak{k}$-infinitesimal character of $E_\sigma$ corresponds to $\sigma\rho$ under the Harish-Chandra isomorphism $Z(\mathfrak{k})\cong\mathbb{C}[\mathfrak{t}^*]^{W_\mathfrak{k}}$, we can identify $\alpha_\mathfrak{k}$ with $$\label{descr alpha k} \alpha_\mathfrak{k}:\mathbb{C}[\mathfrak{t}^{*}]^{W_\mathfrak{k}}\to C(\mathfrak{p})^\mathfrak{k},\qquad \alpha_\mathfrak{k}(P)=\sum_{\sigma\in W^1_{\mathfrak{g},\mathfrak{k}} }P(\sigma\rho) \operatorname{pr}_\sigma,$$ where $\operatorname{pr}_\sigma$ denotes the $\mathfrak{k}$-equivariant projection $S\to E_\sigma$. **Proposition 1**. 
*The map $\alpha_\mathfrak{k}$ of [\[descr alpha k\]](#descr alpha k){reference-type="eqref" reference="descr alpha k"} is a filtered algebra homomorphism, which doubles the degree. Here the filtration on the algebra $\mathbb{C}[\mathfrak{t}^*]^{W_\mathfrak{k}}$ is induced by the grading, while the filtration on the algebra $C(\mathfrak{p})^\mathfrak{k}$ is inherited from $C(\mathfrak{p})$.* *Proof.* The claim follows from the fact that $\alpha_\mathfrak{k}$ is the restriction of $\alpha:U(\mathfrak{k})\to C(\mathfrak{p})$ given by extending [\[alpha def\]](#alpha def){reference-type="eqref" reference="alpha def"}. ◻ In the next subsection we consider a more general setting. We will in particular prove that the map [\[descr alpha k\]](#descr alpha k){reference-type="eqref" reference="descr alpha k"} is onto; consequently, the map $\operatorname{gr}\alpha_\mathfrak{k}:\mathbb{C}[\mathfrak{t}^*]^{W_\mathfrak{k}}\to (\textstyle{\bigwedge}\mathfrak{p})^\mathfrak{k}$ is also onto. Moreover, we will give a description of $\ker\alpha_\mathfrak{k}$ and of $\ker\operatorname{gr}\alpha_\mathfrak{k}=\operatorname{gr}\ker\alpha_\mathfrak{k}$. It is clear from [\[descr alpha k\]](#descr alpha k){reference-type="eqref" reference="descr alpha k"} that $\ker\alpha_\mathfrak{k}$ consists of polynomials vanishing at all $\sigma\rho$, $\sigma\in W^1_{\mathfrak{g},\mathfrak{k}}$. Thus $\ker\alpha_\mathfrak{k}$ contains all $W_\mathfrak{g}$-invariant polynomials on $\mathfrak{t}^*$ that vanish at $\rho$ (and thus automatically on all $\sigma\rho$); we will prove that these polynomials in fact generate $\ker\alpha_\mathfrak{k}$. Likewise, we will see that $\ker\operatorname{gr}\alpha_\mathfrak{k}$ is generated by $W_\mathfrak{g}$-invariant polynomials on $\mathfrak{t}^*$ vanishing at $0$. ## Relative coinvariant algebra and filtered deformations {#rel coinv} Let $W$ be a finite group inside $\operatorname{GL}({\mathfrak t})$, with subgroup $H \subset W$.
Let $\nu \in {\mathfrak t}^*$ be a point such that $\mathrm{Stab}_W(\nu) = \{ \operatorname{id}\}$. Let $\mathbb{C}[W]$ denote the algebra of functions from $W$ to $\mathbb{C}$ with pointwise multiplication. Give $\mathbb{C}[W]$ the basis $\{ f_w: w \in W\}$, where $f_w(w') = \delta_{w,w'}$. **Definition 1**. Define $\operatorname{Ev}_\nu: \mathbb{C}[\mathfrak{t}^*] \to \mathbb{C}[W]$ by $$\operatorname{Ev}_\nu(p) = \sum_{w \in W} p(w \nu) f_w.$$ Restricting $\operatorname{Ev}_\nu$ to $\mathbb{C}[\mathfrak{t}^*]^H$ we define $\operatorname{Ev}_\nu^H: \mathbb{C}[\mathfrak{t}^*]^H \to \mathbb{C}[W]^H=\mathbb{C}[W/H]$. **Lemma 1**. *The map $\operatorname{Ev}_\nu$ is a surjective $W$-module and algebra homomorphism, and $\operatorname{Ev}_\nu^H$ is a surjective algebra homomorphism.* *Proof.* Clearly $\operatorname{Ev}_\nu$ is a $W$-module homomorphism. Let $p_\nu$ be a linear polynomial that vanishes at $\nu$ and is nonzero at $w \nu$ for all $w \neq 1$. The polynomial $\prod_{w \in W \setminus \{ 1 \}} w p_\nu$ is nonzero at $\nu$ and vanishes at every other element of the orbit of $\nu$. Suitably scaled, $\operatorname{Ev}_\nu(\prod_{w \in W \setminus \{ 1 \}} w p_\nu) = f_{1}.$ Since $f_{1}$ is a cyclic generator of the module $\mathbb{C}[W]$ and lies in the image of $\operatorname{Ev}_\nu$, the homomorphism is surjective. Since $f_{w}f_{w'} = \delta_{ww'}f_w$, we get $\operatorname{Ev}_\nu(p)\operatorname{Ev}_\nu(q) = \sum_{w\in W}p(w\nu)f_w\sum_{w'\in W}q(w'\nu)f_{w'} = \sum_{w\in W}(pq)(w\nu)f_w = \operatorname{Ev}_\nu(pq)$. Taking $H$-invariants on both sides proves that $\operatorname{Ev}_\nu^H$ is a surjective algebra homomorphism. ◻ We define two ideals of $\mathbb{C}[\mathfrak{t}^*]$: $I_{W,+}$ is the graded ideal generated by $$\{p \in \mathbb{C}[\mathfrak{t}^*]^W: \deg p >0 \} = \{p \in \mathbb{C}[\mathfrak{t}^*]^W: p(0) = 0\},$$ and $I_{W,\nu}$ is the filtered ideal generated by $\{p \in \mathbb{C}[\mathfrak{t}^*]^W: p(\nu)=0 \}$.
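The map $\operatorname{Ev}_\nu$ can be illustrated in a toy instance (our choice, not from the paper): take $W=S_3$ permuting the coordinates of $\mathfrak{t}^*=\mathbb{C}^3$ and $\nu=(2,1,0)$, so that $\mathrm{Stab}_W(\nu)$ is trivial. The following sketch checks multiplicativity and surjectivity numerically:

```python
import itertools
import numpy as np

# W = S_3 acting on C^3 by permuting coordinates; the orbit of ν = (2, 1, 0)
# consists of six distinct points, so Stab_W(ν) is trivial.
nu = (2.0, 1.0, 0.0)
orbit = [np.array(pt) for pt in itertools.permutations(nu)]

def Ev(p):
    """Ev_ν(p) = (p(wν))_{w in W}; C[W] carries the pointwise product."""
    return np.array([p(*x) for x in orbit])

p = lambda x, y, z: x * y + z
q = lambda x, y, z: x - 2 * z
pq = lambda x, y, z: p(x, y, z) * q(x, y, z)

# Ev_ν is an algebra homomorphism: Ev(p) Ev(q) = Ev(pq) pointwise.
print(np.allclose(Ev(p) * Ev(q), Ev(pq)))

# Surjectivity: evaluating the monomials x^a y^b z^c (0 <= a, b, c <= 2) on
# the orbit already spans C[W], i.e. the evaluation matrix has rank |W| = 6.
M = np.array([[x**a * y**b * z**c
               for a in range(3) for b in range(3) for c in range(3)]
              for (x, y, z) in orbit])
print(np.linalg.matrix_rank(M) == 6)
```

The full-rank check is a concrete instance of the interpolation argument in the proof of the lemma: one can build a polynomial that is nonzero at $\nu$ and zero on the rest of the orbit.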
**Lemma 1**. *The ideal $I_{W,+}$ is equal to $\operatorname{gr}I_{W, \nu}$.* *Proof.* $I_{W,+}$ is generated by $\mathbb{C}[\mathfrak{t}^*]^W_+=\{p \in \mathbb{C}[\mathfrak{t}^*]^W: p(0) = 0\}$ and $I_{W,\nu}$ is generated by $\mathbb{C}[\mathfrak{t}^*]^W_\nu =\{p \in \mathbb{C}[\mathfrak{t}^*]^W: p(\nu) = 0\}$, both of which are of codimension $1$ in $\mathbb{C}[\mathfrak{t}^*]^W$. Furthermore, $\operatorname{gr}(\mathbb{C}[\mathfrak{t}^*]^W_\nu)$ is a codimension one graded ideal of $\mathbb{C}[\mathfrak{t}^*]^W$. Since $\mathbb{C}[\mathfrak{t}^*]^W_+$ is the only graded ideal of codimension one, $\operatorname{gr}\mathbb{C}[\mathfrak{t}^*]^W_\nu = \mathbb{C}[\mathfrak{t}^*]^W_+$. ◻ **Lemma 1**. *The kernel of $\operatorname{Ev}_\nu$ is equal to $I_{W,\nu}$, and $\ker \operatorname{Ev}_\nu^H = I_{W,\nu}^H$.* *Proof.* The kernel of $\operatorname{Ev}_\nu$ consists precisely of the polynomials that vanish on the full $W$-orbit of $\nu$. The ideal $I_{W,\nu}$ is generated by $W$-invariant polynomials that vanish at $\nu$; since they are $W$-invariant, they also vanish on the full $W$-orbit. Hence $\operatorname{ker}\operatorname{Ev}_\nu \supset I_{W,\nu}$. The quotient of $\mathbb{C}[\mathfrak{t}^*]$ by $I_{W,+}$ is the coinvariant algebra, which, in particular, is of dimension $|W|$. Lemma [Lemma 1](#l::grI){reference-type="ref" reference="l::grI"} then shows that the codimension of $I_{W,\nu}$ is $|W|$. Since $\operatorname{Ev}_\nu$ is surjective onto $\mathbb{C}[W]$, $\ker \operatorname{Ev}_\nu$ is also of codimension $|W|$, and hence $\operatorname{ker}\operatorname{Ev}_\nu = I_{W,\nu}$. The second statement follows by taking $H$-invariants of both sides. ◻ The polynomial algebras $\mathbb{C}[\mathfrak{t}^*]$ (resp. $\mathbb{C}[\mathfrak{t}^*]^H$) are graded; therefore the map $\operatorname{Ev}_\nu$ endows $\mathbb{C}[W]$ (resp. $\mathbb{C}[W/H]$) with a filtration. **Theorem 1**.
*Let $W \subset \operatorname{GL}({\mathfrak t})$ be such that $\mathbb{C}[\mathfrak{t}^*]$ is a free $\mathbb{C}[\mathfrak{t}^*]^W$-module of rank $|W|$ and let $H$ be any subgroup of $W$. With the filtration on $\mathbb{C}[W]$ endowed by $\operatorname{Ev}_\nu$, the associated graded algebra of $\mathbb{C}[W]$ is the coinvariant algebra $\mathbb{C}[\mathfrak{t}^*] / \langle \mathbb{C}[\mathfrak{t}^*]^W_+\rangle$; similarly $\operatorname{gr}(\mathbb{C}[W/H] )\cong \mathbb{C}[\mathfrak{t}^*]^H / \langle \mathbb{C}[\mathfrak{t}^*]^W_+\rangle_{\mathbb{C}[\mathfrak{t}^*]^H} .$* *Proof.* Lemmas [Lemma 1](#l::grI){reference-type="ref" reference="l::grI"} and [Lemma 1](#l::kerev){reference-type="ref" reference="l::kerev"} prove that $\operatorname{gr}\ker \operatorname{Ev}_\nu = I_{W,+}$ and $\operatorname{gr}(\ker \operatorname{Ev}_\nu^H) = I_{W,+}^H$. Since $\mathbb{C}[W] = \mathbb{C}[\mathfrak{t}^*] /\ker(\operatorname{Ev}_\nu)$, we get $\operatorname{gr}\mathbb{C}[W] = \mathbb{C}[\mathfrak{t}^*] / \operatorname{gr}(\ker(\operatorname{Ev}_\nu)) = \mathbb{C}[\mathfrak{t}^*] / I_{W,+}$, which is the definition of the coinvariant algebra of $W$ acting on $\mathbb{C}[\mathfrak{t}^*]$. An identical statement holds for $\operatorname{Ev}_\nu^H$. ◻ Hence for any finite group $W$ acting by complex reflections on ${\mathfrak t}$ with any subgroup $H$ we can define a filtration on $\mathbb{C}[W/H]$ such that the associated graded algebra is isomorphic to the relative coinvariant algebra of $W$ and $H$. **Corollary 1**. *With the notation of Subsections [3.1](#K dec general){reference-type="ref" reference="K dec general"} and [3.3](#cpk eq rk){reference-type="ref" reference="cpk eq rk"}, the map $\alpha_\mathfrak{k}$ is surjective onto $C(\mathfrak{p})^\mathfrak{k}$, and the kernel of $\alpha_\mathfrak{k}$ is generated by the $W_\mathfrak{g}$-invariant polynomials in $\mathbb{C}[\mathfrak{t}^*]^{W_\mathfrak{k}}$ which vanish at $\rho$; that is, $\ker\alpha_\mathfrak{k}=(I_{W_\mathfrak{g},\rho})^{W_\mathfrak{k}}$.
Furthermore, the map $\operatorname{gr}\alpha_\mathfrak{k}$ is surjective onto $(\textstyle{\bigwedge}\mathfrak{p})^\mathfrak{k}$ and the kernel of $\operatorname{gr}\alpha_\mathfrak{k}$ is $I_{W_\mathfrak{g},+}^{W_\mathfrak{k}}$.* *Proof.* The map $\alpha_\mathfrak{k}$ defined by [\[descr alpha k\]](#descr alpha k){reference-type="eqref" reference="descr alpha k"} is given by evaluation of polynomials in $\mathbb{C}[\mathfrak{t}^*]^{W_\mathfrak{k}}$ at $\sigma\rho$ for $\sigma \in W^1_{\mathfrak{g},\mathfrak{k}}$, defining an isomorphism between $C(\mathfrak{p})^\mathfrak{k}=\operatorname{Pr}(S)$ and $\mathbb{C}[W_G/W_K]$ where $\operatorname{pr}_{\sigma}$ maps to $\sum_{w \in \sigma W_\mathfrak{k}} f_{w}$. So the map $\alpha_\mathfrak{k}$ is equal to the restriction of $\operatorname{Ev}_{\rho}:\mathbb{C}[\mathfrak{t}^*] \to \mathbb{C}[W_\mathfrak{g}]$ to $W_\mathfrak{k}$-invariants: $$\operatorname{Ev}_{\rho}^{W_\mathfrak{k}}: \mathbb{C}[\mathfrak{t}^*]^{W_\mathfrak{k}} \to \mathbb{C}[W_\mathfrak{g}/W_\mathfrak{k}] = \mathbb{C}[W^1_{\mathfrak{g},\mathfrak{k}}].$$ Lemma [Lemma 1](#ev surj){reference-type="ref" reference="ev surj"} then states that $\alpha_\mathfrak{k}$ is surjective onto $C(\mathfrak{p})^\mathfrak{k}=\operatorname{Pr}(S)$ and Lemma [Lemma 1](#l::kerev){reference-type="ref" reference="l::kerev"} describes the kernel. Theorem [Theorem 1](#thm coinv){reference-type="ref" reference="thm coinv"} provides the statement for $\operatorname{gr}\alpha_\mathfrak{k}$. ◻ ## $C(\mathfrak{p})^\mathfrak{k}$ and $(\textstyle{\bigwedge}\mathfrak{p})^\mathfrak{k}$ in the almost equal rank case: $G/K=\operatorname{SO}(2p+2q+2)/S(\operatorname{O}(2p+1)\times\operatorname{O}(2q+1))$ {#subsec so odd} We call this case "almost equal rank", because $\dim\mathfrak{a}=1$ for all $p$ and $q$. To see that indeed $\dim\mathfrak{a}=1$, and also for later purposes, we first describe a Cartan subalgebra $\mathfrak{h}=\mathfrak{t}\oplus\mathfrak{a}$ of $\mathfrak{g}$.
For the Cartan subalgebra $\mathfrak{t}$ of $\mathfrak{k}$ we choose block diagonal matrices with diagonal blocks $$\label{t so odd} t_1J,\dots,t_pJ,0,t_{p+1}J,\dots,t_{p+q}J,0,$$ where $J=J_1=\left(\begin{smallmatrix}0&1\cr -1&0\end{smallmatrix}\right)$, and $t_1,\dots,t_{p+q}$ are (complex) scalars. The centralizer $\mathfrak{a}$ of $\mathfrak{t}$ in $\mathfrak{p}$ is one-dimensional, spanned by $E_{k\,k+m}-E_{k+m\,k}$ (here $k=2p+1$ and $m=2q+1$). We identify $\mathfrak{t}$ with $\mathbb{R}^{p+q}\times 0\subset\mathbb{R}^{p+q+1}$ by sending the matrix [\[t so odd\]](#t so odd){reference-type="eqref" reference="t so odd"} to $(t_1,\dots,t_{p+q},0)$, and we identify $\mathfrak{a}$ with $0\times \mathbb{R}\subset\mathbb{R}^{p+q+1}$ by sending $E_{k\,k+m}-E_{k+m\,k}$ to $(0,\dots,0,1)$. Since $\dim\mathfrak{p}$ is odd, $C(\mathfrak{p})=\operatorname{End}S_1\oplus \operatorname{End}S_2$, where $S_1$ and $S_2$ are the two spin modules. These spin modules are not isomorphic as $C(\mathfrak{p})$-modules, but they are isomorphic as modules over $C(\mathfrak{p})_{\operatorname{even}}$; in particular, they are isomorphic as $\mathfrak{k}$-modules. Moreover, the $\mathfrak{k}$-module $S=S_1=S_2$ is multiplicity free (since the multiplicity $m=2^{[\frac{1}{2}\dim\mathfrak{a}]}=1$). To understand the decomposition $C(\mathfrak{p})=\operatorname{End}S_1\oplus \operatorname{End}S_2$ more explicitly, we first note that the top element $T$ of $C(\mathfrak{p})$, which is central in $C(\mathfrak{p})$ since $\dim\mathfrak{p}$ is odd, acts as 1 on $S_1$ and as $-1$ on $S_2$. Therefore the central idempotents $$\operatorname{pr}_1=\frac{1}{2}(1+T),\qquad \operatorname{pr}_2=\frac{1}{2}(1-T)$$ satisfy the following: $\operatorname{pr}_1$ is 1 on $S_1$ and 0 on $S_2$, while $\operatorname{pr}_2$ is 0 on $S_1$ and 1 on $S_2$.
It follows that $$\operatorname{End}S_1=C(\mathfrak{p})\operatorname{pr}_1\qquad\text{ and }\qquad \operatorname{End}S_2=C(\mathfrak{p})\operatorname{pr}_2.$$ By Proposition [Proposition 1](#hodge *){reference-type="ref" reference="hodge *"}, multiplication by $T$ is an isomorphism between $C(\mathfrak{p})_{\operatorname{even}}$ and $C(\mathfrak{p})_{\operatorname{odd}}$, and moreover $$\label{dec almost eq} C(\mathfrak{p})\cong C(\mathbb{C}T)\otimes C(\mathfrak{p})_{\operatorname{even}},$$ with the isomorphism implemented by the multiplication. It follows that $$\operatorname{End}S_1=C(\mathfrak{p})_{\operatorname{even}}\operatorname{pr}_1\qquad\text{ and }\qquad \operatorname{End}S_2=C(\mathfrak{p})_{\operatorname{even}}\operatorname{pr}_2.$$ Namely, $\operatorname{End}S_i$ corresponds to $\operatorname{pr}_i\otimes C(\mathfrak{p})_{\operatorname{even}}$ under the decomposition [\[dec almost eq\]](#dec almost eq){reference-type="eqref" reference="dec almost eq"}. Since the $\mathfrak{k}$-action on $C(\mathbb{C}T)$ is trivial, and since $\operatorname{End}S_1=\operatorname{End}S_2=\operatorname{End}S$ as $\mathfrak{k}$-modules, we see that for any $c\in C(\mathbb{C}T)$, $c\otimes C(\mathfrak{p})_{\operatorname{even}}$ is a copy of $\operatorname{End}S$. In particular, $1\otimes C(\mathfrak{p})_{\operatorname{even}}=C(\mathfrak{p})_{\operatorname{even}}$ is isomorphic to $\operatorname{End}S$, and in the following when we write $\operatorname{End}S$ we mean this particular copy. It follows that $\operatorname{End}_\mathfrak{k}S=C(\mathfrak{p})_{\operatorname{even}}^\mathfrak{k}$; this is also the image of the map $\alpha_\mathfrak{k}$ of [\[descr alpha k\]](#descr alpha k){reference-type="eqref" reference="descr alpha k"}, which now sends $\mathbb{C}[\mathfrak{t}^*]^{W_\mathfrak{k}}$ onto $\operatorname{Pr}(S)=\operatorname{End}_\mathfrak{k}S\subset C(\mathfrak{p})^\mathfrak{k}$. 
($\operatorname{End}_\mathfrak{k}S$ is equal to the algebra $\operatorname{Pr}(S)$ of projections onto isotypic components since $S$ is multiplicity free.) Furthermore, an analogue of Proposition [Proposition 1](#prop al k){reference-type="ref" reference="prop al k"} holds, with $C(\mathfrak{p})^\mathfrak{k}$ replaced by $\operatorname{Pr}(S)$. Finally, the above discussion shows that $$C(\mathfrak{p})^\mathfrak{k}=C(\mathbb{C}T)\otimes \operatorname{Pr}(S).$$ We now go back to our Cartan subalgebra $\mathfrak{h}=\mathfrak{t}\oplus\mathfrak{a}$. Since $(\mathfrak{g},\mathfrak{t})$-roots are the restrictions of $(\mathfrak{g},\mathfrak{h})$-roots to $\mathfrak{t}$, we see that $\Delta(\mathfrak{g},\mathfrak{t})$ is of type $B_{p+q}$, while $\Delta(\mathfrak{k},\mathfrak{t})$ is of type $B_p\times B_q$. (On the other hand, $\Delta(\mathfrak{g},\mathfrak{h})$ is of type $D_{p+q+1}$.) **Lemma 1**. *The filtered algebra $\operatorname{Pr}(S)$ is isomorphic to the filtered algebra $C(\mathfrak{p}')^{\mathfrak{k}'}=\operatorname{Pr}(S')$ for the equal rank symmetric space $$G'/K'=\operatorname{Sp}(p+q)/\operatorname{Sp}(p)\times \operatorname{Sp}(q).$$ This algebra is isomorphic to the algebra $\mathbb{C}[\mathfrak{t}^*]^{W_\mathfrak{k}'}$ modulo the ideal generated by $\mathbb{C}[\mathfrak{t}^*]^{W_\mathfrak{g}'}_{\rho'}$. (We identify the isomorphic spaces $\mathfrak{t}$ and $\mathfrak{t}'$.) It can be identified with the space $\mathbb{C}[r_1,\dots,r_p]_{\leq q}$, spanned by monomials of degree at most $q$ in the elementary symmetric functions $r_1,\dots,r_p$ of the squares of the variables $x_1,\dots,x_p$, as in Subsection [4.3](#subsec B/BxB){reference-type="ref" reference="subsec B/BxB"} below.
The degrees of these monomials as functions of the $x_i$ range from 0 to $4pq$ and are divisible by 4.* *Proof.* It will be shown in Subsection [4.3](#subsec B/BxB){reference-type="ref" reference="subsec B/BxB"} that $C(\mathfrak{p}')^{\mathfrak{k}'}$ for $G'/K'$ is $\mathbb{C}[r_1,\dots,r_p]_{\leq q}$ as in the statement of the lemma. Since types $B$ and $C$ have the same Weyl group, $W_\mathfrak{g}$ and $W_\mathfrak{k}$ are the same for $G/K$ and $G'/K'$. Moreover, $\rho=\rho'=(p+q,p+q-1,\dots,1)\in\mathfrak{t}^*$. Since the algebra $\operatorname{Pr}(S)$ is isomorphic to the algebra $\mathbb{C}[\mathfrak{t}^*]^{W_\mathfrak{k}}$ modulo the ideal generated by $\mathbb{C}[\mathfrak{t}^*]^{W_\mathfrak{g}}_\rho$, this implies the lemma. ◻ The degree of the top element $T$ of $C(\mathfrak{p})$ is $d=\dim\mathfrak{p}=4pq+2p+2q+1$. The algebra $\operatorname{Pr}(S)$ contains a unique element $t$ of degree $4pq$. Let $e=Tt$ be the corresponding odd element. Then $e$ is the unique element of lowest odd degree; this degree is $$\deg e=d-4pq=2p+2q+1.$$ **Lemma 1**. *The elements $t$ and $e$ square to nonzero constants in $C(\mathfrak{p})$. Therefore, we can rescale these two elements and assume that $t^2=e^2=1$.* *Proof.* By Lemma [Lemma 1](#g' k'){reference-type="ref" reference="g' k'"}, the filtered algebra $\operatorname{Pr}(S)$ is isomorphic to the filtered algebra $C(\mathfrak{p}')^{\mathfrak{k}'}$ for $(\mathfrak{g}',\mathfrak{k}')=(\mathfrak{sp}(p,q),\mathfrak{sp}(p)\times\mathfrak{sp}(q))$. By Proposition [Proposition 1](#hodge *){reference-type="ref" reference="hodge *"}, the top degree element of $C(\mathfrak{p}')^{\mathfrak{k}'}$ squares to a nonzero constant. So $t^2$ is a nonzero constant. Since $T^2=1$, it follows that $e=Tt$ squares to the same constant. ◻ **Corollary 1**. *The element $e\in C(\mathfrak{p})^\mathfrak{k}$ satisfies $B(e,e)=1$ (here $B$ is the extended Killing form on $C(\mathfrak{p})\cong\textstyle{\bigwedge}\mathfrak{p}$). 
Consequently the elements $1$ and $e$ span a subalgebra of $C(\mathfrak{p})^\mathfrak{k}$ isomorphic to the Clifford algebra on the one-dimensional space $\mathbb{C}e$. The same is true if we replace $e$ by $t$.* *Proof.* This follows from Lemma [Lemma 1](#squares){reference-type="ref" reference="squares"} and Lemma [Lemma 1](#T cliff){reference-type="ref" reference="T cliff"}. ◻ **Theorem 1**. *There are tensor product decompositions $$\begin{aligned} &C(\mathfrak{p})^\mathfrak{k}\cong C(\mathbb{C}e)\otimes \operatorname{Pr}(S);\\ &(\textstyle{\bigwedge}\mathfrak{p})^\mathfrak{k}\cong\textstyle{\bigwedge}\mathbb{C}e\otimes \operatorname{gr}\operatorname{Pr}(S), \end{aligned}$$ with the isomorphisms implemented by the multiplication.* *Proof.* To show that $C(\mathfrak{p})^\mathfrak{k}\cong C(\mathbb{C}e)\otimes \operatorname{Pr}(S)$, we first note that by [\[dec almost eq\]](#dec almost eq){reference-type="eqref" reference="dec almost eq"}, $\operatorname{Pr}(S)=C(\mathfrak{p})_{\operatorname{even}}$ is a subalgebra of $C(\mathfrak{p})^\mathfrak{k}$ of half the dimension. Moreover, by Corollary [Corollary 1](#t T cliff){reference-type="ref" reference="t T cliff"}, $\operatorname{span}\{1,e\}$ is a subalgebra of $C(\mathfrak{p})^\mathfrak{k}$ isomorphic to the Clifford algebra $C(\mathbb{C}e)$ of the space $\mathbb{C}e$. It is thus enough to show that (Clifford) multiplication by $e$ is injective on $\operatorname{Pr}(S)$. This follows immediately from $e^2=1$. Since $\operatorname{Pr}(S)$ commutes with $C(\mathfrak{p})^\mathfrak{k}$, this concludes the proof of $C(\mathfrak{p})^\mathfrak{k}\cong C(\mathbb{C}e)\otimes \operatorname{Pr}(S)$. To prove $(\textstyle{\bigwedge}\mathfrak{p})^\mathfrak{k}\cong\textstyle{\bigwedge}\mathbb{C}e\otimes \operatorname{gr}\operatorname{Pr}(S)$, we again start from the fact that $\operatorname{gr}\operatorname{Pr}(S)$ is a subalgebra of $(\textstyle{\bigwedge}\mathfrak{p})^\mathfrak{k}$ of half the dimension. 
Moreover, $e\wedge e=0$, since the degree of $e\wedge e$ is $4p+4q+2$, which is even but not divisible by 4. It thus suffices to see that wedging by $e$ is injective on $\operatorname{gr}\operatorname{Pr}(S)$. We first note that $e\wedge t=T$ up to (nonzero) scalar. Indeed, since the degrees match, it is enough to see that $e\wedge t\neq 0$. But $$B(e\wedge t,T)=-B(e,\iota_tT)=-B(e,e)\neq 0.$$ (We have already seen that $B(e,e)=1$. Alternatively, since $e$ is the only $\mathfrak{k}$-invariant in its degree, and since $B$ is nondegenerate on $\mathfrak{k}$-invariants, $B(e,e)\neq 0$.) Assume now that $p\in\operatorname{gr}\operatorname{Pr}(S)$ is nonzero; we want to show that $e\wedge p\neq 0$. We can assume $p$ is homogeneous. We will be done if we can show that there is $p'\in\operatorname{gr}\operatorname{Pr}(S)$ such that $p\wedge p'=t$; then $$(e\wedge p)\wedge p'=e\wedge t\neq 0,$$ so also $e\wedge p\neq 0$. By Lemma [Lemma 1](#g' k'){reference-type="ref" reference="g' k'"}, the algebra $\operatorname{Pr}(S)$ is isomorphic (as a filtered algebra) to the algebra $C(\mathfrak{p}')^{\mathfrak{k}'}$, where $(\mathfrak{g}',\mathfrak{k}')=(\mathfrak{sp}(p+q),\mathfrak{sp}(p)\times\mathfrak{sp}(q))$. The corresponding graded algebras are thus also isomorphic. The claim now follows from Proposition [Proposition 1](#hodge *){reference-type="ref" reference="hodge *"}.(6). ◻ ## $(\textstyle{\bigwedge}\mathfrak{p})^K$ in the primary case and almost primary case {#subsec primary and aprim} In this section we cite results from [@CCCvol3] that describe the structure of $(\textstyle{\bigwedge}\mathfrak{p})^\mathfrak{k}$, extend this description to $(\textstyle{\bigwedge}\mathfrak{p})^K$ and explicitly give a generating subspace when $G/K$ is primary or almost primary. **Definition 1**. 
Let $\lambda_\mathfrak{k}=\operatorname{gr}\alpha_\mathfrak{k}:U(\mathfrak{k})^\mathfrak{k}\to (\textstyle{\bigwedge}\mathfrak{p})^\mathfrak{k}$ and let $\lambda_K$ be the restriction of $\lambda_\mathfrak{k}$ to $U(\mathfrak{k})^K$. **Theorem 1**. *[@CCCvol3 X.4 Th VII] [\[th: cccstruct\]]{#th: cccstruct label="th: cccstruct"} Let $(\mathfrak{g},\mathfrak{k})$ be a symmetric pair of Lie algebras, with Cartan involution $\theta$. Let $\mathcal P_\mathfrak{g}$ be a graded subspace that generates $\textstyle{\bigwedge}(\mathfrak{g})^\mathfrak{g}$, and define the Samelson subspace $\mathcal P_\mathfrak{a}=\mathcal P_\mathfrak{g}^{-\theta}$. Then there is an isomorphism of graded algebras $$(\textstyle{\bigwedge}\mathfrak{p})^\mathfrak{k}\cong \textstyle{\bigwedge}\mathcal P_\mathfrak{a}\otimes \operatorname{im}\lambda_\mathfrak{k}.$$* We extend the graded algebra description of $(\textstyle{\bigwedge}\mathfrak{p})^\mathfrak{k}$ from [@CCCvol3] to $(\textstyle{\bigwedge}\mathfrak{p})^K$ in the proposition below. **Proposition 1**. *Let $G/K$ be a symmetric space with $G$ connected ($K$ may be disconnected).
Then, with notation as in Theorem [\[th: cccstruct\]](#th: cccstruct){reference-type="ref" reference="th: cccstruct"}, there is an isomorphism of graded algebras $$(\textstyle{\bigwedge}\mathfrak{p})^K \cong \textstyle{\bigwedge}\mathcal P_\mathfrak{a}\otimes \operatorname{im}\lambda_K.$$* *Proof.* When one identifies the de Rham cohomology $H(G/K_e)$ of the space $G/K_e$ with $(\textstyle{\bigwedge}{\mathfrak p})^{\mathfrak k}$ and the de Rham cohomology $H(G)$ of $G$ with $\textstyle{\bigwedge}({\mathfrak g})^{\mathfrak g}$, the map on cohomology from $H(G/K_e)$ to $H(G)$ is given by the inclusion of $(\textstyle{\bigwedge}\mathfrak{p})^\mathfrak{k}$ into $\textstyle{\bigwedge}\mathfrak{g}$ followed by the projection of $\textstyle{\bigwedge}\mathfrak{g}$ onto $(\textstyle{\bigwedge}\mathfrak{g})^\mathfrak{g}$ along $\mathfrak{g}\cdot\textstyle{\bigwedge}\mathfrak{g}$ (see [@CCCvol3 X.4] for extra details). Both of these maps are $K$-module homomorphisms and the image of the composition is $\textstyle{\bigwedge}\mathcal P_\mathfrak{a}\subset \textstyle{\bigwedge}\mathcal P_\mathfrak{g}$ [@CCCvol3 X.4 Th VII (2)]. Since $(\textstyle{\bigwedge}\mathfrak{g})^\mathfrak{g}$ is $G$-fixed (and hence $K$-fixed) and the above map is a $K$-module homomorphism, we can conclude that the subspace of $(\textstyle{\bigwedge}{\mathfrak p})^{\mathfrak k}$ corresponding to $\textstyle{\bigwedge}\mathcal P_\mathfrak{a}$ is $K$-fixed. The space $(\textstyle{\bigwedge}{\mathfrak p})^K$ is equal to $$(\textstyle{\bigwedge}({\mathfrak p})^{\mathfrak k})^K \cong (\textstyle{\bigwedge}(\mathcal P_{\mathfrak a}) \otimes \operatorname{im}\lambda_\mathfrak{k})^K = \textstyle{\bigwedge}(\mathcal P_{\mathfrak a}) \otimes (\operatorname{im}\lambda_\mathfrak{k})^K,$$ the second equality following from the fact that the first tensorand is entirely $K$-fixed.
To finish, note that $(\operatorname{im}\lambda_\mathfrak{k})^K = \operatorname{im}\lambda_K$, hence $$(\textstyle{\bigwedge}{\mathfrak p})^K \cong \textstyle{\bigwedge}\mathcal P_{\mathfrak a}\otimes \operatorname{im}\lambda_K.$$ ◻ **Definition 1**. The symmetric space $G/K$ is primary if $W_{\mathfrak{g},\mathfrak{t}} = W_\mathfrak{k}$ and almost primary if $W_{\mathfrak{g},\mathfrak{t}} = W_K \neq W_\mathfrak{k}$. **Definition 1**. Define $\mathcal P_\wedge(\mathfrak{p})$ to be the subspace of $(\textstyle{\bigwedge}\mathfrak{p})^K$ orthogonal to the square of the augmentation ideal $((\textstyle{\bigwedge}\mathfrak{p})^K_+)^2$. **Proposition 1**. *[@On proposition 4 p 105][\[prop: graded gens\]]{#prop: graded gens label="prop: graded gens"} Suppose that the algebra $A$ is isomorphic to an exterior algebra, and let $S$ be a subset of $A$. Then the following are equivalent:*

1. *the algebra $A$ is generated by $S$ and $1$;*

2. *the augmentation ideal $A_+$ is equal to the $A$-submodule generated by $S$;*

3. *the augmentation ideal $A_+$ is equal to $\mathrm{span}_\mathbb{C}(S) \oplus A_+^2$.*

In particular, Proposition [\[prop: graded gens\]](#prop: graded gens){reference-type="ref" reference="prop: graded gens"} shows that modifying a generating set $S$ by any elements from $A_+^2$ retains the property of generating $A$; we will use this to show that $\mathcal P_\wedge(\mathfrak{p})$ generates $(\textstyle{\bigwedge}\mathfrak{p})^K$ in the (almost) primary case. **Corollary 1**. *Suppose that $A$ is a graded algebra with a non-degenerate bilinear form such that $A$ is isomorphic to an exterior algebra, and different graded components are orthogonal to each other. Let $R$ be the subspace of $A_+$ orthogonal to $A_+^2$. Then $A$ is generated by $R$.* *Proof.* Since $A$ is isomorphic to an exterior algebra, we may choose a graded subspace $P$ of $A$ such that $A = \textstyle{\bigwedge}P$.
The proof follows by performing a Gram-Schmidt procedure on $P$, modifying by elements in $A_+^2$, until the generating subspace is orthogonal to $A_+^2$. By Proposition [\[prop: graded gens\]](#prop: graded gens){reference-type="ref" reference="prop: graded gens"}, at each step of the Gram-Schmidt process the new subspace still generates $A$, and it is graded since the different graded components are orthogonal. The end result is a graded subspace $P'$ orthogonal to $A_+^2$ that generates $A$. Hence we have a direct sum of orthogonal components $$A = \mathbb{C}\oplus P' \oplus A_+^2,$$ and $P'$ is contained in $R$. Any element in $R \setminus P'$ would be orthogonal to $P'$, $\mathbb{C}$, and $A_+^2$, thus contradicting the fact that the form on $A$ is non-degenerate; hence $R = P'$ and $R$ generates $A$. ◻ **Theorem 1**. *Let $G/K$ be primary or almost primary. Then the inclusion $\mathcal P_\wedge(\mathfrak{p}) \hookrightarrow (\textstyle{\bigwedge}\mathfrak{p})^K$ extends to an isomorphism of graded algebras $$(\textstyle{\bigwedge}\mathfrak{p})^K = \textstyle{\bigwedge}\mathcal P_\wedge(\mathfrak{p}).$$* *Proof.* If $G/K$ is primary, then $\operatorname{im}\lambda_\mathfrak{k}$ is $\mathbb{C}$ and $(\textstyle{\bigwedge}\mathfrak{p})^\mathfrak{k}\cong \textstyle{\bigwedge}\mathcal P_\mathfrak{a}$; if $G/K$ is almost primary, then $\operatorname{im}\lambda_K$ is $\mathbb{C}$ and $(\textstyle{\bigwedge}\mathfrak{p})^K \cong \textstyle{\bigwedge}\mathcal P_\mathfrak{a}$. Hence, in both cases $(\textstyle{\bigwedge}\mathfrak{p})^K$ is isomorphic to an exterior algebra; denote this isomorphism by $f: \textstyle{\bigwedge}\mathcal P_\mathfrak{a}\cong (\textstyle{\bigwedge}\mathfrak{p})^K$.
Then $(\textstyle{\bigwedge}\mathfrak{p})^K$ is generated by the graded subspace $f(\mathcal P_\mathfrak{a})$, and the form on $(\textstyle{\bigwedge}\mathfrak{p})^K$ induced by the Killing form on $\mathfrak{p}$ is non-degenerate on $(\textstyle{\bigwedge}\mathfrak{p})^K$ with differing graded components orthogonal. Hence Corollary [Corollary 1](#cor: gen orthog){reference-type="ref" reference="cor: gen orthog"} proves that $\mathcal P_\wedge(\mathfrak{p})$ (Definition [Definition 1](#def samspace){reference-type="ref" reference="def samspace"}) generates $(\textstyle{\bigwedge}\mathfrak{p})^K$, that is, $(\textstyle{\bigwedge}\mathfrak{p})^K = \textstyle{\bigwedge}\mathcal P_\wedge(\mathfrak{p})$. ◻ The degrees of $\mathcal P_\wedge(\mathfrak{p})$ are the same as the degrees of $\mathcal P_{\mathfrak a}=\mathcal P_\mathfrak{g}^{-\theta}$, which are given in [@CCCvol3 Tables I, II, III, pp. 492-496] and repeated below for reference. There is a paper in preparation [@CGKP] that will prove a transgression theorem when $G/K$ is primary or almost primary and will directly give the degrees of $\mathcal P_\wedge(\mathfrak{p})$.
|                | $G$                                     | $K$                                   | degrees of $\mathcal P_\wedge({\mathfrak p})$ |
|----------------|-----------------------------------------|---------------------------------------|-----------------------------------------------|
| Group          | $\operatorname{U}_{2n+1}(\mathbb{R})^2$ | $\operatorname{U}_{2n+1}(\mathbb{R})$ | $4p-1 : 1 \leq p \leq n$                      |
|                | $\operatorname{U}_{2n}(\mathbb{R})^2$   | $\operatorname{U}_{2n}(\mathbb{R})$   | $4p-1 : 1 \leq p \leq n-1$ and $2n-1$         |
|                | $\operatorname{U}_n(\mathbb{C})^2$      | $\operatorname{U}_n(\mathbb{C})$      | $2p-1 : 1 \leq p \leq n$                      |
|                | $\operatorname{U}_n(\mathbb{H})^2$      | $\operatorname{U}_n(\mathbb{H})$      | $4p-1 : 1 \leq p \leq n$                      |
| Primary        | $\operatorname{U}_{2n+1}(\mathbb{C})$   | $\operatorname{U}_{2n+1}(\mathbb{R})$ | $4p-3 : 1 \leq p \leq n+1$                    |
|                | $\operatorname{U}_{2n}(\mathbb{C})$     | $\operatorname{U}_n(\mathbb{H})$      | $4p-3 : 1 \leq p \leq n$                      |
| Almost Primary | $\operatorname{U}_{2n}(\mathbb{C})$     | $\operatorname{U}_{2n}(\mathbb{R})$   | $4p-3 : 1 \leq p \leq n$                      |

: Degrees of $\mathcal P_\wedge({\mathfrak p})$ (Recall $\operatorname{U}_n(\mathbb{R}) = \operatorname{O}(n)$, $\operatorname{U}_n(\mathbb{C}) = \operatorname{U}(n)$, and $\operatorname{U}_n(\mathbb{H}) = \operatorname{Sp}(n)$.)

## $C(\mathfrak{p})^K$ and $(\textstyle{\bigwedge}\mathfrak{p})^K$ for disconnected $K$: the case $G/K=\operatorname{SO}(k+m)/S(\operatorname{O}(k)\times\operatorname{O}(m))$ {#subsec so/soo}

To understand what happens with the decomposition [\[decomp S k\]](#decomp S k){reference-type="eqref" reference="decomp S k"} when $K$ is disconnected, we consider the group-theoretic Weyl group $W_K$ defined as $$\label{gp W gp} W_{K}=N_{K}(\mathfrak{t})/Z_{K}(\mathfrak{t}),$$ where $N_{K}(\mathfrak{t})$ denotes the normalizer in $K$ of the Cartan subalgebra $\mathfrak{t}$ of $\mathfrak{k}$, while $Z_K(\mathfrak{t})$ denotes the centralizer of $\mathfrak{t}$ in $K$.
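The basic mechanism by which a disconnected $K$ can enlarge $W_K$ beyond $W_\mathfrak{k}$ is already visible in the smallest example. The following numerical sketch (an illustration only, not part of the argument; the variable names are ad hoc) checks that for $K=\operatorname{O}(2)$ the reflection $s=\operatorname{diag}(1,-1)$ normalizes the Cartan subalgebra $\mathfrak{t}=\mathbb{R}J$ of $\mathfrak{so}(2)$ and acts on it by $-1$, so $N_K(\mathfrak{t})/Z_K(\mathfrak{t})$ has order 2 even though the root-system Weyl group of $\mathfrak{so}(2)$ is trivial.

```python
# Illustration only: for K = O(2), the non-identity component contributes
# a nontrivial element of W_K = N_K(t)/Z_K(t), although W_k is trivial.
import numpy as np

J = np.array([[0.0, 1.0], [-1.0, 0.0]])  # generator of t = so(2)
s = np.diag([1.0, -1.0])                  # reflection in O(2) \ SO(2)

conj = s @ J @ np.linalg.inv(s)           # adjoint action Ad(s) on t

assert np.allclose(conj, -J)              # s normalizes t and acts by -1
assert np.allclose(s @ s, np.eye(2))      # s is an involution
```

The same computation, applied blockwise, is what underlies the description of the action of $s$ on $\mathfrak{t}$ given below.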
It is well known (see [@beyond Theorem 4.54]) that for connected $K$, $W_{K}=W_\mathfrak{k}$, the Weyl group of the root system of $(\mathfrak{k},\mathfrak{t})$. Analogously, we define $W_G$, which is equal to $W_\mathfrak{g}=W_{\mathfrak{g},\mathfrak{t}}$ since $G$ is assumed connected. At first glance it may seem that we should consider the groups $W_{\widetilde{K}}$ and $W_{\widetilde{K}_e}$, but these are in fact the same as $W_K$ and $W_{K_e}$, respectively. This follows from the fact that the adjoint action of $k\in\widetilde{K}$ is the same as the adjoint action of $\pi(k)$, where $\pi$ denotes the covering map from $\widetilde{K}$ to $K$. Let now $G/K=\operatorname{SO}(k+m)/S(\operatorname{O}(k)\times \operatorname{O}(m))$. In these cases $K$ is disconnected and the group-theoretic Weyl group $W_K$ may be different from $W_\mathfrak{k}$. In the following we describe $W_K$ explicitly. The group $K=S(\operatorname{O}(k)\times \operatorname{O}(m))$ has two connected components, and we choose the following explicit representative $s$ of the non-identity component: $$\begin{aligned} \label{def s} s&=&\operatorname{diag}(\underbrace{1,\dots,1,-1}_k,\underbrace{1,\dots,1,-1}_m)\quad\text{if }k\text{ and }m\text{ are both even};\\ \nonumber s&=&\operatorname{diag}(\underbrace{1,\dots,1,-1}_k,\underbrace{-1,\dots,-1}_m)\quad\text{if }k\text{ is even and }m\text{ is odd};\\ \nonumber s&=&\operatorname{diag}(\underbrace{-1,\dots,-1}_k,\underbrace{-1,\dots,-1}_m)\quad\text{if }k\text{ and }m\text{ are both odd}.\end{aligned}$$ For the Cartan subalgebra $\mathfrak{t}$ of $\mathfrak{k}$ we choose block diagonal matrices with diagonal blocks $$\begin{aligned} &t_1J,\dots,t_pJ,t_{p+1}J,\dots,t_{p+q}J & \qquad \text{if }(k,m)=(2p,2q); \\ &t_1J,\dots,t_pJ,t_{p+1}J,\dots,t_{p+q}J,0 & \qquad \text{if }(k,m)=(2p,2q+1); \\ &t_1J,\dots,t_pJ,0,t_{p+1}J,\dots,t_{p+q}J,0 & \qquad \text{if }(k,m)=(2p+1,2q+1), \end{aligned}$$ where $J=J_1=\left(\begin{smallmatrix}0&1\cr
-1&0\end{smallmatrix}\right)$, and $t_1,\dots,t_{p+q}$ are (complex) scalars. In the equal rank cases $(k,m)=(2p,2q)$ and $(k,m)=(2p,2q+1)$, $\mathfrak{t}$ is also a Cartan subalgebra of $\mathfrak{g}$, while for $(k,m)=(2p+1,2q+1)$, as noted in Subsection [3.5](#subsec so odd){reference-type="ref" reference="subsec so odd"}, a Cartan subalgebra for $\mathfrak{g}$ is $\mathfrak{h}=\mathfrak{t}\oplus\mathfrak{a}$ where $\mathfrak{a}$ is one-dimensional, spanned by $E_{k\,k+m}-E_{k+m\,k}$. For $(k,m)=(2p,2q)$ and $(k,m)=(2p,2q+1)$, we identify $\mathfrak{t}$ with $\mathbb{R}^{p+q}$ by sending the above described matrix to $(t_1,\dots,t_{p+q})$. For $(k,m)=(2p+1,2q+1)$ we identify $\mathfrak{t}$ with $\mathbb{R}^{p+q}\times 0\subset\mathbb{R}^{p+q+1}$ by sending the above described matrix to $(t_1,\dots,t_{p+q},0)$, and we identify $\mathfrak{a}$ with $0\times \mathbb{R}\subset\mathbb{R}^{p+q+1}$ by sending $E_{k\,k+m}-E_{k+m\,k}$ to $(0,\dots,0,1)$. In this last case we see that $\Delta(\mathfrak{g},\mathfrak{t})$ is of type $B_{p+q}$, while $\Delta(\mathfrak{k},\mathfrak{t})$ is of type $B_p\times B_q$. (On the other hand, $\Delta(\mathfrak{g},\mathfrak{h})$ is of type $D_{p+q+1}$.) Since $$\label{compute s} \begin{pmatrix} 1 & 0 \cr 0 & -1\end{pmatrix} \begin{pmatrix} 0 & t \cr -t & 0\end{pmatrix} \begin{pmatrix} 1 & 0 \cr 0 & -1\end{pmatrix}= \begin{pmatrix} 0 & -t \cr t & 0\end{pmatrix},$$ we see that in each of the cases $s$ given in [\[def s\]](#def s){reference-type="eqref" reference="def s"} normalizes $\mathfrak{t}$ and acts on it as follows: **Lemma 1**. 
*For $(k,m)=(2p,2q)$, $$s(t_1,\dots,t_{p-1},t_p,t_{p+1},\dots,t_{p+q-1},t_{p+q})= (t_1,\dots,t_{p-1},-t_p,t_{p+1},\dots,t_{p+q-1},-t_{p+q}).$$ For $(k,m)=(2p,2q+1)$, $$s(t_1,\dots,t_{p-1},t_p,t_{p+1},\dots,t_{p+q})= (t_1,\dots,t_{p-1},-t_p,t_{p+1},\dots,t_{p+q}).$$ For $(k,m)=(2p+1,2q+1)$, $s$ centralizes $\mathfrak{t}$ (i.e., $s$ acts trivially on $\mathfrak{t}$).* *In particular, $s$ normalizes $\mathfrak{t}$ in all cases.* Since in the cases $(k,m)=(2p,2q)$ and $(k,m)=(2p,2q+1)$ we have $K=K_e\rtimes\{1,s\}$, and we know by Lemma [Lemma 1](#s on t){reference-type="ref" reference="s on t"} that $s$ normalizes $\mathfrak{t}$, we conclude that $$W_K=W_{K_e}\rtimes\{1,s\}=W_\mathfrak{k}\rtimes\{1,s\}.$$ For $(k,m)=(2p+1,2q+1)$, $s$ centralizes $\mathfrak{t}$ and thus $W_K=W_\mathfrak{k}$. To conclude: **Proposition 1**. *For $G/K=\operatorname{SO}(k+m)/S(\operatorname{O}(k)\times \operatorname{O}(m))$, $$W_K=\left\{ \begin{matrix} S(B_p\times B_q) & \qquad \text{if}\ (k,m)=(2p,2q);\cr B_p\times B_q & \qquad \text{if}\ (k,m)=(2p,2q+1);\cr B_p\times B_q & \qquad \text{if}\ (k,m)=(2p+1,2q+1). \end{matrix}\right.$$ Here $B_p\times B_q$ is the group consisting of permutations and sign changes of the first $p$ and the last $q$ coordinates, while $S(B_p\times B_q)$ is the subgroup of $B_p\times B_q$ consisting of the elements with an even total number of sign changes.* *The group $W_G$ is equal to $D_{p+q}$ if $(k,m)=(2p,2q)$, and to $B_{p+q}$ if $(k,m)=(2p,2q+1)$ or if $(k,m)=(2p+1,2q+1)$.* In Lemma [Lemma 1](#s on t){reference-type="ref" reference="s on t"} we have described the adjoint action of $s$ (and hence also of $\tilde s$) on $\mathfrak{t}$. Passing to the dual $\mathfrak{t}^*$, we obtain: **Lemma 1**. *Let $s\in K$ be defined by [\[def s\]](#def s){reference-type="eqref" reference="def s"}, and let $\tilde s$ be a lift of $s$ in $\widetilde{K}$.
Then the coadjoint action of $s$ and $\tilde s$ permutes the positive roots of $(\mathfrak{k},\mathfrak{t})$.* *Proof.* As already noted, $\operatorname{Ad}(\tilde s)=\operatorname{Ad}(s)$ so the coadjoint actions of $s$ and $\tilde s$ are also the same. To compute the action of $s$, we note that the formulas in Lemma [Lemma 1](#s on t){reference-type="ref" reference="s on t"} imply: \(1\) For $G/K=SO(2p+2q)/S(O(2p)\times O(2q))$, $s$ interchanges the roots ${\varepsilon}_i-{\varepsilon}_p$ and ${\varepsilon}_i+{\varepsilon}_p$ ($1\leq i<p$), and the roots ${\varepsilon}_{p+j}-{\varepsilon}_{p+q}$ and ${\varepsilon}_{p+j}+{\varepsilon}_{p+q}$ ($1\leq j<q$), while fixing all the other positive $(\mathfrak{k},\mathfrak{t})$-roots. \(2\) For $G/K=SO(2p+2q+1)/S(O(2p)\times O(2q+1))$, $s$ interchanges the roots ${\varepsilon}_i-{\varepsilon}_p$ and ${\varepsilon}_i+{\varepsilon}_p$ ($1\leq i<p$), while fixing all the other positive $(\mathfrak{k},\mathfrak{t})$-roots. \(3\) For $G/K=SO(2p+2q+2)/S(O(2p+1)\times O(2q+1))$, $s$ fixes all the positive $(\mathfrak{k},\mathfrak{t})$-roots. ◻ The formulas in Lemma [Lemma 1](#s on t){reference-type="ref" reference="s on t"} also describe the action of $s$ (and hence of $\tilde s$) on weights ${\lambda}\in\mathfrak{t}^*$. In coordinates, the action is exactly the same as on coordinates of elements of $\mathfrak{t}$: \(1\) For $G/K=SO(2p+2q)/S(O(2p)\times O(2q))$, $s$ acts by changing the sign of the $p$-th and the $(p+q)$-th coordinate; \(2\) For $G/K=SO(2p+2q+1)/S(O(2p)\times O(2q+1))$, $s$ acts by changing the sign of the $p$-th coordinate; \(3\) For $G/K=SO(2p+2q+2)/S(O(2p+1)\times O(2q+1))$, $s$ acts trivially. We are now ready to describe the $\widetilde{K}$-decomposition of the spin module $S$ in each of the cases. 
Recall that the $\mathfrak{k}$-decomposition of $S$, or equivalently the $\widetilde{K}_e$-decomposition where $\widetilde{K}_e$ is the spin double cover of the connected component $K_e$ of the identity in $K$, is multiplicity free (since $\dim\mathfrak{a}\leq 1$), and given by \(1\) For $G/K=SO(2p+2q)/S(O(2p)\times O(2q))$, the infinitesimal characters of the irreducible $\mathfrak{k}$-submodules of $S$ are the $\mathfrak{k}$-dominant $W_G$-conjugates of $\rho=(p+q-1,p+q-2,\dots,1,0)$. These are the $(p,q)$-shuffles of $\rho$, and the $(p,q)$-shuffles of $\rho$ with the sign of the $p$-th and the $(p+q)$-th coordinates changed to negative (note that one of these coordinates is zero, so the sign change does not affect it). The highest weights are obtained from these infinitesimal characters by subtracting $\rho_\mathfrak{k}$. \(2\) For $G/K=SO(2p+2q+1)/S(O(2p)\times O(2q+1))$, the infinitesimal characters of the irreducible $\mathfrak{k}$-submodules of $S$ are the $\mathfrak{k}$-dominant $W_G$-conjugates of $\rho=(p+q-\frac{1}{2},p+q-\frac{3}{2},\dots,\frac{3}{2},\frac{1}{2})$. These are the $(p,q)$-shuffles of $\rho$, and the $(p,q)$-shuffles of $\rho$ with the sign of the $p$-th coordinate changed to negative. The highest weights are obtained from these infinitesimal characters by subtracting $\rho_\mathfrak{k}$. \(3\) For $G/K=SO(2p+2q+2)/S(O(2p+1)\times O(2q+1))$, the infinitesimal characters of the irreducible $\mathfrak{k}$-submodules of $S$ are the $\mathfrak{k}$-dominant $W_G$-conjugates of $\rho=\rho_{(\mathfrak{g},\mathfrak{t})}=\rho_{(\mathfrak{g},\mathfrak{h})}\big|_\mathfrak{t}=(p+q,p+q-1,\dots,2,1)$. These are the $(p,q)$-shuffles of $\rho$. The highest weights are obtained from these infinitesimal characters by subtracting $\rho_\mathfrak{k}$. It is now clear that in Cases (1) and (2) the action of $s$ (or $\tilde s$) interchanges the infinitesimal characters that differ only by the sign of one of the coordinates. 
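As a small combinatorial aside (an illustration only, not part of the argument; `shuffles` is an ad hoc helper, not notation from the text), the $(p,q)$-shuffles appearing in the lists above are easy to enumerate directly: they are the rearrangements of $\rho$ preserving the relative order of the first $p$ and of the last $q$ entries, and there are $\binom{p+q}{p}$ of them, all distinct since the entries of $\rho$ are distinct.

```python
# Illustration only: enumerate the (p, q)-shuffles of a tuple rho.
from itertools import combinations
from math import comb

def shuffles(rho, p, q):
    assert len(rho) == p + q
    first, last = rho[:p], rho[p:]
    result = []
    for pos in combinations(range(p + q), p):  # positions of the first block
        out = [None] * (p + q)
        for i, j in enumerate(pos):
            out[j] = first[i]
        rest = iter(last)
        for j in range(p + q):
            if out[j] is None:
                out[j] = next(rest)
        result.append(tuple(out))
    return result

p, q = 2, 3
rho = tuple(range(p + q, 0, -1))        # rho = (p+q, ..., 2, 1), as in Case (3)
sh = shuffles(rho, p, q)
assert len(sh) == comb(p + q, p) == 10  # C(5, 2) = 10 shuffles
assert len(set(sh)) == len(sh)          # all distinct
```

In Case (3) these shuffles are exactly the infinitesimal characters of the irreducible $\mathfrak{k}$-submodules of $S$, consistent with $S$ being multiplicity free.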
Since in each of the cases $s$ permutes $\Delta^+(\mathfrak{k},\mathfrak{t})$, it fixes $\rho_\mathfrak{k}$ and thus also interchanges the highest weights corresponding to the above infinitesimal characters. In Case (3), $s$ (and $\tilde s$) fix all the $\mathfrak{k}$-infinitesimal characters and hence also all the highest weights in $S$. **Proposition 1**. *For $G/K=\operatorname{SO}(2p+2q)/S(\operatorname{O}(2p)\times \operatorname{O}(2q))$ or $G/K=\operatorname{SO}(2p+2q+1)/S(\operatorname{O}(2p)\times \operatorname{O}(2q+1))$, each irreducible $\widetilde{K}$-module in $S$, when viewed as a $\mathfrak{k}$-module, decomposes into two irreducible $\mathfrak{k}$-modules. The highest weights of these two modules are interchanged by $\tilde s$ and $s$, and differ from each other by one sign change.* *For $G/K=\operatorname{SO}(2p+2q+2)/S(\operatorname{O}(2p+1)\times \operatorname{O}(2q+1))$, the $\widetilde{K}$-decomposition of the spin module $S$ is the same as the $\mathfrak{k}$-decomposition.* *Proof.* Let $v$ be any highest weight vector for $\mathfrak{k}$ in $S$, and let its weight be ${\lambda}$ (it is one of the weights described above). We claim that $\tilde sv$ is a highest weight vector of weight $s{\lambda}$. To see this, we first note that $\widetilde{K}$-equivariance of the $\mathfrak{k}$-action on $S$ implies that for any $X\in\mathfrak{k}$, $$\label{equi} X(\tilde sv)=\tilde s(\operatorname{Ad}(\tilde s)^{-1} X) v=\tilde s(\operatorname{Ad}(s)^{-1} X) v=\tilde s(\operatorname{Ad}(s)X) v.$$ By Lemma [Lemma 1](#s on roots){reference-type="ref" reference="s on roots"}, if $X$ is a positive root vector, then $\operatorname{Ad}(s)X$ is also a positive root vector. It follows that $$X(\tilde sv)=\tilde s(\operatorname{Ad}(s) X) v=0,$$ so $\tilde sv$ is a highest weight vector. To see the weight of $\tilde sv$, we apply [\[equi\]](#equi){reference-type="eqref" reference="equi"} for $X\in\mathfrak{t}$. 
Recall from Lemma [Lemma 1](#s on t){reference-type="ref" reference="s on t"} that then also $\operatorname{Ad}(s) X\in\mathfrak{t}$, so we have $$X(\tilde sv)=\tilde s(\operatorname{Ad}(s) X) v={\lambda}(\operatorname{Ad}(s) X)\tilde sv= s{\lambda}(X)\tilde sv,$$ so $\tilde sv$ is of weight $s{\lambda}$. Let now $\widetilde{K}_e$ be the spin double cover of the identity component $K_e$ of $K$. Then by Proposition [Proposition 1](#K-dec of S conn){reference-type="ref" reference="K-dec of S conn"} the $\widetilde{K}_e$-decomposition of $S$ is the same as the $\mathfrak{k}$-decomposition. Since $\widetilde{K}_e$ and $\tilde s$ generate $\widetilde{K}$, and since we have described the action of $\tilde s$, the proposition follows. ◻ We now describe a version of the Harish-Chandra isomorphism for the disconnected case. Let $$\gamma_0:Z(\mathfrak{k})=U(\mathfrak{k})^\mathfrak{k}=U(\mathfrak{k})^{K_e}\to S(\mathfrak{t})^{W_\mathfrak{k}}=S(\mathfrak{t})^{W_{K_e}}$$ be the usual Harish-Chandra isomorphism. **Proposition 1**. *For $K=S(\operatorname{O}(k)\times \operatorname{O}(m))$, the Harish-Chandra map $\gamma_0:U(\mathfrak{k})^{K_e}\to S(\mathfrak{t})^{W_{K_e}}$ restricts to an isomorphism $$\gamma:U(\mathfrak{k})^K\to S(\mathfrak{t})^{W_K}.$$* *Proof.* We have seen that $K=K_e\rtimes\{1,s\}$, where $s$ is defined as above (the product is direct if $k,m$ are both odd). Since $s$ normalizes $K_e$, $\mathfrak{k}$, $\mathfrak{t}$ and $W_{K_e}$, it acts on $U(\mathfrak{k})^{K_e}$ and on $S(\mathfrak{t})^{W_{K_e}}$, and the map $\gamma_0$ intertwines these actions. The claim now follows by taking $s$-invariants. ◻ *Remark 1*. It is possible to generalize the above proposition to the case when $K$ is an arbitrary compact Lie group. 
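The passage from $W_{K_e}$-invariants to $W_K$-invariants by taking $s$-invariants can be illustrated in coordinates. The following sympy sketch (an illustration only, not part of the argument; the rank and the realization of the groups are simplified hypothetically) checks for two variables that a $D_2$-invariant polynomial is $B_2$-invariant precisely when it is fixed by the extra sign change induced by $s$:

```python
import itertools
import sympy as sp

x1, x2 = sp.symbols('x1 x2')

def weyl_group(even_sign_changes_only):
    # Type B_2: all permutations and sign changes of (x1, x2);
    # type D_2: the subgroup with an even number of sign changes.
    maps = []
    for a, b in [(x1, x2), (x2, x1)]:
        for e1, e2 in itertools.product([1, -1], repeat=2):
            if even_sign_changes_only and e1 * e2 != 1:
                continue
            maps.append({x1: e1 * a, x2: e2 * b})
    return maps

D2, B2 = weyl_group(True), weyl_group(False)
s = {x1: x1, x2: -x2}  # the extra sign change induced by s

def invariant(f, group):
    return all(sp.expand(f.xreplace(g) - f) == 0 for g in group)

f = x1 * x2            # D_2-invariant, but s flips its sign
assert invariant(f, D2) and not invariant(f, B2)
assert sp.expand(f.xreplace(s) + f) == 0

g = (x1 * x2)**2       # D_2-invariant and s-invariant, hence B_2-invariant
assert invariant(g, D2) and invariant(g, B2)
```

This matches the proposition: the $s$-invariants inside $S(\mathfrak{t})^{W_{K_e}}$ are exactly $S(\mathfrak{t})^{W_K}$.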
We now define $$\alpha_K:U(\mathfrak{k})^K\cong \mathbb{C}[\mathfrak{t}^*]^{W_K}\to \operatorname{Pr}(S)\subseteq C(\mathfrak{p})^K,$$ as the restriction of the map $\alpha_\mathfrak{k}$ of [\[descr alpha k\]](#descr alpha k){reference-type="eqref" reference="descr alpha k"} to the $K$-invariants. Here $\operatorname{Pr}(S)$ denotes the algebra of $\widetilde{K}$-equivariant projections of the $K$-module $S$ to its isotypic components $E_\sigma$, where $\sigma\in W_{G,K}^1$, the set of minimal length representatives of $W_K$-cosets in $W_G$. Since the $E_\sigma$ are of multiplicity 1, $\operatorname{Pr}(S)=\operatorname{End}_{\widetilde{K}} S$. The map $\alpha_K$ is given by the analogue of [\[descr alpha k\]](#descr alpha k){reference-type="eqref" reference="descr alpha k"}, i.e., $$\alpha_K(P)=\sum_{\sigma\in W_{G,K}^1} P(\sigma\rho)\operatorname{pr}_\sigma,\qquad P\in \mathbb{C}[\mathfrak{t}^*]^{W_K},$$ where $\operatorname{pr}_\sigma:S\to E_\sigma$ is the $K$-equivariant projection. From the above considerations we conclude **Corollary 1**. *(1) For $G/K=\operatorname{SO}(2p+2q)/S(\operatorname{O}(2p)\times\operatorname{O}(2q))$, the algebra $C(\mathfrak{p})^K$ is isomorphic to $\operatorname{Pr}(S)$, which is isomorphic to $\mathbb{C}[\mathfrak{t}^*]^{S(B_p\times B_q)}$ modulo the ideal generated by $D_{p+q}$-invariants in $\mathbb{C}[\mathfrak{t}^*]$ evaluating to $0$ at $\rho=(p+q-1,\dots,1,0)$. 
The algebra $(\textstyle{\bigwedge}\mathfrak{p})^K$ is isomorphic to $\operatorname{gr}\operatorname{Pr}(S)$, which is isomorphic to $\mathbb{C}[\mathfrak{t}^*]^{S(B_p\times B_q)}$ modulo the ideal generated by $D_{p+q}$-invariants in $\mathbb{C}[\mathfrak{t}^*]$ evaluating to $0$ at $0$.* *(2) For $G/K=\operatorname{SO}(2p+2q+1)/S(\operatorname{O}(2p)\times\operatorname{O}(2q+1))$, the algebra $C(\mathfrak{p})^K$ is isomorphic to $\operatorname{Pr}(S)$, which is isomorphic to $\mathbb{C}[\mathfrak{t}^*]^{B_p\times B_q}$ modulo the ideal generated by $B_{p+q}$-invariants in $\mathbb{C}[\mathfrak{t}^*]$ evaluating to $0$ at $\rho=(p+q-\frac{1}{2},\dots,\frac{3}{2},\frac{1}{2})$. The algebra $(\textstyle{\bigwedge}\mathfrak{p})^K$ is isomorphic to $\operatorname{gr}\operatorname{Pr}(S)$, which is isomorphic to $\mathbb{C}[\mathfrak{t}^*]^{B_p\times B_q}$ modulo the ideal generated by $B_{p+q}$-invariants in $\mathbb{C}[\mathfrak{t}^*]$ evaluating to $0$ at $0$.* *(3) For $G/K=SO(2p+2q+2)/S(\operatorname{O}(2p+1)\times\operatorname{O}(2q+1))$, the algebra $C(\mathfrak{p})^K$ is isomorphic to $C(\mathbb{C}e)\otimes\operatorname{Pr}(S)$, where $e$ is a generator of degree $2p+2q+1$ squaring to $1$, and $\operatorname{Pr}(S)$ is isomorphic to $\mathbb{C}[\mathfrak{t}^*]^{B_p\times B_q}$ modulo the ideal generated by $B_{p+q}$-invariants in $\mathbb{C}[\mathfrak{t}^*]$ evaluating to $0$ at $\rho=(p+q,\dots,2,1)$. 
The algebra $(\textstyle{\bigwedge}\mathfrak{p})^K$ is isomorphic to $\textstyle{\bigwedge}\mathbb{C}e\otimes \operatorname{gr}\operatorname{Pr}(S)$, where $e$ is a generator of degree $2p+2q+1$ squaring to $0$, and where $\operatorname{gr}\operatorname{Pr}(S)$ is isomorphic to $\mathbb{C}[\mathfrak{t}^*]^{B_p\times B_q}$ modulo the ideal generated by $B_{p+q}$-invariants in $\mathbb{C}[\mathfrak{t}^*]$ evaluating to $0$ at $0$.* ## $(\textstyle{\bigwedge}\mathfrak{p})^K$ for disconnected $K$: the case $G/K=\operatorname{U}(n)/\operatorname{O}(n)$ {#subsec u/o} We use the standard matrix realizations of $G=\operatorname{U}(n)$ and $K=\operatorname{O}(n)$. Since $K$ is disconnected, the group $W_K$ may be different from $W_\mathfrak{k}$ and we want to describe it explicitly. We prove that $G/K = \operatorname{U}(n)/\operatorname{O}(n)$ is primary when $n$ is odd and almost primary when $n$ is even, and then apply the results of Section [3.6](#subsec primary and aprim){reference-type="ref" reference="subsec primary and aprim"}. The group $K=\operatorname{O}(n)$ has two connected components and we choose the following representative of the non-identity component: $$\begin{aligned} s=\operatorname{diag}(1,\dots,1,-1)&\qquad&\text{if}\ n=2k;\\ s=\operatorname{diag}(-1,\dots,-1)&\qquad&\text{if}\ n=2k+1. \end{aligned}$$ For the Cartan subalgebra $\mathfrak{t}$ of $\mathfrak{k}$ we take the space of block diagonal matrices with diagonal blocks $$\begin{aligned} t_1J,\dots, t_kJ&\qquad&\text{if }\ n=2k;\\ t_1J,\dots, t_kJ,0&\qquad&\text{if }\ n=2k+1,\end{aligned}$$ where as before $J=J_1=\left(\begin{smallmatrix}0&1\cr -1&0\end{smallmatrix}\right)$ and $t_1,\dots,t_k$ are complex scalars.
We extend $\mathfrak{t}$ to a Cartan subalgebra $\mathfrak{h}=\mathfrak{t}\oplus\mathfrak{a}$ of $\mathfrak{g}$, where $\mathfrak{a}$ is the space of block diagonal matrices with diagonal blocks $$\begin{aligned} a_1I,\dots, a_kI&\qquad&\text{if }\ n=2k;\\ a_1I,\dots, a_kI,a_{k+1}&\qquad&\text{if }\ n=2k+1,\end{aligned}$$ where $I=I_2=\left(\begin{smallmatrix}1&0\cr 0&1\end{smallmatrix}\right)$ and $a_1,\dots,a_{k+1}$ are complex scalars. We can identify $\mathfrak{t}\subset\mathfrak{h}\cong\mathbb{C}^n$ with $$\begin{aligned} \{(t_1,\dots,t_k,-t_k,\dots,-t_1)\,\big|\,t_1,\dots,t_k\in\mathbb{C}\}&\qquad&\text{if }\ n=2k;\\ \{(t_1,\dots,t_k,0,-t_k,\dots,-t_1)\,\big|\,t_1,\dots,t_k\in\mathbb{C}\}&\qquad&\text{if }\ n=2k+1,\end{aligned}$$ and $\mathfrak{a}\subset\mathfrak{h}\cong\mathbb{C}^n$ with $$\begin{aligned} \{(a_1,\dots,a_k,a_k,\dots,a_1)\,\big|\,a_1,\dots,a_k\in\mathbb{C}\}&\qquad&\text{if }\ n=2k;\\ \{(a_1,\dots,a_k,a_{k+1},a_k,\dots,a_1)\,\big|\,a_1,\dots,a_{k+1}\in\mathbb{C}\}&\qquad&\text{if }\ n=2k+1.\end{aligned}$$ This corresponds to the action of the involution $\sigma$ on $\mathfrak{h}$ being $$\sigma(h_1,\dots,h_n)=(-h_n,\dots,-h_1).$$ Here $\sigma$ is the restriction to $G=\operatorname{U}(n)$ of the involution in Subsection [2.3](#real lagr grass){reference-type="ref" reference="real lagr grass"}; in particular, $\operatorname{U}(n)^\sigma=\operatorname{O}(n)$. The above identification enables us to determine the $(\mathfrak{g},\mathfrak{t})$-roots easily: $\Delta(\mathfrak{g},\mathfrak{t})$ is of type $C_k$ if $n=2k$ and of type $BC_k$ if $n=2k+1$. Thus $$W_G=W_\mathfrak{g}=B_k;$$ recall that the Weyl group $B_k$, which is the same as the Weyl group of type $C_k$ or $BC_k$, consists of permutations and sign changes of coordinates of $\mathfrak{t}\cong \mathbb{C}^k$.
Of course, $W_\mathfrak{k}$ is $D_k$ if $n=2k$ and $B_k$ if $n=2k+1$; here $D_k$ is the Weyl group of type $D_k$, consisting of permutations of the coordinates and sign changes of an even number of coordinates of $\mathfrak{t}\cong \mathbb{C}^k$. However, we want to identify the group-theoretic Weyl group $W_K$. To do this, we identify $\mathfrak{t}$ with $\mathbb{C}^k$ by sending $(t_1,\dots,t_k,-t_k,\dots,-t_1)$ to $(t_1,\dots,t_k)$. Using [\[compute s\]](#compute s){reference-type="eqref" reference="compute s"} we see that for $n=2k$ the element $s$ normalizes $\mathfrak{t}$ and sends $(t_1,\dots,t_{k-1},t_k)\in\mathfrak{t}$ to $(t_1,\dots,t_{k-1},-t_k)$. Thus $W_K=W_\mathfrak{k}\rtimes\{1,s\}=B_k$. For $n=2k+1$, $s$ centralizes $\mathfrak{t}$, hence $W_K=W_\mathfrak{k}=B_k$. We have proved that $G/K=\operatorname{U}(n)/\operatorname{O}(n)$ is primary for odd $n$ and almost primary for even $n$; therefore Theorem [Theorem 1](#thm prim and aprim alg){reference-type="ref" reference="thm prim and aprim alg"} gives the graded algebra structure of $(\textstyle{\bigwedge}\mathfrak{p})^K$. We conclude **Corollary 1**. *For $G/K=\operatorname{U}(n)/\operatorname{O}(n)$, $G/K$ is either primary or almost primary. Hence, by Theorem [Theorem 1](#thm prim and aprim alg){reference-type="ref" reference="thm prim and aprim alg"}, $$(\textstyle{\bigwedge}\mathfrak{p})^K\cong \textstyle{\bigwedge}(\mathcal P_\wedge(\mathfrak{p})),$$ where $\mathcal P_\wedge(\mathfrak{p})$ is the subspace defined in Definition [Definition 1](#def samspace){reference-type="ref" reference="def samspace"} and the degrees are given in Table [1](#tab degrees prim){reference-type="ref" reference="tab degrees prim"}.* # Cohomology rings of compact symmetric spaces {#coho symm} ## Some general facts {#subsec coho symm general} Let $G/K$ be a compact symmetric space, with $G$ a compact connected Lie group and $K$ a closed symmetric subgroup.
Then the de Rham cohomology (with real coefficients) of $G/K$ can be identified, as an algebra, with $(\textstyle{\bigwedge}\mathfrak{p}_0^*)^K$, where $\mathfrak{p}_0$ stands for the tangent space $\mathfrak{g}_0/\mathfrak{k}_0$ to $G/K$ at $eK$ ($\mathfrak{g}_0$ and $\mathfrak{k}_0$ are the Lie algebras of $G$ respectively $K$). In the examples we are interested in, $\mathfrak{p}_0$ can be $K$-equivariantly embedded into $\mathfrak{g}_0$, and also $\mathfrak{p}_0^*\cong\mathfrak{p}_0$. As mentioned in the introduction, this fact is well known, but it is difficult to find an appropriate reference. We present here a proof we learned from Sebastian Goette [@goette]. Any $g\in G$ acts on $G/K$ by a map that is homotopic to the identity. Indeed, since $G$ is connected, there is a smooth path $g(t)$, $t\in [0,1]$, from $g$ to the unit element $e\in G$. Then $H:G/K\times [0,1]\to G/K$, $H(x,t)=g(t)x$, is a smooth homotopy from $g:G/K\to G/K$ to the identity map on $G/K$. It now follows that if $\omega$ is a closed form, then it represents the same cohomology class as $g^*\omega$, for any $g\in G$. Namely, [@Lee Proposition 15.5] says that $g^*$ and $e^*=\operatorname{id}$ induce the same map on cohomology, so the class of $\omega$ is the same as the class of $g^*\omega$. Since $G$ is compact, we can average over $g$ and get a $G$-invariant differential form that represents the same cohomology class as $\omega$. Next, assume that $\omega$ is $G$-invariant and $\omega=d\mu$. Even if $\mu$ is not $G$-invariant itself, we know that $d\mu=g^*d\mu=dg^*\mu$. So we can average over $g\in G$ again to get a $G$-invariant differential form $\bar\mu$ such that $\omega=d\bar\mu$. So we see that the de Rham cohomology is captured by the subcomplex of $G$-invariant differential forms. 
Now we recall that the differential forms on $G/K$ are sections of the homogeneous vector bundle $$G\times_K \textstyle{\bigwedge}\mathfrak{p}_0^* \to G/K.$$ The bundle $G\times_K \textstyle{\bigwedge}\mathfrak{p}_0^*$ is defined as $(G\times\textstyle{\bigwedge}\mathfrak{p}_0^*)/\sim$, where $\sim$ is the equivalence relation defined by $$(gk,\nu)\sim(g,\operatorname{Ad}^*(k)\nu),\qquad g\in G,k\in K, \nu\in\textstyle{\bigwedge}\mathfrak{p}_0^*.$$ The differential forms, or sections of the above bundle, are maps $$\omega:G\to\textstyle{\bigwedge}\mathfrak{p}_0^*\quad \text{such that }\quad \omega(gk) = \operatorname{Ad}^*(k^{-1})\omega(g),\quad g\in G,\ k\in K.$$ The group $G$ acts on such $\omega$ by left translation, i.e., $$(g\omega)(g')=\omega(g^{-1}g'),\qquad g,g'\in G.$$ Thus $\omega$ is $G$-invariant if and only if it is constant as a function on $G$, i.e., $\omega(g)=\omega(e)$ for any $g\in G$. For such an invariant form $\omega$, set $\bar\omega=\omega(e)\in\textstyle{\bigwedge}\mathfrak{p}_0^*$. We claim that $\bar\omega\in(\textstyle{\bigwedge}\mathfrak{p}_0^*)^K$. Indeed, for any $k\in K$ we have $$\begin{gathered} \operatorname{Ad}^*(k)\bar\omega=\operatorname{Ad}^*(k)\omega(e)=(\text{since $\omega$ is a section})=\\ \omega (ek^{-1})=\omega(k^{-1})=(\text{since $\omega$ is $G$-invariant})=\omega(e)=\bar\omega.\end{gathered}$$ Conversely, if $\bar\omega\in(\textstyle{\bigwedge}\mathfrak{p}_0^*)^K$, then $\omega(g)=\bar\omega$ defines a $G$-invariant form $\omega$ on $G/K$. So we see that for any compact homogeneous space $G/K$, with $G$ a compact connected Lie group and $K$ a closed subgroup of $G$, the de Rham cohomology $H(G/K)$ is the cohomology of the complex $((\textstyle{\bigwedge}\mathfrak{p}_0^*)^K,d)$, where the differential $d$ is induced by the de Rham differential. Let us describe the differential $d$ more explicitly. We first recall a coordinate free formula for the de Rham differential $d$ on a manifold $M$. 
Any differential $q$-form is determined if we know how to evaluate it on any $q$-tuple of (smooth) vector fields. In this interpretation, the de Rham differential of a $q$-form $\omega$ is the $(q+1)$-form given by $$\begin{aligned} \label{de rham} & \qquad d\omega(X_1\wedge ... \wedge X_{q+1})= \sum_i(-1)^{i-1} X_i(\omega(X_1\wedge\dots\wedge\widehat X_i\wedge\dots\wedge X_{q+1})) + \\ \nonumber & \qquad \sum_{i<j} (-1)^{i+j}\omega ([X_i,X_j] \wedge X_1 \wedge \dots\wedge \widehat X_i\wedge\dots\wedge\widehat X_j\wedge\dots\wedge X_{q+1}),\end{aligned}$$ where $X_1,\dots,X_{q+1}$ are vector fields on $M$, the bracket denotes the Lie bracket of vector fields, and the hat over a variable means this variable is omitted. See e.g. [@Lee Proposition 12.19]. If $M=G/K$ and if the form $\omega$ is $G$-invariant (as we saw we may assume), then we know $\omega$ at any point $gK$ if we know it at the base point $eK$. More precisely, if $Y_1,\dots,Y_q$ is any $q$-tuple of tangent vectors at a point $gK$ in $G/K$, then $$\omega(gK)(Y_1,\dots,Y_q)=\omega(eK)(g^{-1}_*Y_1,\dots,g^{-1}_*Y_q).$$ It follows that it is enough to know the value of $\omega$ at $q$-tuples of $G$-invariant vector fields, which correspond to the tangent space $\mathfrak{g}_0/\mathfrak{k}_0\cong\mathfrak{p}_0$ to $G/K$ at $eK$. The $G$-invariant vector fields can in turn be obtained as push-forwards of left invariant vector fields on $G$ under the projection $\Pi:G\to G/K$. Since $\Pi_*$ is compatible with Lie brackets, and since the left invariant vector fields on $G$ can be identified with the Lie algebra $\mathfrak{g}_0$ of $G$, we see that if $\tilde X,\tilde Y$ are invariant vector fields on $G/K$ corresponding to tangent vectors $X,Y\in\mathfrak{p}_0$, then the vector field $[\tilde X,\tilde Y]$ corresponds to the tangent vector $\pi([X,Y])$, where the bracket $[X,Y]$ is taken in $\mathfrak{g}_0$ and $\pi=d\Pi:\mathfrak{g}_0\to \mathfrak{p}_0$ is the canonical projection. 
Thus the formula [\[de rham\]](#de rham){reference-type="eqref" reference="de rham"} can be rewritten with $X_1,\dots,X_{q+1}$ in $\mathfrak{p}_0$ and with $[X_i,X_j]$ replaced by $\pi([X_i,X_j])$. Moreover, the first sum in the formula vanishes, since the action of a vector field involves the differentiation, and $\omega$ is constant. So the de Rham differential becomes $$\label{de rham bis} d\omega(X_1\wedge ... \wedge X_{q+1})= \sum_{i<j} (-1)^{i+j}\omega (\pi([X_i,X_j]) \wedge X_1 \wedge \dots\wedge \widehat X_i\wedge\dots\wedge\widehat X_j\wedge\dots\wedge X_{q+1}),$$ with $X_i$ in $\mathfrak{p}_0$. Now let us assume in addition that $K$ is a symmetric subgroup of $G$, i.e., that $(\mathfrak{g}_0,\mathfrak{k}_0)$ is a symmetric pair. Then $[\mathfrak{p}_0,\mathfrak{p}_0]\subseteq\mathfrak{k}_0$, so $\pi([X_i,X_j])=0$ for any $X_i,X_j\in\mathfrak{p}_0$, and we see that $d\omega=0$. Hence $$\label{coho sym sp} H(G/K)=(\textstyle{\bigwedge}\mathfrak{p}_0^*)^K\cong(\textstyle{\bigwedge}\mathfrak{p}_0)^K.$$ In the following we will pass to the cohomology of $G/K$ with complex coefficients: $$H(G/K;\mathbb{C})=H(G/K)\otimes_\mathbb{R}\mathbb{C}=(\textstyle{\bigwedge}\mathfrak{p}_0)^K\otimes_\mathbb{R}\mathbb{C}= (\textstyle{\bigwedge}\mathfrak{p})^K,$$ where $\mathfrak{p}$ stands for the complexification of $\mathfrak{p}_0$. We will also denote by $\mathfrak{g}$ respectively $\mathfrak{k}$ the complexifications of $\mathfrak{g}_0$ respectively $\mathfrak{k}_0$. 
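The vanishing of the differential hinges on the inclusion $[\mathfrak{p}_0,\mathfrak{p}_0]\subseteq\mathfrak{k}_0$. For the pair $G/K=\operatorname{U}(n)/\operatorname{O}(n)$ this can be seen concretely: in the standard matrix realization, $\mathfrak{k}_0=\mathfrak{o}(n)$ consists of real antisymmetric matrices and $\mathfrak{p}_0$ of $i$ times real symmetric ones, so the bracket of two elements of $\mathfrak{p}_0$ comes out real and antisymmetric. A minimal numerical sketch (illustrative only, not part of the argument):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

def random_p0():
    # An element of p_0 for u(n)/o(n): i times a real symmetric matrix.
    A = rng.standard_normal((n, n))
    return 1j * (A + A.T)

X, Y = random_p0(), random_p0()
C = X @ Y - Y @ X  # bracket taken in g_0 = u(n)

# [p_0, p_0] lands in k_0 = o(n): the commutator is real and antisymmetric,
# so its projection to p_0 vanishes and the de Rham differential is zero.
assert np.allclose(C.imag, 0)
assert np.allclose(C, -C.T)
```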
Recall from Section [3](#sec spin){reference-type="ref" reference="sec spin"}, the paragraph above Corollary [Corollary 1](#cor o/oo){reference-type="ref" reference="cor o/oo"}, that there is a surjection $\alpha=\alpha_K: \mathbb{C}[\mathfrak{t}^*]^{W_K}\to \operatorname{Pr}(S)$, given by $$\alpha_K(P)=\sum_{\tilde\sigma\in W^1_{G,K}}P(\tilde\sigma\rho)\operatorname{pr}_{\tilde\sigma},$$ with kernel equal to the ideal $\langle\mathbb{C}[\mathfrak{t}^*]^{W_G}_\rho\rangle$ in $\mathbb{C}[\mathfrak{t}^*]^{W_K}$ generated by the $W_G$-invariants that vanish at $\rho$. Likewise, the kernel of the surjection $\operatorname{gr}\alpha:\mathbb{C}[\mathfrak{t}^*]^{W_K}\to\operatorname{gr}\operatorname{Pr}(S)\subseteq(\textstyle{\bigwedge}\mathfrak{p})^K$ is the ideal $\langle\mathbb{C}[\mathfrak{t}^*]^{W_G}_+\rangle$ generated by the $W_G$-invariants that vanish at $0$. In this section we will only use the obvious inclusions $$\label{ker incl} \langle \mathbb{C}[\mathfrak{t}^*]^{W_G}_\rho\rangle\subseteq\ker\alpha \quad\text{ and }\quad \langle\mathbb{C}[\mathfrak{t}^*]^{W_G}_+\rangle\subseteq\ker\operatorname{gr}\alpha.$$ The opposite inclusions will be reproved as a byproduct of our analysis. Namely, in each of the cases we will have a candidate set for a basis of the quotient $\mathbb{C}[\mathfrak{t}^*]^{W_K}/\ker\alpha$ respectively $\mathbb{C}[\mathfrak{t}^*]^{W_K}/\ker\operatorname{gr}\alpha$ consisting of certain monomials, of the cardinality equal to $$\dim \operatorname{Pr}(S)=\dim\operatorname{gr}\operatorname{Pr}(S)=|W^1_{G,K}|.$$ We will use the relations coming from $\mathbb{C}[\mathfrak{t}^*]^{W_G}_\rho$ respectively $\mathbb{C}[\mathfrak{t}^*]^{W_G}_+$ to show that the images of these candidate monomials span the respective quotients. It will follow that they form a basis, and also that there can be no additional relations outside of $\langle \mathbb{C}[\mathfrak{t}^*]^{W_G}_\rho\rangle$ respectively $\langle\mathbb{C}[\mathfrak{t}^*]^{W_G}_+\rangle$.
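In the equal rank case treated in the next subsection, the generators of $\mathbb{C}[\mathfrak{t}^*]^{W_G}$ are the elementary symmetric functions in all $p+q$ coordinates, and the relations they induce among the $W_K$-invariant generators come from the classical splitting identity $e_k(x_1,\dots,x_{p+q})=\sum_{i+j=k}e_i(x_1,\dots,x_p)\,e_j(x_{p+1},\dots,x_{p+q})$. A small sympy verification for $p=2$, $q=3$ (an illustrative sketch, not part of the argument):

```python
import itertools
import sympy as sp

p, q = 2, 3
xs = sp.symbols('x1:6')  # x1,...,x5

def elem(vars_, k):
    """k-th elementary symmetric polynomial in vars_ (e_0 = 1, e_k = 0 for k > #vars)."""
    if k == 0:
        return sp.Integer(1)
    return sp.Add(*[sp.Mul(*c) for c in itertools.combinations(vars_, k)])

r = [elem(xs[:p], i) for i in range(p + 1)]   # r_0,...,r_p
s = [elem(xs[p:], j) for j in range(q + 1)]   # s_0,...,s_q
t = [elem(xs, k) for k in range(p + q + 1)]   # t_0,...,t_{p+q}

# The splitting identity sum_{i+j=k} r_i s_j = t_k for k = 1,...,p+q.
for k in range(1, p + q + 1):
    lhs = sum(r[i] * s[k - i] for i in range(p + 1) if 0 <= k - i <= q)
    assert sp.expand(lhs - t[k]) == 0
```

These are exactly the relations that cut out $\operatorname{Pr}(S)$ (with $t_k$ evaluated at $\rho$) and $\operatorname{gr}\operatorname{Pr}(S)$ (with $t_k$ set to $0$).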
## The case $G/K=\operatorname{U}(p+q)/\operatorname{U}(p)\times \operatorname{U}(q)$ {#subsec A/AxA} This is an equal rank case, so $\operatorname{Pr}(S)=C(\mathfrak{p})^K$ and $\operatorname{gr}\operatorname{Pr}(S)=(\textstyle{\bigwedge}\mathfrak{p})^K$. Since $G$ and $K$ are both connected, the Weyl groups are $W_G=W_\mathfrak{g}=S_{p+q}$ and $W_K=W_\mathfrak{k}=S_p\times S_q$. Let $x_1,\dots,x_{p+q}$ be coordinates for the Cartan subalgebra $\mathfrak{t}$. Let - $r_1,\dots,r_p$ be the elementary symmetric functions in variables $x_1,\dots,x_p$; - $s_1,\dots,s_q$ be the elementary symmetric functions in variables $x_{p+1},\dots,x_{p+q}$; - $t_1,\dots,t_{p+q}$ be the elementary symmetric functions in variables $x_1,\dots,x_{p+q}$. Then the algebra of $W_K$-invariants in $S(\mathfrak{t})$ is generated by $r_1,\dots,r_p$ and $s_1,\dots,s_q$. **Theorem 1**. *For $G/K=\operatorname{U}(p+q)/\operatorname{U}(p)\times \operatorname{U}(q)$, $1 \leq p \leq q$, the algebra $C(\mathfrak{p})^K$ is isomorphic to the algebra $\mathfrak{H}(p,q;c)$ of Definition [Definition 1](#def alg h){reference-type="ref" reference="def alg h"}, where $c=(t_1(\rho),\dots,t_{p+q}(\rho))$.
The algebra $(\textstyle{\bigwedge}\mathfrak{p})^K$ is isomorphic to $\mathfrak{H}(p,q;0)$.* *In other words, the algebras $C(\mathfrak{p})^K$ and $(\textstyle{\bigwedge}\mathfrak{p})^K$ are both generated by $r_1,\dots,r_p$ (or more precisely, their images in the respective quotients), with relations generated by $$\label{rel} \sum_{i,j\geq 0;\ i+j=k}r_is_j=t_k= \left\{\begin{matrix} t_k(\rho) & \text{ for the case of } C(\mathfrak{p})^K \cr 0 & \text{ for the case of } (\textstyle{\bigwedge}\mathfrak{p})^K \end{matrix} \right.$$ for $k=1,\dots,p+q$, where we set $r_0=s_0=1$ and $r_i=0$ if $i>p$, $s_j=0$ if $j>q$.* *To summarize, $$\begin{aligned} C(\mathfrak{p})^K &= \mathfrak{H}(p,q;c) = \frac{\mathbb{C}[r_1,\dots,r_p; s_1,\dots,s_q]}{\Bigl(\sum_{i,j\geq 0;\ i+j=k}r_is_j=t_k(\rho)\Bigr)} \\ (\textstyle{\bigwedge}\mathfrak{p})^K &= \mathfrak{H}(p,q;0) = \frac{\mathbb{C}[r_1,\dots,r_p; s_1,\dots,s_q]}{\Bigl(\sum_{i,j\geq 0;\ i+j=k}r_is_j=0\Bigr)} \end{aligned}$$ The latter algebra is isomorphic to $H^*(\operatorname{Gr}_p(\mathbb{C}^{p+q}), \mathbb{C})$.* *A basis for each of these algebras is given by the monomials $r^\alpha=r_1^{\alpha_1}\dots r_p^{\alpha_p}$ of degree $|\alpha|\leq q$, so that our algebras can be identified with the space $$\mathbb{C}[r_1,\dots,r_p]_{\leq q}.$$ In particular, each monomial in $r_1,\dots,r_p$ of degree $q+1$ can be expressed as a linear combination of lower degree monomials in $r_1,\dots,r_p$.
Such expressions follow from [\[rel explic 1\]](#rel explic 1){reference-type="eqref" reference="rel explic 1"} and [\[rel explic other\]](#rel explic other){reference-type="eqref" reference="rel explic other"} below, and they provide another set of defining relations for each of our algebras.* *The filtration degree of $C(\mathfrak{p})^K$ inherited from $C(\mathfrak{p})$, and the gradation degree of $(\textstyle{\bigwedge}\mathfrak{p})^K$ inherited from $\textstyle{\bigwedge}\mathfrak{p}$, are obtained by setting $\deg r_i=2i$ for $i=1,\dots,p$.* *Proof.* Let us first note that it is clear that the algebras $C(\mathfrak{p})^K$ and $(\textstyle{\bigwedge}\mathfrak{p})^K$ are generated by the $r_i$ and the $s_j$. Also, the inclusions [\[ker incl\]](#ker incl){reference-type="eqref" reference="ker incl"} imply that the relations for these algebras include the relations [\[rel\]](#rel){reference-type="eqref" reference="rel"}. We are going to see that these relations in fact generate all the relations. Using the first $q$ of the relations [\[rel\]](#rel){reference-type="eqref" reference="rel"}, we can express all $s_j$ as polynomials in the $r_i$. Indeed, the first relation is $$r_1+s_1=t_1$$ and we see that $s_1=t_1-r_1$. Now the second relation is $$r_2+r_1s_1+s_2=t_2,$$ so $s_2$ can be expressed as a polynomial in the $r_i$ since we have already expressed $s_1$. We continue inductively. So each of our algebras is generated by the (images of the) polynomials in the $r_i$. We now prove that every monomial in the $r_i$ of degree $q+1$ can be expressed as a linear combination of lower degree monomials. This will finish the proof. Namely, this will show that the monomials in the $r_i$ of degree at most $q$ span each of our algebras, and their number is $\binom{p+q}{p}$, the same as the dimension of both algebras. So they have to form a basis. 
Since we only use the relations [\[rel\]](#rel){reference-type="eqref" reference="rel"}, it follows that these relations generate all the relations; otherwise the dimension would be lower, which is impossible. We order the monomials in the $r_i$ first by degree, and then inside each degree by the reverse lexicographical order. We will show by induction on this order that all degree $q+1$ monomials can be expressed by lower degree monomials. We start with the first monomial, $r_1^{q+1}$. We express the $s_j$ in terms of the $r_i$ from relations $2,3,\dots,q+1$. These relations are linear in the $s_j$, with coefficients that are either constant or the $r_i$. In matrix form, this system of equations is $$\label{matrix} \begin{pmatrix} r_1 & 1 & 0 & 0 & 0 & \dots & 0 \cr r_2 & r_1 & 1& 0 & 0 & \dots & 0 \cr \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \cr r_{p-1} & \dots & r_1 & 1 & 0 & \dots & 0\cr r_{p} & r_{p-1} & \dots & r_1 & 1 & 0 & \dots \cr 0 & r_p & r_{p-1} & \dots & r_1 & 1 & \dots \cr \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \cr 0 & \dots & 0 & r_p & \dots & r_2 & r_1 \end{pmatrix} \begin{pmatrix} s_1 \cr s_2 \cr \vdots \cr s_q \end{pmatrix} = \begin{pmatrix} t_2-r_2 \cr t_3-r_3 \cr \vdots \cr t_p-r_p \cr t_{p+1} \cr \vdots \cr t_{q+1} \end{pmatrix}$$ The determinant $D$ of this system contains a unique monomial $r_1^q$ of degree $q$, since upon expanding successively along the first row we can always pick either $r_1$, or $1$, which leads to lower degree. In particular $D\neq 0$, so we can solve the system by Cramer's rule and obtain $$\label{sol} D s_i=D_i,\quad i=1,\dots,q,$$ where $D_i$ is obtained from $D$ by replacing the $i$-th column by the right hand side of [\[matrix\]](#matrix){reference-type="eqref" reference="matrix"}.
Now we multiply the first equation, $r_1+s_1=t_1$, by $D$, and use [\[sol\]](#sol){reference-type="eqref" reference="sol"} to get $$\label{rel explic 1} r_1D+D_1=t_1 D.$$ Since all monomials in $D_1$ and in $t_1D$ are of degree $\leq q$, and since all the monomials of $r_1D$ are of degree $\leq q$ except for the monomial $r_1^{q+1}$, we have expressed $r_1^{q+1}$ as a linear combination of lower degree monomials. We now do the induction step. Let $1\leq i_1<i_2<\dots< i_a\leq p$ be integers and let $$\label{mono} r_{i_1}^{m_1} r_{i_2}^{m_2}\dots r_{i_a}^{m_a}$$ be a monomial of degree $q+1$ different from $r_1^{q+1}$. Suppose that we have already expressed all the degree $q+1$ monomials that are before [\[mono\]](#mono){reference-type="eqref" reference="mono"} in reverse lexicographical order. Let us consider the degree $q$ monomial $$\label{smaller mono} r_{i_1}^{m_1-1} r_{i_2}^{m_2}\dots r_{i_a}^{m_a}.$$ Notice that for each $i$ and $j$, $r_is_j$ appears in exactly one of the equations (the $(i+j)$-th one). We first assume that $m_1>1$ and pick the equations that contain respectively $$\begin{gathered} \label{picked} r_{i_1}s_1,r_{i_1}s_2,\dots r_{i_1}s_{m_1-1};\ r_{i_2}s_{m_1},\dots,r_{i_2}s_{m_1+m_2-1};\ \dots \\ \dots; \ r_{i_a}s_{m_1+\dots + m_{a-1}},\dots,r_{i_a}s_{m_1+ \dots + m_a-1} (=r_{i_a}s_q).\end{gathered}$$ We view these equations as a linear system for $s_1,\dots,s_q$ and note that the diagonal coefficients are exactly the coefficients of the terms [\[picked\]](#picked){reference-type="eqref" reference="picked"}, i.e., $$r_{i_1},\dots,r_{i_1},r_{i_2},\dots,r_{i_2},\dots,r_{i_a},\dots,r_{i_a},$$ with $r_{i_1}$ repeating $m_1-1$ times, $r_{i_2}$ repeating $m_2$ times, \..., $r_{i_a}$ repeating $m_a$ times. Thus the determinant of the system, denoted again by $D$, contains the monomial [\[smaller mono\]](#smaller mono){reference-type="eqref" reference="smaller mono"}, and we claim this is the leading term of the expanded determinant $D$. 
(We warn the reader not to confuse the present $D$ with the one in [\[sol\]](#sol){reference-type="eqref" reference="sol"}.) The first of the picked equations is $$r_{i_1}s_1+r_{i_1-1}s_2+\dots +r_1s_{i_1}+s_{i_1+1}=t_{i_1+1}-r_{i_1+1}.$$ (This covers all the cases since we defined $r_i=0$ for $i>p$ and $s_j=0$ for $j>q$.) The first row of $D$ is thus $$r_{i_1}\ r_{i_1-1}\ \dots \ r_1\ 1\ 0\ \dots \ 0$$ with 1 and/or zeros possibly missing. When we expand $D$ along the first row and then write out all the lower order determinants as combinations of monomials, the terms containing the first row elements $r_{i_1-1},r_{i_1-2},\dots$ are all either of lower degree or before the term [\[smaller mono\]](#smaller mono){reference-type="eqref" reference="smaller mono"} in our ordering. To get the leading term we thus have to pick $r_{i_1}$ and cross the first row and column. The remaining determinant (if $m_1>2$) has the first row equal to $$r_{i_1}\ r_{i_1-1}\ r_{i_1-2}\ \dots,$$ and we use the same argument to conclude that we should pick $r_{i_1}$ to obtain the leading term. After we go over all the rows containing $r_{i_1}$, we continue with the next row $$r_{i_2}\ r_{i_2-1}\ r_{i_2-2}\ \dots$$ We again see that to obtain the leading term we have to pick $r_{i_2}$ from this row. The conclusion is that the leading term of $D$ is indeed the monomial [\[smaller mono\]](#smaller mono){reference-type="eqref" reference="smaller mono"}; in particular, $D\neq 0$. 
We now apply Cramer's rule again, obtaining $$\label{soli} Ds_i=D_i, \quad i=1,\dots,q.$$ We multiply the equation containing $r_{i_1}$ with coefficient 1, i.e., the equation $$r_{i_1}+r_{i_1-1}s_1+\dots +r_1 s_{i_1-1}+ s_{i_1}=t_{i_1},$$ by $D$, and use [\[soli\]](#soli){reference-type="eqref" reference="soli"} to get $$\label{rel explic other} r_{i_1}D+r_{i_1-1}D_1+\dots +r_1 D_{i_1-1}+ D_{i_1}=t_{i_1}D.$$ The leading term of $r_{i_1}D$ is [\[mono\]](#mono){reference-type="eqref" reference="mono"}, and the other terms in the above equation are either of lower degree, or of the same degree but of lower order with respect to the reverse lexicographical order. Expressing these last terms by lower degree terms using the inductive assumption, we see that we have expressed [\[mono\]](#mono){reference-type="eqref" reference="mono"} as a linear combination of lower degree terms. This finishes the proof if $m_1>1$. If $m_1=1$, we proceed analogously, starting by picking the equation containing $r_{i_2}s_1$. The argument is entirely similar. The statement about degrees follows from the fact that $r_i$ is of degree $i$ as a polynomial in the variables $x_j$, and that the map $\alpha:U(\mathfrak{k})\to C(\mathfrak{p})$ doubles the degree. ◻ *Remark 1*. In the course of the proof of Theorem [Theorem 1](#gen rel basis){reference-type="ref" reference="gen rel basis"} we have obtained explicit relations [\[rel explic 1\]](#rel explic 1){reference-type="eqref" reference="rel explic 1"} and [\[rel explic other\]](#rel explic other){reference-type="eqref" reference="rel explic other"} for the generators $r_1,\dots,r_p$. It is clear that $C(\mathfrak{p})^K$ is the algebra generated by the $r_i$ with these relations if we set $t_i=t_i(\rho)$ and $(\textstyle{\bigwedge}\mathfrak{p})^K$ is the algebra generated by the $r_i$ with the same relations if we set $t_i=0$. *Remark 1*.
The monomials in Theorem [Theorem 1](#gen rel basis){reference-type="ref" reference="gen rel basis"} span the same space as the Schur polynomials $s_\lambda$ for $\lambda$ in the $p\times q$ box. In particular, these Schur polynomials also form a basis of our algebra(s), since their number is equal to the dimension of each of the two algebras. To pursue this relationship in more detail, we first recall the well-known Jacobi-Trudi formulas that express Schur polynomials as polynomials in the elementary symmetric functions: if ${\lambda}$ is a partition with Young diagram inside the $p\times q$ box, let ${\lambda}^t=({\lambda}^t_1,\dots,{\lambda}^t_l)$ be the transpose of ${\lambda}$ ($l\leq q$ is the length of ${\lambda}^t$). Then $$\label{jacobi} s_{\lambda}=\det\begin{pmatrix}r_{{\lambda}^t_1} & r_{{\lambda}^t_1+1}&\dots& r_{{\lambda}^t_1+l-1}\cr r_{{\lambda}^t_2-1}&r_{{\lambda}^t_2}&\dots& r_{{\lambda}^t_2+l-2}\cr \vdots&\vdots & \vdots &\vdots\cr r_{{\lambda}^t_l-l+1}& r_{{\lambda}^t_l-l+2}&\dots& r_{{\lambda}^t_l} \end{pmatrix}$$ Here $r_j$ is the $j$th elementary symmetric function on $x_1,\dots,x_p$ if $1\leq j\leq p$, $r_0=1$, and $r_j=0$ if $j<0$ or $j>p$. This is an expression of $s_{\lambda}$ as a linear combination of monomials in $r_1,\dots,r_p$ of degree at most $q$. We claim that in this way we obtain a triangular change of basis between the $s_{\lambda}$ and the monomials in $r_1,\dots,r_p$ of degree at most $q$. To see this, we order the monomials first by degree (in the $r_j$), and then by reverse lexicographical order inside each degree. We claim that upon expanding the determinant [\[jacobi\]](#jacobi){reference-type="eqref" reference="jacobi"} the leading term is the diagonal monomial $r_{{\lambda}^t_1}r_{{\lambda}^t_2}\dots r_{{\lambda}^t_l}$. Indeed, let us expand the determinant along the first row. 
Since ${\lambda}^t_1\geq{\lambda}^t_2\geq\dots\geq{\lambda}^t_l$, the diagonal monomial has no $r_j$ with $j>{\lambda}^t_1$, but if we pick any element of the first row other than $r_{{\lambda}^t_1}$, all monomials in the corresponding piece of the expansion will contain $r_j$ with $j>{\lambda}^t_1$. So to obtain the leading term we must pick $r_{{\lambda}^t_1}$. We now repeat this argument inductively, always expanding along the first row. The main advantage of the Schur polynomials is the fact that their multiplication table is well understood, using Littlewood-Richardson coefficients. While the computation of the LR coefficients is only algorithmic, computer programs for computing them are widely known and available; for example, there is an online calculator available from the web page of Joel Gibson [@Gibson]. Our approach using monomials in the elementary functions and the relations between them that we obtained can also lead to a multiplication table, as illustrated by the following example. In this way, we get an alternative way of computing the LR coefficients. **Example 1**. *Let $p=2$ and $q=3$. The expressions of the Schur polynomials for ${\lambda}$ in the $2\times 3$ box in terms of monomials in the elementary symmetric functions $r_1=x_1+x_2$, $r_2=x_1x_2$ are $$\begin{array}{lllll} s_{(0,0)}=1; \qquad &s_{(1,0)}=r_1; \qquad &s_{(2,0)}=r_1^2-r_2; \qquad &s_{(3,0)}=r_1^3-2r_1r_2;\\ &s_{(1,1)}=r_2; \qquad &s_{(2,1)}=r_1r_2; \qquad &s_{(3,1)}=r_1^2r_2-r_2^2; \\ &s_{(2,2)}=r_2^2; \qquad &s_{(3,2)}=r_1r_2^2; \qquad &s_{(3,3)}=r_2^3. 
\end{array}$$ Our relations expressing monomials of degree four in terms of monomials of degree at most three are $$r_1^4 = 3r_1^2r_2-r_2^2; \quad \ r_1^3r_2= 2r_1r_2^2; \quad \ r_1^2r_2^2= r_2^3; \quad \ r_1r_2^3= 0; \quad \ r_2^4 = 0.$$ *Iterating these, one also obtains the reductions $r_1^5=5r_1r_2^2$, $r_1^4r_2=2r_2^3$ and $r_1^6=5r_2^3$ needed below; all other products of degree at least five vanish. The multiplication table for the monomials (the table is symmetric, so only the entries on and above the diagonal are shown) is:*

| $\cdot$ | $r_1$ | $r_2$ | $r_1^2$ | $r_1r_2$ | $r_2^2$ | $r_1^3$ | $r_1^2r_2$ | $r_1r_2^2$ | $r_2^3$ |
|---|---|---|---|---|---|---|---|---|---|
| $r_1$ | $r_1^2$ | $r_1r_2$ | $r_1^3$ | $r_1^2r_2$ | $r_1r_2^2$ | $3r_1^2r_2-r_2^2$ | $2r_1r_2^2$ | $r_2^3$ | $0$ |
| $r_2$ | | $r_2^2$ | $r_1^2r_2$ | $r_1r_2^2$ | $r_2^3$ | $2r_1r_2^2$ | $r_2^3$ | $0$ | $0$ |
| $r_1^2$ | | | $3r_1^2r_2-r_2^2$ | $2r_1r_2^2$ | $r_2^3$ | $5r_1r_2^2$ | $2r_2^3$ | $0$ | $0$ |
| $r_1r_2$ | | | | $r_2^3$ | $0$ | $2r_2^3$ | $0$ | $0$ | $0$ |
| $r_2^2$ | | | | | $0$ | $0$ | $0$ | $0$ | $0$ |
| $r_1^3$ | | | | | | $5r_2^3$ | $0$ | $0$ | $0$ |
| $r_1^2r_2$ | | | | | | | $0$ | $0$ | $0$ |
| $r_1r_2^2$ | | | | | | | | $0$ | $0$ |
| $r_2^3$ | | | | | | | | | $0$ |

*The reader is invited to compare this with the multiplication table for the Schur polynomials obtained from [@Gibson]; to use the online calculator one has to remember that the Schur polynomials for ${\lambda}$ outside of the $2\times 3$ box have to be replaced by zeros.* ## The cases $G/K=\operatorname{Sp}(p+q)/\operatorname{Sp}(p)\times \operatorname{Sp}(q)$ {#subsec B/BxB} Since $G$ and $K$ have equal rank, $\operatorname{Pr}(S)=C(\mathfrak{p})^K$ and $\operatorname{gr}\operatorname{Pr}(S)=(\textstyle{\bigwedge}\mathfrak{p})^K$. Since $G$ and $K$ are both connected, the Weyl group $W_G$ is equal to $W_\mathfrak{g}$ which is isomorphic to $B_{p+q}$, while the Weyl group $W_K=W_\mathfrak{k}$ is $B_p\times B_q$. (Recall that type $B$ and type $C$ have the same Weyl group. It consists of permutations and sign changes of the variables.) As in type A, the set $W^1_{G,K}$ consists of $(p,q)$-shuffles. 
In particular, $$|W^1_{G,K}|=\dim C(\mathfrak{p})^K=\dim(\textstyle{\bigwedge}\mathfrak{p})^K =\binom{p+q}{p}.$$ It is well known (see [@HumphreysRefBook p.67]) that the algebra of $B_k$-invariants is a polynomial algebra generated by symmetric functions of the squares of the variables. Thus $S(\mathfrak{t})^{W_K}$ is generated by $$r_1=x_1^2+\dots+x_p^2,\quad r_2=x_1^2x_2^2+\dots+x_{p-1}^2x_p^2,\quad \dots,\quad r_p=x_1^2x_2^2\dots x_p^2$$ $$s_1=x_{p+1}^2+\dots+x_{p+q}^2,\ s_2=x_{p+1}^2x_{p+2}^2+\dots+x_{p+q-1}^2x_{p+q}^2,\ \dots,\ s_q=x_{p+1}^2\dots x_{p+q}^2$$ and $S(\mathfrak{t})^{W_G}$ is generated by $$t_1=x_1^2+\dots+x_{p+q}^2,\quad t_2=x_1^2x_2^2+\dots+x_{p+q-1}^2x_{p+q}^2,\quad \dots,\quad t_{p+q}=x_1^2x_2^2\dots x_{p+q}^2$$ As in the type A case, the relations for $C(\mathfrak{p})^K$ respectively $(\textstyle{\bigwedge}\mathfrak{p})^K$ include the relations [\[rel\]](#rel){reference-type="eqref" reference="rel"}. Moreover, we have **Theorem 1**. *For $G/K=\operatorname{Sp}(p+q)/\operatorname{Sp}(p)\times \operatorname{Sp}(q)$, $1 \leq p \leq q$, the algebra $C(\mathfrak{p})^K$ is isomorphic to the algebra $\mathfrak{H}(p,q;c)$ of Definition [Definition 1](#def alg h){reference-type="ref" reference="def alg h"}, with generators the above $r_i$, and with $c=(t_1(\rho),\dots,t_{p+q}(\rho))$. 
The algebra $(\textstyle{\bigwedge}\mathfrak{p})^K$ is isomorphic to $\mathfrak{H}(p,q;0)$.* *In other words, $$\begin{aligned} C(\mathfrak{p})^K &= \mathfrak{H}(p,q;c) = \frac{\mathbb{C}[r_1,\dots,r_p; s_1,\dots,s_q]}{\Bigl(\sum_{i,j\geq 0;\ i+j=k}r_is_j=t_k(\rho)\Bigr)} \\ (\textstyle{\bigwedge}\mathfrak{p})^K &= \mathfrak{H}(p,q;0) = \frac{\mathbb{C}[r_1,\dots,r_p; s_1,\dots,s_q]}{\Bigl(\sum_{i,j\geq 0;\ i+j=k}r_is_j=0\Bigr)} \end{aligned}$$ The latter algebra is isomorphic to $H^*(\operatorname{Gr}_p(\mathbb{H}^{p+q}), \mathbb{C})$.* *Both algebras can be identified with the space $$\mathbb{C}[r_1,\dots,r_p]_{\leq q}.$$ The filtration degree of $C(\mathfrak{p})^K$ inherited from $C(\mathfrak{p})$, and the gradation degree of $(\textstyle{\bigwedge}\mathfrak{p})^K$ inherited from $\textstyle{\bigwedge}\mathfrak{p}$, are obtained by setting $\deg r_i=4i$ for $i=1,\dots,p$.* *Proof.* The same as the proof of Theorem [Theorem 1](#gen rel basis){reference-type="ref" reference="gen rel basis"}. (The statement about degrees follows from the fact that $r_i$ is of degree $2i$ as a polynomial in the variables $x_j$, and from the fact that the map $\alpha:U(\mathfrak{k})\to C(\mathfrak{p})$ doubles the degree.) ◻ *Remark 1*. As in Remark [Remark 1](#rem schur){reference-type="ref" reference="rem schur"}, we can replace the monomials in the $r_i$ by the Schur polynomials $s_\lambda$ for $\lambda$ in the $p\times q$ box. This allows us to write the multiplication table in the usual way. ## The cases $G/K=\operatorname{SO}(k+m)/S(\operatorname{O}(k)\times \operatorname{O}(m))$ {#subsec SOk+m} If $(k,m)=(2p,2q)$ or $(k,m)=(2p,2q+1)$ then $G$ and $K$ have equal rank, so $C(\mathfrak{p})^K=\operatorname{Pr}(S)$ and $(\textstyle{\bigwedge}\mathfrak{p})^K=\operatorname{gr}\operatorname{Pr}(S)$. 
If $(k,m)=(2p,2q+1)$, then by Proposition [Proposition 1](#w sok+m){reference-type="ref" reference="w sok+m"}, $W_G=B_{p+q}$ and $W_K=B_p\times B_q$, so $C(\mathfrak{p})^K$ and $(\textstyle{\bigwedge}\mathfrak{p})^K$ are described by Theorem [Theorem 1](#gen rel basis B/BxB){reference-type="ref" reference="gen rel basis B/BxB"}. If $(k,m)=(2p,2q)$, then by Proposition [Proposition 1](#w sok+m){reference-type="ref" reference="w sok+m"}, $W_G=D_{p+q}$ and $W_K=S(B_p\times B_q)$. The invariants in $S(\mathfrak{t})$ under $B_p\times B_q\supset W_K$ are generated by the symmetric functions $r_1,\dots,r_p$ of $x_1^2,\dots,x_p^2$ and the symmetric functions $s_1,\dots,s_q$ of $x_{p+1}^2,\dots,x_{p+q}^2$, while the invariants under $D_p\times D_q\subset W_K$ are generated by $r_1,\dots,r_{p-1}$, $s_1,\dots,s_{q-1}$, and the Pfaffians $\bar r_p=x_1\dots x_p$, $\bar s_q=x_{p+1}\dots x_{p+q}$ (see [@HumphreysRefBook p.68]). It follows that the invariants under $W_K$ are generated by $$r_1,\dots,r_p;\ s_1,\dots,s_q;\ \bar r_p\bar s_q.$$ Of course, these generators are not independent, as $(\bar r_p\bar s_q)^2=r_ps_q$. Since our algebras $C(\mathfrak{p})^K$ and $(\textstyle{\bigwedge}\mathfrak{p})^K$ are quotients of $\mathbb{C}[\mathfrak{t}^*]^{W_K}$ by the ideal generated by $\mathbb{C}[\mathfrak{t}^*]^{W_G}_\rho$ respectively $\mathbb{C}[\mathfrak{t}^*]^{W_G}_+$, and since $\bar r_p\bar s_q=x_1\dots x_{p+q}$ is $W_G$-invariant, we can remove $\bar r_p\bar s_q$ from the list of generators. (Note that the value of $\bar r_p\bar s_q$ at $\rho$ is 0, since 0 is a coordinate of $\rho$.) It follows that the algebras $C(\mathfrak{p})^K$ and $(\textstyle{\bigwedge}\mathfrak{p})^K$ are again described by Theorem [Theorem 1](#gen rel basis B/BxB){reference-type="ref" reference="gen rel basis B/BxB"}. Finally, suppose that $(k,m)=(2p+1,2q+1)$. 
This is an unequal (but almost equal) rank case and as we saw in Subsection [3.5](#subsec so odd){reference-type="ref" reference="subsec so odd"}, the fundamental Cartan subalgebra is $\mathfrak{h}=\mathfrak{t}\oplus\mathfrak{a}\cong\mathbb{C}^{p+q+1}$ with $$\mathfrak{t}=\{(t_1,\dots t_{p+q},0)\,\big|\,t_i\in\mathbb{C}\};\qquad \mathfrak{a}=\{(0,\dots,0,a)\,\big|\,a\in\mathbb{C}\}.$$ The root system $\Delta(\mathfrak{g},\mathfrak{t})$ is $B_{p+q}$ while $\Delta(\mathfrak{k},\mathfrak{t})=B_p\times B_q$. (Note that $\Delta(\mathfrak{g},\mathfrak{h})$ is $D_{p+q+1}$.) By Corollary [Corollary 1](#cor o/oo){reference-type="ref" reference="cor o/oo"} (3), $C(\mathfrak{p})^K=C(\mathcal P_\mathfrak{a})\otimes \operatorname{Pr}(S)$; since $\mathfrak{a}$ is one-dimensional, $C(\mathcal P_\mathfrak{a})$ is two-dimensional, spanned by 1 and a generator $e$ squaring to $1$. Likewise, $(\textstyle{\bigwedge}\mathfrak{p})^K=\textstyle{\bigwedge}\mathcal P_\mathfrak{a}\otimes \operatorname{Pr}(S)$, with $\textstyle{\bigwedge}\mathcal P_\mathfrak{a}$ spanned by 1 and by a generator $e$ squaring to 0. We know from Proposition [Proposition 1](#w sok+m){reference-type="ref" reference="w sok+m"} that $W_G=B_{p+q}$ and $W_K=B_p\times B_q$. It follows that the algebras $\operatorname{Pr}(S)$ and $\operatorname{gr}\operatorname{Pr}(S)$ are described by Theorem [Theorem 1](#gen rel basis B/BxB){reference-type="ref" reference="gen rel basis B/BxB"}, with notation given by that theorem and the text above it. **Theorem 1**. *Let $G/K=\operatorname{SO}(k+m)/S(\operatorname{O}(k)\times \operatorname{O}(m))$.* *(a) If $(k,m)=(2p,2q)$ or $(k,m)=(2p,2q+1)$, then the algebra $C(\mathfrak{p})^K$ is isomorphic to the algebra $\mathfrak{H}(p,q;c)$ of Definition [Definition 1](#def alg h){reference-type="ref" reference="def alg h"}, with generators $r_1,\dots,r_p$ as above, and with $c=(t_1(\rho),\dots,t_{p+q}(\rho))$. 
The algebra $(\textstyle{\bigwedge}\mathfrak{p})^K$ is isomorphic to $\mathfrak{H}(p,q;0)$. In other words, $$\begin{aligned} C(\mathfrak{p})^K &= \mathfrak{H}(p,q;c) = \frac{\mathbb{C}[r_1,\dots,r_p; s_1,\dots,s_q]}{\Bigl(\sum_{i,j\geq 0;\ i+j=k}r_is_j=t_k(\rho)\Bigr)} \\ (\textstyle{\bigwedge}\mathfrak{p})^K &= \mathfrak{H}(p,q;0) = \frac{\mathbb{C}[r_1,\dots,r_p; s_1,\dots,s_q]}{\Bigl(\sum_{i,j\geq 0;\ i+j=k}r_is_j=0\Bigr)} \end{aligned}$$ The latter algebra is isomorphic to $H^*(\operatorname{Gr}_k(\mathbb{R}^{k + m}), \mathbb{C})$.* *Both algebras can be identified with the space $$\mathbb{C}[r_1,\dots,r_p]_{\leq q}.$$ The filtration degree of $C(\mathfrak{p})^K$ inherited from $C(\mathfrak{p})$, and the gradation degree of $(\textstyle{\bigwedge}\mathfrak{p})^K$ inherited from $\textstyle{\bigwedge}\mathfrak{p}$, are obtained by setting $\deg r_i=4i$ for $i=1,\dots,p$.* *(b) If $(k,m)=(2p+1,2q+1)$, then the algebra $C(\mathfrak{p})^K$ contains the algebra $\mathfrak{H}(p,q;c)$ as in (a), and an additional generator $e$ squaring to $1$. The algebra $(\textstyle{\bigwedge}\mathfrak{p})^K$ contains the algebra $\mathfrak{H}(p,q;0)$ as in (a), and an additional generator $e$ squaring to 0. 
$$\begin{aligned} C(\mathfrak{p})^K &= \frac{\mathbb{C}[r_1,\dots,r_p; s_1,\dots,s_q; e]}{\Bigl(\sum_{i,j\geq 0;\ i+j=k}r_is_j=t_k(\rho),\, e^2 = 1 \Bigr)} \\ (\textstyle{\bigwedge}\mathfrak{p})^K & = \frac{\mathbb{C}[r_1,\dots,r_p; s_1,\dots,s_q; e]}{\Bigl(\sum_{i,j\geq 0;\ i+j=k}r_is_j=0 ,\, e^2 = 0 \Bigr)} \end{aligned}$$ The latter algebra is isomorphic to $H^*(\operatorname{Gr}_k(\mathbb{R}^{k+m}), \mathbb{C})$.* *Each of the algebras can be identified with $$\mathbb{C}[r_1,\dots,r_p]_{\leq q} \oplus \mathbb{C}[r_1,\dots,r_p]_{\leq q}\, e.$$ The filtration degree of $C(\mathfrak{p})^K$ inherited from $C(\mathfrak{p})$, and the gradation degree of $(\textstyle{\bigwedge}\mathfrak{p})^K$ inherited from $\textstyle{\bigwedge}\mathfrak{p}$, are obtained by setting $\deg r_i=4i$ for $i=1,\dots,p$ and $\deg e=2p+2q+1$.* *Proof.* This follows from the discussion above and from Corollary [Corollary 1](#cor o/oo){reference-type="ref" reference="cor o/oo"}. ◻ *Remark 1*. As in Remark [Remark 1](#rem schur){reference-type="ref" reference="rem schur"}, we can replace the monomials in the $r_i$ by the Schur polynomials $s_\lambda$ for $\lambda$ in the $p\times q$ box. This allows us to write the multiplication table in the usual way. ## The case $G/K=\operatorname{Sp}(n)/\operatorname{U}(n)$ {#subsec C/A} Since $G$ and $K$ have equal rank, $\operatorname{Pr}(S)=C(\mathfrak{p})^K$ and $\operatorname{gr}\operatorname{Pr}(S)=(\textstyle{\bigwedge}\mathfrak{p})^K$. Since $G$ and $K$ are both connected, the Weyl groups are $W_G=W_\mathfrak{g}=B_n$ and $W_K=W_\mathfrak{k}=A_{n - 1}$. In other words, $W_K$ consists of the permutations of the variables $x_1,\dots,x_n$, while $W_G$ consists of permutations and sign changes of $x_1,\dots,x_n$. The set $W^1_{G,K}$ has $2^n$ elements and can be identified with the sign changes. 
The algebra $S(\mathfrak{t})^{W_K}$ is generated by $$r_1=x_1+\dots+x_n,\quad r_2=x_1x_2+x_1x_3+\dots + x_{n-1}x_n,\ \dots,\quad r_n=x_1x_2\dots x_n,$$ and $S(\mathfrak{t})^{W_G}$ is generated by $$t_1=x_1^2+\dots+x_n^2,\quad t_2=x_1^2x_2^2+\dots+x_{n-1}^2x_n^2,\ \dots,\ t_n=x_1^2x_2^2\dots x_n^2.$$ To write down the relations coming from $S(\mathfrak{t})^{W_G}$, let $z$ be a formal variable and note that $$\begin{gathered} \sum_{k=0}^n (-1)^k t_k z^{2k}=\prod_{k=1}^n (1-x_k^2z^2)=\prod_{i=1}^n (1-x_iz)\prod_{i=1}^n (1+x_iz) \\ = \left(\sum_{i=0}^n(-1)^ir_iz^i\right)\left(\sum_{j=0}^n r_jz^j\right)=\sum_{k=0}^n\ \left(\sum_{i+j=2k}(-1)^ir_ir_j\right)z^{2k}.\end{gathered}$$ It follows that the relations are $$\label{rel C/A} \sum_{i+j=2k} (-1)^ir_ir_j=(-1)^kt_k,\quad k=1,\dots,n.$$ Equivalently, $$\label{rel C/A bis} r_k^2=t_k+2r_{k-1}r_{k+1}-2r_{k-2}r_{k+2}+\dots,\quad k=1,\dots,n,$$ where as usual we set $r_0=1$ and $r_i=0$ for $i>n$. **Theorem 1**. *For $G/K=\operatorname{Sp}(n)/\operatorname{U}(n)$, the algebras $C(\mathfrak{p})^K$ and $(\textstyle{\bigwedge}\mathfrak{p})^K$ are both generated by $r_1,\dots,r_n$ (or more precisely, their images in the respective quotients), with relations generated by [\[rel C/A bis\]](#rel C/A bis){reference-type="eqref" reference="rel C/A bis"} with $t_k$ replaced by $t_k(\rho)$ for $C(\mathfrak{p})^K$, and by $0$ for $(\textstyle{\bigwedge}\mathfrak{p})^K$.* *In other words, $$\begin{aligned} C(\mathfrak{p})^K &= \frac{\mathbb{C}[r_1,\dots,r_n]}{\Bigl(\sum_{i+j=2k} (-1)^ir_ir_j=(-1)^k t_k(\rho)\ (1 \leq k \leq n)\Bigr)} \\ (\textstyle{\bigwedge}\mathfrak{p})^K & = \frac{\mathbb{C}[r_1,\dots,r_n]}{\Bigl(\sum_{i+j=2k} (-1)^ir_ir_j= 0\ (1 \leq k \leq n)\Bigr)} \\ \end{aligned}$$ The latter algebra is isomorphic to $H^*(\operatorname{LGr}(\mathbb{C}^{2n}), \mathbb{C})$.* *A basis for each of the algebras is represented by the monomials $$\label{basis C/A} r_1^{{\varepsilon}_1}r_2^{{\varepsilon}_2}\dots r_n^{{\varepsilon}_n},\quad 
{\varepsilon}_i\in\{0,1\}.$$ The filtration degree of $C(\mathfrak{p})^K$ inherited from $C(\mathfrak{p})$, and the gradation degree of $(\textstyle{\bigwedge}\mathfrak{p})^K$ inherited from $\textstyle{\bigwedge}\mathfrak{p}$, are obtained by setting $\deg r_i=2i$ for $i=1,\dots,n$.* *Proof.* Note that the cardinality of the set [\[basis C/A\]](#basis C/A){reference-type="eqref" reference="basis C/A"} is correct, $2^n$, so it is enough to show that every monomial can be written as a linear combination of the monomials in [\[basis C/A\]](#basis C/A){reference-type="eqref" reference="basis C/A"}. We proceed by induction on degree. If the degree is 0, the only possible monomial is 1, and it is on the list [\[basis C/A\]](#basis C/A){reference-type="eqref" reference="basis C/A"}. If an arbitrary monomial contains either $r_1$ or $r_n$ with degree $\geq 2$, then we can use the relations [\[rel C/A bis\]](#rel C/A bis){reference-type="eqref" reference="rel C/A bis"} for $k=1$ ($r_1^2=t_1+2r_2$) or for $k=n$ ($r_n^2=t_n$) to write this monomial as a combination of smaller degree monomials. Assume now that a monomial $$\label{monos C/A} r^d = r_1^{d_1}\dots r_n^{d_n},\quad d_i\in\mathbb{Z}_+$$ has $d_k\geq 2$, for some $1<k<n$. We identify monomials [\[monos C/A\]](#monos C/A){reference-type="eqref" reference="monos C/A"} with the strings of exponents $(d_1,\dots,d_n)\in\mathbb{Z}_+^n$. Let $f:[1,n]\to\mathbb{R}^+$ be a concave function taking integer values on $[1,n]\cap\mathbb{Z}$; for example, we can take $f(x)=x(n+1-x)$. We define $F:\mathbb{Z}_+^n\to\mathbb{Z}_+$ by $$F(d_1,\dots,d_n)=\sum_{k=1}^n f(k)d_k.$$ By relations [\[rel C/A bis\]](#rel C/A bis){reference-type="eqref" reference="rel C/A bis"}, the monomial $r^d$ is, up to lower degree monomials, equal to a linear combination of monomials with exponents of the form $$(\dots, d_{k-i}+1,\dots, d_k-2,\dots,d_{k+i}+1,\dots)=d+e_{k-i}-2e_k+e_{k+i},$$ with $i$ a positive integer such that $k-i\geq 1$ and $k+i\leq n$. 
Here $e_1,\dots,e_n$ is the usual standard basis of $\mathbb{R}^n$. We now have $$\begin{gathered} F(d+e_{k-i}-2e_k+e_{k+i})-F(d)=\\ f(k-i)[(d_{k-i}+1)-d_{k-i}] +f(k)[(d_k-2)-d_k]+f(k+i)[(d_{k+i}+1)-d_{k+i}]=\\ f(k-i)-2f(k)+f(k+i),\end{gathered}$$ which is negative since $f$ is concave. So all the monomials in the expression for $d$ using the relations have values of $F$ lower than $F(d)$. We can repeat this procedure as long as we have some $d_k\geq 2$, $1<k<n$ (recall that we already handled the cases $d_1\geq 2$ and $d_n\geq 2$). Since the value of $F$ gets strictly smaller each time, and since these values are positive integers, the process has to stop, meaning that there are no $d_k\geq 2$, hence we have arrived at a monomial of the form [\[basis C/A\]](#basis C/A){reference-type="eqref" reference="basis C/A"}. The statement about degrees follows from the fact that $r_i$ is of degree $i$ as a polynomial in the variables $x_j$, and that the map $\alpha:U(\mathfrak{k})\to C(\mathfrak{p})$ doubles the degree. ◻ ## The case $G/K=\operatorname{SO}(2n)/\operatorname{U}(n)$ {#subsec D/A} Since $G$ and $K$ have equal rank, $\operatorname{Pr}(S)=C(\mathfrak{p})^K$ and $\operatorname{gr}\operatorname{Pr}(S)=(\textstyle{\bigwedge}\mathfrak{p})^K$. Since $G$ and $K$ are both connected, the Weyl groups are $W_G=W_\mathfrak{g}=D_n$ and $W_K=W_\mathfrak{k}=A_{n - 1}$, i.e., $W_K$ consists of the permutations of the variables $x_1,\dots,x_n$, while $W_G$ consists of permutations and sign changes of an even number of variables $x_1,\dots,x_n$. The set $W^1_{G,K}$ has $2^{n-1}$ elements and can be identified with the sign changes of an even number of variables. 
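The relations [\[rel C/A\]](#rel C/A){reference-type="eqref" reference="rel C/A"} from the previous subsection, which reappear in the present case, amount to a concrete identity between elementary symmetric functions of the variables and of their squares, so they are easy to sanity-check numerically. A minimal pure-Python sketch (an added illustration, not part of the original text; the helper `elem_sym` is ours):

```python
from itertools import combinations
from math import prod

def elem_sym(vals, k):
    # k-th elementary symmetric polynomial e_k evaluated at the given values
    # (empty product gives e_0 = 1; k > len(vals) gives 0)
    return sum(prod(c) for c in combinations(vals, k))

x = [2, 3, 5]                                   # a sample point, n = 3
n = len(x)
r = [elem_sym(x, i) for i in range(2 * n + 1)]  # r_0 = 1, r_i = 0 for i > n
t = [elem_sym([v * v for v in x], k) for k in range(n + 1)]
for k in range(1, n + 1):
    lhs = sum((-1) ** i * r[i] * r[2 * k - i] for i in range(2 * k + 1))
    assert lhs == (-1) ** k * t[k]
print("relations verified at the sample point")
```

Checking the identity at a single integer point of course does not prove it, but it quickly catches sign or indexing errors in the alternating sum.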
The algebra $S(\mathfrak{t})^{W_K}$ is generated by $$r_1=x_1+\dots+x_n,\quad r_2=x_1x_2+x_1x_3+\dots + x_{n-1}x_n,\ \dots,\quad r_n=x_1x_2\dots x_n,$$ and $S(\mathfrak{t})^{W_G}$ is generated by $$t_1=x_1^2+\dots+x_n^2,\ \dots,\quad t_{n-1}=x_1^2\dots x_{n-1}^2+\dots+x_2^2\dots x_n^2,\quad \bar t_n=x_1x_2\dots x_n.$$ We set $t_n=\bar t_n^2$. The relations are the same as [\[rel C/A\]](#rel C/A){reference-type="eqref" reference="rel C/A"} or [\[rel C/A bis\]](#rel C/A bis){reference-type="eqref" reference="rel C/A bis"} (where as before, $t_k$ is replaced by $t_k(\rho)$ for $C(\mathfrak{p})^K$ and by $0$ for $(\textstyle{\bigwedge}\mathfrak{p})^K$), except that for $k=n$ we have $r_n=\bar t_n$ instead of $r_n^2=t_n$. This last equation enables us to eliminate $r_n$ from the list of generators. Thus we have **Theorem 1**. *For $G/K=\operatorname{SO}(2n)/\operatorname{U}(n)$, the algebras $C(\mathfrak{p})^K$ and $(\textstyle{\bigwedge}\mathfrak{p})^K$ are both generated by $r_1,\dots,r_{n-1}$ (or more precisely, their images in the respective quotients). 
The relations are generated by [\[rel C/A bis\]](#rel C/A bis){reference-type="eqref" reference="rel C/A bis"} with $t_k$ replaced by $t_k(\rho)$ for $C(\mathfrak{p})^K$ and by $0$ for $(\textstyle{\bigwedge}\mathfrak{p})^K$, and with the last relation replaced by $r_n=\bar t_n$.* *In other words, $$\begin{aligned} C(\mathfrak{p})^K &= \frac{\mathbb{C}[r_1,\dots,r_n]}{\Bigl(\sum_{i+j=2k} (-1)^ir_ir_j=(-1)^k t_k(\rho)\; (1 \leq k \leq n), r_n = \bar{t}_n(\rho) \Bigr)} \\ (\textstyle{\bigwedge}\mathfrak{p})^K & = \frac{\mathbb{C}[r_1,\dots,r_n]}{\Bigl(\sum_{i+j=2k} (-1)^ir_ir_j= 0\; (1 \leq k \leq n), r_n = 0 \Bigr)} \\ \end{aligned}$$ The latter algebra is isomorphic to $H^*(\operatorname{OLGr}^+(\mathbb{C}^{2n}), \mathbb{C})$.* *A basis for each of the algebras is represented by the monomials $$\label{basis D/A} r_1^{{\varepsilon}_1}r_2^{{\varepsilon}_2}\dots r_{n-1}^{{\varepsilon}_{n-1}},\quad {\varepsilon}_i\in\{0,1\}.$$ The filtration degree of $C(\mathfrak{p})^K$ inherited from $C(\mathfrak{p})$, and the gradation degree of $(\textstyle{\bigwedge}\mathfrak{p})^K$ inherited from $\textstyle{\bigwedge}\mathfrak{p}$, are obtained by setting $\deg r_i=2i$ for $i=1,\dots,n-1$.* *Proof.* The same as the proof of Theorem [Theorem 1](#thm basis C/A){reference-type="ref" reference="thm basis C/A"}. ◻ ## The group cases {#subsec group} In this subsection we consider the group cases $G\times G/\Delta G\cong G$, where $G$ is $\operatorname{SO}(n)$, or $\operatorname{U}(n)$, or $\operatorname{Sp}(n)$. In each of the three cases, the complexified Lie algebra of $G\times G$ is $\mathfrak{g}\oplus\mathfrak{g}$ where $\mathfrak{g}$ is the complexified Lie algebra of $G$, $\mathfrak{k}$ is the diagonal subalgebra $\Delta\mathfrak{g}\cong\mathfrak{g}$ of $\mathfrak{g}\oplus\mathfrak{g}$, and $\mathfrak{p}$ is the antidiagonal subspace of $\mathfrak{g}\oplus\mathfrak{g}$, which is isomorphic to $\mathfrak{g}$ as a $\mathfrak{g}$-module. 
Thus we are looking for the description of $C(\mathfrak{g})^G=C(\mathfrak{g})^\mathfrak{g}$ and of $(\textstyle{\bigwedge}\mathfrak{g})^G=(\textstyle{\bigwedge}\mathfrak{g})^\mathfrak{g}$. We can therefore use the results of [@K97] in the Clifford case and the well-known Hopf-Koszul-Samelson Theorem in the exterior case [@cartan1951; @Samelson41; @koszul50] (see [@CCCvol3 p.568] for full bibliographic details) to conclude **Theorem 1**. *[@K97], [@Samelson41] [\[gen rel basis group\]]{#gen rel basis group label="gen rel basis group"} The algebras $C(\mathfrak{p})^K=C(\mathfrak{g})^\mathfrak{g}$ and $(\textstyle{\bigwedge}\mathfrak{p})^K=(\textstyle{\bigwedge}\mathfrak{g})^\mathfrak{g}$ are isomorphic to $C(\mathcal P_\wedge(\mathfrak{p}))$, respectively $\textstyle{\bigwedge}\mathcal P_\wedge(\mathfrak{p})$, where $\mathcal P_\wedge({\mathfrak p})\cong\mathfrak{h}$ denotes the graded subspace defined in Definition [Definition 1](#def samspace){reference-type="ref" reference="def samspace"}. The degrees are given in Table [1](#tab degrees prim){reference-type="ref" reference="tab degrees prim"}.* ## The cases $G/K=\operatorname{U}(2n)/\operatorname{Sp}(n)$ {#subsec A/C} In these cases $K$ is connected, so $(\textstyle{\bigwedge}\mathfrak{p})^K=(\textstyle{\bigwedge}\mathfrak{p})^\mathfrak{k}$. Since these are unequal rank cases, we first describe the fundamental Cartan subalgebra $\mathfrak{h}=\mathfrak{t}\oplus\mathfrak{a}$. The situation is similar to the case $G/K=\operatorname{U}(2n)/\operatorname{O}(2n)$. The noncompact symmetric space corresponding to $\operatorname{U}(2n)/\operatorname{Sp}(n)$ is $\operatorname{GL}(2n,\mathbb{H})/\operatorname{Sp}(n)=\operatorname{U}^*(2n)/\operatorname{Sp}(n)$. The following can be read off from the information about the classification of real forms in [@beyond]. 
The fundamental Cartan subalgebra $\mathfrak{h}$ can be identified with $\mathbb{C}^{2n}$, with the Cartan involution acting by $\theta(h_1,\dots,h_{2n})=(-h_{2n},\dots,-h_1)$. Hence $$\begin{aligned} &\mathfrak{t}&=\{(h_1,\dots,h_n,-h_n,\dots,-h_1)\,\big|\,h_1,\dots,h_n\in\mathbb{C}\}\cong\mathbb{C}^n;\\ &\mathfrak{a}&=\{(h_1,\dots,h_n,h_n,\dots,h_1)\,\big|\,h_1,\dots,h_n\in\mathbb{C}\}\cong\mathbb{C}^n.\end{aligned}$$ If we now restrict the roots $\pm({\varepsilon}_i-{\varepsilon}_j)$, $1\leq i<j\leq 2n$, to $\mathfrak{t}$, we get $$\pm({\varepsilon}_i\pm{\varepsilon}_j),\ 1\leq i<j\leq n;\quad\ \pm2{\varepsilon}_i,\ 1\leq i\leq n.$$ In other words, we have obtained the root system $C_n$. Since $\Delta(\mathfrak{k},\mathfrak{t})$ is also of type $C_n$ (but with smaller multiplicities), $W^1_{\mathfrak{g},\mathfrak{k}}$ consists only of the identity. (This means that the spin module $S$ is primary, as already observed in [@han].) So the algebras $\operatorname{Pr}(S)$ and $\operatorname{gr}\operatorname{Pr}(S)$ are both equal to $\mathbb{C}\cdot 1$, and using Theorem [Theorem 1](#thm prim and aprim alg){reference-type="ref" reference="thm prim and aprim alg"} we get **Theorem 1**. *The algebra $(\textstyle{\bigwedge}\mathfrak{p})^K$ is isomorphic to $\textstyle{\bigwedge}(\mathcal P_\wedge(\mathfrak{p}))$, where $\mathcal P_\wedge(\mathfrak{p})\cong\mathfrak{a}$ is defined in Definition [Definition 1](#def samspace){reference-type="ref" reference="def samspace"} and the degrees are given in Table [1](#tab degrees prim){reference-type="ref" reference="tab degrees prim"}.* [^1]: K. Nishiyama is supported by JSPS KAKENHI Grant Number \#21K03184. [^2]: P. Pandžić is supported by the QuantiXLie Center of Excellence, a project cofinanced by the Croatian Government and European Union through the European Regional Development Fund - the Competitiveness and Cohesion Operational Programme (KK.01.1.1.01.0004).
arXiv:2310.04839, Kieran Calvert, Kyo Nishiyama, Pavle Pandžić, *Clifford algebras, symmetric spaces and cohomology rings of Grassmannians* (math.RT, math.DG; license: CC BY 4.0).
--- abstract: | We study in detail the operad controlling several pre-Lie algebra structures sharing the same Lie bracket. Specifically, we show that this operad admits a combinatorial description similar to that of Chapoton and Livernet for the pre-Lie operad, and that it has many of the remarkable algebraic properties of the pre-Lie operad. author: - Paul Laubie bibliography: - Bibly.bib title: Combinatorics of pre-Lie products sharing a Lie bracket --- # Introduction {#introduction .unnumbered} Historically, the first example of a non-associative algebraic structure to be studied was that of Lie algebras. They were introduced by Sophus Lie in the 1870s as an algebraic structure on the tangent space of a Lie group, and were later generalized to the vector fields of any manifold.\ Although the commutator of an associative algebra is a Lie bracket, this particular example of a Lie algebra does not come from an associative algebra structure on the vector fields. However, it is well known that a flat torsion-free connection on a manifold induces a pre-Lie algebra (or left-symmetric algebra) structure on the vector fields such that the commutator is the usual Lie bracket on the vector fields [@LSA]. This example may hint that pre-Lie algebras are a bit "more natural" than associative algebras when studying Lie algebras.\ The case of a manifold with two flat torsion-free connections appears in [@D2pl] with the notion of a Joyce structure. In that case, the algebraic structure on the vector fields is richer than a pre-Lie algebra. Indeed, each connection gives a pre-Lie product, and the commutator of each of these pre-Lie products is the usual Lie bracket on the vector fields. 
Hence we get an algebra with two pre-Lie products sharing the same Lie bracket.\ If one uses the language of operads to talk about algebraic structures, the operad corresponding to algebras with two pre-Lie products sharing the same Lie bracket is $\bigvee_\mathrm{Lie}^2\mathrm{PreLie}$; this is the fibered coproduct of two copies of $\mathrm{PreLie}$ over $\mathrm{Lie}$, i.e. the colimit of the span $\mathrm{PreLie}\leftarrow \mathrm{Lie}\rightarrow \mathrm{PreLie}$ in the category of algebraic operads. The notion of fibered coproduct is analogous to the notion of amalgamated sum in group theory. The morphism $\mathrm{Lie}\rightarrow\mathrm{PreLie}$ used to define this coproduct corresponds to the fact that the operation $[a,b]=ab-ba$ makes every pre-Lie algebra into a Lie algebra.\ Computations show that the dimensions of the low-arity components of $\bigvee_\mathrm{Lie}^2\mathrm{PreLie}$ coincide with the numbers of rooted Greg trees [A005264](https://oeis.org/A005264) in the OEIS [@Oeis]. A rooted Greg tree is a rooted tree with black and white vertices such that white vertices are distinguished (e.g. numbered), black vertices are undistinguished and each black vertex has at least two children. This raises the question of whether the underlying species of $\bigvee_\mathrm{Lie}^2\mathrm{PreLie}$ is the species of rooted Greg trees.\ 
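This numerical evidence is easy to reproduce. The description of rooted Greg trees just given translates directly into a functional equation for their exponential generating function $G(x)$, where $x$ marks white vertices: such a tree is either a white root carrying an arbitrary (possibly empty) set of subtrees, or a black root carrying at least two subtrees, so $G=xe^{G}+(e^{G}-1-G)$, i.e. $(1+x)e^{G}=1+2G$. A short Python sketch (our illustration, not taken from the paper) extracting the coefficients:

```python
from fractions import Fraction
from math import factorial

def greg_counts(N):
    """Numbers of rooted Greg trees with 1..N white vertices, from the
    functional equation (1+x)*exp(G(x)) = 1 + 2*G(x) for the e.g.f. G."""
    g = [Fraction(0)] * (N + 1)              # g[m] = [x^m] G(x)
    for m in range(1, N + 1):
        # h[n] = [x^n] exp(G), computed with the unknown g[m] still 0,
        # via the series recurrence (exp G)' = G' * exp G
        h = [Fraction(1)] + [Fraction(0)] * m
        for n in range(1, m + 1):
            h[n] = sum(k * g[k] * h[n - k] for k in range(1, n + 1)) / n
        # [x^m] of (1+x)e^G is h[m] + h[m-1] + g[m]; [x^m] of 1+2G is 2g[m]
        g[m] = h[m] + h[m - 1]
    return [int(factorial(n) * g[n]) for n in range(1, N + 1)]

print(greg_counts(5))  # [1, 3, 22, 262, 4336]
```

The output $1, 3, 22, 262, 4336$ can be compared with the OEIS entry [A005264](https://oeis.org/A005264) cited above.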
It is natural to ask if $\bigvee_\mathrm{Lie}^2\mathrm{PreLie}$ shares those properties.\ Before we summarize the main results of this paper, remark that the operad $\bigvee_\mathrm{Lie}^2\mathrm{PreLie}$ can naturally be generalized to $\bigvee_\mathrm{Lie}^n\mathrm{PreLie}$ for any $n\in\mathbb{N}$. The operad $\bigvee_\mathrm{Lie}^n\mathrm{PreLie}$ is the fibered coproduct of $n$ copies of $\mathrm{PreLie}$ over $\mathrm{Lie}$. This operad encodes algebras with $n$ pre-Lie products that share a common Lie bracket.\ This paper is organized as follows. We begin by recalling some general facts about operads, and more particularly about the operads $\mathrm{Lie}$ and $\mathrm{PreLie}$, in Section [1](#seq:recall){reference-type="ref" reference="seq:recall"}. We then give an explicit description of the operadic structure on the rooted trees from [@PreLie] in a way that will make it easier to generalize. In the next section, we generalize this construction to a larger class of trees, the rooted Greg trees, introducing a naive operadic structure on them, the operad $\mathrm{Greg}$. Unfortunately, this newly defined operad is not isomorphic to $\bigvee_\mathrm{Lie}^2\mathrm{PreLie}$; however, $\bigvee_\mathrm{Lie}^2\mathrm{PreLie}$ will be shown to be a deformation of $\mathrm{Greg}$. We show that: **Theorem 1**. *[Corollary 4](#thm:m1){reference-type="ref" reference="thm:m1"} The operad $\mathrm{Greg}$ is binary, quadratic and Koszul.* In Section [4](#seq:GGO){reference-type="ref" reference="seq:GGO"}, we generalize the construction of $\mathrm{Greg}$ by making the operadic structure depend on a coalgebra $C$, defining the operad $\mathrm{Greg}^C$. In the next section, we show that we can obtain the operad $\bigvee_\mathrm{Lie}^n\mathrm{PreLie}$ this way, with a careful choice of the coalgebra. We show that: **Theorem 2**. 
*[Corollary 7](#thm:m2){reference-type="ref" reference="thm:m2"} The operad $\mathrm{Greg}^C$ is binary, quadratic and Koszul.* In Section [6](#seq:free){reference-type="ref" reference="seq:free"}, we address the question of the freeness of $\mathrm{Greg}^C$ and we show that, for $C'$ a subcoalgebra of $C$: **Theorem 3**. *[Corollary 12](#thm:m3){reference-type="ref" reference="thm:m3"}&[Corollary 15](#thm:m3ns){reference-type="ref" reference="thm:m3ns"} The operad $\mathrm{Greg}^C$ is free as a left $\mathrm{Greg}^{C'}$-module, free as a right $\mathrm{Greg}^{C'}$-module, and has the Nielsen-Schreier property.* In particular: **Theorem 4**. *[Corollary 13](#thm:m4){reference-type="ref" reference="thm:m4"}&[Corollary 16](#thm:m4ns){reference-type="ref" reference="thm:m4ns"} The operad $\bigvee_\mathrm{Lie}^{n+1}\mathrm{PreLie}$ is free as a left $\bigvee_\mathrm{Lie}^n\mathrm{PreLie}$-module, free as a right $\bigvee_\mathrm{Lie}^n\mathrm{PreLie}$-module, and has the Nielsen-Schreier property.* In the next section, we compute explicit generators of $\bigvee_\mathrm{Lie}^{n+1}\mathrm{PreLie}$ as a left $\bigvee_\mathrm{Lie}^n\mathrm{PreLie}$-module, generalizing a conjecture stated in [@FPL] and proven in [@FPRed]. Let $\mathcal{F}$ be the free operad functor, $\bar{\mathcal{F}}$ the reduced free operad functor and $\bar{\mathcal{F}}^{(n)}$ the $n$-th iteration of $\bar{\mathcal{F}}$. We show that: **Theorem 5**. *[Theorem 18](#thm:m5){reference-type="ref" reference="thm:m5"} The operad $\bigvee_\mathrm{Lie}^{n+1}\mathrm{PreLie}$ is isomorphic to $\bigvee_\mathrm{Lie}^n\mathrm{PreLie}\circ\mathcal{F}(\bar{\mathcal{F}}^{(n)}(\mathrm{CycLie}))$ as a left $\bigvee_\mathrm{Lie}^n\mathrm{PreLie}$-module.* # Acknowledgements {#acknowledgements .unnumbered} This work was supported by the French national research agency \[grant ANR-20-CE40-0016\].\ I would like to express my deepest gratitude to my PhD advisor Vladimir Dotsenko for his infinite patience and for encouraging me in this project.
I am very thankful to Frédéric Chapoton for his availability to discuss operadic constructions on combinatorial objects. I would like to thank Pedro Tamaroff for our discussions about homotopy colimits of operads, which led me to consider the examples of Remark [Remark 13](#rmk:PT){reference-type="ref" reference="rmk:PT"}. And I would also like to thank Gaétan Leclerc for our numerous and interesting math related discussions. # Recollections on Lie and pre-Lie operad {#seq:recall} The base field is supposed to be of characteristic $0$.\ The reader is assumed to be familiar with the theory of species; an introduction to and a study of this theory can be found in the book of Bergeron, Labelle and Leroux [@Species].\ For necessary information about operads, shuffle operads and Gröbner bases over operads, we refer the reader to the book of Loday and Vallette [@AlgOp] and the book of Dotsenko and Bremner [@AlgComp].\ We make an extensive use of Gröbner bases over operads, and give explicit computations of them. Because of the size of the bases, lengthy figures are postponed to the [Appendix](#Apx), namely Figures 2 to 9.\ We denote by $\mathcal{F}$ the free operad functor.\ The operad $\mathrm{Lie}$ is the operad encoding Lie algebras. It is generated by a binary skew-symmetric operation $l$ satisfying the Jacobi identity. A presentation of this operad is given by: $$\mathrm{Lie}=\mathcal{F}(l)/\langle l\circ_1l+(l\circ_1l).(1\;2\;3)+(l\circ_1l).(1\;3\;2)\rangle ,$$ with $l.(1\;2)=-l$.\ The operad $\mathrm{PreLie}$ is the operad encoding pre-Lie algebras. It is generated by a binary operation $x$ satisfying the pre-Lie identity. A presentation of this operad is given by: $$\mathrm{PreLie}=\mathcal{F}(x)/\langle (x \circ_1x -x \circ_2x) - (x \circ_1x -x \circ_2x).(2\;3) \rangle$$ As stated above, we will make an extensive use of the notion of Gröbner bases over operads, which relies on the notions of shuffle operad and shuffle tree.
We refer the reader to [@AlgComp] for more details.\ The main goal of working with shuffle operads is to put a compatible order on the monomials of the free operad, in order to later define Gröbner bases. It is quite clear that the actions of the symmetric groups prevent any hope of such an order. Hence the actions of the symmetric groups are disposed of when considering shuffle operads, which explains the need for a notation for $x.(1\;2)$; let us write $x.(1\;2)=y$.\ One needs to be careful with the notation since several types of trees will be used in this article. Shuffle trees are used to write Gröbner bases. Other trees, such as rooted trees and rooted Greg trees, are used to describe in a combinatorial way the underlying species of the operads.\ **Definition 1**. A shuffle tree on an alphabet $\chi$ is a rooted planar tree such that: - Internal vertices are labeled by elements of $\chi$ and have as many children as the arity of their label (in our case it is always $2$). - Leaves are labeled by distinct integers of $\{1,\dots, n\}$, with $n$ the number of leaves. - The numbering of the leaves must satisfy the *local increasing condition*:\ The numbering of the leaves is extended to the internal vertices so that each vertex receives the smallest number of its children.\ The local increasing condition is that, for each internal vertex, the numbering of its children is increasing from left to right.
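The local increasing condition can be checked mechanically. As an illustration (the encoding of shapes as nested tuples is my own convention, not from the paper), the following sketch counts the admissible leaf numberings of the two planar binary shapes with three leaves:

```python
from itertools import permutations

# The two shapes of planar binary trees with 3 leaves, encoded as nested
# tuples whose integer entries are leaf slots (this encoding is mine).
left_comb = ((0, 1), 2)
right_comb = (0, (1, 2))

def check(tree, labels):
    """Return (condition holds, smallest leaf number below this vertex)."""
    if isinstance(tree, int):
        return True, labels[tree]
    ok_left, min_left = check(tree[0], labels)
    ok_right, min_right = check(tree[1], labels)
    # local increasing condition: the children's numbers increase left to right
    return ok_left and ok_right and min_left < min_right, min(min_left, min_right)

def count(shape):
    """Number of leaf numberings of the shape satisfying the condition."""
    return sum(check(shape, p)[0] for p in permutations((1, 2, 3)))

print(count(left_comb), count(right_comb))  # 2 1
```

These $2+1=3$ admissible numbered shapes, with each of the two internal vertices labeled by $x$ or $y$, index the $12$ shuffle tree monomials of arity $3$ on the generators $x$ and $y$.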
Let us write the pre-Lie relation with shuffle trees: $$% Left tree 123 \left( \vcenter{\hbox{\begin{tikzpicture}[scale=0.2] %vertices \node (1) at (-3,6) {}; \node (2) at (0,6) {}; \node (3) at (1.5,3) {}; \node (x2) at (-1.5,3) {}; \node (x1) at (0,0) {}; %triangles %edges \draw[thick] (x2)--(1); \draw[thick] (x2)--(2); \draw[thick] (x1)--(3); \draw[thick] (x1)--(x2); %dots %circles %labels \node at (1) {\scalebox{0.8}{$1$}}; \node at (2) {\scalebox{0.8}{$2$}}; \node at (3) {\scalebox{0.8}{$3$}}; \node at (x1) {\scalebox{0.8}{$x$}}; \node at (x2) {\scalebox{0.8}{$x$}}; \end{tikzpicture}}}- % Right tree 123 \vcenter{\hbox{\begin{tikzpicture}[scale=0.2] %vertices \node (1) at (-1.5,3) {}; \node (2) at (0,6) {}; \node (3) at (3,6) {}; \node (x2) at (1.5,3) {}; \node (x1) at (0,0) {}; %triangles %edges \draw[thick] (x2)--(2); \draw[thick] (x2)--(3); \draw[thick] (x1)--(1); \draw[thick] (x1)--(x2); %dots %circles %labels \node at (1) {\scalebox{0.8}{$1$}}; \node at (2) {\scalebox{0.8}{$2$}}; \node at (3) {\scalebox{0.8}{$3$}}; \node at (x1) {\scalebox{0.8}{$x$}}; \node at (x2) {\scalebox{0.8}{$x$}}; \end{tikzpicture}}} \right) - \left( % Left tree 132 \vcenter{\hbox{\begin{tikzpicture}[scale=0.2] %vertices \node (1) at (-3,6) {}; \node (2) at (0,6) {}; \node (3) at (1.5,3) {}; \node (x2) at (-1.5,3) {}; \node (x1) at (0,0) {}; %triangles %edges \draw[thick] (x2)--(1); \draw[thick] (x2)--(2); \draw[thick] (x1)--(3); \draw[thick] (x1)--(x2); %dots %circles %labels \node at (1) {\scalebox{0.8}{$1$}}; \node at (2) {\scalebox{0.8}{$3$}}; \node at (3) {\scalebox{0.8}{$2$}}; \node at (x1) {\scalebox{0.8}{$x$}}; \node at (x2) {\scalebox{0.8}{$x$}}; \end{tikzpicture}}}- % Right tree 123 \vcenter{\hbox{\begin{tikzpicture}[scale=0.2] %vertices \node (1) at (-1.5,3) {}; \node (2) at (0,6) {}; \node (3) at (3,6) {}; \node (x2) at (1.5,3) {}; \node (x1) at (0,0) {}; %triangles %edges \draw[thick] (x2)--(2); \draw[thick] (x2)--(3); \draw[thick] (x1)--(1); \draw[thick] (x1)--(x2); 
%dots %circles %labels \node at (1) {\scalebox{0.8}{$1$}}; \node at (2) {\scalebox{0.8}{$2$}}; \node at (3) {\scalebox{0.8}{$3$}}; \node at (x1) {\scalebox{0.8}{$x$}}; \node at (x2) {\scalebox{0.8}{$y$}}; \end{tikzpicture}}} \right)$$ It is a classical result that $\mathrm{Lie}$ is a sub-operad of $\mathrm{PreLie}$ by the inclusion $l\mapsto x-y$.\ The operad encoding two pre-Lie products sharing a Lie bracket is the colimit of the cospan $\mathrm{PreLie}\leftarrow\mathrm{Lie}\rightarrow\mathrm{PreLie}$ in the category of operads, it is the coproduct of two copies of $\mathrm{PreLie}$ fibered over $\mathrm{Lie}$. We note it $\bigvee_\mathrm{Lie}^2\mathrm{PreLie}$.\ A presentation of this operad is given by: $$\begin{gathered} \bigvee_\mathrm{Lie}^2\mathrm{PreLie}=\mathcal{F}(x_1,x_2)/\langle(x_1 \circ_1x_1 -x_1 \circ_2x_1) - (x_1 \circ_1x_1 -x_1 \circ_2x_1).(2\;3)\;;\\ \qquad\qquad(x_2 \circ_1x_2 -x_2 \circ_2x_2) - (x_2 \circ_1x_2 -x_2 \circ_2x_2).(2\;3)\;; (x_1-x_2)-(x_1-x_2).(1\;2)\rangle\end{gathered}$$ with the notation $x_1.(1\;2)=y_1$ and $x_2.(1\;2)=y_2$.\ This presentation is not quadratic, indeed the last relation is linear, a quadratic presentation will be given in Section [4](#seq:GGO){reference-type="ref" reference="seq:GGO"}. However, this presentation is enough to compute the first dimensions aritywise of this operad either by hand or with a computer using the Haskell calculator written by Dotsenko and Heijltjes [@Hask].\ Arity $1$ $2$ $3$ $4$ $5$ $\cdots$ $n$ ---------------------------------------------------------- ----- ----- ------ ------- ------- ---------- ----------- $\dim(\mathrm{Lie}(n))$ $1$ $1$ $2$ $6$ $24$ $\cdots$ $(n-1)!$ $\dim(\mathrm{PreLie}(n))$ $1$ $2$ $9$ $64$ $625$ $\cdots$ $n^{n-1}$ $\dim\big(\bigvee_\mathrm{Lie}^2\mathrm{PreLie}(n)\big)$ $1$ $3$ $22$ $262$ ? ? ? 
One may recognize in the last line the first terms of the sequence of the numbers of rooted Greg trees with $n$ white vertices [A005264](https://oeis.org/A005264) in [@Oeis]. A rooted Greg tree is a rooted tree with black and white vertices such that the black vertices have at least two children.\ This leads to the natural questions: - Is the sequence of the aritywise dimensions of $\bigvee_\mathrm{Lie}^2\mathrm{PreLie}$ the sequence of the numbers of rooted Greg trees? - Are they the same species? - Is there a combinatorial interpretation of the operad $\bigvee_\mathrm{Lie}^2\mathrm{PreLie}$ using rooted Greg trees? In this article we answer those questions positively and show that the operad $\bigvee_\mathrm{Lie}^2\mathrm{PreLie}$ has agreeable algebraic properties: it is binary, quadratic, Koszul, free as a left $\mathrm{PreLie}$-module, free as a right $\mathrm{PreLie}$-module, and has the Nielsen-Schreier property. The last three properties are freeness properties that will be discussed in Section [6](#seq:free){reference-type="ref" reference="seq:free"}. # Operad structure on rooted trees {#seq:RT} Let us give a full construction of the operad structure on rooted trees defined by Chapoton and Livernet in [@PreLie], in order to generalize it to rooted Greg trees.\ **Definition 2**. A rooted tree is a connected non-empty non-oriented graph without cycles, with a distinguished vertex called the root. Let $I\subset\mathbb{N}$ be a finite set; the vertices are in bijection with $I$.\ The action of the symmetric group is given by the permutation of the labels.\ When drawing rooted trees, the root will be at the bottom and the leaves at the top. For $S$ a rooted tree and $v$ a vertex of $S$, let $FS=\{S_1,\dots,S_k\}$ be the forest of the children of $v$ and $B$ the rooted tree below $v$.
Let us introduce the following notation: $$\vcenter{\hbox{\begin{tikzpicture}[scale=0.5] %vertices \node (S) at (0,0) {}; %triangles \node[isosceles triangle, isosceles triangle apex angle=60, draw, rotate=270, fill=white!50, minimum size =24pt] (ST) at (S){}; %edges %dots %circles \draw[circle,fill=black] (ST.east) circle [radius=2pt]; %labels \node at (S) {\scalebox{0.8}{$S$}}; \end{tikzpicture}}} \qquad = \qquad \vcenter{\hbox{\begin{tikzpicture}[scale=0.5] %vertices \node (lv) at (0,0) {}; \node (SD) at (0,-3) {}; \node (SU1) at (-2,3) {}; \node (dot) at (0,3) {}; \node (SUk) at (2,3) {}; %triangles \node[isosceles triangle, isosceles triangle apex angle=60, draw, rotate=-90, fill=white!50, minimum size =24pt] (SDT) at (SD){}; \node[isosceles triangle, isosceles triangle apex angle=60, draw, rotate=270, fill=white!50, minimum size =24pt] (SU1T) at (SU1){}; \node[isosceles triangle, isosceles triangle apex angle=60, draw, rotate=270, fill=white!50, minimum size =24pt] (SUkT) at (SUk){}; %edges \draw[thick] (SU1T.east)--(lv); \draw[thick] (SUkT.east)--(lv); \draw[thick] (lv)--(SDT.west); %dots \node at (dot) {$\cdots$}; %circles \draw[circle,fill=white] (lv) circle [radius=24pt]; \draw[circle,fill=black] (SDT.east) circle [radius=2pt]; \draw[circle,fill=black] (SU1T.east) circle [radius=2pt]; \draw[circle,fill=black] (SUkT.east) circle [radius=2pt]; %labels \node at (lv) {\scalebox{1}{$v$}}; \node at (SU1) {\scalebox{0.8}{$S_1$}}; \node at (SUk) {\scalebox{0.8}{$S_k$}}; \node at (SD) {\scalebox{0.8}{$B$}}; \end{tikzpicture}}} \qquad = \qquad \vcenter{\hbox{\begin{tikzpicture}[scale=0.5] %vertices \node (lv) at (0,0) {}; \node (SD) at (0,-2.7) {}; \node (F) at (0,3) {}; %triangles \node[isosceles triangle, isosceles triangle apex angle=60, draw, rotate=270, fill=white!50, minimum size =24pt] (SDT) at (SD){}; \node[isosceles triangle, isosceles triangle apex angle=60, draw, rotate=270, fill=white!50, minimum size =24pt] (FT) at (F){}; %edges \draw[thick, double] 
(FT.east)--(lv); \draw[thick] (lv)--(SDT.west); %dots %circles \draw[circle,fill=white] (lv) circle [radius=24pt]; \draw[circle,fill=black] (FT.east) circle [radius=2pt]; \draw[circle,fill=black] (SDT.east) circle [radius=2pt]; %labels \node at (lv) {\scalebox{1}{$v$}}; \node at (F) {\scalebox{0.8}{$FS$}}; \node at (SD) {\scalebox{0.8}{$B$}}; \end{tikzpicture}}}$$ We use circles to represent vertices, triangles to represent trees or forests and double edge to represent that each trees of the forest $FS$ is grafted to $v$. **Definition 3**. Let $S$ and $T$ be two rooted trees labeled over disjoint sets and $V(S)$ the set of vertices of $S$, let $S\star T$ be the *fall product* of $T$ over $S$ defined by: $$S\star T = % only for \displaystyle \mathop{% \raisebox {-1\depthofsumsign+1\depthofsumsign} {\scalebox {1} {$\displaystyle\sum$}% } } _{v\in V(S)} \vcenter{\hbox{\begin{tikzpicture}[scale=0.5] %vertices \node (lv) at (0,0) {}; \node (SD) at (0,-3) {}; \node (T) at (1.5,3) {}; \node (FS) at (-1.5,3) {}; %triangles \node[isosceles triangle, isosceles triangle apex angle=60, draw, rotate=270, fill=white!50, minimum size =24pt] (SDT) at (SD){}; \node[isosceles triangle, isosceles triangle apex angle=60, draw, rotate=270, fill=white!50, minimum size =24pt] (TT) at (T){}; \node[isosceles triangle, isosceles triangle apex angle=60, draw, rotate=270, fill=white!50, minimum size =24pt] (FST) at (FS){}; %edges \draw[thick] (TT.east)--(lv); \draw[thick,double] (FST.east)--(lv); \draw[thick] (lv)--(SDT.west); %dots %circles \draw[circle,fill=white] (lv) circle [radius=24pt]; \draw[circle,fill=black] (TT.east) circle [radius=2pt]; \draw[circle,fill=black] (FST.east) circle [radius=2pt]; \draw[circle,fill=black] (SDT.east) circle [radius=2pt]; %labels \node at (lv) {\scalebox{1}{$v$}}; \node at (T) {\scalebox{0.8}{$T$}}; \node at (FS) {\scalebox{0.8}{$FS$}}; \node at (SD) {\scalebox{0.8}{$B$}}; \end{tikzpicture}}}$$ For readability sake, let us omit the sums, the tree $B$ and 
the forest $FS$: $$S\star T = \vcenter{\hbox{\begin{tikzpicture}[scale=0.5] %vertices \node (lv) at (0,0) {}; \node (T) at (0,3) {}; %triangles \node[isosceles triangle, isosceles triangle apex angle=60, draw, rotate=270, fill=white!50, minimum size =24pt] (TT) at (T){}; %edges \draw[thick] (TT.east)--(lv); %dots %circles \draw[circle,fill=white] (lv) circle [radius=24pt]; \draw[circle,fill=black] (TT.east) circle [radius=2pt]; %labels \node at (lv) {\scalebox{1}{$v$}}; \node at (T) {\scalebox{0.8}{$T$}}; \end{tikzpicture}}}$$ *Example 1*. Let compute $(R\star S)\star T$ to grasp the definition of the fall product. Let $v_r$ and $v_{r'}$ be generic vertices of $R$ and $v_s$ a generic vertex of $S$. $$(R\star S)\star T= \vcenter{\hbox{\begin{tikzpicture}[scale=0.5] %vertices \node (lv) at (0,0) {}; \node (r) at (0,-3) {}; \node (T) at (0,3) {}; %triangles \node[isosceles triangle, isosceles triangle apex angle=60, draw, rotate=270, fill=white!50, minimum size =24pt] (TT) at (T){}; %edges \draw[thick] (TT.east)--(lv); \draw[thick, dashed] (r)--(lv); %dots %circles \draw[circle,fill=white] (lv) circle [radius=24pt]; \draw[circle,fill=white] (r) circle [radius=24pt]; \draw[circle,fill=black] (TT.east) circle [radius=2pt]; %labels \node at (lv) {\scalebox{1}{$v_s$}}; \node at (r) {\scalebox{1}{$v_r$}}; \node at (T) {\scalebox{0.8}{$T$}}; \end{tikzpicture}}} + \vcenter{\hbox{\begin{tikzpicture}[scale=0.5] %vertices \node (r) at (3,0) {}; \node (s) at (0,0) {}; \node (S) at (3,3) {}; \node (T) at (0,3) {}; %triangles \node[isosceles triangle, isosceles triangle apex angle=60, draw, rotate=270, fill=white!50, minimum size =24pt] (TT) at (T){}; \node[isosceles triangle, isosceles triangle apex angle=60, draw, rotate=270, fill=white!50, minimum size =24pt] (ST) at (S){}; %edges \draw[thick] (TT.east)--(s); \draw[thick] (ST.east)--(r); \draw[thick, dashed] (r)--(s); %dots %circles \draw[circle,fill=white] (s) circle [radius=24pt]; \draw[circle,fill=white] (r) circle 
[radius=24pt]; \draw[circle,fill=black] (TT.east) circle [radius=2pt]; \draw[circle,fill=black] (ST.east) circle [radius=2pt]; %labels \node at (s) {\scalebox{1}{$v_{r'}$}}; \node at (r) {\scalebox{1}{$v_r$}}; \node at (S) {\scalebox{0.8}{$S$}}; \node at (T) {\scalebox{0.8}{$T$}}; \end{tikzpicture}}} + \vcenter{\hbox{\begin{tikzpicture}[scale=0.5] %vertices \node (T) at (1.5,3) {}; \node (r) at (0,0) {}; \node (S) at (-1.5,3) {}; %triangles \node[isosceles triangle, isosceles triangle apex angle=60, draw, rotate=270, fill=white!50, minimum size =24pt] (TT) at (T){}; \node[isosceles triangle, isosceles triangle apex angle=60, draw, rotate=270, fill=white!50, minimum size =24pt] (ST) at (S){}; %edges \draw[thick] (TT.east)--(r); \draw[thick] (ST.east)--(r); %dots %circles \draw[circle,fill=white] (r) circle [radius=24pt]; \draw[circle,fill=black] (TT.east) circle [radius=2pt]; \draw[circle,fill=black] (ST.east) circle [radius=2pt]; %labels \node at (r) {\scalebox{1}{$v_r$}}; \node at (S) {\scalebox{0.8}{$S$}}; \node at (T) {\scalebox{0.8}{$T$}}; \end{tikzpicture}}}$$ Here $S$ falls on the vertex $v_r$ of $R$; then either $T$ falls on a vertex of $S$, or $T$ falls on another vertex $v_{r'}$ of $R$, or $T$ falls on the vertex $v_r$ of $R$.\ The dotted edge represents a path between the two vertices that may be longer than one edge. The dotted edge is drawn horizontally if the two vertices could be one below the other, one above the other, or neither of them. **Proposition 4**.
*The fall product is pre-Lie.* *Proof.* We have already computed $(R\star S)\star T$; moreover, we have: $$R\star (S\star T)= \vcenter{\hbox{\begin{tikzpicture}[scale=0.5] %vertices \node (lv) at (0,0) {}; \node (r) at (0,-3) {}; \node (T) at (0,3) {}; %triangles \node[isosceles triangle, isosceles triangle apex angle=60, draw, rotate=270, fill=white!50, minimum size =24pt] (TT) at (T){}; %edges \draw[thick] (TT.east)--(lv); \draw[thick, dashed] (r)--(lv); %dots %circles \draw[circle,fill=white] (lv) circle [radius=24pt]; \draw[circle,fill=white] (r) circle [radius=24pt]; \draw[circle,fill=black] (TT.east) circle [radius=2pt]; %labels \node at (lv) {\scalebox{1}{$v_s$}}; \node at (r) {\scalebox{1}{$v_r$}}; \node at (T) {\scalebox{0.8}{$T$}}; \end{tikzpicture}}}$$ Hence $(R\star S)\star T-R\star (S\star T)$ is symmetric in $S$ and $T$, so the fall product is pre-Lie. ◻ *Remark 2*. The fall product allows us to graft a rooted tree $T$ over another rooted tree $S$ on all possible vertices of $S$. However, as we can see in the above computation, a naive composition of the fall product is not enough to make several trees fall on the same tree.\ Indeed, when computing $(R\star S)\star T$, we have that $S$ falls on $R$, but $T$ can either fall on $S$ or on $R$.\ The solution is to use the symmetric brace products. The symmetric brace products were first introduced by Lada and Markl in [@sBr], and the following formula to get the symmetric brace products from a pre-Lie product was given by Oudom and Guin in [@BRPL]: - $Br(S)=S$ - $Br(S;T)=S\star T$ - $Br(S;T_1,\dots,T_{n+1})=Br(S;T_1,\dots,T_n)\star T_{n+1}- \sum_{i=1}^nBr(S;T_1,\dots,T_i\star T_{n+1},\dots,T_n)$ The symmetric brace product $Br(S;T_1,\dots,T_n)$ is the sum of all possible ways to graft the trees $T_1,\dots,T_n$ on vertices of $S$. It is symmetric in the $T_i$'s. For $FT=\{T_1,\dots,T_n\}$, we will write $Br(S;FT)$.
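The fall product and the Oudom-Guin recursion can be experimented with on small labelled trees. The following sketch (the encoding of a labelled rooted tree as a pair of a label and a frozenset of subtrees is my own convention, not the paper's) checks Proposition 4 and the symmetry of the brace products on examples:

```python
from collections import Counter

def leafc(i):
    """The one-vertex tree with label i, as a formal linear combination."""
    return Counter({(i, frozenset()): 1})

def graft(s, t):
    """Sum (as a Counter) of the graftings of the tree t on each vertex of s."""
    label, children = s
    out = Counter({(label, children | frozenset([t])): 1})  # graft at the root
    for c in children:
        rest = children - frozenset([c])
        for g, m in graft(c, t).items():
            out[(label, rest | frozenset([g]))] += m
    return out

def star(a, b):
    """The fall product, extended bilinearly to linear combinations."""
    out = Counter()
    for s, ms in a.items():
        for t, mt in b.items():
            for g, m in graft(s, t).items():
                out[g] += ms * mt * m
    return out

def sub(a, b):
    out = Counter(a)
    out.subtract(b)  # Counter.subtract keeps negative coefficients
    return out

def norm(a):
    """Drop zero coefficients so linear combinations compare correctly."""
    return {k: v for k, v in a.items() if v != 0}

def associator(a, b, c):
    return sub(star(star(a, b), c), star(a, star(b, c)))

def brace(s, ts):
    """Oudom-Guin recursion computing the symmetric brace product Br(s; ts)."""
    if not ts:
        return Counter(s)
    *head, last = ts
    out = Counter(star(brace(s, head), last))
    for i in range(len(head)):
        out.subtract(brace(s, head[:i] + [star(head[i], last)] + head[i + 1:]))
    return out

# Pre-Lie: the associator is symmetric in its last two arguments.
assert norm(associator(leafc(1), leafc(2), leafc(3))) == \
       norm(associator(leafc(1), leafc(3), leafc(2)))
```

For instance, $Br(\bullet_1;\bullet_2,\bullet_3)$ computed this way is the single corolla with root $1$ and leaves $2$ and $3$: the chain $1\text{-}2\text{-}3$ produced by $(\bullet_1\star\bullet_2)\star\bullet_3$ is cancelled by $\bullet_1\star(\bullet_2\star\bullet_3)$.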
Let us recall from [@AlgOp Definition 5.3.7] that an operad structure on a species can be given by a collection of operations $\circ_i$ of arity $2$, the partial compositions satisfying the sequential and parallel composition axioms.\ Let us define those operations on rooted trees using the symmetric brace products. **Definition 5**. Let $T$ and $S$ be two rooted trees, let $i$ be a label of a vertex of $T$ and $v$ this vertex. $$\vcenter{\hbox{\begin{tikzpicture}[scale=0.5] %vertices \node (T) at (0,0) {}; %triangles \node[isosceles triangle, isosceles triangle apex angle=60, draw, rotate=270, fill=white!50, minimum size =24pt] (TT) at (T){}; %edges %dots %circles \draw[circle,fill=black] (TT.east) circle [radius=2pt]; %labels \node at (T) {\scalebox{0.8}{$T$}}; \end{tikzpicture}}} = \vcenter{\hbox{\begin{tikzpicture}[scale=0.5] %vertices \node (FT) at (0,3) {}; \node (i) at (0,0) {}; \node (TD) at (0,-3) {}; %triangles \node[isosceles triangle, isosceles triangle apex angle=60, draw, rotate=270, fill=white!50, minimum size =24pt] (FTT) at (FT){}; \node[isosceles triangle, isosceles triangle apex angle=60, draw, rotate=270, fill=white!50, minimum size =24pt] (TDT) at (TD){}; %edges \draw[thick,double] (FTT.east)--(i); \draw[thick] (TDT.west)--(i); %dots %circles \draw[circle,fill=white] (i) circle [radius=24pt]; \draw[circle,fill=black] (FTT.east) circle [radius=2pt]; \draw[circle,fill=black] (TDT.east) circle [radius=2pt]; %labels \node at (i) {\scalebox{1}{$v$}}; \node at (FT) {\scalebox{0.8}{$FT$}}; \node at (TD) {\scalebox{0.8}{$B$}}; \end{tikzpicture}}}$$ Let consider $U=Br(S;FT)$, then the partial composition $\circ_i$ is defined by: $$T\circ_i S = \vcenter{\hbox{\begin{tikzpicture}[scale=0.5] %vertices \node (T) at (0,0) {}; \node (i) at (0,3) {}; \node (S) at (0,6) {}; %triangles \node[isosceles triangle, isosceles triangle apex angle=60, draw, rotate=270, fill=white!50, minimum size =24pt] (TT) at (T){}; \node[isosceles triangle, isosceles triangle apex 
angle=60, draw, rotate=270, fill=white!50, minimum size =24pt] (ST) at (S){}; %edges \draw[thick] (TT.west)--(ST.east); %dots %circles \draw[circle,fill=black] (TT.east) circle [radius=2pt]; \draw[circle,fill=black] (ST.east) circle [radius=2pt]; \draw[circle,fill=white] (i) circle [radius=24pt]; \node[cross=13pt] at (i) {}; %labels \node at (T) {\scalebox{0.8}{$B$}}; \node at (i) {\scalebox{0.8}{$v$}}; \node at (S) {\scalebox{0.8}{$U$}}; \end{tikzpicture}}}$$ The children of $v$ fall on $S$, then the result of this symmetric brace product is grafted on $v$, and finally the vertex $v$ is removed.\ This is exactly the insertion of $S$ in $T$ at the vertex $v$, with all the children of $v$ falling on $S$.\ Although this construction is not exactly the construction detailed in [@PreLie], it is equivalent to it. Hence those partial compositions satisfy the sequential and parallel composition axioms. *Remark 3*. When defining partial compositions on a species $P$, we define $$\circ_i:\mathcal{P}(A\sqcup\{i\})\otimes\mathcal{P}(B)\rightarrow\mathcal{P}(A\sqcup B),$$ where $A$ and $B$ are disjoint sets.\ It is equivalent to the definition with $\circ_i:\mathcal{P}(n)\otimes\mathcal{P}(m)\rightarrow\mathcal{P}(n+m-1)$ which involves some renumbering. Let us recall the following result: **Theorem 1**. *[@PreLie Theorem 1.9] Let $\mathcal{RT}$ be the species of rooted trees. The operad $(\mathcal{RT},\{\circ_i\})$ with the partial compositions defined as above is isomorphic to the operad $\mathrm{PreLie}$. 
Moreover the isomorphism is $\vcenter{\hbox{\begin{tikzpicture}[scale=0.2] %vertices \node (1) at (0,0) {}; \node (2) at (0,3) {}; %triangles %edges \draw[thick] (1)--(2); %dots %circles \draw[circle,fill=white] (1) circle [radius=24pt]; \draw[circle,fill=white] (2) circle [radius=24pt]; %labels \node at (1) {\scalebox{0.8}{$1$}}; \node at (2) {\scalebox{0.8}{$2$}}; \end{tikzpicture}}}\mapsto x$ with $x$ the generator of $\mathrm{PreLie}$.* # Rooted Greg trees and the Greg operad {#seq:RGTO} Greg trees were introduced by Greg [@Hgreg] and Maas [@Hmaas] in order to state a combinatorial problem arising from textual criticism, which was solved by Flight in [@Greg].\ **Definition 6**. A *rooted Greg tree* $T$ is a rooted tree with two colors of vertices, black and white, such that: - the white vertices are in bijection with a finite set of labels $I$; the number of white vertices is the *arity* of $T$; - the black vertices are unlabeled and have at least two children; the number of black vertices is the *weight* of $T$. Let us introduce the following notation: - $WV(T)$ is the set of white vertices of $T$, $BV(T)$ the set of black vertices of $T$ and $V(T)$ the set of vertices of $T$. - $G_k(n)$ is the set of Greg trees of arity $n$ with white vertices labeled by $\{1,\dots,n\}$ and weight $k$. - $G(n)$ is the set of Greg trees of arity $n$ with white vertices labeled by $\{1,\dots,n\}$. - $\mathcal{G}_k(n)$ is the vector space spanned by $G_k(n)$, $\mathcal{G}(n)$ the vector space spanned by $G(n)$, and $\mathcal{G}$ the species of Greg trees.
The action of the symmetric group on rooted Greg trees is the natural one, permuting the labels of the white vertices.\ $$\vcenter{\hbox{\begin{tikzpicture}[scale=0.7] \draw[thick] (0,0)--(0,2); \draw[fill=white, thick] (0,0) circle [radius=15pt]; \draw[fill=white, thick] (0,2) circle [radius=15pt]; \node at (0,0) {\scalebox{1}{1}}; \node at (0,2) {\scalebox{1}{2}}; \end{tikzpicture}}} \qquad \vcenter{\hbox{\begin{tikzpicture}[scale=0.7] \draw[thick] (0,0)--(0,2); \draw[fill=white, thick] (0,0) circle [radius=15pt]; \draw[fill=white, thick] (0,2) circle [radius=15pt]; \node at (0,0) {\scalebox{1}{2}}; \node at (0,2) {\scalebox{1}{1}}; \end{tikzpicture}}} \qquad \vcenter{\hbox{\begin{tikzpicture}[scale=0.7] \draw[thick] (0,0.5)--(-1,2); \draw[thick] (0,0.5)--(1,2); \draw[fill=black, thick] (0,0.5) circle [radius=15pt]; \draw[fill=white, thick] (-1,2) circle [radius=15pt]; \draw[fill=white, thick] (1,2) circle [radius=15pt]; \node at (0,0.5) { }; \node at (-1,2) {\scalebox{1}{1}}; \node at (1,2) {\scalebox{1}{2}}; \end{tikzpicture}}}$$ *Remark 4*. The condition that the black vertices have at least two children ensures that $G(n)$ is finite.\ Another important fact is that $(G_0(n))_{n\in\mathbb{N}}$ is the species of rooted trees. Hence, in order to define an operad structure on the rooted Greg trees, one may try to mimic and generalize the construction on rooted trees of [@PreLie]. **Proposition 7**. *Let $f_\mathcal{G}(t)$ be the exponential generating series of $\mathcal{G}$. Then $f_\mathcal{G}(t)$ is the inverse under composition of $(2t+1)\exp(-t)-1$.* Let us naively generalize the construction described in the last section to the rooted Greg trees.\ Definition [Definition 3](#dfn:fall){reference-type="ref" reference="dfn:fall"} of the fall product is the same; it is straightforward to check that it is a pre-Lie product on $\mathcal{G}$. Definition [Definition 5](#dfn:inf){reference-type="ref" reference="dfn:inf"} of partial compositions is also the same.
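As a sanity check (not from the paper), Proposition 7 can be verified against the dimension table above with a short computer algebra computation; a sketch using SymPy:

```python
import sympy as sp

t = sp.symbols('t')
N = 5  # compute dimensions up to arity 5

# Series expansion of g(t) = (2t+1)exp(-t) - 1, the compositional inverse of
# the exponential generating series of rooted Greg trees (Proposition 7).
g = sp.expand(sp.series((2*t + 1)*sp.exp(-t) - 1, t, 0, N + 1).removeO())

# Invert g under composition degree by degree: if g(f) = t + e*t^d + O(t^(d+1)),
# replacing f by f - e*t^d kills the degree-d error, since g'(0) = 1.
f = t
for d in range(2, N + 1):
    comp = sp.expand(sp.series(g.subs(t, f), t, 0, d + 1).removeO())
    f = sp.expand(f - comp.coeff(t, d)*t**d)

# Numbers of rooted Greg trees with n labelled white vertices (A005264).
dims = [f.coeff(t, n)*sp.factorial(n) for n in range(1, N + 1)]
print(dims)  # [1, 3, 22, 262, 4336]
```

The first four values agree with the dimensions $1$, $3$, $22$, $262$ computed in the table of Section [1](#seq:recall){reference-type="ref" reference="seq:recall"}.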
However, one may want to check that these partial compositions satisfy the sequential and parallel composition axioms stated in [@AlgOp Definition 5.3.7]. **Proposition 8**. *The partial compositions on $\mathcal{G}$ satisfy the sequential and parallel composition axioms.* *Proof.* *Parallel composition axiom:*\ Let us compute $(T\circ_i S)\circ_j R$ in the case where $i$ and $j$ are the labels of $v_i$ and $v_j$, which are white vertices of $T$. Let us write $T$ in the following way: $$T= \vcenter{\hbox{\begin{tikzpicture}[scale=0.5] %vertices \node (T) at (0,0) {}; \node (i) at (-2,2.5) {}; \node (j) at (2,2.5) {}; \node (S) at (2,6) {}; \node (R) at (-2,6) {}; %triangles \node[isosceles triangle, isosceles triangle apex angle=60, draw, rotate=270, fill=white!50, minimum size =24pt] (TT) at (T){}; \node[isosceles triangle, isosceles triangle apex angle=60, draw, rotate=270, fill=white!50, minimum size =32pt] (ST) at (S){}; \node[isosceles triangle, isosceles triangle apex angle=60, draw, rotate=270, fill=white!50, minimum size =32pt] (RT) at (R){}; %edges \draw[thick] (TT.west)--(i); \draw[thick] (TT.west)--(j); \draw[thick,double] (RT.east)--(i); \draw[thick,double] (ST.east)--(j); %dots %circles \draw[circle,fill=white] (i) circle [radius=24pt]; \draw[circle,fill=white] (j) circle [radius=24pt]; \draw[circle,fill=black] (TT.east) circle [radius=2pt]; \draw[circle,fill=black] (ST.east) circle [radius=2pt]; \draw[circle,fill=black] (RT.east) circle [radius=2pt]; %labels \node at (T) {\scalebox{0.8}{$T_0$}}; \node at (R) {\scalebox{0.8}{$FT^{(i)}$}}; \node at (i) {\scalebox{0.8}{$v_i$}}; \node at (j) {\scalebox{0.8}{$v_j$}}; \node at (S) {\scalebox{0.8}{$FT^{(j)}$}}; \end{tikzpicture}}}$$ Applying the definition of the partial composition, we get: $$(T\circ_i S)\circ_j R = \vcenter{\hbox{\begin{tikzpicture}[scale=0.5] %vertices \node (T) at (0,0) {}; \node (i) at (-2,2.5) {}; \node (j) at (2,2.5) {}; \node (S) at (2,6) {}; \node (R) at (-2,6) {}; %triangles \node[isosceles triangle,
isosceles triangle apex angle=60, draw, rotate=270, fill=white!50, minimum size =24pt] (TT) at (T){}; \node[isosceles triangle, isosceles triangle apex angle=60, draw, rotate=270, fill=white!50, minimum size =24pt] (ST) at (S){}; \node[isosceles triangle, isosceles triangle apex angle=60, draw, rotate=270, fill=white!50, minimum size =24pt] (RT) at (R){}; %edges \draw[thick] (TT.west)--(i); \draw[thick] (TT.west)--(j); \draw[thick] (RT.east)--(i); \draw[thick] (ST.east)--(j); %dots %circles \draw[circle,fill=white] (i) circle [radius=24pt]; \draw[circle,fill=white] (j) circle [radius=24pt]; \draw[circle,fill=black] (TT.east) circle [radius=2pt]; \draw[circle,fill=black] (ST.east) circle [radius=2pt]; \draw[circle,fill=black] (RT.east) circle [radius=2pt]; \node[cross=13pt] at (i) {}; \node[cross=13pt] at (j) {}; %labels \node at (T) {\scalebox{0.8}{$T_0$}}; \node at (R) {\scalebox{0.8}{$U$}}; \node at (i) {\scalebox{0.8}{$v_i$}}; \node at (j) {\scalebox{0.8}{$v_j$}}; \node at (S) {\scalebox{0.8}{$V$}}; \end{tikzpicture}}} = (T\circ_j R)\circ_i S$$ with $U=Br(S;FT^{(i)})$ and $V=Br(R;FT^{(j)})$.\ The parallel axiom is verified.\ *Sequential composition axiom:*\ Let us compute $T\circ_i (S\circ_j R)$ in the case where $i$ is the label of $v_i$ a white vertex of $T$ and $j$ is the label of $v_j$ a white vertex of $S$. 
Let us write: $$T= \vcenter{\hbox{\begin{tikzpicture}[scale=0.5] %vertices \node (T) at (0,0) {}; \node (i) at (0,3) {}; \node (R) at (0,6) {}; %triangles \node[isosceles triangle, isosceles triangle apex angle=60, draw, rotate=270, fill=white!50, minimum size =24pt] (TT) at (T){}; \node[isosceles triangle, isosceles triangle apex angle=60, draw, rotate=270, fill=white!50, minimum size =24pt] (RT) at (R){}; %edges \draw[thick] (TT.west)--(i); \draw[thick,double] (RT.east)--(i); %dots %circles \draw[circle,fill=white] (i) circle [radius=24pt]; \draw[circle,fill=black] (RT.east) circle [radius=2pt]; \draw[circle,fill=black] (TT.east) circle [radius=2pt]; %labels \node at (T) {\scalebox{0.8}{$T_0$}}; \node at (R) {\scalebox{0.8}{$FT$}}; \node at (i) {\scalebox{0.8}{$v_i$}}; \end{tikzpicture}}} \qquad \text{and} \qquad S= \vcenter{\hbox{\begin{tikzpicture}[scale=0.5] %vertices \node (T) at (0,0) {}; \node (i) at (0,3) {}; \node (R) at (0,6) {}; %triangles \node[isosceles triangle, isosceles triangle apex angle=60, draw, rotate=270, fill=white!50, minimum size =24pt] (TT) at (T){}; \node[isosceles triangle, isosceles triangle apex angle=60, draw, rotate=270, fill=white!50, minimum size =24pt] (RT) at (R){}; %edges \draw[thick] (TT.west)--(i); \draw[thick,double] (RT.east)--(i); %dots %circles \draw[circle,fill=white] (i) circle [radius=24pt]; \draw[circle,fill=black] (RT.east) circle [radius=2pt]; \draw[circle,fill=black] (TT.east) circle [radius=2pt]; %labels \node at (T) {\scalebox{0.8}{$S_0$}}; \node at (R) {\scalebox{0.8}{$FS$}}; \node at (i) {\scalebox{0.8}{$v_j$}}; \end{tikzpicture}}}$$ Then we have: $$T\star S= \vcenter{\hbox{\begin{tikzpicture}[scale=0.5] %vertices \node (T) at (0,-3) {}; \node (c) at (0,0) {}; \node (i) at (0,6) {}; \node (R) at (0,3) {}; \node (FU) at (-2,9) {}; \node (FT) at (2,9) {}; %triangles \node[isosceles triangle, isosceles triangle apex angle=60, draw, rotate=270, fill=white!50, minimum size =24pt] (TT) at (T){}; \node[isosceles 
triangle, isosceles triangle apex angle=60, draw, rotate=270, fill=white!50, minimum size =24pt] (RT) at (R){}; \node[isosceles triangle, isosceles triangle apex angle=60, draw, rotate=270, fill=white!50, minimum size =32pt] (FUT) at (FU){}; \node[isosceles triangle, isosceles triangle apex angle=60, draw, rotate=270, fill=white!50, minimum size =32pt] (FTT) at (FT){}; %edges \draw[thick] (TT.west)--(c); \draw[thick] (RT.east)--(c); \draw[thick] (RT.west)--(i); \draw[thick,double] (FUT.east)--(i); \draw[thick,double] (FTT.east)--(i); %dots %circles \draw[circle,fill=white] (i) circle [radius=24pt]; \draw[circle,fill=white] (c) circle [radius=24pt]; \draw[circle,fill=black] (RT.east) circle [radius=2pt]; \draw[circle,fill=black] (TT.east) circle [radius=2pt]; \draw[circle,fill=black] (FUT.east) circle [radius=2pt]; \draw[circle,fill=black] (FTT.east) circle [radius=2pt]; \node[cross=13] at (c) {}; %labels \node at (T) {\scalebox{0.8}{$T_0$}}; \node at (R) {\scalebox{0.8}{$U_0$}}; \node at (FU) {\scalebox{0.8}{$FU$}}; \node at (FT) {\scalebox{0.8}{$FT_{k+1}$}}; \node at (i) {\scalebox{0.8}{$v_j$}}; \node at (c) {\scalebox{0.8}{$v_i$}}; \end{tikzpicture}}}$$ With $FS=(S_1,\dots,S_k)$, $FU=(U_1,\dots,U_k)$, $U_l=Br(S_l;FT_l)$ and $FT=\bigsqcup_{l=0}^{k+1} FT_l$ for every possible such decomposition of $FT$ in (possibly empty) sub-forest. 
$$(T\star S)\star R= \vcenter{\hbox{\begin{tikzpicture}[scale=0.5] %vertices \node (T) at (0,-3) {}; \node (R) at (0,9) {}; \node (S) at (0,3) {}; \node (i) at (0,0) {}; \node (j) at (0,6) {}; %triangles \node[isosceles triangle, isosceles triangle apex angle=60, draw, rotate=270, fill=white!50, minimum size =24pt] (TT) at (T){}; \node[isosceles triangle, isosceles triangle apex angle=60, draw, rotate=270, fill=white!50, minimum size =24pt] (RT) at (R){}; \node[isosceles triangle, isosceles triangle apex angle=60, draw, rotate=270, fill=white!50, minimum size =24pt] (ST) at (S){}; %edges \draw[thick] (TT.west)--(i); \draw[thick] (i)--(ST.east); \draw[thick] (j)--(ST.west); \draw[thick] (RT.east)--(j); %dots %circles \draw[circle,fill=white] (i) circle [radius=24pt]; \draw[circle,fill=white] (j) circle [radius=24pt]; \draw[circle,fill=black] (RT.east) circle [radius=2pt]; \draw[circle,fill=black] (TT.east) circle [radius=2pt]; \draw[circle,fill=black] (ST.east) circle [radius=2pt]; \node[cross=13] at (i) {}; \node[cross=13] at (j) {}; %labels \node at (T) {\scalebox{0.8}{$T_0$}}; \node at (S) {\scalebox{0.8}{$U_0$}}; \node at (i) {\scalebox{0.8}{$v_i$}}; \node at (j) {\scalebox{0.8}{$v_j$}}; \node at (R) {\scalebox{0.8}{$V$}}; \end{tikzpicture}}}$$ With $V=Br(R;FU\cup FT_{k+1})$.\ Let us compute $T\star (S\star R)$: $$T\star (S\star R)= \vcenter{\hbox{\begin{tikzpicture}[scale=0.5] %vertices \node (T) at (0,-3) {}; \node (R) at (0,9) {}; \node (S) at (0,3) {}; \node (i) at (0,0) {}; \node (j) at (0,6) {}; %triangles \node[isosceles triangle, isosceles triangle apex angle=60, draw, rotate=270, fill=white!50, minimum size =24pt] (TT) at (T){}; \node[isosceles triangle, isosceles triangle apex angle=60, draw, rotate=270, fill=white!50, minimum size =24pt] (RT) at (R){}; \node[isosceles triangle, isosceles triangle apex angle=60, draw, rotate=270, fill=white!50, minimum size =24pt] (ST) at (S){}; %edges \draw[thick] (TT.west)--(i); \draw[thick] (i)--(ST.east);
\draw[thick] (j)--(ST.west); \draw[thick] (RT.east)--(j); %dots %circles \draw[circle,fill=white] (i) circle [radius=24pt]; \draw[circle,fill=white] (j) circle [radius=24pt]; \draw[circle,fill=black] (RT.east) circle [radius=2pt]; \draw[circle,fill=black] (TT.east) circle [radius=2pt]; \draw[circle,fill=black] (ST.east) circle [radius=2pt]; \node[cross=13] at (i) {}; \node[cross=13] at (j) {}; %labels \node at (T) {\scalebox{0.8}{$T_0$}}; \node at (S) {\scalebox{0.8}{$U_0$}}; \node at (i) {\scalebox{0.8}{$v_i$}}; \node at (j) {\scalebox{0.8}{$v_j$}}; \node at (R) {\scalebox{0.8}{$W$}}; \end{tikzpicture}}}$$ With $W=Br(Br(R;FS);\widetilde{FT_1})$ and $U_0=Br(S_0;FT_0)$, and $FT=FT_0\sqcup \widetilde{FT_1}$ for every possible such decomposition of $FT$ into (possibly empty) sub-forests. To conclude, one needs to remark that $V=W$. Indeed, in the definition of $W$, the forest $FS$ falls on $R$, then $\widetilde{FT_1}$ falls on the result; while in the definition of $V$, some trees of $\widetilde{FT_1}$ fall on some trees of $FS$, and the resulting forest falls on $R$. These two operations give the same result. ◻ **Definition 9**.
Let $\mathrm{Greg}$ be the operad $(\mathcal{G},\{\circ_i\})$.\ Let us write $$x=\vcenter{\hbox{\begin{tikzpicture}[scale=0.2] %vertices \node (1) at (0,0) {}; \node (2) at (0,3) {}; %triangles %edges \draw[thick] (1)--(2); %dots %circles \draw[circle,fill=white] (1) circle [radius=24pt]; \draw[circle,fill=white] (2) circle [radius=24pt]; %labels \node at (1) {\scalebox{0.8}{$1$}}; \node at (2) {\scalebox{0.8}{$2$}}; \end{tikzpicture}}}\qquad y=\vcenter{\hbox{\begin{tikzpicture}[scale=0.2] %vertices \node (1) at (0,0) {}; \node (2) at (0,3) {}; %triangles %edges \draw[thick] (1)--(2); %dots %circles \draw[circle,fill=white] (1) circle [radius=24pt]; \draw[circle,fill=white] (2) circle [radius=24pt]; %labels \node at (2) {\scalebox{0.8}{$1$}}; \node at (1) {\scalebox{0.8}{$2$}}; \end{tikzpicture}}}\qquad g=\vcenter{\hbox{\begin{tikzpicture}[scale=0.2] %vertices \node (1) at (-1.5,3) {}; \node (2) at (1.5,3) {}; \node (c) at (0,0) {}; %triangles %edges \draw[thick] (c)--(1); \draw[thick] (c)--(2); %dots %circles \draw[circle,fill=black] (c) circle [radius=24pt]; \draw[circle,fill=white] (1) circle [radius=24pt]; \draw[circle,fill=white] (2) circle [radius=24pt]; %labels \node at (1) {\scalebox{0.8}{$1$}}; \node at (2) {\scalebox{0.8}{$2$}}; \end{tikzpicture}}}$$ We know from Section [2](#seq:RT){reference-type="ref" reference="seq:RT"} that $x$ verifies the pre-Lie relation.
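The fact that $x$ verifies the pre-Lie relation can also be checked mechanically on small trees. The following Python sketch (our illustration only; the tree encoding and the names `graft`, `star`, `associator` are ours, not the paper's) implements the grafting (fall) product on labelled rooted trees without black vertices and checks that its associator is symmetric in its last two arguments:

```python
# A rooted tree is encoded as (label, children) with children a sorted tuple,
# so that equal trees compare equal. Linear combinations are dicts tree -> coeff.

def _add(d, k, c):
    d[k] = d.get(k, 0) + c
    if d[k] == 0:
        del d[k]

def graft(S, T):
    """Sum, over all vertices v of S, of the tree obtained by attaching T below v."""
    label, children = S
    out = {}
    # attach T at the root of S
    _add(out, (label, tuple(sorted(children + (T,)))), 1)
    # attach T at a vertex of the i-th subtree
    for i, child in enumerate(children):
        for sub, c in graft(child, T).items():
            new_children = list(children)
            new_children[i] = sub
            _add(out, (label, tuple(sorted(new_children))), c)
    return out

def star(A, B):
    """Bilinear extension of graft to linear combinations."""
    out = {}
    for s, cs in A.items():
        for t, ct in B.items():
            for r, cr in graft(s, t).items():
                _add(out, r, cs * ct * cr)
    return out

def associator(A, B, C):
    out = dict(star(star(A, B), C))
    for r, c in star(A, star(B, C)).items():
        _add(out, r, -c)
    return out

leaf = lambda i: {(i, ()): 1}
a, b, c = leaf(1), leaf(2), leaf(3)
# pre-Lie relation: the associator is symmetric in its last two arguments
assert associator(a, b, c) == associator(a, c, b)
```

The same encoding would extend to rooted Greg trees by carrying black vertices, at the cost of extra bookkeeping for the at-least-two-children condition.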
Let us introduce the following notation: $$x_n= \vcenter{\hbox{\begin{tikzpicture}[scale=0.2] %vertices \node (T) at (0,0) {}; \node (S1) at (3,3) {}; \node (S2) at (0,3) {}; \node (S3) at (-3,3) {}; %edges \draw[thick] (T)--(S1); \draw[thick] (T)--(S3); %dots \node at (S2) {$\cdots$}; %circles \draw[circle,fill=white] (S1) circle [radius=24pt]; \draw[circle,fill=white] (S3) circle [radius=24pt]; \draw[circle,fill=white] (T) circle [radius=24pt]; %triangles %labels \node at (T) {\scalebox{0.8}{$1$}}; \node at (S3) {\scalebox{0.8}{$2$}}; \node at (S1) {\scalebox{0.8}{$n$}}; \end{tikzpicture}}} \qquad g_n= \vcenter{\hbox{\begin{tikzpicture}[scale=0.2] %vertices \node (T) at (0,0) {}; \node (S1) at (3,3) {}; \node (S2) at (0,3) {}; \node (S3) at (-3,3) {}; %edges \draw[thick] (T)--(S1); \draw[thick] (T)--(S3); %dots \node at (S2) {$\cdots$}; %circles \draw[circle,fill=white] (S1) circle [radius=24pt]; \draw[circle,fill=white] (S3) circle [radius=24pt]; \draw[circle,fill=black] (T) circle [radius=24pt]; %triangles %labels \node at (S3) {\scalebox{0.8}{$1$}}; \node at (S1) {\scalebox{0.8}{$n$}}; \end{tikzpicture}}}$$ *Example 5*. Let us compute $x\circ_1 g$. 
Because of the renumbering that we have been ignoring, we have to compute $\vcenter{\hbox{\begin{tikzpicture}[scale=0.2] %vertices \node (1) at (0,0) {}; \node (2) at (0,3) {}; %triangles %edges \draw[thick] (1)--(2); %dots %circles \draw[circle,fill=white] (1) circle [radius=24pt]; \draw[circle,fill=white] (2) circle [radius=24pt]; %labels \node at (1) {\scalebox{0.8}{$1$}}; \node at (2) {\scalebox{0.8}{$3$}}; \end{tikzpicture}}}\circ_1\vcenter{\hbox{\begin{tikzpicture}[scale=0.2] %vertices \node (1) at (-1.5,3) {}; \node (2) at (1.5,3) {}; \node (c) at (0,0) {}; %triangles %edges \draw[thick] (c)--(1); \draw[thick] (c)--(2); %dots %circles \draw[circle,fill=black] (c) circle [radius=24pt]; \draw[circle,fill=white] (1) circle [radius=24pt]; \draw[circle,fill=white] (2) circle [radius=24pt]; %labels \node at (1) {\scalebox{0.8}{$1$}}; \node at (2) {\scalebox{0.8}{$2$}}; \end{tikzpicture}}}$. $$\vcenter{\hbox{\begin{tikzpicture}[scale=0.2] %vertices \node (1) at (0,0) {}; \node (2) at (0,3) {}; %triangles %edges \draw[thick] (1)--(2); %dots %circles \draw[circle,fill=white] (1) circle [radius=24pt]; \draw[circle,fill=white] (2) circle [radius=24pt]; %labels \node at (1) {\scalebox{0.8}{$1$}}; \node at (2) {\scalebox{0.8}{$3$}}; \end{tikzpicture}}}\circ_1\vcenter{\hbox{\begin{tikzpicture}[scale=0.2] %vertices \node (1) at (-1.5,3) {}; \node (2) at (1.5,3) {}; \node (c) at (0,0) {}; %triangles %edges \draw[thick] (c)--(1); \draw[thick] (c)--(2); %dots %circles \draw[circle,fill=black] (c) circle [radius=24pt]; \draw[circle,fill=white] (1) circle [radius=24pt]; \draw[circle,fill=white] (2) circle [radius=24pt]; %labels \node at (1) {\scalebox{0.8}{$1$}}; \node at (2) {\scalebox{0.8}{$2$}}; \end{tikzpicture}}}= \vcenter{\hbox{\begin{tikzpicture}[scale=0.2] %vertices \node (1) at (-1.5,3) {}; \node (2) at (1.5,3) {}; \node (3) at (-1.5,6) {}; \node (c) at (0,0) {}; %triangles %edges \draw[thick] (c)--(1); \draw[thick] (c)--(2); \draw[thick] (1)--(3); %dots %circles 
\draw[circle,fill=black] (c) circle [radius=24pt]; \draw[circle,fill=white] (1) circle [radius=24pt]; \draw[circle,fill=white] (2) circle [radius=24pt]; \draw[circle,fill=white] (3) circle [radius=24pt]; %labels \node at (1) {\scalebox{0.8}{$1$}}; \node at (2) {\scalebox{0.8}{$2$}}; \node at (3) {\scalebox{0.8}{$3$}}; \end{tikzpicture}}}+ \vcenter{\hbox{\begin{tikzpicture}[scale=0.2] %vertices \node (1) at (-1.5,3) {}; \node (2) at (1.5,3) {}; \node (3) at (1.5,6) {}; \node (c) at (0,0) {}; %triangles %edges \draw[thick] (c)--(1); \draw[thick] (c)--(2); \draw[thick] (2)--(3); %dots %circles \draw[circle,fill=black] (c) circle [radius=24pt]; \draw[circle,fill=white] (1) circle [radius=24pt]; \draw[circle,fill=white] (2) circle [radius=24pt]; \draw[circle,fill=white] (3) circle [radius=24pt]; %labels \node at (1) {\scalebox{0.8}{$1$}}; \node at (2) {\scalebox{0.8}{$2$}}; \node at (3) {\scalebox{0.8}{$3$}}; \end{tikzpicture}}}+ \vcenter{\hbox{\begin{tikzpicture}[scale=0.2] %vertices \node (1) at (-2,3) {}; \node (2) at (0,3) {}; \node (3) at (2,3) {}; \node (c) at (0,0) {}; %triangles %edges \draw[thick] (c)--(1); \draw[thick] (c)--(2); \draw[thick] (c)--(3); %dots %circles \draw[circle,fill=black] (c) circle [radius=24pt]; \draw[circle,fill=white] (1) circle [radius=24pt]; \draw[circle,fill=white] (2) circle [radius=24pt]; \draw[circle,fill=white] (3) circle [radius=24pt]; %labels \node at (1) {\scalebox{0.8}{$1$}}; \node at (2) {\scalebox{0.8}{$2$}}; \node at (3) {\scalebox{0.8}{$3$}}; \end{tikzpicture}}}$$ Hence we have $x\circ_1 g- (g\circ_1 x).(2\;3) - g\circ_2 x = \vcenter{\hbox{\begin{tikzpicture}[scale=0.2] %vertices \node (1) at (-2,3) {}; \node (2) at (0,3) {}; \node (3) at (2,3) {}; \node (c) at (0,0) {}; %triangles %edges \draw[thick] (c)--(1); \draw[thick] (c)--(2); \draw[thick] (c)--(3); %dots %circles \draw[circle,fill=black] (c) circle [radius=24pt]; \draw[circle,fill=white] (1) circle [radius=24pt]; \draw[circle,fill=white] (2) circle [radius=24pt]; 
\draw[circle,fill=white] (3) circle [radius=24pt]; %labels \node at (1) {\scalebox{0.8}{$1$}}; \node at (2) {\scalebox{0.8}{$2$}}; \node at (3) {\scalebox{0.8}{$3$}}; \end{tikzpicture}}}$. One may recognize the Leibniz rule in the left-hand side of the equation. Moreover, since the right-hand side is symmetric, we have $x\circ_1 g- (g\circ_1 x).(2\;3) - g\circ_2 x = (x\circ_1 g- (g\circ_1 x).(2\;3) - g\circ_2 x).(2\;3)$. Let us call this relation the Greg relation.\ *Remark 6*. The element $g_3$ encodes the failure of the Leibniz rule in the operad $\mathrm{Greg}$, just as the element $x_3$ encodes the failure of the associativity relation. **Proposition 10**. *The operad $\mathrm{Greg}$ is binary.* *Proof.* Let $\mathcal{P}(x,y,g)$ be the suboperad of $\mathrm{Greg}$ generated by $x$, $y$ and $g$. We have to show that $\mathcal{P}(x,y,g)=\mathrm{Greg}$. Let us prove it by induction on the arity.\ *Base case:* By definition $\mathcal{P}(x,y,g)(2)=\mathrm{Greg}(2)$.\ *Induction step:* Let $n\geq 2$ and suppose that $\mathcal{P}(x,y,g)(k)=\mathrm{Greg}(k)$ for all $k\leq n$. We have to show that $\mathcal{P}(x,y,g)(n+1)=\mathrm{Greg}(n+1)$.\ Computing $x\circ_1x_n$ and $x\circ_1g_n$ shows that $x_{n+1}\in \mathcal{P}(x,y,g)(n+1)$ and $g_{n+1}\in \mathcal{P}(x,y,g)(n+1)$. Since we can obtain any rooted Greg tree by inductively composing corollas into the leaves of smaller trees, we have $\mathrm{Greg}(n+1)= \mathcal{P}(x,y,g)(n+1)$.\ Hence, by induction, $\mathcal{P}(x,y,g)=\mathrm{Greg}$. ◻ We want to prove that $\mathrm{Greg}$ is quadratic in order to obtain a quadratic presentation. To do so, we will introduce a quadratic operad $\mathrm{Greg}'$, show that $\mathrm{Greg}'$ is Koszul, and use the dimensions of its components to show that $\mathrm{Greg}'$ is isomorphic to $\mathrm{Greg}$. **Definition 11**.
Let $\mathrm{Greg}'$ be the quotient of the free operad generated by $\tilde{x}$ and $\tilde{g}$ such that $\tilde{x}$ has no symmetry and $\tilde{g}.(1\;2) =\tilde{g}$, by the following relations: - $(\tilde{x}\circ_1\tilde{x}-\tilde{x}\circ_2\tilde{x}) - (\tilde{x}\circ_1\tilde{x}-\tilde{x}\circ_2\tilde{x}).(2\;3)$ - $(\tilde{x}\circ_1 \tilde{g}- (\tilde{g}\circ_1 \tilde{x}).(2\;3) - \tilde{g}\circ_2 \tilde{x}) - (\tilde{x}\circ_1 \tilde{g}- (\tilde{g}\circ_1 \tilde{x}).(2\;3) - \tilde{g}\circ_2 \tilde{x}).(2\;3)$ The first relation is the pre-Lie relation and the second one is the Greg relation. Since $\mathrm{Greg}$ is binary and these relations are satisfied in $\mathrm{Greg}$, we have a surjective morphism of operads $\mathrm{Greg}'\twoheadrightarrow \mathrm{Greg}$. *Remark 7*. From this definition, using the formalism of shuffle operads that will be recalled shortly, it would already be possible to prove that $\mathrm{Greg}'$ is Koszul. However, the dimensions of the components of its Koszul dual are much simpler, hence we will compute and work with the Koszul dual of $\mathrm{Greg}'$. **Definition 12**. Let $(\mathrm{Greg}')^!$ be the quotient of the free operad generated by $x_*$ and $g_*$ such that $x_*$ has no symmetry and $g_*.(1\;2)=-g_*$, by the following relations: $$x_*\circ_1x_*-x_*\circ_2x_* \qquad ; \qquad x_*\circ_1x_*-(x_*\circ_1x_*).(2\;3)$$ $$x_*\circ_1g_*-g_*\circ_2x_* \qquad ; \qquad x_*\circ_1g_*-(g_*\circ_1x_*).(2\;3)$$ $$x_*\circ_1g_*+(x_*\circ_1g_*).(1\;2\;3)+(x_*\circ_1g_*).(1\;3\;2)$$ $$x_*\circ_2g_* \qquad ; \qquad g_*\circ_1g_*$$ This is the Koszul dual of $\mathrm{Greg}'$. The explicit method to compute a presentation of the Koszul dual of a binary operad is detailed in [@AlgOp Subsection 7.6]. The first two relations are associativity and permutativity. This operad is not a set operad because of the last three relations.
Let us give an example of a $(\mathrm{Greg}')^!$-algebra that allows us to show that $\dim((\mathrm{Greg}')^!(4))\geq 7$.\ **Definition 13**. Let $\chi$ be a finite alphabet and $\mathbf{W}(\chi)$ be the span of finite words on $\chi$ with the following extra decorations: either one letter is pointed with a dot, or there is an arrow from one letter to another.\ Let $W(\chi)$ be the quotient of $\mathbf{W}(\chi)$ by the following relations: letters commute with each other (the dot and the arrow follow the letter), reversing the arrow changes the sign, and $\overset{\curvearrowright}{ab}cv=\overset{\curvearrowright}{cb}av+\overset{\curvearrowright}{ac}bv$ for any $a,b,c\in \chi$ and $v$ a finite word. Since the letters commute, let us write the elements of $W(\chi)$ with the pointed letter (or arrowed letters) at the start.\ Let $\settowidth{\myheight}{x}% \raisebox{-.1\myheight}{\tikz[baseline=(char.base)]{% \node[shape=circle,draw,minimum size=\myheight*\myheight*.3,inner sep=0.5pt](char){$x$};}}$ and $\settowidth{\myheight}{g}% \raisebox{-.1\myheight}{\tikz[baseline=(char.base)]{% \node[shape=circle,draw,minimum size=\myheight*\myheight*.3,inner sep=0.5pt](char){$g$};}}$ be the products on $W(\chi)$ defined by: - $\dot{a}v\settowidth{\myheight}{x}% \raisebox{-.1\myheight}{\tikz[baseline=(char.base)]{% \node[shape=circle,draw,minimum size=\myheight*\myheight*.3,inner sep=0.5pt](char){$x$};}}\dot{b}w=\dot{a}vbw$ - $\overset{\curvearrowright}{ab}v\settowidth{\myheight}{x}% \raisebox{-.1\myheight}{\tikz[baseline=(char.base)]{% \node[shape=circle,draw,minimum size=\myheight*\myheight*.3,inner sep=0.5pt](char){$x$};}}\dot{c}w=\overset{\curvearrowright}{ab}vcw$ - $\dot{a}v\settowidth{\myheight}{g}% \raisebox{-.1\myheight}{\tikz[baseline=(char.base)]{% \node[shape=circle,draw,minimum size=\myheight*\myheight*.3,inner sep=0.5pt](char){$g$};}}\dot{b}w=\overset{\curvearrowright}{ab}vw$ and all other cases give $0$. **Proposition 14**.
*The algebra $(W(\chi),\settowidth{\myheight}{x}% \raisebox{-.1\myheight}{\tikz[baseline=(char.base)]{% \node[shape=circle,draw,minimum size=\myheight*\myheight*.3,inner sep=0.5pt](char){$x$};}},\settowidth{\myheight}{g}% \raisebox{-.1\myheight}{\tikz[baseline=(char.base)]{% \node[shape=circle,draw,minimum size=\myheight*\myheight*.3,inner sep=0.5pt](char){$g$};}})$ is a $(\mathrm{Greg}')^!$-algebra generated by $\chi$.* *Proof.* Indeed, $\dot{a}v=\dot{a}\settowidth{\myheight}{x}% \raisebox{-.1\myheight}{\tikz[baseline=(char.base)]{% \node[shape=circle,draw,minimum size=\myheight*\myheight*.3,inner sep=0.5pt](char){$x$};}}w$ with $w$ the word $v$ with a dot on a letter (let us say the first one for example) and $\overset{\curvearrowright}{ab}v=(\dot{a}\settowidth{\myheight}{g}% \raisebox{-.1\myheight}{\tikz[baseline=(char.base)]{% \node[shape=circle,draw,minimum size=\myheight*\myheight*.3,inner sep=0.5pt](char){$g$};}}\dot{b})\settowidth{\myheight}{x}% \raisebox{-.1\myheight}{\tikz[baseline=(char.base)]{% \node[shape=circle,draw,minimum size=\myheight*\myheight*.3,inner sep=0.5pt](char){$x$};}}w$ so $(W(\chi),\settowidth{\myheight}{x}% \raisebox{-.1\myheight}{\tikz[baseline=(char.base)]{% \node[shape=circle,draw,minimum size=\myheight*\myheight*.3,inner sep=0.5pt](char){$x$};}},\settowidth{\myheight}{g}% \raisebox{-.1\myheight}{\tikz[baseline=(char.base)]{% \node[shape=circle,draw,minimum size=\myheight*\myheight*.3,inner sep=0.5pt](char){$g$};}})$ it is generated by $\chi$.\ The product $\settowidth{\myheight}{g}% \raisebox{-.1\myheight}{\tikz[baseline=(char.base)]{% \node[shape=circle,draw,minimum size=\myheight*\myheight*.3,inner sep=0.5pt](char){$g$};}}$ is skew-symmetric, indeed $\dot{a}\settowidth{\myheight}{g}% \raisebox{-.1\myheight}{\tikz[baseline=(char.base)]{% \node[shape=circle,draw,minimum size=\myheight*\myheight*.3,inner sep=0.5pt](char){$g$};}}\dot{b}=\overset{\curvearrowright}{ab}=\overset{\curvearrowleft}{ba}= 
-\overset{\curvearrowright}{ba}=-\dot{b}\settowidth{\myheight}{g}% \raisebox{-.1\myheight}{\tikz[baseline=(char.base)]{% \node[shape=circle,draw,minimum size=\myheight*\myheight*.3,inner sep=0.5pt](char){$g$};}}\dot{a}$. The $7$ relations of $(\mathrm{Greg}')^!$ are easily checked. ◻ **Proposition 15**. *We have $\dim((\mathrm{Greg}')^!(4))\geq 7$.* *Proof.* Let $A=W(\{a,b,c,d\})$. Let us compute the dimension of $\textnormal{Mult}(A(4))$, the multilinear part of $A(4)$. Let us write the words of $\textnormal{Mult}(A(4))$ so that the arrow always starts at $a$ (pointing to the second letter) and, aside from that, the letters are in lexicographic order.\ The rewriting rule $\overset{\curvearrowright}{\beta\gamma}\alpha\mapsto \overset{\curvearrowright}{\alpha\gamma}\beta-\overset{\curvearrowright}{\alpha\beta}\gamma$ is confluent.\ Indeed, $\overset{\curvearrowright}{cd}ab= \overset{\curvearrowright}{bd}ac-\overset{\curvearrowright}{bc}ad= (\overset{\curvearrowright}{ad}bc-\overset{\curvearrowright}{ab}cd)- (\overset{\curvearrowright}{ac}bd-\overset{\curvearrowright}{ab}cd)= \overset{\curvearrowright}{ad}bc-\overset{\curvearrowright}{ac}bd$.\ Hence $\textnormal{Mult}(A(4))$ is spanned by the words $\dot{a}bcd$, $a\dot{b}cd$, $ab\dot{c}d$, $abc\dot{d}$, $\overset{\curvearrowright}{ab}cd$, $\overset{\curvearrowright}{ac}bd$ and $\overset{\curvearrowright}{ad}bc$.\ Since $A$ is a $(\mathrm{Greg}')^!$-algebra on $4$ generators, $\dim((\mathrm{Greg}')^!(4))\geq\dim(\textnormal{Mult}(A(4)))=7$. ◻ *Remark 8*. The algebra $(W(\chi),\settowidth{\myheight}{x}% \raisebox{-.1\myheight}{\tikz[baseline=(char.base)]{% \node[shape=circle,draw,minimum size=\myheight*\myheight*.3,inner sep=0.5pt](char){$x$};}},\settowidth{\myheight}{g}% \raisebox{-.1\myheight}{\tikz[baseline=(char.base)]{% \node[shape=circle,draw,minimum size=\myheight*\myheight*.3,inner sep=0.5pt](char){$g$};}})$ is in fact the free $(\mathrm{Greg}')^!$-algebra generated by $\chi$.
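The confluence computation and the seven-element spanning set above can be replayed by a short script. In the following Python sketch (our own encoding, not the paper's; `reduce_term` is a name we introduce), a multilinear arrow word is a pair `(arrow, rest)`, antisymmetry normalises the arrow, and the rule $\overset{\curvearrowright}{\beta\gamma}\alpha\mapsto \overset{\curvearrowright}{\alpha\gamma}\beta-\overset{\curvearrowright}{\alpha\beta}\gamma$ is applied until the arrow starts at the smallest letter:

```python
# A multilinear arrow word on {a,b,c,d} is (arrow, rest): an ordered pair
# (p, q) for the arrow p -> q, plus the tuple of the remaining letters.
# Antisymmetry: (q, p) = -(p, q). Linear combinations are dicts term -> coeff.

def _add(d, k, c):
    d[k] = d.get(k, 0) + c
    if d[k] == 0:
        del d[k]

def reduce_term(arrow, rest, coeff, out):
    """Rewrite until the arrow starts at 'a', using
    (p -> q) . a v  =  (a -> q) . p v  -  (a -> p) . q v.
    Multilinearity guarantees 'a' is in rest whenever it is not in the arrow."""
    p, q = arrow
    if p > q:                        # normalise with antisymmetry
        p, q, coeff = q, p, -coeff
    if p == 'a':
        _add(out, ((p, q), tuple(sorted(rest))), coeff)
        return
    rest2 = tuple(l for l in rest if l != 'a')   # pull out the letter a
    reduce_term(('a', q), rest2 + (p,), coeff, out)
    reduce_term(('a', p), rest2 + (q,), -coeff, out)

# Confluence check from the proof: reduce cd.ab directly, and after first
# applying the rule with the letter b instead of a; both give ad.bc - ac.bd.
path1 = {}
reduce_term(('c', 'd'), ('a', 'b'), 1, path1)
path2 = {}
reduce_term(('b', 'd'), ('a', 'c'), 1, path2)
reduce_term(('b', 'c'), ('a', 'd'), -1, path2)
assert path1 == path2 == {(('a', 'd'), ('b', 'c')): 1, (('a', 'c'), ('b', 'd')): -1}
```

Reducing every arrow word on $\{a,b,c,d\}$ this way produces only the three normal arrows $\overset{\curvearrowright}{ab}$, $\overset{\curvearrowright}{ac}$, $\overset{\curvearrowright}{ad}$, which together with the four dotted words recovers the seven spanning words of the proof.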
Let us use the formalism of shuffle operads to write down a Gröbner basis of $(\mathrm{Greg}')^!$. Operadic monomials will be written as shuffle trees; however, one should be careful not to confuse them with Greg trees. For further information on shuffle operads, see [@AlgComp].\ A quick introduction to shuffle trees was given in the first section; see [Definition 1](#rec:sh){reference-type="ref" reference="rec:sh"}. Writing the relations of $(\mathrm{Greg}')^!$ using shuffle trees is a good exercise to familiarize ourselves with shuffle trees, and to be careful not to confuse them with the species of rooted trees or rooted Greg trees.\ Since the actions of the symmetric groups are disposed of when working with shuffle operads, let us write $y_*=x_*.(1\;2)$.\ *Example 9*. Let us rewrite the relations of $(\mathrm{Greg}')^!$ and the elements of their orbits as shuffle trees. Because of the local increasing condition, $y_*$ is needed although it can be expressed in $(\mathrm{Greg}')^!$ in terms of $x_*$. We get $25$ relations; they are written in Figure [1](#fg:StRel1){reference-type="ref" reference="fg:StRel1"}. From those computations, the only missing ingredient to get a Gröbner basis is a monomial order. We will consider the three following orders: - the degree-lexicographic permutation order with $x_*>y_*>g_*$; - the permutation reverse-degree-lexicographic order with $g_*>y_*>x_*$; - and the reverse-degree-lexicographic permutation order with $x_*>y_*>g_*$. Those orders are defined in [@AlgComp] for any presentation of shuffle operads. Using a monomial order on a set of relations, one gets a terminating rewriting system.
- the degree-lexicographic permutation order with $x_*>y_*>g_*$ gives Figure [2](#fg:StGB1){reference-type="ref" reference="fg:StGB1"}; - the permutation reverse-degree-lexicographic order with $g_*>y_*>x_*$ gives Figure [3](#fg:StGB2){reference-type="ref" reference="fg:StGB2"}; - and the reverse-degree-lexicographic permutation order with $x_*>y_*>g_*$ gives Figure [4](#fg:StGB3){reference-type="ref" reference="fg:StGB3"}. **Proposition 16**. *The rewriting systems displayed in Figure [2](#fg:StGB1){reference-type="ref" reference="fg:StGB1"}, Figure [3](#fg:StGB2){reference-type="ref" reference="fg:StGB2"} and Figure [4](#fg:StGB3){reference-type="ref" reference="fg:StGB3"} are confluent. (They are in fact Gröbner bases.)* *Proof.* To prove this fact one has two choices: either check the confluence of the critical monomials ($294$ cases to check for Figure [2](#fg:StGB1){reference-type="ref" reference="fg:StGB1"}), or remark that there are $7$ normal forms in arity $4$ for each rewriting system, and since $\dim((\mathrm{Greg}')^!(4))\geq 7$, no new relations can appear.\ Since we have a monomial order and the rewriting rules are quadratic in a binary operad, checking arity $4$ is enough. ◻ **Proposition 17**. *We have $\dim((\mathrm{Greg}')^!(n))=2n-1$ for all $n\geq 1$, hence its exponential generating series is $(2t-1)\exp(t)+1$.* *Proof.* It suffices to count the normal forms of a rewriting system, for example that of Figure [2](#fg:StGB1){reference-type="ref" reference="fg:StGB1"}. Let $n\geq 2$ and let us count the number of normal forms in arity $n$. These are right combs with at most one $g_*$, with all the $x_*$ above the $g_*$ and the $y_*$, and all the $y_*$ below the $g_*$ and the $x_*$.\ Hence the normal forms are determined by the numbers of occurrences of $x_*$ and $g_*$, and have either zero or one occurrence of $g_*$. If there is no $g_*$, then one can have from $0$ to $n-1$ occurrences of $x_*$.
If there is one $g_*$, then one can have from $0$ to $n-2$ occurrences of $x_*$.\ Hence the number of normal forms in arity $n$ is $2n-1$. ◻ **Theorem 2**. *The operad $\mathrm{Greg}'$ is Koszul.* *Proof.* We have a quadratic Gröbner basis for $(\mathrm{Greg}')^!$, hence $(\mathrm{Greg}')^!$ is Koszul, and therefore $\mathrm{Greg}'$ is Koszul. ◻ **Theorem 3**. *The exponential generating series of the operad $\mathrm{Greg}'$ is the inverse under composition of $(2t+1)\exp(-t)-1$.\ Hence $\mathrm{Greg}'$ is isomorphic to $\mathrm{Greg}$.* *Proof.* We know that for a Koszul operad $\mathcal{P}$ we have $f_\mathcal{P}(f_{\mathcal{P}^!}(-t))=-t$. Hence we get the exponential generating series of the operad $\mathrm{Greg}'$.\ Since we have a surjective morphism from $\mathrm{Greg}'$ to $\mathrm{Greg}$ and they have the same exponential generating series, the morphism is an isomorphism. ◻ **Corollary 4**. *The operad $\mathrm{Greg}$ is binary, quadratic and Koszul.* # Generalization of the Greg operad {#seq:GGO} We have studied the operad $\mathrm{Greg}$; however, we have not yet related this operad to $\bigvee_\mathrm{Lie}^2\mathrm{PreLie}$. In fact, we shall now establish a much more general result about the operad $\bigvee_\mathrm{Lie}^{m+1}\mathrm{PreLie}$. **Proposition 18**.
*The operad $\bigvee_\mathrm{Lie}^{m+1}\mathrm{PreLie}$ is isomorphic to the operad $\mathcal{F}(x,c_1,\dots,c_m)/\langle \mathcal{R}\rangle$, with $x$ without symmetry, $c_k.(1\;2)=c_k$ and $\mathcal{R}$ the relations: $$\tag{pre-Lie}\label{eq:pl} (x \circ_1x -x \circ_2x) - (x \circ_1x -x \circ_2x).(2\;3)$$ $$\begin{gathered} \tag{diff pre-Lie}\label{eq:vpl} (x \circ_1 c_k - (c_k \circ_1 x ).(2\;3) - c_k \circ_2 x ) - (x \circ_1 c_k - (c_k \circ_1 x ).(2\;3) - c_k \circ_2 x ).(2\;3)\\ + \sum_{i,j\mid \max(i,j)=k}(c_i\circ_1 c_j - (c_i\circ_1 c_j).(2\;3)) \end{gathered}$$* *Proof.* We already know the following presentation: $\bigvee_\mathrm{Lie}^{m+1}\mathrm{PreLie}\simeq\mathcal{F}(x_1,\dots,x_{m+1})/\langle \mathcal{R}' \rangle$ with $x_k$ without symmetries and $\mathcal{R}'$ the relations: $$\tag{pre-Lie k}\label{eq:pli} (x_k \circ_1x_k -x_k \circ_2x_k) - (x_k \circ_1x_k -x_k \circ_2x_k).(2\;3)$$ $$\tag{share}\label{eq:share} (x_k-x_{k+1})-(x_k-x_{k+1}).(1\;2)$$ Let $c_k=x_k-x_{k+1}$, then $c_k=c_k.(1\;2)$ is equivalent to Relation ([\[eq:share\]](#eq:share){reference-type="ref" reference="eq:share"}). Let $x=x_1$. We have that $x_{k+1}=x+\sum_{i=1}^{k}c_i$. 
Hence $\mathcal{R}'$ is equivalent to: $$\begin{gathered} (x \circ_1x -x \circ_2x) - (x \circ_1x -x \circ_2x).(2\;3) + \\ \sum_{i=1}^{k}((x \circ_1c_i -x \circ_2c_i) - (x \circ_1c_i -x \circ_2c_i).(2\;3) + (c_i \circ_1x -c_i \circ_2x) - (c_i \circ_1x -c_i \circ_2x).(2\;3)) + \\ \sum_{i=1}^{k}\sum_{j=1}^{k}((c_i \circ_1c_j -c_i \circ_2c_j) - (c_i \circ_1c_j -c_i \circ_2c_j).(2\;3)), \end{gathered}$$ which is equal to: $$\begin{gathered} (x \circ_1x -x \circ_2x) - (x \circ_1x -x \circ_2x).(2\;3) + \\ \sum_{i=1}^{k}(x \circ_1c_i - (x \circ_1c_i).(2\;3) + (c_i \circ_1x -c_i \circ_2x) - (c_i \circ_1x -c_i \circ_2x).(2\;3)) + \\ \sum_{i=1}^{k}\sum_{j=1}^{k}(c_i \circ_1c_j - (c_i \circ_1c_j).(2\;3)) \end{gathered}$$ Finally, if we subtract consecutive relations, we obtain $$\begin{gathered} x \circ_1c_k - (x \circ_1c_k).(2\;3) + (c_k \circ_1x -c_k \circ_2x) - (c_k \circ_1x -c_k \circ_2x).(2\;3) +\\ \sum_{i,j\mid\max(i,j)=k}(c_i \circ_1c_j - (c_i \circ_1c_j).(2\;3)), \end{gathered}$$ which is the intended relation. ◻ This quadratic presentation looks very much like the presentation of the operad $\mathrm{Greg}$. The operad $\bigvee_\mathrm{Lie}^2\mathrm{PreLie}$ is not isomorphic to $\mathrm{Greg}$; however, one may wonder whether the operad $\bigvee_\mathrm{Lie}^2\mathrm{PreLie}$ is a deformation of $\mathrm{Greg}$. We shall now show that this is indeed the case.\ Let $C=(V,\Delta)$, where $V$ is a vector space of finite dimension $n$ and $\Delta$ is a coassociative cocommutative coproduct on $V$. **Definition 19**. The vector space of *rooted Greg trees over $V$* is $\mathcal{G}_k^V(m)=\bigoplus_{\tau\in G_k(m)}V^{\otimes BV(\tau)}$.\ It has a basis of rooted Greg trees whose black vertices are labelled by a basis of $V$.\ Let $\mathcal{G}^V(m)=\bigoplus_k\mathcal{G}_k^V(m)$ and $\mathcal{G}^V=\bigoplus_m\mathcal{G}^V(m)$. **Definition 20**.
Let us define the deformed fall product $\star^\Delta$ on $\mathcal{G}^V$.\ Let $S$ and $T$ be two rooted Greg trees over $V$, and let us reuse the notation of Section [2](#seq:RT){reference-type="ref" reference="seq:RT"}.\ For $v$ a vertex of $S$, let $FS=\{S_1,\dots,S_k\}$ be the forest of the children of $v$, $B$ the rooted tree below $v$, and $c^v$ the label of $v$. Let us write: $$\vcenter{\hbox{\begin{tikzpicture}[scale=0.5] %vertices \node (S) at (0,0) {}; %triangles \node[isosceles triangle, isosceles triangle apex angle=60, draw, rotate=270, fill=white!50, minimum size =24pt] (ST) at (S){}; %edges %dots %circles \draw[circle,fill=black] (ST.east) circle [radius=2pt]; %labels \node at (S) {\scalebox{0.8}{$S$}}; \end{tikzpicture}}} \qquad = \qquad \vcenter{\hbox{\begin{tikzpicture}[scale=0.5] %vertices \node (lv) at (0,0) {}; \node (SD) at (0,-2.7) {}; \node (F) at (0,3) {}; %triangles \node[isosceles triangle, isosceles triangle apex angle=60, draw, rotate=270, fill=white!50, minimum size =24pt] (SDT) at (SD){}; \node[isosceles triangle, isosceles triangle apex angle=60, draw, rotate=270, fill=white!50, minimum size =24pt] (FT) at (F){}; %edges \draw[thick, double] (FT.east)--(lv); \draw[thick] (lv)--(SDT.west); %dots %circles \draw[circle,fill=white] (lv) circle [radius=24pt]; \draw[circle,fill=black] (FT.east) circle [radius=2pt]; \draw[circle,fill=black] (SDT.east) circle [radius=2pt]; %labels \node at (lv) {\scalebox{1}{$c^v$}}; \node at (F) {\scalebox{0.8}{$FS$}}; \node at (SD) {\scalebox{0.8}{$B$}}; \end{tikzpicture}}}$$ For $v$ a black vertex and $c^v$ its label, let us write $c_{(1)}^v\otimes c_{(2)}^v=\Delta(c^v)$ using the Sweedler notation.
Let us define the product $\star^\Delta$ by: $$S\star^\Delta T = % only for \displaystyle \mathop{% \raisebox {-1\depthofsumsign+1\depthofsumsign} {\scalebox {1} {$\displaystyle\sum$}% } } _{v\in V(S)} \vcenter{\hbox{\begin{tikzpicture}[scale=0.5] %vertices \node (lv) at (0,0) {}; \node (SD) at (0,-3) {}; \node (T) at (1.5,3) {}; \node (FS) at (-1.5,3) {}; %triangles \node[isosceles triangle, isosceles triangle apex angle=60, draw, rotate=270, fill=white!50, minimum size =24pt] (SDT) at (SD){}; \node[isosceles triangle, isosceles triangle apex angle=60, draw, rotate=270, fill=white!50, minimum size =24pt] (TT) at (T){}; \node[isosceles triangle, isosceles triangle apex angle=60, draw, rotate=270, fill=white!50, minimum size =24pt] (FST) at (FS){}; %edges \draw[thick] (TT.east)--(lv); \draw[thick,double] (FST.east)--(lv); \draw[thick] (lv)--(SDT.west); %dots %circles \draw[circle,fill=white] (lv) circle [radius=24pt]; \draw[circle,fill=black] (TT.east) circle [radius=2pt]; \draw[circle,fill=black] (FST.east) circle [radius=2pt]; \draw[circle,fill=black] (SDT.east) circle [radius=2pt]; %labels \node at (lv) {\scalebox{1}{$c^v$}}; \node at (T) {\scalebox{0.8}{$T$}}; \node at (FS) {\scalebox{0.8}{$FS$}}; \node at (SD) {\scalebox{0.8}{$B$}}; \end{tikzpicture}}}\quad + % only for \displaystyle \mathop{% \raisebox {-1\depthofsumsign+1\depthofsumsign} {\scalebox {1} {$\displaystyle\sum$}% } } _{v\in BV(S)}\;% only for \displaystyle \mathop{% \raisebox {-1\depthofsumsign+1\depthofsumsign} {\scalebox {1} {$\displaystyle\sum$}% } } _{FS_{(1)}\sqcup FS_{(2)}=FS} \vcenter{\hbox{\begin{tikzpicture}[scale=0.5] %vertices \node (lv0) at (-1.5,-3) {}; \node (lv) at (0,0) {}; \node (SD) at (-1.5,-6) {}; \node (T) at (1.5,3) {}; \node (FS) at (-1.5,3) {}; \node (FS2) at (-3,0) {}; %triangles \node[isosceles triangle, isosceles triangle apex angle=60, draw, rotate=270, fill=white!50, minimum size =24pt] (SDT) at (SD){}; \node[isosceles triangle, isosceles triangle apex angle=60, draw, 
rotate=270, fill=white!50, minimum size =24pt] (TT) at (T){}; \node[isosceles triangle, isosceles triangle apex angle=60, draw, rotate=270, fill=white!50, minimum size =28pt] (FST) at (FS){}; \node[isosceles triangle, isosceles triangle apex angle=60, draw, rotate=270, fill=white!50, minimum size =28pt] (FS2T) at (FS2){}; %edges \draw[thick] (TT.east)--(lv); \draw[thick,double] (FST.east)--(lv); \draw[thick,double] (FS2T.east)--(lv0); \draw[thick] (lv)--(lv0); \draw[thick] (lv0)--(SDT.west); %dots %circles \draw[circle,fill=white] (lv0) circle [radius=24pt]; \draw[circle,fill=white] (lv) circle [radius=24pt]; \draw[circle,fill=black] (TT.east) circle [radius=2pt]; \draw[circle,fill=black] (FST.east) circle [radius=2pt]; \draw[circle,fill=black] (FS2T.east) circle [radius=2pt]; \draw[circle,fill=black] (SDT.east) circle [radius=2pt]; %labels \node at (lv0) {\scalebox{1}{$c_{(1)}^v$}}; \node at (lv) {\scalebox{1}{$c_{(2)}^v$}}; \node at (T) {\scalebox{0.8}{$T$}}; \node at (FS) {\scalebox{0.8}{$FS_{(2)}$}}; \node at (FS2) {\scalebox{0.8}{$FS_{(1)}$}}; \node at (SD) {\scalebox{0.8}{$B$}}; \end{tikzpicture}}}$$ For readability sake, let us write: $$S\star^\Delta T = \qquad \vcenter{\hbox{\begin{tikzpicture}[scale=0.5] %vertices \node (lv) at (0,0) {}; \node (T) at (0,3) {}; %triangles \node[isosceles triangle, isosceles triangle apex angle=60, draw, rotate=270, fill=white!50, minimum size =24pt] (TT) at (T){}; %edges \draw[thick] (TT.east)--(lv); %dots %circles \draw[circle,fill=white] (lv) circle [radius=24pt]; \draw[circle,fill=black] (TT.east) circle [radius=2pt]; %labels \node at (lv) {\scalebox{1}{$c^v$}}; \node at (T) {\scalebox{0.8}{$T$}}; \end{tikzpicture}}} \qquad + \qquad \vcenter{\hbox{\begin{tikzpicture}[scale=0.5] %vertices \node (lv0) at (0,-3) {}; \node (lv) at (0,0) {}; \node (T) at (0,3) {}; %triangles \node[isosceles triangle, isosceles triangle apex angle=60, draw, rotate=270, fill=white!50, minimum size =24pt] (TT) at (T){}; %edges \draw[thick] 
(TT.east)--(lv); \draw[thick] (lv)--(lv0); %dots %circles \draw[circle,fill=white] (lv0) circle [radius=24pt]; \draw[circle,fill=white] (lv) circle [radius=24pt]; \draw[circle,fill=black] (TT.east) circle [radius=2pt]; %labels \node at (lv0) {\scalebox{1}{$c_{(1)}^v$}}; \node at (lv) {\scalebox{1}{$c_{(2)}^v$}}; \node at (T) {\scalebox{0.8}{$T$}}; \end{tikzpicture}}}$$ **Proposition 21**. *The deformed fall product $\star^\Delta$ is pre-Lie.* *Proof.* The proof is the tedious computation of $(R\star^\Delta S) \star^\Delta T-R\star^\Delta (S \star^\Delta T)$. The computation of $(R\star^\Delta S) \star^\Delta T$ is done in Figure [5](#fg:StComp1){reference-type="ref" reference="fg:StComp1"}; $r$, $r'$ and $s$ are labels of vertices of $R$ and $S$ respectively. The boxed terms are the terms of $R\star^\Delta (S \star^\Delta T)$.\ Using the coassociativity and cocommutativity of $\Delta$, we get that $(R\star^\Delta S) \star^\Delta T-R\star^\Delta (S \star^\Delta T)$ is symmetric in $S$ and $T$.\ Hence the deformed fall product $\star^\Delta$ is pre-Lie. ◻ *Remark 10*. One may remark that cocommutativity is stronger than necessary: the weaker condition actually needed is that $r_{(1)}\otimes r_{(2)}\otimes r_{(3)}=r_{(1)}\otimes r_{(3)}\otimes r_{(2)}$ in Sweedler notation, which is known as the copermutativity property. The symmetric brace product $Br^\Delta$ associated to $\star^\Delta$ is also defined by the following formula: $$Br^\Delta(S;T_1,\dots,T_{n+1})=Br^\Delta(S;T_1,\dots,T_n)\star^\Delta T_{n+1}- \sum_{i=1}^nBr^\Delta(S;T_1,\dots,T_i\star^\Delta T_{n+1},\dots,T_n)$$ As for $Br$, $Br^\Delta$ is symmetric in the $T_i$'s. **Definition 22**.
The partial compositions $\circ_i^\Delta$ are defined the same way as in Definition [Definition 5](#dfn:inf){reference-type="ref" reference="dfn:inf"} by: $$T\circ_i^\Delta S = \vcenter{\hbox{\begin{tikzpicture}[scale=0.5] %vertices \node (T) at (0,0) {}; \node (i) at (0,3) {}; \node (S) at (0,6) {}; %triangles \node[isosceles triangle, isosceles triangle apex angle=60, draw, rotate=270, fill=white!50, minimum size =24pt] (TT) at (T){}; \node[isosceles triangle, isosceles triangle apex angle=60, draw, rotate=270, fill=white!50, minimum size =24pt] (ST) at (S){}; %edges \draw[thick] (TT.west)--(ST.east); %dots %circles \draw[circle,fill=black] (TT.east) circle [radius=2pt]; \draw[circle,fill=black] (ST.east) circle [radius=2pt]; \draw[circle,fill=white] (i) circle [radius=24pt]; \node[cross=13pt] at (i) {}; %labels \node at (T) {\scalebox{0.8}{$B$}}; \node at (i) {\scalebox{0.8}{$v$}}; \node at (S) {\scalebox{0.8}{$U$}}; \end{tikzpicture}}}$$ With $U=Br^\Delta(S;FT)$. **Proposition 23**. *The partial compositions $\circ_i^\Delta$ satisfy the sequential composition and parallel composition axioms.* The proof is the same as that of Proposition [Proposition 8](#prp:inf){reference-type="ref" reference="prp:inf"}. **Definition 24**.
With $C=(V,\Delta)$, let $\mathrm{Greg}^C$ be the operad $(\mathcal{G}^V,\{\circ_i^\Delta\})$.\ Let $(e^1,\dots,e^n)$ be a basis of $V$ and : $$x_n= \vcenter{\hbox{\begin{tikzpicture}[scale=0.3] %vertices \node (T) at (0,0) {}; \node (S1) at (3,3) {}; \node (S2) at (0,3) {}; \node (S3) at (-3,3) {}; %edges \draw[thick] (T)--(S1); \draw[thick] (T)--(S3); %dots \node at (S2) {$\cdots$}; %circles \draw[circle,fill=white] (S1) circle [radius=24pt]; \draw[circle,fill=white] (S3) circle [radius=24pt]; \draw[circle,fill=white] (T) circle [radius=24pt]; %triangles %labels \node at (T) {\scalebox{0.8}{$1$}}; \node at (S3) {\scalebox{0.8}{$2$}}; \node at (S1) {\scalebox{0.8}{$n$}}; \end{tikzpicture}}} \qquad g_n^k= \vcenter{\hbox{\begin{tikzpicture}[scale=0.3] %vertices \node (T) at (0,0) {}; \node (S1) at (3,3) {}; \node (S2) at (0,3) {}; \node (S3) at (-3,3) {}; %edges \draw[thick] (T)--(S1); \draw[thick] (T)--(S3); %dots \node at (S2) {$\cdots$}; %circles \draw[circle,fill=white] (S1) circle [radius=24pt]; \draw[circle,fill=white] (S3) circle [radius=24pt]; \draw[circle,fill=black] (T) circle [radius=28pt]; %triangles %labels \node at (S3) {\scalebox{0.8}{$1$}}; \node at (S1) {\scalebox{0.8}{$n$}}; \node at (T) {\scalebox{1}{$\textcolor{white}{e^k}$}}; \end{tikzpicture}}}$$ Let $x=x_2$ and $g^k=g_2^k$. **Proposition 25**. 
*The operad $\mathrm{Greg}^{C}$ is binary and satisfy the following relations: $$\tag{pre-Lie} (x \circ_1x -x \circ_2x) - (x \circ_1x -x \circ_2x).(2\;3)$$ $$\begin{gathered} \tag{greg $\Delta$}\label{eq:gregDelta} (x \circ_1 g^k - (g^k \circ_1 x ).(2\;3) - g^k \circ_2 x ) - (x \circ_1 g^k - (g^k \circ_1 x ).(2\;3) - g^k \circ_2 x ).(2\;3)\\ + (g^k_{(1)}\circ_1 g^k_{(2)} - (g^k_{(1)}\circ_1 g^k_{(2)}).(2\;3)) \end{gathered}$$ With $g^k_{(1)}$ and $g^k_{(2)}$ defined by $\Delta$ by the identification of $V$ with the span of the generators $g^k$.* *Proof.* Let us compute $x \circ_1 g^k$: $$\vcenter{\hbox{\begin{tikzpicture}[scale=0.3] %vertices \node (1) at (0,0) {}; \node (2) at (0,3) {}; %triangles %edges \draw[thick] (1)--(2); %dots %circles \draw[circle,fill=white] (1) circle [radius=24pt]; \draw[circle,fill=white] (2) circle [radius=24pt]; %labels \node at (1) {\scalebox{0.8}{$1$}}; \node at (2) {\scalebox{0.8}{$3$}}; \end{tikzpicture}}}\circ_1\vcenter{\hbox{\begin{tikzpicture}[scale=0.3] %vertices \node (1) at (-1.5,3) {}; \node (2) at (1.5,3) {}; \node (c) at (0,0) {}; %triangles %edges \draw[thick] (c)--(1); \draw[thick] (c)--(2); %dots %circles \draw[circle,fill=black] (c) circle [radius=28pt]; \draw[circle,fill=white] (1) circle [radius=24pt]; \draw[circle,fill=white] (2) circle [radius=24pt]; %labels \node at (1) {\scalebox{0.8}{$1$}}; \node at (2) {\scalebox{0.8}{$2$}}; \node at (c) {\scalebox{1}{$\textcolor{white}{e^k}$}}; \end{tikzpicture}}}= \vcenter{\hbox{\begin{tikzpicture}[scale=0.3] %vertices \node (1) at (-1.5,3) {}; \node (2) at (1.5,3) {}; \node (3) at (-1.5,6) {}; \node (c) at (0,0) {}; %triangles %edges \draw[thick] (c)--(1); \draw[thick] (c)--(2); \draw[thick] (1)--(3); %dots %circles \draw[circle,fill=black] (c) circle [radius=28pt]; \draw[circle,fill=white] (1) circle [radius=24pt]; \draw[circle,fill=white] (2) circle [radius=24pt]; \draw[circle,fill=white] (3) circle [radius=24pt]; %labels \node at (1) {\scalebox{0.8}{$1$}}; \node at (2) 
{\scalebox{0.8}{$2$}}; \node at (3) {\scalebox{0.8}{$3$}}; \node at (c) {\scalebox{1}{$\textcolor{white}{e^k}$}}; \end{tikzpicture}}}+ \vcenter{\hbox{\begin{tikzpicture}[scale=0.3] %vertices \node (1) at (-1.5,3) {}; \node (2) at (1.5,3) {}; \node (3) at (1.5,6) {}; \node (c) at (0,0) {}; %triangles %edges \draw[thick] (c)--(1); \draw[thick] (c)--(2); \draw[thick] (2)--(3); %dots %circles \draw[circle,fill=black] (c) circle [radius=28pt]; \draw[circle,fill=white] (1) circle [radius=24pt]; \draw[circle,fill=white] (2) circle [radius=24pt]; \draw[circle,fill=white] (3) circle [radius=24pt]; %labels \node at (1) {\scalebox{0.8}{$1$}}; \node at (2) {\scalebox{0.8}{$2$}}; \node at (3) {\scalebox{0.8}{$3$}}; \node at (c) {\scalebox{1}{$\textcolor{white}{e^k}$}}; \end{tikzpicture}}}+ \vcenter{\hbox{\begin{tikzpicture}[scale=0.3] %vertices \node (1) at (-2,3) {}; \node (2) at (0,3) {}; \node (3) at (2,3) {}; \node (c) at (0,0) {}; %triangles %edges \draw[thick] (c)--(1); \draw[thick] (c)--(2); \draw[thick] (c)--(3); %dots %circles \draw[circle,fill=black] (c) circle [radius=28pt]; \draw[circle,fill=white] (1) circle [radius=24pt]; \draw[circle,fill=white] (2) circle [radius=24pt]; \draw[circle,fill=white] (3) circle [radius=24pt]; %labels \node at (1) {\scalebox{0.8}{$1$}}; \node at (2) {\scalebox{0.8}{$2$}}; \node at (3) {\scalebox{0.8}{$3$}}; \node at (c) {\scalebox{1}{$\textcolor{white}{e^k}$}}; \end{tikzpicture}}}+ \vcenter{\hbox{\begin{tikzpicture}[scale=0.3] %vertices \node (1) at (-1.5,3) {}; \node (2) at (0,6) {}; \node (3) at (3,6) {}; \node (c) at (0,0) {}; \node (g) at (1.5,3) {}; %triangles %edges \draw[thick] (c)--(1); \draw[thick] (c)--(g); \draw[thick] (g)--(2); \draw[thick] (g)--(3); %dots %circles \draw[circle,fill=black] (c) circle [radius=32pt]; \draw[circle,fill=black] (g) circle [radius=32pt]; \draw[circle,fill=white] (1) circle [radius=24pt]; \draw[circle,fill=white] (2) circle [radius=24pt]; \draw[circle,fill=white] (3) circle [radius=24pt]; %labels 
\node at (1) {\scalebox{0.8}{$1$}}; \node at (2) {\scalebox{0.8}{$2$}}; \node at (3) {\scalebox{0.8}{$3$}}; \node at (c) {\scalebox{1}{$\textcolor{white}{e_{(1)}^k}$}}; \node at (g) {\scalebox{1}{$\textcolor{white}{e_{(2)}^k}$}}; \end{tikzpicture}}}+ \vcenter{\hbox{\begin{tikzpicture}[scale=0.3] %vertices \node (g) at (-1.5,3) {}; \node (3) at (0,6) {}; \node (1) at (-3,6) {}; \node (c) at (0,0) {}; \node (2) at (1.5,3) {}; %triangles %edges \draw[thick] (g)--(1); \draw[thick] (c)--(g); \draw[thick] (c)--(2); \draw[thick] (g)--(3); %dots %circles \draw[circle,fill=black] (c) circle [radius=32pt]; \draw[circle,fill=black] (g) circle [radius=32pt]; \draw[circle,fill=white] (1) circle [radius=24pt]; \draw[circle,fill=white] (2) circle [radius=24pt]; \draw[circle,fill=white] (3) circle [radius=24pt]; %labels \node at (1) {\scalebox{0.8}{$1$}}; \node at (2) {\scalebox{0.8}{$2$}}; \node at (3) {\scalebox{0.8}{$3$}}; \node at (c) {\scalebox{1}{$\textcolor{white}{e_{(1)}^k}$}}; \node at (g) {\scalebox{1}{$\textcolor{white}{e_{(2)}^k}$}}; \end{tikzpicture}}}$$ This shows that the operad $\mathrm{Greg}^C$ satisfies Relation [\[eq:gregDelta\]](#eq:gregDelta){reference-type="ref" reference="eq:gregDelta"}.\ Let $\mathcal{P}(x,y,g^1,\dots,g^n)$ be the suboperad of $\mathrm{Greg}^C$ generated by $x$, $y$ and $g^1,\dots,g^n$. We have to show that $\mathcal{P}(x,y,g^1,\dots,g^n)=\mathrm{Greg}^C$. Let us prove it by induction on the arity.\ *Base case:* By definition $\mathcal{P}(x,y,g^1,\dots,g^n)(2)=\mathrm{Greg}^C(2)$.\ *Induction step:* Let $m\geq 2$ and suppose that $\mathcal{P}(x,y,g^1,\dots,g^n)(k)=\mathrm{Greg}^C(k)$ for all $k\leq m$. We have to show that $\mathcal{P}(x,y,g^1,\dots,g^n)(m+1)=\mathrm{Greg}^C(m+1)$.\ Computing $x\circ_1x_m$ and $x\circ_1g^k_m$ shows that $x_{m+1}\in \mathcal{P}(x,y,g)(m+1)$ and $g^k_{m+1}\in \mathcal{P}(x,y,g)(m+1)$.
Since we can obtain any rooted Greg tree by inductively composing corollas in the leaves of smaller trees, we have $\mathrm{Greg}^C(m+1)= \mathcal{P}(x,y,g^1,\dots,g^n)(m+1)$.\ Hence by induction, $\mathcal{P}(x,y,g^1,\dots,g^n)=\mathrm{Greg}^C$. ◻ # Koszul dual and Koszulness Let us use the same strategy as in the previous section to prove that the operad $\mathrm{Greg}^C$ is Koszul. **Definition 26**. Let $\mathcal{G}^C$ be the operad defined by generators and relations as follows: $\tilde{x}$ is a generator without symmetries, $\tilde{g}^k$ are symmetric generators such that: $$(\tilde{x}\circ_1\tilde{x}-\tilde{x}\circ_2\tilde{x}) - (\tilde{x}\circ_1\tilde{x}-\tilde{x}\circ_2\tilde{x}).(2\;3)$$ $$\begin{gathered} (\tilde{x}\circ_1 \tilde{g}^k - (\tilde{g}^k \circ_1 \tilde{x}).(2\;3) - \tilde{g}^k \circ_2 \tilde{x}) - (\tilde{x}\circ_1 \tilde{g}^k - (\tilde{g}^k \circ_1 \tilde{x}).(2\;3) - \tilde{g}^k \circ_2 \tilde{x}).(2\;3)\\ + (\tilde{g}^k_{(1)}\circ_1 \tilde{g}^k_{(2)} - (\tilde{g}^k_{(1)}\circ_1 \tilde{g}^k_{(2)}).(2\;3)) \end{gathered}$$ With $\tilde{g}^k_{(1)}$ and $\tilde{g}^k_{(2)}$ defined by $\Delta$ via the identification of $V$ with the span of the generators $\tilde{g}^k$. Let $C^*=(V^*,\mu)$ be the linear dual of $C$. This is a commutative algebra of dimension $n$. **Definition 27**. Let $(\mathcal{G}^C)^!$ be the operad defined by generators and relations as follows: $x_*$ is a generator without symmetries, $g^k_*$ are symmetric generators such that: $$x_*\circ_1x_*-x_*\circ_2x_* \qquad ; \qquad x_*\circ_1x_*-(x_*\circ_1x_*).(2\;3)$$ $$x_*\circ_1g^k_*-g^k_*\circ_2x_* \qquad ; \qquad x_*\circ_1g^k_*-(g^k_*\circ_1x_*).(2\;3)$$ $$x_*\circ_1g^k_*+(x_*\circ_1g^k_*).(1\;2\;3)+(x_*\circ_1g^k_*).(1\;3\;2)$$ $$x_*\circ_2g^k_* \qquad ; \qquad g^i_*\circ_1g^j_*-x_*\circ_1g_*^{i.j}$$ With the notation $g_*^{i.j}=\mu(g_*^i,g_*^j)$, one should be careful since $i.j$ is not the product of $i$ and $j$; this is just a way to keep notation more compact.
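The numerical side of the Koszulness argument developed below — the dimension count $\dim((\mathcal{G}^C)^!(m))=(n+1)m-n$ and its exponential generating series $((n+1)t-n)\exp(t)+n$ (Proposition 32), the functional equation $f=t\exp(f)+n(\exp(f)-f-1)$ (Proposition 33), and the recursion of Remark 12 — can be cross-checked with a computer algebra system. The following sympy sketch is an illustrative aid (the encoding and variable names are ours, not from the paper):

```python
import sympy as sp

t, n = sp.symbols('t n')
N = 7  # truncation order for all series

# Claimed EGS of the Koszul dual: coefficients of ((n+1)t - n)exp(t) + n
# should be ((n+1)m - n)/m! for m >= 1.
F = sp.series(((n + 1)*t - n)*sp.exp(t) + n, t, 0, N).removeO()
for m in range(1, N):
    dim_m = sp.expand(F.coeff(t, m)*sp.factorial(m))
    assert sp.expand(dim_m - ((n + 1)*m - n)) == 0

# EGS of Greg^C: solve f = t*exp(f) + n*(exp(f) - f - 1) by fixed-point
# iteration on truncated series; the coefficient of t^k stabilizes after
# k iterations.
f = sp.Integer(0)
for _ in range(N):
    f = sp.expand(sp.series(t*sp.exp(f) + n*(sp.exp(f) - f - 1),
                            t, 0, N).removeO())

g = [sp.expand(f.coeff(t, k)*sp.factorial(k)) for k in range(N)]
assert g[1] == 1
# Recursion of Remark 12: g_{k+1}(n) = (n+2)*k*g_k(n) + (n+1)^2*g_k'(n).
for k in range(1, N - 1):
    rhs = (n + 2)*k*g[k] + (n + 1)**2*sp.diff(g[k], n)
    assert sp.expand(g[k + 1] - rhs) == 0

# Sanity check at n = 0: f = t*exp(f) enumerates rooted trees, g_k(0) = k^(k-1).
assert [g[k].subs(n, 0) for k in range(1, N)] == [k**(k - 1) for k in range(1, N)]
```

At $n=0$ the equation degenerates to $f=t\exp(f)$, whose coefficients $k^{k-1}$ count labelled rooted trees, consistent with the trivial-coproduct case having no black vertices.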
Let us generalize the construction of $W(\chi)$ of Definition [Definition 13](#dfn:walg){reference-type="ref" reference="dfn:walg"} to get an example of a $(\mathcal{G}^C)^!$-algebra that allows us to show that $\dim((\mathcal{G}^C)^!(4))\geq 4+3n$.\ **Definition 28**. Let $\chi$ be a finite alphabet and $\mathbf{W}_C(\chi)$ be the span of finite words on $\chi$ with the following extra decorations: either one letter is pointed with a dot, or there is an arrow from one letter to another; the arrow is linearly labeled by $V^*$. Let us write $\overset{i}{\curvearrowright}$ instead of $\overset{e^*_i}{\curvearrowright}$ and $\overset{i.j}{\curvearrowright}$ instead of $\overset{\mu(e^*_i,e^*_j)}{\curvearrowright}$.\ Let $W_C(\chi)$ be the quotient of $\mathbf{W}_C(\chi)$ by the following relations: letters commute with each other (the dot and the arrow follow the letter), reversing the arrow changes the sign, and $\overset{\overset{i}{\curvearrowright}}{ab}cv=\overset{\overset{i}{\curvearrowright}}{cb}av+ \overset{\overset{i}{\curvearrowright}}{ac}bv$ for any $a,b,c\in \chi$ and $v$ a finite word.
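The three-term relation above can be oriented into a rewriting rule moving a chosen letter into the source of the arrow; this is how a spanning set of the multilinear part is computed in the proof of Proposition 30 below. The following Python sketch — our own illustrative encoding of an arrowed multilinear word as a pair (arrow, remaining letters), not part of the paper — checks on the four-letter case that two different reduction orders reach the same normal form:

```python
def canon(pair, rest, coeff):
    # Reversing the arrow changes the sign, so store the arrow with its
    # endpoints sorted and track the sign.
    s, t = pair
    if s > t:
        return (t, s), rest, -coeff
    return (s, t), rest, coeff

def add(vec, pair, rest, coeff):
    # Accumulate a term into a linear combination, dropping zero terms.
    pair, rest, coeff = canon(pair, rest, coeff)
    key = (pair, rest)
    vec[key] = vec.get(key, 0) + coeff
    if vec[key] == 0:
        del vec[key]

def rewrite(pair, rest, alpha):
    # One application of the defining relation, moving the chosen letter
    # `alpha` from the plain word into the source of the arrow.
    s, t = pair
    out = {}
    add(out, (alpha, t), (rest - {alpha}) | {s}, 1)
    add(out, (alpha, s), (rest - {alpha}) | {t}, -1)
    return out

def normalize(vec):
    # Rewrite until every arrow starts at the minimal letter 'a'.
    changed = True
    while changed:
        changed = False
        for key in list(vec):
            if key not in vec:
                continue
            pair, rest = key
            if 'a' in rest:
                coeff = vec.pop(key)
                for (p, r), v in rewrite(pair, rest, 'a').items():
                    add(vec, p, r, coeff * v)
                changed = True
    return vec

# Reduce the arrowed word (c -> d) a b in two different orders.
start = (('c', 'd'), frozenset({'a', 'b'}))
direct = normalize({start: 1})            # move 'a' in immediately
detour = normalize(rewrite(*start, 'b'))  # first move 'b', then 'a'
```

Both reductions agree on the normal form with arrows starting at $a$, matching the computation in the proof of Proposition 30; the three arrowed normal forms account for the $3n$ summand in $4+3n$.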
Since the letters commute, let us write the elements of $W(\chi)$ with the pointed letter (or arrowed letters) at the start.\ Let $\settowidth{\myheight}{x}% \raisebox{-.1\myheight}{\tikz[baseline=(char.base)]{% \node[shape=circle,draw,minimum size=\myheight*\myheight*.3,inner sep=0.5pt](char){$x$};}}$ and $\settowidth{\myheight}{g}% \raisebox{-.1\myheight}{\tikz[baseline=(char.base)]{% \node[shape=circle,draw,minimum size=\myheight*\myheight*.3,inner sep=0.5pt](char){$g$};}}_i$ be the products on $W(\chi)$ defined by: - $\dot{a}v\settowidth{\myheight}{x}% \raisebox{-.1\myheight}{\tikz[baseline=(char.base)]{% \node[shape=circle,draw,minimum size=\myheight*\myheight*.3,inner sep=0.5pt](char){$x$};}}\dot{b}w=\dot{a}vbw$ - $\overset{\overset{i}{\curvearrowright}}{ab}v\settowidth{\myheight}{x}% \raisebox{-.1\myheight}{\tikz[baseline=(char.base)]{% \node[shape=circle,draw,minimum size=\myheight*\myheight*.3,inner sep=0.5pt](char){$x$};}}\dot{c}w= \overset{\overset{i}{\curvearrowright}}{ab}vcw$ - $\dot{a}v\settowidth{\myheight}{g}% \raisebox{-.1\myheight}{\tikz[baseline=(char.base)]{% \node[shape=circle,draw,minimum size=\myheight*\myheight*.3,inner sep=0.5pt](char){$g$};}}_i\dot{b}w=\overset{\overset{i}{\curvearrowright}}{ab}vw$ - $\overset{\overset{i}{\curvearrowright}}{ab}v\settowidth{\myheight}{g}% \raisebox{-.1\myheight}{\tikz[baseline=(char.base)]{% \node[shape=circle,draw,minimum size=\myheight*\myheight*.3,inner sep=0.5pt](char){$g$};}}_j\dot{c}w= \overset{\overset{i.j}{\curvearrowright}}{ab}vcw$ and all other cases give $0$. **Proposition 29**.
*The algebra $(W_C(\chi),\settowidth{\myheight}{x}% \raisebox{-.1\myheight}{\tikz[baseline=(char.base)]{% \node[shape=circle,draw,minimum size=\myheight*\myheight*.3,inner sep=0.5pt](char){$x$};}},\{ \settowidth{\myheight}{g}% \raisebox{-.1\myheight}{\tikz[baseline=(char.base)]{% \node[shape=circle,draw,minimum size=\myheight*\myheight*.3,inner sep=0.5pt](char){$g$};}}_i \} )$ is a $(\mathcal{G}^C)^!$-algebra generated by $\chi$.* *Proof.* Indeed, $\dot{a}v=\dot{a}\settowidth{\myheight}{x}% \raisebox{-.1\myheight}{\tikz[baseline=(char.base)]{% \node[shape=circle,draw,minimum size=\myheight*\myheight*.3,inner sep=0.5pt](char){$x$};}}w$ with $w$ the word $v$ with a dot on a letter (say the first one, for example) and $\overset{\overset{i}{\curvearrowright}}{ab}v=(\dot{a}\settowidth{\myheight}{g}% \raisebox{-.1\myheight}{\tikz[baseline=(char.base)]{% \node[shape=circle,draw,minimum size=\myheight*\myheight*.3,inner sep=0.5pt](char){$g$};}}_i\dot{b})\settowidth{\myheight}{x}% \raisebox{-.1\myheight}{\tikz[baseline=(char.base)]{% \node[shape=circle,draw,minimum size=\myheight*\myheight*.3,inner sep=0.5pt](char){$x$};}}w$ so $(W(\chi),\settowidth{\myheight}{x}% \raisebox{-.1\myheight}{\tikz[baseline=(char.base)]{% \node[shape=circle,draw,minimum size=\myheight*\myheight*.3,inner sep=0.5pt](char){$x$};}},\{ \settowidth{\myheight}{g}% \raisebox{-.1\myheight}{\tikz[baseline=(char.base)]{% \node[shape=circle,draw,minimum size=\myheight*\myheight*.3,inner sep=0.5pt](char){$g$};}}_i\} )$ is generated by $\chi$.\ The product $\settowidth{\myheight}{g}% \raisebox{-.1\myheight}{\tikz[baseline=(char.base)]{% \node[shape=circle,draw,minimum size=\myheight*\myheight*.3,inner sep=0.5pt](char){$g$};}}$ is skew-symmetric since $\dot{a}\settowidth{\myheight}{g}% \raisebox{-.1\myheight}{\tikz[baseline=(char.base)]{% \node[shape=circle,draw,minimum size=\myheight*\myheight*.3,inner sep=0.5pt](char){$g$};}}_i\dot{b}=\overset{\overset{i}{\curvearrowright}}{ab}=
\overset{\overset{i}{\curvearrowleft}}{ba}= -\overset{\overset{i}{\curvearrowright}}{ba}=-\dot{b}\settowidth{\myheight}{g}% \raisebox{-.1\myheight}{\tikz[baseline=(char.base)]{% \node[shape=circle,draw,minimum size=\myheight*\myheight*.3,inner sep=0.5pt](char){$g$};}}_i\dot{a}$. The $7$ relations of $(\mathcal{G}^C)^!$ are easily checked. ◻ **Proposition 30**. *We have $\dim((\mathcal{G}^C)^!(4))\geq 4+3n$.* *Proof.* Let $A=W(\{a,b,c,d\})$. Let us compute the dimension of $\textnormal{Mult}(A(4))$, the multilinear part of $A(4)$. Let us write the words of $\textnormal{Mult}(A(4))$ such that the arrow always starts from $a$ to the second letter and, aside from that, the letters are in lexicographic order.\ The rewriting rule $\overset{\overset{i}{\curvearrowright}}{\beta\gamma}\alpha\mapsto \overset{\overset{i}{\curvearrowright}}{\alpha\gamma}\beta- \overset{\overset{i}{\curvearrowright}}{\alpha\beta}\gamma$ is confluent.\ Indeed, $\overset{\overset{i}{\curvearrowright}}{cd}ab= \overset{\overset{i}{\curvearrowright}}{bd}ac-\overset{\overset{i}{\curvearrowright}}{bc}ad= (\overset{\overset{i}{\curvearrowright}}{ad}bc-\overset{\overset{i}{\curvearrowright}}{ab}cd)- (\overset{\overset{i}{\curvearrowright}}{ac}bd-\overset{\overset{i}{\curvearrowright}}{ab}cd)= \overset{\overset{i}{\curvearrowright}}{ad}bc-\overset{\overset{i}{\curvearrowright}}{ac}bd$.\ Hence $\textnormal{Mult}(A(4))$ is spanned by the words $\dot{a}bcd$, $a\dot{b}cd$, $ab\dot{c}d$, $abc\dot{d}$, $\overset{\overset{i}{\curvearrowright}}{ab}cd$, $\overset{\overset{i}{\curvearrowright}}{ac}bd$ and $\overset{\overset{i}{\curvearrowright}}{ad}bc$.\ Since $A$ is a $(\mathcal{G}^C)^!$-algebra on $4$ generators, $\dim((\mathcal{G}^C)^!(4))\geq\dim(\textnormal{Mult}(A(4)))=4+3n$. ◻ *Remark 11*.
The algebra $(W(\chi),\settowidth{\myheight}{x}% \raisebox{-.1\myheight}{\tikz[baseline=(char.base)]{% \node[shape=circle,draw,minimum size=\myheight*\myheight*.3,inner sep=0.5pt](char){$x$};}},\{ \settowidth{\myheight}{g}% \raisebox{-.1\myheight}{\tikz[baseline=(char.base)]{% \node[shape=circle,draw,minimum size=\myheight*\myheight*.3,inner sep=0.5pt](char){$g$};}}_i\} )$ is in fact the free $(\mathcal{G}^C)^!$-algebra generated by $\chi$. Let us consider the three following orders to get Gröbner bases: - the degree-lexicographic permutation order with $x_*>y_*>g_*$ gives Figure [6](#fg:StGBg1){reference-type="ref" reference="fg:StGBg1"}; - the weighted permutation reverse-degree-lexicographic order with $g_*>y_*>x_*$ and $g_*$ of degree $1$ gives Figure [7](#fg:StGBg2){reference-type="ref" reference="fg:StGBg2"}; - and the reverse-degree-lexicographic permutation order with $x_*>y_*>g_*$ gives Figure [8](#fg:StGBg3){reference-type="ref" reference="fg:StGBg3"}. **Proposition 31**. *The rewriting systems displayed in Figure [6](#fg:StGBg1){reference-type="ref" reference="fg:StGBg1"}, Figure [7](#fg:StGBg2){reference-type="ref" reference="fg:StGBg2"} and Figure [8](#fg:StGBg3){reference-type="ref" reference="fg:StGBg3"} are confluent. (They are in fact Gröbner bases.)* *Proof.* As in Proposition [Proposition 16](#prp:GB){reference-type="ref" reference="prp:GB"}, remarking that there are $4+3n$ normal forms in arity $4$ for each rewriting system and that $\dim((\mathcal{G}^C)^!(4))\geq 4+3n$ is enough.\ Since we have a monomial order and the rewriting rules are quadratic in a binary operad, checking arity $4$ is enough. ◻ **Proposition 32**. *We have $\dim((\mathcal{G}^C)^!(m))=(n+1)m-n$ for all $m\geq 1$, hence its exponential generating series is $((n+1)t-n)\exp(t)+n$.* *Proof.* It suffices to count the normal forms of a rewriting system, for example the one of Figure [6](#fg:StGBg1){reference-type="ref" reference="fg:StGBg1"}.
Let $m\geq 2$ and let us count the number of normal forms in arity $m$. Those are right combs with at most one $g^k_*$, with all the $x_*$ above the $g^k_*$ and the $y_*$, and all the $y_*$ below the $g^k_*$ and the $x_*$.\ Hence the normal forms are determined by the number of occurrences of $x_*$ and $g^k_*$, and have either zero or one occurrence of $g^k_*$. If there is no $g^k_*$, then one can have from $0$ to $m-1$ occurrences of $x_*$. If there is one $g^k_*$, then one can have from $0$ to $m-2$ occurrences of $x_*$ and $n$ choices for the $g^k_*$ that appears.\ Hence the number of normal forms in arity $m$ is $m+n(m-1)=(n+1)m-n$. ◻ **Theorem 5**. *The operad $\mathcal{G}^C$ is Koszul.* *Proof.* We have a quadratic Gröbner basis for $(\mathcal{G}^C)^!$, hence $(\mathcal{G}^C)^!$ is Koszul, hence $\mathcal{G}^C$ is Koszul. ◻ **Proposition 33**. *The exponential generating series of $\mathrm{Greg}^C$ satisfies: $$f_{\mathrm{Greg}^C}=t\exp(f_{\mathrm{Greg}^C})+n(\exp(f_{\mathrm{Greg}^C})-f_{\mathrm{Greg}^C}-1)$$* *Proof.* An inspection of the species $\mathcal{G}^V$, which is the species of rooted Greg trees whose black vertices are labelled by $\{e_1,\dots,e_n\}$, shows that: $$\mathcal{G}^V=X\cdot E(\mathcal{G}^V) + nE_{\geq2}(\mathcal{G}^V)$$ With the usual notation of species, $X$ is the singleton species, $E$ is the species of sets and $E_{\geq2}$ is the species of sets with at least two elements.\ The above equation means that a rooted Greg tree is either a white vertex and a set of rooted Greg trees connected to it, or a black vertex labelled by $e_k$ (so $n$ possibilities) and a set of at least $2$ rooted Greg trees connected to it.\ Since $\mathcal{G}^V$ is the underlying species of $\mathrm{Greg}^C$, we have that: $$f_{\mathrm{Greg}^C}=t\exp(f_{\mathrm{Greg}^C})+n(\exp(f_{\mathrm{Greg}^C})-f_{\mathrm{Greg}^C}-1)$$ ◻ *Remark 12*.
We can recover the recursive formula enumerating the rooted Greg trees from [@GenGreg Proposition 2.1] by solving a differential equation:\ We have $h=((n+1)t+n)\exp(-t)-n$. Hence $d_1h=-((n+1)t-1)\exp(-t)$ and $d_2h=(t+1)\exp(-t)-1$. Hence $$(n+2)h + d_1h -(n+1)^2d_2h=1$$ Let $f$ be such that $h(f(t,n),n)=h\circ(f,\textnormal{id})=t$. We have that $d_1h\circ(f,\textnormal{id}).d_1f=1$ and\ $d_1h\circ(f,\textnormal{id}).d_2f+d_2h\circ(f,\textnormal{id})=0$, hence: $$((n+2)t-1)d_1f+(n+1)^2d_2f=-1$$ Writing $f(t,n)=\sum\frac{g_k(n)}{k!}t^k$ with $g_k$ polynomials in $n$, we get the following recursive relation: - $g_1(n)=1$ - $g_{k+1}(n)=(n+2)kg_k(n)+(n+1)^2g_k'(n)$\ **Theorem 6**. *The operad $\mathcal{G}^C$ is isomorphic to $\mathrm{Greg}^C$.* *Proof.* We know that $\dim((\mathcal{G}^C)^!(m))=(n+1)m-n$, hence its exponential generating series is $f_{(\mathcal{G}^C)^!}=((n+1)t-n)\exp(t)+n$. Let $h(t,n)=-f_{(\mathcal{G}^C)^!}(-t)=((n+1)t+n)\exp(-t)-n$. Since $\mathcal{G}^C$ is Koszul, we know that $h(f_{\mathcal{G}^C}(t,n),n)=t$. Hence: $$t=((n+1)f_{\mathcal{G}^C}+n)\exp(-f_{\mathcal{G}^C})-n$$ Hence: $$f_{\mathcal{G}^C}=t\exp(f_{\mathcal{G}^C})+n(\exp(f_{\mathcal{G}^C})-f_{\mathcal{G}^C}-1)$$ This shows that $f_{\mathcal{G}^C}=f_{\mathrm{Greg}^C}$. Since we have a surjective morphism from $\mathcal{G}^C$ to $\mathrm{Greg}^C$ and equality of the dimensions of the components, we have that $\mathcal{G}^C$ is isomorphic to $\mathrm{Greg}^C$. ◻ **Corollary 7**. *The operad $\mathrm{Greg}^C$ is binary, quadratic and Koszul.* **Definition 34**. Let $\mathrm{Greg}_n$ be the operad $\mathrm{Greg}^{(V,0)}$, with $0$ the trivial coproduct on $V$. **Corollary 8**. *The operad $\bigvee_\mathrm{Lie}^{n+1}\mathrm{PreLie}$ is isomorphic to $\mathrm{Greg}^{(V,\Delta_{\max})}$ with: $$\Delta_{\max}:e_k\mapsto \sum_{i,j\vert \max(i,j)=k}e_i\otimes e_j$$ Moreover $\bigvee_\mathrm{Lie}^{n+1}\mathrm{PreLie}$ is filtered by the grading of the rooted Greg trees by the number of black vertices.
The associated graded operad is $\mathrm{Greg}_n$.* **Corollary 9**. *The operad $\bigvee_\mathrm{Lie}^{n+1}\mathrm{PreLie}$ is Koszul.* *Remark 13*. This fact is not a direct consequence of the definition of $\bigvee_\mathrm{Lie}^{m+1}\mathrm{PreLie}$ as a coproduct. Indeed, the fiber coproduct of two Koszul operads $\mathcal{P}$ and $\mathcal{Q}$ over a Koszul operad $\mathcal{R}$ is not necessarily Koszul. For instance, the operads $\bigvee_\mathrm{Lie}^2\mathrm{Ass}$ and $\bigvee_\mathrm{Lie}^2\mathrm{Poiss}$ are not Koszul, which can be checked by comparing the exponential generating series of those operads with those of their Koszul duals.\ Worse, freeness of $\mathcal{P}$ and $\mathcal{Q}$ as right $\mathcal{R}$-modules does not solve the issue, as shown by the example $\bigvee_\mathrm{Lie}^2\mathrm{Poiss}$.\ It seems that freeness as left modules solves this issue. Indeed, for instance, the operads $\bigvee_{\mathrm{Com}}^2\mathrm{Poiss}$ and $\bigvee_{\mathrm{Com}}^2\mathrm{Zinb}$ are Koszul. However, the author does not know how to prove that left freeness ensures that Koszulness is preserved.\ Left and right freeness are defined at the very beginning of the next section. # Nielsen-Schreier and freeness properties {#seq:free} We have seen that $\mathrm{Greg}^C$ is Koszul using a quadratic Gröbner basis. However, one Gröbner basis was enough to show this fact. Three different Gröbner bases were computed, each with particular normal forms. Indeed, theorems of Dotsenko [@Free] and of Dotsenko and Umirbaev [@NSPrt] allow one to show some freeness properties using Gröbner bases.\ Let us recall those freeness properties and show that they hold for $\mathrm{Greg}^C$.\ Let $\mathcal{P}$ be an operad.
A *left module* $L$ over $\mathcal{P}$ is a species $L$ with a morphism $\mathcal{P}\circ L\to L$ satisfying the usual axioms.\ A *right module* $R$ over $\mathcal{P}$ is a species $R$ with a morphism $R\circ \mathcal{P}\to R$ satisfying the usual axioms.\ A *bimodule* over $\mathcal{P}$ is a left and right module over $\mathcal{P}$ such that the two structures commute.\ Let $\mathcal{Q}$ be an operad such that we have a morphism of operads $\mathcal{P}\to\mathcal{Q}$; then $\mathcal{Q}$ has a canonical structure of a left and a right module over $\mathcal{P}$. (It has in fact a structure of bimodule over $\mathcal{P}$.)\ A left module $L$ (resp. right module $R$) over $\mathcal{P}$ is said to be *free* if $L$ (resp. $R$) is isomorphic to $\mathcal{P}\circ \mathcal{X}$ (resp. $\mathcal{X}\circ \mathcal{P}$) for some species $\mathcal{X}$, the module structure being given by the operadic composition in $\mathcal{P}$.\ In this context, $\circ$ is the plethysm of species, which can be interpreted as the composition of Schur functors.\ We refer to [@Species] for more details on the plethysm of species.\ Let $\mathcal{F}(E)/(R)$ and $\mathcal{F}(E\sqcup F)/(R\sqcup S)$ be presentations of the operads $\mathcal{P}$ and $\mathcal{Q}$ respectively, such that $R\sqcup S$ is a Gröbner basis for some monomial order and $R$ is a Gröbner basis for the restriction of this monomial order.\ Let us recall the following theorems: **Theorem 10** (*left freeness version*). *[@Free Theorem 4] Assume that the roots of the leading terms of $S$ are elements of $F$. Then $\mathcal{Q}$ is free as a left $\mathcal{P}$-module.* **Theorem 11** (*right freeness version*). *[@Free Theorem 4] Assume that the vertices of the leading terms of $S$ whose children are all leaves are elements of $F$. Then $\mathcal{Q}$ is free as a right $\mathcal{P}$-module.* Let $C$ be a coassociative cocommutative coalgebra and $C'$ a subcoalgebra of $C$. **Corollary 12**.
*The operad $\mathrm{Greg}^C$ is free as a left and as a right $\mathrm{Greg}^{C'}$-module (and not as a bimodule).* *Proof.* By reversing the order, one can go from a Gröbner basis of an operad to a Gröbner basis of its Koszul dual, which exchanges the leading terms and the normal forms.\ Hence the Gröbner basis from Figure [7](#fg:StGBg2){reference-type="ref" reference="fg:StGBg2"} witnesses the left freeness and the Gröbner basis from Figure [8](#fg:StGBg3){reference-type="ref" reference="fg:StGBg3"} witnesses the right freeness. ◻ **Corollary 13**. *The operad $\bigvee_\mathrm{Lie}^{n+1}\mathrm{PreLie}$ is free as a left and as a right $\bigvee_\mathrm{Lie}^n\mathrm{PreLie}$-module (and not as a bimodule).* Let us now define the Nielsen-Schreier property. **Definition 35**. An operad $\mathcal{P}$ has the *Nielsen-Schreier property* if any subalgebra of a free $\mathcal{P}$-algebra is free. Let us recall the following theorem: **Theorem 14**. *[@NSPrt Theorem 4.1] Let $\mathcal{P}$ be an operad and $E$ a set of generators of $\mathcal{P}$ satisfying the following conditions:* - *$\mathcal{P}$ admits a Gröbner basis for the reverse path-lexicographic ordering such that, for each leading term, the smallest leaf is directly connected to the root.* - *$\mathcal{P}$ admits a Gröbner basis such that each leading term is a left comb in which the smallest leaf and the second smallest leaf are directly connected to the same vertex.* *Then $\mathcal{P}$ has the Nielsen-Schreier property.* **Corollary 15**. *The operad $\mathrm{Greg}^C$ has the Nielsen-Schreier property.* *Proof.* The Gröbner basis from Figure [6](#fg:StGBg1){reference-type="ref" reference="fg:StGBg1"} witnesses the first condition and the Gröbner basis from Figure [7](#fg:StGBg2){reference-type="ref" reference="fg:StGBg2"} witnesses the second condition. ◻ **Corollary 16**.
*The operad $\bigvee_\mathrm{Lie}^n\mathrm{PreLie}$ has the Nielsen-Schreier property.* # Explicit computation of the generators Let us compute the explicit generators of $\bigvee_\mathrm{Lie}^{n+1}\mathrm{PreLie}$ as a left $\bigvee_\mathrm{Lie}^n\mathrm{PreLie}$-module. To do so, let us mimic the proof of Dotsenko [@FPRed].\ A structure of cyclic operad on an operad $\mathcal{P}$ is given by an action of $\mathfrak{S}_{n+1}$ on $\mathcal{P}(n)$ compatible with the operadic structure. This is equivalently given by an action of $\tau=(1,2,\ldots,n+1)$ on $\mathcal{P}(n)$ satisfying: - $\tau(\mu\circ_i\nu)=\tau(\mu)\circ_{i+1}\nu$ for $i<m$ with $m$ the arity of $\mu\circ_i\nu$; - $\tau(\mu\circ_m\nu)=\tau(\nu)\circ_1\tau(\mu)$. It is known that $\mathrm{Lie}$ is a cyclic operad; $\mathrm{CycLie}$ is the species underlying this cyclic operad, so as vector spaces we have $\mathrm{Lie}(k)=\mathrm{CycLie}(k+1)$.\ In the particular case of $\mathrm{CycLie}$, the action of $\tau$ is given by $\tau(l)=l$.\ Let us introduce the following notation: $x$ is the generator of $\mathrm{PreLie}$ without symmetries, and $x=\mu+l$ with $\mu$ symmetric and $l$ skew-symmetric. Then $l$ is the generator of the suboperad $\mathrm{Lie}$ of $\mathrm{PreLie}$ and $\mu$ is magmatic.\ We want to prove that $\bigvee_\mathrm{Lie}^{n+1}\mathrm{PreLie}\simeq\bigvee_\mathrm{Lie}^n\mathrm{PreLie}\circ\mathcal{F}(\bar{\mathcal{F}}^{(n)}(\mathrm{CycLie}))$ with $\mathcal{F}$ the free operad functor, $\bar{\mathcal{F}}$ the reduced free operad functor such that $\mathcal{F}(\mathcal{X})=\bar{\mathcal{F}}(\mathcal{X})\oplus\mathcal{X}$ and $\bar{\mathcal{F}}^{(n)}$ the $n$-th iteration of $\bar{\mathcal{F}}$.\ Let us recall the following theorem from [@FPRed]: **Theorem 17**. *Let $\mathcal{Y}$ be the subspecies of $\mathrm{PreLie}$ such that $y\in\mathcal{Y}$ if and only if $y=(\mu\circ_2 a)\circ_1 b$ with $a,b\in\mathrm{Lie}$.
Then:*

- *$\mathcal{Y}$ is isomorphic to $\mathrm{CycLie}$ as a species;*

- *Let $\mathcal{P}(\mathcal{Y})$ be the suboperad of $\mathrm{PreLie}$ generated by $\mathcal{Y}$; then $\mathcal{P}(\mathcal{Y})$ is free;*

- *The left $\mathrm{Lie}$-submodule of $\mathrm{PreLie}$ generated by $\mathcal{P}(\mathcal{Y})$ is free and coincides with $\mathrm{PreLie}$.*

Let us use the exact same technique as the one used in [@FPRed] to explicitly compute the generators of $\bigvee_\mathrm{Lie}^{n+1}\mathrm{PreLie}$ as a left $\bigvee_\mathrm{Lie}^n\mathrm{PreLie}$-module.\
The idea is the following: we introduce some explicit generators of $\bigvee_\mathrm{Lie}^{n+1}\mathrm{PreLie}$ as a left $\mathrm{Lie}$-module and as a left $\bigvee_\mathrm{Lie}^n\mathrm{PreLie}$-module. We then define a collection of surjective morphisms of species involving those generators. We then compute the dimensions of the species involved to show that the morphisms are isomorphisms. We then conclude that the generators we introduced freely generate $\bigvee_\mathrm{Lie}^{n+1}\mathrm{PreLie}$.\
Let $\mathcal{Y}_n$ be the subspecies of $\bigvee_\mathrm{Lie}^{n+1}\mathrm{PreLie}$ generated by the $(c_n\circ_2 a)\circ_1 b$ such that $a,b\in\bigvee_\mathrm{Lie}^n\mathrm{PreLie}$. Let $\mathcal{X}_n$ be the subspecies of $\mathcal{Y}_n$ generated by the $x=(c_n\circ_2 a)\circ_1 b$ with $a,b\in\mathrm{Lie}$. Let $\mathcal{P}(\mathcal{Y}_n)$ be the suboperad of $\bigvee_\mathrm{Lie}^{n+1}\mathrm{PreLie}$ generated by $\mathcal{Y}_n$. And let $\mathcal{Z}_n$ be the species inductively defined by $\mathcal{Z}_0=\mathcal{F}(\mathcal{Y}_0)$ and $\mathcal{Z}_{n+1}=\mathcal{Z}_n\circ\mathcal{F}(\mathcal{Y}_{n+1})$.\

**Lemma 1**.
*We have a surjective morphism of $\mathrm{Lie}$-bimodules from $\mathrm{Lie}\circ\mathcal{Z}_n$ to $\bigvee_\mathrm{Lie}^{n+1}\mathrm{PreLie}$.*

*Proof.* Since $\mathcal{Y}_k$ is a subspecies of $\bigvee_\mathrm{Lie}^{n+1}\mathrm{PreLie}$, we have a morphism of species from $\mathcal{F}(\mathcal{Y}_k)$ to $\bigvee_\mathrm{Lie}^{n+1}\mathrm{PreLie}$. Hence we have a morphism from $\mathcal{Z}_n$ to $\bigvee_\mathrm{Lie}^{n+1}\mathrm{PreLie}$. Since $\mathrm{Lie}$ is a suboperad of $\bigvee_\mathrm{Lie}^{n+1}\mathrm{PreLie}$, we have a morphism of left $\mathrm{Lie}$-modules from $\mathrm{Lie}\circ\mathcal{Z}_n$ to $\bigvee_\mathrm{Lie}^{n+1}\mathrm{PreLie}$.\
This is a morphism of right $\mathrm{Lie}$-modules since each $\mathcal{F}(\mathcal{Y}_k)$ is a right $\mathrm{Lie}$-module.\
Moreover this morphism is surjective since $l,\mu,c_1,\dots,c_n$ are in $\mathrm{Lie}\circ\mathcal{Z}_n$ and are generators of $\bigvee_\mathrm{Lie}^{n+1}\mathrm{PreLie}$. ◻

Let us define the following filtration on $\mathrm{Lie}\circ\mathcal{Z}_n$:

**Definition 36**. Let us define the weight of an element of $\mathcal{F}(\mathcal{Y}_k)$ as the usual weight in a free operad, which is the number of generators needed in the composition. We then define inductively the weight of an element $\gamma(z,f_1,\dots,f_k)$ of $\mathcal{Z}_n$ with $z\in\mathcal{Z}_{n-1}$ and $f_i\in\mathcal{F}(\mathcal{Y}_n)$ as the total sum of the weights of those elements.\
For an element $\alpha=\gamma(l,z_1,\dots,z_r)$ of $\mathrm{Lie}\circ\mathcal{Z}_n$ such that $z_i\in\mathcal{Z}_n$ of weight $w_i$ and $l\in\mathrm{Lie}$ of arity $r$, let $w=r+\sum w_i$ be the weight of $\alpha$. We define the filtration by $\alpha\in F^w(\mathrm{Lie}\circ\mathcal{Z}_n)$ with $w$ the weight of $\alpha$.

**Proposition 37**. *This filtration is compatible with the $\mathrm{Lie}$-bimodule structure.
(It is in fact a filtration by infinitesimal $\mathrm{Lie}$-bimodules.)*

*Proof.* Indeed, we have that $l(F^p(\mathrm{Lie}\circ\mathcal{Z}_n),F^q(\mathrm{Lie}\circ\mathcal{Z}_n))\subseteq F^{p+q}(\mathrm{Lie}\circ\mathcal{Z}_n)$ and\
$F^p(\mathrm{Lie}\circ\mathcal{Z}_n)\circ\mathrm{Lie}\subseteq F^p(\mathrm{Lie}\circ\mathcal{Z}_n)$. ◻

Hence this filtration induces a filtration on $\bigvee_\mathrm{Lie}^{n+1}\mathrm{PreLie}$ by the surjective morphism of $\mathrm{Lie}$-bimodules of the previous lemma.\

**Lemma 2**. *We have a surjective morphism of species from $\mathrm{CycLie}$ to $\mathcal{X}_n$.*

*Proof.* Let us compute Relation ([\[eq:vpl\]](#eq:vpl){reference-type="ref" reference="eq:vpl"}) with $x=\mu+l$. We get: $$\begin{gathered} (l \circ_1 c_n - (c_n \circ_1 l ).(2\;3) - c_n \circ_2 l ) - (l \circ_1 c_n - (c_n \circ_1 l ).(2\;3) - c_n \circ_2 l ).(2\;3)\\ + (\mu \circ_1 c_n - (c_n \circ_1 \mu ).(2\;3) - c_n \circ_2 \mu ) - (\mu \circ_1 c_n - (c_n \circ_1 \mu ).(2\;3) - c_n \circ_2 \mu ).(2\;3)\\ + \sum_{i,j\mid \max(i,j)=n}(c_i\circ_1 c_j - (c_i\circ_1 c_j).(2\;3)) = 0 \end{gathered}$$ Let us rewrite it a bit: $$\begin{gathered} 2\times (c_n \circ_2 l) + (c_n \circ_1 l ).(2\;3) - (c_n \circ_1 l )=\\ l \circ_1 c_n - (l \circ_1 c_n ).(2\;3)+\\ (\mu \circ_1 c_n - (c_n \circ_1 \mu ).(2\;3)) - (\mu \circ_1 c_n - (c_n \circ_1 \mu ).(2\;3)).(2\;3)\\ + \sum_{i,j\mid \max(i,j)=n}(c_i\circ_1 c_j - (c_i\circ_1 c_j).(2\;3)) \end{gathered}$$ Let us remark that elements of the left-hand side are in $F^2\bigvee_\mathrm{Lie}^{n+1}\mathrm{PreLie}$ and elements of the right-hand side are in $F^3\bigvee_\mathrm{Lie}^{n+1}\mathrm{PreLie}$. Indeed, on the left-hand side we have compositions of the identity (arity $1$) with elements of $\mathcal{F}(\mathcal{Y}_n)$ having exactly one occurrence of an element of $\{\mu,c_1,\dots,c_n\}$, hence degree $2$.
On the right-hand side, we have either compositions of $l$ (arity $2$) with elements of $\mathcal{F}(\mathcal{Y}_n)$ having exactly one occurrence of an element of $\{\mu,c_1,\dots,c_n\}$, hence degree $3$; or compositions of the identity (arity $1$) with elements of $\mathcal{F}(\mathcal{Y}_n)$ having exactly two occurrences of an element of $\{\mu,c_1,\dots,c_n\}$, hence degree $3$.\
Let us consider $\mathrm{gr}_F\mathcal{X}_n$, the graded species associated to the restriction of the filtration $F$ to $\mathcal{X}_n$. As species, we have that $\mathrm{gr}_F\mathcal{X}_n$ is isomorphic to $\mathcal{X}_n$.\
Moreover, in $\mathrm{gr}_F\mathcal{X}_n$ the above relation gives: $$2\times (c_n \circ_2 l) + (c_n \circ_1 l ).(2\;3) - (c_n \circ_1 l )=0$$ Let us denote by $r$ this relation and compute $\frac{1}{3}(r+r.(1\;3))$: $$\frac{1}{3}(2\times (c_n \circ_2 l) + (c_n \circ_1 l ).(2\;3) - (c_n \circ_1 l )+ 2\times (c_n \circ_2 l).(1\;3) + (c_n \circ_1 l ).(1\;2\;3) - (c_n \circ_1 l ).(1\;3))=0$$ We get: $$\tag{cyc} (c_n \circ_2 l) = (c_n \circ_1 l )$$ This relation allows us to define a morphism of species from $\mathrm{CycLie}$ to $\mathrm{gr}_F\mathcal{X}_n$ by sending $\tilde{\textnormal{id}}\mapsto c_n$ and $\tilde{l}\mapsto (c_n \circ_2 l)$, with $\tilde{\textnormal{id}}$ and $\tilde{l}$ the identity and the Lie bracket of $\mathrm{CycLie}$. Indeed, we have an action of $\tau=(1\;2\;3)$ on $c_n \circ_2 l$ and $(c_n \circ_2 l).\tau=(c_n \circ_1 l)$, which gives $(c_n \circ_2 l)$ by the relation above, hence $\tau(\tilde{l})=\tilde{l}$.\
This morphism is surjective since $\mathrm{gr}_F\mathcal{X}_n$ is a right $\mathrm{Lie}$-module generated by $c_n$.\
Hence we have a surjective morphism of species from $\mathrm{CycLie}$ to $\mathcal{X}_n$. ◻

**Lemma 3**.
*We have a surjective morphism of species from $\mathcal{X}_n\circ\mathcal{Z}_{n-1}$ to $\mathcal{Y}_n$.*

*Proof.* Let $\gamma(c_n,y_1,\dots,y_k)$ be a monomial element of $\mathcal{Y}_n$. Since $y_i\in\bigvee_\mathrm{Lie}^n\mathrm{PreLie}$ and $\mathrm{Lie}\circ\mathcal{Z}_{n-1}$ surjects onto $\bigvee_\mathrm{Lie}^n\mathrm{PreLie}$, we have $l_i$ such that: $$y_i=\gamma(l_i,\alpha_{(i,1)},\dots,\alpha_{(i,r_i)})$$ with $l_i\in\mathrm{Lie}$ and $\alpha_{(i,j)}\in\mathcal{Z}_{n-1}$.\
Let $\beta=\gamma(c_n,l_1,\dots,l_k)$; we have $\beta\in\mathcal{X}_n$, hence $\gamma(c_n,y_1,\dots,y_k)$ is in the image of $\mathcal{X}_n\circ\mathcal{Z}_{n-1}$. ◻

Let us summarize the morphisms of species we have: $$\begin{gathered} \label{eq:surj} \bigvee_\mathrm{Lie}^n\mathrm{PreLie}\circ\mathcal{F}(\mathrm{CycLie}\circ\mathcal{Z}_{n-1})\twoheadrightarrow \bigvee_\mathrm{Lie}^n\mathrm{PreLie}\circ\mathcal{F}(\mathcal{X}_n\circ\mathcal{Z}_{n-1}) \twoheadrightarrow\\ \bigvee_\mathrm{Lie}^n\mathrm{PreLie}\circ\mathcal{F}(\mathcal{Y}_n)\twoheadrightarrow \bigvee_\mathrm{Lie}^{n+1}\mathrm{PreLie}\end{gathered}$$ One last ingredient is needed: the equality of dimensions of the components to show that those morphisms are in fact isomorphisms.

**Proposition 38**. *Let $S$ be a species and $f_S(t)$ its exponential generating series. Then $$f_{\mathcal{F}(\bar{\mathcal{F}}^{(n)}(S))}(t)=\frac{\textnormal{rev}_t(t-(n+1)f_S(t))-t}{n+1}+t$$ where $\textnormal{rev}_t$ is the inverse of the composition in the argument $t$ and $f_{\mathcal{F}(\bar{\mathcal{F}}^{(n)}(S))}$ the exponential generating series of $\mathcal{F}(\bar{\mathcal{F}}^{(n)}(S))$.*

*Proof.* For a species $S$ with exponential generating series $f_S(t)$, the exponential generating series $f_{\mathcal{F}(S)}(t,z)$ of $\mathcal{F}(S)$ is the inverse of $t-zf_S(t)$ for the composition in the argument $t$; hence we have $f_{\mathcal{F}(S)}(t,z)=\textnormal{rev}_t(t-zf_S(t))$.
Hence the exponential generating series of $\bar{\mathcal{F}}(S)$ is $f_{\bar{\mathcal{F}}(S)}(t,z)=f_{\mathcal{F}(S)}(t,z)-t=\textnormal{rev}_t(t-zf_S(t))-t$.\
The exponential generating series $f_{\mathcal{F}(S)}(t,z)$ and $f_{\bar{\mathcal{F}}(S)}(t,z)$ have two arguments: the first one, $t$, counts the arity of the elements, and the second one, $z$, counts the number of generators of the elements in the free operad.\
Since $z$ counts the number of generators of the elements in the free operad, dividing by $z$ allows us to count the number of compositions of generators. Hence the exponential generating series of $\bar{\mathcal{F}}^{(n)}(S)$ is $\frac{\textnormal{rev}_t(t-nf_S(t))-t}{n}$.\
Finally, the exponential generating series of $\mathcal{F}(\bar{\mathcal{F}}^{(n)}(S))$ is $\frac{\textnormal{rev}_t(t-(n+1)f_S(t))-t}{n+1}+t$. ◻

**Lemma 4**. *The exponential generating series of $\bigvee_\mathrm{Lie}^n\mathrm{PreLie}\circ\mathcal{F}(\bar{\mathcal{F}}^{(n)}(\mathrm{CycLie}))$ is equal to the exponential generating series of $\bigvee_\mathrm{Lie}^{n+1}\mathrm{PreLie}$.*

*Proof.* Let us compute the exponential generating series of $\mathcal{F}(\bar{\mathcal{F}}^{(n)}(\mathrm{CycLie}))$. The exponential generating series of $\mathrm{CycLie}$ is well known to be $(1-t)\ln(1-t)+t$; indeed, $\dim\mathrm{CycLie}(n)=(n-2)!$.
Hence the exponential generating series of $\mathcal{F}(\bar{\mathcal{F}}^{(n)}(\mathrm{CycLie}))$ is $$f_{\mathcal{F}(\bar{\mathcal{F}}^{(n)}(\mathrm{CycLie}))}(t)=\frac{\textnormal{rev}_t(t-(n+1)(1-t)\ln(1-t)-(n+1)t)-t}{n+1}+t$$ We have already computed the exponential generating series of $\bigvee_\mathrm{Lie}^{n+1}\mathrm{PreLie}$ in Proposition [Proposition 33](#prt:egsgc){reference-type="ref" reference="prt:egsgc"}, which is $$f_{\bigvee_\mathrm{Lie}^{n+1}\mathrm{PreLie}}(t)=\textnormal{rev}_t((nt+t+n)\exp(-t)-n)$$ And the exponential generating series of $\bigvee_\mathrm{Lie}^n\mathrm{PreLie}$ is $$f_{\bigvee_\mathrm{Lie}^n\mathrm{PreLie}}(t)=\textnormal{rev}_t((nt+n-1)\exp(-t)-n+1)$$ Let us show that $f_{\bigvee_\mathrm{Lie}^{n+1}\mathrm{PreLie}}(t)=(f_{\bigvee_\mathrm{Lie}^n\mathrm{PreLie}}\circ f_{\mathcal{F}(\bar{\mathcal{F}}^{(n)}(\mathrm{CycLie}))})(t)$. Let $$f(t)=(nt+t+n)\exp(-t)-n \qquad g(t)=(nt+n-1)\exp(-t)-n+1$$ $$h(t)=t-(n+1)(1-t)\ln(1-t)-(n+1)t$$ We want to show that $\textnormal{rev}_t(f)(t)=(\textnormal{rev}_t(g)\circ(\frac{\textnormal{rev}_t(h)-t}{n+1}+t))(t)$.\
It suffices to show that $h((n+1)g-nf)=f$. Let us compute: $$\begin{aligned} (n+1)g(t)-nf(t)\hspace{-10pt}&&=&(n+1)((nt+n-1)\exp(-t)-n+1)-n((nt+t+n)\exp(-t)-n) \\ &&=&((n+1)nt\exp(-t)+(n+1)(n-1)\exp(-t)-(n+1)(n-1))- \\ && &(n(n+1)t\exp(-t)+n^2\exp(-t)-n^2) \\ &&=&-\exp(-t)+1 \end{aligned}$$ Hence $$h((n+1)g-nf)=-\exp(-t)+1 -(n+1)\exp(-t)\ln(\exp(-t))-(n+1)(-\exp(-t)+1 )=f$$ which concludes the proof. ◻

We can state and prove the generalization of the previous theorem:

**Theorem 18**. *We have:*

1. *The species $\mathcal{X}_n$ is isomorphic to $\mathrm{CycLie}$ as a species;*

2. *The species $\mathcal{Y}_n$ is isomorphic to $\bar{\mathcal{F}}^{(n)}(\mathrm{CycLie})$ as a species;*

3. *The suboperad $\mathcal{P}(\mathcal{Y}_n)$ of $\bigvee_\mathrm{Lie}^{n+1}\mathrm{PreLie}$ generated by $\mathcal{Y}_n$ is free;*

4.
*The left $\bigvee_\mathrm{Lie}^n\mathrm{PreLie}$-submodule of $\bigvee_\mathrm{Lie}^{n+1}\mathrm{PreLie}$ generated by $\mathcal{P}(\mathcal{Y}_n)$ is free and coincides with $\bigvee_\mathrm{Lie}^{n+1}\mathrm{PreLie}$.*

5. *The species $\mathcal{Z}_n$ is isomorphic to $\mathcal{F}(\mathrm{CycLie})\circ\dots\circ\mathcal{F}(\bar{\mathcal{F}}^{(n)}(\mathrm{CycLie}))$ as a species;*

6. *The left $\mathrm{Lie}$-submodule of $\bigvee_\mathrm{Lie}^{n+1}\mathrm{PreLie}$ generated by $\mathcal{Z}_n$ is free and coincides with the $\mathrm{Lie}$-module $\bigvee_\mathrm{Lie}^{n+1}\mathrm{PreLie}$.*

*Proof.* Let us prove this theorem by induction on $n$. The base case is the theorem from [@FPRed].\
From the previous lemmas we have $$\begin{gathered} \bigvee_\mathrm{Lie}^n\mathrm{PreLie}\circ\mathcal{F}(\mathrm{CycLie}\circ\mathcal{Z}_{n-1})\twoheadrightarrow \bigvee_\mathrm{Lie}^n\mathrm{PreLie}\circ\mathcal{F}(\mathcal{X}_n\circ\mathcal{Z}_{n-1}) \twoheadrightarrow\\ \bigvee_\mathrm{Lie}^n\mathrm{PreLie}\circ\mathcal{F}(\mathcal{Y}_n)\twoheadrightarrow\bigvee_\mathrm{Lie}^n\mathrm{PreLie}\circ\mathcal{P}(\mathcal{Y}_n)\twoheadrightarrow \bigvee_\mathrm{Lie}^{n+1}\mathrm{PreLie} \end{gathered}$$ By item (5) of the induction hypothesis, we have $\mathcal{Z}_{n-1}\simeq\mathcal{F}(\mathrm{CycLie})\circ\dots\circ\mathcal{F}(\bar{\mathcal{F}}^{(n-1)}(\mathrm{CycLie}))$, hence $$\mathrm{CycLie}\circ\mathcal{Z}_{n-1}\simeq\mathrm{CycLie}\circ\mathcal{F}(\mathrm{CycLie})\circ\dots\circ\mathcal{F}(\bar{\mathcal{F}}^{(n-1)}(\mathrm{CycLie})) \simeq \bar{\mathcal{F}}^{(n)}(\mathrm{CycLie})$$ Those surjective morphisms are isomorphisms by equality of dimensions. This shows that:

1. The species $\mathcal{X}_n$ is isomorphic to $\mathrm{CycLie}$;

2. The species $\mathcal{Y}_n$ is isomorphic to $\bar{\mathcal{F}}^{(n)}(\mathrm{CycLie})$;

3. The species $\mathcal{P}(\mathcal{Y}_n)$ is isomorphic to $\mathcal{F}(\mathcal{Y}_n)$;

4.
And the left $\bigvee_\mathrm{Lie}^n\mathrm{PreLie}$-module $\bigvee_\mathrm{Lie}^n\mathrm{PreLie}\circ\mathcal{P}(\mathcal{Y}_n)$ is isomorphic to $\bigvee_\mathrm{Lie}^{n+1}\mathrm{PreLie}$ as a left $\bigvee_\mathrm{Lie}^n\mathrm{PreLie}$-module.

Moreover, since $\mathcal{Z}_n=\mathcal{Z}_{n-1}\circ\mathcal{F}(\mathcal{Y}_n)$ we have that $\mathcal{Z}_n$ is isomorphic to: $$\mathcal{F}(\mathrm{CycLie})\circ\dots\circ\mathcal{F}(\bar{\mathcal{F}}^{(n)}(\mathrm{CycLie}))$$ as a species.\
Since $\bigvee_\mathrm{Lie}^n\mathrm{PreLie}\circ\mathcal{P}(\mathcal{Y}_n)$ is isomorphic to $\bigvee_\mathrm{Lie}^{n+1}\mathrm{PreLie}$ as a left $\bigvee_\mathrm{Lie}^n\mathrm{PreLie}$-module, they are isomorphic as left $\mathrm{Lie}$-modules. Hence $\mathrm{Lie}\circ\mathcal{Z}_n$ is isomorphic to $\bigvee_\mathrm{Lie}^{n+1}\mathrm{PreLie}$ as a left $\mathrm{Lie}$-module; in particular, the left $\mathrm{Lie}$-submodule generated by $\mathcal{Z}_n$ is free and coincides with $\bigvee_\mathrm{Lie}^{n+1}\mathrm{PreLie}$. ◻

*Remark 14*. This proof can be adapted to show that $\mathrm{Greg}_n\simeq\mathrm{Greg}_{n-1}\circ\mathcal{F}(\bar{\mathcal{F}}^{(n)}(\mathrm{CycLie}))$. The operad $\bigvee_\mathrm{Lie}^{n+1}\mathrm{PreLie}$ is also free as a right $\bigvee_\mathrm{Lie}^n\mathrm{PreLie}$-module. It could be interesting to compute explicit generators in this case.
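The generating-series identities at the heart of Lemma 4 are easy to check by computer algebra. The following sketch (Python with sympy, added here purely as an illustration; it is not part of the original argument) verifies symbolically that $(n+1)g-nf=1-\exp(-t)$ and that $h((n+1)g-nf)=f$:

```python
import sympy as sp

t = sp.symbols('t', real=True)
n = sp.symbols('n', positive=True)

# The three series from the proof of Lemma 4
f = (n*t + t + n)*sp.exp(-t) - n
g = (n*t + n - 1)*sp.exp(-t) - n + 1
h = t - (n + 1)*(1 - t)*sp.log(1 - t) - (n + 1)*t

# First identity: (n+1)g - nf = 1 - exp(-t)
lhs = sp.expand((n + 1)*g - n*f)
assert sp.simplify(lhs - (1 - sp.exp(-t))) == 0

# Second identity: h((n+1)g - nf) = f
# (log(1 - (1 - exp(-t))) = log(exp(-t)) = -t since t is real)
comp = h.subs(t, 1 - sp.exp(-t))
assert sp.simplify(comp - f) == 0
```

The substitution step works because $\log(1-(1-e^{-t}))=\log(e^{-t})=-t$ for real $t$, which is exactly the simplification used in the proof.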
# [Appendix]{#Apx} {#appendix .unnumbered} ![The 25 relations of the operad $(\mathrm{Greg}')^!$](StRel1.pdf){#fg:StRel1} ![The "dlp" rewriting system for $(\mathrm{Greg}')^!$](StGB1.pdf){#fg:StGB1} ![The "prdl" rewriting system for $(\mathrm{Greg}')^!$](StGB2.pdf){#fg:StGB2} ![The "rdlp" rewriting system for $(\mathrm{Greg}')^!$](StGB3.pdf){#fg:StGB3} ![The computation of $(R\star^\Delta S)\star^\Delta T$](StComp1.pdf){#fg:StComp1} ![The "dlp" rewriting system for $(\mathcal{G}^C)^!$](StGBg1.pdf){#fg:StGBg1} ![The "wprdl" rewriting system for $(\mathcal{G}^C)^!$](StGBg2.pdf){#fg:StGBg2} ![The "rdlp" rewriting system for $(\mathcal{G}^C)^!$](StGBg3.pdf){#fg:StGBg3} Institut de Recherche Mathématique Avancée, UMR 7501, Université de Strasbourg, 7 rue René-Descartes, 67000 Strasbourg CEDEX, France\ *Email address:* laubie\@unistra.fr
--- abstract: | A methodology for defining variational principles for a class of PDE models from continuum mechanics is demonstrated, and some of its features explored. The scheme is applied to quasi-static and dynamic models of rate-independent and rate-dependent, single crystal plasticity at finite deformation. author: - "Amit Acharya[^1]" bibliography: - dual_plasticity.bib title: A Hidden Convexity in Continuum Mechanics, with application to classical, continuous-time, rate-(in)dependent plasticity --- # Introduction In this paper we explore a strategy for designing variational principles for a significant class of static and dynamical models from continuum mechanics, naturally stated as systems of partial differential equations (PDE). The models can be dissipative or conservative. Action functionals are designed, whose Euler-Lagrange equations recover the primal PDE system and side conditions in a well-defined sense. The essential ideas behind the approach may be understood from [@action_3 Sec. 2], [@action_2 Sec. 7], and [@acharya2022action Sec. 6.1]. The variational principles govern a dual set of fields corresponding to the primal ones of the continuum mechanical model, and the scheme provides a mapping to recover the primal fields with the guarantee that the latter are weak solutions of the primal model. The Lagrangian of the dual variational problem is convex (with a trivial sign change), and therefore existence of a minimizer appears to rest on only the coercivity of the dual functional. Correspondingly, the Euler-Lagrange equations of the dual functional are shown to possess a local degenerate ellipticity, regardless of the properties of the primal system. In the context of solving the primal PDE system, these features are crucially enabled by the 'free' choice of a (family of) potential(s) in the primal variables that may be interpreted as defining a 'target' whose integral is to be extremized, subject to the primal PDE system as constraints. 
The dual fields then are simply the Lagrange multipliers of the formulation, and since the target is free to choose, one chooses it to have as strongly positive-definite a Hessian as needed to dominate the non-monotonicity of the constraint equations. Everything said above is of recent origin [@acharya2022action; @action_2; @action_3] and mathematically formal, but potentially useful, as borne out by encouraging results in computational implementations of model problems [@KA1]-[@a_arora_thesis Sec. 6]-[@sga; @KA2] involving a range of linear and nonlinear, ODE and PDE, time-dependent and independent problems related to continuum mechanics (linear transport, heat equation, Euler's equations for a rigid body, double-well elastostatics in 1-d, inviscid Burgers in conservation and Hamilton-Jacobi form, the inverse problem of a liquid crystal membrane attaining a prescribed shape, constrained to meet a prescribed principal stretch field), and all approximated by the simplest Galerkin discretization (in these first instances) for solving boundary value problems in domains in space(-time). As examples of this overall approach, in this paper we apply the strategy to develop action principles for classical rate-dependent and independent, dynamic and quasi-static, single crystal plasticity theories without restriction to rate problems, time discretization, energy minimizing paths, associated plasticity, hardening matrix derived from an energy potential, treating plastic slip as an energetic state variable, the existence of a dissipation potential or even a free energy function. Variational principles for plasticity are a subject with a substantial body of work, e.g.
[@hill1958general; @hill1959some; @hill1979aspects; @suquet1988discontinuities; @strang1979family]-[@petryk2003incremental; @petryk2020quasi and earlier references therein]-[@ortiz1989symmetry; @ortiz1999nonconvex; @ortiz1999variational; @carstensen2002non; @mielke2015rate; @maso2006quasistatic; @dal2011quasistatic], with a detailed review presented in [@petryk2020quasi]. Our work is complementary to these points of view, and presents a different formalism that exploits the 'free' choice of an added target function in the primal variables with strong convexity properties, especially in providing a somewhat unified point of view in dealing with quasi-static and dynamic problems without utilizing time-discretization. The possibility of making such a choice in aiding the solution to problems has the flavor of the use of the 'linear comparison solid' related to the literature on effective properties, cf. [@castaneda1997nonlinear; @castaneda1999variational]. Rigorous results on a weak formulation and existence of solutions of the governing PDE of plasticity were first provided in the seminal work [@suquet1988discontinuities and earlier references] and subsequent works, e.g. [@han2012plasticity]. As is well-understood by experts but simply to avoid possible confusion, we explicitly note that the existence of a variational principle for a set of P(O)DE is a different question than that of posing a variational (weak) statement for that set of equations. An outline of the paper is as follows: in Sec. [2](#sec:dual_cont_mech){reference-type="ref" reference="sec:dual_cont_mech"} we present the formalism of the dual formulation. In Sec. [3](#sec:deg_ellipt){reference-type="ref" reference="sec:deg_ellipt"} a computation is presented to motivate the degenerate ellipticity of the dual problem. Secs.
[4](#sec:rd_sc){reference-type="ref" reference="sec:rd_sc"} and [5](#sec:ri_sc){reference-type="ref" reference="sec:ri_sc"} contain the algorithmic steps to implement the scheme on the theories of rate-dependent and rate-independent single crystal plasticity, respectively. Sec. [6](#sec:concl){reference-type="ref" reference="sec:concl"} contains some concluding remarks. A few words on notation: *except for Sec. [5](#sec:ri_sc){reference-type="ref" reference="sec:ri_sc"}*, we always use the summation convention on ranges of indices, and the placement of indices as super or subscripts has no special significance. The use of direct notation would have required too many definitions to be put in place for many of the explicit calculations - hence, despite their clumsy appearance, I have chosen to explicitly write out the computations - it is hoped that this avoids any ambiguity. Also, whenever a function is declared as capable of being defined arbitrarily, such arbitrariness is assumed to be restricted by natural smoothness requirements for the problem context to make sense. We mention at the very outset that from the point of view of this paper, the variational principles developed are purely mathematical devices with their sole justification resting on contributing to solution strategies for the primal system of physical equations involved. Thus, the whole burden of physical modeling rests on the development of the primal system, which is considered a 'given' in this work. This paper is not concerned with the quality of physical modeling of plasticity with the considered models, or their connection to the micromechanics of the phenomenon.
Those are separate concerns dealt with in [@arora2020dislocation; @arora2020unification; @arora2020finite; @arora2022mechanics; @arora2023interface] - in fact, this more sophisticated model (which incorporates many of the features of the classical theory), which, among other things, is a first example of a setting in continuum solid mechanics where non-singular, finite deformation elastic fields of arbitrary dislocation distributions can be calculated, served as the primary motivation for the development of the formalism presented here, as discussed in [@action_3]. # A dual formulation for models from continuum mechanics {#sec:dual_cont_mech} This paper started out with the specific goal of demonstrating some variational principles for the equations of plasticity theory. However, I soon realized that the main ideas were most efficiently conveyed in the general setting described below, in the spirit of not missing the forest for the trees by sparing the reader the details of some tedious calculations. This provides the main motivation for this Section. Lower-case Latin indices belong to the set $\{1,2,3\}$ representing Rectangular Cartesian spatial coordinates, and $t$ is time. Let upper-case Latin indices belong to the set $\{1,2,3, \cdots,N \}$, indexing the components of the $N \times 1$ array of primal variables, $U$, with, possibly, a conversion to first-order form as necessary. 
We consider the system of equations [\[eq:gov_cont_mech\]]{#eq:gov_cont_mech label="eq:gov_cont_mech"} $$\begin{aligned} {\cal C}_{\Gamma I} \partial_t U_I + \partial_j {\cal F}_{\Gamma j}(U) + G_\Gamma(U,x,t) & = 0 \ \mbox{in} \ \Omega \times (0,T), \qquad \Gamma = 1, \ldots, N^* \label{eq:gov}\\ {\cal C}_{\Gamma I} U_I (x, 0) & = {\cal C}_{\Gamma I} U_I^{(0)} (x) \ \mbox{specified on } \Omega \ \mbox{(initial conditions)}\\ ({\cal F}_{\Gamma j}(U)n_j)|_{(x,t)} & = (B_{\Gamma j} n_j)|_{(x,t)} \ \mbox{specified on } \partial\Omega_\Gamma \ \mbox{(boundary conditions)} \label{eq:gov_bc}, \end{aligned}$$ where $\Omega$ is a fixed domain in $\mathbb R^3$ with boundary $\partial\Omega \supset \bigcup_\Gamma \partial\Omega_\Gamma$, upper-case Greek indices index the number of equations involved, after conversion to first-order form when needed. Here, ${\cal C}$ is an $N^* \times N$ matrix, ${\cal F}, G$ are given functions of their argument, and $U^{(0)}, B$ are specified functions. It can be shown that nonlinear elastostatics, elastodynamics, (in)compressible Euler and Navier Stokes can all be written in this form. In this work, we will explicitly consider the cases of classical rate-dependent and rate-independent single crystal plasticity, the latter furnishing a concrete setting for considering inequality constraints, converted to equalities by the addition of slack variables. As an example, consider the equations of nonlinear elastostatics given by [\[eq:elastostat\]]{#eq:elastostat label="eq:elastostat"} $$\begin{aligned} \partial_j \hat{P}_{ij}(F) & = 0 \ \mbox{in} \ \Omega \label{eq:equil}\\ \partial_j y_i - F_{ij} & = 0 \ \mbox{in} \ \Omega \label{eq:y-F}\\ \hat{P}_{ij}n_j = p_i \ \mbox{on} \ \partial\Omega_p \qquad & ; \qquad y_i = y^{(b)}_i \ \mbox{on} \ \partial\Omega_y \label{eq:bc_stat} \end{aligned}$$ where $\hat{P}$ is the First Piola-Kirchhoff stress response function. Let $(y_1, y_2, y_3)$ form the first three components of the array $U$. 
The conversion to first-order form (so that the Lagrangian $\mathcal{L}_H$ that appears subsequently in [\[eq:defs\]](#eq:defs){reference-type="eqref" reference="eq:defs"} contains no derivatives in the primal variables) requires the addition of nine more primal variables $F$, cf. [\[eq:y-F\]](#eq:y-F){reference-type="eqref" reference="eq:y-F"}. These additional nine relations can be written in the form $$\label{eq:first-order} \mathcal{A}_{\Gamma I j} \partial_j U_I - \mathcal{B}_{\Gamma I} U_I = 0, \qquad \Gamma = 4,\ldots, 12,$$ where $\mathcal{A}, \mathcal{B}$ are constant matrices (with $\mathcal{B}$ diagonal in many cases) that define the augmentation of the primal list from $(y)$ to $(y, F)$, and define the augmenting primal variables as, in general, linear combinations of the partial derivatives of components of $U$. The equation set [\[eq:first-order\]](#eq:first-order){reference-type="eqref" reference="eq:first-order"} can be expressed in the form [\[eq:gov\]](#eq:gov){reference-type="eqref" reference="eq:gov"}, and we note, for the convenience of the reader, that the arrays $\mathcal{B}$ and $B$ are not the same. Boundary conditions are best considered on a specific case-by-case basis. It is shown in Appendix [7](#app:A){reference-type="ref" reference="app:A"} how Dirichlet boundary conditions can be accommodated within the setup [\[eq:gov_cont_mech\]](#eq:gov_cont_mech){reference-type="eqref" reference="eq:gov_cont_mech"}.
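To make the first-order reduction concrete, here is a small sketch (Python with sympy and numpy; the ordering of the augmented primal array $U=(y_1,y_2,y_3,F_{11},\dots,F_{33})$ and the resulting index layout of $\mathcal{A}$ and $\mathcal{B}$ are choices made for this illustration, not fixed by the text) that builds the constant arrays encoding $\partial_j y_i - F_{ij}=0$ and checks that the residual $\mathcal{A}_{\Gamma Ij}\partial_j U_I-\mathcal{B}_{\Gamma I}U_I$ vanishes on a sample affine deformation:

```python
import sympy as sp
import numpy as np

# Illustrative ordering (an assumption, not fixed in the text):
# U = (y_1, y_2, y_3, F_11, F_12, ..., F_33), so N = 12, and
# Gamma = 0..8 indexes the nine relations d_j y_i - F_ij = 0.
x = sp.symbols('x1 x2 x3')
grad = sp.Matrix(3, 3, lambda i, j: sp.Integer(1 + i + 2*j))  # sample constant gradient
y = grad * sp.Matrix(x)                                       # affine deformation y = A x
U = list(y) + [grad[i, j] for i in range(3) for j in range(3)]

# Constant arrays A_{Gamma I j} and B_{Gamma I} of the augmenting relations
Acal = np.zeros((9, 12, 3), dtype=int)
Bcal = np.zeros((9, 12), dtype=int)
for G, (i, j) in enumerate((i, j) for i in range(3) for j in range(3)):
    Acal[G, i, j] = 1          # selects the derivative d_j y_i
    Bcal[G, 3 + 3*i + j] = 1   # selects the component F_ij

# The residual A_{Gamma I j} d_j U_I - B_{Gamma I} U_I vanishes identically
residuals = []
for G in range(9):
    r = sum(int(Acal[G, I, j]) * sp.diff(U[I], x[j])
            for I in range(12) for j in range(3))
    r -= sum(int(Bcal[G, I]) * U[I] for I in range(12))
    residuals.append(sp.simplify(r))
assert all(r == 0 for r in residuals)
```

Here $\mathcal{B}$ is indeed diagonal in the block of the augmenting variables, as remarked above.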
Define the pre-dual functional by forming the scalar products of [\[eq:gov\]](#eq:gov){reference-type="eqref" reference="eq:gov"} with the dual fields $D$, integrating by parts, substituting the prescribed initial and boundary conditions (ignoring, for now, space-time boundary contributions that are not specified) *and adding a potential $H$ as shown*: $$\label{eq:predual} \begin{aligned} \widehat{S}_H[U,D] & = \int_\Omega \int_0^T \left( - {\cal C}_{\Gamma I} U_I \partial_t D_\Gamma - {\cal F}_{\Gamma j}\big|_U \partial_j D_\Gamma + G_\Gamma\big|_{(U,x,t)} D_\Gamma + H(U,x,t) \right) \,dx dt \\ & \qquad - \int_\Omega {\cal C}_{\Gamma I} {U}_I^{(0)}(x) D_\Gamma(x, 0) \, dx + \sum_\Gamma \int_{\partial\Omega_\Gamma} \int_0^T B_{\Gamma j} \, D_\Gamma \, n_j \, da dt, \end{aligned}$$ (where the arguments $(x,t)$ are suppressed except to display the explicit dependence of $G, H$ and in the initial condition). Define $$\label{eq:defs} \begin{aligned} \mathcal{D}& := (\partial_t D, \nabla D, D)\\ \mathcal{L}_H(U,\mathcal{D},x,t) & := - {\cal C}_{\Gamma I} U_I \partial_t D_\Gamma - {\cal F}_{\Gamma j}\big|_U \partial_j D_\Gamma + G_\Gamma(U,x,t) D_\Gamma + H(U,x,t) \end{aligned}$$ and *require the choice* of the potential $H$ to be such that it facilitates the existence of a function $$U = U^{(H)}(\mathcal{D}, x, t)$$ which satisfies $$\label{eq:dLdU} \frac{\partial\mathcal{L}_H}{\partial U} \left(U^{(H)}(\mathcal{D}, x,t),\mathcal{D},x,t \right) = 0 \qquad \forall \ (\mathcal{D},x,t).$$ When such a *dual-to-primal* (DtP) 'change of variables' mapping, $U^{(H)}$, exists, defining the *dual* functional as $$\begin{aligned} & S_H[D] := \widehat{S}_H \left[ U^{(H)}, D \right] \\ & \ = \int_\Omega \int_0^T \mathcal{L}_H\left(U^{(H)}(\mathcal{D},x,t), \mathcal{D}, x, t \right) \, dxdt - \int_\Omega {\cal C}_{\Gamma I} U_I^{(0)}(x) D_\Gamma(x, 0) \, dx + \sum_\Gamma \int_{\partial\Omega_\Gamma} \int_0^T B_{\Gamma j} \, D_\Gamma \, n_j \, da dt,\\ & \mbox{with $D$ specified
(arbitrarily) on parts of the space-time domain boundary complementary to those }\\ & \mbox{that appear explicitly above,} \end{aligned}$$ and noting [\[eq:dLdU\]](#eq:dLdU){reference-type="eqref" reference="eq:dLdU"}, the first variation of $S_H$ (about a state $(x,t) \mapsto D(x,t)$ in the direction $\delta D$, the latter constrained to vanish on parts of the boundary where $D$ is specified), is given by $$\begin{aligned} \delta S_H\bigg|_{\delta D} [D] & = \bigintsss_\Omega \bigintsss_0^T \frac{ \partial\mathcal{L}_H}{\partial\mathcal{D}} \left(U^{(H)}(\mathcal{D},x,t), \mathcal{D}, x,t \right) \cdot \delta \mathcal{D}\, dx dt - \int_\Omega {\cal C}_{\Gamma I} {U}_I^{(0)}(x) \delta D_\Gamma(x, 0) \, dx \\ & \quad + \sum_\Gamma \int_{\partial\Omega_\Gamma} \int_0^T B_{\Gamma j} \, \delta D_\Gamma \, n_j \, da dt. \end{aligned}$$ Noting, now, that $\mathcal{L}_H$ is necessarily affine in $\mathcal{D}$, its second argument, it can be checked that *the Euler-Lagrange (E-L) equations and natural boundary conditions of the dual functional $S_H$ are exactly the system [\[eq:gov\]](#eq:gov){reference-type="eqref" reference="eq:gov"} with $U$ substituted by $U^{(H)}(\mathcal{D}|_{(x,\cdot)},x,\cdot)$*; the first variation is explicitly given as $$\begin{aligned} & \delta S_H\Big|_{\delta D} [D] = \\ & \int_\Omega \int_0^T \left( \partial_t\left( {\cal C}_{\Gamma I} U^{(H)}_I \Big|_{\left (\mathcal{D}|_{(x,t)},x,t \right)} \right) + \partial_j {\cal F}_{\Gamma j} \left( U^{(H)}\Big|_{\left (\mathcal{D}|_{(x,t)},x,t \right)}\right) + G_\Gamma \left( U^{(H)}\Big|_{\left (\mathcal{D}|_{(x,t)},x,t \right)}, x , t \right)\right) \delta D_\Gamma (x,t) \, dx dt\\ & \quad + \sum_\Gamma \int_{\partial\Omega_\Gamma} \int_0^T \left( B_{\Gamma j} (x,t) - {\cal F}_{\Gamma j} \left( U^{(H)}\Big|_{\left (\mathcal{D}|_{(x,t)},x,t \right)} \right) \right) n_j (x,t) \delta D_\Gamma(x,t) \, dadt \\ & \quad + \int_\Omega {\cal C}_{\Gamma I} \left( U^{(H)}_I \Big|_{\left(\mathcal{D}|_{(x,0)},x,0
\right)} - U^{(0)}_I(x) \right) \delta D_\Gamma (x, 0) \, dx. \end{aligned}$$ It is this simple idea that we exploit to develop variational principles for a class of models from continuum mechanics. It is an important consistency check of our scheme that considering the potential $H$ of the form $$\label{eq:gen_H} H(U,x,t) = \frac{1}{2}\, a_U \,\left| U - \bar{U}(x,t) \right|^2 + \frac{1}{p} \, b_U \, \left| U - \bar{U}(x,t) \right|^p,$$ where $a_U, b_U$ are positive constants, typically large, with $p >2$ tailored to the nonlinearities present in the functions ${\cal F}, G$, and for $(x,t) \mapsto \bar{U}(x,t)$ an arbitrarily specified function, $$\label{eq:dLdU_powerlaw} \frac{\partial\mathcal{L}_H}{\partial U_I} = - {\cal C}_{\Gamma I} \partial_t D_\Gamma - \frac{ \partial{\cal F}_{\Gamma j}}{\partial U_I} \partial_j D_\Gamma + \frac{\partial G_\Gamma}{\partial U_I} D_\Gamma + \left( a_U + b_U \, \left|U - \bar{U} \right|^{p-2} \right) \left(U_I - \bar{U}_I \right) = 0$$ is solved, $$\mbox{for} \qquad \mathcal{D}(x,t) := (\partial_t D(x,t), \nabla D (x,t), D(x,t)) = (0,0,0), \qquad \mbox{by} \qquad U^{(H)}(\mathcal{D}(x,t),x,t) = \bar{U}(x,t).$$ *If we now choose* $\bar{U}$ *as a solution to the primal problem* [\[eq:gov_cont_mech\]](#eq:gov_cont_mech){reference-type="eqref" reference="eq:gov_cont_mech"}, *then a (smooth) solution exists to the E-L equations of the dual problem* given by $(x,t) \mapsto D(x,t) = 0$. This is an existence result for our dual problem. It also shows that all solutions to the primal problem can be recovered by the dual scheme through a family of appropriately designed dual problems. Of course, it is the goal of our strategy to design and use specific $H$'s, without the knowledge of exact solutions to the primal problem, as a selection criterion to recover special sets of (possibly unstable) solutions of the primal problem in a 'stable' manner by solving the dual problem. We note that there are examples in continuum mechanics, e.g.
nonconvex elastostatics in 1-d, where an unstable solution of a primal energy functional is actually the limit of an energy minimizing sequence, which is then recovered as a minimum of a relaxed primal problem. Another important point to note is that the dual E-L equations corresponding to primal *initial*-(boundary)-value problems contain second-order time derivatives in the dual variables, after conversion of the primal system to first-order form; this can be understood by considering the form of the DtP mapping [\[eq:dLdU_powerlaw\]](#eq:dLdU_powerlaw){reference-type="eqref" reference="eq:dLdU_powerlaw"} and the primal system [\[eq:gov\]](#eq:gov){reference-type="eqref" reference="eq:gov"}. This generally requires two 'boundary' conditions in the time-like direction on such variables, when, at most, only one is available from the primal problem. This raises the question of how the second condition ought to be specified and what effect it has on the recovery of the correct primal solution, especially when the primal system has a unique solution as an initial-value problem. It turns out that a final-time boundary condition can be arbitrarily specified on the dual variables without affecting the recovery of correct primal solutions: the DtP mapping, for standard initial-value problems, necessarily depends on $\partial_t D$, see e.g. [\[eq:dLdU_powerlaw\]](#eq:dLdU_powerlaw){reference-type="eqref" reference="eq:dLdU_powerlaw"}, and specifying $D$ at the final time leaves the time derivative free to adjust to the demands of achieving the required primal solution through the DtP mapping. This fact has been discussed and demonstrated in specific contexts in [@action_2 Sec. 7] and [@KA1].
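The consistency check above can be illustrated in a minimal numerical sketch (our own toy instance, not a model from the text): take a single primal variable $u$ with ${\cal C} = 1$, flux ${\cal F}(u) = u^2/2$, $G = 0$, and only the quadratic part of [\[eq:gen_H\]](#eq:gen_H){reference-type="eqref" reference="eq:gen_H"}, so that [\[eq:dLdU_powerlaw\]](#eq:dLdU_powerlaw){reference-type="eqref" reference="eq:dLdU_powerlaw"} reduces to $-\partial_t D - u\,\partial_x D + a_U(u - \bar{u}) = 0$. All names and parameter values in the code are hypothetical.

```python
# Toy DtP map for a scalar law with C = 1, flux F(u) = u**2/2, G = 0,
# and H(u) = 0.5*a*(u - u_bar)**2 (hypothetical data, illustration only).
# Stationarity: -Dt - u*Dx + a*(u - u_bar) = 0  =>  u = u_bar + (Dt + u*Dx)/a

def dtp_map(Dt, Dx, u_bar, a=50.0, tol=1e-12, max_iter=200):
    """Solve the pointwise stationarity condition for u by fixed-point iteration."""
    u = u_bar  # start from the base state
    for _ in range(max_iter):
        u_new = u_bar + (Dt + u * Dx) / a
        if abs(u_new - u) < tol:
            return u_new
        u = u_new
    raise RuntimeError("DtP fixed point did not converge")

# At zero dual state the map returns the base state exactly.
print(dtp_map(0.0, 0.0, u_bar=1.3))                    # prints 1.3
# Small dual gradients perturb u away from u_bar by O(1/a).
print(abs(dtp_map(0.1, 0.2, u_bar=1.3) - 1.3) < 0.02)  # prints True
```

At $\mathcal{D} = 0$ the map returns $\bar{u}$ exactly, mirroring the consistency check above; for small dual gradients the correction is $O(1/a_U)$, which is the role played by a large penalty constant.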
# Local degenerate ellipticity of the dual formulation of continuum mechanics {#sec:deg_ellipt} For this section, let Greek lower-case indices belong to the set $\{0,1,2,3\}$ representing rectangular Cartesian space-time coordinates $x^\alpha, \alpha = 0, 1,2,3$; $0$ represents the time coordinate when the PDE is time-dependent. Let upper-case Latin indices belong to the set $\{1,2,3, \cdots,N \}$, indexing the components of the $N \times 1$ array of primal variables, $U$, with, possibly, a conversion to first-order form as necessary. Now consider the primal PDE system $$\label{eq:conserv_source} \partial_\alpha ({\cal F}_{\Gamma\alpha}(U)) + G_\Gamma(U, x) = 0, \qquad \Gamma = 1, \ldots, N^*$$ where upper-case Greek indices index the number of equations involved, after conversion to first-order form when needed. We assume that the functions $U \mapsto \frac{\partial^2 {\cal F}_{\Gamma\alpha}}{\partial U_P \partial U_R}(U)$ and $U \mapsto \frac{\partial^2 G_\Gamma}{\partial U_P \partial U_R}(U)$ are bounded functions on their domains.
Let $D$ be the $N^* \times 1$ array of dual fields and, as earlier, let us consider a shifted quadratic for the potential $H$, characterized by a diagonal matrix $[a_{kj}]$ with constant positive diagonal entries so that the Lagrangian takes the form (with $H$ chosen as a quadratic form for simplicity of presentation) $$\mathcal{L}(U,D,\nabla D, \bar{U}) := - {\cal F}_{\Gamma \alpha}(U) \partial_\alpha D_\Gamma + D_\Gamma G_\Gamma(U) + \frac{1}{2}(U_k - \bar{U}_k) a_{kj} (U_j - \bar{U}_j).$$ Then the corresponding DtP mapping, obtained by 'solving $\frac{\partial\mathcal{L}}{\partial U} = 0$ for $U$ in terms of $(\nabla D, D, \bar{U})$,' is given by the implicit equation $$\label{eq:deg_ell_map} U_J^{(Q)}(\nabla D, D, \bar{U}) = \bar{U}_J + (a^{-1})_{JK} \left( \frac{\partial{\cal F}_{\Gamma \alpha}}{\partial U_K} \bigg|_{U^{(Q)}(\nabla D, D, \bar{U})} \partial_\alpha D_\Gamma - D_\Gamma \frac{\partial G_\Gamma}{\partial U_K}\bigg|_{U^{(Q)}(\nabla D, D, \bar{U})} \right).$$ It is a fundamental property of the dual scheme that the dual E-L equation is then given by $$\label{eq:dual_EL} \partial_\alpha \Big( {\cal F}_{\Gamma \alpha} \big( U(\nabla D, D, \bar{U})) \Big) + G_\Gamma ( U(\nabla D, D, \bar{U}) ) = 0$$ (where we have dropped the superscript $^{(Q)}$ for notational convenience), whose ellipticity is governed by the term $$\mathbb{A}_{\Gamma \alpha \Pi \mu}(\nabla D, D, \bar{U}) := \frac{\partial{\cal F}_{\Gamma \alpha}}{\partial U_P}\bigg|_{U(\nabla D, D, \bar{U})} \, \frac{\partial U_P}{\partial(\nabla D)_{\Pi \mu}}\bigg|_{U(\nabla D, D, \bar{U})}.$$ From [\[eq:deg_ell_map\]](#eq:deg_ell_map){reference-type="eqref" reference="eq:deg_ell_map"} we have $$\begin{aligned} a^{-1}_{PR} \left( \delta_{\Gamma \Pi} \delta_{\mu \alpha} \frac{ \partial{\cal F}_{\Gamma \alpha}}{\partial U_R} \, + \, \partial_\alpha D_\Gamma \frac{ \partial^2 {\cal F}_{\Gamma \alpha}}{\partial U_R \partial U_S} \frac{\partial U_S}{\partial(\nabla D)_{\Pi \mu}} - D_\Gamma \frac{ 
\partial^2 G_\Gamma}{\partial U_R \partial U_S} \frac{\partial U_S}{\partial(\nabla D)_{\Pi \mu}} \right) & = \frac{\partial U_P}{\partial(\nabla D)_{\Pi \mu}} \\ \Longrightarrow \left( \delta_{PS} - a^{-1}_{PR} \partial_\alpha D_\Gamma \frac{ \partial^2 {\cal F}_{\Gamma \alpha}}{\partial U_R \partial U_S} + a^{-1}_{PR} D_\Gamma \frac{ \partial^2 G_\Gamma}{\partial U_R \partial U_S} \right) \frac{\partial U_S}{\partial(\nabla D)_{\Pi \mu}} & = a^{-1}_{PR} \frac{\partial{\cal F}_{\Pi \mu}}{\partial U_R}, \end{aligned}$$ and so $$\mathbb{A}_{\Gamma \alpha \Pi \mu}(0,0,\bar{U}) = \frac{\partial{\cal F}_{\Gamma \alpha}}{\partial U_P}\bigg|_{\bar{U}} \, a^{-1}_{PR} \, \frac{\partial{\cal F}_{\Pi \mu}}{\partial U_R}\bigg|_{\bar{U}},$$ which is *positive semi-definite* on the space of $N^* \times 3$ (or $N^* \times 4$) matrices. This establishes the degenerate ellipticity of the dual system at the state $x \mapsto D(x) = 0$. To examine the ellipticity-related properties of the system in a bounded neighborhood, say $\mathcal{N}$, of $(D = 0, \nabla D = 0) \in \mathbb{R}^{N^*} \times \mathbb{R}^{N^* \times \bar{\alpha}}, \bar{\alpha} = 3,4$, we define $$M_{PS} := \delta_{PS} - a^{-1}_{PR} \left( \partial_\alpha D_\Gamma \frac{\partial^2 {\cal F}_{\Gamma \alpha}}{\partial U_R \partial U_S} - D_\Gamma \frac{\partial^2 G_\Gamma}{\partial U_R \partial U_S} \right),$$ and note that $$\frac{\partial U_P}{\partial(\nabla D)_{\Pi \mu} } = M^{-1}_{PQ} a^{-1}_{QR} \frac{\partial{\cal F}_{\Pi \mu}}{\partial U_R},$$ where $M^{-1}$ exists and is positive definite by the boundedness of $\mathcal{N}$ and the second derivatives of the functions ${\cal F}$ and $G_\Gamma$, along with an appropriately large choice of the elements of the diagonal matrix $[a_{ij}]$ (in case the second-derivatives are not bounded in some regions of the domain of primal variables we assume that the functions are such that the positive-definiteness of $M$ is maintained. 
Alternatively, the choice of $H$ can be enhanced (as, e.g. in [\[eq:gen_H\]](#eq:gen_H){reference-type="eqref" reference="eq:gen_H"}) to dominate the growth of the second derivatives, catering to the specifics of the second-derivative functions in any particular problem). The degenerate ellipticity or 'convexity' of the system [\[eq:conserv_source\]](#eq:conserv_source){reference-type="eqref" reference="eq:conserv_source"} in the neighborhood $\mathcal{N}$ is now defined as the positive semi-definiteness of the matrix $\mathbb{A}$ on the space $\mathbb{R}^{N^* \times \bar{\alpha}}$ of matrices, and this in turn is governed by the matrix $$\mathbb{A}^{(sym)}_{\Gamma \alpha \Pi \mu}\bigg|_{(\nabla D, D, \bar{U})} = \frac{\partial{\cal F}_{\Gamma \alpha}}{\partial U_P}\bigg|_{U(\nabla D, D, \bar{U})} \frac{1}{2} \left( M^{-1}_{PQ}\bigg|_{U(\nabla D, D, \bar{U})} a^{-1}_{QR} + M^{-1}_{RQ}\bigg|_{U(\nabla D, D, \bar{U})} a^{-1}_{QP}\right)\frac{\partial{\cal F}_{\Pi \mu}}{\partial U_R}\bigg|_{U(\nabla D, D, \bar{U})}.$$ By the positive definiteness of the matrix $[M_{PS}]$ in the neighborhood $\mathcal{N}$, it follows that $$\partial_\alpha D_\Gamma \ \mathbb{A}^{(sym)}_{\Gamma \alpha \Pi \mu}\bigg|_{(\nabla D, D, \bar{U})} \partial_\mu D_\Pi \geq 0 \qquad \forall \qquad (D, \nabla D) \in \mathcal{N},$$ which establishes a 'local' degenerate ellipticity of the system [\[eq:conserv_source\]](#eq:conserv_source){reference-type="eqref" reference="eq:conserv_source"}. 
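The base-state object $\mathbb{A}_{\Gamma \alpha \Pi \mu}(0,0,\bar{U}) = \frac{\partial{\cal F}_{\Gamma \alpha}}{\partial U_P} a^{-1}_{PR} \frac{\partial{\cal F}_{\Pi \mu}}{\partial U_R}$ is of Gram type, so its positive semi-definiteness on the full matrix space, as well as its degeneracy (its rank cannot exceed $N$), can be checked numerically on random data; the sizes and entries below are hypothetical, not tied to any particular model.

```python
import numpy as np

rng = np.random.default_rng(0)
N_star, alpha_bar, N = 3, 4, 5       # hypothetical sizes

# J[G, al, P] stands in for dF_{G al}/dU_P evaluated at the base state.
J = rng.standard_normal((N_star, alpha_bar, N))
a_inv = np.diag(1.0 / rng.uniform(10.0, 100.0, size=N))  # large diagonal [a]

# A[G, al, Pi, mu] = J[G, al, P] * a_inv[P, R] * J[Pi, mu, R]
A = np.einsum('gap,pr,imr->gaim', J, a_inv, J)

# Flatten to a matrix acting on N* x alpha_bar "dual gradient" matrices and
# check positive semi-definiteness via the symmetric eigenvalue problem.
M = A.reshape(N_star * alpha_bar, N_star * alpha_bar)
eigs = np.linalg.eigvalsh(0.5 * (M + M.T))
print(eigs.min() >= -1e-12)            # PSD: no negative eigenvalues
print(np.linalg.matrix_rank(M) <= N)   # degenerate: rank at most N < N*·alpha_bar
```

The rank deficiency is exactly the 'degenerate' part of degenerate ellipticity: $\mathbb{A}$ annihilates a subspace of gradient directions while never producing negative quadratic forms.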
We note that degenerate ellipticity is stronger than the Legendre-Hadamard condition given by the requirement of positive semi-definiteness of $\mathbb{A}$ on the space of tensor products from $\mathbb{R}^{N^*} \otimes \mathbb{R}^{\bar{\alpha}}$, and not directly comparable to the strong-ellipticity condition, since it is weaker than the latter when restricted to the space $\mathbb{R}^{N^*} \otimes \mathbb{R}^{\bar{\alpha}}$ while simultaneously requiring semi-definiteness on the larger space of $\mathbb{R}^{N^* \times \bar{\alpha}}$. Also of note is that degenerate ellipticity does not preclude the failure of ellipticity characterized by the condition $\det [\mathbb{A}_{\Gamma \alpha \Pi \mu} n_\alpha n_\mu] \neq 0$ for all unit directions $n \in \mathbb R^ {\bar{\alpha}}$, $\bar{\alpha} = 3$ or $4$, thus allowing for weak (gradient) discontinuities of weak solutions $x \mapsto D(x)$ of [\[eq:dual_EL\]](#eq:dual_EL){reference-type="eqref" reference="eq:dual_EL"} (or at least its linearized counterpart), a feature that is important for recovering discontinuous solutions of the primal problem (e.g. inviscid Burgers) expressed as combinations of derivatives of the dual fields through the DtP mapping as, e.g., demonstrated in the context of the linear transport equation in [@KA1]. If a solution of the primal system is close to the base state $\bar{U}$, then it seems natural to expect, due to this local degenerate ellipticity, that such a solution can be obtained in a 'stable' manner by the dual formulation *designed by the choice of the auxiliary potential $H$ as a shifted quadratic (or 'power law') about the base state $\bar{U}$*, for instance by an iterative scheme starting from a guess $(D = 0, U = \bar{U})$. Our experience [@KA1; @sga; @KA2; @a_arora_thesis] shows that this observation is of great practical relevance in using the dual scheme, and we consistently exploit it in all our computational approximations.
To make contact with the parlance of the classical 'rate problems' of Hill [@hill1957uniqueness; @hill1956new; @hill1979aspects], degenerate ellipticity here corresponds to the absence of negative 'energy' modes of the linearized, or 'incremental/rate,' dual problem at dual states whose corresponding primal state, obtained via the DtP mapping, may well entail a loss of positive-semi-definiteness of the physical incremental moduli on the space of dyads $a \otimes n \ ( a, n \in \mathbb R^{3})$ in the primal rate problem under quasi-static conditions. Furthermore, by a theorem of Ball [@ball1976convexity] and in the context of nonlinear hyperelasticity as the primal problem, quasiconvexity implies the Legendre-Hadamard condition so that it is possible that the dual problem remains degenerate elliptic/convex, even when the primal problem is not quasiconvex. # A variational principle for rate-dependent, dynamic, single crystal plasticity {#sec:rd_sc} We follow the scheme described in Sec. [2](#sec:dual_cont_mech){reference-type="ref" reference="sec:dual_cont_mech"} to develop the required variational principle. The specifics of rate-dependent single crystal plasticity theory can be found in the expositions of [@hutchinson1976bounds; @asaro1983micromechanics]. Let $\Omega \subset \mathbb R^3$ be a given, fixed reference configuration with all spatial derivatives below being w.r.t rectangular Cartesian coordinates parametrizing this reference, and partial derivatives w.r.t time, $t$, representing material time derivatives, also alternatively written with a superposed dot. The interval $[0,T]$ is fixed, but chosen arbitrarily. Lowercase Greek (super)subscripts refer to numbering of slip systems. 
We consider the following set of equations on $\Omega$: $$\label{eq:primal_rd} \begin{aligned} \rho_0 \dot{v} - \partial_j {\cal N}_{ij}(F,P) & = 0 \\ \dot{P}_{ij} - \sum_\alpha \Big( r^\alpha(F,P,g) \, m^\alpha_i n^\alpha_k \Big) P_{kj} & = 0 \\ \dot{g}_\alpha - h_{\alpha \beta} (g) \, r^\beta(F, P, g) & = 0 \\ \dot{y}_i - v_i & = 0 \\ \partial_j y_i - F_{ij} & = 0, \end{aligned}$$ with the boundary conditions $$\label{eq:primal_rd_bc} {\cal N}_{ij}(F,P)\big|_{(x,t)} n_j\big|_x = \bar{t}_i(x,t), \ x \in \partial\Omega_{\bar{t}}; \qquad y_i(x,t) = y^{(b)}_i(x,t), \ x \in \partial\Omega_y,$$ and initial conditions $$\label{eq:eq:primal_rd_ic} y_i(x,0) = y^{(0)}_i(x), \qquad v_i(x,0) = v^{(0)}_i(x), \qquad P_{ij} (x,0) = P^{(0)}_{ij} (x), \qquad g^{\alpha}(x,0) = g^{\alpha(0)}(x), \qquad x \in \Omega.$$ In the above, $\rho_0$ is a given mass density field on the reference configuration, ${\cal N}$ is the response function for the first Piola-Kirchhoff stress w.r.t. the reference configuration, $y,v, F$ are the position, velocity, and deformation gradient fields, respectively, $P$ is the plastic distortion tensor, $r^\alpha$ are response functions for the slip system rates (e.g., the power law [@hutchinson1976bounds] or the Perzyna overstress model [@perzyna1966fundamental]), $(m^\alpha, n^\alpha)$ are the elastically unstretched slip direction and slip normal vectors, $g^\alpha$ are the strengths, and $h_{\alpha \beta}$ are the hardening matrix response functions [@hill1966generalized]. All quantities indexed by $\alpha$ refer to an object corresponding to the $\alpha^{th}$ slip system. The functions $\bar{t},y^{(b)}, y^{(0)}, v^{(0)}, P^{(0)}, g^{\alpha(0)}$ are prescribed. 
Now define the array of primal fields $$U = (y, v, F, P, g),$$ the dual fields $$D = (\xi, \gamma, \Phi, \Pi, \Gamma),$$ and assume the potential $H$ to be of the form $$\begin{aligned} & H(y,v,F,P, g, x, t) = \\ & \quad \frac{1}{2} \left( a_y \Big|y - \bar{y} |_{(x,t)} \Big|^2 + a_v \Big|v - \bar{v}|_{(x,t)} \Big|^2 + a_F \Big|F - \bar{F} |_{(x,t)} \Big|^2 + a_P \Big|P - \bar{P}|_{(x,t)} \Big|^2 + a_g \Big|g - \bar{g}|_{(x,t)} \Big|^2 \right) \\ & + \frac{1}{p} \left( b_F \big|F - \bar{F}|_{(x,t)} \Big|^p + b_P \Big|P - \bar{P}|_{(x,t)} \Big|^p + b_g \Big|g - \bar{g}|_{(x,t)} \Big|^p \right), \end{aligned}$$ for $p > 2$ as needed. Here, the *base states*, the collection of space-time fields with overhead bars, are arbitrarily specified, with their closeness to an actual solution of the primal problem resulting in a better design of the variational principle. The introduction of the power $p$ is simply to ensure strict convexity of the ensuing Lagrangian $\mathcal{L}_H$ in the primal variables $U$ for each fixed set of values of $(\mathcal{D}, x, t)$, which in turn is closely dictated by the nonlinearities of the primal problem. The non-negative real-valued constants $a_{(\cdot)}, b_{(\cdot)}$ are chosen arbitrarily, typically large, when non-zero, to facilitate the strict convexity of $\mathcal{L}_H$ that appears below. Clearly, there is a great deal of freedom in making the choices of $H$; e.g., the power $p$ in the specific choice above does not even have to be the same on each of the terms. 
We now define the pre-dual functional $$\label{eq:S_hat_rd} \begin{aligned} \widehat{S}[U, D] & = \int_\Omega \int_0^T \mathcal{L}_H (U, \mathcal{D},x, t) \, dx dt + \mbox{initial and boundary contributions} \\ & := \ \int_\Omega \int_0^T - \rho_0 v_i \partial_t \gamma_i + {\cal N}_{ij}\big|_{(F,P)} \partial_j \gamma_i - P_{ij} \partial_t \Pi_{ij} - \Pi_{ij} \sum_\alpha r^\alpha \big|_{(F,P,g)} \, m^\alpha_i n^\alpha_k P_{kj} \ dx dt \\ & \quad - \int_\Omega \rho_0 v^{(0)}_i(x) \gamma_i(x,0) + P^{(0)}_{ij}(x) \Pi_{ij}(x,0) \, dx - \int_{\partial\Omega_{\bar{t}}} \int_0^T \bar{t}_i \gamma_i \, da dt \\ & \quad - \int_\Omega \int_0^T g_\alpha \partial_t \Gamma^\alpha + \Gamma^\alpha h_{\alpha \beta}\big|_g r^\beta \big|_{(F,P,g)} + y_i \partial_t \xi_i + \xi_i v_i + y_i \partial_j \Phi_{ij} + \Phi_{ij} F_{ij} \, dx dt \\ & \quad - \int_\Omega g^{\alpha (0)}(x) \Gamma^\alpha(x,0) + y_i^{(0)}(x) \xi_i(x,0) \, dx + \int_{\partial\Omega_y} \int_0^T y_i^{(b)} \Phi_{ij} n_j \, da dt\\ & \quad + \int_\Omega \int_0^T H(y,v,F,P, g, x, t) \, dxdt , \end{aligned}$$ where the array $\mathcal{D}$ is defined as $$\mathcal{D}= (\partial_t \gamma, \nabla \gamma, \partial_t \Pi, \Pi, \partial_t \Gamma, \Gamma, \partial_t \xi, \xi, div \, \Phi, \Phi).$$ In order to define the function $U^{(H)}$ we need to consider the $(x,t)$-pointwise equations for $U^{(H)}$ for the given values $(\mathcal{D}(x,t), \bar{U}(x,t))$ (we will drop the superscript $^{(H)}$ for notational convenience): $$\begin{aligned} & \frac{\partial\mathcal{L}_H}{\partial y_i} : \qquad - \partial_t \xi_i - \partial_j \Phi_{ij} + a_y \left( y_i - \bar{y}_i \right) = 0\\ & \frac{\partial\mathcal{L}_H}{\partial v_i} : \qquad - \rho_0 \partial_t \gamma_i - \xi_i + a_v \left( v_i - \bar{v}_i \right) = 0\\ & \frac{\partial\mathcal{L}_H}{\partial F_{ij}} : \qquad \partial_l \gamma_k \frac{\partial{\cal N}_{kl}}{\partial F_{ij}}\bigg|_{(F,P)} - \Pi_{rs} \sum_\alpha \frac{\partial r^\alpha}{\partial F_{ij}}\bigg|_{(F,P,g)}
m^\alpha_r n^\alpha_k P_{ks} \\ & \qquad \qquad \quad - \Gamma^\alpha h_{\alpha \beta}|_g \frac{\partial r^\beta}{\partial F_{ij}}\bigg|_{(F,P,g)} - \Phi_{ij} + \left(a_F + b_F \left| F - \bar{F} \right|^{p-2}\right) \left( F_{ij} - \bar{F}_{ij} \right) = 0\\ & \frac{\partial\mathcal{L}_H}{\partial P_{rs}} : \qquad \partial_l \gamma_k \frac{\partial{\cal N}_{kl}}{\partial P_{rs}}\bigg|_{(F,P)} - \partial_t \Pi_{rs} - \Pi_{ij} \sum_\alpha \frac{\partial r^\alpha}{\partial P_{rs}}\bigg|_{(F,P,g)} m^\alpha_i n^\alpha_k P_{kj} \\ & \qquad \qquad \quad - \Pi_{is} \sum_\alpha r^\alpha|_{(F,P,g)} m^\alpha_i n^\alpha_r - \Gamma^\alpha h_{\alpha \beta}|_g \frac{\partial r^\beta}{\partial P_{rs}}\bigg|_{(F,P,g)} + \left(a_P + b_P \left| P - \bar{P} \right|^{p-2}\right) \left( P_{rs} - \bar{P}_{rs} \right) = 0\\ & \frac{\partial\mathcal{L}_H}{\partial g^\alpha} : \qquad - \Pi_{ij} \sum_\kappa \frac{\partial r^\kappa}{\partial g^\alpha}\bigg|_{(F,P,g)} m^\kappa_i n^\kappa_k P_{kj} - \partial_t \Gamma^\alpha - \Gamma^\kappa \frac{\partial h_{\kappa \beta}}{\partial g^\alpha}\bigg|_g r^\beta\big|_{(F,P,g)} - \Gamma^\kappa h_{\kappa \beta}|_g \frac{\partial r^\beta}{\partial g^\alpha}\bigg|_{(F,P,g)} + \left(a_g + b_g \left| g - \bar{g} \right|^{p-2}\right) \left( g^\alpha - \bar{g}^\alpha \right) = 0. \end{aligned}$$ By making suitable choices for the various constants appearing in $H$, the expectation is that $\mathcal{L}_H$ can be made strictly convex in $U$ so that a (unique) solution for $U^{(H)}$ exists and can be computed by standard techniques without difficulty at (almost) every point of the domain.
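The convexification role of the constants in $H$ can be seen in a one-variable caricature of these pointwise solves (an invented scalar stand-in, not the crystal-plasticity system itself): with $\mathcal{L}_H(u) = c\sin(u) + \frac{a}{2}(u-\bar{u})^2 + \frac{b}{p}|u-\bar{u}|^p$, choosing $a > c$ makes the Hessian uniformly positive, so Newton's method finds the unique stationary point from any start.

```python
import math

# Hypothetical pointwise Lagrangian in one primal variable u:
#   L_H(u) = c*sin(u) + 0.5*a*(u - u_bar)**2 + (b/p)*|u - u_bar|**p
# For a > c the Hessian  -c*sin(u) + a + b*(p-1)*|u - u_bar|**(p-2)
# is bounded below by a - c > 0, so the stationary point is unique.

def solve_dtp(u_bar, a=20.0, b=1.0, c=5.0, p=4, tol=1e-12):
    u = u_bar
    for _ in range(100):
        d = u - u_bar
        grad = c * math.cos(u) + a * d + b * abs(d)**(p - 2) * d
        hess = -c * math.sin(u) + a + b * (p - 1) * abs(d)**(p - 2)
        step = grad / hess
        u -= step
        if abs(step) < tol:
            return u
    raise RuntimeError("Newton did not converge")

u_star = solve_dtp(u_bar=0.7)
# The stationarity residual vanishes at the computed point.
d = u_star - 0.7
res = 5.0 * math.cos(u_star) + 20.0 * d + 1.0 * abs(d)**2 * d
print(abs(res) < 1e-10)   # prints True
```

This is the sense in which 'standard techniques without difficulty' applies: the penalty constants turn each pointwise DtP solve into a well-conditioned strictly convex problem.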
We now define a *dual* functional for dynamic rate-dependent, single crystal plasticity as $$S[D] = \widehat{S} \left[ U^{(H)}, D\right]$$ interpreted as replacing all occurrences of $U$ in the right-hand-side of [\[eq:S_hat_rd\]](#eq:S_hat_rd){reference-type="eqref" reference="eq:S_hat_rd"} by $U^{(H)}\left(\mathcal{D}, \bar{U}(x,t) \right)$ subject to the following essential 'boundary conditions' on parts of the space-time domain boundary given by ${\cal B}:= (\partial\Omega \times (0,T)) \cup (\Omega \times \{0,T\})$: $$\begin{aligned} & \mbox{ (arbitrarily) specified $\gamma$ on ${\cal B}\backslash ( \ (\partial\Omega_{\bar{t}} \times (0,T)) \cup (\Omega \times \{0\}) \ )$ and $\Phi$ on ${\cal B}\backslash (\partial\Omega_y \times (0,T)) $ and} \\ & \mbox{ (arbitrarily) specified $\Pi, \Gamma, \xi$ on ${\cal B}\backslash (\Omega \times \{0\}) $}. \end{aligned}$$ The Euler-Lagrange equations and the natural boundary conditions of $S$ are the equations [\[eq:primal_rd\]](#eq:primal_rd){reference-type="eqref" reference="eq:primal_rd"}-[\[eq:primal_rd_bc\]](#eq:primal_rd_bc){reference-type="eqref" reference="eq:primal_rd_bc"}-[\[eq:eq:primal_rd_ic\]](#eq:eq:primal_rd_ic){reference-type="eqref" reference="eq:eq:primal_rd_ic"}, with the replacement $U \rightarrow U^{(H)}\left(\mathcal{D}, \bar{U}(x,t) \right)$. # A variational principle for rate-independent, quasi-static, single crystal plasticity {#sec:ri_sc} The specifics of rate-independent single crystal plasticity theory may be found in the expositions [@havner1992finite; @bassani1993plastic]. *In this section, the summation convention is not used on lower-case Greek indices* which index the slip systems. 
We consider the following set of equations on a fixed reference $\Omega \subset \mathbb R^3$: $$\label{eq:primal_ri} \begin{aligned} \partial_j {\cal N}_{ij}(F,P) & = 0 \\ \dot{P}_{ij} - \sum_\alpha \Big( r^\alpha \, m^\alpha_i n^\alpha_k \Big) P_{kj} & = 0 \\ \dot{g}_\alpha - \sum_\beta h_{\alpha \beta} (g) \, r^\beta & = 0 \\ \partial_j y_i - F_{ij} & = 0 \\ Y^\alpha (F,P,g) + s_\alpha^2 & = 0 \\ r^\alpha Y^\alpha & = 0 \\ r^\alpha - p_\alpha^2 & = 0, \end{aligned}$$ with the boundary conditions $$\label{eq:primal_ri_bc} {\cal N}_{ij}(F,P)\big|_{(x,t)} n_j\big|_x = \bar{t}_i(x,t), \ x \in \partial\Omega_{\bar{t}}; \qquad y_i(x,t) = y^{(b)}_i(x,t), \ x \in \partial\Omega_y,$$ and initial conditions $$\label{eq:eq:primal_ri_ic} P_{ij} (x,0) = P^{(0)}_{ij} (x), \qquad g^{\alpha}(x,0) = g^{\alpha(0)}(x), \qquad x \in \Omega.$$ In the above, ${\cal N}$ is the response function for the first Piola-Kirchhoff stress w.r.t. the reference configuration, $y, F$ are the position and deformation gradient fields, respectively, $P$ is the plastic distortion tensor, $r^\alpha$ is a slip rate, $(m^\alpha, n^\alpha)$ are the unstretched slip direction and slip normal vectors, $g^\alpha$ is a strength, $h_{\alpha \beta}$ are the hardening matrix response functions, $Y^\alpha$ is a yield response function (the canonical example being $Y^\alpha = \tau^\alpha - g^\alpha$, where $\tau^\alpha$ is the resolved shear stress on the slip system $\alpha$ given by $\tau^\alpha = (F^e m^\alpha)_i T_{ij} (F^{e-T} n^\alpha)_j$ where $F^e := FP^{-1}$ is the elastic distortion, and $T = (\det F)^{-1} {\cal N}F^T$ is the Cauchy stress tensor), and $s_\alpha, p_\alpha$ are slack variables. All quantities indexed by $\alpha$ refer to an object corresponding to the $\alpha^{th}$ slip system. The slack variables enable the imposition of the inequalities $$Y^\alpha \leq 0; \qquad r^\alpha \geq 0.$$ The functions $\bar{t},y^{(b)}, P^{(0)}, g^{\alpha(0)}$ are prescribed.
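The last three equations of [\[eq:primal_ri\]](#eq:primal_ri){reference-type="eqref" reference="eq:primal_ri"} encode, through the slacks $s_\alpha, p_\alpha$, the complementarity conditions $Y^\alpha \leq 0$, $r^\alpha \geq 0$, $r^\alpha Y^\alpha = 0$. A small sanity check of this encoding (the numerical values below are hypothetical):

```python
import math

def encode(Y, r):
    """Return slacks (s, p) with Y + s**2 = 0 and r - p**2 = 0,
    which requires the inequalities Y <= 0 and r >= 0 to hold."""
    if Y > 0 or r < 0:
        raise ValueError("inequalities violated; no real slack exists")
    return math.sqrt(-Y), math.sqrt(r)

def check(Y, r, s, p, tol=1e-12):
    """Verify the three slack/complementarity equations of the system."""
    return (abs(Y + s**2) < tol and abs(r - p**2) < tol and abs(r * Y) < tol)

# Inactive slip system: Y < 0, and r*Y = 0 then forces r = 0.
s, p = encode(-2.0, 0.0)
print(check(-2.0, 0.0, s, p))   # prints True
# Active slip system: Y = 0, a slip rate r > 0 is admissible.
s, p = encode(0.0, 0.3)
print(check(0.0, 0.3, s, p))    # prints True
```

Writing the inequalities as smooth equalities is what allows them to be absorbed into the pre-dual functional below on the same footing as the other field equations.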
Now define the array of primal fields $$U = (y, F, P, g, r, s, p),$$ the dual fields $$D = (\gamma, \Phi, \Pi, \Gamma, \mu, \rho, \nu),$$ and assume the potential $H$ to be of the form $$\begin{aligned} & H(y,F,P, g, r,s, p, x, t) = \\ & \quad \frac{1}{2} \left( a_y \Big|y - \bar{y} |_{(x,t)} \Big|^2 + a_F \Big|F - \bar{F} |_{(x,t)} \Big|^2 + a_P \Big|P - \bar{P}|_{(x,t)} \Big|^2 + a_g \Big|g - \bar{g}|_{(x,t)} \Big|^2 \right) \\ & + \frac{1}{2} \left( a_r \big|r - \bar{r}|_{(x,t)} \big|^2 + a_s \big|s - \bar{s}|_{(x,t)} \big|^2 + a_p \big|p - \bar{p}|_{(x,t)} \big|^2 \right)\\ & + \frac{1}{p} \left( b_F \Big|F - \bar{F}|_{(x,t)} \Big|^p + b_P \Big|P - \bar{P}|_{(x,t)} \Big|^p + b_g \Big|g - \bar{g}|_{(x,t)} \Big|^p \right), \end{aligned}$$ for $p > 2$ as needed, with the same understanding operative for base states and the various constants that appear as in the previous Section [4](#sec:rd_sc){reference-type="ref" reference="sec:rd_sc"}. We now define the pre-dual functional $$\label{eq:S_hat_ri} \begin{aligned} \widehat{S}[U, D] & = \int_\Omega \int_0^T \mathcal{L}_H (U, \mathcal{D},x, t) \, dx dt + \mbox{initial and boundary contributions} \\ & := \ \int_\Omega \int_0^T - {\cal N}_{ij}\big|_{(F,P)} \partial_j \gamma_i - P_{ij} \partial_t \Pi_{ij} - \Pi_{ij} \sum_\alpha r^\alpha \, m^\alpha_i n^\alpha_k P_{kj} \ dx dt \\ & \quad - \int_\Omega P^{(0)}_{ij}(x) \Pi_{ij}(x,0) \, dx + \int_{\partial\Omega_{\bar{t}}} \int_0^T \bar{t}_i \gamma_i \, da dt + \int_{\partial\Omega_y} \int_0^T y_i^{(b)} \Phi_{ij} n_j \, da dt\\ & \quad - \int_\Omega \int_0^T y_i \partial_j \Phi_{ij} + \Phi_{ij} F_{ij} + \sum_\alpha g^\alpha \partial_t \Gamma^\alpha + \sum_\alpha \sum_\beta \Gamma^\alpha h_{\alpha \beta}\big|_g r^\beta \, dx dt \\ & \quad + \int_\Omega \int_0^T \sum_\alpha \left( \rho^\alpha Y^\alpha \big|_{(F,P,g)} + \rho^\alpha s_\alpha^2 + r^\alpha Y^\alpha\big|_{(F,P,g)} \mu^\alpha + r^\alpha \nu^\alpha - \nu^\alpha p_\alpha^2 \right) \, dx dt\\ & \quad - \int_\Omega
g^{\alpha (0)}(x) \Gamma^\alpha(x,0) \, dx \\ & \quad + \int_\Omega \int_0^T H(y,F,P, g, r,s, p, x, t) \, dx dt, \end{aligned}$$ where the array $\mathcal{D}$ is defined as $$\mathcal{D}= (\nabla \gamma, \partial_t \Pi, \Pi, div \, \Phi, \Phi, \rho, \mu, \nu, \partial_t \Gamma, \Gamma).$$ In order to define the function $U^{(H)}$ we need to consider the following $(x,t)$-pointwise equations for $U^{(H)}$ for the given values $(\mathcal{D}(x,t), \bar{U}(x,t))$ (we will drop the superscript $^{(H)}$ for notational convenience): $$\begin{aligned} & \frac{\partial\mathcal{L}_H}{\partial y_i} : \qquad - \partial_j \Phi_{ij} + a_y \left( y_i - \bar{y}_i \right) = 0\\ & \frac{\partial\mathcal{L}_H}{\partial F_{rs}} : \qquad - \partial_j \gamma_i \frac{\partial{\cal N}_{ij}}{\partial F_{rs}}\bigg|_{(F,P)} - \Phi_{rs} + \sum_\alpha (\rho^\alpha + r^\alpha \mu^\alpha) \frac{\partial Y^\alpha}{\partial F_{rs}}\bigg|_{(F,P,g)} + \left(a_F + b_F \left| F - \bar{F} \right|^{p-2}\right) \left( F_{rs} - \bar{F}_{rs} \right) = 0\\ & \frac{\partial\mathcal{L}_H}{\partial P_{rs}} : \qquad - \partial_l \gamma_k \frac{\partial{\cal N}_{kl}}{\partial P_{rs}}\bigg|_{(F,P)} - \partial_t \Pi_{rs} - \Pi_{is} \sum_\alpha r^\alpha m^\alpha_i n^\alpha_r + \sum_\alpha (\rho^\alpha + r^\alpha \mu^\alpha) \frac{\partial Y^\alpha}{\partial P_{rs}}\bigg|_{(F,P,g)} \\ & \qquad \qquad \quad + \left(a_P + b_P \left| P - \bar{P} \right|^{p-2}\right) \left( P_{rs} - \bar{P}_{rs} \right) = 0\\ & \frac{\partial\mathcal{L}_H}{\partial g^\mu} : \qquad \sum_\alpha (\rho^\alpha + r^\alpha \mu^\alpha) \frac{\partial Y^\alpha}{\partial g^\mu}\bigg|_{(F,P,g)} - \partial_t \Gamma^\mu - \sum_\alpha \sum_\beta \Gamma^\alpha \frac{\partial h_{\alpha \beta}}{\partial g^\mu}\bigg|_g r^\beta + \left(a_g + b_g \left| g - \bar{g} \right|^{p-2}\right) \left( g^\mu - \bar{g}^\mu \right) = 0\\ & \frac{\partial\mathcal{L}_H}{\partial r^\alpha} : \qquad - \Pi_{ij} m^\alpha_i n^\alpha_k P_{kj} + Y^\alpha\big|_{(F,P,g)}
\mu^\alpha + \nu^\alpha - \sum_\kappa \Gamma^\kappa h_{\kappa \alpha}\big|_g + a_r (r^\alpha - \bar{r}^\alpha) = 0\\ & \frac{\partial\mathcal{L}_H}{\partial s^\alpha} : \qquad 2 s_\alpha \rho^\alpha + a_s (s^\alpha - \bar{s}^\alpha) = 0 \\ & \frac{\partial\mathcal{L}_H}{\partial p^\alpha} : \qquad - 2 p_\alpha \nu^\alpha + a_p (p^\alpha - \bar{p}^\alpha) = 0. \end{aligned}$$ Again, by making suitable choices for the various constants appearing in $H$, $\mathcal{L}_H$ can be made strictly convex in $U$ so that a (unique) solution for $U^{(H)}$ exists and can be computed by standard techniques without difficulty at (almost) every point of the domain. We now define a *dual* functional for quasi-static, rate-independent, single crystal plasticity as $$S[D] = \widehat{S} \left[ U^{(H)}, D\right]$$ interpreted as replacing all occurrences of $U$ in the right-hand-side of [\[eq:S_hat_ri\]](#eq:S_hat_ri){reference-type="eqref" reference="eq:S_hat_ri"} by $U^{(H)}\left(\mathcal{D}, \bar{U}(x,t) \right)$ subject to the following essential 'boundary conditions' on parts of the space-time domain boundary given by ${\cal B}:= (\partial\Omega \times (0,T)) \cup (\Omega \times \{0,T\})$: $$\begin{aligned} & \mbox{ (arbitrarily) specified $\gamma$ on ${\cal B}\backslash ( \ (\partial\Omega_{\bar{t}} \times (0,T)) \cup (\Omega \times \{0\}) \ )$ and $\Phi$ on ${\cal B}\backslash (\partial\Omega_y \times (0,T)) $ and} \\ & \mbox{ (arbitrarily) specified $\Pi, \Gamma$ on ${\cal B}\backslash (\Omega \times \{0\}) $}. \end{aligned}$$ The Euler-Lagrange equations and the natural boundary conditions of $S$ are the equations [\[eq:primal_ri\]](#eq:primal_ri){reference-type="eqref" reference="eq:primal_ri"}-[\[eq:primal_ri_bc\]](#eq:primal_ri_bc){reference-type="eqref" reference="eq:primal_ri_bc"}-[\[eq:eq:primal_ri_ic\]](#eq:eq:primal_ri_ic){reference-type="eqref" reference="eq:eq:primal_ri_ic"}, with the replacement $U \rightarrow U^{(H)}\left(\mathcal{D}, \bar{U}(x,t) \right)$.
# Concluding remarks and outlook {#sec:concl} A formal scheme for developing variational principles for systems of nonlinear partial differential equations arising in continuum mechanics has been proposed. It is based on the realization that such a system of equations may be viewed as an 'invariant' or a 'symmetry' of a family of dual variational principles parametrized by a set of scalar potentials of the primal variables, the parametrization acting as the symmetry operation, and the invariant being the Euler-Lagrange equations of any of the variational principles in that family. The scheme appears to be best suited for problems which are difficult to solve in the 'primal' setting, be it due to lack of existence of solutions as defined by extant strategies, uniqueness, or stability, cf. [@a_arora_thesis; @sga], and not meant as a competitor for problems that are solved robustly by existing techniques for the primal problem. It offers the possibility of defining the notion of a very weak solution of the primal problem as the solution to the dual variational problem, which with enough regularity, defines a genuine weak solution of the primal PDE system. This can be useful, as nonlinear PDE systems are generally much harder to solve than a variational minimization/maximization problem. The Euler-Lagrange equations of the dual problem have a certain degenerate ellipticity, and knowledge of 'base states' close to desired solutions can be incorporated in the scheme without approximation; these two features combined together help in obtaining (un)stable solutions of the primal problem in a stable way within the dual formulation - a case study is provided in [@sga]. Degenerate ellipticity by itself is not a very strong property (depending on taste, e.g. when compared to strong ellipticity or strict convexity when physically natural), but does take on significance when the primal PDE system loses ellipticity (along with becoming indefinite) or hyperbolicity. 
As a (non-rigorous) sketch of how our scheme may have the potential of achieving the above objective, consider the case of nonlinear hyperelasticity without higher-order regularization. The dominant (and, perhaps, only) strategy available [@ball1976convexity] is to declare minimizers of the elastic energy as solutions to the problem. It is well-understood that, more or less, quasiconvexity of the functional (along with some coercivity) is equivalent to the existence of minimizers. As laid out in many works, quasiconvexity is hard to check (but its failure not so), and it is known to fail for many physical energy functionals that produce fine microstructures as limits of energy minimizing sequences; these sequences, however, have no status as minimizers of the energy functional itself due to its lack of lower semicontinuity. The present scheme, by contrast, seeks to define some notion of a solution to the PDE of elastostatics given an elastic stress response function (a system that may be the formal Euler-Lagrange equations of the physical energy functional), thus 'severing' the link with minimizers of the physical energy functional and hence with its quasiconvexity. It produces a convex variational principle on the dual side, whose critical points and minimizers can nevertheless be sought and, with sufficient regularity in them and the DtP mapping, be deemed solutions to the PDE of elastostatics; such solutions, of course, need not have a connection to being minimizers of the primal energy functional. Perhaps more importantly, even when the regularity-related steps cannot be carried through, the obtained critical points of the dual problem can be declared as some sort of very weak solutions of the primal PDE system, because of their consistency with the primal problem in the presence of regularity. 
The formalism has been used in this paper to present variational principles for a class of single crystal plasticity problems which demonstrate its relevance to the theory of generally non-associated, multi-surface plasticity. A particular spin-off of the approach is a potentially robust technique [@action_3 Sec. 2] for computing solutions of non-monotone systems of nonlinear algebraic equations that are not, in the first instance, the gradients of a scalar objective, as can arise in the local material update of classical plasticity models. In these situations, the dual scheme always produces symmetric Jacobians and, as is well-appreciated in computational plasticity circles, this is of practical significance. From the perspective of robust computation of approximate solutions, the 'universal' degenerate ellipticity of the scheme appears to make it particularly suitable for the application of Discontinuous Galerkin methods for elliptic problems [@arnold2000discontinuous]. In closing, we mention that the ideas presented herein have strong links to modern mathematical thinking on Hidden Convexity in PDE advanced in [@brenier2018initial; @brenier_book] (with the terminology of 'Hidden Convexity' credited by Brenier to L. C. Evans), and appear also to be related to the recent work [@rockafellar2023augmented] on Hidden Convexity in Augmented Lagrangian techniques. # Acknowledgment {#acknowledgment .unnumbered} I thank Janusz Ginster for helpful discussions. This work was supported by the NSF OIA-DMR grant \# 2021019 and by a Simons Pivot Fellowship grant \# 98317. 
It was partially done while I was on sabbatical leave at a) the Max Planck Institute for Mathematics in the Sciences in Leipzig, and b) the Hausdorff Institute for Mathematics at the University of Bonn funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy -- EXC-2047/1 -- 390685813, as part of the Trimester Program on Mathematics for Complex Materials. The support and hospitality of both institutions are acknowledged, as are the informal technical discussions with researchers there. # Appendix {#appendix .unnumbered} # Dirichlet boundary conditions for elastostatics in first-order form [\[eq:gov_cont_mech\]](#eq:gov_cont_mech){reference-type="eqref" reference="eq:gov_cont_mech"} {#app:A} Consider the system [\[eq:elastostat\]](#eq:elastostat){reference-type="eqref" reference="eq:elastostat"} with $$U = (y_1, y_2, y_3, F_{11}, F_{12},F_{13}, F_{21}, F_{22}, F_{23}, F_{31}, F_{32}, F_{33}).$$ For $\Gamma = 4, \dots, 12; j = 1, \ldots, 3$, we consider $F_{\Gamma j}$ to be of the form $$F_{\Gamma j} (U) := {\cal A}_{\Gamma Ij} U_I$$ with ${\cal A}, {\cal B}$ constant matrices with $0$ entries, unless otherwise specified. 
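As a sanity check on this index bookkeeping, the following sketch (not part of the paper) populates ${\cal A}$ and ${\cal B}$ so that ${\cal A}_{\Gamma I j} U_I$ picks out the appropriate component of $y$ and ${\cal B}$ the matching component of $F$, assuming the governing system takes the first-order form $\partial_j({\cal A}_{\Gamma I j} U_I) - {\cal B}_{\Gamma \Delta} U_\Delta = 0$, as the implications tabulated in this appendix indicate. The flat 0-based array layout of $U$ and the use of a linear test deformation are assumptions made purely for the illustration.

```python
import numpy as np

# Sketch (not from the paper): assume the flat 0-based layout
# U = (y_1, y_2, y_3, F_11, F_12, ..., F_33) and the first-order form
#   d_j (A[Gamma, I, j] U_I) - B[Gamma, Delta] U_Delta = 0.
nU = 12
A = np.zeros((nU, nU, 3))   # A[Gamma, I, j]
B = np.zeros((nU, nU))      # B[Gamma, Delta]
for a in range(3):          # row index of F (0-based)
    for c in range(3):      # column index of F (0-based)
        G = 3 + 3 * a + c   # equation index Gamma (paper's Gamma = G + 1)
        A[G, a, c] = 1.0    # A[Gamma, I, j] U_I = y_a, contributing d_c y_a
        B[G, G] = 1.0       # B[Gamma, Delta] U_Delta = F_{ac}

# Linear test field y(x) = M x, so F = grad y = M exactly.
rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3))
U = np.concatenate([np.zeros(3), M.ravel()])  # U at x = 0; the F-part is M
dU = np.zeros((nU, 3))                        # dU[I, j] = d_j U_I
dU[:3, :] = M                                 # grad y = M; F is constant

# Residual of the kinematic rows: vanishes identically, i.e. the system
# reduces to d_c y_a - F_{ac} = 0.
residual = np.einsum('gij,ij->g', A, dU) - B @ U
assert np.allclose(residual, 0.0)
```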
Then, $$\begin{aligned} {\cal A}_{411} = 1; & \qquad {\cal B}_{44} = 1 & \qquad \Longrightarrow \qquad \partial_1 y_1 - F_{11} = 0\\ {\cal A}_{512} = 1; & \qquad {\cal B}_{55} = 1 & \qquad \Longrightarrow \qquad \partial_2 y_1 - F_{12} = 0\\ {\cal A}_{613} = 1; & \qquad {\cal B}_{66} = 1 & \qquad \Longrightarrow \qquad \partial_3 y_1 - F_{13} = 0 \\ %% {\cal A}_{721} = 1; & \qquad {\cal B}_{77} = 1 & \qquad \Longrightarrow \qquad \partial_1 y_2 - F_{21} = 0\\ {\cal A}_{822} = 1; & \qquad {\cal B}_{88} = 1 & \qquad \Longrightarrow \qquad \partial_2 y_2 - F_{22} = 0\\ {\cal A}_{923} = 1; & \qquad {\cal B}_{99} = 1 & \qquad \Longrightarrow \qquad \partial_3 y_2 - F_{23} = 0 \\ %% {\cal A}_{10 \, 31} = 1; & \qquad {\cal B}_{10 \, 10} = 1 & \qquad \Longrightarrow \qquad \partial_1 y_3 - F_{31} = 0\\ {\cal A}_{11 \, 32} = 1; & \qquad {\cal B}_{11 \, 11} = 1 & \qquad \Longrightarrow \qquad \partial_2 y_3 - F_{32} = 0\\ {\cal A}_{12 \, 33} = 1; & \qquad {\cal B}_{12 \, 12} = 1 & \qquad \Longrightarrow \qquad \partial_3 y_3 - F_{33} = 0.\\ \end{aligned}$$ Let the matrix entries $B_{\Gamma j} = 0$ unless otherwise specified and $n$ be the outward unit normal field on the boundary $\partial\Omega$. Now, let $y^*_1$ be the desired Dirichlet b.c. on $y_1$ on $\partial\Omega_4 = \partial\Omega_5 = \partial\Omega_6 =: \partial\Omega_{456}$, and for $\Gamma = 4,5,6$ let $B_{\Gamma j} n_j$ be defined as $$B_{\Gamma j} n_j := {\cal A}_{\Gamma I j} n_j y^*_I \qquad \mbox{on} \ \partial\Omega_{456},$$ with $y^*_I = 0, I \neq 1$ without loss of generality. Then [\[eq:gov_bc\]](#eq:gov_bc){reference-type="eqref" reference="eq:gov_bc"} implies the Dirichlet b.c. $$(y_1 - y_1^*) n_j = 0 \ \forall \ j = 1,2,3 \ \mbox{on} \ \partial\Omega_{456} \ \Longrightarrow \ y_1 - y_1^* = 0 \ \mbox{on} \ \partial\Omega_{456}.$$ Similarly, let $y^*_2$ be the desired Dirichlet b.c. 
on $y_2$ on $\partial\Omega_7 = \partial\Omega_8 = \partial\Omega_9 =: \partial\Omega_{789}$, and for $\Gamma = 7,8,9$ let $B_{\Gamma j} n_j$ be defined as $$B_{\Gamma j} n_j := {\cal A}_{\Gamma I j} n_j y^*_I \qquad \mbox{on} \ \partial\Omega_{789},$$ with $y^*_I = 0, I \neq 2$ without loss of generality. Then [\[eq:gov_bc\]](#eq:gov_bc){reference-type="eqref" reference="eq:gov_bc"} implies the Dirichlet b.c. $$(y_2 - y_2^*) n_j = 0 \ \forall \ j = 1,2,3 \ \mbox{on} \ \partial\Omega_{789} \ \Longrightarrow \ y_2 - y_2^* = 0 \ \mbox{on} \ \partial\Omega_{789},$$ and let $y^*_3$ be the desired Dirichlet b.c. on $y_3$ on $\partial\Omega_{10} = \partial\Omega_{11} = \partial\Omega_{12} =: \partial\Omega_{10 \, 11 \, 12}$, and for $\Gamma = 10, 11, 12$ let $B_{\Gamma j} n_j$ be defined as $$B_{\Gamma j} n_j := {\cal A}_{\Gamma I j} n_j y^*_I \qquad \mbox{on} \ \partial\Omega_{10 \, 11 \, 12},$$ with $y^*_I = 0, I \neq 3$ without loss of generality. Then [\[eq:gov_bc\]](#eq:gov_bc){reference-type="eqref" reference="eq:gov_bc"} implies the Dirichlet b.c. $$(y_3 - y_3^*) n_j = 0 \ \forall \ j = 1,2,3 \ \mbox{on} \ \partial\Omega_{10 \, 11 \, 12} \ \Longrightarrow \ y_3 - y_3^* = 0 \ \mbox{on} \ \partial\Omega_{10 \, 11 \, 12}.$$ [^1]: Department of Civil & Environmental Engineering, and Center for Nonlinear Analysis, Carnegie Mellon University, Pittsburgh, PA 15213, email: acharyaamit\@cmu.edu.
arXiv:2310.03201 (math.AP): "A Hidden Convexity in Continuum Mechanics, with application to classical, continuous-time, rate-(in)dependent plasticity", Amit Acharya. License: CC BY 4.0.
--- abstract: | Mixed volumes in $n$-dimensional Euclidean space are functionals of $n$-tuples consisting of convex bodies $K,L,C_1,\ldots,C_{n-2}$. The Alexandrov--Fenchel inequalities are fundamental inequalities between mixed volumes of convex bodies, which cover as very special cases many important inequalities between basic geometric functionals. The problem of characterizing completely the equality cases in the Alexandrov--Fenchel inequality is wide open. Major recent progress was made by Yair Shenfeld and Ramon van Handel [@SvH22; @SvH23+]; in particular, they resolved the problem in the cases where $K,L$ are general convex bodies and $C_1,\ldots,C_{n-2}$ are polytopes, zonoids or smooth bodies (under some dimensional restriction). We introduce the class of polyoids, which includes polytopes, zonoids and triangle bodies, and characterize polyoids by using generating measures. Based on this characterization and Shenfeld and van Handel's contribution, we extend their result to a class of convex bodies containing all polyoids and smooth bodies. Our result is stated in terms of the support of the mixed area measure of the unit ball $B^n$ and $C_1,\ldots,C_{n-2}$. A geometric description of this support is provided in the accompanying work [@HugReichert23+]. author: - Daniel Hug and Paul A. Reichert title: Extremizers of the Alexandrov--Fenchel inequality within a new class of convex bodies --- #### MSC-classes 2020. 52A39, 52A20, 52A21, 52A40 #### Keywords. Polytope, zonoid, polyoid, macroid, Alexandrov--Fenchel inequality, generating measure, mixed area measure # Introduction Mixed volumes of convex bodies (nonempty compact convex sets) in Euclidean space $\mathbb{R}^n$, $n\ge 2$, are symmetric functionals of $n$-tuples of convex bodies. They naturally arise as coefficients of polynomial expansions of nonnegative Minkowski combinations of convex bodies. 
Writing $\mathop{\mathrm{V}}$ for the volume functional (Lebesgue measure) and $\alpha_1K_1+\cdots+\alpha_mK_m$ for the Minkowski combination of the convex bodies $K_1,\ldots,K_m\subset\mathbb{R}^n$ with nonnegative coefficients $\alpha_1,\ldots,\alpha_m\in\mathbb{R}$, we have $$\label{eq:1.1} \mathop{\mathrm{V}}(\alpha_1K_1+\cdots+\alpha_mK_m)=\sum_{i_1,\ldots,i_n=1}^m \mathop{\mathrm{V}}(K_{i_1},\ldots,K_{i_n})\alpha_{i_1}\cdots\alpha_{i_n},$$ where $\mathop{\mathrm{V}}(K_{i_1},\ldots,K_{i_n})$ is called the mixed volume of $K_{i_1},\ldots,K_{i_n}$. As symmetric functions of their $n$ arguments, mixed volumes are uniquely determined by this expansion. We refer to [@Schneider Chap. 5.1] or [@Hug Chap. 3.3] for an introduction to mixed volumes. Conversely, the mixed volume $\mathop{\mathrm{V}}(K_1,\ldots,K_n)$ of a given $n$-tuple of convex bodies $K_1,\ldots,K_n$ can be obtained as an alternating sum of volumes of Minkowski sums, that is, $$\label{eq:1.2} \mathop{\mathrm{V}}(K_1,\ldots,K_n)=\frac{1}{n!}\sum_{k=1}^n(-1)^{n+k}\sum_{1\le i_1<\cdots<i_k\le n}\mathop{\mathrm{V}}(K_{i_1}+\cdots +K_{i_k}).$$ While relations [\[eq:1.1\]](#eq:1.1){reference-type="eqref" reference="eq:1.1"} and [\[eq:1.2\]](#eq:1.2){reference-type="eqref" reference="eq:1.2"} can be efficiently employed for introducing mixed volumes and understanding some of their basic properties, their usefulness in deriving inequalities for mixed volumes seems to be limited. We refer to Schneider [@Schneider Notes for Sect. 5.1] for further background information. A deep inequality for mixed volumes of convex bodies, with many consequences and applications to diverse fields, has been found and established by Alexandrov [@AF1937] (see Schneider [@Schneider Notes for Sect. 7.3], also for some historical comments). **Theorem 1** (Alexandrov--Fenchel Inequality). 
*[\[thm:af\]]{#thm:af label="thm:af"} Let $K, L\subset\mathbb{R}^n$ be convex bodies, and let $\mathbfcal{C}= (C_1, \ldots, C_{n-2})$ be an $(n-2)$-tuple of convex bodies in $\mathbb{R}^n$. Then $$\begin{aligned} \mathop{\mathrm{V}}(K, L, \mathbfcal{C})^2 \ge \mathop{\mathrm{V}}(K, K, \mathbfcal{C}) \mathop{\mathrm{V}}(L, L, \mathbfcal{C}),\tag{AFI} \end{aligned}$$ where $\mathop{\mathrm{V}}(K, L, \mathbfcal{C}):=\mathop{\mathrm{V}}(K,L,C_1, \ldots, C_{n-2})$.* We state the Alexandrov--Fenchel inequality in a second version. It makes use of a linear extension of mixed volumes, known already to Alexandrov, to differences of support functions of convex bodies (see [@Schneider Sect. 5.2]). Such extensions turned out to be useful in proofs of the inequality. For a convex body $K\subset\mathbb{R}^n$, the support function $h_K:\mathbb{R}^n\to\mathbb{R}$ is defined by $h_K(u):=\max\{\langle x,u\rangle:x\in K\}$, where $\langle \cdot\,,\cdot\rangle$ denotes the Euclidean scalar product. The support function is positively homogeneous of degree one, hence it is often sufficient to consider its restriction to the Euclidean unit sphere $\mathbb{S}^{n-1}$, and it uniquely determines the underlying convex body $K$. **Theorem 2** (General Alexandrov--Fenchel Inequality). *[\[thm:af2\]]{#thm:af2 label="thm:af2"} Let $K_1, K_2, L\subset\mathbb{R}^n$ be convex bodies, and let $\mathbfcal{C}= (C_1, \ldots, C_{n-2})$ be an $(n-2)$-tuple of convex bodies in $\mathbb{R}^n$. Then $$\begin{aligned} \mathop{\mathrm{V}}(h_{K_1} - h_{K_2}, L, \mathbfcal{C})^2 \ge \mathop{\mathrm{V}}(h_{K_1} - h_{K_2}, h_{K_1} - h_{K_2}, \mathbfcal{C}) \mathop{\mathrm{V}}(L, L, \mathbfcal{C}).\tag{GAFI} \end{aligned}$$* For a proof of the equivalence of the two versions, we refer to Shenfeld and van Handel [@SvH23+ Lem. 3.11]. 
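For a concrete feel for (AFI): in the plane ($n=2$) the tuple $\mathbfcal{C}$ is empty and (AFI) reduces to Minkowski's inequality $\mathop{\mathrm{V}}(K,L)^2\ge \mathop{\mathrm{V}}(K,K)\mathop{\mathrm{V}}(L,L)$. The following numerical sketch (an illustration, not part of the paper) computes the mixed volume of two polygons from volumes of Minkowski sums, using the $n=2$ case of the alternating-sum formula, $\mathop{\mathrm{V}}(K,L)=\tfrac12\left(\mathop{\mathrm{V}}(K+L)-\mathop{\mathrm{V}}(K)-\mathop{\mathrm{V}}(L)\right)$, and checks the inequality; convex hulls are computed with Andrew's monotone chain so the sketch needs only the standard library.

```python
# Illustrative check of (AFI) in the plane; not part of the paper.

def hull(pts):
    """Convex hull (counterclockwise, no duplicates) of a finite 2D point set."""
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def area(pts):
    """Area of conv(pts) via the shoelace formula."""
    h = hull(pts)
    n = len(h)
    return 0.5 * abs(sum(h[i][0]*h[(i+1) % n][1] - h[(i+1) % n][0]*h[i][1]
                         for i in range(n)))

def mink_sum(P, Q):
    """Vertex candidates of P + Q for convex polygons: all pairwise vertex sums."""
    return [(p[0]+q[0], p[1]+q[1]) for p in P for q in Q]

def mixed_volume(K, L):
    # n = 2 case of the inclusion-exclusion formula:
    # V(K, L) = (V(K + L) - V(K) - V(L)) / 2.
    return 0.5 * (area(mink_sum(K, L)) - area(K) - area(L))

K = [(0, 0), (1, 0), (1, 1), (0, 1)]   # unit square, V(K, K) = 1
L = [(0, 0), (2, 0), (0, 2)]           # triangle,    V(L, L) = 2

VKL = mixed_volume(K, L)               # equals 2 for this pair
assert abs(VKL - 2.0) < 1e-9
assert VKL**2 >= area(K) * area(L)     # Minkowski's inequality = (AFI) for n = 2
```

Here equality fails, as the square and triangle are not homothetic.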
Despite considerable effort, to date it is unknown when exactly equality holds in (AFI) for a general $(n-2)$-tuple of convex bodies $\mathbfcal{C}= (C_1, \ldots, C_{n-2})$ in $\mathbb{R}^n$. While the equality cases can be described purely by dimensionality considerations when at least one of the mixed volumes vanishes, the situation turns out to be much more subtle when $\mathop{\mathrm{V}}(K, K, \mathbfcal{C})$ and $\mathop{\mathrm{V}}(L, L, \mathbfcal{C})$ are both positive. Shenfeld and van Handel [@SvH23+] recently fully characterized the equality cases in (AFI) when $C_1, \ldots, C_{n-2}$ are polytopes. Then they used this result to achieve a characterization when $C_1, \ldots, C_{n-2}$ are polytopes, zonoids or smooth convex bodies, under a mild dimensionality assumption, which they called supercriticality (see [@SvH23+ Thm. 14.9]). Supercriticality is a natural condition that provides some dimensional restriction on a given sequence of nonempty sets, which is satisfied e.g. for any sequence $\mathbfcal{C}=(C_1,\ldots,C_{n-2})$ of full-dimensional convex bodies in $\mathbb{R}^n$. It is related to a well-known condition that ensures that the mixed volume of a given tuple of convex bodies is positive. We refer to Section [3](#sec:3){reference-type="ref" reference="sec:3"} for a precise definition and basic properties that are related to this concept. Recall that a zonoid is a limit (with respect to the Hausdorff metric) of a sequence of finite Minkowski sums of segments. A convex body $K$ is said to be smooth if each boundary point of $K$ is contained in a unique supporting hyperplane of $K$. In this article, we study a class of convex bodies, which we call *polyoids*, that encompasses polytopes, zonoids and triangle bodies. Polyoids are obtained as limits of sequences of polytopes that are finite Minkowski sums of polytopes having at most a fixed number $k$ of vertices (for some $k\in\mathbb{N}$). If $k=2$, we are back in the zonoid case. 
For $k=3$ we cover the class of triangle bodies (cf. [@Schneider1996 Sect. 3], [@Schneider p. 201]). If $\mathcal{P}^n_k$ denotes the set of $k$-topes in $\mathbb{R}^n$ (polytopes having at most $k$ vertices), then the class of polyoids in $\mathbb{R}^n$ is the union of the Minkowski classes $\mathfrak{M}(\mathcal{P}^n_k)$, $k\in\mathbb{N}$, generated by $\mathcal{P}^n_k$ (see [@Schneider Sect. 3.5] for information on Minkowski classes and additive generation). Our treatment will be limited to supercritical $(n-2)$-tuples of convex bodies in $\mathbb{R}^n$. We refer to Section [2](#sec:2){reference-type="ref" reference="sec:2"} for precise definitions and further discussion for these notions. The main aim of this work is to extend the characterization of the equality cases for (AFI), obtained by Shenfeld and van Handel [@SvH23+], to all convex bodies $K,L$ and all supercritical tuples $\mathbfcal{C}= (C_1, \ldots, C_{n-2})$ of polyoids and smooth convex bodies. We begin by stating Shenfeld and van Handel's result for supercritical tuples of polytopes, zonoids and smooth convex bodies. For this purpose, we need the mixed area measure $\mathop{\mathrm{S}}(K_1,\ldots,K_{n-1},\cdot)$ of an $(n-1)$-tuple of convex bodies $K_1,\ldots,K_{n-1}\subset\mathbb{R}^n$. Recall from [@Schneider Sect. 5.1] (or [@Hug Thm. 4.1]) how these finite Borel measures on the Euclidean unit sphere $\mathbb{S}^{n-1}$ are defined. They are related to mixed volumes and support functions via the relation $$\label{eqex} \mathop{\mathrm{V}}(K_1,\ldots,K_{n-1},K_n)=\frac{1}{n}\int_{\mathbb{S}^{n-1}} h_{K_n}(u)\, \mathop{\mathrm{S}}(K_1,\ldots,K_{n-1},\mathop{}\!\mathrm{d}u),$$ which holds for all convex bodies $K_1,\ldots,K_n\subset \mathbb{R}^n$. 
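In the plane, the relation above is easy to make explicit: for a convex polygon $P$, the area measure $\mathop{\mathrm{S}}(P,\cdot)$ is purely atomic, with an atom of mass $\ell_i$ (the edge length) at each outward unit edge normal $u_i$, so that $\mathop{\mathrm{V}}(P,K)=\frac12\sum_i h_K(u_i)\,\ell_i$. The short sketch below (an illustration, not from the paper) verifies that $\mathop{\mathrm{V}}(P,P)$ recovers the area and that the resulting mixed volume is symmetric in its arguments.

```python
import math

# Illustration (not from the paper): atoms of S(P, .) for a convex polygon P
# with counterclockwise-ordered vertices, and the n = 2 case of
# V(P, K) = (1/2) * integral of h_K with respect to S(P, .).
def area_measure(P):
    atoms = []
    n = len(P)
    for i in range(n):
        (x0, y0), (x1, y1) = P[i], P[(i + 1) % n]
        ex, ey = x1 - x0, y1 - y0
        l = math.hypot(ex, ey)
        atoms.append(((ey / l, -ex / l), l))  # outward unit normal, edge length
    return atoms

def h(P, u):
    """Support function h_P(u) = max over vertices of <x, u>."""
    return max(x * u[0] + y * u[1] for (x, y) in P)

def mixed_volume(P, K):
    return 0.5 * sum(h(K, u) * l for (u, l) in area_measure(P))

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
tri = [(0, 0), (2, 0), (0, 2)]

assert abs(mixed_volume(square, square) - 1.0) < 1e-9   # V(P, P) = area(P)
assert abs(mixed_volume(tri, tri) - 2.0) < 1e-9
assert abs(mixed_volume(square, tri) - mixed_volume(tri, square)) < 1e-9
```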
For given $K_1,\ldots,K_{n-1}$, the mixed area measure $\mathop{\mathrm{S}}(K_1,\ldots,K_{n-1},\cdot)$ is the unique Borel measure on $\mathbb{S}^{n-1}$ such that [\[eqex\]](#eqex){reference-type="eqref" reference="eqex"} holds for all convex bodies $K_n$. As in the case of the mixed volume, also the mixed area measure can be extended as an $(n-1)$-linear map to differences of support functions (see again [@Schneider Sect. 5.2]). Then relation [\[eqex\]](#eqex){reference-type="eqref" reference="eqex"} remains true with convex bodies replaced by differences of support functions. If $B^n$ is the Euclidean unit ball, then $\mathop{\mathrm{supp}}\mathop{\mathrm{S}}({B^{n}}, \mathbfcal{C},\cdot)$ denotes the support of the mixed area measure of $B^n$ and $\mathbfcal{C}= (C_1,\ldots, C_{n-2})$, which is the complement of the largest open subset of $\mathbb{S}^{n-1}$ on which $\mathop{\mathrm{S}}({B^{n}}, \mathbfcal{C},\cdot)$ vanishes. **Theorem 3** ([@SvH23+ Thm. 14.9]). *Let $K, L \subset\mathbb{R}^n$ be convex bodies, and let $\mathbfcal{C}= (C_1, \ldots, C_{n-2})$ be a supercritical $(n-2)$-tuple of polytopes, zonoids or smooth convex bodies in $\mathbb{R}^n$ such that $\mathop{\mathrm{V}}(K, K, \mathbfcal{C}), \mathop{\mathrm{V}}(L, L, \mathbfcal{C}) > 0$. Then (AFI) holds with equality if and only if there are $a > 0$ and $x \in \mathbb{R}^n$ such that $h_K = h_{aL + x}$ on $\mathop{\mathrm{supp}}\mathop{\mathrm{S}}({B^{n}}, \mathbfcal{C},\cdot)$.* In the case where $C_1, \ldots, C_{n-2}$ are all smooth, the result was already known (see [@Schneider Thm. 7.6.8] and the comment after [@HugReichert23+ Thm. 1.2]). The main point of the preceding theorem is that a mixture of smooth and non-smooth bodies (which then are polytopes or zonoids) is admitted. Shenfeld and van Handel also characterized the much more involved equality cases for arbitrary tuples of polytopes $\mathbfcal{C}$. 
For their treatment of the zonoid case, the characterization theorem for polytopes is used as a crucial ingredient. Our main result is an extension of the preceding theorem where zonoids and polytopes are included in the larger class of polyoids. **Theorem 1**. *Let $K, L \subset\mathbb{R}^n$ be convex bodies, and let $\mathbfcal{C}= (C_1, \ldots, C_{n-2})$ be a supercritical $(n-2)$-tuple of polyoids or smooth convex bodies in $\mathbb{R}^n$.* 1. *If  $\mathop{\mathrm{V}}(K,L,\mathbfcal{C})=0$, then (AFI) holds with equality and $K,L$ are homothetic.* 2. *Let $\mathop{\mathrm{V}}(K,L,\mathbfcal{C})>0$. Then (AFI) holds with equality if and only if there are $a>0$ and $x\in\mathbb{R}^n$ such that $h_K=h_{aL+x}$ on $\mathop{\mathrm{supp}}\mathop{\mathrm{S}}({B^{n}},\mathbfcal{C},\cdot)$.* At the end of Section [2](#sec:2){reference-type="ref" reference="sec:2"}, we introduce a formally larger class of convex bodies (which we call macroids) for which the statement of Theorem [Theorem 1](#corfin){reference-type="ref" reference="corfin"} remains true. In [@Schneider1988 Sect. 4], Schneider established a characterization of the equality cases in the Alexandrov--Fenchel inequality for convex bodies $K,L$ and zonoids $C_1,\ldots,C_{n-2}$, under the additional assumption that $K,L$ are centrally symmetric and all bodies are full-dimensional. Schneider's characterization involves specific geometric information about $\mathbfcal{C}$, namely the closure of the set of all extremal normal vectors of the $(n-1)$-tuple $({B^{n}}, \mathbfcal{C})$. In contrast, Shenfeld and van Handel first characterize the equality cases in terms of the support of the mixed area measure $\mathop{\mathrm{supp}}\mathop{\mathrm{S}}({B^{n}}, \mathbfcal{C},\cdot)$ (without the assumption of central symmetry of $K,L$ and with a relaxed dimensionality assumption). 
Finally, they show that $\mathop{\mathrm{supp}}\mathop{\mathrm{S}}({B^{n}}, \mathbfcal{C},\cdot)$ equals the closure of the set of all extremal normal vectors of the $(n-1)$-tuple $({B^{n}}, \mathbfcal{C})$ (see [@SvH23+ Prop. 14.13]). According to a general conjecture due to Schneider [@Schneider Conjecture 7.6.14], for an arbitrary $(n-1)$-tuple of convex bodies $(C,\mathbfcal{C})$ the support of the mixed area measure $\mathop{\mathrm{supp}}\mathop{\mathrm{S}}(C, \mathbfcal{C},\cdot)$ is precisely the closure of the set of all extremal normal vectors of $(C, \mathbfcal{C})$. This conjecture is open even in the case where all bodies are zonoids (see [@Schneider1988 Sect. 4] for some discussion). For the application to the equality cases of the Alexandrov--Fenchel inequality, only the special case where $C={B^{n}}$ is required. In [@HugReichert23+], Schneider's conjecture concerning the support of mixed area measures is settled for the class of polyoids, which in particular covers the case where all $n-1$ bodies are general zonoids. In combination with the results of the present work, we thus obtain a geometric characterization of the equality cases in (AFI) not only for general convex bodies $K,L$ and zonoids, but for general convex bodies $K,L$ and the larger class of polyoids (and smooth bodies). The paper is structured as follows. In Section [2](#sec:2){reference-type="ref" reference="sec:2"} we deduce a representation result for the support functions of polyoids, stated as Corollary [Corollary 10](#thm:ptb-char){reference-type="ref" reference="thm:ptb-char"}, from a more general result concerning Minkowski classes generated by homothety invariant closed families of convex bodies. A related representation theorem for support functions of zonoids in terms of their generating measures is a well-known and versatile tool in the study of zonoids. 
We also introduce the larger class of macroids whose definition is motivated by Corollary [Corollary 10](#thm:ptb-char){reference-type="ref" reference="thm:ptb-char"}. In Section [3](#sec:3){reference-type="ref" reference="sec:3"} we start with a brief discussion of supercritical tuples of sets. Then we prepare the proof of Theorem [\[thm:supercritical\]](#thm:supercritical){reference-type="ref" reference="thm:supercritical"}, which is an equivalent version of Theorem [Theorem 1](#corfin){reference-type="ref" reference="corfin"}, that involves the mixed area measure of a difference of support functions. Our arguments are inspired by and partly based on the results by Shenfeld and van Handel [@SvH23+]. Theorems [\[thm:supercritical\]](#thm:supercritical){reference-type="ref" reference="thm:supercritical"} and [Theorem 1](#corfin){reference-type="ref" reference="corfin"} both hold within the formally larger class of macroids. In Appendix [4](#app:macroid-not-polyoid){reference-type="ref" reference="app:macroid-not-polyoid"} we construct a macroid that is not a polyoid. # Polyoids and beyond {#sec:2} In this section, we introduce the class of polyoids and establish a characterization theorem. Our definition is guided by the geometric definition of a zonoid as a limit of a sequence of zonotopes, where a zonotope is a finite Minkowski sum of segments. In the following, we work in Euclidean space $\mathbb{R}^n$ with scalar product $\langle\cdot\,,\cdot \rangle$ and norm $\|\cdot\|$. For a set $A\subseteq\mathbb{R}^n$, we set $A^\perp:=\{x\in\mathbb{R}^n\colon \langle x,a\rangle=0 \text{ for }a\in A\}$, the linear subspace orthogonal to the linear span of $A$, and $u^\perp:=\{u\}^\perp$ for $u\in\mathbb{R}^n$. The volume of the Euclidean unit ball $B^n$ is denoted by $\kappa_n$, its surface area is $\omega_n:=n\kappa_n$. 
If we write $\mathcal{H}^{n-1}$ for the $(n-1)$-dimensional Hausdorff measure in $\mathbb{R}^n$ and $\mathbb{S}^{n-1}$ for the unit sphere, then $\mathcal{H}^{n-1}(\mathbb{S}^{n-1})=\omega_n$ for $n\ge 1$. Most of the time we focus on $n\ge 2$, but almost all statements and definitions hold for $n\in\mathbb{N}_0$ (if properly interpreted). We write $\mathcal{K}^n$ for the set of nonempty compact convex sets in $\mathbb{R}^n$ and endow $\mathcal{K}^n$ with the Hausdorff metric. Elements of $\mathcal{K}^n$ are called convex bodies. A map $\varphi:\mathbb{R}^n\to\mathbb{R}^n$ is a dilatation if there is some $\lambda> 0$ such that $\varphi(x)=\lambda x$ for $x\in\mathbb{R}^n$. A homothety is a dilatation followed by a translation. For $k\in\mathbb{N}$ we set $[k]:=\{1,\ldots,k\}$. If $(E,\rho)$ is a metric space and $A\subseteq E$ is nonempty, then $\mathop{\mathrm{diam}}A:=\sup\{\rho(x,y) \;\colon x,y\in A\}\in [0,\infty]$ denotes the diameter of $A$. **Definition 2**. *For each $k\in\mathbb{N}$, let $\mathcal{P}^n_k\subset\mathcal{K}^n$ be the set of polytopes in $\mathbb{R}^n$ with at most $k$ vertices. Elements of $\mathcal{P}^n_k$ are called *$k$-topes*. A finite Minkowski sum of $k$-topes is called a *$k$-polyotope*.* **Remark 3**. *[\[thm:ktopeClosed\]]{#thm:ktopeClosed label="thm:ktopeClosed"} For any compact set $A\subset\mathbb{R}^n$, the set $\set*{ P \in \mathcal{P}^n_k \;\colon P \subseteq A }\subset\mathcal{K}^n$ is compact. Hence $\mathcal{P}^n_k$ is a countable union of compact subsets of $\mathcal{K}^n$ and thus a measurable subset of $\mathcal{K}^n$. It is convenient to consider the subspace $\sigma$-algebra on $\mathcal{P}^n_k$ which is induced by the Borel $\sigma$-algebra of $\mathcal{K}^n$.* Next we define a class of convex bodies which generalizes the class of zonoids and contains arbitrary polytopes. **Definition 4**. *Let $k \in \mathbb{N}$ and $K \in \mathcal{K}^n$. 
If $K\in\mathcal{K}^n$ is the limit of a sequence of $k$-polyotopes, then $K$ is called a *$k$-polyoid*. A convex body $K$ is called a *polyoid* (a *polyotope*) if it is a $k$-polyoid (a $k$-polyotope) for some $k\in\mathbb{N}$.* **Remark 5**. 1. *For a given $k\in\mathbb{N}$, the class of $k$-polyoids in $\mathbb{R}^n$ is a closed subset of $\mathcal{K}^n$. In the terminology of [@Schneider Sect. 3.5], the class of $k$-polyoids is the Minkowski class $\mathfrak{M}(\mathcal{P}^n_k)$ generated by $\mathcal{P}^n_k$.* 2. *A $1$-polyoid is just a singleton, a $2$-polyoid is a zonoid and a $3$-polyoid is a *triangle body*, as defined in [@Schneider p. 201] (or [@Schneider1996 Sect. 3]). Moreover, for a given polytope $P$ there is some integer $k\in\mathbb{N}$ (depending on $P$) such that $P$ is a $k$-polyotope and hence a $k$-polyoid.* 3. *Clearly, $\mathcal{P}^n_k\subseteq \mathcal{P}^n_{\ell}$ for $k\le\ell$. Hence any $k$-polyoid is an $\ell$-polyoid for $k\le \ell$. In particular, if $C_1,\ldots,C_{r}$ are polyoids in $\mathbb{R}^n$, for a fixed $r\in\mathbb{N}$, then there is some $k\in\mathbb{N}$ such that $C_1,\ldots,C_{r}$ are $k$-polyoids. Similar statements hold for polyotopes.* 4. *In $\mathbb{R}^2$ every centrally symmetric convex body is a 2-polyoid (a zonoid), and every convex body in $\mathbb{R}^2$ is a $3$-polyoid (a triangle body). The first fact is well-known (cf. [@Schneider Cor. 3.5.7]), the second follows from [@Schneider Thm. 3.2.14].* 5. *Let $n\ge 3$. If $k\in\mathbb{N}$ is fixed and $P_k^*$ is an indecomposable polytope in $\mathbb{R}^n$ with more than $k$ vertices, then it follows from [@Schneider Thm. 3.4.2] (see also [@Berg69 Thm. 4]) that $P_k^*$ is not approximable by the class $\mathcal{P}^n_k$. Hence $P_k^*$ is not a $k$-polyoid, but certainly $P_k^*$ is an $\ell$-polyoid, for some $\ell>k$. For instance, for each $k\ge 2$, there is some indecomposable $(k+1)$-tope (with triangular $2$-faces) which is not a $k$-polyoid.* 6. 
*The Minkowski sum of a triangle in $\mathbb{R}^2\times\{0\}$ and a $2$-dimensional ball in $\{0\}\times \mathbb{R}^2$ yields an example of a $3$-polyoid which is not a zonoid, not a polytope, and neither smooth nor strictly convex. It is clear from [@Schneider Cor. 3.5.12] that the class of $3$-polyoids is much larger than the class of zonoids.* 7. *For a given $k\in [n]$, Ricker [@Ricker81] calls a finite Minkowski sum of $r$-dimensional simplices with $r\in\{0,\ldots,k\}$ a $k$-zonotope. Each such $k$-zonotope is a particular $(k+1)$-polyotope, for $k\in[n]$. Ricker then defines a $k$-zonoid (for $k\in[n]$) as a limit of $k$-zonotopes and characterizes $k$-zonoids in terms of the ranges of $k$ vector measures, thus extending a known result for 1-zonoids (i.e., zonoids). A $3$-dimensional double pyramid over a triangle base is not a $k$-zonoid (as follows from [@Schneider Thm. 3.4.2]), for any $k\in \mathbb{N}$, but it is a $5$-polyotope.* 8. *Let $K$ be an $n$-dimensional convex cone which is not a polytope. Then $K$ is indecomposable by [@Sallee74 Thm. 2], hence [@Schneider Thm. 3.4.2] implies that $K$ is not a polyoid.* We now prepare the proof of Corollary [Corollary 10](#thm:ptb-char){reference-type="ref" reference="thm:ptb-char"}, which is an analogue for polyoids of a well-known result for zonoids (see [@Schneider Thm. 3.5.3] or [@Hug Thm. 4.13]). In the following, measurability in a topological space $E$ always refers to the Borel $\sigma$-algebra on $E$. Let $\mu$ be a finite measure on $E$, let $E_0\subseteq E$ be measurable and $\mu(E\setminus E_0)=0$. In this case we say that $\mu$ is supported in $E_0$. If $g:E_0\to\mathbb{R}$ is a bounded and measurable function, then the integral of $g$ over $E$ with respect to $\mu$ is defined by choosing any measurable extension of $g$ to $E$ (and clearly this is independent of the particular extension). 
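To see Definition 4 at work in the simplest nontrivial case $k=2$: Minkowski sums of many rotated segments ($2$-topes) are zonotopes whose support functions become nearly constant on the circle, so after rescaling they converge to a disc, the prototypical zonoid that is not a polytope. The quick numerical illustration below is not part of the paper; it only uses the Minkowski additivity of support functions and the fact that a centered segment $[-v/2,v/2]$ has $h(u)=|\langle v,u\rangle|/2$.

```python
import math

# Illustration (not from the paper): a Minkowski sum of m rotated unit
# segments (2-topes) -- a zonotope -- approximates a disc as m grows.
m = 200
dirs = [(math.cos(math.pi * i / m), math.sin(math.pi * i / m)) for i in range(m)]

def h_zonotope(u):
    # Support functions are Minkowski additive, and a centered segment
    # [-v/2, v/2] has h(u) = |<v, u>| / 2.
    return 0.5 * sum(abs(v[0] * u[0] + v[1] * u[1]) for v in dirs)

# Sample h on the unit circle; for a disc of radius r, h is constant = r.
samples = [h_zonotope((math.cos(2 * math.pi * j / 1000),
                       math.sin(2 * math.pi * j / 1000))) for j in range(1000)]
spread = (max(samples) - min(samples)) / min(samples)
assert spread < 1e-3                                      # h nearly constant,
assert abs(min(samples) - m / math.pi) / (m / math.pi) < 1e-2
# so the zonotope is close to a disc of radius about m / pi.
```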
The next lemma follows from the fact (applied with $S=E_0$) that if $S$ is a separable metric space, then probability measures with finite support on $S$ are weakly dense in the probability measures on $S$ (see the discussion on pages 72-73 of [@Billingsley], Appendix III, Thm. 4 and the discussion after Thm. 5 on page 239 of the first edition (1968) of [@Billingsley] or [@Varadarajan1958 Thm. 3]). **Lemma 6**. *Let $(E,\rho)$ be a separable metric space and $E_0\subseteq E$ a measurable subset. Let $\mu$ be a finite Borel measure on $E$ with $\mu(E\setminus E_0)=0$. Then there is a sequence of discrete Borel measures $\mu_j$, $j\in\mathbb{N}$, on $E$ with $\mu_j(E) = \mu(E)$ and $\mu_j(E\setminus E_0)=0$ such that if $g \colon E_0\to\mathbb{R}$ is continuous and bounded, then $$\label{eqneu1} \lim_{j\to\infty}\int g \mathop{}\!\mathrm{d}\mu_j =\int g \mathop{}\!\mathrm{d}\mu.$$* Let $\mathcal{K}_*$ be a Borel subset of $\mathcal{K}^n$. In the following, we always assume that $\mathcal{K}_*\neq\varnothing$. With the restriction of the Hausdorff metric, $\mathcal{K}_*$ is a separable metric space whose Borel $\sigma$-algebra coincides with the subspace $\sigma$-algebra induced on $\mathcal{K}_*$. In particular, we will be interested in the cases of the homothety invariant classes $\mathcal{P}^n$, the set of polytopes in $\mathbb{R}^n$, and the subclass $\mathcal{P}^n_k$ which is closed in $\mathcal{K}^n$. For a Borel measure $\nu$ on a separable metric space $E$, the support of $\nu$ is the complement of the largest open set on which $\nu$ vanishes and denoted by $\mathop{\mathrm{supp}}\nu$. Thus $\mathop{\mathrm{supp}}\nu$ is a closed set. If $\nu$ is a finite Borel measure on $\mathcal{K}_*$ with bounded support, then $\mathop{\mathrm{supp}}\nu$ is closed in $\mathcal{K}_*$ (but not compact in general). If $\mathcal{K}_*$ is closed in $\mathcal{K}^n$ and $\mathop{\mathrm{supp}}\nu$ is bounded, then $\mathop{\mathrm{supp}}\nu$ is compact. 
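Lemma 6 expresses a familiar fact: a finite Borel measure can be weakly approximated by discrete measures of the same total mass supported in $E_0$. A toy instance (illustrative only, not from the paper): take $E=E_0=[0,1]$, $\mu$ Lebesgue measure, and $\mu_j$ the uniform discrete measure on the $j$ interval midpoints $(i+\frac12)/j$; then $\int g\mathop{}\!\mathrm{d}\mu_j\to\int g\mathop{}\!\mathrm{d}\mu$ for continuous bounded $g$, as in [\[eqneu1\]](#eqneu1){reference-type="eqref" reference="eqneu1"}.

```python
# Toy instance of the weak approximation in Lemma 6 (illustrative only):
# mu = Lebesgue measure on E = E_0 = [0, 1], and mu_j the uniform discrete
# measure placing mass 1/j on each of the j interval midpoints.
def discrete_integral(g, j):
    return sum(g((i + 0.5) / j) for i in range(j)) / j

g = lambda x: x * x            # continuous and bounded on [0, 1]
exact = 1.0 / 3.0              # integral of x^2 over [0, 1]

errors = [abs(discrete_integral(g, j) - exact) for j in (10, 100, 1000)]
assert errors[0] > errors[1] > errors[2]   # the approximation improves with j
assert errors[2] < 1e-6
```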
If $\mathcal{K}_*$ is a closed and homothety invariant class of convex bodies (hence containing all singletons), then the Minkowski class $\mathfrak{M}(\mathcal{K}_*)$ consists of all finite Minkowski sums of convex bodies from $\mathcal{K}_*$ and all convex bodies in their closure. Next we define the *positive hull* of the support of a measure on $\mathcal{K}_*$. **Definition 7**. * Let $\mu$ be a probability measure on a Borel set $\varnothing\neq\mathcal{K}_*\subseteq\mathcal{K}^n$. Then $$\mathop{\mathrm{pos}}\mu \coloneqq \set*{ \sum_{i=1}^N \lambda_i L_i \;\colon N \in \mathbb{N}_0, \forall i \in [N]\colon \lambda_i \ge 0, L_i \in \mathop{\mathrm{supp}}\mu }$$ denotes the set of nonnegative (finite) Minkowski combinations of convex bodies in $\mathop{\mathrm{supp}}\mu$, where $\mathop{\mathrm{supp}}\mu$ is defined with respect to the metric space $\mathcal{K}_*$. The empty sum is defined as $\{0\}$. If $\mu_1, \ldots, \mu_\ell$ are probability measures on $\mathcal{K}_*$, then $$\mathop{\mathrm{pos}}\prn*{\mu_1, \ldots, \mu_\ell} \coloneqq \mathop{\mathrm{pos}}\mu_1 \times \cdots \times \mathop{\mathrm{pos}}\mu_\ell$$ is the set of $\ell$-tuples with components in $\mathop{\mathrm{pos}}\mu_1, \ldots, \mathop{\mathrm{pos}}\mu_\ell$, respectively.* We provide a simple lemma. As usual, empty sums are interpreted as $0$ (or $\{0\}$ if sets in $\mathbb{R}^n$ are concerned). Recall that $\kappa_n$ is the volume of the unit ball $B^n$ and $\omega_n=n\kappa_n$ denotes its surface area. The mean width of a convex body $K\in\mathcal{K}^n$ can be expressed in the form $$w(K)=\frac{2}{\omega_n}\int_{\mathbb{S}^{n-1}} h_K(u)\, \mathcal{H}^{n-1}(\mathop{}\!\mathrm{d}u)\ge 0$$ with $w(K)>0$ if and only if $\dim K\ge 1$. **Lemma 8**. *Let $\ell,n\in\mathbb{N}_0$ and $A_1, \ldots, A_\ell\in\mathcal{K}^n$. 
Then $$\label{eq:const} \sum_{i=1}^\ell \mathop{\mathrm{diam}}A_i \le \sqrt{\pi}n\, \mathop{\mathrm{diam}}\sum_{i=1}^\ell A_i .$$* *Proof.* For the proof, we can focus on $n\ge 2$. Let $w(A)$ denote the mean width of $A\in\mathcal{K}^n$. Jung's inequality (or an obvious bound with $\sqrt{2}$ replaced by $2$) implies that $w(A)\le \sqrt{2}\mathop{\mathrm{diam}}A$. Moreover, since $A$ contains a segment of length $\mathop{\mathrm{diam}}A$, we have $2\mathop{\mathrm{diam}}A\le \omega_n\kappa_{n-1}^{-1} w(A)$. Since the mean width is Minkowski additive, we get $$\begin{aligned} \sum_{i=1}^\ell \mathop{\mathrm{diam}}A_i&\le \frac{1}{2}\frac{\omega_n}{\kappa_{n-1}}\sum_{i=1}^\ell w(A_i)=\frac{1}{2}\frac{\omega_n}{\kappa_{n-1}}w\left(\sum_{i=1}^\ell A_i\right) \le \frac{\omega_n}{\kappa_{n-1}}\mathop{\mathrm{diam}}\sum_{i=1}^\ell A_i.\end{aligned}$$ The assertion follows since the Gamma function is increasing on $[1.5,\infty)$. ◻ The representation [\[eq:refprokgen\]](#eq:refprokgen){reference-type="eqref" reference="eq:refprokgen"} in the following theorem can be viewed as a specific version of Choquet's integral representation theorem (see [@Phelps] or [@LMNS10 Thm. 3.45 and Chap. 7]), if combined with [@Schneider Thm. 3.4.2] (see also [@Berg69 Thm. 4]). Thus it follows that the measure $\mu$ in Theorem [Theorem 9](#thm:ptb-chargeneral){reference-type="ref" reference="thm:ptb-chargeneral"} (b) can be chosen such that it is supported by the indecomposable convex bodies in $\mathcal{K}_*$. We provide a direct argument for both directions of the following equivalence. The special case $\mathcal{K}_*=\mathcal{P}^n_k$ provides a characterization of $k$-polyoids and is stated as Corollary [Corollary 10](#thm:ptb-char){reference-type="ref" reference="thm:ptb-char"}. In the following, we always assume that $\varnothing\neq \mathcal{K}_*\subseteq \mathcal{K}^n$ is Borel measurable. **Theorem 9**. 
*Let $\varnothing\neq\mathcal{K}_*\subseteq\mathcal{K}^n$, $n\in\mathbb{N}_0$, be a homothety invariant closed class of convex bodies. Then the following are equivalent.* 1. *$K\in\mathfrak{M}(\mathcal{K}_*)$.[\[it:pc1gen\]]{#it:pc1gen label="it:pc1gen"}* 2. *There is a probability measure $\mu$ on $\mathcal{K}_*$ with bounded support such that $$\label{eq:refprokgen} h_K = \int h_L \, \mu(\mathop{}\!\mathrm{d}L).$$[\[it:pc3gen\]]{#it:pc3gen label="it:pc3gen"}* *If ([\[it:pc3gen\]](#it:pc3gen){reference-type="ref" reference="it:pc3gen"}) holds, then $K$ is the limit of a sequence in $\mathop{\mathrm{pos}}\mu$.* *Proof.* The assertion is clear for $n=0$ or if $\mathop{\mathrm{diam}}K=0$. Hence, we can assume that $n\ge 1$ and $\mathop{\mathrm{diam}}K>0$ in the following. "([\[it:pc1gen\]](#it:pc1gen){reference-type="ref" reference="it:pc1gen"}) $\implies$ ([\[it:pc3gen\]](#it:pc3gen){reference-type="ref" reference="it:pc3gen"})": Without loss of generality, $0 \in K\in\mathfrak{M}(\mathcal{K}_*)$. Hence $K = \lim_{\ell\to\infty} Q_\ell$, where $Q_\ell = \sum_{i=1}^{m_\ell} Q_\ell^{(i)}$ with $Q_\ell^{(i)}\in\mathcal{K}_*$, $m_{\ell}>0$ and $\mathop{\mathrm{diam}}Q_\ell^{(i)}>0$. There are points $x_\ell \in Q_\ell$ and $x_\ell^{(i)} \in Q_\ell^{(i)}$ with $x_\ell = \sum_{i=1}^{m_\ell} x_\ell^{(i)}$ such that $x_\ell \to 0$ as $\ell\to\infty$. Setting $P_\ell \coloneqq Q_\ell - x_\ell$ and $P_\ell^{(i)} \coloneqq Q_\ell^{(i)} - x_\ell^{(i)}\in \mathcal{K}_*$ for $\ell\in\mathbb{N}$ and $i \in [m_\ell]$, we have $$K = \lim_{\ell\to\infty} P_\ell = \lim_{\ell\to\infty} \sum_{i=1}^{m_\ell} P_\ell^{(i)},\quad 0 \in P_\ell^{(i)}\in\mathcal{K}_*\quad\text{and}\quad \mathop{\mathrm{diam}}P_\ell^{(i)} > 0.$$ The sequence $(\mathop{\mathrm{diam}}P_\ell)_\ell$ is bounded by some constant $d \in (0, \infty)$.
For $\ell\in\mathbb{N}$ and $i \in [m_\ell]$, define positive numbers $$d_\ell \coloneqq \mathop{\mathrm{diam}}P_\ell, \quad d_\ell^{(i)} \coloneqq \mathop{\mathrm{diam}}P_\ell^{(i)}, \quad e_\ell \coloneqq \sum_{i=1}^{m_\ell} d_\ell^{(i)}, \quad c_\ell^{(i)} \coloneqq \frac{e_\ell}{d_\ell^{(i)}} ,$$ and discrete probability measures $\mu_\ell$ on $\mathcal{K}_*$ by $$\mu_\ell \coloneqq \sum_{i=1}^{m_\ell} \frac{1}{c_\ell^{(i)}} \delta_{c_\ell^{(i)} P_\ell^{(i)}}, \quad\text{noting that } \sum_{i=1}^{m_\ell} \frac{1}{c_\ell^{(i)}} = 1 .$$ By construction and basic properties of support functions, $$h_{P_\ell} = \int h_P \,\mu_\ell(\mathop{}\!\mathrm{d}P) .$$ If $P \in \mathop{\mathrm{supp}}\mu_\ell$, then $P = c_\ell^{(i)} P_\ell^{(i)}$ for some $i \in [m_\ell]$, hence $0 \in P$ and, by Lemma [Lemma 8](#thm:diamSum){reference-type="ref" reference="thm:diamSum"}, $$\mathop{\mathrm{diam}}P = c_\ell^{(i)} d_\ell^{(i)} = e_\ell \le \sqrt{\pi}n\,d_\ell \le \sqrt{\pi}n\,d,$$ so that $\mathop{\mathrm{supp}}\mu_\ell \subseteq S \coloneqq \set*{ L \in\mathcal{K}_*\;\colon L \subseteq \sqrt{\pi}nd{B^{n}}}$. Since $\mathcal{K}_*$ is closed in $\mathcal{K}^n$, $S$ is compact. Thinking of the measures $\mu_\ell$ as measures on $S$ (with the restriction of the Hausdorff metric), a special case of Prokhorov's theorem [@Billingsley pp. 57--59] yields a subsequence $(\mu_{\ell_s})_s$ of $(\mu_\ell)_\ell$ that weakly converges to some probability measure $\mu$ on $\mathcal{K}_*$ which is also compactly supported in $S$. Hence, for all $u \in \mathbb{R}^n$, $$h_K(u) = \lim_{s\to\infty} h_{P_{\ell_s}}(u) = \lim_{s\to\infty} \int h_P(u) \,\mu_{\ell_s}(\mathop{}\!\mathrm{d}P) = \int h_P(u)\, \mu(\mathop{}\!\mathrm{d}P),$$ since $P\mapsto h_P(u)$ is continuous and bounded on $S$. So $\mu$ has the desired property. 
"([\[it:pc3gen\]](#it:pc3gen){reference-type="ref" reference="it:pc3gen"}) $\implies$ ([\[it:pc1gen\]](#it:pc1gen){reference-type="ref" reference="it:pc1gen"})": Let $\mu$ be a probability measure on $\mathcal{K}_*$ with bounded support such that [\[eq:refprokgen\]](#eq:refprokgen){reference-type="eqref" reference="eq:refprokgen"} holds. Let $E_0$ denote the support of $\mu$ with respect to the metric space $E=\mathcal{K}_*$. According to Lemma [Lemma 6](#thm:discreteApprox){reference-type="ref" reference="thm:discreteApprox"}, $\mu$ is the weak limit of a sequence $\mu_\ell$ of discrete probability measures on $\mathcal{K}_*$ supported in $\mathop{\mathrm{supp}}\mu$. For all $\ell\in\mathbb{N}$, we define $K_\ell \in \mathcal{K}^n$ by $$h_{K_\ell} = \int h_P \, \mu_\ell(\mathop{}\!\mathrm{d}P).$$ By construction, $K_\ell \in \mathop{\mathrm{pos}}\mu_\ell$ is a finite sum of convex bodies in $\mathcal{K}_*$. Since $\mathop{\mathrm{supp}}\mu$ is bounded, the function $P\mapsto h_P(u)$, $P\in E_0$, is bounded and continuous, for each $u\in \mathbb{R}^n$. Hence Lemma [Lemma 6](#thm:discreteApprox){reference-type="ref" reference="thm:discreteApprox"} ensures that, for each $u\in\mathbb{R}^n$, $$h_{K_\ell}(u) = \int h_P(u) \,\mu_\ell(\mathop{}\!\mathrm{d}P) \to \int h_P(u) \,\mu(\mathop{}\!\mathrm{d}P) = h_K(u) \quad (\ell\to\infty).$$ This shows that $K_\ell \to K$ as $\ell\to\infty$ (with respect to the Hausdorff metric). ◻ **Corollary 10**. *Let $K$ be a convex body in $\mathbb{R}^n$, $n\in\mathbb{N}_0$ and $k \in\mathbb{N}$. Then the following are equivalent.* 1. *$K$ is a $k$-polyoid.[\[it:pc1\]]{#it:pc1 label="it:pc1"}* 2. *There is a probability measure $\mu$ on $\mathcal{P}^n_k$ with compact support such that $$\label{eq:refprok} h_K = \int h_P \, \mu(\mathop{}\!\mathrm{d}P).$$[\[it:pc3\]]{#it:pc3 label="it:pc3"}* *If ([\[it:pc3\]](#it:pc3){reference-type="ref" reference="it:pc3"}) holds, then $K$ is the limit of a sequence in $\mathop{\mathrm{pos}}\mu$.* **Remark 11**.
* In view of Corollary [Corollary 10](#thm:ptb-char){reference-type="ref" reference="thm:ptb-char"}, a probability measure $\mu$ on $\mathcal{P}^n_k$ with compact support in $\mathcal{P}^n_k$ satisfying [\[eq:refprok\]](#eq:refprok){reference-type="eqref" reference="eq:refprok"} is called a *generating measure* of the $k$-polyoid $K$. Generating measures of polyoids are not uniquely determined (compare [@Schneider Rem. 3.2.15]). In the following, we will only use that for a given polyoid a generating measure exists. * **Example 12**. * We describe the non-uniqueness by a simple example. Let $e_1,e_2\in\mathbb{R}^2$ be the standard basis vectors. Consider the intervals $I_1:=[0,e_1]$, $I_2:=[0,e_2]$ and $I_3:=[0,e_1+e_2]$. Let $\mathop{\mathrm{conv}}$ denote the convex hull operator. Let $$P_1:=\mathop{\mathrm{conv}}(I_1\cup\{e_1+e_2\})\quad \text{and}\quad P_2:=\mathop{\mathrm{conv}}(I_2\cup\{e_1+e_2\}).$$ Then $$\mu_1:=\frac{1}{2}\left(\delta_{I_2+I_3}+ \delta_{I_1}\right),\quad \mu_2:=\frac{1}{2}\left(\delta_{I_1+I_2}+\delta_{I_3}\right) \quad\text{and}\quad \mu_3:=\frac{1}{2}\left(\delta_{P_1}+ \delta_{P_2}\right)$$ are three generating measures of the $4$-polyoid $P:=\frac{1}{2}(I_1+I_2+I_3)$, which in fact is also a zonoid (zonotope) with generating measure $$\mu_4:=\frac{1}{3} \left(\delta_{\frac{3}{2}I_1}+ \delta_{\frac{3}{2}I_2}+ \delta_{\frac{3}{2}I_3}\right).$$ By adding to $P$ a suitable triangle, we get a $3$-polyoid which is not a zonoid and has two different generating measures.
In the plane, examples of non-uniqueness can be easily constructed using Minkowski's existence theorem for polygons and the Minkowski additivity of the first area measure.* Corollary [Corollary 10](#thm:ptb-char){reference-type="ref" reference="thm:ptb-char"} shows that polyoids can be characterized via the integral representation [\[eq:refprok\]](#eq:refprok){reference-type="eqref" reference="eq:refprok"} and as limits of sequences in the positive hull of a generating measure of the polyoid. The arguments in Section [3](#sec:3){reference-type="ref" reference="sec:3"} are based on both types of description. In the following lemma, we show that a convex body whose support function is given by a more general integral representation is still the limit of a sequence of polytopes in the positive hull of a generating measure on $\mathcal{P}^n$. The lemma suggests the definition of a class of convex bodies that we will call macroids in Definition [Definition 14](#def:Macroid){reference-type="ref" reference="def:Macroid"}. The argument for the implication "(b) $\implies$ (a)" of Theorem [Theorem 9](#thm:ptb-chargeneral){reference-type="ref" reference="thm:ptb-chargeneral"} does not use any specific properties of the measurable subclass $\mathcal{K}_*\subseteq\mathcal{K}^n$. Therefore we have the following lemma. Finally, we will choose $\mathcal{K}_*=\mathcal{P}^n$. **Lemma 13**. *Let $\varnothing\neq\mathcal{K}_*\subseteq\mathcal{K}^n$ be a Borel set, $n\in\mathbb{N}_0$. Suppose that $\mu$ is a probability measure on $\mathcal{K}_*$ with bounded support. Let $K\in\mathcal{K}^n$ be defined by $$\label{eq:refprokb} h_K = \int h_P \, \mu(\mathop{}\!\mathrm{d}P).$$ Then $K$ is the limit of a sequence in $\mathop{\mathrm{pos}}\mu$.* **Definition 14** (Macroids). *Let $\varnothing\neq\mathcal{K}_*\subseteq\mathcal{K}^n$ be a Borel set.
A convex body $K$ in $\mathbb{R}^n$, $n\in\mathbb{N}_0$, for which there is a probability measure $\mu$ on $\mathcal{K}_*$ with bounded support such that [\[eq:refprokb\]](#eq:refprokb){reference-type="eqref" reference="eq:refprokb"} holds, is called a $\mathcal{K}_*$-*macroid* with generating measure $\mu$. If $\mathcal{K}_*=\mathcal{P}^n$, we call $K$ a macroid with generating measure $\mu$.* **Remark 15**. *Suppose that $K$ is a $\mathcal{K}_*$-macroid with generating measure $\mu$ on $\mathcal{K}_*$. We may extend $\mu$ trivially to all of $\mathcal{K}^n$. Then $\mu$ is a probability measure with bounded support (by definition) and (by Fubini's theorem) $$w(K)=\int w(Q)\, \mu(\mathop{}\!\mathrm{d}Q)<\infty.$$ The assumption that $\mu$ is a probability measure is not restrictive. To see this, note that if $\widetilde{\mu}$ is a Borel measure on $\mathcal{K}^n$ with $|\widetilde{\mu}|:=\widetilde{\mu}(\mathcal{K}^n)\in (0,\infty)$ and if $\widetilde{\mu}$ has bounded support, then $$\int h_Q\, \widetilde{\mu}(\mathop{}\!\mathrm{d}Q)=\int h_Q\, \mu(\mathop{}\!\mathrm{d}Q),$$ where $\mu(\mathcal{A}):=|\widetilde{\mu}|^{-1}\widetilde{\mu}\left(|\widetilde{\mu}|^{-1}\mathcal{A}\right)$, for Borel sets $\mathcal{A}\subseteq \mathcal{K}^n$, defines a probability measure with bounded support.* *For the present purpose, we could also replace the assumption of bounded support by an integrability assumption. To explain this statement, let $\mu$ be a Borel probability measure on $\mathcal{K}^n$ such that $$0<\int w(Q)\, \mu(\mathop{}\!\mathrm{d}Q)<\infty.$$ The Steiner point of $K\in \mathcal{K}^n$ is defined by $$s(K):=\frac{1}{\kappa_n}\int_{\mathbb{S}^{n-1}} h_K(u)u\, \mathcal{H}^{n-1}(\mathop{}\!\mathrm{d}u)$$ and satisfies $s(K)\in \text{relint}\, K$ (see [@Schneider Sect. 1.7.1]).
Fubini's theorem yields $$s(K)=\int s(Q)\, \mu(\mathop{}\!\mathrm{d}Q).$$ Therefore we obtain $$h_{\frac{K-s(K)}{w(K)}}=\int h_P\, \mu^*(\mathop{}\!\mathrm{d}P),$$ where $$\mu^*(\mathcal{A}):=\frac{1}{w(K)}\int \mathbf{1}\left\{\frac{Q-s(Q)}{w(Q)}\in \mathcal{A}\right\}w(Q)\, \mu(\mathop{}\!\mathrm{d}Q),$$ for Borel sets $\mathcal{A}\subseteq \mathcal{K}^n$, is a probability measure concentrated on $$\mathcal{K}^n_{0,1}:=\{L\in \mathcal{K}^n\colon w(L)=1,s(L)=0\}.$$ In particular, $\mu^*$ has bounded support.* **Remark 16**. *Each polyoid is a macroid, but not every macroid is a polyoid; for an example, see Appendix [4](#app:macroid-not-polyoid){reference-type="ref" reference="app:macroid-not-polyoid"}. An explicit geometric characterization of the class of polyoids within the class of macroids remains to be discovered.* **Remark 17**. * An obvious motivation for introducing macroids is that Theorem [Theorem 1](#corfin){reference-type="ref" reference="corfin"} is in fact true for the strictly larger class of macroids. An explicit example of a convex body that is not a macroid is provided by a circular cone. This follows from Proposition [Proposition 18](#prop:indemacro){reference-type="ref" reference="prop:indemacro"}. * **Proposition 18**. *Let $K \in \mathcal{K}^n$ be an indecomposable macroid. Then $K$ is a polytope.* *Proof.* We may assume that $\dim K>0$ and $$h_K=\int h_Q\, \mu(\mathop{}\!\mathrm{d}Q),$$ where $\mu$ is a Borel probability measure on $\mathcal{P}^n$ with bounded support. By Fubini's theorem, we have $$w(K)=\int w(Q)\, \mu(\mathop{}\!\mathrm{d}Q),$$ hence there is some $P\in \mathop{\mathrm{supp}}\mu$ with $w(P)>0$ (that is, $\dim P>0$) and $\mu(B(P,1/k))>0$ for all $k\in\mathbb{N}$, where $B(P,1/k)$ denotes a closed ball around $P$ with radius $1/k$ in $\mathcal{P}^n$ (or in $\mathcal{K}^n$) with respect to the Hausdorff metric $d$ on $\mathcal{K}^n$ (or its restriction to a subset).
For $k\in\mathbb{N}$, the convex body $K_k\in\mathcal{K}^n$ is defined by $$h_{K_k} \coloneqq \frac{1}{\mu(B(P, 1/k))} \int_{B(P, 1/k)} h_Q\, \mu(\mathop{}\!\mathrm{d}Q)$$ and satisfies $w(K_k)>0$. Then clearly $K_k\to P$ as $k\to\infty$ (with respect to the Hausdorff metric). Moreover, if $L_k\in\mathcal{K}^n$ is given by $$h_{L_k}:=\int_{B(P, 1/k)^\complement}h_Q\, \mu(\mathop{}\!\mathrm{d}Q),$$ then $$\mu(B(P,1/k))K_k+L_k=K.$$ Since $K$ is indecomposable and $\dim K_k>0$, it follows that $K=c(k)K_k+x_k$, where $$c(k)=\frac{w(K)}{w(K_k)}\quad\text{and}\quad x_k\in\mathbb{R}^n.$$ Since $K_k\to P$, we have $c(k)\to w(K)/w(P)>0$ and $x_k\to x_0\in\mathbb{R}^n$, as $k\to\infty$. Thus we arrive at $K=w(P)^{-1}w(K)P+x_0$, which shows that $K$ is a polytope. ◻ **Remark 19**. *Various types of mean section or projection bodies have been studied in integral and stochastic geometry. Starting from a convex body $K\subset\mathbb{R}^n$, the support function of a new mean body is defined as an integral average of the support functions of sections or projections of $K$, which is precisely the principle by which macroids are defined; see [@GW98; @GW12; @GW14; @GHW17] and the literature cited there.* *Another special case of definition [\[eq:refprokb\]](#eq:refprokb){reference-type="eqref" reference="eq:refprokb"} is the convolution $\widetilde{\mu}\ast h_K$ of a probability (or finite) measure $\widetilde{\mu}$ on the rotation group ${\rm SO}_n$ and the support function of a fixed convex body $K\in\mathcal{K}^n$, as considered in [@GZ99 Sects. 2 and 5]. 
In our notation, this reads $$\begin{aligned} (\widetilde{\mu}\ast h_K)(u)&=\int h_K(\rho^{-1} u)\, \widetilde{\mu}(\mathop{}\!\mathrm{d}\rho) =\int h_L(u)\, f_K(\widetilde{\mu})(\mathop{}\!\mathrm{d}L), \end{aligned}$$ where $f_K(\widetilde{\mu})$ is the image measure of $\widetilde{\mu}$ under the map $f_K:{\rm SO}_n\to\mathcal{K}^n$, $\rho\mapsto \rho K$.* *A general definition of a convex body as an integral average with respect to some measure on a suitable index set has been anticipated by Wolfgang Weil in [@Weil76 (1)], but then only the special case of zonoids has been explored in [@Weil76].* **Remark 20**. *Let $u\in\mathbb{S}^{n-1}$. If $K$ is a macroid with generating measure $\mu$, then the support set $F(K,u)$ of $K$ is a macroid with generating measure $F_u(\mu)$, where $F_u:\mathcal{P}^n\to\mathcal{P}^n$, $P\mapsto F(P,u)$, is measurable and $F_u(\mu)$ is the image measure of $\mu$ under the map $F_u$, that is, $$\label{eqsupportset} h_{F(K,u)}=\int h_{F(P,u)}\, \mu(\mathop{}\!\mathrm{d}P)= \int h_Q\, F_u(\mu)(\mathop{}\!\mathrm{d}Q).$$ The measurability of $F_u$ follows from [@SW Thm. 12.2.6 (a) and Thm. 12.3.2], since $F(K,u)=K\cap H(K,u)$, where $H(K,u)=u^\perp +h(K,u)u$ clearly depends continuously on $K$. Furthermore, note that $h_{F(K,u)}(x)=h_K'(u;x)$ by [@Schneider Thm. 1.7.2], for $x\in\mathbb{R}^n$. Since $t^{-1}|h_L(u+tx)-h_L(u)|\le R\|x\|$, for $t>0$ and $L\in \mathop{\mathrm{supp}}\mu\subseteq RB^n$ (and some $R>0$), the assertion follows from the dominated convergence theorem. * # The characterization theorem {#sec:3} We start by recalling various concepts of criticality for finite sequences of subsets of $\mathbb{R}^n$. Recall that the cardinality of a finite set $I$ is denoted by $|I|$. For a nonempty set $A\subseteq\mathbb{R}^n$, let $\mathop{\mathrm{span}}A$ denote the (linear) span of $A$ and $\mathop{\mathrm{\overline{span}}}A=\mathop{\mathrm{span}}(A-A)$ the linear subspace parallel to the smallest affine subspace containing $A$.
By the dimension $\dim A\in\{0,\ldots,n\}$ of a set $A\neq\varnothing$ we mean the dimension of its affine span. **Definition 21**. *Let $\mathbfcal{A} = (A_1, \ldots, A_\ell)$, $\ell\in\mathbb{N}_0$, be a tuple of nonempty subsets of $\mathbb{R}^n$. Then $\mathbfcal{A}$ is called* 1. **supercritical* if $\dim \mathop{\mathrm{\overline{span}}}\sum_{i\in I} A_i\ge \abs{I} + 2$ for all $\varnothing \ne I \subseteq \set*{1, \ldots, \ell}$.* 2. **critical* if $\dim \mathop{\mathrm{\overline{span}}}\sum_{i\in I} A_i\ge \abs{I} + 1$ for all $\varnothing \ne I \subseteq \set*{1, \ldots, \ell}$.* 3. **semicritical* if $\dim \mathop{\mathrm{\overline{span}}}\sum_{i\in I} A_i\ge \abs{I}$ for all $\varnothing \ne I \subseteq \set*{1, \ldots, \ell}$.* Note that here we deviate from the terminology used in [@SvH23+ Sect. 12], where a tuple of convex bodies satisfying (iii) in Definition [Definition 21](#Def:critical){reference-type="ref" reference="Def:critical"} is called subcritical instead of semicritical. (Instead we reserve the notion of a subcritical tuple of sets for one that is not critical; see [@HugReichert23+]). The various notions of criticality introduced above have useful properties some of which are discussed below. Each of the three notions is preserved by passing to a subtuple, taking permutations of the given tuple, replacing all sets by the same affine transformation or by individual translations, or if the sets are replaced by supersets. Supercriticality implies criticality, which in turn implies semicriticality. The empty tuple is supercritical. Moreover, if all sets in an $\ell$-tuple $\mathbfcal{A}$ are full-dimensional, then $\mathbfcal{A}$ is supercritical if and only if $\ell \le n-2$ or $\ell=0$ (that is, $\mathbfcal{A}$ is the empty tuple). **Lemma 22**. *Let $\ell \in \mathbb{N}$ and $\mathbfcal{A} = (A_1, \ldots, A_\ell)$ be a tuple of nonempty sets in $\mathbb{R}^n$.* 1. 
*Let $\mathbfcal{A}$ be critical and $A_{\ell + 1} \subseteq \mathbb{R}^n$ be nonempty. Then $(A_1, \ldots, A_{\ell + 1})$ is semicritical if and only if $\dim\mathop{\mathrm{\overline{span}}}A_{\ell + 1}\ge 1$.[\[it:critSimple1\]]{#it:critSimple1 label="it:critSimple1"}* 2. *Let $\mathbfcal{A}$ be supercritical and $A_{\ell + 1} \subseteq \mathbb{R}^n$ be nonempty. Then $(A_1, \ldots, A_{\ell+1})$ is critical if and only if $\dim\mathop{\mathrm{\overline{span}}}A_{\ell + 1}\ge 2$.[\[it:critSimple2\]]{#it:critSimple2 label="it:critSimple2"}* *Proof.* (a) Suppose that $A_{\ell + 1}$ has dimension at least one. Let $I\subseteq [\ell+1]$ be nonempty. We distinguish three cases. If $I\subseteq [\ell]$, then $\dim\mathop{\mathrm{\overline{span}}}\sum_{i\in I}A_i\ge |I|+1\ge |I|$, since $\mathbfcal{A}$ is critical. If $I=\{\ell +1\}$, then $\dim\mathop{\mathrm{\overline{span}}}\sum_{i\in I}A_i\ge 1=|I|$, since $\dim\mathop{\mathrm{\overline{span}}}A_{\ell + 1}\ge 1$. If $I=J\cup \{\ell +1\}$ and $\varnothing \neq J\subseteq [\ell]$, then $$\dim\mathop{\mathrm{\overline{span}}}\sum_{i\in I}A_i\ge\dim\mathop{\mathrm{\overline{span}}}\sum_{i\in J}A_i\ge |J|+1=|I|,$$ where we used again that $\mathbfcal{A}$ is critical. Clearly, if $(A_1, \ldots, A_{\ell + 1})$ is semicritical then $\dim\mathop{\mathrm{\overline{span}}}A_{\ell + 1}\ge 1$. The proof of (b) is similar. ◻ The following lemma connects semicriticality of an $n$-tuple of convex bodies to the positivity of the mixed volume of these convex bodies (see [@Schneider Theorem 5.1.8]). **Lemma 23**. *Let $\mathbfcal{C}=(K_1,\ldots,K_n)$ be an $n$-tuple of convex bodies in $\mathbb{R}^n$. Then $\mathbfcal{C}$ is semicritical if and only if  $\mathop{\mathrm{V}}(\mathbfcal{C})>0$.* As pointed out before, mixed area measures can be extended to differences of support functions.
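The criticality conditions in Definition 21 reduce to finitely many rank computations and can be verified directly for polytopes given by vertex lists. A minimal numerical sketch (assuming `numpy`; the helper names and the encoding of polytopes by vertex lists are illustrative):

```python
import itertools
import numpy as np

def dim_spanbar(points):
    """dim spanbar(A) = dim span(A - A): rank of differences to a base point."""
    pts = np.asarray(points, dtype=float)
    return np.linalg.matrix_rank(pts - pts[0])

def subset_sums_ok(vertex_sets, offset):
    """Check dim spanbar(sum_{i in I} A_i) >= |I| + offset for all nonempty I;
    offset 0/1/2 corresponds to semicritical/critical/supercritical."""
    ell = len(vertex_sets)
    for r in range(1, ell + 1):
        for I in itertools.combinations(range(ell), r):
            vertex_sum = [sum(vs) for vs in
                          itertools.product(*(vertex_sets[i] for i in I))]
            if dim_spanbar(vertex_sum) < r + offset:
                return False
    return True

e = np.eye(3)
tri = lambda a, b: [np.zeros(3), a, b]   # triangle conv{0, a, b}
seg = lambda v: [np.zeros(3), v]         # segment [0, v]

# Two coordinate triangles in R^3: critical (hence semicritical),
# but not supercritical, since a single triangle has dimension 2 < 1 + 2.
A = [tri(e[0], e[1]), tri(e[1], e[2])]
assert subset_sums_ok(A, 0) and subset_sums_ok(A, 1)
assert not subset_sums_ok(A, 2)

# Two parallel segments: their sum is again a segment, so not semicritical.
assert not subset_sums_ok([seg(e[0]), seg(e[0])], 0)
```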
If $g_1,g_2$ are differences of support functions and $\mathbfcal{C}$ is an $(n-3)$-tuple of convex bodies in $\mathbb{R}^n$ (if $n\ge 3$), then we set $S_{g_1,g_2,\mathbfcal{C}}:=S(g_1,g_2,\mathbfcal{C},\cdot)$. A similar convention applies in case just one of the bodies is replaced by a difference of support functions. The statement and proof of the following lemma are suggested by a similar result concerning zonoids; see [@SvH23+ Theorem 14.9]. In the following, $\mathcal{K}_*\subseteq\mathcal{K}^n$ always denotes a measurable class of convex bodies. **Lemma 24**. *Assume that $n \ge 3$. Let $\mathbfcal{C}$ be an $(n-3)$-tuple of convex bodies in $\mathbb{R}^n$, and let $K\in\mathcal{K}^n$ be a $\mathcal{K}_*$-macroid with generating measure $\mu$. If $(K, \mathbfcal{C})$ is supercritical and $f$ is a difference of support functions with $\mathop{\mathrm{S}}_{f, K, \mathbfcal{C}} = 0$, then $\mathop{\mathrm{S}}_{f, P, \mathbfcal{C}} = 0$  for all $P \in \mathop{\mathrm{pos}}\mu$.* *Proof.* Let $P \in\mathop{\mathrm{pos}}\mu$. Then there are $\ell\in\mathbb{N}_0$, $\lambda_1, \ldots, \lambda_\ell\ge 0$ and $L_1, \ldots, L_\ell \in\mathop{\mathrm{supp}}\mu$ such that $P = \sum_{i=1}^\ell \lambda_i L_i$ and $$\mathop{\mathrm{S}}_{f, P, \mathbfcal{C}} = \sum_{i=1}^\ell \lambda_i \mathop{\mathrm{S}}_{f, L_i, \mathbfcal{C}} .$$ Note that this holds trivially with $\mathop{\mathrm{S}}_{f, P, \mathbfcal{C}}=0$ if $\ell=0$. So it suffices to prove that $\mathop{\mathrm{S}}_{f, L, \mathbfcal{C}} = 0$ for all $L \in \mathop{\mathrm{supp}}\mu$. By Fubini's theorem and basic properties of mixed area measures (see [@Schneider Sect. 5.1] or [@Hug Sect.
4.1]), which remain true in the case where differences of support functions are admitted in some of the arguments of the mixed volumes and the mixed area measures, $$\begin{aligned} \int \mathop{\mathrm{V}}(f, f, L, \mathbfcal{C}) \,\mu(\mathop{}\!\mathrm{d}L) &= \frac{1}{n} \int \int h_L(u) \, \mathop{\mathrm{S}}_{f, f, \mathbfcal{C}}(\mathop{}\!\mathrm{d}u) \, \mu(\mathop{}\!\mathrm{d}L)\nonumber\allowdisplaybreaks\\ & = \frac{1}{n} \int \int h_L(u) \, \mu(\mathop{}\!\mathrm{d}L) \, \mathop{\mathrm{S}}_{f, f, \mathbfcal{C}}(\mathop{}\!\mathrm{d}u)\nonumber\allowdisplaybreaks\\ &= \frac{1}{n} \int h_K(u) \, \mathop{\mathrm{S}}_{f, f, \mathbfcal{C}}(\mathop{}\!\mathrm{d}u)\nonumber \allowdisplaybreaks\\ &= \mathop{\mathrm{V}}(f, f, K, \mathbfcal{C})\nonumber\\ &= \frac{1}{n} \int f\, \mathop{}\!\mathrm{d}\mathop{\mathrm{S}}_{f, K, \mathbfcal{C}} = 0\label{eq:zerorel} .\end{aligned}$$ If $L \in \mathcal{K}^n$ is a singleton, then $\mathop{\mathrm{V}}(f, f, L, \mathbfcal{C}) = \mathop{\mathrm{V}}(f, f, 0L, \mathbfcal{C}) = 0$ by translation invariance and multilinearity of $\mathop{\mathrm{V}}$. If $L \in \mathcal{K}^n$ is not a singleton, then $\mathop{\mathrm{V}}(K, K, L, \mathbfcal{C})>0$. In fact, first we get $\dim K \ge 1+2=3$, since $(K, \mathbfcal{C})$ is supercritical. By Lemma [Lemma 22](#thm:critSimple){reference-type="ref" reference="thm:critSimple"} (b) it follows that $(K,K,\mathbfcal{C})$ is critical, but then $(L,K,K,\mathbfcal{C})$ is semicritical by Lemma [Lemma 22](#thm:critSimple){reference-type="ref" reference="thm:critSimple"} (a) and since $\dim L\ge 1$. Hence the assertion follows from Lemma [Lemma 23](#lem:critvol){reference-type="ref" reference="lem:critvol"}. Since $\mathop{\mathrm{S}}_{f, K, \mathbfcal{C}} = 0$, it follows from the extension of [\[eqex\]](#eqex){reference-type="eqref" reference="eqex"} to differences of support functions that $\mathop{\mathrm{V}}(f, K, L, \mathbfcal{C})=0$.
Hence, by the General Alexandrov--Fenchel Inequality (GAFI) we get $$0 = \mathop{\mathrm{V}}(f, K, L, \mathbfcal{C})^2 \ge \mathop{\mathrm{V}}(f, f, L, \mathbfcal{C}) \cdot \mathop{\mathrm{V}}(K, K, L, \mathbfcal{C}),$$ which implies that $\mathop{\mathrm{V}}(f, f, L, \mathbfcal{C})\le 0$. Since $\mathop{\mathrm{V}}(f, f, L, \mathbfcal{C})$ is continuous in $L \in \mathcal{K}^n$, it follows from [\[eq:zerorel\]](#eq:zerorel){reference-type="eqref" reference="eq:zerorel"} that $\mathop{\mathrm{V}}(f, f, L, \mathbfcal{C}) = 0$ for all $L \in\mathop{\mathrm{supp}}\mu$. Now let $L \in \mathop{\mathrm{supp}}\mu$. If $L$ is a singleton, then $\mathop{\mathrm{S}}_{f, L, \mathbfcal{C}} = 0$ by translation invariance and multilinearity of $\mathop{\mathrm{S}}$. If $L$ is not a singleton, then again $\mathop{\mathrm{V}}(K, K, L, \mathbfcal{C})>0$. Moreover, $\mathop{\mathrm{V}}(f, K, L, \mathbfcal{C}) = 0$ and $\mathop{\mathrm{V}}(f, f, L, \mathbfcal{C}) = 0$, as shown above. Therefore, $\mathop{\mathrm{S}}_{f, L, \mathbfcal{C}} = 0$ is implied by [@SvH23+ Lem. 3.12 (a)]. ◻ Next we compare how the smallest affine subspace containing a given $k$-polyoid with generating measure $\mu$ is related to the smallest affine subspace of a polytope from the positive hull of the support of $\mu$, if both affine subspaces are translated to the origin $0$. **Lemma 25**. *Let $n \in \mathbb{N}_0$. Let $K\in\mathcal{K}^n$ be a $\mathcal{K}_*$-macroid with generating measure $\mu$, and let $Q \in \mathop{\mathrm{pos}}\mu$. Then $\mathop{\mathrm{\overline{span}}}Q \subseteq \mathop{\mathrm{\overline{span}}}K$.* *Proof.* For $n=0$ the assertion is clear. Let $u \in \prn*{\mathop{\mathrm{\overline{span}}}K}^\perp$ (the linear subspace orthogonal to $\mathop{\mathrm{\overline{span}}}K$). 
Then $$\int (h_P(u) + h_P(-u))\, \mu(\mathop{}\!\mathrm{d}P) = h_K(u) + h_K(-u) = 0 .$$ Since the map $P\mapsto h_P(u)+h_P(-u)$, $P\in \mathcal{K}_*$, is continuous and nonnegative, we get $h_P(u) + h_P(-u) = 0$ for all $P \in \mathop{\mathrm{supp}}\mu$. Because $Q$ is a (nonnegative) Minkowski combination of such $P$, it follows that $h_Q(u) + h_Q(-u) = 0$. Hence, $u \in \prn*{\mathop{\mathrm{\overline{span}}}Q}^\perp$. So $\prn*{\mathop{\mathrm{\overline{span}}}K}^\perp \subseteq \prn*{\mathop{\mathrm{\overline{span}}}Q}^\perp$, proving the claim. ◻ The proof of the next auxiliary result is inspired by [@SvH23+ Thm. 14.9]. **Lemma 26**. *Let $n\ge 3$. Let $\mathbfcal{C}= (C_1, \ldots, C_{n-2})$ be an $(n-2)$-tuple of $\mathcal{K}_*$-macroids in $\mathbb{R}^n$ with generating measures $\mu_1, \ldots, \mu_{n-2}$. Let $f$ be a difference of support functions. Assume that $f$ is linear on $\mathop{\mathrm{supp}}\mathop{\mathrm{S}}({B^{n}}, \mathbfcal{Q},\cdot)$ whenever $\mathbfcal{Q}= (Q_1, \ldots, Q_{n-2}) \in \mathop{\mathrm{pos}}(\mu_1, \ldots, \mu_{n-2})$ with $\mathop{\mathrm{\overline{span}}}Q_i = \mathop{\mathrm{\overline{span}}}C_i$  for $i \in [n-2]$. Then $f$ is also linear on $\mathop{\mathrm{supp}}\mathop{\mathrm{S}}({B^{n}}, \mathbfcal{C},\cdot)$.* *Proof.* By Lemma [Lemma 13](#lem:ptb-con){reference-type="ref" reference="lem:ptb-con"}, for $i \in [n-2]$, there exists a sequence $\widetilde C_i^{(1)}, \widetilde C_i^{(2)}, \ldots$ of sums of convex bodies in $\mathop{\mathrm{supp}}\mu_i$ that converges to $C_i$. Being an element of $\mathop{\mathrm{pos}}\mu_i$, $\widetilde C_i^{(j)}$ satisfies $\mathop{\mathrm{\overline{span}}}\widetilde C_i^{(j)} \subseteq \mathop{\mathrm{\overline{span}}}C_i$ by Lemma [Lemma 25](#thm:mposSpan){reference-type="ref" reference="thm:mposSpan"}. On the other hand, $\widetilde C_i^{(j)} \to C_i$ implies that the reverse inclusion holds for all $j$ greater than or equal to some $q \in \mathbb{N}$. 
For $i \in [n-2]$, define $$C_i^{(j)} \coloneqq \widetilde C_i^{(q+j)} + \frac{1}{j^2}\sum_{j'=1}^{j-1} \widetilde C_i^{(q + j')},\quad j\in\mathbb{N} .$$ Because $(d(0, \widetilde C_i^{(j)}))_j$ is bounded by some $c_i \in (0, \infty)$, $$d(C_i^{(j)}, \widetilde C_i^{(q + j)}) \le \frac{(j-1)c_i}{j^2} \to 0\quad \text{as }j\to\infty$$ and $$\lim_{j\to\infty} C_i^{(j)} = \lim_{j\to\infty} \widetilde C_i^{(q+j)} = C_i .$$ Moreover, $$\mathop{\mathrm{\overline{span}}}C_i = \mathop{\mathrm{\overline{span}}}\widetilde C_i^{(q + j)} \subseteq \mathop{\mathrm{\overline{span}}}C_i^{(j)} \subseteq \mathop{\mathrm{\overline{span}}}C_i .$$ For all $j \in \mathbb{N}$, we have $\mathbfcal{C}^{(j)} \coloneqq (C_1^{(j)}, \ldots, C_{n-2}^{(j)})\in \mathop{\mathrm{pos}}(\mu_1, \ldots, \mu_{n-2})$. By assumption and since $\mathop{\mathrm{\overline{span}}}C_i^{(j)} = \mathop{\mathrm{\overline{span}}}C_i$ for $i \in [n-2]$, there is some $x_j\in\mathbb{R}^n$ such that $f = \left<x_j, \cdot\right>$ on $\mathop{\mathrm{supp}}\mathop{\mathrm{S}}({B^{n}}, \mathbfcal{C}^{(j)},\cdot)$. 
By definition of $C_i^{(j)}$ and the multilinearity of $\mathop{\mathrm{S}}$, we obtain $$\mathop{\mathrm{supp}}\mathop{\mathrm{S}}({B^{n}}, \mathbfcal{C}^{(j)},\cdot) = \bigcup_{j'_1, \ldots, j'_{n-2} = 1}^j \mathop{\mathrm{supp}}\mathop{\mathrm{S}}({B^{n}}, \widetilde C_1^{(q + j'_1)}, \ldots, \widetilde C_{n-2}^{(q + j'_{n-2})},\cdot) .$$ In particular, $$\begin{aligned} \label{eq:lg1} \mathop{\mathrm{supp}}\mathop{\mathrm{S}}({B^{n}}, \mathbfcal{C}^{(j)},\cdot) \subseteq \mathop{\mathrm{supp}}\mathop{\mathrm{S}}({B^{n}}, \mathbfcal{C}^{(j+1)},\cdot) \quad \text{for all $j \in \mathbb{N}$} .\end{aligned}$$ Hence, there is $p\in\mathbb{N}$ such that $$E \coloneqq \mathop{\mathrm{span}}\mathop{\mathrm{supp}}\mathop{\mathrm{S}}({B^{n}}, \mathbfcal{C}^{(p)},\cdot) = \mathop{\mathrm{span}}\mathop{\mathrm{supp}}\mathop{\mathrm{S}}({B^{n}}, \mathbfcal{C}^{(j)},\cdot) \quad \text{for all $j \ge p$} .$$ Then for all $j \ge p$, $\left<x_p, \cdot\right>$ and $\left<x_j, \cdot\right>$ must agree on $E$, because they agree on $\mathop{\mathrm{supp}}\mathop{\mathrm{S}}({B^{n}}, \mathbfcal{C}^{(p)},\cdot)$, which spans $E$. Hence, for all $j \ge p$, $$f = \left<x_p, \cdot\right> \quad \text{on $\mathop{\mathrm{supp}}\mathop{\mathrm{S}}({B^{n}}, \mathbfcal{C}^{(j)},\cdot)$} .$$ Because $\mathop{\mathrm{S}}({B^{n}}, \mathbfcal{C}^{(j)},\cdot)$ converges weakly to $\mathop{\mathrm{S}}({B^{n}}, \mathbfcal{C},\cdot)$ as $j\to\infty$, it follows that $$f = \left<x_p, \cdot\right> \quad \text{on $\mathop{\mathrm{supp}}\mathop{\mathrm{S}}({B^{n}}, \mathbfcal{C},\cdot) \subseteq \mathop{\mathrm{cl}}\bigcup_{j=p}^\infty \mathop{\mathrm{supp}}\mathop{\mathrm{S}}({B^{n}}, \mathbfcal{C}^{(j)},\cdot)$} ,$$ which proves the assertion of the lemma. ◻ **Remark 27**.
* In the proof of Lemma [Lemma 26](#thm:linGlue){reference-type="ref" reference="thm:linGlue"}, we implicitly showed that if $\mathbfcal{C}= (C_1, \ldots, C_{n-2})$ is an $(n-2)$-tuple of $\mathcal{K}_*$-macroids in $\mathbb{R}^n$, $n\ge 3$, with generating measures $\mu_1, \ldots, \mu_{n-2}$, then there exists a sequence of $(n-2)$-tuples $\mathbfcal{Q}^{(j)} = (Q_1^{(j)}, \ldots, Q_{n-2}^{(j)}) \in \mathop{\mathrm{pos}}(\mu_1, \ldots, \mu_{n-2})$, $j\in\mathbb{N}$, such that $\mathop{\mathrm{\overline{span}}}Q_i^{(j)} = \mathop{\mathrm{\overline{span}}}C_i$ for $i\in [n-2]$ and $\mathbfcal{Q}^{(j)}\to \mathbfcal{C}$ as $j\to\infty$. * We can now prove our main result. A crucial tool for our argument is the important special case of polytopes, which was already treated by Shenfeld and van Handel [@SvH23+ Thm. 8.1]. Recall that a convex body is said to be smooth if for each boundary point there is a unique supporting hyperplane passing through it. In particular, smooth convex bodies are full-dimensional. **Theorem 28**. *[\[thm:supercritical\]]{#thm:supercritical label="thm:supercritical"} Let $n\ge 2$. Let $\mathbfcal{C}= (C_1, \ldots, C_{n-2})$ be a supercritical $(n-2)$-tuple of macroids or smooth convex bodies in $\mathbb{R}^n$. Let $f$ be a difference of support functions. Then $\mathop{\mathrm{S}}_{f, \mathbfcal{C}}= 0$ if and only if $f$ is linear on $\mathop{\mathrm{supp}}\mathop{\mathrm{S}}({B^{n}},\mathbfcal{C},\cdot)$.* *Proof.* Let $n=2$. Then the assumption states that $\mathop{\mathrm{S}}_f=0$. If $f=h_K-h_L$ for $K,L\in \mathcal{K}^2$, this implies that $S(K,\cdot)=S(L,\cdot)$, hence $K=L+x$ for some $x\in\mathbb{R}^2$. This shows that $f$ is linear. Finally, note that $\mathop{\mathrm{supp}}\mathop{\mathrm{S}}(B^n,\cdot)=\mathbb{S}^1$. In the following, we assume that $n\ge 3$. It is sufficient to prove the theorem in the case where $\mathbfcal{C}$ is a supercritical tuple of macroids. 
The extension with the possible inclusion of smooth convex bodies follows immediately by an application of [@SvH23+ Cor. 14.3]. For this, note that if a smooth convex body in $\mathbfcal{C}$ (which necessarily has full dimension) is replaced by the Euclidean unit ball (which is a zonoid and hence a polyoid), neither the condition $\mathop{\mathrm{S}}_{f, \mathbfcal{C}}= 0$ nor the set $\mathop{\mathrm{supp}}\mathop{\mathrm{S}}({B^{n}},\mathbfcal{C},\cdot)$ is changed. Moreover, the supercriticality of $\mathbfcal{C}$ is likewise not affected by replacing a smooth body by the unit ball. Hence, it is sufficient in the following to consider a supercritical $(n-2)$-tuple $\mathbfcal{C}$ of macroids in $\mathbb{R}^n$ with $n\ge 3$. First, we assume that $\mathop{\mathrm{S}}_{f, \mathbfcal{C}}= 0$. Let $\mathbfcal{Q}= (Q_1, \ldots, Q_{n-2}) \in \mathop{\mathrm{pos}}(\mu_1, \ldots, \mu_{n-2})$ be such that $\mathop{\mathrm{\overline{span}}}Q_i = \mathop{\mathrm{\overline{span}}}C_i$ for $i\in [n-2]$. So for all $I \subseteq [n-2]$, the tuple $(\mathbfcal{C}_I, \mathbfcal{Q}_{I^\complement})$ is supercritical, where $I^\complement:=[n-2]\setminus I$. Based on the hypothesis $\mathop{\mathrm{S}}_{f, \mathbfcal{C}}= 0$, Lemma [Lemma 24](#thm:extremalDecomposition){reference-type="ref" reference="thm:extremalDecomposition"} allows us to sequentially replace $C_i$ by $Q_i$ and to finally obtain $\mathop{\mathrm{S}}_{f, \mathbfcal{Q}} = 0$. Since $\mathbfcal{Q}$ is a supercritical tuple of polytopes, $f$ is linear on $\mathop{\mathrm{supp}}\mathop{\mathrm{S}}({B^{n}}, \mathbfcal{Q},\cdot)$ by [@SvH23+ Thm. 8.1]. Now the claim follows from Lemma [Lemma 26](#thm:linGlue){reference-type="ref" reference="thm:linGlue"}. For the reverse direction, we assume that $f$ is linear on $\mathop{\mathrm{supp}}\mathop{\mathrm{S}}({B^{n}},\mathbfcal{C},\cdot)$. Let $K\in\mathcal{K}^n$ be an arbitrary convex body. Then [@Schneider Lem. 7.6.15] (compare also [@SvH23+ Lem.
8.11]) implies that $$n\mathop{\mathrm{V}}(f,K,\mathbfcal{C})=\int f(u)\, \mathop{\mathrm{S}}(K,\mathbfcal{C},\mathop{}\!\mathrm{d}u)=0,$$ where the last equality holds since $f$ agrees with a linear function on $\mathop{\mathrm{supp}}\mathop{\mathrm{S}}(K,\mathbfcal{C},\cdot)\subseteq \mathop{\mathrm{supp}}\mathop{\mathrm{S}}({B^{n}},\mathbfcal{C},\cdot)$ and mixed area measures have their centroid at the origin. By the symmetry of mixed volumes, we obtain $$0=\int h_K(u)\, \mathop{\mathrm{S}}(f,\mathbfcal{C},\mathop{}\!\mathrm{d}u),$$ which yields $\mathop{\mathrm{S}}(f,\mathbfcal{C},\cdot)=0$, since differences of support functions are dense in $C(\mathbb{S}^{n-1})$ (see e.g. [@Schneider Lem. 1.7.8]). ◻ Finally, we obtain a characterization of the equality cases of (AFI) for supercritical tuples of macroids and smooth bodies. **Theorem 29**. *Let $K, L \subset\mathbb{R}^n$ be convex bodies, and let $\mathbfcal{C}= (C_1, \ldots, C_{n-2})$ be a supercritical $(n-2)$-tuple of macroids or smooth convex bodies in $\mathbb{R}^n$.* 1. *If $\mathop{\mathrm{V}}(K,L,\mathbfcal{C})=0$, then (AFI) holds with equality and $K,L$ are homothetic.* 2. *Let $\mathop{\mathrm{V}}(K,L,\mathbfcal{C})>0$. Then (AFI) holds with equality if and only if there are $a>0$ and $x\in\mathbb{R}^n$ such that $h_K=h_{aL+x}$ on $\mathop{\mathrm{supp}}\mathop{\mathrm{S}}({B^{n}},\mathbfcal{C},\cdot)$.* *Proof.* (a) If $\mathop{\mathrm{V}}(K,L,\mathbfcal{C})=0$, then $\mathop{\mathrm{V}}(K,K,\mathbfcal{C})\mathop{\mathrm{V}}(L,L,\mathbfcal{C})=0$, and hence (AFI) holds with equality. By symmetry, we can assume that $\mathop{\mathrm{V}}(K,K,\mathbfcal{C})=0$. Then also $\mathop{\mathrm{V}}(K,K+L,\mathbfcal{C})=0$. If $K$ is a singleton, then $K,L$ are homothetic. If $K$ has dimension at least $1$, then $\dim(K+L)\le 1$, since otherwise $\dim(K+L)\ge 2$, $\dim(K)\ge 1$ and the assumed supercriticality of $\mathbfcal{C}$ imply that $\mathop{\mathrm{V}}(K,K+L,\mathbfcal{C})>0$, a contradiction. In particular, $\dim(K)=1$ and $L$ is contained in a segment parallel to $K$. Hence again $K,L$ are homothetic. \(b\) Suppose that $\mathop{\mathrm{V}}(K,L,\mathbfcal{C})>0$. By [@Schneider Thm. 7.4.2] or [@SvH23+ Lem.
2.5], (AFI) holds with equality if and only if there is some $a>0$ such that $$\mathop{\mathrm{S}}(K,\mathbfcal{C},\cdot)=\mathop{\mathrm{S}}(aL,\mathbfcal{C},\cdot),$$ that is, $$\label{eq:equiv1} \mathop{\mathrm{S}}_{f,\mathbfcal{C}}=0\quad \text{with }f=h_K-ah_L.$$ Theorem [\[thm:supercritical\]](#thm:supercritical){reference-type="ref" reference="thm:supercritical"} implies that [\[eq:equiv1\]](#eq:equiv1){reference-type="eqref" reference="eq:equiv1"} holds if and only if there is some $x\in\mathbb{R}^n$ such that $$\label{eq:equiv3} f=\langle x,\cdot\rangle\quad\text{on }\mathop{\mathrm{supp}}\mathop{\mathrm{S}}({B^{n}},\mathbfcal{C},\cdot),$$ but clearly [\[eq:equiv3\]](#eq:equiv3){reference-type="eqref" reference="eq:equiv3"} is equivalent to $$h_K=h_{aL+x}\quad \text{on }\mathop{\mathrm{supp}}\mathop{\mathrm{S}}({B^{n}},\mathbfcal{C},\cdot),$$ which proves the asserted equivalence. ◻ # A macroid that is not a polyoid {#app:macroid-not-polyoid} In this section, we construct an example of a macroid that is not a polyoid, thereby showing that the class of macroids is larger than the class of polyoids. ## Zonotope kernels of polytopes Let $K, L, M \in \mathcal{K}^n$ be convex bodies. If $h_K - h_L = h_M$, then $K \ominus L \coloneqq M$ is called the *Minkowski difference* of $K$ and $L$. **Lemma 30**. *Let $K\in\mathcal{K}^n$ be a convex body and let $e, f$ be two linearly independent segments that are summands of $K$. Then $e + f$ is also a summand of $K$.* *Proof.* To show that $e + f$ slides freely in $K$ (see [@Schneider Sect. 3.2] and in particular Theorem 3.2.2 there), it suffices to consider two-dimensional slices of $K$ parallel to $e + f$. Hence we can reduce to the case that $K$ is two-dimensional. Let $\pm u$ be the normals of $e$ and $\pm v$ the normals of $f$. As $F(e, \pm v)$ are trivial, $F(K \ominus e, \pm v)$ are translates of $F(K, \pm v)$. So translates of $f$ are not only contained in $F(K, \pm v)$ but also in $F(K \ominus e, \pm v)$.
Then [@Schneider Thm. 3.2.11] yields that $f$ is a summand of $K \ominus e$, and hence $e + f$ is a summand of $K$. This completes the proof. ◻ **Lemma 31**. *The function $\zeta\colon \mathcal{P}^n \to \mathcal{P}^n$ that maps a polytope to its unique largest (i.e. inclusion-maximal) zonotope summand, centrally symmetric around the origin, is well-defined. Every zonotope summand of $P \in \mathcal{P}^n$ is a summand of $\zeta(P)$.* *Proof.* We show that every polytope $P$ has a unique largest zonotope summand. Let $\mathcal{Z}(P)$ denote the nonempty set of origin centered zonotope summands of $P$. First note that summands of polytopes are polytopes (see [@Schneider p. 157]) and polytopes that are zonoids are zonotopes (see [@Schneider Cor. 3.5.7]). Hence the set of all origin centered zonotopes that are summands of $P$ equals the set of all origin centered zonoids that are summands of $P$. The latter set is compact as the intersection of a compact set (the set of centered zonoids having mean width less than or equal to the mean width of $P$) and a closed set (the set of summands of $P$). It follows that there is a $Z \in \mathcal{Z}(P)$ of maximum mean width. This $Z$ is inclusion-maximal in $\mathcal{Z}(P)$. Let $Y \in \mathcal{Z}(P)$. Then there are pairwise linearly independent $x_1, \ldots, x_k \in \mathbb{S}^{n-1}$ and scalars $y_1, \ldots, y_k, z_1, \ldots, z_k \ge 0$ such that $$Y = \sum_{i=1}^k y_i [-x_i, x_i], \quad Z = \sum_{i=1}^k z_i [-x_i, x_i] .$$ Assume for a contradiction that $Y$ is not a summand of $Z$. Up to reordering of the indices, it follows that $y_1 > z_1$. Then $y_1 [-x_1, x_1]$ is a summand of $P$, but, as $Z$ is maximal, not a summand of $$P \ominus \sum_{i=2}^k z_i [-x_i, x_i].$$ Let $\ell\in[k]$ be the largest index such that $y_1 [-x_1, x_1]$ is a summand of $\widetilde P \coloneqq P \ominus \sum_{i=2}^\ell z_i [-x_i, x_i]$. Then $\ell < k$ and $z_{\ell+1} [-x_{\ell+1}, x_{\ell+1}]$ is also a summand of $\widetilde P$.
Now Lemma [Lemma 30](#thm:segment-summands){reference-type="ref" reference="thm:segment-summands"} shows that $y_1 [-x_1, x_1] + z_{\ell+1} [-x_{\ell+1}, x_{\ell+1}]$ is a summand of $\widetilde P$, but this contradicts the maximality of $\ell$. Hence, every $Y \in \mathcal{Z}(P)$ is a summand of $Z$, and there is only one maximal zonotope summand of $P$. ◻ Next, we aim to prove that $\zeta$ is measurable. We write $h(K,u)=h_K(u)$ for the support function of $K\in\mathcal{K}^n$ evaluated at $u\in\mathbb{S}^{n-1}$. We write $B(K,r)$ for the closed ball with center $K$ and radius $r\ge 0$ with respect to the Hausdorff metric $d$ on the space $\mathcal{K}^n$ of convex bodies. **Lemma 32**. *Let $X$ be a separable metric space and $f \colon X \to \mathcal{K}^n$ a function such that for any $u \in \mathbb{S}^{n-1}$ and $\lambda \in \mathbb{R}$, $$S_f(u, \lambda) \coloneqq \set*{ x \in X \colon h(f(x), u) \ge \lambda }$$ is closed. Then $f$ is measurable.* *Proof.* Fix some countable and dense set $Q \subseteq \mathbb{S}^{n-1}$. Let $K \in \mathcal{K}^n$ and $r > 0$. By continuity of $h_L$ for every $L \in \mathcal{K}^n$, $$B(K, r) = \bigcap_{u \in Q } h(\cdot, u)^{-1}([h_K(u) - r, h_K(u) + r]).$$ Taking the preimage under $f$, we get $$f^{-1}(B(K, r)) = \bigcap_{u \in Q} h(f(\cdot), u)^{-1}([h_K(u) - r, h_K(u) + r]).$$ By hypothesis, $h(f(\cdot), u)$ is upper semicontinuous and hence Borel measurable for every $u \in \mathbb{S}^{n-1}$. Since $Q$ is countable, it follows that $f^{-1}(B(K, r))$ is a Borel set. Because balls like $B(K, r)$ generate the Borel $\sigma$-algebra of $\mathcal{K}^n$, this shows that $f$ is measurable. ◻ **Lemma 33**. *$\zeta$ is a measurable function.* *Proof.* We apply Lemma [Lemma 32](#thm:lowerSemicontinuous){reference-type="ref" reference="thm:lowerSemicontinuous"}. Let $u \in \mathbb{S}^{n-1}$ and $\lambda \in \mathbb{R}$. It suffices to show that $$S_\zeta(u, \lambda) = \set*{ P \in \mathcal{P}^n \colon h(\zeta(P), u) \ge \lambda }$$ is closed.
Let $(P_i)$ be a sequence in $S_\zeta(u, \lambda)$ that converges to $P \in \mathcal{P}^n$. Applying the Blaschke selection theorem to the bounded sequence $(\zeta(P_i))$, we find a subsequence $(Q_i)$ such that the sequence $(\zeta(Q_i))$ converges to a centered zonoid $Z$ that is also a summand of $P$. Because summands of polytopes are polytopes and polytopes that are zonoids are zonotopes, it follows that $Z$ is a zonotope. So $Z$ is a zonotope summand of $P$ and hence, by Lemma [Lemma 31](#thm:zonotope-summand){reference-type="ref" reference="thm:zonotope-summand"}, a summand of $\zeta(P)$; in particular, $$h(\zeta(P), u) \ge h(Z, u) = \lim_{i\to\infty} h(\zeta(Q_i), u) \ge \lambda.$$ So $P \in S_\zeta(u, \lambda)$, proving that the latter set is closed. An application of Lemma [Lemma 32](#thm:lowerSemicontinuous){reference-type="ref" reference="thm:lowerSemicontinuous"} concludes the proof. ◻ ## Admissible sequences of polytopes Let $K \subseteq \mathbb{R}^3$ be a convex body. A support set $F(K, u)$ will be called *a singleton* or *trivial* if it is zero-dimensional, *an edge* if it is one-dimensional, and *a facet* if it is two-dimensional. It should be observed that unless $K$ is a polytope, the current definition does not imply that the normal cone of $K$ at a point in the relative interior of an edge is two-dimensional. **Definition 34**. *Let $(P_i)$ be a bounded sequence of (indecomposable) polytopes in $\mathbb{R}^3$ with the following properties:* - *For every $i\in\mathbb{N}$, all facets of $P_i$ are triangles.* - *For every $i\in\mathbb{N}$, $P_i$ is $3$-dimensional.* - *For every $i\in\mathbb{N}$, no two edges of $P_i$ have the same direction.* - *If $i, j\in\mathbb{N}$ are distinct and $u$ is a facet normal of $P_i$, then $F(P_j, u)$ is trivial.* - *If $\ell, i, j\in\mathbb{N}$ are distinct and $e, f, g$ are edges of $P_\ell, P_i, P_j$, then $e + f + g$ is $3$-dimensional. In particular, $e + f$ is $2$-dimensional.* - *$K \coloneqq \sum_{i=1}^\infty P_i$ is a well-defined convex body.* *We call such a sequence *admissible* and $K$ its *associated body*.* **Remark 35**.
*Let $K_i$, $i\in\mathbb{N}$, and $K$ be convex bodies in $\mathcal{K}^n$. Then $K=\sum_{i=1}^\infty K_i$ holds (where the convergence of the partial sums is meant with respect to the Hausdorff metric) if and only if $h_K=\sum_{i=1}^\infty h_{K_i}$ (where the convergence holds pointwise, but then also uniformly on the unit sphere).* **Remark 36**. *Let $P_i,K\in\mathcal{K}^3$, $i\in\mathbb{N}$, be given as in Definition [Definition 34](#DefA3){reference-type="ref" reference="DefA3"}. Then $K$ has at most countably many extreme points. Items three, four and five imply that if $F(K,u)$ is an edge of $K$, then there is a unique $i\in\mathbb{N}$ such that $F(P_i,u)$ is an edge of $P_i$. In this situation, $F(K,u)$ is a translate of $F(P_i,u)$ and no other edge of any of the polytopes $P_j$, $j\neq i$, is parallel to $F(K,u)$. From item four we conclude that if $F(K,u)$ is a triangular facet, then there is a unique $i$ such that $F(K,u)$ is a translate of $F(P_i,u)$. See Lemma [Lemma 41](#thm:edges-of-faces){reference-type="ref" reference="thm:edges-of-faces"} for further discussion.* Recall that every summand of a polytope is a polytope (see [@Schneider p. 157]). For a polytope $Q\in\mathcal{P}^n$, we consider the convex cone $$\mathcal{S}(Q):=\set*{ P \in \mathcal{P}^n \given \exists R \in \mathcal{P}^n, \alpha > 0\colon Q = \alpha P + R }.$$ The elements of $\mathcal{S}(Q)$ are called *scaled summands of $Q$*. **Lemma 37**. *Let $Q \in \mathcal{P}^n$ be a polytope with macroid-generating measure $\mu$ on $\mathcal{P}^n$, that is, $$h_Q=\int h_P\, \mu(\mathop{}\!\mathrm{d}P).$$ Then $\mathop{\mathrm{supp}}\mu \subseteq \mathcal{S}(Q)$.* *Proof.*   *I.* Let $\beta > 0$ be a lower bound on the lengths of the edges of $Q$, and let $P\in \mathcal{S}(Q)$ be nontrivial. Then $\frac{\beta}{\mathop{\mathrm{diam}}P} P$ is a summand of $Q$, as we show first. Let $F(P, u)$ be an edge. 
Since $F(P, u)$ is a scaled summand of $F(Q, u)$, the latter must have an edge $e$ (which is also an edge of $Q$) homothetic to $F(P, u)$. The length of $F(P, u)$ is at most $\mathop{\mathrm{diam}}P$ and the length of $e$ is at least $\beta$, so $e$ contains a translate of $\frac{\beta}{\mathop{\mathrm{diam}}P} F(P, u)$. Hence [@Schneider Thm. 3.2.11] implies that $\frac{\beta}{\mathop{\mathrm{diam}}P} P$ is a summand of $Q$. *II.* The set $\mathcal{S}(Q)$ is closed in $\mathcal{P}^n$, as we show next. Let $(P_i)$ be a sequence in $\mathcal{S}(Q)$ converging to some $P \in \mathcal{P}^n$. If $P$ is trivial, then $P \in \mathcal{S}(Q)$. Otherwise, all but finitely many $P_i$ are nontrivial, and for such $i$, Step I provides polytopes $R_i$ such that $Q = \frac{\beta}{\mathop{\mathrm{diam}}P_i} P_i + R_i$. The sequence $(R_i)$ must then converge to some $R \in \mathcal{K}^n$ such that $Q = \frac{\beta}{\mathop{\mathrm{diam}}P} P + R$. As $R$ is a summand of $Q$, it must be a polytope. So $P \in \mathcal{S}(Q)$. *III.* Assume for a contradiction that there is some $L \in \mathop{\mathrm{supp}}\mu\setminus \mathcal{S}(Q)$. Then ${\sf d} \coloneqq d(L, \mathcal{S}(Q)) > 0$ and $\lambda \coloneqq \mu(B(L, {\sf d}/2)) > 0$. Define convex bodies $L'$ and $R$ by $$h_{L'} \coloneqq \lambda^{-1} \int_{B(L, {\sf d}/2)} h_P \,\mu(\mathop{}\!\mathrm{d}P)\quad\text{and}\quad h_R \coloneqq \int_{B(L, {\sf d}/2)^\complement} h_P \,\mu(\mathop{}\!\mathrm{d}P)$$ so that $Q = \lambda L' + R$. It follows that $R$ is a polytope and $L' \in \mathcal{S}(Q)$, and hence $${\sf d} = d(L, \mathcal{S}(Q)) \le d(L, L') \le {\sf d}/2 < {\sf d},$$ which is a contradiction. ◻ We write $\mathop{\mathrm{\overline{span}}}A$ for the linear subspace parallel to the smallest affine subspace containing a given nonempty set $A\subseteq\mathbb{R}^3$. **Lemma 38**.
*For every edge $e$ of a polytope $P$ in $\mathbb{R}^3$, there are a normal $v \in \mathbb{S}^{2}$ of $e$ and $u \in \mathbb{Q}^3\setminus\mathop{\mathrm{\overline{span}}}\{e\}$ such that $u \perp v$ and $e = F(P, v)$.* *Proof.* Let $U$ be the relatively open normal cone of the edge $e$. Then $\mathop{\mathrm{span}}U$ is two-dimensional, and $U$ is open in $\mathop{\mathrm{span}}U$. Choose $w\in \mathbb{S}^{2} \cap U^\perp$, so that $\mathop{\mathrm{span}}U = w^\perp$. Let $\times$ denote the cross product. The continuous and surjective map $\mathbb{R}^3 \to \mathop{\mathrm{span}}U$, $x \mapsto w\times x$, maps the dense set $S \coloneqq \mathbb{Q}^3 \setminus \mathop{\mathrm{\overline{span}}}\{e\} \subseteq \mathbb{R}^3$ onto the dense set $w \times S \coloneqq \set*{ w \times x \colon x \in S } \subseteq \mathop{\mathrm{span}}U$. Because $U \setminus\set*{0} \subseteq \mathop{\mathrm{span}}U$ is nonempty and open, there must be some $\tilde v \in (U\setminus\set*{0}) \cap (w \times S)$. By construction, there is $u \in S$ such that $\tilde v = w \times u$. Now, $v \coloneqq \norm{\tilde v}^{-1}{\tilde v} \in \mathbb{S}^{2}$ is a normal of $e$, i.e. $e = F(P, v)$, and is orthogonal to $u \in S = \mathbb{Q}^3 \setminus \mathop{\mathrm{\overline{span}}}\{e\}$. ◻ In the following, we write $\pi_{u^\perp}K$ for the orthogonal projection of $K$ to $u^\perp$ for a vector $u \in \mathbb{R}^3 \setminus\set*{0}$. Moreover, $\mathop{\mathrm{S}}_1'(L,\cdot)$ denotes the first area measure of a convex body $L\subset u^\perp$ with respect to $u^\perp$ as the ambient space. **Lemma 39**.
*Let $(P_i)$ be an admissible sequence and $K$ its associated body together with a macroid-generating measure $\mu$.* *There is a set $\mathcal{M} \subseteq \mathcal{P}^3$ of full $\mu$-measure that satisfies the following property: If some $P \in \mathcal{M}$ has an edge with direction $v\in \mathbb{S}^{2}$, then one of the $P_i$ also has an edge in direction $v$.* *Proof.* We intend to use Lemma [Lemma 38](#thm:exists-rational){reference-type="ref" reference="thm:exists-rational"}. If $u \in \mathbb{R}^3 \setminus\set*{0}$, then $$\pi_{u^\perp} K=\sum_{i=1}^\infty \pi_{u^\perp} P_i,$$ and hence the weak continuity and the Minkowski linearity of the area measures imply that $$\mathop{\mathrm{S}}_1' (\pi_{u^\perp} K,\cdot)=\sum_{i=1}^\infty \mathop{\mathrm{S}}_1'(\pi_{u^\perp} P_i,\cdot) .$$ So $\mathop{\mathrm{S}}_1'(\pi_{u^\perp} K,\cdot)$ is a discrete Borel measure (i.e., has countable support) in $u^\perp \cap \mathbb{S}^{2}$. Denoting by $$\omega_u \coloneqq \set*{ v \in u^\perp \cap \mathbb{S}^{2} \colon \mathop{\mathrm{S}}_1'(\pi_{u^\perp} K, \set*{v}) > 0 }$$ the set of its atoms, we obtain from special cases of [@HugReichert23+ Thm. 2.23, Lem. 3.4] that $$0 = \mathop{\mathrm{S}}_1'(\pi_{u^\perp} K, \omega_u^\complement) = \int \mathop{\mathrm{S}}_1'(\pi_{u^\perp} P, \omega_u^\complement) \,\mu(\mathop{}\!\mathrm{d}P).$$ So the set $$\mathcal{M}_1 \coloneqq \bigcap_{u \in \mathbb{Q}^3\setminus\set*{0}} \set*{ P \in \mathcal{P}^3 \colon \mathop{\mathrm{S}}_1'(\pi_{u^\perp} P, \omega_u^\complement) = 0 }$$ has full $\mu$-measure. Since for each $u \in \mathbb{Q}^3\setminus\set*{0}$ the set $\omega_u$ is countable, the set of pairs $$C \coloneqq \set*{ (u, v) \in (\mathbb{Q}^3\setminus\set*{0}) \times \mathbb{S}^{2} \colon v \in \omega_u }$$ is countable. 
Using the notation from Remark [Remark 20](#rem:support-macroid){reference-type="ref" reference="rem:support-macroid"}, we define $$\mathcal{M}_2 \coloneqq \bigcap_{(u, v) \in C} F(\cdot, v)^{-1}(\mathop{\mathrm{supp}}F_v(\mu)).$$ The set $\mathcal{M}_2$ has full $\mu$-measure. To see this, it is sufficient to consider $v\in\mathbb{S}^{2}$ and $P\in\mathcal{P}^3$ with $F(P,v)\notin \mathop{\mathrm{supp}}F_v(\mu)$ and to show that $P\notin\mathop{\mathrm{supp}}\mu$. By assumption, there is a neighbourhood $U'$ of $F(P,v)$ with $F_v(\mu)(U')=0$, hence $\mu(\set*{P'\in\mathcal{P}^3 \colon F(P',v)\in U'})=0$. Since $\set*{P'\in\mathcal{P}^3 \colon F(P',v)\in U'}$ is a neighbourhood of $P$, it follows that $P\notin\mathop{\mathrm{supp}}\mu$. Furthermore, for all $P \in \mathcal{M}_2$ and $(u, v) \in C$, Lemma [Lemma 37](#thm:polytope-measure-support){reference-type="ref" reference="thm:polytope-measure-support"} shows that $F(P, v)$ is a scaled summand of $F(K, v)$. Now let $\mathcal{M} \coloneqq \mathcal{M}_1 \cap \mathcal{M}_2$. Assume $P \in \mathcal{M}$ and that $e$ is an edge of $P$. By Lemma [Lemma 38](#thm:exists-rational){reference-type="ref" reference="thm:exists-rational"}, there are $u \in \mathbb{Q}^3\setminus\mathop{\mathrm{\overline{span}}}e$ and $v \in u^\perp \cap \mathbb{S}^{2}$ such that $e = F(P, v)$. In particular, $F(\pi_{u^\perp} P, v)$ is nontrivial and $\mathop{\mathrm{S}}_1'(\pi_{u^\perp} P, \set*{v}) > 0$. Since $P \in \mathcal{M}_1$, it follows that $v\in\omega_u$ and so $(u, v) \in C$. Since $P \in \mathcal{M}_2$, this implies that $e = F(P, v)$ is a scaled summand of the nontrivial support set $F(K, v)$, which is then either an edge or a parallelogram. If it is an edge, it has the same direction as $e$ and is also an edge of one of the $P_i$, and we are done. If it is a parallelogram, then one of the sides of the parallelogram must have the same direction as $e$, and one of the $P_i$ has an edge with this direction. This concludes the proof. ◻ **Lemma 40**.
*Let $(P_i)$ be an admissible sequence and $K$ its associated body together with a macroid-generating measure $\mu$. Then there is a set $\mathcal{M}'$ of full $\mu$-measure such that for each $P \in \mathcal{M}'$ and for each $u \in \mathbb{S}^{2}$, $F(P, u)$ is a scaled summand of $F(K, u)$.* *Proof.*   (I) Let $P \in \mathcal{M}$, where $\mathcal{M}$ is as in the statement of Lemma [Lemma 39](#thm:edges){reference-type="ref" reference="thm:edges"}. If $F(P, u)$ is trivial, so is the claim. If $F(P, u)$ is an edge, then by Lemma [Lemma 39](#thm:edges){reference-type="ref" reference="thm:edges"}, there is $i \in \mathbb{N}$ such that $F(P, u)$ is homothetic to the edge $F(P_i, u)$ and hence a scaled summand of $F(K, u)$. \(II\) Now consider the case that $P\in \mathcal{M}$ and $F(P, u)$ is a facet. The edges of the polytopes $P_i$, $i \in \mathbb{N}$, together have only countably many directions. Denote the countable set of these directions by $A \subset \mathbb{S}^{2}$. The facet $F(P, u)$ is incident to (at least) two edges with linearly independent directions $v, w \in A$ that determine the facet normal $u$ up to sign $\sigma \in \set*{-1, 1}$ via $$\phi\colon \set*{-1, 1} \times \set*{ (a, b) \in A^2 \given a \ne \pm b } \to \mathbb{S}^{2}, \quad (\sigma, a, b) \mapsto \sigma \frac{a \times b}{\norm{a \times b}}.$$ So the facet normals of $P$ are contained in the countable image of $\phi$, which is independent of the choice of $P$. For each $u \in \mathbb{S}^{2}$, the set $F(K, u)$ is a polytope. Consider the set of full $\mu$-measure $$\mathcal{M}_3 \coloneqq \bigcap_{u\in\mathop{\mathrm{im}}\phi} F(\cdot, u)^{-1}(\mathop{\mathrm{supp}}F_u(\mu)).$$ If $P$ is also in $\mathcal{M}_3$, then by Lemma [Lemma 37](#thm:polytope-measure-support){reference-type="ref" reference="thm:polytope-measure-support"} and $u \in \mathop{\mathrm{im}}\phi$, the support set $F(P, u)$ is a scaled summand of $F(K, u)$.
Now the assertion follows from (I) and (II) with $\mathcal{M}' \coloneqq \mathcal{M} \cap \mathcal{M}_3$. ◻ ## Unique decomposability **Lemma 41**. *Let $(P_i)$ be an admissible sequence and $K$ its associated body together with a macroid-generating measure $\mu$. Let $e$ be an edge of $F(K, u)$ for some $u \in \mathbb{S}^{2}$. Then there is a unique $i\in\mathbb{N}$ such that $F(P_i, u)$ has an edge homothetic to $e$; this edge is in fact a translate of $e$, and it is unique among the edges of $P_i$.* *Proof.* The uniqueness statements immediately follow from the properties of admissible sequences $(P_i)$. Note that an edge of $F(K,u)$ need not be an edge of $K$ as defined here. If $F(K, u)$ is a singleton, it does not have any edges. If $F(K, u)$ is an edge, then there is $i\in\mathbb{N}$ such that $F(P_i, u)$ is a translate of $F(K, u)$. If $F(K, u)$ is a triangle, then there is $i\in\mathbb{N}$ such that $F(P_i, u)$ is a translate of $F(K, u)$. So a translate of $e$ is an edge of $F(P_i, u)$. If $F(K, u)$ is a parallelogram, then there are unique $i, j\in\mathbb{N}$ with $i\neq j$ such that $F(P_i,u)$ is an edge of $P_i$, $F(P_j,u)$ is an edge of $P_j$, and $F(P_i, u) + F(P_j, u)$ is a translate of $F(K, u)$. So $e$ is either a translate of $F(P_i, u)$ or of $F(P_j, u)$. ◻ **Lemma 42**. *Let $(P_n)$ be an admissible sequence and $K$ its associated body together with a macroid-generating measure $\mu$.* *Then the polytopes with a nontrivial zonotope summand are contained in a $\mu$-zero set $\mathcal{N}$.* *Proof.* Recall the measurable function $\zeta$ from Lemmas [Lemma 31](#thm:zonotope-summand){reference-type="ref" reference="thm:zonotope-summand"} and [Lemma 33](#thm:zeta-measurable){reference-type="ref" reference="thm:zeta-measurable"}.
The macroid $Z$ generated by $\zeta(\mu)$, the image measure of $\mu$ under the map $\zeta$, is a zonoid and a summand of $K$, implying $$\mathop{\mathrm{S}}_2(Z,\cdot) \ll \mathop{\mathrm{S}}_2(K,\cdot) = \sum_{n=1}^\infty \sum_{m=1}^\infty \mathop{\mathrm{S}}(P_n, P_m,\cdot).$$ Because the right-hand side is a discrete measure, so is $\mathop{\mathrm{S}}_2(Z,\cdot)$. If we can show that $Z$ has no facets, then $\mathop{\mathrm{S}}_2(Z,\cdot)$ is a discrete measure without atoms, hence zero. Then $Z$ is at most one-dimensional. If we can also show that $Z$ has no edges, then $Z$ must be trivial, and the set $\mathcal{N}$ of polytopes $P$ with nontrivial $\zeta(P)$ is a $\mu$-zero set, proving the claim. It remains to show that $Z$ has no facets or edges. We aim at a contradiction and assume that $F(Z, u)$ is a facet or an edge. Since $F(Z, u)$ is a summand of the polytope $F(K, u)$, it has an edge $e$ that is homothetic to an edge of $F(K, u)$. Because $Z$ is centrally symmetric around the origin, $F(Z, -u) = -F(Z, u)$ also has an edge that is a translate of $e$, and therefore $F(K, -u)$ has an edge that is homothetic to $e$. Hence, $F(K, u)$ and $F(K, -u)$ both contain an edge homothetic to $e$. By Lemma [Lemma 41](#thm:edges-of-faces){reference-type="ref" reference="thm:edges-of-faces"}, and especially the uniqueness statement, there is $i\in\mathbb{N}$ such that $F(P_i, u)$ and $F(P_i, -u)$ both contain the very same edge of $P_i$ homothetic to $e$. But this contradicts $P_i$ being $3$-dimensional, and $Z$ cannot have edges or facets. This completes the proof. ◻ **Lemma 43**.
*Let $(P_i)$ be an admissible sequence and $K$ its associated body together with a macroid-generating measure $\mu$.* *Then $\mu$ is supported in translates of elements of $\operatorname{pos} ((P_i)_i)$, the set of finite positive (Minkowski) combinations of polytopes from the sequence $(P_i)_{i\in\mathbb{N}}$.* *Proof.* By Lemmas [Lemma 40](#thm:support-set-summands){reference-type="ref" reference="thm:support-set-summands"} and [Lemma 42](#thm:no-zonotope-summands){reference-type="ref" reference="thm:no-zonotope-summands"}, $\mu$ is supported in the polytopes $P$ that have no nontrivial zonotope summand and such that for all $u \in \mathbb{S}^{2}$, the support set $F(P, u)$ is a scaled summand of $F(K, u)$. Let $\mathcal{M}_4$ be a set of such polytopes of full $\mu$-measure, and let $P \in \mathcal{M}_4$. For the proof, we may assume that $P$ is nontrivial. (i) All facets of $K$ are triangles or parallelograms. The only scaled summands of a triangle are homothets of that triangle; the only scaled summands of a parallelogram are (possibly degenerate) parallelograms with the same edge directions but possibly different proportions. So all facets of $P$ are of this kind. (ii) Let $u\in\mathbb{S}^{2}$ be such that $F(P, u)$ is a triangular facet. Then $F(K, u)$ is homothetic to $F(P,u)$, that is, there are unique $\alpha_u > 0$ and $t_u \in \mathbb{R}^3$ such that $F(P, u) = \alpha_u F(K, u) + t_u$. Moreover, there are unique $i = i(u)\in\mathbb{N}$ and $t_u'\in\mathbb{R}^3$ such that $F(K, u)$ is a translate of $F(P_i, u)$ and $F(P, u) = \alpha_u F(P_i, u) + t_u'$. Also note that there are at most two triangular facets of $P$ that have an edge parallel to a fixed direction; otherwise, there would be an $i\in\mathbb{N}$ such that $P_i$ also had more than two such facets, contradicting the hypothesis that $P_i$ has at most one edge parallel to a given direction. (iii) We observe that $\dim P=3$. Recall that $P$ is nontrivial.
If $\dim P=1$, then $P$ is a segment, which is a zonotope, a contradiction. If $\dim P=2$, then $P$ is a triangle or a non-degenerate parallelogram; the latter is a zonotope and hence excluded. So $P$ is a triangle with $P=F(P,v)=F(P,-v)$ for a unit vector $v$. Let $e$ be an edge of $P$. Then $P_{i(v)}$ and $P_{i(-v)}$ both contain an edge parallel to $e$. Hence, $i \coloneqq i(v) = i(-v)$ and $F(P_i, v) = F(P_i, -v)$ and thus $\dim P_i=2$, a contradiction. (iv) Let $G$ be the graph with the edges of $P$ as $G$-vertices, where two edges, i.e. $G$-vertices, are connected if and only if they are opposite edges in a parallelogram facet of $P$. Since every edge is only part of two facets, the maximum degree of a $G$-vertex, i.e. an edge of $P$, is two. The connected components of $G$ are cycles or chains. Let us first make sure that no cycles can occur. Assume that the edge $e$ of $P$ with direction $u$ is part of a cycle. Then $\pi_{u^\perp} P$ is a convex polygon and $\pi_{u^\perp} e$ is one of its vertices. The two edges incident to $\pi_{u^\perp} e$ are projections of parallelograms that connect $e$ to the two neighbors of $e$ in $G$, and an induction shows that all support sets of $\pi_{u^\perp} P$ either are projections of edges parallel to $e$ or parallelograms connecting two such edges. For the sake of applying [@Schneider Thm. 3.2.22], let $F(e, v)$ be an edge of $e$, that is, $F(e, v) = e$. In this case, $v \in u^\perp$, and so $e$ is a summand of $F(P, v)$. Then [@Schneider Thm. 3.2.22] guarantees that $e$ is a summand of $P$, in contradiction to $P$ having no nontrivial zonotope summand. Therefore, the connected component of any edge $e$ of $P$ is a chain $e_1 - \cdots - e_k$ of translates of $e$. The endpoints of this chain must be edges of two triangular facets of $P$. By (ii), there can be no other chain with edges parallel to $e_1, \ldots, e_k$. So if $f$ is an edge parallel to $e$, then $f = e_j$ for some $j\in[k]$ and $f$ is a translate of $e$.
Moreover, for any edge $e$ of $P$ there are exactly two triangular facets of $P$ with an edge parallel to $e$. (v) Let $u, v\in\mathbb{S}^{2}$, and $i \coloneqq i(u)$ as in (ii), such that $F(P, u)$ is a triangle and $F(P_i, v)$ is a facet adjacent to $F(P_i, u)$ via an edge $e$. By (iv), there is exactly one $w\in\mathbb{S}^{2}$ besides $u$ such that $F(P, w)$ is a triangle with an edge parallel to $e$. By (ii), $F(P_i, w) \ne F(P_i, u)$ is then also a triangle with an edge parallel to $e$. Because $P_i$ contains no other edge parallel to $e$, it follows that $v = w$. So we have $$F(P, u) = \alpha_u F(P_i, u) + t_u, \quad F(P, v) = \alpha_v F(P_i, v) + t_v.$$ By (iv), all edges of $P$ parallel to $e$ are translates of each other, hence it follows that $\alpha_u = \alpha_v$. (vi) Let $u, v\in\mathbb{S}^{2}$, and $i \coloneqq i(u)$ as in (ii), such that $F(P, u)$ and $F(P_i, v)$ are triangles. Then the triangles $F(P_i, u)$ and $F(P_i, v)$ are connected via a chain of neighboring facets. Iteration of (v) shows that $F(P, v)$ is a triangle and $\alpha_u = \alpha_v$. So $\alpha_u$ only depends on $P$ and $i(u)$, and we set $\alpha_{i(u)}(P) \coloneqq \alpha_u > 0$. If $i\in\mathbb{N}$ and $P$ contains no triangular facet $F(P, w)$ with $i(w) = i$, then we set $\alpha_i(P) \coloneqq 0$. For each $i\in\mathbb{N}$, there are uncountably many edge normals of $P_i$ but only countably many facet normals of $K$. Let $u\in\mathbb{S}^{2}$ be such that $F(P_i, u)$ is an edge and $F(K, u)$ is not a facet and in fact a translate of $F(P_i, u)$. Then for each $P \in \mathcal{M}_4$, $F(P, u)$ is a summand of $F(K, u)$ and satisfies $\mathop{\mathrm{V}}(F(P, u)) = \alpha_i(P) \mathop{\mathrm{V}}(F(P_i, u))$.
This shows that $\mathcal{M}_4 \ni P \mapsto \alpha_i(P)$ is measurable, $$\mathop{\mathrm{V}}(F(K, u)) = \int_{\mathcal{M}_4} \mathop{\mathrm{V}}(F(P, u))\, \mu(\mathop{}\!\mathrm{d}P) = \int_{\mathcal{M}_4} \alpha_i(P) \mu(\mathop{}\!\mathrm{d}P) \mathop{\mathrm{V}}(F(P_i, u))$$ and thus we get $$\label{eq:A14.int} \int_{\mathcal{M}_4} \alpha_i(P)\,\mu(\mathop{}\!\mathrm{d}P) = 1.$$ (vii) Let $\widetilde P \coloneqq \sum_{i=1}^\infty \alpha_i(P)P_i$, involving only finitely many nonzero summands. Note that $\dim\widetilde P=3$, since $\alpha_i(P)>0$ for some $i\in\mathbb{N}$. Every facet of $P$ or $\widetilde P$ is either triangular or a parallelogram. The preceding items show that the triangular facets of $P$ are translates of the triangular facets of $\widetilde P$, and vice versa. (viii) It remains to consider the parallelogram facets. Let $u\in\mathbb{S}^{2}$. If $F(K, u)$ is not a parallelogram, it is a singleton, an edge or a triangle. For all $P\in\mathcal{M}_4$, neither $F(P, u)$ nor $F(\widetilde P, u)$ is then a parallelogram. (ix) We consider the situation from (viii). From now on, we assume that $F(K, u)$ is a parallelogram. We choose $v, w\in\mathbb{S}^{2}\cap u^\perp$ such that the edges of $F(K, u)$ are $F(F(K, u), \pm v)$ and $F(F(K, u), \pm w)$. There are unique distinct $i, j \in \mathbb{N}$ such that $F(F(K, u), \pm v)$ are translates of $F(P_i, u)$ and $F(F(K, u), \pm w)$ are translates of $F(P_j, u)$. Let $P \in \mathcal{M}_4$. Then $F(\widetilde P, u)$ is a translate of $$\alpha_i(P)F(P_i, u) + \alpha_j(P)F(P_j, u),$$ which might be a singleton, an edge or a parallelogram. On the other hand, $F(P, u)$ is a translate of $$F(F(P, u), v) + F(F(P, u), w).$$ (x) We consider the situation from (ix) and aim to show that $F(P, u)$ and $F(\widetilde{P},u)$ are translates of each other. In the current item, we show that the conclusion holds at least if $P$ is taken from a subset of $\mathcal{M}_4$ of full measure. The argument will be completed in (xi).
Let $P\in\mathcal{M}_4$. If $F(F(P, u), v)$ is not a singleton, then it is an edge parallel to $F(P_i, u)$. Hence it must be a translate of $\alpha_i(P)F(P_i, u)$. In either case, $$\label{eq:A14eq2} \mathop{\mathrm{V}}(F(F(P, u), v)) \le \alpha_i(P) \mathop{\mathrm{V}}(F(P_i, u)).$$ Relation [\[eq:A14eq2\]](#eq:A14eq2){reference-type="eqref" reference="eq:A14eq2"} can be used to bound the integrand in $$\mathop{\mathrm{V}}(F(P_i, u)) = \mathop{\mathrm{V}}(F(F(K, u), v)) = \int_{\mathcal{M}_4} \mathop{\mathrm{V}}(F(F(P, u), v)) \,\mu(\mathop{}\!\mathrm{d}P).$$ But then [\[eq:A14.int\]](#eq:A14.int){reference-type="eqref" reference="eq:A14.int"} from (vi) implies that equality must hold in [\[eq:A14eq2\]](#eq:A14eq2){reference-type="eqref" reference="eq:A14eq2"}, for $\mu$-almost all polytopes $P\in\mathcal{M}_4$. Hence, $F(F(P, u), v)$ is a translate of $\alpha_i(P) F(P_i, u)$, and a similar argument shows that $F(F(P, u), w)$ is a translate of $\alpha_j(P) F(P_j, u)$, for $\mu$-almost all $P\in\mathcal{M}_4$. So there is a measurable set $\mathcal{M}_5(u) \subseteq \mathcal{M}_4$ of full $\mu$-measure such that for all $P\in\mathcal{M}_5(u)$, a translate of $F(P, u)$ is $$\alpha_i(P) F(P_i, u) + \alpha_j(P) F(P_j, u),$$ and hence also a translate of $F(\widetilde P, u)$. (xi) Finally, set $\mathcal{M}_5 \coloneqq \bigcap_u \mathcal{M}_5(u)$, where we take the countable intersection over all normals $u$ of parallelogram facets of $K$. Then $\mathcal{M}_5$ is a measurable set of full $\mu$-measure and for all $P \in \mathcal{M}_5$ and $u \in \mathbb{S}^{2}$, $F(P, u)$ is a parallelogram if and only if $F(\widetilde P, u)$ is, and in this case both are translates of each other: When $F(K, u)$ is a parallelogram, it follows from (x) that $F(P, u)$ and $F(\widetilde P, u)$ are translates, and if it is not, neither of them is a parallelogram due to (viii). The proof is concluded by an application of Minkowski's uniqueness theorem for area measures of convex polytopes.
◻ **Lemma 44**. *Let $(P_i)$ be an admissible sequence and $K$ its associated body together with a macroid-generating measure $\mu$.* *Then for all $i\in\mathbb{N}$ there is $P\in\mathop{\mathrm{supp}}\mu$ such that $P_i$ is a scaled summand of $P$.* *Proof.* Let $u$ be the normal of a (necessarily triangular) facet of $P_i$. Then $F(K, u)$ is a translate of this triangular facet. Clearly, $\mu$ is concentrated on $\mathop{\mathrm{supp}}\mu$ and, according to Lemma [Lemma 43](#thm:unique-decomposability){reference-type="ref" reference="thm:unique-decomposability"}, on the set of translates of all finite positive combinations of the $P_j$, $j\in\mathbb{N}$. By [\[eqsupportset\]](#eqsupportset){reference-type="eqref" reference="eqsupportset"} we have $$h_{F(K, u)} = \int h_{F(P, u)}\, \mu(\mathop{}\!\mathrm{d}P) ,$$ hence there is $P \in \operatorname{pos}((P_j)_j) \cap \mathop{\mathrm{supp}}\mu$ such that $F(P, u)$ is nontrivial. This can only be the case if $P_i$ is a scaled summand in the finite positive combination defining $P$, as $F(P_j, u)$ is trivial for all $j \ne i$. ◻ **Theorem 45**. *Let $(P_i)$ be an admissible sequence and $K$ its associated body. If for all $i\in\mathbb{N}$ the body $P_i$ has at least $i$ vertices, then $K$ is not a polyoid, although it is a macroid.* *Proof.* Assume that $K$ is a $k$-polyoid with a generating measure $\mu$ supported in the space of $k$-topes. Lemma [Lemma 44](#thm:prime-summand){reference-type="ref" reference="thm:prime-summand"} shows that $P_{k+1}$ is a scaled summand of some $P\in\mathop{\mathrm{supp}}\mu$. But then $P$ is not a $k$-tope (see, e.g., [@DP19 Lem. 2.3]), in contradiction to the property $\mathop{\mathrm{supp}}\mu \subseteq \mathcal{P}^3_k$ of $\mu$. So $K$ is not a $k$-polyoid for any $k\in\mathbb{N}$. ◻ **Remark 46**. *Let $(P_i)_{i\ge 4}$ be a bounded sequence of polytopes such that $P_i$ is a $3$-dimensional $i$-tope having only triangular faces with no edge direction occurring twice. 
When we apply independent uniform random rotations to each of the $P_i$, we obtain almost surely an admissible sequence. This way, we can construct a macroid that is not a polyoid.* **Acknowledgements.** D. Hug was supported by DFG research grant HU 1874/5-1 (SPP 2265). The authors are grateful to Ramon van Handel for helpful comments on an earlier version of the manuscript. Aleksandr D. Alexandrov. A. D. Alexandrov: Selected Works. Part I. Classics of Soviet Mathematics, Vol. 4. Edited by Yu. G. Reshetnyak and S. Kutateladze. London, 1996. Chapter: To the theory of mixed volumes of convex bodies. Part II: New inequalities for mixed volumes and their applications. Christian Berg. Shephard's approximation theorem for convex bodies and the Milman theorem. Math. Scand. 25 (1969), 19--24. Patrick Billingsley. Convergence of Probability Measures. Second edition. Wiley Series in Probability and Statistics: Probability and Statistics. A Wiley-Interscience Publication. John Wiley & Sons, Inc., New York, 1999, pp. x+277. Antoine Deza and Lionel Pournin. Diameter, decomposability, and Minkowski sums of polytopes. Canad. Math. Bull. 62 (2019), no. 4, 741--755. Dario Cordero-Erausquin, Bo'az Klartag, Quentin Merigot and Filippo Santambrogio. One more proof of the Alexandrov-Fenchel inequality. C. R. Math. Acad. Sci. Paris 357 (2019), no. 8, 676--680. Paul Goodey, Markus Kiderlen and Wolfgang Weil. Section and projection means of convex bodies. Monatsh. Math. 126 (1998), no. 1, 37--54. Paul Goodey and Wolfgang Weil. A uniqueness result for mean section bodies. Adv. Math. 229 (2012), no. 1, 596--601. Paul Goodey and Wolfgang Weil. Sums of sections, surface area measures, and the general Minkowski problem. J. Differential Geom. 97 (2014), no. 3, 477--514. Paul Goodey, Daniel Hug and Wolfgang Weil. Kinematic formulas for area measures. Indiana Univ. Math. J. 66 (2017), no. 3, 997--1018. Eric Grinberg and Gaoyong Zhang. Convolutions, transforms, and convex bodies. Proc.
London Math. Soc. (3) 78 (1999), no. 1, 77--115. Daniel Hug and Paul A. Reichert. The support of mixed area measures involving a new class of convex bodies. ArXiv. Daniel Hug and Wolfgang Weil. Lectures on Convex Geometry. Vol. 286. Graduate Texts in Mathematics. Springer, Cham, 2020, pp. xviii+287. Jaroslav Lukeš, Jan Malý, Ivan Netuka and Jiří Spurný. Integral representation theory. Applications to convexity, Banach spaces and potential theory. De Gruyter Studies in Mathematics, 35. Walter de Gruyter & Co., Berlin, 2010. xvi+715 pp. Robert R. Phelps. Lectures on Choquet's theorem. Second edition. Lecture Notes in Mathematics, 1757. Springer-Verlag, Berlin, 2001. viii+124 pp. Werner Ricker. A new class of convex bodies. Papers in algebra, analysis and statistics (Hobart, 1981), pp. 333--340, Contemp. Math., 9, Amer. Math. Soc., Providence, R.I., 1981. George T. Sallee. On the indecomposability of the cone. J. London Math. Soc. 9 (1974), 363--367. Rolf Schneider. On the Aleksandrov--Fenchel inequality. In: Discrete geometry and convexity (New York, 1982), Ann. New York Acad. Sci., Vol. 440, New York Acad. Sci., New York, 1985, pp. 132--141. Rolf Schneider. On the Aleksandrov--Fenchel inequality involving zonoids. Geom. Dedicata 27 (1988), no. 1, 113--126. Rolf Schneider. Simple valuations on convex bodies. Mathematika 43 (1996), 32--39. Rolf Schneider. Convex Bodies: the Brunn--Minkowski Theory. Second expanded edition. Vol. 151. Encyclopedia of Mathematics and its Applications. Cambridge University Press, Cambridge, 2014, pp. xxii+736. Rolf Schneider and Wolfgang Weil. Stochastic and integral geometry. Probability and its Applications (New York). Springer-Verlag, Berlin, 2008. xii+693 pp. Yair Shenfeld and Ramon van Handel. Mixed volumes and the Bochner method. Proc. Amer. Math. Soc. 147 (2019), no. 12, 5385--5402. Yair Shenfeld and Ramon van Handel. The extremals of Minkowski's quadratic inequality. Duke Math. J. 171 (2022), no. 4, 957--1027.
Yair Shenfeld and Ramon van Handel. The Extremals of the Alexandrov--Fenchel inequality for convex polytopes. ArXiv:2011.04059. (To appear in Acta Mathematica.) Veeravalli S. Varadarajan. On the convergence of sample probability distributions. Sankhya 19 (1958), 23--26. Yu Wang. A remark on the Alexandrov--Fenchel inequality. J. Funct. Anal. 274 (2018), no. 7, 2061--2088. Wolfgang Weil. Kontinuierliche Linearkombinationen von Strecken. Math. Z. 148 (1976), 71--84. Authors' addresses: Daniel Hug, Karlsruhe Institute of Technology (KIT), Institute of Stochastics, D-76128 Karlsruhe, Germany, daniel.hug\@kit.edu Paul Reichert, Karlsruhe, Germany, math\@paulr.de
--- abstract: | In 2005, Borisov and Sapir proved that ascending HNN extensions of finitely generated linear groups are residually finite. Subsequently, Druţu and Sapir noted the existence of finitely generated non-linear residually finite groups based on the work of Borisov and Sapir. In 2017, Kharlampovich, Myasnikov and Sapir showed that there exist finitely generated non-linear solvable residually finite groups. In this paper, we construct the first examples of finitely generated non-linear solvable residually 2 groups. **Keywords:** HNN extension, residually $p$ groups, Baumslag–Solitar groups **Mathematics Subject Classification 2020:** 20E26, 20F05, 20F16 author: - Donsung Lee bibliography: - bibgen.bib date: September 23, 2023 title: Non-linear, solvable, residually $p$ groups --- # Introduction For a group property $X$, a group $G$ is *residually $X$* if for every nontrivial element $g$, there exists a homomorphism $h:G\to H$ to a group $H$ with property $X$ such that $h\left(g\right)\ne1$. A group $G$ is *linear* if there exists a field $F$ and an injective homomorphism $\phi:G\to\mathrm{GL}\left(m,\,F\right)$ for some integer $m$. A group $G$ is *virtually* $X$ if $G$ has a subgroup which has property $X$ and is of finite index. Mal'cev [@MR0003420] established in 1940 that a finitely generated linear group is residually finite. Platonov [@MR0231897] further demonstrated in 1968 that a finitely generated linear group is virtually residually (finite $p$) for some prime $p$. Henceforth, we will refer to a group as *residually* $p$ if it is residually (finite $p$). In [@MR2138070], Borisov and Sapir demonstrated that ascending HNN extensions of finitely generated linear groups are residually finite. Given that finitely generated linear groups are themselves residually finite, a natural question arises: are there ascending HNN extensions of linear groups that are non-linear? A result of Wehrfritz [@MR0367080] is classical. *Theorem 1*.
[@MR0367080 Corollary 2.4] Let $r$ and $s$ be integers with $\left|r\right|,\left|s\right|>1$. Then $$\begin{aligned} G & =\left\langle a,b,h\;|\;hah^{-1}=a^{r},\;hbh^{-1}=b^{s}\right\rangle \end{aligned}$$ is non-linear. Using this theorem, Druţu–Sapir [@MR2115010] provided the first example of a non-linear residually finite one-relator group, and offered an alternative proof of Theorem 1.1. Following the work of Borisov–Sapir [@MR2138070] and Druţu–Sapir [@MR2115010], more examples of finitely generated non-linear residually finite groups have been discovered or re-discovered. In particular, Tholozan and Tsouvalas in [@tholozan2022residually] provided the first examples of finitely generated non-linear hyperbolic residually finite groups. In [@MR2533795], Borisov and Sapir also proved that ascending HNN extensions of finitely generated free groups are virtually residually $p$ for every sufficiently large prime $p$. Note that this theorem also implies that there are finitely generated non-linear residually $p$ groups from Wehrfritz's non-linear groups, since a virtually linear group is linear by the induced representation. Recently, Chong and Wise [@MR4388367] showed that there exist uncountably many non-linear finitely generated residually finite groups. On the other hand, Kharlampovich, Myasnikov and Sapir in [@MR3671739] showed that there are finitely generated non-linear solvable residually finite groups of solvability class 3, by using Bou-Rabee's depth function. The non-linear groups of [@MR2115010] and [@MR3671739] are mutually exclusive; the former have a nonabelian free subgroup and the latter are solvable. We construct the first examples of finitely generated non-linear solvable residually $p$ groups. Denote by $\mathbb{F}_{2}$ the finite field of order 2.
For each nonzero integer $n$, define an injective endomorphism $$\begin{aligned} \rho_{n} & :\mathrm{GL}\left(3,\,\mathbb{F}_{2}\left[t,\,t^{-1}\right]\right)\to\mathrm{GL}\left(3,\,\mathbb{F}_{2}\left[t,\,t^{-1}\right]\right)\end{aligned}$$ by $$\begin{aligned} \rho_{n}\left(\begin{array}{ccc} a_{11}\left(t\right) & a_{12}\left(t\right) & a_{13}\left(t\right)\\ a_{21}\left(t\right) & a_{22}\left(t\right) & a_{23}\left(t\right)\\ a_{31}\left(t\right) & a_{32}\left(t\right) & a_{33}\left(t\right) \end{array}\right) & :=\left(\begin{array}{ccc} a_{11}\left(t^{n}\right) & a_{12}\left(t^{n}\right) & a_{13}\left(t^{n}\right)\\ a_{21}\left(t^{n}\right) & a_{22}\left(t^{n}\right) & a_{23}\left(t^{n}\right)\\ a_{31}\left(t^{n}\right) & a_{32}\left(t^{n}\right) & a_{33}\left(t^{n}\right) \end{array}\right).\end{aligned}$$ Define two matrices $c$ and $d$ in $\mathrm{GL}\left(3,\,\mathbb{F}_{2}\left[t,t^{-1}\right]\right)$ by $$\begin{aligned} c & :=\left(\begin{array}{ccc} t & 1+t & 1+t\\ 1+t & t & 1+t\\ 1+t & 1+t & t \end{array}\right),\;d:=\left(\begin{array}{ccc} t & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end{array}\right).\end{aligned}$$ Let $G$ be the group generated by $c$ and $d$. By induction, for every nonzero integer $n$ we have $$\begin{aligned} c^{n} & =\rho_{n}\left(c\right),\;d^{n}=\rho_{n}\left(d\right).\end{aligned}$$ Thus, the restriction $\rho_{n}|_{G}$ becomes an injective endomorphism from $G$ into itself. Denote by $H_{n}$ the HNN extension of $G$ by $\rho_{n}|_{G}$. We state the main results. *Theorem 2*. For every nonzero integer $n$, $H_{n}$ is solvable of class 4. *Theorem 3*. For every integer $n$ such that $\left|n\right|>1$, $H_{n}$ is non-linear. In our proof of Theorem 1.3 in Section 3, we employ the method (Theorem 3.1) that Wehrfritz used to establish Theorem 1.1. 
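The identities $c^{n}=\rho_{n}\left(c\right)$ and $d^{n}=\rho_{n}\left(d\right)$, as well as the equality $\det\left(c\right)=\det\left(d\right)=t$ observed in Section 2, can be spot-checked by computer algebra. The following SymPy sketch is ours, not from the paper; it performs the matrix arithmetic over $\mathbb{Z}[t]$ and reduces coefficients modulo 2 afterwards.

```python
import sympy as sp

t = sp.symbols('t')

# The two generators of G, read off from the paper (entries in F_2[t]).
c = sp.Matrix([[t,     1 + t, 1 + t],
               [1 + t, t,     1 + t],
               [1 + t, 1 + t, t    ]])
d = sp.diag(t, 1, 1)

def mod2(M):
    """Reduce every polynomial entry of M modulo 2."""
    return M.applyfunc(lambda e: sp.Poly(sp.expand(e), t, modulus=2).as_expr())

def rho(M, n):
    """The endomorphism rho_n: substitute t -> t^n in every entry."""
    return M.applyfunc(lambda e: e.subs(t, t**n))

# c^n = rho_n(c) and d^n = rho_n(d) over F_2[t] for a few exponents.
for n in (2, 3, 4, 5):
    assert mod2(c**n) == mod2(rho(c, n))
    assert mod2(d**n) == mod2(rho(d, n))

# det(c) = det(d) = t over F_2[t, t^{-1}], as used in Section 2.
assert sp.Poly(c.det(), t, modulus=2).as_expr() == t
assert d.det() == t
```

Working over $\mathbb{Z}[t]$ first and reducing afterwards avoids implementing $\mathbb{F}_{2}[t]$ arithmetic by hand; SymPy's `modulus=2` option performs the coefficient reduction.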
Generalizing $\rho_{n}$, define an injective endomorphism $$\begin{aligned} \rho_{n}^{m} & :\mathrm{GL}\left(m,\,\mathbb{F}_{2}\left[t,\,t^{-1}\right]\right)\to\mathrm{GL}\left(m,\,\mathbb{F}_{2}\left[t,\,t^{-1}\right]\right)\end{aligned}$$ for every integer $m$ such that $m\ge3$, by requiring that each $A\in\mathrm{GL}\left(m,\,\mathbb{F}_{2}\left[t,t^{-1}\right]\right)$ satisfies $$\begin{aligned} \left(\rho_{n}^{m}\left(A\right)\right)_{ij}\left(t\right) & =A_{ij}\left(t^{n}\right).\end{aligned}$$ *Corollary 4*. For every pair of integers $\left(m,n\right)$ such that $m\ge3$ and $\left|n\right|>1$, the HNN extension of $\mathrm{GL}\left(m,\,\mathbb{F}_{2}\left[t,t^{-1}\right]\right)$ by $\rho_{n}^{m}$ is finitely generated, non-linear, and residually finite. *Proof.* Suslin in [@MR0472792] proved that for $m\ge3$, $\mathrm{SL}\left(m,\,\mathbb{F}_{2}\left[t,t^{-1}\right]\right)$ is finitely generated by elementary matrices. Thus, $\mathrm{GL}\left(m,\,\mathbb{F}_{2}\left[t,t^{-1}\right]\right)$ is also finitely generated, and by [@MR2138070], the HNN extension is residually finite. The non-linearity follows from Theorem 1.3 and the fact that a subgroup of a linear group is also linear.$\qedhere$ ◻ *Theorem 5*. For every integer $n$, $H_{n}$ is a residually $2$ group if and only if $n$ is odd. Combining Theorem 1.2, Theorem 1.3 and Theorem 1.5, we establish the result stated in the abstract. *Corollary 6*. For every integer $n$ such that $\left|n\right|>1$ and $n$ is odd, $H_{n}$ is finitely generated, non-linear, solvable, and residually 2. Note that $H_{n}$ is an ascending HNN extension of a finitely generated linear group, making it residually finite by Borisov–Sapir [@MR2138070]. In Section 4, we will prove that $H_{n}$ is an extension of the Baumslag–Solitar group $BS\left(1,n\right)$ by a nilpotent group (Lemma 4.5). The group $H_{n}$ shares solvability and residual properties with $BS\left(1,n\right)$.
For any nonzero integer $n$, both $BS\left(1,n\right)$ and $H_{n}$ are solvable (Theorem 1.2), residually finite [@MR0285589], and residually 2 if and only if $n$ is odd ([@MR3076943 Teorema 2], Theorem 1.5). *Remark 7*. The argument we have used to prove Corollary 1.4 can be applied to any supergroup $\widetilde{G}$ of $G$ in $\mathrm{GL}\left(m,\,\mathbb{F}_{2}\left[t,t^{-1}\right]\right)$ such that $\rho_{n}^{m}$ induces an endomorphism on $\widetilde{G}$. For example, since $\rho_{-1}$ is an involution of $\mathrm{GL}\left(3,\,\mathbb{F}_{2}\left[t,t^{-1}\right]\right)$, we define the unitary group of the form induced by $\rho_{-1}$: $$\begin{aligned} U & :=\left\{ A\in\mathrm{GL}\left(3,\,\mathbb{F}_{2}\left[t,\,t^{-1}\right]\right)\;:\;A^{T}\rho_{-1}\left(A\right)=I\right\} ,\end{aligned}$$ where $A^{T}$ means the matrix transpose. Then $U$ is finitely generated: it is generated by $c$, $d$ and the orthogonal subgroup of $\mathrm{SL}\left(3,\,\mathbb{F}_{2}\right)$, which is isomorphic to the symmetric group $S_{3}$. This is easily proved by the method of Cooper and Long in [@MR1431138], using the action on the Bruhat–Tits building. Therefore, for any integer $n$ such that $\left|n\right|>1$, the HNN extension of $U$ by $\rho_{n}|_{U}$ is finitely generated, non-linear, and residually finite. The rest of this paper is organized as follows. In Section 2, we introduce basic definitions and prove $\mathrm{Theorem\;1.2}$. We prove $\mathrm{Theorem\;1.3}$ in Section 3, and $\mathrm{Theorem\;1.5}$ in Section 4. *Acknowledgements 1*. This work was supported by the National Research Foundation of Korea (grant number 2020R1C1C1A01006819) and the Samsung Science and Technology Foundation (project number SSTF-BA2001-01). The author thanks Sang-hyun Kim for valuable and encouraging comments.
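Remark 1.7 asserts in particular that $c$ and $d$ lie in $U$, that is, $A^{T}\rho_{-1}\left(A\right)=I$ over $\mathbb{F}_{2}\left[t,t^{-1}\right]$ for $A\in\left\{ c,d\right\}$. The following SymPy sketch (our own check, not from the paper) verifies this: since the relevant products are Laurent polynomials whose denominators divide $t$, we multiply by $t$ before reducing coefficients modulo 2.

```python
import sympy as sp

t = sp.symbols('t')

# Generators c and d of G, as defined in Section 1 of the paper.
c = sp.Matrix([[t,     1 + t, 1 + t],
               [1 + t, t,     1 + t],
               [1 + t, 1 + t, t    ]])
d = sp.diag(t, 1, 1)

def rho_minus_one(M):
    """rho_{-1}: substitute t -> t^{-1} in every entry."""
    return M.applyfunc(lambda e: sp.cancel(e.subs(t, 1 / t)))

def mod2_cleared(M, clear):
    """Multiply entries by `clear` to remove denominators, then reduce mod 2."""
    return M.applyfunc(
        lambda e: sp.Poly(sp.cancel(clear * e), t, modulus=2).as_expr())

# A lies in U  iff  A^T rho_{-1}(A) = I; compare t * LHS with t * I mod 2.
target = mod2_cleared(sp.eye(3), t)
assert mod2_cleared(c.T * rho_minus_one(c), t) == target
assert mod2_cleared(d.T * rho_minus_one(d), t) == target
```

Clearing the denominator by the single factor $t$ suffices here because every entry of $A^{T}\rho_{-1}\left(A\right)$ for $A\in\left\{ c,d\right\}$ has $t$-degree at least $-1$.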
# Proof of Theorem 1.2 Let us define $b_{0}:=dc^{-1}$ in $G$ and define a matrix $X\in\mathrm{GL}\left(3,\,\mathbb{F}_{2}\left(t\right)\right)$ by $$\begin{aligned} X & :=\left(\begin{array}{ccc} 0 & 0 & 1\\ 1+t^{-1} & 1+t & 1+t\\ 1+t^{-1} & 1+t^{-1} & t^{-1} \end{array}\right).\end{aligned}$$ A direct computation yields $$\begin{aligned} & Xb_{0}X^{-1}=\left(\begin{array}{ccc} 0 & 0 & 1\\ 1 & 0 & 1\\ 0 & 1 & 1 \end{array}\right),\\ & Xd^{-1}X^{-1}=\left(\begin{array}{ccc} 1 & 0 & 0\\ 1 & 1+t^{-1} & 1\\ 1 & t^{-1} & 0 \end{array}\right),\end{aligned}$$ where $\left(\begin{array}{ccc} 0 & 0 & 1\\ 1 & 0 & 1\\ 0 & 1 & 1 \end{array}\right)$ (resp. $\left(\begin{array}{ccc} 1 & 0 & 0\\ 1 & 1+t^{-1} & 1\\ 1 & t^{-1} & 0 \end{array}\right)$) is equal to $t^{-1}x$ (resp. $t^{-4}yxy$) in $\mathrm{GL}\left(3,\,\mathbb{F}_{2}\left(t\right)\right)$ as in [@lee2023characterizing Appendix A]. Thus, we use the results on $x$ and $yxy$ in $\mathrm{PGL}\left(3,\,\mathbb{F}_{2}\left(t\right)\right)$ in [@lee2023characterizing Appendix A] to compute the presentation of $G$, generated by $c$ and $d$, via $$\begin{aligned} \mathrm{SL}\left(3,\,\mathbb{F}_{2}\left(t\right)\right) & \cong\mathrm{PSL}\left(3,\,\mathbb{F}_{2}\left(t\right)\right).\end{aligned}$$ From now on, for a list $W$ of elements in a group $A$, we denote by $\left\langle W\right\rangle _{A}$ (resp. $\left\langle \!\!\left\langle W\right\rangle \!\!\right\rangle _{A}$) the subgroup generated (resp. normally generated) by the elements in $W$. We omit the subscript $A$ when there is no room for confusion. We use the commutator convention $\left[a,b\right]=a^{-1}b^{-1}ab$. Define a subgroup $NC:=\left\langle d^{m}b_{0}d^{-m}\;:\;m\in\mathbb{Z}\right\rangle$ of $G$. Note that $G$ splits over $NC$ satisfying $G=NC\rtimes\left\langle d\right\rangle$ via the determinant map. In addition, for each nonzero integer $i$, put $$\begin{aligned} b_{i} & :=\left[b_{0},\,d^{i}\right].\end{aligned}$$ *Lemma 8*. 
The group $NC$ is presented by the generators $\left\{ b_{i}\::\:i\in\mathbb{Z}\right\}$ and the following relations: $\left.1_{b}\right)$ $1=b_{i}^{4},\;i\in\mathbb{Z}$; $\left.2_{b}\right)$ $\left[b_{0},b_{i}\right]=b_{i}^{2},\;i\ne0$; $\left.3_{b}\right)$ $\left[b_{i},b_{j}\right]=b_{i}^{2}b_{j-i}^{2}b_{j}^{2},\;i<j,\;i\ne0\ne j$; $\left.4_{b}\right)$ $b_{i}^{2}=b_{-i}^{2},\;i>0$. *Proof.* See Appendix A. $\qedhere$ ◻ Put $E_{2^{i}}$ as the elementary abelian group of order $2^{i}$. Given two positive integers $i\le j$, define a map $\mu_{ij}:E_{2^{i}}\to E_{2^{j}}$ carrying the $m$-th standard generator of $E_{2^{i}}$ to the $m$-th standard generator of $E_{2^{j}}$. Then, the pair $\left\langle E_{2^{i}},\,\mu_{ij}\right\rangle$ is a direct system, and define $E_{2^{\infty}}$ to be the direct limit. *Corollary 9*. The derived subgroup of $NC$ is isomorphic to $E_{2^{\infty}}$. Moreover, via the abelianization the group $NC$ admits the following short exact sequence: $$\begin{aligned} 1 & \longrightarrow E_{2^{\infty}}\longrightarrow NC\longrightarrow\mathbb{Z}/4\mathbb{Z}\times E_{2^{\infty}}\longrightarrow1.\end{aligned}$$ *Proof.* According to Lemma A.1 (see Appendix A), the center of $NC$ contains $$\left\langle b_{i}^{2}\;:\;i\in\mathbb{Z}\right\rangle _{NC}=\left\langle b_{i}^{2}\;:\;i\ge0\right\rangle _{NC}\simeq E_{2^{\infty}}.$$ Furthermore, due to the relation $\left.2_{b}\right)$, every $b_{i}^{2}$ for $i\ge1$ is represented by a commutator $\left[b_{0},b_{i}\right]$. Thus, the derived subgroup of $NC$ contains a normal subgroup $$\left\langle b_{i}^{2}\;:\;i\ge1\right\rangle _{NC}\simeq E_{2^{\infty}}.$$ Given the presentation of $NC$ provided by Lemma 2.1, the quotient group $NC/\left\langle b_{i}^{2}\;:\;i\ge1\right\rangle _{NC}$ is generated by the image of $b_{i}$ for $i\in\mathbb{Z}$, and is abelian, as indicated by $\left.2_{b}\right)$ and $\left.3_{b}\right)$ in $\mathrm{Lemma}\;2.1$. Consequently, $\left\langle b_{i}^{2}\;:\;i\ge1\right\rangle _{NC}$ also contains the derived subgroup. $\qedhere$ ◻ *Corollary 10*.
The group $G$ is solvable of class 3. *Proof.* Combining $G=NC\rtimes\left\langle d\right\rangle$ with the short exact sequence on $NC$ in Corollary 2.2, we conclude that $G$ is solvable of class 3. $\qedhere$ ◻ Let us consider $H_{n}$. Put $t$ as the additional generator of $H_{n}$ associated with the HNN extension of $G$. Define subgroups $\widetilde{NC}_{n}$ and $\widetilde{G}_{n}$ of $H_{n}$ by $$\begin{aligned} \widetilde{NC}_{n} & :=\bigcup_{i=0}^{\infty}t^{-i}NCt^{i},\;\widetilde{G}_{n}:=\bigcup_{i=0}^{\infty}t^{-i}Gt^{i}.\end{aligned}$$ Observe that $\det\left(c\right)=t=\det\left(d\right)$ by direct computation. Because $NC=\ker\left(\det|_{G}\right)$ and $\det\left(\rho_{n}\left(g\right)\right)\left(t\right)=\det\left(g\right)\left(t^{n}\right)$ for every $g\in G$, $\rho_{n}$ induces an endomorphism on $NC$. Therefore, whenever $i_{1}<i_{2}$ we have $$\begin{aligned} \left(t^{-i_{1}}NCt^{i_{1}}\right) & \subset\left(t^{-i_{2}}NCt^{i_{2}}\right),\;\left(t^{-i_{1}}Gt^{i_{1}}\right)\subset\left(t^{-i_{2}}Gt^{i_{2}}\right).\end{aligned}$$ *Lemma 11*. The group $\widetilde{NC}_{n}$ is nilpotent of class 2 for every nonzero integer $n$. *Proof.* By the presentation in Lemma 2.1, the quotient group $NC/\left\langle b_{i}^{2}\;:\;i\ge0\right\rangle _{NC}$ is abelian and isomorphic to $E_{2^{\infty}}$. Therefore, for any element $g\in NC$, $\left\langle b_{i}^{2}\;:\;i\ge0\right\rangle _{NC}$ contains $g^{2}$. According to Lemma 2.1, $\widetilde{NC}_{n}$ is generated by $\left\{ t^{-i}b_{j}t^{i}\::\;j\in\mathbb{Z},\:i\ge0\right\}$.
Let us define $$\begin{aligned} a_{i,j} & :=t^{-i}b_{j}t^{i}.\end{aligned}$$ Since $t^{-i}NCt^{i}$ is isomorphic to $NC$, the center of $\widetilde{NC}_{n}$ contains $$\begin{aligned} \left\langle a_{i,j}^{2}\;:\;j\ge0,\:i\ge0\right\rangle _{\widetilde{NC}_{n}}.\end{aligned}$$ As in the proof of Corollary 2.2, the derived subgroup of $\widetilde{NC}_{n}$ contains $\left\langle a_{i,j}^{2}\;:\;j\ge1,\:i\ge0\right\rangle _{\widetilde{NC}_{n}}$, since $a_{i,j}^{2}=\left[a_{i,0},\,a_{i,j}\right]$ from the relation $\left.2_{b}\right)$ in Lemma 2.1. Define $\widetilde{Q}_{n}$ by $$\begin{aligned} \widetilde{Q}_{n} & :=\widetilde{NC}_{n}/\left\langle a_{i,j}^{2}\;:\;j\ge1,\:i\ge0\right\rangle _{\widetilde{NC}_{n}},\end{aligned}$$ and define $q_{n}:\widetilde{NC}_{n}\to\widetilde{Q}_{n}$ to be the quotient map. Choose two images of generators $q_{n}\left(a_{i_{1},j_{1}}\right)$ and $q_{n}\left(a_{i_{2},j_{2}}\right)$ in $\widetilde{Q}_{n}$. Without loss of generality, suppose $i_{2}\le i_{1}$. Since $a_{i_{2},j_{2}}\in t^{-i_{1}}NCt^{i_{1}}$ and $t^{-i_{1}}NCt^{i_{1}}\cong NC$, from the short exact sequence of Corollary 2.2, the images of $a_{i_{1},j_{1}}$ and $a_{i_{2},j_{2}}$ commute in the quotient group $$\begin{aligned} t^{-i_{1}}NCt^{i_{1}}/\left\langle a_{i_{1},j}^{2}\;:\;j\ge1\right\rangle _{t^{-i_{1}}NCt^{i_{1}}} & =t^{-i_{1}}NCt^{i_{1}}/\left\langle a_{i,j}^{2}\;:\;j\ge1,\:i_{1}\ge i\ge0\right\rangle _{t^{-i_{1}}NCt^{i_{1}}}.\end{aligned}$$ In other words, the subgroup $\left\langle a_{i_{1},j}^{2}\;:\;j\ge1\right\rangle _{\widetilde{NC}_{n}}$ includes the commutator $\left[a_{i_{1},j_{1}},a_{i_{2},j_{2}}\right]$. Thus, $q_{n}\left(a_{i_{1},j_{1}}\right)$ and $q_{n}\left(a_{i_{2},j_{2}}\right)$ commute in $\widetilde{Q}_{n}$, which implies that $\widetilde{Q}_{n}$ is abelian. We conclude that $\left\langle a_{i,j}^{2}\;:\;j\ge1,\:i\ge0\right\rangle _{\widetilde{NC}_{n}}$ is the derived subgroup of $\widetilde{NC}_{n}$. 
Because the center contains the derived subgroup, $\widetilde{NC}_{n}$ is nilpotent of class 2. $\qedhere$ ◻ *Lemma 12*. The group $\widetilde{G}_{n}$ has a faithful linear representation into $\mathrm{GL}\left(3,\,\mathcal{P}\left(\mathbb{F}_{2}\right)\right)$, where $\mathcal{P}\left(\mathbb{F}_{2}\right)$ is the field of Puiseux series with coefficients in $\mathbb{F}_{2}$. Moreover, via the determinant map, $\widetilde{G}_{n}$ admits the following split short exact sequence: $$\begin{aligned} 1 & \longrightarrow\widetilde{NC}_{n}\longrightarrow\widetilde{G}_{n}\longrightarrow\mathbb{Z}\left[\frac{1}{n}\right]\longrightarrow1,\end{aligned}$$ where $\mathbb{Z}\left[\frac{1}{n}\right]$ is the additive group of the ring of $S$-integers. *Proof.* Suppose the indeterminate of $\mathcal{P}\left(\mathbb{F}_{2}\right)$ is $t$. Denote by $\mathbb{F}_{2}\left[\mathcal{P}\right]$ the subring of $\mathcal{P}\left(\mathbb{F}_{2}\right)$ of series of finite length. For any coprime pair of integers $\left(a,b\right)$ such that $a\ne0\ne b$, abusing notation, define $\rho_{\frac{a}{b}}:\mathrm{GL}\left(3,\,\mathbb{F}_{2}\left[\mathcal{P}\right]\right)\to\mathrm{GL}\left(3,\,\mathbb{F}_{2}\left[\mathcal{P}\right]\right)$ by $$\begin{aligned} \rho_{\frac{a}{b}}\left(\begin{array}{ccc} a_{11}\left(t\right) & a_{12}\left(t\right) & a_{13}\left(t\right)\\ a_{21}\left(t\right) & a_{22}\left(t\right) & a_{23}\left(t\right)\\ a_{31}\left(t\right) & a_{32}\left(t\right) & a_{33}\left(t\right) \end{array}\right) & :=\left(\begin{array}{ccc} a_{11}\left(t^{\frac{a}{b}}\right) & a_{12}\left(t^{\frac{a}{b}}\right) & a_{13}\left(t^{\frac{a}{b}}\right)\\ a_{21}\left(t^{\frac{a}{b}}\right) & a_{22}\left(t^{\frac{a}{b}}\right) & a_{23}\left(t^{\frac{a}{b}}\right)\\ a_{31}\left(t^{\frac{a}{b}}\right) & a_{32}\left(t^{\frac{a}{b}}\right) & a_{33}\left(t^{\frac{a}{b}}\right) \end{array}\right).\end{aligned}$$ Then, $\rho_{\frac{a}{b}}$ is an automorphism of 
$\mathrm{GL}\left(3,\,\mathbb{F}_{2}\left[\mathcal{P}\right]\right)$. For each integer $i\ge0$, define a representation $\mathrm{rep}_{i}:t^{-i}Gt^{i}\to\mathrm{GL}\left(3,\,\mathbb{F}_{2}\left[\mathcal{P}\right]\right)$ as for each $g\in G$, $$\begin{aligned} \mathrm{rep}_{i}\left(t^{-i}gt^{i}\right) & :=\rho_{n^{-i}}\left(g\right).\end{aligned}$$ We see each $\mathrm{rep}_{i}$ is faithful by construction. Moreover, for any pair of integers $\left(i,j\right)$ such that $i\le j$, $\mathrm{rep}_{i}$ and $\mathrm{rep}_{j}$ have the same image of each element in $t^{-i}Gt^{i}$. Indeed, for each $g\in G,$ $$\begin{aligned} \mathrm{rep}_{i}\left(t^{-i}gt^{i}\right) & =\rho_{n^{-i}}\left(g\right)=\rho_{n^{-j}}\left(\rho_{n^{j-i}}\left(g\right)\right)=\rho_{n^{-j}}\left(t^{j-i}gt^{i-j}\right)=\mathrm{rep}_{j}\left(t^{-i}gt^{i}\right).\end{aligned}$$ Define a representation $\mathrm{rep}:\widetilde{G}_{n}\to\mathrm{GL}\left(3,\,\mathbb{F}_{2}\left[\mathcal{P}\right]\right)$ as follows: for each $g\in G$ and $i\ge0$, $$\begin{aligned} \mathrm{rep}\left(t^{-i}gt^{i}\right) & :=\mathrm{rep}_{i}\left(t^{-i}gt^{i}\right).\end{aligned}$$ Then, it is faithful, as every $\mathrm{rep}_{i}$ is faithful. By construction, we have $$\begin{aligned} \det\left(\mathrm{rep}\left(\widetilde{G}_{n}\right)\right) & =\left\{ t^{z}\;:\;z\in\mathbb{Z}\left[\frac{1}{n}\right]\right\} .\end{aligned}$$ Since $\ker\left(\det|_{G}\right)=NC$, we also have $$\begin{aligned} \ker\left(\det\circ\mathrm{rep}_{i}\right) & =t^{-i}NCt^{i},\end{aligned}$$ which implies $$\begin{aligned} \ker\left(\det\circ\mathrm{rep}\right) & =\widetilde{NC}_{n},\end{aligned}$$ and we obtain the short exact sequence desired. Finally, the sequence splits, as for each nonnegative integer $i$, $$\mathrm{rep}\left(t^{-i}dt^{i}\right)=\left(\begin{array}{ccc} t^{n^{-i}}\\ & 1\\ & & 1 \end{array}\right).\;\qedhere$$ ◻ *Proof of Theorem [1.2]{.nodecor} 1*. 
By definition, we have $$\begin{aligned} \widetilde{G}_{n} & =\left\langle t^{-i}dt^{i},\,t^{-i}b_{0}t^{i}\;:\;i\ge0\right\rangle _{H_{n}}=\left\langle \!\!\left\langle c,\,d\right\rangle \!\!\right\rangle _{H_{n}},\end{aligned}$$ which implies $H_{n}=\widetilde{G}_{n}\rtimes\left\langle t\right\rangle _{H_{n}}$. Therefore, from Lemmas 2.4 and 2.5, $H_{n}$ has a subnormal series: $$\begin{aligned} 1 & \trianglelefteq\left(\widetilde{NC}_{n}\right)'\trianglelefteq\widetilde{NC}_{n}\trianglelefteq\widetilde{G}_{n}\trianglelefteq H_{n},\end{aligned}$$ where $\left(\widetilde{NC}_{n}\right)'$ is the derived subgroup of $\widetilde{NC}_{n}$. Lemmas 2.4 and 2.5 also guarantee that each factor is abelian. $\qed$ # Proof of Theorem 1.3 We start this section with a result implicit in the proof of Theorem 1.1 by Wehrfritz [@MR0367080 Corollary 2.4]. *Theorem 13*. Suppose a group $H$ is generated by two non-torsion elements $a$ and $b$, and there exists an isomorphism $\iota:H\to H'$ such that $H'\subset H$ and, for an integer $r$ with $\left|r\right|>1$, $$\begin{aligned} \iota\left(a\right) & =a^{r},\;\iota\left(b\right)=b^{r}.\end{aligned}$$ Then, if the HNN extension $H*_{\iota}$ is linear, there exists a nonzero integer $s$ such that $\left\langle a^{s},b^{s}\right\rangle$ is nilpotent. *Proof.* We reproduce the corresponding portion of the proof of [@MR0367080 Corollary 2.4]. Put $t$ as the additional generator of $H*_{\iota}$ associated with the HNN extension of $H$. Suppose that $H*_{\iota}$ is linear over a field $F$, and let us identify $H*_{\iota}$ with the image of the representation. We observe that the subgroups $\left\langle a,t\right\rangle$ and $\left\langle b,t\right\rangle$ are isomorphic to the Baumslag-Solitar group $BS\left(1,r\right)$, which is solvable. 
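The commutator arithmetic in $BS\left(1,r\right)$ can be checked concretely in the standard affine matrix model, sending $a$ to the unit upper-triangular matrix and $t$ to $\mathrm{diag}\left(r,1\right)$. The sketch below is illustrative only (the matrix model and the helper names are ours, not part of the paper); it verifies $tat^{-1}=a^{r}$ and the value $\left[a^{m},\,t^{-m}\right]=a^{m\left(r^{m}-1\right)}$, with the convention $\left[x,y\right]=x^{-1}y^{-1}xy$ used throughout.

```python
from fractions import Fraction

def mul(A, B):
    # product of two 2x2 matrices with exact rational entries
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def inv(A):
    # inverse of a 2x2 matrix via the adjugate formula
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [[A[1][1] / det, -A[0][1] / det], [-A[1][0] / det, A[0][0] / det]]

def comm(x, y):
    # commutator [x, y] = x^{-1} y^{-1} x y
    return mul(mul(inv(x), inv(y)), mul(x, y))

def power(A, k):
    # k-th power, allowing negative exponents
    R = [[Fraction(1), Fraction(0)], [Fraction(0), Fraction(1)]]
    X = A if k >= 0 else inv(A)
    for _ in range(abs(k)):
        R = mul(R, X)
    return R

r, m = 3, 2
a = [[Fraction(1), Fraction(1)], [Fraction(0), Fraction(1)]]
t = [[Fraction(r), Fraction(0)], [Fraction(0), Fraction(1)]]

assert mul(mul(t, a), inv(t)) == power(a, r)         # t a t^{-1} = a^r
assert comm(power(a, m), power(t, -m)) == power(a, m * (r ** m - 1))
print("verified")
```

Exact `Fraction` arithmetic is used so that the negative powers of $t$ introduce no rounding.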
According to a theorem of Mal'cev [@MR0335656 Theorem 3.6], there exists an integer $m>0$ such that every solvable linear subgroup of $H*_{\iota}$ has a triangularizable normal subgroup of finite index dividing $m$. Therefore, two elements of $H*_{\iota}$, $$\begin{aligned} \left[a^{m},\,t^{-m}\right] & =a^{m\left(r^{m}-1\right)},\;\left[b^{m},\,t^{-m}\right]=b^{m\left(r^{m}-1\right)},\end{aligned}$$ are unipotent. Choose $s=m\left(r^{m}-1\right)$. By the assumption, $a^{s}$ is not a torsion element. Moreover, as $a^{s}$ is also unipotent, the field $F$ must have characteristic 0. By applying [@MR0367080 Lemma 2.3], we conclude that the subgroup $\left\langle a^{s},b^{s}\right\rangle$ of $H*_{\iota}$ is nilpotent. $\qedhere$ ◻ *Lemma 14*. The group $G$ is presented by the generators $c,d$ and the following relations: $1=x_{i}^{4},$ for each integer $i\ge0,$ $1=\left[x_{i}^{2},\,d^{-1}\right],$ for each integer $i\ge0,$ where $x_{i}$ is a word in $c$ and $d$, recursively defined by $$\begin{aligned} x_{0} & :=b_{0}=dc^{-1},\;x_{i+1}:=\left[d^{-1},\,x_{i}\right],\;i\ge0.\end{aligned}$$ *Proof.* At first, we construct a presentation of $G$ directly from the short exact sequence: $$\begin{aligned} 1 & \longrightarrow NC\longrightarrow G\longrightarrow\left\langle d\right\rangle \longrightarrow1,\end{aligned}$$ and the presentation of $NC$ in Lemma 2.1. Since $$b_{-i}=\left[b_{0},\,d^{-i}\right]=d^{i}b_{i}^{-1}d^{-i},$$ all of $\left.4_{b}\right)$ and the relations involving $b_{i}$, $i<0$, are redundant. Thus, we have a presentation in [@lee2023characterizing Theorem A.5] for $G$, without $\left.1_{o}\right)$ and replacing $yxy$ (resp. $x$) with $d^{-1}$ (resp. $b_{0}$), with the following relations. 
$1=\left[b_{0}^{2},d\right],$ $1=b_{i}^{4},$ for each integer $i\ge0,$ $\left[b_{0},b_{i}\right]=b_{i}^{2},$ for each integer $i\ge1,$ $\left[b_{i},b_{j}\right]=b_{i}^{2}b_{j-i}^{2}b_{j}^{2},\;1\le i<j.$ $\quad\;\:$The presentation we need to prove is also a simplification of that of [@lee2023characterizing Lemma 2.2], whose proof in [@lee2023characterizing Appendix A] from [@lee2023characterizing Theorem A.5] makes no use of the relation $\left.1_{o}\right)$. $\qedhere$ ◻ *Lemma 15*. There exists an injective endomorphism $\eta:G\to G$ satisfying $$\begin{aligned} \eta\left(b_{0}\right) & =\left[d^{-1},\,b_{0}\right],\;\eta\left(d\right)=d.\end{aligned}$$ *Proof.* As in the proof of [@lee2023characterizing Lemma 2.3], define a matrix $Y\in\mathrm{GL}\left(3,\,\mathbb{F}_{2}\left(t\right)\right)$ by $$Y:=\left(\begin{array}{ccc} 1 & 0 & 0\\ 0 & 1 & t^{-1}\\ 0 & t\left(1+t\right)^{-1} & t^{-1}\left(1+t\right)^{-1} \end{array}\right).$$ Then, a direct computation yields $$\begin{aligned} & Yb_{0}Y^{-1}=\left[d^{-1},\,b_{0}\right],\\ & YdY^{-1}=d.\qedhere\end{aligned}$$ ◻ *Lemma 16*. The group $G$ is not finitely presented. *Proof.* Suppose $G$ is finitely presented. Then, there exists a nonnegative integer $M$ such that $G$ is presented by the generators $c,d$ and the following relations: $1=x_{i}^{4},$ for each integer $i$ such that $M\ge i\ge0,$ $1=\left[x_{i}^{2},\,d^{-1}\right],$ for each integer $i$ such that $M\ge i\ge0.$ $\quad\;\:$Choose $M_{0}$ as the smallest one among the set of such integers, and delete every relation involving the index larger than $M_{0}$. By the presentation above, $\left\langle \!\!\left\langle d\right\rangle \!\!\right\rangle _{G}$ is of index $4$ in $G$. 
Furthermore, $G$ splits over it satisfying $$\begin{aligned} G & =\left\langle \!\!\left\langle d\right\rangle \!\!\right\rangle _{G}\rtimes\left\langle x_{0}\right\rangle _{G}.\end{aligned}$$ By the Reidemeister-Schreier process, $\left\langle \!\!\left\langle d\right\rangle \!\!\right\rangle _{G}$ is presented by the generators $$\begin{aligned} d,\,x_{0}dx_{0}^{-1},\,x_{0}^{2}dx_{0}^{-2},\,x_{0}^{-1}dx_{0},\end{aligned}$$ and the relators consisting of rewritten words of $x_{0}^{i}Rx_{0}^{-i}$ in the new generators, $i=0,1,2,3$, where $R$ is a relator of $G$. However, two generators are redundant, because from $\left.2_{x,M}\right)$ we have $x_{0}^{2}dx_{0}^{-2}=d$ and $x_{0}dx_{0}^{-1}=x_{0}^{-1}dx_{0}$. We have $x_{0}^{-1}d^{-1}x_{0}=d^{-1}x_{1}$ by definition, and $$\begin{aligned} \left\langle \!\!\left\langle d\right\rangle \!\!\right\rangle _{G} & =\left\langle x_{1},\,d\right\rangle _{G}.\end{aligned}$$ On the other hand, the new relators are computed as $1=y_{i}^{4},$ for each integer $i$ such that $M_{0}>i\ge0,$ $1=\left[y_{i}^{2},\,d^{-1}\right],$ for each integer $i$ such that $M_{0}>i\ge0,$ where $y_{i}$ is a word in $x_{1}$ and $d$, recursively defined as $$\begin{aligned} y_{0} & :=x_{1},\;y_{i+1}:=\left[d^{-1},\,y_{i}\right],\;i\ge0.\end{aligned}$$ If $M_{0}>0$, by Lemma 3.3 we have $G=\eta^{-1}\left(\left\langle x_{1},\,d\right\rangle _{G}\right)$, which contradicts the minimality of $M_{0}$. Therefore, we must assume $M_{0}=0$. In this case, $x_{1}$ and $d$ freely generate $\left\langle \!\!\left\langle d\right\rangle \!\!\right\rangle _{G}=\left\langle x_{1},\,d\right\rangle _{G}$, which contradicts the solvability of $G$ established in Corollary 2.3. $\qedhere$ ◻ *Proof of Theorem [1.3]{.nodecor} 1*. Take an integer $n$ such that $\left|n\right|>1$, and suppose $H_{n}$ is linear. Then, according to Theorem 3.1, there exists a nonzero integer $s$ such that $\left\langle c^{s},d^{s}\right\rangle$ is nilpotent. 
Since $\rho_{s}$ is injective, $G=\left\langle c,d\right\rangle$ is also nilpotent. However, this leads to a contradiction with $\mathrm{Lemma\;3.4}$, as a finitely generated nilpotent group is finitely presented. $\qed$ # Proof of Theorem 1.5 For a positive integer $n$, let us define a group $Q_{n}$ as $$\begin{aligned} Q_{n} & :=G/\left\langle \!\!\left\langle d^{n}\right\rangle \!\!\right\rangle _{G}.\end{aligned}$$ *Lemma 17*. The group $Q_{n}$ is a semidirect product satisfying $$Q_{n}=\left\langle b_{0},\,b_{1},\,\cdots,\,b_{n-1}\right\rangle _{Q_{n}}\rtimes\left\langle d\right\rangle _{Q_{n}},$$ where $\left\langle d\right\rangle _{Q_{n}}$ is isomorphic to the cyclic group of order $n$ and $\left\langle b_{0},b_{1},\cdots,b_{n-1}\right\rangle _{Q_{n}}$ is presented by the relations: $1=b_{i}^{4},\;0\le i\le n-1,$ $\left[b_{0},b_{i}\right]=b_{i}^{2},\;0<i\le n-1,$ $\left[b_{i},b_{j}\right]=b_{i}^{2}b_{j-i}^{2}b_{j}^{2},\;0<i<j\le n-1,$ $b_{i}^{2}=b_{n-i}^{2},\;1\le i\le\left\lfloor \frac{n}{2}\right\rfloor ,$ where $\left\lfloor \frac{n}{2}\right\rfloor$ is the greatest integer less than or equal to $\frac{n}{2}$. *Proof.* By the proof of Lemma 3.2, the group $Q_{n}$ is presented by the generators $b_{0},\,d$ and the following relations: $1=d^{n},$ $1=\left[b_{0}^{2},d\right],$ $1=b_{i}^{4},$ for each integer $i\ge0,$ $\left[b_{0},b_{i}\right]=b_{i}^{2},$ for each integer $i\ge1,$ $\left[b_{i},b_{j}\right]=b_{i}^{2}b_{j-i}^{2}b_{j}^{2},\;0<i<j,$ where $b_{i}=\left[b_{0},\;d^{i}\right].$ By $\left.d_{n}\right)$ and the definition of $b_{i}$, for every integer $i$ such that $i\ne0,-n$, we have $b_{i+n}=b_{i}$ in $Q_{n}$. 
Therefore, we transform these relations to $1=d^{n},$ $1=\left[b_{0}^{2},d\right],$ $1=b_{i}^{4},$ for each integer $i$ such that $0\le i\le n-1,$ $\left[b_{0},b_{i}\right]=b_{i}^{2},$ for each integer $0<i\le n-1,$ $\left[b_{i},b_{j}\right]=b_{i}^{2}b_{j-i}^{2}b_{j}^{2},\;0<i<j,$ $b_{i}^{2}=b_{-i}^{2},\;i>0,$ where $\left.4_{b}\right)$ from Lemma 2.1 is a relation of $NC$, so we may add this relation for $Q_{n}$. By applying $b_{i+n}=b_{i}$, $\left.4_{b}\right)$ is reduced to $$\begin{aligned} b_{i}^{2} & =b_{n-i}^{2},\;1\le i\le\left\lfloor \frac{n}{2}\right\rfloor ,\end{aligned}$$ which is the relation $\left.c_{n}\right)$. *Claim [1]{.nodecor} 1*. The relation $\left.3_{b}'\right)$ follows from $\left.3_{b,\,n}\right)$ and $\left.d_{n}\right)$. *Proof of Claim [1]{.nodecor} 1*. Given two pairs of integers $\left(i_{1},j_{1}\right)$, $\left(i_{2},j_{2}\right)$ such that $$\begin{aligned} 0 & <i_{1}<j_{1},\:0<i_{2}<j_{2},\:i_{1}\equiv i_{2}\:(\mathrm{mod}\:n),\:j_{1}\equiv j_{2}\:(\mathrm{mod}\:n),\end{aligned}$$ it suffices to prove $$\begin{aligned} & \,\left[b_{i_{1}},\,b_{j_{1}}\right]=\left[b_{i_{2}},\,b_{j_{2}}\right],\\ & b_{i_{1}}^{2}b_{j_{1}-i_{1}}^{2}b_{j_{1}}^{2}\,=b_{i_{2}}^{2}b_{j_{2}-i_{2}}^{2}b_{j_{2}}^{2}.\end{aligned}$$ The equality of (2) directly follows from the equations $b_{i_{1}}=b_{i_{2}}$ and $b_{j_{1}}=b_{j_{2}}$ from $\left.d_{n}\right)$. The equality of (3) follows from $b_{i_{1}}=b_{i_{2}}$, $b_{j_{1}}=b_{j_{2}}$, and $b_{j_{1}-i_{1}}=b_{j_{2}-i_{2}}$, where the last follows from the congruence $$\begin{aligned} j_{1}-i_{1} & \equiv j_{2}-i_{2}\:(\mathrm{mod}\:n).\;\qed\end{aligned}$$ We return to the proof of Lemma 4.1. By Claim 1, the group $Q_{n}$ is presented by the generators $b_{0},d$ and the relations $\left.d_{n}\right)$, $\left.2_{o}\right)$, $\left.1_{b,\,n}\right)$, $\left.2_{b,\,n}\right)$, $\left.3_{b,\,n}\right)$, and $\left.c_{n}\right)$. 
By the presentation, $Q_{n}$ is a semidirect product $\left\langle \!\!\left\langle b_{0}\right\rangle \!\!\right\rangle _{Q_{n}}\rtimes\left\langle d\right\rangle _{Q_{n}}$, where $\left\langle d\right\rangle _{Q_{n}}$ is isomorphic to the cyclic group of order $n$. By the Reidemeister-Schreier process, we compute the presentation of $\left\langle \!\!\left\langle b_{0}\right\rangle \!\!\right\rangle _{Q_{n}}=\left\langle b_{0},b_{1},\cdots,b_{n-1}\right\rangle _{Q_{n}}$ as required. $\qedhere$ ◻ *Corollary 18*. The center of $\left\langle b_{0},b_{1},\cdots,b_{n-1}\right\rangle _{Q_{n}}$ is generated by $b_{0}^{2},b_{1}^{2},\cdots,b_{\left\lfloor \frac{n}{2}\right\rfloor }^{2}$ and the group $\left\langle b_{0},b_{1},\cdots,b_{n-1}\right\rangle _{Q_{n}}$ admits a short exact sequence over its center $$\begin{aligned} 1 & \longrightarrow E_{2^{\left\lfloor \frac{n}{2}\right\rfloor +1}}\longrightarrow\left\langle b_{0},\,b_{1},\,\cdots,\,b_{n-1}\right\rangle _{Q_{n}}\longrightarrow E_{2^{n}}\longrightarrow1.\end{aligned}$$ *Proof.* This is immediate from the presentation in Lemma 4.1. $\qedhere$ ◻ *Corollary 19*. 
For each positive integer $n$, there is an injective homomorphism $$\begin{aligned} \xi_{n} & :\left\langle b_{0},\,b_{1},\,\cdots,\,b_{n-1}\right\rangle _{G}\to\left\langle b_{0},\,b_{1},\,\cdots,\,b_{2n-1}\right\rangle _{Q_{2n}}\end{aligned}$$ satisfying $$\begin{aligned} \xi_{n}\left(b_{i}\right) & =b_{i},\;0\le i\le n-1.\end{aligned}$$ *Proof.* By [@lee2023characterizing Lemma A.3], the center of $\left\langle b_{0},b_{1},\cdots,b_{n-1}\right\rangle _{G}$ is $\left\langle b_{0}^{2},b_{1}^{2},\cdots,b_{n-1}^{2}\right\rangle _{G}$, and induces a short exact sequence: $$\begin{aligned} 1 & \longrightarrow E_{2^{n}}\longrightarrow\left\langle b_{0},\,b_{1},\,\cdots,\,b_{n-1}\right\rangle _{G}\longrightarrow E_{2^{n}}\longrightarrow1.\end{aligned}$$ On the other hand, the center of $\left\langle b_{0},b_{1},\cdots,b_{2n-1}\right\rangle _{Q_{2n}}$ is $\left\langle b_{0}^{2},b_{1}^{2},\cdots,b_{n}^{2}\right\rangle _{Q_{2n}}$, and induces a short exact sequence: $$\begin{aligned} 1 & \longrightarrow E_{2^{n+1}}\longrightarrow\left\langle b_{0},\,b_{1},\,\cdots,\,b_{2n-1}\right\rangle _{Q_{2n}}\longrightarrow E_{2^{2n}}\longrightarrow1.\end{aligned}$$ By applying the four lemma, we obtain the injectivity. $\qedhere$ ◻ For each pair of integers $\left(m,n\right)$ such that $n$ is odd and $m>0$, define a group $P_{n,\,m}$ to be $$P_{n,\,m}:=H_{n}/\left\langle \!\!\left\langle d^{2^{m}},\,t^{2^{m}}\right\rangle \!\!\right\rangle _{H_{n}},$$ and let $\pi_{n,\,m}:H_{n}\to P_{n,\,m}$ be the quotient map. *Lemma 20*. For each pair of integers $\left(m,n\right)$ such that $n$ is odd and $m>0$, the group $P_{n,\,m}$ is a semidirect product $$P_{n,\,m}=Q_{2^{m}}\rtimes\left\langle t\right\rangle _{P_{n,\,m}},$$ where $\left\langle t\right\rangle _{P_{n,\,m}}$ is isomorphic to the cyclic group of order $2^{m}$ and $Q_{2^{m}}$ is generated by $c,d$ in $P_{n,\,m}$. 
*Proof.* From the presentation of $G$ in Lemma 3.2, $P_{n,\,m}$ is presented by the generators $c,d,t$, where $c$ (resp. $d$) is the image of $c$ (resp. $d$) in $H_{n}$, and the relators $$\begin{aligned} R_{G},\,tct^{-1}c^{-n},\,tdt^{-1}d^{-n},\,d^{2^{m}},\,t^{2^{m}},\end{aligned}$$ where $R_{G}$ is the collection of the relators of $G$. From this presentation, we have $$\begin{aligned} P_{n,\,m}/\left\langle \!\!\left\langle c,\,d\right\rangle \!\!\right\rangle _{P_{n,\,m}} & \simeq\left\langle t\right\rangle _{P_{n,\,m}}\simeq\mathbb{Z}/2^{m}\mathbb{Z}.\end{aligned}$$ Therefore, $P_{n,\,m}$ is a semidirect product $\left\langle \!\!\left\langle c,d\right\rangle \!\!\right\rangle _{P_{n,\,m}}\rtimes\left\langle t\right\rangle _{P_{n,\,m}}$. By the Reidemeister-Schreier process, we compute the presentation of $\left\langle \!\!\left\langle c,d\right\rangle \!\!\right\rangle _{P_{n,\,m}}$. A new generating set is $$\left\{ t^{i}ct^{-i},\,t^{i}dt^{-i}\;:\;0\le i\le2^{m}-1\right\} ,$$ which is again generated by $c$ and $d$ by the HNN construction. Likewise, any $t^{i}$ conjugation of a relator from $R_{G}$ is reduced to relators in $R_{G}$. $t^{i}$ conjugations of $d^{2^{m}}$ are also reduced to $d^{2^{m}}$, as for each integer $i$ such that $0\le i\le2^{m}-1$, $$\begin{aligned} t^{i}d^{2^{m}}t^{-i} & =\left(d^{2^{m}}\right)^{n^{i}}.\end{aligned}$$ $t^{i}$ conjugations of $tct^{-1}c^{-n}$ and $tdt^{-1}d^{-n}$ are deleted, removing $2^{m}-1$ additional generators in the process, except for two relators: $$\begin{aligned} d^{n^{2^{m}}-1},\,c^{n^{2^{m}}-1}.\end{aligned}$$ For any pair of integers $\left(m,n\right)$ such that $m>0$ and $n$ is odd, $2^{m+2}$ divides $n^{2^{m}}-1$, which is easily proved by induction. Thus, the relator $d^{n^{2^{m}}-1}$ is deduced from $d^{2^{m}}$. 
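The induction behind the divisibility $2^{m+2}\mid n^{2^{m}}-1$ for odd $n$ can be spelled out in two steps:

```latex
% base case m = 1: for odd n = 2k+1, k(k+1) is even, hence
n^{2}-1 = (n-1)(n+1) = 4k(k+1) \equiv 0 \pmod{2^{3}}.
% inductive step: factor n^{2^{m+1}}-1; the first factor is divisible
% by 2^{m+2} by the hypothesis and the second factor is even, so the
% product is divisible by 2^{m+3}:
n^{2^{m+1}}-1 = \bigl(n^{2^{m}}-1\bigr)\bigl(n^{2^{m}}+1\bigr).
```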
On the other hand, we decompose $c^{2^{m}}$ as $$\begin{aligned} c^{2^{m}} & =\left(b_{0}^{-1}d\right)^{2^{m}}=b_{0}^{-1}\left(db_{0}^{-1}d^{-1}\right)\left(d^{2}b_{0}^{-1}d^{-2}\right)\cdots\left(d^{2^{m}-1}b_{0}^{-1}d^{-2^{m}+1}\right)d^{2^{m}},\end{aligned}$$ where the last factor cancels since $d^{2^{m}}=1$ in $P_{n,\,m}$. Then, $c^{2^{m}}$ is included in the quotient image of $$\begin{aligned} \left\langle b_{0},\,db_{0}d^{-1},\,\cdots,\,d^{2^{m}-1}b_{0}d^{-2^{m}+1}\right\rangle _{G}\end{aligned}$$ in $P_{n,\,m}$. By [@lee2023characterizing Lemma A.3], we have $$\begin{aligned} 1 & =\left(c^{2^{m}}\right)^{4}=c^{2^{m+2}}\end{aligned}$$ in $P_{n,\,m}$. Since $2^{m+2}\;|\;n^{2^{m}}-1$, this relation, itself deduced from $d^{2^{m}}=1$, implies $c^{n^{2^{m}}-1}=1$. In summary, we have established the isomorphism $$\begin{aligned} \left\langle \!\!\left\langle c,\,d\right\rangle \!\!\right\rangle _{P_{n,\,m}} & \simeq G/\left\langle \!\!\left\langle d^{2^{m}}\right\rangle \!\!\right\rangle _{G}=Q_{2^{m}}.\;\qedhere\end{aligned}$$ ◻ *Lemma 21*. The group $H_{n}$ is a semidirect product satisfying $$H_{n}\simeq\widetilde{NC}_{n}\rtimes BS\left(1,\,n\right),$$ where $BS\left(1,n\right)$ is the Baumslag-Solitar group. 
*Proof.* Lemma 2.5 and the fact $H_{n}=\widetilde{G}_{n}\rtimes\left\langle t\right\rangle _{H_{n}}$ in the proof of Theorem 1.2 (see Section 2) imply that $$\begin{aligned} H_{n} & =\widetilde{G}_{n}\rtimes\left\langle t\right\rangle _{H_{n}}\simeq\left(\widetilde{NC}_{n}\rtimes\mathbb{Z}\left[\frac{1}{n}\right]\right)\rtimes\left\langle t\right\rangle _{H_{n}}.\end{aligned}$$ It suffices to show that $\widetilde{NC}_{n}$ is normal in $H_{n}$, which yields $$\begin{aligned} H_{n} & \simeq\left(\widetilde{NC}_{n}\rtimes\mathbb{Z}\left[\frac{1}{n}\right]\right)\rtimes\left\langle t\right\rangle _{H_{n}}\simeq\widetilde{NC}_{n}\rtimes\left(\mathbb{Z}\left[\frac{1}{n}\right]\rtimes\left\langle t\right\rangle _{H_{n}}\right)\simeq\widetilde{NC}_{n}\rtimes BS\left(1,n\right).\end{aligned}$$ For any pair of integers $\left(i,j\right)$ such that $i\ge0$, take a generator $t^{-i}b_{j}t^{i}$ of $\widetilde{NC}_{n}$. For any integer $l$, a computation yields $$\begin{aligned} d^{l}t^{-i}b_{j}t^{i}d^{-l} & =t^{-i}\left(t^{i}d^{l}t^{-i}\right)b_{j}\left(t^{i}d^{l}t^{-i}\right)^{-1}t^{i}=t^{-i}d^{ln^{i}}b_{j}d^{-ln^{i}}t^{i}\in\widetilde{NC}_{n}.\;\qedhere\end{aligned}$$ ◻ *Proof of Theorem [1.5]{.nodecor} 1*. Moldavanskiĭ [@MR3076943 Teorema 2] showed that the Baumslag-Solitar group $BS\left(1,n\right)$ is residually $p$ if and only if $n\equiv1$ mod $p$. By Lemma 4.5, it suffices to show that for every odd integer $n$, $H_{n}$ is a residually 2 group. Fix an odd integer $n$ and select a nontrivial element $g$ in $H_{n}$. According to Lemma 4.5, there exist $\nu\in\widetilde{NC}_{n}$ and $\beta\in\left\langle d,t\right\rangle _{H_{n}}$ such that $g=\nu\beta$. If $\beta\ne1$, by using the fact that $\widetilde{NC}_{n}$ is normal in $H_{n}$, we apply the fact that $BS\left(1,n\right)$ is residually $2$ in [@MR3076943] to construct the required homomorphism from $H_{n}$ to a 2-group. Therefore, we may assume $g\in\widetilde{NC}_{n}$. 
Due to the normality of $\widetilde{NC}_{n}$, we may further assume $g\in\left\langle b_{i}\;:\;i=0,1,2,\cdots\right\rangle$, composing inner automorphisms with the quotient map if necessary. For some sufficiently large integer $M$, suppose $g\in\left\langle b_{0},b_{1},\cdots,b_{M}\right\rangle$. Choose an integer $r$ such that $M<2^{r}$. Then, we have $$\begin{aligned} \pi_{n,\,r+1}\left(g\right) & \ne1,\end{aligned}$$ since $$\begin{aligned} \xi_{2^{r}} & :\left\langle b_{0},\,b_{1},\,\cdots,\,b_{2^{r}-1}\right\rangle _{G}\to\left\langle b_{0},\,b_{1},\,\cdots,\,b_{2^{r+1}-1}\right\rangle _{Q_{2^{r+1}}}\end{aligned}$$ is injective by Corollary 4.3. $\qed$ *Remark 22*. Lubotzky in [@MR0928062] established a well-known theorem stating that a group $\Gamma$ is linear over a field of characteristic 0 if and only if there is a prime $p$ such that $\Gamma$ admits a $p$-congruence structure of bounded rank $c$. According to the proof of Theorem 1.5, $\left\{ \ker\pi_{n,\,m}\right\} _{m\ge1}$ is a descending chain of normal subgroups of $H_{n}$ such that $$\begin{aligned} \bigcap_{m=1}^{\infty}\ker\pi_{n,\,m} & =\left\{ 1\right\} .\end{aligned}$$ Moreover, for all $m\ge1$, each quotient $\ker\pi_{n,\,1}/\ker\pi_{n,\,m}$ is a finite 2-group. Therefore, we observe that $\left\{ \ker\pi_{n,\,m}\right\} _{m\ge1}$ behaves almost like a 2-congruence structure for $H_{n}$, after formally adding $H_{n}$ itself as the first element in the chain. When $\left|n\right|>1$, the only obstruction is the nonexistence of a uniform bound on the ranks of $\ker\pi_{n,\,m_{1}}/\ker\pi_{n,\,m_{2}}$; if such a bound existed, $H_{n}$ would be linear, violating Theorem 1.3. # Proof of Lemma 2.1 The goal is to prove: *Lemma [2.1]{.nodecor} 1*. 
The group $NC$ is presented by the generators $\left\{ b_{i}\::\:i\in\mathbb{Z}\right\}$ and the following relations: $1=b_{i}^{4},\;i\in\mathbb{Z},$ $\left[b_{0},b_{i}\right]=b_{i}^{2},\;i\ne0,$ $\left[b_{i},b_{j}\right]=b_{i}^{2}b_{j-i}^{2}b_{j}^{2},\;i<j,\;i\ne0\ne j,$ $b_{i}^{2}=b_{-i}^{2},\;i>0.$ *Proof.* From the fact $\det b_{0}=1$ and (1), we apply [@lee2023characterizing Lemma A.2] to $\left\langle b_{0},b_{1},\cdots,b_{n}\right\rangle _{G}$ for each positive integer $n$. Then, this group is presented by the generators $b_{0},b_{1},\cdots,b_{n}$ and the relations: $1=b_{i}^{4},\;0\le i\le n,$ $\left[b_{0},b_{i}\right]=b_{i}^{2},\;0<i\le n,$ $\left[b_{i},b_{j}\right]=b_{i}^{2}b_{j-i}^{2}b_{j}^{2},\;0<i<j\le n.$ $\quad\;\:$Therefore, the subgroup $\left\langle d^{m}b_{0}d^{-m}\;:\;m\le0\right\rangle$ is presented by the generators $b_{0},b_{1},\cdots$ and the relations: $1=b_{i}^{4},\;0\le i,$ $\left[b_{0},b_{i}\right]=b_{i}^{2},\;0<i,$ $\left[b_{i},b_{j}\right]=b_{i}^{2}b_{j-i}^{2}b_{j}^{2},\;0<i<j.$ $\quad\;\:$Define $NC_{0}:=\left\langle d^{m}b_{0}d^{-m}\;:\;m\le0\right\rangle$ and $NC_{k+1}:=dNC_{k}d^{-1}$ for each $k\ge0$. Then, by definition, we have $$\begin{aligned} NC & =\cup_{k=0}^{\infty}NC_{k}.\end{aligned}$$ *Claim [2]{.nodecor} 1*. For each integer $k\ge0$, the subgroup $NC_{k}$ is presented by the generators $$\begin{aligned} b_{-k},\,b_{-k+1},\,\cdots,\,b_{0},\,b_{1},\,\cdots\end{aligned}$$ and the relations: $1=b_{i}^{4},\;-k\le i,$ $\left[b_{0},b_{i}\right]=b_{i}^{2},\;i\ne0,\;-k\le i,$ $\left[b_{i},b_{j}\right]=b_{i}^{2}b_{j-i}^{2}b_{j}^{2},\;-k\le i<j,\;i\ne0\ne j,$ $b_{i}^{2}=b_{-i}^{2},\;0<i\le k.$ Let us postpone the proof of Claim 2 for a while. Then, the rest is canonical. Consider a relator $R$ in $NC$. Then, there exists an integer $k$ such that $R\in NC_{k}$, which implies $R$ is included in the normal closure of the relators of $NC_{k}$. 
Therefore, by collecting all of the relators of $\left.1_{b,\,-k}\right)\text{\textendash}\left.4_{b,\,-k}\right)$ in Claim 2 for every $k$, we obtain the set of relators desired. $\qedhere$ ◻ *Lemma 23*. Under the assumptions $\left.1_{b,\,-k}\right)\text{\textendash}\left.4_{b,\,-k}\right)$, the center of $NC_{k}=\left\langle b_{i}\;:\;-k\le i\right\rangle$ contains $$\begin{aligned} \left\langle b_{i}^{2}\;:\;-k\le i\right\rangle _{NC_{k}} & =\left\langle b_{i}^{2}\;:\;0\le i\right\rangle _{NC_{k}}.\end{aligned}$$ *Proof.* The equality follows directly from $\left.4_{b,\,-k}\right)$. [@lee2023characterizing Lemma A.3] implies $$\begin{aligned} \left[b_{i}^{2},\,b_{j}\right] & =1=\left[b_{i},\,b_{j}^{2}\right],\;0\le i,j.\end{aligned}$$ It suffices to show $\left[b_{i},b_{j}^{2}\right]=1$ when $i<0\le j$. The cases when $j=0$ are directly established from $\left.2_{b,\,-k}\right)$. When $i<0<j$, we deduce that $$\begin{aligned} & \,\left[b_{i},\,b_{j}^{2}\right]=b_{i}^{-1}b_{j}^{-2}b_{i}b_{j}^{2}\\ = & \,b_{i}^{-1}b_{j}^{-1}b_{i}\left[b_{i},\,b_{j}\right]b_{j}\\ = & \,b_{i}^{-1}b_{j}^{-1}b_{i}^{-1}b_{j-i}^{2}b_{j}^{-1}\\ = & \,\left[b_{i},\,b_{j}\right]b_{j}^{-1}b_{i}^{-2}b_{j-i}^{2}b_{j}^{-1}\\ = & \,b_{i}^{2}b_{j-i}^{2}b_{j}b_{i}^{-2}b_{j-i}^{2}b_{j}^{-1}\\ = & \,b_{-i}^{2}b_{j-i}^{2}b_{j}b_{-i}^{-2}b_{j-i}^{2}b_{j}^{-1}\\ = & \,1,\end{aligned}$$ where the third equality follows from $\left.3_{b,\,-k}\right)$; the fifth from $\left.3_{b,\,-k}\right)$; the sixth from $\left.4_{b,\,-k}\right)$; the last follows from $\left.1_{b,\,-k}\right)$ and (4). $\qedhere$ ◻ *Proof of Claim [2]{.nodecor} 1*. From the construction, $NC_{k}$ and $NC_{k+1}$ are isomorphic for every nonnegative integer $k$, with the isomorphism $$\begin{aligned} \gamma_{k} & :NC_{k}\to NC_{k+1},\;\gamma_{k}\left(x\right):=dxd^{-1}.\end{aligned}$$ We use induction. When $k=0$, the statement is exactly the presentation of $NC_{0}$ above. Suppose $NC_{k}$ has the presentation above. 
For every integer $i\ne0,1$, we have $$\begin{aligned} d^{-1}b_{i}d & =d^{-1}b_{0}^{-1}d^{-i}b_{0}d^{i+1}=b_{1}^{-1}b_{i+1},\end{aligned}$$ and for $i=0,1,$ $$\begin{aligned} & \,d^{-1}b_{0}d=b_{0}b_{1},\\ & \,d^{-1}b_{-1}d=d^{-1}b_{0}^{-1}db_{0}=b_{1}^{-1}.\end{aligned}$$ By applying the isomorphism $\gamma_{k}^{-1}$, it suffices to prove for $NC_{k}$, the relations $\left.1_{b,\,-k}\right)\text{\textendash}\left.4_{b,\,-k}\right)$ are deduced from $1=\left(b_{1}^{-1}b_{i}\right)^{4},\;-k\le i,$ $1=b_{1}^{4},$ $\left[b_{0}b_{1},b_{1}^{-1}b_{i}\right]=\left(b_{1}^{-1}b_{i}\right)^{2},\;i\ne0,1,\;-k\le i,$ $\left[b_{0}b_{1},b_{1}^{-1}\right]=b_{1}^{-2},$ $\left[b_{1}^{-1}b_{i},b_{1}^{-1}b_{j}\right]=\left(b_{1}^{-1}b_{i}\right)^{2}\left(b_{1}^{-1}b_{j-i+1}\right)^{2}\left(b_{1}^{-1}b_{j}\right)^{2},\;-k\le i<j,\;i\ne0,1\ne j,$ $\left[b_{1}^{-1},b_{1}^{-1}b_{j}\right]=b_{1}^{-2}\left(b_{1}^{-1}b_{j+1}\right)^{2}\left(b_{1}^{-1}b_{j}\right)^{2},\;1<j,$ $\left[b_{1}^{-1}b_{i},b_{1}^{-1}\right]=\left(b_{1}^{-1}b_{i}\right)^{2}\left(b_{1}^{-1}b_{-i+1}\right)^{2}b_{1}^{-2},\;-k\le i<0,$ $\left(b_{1}^{-1}b_{i+2}\right)^{2}=\left(b_{1}^{-1}b_{-i}\right)^{2},\;0<i\le k,$ $\left(b_{1}^{-1}b_{2}\right)^{2}=b_{1}^{-2},$ and vice versa. Suppose $\left.1_{b,\,-k}\right)\text{\textendash}\left.4_{b,\,-k}\right)$. Then, $\left.1_{b}^{-1}\right)$ follows from $\left.1_{b,\,-k}\right)$; $\left.2_{b}^{-1}\right)$ from $\left.2_{b,\,-k}\right)$ by using the commutator identity $$\begin{aligned} \left[a,\,bc\right] & =\left[a,\,c\right]c^{-1}\left[a,\,b\right]c;\end{aligned}$$ $\left.3_{b}^{-1},+\right)$ from $\left.3_{b,\,-k}\right)$ and (5); $\left.3_{b,\,-k}^{-1},-\right)$ from $\left.3_{b,\,-k}\right)$ and (5); $\left.4_{b}^{-1}\right)$ from $\left.3_{b,\,-k}\right)$; $\left.1_{b,\,-k}'\right)$ from $\left.3_{b,\,-k}\right)$. 
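The commutator identity (5), $\left[a,\,bc\right]=\left[a,\,c\right]c^{-1}\left[a,\,b\right]c$, is invoked repeatedly in what follows. As a mechanical sanity check (illustrative only, not part of the argument; the helper names are ours), it can be verified in a free group with a small word-reduction routine:

```python
def reduce_word(w):
    # freely reduce a word; letters are (generator, exponent ±1) pairs
    out = []
    for g, e in w:
        if out and out[-1][0] == g and out[-1][1] == -e:
            out.pop()  # cancel adjacent inverse letters
        else:
            out.append((g, e))
    return out

def inv_w(w):
    # formal inverse of a word
    return [(g, -e) for g, e in reversed(w)]

def comm_w(x, y):
    # [x, y] = x^{-1} y^{-1} x y, the convention used throughout the paper
    return reduce_word(inv_w(x) + inv_w(y) + x + y)

a, b, c = [("a", 1)], [("b", 1)], [("c", 1)]
lhs = comm_w(a, b + c)                                          # [a, bc]
rhs = reduce_word(comm_w(a, c) + inv_w(c) + comm_w(a, b) + c)   # [a,c] c^{-1} [a,b] c
assert lhs == rhs
print("identity holds in the free group")
```

Since both sides reduce to the same word in the free group, the identity holds in every group.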
To obtain $\left.2_{b,\,-k}'\right)$, expanding the commutator on the left-hand side, we compute that $$\begin{aligned} & \,\left[b_{0}b_{1},\,b_{1}^{-1}b_{i}\right]\\ = & \,\left[b_{0}b_{1},\,b_{i}\right]\left[b_{0}b_{1},\,b_{1}^{-1}\right]\\ = & \,\left[b_{0},\,b_{i}\right]\left[b_{1},\,b_{i}\right]\left[b_{0},\,b_{1}^{-1}\right]\\ = & \,b_{1}^{2}b_{i}^{2}\left[b_{1},\,b_{i}\right]=b_{1}^{2}b_{i}^{2}\left[b_{i},\,b_{1}\right],\end{aligned}$$ where $\left[b_{i},b_{j}\right]$ is in the center from $\left.2_{b,\,-k}\right)$, $\left.3_{b,\,-k}\right)$, and Lemma A.1; the fourth equality follows from Lemma A.1, $\left.1_{b,\,-k}\right)$, and $\left.2_{b,\,-k}\right)$; and the last from $\left.1_{b,\,-k}\right)$. By comparison, the right-hand side of $\left.2_{b,\,-k}'\right)$ becomes $$\begin{aligned} \left(b_{1}^{-1}b_{i}\right)^{2} & =b_{1}b_{i}b_{1}b_{i}=b_{1}^{2}b_{i}\left[b_{i},\,b_{1}\right]b_{i}=b_{1}^{2}b_{i}^{2}\left[b_{i},\,b_{1}\right],\end{aligned}$$ which establishes $\left.2_{b,\,-k}'\right)$. The proof of $\left.3_{b,\,-k}'\right)$ is similar, by expanding the left-hand side by using (5). For $\left.4_{b,\,-k}'\right)$, we need to show $$\begin{aligned} b_{i+2}b_{1}b_{i+2} & =b_{-i}b_{1}b_{-i}.\end{aligned}$$ Under the assumption $i\ne0$, we deduce (6) from $\left.3_{b,\,-k}\right)$, which concludes the deduction from $\left.1_{b,\,-k}\right)\text{\textendash}\left.4_{b,\,-k}\right)$. Indeed, $$\begin{aligned} & \,b_{i+2}b_{1}b_{i+2}\\ = & \,b_{i+2}^{2}b_{1}\left[b_{1},\,b_{i+2}\right]=b_{1}^{-1}b_{i+1}^{2}\\ = & \,b_{-i}^{2}b_{1}\left[b_{1},\,b_{-i}\right]=b_{-i}b_{1}b_{-i}.\end{aligned}$$ On the other hand, suppose $\left.1_{b,\,-k}'\right)\text{\textendash}\left.4_{b,\,-k}'\right)$, $\left.1_{b}^{-1}\right)$, $\left.2_{b}^{-1}\right)$, $\left.3_{b}^{-1},+\right)$, $\left.3_{b,\,-k}^{-1},-\right)$, $\left.4_{b}^{-1}\right)$. At first, the relation $\left.1_{b,\,-k}\right)$ for $i=1$ directly follows from $\left.1_{b}^{-1}\right)$. 
By applying (5) to $\left.2_{b}^{-1}\right)$ and by using $\left.1_{b}^{-1}\right)$, we have $\left.2_{b,\,-k}\right)$ for $i=1$ from $$\begin{aligned} b_{1}^{-2} & =\left[b_{0}b_{1},\,b_{1}^{-1}\right]=b_{1}^{-1}\left[b_{0},\,b_{1}^{-1}\right]b_{1}=\left[b_{1},\,b_{0}\right].\end{aligned}$$ By applying $\left.1_{b}^{-1}\right)$ and $\left.2_{b,\,-k}\right)$ for $i=1$, we have $\left.1_{b,\,-k}\right)$ for $i=0$. For $i\ne0,1$, by expanding the left hand side of $\left.2_{b,\,-k}'\right)$ we deduce that $$\begin{aligned} & \,\left(b_{1}^{-1}b_{i}\right)^{2}=\left[b_{0}b_{1},\,b_{1}^{-1}b_{i}\right]\\ = & \,\left[b_{0}b_{1},\,b_{i}\right]b_{i}^{-1}\left[b_{0}b_{1},\,b_{1}^{-1}\right]b_{i}\\ = & \,\left[b_{0}b_{1},\,b_{i}\right]b_{i}^{-1}b_{1}^{-2}b_{i}\\ = & \,b_{1}^{-1}\left[b_{0},\,b_{i}\right]b_{1}\left[b_{1},\,b_{i}\right]b_{i}^{-1}b_{1}^{-2}b_{i},\end{aligned}$$ where after equating the first term and the last, by reducing the same words, we deduce that $$\begin{aligned} & \,b_{i}=\left[b_{0},\,b_{i}\right]b_{1}\left[b_{1},\,b_{i}\right]b_{i}^{-1}b_{1}^{-1},\\ \iff & \,b_{i}^{2}b_{1}\left(b_{1}^{-1}b_{i}^{-1}b_{1}b_{i}\right)=\left[b_{0},\,b_{i}\right]b_{1}\left[b_{1},\,b_{i}\right],\\ \iff & \,b_{i}^{2}=\left[b_{0},\,b_{i}\right],\end{aligned}$$ so finally we have established $\left.2_{b,\,-k}\right)$ for every $i$. For convenience, put $c_{i}:=b_{1}^{-1}b_{i}$ for each $i$. We show $\left.1_{b,\,-k}\right)$ for every $i$. The cases $i=0,1$ are already done. For $i>1$, we expand $$\begin{aligned} & \,b_{i}^{4}=\left(b_{1}c_{i}b_{1}c_{i}\right)^{2}\\ = & \,\left(c_{i}\left[c_{i},\,b_{1}^{-1}\right]b_{1}^{2}c_{i}\right)^{2}\\ = & \,\left(c_{i}^{-1}c_{i+1}^{-2}b_{1}^{4}c_{i}\right)^{2}\\ = & \,c_{i}^{-1}c_{i+1}^{-4}c_{i}=1,\end{aligned}$$ where the third equality follows from $\left.3_{b}^{-1},+\right)$, the fourth from $\left.1_{b}^{-1}\right)$, and the last from $\left.1_{b,\,-k}'\right)$. 
For $i$ such that $-k\le i<0$, we expand $$\begin{aligned} & \,b_{i}^{4}=\left(c_{i}\left[c_{i},\,b_{1}^{-1}\right]b_{1}^{2}c_{i}\right)^{2}\\ = & \,\left(c_{i}^{3}c_{1-i}^{2}c_{i}\right)^{2}\\ = & \,c_{i}^{3}c_{1-i}^{2}c_{i}^{4}c_{1-i}^{2}c_{i}=1,\end{aligned}$$ where the second equality follows from $\left.3_{b,\,-k}^{-1},-\right)$, and the last from $\left.1_{b,\,-k}'\right)$. To deal with $\left.3_{b,\,-k}\right)$ and $\left.4_{b,\,-k}\right)$, we introduce several claims. *Claim [3]{.nodecor} 1*. Under the assumptions $\left.1_{b,\,-k}'\right)\text{\textendash}\left.4_{b,\,-k}'\right)$, $\left.1_{b}^{-1}\right)$, $\left.2_{b}^{-1}\right)$, $\left.3_{b}^{-1},+\right)$, $\left.3_{b,\,-k}^{-1},-\right)$, $\left.4_{b}^{-1}\right)$, we have $$\begin{aligned} & \,c_{j}^{2}\in Z\left(\left\langle c_{0},\,c_{2},\,c_{3},\cdots\right\rangle \right),\;j\ge0.\\ & \,\left[b_{1}^{2},\,c_{j}\right]=1,\;j\ge0,\\ & \,\left[b_{1},\,c_{j}^{2}\right]=1,\;j\ge0,\end{aligned}$$ where $Z\left(A\right)$ is the center of the group $A$. *Proof of Claim [3]{.nodecor} 1*. We first show (7) by induction. When $j=0$, for any integer $i$ such that $1<i$, the relation $\left.2_{b,\,-k}'\right)$, i.e., $\left[c_{0}^{-1},c_{i}\right]=c_{i}^{2}$, implies $$\begin{aligned} & \,\left[c_{0}^{2},\,c_{i}\right]=c_{0}^{-2}c_{i}^{-1}c_{0}^{2}c_{i}\\ = & \,c_{0}^{-1}\left[c_{0},\,c_{i}\right]c_{i}^{-1}c_{0}c_{i}\\ = & \,c_{0}^{-1}c_{i}c_{0}c_{i}=c_{i}\left[c_{i},\,c_{0}\right]c_{i}=1.\end{aligned}$$ For any pair $\left(i,j\right)$ such that $1<i<j$, the relation $\left.3_{b,\,-k}'\right)$ means $\left[c_{i},c_{j}\right]=c_{i}^{2}c_{j-i+1}^{2}c_{j}^{2}$. This implies $\left[c_{2},c_{3}\right]=c_{3}^{2}$, that is, $c_{2}^{-1}c_{3}^{-1}c_{2}=c_{3}$. Therefore, we have $\left[c_{2},c_{3}^{2}\right]=1=\left[c_{2}^{2},c_{3}\right]$. For some integer $N\ge4$, suppose $\left[c_{i}^{2},c_{j}\right]=1$ for every pair $\left(i,j\right)$ such that $1<i,j<N$. 
By expanding $\left.3_{b,\,-k}'\right)$ we have $$\begin{aligned} & \,\left[c_{i},\,c_{N}\right]=c_{i}^{2}c_{N-i+1}^{2}c_{N}^{2}\\ \iff & \,\left(c_{i}c_{N}^{-1}\right)^{2}=c_{N-i+1}^{2},\end{aligned}$$ which implies $$\begin{aligned} & \,1=\left[c_{N-i+1}^{2},\,c_{i}c_{N}^{-1}\right]\\ \iff & \,1=\left[c_{N-i+1}^{2},\,c_{N}^{-1}\right]c_{N}\left[c_{N-i+1}^{2},\,c_{i}\right]c_{N}^{-1}\\ \iff & \,1=\left[c_{N-i+1}^{2},\,c_{N}^{-1}\right],\end{aligned}$$ where the first equivalence follows from (5) and the second from the induction hypothesis. From this calculation, we have established $$\begin{aligned} 1 & =\left[c_{i}^{2},\,c_{N}\right],\;1<i<N.\end{aligned}$$ On the other hand, we also have $$\begin{aligned} & \,1=\left[c_{N}^{2},\,c_{i}\right]\\ \iff & \,1=c_{N}^{-2}c_{i}^{-1}c_{N}^{2}c_{i}\\ \iff & \,1=c_{N}^{-1}\left[c_{N},\,c_{i}\right]c_{i}^{-1}c_{N}c_{i}\\ \iff & \,1=c_{N}c_{N-i+1}^{2}c_{i}c_{N}c_{i}\\ \iff & \,c_{i}c_{N}c_{i}c_{N}=c_{N-i+1}^{-2}\\ \iff & \,c_{i}^{-1}c_{N}c_{i}^{-1}c_{N}=c_{N-i+1}^{-2}\\ \iff & \,\left[c_{i},\,c_{N}\right]=c_{i}^{2}c_{N-i+1}^{2}c_{N}^{2},\end{aligned}$$ where the third equivalence follows from $\left.3_{b,\,-k}'\right)$; the fourth from (10); and the fifth from (10) and $\left.1_{b,\,-k}'\right)$. This concludes the proof of (7). Equation (8) follows directly from $\left.4_{b}^{-1}\right)$, that is $c_{2}^{2}=b_{1}^{-2}$, and (7). For (9), for each $j>1$, we expand $$\begin{aligned} & \,\left[b_{1},\,c_{j}^{2}\right]\\ = & \,\left[b_{1},\,c_{j}\right]c_{j}^{-1}\left[b_{1},\,c_{j}\right]c_{j}\\ = & \,b_{1}^{-2}c_{j+1}^{2}c_{j}^{2}c_{j}^{-1}b_{1}^{-2}c_{j+1}^{2}c_{j}^{3}\\ = & \,b_{1}^{-4}c_{j+1}^{4}c_{j}^{4}=1,\end{aligned}$$ where the first equality follows from (5); the second from $\left.3_{b}^{-1},+\right)$; the third from (7) and (8); and the last from $\left.1_{b,\,-k}'\right)$ and $\left.1_{b}^{-1}\right)$. $\qed$ *Claim [4]{.nodecor} 1*. 
Under the assumptions $\left.1_{b,\,-k}'\right)\text{\textendash}\left.4_{b,\,-k}'\right)$, $\left.1_{b}^{-1}\right)$, $\left.2_{b}^{-1}\right)$, $\left.3_{b}^{-1},+\right)$, $\left.3_{b,\,-k}^{-1},-\right)$, $\left.4_{b}^{-1}\right)$, we have $$\begin{aligned} b_{i}^{2} & \in Z\left(\left\langle b_{0},\,b_{1},\,b_{2},\cdots\right\rangle \right),\;i\ge0.\end{aligned}$$ *Proof of Claim [4]{.nodecor} 1*. At first, the cases $i=0,1$ directly follow from $\left.2_{b,\,-k}\right)$ and (8). The equation $\left[b_{0},b_{j}^{2}\right]=1$ for $j>1$ also follows from $\left.1_{b,\,-k}\right)$ and $\left.2_{b,\,-k}\right)$. For any integer $j>1$, observe that $$\begin{aligned} & \,\left[b_{1},\,b_{j}^{2}\right]\\ = & \,\left[b_{1},\,b_{1}c_{j}b_{1}c_{j}\right]\\ = & \,\left[b_{1},\,b_{1}^{2}c_{j}^{2}\left[c_{j},\,b_{1}\right]\right]\\ = & \,1,\end{aligned}$$ where the last equality follows from (8), (9), and $\left.3_{b}^{-1},+\right)$. On the other hand, for integers $i,j\ge1$ we deduce that $$\begin{aligned} & \,\left[b_{i}^{2},\,b_{j}\right]\\ = & \,\left[b_{1}c_{i}b_{1}c_{i},\,b_{1}c_{j}\right]\\ = & \,\left[b_{1}^{2}c_{i}^{2}\left[b_{1},\,c_{i}\right],\,b_{1}c_{j}\right]\\ = & \,\left[b_{1}^{2}c_{i}^{2}\left[b_{1},\,c_{i}\right],\,c_{j}\right]c_{j}^{-1}\left[b_{1}^{2}c_{i}^{2}\left[b_{1},\,c_{i}\right],\,b_{1}\right]c_{j}\\ = & \,1,\end{aligned}$$ where the last equality follows from (8), (9), and $\left.3_{b}^{-1},+\right)$. $\qed$ *Claim [5]{.nodecor} 1*. Under the assumptions $\left.1_{b,\,-k}'\right)\text{\textendash}\left.4_{b,\,-k}'\right)$, $\left.1_{b}^{-1}\right)$, $\left.2_{b}^{-1}\right)$, $\left.3_{b}^{-1},+\right)$, $\left.3_{b,\,-k}^{-1},-\right)$, $\left.4_{b}^{-1}\right)$, when $1<j$, we have $$\begin{aligned} & \,\left[b_{1},\,b_{j}\right]\,=b_{1}^{2}b_{j-1}^{2}b_{j}^{2},\\ & \,c_{j}^{2}\,=b_{j-1}^{2}.\end{aligned}$$ *Proof of Claim [5]{.nodecor} 1*. We use induction. The case $j=2$ follows from $\left.4_{b}^{-1}\right)$. 
Suppose (12) holds for $1<j\le N$. By the induction hypothesis, we have $$\begin{aligned} & \,b_{1}^{2}b_{N-1}^{2}b_{N}^{2}\\ = & \,\left[b_{1},\,b_{N}\right]=\left[b_{1}^{-1},\,c_{N}\right]\\ = & \,b_{1}^{-2}\left(b_{1}^{-1}b_{N+1}\right)^{2}\left(b_{1}^{-1}b_{N}\right)^{2}\\ = & \,b_{1}\left(b_{N+1}b_{1}^{-1}b_{N+1}b_{1}^{-1}\right)\left(b_{N}b_{1}^{-1}b_{N}\right),\end{aligned}$$ where the third equality follows from $\left.3_{b}^{-1},+\right)$. By equating the first term with the last term, we establish (12) for $j=N+1$ as follows: $$\begin{aligned} & \,b_{1}b_{N-1}^{2}b_{N}b_{1}b_{N}^{-1}=b_{N+1}b_{1}^{-1}b_{N+1}b_{1}^{-1}\\ \iff & \,b_{1}b_{N-1}^{2}b_{N}^{2}b_{1}\left[b_{1},\,b_{N}\right]b_{N}^{2}=b_{N+1}b_{1}^{-1}b_{N+1}b_{1}^{-1}\\ \iff & \,b_{N}^{2}=b_{N+1}b_{1}^{-1}b_{N+1}b_{1}^{-1}\\ \iff & \,b_{N+1}^{2}b_{N}^{2}b_{1}^{2}=\left[b_{N+1},\,b_{1}\right],\end{aligned}$$ where the last equivalence follows from the induction hypothesis and (9). Equation (13) follows directly from (12). Indeed, $$\begin{aligned} c_{j}^{2} & =\left(b_{1}^{-1}b_{j}\right)^{2}=b_{1}b_{j}b_{1}b_{j}=b_{1}b_{j}^{2}b_{1}\left[b_{1},\,b_{j}\right]=b_{j-1}^{2}.\;\qed\end{aligned}$$ *Claim [6]{.nodecor} 1*. Under the assumptions $\left.1_{b,\,-k}'\right)\text{\textendash}\left.4_{b,\,-k}'\right)$, $\left.1_{b}^{-1}\right)$, $\left.2_{b}^{-1}\right)$, $\left.3_{b}^{-1},+\right)$, $\left.3_{b,\,-k}^{-1},-\right)$, $\left.4_{b}^{-1}\right)$, when $-k\le i<0$, we have $$\begin{aligned} & \,c_{i}^{2}\,=b_{1-i}^{2},\\ & \,\left[b_{i},\,b_{1}^{-1}\right]\,=b_{1-i}^{2}b_{-i}^{2}b_{1}^{2}.\end{aligned}$$ *Proof of Claim [6]{.nodecor} 1*. The equation (14) directly follows from $\left.4_{b,\,-k}'\right)$ and (13). 
By expanding the left hand side of (15), we have $$\begin{aligned} & \,\left[b_{i},\,b_{1}^{-1}\right]=\left[c_{i},\,b_{1}^{-1}\right]\\ = & \,\left(b_{1}^{-1}b_{i}\right)^{2}\left(b_{1}^{-1}b_{1-i}\right)^{2}b_{1}^{-2}\\ = & \,b_{1-i}^{2}b_{-i}^{2}b_{1}^{2},\end{aligned}$$ where the second equality follows from $\left.3_{b,\,-k}^{-1},-\right)$, and the last from (14). $\qed$ We return to the proof of Claim 2. To show $\left.4_{b,\,-k}\right)$, for any integer $i$ such that $0<i\le k$, by expanding $\left.4_{b,\,-k}'\right)$ we deduce that $$\begin{aligned} & \,\left(b_{1}^{-1}b_{i+2}\right)^{2}=\left(b_{1}^{-1}b_{-i}\right)^{2}\\ \iff & \,b_{i+1}^{2}=b_{1}^{-1}b_{-i}b_{1}^{-1}b_{-i}\\ \iff & \,b_{1}b_{i+1}^{2}=b_{-i}b_{1}^{-1}b_{-i}\\ \iff & \,b_{1}b_{i+1}^{2}=b_{-i}^{2}b_{1}^{-1}\left[b_{1}^{-1},\,b_{-i}\right]\\ \iff & \,b_{1}b_{i+1}^{2}=b_{-i}^{2}b_{i+1}^{2}b_{i}^{2}b_{1}\\ \iff & \,b_{1}b_{i+1}^{2}b_{1}^{-1}b_{i}^{-2}b_{i+1}^{-2}=b_{-i}^{2}\\ \iff & \,b_{i}^{2}=b_{-i}^{2},\end{aligned}$$ where the first equivalence follows from (13); the fourth from (11), (15), and $\left.1_{b,\,-k}\right)$; and the last from (11) and $\left.1_{b,\,-k}\right)$. Finally, we show $\left.3_{b,\,-k}\right)$. The case $1=i<j$ is already handled by (12). The case $i<j=1$ also directly follows from (15) and $\left.4_{b,\,-k}\right)$. Therefore, we may assume $i\ne1\ne j$. 
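Before continuing, here is a small illustration (not from the paper) of the mechanism behind the claims above: when a commutator $[x,y]$ is central, squares slide through commutators, e.g. $[x,\,y^{2}]=[x,\,y]^{2}$ and $[x^{2},\,y]=[x,\,y]^{2}$. The integer Heisenberg group of $3\times3$ upper unitriangular matrices, in which every commutator is central, gives a concrete check:

```python
# Illustration only: central commutators make squares behave as in the claims.

def matmul(A, B):
    """Product of two 3x3 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def inv_unitriangular(A):
    """Inverse of an upper unitriangular 3x3 matrix [[1,a,b],[0,1,c],[0,0,1]]."""
    a, b, c = A[0][1], A[0][2], A[1][2]
    return [[1, -a, a * c - b], [0, 1, -c], [0, 0, 1]]

def comm(X, Y):
    """Commutator [X, Y] = X^{-1} Y^{-1} X Y."""
    return matmul(matmul(inv_unitriangular(X), inv_unitriangular(Y)),
                  matmul(X, Y))

x = [[1, 1, 0], [0, 1, 0], [0, 0, 1]]
y = [[1, 0, 0], [0, 1, 1], [0, 0, 1]]
I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

z = comm(x, y)                                 # central in the Heisenberg group
assert comm(z, x) == I and comm(z, y) == I
assert comm(x, matmul(y, y)) == matmul(z, z)   # [x, y^2] = [x, y]^2
assert comm(matmul(x, x), y) == matmul(z, z)   # [x^2, y] = [x, y]^2
```

The same two-line computation ($y^{-1}xy = xz$ with $z$ central) underlies the repeated reductions of $[b_i, b_j^2]$ and $[b_i^2, b_j]$ in the proofs.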
For any pair $\left(i,j\right)$ such that $1<i<j$, by expanding the left hand side of $\left.3_{b,\,-k}\right)$, we have $$\begin{aligned} & \,\left[b_{i},\,b_{j}\right]=\left[b_{1}c_{i},\,b_{1}c_{j}\right]\\ = & \,\left[b_{1}c_{i},\,c_{j}\right]c_{j}^{-1}\left[b_{1}c_{i},\,b_{1}\right]c_{j}\\ = & \,c_{i}^{-1}\left[b_{1},\,c_{j}\right]c_{i}\left[c_{i},\,c_{j}\right]c_{j}^{-1}\left[c_{i},\,b_{1}\right]c_{j}\\ = & \,c_{i}^{-1}\left[b_{1},\,b_{j}\right]c_{i}^{3}c_{j-i+1}^{2}c_{j}\left[b_{i},\,b_{1}\right]c_{j}\\ = & \,c_{i}^{2}\left(b_{1}^{2}b_{j-1}^{2}b_{j}^{2}\right)\left(b_{1}^{2}b_{i-1}^{2}b_{i}^{2}\right)c_{j}^{2}c_{j-i+1}^{2}\\ = & \,b_{i}^{2}b_{j-i}^{2}b_{j}^{2},\end{aligned}$$ where the fourth equality follows from $\left.3_{b,\,-k}'\right)$, the fifth from (12), and the last from (13), (9), and $\left.1_{b,\,-k}\right)$. For the cases $i<0$, we need to introduce more claims. *Claim [7]{.nodecor} 1*. Under the assumptions $\left.1_{b,\,-k}'\right)\text{\textendash}\left.4_{b,\,-k}'\right)$, $\left.1_{b}^{-1}\right)$, $\left.2_{b}^{-1}\right)$, $\left.3_{b}^{-1},+\right)$, $\left.3_{b,\,-k}^{-1},-\right)$, $\left.4_{b}^{-1}\right)$, for any pair of integers $\left(i,j\right)$ such that $i<0$ and $j>0$, $\left[b_{i},b_{j}\right]$ is included in the abelian subgroup $\left\langle b_{0}^{2},b_{1}^{2},\cdots\right\rangle .$ *Proof of Claim [7]{.nodecor} 1*. As in the partial proof of $\left.3_{b,\,-k}\right)$, we compute that $$\begin{aligned} & \,\left[b_{i},\,b_{j}\right]\\ = & \,c_{i}^{-1}\left[b_{1},\,b_{j}\right]c_{i}^{3}c_{j-i+1}^{2}c_{j}\left[b_{i},\,b_{1}\right]c_{j}\\ = & \,c_{i}^{-1}\left(b_{1}^{2}b_{j-1}^{2}b_{j}^{2}\right)c_{i}^{3}b_{j-i}^{2}c_{j}b_{1-i}^{2}b_{-i}^{2}b_{1}^{2}c_{j}\\ = & \,c_{i}^{-1}\left(b_{1}^{2}b_{j-1}^{2}b_{j}^{2}\right)c_{i}^{3}b_{j-i}^{2}b_{1-i}^{2}b_{-i}^{2}b_{1}^{2}b_{j-1}^{2},\end{aligned}$$ where the second equality follows from (12) and (15); the last from (11) and (13). 
Therefore, it suffices to prove that for each $\left(i,j\right)$ such that $i<0$ and $j>0$, the element $c_{i}^{-1}b_{j}^{2}c_{i}$ is contained in the abelian subgroup $\left\langle b_{0}^{2},b_{1}^{2},\cdots\right\rangle .$ By expanding it, $$\begin{aligned} & \,c_{i}^{-1}b_{j}^{2}c_{i}=b_{j}^{2}\left[b_{j}^{2},\,c_{i}\right]\\ = & \,b_{j}^{2}\left[b_{1}c_{j}b_{1}c_{j},\,c_{i}\right]\\ = & \,b_{j}^{2}c_{j}^{-1}\left[b_{1}c_{j}b_{1},\,c_{i}\right]c_{j}\left[c_{j},\,c_{i}\right]\\ = & \,b_{j}^{2}c_{j}^{-1}b_{1}^{-1}\left[b_{1}c_{j},\,c_{i}\right]b_{1}\left[b_{1},\,c_{i}\right]c_{j}\left[c_{j},\;c_{i}\right]\\ = & \,b_{j}^{2}c_{j}^{-1}b_{1}^{-1}c_{j}^{-1}\left[b_{1},\;c_{i}\right]c_{j}\left[c_{j},\;c_{i}\right]b_{1}\left[b_{1},\;c_{i}\right]c_{j}\left[c_{j},\;c_{i}\right],\end{aligned}$$ where, in the last expression, every commutator lies in $\left\langle b_{0}^{2},b_{1}^{2},\cdots\right\rangle$ by $\left.1_{b}^{-1}\right)$, $\left.3_{b,\,-k}'\right)$, $\left.3_{b,\,-k}^{-1},-\right)$, (13), and (14). By rearranging the last term, we have $$\begin{aligned} c_{i}^{-1}b_{j}^{2}c_{i} & =b_{j}^{2}.\;\qed\end{aligned}$$ *Claim [8]{.nodecor} 1*. Under the assumptions $\left.1_{b,\,-k}'\right)\text{\textendash}\left.4_{b,\,-k}'\right)$, $\left.1_{b}^{-1}\right)$, $\left.2_{b}^{-1}\right)$, $\left.3_{b}^{-1},+\right)$, $\left.3_{b,\,-k}^{-1},-\right)$, $\left.4_{b}^{-1}\right)$, for any pair of integers $\left(i,j\right)$ such that $i<0$ and $j\ge0$, we have $$\begin{aligned} & \,\left[b_{i},\,b_{j}^{2}\right]=1.\end{aligned}$$ *Proof of Claim [8]{.nodecor} 1*. This is immediate from Claim 7. Indeed, $$\begin{aligned} b_{i}b_{j}^{2} & =b_{j}b_{i}\left[b_{i},\,b_{j}\right]b_{j}=b_{j}b_{i}b_{j}\left[b_{i},\,b_{j}\right]=b_{j}^{2}b_{i}\left[b_{i},\,b_{j}\right]^{2}=b_{j}^{2}b_{i}.\;\qed\end{aligned}$$ We again return to the proof of $\left.3_{b,\,-k}\right)$. Suppose $i<0$ and $1<j$. 
We expand $$\begin{aligned} & \,\left[b_{i},\,b_{j}\right]\\ = & \,c_{i}^{-1}\left(b_{1}^{2}b_{j-1}^{2}b_{j}^{2}\right)c_{i}^{3}b_{j-i}^{2}b_{1-i}^{2}b_{-i}^{2}b_{1}^{2}b_{j-1}^{2}\\ = & \,\left(b_{1}^{2}b_{j-1}^{2}b_{j}^{2}\right)b_{1-i}^{2}b_{j-i}^{2}b_{1-i}^{2}b_{-i}^{2}b_{1}^{2}b_{j-1}^{2}\\ = & \,b_{i}^{2}b_{j-i}^{2}b_{j}^{2},\end{aligned}$$ where the first equality follows from the proof of Claim 7; the second from (11), (14), and (16); and the last from (11), $\left.1_{b,\,-k}\right)$, and $\left.4_{b,\,-k}\right)$. On the other hand, suppose $i<j<0$. We expand $$\begin{aligned} & \,\left[b_{i},\,b_{j}\right]\\ = & \,c_{i}^{-1}\left[b_{1},\,b_{j}\right]c_{i}^{3}c_{j-i+1}^{2}c_{j}\left[b_{i},\,b_{1}\right]c_{j}\\ = & \,c_{i}^{-1}b_{1-j}^{2}b_{-j}^{2}b_{1}^{2}c_{i}^{3}b_{j-i}^{2}c_{j}b_{1-i}^{2}b_{-i}^{2}b_{1}^{2}c_{j}\\ = & \,b_{1-j}^{2}b_{-j}^{2}b_{1}^{2}c_{i}^{2}b_{j-i}^{2}b_{1-i}^{2}b_{-i}^{2}b_{1}^{2}c_{j}^{2}\\ = & \,b_{1-j}^{2}b_{-j}^{2}b_{1}^{2}b_{1-i}^{2}b_{j-i}^{2}b_{1-i}^{2}b_{-i}^{2}b_{1}^{2}b_{1-j}^{2}\\ = & \,b_{i}^{2}b_{j-i}^{2}b_{j}^{2},\end{aligned}$$ where the second equality follows from (15); the third from (16); the fourth from (14); and the last from (11), $\left.1_{b,\,-k}\right)$, and $\left.4_{b,\,-k}\right)$. We have established $\left.3_{b,\,-k}\right)$ and finished the proof of Claim 2. $\qed$

Donsung Lee; <disturin@snu.ac.kr> Department of Mathematical Sciences and Research Institute of Mathematics, Seoul National University, Gwanak-ro 1, Gwanak-gu, Seoul, South Korea 08826
{ "id": "2309.13389", "title": "Non-linear, solvable, residually $p$ groups", "authors": "Donsung Lee", "categories": "math.GR", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | This paper discusses parabolic reverse Hölder inequalities and their connections to parabolic Muckenhoupt weights. The main result gives several characterizations for this class of weights. There are challenging features related to the parabolic geometry and the time lag, for example, in covering and chaining arguments. We also prove a Gehring type self-improving property for parabolic reverse Hölder inequalities. address: - Department of Mathematics, Aalto University, P.O. Box 11100, FI-00076 Aalto, Finland - Department of Mathematics, Aalto University, P.O. Box 11100, FI-00076 Aalto, Finland author: - Juha Kinnunen - Kim Myyryläinen title: Characterizations of parabolic reverse Hölder classes --- [^1] # Introduction This paper continues and complements a discussion of parabolic reverse Hölder inequalities and Muckenhoupt weights in [@KinnunenMyyry2023] and [@kinnunenSaariMuckenhoupt; @kinnunenSaariParabolicWeighted]. We attempt to create a higher dimensional version of the one-dimensional theory introduced by Sawyer [@sawyer1986] and studied, for example, by Cruz-Uribe, Neugebauer and Olesen [@CUNO1995], Martín-Reyes, Pick and de la Torre [@MRPT1993], Martín-Reyes and de la Torre [@MRT1994]. Our approach is motivated by certain doubly nonlinear parabolic partial differential equations as in [@KinnunenMyyry2023; @kinnunenSaariMuckenhoupt; @kinnunenSaariParabolicWeighted]. Several challenges occur compared to the standard theory of weighted norm inequalities. For example, the doubling property of Muckenhoupt weights is replaced by a forward in time doubling property in [@KinnunenMyyry2023; @kinnunenSaariParabolicWeighted]. A parabolic Muckenhoupt weight satisfies a forward in time doubling property, but it is not currently known whether the same holds true for a weight satisfying a parabolic reverse Hölder inequality. There are also interesting features related to the parabolic geometry and the time lag. 
In contrast with the parabolic Muckenhoupt classes, a parabolic reverse Hölder inequality with a positive time lag implies the corresponding condition with zero time lag. Alternative higher dimensional versions have been studied by Berkovits [@berkovits2011], Forzani, Martín-Reyes and Ombrosi [@ForzaniMartinreyesOmbrosi2011], Lerner and Ombrosi [@LO2010] and Ombrosi [@Ombrosi2005]. Let $1<p<\infty$, $x\in\mathbb R^n$, $L>0$ and $t \in \mathbb{R}$. A parabolic rectangle centered at $(x,t)$ with side length $L$ is $$R = R(x,t,L) = Q(x,L) \times (t-L^p, t+L^p)$$ and its upper and lower parts are $$R^+(\gamma) = Q(x,L) \times (t+\gamma L^p, t+L^p) \quad\text{and}\quad R^-(\gamma) = Q(x,L) \times (t - L^p, t - \gamma L^p) ,$$ where $0 \leq \gamma < 1$ is called the time lag. Here $Q(x,L)=\{y \in \mathbb R^n: \lvert y_i-x_i\rvert \leq L,\,i=1,\dots,n\}$ denotes a spatial cube. Let $1<q<\infty$. A nonnegative weight $w$ belongs to the parabolic reverse Hölder class $RH^+_q$ if there exists a constant $C$ such that $$\biggl( \fint_{R^-(\gamma)} w^{q} \biggr)^\frac{1}{q} \leq C \fint_{R^+(\gamma)} w$$ for every parabolic rectangle $R \subset \mathbb R^{n+1}$, where $\fint_A$ denotes the integral average over $A$. Lemma [Lemma 6](#lem:RHItimelag){reference-type="ref" reference="lem:RHItimelag"} shows that the definition of $RH^+_q$ does not depend on the time lag. In other words, if a weight belongs to $RH^+_q$ with some time lag, it belongs to $RH^+_q$ with any time lag. Reverse Hölder inequalities are closely related to Muckenhoupt weights. A weight $w$ satisfies a parabolic Muckenhoupt condition if $$\sup_{R \subset \mathbb R^{n+1}}\biggl( \fint_{R^-(\gamma)} w \biggr) \biggl( \fint_{R^+(\gamma)} w^{\frac{1}{1-q}} \biggr)^{q-1} < \infty.$$ Every parabolic Muckenhoupt weight satisfies a parabolic reverse Hölder inequality, see [@kinnunenSaariParabolicWeighted Theorem 5.2] and [@KinnunenMyyry2023 Theorem 5.2]. 
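As a concrete illustration (not a result quoted from the papers cited above), consider the weight $w(x,t)=e^{t}$, which depends only on time. The averages in the $RH^+_q$ condition with zero time lag then reduce to one-dimensional integrals with closed forms, and the ratio of the left-hand side to the right-hand side is at most $1$ for every parabolic rectangle; the sketch below (the helper name `rh_ratio` is ours) checks this numerically:

```python
import math

def rh_ratio(q, t_c, h):
    """RH^+_q ratio for w(x,t) = e^t on R(x, t_c, L) with h = L^p, zero lag.

    lhs = (average of w^q over R^-)^(1/q), rhs = average of w over R^+;
    both averages are explicit because w depends only on t.
    """
    lhs = ((math.exp(q * t_c) - math.exp(q * (t_c - h))) / (q * h)) ** (1.0 / q)
    rhs = (math.exp(t_c + h) - math.exp(t_c)) / h
    return lhs / rhs

# The ratio equals ((1 - e^{-qh})/(qh))^{1/q} * h/(e^h - 1), a product of two
# factors <= 1, so e^t satisfies the RH^+_q inequality with constant C = 1.
for q in (1.5, 2.0, 5.0):
    for t_c in (-3.0, 0.0, 2.0):
        for h in (0.1, 1.0, 10.0):
            assert 0.0 < rh_ratio(q, t_c, h) <= 1.0 + 1e-12
```

Note that the ratio is independent of the center $t_c$ and of the spatial variables, reflecting the translation invariance of the condition for weights of this form.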
Conversely, Theorem [Theorem 24](#rhar){reference-type="ref" reference="rhar"} shows that a weight satisfying the parabolic reverse Hölder inequality is a parabolic Muckenhoupt weight under the assumption that the weight satisfies a forward in time parabolic doubling condition in [\[eq:pardoubling\]](#eq:pardoubling){reference-type="eqref" reference="eq:pardoubling"}. Our main result Theorem [Theorem 7](#thm:RHIchar){reference-type="ref" reference="thm:RHIchar"} gives several characterizations of the parabolic reverse Hölder inequality. We also study the corresponding limiting class $RH^+_\infty$ in Proposition [Proposition 5](#rhinfty){reference-type="ref" reference="rhinfty"}. Self-improving phenomena are essential in the theory of Muckenhoupt weights and reverse Hölder inequalities. Theorem [Theorem 20](#gehring){reference-type="ref" reference="gehring"} is a parabolic Gehring type higher integrability result, which asserts that $$w\in RH^+_q \Longrightarrow w\in RH^+_{q+\varepsilon}$$ for some $\varepsilon>0$. The characterizations of parabolic reverse Hölder inequalities and the parabolic Gehring lemma also hold in the case $p=1$ which extends the corresponding one-dimensional results. # Definition and properties of parabolic reverse Hölder inequalities Throughout the underlying space is $\mathbb{R}^{n+1}=\{(x,t):x=(x_1,\dots,x_n)\in\mathbb R^n,t\in\mathbb R\}$. Unless otherwise stated, constants are positive and the dependencies on parameters are indicated in the brackets. The Lebesgue measure of a subset $A$ of $\mathbb{R}^{n+1}$ is denoted by $\lvert A\rvert$. A cube $Q$ is a bounded interval in $\mathbb R^n$, with sides parallel to the coordinate axes and equally long, that is, $Q=Q(x,L)=\{y \in \mathbb R^n: \lvert y_i-x_i\rvert \leq L,\,i=1,\dots,n\}$ with $x\in\mathbb R^n$ and $L>0$. The point $x$ is the center of the cube and $L$ is the side length of the cube. 
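To fix the geometry, the following sketch (an illustration with hypothetical helper names) encodes the parabolic rectangles introduced above and checks the basic measure identities $\lvert R\rvert = (2L)^n\, 2L^p$ and $\lvert R^{\pm}(\gamma)\rvert = (2L)^n (1-\gamma) L^p$, together with the reflection symmetry of $R^{-}(\gamma)$ and $R^{+}(\gamma)$ about the time slice $\mathbb{R}^n\times\{t\}$:

```python
# Illustration only: a box is (spatial_intervals, time_interval).

def parabolic_parts(x, t, L, p, gamma):
    """Return (R, R_minus, R_plus) for the parabolic rectangle R(x, t, L)."""
    Q = tuple((xi - L, xi + L) for xi in x)       # spatial cube Q(x, L)
    R       = (Q, (t - L**p, t + L**p))
    R_minus = (Q, (t - L**p, t - gamma * L**p))   # lower part R^-(gamma)
    R_plus  = (Q, (t + gamma * L**p, t + L**p))   # upper part R^+(gamma)
    return R, R_minus, R_plus

def measure(box):
    """Lebesgue measure of a box in R^{n+1}."""
    Q, (t0, t1) = box
    m = t1 - t0
    for a, b in Q:
        m *= b - a
    return m

x, t, L, p, gamma = (0.0, 0.0), 1.0, 2.0, 3.0, 0.5
n = len(x)
R, Rm, Rp = parabolic_parts(x, t, L, p, gamma)

assert measure(R) == (2 * L) ** n * 2 * L**p
assert measure(Rm) == measure(Rp) == (2 * L) ** n * (1 - gamma) * L**p
# Reflection about the time slice {t}: the time intervals are mirror images.
assert Rm[1] == (2 * t - Rp[1][1], 2 * t - Rp[1][0])
```

The anisotropic scaling is visible here: halving $L$ shrinks the spatial edges by $2$ but the time length by $2^p$, which is what drives the covering and chaining arguments below.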
Instead of Euclidean cubes, we work with the following collection of parabolic rectangles in $\mathbb{R}^{n+1}$. **Definition 1**. Let $1<p<\infty$, $x\in\mathbb R^n$, $L>0$ and $t \in \mathbb{R}$. A parabolic rectangle centered at $(x,t)$ with side length $L$ is $$R = R(x,t,L) = Q(x,L) \times (t-L^p, t+L^p)$$ and its upper and lower parts are $$R^+(\gamma) = Q(x,L) \times (t+\gamma L^p, t+L^p) \quad\text{and}\quad R^-(\gamma) = Q(x,L) \times (t - L^p, t - \gamma L^p) ,$$ where $0 \leq \gamma < 1$ is the time lag. Note that $R^-(\gamma)$ is the reflection of $R^+(\gamma)$ with respect to the time slice $\mathbb{R}^n \times \{t\}$. The spatial side length of a parabolic rectangle $R$ is denoted by $l_x(R)=L$ and the time length by $l_t(R)=2L^p$. We write $R^\pm$ for $R^{\pm}(0)$ in the case with zero time lag. The top of a rectangle $R = R(x,t,L)$ is $Q(x,L) \times\{t+L^p\}$ and the bottom is $Q(x,L) \times\{t-L^p\}$. The $\lambda$-dilate of $R$ with $\lambda>0$ is denoted by $\lambda R = R(x,t,\lambda L)$. The integral average of $f \in L^1(A)$ over a measurable set $A\subset\mathbb{R}^{n+1}$, with $0<|A|<\infty$, is denoted by $$f_A = \fint_A f \, d x\, d t= \frac{1}{\lvert A\rvert} \int_A f(x,t)\, d x\, d t.$$ This section discusses basic properties of parabolic reverse Hölder inequalities. We begin with the definition of the uncentered parabolic maximal functions. The differentials $\, d x\, d t$ in integrals are omitted in the sequel. **Definition 2**. Let $f$ be a locally integrable function. 
The uncentered forward in time and backward in time parabolic maximal functions are defined by $$M^{+}f(x,t) = \sup_{R^-\ni(x,t)} \fint_{R^+} \lvert f \rvert$$ and $$M^{-}f(x,t) = \sup_{R^+\ni(x,t)} \fint_{R^-} \lvert f \rvert.$$ A locally integrable nonnegative function $w$ is called a weight. We begin with the definitions of parabolic reverse Hölder classes $RH^+_q$ and $RH^+_\infty$. It is enough to consider the case with zero time lag, since Lemma [Lemma 6](#lem:RHItimelag){reference-type="ref" reference="lem:RHItimelag"} shows that the time lag does not play any role in the definitions. **Definition 3**. Let $1<q<\infty$. 
A weight $w$ belongs to the parabolic reverse Hölder class $RH^+_q$ if there exists a constant $C=[w]_{RH^+_q}$ such that $$\biggl( \fint_{R^-} w^{q} \biggr)^\frac{1}{q} \leq C \fint_{R^+} w$$ for every parabolic rectangle $R \subset \mathbb R^{n+1}$. If the condition above holds with the time axis reversed, then $w \in RH^-_q$. **Definition 4**. A weight $w$ belongs to the parabolic reverse Hölder class $RH^+_\infty$ if there exists a constant $C=[w]_{RH^+_\infty}$ such that $$\mathop{\mathrm{ess\,sup}}_{R^-} w \leq C \fint_{R^+} w$$ for every parabolic rectangle $R \subset \mathbb R^{n+1}$. 
If the condition above holds with the time axis reversed, then $w \in RH^-_\infty$. We discuss characterizations for $RH^+_\infty$. Compare Proposition [Proposition 5](#rhinfty){reference-type="ref" reference="rhinfty"} $(ii)$ with Theorem [Theorem 7](#thm:RHIchar){reference-type="ref" reference="thm:RHIchar"} $(ii)$ and Proposition [Proposition 5](#rhinfty){reference-type="ref" reference="rhinfty"} $(iii)$ with Theorem [Theorem 7](#thm:RHIchar){reference-type="ref" reference="thm:RHIchar"} $(vi)$ below. **Proposition 5**. *Let $w$ be a weight. The following conditions are equivalent.* (i) *$w \in RH^+_\infty$.* (ii) *There exists a constant $C$ such that $$\frac{w(E)}{w(R^+)} \leq C \frac{\lvert E \rvert}{\lvert R^- \rvert}$$ for every measurable set $E\subset R^-$.* (iii) *There exists a constant $C$ such that $$M^+(w \chi_{R^-})(x,t) \leq C w_{R^+}$$ for every $(x,t)\in R^-$.* *Proof.* First we show that $(i)\Longleftrightarrow (ii)$. Assume that $(i)$ holds and let $E\subset R^-$ be a measurable set. Then $$w(E) = \int_{R^-} w \chi_E \leq \lvert E \rvert \mathop{\mathrm{ess\,sup}}_{R^-} w \leq C w_{R^+} \lvert E \rvert.$$ This proves $(ii)$. Then assume that $(ii)$ holds. Let $E_\lambda = R^- \cap \{w>\lambda\}$, $\lambda>0$. We have $$\lambda \lvert E_\lambda \rvert \leq w(E_\lambda) \leq C w_{R^+} \lvert E_\lambda \rvert ,$$ which implies that $\lambda \leq C w_{R^+}$ when $\lvert E_\lambda \rvert >0$. Let $q>1$. 
By Cavalieri's principle, we obtain $$\begin{aligned} \int_{R^-} w^q &= q \int_0^\infty \lambda^{q-1} \lvert E_\lambda \rvert \, d \lambda = q \int_0^{C w_{R^+}} \lambda^{q-1} \lvert E_\lambda \rvert \, d \lambda \\ &\leq q \lvert R^- \rvert \int_0^{C w_{R^+}} \lambda^{q-1} \, d \lambda = \lvert R^- \rvert (C w_{R^+})^q.\end{aligned}$$ Thus, it holds that $$\biggl( \fint_{R^-} w^q \biggr)^\frac{1}{q} \leq C w_{R^+}$$ for every $q>1$. By letting $q\to\infty$, we obtain $(i)$. Then we show that $(i)\Longleftrightarrow (iii)$. We observe that $(i)$ implies $(iii)$ since $$M^+(w \chi_{R^-})(x,t) = \sup_{P^-\ni (x,t)} \fint_{P^+} w \chi_{R^-} \leq \mathop{\mathrm{ess\,sup}}_{R^-} w \leq C w_{R^+}$$ for every $(x,t) \in R^-$. Then we show that $(iii)$ implies $(i)$. By the Lebesgue differentiation theorem [@KinnunenMyyryYang2022 Lemma 2.3], we have $$w \chi_{R^-}(x,t) \leq M^+(w \chi_{R^-})(x,t)$$ for almost every $(x,t) \in \mathbb{R}^{n+1}$. This together with $(iii)$ implies that $$w(x,t) \leq M^+(w \chi_{R^-})(x,t) \leq C w_{R^+}$$ for almost every $(x,t) \in R^-$. 
By taking the essential supremum over $R^-$, we obtain $(i)$. ◻ Next we show that the parabolic reverse Hölder classes do not depend on the time lag. **Lemma 6**. *Let $1<q\leq\infty$ and $0<\gamma<1$. Then $w$ belongs to $RH_q^+$ if and only if there exists a constant $C$ such that $$\biggl( \fint_{R^-(\gamma)} w^{q} \biggr)^\frac{1}{q} \leq C \fint_{R^+(\gamma)} w$$ for every parabolic rectangle $R \subset \mathbb R^{n+1}$.* *Proof.* Assume that $w\in RH_q^+$. Let $R \subset \mathbb R^{n+1}$ be a parabolic rectangle with side length $L$. Choose $N\in\mathbb{N}$ and $0<\beta\leq1$ such that $1+\gamma = (N + \beta)(1-\gamma)$. Let $$R_0^+(\gamma) = R^-(\gamma) + (0, \beta (1-\gamma) L^p) \quad\text{and}\quad R_k^+(\gamma) = R^-(\gamma) + (0, (k+\beta) (1-\gamma) L^p)$$ for $k=1,\dots,N$. Note that $R_N^+(\gamma) = R^+(\gamma)$. Let $\rho = \beta^{1/p} (1-\gamma)^{1/p}$. 
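The parameters here are fully determined by $\gamma$ and $p$. As a small illustrative aside (the helper name `lag_parameters` is ours, not the paper's), one can solve $1+\gamma=(N+\beta)(1-\gamma)$ with $N\in\mathbb{N}$ and $0<\beta\le1$ explicitly and compute the resulting $\rho$ and the number of subrectangles used in the partition that follows:

```python
import math

def lag_parameters(gamma, p, n):
    """Solve 1 + gamma = (N + beta)(1 - gamma) with N a positive integer
    and 0 < beta <= 1, then return N, beta, rho and the subrectangle count
    ceil(1/rho)^n * ceil(rho^(-p)) used in the partition of R^-(gamma)."""
    s = (1 + gamma) / (1 - gamma)      # s > 1 for 0 < gamma < 1
    N = math.ceil(s) - 1               # take beta = 1 when s is an integer
    beta = s - N
    rho = (beta * (1 - gamma)) ** (1.0 / p)
    count = math.ceil(1 / rho) ** n * math.ceil(rho ** (-p))
    return N, beta, rho, count

N, beta, rho, count = lag_parameters(gamma=0.5, p=2, n=1)
assert N == 2 and beta == 1.0                        # 1.5 = (2 + 1) * 0.5
assert abs((N + beta) * (1 - 0.5) - (1 + 0.5)) < 1e-12
assert N >= 1 and 0 < beta <= 1
```

For instance, with $\gamma=1/2$ this gives $N=2$, $\beta=1$ and $\rho=\sqrt{(1-\gamma)}$, so the lower part is covered by a bounded number of smaller parabolic rectangles.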
We partition $R^-(\gamma)$ into $\lceil \rho^{-1} \rceil^n \lceil \rho^{-p} \rceil$ subrectangles $S^-_{i}$ with spatial side length $\rho L$ and time length $\rho^p L^p$ such that the overlap of $\{S^-_{i}\}_i$ is bounded by $2^{n+1}$. This can be done by dividing each spatial edge of $R^-(\gamma)$ into $\lceil \rho^{-1} \rceil$ equally long subintervals with an overlap bounded by $2$, and the time interval of $R^-(\gamma)$ into $\lceil \rho^{-p} \rceil$ equally long subintervals with an overlap bounded by $2$. Since $\rho^p L^p = (\rho L)^p$, each $S^-_i$ is the lower half of a parabolic rectangle with side length $\rho L$; let $S^+_i = S^-_i + (0, \rho^p L^p)$ denote the corresponding upper half. We observe that every $S^{+}_i$ is contained in $R^+_0(\gamma)$. Then $w\in RH_q^+$ implies that there exists a constant $C_1$ such that $$\begin{aligned} \biggl( \fint_{R^-(\gamma)} w^{q} \biggr)^\frac{1}{q} &\leq \Biggl( \sum_i \frac{\lvert S^-_i \rvert}{\lvert R^-(\gamma) \rvert} \fint_{S^-_i} w^{q} \Biggr)^\frac{1}{q} \leq \biggl( \frac{\rho^{n+p}}{1-\gamma} \biggr)^\frac{1}{q} \sum_i \biggl( \fint_{S^-_i} w^{q} \biggr)^\frac{1}{q} \\ &\leq \bigl( \beta^{\frac{n}{p}+1} (1-\gamma)^\frac{n}{p} \bigr)^\frac{1}{q} C_1 \sum_i \fint_{S^{+}_i} w \\ &= \bigl( \beta^{\frac{n}{p}+1} (1-\gamma)^\frac{n}{p} \bigr)^\frac{1}{q} C_1 \sum_i \frac{\lvert R^+_0(\gamma) \rvert}{\lvert S^{+}_i \rvert} \frac{1}{\lvert R^+_0(\gamma) \rvert} \int_{S^{+}_i} w \\ &\leq \bigl( \beta^{\frac{n}{p}+1} (1-\gamma)^\frac{n}{p} \bigr)^{\frac{1}{q}-1} C_1 2^{n+1} \fint_{R^{+}_0(\gamma)} w = C_2 \fint_{R^{+}_0(\gamma)} w ,\end{aligned}$$ where $C_2 = \bigl( \beta^{\frac{n}{p}+1} (1-\gamma)^\frac{n}{p} \bigr)^{\frac{1}{q}-1} C_1 2^{n+1}$. By iterating the previous argument and Hölder's inequality, we obtain $$\begin{aligned} \fint_{R^{+}_0(\gamma)} w &\leq \biggl( \fint_{R^{+}_0(\gamma)} w^{q} \biggr)^\frac{1}{q} \leq \frac{C_1 2^{n+1}}{(1-\gamma)^{\frac{n }{p} (1-\frac{1}{q}) }} \fint_{R^{+}_1(\gamma)} w \\ &\leq C_3^N \fint_{R^{+}_N(\gamma)} w \leq C_4 \fint_{R^{+}(\gamma)} w ,\end{aligned}$$ where $C_3 = (1-\gamma)^{\frac{n }{p} (\frac{1}{q}-1) } C_1 2^{n+1}$ and $C_4 = C_3^\frac{1+\gamma}{1-\gamma}$.
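The last step above uses only $N \leq N+\beta = \frac{1+\gamma}{1-\gamma}$ together with $C_3 \geq 1$ (which we may assume by enlarging $C_1$ if necessary): $$C_3^{N} \leq C_3^{N+\beta} = C_3^{\frac{1+\gamma}{1-\gamma}} = C_4 .$$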
Thus, we conclude that $$\biggl( \fint_{R^-(\gamma)} w^{q} \biggr)^\frac{1}{q} \leq C_2 \fint_{R^{+}_0(\gamma)} w \leq C_2 C_4 \fint_{R^{+}(\gamma)} w .$$ By letting $q\to\infty$, we obtain the same conclusion for $RH^+_\infty$. Then we prove the other direction. Let $R \subset \mathbb R^{n+1}$ be a parabolic rectangle with side length $L$. We partition $R^-$ into $2^n \lceil (1+\gamma)/(1-\gamma) \rceil$ subrectangles $S^-_{i}(\gamma)$ with spatial side length $L/(1+\gamma)^\frac{1}{p}$ and time length $(1-\gamma) L^p/(1+\gamma)$ such that the overlap of $\{S^-_{i}(\gamma)\}_i$ is bounded by $2^{n+1}$. This can be done by dividing each spatial edge of $R^-$ into $\lceil (1+\gamma)^\frac{1}{p} \rceil = 2$ equally long subintervals, and the time interval of $R^-$ into $\lceil (1+\gamma)/(1-\gamma) \rceil$ equally long subintervals with an overlap bounded by $2$. We observe that every $S^{+}_i(\gamma)$ is contained in $R^+$. Then by the assumption, we have $$\begin{aligned} \biggl( \fint_{R^-} w^{q} \biggr)^\frac{1}{q} &\leq \Biggl( \sum_i \frac{\lvert S^-_i(\gamma) \rvert}{\lvert R^- \rvert} \fint_{S^-_i(\gamma)} w^{q} \Biggr)^\frac{1}{q} \leq \biggl( \frac{1-\gamma}{(1+\gamma)^{\frac{n}{p}+1}} \biggr)^\frac{1}{q} \sum_i \biggl( \fint_{S^-_i(\gamma)} w^{q} \biggr)^\frac{1}{q} \\ &\leq C_1^\frac{1}{q} C \sum_i \fint_{S^{+}_i(\gamma)} w = C_1^\frac{1}{q} C \sum_i \frac{\lvert R^+ \rvert}{\lvert S^{+}_i(\gamma) \rvert} \frac{1}{\lvert R^+ \rvert} \int_{S^{+}_i(\gamma)} w \\ &\leq C_1^{\frac{1}{q}-1} C2^{n+1} \fint_{R^{+}} w ,\end{aligned}$$ where $C_1 = (1-\gamma)/ (1+\gamma)^{\frac{n}{p}+1}$. This completes the proof for $1<q<\infty$. Letting $q\to\infty$ in the argument above, we obtain the claim for $q=\infty$. ◻ # Characterizations of $\bigcup_{q>1} RH^+_q$ This section discusses several characterizations of parabolic reverse Hölder inequalities in terms of conditions that resemble characterizations of the Muckenhoupt $A_\infty$ class in the classical setting. Reverse Hölder classes and Muckenhoupt classes require separate discussion in the parabolic case. The connection between these classes is discussed in Section [5](#section:muckenhoupt){reference-type="ref" reference="section:muckenhoupt"}. The results in this section also hold in the case $p=1$. **Theorem 7**.
*Let $0<\gamma<1$ and $w$ be a weight. The following conditions are equivalent.* (i) *$w\in RH^+_q$ for some $1<q<\infty$.* (ii) *There exist constants $K,\delta >0$ such that $$\frac{w(E)}{w(R^+)} \leq K \biggl( \frac{\lvert E \rvert}{\lvert R^- \rvert} \biggr)^\delta$$ for every parabolic rectangle $R\subset\mathbb{R}^{n+1}$ and measurable set $E \subset R^-$.* (iii) *For every $\beta>0$ there exists $0<\alpha<1$ such that for every parabolic rectangle $R\subset\mathbb{R}^{n+1}$ and every measurable set $E \subset R^-$ satisfying $\lvert E \rvert < \alpha \lvert R^- \rvert$ we have $w(E) < \beta w(R^+)$.* (iv) *There exist $0<\alpha<1$ and $0<\beta<1/2^{n+p}$ such that for every parabolic rectangle $R\subset\mathbb{R}^{n+1}$ and every measurable set $E \subset R^-$ satisfying $\lvert E \rvert < \alpha \lvert R^- \rvert$ we have $w(E) < \beta w(R^+)$.* (v) *There exist $0<\alpha<1$ and $0<\beta<1/2^{n+p}$ such that for every parabolic rectangle $R\subset\mathbb{R}^{n+1}$ we have $$w( R^- \cap \{ \alpha w > w_{R^+} \} ) < \beta w( R^+ ) .$$* (vi) *There exists a constant $C$ such that $$\int_{R^-} M^+(w \chi_{R^-}) \leq C \int_{R^+} w$$ for every parabolic rectangle $R\subset\mathbb{R}^{n+1}$.* (vii) *There exists a constant $C$ such that $$\int_{R^-} w \log^+ \biggl(\frac{w}{w_{R^+}}\biggr) \leq C w(R^+)$$ for every parabolic rectangle $R\subset\mathbb{R}^{n+1}$.* The proof is presented in the subsections below. ## Quantitative measure condition We show $(i)\Longleftrightarrow (ii)$ in Theorem [Theorem 7](#thm:RHIchar){reference-type="ref" reference="thm:RHIchar"}. **Theorem 8**. *Let $w$ be a weight. Then $w\in RH^+_q$ for some $1<q<\infty$ if and only if there exist constants $K,\delta >0$ such that $$\frac{w(E)}{w(R^+)} \leq K \biggl( \frac{\lvert E \rvert}{\lvert R^- \rvert} \biggr)^\delta$$ for every parabolic rectangle $R\subset\mathbb{R}^{n+1}$ and measurable set $E \subset R^-$.* *Proof.* Assume first that $w\in RH^+_q$.
Let $E$ be a measurable subset of $R^-$. By Hölder's inequality, we have $$\begin{aligned} \frac{w(E)}{w(R^+)} &= \frac{\lvert E \vert}{w(R^+)} \fint_{E} w \leq \frac{\lvert E \vert}{w(R^+)} \biggl( \fint_{E} w^q \biggr)^\frac{1}{q} \leq \frac{\lvert E \vert^{1-\frac{1}{q} }}{w(R^+)} \lvert R^- \rvert^\frac{1}{q} \biggl( \fint_{R^-} w^q \biggr)^\frac{1}{q} \\ &\leq \frac{\lvert E \vert^{1-\frac{1}{q} }}{w(R^+)} \lvert R^- \rvert^\frac{1}{q} C \fint_{R^+} w = C \lvert E \vert^{1-\frac{1}{q}} \lvert R^- \rvert^{\frac{1}{q}-1} \leq C \biggl( \frac{\lvert E \vert}{\lvert R^- \vert} \biggr)^{1-\frac{1}{q}} .\end{aligned}$$ Then we prove the other direction. Assume that $$\frac{w(E)}{w(R^+)} \leq K \biggl( \frac{\lvert E \rvert}{\lvert R^- \rvert} \biggr)^\frac{1}{q} ,$$ where $K >0$, $q=\delta^{-1}>0$ and $E$ is a measurable subset of $R^-$. Since the ratio of the Lebesgue measure of $R^-$ to the Lebesgue measure of $E$ is always greater than or equal to $1$, we may assume without loss of generality that the exponent $q$ is strictly greater than $1$. Let $E_\lambda = R^- \cap \{ w > \lambda \}$. We have $\lvert E_\lambda \rvert \leq w(E_\lambda) / \lambda$. It follows that $$\lvert E_\lambda \rvert \leq \frac{1}{\lambda} w(E_\lambda) \leq \frac{K}{\lambda} \biggl( \frac{\lvert E_\lambda \rvert}{\lvert R^- \rvert} \biggr)^\frac{1}{q} w(R^+) ,$$ and hence we get $$\lvert E_\lambda \rvert \leq \frac{K^{q'}}{\lambda^{q'}} \frac{w(R^+)^{q'}}{\lvert R^- \rvert^{q'-1}} ,$$ where $q'=\frac{q}{q-1}$ is the conjugate exponent of $q$.
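Here the second estimate follows from the first by rearranging, $$\lvert E_\lambda \rvert^{1-\frac{1}{q}} \leq \frac{K}{\lambda} \frac{w(R^+)}{\lvert R^- \rvert^{\frac{1}{q}}} ,$$ and raising both sides to the power $q' = \bigl(1-\frac{1}{q}\bigr)^{-1}$, using $\frac{q'}{q} = q'-1$.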
Letting $0<\varepsilon<q'-1$ and applying Cavalieri's principle gives $$\begin{aligned} \int_{R^-} w^{1+\varepsilon} &= (1+\varepsilon) \int_0^\infty \lambda^{\varepsilon} \lvert R^- \cap \{ w > \lambda \} \rvert \, d \lambda\\ &= (1+\varepsilon) \int_0^{w_{R^+}} \lambda^{\varepsilon} \lvert E_\lambda \rvert \, d \lambda+ (1+\varepsilon) \int_{ w_{R^+} }^\infty \lambda^{\varepsilon} \lvert E_\lambda \rvert \, d \lambda\\ &\leq \lvert R^- \rvert \biggl( \frac{w(R^+)}{\lvert R^+ \rvert} \biggr)^{1+\varepsilon} + (1+\varepsilon) K^{q'} \frac{w(R^+)^{q'}}{\lvert R^- \rvert^{q'-1}} \int_{w_{R^+}}^\infty \lambda^{\varepsilon-q'} \, d \lambda\\ &= \lvert R^- \vert \biggl( \frac{w(R^+)}{\lvert R^+ \rvert} \biggr)^{1+\varepsilon} + \frac{(1+\varepsilon) K^{q'}}{q'-1-\varepsilon} \frac{w(R^+)^{q'}}{\lvert R^- \rvert^{q'-1}} \biggl( \frac{w(R^+)}{\lvert R^+ \rvert} \biggr)^{\varepsilon-q'+1} \\ &= \biggl( 1 + \frac{(1+\varepsilon) K^{q'}}{q'-1-\varepsilon} \biggr) \lvert R^- \vert \biggl( \frac{w(R^+)}{\lvert R^+ \rvert} \biggr)^{1+\varepsilon} .\end{aligned}$$ Thus, we obtain $$\biggl( \fint_{R^-} w^{1+\varepsilon} \biggr)^\frac{1}{1+\varepsilon} \leq c \fint_{R^+} w$$ where $c^{1+\varepsilon} = 1 + (1+\varepsilon) K^{q'} / (q'-1-\varepsilon)$. By taking the supremum over all parabolic rectangles, we conclude that $w\in RH^+_{1+\varepsilon}$ and thus the proof is complete. ◻ ## Qualitative measure condition We show $(i)\Longleftrightarrow (iv)$ in Theorem [Theorem 7](#thm:RHIchar){reference-type="ref" reference="thm:RHIchar"}. First we note that Theorem [Theorem 7](#thm:RHIchar){reference-type="ref" reference="thm:RHIchar"} $(ii)$ implies $(iii)$, since if $\lvert E \rvert < \alpha \lvert R^- \rvert$, then $$\begin{aligned} w(E) \leq K \biggl( \frac{\lvert E \rvert}{\lvert R^- \rvert} \biggr)^\delta w(R^+) \leq K \alpha^\delta w(R^+) ,\end{aligned}$$ where we can choose $\alpha$ small enough such that $K\alpha^\delta\leq\beta$.
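For instance, given $\beta>0$ the choice $$\alpha = \min\Bigl\{ \tfrac{1}{2}, \bigl( \beta/K \bigr)^{1/\delta} \Bigr\}$$ works, since $\lvert E \rvert < \alpha \lvert R^- \rvert$ then gives $w(E) < K \alpha^\delta w(R^+) \leq \beta w(R^+)$.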
The implication from $(iii)$ to $(iv)$ is immediate. To prove the reverse implication from $(iv)$ to $(i)$, we need the following lemma. We present the version with a time lag for later use. **Lemma 9**. *Let $0\leq\gamma<1$. Assume that there exist $0<\alpha,\beta<1$ such that for every parabolic rectangle $R$ and every measurable set $E \subset R^-(\gamma)$ satisfying $\lvert E \rvert < \alpha \lvert R^-(\gamma) \rvert$ we have $w(E) < \beta w(R^+(\gamma))$. Then we have the following properties.* (i) *For every parabolic rectangle $R$ and every measurable set $E \subset R^-(\gamma)$ for which $w(E) \geq \beta w(R^+(\gamma))$ it holds that $\lvert E \rvert \geq \alpha \lvert R^-(\gamma) \rvert$.* (ii) *Let $\theta>0$. For every parabolic rectangle $R$ and $0\leq\omega\leq \theta$ it holds that $$w(R^-(\gamma)) \leq C w(R^-(\gamma) + (0, \omega L^p)),$$ where $C\geq1$ depends on $n,p, \gamma, \alpha,\beta$ and $\theta$.* *Proof.* (i) This is simply the contrapositive of the qualitative measure condition. (ii) We start by proving an auxiliary result which states that for every parabolic rectangle $R=R(x,t,L) \subset \mathbb{R}^{n+1}$ we have $$\label{lowertoupper} w(R^-(\gamma))\leq C_0 w(R^+(\gamma))$$ for some constant $C_0$ that depends on $n,p,\alpha$ and $\beta$. Choose $m \in \mathbb{N}$ such that $$\frac{1}{2^{(n+p)m}} <\alpha \leq \frac{1}{2^{(n+p)(m-1)}} .$$ We partition $R^-(\gamma)$ into subrectangles $R^-_i(\gamma)$ with spatial side length $L /2^m$ and time length $(1-\gamma) L^p /2^{pm}$ such that the overlap of $\{R^-_i(\gamma)\}_i$ is bounded by $2$. This can be done by dividing each spatial edge of $R^-(\gamma)$ into $2^m$ equally long pairwise disjoint intervals, and the time interval of $R^-(\gamma)$ into $\lceil 2^{pm} \rceil$ equally long subintervals such that their overlap is bounded by $2$.
By the choice of $m$, we have $\lvert R^-_i(\gamma) \rvert < \alpha \lvert R^-(\gamma) \rvert$, and thus the qualitative measure condition implies $w(R^-_i(\gamma)) < \beta w(R^+(\gamma))$. Then it holds that $$\begin{aligned} w(R^-(\gamma)) &\leq \sum_i w(R^-_i(\gamma)) \leq \sum_i \beta w(R^+(\gamma)) = 2^{nm} \lceil 2^{pm} \rceil \beta w(R^+(\gamma)) \\ &\leq 2^{(n+p)m+1} \beta w(R^+(\gamma)) \leq C_0 w(R^+(\gamma)) ,\end{aligned}$$ where $C_0 = \max\{1, 2^{n+p+1}\beta/\alpha \}$. This finishes the proof of the auxiliary result. We move on to the main claim. Let $\theta>0$ and $R \subset \mathbb{R}^{n+1}$ be a fixed parabolic rectangle of side length $L$. Choose $m \in \mathbb{N}$ such that $$\label{partitionbound} \frac{(1+\gamma) L^p}{2^{pm}} \leq \frac{(1-\gamma) L^p}{2} < \frac{(1+\gamma) L^p}{2^{p(m-1)}} .$$ We partition $R^-(\gamma)$ into subrectangles $R^-_{0,i}(\gamma)$ with spatial side length $L /2^m$ and time length $(1-\gamma) L^p /2^{pm}$ such that the overlap of $\{R^-_{0,i}(\gamma)\}_i$ is bounded by $2$. This can be done by dividing each spatial edge of $R^-(\gamma)$ into $2^m$ equally long pairwise disjoint intervals, and the time interval of $R^-(\gamma)$ into $\lceil 2^{pm} \rceil$ equally long subintervals such that their overlap is bounded by $2$. Our plan is to shift every rectangle $R^-_{0,i}(\gamma)$ forward in time by integer multiples of $(1+\gamma) L^p / 2^{pm}$ until the shifted rectangles are contained in $R^-(\gamma) + (0, \theta L^p )$. To this end, choose $N \in \mathbb{N}$ such that $$(N-1) \frac{(1+\gamma) L^p}{2^{pm}} < \theta L^p \leq N \frac{(1+\gamma) L^p}{2^{pm}} .$$ We first move every rectangle $R^-_{0,i}(\gamma)$ forward in time by $(N-1) (1+\gamma) L^p / 2^{pm}$. Then we shift once more by the distance $(1+\gamma) L^p / 2^{pm}$ those rectangles that are not yet subsets of $R^-(\gamma) + (0, \theta L^p )$. Denote the shifted rectangles thus obtained by $R^-_{N,i}(\gamma)$.
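For later use, we record how the choices of $m$ and $N$ control the number of shifts: the right-hand inequality of [\[partitionbound\]](#partitionbound){reference-type="eqref" reference="partitionbound"} gives $2^{p(m-1)} < 2(1+\gamma)/(1-\gamma)$, and hence $$N \leq 1 + \frac{2^{pm} \theta}{1+\gamma} \quad\text{and}\quad \frac{2^{pm}}{1+\gamma} < \frac{2^{p+1}}{1-\gamma} .$$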
Observe that the choice of $N$ and [\[partitionbound\]](#partitionbound){reference-type="eqref" reference="partitionbound"} ensures that all shifted rectangles $R^-_{N,i}(\gamma)$ are contained in $R^-(\gamma) + (0, \theta L^p)$. By the construction and the bounded overlap of $R^-_{0,i}(\gamma)$, the overlap of $R^-_{N,i}(\gamma)$ is bounded by $4$. Then we apply the auxiliary result [\[lowertoupper\]](#lowertoupper){reference-type="eqref" reference="lowertoupper"} for $R^-_{0,i}(\gamma)$ and $R^+_{0,i}(\gamma)$ and continue applying [\[lowertoupper\]](#lowertoupper){reference-type="eqref" reference="lowertoupper"} for the shifted rectangles a total of $N$ times to obtain $$w(R^-_{0,i}(\gamma)) \leq C_0 w(R^+_{0,i}(\gamma)) \leq C_0^N w(R^-_{N,i}(\gamma)) ,$$ where $$C_0^{N} \leq C_0^{1+ 2^{pm} \theta / (1+\gamma) } \leq C_0^{1 + 2^{p+1} \theta /(1-\gamma)} = C .$$ Therefore, we conclude that $$\begin{aligned} w(R^-(\gamma)) \leq \sum_i w(R^-_{0,i}(\gamma)) \leq C\sum_i w(R^-_{N,i}(\gamma)) \leq 4 C w(R^-(\gamma) + (0, \theta L^p))\end{aligned}$$ by $R^-_{N,i}(\gamma)\subset R^-(\gamma) + (0, \theta L^p)$ and the bounded overlap of $R^-_{N,i}(\gamma)$. Since $C$ is an increasing function with respect to $\theta$, the claim follows. ◻ **Lemma 10**. *Let $w$ be a weight in $\mathbb{R}^{n+1}$. Assume that there exist $0<\alpha<1$ and $0<\beta<1/2^{n+p}$ such that for every parabolic rectangle $R$ and every measurable set $E \subset R^-$ for which $\lvert E \rvert < \alpha \lvert R^- \rvert$ it holds that $w(E) < \beta w(R^+)$. Then there exists $c=c(n,p,\alpha,\beta)$ such that for every parabolic rectangle $R=R(x,t,L) \subset \mathbb{R}^{n+1}$ and $\lambda \geq w_{U^+}$ we have $$w( R^{-} \cap \{ w > \lambda \} ) \leq c \lambda \lvert R \cap \{ w > (1-2^{n+p}\beta) \lambda \} \rvert ,$$ where $U^+ = R^+ + \tau L^p$ with $\tau = 1/(2^p-1)$.* *Proof.* Let $R_0=R(x_0,t_0,L) = Q(x_0,L) \times (t_0-L^p, t_0+L^p)$ and $\lambda \geq w_{U^+_0}$.
Without loss of generality, we may assume that $\alpha < 1/2^{n+p}$. Denote $S^-_0 = R^-_0$. The time length of $S^-_0$ is $l_t(S^-_0) = L^p$. We partition $S^-_0$ by dividing each spatial edge into $2$ equally long intervals. If $$\frac{l_t(S_{0}^-)}{\lfloor 2^{p} \rfloor} < \frac{L^p}{2^{p}},$$ we divide the time interval of $S^-_0$ into $\lfloor 2^{p} \rfloor$ equally long intervals. Otherwise, we divide the time interval of $S^-_0$ into $\lceil 2^{p} \rceil$ equally long intervals. We obtain subrectangles $S^-_1$ of $S^-_0$ with spatial side length $L_1 = l_x(S^-_1) = l_x(S^-_0)/2 = L / 2$ and time length either $$l_t(S^-_1) = \frac{l_t(S^-_0)}{\lfloor 2^{p} \rfloor} = \frac{L^p}{\lfloor 2^{p} \rfloor} \quad \text{or} \quad l_t(S^-_1) = \frac{L^p}{\lceil 2^{p} \rceil} .$$ For every $S^-_1$, there exists a unique rectangle $R^-_1$ with spatial side length $L_1 = L / 2$ and time length $L_1^p = L^p / 2^{p}$ such that $R^-_1$ has the same bottom as $S^-_1$, unless the top of $S^-_1$ intersects with the top of $S^-_0$ in which case we choose $R^-_1$ that has the same top as $S^-_1$. This way every $R_1^-$ is contained in $S^-_0$ and their overlap is bounded by $3$. Consider the corresponding $U^+_1 = R^-_1 + (1+\tau) L_1^p$. We select those rectangles $S^-_1$ for which $$\frac{w(U^+_1)}{\lvert U^+_1 \rvert} = \fint_{U^+_1} w > \lambda$$ and denote the obtained collection by $\{ S^-_{1,j} \}_j$.
If $$\frac{w(U^+_1)}{\lvert U^+_1 \rvert} = \fint_{U^+_1} w \leq \lambda ,$$ we subdivide $S^-_1$ in the same manner as above and select all those subrectangles $S^-_2$ for which $$\frac{w(U^+_2)}{\lvert U^+_2 \rvert} = \fint_{U^+_2} w > \lambda$$ to obtain the family $\{ S^-_{2,j} \}_j$. We continue this selection process recursively. At the $i$th step, we partition the unselected rectangles $S^-_{i-1}$ by dividing each spatial side into $2$ equally long intervals. If $$\label{RHI:JNproof_eq1} \frac{l_t(S_{i-1}^-)}{\lfloor 2^{p} \rfloor} < \frac{L^p}{2^{pi}},$$ we divide the time interval of $S^-_{i-1}$ into $\lfloor 2^{p} \rfloor$ equally long intervals. Otherwise, if $$\label{RHI:JNproof_eq2} \frac{l_t(S_{i-1}^-)}{\lfloor 2^{p} \rfloor} \geq \frac{L^p}{2^{pi}},$$ we divide the time interval of $S^-_{i-1}$ into $\lceil 2^{p} \rceil$ equally long intervals. We obtain subrectangles $S^-_i$.
For every $S^-_i$, there exists a unique rectangle $R^-_i$ with spatial side length $L_i = L / 2^{i}$ and time length $L_i^p = L^p / 2^{pi}$ such that $R^-_i$ has the same bottom as $S^-_i$, unless the top of $S^-_i$ intersects with the top of $S^-_{i-1}$ in which case we choose $R^-_i$ that has the same top as $S^-_i$. This way every $R_i^-$ is contained in $S^-_{i-1}$ and their overlap is bounded by $3$. Consider the corresponding $U^+_i = R^-_i + (1+\tau) L_i^p$. Select those $S^-_i$ for which $$\frac{w(U^+_i)}{\lvert U^+_i \rvert} = \fint_{U^+_i} w > \lambda$$ and denote the obtained collection by $\{ S^-_{i,j} \}_j$. If $$\frac{w(U^+_i)}{\lvert U^+_i \rvert} = \fint_{U^+_i} w \leq \lambda ,$$ we continue the selection process in $S^-_i$. In this manner we obtain a collection $\{S^-_{i,j} \}_{i,j}$ of pairwise disjoint rectangles.
Observe that if [\[RHI:JNproof_eq1\]](#RHI:JNproof_eq1){reference-type="eqref" reference="RHI:JNproof_eq1"} holds, then we have $$l_t(S_i^-) = \frac{l_t(S^-_{i-1})}{\lfloor 2^{p} \rfloor} < \frac{L^p}{2^{pi}}.$$ On the other hand, if [\[RHI:JNproof_eq2\]](#RHI:JNproof_eq2){reference-type="eqref" reference="RHI:JNproof_eq2"} holds, then $$l_t(S_i^-) = \frac{l_t(S^-_{i-1})}{\lceil 2^{p} \rceil} \leq \frac{l_t(S^-_{i-1})}{2^{p}} \leq \dots \leq \frac{L^p}{2^{pi}} .$$ This gives an upper bound $$l_t(S_i^-) \leq \frac{L^p}{2^{pi}}$$ for every $S_i^-$. Suppose that [\[RHI:JNproof_eq2\]](#RHI:JNproof_eq2){reference-type="eqref" reference="RHI:JNproof_eq2"} is satisfied at the $i$th step. Then we have a lower bound for the time length of $S_i^-$, since $$l_t(S^-_i) = \frac{l_t(S_{i-1}^-)}{\lceil 2^{p} \rceil} \geq \frac{\lfloor 2^{p} \rfloor}{\lceil 2^{p} \rceil} \frac{L^p}{2^{pi}} \geq \frac{1}{2} \frac{L^p}{2^{pi}} .$$ On the other hand, if [\[RHI:JNproof_eq1\]](#RHI:JNproof_eq1){reference-type="eqref" reference="RHI:JNproof_eq1"} is satisfied, then $$l_t(S^-_i) = \frac{l_t(S_{i-1}^-)}{\lfloor 2^{p} \rfloor} \geq \frac{l_t(S_{i-1}^-)}{ 2^{p}}.$$ In this case, [\[RHI:JNproof_eq2\]](#RHI:JNproof_eq2){reference-type="eqref" reference="RHI:JNproof_eq2"} has been satisfied at an earlier step $i'$ with $i'< i$. We obtain $$l_t(S^-_i) \geq \frac{l_t(S_{i-1}^-)}{ 2^{p}} \geq \dots \geq \frac{l_t(S_{i'}^-)}{ 2^{p(i-i')}} \geq \frac{1}{2} \frac{L^p}{ 2^{pi}}$$ by using the lower bound for $S_{i'}^-$. Thus, we have $$\frac{1}{2} \frac{L^p}{2^{pi}} \leq l_t(S^-_i) \leq \frac{L^p}{2^{pi}}$$ for every $S^-_i$. We show that $U^+_i$ is contained in $U^-_{i-1} = R^-_{i-1} + \tau L_{i-1}^p$ for a fixed rectangle $S^-_{i-1}$ and for every subrectangle $S^-_i \subset S^-_{i-1}$. By the construction, we have $R^-_i \subset S^-_{i-1} \subset R^-_{i-1}$. Recall that $\tau = 1/(2^p-1)$. 
Then $$\begin{aligned} U^+_i &= R^-_i + (1+\tau) L_i^p \subset S^-_{i-1} + (1+\tau) L_i^p \subset R^-_{i-1} + (1+\tau) L_i^p \\ &= R^-_{i-1} + \biggl( 1+\frac{1}{2^p-1} \biggr) \frac{L^p}{2^{pi}} = R^-_{i-1} + \frac{2^p}{2^p-1} \frac{L^p}{2^{pi}} \\ &= R^-_{i-1} + \frac{1}{2^p-1} \frac{L^p}{2^{p(i-1)}} = R^-_{i-1} + \tau L_{i-1}^p = U^-_{i-1} .\end{aligned}$$ We have a collection $\{ S^-_{i,j} \}_{i,j}$ of pairwise disjoint rectangles. However, the rectangles in the corresponding collection $\{ U^+_{i,j} \}_{i,j}$ may overlap. Thus, we replace it by a maximal subfamily $\{ \widetilde{U}^+_{i,j} \}_{i,j}$ of pairwise disjoint rectangles, which is constructed in the following way. For every $i\in\mathbb{N}$, we may extract a maximal disjoint subcollection $\{ \widehat{U}^+_{i,j} \}_{j}$ from $\{ U^+_{i,j} \}_{j}$ such that for every $U^+_{i,j}$ there is $\widehat{U}^+_{i,j}$ with $$\text{pr}_x(U^+_{i,j}) \subset \text{pr}_x(\widehat{U}^+_{i,j}) \quad\text{and}\quad \text{pr}_t(U^+_{i,j}) \subset 3 \text{pr}_t(\widehat{U}^+_{i,j}) .$$ Here pr$_x$ denotes the projection to $\mathbb R^n$ and pr$_t$ denotes the projection to the time axis. Choose $\{ \widehat{U}^+_{1,j} \}_{j}$ and denote it by $\{ \widetilde{U}^+_{1,j} \}_j$. Then consider the collection $\{ \widehat{U}^+_{2,j} \}_{j}$ and split it according to whether a rectangle intersects some $\widetilde{U}^+_{1,j}$ or not. Select the rectangles $\widehat{U}^+_{2,j}$ that do not intersect any $\widetilde{U}^+_{1,j}$, and denote the obtained collection by $\{ \widetilde{U}^+_{2,j} \}_j$. At the $i$th step, choose those $\widehat{U}^+_{i,j}$ that do not intersect any previously selected $\widetilde{U}^+_{i',j}$, $i' < i$. Hence, we obtain a collection $\{ \widetilde{U}^+_{i,j} \}_{i,j}$ of pairwise disjoint rectangles.
Observe that for every $U^+_{i,j}$ there exists $\widetilde{U}^+_{i',j}$ with $i' < i$ such that $$\label{plussubset} \text{pr}_x(U^+_{i,j}) \subset \text{pr}_x(\widetilde{U}^+_{i',j}) \quad \text{and} \quad \text{pr}_t(U^+_{i,j}) \subset 3 \text{pr}_t(\widetilde{U}^+_{i',j}) .$$ Note that $S^-_{i,j}$ is spatially contained in $U^+_{i,j}$, that is, $\text{pr}_x S^-_{i,j}\subset \text{pr}_x U^+_{i,j}$. In the time direction, we have $$\label{minusplussubset} \text{pr}_t(S^-_{i,j}) \subset ( 3+2\tau ) \text{pr}_t(U^+_{i,j}) ,$$ since $$\frac{(3+2\tau)-1}{2} \, l_t(U^+_{i,j}) = (1+\tau) L_i^p ,$$ so that the dilated time interval reaches down to the bottom of $\text{pr}_t(S^-_{i,j})$. Therefore, by [\[plussubset\]](#plussubset){reference-type="eqref" reference="plussubset"} and [\[minusplussubset\]](#minusplussubset){reference-type="eqref" reference="minusplussubset"}, it holds that $$\label{subsetcubes} \sum_{i,j} \lvert S^-_{i,j} \rvert = \Big\lvert \bigcup_{i,j} S^-_{i,j} \Big\rvert \leq c_1 \sum_{i,j} \lvert \widetilde{U}^+_{i,j} \rvert \quad\text{with}\quad c_1 = 3 ( 3+2\tau ).$$ Let $\sigma = 2^{n+p}\beta$. It holds that $$\begin{aligned} w(U^+_{i,j} \cap \{ w \leq (1-\sigma) w_{U^+_{i,j}} \}) \leq (1-\sigma) w_{U^+_{i,j}} \lvert U^+_{i,j} \rvert = (1-\sigma) w(U^+_{i,j})\end{aligned}$$ from which we obtain $$w(U^+_{i,j} \cap \{ w > (1-\sigma) w_{U^+_{i,j}} \}) \geq \sigma w(U^+_{i,j}) .$$ From the selection criterion, we get $$w(U^+_{i-1,j}) \leq \lambda \lvert U^+_{i-1,j} \rvert = 2^{n+p} \lambda \lvert U^+_{i,j} \rvert < 2^{n+p} w(U^+_{i,j}) .$$ Recall that $U^+_{i,j} \subset U^-_{i-1,j}$.
Thus, we may apply Lemma [Lemma 9](#lemma:timemove){reference-type="ref" reference="lemma:timemove"} (i) to obtain $$\lvert U^+_{i,j} \cap \{ w > (1-\sigma) w_{U^+_{i,j}} \} \rvert \geq \alpha \lvert U^+_{i-1,j} \rvert$$ and since $w_{U^+_{i,j}}>\lambda$ we have $$\label{superlevelestimate} \lvert U^+_{i,j} \cap \{ w > (1-\sigma) \lambda \} \rvert \geq \lvert U^+_{i,j} \cap \{ w > (1-\sigma) w_{U^+_{i,j}} \} \rvert \geq \alpha \lvert U^+_{i-1,j} \rvert .$$ If $(x,t) \in S^-_0 \setminus \bigcup_{i,j} S^-_{i,j}$, then there exists a sequence of subrectangles $S^-_l$ containing $(x,t)$ such that $$\frac{w(U^+_l)}{\lvert U^+_l \rvert} = \fint_{U^+_l} w \leq \lambda$$ and $\lvert S^-_l \rvert \to 0$ as $l \to \infty$. The Lebesgue differentiation theorem [@KinnunenMyyryYang2022 Lemma 2.3] implies that $w(x,t) \leq \lambda$ for almost every $(x,t) \in S^-_0 \setminus \bigcup_{i,j} S^-_{i,j}$. It follows that $$S^-_0 \cap \{ w > \lambda \} \subset \bigcup_{i,j} S^-_{i,j}$$ up to a set of measure zero.
By using this with Lemma [Lemma 9](#lemma:timemove){reference-type="ref" reference="lemma:timemove"} (ii) for $\theta=1+\tau$, the selection criterion, [\[subsetcubes\]](#subsetcubes){reference-type="eqref" reference="subsetcubes"} and [\[superlevelestimate\]](#superlevelestimate){reference-type="eqref" reference="superlevelestimate"}, we obtain $$\begin{aligned} w( S^-_0 \cap \{ w > \lambda \} ) &\leq \sum_{i,j} w( S^-_{i,j} ) \leq \sum_{i,j} w( R^-_{i-1,j} ) \leq C \sum_{i,j} w( U^+_{i-1,j} ) \leq C \lambda \sum_{i,j} \lvert U^+_{i-1,j} \rvert \\ &\leq 2^{n+p+1} C \lambda \sum_{i,j} \lvert S^-_{i,j} \rvert \leq 2^{n+p+1} c_1 C \lambda \sum_{i,j} \lvert \widetilde{U}^+_{i,j} \rvert \leq 2 c_1 C \lambda \sum_{i,j} \lvert \widetilde{U}^+_{i-1,j} \rvert \\ &\leq 2 c_1 C \alpha^{-1} \lambda \sum_{i,j} \lvert \widetilde{U}^+_{i,j} \cap \{ w > (1-\sigma) \lambda \} \rvert \\ &\leq 2 c_1 C \alpha^{-1} \lambda \lvert R_0 \cap \{ w > (1-\sigma) \lambda \} \rvert .\end{aligned}$$ This completes the proof. ◻ The following theorem states that the qualitative measure condition implies the parabolic reverse Hölder inequality. **Theorem 11**. *Let $w$ be a weight in $\mathbb{R}^{n+1}$. Assume that there exist $0<\alpha<1$ and $0<\beta<1/2^{n+p}$ such that for every parabolic rectangle $R$ and every measurable set $E \subset R^-$ satisfying $\lvert E \rvert < \alpha \lvert R^- \rvert$ we have $w(E) < \beta w(R^+)$. Then $w\in RH^+_q$ for some $1<q<\infty$.* *Proof.* We prove the claim first for bounded functions. Thus, assume that $w$ is bounded. Let $R \subset \mathbb R^{n+1}$ be a parabolic rectangle. Let $\varepsilon>0$ be a parameter to be chosen later. We use the same notation as in the statement of Lemma [Lemma 10](#weightmeasure-estimate){reference-type="ref" reference="weightmeasure-estimate"}.
Hence, for $\lambda \geq w_{U^+}$ we have $$w( R^{-} \cap \{ w > \lambda \} ) \leq c \lambda \lvert R \cap \{ w > \sigma \lambda \} \rvert ,$$ where $\sigma=1-2^{n+p}\beta$ and $U^+ = R^+ + \tau L^p$ with $\tau = 1/(2^p-1)$. Applying this with Cavalieri's principle and Lemma [Lemma 9](#lemma:timemove){reference-type="ref" reference="lemma:timemove"} (ii) for $\theta=1+\tau$ (with the constant $C$), we obtain $$\begin{aligned} \int_{R^-} w^{1+\varepsilon} &= \varepsilon \int_0^\infty \lambda^{\varepsilon-1} w( R^{-} \cap \{ w > \lambda \} ) \, d \lambda\\ &= \varepsilon \int_0^{w_{U^+}} \lambda^{\varepsilon-1} w( R^{-} \cap \{ w > \lambda \} ) \, d \lambda+ \varepsilon \int_{w_{U^+}}^\infty \lambda^{\varepsilon-1} w( R^{-} \cap \{ w > \lambda \} ) \, d \lambda\\ &\leq w( R^{-} ) \varepsilon \int_0^{w_{U^+}} \lambda^{\varepsilon-1} \, d \lambda+ c \varepsilon \int_{w_{U^+}}^\infty \lambda^{\varepsilon} \lvert R \cap \{ w > \sigma \lambda \} \rvert \, d \lambda\\ &\leq w( R^{-} ) w_{U^+}^\varepsilon + c \sigma^{-1-\varepsilon} \varepsilon \int_0^\infty \lambda^{\varepsilon} \lvert R \cap \{ w > \lambda \} \rvert \, d \lambda\\ &\leq C \lvert U^{+} \rvert w_{U^+}^{1+\varepsilon} + c \sigma^{-1-\varepsilon} \frac{\varepsilon}{1+\varepsilon} \int_{R} w^{1+\varepsilon} .\end{aligned}$$ By choosing $\varepsilon>0$ small enough, we can absorb the part of the second term over $R^-$ into the left-hand side and obtain $$\begin{aligned} \biggl( 1- \frac{c}{\sigma^{1+\varepsilon}} \frac{\varepsilon}{1+\varepsilon} \biggr) \int_{R^-} w^{1+\varepsilon} \leq C \lvert U^{+} \rvert w_{U^+}^{1+\varepsilon} + \frac{c}{\sigma^{1+\varepsilon}} \frac{\varepsilon}{1+\varepsilon} \int_{R^+} w^{1+\varepsilon} .\end{aligned}$$ Hence, we have $$\label{cavalieri_iteration} \int_{R^-} w^{1+\varepsilon} \leq c_0 \lvert U^{+} \rvert w_{U^+}^{1+\varepsilon} + c_1 \varepsilon \int_{R^+} w^{1+\varepsilon} ,$$ where $$c_0 = \frac{C(1+\varepsilon) }{1-(c \sigma^{-1-\varepsilon} -1) \varepsilon} \quad
\text{and} \quad c_1 = \frac{c \sigma^{-1-\varepsilon} }{1-(c \sigma^{-1-\varepsilon} -1) \varepsilon} .$$ Fix $R_0 = Q(x_0,L) \times (t_0 - L^p, t_0 + L^p) \subset \mathbb R^{n+1}$. We cover $R^-_0$ by $M = 2^{n+1}$ rectangles $R^-_{1,j}$ with spatial side length $l_x = L/2^{1/p}$ and time length $l_t = L^p / 2$. This can be done by dividing each spatial edge of $R^-_0$ into two equally long intervals that may overlap each other, and the time interval of $R^-_0$ into two equally long pairwise disjoint intervals. Observe that the overlap of $R^-_{1,j}$ is bounded by $M/2 = 2^n$. Then consider $R^+_{1,j}$ and cover it in the same way as before by $M$ rectangles $R^-_{2,j}$ with spatial side length $l_x = L/2^{2/p}$ and time length $l_t = L^p / 2^2$. At the $i$th step, cover $R^+_{i-1,j}$ by $M$ rectangles $R^-_{i,j}$ with spatial side length $l_x = L/2^{i/p}$ and time length $l_t = L^p / 2^i$ such that their overlap is bounded by $M/2$. We note that every $R_{i,j}$ and corresponding $U^+_{i,j}$ is contained in $R_0$. 
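The denominators in $c_0$ and $c_1$ above come from clearing the factor $1 - c\sigma^{-1-\varepsilon}\varepsilon/(1+\varepsilon)$; the elementary identity behind this normalization can be checked numerically. The following Python sketch (the coefficient values are hypothetical samples) is illustrative only.

```python
# Check of the elementary identity 1 - A*eps/(1+eps) = (1 - (A-1)*eps) / (1+eps),
# where A stands for the absorbed coefficient c * sigma^(-1-eps); clearing this
# factor produces the denominators 1 - (A-1)*eps appearing in c_0 and c_1.
def identity_error(A, eps):
    lhs = 1.0 - A * eps / (1.0 + eps)
    rhs = (1.0 - (A - 1.0) * eps) / (1.0 + eps)
    return abs(lhs - rhs)

assert all(identity_error(A, eps) < 1e-12
           for A in (1.5, 4.0, 100.0)
           for eps in (1e-4, 1e-2, 0.5))
```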
By iterating [\[cavalieri_iteration\]](#cavalieri_iteration){reference-type="eqref" reference="cavalieri_iteration"} we obtain $$\begin{aligned} \int_{R^-_0} w^{1+\varepsilon} &\leq \sum_{j=1}^{M} \int_{R^-_{1,j}} w^{1+\varepsilon} \leq \sum_{j=1}^{M} c_0 \lvert U^{+}_{1,j} \rvert w_{U^+_{1,j}}^{1+\varepsilon} + \sum_{j=1}^{M} c_1 \varepsilon \int_{R^+_{1,j}} w^{1+\varepsilon} \\ &\leq c_0 \sum_{j=1}^M \lvert U^{+}_{1,j} \rvert w_{U^+_{1,j}}^{1+\varepsilon} + c_1 \varepsilon \sum_{j=1}^{M^2} \int_{R^-_{2,j}} w^{1+\varepsilon} \\ &\leq c_0 \sum_{j=1}^M \lvert U^{+}_{1,j} \rvert w_{U^+_{1,j}}^{1+\varepsilon} + c_1 \varepsilon \sum_{j=1}^{M^2} \biggl( c_0 \lvert U^+_{2,j}\rvert w_{U^+_{2,j}}^{1+\varepsilon} + c_1 \varepsilon \int_{R^+_{2,j}} w^{1+\varepsilon} \biggr) \\ &= c_0 \sum_{j=1}^M \lvert U^{+}_{1,j} \rvert w_{U^+_{1,j}}^{1+\varepsilon} + c_0 c_1 \varepsilon \sum_{j=1}^{M^2} \lvert U^+_{2,j}\rvert w_{U^+_{2,j}}^{1+\varepsilon} + (c_1 \varepsilon)^2 \sum_{j=1}^{M^2} \int_{R^+_{2,j}} w^{1+\varepsilon} \\ &\leq c_0 \sum_{i=1}^N \biggl( (c_1 \varepsilon)^{i-1} \sum_{j=1}^{M^i} \lvert U^+_{i,j} \rvert w^{1+\varepsilon}_{U^+_{i,j}} \biggr) + (c_1 \varepsilon)^N \sum_{j=1}^{M^N} \int_{R^+_{N,j}} w^{1+\varepsilon} \\ &\leq c_0 \sum_{i=1}^N \biggl( (c_1 \varepsilon)^{i-1} \sum_{j=1}^{M^i} \lvert U^+_{i,j} \rvert w^{1+\varepsilon}_{U^+_{i,j}} \biggr) + \biggl( c_1 \varepsilon \frac{M}{2} \biggr)^N \int_{R_0} w^{1+\varepsilon} \\ &= I + II .\end{aligned}$$ We observe that $II$ tends to zero as $N \to \infty$, provided that $\varepsilon < \tfrac{2}{c_1 M} = \tfrac{1}{c_1 2^n}$.
Since $$\lvert U^+_{i,j} \rvert^{-\varepsilon} = L^{-(n+p)\varepsilon} 2^{(\tfrac{n}{p}+1)i \varepsilon } = 2^{1+\varepsilon} L^{n+p} 2^{(\tfrac{n}{p}+1)i \varepsilon } \lvert R_{0} \rvert^{-(1+\varepsilon)},$$ for the inner sum of the first term $I$ we have $$\begin{aligned} \sum_{j=1}^{M^i} \lvert U^+_{i,j} \rvert w^{1+\varepsilon}_{U^+_{i,j}} = \sum_{j=1}^{M^i} \lvert U^+_{i,j} \rvert^{-\varepsilon} \biggl( \int_{U^+_{i,j}} w \biggr)^{1+\varepsilon} \leq 2^{1+\varepsilon} L^{n+p} 2^{(\tfrac{n}{p}+1)i \varepsilon } \biggl(\frac M2\biggr)^i w^{1+\varepsilon}_{R_0} .\end{aligned}$$ Thus, it follows that $$\begin{aligned} I \leq c_0 2^{1+\varepsilon} L^{n+p} w^{1+\varepsilon}_{R_0} \sum_{i=1}^N (c_1 \varepsilon)^{i-1} 2^{(\tfrac{n}{p}+1)i \varepsilon } \biggl(\frac M2\biggr)^i .\end{aligned}$$ We estimate the sum by $$\begin{aligned} \sum_{i=1}^N (c_1 \varepsilon)^{i-1} 2^{(\tfrac{n}{p}+1)i \varepsilon } \biggl(\frac M2\biggr)^i &= 2^{(\tfrac{n}{p}+1) \varepsilon } \frac{M}{2} \sum_{i=0}^{N-1} \biggl( c_1 \varepsilon 2^{(\tfrac{n}{p}+1) \varepsilon } \frac{M}{2} \biggr)^i \\ &\leq 2^{(\tfrac{n}{p}+1) \varepsilon } \frac{M}{2} \frac{1}{1-c_1 \varepsilon 2^{(\tfrac{n}{p}+1) \varepsilon } \frac{M}{2} } \\ &= \frac{2^{(\tfrac{n}{p}+1) \varepsilon +n }}{1-c_1 \varepsilon 2^{(\tfrac{n}{p}+1) \varepsilon + n } } ,\end{aligned}$$ whenever $\varepsilon$ is small enough, for example $$\varepsilon < \frac{2}{c_1 2^{\tfrac{n}{p}+1} M } = \frac{1}{c_1 2^{\tfrac{n}{p}+1+n} } .$$ Then it holds that $$\begin{aligned} \int_{R^-_0} w^{1+\varepsilon} &\leq c_0 2^{1+\varepsilon} L^{n+p} w^{1+\varepsilon}_{R_0} \frac{ 2^{(\tfrac{n}{p}+1) \varepsilon +n } }{1- c_1 \varepsilon 2^{(\tfrac{n}{p}+1) \varepsilon +n } }\end{aligned}$$ for small enough $\varepsilon$.
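The geometric sum estimate above can be checked numerically for sample parameters. The following Python sketch is illustrative only; the values of $n$, $p$, $c_1$ and $\varepsilon$ are hypothetical choices.

```python
# Partial sums sum_{i=1}^N r^(i-1) * s^i, with r = c1*eps and
# s = 2^((n/p+1)*eps) * M/2, stay below the closed-form bound s / (1 - r*s)
# whenever r*s < 1; the parameter values below are hypothetical samples.
def partial_sum(r, s, N):
    return sum(r ** (i - 1) * s ** i for i in range(1, N + 1))

n, p = 2, 2.0
M = 2 ** (n + 1)          # M = 2^(n+1), the covering multiplicity
c1, eps = 3.0, 1e-3       # eps small enough that r*s < 1
r = c1 * eps
s = 2.0 ** ((n / p + 1.0) * eps) * M / 2.0
assert r * s < 1.0
bound = s / (1.0 - r * s)
assert all(partial_sum(r, s, N) <= bound + 1e-9 for N in (1, 5, 50, 500))
```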
Since $w_{R^-_0} \leq C w_{R^+_0}$ for some $C=C(n,p,\alpha,\beta)$ by Lemma [Lemma 9](#lemma:timemove){reference-type="ref" reference="lemma:timemove"} (ii), we conclude that $$\begin{aligned} \biggl( \fint_{R^-_0} w^{1+\varepsilon} \biggr)^\frac{1}{1+\varepsilon} &\leq c_2 \fint_{R_0} w = \frac{c_2}{2} \fint_{R^-_0} w + \frac{c_2 }{2} \fint_{R^+_0} w \leq \frac{c_2 }{2} (C+1) \fint_{R^+_0} w ,\end{aligned}$$ where $$c_2 = 2 \Biggl( c_0 \frac{ 2^{(\tfrac{n}{p}+1) \varepsilon +n } }{1- c_1 \varepsilon 2^{(\tfrac{n}{p}+1) \varepsilon +n}} \Biggr)^\frac{1}{1+\varepsilon} .$$ Hence, the claim holds for bounded functions. For unbounded $w$, we consider truncations $\min\{w,k\}$, $k \in \mathbb N$, and apply the claim with the monotone convergence theorem as $k \to \infty$. This completes the proof. ◻ ## Superlevel measure condition We show $(ii)\Longrightarrow (v) \Longrightarrow (iv)$ in Theorem [Theorem 7](#thm:RHIchar){reference-type="ref" reference="thm:RHIchar"}. We start by showing that $(ii)$ implies $(v)$. **Theorem 12**. *Let $w$ be a weight. Assume that there exist constants $K,\delta >0$ such that $$\frac{w(E)}{w(R^+)} \leq K \biggl( \frac{\lvert E \rvert}{\lvert R^- \rvert} \biggr)^\delta$$ for every parabolic rectangle $R\subset\mathbb{R}^{n+1}$ and measurable set $E \subset R^-$. Then there exist $0<\alpha<1$ and $0<\beta<1/2^{n+p}$ such that for every parabolic rectangle $R$ we have $$w( R^- \cap \{ \alpha w > w_{R^+} \} ) < \beta w( R^+ ) .$$* *Proof.* Denote $E = R^- \cap \{ \alpha w > w_{R^+} \}$. We have $\lvert E \rvert < \alpha w(E)/w_{R^+}$.
Thus, the assumption implies that $$\frac{w(E)}{w(R^+)} \leq K \biggl( \frac{\lvert E \rvert}{\lvert R^- \rvert} \biggr)^\delta < K \biggl( \alpha \frac{w(E)}{w(R^+)} \biggr)^\delta .$$ Since $\lvert E \rvert \leq \lvert R^- \rvert$, the hypothesis for $\delta$ implies the same inequality for every smaller exponent, and thus we may assume that $0<\delta<1$. It follows that $$w(E) < K^\frac{1}{1-\delta} \alpha^\frac{\delta}{1-\delta} w(R^+) .$$ We finish the proof by choosing $\alpha$ small enough such that $$\beta = K^\frac{1}{1-\delta} \alpha^\frac{\delta}{1-\delta} < \frac{1}{2^{n+p}} .$$ ◻ Next we show that $(v)$ implies $(iv)$ in Theorem [Theorem 7](#thm:RHIchar){reference-type="ref" reference="thm:RHIchar"}. **Theorem 13**. *Let $w$ be a weight. Assume that there exist $0<\alpha<1$ and $0<\beta<1/2^{n+p}$ such that for every parabolic rectangle $R$ we have $$w( R^- \cap \{ \alpha w > w_{R^+} \} ) < \beta w( R^+ ) .$$ Then there exist $0<\alpha'<1$ and $0<\beta'<1/2^{n+p}$ such that for every parabolic rectangle $R$ and every measurable set $E \subset R^-$ for which $\lvert E \rvert < \alpha' \lvert R^- \rvert$ it holds that $w(E) < \beta' w(R^+)$.* *Proof.* Let $E \subset R^-$ be a measurable set such that $\lvert E \rvert < \alpha' \lvert R^- \rvert$, where $\alpha' < (1/2^{n+p}-\beta)\alpha$. It follows that $$\begin{aligned} w( E ) &\leq w( E \cap \{\alpha w > w_{R^+} \} ) + w( E \cap \{\alpha w \leq w_{R^+} \} ) \\ &\leq \beta w( R^+ ) + \frac{w_{R^+}}{\alpha} \lvert E \rvert = \biggl( \beta + \frac{1}{\alpha} \frac{\lvert E \rvert}{\lvert R^+ \rvert} \biggr) w(R^+) \\ &< \biggl( \beta + \frac{\alpha'}{\alpha} \biggr) w(R^+) = \beta' w(R^+),\end{aligned}$$ where $\beta' = \beta + \frac{\alpha'}{\alpha} < 1/2^{n+p}$. ◻ ## Fujii--Wilson condition We show $(i)\Longrightarrow (vi) \Longrightarrow (vii) \Longrightarrow (iii)$ in Theorem [Theorem 7](#thm:RHIchar){reference-type="ref" reference="thm:RHIchar"}. We begin with the boundedness of the parabolic maximal function on $L^q$. **Lemma 14**. *Let $1<q<\infty$. Assume that $f\in L^1_{\mathrm{loc}}(\mathbb{R}^{n+1})$.
Then there exists a constant $c$ such that $$\begin{aligned} \int_{\mathbb{R}^{n+1}} (M^+f)^q \leq c \int_{\mathbb{R}^{n+1}} \lvert f \rvert^q .\end{aligned}$$* *Proof.* Let $\lambda>0$ and $E = \{ M^+f > \lambda \}$. For every $z \in E$ there exists a parabolic rectangle $R_{z}$ such that $z \in R^-_{z}$ and $$\fint_{R^+_{z}} \lvert f \rvert > \lambda .$$ By an argument similar to the proof of the Vitali covering theorem, we obtain a countable collection $\{R_i\}_i$ of pairwise disjoint parabolic rectangles such that $$E \subset \bigcup_{z\in E} R_z \subset \bigcup_{i=1}^\infty 5R_i .$$ Thus, we have $$\begin{aligned} \lvert E \rvert &\leq \sum_i \lvert 5 R_i \rvert = 5^{n+p} \sum_i \lvert R_i \rvert = 5^{n+p} 2 \sum_i \lvert R_i^+ \rvert \leq \frac{5^{n+p} 2}{\lambda} \sum_i \int_{R^+_{i}} \lvert f \rvert \leq \frac{5^{n+p} 2}{\lambda} \int_{\mathbb{R}^{n+1}} \lvert f \rvert .\end{aligned}$$ In other words, $M^+$ is bounded from $L^{1}$ to $L^{1,\infty}$. Moreover, we observe that $M^{+}$ is bounded on $L^{\infty}$, since $$\lVert M^{+} f \rVert_{L^\infty(\mathbb{R}^{n+1})} \leq \lVert f \rVert_{L^\infty(\mathbb{R}^{n+1})} .$$ The Marcinkiewicz interpolation theorem implies that $M^{+}$ is bounded on $L^{q}$; in particular, $$\begin{aligned} \int_{\mathbb{R}^{n+1}} (M^{+}f)^q &\leq \frac{ q 2^{q+1} 5^{n+p} }{q-1} \int_{\mathbb{R}^{n+1}} \lvert f \rvert^q .\end{aligned}$$ ◻ The next theorem states that the parabolic reverse Hölder inequality implies the parabolic Fujii--Wilson condition. **Theorem 15**. *Let $1<q<\infty$. Assume that $w\in RH^+_q$.
Then there exists a constant $C$ such that $$\int_{R^-} M^+ (w \chi_{R^-}) \leq C \int_{R^+} w$$ for every parabolic rectangle $R\subset\mathbb{R}^{n+1}$.* *Proof.* By Hölder's inequality, Lemma [Lemma 14](#lem:MfbddLq){reference-type="ref" reference="lem:MfbddLq"} (with the constant $c$) and the assumption, we obtain $$\begin{aligned} \fint_{R^-} M^+(w \chi_{R^-}) \leq \biggl( \fint_{R^-} \bigl( M^+(w \chi_{R^-}) \bigr)^q \biggr)^\frac{1}{q} \leq c \biggl( \fint_{R^-} w^q \biggr)^\frac{1}{q} \leq c C \fint_{R^+} w .\end{aligned}$$ This completes the proof. ◻ The following lemma is a reverse weak type estimate for the parabolic maximal function. **Lemma 16**. *Let $w$ be a weight. Assume that there exists a constant $C$ such that $w(R^-)\leq C w(R^+)$ for every parabolic rectangle $R\subset\mathbb{R}^{n+1}$. Then there exists a constant $c$ such that for every parabolic rectangle $R \subset \mathbb{R}^{n+1}$ and $\lambda \geq w_{R^+}$ we have $$w(R^- \cap \{w>\lambda\}) \leq c \lambda \lvert R^- \cap \{ M^+ w > \lambda \}\rvert .$$* *Proof.* Let $R_0=R(x_0,t_0,L) = Q(x_0,L) \times (t_0-L^p, t_0+L^p)$ and $\lambda \geq w_{R^+_0}$. Denote $S^-_0 = R^-_0$. The time length of $S^-_0$ is $l_t(S^-_0) = L^p$. We partition $S^-_0$ by dividing each spatial edge into $2$ equally long intervals. If $$\frac{l_t(S_{0}^-)}{\lfloor 2^{p} \rfloor} < \frac{L^p}{2^{p}},$$ we divide the time interval of $S^-_0$ into $\lfloor 2^{p} \rfloor$ equally long intervals. Otherwise, we divide the time interval of $S^-_0$ into $\lceil 2^{p} \rceil$ equally long intervals. We obtain subrectangles $S^-_1$ of $S^-_0$ with spatial side length $L_1 = l_x(S^-_1) = l_x(S^-_0)/2 = L / 2$ and time length either $$l_t(S^-_1) = \frac{l_t(S^-_0)}{\lfloor 2^{p} \rfloor} = \frac{L^p}{\lfloor 2^{p} \rfloor} \quad \text{or} \quad l_t(S^-_1) = \frac{L^p}{\lceil 2^{p} \rceil} .$$ For every $S^-_1$, there exists a unique rectangle $R_1$ with spatial side length $L_1 = L / 2$ and time length $2 L_1^p = 2 L^p / 2^{p}$ such that $R_1$ has the same bottom as $S^-_1$.
We select those rectangles $S^-_1$ for which $$\frac{w(R^+_1)}{\lvert R^+_1 \rvert} = \fint_{R^+_1} w > \lambda$$ and denote the obtained collection by $\{ S^-_{1,j} \}_j$. If $$\frac{w(R^+_1)}{\lvert R^+_1 \rvert} = \fint_{R^+_1} w \leq \lambda ,$$ we subdivide $S^-_1$ in the same manner as above and select all those subrectangles $S^-_2$ for which $$\frac{w(R^+_2)}{\lvert R^+_2 \rvert} = \fint_{R^+_2} w > \lambda$$ to obtain the family $\{ S^-_{2,j} \}_j$. We continue this selection process recursively. At the $i$th step, we partition the unselected rectangles $S^-_{i-1}$ by dividing each spatial side into $2$ equally long intervals.
If $$\label{weaktype:JNproof_eq1} \frac{l_t(S_{i-1}^-)}{\lfloor 2^{p} \rfloor} < \frac{L^p}{2^{pi}},$$ we divide the time interval of $S^-_{i-1}$ into $\lfloor 2^{p} \rfloor$ equally long intervals. Otherwise, if $$\label{weaktype:JNproof_eq2} \frac{l_t(S_{i-1}^-)}{\lfloor 2^{p} \rfloor} \geq \frac{L^p}{2^{pi}},$$ we divide the time interval of $S^-_{i-1}$ into $\lceil 2^{p} \rceil$ equally long intervals. We obtain subrectangles $S^-_i$. For every $S^-_i$, there exists a unique rectangle $R_i$ with spatial side length $L_i = L / 2^{i}$ and time length $2 L_i^p = 2 L^p / 2^{pi}$ such that $R_i$ has the same bottom as $S^-_i$. Select those $S^-_i$ for which $$\frac{w(R^+_i)}{\lvert R^+_i \rvert} = \fint_{R^+_i} w > \lambda$$ and denote the obtained collection by $\{ S^-_{i,j} \}_j$. If $$\frac{w(R^+_i)}{\lvert R^+_i \rvert} = \fint_{R^+_i} w \leq \lambda ,$$ we continue the selection process in $S^-_i$. In this manner we obtain a collection $\{S^-_{i,j} \}_{i,j}$ of pairwise disjoint rectangles.
Observe that if [\[weaktype:JNproof_eq1\]](#weaktype:JNproof_eq1){reference-type="eqref" reference="weaktype:JNproof_eq1"} holds, then we have $$l_t(S_i^-) = \frac{l_t(S^-_{i-1})}{\lfloor 2^{p} \rfloor} < \frac{L^p}{2^{pi}}.$$ On the other hand, if [\[weaktype:JNproof_eq2\]](#weaktype:JNproof_eq2){reference-type="eqref" reference="weaktype:JNproof_eq2"} holds, then $$l_t(S_i^-) = \frac{l_t(S^-_{i-1})}{\lceil 2^{p} \rceil} \leq \frac{l_t(S^-_{i-1})}{2^{p}} \leq \dots \leq \frac{L^p}{2^{pi}} .$$ This gives an upper bound $$l_t(S_i^-) \leq \frac{L^p}{2^{pi}}$$ for every $S_i^-$. Suppose that [\[weaktype:JNproof_eq2\]](#weaktype:JNproof_eq2){reference-type="eqref" reference="weaktype:JNproof_eq2"} is satisfied at the $i$th step. Then we have a lower bound for the time length of $S_i^-$, since $$l_t(S^-_i) = \frac{l_t(S_{i-1}^-)}{\lceil 2^{p} \rceil} \geq \frac{\lfloor 2^{p} \rfloor}{\lceil 2^{p} \rceil} \frac{L^p}{2^{pi}} \geq \frac{1}{2} \frac{L^p}{2^{pi}} .$$ On the other hand, if [\[weaktype:JNproof_eq1\]](#weaktype:JNproof_eq1){reference-type="eqref" reference="weaktype:JNproof_eq1"} is satisfied, then $$l_t(S^-_i) = \frac{l_t(S_{i-1}^-)}{\lfloor 2^{p} \rfloor} \geq \frac{l_t(S_{i-1}^-)}{ 2^{p}}.$$ In this case, [\[weaktype:JNproof_eq2\]](#weaktype:JNproof_eq2){reference-type="eqref" reference="weaktype:JNproof_eq2"} has been satisfied at an earlier step $i'$ with $i'< i$. We obtain $$l_t(S^-_i) \geq \frac{l_t(S_{i-1}^-)}{ 2^{p}} \geq \dots \geq \frac{l_t(S_{i'}^-)}{ 2^{p(i-i')}} \geq \frac{1}{2} \frac{L^p}{ 2^{pi}}$$ by using the lower bound for $S_{i'}^-$. Thus, we have $$\frac{1}{2} \frac{L^p}{2^{pi}} \leq l_t(S^-_i) \leq \frac{L^p}{2^{pi}}$$ for every $S^-_i$. 
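The floor-and-ceiling bookkeeping above can also be simulated numerically. The following Python sketch (illustrative only; the parameters are arbitrary samples) reproduces the subdivision rule and checks the two-sided bound $\tfrac{1}{2} L^p/2^{pi} \leq l_t(S^-_i) \leq L^p/2^{pi}$.

```python
import math

# Simulate the time-length subdivision: divide by floor(2^p) when that keeps
# l_t below L^p / 2^(p*i), otherwise by ceil(2^p), and record the pair
# (l_t, L^p / 2^(p*i)) at every step to check the sandwich bound.
def simulate(p, L, steps):
    lt = L ** p
    out = []
    for i in range(1, steps + 1):
        target = L ** p / 2.0 ** (p * i)
        if lt / math.floor(2.0 ** p) < target:
            lt /= math.floor(2.0 ** p)
        else:
            lt /= math.ceil(2.0 ** p)
        out.append((lt, target))
    return out

for p in (1.2, 2.0, 2.5):
    for lt, target in simulate(p, 1.0, 12):
        assert 0.5 * target * (1 - 1e-9) <= lt <= target * (1 + 1e-9)
```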
If $(x,t) \in S^-_0 \setminus \bigcup_{i,j} S^-_{i,j}$, then there exists a sequence of subrectangles $S^-_l$ containing $(x,t)$ such that $$\frac{w(R^+_l)}{\lvert R^+_l \rvert} = \fint_{R^+_l} w \leq \lambda$$ and $\lvert S^-_l \rvert \to 0$ as $l \to \infty$. The Lebesgue differentiation theorem [@KinnunenMyyryYang2022 Lemma 2.3] implies that $w \leq \lambda$ almost everywhere in $S^-_0 \setminus \bigcup_{i,j} S^-_{i,j}$. It follows that $$S^-_0 \cap \{ w > \lambda \} \subset \bigcup_{i,j} S^-_{i,j}$$ up to a set of measure zero. By the assumption, we have $w(R^-_{i-1,j}) \leq C w(R^+_{i-1,j})$ for every $R_{i-1,j}$.
Since $$\lambda < \fint_{R^+_{i,j}} w \leq M^+ w(x,t)$$ for every $(x,t)\in S^-_{i,j} \subset R^-_{i,j}$, we conclude that $$\begin{aligned} w(S^-_0 \cap \{ w > \lambda \}) &\leq \sum_{i,j} w(S^-_{i,j}) \leq \sum_{i,j} w(R^-_{i-1,j}) \leq C \sum_{i,j} w(R^+_{i-1,j})\\ &\leq C \lambda \sum_{i,j} \lvert {R^+_{i-1,j}} \rvert \leq 2^{n+p+1} C \lambda \sum_{i,j} \lvert {S^-_{i,j}} \rvert \\ &= 2^{n+p+1} C \lambda \sum_{i,j} \lvert {S^-_{i,j}} \cap \{M^+ w > \lambda\} \rvert \\ &\leq 2^{n+p+1} C \lambda \lvert S^-_0 \cap \{M^+ w > \lambda\} \rvert .\end{aligned}$$ This completes the proof. ◻ We observe that the parabolic Fujii--Wilson condition implies the following parabolic logarithmic condition. **Theorem 17**. *Let $w$ be a weight. Assume that there exists a constant $C_1$ such that $$\int_{R^-} M^+ (w \chi_{R^-}) \leq C_1 \int_{R^+} w$$ for every parabolic rectangle $R\subset\mathbb{R}^{n+1}$. Then there exists a constant $C_2$ such that $$\int_{R^-} w \log^+ \biggl(\frac{w}{w_{R^+}}\biggr) \leq C_2 w(R^+)$$ for every parabolic rectangle $R\subset\mathbb{R}^{n+1}$.* *Proof.* Since the assumption implies $w(R^-)\leq C_1 w(R^+)$ for every parabolic rectangle $R\subset\mathbb{R}^{n+1}$, we observe that Lemma [Lemma 16](#lem:reverseweaktype){reference-type="ref" reference="lem:reverseweaktype"} is applicable.
Thus, it follows that $$\begin{aligned} \int_{R^-} w \log^+ \biggl(\frac{w}{w_{R^+}}\biggr) &= \int_{R^-\cap \{w>w_{R^+}\}} \biggl(w \int_{w_{R^+}}^{w} \frac{1}{\lambda} \, d \lambda\biggr)\\ &= \int_{w_{R^+}}^\infty\biggl( \frac{1}{\lambda} \int_{R^- \cap \{w>\lambda\}} w\biggr)\, d \lambda\\ &= \int_{w_{R^+}}^\infty \frac{1}{\lambda} w(R^- \cap \{w>\lambda\}) \, d \lambda\\ &\leq c \int_{w_{R^+}}^\infty \lvert R^- \cap \{ M^+ (w \chi_{R^-}) > \lambda \}\rvert \, d \lambda \\ &\leq c \int_{R^-} M^+ (w \chi_{R^-}) \leq c C_1 \int_{R^+} w .\end{aligned}$$ ◻ The next theorem shows that the parabolic logarithmic condition implies the qualitative measure condition. This completes the proof of Theorem [Theorem 7](#thm:RHIchar){reference-type="ref" reference="thm:RHIchar"}. **Theorem 18**. *Let $w$ be a weight. Assume that there exists a constant $C$ such that $$\int_{R^-} w \log^+ \biggl(\frac{w}{w_{R^+}}\biggr) \leq C w(R^+)$$ for every parabolic rectangle $R\subset\mathbb{R}^{n+1}$. Then for every $\beta>0$ there exists $0<\alpha<1$ such that for every parabolic rectangle $R$ and every measurable set $E \subset R^-$ satisfying $\lvert E \rvert < \alpha \lvert R^- \rvert$ we have $w(E) < \beta w(R^+)$.* *Proof.* Let $\beta>0$. Choose $\sigma>1$ such that $C/\log \sigma \leq \beta/2$ and $0<\alpha<1$ such that $\sigma \alpha \leq \beta/2$. Let $E\subset R^-$ be a measurable set with $\lvert E \rvert < \alpha \lvert R^- \rvert$.
Then we have $$\begin{aligned} w(E \cap \{ w \leq \sigma w_{R^+} \}) \leq \sigma w_{R^+} \lvert E \rvert < \sigma \alpha w(R^+) \leq \frac{\beta}{2} w(R^+)\end{aligned}$$ and $$\begin{aligned} w(E \cap \{ w > \sigma w_{R^+} \}) &= \frac{1}{\log\sigma} \int_{E \cap \{w > \sigma w_{R^+}\}} w \log \sigma \leq \frac{1}{\log\sigma} \int_{E \cap \{w > \sigma w_{R^+}\}} w \log \biggl(\frac{w}{w_{R^+}}\biggr) \\ &\leq \frac{1}{\log\sigma} \int_{R^- \cap \{w > w_{R^+}\}} w \log \biggl(\frac{w}{w_{R^+}}\biggr) = \frac{1}{\log\sigma} \int_{R^-} w \log^+ \biggl(\frac{w}{w_{R^+}}\biggr) \\ &\leq \frac{C}{\log\sigma} w(R^+) \leq \frac{\beta}{2} w(R^+) .\end{aligned}$$ This shows that $w(E) < \beta w(R^+)$. ◻ # Parabolic Gehring lemma In this section, we show the parabolic Gehring lemma which states that the parabolic reverse Hölder inequality is self-improving. In particular, it implies that if $w\in RH^+_q$, then $w \in RH^+_{q+\varepsilon}$ for some $\varepsilon>0$. The results in this section also hold in the case $p=1$. The next lemma is the main ingredient in the proof of the parabolic Gehring lemma. **Lemma 19**. *Let $1<q<\infty$ and $w$ be a weight. 
Assume that there exists a constant $C_1>1$ such that for every parabolic rectangle $R\subset\mathbb{R}^{n+1}$ and $\lambda\geq w_{R^+}$ we have $$\int_{R^- \cap \{w>\lambda\}} w^q \leq C_1 \lambda^{q-1} \int_{R \cap \{w>\lambda\}} w .$$ Then there exist $\varepsilon=\varepsilon(n,p,q,C_1)>0$ and $C=C(n,p,q,C_1)$ such that for every parabolic rectangle $R\subset\mathbb{R}^{n+1}$ it holds that $$\int_{R^-} w^{q+\varepsilon} \leq C \biggl( \mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$} \vcenter{\hbox{$\textstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$} \vcenter{\hbox{$\scriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% \!\int_{R^+} w \biggr)^\varepsilon \int_{R} w^q .$$* *Proof.* We prove the claim first for bounded functions. Thus, assume that $w$ is bounded. Let $R \subset \mathbb{R}^{n+1}$ be a parabolic rectangle and $\lambda_0 = w_{R^+}$. Let $\varepsilon>0$ be a parameter to be chosen later.
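For the reader's convenience, we recall the form of Cavalieri's principle that is applied twice below (a standard identity; here $\mu$ and $\delta>0$ stand for the particular measure and exponent chosen in the proof): for a nonnegative measure $\mu$ and a level $\lambda_0 \geq 0$, we have $$\int_{\{w>\lambda_0\}} w^{\delta} \, d \mu = \delta \int_{\lambda_0}^\infty \lambda^{\delta -1} \mu(\{w>\lambda\}) \, d \lambda + \lambda_0^{\delta} \mu(\{w>\lambda_0\}) ,$$ which follows by writing $w^\delta = \lambda_0^\delta + \delta \int_{\lambda_0}^{w} \lambda^{\delta-1} \, d \lambda$ on the set $\{w>\lambda_0\}$ and applying Fubini's theorem.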
We apply Cavalieri's principle with the exponent $\varepsilon$ and the measure $d\mu = w^q \, d x\, d t$ to obtain $$\begin{aligned} \int_{R^- \cap \{w>\lambda_0\}} w^{q+\varepsilon} &= \int_{R^- \cap \{w>\lambda_0\}} w^{\varepsilon} \, d \mu\\ &= \varepsilon \int_{\lambda_0}^\infty \biggl(\lambda^{\varepsilon -1} \int_{R^- \cap \{w>\lambda\}} w^q\biggr) \, d \lambda+ \lambda_0^{\varepsilon} \int_{R^- \cap \{w>\lambda_0\}} w^q .\end{aligned}$$ The assumption implies $$\int_{\lambda_0}^\infty\biggl(\lambda^{\varepsilon -1} \int_{R^- \cap \{w>\lambda\}} w^q\biggr) \, d \lambda\leq C_1 \int_{\lambda_0}^\infty \biggl(\lambda^{q+\varepsilon -2} \int_{R \cap \{w>\lambda\}} w\biggr)\, d \lambda.$$ By Cavalieri's principle with the exponent $q+\varepsilon -1$ and $\, d \mu= w \, d x\, d t$, we get $$\begin{aligned} \int_{\lambda_0}^\infty \biggl(\lambda^{q+\varepsilon -2} \int_{R \cap \{w>\lambda\}} w\biggr) \, d \lambda\leq \frac{1}{q+\varepsilon -1} \int_{R \cap \{w>\lambda_0\}} w^{q+\varepsilon} .\end{aligned}$$ Consequently, $$\begin{aligned} \int_{R^- \cap \{w>\lambda_0\}} w^{q+\varepsilon } \leq \frac{C_1 \varepsilon}{q+\varepsilon -1} \int_{R \cap \{w>\lambda_0\}} w^{q+\varepsilon} + \lambda_0^{\varepsilon} \int_{R^- \cap \{w>\lambda_0\}} w^q .\end{aligned}$$ By the boundedness of $w$ and choosing $\varepsilon>0$ to be small enough, we can absorb the integral over $R^- \cap \{w>\lambda_0\}$ in the first term into the left-hand side to obtain $$\begin{aligned} \biggl( 1- \frac{C_1 \varepsilon}{q+\varepsilon -1} \biggr) \int_{R^- \cap \{w>\lambda_0\}} w^{q+\varepsilon } \leq \frac{C_1 \varepsilon}{q+\varepsilon -1} \int_{R^+ \cap \{w>\lambda_0\}} w^{q+\varepsilon} + \lambda_0^{\varepsilon} \int_{R^- \cap \{w>\lambda_0\}} w^q .\end{aligned}$$ Hence, we have $$\begin{aligned} \int_{R^- \cap \{w>\lambda_0\}} w^{q+\varepsilon} \leq c_0 \lambda_0^{\varepsilon} \int_{R^- \cap \{w>\lambda_0\}} w^q + c_1 \varepsilon \int_{R^+ \cap \{w>\lambda_0\}} w^{q+\varepsilon}
,\end{aligned}$$ where $$c_0 = \frac{q+\varepsilon -1}{q+\varepsilon -1- C_1\varepsilon } \quad\text{and}\quad c_1 = \frac{C_1}{q+\varepsilon -1- C_1\varepsilon } .$$ We combine the previous estimate with $$\begin{aligned} \int_{R^-} w^{q+\varepsilon} &= \int_{R^- \cap \{w>\lambda_0\}} w^{q+\varepsilon} + \int_{R^- \cap \{w\leq \lambda_0\}} w^{q+\varepsilon}\\ & \leq \int_{R^- \cap \{w>\lambda_0\}} w^{q+\varepsilon} + \lambda_0^{\varepsilon} \int_{R^- \cap \{w\leq \lambda_0\}} w^q\end{aligned}$$ to obtain $$\label{iteration-estimate} \int_{R^-} w^{q+\varepsilon} \leq c_0 w_{R^+}^\varepsilon \int_{R^-} w^q + c_1 \varepsilon \int_{R^+} w^{q+\varepsilon} .$$ Fix $R_0 = Q(x_0,L) \times (t_0 - L^p, t_0 + L^p) \subset \mathbb{R}^{n+1}$. We cover $R^-_0$ by $M = 2^{n+1}$ rectangles $R^-_{1,j}$ with spatial side length $l_x = L/2^{1/p}$ and time length $l_t = L^p / 2$. This can be done by dividing each spatial edge of $R^-_0$ into two equally long intervals that may overlap each other, and the time interval of $R^-_0$ into two equally long pairwise disjoint intervals. Observe that the overlap of $R^-_{1,j}$ is bounded by $M/2 = 2^n$. Then consider $R^+_{1,j}$ and cover it in the same way as before by $M$ rectangles $R^-_{2,j}$ with spatial side length $l_x = L/2^{2/p}$ and time length $l_t = L^p / 2^2$. At the $i$th step, cover $R^+_{i-1,j}$ by $M$ rectangles $R^-_{i,j}$ with spatial side length $l_x = L/2^{i/p}$ and time length $l_t = L^p / 2^i$ such that their overlap is bounded by $M/2$. Note that every $R_{i,j}$ is contained in $R_0$. 
Then iterating [\[iteration-estimate\]](#iteration-estimate){reference-type="eqref" reference="iteration-estimate"} we obtain $$\begin{aligned} \int_{R^-_0} w^{q+\varepsilon} &\leq \sum_{j=1}^{M} \int_{R^-_{1,j}} w^{q+\varepsilon} \leq \sum_{j=1}^{M} c_0 w_{R^+_{1,j}}^\varepsilon \int_{R^{-}_{1,j}} w^q + \sum_{j=1}^{M} c_1 \varepsilon \int_{R^+_{1,j}} w^{q+\varepsilon} \\ &\leq c_0 \sum_{j=1}^M w_{R^+_{1,j}}^\varepsilon \int_{R^{-}_{1,j}} w^q + c_1 \varepsilon \sum_{j=1}^{M^2} \int_{R^-_{2,j}} w^{q+\varepsilon} \\ &\leq c_0 \sum_{j=1}^M w_{R^+_{1,j}}^\varepsilon \int_{R^{-}_{1,j}} w^q + c_1 \varepsilon \sum_{j=1}^{M^2} \biggl( c_0 w_{R^+_{2,j}}^\varepsilon \int_{R^{-}_{2,j}} w^q + c_1 \varepsilon \int_{R^+_{2,j}} w^{q+\varepsilon} \biggr) \\ &= c_0 \sum_{j=1}^M w_{R^+_{1,j}}^\varepsilon \int_{R^{-}_{1,j}} w^q + c_0 c_1 \varepsilon \sum_{j=1}^{M^2} w_{R^+_{2,j}}^\varepsilon \int_{R^{-}_{2,j}} w^q + (c_1 \varepsilon)^2 \sum_{j=1}^{M^2} \int_{R^+_{2,j}} w^{q+\varepsilon} \\ &\leq c_0 \sum_{i=1}^N \biggl( (c_1 \varepsilon)^{i-1} \sum_{j=1}^{M^i} w_{R^+_{i,j}}^\varepsilon \int_{R^{-}_{i,j}} w^q \biggr) + (c_1 \varepsilon)^N \sum_{j=1}^{M^N} \int_{R^+_{N,j}} w^{q+\varepsilon} \\ &\leq c_0 \sum_{i=1}^N \biggl( (c_1 \varepsilon)^{i-1} \sum_{j=1}^{M^i} w_{R^+_{i,j}}^\varepsilon \int_{R^{-}_{i,j}} w^q \biggr) + \biggl( c_1 \varepsilon \frac{M}{2} \biggr)^N \int_{R_0} w^{q+\varepsilon} \\ &= I + II .\end{aligned}$$ We observe that $II$ tends to zero as $N \to \infty$ whenever $\varepsilon < 2/(c_1 M) = 1/(c_1 2^n)$, since $w$ is bounded by the initial assumption.
For the inner sum of the first term $I$, we have $$\begin{aligned} \sum_{j=1}^{M^i} w_{R^+_{i,j}}^\varepsilon \int_{R^{-}_{i,j}} w^q &= \sum_{j=1}^{M^i} \lvert R^+_{i,j} \rvert^{-\varepsilon} w(R^+_{i,j})^\varepsilon \int_{R^{-}_{i,j}} w^q \leq \sum_{j=1}^{M^i} 2^{(\frac{n}{p}+1)\varepsilon i} \lvert R^+_0 \rvert^{-\varepsilon} w(R^+_0)^\varepsilon \int_{R^{-}_{i,j}} w^q \\ &\leq 2^{(\frac{n}{p}+1)\varepsilon i} w_{R^+_{0}}^\varepsilon \biggl(\frac M2\biggr)^i \int_{R_0} w^q .\end{aligned}$$ Thus, it follows that $$\begin{aligned} I \leq c_0 w_{R^+_{0}}^\varepsilon \int_{R_0} w^q \sum_{i=1}^N (c_1 \varepsilon)^{i-1} 2^{(\frac{n}{p}+1)\varepsilon i} \biggl(\frac M2\biggr)^i .\end{aligned}$$ We estimate the sum by $$\begin{aligned} \sum_{i=1}^N (c_1 \varepsilon)^{i-1} 2^{(\frac{n}{p}+1)\varepsilon i} \biggl(\frac M2\biggr)^i &= 2^{(\frac{n}{p}+1)\varepsilon } \frac{M}{2} \sum_{i=0}^{N-1} \biggl( c_1 \varepsilon 2^{(\frac{n}{p}+1)\varepsilon} \frac{M}{2} \biggr)^i \\ &\leq 2^{(\frac{n}{p}+1) \varepsilon } \frac{M}{2} \frac{1}{1-c_1 \varepsilon 2^{(\frac{n}{p}+1) \varepsilon } \frac{M}{2} } \\ &= \frac{2^{(\frac{n}{p}+1) \varepsilon +n }}{1-c_1 \varepsilon 2^{(\frac{n}{p}+1) \varepsilon + n } } = \frac{C}{c_0} ,\end{aligned}$$ whenever $\varepsilon$ is small enough, for example $$\varepsilon < \frac{1}{c_1 2^{\frac{n}{p}} M } = \frac{1}{c_1 2^{\frac{n}{p}+n+1} } .$$ Then it holds that $$\begin{aligned} \int_{R^-_0} w^{q+\varepsilon} \leq C w_{R^+_{0}}^\varepsilon \int_{R_0} w^q \end{aligned}$$ for small enough $\varepsilon$. Hence, the claim holds for bounded functions. For unbounded $w$, we consider truncations $\min\{w,k\}$, $k \in \mathbb N$, and apply the claim with the monotone convergence theorem as $k \to \infty$. This completes the proof. ◻ We are ready to prove the parabolic Gehring lemma. **Theorem 20**. *Let $1<q<\infty$ and $w$ be a weight. 
Assume that there exists a constant $C_1>0$ such that for every parabolic rectangle $R\subset\mathbb{R}^{n+1}$ we have $$\label{gehringRHI} \biggl( \mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$} \vcenter{\hbox{$\textstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$} \vcenter{\hbox{$\scriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% \!\int_{R^-} w^q \biggr)^\frac{1}{q} \leq C_1 \mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$} \vcenter{\hbox{$\textstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$} \vcenter{\hbox{$\scriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% \!\int_{R^+} w .$$ Then there exist $\varepsilon=\varepsilon(n,q,C_1)>0$ and $C=C(n,q,C_1)$ such that for every $R\subset\mathbb{R}^{n+1}$ it holds that $$\biggl( \mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$} \vcenter{\hbox{$\textstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$} \vcenter{\hbox{$\scriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% \!\int_{R^-} w^{q+\varepsilon} \biggr)^\frac{1}{q+\varepsilon} \leq C \mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$} \vcenter{\hbox{$\textstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$} \vcenter{\hbox{$\scriptstyle-$}}\kern-.5\wd 
0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% \!\int_{R^+} w .$$* *Proof.* Let $R_0=R(x_0,t_0,L) = Q(x_0,L) \times (t_0-L^p, t_0+L^p)$ and $\lambda\geq \lambda_0 = w_{R^+_0}$. Denote $S^-_0 = R^-_0$. We partition $S^-_0$ by dividing each spatial edge into $2$ equally long intervals. If $$\frac{l_t(S_{0}^-)}{\lfloor 2^{p} \rfloor} < \frac{L^p}{2^{p}},$$ we divide the time interval of $S^-_0$ into $\lfloor 2^{p} \rfloor$ equally long intervals. Otherwise, we divide the time interval of $S^-_0$ into $\lceil 2^{p} \rceil$ equally long intervals. We obtain subrectangles $S^-_1$ of $S^-_0$ with spatial side length $l_x(S^-_1)=l_x(S^-_0)/2 = L / 2$ and time length either $$l_t(S^-_1)=\frac{l_t(S^-_0)}{\lfloor 2^{p} \rfloor} =\frac{L^p}{\lfloor 2^{p} \rfloor} \quad\text{or}\quad l_t(S^-_1)=\frac{L^p}{\lceil 2^{p} \rceil}.$$ For every $S^-_1$, there exists a unique rectangle $R_1$ with spatial side length $l_x = L / 2$ and time length $l_t = 2 L^p / 2^{p}$ such that $R_1$ has the same bottom as $S^-_1$. We select those rectangles $S^-_1$ for which $$\frac{w(S^+_1)}{\lvert S^+_1 \rvert} = \mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$} \vcenter{\hbox{$\textstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$} \vcenter{\hbox{$\scriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% \!\int_{S^+_1} w > \lambda$$ and denote the obtained collection by $\{ S^-_{1,j} \}_j$. 
If $$\frac{w(S^+_1)}{\lvert S^+_1 \rvert} = \mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$} \vcenter{\hbox{$\textstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$} \vcenter{\hbox{$\scriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% \!\int_{S^+_1} w \leq \lambda ,$$ we subdivide $S^-_1$ in the same manner as above and select all those subrectangles $S^-_2$ for which $$\frac{w(S^+_2)}{\lvert S^+_2 \rvert} = \mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$} \vcenter{\hbox{$\textstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$} \vcenter{\hbox{$\scriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% \!\int_{S^+_2} w > \lambda$$ to obtain family $\{ S^-_{2,j} \}_j$. We continue this selection process recursively. At the $i$th step, we partition unselected rectangles $S^-_{i-1}$ by dividing each spatial side into $2$ equally long intervals. If $$\label{gehring:JNproof_eq1} \frac{l_t(S_{i-1}^-)}{\lfloor 2^{p} \rfloor} < \frac{L^p}{2^{pi}},$$ we divide the time interval of $S^-_{i-1}$ into $\lfloor 2^{p} \rfloor$ equally long intervals. If $$\label{gehring:JNproof_eq2} \frac{l_t(S_{i-1}^-)}{\lfloor 2^{p} \rfloor} \geq \frac{L^p}{2^{pi}},$$ we divide the time interval of $S^-_{i-1}$ into $\lceil 2^{p} \rceil$ equally long intervals. We obtain subrectangles $S^-_i$. For every $S^-_i$, there exists a unique rectangle $R_i$ with spatial side length $l_x = L / 2^{i}$ and time length $l_t = 2 L^p / 2^{pi}$ such that $R_i$ has the same bottom as $S^-_i$. 
Select those $S^-_i$ for which $$\frac{w(S^+_i)}{\lvert S^+_i \rvert} = \mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$} \vcenter{\hbox{$\textstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$} \vcenter{\hbox{$\scriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% \!\int_{S^+_i} w > \lambda$$ and denote the obtained collection by $\{ S^-_{i,j} \}_j$. If $$\frac{w(S^+_i)}{\lvert S^+_i \rvert} = \mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$} \vcenter{\hbox{$\textstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$} \vcenter{\hbox{$\scriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% \!\int_{S^+_i} w \leq \lambda ,$$ we continue the selection process in $S^-_i$. In this manner we obtain a collection $\{S^-_{i,j} \}_{i,j}$ of pairwise disjoint rectangles. Observe that if [\[gehring:JNproof_eq1\]](#gehring:JNproof_eq1){reference-type="eqref" reference="gehring:JNproof_eq1"} holds, then we have $$l_t(S_i^-) = \frac{l_t(S^-_{i-1})}{\lfloor 2^{p} \rfloor} < \frac{L^p}{2^{pi}}.$$ On the other hand, if [\[gehring:JNproof_eq2\]](#gehring:JNproof_eq2){reference-type="eqref" reference="gehring:JNproof_eq2"} holds, then $$l_t(S_i^-) = \frac{l_t(S^-_{i-1})}{\lceil 2^{p} \rceil} \leq \frac{l_t(S^-_{i-1})}{2^{p}} \leq \dots \leq \frac{L^p}{2^{pi}} .$$ This gives an upper bound $$l_t(S_i^-) \leq \frac{L^p}{2^{pi}}$$ for every $S_i^-$. 
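To illustrate the subdivision in time with a concrete exponent, note that if $p=2$, then $\lfloor 2^{p} \rfloor = \lceil 2^{p} \rceil = 4$, so the time interval is divided into exactly four equal parts at every step and $$l_t(S^-_i) = \frac{L^p}{4^i} = \frac{L^p}{2^{pi}}$$ with equality throughout; the floor and ceiling functions only play a role when $2^{p}$ is not an integer.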
Suppose that [\[gehring:JNproof_eq2\]](#gehring:JNproof_eq2){reference-type="eqref" reference="gehring:JNproof_eq2"} is satisfied at the $i$th step. Then we have a lower bound for the time length of $S_i^-$, since $$l_t(S^-_i) = \frac{l_t(S_{i-1}^-)}{\lceil 2^{p} \rceil} \geq \frac{\lfloor 2^{p} \rfloor}{\lceil 2^{p} \rceil} \frac{L^p}{2^{pi}} \geq \frac{1}{2} \frac{L^p}{2^{pi}} .$$ On the other hand, if [\[gehring:JNproof_eq1\]](#gehring:JNproof_eq1){reference-type="eqref" reference="gehring:JNproof_eq1"} is satisfied, then $$l_t(S^-_i) = \frac{l_t(S_{i-1}^-)}{\lfloor 2^{p} \rfloor} \geq \frac{l_t(S_{i-1}^-)}{ 2^{p}}.$$ In this case, [\[gehring:JNproof_eq2\]](#gehring:JNproof_eq2){reference-type="eqref" reference="gehring:JNproof_eq2"} has been satisfied at an earlier step $i'$ with $i'< i$. We obtain $$l_t(S^-_i) \geq \frac{l_t(S_{i-1}^-)}{ 2^{p}} \geq \dots \geq \frac{l_t(S_{i'}^-)}{ 2^{p(i-i')}} \geq \frac{1}{2} \frac{L^p}{ 2^{pi}}$$ by using the lower bound for $S_{i'}^-$. Thus, we have $$\frac{1}{2} \frac{L^p}{2^{pi}} \leq l_t(S^-_i) \leq \frac{L^p}{2^{pi}}$$ for every $S^-_i$. By using the bounds for the time length of $S^-_i$, we observe that $$\begin{aligned} l_t(R_i) - l_t(S^-_i) &\leq \frac{2 L^p}{2^{pi}} - \frac{1}{2} \frac{L^p}{2^{pi}} = \frac{3}{2} \frac{L^p}{2^{pi}} \\ &\leq \frac{L^p}{2^{p(i-1)}} = \frac{2L^p}{2^{p(i-1)}} - \frac{L^p}{2^{p(i-1)}} \\ &\leq l_t(R_{i-1}) - l_t(S^-_{i-1}) .\end{aligned}$$ This implies $$\label{gehring:subset} R_{i} \subset R_{i-1}$$ for a fixed rectangle $S^-_{i-1}$ and for every subrectangle $S^-_{i} \subset S^-_{i-1}$. We have a collection $\{ S^-_{i,j} \}_{i,j}$ of pairwise disjoint rectangles. However, the rectangles in the corresponding collection $\{ S^+_{i,j} \}_{i,j}$ may overlap. Thus, we replace it by a subfamily $\{ \widetilde{S}^+_{i,j} \}_{i,j}$ of pairwise disjoint rectangles, which is constructed in the following way. 
At the first step, choose $\{ S^+_{1,j} \}_{j}$ and denote it by $\{ \widetilde{S}^+_{1,j} \}_j$. Then consider the collection $\{ S^+_{2,j} \}_{j}$ where each $S^+_{2,j}$ either intersects some $\widetilde{S}^+_{1,j}$ or does not intersect any $\widetilde{S}^+_{1,j}$. Select the rectangles $S^+_{2,j}$ that do not intersect any $\widetilde{S}^+_{1,j}$, and denote the obtained collection by $\{ \widetilde{S}^+_{2,j} \}_j$. At the $i$th step, choose those $S^+_{i,j}$ that do not intersect any previously selected $\widetilde{S}^+_{i',j}$, $i' < i$. Hence, we obtain a collection $\{ \widetilde{S}^+_{i,j} \}_{i,j}$ of pairwise disjoint rectangles. Observe that for every $S^+_{i,j}$ there exists $\widetilde{S}^+_{i',j}$ with $i' < i$ such that $$\label{gehring:plussubset} \text{pr}_x(S^+_{i,j}) \subset \text{pr}_x(\widetilde{S}^+_{i',j}) \quad \text{and} \quad \text{pr}_t(S^+_{i,j}) \subset 3 \text{pr}_t(\widetilde{S}^+_{i',j}) .$$ Here pr$_x$ denotes the projection to $\mathbb R^n$ and pr$_t$ denotes the projection to the time axis. Rename $\{ S^-_{i,j} \}_{i,j}$ and $\{ \widetilde{S}^+_{i,j} \}_{i,j}$ as $\{ S^-_{i} \}_{i}$ and $\{ \widetilde{S}^+_{j} \}_j$, respectively. Note that $S^-_i$ is spatially contained in $S^+_i$, that is, $\text{pr}_x S^-_i\subset \text{pr}_x S^+_i$. 
In the time direction, we have $$\label{gehring:minusplussubset} \text{pr}_t(S^-_i) \subset \text{pr}_t(R_i) \subset 7 \text{pr}_t(S^+_i) ,$$ since $$( 7 + 1 ) \frac{l_t(S^+_i)}{2} \geq 8 \frac{L^p}{2^{pi+2}} = \frac{2L^p}{2^{pi}} = l_t(R_i) .$$ Therefore, by [\[gehring:plussubset\]](#gehring:plussubset){reference-type="eqref" reference="gehring:plussubset"} and [\[gehring:minusplussubset\]](#gehring:minusplussubset){reference-type="eqref" reference="gehring:minusplussubset"}, it holds that $$\label{gehring:subsetcubes} \sum_i \lvert S^-_i \rvert = \Big\lvert \bigcup_i S^-_i \Big\rvert \leq c_1 \sum_j \lvert \widetilde{S}^+_j \rvert \quad\text{with}\quad c_1 = 21.$$ If $(x,t) \in R^-_0 \setminus \bigcup_i S^-_i$, then there exists a sequence $\{S^-_l\}_{l\in\mathbb N}$ of subrectangles containing $(x,t)$ such that $$\frac{w(S^-_l)}{\lvert S^-_l \rvert} = \mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$} \vcenter{\hbox{$\textstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$} \vcenter{\hbox{$\scriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% \!\int_{S^-_l} w \leq \lambda$$ and $\lvert S^-_l \rvert \to 0$ as $l \to \infty$. The Lebesgue differentiation theorem [@KinnunenMyyryYang2022 Lemma 2.3] implies that $w(x,t) \leq \lambda$ for almost every $(x,t) \in R^-_0 \setminus \bigcup_i S^-_i$. It follows that $$\label{gehring:levelsetsubset} R^-_0 \cap \{ w > \lambda \} \subset \bigcup_i S^-_i$$ up to a set of measure zero. Consider $S^-_i$ and denote its parent by $S^-_{i^-}$, that is, $S^-_{i}$ was obtained by subdividing the previous $S^-_{i^-}$ for which $w_{S^+_{i^-}} \leq\lambda$. We move the corresponding $R^+_i$ forward in time until the shifted rectangle is contained in $S^+_{i^-}$. 
The time distance between the bottom of $R^+_i$ and the bottom of $S^+_{i^-}$ is bounded above by $2^{p+1} l_t(R^+_i)$. The assumption [\[gehringRHI\]](#gehringRHI){reference-type="eqref" reference="gehringRHI"} with Hölder's inequality implies that $w(R^-) \leq C_1 w(R^+)$ for every parabolic rectangle $R$. Thus, we can apply the proof of Lemma [Lemma 9](#lemma:timemove){reference-type="ref" reference="lemma:timemove"} (ii) with $\theta = 2^{p+1}$ to obtain $$\label{gehring:shifting} w(R^+_i) \leq 4 \max\{1, C_1^{1+ 2^{2p+1} }\} w(S^+_{i^-})$$ for every $i\in\mathbb{N}$. By using [\[gehring:levelsetsubset\]](#gehring:levelsetsubset){reference-type="eqref" reference="gehring:levelsetsubset"}, [\[gehringRHI\]](#gehringRHI){reference-type="eqref" reference="gehringRHI"}, [\[gehring:shifting\]](#gehring:shifting){reference-type="eqref" reference="gehring:shifting"} and [\[gehring:subsetcubes\]](#gehring:subsetcubes){reference-type="eqref" reference="gehring:subsetcubes"}, we obtain $$\begin{aligned} \int_{R^-_0 \cap \{ w>\lambda \}} w^q &\leq \sum_i \int_{S^-_i} w^q \leq \sum_i \int_{R^-_i} w^q \leq C_1^q \sum_i \lvert R^-_i \rvert \biggl( \mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$} \vcenter{\hbox{$\textstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$} \vcenter{\hbox{$\scriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% \!\int_{R^+_i} w \biggr)^q \\ &\leq C_1^q 4^q \max\{1, C_1^{q(1+ 2^{2p+1}) }\} \sum_i \lvert R^-_i \rvert \Biggl( \frac{\lvert S^+_{i^-} \rvert}{\lvert R^+_i \rvert} \mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$} \vcenter{\hbox{$\textstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$} \vcenter{\hbox{$\scriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 
0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% \!\int_{S^+_{i^-}} w \Biggr)^q \\ &\leq c_2 \lambda^q \sum_i \lvert R^-_i \rvert \leq 2 c_2 \lambda^q \sum_i \lvert S^-_i \rvert \\ &\leq 2 c_1 c_2 \lambda^q \sum_j \lvert \widetilde{S}^+_j \rvert ,\end{aligned}$$ where $c_2 = 2^{q(2+n+p)} C_1^q \max\{1, C_1^{q(1+ 2^{2p+1}) }\}$. We have $$\begin{aligned} \lvert \widetilde{S}^+_j \rvert &\leq \frac{1}{\lambda} \int_{\widetilde{S}^+_j} w = \frac{1}{\lambda} \int_{\widetilde{S}^+_j \cap \{w>\lambda/2\}} w + \frac{1}{\lambda} \int_{\widetilde{S}^+_j \cap \{w\leq\lambda/2\}} w \\ &\leq \frac{1}{\lambda} \int_{\widetilde{S}^+_j \cap \{w>\lambda/2\}} w + \frac{1}{\lambda} \int_{\widetilde{S}^+_j \cap \{w\leq\lambda/2\}} \frac{\lambda}{2} \\ &\leq \frac{1}{\lambda} \int_{\widetilde{S}^+_j \cap \{w>\lambda/2\}} w + \frac{1}{2} \lvert \widetilde{S}^+_j \rvert ,\end{aligned}$$ and thus $$\lvert \widetilde{S}^+_j \rvert \leq \frac{2}{\lambda} \int_{\widetilde{S}^+_j \cap \{w>\lambda/2\}} w .$$ It follows that $$\begin{aligned} \int_{R^-_0 \cap \{ w>\lambda \}} w^q &\leq 2 c_1 c_2 \lambda^q \sum_j \lvert \widetilde{S}^+_j \rvert \leq 4 c_1 c_2 \lambda^{q-1} \sum_j \int_{\widetilde{S}^+_j \cap \{w>\lambda/2\}} w \\ &= 4 c_1 c_2 \lambda^{q-1} \int_{\bigcup_j \widetilde{S}^+_j \cap \{w>\lambda/2\}} w \leq 4 c_1 c_2 \lambda^{q-1} \int_{R_0 \cap \{w>\lambda/2\}} w ,\end{aligned}$$ since $\widetilde{S}^+_j$ are pairwise disjoint. 
On the other hand, we have $$\begin{aligned} \int_{R^-_0 \cap \{\lambda\geq w>\lambda/2\}} w^q = \int_{R^-_0 \cap \{\lambda\geq w>\lambda/2\}} w^{q-1} w \leq \lambda^{q-1} \int_{R_0 \cap \{w>\lambda/2\}} w .\end{aligned}$$ Combining the two previous estimates, we get $$\begin{aligned} \int_{R^-_0 \cap \{w>\lambda/2\}} w^q = \int_{R^-_0 \cap \{w>\lambda\}} w^q + \int_{R^-_0 \cap \{\lambda\geq w>\lambda/2\}} w^q \leq c_3 \biggl( \frac{\lambda}{2} \biggr)^{q-1} \int_{R_0 \cap \{w>\lambda/2\}} w\end{aligned}$$ for $\lambda\geq \lambda_0 = w_{R^+_0}$, where $c_3 = 2^{q-1} (4 c_1 c_2 + 1)$. Since this holds for any parabolic rectangle $R_0$, we may apply Lemma [Lemma 19](#selfimprovelemma){reference-type="ref" reference="selfimprovelemma"} which states that there exist $\varepsilon>0$ and $C>1$ such that $$\int_{R^-} w^{q+\varepsilon} \leq C \biggl( \mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$} \vcenter{\hbox{$\textstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$} \vcenter{\hbox{$\scriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% \!\int_{R^+} w \biggr)^\varepsilon \int_{R} w^q$$ for every parabolic rectangle $R \subset \mathbb{R}^{n+1}$. 
We estimate the second integral on the right-hand side by applying [\[gehringRHI\]](#gehringRHI){reference-type="eqref" reference="gehringRHI"} to get $$\begin{aligned} \int_{R} w^q &= \int_{R^-} w^q + \int_{R^+} w^q \leq C_1^q \lvert R^- \rvert \biggl( \mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$} \vcenter{\hbox{$\textstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$} \vcenter{\hbox{$\scriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% \!\int_{R^+} w \biggr)^q + C_1^q \lvert R^+ \rvert \biggl( \mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$} \vcenter{\hbox{$\textstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$} \vcenter{\hbox{$\scriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% \!\int_{R^{++}} w \biggr)^q \\ &\leq C_1^{2q} \lvert R^- \rvert \biggl( \mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$} \vcenter{\hbox{$\textstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$} \vcenter{\hbox{$\scriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% \!\int_{R^{++}} w \biggr)^q + C_1^q \lvert R^- \rvert \biggl( \mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$} \vcenter{\hbox{$\textstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$} 
\vcenter{\hbox{$\scriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% \!\int_{R^{++}} w \biggr)^q \leq C_2 \lvert R^- \rvert \biggl( \mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$} \vcenter{\hbox{$\textstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$} \vcenter{\hbox{$\scriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% \!\int_{R^{++}} w \biggr)^q ,\end{aligned}$$ where $R^{++} = R^+ + l_t(R^+)$ and $C_2 = 2 \max\{ C_1^{2q}, C_1^{q} \}$. Therefore, we have $$\begin{aligned} \int_{R^-} w^{q+\varepsilon} &\leq C \biggl( \mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$} \vcenter{\hbox{$\textstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$} \vcenter{\hbox{$\scriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% \!\int_{R^+} w \biggr)^\varepsilon \int_{R} w^q \leq C C_1^\varepsilon C_2 \lvert R^- \rvert \biggl( \mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$} \vcenter{\hbox{$\textstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$} \vcenter{\hbox{$\scriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$}
\vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% \!\int_{R^{++}} w \biggr)^\varepsilon \biggl( \mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$} \vcenter{\hbox{$\textstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$} \vcenter{\hbox{$\scriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% \!\int_{R^{++}} w \biggr)^q \\ &= C_3^{q+\varepsilon} \lvert R^- \rvert \biggl( \mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$} \vcenter{\hbox{$\textstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$} \vcenter{\hbox{$\scriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% \!\int_{R^{++}} w \biggr)^{q+\varepsilon} ,\end{aligned}$$ where $C_3^{q+\varepsilon} = C C_1^\varepsilon C_2$. 
We conclude that $$\biggl( \mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$} \vcenter{\hbox{$\textstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$} \vcenter{\hbox{$\scriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% \!\int_{R^-} w^{q+\varepsilon} \biggr)^\frac{1}{q+\varepsilon} \leq C_3 \mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$} \vcenter{\hbox{$\textstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$} \vcenter{\hbox{$\scriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% \!\int_{R^{++}} w$$ for every parabolic rectangle $R \subset \mathbb{R}^{n+1}$. It is left to replace $R^{++}$ by $R^+$ in the estimate above. This is done by the following argument. Fix $R_0 = Q(x_0,L) \times (t_0 - L^p, t_0 + L^p) \subset \mathbb{R}^{n+1}$. We cover $R^-_0$ by $M = 2^{n+1}$ rectangles $R^-_{i}$ with spatial side length $l_x = L/2^{1/p}$ and time length $l_t = L^p / 2$. This can be done by dividing each spatial edge of $R^-_0$ into two equally long intervals that may overlap each other, and the time interval of $R^-_0$ into two equally long pairwise disjoint intervals. Observe that every $R^{++}_i$ is contained in $R^+_0$ and the overlap of $R^{++}_{i}$ is bounded by $M/2 = 2^n$. 
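The measure ratio behind the factor $2^{-(\frac{n}{p}+1)/(q+\varepsilon)}$ in the next estimate can be computed directly from the stated side lengths: each $R^-_i$ has spatial side length $L/2^{1/p}$ and time length $L^p/2$, while $R^-_0$ has spatial side length $L$ and time length $L^p$, so $$\frac{\lvert R^-_i \rvert}{\lvert R^-_0 \rvert} = \frac{(L/2^{1/p})^n \, (L^p/2)}{L^n \, L^p} = 2^{-\frac{n}{p}-1} .$$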
Then it holds that $$\begin{aligned} \biggl( \mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$} \vcenter{\hbox{$\textstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$} \vcenter{\hbox{$\scriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% \!\int_{R^-_0} w^{q+\varepsilon} \biggr)^\frac{1}{q+\varepsilon} &\leq \Biggl( \sum_i \frac{\lvert R^-_i \rvert}{\lvert R^-_0 \rvert} \mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$} \vcenter{\hbox{$\textstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$} \vcenter{\hbox{$\scriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% \!\int_{R^-_i} w^{q+\varepsilon} \Biggr)^\frac{1}{q+\varepsilon} \leq 2^{-(\frac{n}{p}+1)/(q+\varepsilon)} \sum_i \biggl( \mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$} \vcenter{\hbox{$\textstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$} \vcenter{\hbox{$\scriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% \!\int_{R^-_i} w^{q+\varepsilon} \biggr)^\frac{1}{q+\varepsilon} \\ &\leq 2^{-(\frac{n}{p}+1)/(q+\varepsilon)} C_3 \sum_i \mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$} \vcenter{\hbox{$\textstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$} \vcenter{\hbox{$\scriptstyle-$}}\kern-.5\wd 0}}% 
{{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% \!\int_{R^{++}_i} w = 2^{-(\frac{n}{p}+1)/(q+\varepsilon)} C_3 \sum_i \frac{\lvert R^+_0 \rvert}{\lvert R^{++}_i \rvert} \frac{1}{\lvert R^+_0 \rvert} \int_{R^{++}_i} w \\ &\leq 2^{-(\frac{n}{p}+1)/(q+\varepsilon)} C_3 2^{\frac{n}{p}+1} \frac{M}{2} \mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$} \vcenter{\hbox{$\textstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$} \vcenter{\hbox{$\scriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% \!\int_{R^{+}_0} w = 2^{n+(\frac{n}{p}+1)(1-\frac{1}{q+\varepsilon})} C_3 \mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$} \vcenter{\hbox{$\textstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$} \vcenter{\hbox{$\scriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% \!\int_{R^{+}_0} w .\end{aligned}$$ This completes the proof. ◻ In addition to the self-improvement of the exponent on the left-hand side of the parabolic reverse Hölder inequality, we observe that the exponent on the right-hand side can be replaced by any smaller positive exponent. For the elliptic case, for example, see [@Heinonen_et_al Lemma 3.38]. **Theorem 21**. *Let $1<q<\infty$ and $w$ be a weight. 
Assume that there exists a constant $C_1>0$ such that for every parabolic rectangle $R\subset\mathbb{R}^{n+1}$ we have $$\label{thm:RHIforRHS} \biggl( \mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$} \vcenter{\hbox{$\textstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$} \vcenter{\hbox{$\scriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% \!\int_{R^-} w^q \biggr)^\frac{1}{q} \leq C_1 \mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$} \vcenter{\hbox{$\textstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$} \vcenter{\hbox{$\scriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% \!\int_{R^+} w .$$ Then for every $0<s<1$ there exists a constant $C=C(n,p,q,s,C_1)$ such that for every $R\subset\mathbb{R}^{n+1}$ it holds that $$\biggl( \mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$} \vcenter{\hbox{$\textstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$} \vcenter{\hbox{$\scriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% \!\int_{R^-} w^{q} \biggr)^\frac{1}{q} \leq C \biggl( \mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$} \vcenter{\hbox{$\textstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$} \vcenter{\hbox{$\scriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 
0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% \!\int_{R^+} w^s \biggr)^\frac{1}{s} .$$* *Proof.* Let $R \subset \mathbb R^{n+1}$ be a parabolic rectangle. Fix $0<s<1$. Let $\theta=s(q-1)/(q-s)$, that is, $$1 = \frac{\theta}{s} + \frac{1-\theta}{q} .$$ We apply Hölder's inequality, Young's inequality $$ab \leq \varepsilon a^r + \varepsilon^{-\frac{1}{r-1}} b^\frac{r}{r-1}$$ with $r=1/(1-\theta)$ and [\[thm:RHIforRHS\]](#thm:RHIforRHS){reference-type="eqref" reference="thm:RHIforRHS"} to get $$\begin{aligned} \mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$} \vcenter{\hbox{$\textstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$} \vcenter{\hbox{$\scriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% \!\int_{R^-} w = \mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$} \vcenter{\hbox{$\textstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$} \vcenter{\hbox{$\scriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% \!\int_{R^-} w^\theta w^{1-\theta} &\leq \biggl( \mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$} \vcenter{\hbox{$\textstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$} \vcenter{\hbox{$\scriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% 
{{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% \!\int_{R^-} w^s \biggr)^\frac{\theta}{s} \biggl( \mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$} \vcenter{\hbox{$\textstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$} \vcenter{\hbox{$\scriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% \!\int_{R^-} w^q \biggr)^\frac{1-\theta}{q} \leq \varepsilon^{1-\frac{1}{\theta}} \biggl( \mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$} \vcenter{\hbox{$\textstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$} \vcenter{\hbox{$\scriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% \!\int_{R^-} w^s \biggr)^\frac{1}{s} + \varepsilon \biggl( \mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$} \vcenter{\hbox{$\textstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$} \vcenter{\hbox{$\scriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% \!\int_{R^-} w^q \biggr)^\frac{1}{q} \\ &\leq \varepsilon^{1-\frac{1}{\theta}} \biggl( \mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$} \vcenter{\hbox{$\textstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$} \vcenter{\hbox{$\scriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 
0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% \!\int_{R^-} w^s \biggr)^\frac{1}{s} + C_1 \varepsilon \mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$} \vcenter{\hbox{$\textstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$} \vcenter{\hbox{$\scriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% \!\int_{R^+} w .\end{aligned}$$ Hence, we have $$\label{iteration-(1,s)-RHI} \int_{R^-} w \leq \varepsilon^{1-\frac{1}{\theta}} \lvert R^- \rvert \biggl( \mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$} \vcenter{\hbox{$\textstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$} \vcenter{\hbox{$\scriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% \!\int_{R^-} w^s \biggr)^\frac{1}{s} + C_1 \varepsilon \int_{R^+} w .$$ Fix $R_0 = Q(x_0,L) \times (t_0 - L^p, t_0 + L^p) \subset \mathbb{R}^{n+1}$. We cover $R^-_0$ by $M = 2^{n+1}$ rectangles $R^-_{1,j}$ with spatial side length $l_x = L/2^{1/p}$ and time length $l_t = L^p / 2$. This can be done by dividing each spatial edge of $R^-_0$ into two equally long intervals that may overlap each other, and the time interval of $R^-_0$ into two equally long pairwise disjoint intervals. Observe that the overlap of $R^-_{1,j}$ is bounded by $M/2 = 2^n$. 
Then consider $R^+_{1,j}$ and cover it in the same way as before by $M$ rectangles $R^-_{2,j}$ with spatial side length $l_x = L/2^{2/p}$ and time length $l_t = L^p / 2^2$. At the $i$th step, cover $R^+_{i-1,j}$ by $M$ rectangles $R^-_{i,j}$ with spatial side length $l_x = L/2^{i/p}$ and time length $l_t = L^p / 2^i$ such that their overlap is bounded by $M/2$. Note that every $R_{i,j}$ is contained in $R_0$. Then iterating [\[iteration-(1,s)-RHI\]](#iteration-(1,s)-RHI){reference-type="eqref" reference="iteration-(1,s)-RHI"} we obtain $$\begin{aligned} \int_{R^-_0} w &\leq \sum_{j=1}^{M} \int_{R^-_{1,j}} w \leq \sum_{j=1}^{M} \varepsilon^{1-\frac{1}{\theta}} \lvert R^-_{1,j} \rvert \biggl( \mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$} \vcenter{\hbox{$\textstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$} \vcenter{\hbox{$\scriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% \!\int_{R^-_{1,j}} w^s \biggr)^\frac{1}{s} + \sum_{j=1}^{M} C_1 \varepsilon \int_{R^+_{1,j}} w \\ &\leq \varepsilon^{1-\frac{1}{\theta}} \sum_{j=1}^M \lvert R^-_{1,j} \rvert \biggl( \mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$} \vcenter{\hbox{$\textstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$} \vcenter{\hbox{$\scriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% \!\int_{R^-_{1,j}} w^s \biggr)^\frac{1}{s} + C_1 \varepsilon \sum_{j=1}^{M^2} \int_{R^-_{2,j}} w \\ &\leq \varepsilon^{1-\frac{1}{\theta}} \sum_{j=1}^M \lvert R^-_{1,j} \rvert \biggl( \mathchoice {{\setbox 
0=\hbox{$\displaystyle{\textstyle-}{\int}$} \vcenter{\hbox{$\textstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$} \vcenter{\hbox{$\scriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% \!\int_{R^-_{1,j}} w^s \biggr)^\frac{1}{s} + C_1 \varepsilon \sum_{j=1}^{M^2} \Biggl( \varepsilon^{1-\frac{1}{\theta}} \lvert R^-_{2,j} \rvert \biggl( \mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$} \vcenter{\hbox{$\textstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$} \vcenter{\hbox{$\scriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% \!\int_{R^-_{2,j}} w^s \biggr)^\frac{1}{s} + C_1 \varepsilon \int_{R^+_{2,j}} w \Biggr) \\ &= \varepsilon^{1-\frac{1}{\theta}} \sum_{j=1}^M \lvert R^-_{1,j} \rvert \biggl( \mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$} \vcenter{\hbox{$\textstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$} \vcenter{\hbox{$\scriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% \!\int_{R^-_{1,j}} w^s \biggr)^\frac{1}{s} + \varepsilon^{1-\frac{1}{\theta}} C_1 \varepsilon \sum_{j=1}^{M^2} \lvert R^-_{2,j} \rvert \biggl( \mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$} \vcenter{\hbox{$\textstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$} 
\vcenter{\hbox{$\scriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% \!\int_{R^-_{2,j}} w^s \biggr)^\frac{1}{s} + (C_1 \varepsilon)^2 \sum_{j=1}^{M^2} \int_{R^+_{2,j}} w \\ &\leq \varepsilon^{1-\frac{1}{\theta}} \sum_{i=1}^N \Biggl( (C_1 \varepsilon)^{i-1} \sum_{j=1}^{M^i} \lvert R^-_{i,j} \rvert \biggl( \mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$} \vcenter{\hbox{$\textstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$} \vcenter{\hbox{$\scriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% \!\int_{R^-_{i,j}} w^s \biggr)^\frac{1}{s} \Biggr) + (C_1 \varepsilon)^N \sum_{j=1}^{M^N} \int_{R^+_{N,j}} w \\ &\leq \varepsilon^{1-\frac{1}{\theta}} \sum_{i=1}^N \Biggl( (C_1 \varepsilon)^{i-1} \sum_{j=1}^{M^i} \lvert R^-_{i,j} \rvert \biggl( \mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$} \vcenter{\hbox{$\textstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$} \vcenter{\hbox{$\scriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% \!\int_{R^-_{i,j}} w^s \biggr)^\frac{1}{s} \Biggr) + \biggl( C_1 \varepsilon \frac{M}{2} \biggr)^N \int_{R_0} w \\ &= I + II .\end{aligned}$$ We observe that $II$ tends to zero if $\varepsilon < 2/(C_1 M) = 1/(C_1 2^n)$ as $N \to \infty$. 
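To see this, recall that $M = 2^{n+1}$, so that $$II = \biggl( C_1 \varepsilon \frac{M}{2} \biggr)^N \int_{R_0} w = \bigl( C_1 \varepsilon \, 2^n \bigr)^N \int_{R_0} w ,$$ and $C_1 \varepsilon \, 2^n < 1$ exactly when $\varepsilon < 1/(C_1 2^n)$, so the geometric factor forces $II \to 0$ as $N \to \infty$.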
For the inner sum of the first term $I$, we have $$\begin{aligned} \sum_{j=1}^{M^i} \lvert R^-_{i,j} \rvert \biggl( \mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$} \vcenter{\hbox{$\textstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$} \vcenter{\hbox{$\scriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% \!\int_{R^-_{i,j}} w^s \biggr)^\frac{1}{s} &= \sum_{j=1}^{M^i} \lvert R^-_{i,j} \rvert^{1-\frac{1}{s}} \biggl( \int_{R^-_{i,j}} w^s \biggr)^\frac{1}{s} \leq \sum_{j=1}^{M^i} 2^{(\frac{n}{p}+1)(\frac{1}{s}-1) i} \lvert R^-_0 \rvert^{1-\frac{1}{s}} \biggl( \int_{R^-_{i,j}} w^s \biggr)^\frac{1}{s} \\ &\leq 2^{(\frac{n}{p}+1)(\frac{1}{s}-1) i +\frac{1}{s}} \biggl( \frac{M}{2} \biggr)^i \lvert R^-_0 \rvert \biggl( \mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$} \vcenter{\hbox{$\textstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$} \vcenter{\hbox{$\scriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% \!\int_{R_{0}} w^s \biggr)^\frac{1}{s} .\end{aligned}$$ Thus, it follows that $$\begin{aligned} I \leq \varepsilon^{1-\frac{1}{\theta}} 2^{\frac{1}{s}} \lvert R^-_0 \rvert \biggl( \mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$} \vcenter{\hbox{$\textstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$} \vcenter{\hbox{$\scriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 
0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% \!\int_{R_{0}} w^s \biggr)^\frac{1}{s} \sum_{i=1}^N (C_1 \varepsilon)^{i-1} 2^{(\frac{n}{p}+1)(\frac{1}{s}-1) i} \biggl( \frac{M}{2} \biggr)^i .\end{aligned}$$ We estimate the sum by $$\begin{aligned} \sum_{i=1}^N (C_1 \varepsilon)^{i-1} 2^{(\frac{n}{p}+1)(\frac{1}{s}-1) i} \biggl( \frac{M}{2} \biggr)^i &= 2^{(\frac{n}{p}+1)(\frac{1}{s}-1) +n} \sum_{i=0}^{N-1} \bigl( C_1 \varepsilon 2^{(\frac{n}{p}+1)(\frac{1}{s}-1) +n} \bigr)^i \\ &\leq \frac{2^{(\frac{n}{p}+1)(\frac{1}{s}-1) +n }}{1-C_1 \varepsilon 2^{(\frac{n}{p}+1)(\frac{1}{s}-1) + n } } ,\end{aligned}$$ whenever $\varepsilon < 1/ (C_1 2^{(\frac{n}{p}+1)(\frac{1}{s}-1) + n })$. Then it holds that $$\begin{aligned} \int_{R^-_0} w \leq \varepsilon^{1-\frac{1}{\theta}} \frac{ 2^{(\frac{n}{p}+1)(\frac{1}{s}-1) +n +\frac{1}{s} }}{1-C_1 \varepsilon 2^{(\frac{n}{p}+1)(\frac{1}{s}-1) + n } } \lvert R^-_0 \rvert \biggl( \mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$} \vcenter{\hbox{$\textstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$} \vcenter{\hbox{$\scriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% \!\int_{R_{0}} w^s \biggr)^\frac{1}{s}\end{aligned}$$ for $$0< \varepsilon < \min\biggl\{ \frac{1}{C_1 2^n} , \frac{1}{C_1 2^{(\frac{n}{p}+1)(\frac{1}{s}-1) + n } } \biggr\} = \frac{1}{C_1 2^{(\frac{n}{p}+1)(\frac{1}{s}-1) + n } } .$$ Choose $\varepsilon = 1/ (C_1 2^{(\frac{n}{p}+1)(\frac{1}{s}-1) + n +1})$. 
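Two short computations make the resulting constant explicit. With the choice $\varepsilon = 1/ (C_1 2^{(\frac{n}{p}+1)(\frac{1}{s}-1) + n +1})$ the denominator satisfies $$1 - C_1 \varepsilon \, 2^{(\frac{n}{p}+1)(\frac{1}{s}-1) + n} = 1 - \frac{1}{2} = \frac{1}{2} ,$$ and the exponent of $\varepsilon$ can be rewritten using $\theta = s(q-1)/(q-s)$ as $$1 - \frac{1}{\theta} = 1 - \frac{q-s}{s(q-1)} = \frac{s(q-1) - (q-s)}{s(q-1)} = \frac{q(s-1)}{s(q-1)} ,$$ which is the exponent appearing in the constant $C$ below.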
By the arbitrariness of $R_0$ and [\[thm:RHIforRHS\]](#thm:RHIforRHS){reference-type="eqref" reference="thm:RHIforRHS"}, we conclude that $$\label{eq:(q,s)-RHI} \biggl( \mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$} \vcenter{\hbox{$\textstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$} \vcenter{\hbox{$\scriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% \!\int_{R^{--}} w^{q} \biggr)^\frac{1}{q} \leq C_1 \mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$} \vcenter{\hbox{$\textstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$} \vcenter{\hbox{$\scriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% \!\int_{R^-} w \leq C \biggl( \mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$} \vcenter{\hbox{$\textstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$} \vcenter{\hbox{$\scriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% \!\int_{R} w^s \biggr)^\frac{1}{s}$$ for every parabolic rectangle $R \subset \mathbb{R}^{n+1}$, where $R^{--} = R^- - l_t(R^-)$ and $$C = C_1 \varepsilon^{\frac{q(s-1)}{s(q-1)}} \frac{ 2^{(\frac{n}{p}+1)(\frac{1}{s}-1) +n +\frac{1}{s} }}{1-C_1 \varepsilon 2^{(\frac{n}{p}+1)(\frac{1}{s}-1) + n } } .$$ Fix $R_0 = Q(x_0,L) \times (t_0 - L^p, t_0 + L^p) \subset \mathbb{R}^{n+1}$. 
We cover $Q(x_0,L) \times (t_0 - L^p, t_0 - L^p/2)$ by $2^n$ rectangles $R^{-}_{1,i}$ with spatial side length $l_x = L/2^{1/p}$ and time length $l_t = L^p / 2$ by dividing each edge of $Q(x_0,L)$ into two equally long intervals that may overlap each other. Denote $R^{--}_{2,i} = R^+_{1,i}$. Observe that the union of $R^{--}_{2,i}$ covers $Q(x_0,L) \times (t_0 - L^p/2, t_0)$. Moreover, note that every $R_{2,i}$ is contained in $R^+_0$. Then by [\[eq:(q,s)-RHI\]](#eq:(q,s)-RHI){reference-type="eqref" reference="eq:(q,s)-RHI"}, we have $$\begin{aligned} \biggl( \mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$} \vcenter{\hbox{$\textstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$} \vcenter{\hbox{$\scriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% \!\int_{R^-_0} w^q \biggr)^\frac{1}{q} &\leq \Biggl( \frac{\lvert R^-_{1,i} \rvert}{\lvert R^-_0 \rvert} \sum_i \biggl( \mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$} \vcenter{\hbox{$\textstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$} \vcenter{\hbox{$\scriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% \!\int_{R^-_{1,i}} w^q + \mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$} \vcenter{\hbox{$\textstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$} \vcenter{\hbox{$\scriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 
0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% \!\int_{R^{--}_{2,i}} w^q \biggr) \Biggr)^\frac{1}{q} \\ &\leq 2^{-(\frac{n}{p}+1)/q} \sum_i \Biggl( \biggl( \mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$} \vcenter{\hbox{$\textstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$} \vcenter{\hbox{$\scriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% \!\int_{R^-_{1,i}} w^q \biggr)^\frac{1}{q} + \biggl( \mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$} \vcenter{\hbox{$\textstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$} \vcenter{\hbox{$\scriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% \!\int_{R^{--}_{2,i}} w^q \biggr)^\frac{1}{q} \Biggr) \\ &\leq 2^{-(\frac{n}{p}+1)/q} (C_1+1) \sum_i \biggl( \mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$} \vcenter{\hbox{$\textstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$} \vcenter{\hbox{$\scriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% \!\int_{R^{--}_{2,i}} w^q \biggr)^\frac{1}{q} \\ &\leq 2^{-(\frac{n}{p}+1)/q} (C_1+1) C \sum_i \biggl( \mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$} \vcenter{\hbox{$\textstyle-$}}\kern-.5\wd 0}}% {{\setbox 
0=\hbox{$\textstyle{\scriptstyle-}{\int}$} \vcenter{\hbox{$\scriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% \!\int_{R_{2,i}} w^s \biggr)^\frac{1}{s} \\ &\leq 2^{-(\frac{n}{p}+1)/q} (C_1+1) C 2^n 2^{\frac{n}{p}\frac{1}{s}} \biggl( \mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$} \vcenter{\hbox{$\textstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$} \vcenter{\hbox{$\scriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% \!\int_{R^+_0} w^s \biggr)^\frac{1}{s} .\end{aligned}$$ This completes the proof. ◻ # Connection to parabolic Muckenhoupt weights {#section:muckenhoupt} In this section, we show that the parabolic reverse Hölder inequality together with the following parabolic doubling condition implies the parabolic Muckenhoupt condition. We recall the definition of parabolic Muckenhoupt classes $A^+_q$. **Definition 22**. Let $1<q<\infty$ and $0<\gamma<1$. 
A weight $w$ belongs to the parabolic Muckenhoupt class $A^+_q(\gamma)$ if $$[w]_{A^+_q(\gamma)} = \sup_{R \subset \mathbb R^{n+1}} \biggl( \mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$} \vcenter{\hbox{$\textstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$} \vcenter{\hbox{$\scriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% \!\int_{R^-(\gamma)} w \biggr) \biggl( \mathchoice {{\setbox 0=\hbox{$\displaystyle{\textstyle-}{\int}$} \vcenter{\hbox{$\textstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\textstyle{\scriptstyle-}{\int}$} \vcenter{\hbox{$\scriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% {{\setbox 0=\hbox{$\scriptscriptstyle{\scriptscriptstyle-}{\int}$} \vcenter{\hbox{$\scriptscriptstyle-$}}\kern-.5\wd 0}}% \!\int_{R^+(\gamma)} w^{\frac{1}{1-q}} \biggr)^{q-1} < \infty ,$$ where the supremum is taken over all parabolic rectangles $R \subset \mathbb R^{n+1}$. If the condition above holds with the time axis reversed, then $w \in A^-_q(\gamma)$. We say that a measure is forward in time parabolic doubling if $$\label{eq:pardoubling} w(R^-(\gamma)) \leq c_d w(\tfrac{1}{2}R^+(\gamma))$$ for every parabolic rectangle $R\subset\mathbb{R}^{n+1}$, where $c_d>0$ is the parabolic doubling constant. Here $\tfrac{1}{2}R^+(\gamma)$ is the dilation of $R^+(\gamma)$ with respect to the center point of $R^+(\gamma)$ in the parabolic geometry. **Lemma 23**. *Let $w>0$ be a weight satisfying the parabolic doubling condition [\[eq:pardoubling\]](#eq:pardoubling){reference-type="eqref" reference="eq:pardoubling"} with $0<\gamma<1$. 
Assume that there exist $0<\alpha<1$ and $0<\beta<2^{n+p-1}/c_d^2$ such that for every parabolic rectangle $R$ and every measurable set $E \subset R^-(\gamma)$ for which $\lvert E \rvert < \alpha \lvert R^-(\gamma) \rvert$ it holds that $w(E) < \beta w(R^+(\gamma))$. Then there exist $\tau = \tau(p,\gamma) \geq 1$, $\rho=\rho(n,p,\alpha,\beta)$ and $c=c(n,p,\gamma,\alpha,\beta)$ such that for every parabolic rectangle $R=R(x,t,L) \subset \mathbb{R}^{n+1}$ and $\lambda \geq (w_{U^-})^{-1}$ we have $$\lvert R^{+}(\gamma) \cap \{ w^{-1} > \lambda \} \rvert \leq c \lambda w( R^{\tau} \cap \{ w^{-1} > \rho \lambda \} ) ,$$ where $$U^- = R^+(\gamma) - (0, \tau (1+\gamma) L^p) \quad\text{and}\quad R^{\tau} = Q(x,L) \times (t+\gamma L^p - \tau(1+\gamma)L^p , t+L^p) .$$ Note that $U^- = R^-(\gamma)$ and $R^\tau = R$ for $\tau=1$.* *Proof.* Let $R_0=R(x_0,t_0,L) = Q(x_0,L) \times (t_0-L^p, t_0+L^p)$. Denote $f=w^{-1}$ and $\, d \mu= w \, d x\, d t$. Let $\tau\geq 1$ be a parameter to be chosen later. Denote $S^+_0 = R^+_0(\gamma)$. The time length of $S^+_0$ is $l_t(S^+_0) = (1-\gamma) L^p$. We partition $S^+_0$ by dividing each spatial edge into $2$ equally long intervals. If $$\frac{l_t(S_{0}^+)}{\lceil 2^{p} \rceil} > \frac{(1-\gamma) L^p}{2^{p}},$$ we divide the time interval of $S^+_0$ into $\lceil 2^{p} \rceil$ equally long intervals. Otherwise, we divide the time interval of $S^+_0$ into $\lfloor 2^{p} \rfloor$ equally long intervals. We obtain subrectangles $S^+_1$ of $S^+_0$ with spatial side length $L_1 = l_x(S^+_1) = l_x(S^+_0)/2 = L / 2$ and time length either $$l_t(S^+_1) = \frac{l_t(S^+_0)}{\lceil 2^{p} \rceil} = \frac{(1-\gamma) L^p}{\lceil 2^{p} \rceil} \quad \text{or} \quad l_t(S^+_1) = \frac{(1-\gamma) L^p}{\lfloor 2^{p} \rfloor} .$$ For every $S^+_1$, there exists a unique rectangle $R_1$ with spatial side length $L_1 = L / 2$ and time length $2L_1^p = 2 L^p / 2^{p}$ such that $R_1$ has the same top as $S^+_1$.
We select those rectangles $S^+_1$ for which $$\frac{\lvert U^-_1 \rvert}{w(U^-_1)} = \fint_{U^-_1} f \, d \mu > \lambda$$ and denote the obtained collection by $\{ S^+_{1,j} \}_j$. If $$\frac{\lvert U^-_1 \rvert}{w(U^-_1)} = \fint_{U^-_1} f \, d \mu \leq \lambda ,$$ we subdivide $S^+_1$ in the same manner as above and select all those subrectangles $S^+_2$ for which $$\frac{\lvert U^-_2 \rvert}{w(U^-_2)} = \fint_{U^-_2} f \, d \mu > \lambda$$ to obtain the family $\{ S^+_{2,j} \}_j$. We continue this selection process recursively.
At the $i$th step, we partition unselected rectangles $S^+_{i-1}$ by dividing each spatial side into $2$ equally long intervals. If $$\label{eq:Aqproof_eq1} \frac{l_t(S_{i-1}^+)}{\lceil 2^{p} \rceil} > \frac{(1-\gamma) L^p}{2^{pi}},$$ we divide the time interval of $S^+_{i-1}$ into $\lceil 2^{p} \rceil$ equally long intervals. Otherwise, if $$\label{eq:Aqproof_eq2} \frac{l_t(S_{i-1}^+)}{\lceil 2^{p} \rceil} \leq \frac{(1-\gamma) L^p}{2^{pi}},$$ we divide the time interval of $S^+_{i-1}$ into $\lfloor 2^{p} \rfloor$ equally long intervals. We obtain subrectangles $S^+_i$. For every $S^+_i$, there exists a unique rectangle $R_i$ with spatial side length $L_i = L / 2^{i}$ and time length $2L_i^p = 2 L^p / 2^{pi}$ such that $R_i$ has the same top as $S^+_i$. Select those $S^+_i$ for which $$\frac{\lvert U^-_i \rvert}{w(U^-_i)} = \fint_{U^-_i} f \, d \mu > \lambda$$ and denote the obtained collection by $\{ S^+_{i,j} \}_j$. If $$\frac{\lvert U^-_i \rvert}{w(U^-_i)} = \fint_{U^-_i} f \, d \mu \leq \lambda ,$$ we continue the selection process in $S^+_i$.
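The recursive selection above is a stopping-time argument of Calderón-Zygmund type. Purely as an illustration, and not the parabolic construction of the proof, the following sketch runs the analogous stopping rule for dyadic intervals in one dimension; all function names and the sample function are ours, not the paper's.

```python
import math

def dyadic_stopping(f_avg, a, b, lam, depth=0, max_depth=20):
    """Select maximal dyadic subintervals of (a, b) on which the average of f
    exceeds lam; otherwise subdivide and recurse, mirroring the selection rule."""
    if f_avg(a, b) > lam:
        return [(a, b)]
    if depth >= max_depth:
        return []
    m = 0.5 * (a + b)
    return (dyadic_stopping(f_avg, a, m, lam, depth + 1, max_depth)
            + dyadic_stopping(f_avg, m, b, lam, depth + 1, max_depth))

# example: f(x) = x^(-1/2) on (0, 1); its average over (a, b) is
# 2 (sqrt(b) - sqrt(a)) / (b - a), which exceeds 4 only near the origin
f_avg = lambda a, b: 2.0 * (math.sqrt(b) - math.sqrt(a)) / (b - a)
selected = dyadic_stopping(f_avg, 0.0, 1.0, lam=4.0)
for a, b in selected:
    assert f_avg(a, b) > 4.0   # the stopping condition holds on selected intervals
print(selected)
```

On each selected interval the average exceeds the level while the parent's average did not, which is the same dichotomy the proof exploits for the rectangles $U^-_{i,j}$.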
In this manner we obtain a collection $\{S^+_{i,j} \}_{i,j}$ of pairwise disjoint rectangles. Observe that if [\[eq:Aqproof_eq1\]](#eq:Aqproof_eq1){reference-type="eqref" reference="eq:Aqproof_eq1"} holds, then we have $$l_t(S_i^+) = \frac{l_t(S^+_{i-1})}{\lceil 2^{p} \rceil} \geq \frac{(1-\gamma) L^p}{2^{pi}}.$$ On the other hand, if [\[eq:Aqproof_eq2\]](#eq:Aqproof_eq2){reference-type="eqref" reference="eq:Aqproof_eq2"} holds, then $$l_t(S_i^+) = \frac{l_t(S^+_{i-1})}{\lfloor 2^{p} \rfloor} \geq \frac{l_t(S^+_{i-1})}{2^{p}} \geq \dots \geq \frac{(1-\gamma) L^p}{2^{pi}} .$$ This gives a lower bound $$l_t(S_i^+) \geq \frac{ (1-\gamma) L^p}{2^{pi}}$$ for every $S_i^+$. Suppose that [\[eq:Aqproof_eq2\]](#eq:Aqproof_eq2){reference-type="eqref" reference="eq:Aqproof_eq2"} is satisfied at the $i$th step. Then we have an upper bound for the time length of $S_i^+$, since $$\begin{aligned} l_t(S^+_i) = \frac{l_t(S_{i-1}^+)}{\lfloor 2^{p} \rfloor} \leq \frac{\lceil 2^{p} \rceil}{\lfloor 2^{p} \rfloor} \frac{(1-\gamma) L^p}{2^{pi}} \leq \biggl( 1+ \frac{1}{\lfloor 2^{p} \rfloor} \biggr) \frac{(1-\gamma) L^p}{2^{pi}} .\end{aligned}$$ On the other hand, if [\[eq:Aqproof_eq1\]](#eq:Aqproof_eq1){reference-type="eqref" reference="eq:Aqproof_eq1"} is satisfied, then $$l_t(S^+_i) = \frac{l_t(S_{i-1}^+)}{\lceil 2^{p} \rceil} \leq \frac{l_t(S_{i-1}^+)}{2^{p}}.$$ In this case, [\[eq:Aqproof_eq2\]](#eq:Aqproof_eq2){reference-type="eqref" reference="eq:Aqproof_eq2"} has been satisfied at an earlier step $i'$ with $i'< i$. We obtain $$\begin{aligned} l_t(S^+_i) \leq \frac{l_t(S_{i-1}^+)}{ 2^{p}} \leq \dots \leq \frac{l_t(S_{i'}^+)}{ 2^{p(i-i')}} \leq \biggl( 1+ \frac{1}{\lfloor 2^{p} \rfloor} \biggr) \frac{(1-\gamma) L^p}{ 2^{pi}}\end{aligned}$$ by using the upper bound for $S_{i'}^+$. Thus, we have $$\frac{(1-\gamma) L^p}{2^{pi}} \leq l_t(S^+_i) \leq \biggl( 1+ \frac{1}{\lfloor 2^{p} \rfloor} \biggr) \frac{(1-\gamma) L^p}{2^{pi}}$$ for every $S^+_i$. 
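The two-sided bound just obtained can be checked numerically. The snippet below is an illustration with sample parameters (not part of the paper): it simulates the subdivision rule for a non-integer $p$ and verifies $(1-\gamma)L^p/2^{pi} \leq l_t(S_i^+) \leq (1+1/\lfloor 2^p\rfloor)(1-\gamma)L^p/2^{pi}$ at every step.

```python
import math

def simulate_time_lengths(p, gamma, L, steps):
    """Follow the subdivision rule for the time length l_t(S_i^+):
    divide by ceil(2^p) when the result stays above (1-gamma)L^p / 2^(p i),
    otherwise divide by floor(2^p)."""
    tp = 2.0 ** p
    lt = (1.0 - gamma) * L ** p      # l_t(S_0^+)
    lengths = []
    for i in range(1, steps + 1):
        target = (1.0 - gamma) * L ** p / tp ** i
        if lt / math.ceil(tp) > target:
            lt = lt / math.ceil(tp)
        else:
            lt = lt / math.floor(tp)
        lengths.append((i, lt, target))
    return lengths

# check: target <= l_t(S_i^+) <= (1 + 1/floor(2^p)) * target at every step
p, gamma, L = 2.5, 0.5, 1.0
for i, lt, target in simulate_time_lengths(p, gamma, L, 12):
    upper = (1.0 + 1.0 / math.floor(2.0 ** p)) * target
    assert target - 1e-12 <= lt <= upper + 1e-12, (i, lt, target, upper)
print("bounds hold for p =", p)
```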
Let $U^{--}_i = U^-_i - (0,(1+\gamma)L_i^p)$. We have a collection $\{ S^+_{i,j} \}_{i,j}$ of pairwise disjoint rectangles. However, the rectangles in the corresponding collections $\{ U^-_{i,j} \}_{i,j}$ and $\{ U^{--}_{i,j} \}_{i,j}$ may overlap. Thus, we replace them by subfamilies $\{ \widetilde{U}^-_{i,j} \}_{i,j}$ and $\{ \widetilde{U}^{--}_{i,j} \}_{i,j}$ of pairwise disjoint rectangles, which are constructed in the following way. At the first step, choose $\{ U^-_{1,j} \}_{j}$ and $\{ U^{--}_{1,j} \}_{j}$ and denote them by $\{ \widetilde{U}^-_{1,j} \}_j$ and $\{ \widetilde{U}^{--}_{1,j} \}_j$. Then consider the collections $\{ U^-_{2,j} \}_{j}$ and $\{ U^{--}_{2,j} \}_{j}$ where each $U^-_{2,j}$ and $U^{--}_{2,j}$ either intersects some $\widetilde{U}^-_{1,j}$ or $\widetilde{U}^{--}_{1,j}$, or does not intersect any $\widetilde{U}^-_{1,j}$ or $\widetilde{U}^{--}_{1,j}$. Select the pairs of rectangles $U^-_{2,j}$, $U^{--}_{2,j}$ so that neither $U^-_{2,j}$ nor $U^{--}_{2,j}$ intersects any $\widetilde{U}^-_{1,j}$ or $\widetilde{U}^{--}_{1,j}$, and denote the obtained collections by $\{ \widetilde{U}^-_{2,j} \}_j$ and $\{ \widetilde{U}^{--}_{2,j} \}_j$. At the $i$th step, choose those pairs $U^-_{i,j}$, $U^{--}_{i,j}$ so that neither $U^-_{i,j}$ nor $U^{--}_{i,j}$ intersects any previously selected $\widetilde{U}^-_{i',j}$ or $\widetilde{U}^{--}_{i',j}$, $i' < i$. Hence, we obtain collections $\{ \widetilde{U}^-_{i,j} \}_{i,j}$ and $\{ \widetilde{U}^{--}_{i,j} \}_{i,j}$ of pairwise disjoint rectangles. 
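The passage from the possibly overlapping families to $\{ \widetilde{U}^-_{i,j} \}_{i,j}$ and $\{ \widetilde{U}^{--}_{i,j} \}_{i,j}$ is a greedy, generation-by-generation selection. A simplified one-dimensional sketch of this disjointification (names and data hypothetical, for illustration only):

```python
def greedy_disjoint(generations):
    """generations: list of lists of intervals (a, b), ordered by step i.
    Keep an interval only if it intersects no previously kept interval."""
    kept = []
    overlaps = lambda u, v: u[0] < v[1] and v[0] < u[1]
    for gen in generations:
        for iv in gen:
            if all(not overlaps(iv, kv) for kv in kept):
                kept.append(iv)
    return kept

# generation 0 is kept; later intervals survive only if they avoid earlier ones
gens = [[(0.0, 1.0)], [(0.5, 1.5), (2.0, 3.0)], [(2.5, 3.5), (4.0, 5.0)]]
print(greedy_disjoint(gens))
```

The kept intervals are pairwise disjoint, just as the subfamilies $\widetilde{U}^-_{i,j}$, $\widetilde{U}^{--}_{i,j}$ are in the proof.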
Observe that for every $U^-_{i,j}$ there exists $\widetilde{U}^-_{i',j}$ with $i' < i$ such that $$\label{eq:Aqproof_plussubset} \text{pr}_x(U^-_{i,j}) \subset \text{pr}_x(\widetilde{U}^-_{i',j}) \quad \text{and} \quad \text{pr}_t(U^-_{i,j}) \subset \biggl( 2\frac{1+\gamma}{1-\gamma} + 2^{1-p}+1 \biggr) \text{pr}_t(\widetilde{U}^-_{i',j}) ,$$ since $$\biggl( 2 \frac{1 + \gamma}{1-\gamma} + 2^{1-p} \biggr) \frac{l_t(\widetilde{U}^-_{i',j})}{2} = \frac{(1-\gamma)L^p}{2^{p(i'+1)}} + \frac{(1+\gamma)L^p}{2^{pi'}} \geq l_t(U^-_{i,j}) + \frac{(1+\gamma)L^p}{2^{pi'}} .$$ Here pr$_x$ denotes the projection to $\mathbb R^n$ and pr$_t$ denotes the projection to the time axis. Note that $S^+_{i,j}$ is spatially contained in $U^-_{i,j}$, that is, $\text{pr}_x S^+_{i,j}\subset \text{pr}_x U^-_{i,j}$. In the time direction, we have $$\label{eq:Aqproof_minusplussubset} \text{pr}_t(S^+_{i,j}) \subset \text{pr}_t(R^\tau_{i,j}) \subset \biggl( 2\tau \frac{1 + \gamma}{1-\gamma} +1 \biggr) \text{pr}_t(U^-_{i,j}) ,$$ since $$\biggl( 2\tau \frac{1 + \gamma}{1-\gamma} + 2 \biggr) \frac{l_t(U^-_{i,j})}{2} = \frac{(1-\gamma)L^p}{2^{pi}} + \frac{\tau(1+\gamma)L^p}{2^{pi}} = l_t(R^{\tau}_{i,j}) ,$$ where recall that $R^{\tau}_{i,j} = Q(x_{R_{i,j}} ,L_i) \times (t_{R_{i,j}} +\gamma L_i^p - \tau(1+\gamma)L_i^p , t_{R_{i,j}} +L_i^p)$. 
Therefore, by [\[eq:Aqproof_plussubset\]](#eq:Aqproof_plussubset){reference-type="eqref" reference="eq:Aqproof_plussubset"} and [\[eq:Aqproof_minusplussubset\]](#eq:Aqproof_minusplussubset){reference-type="eqref" reference="eq:Aqproof_minusplussubset"}, it holds that $$\label{eq:Aqproof_start} \Big\lvert \bigcup_{i,j} S^+_{i,j} \Big\rvert \leq c_1 \sum_{i,j} \lvert \widetilde{U}^-_{i,j} \rvert \quad\text{with}\quad c_1 = \biggl( 2\frac{1+\gamma}{1-\gamma} + 2^{1-p}+1 \biggr) \biggl( 2\tau \frac{1 + \gamma}{1-\gamma} +1 \biggr).$$ For the rest of the proof and to simplify the notation, let $U^-_i = \widetilde{U}^-_{i,j}$ and $U^-_{i-1} = \widetilde{U}^-_{i-1,j'}$ be fixed, where $U^-_i$ was obtained by subdividing the previous $U^-_{i-1}$ for which $\lvert U^-_{i-1} \rvert / w(U^-_{i-1}) \leq \lambda$. We want to apply the parabolic doubling property to reach from $U^{--}_i$ to $U_{i-1}^-$. Choose $\tau\geq1$ such that $$\begin{aligned} \tau(1+\gamma)L^p &= \frac{\tau(1+\gamma)L^p}{2^p} + \frac{(1+\gamma)L^p}{2^p} + 2\gamma L^p + \frac{1}{2}(1-\gamma)L^p + \frac{1}{2} \frac{(1-\gamma)L^p}{2^p} \\ &\qquad + 2^p 2\gamma L^p + \frac{1}{2} 2^p (1-\gamma)L^p +\frac{1}{2} (1-\gamma) L^p +(1-\gamma)L^p -\frac{(1-\gamma)L^p}{2^p} ,\end{aligned}$$ that is, $$\begin{aligned} \tau &= \frac{2^p}{2^p-1} \biggl( 2^{p-1} +2 +\frac{1}{2^{p+1}} + \Bigl(2^{p} -2 +\frac{1}{2^p}\Bigr) \frac{\gamma}{1+\gamma} \biggr) .\end{aligned}$$ With this choice, we have enough room in time to apply the parabolic doubling condition to reach from $U^{--}_i$ to $U_{i-1}^-$. More precisely, there exist two parabolic rectangles $P, V$ such that $U_{i-1}^-\subset P^-(\gamma)$, $V^-(\gamma) =\tfrac{1}{2}P^+(\gamma)$ and $\frac{1}{2}V^+(\gamma) = U_i^{--}$. 
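The value of $\tau$ enters only through its defining equation, so the displayed closed form can be sanity-checked numerically. The helper below is an illustration with sample values of $p$ and $\gamma$ (not taken from the paper); it evaluates both sides of the defining equation with $L=1$.

```python
import math

def tau_closed_form(p, gamma):
    """Closed form for tau as displayed in the text."""
    tp = 2.0 ** p
    return tp / (tp - 1.0) * (
        tp / 2.0 + 2.0 + 1.0 / (2.0 * tp)
        + (tp - 2.0 + 1.0 / tp) * gamma / (1.0 + gamma)
    )

def defining_rhs(tau, p, gamma, L=1.0):
    """Right-hand side of the equation that tau is chosen to satisfy."""
    tp = 2.0 ** p
    Lp = L ** p
    return (tau * (1 + gamma) * Lp / tp + (1 + gamma) * Lp / tp
            + 2 * gamma * Lp + 0.5 * (1 - gamma) * Lp
            + 0.5 * (1 - gamma) * Lp / tp + tp * 2 * gamma * Lp
            + 0.5 * tp * (1 - gamma) * Lp + 0.5 * (1 - gamma) * Lp
            + (1 - gamma) * Lp - (1 - gamma) * Lp / tp)

for p in (1.5, 2.0, 3.0):
    for gamma in (0.1, 0.5, 0.9):
        tau = tau_closed_form(p, gamma)
        lhs = tau * (1.0 + gamma)          # tau (1+gamma) L^p with L = 1
        assert math.isclose(lhs, defining_rhs(tau, p, gamma), rel_tol=1e-12)
print("tau closed form matches its defining equation")
```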
Applying the parabolic doubling condition [\[eq:pardoubling\]](#eq:pardoubling){reference-type="eqref" reference="eq:pardoubling"} twice, we obtain $$\begin{aligned} w(U^-_{i-1}) \leq w(P^-(\gamma)) \leq c_d w(V^-(\gamma)) \leq c_d^2 w(U_i^{--}) .\end{aligned}$$ By the proof of Lemma [Lemma 9](#lemma:timemove){reference-type="ref" reference="lemma:timemove"} (ii), we have $$\lvert U_i^{--} \cap \{ \rho w > w_{U_i^-} \} \rvert < \frac{\rho}{w_{U_i^-}} w(U_i^{--}) = \rho \frac{w(U_i^{--})}{w(U_i^-)} \lvert U_i^- \rvert \leq \rho \frac{2^{n+p+1}\beta}{\alpha} \lvert U_i^- \rvert = \alpha \lvert U_i^- \rvert ,$$ where $\rho = \alpha^2 /( 2^{n+p+1}\beta)$. Then by the assumption (qualitative measure condition) it holds that $$w(U_i^{--} \cap \{ \rho w > w_{U_i^-} \}) < \beta w(U_i^-) ,$$ which implies $$w(U_i^{--}) < w(U_i^{--} \cap \{ \rho w \leq w_{U_i^-} \}) + \beta w(U_i^-) .$$ Combining the estimates above, we obtain $$\begin{aligned} \frac{2^{n+p} }{c_d^2 \lambda} \lvert U_i^- \rvert &= \frac{1}{c_d^2 \lambda} \lvert U_{i-1}^{-} \rvert \leq \frac{1}{c_d^2} w(U_{i-1}^-) \leq w(U_i^{--}) \\ &\leq w(U_i^{--} \cap \{ \rho w \leq w_{U_i^-} \}) + \beta w(U_i^-) \\ &\leq w(U_i^{--} \cap \{ \rho w \leq w_{U_i^-} \}) + \frac{\beta}{\lambda} \lvert U_i^- \rvert ,\end{aligned}$$ and thus $$\biggl( \frac{2^{n+p}}{c_d^2} - \beta \biggr) \lvert U_i^- \rvert \leq \lambda w(U_i^{--} \cap \{ \rho w \leq w_{U_i^-} \}) .$$ Since $\beta < 2^{n+p-1}/c_d^2$ and $w_{U^-_{i}} < \lambda^{-1}$, we have $$\label{eq:Aqproof_sublevelestimate} \lvert U_i^- \rvert \leq c_2 \lambda w(U_i^{--} \cap \{ \rho w \leq w_{U_i^-} \}) \leq c_2 \lambda w(U_i^{--} \cap \{ w^{-1} > \rho \lambda \}) ,$$ where $c_2 = c_d^2/2^{n+p-1}$. 
If $(x,t) \in S^+_0 \setminus \bigcup_{i,j} S^+_{i,j}$, then there exists a sequence of subrectangles $S^+_l$ containing $(x,t)$ such that $$\frac{\lvert U^-_l \rvert}{w(U^-_l)} = \fint_{U^-_l} f \, d \mu \leq \lambda$$ and $\lvert S^+_l \rvert \to 0$ as $l \to \infty$. The Lebesgue differentiation theorem [@KinnunenMyyryYang2022 Lemma 2.3] implies that $$w^{-1} = f(x,t) \leq \lambda$$ for almost every $(x,t) \in S^+_0 \setminus \bigcup_{i,j} S^+_{i,j}$. It follows that $$S^+_0 \cap \{ w^{-1} > \lambda \} \subset \bigcup_{i,j} S^+_{i,j}$$ up to a set of measure zero. Using this together with [\[eq:Aqproof_start\]](#eq:Aqproof_start){reference-type="eqref" reference="eq:Aqproof_start"} and [\[eq:Aqproof_sublevelestimate\]](#eq:Aqproof_sublevelestimate){reference-type="eqref" reference="eq:Aqproof_sublevelestimate"}, we obtain $$\begin{aligned} \lvert S^+_0 \cap \{ w^{-1} > \lambda \} \rvert &\leq c_1 \sum_{i,j} \lvert \widetilde{U}^-_{i,j} \rvert \leq c_1 c_2 \lambda \sum_{i,j} w(\widetilde{U}^{--}_{i,j} \cap \{ w^{-1} > \rho \lambda \}) \\ &\leq c \lambda w(R_0^{\tau} \cap \{ w^{-1} > \rho \lambda \}) ,\end{aligned}$$ where $c = c_1 c_2$. This completes the proof. ◻ The following theorem shows that the parabolic reverse Hölder inequality together with the parabolic doubling condition implies the parabolic Muckenhoupt condition. **Theorem 24**.
*Let $1<q<\infty$ and $w \in RH_q^+$ satisfying the parabolic doubling condition [\[eq:pardoubling\]](#eq:pardoubling){reference-type="eqref" reference="eq:pardoubling"} with $0<\gamma<1$. Then $w\in A^+_r(\gamma)$ for some $r>1$.* *Proof.* By Lemma [Lemma 6](#lem:RHItimelag){reference-type="ref" reference="lem:RHItimelag"} and the proof of Theorem [Theorem 8](#thm:quantiAinfty){reference-type="ref" reference="thm:quantiAinfty"}, we see that the assumptions of Lemma [Lemma 23](#lemma:measureweight-estimate){reference-type="ref" reference="lemma:measureweight-estimate"} are satisfied and thus it can be applied. We prove the claim first for weights satisfying that $w^{-1}$ is bounded. Let $R \subset \mathbb R^{n+1}$ be a parabolic rectangle. Let $\varepsilon>0$ to be chosen later. Denote $B = (w_{U^-})^{-1}$. Applying Cavalieri's principle with Lemma [Lemma 23](#lemma:measureweight-estimate){reference-type="ref" reference="lemma:measureweight-estimate"}, we obtain $$\begin{aligned} \int_{R^+(\gamma)} w^{-\varepsilon} &= \varepsilon \int_0^\infty \lambda^{\varepsilon-1} \lvert R^{+}(\gamma) \cap \{ w^{-1} > \lambda \} \rvert \, d \lambda\\ &= \varepsilon \int_0^{B} \lambda^{\varepsilon-1} \lvert R^{+}(\gamma) \cap \{ w^{-1} > \lambda \} \rvert \, d \lambda+ \varepsilon \int_{B}^\infty \lambda^{\varepsilon-1} \lvert R^{+}(\gamma) \cap \{ w^{-1} > \lambda \} \rvert \, d \lambda\\ &\leq \lvert R^{+}(\gamma) \rvert \varepsilon \int_0^{B} \lambda^{\varepsilon-1} \, d \lambda+ c \varepsilon \int_{B}^\infty \lambda^{\varepsilon} w( R^{\tau} \cap \{ w^{-1} > \rho \lambda \} ) \, d \lambda\\ &\leq \lvert R^{+}(\gamma) \rvert B^\varepsilon + c \rho^{-1-\varepsilon} \varepsilon \int_0^\infty \lambda^{\varepsilon} w( R^{\tau} \cap \{ w^{-1} > \lambda \} ) \, d \lambda\\ &\leq \lvert U^- \rvert (w_{U^-})^{-\varepsilon} + c \rho^{-1-\varepsilon} \frac{\varepsilon}{1+\varepsilon} \int_{R^{\tau}} w^{-\varepsilon} ,\end{aligned}$$ where $c$ is the constant from Lemma [Lemma 
23](#lemma:measureweight-estimate){reference-type="ref" reference="lemma:measureweight-estimate"}. By choosing $\varepsilon>0$ to be small enough, we can absorb the integral over $R^+(\gamma)$ of the second term to the left-hand side to get $$\begin{aligned} \biggl( 1- \frac{c}{\rho^{1+\varepsilon}} \frac{\varepsilon}{1+\varepsilon} \biggr) \int_{R^+(\gamma)} w^{-\varepsilon} \leq \lvert U^- \rvert (w_{U^-})^{-\varepsilon} + \frac{c}{\rho^{1+\varepsilon}} \frac{\varepsilon}{1+\varepsilon} \int_{R^{\tau}\setminus R^+(\gamma)} w^{-\varepsilon} .\end{aligned}$$ Denote $R^{\tau,-} = R^{\tau} \setminus R^+(\gamma)$. Hence, we have $$\label{eq:qualiproof_cavalieri_iteration} \int_{R^+(\gamma)} w^{-\varepsilon} \leq c_0 \lvert U^- \rvert (w_{U^-})^{-\varepsilon} + c_1 \varepsilon \int_{R^{\tau,-}} w^{-\varepsilon} ,$$ where $$c_0 = \frac{1+\varepsilon }{1-(c \rho^{-1-\varepsilon} -1) \varepsilon} \quad \text{and} \quad c_1 = \frac{c \rho^{-1-\varepsilon} }{1-(c \rho^{-1-\varepsilon} -1) \varepsilon} .$$ Fix $R_0 = Q(x_0,L) \times (t_0 - L^p, t_0 + L^p) \subset \mathbb R^{n+1}$. We cover $R^{\tau,-}_{0}(\gamma)$ by $$M = 2^n \biggl\lceil \frac{\tau(1+\gamma)}{(1-\gamma)/2^p} \biggr\rceil = 2^{n} \biggl\lceil 2^p \tau \frac{1+\gamma}{1-\gamma} \biggr\rceil$$ rectangles $R^+_{1,j}(\gamma)$ with spatial side length $L_1 = L/2$ and time length $(1-\gamma)L_1^p = (1-\gamma) L^p / 2^p$. This can be done by dividing each spatial edge of $R^{\tau,-}_{0}(\gamma)$ into two equally long pairwise disjoint intervals, and the time interval of $R^{\tau,-}_{0}(\gamma)$ into $\lceil 2^p \tau (1+\gamma) /(1-\gamma) \rceil$ equally long intervals such that their overlap is bounded by $2$. Thus, the overlap of $R^+_{1,j}(\gamma)$ is bounded by $2$. Then consider $R^{\tau,-}_{1,j}(\gamma)$ and cover it in the same way as before by $M$ rectangles $R^+_{2,j}(\gamma)$ with spatial side length $L_2 = L/2^{2}$ and time length $(1-\gamma)L_2^p = (1-\gamma)L^p / 2^{2p}$. 
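The layer-cake (Cavalieri) identity $\int w^{-\varepsilon} = \varepsilon \int_0^\infty \lambda^{\varepsilon-1} \lvert \{ w^{-1} > \lambda \} \rvert \, d\lambda$ used at the start of this proof can be illustrated numerically. The weight $w(x) = 1+x$ on $[0,1]$ and all names below are ours, chosen only for the demonstration.

```python
import numpy as np

def trapezoid(y, x):
    """Simple trapezoidal rule (avoids version-specific numpy names)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

eps = 0.5
x = np.linspace(0.0, 1.0, 200001)
w = 1.0 + x                                   # sample weight on [0, 1]
lhs = trapezoid(w ** (-eps), x)               # direct integral of w^(-eps)

# layer-cake side: eps * int_0^inf lam^(eps-1) |{w^(-1) > lam}| dlam, where
# |{x in [0, 1] : (1+x)^(-1) > lam}| = clip(1/lam - 1, 0, 1)
lam = np.logspace(-8.0, 0.1, 200001)
measure = np.clip(1.0 / lam - 1.0, 0.0, 1.0)
# the tail over (0, lam[0]], where the measure equals 1, contributes lam[0]^eps
rhs = lam[0] ** eps + trapezoid(eps * lam ** (eps - 1.0) * measure, lam)

exact = 2.0 * (np.sqrt(2.0) - 1.0)            # closed form of the direct integral
assert abs(lhs - exact) < 1e-6
assert abs(rhs - exact) < 1e-4
print(lhs, rhs)
```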
At the $i$th step, cover $R^{\tau,-}_{i-1,j}(\gamma)$ by $M$ rectangles $R^+_{i,j}(\gamma)$ with spatial side length $L_i = L/2^{i}$ and time length $(1-\gamma)L_i^p = (1-\gamma)L^p / 2^{pi}$ such that their overlap is bounded by $2$ for fixed $R^{\tau,-}_{i-1,j}(\gamma)$. We observe that the time distance between the bottom of $U^-_{i,j}$ and the bottom of $R_0^+(\gamma)$ is at most $$\label{eq:qualiproof_maxdist-lowerparts} \sum_{i=0}^\infty l_t(R^{\tau,-}_{i,j}(\gamma)) = \sum_{i=0}^\infty \frac{\tau(1+\gamma)L^p}{2^{pi}} = \frac{2^p}{2^p -1} \tau(1+\gamma)L^p .$$ We construct a chain of rectangles from each $U^-_{i,j}$ to $U^{\sigma,-}_{0} = R^+(\gamma) - (0, \sigma (1+\gamma) L^p )$, where $\sigma\geq\tau$ is chosen later. Fix $U^-_{i} = U^-_{i,j}$. Let $N=i$ denote the number of rectangles in the chain and $d_{i,k}$, $k\in \{1,\dots,N\}$, the distances between the bottoms of the rectangles given by $$\begin{aligned} d_{i,k} &= 2^{kp} (1+\gamma) L_i^p + \frac{1}{2} (2^{kp}-2^{(k-1)p}) (1-\gamma)L_i^p \\ &\qquad + 2^{(k+1)p} (1+\gamma) L_i^p + \frac{1}{2} (2^{(k+1)p}-2^{kp}) (1-\gamma)L_i^p \\ &= 2^{kp} (2^p+1) (1+\gamma) L_i^p + 2^{kp-1} (2^{p}-2^{-p}) (1-\gamma)L_i^p .\end{aligned}$$ Define the elements of the chain by $$\begin{aligned} V_0 = U^-_i = Q(x_{R_{i}},L_i) \times (a_0 , a_0 + (1-\gamma) L_i^p) \quad\text{and}\quad V_k = Q_{k} \times I_k\end{aligned}$$ for every $k\in \{1,\dots,N\}$, where $$\begin{aligned} Q_k &= 2^k Q(x_{R_{i}},L_i) + \frac{2^{k}-1}{2^{i}-1}(x_{R_{0}} - x_{R_{i}}) , \\ I_k &= (a_k,b_k) = (a_{k-1} - d_{i,k} , a_{k-1} + 2^{kp} (1-\gamma) L_i^p - d_{i,k} ) .\end{aligned}$$ Observe that $Q_0 = pr_x(U^-_{i})$, $Q_{N} = pr_x(U^-_{0,\sigma})$ and $\lvert V_{k} \vert = 2^{n+p}\lvert V_{k-1} \rvert$. 
The time distance between the bottom of $V_0$ and the bottom of $V_N$ is $$\begin{aligned} \sum_{k=1}^{N} d_{i,k} &= \sum_{k=1}^{i} 2^{kp} (2^p+1) (1+\gamma) L_i^p + 2^{kp-1} (2^{p}-2^{-p}) (1-\gamma)L_i^p \\ & = \frac{2^{2p}+2^p}{2^p -1} \frac{2^{pi}-1}{2^{pi}} (1+\gamma) L^p + \frac{2^{2p}-1 }{2^{p+1} -2} \frac{2^{pi}-1}{2^{pi}} (1-\gamma) L^p .\end{aligned}$$ Hence, the maximum possible distance between the bottom of $V_0$ and the bottom of $V_N$ is $$\label{eq:qualiproof_maxdist-chain} \sum_{k=1}^{\infty} d_{i,k} = \frac{2^{2p}+2^p}{2^p -1} (1+\gamma) L^p + \frac{2^{2p}-1 }{2^{p+1} -2} (1-\gamma) L^p .$$ By combining [\[eq:qualiproof_maxdist-lowerparts\]](#eq:qualiproof_maxdist-lowerparts){reference-type="eqref" reference="eq:qualiproof_maxdist-lowerparts"} and [\[eq:qualiproof_maxdist-chain\]](#eq:qualiproof_maxdist-chain){reference-type="eqref" reference="eq:qualiproof_maxdist-chain"}, we obtain an upper bound for the time length from the bottom of $R_0^+(\gamma)$ to the bottom of $V_N$. Based on this, we fix $U^{\sigma,-}_{0}$ by choosing $\sigma$ such that $$\sigma (1+\gamma)L^p = \frac{2^p}{2^p -1} \tau(1+\gamma)L^p + \frac{2^{2p}+2^p}{2^p -1} (1+\gamma) L^p + \frac{2^{2p}-1 }{2^{p+1} -2} (1-\gamma) L^p ,$$ that is, $$\sigma = \frac{2^p\tau}{2^p -1} + \frac{2^{2p}+2^p}{2^p -1} + \frac{2^{2p}-1 }{2^{p+1} -2} \frac{1-\gamma}{1+\gamma} .$$ We add one more rectangle $V_{N+1}$ into the chain so that the chain would end at $U^{\sigma,-}_{0}$. Define $$V_{N+1} = V_N - (0, b_i l_t(V_N) ) = V_N - (0, b_i 2^{pi} (1-\gamma) L_{i}^p ) = V_N - ( 0, b_i (1-\gamma) L^p ) ,$$ where $b_i$ is chosen such that the bottom of $V_{N+1}$ intersects with the bottom of $U^{\sigma,-}_{0}$. Then $U^{\sigma,-}_{0}$ is contained in $V_{N+1}$. Next we find an upper bound for $b_i$. 
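The geometric sums above can be verified numerically. The snippet below (illustrative, with sample parameters that are not from the paper) compares the term-by-term sum of the distances $d_{i,k}$ with the displayed closed form.

```python
import math

def d_sum_direct(p, gamma, L, i):
    """Sum of the chain distances d_{i,k}, k = 1..i, computed term by term,
    with L_i^p = L^p / 2^(p i)."""
    tp = 2.0 ** p
    Lip = L ** p / tp ** i
    total = 0.0
    for k in range(1, i + 1):
        total += (tp ** k * (tp + 1.0) * (1.0 + gamma) * Lip
                  + tp ** k / 2.0 * (tp - 1.0 / tp) * (1.0 - gamma) * Lip)
    return total

def d_sum_closed(p, gamma, L, i):
    """Closed form stated in the text."""
    tp = 2.0 ** p
    Lp = L ** p
    frac = (tp ** i - 1.0) / tp ** i
    return ((tp ** 2 + tp) / (tp - 1.0) * frac * (1.0 + gamma) * Lp
            + (tp ** 2 - 1.0) / (2.0 * tp - 2.0) * frac * (1.0 - gamma) * Lp)

for i in (1, 2, 5, 10):
    a, b = d_sum_direct(2.0, 0.3, 1.0, i), d_sum_closed(2.0, 0.3, 1.0, i)
    assert math.isclose(a, b, rel_tol=1e-12), (i, a, b)
print("closed form agrees with the direct sum")
```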
We observe that a rough lower bound for the time length from the bottom of $R^+_0(\gamma)$ to the bottom $V_N$ is given by $$\frac{(1-\gamma)L^p}{2^p} + (2^p+1) (1+\gamma) L^p + 2^{-1} (2^{p}-2^{-p}) (1-\gamma)L^p .$$ Therefore, the time distance between the bottom of $V_N$ and the bottom of $U^{\sigma,-}_{0}$ is less than $$\begin{aligned} & \sigma (1+\gamma)L^p - (2^p+1) (1+\gamma) L^p - 2^{-1} (2^{p}+2^{-p}) (1-\gamma)L^p \\ &\qquad = \frac{2^p}{2^p -1} \tau(1+\gamma)L^p + \frac{2^p+1}{2^p -1} (1+\gamma) L^p - \frac{2^{p}+2^{-p} }{2^{p+1} -2} (1-\gamma) L^p .\end{aligned}$$ By this, we obtain an upper bound for $b_i$ $$b_i (1-\gamma) L^p \leq \frac{2^p}{2^p -1} \tau(1+\gamma)L^p + \frac{2^p+1}{2^p -1} (1+\gamma) L^p - \frac{2^{p}+2^{-p} }{2^{p+1} -2} (1-\gamma) L^p ,$$ that is, $$b_i \leq \frac{2^p \tau + 2^p + 1}{2^p -1} \frac{1+\gamma}{1-\gamma} - \frac{2^{p}+2^{-p} }{2^{p+1} -2} = \theta .$$ By the definition of $V_k$, we can apply the parabolic doubling condition [\[eq:pardoubling\]](#eq:pardoubling){reference-type="eqref" reference="eq:pardoubling"} twice for each pair of $V_{k-1}, V_k$, $k\in\{1,\dots,N\}$, and Lemma [Lemma 9](#lemma:timemove){reference-type="ref" reference="lemma:timemove"} $(ii)$ for $V_{N}, V_{N+1}$ with $\theta\geq\sup b_i$ to get $$w(V_0) \geq c_d^{-2} w(V_1) \geq c_d^{-2N} w(V_N) \geq c_d^{-2i} \frac{1}{c_2} w(V_{N+1}) ,$$ where $c_2$ is the constant from Lemma [Lemma 9](#lemma:timemove){reference-type="ref" reference="lemma:timemove"} $(ii)$. We conclude that $$\label{eq:qualiproof_chainestimate2} w(U^{\sigma,-}_{0}) \leq w(V_{N+1}) \leq c_2 c_d^{2i} w(V_0) \leq c_2 \xi^i w(U^-_i) ,$$ where $\xi = c_d^{2}$. 
We iterate [\[eq:qualiproof_cavalieri_iteration\]](#eq:qualiproof_cavalieri_iteration){reference-type="eqref" reference="eq:qualiproof_cavalieri_iteration"} to obtain $$\begin{aligned} \int_{R^+_0(\gamma)} w^{-\varepsilon} &\leq c_0 \lvert U^-_0 \rvert (w_{U^-_0})^{-\varepsilon} + c_1 \varepsilon \int_{R^{\tau,-}_{0}} w^{-\varepsilon} \\ &\leq c_0 \lvert U^-_0 \rvert (w_{U^-_0})^{-\varepsilon} + c_1 \varepsilon \sum_{j=1}^M \int_{R^+_{1,j}(\gamma)} w^{-\varepsilon} \\ &\leq c_0 \lvert U^-_0 \rvert (w_{U^-_0})^{-\varepsilon} + c_1 \varepsilon \sum_{j=1}^M \biggl( c_0 \lvert U^-_{1,j} \rvert (w_{U^-_{1,j}})^{-\varepsilon} + c_1 \varepsilon \int_{R^{\tau,-}_{1,j}(\gamma)} w^{-\varepsilon} \biggr) \\ &= c_0 \lvert U^-_0 \rvert (w_{U^-_0})^{-\varepsilon} + c_0 c_1 \varepsilon \sum_{j=1}^M \lvert U^-_{1,j} \rvert (w_{U^-_{1,j}})^{-\varepsilon} + (c_1 \varepsilon)^2 \sum_{j=1}^M \int_{R^{\tau,-}_{1,j}(\gamma)} w^{-\varepsilon} \\ &\leq c_0 \sum_{i=0}^N \biggl( (c_1 \varepsilon)^{i} \sum_{j=1}^{M^i} \lvert U^-_{i,j} \rvert (w_{U^-_{i,j}})^{-\varepsilon} \biggr) + (c_1 \varepsilon)^{N+1} \sum_{j=1}^{M^N} \int_{R^{\tau,-}_{N,j}(\gamma)} w^{-\varepsilon} \\ &\leq c_0 \sum_{i=0}^N \biggl( (c_1 \varepsilon)^{i} \sum_{j=1}^{M^i} \lvert U^-_{i,j} \rvert (w_{U^-_{i,j}})^{-\varepsilon} \biggr) + (c_1 \varepsilon)^{N+1} M^N \int_{R^{\sigma,-}_{0}(\gamma)} w^{-\varepsilon} \\ &= I + II .\end{aligned}$$ We observe that $II$ tends to zero if $\varepsilon < \tfrac{1}{c_1 M}$ as $N \to \infty$ since $w^{-\varepsilon}$ is bounded by the initial assumption. 
For the inner sum of the first term $I$, we apply [\[eq:qualiproof_chainestimate2\]](#eq:qualiproof_chainestimate2){reference-type="eqref" reference="eq:qualiproof_chainestimate2"} to get $$\begin{aligned} \sum_{j=1}^{M^i} \lvert U^-_{i,j} \rvert (w_{U^-_{i,j}})^{-\varepsilon} &= \sum_{j=1}^{M^i} \lvert U^-_{i,j} \rvert^{1+\varepsilon} w(U^-_{i,j})^{-\varepsilon} \leq \sum_{j=1}^{M^i} 2^{-(n+p)(1+\varepsilon) i} \lvert U^{\sigma,-}_{0} \rvert^{1+\varepsilon} w(U^-_{i,j})^{-\varepsilon} \\ &\leq \sum_{j=1}^{M^i} 2^{-(n+p)(1+\varepsilon) i} \lvert U^{\sigma,-}_{0} \rvert^{1+\varepsilon} c_2^\varepsilon \xi^{\varepsilon i} w(U^{\sigma,-}_{0})^{-\varepsilon} \\ &= 2^{-(n+p)(1+\varepsilon) i} c_2^\varepsilon \xi^{\varepsilon i} M^i \lvert U^{\sigma,-}_{0} \rvert (w_{U^{\sigma,-}_{0}})^{-\varepsilon} .\end{aligned}$$ Thus, it follows that $$\begin{aligned} I &\leq c_0 \sum_{i=0}^N (c_1 \varepsilon)^{i} 2^{-(n+p)(1+\varepsilon) i} c_2^\varepsilon \xi^{\varepsilon i} M^i \lvert U^{\sigma,-}_{0} \rvert (w_{U^{\sigma,-}_{0}})^{-\varepsilon} \\ &\leq c_0 c_2^\varepsilon \lvert U^{\sigma,-}_{0} \rvert (w_{U^{\sigma,-}_{0}})^{-\varepsilon} \sum_{i=0}^N (c_1 \varepsilon)^{i} 2^{-(n+p)(1+\varepsilon) i} \xi^{\varepsilon i} M^i .\end{aligned}$$ We estimate the sum by $$\begin{aligned} \sum_{i=0}^N (c_1 \varepsilon)^{i} 2^{-(n+p)(1+\varepsilon) i} \xi^{\varepsilon i} M^i &= \sum_{i=0}^N \bigl( c_1 \varepsilon 2^{-(n+p)(1+\varepsilon)} \xi^{\varepsilon} M \bigr)^i \\ &\leq \frac{1}{1- c_1 \varepsilon 2^{-(n+p)(1+\varepsilon)} \xi^{\varepsilon} M} ,\end{aligned}$$ whenever $\varepsilon$ is small enough, for example, $\varepsilon < 2^{n+p} /(c_1 \xi M)$. Then it holds that $$\begin{aligned} \int_{R^+_0(\gamma)} w^{-\varepsilon} &\leq \frac{c_0 c_2^\varepsilon}{1- c_1 \varepsilon 2^{-(n+p)(1+\varepsilon)} \xi^{\varepsilon} M} \lvert U^{\sigma,-}_{0} \rvert (w_{U^{\sigma,-}_{0}})^{-\varepsilon}\end{aligned}$$ for small enough $\varepsilon$. 
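The absorption of the tail in $I$ rests on the geometric series being summable: for small enough $\varepsilon$, for example $\varepsilon < 2^{n+p}/(c_1 \xi M)$ as in the text, the common ratio $q = c_1 \varepsilon\, 2^{-(n+p)(1+\varepsilon)} \xi^{\varepsilon} M$ drops below $1$ and the partial sums are dominated by $1/(1-q)$. The constants below are hypothetical sample values, used only to illustrate this.

```python
# Illustrative check with hypothetical constants n, p, c1, xi, M.
n, p = 2, 2.0
c1, xi, M = 10.0, 4.0, 64
eps = 0.9 * 2 ** (n + p) / (c1 * xi * M)      # eps < 2^(n+p) / (c1 * xi * M)
q = c1 * eps * 2 ** (-(n + p) * (1 + eps)) * xi ** eps * M
assert 0 < q < 1                               # the series is summable
partial = sum(q ** i for i in range(200))
assert partial <= 1.0 / (1.0 - q) + 1e-12      # partial sums stay below the limit
print(q, partial, 1.0 / (1.0 - q))
```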
We conclude that $$\label{eq:qualiproof_muckenhoupt-estimate} \fint_{U^{\sigma,-}_{0}} w \biggl( \fint_{R^+_0(\gamma)} w^{-\varepsilon} \biggr)^\frac{1}{\varepsilon} \leq c_3 ,$$ where $$c_3^\varepsilon = \frac{c_0 c_2^\varepsilon}{1- c_1 \varepsilon 2^{-(n+p)(1+\varepsilon)} \xi^{\varepsilon} M} .$$ We have shown that [\[eq:qualiproof_muckenhoupt-estimate\]](#eq:qualiproof_muckenhoupt-estimate){reference-type="eqref" reference="eq:qualiproof_muckenhoupt-estimate"} holds for weights satisfying that $w^{-1}$ is bounded. For general $w$, we consider truncations $\max\{w,1/k\}$, $k \in \mathbb N$, and apply [\[eq:qualiproof_muckenhoupt-estimate\]](#eq:qualiproof_muckenhoupt-estimate){reference-type="eqref" reference="eq:qualiproof_muckenhoupt-estimate"} with Fatou's lemma as $k\to\infty$. Hence, [\[eq:qualiproof_muckenhoupt-estimate\]](#eq:qualiproof_muckenhoupt-estimate){reference-type="eqref" reference="eq:qualiproof_muckenhoupt-estimate"} holds for general weights as well. Let $r = 1+1/\varepsilon$.
Applying [@KinnunenMyyry2023 Theorem 3.1] we conclude that $w \in A^+_r(\gamma)$. This completes the proof. ◻

# References

L. Berkovits, *Parabolic Muckenhoupt weights in the Euclidean space*, J. Math. Anal. Appl. **379** (2011), no. 2, 524--537.

D. Cruz-Uribe, C. J. Neugebauer and V. Olesen, *The one-sided minimal operator and the one-sided reverse Hölder inequality*, Studia Math. **116** (1995), 255--270.

L. Forzani, F. J. Martín-Reyes, and S. Ombrosi, *Weighted inequalities for the two-dimensional one-sided Hardy-Littlewood maximal function*, Trans. Amer. Math. Soc. **363** (2011), no. 4, 1699--1719.

J. Heinonen, T. Kilpeläinen and O. Martio, *Nonlinear potential theory of degenerate elliptic equations*, Dover Publications, Inc., Mineola, NY, 2006.

J. Kinnunen and K. Myyryläinen, *Characterizations of parabolic Muckenhoupt classes*, arXiv preprint arXiv:2306.07600 (2023).

J. Kinnunen, K. Myyryläinen, and D. Yang, *John--Nirenberg inequalities for parabolic BMO*, Math. Ann. (2022).

J. Kinnunen, K. Myyryläinen, D. Yang, and C. Zhu, *Parabolic Muckenhoupt weights on spaces of homogeneous type*, arXiv preprint arXiv:2208.08328 (2022).

J. Kinnunen and O. Saari, *On weights satisfying parabolic Muckenhoupt conditions*, Nonlinear Anal. **131** (2016), 289--299.

J. Kinnunen and O. Saari, *Parabolic weighted norm inequalities and partial differential equations*, Anal. PDE **9** (2016), no. 7, 1711--1736.

A. Lerner and S. Ombrosi, *A boundedness criterion for general maximal operators*, Publ. Mat. **54** (2010), 53--71.

J. Ma, Q. He and D. Yan, *Weighted characterization of parabolic fractional maximal operator*, Frontiers of Mathematics **18** (2023), no. 1, 185--196.

F. J. Martín-Reyes, L. Pick and A. de la Torre, *$A^+_\infty$ condition*, Can. J. Math. **45** (1993), 1231--1244.

F. J. Martín-Reyes and A. de la Torre, *One-sided BMO spaces*, J. London Math. Soc. **49** (1994), 529--542.

S. Ombrosi, *Weak weighted inequalities for a dyadic one-sided maximal function in $\mathbb R^n$*, Proc. Amer. Math. Soc. **133** (2005), no. 6, 1769--1775.
O. Saari, *Parabolic BMO and global integrability of supersolutions to doubly nonlinear parabolic equations*, Rev. Mat. Iberoam. **32** (2016), no. 3, 1001--1018. E. Sawyer, *Weighted inequalities for the one-sided Hardy-Littlewood maximal functions*, Trans. Amer. Math. Soc. **297** (1986), no. 1, 53--61. [^1]: The second author was supported by the Magnus Ehrnrooth Foundation.
--- abstract: | In this paper, we study solutions for a weakly coupled system of eikonal equations arising in an optimal path-planning problem for autonomous vehicles. The model considered takes into account the possibility of two types of breakdown, partial and total, which happen at a known, spatially inhomogeneous rate. In particular, we show how to bypass the delicate degenerate coupling condition by using existing results on weakly coupled systems of Hamilton-Jacobi equations. Then we consider finite element method schemes built for convection-diffusion problems to construct approximate solutions for this system and produce some numerical simulations. author: - "Maria Teresa Chiri$^{1}$, Kenneth D Czuprynski$^{2}$ and Ludmil T Zikatanov$^3$ [^1] [^2] [^3] [^4]" bibliography: - references.bib title: | Weakly coupled systems of eikonal equations for autonomous vehicles\ --- # Introduction In the era of autonomous vehicles, there is an increasing need to develop robust and widely applicable path-planning algorithms. The models introduced so far are meant for a variety of autonomous vehicles, ranging from planetary exploration rovers [@Tompkins] to flying drones [@Nieto], underwater vehicles [@980841], and of course cars [@Reeds]. Although the literature on path-planning methods used in robotics is already vast (see [@Alt; @Chaz; @Dij; @Kav; @Lav] for some examples) and the number of models continues to grow over the years, there are still important issues that need to be addressed. A new challenge, for instance, is to derive models that take into account accidental events such as vehicle breakdowns, even in the case of a completely known environment. In this respect, in [@Cornell1] the authors consider an optimal path model in which a vehicle switches between two different modes, representing a partial and a total breakdown condition. The switch happens stochastically at known rates and each mode has its own deterministic dynamics and running cost.
Their approach is based on dynamic programming in continuous state and time and leads to globally optimal trajectories by solving a system of two weakly coupled eikonal equations. This represents a simple problem in the context of the general family of weakly coupled systems of Hamilton-Jacobi equations, which appear frequently in the literature on optimal control problems with random switching costs governed by Markov chains [@Dav; @Flem; @Prot]. Recently, weakly coupled Hamilton-Jacobi systems were studied from the viewpoint of weak KAM theory. For example, [@Camilli1; @Cagn] and [@Mitake] investigated the large-time behavior of the solution for time-dependent problems. In [@Mitake1], the authors studied homogenization for weakly coupled systems and the rate of convergence to matched solutions, while [@Davini1] generalized the notion of Aubry sets to the case of systems and proved a comparison principle with respect to their boundary data on Aubry sets.\ We will study the system derived in [@Cornell1] in the framework of viscosity solutions, as previously done in [@Engler; @Ishi]. In particular, we will prove a comparison principle in the classical sense [@Bardi], with the difference that the boundary conditions will be prescribed not only along the perimeter of the domain but also on the Aubry set of the system. On the other hand, the existence of solutions is a very delicate issue. We will discuss an example of a one-dimensional weakly coupled system of eikonal equations showing how the lack of boundary conditions on the Aubry set affects the uniqueness of solutions, and we will provide boundary conditions for which the system does not admit solutions.\ Numerically, a range of different methods have been applied to solve the eikonal equations. Fast marching methods [@Sethian1] and fast sweeping methods [@Kao] are efficient algorithms for the causal propagation of boundary information into the domain.
In [@Cornell1] the coupled system of eikonal equations is solved using FMMs within a value iteration scheme which uses policy evaluations to speed up convergence. A number of finite element formulations have also been considered for the eikonal equations (cf. [@Caboussat; @Yang; @guermond2008fast]). In [@Yang], an artificial viscosity regularization with a homotopy scheme for the stable reduction of the viscosity is introduced. High-order discontinuous Galerkin formulations have also been developed for the eikonal equations [@Flad] and are amenable to parallelization. In this work, we consider an artificial viscosity formulation similar to [@Yang] but consider finite element schemes built for convection-diffusion problems [@Morton] as a means of achieving stable numerical solutions. We point out that numerical simulations will be performed in a case in which the Aubry set is empty to avoid the theoretical complications described above. The paper is organized as follows: in Section II we will describe the model introduced in [@Cornell1], in Section III we will prove a comparison principle for viscosity solutions of the weakly coupled system of eikonal equations and show an example of non-uniqueness and non-existence; in Section IV we will derive a numerical scheme based on a finite element method to approximate solutions for this system, while simulations will be discussed in Section V. Finally, in Section VI we will describe possible future research directions. # The model Inspired by [@Cornell1], we consider the problem of optimal path planning for an autonomous vehicle (AV) subject to two potential breakdown events:\ 1. ***Partial breakdown*** or ***Mode 1***, in which the vehicle is damaged but can keep moving toward its target;\ 2.
***Total breakdown*** or ***Mode 2***, in which the vehicle is damaged and needs to move toward a repair depot.\ The classical formulation of the optimal path planning problem consists of an agent that tries to optimally travel from one point to another while obeying an ordinary differential equation describing its motion. In our analysis, we will consider AVs moving on a stretch of road modeled by a rectangle $\Omega=[0,L]\times[0,S]$, where $L$ represents the length and $S$ is the width. The vehicle's dynamics are described by the ODE $$\dot{y}(t)= d(t)f(y(t)), \quad y(0)=x_0$$ where $d:[0,T]\to S^1$ is the direction of motion, $f$ is the speed and $y(t)$ is the trajectory starting at $x_0\in \Omega$. Each breakdown type is assumed to occur at some known rate and accrues some predetermined cost.\ Denote by $T_b$ the time at which the first partial breakdown occurs, by $T_G$ the time at which the AV would reach the target if no breakdown occurred, and set $T_1=\min\lbrace T_b, T_G\rbrace$; then the expected cost in Mode 1 is given by $$\begin{aligned} \label{C1} J_1(x, a(\cdot))=\mathbb{E}\Big[&\int_0^{T_1}K_1(y(s))+\lambda(y(s))R(y(s))ds \notag \\ &+\chi_{[T_b<T_G]}u_2(y(T_b)) \Big]\,, \end{aligned}$$ where $u_2$ is the value function in Mode 2, $K_1$ is the running cost, $\lambda>0$ is the rate at which the partial breakdown occurs (trajectory dependent) and $R$ is the repair cost. If a partial breakdown occurs before the AV reaches the goal, then the last term in [\[C1\]](#C1){reference-type="eqref" reference="C1"} is nonzero and the AV switches to Mode 2.
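For intuition, the Mode 1 cost [\[C1\]](#C1){reference-type="eqref" reference="C1"} can be approximated by direct simulation. The sketch below is an illustration we add here, not part of the numerics in [@Cornell1]: it assumes spatially constant data, so the Monte Carlo estimate can be cross-checked against the closed form $\mathbb{E}[\min(T_b,T_G)]=(1-e^{-\lambda T_G})/\lambda$; all names and parameter values are hypothetical.

```python
import math
import random

def mc_mode1_cost(lam, K1, R, u2_const, T_G, n_samples=200_000, seed=0):
    """Monte Carlo estimate of the Mode 1 expected cost J_1.

    Hypothetical simplification: constant breakdown rate lam, constant
    running cost K1, constant repair cost R, a constant Mode 2 value
    u2_const, and a fixed goal-arrival time T_G along the chosen path.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        T_b = rng.expovariate(lam)      # first partial-breakdown time
        T1 = min(T_b, T_G)              # stop at breakdown or at the goal
        cost = (K1 + lam * R) * T1      # running cost accrued up to T1
        if T_b < T_G:                   # breakdown first: switch to Mode 2
            cost += u2_const
        total += cost
    return total / n_samples

def exact_mode1_cost(lam, K1, R, u2_const, T_G):
    """Closed form of the same expectation, used as a cross-check."""
    p = 1.0 - math.exp(-lam * T_G)      # P(T_b < T_G); E[min(T_b,T_G)] = p/lam
    return (K1 + lam * R) * p / lam + u2_const * p
```

For instance, with $\lambda=K_1=R=T_G=1$ and a constant Mode 2 value equal to $2$, both functions return approximately $4(1-e^{-1})\approx 2.53$.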
Similarly, let $T_B$ be the time of the first total breakdown, $T_D$ the time the AV would reach the depot if no breakdown occurred, and set $T_2=\min\lbrace T_B, T_D \rbrace$; then the expected cost in Mode 2 is given by $$\begin{aligned} \label{C2} J_2(x, a(\cdot))=\mathbb{E}\Big[&\int_0^{T_2}K_2(y(s))ds \notag \\ &+\chi_{[T_B<T_D]}\Big( R(y(T_B))+u_1(y(T_B))\Big)\notag\\ &+\chi_{[T_B\geq T_D]}\Big(R(y(T_D))+u_1(y(T_D))\Big) \Big]\,,\end{aligned}$$ where $K_2$ is the running cost. The last two terms in [\[C2\]](#C2){reference-type="eqref" reference="C2"} again encode the optimal cost-to-go whenever a mode switch occurs. Let $G \subset \Omega$ denote the goal locations for admissible AV trajectories and let $D \subset \Omega$ denote the location of the repair depots that the AV can navigate to after undergoing a partial breakdown. According to the derivation described in [@Cartee], the value functions $u_1$ and $u_2$ solve the following system of eikonal equations: $$\label{eikonal-system-original} \begin{cases} |\nabla u_1| f_1 + \phi_1 (u_1-u_2) = K_1 + \lambda R, \quad &x\in \Omega \\ |\nabla u_2| f_2 + \phi_2(u_2 -u_1) = K_2 + \phi_2 R, \quad &x \in \Omega\\ u_1 =0, \quad &x \in G\\ u_2=R+ u_1, \quad &x \in D \end{cases}$$ Observe that the $i$-th equation in the system depends only on $\nabla u_i$, and not on $\nabla u_j$ for $i\neq j$; therefore we refer to $\eqref{eikonal-system-original}$ as a weakly coupled system. The functions $\lambda, \phi_2:\Omega \rightarrow (0,\infty)$ correspond to the rates of the exponential random variables modeling the time until another complete breakdown, and they are trajectory dependent; the function $\phi_1:\Omega \rightarrow (0,\infty)$ denotes the rate at which partial breakdowns occur. The asymmetry in the notation for the rates of the exponential random variables and the rate for the partial breakdowns comes from the necessity of giving a consistent notation to the coupling matrix.
The functions $K_i:\Omega \rightarrow [0,+\infty)$ denote the running costs associated with the cost functionals, while $f_i:\Omega\to [0,+\infty)$ are the space-dependent speeds. # Viscosity solutions and comparison principle In this section, we prove a comparison principle for viscosity solutions of the weakly coupled system [\[eikonal-system-original\]](#eikonal-system-original){reference-type="eqref" reference="eikonal-system-original"}. We start by recalling the definition of viscosity solution introduced in [@Visc], in the context of the system we are considering. **Definition 1**. *A continuous function $u:\Omega\to\mathbb{R}^2$, with $u(x)=(u_1(x),u_2(x))$, is a *viscosity subsolution* (resp. *supersolution*) of [\[eikonal-system-original\]](#eikonal-system-original){reference-type="eqref" reference="eikonal-system-original"} if, for each $i=1,2$ and every test function $\varphi$ of class $\mathcal{C}^1$ such that $u_i-\varphi$ attains a local maximum (resp. local minimum) at $x$, the $i$-th of the following inequalities holds at $x$: $$\label{ESV} \begin{aligned} |\nabla \varphi | f_1+ \phi_1u_1 &\leq K_1 + \lambda R + \phi_1 u_2, \\ |\nabla\varphi| f_2 + \phi_2u_2 &\leq K_2 + \phi_2 R+ \phi_2u_1, \end{aligned}\quad (\hbox{resp.}~\geq~)$$ A function is called a *viscosity solution* of [\[eikonal-system-original\]](#eikonal-system-original){reference-type="eqref" reference="eikonal-system-original"} if it is both a viscosity subsolution and a supersolution.* A standard assumption in the literature on weakly coupled systems of Hamilton-Jacobi equations is the so-called monotonicity condition, i.e. the $i$-th Hamiltonian is increasing in $u_i$ and decreasing in $u_j$ for $j\neq i$. Such a condition is clearly satisfied by [\[eikonal-system-original\]](#eikonal-system-original){reference-type="eqref" reference="eikonal-system-original"}. Moreover, the Hamiltonians are locally Lipschitz continuous, coercive, and strictly convex in the gradient of $u$.
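The monotonicity condition can be made concrete by writing the Hamiltonians out explicitly. The snippet below is an illustration we add here, with hypothetical constant data: it evaluates $H_1=|p|f_1+\phi_1(u_1-u_2)-(K_1+\lambda R)$ and $H_2=|p|f_2+\phi_2(u_2-u_1)-(K_2+\phi_2 R)$ and exhibits the claimed monotone dependence on $u_1$ and $u_2$, which holds because $\phi_1,\phi_2>0$.

```python
def H1(p_norm, u1, u2, f1=1.0, phi1=2.0, K1=1.0, lam=1.0, R=1.0):
    """First Hamiltonian with hypothetical constant coefficients:
    H1 = |p| f1 + phi1*(u1 - u2) - (K1 + lam*R)."""
    return p_norm * f1 + phi1 * (u1 - u2) - (K1 + lam * R)

def H2(p_norm, u1, u2, f2=0.5, phi2=3.0, K2=1.0, R=1.0):
    """Second Hamiltonian: H2 = |p| f2 + phi2*(u2 - u1) - (K2 + phi2*R)."""
    return p_norm * f2 + phi2 * (u2 - u1) - (K2 + phi2 * R)

# Monotonicity: H1 is increasing in u1 and decreasing in u2, and H2 is
# increasing in u2 and decreasing in u1, since phi1, phi2 > 0.
```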
On the other hand, the coupling matrix for the system is given by $$\label{coup} D(x)=\begin{pmatrix} \phi_1 & -\phi_1\\ -\phi_2 & \phi_2 \end{pmatrix}$$ which is degenerate (the elements of each row sum to zero) and irreducible for every $x\in\Omega$; irreducibility simply means that the coupling is not trivial. Degenerate couplings were avoided for a long time, since they represented an obstruction to achieving uniqueness of the solution (cf. [@Engler]), until it was realized that this type of coupling induces pathologies on the Aubry set, which behaves as a boundary in the interior of the domain [@Roquejoffre; @Camilli1]. This set, in the particular case of [\[eikonal-system-original\]](#eikonal-system-original){reference-type="eqref" reference="eikonal-system-original"}, is given by $$\mathcal{A}\doteq\lbrace x\in\Omega ~:~ K_1(x)+K_2(x)+(\lambda(x)+\phi_2(x))R(x)=0 \rbrace$$ (since the right-hand side of both equations is non-negative) and we may have $\mathcal{A}\setminus \partial \Omega \neq \emptyset$. Therefore, to prove a comparison principle, and consequently to obtain uniqueness of the solution, it is essential to add boundary conditions on $\mathcal{A}\cup \partial \Omega$. In the following, we use the notation $USC(\Omega)$ and $LSC(\Omega)$ for the sets of upper semicontinuous and lower semicontinuous functions on $\Omega$. **Theorem 1**. *\[Comparison Principle\] Let $\phi_1, \phi_2, \lambda, R:\Omega\to \mathbb{R}$ be continuous functions. Let $u\in USC(\Omega)$ and $v\in LSC(\Omega)$ be, respectively, a bounded subsolution and a bounded supersolution of [\[eikonal-system-original\]](#eikonal-system-original){reference-type="eqref" reference="eikonal-system-original"}.
Assume that $u(x)\leq v(x)$ for every $x\in \mathcal{A}\cup \partial \Omega$; then $u(x)\leq v(x)$ on $\bar\Omega$.* *Proof.* Consider $\delta\in(0,1)$ and set $$M_\delta\doteq \sup_{[0,L]\times[0,S]}\sup_{i=1,2}\lbrace \delta u_i-v_i \rbrace.$$ If $M_\delta\leq 0$ there is nothing to prove, so we assume it is strictly positive. By compactness the supremum is a maximum, attained at a point $x_0\in[0,L]\times[0,S]$ for some $i_0$. Denote by $\mathcal{I}$ the set of indices for which the maximum is attained. We distinguish three cases.\ **Case 1.** $\mathcal{I}=\lbrace 1, 2\rbrace$ and $x_0\in\mathcal{A}\cup \partial \Omega$. In this case we have $$\begin{aligned} M_\delta & =\frac{1}{2}\big[(\delta u_1-v_1)(x_0)+(\delta u_2-v_2)(x_0)\big]\\ &\leq\frac{1}{2}(\delta-1)\big(v_1(x_0)+v_2(x_0)\big)\\ &\leq(1-\delta)|v|_\infty.\end{aligned}$$ **Case 2.** $\mathcal{I}=\lbrace 1, 2\rbrace$ and $x_0\notin\mathcal{A}\cup\partial \Omega$. Since $x_0\notin\mathcal{A}$, we can assume without loss of generality that $K_1(x_0)+\lambda(x_0)R(x_0)\neq 0$. In this case, we proceed classically by considering $$\sup_{\Omega\times\Omega}\Big\{ \delta u_1(x)-v_1(y)-\frac{|x-y|^2}{2\varepsilon^2}-|x-x_0|^2\Big \}.$$ Let $(\bar{x},\bar{y})$ be a point at which this maximum is achieved. This value is clearly greater than or equal to $M_\delta$; moreover, the following properties are satisfied: $$\bar{x}, \,\bar{y} \to x_0\,\qquad \frac{|\bar{x}-\bar{y}|^2}{2\varepsilon^2},\,|\bar{x}-x_0|^2\to 0\quad \hbox{ as }\varepsilon\to 0.$$ Let $p_\varepsilon=\frac{\bar{x}-\bar{y}}{\varepsilon^2}$. Since $u_1$ is a viscosity subsolution of [\[eikonal-system-original\]](#eikonal-system-original){reference-type="eqref" reference="eikonal-system-original"}, we get $$\begin{aligned} \Big\vert {p_\varepsilon+2 (\bar{x} - x_0)}\Big \vert f_1(\bar{x})+\delta\phi_1(\bar{x})&(u_1(\bar{x})-u_2(\bar{x}))\\ &\leq \delta \big(K_1(\bar{x})+\lambda(\bar{x})R(\bar{x})\big).
\end{aligned}$$ On the other hand, since $v_1$ is a viscosity supersolution, $$\vert p_\varepsilon\vert f_1(\bar{y} )+\phi_1(\bar{y})(v_1(\bar{y})-v_2(\bar{y}))\geq\big(K_1(\bar{y})+\lambda(\bar{y})R(\bar{y})\big).$$ By subtracting the two inequalities we first get $$\begin{aligned} &\Big\vert {p_\varepsilon+2 (\bar{x} - x_0)}\Big \vert f_1(\bar{x})- \vert p_\varepsilon\vert f_1(\bar{y} )\\ &=\Big(\Big\vert {p_\varepsilon+2 (\bar{x} - x_0)}\Big \vert -\vert p_\varepsilon\vert\Big) f_1(\bar{x} )+\vert p_\varepsilon\vert \big( f_1(\bar{x} ) - f_1(\bar{y} )\big) \end{aligned}$$ which goes to $0$ as $\varepsilon\to 0$, and $$\begin{aligned} \delta\phi_1(\bar{x})&\big(u_1(\bar{x})-u_2(\bar{x})\big)-\phi_1(\bar{y})\big(v_1(\bar{y})-v_2(\bar{y})\big)\\ &=\phi_1(\bar{x})\big(\delta u_1(\bar{x})-\delta u_2(\bar{x})-v_1(\bar{y})+v_2(\bar{y})\big)\\ &\qquad \qquad +(\phi_1(\bar{x})-\phi_1(\bar{y}))(v_1(\bar{y})-v_2(\bar{y}))\end{aligned}$$ where $$(\phi_1(\bar{x})-\phi_1(\bar{y}))(v_1(\bar{y})-v_2(\bar{y}))\to 0 \qquad \hbox{as }\quad \varepsilon\to 0.$$ Therefore we get $$\begin{aligned} \phi_1(\bar{x})&\big(\delta u_1(\bar{x})-\delta u_2(\bar{x})-v_1(\bar{y})+v_2(\bar{y})\big)\\ &\leq (\delta-1)\big(K_1(\bar{x})+\lambda(\bar{x})R(\bar{x})\big)+o_\varepsilon(1).\end{aligned}$$ Since $\delta u_j-v_j$ is upper semicontinuous for $j=1,2$, it follows that $$\label{diff} \limsup_{\varepsilon\to0}(\delta u_j(\bar{x})-v_j(\bar{y}) )\leq \delta u_j(x_0)-v_j(x_0)\leq M_\delta.$$ Hence $$-\phi_1(\bar{x})(\delta u_2(\bar{x})-v_2(\bar{y}))\geq -\phi_1(\bar{x})M_\delta+o_\varepsilon(1).$$ Moreover we have $\delta u_1(\bar{x})-v_1(\bar{y})\geq M_\delta$, therefore $$\phi_1(\bar{x})(\delta u_1(\bar{x})-v_1(\bar{y}))\geq\phi_1(\bar{x})M_\delta.$$ We can conclude that $$(\delta-1)\big(K_1(\bar{x})+\lambda(\bar{x})R(\bar{x})\big)+o_\varepsilon(1)\geq 0,$$ which leads to a contradiction as $\varepsilon\to 0$, since $x_0\notin\mathcal{A}$ and $\delta<1$. **Case 3.** $\mathcal{I}\neq\lbrace 1, 2\rbrace$.
We argue almost as in Case 2, but a sharper estimate is needed for the index not in $\mathcal{I}$. Indeed, assume that $\mathcal{I}=\lbrace 1\rbrace$. The previous computations still hold, except that [\[diff\]](#diff){reference-type="eqref" reference="diff"}, for $j=2$, becomes $$\limsup_{\varepsilon\to 0} (\delta u_2(\bar{x})-v_2(\bar{y}) )\leq \delta u_2(x_0)-v_2(x_0)\leq M_\delta-\eta$$ for some $\eta>0$. It follows that $$\phi_1(x_0)\eta\leq (\delta-1)\big(K_1(\bar{x})+\lambda(\bar{x})R(\bar{x})\big)+o_\varepsilon(1),$$ which again leads to a contradiction as $\varepsilon\to 0$. We conclude that the only possibility is $M_\delta \leq(1-\delta)|v|_\infty$, which, letting $\delta\to 1$, implies $M_1\leq0$. ◻ **Lemma 1**. *Let $u\in USC(\Omega)$ be a bounded viscosity subsolution of [\[eikonal-system-original\]](#eikonal-system-original){reference-type="eqref" reference="eikonal-system-original"}; then $u$ is Lipschitz continuous on $\Omega$.* *Proof.* Since $\Omega$ is a compact set, the functions $K_1, K_2, \phi_1,\phi_2, \lambda$ are bounded. Therefore $$|\nabla u_i|f_i(x)\leq C_i, \qquad i=1,2,$$ which implies that $u$ is Lipschitz. ◻ The comparison principle proved in Theorem [Theorem 1](#CP){reference-type="ref" reference="CP"} implies that if a solution exists, it must be unique. On the other hand, it is not clear which type of boundary conditions on $\partial\Omega$ and $\mathcal{A}$ ensure the existence of viscosity solutions. We borrow an example from [@Camilli1] to illustrate how the set $\mathcal{A}$ can affect the uniqueness of solutions for a weakly coupled system. Moreover, we provide boundary conditions on $\mathcal{A}$ for which such a system has no solution.\ **Example 1**. *Consider the one-dimensional problem $$\label{EX} \begin{cases} |u'_1(x)|+u_1(x)-u_2(x)=F(x),\quad &x\in(-1,1)\\ |u'_2(x)|+u_2(x)-u_1(x)=F(x),\quad &x\in(-1,1)\\ u_i(\pm 1)=0,\quad & i=1,2 \end{cases}$$ with $F(x)=2|x|$.
It is immediate to check that both $u_1(x)=u_2(x)= 1-x^2$ and $u_1(x)=u_2(x)=\min\lbrace 1-x^2, x^2-C\rbrace$, $C\in(0,1)$, are viscosity solutions of [\[EX\]](#EX){reference-type="eqref" reference="EX"}. Here the set $\mathcal{A}$ is given by $\lbrace 0\rbrace$; hence, without prescribing a boundary condition at $0$, we do not have uniqueness. If we add the condition $u_i(0)=-\frac{1}{2}$, this immediately selects the unique solution $u_1(x)=u_2(x)=\min\lbrace 1-x^2, x^2-\frac{1}{2}\rbrace$. On the other hand, for $u_i(0)=2$ no solution satisfies [\[EX\]](#EX){reference-type="eqref" reference="EX"}.* # Numerical Solution In this section we consider the numerical solution of the weakly coupled system [\[eikonal-system-original\]](#eikonal-system-original){reference-type="eqref" reference="eikonal-system-original"} using the finite element method (FEM). The numerical scheme solves a sequence of systems augmented by artificial viscosity terms. Formally, we have the problem: Find $u^\varepsilon_1$ and $u^\varepsilon_2$ such that $$\label{eikonal-system-augmented} \begin{aligned} -\varepsilon \Delta u_1^\varepsilon + |\nabla u^\varepsilon_1| f_1 + \phi_1 u^\varepsilon_1 &= K_1 + \lambda R + \phi_1u^\varepsilon_2, \\ -\varepsilon \Delta u^\varepsilon_2 + |\nabla u^\varepsilon_2| f_2 + \phi_2u^\varepsilon_2 &= K_2 + \phi_2 R+ \phi_2u^\varepsilon_1, \end{aligned}$$ for $x \in \Omega$ with $u^{\varepsilon}_1 = 0$ for $x \in G$ and $u^{\varepsilon}_2 = R + u^{\varepsilon}_1$ for $x \in D$, as $\varepsilon \rightarrow 0$. Although the viscosity term provides some mechanism for control of the solution, we are still left to deal with the original nonlinearity, which is handled through linearization as in [@Yang]. Let $u^{\varepsilon_n}_k$ denote the solution of [\[eikonal-system-augmented\]](#eikonal-system-augmented){reference-type="eqref" reference="eikonal-system-augmented"} for the $n$-th value of $\varepsilon$, where $k=1,2$.
Linearization of the absolute value term results in the approximation $$\begin{aligned} \label{eqn:linearization} |\nabla u^{\varepsilon_n}_{k}| & \approx |\nabla u^{\varepsilon_{n-1}}_{k}| +\frac{\nabla u^{\varepsilon_{n-1}}_{k}}{|\nabla u^{\varepsilon_{n-1}}_{k}|} \left(\nabla u^{\varepsilon_{n}}_{k}- \nabla u^{\varepsilon_{n-1}}_{k}\right) \nonumber \\ &\approx \frac{\nabla u^{\varepsilon_{n-1}}_{k} }{|\nabla u^{\varepsilon_{n-1}}_{k}|} \nabla u^{\varepsilon_n}_{k},\end{aligned}$$ for $\nabla u ^{\varepsilon_{n-1}}_{k}$ not identically zero. Inserting the approximation [\[eqn:linearization\]](#eqn:linearization){reference-type="eqref" reference="eqn:linearization"} into [\[eikonal-system-augmented\]](#eikonal-system-augmented){reference-type="eqref" reference="eikonal-system-augmented"} leads to the following iteration. Let $u^{\varepsilon_0}_1$ and $u^{\varepsilon_0}_2$ denote initial guesses and let $\beta^{\varepsilon_{n-1}}_k$ denote the normalization of $\nabla u^{\varepsilon_{n-1}}_{k}$, then for $n = 1,2,\dots$, we solve $$\label{eikonal-system-w-linearization} \begin{aligned} -\varepsilon_n \Delta u^{\varepsilon_n}_1 + \beta^{\varepsilon_{n-1}}_1\cdot \nabla u^{\varepsilon_n}_1 f_1 + \phi_1 u^{\varepsilon_n}_1 &= K_1 + \lambda R + \phi_1u^{\varepsilon_n}_2 \\ -\varepsilon_n \Delta u^{\varepsilon_n}_2 + \beta^{\varepsilon_{n-1}}_2 \cdot\nabla u^{\varepsilon_n}_2 f_2 + \phi_2u^{\varepsilon_n}_2 &= K_2 + \phi_2 R+ \phi_2u^{\varepsilon_n}_1 \end{aligned}$$ for $x \in \Omega$ with $u^{\varepsilon_n}_1 = 0$ for $x \in G$ and $u^{\varepsilon_n}_2 = R + u^{\varepsilon_n}_1$ for $x \in D$. Equation [\[eikonal-system-w-linearization\]](#eikonal-system-w-linearization){reference-type="eqref" reference="eikonal-system-w-linearization"} now resembles a convection-diffusion problem for which a number of numerical schemes exist. In this work, we formulate the discrete system via a streamline diffusion approximation. 
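To make the homotopy-plus-linearization loop concrete, the following self-contained sketch (our illustration, not the authors' code) applies the same idea to the scalar one-dimensional model problem $|u'|=1$ on $(0,1)$ with $u(0)=u(1)=0$, whose viscosity solution is $\min(x,1-x)$. For each $\varepsilon$ in a decreasing schedule, the convection direction $\beta=\operatorname{sign}(u')$ is frozen from the previous iterate and the resulting linear convection-diffusion problem is solved with an upwind finite difference scheme, a one-dimensional stand-in for the streamline diffusion stabilization used in the paper.

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system (a: sub-, b: main, c: super-diagonal)."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def solve_1d_eikonal(N=101, eps_schedule=(0.1, 0.02, 0.005), sweeps=3):
    """Vanishing viscosity + linearization for |u'| = 1, u(0) = u(1) = 0.

    For each eps, the linearized problem  -eps u'' + beta u' = 1  with
    beta = sign(u'_prev) is discretized with upwind differences and solved;
    decreasing eps drives the iterates toward min(x, 1 - x)."""
    h = 1.0 / (N - 1)
    u = [0.0] * N
    for eps in eps_schedule:
        for _ in range(sweeps):
            n = N - 2
            a, b, c, d = [0.0] * n, [0.0] * n, [0.0] * n, [1.0] * n
            for k in range(n):
                i = k + 1
                slope = (u[i + 1] - u[i - 1]) / (2 * h)
                s = 1 if slope > 0 else (-1 if slope < 0 else 0)
                diff = eps / h ** 2
                a[k], b[k], c[k] = -diff, 2 * diff, -diff
                if s > 0:            # upwind: backward difference
                    a[k] -= 1.0 / h
                    b[k] += 1.0 / h
                elif s < 0:          # upwind: forward difference
                    b[k] += 1.0 / h
                    c[k] -= 1.0 / h
            u = [0.0] + thomas(a, b, c, d) + [0.0]
    return u
```

The computed iterates approach $\min(x,1-x)$ up to a smoothing of width comparable to $\varepsilon$ near the kink at $x=1/2$, mirroring the behavior expected of the two-dimensional scheme.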
## Streamline Diffusion For the discretization of equation [\[eikonal-system-w-linearization\]](#eikonal-system-w-linearization){reference-type="eqref" reference="eikonal-system-w-linearization"} we use a streamline diffusion approximation [@Brooks1; @Johnson] and in our derivation we follow the setting of [@Bank1]. We seek an approximate solution which is a piecewise linear, continuous function on a given simplicial mesh $\mathcal{T}_h$ covering exactly the computational domain $\Omega$. Namely, letting $T$ denote a generic element we have: $\overline{\Omega} = \bigcup_{T\in\mathcal{T}_h} T$. Further, assume that the sets $G$ and $D$ are partitioned by the mesh exactly. Then for $\Gamma$ denoting the set $G$ or $D$, the piecewise linear finite element space containing our solutions is given by $$\begin{aligned} V_\Gamma(h) := \left\lbrace v \in C(\overline{\Omega}) \; | \; v|_T \in P_1(T), \; \forall T \in \mathcal{T}_h, \; v|_{\Gamma} = g \right\rbrace\end{aligned}$$ for the appropriate function $g$, and where $P_1(T)$ denotes the space of polynomials of degree less than or equal to one over $T$. Note that the solution spaces for $u_1$ and $u_2$ differ slightly due to the sets $G$ and $D$, and are given by $V_G(h)$ and $V_D(h)$, respectively. For ease of presentation in the derivation of the weak form of [\[eikonal-system-w-linearization\]](#eikonal-system-w-linearization){reference-type="eqref" reference="eikonal-system-w-linearization"}, we suppress the superscript notation as well as the iteration variable $n$. The test functions for the streamline diffusion formulation have the form $v + \theta h \nu\beta\cdot \nabla v$ for $v \in V_\Gamma(h)$, where $\nu = (|\beta|^2 +1)^{-1/2}$. The variable $h$ denotes the mesh size and $\theta$ is a positive constant.
Consider the first equation in [\[eikonal-system-w-linearization\]](#eikonal-system-w-linearization){reference-type="eqref" reference="eikonal-system-w-linearization"}, then for $v_1 \in V_D(h)$ multiplying by a test function and integrating yields $$\label{eqn:weak-form-derivation-1} \begin{aligned} \left(-\varepsilon\Delta u_1 + \beta_1 \cdot \nabla u_1 f_1 + \phi_1 u_1, v_1 + \theta h \nu\beta_1\cdot \nabla v_1\right)& \\ &\hspace*{-2.5in}= \left(\phi_1u_2, v_1 + \theta h \nu\beta_1\cdot \nabla v_1\right)\\ &\hspace*{-2in}+ \left(K_1 + \lambda R, v_1 + \theta h \nu\beta_1\cdot \nabla v_1\right) \end{aligned}$$ where $(u,v) = \int_{\Omega} uv d\omega$. We remark that because $v \in V_\Gamma(h)$, the components of $\nabla v$ are constant on each element and therefore integrations of the form $(w, \theta h \nu\beta\cdot \nabla v)$ are considered elementwise. Working from right to left, define the linear functional $l_1(\cdot)$ by $$\begin{aligned} \label{eqn:weak-form-derivation-2} l_1(v_1) := \left(K_1 + \lambda R, v_1 + \theta h \nu\beta_1\cdot \nabla v_1\right)\end{aligned}$$ and the bilinear form $b_1(\cdot, \cdot)$ by $$\begin{aligned} \label{eqn:weak-form-derivation-3} b_1(u_2,v_1) := \left(\phi_1u_2, v_1 + \theta h \nu\beta_1\cdot \nabla v_1\right).\end{aligned}$$ The final term can be split into the sum of the viscosity term and its remainder. 
Then for $\beta_1 = ([\beta_1]_1, [\beta_1]_2)^T$ the introduction of the matrix $$\begin{aligned} {\bf B}_1 = \begin{pmatrix} [\beta_1]_1^2 & [\beta_1]_1[\beta_1]_2 \\ [\beta_1]_1[\beta_1]_2& [\beta_1]_2^2 \end{pmatrix}f_1\end{aligned}$$ allows us to represent the next term as $$\label{eqn:weak-form-derivation-4} \begin{aligned} \left(\beta_1 \cdot \nabla u_1 f_1 + \phi_1 u_1, v_1 + \theta h \nu\beta_1\cdot \nabla v_1\right)&\\ &\hspace{-2in}= \theta h \nu \left(\nabla u_1 ,{\bf B}_1 \nabla v_1\right) + \left(\beta_1\cdot \nabla u_1 f_1 , v_1\right)\\ + \left( \phi_1 u_1, v_1 + \theta h \nu\beta_1\cdot \nabla v_1\right). \end{aligned}$$ Performing integration by parts on the viscosity term yields $$\label{eqn:weak-form-derivation-5} \begin{aligned} -\varepsilon \left( \Delta u_1, v_1 + \theta h \nu\beta_1\cdot \nabla v_1\right)& \\&\hspace*{-1.5in}= \varepsilon(\nabla u_1, \nabla v_1) - \varepsilon\left(\nabla u_1, v_1+ \theta h \nu\beta_1\cdot \nabla v_1\right)_{\partial \Omega } \end{aligned}$$ where $(\nabla u , v)_\Gamma = \int_{\Gamma} v \nabla u \cdot n\, dS$. We note that, due to the definition of $V_{\Gamma}(h)$, both $\nabla v_1$ and $\beta_1$ are constant on each element, so all of their derivatives vanish elementwise. In this case, integration by parts results in an additional term due to the set $D$ not necessarily being the entire boundary of the domain (or a part of it).
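The matrix ${\bf B}_1$ is just the rank-one form $f_1\,\beta_1\beta_1^{T}$, chosen so that $\theta h\nu(\beta_1\cdot\nabla u_1\, f_1,\,\beta_1\cdot\nabla v_1)=\theta h\nu(\nabla u_1,{\bf B}_1\nabla v_1)$. A quick numeric sanity check of this identity (an illustration we add, with arbitrary sample vectors):

```python
def B_matrix(beta, f):
    """Streamline-diffusion matrix B = f * beta beta^T for beta in R^2."""
    b1, b2 = beta
    return [[f * b1 * b1, f * b1 * b2],
            [f * b1 * b2, f * b2 * b2]]

def quad_form(grad_u, B, grad_v):
    """Evaluate grad_u^T B grad_v."""
    return sum(grad_u[i] * B[i][j] * grad_v[j]
               for i in range(2) for j in range(2))

# Identity being checked: (beta . grad_u) * f * (beta . grad_v)
# equals grad_u^T B grad_v for the rank-one matrix B above.
```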
By letting ${\bf K}_1 = {\bf I} \varepsilon + \theta h \nu {\bf B}_1$ we can define the bilinear form $a_1(\cdot, \cdot)$ as $$\label{eqn:weak-form-derivation-6} \begin{aligned} a_1(u_1,v_1) &:= \left(\nabla u_1 ,{\bf K}_1 \nabla v_1\right) + \left(\beta_1\cdot \nabla u_1 f_1 , v_1\right)\\ &\hspace*{.25in}+ \left( \phi_1 u_1, v_1 + \theta h \nu\beta_1\cdot \nabla v_1\right) \\ &\hspace*{.25in}- \varepsilon\left(\nabla u_1, v_1+ \theta h \nu\beta_1\cdot \nabla v_1\right)_{\partial \Omega } \end{aligned}$$ which represents the combination of [\[eqn:weak-form-derivation-4\]](#eqn:weak-form-derivation-4){reference-type="eqref" reference="eqn:weak-form-derivation-4"} and [\[eqn:weak-form-derivation-5\]](#eqn:weak-form-derivation-5){reference-type="eqref" reference="eqn:weak-form-derivation-5"}. Using the linear and bilinear forms defined above, equation [\[eqn:weak-form-derivation-1\]](#eqn:weak-form-derivation-1){reference-type="eqref" reference="eqn:weak-form-derivation-1"} becomes $$\begin{aligned} a_1(u_1,v_1) - b_1(u_2,v_1) &= l_1(v_1), \quad \forall v_1 \in V_D(h).\end{aligned}$$ This provides the weak form for the first equation in [\[eikonal-system-w-linearization\]](#eikonal-system-w-linearization){reference-type="eqref" reference="eikonal-system-w-linearization"}. By following the same derivation and introducing analogous bilinear and linear forms we arrive at the weak form of [\[eikonal-system-w-linearization\]](#eikonal-system-w-linearization){reference-type="eqref" reference="eikonal-system-w-linearization"}: find $(u_1, u_2) \in (V_D, V_G)$ such that $$\label{eqn:coupled-weak-form} \begin{aligned} a_1(u_1,v_1) - b_1(u_2,v_1) &= l_1(v_1), \quad \forall v_1 \in V_D(h) \\ a_2(u_2,v_2) - b_2(u_1,v_2) &= l_2(v_2), \quad \forall v_2 \in V_G(h). \end{aligned}$$ The system [\[eqn:coupled-weak-form\]](#eqn:coupled-weak-form){reference-type="eqref" reference="eqn:coupled-weak-form"} then represents our finite element formulation for a given $\varepsilon$. 
This results in a single linear system which solves for the $\varepsilon$-dependent $u_1$ and $u_2$ simultaneously using the algebraic multilevel solver Multigraph 2.1 (cf. [@Bank2; @Bank3]). We remark that this is in contrast to the value iteration approach in [@Cornell1], which iterates between each equation in the coupled system. In our setting, the only iteration is due to decreasing $\varepsilon$. # Numerical Example We consider the road scenario depicted in Figure [1](#fig:road-scene){reference-type="ref" reference="fig:road-scene"}. Here, a pair of disabled vehicles are present on the shoulder of the road due to a collision. ![Road scenario](road-scene){#fig:road-scene} The assumption is that a breakdown is more likely in regions near the pair of disabled vehicles, due to potential debris etc., which the AV must account for in navigating to the goal location indicated by the green dot; in this case, we assume the goal location and repair depot coincide. This increased likelihood of a breakdown is represented in the model by the function $\phi(x)$ given in Figure [2](#fig:gaussian){reference-type="ref" reference="fig:gaussian"}. ![Contour plot of the spatially dependent likelihood of breakdown function $\phi(x)$ associated with the road layout in Figure [1](#fig:road-scene){reference-type="ref" reference="fig:road-scene"}.](gaussian){#fig:gaussian} For the above scenario we define $\Omega = [0,2] \times [0,1]$, let $G = D = (1.9, 0.5)$, and uniformly refine the mesh a total of seven times. We set $f_1 =1$, $f_2 = 0.2$, $R = 1$, $K_1=1$, $K_2 = 1$ and define $\phi(x) = 7 e^{-5(x-1)^2 - 5y^2}$, which is plotted above. Additionally, we take $\lambda = 1$ and $\phi_2 (x) = 3$. Figures [3](#fig:u1){reference-type="ref" reference="fig:u1"} and [4](#fig:u2){reference-type="ref" reference="fig:u2"} display the contour plots associated with the value functions which correspond to vehicle operation in modes 1 and 2, respectively.
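For reproducibility, the problem data of this example can be restated directly in code. This is our restatement of the values given above; the name `phi_rate` is ours, and we take $\phi$ to play the role of the partial-breakdown rate (an assumption, since the text writes it simply as $\phi(x)$).

```python
import math

# Domain [0, 2] x [0, 1]; goal and repair depot coincide at (1.9, 0.5).
L_len, S_wid = 2.0, 1.0
goal = depot = (1.9, 0.5)

# Speeds, repair cost, running costs, and constant rates from the text.
f1, f2 = 1.0, 0.2
R, K1, K2 = 1.0, 1.0, 1.0
lam, phi2 = 1.0, 3.0

def phi_rate(x, y):
    """Spatially varying breakdown likelihood, peaked near the disabled
    vehicles at (1, 0): phi(x, y) = 7 exp(-5 (x-1)^2 - 5 y^2)."""
    return 7.0 * math.exp(-5.0 * (x - 1.0) ** 2 - 5.0 * y ** 2)
```

Note that with these strictly positive costs and rates the Aubry set of the system is empty, as required for the simulations.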
A trajectory from any starting point $(x_0, y_0)$ is obtained by moving in the direction of the gradient. The contours show that the optimal trajectories move away from the disabled vehicles as expected. ![Value function $u_1$ corresponding to normal operation.](u1-gaussian){#fig:u1} ![Value function $u_2$ which corresponds to operation after a partial breakdown.](u2-guassian){#fig:u2} # Conclusions and future work In this work, we considered the model of optimal path planning with random breakdowns introduced in [@Cornell1] and analyzed it from a theoretical point of view. We proved a comparison principle for viscosity solutions of the weakly coupled system of eikonal equations for the value functions. In particular, we described how to bypass the non-trivial degenerate coupling condition [\[coup\]](#coup){reference-type="eqref" reference="coup"}. Then we showed an example of how the lack of boundary conditions on the Aubry set can compromise the uniqueness of the solution. Finally, in the same example, we provided boundary conditions for which the system does not have solutions. It still remains an open question how to choose boundary conditions to ensure the existence of a solution. Indeed, the few existence results proved for weakly coupled systems require that the coupling matrix is non-degenerate [@Engler], that the Aubry set is empty [@Camilli], or that the domain is a torus [@Camilli1]. Future directions may also include extending the model in [@Cornell1] to topologically more challenging domains, such as networks, assuming not only that the vehicle is subject to breakdowns but also that the speed varies because of heterogeneous road conditions or the presence of lights at the nodes.
Numerically, this work took advantage of finite element schemes used in convection-diffusion problems by iteratively solving a linearized form of the system augmented with artificial viscosity [\[eikonal-system-w-linearization\]](#eikonal-system-w-linearization){reference-type="eqref" reference="eikonal-system-w-linearization"}; specifically, the streamline diffusion approximation was used as a means of obtaining stable numerical solutions at each iteration. A number of schemes exist for such problems and the exploration of their utility in the context of the coupled system considered in this paper is one potential direction for future work. [^1]: \*The research of Ludmil Zikatanov is based upon work supported by and while serving at the National Science Foundation. Any opinion, findings, and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the views of the National Science Foundation. [^2]: $^{1}$ Department of Mathematics and Statistics, Queen's University, Kingston, ON, Canada `maria.chiri@queensu.ca` [^3]: $^{2}$ Applied Research Laboratory, The Pennsylvania State University, University Park, PA, USA `kdc168@psu.edu` [^4]: $^{3}$ National Science Foundation, Alexandria, VA, USA `lzikatan@nsf.gov`
--- abstract: | A matrix optimization problem over an uncertain linear system on finite horizon (abbreviated as MOPUL) is studied, in which the uncertain transition matrix is regarded as a decision variable. This problem is in general NP-hard. By using the given reference values of system outputs at each stage, we develop a polynomial-time solvable semidefinite programming (SDP) approximation model for the problem. The upper bound of the cumulative error between reference outputs and the optimal outputs of the approximation model is theoretically analyzed. Two special cases associated with specific applications are considered. The quality of the SDP approximate solutions in terms of feasibility and optimality is also analyzed. Results of numerical experiments are presented to show the influences of perturbed noises at reference outputs and control levels on the performance of SDP approximation. author: - "Jintao Xu [^1]" - "Shu-Cherng Fang [^2]" - "Wenxun Xing[^3]" title: "Semidefinite Programming Approximation for A Matrix Optimization Problem over An Uncertain Linear System [^4] " --- **Keywords** Matrix optimization, Semidefinite programming, Uncertain linear system, NP-hard, Approximation model\ **Mathematics Subject Classification (2020)** 90C22 90C26 90C30 90C59 93C05 # Introduction {#section:1} The discrete-time uncertain linear system is widely studied in control theory [@Kothare1996; @Cohen1998; @Cuzzola2002; @Duan2006; @Heemels2010; @Cairano2016; @Tripathy2017]. The uncertainty may come from the uncertain parameter matrix associated with the given constraint set such as a convex hull [@Kothare1996; @Cuzzola2002; @Duan2006; @Heemels2010; @Blanco2010] or other settings [@Cohen1998; @Tripathy2017]. Robust optimization models are often adopted to deal with the uncertain parameters [@Kothare1996; @Wan2003; @Li2010; @Parsi2022]. 
However, in many scenarios such as the linear model predictive control (MPC) for optimal tracking [@Camacho2007; @Alessio2009], COVID-19 pandemic optimal control [@Xu2023], Markov chain estimation, and enterprise input-output analysis [@Liu2020] (to be described in Section 2), we face a new class of matrix optimization problems that regard the uncertain transition matrix as a decision variable. In this paper, we consider the following matrix optimization problem over an uncertain linear system on finite horizon ([\[MOPUL\]](#MOPUL){reference-type="ref" reference="MOPUL"}): $$\begin{aligned} \begin{split}\label{MOPUL} \min_{A, U, \omega}&~~ \lambda_{1}f_{1}\left(A\right)+\lambda_{2}f_{2}\left(U\right)+\lambda_{3}f_{3}\left(\omega\right)\nonumber\\ {\rm s.t.} &~~x_{t}=Ax_{t-1}+Bu_{t-1},~~t=1, 2, \ldots, N,\\ &~~y_{t}=Cx_{t},~~t=0, 1, \ldots, N,\\ &~~\sum_{t=1}^{N}\left\Vert y_{t}-r_{t}\right\Vert_{2}\leq \omega,\\ &~~\left(A, U, \omega\right)\in\mathcal{S}, \end{split}\tag{MOPUL}\end{aligned}$$ where $A\in\mathbb{R}^{n\times n}$, $U\coloneqq(u_{0}, u_{1}, \dots, u_{N-1})\in\mathbb{R}^{m\times N}$ and $\omega\in\mathbb{R}_+$ are decision variables, $\{\lambda_i\}_{i=1}^3\subseteq\mathbb{R}_+$, $N\in\mathbb{N}_+$, $B\in\mathbb{R}^{n\times m}$, $C\in\mathbb{R}^{p\times n}$, $\{r_t\}_{t=1}^N\subseteq\mathbb{R}^p$ and $\mathcal{S}\subseteq\mathbb{R}^{n\times n}\times\mathbb{R}^{m\times N}\times\mathbb{R}_+$ are given. $\{f_i(\cdot)\}_{i=1}^3$ are assumed to be semidefinite representable (SD representable) functions [@BenTal2001; @Xing2020], and $\mathcal{S}$ is assumed to be an SD representable set [@BenTal2001; @Xing2020]. In addition, $C$ is assumed to be of full column rank. In problem ([\[MOPUL\]](#MOPUL){reference-type="ref" reference="MOPUL"}), $f_1(A)$, $f_2(U)$ and $f_3(\omega)$ are the given objective functions of decision variables $A$, $U$ and $\omega$ with given weights of $\lambda_1$, $\lambda_2$ and $\lambda_3$, respectively. 
For example, $f_1(A)=\Vert A-A^{\rm r}\Vert_F$, where $A^{\rm r}$ is a given reference matrix, $f_2(U)=\sum_{t=1}^{N-1}\Vert u_t-u_{t-1}\Vert_2^2$ as in [@Alessio2009; @Camacho2007], and $f_3(\omega)=\omega$. Moreover, $A$, $B$, $C$ and $U$ are the transition matrix, fixed parameter matrices and control, respectively, of the discrete-time linear system on a finite horizon $N$ described by the first two constraints. $x_t\in\mathbb{R}^n$ is the $n$ dimensional system state with a given initial state $x_0$, and $y_{t}\in\mathbb{R}^p$ is the $p$ dimensional system output at the $t$th stage, $t=1, 2, \ldots, N$. $r_{t}$ carries the reference value of the system output $y_t$ for each $t=1, 2, \ldots, N$. $\omega$ is the control level/threshold of the cumulative error between $\{y_t\}_{t=1}^N$ and $\{r_t\}_{t=1}^N$. The first two constraints are commonly seen in the control theory of discrete-time finite horizon linear systems. The difference is that the transition matrix $A$ is a decision variable in ([\[MOPUL\]](#MOPUL){reference-type="ref" reference="MOPUL"}) and an uncertain parameter in control theory. The third constraint restricts the cumulative error between the system outputs and their reference values within a control level $\omega$. When $f_3(\omega) = \omega$, the cumulative error constraint can be lifted to the objective function. Other restrictions on the decision variables are contained in the set $\mathcal{S}$ as the fourth constraint, and entanglement of decision variables is allowed in it. In Section [2](#section:2){reference-type="ref" reference="section:2"}, we show that the ([\[MOPUL\]](#MOPUL){reference-type="ref" reference="MOPUL"}) model is widely applicable. Notice that the first three constraints of ([\[MOPUL\]](#MOPUL){reference-type="ref" reference="MOPUL"}) are multivariate polynomial constraints. 
Since a multivariate polynomial optimization problem in general is NP-hard [@Nesterov2000], we know ([\[MOPUL\]](#MOPUL){reference-type="ref" reference="MOPUL"}) is generally computationally intractable. On the other hand, semidefinite programs (SDP) are polynomial-time solvable [@BenTal2001; @Nesterov1994; @Terlaky1996] with many successful applications to control theory [@DAVIDD.2001; @Balakrishnan2003; @Blanco2010; @Tanaka2018; @Bujarbaruah2020], multiple-input multiple-output (MIMO) analysis [@CHENG2019; @Mobasher2007], combinatorial optimization problems [@Goemans1995; @Gaar2020; @Han2002; @Guimarildeaes2020], and portfolio selection problems [@Ghaoui2003; @Chen2011]. SDP solvers such as SeDuMi (<https://sedumi.ie.lehigh.edu>), MOSEK (<https://www.mosek.com>) and DSDP (<https://www.mcs.anl.gov/hs/software/DSDP/>) are readily available. The first contribution of this paper is to construct an SDP approximation model for ([\[MOPUL\]](#MOPUL){reference-type="ref" reference="MOPUL"}). Notice that $r_{t}$ is regarded as the reference value of $y_{t}$. We can use $C^{\dag}r_{t}$ to approximate $x_t$, where $C^{\dag}$ denotes the Moore-Penrose inverse of matrix $C$. Similar to [@Xu2023], we can replace the first constraint by $x_{t}=AC^{\dag}r_{t-1}+Bu_{t-1}, t=1, 2, \ldots, N$, to reformulate ([\[MOPUL\]](#MOPUL){reference-type="ref" reference="MOPUL"}) as an SDP approximation model. The second contribution of this paper is to provide a theoretical analysis of the quality of SDP approximate solutions in terms of feasibility and optimality. For an SDP approximate solution $(A^{{\rm a}*}, U^{{\rm a}*}, \omega^{{\rm a}*})$ and the resulting output values $y_{t}$ of the linear system at each stage $t$, an upper bound of the cumulative error $\sum_{t=1}^{N}\Vert y_{t}-r_{t}\Vert_{2}$ corresponding to $(A^{{\rm a}*},U^{{\rm a}*},\omega^{{\rm a}*})$ is provided in Theorem [Theorem 2](#theorem:2){reference-type="ref" reference="theorem:2"} in the general setting.
Moreover, the feasibility of an SDP approximate solution to ([\[MOPUL\]](#MOPUL){reference-type="ref" reference="MOPUL"}) with respect to a fixed control level is guaranteed in Theorem [Theorem 3](#theorem:3){reference-type="ref" reference="theorem:3"}. Motivated by the application problems, two special cases of ([\[MOPUL\]](#MOPUL){reference-type="ref" reference="MOPUL"}) with SDP approximations concerning two settings of $(\{\lambda_{i}\}_{i=1}^{3}, \{f_i\}_{i=1}^{3}, \mathcal{S})$ are considered in Subsection [3.3](#subsection:3.3){reference-type="ref" reference="subsection:3.3"} for better theoretical estimations on the optimal objective values. The third contribution of this paper is to show the influences of perturbed noise levels at reference outputs and control levels on the performance of the SDP approximation model through numerical experiments. Equipped with accurate reference outputs and proper control levels, SDP approximation performs well numerically. The rest of the paper is organized as follows. Some specific applications of ([\[MOPUL\]](#MOPUL){reference-type="ref" reference="MOPUL"}) are introduced in Section [2](#section:2){reference-type="ref" reference="section:2"}. In Section [3](#section:3){reference-type="ref" reference="section:3"}, an SDP approximation model is constructed, and a theoretical analysis of its performance is provided. Numerical results are reported in Section [4](#section:4){reference-type="ref" reference="section:4"} and some concluding remarks are made in Section [5](#section:5){reference-type="ref" reference="section:5"}. **Notations.** Throughout the paper, $\mathbb{R}^{n}$, $\mathbb{R}_{+}^{n}$, $\mathbb{R}^{m\times n}$, and $\mathbb{N}_{+}$ denote the sets of real $n$-dimensional vectors, nonnegative vectors, $m\times n$ matrices, and positive integers, respectively.
$\textbf{S}^{n}$, $\textbf{S}_{+}^{n}$, and $\textbf{S}_{++}^{n}$ denote the sets of real $n\times n$ symmetric, positive semidefinite ($X\succeq0$), and positive definite matrices, respectively. $X^{\dag}$ denotes the Moore-Penrose inverse of $X$. $\Vert x\Vert_{2}=(\sum_{i=1}^{n}x_{i}^{2})^{\frac{1}{2}}$ and $\Vert x\Vert_{Q}=\sqrt{x^{T}Qx}$, where $Q\in\textbf{S}_{++}^{n}$. $\Vert X\Vert_{F}$, $\Vert X\Vert_{2}$, and $\Vert X\Vert_{*}$ denote the Frobenius norm, the spectral norm which is equal to the maximum singular value of $X$, and the nuclear norm which is equal to the sum of all singular values of matrix $X$, respectively. $O$ and $I$ denote the matrix of all zeros and the unit matrix whose sizes vary from the context, respectively. $\boldsymbol{0}$ and $\boldsymbol{1}$ denote the column vector of all zeros and ones whose sizes vary from the context, respectively. # Applications {#section:2} In this section, we present four specific applications of problem ([\[MOPUL\]](#MOPUL){reference-type="ref" reference="MOPUL"}). Their special structures and the quality of the corresponding SDP approximate solutions in terms of feasibility and optimality will be further investigated in Sections [3](#section:3){reference-type="ref" reference="section:3"} and [4](#section:4){reference-type="ref" reference="section:4"}. ## Linear model predictive control for optimal tracking {#subsection:2.1} Model predictive control (MPC) is a class of optimal control strategies, in which the optimizer determines control signals and the model predicts outputs [@Camacho2007]. 
Referring to equation (18) in [@Alessio2009], equation (2.5) in [@Camacho2007], and related discussions therein, an optimal tracking problem over an uncertain linear system takes the following form: $$\begin{aligned} \begin{split}\label{O-MPC} \min_{A, U}&~~\sum_{t=1}^{N}\left\Vert y_{t}-r_{t}\right\Vert_{2}+\lambda\sum_{t=1}^{N-1}\left\Vert u_{t}-u_{t-1}\right\Vert_{2}\\ {\rm s.t.} &~~x_{t}=Ax_{t-1}+Bu_{t-1},~~t=1, 2, \ldots, N,\\ &~~y_{t}=Cx_{t},~~t=0, 1, \ldots, N,\\ &~~(A, U)\in\mathcal{S}_{\rm\scriptscriptstyle MPC}, \end{split}\tag{O-MPC}\end{aligned}$$ where the transition matrix $A\in\mathbb{R}^{n\times n}$ and control $U\coloneqq(u_{0}, u_{1}, \ldots, u_{N-1})\in\mathbb{R}^{m\times N}$ are decision variables, system horizon $N\in\mathbb{N}_{+}$, parameter $\lambda\geq0$, $\{x_t\}_{t=0}^N\subseteq\mathbb{R}^n$ are $n$ dimensional system states with a given initial state $x_0$, $\{y_t\}_{t=0}^N\subseteq\mathbb{R}^p$ are $p$ dimensional system outputs, $B\in\mathbb{R}^{n\times m}$ and $C\in\mathbb{R}^{p\times n}$ are given system parameter matrices, and $\{r_{t}\}_{t=1}^{N}\subseteq\mathbb{R}^p$ are given reference signals. The cumulative error $\sum_{t=1}^{N}\Vert y_{t}-r_{t}\Vert_{2}$ enforces the system outputs to track the given reference signals, and the control variations $\sum_{t=1}^{N-1}\left\Vert u_{t}-u_{t-1}\right\Vert_{2}$ are penalized. The SD representable set $\mathcal{S}_{\rm\scriptscriptstyle MPC}$ is defined as $\mathcal{S}_{\rm\scriptscriptstyle UC}\cap\mathcal{S}_{\rm\scriptscriptstyle AR}\subseteq\mathbb{R}^{n\times n}\times\mathbb{R}^{m\times N}$, where $\mathcal{S}_{\rm\scriptscriptstyle UC}$ is the uncertainty set of the linear system and $\mathcal{S}_{\rm\scriptscriptstyle AR}$ is composed of additional restrictions on $(A, U)$.
Examples of $\mathcal{S}_{\rm\scriptscriptstyle UC}$ include $\{{\rm constant}~A\}\times\mathbb{R}^{m\times N}$ for the linear system with a constant transition matrix [@Bemporad2000] and $\{\sum_{i=1}^{k}\theta^i A^i, (\theta^{1}, \theta^{2}, \ldots, \theta^{k})^{\mathrm{T}}\in\mathbb{R}_{+}^{k}, \sum_{i=1}^{k}\theta^{i}=1\}\times\mathbb{R}^{m\times N}$ with given $k$ matrices $\{A^i\}_{i=1}^k$ for the uncertain linear system with transition matrix in the polytopic uncertainty set [@Cairano2016]. Examples of $\mathcal{S}_{\rm\scriptscriptstyle AR}$ include $\mathbb{R}^{n\times n}\times\{U|\alpha_{1}\leq u_{t}-u_{t-1}\leq\alpha_{2}\}$ with $\alpha_{1}, \alpha_{2}\in\mathbb{R}^m$ and component-wise inequalities [@Alessio2009]. Notice that ([\[O-MPC\]](#O-MPC){reference-type="ref" reference="O-MPC"}) is a special case of ([\[MOPUL\]](#MOPUL){reference-type="ref" reference="MOPUL"}) by setting $\lambda_{1}=0$, $\lambda_{2}=\lambda$, $\lambda_{3}=1$, $f_2(U)=\sum_{t=1}^{N-1}\left\Vert u_{t}-u_{t-1}\right\Vert_{2}$, $f_3(\omega)=\omega$, and $\mathcal{S}=\mathcal{S}_{\rm\scriptscriptstyle MPC}\times\mathbb{R}_+$. ## COVID-19 pandemic optimal control model {#subsection:2.2} To realize an effective prevention and control for the COVID-19 pandemic, we can construct the so-called "susceptible-asymptomatic infected-symptomatic infected-removed optimal control" model as below by dividing the total population into 4 groups of susceptible (S), asymptomatic infected (I$_\text{a}$), symptomatic infected (I$_\text{s}$), and removed (R). 
$$\begin{aligned} \begin{split}\label{O-COVID} \min_{A, U}&~~\sum_{t=1}^{N}\left\Vert x_{t}-r_{t}\right\Vert_{2}\\ {\rm s.t.}&~~ x_{t}=Ax_{t-1}+u_{t-1},~~t=1, 2,\ldots, N,\\ &~~\left(A, U\right)\in\mathcal{S}_{\rm\scriptscriptstyle COVID}, \end{split}\tag{O-COVID}\end{aligned}$$ where the transmission matrix $A\in\mathbb{R}^{4\times4}$ and the exit and entry control $U\coloneqq(u_{0}, u_{1}, \ldots, u_{N-1})\in\mathbb{R}^{4\times N}$ are decision variables, $N\in\mathbb{N}_+$ is the duration of COVID-19 transmission studied in ([\[O-COVID\]](#O-COVID){reference-type="ref" reference="O-COVID"}), $\{x_{t}\coloneqq(x_t^{\rm\scriptscriptstyle S}, x_t^{\rm\scriptscriptstyle I_a}, x_t^{\rm\scriptscriptstyle I_s}, x_t^{\rm\scriptscriptstyle R})^{\mathrm T}\}_{t=0}^N$ and $\{u_{t}\coloneqq(u_t^{\rm\scriptscriptstyle S}, u_t^{\rm\scriptscriptstyle I_a}, u_t^{\rm\scriptscriptstyle I_s}, u_t^{\rm\scriptscriptstyle R})^{\mathrm T}\}_{t=0}^{N-1}\subseteq\mathbb{R}^4$ are the numbers of individuals in each group and their variations through exit and entry, respectively, and $\{r_{t}\coloneqq(r_t^{\rm\scriptscriptstyle S}, r_t^{\rm\scriptscriptstyle I_a}, r_t^{\rm\scriptscriptstyle I_s}, r_t^{\rm\scriptscriptstyle R})^{\mathrm T}\}_{t=1}^N\subseteq\mathbb{R}^4$ are the expected numbers of individuals in each group. Then COVID-19 transmission is $x_{t}=Ax_{t-1}+u_{t-1}$ in ([\[O-COVID\]](#O-COVID){reference-type="ref" reference="O-COVID"}). Additional constraints on $(A, U)$ are contained in the SD representable constraint set $\mathcal{S}_{\rm\scriptscriptstyle COVID}\subseteq\mathbb{R}^{4\times 4}\times\mathbb{R}^{4\times N}$. To realize the target $\{r_t\}_{t=1}^{N}$ estimated by the medical facilities, the transmission matrix $A$ and the exit and entry control $U$ are determined in ([\[O-COVID\]](#O-COVID){reference-type="ref" reference="O-COVID"}) through the minimization of $\sum_{t=1}^{N}\Vert x_{t}-r_{t}\Vert_{2}$.
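As a toy numerical illustration of the dynamics in ([\[O-COVID\]](#O-COVID){reference-type="ref" reference="O-COVID"}) (all numbers below are invented for demonstration and not fitted to any data), one can iterate the transmission step and accumulate the tracking error:

```python
import numpy as np

# Invented transmission matrix over (S, I_a, I_s, R); each column sums
# to 1, so with zero exit/entry control the total population is conserved.
A = np.array([[0.95, 0.00, 0.00, 0.00],
              [0.04, 0.70, 0.00, 0.00],
              [0.01, 0.25, 0.80, 0.00],
              [0.00, 0.05, 0.20, 1.00]])
x = np.array([1000.0, 10.0, 5.0, 0.0])         # initial group sizes x_0
N = 3
U = np.zeros((4, N))                           # no exit/entry control here
r = [np.array([960.0, 45.0, 30.0, 10.0])] * N  # invented targets r_t

err = 0.0
for t in range(N):
    x = A @ x + U[:, t]                # x_t = A x_{t-1} + u_{t-1}
    err += np.linalg.norm(x - r[t])    # cumulative tracking error

assert np.isclose(x.sum(), 1015.0)     # population conserved (U = 0)
assert err > 0.0
```

An optimization model would instead treat `A` and `U` as decision variables and minimize `err` subject to the constraint set.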
Notice that ([\[O-COVID\]](#O-COVID){reference-type="ref" reference="O-COVID"}) is a special case of ([\[MOPUL\]](#MOPUL){reference-type="ref" reference="MOPUL"}) by setting $\lambda_1=\lambda_2=0, \lambda_3=1$, $f_3(\omega)=\omega$, $B=C=I$, and $\mathcal{S}=\mathcal{S}_{\rm\scriptscriptstyle COVID}\times\mathbb{R}_+$. ## Markov chain estimation {#subsection:2.3} Let $\{X_{t}\}_{t\geq0}$ be a homogeneous Markov chain on states $\{s_{i}\}_{i=1}^{m}$ with an unknown low-rank transition matrix $P=(p_{ij}\coloneqq\mathbb{P}\left(X_{t}=s_{i}|X_{t-1}=s_{j}\right))_{m\times m}\in\mathbb{R}^{m\times m}$, which implies a latent low-dimensionality structure [@Zhu2021]. We can construct an optimization model for Markov chain estimation with a low-rank demand as the following: $$\begin{aligned} \begin{split}\label{O-Markov} \min_{P}&~~\sum_{t=1}^{N}\left\Vert \pi_{t}-r_{t}\right\Vert_{2}\\ {\rm s.t.} &~~\pi_{t}=P\pi_{t-1},~~t=1, 2, \ldots, N,\\ &~~P\in\mathcal{S}_{\rm\scriptscriptstyle Markov}, \end{split}\tag{O-Markov}\end{aligned}$$ where the transition matrix $P$ is a decision variable, observation horizon $N\in\mathbb{N}_+$, probability distributions $$\pi_{t}\coloneqq\left(\mathbb{P}\left(X_{t}=s_{1}\right), \mathbb{P}\left(X_{t}=s_{2}\right), \ldots, \mathbb{P}\left(X_{t}=s_{m}\right)\right)^{\mathrm{T}}\in\mathbb{R}^{m}, t=0, 1, \ldots, N,$$ the $i$th component of $r_t$, i.e. $(r_{t})_{i}$, is an observed frequency of the event $\{X_{t}=s_{i}\}$, for $i=1, 2, \ldots, m$, $t=1, 2, \ldots, N$, and $$\begin{aligned} \mathcal{S}_{\rm\scriptscriptstyle Markov}\coloneqq\left\{P=\left(p_{ij}\right)_{m\times m}\in\mathbb{R}^{m\times m}\left|\begin{array}{ll} & p_{ij}\geq0, i, j=1, 2, \ldots, m,\\ & \sum_{i=1}^{m}p_{ij}=1, j=1, 2, \ldots, m,\\ & \left\Vert P\right\Vert_{*}\leq\alpha,\\ & \text{and subject to a finite number of linear inequality}\\ & \text{constraints on}~P.
\end{array} \right.\right\},\end{aligned}$$ in which $\Vert\cdot\Vert_*$ denotes the nuclear norm and $0<\alpha<m$. Different from Zhang and Wang [@Zhang2020], Li et al. [@Li2018], and Zhu et al. [@Zhu2021], which use the information of the event $\{X_{t-1}=s_{i}, X_{t}=s_{j}\}$, ([\[O-Markov\]](#O-Markov){reference-type="ref" reference="O-Markov"}) estimates the low-rank transition matrix through frequency approximation of the event $\{X_{t}=s_{i}\}$. The low-rank demand is enforced by the nuclear norm constraint in $\mathcal{S}_{\rm\scriptscriptstyle Markov}$. Notice that ([\[O-Markov\]](#O-Markov){reference-type="ref" reference="O-Markov"}) is a special case of ([\[MOPUL\]](#MOPUL){reference-type="ref" reference="MOPUL"}) by setting $\lambda_1=\lambda_2=0, \lambda_3=1$, $f_3(\omega)=\omega$, $B=O, C=I$, and $\mathcal{S}=\mathcal{S}_{\rm\scriptscriptstyle Markov}\times \mathbb{R}^{m\times N}\times\mathbb{R}_+$, where $O$ is the matrix of all zeros. ## Multi-stage enterprise input-output problems {#subsection:2.4} Input-output analysis is a framework describing and analyzing input (consumption) and output (production) activities and their relations in an economy [@Miller2009; @Liu2020]. Referring to [@Liu2020], considering an enterprise production with $m_1$ self-made and $m_2$ out-sourced products, the following multi-stage enterprise input-output optimization problem can be constructed to realize the given expected output values of enterprise production by controlling production technologies and purchase-sale plans.
$$\begin{aligned} \begin{split}\label{O-IN/OUTPUT1} \min_{A, U}&~~\sum_{t=1}^{N}\left\Vert x_{t}-r_{t}\right\Vert_{2}\\ {\rm s.t.} &~~x_{t}=Ax_{t-1}+u_{t-1},~~t=1, 2, \ldots, N,\\ &~~A\in\mathcal{S}_{\rm\scriptscriptstyle IO}, \end{split}\tag{O-IN/OUTPUT1}\end{aligned}$$ where the production technology matrix $A\in\mathbb{R}^{(m_1+m_2)\times (m_1+m_2)}$ with structure described in ([\[S_IO\]](#S_IO){reference-type="ref" reference="S_IO"}) and purchase-sale control $U\coloneqq(u_{0}, u_{1}, \ldots, u_{N-1})\in\mathbb{R}^{(m_1+m_2)\times N}$ are decision variables, $N\in\mathbb{N}_+$ is the duration of enterprise production, $\{x_{t}\}_{t=1}^N\subseteq\mathbb{R}^{m_1+m_2}$ are the production output values of $m_1+m_2$ products in which $(x_{t})_{i}$ is the output value of the $i$th self-made product, $i=1, 2, \ldots, m_1$, and $(x_t)_{m_1+j}$ is the output value of the $j$th out-sourced product at each stage, $j=1, 2, \ldots, m_2$, $\{u_t\}_{t=0}^{N-1}\subseteq\mathbb{R}^{m_1+m_2}$ are the purchase-sale values of $m_1+m_2$ products at each stage, $\{r_t\}_{t=1}^{N}\subseteq\mathbb{R}^{m_1+m_2}$ are given expected output values as the references for $\{x_t\}_{t=1}^N$, and constraint set $$\begin{aligned} \label{S_IO} \mathcal{S}_{\rm\scriptscriptstyle IO}\coloneqq \left\{\begin{pmatrix} I-G_{m_{1}\times m_{1}} & O\\ -H_{m_2\times m_{1}} & I \end{pmatrix}\in\mathbb{R}^{(m_1+m_2)\times (m_1+m_2)}\left| \begin{array}{l} \text{subject to a finite number of linear inequality}\\ \text{constraints on $G\in\mathbb{R}^{m_1\times m_1}$ and $H\in\mathbb{R}^{m_2\times m_{1}}$}.\end{array}\right.\right\},\end{aligned}$$ in which $G$ and $H$ are composed of technical coefficients [@Liu2020]. Then enterprise production is $x_{t}=Ax_{t-1}+u_{t-1}$ in ([\[O-IN/OUTPUT1\]](#O-IN/OUTPUT1){reference-type="ref" reference="O-IN/OUTPUT1"}).
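The block structure required by $\mathcal{S}_{\rm\scriptscriptstyle IO}$ can be assembled directly; in this sketch $G$ and $H$ are arbitrary nonnegative placeholders standing in for the actual technical coefficients:

```python
import numpy as np

m1, m2 = 3, 2               # illustrative numbers of self-made / out-sourced products
G = np.full((m1, m1), 0.1)  # placeholder technical coefficients
H = np.full((m2, m1), 0.2)  # placeholder technical coefficients

# A has the block form  [[I - G, O], [-H, I]]  required by S_IO
A = np.block([[np.eye(m1) - G, np.zeros((m1, m2))],
              [-H,             np.eye(m2)]])

assert A.shape == (m1 + m2, m1 + m2)
# The zero upper-right block: out-sourced products do not feed back
# into self-made production.
assert np.all(A[:m1, m1:] == 0)
```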
The production technology matrix $A$ and purchase-sale control $U$ are determined to realize the expected enterprise output values by minimizing the discrepancy between the system output and the expected output values $\sum_{t=1}^{N}\left\Vert x_{t}-r_{t}\right\Vert_{2}$. Notice that ([\[O-IN/OUTPUT1\]](#O-IN/OUTPUT1){reference-type="ref" reference="O-IN/OUTPUT1"}) is a special case of ([\[MOPUL\]](#MOPUL){reference-type="ref" reference="MOPUL"}) by setting $\lambda_1=\lambda_2=0, \lambda_3=1$, $f_3(\omega)=\omega$, $B=C=I$, and $\mathcal{S}=\mathcal{S}_{\rm\scriptscriptstyle IO}\times \mathbb{R}^{(m_1+m_2)\times N}\times \mathbb{R}_+$. When a steady and controllable change of the production technology is preferred within a guaranteed level of cumulative error, we may consider the following problem: $$\begin{aligned} \begin{split}\label{O-IN/OUTPUT2} \min_{A, U}&~~\left\Vert A-A^{\rm r}\right\Vert_{F}\nonumber\\ {\rm s.t.}&~~x_{t}=Ax_{t-1}+u_{t-1},~~t=1, 2, \ldots, N,\\ &~~\sum_{t=1}^{N}\left\Vert x_{t}-r_{t}\right\Vert_{2}\leq\omega,\\ &~~\Vert u_{t}-u_{t}^{\rm r}\Vert_{2}\leq\omega_{t},~~t=0, 1,\ldots, N-1,\\ &~~A\in\mathcal{S}_{\rm\scriptscriptstyle IO}, \end{split}\tag{O-IN/OUTPUT2}\end{aligned}$$ where the production technology matrix $A\in\mathbb{R}^{(m_1+m_2)\times(m_1+m_2)}$ and purchase-sale control $U\coloneqq(u_{0}, u_{1},$\ $\ldots, u_{N-1})\in\mathbb{R}^{(m_1+m_2)\times N}$ are decision variables, $N\in\mathbb{N}_+$ is the duration of enterprise production, $\{r_t\}_{t=1}^N\subseteq\mathbb{R}^{m_1+m_2}$, $A^{\rm r}\in\mathbb{R}^{(m_1+m_2)\times(m_1+m_2)}$, and $\{u_{t}^{\rm r}\}_{t=0}^{N-1}\subseteq\mathbb{R}^{m_1+m_2}$ are given reference values of $\{x_t\}_{t=1}^N$, $A$, and $\{u_{t}\}_{t=0}^{N-1}$, respectively, and $\omega, \{\omega_{t}\}_{t=0}^{N-1}\subseteq\mathbb{R}_+$ are control levels. The second constraint guarantees a cumulative precision of the iteration within the control level $\omega$. 
The third constraint guarantees a controllable change of the purchase-sale values. Notice that ([\[O-IN/OUTPUT2\]](#O-IN/OUTPUT2){reference-type="ref" reference="O-IN/OUTPUT2"}) is a special case of ([\[MOPUL\]](#MOPUL){reference-type="ref" reference="MOPUL"}) by setting $\lambda_1=1, \lambda_2=\lambda_3=0$, $f_1(A)=\Vert A-A^{\rm r}\Vert_{F}$, $B=C=I$, and $\mathcal{S}=\mathcal{S}_{\rm\scriptscriptstyle IO}\times\{U|\Vert u_{t}-u_{t}^{\rm r}\Vert_{2}\leq\omega_{t}, t=0, 1,\ldots, N-1\}\times\{{\rm constant}~\omega\}$. # Semidefinite approximation {#section:3} In this section, we explore the SDP approximation of problem ([\[MOPUL\]](#MOPUL){reference-type="ref" reference="MOPUL"}). In Subsection [3.1](#subsection:3.1){reference-type="ref" reference="subsection:3.1"}, we discuss the computational intractability of the problem, and in Subsection [3.2](#subsection:3.2){reference-type="ref" reference="subsection:3.2"} we construct a polynomial-time solvable SDP approximation model. The quality of SDP approximate solutions under two specific settings in terms of feasibility and optimality is analyzed in Subsection [3.3](#subsection:3.3){reference-type="ref" reference="subsection:3.3"}. ## Computational intractability {#subsection:3.1} When $\{f_i\}_{i=1}^3$ and $\mathcal{S}$ are SD representable, the computational intractability of ([\[MOPUL\]](#MOPUL){reference-type="ref" reference="MOPUL"}) mainly comes from the entanglement of decision variables in the first three constraints.
Specifically, combined with the first two constraints, the third constraint of ([\[MOPUL\]](#MOPUL){reference-type="ref" reference="MOPUL"}) is equivalent to $$\begin{aligned} & \sum_{t=1}^{N}\xi_{t}\leq \omega,\\ & \left\Vert y_{t}-r_{t}\right\Vert_{2}^{2}\leq\xi_t^2, 0\leq\xi_{t},~t=1, 2,\ldots, N,\end{aligned}$$ where $$\begin{aligned} \begin{split}\label{poly} \left\Vert y_{t}-r_{t}\right\Vert_{2}^{2}=&x_{0}^{\mathrm{T}}(A^{\mathrm{T}})^{t}C^{\mathrm{T}}CA^{t}x_{0}+2\sum_{j=0}^{t-1}x_{0}^{\mathrm{T}}(A^{\mathrm{T}})^{t}C^{\mathrm{T}}CA^{j}Bu_{t-1-j}\\ &+\sum_{i, j=0}^{t-1}u_{t-1-i}^{\mathrm{T}}B^{\mathrm{T}}(A^{\mathrm{T}})^{i}C^{\mathrm{T}}CA^{j}Bu_{t-1-j}-2x_{0}^{\mathrm{T}}(A^{\mathrm{T}})^{t}C^{\mathrm{T}}r_{t}\\ &-2\sum_{i=0}^{t-1}u_{t-1-i}^{\mathrm{T}}B^{\mathrm{T}}(A^{\mathrm{T}})^{i}C^{\mathrm{T}}r_{t}+r_{t}^{\mathrm{T}}r_{t}. \end{split}\end{aligned}$$ This is a nonnegative multivariate polynomial of degree $2t$, $t=1, 2,\ldots, N$ over $A$. Thus ([\[MOPUL\]](#MOPUL){reference-type="ref" reference="MOPUL"}) equivalently contains a series of multivariate polynomial constraints. Since the problem of minimizing a nonnegative multivariate polynomial of degree higher than or equal to 4 is in general NP-hard [@Nesterov2000], we know ([\[MOPUL\]](#MOPUL){reference-type="ref" reference="MOPUL"}) is NP-hard. ## Approximation model {#subsection:3.2} Notice that the vector $r_{t}$ in ([\[MOPUL\]](#MOPUL){reference-type="ref" reference="MOPUL"}) can be viewed as given reference values of the system output $y_{t}$ at each stage. For ([\[O-MPC\]](#O-MPC){reference-type="ref" reference="O-MPC"}) in Subsection [2.1](#subsection:2.1){reference-type="ref" reference="subsection:2.1"}, it represents the reference signal in the linear control system. For ([\[O-COVID\]](#O-COVID){reference-type="ref" reference="O-COVID"}) in Subsection [2.2](#subsection:2.2){reference-type="ref" reference="subsection:2.2"}, it represents the expected number of individuals. 
For ([\[O-Markov\]](#O-Markov){reference-type="ref" reference="O-Markov"}) in Subsection [2.3](#subsection:2.3){reference-type="ref" reference="subsection:2.3"}, it represents the observed frequency of a certain event. And for ([\[O-IN/OUTPUT1\]](#O-IN/OUTPUT1){reference-type="ref" reference="O-IN/OUTPUT1"}) and ([\[O-IN/OUTPUT2\]](#O-IN/OUTPUT2){reference-type="ref" reference="O-IN/OUTPUT2"}) in Subsection [2.4](#subsection:2.4){reference-type="ref" reference="subsection:2.4"}, it represents the expected output value of enterprise production. In the proposed approximation model, following an idea similar to [@Xu2023], $r_{t}$ is used to decouple the nested iteration of $x_{t}$ to avoid the multivariate polynomial structures in ([\[poly\]](#poly){reference-type="ref" reference="poly"}). Specifically, we replace the constraint $x_{t}=Ax_{t-1}+Bu_{t-1}$, $t=1, 2, \ldots, N$ in ([\[MOPUL\]](#MOPUL){reference-type="ref" reference="MOPUL"}) by $x_{t}=AC^{\dag}r_{t-1}+Bu_{t-1}$, $t=1, 2, \ldots, N$. Then an approximate matrix optimization problem over an uncertain linear system on finite horizon (abbreviated as [\[AMOPUL\]](#AMOPUL){reference-type="ref" reference="AMOPUL"}) can be constructed as the following: $$\begin{aligned} \begin{split}\label{AMOPUL} \min_{A, U, \omega}&\ \ \lambda_{1}f_{1}\left(A\right)+\lambda_{2}f_{2}\left(U\right)+\lambda_{3}f_{3}\left(\omega\right)\\ {\rm s.t.} &\ \ r_{0}=Cx_{0},~~x_{0}^{\rm a}=x_{0},\\ &\ \ x_{t}^{\rm a}=AC^{\dag}r_{t-1}+Bu_{t-1},~~t=1, 2,\ldots, N,\\ &\ \ y_{t}^{\rm a}=Cx_{t}^{\rm a},~~t=0, 1,\ldots, N,\\ &\ \ \sum_{t=1}^{N}\left\Vert y_{t}^{\rm a}-r_{t}\right\Vert_{2}\leq \omega,\\ &\ \ (A,U,\omega)\in \mathcal{S}, \end{split}\tag{AMOPUL}\end{aligned}$$ where the transition matrix $A\in\mathbb{R}^{n\times n}$, control $U\coloneqq(u_{0}, u_{1}, \dots, u_{N-1})\in\mathbb{R}^{m\times N}$, and control level/threshold $\omega\in\mathbb{R}_+$ are decision variables.
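To see why replacing $x_{t-1}$ by $C^{\dag}r_{t-1}$ is a natural decoupling, note that when the references are exact ($r_t=Cx_t$) and $C$ has full column rank, $C^{\dag}r_{t-1}=x_{t-1}$, so the approximate recursion reproduces the exact one. A small numerical sketch with random placeholder data:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, p, N = 3, 2, 4, 5            # C must have full column rank, so p >= n
A = 0.3 * rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
C = rng.standard_normal((p, n))    # full column rank almost surely
U = rng.standard_normal((m, N))
x0 = rng.standard_normal(n)
Cpinv = np.linalg.pinv(C)          # Moore-Penrose inverse C^dagger

# Exact recursion of (MOPUL) and the exact reference outputs r_t = C x_t
x = x0
r = [C @ x0]
for t in range(N):
    x = A @ x + B @ U[:, t]        # x_t = A x_{t-1} + B u_{t-1}
    r.append(C @ x)

# (AMOPUL) recursion: x_t^a = A C^dagger r_{t-1} + B u_{t-1}
xa = x0
for t in range(N):
    xa = A @ (Cpinv @ r[t]) + B @ U[:, t]

# With exact references, the two recursions coincide at every stage.
assert np.allclose(xa, x)
```

With noisy references the two recursions differ, which is exactly the error quantified by the cumulative-error bounds discussed below.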
To avoid potential confusion in notation, $x_{t}^{\rm a}$ and $y_{t}^{\rm a}$ are used to denote the approximate values of $x_{t}$ and $y_{t}$ in ([\[MOPUL\]](#MOPUL){reference-type="ref" reference="MOPUL"}), respectively. We call $\sum_{t=1}^{N}\Vert y_{t}^{\rm a}-r_{t}\Vert_{2}$ the approximate cumulative error. The meanings of other notations are the same as in ([\[MOPUL\]](#MOPUL){reference-type="ref" reference="MOPUL"}). Definitions and some properties of SD representability are included below. **Definition 1**. ***(Semidefinite representable set [@BenTal2001])**[\[definition:1\]]{#definition:1 label="definition:1"}\ A convex set $\mathcal{C}\subseteq\mathbb{R}^{n}$ is called semidefinite representable (SD representable) if there exist $$\begin{aligned} L_{i}^{j}\in\textbf{S}^{m_j}, i=0, 1, \ldots, n+d, j=1,2,\dots, J\end{aligned}$$ such that $$\begin{aligned} \mathcal{C}=\left\{x=\left(x_{1}, x_{2}, \ldots, x_{n}\right)^{\mathrm{T}}\in\mathbb{R}^{n}\left| \begin{array}{ll} & L_{0}^{j}+\sum_{i=1}^{n}x_{i}L_{i}^{j}+\sum_{i=1}^{d}u_{i}L_{n+i}^{j}\succeq 0,~~j=1,2\dots,J\\ & \mathrm{\ for\ some\ } u=\left(u_{1}, u_{2}, \ldots, u_{d}\right)^{\mathrm{T}}\in\mathbb{R}^{d} \end{array} \right.\right\}.\end{aligned}$$* **Definition 2**. ***(Semidefinite representable function [@BenTal2001])**[\[definition:2\]]{#definition:2 label="definition:2"}\ A convex function $f: \mathbb{R}^{n}\rightarrow\mathbb{R}\cup\{+\infty\}$ is called semidefinite representable (SD representable) if the set $\{(x, \lambda)\in\mathbb{R}^{n+1}|f(x)\leq \lambda\}$ is SD representable.* SD representability of sets is preserved through the set operations of intersection, direct product, affine mapping and its inverse [@BenTal2001]. Notice that the norms $\Vert\cdot\Vert_{F}$, $\Vert\cdot\Vert_Q$, $\Vert\cdot\Vert_{2}$, and $\Vert\cdot\Vert_{*}$ used in this paper are all SD representable functions [@BenTal2001].
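For instance, the epigraph $\{(x, \lambda):\Vert x\Vert_{2}\leq\lambda\}$ certifying the SD representability of $\Vert\cdot\Vert_{2}$ can be written as a single linear matrix inequality, using the arrow-shaped matrix that also appears in Theorem 1 below. A small numerical check of this equivalence (our own illustration, not from the paper):

```python
import numpy as np

def arrow_matrix(x, lam):
    """M = [[lam*I, x], [x^T, lam]]; M is psd exactly when ||x||_2 <= lam."""
    p = len(x)
    M = np.zeros((p + 1, p + 1))
    M[:p, :p] = lam * np.eye(p)
    M[:p, p] = x
    M[p, :p] = x
    M[p, p] = lam
    return M

def is_psd(M, tol=1e-10):
    """Positive semidefiniteness via the smallest eigenvalue."""
    return np.linalg.eigvalsh(M).min() >= -tol

x = np.array([3.0, 4.0])                 # ||x||_2 = 5
inside = is_psd(arrow_matrix(x, 5.0))    # ||x||_2 <= lam: psd holds
outside = is_psd(arrow_matrix(x, 4.9))   # ||x||_2 > lam: psd fails
```

The eigenvalues of the arrow matrix are $\lambda$ (with multiplicity $p-1$) and $\lambda\pm\Vert x\Vert_2$, so positive semidefiniteness holds precisely when $\Vert x\Vert_2\leq\lambda$.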
The next lemma reveals the connection between SD representability and SDP problems. **Lemma 1** ([@BenTal2001]). *A minimization problem $\min_{x}\{c^{\mathrm{T}}x|x\in\cap_{i=1}^{k}\mathcal{C}_{i}\subseteq\mathbb{R}^n\}$ can be equivalently formulated as an SDP problem if $\mathcal{C}_{i}$ is SD representable for $i=1, 2, \ldots, k$.* In order to construct an equivalent SDP reformulation, we need the next two results. **Lemma 2**. *Let $x\in\mathbb{R}^{n}$ and $X=\begin{pmatrix*} O & x\\ x^{\mathrm{T}} & 0 \end{pmatrix*}\in\textbf{S}^{n+1}$. If $X\in\textbf{S}_{+}^{n+1}$, then $x=\boldsymbol{0}$.* *Proof.* Let $v_{i, n+1}=(0, \ldots, 0, 1, 0, \ldots, 0, 1)^{\mathrm{T}}\in\mathbb{R}^{n+1}$, whose $i$th and $(n+1)$th elements are 1, and $\bar{v}_{i, n+1}=(0, \ldots, 0, -1, 0, \ldots, 0, 1)^{\mathrm{T}}\in\mathbb{R}^{n+1}$, whose $i$th element is $-1$ and $(n+1)$th element is 1, $i=1, 2, \ldots, n$. Since $X\in\textbf{S}_{+}^{n+1}$, we know that $$\begin{aligned} 2x_{i}=v_{i, n+1}^{\mathrm{T}}Xv_{i, n+1}\geq0,~\text{and}~ -2x_{i}=\bar{v}_{i, n+1}^{\mathrm{T}}X\bar{v}_{i, n+1}\geq0, i=1, 2, \ldots, n.\end{aligned}$$ Therefore $x=\boldsymbol{0}$. ◻ **Lemma 3**. *(Schur complement [@Horn2005 Theorem 1.12(b)][\[lemma:3\]]{#lemma:3 label="lemma:3"})\ Let $M\in\textbf{S}^{n}$ be partitioned as $$\begin{aligned} M=\begin{pmatrix} E & F\\ F^{\mathrm{T}} & G \end{pmatrix},\end{aligned}$$ where $E\in\textbf{S}^{q}$ is nonsingular with $q\leq n-1$. Then $M\in\textbf{S}_{+}^{n}$ if and only if $E\in\textbf{S}_{++}^{q}$ and $G-F^{\mathrm{T}}E^{-1}F\in\textbf{S}_{+}^{n-q}$.* **Theorem 1**.
*Under the assumption that $\{f_{i}\}_{i=1}^{3}$ and $\mathcal{S}$ are SD representable, problem ([\[AMOPUL\]](#AMOPUL){reference-type="ref" reference="AMOPUL"}) has the following SDP reformulation: $$\begin{aligned} \min_{A, U, \omega, \left\{\xi_{t}\right\}_{t=1}^{N}}&~~ \lambda_{1}f_{1}\left(A\right)+\lambda_{2}f_{2}\left(U\right)+\lambda_{3}f_{3}\left(\omega\right)\\ {\rm s.t.}~~~~~&~~ \begin{pmatrix} \xi_{t}I & CAC^{\dag}r_{t-1}+CBu_{t-1}-r_{t}\\ \left(CAC^{\dag}r_{t-1}+CBu_{t-1}-r_{t}\right)^{\mathrm{T}} & \xi_{t} \end{pmatrix}\in\textbf{S}_{+}^{p+1},\\ &~~t=1, 2, \ldots, N,\\ &~~\sum_{t=1}^{N}\xi_{t}\leq\omega,\\ &~~(A,U,\omega)\in \mathcal{S},\nonumber\end{aligned}$$* *where $A\in\mathbb{R}^{n\times n}$, $U\coloneqq(u_{0}, u_{1}, \dots, u_{N-1})\in\mathbb{R}^{m\times N}$, $\omega\in\mathbb{R}_+$, and $\left\{\xi_{t}\right\}_{t=1}^{N}\subseteq\mathbb{R}_+$ are decision variables, and the remaining notations are defined the same as in ([\[AMOPUL\]](#AMOPUL){reference-type="ref" reference="AMOPUL"}).* *Proof.* ([\[AMOPUL\]](#AMOPUL){reference-type="ref" reference="AMOPUL"}) is equivalent to $$\begin{aligned} \min_{A, U, \omega, \left\{\xi_{t}\right\}_{t=1}^{N}}&~~ \lambda_{1}f_{1}\left(A\right)+\lambda_{2}f_{2}\left(U\right)+\lambda_{3}f_{3}\left(\omega\right)\\ {\rm s.t.}~~~~~&~~ \left\Vert CAC^{\dag}r_{t-1}+CBu_{t-1}-r_{t}\right\Vert_{2}^{2}\leq\xi_{t}^{2},~~ \xi_{t}\geq0,~~t=1, 2, \ldots, N,\\ &~~\sum_{t=1}^{N}\xi_{t}\leq \omega,\\ &~~(A,U,\omega)\in \mathcal{S}\nonumber.\end{aligned}$$ We now prove that the first constraint in the above reformulation is equivalent to $$\begin{aligned} \begin{pmatrix} \xi_{t}I & CAC^{\dag}r_{t-1}+CBu_{t-1}-r_{t}\\ \left(CAC^{\dag}r_{t-1}+CBu_{t-1}-r_{t}\right)^{\mathrm{T}} & \xi_{t} \end{pmatrix}\in\textbf{S}_{+}^{p+1},~~t=1, 2, \ldots, N.\end{aligned}$$ For each $t=1, 2, \ldots, N$, if $\xi_{t}=0$, then $$\begin{aligned} &\left\Vert CAC^{\dag}r_{t-1}+CBu_{t-1}-r_{t}\right\Vert_{2}^{2}\leq\xi_{t}^{2}, \xi_{t}=0\\ 
\Longleftrightarrow& ~CAC^{\dag}r_{t-1}+CBu_{t-1}-r_{t}=\boldsymbol{0}\\ \Longleftrightarrow& ~\begin{pmatrix} O & CAC^{\dag}r_{t-1}+CBu_{t-1}-r_{t}\\ \left(CAC^{\dag}r_{t-1}+CBu_{t-1}-r_{t}\right)^{\mathrm{T}} & 0 \end{pmatrix}\in\textbf{S}_{+}^{p+1}\\ \Longleftrightarrow& ~\begin{pmatrix} \xi_{t}I & CAC^{\dag}r_{t-1}+CBu_{t-1}-r_{t}\\ \left(CAC^{\dag}r_{t-1}+CBu_{t-1}-r_{t}\right)^{\mathrm{T}} & \xi_{t} \end{pmatrix}\in\textbf{S}_{+}^{p+1},\end{aligned}$$ where the second equivalency follows from Lemma [Lemma 2](#lemma:2){reference-type="ref" reference="lemma:2"}. If $\xi_{t}>0$, then $$\begin{aligned} &\left\Vert CAC^{\dag}r_{t-1}+CBu_{t-1}-r_{t}\right\Vert_{2}^{2}\leq\xi_{t}^{2},~~ \xi_{t}>0\\ \Longleftrightarrow& ~\xi_{t}-\left(CAC^{\dag}r_{t-1}+CBu_{t-1}-r_{t}\right)^{\mathrm{T}}(\xi_{t}I)^{-1}\left(CAC^{\dag}r_{t-1}+CBu_{t-1}-r_{t}\right)\geq0,~~ \xi_{t}>0\\ \Longleftrightarrow& ~\xi_{t}-\left(CAC^{\dag}r_{t-1}+CBu_{t-1}-r_{t}\right)^{\mathrm{T}}(\xi_{t}I)^{-1}\left(CAC^{\dag}r_{t-1}+CBu_{t-1}-r_{t}\right)\geq0,~~ \xi_{t}I\in\textbf{S}_{++}^{p}\\ \Longleftrightarrow& ~\begin{pmatrix} \xi_{t}I & CAC^{\dag}r_{t-1}+CBu_{t-1}-r_{t}\\ \left(CAC^{\dag}r_{t-1}+CBu_{t-1}-r_{t}\right)^{\mathrm{T}} & \xi_{t} \end{pmatrix}\in\textbf{S}_{+}^{p+1},\end{aligned}$$ where the second equivalency follows from the fact that a symmetric matrix is positive definite if and only if all of its eigenvalues are positive [@Barrett2014], and the third equivalency follows from Lemma [\[lemma:3\]](#lemma:3){reference-type="ref" reference="lemma:3"}. Therefore, we obtain the equivalent reformulation. According to the SD representability assumptions and Lemma [Lemma 1](#lemma:1){reference-type="ref" reference="lemma:1"}, ([\[AMOPUL\]](#AMOPUL){reference-type="ref" reference="AMOPUL"}) becomes an SDP problem. 
◻ The computational tractability of SDP problems [@BenTal2001; @Nesterov1994; @Terlaky1996] and Theorem [Theorem 1](#theorem:1){reference-type="ref" reference="theorem:1"} assure that ([\[AMOPUL\]](#AMOPUL){reference-type="ref" reference="AMOPUL"}) is polynomial-time solvable. Once an optimal solution $(A^{\rm a*}, U^{\rm a*}, \omega^{\rm a*})$ of ([\[AMOPUL\]](#AMOPUL){reference-type="ref" reference="AMOPUL"}) is obtained, it is worth estimating the cumulative error between the system outputs and the references in ([\[MOPUL\]](#MOPUL){reference-type="ref" reference="MOPUL"}). An upper bound for this gap is provided in the next theorem. **Theorem 2**. *If there exist $\beta>0$ and $\omega^{\rm u}>0$ such that $\Vert CA^{\rm a}C^{\dag}\Vert_{2}\leq\beta$ and $\omega^{\rm a}\leq\omega^{\rm u}$ for any feasible $A^{\rm a}, \omega^{\rm a}$ of ([\[AMOPUL\]](#AMOPUL){reference-type="ref" reference="AMOPUL"}), then for each optimal solution $(A^{\rm a*}, U^{\rm a*}, \omega^{\rm a*})$ of ([\[AMOPUL\]](#AMOPUL){reference-type="ref" reference="AMOPUL"}), by letting $x_{t}=A^{\rm a*}x_{t-1}+Bu_{t-1}^{\rm a*}, y_{t}=Cx_{t}, t=1,2, \ldots, N$, we have $$\sum_{t=1}^{N}\Vert y_{t}-r_{t}\Vert_{2}\leq \left(\sum_{i=0}^{N-1}\beta^{i}\right)\omega^{\rm u}.$$* *Proof.* When $C$ has full column rank, we have $x_{0}=C^{\dag}r_{0}$ and $C^{\dag}C=I$ [@Penrose1956]. 
Hence $$\begin{aligned} &\sum_{t=1}^{N}\left\Vert y_{t}-r_{t}\right\Vert_{2}\\ =&\sum_{t=1}^{N}\left\Vert C(A^{\rm a*}x_{t-1}+Bu_{t-1}^{\rm a*})-CA^{\rm a*}C^{\dag}r_{t-1}+CA^{\rm a*}C^{\dag}r_{t-1}-r_{t}\right\Vert_{2}\\ \leq&\sum_{t=1}^{N}\left\Vert CA^{\rm a*}C^{\dag}\left(Cx_{t-1}-r_{t-1}\right)\right\Vert_{2}+\sum_{t=1}^{N}\left\Vert y_{t}^{\rm a}-r_{t}\right\Vert_{2}\\ \leq&\sum_{t=1}^{N}\left\Vert CA^{\rm a*}C^{\dag}\left(y_{t-1}-r_{t-1}\right)\right\Vert_{2}+\omega^{\rm a*}\\ \leq&\left\Vert CA^{\rm a*}C^{\dag}\right\Vert_{2}\left(\sum_{t=1}^{N-1}\left\Vert y_{t}-r_{t}\right\Vert_{2}\right)+\omega^{\rm a*}.\end{aligned}$$ With the same arguments, we have $$\begin{aligned} \sum_{t=1}^{n}\left\Vert y_{t}-r_{t}\right\Vert_{2}\leq\left\Vert CA^{\rm a*}C^{\dag}\right\Vert_{2}\left(\sum_{t=1}^{n-1}\left\Vert y_{t}-r_{t}\right\Vert_{2}\right)+\omega^{\rm a*},~~n=2, 3, \ldots, N-1.\end{aligned}$$ Consequently, we have $$\begin{aligned} &\sum_{t=1}^{N}\left\Vert y_{t}-r_{t}\right\Vert_{2}\\ \leq&\left\Vert CA^{\rm a*}C^{\dag}\right\Vert_{2}^{N-1}\left\Vert y_{1}-r_{1}\right\Vert_{2}+\left(\sum\limits_{i=0}^{N-2}\left\Vert CA^{\rm a*}C^{\dag}\right\Vert_{2}^{i}\right)\omega^{\rm a*}\\ =&\left\Vert CA^{\rm a*}C^{\dag}\right\Vert_{2}^{N-1}\left\Vert y_{1}^{\rm a}-r_{1}\right\Vert_{2}+\left(\sum\limits_{i=0}^{N-2}\left\Vert CA^{\rm a*}C^{\dag}\right\Vert_{2}^{i}\right)\omega^{\rm a*}\\ \leq&\left(\sum\limits_{i=0}^{N-1}\left\Vert CA^{\rm a*}C^{\dag}\right\Vert_{2}^{i}\right)\omega^{\rm a*}\\ \leq&\left(\sum\limits_{i=0}^{N-1}\beta^{i}\right)\omega^{\rm u}.\end{aligned}$$ ◻ **Remark 1**. *For the application problems in Section [2](#section:2){reference-type="ref" reference="section:2"}, we may assume that the variables $A$ and $\omega$ are bounded. 
Then the assumptions $\Vert CA^{\rm a}C^{\dag}\Vert_{2}\leq\beta$ and $\omega^{\rm a}\leq\omega^{\rm u}$ for any feasible $A^{\rm a}$ and $\omega^{\rm a}$ of ([\[AMOPUL\]](#AMOPUL){reference-type="ref" reference="AMOPUL"}) in Theorem [Theorem 2](#theorem:2){reference-type="ref" reference="theorem:2"} are satisfied. In this case, additional constraints such as $\Vert A\Vert_{2}\leq\alpha$ and $\omega\leq\omega^{\rm u}$ with large enough $\alpha$ and $\omega^{\rm u}$ can be added to $\mathcal{S}$ if necessary.* When the control level $\omega$ in ([\[MOPUL\]](#MOPUL){reference-type="ref" reference="MOPUL"}) is fixed as a constant, the next result shows when a feasible solution of ([\[AMOPUL\]](#AMOPUL){reference-type="ref" reference="AMOPUL"}) is also feasible for ([\[MOPUL\]](#MOPUL){reference-type="ref" reference="MOPUL"}). **Theorem 3**. *Suppose that $\mathcal{S}\subseteq\{(A, U, \omega)|\omega=\omega^{\rm c}\}$ in ([\[MOPUL\]](#MOPUL){reference-type="ref" reference="MOPUL"}) with $\omega^{\rm c}>0$ being given. Replace the constraint $\sum_{t=1}^{N}\Vert y_{t}^{\rm a}-r_{t}\Vert_{2}\leq \omega^{\rm c}$ with $\sum_{t=1}^{N}\Vert y_{t}^{\rm a}-r_{t}\Vert_{2}\leq\omega^{\rm c}/(\sum_{i=0}^{N-1}\beta^{i})$ in the SDP approximation ([\[AMOPUL\]](#AMOPUL){reference-type="ref" reference="AMOPUL"}) for some $\beta>0$.
If $\Vert CA^{\rm a}C^{\dag}\Vert_{2}\leq\beta$ for any feasible $A^{\rm a}$ of ([\[AMOPUL\]](#AMOPUL){reference-type="ref" reference="AMOPUL"}), then any feasible solution $(A^{\rm a}, U^{\rm a})$ of ([\[AMOPUL\]](#AMOPUL){reference-type="ref" reference="AMOPUL"}) is feasible to ([\[MOPUL\]](#MOPUL){reference-type="ref" reference="MOPUL"}), and the optimal objective value of ([\[AMOPUL\]](#AMOPUL){reference-type="ref" reference="AMOPUL"}) becomes an upper bound for that of ([\[MOPUL\]](#MOPUL){reference-type="ref" reference="MOPUL"}).* *Proof.* With similar arguments as in the proof of Theorem [Theorem 2](#theorem:2){reference-type="ref" reference="theorem:2"}, we have $$\begin{aligned} \sum_{t=1}^{n}\left\Vert y_{t}-r_{t}\right\Vert_{2}\leq\left\Vert CA^{\rm a}C^{\dag}\right\Vert_{2}\left(\sum_{t=1}^{n-1}\left\Vert y_{t}-r_{t}\right\Vert_{2}\right)+\frac{\omega^{\rm c}}{\sum_{i=0}^{N-1}\beta^{i}},~~n=2, 3, \ldots, N,\end{aligned}$$ for any feasible solution $(A^{\rm a}, U^{\rm a})$ of (AMOPUL). Hence $$\begin{aligned} \sum_{t=1}^{N}\left\Vert y_{t}-r_{t}\right\Vert_{2}\leq&\left(\sum\limits_{i=0}^{N-1}\left\Vert CA^{\rm a}C^{\dag}\right\Vert_{2}^{i}\right)\frac{\omega^{\rm c}}{\sum_{i=0}^{N-1}\beta^{i}}\\ \leq&\left(\sum\limits_{i=0}^{N-1}\beta^{i}\right)\frac{\omega^{\rm c}}{\sum_{i=0}^{N-1}\beta^{i}}\\ =&~~\omega^{\rm c},\end{aligned}$$ which implies the feasibility of $(A^{\rm a}, U^{\rm a})$ to ([\[MOPUL\]](#MOPUL){reference-type="ref" reference="MOPUL"}). Then the optimal objective value of ([\[AMOPUL\]](#AMOPUL){reference-type="ref" reference="AMOPUL"}) becomes an upper bound for that of ([\[MOPUL\]](#MOPUL){reference-type="ref" reference="MOPUL"}). ◻ The above theorem shows that the control level plays an important role in ([\[AMOPUL\]](#AMOPUL){reference-type="ref" reference="AMOPUL"}). Numerically, we will study this issue further in Section [4](#section:4){reference-type="ref" reference="section:4"}. **Remark 2**. 
*For Theorem [Theorem 3](#theorem:3){reference-type="ref" reference="theorem:3"}, if $\mathcal{S}\subseteq\{(A, U, \omega)|\Vert A\Vert_2\leq\alpha\}$, by taking $\beta=\alpha\Vert C\Vert_2\Vert C^{\dag}\Vert_2$, the assumption $\Vert CA^{\rm a}C^{\dag}\Vert_{2}\leq\beta$ is satisfied.* **Remark 3**. *A weighted cumulative error $\sum_{t=1}^{N}\Vert y_{t}-r_{t}\Vert_{Q}$ can also be used in ([\[MOPUL\]](#MOPUL){reference-type="ref" reference="MOPUL"}) by replacing $\Vert\cdot\Vert_{2}$ with $\Vert\cdot\Vert_{Q}$. The corresponding approximation model can then be constructed by using $\Vert\cdot\Vert_{Q}$ in ([\[AMOPUL\]](#AMOPUL){reference-type="ref" reference="AMOPUL"}). With similar arguments as in the proof of Theorem [Theorem 1](#theorem:1){reference-type="ref" reference="theorem:1"}, its approximation is an SDP problem. As $\eta_{1}\Vert x\Vert_{Q}\leq\Vert x\Vert_{2}\leq\eta_{2}\Vert x\Vert_{Q}$ holds for all $x\in\mathbb{R}^{p}$ and some $\eta_{1}, \eta_{2}>0$ [@Zeidler1995 1.12, Proposition 4], Theorem [Theorem 3](#theorem:3){reference-type="ref" reference="theorem:3"} follows when an upper bound of $\sum_{t=1}^{N}\Vert y_{t}^{\rm a}-r_{t}\Vert_{Q}$ is given by $$\begin{aligned} \frac{\omega^{\rm c}}{(\frac{\eta_{2}\beta}{\eta_{1}})^{N-1}+\sum_{i=0}^{N-2}\frac{\eta_{2}^{i+1}\beta^{i}}{\eta_{1}^{i+1}}}.\end{aligned}$$* **Remark 4**. *When the matrix $A$ is time-varying in ([\[MOPUL\]](#MOPUL){reference-type="ref" reference="MOPUL"}), i.e., $x_{t}=A_{t-1}x_{t-1}+Bu_{t-1}$, an analogous SDP approximation can be derived by similar arguments.* ## Two special cases {#subsection:3.3} We study two special cases of ([\[MOPUL\]](#MOPUL){reference-type="ref" reference="MOPUL"}) associated with specific application problems in Section [2](#section:2){reference-type="ref" reference="section:2"}, focusing on reference output fitting and transition matrix estimation, respectively.
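Before turning to these special cases, the error-propagation bound behind Theorems 2 and 3 can be checked numerically. The sketch below (our own illustration, assuming $B=C=I$ and random data rather than the paper's instances) simulates the exact recursion from a candidate $(A, U)$ and compares the true cumulative error with $(\sum_{i=0}^{N-1}\beta^{i})\,\omega^{\rm a}$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, N = 5, 8
A = 0.3 * rng.standard_normal((n, n))   # candidate transition matrix
U = rng.standard_normal((n, N))         # candidate controls u_0, ..., u_{N-1}
R = rng.standard_normal((n, N + 1))     # references r_0, ..., r_N
x0 = R[:, 0]                            # r_0 = C x_0 with C = I

# With C = I (so C^dag = I): approximate outputs and approximate cumulative error.
omega_a = 0.0
for t in range(1, N + 1):
    y_a = A @ R[:, t - 1] + U[:, t - 1]
    omega_a += np.linalg.norm(y_a - R[:, t])

# Exact recursion x_t = A x_{t-1} + u_{t-1} and true cumulative error.
x = x0.copy()
true_err = 0.0
for t in range(1, N + 1):
    x = A @ x + U[:, t - 1]
    true_err += np.linalg.norm(x - R[:, t])

beta = np.linalg.norm(A, 2)             # spectral norm, here ||C A C^dag||_2 = ||A||_2
bound = sum(beta ** i for i in range(N)) * omega_a
```

Since the per-step error satisfies $\Vert e_t\Vert_2\leq\beta\Vert e_{t-1}\Vert_2+\Vert y_t^{\rm a}-r_t\Vert_2$ with $e_0=0$, `true_err` never exceeds `bound`.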
### MOPUL1 {#subsubsection:3.3.1} To fit the given reference outputs $\{r_t\}_{t=1}^{N}$, we consider the following ([\[MOPUL1\]](#MOPUL1){reference-type="ref" reference="MOPUL1"}) problem to minimize the cumulative error of reference outputs: $$\begin{aligned} \begin{split}\label{MOPUL1} \min_{A, U}&~~\sum_{t=1}^{N}\left\Vert y_{t}-r_{t}\right\Vert_{2}\\ {\rm s.t.} &~~x_{t}=Ax_{t-1}+Bu_{t-1},~~t=1, 2,\ldots, N,\\ &~~y_{t}=Cx_{t},~~t=0, 1, \ldots, N,\\ &~~\left(A, U\right)\in\mathcal{S}_{\rm\scriptscriptstyle M1}, \end{split}\tag{MOPUL1}\end{aligned}$$ where the transition matrix $A\in\mathbb{R}^{n\times n}$ and control $U\coloneqq(u_{0}, u_{1}, \ldots, u_{N-1})\in\mathbb{R}^{m\times N}$ are decision variables, and $\mathcal{S}_{\rm\scriptscriptstyle M1}$ is SD representable. It is a special case of ([\[MOPUL\]](#MOPUL){reference-type="ref" reference="MOPUL"}) by setting $\lambda_{1}=\lambda_{2}=0$, $\lambda_{3}=1$, $f_{3}(\omega)=\omega$, and $\mathcal{S}=\mathcal{S}_{\rm\scriptscriptstyle M1}\times\mathbb{R}_+$. Notice that ([\[O-MPC\]](#O-MPC){reference-type="ref" reference="O-MPC"}) with $\lambda=0$ in Subsection [2.1](#subsection:2.1){reference-type="ref" reference="subsection:2.1"}, ([\[O-COVID\]](#O-COVID){reference-type="ref" reference="O-COVID"}) in Subsection [2.2](#subsection:2.2){reference-type="ref" reference="subsection:2.2"}, ([\[O-Markov\]](#O-Markov){reference-type="ref" reference="O-Markov"}) in Subsection [2.3](#subsection:2.3){reference-type="ref" reference="subsection:2.3"}, and ([\[O-IN/OUTPUT1\]](#O-IN/OUTPUT1){reference-type="ref" reference="O-IN/OUTPUT1"}) in Subsection [2.4](#subsection:2.4){reference-type="ref" reference="subsection:2.4"} are four examples of ([\[MOPUL1\]](#MOPUL1){reference-type="ref" reference="MOPUL1"}).
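The nonconvexity that motivates an approximation can be seen in a toy check: under the exact recursion of ([\[MOPUL1\]](#MOPUL1){reference-type="ref" reference="MOPUL1"}) the outputs depend on powers of $A$ and are not affine in $A$ (a sketch with $B=C=I$ and hypothetical data):

```python
import numpy as np

x0 = np.array([1.0, -1.0])
u0 = np.zeros(2)
u1 = np.zeros(2)

def y2(A):
    """Exact second output with B = C = I: y_2 = A^2 x_0 + A u_0 + u_1."""
    return A @ (A @ x0 + u0) + u1

A1 = np.zeros((2, 2))
A2 = 2.0 * np.eye(2)
mid = y2(0.5 * (A1 + A2))                  # y_2 at the midpoint A = I gives x0
avg = 0.5 * (y2(A1) + y2(A2))              # midpoint of values gives 2 * x0
nonlinear_gap = np.linalg.norm(mid - avg)  # nonzero: y_2 is not affine in A
```

Here `nonlinear_gap` equals $\Vert x_0\Vert_2=\sqrt{2}$, confirming the quadratic dependence of $y_2$ on $A$.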
An SDP approximation model of ([\[MOPUL1\]](#MOPUL1){reference-type="ref" reference="MOPUL1"}) is the following: $$\begin{aligned} \begin{split}\label{AMOPUL1} \min_{A, U}&~~\sum_{t=1}^{N}\left\Vert y_{t}^{\rm a}-r_{t}\right\Vert_{2}\nonumber\\ {\rm s.t.} &~~r_{0}=Cx_{0},~~x_{0}^{\rm a}=x_{0},\\ &~~x_{t}^{\rm a}=AC^{\dag}r_{t-1}+Bu_{t-1},~~t=1, 2,\ldots, N,\\ &~~y_{t}^{\rm a}=Cx_{t}^{\rm a},~~t=0, 1, \ldots, N,\\ &~~\left(A, U\right)\in\mathcal{S}_{\rm\scriptscriptstyle M1}, \end{split}\tag{AMOPUL1}\end{aligned}$$ where $A$ and $U\coloneqq(u_{0}, u_{1}, \ldots, u_{N-1})$ are decision variables. The next theorem provides an upper bound for the optimal objective value $v_{\rm\scriptscriptstyle A1}^*$ of ([\[AMOPUL1\]](#AMOPUL1){reference-type="ref" reference="AMOPUL1"}) in terms of the accuracy of the given reference outputs. **Theorem 4**. *Let $\{\epsilon_{t}\}_{t=1}^N$ be $N$ nonnegative constants. For problem ([\[AMOPUL1\]](#AMOPUL1){reference-type="ref" reference="AMOPUL1"}), if the set $$\begin{aligned} \label{Z_epsilon} \mathcal{Z}_\epsilon\coloneqq\left\{(A, U)\in\mathcal{S}_{\rm\scriptscriptstyle M1}\left|\begin{array}{l}U=(u_0, u_1,\dots, u_{N-1}),\\ \hat{y}_{0}=Cx_{0},\\ \hat{y}_{t}=CAC^{\dag}\hat{y}_{t-1}+CBu_{t-1},\\ \Vert \hat{y}_t-r_{t}\Vert_{2}\leq\epsilon_{t},~~t=1, 2, \ldots, N\end{array}\right.\right\}\neq\emptyset,\end{aligned}$$ then we have $$\begin{aligned} v_{\rm\scriptscriptstyle A1}^*\leq\left(1+\gamma\right)\sum_{t=1}^{N-1}\epsilon_{t}+\epsilon_{N},\end{aligned}$$ where $\gamma\coloneqq\inf_{\scriptscriptstyle(A, U)\in \mathcal{Z}_\epsilon}\Vert CAC^{\dag}\Vert_2$.
Moreover, if $\{r_{t}\}_{t=1}^N$ are accurate system references, i.e., there exists an $(A, U)\in\mathcal{S}_{\rm\scriptscriptstyle M1}$ such that $\hat{y}_{t}=r_{t}, t=1, 2, \ldots, N$, then $v_{\rm\scriptscriptstyle A1}^*=0$.* *Proof.* For each approximate iteration $y_{t}^{\rm a}=CAC^{\dag}r_{t-1}+CBu_{t-1}, t=1, 2, \ldots, N$ with $(A, U)\in\mathcal{Z}_\epsilon$, $$\begin{aligned} &\sum\limits_{t=1}^{N}\left\Vert y_{t}^{\rm a}-r_{t}\right\Vert_{2}\\ =&\sum\limits_{t=1}^{N}\left\Vert CAC^{\dag}r_{t-1}+CBu_{t-1}-r_{t}\right\Vert_{2}\\ =&\sum\limits_{t=1}^{N}\left\Vert CAC^{\dag}(r_{t-1}-\hat{y}_{t-1}+\hat{y}_{t-1})+CBu_{t-1}-(r_{t}-\hat{y}_{t}+\hat{y}_{t})\right\Vert_{2}\\ \leq&\sum_{t=1}^{N}\left\Vert CAC^{\dag}\left(\hat{y}_{t-1}-r_{t-1}\right)\right\Vert_{2}+\sum_{t=1}^{N}\left\Vert \hat{y}_{t}-r_{t}\right\Vert_{2}\\ \leq&\left\Vert CAC^{\dag}\right\Vert_{2}\sum_{t=1}^{N-1}\left\Vert \hat{y}_{t}-r_{t}\right\Vert_{2}+\sum_{t=1}^{N}\left\Vert \hat{y}_{t}-r_{t}\right\Vert_{2}\\ \leq&\left(1+\Vert CAC^{\dag}\Vert_{2}\right)\sum_{t=1}^{N-1}\epsilon_{t}+\epsilon_{N}.\end{aligned}$$ Thus $$\begin{aligned} v_{\rm\scriptscriptstyle A1}^*&\leq\sum\limits_{t=1}^{N}\left\Vert y_{t}^{\rm a}-r_{t}\right\Vert_{2}\leq\left(1+\Vert CAC^{\dag}\Vert_{2}\right)\sum_{t=1}^{N-1}\epsilon_{t}+\epsilon_{N}\end{aligned}$$ holds for each $(A, U)\in\mathcal{Z}_\epsilon$. Consequently, we have $$\begin{aligned} v_{\rm\scriptscriptstyle A1}^*&\leq\inf_{\scriptscriptstyle(A, U)\in \mathcal{Z}_\epsilon}\left\{\left(1+\Vert CAC^{\dag}\Vert_{2}\right)\sum_{t=1}^{N-1}\epsilon_{t}+\epsilon_{N}\right\}=\left(1+\gamma\right)\sum_{t=1}^{N-1}\epsilon_{t}+\epsilon_{N},\end{aligned}$$ and the rest of the theorem follows. ◻ This theorem shows that the accuracy of reference outputs is important to ([\[AMOPUL1\]](#AMOPUL1){reference-type="ref" reference="AMOPUL1"}). Given a sequence of accurate reference outputs, ([\[AMOPUL1\]](#AMOPUL1){reference-type="ref" reference="AMOPUL1"}) can fit them without error.
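The final claim of Theorem 4 can be verified directly: if the references are generated exactly by an ideal pair $(\hat{A}, \hat{U})$, the decoupled recursion reproduces them with zero approximate cumulative error. A small sketch (our own illustration, assuming $B=C=I$ and random data):

```python
import numpy as np

rng = np.random.default_rng(2)
n, N = 4, 6
A_hat = 0.2 * rng.standard_normal((n, n))
U_hat = rng.standard_normal((n, N))

# Accurate references generated by the ideal pair: r_t = A_hat r_{t-1} + u_{t-1}.
r = [rng.standard_normal(n)]               # r_0 = x_0 (C = I)
for t in range(1, N + 1):
    r.append(A_hat @ r[t - 1] + U_hat[:, t - 1])

# Approximate cumulative error of (A_hat, U_hat) in the decoupled model (C = B = I).
v = sum(np.linalg.norm(A_hat @ r[t - 1] + U_hat[:, t - 1] - r[t])
        for t in range(1, N + 1))          # zero: the references are fit exactly
```

So $(\hat{A}, \hat{U})$ is feasible with objective value 0, giving $v_{\rm\scriptscriptstyle A1}^*=0$.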
Numerically, we will study this issue further in Section [4](#section:4){reference-type="ref" reference="section:4"}. Let $v_{\rm\scriptscriptstyle M1}^*$ and $v_{\rm\scriptscriptstyle A1}^*$ be the optimal objective values of ([\[MOPUL1\]](#MOPUL1){reference-type="ref" reference="MOPUL1"}) and ([\[AMOPUL1\]](#AMOPUL1){reference-type="ref" reference="AMOPUL1"}), respectively. Some bounds on $v_{\rm\scriptscriptstyle A1}^*/v_{\rm \scriptscriptstyle M1}^*$ are provided in the next two results. **Theorem 5**. *If problem ([\[MOPUL1\]](#MOPUL1){reference-type="ref" reference="MOPUL1"}) is attainable, then we have $$\begin{aligned} v_{\rm\scriptscriptstyle A1}^*\leq\left(1+\gamma^*\right)v_{\rm\scriptscriptstyle M1}^*,\end{aligned}$$ where $\gamma^*\coloneqq\inf_{\scriptscriptstyle(A^*, U^*)\in\mathcal{F}_{\rm\scriptscriptstyle M1}^{*}}\inf_{\scriptscriptstyle(A, U)\in \mathcal{Z}_{\epsilon}^*}\Vert CAC^{\dag}\Vert_2$ with $\mathcal{F}_{\rm\scriptscriptstyle M1}^{*}$ being the optimal solution set of ([\[MOPUL1\]](#MOPUL1){reference-type="ref" reference="MOPUL1"}) and $\mathcal{Z}_{\epsilon}^*$ being the set defined in ([\[Z_epsilon\]](#Z_epsilon){reference-type="ref" reference="Z_epsilon"}) in which $\epsilon_t=\Vert \hat{y}_{t}^*-r_{t}\Vert_{2}$, $\hat{y}_{t}^*=CA^*C^{\dag}\hat{y}_{t-1}^*+CBu^*_{t-1}, t=1, 2, \ldots, N$, and $\hat{y}_{0}^*=Cx_{0}$ for each $(A^*, U^*)\in\mathcal{F}_{\rm\scriptscriptstyle M1}^{*}$.* *Proof.* When ([\[MOPUL1\]](#MOPUL1){reference-type="ref" reference="MOPUL1"}) is attainable, we have $\mathcal{Z}_{\epsilon}^{*}\ne\emptyset$, since $(A^*,U^*)\in \mathcal{Z}_{\epsilon}^{*}$ for each $(A^*,U^*)\in\mathcal{F}_{\rm\scriptscriptstyle M1}^{*}$.
Theorem [Theorem 4](#theorem:4){reference-type="ref" reference="theorem:4"} implies that $$\begin{aligned} v_{\rm\scriptscriptstyle A1}^*\leq\left(1+\inf_{\scriptscriptstyle(A, U)\in \mathcal{Z}_{\epsilon}^{*}}\Vert CAC^{\dag}\Vert_2\right)v_{\rm\scriptscriptstyle M1}^*\end{aligned}$$ holds for each $(A^*,U^*)\in\mathcal{F}_{\rm\scriptscriptstyle M1}^{*}$. Hence $$\begin{aligned} v_{\rm\scriptscriptstyle A1}^*&\leq\inf_{\scriptscriptstyle(A^*, U^*)\in\mathcal{F}_{\rm\scriptscriptstyle M1}^{*}}\left(1+\inf_{\scriptscriptstyle(A, U)\in \mathcal{Z}_{\epsilon}^{*}}\Vert CAC^{\dag}\Vert_2\right)v_{\rm\scriptscriptstyle M1}^*=\left(1+\gamma^*\right)v_{\rm\scriptscriptstyle M1}^*.\end{aligned}$$ ◻ **Theorem 6**. *If problem ([\[AMOPUL1\]](#AMOPUL1){reference-type="ref" reference="AMOPUL1"}) is attainable, then we have $$\begin{aligned} v_{\rm\scriptscriptstyle M1}^*\leq \left(\sum_{i=0}^{N-1}(\zeta^*)^i\right) v_{\rm\scriptscriptstyle A1}^*,\end{aligned}$$ where $\zeta^*\coloneqq\inf_{\scriptscriptstyle(A^{\rm a*},U^{\rm a*})\in \mathcal{F}_{\rm\scriptscriptstyle A1}^{*}}\Vert CA^{\rm a*}C^{\dag}\Vert_{2}$ with $\mathcal{F}_{\rm\scriptscriptstyle A1}^{*}$ being the optimal solution set of ([\[AMOPUL1\]](#AMOPUL1){reference-type="ref" reference="AMOPUL1"}).* *Proof.* When ([\[AMOPUL1\]](#AMOPUL1){reference-type="ref" reference="AMOPUL1"}) is attainable, let $(A^{\rm a*}, U^{\rm a*})\in\mathcal{F}_{\rm\scriptscriptstyle A1}^{*}$ be an optimal solution of\ ([\[AMOPUL1\]](#AMOPUL1){reference-type="ref" reference="AMOPUL1"}), $x_{t}=A^{\rm a*}x_{t-1}+Bu_{t-1}^{\rm a*}$, $y_{t}=Cx_{t}$ in ([\[MOPUL1\]](#MOPUL1){reference-type="ref" reference="MOPUL1"}), and $x^{\rm a*}_{t}=A^{\rm a*}C^{\dag}r_{t-1}+Bu_{t-1}^{\rm a*}$, $y^{\rm a*}_{t}=Cx^{\rm a*}_{t}$ in ([\[AMOPUL1\]](#AMOPUL1){reference-type="ref" reference="AMOPUL1"}), $t=1, 2, \ldots, N$.
With similar arguments as in the proof of Theorem [Theorem 2](#theorem:2){reference-type="ref" reference="theorem:2"}, we can obtain the following $N-1$ inequalities: $$\begin{aligned} \sum_{t=1}^{n}\Vert y_{t}-r_{t}\Vert_{2}\leq\Vert CA^{\rm a*}C^{\dag}\Vert_{2}\sum_{t=1}^{n-1}\Vert y_{t}-r_{t}\Vert_{2}+\sum_{t=1}^{N}\left\Vert y_{t}^{\rm a*}-r_{t}\right\Vert_{2},~~n=2, 3, \ldots, N,\end{aligned}$$ which imply that $$\begin{aligned} \sum_{t=1}^{N}\Vert y_{t}-r_{t}\Vert_{2}&\leq\left(\sum_{i=0}^{N-1}\Vert CA^{\rm a*}C^{\dag}\Vert_{2}^{i}\right)\sum_{t=1}^{N}\left\Vert y_{t}^{\rm a*}-r_{t}\right\Vert_{2}\\ &=\left(\sum_{i=0}^{N-1}\Vert CA^{\rm a*}C^{\dag}\Vert_{2}^{i}\right)v_{\rm\scriptscriptstyle A1}^*.\end{aligned}$$ Since any optimal solution of ([\[AMOPUL1\]](#AMOPUL1){reference-type="ref" reference="AMOPUL1"}) is feasible to ([\[MOPUL1\]](#MOPUL1){reference-type="ref" reference="MOPUL1"}), we know $$\begin{aligned} v_{\rm\scriptscriptstyle M1}^*\leq \left(\sum_{i=0}^{N-1}\Vert CA^{\rm a*}C^{\dag}\Vert_{2}^{i}\right)v_{\rm\scriptscriptstyle A1}^*\end{aligned}$$ holds for each $(A^{\rm a*}, U^{\rm a*})\in\mathcal{F}_{\rm\scriptscriptstyle A1}^{*}$. Hence $$\begin{aligned} v_{\rm\scriptscriptstyle M1}^*&\leq \inf_{\scriptscriptstyle(A^{\rm a*},U^{\rm a*})\in \mathcal{F}_{\rm\scriptscriptstyle A1}^{*}}\left(\sum_{i=0}^{N-1}\Vert CA^{\rm a*}C^{\dag}\Vert_{2}^{i}\right)v_{\rm\scriptscriptstyle A1}^*\\ &=\left(\sum_{i=0}^{N-1}(\zeta^*)^{i}\right)v_{\rm\scriptscriptstyle A1}^*.\end{aligned}$$ ◻ **Remark 5**. 
*When ([\[MOPUL1\]](#MOPUL1){reference-type="ref" reference="MOPUL1"}) and ([\[AMOPUL1\]](#AMOPUL1){reference-type="ref" reference="AMOPUL1"}) are both attainable with $v_{\rm \scriptscriptstyle M1}^*>0$, Theorems [Theorem 5](#theorem:5){reference-type="ref" reference="theorem:5"} and [Theorem 6](#theorem:6){reference-type="ref" reference="theorem:6"} imply that $$\begin{aligned} \frac{1}{\sum_{i=0}^{N-1}(\zeta^*)^i}\leq \frac{v_{\rm\scriptscriptstyle A1}^*}{v_{\rm \scriptscriptstyle M1}^*}\leq 1+\gamma^*.\end{aligned}$$ Furthermore, if $\zeta^*=\gamma^*=0$, then $v_{\rm\scriptscriptstyle M1}^*=v_{\rm \scriptscriptstyle A1}^*$.* ### MOPUL2 {#subsubsection:3.3.2} In some scenarios such as ([\[O-IN/OUTPUT2\]](#O-IN/OUTPUT2){reference-type="ref" reference="O-IN/OUTPUT2"}) in Subsection [2.4](#subsection:2.4){reference-type="ref" reference="subsection:2.4"} and a COVID-19 pandemic optimal control model MOCM in [@Xu2023], a small change of the transition matrix $A$ is preferred within a guaranteed level of cumulative error. 
We may consider the following problem: $$\begin{aligned} \begin{split}\label{MOPUL2} \min_{A, U}&~~\left\Vert A-A^{\rm r}\right\Vert_{F}\nonumber\\ {\rm s.t.} &~~x_{t}=Ax_{t-1}+Bu_{t-1},~~t=1, 2, \ldots, N,\\ &~~y_{t}=Cx_{t},~~t=0, 1, \ldots, N,\\ &~~\sum_{t=1}^{N}\left\Vert y_{t}-r_{t}\right\Vert_{2}\leq \omega,\\ &~~\left\Vert u_{t}-u_{t}^{\rm r}\right\Vert_{2}\leq \omega_{t},~~t=0, 1, \ldots, N-1,\\ &~~\left(A, U\right)\in\mathcal{S}_{\rm\scriptscriptstyle M2}, \end{split}\tag{MOPUL2}\end{aligned}$$ where the transition matrix $A\in\mathbb{R}^{n\times n}$ and control $U\coloneqq(u_{0}, u_{1}, \ldots, u_{N-1})\in\mathbb{R}^{m\times N}$ are decision variables, $\{r_t\}_{t=1}^{N}\subseteq\mathbb{R}^{p}$, $A^{\rm r}\in\mathbb{R}^{n\times n}$, and $\{u_{t}^{\rm r}\}_{t=0}^{N-1}\subseteq\mathbb{R}^{m\times 1}$ are given references of $\{y_t\}_{t=1}^{N}$, $A$, and $\{u_{t}\}_{t=0}^{N-1}$, respectively, $\omega, \{\omega_{t}\}_{t=0}^{N-1}\subseteq\mathbb{R}_{+}$ are control levels, and constraint set $\mathcal{S}_{\rm\scriptscriptstyle M2}\subseteq\mathbb{R}^{n\times n}\times\mathbb{R}^{m\times N}$ is SD representable. Notice that this is a special case of ([\[MOPUL\]](#MOPUL){reference-type="ref" reference="MOPUL"}) by setting $\lambda_{1}=1$, $\lambda_{2}=\lambda_{3}=0$, $f_{1}(A)=\Vert A-A^{\rm r}\Vert_{F}$, and $\mathcal{S}=(\mathbb{R}^{n\times n}\times\{U|\Vert u_{t}-u_{t}^{\rm r}\Vert_{2}\leq\omega_{t}, t=0, 1, \ldots, N-1\}\cap\mathcal{S}_{\rm\scriptscriptstyle M2})\times\{{\rm constant}~\omega\}$. The objective function enforces a steady and controllable change of transition matrix $A$ with respect to the reference value. The third constraint guarantees a cumulative precision of the iteration within the control level $\omega$. And the fourth constraint guarantees a controllable change of the system inputs. 
Correspondingly, we can derive the following SDP approximation model: $$\begin{aligned} \begin{split}\label{AMOPUL2} \min_{A, U}&~~\left\Vert A-A^{\rm r}\right\Vert_{F}\\ {\rm s.t.} &~~r_{0}=Cx_{0},~~x_{0}^{\rm a}=x_{0},\\ &~~x_{t}^{\rm a}=AC^{\dag}r_{t-1}+Bu_{t-1},~~t=1, 2, \ldots, N,\\ &~~y_{t}^{\rm a}=Cx_{t}^{\rm a},~~t=0, 1, \ldots, N,\\ &~~\sum_{t=1}^{N}\left\Vert y_{t}^{\rm a}-r_{t}\right\Vert_{2}\leq\tilde{\omega},\\ &~~\left\Vert u_{t}-u_{t}^{\rm r}\right\Vert_{2}\leq \omega_{t},~~t=0, 1, \ldots, N-1,\\ &~~\left(A, U\right)\in\mathcal{S}_{\rm\scriptscriptstyle M2}, \end{split}\tag{AMOPUL2}\end{aligned}$$ where $A$ and $U\coloneqq(u_{0}, u_{1}, \ldots, u_{N-1})$ are decision variables, $\tilde{\omega}\in\mathbb{R}_+$ is control level. Following Theorem [Theorem 3](#theorem:3){reference-type="ref" reference="theorem:3"}, we have the relationship between ([\[MOPUL2\]](#MOPUL2){reference-type="ref" reference="MOPUL2"}) and ([\[AMOPUL2\]](#AMOPUL2){reference-type="ref" reference="AMOPUL2"}) in the next result. **Theorem 7**. *Take $\tilde{\omega}=\omega/(\sum_{i=0}^{N-1}\beta^{i})$ in ([\[AMOPUL2\]](#AMOPUL2){reference-type="ref" reference="AMOPUL2"}) with respect to $\omega$ in ([\[MOPUL2\]](#MOPUL2){reference-type="ref" reference="MOPUL2"}) and $\beta>0$. If $\Vert CA^{\rm a}C^{\dag}\Vert_{2}\leq\beta$ for any feasible $A^{\rm a}$ of ([\[AMOPUL2\]](#AMOPUL2){reference-type="ref" reference="AMOPUL2"}), then any feasible solution of ([\[AMOPUL2\]](#AMOPUL2){reference-type="ref" reference="AMOPUL2"}) is feasible to ([\[MOPUL2\]](#MOPUL2){reference-type="ref" reference="MOPUL2"}), and the optimal objective value of ([\[AMOPUL2\]](#AMOPUL2){reference-type="ref" reference="AMOPUL2"}) becomes an upper bound for that of ([\[MOPUL2\]](#MOPUL2){reference-type="ref" reference="MOPUL2"}).* **Remark 6**. 
*The existence of an upper bound $\beta$ of $\Vert CA^{\rm a}C^{\dag}\Vert_{2}$ in Theorem [Theorem 7](#theorem:7){reference-type="ref" reference="theorem:7"} is guaranteed as mentioned in Remark [Remark 1](#remark:2){reference-type="ref" reference="remark:2"}.* # Numerical experiments {#section:4} In this section, we numerically study the influence of perturbation noise in the reference outputs $\{r_t\}_{t=1}^N$ and of the control level $\omega$ on the performance of the proposed approximation model ([\[AMOPUL\]](#AMOPUL){reference-type="ref" reference="AMOPUL"}). Theorem [Theorem 4](#theorem:4){reference-type="ref" reference="theorem:4"} shows that the noise levels of the given reference outputs $\{r_t\}_{t=1}^N$ are key to the optimal objective value. We study the numerical performance of ([\[AMOPUL1\]](#AMOPUL1){reference-type="ref" reference="AMOPUL1"}) and ([\[AMOPUL2\]](#AMOPUL2){reference-type="ref" reference="AMOPUL2"}) with different noise levels of $\{r_t\}_{t=1}^N$ in Subsection [4.1](#subsection:4.1){reference-type="ref" reference="subsection:4.1"} and Subsubsection [4.2.1](#subsubsection:4.2.1){reference-type="ref" reference="subsubsection:4.2.1"}, respectively. In addition, Theorems [Theorem 3](#theorem:3){reference-type="ref" reference="theorem:3"} and [Theorem 7](#theorem:7){reference-type="ref" reference="theorem:7"} indicate that the size of the feasible set of ([\[AMOPUL\]](#AMOPUL){reference-type="ref" reference="AMOPUL"}) is mainly determined by the control level $\tilde{\omega}$. Related numerical results on the performance of ([\[AMOPUL2\]](#AMOPUL2){reference-type="ref" reference="AMOPUL2"}) in terms of $\tilde{\omega}$ are reported in Subsubsection [4.2.2](#subsubsection:4.2.2){reference-type="ref" reference="subsubsection:4.2.2"}. All data are randomly generated in our experiments as follows: - **An ideal instance.** Take $n=m=p=100$, $N=30$, and $B=C=I$ in ([\[MOPUL\]](#MOPUL){reference-type="ref" reference="MOPUL"}).
Initial $r_{0}$ is uniformly generated in $(-0.5, 0.5)^p$. $\hat{A}$ is an ideal value of $A$ with each component being generated from the normal distribution $\mathcal{N}(0, 0.1^2)$. $\hat{u}_{t}\coloneqq\boldsymbol{1}\hat{u}_t^{\rm s}$ is an ideal value of $u_t$ with $\hat{u}_t^{\rm s}$ being uniformly generated in $(-0.5, 0.5)$, $t=0, 1, \ldots, N-1$. Define $\hat{x}_{t}=\hat{A}\hat{x}_{t-1}+\hat{u}_{t-1}, t=1, 2, \dots, N$ and $\hat{x}_0=r_0$. Then $(\hat{A}, \{\hat{u}_t\}_{t=0}^{N-1}, \{\hat{x}_t\}_{t=0}^N)$ forms an ideal instance. - **Reference outputs.** Reference output $r_{t}\coloneqq\hat{x}_{t}+e_{t}$ with the ideal value $\hat{x}_{t}$ and a perturbed noise $e_{t}$ with each component being generated from $\mathcal{N}(\mu, \sigma^{2})$, $t=1, 2, \ldots, N$. A total of 20 random instances are generated for each $(\mu, \sigma)$. The following 4 evaluation criteria are used to measure the performance of ([\[AMOPUL1\]](#AMOPUL1){reference-type="ref" reference="AMOPUL1"}) and ([\[AMOPUL2\]](#AMOPUL2){reference-type="ref" reference="AMOPUL2"}): - **Cumulative error (CE)**: $\sum_{t=1}^{N}\Vert x_t^*-r_{t}\Vert_{2}$ measures the cumulative precision of ([\[AMOPUL1\]](#AMOPUL1){reference-type="ref" reference="AMOPUL1"}), where $\{r_t\}_{t=0}^N$ are the generated reference outputs, $x_t^*=A^{\rm a*}x_{t-1}^*+u_{t-1}^{\rm a*}, t=1, 2, \ldots, N$, and $x_0^*=r_0$ for an optimal solution $(A^{\rm a*}, (u_0^{\rm a*}, u_1^{\rm a*}, \ldots, u_{N-1}^{\rm a*}))$ of ([\[AMOPUL1\]](#AMOPUL1){reference-type="ref" reference="AMOPUL1"}). 
- **Approximate cumulative error (ACE)**: $\sum_{t=1}^{N}\Vert x_t^{\rm a*}-r_{t}\Vert_{2}$ measures the approximate cumulative precision of ([\[AMOPUL2\]](#AMOPUL2){reference-type="ref" reference="AMOPUL2"}), where $\{r_t\}_{t=0}^N$ are the generated reference outputs, $x_t^{\rm a*}=A^{\rm a*}r_{t-1}+u_{t-1}^{\rm a*}$, $t=1, 2, \ldots, N$ for an optimal solution $(A^{\rm a*}, (u_0^{\rm a*}, u_1^{\rm a*}, \ldots, u_{N-1}^{\rm a*}))$ of ([\[AMOPUL2\]](#AMOPUL2){reference-type="ref" reference="AMOPUL2"}). - **Relative error of $A^{\rm a*}$ (RE$\boldsymbol{A^{\rm a*}}$)**: $\frac{\Vert A^{\rm a*}-\hat{A}\Vert_{F}}{\Vert \hat{A}\Vert_{F}}$ measures the difference between the true $\hat{A}$ and an approximate solution $A^{\rm a*}$. - **Relative error of $U^{\rm a*}$ (RE$\boldsymbol{U^{\rm a*}}$)**: $\frac{\Vert U^{\rm a*}-\hat{U}\Vert_{F}}{\Vert \hat{U}\Vert_{F}}$ measures the difference between the true $\hat{U}$ and an approximate solution $U^{\rm a*}$, where $\hat{U}\coloneqq(\hat{u}_{0}, \hat{u}_{1}, \ldots, \hat{u}_{N-1})$ and $U^{\rm a*}\coloneqq(u_{0}^{\rm a*}, u_{1}^{\rm a*}, \ldots, u_{N-1}^{\rm a*})$. Numerical results using real data for COVID-19 pandemic optimal control can be found in [@Xu2023]. All experiments are implemented using MATLAB R2019b on a laptop equipped with 4.00 GB memory and AMD Ryzen 3 2200U with Radeon Vega Mobile Gfx (2.50 GHz). We use MOSEK (version 9.1.9) (<https://www.mosek.com>) in CVX-w64 (version 2.2) (<http://cvxr.com/cvx/>) to solve all involved optimization problems. Five significant digits are kept for the numerical results shown in every table. ## Performance of AMOPUL1 {#subsection:4.1} We study the influence of the noise levels at the reference outputs on the performance of ([\[AMOPUL1\]](#AMOPUL1){reference-type="ref" reference="AMOPUL1"}). 
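The data-generation scheme and the cumulative-error criterion described above can be sketched as follows. This is a minimal NumPy illustration at a reduced size ($n=5$, $N=4$ rather than $n=m=p=100$, $N=30$); the variable names, sizes, and random seed are illustrative, not from the paper.

```python
import numpy as np

# Hedged sketch of the experimental setup described above, at reduced size.
rng = np.random.default_rng(0)
n, N = 5, 4
mu, sigma = 0.0, 0.1

r0 = rng.uniform(-0.5, 0.5, size=n)        # initial reference output r_0
A_hat = rng.normal(0.0, 0.1, size=(n, n))  # ideal transition matrix, N(0, 0.1^2)
# Ideal inputs u_hat_t = 1 * u_hat_t^s with u_hat_t^s uniform in (-0.5, 0.5).
u_hat = [np.full(n, rng.uniform(-0.5, 0.5)) for _ in range(N)]

# Ideal trajectory: x_hat_t = A_hat x_hat_{t-1} + u_hat_{t-1}, x_hat_0 = r_0.
x_hat = [r0]
for t in range(1, N + 1):
    x_hat.append(A_hat @ x_hat[t - 1] + u_hat[t - 1])

# Reference outputs: r_t = x_hat_t + e_t with Gaussian noise e_t ~ N(mu, sigma^2).
r = [r0] + [x_hat[t] + rng.normal(mu, sigma, size=n) for t in range(1, N + 1)]

# Cumulative-error criterion for a candidate (A, {u_t}):
# CE = sum_t ||x_t - r_t||_2 with x_t = A x_{t-1} + u_{t-1}, x_0 = r_0.
def cumulative_error(A, u_list, r_list):
    x = r_list[0]
    ce = 0.0
    for t in range(1, len(r_list)):
        x = A @ x + u_list[t - 1]
        ce += np.linalg.norm(x - r_list[t])
    return ce

# The ideal pair differs from the noisy references only through the noise.
print(cumulative_error(A_hat, u_hat, r))  # positive; small when sigma is small
```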
For simplicity, the constraint set $\mathcal{S}_{\rm\scriptscriptstyle M1}$ is set to be box constrained: $$\begin{aligned} &\{A|-0.4\leq a_{ij}\leq0.4, i, j=1, 2, \ldots, n\}\times\{U|-0.5\leq u_{t}^{i}\leq0.5, i=1, 2, \ldots, n, t=0, 1, \ldots, N-1\},\end{aligned}$$ which is bounded and hence ([\[AMOPUL1\]](#AMOPUL1){reference-type="ref" reference="AMOPUL1"}) is attainable. Then ([\[AMOPUL1\]](#AMOPUL1){reference-type="ref" reference="AMOPUL1"}) can be reformulated as $$\begin{aligned} \min_{A, U}&~~\sum_{t=1}^{N}\left\Vert Ar_{t-1}+u_{t-1}-r_{t}\right\Vert_{2}\\ {\rm s.t.} &~~r_{0}=x_{0},\nonumber\\ &~~-0.4\leq a_{ij}\leq0.4,~~i, j=1, 2, \ldots, n,\\ &~~-0.5\leq u_{t}^{i}\leq0.5,~~i=1, 2, \ldots, n, t=0, 1, \ldots, N-1.\nonumber\end{aligned}$$ A total of 20 instances of $\{r_t\}_{t=1}^{N}$ with perturbed noises for each $(\mu,\sigma)$ are generated with respect to $\mu=0, \sigma=$ 0.05, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8; and $\mu=1, \sigma=$ 2.5, 3, respectively. The means and standard deviations of the cumulative errors (CE) and relative errors of $A^{\rm a*}$ and $U^{\rm a*}$ (RE$A^{\rm a*}$ and RE$U^{\rm a*}$) are shown in Table [\[table:1\]](#table:1){reference-type="ref" reference="table:1"}, where $(A^{\rm a*}, U^{\rm a*})$ is the optimal solution of ([\[AMOPUL1\]](#AMOPUL1){reference-type="ref" reference="AMOPUL1"}). 
| ($\mu$, $\sigma$) | CE mean | CE std | RE$A^{\rm a*}$ mean | RE$A^{\rm a*}$ std | RE$U^{\rm a*}$ mean | RE$U^{\rm a*}$ std |
|---|---|---|---|---|---|---|
| (0, 0.05) | 1.9885e-08 | 4.9712e-08 | 1.0070e+00 | 4.8692e-03 | 7.7927e-01 | 9.3722e-03 |
| (0, 0.1) | 1.3192e-08 | 2.7314e-08 | 1.0190e+00 | 7.4878e-03 | 8.1443e-01 | 1.5714e-02 |
| (0, 0.2) | 1.8346e-08 | 2.5523e-08 | 1.0324e+00 | 8.0172e-03 | 8.8606e-01 | 1.2572e-02 |
| (0, 0.3) | 4.1105e-08 | 1.2360e-07 | 1.0485e+00 | 9.6251e-03 | 9.2916e-01 | 1.0614e-02 |
| (0, 0.4) | 7.6893e-09 | 2.0765e-08 | 1.0725e+00 | 1.1846e-02 | 9.5445e-01 | 1.1122e-02 |
| (0, 0.5) | 1.2974e-08 | 3.5385e-08 | 1.0949e+00 | 9.8607e-03 | 9.6895e-01 | 7.6003e-03 |
| (0, 0.6) | 3.6863e-08 | 5.0465e-08 | 1.1194e+00 | 1.7583e-02 | 9.7755e-01 | 7.0614e-03 |
| (0, 0.7) | 7.2976e-09 | 1.7096e-08 | 1.1439e+00 | 1.0911e-02 | 9.8106e-01 | 3.4137e-03 |
| (0, 0.8) | 1.5437e-08 | 2.9897e-08 | 1.1655e+00 | 2.1094e-02 | 9.8639e-01 | 3.6822e-03 |
| **(1, 2.5)** | **1.6904e+01** | **4.3973e+01** | **1.6462e+00** | **6.3211e-02** | **1.0176e+00** | **1.2296e-02** |
| **(1, 3.0)** | **8.6961e+01** | **9.3299e+01** | **1.8027e+00** | **7.9844e-02** | **1.0479e+00** | **2.4774e-02** |

Figures [\[figure:1\]](#figure:1){reference-type="ref" reference="figure:1"} and [\[figure:2\]](#figure:2){reference-type="ref" reference="figure:2"} plot the trends of CE, RE$A^{\rm a*}$ and RE$U^{\rm a*}$ shown in Table [\[table:1\]](#table:1){reference-type="ref" reference="table:1"} with respect to the perturbed $(\mu, \sigma)$ pairs, respectively, in which the length of each error bar above and below the mean value reflects the corresponding standard deviation. Observe that when the perturbed noise of $\{r_t\}_{t=1}^{N}$ is small (say, $\mu=0, \sigma=0.05$ - $0.8$), the means and standard deviations of CE are all less than $10^{-6}$ as shown in Table [\[table:1\]](#table:1){reference-type="ref" reference="table:1"} and Figure [\[figure:1\]](#figure:1){reference-type="ref" reference="figure:1"}. 
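One way to see why CE can be driven essentially to zero in this regime: in the box-constrained reformulation above, choosing $u_{t-1}=r_t-Ar_{t-1}$ zeroes every summand of the objective, provided these inputs stay inside the box on $u_t$. A minimal NumPy sketch with illustrative values (the box check on $u_t$ is deliberately omitted here):

```python
import numpy as np

# Hedged sketch: the objective of the box-constrained reformulation above can
# reach (near) zero by letting the inputs absorb the per-stage residuals.
# All sizes and values are illustrative; the box constraint on u_t is not
# enforced in this sketch.
rng = np.random.default_rng(1)
n, N = 4, 3
A = rng.uniform(-0.4, 0.4, size=(n, n))             # a feasible A for the box
r = [rng.uniform(-0.5, 0.5, size=n) for _ in range(N + 1)]

# Residual-absorbing inputs: u_{t-1} = r_t - A r_{t-1}.
u = [r[t] - A @ r[t - 1] for t in range(1, N + 1)]

# Objective sum_t ||A r_{t-1} + u_{t-1} - r_t||_2 vanishes by construction.
obj = sum(np.linalg.norm(A @ r[t - 1] + u[t - 1] - r[t]) for t in range(1, N + 1))
print(obj < 1e-12)  # -> True
```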
This shows that ([\[AMOPUL1\]](#AMOPUL1){reference-type="ref" reference="AMOPUL1"}) achieves accurate and robust solutions. When the perturbed noise becomes large (say, $\mu=1, \sigma=2.5$, $3$), the reference outputs $\{r_t\}_{t=1}^N$ become chaotic. In this case, the approximate output values fail to fit the given reference outputs, and CE becomes large and oscillating. On the other hand, RE$A^{\rm a*}$ and RE$U^{\rm a*}$ are more sensitive to the perturbed noises. The means of RE$A^{\rm a*}$ and RE$U^{\rm a*}$ in Figure [\[figure:2\]](#figure:2){reference-type="ref" reference="figure:2"} increase as the perturbed noise becomes larger. In particular, when the noise is large enough (say, $\mu=1, \sigma=2.5$, $3$), there is a significant increase in the mean and standard deviation of RE$A^{\rm a*}$. In summary, ([\[AMOPUL1\]](#AMOPUL1){reference-type="ref" reference="AMOPUL1"}) handles the system output quite well in fitting given reference outputs with small perturbed noises. ## Performance of AMOPUL2 {#subsection:4.2} We now study the performance of ([\[AMOPUL2\]](#AMOPUL2){reference-type="ref" reference="AMOPUL2"}) with perturbed noises at reference outputs $\{r_t\}_{t=1}^N$ and different control levels $\tilde{\omega}$ and $\{\omega_t\}_{t=0}^{N-1}$. The constraint set $\mathcal{S}_{\rm\scriptscriptstyle M2}$ is set to be the trivial constraint $\mathbb{R}^{n\times n}\times\mathbb{R}^{n\times N}$ for simplicity. 
Then ([\[AMOPUL2\]](#AMOPUL2){reference-type="ref" reference="AMOPUL2"}) can be reformulated as $$\begin{aligned} \begin{split}\label{amopul2_special} \min_{A, U}&~~\Vert A-\hat{A}\Vert_{F}\\ {\rm s.t.} &~~r_{0}=x_{0},\\ &~~\sum_{t=1}^{N}\left\Vert Ar_{t-1}+u_{t-1}-r_{t}\right\Vert_{2}\leq \tilde{\omega},\\ &~~\left\Vert u_{t}-\hat{u}_{t}\right\Vert_{2}\leq \omega_{t},~~t=0, 1, \ldots, N-1, \end{split}\end{aligned}$$ where $\hat{A}$ and $\hat{U}=(\hat{u}_0,\hat{u}_1,\dots,\hat{u}_{N-1})$ are taken to be the ideal data, and $\tilde{\omega}$ and $\{\omega_t\}_{t=0}^{N-1}$ are the control levels. ### Influence of noises {#subsubsection:4.2.1} With $\tilde{\omega}=10$ and $\omega_{t}=3$, $t=0,1,\dots,N-1$, fixed, we study the influence of the perturbed noise at the reference outputs on the performance of ([\[AMOPUL2\]](#AMOPUL2){reference-type="ref" reference="AMOPUL2"}). A total of 20 instances of $\{r_t\}_{t=1}^{N}$ with perturbed noises for each $(\mu,\sigma)$ with respect to $\mu=0$, and $\sigma=$ 0.05, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8 are employed. The means and standard deviations of the relative error of $A^{\rm a*}$ (RE$A^{\rm a*}$) and the approximate cumulative error (ACE) are shown in Table [\[table:2\]](#table:2){reference-type="ref" reference="table:2"}, where $(A^{\rm a*}, (u_0^{\rm a*}, u_1^{\rm a*}, \ldots, u_{N-1}^{\rm a*}))$ is an optimal solution of ([\[AMOPUL2\]](#AMOPUL2){reference-type="ref" reference="AMOPUL2"}). 
| ($\mu$, $\sigma$) | RE$A^{\rm a*}$ mean | RE$A^{\rm a*}$ std | ACE mean | ACE std |
|---|---|---|---|---|
| **(0, 0.05)** | **5.5770e-11** | **6.3925e-11** | 3.0853e-01 | 1.3365e-02 |
| **(0, 0.1)** | **3.4610e-12** | **1.1336e-12** | 4.7206e-01 | 1.7822e-02 |
| **(0, 0.2)** | **6.7624e-12** | **1.2553e-11** | 3.9937e+00 | 3.8810e-01 |
| (0, 0.3) | 5.6781e-02 | 6.4317e-03 | **1.0000e+01** | **5.2062e-08** |
| (0, 0.4) | 1.6691e-01 | 1.0648e-02 | **1.0000e+01** | **5.4277e-08** |
| (0, 0.5) | 2.5858e-01 | 9.7655e-03 | **1.0000e+01** | **1.1359e-07** |
| (0, 0.6) | 3.3659e-01 | 1.3048e-02 | **1.0000e+01** | **1.4509e-07** |
| (0, 0.7) | 3.9691e-01 | 9.0062e-03 | **1.0000e+01** | **1.3490e-07** |
| (0, 0.8) | 4.4853e-01 | 1.5996e-02 | **1.0000e+01** | **1.8154e-07** |

Table [\[table:2\]](#table:2){reference-type="ref" reference="table:2"} shows that for the reference outputs with small noises (say, $\mu=0, \sigma=0.05$ - $0.2$), the means and standard deviations of RE$A^{\rm a*}$ are all less than $10^{-10}$, while the means of ACE are less than 10 ($=\tilde{\omega}$) with small standard deviations, i.e., the strict inequality ACE $<\tilde{\omega}$ is satisfied in the second constraint of ([\[amopul2_special\]](#amopul2_special){reference-type="ref" reference="amopul2_special"}), which means that this constraint is inactive. When the noise becomes larger (say, $\mu=0$, $\sigma=0.3$-$0.8$), the mean of RE$A^{\rm a*}$ increases drastically, while the means of ACE become 10 ($=\tilde{\omega}$) with standard deviations being less than $10^{-6}$, i.e., the equality ACE $=\tilde{\omega}$ is almost binding in the second constraint of ([\[amopul2_special\]](#amopul2_special){reference-type="ref" reference="amopul2_special"}), which means the constraint becomes active. Therefore, with fixed control levels $\tilde{\omega}$ and $\{\omega_{t}\}_{t=0}^{N-1}$, the error of recovering the true $\hat{A}$ in ([\[AMOPUL2\]](#AMOPUL2){reference-type="ref" reference="AMOPUL2"}) mainly comes from the perturbed noises at the reference outputs. 
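The constraint checks behind this active/inactive discussion amount to a direct feasibility test of the reformulation above for a candidate pair $(A, U)$. The following minimal NumPy sketch uses illustrative sizes and values, not the paper's data:

```python
import numpy as np

# Hedged sketch: feasibility test for the special AMOPUL2 reformulation above,
# given a candidate (A, {u_t}). All numbers below are illustrative placeholders.
def is_feasible(A, U, r, u_hat, omega_tilde, omega):
    """Check the cumulative-error bound and the per-stage input bounds."""
    N = len(U)
    # Approximate cumulative error: sum_t ||A r_{t-1} + u_{t-1} - r_t||_2.
    ace = sum(np.linalg.norm(A @ r[t - 1] + U[t - 1] - r[t])
              for t in range(1, N + 1))
    # Input constraints: ||u_t - u_hat_t||_2 <= omega_t for every t.
    inputs_ok = all(np.linalg.norm(U[t] - u_hat[t]) <= omega[t]
                    for t in range(N))
    return ace <= omega_tilde and inputs_ok

rng = np.random.default_rng(2)
n, N = 3, 2
r = [rng.uniform(-0.5, 0.5, size=n) for _ in range(N + 1)]
u_hat = [rng.uniform(-0.5, 0.5, size=n) for _ in range(N)]

# Taking A = 0 and U = u_hat, the input bounds hold with full slack, so
# feasibility reduces to the cumulative-error bound omega_tilde.
A0 = np.zeros((n, n))
print(is_feasible(A0, u_hat, r, u_hat, omega_tilde=10.0, omega=[3.0] * N))  # -> True
```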
The smaller the perturbed noises are, the higher the accuracy of recovering the true $\hat{A}$. ### Influence of control levels {#subsubsection:4.2.2} With the noise level of $\{r_t\}_{t=1}^N$ fixed at $\mu=0$ and $\sigma=0.5$, we study the influence of the control levels $\tilde{\omega}$ and $\{\omega_t\}_{t=0}^{N-1}$ on the performance of ([\[AMOPUL2\]](#AMOPUL2){reference-type="ref" reference="AMOPUL2"}). Set $\tilde{\omega}=$ 2, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110, 120, 130, 140, 150, 160, and $\omega_t=$ 3, 4.5, 6, 8, $t=0, 1,\dots,N-1$, respectively. The means and standard deviations of RE$A^{\rm a*}$ and ACE are shown in Table [\[table:3\]](#table:3){reference-type="ref" reference="table:3"}, where $(A^{\rm a*}, (u_0^{\rm a*}, u_1^{\rm a*}, \ldots, u_{N-1}^{\rm a*}))$ is an optimal solution of (AMOPUL2).

| $\left(\{\omega_t\}, \tilde{\omega}\right)$ | RE$A^{\rm a*}$ mean | RE$A^{\rm a*}$ std | ACE mean | ACE std | $\left(\{\omega_t\}, \tilde{\omega}\right)$ | RE$A^{\rm a*}$ mean | RE$A^{\rm a*}$ std | ACE mean | ACE std |
|---|---|---|---|---|---|---|---|---|---|
| (3, 2) | 2.9161e-01 | 9.8335e-03 | 2.0000e+00 | 5.6203e-08 | (6, 2) | 5.6497e-02 | 8.0709e-03 | 2.0000e+00 | 4.1237e-08 |
| (3, 10) | 2.5858e-01 | 9.7655e-03 | 1.0000e+01 | 1.1359e-07 | (6, 10) | 3.5717e-02 | 7.4733e-03 | 1.0000e+01 | 7.6000e-08 |
| (3, 20) | 2.2544e-01 | 9.5567e-03 | 2.0000e+01 | 1.2352e-07 | (6, 20) | 1.5907e-02 | 6.7876e-03 | 2.0000e+01 | 6.3577e-08 |
| (3, 30) | 1.9594e-01 | 9.4152e-03 | 3.0000e+01 | 1.0370e-07 | (6, 30) | 1.8893e-03 | 3.3554e-03 | 2.8972e+01 | 1.3560e+00 |
| (3, 40) | 1.6855e-01 | 9.2127e-03 | 4.0000e+01 | 1.2075e-07 | (6, 40) | 2.1835e-11 | 9.2773e-11 | 3.3480e+01 | 2.4152e+00 |
| (3, 50) | 1.4263e-01 | 9.0004e-03 | 5.0000e+01 | 1.4636e-07 | (6, 50) | 2.4596e-13 | 9.5882e-13 | 3.6904e+01 | 2.2087e+00 |
| (3, 60) | 1.1790e-01 | 8.7597e-03 | 6.0000e+01 | 1.4685e-07 | (6, 60) | 2.1943e-11 | 4.4039e-11 | 4.1768e+01 | 2.1271e+00 |
| (3, 70) | 9.4359e-02 | 8.4786e-03 | 7.0000e+01 | 1.6536e-07 | (6, 70) | 2.5626e-12 | 3.0437e-12 | 4.5740e+01 | 2.2466e+00 |
| (3, 80) | 7.2036e-02 | 8.1467e-03 | 8.0000e+01 | 1.1959e-07 | (6, 80) | 2.6641e-12 | 3.4618e-12 | 5.1083e+01 | 2.2815e+00 |
| (3, 90) | 5.1010e-02 | 7.7407e-03 | 9.0000e+01 | 9.4728e-08 | (6, 90) | 5.0958e-12 | 3.5629e-12 | 5.7665e+01 | 2.0483e+00 |
| (3, 100) | 3.1406e-02 | 7.2410e-03 | 1.0000e+02 | 1.1437e-07 | (6, 100) | 7.8724e-12 | 5.3282e-12 | 6.4275e+01 | 1.8747e+00 |
| (3, 110) | 1.3400e-02 | 6.6400e-03 | 1.1000e+02 | 1.1805e-07 | (6, 110) | 1.1513e-11 | 8.8819e-12 | 7.0651e+01 | 1.7933e+00 |
| (3, 120) | 1.2051e-03 | 2.4367e-03 | 1.1852e+02 | 1.6358e+00 | (6, 120) | 3.0276e-11 | 4.5556e-11 | 7.6993e+01 | 1.6993e+00 |
| (3, **130**) | **1.7433e-14** | **3.3912e-14** | **1.2337e+02** | **2.1901e+00** | (6, **130**) | **3.1542e-11** | **7.0938e-11** | **8.2822e+01** | **1.6659e+00** |
| (3, **140**) | **7.7324e-15** | **4.8385e-15** | **1.2786e+02** | **2.1799e+00** | (6, **140**) | **1.5461e-11** | **3.1048e-11** | **8.7964e+01** | **1.7284e+00** |
| (3, **150**) | **1.2545e-14** | **5.6427e-15** | **1.3255e+02** | **2.1943e+00** | (6, **150**) | **5.7891e-12** | **2.9446e-12** | **9.2701e+01** | **1.7949e+00** |
| (3, **160**) | **1.2572e-14** | **6.5987e-15** | **1.3742e+02** | **2.1906e+00** | (6, **160**) | **8.2313e-13** | **9.8118e-13** | **9.6858e+01** | **1.8979e+00** |
| (4.5, 2) | 1.5888e-01 | 9.6910e-03 | 2.0000e+00 | 4.2127e-08 | (**8**, 2) | **1.5031e-13** | **2.8242e-13** | **6.3282e-01** | **3.6554e-01** |
| (4.5, 10) | 1.3281e-01 | 9.1542e-03 | 1.0000e+01 | 7.4270e-08 | (**8**, 10) | **5.7621e-13** | **1.3298e-12** | **1.7918e+00** | **5.7976e-01** |
| (4.5, 20) | 1.0664e-01 | 8.7720e-03 | 2.0000e+01 | 6.9067e-08 | (**8**, 20) | **1.7518e-12** | **2.7700e-12** | **5.6229e+00** | **9.4325e-01** |
| (4.5, 30) | 8.3178e-02 | 8.4139e-03 | 3.0000e+01 | 8.7391e-08 | (**8**, 30) | **2.5078e-12** | **4.0871e-12** | **1.1458e+01** | **7.7308e-01** |
| (4.5, 40) | 6.1383e-02 | 8.0030e-03 | 4.0000e+01 | 6.1359e-08 | (**8**, 40) | **1.2160e-12** | **1.3510e-12** | **1.8148e+01** | **1.4540e+00** |
| (4.5, 50) | 4.1028e-02 | 7.5202e-03 | 5.0000e+01 | 1.3419e-07 | (**8**, 50) | **4.8283e-13** | **6.0433e-13** | **2.2691e+01** | **1.1005e+00** |
| (4.5, 60) | 2.2192e-02 | 6.9524e-03 | 6.0000e+01 | 9.8571e-08 | (**8**, 60) | **8.7809e-13** | **3.8889e-12** | **2.5669e+01** | **1.0196e+00** |
| (4.5, 70) | 5.8491e-03 | 5.1886e-03 | 6.9716e+01 | 6.2190e-01 | (**8**, 70) | **9.3531e-12** | **1.7779e-11** | **2.7447e+01** | **1.1129e+00** |
| (4.5, 80) | 3.5200e-06 | 1.5742e-05 | 7.6419e+01 | 2.0065e+00 | (**8**, 80) | **8.5084e-12** | **3.9371e-12** | **2.8603e+01** | **1.1621e+00** |
| (4.5, 90) | 1.0881e-14 | 1.3872e-14 | 8.1689e+01 | 1.9256e+00 | (**8**, 90) | **7.3759e-12** | **8.0317e-12** | **2.8347e+01** | **1.2325e+00** |
| (4.5, 100) | 7.6680e-12 | 3.4256e-11 | 8.7126e+01 | 1.8341e+00 | (**8**, 100) | **3.9721e-12** | **3.1534e-12** | **2.7390e+01** | **1.8718e+00** |
| (4.5, 110) | 7.3171e-13 | 3.2461e-12 | 9.2319e+01 | 1.8332e+00 | (**8**, 110) | **3.2240e-11** | **4.2381e-11** | **2.8903e+01** | **3.1120e+00** |
| (4.5, 120) | 5.2741e-13 | 2.3420e-12 | 9.7047e+01 | 1.8736e+00 | (**8**, 120) | **7.3876e-15** | **9.6040e-15** | **3.3420e+01** | **2.7333e+00** |
| (4.5, **130**) | **3.7929e-13** | **1.6870e-12** | **1.0132e+02** | **1.9350e+00** | (**8**, **130**) | **5.8003e-14** | **1.0165e-13** | **4.3559e+01** | **2.7069e+00** |
| (4.5, **140**) | **9.6352e-12** | **4.0316e-11** | **1.0527e+02** | **1.9955e+00** | (**8**, **140**) | **3.6111e-11** | **7.4397e-11** | **5.4163e+01** | **2.5081e+00** |
| (4.5, **150**) | **7.8673e-12** | **3.4276e-11** | **1.0906e+02** | **2.0712e+00** | (**8**, **150**) | **2.5740e-14** | **5.9137e-14** | **6.2407e+01** | **1.9772e+00** |
| (4.5, **160**) | **5.8232e-11** | **1.1314e-10** | **1.1278e+02** | **2.1251e+00** | (**8**, **160**) | **1.5156e-11** | **3.5062e-11** | **6.9692e+01** | **1.7940e+00** |

Figures [\[figure:3\]](#figure:3){reference-type="ref" reference="figure:3"} and [\[figure:4\]](#figure:4){reference-type="ref" reference="figure:4"} plot the trends of RE$A^{\rm a*}$ and ACE shown 
in Table [\[table:3\]](#table:3){reference-type="ref" reference="table:3"} with respect to different values of $\tilde{\omega}$ and $\{\omega_t\}_{t=0}^{N-1}$, respectively, in which the length of each error bar above and below the mean value reflects the corresponding standard deviation. Two major observations can be made as follows. **A. Influence of $\boldsymbol{\tilde{\omega}}$** For each fixed $\omega_{t}=$ 3, 4.5, 6, $t=0, 1, \ldots, N-1$, the mean of RE$A^{\rm a*}$ decreases to almost zero as $\tilde{\omega}$ increases as shown in Figure [\[figure:3\]](#figure:3){reference-type="ref" reference="figure:3"}. This implies that we can obtain a higher accuracy of recovering the transition matrix through a more relaxed constraint on the cumulative precision in the second constraint of ([\[amopul2_special\]](#amopul2_special){reference-type="ref" reference="amopul2_special"}). However, the means of ACE have an increasing trend with respect to $\tilde{\omega}$ as shown in Figure [\[figure:4\]](#figure:4){reference-type="ref" reference="figure:4"} for each fixed $\omega_t$. Observe that when $\tilde{\omega}$ is small, the ACE curves almost coincide with the line $y=x$, i.e., the equality ACE $=\tilde{\omega}$ is almost binding in the second constraint of ([\[amopul2_special\]](#amopul2_special){reference-type="ref" reference="amopul2_special"}), which means that this constraint becomes active. When $\tilde{\omega}$ becomes larger, the ACE curves lie below the line $y=x$, i.e., the strict inequality ACE $<\tilde{\omega}$ is satisfied in the second constraint of ([\[amopul2_special\]](#amopul2_special){reference-type="ref" reference="amopul2_special"}), which means that it becomes inactive. The trade-off between the accuracy of transition matrix recovery and the approximate cumulative error shows that a proper value of $\tilde{\omega}$ plays an important role in the performance of ([\[AMOPUL2\]](#AMOPUL2){reference-type="ref" reference="AMOPUL2"}). 
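The active/inactive reading used here amounts to comparing ACE with $\tilde{\omega}$ up to a numerical tolerance. A minimal sketch; the tolerance is illustrative and the sample ACE values are taken from Table 3:

```python
# Hedged sketch: classify the cumulative-error constraint as (numerically)
# active when ACE is within a tolerance of omega_tilde, inactive otherwise.
# The tolerance value is an illustrative choice, not from the paper.
def constraint_status(ace: float, omega_tilde: float, tol: float = 1e-6) -> str:
    return "active" if abs(ace - omega_tilde) <= tol else "inactive"

# Mirrors the two regimes discussed above: a small omega_tilde binds
# (e.g. ACE = 10 at omega_tilde = 10), a large one does not
# (e.g. ACE = 123.37 at omega_tilde = 130, from the (3, 130) row of Table 3).
print(constraint_status(10.0, 10.0))     # -> active
print(constraint_status(123.37, 130.0))  # -> inactive
```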
In an extreme case with sufficiently large $\omega_t$ (say, $\omega_{t}=$ 8, $t=0, 1, \ldots, N-1$), since the third constraint of ([\[amopul2_special\]](#amopul2_special){reference-type="ref" reference="amopul2_special"}) is sufficiently relaxed, RE$A^{\rm a*}$ is almost equal to zero for each $\tilde{\omega}$ as shown in Figure [\[figure:3\]](#figure:3){reference-type="ref" reference="figure:3"}, while its ACE curve lies below the line $y=x$ as shown in Figure [\[figure:4\]](#figure:4){reference-type="ref" reference="figure:4"}, i.e., the strict inequality ACE $<\tilde{\omega}$ is satisfied in the second constraint of ([\[amopul2_special\]](#amopul2_special){reference-type="ref" reference="amopul2_special"}), which means the constraint becomes inactive. **B. Influence of $\boldsymbol{\{\omega_{t}\}_{t=0}^{N-1}}$** For each fixed value of $\tilde{\omega}$, the means of RE$A^{\rm a*}$ and ACE both decrease as $\omega_{t}$ increases as shown in Figures [\[figure:3\]](#figure:3){reference-type="ref" reference="figure:3"} and [\[figure:4\]](#figure:4){reference-type="ref" reference="figure:4"}, respectively, while we gradually lose the accuracy of recovery of the true $\hat{U}$. In the extreme cases with sufficiently large $\tilde{\omega}=$ 130, 140, 150, 160, i.e., the second constraint of ([\[amopul2_special\]](#amopul2_special){reference-type="ref" reference="amopul2_special"}) is much relaxed, RE$A^{\rm a*}$ is almost equal to zero as shown in Figure [\[figure:3\]](#figure:3){reference-type="ref" reference="figure:3"}. In addition, the corresponding curves in Figure [\[figure:4\]](#figure:4){reference-type="ref" reference="figure:4"} lie below the line $y=x$, i.e., the strict inequality ACE $<\tilde{\omega}$ is satisfied in the second constraint of ([\[amopul2_special\]](#amopul2_special){reference-type="ref" reference="amopul2_special"}), which means this constraint becomes inactive. 
Consequently, the values of the control levels $\tilde{\omega}$ and $\{\omega_{t}\}_{t=0}^{N-1}$ play key roles in the performance of ([\[AMOPUL2\]](#AMOPUL2){reference-type="ref" reference="AMOPUL2"}). On one hand, when the values of $\tilde{\omega}$ and $\{\omega_{t}\}_{t=0}^{N-1}$ are small, since the second constraint of ([\[amopul2_special\]](#amopul2_special){reference-type="ref" reference="amopul2_special"}) becomes tight, ACE becomes small, i.e., the cumulative precision in the second constraint of ([\[amopul2_special\]](#amopul2_special){reference-type="ref" reference="amopul2_special"}) becomes high. However, the value of RE$A^{\rm a*}$ becomes large, i.e., the recovery precision of the true $\hat{A}$ becomes low. In the extreme cases with sufficiently small $\tilde{\omega}$ and $\{\omega_t\}_{t=0}^{N-1}$, i.e., extremely high requirements of the cumulative precision in the second constraint of ([\[amopul2_special\]](#amopul2_special){reference-type="ref" reference="amopul2_special"}) and the recovery precision of the true $(\hat{u}_0, \hat{u}_1, \ldots, \hat{u}_{N-1})$ in the third constraint of ([\[amopul2_special\]](#amopul2_special){reference-type="ref" reference="amopul2_special"}), ([\[AMOPUL2\]](#AMOPUL2){reference-type="ref" reference="AMOPUL2"}) may be infeasible, for example, setting $\tilde{\omega}=\omega_t=0, t=0, 1, \ldots, N-1$. On the other hand, when the values of $\tilde{\omega}$ and $\{\omega_{t}\}_{t=0}^{N-1}$ become sufficiently large, the second and third constraints of ([\[amopul2_special\]](#amopul2_special){reference-type="ref" reference="amopul2_special"}) may become inactive, while RE$A^{\rm a*}$ becomes small, i.e., the recovery precision of the true $\hat{A}$ becomes high. 
Based on the results for ([\[AMOPUL1\]](#AMOPUL1){reference-type="ref" reference="AMOPUL1"}) and ([\[AMOPUL2\]](#AMOPUL2){reference-type="ref" reference="AMOPUL2"}) in Subsection [4.1](#subsection:4.1){reference-type="ref" reference="subsection:4.1"} and Subsubsection [4.2.1](#subsubsection:4.2.1){reference-type="ref" reference="subsubsection:4.2.1"}, we can see that the accuracy of the given reference outputs $\{r_t\}_{t=1}^N$ is key to the performance of ([\[AMOPUL\]](#AMOPUL){reference-type="ref" reference="AMOPUL"}). More accurate reference outputs $\{r_t\}_{t=1}^N$ lead to better performance of ([\[AMOPUL\]](#AMOPUL){reference-type="ref" reference="AMOPUL"}). In addition, the results of ([\[AMOPUL2\]](#AMOPUL2){reference-type="ref" reference="AMOPUL2"}) in Subsubsection [4.2.2](#subsubsection:4.2.2){reference-type="ref" reference="subsubsection:4.2.2"} indicate that the control level of the approximate cumulative error plays an important role in the performance of ([\[AMOPUL\]](#AMOPUL){reference-type="ref" reference="AMOPUL"}). With a proper value of the control level, the proposed SDP approximation model may perform very well. # Concluding remarks {#section:5} This paper studies a matrix optimization problem over an uncertain linear system on a finite horizon, in which the uncertain transition matrix is regarded as a decision variable. To decouple the entanglement of decision variables caused by the corresponding multivariate polynomial constraints for computational efficiency, we construct a polynomial-time solvable SDP approximation model by taking the given reference values as system outputs at each stage. Theoretical and numerical results show that the reference values of the outputs and the control levels play key roles in the proposed approach. The SDP approximation performs very well when the noises of the reference outputs are small and the control levels are proper. 
One potential extension of this work is to treat other parameter matrices of an uncertain linear system as decision variables. The entangled decision variables can be similarly decoupled by using the given reference values as substitutions to construct an SDP approximation model. Another potential extension is to study the matrix optimization problem over an uncertain nonlinear system. A possible way is to approximate the uncertain nonlinear system by a linear system using the cutting plane/simplicial decomposition methods [@Bertsekas2015], and then to develop a polynomial-time solvable SDP approximation model.

# References

- A. Alessio, A. Bemporad, A survey on explicit model predictive control, in: L. Magni, D. M. Raimondo, F. Allgöwer (eds.), Nonlinear Model Predictive Control: Towards New Challenging Applications, Lecture Notes in Control and Information Sciences, vol. 384, pp. 345-369, Springer-Verlag, Berlin Heidelberg, 2009.
- V. Balakrishnan, L. Vandenberghe, Semidefinite programming duality and linear time-invariant systems, IEEE Trans. Autom. Control, 48(1), 30-41, 2003.
- W. Barrett, Hermitian and positive definite matrices, in: L. Hogben (ed.), Handbook of Linear Algebra, 2nd edition, Chapter 9, pp. 1-13, CRC Press, Boca Raton, FL, 2014.
- A. Bemporad, M. Morari, V. Dua, E. N. Pistikopoulos, The explicit solution of model predictive control via multiparametric quadratic programming, in: Proceedings of the 2000 American Control Conference, ACC (IEEE Cat. No.00CH36334), vol. 2, pp. 872-876, 2000.
- A. Ben-Tal, A. Nemirovski, Lectures on Modern Convex Optimization: Analysis, Algorithms, and Engineering Applications, Society for Industrial and Applied Mathematics, Philadelphia, PA, 2001.
- D. P. Bertsekas, Convex Optimization Algorithms, Athena Scientific, Nashua, NH, 2015.
- T. B. Blanco, M. Cannon, B. D. Moor, On efficient computation of low-complexity controlled invariant sets for uncertain linear systems, Int. J. Control, 83(7), 1339-1346, 2010.
- M. Bujarbaruah, S. H. Nair, F. Borrelli, A semi-definite programming approach to robust adaptive MPC under state dependent uncertainty, in: 2020 European Control Conference (ECC), pp. 960-965, 2020.
- S. D. Cairano, Indirect adaptive model predictive control for linear systems with polytopic uncertainty, in: 2016 American Control Conference (ACC), pp. 3570-3575, 2016.
- E. F. Camacho, C. Bordons, Model Predictive Control, Springer-Verlag, London, 2007.
- L. Chen, S. He, S. Zhang, Tight bounds for some risk measures, with applications to robust portfolio selection, Oper. Res., 59(4), 847-865, 2011.
- A. Cohen, U. Shaked, Robust discrete-time $H_{\infty}$-optimal tracking with preview, Int. J. Robust Nonlinear Control, 8(1), 29-37, 1998.
- F. A. Cuzzola, J. C. Geromel, M. Morari, An improved approach for constrained robust model predictive control, Automatica, 38(7), 1183-1189, 2002.
- Z. Duan, J. Zhang, C. Zhang, E. Mosca, Robust $H_2$ and $H_{\infty}$ filtering for uncertain linear systems, Automatica, 42(11), 1919-1926, 2006.
- E. Gaar, F. Rendl, A computational study of exact subgraph based SDP bounds for Max-Cut, stable set and coloring, Math. Program., 183, 283-308, 2020.
- L. E. Ghaoui, M. Oks, F. Oustry, Worst-case value-at-risk and robust portfolio optimization: a conic programming approach, Oper. Res., 51(4), 543-556, 2003.
- M. X. Goemans, D. P. Williamson, Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming, Journal of the ACM, 42(6), 1115-1145, 1995.
- D. A. Guimarães, A. S. da Cunha, D. L. Pereira, Semidefinite programming lower bounds and branch-and-bound algorithms for the quadratic minimum spanning tree problem, Eur. J. Oper. Res., 280(1), 46-58, 2020.
- Q. Han, Y. Ye, J. Zhang, An improved rounding method and semidefinite programming relaxation for graph partition, Math. Program., 92, 509-535, 2002.
- W. P. M. H. Heemels, J. Daafouz, G. Millerioux, Observer-based control of discrete-time LPV systems with uncertain parameters, IEEE Trans. Autom. Control, 55(9), 2130-2135, 2010.
- R. A. Horn, F. Zhang, Basic properties of the Schur complement, in: F. Zhang (ed.), The Schur Complement and Its Applications, Numerical Methods and Algorithms, vol. 4, pp. 17-46, Springer Science+Business Media, New York, NY, 2005.
- M. V. Kothare, V. Balakrishnan, M. Morari, Robust constrained model predictive control using linear matrix inequalities, Automatica, 32(10), 1361-1379, 1996.
- D. Li, Y. Xi, The feedback robust MPC for LPV systems with bounded rates of parameter changes, IEEE Trans. Autom. Control, 55(2), 503-507, 2010.
- X. Li, M. Wang, A. Zhang, Estimation of Markov chain via rank-constrained likelihood, in: J. Dy, A. Krause (eds.), Proceedings of the 35th International Conference on Machine Learning, Proceedings of Machine Learning Research, vol. 80, pp. 3033-3042, PMLR, 2018.
- Q. Liu, Z. Chen, R. Su, Input-Output Analysis (in Chinese), China Renmin University Press, Beijing, China, 2020.
- C. Lu, Y.-F. Liu, W.-Q. Zhang, S. Zhang, Tightness of a new and enhanced semidefinite relaxation for MIMO detection, SIAM J. Optim., 29(1), 719-742, 2019.
- R. E. Miller, P. D. Blair, Input-Output Analysis: Foundations and Extensions, Cambridge University Press, New York, 2009.
- A. Mobasher, M. Taherzadeh, R. Sotirov, A. K. Khandani, A near-maximum-likelihood decoding algorithm for MIMO systems based on semi-definite programming, IEEE Trans. Inf. Theory, 53(11), 3869-3886, 2007.
- Y. Nesterov, Squared functional systems and optimization problems, in: H. Frenk, K. Roos, T. Terlaky, S. Zhang (eds.), High Performance Optimization, Applied Optimization, vol. 33, pp. 405-440, Springer Science+Business Media, Dordrecht, The Netherlands, 2000.
- Y. Nesterov, A. Nemirovskii, Interior-Point Polynomial Algorithms in Convex Programming, Society for Industrial and Applied Mathematics, Philadelphia, PA, 1994.
- A. Parsi, A. Iannelli, R. S. Smith, An explicit dual control approach for constrained reference tracking of uncertain linear systems, IEEE Trans. Autom. Control, 68(5), 2652-2666, 2023.
- R. Penrose, On best approximate solutions of linear matrix equations, Mathematical Proceedings of the Cambridge Philosophical Society, 52(1), 17-19, 1956.
- T. Tanaka, P. M. Esfahani, S. K. Mitter, LQG control with minimum directed information: semidefinite programming approach, IEEE Trans. Autom. Control, 63(1), 37-52, 2018.
- T. Terlaky (ed.), Interior Point Methods of Mathematical Programming, Kluwer Academic Publishers, Dordrecht, The Netherlands, 1996.
- N. S. Tripathy, I. N. Kar, K. Paul, Stabilization of uncertain discrete-time linear system with limited communication, IEEE Trans. Autom. Control, 62(9), 4727-4733, 2017.
- Z. Wan, M. V. Kothare, An efficient off-line formulation of robust model predictive control using linear matrix inequalities, Automatica, 39(5), 837-846, 2003.
- W. Xing, S.-C. Fang, Introduction to Linear Conic Optimization (in Chinese), Tsinghua University Press, Beijing, China, 2020.
- J. Xu, W. Xing, SIR type COVID-19 multi-stage optimal control model (in Chinese), Operations Research Transactions, 27(1), 43-52, 2023.
- D. D. Yao, S. Zhang, X. Y. Zhou, Stochastic linear-quadratic control via semidefinite programming, SIAM J. Control Optim., 40(3), 801-823, 2001.
- E. Zeidler, Applied Functional Analysis: Applications to Mathematical Physics, Springer Science+Business Media, New York, NY, 1995.
- A. Zhang, M. Wang, Spectral state compression of Markov processes, IEEE Trans. Inf. Theory, 66(5), 3202-3231, 2020.
- Z. Zhu, X. Li, M. Wang, A. Zhang, Learning Markov models via low-rank optimization, Oper. Res., 70(4), 2384-2398, 2021.

[^1]:  Department of Mathematical Sciences, Tsinghua University, Beijing 100084, China. Email: xujt19\@mails.tsinghua.edu.cn

[^2]:  Edward P. Fitts Department of Industrial and Systems Engineering, North Carolina State University, Raleigh, NC 27606, USA. Email: fang\@ncsu.edu

[^3]:  Department of Mathematical Sciences, Tsinghua University, Beijing 100084, China. Email: wxing\@tsinghua.edu.cn

[^4]:  Fang's research was supported by the Walter Clark Professor Endowment of the North Carolina State University. Xing's research was supported by the National Natural Science Foundation of China (Grant No. 11771243).
arxiv_math
{ "id": "2309.13628", "title": "Semidefinite Programming Approximation for a Matrix Optimization Problem\n over an Uncertain Linear System", "authors": "Jintao Xu, Shu-Cherng Fang, Wenxun Xing", "categories": "math.OC", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | In 1976, Donald Cartwright and John McMullen characterized axiomatically the Riemann-Liouville fractional integral in their paper [@CarMc], which was published in 1978. The motivation for their work was to answer affirmatively a conjecture stated by J. S. Lew a few years before, in 1972. Essentially, their "Cartwright-McMullen theorem in fractional calculus" proved that the Riemann-Liouville fractional integral is the only continuous extension of the usual integral operator to positive real orders, in such a way that the Index Law holds. In this paper, we propose an analogous result for the uniqueness of the extension of the Stieltjes integral operator, in the case of a smooth integrator. address: | Departamento de Análise Matemática, Estatística e Optimización, Facultade de Matemáticas, Universidade de Santiago de Compostela; Rúa Lope Gómez de Marzoa, 6, ZIP: 15782, Spain;\ ORCID: <https://orcid.org/0000-0003-2266-2075> author: - Daniel Cao Labora dedication: Dedicated to the former work of Donald Cartwright and John R. McMullen, from University of Sydney, that settled a fundamental result in the discipline of fractional calculus in 1978. title: An extension of the Cartwright-McMullen theorem in fractional calculus for the smooth Stieltjes case --- # Basic concepts Initially, we introduce the fundamental tools that are required to develop this work. More concretely, we give some brief notions concerning the Riemann-Liouville fractional integral and classical results about convolutions. During the rest of the document, we will assume that $a,b \in \mathbb{R}$ are real numbers and $\alpha, \beta \in \mathbb{R}^+$ are strictly positive real numbers. It is important to take into account that we will be dealing most of the time with the Banach space $L^p[a,b]$ for $p \in [1,\infty]$ and, hence, all the functional identities that we describe will hold in the whole interval $[a,b]$ except for, at most, a set of zero Lebesgue measure. 
We will use the notation $\mathcal{B}\left(L^p[a,b]\right)$ for the space of linear continuous operators from $L^p[a,b]$ to itself. Finally, we highlight that all the deductions are obviously still valid if we replace $L^p[a,b]$ by $\mathcal{C}([a,b])$. **Definition 1**. *Given $\alpha \in \mathbb{R}^+$ and $a \in \mathbb{R}$ we define the Riemann-Liouville fractional integral of order $\alpha$ and base point $a$ of a function $f \in L^p[a,b]$ as $$\left(I_{a^+}^{\alpha}f\right)(t):=\int_a^t \frac{(t-s)^{\alpha-1}}{\Gamma(\alpha)}\,f(s) \, ds = \int_a^t \frac{(s-a)^{\alpha-1}}{\Gamma(\alpha)}\,f(t-s+a) \, ds.$$* The exact expression for $I_{a^+}^{\alpha}$ is not extremely relevant for our purposes. Of course, we observe that for $\alpha=1$ we recover the classical integral operator with base point $a$. The important fact that we need to take into account is the following proposition, which is a compilation of well-known results that can be found in [@Samko]. **Proposition 2**. *The fractional integral operator $I_{a^+}^{\alpha}$ satisfies several properties:* - *The map $I_{a^+}^{\alpha}: L^p[a,b] \longrightarrow L^p[a,b]$ is well-defined.* - *$I_{a^+}^1$ is the usual integral operator with base point $a$.* - *We have the Index Law $I_{a^+}^{\alpha+\beta} = I_{a^+}^{\alpha} \circ I_{a^+}^{\beta}$ for any $\alpha, \beta>0$.* - *The map $\mathbb{R}^+ \to \mathcal{B}\left(L^p[a,b]\right)$ given by $\alpha \to I_{a^+}^{\alpha}$ is continuous.* We are also interested in the "Riemann-Liouville fractional integral with respect to a function $h$", which is also defined in [@Samko]. Analogously to what is done in [@Samko], in the rest of the document we will assume that $h \in \mathcal{C}^1[a,b]$ with $h'(t) \neq 0$ for any $t \in [a,b]$. Besides, to consider only the case of a left base point of integration, we make the assumption $h'(t)>0$. Otherwise $h'(t)<0$, and the same result can be obtained after considering a fractional integral with right base point. 
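As a quick sanity check of the Index Law in Proposition 2, one can test it on monomials using the classical power rule $I_{a^+}^{\alpha}\,(t-a)^{\beta} = \frac{\Gamma(\beta+1)}{\Gamma(\alpha+\beta+1)}\,(t-a)^{\alpha+\beta}$ (standard, see [@Samko]); the following Python sketch is not part of the paper.

```python
from math import gamma, isclose

def frac_int_monomial(alpha, beta):
    """Coefficient c in I_{a+}^alpha (t-a)^beta = c * (t-a)^(alpha+beta),
    given by the power rule c = Gamma(beta+1) / Gamma(alpha+beta+1)."""
    return gamma(beta + 1.0) / gamma(alpha + beta + 1.0)

# Index Law on monomials: applying I^beta then I^alpha to (t-a)^k
# must agree with applying I^(alpha+beta) once.
for k in (0.0, 1.0, 2.5):
    for al, be in ((0.5, 0.5), (0.3, 1.7), (1.0, 2.0)):
        lhs = frac_int_monomial(be, k) * frac_int_monomial(al, k + be)
        rhs = frac_int_monomial(al + be, k)
        assert isclose(lhs, rhs)

# alpha = 1 recovers the usual integral: I^1 (t-a)^3 = (t-a)^4 / 4.
assert isclose(frac_int_monomial(1.0, 3.0), 0.25)
```

The telescoping of the Gamma factors is exactly what makes the semigroup property hold monomial by monomial.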
**Definition 3**. *Given $\alpha \in \mathbb{R}^+$ and $a \in \mathbb{R}$ we define the Riemann-Liouville fractional integral of order $\alpha$ and base point $a$ of a function $f \in L^p[a,b]$ with respect to $h$ as $$\left(I_{h, \, a^+}^{\alpha}f\right)(t):=\int_a^t \frac{(h(t)-h(s))^{\alpha-1}}{\Gamma(\alpha)}\,f(s) \, h'(s)\, ds.$$* There are several properties of this operator that are derived in [@Samko]. We will only highlight that, for $\alpha =1$, the operator $I_{h, \, a^+}^{1}$ is the Stieltjes integral operator with integrator $h$ and that $I_{h, \, a^+}^{\alpha} \in \mathcal{B}\left(L^p[a,b]\right)$. With respect to convolutions, we recall that each function $g \in L^1[a,b]$ induces, in a continuous way, a convolution operator after defining a map $C_a:L^1[a,b] \longrightarrow \mathcal{B}(L^p[a,b])$. More specifically, for each $g \in L^1[a,b]$ its associated convolution operator $C_a(g):L^p[a,b] \longrightarrow L^p[a,b]$ is defined as $$(C_a(g)\,f)(t):= (g * f)(t):=\int_{a}^t g(t-s+a) \cdot f(s) \, ds.$$ Under the previous notation, we say that $g$ is the kernel of the convolution operator $C_a(g)$. There are some well-known properties of convolution operators that we summarize below. Essentially, the convolution operation $*$ is commutative and associative. Moreover, the linear operator $C_a$ is continuous and well-defined, in the sense that $C_a(g)$ is a bounded endomorphism of $L^p[a,b]$ for any $g \in L^1[a,b]$, and that it changes continuously with respect to the operator norm as $g$ varies. Moreover, we will use the following theorem, due to Titchmarsh [@Tit], which roughly states that convolution operators are injective, provided that the kernel does not vanish identically on any right neighbourhood of the base point $a$. **Theorem 4** (Titchmarsh). *Suppose that $f,g \in L^1[a,b]$ are such that $f*g \equiv 0$. 
Then, there exist $\lambda,\mu \in \mathbb{R}^+$ such that the following three conditions hold:* - *$f \equiv 0$ (almost everywhere) in the interval $[a,a+\lambda]$,* - *$g \equiv 0$ (almost everywhere) in the interval $[a,a+\mu]$,* - *$\lambda+\mu \geq b-a$.* As we already mentioned, the previous result implies that $C_a(g)$ is injective, provided that $g \in L^1[a,b]$ does not vanish identically on any right neighbourhood of $a$. In our case, the interest of convolutions and their connection with the Riemann-Liouville fractional integral comes from the identity $C_a \left(g_{a, \,\alpha}\right)=I_{a^+}^{\alpha}$ for $\alpha>0$, where we have defined $$\label{Eq} g_{a,\, \alpha}(t):=\frac{(t-a)^{\alpha-1}}{\Gamma(\alpha)} \in L^1[a,b].$$ Thus, many typical tools from convolution theory can be applied to study the so-called "fractional operators". Finally, for technical reasons, we will also require the following result. **Lemma 5**. *Consider three Banach spaces $X,Y,Z$ and denote by $\mathcal{B}(X,Y)$ the space of continuous linear maps from $X$ to $Y$ with the norm topology. Then, the operator $\textnormal{Comp}: \mathcal{B}(X,Y) \times \mathcal{B}(Y,Z) \longrightarrow \mathcal{B}(X,Z)$ given by $\textnormal{Comp}(g, f)= f \circ g$ is continuous.* *Proof.* From the triangle inequality, we deduce $$\Vert \textnormal{Comp}(g_1,f_1)-\textnormal{Comp}(g_2,f_2)\Vert \leq \Vert f_1 \circ g_1 - f_1 \circ g_2 \Vert + \Vert f_1 \circ g_2 - f_2 \circ g_2 \Vert.$$ Moreover, the right-hand side can be bounded from above by $$\Vert f_1 \Vert \cdot \Vert g_1 - g_2 \Vert + \Vert f_1 - f_2 \Vert \cdot \Vert g_2 \Vert.$$ Finally, we observe that if $(g_2,f_2)$ tends to $(g_1,f_1)$ the previous bound goes to zero, since $\Vert g_1 - g_2 \Vert$, $\Vert f_1 - f_2 \Vert$ go to zero and $\Vert g_2 \Vert$ tends to $\Vert g_1 \Vert$ due to the continuity of the norm. ◻ **Remark 6**. 
*From the previous result it is straightforward to check that, given an operator $g \in \mathcal{B}(X,Y)$, the map $\textnormal{Comp}_{(g,\cdot)}: \mathcal{B}(Y,Z) \longrightarrow \mathcal{B}(X,Z)$ defined as $\textnormal{Comp}_{(g,\cdot)}(f)=f \circ g$ is continuous. Analogously, given an operator $f \in \mathcal{B}(Y,Z)$, the map $\textnormal{Comp}_{(\cdot,f)}: \mathcal{B}(X,Y) \longrightarrow \mathcal{B}(X,Z)$ defined as $\textnormal{Comp}_{(\cdot,f)}(g)=f \circ g$ is continuous.* # Introduction to the state of the art In 1978, Donald Cartwright and John McMullen provided a result that characterized axiomatically the Riemann-Liouville fractional integral. In this sense, they proved that the Riemann-Liouville fractional integral was the unique definition that extended the integral operator $I_{a^+}^1$, assuming some reasonable hypotheses for the extension. Our goal is to provide a similar result for the Stieltjes integral operator with integrator $h$. As one can expect, the unique family that performs this extension will be the "Riemann-Liouville fractional integral with respect to $h$". Now, we state the Cartwright and McMullen result for the case of real functions. Interested readers can find their original work in [@CarMc], which differs from the following statement in the consideration of complex-valued functions. The consideration of real-valued functions allows us to omit the third hypothesis in [@CarMc], regarding the positivity of $J_{a^+}^{\alpha}$. **Theorem 7** (Cartwright-McMullen, real version). *Given a fixed $a \in \mathbb{R}$, there is only one family of operators $\left(J_{a^+}^{\alpha}\right)_{\alpha > 0}$ on $L^p[a,b]$ satisfying the following conditions:* 1. *The operator of order $1$ is the usual integral with base point $a$. That is, $J_{a^+}^1=I_{a^+}^1$. (Interpolation property)* 2. *The Index Law holds. That is, $J_{a^+}^{\alpha} \circ J_{a^+}^{\beta}=J_{a^+}^{\alpha+\beta}$ for all $\alpha,\beta > 0$. (Index Law)* 3. 
*The family is continuous with respect to the parameter. That is, the following map $\textnormal{Ind}_a:\mathbb{R}^+ \longrightarrow \mathcal{B} \left(L^p[a,b] \right)$ given by $\textnormal{Ind}_a(\alpha) = J_{a^+}^{\alpha}$ is continuous, where the codomain has the norm topology. (Continuity)* *This family is precisely given by the Riemann-Liouville fractional integrals and, hence, we have $J_{a^+}^{\alpha}=I_{a^+}^{\alpha}$.* Although we will not reproduce the proof presented in [@CarMc], we sketch the main idea. Essentially, the authors split the proof into two steps. The first step is to show that the result holds when $J_{a^+}^{\alpha}$ is assumed to be a convolution operator. More concretely, the idea is to show that it is enough to study $J_{a^+}^{1/m}$ for $m \in \mathbb{Z}^+$. The main reason is that the Index Law and the continuity with respect to the index allow us to describe $J_{a^+}^{\alpha}$ for $\alpha >0$, provided that we know $J_{a^+}^{1/m}$. When $J_{a^+}^{1/m}$ is assumed to be a convolution operator, it is possible to use appropriate tools from convolution theory, namely Titchmarsh's Theorem, to conclude the uniqueness up to the product with an $m$-th root of unity, that is, $$J_{a^+}^{\frac{1}{m}}=e^{\frac{2\pi k i}{m}}I_{a^+}^{\frac{1}{m}}, \textnormal{ where } k \in \{0,1,2,\dots,m-1\}.$$ The special treatment for the real case uses that the only possible real $m$-th roots of unity are $1$ or $-1$, so the Index Law implies $$J_{a^+}^{\frac{p}{q}}=\pm I_{a^+}^{\frac{p}{q}}, \textnormal{ where } p/q \in \mathbb{Q},$$ where the "$-$" sign could depend, in principle, on each $p/q$. However, only the "$+$" sign is possible for each $p/q$, due to the Index Law applied to the composition $J_{a^+}^{p/q}=J_{a^+}^{(p/2q)} \circ J_{a^+}^{(p/2q)}.$ Finally, continuity forces $J_{a^+}^{\alpha}=I_{a^+}^{\alpha}$ for any $\alpha>0.$ The second step is to show that the result holds when $J_{a^+}^{\alpha}$ is not necessarily a convolution operator. 
First, one uses the Index Law and the interpolation property in order to prove that $J_{a^+}^{1/m}$ commutes with any convolution operator with polynomial kernel. Indeed, the continuity of $C_a$ and the density of polynomials in $L^p[a,b]$ imply that it commutes with any convolution operator. The previous fact is crucial to deduce that $I_{a^+}^1 \circ J_{a^+}^{1/m}$ is always a convolution operator, although $J_{a^+}^{1/m}$ was not. Finally, one mimics the discussion developed during the first step, now for the convolution operator $I_{a^+}^1 \circ J_{a^+}^{1/m}$, to conclude the uniqueness for $I_{a^+}^1 \circ J_{a^+}^{1/m}$ up to the product with an $m$-th root of unity. Thus, we arrive at a situation similar to the previous one $$I_{a^+}^1 \circ J_{a^+}^{\frac{1}{m}}=e^{\frac{2\pi k i}{m}}I_{a^+}^{1+\frac{1}{m}}, \textnormal{ where } k \in \{0,1,2,\dots,m-1\}.$$ Finally, the injectivity of $I_{a^+}^1$ implies $$J_{a^+}^{\frac{1}{m}}=e^{\frac{2\pi k i}{m}}I_{a^+}^{\frac{1}{m}}, \textnormal{ where } k \in \{0,1,2,\dots,m-1\},$$ and we can end the argument as in the previous step. # The Stieltjes case It is a reasonable question whether we can give a result similar to the previous one for the case of the Stieltjes integral operator. We want to ensure that there is only one continuous interpolation for it such that the Index Law holds. The answer is positive when the integrator is given by a function $h \in \mathcal{C}^1[a,b]$ such that $h'(t) > 0$ for any $t \in [a,b]$. Furthermore, we can give an explicit construction of the interpolation in this case, which is the "Riemann-Liouville fractional integral with respect to the function $h$". Instead of developing a technical proof for the result, it will be deduced as a corollary of the Cartwright-McMullen Theorem after suitable remarks. **Theorem 8**. *There is only one family of operators $(J_{h,\, a^+}^{\alpha})_{\alpha > 0}$ on $L^p[a,b]$ satisfying the following conditions:* 1. 
*The operator of order $1$ is the usual integral. That is, $J_{h, \, a^+}^1=I_{h, \, a^+}^1$. (Interpolation property)* 2. *The Index Law holds. That is, $J_{h, \, a^+}^{\alpha} \circ J_{h, \, a^+}^{\beta}=J_{h, \, a^+}^{\alpha+\beta}$ for all $\alpha,\beta > 0$. (Index Law)* 3. *The family is continuous with respect to the parameter. That is, the following map $\textnormal{Ind}_{h, \, a}:\mathbb{R}^+ \longrightarrow \mathcal{B} \left(L^p[a,b]\right)$ given by $\textnormal{Ind}_{h, \, a}(\alpha) = J_{h, \, a^+}^{\alpha}$ is continuous, where the codomain has the norm topology. (Continuity Property)* *The family is precisely given by the "Riemann-Liouville fractional integral with respect to the function $h$".* *Proof.* Consider the operator $R_{h}: L^p[h(a),h(b)] \longrightarrow L^p[a,b]$ given by $R_{h}(f)=f \circ h$. Since $h$ is continuously differentiable and $h'(t) > 0$ when $t \in [a,b]$, it is a consequence of the Change of Variables Theorem that $R_h$ is well-defined, meaning that $f \circ h \in L^p[a,b]$ when $f \in L^p[h(a),h(b)]$. Although $h$ is not necessarily linear, it is straightforward to check that $R_h$ is an invertible linear operator, where $R_h^{-1}=R_{h^{-1}}$. To see that the operator $R_h$ is continuous, we recall that, for a function $f \in L^p[h(a),h(b)]$ with $p<\infty$, we have $$\Vert f \Vert_{L^p[h(a),h(b)]}^p = \int_{h(a)}^{h(b)} \left \vert f(t)\right \vert^p \, dt= \int_a^b \left \vert f(h(t))\right \vert^p \cdot \left \vert h'(t) \right \vert \, dt \geq m \cdot \int_a^b \left \vert f(h(t))\right \vert^p \, dt,$$ where $m = \min\{ \vert h'(t) \vert \in \mathbb{R}^+: t \in [a,b] \} >0$ exists since $\vert h' \vert$ is continuous on the compact interval $[a,b]$ and does not vanish. Thus, $$\Vert R_h(f) \Vert_{L^p[a,b]}= \left(\int_a^b \vert f(h(t)) \vert^p \, dt\right)^{1/p} \leq \frac{1}{m^{1/p}} \, \Vert f \Vert_{L^p[h(a),h(b)]}$$ and we have proved that $R_h$ is continuous when $p<\infty$; for $p=\infty$ the bound $\Vert R_h(f) \Vert_{L^\infty[a,b]} \leq \Vert f \Vert_{L^\infty[h(a),h(b)]}$ is immediate. 
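The continuity estimate for $R_h$ can also be probed numerically: the change of variables gives, for $p<\infty$, the bound $\Vert R_h f\Vert_{L^p[a,b]} \leq m^{-1/p}\,\Vert f\Vert_{L^p[h(a),h(b)]}$ with $m=\min \vert h'\vert$. The following discretized Python sketch (not part of the paper) tests this with the concrete choice $h(t)=t^2$ on $[1,2]$, so that $m=2$:

```python
from math import sqrt

def lp_norm(f, lo, hi, p, n=20000):
    """Midpoint-rule approximation of the L^p norm of f on [lo, hi]."""
    dt = (hi - lo) / n
    total = sum(abs(f(lo + (i + 0.5) * dt)) ** p for i in range(n)) * dt
    return total ** (1.0 / p)

# h(t) = t^2 on [a, b] = [1, 2]: h' = 2t, so m = min |h'| = 2,
# and h maps [1, 2] onto [h(a), h(b)] = [1, 4].
a, b, m, p = 1.0, 2.0, 2.0, 2
for f in (lambda u: u, lambda u: sqrt(u), lambda u: 1.0 / u):
    lhs = lp_norm(lambda t: f(t * t), a, b, p)       # ||R_h f||_p on [a, b]
    rhs = m ** (-1.0 / p) * lp_norm(f, 1.0, 4.0, p)  # m^{-1/p} ||f||_p
    assert lhs <= rhs + 1e-9
```

The bound is not tight here (for $f(u)=u$ one gets roughly $2.49 \leq 3.24$), which is consistent with $m$ being a worst-case constant.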
The previous properties concerning $R_h$ are of our interest because $$\label{Eq4} I_{h,\,a^+}^1 = R_h \circ I_{h(a)^+}^1 \circ R^{-1}_h.$$ This claim follows by direct calculation, since $$\left(R_h \circ I_{h(a)^+}^1 \circ R^{-1}_h f\right)(t)=\left(R_h \circ I_{h(a)^+}^1 (f \circ h^{-1}) \right)(t)=R_h \left(\int_{h(a)}^t f\left(h^{-1}(s)\right) \, ds \right)$$ and the application of $R_h$ and the Change of Variables Theorem allow us to rewrite the previous right-hand side as $$\int_{h(a)}^{h(t)} f\left(h^{-1}(s)\right) \, ds=\int_{a}^{t} f(s) \cdot h'(s) \, ds = \left(I_{h,\,a^+}^1 f\right)(t).$$ In fact, it is immediate to show by iterated composition that $I_{h,\,a^+}^n = R_h \circ I_{h(a)^+}^n \circ R^{-1}_h$ for any positive integer $n$. Now, our intuition tells us that if $J_{h(a)^+}^{\alpha}$ is a "nice interpolation" for $I_{h(a)^+}^{\alpha}$, then the definition $R_h \circ J_{h(a)^+}^{\alpha} \circ R^{-1}_h$ should be a "nice interpolation" for $I_{h,\,a^+}^{\alpha}.$ Conversely, a choice $J_{h, \, a^+}^{\alpha}$ fulfilling the hypotheses in Theorem [Theorem 8](#Theo){reference-type="ref" reference="Theo"} should imply that $$\label{Eq3} K_{h(a)^+}^{\alpha}:=R_h^{-1} \circ J_{h, \, a^+}^{\alpha} \circ R_h$$ is under the hypotheses of the Cartwright-McMullen Theorem [Theorem 7](#TheoO){reference-type="ref" reference="TheoO"}. This intuition can be confirmed after the following three remarks. Before doing this, note that the continuity of $J_{h, \, a^+}^{\alpha}$, $R_h$ and $R_h^{-1}$ imply that $K_{h(a)^+}^{\alpha} \in \mathcal{B}\left(L^p[h(a),h(b)]\right)$. 1. If $J_{h, \, a^+}^{1}=I_{h, \, a^+}^{1}$, then $K_{h(a)^+}^{1}=I_{h(a)^+}^1$. This is a consequence of Equations ([\[Eq4\]](#Eq4){reference-type="ref" reference="Eq4"}), ([\[Eq3\]](#Eq3){reference-type="ref" reference="Eq3"}), and the Interpolation Property (first hypothesis in Theorem [Theorem 8](#Theo){reference-type="ref" reference="Theo"}). 2. 
If $J_{h, \, a^+}^{\alpha} \circ J_{h, \, a^+}^{\beta} = J_{h, \, a^+}^{\alpha+\beta}$, then $K_{h(a)^+}^{\alpha} \circ K_{h(a)^+}^{\beta} = K_{h(a)^+}^{\alpha+\beta}$. This is a consequence of Equation ([\[Eq3\]](#Eq3){reference-type="ref" reference="Eq3"}) and the Index Law (second hypothesis in Theorem [Theorem 8](#Theo){reference-type="ref" reference="Theo"}). 3. If the map $\textnormal{Ind}_{h, \, a}(\alpha)=J_{h, \, a^+}^{\alpha}$ is continuous, then $\textnormal{Ind}_{h(a)}(\alpha)=K_{h(a)^+}^{\alpha}$ is continuous. This part is not completely straightforward: it is a consequence of ([\[Eq3\]](#Eq3){reference-type="ref" reference="Eq3"}) and the Continuity Property (third hypothesis in Theorem [Theorem 8](#Theo){reference-type="ref" reference="Theo"}), but the continuity of the composition operator described in Lemma [Lemma 5](#Lema){reference-type="ref" reference="Lema"} and Remark [Remark 6](#Rema){reference-type="ref" reference="Rema"} also plays a key role. Indeed, we can write $\textnormal{Ind}_{h(a)}$ as a composition of continuous operators $$\textnormal{Ind}_{h(a)}(\alpha)=\left(\textnormal{Comp}_{ \left(\cdot, R_h^{-1} \right)} \circ \textnormal{Comp}_{\left(R_h,\cdot\right)} \circ \textnormal{Ind}_{h, \, a}\right)(\alpha).$$ Therefore, since $R_h$ and $R_h^{-1}$ are bijective, two different choices for $J_{h, \, a^+}^{\alpha}$ would induce two different possibilities for $K_{h(a)^+}^{\alpha}$ in the Cartwright-McMullen Theorem. Since this is not possible, there is also a unique choice for $J_{h, \, a^+}^{\alpha}$, induced from the unique admissible choice $K_{h(a)^+}^{\alpha}=I_{h(a)^+}^{\alpha}$. Consequently, the unique possibility for $J_{h, \, a^+}^{\alpha}$ is given by $$I_{h, \, a^+}^{\alpha}= R_h \circ I_{h(a)^+}^{\alpha} \circ R_h^{-1}.$$ ◻ Cartwright, D.; McMullen, J. A note on the fractional calculus. *Proceedings of the Edinburgh Mathematical Society* **1978**, *21(1)*, pp. 79--80. 
Samko, S.; Kilbas, A.; Marichev, O. *Fractional Integrals and Derivatives: Theory and Applications*, 1st ed.; Gordon and Breach Science Publishers: Amsterdam, The Netherlands; 1993. Titchmarsh, E. The zeros of certain integral functions. *Proceedings of the London Mathematical Society* **1926**, *25*, pp. 283--302.
--- abstract: | Let $A(p,n,k)$ denote the number of $p$-tuples of commuting permutations of $n$ elements whose permutation action results in exactly $k$ orbits or connected components. We provide a new proof of an explicit formula for $A(p,n,k)$ which is essentially due to Bryan and Fulman, in their work on orbifold higher equivariant Euler characteristics. Our proof is self-contained, elementary, and relies on the construction of an explicit bijection, in order to perform the $p+1\rightarrow p$ reduction. We also investigate a conjecture by the first author, regarding the log-concavity of $A(p,n,k)$ with respect to $k$. The conjecture generalizes a previous one by Heim and Neuhauser related to the Nekrasov-Okounkov formula. address: - Abdelmalek Abdesselam, Department of Mathematics, P. O. Box 400137, University of Virginia, Charlottesville, VA 22904-4137, USA - Pedro Brunialti, Department of Mathematics, P. O. Box 400137, University of Virginia, Charlottesville, VA 22904-4137, USA - Tristan Doan, Department of Mathematics, P. O. Box 400137, University of Virginia, Charlottesville, VA 22904-4137, USA - Philip Velie, Department of Mathematics, P. O. Box 400137, University of Virginia, Charlottesville, VA 22904-4137, USA author: - Abdelmalek Abdesselam - Pedro Brunialti - Tristan Doan - Philip Velie title: A bijection for tuples of commuting permutations and a log-concavity conjecture --- # Introduction For $n\ge 0$, let us denote by $[n]$ the finite set $\{1,\ldots,n\}$, and by $\mathfrak{S}_n$ the symmetric group of permutations of $[n]$. For $p\ge 0$, we consider the set of ordered $p$-tuples of commuting permutations $$\mathscr{C}_{p,n}:=\left\{\ (\sigma_1,\ldots,\sigma_p)\in (\mathfrak{S}_n)^p\ |\ \forall i,j,\ \sigma_i\sigma_j=\sigma_j\sigma_i \ \right\}\ .$$ For a tuple $(\sigma_1,\ldots,\sigma_p)$ of (not necessarily commuting) permutations, let $\langle \sigma_1,\ldots,\sigma_p\rangle$ be the subgroup they generate inside $\mathfrak{S}_n$. 
The obvious action of $\mathfrak{S}_n$ on $[n]$ restricts to an action of $\langle \sigma_1,\ldots,\sigma_p\rangle$ with a number of orbits which we will denote by $\kappa( \sigma_1,\ldots,\sigma_p)$. For $0\le k\le n$, we let $\mathscr{C}_{p,n,k}$ be the subset of $\mathscr{C}_{p,n}$ made of tuples for which $\kappa( \sigma_1,\ldots,\sigma_p)=k$. We finally define our main object of study $$A(p,n,k):=|\mathscr{C}_{p,n,k}|\ ,$$ where, as usual, $|\cdot|$ denotes the cardinality of finite sets. Our main result is a new proof of the following theorem giving an explicit, albeit complicated, formula for the $A(p,n,k)$. **Theorem 1**. *We have $$A(p,n,k)=\frac{n!}{k!}\times \sum_{n_1,\ldots,n_k\ge 1} {\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}}\{n_1+\cdots+n_k=n\}\times \prod_{i=1}^{k} \left[\frac{B(p,n_i)}{n_i}\right]\ ,$$ where ${\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}}\{\cdots\}$ denotes the indicator function of the condition between braces, and $B(p,\cdot)$ is the multiplicative function (in the number theory sense, i.e., $B(p,ab)=B(p,a)B(p,b)$ when $a,b$ are coprime) which satisfies $$B(p,q^m)=\frac{(q^p-1)(q^{p+1}-1)\cdots(q^{p+m-1}-1)}{(q-1)(q^2-1)\cdots(q^m-1)}\ ,$$ when $m\ge 0$ and $q$ is a prime number. [\[mainthm\]]{#mainthm label="mainthm"}* Our motivation for considering the previous theorem is the following log-concavity conjecture by the first author. **Conjecture 1**. *(Abdesselam [@Abdesselam2]) For all $p\ge 1$, all $n\ge 3$, and for all $k$ such that $2\le k\le n-1$, $$A(p,n,k)^2\ge A(p,n,k-1)\ A(p,n,k+1)\ .$$ [\[mainconj\]]{#mainconj label="mainconj"}* The case $p=1$, included for esthetic coherence, is not conjectural. Since $A(1,n,k)=c(n,k)$, the unsigned Stirling number of the first kind, the stated log-concavity property is a well known fact (see, e.g., [@Abdesselam1] and references therein). 
The case $p=2$ is a conjecture by Heim and Neuhauser [@HeimN] related to the Nekrasov-Okounkov formula [@NekrasovO; @Westbury], as will be explained in §[3](#conjsec){reference-type="ref" reference="conjsec"}. The case "$p=\infty$" is proved in [@Abdesselam2]. The form in which Theorem [\[mainthm\]](#mainthm){reference-type="ref" reference="mainthm"} is stated is the one needed for the proof given in [@Abdesselam2], and we did not see this precise formulation in the literature. However, we do not claim Theorem [\[mainthm\]](#mainthm){reference-type="ref" reference="mainthm"} is new. Indeed, it follows easily from the following identity by Bryan and Fulman [@BryanF] $$\sum_{n=0}^{\infty}\sum_{k=0}^{n}\frac{1}{n!}\ A(p,n,k)\ x^k u^n= \prod_{d_1,\ldots,d_{p-1}=1}^{\infty} (1-u^{d_1\cdots d_{p-1}})^{-x\, d_1^{p-2} d_2^{p-3}\cdots d_{p-2}}\ , \label{BFidentity}$$ which holds in the ring of formal power series $\mathbb{C}[[x,u]]$. To see how Theorem [\[mainthm\]](#mainthm){reference-type="ref" reference="mainthm"} can be derived from ([\[BFidentity\]](#BFidentity){reference-type="ref" reference="BFidentity"}), first (re)define, for $p\ge 1$ and $n\ge 1$, $$B(p,n):=\sum_{s_1|s_2|\cdots|s_{p-1}|n}s_1\cdots s_{p-1}\ , \label{altdefeq}$$ where the sum is over tuples of integers $s_1,\ldots,s_{p-1}\ge 1$ which form an "arithmetic flag", namely, such that $s_1$ divides $s_2$, $s_2$ divides $s_3$,..., $s_{p-1}$ divides $n$. In particular, $B(1,n)=1$, and $B(2,n)=\sigma(n)$ the divisor sum from number theory. Since the divisor lattice factorizes over the primes, it is clear from the alternative definition ([\[altdefeq\]](#altdefeq){reference-type="ref" reference="altdefeq"}) that $B(p,\cdot)$ is a multiplicative function, in the number theory sense. Its computation reduces to the prime power case. 
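As a sanity check, the flag definition ([\[altdefeq\]](#altdefeq){reference-type="ref" reference="altdefeq"}) can be compared numerically with the multiplicative prime-power formula of Theorem [\[mainthm\]](#mainthm){reference-type="ref" reference="mainthm"}; the following Python sketch is not part of the paper and uses the recursion $B(p,n)=\sum_{s|n} s\,B(p-1,s)$, obtained by conditioning on the outermost divisor $s_{p-1}=s$ of the flag:

```python
from math import prod

def B_flags(p, n):
    """B(p, n) = sum over flags s_1 | s_2 | ... | s_{p-1} | n of s_1 * ... * s_{p-1},
    computed via the recursion B(p, n) = sum_{s | n} s * B(p-1, s)."""
    if p == 1:
        return 1
    return sum(s * B_flags(p - 1, s) for s in range(1, n + 1) if n % s == 0)

def factorize(n):
    """Prime factorization of n as a dict {q: m}."""
    f, q = {}, 2
    while q * q <= n:
        while n % q == 0:
            f[q] = f.get(q, 0) + 1
            n //= q
        q += 1
    if n > 1:
        f[n] = f.get(n, 0) + 1
    return f

def B_primes(p, n):
    """B(p, n) via multiplicativity and the prime-power (q-binomial) formula."""
    total = 1
    for q, m in factorize(n).items():
        num = prod(q ** (p + i) - 1 for i in range(m))
        den = prod(q ** (i + 1) - 1 for i in range(m))
        total *= num // den   # Gaussian binomial coefficient, an integer
    return total

for p in range(1, 5):
    for n in range(1, 60):
        assert B_flags(p, n) == B_primes(p, n)

assert B_primes(2, 12) == 28   # B(2, n) = sigma(n), the divisor sum
```

For instance $B(3,4)=35$, matching both the flag sum and the $q$-binomial $\left[\begin{smallmatrix}4\\2\end{smallmatrix}\right]_2=35$.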
If $q$ is a prime and $m\ge 0$, then we have $$\begin{aligned} B(p,q^m) &=& \sum_{0\le m_1\le\cdots\le m_{p-1}\le m}\ q^{m_1+\cdots+m_{p-1}} \\ & = & \sum_{\lambda\subset (m)^{p-1}} q^{|\lambda|}\\ & = & \left[ \begin{array}{c} m+p-1 \\ m \end{array} \right]_q \\ & = & \frac{(q^p-1)(q^{p+1}-1)\cdots(q^{p+m-1}-1)}{(q-1)(q^2-1)\cdots(q^m-1)}\ .\end{aligned}$$ Here, we changed variables to the integer partition $\lambda=(m_{p-1},m_{p-2},\ldots,m_{1})$ with weight $|\lambda|$ and whose shape is contained in the rectangular partition $(m)^{p-1}$ with $p-1$ parts equal to $m$. Finally, we used the well known formula for the sum over $\lambda$ as a Gaussian polynomial or $q$-binomial coefficient (see, e.g., [@Stanley Prop. 1.7.3]). This shows the equivalence between ([\[altdefeq\]](#altdefeq){reference-type="ref" reference="altdefeq"}) and the definition given in Theorem [\[mainthm\]](#mainthm){reference-type="ref" reference="mainthm"}. By changing variables from $s_1,\ldots,s_{p-1}$ to $d_1,\ldots,d_p$ given by $$d_1=s_1\ ,\ d_2=\frac{s_2}{s_1}\ ,\ldots,\ d_{p-1}=\frac{s_{p-1}}{s_{p-2}}\ ,\ d_p=\frac{n}{s_{p-1}}\ ,$$ we can also write $$B(p,n)=\sum_{d_1\cdots d_p=n} d_1^{p-1}d_{2}^{p-2}\cdots d_{p-1}\ ,$$ as a multiple Dirichlet convolution of power functions (see, e.g., [@Moller] where the connection to $q$-binomial coefficients was also noted). We then have the following easy formal power series computations $$\begin{aligned} \sum_{n=1}^{\infty}\frac{B(p,n)}{n}\ u^n &= & \sum_{d_1,\ldots,d_p\ge 1} \frac{d_1^{p-1}d_{2}^{p-2}\cdots d_{p-1}}{d_1\cdots d_p} \times u^{d_1\cdots d_p}\\ & = & \sum_{m\ge 1} B(p-1,m)\times \sum_{d_p\ge 1}\frac{(u^m)^{d_p}}{d_p}\\ & = & \sum_{m\ge 1} B(p-1,m) \times\left(-\log(1-u^m)\right)\ ,\end{aligned}$$ where we introduced the new summation index $m:=d_1\cdots d_{p-1}$. 
Multiplying by $x$, and taking exponentials gives $$\exp\left(x \sum_{n=1}^{\infty}\frac{B(p,n)}{n}\ u^n\right)= \prod_{m=1}^{\infty} (1-u^m)^{-xB(p-1,m)}\ ,$$ which is the right-hand side of ([\[BFidentity\]](#BFidentity){reference-type="ref" reference="BFidentity"}) when collecting factors according to $m:=d_1\cdots d_{p-1}$. We have thus shown that ([\[BFidentity\]](#BFidentity){reference-type="ref" reference="BFidentity"}) can be rewritten as $$\sum_{n=0}^{\infty}\sum_{k=0}^{n}\frac{1}{n!}\ A(p,n,k)\ x^k u^n= \exp\left(x \sum_{n=1}^{\infty}\frac{B(p,n)}{n}\ u^n\right)\ .$$ Extracting coefficients of monomials in $x$ and $u$, on both sides, immediately yields Theorem [\[mainthm\]](#mainthm){reference-type="ref" reference="mainthm"}. In the article [@BryanF], $x$ is assumed to be the Euler characteristic of a manifold. However, their proof of ([\[BFidentity\]](#BFidentity){reference-type="ref" reference="BFidentity"}) holds if $x$ is merely a formal variable. Their work aimed at generalizing the "stringy" orbifold Euler characteristic [@DixonHVW; @AtiyahS], from sums over pairs of commuting permutations, to commuting tuples of arbitrary length $p$. Another motivation for their work was the study by Hopkins, Kuhn, and Ravenel [@HopkinsKR] of a hierarchy of cohomology theories where the $p$-th level seemed to crucially involve $p$-tuples of commuting elements of a finite group such as $\mathfrak{S}_n$. The group-theoretic proof by Bryan and Fulman involved a delicate analysis of conjugacy classes in wreath products. Another proof one can find in the literature is the algebraic one by White [@White]. It uses the remark that $\mathscr{C}_{p,n}$ is in bijection with ${\rm Hom}(\mathbb{Z}^p,\mathfrak{S}_n)$, namely, the set of group homomorphisms from the additive group $\mathbb{Z}^p$ to the symmetric group $\mathfrak{S}_n$, i.e., $\mathbb{Z}^p$ actions on a set of $n$ elements. 
The proof by White also uses the fact that $B(p,n)$ is the number of subgroups of $\mathbb{Z}^p$ of index $n$ (a remark by Stanley already mentioned in [@BryanF]) and the main part of the argument is the computation of this number using Hermite normal forms, i.e., Gaussian elimination over the integers. Note that $B(p,n)$ is a well-studied quantity, see, e.g., [@LubotzkyS Ch. 15] as well as the article by Solomon [@Solomon] where work on $B(p,n)$ is traced back to the time of Hermite and Eisenstein. Also note that a proof of the $x=1$ evaluation of the $p=3$ case of ([\[BFidentity\]](#BFidentity){reference-type="ref" reference="BFidentity"}) was also given in [@Britnell]. Our proof, given in the next section, is elementary and in the spirit of bijective enumerative combinatorics. In Lemma [\[polymerlem\]](#polymerlem){reference-type="ref" reference="polymerlem"}, we reduce the $A(p,n,k)$ to the $k=1$ case of transitive actions, via a polymer gas representation, in the language of statistical mechanics, or the exponential formula in enumerative combinatorics, often mentioned as the general slogan "sums over all objects are exponentials of sums over connected objects". The main argument is a reduction of $A(p+1,n,1)$ to the computation of $A(p,n,1)$. We condition the sum over tuples $(\sigma_1,\ldots,\sigma_{p+1})$, first on the number $r$ of orbits for the sub-tuple $(\sigma_1,\ldots,\sigma_{p})$ and then on the set partition $X=\{X_1,\ldots,X_r\}$ of $[n]$ given by that orbit decomposition. With $r$ and $X$ fixed, we then construct a bijection $$(\sigma_1,\ldots,\sigma_{p+1})\longmapsto (\widetilde{\sigma},\gamma,\tau,z) \label{bijeq}$$ where $\widetilde{\sigma}$ is a transitive $p$-tuple of commuting permutations on the subset $X_1$ containing the element $1\in[n]$. By $\gamma$ we denote a permutation of $[r]$ such that $\gamma(1)=1$. The $\tau$ is a certain collection of bijective maps between blocks $X_i$. 
Finally, the crucial ingredient is $z$, which is an element of $X_1$. One can intuitively understand our proof as counting possibly flat or degenerate discrete $(p+1)$-dimensional tori with $n$ points. As is familiar in topology, one can build such a torus by gluing both ends of a cylinder. However, we are allowed to perform a twist when doing this gluing, and this twist is determined by $z$. Namely, $(\sigma_{p+1})^{r}$, the "Poincaré return map" to $X_1$, does not necessarily fix $1$ but may send it to some $z\neq 1$. We remark that it is possible to explicitly iterate the bijection involved in the $p+1$ to $p$ reduction, but given the complexity of the resulting recursive combinatorial data, we will refrain from doing this here. # Proofs We first take care of the reduction to the transitive action case. **Lemma 1**. *We have $$A(p,n,k)=\ \frac{n!}{k!}\times \sum_{n_1,\ldots,n_k\ge 1} {\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}}\{n_1+\cdots+n_k=n\}\times \prod_{i=1}^{k}\left(\frac{A(p,n_i,1)}{n_i!} \right)\ .$$ [\[polymerlem\]]{#polymerlem label="polymerlem"}* For a tuple $(\sigma_1,\ldots,\sigma_p)$ in $\mathscr{C}_{p,n,k}$, let $\Pi(\sigma_1,\ldots,\sigma_p)$ denote the unordered set partition of $[n]$ given by the orbits of the action of the subgroup $\langle\sigma_1,\ldots,\sigma_p\rangle$. We condition the sum over tuples in $\mathscr{C}_{p,n,k}$ according to this set partition. We also sum over orderings of the blocks of that partition (with $k$ blocks), and compensate for this overcounting by dividing by $k!$.
This gives $$A(p,n,k)=\frac{1}{k!}\times\sum_{(X_1,\ldots,X_k)}\ \ \ \sum_{(\sigma_1,\ldots,\sigma_p)\in\mathscr{C}_{p,n,k}}\ {\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}}\left\{ \ \Pi(\sigma_1,\ldots,\sigma_p)=\{X_1,\ldots,X_k\}\ \right\}\ ,$$ where the sum is over ordered tuples of subsets $(X_1,\ldots,X_k)$, where the $X_i$ are nonempty, pairwise disjoint, and together have union equal to $[n]$. For $1\le i\le k$ and $1\le j\le p$, we let $\sigma_j^{(i)}$ be the restriction and corestriction of $\sigma_j$ to the subset $X_i$, which must be stable by $\sigma_j$. For fixed $X_1,\ldots,X_k$, the sum over tuples $(\sigma_1,\ldots,\sigma_p)$ clearly amounts to summing independently over the tuples $(\sigma_{1}^{(i)},\ldots,\sigma_{p}^{(i)})$ in each $X_i$, $1\le i\le k$. The tuple $(\sigma_{1}^{(i)},\ldots,\sigma_{p}^{(i)})$ is made of commuting permutations of $X_i$ whose action on the latter must be transitive. The number of such tuples only depends on the size $|X_i|$ of the set $X_i$, and not on its location within $[n]$. As a result, we have $$\begin{aligned} A(p,n,k) &=& \frac{1}{k!}\times \sum_{(X_1,\ldots,X_k)} A(p,|X_1|,1)\cdots A(p,|X_k|,1)\\ & = & \frac{1}{k!}\times \sum_{n_1,\ldots,n_k\ge 1} {\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}}\{n_1+\cdots+n_k=n\}\times \frac{n!}{n_1!\cdots n_k!}\times \prod_{i=1}^{k}A(p,n_i,1) \ ,\end{aligned}$$ where the multinomial coefficient accounts for the number of tuples of disjoint sets $(X_1,\ldots,X_k)$ with fixed cardinalities $n_1,\ldots,n_k$. 0◻ We now move on to the main part of the proof, i.e., the $p+1$ to $p$ reduction and showing that $$A(p+1,n,1)=\sum_{rs=n} A(p,s,1) \times \frac{n!}{r!\times s!^r}\times (r-1)!\times s!^{r-1}\times s\ , \label{reductioneq}$$ where the sum is over pairs of integers $r,s\ge 1$ whose product is $n$.
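This reduction formula can be tested numerically before it is proved. The Python sketch below, an added illustration rather than part of the argument, computes $A(p,n,1)$ by brute force and checks the $p=1$ instance of the formula (recall $A(1,s,1)=(s-1)!$, counting $s$-cycles).

```python
from itertools import combinations, permutations, product
from math import factorial

def compose(a, b):
    return tuple(a[i] for i in b)

def num_orbits(gens, n):
    # orbits of <gens> = connected components under all the generators
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for g in gens:
        for i in range(n):
            parent[find(i)] = find(g[i])
    return len({find(i) for i in range(n)})

def A(p, n, k):
    # brute-force count of commuting p-tuples of permutations with k orbits
    perms = list(permutations(range(n)))
    return sum(1 for tup in product(perms, repeat=p)
               if all(compose(a, b) == compose(b, a)
                      for a, b in combinations(tup, 2))
               and num_orbits(tup, n) == k)

def rhs(n):
    # right-hand side of the reduction formula, instantiated with p = 1
    total = 0
    for r in range(1, n + 1):
        if n % r == 0:
            s = n // r
            total += (A(1, s, 1) * (factorial(n) // (factorial(r) * factorial(s)**r))
                      * factorial(r - 1) * factorial(s)**(r - 1) * s)
    return total

for n in range(1, 5):
    assert A(2, n, 1) == rhs(n)
```

For instance $A(2,4,1)=42=(4-1)!\times 7$, consistent with $B(2,4)=7$ being the number of index-$4$ subgroups of $\mathbb{Z}^2$.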
Let $(\sigma_1,\ldots,\sigma_{p+1})\in\mathscr{C}_{p+1,n,1}$ denote a $(p+1)$-tuple of commuting permutations being counted on the left-hand side. We let $X=\{X_1,\ldots,X_r\}:=\Pi(\sigma_1,\ldots,\sigma_p)$ be the set of orbits determined by the first $p$ permutations. For a fixed set partition $X$ of $[n]$, define $\mathscr{C}_{p+1,n,1}^X\subset\mathscr{C}_{p+1,n,1}$ as the set of $(p+1)$-tuples which produce the given $X$ by the above definition. We organize the count by conditioning on $X$, i.e., writing $$A(p+1,n,1)=\sum_X \left|\mathscr{C}_{p+1,n,1}^X\right|\ ,$$ and then computing the terms in the last sum by constructing an explicit bijection between $\mathscr{C}_{p+1,n,1}^X$ and a set of combinatorial data whose cardinality is easy to derive. We will use an automatic numbering of the blocks of $X$ by ordering them according to their minimal element, with respect to the ordered set $[n]$. We let $X_1$ be the block containing the element $1\in[n]$, and number the other blocks so that $$1=\min X_1<\min X_2<\cdots<\min X_r\ .$$ **Lemma 2**. *Let $f$ be an element of $\langle\sigma_{p+1}\rangle$, i.e., a power of $\sigma_{p+1}$, and let $\alpha,\beta\in[r]$. If $\exists x\in X_{\alpha}$, $f(x)\in X_{\beta}$, then $\forall y\in X_{\alpha}$, $f(y)\in X_{\beta}$.* Since such a $y$ is in the same $\langle\sigma_1,\ldots,\sigma_p\rangle$-orbit as $x$, there exists a permutation $g\in\langle\sigma_1,\ldots,\sigma_p\rangle$ such that $y=g(x)$. Since $\sigma_1,\ldots,\sigma_{p+1}$ commute, $g$ must commute with $f$, and therefore $f(y)=f(g(x))=g(f(x))$. This shows that $f(y)$ is in the same $\langle\sigma_1,\ldots,\sigma_p\rangle$-orbit as $f(x)$, namely, $X_{\beta}$. 0◻ The last lemma allows us, from an $f\in\langle\sigma_{p+1}\rangle$, to construct a map $\widehat{f}:[r]\rightarrow[r]$ defined by $\widehat{f}(\alpha)=\beta$, whenever $\exists x\in X_{\alpha}$, $f(x)\in X_{\beta}$.
This construction satisfies $\widehat{{\rm Id}}={\rm Id}$, and $\widehat{f\circ g}=\widehat{f}\circ\widehat{g}$, namely, it gives a group homomorphism from $\langle\sigma_{p+1}\rangle$ to $\mathfrak{S}_r$. We apply this to $f=\sigma_{p+1}$ and consider the cycle decomposition of the permutation $\widehat{\sigma_{p+1}}$, and focus on the cycle containing the element $1\in[r]$, namely $(\alpha_1\ \alpha_2\ \cdots\ \alpha_t)$, with $\alpha_1=1$. We clearly have $$\sigma_{p+1}(X_1)\subset X_{\alpha_2}\ ,\ \sigma_{p+1}(X_{\alpha_2})\subset X_{\alpha_3}\ ,\ \cdots\ ,\ \sigma_{p+1}(X_{\alpha_{t-1}})\subset X_{\alpha_t}\ ,\ \sigma_{p+1}(X_{\alpha_t})\subset X_{1}\ .$$ Hence $X_{1}\cup X_{\alpha_2}\cup\cdots\cup X_{\alpha_t}$ is stable by $\sigma_{p+1}$, in addition to being stable by $\langle\sigma_1,\ldots,\sigma_{p}\rangle$ since each of the $X$ blocks is. Given that the $(p+1)$-tuple of permutations $(\sigma_1,\ldots,\sigma_{p+1})$ is assumed to act transitively, this can only happen if the previous union of $X$ blocks is all of $[n]$, i.e., if $t=r$. For notational convenience, we define the permutation $\gamma\in\mathfrak{S}_r$ by letting $\gamma(i)=\alpha_i$ for all $i\in[r]$. In particular, $\gamma(1)=1$, by construction. We now have $$\sigma_{p+1}(X_1)\subset X_{\gamma(2)}\ ,\ \sigma_{p+1}(X_{\gamma(2)})\subset X_{\gamma(3)}\ ,\ \cdots\ ,\ \sigma_{p+1}(X_{\gamma(r-1)})\subset X_{\gamma(r)}\ ,\ \sigma_{p+1}(X_{\gamma(r)})\subset X_{1}\ . \label{cyclicincleq}$$ Since $\sigma_{p+1}$ is injective, it follows that $$|X_1|\le |X_{\gamma(2)}|\le\cdots\le|X_{\gamma(r)}|\le|X_1|\ ,$$ and, therefore, all the $X$ blocks must have the same cardinality, say $s$, so that $n=rs$, namely, $r$ must divide $n$. The above argument also produces bijective maps $$\tau_{i}:X_{\gamma(i)}\longrightarrow X_{\gamma(i+1)}\ ,$$ for $1\le i\le r-1$, obtained by restriction (and corestriction) of $\sigma_{p+1}$. We collect them into a tuple $\tau=(\tau_1,\ldots,\tau_{r-1})$.
We now define the $p$-tuple of permutations of the first block $X_1$ given by $\widetilde{\sigma}=(\widetilde{\sigma}_1,\ldots,\widetilde{\sigma}_p)$ where, for all $j\in[p]$, $\widetilde{\sigma}_j$ is obtained from $\sigma_j$ by restricting it to the subset $X_1$. It is easy to see that $\widetilde{\sigma}$ is a $p$-tuple of commuting permutations of the set $X_1$, which altogether act transitively on it. Finally, we define the element $z=(\sigma_{p+1})^r(1)$ of the block $X_1$. This concludes the definition of the map mentioned in ([\[bijeq\]](#bijeq){reference-type="ref" reference="bijeq"}) which to a tuple $(\sigma_1,\ldots,\sigma_{p+1})\in\mathscr{C}_{p+1,n,1}$ associates the data $(\widetilde{\sigma},\gamma,\tau,z)$. Once we establish that this construction is bijective, the reduction formula ([\[reductioneq\]](#reductioneq){reference-type="ref" reference="reductioneq"}) will follow easily. Indeed, after identification of $X_1$ with $[s]$, we see that there are $A(p,s,1)$ possible choices for $\widetilde{\sigma}$. Deciding on the permutation $\gamma$, which fixes $1$, results in $(r-1)!$ choices. The number of possibilities for the bijective maps in $\tau$ accounts for a factor $s!^{r-1}$, and there are $s$ possibilities for $z$. Summing over the unordered set partition $X$ can be done with the multinomial coefficient $n!/s!^r$ for ordered set partitions and correcting for the overcounting by dividing by $r!$, as in the proof of Lemma [\[polymerlem\]](#polymerlem){reference-type="ref" reference="polymerlem"}. All that remains in order to complete the proof of ([\[reductioneq\]](#reductioneq){reference-type="ref" reference="reductioneq"}) is to show our map ([\[bijeq\]](#bijeq){reference-type="ref" reference="bijeq"}) is indeed bijective. **Injectivity:** We will show how the tuple $(\sigma_1,\ldots,\sigma_{p+1})$ is determined by the data $(\widetilde{\sigma},\gamma,\tau,z)$, and the a priori knowledge of the fixed partition $X$. 
By construction, for all $j$, $1\le j\le p$, the restriction of $\sigma_j$ to $X_1$ must be $$\sigma_j|_{X_1}=\widetilde{\sigma}_j\ . \label{jX1def}$$ Strictly speaking, there is also a change of codomain involved (from $X_1$ to $[n]$), but we ignore this here and in the similar statements below. We must also have, for all $i$, $1\le i\le r-1$, $$\sigma_{p+1}|_{X_{\gamma(i)}}=\tau_i\ . \label{pplus1Xidef}$$ From the commutation relation $\sigma_j\circ(\sigma_{p+1})^i=(\sigma_{p+1})^i\circ\sigma_j$, restricted to $X_1$, we deduce that for all $i$, $2\le i\le r$, we must have $$\sigma_j\circ\tau_{i-1}\circ\cdots\circ\tau_1= \tau_{i-1}\circ\cdots\circ\tau_1\circ\widetilde{\sigma}_j$$ i.e., $$\sigma_j|_{X_{\gamma(i)}}=\tau_{i-1}\circ\cdots\circ\tau_1\circ\widetilde{\sigma}_j \circ\tau_{1}^{-1}\circ\cdots\circ\tau_{i-1}^{-1}\ . \label{jXidef}$$ Hence $\sigma_1,\ldots,\sigma_{p}$ are known, while $\sigma_{p+1}$ is almost entirely determined. We are only missing the restriction of $\sigma_{p+1}$ to the last block $X_{\gamma(r)}$. Since $z$ is in the orbit $X_1$ of the element $1$ for the action of $\sigma_1,\ldots,\sigma_p$, or equivalently $\widetilde{\sigma}_1,\ldots,\widetilde{\sigma}_p$, there exists $g\in\langle\widetilde{\sigma}_1,\ldots,\widetilde{\sigma}_p\rangle$ such that $g(1)=z$. We claim that we must have $$\sigma_{p+1}|_{X_{\gamma(r)}}=g\circ\tau_{1}^{-1}\circ\cdots\circ\tau_{r-1}^{-1}\ . \label{pplus1Xrdef}$$ Indeed, let $x\in X_{\gamma(r)}$; then $x=(\sigma_{p+1})^{r-1}(y)$ for some $y\in X_1$. Again, by transitivity on $X_1$, there exists $h\in\langle\sigma_1,\ldots,\sigma_p\rangle$ such that $y=h(1)$.
As a consequence of the Abelian property of the group $\langle\sigma_1,\ldots,\sigma_{p+1}\rangle$, we must have $$\begin{aligned} \sigma_{p+1}(x) &=& (\sigma_{p+1})^r\circ h(1)\\ &=& h\circ(\sigma_{p+1})^r(1)\\ &= & h(z)\\ & = & h(g(1))\\ &=& g(h(1))\\ & = & g(y)\\ & = & g\circ\tau_{1}^{-1}\circ\cdots\circ\tau_{r-1}^{-1}(x)\ .\end{aligned}$$ We now have recovered the restrictions of all $p+1$ permutations $\sigma_j$ on all blocks $X_i$ of the decomposition of $[n]$, from the output of our map, which shows that it is injective. **Surjectivity:** We start from the data $(\widetilde{\sigma},\gamma,\tau,z)$ and construct $(\sigma_1,\ldots,\sigma_{p+1})\in\mathscr{C}_{p+1,n,1}^{X}$ which maps to it. This time, we use the equations ([\[jX1def\]](#jX1def){reference-type="ref" reference="jX1def"}), ([\[pplus1Xidef\]](#pplus1Xidef){reference-type="ref" reference="pplus1Xidef"}), ([\[jXidef\]](#jXidef){reference-type="ref" reference="jXidef"}), ([\[pplus1Xrdef\]](#pplus1Xrdef){reference-type="ref" reference="pplus1Xrdef"}) as definitions of $\sigma_1,\ldots,\sigma_{p+1}$ as maps $[n]\rightarrow[n]$. The use of ([\[pplus1Xrdef\]](#pplus1Xrdef){reference-type="ref" reference="pplus1Xrdef"}) requires some care, namely showing the uniqueness of $g$. Let $\widetilde{H}:=\langle\widetilde{\sigma}_1,\ldots,\widetilde{\sigma}_p\rangle$. The hypothesis on the tuple $\widetilde{\sigma}$ is that it is made of $p$ commuting permutations of the set $X_1$, such that the permutation action of $\widetilde{H}$ on $X_1$ is transitive. Suppose $g_1(1)=g_2(1)=z$ for some $g_1,g_2\in\widetilde{H}$. If $x\in X_1$, then $\exists h\in\widetilde{H}$, $h(1)=x$. By the Abelian property of $\widetilde{H}$, we have $$g_i(x)=g_i\circ h(1)=h\circ g_i(1)=h(z)\ ,$$ for $i=1$ as well as $i=2$, and thus $g_1(x)=g_2(x)$. Since $x$ is arbitrary, we have $g_1=g_2$. This justifies the use of ([\[pplus1Xrdef\]](#pplus1Xrdef){reference-type="ref" reference="pplus1Xrdef"}) as a definition of a map. 
We now have constructed the maps $\sigma_1,\ldots,\sigma_{p+1}$. It is immediate, from ([\[jX1def\]](#jX1def){reference-type="ref" reference="jX1def"}) and ([\[jXidef\]](#jXidef){reference-type="ref" reference="jXidef"}), that $\sigma_1,\ldots,\sigma_p$ are bijective within each $X_{\gamma(i)}$, $1\le i\le r$, and therefore over all of $[n]$. One easily checks also the commutation relations $\sigma_j\circ\sigma_{\ell}=\sigma_{\ell}\circ\sigma_j$, $1\le j,\ell\le p$, on each $X$ block, and therefore on $[n]$. From ([\[pplus1Xidef\]](#pplus1Xidef){reference-type="ref" reference="pplus1Xidef"}), we see that $\sigma_{p+1}$ is injective on each $X_{\gamma(i)}$, $1\le i\le r-1$, and the images of these restrictions are disjoint because $\gamma$ is a permutation. From ([\[pplus1Xrdef\]](#pplus1Xrdef){reference-type="ref" reference="pplus1Xrdef"}), it holds that $\sigma_{p+1}|_{X_{\gamma(r)}}:X_{\gamma(r)}\rightarrow X_1$ is bijective. As a result, $\sigma_{p+1}:[n]\rightarrow[n]$ is bijective. From ([\[pplus1Xidef\]](#pplus1Xidef){reference-type="ref" reference="pplus1Xidef"}) and ([\[jXidef\]](#jXidef){reference-type="ref" reference="jXidef"}), we also obtain $$\sigma_j\circ\sigma_{p+1}|_{X_{\gamma(i)}}= \tau_{i}\circ\cdots\circ\tau_1\circ\widetilde{\sigma}_j \circ\tau_{1}^{-1}\circ\cdots\circ\tau_{i-1}^{-1} =\sigma_{p+1}\circ\sigma_{j}|_{X_{\gamma(i)}}\ ,$$ for all $i,j$ such that $1\le j\le p$ and $1\le i\le r-1$. Finally, for all $j$, $1\le j\le p$, the restrictions of $\sigma_j\circ\sigma_{p+1}$ and $\sigma_{p+1}\circ\sigma_{j}$ on $X_{\gamma(r)}$ coincide, because $g$ and $\widetilde{\sigma}_j$ must commute. We have now checked that $(\sigma_1,\ldots,\sigma_{p+1})$ is a commuting tuple of permutations of $[n]$. The corresponding action is transitive because ([\[cyclicincleq\]](#cyclicincleq){reference-type="ref" reference="cyclicincleq"}) holds by construction and $\widetilde{\sigma}$ is assumed to act transitively on $X_1$. 
Checking that the produced tuple $(\sigma_1,\ldots,\sigma_{p+1})\in\mathscr{C}_{p+1,n,1}^{X}$ indeed maps to $(\widetilde{\sigma},\gamma,\tau,z)$ is straightforward. Therefore, our map is surjective. In order to finish the proof of Theorem [\[mainthm\]](#mainthm){reference-type="ref" reference="mainthm"}, we define $C(p,n):=\frac{A(p,n,1)}{(n-1)!}$. Since $A(1,n,1)=(n-1)!$ counts cyclic permutations of $n$ elements, we have $C(1,n)=1=B(1,n)$. The now-established recursion ([\[reductioneq\]](#reductioneq){reference-type="ref" reference="reductioneq"}) implies that $C$ satisfies $$C(p+1,n)=\sum_{rs=n}s\ C(p,s)\ .$$ By a trivial induction on $p$, $C(p,n)$ must coincide with $B(p,n)$ defined, e.g., in ([\[altdefeq\]](#altdefeq){reference-type="ref" reference="altdefeq"}). We plug $A(p,n,1)=(n-1)!\times B(p,n)$ into the result of Lemma [\[polymerlem\]](#polymerlem){reference-type="ref" reference="polymerlem"}, and Theorem [\[mainthm\]](#mainthm){reference-type="ref" reference="mainthm"} follows. 0◻ # On conjecture [\[mainconj\]](#mainconj){reference-type="ref" reference="mainconj"} {#conjsec} As mentioned in the introduction, the case $p=1$ of Conjecture [\[mainconj\]](#mainconj){reference-type="ref" reference="mainconj"} is well established. The opposite extreme "$p=\infty$" is settled in the companion article [@Abdesselam2]. Let us now focus on the $p=2$ case, and relate it to an already large body of literature, in particular, the work of Heim, Neuhauser, and many others.
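As a quick sanity check before specializing, the recursion $C(p+1,n)=\sum_{rs=n} s\,C(p,s)$ with $C(1,n)=1$, established in the proof above, is immediate to implement. The Python sketch below (an added illustration) verifies that $C(2,n)$ equals $\sigma(n)$, the sum of the divisors of $n$, in accordance with $B(2,n)$ being the number of index-$n$ subgroups of $\mathbb{Z}^2$.

```python
def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def C(p, n):
    # C(1, n) = 1 and C(p+1, n) = sum over factorizations rs = n of s * C(p, s)
    if p == 1:
        return 1
    return sum(s * C(p - 1, s) for s in divisors(n))

def sigma(n):
    return sum(divisors(n))

for n in range(1, 50):
    assert C(2, n) == sigma(n)
```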
Since, for $p=2$, $B(p-1,m)=B(1,m)=1$, the Bryan-Fulman identity ([\[BFidentity\]](#BFidentity){reference-type="ref" reference="BFidentity"}) simply reads $$\sum_{n=0}^{\infty}\sum_{k=0}^{n}\frac{1}{n!}A(2,n,k)x^k u^n= \prod_{m=1}^{\infty} (1-u^{m})^{-x}\ .$$ On the other hand, the so-called D'Arcais polynomials $P_n(x)$ are defined [@DArcais] by the generating function identity $$\prod_{m=1}^{\infty} (1-u^{m})^{-x}=\sum_{n=0}^{\infty} P_n(x) u^n\ .$$ The D'Arcais polynomials can therefore be expressed in terms of commuting pairs of permutations $$P_n(x)=\frac{1}{n!}\sum_{k=0}^n A(2,n,k)\ x^k\ . \label{DArcaiseq}$$ We are not aware of the commuting permutation interpretation ([\[DArcaiseq\]](#DArcaiseq){reference-type="ref" reference="DArcaiseq"}) of D'Arcais polynomials having been used in the number theory literature reviewed, e.g., in [@HeimN], and we hope it could be of help in this area. If one shifts the variable $x$ by one, one gets the standard formulation of the Nekrasov-Okounkov formula [@NekrasovO; @Westbury] $$\prod_{m=1}^{\infty} (1-u^{m})^{-x-1}=\sum_{n=0}^{\infty} Q_n(x) u^n$$ where $$Q_n(x)=\sum_{\lambda\vdash n}\prod_{\Box\in\lambda} \left(1+\frac{x}{h(\Box)^2}\right)\ .$$ Namely, the sum is over integer partitions $\lambda$ of $n$. The product is over cells in the usual Ferrers-Young diagram of the partition $\lambda$, and $h(\Box)$ denotes the hook length of that cell. Clearly $Q_n(x)=P_n(x+1)$ and therefore, the log-concavity (of the coefficients) of the polynomial $P_n$ would imply that of $Q_n$, as well as the unimodality of the latter, which was conjectured by Heim and Neuhauser as well as Amdeberhan (see [@HeimN] and references therein). As a strengthening of this unimodality conjecture, the log-concavity of the $P_n(x)$'s, i.e., the $p=2$ case of Conjecture [\[mainconj\]](#mainconj){reference-type="ref" reference="mainconj"} was stated as Challenge 3 in [@HeimN].
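Concretely, the $P_n(x)$ are easy to generate: taking the logarithmic derivative of $\prod_{m\ge 1}(1-u^m)^{-x}$ gives the standard recursion $n\,P_n(x)=x\sum_{k=1}^{n}\sigma(k)\,P_{n-k}(x)$, with $\sigma$ the sum-of-divisors function. The Python sketch below (an added illustration, using exact rational arithmetic) computes the first few $P_n$ and checks the specialization $P_n(1)=p(n)$, the number of partitions of $n$.

```python
from fractions import Fraction

def sigma(n):
    return sum(d for d in range(1, n + 1) if n % d == 0)

def darcais(N):
    # P_n as coefficient lists in x, from n * P_n = x * sum_k sigma(k) * P_{n-k}
    P = [[Fraction(1)]]  # P_0 = 1
    for n in range(1, N + 1):
        s = [Fraction(0)] * n  # coefficients of sum_k sigma(k) * P_{n-k}
        for k in range(1, n + 1):
            for i, c in enumerate(P[n - k]):
                s[i] += sigma(k) * c
        # multiplying by x/n shifts the coefficients up by one degree
        P.append([Fraction(0)] + [c / n for c in s])
    return P

def partition_count(n):
    p = [1] + [0] * n
    for part in range(1, n + 1):
        for total in range(part, n + 1):
            p[total] += p[total - part]
    return p[n]

P = darcais(10)
for n in range(11):
    assert sum(P[n]) == partition_count(n)  # P_n(1) = p(n)
assert P[2] == [Fraction(0), Fraction(3, 2), Fraction(1, 2)]  # P_2 = (x^2 + 3x)/2
```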
The authors of [@HeimN] also reported checking this numerically for all $n\le 1500$. For recent progress towards such log-concavity properties in the $p=2$ case, see [@HongZ; @Zhang]. Using Mathematica, we checked that Conjecture [\[mainconj\]](#mainconj){reference-type="ref" reference="mainconj"} is true for $p=3,4,5$ for all $n\le 100$. One can also test the conjecture by considering the dilute polymer gas regime, in the terminology of statistical mechanics, i.e., when $k$ is close to $n$ and most orbits are singletons, as in the next proposition. **Proposition 1**. *The inequality in Conjecture [\[mainconj\]](#mainconj){reference-type="ref" reference="mainconj"} holds for all $p\ge 1$, and $n\ge 3$, when $k=n-1$.* Let $$\Delta(p,n):=A(p,n,n-1)^2-A(p,n,n)\ A(p,n,n-2)\ .$$ From Theorem [\[mainthm\]](#mainthm){reference-type="ref" reference="mainthm"}, we easily deduce $$\begin{aligned} A(p,n,n) & = & 1 \\ A(p,n,n-1) & = & \binom{n}{2}\ (2^p-1) \\ A(p,n,n-2) & = & \binom{n}{3}\ (3^p-1)+\binom{n}{4}\ 3 (2^p-1)^2\ .\end{aligned}$$ Therefore $$\Delta(p,n)=\left[{\binom{n}{2}}^2-3\binom{n}{4}\right] (2^p-1)^2-\binom{n}{3}\ (3^p-1)\ .$$ As mentioned before, the conjecture is known for $p=1$, so now we focus on $p\ge 2$. If $p\ge 3$, then $2\left(\frac{1}{2}\right)^p+\left(\frac{3}{4}\right)^p\le \frac{43}{64}$, the $p=3$ value. Therefore, for $p\ge 3$, we have $4^p\ge 2\times 2^p+3^p$, which implies $$4^p-2\times 2^p+1\ge 3^p-1\ .$$ Since the last inequality also holds for $p=2$, we conclude that $(2^p-1)^2\ge 3^p-1$ for all $p\ge 2$. Hence $$\begin{aligned} \Delta(p,n) &\ge & \left[{\binom{n}{2}}^2-3\binom{n}{4}-\binom{n}{3}\right] (2^p-1)^2 \\ & = & \frac{1}{24}n(n-1)(3n^2+5n-10)(2^p-1)^2\ .\end{aligned}$$ Since $n\ge 3$ implies $3n^2+5n-10\ge 32>0$, we have $\Delta(p,n)>0$. 0◻ The first author thanks Ken Ono for introducing him to the Nekrasov-Okounkov formula, and the unimodality conjecture of Amdeberhan, Heim and Neuhauser. A.
Abdesselam, A local injective proof of log-concavity for increasing spanning forests. Discrete Math. **346** (2023), no. 12, Paper No. 113651. A. Abdesselam, Log-concavity with respect to the number of orbits for infinite tuples of commuting permutations. Preprint arXiv:2309.07358\[math.CO\], 2023. M. Atiyah, and G. Segal, On equivariant Euler characteristics. J. Geom. Phys. **6** (1989), no. 4, 671--677. J. R. Britnell, A formal identity involving commuting triples of permutations. J. Combin. Theory Ser. A **120** (2013), no. 4, 941--943. J. Bryan, and J. Fulman, Orbifold Euler characteristics and the number of commuting $m$-tuples in the symmetric groups. Ann. Comb. **2** (1998), no. 1, 1--6. F. D'Arcais, Développement en série. Intermédiaire Math. **20** (1913), 233--234. L. Dixon, J. A. Harvey, C. Vafa, and E. Witten, Strings on orbifolds. Nuclear Phys. B **261** (1985), no. 4, 678--686. B. Heim, and M. Neuhauser, Horizontal and vertical log-concavity. Res. Number Theory **7** (2021), no. 1, Paper No. 18, 12 pp. L. Hong, and S. Zhang, Towards Heim and Neuhauser's unimodality conjecture on the Nekrasov-Okounkov polynomials. Res. Number Theory **7** (2021), no. 1, Paper No. 17, 11 pp. M. J. Hopkins, N. J. Kuhn, and D. C. Ravenel, Morava $K$-theories of classifying spaces and generalized characters for finite groups. In: *Algebraic Topology (San Feliu de Guı́xols, 1990)*, pp. 186--209. Eds: J. Aguadé, M. Castellet, and F. R. Cohen, Lecture Notes in Math. **1509**, Springer-Verlag, Berlin, 1992. A. Lubotzky, and D. Segal, *Subgroup Growth*. Progr. Math. **212**, Birkhäuser Verlag, Basel, 2003. J. M. Møller, Equivariant Euler characteristics of partition posets. European J. Combin. **61** (2017), 1--24. N. A. Nekrasov, and A. Okounkov, Seiberg-Witten theory and random partitions. In: *The Unity of Mathematics*. In honor of the ninetieth birthday of I. M. Gelfand. Papers from the conference held in Cambridge, MA, August 31--September 4, 2003, pp. 525--596. Eds: P. 
Etingof, V. Retakh, and I. M. Singer, Progr. Math. **244**, Birkhäuser Boston, Inc., Boston, MA, 2006. L. Solomon, Partially ordered sets with colors. In: *Relations Between Combinatorics and Other Parts of Mathematics*. Proceedings of the Symposium in Pure Mathematics of the American Mathematical Society held at the Ohio State University, Columbus, Ohio, March 20--23, 1978, pp. 309--329. Ed: D. K. Ray-Chaudhuri, Proc. Sympos. Pure Math. **XXXIV**, American Mathematical Society, Providence, RI, 1979. R. P. Stanley, *Enumerative Combinatorics. Volume 1*. Second edition. Cambridge Stud. Adv. Math. **49**, Cambridge University Press, Cambridge, 2012. B. W. Westbury, Universal characters from the Macdonald identities. Adv. Math. **202** (2006), no. 1, 50--63. T. White, Counting free Abelian actions. Preprint arXiv:1304.2830\[math.CO\], 2013. S. Zhang, Log-concavity in powers of infinite series close to $(1-z)^{-1}$. Res. Number Theory **8** (2022), no. 4, Paper No. 66, 17 pp.
# Deformations of $\nabla$-Parallel Spinors {#section:canonically_parallel} In this section we study solutions of the equation $\nabla \psi =0$ on a $\tad$ manifold, and describe how they change under $\mathcal{H}$-homothetic deformations. It is known from [@3str Cor.4.1.1] that the $\nabla$-holonomy algebra satisfies $$\mathfrak{hol}(\nabla) \subseteq (\mathfrak{sp}(n-1)\oplus \mathfrak{sp}(1)) \oplus \mathfrak{so}(3) \subset \mathfrak{so}(4n-4) \oplus \mathfrak{so}(3),$$ so if there is a $\nabla$-parallel spinor then this containment must be proper (since the lift of the standard action of $\mathfrak{so}(3)$ does not annihilate any element of the spinor module, see e.g.[@Wang; @AHLspheres]). An important example of this occurs for *parallel* $\tad$ manifolds (i.e. those with $\beta:= 2(\delta-2\alpha)=0$), in which case it is known from [@3str Cor.4.1.2] that $\mathfrak{hol}(\nabla)\subseteq \mathfrak{sp}(n-1)$ (this reduction occurs as a result of the identities in Proposition [\[prelims:canonical_connection\]](#prelims:canonical_connection){reference-type="ref" reference="prelims:canonical_connection"}, which imply that $\nabla\xi_i = \nabla \varphi_i =0$ in the parallel case). It is known that the spin lift of $\mathfrak{sp}(n-1) \subset \mathfrak{so}(4n-4)\oplus \mathfrak{so}(3)$ annihilates $2n$ vectors in the spin representation (see [@AHLspheres §4]), and we obtain the following result: A parallel $\tad$ manifold admits at least $2n$ solutions of the equation $\nabla \psi = 0$. Solutions of $\nabla \psi =0$ arising in the parallel case are, a priori, unstable, in the sense that perturbing $\alpha,\delta$ can cause $\beta$ to become non-zero, increasing the size of $\mathfrak{hol}(\nabla)$. Philosophically, $\nabla$-parallel spinors should therefore occur as an isolated special case in a family of differential equations depending on $\alpha,\delta$. 
Using arguments similar to those in the previous section, we describe in the following theorem the family of such equations obtained by starting from a parallel $\tad$ structure and applying $\mathcal{H}$-homothetic deformations. Starting with a parallel $3$-$(\alpha_0,\delta_0)$-Sasaki structure ($\delta_0=2\alpha_0$), one checks that the parameters in Definition [\[H_deformation_definition\]](#H_deformation_definition){reference-type="ref" reference="H_deformation_definition"} needed to obtain a $\tad$ structure are $$a= \frac{2\alpha_0^2}{\alpha\delta}, \qquad b = \frac{2\alpha_0^2(2\alpha-\delta)}{\alpha\delta^2} ,\qquad c = \frac{2\alpha_0}{\delta}.$$ *Proof.* Recalling the difference tensor $\tau$ from Lemma [\[difference_tensor_lemma\]](#difference_tensor_lemma){reference-type="ref" reference="difference_tensor_lemma"}, ◻
--- abstract: | In this expository paper we discuss several properties of closed aspherical parabolic ${{\mathsf{G}}}$-manifolds $X/\Gamma$. These are manifolds $X/\Gamma$, where $X$ is a smooth contractible manifold with a parabolic ${{\mathsf{G}}}$-structure for which $\Gamma\leq \mathop{\rm Aut}\nolimits_{{\mathsf{G}}}(X)$ is a discrete subgroup acting properly discontinuously on $X$ with compact quotient. By a parabolic ${\mathsf{G}}$-structure on $X$ we have in mind a Cartan structure which is modeled on one of the classical parabolic geometries arising from simple Lie groups ${\mathsf{G}}$ of rank one. Our results concern in particular the properties of the automorphism groups $\mathop{\rm Aut}\nolimits_{{\mathsf{G}}}(X/\Gamma)$. Our main results show that the existence of certain parabolic ${{\mathsf{G}}}$-structures can pose strong restrictions on the topology of compact aspherical manifolds $X/\Gamma$ and their parabolic automorphism groups. In this realm we prove that any compact aspherical standard $CR$-manifold with virtually solvable fundamental group is diffeomorphic to a quotient of a Heisenberg manifold of complex type with its standard $CR$-structure. Furthermore we discuss the analogous properties of standard quaternionic contact manifolds in relation to the quaternionic Heisenberg group. address: - | Department of Mathematics\ University of Fribourg\ Chemin du Musée 23\ CH-1700 Fribourg, Switzerland - | Department of Mathematics, Josai University\ Keyaki-dai 1-1, Sakado, Saitama 350-0295, Japan author: - Oliver Baues - Yoshinobu Kamishima date: September 22, 2023 title: On the automorphism group of parabolic structures and closed aspherical manifolds --- [^1] # Introduction Let ${{\mathbb H}}^{n+1}_{{\mathbb K}}$ be the ${\mathbb K}$-hyperbolic space over ${\mathbb K}= {\mathbb R}, {\mathbb C}$ or ${\mathbb H}$. (Compare [@CG] and the definitions therein.)
The group of ${\mathbb K}$-hyperbolic isometries $\mathop{\rm PU}\nolimits(n+1,1;{\mathbb K})$ of ${{\mathbb H}}^{n+1}_{{\mathbb K}}$ extends to an analytic action on the boundary sphere $\partial {{\mathbb H}}^{n+1}_{{\mathbb K}}=S^{|{\mathbb K}|(n+1)-1}$. This corresponds to the rank one standard parabolic geometry on $S^{|{\mathbb K}|(n+1)-1}=\mathop{\rm PU}\nolimits(n+1,1;{\mathbb K})/P_{\mathbb K}$ where $P_{\mathbb K}$ is a maximal parabolic subgroup. In terms of classical geometry, according to whether ${\mathbb K}={\mathbb R},\,{\mathbb C}$ or ${\mathbb H}$, the model spaces $S^{|{\mathbb K}|(n+1)-1}=S^n,S^{2n+1}$ or $S^{4n+3}$ admit a *conformally flat structure, spherical $CR$-structure* or *spherical quaternionic contact structure*, respectively. (For precise definition of these notions, see [@OK1] or [@BG], and Section [3](#sec:parabolic_geom){reference-type="ref" reference="sec:parabolic_geom"} of this article, for example.) The geometry of these spaces is intricately reflected in the standard graduation of the Lie algebra of $\mathop{\rm PU}\nolimits(n+1,1;{\mathbb K})$, which is $\mathop{\rm PO}\nolimits(n+1,1)$ for ${\mathbb K}={\mathbb R}$ or $\mathop{\rm PU}\nolimits(n+1,1)$, $\mathop{\rm PSp}\nolimits(n+1,1)$ for ${\mathbb K}={\mathbb C},{\mathbb H}$, where: $$\begin{split} \mathfrak{so}(n+1,1)&= \mathfrak{g}^{-1} + \mathfrak{g}^{0} + \mathfrak{g}^{1}= {\mathbb R}^n + (\mathfrak{so}(n)+ {\mathbb R}) + ({\mathbb R}^n)^*, \\ \mathfrak{su}(n+1,1)&= \mathfrak{g}^{-2}+ \mathfrak{g}^{-1} + \mathfrak{g}^{0} + \mathfrak{g}^{1} + \mathfrak{g}^{2}\\ & ={\rm Im}{\mathbb C}+ {\mathbb C}^n + (\mathfrak{u}(n)+ {\mathbb R}) + ({\mathbb C}^n)^* + ({\rm Im}{\mathbb C})^*, \\ \mathfrak{sp}(n+1,1)&= \mathfrak{g}^{-2}+ \mathfrak{g}^{-1} + \mathfrak{g}^{0} + \mathfrak{g}^{1} + \mathfrak{g}^{2}\\ &={\rm Im}{\mathbb H}+ {\mathbb H}^n + (\mathfrak{sp}(n)+ \mathfrak{sp}(1) + {\mathbb R}) + ({\mathbb H}^n)^* + ({\rm Im}{\mathbb H})^* \end{split}$$ in which $\displaystyle 
P_{\mathbb K}$ is generated by the parabolic subalgebra $\displaystyle\mathfrak{p} =\mathfrak{g}^{0} + \mathfrak{g}^{1}$ for ${\mathbb K}={\mathbb R}$, or $\displaystyle\mathfrak{g}^{0} + \mathfrak{g}^{1} + \mathfrak{g}^{2}$ for ${\mathbb C}, {\mathbb H}$, respectively (see [@Al]). Put ${\mathsf{G}}=P_{\mathbb K}$, for brevity. By a *parabolic ${{\mathsf{G}}}$-structure* on a manifold $M$, we thus mean either one of a positive definite conformal structure, a strictly pseudo-convex $CR$-structure, or a positive definite quaternionic contact structure ($qc$-structure for short) on $M$. If $\mathop{\rm Aut}\nolimits_{{\mathsf{G}}}(M)$ is the group of structure-preserving transformations of such a parabolic ${{\mathsf{G}}}$-manifold $M$, then $\mathop{\rm Aut}\nolimits_{{\mathsf{G}}}(M)$ is called: (1) The group of conformal transformations $\mathop{\rm Conf}\nolimits(M,[g])$. (2) The group of $CR$-transformations $\mathop{\rm Aut}\nolimits_{CR}(M,\{{\mathsf{D}},J\})$, or (3) The group of $qc$-transformations $\mathop{\rm Aut}\nolimits_{qc}(M,{\mathsf{D}},\,\{J_\alpha\}_{\alpha=1}^3)$, respectively. #### *Rigidity of parabolic ${\mathsf{G}}$-manifolds with non-proper automorphism group* An important observation on parabolic ${\mathsf{G}}$-manifolds is that the automorphism group $\mathop{\rm Aut}\nolimits_{{\mathsf{G}}}(M)$ does not necessarily act properly on $M$. In particular this is the case for the model spheres $S^{|{\mathbb K}|(n+1)-1}$. Given a parabolic ${{\mathsf{G}}}$-manifold $M$, in case $\mathop{\rm Aut}\nolimits_{{\mathsf{G}}}(M)$ is a non-proper group, the parabolic ${\mathsf{G}}$-manifold $M$ is completely determined by works of D. V. Alekseevsky [@Al], J. Ferrand [@Ferrand], R. Schoen [@SC], J. Lee [@JL], C. Frances [@CF], S. Ivanov and D. Vassilev [@IV], Webster [@WE] and others, as follows: **Theorem A 1**.
*If the automorphism group $\mathop{\rm Aut}\nolimits_{{\mathsf{G}}}(M)$ of a parabolic ${\mathsf{G}}$-manifold $M$ does *not act properly*, then $M$ with its parabolic structure admits a structure preserving diffeomorphism to one of the standard model spaces as specified in $(1), (2), (3):$ $(1)$ $M$ is conformal to either the standard sphere $S^{n}$ or the Euclidean space ${\mathbb R}^n$. In this case $$\left( \mathop{\rm Iso}\nolimits(M),{\rm Conf}(M) \right)= \begin{cases} \left( {\rm O}(n+1), {\rm PO}(n+1,1) \right) & (M=S^{n}) \, ,\\ \left({\mathbb R}^n\rtimes {\rm O}(n), {\mathbb R}^n\rtimes ({\rm O}(n)\times {\mathbb R}^{+}) \right) & (M={\mathbb R}^n) \, . \end{cases}$$ $(2)$ $M$ has a spherical $CR$-structure isomorphic to either the standard sphere $S^{2n+1}$ or the Heisenberg Lie group ${\mathcal{N}}$ (with its canonical $CR$-structure). In this case $$\left({\operatorname*{Psh}\,}_{CR}(M),\mathop{\rm Aut}\nolimits_{CR}(M)\right)=\begin{cases} \left({\rm U}(n+1), {\rm PU}(n+1,1)\right) & (M=S^{2n+1}),\\ \left( {\mathcal{N}}\rtimes {\rm U}(n), {\mathcal{N}}\rtimes ({\rm U}(n)\times {\mathbb R}^{+}) \right) & (M={\mathcal{N}}). \end{cases}$$ $(3)$ $M$ has a spherical $qc$-structure isomorphic to either the standard sphere $S^{4n+3}$ or the quaternionic Heisenberg nilpotent Lie group ${\mathcal{M}}$. In this case $$\left({\operatorname*{Psh}\,}_{qc}(M),\mathop{\rm Aut}\nolimits_{qc}(M)\right)= \begin{cases} \left(\mathop{\rm Sp}\nolimits(n+1)\cdot \mathop{\rm Sp}\nolimits(1),\ {\rm PSp}(n+1,1)\right) & (M=S^{4n+3}),\\ \left({\mathcal{M}}\rtimes {\rm Sp}(n)\cdot \mathop{\rm Sp}\nolimits(1),\ {\mathcal{M}}\rtimes ({\rm Sp}(n)\cdot \mathop{\rm Sp}\nolimits(1)\times {\mathbb R}^{+})\right) & (M={\mathcal{M}}). \end{cases}$$* The theorem also gives the pairs $({\operatorname*{Psh}\,}_{{\mathsf{G}}}(M),\mathop{\rm Aut}\nolimits_{{\mathsf{G}}}(M))$ for the respective cases (1), (2), (3).
Here ${\operatorname*{Psh}\,}_{{\mathsf{G}}}(M)$ is the maximal subgroup of $\mathop{\rm Aut}\nolimits_{{\mathsf{G}}}(M)$ that acts properly on $M$. In particular, in case (1), ${\operatorname*{Psh}\,}_{{\mathsf{G}}}(M)$ is the subgroup of $\mathop{\rm Aut}\nolimits_{{\mathsf{G}}}(M)$ which preserves the canonical Riemannian metric that defines the conformal structure on $M$. In cases (2) and (3), the group ${\operatorname*{Psh}\,}_{{\mathsf{G}}}(M)$ coincides with the subgroup of $\mathop{\rm Aut}\nolimits_{{\mathsf{G}}}(M)$ that preserves the canonical contact forms defining the structure. #### *${\mathsf{G}}$-Hermitian subgroups of $\mathop{\rm Aut}\nolimits_{{\mathsf{G}}}(M)$* As the rigidity theorem shows, for any parabolic ${\mathsf{G}}$-manifold $M$, there exists a maximal subgroup ${\operatorname*{Psh}\,}_{{\mathsf{G}}}(M)$ of $\mathop{\rm Aut}\nolimits_{{\mathsf{G}}}(M)$ that acts properly on $M$. If the parabolic structure on $M$ is of type (2) or (3), it is defined by the conformal class of a contact form $\omega$. In this case we define ${\operatorname*{Psh}\,}_{{\mathsf{G}}}(M, \omega)$ to be the subgroup of $\mathop{\rm Aut}\nolimits_{{\mathsf{G}}}(M)$ that preserves $\omega$. Note that ${\operatorname*{Psh}\,}_{{\mathsf{G}}}(M, \omega)$ always acts properly on $M$ (see [@OK1]); in particular, ${\operatorname*{Psh}\,}_{{\mathsf{G}}}(M, \omega)$ is contained in ${\operatorname*{Psh}\,}_{{\mathsf{G}}}(M)$. The groups ${\operatorname*{Psh}\,}_{{\mathsf{G}}}(M, \omega)$ are also called ${\mathsf{G}}$-Hermitian subgroups of $\mathop{\rm Aut}\nolimits_{{\mathsf{G}}}(M)$, see Section [3](#sec:parabolic_geom){reference-type="ref" reference="sec:parabolic_geom"} below. Concerning the precise relation of ${\operatorname*{Psh}\,}_{{\mathsf{G}}}(M)$ with the ${\mathsf{G}}$-Hermitian subgroups, we additionally have the following: **Theorem 1** (see Proposition [Proposition 3](#prop:vanish){reference-type="ref" reference="prop:vanish"}, [@OK1]).
*Let $M$ be a parabolic ${\mathsf{G}}$-manifold and let $H \leq \mathop{\rm Aut}\nolimits_{{\mathsf{G}}}(M)$ be a closed subgroup. Then there exists a canonical cohomology class $[\lambda_{{\mathsf{G}}}]$ in the differentiable cohomology group $H^1_d(H,C^\infty(M,{\mathbb R}^+))$ with the following properties:* 1. *If $[\lambda_{{\mathsf{G}}}]=0$ and the ${\mathsf{G}}$-structure is conformal, then there exists a Riemannian metric $g$ representing the conformal structure such that $H$ is contained in $\mathop{\rm Iso}\nolimits(M,g)$.* 2. *If $[\lambda_{{\mathsf{G}}}]=0$ and the parabolic structure is of type $(2)$ or $(3)$, then there exists a compatible contact form $\omega$ (that is, $\omega$ represents the parabolic structure) such that $H$ is contained in $\displaystyle {\operatorname*{Psh}\,}_{{\mathsf{G}}}(M,\omega)$.* 3. *The group $H$ acts properly on $M$ if and only if $[\lambda_{{\mathsf{G}}}]=0$.* If $M$ is compact, Theorem [Theorem 1](#thm:proper){reference-type="ref" reference="thm:proper"} combined with Theorem A implies that $\mathop{\rm Aut}\nolimits_{{\mathsf{G}}}(M)$ is a compact Lie group, except if $M$ is one of the standard parabolic ${\mathsf{G}}$-spheres described in Theorem A. In the following we will be particularly interested in compact parabolic ${\mathsf{G}}$-manifolds. #### *Automorphisms of aspherical parabolic ${\mathsf{G}}$-manifolds* A compact manifold $M$ is called aspherical if its universal covering manifold $X$ is contractible. In that case $\mathop{\rm Aut}\nolimits_{{\mathsf{G}}}(M)$ is compact and its identity component $\mathop{\rm Aut}\nolimits_{{\mathsf{G}}}(M)^0$ is a compact torus. The latter fact is a consequence of the following fundamental result on compact Lie group actions on closed aspherical manifolds: **Theorem B 1** (Conner and Raymond [@CR1; @CR]). *Let $X/\Gamma$ be a closed aspherical Riemannian manifold.
Then the isometry group $\mathop{\rm Iso}\nolimits(X/\Gamma)$ is a finite group, or the identity component $\mathop{\rm Iso}\nolimits(X/\Gamma)^0$ is isomorphic to a $k$-torus $T^k$. Moreover, there is a central group extension $1{\rightarrow}{\mathbb Z}^k{\rightarrow}\, \Gamma{\longrightarrow}\, Q{\rightarrow}1$, where $Q=\Gamma/{\mathbb Z}^k$.* Theorem B shows that if the fundamental group $\Gamma$ of $M$ has no normal solvable subgroup then $\mathop{\rm Aut}\nolimits_{{\mathsf{G}}}(M)$ is a finite group. Well known examples of such aspherical manifolds $M$ are compact locally symmetric spaces of non-compact type (without local flat factors). In this context, we remark the following general fact: **Theorem 2**. *Let $X/\Gamma$ be a closed aspherical manifold such that $\Gamma$ has no normal solvable subgroup. If $X/\Gamma$ admits a parabolic ${{\mathsf{G}}}$-structure, then its automorphism group $\mathop{\rm Aut}\nolimits_{{\mathsf{G}}}(X/\Gamma)$ is a finite group which is isomorphic to a subgroup of $\mathop{\rm Out}\nolimits(\Gamma)$. In particular, $\Gamma$ is of finite index in its normalizer $N_{\mathop{\rm Aut}\nolimits_{{\mathsf{G}}}(X)}(\Gamma)$.* In the theorem $\mathop{\rm Out}\nolimits(\Gamma) = \mathop{\rm Aut}\nolimits(\Gamma)/ \mathop{\rm Inn}\nolimits(\Gamma)$ denotes the outer automorphism group of $\Gamma$. #### *$CR$- and $qc$-structures on closed aspherical manifolds*   From our viewpoint of *parabolic ${{\mathsf{G}}}$*-structures, we are interested in how the existence of a parabolic ${{\mathsf{G}}}$-structure determines the topology of $X/\Gamma$ and in particular its smooth structure. We show that the existence of certain $CR$- or $qc$-parabolic structures on $X/\Gamma$ which admit a non-trivial connected group of automorphisms $\mathop{\rm Aut}\nolimits_{{\mathsf{G}}}(X/\Gamma)^0$ poses a strong restriction on $\Gamma$.
In fact, we show that, under the assumption that $\Gamma$ is virtually solvable, any standard $CR$-manifold $X/\Gamma$ is *diffeomorphic* to a Heisenberg type manifold which is derived from the standard model space in (2) of Theorem A. In the $qc$-case a much stronger rigidity result holds: we show that *any* closed aspherical standard $qc$-manifold $X/\Gamma$ is *$qc$-equivalent* to a $qc$-manifold of quaternionic Heisenberg type (as in (3) of Theorem A). #### *Heisenberg manifolds* Recall from Theorem A that the Heisenberg Lie group ${\mathcal{N}}$ admits a maximal proper subgroup of affine transformations ${\mathcal{N}}\rtimes {\rm U}(n)$. The group ${\mathcal{N}}\rtimes {\rm U}(n) = {\operatorname*{Psh}\,}_{CR}({\mathcal{N}})$ then preserves the canonical left-invariant $CR$-structure, and the canonical pseudo-Hermitian structure on the Heisenberg group ${\mathcal{N}}$ (see, for example, [@Ka1; @OK]). Given a torsion-free discrete uniform subgroup $\Gamma$ contained in ${\mathcal{N}}\rtimes {\rm U}(n)$, the compact quotient manifold $$M= {\mathcal{N}}/\Gamma$$ is then called a *Heisenberg infra-nilmanifold*. #### *Standard $CR$-structures* Suppose that $M$ admits a $CR$-structure with contact form $\omega$, where $M$ is a $(2n+1)$-dimensional manifold. Then the $CR$-structure is called *standard* if the Reeb vector field associated to $\omega$ generates a one-parameter subgroup of ${\operatorname*{Psh}\,}_{CR}(M, \omega)$. In this sense, every Heisenberg infra-nilmanifold carries a canonical standard $CR$-structure, where $\mathop{\rm Aut}\nolimits_{CR}(M)^0 = \operatorname*{Psh}\,_{CR}(M, \omega)^0=S^1$. (The structure is induced from the standard $CR$-structure on the Heisenberg group ${\mathcal{N}}$.) See Section [4](#sec:CRsolv){reference-type="ref" reference="sec:CRsolv"} for details.
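For orientation, here is the simplest instance of this construction. (This is a standard example; the explicit group law below is one common normalization, assumed here only for illustration.) Write ${\mathcal{N}}={\mathbb C}^n\times {\mathbb R}$ with multiplication $$(z,t)\,(z',t')=\bigl(z+z',\ t+t'+{\rm Im}\langle z, z'\rangle\bigr), \qquad \langle z,z'\rangle=\sum_{j=1}^n z_j\overline{z'_j}.$$ Since ${\rm Im}\langle z,z'\rangle$ is an integer for Gaussian-integer vectors, the set $$\Gamma_0=\bigl\{(z,t)\in{\mathcal{N}} \;:\; z\in({\mathbb Z}+i\,{\mathbb Z})^n,\ t\in{\mathbb Z}\bigr\}$$ is a torsion-free discrete uniform subgroup of ${\mathcal{N}}\leq {\mathcal{N}}\rtimes {\rm U}(n)$, and $M={\mathcal{N}}/\Gamma_0$ is a Heisenberg nilmanifold: a circle bundle over the torus $T^{2n}={\mathbb C}^n/({\mathbb Z}+i\,{\mathbb Z})^n$ whose fibers come from the central circle $(\{0\}\times{\mathbb R})/(\{0\}\times{\mathbb Z})$. The left-invariant $CR$-structure of ${\mathcal{N}}$ descends to $M$, the Reeb flow is exactly this central circle action, and $\mathop{\rm Aut}\nolimits_{CR}(M)^0=S^1$, as stated above.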
Further examples of (aspherical) standard $CR$-manifolds may be constructed as $S^1$-bundles over any compact Hermitian locally symmetric space $B$, using the Kähler class of $B$ to determine the circle bundle. (In fact, this construction works over every compact Kähler manifold $B$. See [@OK] and the references therein.) #### *Virtually solvable fundamental group $\Gamma$* Let $M= X/\Gamma$ be a closed aspherical manifold such that $\Gamma$ is a virtually solvable group (which means that $\Gamma$ contains a solvable subgroup of finite index). In fact, since $M$ is aspherical, $\Gamma$ is a torsion-free virtually polycyclic group. It is known that every such group occurs as the fundamental group of a compact aspherical manifold $X/\Gamma$. Note further that the fundamental group $\Gamma$ determines $M$ up to homeomorphism, but not necessarily up to diffeomorphism, unless some further geometric structure on $M$ is specified that enforces smooth rigidity (cf$.$ [@Baues] and the references therein). Every *standard* $CR$-manifold with solvable fundamental group turns out to be diffeomorphic to a Heisenberg infra-nilmanifold: **Theorem 3**. *Let $M = X/\pi$ be a $(2n+1)$-dimensional closed aspherical positive definite strictly pseudoconvex *standard* $CR$-manifold. If $\pi$ is virtually solvable then there exists a discrete faithful representation $\rho : \pi{\rightarrow}\, {\mathcal{N}}\rtimes {\rm U}(n)$ and $M$ is diffeomorphic to the Heisenberg infra-nilmanifold ${\mathcal{N}}/\rho(\pi)$. In particular, $\pi$ is virtually nilpotent and $\mathop{\rm Aut}\nolimits_{CR}(M)^0=S^1$.* By choosing a representative contact form $\omega$ for the $CR$-structure on $M$, we obtain a pseudo-Hermitian manifold $(M,(\omega,J))$. Note that a standard pseudo-Hermitian manifold $(M,(\omega,J))$ is equivalent to a Sasaki manifold $(M, g, (\omega,J),\xi)$ by assigning the positive definite Riemannian metric $g=\omega\cdot \omega+d\omega\circ J$, called the Sasaki metric.
(Compare [@OK; @Bl].) By the existence of the $S^1$-action generated by the Reeb field, we see that $M$ admits a fibering over a Kähler orbifold. We will prove the following (Theorem $3'$): *If the fundamental group of a closed aspherical Sasaki manifold $M$ is virtually solvable, then $M$ is diffeomorphic to a Heisenberg infra-nilmanifold.* A Sasaki manifold is called *regular* if the $S^1$-action generated by the Reeb field is free. For a regular Sasaki manifold, Theorem $3'$ is stated and proved in [@OK Corollary 2, Proposition 6.10]. Assuming that the fundamental group of $M$ is nilpotent, a proof of Theorem $3'$ involving methods of rational homotopy theory is provided in [@NY]. The following result, obtained by O. Baues and V. Cortés [@BC], is used in our proof. **Theorem C 1**. *Let $X/\Gamma$ be a closed aspherical Kähler manifold with $\Gamma$ virtually solvable. Then a finite cover of $X/\Gamma$ is biholomorphic to a complex torus.* We next investigate whether similar results hold for $(4n+3)$-dimensional $qc$-manifolds. #### *Quaternionic Heisenberg manifolds* A quaternionic Heisenberg Lie group is a $(4n+3)$-dimensional nilpotent Lie group $\mathcal M$ with center ${\mathbb R}^3$ whose quotient ${\mathcal{M}}/{\mathbb R}^3$ is isomorphic to the quaternionic vector space ${\mathbb H}^n$; the quaternionic structure of ${\mathbb H}^n$ determines the Lie product on ${\mathcal{M}}$ (see [@Al; @Ka]). The group ${\mathcal{M}}$ admits a maximal proper subgroup of affine transformations ${\operatorname*{Psh}\,}_{qc}({\mathcal{M}})={\mathcal{M}}\rtimes \mathop{\rm Sp}\nolimits(n)\cdot \mathop{\rm Sp}\nolimits(1)$. Then ${\operatorname*{Psh}\,}_{qc}({\mathcal{M}})$ preserves the canonical $qc$-structure and the canonical $qc$-Hermitian structure on ${\mathcal{M}}$. (Compare Theorem A.) A quotient ${\mathcal{M}}/\Gamma$ by a torsion-free discrete uniform subgroup $\Gamma\leq {\operatorname*{Psh}\,}_{qc}({\mathcal{M}})$ is called a quaternionic Heisenberg infra-nilmanifold.
For any $qc$-manifold $M$, let ${\mathcal{T}}=\{\xi_1,\xi_2,\xi_3\}$ denote the three-dimensional integrable distribution complementary to the codimension three subbundle ${\mathsf{D}}$ on $M$ determined by the $qc$-structure, called the *$qck$-distribution* (see Section [3](#sec:parabolic_geom){reference-type="ref" reference="sec:parabolic_geom"}). If ${\mathcal{T}}$ generates a subgroup of ${\operatorname*{Psh}\,}_{qc}(M)$, then $M$ is called a *standard* $qc$-manifold. #### *Remark* When $M$ is a closed aspherical standard $qc$-manifold, ${\mathcal{T}}$ generates a three-torus $T^3\leq {\operatorname*{Psh}\,}_{qc}(M)$. In this case, the $qc$-structure on $M$ does not give rise to a $3$-Sasaki structure, since the $qck$-distribution ${\mathcal{T}}$ does not generate an $\mathop{\rm Sp}\nolimits(1)$-action on $M$ but a $T^3$-action [@Ka Theorem 5.4]. The quotient space $M/{\mathcal{T}}$ then inherits a hyper-Kähler structure from $M$. We have the following strong rigidity property for standard $qc$-structures on closed aspherical manifolds. **Theorem 4**. *Let $X/\pi$ be a positive definite closed aspherical *standard* $qc$-manifold. Then $X/\pi$ is $qc$-isometric to a quaternionic Heisenberg infra-nilmanifold ${\mathcal{M}}/\pi$, where $\pi\leq {\mathcal{M}}\rtimes \mathop{\rm PSp}\nolimits(n)$ is a discrete uniform subgroup. In particular, $\pi$ is virtually nilpotent and $\mathop{\rm Aut}\nolimits_{qc}(X/\pi)^0=T^3$ is a three-torus.* In the context of hyper-Kähler manifolds the following striking fact plays an important role in the proof of Theorem [Theorem 4](#Tqc){reference-type="ref" reference="Tqc"}: #### *Rigidity of aspherical hyper-Kähler manifolds* Recall that the quaternionic torus $T^n_{\mathbb H}$ admits a natural flat homogeneous hyper-Kähler structure which is induced by a linear quaternionic Hermitian form on ${\mathbb H}^n$ (cf$.$ [@BG]). We then have:  *Let $M$ be a closed aspherical hyper-Kähler manifold.
Then a finite cover of $M$ is *hypercomplexly isometric* to a quaternionic torus $T^n_{\mathbb H}$ with its natural flat hyper-Kähler structure.* Here a hypercomplex isomorphism is simultaneously a holomorphic diffeomorphism with respect to each complex structure. This strong rigidity for aspherical hyper-Kähler manifolds is a consequence of the Calabi-Yau theorem and the Cheeger-Gromoll splitting theorem. For a related result on complex hyperhermitian surfaces, see [@Bo]. #### *The paper is organized as follows* In Section 2, we prove Theorem [Theorem 2](#thm:1,2,3){reference-type="ref" reference="thm:1,2,3"} of the Introduction, saying that $\mathop{\rm Aut}\nolimits_{{\mathsf{G}}}(X/\Gamma)$ is finite whenever $\Gamma$ has no normal solvable subgroup. In Section 3, we introduce a cohomology invariant for the group $\mathop{\rm Aut}\nolimits_{{\mathsf{G}}}(M)$ (see Proposition [Proposition 3](#prop:vanish){reference-type="ref" reference="prop:vanish"}) and use it to show, in Theorem [Theorem 5](#th:equal){reference-type="ref" reference="th:equal"}, that the Lie groups $\mathop{\rm Aut}\nolimits_{{\mathsf{G}}}(M)$ and ${\operatorname*{Psh}\,}_{{\mathsf{G}}}(M)$ coincide when $\mathop{\rm Aut}\nolimits_{{\mathsf{G}}}(M)$ acts properly on $M$. In particular, when $M$ is compact, $\mathop{\rm Aut}\nolimits_{{\mathsf{G}}}(M)$ is a compact Lie group. We prove Theorem [Theorem 3](#Tcr){reference-type="ref" reference="Tcr"} in Section [4](#sec:CRsolv){reference-type="ref" reference="sec:CRsolv"}. Section [5](#sec:qcsolv){reference-type="ref" reference="sec:qcsolv"} concerns the properties of standard $qc$-manifolds. # ${{\mathsf{G}}}$-manifolds with compact automorphism groups {#sec:compact_auto} We study the structure of $\mathop{\rm Aut}\nolimits_{{\mathsf{G}}}(X/\Gamma)$ for closed aspherical ${{\mathsf{G}}}$-manifolds. #### *Lifting Lemma* To prepare for our proof, we begin with some well-known general setup.
Let $\tilde M$ be the universal covering space of $M=\tilde M/\pi$ and denote by $N_{{\rm Diff}(\tilde M)}(\pi)$ the normalizer of $\pi$ in ${\rm Diff}(\tilde M)$. The conjugation $$\displaystyle \mu: N_{{\rm Diff}(\tilde M)}(\pi){\rightarrow}{\rm Aut}(\pi)$$ defined by $\mu(\tilde f)(\gamma)=\tilde f\circ\gamma\circ \tilde f^{-1}$ $({}^\forall\,\gamma\in \pi)$ induces a homomorphism $$\varphi : {\rm Diff}(M) \to {\rm Out}(\pi) \; .$$ **Lemma 1** (see [@LR]). *There is an exact commutative diagram: $$\label{lift1} \begin{CD} @. 1@. 1 @. 1\\ @. @VVV @VVV @VVV \\ 1@>>> Z(\pi)@>>> \pi @>\mu>> {\rm Inn}(\pi)\\ @. @VVV @VVV @VVV\\ 1@>>>Z_{{\rm Diff}(\tilde M)}(\pi)@>>> N_{{\rm Diff}(\tilde M)}(\pi) @>\mu>> {\rm Aut}(\pi)\\ @. @V\nu VV @V\nu VV @VVV\\ 1@>>>{\rm ker}\,\varphi@>>>{\rm Diff}(M) @>\varphi>> {\rm Out}(\pi)\\ @. @VVV @VVV @VVV\\ @. 1@. 1 @. 1\\ \end{CD}$$* Here, $Z_{{\rm Diff}(\tilde M)}(\pi)$ denotes the centralizer of $\pi$ in ${\rm Diff}(\tilde M)$. #### *Application to automorphisms of aspherical manifolds* ***Proof of Theorem $\ref{thm:1,2,3}$**.* As we show now, when ${\rm Diff}(\tilde M)$ in Lemma [Lemma 1](#lem:lift1_diagram){reference-type="ref" reference="lem:lift1_diagram"} is replaced by $\mathop{\rm Aut}\nolimits_{{\mathsf{G}}}(X)$, the normalizer $N_{\mathop{\rm Aut}\nolimits_{{\mathsf{G}}}(X)}(\Gamma)$ turns out to be discrete and the homomorphism $\mu: N_{\mathop{\rm Aut}\nolimits_{{\mathsf{G}}}(X)}(\Gamma){\rightarrow}{\rm Aut}(\Gamma)$ injective. As in diagram [\[lift1\]](#lift1){reference-type="eqref" reference="lift1"}, there is an exact sequence: $$\label{ex:1} 1{\rightarrow}\ker \varphi{\rightarrow}\mathop{\rm Aut}\nolimits_{{\mathsf{G}}}(X/\Gamma)\stackrel{\varphi}{\longrightarrow}\mathop{\rm Out}\nolimits(\Gamma)$$ such that $\mathop{\rm Aut}\nolimits_{{\mathsf{G}}}(X/\Gamma)^0\leq \ker \varphi$. Let $Z_{\mathop{\rm Aut}\nolimits_{{\mathsf{G}}}(X)}(\Gamma)$ be the centralizer of $\Gamma$ in $\mathop{\rm Aut}\nolimits_{{\mathsf{G}}}(X)$.
Then, as can be inferred from diagram [\[lift1\]](#lift1){reference-type="eqref" reference="lift1"}, $\ker \varphi$ is associated with the covering group extension: $$1{\rightarrow}\, Z(\Gamma){\rightarrow}\, Z_{\mathop{\rm Aut}\nolimits_{{\mathsf{G}}}(X)}(\Gamma){\longrightarrow}\, \ker \varphi {\rightarrow}1.$$ Here $Z(\Gamma)$ denotes the center of $\Gamma$; by the hypothesis $Z(\Gamma)=\{1\}$, so that $$\label{ex:2} Z_{\mathop{\rm Aut}\nolimits_{{\mathsf{G}}}(X)}(\Gamma)\cong \ker \varphi.$$ Now $\mathop{\rm Aut}\nolimits_{{\mathsf{G}}}(X/\Gamma)$ is compact by Corollary [Corollary 7](#eq:Pcomp){reference-type="ref" reference="eq:Pcomp"}. Since a compact connected Lie group acting on a closed aspherical manifold is a torus (cf$.$ Theorem B), $\mathop{\rm Aut}\nolimits_{{\mathsf{G}}}(X/\Gamma)^0=T^k$, $k\geq 0$. Then the orbit map ${\rm ev}(t)=tz$ of $T^k$ into $X/\Gamma$ at any point $z\in X/\Gamma$ induces an injective homomorphism $\displaystyle {\rm ev}_*: {\mathbb Z}^k{\longrightarrow}\, \Gamma$ such that ${\rm ev}_*( {\mathbb Z}^k)\leq Z(\Gamma)$ (cf$.$ [@CR Lemma 4.2]). As $Z(\Gamma)$ is trivial by assumption, we have $k=0$. That is, $\mathop{\rm Aut}\nolimits_{{\mathsf{G}}}(X/\Gamma)^0=\{1\}$. In particular $\mathop{\rm Aut}\nolimits_{{\mathsf{G}}}(X/\Gamma)$ is a finite group. Therefore, the subgroup $\ker \varphi$ is finite by [\[ex:1\]](#ex:1){reference-type="eqref" reference="ex:1"}, and so is $Z_{\mathop{\rm Aut}\nolimits_G(X)}(\Gamma)$ by [\[ex:2\]](#ex:2){reference-type="eqref" reference="ex:2"}. As $Z_{\mathop{\rm Aut}\nolimits_{{\mathsf{G}}}(X)}(\Gamma)$ is centralized by $\Gamma$, it follows that $\displaystyle Z_{\mathop{\rm Aut}\nolimits_{{\mathsf{G}}}(X)}(\Gamma)=\{1\}$ by [@BK Theorem 2]. Hence $\ker \varphi=\{1\}$, that is, $\displaystyle \varphi : \mathop{\rm Aut}\nolimits_{{\mathsf{G}}}(X/\Gamma){\rightarrow}\mathop{\rm Out}\nolimits(\Gamma)$ is injective.
By diagram [\[lift1\]](#lift1){reference-type="eqref" reference="lift1"}, the quotient of $N_{\mathop{\rm Aut}\nolimits_{{\mathsf{G}}}(X)}(\Gamma)$ by $\Gamma$ is isomorphic to $\mathop{\rm Aut}\nolimits_{{\mathsf{G}}}(X/\Gamma)$. As $\mathop{\rm Aut}\nolimits_{{\mathsf{G}}}(X/\Gamma)$ is finite, $\Gamma$ is of finite index in $N_{\mathop{\rm Aut}\nolimits_{{\mathsf{G}}}(X)}(\Gamma)$. ◻ Along the same lines as the above proof, we have: **Corollary 2**. *Let $X/\Gamma$ be a closed aspherical manifold. Suppose that $G\leq \operatorname*{Diff}\,(X/\Gamma)$ is a connected Lie group. If $Z(\Gamma)$ is trivial (or finite), then $G$ is a simply connected solvable Lie group.* # Invariant substructures for ${\operatorname*{Psh}\,}_{{\mathsf{G}}}(M)$ {#sec:parabolic_geom} Let $M$ be a parabolic ${\mathsf{G}}$-manifold. If $M$ is of $CR$- or $qc$-type, we shall prove that, when $\mathop{\rm Aut}\nolimits_{{\mathsf{G}}}(M)$ acts properly, there exists a representative form $\omega$ for the parabolic ${\mathsf{G}}$-structure such that $\mathop{\rm Aut}\nolimits_{{\mathsf{G}}}(M)$ coincides with ${\operatorname*{Psh}\,}_{{\mathsf{G}}}(M,\omega)$. (The obvious analogue also holds for conformal structures defined by Riemannian metrics. See [@OK1] and references therein for both results.) #### *Geometry associated with parabolic ${\mathsf{G}}$-structures* Whereas a conformal structure is equivalent to a conformal class of Riemannian metrics, the classical geometries underlying the case (2) and (3) parabolic geometries are considerably more involved.
Let us thus briefly recall the geometric data associated with case (2) and (3) parabolic geometries: In the case of $CR$-structures, we have a contact form $\omega$ on a connected smooth manifold $M$, which is determined up to scaling with a positive function, and a complex structure $J$ on the contact bundle $\ker \omega$ which is compatible with $\omega$ in the sense that the Levi form $d\omega\circ J$ is a positive definite Hermitian form on $\ker \omega$. These data define a *strictly pseudo-convex* $CR$-structure. Note that $\omega$ is defined up to a conformal change with a positive function. Let a $qc$-structure on a $(4n+3)$-manifold $M$ be given. This amounts to a positive definite codimension three subbundle ${\mathsf{D}}$, which is non-integrable and such that ${\mathsf{D}}+ [{\mathsf{D}}, {\mathsf{D}}]=TM$. Moreover, there is a hypercomplex structure $\{J_k\}_{k=1}^3$ on ${\mathsf{D}}$, and an ${\rm Im}\,{\mathbb H}$-valued $1$-form $\omega=\omega_1 i+ \omega_2 j + \omega_3 k$. It is also required that ${\mathsf{D}}=\mathop{\ker}\,\omega$ and that the forms $d\omega_k \circ J_k$ are positive definite Hermitian forms. Note that $\omega$ is defined up to a conformal change with a positive function and conjugation with $\mathop{\rm Sp}\nolimits(1)$.
#### *Description of the automorphism groups of parabolic ${\mathsf{G}}$-structures* The associated automorphism groups $\mathop{\rm Aut}\nolimits_{{\mathsf{G}}}(M)$ are then described as follows (for cases (2) and (3) compare [@OK1; @Ka]): $$\label{Autqcr} \begin{cases} (1)\,\mathop{\rm Conf}\nolimits(M)=\bigl\{\, \alpha\in\operatorname*{Diff}\,(M)\mid \alpha^*g=u_\alpha\,g \,\bigr\},\\ (2)\,\mathop{\rm Aut}\nolimits_{CR}(M, \{\omega,J\}) =\{ \, \alpha\in\operatorname*{Diff}\,(M)\mid \alpha^*\omega=u_\alpha\, \omega, \\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \alpha_*\circ J=J\circ \alpha_*|_{\ker\, \omega} \, \bigr\}, \\ (3) \mathop{\rm Aut}\nolimits_{qc}(M, (\omega,\{J_k\}_{k=1}^3))=\bigl\{\, \alpha\in\operatorname*{Diff}\,(M)\mid \\ \ \ \ \ \ \ \ \ \ \ \ \alpha^*\omega=u_\alpha\; a_\alpha\cdot\omega\cdot\overline{a_\alpha}, \ \ \alpha_* \circ J_k=\sum_{j=1}^{3}a_{kj}J_j\circ \alpha_*|_{\ker\, \omega} \,\bigr\}, \\ \end{cases}$$ where $u_\alpha\in C^\infty(M,{\mathbb R}^+)$, $a_\alpha\in C^\infty(M,\mathop{\rm Sp}\nolimits(1))$, and the matrix $(a_{kj})\in C^\infty(M,\mathop{\rm SO}\nolimits(3))$ is given by the conjugation action of $a_\alpha$ on ${\rm Im}\,{\mathbb H}$. We would like to emphasize that the definition of $\mathop{\rm Aut}\nolimits_{{\mathsf{G}}}(M)$ does not depend on the particular choice of data $g$ or $\omega$ in their conformal class. In fact, the choice of $g$ or $\omega$ amounts to choosing a representative geometry. The symmetries of the representative geometry define a subgroup of $\mathop{\rm Aut}\nolimits_{{\mathsf{G}}}(M)$. 
These groups are the isometry group $\mathop{\rm Iso}\nolimits(M,g)$, respectively the pseudo-Hermitian groups ${\operatorname*{Psh}\,}_{CR}(M, \{\omega,J\})$ and ${\operatorname*{Psh}\,}_{qc}(M, (\omega,\{J_k\}_{k=1}^3))$: $$\label{Pshqcr} \begin{cases} (1)\,\mathop{\rm Iso}\nolimits(M,g)=\bigl\{\alpha\in\operatorname*{Diff}\,(M)\mid \alpha^*g=g\ \bigr\},\\ (2)\, {\operatorname*{Psh}\,}_{CR}(M, \omega) =\{\alpha\in\operatorname*{Diff}\,(M)\mid \alpha^*\omega=\omega, \\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \alpha_*\circ J=J\circ \alpha_*|_{\ker\, \omega}\bigr\}, \\ (3)\, {\operatorname*{Psh}\,}_{qc}(M, \omega)= \bigl\{\alpha\in\operatorname*{Diff}\,(M)\mid \alpha^*\omega=a_\alpha\cdot \omega\cdot \overline{a_\alpha},\\ \, \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \alpha_* \circ J_k=\sum_{j=1}^{3}a_{kj}J_j\circ \alpha_*|_{\ker\, \omega} \bigr\}. \\ \end{cases}$$ Note that the groups in [\[Pshqcr\]](#Pshqcr){reference-type="eqref" reference="Pshqcr"} vary considerably under a conformal change of $g$, respectively $\omega$, while the group $\mathop{\rm Aut}\nolimits_{\mathsf{G}}(M)$ is preserved. ## Conformal invariant cohomology class {#Confinv} The space $C^\infty(M,{\mathbb R}^+)$ of smooth positive functions on $M$ is endowed with an action of $\mathop{\rm Aut}\nolimits_{{\mathsf{G}}}(M)$, where for $\alpha\in \mathop{\rm Aut}\nolimits_{{\mathsf{G}}}(M)$, $f\in C^\infty(M,{\mathbb R}^+)$, we have $$(\alpha_* f)(x)=f(\alpha^{-1}x)\,\ (x\in M).$$ Thus $C^\infty(M,{\mathbb R}^+)$ is a smooth $\mathop{\rm Aut}\nolimits_{{\mathsf{G}}}(M)$-module. To any such module there is an associated differentiable group cohomology $H^*_d$ for the Lie group $\mathop{\rm Aut}\nolimits_{{\mathsf{G}}}(M)$ (see [@OK1] for a detailed explanation).
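Since the coboundary operator $\delta^0$ is used in the arguments below without further comment, we record the low-degree conventions explicitly. (This is the standard definition, written multiplicatively because the coefficient module $C^\infty(M,{\mathbb R}^+)$ is a group under pointwise multiplication.) The coboundary of a zero-cochain $v\in C^\infty(M,{\mathbb R}^+)$ is the one-cochain $$\delta^0(v)(\alpha)=\alpha_*v\cdot v^{-1} \qquad (\alpha\in \mathop{\rm Aut}\nolimits_{{\mathsf{G}}}(M)),$$ and a one-cochain $\lambda: \mathop{\rm Aut}\nolimits_{{\mathsf{G}}}(M)\to C^\infty(M,{\mathbb R}^+)$ is a one-cocycle precisely when $$\lambda(\alpha\beta)=\lambda(\alpha)\cdot \alpha_*\lambda(\beta) \qquad (\alpha,\beta\in \mathop{\rm Aut}\nolimits_{{\mathsf{G}}}(M)).$$ Thus $[\lambda]=0$ in $H^1_d$ if and only if $\lambda=\delta^0(v)$ for some $v\in C^\infty(M,{\mathbb R}^+)$.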
We explain now that the action of $\mathop{\rm Aut}\nolimits_{{\mathsf{G}}}(M)$ on $C^\infty(M,{\mathbb R}^+)$ gives rise to a natural cohomology class that carries geometric information about the dynamics of this action. #### *Construction of the associated cohomology class* **Proposition 3**. *For any closed subgroup $L$ of $\mathop{\rm Aut}\nolimits_{{\mathsf{G}}}(M)$ there is a natural cohomology class $[\lambda_{{\mathsf{G}}}]\in H^1_d(L,C^\infty(M,{\mathbb R}^+))$, which is associated to the parabolic ${\mathsf{G}}$-structure on $M$.* *Proof.* Let $\alpha\in \mathop{\rm Aut}\nolimits_{{\mathsf{G}}}(M)$ be such that $\displaystyle \alpha^*g=u_\alpha g$ for a Riemannian metric, $\alpha^*\omega=u_\alpha\omega$ for a contact form, or $\displaystyle \alpha^*\omega=u_\alpha\, a_\alpha\cdot \omega\cdot\overline{a_\alpha}$ for a quaternionic contact form, representing the parabolic ${\mathsf{G}}$-structure on $M$. We construct $[\lambda_{\mathsf{G}}]$ for the $qc$-group $\mathop{\rm Aut}\nolimits_{qc}(M)$ in place of $\mathop{\rm Aut}\nolimits_{{\mathsf{G}}}(M)$. (But the proof holds also for $\mathop{\rm Conf}\nolimits(M)$ and $\mathop{\rm Aut}\nolimits_{CR}(M)$; see [@OK1].) Let $\alpha, \beta\in \mathop{\rm Aut}\nolimits_{qc}(M)$. We write $\alpha \, \beta \in \mathop{\rm Aut}\nolimits_{qc}(M)$ for the composition of the two $qc$-transformations. We calculate $$\begin{split} (\alpha\, \beta)^*\omega&=u_{\alpha\beta}\; a_{\alpha\beta} \cdot \omega\cdot\overline{a_{\alpha\beta}},\\ \beta^*\alpha^*\omega&= \beta^*(u_\alpha\, a_\alpha\cdot \omega\cdot \overline{a_\alpha}) =(\beta^*u_\alpha\, u_\beta) (\beta^*a_\alpha\cdot a_\beta) \cdot \omega\cdot (\overline{\beta^*a_\alpha\cdot a_\beta}). \\ \end{split}$$ (Note that $\displaystyle \beta^* \,\overline{a_\alpha}\, (x)= \overline{a_\alpha}\,(\beta x)=\overline{a_\alpha(\beta x)}= \overline{\beta^*a_\alpha}\,(x)$.)
Taking the norm, we have $$\displaystyle ||(\alpha\beta)^*\omega||=u_{\alpha\beta}\, ||\omega||=\beta^*u_\alpha\, u_\beta\, ||\omega|| \, .$$ Thus the smooth maps $u_\alpha, u_\beta, u_{\alpha\beta}\in C^\infty(M,{\mathbb R}^+)$ satisfy $$\label{crossed} u_{\alpha\beta}=\beta^*u_\alpha\, u_\beta\ \ \mbox{on}\ M.$$ Define $\lambda_{{\mathsf{G}}} = \lambda_{{\mathsf{G}},\omega}: \mathop{\rm Aut}\nolimits_{{\mathsf{G}}}(M)\to C^\infty(M,{\mathbb R}^+)$ to be $$\label{crossedhom} \lambda_{{\mathsf{G}}} (\alpha)=\alpha_*u_\alpha.$$ In particular, $\displaystyle \lambda_{{\mathsf{G}}}(\alpha)(x)=u_\alpha(\alpha^{-1}x)$. We observe that $\lambda_{{\mathsf{G}}}$ is a crossed homomorphism with respect to the representation of $\mathop{\rm Aut}\nolimits_{{\mathsf{G}}}(M)$ on $C^\infty(M,{\mathbb R}^+)$: $$\begin{split} \lambda_{{\mathsf{G}}}(\alpha\beta) \, (x)&= \, (\alpha\beta)_*u_{\alpha\beta}\, (x)=u_{\alpha\beta} \, (\beta^{-1}\alpha^{-1}x)\\ &=\beta^*u_\alpha\, (\beta^{-1}\alpha^{-1}x)\; u_\beta\, (\beta^{-1}\alpha^{-1}x)\ \, (\text{by} \eqref{crossed})\\ &= \lambda_{{\mathsf{G}}}(\alpha)\, (x)\; \alpha_*\lambda_{{\mathsf{G}}}(\beta)\, (x) =(\lambda_{{\mathsf{G}}}(\alpha)\;\alpha_*\lambda_{{\mathsf{G}}}\, (\beta))\, (x). \end{split}$$ Hence, $\displaystyle \lambda_{{\mathsf{G}}}(\alpha\beta)=\lambda_{{\mathsf{G}}}(\alpha)\cdot\alpha_*\lambda_{{\mathsf{G}}}(\beta)$, that is, $\lambda_{{\mathsf{G}}}$ is a crossed homomorphism and thus a one-cocycle for the differentiable cohomology of $\mathop{\rm Aut}\nolimits_{{\mathsf{G}}}(M)$ with coefficients in $C^\infty(M,{\mathbb R}^+)$. Let $\displaystyle [\lambda_{{\mathsf{G}}}]\in H^1_d(\mathop{\rm Aut}\nolimits_{{\mathsf{G}}}(M),\,C^\infty(M,{\mathbb R}^+))$ denote its corresponding cohomology class. We show $[\lambda_{{\mathsf{G}}}]$ is a conformal ${\mathsf{G}}$-invariant. 
For $qc$-forms $\omega, \omega'$, suppose that $\omega'$ is $qc$-equivalent to $\omega$, that is, $$\label{conG} \omega'=u \; b\cdot \omega\cdot \bar b, \; \; \; \text{ for } u\in C^\infty(M,{\mathbb R}^+), \; b \in C^\infty(M,\mathop{\rm Sp}\nolimits(1)) \; .$$ For $\alpha\in \mathop{\rm Aut}\nolimits_{{\mathsf{G}}}(M)$, write $\displaystyle \alpha^*\omega'=u'_\alpha\; a'_{\alpha} \cdot \omega'\cdot \overline{a'_{\alpha}}$. Thus $$\lambda_{{\mathsf{G}}, \omega'} (\alpha)=\alpha_*u'_\alpha\, .$$ Then $\displaystyle \alpha^*\omega'=u'_\alpha\; a'_{\alpha} \cdot (u \; b\cdot \omega\cdot \bar b) \cdot \overline{a'_{\alpha}} =(u'_\alpha u)\; (a'_{\alpha}\cdot b) \cdot \omega\cdot (\overline{a'_{\alpha} \cdot b})$. Also $$\begin{split} \alpha^*\omega'&=\alpha^*(u\; b\cdot \omega\cdot \bar b) =(\alpha^*u\, u_\alpha)\, (\alpha^*b \cdot a_\alpha) \cdot \omega\cdot (\overline{\alpha^*b\cdot a_\alpha}). \end{split}$$ Taking the norm $||\alpha^*\omega'||$, it follows $\displaystyle u'_\alpha\, u=\alpha^*u\, u_\alpha$, that is, $$\displaystyle u'_\alpha\,(\alpha^*u)^{-1} \, u = u_\alpha.$$ This shows $\displaystyle \alpha_*u'_\alpha\cdot \delta^0(u)(\alpha)=\alpha_*u_\alpha$. Hence, $\displaystyle [\lambda_{{\mathsf{G}}, \omega'}]=[\lambda_{{\mathsf{G}},\omega}]$ and so the cohomology class $[\lambda_{{\mathsf{G}}, \omega}]$ is a quaternionic conformal invariant. ◻ Regarding the cohomology groups of $L$ with coefficients in $C^\infty(M,{\mathbb R}^+)$ we have the following important general fact: **Theorem 4** ([@OK1 Theorem 10]). *Suppose that $L$ acts properly on $M$. Then $$H^i(L, C^\infty(M,{\mathbb R}^+) )= \{0 \} , \; i \geq 1 .$$* Recall that ${\operatorname*{Psh}\,}_{{\mathsf{G}}}(M)$ denotes the unique maximal subgroup of $\mathop{\rm Aut}\nolimits_{{\mathsf{G}}}(M)$ that acts properly on $M$. We now prove: **Theorem 5**. *Let $M$ be a parabolic ${\mathsf{G}}$-manifold of $CR$- or $qc$-type. 
Then there exists a representative form $\omega$ for the parabolic ${\mathsf{G}}$-structure such that $${\operatorname*{Psh}\,}_{{\mathsf{G}}}(M)= {\operatorname*{Psh}\,}_{\mathsf{G}}(M,\omega) \; .$$* *Proof.* Put $L = {\operatorname*{Psh}\,}_{{\mathsf{G}}}(M)$ for the following. Since ${\operatorname*{Psh}\,}_{{\mathsf{G}}}(M)$ acts properly, Theorem [Theorem 4](#thm:vanish){reference-type="ref" reference="thm:vanish"} implies $H^1(L , C^\infty(M,{\mathbb R}^+)) = \{ 0 \}$. Let $\eta$ be a representative form for the parabolic ${\mathsf{G}}$-structure on $M$. In particular, $[\lambda_{{\mathsf{G}},\eta}]=0$, where $\lambda_{{\mathsf{G}},\eta}$ is a one-cocycle for $L$. For $\alpha \in \mathop{\rm Aut}\nolimits_{{\mathsf{G}}}(M)$, write $\alpha^*\eta = u_\alpha\, a_\alpha\cdot \eta \cdot \overline{a_\alpha}$. Thus the equation $$\lambda_{{\mathsf{G}},\eta}=\delta^0 v, \text{ for some } v\in C^\infty(M,{\mathbb R}^+),$$ means that $\displaystyle \alpha_*u_\alpha=\alpha_*v\, v^{-1}$ for all $\alpha \in L$, or equivalently, $\displaystyle u_\alpha\cdot \alpha^*v=v$. Put $\omega =v\, \eta$. Then it follows that $$\displaystyle \alpha^*\omega =\alpha^*v\; u_\alpha\, a_\alpha\cdot \eta \cdot\, \overline{a_\alpha} =v\; a_\alpha\cdot \eta \cdot \overline{a_\alpha}=a_\alpha\cdot \omega \cdot \overline{a_\alpha}.$$ This shows that $\alpha \in {\operatorname*{Psh}\,}_{{\mathsf{G}}}(M,\omega)$. That is, ${\operatorname*{Psh}\,}_{\mathsf G}(M)$ is contained in ${\operatorname*{Psh}\,}_{{\mathsf{G}}}(M,\omega)$. Conversely, ${\operatorname*{Psh}\,}_{{\mathsf{G}}}(M,\omega)$ acts properly by Lemma [Lemma 6](#lem:inv_metric){reference-type="ref" reference="lem:inv_metric"}, and is therefore contained in the maximal properly acting subgroup ${\operatorname*{Psh}\,}_{\mathsf G}(M)$. The theorem is proved. ◻ Next we note: **Lemma 6**.
*The subgroup ${\operatorname*{Psh}\,}_{qc}(M, \omega)$ of $\mathop{\rm Aut}\nolimits_{\mathsf{G}}(M)$ preserves an associated Riemannian metric $g_\omega$. In particular, ${\operatorname*{Psh}\,}_{qc}(M, \omega)$ acts properly on $M$.* *Proof.* The group ${\operatorname*{Psh}\,}_{qc}(M, \omega)$ preserves the Riemannian metric $$g_\omega(\boldsymbol{x},\boldsymbol{y})=\sum_{i=1}^{3}\omega_i(\boldsymbol{x})\cdot \omega_i(\boldsymbol{y})+ d\omega_1(J_1 \boldsymbol{x}, \boldsymbol{y}), \, \text{ where } \boldsymbol{x}, \boldsymbol{y}\in TM,$$ and the isometry group of a Riemannian metric acts properly on $M$. ◻ **Corollary 7**. *When $X/\Gamma$ is a closed aspherical ${\mathsf{G}}$-manifold, $\mathop{\rm Aut}\nolimits_{{\mathsf{G}}}(X/\Gamma)$ is a compact Lie group. In particular, $[\lambda_{{\mathsf{G}}}]=0$.* *Proof.* By Theorem A, the automorphism group of any compact aspherical parabolic ${\mathsf{G}}$-manifold acts properly. ◻ **Remark 8**. *Note that, as $\mathop{\rm Aut}\nolimits_{CR}({\mathcal{N}})={\mathcal{N}}\rtimes ({\rm U}(n)\times {\mathbb R}^+)$ does not act properly on the Heisenberg group ${\mathcal{N}}$, we have $[\lambda_{CR}]\neq 0$ in $H^1_d(\mathop{\rm Aut}\nolimits_{CR}({\mathcal{N}}),C^\infty({\mathcal{N}},{\mathbb R}^+))$. On the other hand, if $X={\mathcal{N}}-\{\boldsymbol{0}\}$, then $\mathop{\rm Aut}\nolimits_{CR}(X)={\rm U}(n)\times {\mathbb R}^+$, which acts properly on $X$. Then $[\lambda_{CR}]\in H^1_d({\rm U}(n)\times {\mathbb R}^+, C^\infty(X,{\mathbb R}^+))= \{0\}$. The quotient of $X$ by an infinite discrete subgroup of ${\rm U}(n)\times {\mathbb R}^+$ is an infra-Hopf manifold.* # $CR$-manifolds $X/\pi$ with virtually solvable group $\pi$ {#sec:CRsolv} ## Sasaki manifolds and Reeb flow {#sec:sasaki} A Sasaki structure on $M$ is equivalent to a *standard* $CR$-structure. For any $CR$-structure $(\omega, J)$, the Reeb field $\xi$ is defined by the conditions $$\omega(\xi) = 1 \text{ and } d\omega(\xi, \cdot) = 0 .$$ A $CR$-structure is called standard if the flow of the Reeb field is contained in the pseudo-Hermitian group $\operatorname*{Psh}\,_{CR}(M, \omega)$.
Then the metric $$g=\omega\cdot\omega+d\omega\circ J$$ is called the *Sasaki metric*. The group $\operatorname*{Psh}\,_{CR}(M,\omega)$ is thus contained in the isometry group of the Sasaki metric $g$. Note further that the Reeb flow is contained in the center of $\operatorname*{Psh}\,_{CR}(M,\omega)$ (compare [@OK]). #### *The Reeb field generates $S^1$ on compact manifolds.* On a compact $CR$-manifold the Reeb flow always gives rise to a circle action. **Proposition 9**. *Let $(M, \omega, J)$ be a closed strictly pseudoconvex standard $CR$-manifold. Then the Reeb field generates an $S^1$-action on $M$.* *Proof.* By the definition of standard $CR$-structure, the Reeb field $\xi$ generates a one-parameter group ${{\mathsf{A}}}$ of $CR$-transformations for $(\ker\, \omega,J)$. Let $\operatorname*{Psh}\,(M,(\omega,J))$ be the pseudo-Hermitian group. Since $\operatorname*{Psh}\,(M,(\omega,J))\leq \mathop{\rm Iso}\nolimits(M,g)$ for the Sasaki metric $g=\omega\cdot\omega+d\omega\circ J$, $\operatorname*{Psh}\,(M,(\omega,J))$ is compact. If $\displaystyle\bar {{\mathsf{A}}}$ is the closure of ${{\mathsf{A}}}$ in $\operatorname*{Psh}\,(M,(\omega,J))$, then $\bar{{\mathsf{A}}}$ is isomorphic to a $k$-torus $T^k$. Let ${\mathcal{T}}^k$ be the distribution of vector fields for $T^k$ on $M$. Consider the restriction $\omega|_{{\mathcal{T}}^k} : {\mathcal{T}}^k{\rightarrow}{\mathbb R}$. Then ${\mathcal{T}}^k=\langle \xi\rangle\oplus \ker\, (\omega|_{{\mathcal{T}}^k})$. (For this, recall $\omega(\xi) = 1$, $d\omega(\xi, \cdot) = 0$.) Let $\boldsymbol{u}\in \ker\, (\omega|_{{\mathcal{T}}^k})$. Since $\ker\, \omega$ is $J$-invariant, $J\boldsymbol{u}\in\ker\, \omega$. As $\boldsymbol{u}\in {\mathcal{T}}^k$, $\boldsymbol{u}$ generates a one-parameter group of transformations $\{\varphi_t\}_{t\in {\mathbb R}}$ holomorphic on ${\mathsf{D}}$.
Putting $p_{-t}=\varphi_{-t}p$, for a point $p\in M$, note that $\displaystyle [\boldsymbol{u},J\boldsymbol{u}]=\lim_{t{\rightarrow}0}({\varphi_t}_*J\boldsymbol{u}_{p_{-t}}-J\boldsymbol{u}_p)/t= J\lim_{t{\rightarrow}0}({\varphi_t}_*\boldsymbol{u}_{p_{-t}}-\boldsymbol{u}_p)/t=J[\boldsymbol{u},\boldsymbol{u}]=\boldsymbol{0}$. Let $d\omega\circ J$ be the positive definite Levi form on $\ker\, \omega$; then $\displaystyle 2d\omega(\boldsymbol{u},J\boldsymbol{u})=\boldsymbol{u}\omega(J\boldsymbol{u})-J\boldsymbol{u}\omega(\boldsymbol{u})-\omega([\boldsymbol{u},J\boldsymbol{u}])=0$. Thus $\boldsymbol{u}=\boldsymbol{0}$. It follows $\bar{{\mathsf{A}}}=S^1$. ◻ ## Aspherical Sasaki manifolds {#sec:aspherical_Sasaki} Now let $M=X/\pi$ be a $(2n+1)$-dimensional closed aspherical manifold with a standard $CR$-structure. Since $M$ is compact, the Reeb field $\xi$ generates an $S^1$-action on $M$ (Proposition [Proposition 9](#Reebflow){reference-type="ref" reference="Reebflow"}). Then it follows (cf$.$ [@OK]) that 1. A Sasaki structure $(\omega,J)$ induces a Kähler structure $(\Omega, J)$ on the quotient $W = X/{\mathbb R}$, with projection $p: X {\rightarrow}W$, such that $d\omega=p^*\Omega$ and $J$ is an induced complex structure from $(\ker\, \omega,J)$ with $p_*J=Jp_*$. 2. The central group extension $\displaystyle 1\to {\mathbb R}\cap \pi\to \pi\stackrel{\phi}{\longrightarrow}Q\to 1$ embeds into the pseudo-Hermitian group as in the diagram $$\label{eq:g}\begin{CD} 1@>>>{\mathbb R}@>>> \operatorname*{Psh}\,(X)@>\phi>> \mathop{\rm Iso}\nolimits_h(W)@>>> 1\\ @. @AAA @AAA @AAA @.\\ 1@>>>{\mathbb R}\cap \pi@>>> \pi @>\phi>> Q@>>> 1 \end{CD}$$where the quotient group $Q=\pi \big/\, {\mathbb R}\cap \pi$ acts effectively and properly discontinuously on $W$ as a group of Kähler isometries.
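A model instance may help fix the objects above; the lattice $\Gamma$ and the identifications here are illustrative, assuming the standard left-invariant Sasaki structure on the Heisenberg group:

```latex
% Illustrative model: X = \mathcal{N} the Heisenberg group, \pi = \Gamma an
% integer Heisenberg lattice, W = X/\mathbb{R} = \mathbb{C}^n, Q = \mathbb{Z}^{2n}:
1 \longrightarrow \mathbb{Z} = \mathbb{R}\cap\Gamma
  \longrightarrow \Gamma
  \stackrel{\phi}{\longrightarrow} Q = \mathbb{Z}^{2n}
  \longrightarrow 1,
% with Q acting on W = \mathbb{C}^n by translations, hence by Kähler
% isometries of the flat Kähler form \Omega.
```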
## $S^1$-action on a closed aspherical manifold {#sec:circle_action} Let $M=X/\pi$ be a closed aspherical manifold with an effective $S^1$-action. Recall (cf$.$ Section [2](#sec:compact_auto){reference-type="ref" reference="sec:compact_auto"}, proof of Theorem [Theorem 2](#thm:1,2,3){reference-type="ref" reference="thm:1,2,3"}) that, for any $x\in M$, the orbit map ${\rm ev}_x : S^1{\rightarrow}M$ defined by ${\rm ev}_x(t)=t \cdot x$, induces an *injective* homomorphism $${\rm ev}_*:\pi_1(S^1,1)={\mathbb Z}\to \pi_1(M,x)=\pi ,$$ such that ${\rm ev}_*({\mathbb Z})\leq Z(\pi)$ is contained in the center of $\pi$. This implies that the $S^1$-action lifts to a proper action of ${\mathbb R}$ on $X$, where $\pi$ commutes with ${\mathbb R}$. Since $S^1 = {\mathbb R}/{\mathbb Z}$ and the action is effective, we thus have an equivariant principal bundle on the universal cover $X$ of the form $$\label{eq:prin}\begin{CD} ({\mathbb Z}= {\mathbb R}\cap \pi, {\mathbb R})@>>> (\pi,X)@>>> (Q = \pi\big/ {\mathbb R}\cap \pi,W = X /{\mathbb R}) \; . \end{CD}$$ #### *Associated principal bundle* We suppose now that $Q$ admits a torsion-free subgroup of finite index. Thus we may choose a torsion-free normal subgroup $Q'$ of finite index in $Q$, such that $W/Q'$ is a closed aspherical manifold. We put $\pi' = \phi^{-1}(Q')$ for the preimage of $Q'$ in $\pi$. Then the central group extension $$\label{eq:principale} \begin{CD} 1@>>> {\mathbb Z}= {\mathbb R}\cap \pi@>>>\pi' @>>> Q' @>>> 1 \end{CD}$$ gives rise to a principal circle bundle $$\label{eq:principalb} \begin{CD} S^1 = {\mathbb R}/{\mathbb Z}@>>> P = X/\pi' @>p>> B = W/ Q' \ . \end{CD}$$ ## Standard $CR$-structures on circle bundles {#sec:standard_circle} As in Section [4.2](#sec:aspherical_Sasaki){reference-type="ref" reference="sec:aspherical_Sasaki"}, we now suppose that the contractible manifold $X$ has a standard $CR$-structure $(\omega, J)$. 
We require further that the Reeb flow generates the principal ${\mathbb R}$-action in [\[eq:prin\]](#eq:prin){reference-type="eqref" reference="eq:prin"}. We also assume that the $CR$-structure is preserved by $\pi$, that is, $\pi \leq \operatorname*{Psh}\,(X,\omega)$. We further impose that $\pi \cap {\mathbb R}= {\mathbb Z}$. This ensures that the principal action of ${\mathbb R}$ on $X$ descends to an *effective* action of $S^1 = {\mathbb R}\big/ {\mathbb Z}$ on $X/\pi$, as in [\[eq:principalb\]](#eq:principalb){reference-type="eqref" reference="eq:principalb"}. Since $\pi \leq \operatorname*{Psh}\,(X,\omega)$, the principal $S^1$-bundle [\[eq:principalb\]](#eq:principalb){reference-type="eqref" reference="eq:principalb"} with total space $$P =X/\pi'$$ inherits a compatible induced standard $CR$-structure $(\bar \omega, J)$. The Reeb field $\xi$ pushes down to the Reeb field $\bar \xi$ on $P$. Since the action of $S^1 = {\mathbb R}/ {\mathbb Z}$ is effective, $\bar \xi$ is also the fundamental vector field of the principal $S^1$-action on $P$. This fact implies that the induced contact form $\bar \omega$ is in fact a *connection form* for the principal circle bundle $P$ in [\[eq:principalb\]](#eq:principalb){reference-type="eqref" reference="eq:principalb"}. Furthermore, $d\bar \omega$ is the curvature form of the connection $\bar \omega$ and satisfies $$d \bar \omega = p^* \bar \Omega \, ,$$ for a closed form $\bar \Omega$ on $B$. Since it arises as the curvature of the connection form, it follows that the cohomology class of $\bar \Omega$ is integral, and that $$e(B) = \, [ \, \,\bar \Omega \, ] \; \in H^2(B, {\mathbb Z})$$ is the characteristic class of the bundle [\[eq:principalb\]](#eq:principalb){reference-type="eqref" reference="eq:principalb"} (cf$.$ [@Ko] or [@Bl Section 2.2]). Since we also have a $CR$-structure, $\bar \Omega$ is its associated Kähler form on $B$.
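For concreteness, the $S^1$-invariance of $\bar \omega$ (one of the two connection-form conditions) can be read off directly from the Reeb equations via Cartan's formula:

```latex
% The Reeb equations give \bar\omega(\bar\xi) = 1 and \iota_{\bar\xi}\,d\bar\omega = 0,
% so the Lie derivative along the fundamental vector field \bar\xi vanishes:
\mathcal{L}_{\bar\xi}\,\bar\omega
  \;=\; d\,\iota_{\bar\xi}\,\bar\omega \;+\; \iota_{\bar\xi}\,d\bar\omega
  \;=\; d(1) \;+\; 0 \;=\; 0 .
```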
#### *Finite group action on $P$* Since the group $\pi$ is contained in $\operatorname*{Psh}\,(X,\omega)$, it centralizes the ${\mathbb R}$-action on $X$, which arises from the Reeb flow. Therefore $\pi$ acts on $P$ by bundle automorphisms with respect to [\[eq:principalb\]](#eq:principalb){reference-type="eqref" reference="eq:principalb"}. Furthermore, the action of $\pi$ on $P$ descends to the group $$\mu = Q/ Q' \, ,$$ which acts on $B= W/Q'$ by Kähler isometries. That is, $\mu$ preserves the Kähler form on $B$, and, in particular, it fixes the Kähler class $e(B)$. We further note: **Lemma 10**. *The holomorphic action of the finite quotient group $\mu$ on $B$ is effective, that is, the homomorphism $\displaystyle \mu \to {\rm hol}(B)$ is injective.* *Proof.* By our construction, $Q$ normalizes $Q'$ and acts effectively on $W$. Since $W \to W/ Q'$ is a covering map, any element of $Q$ that acts trivially on $W/Q'$ is a lift of the identity of $W/Q'$ with respect to this covering. Therefore, it must be in $Q'$, showing that $\displaystyle \mu \to {\rm hol}(B)$ is injective. ◻ ## Biholomorphism between $W$ and ${\mathbb C}^n$ {#sec:biho} By [@BC Theorem 2.1], the aspherical Kähler manifold $W/Q'$ is biholomorphic to a complex Euclidean space form ${\mathbb C}^n/\rho(Q')$, where $\rho: Q'\to E_{{\mathbb C}}(n)={\mathbb C}^n\rtimes {\rm U}(n)$ is a faithful representation. Note that $\rho(Q')$ is a Bieberbach group. Therefore, $\Lambda ={\mathbb C}^n\cap \rho(Q')$ is a maximal free abelian normal subgroup of $\rho(Q')$ and of finite index in $\rho(Q')$. Let $$\displaystyle H: \; W \to {\mathbb C}^n$$ be a corresponding biholomorphism equivariant with respect to $\rho$. From this, we see that (going down to a finite index subgroup if necessary) we may choose $Q'$ such that $\rho(Q') = \Lambda$ is contained in ${\mathbb C}^n$.
Then $W/Q'$ is a complex torus biholomorphic to $$T^n_{\mathbb C}= {\mathbb C}^n / \Lambda .$$ Moreover, we have a covering action: $$\label{eq:cov}\begin{CD} \Lambda @>>> (Q^H,{\mathbb C}^n)@>>>(Q^H/\Lambda,T^n_{\mathbb C}) \end{CD}$$ where $Q^H$ induces a holomorphic action of $Q^H/\Lambda$ on $T^n_{\mathbb C}$. That is, there is a homomorphism $\displaystyle \theta: Q^H/\Lambda\to {\rm hol}(T^n_{\mathbb C})$. Every biholomorphic map of a complex torus $T^n_{\mathbb C}$ is induced by a complex affine transformation of the vector space ${\mathbb C}^n$, see [@GH]. Thus it follows that $Q^H$ is contained in $E_{{\mathbb C}}(n)$. In particular, $\rho$ extends to a homomorphism $\rho:Q \to E_{{\mathbb C}}(n)$ inducing $\theta$. Since $\rho(Q)$ is a crystallographic group, we may, in addition, arrange things such that $\rho(Q')$ equals $\Lambda$: $$\rho(Q') = \Lambda = \rho(Q) \cap {\mathbb C}^n .$$ Then, the finite group $Q^H/\Lambda$ maps injectively to ${\rm U}(n)$, so that we have $$\label{eq:emfini} \mu =Q^H/\Lambda\leq {\rm U}(n).$$ #### *Compatible choice of associated linear Hermitian form* Consider now the characteristic class $e(B)$ of the circle bundle [\[eq:principalb\]](#eq:principalb){reference-type="eqref" reference="eq:principalb"}. By the biholomorphism $B\to T^n_{\mathbb C}$, $e(B)$ is transported to a class $$e(T^n_{\mathbb C}) = e(B)^H \in H^2(T^n_{\mathbb C}, {\mathbb Z}) .$$ Moreover, since $e(B)$ is a Kähler class, $e(T^n_{\mathbb C})$ is contained in the Kähler cone of $T^n_{\mathbb C}$. **Lemma 11**. *There exists a positive definite linear Hermitian two-form $\Omega_{{\mathbb C}^n}$ $($of type $(1,1))$ on ${\mathbb C}^n$ such that its image $\bar \Omega_{{\mathbb C}^n}$ on $T^n_{\mathbb C}$ represents the characteristic class $e(T^n_{\mathbb C})$.
Then we have $$\label{eq:e} e(T^n_{\mathbb C}) = [\, \bar \Omega_{{\mathbb C}^n} \, ] \in H^2(T^n_{\mathbb C}, {\mathbb Z}) \, .$$* *Proof.* Let $g$ be a Kähler metric on $T^n_{\mathbb C}$ with Kähler form $\bar \Omega$ such that the Kähler class $e(T^n_{\mathbb C}) = [\bar \Omega]$. Since $T^n_{\mathbb C}$ has trivial canonical bundle, its first Chern class $c_1(T^n_{\mathbb C})$ vanishes. Let $\Theta$ denote the Ricci form for $g$. Then $0 = c_1(T^n_{\mathbb C}) = [\Theta]$. In particular, $\Theta$ is null-cohomologous. In this situation the Calabi-Yau existence theorem for Kähler metrics with prescribed Ricci curvature [@Be Chapter 11] asserts that there exists a unique Ricci-flat Kähler metric $g'$ on $T^n_{\mathbb C}$ with Kähler form $\bar \Omega'$, satisfying $[\bar \Omega'] = [\bar \Omega] = e(T^n_{\mathbb C})$. As a consequence of the Cheeger-Gromoll splitting and decomposition theorem for manifolds of nonnegative Ricci curvature [@CG; @FW], it follows that the only Ricci-flat Riemannian metrics on a torus (in fact on closed aspherical manifolds) are flat metrics. This shows that $g'$ is a flat Kähler metric on $T^n_{\mathbb C}$ and thus invariant under the holomorphic action of $T^n_{\mathbb C}$ on itself. Pulling back $\bar \Omega'$ to a form $\Omega'$ on ${\mathbb C}^n$, the form $\Omega_{{\mathbb C}^n} = \Omega'$ is linear and has the required properties. ◻ Note that, by this construction, $\rho(Q)$ preserves the positive definite Hermitian form $\Omega_{{\mathbb C}^n}$, and we may thus put ${\rm U}(n) = {\rm U}(\Omega_{{\mathbb C}^n})$ on ${\mathbb C}^n$.
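For illustration, a linear positive definite Hermitian $(1,1)$-form on ${\mathbb C}^n$ can be written in coordinates as follows; the matrix $(h_{jk})$ is a placeholder:

```latex
% Constant-coefficient (translation-invariant) positive (1,1)-form on \mathbb{C}^n:
\Omega_{\mathbb{C}^n} \;=\; \frac{i}{2}\sum_{j,k=1}^{n} h_{jk}\, dz_j\wedge d\bar z_k,
\qquad (h_{jk}) \ \text{a positive definite Hermitian matrix.}
% Being translation-invariant, it descends to T^n_{\mathbb{C}} = \mathbb{C}^n/\Lambda;
% integrality of the class means \int_C \bar\Omega_{\mathbb{C}^n} \in \mathbb{Z}
% for every integral 2-cycle C in T^n_{\mathbb{C}}.
```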
#### *Equivalence of bundles over the complex torus* By the identification $W = {\mathbb C}^n$ via the biholomorphism $H$ and our construction, the bundle [\[eq:prin\]](#eq:prin){reference-type="eqref" reference="eq:prin"} now becomes an equivariant principal bundle $$\label{eq:prin2}\begin{CD} ({\mathbb Z}= {\mathbb R}\cap \pi', {\mathbb R})@>>> (\pi',X)@>>> ( \Lambda = \pi' \big/ {\mathbb R}\cap \pi' , {\mathbb C}^n = X /{\mathbb R}) \; \end{CD}$$ that descends to a principal circle bundle $$\label{eq:principalb2} \begin{CD} S^1 = {\mathbb R}/{\mathbb Z}@>>>P = X/\pi' @>p>> T^n_{\mathbb C}= {\mathbb C}^n/ \Lambda \; , \end{CD}$$ whose characteristic class is $e(T^n_{\mathbb C})$ as defined in [\[eq:e\]](#eq:e){reference-type="eqref" reference="eq:e"}. ## Locally homogeneous standard $CR$-structure on $P$ {#sec:crl} For brevity, we put $\bar \Omega_0 = \bar \Omega_{{\mathbb C}^n}$ for the above linear Kähler form on $T^n_{\mathbb C}$. Since the bundle $P$ has characteristic class $e(P) = [ \bar \Omega_{0}]$, there exists a $CR$-structure $(\bar \eta,J)$ on $P$, with contact form $\bar \eta$, which satisfies $$\label{eq:eta_Omega} d\bar \eta = p^* \, {\bar \Omega}_0 .$$ Indeed, let $\bar \tau \in \Omega^1(T^n_{\mathbb C})$ be any one-form such that $\bar \Omega_0 - \bar \Omega = d\bar \tau$; such a $\bar \tau$ exists since $[\bar \Omega_0] = [\bar \Omega] = e(T^n_{\mathbb C})$. Then $$\label{eq:eta_def} \bar \eta \, = \, \bar \omega + p^* \bar \tau$$ satisfies [\[eq:eta_Omega\]](#eq:eta_Omega){reference-type="eqref" reference="eq:eta_Omega"}. The forms $\bar \omega$ and $\bar \eta$ have the same Reeb field, so that the Reeb field $\bar \xi$ generates the original $S^1$-action on $P$. That is, $\bar \eta$ is a connection form for the principal bundle [\[eq:principalb2\]](#eq:principalb2){reference-type="eqref" reference="eq:principalb2"}. The $CR$-structure $(\bar \eta,J)$ on $P$ then pulls back to a $\pi'$-invariant $CR$-structure $(\eta,J)$ on $X$, with contact form $\eta$, and Reeb field $\xi$, generating the ${\mathbb R}$-action on $X$.
Moreover, we have $d\eta = p^*\Omega_0$. (In particular, all the conditions on $(X, \eta, J)$ in Section [4.4](#sec:standard_circle){reference-type="ref" reference="sec:standard_circle"} are satisfied.) We note that the standard $CR$-structure $(\eta,J)$ on $X$ is homogeneous. Indeed, $\operatorname*{Psh}\,(X,\eta)$ acts transitively on $X$, since the holomorphic isometries of $(X, \eta, J) \big/ \, {\mathbb R}= ({\mathbb C}^n, \Omega_0)$ lift to elements of $\operatorname*{Psh}\,(X,\eta)$ (see [@OK Proposition 3.4]). We show now that $X$ with its $CR$-structure is equivalent to the Heisenberg group ${\mathcal{N}}$ equipped with its standard left-invariant $CR$-structure. In fact, it follows that $\operatorname*{Psh}\,(X,\eta) = {\mathcal{N}}\rtimes {\rm U}(n)$, as is noted in [@OK Proposition 6.1 (2)]. Here, the Reeb flow ${\mathbb R}$ generates the center of the Heisenberg group ${\mathcal{N}}$, and ${\mathcal{N}}/{\mathbb R}= {\mathbb C}^n$. Note that, by construction, $\pi'$ is contained in $\operatorname*{Psh}\,(X, \eta,J)$. Therefore $\pi'$ is a discrete uniform subgroup of $$\operatorname*{Psh}\,(X, \eta,J) = \operatorname*{Psh}\,({\mathcal{N}}) = {\mathcal{N}}\rtimes {\rm U}(n) .$$ Note further that $\pi'$ is a nilpotent group, since it is a central extension of $\Lambda$. Therefore $\pi' \leq {\mathcal{N}}$ is a uniform lattice in ${\mathcal{N}}$, by the Bieberbach theorem for nilmanifolds [@Aus; @KLR]. In particular, the orbit map ${\mathcal{N}}\to X$ gives an $S^1$-equivariant diffeomorphism $${\mathcal{N}}/\pi' \to X/\pi'$$ over the base space $T^n_{\mathbb C}$. This shows that $X/\pi'$ is a Heisenberg nilmanifold. 
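As a concrete model of this structure, in one common normalization (the coordinates are illustrative), the standard left-invariant contact form on ${\mathcal{N}}= {\mathbb C}^n\times{\mathbb R}$ reads:

```latex
% Coordinates (z_1,\dots,z_n,t) on \mathcal{N} = \mathbb{C}^n \times \mathbb{R}:
\eta \;=\; dt \;+\; \frac{i}{2}\sum_{j=1}^{n}\bigl(z_j\,d\bar z_j - \bar z_j\,dz_j\bigr),
\qquad
d\eta \;=\; i\sum_{j=1}^{n} dz_j\wedge d\bar z_j ,
% so d\eta = p^{*}\Omega_0 for the flat Kähler form \Omega_0 on \mathbb{C}^n,
% and the Reeb field \xi = \partial/\partial t generates the center of \mathcal{N}.
```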
## Locally homogeneous standard $CR$-structure on $X/\pi$ {#crlquotinent} As we have seen, the group $\mu = \pi /\pi'$ acts on the total space $P$ of the principal circle bundle [\[eq:principalb2\]](#eq:principalb2){reference-type="eqref" reference="eq:principalb2"} by bundle automorphisms, and the induced action of $\mu$ on $T^n_{\mathbb C}$ fixes the curvature form $\bar \Omega_0$. For the final step in the proof of Theorem 3, we show now that there exists a connection form with curvature $\bar \Omega_0$ that is fixed by the group $\mu$: **Proposition 12**. *There exists a connection form $\bar \vartheta$ for the principal circle bundle [\[eq:principalb2\]](#eq:principalb2){reference-type="eqref" reference="eq:principalb2"}, such that* 1. *$d\bar \vartheta = p^* \, {\bar \Omega}_0$.* 2. *$\mu$ is contained in $\operatorname*{Psh}\,(P, \bar \vartheta, J)$.* *Proof.* Let ${\mathcal{A}}(\bar \Omega_0) = \{ \bar \eta \in \Omega^1(P) \mid d \bar \eta = p^* \, {\bar \Omega}_0 \}$ be the space of connection forms for the bundle [\[eq:principalb2\]](#eq:principalb2){reference-type="eqref" reference="eq:principalb2"} with curvature $\bar \Omega_0$. Choose a base point $\bar \eta_0 \in {\mathcal{A}}(\bar \Omega_0)$. Then ${\mathcal{A}}(\bar \Omega_0) = \bar \eta_0 + \{ p^* \tau \mid \tau \in Z^1(T^n_{\mathbb C}) \}$ is an affine space over the vector space of closed one-forms $Z^1(T^n_{\mathbb C})$. Since the curvature $\bar \Omega_0$ is preserved by $\mu$, for any $g \in \mu$, $\bar \eta \in {\mathcal{A}}(\bar \Omega_0)$, it follows that $g^* \bar \eta \in {\mathcal{A}}(\bar \Omega_0)$. Therefore the group $\mu$ acts on ${\mathcal{A}}(\bar \Omega_0)$. For $g \in \mu$, define $$c( g ) = g^* \bar \eta_0 - \bar \eta_0 \, \in Z^1(T^n_{\mathbb C}) \; .$$ Since $c(g_1 g_2) = g_2^* g_1^* \bar \eta_0 - \bar \eta_0 = g_2^* c(g_1) + c(g_2)$, $c: \mu \to Z^1(T^n_{\mathbb C})$ defines a one-cocycle for the natural representation of $\mu$ on $Z^1(T^n_{\mathbb C})$.
The action of $\mu$ on ${\mathcal{A}}(\bar \Omega_0)$ is by affine transformations, since, for $\bar \eta \in {\mathcal{A}}(\bar \Omega_0)$, we have $$g^* \bar \eta = \bar \eta_0 + \, g^* (\bar \eta - \bar \eta_0) + c(g).$$ There is a corresponding cohomology class $[c ] \in H^1( \mu, Z^1(T^n_{\mathbb C}))$ associated to the one-cocycle $c$. Since $\mu$ is finite, we have $H^1( \mu, Z^1(T^n_{\mathbb C})) = \{0 \}$, so $c$ must be a coboundary. This implies that there exists $\tau \in Z^1(T^n_{\mathbb C})$ such that $c(g) = g^* \tau - \tau$. As a consequence, $\bar \vartheta = \bar \eta_0 - \tau$ is a fixed point for the action of $\mu$. ◻ Pulling back the connection form $\bar \vartheta$ to a connection form $\vartheta$ on the covering bundle [\[eq:prin2\]](#eq:prin2){reference-type="eqref" reference="eq:prin2"}, we note: **Corollary 13**. *There exists on $X$ a connection form $\vartheta$ for the bundle [\[eq:prin\]](#eq:prin){reference-type="eqref" reference="eq:prin"}, with $d \vartheta = p^* \Omega_0$, such that the corresponding $CR$-structure on $X$ satisfies $$\pi \leq \operatorname*{Psh}\,(X, \vartheta,J) .$$* As above, the $CR$-structure $(X, \vartheta,J)$ is homogeneous and $\operatorname*{Psh}\,(X, \vartheta,J) = \operatorname*{Psh}\,({\mathcal{N}}) = {\mathcal{N}}\rtimes {\rm U}(n)$. Therefore $X/\pi$ is an infra-nilmanifold that is finitely covered by the nilmanifold $X/\pi'$. This concludes the proof of Theorem 3. # Outline of the proof of Theorem [Theorem 4](#Tqc){reference-type="ref" reference="Tqc"} {#sec:qcsolv} ## Quaternionic contact structures {#def:qc} Let $X$ be a $(4n+3)$-dimensional smooth manifold. A *quaternionic contact structure* is a codimension $3$ subbundle ${\mathsf{D}}$ on $X$ which satisfies $\displaystyle {\mathsf{D}}\oplus[{\mathsf{D}},{\mathsf{D}}]=TX$.
In addition, the following conditions are required: There exists a non-degenerate ${\rm Im}\, {\mathbb H}$-valued $1$-form $\omega=\omega_1i+\omega_2j+\omega_3k$ called a *quaternionic contact form* on $X$ such that 1. $\displaystyle \ker\,\omega=\mathop{\cap}_{i=1}^3\ker\,\omega_i={\mathsf{D}}$. 2. $\displaystyle \omega\mathop{\wedge}\omega\mathop{\wedge}\omega\,\wedge\overbrace{d\omega\wedge\cdots\wedge d\omega}^n\neq 0$ on $X$. The non-degeneracy of $d\omega_k$, $k=1,2,3$, on ${\mathsf{D}}$ defines the bundle of endomorphisms $\{J_1,J_2,J_3\}$: $$\label{hypercom} J_k=(d\omega_j\vert_{{\mathsf{D}}})^{-1}\circ (d\omega_{i}\vert_{{\mathsf{D}}}):{{\mathsf{D}}} {\rightarrow}\, {{\mathsf{D}}}, \ \, (i,j,k)\sim(1,2,3)$$which constitutes a *hypercomplex structure* on ${\mathsf{D}}$. Note that the Levi form $$d\omega_i\circ J_i: {\mathsf{D}}\times {\mathsf{D}}{\longrightarrow}\, {\mathbb R}, \; i=1,2,3$$ is a positive definite symmetric bilinear form on ${\mathsf{D}}$. Then $(X,{\mathsf{D}},\omega,\{J_i\}_{i=1}^{3})$ is said to be a positive definite $qc$-manifold. #### *Standard $qc$-manifolds and three-Sasaki manifolds* For any $qc$-manifold $M$, let ${\mathcal{T}}=\{\xi_1,\xi_2,\xi_3\}$ denote the three-dimensional integrable distribution complementary to the codimension three subbundle ${\mathsf{D}}$ on $M$ determined by the $qc$-structure, called the *$qck$-distribution* (see Section [3](#sec:parabolic_geom){reference-type="ref" reference="sec:parabolic_geom"}). If ${\mathcal{T}}$ generates a subgroup of ${\operatorname*{Psh}\,}_{qc}(M)$, then $M$ is called a *standard* $qc$-manifold. **Proposition 14**. *Let $(M,\omega,\{J_i\}_{i=1}^{3})$ be a closed strictly pseudoconvex standard $qc$-manifold. Then the Reeb fields generate a compact Lie group action of $\mathop{\rm SU}\nolimits(2)$ or the torus group $T^3$ on $M$.* See [@Ka Proposition 4.5] for the proof.
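For reference, in one common convention (the precise normalization varies in the literature, so this is indicative rather than the paper's own), the three Reeb fields spanning ${\mathcal{T}}$ are characterized by:

```latex
% Reeb-type conditions singling out \xi_1,\xi_2,\xi_3 complementary to D
% (one common convention):
\omega_i(\xi_j) \;=\; \delta_{ij},
\qquad
\bigl(\iota_{\xi_i}\, d\omega_j\bigr)\big|_{\mathsf{D}}
  \;=\; -\bigl(\iota_{\xi_j}\, d\omega_i\bigr)\big|_{\mathsf{D}},
\qquad i,j = 1,2,3 .
```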
If the Reeb fields generate an $\mathop{\rm SU}\nolimits(2)$-action, then $M$ is called a *three-Sasaki manifold* (sometimes also a quaternionic $CR$-manifold [@AK]). ## Aspherical standard $qc$-manifolds {#asphestanqc} Let $X/\pi$ be a $(4n+3)$-dimensional positive definite closed aspherical *standard* $qc$-manifold. Since $X/\pi$ is a closed standard $qc$-manifold, the $qck$-distribution $\hat{\mathcal{T}}=\{\hat\xi_1, \hat\xi_2,\hat\xi_3\}$ on $X/\pi$ generates a compact Lie group action by Proposition [Proposition 14](#qck-Reebflow){reference-type="ref" reference="qck-Reebflow"}. Since $X/\pi$ is aspherical, the flow is given by a three-torus $T^3\leq {\operatorname*{Psh}\,}_{qc}(X/\pi)$. Moreover, the $T^3$-action lifts to a proper ${\mathbb R}^3$-action on $X$ (as in Section [4.3](#sec:circle_action){reference-type="ref" reference="sec:circle_action"} in the case of circle actions). This gives rise to an equivariant principal bundle over $W=X/{\mathbb R}^3$: $$\displaystyle ({\mathbb R}^3\cap \pi, {\mathbb R}^3)\to \, (\pi, X)\stackrel{p}{\longrightarrow}\, (Q,W) \; .$$ Then it follows (cf$.$ [@Ka]) that 1. The standard qc-structure $(\omega, \{J_i\}_{i=1}^{3})$ induces a hyper-Kähler structure $(\Omega, J = \{J_i\}_{i=1}^{3})$ on $W$, such that $d\omega=p^*\Omega$, where $\Omega = \Omega_1i+\Omega_2j+\Omega_3k$ and $J$ is an induced hypercomplex structure from $(\ker\, \omega,J)$ with $p_*J=Jp_*$. Here $\Omega_i$ is a Kähler form with respect to $J_i$, $i=1,2,3$. 2. The central group extension $\displaystyle 1\to {\mathbb R}^3\cap \pi\to \pi\stackrel{\phi}{\longrightarrow}Q\to 1$ embeds into the pseudo-Hermitian group of the $qc$-structure as in the diagram $$\label{eq:gqc}\begin{CD} 1@>>>{\mathbb R}^3@>>> {\operatorname*{Psh}\,}_{qc}(X)@>\phi>> {\mathop{\rm Iso}\nolimits}_{hK}(W)@>>> 1\\ @.
@AAA @AAA @AAA @.\\ 1@>>>{\mathbb R}^3 \cap \pi@>>> \pi @>\phi>> Q@>>> 1 \end{CD}$$ where the quotient group $Q=\pi \big/\, {\mathbb R}^3 \cap \pi$ acts effectively and properly discontinuously on $W$, and as a group of hyper-Kähler isometries for $(\Omega,J)$. 3. $X$ is $qc$-homogeneous if and only if $W$ is hyper-Kähler homogeneous ([@Ka Proposition C]). #### **Detailed explanation of [\[eq:gqc\]](#eq:gqc){reference-type="eqref" reference="eq:gqc"}** Let $\omega=\omega_1i+\omega_2j+\omega_3k$ be a lift of the $qc$-form of $X/\pi$ to the universal covering (cf$.$ Section [3](#sec:parabolic_geom){reference-type="ref" reference="sec:parabolic_geom"}). Then $\omega$ is a $\pi$-invariant non-degenerate ${\rm Im}\, {\mathbb H}$-valued $1$-form such that $\pi\leq \operatorname*{Psh}\,_{qc}(X,(\omega, \{J_i\}_{i=1}^3))$ as in (3) of [\[Pshqcr\]](#Pshqcr){reference-type="eqref" reference="Pshqcr"}. Let ${\mathcal{T}}=\{\xi_1, \xi_2, \xi_3\}$ be a lift of $\hat{\mathcal{T}}$ which generates a proper ${\mathbb R}^3$-action on $X$. Since $\pi$ centralizes ${\mathbb R}^3$, for every $\gamma\in \pi$, it follows that $\gamma_*\xi_i=\xi_i$ $(i=1,2,3)$.
Noting the action of $\pi$ on $\omega$ from (3) of [\[Pshqcr\]](#Pshqcr){reference-type="eqref" reference="Pshqcr"}, each element $\gamma\in \pi$ satisfies $$\label{eq:actionpi} \gamma^*\omega=\omega, \ \gamma_*J_i=J_i\gamma_*.$$ Recall that the $qc$-structure of $\displaystyle(X,{\mathbb R}^3, \{\omega_i, J_i\}_{i=1}^3, g_\omega)$ induces a simply connected complete hyper-Kähler manifold $\displaystyle(W, \{\Omega_{i},J_i\}_{i=1}^3, g)$ for which $p^*\Omega=d\omega$, where $\omega=\omega_1i+\omega_2j+\omega_3k$ is the $\pi$-invariant one-form on $X$ and $\Omega$ is a $Q$-invariant two-form on $W$: $$\label{eq:hyper-K} \Omega=\Omega_1i+\Omega_2j+\Omega_3k.$$ Furthermore, $g_\omega=\sum_{i=1}^3\omega_i\cdot\omega_i+d\omega_1\circ J_1$ is a canonical Riemannian metric on $X$ and $g=\Omega_i\circ J_i$ $(i=1,2,3)$ is a hyper-Kähler metric on $W$. By the equation $p^*\Omega=d\omega$ and [\[eq:actionpi\]](#eq:actionpi){reference-type="eqref" reference="eq:actionpi"}, each $\alpha\in Q$ preserves the hyper-Kähler structure on $W$: $$\label{eq:3hquat} \alpha^*\Omega=\Omega,\ \, \alpha_*J_i=J_i\alpha_*.$$In particular, $Q$ acts as Kähler isometries of $(W,(\Omega_i,J_i))$. ### Associated principal three-torus bundle We suppose now that $Q$ admits a torsion-free subgroup of finite index. Thus we may choose a torsion-free normal subgroup $Q'$ of finite index in $Q$, such that $W/Q'$ is a closed aspherical manifold. We put $\pi' = \phi^{-1}(Q')$ for the preimage of $Q'$ in $\pi$. Then the central group extension $$\label{eq:principaler3} \begin{CD} 1@>>> {\mathbb Z}^3 = {\mathbb R}^3 \cap \pi@>>>\pi' @>>> Q' @>>> 1 \end{CD}$$ gives rise to a principal torus bundle $$\label{eq:principalt3b} \begin{CD} T^3 = {\mathbb R}^3/{\mathbb Z}^3 @>>> P = X/\pi' @>p>> B = W/ Q' \ . \end{CD}$$ ## Proof of Lemma D Let $g$ be hyper-Kähler with hyper-Kähler form $\Omega$ (compare [\[eq:hyper-K\]](#eq:hyper-K){reference-type="eqref" reference="eq:hyper-K"}) on $M$.
Since $g$ is hyper-Kähler, its holonomy group is contained in $\mathop{\rm Sp}\nolimits(n)$. In particular, it is contained in $\mathrm{SU}(2n)$, which implies that $g$ is a Ricci-flat Kähler metric on $M$ (e.g. [@Be Proposition 10.29]). In view of the fact that $M$ is aspherical, all Ricci-flat metrics are flat [@FW]. We therefore have established that $g$ is a flat hyper-Kähler metric. In particular, its universal covering space must be isometric to ${\mathbb H}^n$ with a linear hyper-Kähler structure $\Omega$. ## Hypercomplex isometric isomorphism Applying Lemma D in the introduction to the hyper-Kähler manifold $B = W/Q'$, we see that $B = W/Q'$ with the hyper-Kähler structure [\[eq:hyper-K\]](#eq:hyper-K){reference-type="eqref" reference="eq:hyper-K"} is hyper-holomorphically isometric to a flat hyper-Kähler torus $T^n_{\mathbb H}={\mathbb H}^n/\Lambda$, where $\Lambda \leq {\mathbb H}^n$ is a lattice. ## Conclusion of the proof Using basic methods developed in [@Ka], the remaining part of the proof of Theorem [Theorem 4](#Tqc){reference-type="ref" reference="Tqc"} follows a similar but simplified procedure to that of Section [4](#sec:CRsolv){reference-type="ref" reference="sec:CRsolv"}. The key step is to establish that the universal covering $X$ with its $qc$-structure is homogeneous and arises from a quaternionic Heisenberg group, that is, $X$ is $qc$-isometric to a $qc$-Heisenberg group with its standard $qc$-structure. The details shall be presented elsewhere. [L. Auslander]{.smallcaps}. *Bieberbach's Theorems on Space Groups and Discrete Uniform Subgroups of Lie Groups*, *Annals of Mathematics* 71, No. 3, 579-590 (1960). [D.V. Alekseevski]{.smallcaps}, *Groups of conformal transformations of Riemannian spaces*, *Math. USSR Sb.* 18, No. 2, 285-301 (1972). [D. A. Alekseevsky and Y. Kamishima]{.smallcaps}, *Pseudo-conformal quaternionic $CR$ structure on $(4n+3)$-dimensional manifolds*, *Annali di Matematica Pura ed Applicata*, 187 (3), 487-529 (2008). [O.
Baues]{.smallcaps}, *Infra-solvmanifolds and rigidity of subgroups in solvable linear algebraic groups*, *Topology* **43**, no. 4, 903-924 (2004). [O. Baues and V. Cortés]{.smallcaps}, *Aspherical Kähler manifolds with solvable fundamental group*, *Geom. Dedicata* 122, 215-229 (2006). [O. Baues and Y. Kamishima]{.smallcaps}, *On the vanishing of equivariant cohomology of proper actions and application to the conformal and $CR$-automorphism groups*, *Preprint* \[arXiv:2101.03831v2 \[math.DG\] 28 Jan 2021.\] [O. Baues and Y. Kamishima]{.smallcaps}, *Locally homogeneous aspherical Sasaki manifolds*, *Differential Geom. Appl.* 70, Article ID 101607, 40 p. (2020). [O. Baues and Y. Kamishima]{.smallcaps}, *Isometry groups with radical, and aspherical Riemannian manifolds with large symmetry, I*, *Geometry & Topology* 27 (2023), 1-50. [A.L. Besse]{.smallcaps}, *Einstein manifolds*, *Classics in Mathematics*, Springer-Verlag, Berlin, 2008. [D.E. Blair]{.smallcaps}, *Riemannian geometry of contact and symplectic manifolds*, *Progress in Mathematics* 203, Birkhäuser Boston Inc., Boston, MA, 2002. [C. P. Boyer]{.smallcaps}, *A note on hyper-Hermitian four-manifolds*, *Proc. Amer. Math. Soc.* 102, no. 1, 157--164 (1988). [C. Boyer, K. Galicki]{.smallcaps}, *Sasaki Geometry*, *Oxford Mathematical Monographs*, 2008. [J. Cheeger and D. Gromoll]{.smallcaps}, *The splitting theorem for manifolds of nonnegative Ricci curvature*, *J. Differential Geometry* 6 119-128 (1971). [S.S. Chen and L. Greenberg]{.smallcaps}, *Hyperbolic Spaces*, *Contribution to Analysis* (A Collection of Papers Dedicated to Lipman Bers, eds. L. Ahlfors and others), Academic Press, New York and London, 49-87 (1974). [P. E. Conner and F. Raymond]{.smallcaps}, *Injective operations of the toral groups*, *Topology* **10**, 283-296 (1971). [P. E. Conner and F. Raymond]{.smallcaps}, *Actions of compact Lie groups on aspherical manifolds*, *Topology of Manifolds, (eds. J. C. Carrel and C. H. 
Edwards)*, Markham, Chicago, 227-264 (1969). [A.E. Fischer and J.A. Wolf]{.smallcaps}, *The Calabi construction for compact Ricci flat Riemannian manifolds*, *Bull. Amer. Math. Soc.* 80 (1), 92-97 (1974). [J. Ferrand]{.smallcaps}, *The action of conformal transformations on a Riemannian manifold*, *Math. Ann.* (2) 304, 277-291 (1996). [C. Frances]{.smallcaps}, *Sur le groupe d'automorphismes des géométries paraboliques de rang 1*, *Ann. Sci. École Norm. Sup.*, (4) 40 (5) 741-764 (2007). [P. Griffiths and J. Harris]{.smallcaps}, *Principles of algebraic geometry*, *Pure Appl. Math. Wiley-Interscience, New York* (1978). [S. Ivanov and D. Vassilev]{.smallcaps}, *Conformal quaternionic contact curvature and the local sphere theorem*, *J. Math. Pures Appl.* (9) 93, no. 3, 277-307 (2010). [Y. Kamishima]{.smallcaps}, *Heisenberg, Spherical $CR$ geometry and Bochner flat locally conformal Kähler manifolds*, *International Journal of Geometric Methods in Modern Physics*, 3 (5-6), 1089-1116 (2006). [Y. Kamishima]{.smallcaps}, *Quaternionic contact structure with integrable complementary distribution*, *Preprint* \[arXiv:1902.08796v2 \[math.GT\] 27 Jul 2022\] [Y. Kamishima, K. B. Lee and F. Raymond]{.smallcaps}, *The Seifert construction and its application to infra-nilmanifolds*, Quart. J. Math. Oxford Ser. (2) 34, 433-452 (1983). [S. Kobayashi]{.smallcaps}, *Principal fibre bundles with the 1-dimensional toroidal group*, Tohoku Math. J. (2) **8** (1), 29-45 (1956). [J. Lee]{.smallcaps}, *$CR$ Manifolds with noncompact connected automorphism groups*, *Journal of Geometric Analysis* 6 (1), 79-90 (1996). [K.B. Lee and F. Raymond]{.smallcaps}, *Seifert fiberings*, *Mathematical Surveys and Monographs*, vol. 166, 2010. [A. de Nicola and I. Yudin]{.smallcaps}, *Nilpotent aspherical Sasakian manifolds*, *to appear*, \[arXiv:2207.14233v1 \[math.DG\] 28 Jul 2022\] [R.
Schoen]{.smallcaps}, *On the conformal and $CR$ automorphism groups*, *Geometric and Functional Analysis* 5 (2), 464-481 (1995). [S. Webster]{.smallcaps}, *On the transformation group of a real hypersurface*, *Trans. Amer. Math. Soc.* 231(2), 179-190 (1977). [^1]: This work was partially supported by JSPS grant No 22K03319
{ "id": "2309.13569", "title": "On the automorphism group of parabolic structures and closed aspherical\n manifolds", "authors": "Oliver Baues and Yoshinobu Kamishima", "categories": "math.DG math.GT", "license": "http://creativecommons.org/licenses/by/4.0/" }
--- abstract: | In this paper, we study subgroup separability (also known as LERF) and related properties for surface braid groups and virtual (singular) braid groups. author: - | KISNNEY ALMEIDA\ Universidade Estadual de Feira de Santana,\ Departamento de Ciências Exatas,\ Av. Transnordestina S/N, CEP 44036-900 - Feira de Santana - BA - Brazil.\ e-mail: `kisnney@gmail.com`\ IGOR LIMA\ Universidade de Brasília,\ Departamento de Matemática,\ Brasília, DF, 70910-900, Brasil.\ e-mail: `igor.matematico@gmail.com`\ OSCAR OCAMPO \ Universidade Federal da Bahia,\ Departamento de Matemática - IME,\ Av. Adhemar de Barros S/N CEP: 40170-110 - Salvador - BA - Brazil.\ e-mail: `oscaro@ufba.br` title: Subgroup separability of surface braid groups and virtual braid groups --- # Introduction {#intro} The braid groups of the $2$-disc, or Artin braid groups, were introduced by Artin in 1925 and further studied in 1947 [@A1; @A2]. Surface braid groups were initially studied by Zariski [@Z], and were later generalised by Fox and Neuwirth to braid groups of arbitrary topological spaces using configuration spaces as follows [@FoN]. Let $S$ be a compact, connected surface, and let $n\in \mathbb N$. The *$n$th ordered configuration space of $S$*, denoted by $F_{n}(S)$, is defined by: $$F_n(S)=\left\{(x_{1},\ldots,x_{n})\in S^{n} \mid x_{i}\neq x_{j}\,\, \text{if}\,\, i\neq j;\,i,j=1,\ldots,n\right\}.$$ The *$n$-string pure braid group $P_n(S)$ of $S$* is defined by $P_n(S)=\pi_1(F_n(S))$. The symmetric group $S_{n}$ on $n$ letters acts freely on $F_{n}(S)$ by permuting coordinates, and the *$n$-string braid group $B_n(S)$ of $S$* is defined by $B_n(S)=\pi_1(F_n(S)/S_{n})$.
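The freeness of the $S_n$-action can be illustrated with a small computation: permuting the coordinates of a tuple of pairwise distinct points always produces $n!$ distinct tuples, so no nontrivial permutation fixes a configuration. A minimal sketch (taking the plane as a toy stand-in for a surface, with arbitrary sample points):

```python
from itertools import permutations

# A point of F_3(S): an ordered triple of pairwise distinct points.
config = ((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))
assert len(set(config)) == 3  # the coordinates are pairwise distinct

# S_3 acts by permuting coordinates; on a configuration the action is
# free, so the orbit has exactly 3! = 6 elements.
orbit = {tuple(p) for p in permutations(config)}
assert len(orbit) == 6
```

Freeness is what makes the quotient $F_n(S)/S_n$ a genuine covering-space quotient, so that its fundamental group $B_n(S)$ contains $P_n(S)$ with index $n!$.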
This gives rise to the following short exact sequence: $$\label{eq:sesBn} 1 \ensuremath{\longrightarrow}P_{n}(S) \ensuremath{\longrightarrow}B_{n}(S) \stackrel{\sigma}{\longrightarrow} S_{n} \ensuremath{\longrightarrow}1.$$ The map $\sigma \colon\thinspace B_{n}(S) \ensuremath{\longrightarrow}S_{n}$ is the standard homomorphism that associates to each element of $B_{n}(S)$ its underlying permutation. *Remarks 1*. 1. It follows from the definition that $F_1(S)=S$ for any surface $S$, so the groups $P_1(S)$ and $B_1(S)$ are isomorphic to $\pi_1(S)$. For this reason, braid groups over the surface $S$ may be seen as generalizations of the fundamental group of $S$. 2. Let $S$ be a surface with boundary $\partial S$ and let $S'=S\setminus \partial S$. We note that the $n^{th}$ (pure) braid groups of $S$ and $S'$ are isomorphic, see [@GPi Remarks 8(d)]. For more information on general aspects of surface braid groups we recommend the survey [@GPi], in particular its Section 2, where equivalent definitions of these groups are given, offering different viewpoints on these groups. A group $G$ is called *subgroup separable* or *locally extended residually finite (LERF)* if each finitely generated subgroup $H$ of $G$ is the intersection of finite index subgroups of $G$. A group $G$ is called *residually finite* if the trivial subgroup is the intersection of the finite index subgroups of $G$. The concept of subgroup separability clearly implies residual finiteness and it was first studied by Hall [@H]. Mal'cev proved that residual finiteness implies a solvable word problem [@Mal], and the same idea may be adapted to prove that LERF implies solubility of the generalized word problem, as explained in [@FW]. Subgroup separability is also connected to the profinite topology of $G$, *i.e.*, the topology in which a base for the open sets is the set of all cosets of normal subgroups of finite index in $G$.
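To make the profinite topology concrete in the simplest setting, consider $G=\mathbb{Z}^2$ and the finitely generated subgroup $H=\langle(2,0),(0,3)\rangle=2\mathbb{Z}\times 3\mathbb{Z}$. The single finite quotient $\mathbb{Z}/2\times\mathbb{Z}/3$ already separates every element outside $H$ from $H$, which is exactly the closedness of $H$ in the profinite topology. A toy verification (the subgroup and the sample elements are our own choices):

```python
# H = <(2,0),(0,3)> inside Z^2; membership is a pair of congruences.
def in_H(v):
    x, y = v
    return x % 2 == 0 and y % 3 == 0

# Reduction modulo the finite-index subgroup 2Z x 3Z, i.e. the
# quotient map Z^2 -> Z/2 x Z/3; H maps onto the trivial subgroup.
def image_mod(v):
    x, y = v
    return (x % 2, y % 3)

# (1, 1) lies outside H, and the finite quotient witnesses this:
assert not in_H((1, 1)) and image_mod((1, 1)) != (0, 0)
# elements of H always map to (0, 0):
assert in_H((4, 6)) and image_mod((4, 6)) == (0, 0)
```

For non-abelian groups no such uniform finite quotient need exist, which is why subgroup separability is a genuinely restrictive property.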
A group $G$ is subgroup separable if and only if every finitely generated subgroup of $G$ is closed in the profinite topology [@H2]. It is usually hard to establish general results on subgroup separability, but we know, for example, that finite groups, abelian groups [@Mal], surface groups [@S2] and free groups [@H] are LERF, and that free products of LERF groups are LERF [@Bu]. The case of classical braid groups has already been settled [@DM], so it is natural to investigate the available generalizations. In a previous work, Almeida and Lima established a criterion for the subgroup separability of Artin groups [@AL], which inspired us to investigate the case of surface braid groups and virtual braid groups, as we do in this paper. We also discuss some properties that are related to subgroup separability for those groups. The paper is organized as follows: In Section [2](#sec:prelim){reference-type="ref" reference="sec:prelim"}, we explain some facts on subgroup separability that we will need. In Section [3](#sec:sbg){reference-type="ref" reference="sec:sbg"} we deal with surface braid groups: in the first three subsections we present our main tools - the analysis of which surface braid groups contain $F_2\times F_2$ as a subgroup, the Fadell-Neuwirth short exact sequence and property (LR), respectively - and in the fourth subsection we prove our main results on subgroup separability of surface braid groups and other consequences. Finally, in Section [4](#sec:vbg){reference-type="ref" reference="sec:vbg"} we study the case of virtual braid groups. ## Acknowledgments {#acknowledgments .unnumbered} The first author was partially supported by FINAPESQ-UEFS, FAPDF, Brazil. The second author was partially supported by DPI/UnB, FAPDF, Brazil. The third author was partially supported by FAPDF, Brazil and by National Council for Scientific and Technological Development (CNPq) through a *Bolsa de Produtividade* 305422/2022-7.
# Subgroup separability {#sec:prelim} In this section we establish some results on subgroup separability that we will need. First we present its behaviour on subgroups. **Theorem 2** ([@S Lemma 1.1]). *Let $H$ be a subgroup of $G$. If $H$ is not LERF then $G$ is not LERF either. The converse holds if $[G:H]< \infty$.* The disk case is already fully solved. **Theorem 3** ([@DM]). *If $S$ is the disk, $B_n(S)$ is LERF if and only if $n\leq 3$.* When investigating subgroup separability of braid groups, it is enough to deal with the pure case, since the pure braid group is a finite index subgroup of the full group. **Corollary 4**. *The (full) surface braid group $B_n(S)$ (resp. the virtual braid group $VB_n$) is LERF if and only if the pure surface braid group $P_n(S)$ (resp. the pure virtual braid group $VP_n$) is.* *Proof.* It follows from the remark above and Theorem [Theorem 2](#thm:lerf1){reference-type="ref" reference="thm:lerf1"}. ◻ **Corollary 5**. *If $S$ is the disk, $P_n(S)$ is LERF if and only if $n\leq 3$.* *Proof.* It follows from Theorem [Theorem 3](#thm:disk){reference-type="ref" reference="thm:disk"} and Corollary [Corollary 4](#cor:braidpure){reference-type="ref" reference="cor:braidpure"}. ◻ Although the direct product of two LERF groups is not always LERF, it does hold for some specific direct products - and for free products. **Theorem 6** ([@Bu Corollary 1.2]). *A free product of LERF groups is LERF.* **Lemma 7** ([@AL Lemma 3.6]). *Let $G$ be a group. If $G$ is LERF then $G\times \mathbb{Z}$ is LERF.* Theorem [Theorem 2](#thm:lerf1){reference-type="ref" reference="thm:lerf1"} allows us to give a negative answer for subgroup separability of a group by analyzing its subgroups. There are two well-known groups that have been used for this purpose, which we present below. **Lemma 8**.
*The group $F_2\times F_2$ is not LERF.* *Proof.* The group $F_2\times F_2$ has unsolvable generalized word problem [@Mik2], hence it cannot be LERF - see Section [3.5](#gwp){reference-type="ref" reference="gwp"} for more details. ◻ **Lemma 9** ([@BKS]). *The group $K=\langle y, \alpha, \beta \,|\,\alpha ^y=\alpha\beta, \beta^y=\beta\rangle$ is not LERF.* We shall use the following notation: $g^h = h^{-1}gh$. Then from [@BKS Equation (1)] the group $K$ has generators $y, \alpha, \beta$ and relations $y^{-1}\alpha y = \alpha\beta$ and $y^{-1}\beta y = \beta$. Taking the inverse of both relations we obtain $y^{-1}\alpha ^{-1} y = \beta^{-1}\alpha^{-1}$ and $y^{-1}\beta^{-1} y = \beta^{-1}$. Then, taking $\alpha_1=\alpha^{-1}$ and $\beta_1=\beta^{-1}$ we obtain the following presentation for $K$: $$K=\langle y, \alpha_1, \beta_1 \,|\,y^{-1}\alpha_1 y = \beta_1\alpha_1, y^{-1}\beta_1 y = \beta_1\rangle.$$ Since $y$ commutes with $\beta_1$ we may rewrite the first relation as $y \alpha_1 y^{-1} = \beta_1^{-1}\alpha_1$. Then, returning to $\beta$ (in fact, we are using the automorphism $y\ensuremath{\longmapsto}y$, $\alpha_1\ensuremath{\longmapsto}\alpha_1$ and $\beta^{-1}\ensuremath{\longmapsto}\beta$) we obtain the following presentation for $K$: $$\label{eq:k} K=\langle y, \alpha_1, \beta \,|\,y \alpha_1 y^{-1}=\beta\alpha_1, y \beta y^{-1}= \beta\rangle.$$ # Surface braid groups {#sec:sbg} We start this section by establishing the language that we shall use repeatedly in this paper. As in [@PR], a compact surface $S$ will be called *large* if it is different from: - the sphere, - the projective plane, - the disk, - the annulus, - the torus, - the Möbius strip, or - the Klein bottle. We shall call these seven surfaces *non-large surfaces*. From [@PR Proposition 1.6] it follows that if $S$ is a large compact surface then the center of the braid group $B_n(S)$ (and of the pure braid group $P_n(S)$) is the trivial group. Let $Pants$ denote the sphere minus 3 points.
![The sphere minus 3 points](sphere3points.png){#pant} *Remark 10*. We recall that a *subsurface* $N$ of a surface $M$ is the closure of an open subset of $M$. Let $x\in N$. Let $N$ be a subsurface of $M$ such that $\pi_1(N,x)\neq \{1\}$. The inclusion $N\subseteq M$ induces a homomorphism $\psi\colon \pi_1(N,x)\ensuremath{\longrightarrow}\pi_1(M,x)$ that is injective if and only if none of the connected components of the closure $\overline{M\setminus N}$ of $M\setminus N$ is a disk, see [@PR Proposition 2.1]. Using this result, we note that the fundamental group of large surfaces contains a copy of the rank two free group, since the pants surface is a subsurface of them and $\pi_1(Pants)$ is a free group of rank 2. ## The Fadell-Neuwirth short exact sequence {#ss:fadell} Let $S$ be a connected surface and let $n\in \mathbb{N}$. If $m\geq 1$, the map $p\colon F_{n+m}(S)\ensuremath{\longrightarrow}F_n(S)$, of the configuration space $F_{n+m}(S)$ onto $F_n(S)$, defined by $p(x_1,\ldots,x_n,\ldots,x_{n+m}) = (x_1,\ldots,x_n)$ induces a homomorphism $p_{\ast}\colon P_{n+m}(S)\ensuremath{\longrightarrow}P_n(S)$. The homomorphism $p_{\ast}$ geometrically "forgets" the last $m$ strings. If $M$ is without boundary, Fadell and Neuwirth showed that $p$ is a locally-trivial fibration [@FaN Theorem 1], with fibre $F_m(M\setminus \{ x_1,\ldots,x_n \})$ over the point $(x_1,\ldots,x_n)$, which we consider to be a subspace of the total space via the map $i\colon F_m(M\setminus \{x_1,\ldots,x_n\})\ensuremath{\longrightarrow}F_{n+m}(M)$ defined by $i((y_1,\ldots,y_m)) = (x_1,\ldots,x_n,y_1,\ldots,y_m)$. 
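On the level of points, the maps $p$ and $i$ are straightforward; a toy sketch with integers standing in for points of the surface (our own simplification) shows how a configuration of $n+m$ points projects to its first $n$ coordinates, while the fibre consists of configurations avoiding the base points:

```python
def is_configuration(points):
    # an ordered tuple lies in a configuration space iff its entries
    # are pairwise distinct
    return len(points) == len(set(points))

n, m = 3, 2
base = (0, 1, 2)        # a point of F_n, with integers as toy points
extra = (5, 7)          # a point of the fibre F_m over `base`

# the inclusion i appends the fibre coordinates ...
total = base + extra
assert is_configuration(total)
# ... and the projection p forgets the last m coordinates:
p = lambda c: c[:n]
assert p(total) == base
# the fibre over `base` avoids the base points, as in F_m(S \ {x_1,...,x_n}):
assert all(x not in base for x in extra)
```

This is only the set-level picture; the content of the Fadell-Neuwirth theorem is that $p$ is a locally-trivial fibration, which is what produces the long exact sequence used below.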
Applying the associated long exact sequence in homotopy to this fibration, we obtain the Fadell-Neuwirth short exact sequence of pure braid groups: $$\label{eq:ses} 1\ensuremath{\longrightarrow}P_{m}(S\setminus \{x_1,\ldots,x_n\}) \stackrel{i_{\ast}}{\longrightarrow} P_{n+m}(S) \stackrel{p_{\ast}}{\longrightarrow} P_n(S) \ensuremath{\longrightarrow}1$$ where $n\geq 3$ if $S$ is the sphere [@F; @FvB], $n\geq 2$ if $S$ is the real projective plane [@vB], and $n\geq 1$ otherwise [@FaN], and $i_{\ast}$ is the homomorphism induced by the map $i$. This sequence has been widely studied. For instance, one question studied by many authors over several years was the splitting problem for surface pure braid groups: does the short exact sequence ([\[eq:ses\]](#eq:ses){reference-type="ref" reference="eq:ses"}) split? The latter has been completely solved, see [@GG] for more details, in particular its Theorem 2. Additional information on this sequence may be found in [@GPi Section 3.1]. In the following remarks, we record explicit information about some surface braid groups that we will use several times during the text. In many cases, it is a direct product decomposition coming from a splitting of the Fadell-Neuwirth short exact sequence with trivial action, since there is a section that sends generators of the quotient group to central elements of the respective surface braid group, see [@GG Theorem 2] and the references therein. The free group of rank 2 will be denoted by $F_2$. *Remarks 11*. Suppose $n\geq 1$. 1.
Braid groups with few strings over the sphere and the projective plane are finite: - $B_1(\ensuremath{\mathbb S}^{2})$, $P_1(\ensuremath{\mathbb S}^{2})$ and $P_2(\ensuremath{\mathbb S}^{2})$ are trivial groups, $B_2(\ensuremath{\mathbb S}^{2})\cong \ensuremath{\mathbb Z}_2$, $B_3(\ensuremath{\mathbb S}^{2})$ is isomorphic to $\ensuremath{\mathbb Z}_3\rtimes \ensuremath{\mathbb Z}_4$ with non-trivial action and $P_3(\ensuremath{\mathbb S}^{2})\cong \ensuremath{\mathbb Z}_2$, see [@FvB] and also [@GPi Section 4]. - $B_1(\ensuremath{\mathbb R}P^2)=P_1(\ensuremath{\mathbb R}P^2)=\pi_1(\ensuremath{\mathbb R}P^2)\cong \ensuremath{\mathbb Z}_2$, $B_2(\ensuremath{\mathbb R}P^2)$ has order 16 and is isomorphic to the generalized quaternion group, and $P_2(\ensuremath{\mathbb R}P^2)$ is isomorphic to the quaternion group, see [@vB] and also [@GPi Section 4]. 2. If $S$ is the sphere, then $P_{n+3}(S)\cong P_n(Pants)\times \mathbb{Z}_2$. In particular, $P_4(S)\cong P_1(Pants)\times \mathbb{Z}_2\cong F_2\times \mathbb{Z}_2$. 3. Suppose $S$ is the disk. It is an immediate consequence of the classical Artin presentation of $P_2(S)$ and $B_2(S)$ that they are isomorphic to $\mathbb{Z}$, see [@A2] and also [@KM]. Let $n\geq 1$. There is a decomposition $P_{n+2}(S)\cong P_n(Pants)\times \mathbb{Z}$ that follows from the splitting of the Fadell-Neuwirth short exact sequence. In particular, note that $P_3(S)\cong F_2\times \mathbb{Z}$. 4. If $S$ is the annulus then $P_{n+1}(S)\cong P_n(Pants)\times \mathbb{Z}$. In particular, $P_2(S)\cong P_1(Pants)\times \mathbb{Z}\cong F_2\times \mathbb{Z}$. 5. If $S$ is the torus then $P_{n+1}(T)\cong P_n(T\setminus \{x_1\})\times \mathbb{Z}^2$. In particular, $P_2(T)\cong P_1(T\setminus \{x_1\})\times \mathbb{Z}^2\cong F_2\times \mathbb{Z}^2$. ## The direct product of free groups $F_2 \times F_2$ as a subgroup of surface braid groups {#ss:f2xf2} Let us divide the connected, compact surfaces into three families. - $\mathcal{F}_1$: the seven non-large surfaces.
- $\mathcal{F}_2$: the pants surface, the torus minus one point, the projective plane minus two points, the Klein bottle minus one point, and the connected sum of 3 projective planes. - $\mathcal{F}_3$: the connected, compact surfaces not considered in the families $\mathcal{F}_1$ and $\mathcal{F}_2$. We start with the following useful result. **Lemma 12**. *Let $n\geq 1$.* 1. *If $S$ is a large surface (i.e. $S\in \mathcal{F}_2\cup \mathcal{F}_3$), then $P_n(Pants)$ is a subgroup of $P_n(S)$.* 2. *Let $X$ be a sphere minus four points. If $S$ belongs to the family $\mathcal{F}_3$, then $P_n(X)$ is a subgroup of $P_n(S)$.* *Proof.* First, we note that for item (a) none of the connected components of the closure $\overline{S\setminus Pants}$ of $S\setminus Pants$ is a disk, since $S$ is a large surface. Similarly, for item (b), none of the connected components of the closure $\overline{S\setminus X}$ is a disk, if $S$ belongs to the family $\mathcal{F}_3$. See Figure [2](#pant){reference-type="ref" reference="pant"} for an illustration of the cases of the torus minus one point and the Klein bottle minus one point. ![The torus minus a point and the Klein bottle minus a point](torusminuspoint.png){#pant} Hence, from [@PR Proposition 2.2] and its proof, it follows that $P_n(Pants)$ is a subgroup of $P_n(S)$, proving item (a), and likewise that $P_n(X)$ is a subgroup of $P_n(S)$, proving item (b). ◻ It is well known that the Cartesian product $F_2 \times F_2$ of two copies of the rank two free group is not a subgroup of a surface group. Next we study this problem for surface braid groups. We start with the case of few strands for some specific surfaces. **Proposition 13**. *The Cartesian product $F_2 \times F_2$ of two copies of the rank two free group is not a subgroup of the surface braid groups:* 1. *$P_2(Pants)$,* 2. *$P_n(S)$, where $S$ is the sphere and $n\leq 3$ or $S$ is the projective plane and $n\leq 2$.* *Proof.* 1.
We consider the Fadell-Neuwirth short exact sequence $$1\ensuremath{\longrightarrow}P_2(Pants) \ensuremath{\longrightarrow}P_{4}(D^2) \ensuremath{\longrightarrow}P_2(D^2) \ensuremath{\longrightarrow}1.$$ If $F_2 \times F_2$ is a subgroup of $P_2(Pants)$ then it is a subgroup of $P_4(D^2)$ and also a subgroup of $B_4(D^2)$. But this contradicts the main theorem of [@Ak], which shows that $F_2 \times F_2$ is not a subgroup of $B_4(D^2)$. 2. This item is obvious since the groups considered are finite.  ◻ *Remark 14*. We recall that Makanina constructed a subgroup $F_2(a,b) \times F_2(c,d)$ of the Artin braid group $B_n$, for $n\geq 5$, taking the following elements using the classical Artin presentation of $B_n$: $a=\sigma_3^2$, $b=\sigma_3\sigma_2^2\sigma_3$, $c=\sigma_4\sigma_3\sigma_2^2\sigma_3\sigma_4$ and $d=\sigma_4\sigma_3\sigma_2\sigma_1^2\sigma_2\sigma_3\sigma_4$, see [@Mak Proof of Theorem 1]. Now, we move to the cases in which $F_2 \times F_2$ is a subgroup of surface braid groups. **Theorem 15**. *The direct product $F_2 \times F_2$ of two copies of the rank two free group is a subgroup of $P_n(S)$ if one of the following conditions is satisfied:* 1. *$S$ is a surface in $\mathcal{F}_3$ and $n\geq 2$,* 2. *the surface $S$ belongs to $\mathcal{F}_2$ and $n\geq 3$,* 3. *$S$ belongs to $\mathcal{F}_1$, but is different from the sphere, and $n\geq 5$,* 4. *$S$ is the sphere and $n\geq 6$.* *Proof.* We shall use the Fadell-Neuwirth short exact sequence [\[eq:ses\]](#eq:ses){reference-type="eqref" reference="eq:ses"} $$1\ensuremath{\longrightarrow}P_{m}(S\setminus \{x_1,\ldots,x_n\}) \ensuremath{\longrightarrow}P_{n+m}(S) \ensuremath{\longrightarrow}P_n(S) \ensuremath{\longrightarrow}1$$ for $n\geq 3$ if $S$ is the sphere, for $n\geq 2$ if $S$ is the projective plane, and for $n\geq 1$ otherwise. Let $n\geq 2$. 1. Let $X$ be the sphere minus four points.
From Lemma [Lemma 12](#lem:pant){reference-type="ref" reference="lem:pant"}, if a surface $S$ belongs to the family $\mathcal{F}_3$ then $P_n(X)$ is a subgroup of $P_n(S)$. So it is enough to prove that $F_2\times F_2$ is a subgroup of $P_n(X)$. We consider the Fadell-Neuwirth short exact sequence $$1\ensuremath{\longrightarrow}P_2(X) \ensuremath{\longrightarrow}P_{5}(D^2) \stackrel{\ensuremath{\varphi}}{\longrightarrow} P_3(D^2) \ensuremath{\longrightarrow}1$$ forgetting the last 2 strings of pure braids in $P_5(D^2)$. Note that the elements $a$, $b$, $c$ and $d$ defined by Makanina (see Remark [Remark 14](#rem:maka){reference-type="ref" reference="rem:maka"}) belong to the kernel of $\ensuremath{\varphi}$, i.e., $F_2 \times F_2$ is a subgroup of $P_2(X)$. Since $P_2(X)$ is a subgroup of $P_n(X)$, for all $n\geq 2$, the result follows. 2. Let $S$ be a surface in $\mathcal{F}_2$. To prove that if $n\geq 3$ then $F_2 \times F_2$ is a subgroup of $P_n(S)$ we may use an argument similar to that of the previous item, with $Pants$ instead of $X$, but using in this case the Fadell-Neuwirth short exact sequence $$1\ensuremath{\longrightarrow}P_3(Pants) \ensuremath{\longrightarrow}P_{5}(D^2) \stackrel{\psi}{\longrightarrow} P_2(D^2) \ensuremath{\longrightarrow}1$$ forgetting the last 3 strings of pure braids in $P_5(D^2)$; its kernel is $P_3$ of the disk minus two points, i.e. $P_3(Pants)$. 3. Let $n\geq 5$. If $S$ is the disk the claim was proved by Makanina [@Mak]. Suppose that $S$ is either the annulus, the torus, the Möbius strip, or the Klein bottle. Since the pure Artin braid group $P_n(D^2)$ is a subgroup of $P_n(S)$ (from [@PR Proposition 2.2] and its proof), the result of this item follows. Let $S$ be the projective plane and $n\geq 5$.
In this case we use item (b) of this theorem and the Fadell-Neuwirth short exact sequence $$1\ensuremath{\longrightarrow}P_{n-2}(X) \ensuremath{\longrightarrow}P_{n}(S) \ensuremath{\longrightarrow}P_2(S) \ensuremath{\longrightarrow}1,$$ where $X$ is the projective plane minus two points, to conclude the result. 4. Let $S$ be the sphere and let $n\geq 6$. Using the Fadell-Neuwirth short exact sequence $$1\ensuremath{\longrightarrow}P_{n-3}(Pants) \ensuremath{\longrightarrow}P_{n}(S) \ensuremath{\longrightarrow}P_3(S) \ensuremath{\longrightarrow}1$$ and item (b) of this theorem we conclude the result for the sphere.  ◻ ## Property (LR) {#ss:lr} A subgroup $H$ of a group $G$ is called a *retract* if there is a homomorphism $\rho: G \ensuremath{\longrightarrow}H$ which restricts to the identity map on $H$. This is equivalent to $G$ splitting as a semidirect product $N\rtimes H$, where $N=\ensuremath{\operatorname{\text{Ker}}\left({\rho}\right)}$. In this case the map $\rho$ is called a *retraction* of $G$ onto $H$. Let $G$ be a group and let $H$ be a subgroup of $G$. We will say that $H$ is a *virtual retract* of $G$, denoted $H \leq_{vr} G$, if there exists a subgroup $K\leq G$ such that $|G:K|<\infty$, $H \subset K$ and $H$ is a retract of $K$. If $G$ is a group, we say $G$ has *property (LR) (local retractions)* if all finitely generated subgroups of $G$ are virtual retracts. This property was defined by Long and Reid [@LR], although it had been studied long before its explicit definition. A virtual retract of a residually finite group is closed in the profinite topology [@Mi], hence property (LR) implies subgroup separability. In fact, Scott [@S2] proved that all surface groups are LERF essentially by showing that they satisfy property (LR). Every finite group is trivially (LR), since every subgroup has finite index. Every free group is (LR) [@H; @Bu2] and (LR) is preserved by free products [@Gr].
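As a concrete instance of a retraction, the cyclic subgroup $\langle a\rangle$ is a retract of the free group $F(a,b)$: the homomorphism induced by $a\mapsto a$, $b\mapsto 1$ restricts to the identity on $\langle a\rangle$. A small sketch, with words encoded as strings and capital letters denoting inverses (our own convention):

```python
def rho(word):
    # retraction F(a,b) -> <a> (isomorphic to Z) induced by
    # a -> a, b -> 1, recorded as the exponent sum of a
    return sum(+1 if g == 'a' else -1 if g == 'A' else 0 for g in word)

# rho restricts to the identity on <a>:
assert rho('aaa') == 3 and rho('AA') == -2
# rho is a homomorphism that kills b, e.g. a b a b^{-1} maps like a^2:
assert rho('abaB') == rho('a') + rho('a')
```

The kernel here is the normal closure of $b$, so $F(a,b)$ splits as $N\rtimes\langle a\rangle$, matching the semidirect product description of retracts above.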
Although our main objective in this paper is to establish subgroup separability for braid groups, we can actually prove property (LR) for some of the pure braid groups, which is a stronger result. Property (LR) is not preserved by direct products - indeed $F_2\times F_2$ is not (LR) since it is not even LERF - however it is preserved if one of the factors is virtually abelian. We recall that a virtually abelian group is a group which contains an abelian subgroup of finite index. **Theorem 16** ([@Mi Proposition 5.6]). *Let $X$ be a finitely generated virtually abelian group and $Y$ a group satisfying (LR). Then $X\times Y$ satisfies (LR).* Now we can deal with the cases below, by using some information on surface braid groups that was collected in Remarks [Remarks 11](#rems:sbg){reference-type="ref" reference="rems:sbg"}. **Theorem 17**. *Let $S$ be a non-large surface.* 1. *$P_2(S)$ is (LR) if $S$ is the disk, sphere, projective plane, annulus or torus.* 2. *$P_3(S)$ is (LR) if $S$ is the disk or the sphere.* 3. *$P_4(S)$ is (LR) if $S$ is the sphere.* *Proof.* 1. If $S$ is the disk then $P_2(S)=\mathbb{Z}$ hence it is (LR). If $S$ is the sphere or the projective plane then $P_2(S)$ is finite hence (LR). If $S$ is the annulus then $P_2(S)=P_1(Pants)\times \mathbb{Z}\cong F_2\times \mathbb{Z}$ hence (LR) by Theorem [Theorem 16](#thm:Mi1){reference-type="ref" reference="thm:Mi1"}. If $S$ is the torus then $P_2(T)=P_1(T\setminus \{x_1\})\times \mathbb{Z}^2\cong F_2\times \mathbb{Z}^2$ hence (LR) by Theorem [Theorem 16](#thm:Mi1){reference-type="ref" reference="thm:Mi1"}. 2. If $S$ is the disk then $P_3(S)=F_2\times \mathbb{Z}$ hence (LR) by Theorem [Theorem 16](#thm:Mi1){reference-type="ref" reference="thm:Mi1"}. If $S$ is the sphere, then $P_3(S)$ is finite hence (LR). 3. If $S$ is the sphere, then $P_4(S)=P_1(Pants)\times \mathbb{Z}_2\cong F_2\times \mathbb{Z}_2$ hence (LR) by Theorem [Theorem 16](#thm:Mi1){reference-type="ref" reference="thm:Mi1"}.
◻ ## Subgroup separability for surface braid groups {#ss:ss-for-sbg} In this subsection, we determine completely under which conditions surface braid groups are (or are not) LERF. First, we consider the general case of large surfaces, then we study case by case the non-large surfaces. During this subsection we shall again use some information on surface braid groups that was collected in Remarks [Remarks 11](#rems:sbg){reference-type="ref" reference="rems:sbg"}. **Theorem 18**. *Let $S$ be a large surface and let $n\geq 2$. Then $P_n(S)$ is not LERF.* *Proof.* Let $n\geq 2$ and let $S$ be a large surface. We note that the result follows for some surfaces by a simple application of Theorem [Theorem 15](#thm:f2f2){reference-type="ref" reference="thm:f2f2"}. We shall give now a proof that covers all surfaces of the statement. First, we prove that the pure surface braid group $P_n(Pants)$ is not LERF. From the Fadell-Neuwirth short exact sequence [\[eq:ses\]](#eq:ses){reference-type="eqref" reference="eq:ses"} we obtain the decomposition $P_{n+2}(D^2)\cong P_n(Pants)\times \mathbb{Z}$. If $P_n(Pants)$ is LERF then from Lemma [Lemma 7](#lem:lerf1){reference-type="ref" reference="lem:lerf1"} so is $P_{n+2}(D^2)$. But this is a contradiction, since $P_{n+2}(D^2)$ is not LERF for $n\geq 2$, by Corollary [Corollary 5](#cor:puredisk){reference-type="ref" reference="cor:puredisk"}. From Lemma [Lemma 12](#lem:pant){reference-type="ref" reference="lem:pant"} it follows that $P_n(Pants)$ is a subgroup of $P_n(S)$. By applying Theorem [Theorem 2](#thm:lerf1){reference-type="ref" reference="thm:lerf1"} we conclude that $P_n(S)$ is not LERF. ◻ We move to the case of non-large surfaces. We start considering the special case of the Klein bottle. **Proposition 19**. *The pure braid group of the Klein bottle with 2 strings $P_2(Kb)$ is not LERF.* *Proof.* We consider here the presentation of the braid groups of the Klein bottle given in [@GP Theorem 2.1].
Also, we consider the section $s\colon P_n(Kb)\ensuremath{\longrightarrow}P_{n+1}(Kb)$ of the epimorphism $P_{n+1}(Kb)\ensuremath{\longrightarrow}P_n(Kb)$, that geometrically forgets the last string of braids in $P_{n+1}(Kb)$, described in [@GP Proposition 5.1]. By the results above, we may assume $P_1(Kb) = \langle a_1, b_1 \mid b_1a_1b_1^{-1} = a_1^{-1} \rangle$. From [@GP p. 18] we have $P_2(Kb)=F_2(a_2,b_2)\rtimes s(P_1(Kb))$, where $$s(P_1(Kb)) = \langle a_1a_2, b_2b_1 \mid (b_2b_1)(a_1a_2)(b_2b_1)^{-1} = (a_2a_1)^{-1} \rangle$$ and the action (see [@GP eqn (5.9)]) is given by - $a_1a_2\cdot a_2\cdot (a_1a_2)^{-1} = a_2$, - $a_1a_2\cdot b_2\cdot (a_1a_2)^{-1} = a_2^{-2}b_2$, - $b_2b_1\cdot a_2\cdot (b_2b_1)^{-1} = a_2^{-1}$, - $b_2b_1\cdot b_2\cdot (b_2b_1)^{-1} = a_2b_2a_2$. Let $p\colon P_2(Kb)\ensuremath{\longrightarrow}P_1(Kb)$ denote the epimorphism that geometrically forgets the last string of braids in $P_2(Kb)$. The kernel of $p$ is $F_2(a_2,b_2)$, the free group of rank 2. Let $Q = \ensuremath{\mathbb Z}\times \ensuremath{\mathbb Z}$ be the index 2 subgroup of $P_1(Kb)$ with presentation $\langle a_1, b_1^2 \mid a_1b_1^2 = b_1^2 a_1 \rangle$. Let $H = p^{-1}(Q)$ be the corresponding index 2 subgroup of $P_2(Kb)$.
With this information we have the following commutative diagram of short exact sequences: $$\xymatrix{ & & 1 \ar[d] & 1 \ar[d] & \\ 1 \ar[r] & F_2(a_2,b_2) \ar[r] \ar@{=}[d] & H \ar[r]^{p|_{H}} \ar[d] & Q \ar[r] \ar[d] & 1\\ 1 \ar[r] & F_2(a_2,b_2) \ar[r] & P_2(Kb) \ar[r]^{p} \ar[d]^{\psi'} & P_1(Kb) \ar[r] \ar[d]^{\psi} & 1\\ & & \ensuremath{\mathbb Z}_2 \ar@{=}[r] \ar[d] & \ensuremath{\mathbb Z}_2 \ar[d] & \\ & & 1 & 1 & }$$ where $s\colon P_1(Kb)\ensuremath{\longrightarrow}P_2(Kb)$ is a section of $p$ given by $a_1\ensuremath{\longmapsto}a_1a_2$, $b_1\ensuremath{\longmapsto}b_2b_1$, $\psi\colon P_1(Kb)\ensuremath{\longrightarrow}\ensuremath{\mathbb Z}_2$ is given by $a_1\ensuremath{\longmapsto}\overline{0}$ and $b_1\ensuremath{\longmapsto}\overline{1}$, and $\psi'=\psi\circ p$. By using the method for presentations of extensions, given in [@J Chapter 10], and using the presentations of the groups $F_2(a_2,b_2)$ and $Q=\langle a_1, b_1^2 \mid a_1b_1^2 = b_1^2 a_1 \rangle$ we have that the group $H$ has a presentation given by generators $a_2,\, b_2,\, a_1a_2,\, (b_2b_1)^2$ and relations - $(a_1a_2)(b_2b_1)^2 = (b_2b_1)^2(a_1a_2)$ - $a_1a_2\cdot a_2\cdot (a_1a_2)^{-1} = a_2$, - $a_1a_2\cdot b_2\cdot (a_1a_2)^{-1} = a_2^{-2}b_2$, - $(b_2b_1)^2\cdot a_2\cdot (b_2b_1)^{-2} = a_2$, - $(b_2b_1)^2\cdot b_2\cdot (b_2b_1)^{-2} = b_2$. So, by renaming generators, $P_2(Kb)$ has an index two subgroup $H=L\times \mathbb{Z}$ where $L = F_2(a,b)\rtimes \mathbb{Z}$ and the $\mathbb{Z}$ factor in the semi-direct product is generated by $c$ ($=a_1a_2$), whose action is given by $cac^{-1}=a$ and $cbc^{-1}=a^{-2}b$. Let $N$ be the index two subgroup of $L$ whose quotient is isomorphic to $\ensuremath{\ensuremath{\left\langle a \,\mid\, a^2=1\right\rangle}}$.
Then, by using the Reidemeister-Schreier method (see [@KM Appendix I.6]) with the transversal $\Lambda=\{ e, a \}$, we obtain the following presentation for $N$: $$\label{eq:n} N=\ensuremath{\ensuremath{\left\langle c, b, x, t \,\mid\, c x c^{-1} = x, ctc^{-1}=x^{-1}t, cbc^{-1}=x^{-1}b\right\rangle}}.$$ We shall use the presentation of $K$ given by equation ([\[eq:k\]](#eq:k){reference-type="ref" reference="eq:k"}). Now, we consider the following homomorphisms $\ensuremath{\varphi}\colon K\ensuremath{\longrightarrow}N$ and $\psi\colon N\ensuremath{\longrightarrow}K$ defined by $$\ensuremath{\varphi}(y) = c,\, \ensuremath{\varphi}(\alpha_1)=b\, \textrm{ and }\, \ensuremath{\varphi}(\beta)=x^{-1}$$ and $$\psi(c)=y,\, \psi(b)=\alpha_1,\, \psi(x)=\beta^{-1}\, \textrm{ and }\, \psi(t)=\alpha_1.$$ Since the composition $\psi\circ \ensuremath{\varphi}\colon K\ensuremath{\longrightarrow}K$ is the identity homomorphism (in particular, it is injective), $\ensuremath{\varphi}$ is injective and $K$ is isomorphic to a subgroup of $N$. As a consequence, from Lemma [Lemma 9](#lem:Knotlerf){reference-type="ref" reference="lem:Knotlerf"} and Theorem [Theorem 2](#thm:lerf1){reference-type="ref" reference="thm:lerf1"}, $P_2(Kb)$ is not LERF. ◻ Now, we state the general result for non-large surfaces. **Theorem 20**. *Let $S$ be a non-large surface.* 1. *$P_2(S)$ is LERF if and only if $S$ is not the Klein bottle.* 2. *$P_3(S)$ is LERF if and only if $S$ is either the disk, the sphere or the projective plane.* 3. *$P_4(S)$ is LERF if and only if $S$ is the sphere.* 4. *$P_n(S)$ is not LERF for every $n\geq 5$.* *Proof.* Let $S$ be a non-large surface. Recall that the case of the disk was covered in [@DM]. 1. If $S$ is the sphere, projective plane, annulus or torus, then the result follows from Theorem [Theorem 17](#thm:LRnlarge){reference-type="ref" reference="thm:LRnlarge"}.
We know from Proposition [Proposition 19](#prop:kb){reference-type="ref" reference="prop:kb"} that the group $P_2(Kb)$ is not LERF. It remains to consider the case of the Möbius band. Let $S=Mb$ be the Möbius band. From [@GP Proof of Proposition A.1, item (a)] the group $P_2(Mb)$ has a subgroup $G$ of finite index such that $G=F_2\times \mathbb{Z}$, where $F_2$ is the free group of rank 2. The result then follows by applying Theorem [Theorem 2](#thm:lerf1){reference-type="ref" reference="thm:lerf1"} and Lemma [Lemma 7](#lem:lerf1){reference-type="ref" reference="lem:lerf1"}. 2. If $S$ is the sphere then the result follows from Theorem [Theorem 17](#thm:LRnlarge){reference-type="ref" reference="thm:LRnlarge"}. For the projective plane, using the Fadell-Neuwirth short exact sequence [\[eq:ses\]](#eq:ses){reference-type="eqref" reference="eq:ses"}, we have that the pure braid group $P_3(\mathbb{R}P^2)$ has $P_2(Mb)$ as a finite index subgroup, since $P_2(\mathbb{R}P^2)=Q_8$, where $Q_8$ is the quaternion group. Since $P_2(Mb)$ is LERF (from the first item of this theorem) and has finite index in $P_3(\mathbb{R}P^2)$, we conclude from Theorem [Theorem 2](#thm:lerf1){reference-type="ref" reference="thm:lerf1"} that $P_3(\mathbb{R}P^2)$ is LERF. Now, let $S$ be the annulus, the torus, the Möbius band or the Klein bottle. From the short exact sequence [\[eq:ses\]](#eq:ses){reference-type="eqref" reference="eq:ses"} with $n=1$ and $m=2$ we obtain that $P_3(S)$ has a subgroup $P_2(S\setminus \{x_1\})$ that is not LERF by Theorem [Theorem 18](#thm:large){reference-type="ref" reference="thm:large"}. Therefore, from Theorem [Theorem 2](#thm:lerf1){reference-type="ref" reference="thm:lerf1"} we obtain the conclusion of this item. 3. In this case, if $S$ is the sphere, the result is also guaranteed by Theorem [Theorem 17](#thm:LRnlarge){reference-type="ref" reference="thm:LRnlarge"}. Now suppose that $S$ is different from the sphere.
From the short exact sequence [\[eq:ses\]](#eq:ses){reference-type="eqref" reference="eq:ses"} with $n=2$ and $m=2$ for the case of the projective plane, and with $n=1$ and $m=3$ otherwise, we obtain that $P_4(S)$ has a subgroup that is not LERF by Theorem [Theorem 18](#thm:large){reference-type="ref" reference="thm:large"} or item (b) of this theorem. So, applying Theorem [Theorem 2](#thm:lerf1){reference-type="ref" reference="thm:lerf1"} we get the conclusion of this item. 4. To show that $P_l(S)$ is not LERF for every $l\geq 5$ it is enough to consider the short exact sequence [\[eq:ses\]](#eq:ses){reference-type="eqref" reference="eq:ses"} with $n=3$ and $m=l-3$ and apply Theorem [Theorem 2](#thm:lerf1){reference-type="ref" reference="thm:lerf1"}. Indeed, the respective subgroup is the pure braid group $P_{l-3}(S\setminus \{x_1, x_2, x_3\})$ of a large surface, which is not LERF by Theorem [Theorem 18](#thm:large){reference-type="ref" reference="thm:large"}, since $l-3\geq 2$.  ◻ Since (LR) implies LERF, we have the result below. **Corollary 21**. *Let $S$ be a compact surface. If $S$ is large then $P_n(S)$ is not (LR) for all $n\geq 2$. If $S$ is not large then* 1. *$P_2(S)$ is (LR) if $S$ is the disk, sphere, projective plane, annulus or torus. $P_2(S)$ is not (LR) if $S$ is the Klein bottle;* 2. *$P_3(S)$ is (LR) if $S$ is the sphere or the disk and not (LR) if $S$ is the annulus, torus, Möbius band or Klein bottle;* 3. *$P_4(S)$ is (LR) if and only if $S$ is the sphere;* 4. *$P_n(S)$ is not (LR) for every $n\geq 5$.* *Proof.* Follows from Theorem [Theorem 17](#thm:LRnlarge){reference-type="ref" reference="thm:LRnlarge"}, Theorem [Theorem 18](#thm:large){reference-type="ref" reference="thm:large"} and Theorem [Theorem 20](#thm:nlarge){reference-type="ref" reference="thm:nlarge"}. ◻ For the pure braids, there are two cases for which we don't know the validity of property (LR): $P_2(S)$ when $S$ is the Möbius band and $P_3(S)$ when $S$ is the projective plane.
For the full braid groups, we have the result below. **Corollary 22**. *Let $S$ be a compact surface. If $S$ is large then $B_n(S)$ is not (LR) for all $n\geq 2$. If $S$ is not large then* 1. *If $S$ is the Klein bottle then $B_2(S)$ is not (LR);* 2. *If $S$ is the annulus, Möbius band, Klein bottle or the torus then $B_3(S)$ is not (LR);* 3. *If $S$ is not the sphere then $B_4(S)$ is not (LR);* 4. *$B_n(S)$ is not (LR) for every $n\geq 5$.* *Proof.* It follows from Theorem [Theorem 18](#thm:large){reference-type="ref" reference="thm:large"}, Theorem [Theorem 20](#thm:nlarge){reference-type="ref" reference="thm:nlarge"} and Corollary [Corollary 4](#cor:braidpure){reference-type="ref" reference="cor:braidpure"}. ◻ It is worth mentioning that $B_2(S)$, for $S$ the sphere or projective plane, and $B_3(S)$, for $S$ the sphere, are finite groups, hence (LR). If $S$ is the disk then $B_2(S)$ is infinite cyclic, hence also (LR). We however don't know the validity of (LR) for: $B_2(S)$ if $S$ is the annulus, Möbius band or torus; $B_3(S)$ if $S$ is the projective plane, sphere or disk; $B_4(S)$ if $S$ is the sphere. ## The generalized word problem {#gwp} The occurrence problem for a finitely presented group $G$ is the problem of deciding, given $w, u_1,\ldots, u_n\in G$ (written as words in generators of $G$), whether $w\in \langle u_1,\ldots, u_n \rangle$, the subgroup of $G$ generated by $u_1,\ldots, u_n$. Since the occurrence problem has the word problem as a special case (to decide whether $w=1$, ask whether $w\in \langle 1\rangle$), it is also known as the generalized word problem. The latter term is due to Magnus, who solved the problem for one-relator groups in [@Mag]. Mikhailova [@Mik2] showed that the occurrence problem is unsolvable for $F_2\times F_2$.
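To illustrate the mechanism behind Mikhailova's result (an illustrative sketch of ours, not from the paper): to a quotient $G$ of $F_2$ one associates the subgroup $L_G = \{(u,v)\in F_2\times F_2 : u \text{ and } v \text{ have the same image in } G\}$, which is finitely generated whenever $G$ is finitely presented. Membership of $(w,1)$ in $L_G$ is exactly the word problem for $G$, so a finitely presented $G$ with unsolvable word problem yields the unsolvability of the occurrence problem for $F_2\times F_2$. The toy example below uses the abelianization $G=\mathbb{Z}^2$, where the word problem is trivially solvable:

```python
# Toy model of Mikhailova's subgroup L_G of F2 x F2 for G = Z^2, the
# abelianization of F2.  Words use a/b for generators and A/B for their
# inverses.  A pair (u, v) lies in L_G exactly when u and v have the same
# image in G, so deciding membership in L_G is the word problem for G.

def image_in_Z2(word):
    """Image of a word of F2 in the abelianization Z^2 (exponent sums)."""
    counts = {"a": 0, "b": 0}
    for ch in word:
        counts[ch.lower()] += 1 if ch.islower() else -1
    return (counts["a"], counts["b"])

def in_mikhailova_subgroup(u, v):
    return image_in_Z2(u) == image_in_Z2(v)

print(in_mikhailova_subgroup("abA", "b"))   # True: both map to (0, 1)
print(in_mikhailova_subgroup("ab", "ba"))   # True: commutators die in Z^2
print(in_mikhailova_subgroup("a", "b"))     # False: distinct images
```

For $G=\mathbb{Z}^2$ everything is decidable, of course; Mikhailova's point is that the same construction applied to a group with unsolvable word problem cannot be decided by any algorithm.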
Using this result, Makanina [@Mak] showed that the occurrence problem is also unsolvable for braid groups $B_n$ with $n\geq 5$ strings, and Stillwell [@St] showed that the occurrence problem for the mapping class group $M(g,0)$ of the closed orientable surface of genus $g\geq 1$ is solvable if and only if $g=1$. From Theorem [Theorem 15](#thm:f2f2){reference-type="ref" reference="thm:f2f2"} we may conclude the following result for surface braid groups. *Remark 23*. Makanina [@Mak] did not discuss solvability of the occurrence problem for the pure braid groups of the disk. However, their construction of a group $F_2\times F_2$ inside $B_n(D^2)$ is such that the free generators of the rank 2 free groups are pure braids. So, from the construction given in [@Mak] we also conclude that the occurrence problem is unsolvable for the pure braid groups $P_n(D^2)$, with $n\geq 5$. **Proposition 24**. *The occurrence problem for the braid group $B_n(S)$ of the surface $S$, and its pure braid subgroup $P_n(S)$, is unsolvable if* 1. *$S$ is a surface in $\mathcal{F}_3$ and $n\geq 2$,* 2. *the surface $S$ belongs to $\mathcal{F}_2$ and $n\geq 3$,* 3. *$S$ belongs to $\mathcal{F}_1$, but is different from the sphere, and $n\geq 5$,* 4. *$S$ is the sphere and $n\geq 6$.* *Proof.* The general idea is to exhibit a subgroup $H$ of $P_n(S)$ for which the occurrence problem is unsolvable. The result follows from Theorem [Theorem 15](#thm:f2f2){reference-type="ref" reference="thm:f2f2"} and the fact that the occurrence problem is unsolvable for $F_2\times F_2$, see [@Mik2]. ◻ It is known that subgroup separability implies solvability of the generalized word problem (see [@FW Section 3]). Combining that fact with Theorem [Theorem 20](#thm:nlarge){reference-type="ref" reference="thm:nlarge"} gives us the following result. **Corollary 25**. *Let $S$ be a non-large surface. Then the occurrence problem for $P_n(S)$ is solvable:* 1. *If $n=2$ and $S$ is not the Klein bottle;* 2.
*If $n=3$ and $S$ is the disk, sphere or projective plane;* 3. *If $n=4$ and $S$ is the sphere.* We recall that for $n=1$ the (pure) braid group is equal to the fundamental group of $S$, see Remarks [Remarks 1](#rem:pi1){reference-type="ref" reference="rem:pi1"}. There are some missing cases, even for the pure braid groups. We don't know the answer for: - $P_2(S)$ if $S\in \mathcal{F}_2$ or if $S$ is the Klein bottle; - $P_n(S)$ if $S$ is the annulus, torus, Möbius band or the Klein bottle and $n\in \{3, 4\}$; - $P_4(S)$ if $S$ is the disk or projective plane. # Virtual braid groups {#sec:vbg} The virtual braid group is the natural companion to the category of virtual knots, just as the Artin braid group is to usual knots and links. We note that a virtual knot diagram is like a classical knot diagram with one extra type of crossing, called a virtual crossing. The virtual braid groups have interpretations in terms of diagrams, see [@Kam2], [@Kau] and [@V]. The notion of virtual knots and links was introduced by Kauffman together with virtual braids in [@Kau], and since then it has drawn the attention of several researchers. Let $n\geq 2$ be a positive integer. For the first definition, we shall consider the classical presentation of the Artin braid group $B_n$ (see [@A2]) and the presentation of the virtual braid group $VB_n$ that was formulated in [@V p.798] and restated in [@BB]. *Definition 26*. The *braid group on $n$ strands*, denoted by $B_n$, is the abstract group generated by $\sigma_i$, for $i=1,2,\dots,n-1$, with the following relations: - $\sigma_i\sigma_{i+1}\sigma_{i}=\sigma_{i+1}\sigma_{i}\sigma_{i+1}$, $i=1,2,\dots,n-2;$ - $\sigma_{i}\sigma_j=\sigma_{j}\sigma_i$, $\mid i-j\mid\ge 2$.
The *virtual braid group on $n$ strands*, denoted by $VB_n$, is the abstract group generated by $\sigma_i$ (classical generators) and $v_i$ (virtual generators), for $i=1,2,\dots,n-1$, with relations (AR1), (AR2) and: - $v_iv_{i+1}v_{i}=v_{i+1}v_{i}v_{i+1}$, $i=1,2,\dots,n-2;$ - $v_{i}v_j=v_{j}v_i$, $\mid i-j\mid\ge 2;$ - $v_i^2=1$, $i=1,2,\dots,n-1;$ - $\sigma_{i}v_j=v_{j}\sigma_i$, $\mid i-j\mid\ge 2;$ - $v_iv_{i+1}\sigma_{i}=\sigma_{i+1}v_{i}v_{i+1}$, $i=1,2,\dots,n-2.$ The generator $\sigma_i$ corresponds to the diagram represented on the left of Figure [4](#fig:classical){reference-type="ref" reference="fig:classical"}; the generator $\sigma_i^{-1}$ is obtained from $\sigma_i$ by making a crossing change. The geometric generator $v_i$ is illustrated in Figure [7](#fig:singvirt){reference-type="ref" reference="fig:singvirt"}. ![Classical crossings, for $i=1,\ldots, n-1$.](sigmai){#fig:classical width="0.4\\columnwidth"} ![Classical crossings, for $i=1,\ldots, n-1$.](sigmai1){#fig:classical width="0.4\\columnwidth"} Recently, in [@CPM], Caprau, De la Pena and McGahan introduced virtual singular braids as a generalization of the classical singular braids defined by Birman [@Bi] and Baez [@Ba] for the study of Vassiliev invariants, and of the virtual braids defined by Kauffman [@Kau] and Vershinin [@V]. In [@CPM] the authors proved an Alexander and Markov Theorem for virtual singular braids and gave two presentations for the monoid of virtual singular braids, denoted by $VSB_n$. In a more recent paper [@CY], Caprau and Yeung showed that the monoid $VSB_n$ embeds in the group $VSG_n$ called the virtual singular braid group on $n$ strands. They also gave a presentation of the virtual singular pure braid group $VSPG_n$ and showed that $VSG_n$ is a semi-direct product of $VSPG_n$ and the symmetric group $S_n$. Virtual singular braids are similar to classical braids; in addition to classical crossings, they also have virtual and singular crossings.
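A quick sanity check on the presentations in Definition 26 (ours, not from [@V] or [@BB]): sending both $\sigma_i$ and $v_i$ to the transposition $s_i=(i\ i{+}1)$ defines the canonical projections $B_n\to S_n$ and $VB_n\to S_n$, so every listed relation must hold for transpositions. A short verification on permutations:

```python
# Sanity check (ours): the map sigma_i, v_i |-> s_i = (i, i+1) sends the
# relations of B_n and VB_n in Definition 26 to identities in the symmetric
# group S_n.  Permutations of {0, ..., n-1} are stored as tuples, and
# compose(p, q) applies q first, then p.

def transposition(i, n):
    p = list(range(n))
    p[i], p[i + 1] = p[i + 1], p[i]
    return tuple(p)

def compose(p, q):
    return tuple(p[q[k]] for k in range(len(q)))

n = 4
e = tuple(range(n))
s = [transposition(i, n) for i in range(n - 1)]

# (AR1)/(VR1): s_i s_{i+1} s_i = s_{i+1} s_i s_{i+1}
assert compose(compose(s[0], s[1]), s[0]) == compose(compose(s[1], s[0]), s[1])
# (AR2)/(VR2)/(MR1): distant generators commute for |i - j| >= 2
assert compose(s[0], s[2]) == compose(s[2], s[0])
# (VR3): v_i^2 = 1
assert all(compose(t, t) == e for t in s)
# (MR2): v_i v_{i+1} sigma_i = sigma_{i+1} v_i v_{i+1} -- on permutations
# both sides reduce to the braid relation checked above.
print("all relations of Definition 26 hold for transpositions in S_4")
```

Of course this only verifies the quotient map; the interesting content of $B_n$ and $VB_n$ lies in the kernels of these projections, the (virtual) pure braid groups.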
The multiplication of two virtual singular braids $\alpha$ and $\beta$ on $n$ strands is given by vertical concatenation. The braid $\alpha\beta$ is formed by placing $\alpha$ on top of $\beta$ and gluing the bottom endpoints of $\alpha$ with the top endpoints of $\beta$. Under this binary operation, the set of isotopy classes of virtual singular braids on $n$ strands forms a monoid. Following [@CY Definition 3] we will call an element in $VSG_n$ an *extended virtual singular braid* or simply a *virtual singular braid* for short. ![Singular and virtual crossings, for $i=1,\ldots, n-1$.](taui){#fig:singvirt width="0.6\\columnwidth"} ![Singular and virtual crossings, for $i=1,\ldots, n-1$.](taui1){#fig:singvirt width="0.6\\columnwidth"} ![Singular and virtual crossings, for $i=1,\ldots, n-1$.](vimarked){#fig:singvirt width="0.6\\columnwidth"} In [@CY Definition 3] an abstract group called the virtual singular braid group was defined, in which the virtual singular braid monoid $VSB_n$ (defined in [@CPM]) embeds. We shall use the presentation as stated in [@CY Pages 5-6]. We note that the relations of the groups $B_n$ and $VB_n$ given in their respective presentations (see Definition [Definition 26](#defn:vbn){reference-type="ref" reference="defn:vbn"}) appear in the following definition. *Definition 27*. The *virtual singular braid group*, denoted by $VSG_n$, is the abstract group generated by $\sigma_i$ (classical generators), $\tau_i$ (singular generators) and $v_i$ (virtual generators), where $i=1,2,\dots,n-1$, subject to the following relations: - Two point relations: $v_i^2=1$ and $\sigma_{i}\tau_i=\tau_{i}\sigma_i$, for all $i=1,2,\dots,n-1$. - Three point relations, for all $|i-j|=1$: - $\sigma_i\sigma_{j}\sigma_{i}=\sigma_{j}\sigma_{i}\sigma_{j}$, - $v_iv_{j}v_{i}=v_{j}v_{i}v_{j}$, - $v_i\sigma_{j}v_{i}=v_{j}\sigma_{i}v_{j}$, - $v_i\tau_{j}v_{i}=v_{j}\tau_{i}v_{j}$, - $\sigma_i\sigma_{j}\tau_{i}=\tau_{j}\sigma_{i}\sigma_{j}$.
- Commuting relations: $g_{i}h_j=h_{j}g_i$ for $\mid i-j\mid\ge 2$, where $g_i\in \{\sigma_i, \tau_i, v_i \}$ and $h_j\in \{\sigma_j, \tau_j, v_j \}$. *Remark 28*. Let $n\geq 3$. From [@CG Proposition 3.1] and [@CY Theorem 4] we conclude that the groups $B_n$, $VB_n$ and $SG_n$ are contained in the virtual singular braid group $VSG_n$. Finally, we obtain the following result about subgroup separability for virtual braid groups. **Theorem 29**. *Let $n\geq 2$.* 1. *The virtual braid group $VB_n$ and its pure subgroup $VP_n$ are LERF if and only if $n=2$.* 2. *The virtual singular braid group $VSG_n$ and its pure subgroup $VSPG_n$ are LERF if and only if $n=2$.* *Proof.* Let $n\geq 2$. 1. First we consider the case of few strings. We note that there are decompositions of the virtual (pure) braid groups with few strings as follows: - $VB_2=\ensuremath{\mathbb Z}\ast \ensuremath{\mathbb Z}_2$ (this is obvious from its presentation), and - $VP_3=\overline{P_4}\ast \ensuremath{\mathbb Z}$ (see [@SW Lemma 2.4]), where $\overline{P_4}$ is isomorphic to the classical pure braid group $P_4$ modulo its center. Hence $VP_2$ and $VB_2$ are LERF from Theorem [Theorem 6](#thm:lerf2){reference-type="ref" reference="thm:lerf2"} and Theorem [Theorem 2](#thm:lerf1){reference-type="ref" reference="thm:lerf1"}. On the other hand, $VP_3$ is not LERF since $\overline{P_4}$ is not (see [@DM Paragraph before Corollary 1.6]). Now we consider the case $n\geq 4$. It is well known that, for $n\geq 2$, $B_n$ is a subgroup of $VB_n$ (see [@Kam] and also [@CG Proposition 3.1]). Since $B_n$ is not LERF for $n\geq 4$ [@DM Corollary 1.6], $VB_n$ and so $VP_n$ are not LERF for $n\geq 4$. 2. This item holds for $n=2$ from Theorem [Theorem 6](#thm:lerf2){reference-type="ref" reference="thm:lerf2"} and the fact that $VSG_2=\ensuremath{\mathbb Z}^2\ast \ensuremath{\mathbb Z}_2$ (this is clear from Definition [Definition 27](#defn:vsgn){reference-type="ref" reference="defn:vsgn"}).
For $n\geq 3$ it follows from Remark [Remark 28](#rem:embedding){reference-type="ref" reference="rem:embedding"} and the first item of this theorem.  ◻ *Remark 30*. Let $n\geq 2$. Since the Artin braid group $B_n$ is a subgroup of the virtual braid group $VB_n$ and of the virtual singular braid group, see [@Kam], [@CG Proposition 3.1] and [@O Remark 4], we may also conclude that the occurrence problem is unsolvable for $VB_n$ and $VSG_n$ when $n\geq 5$. A. M. Akimenkov, Subgroups of the braid group $B_4$, *Math. Notes Acad. Sci. USSR* **50** (6), (1991), 1211--1218. K. Almeida and I. Lima, Subgroup separability of Artin groups, *J. Algebra* **583** (2021), 25--37. E. Artin, Theorie der Zöpfe, *Abh. Math. Sem. Univ. Hamburg* **4** (1925), 47--72. E. Artin, Theory of braids, *Ann. Math.* **48** (1947), 101--126. J. Baez, Link invariants of finite type and perturbation theory, *Lett. Math. Phys.* **26** (1992), 43--51. V. G. Bardakov and P. Bellingeri, Combinatorial properties of virtual braids, *Topology and its Applications* **156**, 6 (2009), 1071--1082. J. S. Birman, On braid groups, *Comm. Pure Appl. Math.* **22** (1969), 41--72. R. G. Burns, A note on free groups, *Proc. Amer. Math. Soc.* **23** (1969), 14--17. R. G. Burns, On finitely generated subgroups of free products, *J. Aust. Math. Soc.* **12** (3) (1971), 358--364. J. Van Buskirk, Braid groups of compact 2-manifolds with elements of finite order, *Trans. Amer. Math. Soc.* **122** (1966), 81--97. R. Burns, A. Karrass and D. Solitar, A note on groups with separable finitely generated subgroups, *Bull. Austral. Math. Soc.* **36** (1) (1987), 153--160. C. Caprau, A. Pena and S. McGahan, Virtual singular braids and links, *Manuscripta Math.* **151** (1) (2016), 147--175. C. Caprau and A. Yeung, Algebraic structures among virtual singular braids, (2022) arXiv:2201.09187v1. B. A. Cisneros de la Cruz and G.
Gandolfi, Algebraic, combinatorial and topological properties of singular virtual braid monoids, *J. Knot Theory Ramifications* **28**, 10 (2019), 1950069. O. T. Dasbach and B. S. Mangum, The automorphism group of a free group is not subgroup separable, Knots, braids, and mapping class groups---papers dedicated to Joan S. Birman (New York, 1998), AMS/IP Stud. Adv. Math., vol. 24, Amer. Math. Soc., Providence, RI, 2001, pp. 23--27. E. Fadell, Homotopy groups of configuration spaces and the string problem of Dirac, *Duke Math. J.* **29** (1962), 231--242. E. Fadell and J. Van Buskirk, The braid groups of $E^2$ and $S^2$, *Duke Math. J.* **29** (1962), 243--257. E. Fadell and L. Neuwirth, Configuration spaces, *Math. Scand.* **10** (1962), 111--118. S. Friedl and H. Wilton, The membership problem for 3--manifold groups is solvable, *Algebraic and Geometric Topology* **16** (4) (2016), 1827--1850. R. H. Fox and L. Neuwirth, The braid groups, *Math. Scand.* **10** (1962), 119--126. D. L. Gonçalves and J. Guaschi, Braid groups of non-orientable surfaces and the Fadell-Neuwirth short exact sequence, *J. Pure Appl. Algebra* **214** (2010), 667--677. K. W. Gruenberg, Residual properties of infinite soluble groups, *Proc. London Math. Soc.* (3) **7** (1957), 29--62. J. Guaschi and C. De Miranda E Pereiro, Lower central and derived series of semi-direct products, and applications to surface braid groups, *J. Pure Appl. Algebra* **224** (2020), 106309. J. Guaschi and D. Juan-Pineda, A survey of surface braid groups and the lower algebraic K-theory of their group rings, from: "Handbook of group actions, Vol. II", (L Ji, A Papadopoulos, S-T Yau, editors), Adv. Lect. Math. 32, Int. Press, Somerville, MA (2015), 23--75. M. Hall, Jr., Coset representations in free groups, *Trans. Amer. Math. Soc.* **67** (1949), 421--432. M. Hall, A topology for free groups and related groups, *Ann. of Math.* **52** (1950), 127--139. D. L. Johnson, Presentation of groups, London Math. Soc.
Lecture Notes **22**, Cambridge University Press, 1976. S. Kamada, Invariants of virtual braids and a remark on left stabilizations and virtual exchange moves, *Kobe J. Math.* **21** (2004) 33--49. S. Kamada, Braid presentation of virtual knots and welded knots, *Osaka J. Math.* **44**, 2 (2007), 441--458. L. H. Kauffman, Virtual knot theory, *Eur. J. Comb.* **20**, 7 (1999), 663--690. D.  Long and A.  Reid, Subgroup separability and virtual retractions of groups, *Topology* **47** (2008), no. 3, 137--159. A. I. Mal'cev, On homomorphisms onto finite groups, *Ivan. Gos. Ped. Inst.*, **18** (1958), 49--60 (Russian). English transl: On homomorphisms onto finite groups, *Amer. Math. Soc. Translations*, Series 2, **119** (1983), 67--79. W. Magnus, Das Identitätsproblem für Gruppen mit einer definierenden Relation, *Math. Ann.* **106** (1932), 295--307. T. A. Makanina, The occurrence problem in the braid group $B_{n+1}$ with $n+1\geq 5$, *Mat. Zametki* **29** (1981), 31--33. K. A. Mikhailova, The occurrence problem for free products of groups, *Mat. Sb.* **75** (1968), 199--210. A.  Minasyan, Virtual Retraction Properties in Groups, *International Mathematics Research Notices*, **2021**(17) (2019), 13434--13477. K. Murasugi and B. I. Kurpita, A study of braids, Mathematics and its Applications **484**, Kluwer Academic Publishers, Dordrecht, 1999. O. Ocampo, On virtual singular braid groups (2022), [arXiv:2207.13885](arXiv:2207.13885). L. Paris and D. Rolfsen, Geometric subgroups of surface braid groups, *Ann. Inst. Fourier* **49** (1999), 417--472. G. P. Scott, Braid groups and the group of homeomorphisms of a surface, *Proc. Cambridge Phil. Soc.* **68** (1970) 605--617. P.  Scott, Subgroups of Surface Groups are Almost Geometric, *Journal of the London Mathematical Society*, **s2-17**(3) (1978), 555-565. J. Stillwell, The occurrence problem for mapping class groups, *Proc. Amer. Math. Soc.* **101** (1987), no. 3, 411--416. A. I. Suciu and H. 
Wang, Pure virtual braids, resonance, and formality, *Math. Z.* **286** (2017), no. 3-4, 1495--1524. V. V. Vershinin, On homology of virtual braids and Burau representation, *J. Knot Theory Ramifications* **10**, 5 (2001), 795--812. O. Zariski, The topological discriminant group of a Riemann surface of genus $p$, *Amer. J. Math.* **59** (1937), 335--358.
{ "id": "2310.00478", "title": "Subgroup separability of surface braid groups and virtual braid groups", "authors": "Kisnney Almeida, Igor Lima, Oscar Ocampo", "categories": "math.GR math.GT", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | Let $\mathfrak{g}$ be an affine Lie algebra with associated Yangian $Y_\hbar(\mathfrak{g})$. We prove the existence of two meromorphic $R$--matrices associated to any pair of representations of $Y_\hbar(\mathfrak{g})$ in the category $\mathcal{O}$. They are related by a unitary constraint and constructed as products of the form $\mathcal{R}^{\uparrow/\downarrow}(s)=\mathcal{R}^+(s)\cdot\mathcal{R}^{0,\uparrow/\downarrow}(s)\cdot\mathcal{R}^-(s)$, where $\mathcal{R}^+(s) = \mathcal{R}^-_{21}(-s)^{-1}$. The factors $\mathcal{R}^{0,\uparrow/\downarrow}(s)$ are meromorphic, abelian $R$--matrices, with a WKB--type singularity in $\hbar$, and $\mathcal{R}^-(s)$ is a rational twist. Our proof relies on two novel ingredients. The first is an irregular, abelian, additive difference equation whose difference operator is given in terms of the $q$--Cartan matrix of $\mathfrak{g}$. The regularisation of this difference equation gives rise to $\mathcal{R}^{0,\uparrow/\downarrow}(s)$ as the exponentials of the two canonical fundamental solutions. The second key ingredient is a higher order analogue of the adjoint action of the affine Cartan subalgebra $\mathfrak{h}\subset\mathfrak{g}$ on $Y_\hbar(\mathfrak{g})$. This action has no classical counterpart, and produces a system of linear equations from which $\mathcal{R}^-(s)$ is recovered as the unique solution. Moreover, we show that both $\mathcal{R}^{\uparrow/\downarrow}(s)$ give rise to the same rational $R$--matrix on the tensor product of any two highest--weight representations.
address: - Dipartimento di Scienze Matematiche, Fisiche e Informatiche, Università di Parma, and INFN Gruppo Collegato di Parma, 43124 Parma, Italy - Department of Mathematics, The Ohio State University, Columbus, OH 43210, USA - Department of Mathematics and Statistics, University of Saskatchewan, Saskatoon, SK S7N 5E6, Canada author: - Andrea Appel - Sachin Gautam - Curtis Wendlandt title: The R-matrix of the affine Yangian --- # Introduction {#sec:Intro} ## {#opening} Let $\mathfrak{g}$ be an affine Kac-Moody algebra. In contrast to its counterpart for a finite--dimensional, semisimple Lie algebra, the affine Yangian $Y_\hbar(\mathfrak{g})$ is not known to have a universal $R$--matrix.
In particular, it is not known whether an arbitrary representation $V$ of $Y_\hbar(\mathfrak{g})$ gives rise to a solution of the quantum Yang-Baxter equation [\[qybe\]](#qybe){reference-type="eqref" reference="qybe"} with additive spectral parameter. A notable exception arises when $\mathfrak{g}$ is simply--laced, and $V$ is the equivariant cohomology of a quiver variety for the underlying affine Dynkin diagram. In this setting, a rational solution of the [\[qybe\]](#qybe){reference-type="eqref" reference="qybe"} corresponding to $V$ has been constructed by Maulik and Okounkov in [@maulik-okounkov-qgqc] using stable envelopes. The main goal of this paper is to construct solutions $\mathcal{R}(s)$ of the [\[qybe\]](#qybe){reference-type="eqref" reference="qybe"} for an arbitrary affine Yangian $Y_\hbar(\mathfrak{g})$, with the exception of those of types $\mathsf{A}_1^{(1)}$ and $\mathsf{A}_2^{(2)}$, and a representation $V$ whose restriction to $\mathfrak{g}$ lies in category $\mathcal{O}$. Our solutions are *meromorphic* with respect to the spectral parameter $s$, natural with respect to the underlying representations, and compatible with the tensor product. We also show that, on representations generated by a highest--weight vector, they can be normalized so as to be rational in $s$, and conjecture that they then coincide with those constructed in [@maulik-okounkov-qgqc] when $\mathfrak{g}$ is simply-laced and $V$ arises from geometry. ## {#ssec:approach} Our approach builds upon the recent works of the last two authors with Valerio Toledano Laredo [@sachin-valerio-III; @GTLW], where meromorphic braidings were constructed on the category of finite--dimensional representations of the Yangian of any semisimple Lie algebra over the complex numbers. 
This construction hinged on (1) realising the negative part of $\mathcal{R}(s)$ as an intertwiner between the standard and Drinfeld coproduct [@GTLW] and (2) obtaining the abelian part of $\mathcal{R}(s)$ via methods of resummation [@sachin-valerio-III]. One striking difference that arises while passing from the finite to the affine setting is that the resulting $R$--matrices are no longer regular at $\hbar=0$, and in fact possess an essential singularity there. Remarkably, this singularity has the same form as a WKB expansion [@froman] (see also [@BO Ch. 10]). Indeed, we show that, as a function of $\hbar$, $\mathcal{R}(s)$ has the form $$\label{eq:intro-R-sing} \mathcal{R}(s) = \exp\left(\frac{1}{\hbar}{\mathsf{g}_{\scriptscriptstyle{\mathrm{sing}}}}s^{-1}\right) (1+O(\hbar)),$$ where $\mathsf{g}_{\scriptscriptstyle{\mathrm{sing}}}$ is a $2$--tensor which we determine explicitly (see Theorem [Theorem 9](#thm:R0-main){reference-type="ref" reference="thm:R0-main"} [\[R0:leading\]](#R0:leading){reference-type="eqref" reference="R0:leading"} and the subsequent remark). This equation strongly hints at yet another approach to solving [\[qybe\]](#qybe){reference-type="eqref" reference="qybe"} by varying $\hbar$ in an annular neighbourhood of $0$. We believe that this observation is more than a mere analogy and will return to it in a future work. An *a priori* physical justification for the appearance of the singular term in [\[eq:intro-R-sing\]](#eq:intro-R-sing){reference-type="eqref" reference="eq:intro-R-sing"} may arise from Costello--Witten--Yamazaki's 4--dimensional gauge theory [@CWY1; @CWY2]. Specifically, and up to central elements, the singular term may arise as a renormalization term coming from an affine extension of the theory. We now provide an overview of the previously known results that motivated the present paper.
## {#opening-2} Recall that the quantum Yang--Baxter equation is the following equation for an $\operatorname{End}(V^{\otimes 2})$--valued function (or formal power series) $\EuScript{R}(s)$: $$\label{qybe} \tag{QYBE} \EuScript{R}_{12}(s_1)\EuScript{R}_{13}(s_1+s_2)\EuScript{R}_{23}(s_2) = \EuScript{R}_{23}(s_2)\EuScript{R}_{13}(s_1+s_2)\EuScript{R}_{12}(s_1).$$ This equation is $\operatorname{End}(V^{\otimes 3})$--valued, and the right--hand subscripts indicate on which tensor factors $\EuScript{R}$ acts. Drinfeld's theory of Yangians [@drinfeld-qybe] provides a uniform method for constructing rational solutions of this equation. The Yangian $Y_\hbar(\mathfrak{a})$ of a finite-dimensional simple Lie algebra $\mathfrak{a}$ was introduced in [@drinfeld-qybe]. It is the canonical Hopf algebra deformation of the current algebra $U(\mathfrak{a}[z])$, and gives rise to rational solutions of [\[qybe\]](#qybe){reference-type="eqref" reference="qybe"} on its irreducible, finite--dimensional representations. To state Drinfeld's results more precisely, let $\Delta:Y_\hbar(\mathfrak{a})\to Y_\hbar(\mathfrak{a})^{\otimes 2}$ be the coproduct and $\tau_s\in\operatorname{Aut}(Y_\hbar(\mathfrak{a}))$ the one--parameter group of Hopf algebra automorphisms, quantizing the shift automorphisms $z\mapsto z+s$ of $U(\mathfrak{a}[z])$. We abbreviate $\Delta_s = \tau_s\otimes\operatorname{Id}\circ\Delta$ and $\Delta^{\operatorname{op}}_s = \tau_s\otimes\operatorname{Id}\circ\Delta^{\operatorname{op}}$. Then, by [@drinfeld-qybe Thms. 
3], there exists a unique formal series $$\mathcal{R}^{\scriptscriptstyle{\mathrm{D}}}(s) \in 1^{\otimes 2} + s^{-1}Y_\hbar(\mathfrak{a})^{\otimes 2}[\negthinspace[s^{-1}]\negthinspace]$$ satisfying the intertwining equation $$\Delta^{\operatorname{op}}_s(x) = \mathcal{R}^{\scriptscriptstyle{\mathrm{D}}}(s)\Delta_s(x)\mathcal{R}^{\scriptscriptstyle{\mathrm{D}}}(s)^{-1} \quad \forall\quad x\in Y_\hbar(\mathfrak{a}),$$ in addition to the cabling identities $$\begin{aligned} \Delta\otimes\operatorname{Id}(\mathcal{R}^{\scriptscriptstyle{\mathrm{D}}}(s)) &= \mathcal{R}^{\scriptscriptstyle{\mathrm{D}}}_{13}(s)\mathcal{R}^{\scriptscriptstyle{\mathrm{D}}}_{23}(s),\\ \operatorname{Id}\otimes\Delta (\mathcal{R}^{\scriptscriptstyle{\mathrm{D}}}(s)) &= \mathcal{R}^{\scriptscriptstyle{\mathrm{D}}}_{13}(s)\mathcal{R}^{\scriptscriptstyle{\mathrm{D}}}_{12}(s). \end{aligned}$$ These readily imply that $\mathcal{R}^{\scriptscriptstyle{\mathrm{D}}}(s)$ is a solution to [\[qybe\]](#qybe){reference-type="eqref" reference="qybe"} on $Y_\hbar(\mathfrak{a})^{\otimes 3}$. Let $V_1,V_2$ be two finite--dimensional irreducible representations of $Y_\hbar(\mathfrak{a})$, and $\mathcal{R}^{\scriptscriptstyle{\mathrm{D}}}_{V_1,V_2}(s)$ the evaluation of $\mathcal{R}^{\scriptscriptstyle{\mathrm{D}}}(s)$ on $V_1\otimes V_2$. Drinfeld also showed that, upon normalizing $\mathcal{R}^{\scriptscriptstyle{\mathrm{D}}}_{V_1,V_2}(s)$ by its eigenvalue on the tensor product of highest--weight spaces, its dependence on $s$ becomes rational [@drinfeld-qybe Thm. 4]. Hence, in particular, one obtains rational solutions to [\[qybe\]](#qybe){reference-type="eqref" reference="qybe"} valued in any finite--dimensional, irreducible representation $V$ of $Y_\hbar(\mathfrak{a})$. ## {#YKM} In [@drinfeld-yangian-qaffine], a different presentation of $Y_\hbar(\mathfrak{a})$, called Drinfeld's new, or loop, presentation, is given. 
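To make the preceding discussion concrete, recall the prototypical rational solution of [\[qybe\]](#qybe){reference-type="eqref" reference="qybe"}: Yang's $R$--matrix $\EuScript{R}(s)=\operatorname{Id}+s^{-1}P$ on $\mathbb{C}^2\otimes\mathbb{C}^2$, where $P$ is the flip of the two tensor factors; up to normalization, it arises from the evaluation representation of $Y_\hbar(\mathfrak{sl}_2)$. The following sketch is a standard numerical sanity check of [\[qybe\]](#qybe){reference-type="eqref" reference="qybe"} and of unitarity for this solution; it is an illustration only, not part of the constructions of this paper.

```python
import numpy as np

# Flip operator P(v ⊗ w) = w ⊗ v on C^2 ⊗ C^2, in the lexicographic basis e_i ⊗ e_j.
P = np.zeros((4, 4))
for i in range(2):
    for j in range(2):
        P[2 * i + j, 2 * j + i] = 1.0

I2 = np.eye(2)

def R(s):
    """Yang's R-matrix, Id + P/s."""
    return np.eye(4) + P / s

# Embeddings into End(C^2 ⊗ C^2 ⊗ C^2): subscripts mark the tensor factors acted on.
def R12(s):
    return np.kron(R(s), I2)

def R23(s):
    return np.kron(I2, R(s))

SWAP23 = np.kron(I2, P)  # flip of factors 2 and 3

def R13(s):
    return SWAP23 @ R12(s) @ SWAP23  # conjugate the (1,2)-action into a (1,3)-action

# Quantum Yang-Baxter equation at generic numerical values of s1, s2
s1, s2 = 0.7, 1.3
lhs = R12(s1) @ R13(s1 + s2) @ R23(s2)
rhs = R23(s2) @ R13(s1 + s2) @ R12(s1)
assert np.allclose(lhs, rhs)
```

One also checks directly that $R(s)\,P R(-s) P = (1-s^{-2})\operatorname{Id}$, the scalar-valued unitarity relation for this solution.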
This presentation takes as input only the Cartan matrix of $\mathfrak{a}$, and can be used to define the Yangian $Y_\hbar(\mathfrak{g}_{\scriptstyle{\mathsf{KM}}})$ of any symmetrizable Kac--Moody algebra $\mathfrak{g}_{\scriptstyle{\mathsf{KM}}}$, only as an algebra. In light of Drinfeld's foundational results, it is natural to ask whether [\[qybe\]](#qybe){reference-type="eqref" reference="qybe"} can be solved for a suitable category of representations of $Y_\hbar(\mathfrak{g}_{\scriptstyle{\mathsf{KM}}})$, in a functorial manner. However, the following two obstacles present themselves almost immediately. 1. [\[it:O2\]]{#it:O2 label="it:O2"} The universal $R$--matrix $\mathcal{R}^{\scriptscriptstyle{\mathrm{D}}}(s)$ is only known to exist in finite types. Moreover, our results suggest, *a posteriori*, that there is no such object in the affine setting, without modifying the definition of the affine Yangian, cf.  [\[eq:intro-R-sing\]](#eq:intro-R-sing){reference-type="eqref" reference="eq:intro-R-sing"}. 2. [\[it:O1\]]{#it:O1 label="it:O1"} The coproduct $\Delta$ has only been defined in the case where $\mathfrak{g}_{\scriptstyle{\mathsf{KM}}}$ is of finite or affine type [@guay-nakajima-wendlandt].[^1] ## {#we-do} We answer in the affirmative the question raised in the previous paragraph for the Yangian $Y_\hbar(\mathfrak{g})$ associated to any affine Lie algebra[^2] which is not of type $\mathsf{A}_1^{(1)}$ or $\mathsf{A}_2^{(2)}$. We consider the category $\mathcal{O}(Y_{\hbar}(\mathfrak{g}))$, consisting of the representations of $Y_\hbar(\mathfrak{g})$ whose restriction to $\mathfrak{g}$ lies in category $\mathcal{O}$. We prove that there are two meromorphic braidings on $\mathcal{O}(Y_{\hbar}(\mathfrak{g}))$, related by a unitarity relation (see Theorem [Theorem 1](#thm:main){reference-type="ref" reference="thm:main"} (5) below). 
These give rise to rational solutions of [\[qybe\]](#qybe){reference-type="eqref" reference="qybe"} on tensor products of highest--weight representations. The construction of the corresponding meromorphic $R$--matrices relies on a technique which we refer to as the *abelianization method* (see Section [2.4](#ssec:abel){reference-type="ref" reference="ssec:abel"} for a precise description). Roughly, this amounts to building $\mathcal{R}(s)$ from the three components of its Gaussian decomposition: $$\label{gauss} \mathcal{R}(s) = \mathcal{R}^+(s)\cdot \mathcal{R}^0(s)\cdot \mathcal{R}^-(s)\ .$$ Note that, in the case of $Y_\hbar(\mathfrak{a})$, such a factorization first appeared, as a conjecture, in the work [@khoroshkin-tolstoy] of Khoroshkin and Tolstoy. We refer the reader to the works of the third author [@WDYhg; @WRQD] for the precise statement and proof of this conjecture, and its implications for $\mathcal{R}^{\scriptscriptstyle{\mathrm{D}}}(s)$. Since in [\[gauss\]](#gauss){reference-type="eqref" reference="gauss"} we further have $\mathcal{R}^+(s) = \mathcal{R}^-_{21}(-s)^{-1}$, this equation can be viewed as an abelianization of $\mathcal{R}(s)$ via a (rational) twist. ## {#lit-surv} We conclude with a brief review of the vast literature on affine Yangians. With the exception of the aforementioned work of Maulik--Okounkov [@maulik-okounkov-qgqc], there are no results predicting the existence of $R$--matrices associated to representations of affine Yangians in the generality considered in the present paper. We first recall that, in the case of $\widehat{\mathfrak{sl}}_2$, the affine Yangian originally appeared in [@boyarchenko-levendorskii]. However, the definition provided in *loc. cit.* is now considered incomplete, see [@Kod19 Rem. 5.2]. The currently accepted definition is given in [@Tsy17b; @Tsy17], see also [@BerTsy19]. The available literature on affine Yangians has significantly expanded, in particular along the following two research directions. 
The first one aims at the construction of new representations in terms of Schur--Weyl type dualities, *e.g.,* [@Gu07], vertex operators, *e.g.,* [@GRWvrep; @Kod19], or $W$--algebras, *e.g.,* [@Ueda-Kodera-22; @Ueda22a]. The second one, instead, stems from the theory of cohomological Hall algebras, *e.g.,* [@schiffmann2017cohomological; @SV1; @YaGuPBW] (see also [@VaVa98]). The theory of affine Yangians is closely related to quantum toroidal algebras [@BerTsy19; @sachin-valerio-1; @sachin-valerio-2; @Tsy17b]. The latter have been studied extensively in the $\mathfrak{gl}_n$ case, see *e.g.,* [@FJMM16; @feigin2023commutative] and references therein, and in other types, see *e.g.,* [@garbali-negut; @negut], in relation to shuffle algebras. The category $\mathcal{O}$ for quantum toroidal algebras, and more generally for quantum affinizations, was introduced, and its simple objects classified, by Hernandez [@Her05]. In particular, a formula for the abelian $R$--matrix $\EuScript{R}^0(\zeta)$ is given in [@Her05 Sec. 5.4]. This is a purely formal object, analogous to the one obtained by Damiani [@damiani] in finite type, and it is a fundamental ingredient in the construction of $q$--characters. It would be interesting to understand its functional nature and compare it to the abelian $R$--matrix $\mathcal{R}^0(s)$, relying on the functor defined in [@sachin-valerio-2], in the same spirit as [@sachin-valerio-III Thm. 9.6]. # Main results {#sec:main-results} This section contains the main theorems of this paper. We also explain the technique employed to prove them, *i.e.*, the abelianization method. ## {#section} Let $\mathfrak{g}$ be a Kac--Moody algebra of affine type, with the exception of $\mathsf{A}_1^{(1)}$ and $\mathsf{A}_2^{(2)}$, $\hbar\in\mathbb{C}^{\times}$ and let $Y_\hbar(\mathfrak{g})$ be the Yangian associated to $\mathfrak{g}$ (see Section [3.2](#ssec: yangian){reference-type="ref" reference="ssec: yangian"} for its definition). 
In [@guay-nakajima-wendlandt], Guay, Nakajima, and the third author define an algebra homomorphism, called the *twisted coproduct* $\Delta^z:Y_\hbar(\mathfrak{g})\to Y_\hbar(\mathfrak{g})^{\otimes 2}[z;z^{-1}]\!]$ (see Section [3.9](#ssec:pre-cop){reference-type="ref" reference="ssec:pre-cop"}). Evaluating $\Delta^z$ at $z=1$ leads to terms with infinitely many tensors, which must be carefully interpreted as defining a topological coproduct. Nevertheless, this yields a well--defined action of $Y_\hbar(\mathfrak{g})$ on any tensor product of modules in category $\mathcal{O}$, *i.e.*, $Y_\hbar(\mathfrak{g})$--modules whose restriction to $\mathfrak{g}\subset Y_\hbar(\mathfrak{g})$ lies in category $\mathcal{O}$ (see Section [4.1](#ssec:cat-O){reference-type="ref" reference="ssec:cat-O"}). We denote this category by $\mathcal{O}(Y_{\hbar}(\mathfrak{g}))$, and the tensor product of $V_1,V_2\in\mathcal{O}(Y_{\hbar}(\mathfrak{g}))$ by $V_1\underset{\scriptscriptstyle{\operatorname{ KM},0}}{\otimes} V_2$. Let $\mathbb{C}\ni s\mapsto \tau_s\in\operatorname{Aut}(Y_\hbar(\mathfrak{g}))$ be the one--parameter group of bialgebra automorphisms (see Section [3.6](#ssec: shift-yangian){reference-type="ref" reference="ssec: shift-yangian"}). For $V\in\mathcal{O}(Y_{\hbar}(\mathfrak{g}))$, set $V(s)=\tau_s^*(V)$, and for $V_1,V_2\in\mathcal{O}(Y_{\hbar}(\mathfrak{g}))$ define: $$V_1\underset{\scriptscriptstyle{\operatorname{ KM},s}}{\otimes} V_2 =V_1(s)\underset{\scriptscriptstyle{\operatorname{ KM},0}}{\otimes} V_2\,.$$ ## Meromorphic $R$--matrices {#ssec:thm1} We are now in position to state the first main theorem of this paper. **Theorem 1**. *Let $V_1,V_2$ be $Y_\hbar(\mathfrak{g})$--modules in category $\mathcal{O}$. Then, there are two meromorphic functions $\mathcal{R}^\eta_{V_1,V_2}\colon\mathbb{C}\to\operatorname{End}(V_1\otimes V_2)$, $\eta\in\{\,\uparrow\,,\, \downarrow\,\}$, with the following properties.* 1. 
*[\[eq:intro-R-fun\]]{#eq:intro-R-fun label="eq:intro-R-fun"} $\mathcal{R}^{\eta}_{V_1,V_2}(s)$ is holomorphic on a half--plane $\varepsilon(\eta)\operatorname{Re}(s/\hbar)\gg 0$ and approaches $\operatorname{Id}_{V_1\otimes V_2}$ as $\varepsilon(\eta)\operatorname{Re}(s/\hbar)\to\infty$. Here, $\varepsilon(\uparrow)=+1$ and $\varepsilon(\downarrow)=-1$.* 2. *[\[eq:intro-R-inter\]]{#eq:intro-R-inter label="eq:intro-R-inter"} The following is a $Y_\hbar(\mathfrak{g})$--intertwiner which is natural in $V_1$ and $V_2$: $$\label{eq:intro-mero-braid} (1\,2)\circ\mathcal{R}^\eta_{V_1,V_2}(s)\,\colon\,V_1\underset{\scriptscriptstyle{\operatorname{ KM},s}}{\otimes} V_2\to (V_2\underset{\scriptscriptstyle{\operatorname{ KM},-s}}{\otimes} V_1)(s)\,.$$* 3. *[\[eq:intro-R-QYBE\]]{#eq:intro-R-QYBE label="eq:intro-R-QYBE"} For $V_1,V_2,V_3\in\mathcal{O}(Y_{\hbar}(\mathfrak{g}))$, the following cabling identities hold. $$\begin{aligned} \mathcal{R}_{V_1\underset{\scriptscriptstyle{\operatorname{ KM},s_1}}{\otimes} V_2, V_3}^{\eta}(s_2) &= \mathcal{R}_{V_1,V_3}^{\eta}(s_1+s_2)\cdot\mathcal{R}_{V_2,V_3}^{\eta}(s_2)\,,\\ \mathcal{R}_{V_1,V_2\underset{\scriptscriptstyle{\operatorname{ KM},s_2}}{\otimes}V_3}^{\eta}(s_1+s_2) &= \mathcal{R}_{V_1,V_3}^{\eta}(s_1+s_2)\cdot\mathcal{R}_{V_1,V_2}^{\eta}(s_1)\,. \end{aligned}$$* *In particular, the quantum Yang--Baxter equation holds on $V_1\otimes V_2\otimes V_3$: $$\begin{aligned} \mathcal{R}_{V_1,V_2}^{\eta}(s_1)\cdot \mathcal{R}_{V_1,V_3}^{\eta}(s_1+s_2)&\cdot\mathcal{R}_{V_2,V_3}^{\eta}(s_2)=\\ =\mathcal{R}_{V_2,V_3}^{\eta}(s_2)&\cdot \mathcal{R}_{V_1,V_3}^{\eta}(s_1+s_2)\cdot\mathcal{R}_{V_1,V_2}^{\eta}(s_1)\,. \end{aligned}$$* 4. *[\[eq:intro-R-translation\]]{#eq:intro-R-translation label="eq:intro-R-translation"} For any $a,b\in\mathbb{C}$ and $V_1,V_2\in\mathcal{O}(Y_{\hbar}(\mathfrak{g}))$, we have $$\mathcal{R}_{V_1(a),V_2(b)}^{\eta}(s)=\mathcal{R}_{V_1,V_2}^{\eta}(s+a-b)\,.$$* 5. 
*[\[eq:intro-R-unit\]]{#eq:intro-R-unit label="eq:intro-R-unit"} The following unitarity relation holds: $$\mathcal{R}_{V_1,V_2}^{\uparrow}(s)^{-1}=(1\,2)\circ\mathcal{R}_{V_2,V_1}^{\downarrow}(-s)\circ(1\,2)\,.$$* 6. *[\[eq:intro-R-asym\]]{#eq:intro-R-asym label="eq:intro-R-asym"} Both functions $\mathcal{R}^\eta_{V_1,V_2}(s)$, $\eta\in\{\,\uparrow\,,\, \downarrow\,\}$, have the same asymptotic expansion[^3] as $\operatorname{Re}(s/\hbar)\to\varepsilon(\eta)\infty$. This expansion remains valid in a larger sector $\Sigma^\eta_\delta$, for any $\delta>0$, where, if $\theta=\arg(\hbar)$, then $$\Sigma^\uparrow_\delta=\{re^{\iota\phi}:r\in\mathbb{R}_{>0}\ \text{ and } \phi\in (\theta-\pi+\delta,\theta+\pi-\delta)\} = -\Sigma^\downarrow_\delta\ .$$ (see Figure [2](#afig:sector){reference-type="ref" reference="afig:sector"} in Appendix [8](#app:LapDE){reference-type="ref" reference="app:LapDE"}).* **Remarks 1**. 1. One can state this theorem more compactly as the existence of two meromorphic braidings, related by the unitarity relation [\[eq:intro-R-unit\]](#eq:intro-R-unit){reference-type="eqref" reference="eq:intro-R-unit"}, on the meromorphic (in fact, polynomial) tensor category $\left(\mathcal{O}(Y_{\hbar}(\mathfrak{g})),\underset{\scriptscriptstyle{\operatorname{ KM},s}}{\otimes}\right)$. 2. The $1$--jet of the asymptotic expansion of $\mathcal{R}^\eta(s)$ can be obtained by combining Theorems [Theorem 7](#thm:R-){reference-type="ref" reference="thm:R-"} [\[R-(s,z):4\]](#R-(s,z):4){reference-type="eqref" reference="R-(s,z):4"} and [Theorem 9](#thm:R0-main){reference-type="ref" reference="thm:R0-main"} [\[R0:leading\]](#R0:leading){reference-type="eqref" reference="R0:leading"}. ## Rational $R$--matrices {#ssec:rat} Our results also imply the existence of a *rational* $R$--matrix on any tensor product of highest--weight modules, denoted below by $\mathsf{R}(s)$. Namely, we prove the following: **Theorem 2**. 
*Let $V_1,V_2\in\mathcal{O}(Y_{\hbar}(\mathfrak{g}))$ be two representations, generated by highest--weight vectors $\mathsf{v}_1,\mathsf{v}_2$ respectively. Then, $\mathcal{R}^{\uparrow/\downarrow}_{V_1,V_2}(s)$ yield, after normalizing to take value $1$ on $\mathsf{v}_1\otimes\mathsf{v}_2$, the same operator $\mathsf{R}_{V_1,V_2}(s)$, whose matrix entries are rational in $s$. Moreover, $\displaystyle \mathsf{R}_{V_1,V_2}(\infty) = \operatorname{Id}_{V_1\otimes V_2}$.* **Remarks 2**. 1. The factorization of $\mathcal{R}^\eta(s)$ as a product of a scalar--valued meromorphic function and a matrix--valued rational function is known not to be natural with respect to morphisms in $\mathcal{O}(Y_{\hbar}(\mathfrak{g}))$. Indeed, it was shown in [@GTLW Thms. 6.4, 7.3] that there is no rational braiding on the category of finite--dimensional representations of a finite--type Yangian.\ 2. We also want to highlight the fact that the theorem above is more general than [@drinfeld-qybe Thm. 4] for finite types. The latter assumes that the representations are finite--dimensional and irreducible, and its proof is presumably[^4] based on a "generic irreducibility" argument. Our proof of Theorem [Theorem 2](#thm:rat){reference-type="ref" reference="thm:rat"} (see Section [7.2](#ssec:rat-R0){reference-type="ref" reference="ssec:rat-R0"}) uses an explicit calculation of $\operatorname{Ad}(\mathcal{R}^0)(s)$ acting on operators from $Y_\hbar(\mathfrak{g})\otimes Y_\hbar(\mathfrak{g})$. It is valid for highest--weight representations and does not assume integrability or generic cyclicity. ## Abelianization method {#ssec:abel} The strategy employed in [@GTLW] is based on an interplay between the standard tensor product and the *Drinfeld tensor product*. 
The latter was introduced by the second author and Toledano Laredo in [@sachin-valerio-III] and defines a meromorphic tensor structure on $\mathcal{O}(Y_{\hbar}(\mathfrak{g}))$ which depends *rationally* on $s$. Namely, it gives rise to a family of actions of $Y_\hbar(\mathfrak{g})$ on $V_1\otimes V_2$, depending rationally on $s$, denoted by $V_1\underset{\scriptscriptstyle{\operatorname{D},s}}{\otimes} V_2$ (see Theorem [Theorem 5](#thm:GTLW){reference-type="ref" reference="thm:GTLW"}). The abelianization method decouples the construction of $\mathcal{R}(s)$ into two separate, independent problems. Assume the following data are given: 1. [\[prob:R0\]]{#prob:R0 label="prob:R0"} two meromorphic braidings with respect to the Drinfeld tensor product on $\mathcal{O}(Y_{\hbar}(\mathfrak{g}))$, *i.e.*, a natural system of invertible intertwiners $$\label{eq:R0-intro} (1\,2)\circ\mathcal{R}_{V_1,V_2}^{0,\uparrow/\downarrow}(s): V_1\underset{\scriptscriptstyle{\operatorname{D},s}}{\otimes}V_2\to (V_2\underset{\scriptscriptstyle{\operatorname{D},-s}}{\otimes}V_1)(s)\,,$$ for any $V_1,V_2\in\mathcal{O}(Y_{\hbar}(\mathfrak{g}))$, satisfying the analogue of Theorems [Theorem 1](#thm:main){reference-type="ref" reference="thm:main"} and [Theorem 2](#thm:rat){reference-type="ref" reference="thm:rat"} with respect to the Drinfeld tensor product; 2. 
[\[prob:Rm\]]{#prob:Rm label="prob:Rm"} a meromorphic twist intertwining the standard tensor product and the Drinfeld tensor product on $\mathcal{O}(Y_{\hbar}(\mathfrak{g}))$, *i.e.*, a natural system of invertible intertwiners $$\label{eq:Rm-intro} \mathcal{R}^-_{V_1,V_2}(s): V_1\underset{\scriptscriptstyle{\operatorname{ KM},s}}{\otimes}V_2\to V_1\underset{\scriptscriptstyle{\operatorname{D},s}}{\otimes}V_2\,,$$ for any $V_1,V_2\in\mathcal{O}(Y_{\hbar}(\mathfrak{g}))$, such that the cocycle equation $$\label{eq:cocycle-intro} \mathcal{R}^-_{V_1\underset{\scriptscriptstyle{\operatorname{D},s_1}}{\otimes}V_2, V_3}(s_2)\cdot \mathcal{R}^-_{V_1, V_2}(s_1) = \mathcal{R}^-_{V_1, V_2\underset{\scriptscriptstyle{\operatorname{D},s_2}}{\otimes}V_3}(s_1+s_2)\cdot \mathcal{R}^-_{V_2,V_3}(s_2)$$ holds for any $V_1,V_2,V_3\in\mathcal{O}(Y_{\hbar}(\mathfrak{g}))$. We require the matrix entries of $\mathcal{R}^-(s)$ to depend rationally on $s$, and $\mathcal{R}^-(\infty) = \operatorname{Id}$. These conditions ensure that the functional nature of $\mathcal{R}^0(s)$ does not change upon multiplication by $\mathcal{R}^-(s)$ on the right and by $\mathcal{R}^-_{21}(-s)^{-1}$ on the left. 
Then, the isomorphism of $Y_\hbar(\mathfrak{g})$--modules $$\label{eq:R-intro} (1\,2)\circ \mathcal{R}_{V_1,V_2}^{\uparrow/\downarrow}(s): V_1\underset{\scriptscriptstyle{\operatorname{ KM},s}}{\otimes}V_2\to (V_2\underset{\scriptscriptstyle{\operatorname{ KM},-s}}{\otimes}V_1)(s)\,,$$ defined by the commutativity of the following diagram: $$\label{eq:intro-Gauss-diag} \begin{tikzcd} V_1\underset{\scriptscriptstyle{\operatorname{ KM},s}}{\otimes}V_2 \arrow[d, "\mathcal{R}^-_{V_1,V_2}(s)"'] \arrow[rrr, "(1\,2)\circ\mathcal{R}_{V_1,V_2}^{\uparrow/\downarrow}(s)"] & & & (V_2\underset{\scriptscriptstyle{\operatorname{ KM},-s}}{\otimes}V_1)(s)\arrow[d, "\mathcal{R}^-_{V_2,V_1}(-s)"] \\ V_1\underset{\scriptscriptstyle{\operatorname{D},s}}{\otimes}V_2 \arrow[rrr, "(1\,2)\circ\mathcal{R}_{V_1,V_2}^{0,\uparrow/\downarrow}(s)"']&&& (V_2\underset{\scriptscriptstyle{\operatorname{D},-s}}{\otimes}V_1)(s) \end{tikzcd}$$ is readily seen to satisfy the properties [\[eq:intro-R-fun\]](#eq:intro-R-fun){reference-type="ref" reference="eq:intro-R-fun"}--[\[eq:intro-R-asym\]](#eq:intro-R-asym){reference-type="ref" reference="eq:intro-R-asym"} of Theorem [Theorem 1](#thm:main){reference-type="ref" reference="thm:main"}. The details of this calculation can be found in [@GTLW §7.1]. Finally, note that the intertwiner $$\label{eq:Rp-intro} \mathcal{R}^+_{V_1,V_2}(s)=(1\,2)\circ\mathcal{R}^-_{V_2,V_1}(-s)^{-1} \circ(1\,2): V_1\underset{\scriptscriptstyle{\operatorname{D},s}}{\otimes}V_2\to V_1\underset{\scriptscriptstyle{\operatorname{ KM},s}}{\otimes}V_2\,$$ automatically provides a meromorphic twist in the opposite direction. 
Thus, [\[eq:intro-Gauss-diag\]](#eq:intro-Gauss-diag){reference-type="eqref" reference="eq:intro-Gauss-diag"} reads $$\label{eq:R-gauss-intro} \mathcal{R}^{\uparrow/\downarrow}_{V_1,V_2}(s) = \mathcal{R}^+_{V_1,V_2}(s)\circ\mathcal{R}_{V_1,V_2}^{0,\uparrow/\downarrow}(s)\circ \mathcal{R}^-_{V_1,V_2}(s)\,.$$ Problems [\[prob:R0\]](#prob:R0){reference-type="ref" reference="prob:R0"} and [\[prob:Rm\]](#prob:Rm){reference-type="ref" reference="prob:Rm"} are solved in Theorems [Theorem 9](#thm:R0-main){reference-type="ref" reference="thm:R0-main"} and [Theorem 7](#thm:R-){reference-type="ref" reference="thm:R-"}, respectively, as we now spell out. ## Construction of $\mathcal{R}^0(s)$ {#ssec:intro-R0} The following difference equation is crucial to our construction of $\mathcal{R}^0(s)$: $$\label{eq:intro-L0-diff} (\mathsf{p}-\mathsf{p}^{-1})\det(\mathbf{B}(\mathsf{p})) \cdot \Lambda(s) = \mathcal{G}(s),$$ where - $\mathsf{p}$ is the shift operator defined by $\mathsf{p}\cdot f(s) = f(s-\hbar/2)$, - $\mathbf{B}(\mathsf{p})$ is the symmetrized affine $\mathsf{p}$--Cartan matrix (see Section [7.4](#ssec:T-cartan){reference-type="ref" reference="ssec:T-cartan"}), - $\mathcal{G}(s) = \sum_{ij} \mathbf{B}(\mathsf{p})^{\ast}_{ji}\cdot \mathcal{T}_{ij}(s)$, where $\mathbf{B}(\mathsf{p})^\ast$ is the adjoint matrix and $\mathcal{T}_{ij}(s)$ is the element $$\mathcal{T}_{ij}(s) =\hbar^2 \sum_{m\geqslant 1} m!s^{-m-1} \sum_{\begin{subarray}{c} a,b\geqslant 0 \\ a+b=m-1 \end{subarray}} (-1)^a \frac{t_{i,a}}{a!}\otimes \frac{t_{j,b}}{b!}$$ in $\mathrm{Prim}^{\scriptscriptstyle{\mathrm{D}}}(Y_\hbar(\mathfrak{g}))\otimes\mathrm{Prim}^{\scriptscriptstyle{\mathrm{D}}}(Y_\hbar(\mathfrak{g}))[\![s^{-1}]\!]$. 
Here, $\{t_{i,r}\}_{i\in{\mathbf I}, r\in\mathbb{Z}_{\geqslant 0}}\subset Y^{0}_{\hbar}(\mathfrak{g})$ are generators of the commutative subalgebra of $Y_\hbar(\mathfrak{g})$, defined in Section [3.5](#ssec:tir){reference-type="ref" reference="ssec:tir"} below, and $\mathrm{Prim}^{\scriptscriptstyle{\mathrm{D}}}(Y_\hbar(\mathfrak{g}))$ is the linear subspace spanned by them. These elements are primitive with respect to the Drinfeld coproduct. **Remarks 3**. 1. The difference equation [\[eq:intro-L0-diff\]](#eq:intro-L0-diff){reference-type="eqref" reference="eq:intro-L0-diff"} makes sense for any type. In finite type, it reduces to the one considered in [@khoroshkin-tolstoy; @sachin-valerio-III]. Its derivation, carried out in Proposition [Proposition 6](#pr:tau){reference-type="ref" reference="pr:tau"} [\[tau:comm\]](#tau:comm){reference-type="eqref" reference="tau:comm"} and Corollary [Corollary 2](#cor:Rg){reference-type="ref" reference="cor:Rg"}, rests on the intertwining equation for $\mathcal{R}^0(s)$. This is in contrast with the finite--type case, where Khoroshkin--Tolstoy [@khoroshkin-tolstoy] obtained the corresponding difference equation in order to compute the canonical tensor of a non--degenerate pairing. Their construction also uses the fact that, for finite--type Cartan matrices, the determinant of the $q$--symmetrized Cartan matrix divides a $q$--number. The analogue of the non--degenerate pairing for affine Yangians is not known, and the latter statement is false (see the list of determinants given in Appendix [10](#app:QCM){reference-type="ref" reference="app:QCM"}).\ 2. We observe that [\[eq:intro-L0-diff\]](#eq:intro-L0-diff){reference-type="eqref" reference="eq:intro-L0-diff"} is *irregular*, meaning that the difference operator has order of vanishing $3$ at $\mathsf{p}=1$ (Lemma [Lemma 3](#lem:order){reference-type="ref" reference="lem:order"}) and the right--hand side $\mathcal{G}(s)$ is $O(s^{-2})$. 
Thus, there is no solution in $(Y^{0}_{\hbar}(\mathfrak{g})\otimes Y^{0}_{\hbar}(\mathfrak{g}))[\![s^{-1}]\!]$. However, a direct computation shows that the coefficients of $s^{-2}$ and $s^{-3}$ in $\mathcal{G}(s)$ are central (see Lemma [Lemma 4](#lem:Rg-terms){reference-type="ref" reference="lem:Rg-terms"}). As central elements play no role in intertwining equations, the last remark allows us to regularize our difference equation and obtain $\Lambda(s)\in\mathrm{Prim}^{\scriptscriptstyle{\mathrm{D}}}(Y_\hbar(\mathfrak{g}))^{\otimes 2}[\negthinspace[s^{-1}]\negthinspace]$ as its unique formal solution (see Proposition [Proposition 7](#pr:Rl){reference-type="ref" reference="pr:Rl"}). We then prove that $\displaystyle\mathcal{R}^0(s)=\exp(\Lambda(s))$ is an abelian $R$--matrix for $Y_\hbar(\mathfrak{g})$, *i.e.*, it satisfies the same properties from Section [1.1](#opening){reference-type="ref" reference="opening"} with $\tau_s\otimes\mathbf{1}\circ\Delta$ replaced by $\underset{\scriptscriptstyle{{\operatorname{D}},s}}{\Delta}$ (see Theorem [Theorem 9](#thm:R0-main){reference-type="ref" reference="thm:R0-main"}). To determine the functional nature of this formal object, we evaluate [\[eq:intro-L0-diff\]](#eq:intro-L0-diff){reference-type="eqref" reference="eq:intro-L0-diff"} on a tensor product of two representations from $\mathcal{O}(Y_{\hbar}(\mathfrak{g}))$. Focusing on one matrix entry at a time, the problem becomes scalar--valued. In Theorem [Theorem 11](#appA:thm){reference-type="ref" reference="appA:thm"}, we prove the existence and uniqueness of two fundamental solutions to the scalar analogue of [\[eq:intro-L0-diff\]](#eq:intro-L0-diff){reference-type="eqref" reference="eq:intro-L0-diff"}. 
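For orientation, the mechanism producing two fundamental solutions is already visible for the simplest scalar difference equation $f(s+\hbar)-f(s)=g(s)$: when $g$ decays at infinity, the one--sided sum $f^{\uparrow}(s)=-\sum_{n\geqslant 0}g(s+n\hbar)$ telescopes to a solution which is well behaved for $\operatorname{Re}(s)\gg 0$, while summing in the opposite direction produces a second solution, well behaved for $\operatorname{Re}(s)\ll 0$; the two differ by an $\hbar$--periodic function. The following minimal numerical sketch uses the toy data $g(s)=s^{-2}$ and $\hbar=1$ only; it is not the actual operator $(\mathsf{p}-\mathsf{p}^{-1})\det(\mathbf{B}(\mathsf{p}))$ of [\[eq:intro-L0-diff\]](#eq:intro-L0-diff){reference-type="eqref" reference="eq:intro-L0-diff"}.

```python
h = 1.0                       # plays the role of hbar
g = lambda s: 1.0 / s ** 2    # decaying right-hand side, O(s^{-2})
N = 100_000                   # truncation of the one-sided sums

# "up" solution: well behaved for Re(s) >> 0
f_up = lambda s: -sum(g(s + n * h) for n in range(N))

# "down" solution: well behaved for Re(s) << 0
f_down = lambda s: sum(g(s - n * h) for n in range(1, N + 1))

# Both telescoping sums solve the difference equation f(s + h) - f(s) = g(s)
s = 5.0
assert abs(f_up(s + h) - f_up(s) - g(s)) < 1e-8
s = -5.0
assert abs(f_down(s + h) - f_down(s) - g(s)) < 1e-8
```

For these data the up solution is $-\psi'(s)$, where $\psi'$ is the trigamma function, illustrating how a formal $s^{-1}$--series solution is resummed to a function holomorphic on a half--plane.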
Hence, [\[eq:intro-L0-diff\]](#eq:intro-L0-diff){reference-type="eqref" reference="eq:intro-L0-diff"} admits two fundamental solutions $\Lambda_{V_1,V_2}^{\uparrow/\downarrow}(s)$ whose exponentials yield the meromorphic braidings $\mathcal{R}^{0,\uparrow/\downarrow}_{V_1,V_2}(s)$. Moreover, their asymptotic expansions as $\operatorname{Re}(s/\hbar)\to\pm\infty$ coincide and recover the action of $\Lambda(s)$. The rationality of (a normalization of) these operators is shown in Theorem [Theorem 10](#thm:rat-R0){reference-type="ref" reference="thm:rat-R0"}, which, combined with the rational nature of $\mathcal{R}^-(s)$, proves Theorem [Theorem 2](#thm:rat){reference-type="ref" reference="thm:rat"}. ## Construction of $\mathcal{R}^-(s)$ {#ssec:intro-Rm} We now describe the solution to Problem [\[prob:Rm\]](#prob:Rm){reference-type="ref" reference="prob:Rm"}. The rational twist is obtained from the action of a canonical element in a suitable completion of $(Y_\hbar(\mathfrak{g})\otimes Y_\hbar(\mathfrak{g}))[\![s^{-1}]\!]$, which is uniquely, and explicitly, determined by an intertwining equation. To state our result precisely, let $$\upsigma_z:Y_\hbar(\mathfrak{g})\to Y_\hbar(\mathfrak{g})[z,z^{-1}]\quad\mbox{and}\quad \underset{\scriptscriptstyle{{\operatorname{D}},s}}{\Delta}:Y_\hbar(\mathfrak{g})\to(Y_\hbar(\mathfrak{g})\otimes Y_\hbar(\mathfrak{g}))[s;s^{-1}]\!]$$ be, respectively, the principal grading shift in $z$ (see [3.9](#ssec:pre-cop){reference-type="ref" reference="ssec:pre-cop"}) and the *deformed Drinfeld coproduct*. The latter was introduced in [@GTLW Thm. 3.4] and gives rise to the Drinfeld tensor product (see Section [4.4](#ssec:dr-tensor){reference-type="ref" reference="ssec:dr-tensor"}). 
Consider the homomorphisms $$\underset{\scriptscriptstyle{{\operatorname{D}},s\,\,\,}}{\Delta^z} =(\operatorname{Id}\otimes\upsigma_z)\circ\underset{\scriptscriptstyle{{\operatorname{D}},s}}{\Delta} \quad \text{ and }\quad \underset{\scriptscriptstyle{{\operatorname{KM}},s\,\,\,}}{\Delta^z}=(\tau_s\otimes \operatorname{Id})\circ \Delta^z\,.$$ The sought--after $\mathcal{R}^-(s,z)$ is of the following form $$\label{eq:intro-Rm-z} \mathcal{R}^-(s, z)=\sum_{\beta\in\mathsf{Q}_+}\mathcal{R}^-(s)_{\beta} z^{\mathsf{ht}(\beta)}\in(Y^{}_{\hbar}(\mathfrak{g})\otimes Y^{}_{\hbar}(\mathfrak{g}))[\![s^{-1}]\!][\![z]\!]\,,$$ where $\mathsf{ht}$ is the height function. Here, $Y^{\pm}_{\hbar}(\mathfrak{g})$ (resp. $Y^{0}_{\hbar}(\mathfrak{g})$) are the unital subalgebras of $Y_\hbar(\mathfrak{g})$ generated by the Drinfeld generators $\{x_{i,r}^\pm\}_{i\in{\mathbf I}, r\in\mathbb{Z}_{\geqslant 0}}$ (resp. by $\mathfrak{h}$ and the commuting Drinfeld generators $\{t_{i,r}\}_{i\in{\mathbf I}, r\in\mathbb{Z}_{>0}}$) and equipped with the standard grading over the root lattice $\mathsf{Q}$ (see Sections [3.2](#ssec: yangian){reference-type="ref" reference="ssec: yangian"}, [3.5](#ssec:tir){reference-type="ref" reference="ssec:tir"}, and [3.7](#ssec:Q-grade){reference-type="ref" reference="ssec:Q-grade"}). The operator $\mathcal{R}^-(s)$ is assumed to satisfy the following properties. 1. [\[eq:intro-norm\]]{#eq:intro-norm label="eq:intro-norm"} **Normalization:** $\mathcal{R}^-(s)_{0}=1\otimes 1.$ 2. [\[eq:intro-triang\]]{#eq:intro-triang label="eq:intro-triang"} **Triangularity:** for any $\beta\in\mathsf{Q}_+$, $$\label{eq:intro-Rm-norm-triang} \mathcal{R}^-(s)_{\beta}\in(Y^{-}_{\hbar}(\mathfrak{g})_{-\beta}\otimes Y^{+}_{\hbar}(\mathfrak{g})_{\beta})[\![s^{-1}]\!]\,.$$ 3. 
[\[eq:intro-inter\]]{#eq:intro-inter label="eq:intro-inter"} **Intertwining equation:** for any $x\in Y_\hbar(\mathfrak{g})$, $$\mathcal{R}^-(s,z)\,\underset{\scriptscriptstyle{{\operatorname{KM}},s\,\,\,}}{\Delta^z}(x)= \underset{\scriptscriptstyle{{\operatorname{D}},s}}{\Delta}^{\!z}(x)\, \mathcal{R}^-(s,z)\,.$$ The matrix entries of $\mathcal{R}^-(s,z)$ acting on a tensor product of category $\mathcal{O}$ modules become polynomials in $z$. Thus, by evaluating at $z=1$, one obtains an invertible intertwiner between $\underset{\scriptscriptstyle{\operatorname{ KM},s}}{\otimes}$ and $\underset{\scriptscriptstyle{\operatorname{D},s}}{\otimes}$. In finite type, the crucial observation in [@GTLW §4] is that the conditions [\[eq:intro-norm\]](#eq:intro-norm){reference-type="ref" reference="eq:intro-norm"}, [\[eq:intro-triang\]](#eq:intro-triang){reference-type="ref" reference="eq:intro-triang"} and [\[eq:intro-inter\]](#eq:intro-inter){reference-type="ref" reference="eq:intro-inter"} for $x=t_{i,1}$, where $i$ is an arbitrary node of the Dynkin diagram, already have a unique solution. A very general rank $1$ reduction argument, valid for this solution, reduces [\[eq:intro-inter\]](#eq:intro-inter){reference-type="ref" reference="eq:intro-inter"} for arbitrary $x\in Y_\hbar(\mathfrak{g})$ to $x=x_0^{\pm}$ in $Y_{\hbar}(\mathfrak{sl}_{2})$, which has been verified in [@GTLW §4.8]. Motivated by this analogy, we consider the analogue of the *second embedding* of §2.7 in *loc. cit.*: $$\label{eq:intro-second-embedding} \operatorname{T}':\mathfrak{h}'\to Y^{0}_{\hbar}(\mathfrak{g})\qquad\mbox{given by}\qquad \operatorname{T}'(d_ih_i)=t_{i,1}\,,$$ where ${\mathsf D}=(d_i)_{i\in{\mathbf I}}$ is the symmetrizing diagonal matrix associated to $\mathfrak{g}$, and $\mathfrak{h}'\subset\mathfrak{h}$ is the span of the coroots $\{h_i\}_{i\in{\mathbf I}}$ (see Section [3.1](#ssec:Notation){reference-type="ref" reference="ssec:Notation"}). 
This is a natural choice, since the formulae of the standard coproduct are explicit and fairly manageable in this case. It is clear that if $h\in\mathfrak{h}'$ and $\beta\in\mathsf{Q}_+$ are such that $\beta(h)\neq0$, then the intertwining equation [\[eq:intro-inter\]](#eq:intro-inter){reference-type="ref" reference="eq:intro-inter"} for $\operatorname{T}'(h)$ produces an explicit expression of the block $\mathcal{R}^-(s)_{\beta}$ in terms of lower blocks $\mathcal{R}^-(s)_{\gamma}$, $\gamma<\beta$. For Yangians of finite type, this is enough to determine $\mathcal{R}^-(s,z)$ entirely. In our case, however, it fails almost completely, since the imaginary root $\delta$ vanishes on $\mathfrak{h}'$. The resulting system is unable to determine recursively the blocks $\mathcal{R}^-(s)_{n\delta}$, $n\in\mathbb{Z}_{>0}$, and, consequently, any block $\mathcal{R}^-(s)_{\beta}$ with $\beta>n\delta$ for some $n\in\mathbb{Z}_{>0}$. ## Extension of the second embedding We overcome this issue by constructing an extension of the second embedding $$\operatorname{T}:\mathfrak{h}\to Y^{0}_{\hbar}(\mathfrak{g})$$ satisfying the following properties (see Theorem [Theorem 6](#T:map-T){reference-type="ref" reference="T:map-T"}). 1. For any $h\in\mathfrak{h}$ and $i\in{\mathbf I}$, one has $$[\operatorname{T}(h), x_{i,r}^\pm]=\pm\alpha_i(h)x_{i,r+1}^\pm\,.$$ 2. For any $h\in\mathfrak{h}$ and $s\in\mathbb{C}$, one has $$\operatorname{ad}(\tau_s(\operatorname{T}(h)))=\operatorname{ad}(\operatorname{T}(h)+sh)\,.$$ 3. There is a family of translation--invariant elements $\mathrm{Q}_{n\delta}\in Y^{-}_{\hbar}(\mathfrak{g})_{n\delta}\otimes Y^{+}_{\hbar}(\mathfrak{g})_{n\delta}$, $n>0$, such that, for any $h\in\mathfrak{h}$, one has $$\Delta^z(\operatorname{T}(h))=\square\operatorname{T}(h)+\hbar[h\otimes 1, \Omega_z+\mathrm{Q}_z]\,,$$ where $\displaystyle \mathrm{Q}_z=\sum_{n>0}\mathrm{Q}_{n\delta}z^{n\,\mathsf{ht}(\delta)}$. 
Relying on the map $\operatorname{T}$, we reduce the intertwining equation [\[eq:intro-inter\]](#eq:intro-inter){reference-type="ref" reference="eq:intro-inter"} to a system of linear equations. By choosing an element $\rho^\vee\in\mathfrak{h}$ such that $\alpha_i(\rho^\vee)=1$ for all $i\in{\mathbf I}$, we obtain the following recursive expression of the blocks of $\mathcal{R}^-(s,z)$: $$\label{eq:intro-Rm-recur} \begin{aligned} \mathcal{R}^-(s)_\beta = \hbar\sum_{k\geqslant 0} \frac{\operatorname{ad}(\square\operatorname{T}(\rho^\vee))^k}{(s\,\mathsf{ht}(\beta))^{k+1}} \left( \sum_{\alpha\in\Phi_+}\mathsf{ht}(\alpha)\mathcal{R}^-(s)_{\beta-\alpha} (\Omega_\alpha+\mathrm{Q}_\alpha)\right) \,. \end{aligned}$$ By [\[eq:intro-norm\]](#eq:intro-norm){reference-type="ref" reference="eq:intro-norm"}, the blocks are uniquely determined and satisfy [\[eq:intro-triang\]](#eq:intro-triang){reference-type="ref" reference="eq:intro-triang"}. By a rank one reduction argument, we finally prove that $\mathcal{R}^-(s,z)$ also satisfies the intertwining equation [\[eq:intro-inter\]](#eq:intro-inter){reference-type="ref" reference="eq:intro-inter"}. Therefore, it yields an intertwiner $$\mathcal{R}^-_{V_1,V_2}(s):V_1\underset{\scriptscriptstyle{\operatorname{ KM},s}}{\otimes}V_2\to V_1\underset{\scriptscriptstyle{\operatorname{D},s}}{\otimes}V_2\,$$ which is readily seen to depend rationally on $s$, equal $\operatorname{Id}_{V_1\otimes V_2}$ at $s=\infty$, and satisfy the cocycle equation [\[prob:Rm\]](#prob:Rm){reference-type="ref" reference="prob:Rm"}[^5]. ## Outline of the paper We review the definition of $Y_\hbar(\mathfrak{g})$ and the basic properties of its category $\mathcal{O}$ modules in Sections [3](#sec:Y){reference-type="ref" reference="sec:Y"} and [4](#sec:Y-rep){reference-type="ref" reference="sec:Y-rep"}, respectively. 
In Section [5](#sec:Delta-T){reference-type="ref" reference="sec:Delta-T"}, we introduce the transformation $\operatorname{T}:\mathfrak{h}\to Y^{0}_{\hbar}(\mathfrak{g})$ and prove its fundamental properties in Theorem [Theorem 6](#T:map-T){reference-type="ref" reference="T:map-T"}. It is used in Section [6](#sec:neg-R){reference-type="ref" reference="sec:neg-R"} to construct the operator $\mathcal{R}^-(s)$ as a rational twist relating the standard coproduct and the Drinfeld coproduct on category $\mathcal{O}$ modules (see Theorem [Theorem 8](#thm:R^--reps){reference-type="ref" reference="thm:R^--reps"}). In Section [7](#sec:ab-R){reference-type="ref" reference="sec:ab-R"}, we construct the meromorphic braidings $\mathcal{R}^{0,\eta}$ in Theorem [Theorem 9](#thm:R0-main){reference-type="ref" reference="thm:R0-main"} and prove that they both have the same asymptotic expansion given by the formal abelian $R$--matrix $\mathcal{R}^0(s)$. The proof relies on well--known techniques to solve additive difference equations, which we review in Appendix [8](#app:LapDE){reference-type="ref" reference="app:LapDE"} for the reader's convenience. Finally, in Appendices [9](#app:Augmented){reference-type="ref" reference="app:Augmented"} and [10](#app:QCM){reference-type="ref" reference="app:QCM"}, we provide the proofs of Proposition [Proposition 1](#P:full-rank){reference-type="ref" reference="P:full-rank"} and Lemma [Lemma 3](#lem:order){reference-type="ref" reference="lem:order"}, respectively.\ # The affine Yangian $Y_\hbar(\mathfrak{g})$ {#sec:Y} ## Affine Lie algebras {#ssec:Notation} Throughout this paper, we fix a symmetrizable, indecomposable Cartan matrix of *affine type* $\mathbf{A}= (a_{ij})_{i,j\in{\mathbf I}}$ and let $(d_i)_{i\in{\mathbf I}}$ be the associated symmetrizing integers, taken to be positive and relatively prime. As in [@guay-nakajima-wendlandt], we further assume that $\mathbf{A}$ is not of type $\mathsf{A}_1^{(1)}$ or $\mathsf{A}_2^{(2)}$. 
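As a concrete illustration of these standing assumptions (an illustrative script, not part of the mathematical development), the simplest allowed case is type $\mathsf{A}_2^{(1)}$: its Cartan matrix is symmetric (so $d_i=1$), singular, and its kernel is spanned by a vector of positive, relatively prime integers — the marks recalled below.

```python
# Illustration (not from the paper): the Cartan matrix of type A_2^(1),
# an allowed affine type since only A_1^(1) and A_2^(2) are excluded.
A = [[2, -1, -1],
     [-1, 2, -1],
     [-1, -1, 2]]

# A is symmetric, so the symmetrising integers are d_i = 1 for all i.
assert all(A[i][j] == A[j][i] for i in range(3) for j in range(3))

# A is singular: its kernel is spanned by the positive vector a = (1, 1, 1),
# i.e. sum_i a_{ji} a_i = 0 for all j (the "marks" of Kac's Table Aff).
a = [1, 1, 1]
assert all(sum(A[j][i] * a[i] for i in range(3)) == 0 for j in range(3))
print("A_2^(1): symmetric, singular, kernel spanned by", a)
```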
Let $\mathfrak{g}$ be the Kac--Moody Lie algebra associated to $\mathbf{A}$. Recall that $\mathfrak{g}$ is generated, as a Lie algebra, by the following set of generators: - A realization of $\mathbf{A}$ [@kac §1.1]. That is, a $|{\mathbf I}|+1$--dimensional $\mathbb{C}$--vector space $\mathfrak{h}$, together with two linearly independent subsets $\{h_i\}_{i\in{\mathbf I}}\subset\mathfrak{h}$ and $\{\alpha_i\}_{i\in{\mathbf I}}\subset\mathfrak{h}^*$ related by $\alpha_j(h_i) = a_{ij}$ for every $i,j\in{\mathbf I}$. - Raising and lowering operators $\{e_i,f_i\}_{i\in{\mathbf I}}$. These generators are subject to the usual Chevalley--Serre relations: - $\mathfrak{h}$ is abelian. - $[h,e_i]=\alpha_i(h)e_i$ and $[h,f_i]=-\alpha_i(h)f_i$, for every $h\in\mathfrak{h}$ and $i\in{\mathbf I}$. - $[e_i,f_j]=\delta_{ij}h_i$ for each $i,j\in {\mathbf I}$. - $\operatorname{ad}(e_i)^{1-a_{ij}}e_j = 0 = \operatorname{ad}(f_i)^{1-a_{ij}}f_j$ for each $i,j\in {\mathbf I}$ with $i\neq j$. Let $\mathfrak{n}^+$ (resp. $\mathfrak{n}^-$) denote the Lie subalgebra of $\mathfrak{g}$ generated by $\{e_i\}_{i\in{\mathbf I}}$ (resp. $\{f_i\}_{i\in{\mathbf I}}$). We recall some of the important ingredients in the theory of affine Lie algebras. We mostly follow [@kac Ch. 6]. For the list of tables of affine Dynkin diagrams, see [@kac §4.8]. - By [@kac Prop. 4.7], there is a unique ${\mathbf I}$-tuple $(a_i)_{i\in {\mathbf I}}$ of positive, relatively prime integers satisfying $\sum_{i\in{\mathbf I}} a_{ji}a_i = 0$ for all $j\in{\mathbf I}$. These numbers are listed explicitly in [@kac Ch. 4, Table Aff]. Following [@kac Thm. 5.6], we define $\delta\in \mathfrak{h}^\ast$ by $\delta=\sum_{i\in{\mathbf I}} a_i\alpha_i \in\sum_i \mathbb{Z}_{\geqslant 0}\alpha_i$. In particular, this element satisfies $\delta(h_j)=0$ for all $j\in {\mathbf I}$. 
- Using $\mathbf{A}^T={\mathsf D}\mathbf{A}{\mathsf D}^{-1}$, we get that $(a_i^{\vee}=d_ia_i)_{i\in{\mathbf I}}$ are the coefficients of a linear dependence relation among the rows of $\mathbf{A}$. We record the linear relations: $\sum_{i\in{\mathbf I}} a_i^{\vee}a_{ij}=0,\ \forall\ j\in{\mathbf I}$. Let $\EuScript{C}=\sum_i a_i^{\vee} h_i$. Note that $\EuScript{C}$ is central in $\mathfrak{g}$; as in [@kac §6.2], we call it the *canonical central element* of $\mathfrak{g}$. - Let $\mathfrak{h}^\prime\subset\mathfrak{h}$ denote the span of $\{h_i:i\in{\mathbf I}\}$. Let $\Phi$ denote the set of roots of $(\mathfrak{g},\mathfrak{h})$, which comes naturally with the polarization $\Phi=\Phi_+\sqcup \Phi_-$. Let $\mathsf{Q}=\mathbb{Z}\Phi$ be the root lattice, and $\mathsf{Q}_+ =\sum_{\alpha\in\Phi_+} \mathbb{Z}_{\geqslant 0}\alpha$. Set $\mathfrak{g}^\prime= [\mathfrak{g},\mathfrak{g}]$. ## The Yangian $Y_{\hbar}(\mathfrak{g})$ [@drinfeld-yangian-qaffine] {#ssec: yangian} Let $\hbar\in\mathbb{C}^{\times}$. The Yangian $Y_{\hbar}(\mathfrak{g})$ is the unital, associative $\mathbb{C}$--algebra generated by elements $\mathfrak{h}\cup \{x^{\pm}_{i,r},\xi_{i,r}\}_{i\in{\mathbf I}, r\in\mathbb{Z}_{\geqslant 0}}$, subject to the following relations: 1. [\[Y1\]]{#Y1 label="Y1"} $\xi_{i,0}=d_ih_i\in\mathfrak{h}$ for every $i\in{\mathbf I}$. $\mathfrak{h}$ is abelian, and $\displaystyle[h,\xi_{i,r}] = 0 = [\xi_{i,r}, \xi_{j,s}]$ for every $h\in\mathfrak{h}$, $i,j\in{\mathbf I}$ and $r,s\in\mathbb{Z}_{\geqslant 0}$. 2. [\[Y2\]]{#Y2 label="Y2"} For $i,j\in{\mathbf I}$ and $s\in \mathbb{Z}_{\geqslant 0}$: $[\xi_{i,0}, x_{j,s}^{\pm}] = \pm d_ia_{ij} x_{j,s}^{\pm}$. 3. [\[Y3\]]{#Y3 label="Y3"} For $i,j\in{\mathbf I}$ and $r,s\in\mathbb{Z}_{\geqslant 0}$: $$[\xi_{i,r+1}, x^{\pm}_{j,s}] - [\xi_{i,r},x^{\pm}_{j,s+1}] = \pm\hbar\frac{d_ia_{ij}}{2}(\xi_{i,r}x^{\pm}_{j,s} + x^{\pm}_{j,s}\xi_{i,r})\ .$$ 4. 
[\[Y4\]]{#Y4 label="Y4"} For $i,j\in{\mathbf I}$ and $r,s\in \mathbb{Z}_{\geqslant 0}$: $$[x^{\pm}_{i,r+1}, x^{\pm}_{j,s}] - [x^{\pm}_{i,r},x^{\pm}_{j,s+1}]= \pm\hbar\frac{d_ia_{ij}}{2}(x^{\pm}_{i,r}x^{\pm}_{j,s} + x^{\pm}_{j,s}x^{\pm}_{i,r})\ .$$ 5. [\[Y5\]]{#Y5 label="Y5"} For $i,j\in{\mathbf I}$ and $r,s\in \mathbb{Z}_{\geqslant 0}$: $[x^+_{i,r}, x^-_{j,s}] = \delta_{ij} \xi_{i,r+s}$. 6. [\[Y6\]]{#Y6 label="Y6"} Let $i\not= j\in{\mathbf I}$ and set $m = 1-a_{ij}$. For any $r_1,\cdots, r_m, s\in \mathbb{Z}_{\geqslant 0}$: $$\sum_{\pi\in\mathfrak{S}_m} \left[x^{\pm}_{i,r_{\pi(1)}},\left[x^{\pm}_{i,r_{\pi(2)}},\left[\cdots, \left[x^{\pm}_{i,r_{\pi(m)}},x^{\pm}_{j,s}\right]\cdots\right]\right]\right]=0.$$ Note that it follows from this definition that the assignment $$\sqrt{d_i} e_i\mapsto x_{i,0}^+, \quad \sqrt{d_i} f_i\mapsto x_{i,0}^-, \quad d_i h_i \mapsto \xi_{i,0}, \quad h\mapsto h,$$ for all $i\in {\mathbf I}$ and $h\in \mathfrak{h}$, extends to an algebra homomorphism $U(\mathfrak{g})\to Y_{\hbar}(\mathfrak{g})$. We shall frequently work through this homomorphism without further comment. We denote by $Y^{0}_{\hbar}(\mathfrak{g})$ and $Y^{\pm}_{\hbar}(\mathfrak{g})$ the unital subalgebras of $Y_{\hbar}(\mathfrak{g})$ generated by $\mathfrak{h}\cup\{\xi_{i,r}\}_{i\in{\mathbf I}, r\in\mathbb{Z}_{\geqslant 0}}$ and $\{x_{i,r}^{\pm}\}_{i\in{\mathbf I}, r\in\mathbb{Z}_{\geqslant 0}}$, respectively. Let $Y^{\geqslant}_{\hbar}(\mathfrak{g})$ (resp. $Y^{\leqslant}_{\hbar}(\mathfrak{g})$) denote the subalgebras of $Y_{\hbar}(\mathfrak{g})$ generated by $Y^{0}_{\hbar}(\mathfrak{g})$ and $Y^{+}_{\hbar}(\mathfrak{g})$ (resp. $Y^{0}_{\hbar}(\mathfrak{g})$ and $Y^{-}_{\hbar}(\mathfrak{g})$). The $q$--toroidal version of the following theorem was proved by Hernandez in [@hernandez-affinizations Thm. 2]. **Theorem 3**. 
*The multiplication map $Y^{-}_{\hbar}(\mathfrak{g})\otimes Y^{0}_{\hbar}(\mathfrak{g}) \otimes Y^{+}_{\hbar}(\mathfrak{g}) \to Y_{\hbar}(\mathfrak{g})$ is surjective. Moreover, $Y^{0}_{\hbar}(\mathfrak{g})$ (resp. $Y^{\pm}_{\hbar}(\mathfrak{g})$) are algebras generated by $\mathfrak{h}\cup\{\xi_{i,r}\}_{i\in{\mathbf I},r\in\mathbb{Z}_{\geqslant 0}}$ (resp. $\{x_{i,r}^{\pm}\}_{i\in{\mathbf I},r\in\mathbb{Z}_{\geqslant 0}}$) subject to relation [\[Y1\]](#Y1){reference-type="ref" reference="Y1"} (resp. [\[Y4\]](#Y4){reference-type="ref" reference="Y4"} and [\[Y6\]](#Y6){reference-type="ref" reference="Y6"}).* ## Filtration {#ssec:filtration} The Yangian $Y_{\hbar}(\mathfrak{g})$ is a filtered algebra once given the *loop filtration*. That is, let $\deg(y_r)=r$, where $y$ is one of $\xi_i$ or $x_i^{\pm}$. For each $k\geqslant 0$, set $$\mathbf{F}_k(Y_{\hbar}(\mathfrak{g})) =\text{Linear subspace spanned by monomials of degree } \leqslant k\ .$$ The induced filtration on $Y_{\hbar}(\mathfrak{g})^{\otimes 2}$ will again be denoted by $\mathbf{F}_\bullet(Y_{\hbar}(\mathfrak{g})^{\otimes 2})$. ## Formal currents {#ssec:currents} For each $i\in{\mathbf I}$, we define $\xi_i(u),x_i^{\pm}(u)\in Y_{\hbar}(\mathfrak{g})[\negthinspace[u^{-1}]\negthinspace]$ by $$\xi_i(u)=1+\hbar\sum_{r\in\mathbb{Z}_{\geqslant 0}} \xi_{i,r}u^{-r-1} \quad \text{ and }\quad x_i^{\pm}(u) = \hbar\sum_{r\in\mathbb{Z}_{\geqslant 0}} x_{i,r}^{\pm} u^{-r-1}.$$ The relations [\[Y1\]](#Y1){reference-type="ref" reference="Y1"}--[\[Y6\]](#Y6){reference-type="ref" reference="Y6"} can be written in terms of these formal series, see [@sachin-valerio-2 Prop. 3.3]. Namely, it is proven that these relations are equivalent to the following identities: 1. [\[cY1\]]{#cY1 label="cY1"} For any $i\in{\mathbf I}$, $\xi_{i,0}=d_ih_i\in\mathfrak{h}$. For any $i,j\in{\mathbf I}$ and $h,h'\in\mathfrak{h}$, $$[\xi_i(u), \xi_j(v)]=0\qquad\qquad [\xi_i(u),h]=0\qquad\qquad [h,h']=0.$$ 2. 
[\[cY2\]]{#cY2 label="cY2"} For any $i\in{\mathbf I}$, and $h\in\mathfrak{h}$, $\displaystyle[h,x^\pm_i(u)]=\pm\alpha_i(h)x^\pm_i(u)$. 3. [\[cY3\]]{#cY3 label="cY3"} For any $i,j\in {\mathbf I}$, and $a = \hbar d_ia_{ij}/2$ $$(u-v\mp a)\xi_i(u)x_j^{\pm}(v)= (u-v\pm a)x_j^{\pm}(v)\xi_i(u)\mp 2a x_j^{\pm}(u\mp a)\xi_i(u).$$ 4. [\[cY4\]]{#cY4 label="cY4"}For any $i,j\in {\mathbf I}$, and $a = \hbar d_ia_{ij}/2$ $$\begin{gathered} (u-v\mp a) x_i^{\pm}(u)x_j^{\pm}(v)\\ = (u-v\pm a)x_j^{\pm}(v)x_i^{\pm}(u) +\hbar\left([x_{i,0}^{\pm},x_j^{\pm}(v)] - [x_i^{\pm}(u),x_{j,0}^{\pm}]\right).\end{gathered}$$ 5. [\[cY5\]]{#cY5 label="cY5"}For any $i,j\in {\mathbf I}$ $$(u-v)[x_i^+(u),x_j^-(v)]=-\delta_{ij}\hbar\left(\xi_i(u)-\xi_i(v)\right).$$ 6. [\[cY6\]]{#cY6 label="cY6"}For any $i\neq j\in{\mathbf I}$ and $m=1-a_{ij}$ $$\sum_{\pi\in\mathfrak{S}_m} \left[x^{\pm}_i(u_{\pi(1)}),\left[x^{\pm}_i(u_{\pi(2)}),\left[\cdots, \left[x^{\pm}_i(u_{\pi(m)}),x^{\pm}_j(v)\right]\cdots\right]\right]\right]=0.$$ Here the relations [\[cY1\]](#cY1){reference-type="ref" reference="cY1"}--[\[cY5\]](#cY5){reference-type="ref" reference="cY5"} are identities in $Y_{\hbar}(\mathfrak{g})[u,v;u^{-1},v^{-1}]\!]$, while the Serre relation [\[cY6\]](#cY6){reference-type="ref" reference="cY6"} (for a fixed pair $i,j\in {\mathbf I}$ with $i\neq j$) is an equality in the formal series space $Y_{\hbar}(\mathfrak{g})[\![u_1^{-1},\ldots,u_m^{-1},v^{-1}]\!]$, where $m=1-a_{ij}$. 
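To see how a single current identity packages infinitely many mode relations, take [\[cY5\]](#cY5){reference-type="ref" reference="cY5"} with $i=j$: expanding both sides and comparing coefficients yields the boundary conditions $[x^+_{i,r},x^-_{i,0}]=\xi_{i,r}=[x^+_{i,0},x^-_{i,r}]$ together with the recursion $[x^+_{i,r+1},x^-_{i,s}]=[x^+_{i,r},x^-_{i,s+1}]$, whose unique solution is relation [\[Y5\]](#Y5){reference-type="ref" reference="Y5"}. The following short script (illustrative only, not part of the paper) confirms this up to a cutoff, representing $\xi_{i,r}$ by its mode index $r$.

```python
# Solve the coefficient equations extracted from (cY5) with i = j:
# boundary c_{r,0} = c_{0,r} = xi_r, recursion c_{r+1,s} = c_{r,s+1};
# the claim is that the unique solution is c_{r,s} = xi_{r+s}, i.e. (Y5).
N = 8
c = {}
for r in range(N + 1):
    c[(r, 0)] = r          # xi_{i,r} is represented by its index r
    c[(0, r)] = r
# propagate c_{r,s} = c_{r-1,s+1} inward from the boundary
for total in range(1, N + 1):
    for r in range(1, total):
        s = total - r
        c[(r, s)] = c[(r - 1, s + 1)]
assert all(c[(r, s)] == r + s for r in range(N) for s in range(N) if r + s <= N)
print("c_{r,s} = xi_{r+s} for all r + s <=", N)
```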
## Alternate set of generators for $Y^{0}_{\hbar}(\mathfrak{g})$ {#ssec:tir} For each $i\in{\mathbf I}$, let $t_i(u)$ and $B_i(z)$ be the formal series defined by $$\begin{gathered} t_i(u) = \hbar\sum_{r\geqslant 0} t_{i,r}u^{-r-1} =\log(\xi_i(u)) = \sum_{n=1}^{\infty} \frac{(-1)^{n-1}}{n} \left(\hbar \sum_{r\in\mathbb{Z}_{\geqslant 0}}\xi_{i,r}u^{-r-1}\right)^n,\\ B_i(z) =\hbar\sum_{r\geqslant 0} t_{i,r}\frac{z^r}{r!}.\end{gathered}$$ That is, $t_i(u)$ is the formal series logarithm of $\xi_i(u)$ and $B_i(z)$ is the formal Borel transform of $t_i(u)$. The elements $\{t_{i,r}\}_{i\in{\mathbf I},r\in\mathbb{Z}_{\geqslant 0}}\subset Y^{0}_{\hbar}(\mathfrak{g})$ are polynomials in $\{\xi_{i,r}\}_{i\in{\mathbf I},r\in\mathbb{Z}_{\geqslant 0}}$, and together with $\mathfrak{h}$, generate $Y^{0}_{\hbar}(\mathfrak{g})$. We record the formulae for the first few terms, for future use: $$\begin{aligned} t_{i,0} &= \xi_{i,0}, \label{eq:ti0} \\ t_{i,1} &= \xi_{i,1} - \frac{\hbar}{2}\xi_{i,0}^2, \label{eq:ti1}\\ t_{i,2} &= \xi_{i,2} - \hbar \xi_{i,1}\xi_{i,0} + \frac{\hbar^2}{3}\xi_{i,0}^3, \label{eq:ti2}\\ t_{i,3} &= \xi_{i,3} - \hbar\xi_{i,2}\xi_{i,0} - \frac{\hbar}{2}\xi_{i,1}^2 +\hbar^2\xi_{i,1}\xi_{i,0}^2 - \frac{\hbar^3}{4}\xi_{i,0}^4\ \label{eq:ti3} .\end{aligned}$$ The following commutation relation was obtained in [@sachin-valerio-1 §2.9]: $$\label{eq:comm-Bi} [B_i(z),x^{\pm}_{k,n}] = \pm \frac{e^{\frac{d_ia_{ik}\hbar}{2}z} - e^{-\frac{d_ia_{ik}\hbar}{2}z}}{z} \left(\sum_{r\geqslant 0} x^{\pm}_{k,n+r} \frac{z^r}{r!}\right).$$ Comparing coefficients of $z^m$, one obtains (see [@sachin-valerio-1 Remark 2.9]): $$[t_{i,m},x_{k,n}^{\pm}] = \pm d_ia_{ik} \sum_{\ell=0}^{\lfloor \frac{m}{2}\rfloor} \left(\begin{array}{c} m\\ 2\ell\end{array}\right) \frac{(\hbar d_ia_{ik}/2)^{2\ell}}{2\ell+1} x_{k,m+n-2\ell}^{\pm}\ .$$ A few special cases of this relation which will be particularly relevant to us are $$\begin{aligned} [t_{i,1},x_{j,n}^{\pm}] &= \pm d_ia_{ij} x_{j,n+1}^{\pm}\ ,\label{eq:commt1} \\ [t_{i,2},x_{j,n}^{\pm}] &= \pm d_ia_{ij} 
x_{j,n+2}^{\pm} \pm \frac{\hbar^2}{12} (d_ia_{ij})^3 x_{j,n}^{\pm} \label{eq:commt2}\ ,\\ [t_{i,3},x_{j,n}^{\pm}] &= \pm d_ia_{ij} x_{j,n+3}^{\pm} \pm \frac{\hbar^2}{4} (d_ia_{ij})^3 x_{j,n+1}^{\pm}\label{eq:commt3}\ .\end{aligned}$$ ## Shift automorphism {#ssec: shift-yangian} The group of translations of the complex plane acts on $Y_{\hbar}(\mathfrak{g})$ as follows. For $s\in\mathbb{C}$, $\tau_s(h)=h,\ \forall\ h\in\mathfrak{h}$, and $\tau_s(y(u)) = y(u-s)$, where $y$ is one of $\xi_i,x_i^{\pm}$. In terms of modes, we have $$\tau_s(y_r) = \sum_{i=0}^r \left(\begin{array}{c}r\\i\end{array}\right) s^{r-i}y_i\ .$$ If $s$ is instead viewed as a formal variable, then these same formulae define an algebra homomorphism $Y_{\hbar}(\mathfrak{g})\to Y_{\hbar}(\mathfrak{g})[s]$, still denoted $\tau_s$. This is a filtered algebra homomorphism, provided $Y_{\hbar}(\mathfrak{g})[s]\cong Y_{\hbar}(\mathfrak{g})\otimes \mathbb{C}[s]$ is equipped with the standard tensor product filtration in which $\deg s=1$. ## $\mathsf{Q}$--grading {#ssec:Q-grade} Viewed as a module over $\mathfrak{h}$, we have $Y_{\hbar}(\mathfrak{g})= \bigoplus_{\beta\in \mathsf{Q}}Y_{\hbar}(\mathfrak{g})_{\beta}$, where  $$Y_{\hbar}(\mathfrak{g})_\beta = \{y\in Y_{\hbar}(\mathfrak{g}): [h,y]=\beta(h)y,\ \forall\ h\in\mathfrak{h}\}.$$ This gives rise to a $\mathsf{Q}$-graded algebra structure on $Y_{\hbar}(\mathfrak{g})$ for which $Y^{\pm}_{\hbar}(\mathfrak{g})$, $Y^{\scriptscriptstyle{\geqslant}}_{\hbar}(\mathfrak{g})$ and $Y^{\scriptscriptstyle{\leqslant}}_{\hbar}(\mathfrak{g})$ are all $\mathsf{Q}$-graded subalgebras. 
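As an aside, the expansions [\[eq:ti1\]](#eq:ti1){reference-type="eqref" reference="eq:ti1"}--[\[eq:ti3\]](#eq:ti3){reference-type="eqref" reference="eq:ti3"} of Section [3.5](#ssec:tir){reference-type="ref" reference="ssec:tir"} can be verified mechanically: since the $\xi_{i,r}$ commute with one another by [\[Y1\]](#Y1){reference-type="ref" reference="Y1"}, the computation of $\log\xi_i(u)$ takes place in a commutative polynomial ring. The following sketch (illustrative only) truncates at order $u^{-4}$ and compares coefficients.

```python
from fractions import Fraction as Fr

N = 4  # truncate at u^{-4}
# monomial key: (power of u^{-1}, power of hbar, e0, e1, e2, e3),
# standing for hbar^k * xi0^e0 * xi1^e1 * xi2^e2 * xi3^e3 * u^{-deg}
def mul(p, q):
    out = {}
    for k1, c1 in p.items():
        for k2, c2 in q.items():
            k = tuple(a + b for a, b in zip(k1, k2))
            if k[0] <= N:
                out[k] = out.get(k, Fr(0)) + c1 * c2
    return {k: c for k, c in out.items() if c}

# x = hbar (xi0 u^{-1} + xi1 u^{-2} + xi2 u^{-3} + xi3 u^{-4})
x = {(1, 1, 1, 0, 0, 0): Fr(1), (2, 1, 0, 1, 0, 0): Fr(1),
     (3, 1, 0, 0, 1, 0): Fr(1), (4, 1, 0, 0, 0, 1): Fr(1)}

# log(1 + x) = x - x^2/2 + x^3/3 - x^4/4 (higher powers exceed u^{-4})
log_xi, xp, sign = {}, dict(x), 1
for n in range(1, 5):
    for k, c in xp.items():
        log_xi[k] = log_xi.get(k, Fr(0)) + Fr(sign, n) * c
    xp, sign = mul(xp, x), -sign
log_xi = {k: c for k, c in log_xi.items() if c}

# expected: hbar t_{i,r} u^{-r-1} with t_{i,1} = xi1 - hbar/2 xi0^2, etc.
expected = dict(x)  # leading terms hbar xi_r u^{-r-1}
expected.update({
    (2, 2, 2, 0, 0, 0): Fr(-1, 2),
    (3, 2, 1, 1, 0, 0): Fr(-1), (3, 3, 3, 0, 0, 0): Fr(1, 3),
    (4, 2, 1, 0, 1, 0): Fr(-1), (4, 2, 0, 2, 0, 0): Fr(-1, 2),
    (4, 3, 2, 1, 0, 0): Fr(1), (4, 4, 4, 0, 0, 0): Fr(-1, 4),
})
assert log_xi == expected
print("formulae (eq:ti1)-(eq:ti3) confirmed")
```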
In particular, we have $$Y^{\scriptscriptstyle{\geqslant}}_{\hbar}(\mathfrak{g}) = \bigoplus_{\beta\in\mathsf{Q}_+} Y^{\scriptscriptstyle{\geqslant}}_{\hbar}(\mathfrak{g})_{\beta} \quad \text{ and }\quad Y^{\scriptscriptstyle{\leqslant}}_{\hbar}(\mathfrak{g}) = \bigoplus_{\beta\in\mathsf{Q}_+} Y^{\scriptscriptstyle{\leqslant}}_{\hbar}(\mathfrak{g})_{-\beta}.$$ ## The elements $\EuScript{C}_r$ {#ssec:cr} Recall from Section [3.1](#ssec:Notation){reference-type="ref" reference="ssec:Notation"} that $\EuScript{C}= \sum_{i\in{\mathbf I}} a_id_ih_i\in\mathfrak{h}$ is the canonical central element of $\mathfrak{g}$. We define higher order analogues of $\EuScript{C}$ in $Y_{\hbar}(\mathfrak{g})$ by setting $$\EuScript{C}_r=\sum_{i\in {\mathbf I}} a_it_{i,r} \quad \forall \quad r\geqslant 0.$$ In particular, $\EuScript{C}_0$ is the image of $\EuScript{C}$ in $Y_{\hbar}(\mathfrak{g})$. In general, these elements do not belong to the center of $Y_{\hbar}(\mathfrak{g})$. However, we do have the following corollary of the relations [\[eq:commt1\]](#eq:commt1){reference-type="eqref" reference="eq:commt1"}--[\[eq:commt3\]](#eq:commt3){reference-type="eqref" reference="eq:commt3"}. **Corollary 1**. *The elements $\EuScript{C}_0,\EuScript{C}_1$ are central, while $\EuScript{C}_2$ and $\EuScript{C}_3$ satisfy $$[\EuScript{C}_2,x_{j,s}^\pm]=\pm \frac{\hbar^2}{12} \mu_j x_{j,s}^\pm \quad \text{ and }\quad [\EuScript{C}_3,x_{j,s}^\pm]=\pm \frac{\hbar^2}{4} \mu_j x_{j,s+1}^\pm,$$ for all $j\in {\mathbf I}$ and $s\geqslant 0$. Here $\mu_j$ is defined by $$\mu_j=\sum_{i\in {\mathbf I}} a_i (d_i a_{ij})^3 \quad \forall \quad j\in {\mathbf I}.$$* That $\mathrm{ad}(\EuScript{C}_j)$ is a non-trivial derivation for $j=2,3$ will play a crucial role in the constructions of this article, as will the following proposition, which is proven in detail in Appendix [9](#app:Augmented){reference-type="ref" reference="app:Augmented"}. **Proposition 1**. 
*Let $\mathbf{B}=(d_ia_{ij})_{i,j\in {\mathbf I}}$ be the symmetrized Cartan matrix of $\mathfrak{g}$, and let $\underline{\mu}=(\mu_j)_{j\in {\mathbf I}}\in \mathbb{Z}^{{\mathbf I}}$, where $\mu_j$ is defined above. Then the augmented matrix $(\mathbf{B}\,|\, \underline{\mu})$ has rank $|{\mathbf I}|$.* ## The coproduct $\Delta^{z}$ {#ssec:pre-cop} Finally, we recall the definition of the twisted standard coproduct $\Delta^z$ on $Y_{\hbar}(\mathfrak{g})$, which was introduced in [@guay-nakajima-wendlandt §6.1]. By [@guay-nakajima-wendlandt §6.1], the assignment $$\upsigma_z(x_{i,r}^\pm)=z^{\pm 1} x_{i,r}^\pm, \quad \upsigma_z(\xi_{i,r})=\xi_{i,r} \quad \text{ and } \quad \upsigma_z(h)=h,$$ for all $i\in {\mathbf I}$, $r\geqslant 0$ and $h\in \mathfrak{h}$, extends to an injective algebra homomorphism $$\upsigma_z:Y_{\hbar}(\mathfrak{g})\to Y_{\hbar}(\mathfrak{g})[z^{\pm 1}].$$ Note that if $\rho^{\vee}\in\mathfrak{h}$ is chosen so that $\alpha_i(\rho^{\vee})=1$ for every $i\in{\mathbf I}$, then $\upsigma_z = \operatorname{Ad}(z^{\rho^{\vee}})$. Next, for each $\beta\in \Phi_+\cup\{0\}$, let $\Omega_\beta\in \mathfrak{g}_{-\beta}\otimes \mathfrak{g}_\beta$ be the canonical element defined by the restriction of the standard invariant form on $\mathfrak{g}$ to $\mathfrak{g}_{-\beta}\times \mathfrak{g}_\beta$. We then define $\Omega_z$ and $\Omega_z^-$ in $(\mathfrak{g}\otimes \mathfrak{g})[\![z]\!]$ by $$\Omega_z^-=\sum_{\beta\in \Phi_+}\Omega_\beta z^{\mathsf{ht}(\beta)} \quad \text{ and }\quad \Omega_z=\Omega_0+\Omega_z^-,$$ where $\mathsf{ht}:\mathsf{Q}_+\to \mathbb{Z}_{\geqslant 0}$ is the additive *height* function, defined on $\beta=\sum_{j}n_j \alpha_j$ by $\mathsf{ht}(\beta)=\sum_{j} n_j$. Note in particular that $\Omega_\beta z^{\mathsf{ht}(\beta)}=(\operatorname{Id}\otimes \upsigma_z)(\Omega_\beta)$ for each $\beta\in \mathsf{Q}_+$. 
By Theorem 6.2 of [@guay-nakajima-wendlandt], there is an algebra homomorphism $$\Delta^z:Y_{\hbar}(\mathfrak{g})\to Y_{\hbar}(\mathfrak{g})^{\otimes 2}[z^{-1};z]\!]$$ uniquely determined by the formulae $$\label{Delta^z:def} \begin{gathered} \Delta^z(y)=y\otimes 1 + 1\otimes \upsigma_z(y),\\ \Delta^z(t_{i,1})=t_{i,1}\otimes 1 + 1\otimes t_{i,1}+\hbar[\xi_{i,0}\otimes 1, \Omega_z], \end{gathered}$$ for each $y\in \mathfrak{h}\cup\{x_{j,0}^\pm\}_{j\in {\mathbf I}}$ and $i\in {\mathbf I}$. We will call $\Delta^z$ the (twisted) *standard coproduct* on $Y_{\hbar}(\mathfrak{g})$. It is not coassociative or counital, but satisfies the twisted coalgebra relations $$\label{twisted-coalg} \begin{gathered} (\Delta^z \otimes \operatorname{Id})\circ \Delta^{zw}=(\operatorname{Id}\otimes \Delta^w)\circ \Delta^z,\\ (\varepsilon\otimes \operatorname{Id})\Delta^z(x)=1\otimes \upsigma_z(x)\quad \text{ and }\quad (\operatorname{Id}\otimes \varepsilon)\Delta^z(x)=x\otimes 1 \quad \forall \; x\in Y_{\hbar}(\mathfrak{g}), \end{gathered}$$ where $\varepsilon: Y_{\hbar}(\mathfrak{g})\to \mathbb{C}$ is the counit, defined by $\varepsilon(y)=0$ for all $y\in \mathfrak{h}\cup\{\xi_{i,r},x_{i,r}^\pm\}_{i\in {\mathbf I},r\geqslant 0}$. It also follows easily from the formulae given above that $\Delta^z$ preserves the loop filtration on $Y_{\hbar}(\mathfrak{g})$ introduced in Section [3.3](#ssec:filtration){reference-type="ref" reference="ssec:filtration"} above. 
That is, for every $k\in\mathbb{Z}_{\geqslant 0}$ we have $$\label{eq:Delta-filt} \Delta^z(y)\in\mathbf{F}_k(Y_{\hbar}(\mathfrak{g})^{\otimes 2})[\negthinspace[z]\negthinspace],\ \forall\ y\in\mathbf{F}_k(Y_{\hbar}(\mathfrak{g}))\ .$$ We shall make use of the linear map $\square:Y_{\hbar}(\mathfrak{g})\to Y_{\hbar}(\mathfrak{g})\otimes Y_{\hbar}(\mathfrak{g})$ defined by $$\square(y)=y\otimes 1 + 1\otimes y \quad \forall \; y\in Y_{\hbar}(\mathfrak{g}).$$ Though it is not an algebra homomorphism, it satisfies $\square([x,y])=[\square(x),\square(y)]$ for all $x,y\in Y_{\hbar}(\mathfrak{g})$. In particular, by (4.13) of [@guay-nakajima-wendlandt], we have $$\label{Delta-xi1} \begin{aligned} \Delta^z(x_{i,1}^+)&=\square^z(x_{i,1}^+)-z\hbar[1\otimes x_{i,0}^+,\Omega_z], \\ \Delta^z(x_{i,1}^-)&=\square^z(x_{i,1}^-)+\hbar[x_{i,0}^-\otimes 1,\Omega_z], \end{aligned}$$ for each $i\in {\mathbf I}$, where $\square^z=(\operatorname{Id}\otimes \upsigma_z)\circ \square$. Using the fact that $\xi_{i,r}-t_{i,r}\in\mathbf{F}_{r-1}(Y_{\hbar}(\mathfrak{g}))$, it follows from the proof of [@GW-Poles Prop. 2.6] that $$\label{Delta-filt2} (\Delta^z-\square^z)(t_{i,r}) \in \mathbf{F}_{r-1}(Y_{\hbar}(\mathfrak{g})^{\otimes 2})[\negthinspace[z]\negthinspace], \ \forall\ i\in{\mathbf I},r\in\mathbb{Z}_{\geqslant 0}.$$ # Representations of affine Yangians {#sec:Y-rep} In this section we give the definitions of the relevant categories of representations of $Y_{\hbar}(\mathfrak{g})$, namely $\mathcal{O}(Y_{\hbar}(\mathfrak{g}))$ and $\mathcal{O}_{\scriptscriptstyle{\mathrm{int}}}(Y_{\hbar}(\mathfrak{g}))\subset\mathcal{O}(Y_{\hbar}(\mathfrak{g}))$. We remind the reader of the standard tensor product $\underset{\scriptscriptstyle{\operatorname{ KM},s}}{\otimes}$ and the Drinfeld tensor product $\underset{\scriptscriptstyle{\operatorname{D},s}}{\otimes}$, two different $s$--dependent tensor structures on $\mathcal{O}(Y_{\hbar}(\mathfrak{g}))$. 
The dependence is polynomial in $s$ for $\underset{\scriptscriptstyle{\operatorname{ KM},s}}{\otimes}$ and rational in $s$ for $\underset{\scriptscriptstyle{\operatorname{D},s}}{\otimes}$.\ ## Categories $\mathcal{O}_{\scriptscriptstyle{\mathrm{int}}}(Y_{\hbar}(\mathfrak{g}))\subset \mathcal{O}(Y_{\hbar}(\mathfrak{g}))$ {#ssec:cat-O} A representation $V$ of $Y_{\hbar}(\mathfrak{g})$ is said to be in category $\mathcal{O}(Y_{\hbar}(\mathfrak{g}))$ if its restriction to $U(\mathfrak{g})$ is in category $\mathcal{O}$. That is, 1. $V$ is a direct sum of finite--dimensional weight spaces. $$V = \bigoplus_{\mu\in\mathfrak{h}^*} V[\mu],\ \operatorname{dim}(V[\mu])<\infty,$$ where $V[\mu] =\{v\in V : h\cdot v = \mu(h)v,\ \forall h\in\mathfrak{h}\}$. 2. Let $P(V) =\{\mu : V[\mu]\neq\{0\}\}$ be the set of weights of $V$. Then, there exist $\lambda_1,\ldots,\lambda_r\in\mathfrak{h}^*$ such that $$P(V) \subset \bigcup_{j=1}^r \lambda_j - \mathsf{Q}_+.$$ Let $\mathcal{O}_{\scriptscriptstyle{\mathrm{int}}}(Y_{\hbar}(\mathfrak{g}))$ be the full subcategory of $\mathcal{O}(Y_{\hbar}(\mathfrak{g}))$ consisting of those $V\in\mathcal{O}(Y_{\hbar}(\mathfrak{g}))$ which satisfy the following *integrability condition*: for every $\mu\in P(V)$ and $i\in{\mathbf I}$, there exists $N\gg 0$ such that $V[\mu-j\alpha_i]=0,\ \forall\ j\geqslant N$. Given $V\in\mathcal{O}(Y_{\hbar}(\mathfrak{g}))$ and $s\in \mathbb{C}$, let $V(s)$ denote the pull--back representation $\tau_s^*(V)$. **Remark 1**. These are the Yangian analogues of the corresponding categories for quantum affinizations studied by Hernandez in [@hernandez-affinizations] (see also [@sachin-valerio-2 §3]). The analogue of the following *rationality property* for quantum affine algebras was obtained in [@beck-kac §6] and [@hernandez-drinfeld Prop. 3.8]. The Yangian version, stated below, can be found in [@sachin-valerio-2 Prop. 3.6]. **Proposition 2**. 
*Let $V$ be a representation of $Y_{\hbar}(\mathfrak{g})$ which is $\mathfrak{h}$--diagonalizable, with finite--dimensional weight spaces. Then, for every weight $\mu\in\mathfrak{h}^*$ of $V$, the generating series $$\xi_i(u)\in \operatorname{End}(V[\mu])[\negthinspace[u^{-1}]\negthinspace],\qquad x_i^{\pm}(u) \in \operatorname{Hom}(V[\mu],V[\mu\pm\alpha_i])[\negthinspace[u^{-1}]\negthinspace],$$ defined in [3.4](#ssec:currents){reference-type="ref" reference="ssec:currents"} above, are the Taylor series expansions at $\infty$ of rational functions of $u$.* ## Matrix logarithms {#ssec:log} Let $i\in{\mathbf I}$, $V\in\mathcal{O}(Y_{\hbar}(\mathfrak{g}))$ and $\mu\in P(V)$. By the previous proposition, the operator $\xi_i(u)$ acting on the finite--dimensional weight space $V[\mu]$ becomes a rational function of $u\in\mathbb{C}$, taking value $1$ at $u=\infty$. Let $A\subset \mathbb{C}^{\times}$ be the set of poles of $\xi_i(u)^{\pm 1}$. As shown in [@sachin-valerio-III Prop. 5.4], we can view $t_i(u)$ as the Taylor series near $\infty$ of a single--valued function defined on the cut plane: $$t_i(u) = \log(\xi_i(u)) : \mathbb{C}\setminus \bigcup_{a\in A} [0,a] \to \operatorname{End}(V[\mu]).$$ ## Standard tensor product {#ssec:km-tensor} Let $\Delta^z$ be the twisted standard coproduct introduced in Section [3.9](#ssec:pre-cop){reference-type="ref" reference="ssec:pre-cop"}. Given $V_1,V_2\in\mathcal{O}(Y_{\hbar}(\mathfrak{g}))$, with action homomorphisms $\pi_\ell:Y_{\hbar}(\mathfrak{g})\to\operatorname{End}(V_\ell)$, the composition $$(\pi_1\otimes \pi_2) \circ \Delta^z : Y_{\hbar}(\mathfrak{g})\to \operatorname{End}(V_1\otimes V_2)[\negthinspace[z]\negthinspace]$$ can be evaluated at $z=1$ to yield an action of $Y_{\hbar}(\mathfrak{g})$ on $V_1\otimes V_2$ (see [@guay-nakajima-wendlandt Cor. 6.9]). The resulting representation of $Y_{\hbar}(\mathfrak{g})$ is denoted $V_1\underset{\scriptscriptstyle{\operatorname{ KM},0}}{\otimes}V_2$. 
More generally, we set $$V_1\underset{\scriptscriptstyle{\operatorname{ KM},s}}{\otimes} V_2 =V_1(s)\underset{\scriptscriptstyle{\operatorname{ KM},0}}{\otimes} V_2\quad \forall \; s\in \mathbb{C}.$$ The properties of this tensor product are summarized in the following theorem, which is a consequence of the results of [@guay-nakajima-wendlandt]; see also [@GTLW Prop. 8.2]. **Theorem 4**. *The category $\mathcal{O}(Y_{\hbar}(\mathfrak{g}))$ together with the tensor product $\underset{\scriptscriptstyle{\operatorname{ KM},s}}{\otimes}$ is a (polynomial) tensor category. In more detail, we have the following properties:* 1. *For $V_1,V_2\in\mathcal{O}(Y_{\hbar}(\mathfrak{g}))$, $V_1\underset{\scriptscriptstyle{\operatorname{ KM},s}}{\otimes}V_2$ depends polynomially on $s$.* 2. *The tensor product is compatible with the shift automorphism. That is, for every $V_1,V_2\in\mathcal{O}(Y_{\hbar}(\mathfrak{g}))$, we have: $$(V_1\underset{\scriptscriptstyle{\operatorname{ KM},s_1}}{\otimes}V_2)(s_2) = V_1(s_2)\underset{\scriptscriptstyle{\operatorname{ KM},s_1}}{\otimes} V_2(s_2),\qquad V_1(s_1)\underset{\scriptscriptstyle{\operatorname{ KM},s_2}}{\otimes} V_2 = V_1\underset{\scriptscriptstyle{\operatorname{ KM},s_1+s_2}}{\otimes} V_2.$$* 3. *Let $\mathsf{1}$ denote the $1$--dimensional, trivial representation of $Y_{\hbar}(\mathfrak{g})$. Then the following natural identifications of vector spaces are $Y_{\hbar}(\mathfrak{g})$--intertwiners. $$\mathsf{1}\underset{\scriptscriptstyle{\operatorname{ KM},s}}{\otimes} V_2 \cong V_2,\qquad V_1\underset{\scriptscriptstyle{\operatorname{ KM},s}}{\otimes} \mathsf{1}\cong V_1(s).$$* 4. *The tensor product is associative in the following sense. 
For any $V_1,V_2,V_3\in\mathcal{O}(Y_{\hbar}(\mathfrak{g}))$, the natural identification of vector spaces is a $Y_{\hbar}(\mathfrak{g})$--intertwiner: $$(V_1\underset{\scriptscriptstyle{\operatorname{ KM},s_1}}{\otimes}V_2)\underset{\scriptscriptstyle{\operatorname{ KM},s_2}}{\otimes} V_3 \cong V_1\underset{\scriptscriptstyle{\operatorname{ KM},s_1+s_2}}{\otimes} (V_2\underset{\scriptscriptstyle{\operatorname{ KM},s_2}}{\otimes} V_3).$$* ## Drinfeld tensor product {#ssec:dr-tensor} The following version of the Drinfeld tensor product on representations of Yangians was introduced in [@sachin-valerio-III §4] (see also [@GTLW §3.2--3.4, and Prop. 8.1]). Let $\underset{\scriptscriptstyle{{\operatorname{D}},s}}{\Delta}$ be the assignment on the generating set $\{h,\xi_{i,r},x_{i,r}^\pm\}_{h\in \mathfrak{h},i\in {\mathbf I},r\geqslant 0}$ of $Y_{\hbar}(\mathfrak{g})$, with values in $Y_{\hbar}(\mathfrak{g})\otimes Y_{\hbar}(\mathfrak{g})[s;s^{-1}]\negthinspace]$, defined as follows: - For any $h\in\mathfrak{h}$, $\underset{\scriptscriptstyle{{\operatorname{D}},s}}{\Delta}(h) = \square(h)=h\otimes 1 + 1\otimes h$. - For any $i\in{\mathbf I}$, $\underset{\scriptscriptstyle{{\operatorname{D}},s}}{\Delta}(\xi_i(u)) = \xi_i(u-s)\otimes\xi_i(u)$. Thus, $$\underset{\scriptscriptstyle{{\operatorname{D}},s}}{\Delta}(\xi_{i,r}) = \tau_s(\xi_{i,r})\otimes 1 + 1\otimes\xi_{i,r} +\hbar\sum_{p=0}^{r-1} \tau_s(\xi_{i,p})\otimes \xi_{i,r-1-p}\ .$$ Note that the elements $\{t_{i,r}\}$ introduced in Section [3.5](#ssec:tir){reference-type="ref" reference="ssec:tir"} are primitive with respect to the Drinfeld coproduct. 
That is, $$\underset{\scriptscriptstyle{{\operatorname{D}},s}}{\Delta}(t_{i,r}) = \tau_s(t_{i,r})\otimes 1 + 1\otimes t_{i,r}\ .$$ - For each $i\in{\mathbf I}$, we have: $$\begin{aligned} \underset{\scriptscriptstyle{{\operatorname{D}},s}}{\Delta}(x^+_{i,r}) &= \tau_s(x_{i,r}^+)\otimes 1 + 1\otimes x_{i,r}^+ \\ &\phantom{=} + \hbar \sum_{N\geqslant 0} s^{-N-1} \left(\sum_{n=0}^N (-1)^{n+1} \left(\begin{array}{c} N\\ n\end{array}\right) \xi_{i,n}\otimes x_{i,r+N-n}^+ \right),\\ \underset{\scriptscriptstyle{{\operatorname{D}},s}}{\Delta}(x^-_{i,r}) &= \tau_s(x_{i,r}^-)\otimes 1 + 1\otimes x_{i,r}^- \\ &\phantom{=} + \hbar \sum_{N\geqslant 0} s^{-N-1} \left(\sum_{n=0}^N (-1)^{n+1} \left(\begin{array}{c} N\\ n\end{array}\right) x^-_{i,r+n}\otimes \xi_{i,N-n} \right).\end{aligned}$$ It was shown in [@GTLW Thm. 3.4] that the analogue of this assignment for the Yangian $Y_\hbar(\mathfrak{a})$ of a finite-dimensional simple Lie algebra $\mathfrak{a}$ defines an algebra homomorphism $Y_\hbar(\mathfrak{a})\to Y_\hbar(\mathfrak{a})\otimes Y_\hbar(\mathfrak{a})[s;s^{-1}]\negthinspace]$. This result extends naturally to any Kac--Moody algebra satisfying the condition that each rank $2$ diagram subalgebra is of finite type, since the defining relations of the Yangian are inherently of rank $2$ type. In particular, this applies to the affine Kac--Moody algebra $\mathfrak{g}$: the above assignment extends to an algebra homomorphism $$\underset{\scriptscriptstyle{{\operatorname{D}},s}}{\Delta}:Y_{\hbar}(\mathfrak{g})\to Y_{\hbar}(\mathfrak{g})\otimes Y_{\hbar}(\mathfrak{g})[s;s^{-1}]\negthinspace].$$ In contrast with the infinite sums encountered in the definition of $\Delta^z$ from Section [3.9](#ssec:pre-cop){reference-type="ref" reference="ssec:pre-cop"}, the infinite sums written above do not truncate. 
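As a quick consistency check on the $\xi$--part of these formulae (an illustrative script, not part of the paper), note that the $\xi$--modes commute with one another and the two tensor factors commute with each other, so one may substitute commuting scalars for $\tau_s(\xi_{i,p})\otimes 1$ and $1\otimes\xi_{i,q}$ and compare the product $\xi_i(u-s)\otimes\xi_i(u)$ with the displayed mode formula for $\underset{\scriptscriptstyle{{\operatorname{D}},s}}{\Delta}(\xi_{i,r})$.

```python
from fractions import Fraction as Fr
import random

random.seed(0)
h = Fr(1, 3)           # a stand-in value for hbar
N = 6                  # truncation order in u^{-1}
A = [Fr(random.randint(-9, 9)) for _ in range(N)]  # stands for tau_s(xi_{i,p}) (x) 1
B = [Fr(random.randint(-9, 9)) for _ in range(N)]  # stands for 1 (x) xi_{i,q}

# series coefficients of u^{-k}, k = 0..N: xi(u) = 1 + hbar sum_r xi_r u^{-r-1}
def series(coeffs):
    out = [Fr(0)] * (N + 1)
    out[0] = Fr(1)
    for r, c in enumerate(coeffs):
        if r + 1 <= N:
            out[r + 1] = h * c
    return out

def mul(p, q):
    out = [Fr(0)] * (N + 1)
    for a in range(N + 1):
        for b in range(N + 1 - a):
            out[a + b] += p[a] * q[b]
    return out

prod = mul(series(A), series(B))  # xi_i(u-s) (x) xi_i(u), modes already shifted into A

for r in range(N):
    mode = A[r] + B[r] + h * sum(A[p] * B[r - 1 - p] for p in range(r))
    assert prod[r + 1] == h * mode
print("Drinfeld coproduct mode formula verified up to order", N)
```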
It is a consequence of the rationality property provided by Proposition [Proposition 2](#pr:rationality){reference-type="ref" reference="pr:rationality"} that these formal Laurent series in $s^{-1}$ become rational functions of $s$ once evaluated on $V_1\otimes V_2$. More precisely, given $V_1,V_2\in\mathcal{O}(Y_{\hbar}(\mathfrak{g}))$, the composition $$(\pi_1\otimes \pi_2)\circ \underset{\scriptscriptstyle{{\operatorname{D}},s}}{\Delta} : Y_{\hbar}(\mathfrak{g}) \to \operatorname{End}(V_1\otimes V_2)[s;s^{-1}]\negthinspace]$$ takes values in $\operatorname{End}(V_1\otimes V_2)(s)$ (rational functions of $s$). We let $V_1\underset{\scriptscriptstyle{\operatorname{D},s}}{\otimes}V_2$ denote the resulting representation of $Y_{\hbar}(\mathfrak{g})$. The following theorem summarizes the results of [@GTLW §3.2--3.4]. **Theorem 5**. *The category $\mathcal{O}(Y_{\hbar}(\mathfrak{g}))$ together with the tensor product $\underset{\scriptscriptstyle{\operatorname{D},s}}{\otimes}$ is a (rational) tensor category. In more detail, we have the following properties:* 1. *For $V_1,V_2\in\mathcal{O}(Y_{\hbar}(\mathfrak{g}))$, $V_1\underset{\scriptscriptstyle{\operatorname{D},s}}{\otimes}V_2$ depends rationally on $s$.* 2. *The tensor product is compatible with the shift automorphism. That is, for every $V_1,V_2\in\mathcal{O}(Y_{\hbar}(\mathfrak{g}))$, we have: $$(V_1\underset{\scriptscriptstyle{\operatorname{D},s_1}}{\otimes}V_2)(s_2) = V_1(s_2)\underset{\scriptscriptstyle{\operatorname{D},s_1}}{\otimes} V_2(s_2),\qquad V_1(s_1)\underset{\scriptscriptstyle{\operatorname{D},s_2}}{\otimes} V_2 = V_1\underset{\scriptscriptstyle{\operatorname{D},s_1+s_2}}{\otimes} V_2.$$* 3. *Let $\mathsf{1}$ denote the $1$--dimensional, trivial representation of $Y_{\hbar}(\mathfrak{g})$. Then the following natural identifications of vector spaces are $Y_{\hbar}(\mathfrak{g})$--intertwiners.
$$\mathsf{1}\underset{\scriptscriptstyle{\operatorname{D},s}}{\otimes} V_2 \cong V_2,\qquad V_1\underset{\scriptscriptstyle{\operatorname{D},s}}{\otimes} \mathsf{1}\cong V_1(s).$$* 4. *The tensor product is associative in the following sense. For any $V_1,V_2,V_3\in\mathcal{O}(Y_{\hbar}(\mathfrak{g}))$, the natural identification of vector spaces is a $Y_{\hbar}(\mathfrak{g})$--intertwiner: $$(V_1\underset{\scriptscriptstyle{\operatorname{D},s_1}}{\otimes}V_2)\underset{\scriptscriptstyle{\operatorname{D},s_2}}{\otimes} V_3 \cong V_1\underset{\scriptscriptstyle{\operatorname{D},s_1+s_2}}{\otimes} (V_2\underset{\scriptscriptstyle{\operatorname{D},s_2}}{\otimes} V_3).$$* **Remark 2**. We wish to stress the point that $\underset{\scriptscriptstyle{{\operatorname{D}},s}}{\Delta}$ is *not* coassociative. The associativity of $\underset{\scriptscriptstyle{\operatorname{D},s}}{\otimes}$ rests on an identity among rational functions of two variables, which does not seem to have a lift at the level of the algebra (see [@GTLW Remark 3.1]). # The transformation $\operatorname{T}$ {#sec:Delta-T} In this section, we describe a linear map $\operatorname{T}:\mathfrak{h}\to Y^{0}_{\hbar}(\mathfrak{g})$, which plays a crucial role in our construction of $\mathcal{R}^-(s)$ in Section [6](#sec:neg-R){reference-type="ref" reference="sec:neg-R"} below. ## The transformation $\operatorname{T}$ {#ssec:T-main} Let $\circ\in{\mathbf I}$ denote the *extending vertex*, as in [@kac §6.1], and let $d\in\mathfrak{h}$ denote the *scaling element*, chosen so as to have $\alpha_i(d) = \delta_{i,\circ}$ (see [@kac §6.2]). By Proposition [Proposition 1](#P:full-rank){reference-type="ref" reference="P:full-rank"}, the augmented matrix $(\mathbf{B}\,|\, \underline{\mu})$ has full rank and thus the unit vector $(\delta_{j,\circ})_{j\in {\mathbf I}}\in \mathbb{Q}^{{\mathbf I}}$ lies in its range.
In particular, there exists a tuple $(\zeta_i)_{i\in {\mathbf I}}\in \mathbb{Q}^{{\mathbf I}}$ and $\zeta\in \mathbb{Q}^\times$ such that $$\frac{1}{4}\zeta\mu_j +\sum_{i\in {\mathbf I}}d_ja_{ji}\zeta_i =\delta_{j,\circ} \quad \forall \; j\in {\mathbf I}.$$ Since $\{d\}\cup \{d_i h_i\}_{i\in {\mathbf I}}$ is a basis of $\mathfrak{h}$, the assignment $\operatorname{T}:\{d\}\cup \{d_i h_i\}_{i\in {\mathbf I}}\to Y^{0}_{\hbar}(\mathfrak{g})$ defined by $$\operatorname{T}(d)=\sum_{i\in {\mathbf I}}\zeta_i t_{i,1} + \frac{1}{\hbar^2}\zeta \EuScript{C}_3\quad \text{ and }\quad \operatorname{T}(d_ih_i)=t_{i,1} \quad \forall\; i\in {\mathbf I}$$ uniquely extends to a linear map $\operatorname{T}:\mathfrak{h}\to Y^{0}_{\hbar}(\mathfrak{g})$. The following theorem provides the main result of this section. **Theorem 6**. *The linear map $\operatorname{T}$ has the following properties:* 1. *[\[map-T:1\]]{#map-T:1 label="map-T:1"} For each $h\in \mathfrak{h}$, $j\in {\mathbf I}$ and $r\geqslant 0$, one has $$[\operatorname{T}(h),x_{j,r}^\pm]=\pm \alpha_j(h) x_{j,r+1}^\pm.$$* 2. *[\[map-T:2\]]{#map-T:2 label="map-T:2"} For each $h\in \mathfrak{h}$ and $s\in \mathbb{C}$, one has $$\mathrm{ad}(\tau_s(\operatorname{T}(h)))=\mathrm{ad}(\operatorname{T}(h)+sh).$$* 3. *[\[map-T:3\]]{#map-T:3 label="map-T:3"} For each $h\in \mathfrak{h}$, $\Delta^z\circ \operatorname{T}$ satisfies $$\Delta^z(\operatorname{T}(h))=\operatorname{T}(h)\otimes 1 + 1\otimes \operatorname{T}(h) + \hbar[h\otimes 1, \Omega_z+\mathrm{Q}_z]$$ where $\mathrm{Q}_z$ is a formal series in $z$ with the following properties:* 1. *[\[Q_z:i\]]{#Q_z:i label="Q_z:i"} $\mathrm{Q}_z=\sum_{n>0} \mathrm{Q}_{n\delta} z^{n\mathsf{ht}(\delta)}$, where each coefficient $\mathrm{Q}_{n\delta}$ satisfies $$\mathrm{Q}_{n\delta}\in Y^{-}_{\hbar}(\mathfrak{g})_{-n\delta}\otimes Y^{+}_{\hbar}(\mathfrak{g})_{n\delta}.$$* 2. *[\[Q_z:ii\]]{#Q_z:ii label="Q_z:ii"} For each $a,b\in \mathbb{C}$, one has $$(\tau_a\otimes \tau_b)(\mathrm{Q}_z)=\mathrm{Q}_{z}.$$* 3.
*[\[Q_z:iii\]]{#Q_z:iii label="Q_z:iii"}The tensor factors of $\mathrm{Q}_z$ are primitive: $$\begin{aligned} (\Delta^w\otimes \operatorname{Id})(\mathrm{Q}_{z})&=\mathrm{Q}_{z}^{13}+\mathrm{Q}_{z/w}^{23}, \\ (\operatorname{Id}\otimes \Delta^w)(\mathrm{Q}_z)&=\mathrm{Q}_z^{12}+\mathrm{Q}_{zw}^{13}. \end{aligned}$$* 4. *[\[Q_z:iv\]]{#Q_z:iv label="Q_z:iv"} $\mathrm{Q}_{z}$ belongs to the centralizer of $U(\mathfrak{g}^\prime)^{\otimes 2}$ in $Y_{\hbar}(\mathfrak{g})^{\otimes 2}[\![z]\!]$.* *[Proof]{.smallcaps}.* Let us begin with some preliminary observations that will be useful throughout the proof. Since the constant $\zeta$ is nonzero, we can set $\mathsf{h}=\frac{\hbar^2}{\zeta}(d-\sum_{i\in {\mathbf I}}\zeta_id_i h_i )\in \mathfrak{h}$. Then $\{\mathsf{h}\}\cup\{d_ih_i\}_{i\in {\mathbf I}}$ is a basis of $\mathfrak{h}$, and we have $$\label{3C_2-preimage} \begin{gathered} \operatorname{T}(\mathsf{h})=\frac{\hbar^2}{\zeta}\bigg(\operatorname{T}(d)-\sum_{i\in {\mathbf I}}\zeta_it_{i,1} \bigg)=\EuScript{C}_3,\\ [\mathsf{h},x_{j,r}^\pm]=\pm\frac{\hbar^2}{\zeta}\bigg(\delta_{j,\circ}-\sum_{i\in {\mathbf I}}\zeta_id_ia_{ij} \bigg)x_{j,r}^\pm =\pm \frac{\hbar^2}{4} \mu_jx_{j,r}^\pm, \end{gathered}$$ for all $j\in {\mathbf I}$ and $r\geqslant 0$. It thus follows by Corollary [Corollary 1](#cor:c123){reference-type="ref" reference="cor:c123"} that we have the equality of operators $\mathrm{ad}(\mathsf{h})=\mathrm{ad}(3\EuScript{C}_2)$ on $Y_{\hbar}(\mathfrak{g})$. Consider now Parts [\[map-T:1\]](#map-T:1){reference-type="eqref" reference="map-T:1"} and [\[map-T:2\]](#map-T:2){reference-type="eqref" reference="map-T:2"} of the theorem. They clearly hold when $h\in \{d_ih_i\}_{i\in {\mathbf I}}$, so it is sufficient to verify them for $h=\mathsf{h}$.
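For $h=d_ih_i$ the verification is a one-line computation, which we record for completeness, assuming the standard relations for the elements $t_{i,1}$ recalled in Section [3.5](#ssec:tir){reference-type="ref" reference="ssec:tir"}: since $\operatorname{T}(d_ih_i)=t_{i,1}$ and $$[t_{i,1},x_{j,r}^\pm]=\pm d_ia_{ij}\,x_{j,r+1}^\pm=\pm \alpha_j(d_ih_i)\,x_{j,r+1}^\pm,$$ Part [\[map-T:1\]](#map-T:1){reference-type="eqref" reference="map-T:1"} holds for such $h$, while $\tau_s(t_{i,1})=t_{i,1}+s\,\xi_{i,0}$ yields Part [\[map-T:2\]](#map-T:2){reference-type="eqref" reference="map-T:2"} under the identification $\xi_{i,0}=d_ih_i$.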
For Part [\[map-T:1\]](#map-T:1){reference-type="eqref" reference="map-T:1"}, this is immediate from [\[3C_2-preimage\]](#3C_2-preimage){reference-type="eqref" reference="3C_2-preimage"} and Corollary [Corollary 1](#cor:c123){reference-type="ref" reference="cor:c123"}. As for Part [\[map-T:2\]](#map-T:2){reference-type="eqref" reference="map-T:2"}, since $\operatorname{T}(\mathsf{h})=\EuScript{C}_3$, we have $$\begin{aligned} \mathrm{ad}(\tau_s(\operatorname{T}(\mathsf{h})))&=\mathrm{ad}(\EuScript{C}_3+3s\EuScript{C}_2+3s^2\EuScript{C}_1+s^3 \EuScript{C}_0)\\ &=\mathrm{ad}(\EuScript{C}_3+3s\EuScript{C}_2)\\ &=\mathrm{ad}(\operatorname{T}(\mathsf{h})+s\mathsf{h}), \end{aligned}$$ where in the second equality we have used that $\EuScript{C}_0$ and $\EuScript{C}_1$ are central, and in the last equality we have applied [\[3C_2-preimage\]](#3C_2-preimage){reference-type="eqref" reference="3C_2-preimage"}. Let us now turn to Part [\[map-T:3\]](#map-T:3){reference-type="eqref" reference="map-T:3"}. We will construct the unique series $\mathrm{Q}_z$ satisfying $$\Delta^z(\operatorname{T}(h))=\operatorname{T}(h)\otimes 1 + 1\otimes \operatorname{T}(h) + \hbar[h\otimes 1, \Omega_z+\mathrm{Q}_z]$$ for all $h\in \mathfrak{h}$, in addition to the conditions [\[Q_z:i\]](#Q_z:i){reference-type="ref" reference="Q_z:i"}--[\[Q_z:iv\]](#Q_z:iv){reference-type="ref" reference="Q_z:iv"}, in several steps which will be carried out in Sections [5.2](#ssec:T-beta-nperp){reference-type="ref" reference="ssec:T-beta-nperp"}--[5.5](#ssec:Q_z-exp){reference-type="ref" reference="ssec:Q_z-exp"}. In order to explain these steps, we introduce $\Theta_z\in Y_{\hbar}(\mathfrak{g})^{\otimes 2}[\negthinspace[z]\negthinspace]$ via the following equation: $$\label{Theta_z-def} \Delta^z(\EuScript{C}_3) = \EuScript{C}_3\otimes 1 + 1\otimes \EuScript{C}_3 + \hbar\Theta_z. 
$$ It follows from the definitions of $\EuScript{C}_3$ and $\Delta^z$ that $\Theta_z=\sum_{\beta>0}\Theta_\beta z^{\mathsf{ht}(\beta)}$, where the summation is taken over all nonzero $\beta \in \mathsf{Q}_+$ and $\Theta_\beta$ satisfies $$\Theta_\beta \in Y^{{\scriptscriptstyle\leqslant}}_{\hbar}(\mathfrak{g})_{-\beta}\otimes Y^{{\scriptscriptstyle\geqslant}}_{\hbar}(\mathfrak{g})_{\beta}.$$ In more detail, that $\Theta_z$ takes this form follows from Proposition 2.9 of [@GW-Poles] (or, more precisely, its proof). Next, let $\EuScript{K}_\beta=\Theta_\beta+\beta(\mathsf{h})\Omega_\beta$ for each nonzero $\beta\in \mathsf{Q}_+$, so that $$\label{K_z} \EuScript{K}_z=\sum_{\beta>0}\EuScript{K}_\beta z^{\mathsf{ht}(\beta)}=\Theta_z-[\mathsf{h}\otimes 1, \Omega_z].$$ These elements are of interest to us because of the equation $$\Delta^z(\EuScript{C}_3) = \EuScript{C}_3\otimes 1 + 1\otimes \EuScript{C}_3 + \hbar \left(\EuScript{K}_z + [\mathsf{h}\otimes 1, \Omega_z]\right).$$ Thus, for the equation in [\[map-T:3\]](#map-T:3){reference-type="eqref" reference="map-T:3"} to hold, we must find $\mathrm{Q}_z$ so that $[\mathsf{h}\otimes 1, \mathrm{Q}_z]=\EuScript{K}_z$ and establish its stated properties. We proceed as follows: 1. In Section [5.2](#ssec:T-beta-nperp){reference-type="ref" reference="ssec:T-beta-nperp"}, we will prove that $\EuScript{K}_\beta=0$ for all $\beta \in \mathsf{Q}_+\!\setminus \mathbb{Z}_{\geqslant 0}\delta$, and that $\EuScript{K}_z$ belongs to the centralizer of $U(\mathfrak{g}^\prime)^{\otimes 2}$ in $Y_{\hbar}(\mathfrak{g})^{\otimes 2}[\![z]\!]$.
This will be achieved in Proposition [Proposition 3](#P:Theta){reference-type="ref" reference="P:Theta"} with the help of Lemma [Lemma 1](#L:Theta_z){reference-type="ref" reference="L:Theta_z"}, which computes the commutation relations between $\Theta_z$ and $\Delta^z(y)$, for all $y\in \{x_{i,0}^\pm, t_{i,1}\}_{i\in {\mathbf I}}$. 2. By the previous step, $\EuScript{K}_z$ takes the form $\EuScript{K}_z=\sum_{n>0}\EuScript{K}_{n\delta}z^{n\mathsf{ht}(\delta)}$. In Sections [5.3](#ssec:T-Gamma_i){reference-type="ref" reference="ssec:T-Gamma_i"} and [5.4](#ssec:T-Gamma_i-2){reference-type="ref" reference="ssec:T-Gamma_i-2"}, we will explicitly compute each coefficient $\EuScript{K}_{n\delta}$ and study some of their properties; see Lemma [Lemma 2](#L:Gamma-i){reference-type="ref" reference="L:Gamma-i"} and Proposition [Proposition 4](#P:K-exp){reference-type="ref" reference="P:K-exp"}. 3. In Section [5.5](#ssec:Q_z-exp){reference-type="ref" reference="ssec:Q_z-exp"}, we finally define $\mathrm{Q}_z=\sum_{n>0}\mathrm{Q}_{n\delta}z^{n\mathsf{ht}(\delta)}$ by setting $\mathrm{Q}_{n\delta}=-\frac{1}{n\delta(\mathsf{h})}\EuScript{K}_{n\delta}$ for each positive integer $n$, so that $[\mathsf{h}\otimes 1,\mathrm{Q}_z]=\EuScript{K}_z$. In Proposition [Proposition 5](#P:Q_z){reference-type="ref" reference="P:Q_z"}, we show that $\mathrm{Q}_z$ has all the properties stated in Part [\[map-T:3\]](#map-T:3){reference-type="eqref" reference="map-T:3"} of the theorem.  ◻ ## Proof that $\EuScript{K}_\beta=0$ for $\beta\in \mathsf{Q}_+\!\setminus\! 
\mathbb{Z}_{\geqslant 0}\delta$ {#ssec:T-beta-nperp} Our goal in this section is to show that the series $\EuScript{K}_z\in Y_{\hbar}(\mathfrak{g})^{\otimes 2}[\![z]\!]$ defined in [\[K_z\]](#K_z){reference-type="eqref" reference="K_z"} belongs to the centralizer of $U(\mathfrak{g}^\prime)^{\otimes 2}$ in $Y_{\hbar}(\mathfrak{g})^{\otimes 2}[\![z]\!]$ and, consequently, that its component $\EuScript{K}_\beta$ is zero for $\beta\in \mathsf{Q}_+\!\setminus\! \mathbb{Z}_{\geqslant 0}\delta$. We begin with the following lemma, which spells out some of the commutation relations satisfied by $\Theta_z$. **Lemma 1**. *For each $i\in {\mathbf I}$, one has $$\begin{gathered} [\Theta_z,\Delta^z(t_{i,1})]=-[\xi_{i,0}\otimes 1,[\square(\EuScript{C}_3),\Omega_z]],\\ [\Theta_z,\Delta^z(x_{i,0}^+)]=\frac{\hbar^2}{4}z\mu_i[\Omega_z,1\otimes x_{i,0}^+] \quad \text{ and }\quad [\Theta_z,\Delta^z(x_{i,0}^-)]=\frac{\hbar^2}{4}\mu_i[\Omega_z,x_{i,0}^-\otimes 1]. \end{gathered}$$* *[Proof]{.smallcaps}.* Since $\EuScript{C}_3$ and $t_{i,1}$ commute and $\Delta^z$ is an algebra homomorphism, we have $$\hbar[\Theta_z,\Delta^z(t_{i,1})]=[\Delta^z(\EuScript{C}_3),\Delta^z(t_{i,1})]-[\square(\EuScript{C}_3),\Delta^z(t_{i,1})]=-[\square(\EuScript{C}_3),\Delta^z(t_{i,1})].$$ Since $\square(\EuScript{C}_3)$ and $\square(t_{i,1})$ commute, we obtain $$\hbar[\Theta_z,\Delta^z(t_{i,1})]=-\hbar[\square(\EuScript{C}_3),[\xi_{i,0}\otimes 1, \Omega_z]]=-\hbar[\xi_{i,0}\otimes 1,[\square(\EuScript{C}_3), \Omega_z]],$$ which yields the first identity of the lemma. Similarly, we have $$\hbar[\Theta_z,\Delta^z(x_{i,0}^\pm)]=\Delta^z([\EuScript{C}_3,x_{i,0}^\pm])-\square^z[\EuScript{C}_3,x_{i,0}^\pm]=\pm\frac{\hbar^2}{4}\mu_i (\Delta^z-\square^z)(x_{i,1}^\pm),$$ where we recall that $\square^z=(1\otimes \upsigma_z)\circ \square$. The second and third relations of the lemma now follow from [\[Delta-xi1\]](#Delta-xi1){reference-type="eqref" reference="Delta-xi1"}. ◻ **Proposition 3**. *The element $\EuScript{K}_z$ has the following properties:* 1.
*[\[Theta:2\]]{#Theta:2 label="Theta:2"} For each $x\in \mathfrak{g}^\prime$, one has $$[x\otimes 1,\EuScript{K}_z]=0=[1\otimes x,\EuScript{K}_z].$$* 2. *[\[Theta:3\]]{#Theta:3 label="Theta:3"} $\EuScript{K}_\beta=0$ for all $\beta \in \mathsf{Q}_+\!\setminus \mathbb{Z}_{\geqslant 0}\delta$.* *[Proof]{.smallcaps}.* If Part [\[Theta:2\]](#Theta:2){reference-type="eqref" reference="Theta:2"} holds, then by taking $x=\xi_{i,0}$ in the relation $[\EuScript{K}_z, x\otimes 1]=0$ we find that $$0=[\EuScript{K}_z, \xi_{i,0}\otimes 1]=\sum_{\beta>0} (\alpha_i,\beta) \EuScript{K}_\beta z^{\mathsf{ht}(\beta)} \quad \forall \; i\in {\mathbf I}.$$ Projecting onto the $Y_{\hbar}(\mathfrak{g})_{-\beta}\otimes Y_{\hbar}(\mathfrak{g})_{\beta}$-component, we obtain $(\alpha_i,\beta) \EuScript{K}_\beta=0$ for all $i\in {\mathbf I}$, and thus $\EuScript{K}_\beta=0$ provided $(\beta,\alpha_i)\neq 0$ for some $i\in {\mathbf I}$. This shows that Part [\[Theta:3\]](#Theta:3){reference-type="eqref" reference="Theta:3"} follows from Part [\[Theta:2\]](#Theta:2){reference-type="eqref" reference="Theta:2"}. Let us now turn to proving Part [\[Theta:2\]](#Theta:2){reference-type="eqref" reference="Theta:2"}. For each nonzero $\beta \in \mathsf{Q}_+$, set $$\Theta_{\beta}(s)=(\tau_s\otimes \operatorname{Id})(\Theta_\beta)-\Theta_\beta \in s(Y^{{\scriptscriptstyle\leqslant}}_{\hbar}(\mathfrak{g})_{-\beta}\otimes Y^{{\scriptscriptstyle\geqslant}}_{\hbar}(\mathfrak{g})_{\beta})[s].$$ Note that each of these is a polynomial of degree at most $2$. Indeed, by [\[Delta-filt2\]](#Delta-filt2){reference-type="eqref" reference="Delta-filt2"}, $(\Delta^z-\square)(t_{i,3})$ belongs to the subspace $\mathbf{F}_2(Y_{\hbar}(\mathfrak{g})^{\otimes 2})[\![z]\!]$ of $Y_{\hbar}(\mathfrak{g})^{\otimes 2}[\![z]\!]$. So, the same is true for $(\Delta^z-\square)(\EuScript{C}_3)$ and thus $\Theta_z$.
As $\tau_s\otimes \operatorname{Id}$ sends any element of $\mathbf{F}_k(Y_{\hbar}(\mathfrak{g})^{\otimes 2})$ to a polynomial in $s$ of degree at most $k$ (see Section [3.6](#ssec: shift-yangian){reference-type="ref" reference="ssec: shift-yangian"} above), the assertion follows. This allows us to write $$\Theta_z(s)=\sum_{\beta>0}\Theta_{\beta}(s)z^{\mathsf{ht}(\beta)}=\Theta_z^{(1)}s+ \Theta_z^{(2)}s^2,$$ where $\Theta_z^{(a)}\in Y_{\hbar}(\mathfrak{g})^{\otimes 2}[\![z]\!]$ for each $a=1,2$. Furthermore, $\Theta_z(s)$ is $\mathfrak{g}$-invariant, in the sense that it satisfies $$\label{Theta_z(s)-inv} [\Delta^z(x),\Theta_z(s)]=0 \quad \forall \;x\in \mathfrak{g}.$$ For $x\in \mathfrak{h}$ this is clear from the form of $\Theta_z(s)$, and for $x=x_{i}^\pm$ this follows by applying $\tau_s\otimes \operatorname{Id}$ to the second line of identities from Lemma [Lemma 1](#L:Theta_z){reference-type="ref" reference="L:Theta_z"}. Applying $\tau_s\otimes \operatorname{Id}$ instead to the first equation of Lemma [Lemma 1](#L:Theta_z){reference-type="ref" reference="L:Theta_z"} while using that $\EuScript{C}_0$ and $\EuScript{C}_1$ are central, we obtain $$[\Theta_z+\Theta_z(s),\Delta^z(t_{i,1})+s\,\xi_{i,0}\otimes 1]=-[\xi_{i,0}\otimes 1,[\square(\EuScript{C}_3)+3s\EuScript{C}_2\otimes 1,\Omega_z]].$$ Expanding both sides and subtracting the first relation of Lemma [Lemma 1](#L:Theta_z){reference-type="ref" reference="L:Theta_z"} yields $$\label{Theta_z} s[\Theta_z,\xi_{i,0}\otimes 1]+s[\Theta_z(s),\xi_{i,0}\otimes 1]+[\Theta_z(s),\Delta^z(t_{i,1})]=-3s[\xi_{i,0}\otimes 1,[\EuScript{C}_2\otimes 1,\Omega_z]].$$ Comparing powers of $s$, we immediately deduce that $$\label{Theta_z'} [\Theta_z^{(2)},\xi_{i,0}\otimes 1]=0 \quad \text{ and }\quad [\Theta_z^{(1)},\xi_{i,0}\otimes 1]=-[\Theta_z^{(2)},\Delta^z(t_{i,1})].$$ It follows from the first identity that the $\beta$-component $\Theta_{z,\beta}^{(2)}$ of $\Theta_z^{(2)}$ is zero unless $(\beta,\alpha_i)=0$ for all $i\in {\mathbf I}$.
Furthermore, by using this identity, and applying $\operatorname{ad}(\xi_{i,0}\otimes 1)$ to [\[Theta_z(s)-inv\]](#Theta_z(s)-inv){reference-type="eqref" reference="Theta_z(s)-inv"} with $x=x^{\pm}_{i,0}$, we deduce that $$\label{Theta_z^2} [\Theta_z^{(2)},x\otimes 1]=0=[\Theta_z^{(2)},1\otimes x] \quad \forall\; x\in \mathfrak{g}^\prime.$$ Consequently, the bracket $[\Theta_z^{(2)},\Delta^z(t_{i,1})]$ coincides with $[\Theta_z^{(2)},\square(t_{i,1})]$, and hence the second equation of [\[Theta_z\'\]](#Theta_z'){reference-type="eqref" reference="Theta_z'"} yields $$[\Theta_{z,\beta}^{(1)},\xi_{i,0}\otimes 1]=-[\Theta_{z,\beta}^{(2)},\square(t_{i,1})] \quad \forall \; \beta\in \mathsf{Q}_+\setminus\{0\}, \, i\in {\mathbf I}.$$ Note that the left--hand side is zero if $\beta\in \alpha_i^\perp$, and the right--hand side is zero if $\beta\not\in \alpha_i^\perp$. Hence both sides are identically zero for all nonzero $\beta \in \mathsf{Q}_+$. Using this observation together with [\[Theta_z(s)-inv\]](#Theta_z(s)-inv){reference-type="eqref" reference="Theta_z(s)-inv"}, [\[Theta_z\]](#Theta_z){reference-type="eqref" reference="Theta_z"}, and [\[Theta_z\^2\]](#Theta_z^2){reference-type="eqref" reference="Theta_z^2"}, and the fact that $\operatorname{ad}(\mathsf{h})=3\operatorname{ad}(\EuScript{C}_2)$, we conclude that $$\begin{gathered}\label{Theta-die} [\Theta_z(s),x\otimes 1]=0=[\Theta_z(s),1\otimes x]\\ s[\EuScript{K}_z,\xi_{i,0}\otimes 1]=-[\Theta_z(s),\Delta^z(t_{i,1})]=-[\Theta_z(s),\square(t_{i,1})] \end{gathered}$$ for all $x\in \mathfrak{g}^\prime$ and $i\in {\mathbf I}$. From the first equality with $x\in \mathfrak{h}^\prime$, we find that the component $\Theta_{\beta}(s)$ is zero unless $\beta\in \cap_{i\in {\mathbf I}}\alpha_i^\perp$. As $s[\EuScript{K}_z,\xi_{i,0}\otimes 1]$ has no such component and $\mathrm{ad}(\square (t_{i,1}))$ is weight zero, we deduce from the second line that $$\label{Theta-die'} [\EuScript{K}_z,\xi_{i,0}\otimes 1]=0 \quad \forall\; i \in {\mathbf I}.
$$ To prove that $[\EuScript{K}_z,x\otimes 1]=0=[\EuScript{K}_z,1\otimes x]$ for general $x\in \mathfrak{g}^\prime$, it now suffices to show that $\EuScript{K}_z$ commutes with $\Delta^z(x_{i,0}^\pm)$ for each $i\in {\mathbf I}$. We have $$3[[\EuScript{C}_2\otimes 1, \Omega_z],\Delta^z(x_{i,0}^{\pm})]=\pm \frac{\hbar^2}{4}\mu_i [x_{i,0}^\pm \otimes 1, \Omega_z]+3[\EuScript{C}_2\otimes 1, [\Omega_z,\square^z(x_{i,0}^\pm)]].$$ By Lemma 4.2 and §6.3 of [@guay-nakajima-wendlandt], we have $[\Omega_z,\square^z(x_{i,0}^+)]=x_{i,0}^+\otimes \xi_{i,0}$ and $[\Omega_z,\square^z(x_{i,0}^-)]=-z^{-1}\xi_{i,0}\otimes x_{i,0}^-$. Hence, we obtain $$\begin{aligned} 3[[\EuScript{C}_2\otimes 1, \Omega_z],\Delta^z(x_{i,0}^{-})]&=-\frac{\hbar^2}{4}\mu_i [x_{i,0}^- \otimes 1, \Omega_z],\\ 3[[\EuScript{C}_2\otimes 1, \Omega_z],\Delta^z(x_{i,0}^{+})]&=\frac{\hbar^2}{4}\mu_i [x_{i,0}^+ \otimes 1, \Omega_z]+\frac{\hbar^2}{4}\mu_i x_{i,0}^+\otimes \xi_{i,0}=-\frac{\hbar^2}{4}z\mu_i [1\otimes x_{i,0}^+, \Omega_z]. \end{aligned}$$ Combining these calculations with Lemma [Lemma 1](#L:Theta_z){reference-type="ref" reference="L:Theta_z"} yields $[\EuScript{K}_z,\Delta^z(x_{i,0}^{\pm})]=0$ for all $i\in {\mathbf I}$, as desired. This completes the proof of Part [\[Theta:2\]](#Theta:2){reference-type="eqref" reference="Theta:2"}. ◻ ## Computing $\EuScript{K}_{n\delta}$ explicitly, I {#ssec:T-Gamma_i} By the results of the previous section, $\EuScript{K}_z$ is a series of the form $\sum_{n>0}\EuScript{K}_{n\delta}z^{n\mathsf{ht}(\delta)}$ with each coefficient $\EuScript{K}_{n\delta}$ belonging to the centralizer of $U(\mathfrak{g}^\prime)^{\otimes 2}$. The goal of this section and Section [5.4](#ssec:T-Gamma_i-2){reference-type="ref" reference="ssec:T-Gamma_i-2"} is to compute these coefficients explicitly.
Our starting point is the following lemma, which will also play a crucial role in Section [5.5](#ssec:Q_z-exp){reference-type="ref" reference="ssec:Q_z-exp"}. **Lemma 2**. *Let $n$ be a positive integer and set $\beta=n\delta$. Then, for each $i\in {\mathbf I}$, one has $$[\square(t_{i,1}),\Theta_\beta]=d_i\hbar \sum_{\alpha+\gamma=\beta}\alpha(\mathsf{h})\gamma(h_i)\left[\Omega_\alpha,\Omega_\gamma\right],$$ where the summation runs over all positive roots $\alpha,\gamma\in \Phi_+$ such that $\alpha+\gamma=\beta$. Moreover, the element $\Gamma_{i,\beta}=[\square(t_{i,1}),\EuScript{K}_\beta]$ has the following properties:* 1. *[\[Gamma:1\]]{#Gamma:1 label="Gamma:1"}For each $a,b\in \mathbb{C}$, one has $(\tau_a\otimes \tau_b)(\Gamma_{i,\beta})=\Gamma_{i,\beta}$.* 2. *[\[Gamma:2\]]{#Gamma:2 label="Gamma:2"} One has $$\begin{aligned} (\Delta^z\otimes \operatorname{Id})(\Gamma_{i,\beta})&=\Gamma_{i,\beta}^{13}+\Gamma_{i,\beta}^{23}z^{-\mathsf{ht}(\beta)}+\Gamma_{i,\beta}^-(z),\\ (\operatorname{Id}\otimes \Delta^z)(\Gamma_{i,\beta})&=\Gamma_{i,\beta}^{12}+\Gamma_{i,\beta}^{13}z^{\mathsf{ht}(\beta)} +\Gamma_{i,\beta}^+(z), \end{aligned}$$ where $\Gamma^-_{i,\beta}(z)$ and $\Gamma^+_{i,\beta}(z)$ are given explicitly by $$\begin{aligned} \Gamma^-_{i,\beta}(z)&=d_i\hbar \sum_{\alpha+\gamma=\beta}(\alpha(\mathsf{h})\gamma(h_i)-\gamma(\mathsf{h})\alpha(h_i))[\Omega_\alpha^{13},\Omega_\gamma^{23}]z^{-\mathsf{ht}(\gamma)}\\[-1em] &\hspace{7em} + \hbar \beta(\mathsf{h}) \mathrm{ad}(\xi_{i,0}^{(1)})\left( [\Omega_z^{12},\Omega_{\beta}^{13}+\Omega_\beta^{23}z^{-\mathsf{ht}(\beta)}]\right), \\[3pt] \Gamma^+_{i,\beta}(z)&=d_i\hbar \sum_{\alpha+\gamma=\beta}\left(\alpha(\mathsf{h})\gamma(h_i)-\gamma(\mathsf{h})\alpha(h_i)\right) [\Omega_\alpha^{12},\Omega_\gamma^{13}]z^{\mathsf{ht}(\gamma)}\\[-1em] &\hspace{7em} +\hbar \beta(\mathsf{h})\mathrm{ad}(\xi_{i,0}^{(2)})\left([\Omega_z^{23},\Omega_\beta^{12}+\Omega_\beta^{13}z^{\mathsf{ht}(\beta)}]\right), \end{aligned}$$ with both summations taken over all
$\alpha,\gamma\in \Phi_+$ such that $\alpha+\gamma=\beta$.* 3. *[\[Gamma:3\]]{#Gamma:3 label="Gamma:3"} The series $\Gamma_{i,\beta}^+(z)$ and $\Gamma_{i,\beta}^-(z)$ satisfy $$\Gamma_{i,\beta}^\pm(z)\in (\mathfrak{n}^-\otimes \mathfrak{n}^\pm \otimes \mathfrak{n}^+)[z^{\pm 1}].$$* *[Proof]{.smallcaps}.* From the first identity of Lemma [Lemma 1](#L:Theta_z){reference-type="ref" reference="L:Theta_z"}, we obtain $$[\Theta_z,\square(t_{i,1})]+\hbar[\Theta_z,[\xi_{i,0}\otimes 1,\Omega_z]]=-[\xi_{i,0}\otimes 1,[\square(\EuScript{C}_3),\Omega_z]].$$ As $\mathrm{ad}(\EuScript{C}_3)$ is weight zero, the right-hand side has no $Y_{\hbar}(\mathfrak{g})_{-\beta}\otimes Y_{\hbar}(\mathfrak{g})_{\beta}$ component. Since $\mathrm{ad}(t_{i,1})$ is weight zero, this implies that $$\label{sqt_i,theta_b:1} [\square(t_{i,1}),\Theta_\beta]=\hbar\sum_{\alpha+\gamma=\beta}[\Theta_\alpha,[\xi_{i,0}\otimes 1,\Omega_\gamma]] =-d_i\hbar\sum_{\alpha+\gamma=\beta}\gamma(h_i)[\Theta_\alpha,\Omega_\gamma],$$ where the summations are taken over all $\gamma\in\Phi_+$ and $\alpha\in \mathsf{Q}_+\setminus\{0\}$ such that $\alpha+\gamma=\beta$. However, for the term $\gamma(h_i)[\Theta_\alpha,\Omega_\gamma]$ to provide a nonzero contribution, we must have $\gamma\notin \alpha_i^\perp$. As $\alpha+\gamma=\beta$ and $\beta\in \alpha_i^\perp$, this implies that $\alpha\notin \alpha_i^\perp$ as well. By Part [\[Theta:3\]](#Theta:3){reference-type="eqref" reference="Theta:3"} of Proposition [Proposition 3](#P:Theta){reference-type="ref" reference="P:Theta"}, for any such $\alpha$ we have $$\EuScript{K}_\alpha=\Theta_\alpha+\alpha(\mathsf{h})\Omega_\alpha=0.$$ The formula for $[\square(t_{i,1}),\Theta_\beta]$ stated in the lemma now follows from [\[sqt_i,theta_b:1\]](#sqt_i,theta_b:1){reference-type="eqref" reference="sqt_i,theta_b:1"} after replacing $\Theta_\alpha$ by $-\alpha(\mathsf{h})\Omega_\alpha$.
We now turn to establishing Parts [\[Gamma:1\]](#Gamma:1){reference-type="eqref" reference="Gamma:1"}--[\[Gamma:3\]](#Gamma:3){reference-type="eqref" reference="Gamma:3"} of the lemma. *[Proof of Part [\[Gamma:1\]](#Gamma:1){reference-type="eqref" reference="Gamma:1"}]{.smallcaps}.* From what has been proven above, we have $$\Gamma_{i,\beta}=d_i\hbar \sum_{\alpha+\gamma=\beta}\alpha(\mathsf{h})\gamma(h_i)\left[\Omega_\alpha,\Omega_\gamma\right]+\beta(\mathsf{h})[\square(t_{i,1}),\Omega_\beta].$$ It is therefore sufficient to prove that $[\square(t_{i,1}),\Omega_\beta]$ is fixed by $\tau_a\otimes \tau_b$ for any $a,b\in \mathbb{C}$. We have $$\begin{aligned} (\tau_a\otimes \tau_b)([\square(t_{i,1}),\Omega_\beta])&=[\square(t_{i,1})+a\xi_{i,0}\otimes 1 +1\otimes b\xi_{i,0},\Omega_\beta]\\ &=[\square(t_{i,1}),\Omega_\beta]+(\beta,\alpha_i)(b-a)\Omega_\beta. \end{aligned}$$ Since $(\beta,\alpha_i)=n(\delta,\alpha_i)=0$, this coincides with $[\square(t_{i,1}),\Omega_\beta]$. ◻ *[Proof of Part [\[Gamma:2\]](#Gamma:2){reference-type="eqref" reference="Gamma:2"}]{.smallcaps}.* By the first assertion of the lemma, we have $$\begin{aligned} &(\Delta^z\otimes \operatorname{Id})([\square(t_{i,1}),\Theta_\beta])\\ & =d_i\hbar \!\sum_{\alpha+\gamma=\beta}\!\alpha(\mathsf{h})\gamma(h_i)\left[\Omega_\alpha^{13}+\Omega_\alpha^{23}z^{-\mathsf{ht}(\alpha)},\Omega_\gamma^{13}+\Omega_\gamma^{23}z^{-\mathsf{ht}(\gamma)}\right]\\ & =(\square^z\otimes \operatorname{Id})[\square(t_{i,1}),\Theta_\beta]\\ &\hspace{5em}+d_i\hbar \!\sum_{\alpha+\gamma=\beta}\!\alpha(\mathsf{h})\gamma(h_i) ([\Omega_\alpha^{13},\Omega_\gamma^{23}]z^{-\mathsf{ht}(\gamma)} +[\Omega_\alpha^{23},\Omega_\gamma^{13}]z^{-\mathsf{ht}(\alpha)})\\ & =(\square^z\otimes \operatorname{Id})[\square(t_{i,1}),\Theta_\beta]+d_i\hbar \!\sum_{\alpha+\gamma=\beta}\!(\alpha(\mathsf{h})\gamma(h_i)-\gamma(\mathsf{h})\alpha(h_i))[\Omega_\alpha^{13},\Omega_\gamma^{23}]z^{-\mathsf{ht}(\gamma)}.
\end{aligned}$$ Similarly, we have $$\begin{aligned} &(\Delta^z\otimes \operatorname{Id})[\square(t_{i,1}),\Omega_\beta]\\ &= [t_{i,1}^{(3)}+\square(t_{i,1})\otimes 1+\hbar[\xi_{i,0}\otimes 1,\Omega_z]\otimes 1, \Omega_{\beta}^{13}+\Omega_\beta^{23}z^{-\mathsf{ht}(\beta)}]\\ & =(\square^z\otimes \operatorname{Id})[\square(t_{i,1}),\Omega_\beta]+\hbar [\mathrm{ad}(\xi_{i,0}^{(1)})(\Omega_z^{12}),\Omega_{\beta}^{13}+\Omega_\beta^{23}z^{-\mathsf{ht}(\beta)}]\\ & =(\square^z\otimes \operatorname{Id})[\square(t_{i,1}),\Omega_\beta]+\hbar \mathrm{ad}(\xi_{i,0}^{(1)})( [\Omega_z^{12},\Omega_{\beta}^{13}+\Omega_\beta^{23}z^{-\mathsf{ht}(\beta)}]), \end{aligned}$$ where in the last line we have used that $\mathrm{ad}(\xi_{i,0}^{(1)})$ commutes with $\Omega_{\beta}^{13}$ and $\Omega_{\beta}^{23}$ as $\beta=n\delta$. As $\Gamma_{i,\beta}=[\square(t_{i,1}),\Theta_\beta+\beta(\mathsf{h})\Omega_\beta]$, this implies that $$(\Delta^z\otimes \operatorname{Id})(\Gamma_{i,\beta})=\Gamma_{i,\beta}^{13}+\Gamma_{i,\beta}^{23}z^{-\mathsf{ht}(\beta)}+\Gamma_{i,\beta}^-(z)$$ with $\Gamma_{i,\beta}^-(z)$ as in the statement of the lemma. The computation of $(\operatorname{Id}\otimes \Delta^z)(\Gamma_{i,\beta})$ is nearly identical, and hence omitted. ◻ *[Proof of Part [\[Gamma:3\]](#Gamma:3){reference-type="eqref" reference="Gamma:3"}]{.smallcaps}.* As the proofs for $\Gamma_{i,\beta}^-(z)$ and $\Gamma_{i,\beta}^+(z)$ are the same, we will focus on the former. By the formulas of Part [\[Gamma:2\]](#Gamma:2){reference-type="eqref" reference="Gamma:2"}, it is sufficient to prove that $$\begin{gathered} \mathrm{ad}(\xi_{i,0}^{(1)})\left( [\Omega_z^{12},\Omega_{\beta}^{13}+\Omega_\beta^{23}z^{-\mathsf{ht}(\beta)}]\right) \in (\mathfrak{n}^-\otimes \mathfrak{n}^-\otimes \mathfrak{n}^+)[z^{-1}]. 
\end{gathered}$$ It is clear that $\mathrm{ad}(\xi_{i,0}^{(1)})\left( [\Omega_z^{12},\Omega_{\beta}^{13}+\Omega_\beta^{23}z^{-\mathsf{ht}(\beta)}]\right)$ is a Laurent series in $z$ with coefficients in $\mathfrak{n}^-\otimes \mathfrak{g}^\prime\otimes \mathfrak{n}^+$. Let $\widetilde{\Omega}_z=\Omega_z+(\Omega^{-}_{z^{-1}})^{21} \in \mathfrak{g}^{\otimes 2}[\![z^{\pm 1}]\!]$. Note that this is just $(\operatorname{Id}\otimes \upsigma_z)(\widetilde{\Omega})$ where $\widetilde{\Omega}$ is the *full* Casimir tensor of $\mathfrak{g}$. It satisfies $$[\widetilde{\Omega}_z,\square^z(x)]=0\quad \forall\; x\in \mathfrak{g}.$$ Hence, we have $$[\Omega_z^{12},\Omega_{\beta}^{13}+\Omega_\beta^{23}z^{-\mathsf{ht}(\beta)}]= -[(\Omega^{-}_{z^{-1}})^{21}, \Omega_{\beta}^{13}+\Omega_\beta^{23}z^{-\mathsf{ht}(\beta)}],$$ which is a formal series in $z^{-1}$ with coefficients in $\mathfrak{g}^\prime\otimes \mathfrak{n}^-\otimes \mathfrak{n}^+$. It follows that $$\mathrm{ad}(\xi_{i,0}^{(1)})\left( [\Omega_z^{12},\Omega_{\beta}^{13}+\Omega_\beta^{23}z^{-\mathsf{ht}(\beta)}]\right)\in (\mathfrak{n}^-\otimes \mathfrak{g}^\prime\otimes \mathfrak{n}^+)[z^{-1};z]\!]\cap (\mathfrak{g}^\prime\otimes \mathfrak{n}^-\otimes \mathfrak{n}^+)[\![z^{-1}]\!].$$ This shows that $\mathrm{ad}(\xi_{i,0}^{(1)})\left( [\Omega_z^{12},\Omega_{\beta}^{13}+\Omega_\beta^{23}z^{-\mathsf{ht}(\beta)}]\right)$ is a polynomial in $z^{-1}$ with coefficients in $\mathfrak{n}^-\otimes \mathfrak{n}^-\otimes \mathfrak{n}^+$, and thus that $\Gamma_{i,\beta}^-(z)\in (\mathfrak{n}^-\otimes \mathfrak{n}^-\otimes \mathfrak{n}^+)[z^{-1}]$. ◻  ◻ ## Computing $\EuScript{K}_{n\delta}$ explicitly, II {#ssec:T-Gamma_i-2} We now turn towards computing each coefficient $\EuScript{K}_{n\delta}$ of the series $\EuScript{K}_z$.
For the sake of brevity, let us define a linear operator $\mathrm{ad}_{x,y}$ on $Y_{\hbar}(\mathfrak{g})$, for each $x,y\in Y_{\hbar}(\mathfrak{g})$, by setting $$\mathrm{ad}_{x,y}=\mathrm{ad}(x)\circ \mathrm{ad}(y):Y_{\hbar}(\mathfrak{g})\to Y_{\hbar}(\mathfrak{g}).$$ For each $n>0$ and $\ell\in \{1,2\}$, we then define $$\begin{gathered} \EuScript{K}_{i,n\delta;\ell} = (\mathrm{ad}^{(\ell)}_{x_{i,1}^-,x_{i,0}^+}+\mathrm{ad}^{(\ell)}_{x_{i,1}^+,x_{i,0}^-})(\Gamma_{i,n\delta}) -\hbar \mathrm{ad}^{(\ell)}_{x_{i,0}^-,x_{i,0}^+}(\Gamma_{i,n\delta}) \cdot \xi_{i,0}^{(\ell)},\end{gathered}$$ where the superscript in $\mathrm{ad}^{(\ell)}_{x,y}$ indicates that it operates in the $\ell$-th tensor factor. **Proposition 4**. *Let $n$ be a positive integer and fix $\ell\in \{1,2\}$. Then $\EuScript{K}_{n\delta}$ is given explicitly by $$\EuScript{K}_{n\delta}=(-1)^\ell\frac{3}{2{n\delta}(\mathsf{h})}\sum_{i\in {\mathbf I}} \frac{a_i}{d_i}\EuScript{K}_{i,{n\delta};\ell}.$$* *[Proof]{.smallcaps}.* We shall establish the claimed formula for $\EuScript{K}_{n\delta}$ in the case where $\ell=1$. The proof in the $\ell=2$ case follows by a simple modification of the same argument.
To begin, note that since $[3\EuScript{C}_2\otimes 1,\EuScript{K}_{n\delta}]=-n\delta(\mathsf{h})\EuScript{K}_{n\delta}$ (and $\delta(\mathsf{h})\neq 0$), it is sufficient to establish that $$\label{[C_2,K_beta]} [\EuScript{C}_2\otimes 1,\EuScript{K}_{n\delta}]=\sum_{i\in {\mathbf I}} \frac{a_i}{2d_i}\EuScript{K}_{i,{n\delta};1}.$$ By Part [\[Theta:2\]](#Theta:2){reference-type="eqref" reference="Theta:2"} of Proposition [Proposition 3](#P:Theta){reference-type="ref" reference="P:Theta"}, we have $$[x_{i,0}^\pm \otimes 1,\Gamma_{i,{n\delta}}]=[x_{i,0}^\pm \otimes 1,[\square(t_{i,1}),\EuScript{K}_{n\delta}]]=\mp 2d_i[x_{i,1}^\pm \otimes 1, \EuScript{K}_{n\delta}],$$ and therefore $$[x_{i,1}^\pm \otimes 1, \EuScript{K}_{n\delta}]=\mp\frac{1}{2d_i}[x_{i,0}^\pm \otimes 1,\Gamma_{i,{n\delta}}] \quad \forall \; i\in {\mathbf I}.$$ It follows that, for each $i\in {\mathbf I}$, we have $$\begin{aligned} [\xi_{i,1}\otimes 1,\EuScript{K}_{n\delta}] &= [[x_{i,1}^+\otimes 1,x_{i,0}^-\otimes 1],\EuScript{K}_{n\delta}] \\ &= -\frac{1}{2d_i}[[x_{i,0}^+\otimes 1,\Gamma_{i,{n\delta}}], x_{i,0}^-\otimes 1] =\frac{1}{2d_i}\mathrm{ad}_{x_{i,0}^-,x_{i,0}^+}^{(1)}(\Gamma_{i,{n\delta}}) \end{aligned}$$ where in the second equality we have used that, by Proposition [Proposition 3](#P:Theta){reference-type="ref" reference="P:Theta"}, $[x_{i,0}^-\otimes 1, \EuScript{K}_{n\delta}]=0$. Similarly, we obtain $$\begin{aligned} [\xi_{i,2}\otimes 1,\EuScript{K}_{n\delta}] &= [[x_{i,1}^+\otimes 1,x_{i,1}^-\otimes 1],\EuScript{K}_{n\delta}] \\ & =-[x_{i,1}^-\otimes 1,[x_{i,1}^+\otimes 1,\EuScript{K}_{n\delta}]]+[x_{i,1}^+\otimes 1,[x_{i,1}^-\otimes 1,\EuScript{K}_{n\delta}]]\\ & =\frac{1}{2d_i}(\mathrm{ad}_{x_{i,1}^-,x_{i,0}^+}^{(1)}+\mathrm{ad}_{x_{i,1}^+,x_{i,0}^-}^{(1)})(\Gamma_{i,{n\delta}}). 
\end{aligned}$$ Since $t_{i,2}=\xi_{i,2}-\hbar \xi_{i,1}\xi_{i,0}+\frac{\hbar^2}{3}\xi_{i,0}^3$ (see [\[eq:ti2\]](#eq:ti2){reference-type="eqref" reference="eq:ti2"}) and $[\xi_{i,0}\otimes 1,\EuScript{K}_{n\delta}]=0$, the above computations yield $$\begin{aligned} 2d_i[t_{i,2}\otimes 1,\EuScript{K}_{n\delta}] &= (\mathrm{ad}_{x_{i,1}^-,x_{i,0}^+}^{(1)}+\mathrm{ad}_{x_{i,1}^+,x_{i,0}^-}^{(1)})(\Gamma_{i,{n\delta}})-\hbar\mathrm{ad}_{x_{i,0}^-,x_{i,0}^+}^{(1)}(\Gamma_{i,{n\delta}})\cdot \xi_{i,0}^{(1)} \\ &=\EuScript{K}_{i,{n\delta};1}. \end{aligned}$$ As $\EuScript{C}_2=\sum_{i\in {\mathbf I}} a_i t_{i,2}$, this implies the formula [\[\[C_2,K_beta\]\]](#[C_2,K_beta]){reference-type="eqref" reference="[C_2,K_beta]"}. ◻ ## The series $\mathrm{Q}_z$ {#ssec:Q_z-exp} In this section, we use the results of the previous subsections to construct a series $\mathrm{Q}_z$ satisfying all the properties listed in Part [\[map-T:3\]](#map-T:3){reference-type="eqref" reference="map-T:3"} of Theorem [Theorem 6](#T:map-T){reference-type="ref" reference="T:map-T"}. To begin, recall from Proposition [Proposition 4](#P:K-exp){reference-type="ref" reference="P:K-exp"} that for each positive integer $n$, index $i\in {\mathbf I}$, and number $\ell\in \{1,2\}$, we defined $$\EuScript{K}_{i,n\delta;\ell} = (\mathrm{ad}^{(\ell)}_{x_{i,1}^-,x_{i,0}^+}+\mathrm{ad}^{(\ell)}_{x_{i,1}^+,x_{i,0}^-})(\Gamma_{i,n\delta}) -\hbar \mathrm{ad}^{(\ell)}_{x_{i,0}^-,x_{i,0}^+}(\Gamma_{i,n\delta}) \cdot \xi_{i,0}^{(\ell)},$$ where $\xi_{i,0}^{(\ell)}=1^{\otimes (\ell-1)}\otimes \xi_{i,0}\otimes 1^{\otimes (2-\ell)}$ and $\Gamma_{i,n\delta}$ and $\mathrm{ad}^{(\ell)}_{x,y}$ are given as follows: 1. $\Gamma_{i,n\delta}=[\square(t_{i,1}),\EuScript{K}_{n\delta}]$. 
By Lemma [Lemma 2](#L:Gamma-i){reference-type="ref" reference="L:Gamma-i"}, this may be written equivalently as $$\Gamma_{i,n\delta}=d_i\hbar \sum_{\alpha+\gamma=n\delta}\alpha(\mathsf{h})\gamma(h_i)\left[\Omega_\alpha,\Omega_\gamma\right]+n\delta(\mathsf{h})[\square(t_{i,1}),\Omega_{n\delta}],$$ with $\mathsf{h}=\frac{\hbar^2}{\zeta}(d-\sum_{i\in {\mathbf I}}\zeta_id_i h_i )\in \mathfrak{h}$, as in [\[3C_2-preimage\]](#3C_2-preimage){reference-type="eqref" reference="3C_2-preimage"}. 2. $\mathrm{ad}_{x,y}=\mathrm{ad}(x)\circ \mathrm{ad}(y)$ for all $x,y\in Y_\hbar(\mathfrak{g})$, while $$\mathrm{ad}_{x,y}^{(1)}=\mathrm{ad}_{x,y}\otimes \operatorname{Id}\quad \text{ and }\quad \mathrm{ad}_{x,y}^{(2)}=\operatorname{Id}\otimes \mathrm{ad}_{x,y}.$$ We now introduce the distinguished series $\mathrm{Q}_z\in Y_{\hbar}(\mathfrak{g})^{\otimes 2}[\![z]\!]$ by setting $$\mathrm{Q}_z=-\sum_{n>0} \frac{1}{n\delta(\mathsf{h})}\EuScript{K}_{n\delta}z^{n\mathsf{ht}(\delta)}.$$ By Proposition [Proposition 4](#P:K-exp){reference-type="ref" reference="P:K-exp"}, the coefficient $\mathrm{Q}_{n\delta}=-\frac{1}{n\delta(\mathsf{h})}\EuScript{K}_{n\delta}$ is given by $$\label{Q:2-defs} \mathrm{Q}_{n\delta}=\frac{3}{2n^2\delta(\mathsf{h})^2}\sum_{i\in {\mathbf I}} \frac{a_i}{d_i}\EuScript{K}_{i,n\delta;1} = -\frac{3}{2n^2\delta(\mathsf{h})^2}\sum_{i\in {\mathbf I}} \frac{a_i}{d_i}\EuScript{K}_{i,n\delta;2}.$$ The following proposition completes the proof of Theorem [Theorem 6](#T:map-T){reference-type="ref" reference="T:map-T"} by showing that $\mathrm{Q}_z$ satisfies all the conditions spelled out in Part [\[map-T:3\]](#map-T:3){reference-type="eqref" reference="map-T:3"} therein. **Proposition 5**. *The series $\mathrm{Q}_z$ has the following properties:* 1. *[\[Qz:1\]]{#Qz:1 label="Qz:1"} For each $h\in \mathfrak{h}$, one has $$\Delta^z(\operatorname{T}(h))=\operatorname{T}(h)\otimes 1 + 1\otimes \operatorname{T}(h) + \hbar[h\otimes 1, \Omega_z+\mathrm{Q}_z].$$* 2. 
*[\[Qz:2\]]{#Qz:2 label="Qz:2"} For each $n>0$, one has $$\mathrm{Q}_{n\delta}\in Y^{-}_{\hbar}(\mathfrak{g})_{-n\delta}\otimes Y^{+}_{\hbar}(\mathfrak{g})_{n\delta}.$$* 3. *[\[Qz:3\]]{#Qz:3 label="Qz:3"} For each $a,b\in \mathbb{C}$, one has $$(\tau_a\otimes \tau_b)(\mathrm{Q}_z)=\mathrm{Q}_{z}.$$* 4. *[\[Qz:4\]]{#Qz:4 label="Qz:4"} The tensor factors of $\mathrm{Q}_z$ are primitive: $$(\Delta^w\otimes \operatorname{Id})(\mathrm{Q}_{z})=\mathrm{Q}_{z}^{13}+\mathrm{Q}_{z/w}^{23} \quad \text{ and }\quad (\operatorname{Id}\otimes \Delta^w)(\mathrm{Q}_z)=\mathrm{Q}_z^{12}+\mathrm{Q}_{zw}^{13}.$$* 5. *[\[Qz:5\]]{#Qz:5 label="Qz:5"} $\mathrm{Q}_{z}$ belongs to the centralizer of $U(\mathfrak{g}^\prime)^{\otimes 2}$ in $Y_{\hbar}(\mathfrak{g})^{\otimes 2}[\![z]\!]$.* *[Proof]{.smallcaps}.* Note that Part [\[Qz:5\]](#Qz:5){reference-type="eqref" reference="Qz:5"} follows immediately from the definition of $\mathrm{Q}_z$ and Part [\[Theta:2\]](#Theta:2){reference-type="eqref" reference="Theta:2"} of Proposition [Proposition 3](#P:Theta){reference-type="ref" reference="P:Theta"}. We shall prove the remaining four statements in order. *[Proof of [\[Qz:1\]](#Qz:1){reference-type="eqref" reference="Qz:1"}]{.smallcaps}.* If $h\in \mathfrak{h}^\prime$, then $[h\otimes 1,\mathrm{Q}_z]=0$ and so the claimed formula for $\Delta^z(\operatorname{T}(h))$ holds by definition of $\Delta^z$ (see [\[Delta\^z:def\]](#Delta^z:def){reference-type="eqref" reference="Delta^z:def"}). It thus suffices to verify the formula in the case where $h=\mathsf{h}$. Since $\operatorname{T}(\mathsf{h})=\EuScript{C}_3$, this amounts to checking $$\Delta^z(\EuScript{C}_3)=\EuScript{C}_3\otimes 1 + 1\otimes \EuScript{C}_3+\hbar[\mathsf{h}\otimes 1,\Omega_z+\mathrm{Q}_z].$$ This is a consequence of the definition of $\mathrm{Q}_z$. Indeed, we have $$[\mathsf{h}\otimes 1,\mathrm{Q}_z]=-\sum_{n>0}n\delta(\mathsf{h})\mathrm{Q}_{n\delta}z^{n\mathsf{ht}(\delta)}=\EuScript{K}_z=\Theta_z-[\mathsf{h}\otimes 1,\Omega_z]. 
\qedhere$$ ◻ *[Proof of [\[Qz:2\]](#Qz:2){reference-type="eqref" reference="Qz:2"}]{.smallcaps}.* By Lemma [Lemma 2](#L:Gamma-i){reference-type="ref" reference="L:Gamma-i"}, we have $$\Gamma_{i,n\delta}=d_i\hbar \sum_{\alpha+\gamma=n\delta}\alpha(\mathsf{h})\gamma(h_i)\left[\Omega_\alpha,\Omega_\gamma\right]+n\delta(\mathsf{h})[\square(t_{i,1}),\Omega_{n\delta}] \in Y^{-}_{\hbar}(\mathfrak{g})_{-n\delta}\otimes Y^{+}_{\hbar}(\mathfrak{g})_{n\delta}.$$ It follows by definition of $\EuScript{K}_{i,n\delta;1}$ and $\EuScript{K}_{i,n\delta;2}$ that $$\EuScript{K}_{i,n\delta;1}\in Y_{\hbar}(\mathfrak{g})_{-n\delta}\otimes Y^{+}_{\hbar}(\mathfrak{g})_{n\delta} \quad \text{ and }\quad \EuScript{K}_{i,n\delta;2}\in Y^{-}_{\hbar}(\mathfrak{g})_{-n\delta}\otimes Y_{\hbar}(\mathfrak{g})_{n\delta}.$$ Thus, we conclude from [\[Q:2-defs\]](#Q:2-defs){reference-type="eqref" reference="Q:2-defs"} that $\mathrm{Q}_{n\delta}$ belongs to $Y^{-}_{\hbar}(\mathfrak{g})_{-n\delta}\otimes Y^{+}_{\hbar}(\mathfrak{g})_{n\delta}$. ◻ *[Proof of [\[Qz:3\]](#Qz:3){reference-type="eqref" reference="Qz:3"}]{.smallcaps}.* Fix a positive integer $n$ and complex numbers $a,b\in \mathbb{C}$. Then, by definition of $\EuScript{K}_{i,n\delta;1}$ and $\EuScript{K}_{i,n\delta;2}$, we have $$\begin{aligned} &(\operatorname{Id}\otimes \tau_b)(\EuScript{K}_{i,n\delta;1})\\ &=(\mathrm{ad}^{(1)}_{x_{i,1}^-,x_{i,0}^+}+\mathrm{ad}^{(1)}_{x_{i,1}^+,x_{i,0}^-})((\operatorname{Id}\otimes \tau_b)\Gamma_{i,{n\delta}}) -\hbar \mathrm{ad}^{(1)}_{x_{i,0}^-,x_{i,0}^+}((\operatorname{Id}\otimes \tau_b)\Gamma_{i,{n\delta}}) \cdot \xi_{i,0}^{(1)}, \\[5pt] &(\tau_a\otimes \operatorname{Id})(\EuScript{K}_{i,n\delta;2})\\ & =(\mathrm{ad}^{(2)}_{x_{i,1}^-,x_{i,0}^+}+\mathrm{ad}^{(2)}_{x_{i,1}^+,x_{i,0}^-})((\tau_a\otimes \operatorname{Id})\Gamma_{i,{n\delta}}) -\hbar \mathrm{ad}^{(2)}_{x_{i,0}^-,x_{i,0}^+}((\tau_a\otimes \operatorname{Id})\Gamma_{i,{n\delta}}) \cdot \xi_{i,0}^{(2)}. 
\end{aligned}$$ Since $(\tau_a\otimes \tau_b)(\Gamma_{i,{n\delta}})=\Gamma_{i,{n\delta}}$ by Part [\[Gamma:1\]](#Gamma:1){reference-type="eqref" reference="Gamma:1"} of Lemma [Lemma 2](#L:Gamma-i){reference-type="ref" reference="L:Gamma-i"}, we get that $(\operatorname{Id}\otimes \tau_b)(\EuScript{K}_{i,n\delta;1})=\EuScript{K}_{i,n\delta;1}$ and $(\tau_a\otimes \operatorname{Id})(\EuScript{K}_{i,n\delta;2})=\EuScript{K}_{i,n\delta;2}$. By [\[Q:2-defs\]](#Q:2-defs){reference-type="eqref" reference="Q:2-defs"}, this implies that $$(\tau_a\otimes \tau_b)(\mathrm{Q}_z)=\mathrm{Q}_z \quad \forall\; a,b\in \mathbb{C}. \qedhere$$ ◻ *[Proof of [\[Qz:4\]](#Qz:4){reference-type="eqref" reference="Qz:4"}]{.smallcaps}.* By Part [\[Gamma:2\]](#Gamma:2){reference-type="eqref" reference="Gamma:2"} of Lemma [Lemma 2](#L:Gamma-i){reference-type="ref" reference="L:Gamma-i"}, we have $$\begin{aligned} &((\Delta^z-\square^z)\otimes \operatorname{Id})(\EuScript{K}_{i,{n\delta};2})\\ &=(\mathrm{ad}^{(3)}_{x_{i,1}^-,x_{i,0}^+}+\mathrm{ad}^{(3)}_{x_{i,1}^+,x_{i,0}^-})(\Gamma_{i,{n\delta}}^-(z)) -\hbar \mathrm{ad}^{(3)}_{x_{i,0}^-,x_{i,0}^+}(\Gamma_{i,{n\delta}}^-(z)) \cdot \xi_{i,0}^{(3)}. 
\end{aligned}$$ By Part [\[Gamma:3\]](#Gamma:3){reference-type="eqref" reference="Gamma:3"} of Lemma [Lemma 2](#L:Gamma-i){reference-type="ref" reference="L:Gamma-i"} and Proposition [Proposition 4](#P:K-exp){reference-type="ref" reference="P:K-exp"}, this yields that $$\label{K^-:lives} \EuScript{K}_{n\delta}^-(z)=((\Delta^z-\square^z)\otimes \operatorname{Id})(\EuScript{K}_{n\delta})\in (\mathfrak{n}^-\otimes \mathfrak{n}^-\otimes Y_\hbar(\mathfrak{g}))[z^{-1}].$$ Similarly, Lemma [Lemma 2](#L:Gamma-i){reference-type="ref" reference="L:Gamma-i"} and Proposition [Proposition 4](#P:K-exp){reference-type="ref" reference="P:K-exp"} imply that $$\label{K^+:lives} \EuScript{K}_{n\delta}^+(z)=(\operatorname{Id}\otimes(\Delta^z-\square^z))(\EuScript{K}_{n\delta})\in (Y_{\hbar}(\mathfrak{g})\otimes \mathfrak{n}^+\otimes \mathfrak{n}^+)[z].$$ Note that, by definition of $\mathrm{Q}_z$, the statement of Part [\[Qz:4\]](#Qz:4){reference-type="eqref" reference="Qz:4"} is equivalent to the assertion that $\EuScript{K}_{n\delta}^+(z)=0=\EuScript{K}_{n\delta}^-(z)$. This is established in the following claim. *Claim*. $\EuScript{K}_{n\delta}^-(z)$ and $\EuScript{K}_{n\delta}^+(z)$ are both equal to $0$. *[Proof of Claim]{.smallcaps}.* Define $\Theta^+_{z,w}$ and $\Theta^-_{z,w}$ by the equations $$(\Delta^z\otimes \operatorname{Id})(\Theta_{zw})=\Theta_{zw}^{13}+\Theta_{w}^{23}+\Theta_{z,w}^- \quad \text{ and }\quad (\operatorname{Id}\otimes \Delta^w)(\Theta_z)=\Theta_z^{12}+\Theta_{zw}^{13}+\Theta^+_{z,w}.$$ By the twisted coassociativity of $\Delta^z$ (see [\[twisted-coalg\]](#twisted-coalg){reference-type="eqref" reference="twisted-coalg"}), we have $$\begin{aligned} 0=&\hbar^{-1}\left((\Delta^z\otimes \operatorname{Id})\circ \Delta^{zw}-(\operatorname{Id}\otimes \Delta^w)\circ \Delta^z\right)(\EuScript{C}_3)\\ = &\Theta_z^{12}-\Theta_{w}^{23}+ (\Delta^z\otimes \operatorname{Id})(\Theta_{zw})-(\operatorname{Id}\otimes \Delta^w)(\Theta_z) =\Theta_{z,w}^--\Theta_{z,w}^+. 
\end{aligned}$$ Hence, we have $\Theta_{z,w}^+=\Theta_{z,w}^-$. Therefore, we drop the superscript and write $\Theta_{z,w}$ for $\Theta_{z,w}^\pm$. Next, observe that since the coefficients of $[\mathsf{h}\otimes 1,\Omega_z]$ belong to $\mathfrak{g}\otimes \mathfrak{g}$, the element $\EuScript{K}_z=\Theta_z-[\mathsf{h}\otimes 1,\Omega_z]$ satisfies $$(\Delta^z\otimes \operatorname{Id})(\EuScript{K}_{zw})=\EuScript{K}_{zw}^{13}+\EuScript{K}_{w}^{23}+\Theta_{z,w} \quad \text{ and }\quad (\operatorname{Id}\otimes \Delta^w)(\EuScript{K}_z)=\EuScript{K}_z^{12}+\EuScript{K}_{zw}^{13}+\Theta_{z,w}.$$ In particular, we have $$\sum_{n>0}\EuScript{K}_{n\delta}^-(z)(zw)^{n\mathsf{ht}(\delta)}=\Theta_{z,w}= \sum_{n>0}\EuScript{K}_{n\delta}^+(w)z^{n\mathsf{ht}(\delta)}.$$ However, by [\[K\^-:lives\]](#K^-:lives){reference-type="eqref" reference="K^-:lives"} and [\[K\^+:lives\]](#K^+:lives){reference-type="eqref" reference="K^+:lives"}, this is only possible if both the left-hand side and right-hand side are identically zero. Therefore, $\EuScript{K}_{n\delta}^-(z)=0=\EuScript{K}_{n\delta}^+(w)$ for each positive integer $n$, as desired. This completes the proof of Part [\[Qz:4\]](#Qz:4){reference-type="eqref" reference="Qz:4"}, and thus the proof of the proposition. ◻  ◻  ◻ # Construction of $\mathcal{R}^-(s)$ {#sec:neg-R} In this section, we show that the standard and Drinfeld tensor products on $\mathcal{O}(Y_{\hbar}(\mathfrak{g}))$ are conjugate to each other by a triangular element $\mathcal{R}^-(s)$. 
In other words, $\mathcal{R}^-(s)$ is a rational tensor structure on the identity functor $$(\operatorname{Id},\mathcal{R}^-(s)) : \left(\mathcal{O}(Y_{\hbar}(\mathfrak{g})), \underset{\scriptscriptstyle{\operatorname{D},s}}{\otimes}\right)\to \left(\mathcal{O}(Y_{\hbar}(\mathfrak{g})), \underset{\scriptscriptstyle{\operatorname{ KM},s}}{\otimes}\right).$$ We refer the reader to Theorem [Theorem 8](#thm:R^--reps){reference-type="ref" reference="thm:R^--reps"} below for the precise meaning of this statement. Our proof follows, almost verbatim, the one for finite type [@GTLW Thm. 4.1], with some subtle differences highlighted in this section. Additionally, the argument in *loc. cit.* rests on the linear map $\operatorname{T}:\mathfrak{h}\to Y^{0}_{\hbar}(\mathfrak{g})$ and the formulae for $\Delta_s(\operatorname{T}(h))$. For affine Yangians, the map only exists, *a priori*, as a map $\mathfrak{h}^\prime\to Y^{0}_{\hbar}(\mathfrak{g})$. This is the reason we had to extend it, using $\EuScript{C}_3$, as in Theorem [Theorem 6](#T:map-T){reference-type="ref" reference="T:map-T"}. ## The formal series $\mathcal{R}^-(s,z)$ {#ssec:R^--formal} Recall from Section [5.1](#ssec:T-main){reference-type="ref" reference="ssec:T-main"} that there is a series $\mathrm{Q}_z=\sum_{n>0}\mathrm{Q}_{n\delta}z^{n\mathsf{ht}(\delta)}$, with $\mathrm{Q}_{n\delta}$ defined explicitly in [\[Q:2-defs\]](#Q:2-defs){reference-type="eqref" reference="Q:2-defs"}, which satisfies $$\Delta^z(\operatorname{T}(h))=\operatorname{T}(h)\otimes 1 +1\otimes \operatorname{T}(h)+\hbar[h\otimes 1,\Omega_z+\mathrm{Q}_z] \quad \forall\; h\in \mathfrak{h}$$ in addition to the conditions of Part [\[map-T:3\]](#map-T:3){reference-type="eqref" reference="map-T:3"} in Theorem [Theorem 6](#T:map-T){reference-type="ref" reference="T:map-T"}. 
Here the transformation $\operatorname{T}$ itself is defined in Theorem [Theorem 6](#T:map-T){reference-type="ref" reference="T:map-T"} above, while $\Omega_z\in (\mathfrak{g}\otimes \mathfrak{g})[\![z]\!]$ is as in Section [3.9](#ssec:pre-cop){reference-type="ref" reference="ssec:pre-cop"}. Furthermore, we introduce the algebra homomorphisms $$\underset{\scriptscriptstyle{{\operatorname{D}},s}}{\Delta}^z =(\operatorname{Id}\otimes\upsigma_z)\circ\underset{\scriptscriptstyle{{\operatorname{D}},s}}{\Delta} \quad \text{ and }\quad \Delta_s^z=(\tau_s\otimes \operatorname{Id})\circ \Delta^z$$ in addition to the function $\nu:\mathsf{Q}_+\to \mathbb{Z}_{\geqslant 0}$ defined by $$\nu(\beta)=\text{min}\{k\in\mathbb{Z}_{\geqslant 0}: \beta=\alpha^{(1)}+\cdots+\alpha^{(k)} \text{ for } \alpha^{(1)},\ldots,\alpha^{(k)}\in \Phi_+\}.$$ The following theorem provides the formal version of the main result of this section. **Theorem 7**. *There exists a unique family of elements $\{\mathcal{R}^-(s)_\beta\}_{\beta\in \mathsf{Q}_+}$, with $\mathcal{R}^-(s)_\beta\in (Y_{\hbar}(\mathfrak{g})_{-\beta}\otimes Y_{\hbar}(\mathfrak{g})_\beta)[\![s^{-1}]\!]$ for each $\beta\in \mathsf{Q}_+$, satisfying $\mathcal{R}^-(s)_0=1\otimes 1$ in addition to the intertwiner equation $$\label{eq:IER-} \mathcal{R}^-(s,z)\Delta_s^z(\operatorname{T}(h))= \underset{\scriptscriptstyle{{\operatorname{D}},s}}{\Delta}^{\!z}(\operatorname{T}(h)) \mathcal{R}^-(s,z) \quad \forall\; h\in \mathfrak{h}$$ in $Y_{\hbar}(\mathfrak{g})^{\otimes 2}[s;s^{-1}]\!] [\![z]\!]$, where $\mathcal{R}^-(s,z)=\sum_{\beta\in \mathsf{Q}_+}\mathcal{R}^-(s)_\beta z^{\mathsf{ht}(\beta)}$. Moreover:* 1. *[\[R-(s,z):1\]]{#R-(s,z):1 label="R-(s,z):1"} For each $x\in Y_{\hbar}(\mathfrak{g})$, one has $$\mathcal{R}^-(s,z)\Delta_s^z(x)= \underset{\scriptscriptstyle{{\operatorname{D}},s}}{\Delta}^{\!z}(x) \mathcal{R}^-(s,z).$$* 2. 
*[\[R-(s,z):2\]]{#R-(s,z):2 label="R-(s,z):2"} For each $\beta\in \mathsf{Q}_+$, $\mathcal{R}^-(s)_\beta$ satisfies $$\mathcal{R}^-(s)_\beta\in s^{-\nu(\beta)}\left( Y^{-}_{\hbar}(\mathfrak{g})\otimes Y^{+}_{\hbar}(\mathfrak{g})\right)[\![s^{-1}]\!].$$* 3. *[\[R-(s,z):3\]]{#R-(s,z):3 label="R-(s,z):3"} For each $\beta\in \mathsf{Q}_+$ and $a,b\in \mathbb{C}$, $\mathcal{R}^-(s)_\beta$ satisfies $$(\tau_a\otimes \tau_b)(\mathcal{R}^-(s)_\beta)=\mathcal{R}^-(s+a-b)_\beta.$$* 4. *[\[R-(s,z):4\]]{#R-(s,z):4 label="R-(s,z):4"} As an element of $Y_{\hbar}(\mathfrak{g})^{\otimes 2}[\![z]\!][\![s^{-1}]\!]$, $\mathcal{R}^-(s,z)$ is of the form $$\mathcal{R}^-(s,z)=1+\hbar s^{-1}(\Omega_z^-+\mathrm{Q}_z)+O(s^{-2}).$$* *[Proof]{.smallcaps}.* With Theorem [Theorem 6](#T:map-T){reference-type="ref" reference="T:map-T"} at our disposal, the proof of the finite-type counterpart to this result, given in [@GTLW §4], carries over to the present setting with only minor adjustments. In what follows we summarize the key steps while making transparent the role played by $z$ (which does not appear in [@GTLW]) and Theorem [Theorem 6](#T:map-T){reference-type="ref" reference="T:map-T"}. *[Proof of Existence and Uniqueness]{.smallcaps}.* By Theorem [Theorem 6](#T:map-T){reference-type="ref" reference="T:map-T"}, $\mathrm{ad}(\tau_s(\operatorname{T}(h)))=\mathrm{ad}(\operatorname{T}(h)+sh)$ for all $h\in \mathfrak{h}$ and $(\tau_s\otimes \operatorname{Id})(\mathrm{Q}_z)=\mathrm{Q}_z$. It follows that the intertwiner equation [\[eq:IER-\]](#eq:IER-){reference-type="eqref" reference="eq:IER-"} is equivalent to $$(\mathcal{T}(h)+\mathrm{ad}(sh\otimes 1))\cdot \mathcal{R}^-(s,z)=\hbar \mathcal{R}^-(s,z)[h\otimes 1,\Omega_z+\mathrm{Q}_z]$$ for all $h\in \mathfrak{h}$, where we have set $\mathcal{T}(h)=\mathrm{ad}(\square(\operatorname{T}(h)))$. 
Taking the $Y_{\hbar}(\mathfrak{g})_{-\beta}\otimes Y_{\hbar}(\mathfrak{g})_\beta$ component of this equation, for any fixed $\beta\in \mathsf{Q}_+$, and dividing both sides of the resulting identity by $z^{\mathsf{ht}(\beta)}$ yields $$\label{R^-:recur-rat} (\mathcal{T}(h)-s\beta(h)) \cdot \mathcal{R}^-(s)_\beta = -\hbar\sum_{\alpha\in\Phi_+} \alpha(h) \mathcal{R}^-(s)_{\beta-\alpha} (\Omega_\alpha+\mathrm{Q}_\alpha),$$ where it is understood that $\mathcal{R}^-(s)_\gamma=0=\mathrm{Q}_\alpha$ for $\gamma\not\in\mathsf{Q}_+$ and $\alpha \notin \mathbb{Z}\delta$. In particular, the right--hand side of the equation above is a finite sum. The proof that there is at most one family $\{\mathcal{R}^-(s)_\beta\}_{\beta\in \mathsf{Q}_+}$ satisfying the conditions of the theorem now proceeds identically to the proof of the analogous assertion for the Yangian of a finite--dimensional simple Lie algebra given in [@GTLW §4.2]. Indeed, if $h\in \mathfrak{h}$ is chosen so that $\beta(h)\neq 0$, then the operator on the left--hand side of the above equation can be inverted to yield $$\label{R^-:recur} \begin{aligned} \mathcal{R}^-(s)_\beta &= \frac{\hbar}{s\beta(h)} \left(1 - \frac{\mathcal{T}(h)}{s\beta(h)}\right)^{-1} \sum_{\alpha\in\Phi_+}\alpha(h)\mathcal{R}^-(s)_{\beta-\alpha}(\Omega_\alpha+\mathrm{Q}_\alpha) \\ &= \hbar\sum_{k\geqslant 0} \frac{\mathcal{T}(h)^k}{(s\beta(h))^{k+1}} \sum_{\alpha\in\Phi_+}\alpha(h)\mathcal{R}^-(s)_{\beta-\alpha}(\Omega_\alpha+\mathrm{Q}_\alpha). \end{aligned}$$ This equation determines $\mathcal{R}^-(s)_\beta$ recursively in terms of $\mathcal{R}^-(s)_{\gamma}$ with $\mathsf{ht}(\gamma)<\mathsf{ht}(\beta)$, and thus establishes the uniqueness of $\{\mathcal{R}^-(s)_\beta\}_{\beta\in \mathsf{Q}_+}$ satisfying [\[eq:IER-\]](#eq:IER-){reference-type="eqref" reference="eq:IER-"} and the initial condition $\mathcal{R}^-(s)_0=1\otimes 1$. To prove existence, we proceed as in [@GTLW §4.2]. 
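The inversion step in the recursion above is purely formal: one expands the geometric series in $s^{-1}$. When the operator in question acts nilpotently on the relevant subspace, the same series actually terminates. A minimal numerical sketch of this mechanism, with a generic nilpotent matrix standing in for $\mathcal{T}(h)$ and a nonzero scalar for $s\beta(h)$ (both are illustrative choices, not data from the paper):

```python
def mat_mul(A, B):
    """Product of two square matrices given as nested lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

I3 = [[float(i == j) for j in range(3)] for i in range(3)]

c = 5.0  # stand-in for the nonzero scalar s*beta(h)
# Strictly upper-triangular stand-in for T(h)/c, hence nilpotent: N^3 = 0.
N = [[0.0, 2.0 / c, 1.0 / c],
     [0.0, 0.0, 3.0 / c],
     [0.0, 0.0, 0.0]]

# The geometric series (1 - N)^{-1} = 1 + N + N^2 terminates by nilpotency.
inv_series = mat_add(mat_add(I3, N), mat_mul(N, N))

# Check: multiplying back by (1 - N) recovers the identity matrix.
one_minus_N = [[I3[i][j] - N[i][j] for j in range(3)] for i in range(3)]
product = mat_mul(one_minus_N, inv_series)
assert all(abs(product[i][j] - I3[i][j]) < 1e-12
           for i in range(3) for j in range(3))
```

In the formal setting of the theorem no nilpotency is needed: each power of $\mathcal{T}(h)$ simply contributes to a deeper coefficient of the $s^{-1}$-adic expansion.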
Choose $h\in\mathfrak{h}$ such that $\beta(h)\neq 0$ for every nonzero $\beta\in\mathsf{Q}_+$. Corresponding to this choice of $h$, we may define a family of elements $\{\mathcal{R}^-(s)_\beta\}_{\beta\in \mathsf{Q}_+}$ recursively on the height of $\beta$ using [\[R\^-:recur\]](#R^-:recur){reference-type="eqref" reference="R^-:recur"} and the condition $\mathcal{R}^-(s)_0=1$. This produces elements $\mathcal{R}^-(s)_\beta\in (Y_{\hbar}(\mathfrak{g})_{-\beta}\otimes Y_{\hbar}(\mathfrak{g})_\beta)[\![s^{-1}]\!]$ which by definition satisfy $$\label{eq:IER-:fixed} \mathcal{R}^-(s,z)\Delta_s^z(\operatorname{T}(h))= \underset{\scriptscriptstyle{{\operatorname{D}},s}}{\Delta}^{\!z}(\operatorname{T}(h)) \mathcal{R}^-(s,z)$$ for our fixed choice of $h$, where $\mathcal{R}^-(s,z)=\sum_{\beta\in \mathsf{Q}_+}\mathcal{R}^-(s)_\beta z^{\mathsf{ht}(\beta)}$. To see that $\mathcal{R}^-(s,z)$ in fact satisfies [\[eq:IER-\]](#eq:IER-){reference-type="eqref" reference="eq:IER-"} in general, let $h'\in \mathfrak{h}$ and note that, since $\Delta_s^z$ and $\underset{\scriptscriptstyle{{\operatorname{D}},s}}{\Delta}^{\!z}$ are algebra homomorphisms, the two series $$\mathcal{R}^-(s,z)\Delta_s^z(\operatorname{T}(h')) \quad \text{ and }\quad \underset{\scriptscriptstyle{{\operatorname{D}},s}}{\Delta}^{\!z}(\operatorname{T}(h'))\mathcal{R}^-(s,z)$$ both solve [\[eq:IER-:fixed\]](#eq:IER-:fixed){reference-type="eqref" reference="eq:IER-:fixed"}, are of the form $\sum_{\beta\in \mathsf{Q}_+}\EuScript{R}(s)_\beta z^{\mathsf{ht}(\beta)}$ with $\EuScript{R}(s)_\beta\in (Y_{\hbar}(\mathfrak{g})_{-\beta}\otimes Y_{\hbar}(\mathfrak{g})_{\beta})[s;s^{-1}]\!]$ for each $\beta\in \mathsf{Q}_+$, and have $Y_{\hbar}(\mathfrak{g})_0\otimes Y_{\hbar}(\mathfrak{g})_0$ components equal to $(\tau_s\otimes \operatorname{Id})\square(\operatorname{T}(h'))$. They therefore coincide by the uniqueness argument given above (see [\[R\^-:recur\]](#R^-:recur){reference-type="eqref" reference="R^-:recur"}). 
◻ *[Proof of Parts [\[R-(s,z):2\]](#R-(s,z):2){reference-type="eqref" reference="R-(s,z):2"}--[\[R-(s,z):4\]](#R-(s,z):4){reference-type="eqref" reference="R-(s,z):4"}]{.smallcaps}.* By Part [\[map-T:3\]](#map-T:3){reference-type="eqref" reference="map-T:3"} of Theorem [Theorem 6](#T:map-T){reference-type="ref" reference="T:map-T"}, $\Omega_\alpha+\mathrm{Q}_\alpha$ lies in $Y^{-}_{\hbar}(\mathfrak{g})_{-\alpha}\otimes Y^{+}_{\hbar}(\mathfrak{g})_{\alpha}$ for each $\alpha\in \Phi_+$. Therefore, the recursion [\[R\^-:recur\]](#R^-:recur){reference-type="eqref" reference="R^-:recur"} implies that $$\mathcal{R}^-(s)_\beta\in s^{-\nu(\beta)} \left(Y^{-}_{\hbar}(\mathfrak{g})_{-\beta}\otimes Y^{+}_{\hbar}(\mathfrak{g})_{\beta}\right)[\negthinspace[s^{-1}]\negthinspace]$$ for all $\beta\in \mathsf{Q}_+$, with $\mathcal{R}^-(s)_{\alpha}=\hbar(\Omega_\alpha+\mathrm{Q}_\alpha)s^{-1} +O(s^{-2})$ for all $\alpha\in \Phi_+$. This proves Parts [\[R-(s,z):2\]](#R-(s,z):2){reference-type="eqref" reference="R-(s,z):2"} and [\[R-(s,z):4\]](#R-(s,z):4){reference-type="eqref" reference="R-(s,z):4"} of the theorem. Similarly, given $a,b\in \mathbb{C}$, Parts [\[map-T:2\]](#map-T:2){reference-type="eqref" reference="map-T:2"} and [\[map-T:3\]](#map-T:3){reference-type="eqref" reference="map-T:3"} of Theorem [Theorem 6](#T:map-T){reference-type="ref" reference="T:map-T"} imply that the elements $(\tau_a\otimes \tau_b)(\mathcal{R}^-(s)_\beta)$ and $\mathcal{R}^-(s+a-b)_\beta$ both satisfy [\[R\^-:recur-rat\]](#R^-:recur-rat){reference-type="eqref" reference="R^-:recur-rat"} with $s$ replaced by $s+a-b$, and thus coincide by uniqueness (see also [@GTLW §4.3]). Note that the argument given in *loc. cit.* does not work as stated, since $\operatorname{T}(\mathfrak{h})\oplus\mathfrak{h}$ is not stable under $\tau_b$. Nevertheless, the image of this subspace under $\mathrm{ad}$ is, which is all that is needed here. 
◻ *[Proof of Part [\[R-(s,z):1\]](#R-(s,z):1){reference-type="eqref" reference="R-(s,z):1"}]{.smallcaps}.* Since the desired identity is satisfied for $x\in \mathfrak{h}\oplus \operatorname{T}(\mathfrak{h})$, it suffices to prove that $$\mathcal{R}^-(s,z)\Delta_s^z(x_{i,0}^\pm)= \underset{\scriptscriptstyle{{\operatorname{D}},s}}{\Delta}^{\!z}(x_{i,0}^\pm) \mathcal{R}^-(s,z)$$ for each $i\in {\mathbf I}$. This is proven using a rank $1$ reduction argument, exactly as in [@GTLW §4.7], with the $\mathfrak{sl}_2$ case having been verified in [@GTLW §4.8]. ◻  ◻ **Remark 3**. We emphasize that the use of Theorem [Theorem 6](#T:map-T){reference-type="ref" reference="T:map-T"} is essential in the above proof, and in particular in the existence and uniqueness argument for $\{\mathcal{R}^-(s)_\beta\}_{\beta\in \mathsf{Q}_+}$. Indeed, unlike in the setting of [@GTLW Thm 4.1], it is not sufficient to replace $\{\operatorname{T}(h)\}_{h\in \mathfrak{h}}$ by the family $\{t_{i,1}\}_{i\in {\mathbf I}}$ in [\[eq:IER-\]](#eq:IER-){reference-type="eqref" reference="eq:IER-"}, as it is not true that for any nonzero $\beta\in \mathsf{Q}_+$ there is $h\in \mathfrak{h}^\prime$ such that $\beta(h)\neq 0$. The existence of such an $h$ is necessary to pass from [\[R\^-:recur-rat\]](#R^-:recur-rat){reference-type="eqref" reference="R^-:recur-rat"} to [\[R\^-:recur\]](#R^-:recur){reference-type="eqref" reference="R^-:recur"}. In addition, unlike in the finite case considered in [@GTLW], the infinite sum $\sum_\beta \mathcal{R}^-(s)_\beta$ does not converge to an element of $Y_{\hbar}(\mathfrak{g})^{\otimes 2}[\![s^{-1}]\!]$. This is the reason behind the introduction of the auxiliary parameter $z$ in the statement of the theorem. ## The operators $\mathcal{R}^-_{V_1,V_2}(s)$ {#ssec:R^--rep} Let $V_1,V_2\in\mathcal{O}(Y_{\hbar}(\mathfrak{g}))$, with $\pi_\ell:Y_{\hbar}(\mathfrak{g})\to\operatorname{End}(V_\ell)$ being the action homomorphisms. 
By the category $\mathcal{O}$ condition, the matrix entries of $\pi_1\otimes\pi_2 (\Omega_z)$ and $\pi_1\otimes\pi_2 (\mathrm{Q}_z)$ are polynomials in $z$ and therefore can be evaluated at $z=1$. Let $$\Omega_{V_1,V_2} =\left.\pi_1\otimes\pi_2 (\Omega_z)\right|_{z=1} \quad \text{ and }\quad \mathrm{Q}_{V_1,V_2} =\left.\pi_1\otimes\pi_2 (\mathrm{Q}_z)\right|_{z=1}.$$ We have the following representation--theoretic version of Theorem [Theorem 7](#thm:R-){reference-type="ref" reference="thm:R-"}. **Theorem 8**. *Let $V_1,V_2\in\mathcal{O}(Y_{\hbar}(\mathfrak{g}))$. Then, there exists a unique $\operatorname{End}(V_1\otimes V_2)$--valued, rational function of $s$, denoted by $\mathcal{R}^-_{V_1,V_2}(s)$, satisfying the following properties.* 1. *(Normalization) $\mathcal{R}^-_{V_1,V_2}(\infty)=\operatorname{Id}_{V_1\otimes V_2}$.* 2. *(Triangularity) $\displaystyle\mathcal{R}^-_{V_1,V_2}(s) = \sum_{\beta\in\mathsf{Q}_+} \mathcal{R}^-_{V_1,V_2}(s)_\beta$, where, for any $\mu_1,\mu_2\in\mathfrak{h}^*$, $$\mathcal{R}^-_{V_1,V_2}(s)_\beta : V_1[\mu_1]\otimes V_2[\mu_2] \to V_1[\mu_1-\beta]\otimes V_2[\mu_2+\beta].$$* 3. *(Intertwiner) For every $h\in\mathfrak{h}$, the following intertwining relation holds: $$\left[\square(\operatorname{T}(h)) + s h\otimes 1, \mathcal{R}^-_{V_1,V_2}(s)\right] = \hbar \mathcal{R}_{V_1,V_2}^-(s) [h\otimes 1, \Omega_{V_1,V_2}+\mathrm{Q}_{V_1,V_2}].$$* *$\hspace*{-0.75in}$ Moreover, this operator has the following properties:* 4. *If $\mathcal{R}^-(s,z)$ is as in Theorem [Theorem 7](#thm:R-){reference-type="ref" reference="thm:R-"}, then $$\mathcal{R}^-_{V_1,V_2}(s) = \left.\pi_1\otimes \pi_2 (\mathcal{R}^-(s,z))\right|_{z=1}.$$* 5. 
*$\displaystyle\mathcal{R}^-_{V_1,V_2}(s) : V_1\underset{\scriptscriptstyle{\operatorname{ KM},s}}{\otimes} V_2 \to V_1\underset{\scriptscriptstyle{\operatorname{D},s}}{\otimes} V_2$ is a $Y_\hbar(\mathfrak{g})$--intertwiner, which is natural in $V_1,V_2$ and compatible with the shift automorphism: $$\mathcal{R}^-_{V_1(a),V_2(b)}(s) = \mathcal{R}^-_{V_1,V_2}(s+a-b).$$* 6. *(Cocycle equation) The following diagram commutes, for every $V_1,V_2,V_3\in\mathcal{O}(Y_{\hbar}(\mathfrak{g}))$: $$\xy (0,60)*{(V_1\underset{\scriptscriptstyle{\operatorname{ KM},s_1}}{\otimes} V_2) \underset{\scriptscriptstyle{\operatorname{ KM},s_2}}{\otimes} V_3}="a"; (0,30)*{(V_1\underset{\scriptscriptstyle{\operatorname{D},s_1}}{\otimes} V_2) \underset{\scriptscriptstyle{\operatorname{ KM},s_2}}{\otimes} V_3}="b"; (0,0)*{(V_1\underset{\scriptscriptstyle{\operatorname{D},s_1}}{\otimes} V_2) \underset{\scriptscriptstyle{\operatorname{D},s_2}}{\otimes} V_3}="c"; (80,60)*{V_1\underset{\scriptscriptstyle{\operatorname{ KM},s_1+s_2}}{\otimes} (V_2\underset{\scriptscriptstyle{\operatorname{ KM},s_2}}{\otimes} \otimes V_3)}="ap"; (80,30)*{V_1\underset{\scriptscriptstyle{\operatorname{ KM},s_1+s_2}}{\otimes} (V_2\underset{\scriptscriptstyle{\operatorname{D},s_2}}{\otimes} \otimes V_3)}="bp"; (80,0)*{V_1\underset{\scriptscriptstyle{\operatorname{D},s_1+s_2}}{\otimes} (V_2\underset{\scriptscriptstyle{\operatorname{D},s_2}}{\otimes} \otimes V_3)}="cp"; {\ar^{\operatorname{Id}_{V_1}\otimes \mathcal{R}^-_{V_2,V_3}(s_2)} "ap"; "bp"}; {\ar_{\mathcal{R}^-_{V_1,V_2}(s_1)\otimes \operatorname{Id}_{V_3}} "a"; "b"}; {\ar_{\mathcal{R}^-_{V_1\underset{\scriptscriptstyle{\operatorname{D},s_1}}{\otimes}V_2,V_3}(s_2)} "b"; "c"}; {\ar^{\mathcal{R}^-_{V_1,V_2\underset{\scriptscriptstyle{\operatorname{D},s_2}}{\otimes}V_3}(s_1+s_2)} "bp"; "cp"}; {\ar@{=} "a"; "ap"}; {\ar@{=} "c"; "cp"}; \endxy$$* **Remark 4**. 
We wish to highlight that the cocycle equation only holds for the rational intertwiners, not for the formal $\mathcal{R}^-(s,z)$; see [@GTLW Remark 4.1]. *[Proof]{.smallcaps}.* For notational simplicity, we drop the subscript $V_1,V_2$. Let us fix $\mu\in\mathfrak{h}^*$ and consider the intertwining equation valued in the finite--dimensional vector space $\mathcal{E}=\operatorname{End}((V_1\otimes V_2)[\mu])$. For $h\in\mathfrak{h}$, let $\mathsf{r}(h) =[h\otimes 1, \Omega+\mathrm{Q}]$, viewed as a strictly lower triangular operator on $\mathcal{E}$. Now, we have: $$A(h,s)=\operatorname{ad}(\square(\operatorname{T}(h))+sh\otimes 1) - \rho(\hbar\mathsf{r}(h)) \in \operatorname{End}(\mathcal{E})[s],$$ where $\rho(X)$ denotes the operator of right multiplication by $X$. This operator is triangular in the weight grading on $\mathcal{E}$, with diagonal blocks being linear in $s$, for generic $h$. The same argument given in the proof of Theorem [Theorem 7](#thm:R-){reference-type="ref" reference="thm:R-"}, carried out in $\operatorname{End}(\mathcal{E})$, shows that there is a unique unipotent solution to [\[eq:IER-\]](#eq:IER-){reference-type="eqref" reference="eq:IER-"}. Moreover, each $\mathcal{R}^-(s)_\beta$ is a rational function of $s$, vanishing at $s=\infty$ for $\beta>0$. This proves the first part of the theorem. The uniqueness of the solution, together with Theorem [Theorem 7](#thm:R-){reference-type="ref" reference="thm:R-"}, implies that the Taylor series expansion of this rational function at $s=\infty$ is the same as the evaluation of $\mathcal{R}^-(s,z)$ on $V_1\otimes V_2$, specialized at $z=1$. This proves (4). Part (5) is now a consequence of Theorem [Theorem 7](#thm:R-){reference-type="ref" reference="thm:R-"}. The cocycle equation also rests on uniqueness, and coassociativity of $\Delta_s$ and $\underset{\scriptscriptstyle{{\operatorname{D}},s}}{\Delta}$ on representations (see [@GTLW §4.7]). 
Namely, consider the two sides of the cocycle equation: $$\begin{aligned} L(s_1,s_2) &=\mathcal{R}^-_{V_1\underset{\scriptscriptstyle{\operatorname{D},s_1}}{\otimes}V_2,V_3}(s_2)\circ \left(\mathcal{R}^-_{V_1,V_2}(s_1)\otimes \operatorname{Id}_{V_3}\right), \\ R(s_1,s_2) &=\mathcal{R}^-_{V_1,V_2\underset{\scriptscriptstyle{\operatorname{D},s_2}}{\otimes}V_3}(s_1+s_2)\circ \left(\operatorname{Id}_{V_1}\otimes \mathcal{R}^-_{V_2,V_3}(s_2)\right). \end{aligned}$$ It is straightforward to see that both of these operators intertwine the two actions of $\operatorname{T}(h)$. That is, using the formulae for $\Delta_s$ and Theorem [Theorem 6](#T:map-T){reference-type="ref" reference="T:map-T"} [\[map-T:3\]](#map-T:3){reference-type="eqref" reference="map-T:3"}, these operators solve the following equation: $$\begin{gathered} \operatorname{ad}\left(\sum_{a=1}^3 \operatorname{T}(h)^{(a)} + (s_1+s_2)h^{(1)} +s_2 h^{(2)}\right)\cdot X(s_1,s_2) = \\ \hbar X(s_1,s_2)(\mathsf{r}_{12}(h)+\mathsf{r}_{23}(h) +\mathsf{r}_{13}(h)).\end{gathered}$$ As in [@GTLW §4.6], there is at most one solution to this equation, which is triangular in the sense that $X = \sum_{\beta,\gamma\in\mathsf{Q}_+} X_{\beta,\gamma}$, where, for $\mu_1,\mu_2,\mu_3\in\mathfrak{h}^*$, $$X_{\beta,\gamma} : V_1[\mu_1]\otimes V_2[\mu_2]\otimes V_3[\mu_3] \to V_1[\mu_1-\beta]\otimes V_2[\mu_2+\beta-\gamma]\otimes V_3[\mu_3+\gamma].$$ Hence, $L(s_1,s_2) = R(s_1,s_2)$. ◻ # Construction of $\mathcal{R}^0(s)$ {#sec:ab-R} In this section, we introduce an additive, regular difference equation, whose coefficients come from $\mathrm{Prim}^{\scriptscriptstyle{\mathrm{D}}}(Y_{\hbar}(\mathfrak{g}))^{\otimes 2}$, where $\mathrm{Prim}^{\scriptscriptstyle{\mathrm{D}}}(Y_{\hbar}(\mathfrak{g}))$ is the linear span of $\{t_{i,r}\}_{i\in{\mathbf I}, r\in\mathbb{Z}_{\geqslant 0}}$.
We show that the exponentials of the two fundamental solutions of this difference equation give rise to two meromorphic braidings on $(\mathcal{O}(Y_{\hbar}(\mathfrak{g})),\underset{\scriptscriptstyle{\operatorname{D},s}}{\otimes})$, related by a unitarity condition. They have the same asymptotic expansion, which is the unique formal solution of our difference equation, *i.e.*, a formal abelian $R$--matrix. ## Main construction and result {#ssec:R0-main} We begin by stating the main result of this section. **Theorem 9**. *Given $V_1,V_2\in\mathcal{O}(Y_{\hbar}(\mathfrak{g}))$, there exist two meromorphic $\operatorname{End}(V_1\otimes V_2)$--valued functions, $\mathcal{R}^{0,\eta}_{V_1,V_2}(s)$, $\eta\in\{\uparrow,\downarrow\}$, which are natural in $V_1$ and $V_2$ and have the following properties:* 1. *[\[R0:fn\]]{#R0:fn label="R0:fn"} Let $\mathbb{H}^{\uparrow} = \{s\in\mathbb{C}: \operatorname{Re}(s/\hbar)\gg 0\} = -\mathbb{H}^{\downarrow}$. Then, for every $V_1,V_2\in\mathcal{O}(Y_{\hbar}(\mathfrak{g}))$, $\mathcal{R}^{0,\eta}_{V_1,V_2}(s)$ is holomorphic and invertible in $\mathbb{H}^{\eta}$ and approaches $\operatorname{Id}_{V_1\otimes V_2}$ as $|s|\to\infty$.* 2. *[\[R0:int\]]{#R0:int label="R0:int"} For every $V_1,V_2$, the following is a $Y_{\hbar}(\mathfrak{g})$--intertwiner: $$(1\ 2) \circ \mathcal{R}^{0,\eta}_{V_1,V_2}(s) : V_1\underset{\scriptscriptstyle{\operatorname{D},s}}{\otimes} V_2 \to (V_2\underset{\scriptscriptstyle{\operatorname{D},-s}}{\otimes}V_1)(s)\ .$$* 3. *[\[R0:shift\]]{#R0:shift label="R0:shift"} The following holds, for every $V_1,V_2\in\mathcal{O}(Y_{\hbar}(\mathfrak{g}))$ and $a,b\in\mathbb{C}$: $$\mathcal{R}^{0,\eta}_{V_1(a),V_2(b)}(s) = \mathcal{R}^{0,\eta}_{V_1,V_2}(s+a-b)\ .$$* 4.
*[\[R0:cabling\]]{#R0:cabling label="R0:cabling"} For every $V_1,V_2,V_3\in\mathcal{O}(Y_{\hbar}(\mathfrak{g}))$, we have: $$\begin{aligned} \mathcal{R}^{0,\eta}_{V_1\underset{\scriptscriptstyle{\operatorname{D},s_1}}{\otimes}V_2,V_3}(s_2) &= \mathcal{R}^{0,\eta}_{V_1,V_3}(s_1+s_2) \mathcal{R}^{0,\eta}_{V_2,V_3}(s_2)\ , \\ \mathcal{R}^{0,\eta}_{V_1,V_2\underset{\scriptscriptstyle{\operatorname{D},s_2}}{\otimes}V_3}(s_1+s_2) &= \mathcal{R}^{0,\eta}_{V_1,V_3}(s_1+s_2) \mathcal{R}^{0,\eta}_{V_1,V_2}(s_1)\ . \end{aligned}$$* 5. *[\[R0:unitary\]]{#R0:unitary label="R0:unitary"} For every $V_1,V_2\in\mathcal{O}(Y_{\hbar}(\mathfrak{g}))$, we have: $$\mathcal{R}^{0,\uparrow}_{V_1,V_2}(s)^{-1} = (1\ 2)\circ \mathcal{R}^{0,\downarrow}_{V_2,V_1}(-s)\circ (1\ 2)\ .$$* 6. *[\[R0:asym\]]{#R0:asym label="R0:asym"} There exists a unique formal series $\mathcal{R}^0(s)\in Y^{0}_{\hbar}(\mathfrak{g})^{\otimes 2}[\negthinspace[s^{-1}]\negthinspace]$ such that, for every $V_1,V_2\in\mathcal{O}(Y_{\hbar}(\mathfrak{g}))$, with $\pi_\ell:Y_{\hbar}(\mathfrak{g})\to\operatorname{End}(V_\ell)$ the corresponding action homomorphism, we have: $$\mathcal{R}^{0,\eta}_{V_1,V_2}(s)\sim \pi_1\otimes\pi_2 (\mathcal{R}^0(s)),\ \text{as } \pm\operatorname{Re}(s/\hbar) \to\infty .$$ This asymptotic expansion remains valid in a larger sector $\Sigma^\eta_\delta$, for any $\delta>0$, where, if $\theta = \arg(\hbar)$ then: $$\Sigma^\uparrow_\delta =\{re^{\iota\phi} : r\in\mathbb{R}_{>0}, \phi\in (\theta-\pi+\delta,\theta+\pi-\delta)\} = -\Sigma^\downarrow_\delta.$$ (see Figure [2](#afig:sector){reference-type="ref" reference="afig:sector"} in Appendix [8](#app:LapDE){reference-type="ref" reference="app:LapDE"} where $\chi=\hbar/2$).* 7. 
*[\[R0:leading\]]{#R0:leading label="R0:leading"} The first order term of $\mathcal{R}^0(s)$ is given by: $$\mathcal{R}^0(s) = \exp\left(s^{-1}\left(\frac{1}{\hbar}\mathsf{g}_{\scriptscriptstyle{\mathrm{sing}}}+ \frac{\hbar}{4\mathsf{d}^{(0)}} \mathsf{g}_0\right)+ O(s^{-2})\right)\,.$$ Here, the constant $\ell^{(0)}$ and the tensor $\mathsf{g}_0$ are given in Lemma [Lemma 4](#lem:Rg-terms){reference-type="ref" reference="lem:Rg-terms"}, $\mathsf{d}^{(0)}$ in Lemma [Lemma 3](#lem:order){reference-type="ref" reference="lem:order"} below, and $$\mathsf{g}_{\scriptscriptstyle{\mathrm{sing}}}= \frac{\ell^{(0)}}{\mathsf{d}^{(0)}} \left(\frac{\EuScript{C}_2\otimes\EuScript{C}_0}{2} -\EuScript{C}_1\otimes\EuScript{C}_1 + \frac{\EuScript{C}_0\otimes \EuScript{C}_2}{2}\right).$$* **Remark 5**. Note that by Corollary [Corollary 1](#cor:c123){reference-type="ref" reference="cor:c123"} the tensor $\mathsf{g}_{\scriptscriptstyle{\mathrm{sing}}}$ commutes with zero--weight elements of $Y_\hbar(\mathfrak{g})^{\otimes 2}$, and hence with $\mathcal{R}^{\pm}(s)$. Thus, it can be taken to the left, as stated in [\[eq:intro-R-sing\]](#eq:intro-R-sing){reference-type="eqref" reference="eq:intro-R-sing"}. The proof of this theorem is outlined in Section [7.3](#ssec:R0-thm-pf){reference-type="ref" reference="ssec:R0-thm-pf"} and worked out in the following sections. Before going into the proof, we will prove the rationality property of $\mathcal{R}^0(s)$. ## Rationality {#ssec:rat-R0} Assume that $V_1,V_2\in\mathcal{O}(Y_{\hbar}(\mathfrak{g}))$ are two highest--weight representations. Let $\lambda_\ell\in\mathfrak{h}^*$ be their highest weights, and fix $\mathsf{v}_\ell\in V_\ell[\lambda_\ell]$ highest--weight vectors ($\ell=1,2$). 
For $\eta\in\{\uparrow,\downarrow\}$, define $\mathfrak{f}^\eta(s)$ as the eigenvalue of $\mathcal{R}^{0,\eta}_{V_1,V_2}(s)$ on $V_1[\lambda_1]\otimes V_2[\lambda_2]$: $$\mathcal{R}^{0,\eta}(s)\cdot \mathsf{v}_1\otimes\mathsf{v}_2 = \mathfrak{f}^\eta(s) \mathsf{v}_1\otimes\mathsf{v}_2\ .$$ **Theorem 10**. *With the notation set up as above, the normalized operator $$\mathsf{R}^0_{V_1,V_2}(s) =\mathfrak{f}^\eta(s)^{-1}\mathcal{R}^{0,\eta}_{V_1,V_2}(s)$$ is independent of $\eta$, and is rational in $s$.* *[Proof]{.smallcaps}.* Let $\mu_\ell\in P(V_\ell)$ be two weights ($\ell=1,2$). We argue by induction on $\operatorname{ht}(\lambda_1-\mu_1) + \operatorname{ht}(\lambda_2-\mu_2)$. For notational convenience we will drop $V_1,V_2$ from the subscripts of our operators. The base case being clear, we focus on the induction step. Note that, by the highest--weight property, we have $$V_1[\mu_1]\otimes V_2[\mu_2] = \sum_{\begin{subarray}{c} k_1,k_2\in{\mathbf I}\\ r_1,r_2\in\mathbb{Z}_{\geqslant 0}\end{subarray}} x^-_{k_1,r_1}\left(V_1[\mu_1+\alpha_{k_1}]\right) \otimes x^-_{k_2,r_2}\left(V_2[\mu_2+\alpha_{k_2}]\right).$$ Moreover, by the commutation relation $[t_{k,1},x^-_{k,r}] = -2d_kx^-_{k,r+1}$, for any $V\in\mathcal{O}(Y_{\hbar}(\mathfrak{g}))$ and $\mu\in P(V)$, we have $$\sum_{r\in\mathbb{Z}_{\geqslant 0}} x^-_{k,r}\left(V[\mu]\right)= Y^{0}_{\hbar}(\mathfrak{g})\cdot x^-_{k,0}\left(V[\mu]\right).$$ Recall that $\mathcal{R}^{0,\eta}(s)$ commutes with the action of $Y^{0}_{\hbar}(\mathfrak{g})\otimes Y^{0}_{\hbar}(\mathfrak{g})$ on $V_1\otimes V_2$. Thus, it suffices to show that the action of $\mathsf{R}^0(s)$ on the image of $x^-_{k,0}\otimes 1$ (and $1\otimes x^-_{k,0}$) is rational, and independent of $\eta$. We focus on the former, as the latter will follow either with a similar proof, or using unitarity.
We will use the following commutation relation between $\mathcal{R}^{0,\eta}(s)$ and $x^-_{k,0}\otimes 1$, which will be established in the proof of Part [\[R0:int\]](#R0:int){reference-type="eqref" reference="R0:int"} of Theorem [Theorem 9](#thm:R0-main){reference-type="ref" reference="thm:R0-main"}, outlined in Section [7.3](#ssec:R0-thm-pf){reference-type="ref" reference="ssec:R0-thm-pf"}: $$\label{eq:pf-ratR0-1} \operatorname{Ad}(\mathcal{R}^{0,\eta}(s)^{-1})\cdot (x^-_{k,0}\otimes 1) = x^-_{k,0}\otimes 1 + \mathfrak{Y}_k(s),$$ where $$\mathfrak{Y}_k(s) = \hbar\sum_{N\geqslant 0} s^{-N-1} \left( \sum_{n=0}^N (-1)^{n+1} \left(\begin{array}{c} N\\ n\end{array}\right) x^-_{k,n}\otimes \xi_{k,N-n}\right).$$ The right--hand side of [\[eq:pf-ratR0-1\]](#eq:pf-ratR0-1){reference-type="eqref" reference="eq:pf-ratR0-1"} has the following contour integral representation, which immediately shows that it is rational in $s$ (see [@sachin-valerio-III §4.5]): $$x^-_{k,0}\otimes 1 + \mathfrak{Y}_k(s) = \frac{1}{\hbar} \oint_{C_1} x^-_k(u)\otimes \xi_k(u+s)\, du\ .$$ Here, $C_1$ is a contour enclosing all the poles of $x^-_k(u)$ acting on $V_1[\mu_1+\alpha_k]$, and $s$ is so large that $\xi_k(u+s)$ acting on $V_2[\mu_2]$ is holomorphic on and within $C_1$. Now, let $w_1\in V_1[\mu_1+\alpha_k]$ and $w_2\in V_2[\mu_2]$. Then, since $\operatorname{Ad}(\mathcal{R}^{0,\eta}(s)) = \operatorname{Ad}(\mathsf{R}^0(s))$, we have $$\begin{aligned} \mathsf{R}^0(s)^{-1}\circ &(x^-_{k,0}\otimes 1)\cdot (w_1\otimes w_2)\\ &= \left(\operatorname{Ad}(\mathcal{R}^{0,\eta}(s)^{-1})\cdot (x^-_{k,0}\otimes 1)\right) \left(\mathsf{R}^0(s)^{-1}\cdot (w_1\otimes w_2)\right)\\ &= \left(x^-_{k,0}\otimes 1+ \mathfrak{Y}_k(s)\right)\cdot \left(\mathsf{R}^0(s)^{-1}\cdot (w_1\otimes w_2)\right). \end{aligned}$$ The last line gives a vector in $V_1[\mu_1]\otimes V_2[\mu_2]$ depending rationally on $s$, by the induction hypothesis combined with the rationality of $x^-_{k,0}\otimes 1 + \mathfrak{Y}_k(s)$. 
The theorem is proved. ◻ ## Proof of Theorem [Theorem 9](#thm:R0-main){reference-type="ref" reference="thm:R0-main"} {#ssec:R0-thm-pf} Our proof is based on an explicit construction of $\mathcal{R}^0(s)$, both as a formal series, and as an $\operatorname{End}(V_1\otimes V_2)$-valued meromorphic function of $s$. This $\mathcal{R}^0(s)$ has the form $$\mathcal{R}^0(s) = \exp(\Lambda(s)),\ \text{ where } \Lambda(s)\in s^{-1}\mathrm{Prim}^{\scriptscriptstyle{\mathrm{D}}}(Y_{\hbar}(\mathfrak{g}))^{\otimes 2}[\negthinspace[s^{-1}]\negthinspace].$$ The $\Lambda(s)$, in turn, is obtained in a few steps. We outline the steps here, which are carried out in Sections [7.4](#ssec:T-cartan){reference-type="ref" reference="ssec:T-cartan"}--[7.9](#ssec:Rl-props){reference-type="ref" reference="ssec:Rl-props"}. To begin, consider the difference equation $$\label{eq:L0-diff} (\mathsf{p}-\mathsf{p}^{-1})\det(\mathbf{B}(\mathsf{p})) \cdot \Lambda(s) = \mathcal{G}(s),$$ where $\mathsf{p}$, $\mathbf{B}(\mathsf{p})$ and $\mathcal{G}(s)$ are defined as follows: - $\mathsf{p}$ is the shift operator: $\mathsf{p}\cdot f(s) = f(s-\hbar/2)$. - $\mathbf{B}(\mathsf{p})$ is the symmetrized, affine $\mathsf{p}$--Cartan matrix (see Section [7.4](#ssec:T-cartan){reference-type="ref" reference="ssec:T-cartan"} below). - $\mathcal{G}(s) = \sum_{ij} \mathbf{B}(\mathsf{p})^{\ast}_{ji}\cdot \mathcal{T}_{ij}(s)$, where $\mathbf{B}(\mathsf{p})^\ast$ is the adjoint matrix to $\mathbf{B}(\mathsf{p})$ and $$\mathcal{T}_{ij}(s) = \hbar^2 \sum_{m\geqslant 1} m!s^{-m-1} \sum_{\begin{subarray}{c} a,b\geqslant 0 \\ a+b=m-1 \end{subarray}} (-1)^a \frac{t_{i,a}}{a!}\otimes \frac{t_{j,b}}{b!}\ .$$ In Section [7.5](#ssec:tau){reference-type="ref" reference="ssec:tau"}, we prove important properties of the operators $\mathcal{T}_{ij}(s)$.
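Two elementary identities underpin the formal calculus in these steps: $m!\,s^{-m-1} = (-\partial_s)^m\cdot s^{-1}$, which is behind the series defining $\mathcal{T}_{ij}(s)$, and Taylor's theorem $e^{c\partial_s}\cdot f(s) = f(s+c)$, which governs the shift operator $\mathsf{p}$. A minimal numeric sketch checking both for $f(s) = s^{-1}$ (the code and its function names are ours, purely illustrative):

```python
from math import factorial

def neg_ds_pow(m, s):
    """(-d/ds)^m applied to f(s) = 1/s: equals m! * s^(-m-1)."""
    # d^m/ds^m (1/s) = (-1)^m m! s^(-m-1), so the two signs cancel.
    return factorial(m) * s ** (-m - 1)

def shifted_inverse(s, c, N=60):
    """Truncated Taylor series of e^{c d/ds} (1/s): sum_m c^m/m! (1/s)^(m)."""
    # Each term c^m/m! * (-1)^m m! s^(-m-1) = (-c)^m s^(-m-1).
    return sum((-c) ** m * s ** (-m - 1) for m in range(N))

s, c = 5.0, 1.0
# Taylor's theorem e^{c d/ds} f(s) = f(s+c), for f(s) = 1/s and |c| < |s|:
print(abs(shifted_inverse(s, c) - 1.0 / (s + c)) < 1e-12)   # True

# m! s^{-m-1} = (-d/ds)^m . s^{-1}, cross-checked at m = 2 against a
# central finite difference for the second derivative of 1/s:
h = 1e-4
second_diff = (1 / (s + h) - 2 / s + 1 / (s - h)) / h ** 2
print(abs(second_diff - neg_ds_pow(2, s)) < 1e-6)           # True
```

For $|c|<|s|$ the truncated exponential series is geometric, which is why sixty terms suffice here.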
These properties are then used to establish nearly identical ones for the operator $\mathcal{G}(s)$ in Section [7.6](#ssec:Rg){reference-type="ref" reference="ssec:Rg"}. We compute the first few terms of $\mathcal{G}(s)$ in Section [7.7](#ssec:Rg-reg){reference-type="ref" reference="ssec:Rg-reg"} and show that the coefficients of $s^{-2}$ and $s^{-3}$ are central. This fact is used in Section [7.8](#ssec:R0-diff){reference-type="ref" reference="ssec:R0-diff"} to regularize the difference equation [\[eq:L0-diff\]](#eq:L0-diff){reference-type="eqref" reference="eq:L0-diff"} to $$\label{eq:Rl-diff} (\mathsf{p}-\mathsf{p}^{-1})\det(\mathbf{B}(\mathsf{p})) \cdot \Lambda(s) = \mathcal{G}_{\scriptscriptstyle{\operatorname{reg}}}(s),$$ where $\mathcal{G}_{\scriptscriptstyle{\operatorname{reg}}}(s)$ is obtained by removing the aforementioned central terms from $\mathcal{G}(s)$, and is defined explicitly in [\[eq:Rg-reg\]](#eq:Rg-reg){reference-type="eqref" reference="eq:Rg-reg"}. We show in Corollary [Corollary 4](#cor:Rl-formal){reference-type="ref" reference="cor:Rl-formal"} that this equation has a unique formal solution. The properties of $\Lambda(s)$, analogous to the assertions in this theorem, are obtained in Proposition [Proposition 7](#pr:Rl){reference-type="ref" reference="pr:Rl"}. Now, given $V_1,V_2\in\mathcal{O}(Y_{\hbar}(\mathfrak{g}))$, the evaluation of the difference equation [\[eq:Rl-diff\]](#eq:Rl-diff){reference-type="eqref" reference="eq:Rl-diff"} on $V_1\otimes V_2$ has coefficients from the following subalgebra of $\operatorname{End}(V_1\otimes V_2)$: $$\bigoplus_{\mu_1\in P(V_1),\mu_2\in P(V_2)} \operatorname{End}_{Y^{0}_{\hbar}(\mathfrak{g})}(V_1[\mu_1]) \otimes \operatorname{End}_{Y^{0}_{\hbar}(\mathfrak{g})}(V_2[\mu_2]).$$ Thus, fixing $\mu_1,\mu_2$, we can view [\[eq:Rl-diff\]](#eq:Rl-diff){reference-type="eqref" reference="eq:Rl-diff"} as an equation for a finite--size matrix--valued function of $s$.
In Appendix [8](#app:LapDE){reference-type="ref" reference="app:LapDE"} (see Theorem [Theorem 11](#appA:thm){reference-type="ref" reference="appA:thm"}), we establish the existence and uniqueness of the solutions of such equations with the prescribed asymptotics, for any right--hand side of $O(s^{-4})$ (this $4$ is $1$ plus the order of vanishing of the polynomial on the left--hand side at $\mathsf{p}=1$; see Lemma [Lemma 3](#lem:order){reference-type="ref" reference="lem:order"} below). We verify that our difference equation satisfies the hypotheses of Theorem [Theorem 11](#appA:thm){reference-type="ref" reference="appA:thm"} in Section [7.10](#ssec:borel-growth){reference-type="ref" reference="ssec:borel-growth"} (see Corollary [Corollary 5](#cor:borel-growth){reference-type="ref" reference="cor:borel-growth"}). Given these preparatory results, let us prove the theorem. Note that [\[R0:fn\]](#R0:fn){reference-type="eqref" reference="R0:fn"} and [\[R0:asym\]](#R0:asym){reference-type="eqref" reference="R0:asym"} follow from definitions. Parts [\[R0:shift\]](#R0:shift){reference-type="eqref" reference="R0:shift"} and [\[R0:unitary\]](#R0:unitary){reference-type="eqref" reference="R0:unitary"} of the theorem are a direct consequence of Parts [\[Rl:shift\]](#Rl:shift){reference-type="eqref" reference="Rl:shift"} and [\[Rl:unitary\]](#Rl:unitary){reference-type="eqref" reference="Rl:unitary"} of Proposition [Proposition 7](#pr:Rl){reference-type="ref" reference="pr:Rl"}, respectively. Part [\[R0:cabling\]](#R0:cabling){reference-type="eqref" reference="R0:cabling"} follows from the fact that the coefficients of $\Lambda(s)$ are tensors of primitive elements with respect to the Drinfeld coproduct. Let us prove [\[R0:int\]](#R0:int){reference-type="eqref" reference="R0:int"}. It is clear that $\mathcal{R}^{0,\eta}_{V_1,V_2}(s)$ commutes with operators from $Y^{0}_{\hbar}(\mathfrak{g})$. 
Therefore, it is enough to show that $(1\ 2)\circ\mathcal{R}^{0,\eta}_{V_1,V_2}(s)$ commutes with $\underset{\scriptscriptstyle{{\operatorname{D}},s}}{\Delta}(x_{k,0}^{\pm})$, for all $k\in{\mathbf I}$. We focus on the $+$ case, the other one being similar. Recall that $$\underset{\scriptscriptstyle{{\operatorname{D}},s}}{\Delta}(x_{k,0}^+) = \square(x_{k,0}^+) +\hbar\sum_{N\geqslant 0} s^{-N-1} \left( \sum_{a+b=N} (-1)^{a+1} \left(\begin{array}{c} N\\ a\end{array}\right) \xi_{k,a}\otimes x^+_{k,b} \right).$$ Let $\mathfrak{X}_k(s)$ denote the second term on the right--hand side. The desired commutation relation decouples into the following two: $$\begin{aligned} \operatorname{Ad}(\mathcal{R}^{0,\eta}_{V_1,V_2}(s))\cdot x^+_{k,0}\otimes 1 &= x^+_{k,0}\otimes 1 + \mathfrak{X}_k^{\operatorname{op}}(-s), \\ \operatorname{Ad}(\mathcal{R}^{0,\eta}_{V_1,V_2}(s)^{-1})\cdot 1\otimes x^+_{k,0} &= 1\otimes x^+_{k,0} + \mathfrak{X}_k(s). \end{aligned}$$ Note that the second follows from the first, given Part [\[R0:unitary\]](#R0:unitary){reference-type="eqref" reference="R0:unitary"} of the theorem (unitarity). For the first, we write $$x^+_{k,0}\otimes 1 + \mathfrak{X}^{\operatorname{op}}_k(-s) = \sum_{a=0}^{\infty} x^+_{k,a}\otimes \partial_s^{(a)}(\xi_k(s)).$$ The desired relation is then a consequence of the following claim. For every $k\in{\mathbf I}, n\in\mathbb{Z}_{\geqslant 0}$ and $y\in Y^{0}_{\hbar}(\mathfrak{g})$, we have $$\operatorname{ad}(\Lambda_{V_1,V_2}^{\eta}(s))\cdot (x^+_{k,n}\otimes y) = \sum_{a=0}^{\infty} x^+_{k,n+a}\otimes \partial_s^{(a)}(t_k(s))y.$$ We show that this identity is true for the formal $\Lambda(s)$ in Proposition [Proposition 7](#pr:Rl){reference-type="ref" reference="pr:Rl"} [\[Rl:comm\]](#Rl:comm){reference-type="eqref" reference="Rl:comm"}.
To deduce it for the meromorphic functions $\Lambda_{V_1,V_2}^\eta(s)$, we apply the difference operator $\mathcal{D}(\mathsf{p})=(\mathsf{p}-\mathsf{p}^{-1})\det(\mathbf{B}(\mathsf{p}))$ to both sides to conclude that they are solutions to the same difference equation, with the same asymptotic expansion as $s\to\infty$. Hence, by the uniqueness result of Theorem [Theorem 11](#appA:thm){reference-type="ref" reference="appA:thm"}, they are equal. Our construction of $\mathcal{R}^0(s)$ is weight preserving. So, let $V_1,V_2\in\mathcal{O}(Y_{\hbar}(\mathfrak{g}))$ and $\mu_\ell\in P(V_\ell)$ ($\ell=1,2$) be two fixed weights. All the operators considered in Sections [7.5](#ssec:tau){reference-type="ref" reference="ssec:tau"}--[7.7](#ssec:Rg-reg){reference-type="ref" reference="ssec:Rg-reg"} can be viewed as meromorphic $\operatorname{End}(V_1[\mu_1]\otimes V_2[\mu_2])$--valued functions of a complex variable $s$, which are regular near $s=\infty$. We will be interested in both their functional nature and their naturality with regard to $V_1,V_2$. Therefore, their Taylor series expansions will be shown to be the evaluation on $V_1\otimes V_2$ of an element of $Y^{0}_{\hbar}(\mathfrak{g})^{\otimes 2}[\negthinspace[s^{-1}]\negthinspace]$. An identity among such operators can be shown either functionally, or formally. The functional proofs, by which we mean proofs involving the usual "contour deformation" trick, can be found in [@sachin-valerio-III] (we will make precise citations when needed). Here, we also give the formal proofs --- both for completeness and for their aesthetic beauty. ## $\mathsf{p}$--Cartan matrix {#ssec:T-cartan} Let $\mathsf{p}$ be an indeterminate and let $\mathbf{B}(\mathsf{p}) =([d_ia_{ij}]_\mathsf{p}) \in \mathrm{M}_{{\mathbf I}\times{\mathbf I}}(\mathbb{Z}[\mathsf{p},\mathsf{p}^{-1}])$.
Here, we use the standard notation of Gaussian numbers: $$[n]_q =\frac{q^n-q^{-n}}{q-q^{-1}}.$$ Let $\mathbf{B}(\mathsf{p})^\ast$ be the adjoint matrix of $\mathbf{B}(\mathsf{p})$, so that $$\mathbf{B}(\mathsf{p})^\ast \mathbf{B}(\mathsf{p}) = \mathbf{B}(\mathsf{p})\mathbf{B}(\mathsf{p})^\ast = \det(\mathbf{B}(\mathsf{p}))\operatorname{Id}.$$ The following lemma is crucial, and is proved by direct inspection (see Appendix [10](#app:QCM){reference-type="ref" reference="app:QCM"} for the table of determinants of symmetrized, affine $\mathsf{p}$--Cartan matrices). **Lemma 3**. *The order of vanishing of $\det(\mathbf{B}(\mathsf{p}))$ at $\mathsf{p}=1$ is $2$. That is, $$\mathsf{d}^{(0)}=\left. \frac{\det(\mathbf{B}(\mathsf{p}))}{(\mathsf{p}-\mathsf{p}^{-1})^2} \right|_{\mathsf{p}=1} \text{ exists and } \neq 0\ .$$ Moreover, all the zeroes of $\det(\mathbf{B}(\mathsf{p}))$ have modulus $1$.* Below we will view $\mathsf{p}$ as an operator on either rational, matrix--valued functions of $s$, regular near $s=\infty$; or formal series in $s^{-1}$, via: $$\label{eq:T-action} \mathsf{p}\cdot f(s) = f(s-\hbar/2)\ .$$ ## The $\mathcal{T}$ operators {#ssec:tau} Recall that for $i,j\in{\mathbf I}$, we defined $$\label{eq:tau-formal} \mathcal{T}_{ij}(s) = \hbar^2 \sum_{m\geqslant 1} m!s^{-m-1} \sum_{\begin{subarray}{c} a,b\geqslant 0 \\ a+b=m-1 \end{subarray}} (-1)^a \frac{t_{i,a}}{a!}\otimes \frac{t_{j,b}}{b!}\ .$$ In Lemma 6.5 of [@GTLW], it was shown that, when evaluated on $V_1[\mu_1]\otimes V_2[\mu_2]$, $\mathcal{T}_{ij}(s)$ is the Taylor series expansion near $s=\infty$ of the contour integral $$\label{eq:tau} \mathcal{T}_{ij}(s) =\oint_{C_1} t_i'(u)\otimes t_j(u+s)\, du,$$ where $C_1$ is a contour enclosing zeroes and poles of $\xi_i(u)$ acting on $V_1[\mu_1]$, and $s$ is large enough so that $t_j(u+s)$ acting on $V_2[\mu_2]$ is holomorphic on and within $C_1$. As usual, we suppress the $2\pi\iota$ factor in our notation: $\oint = \frac{1}{2\pi\iota} \int$.
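Lemma 3 can be checked by hand in low rank. The following numeric sketch (ours, purely illustrative) does so for type $\mathsf{A}_2^{(1)}$: there all $d_i=1$, and writing $b = [2]_\mathsf{p} = \mathsf{p}+\mathsf{p}^{-1}$ one finds $\det(\mathbf{B}(\mathsf{p})) = b^3-3b-2 = (b-2)(b+1)^2$, while $(\mathsf{p}-\mathsf{p}^{-1})^2 = (b-2)(b+2)$; hence $\mathsf{d}^{(0)} = 9/4$, and the remaining zeroes satisfy $b=-1$, *i.e.*, lie on the unit circle:

```python
import cmath

# Symmetrized p-Cartan matrix of type A_2^(1) (untwisted affine sl_3):
# d_i = 1 for all i, diagonal entries [2]_p = p + p^{-1}, off-diagonal -1.
def det_B(p):
    b = p + 1 / p
    # det [[b,-1,-1],[-1,b,-1],[-1,-1,b]] = b^3 - 3b - 2 = (b-2)(b+1)^2
    return b ** 3 - 3 * b - 2

# Order-2 vanishing at p = 1: det(B(p)) / (p - p^{-1})^2 -> d^(0) = 9/4.
p = 1 + 1e-5
print(abs(det_B(p) / (p - 1 / p) ** 2 - 9 / 4) < 1e-4)         # True

# Every other zero of det(B(p)) has modulus 1: b = -1 forces p^2 + p + 1 = 0.
w = cmath.exp(2j * cmath.pi / 3)  # primitive cube root of unity
print(abs(det_B(w)) < 1e-12)                                    # True

# Adjugate of B(1): every cofactor of [[2,-1,-1],[-1,2,-1],[-1,-1,2]] is 3,
# so B(1)* = 3 * (all-ones matrix).
B1 = [[2, -1, -1], [-1, 2, -1], [-1, -1, 2]]
def cof(i, j):
    # cyclic 3x3 cofactor formula; the checkerboard sign is built in
    return (B1[(i + 1) % 3][(j + 1) % 3] * B1[(i + 2) % 3][(j + 2) % 3]
            - B1[(i + 1) % 3][(j + 2) % 3] * B1[(i + 2) % 3][(j + 1) % 3])
print(all(cof(i, j) == 3 for i in range(3) for j in range(3)))  # True
```

The last check shows that all entries of $\mathbf{B}(1)^\ast$ equal $3$; this is the constant $\ell^{(0)}$ appearing later (Lemma 4 and Table 2), in agreement with the value $n+1$ for $\mathsf{A}_n^{(1)}$ at $n=2$.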
In [\[eq:tau\]](#eq:tau){reference-type="eqref" reference="eq:tau"}, $t_j(w)$ is viewed as a single--valued function on a cut plane as in Section [4.2](#ssec:log){reference-type="ref" reference="ssec:log"}. This expression is used to show that $\exp(\mathcal{T}_{ij}(s))$ is a rational function of $s$ on $V_1[\mu_1]\otimes V_2[\mu_2]$ (see the proof of Proposition [Proposition 6](#pr:tau){reference-type="ref" reference="pr:tau"} [\[tau:fun\]](#tau:fun){reference-type="eqref" reference="tau:fun"} below). We can also rewrite [\[eq:tau-formal\]](#eq:tau-formal){reference-type="eqref" reference="eq:tau-formal"} above using $B_i(z)$ (see Section [3.5](#ssec:tir){reference-type="ref" reference="ssec:tir"} above), and the fact that $m!s^{-m-1} = (-\partial_s)^m\cdot s^{-1}$, as $$\label{eq:tau-borel} \mathcal{T}_{ij}(s) = \left. zB_i(-z)\otimes B_j(z)\right|_{z=-\partial_s} \cdot s^{-1} = \left. B_i(-z)\otimes B_j(z)\right|_{z=-\partial_s} \cdot s^{-2}.$$ **Proposition 6**. *The elements $\{\mathcal{T}_{ij}(s)\}_{i,j\in{\mathbf I}}$ have the following properties:* 1. *[\[tau:fun\]]{#tau:fun label="tau:fun"} As an operator on $V_1[\mu_1]\otimes V_2[\mu_2]$, $\exp(\mathcal{T}_{ij}(s))$ is a rational function of $s$, regular at $s=\infty$ with value $\operatorname{Id}_{V_1[\mu_1]\otimes V_2[\mu_2]}$ at $s=\infty$.* 2. *[\[tau:unitary\]]{#tau:unitary label="tau:unitary"} $\mathcal{T}_{ij}^{\operatorname{op}}(s) = \mathcal{T}_{ji}(-s)$.* 3. *[\[tau:shift\]]{#tau:shift label="tau:shift"} $\tau_a\otimes \tau_b (\mathcal{T}_{ij}(s)) = \mathcal{T}_{ij}(s+a-b)$ for each $a,b\in \mathbb{C}$.* 4. *[\[tau:comm\]]{#tau:comm label="tau:comm"} Let $k\in{\mathbf I}$, $n\in\mathbb{Z}_{\geqslant 0}$ and let $y\in Y^{0}_{\hbar}(\mathfrak{g})$.
Then, the following commutation relations hold, where $\mathsf{p}$ is the shift operator [\[eq:T-action\]](#eq:T-action){reference-type="eqref" reference="eq:T-action"}: $$\begin{aligned} [\mathcal{T}_{ij}(s), x_{k,n}^{\pm}\otimes y] &= \pm (\mathsf{p}^{d_ia_{ik}} - \mathsf{p}^{-d_ia_{ik}}) \cdot \left(\sum_{a=0}^{\infty} x^{\pm}_{k,n+a} \otimes \partial_s^{(a)}(t_j(s))y\right)\\ &=\pm (\mathsf{p}^{d_ia_{ik}} - \mathsf{p}^{-d_ia_{ik}}) \cdot \\ &\hspace*{0.5in} \hbar\left( \sum_{N\geqslant 0} N! s^{-N-1} \sum_{a+b=N} (-1)^a \frac{x_{k,n+a}^{\pm}}{a!} \otimes \frac{t_{j,b}y}{b!} \right), \\ [\mathcal{T}_{ij}(s), y\otimes x_{k,n}^{\pm}] &= \mp (\mathsf{p}^{d_ja_{jk}} - \mathsf{p}^{-d_ja_{jk}}) \cdot \left(\sum_{b=0}^{\infty} y(-\partial_s)^{(b)}(t_i(-s))\otimes x^{\pm}_{k,n+b}\right)\\ &=\pm (\mathsf{p}^{d_ja_{jk}} - \mathsf{p}^{-d_ja_{jk}}) \cdot \\ &\hspace*{0.5in}\hbar\left( \sum_{N\geqslant 0} N! s^{-N-1} \sum_{a+b=N} (-1)^a \frac{t_{i,a}y}{a!} \otimes \frac{x^{\pm}_{k,n+b}}{b!} \right).\end{aligned}$$* *[Proof]{.smallcaps}.* We remind the reader of the following general fact, proved in Claims 1 and 2 of the proof of [@sachin-valerio-III Thm. 5.5]: Let $V,W$ be two finite--dimensional $\mathbb{C}$--vector spaces, $A:\mathbb{C}\to\operatorname{End}(V)$ and $B:\mathbb{C}\to\operatorname{End}(W)$ two rational functions, taking value $\operatorname{Id}$ at $\infty$, such that $[A(s),A(s')]=0=[B(s),B(s')]$ for all $s,s'\in\mathbb{C}$. Let $\sigma(A)$ and $\sigma(B)$ be the sets of poles of $A(s)^{\pm 1}$ and $B(s)^{\pm 1}$ respectively. Then the following is a rational $\operatorname{End}(V\otimes W)$--valued function of $s$, taking value $\operatorname{Id}$ at $s=\infty$: $$X(s) = \exp\left(\oint_{C_1} A(u)^{-1}A'(u)\otimes \log(B(u+s))\, du\right).$$ Here $C_1$ is a contour enclosing $\sigma(A)$, and $s$ is large enough so that $\log(B(u+s))$ is analytic within and on $C_1$.
Moreover, we have $$X(s) = \exp\left(\oint_{C_2} \log(A(u-s))\otimes B(u)^{-1}B'(u)\, du\right).$$ This proves [\[tau:fun\]](#tau:fun){reference-type="eqref" reference="tau:fun"} and [\[tau:unitary\]](#tau:unitary){reference-type="eqref" reference="tau:unitary"}. Note that [\[tau:unitary\]](#tau:unitary){reference-type="eqref" reference="tau:unitary"} can also be easily deduced from the formal expansion [\[eq:tau-formal\]](#eq:tau-formal){reference-type="eqref" reference="eq:tau-formal"}. To prove [\[tau:shift\]](#tau:shift){reference-type="eqref" reference="tau:shift"}, note that $\tau_a B_i(z) = e^{az}B_i(z)$. Therefore, using [\[eq:tau-borel\]](#eq:tau-borel){reference-type="eqref" reference="eq:tau-borel"}, we have: $$\begin{aligned} (\tau_a\otimes \tau_b)(\mathcal{T}_{ij}(s)) &= \left.\tau_a(B_i(-z))\otimes \tau_b(B_j(z))\right|_{z=-\partial_s} \cdot s^{-2} \\ &= \left.e^{-(a-b)z} B_i(-z)\otimes B_j(z)\right|_{z=-\partial_s}\cdot s^{-2} \\ &= e^{(a-b)\partial_s}\cdot \mathcal{T}_{ij}(s) = \mathcal{T}_{ij}(s+a-b). \end{aligned}$$ In the last line, we used Taylor's theorem $e^{c\partial_s}\cdot f(s) = f(s+c)$. We remark that a "contour deformation" style proof of [\[tau:comm\]](#tau:comm){reference-type="eqref" reference="tau:comm"} is given in [@sachin-valerio-III Prop. 5.10]. Here, we give a different "formal" proof, using the expression [\[eq:tau-borel\]](#eq:tau-borel){reference-type="eqref" reference="eq:tau-borel"} of $\mathcal{T}_{ij}(s)$ in terms of $B_i(z)$ and the commutation relation [\[eq:comm-Bi\]](#eq:comm-Bi){reference-type="eqref" reference="eq:comm-Bi"}. Let us focus on the first relation (the second one can be deduced easily from the first, using the unitarity relation [\[tau:unitary\]](#tau:unitary){reference-type="eqref" reference="tau:unitary"}). Setting $c_{ik} = d_ia_{ik}\hbar/2$, we have $$\begin{aligned} [\mathcal{T}_{ij}(s), x^\pm_{k,n}\otimes y] &= \left. z[B_i(-z),x^\pm_{k,n}]\otimes B_j(z)y\right| _{z=-\partial_s} \cdot s^{-1} \\ &= \left.
\pm (e^{c_{ik}z}-e^{-c_{ik}z}) \left(\hbar\sum_{a,b\geqslant 0} (-1)^a \frac{x^\pm_{k,n+a}}{a!} \otimes \frac{t_{j,b}y}{b!} z^{a+b}\right) \right|_{z=-\partial_s} \cdot s^{-1}\\ &= \pm (e^{-c_{ik}\partial_s} - e^{c_{ik}\partial_s}) \cdot X^\pm_{k,j;n}(s) (1\otimes y)\ ,\end{aligned}$$ where $$\label{eq:X-notation} \begin{aligned} X^\pm_{k,j;n}(s) &= \hbar\sum_{N\geqslant 0} \left(\sum_{a+b=N} (-1)^a \frac{x^\pm_{k,n+a}}{a!} \otimes \frac{t_{j,b}}{b!} \right) (-\partial_s)^N \cdot s^{-1} \\ &= \hbar\sum_{N\geqslant 0} N!s^{-N-1} \left(\sum_{a+b=N} (-1)^a \frac{x^\pm_{k,n+a}}{a!} \otimes \frac{t_{j,b}}{b!}\right)\\ &= \sum_{a=0}^\infty x_{k,n+a}^{\pm}\otimes \partial_s^{(a)}(t_j(s)). \end{aligned}$$ Since $\mathsf{p}=e^{-(\hbar/2)\partial_s}$, the first relation of [\[tau:comm\]](#tau:comm){reference-type="eqref" reference="tau:comm"} follows immediately. ◻ ## The operator $\mathcal{G}(s)$ {#ssec:Rg} We now define $$\label{eq:Rg} \mathcal{G}(s) =\sum_{i,j\in{\mathbf I}} \mathbf{B}(\mathsf{p})^\ast_{ji}\cdot \mathcal{T}_{ij}(s).$$ The following is a direct corollary of Proposition [Proposition 6](#pr:tau){reference-type="ref" reference="pr:tau"} and the symmetry of $\mathbf{B}(\mathsf{p})^\ast$. **Corollary 2**. *The element $\mathcal{G}(s)$ has the following properties:* 1. *[\[Rg:1\]]{#Rg:1 label="Rg:1"} $\exp(\mathcal{G}(s))$ is a rational function of $s$, taking value $1$ at $s=\infty$.* 2. *[\[Rg:2\]]{#Rg:2 label="Rg:2"}$\mathcal{G}^{\operatorname{op}}(s) = \mathcal{G}(-s)$.* 3. *[\[Rg:3\]]{#Rg:3 label="Rg:3"} $\tau_a\otimes \tau_b (\mathcal{G}(s)) = \mathcal{G}(s+a-b)$ for each $a,b\in \mathbb{C}$.* 4. *[\[Rg:4\]]{#Rg:4 label="Rg:4"} Let $k\in{\mathbf I},n\in\mathbb{Z}_{\geqslant 0}$ and let $y\in Y^{0}_{\hbar}(\mathfrak{g})$. 
Then, we have the following commutation relations: $$\begin{aligned} [\mathcal{G}(s), x_{k,n}^\pm\otimes y] &= \pm (\mathsf{p}-\mathsf{p}^{-1})\det(\mathbf{B}(\mathsf{p})) \cdot \sum_{a=0}^\infty x^{\pm}_{k,n+a}\otimes y\partial_s^{(a)}(t_k(s)),\\ [\mathcal{G}(s),y\otimes x_{k,n}^\pm] &= \mp (\mathsf{p}-\mathsf{p}^{-1})\det(\mathbf{B}(\mathsf{p})) \cdot \sum_{b=0}^{\infty} y(-\partial_s)^{(b)}(t_k(-s))\otimes x_{k,n+b}^{\pm}.\end{aligned}$$* ## Expansion and regularization of $\mathcal{G}$ {#ssec:Rg-reg} The following lemma computes the expansion of $\mathcal{G}(s)$ in $s^{-1}$, and is crucial in carrying out a "regularization" argument later. **Lemma 4**. *$\mathcal{G}(s)$ admits the expansion $$\begin{aligned} \hbar^{-2}\mathcal{G}(s) &= \ell^{(0)}\EuScript{C}_0\otimes\EuScript{C}_0 s^{-2} + 2\ell^{(0)}\left(-\EuScript{C}_1\otimes \EuScript{C}_0 + \EuScript{C}_0\otimes \EuScript{C}_1\right)s^{-3} \\ &\hspace*{0.2in} + 6 \left(\ell^{(0)}\left(\frac{\EuScript{C}_2\otimes \EuScript{C}_0}{2} - \EuScript{C}_1\otimes \EuScript{C}_1 + \frac{\EuScript{C}_0\otimes\EuScript{C}_2}{2}\right)+ \frac{\hbar^2}{4} \mathsf{g}_0\right)s^{-4} + O(s^{-5}),\end{aligned}$$ where $\ell^{(0)}$ and $\mathsf{g}_0$ are given as follows:* - *$\ell^{(0)}\in\mathbb{Z}_{>0}$ is a constant depending on $\mathfrak{g}$ defined by $\ell^{(0)}a_ia_j = \mathbf{B}(1)^\ast_{i,j}$ for every $i,j\in{\mathbf I}$ (see Table [2](#table){reference-type="ref" reference="table"} below).* - *$\mathsf{g}_0=\sum_{i,j} \mathbf{B}^{\ast,(2)}_{ij} t_{i,0}\otimes t_{j,0}\in\mathfrak{h}\otimes\mathfrak{h}$, where $\mathbf{B}^{\ast,(2)}$ is the coefficient of $t^2$ in the Taylor series expansion of $\mathbf{B}(e^t)^\ast$ near $t=0$: $$\mathbf{B}(e^t)^* = \mathbf{B}^\ast + t^2 \mathbf{B}^{\ast,(2)} + O(t^4).$$* *[Proof]{.smallcaps}.* Using the definition of $\mathcal{G}(s)$, and the formula [\[eq:tau-borel\]](#eq:tau-borel){reference-type="eqref" reference="eq:tau-borel"} for $\mathcal{T}_{ij}(s)$, we obtain $$\mathcal{G}(s) = \sum_{i,j}
\mathbf{B}(\mathsf{p})^{\ast}_{ij} \cdot \mathcal{T}_{ij}(s) = \left.\sum_{i,j} \mathbf{B}(e^{\frac{\hbar}{2}z})^* B_i(-z)\otimes B_j(z) \right|_{z=-\partial_s} \cdot s^{-2}.$$ For notational simplicity, let us write $$\mathcal{T}_{ij}^{(n)} =\sum_{a=0}^n (-1)^a \frac{t_{i,a}}{a!}\otimes \frac{t_{j,n-a}}{(n-a)!}\ ,$$ so that $B_i(-z)\otimes B_j(z) = \hbar^2\sum_{n\geqslant 0} \mathcal{T}_{ij}^{(n)}z^n$. With this notation at hand, we can write the expansion of $\mathcal{G}(s)$ as $$\label{Rg-exp-proof} \mathcal{G}(s) = \hbar^2 \sum_{N\geqslant 0} (N+1)!s^{-N-2} \left( \sum_{n=0}^{\lfloor\frac{N}{2}\rfloor} \frac{\hbar^{2n}}{2^{2n}} \sum_{i,j} \mathbf{B}^{*,(2n)}_{ij} \mathcal{T}_{ij}^{(N-2n)} \right),$$ where $\mathbf{B}^{*,(2n)}$ is the coefficient of $t^{2n}$ in the Taylor expansion of $\mathbf{B}(e^t)^\ast$ at $t=0$: $$\mathbf{B}(e^t)^\ast = \sum_{n=0}^{\infty} \mathbf{B}^{*,(2n)} t^{2n}.$$ Thus, the coefficient of $s^{-2}$ in $\hbar^{-2}\mathcal{G}(s)$ is given by $\sum_{ij} \mathbf{B}^{*,(0)}_{ij} t_{i,0}\otimes t_{j,0}$. Note that $\mathbf{B}^{*,(0)} = \mathbf{B}^\ast$ is the adjoint of the corank $1$, symmetric matrix $\mathbf{B}=D\mathbf{A}$, and hence is of rank $1$. Thus, its rows (and columns, as it is symmetric) are scalar multiples of $\EuScript{C}_0$. In other words, there is a constant $\ell^{(0)}$ such that $$\mathbf{B}^*_{ij} = \ell^{(0)}a_ia_j\, \text{ for every } i,j\in{\mathbf I}\ .$$ This shows that the coefficient of $s^{-2}$ in $\hbar^{-2}\mathcal{G}(s)$ is $\sum_{ij} \mathbf{B}^*_{ij}t_{i,0}\otimes t_{j,0} = \ell^{(0)}\EuScript{C}_0\otimes\EuScript{C}_0$, as claimed in the statement of the lemma.
Similarly, we obtain from [\[Rg-exp-proof\]](#Rg-exp-proof){reference-type="eqref" reference="Rg-exp-proof"} that the coefficients of $s^{-3}$ and $s^{-4}$ in $\hbar^{-2}\mathcal{G}(s)$ are given by $$2 \sum_{ij} \mathbf{B}^*_{ij} \mathcal{T}_{ij}^{(1)} \quad \text{ and }\quad 6 \sum_{ij} \mathbf{B}^*_{ij} \mathcal{T}_{ij}^{(2)}+\frac{6\hbar^2}{4}\mathsf{g}_0,$$ respectively. Since $\mathbf{B}^*_{ij} = \ell^{(0)}a_ia_j$ for each $i,j\in {\mathbf I}$, we have $$\begin{gathered} 2 \sum_{ij} \mathbf{B}^*_{ij} \mathcal{T}_{ij}^{(1)}= 2 \ell^{(0)}(\EuScript{C}_0\otimes\EuScript{C}_1 - \EuScript{C}_1\otimes\EuScript{C}_0),\\ 6 \sum_{ij} \mathbf{B}^*_{ij} \mathcal{T}_{ij}^{(2)}=6 \ell^{(0)}\left(\frac{\EuScript{C}_0\otimes \EuScript{C}_2}{2}- \EuScript{C}_1\otimes \EuScript{C}_1+ \frac{\EuScript{C}_2\otimes \EuScript{C}_0}{2}\right).\end{gathered}$$ Combining these facts, we can conclude that the $s^{-3}$ and $s^{-4}$ coefficients of $\hbar^{-2}\mathcal{G}(s)$ are as stated in the lemma. We are left to compute the explicit value of the constant $\ell^{(0)}$. To this end, observe that for any fixed $i\in {\mathbf I}$, we have $$\ell^{(0)}= \frac{\det(\mathbf{B}(1)^{(i)})}{a_i^2},$$ where $\mathbf{B}(1)^{(i)}$ is the submatrix obtained by removing the $i$--th row and $i$--th column from $\mathbf{B}$.
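For instance, in type $\mathsf{A}_n^{(1)}$ every mark is $a_i=1$ and $\mathbf{B}(1)=\mathbf{A}$, so the formula predicts that deleting any node from the affine Cartan matrix leaves a determinant equal to the same value $\ell^{(0)}=n+1$. A short Python sketch (an illustration, not part of the proof) confirming independence of the deleted node:

```python
# Illustrative check of l0 = det(B(1)^{(i)}) / a_i^2 in type A_n^(1),
# where a_i = 1 for every node and B(1) = A is the cyclic affine Cartan matrix.

def det(M):
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def affine_A(n):
    # Cartan matrix of A_n^(1) (n >= 2): a cycle on n + 1 nodes
    N = n + 1
    return [[2 if i == j else (-1 if (i - j) % N in (1, N - 1) else 0)
             for j in range(N)] for i in range(N)]

def minor(M, i):
    # delete the i-th row and i-th column
    return [row[:i] + row[i + 1:] for k, row in enumerate(M) if k != i]

for n in [2, 3, 4, 5]:
    B = affine_A(n)
    # every principal minor has determinant n + 1, independently of the node
    assert {det(minor(B, i)) for i in range(n + 1)} == {n + 1}
```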
By choosing $i$ so that $a_i=1$ and $\mathbf{B}(1)^{(i)}$ is a (connected) finite--type Dynkin diagram, whose determinants are known (see, for instance, [@kac §4.8, Table Fin]), we obtain the explicit values of $\ell^{(0)}$ listed in Table [2](#table){reference-type="ref" reference="table"} below:

| Type of $\mathfrak{g}$ | $\mathsf{A}_n^{(1)}$ | $\mathsf{B}_n^{(1)}$ | $\mathsf{C}_n^{(1)}$ | $\mathsf{D}_n^{(1)}$ | $\mathsf{E}_6^{(1)}$ | $\mathsf{E}_7^{(1)}$ | $\mathsf{E}_8^{(1)}$ | $\mathsf{F}_4^{(1)}$ | $\mathsf{G}_2^{(1)}$ |
|---|---|---|---|---|---|---|---|---|---|
| $\ell^{(0)}$ | $n+1$ | $2^n$ | $4$ | $4$ | $3$ | $2$ | $1$ | $4$ | $3$ |

: Table of values of $\ell^{(0)}$

| Type of $\mathfrak{g}$ | $\mathsf{A}_{2n}^{(2)}$ | $\mathsf{A}_{2n-1}^{(2)}$ | $\mathsf{D}_{n+1}^{(2)}$ | $\mathsf{E}_6^{(2)}$ | $\mathsf{D}_4^{(3)}$ |
|---|---|---|---|---|---|
| $\ell^{(0)}$ | $2^n$ | $4$ | $2^n$ | $4$ | $3$ |

: Table of values of $\ell^{(0)}$

This type-by-type computation completes the proof of the lemma. ◻ Next, we define $\mathcal{G}_{\scriptscriptstyle{\operatorname{reg}}}(s)$ by setting $$\label{eq:Rg-reg} \mathcal{G}_{\scriptscriptstyle{\operatorname{reg}}}(s) =\mathcal{G}(s) - \hbar^2\left( \ell^{(0)}\EuScript{C}_0\otimes\EuScript{C}_0 s^{-2} + 2\ell^{(0)}\left(-\EuScript{C}_1\otimes \EuScript{C}_0 + \EuScript{C}_0\otimes \EuScript{C}_1\right)s^{-3} \right).$$ Since the terms removed from $\mathcal{G}(s)$ are central (see Corollary [Corollary 1](#cor:c123){reference-type="ref" reference="cor:c123"} above), we have $\operatorname{ad}(\mathcal{G}(s)) = \operatorname{ad}(\mathcal{G}_{\scriptscriptstyle{\operatorname{reg}}}(s))$. **Corollary 3**.
*Properties [\[Rg:2\]](#Rg:2){reference-type="eqref" reference="Rg:2"}--[\[Rg:4\]](#Rg:4){reference-type="eqref" reference="Rg:4"} of Corollary [Corollary 2](#cor:Rg){reference-type="ref" reference="cor:Rg"} hold for $\mathcal{G}_{\scriptscriptstyle{\operatorname{reg}}}(s)$. In addition, $$\mathcal{G}_{\scriptscriptstyle{\operatorname{reg}}}(s) = 6 \hbar^2 \left(\ell^{(0)}\left(\frac{\EuScript{C}_2\otimes \EuScript{C}_0}{2} - \EuScript{C}_1\otimes \EuScript{C}_1 + \frac{\EuScript{C}_0\otimes\EuScript{C}_2}{2}\right)+ \frac{\hbar^2}{4} \mathsf{g}_0\right)s^{-4} + O(s^{-5}),$$ where $\ell^{(0)}$ and $\mathsf{g}_0$ are as in Lemma [Lemma 4](#lem:Rg-terms){reference-type="ref" reference="lem:Rg-terms"}.* ## The difference equation and its formal solution {#ssec:R0-diff} Recall the difference equation [\[eq:L0-diff\]](#eq:L0-diff){reference-type="eqref" reference="eq:L0-diff"}: $$(\mathsf{p}-\mathsf{p}^{-1})\det(\mathbf{B}(\mathsf{p})) \cdot \Lambda(s) = \mathcal{G}(s)\ .$$ This is an irregular, additive difference equation: the difference operator has order of vanishing $3$ at $\mathsf{p}=1$, while the right--hand side is only $O(s^{-2})$. In more elementary terms, this equation has no solution in $Y^{0}_{\hbar}(\mathfrak{g})^{\otimes 2}[\negthinspace[s^{-1}]\negthinspace]$, since the difference operator applied to any such formal power series results in a series starting with $s^{-4}$. The following simple lemma makes this more precise. **Lemma 5**. *Let $\mathcal{A}$ be an arbitrary vector space over $\mathbb{C}$, and let $F(s) = \sum_{n\geqslant 0} F_n s^{-n-1}\in\mathcal{A}[\negthinspace[s^{-1}]\negthinspace]$. Let $\hbar\in\mathbb{C}^{\times}$ and $\mathsf{p}\cdot F(s) = F(s-\hbar/2)$, as above.
Then, for any polynomial $D(\mathsf{p})\in\mathbb{C}[\mathsf{p}^{\pm 1}]$, we have $$D(\mathsf{p})\cdot F(s) = \sum_{N\geqslant 0} s^{-N-1} \left( \sum_{\ell=0}^N \left(\begin{array}{c} N\\ \ell\end{array}\right) F_{N-\ell} \frac{\hbar^\ell}{2^\ell} \left.\left((\mathsf{p}\partial_\mathsf{p})^{\ell}\cdot D(\mathsf{p})\right)\right|_{\mathsf{p}=1} \right)\ .$$* *[Proof]{.smallcaps}.* The proof of this lemma is a direct verification. Namely, write $D(\mathsf{p}) = \sum_{r\in\mathbb{Z}} d_r\mathsf{p}^r$. Then, we have: $$\begin{aligned} D(\mathsf{p})\cdot F(s) &= \sum_{n\geqslant 0} F_n s^{-n-1} \left(\sum_r d_r\left(1-s^{-1}r\frac{\hbar}{2}\right)^{-n-1}\right)\\ &= \sum_{n\geqslant 0} F_n s^{-n-1} \left( \sum_r d_r \sum_{\ell\geqslant 0} \left(\begin{array}{c} n+\ell\\ \ell\end{array}\right) r^\ell\frac{\hbar^\ell}{2^\ell} s^{-\ell} \right)\\ &= \sum_{N\geqslant 0} s^{-N-1} \left( \sum_{\ell=0}^N F_{N-\ell} \left(\begin{array}{c} N\\ \ell\end{array}\right) \frac{\hbar^\ell}{2^\ell} \left(\sum_r d_rr^\ell \right) \right)\ .\end{aligned}$$ The lemma now follows from the observation that $$\sum_r d_rr^\ell = \left.(\mathsf{p}\partial_\mathsf{p})^\ell\cdot D(\mathsf{p}) \right|_{\mathsf{p}=1}. \qedhere$$ ◻ **Remark 6**. The lemma above implies that if $D(\mathsf{p})$ has order of vanishing $k\in\mathbb{Z}_{\geqslant 0}$ at $\mathsf{p}=1$, then $D(\mathsf{p}) \cdot F(s) \in s^{-k-1}\mathcal{A}[\negthinspace[s^{-1}]\negthinspace]$, for every $F(s)\in s^{-1}\mathcal{A}[\negthinspace[s^{-1}]\negthinspace]$. In fact, the operation is invertible, thus yielding an isomorphism of vector spaces $$D(\mathsf{p}) : s^{-1}\mathcal{A}[\negthinspace[s^{-1}]\negthinspace] \stackrel{\sim}{\longrightarrow}s^{-k-1}\mathcal{A}[\negthinspace[s^{-1}]\negthinspace]\ .$$ **Corollary 4**. 
*There exists a unique $\Lambda(s)\in s^{-1}Y^{0}_{\hbar}(\mathfrak{g})^{\otimes 2}[\negthinspace[s^{-1}]\negthinspace]$ such that $$(\mathsf{p}-\mathsf{p}^{-1})\det(\mathbf{B}(\mathsf{p})) \cdot \Lambda(s) = \mathcal{G}_{\scriptscriptstyle{\operatorname{reg}}}(s)\ .$$* ## Properties of $\Lambda(s)$ {#ssec:Rl-props} We now turn to establishing the key properties satisfied by the element $\Lambda(s)$ from the previous corollary. Recall that $\mathrm{Prim}^{\scriptscriptstyle{\mathrm{D}}}(Y_{\hbar}(\mathfrak{g}))$ is the linear span of $\{t_{i,r}\}_{i\in{\mathbf I},r\in\mathbb{Z}_{\geqslant 0}}$. **Proposition 7**. *$\Lambda(s)$ has the following properties:* 1. *[\[Rl:prim\]]{#Rl:prim label="Rl:prim"} $\displaystyle\Lambda(s)\in \mathrm{Prim}^{\scriptscriptstyle{\mathrm{D}}}(Y_{\hbar}(\mathfrak{g}))^{\otimes 2}[\negthinspace[s^{-1}]\negthinspace]$.* 2. *[\[Rl:LT\]]{#Rl:LT label="Rl:LT"} The leading term of $\Lambda(s)$ is given by $$\Lambda(s) = s^{-1} \left(\frac{\ell^{(0)}}{\mathsf{d}^{(0)}\hbar} \left(\frac{\EuScript{C}_2\otimes\EuScript{C}_0}{2} -\EuScript{C}_1\otimes\EuScript{C}_1 + \frac{\EuScript{C}_0\otimes \EuScript{C}_2}{2}\right)+ \frac{\hbar}{4\mathsf{d}^{(0)}} \mathsf{g}_0\right)+ O(s^{-2}),$$ where $\ell^{(0)},\mathsf{g}_0$ are as given in Lemma [Lemma 4](#lem:Rg-terms){reference-type="ref" reference="lem:Rg-terms"} and $\mathsf{d}^{(0)}$ is given in Lemma [Lemma 3](#lem:order){reference-type="ref" reference="lem:order"}.* 3. *[\[Rl:unitary\]]{#Rl:unitary label="Rl:unitary"} $\Lambda^{\operatorname{op}}(s) = -\Lambda(-s)$.* 4. *[\[Rl:shift\]]{#Rl:shift label="Rl:shift"} $(\tau_a\otimes \tau_b)(\Lambda(s)) = \Lambda(s+a-b)$ for each $a,b\in \mathbb{C}$.* 5. 
*[\[Rl:comm\]]{#Rl:comm label="Rl:comm"} For each $k\in{\mathbf I}$, $n\in\mathbb{Z}_{\geqslant 0}$ and $y\in Y^{0}_{\hbar}(\mathfrak{g})$, we have the following commutation relations: $$\begin{aligned} [\Lambda(s),x_{k,n}^{\pm}\otimes y] &= \pm \sum_{a=0}^{\infty} x_{k,n+a}^{\pm} \otimes \partial_s^{(a)}(t_k(s)) y, \\ [\Lambda(s),y\otimes x_{k,n}^\pm] &= \mp \sum_{b=0}^{\infty} y(-\partial_s)^{(b)}(t_k(-s))\otimes x^{\pm}_{k,n+b}.\end{aligned}$$* *[Proof]{.smallcaps}.* Note that [\[Rl:prim\]](#Rl:prim){reference-type="eqref" reference="Rl:prim"} is obvious from the construction and [\[Rl:shift\]](#Rl:shift){reference-type="eqref" reference="Rl:shift"} follows directly from Part [\[tau:shift\]](#tau:shift){reference-type="eqref" reference="tau:shift"} of Proposition [Proposition 6](#pr:tau){reference-type="ref" reference="pr:tau"}. Let us prove [\[Rl:LT\]](#Rl:LT){reference-type="eqref" reference="Rl:LT"}. Let $\Lambda_0$ denote the coefficient of $s^{-1}$ in $\Lambda(s)$. In our case, the difference operator is $$\mathcal{D}(\mathsf{p}) = (\mathsf{p}-\mathsf{p}^{-1})\det(\mathbf{B}(\mathsf{p})) = (\mathsf{p}-\mathsf{p}^{-1})^3 \frac{\det(\mathbf{B}(\mathsf{p}))}{(\mathsf{p}-\mathsf{p}^{-1})^2}.$$ Let $\displaystyle C(\mathsf{p})= \frac{\det(\mathbf{B}(\mathsf{p}))}{(\mathsf{p}-\mathsf{p}^{-1})^2}$. The following is an easy computation: $$\left. (\mathsf{p}\partial_\mathsf{p})^{\ell}\cdot \mathcal{D}(\mathsf{p})\right|_{\mathsf{p}=1} = \sum_{j=0}^{\ell} \left(\begin{array}{c} \ell\\ j\end{array}\right) 3(3^{j-1}-1)(1+(-1)^{j-1}) \left.\left((\mathsf{p}\partial_\mathsf{p})^{\ell-j}\cdot C(\mathsf{p})\right)\right|_{\mathsf{p}=1}\ .$$ The relevant coefficient is the $\ell=j=3$ term, where we get $48\mathsf{d}^{(0)}$.
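The combinatorial factor in the sum above can be checked directly: substituting $\mathsf{p}=e^u$ turns $\mathsf{p}\partial_\mathsf{p}$ into $\partial_u$ and $(\mathsf{p}-\mathsf{p}^{-1})^3$ into $e^{3u}-3e^{u}+3e^{-u}-e^{-3u}$, whose $j$-th derivative at $u=0$ is $3^j-3+3(-1)^j-(-3)^j$. A quick Python confirmation (an illustration only) that this agrees with $3(3^{j-1}-1)(1+(-1)^{j-1})$, and that the $\ell=j=3$ value is indeed $48$:

```python
from fractions import Fraction

def direct(j):
    # j-th derivative of e^{3u} - 3e^{u} + 3e^{-u} - e^{-3u} at u = 0
    return 3 ** j - 3 + 3 * (-1) ** j - (-3) ** j

def closed_form(j):
    # the factor 3(3^{j-1} - 1)(1 + (-1)^{j-1}) from the text;
    # Fractions handle the j = 0 case, where the exponent is negative
    return 3 * (Fraction(3) ** (j - 1) - 1) * (1 + Fraction(-1) ** (j - 1))

for j in range(12):
    assert closed_form(j) == direct(j)
assert direct(3) == 48      # the only nonzero term with j <= 3
```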
Using Lemma [Lemma 5](#lem:diff-ops){reference-type="ref" reference="lem:diff-ops"}, we compare the coefficients of $s^{-4}$ in $\mathcal{D}(\mathsf{p})\cdot\Lambda(s)$ and $\mathcal{G}_{\scriptscriptstyle{\operatorname{reg}}}(s)$ from Lemma [Lemma 4](#lem:Rg-terms){reference-type="ref" reference="lem:Rg-terms"} to obtain $$48 \frac{\hbar^3}{8} \mathsf{d}^{(0)}\Lambda_0 = 6 \hbar^2\left(\ell^{(0)}\left(\frac{\EuScript{C}_2\otimes \EuScript{C}_0}{2} - \EuScript{C}_1\otimes \EuScript{C}_1 + \frac{\EuScript{C}_0\otimes\EuScript{C}_2}{2}\right)+ \frac{\hbar^2}{4} \mathsf{g}_0\right),$$ which is precisely [\[Rl:LT\]](#Rl:LT){reference-type="eqref" reference="Rl:LT"}. For [\[Rl:unitary\]](#Rl:unitary){reference-type="eqref" reference="Rl:unitary"}, we flip the tensor factors in the difference equation [\[eq:Rl-diff\]](#eq:Rl-diff){reference-type="eqref" reference="eq:Rl-diff"} to get $$\mathcal{D}(\mathsf{p})\cdot \Lambda^{\operatorname{op}}(s) = \mathcal{G}_{\scriptscriptstyle{\operatorname{reg}}}(-s).$$ On the other hand, $\mathsf{p}\cdot f(-s) = (\mathsf{p}^{-1}\cdot f(s))|_{s\mapsto -s}$, and $\mathcal{D}(\mathsf{p}^{-1}) = -\mathcal{D}(\mathsf{p})$. This gives: $$\mathcal{D}(\mathsf{p})\cdot \Lambda(-s) = -\mathcal{G}_{\scriptscriptstyle{\operatorname{reg}}}(-s).$$ Thus, by uniqueness, we have $\Lambda^{\operatorname{op}}(s) = -\Lambda(-s)$. The proof of [\[Rl:comm\]](#Rl:comm){reference-type="eqref" reference="Rl:comm"} is also based on a uniqueness argument. Namely, apply $\mathcal{D}(\mathsf{p})$ to both sides of the commutation relation. The resulting equation holds by Corollaries [Corollary 2](#cor:Rg){reference-type="ref" reference="cor:Rg"} and [Corollary 3](#cor:Rg-reg){reference-type="ref" reference="cor:Rg-reg"}. Thus both sides solve the same difference equation and hence must be equal. 
◻ ## Growth properties of the Borel transform of $\mathcal{G}_{\scriptscriptstyle{\operatorname{reg}}}(s)$ {#ssec:borel-growth} Recall from Corollary [Corollary 2](#cor:Rg){reference-type="ref" reference="cor:Rg"} that $\exp(\mathcal{G}(s))$ is a rational function of $s$, taking value $1$ at $s=\infty$. The matrix entries of $\mathcal{G}(s)$, and hence of $\mathcal{G}_{\scriptscriptstyle{\operatorname{reg}}}(s)$, are of the form $$\log\left(\prod_j \frac{s-a_j}{s-b_j}\right)+ r_0(s),$$ where $\{a_j,b_j\}_j\subset\mathbb{C}$ is a finite set of complex numbers and $r_0(s)$ is a rational function vanishing at $s=\infty$. *[Proof of Claim]{.smallcaps}.* Write $X(s) = \exp(\mathcal{G}(s))$ in its multiplicative Jordan decomposition, whose entries are again rational functions of $s$ (see [@sachin-valerio-2 Lemma 4.12]): $X(s) = X_d(s)(1+X_n(s))$. Note that entries of $X_d(s)$ are rational functions taking value $1$ at $s=\infty$, while those of $X_n(s)$ vanish at $s=\infty$. Hence, $$\mathcal{G}(s) = \log(X(s)) = \log(X_d(s)) + \sum_{r\geqslant 0} (-1)^r \frac{X_n(s)^{r+1}}{r+1}.$$ The second sum on the right--hand side is finite, since $X_n(s)$ is nilpotent. Our claim follows. ◻ **Corollary 5**. *Let $g(s)$ be a matrix entry of $\mathcal{G}_{\scriptscriptstyle{\operatorname{reg}}}(s)$. Write $g(s) = \sum_{n=0}^{\infty} g_n s^{-n-1}$ and let $\mathcal{B}\left(g\right)(t) = \sum_{n=0}^{\infty} g_n \frac{t^n}{n!}$ be its Borel transform. Then, $\mathcal{B}\left(g\right)(t)$ is an entire function of $t$, and there exist constants $C_1,C_2,R$ such that: $$\left|\mathcal{B}\left(g\right)(t)\right| \leqslant C_1 e^{C_2|t|}, \text{ for every $t$ with } |t|>R\ .$$* *[Proof]{.smallcaps}.* The proof is an easy computation of the Borel transform of (a) the logarithm of a rational function, and (b) a rational function taking value $1$ at $s=\infty$.
Namely, it follows from the following computations: $$g(s) = \log\left(\frac{s-a}{s-b}\right)\ \Rightarrow\ \mathcal{B}\left(g\right)(t) = \frac{e^{bt}-e^{at}}{t}\ .$$ $$g(s) = \frac{1}{(s-a)^{\ell+1}}\ \Rightarrow\ \mathcal{B}\left(g\right)(t) = \frac{t^{\ell}}{\ell!} e^{at}. \qedhere$$ ◻ # Laplace transform and regular difference equations {#app:LapDE} We collect some of the well--known techniques to solve linear, additive, regular difference equations. The material of this section is fairly standard, and can be found in any advanced text on complex analysis, for instance [@ablowitz-fokas; @costin; @whittaker-watson]. ## Set up and statement of the main theorem {#appA:setup} Let $D(\mathsf{p})\in\mathbb{C}[\mathsf{p}^{\pm 1}]$ and let $k\in\mathbb{Z}_{\geqslant 0}$ be its order of vanishing at $\mathsf{p}=1$. Let $g(s)$ be a meromorphic, $\mathbb{C}$--valued function of $s\in\mathbb{C}$, which is holomorphic near $s=\infty$, and has order of vanishing $k+1$ there. Thus, $g(s) = \sum_{n=k}^{\infty} g_n s^{-n-1}$. Let $\mathcal{B}\left(g\right)$ denote the Borel transform of $g(s)$: $$\mathcal{B}\left(g\right)(t) = \sum_{n=k}^{\infty} g_n \frac{t^n}{n!}\ .$$ It is an easy exercise to show that the series written above has infinite radius of convergence (assuming $g(s)$ has non--zero radius of convergence near $\infty$), hence defines an entire function of $t\in\mathbb{C}$.\ Let us fix $\chi\in\mathbb{C}^{\times}$ a non--zero step and let $\mathsf{p}$ act as shift $\mathsf{p}\cdot F(s) = F(s-\chi)$. Let $\theta=\arg(\chi)\in (-\pi,\pi]$.\ Below, we use the following notation for rays and half--planes. Let $\psi\in\mathbb{R}$ and $c\in\mathbb{R}_{>0}$. Let $\ell_\psi =\mathbb{R}_{\geqslant 0}e^{\iota\psi}$ be the ray at phase $\psi$. And, $$\mathbb{H}_{\psi,c} =\{z\in\mathbb{C}: \operatorname{Re}(ze^{-\iota\psi})>c\}\ ,$$ is the half--plane orthogonal to $\ell_\psi$, located to the right of the line perpendicular to $\ell_\psi$, passing through $ce^{\iota\psi}$. 
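As an aside, the first model Borel transform computation used in the proof of Corollary 5 can be verified by comparing series coefficients. The following Python sketch (with arbitrary sample values $a=2$, $b=5$; an illustration only, not part of the text's argument) checks that the Borel transform of $\log\frac{s-a}{s-b}$ matches the Taylor coefficients of the entire function $(e^{bt}-e^{at})/t$:

```python
from fractions import Fraction
from math import factorial

a, b = 2, 5        # arbitrary sample parameters
M = 12             # number of coefficients compared

# g(s) = log((s-a)/(s-b)) = sum_{k>=1} (b^k - a^k)/k * s^{-k},
# so the coefficient of s^{-n-1} is g_n = (b^{n+1} - a^{n+1})/(n+1)
g = [Fraction(b ** (n + 1) - a ** (n + 1), n + 1) for n in range(M)]

# Borel transform: coefficient of t^n in B(g)(t) is g_n / n!
borel = [gn / factorial(n) for n, gn in enumerate(g)]

# Taylor coefficients of (e^{bt} - e^{at})/t:
# coefficient of t^n is (b^{n+1} - a^{n+1})/(n+1)!
taylor = [Fraction(b ** (n + 1) - a ** (n + 1), factorial(n + 1)) for n in range(M)]

assert borel == taylor
```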
![The ray $\ell_\psi$ and corresponding half--plane $\mathbb{H}_{\psi,c}$](halfSpace.pdf){height="1in"} According to Lemma [Lemma 5](#lem:diff-ops){reference-type="ref" reference="lem:diff-ops"}, we have a unique formal power series $f(s)\in s^{-1}\mathbb{C}[\negthinspace[s^{-1}]\negthinspace]$, such that $\displaystyle D(\mathsf{p})\cdot f(s) = \sum_{n=k}^\infty g_ns^{-n-1}$. **Theorem 11**. *Assume that the following two conditions hold:* - *The roots of $D(\mathsf{p})=0$ lie on the unit circle.* - *$\mathcal{B}\left(g\right)(t)$ has at most exponential growth as $t\to\infty$. Meaning, there exist constants $R,C_1,C_2 \in\mathbb{R}_{>0}$, such that $$|\mathcal{B}\left(g\right)(t)| < C_1 e^{C_2|t|}, \text{ for every $t$ such that } |t|>R.$$* *Then, there exist two meromorphic functions $f^\eta(s)$, $\eta\in\{\uparrow,\downarrow\}$, uniquely determined by the following conditions.* 1. *[\[fn:diff\]]{#fn:diff label="fn:diff"} $D(\mathsf{p})\cdot f^\eta(s) = g(s)$.* 2. *[\[fn:hol\]]{#fn:hol label="fn:hol"} $f^\eta(s)$ is holomorphic in $\mathbb{H}^\eta$, where $\mathbb{H}^{\uparrow} = \{s : \operatorname{Re}(s/\chi)\gg 0\} = -\mathbb{H}^{\downarrow}$.* 3. 
*[\[fn:asym\]]{#fn:asym label="fn:asym"} $f^\eta(s)\sim f(s)$ as $\pm\operatorname{Re}(s/\chi)\to \infty$.* *Moreover, $f^\eta(s)$ is holomorphic in a larger domain $\mathcal{P}^\eta_\delta$, for any $\delta>0$, where, $$\mathcal{P}^{\uparrow}_\delta =\bigcup_{\psi\in \left(\theta-\frac{\pi}{2}+\delta, \theta+\frac{\pi}{2}-\delta\right)} \mathbb{H}_{\psi,C_2}\ , \qquad \mathcal{P}^{\downarrow}_{\delta} = -\mathcal{P}^{\uparrow}_{\delta}\ .$$ Here $C_2$ is the constant from our assumption on $\mathcal{B}\left(g\right)(t)$ above.* *The asymptotic expansion $f^\eta(s)\sim f(s)$ is valid in a larger sector $\Sigma^\eta_\delta$, for any $\delta>0$, $$\Sigma^{\uparrow}_\delta =\{re^{\iota\psi} : r\in\mathbb{R}_{>0}, \psi \in \left(\theta-\pi+\delta, \theta+\pi-\delta\right)\} , \qquad \Sigma^{\downarrow}_{\delta} = -\Sigma^{\uparrow}_{\delta}\ .$$* *(see Figures [1](#afig:domain){reference-type="ref" reference="afig:domain"} and [2](#afig:sector){reference-type="ref" reference="afig:sector"} below).* ![Domains $\mathcal{P}^{\eta}_\delta$ of holomorphy](domain.pdf){#afig:domain height="2in"} ![Sectors $\Sigma^{\eta}_{\delta}$ of asymptotic expansion](Pacmans.pdf){#afig:sector height="2in"} *[Proof]{.smallcaps}.* For notational convenience, we set $\chi=1$. The reader can easily verify that the statement of the theorem can be obtained from its $\chi=1$, $\theta=0$ counterpart, by a counterclockwise rotation by $\theta$.\ The proof is given in the rest of this section. We show uniqueness of $f^\eta$ in Section [8.2](#appA:uniqueness){reference-type="ref" reference="appA:uniqueness"}. The existence is based on a general technique of Laplace transforms and their asymptotic expansions obtained via Watson's lemma. We review these results in Section [8.3](#appA:watson){reference-type="ref" reference="appA:watson"}, and use them to show the existence of $f^{\eta}$ in Section [8.4](#appA:existence){reference-type="ref" reference="appA:existence"}. 
We show that the domain of holomorphy and the sector of validity of the asymptotic expansion can be enlarged, as stated in the theorem, in Section [8.5](#appA:doms){reference-type="ref" reference="appA:doms"}. ◻ ## Uniqueness {#appA:uniqueness} The uniqueness of $f^\eta(s)$ follows from the following general lemma. For this, we drop the hypothesis that the roots of $D(\mathsf{p})=0$ lie on the unit circle, since it appears naturally in the conclusion of the lemma. We will assume, without loss of generality, that $0$ is not a root of $D(\mathsf{p})$. Let $\overline{\rho}(D)$ (resp. $\underline{\rho}(D)$) be the largest (resp. smallest) modulus of a root of $D(\mathsf{p})=0$. Note that $0<\underline{\rho}(D)\leqslant \overline{\rho}(D)$. **Lemma 6**. *Let $\Sigma\subset\mathbb{C}$ be an unbounded open set satisfying: $$\begin{gathered} \text{For every } z\in\mathbb{C}, \text{ there exists } N\gg 0, \text{ such that } \\ z+n\in\Sigma,\ \text{ (resp. $z-n\in\Sigma$)}\ \ , \forall\ n\geqslant N\ .\end{gathered}$$ Assume that $\phi(s)$ is a meromorphic function of $s$, holomorphic on $\Sigma$ such that:* 1. *$\phi(s)$ is asymptotically zero in $\Sigma$. That is, $$\text{For every } n\in\mathbb{Z}_{\geqslant 0},\ \lim_{\begin{subarray}{c} s\to\infty \\ s\in\Sigma \end{subarray}} s^n\phi(s) = 0\ .$$* 2. *$D(\mathsf{p})\cdot \phi(s) = 0$.* *If $\overline{\rho}(D)\leqslant 1$ (resp. $\underline{\rho}(D)\geqslant 1$), then $\phi \equiv 0$.* *[Proof]{.smallcaps}.* For the purposes of the proof, we write $D(\mathsf{p}) = \mathsf{p}^N - \sum_{r=0}^{N-1}d_r \mathsf{p}^r$, with $d_0\neq 0$ (up to an overall shift, this is the general case). We work with the right fundamental domain.
Thus, for every $s\in \mathbb{C}$, $$\phi(s) = \sum_{n=0}^{N-1} d_n \phi(s+N-n)\ .$$ In vector notation $\vec{\Phi}(s) = (\phi(s)\ \phi(s+1)\ \cdots \ \phi(s+N-1))^{t}$, we get: $\displaystyle \vec{\Phi}(s) = \mathcal{D}\cdot \vec{\Phi}(s+1)$, where, $$\mathcal{D} = \left[ \begin{array}{ccccc} d_{N-1} & d_{N-2} & \cdots & \cdots & d_0 \\ 1 & 0 & \cdots & \cdots & 0 \\ 0 & 1 & 0 & \cdots & 0 \\ \vdots & \ddots & \ddots & \ddots & \vdots \\ 0 & \cdots & 0 & 1 & 0 \end{array} \right]$$ is the companion matrix of $D(\mathsf{p})$. The following identity is known as Gelfand's formula. Its proof can be found in any standard functional analysis book, for instance [@riesz-nagy §149]. Here $|\cdot|$ is an arbitrary, fixed norm on the space of $N\times N$ matrices: $$\lim_{n\to \infty} |\mathcal{D}^n|^{\frac{1}{n}} = \overline{\rho}(D)\ .$$ We will assume that $\overline{\rho}(D)\leqslant 1$. For the left domain, we have to use the fact that $\mathcal{D}$ is invertible ($d_0\neq 0$), thus, $|\mathcal{D}^{-n}|$ grows as $\underline{\rho}(D)^{-n}$.\ The remainder of the argument is standard. Let $s_0\in\mathbb{C}$ be fixed, and assume that we are given $\varepsilon>0$. We will show that $|\vec{\Phi}(s_0)|<\varepsilon$, hence showing that it has to be zero.\ Choose an $\ell\geqslant 1$ and $\alpha>0$. Using the asymptotically zero condition, we get $R>0$ such that $|\vec{\Phi}(s)|<\alpha |s|^{-\ell}$ for every $s\in\Sigma$ with $|s|>R$. Using the hypothesis on $\Sigma$, we choose an $N_0>0$ so that $s_0+n\in\Sigma$ and $\operatorname{Re}(s_0+n)>R$, for every $n\geqslant N_0$. The following inequality follows for every $n\geqslant N_0$: $$\begin{aligned} \left|\vec{\Phi}(s_0)\right| &= \left| \mathcal{D}^n \cdot \vec{\Phi}(s_0+n)\right| \leqslant |\mathcal{D}^n| \alpha |s_0+n|^{-\ell}\ .
\end{aligned}$$ Since $|\mathcal{D}^n|$ grows at the rate of $\overline{\rho}(D)^n$ by Gelfand's formula, it is enough to observe that $$\lim_{n\to\infty} \rho^n |s_0+n|^{-\ell} = \left\{ \begin{array}{cl} 0 & \text{ if } \rho\leqslant 1\ ,\\ \infty & \text{ otherwise}. \end{array} \right.$$ ◻ **Remark 7**. It is worth mentioning that the uniqueness fails in general. For instance, consider the equation $f(t+1) - 2f(t) = 0$. It has infinitely many solutions $\lambda 2^{t}$ ($\lambda\in\mathbb{C}$), which are all asymptotically zero as $\operatorname{Re}(t)\to-\infty$. ## Laplace's theorem and Watson's lemma {#appA:watson} The following theorem summarizes the fundamental results of Laplace transforms and their asymptotic expansions. For a proof, see [@ablowitz-fokas §6.2.2] or [@costin §3.3, 3.4]. **Theorem 12**. *Assume that $\psi\in\mathbb{R}$ and $\ell_{-\psi} = \mathbb{R}_{>0}e^{-\iota\psi}$ is the ray at phase $-\psi$. Assume given a continuous function $F(t)$, $t\in\ell_{-\psi}$ such that* - *$|F(t)|$ grows at most exponentially as $t\to\infty$. That is, there are constants $R,C_1,C_2\in\mathbb{R}_{>0}$ such that $$|F(t)| < C_1 e^{C_2|t|}, \text{ for } |t|>R,\ t\in\ell_{-\psi}\ .$$* - *$F(t)$ has at worst logarithmic singularity as $t\to 0$.
That is, there are constants $r,c_1,c_2\in\mathbb{R}_{>0}$ such that $$|F(t)|<c_1 t^{c_2-1}, \text{ for } |t|<r,\ t\in\ell_{-\psi}\ .$$* *Then, the following formula defines a function of $z$, holomorphic on the half plane $\mathbb{H}_{\psi,C_2}$: $$\mathcal{L}\left(F\right)_\psi(z) =\int_{\ell_{-\psi}} F(t) e^{-tz} dt\ .$$* *Moreover, if $F(t)\sim \sum F_n \frac{t^n}{n!}$ as $t\to 0$ along $\ell_{-\psi}$, then we have the following asymptotic expansion as $\operatorname{Re}(ze^{-\iota\psi})\to\infty$: $$\mathcal{L}\left(F\right)_\psi(z) \sim \sum_{n=0}^{\infty} F_n z^{-n-1}\ .$$ This expansion remains valid as $|z|\to\infty$ in $$z\in\Sigma_{\psi}^{\delta} =\{re^{\iota t} : r\in\mathbb{R}_{>0}, t\in (\psi-\pi/2+\delta,\psi+\pi/2-\delta)\}\ ,$$ for any $\delta>0$.* ![Sector $\Sigma_\psi^{\delta}$](SigmaPsiDelta.pdf){#afig:asym height="1.5in"} **Remark 8**. Note that the last part of the theorem is a triviality, since $$|s|\to\infty,\ s\in\Sigma_\psi^{\delta} \text{ if, and only if } \operatorname{Re}(se^{-\iota\psi})\to\infty\ .$$ ## Existence {#appA:existence} Let $\psi\in\mathbb{R}$, and define: $$f_\psi(s) =\int_{\ell_{-\psi}} \frac{\mathcal{B}\left(g\right)(t)}{D(e^t)} e^{-ts}\, dt\ .$$ We have to verify that the integrand satisfies the conditions of Theorem [Theorem 12](#athm:watson){reference-type="ref" reference="athm:watson"} above, as long as $\ell_{-\psi}$ does not contain any roots of $D(e^t)=0$. As roots of $D(\mathsf{p})=0$ are on the unit circle, this excludes the rays $\psi = \pm \frac{\pi}{2}$.\ As $t\to\infty$ along $\ell_{-\psi}$, either $\operatorname{Re}(t)\to\infty$ or $-\operatorname{Re}(t)\to\infty$. It is not difficult to see that $|D(e^t)|$ approaches a finite, positive number, or $\infty$, as $\pm \operatorname{Re}(t)\to\infty$. In either case, we can choose $c>0$ and $R>0$, such that $|D(e^t)|>c$ for every $t\in\ell_{-\psi}$, with $|t|>R$.
Combined with our hypothesis on $\mathcal{B}\left(g\right)(t)$, the exponential growth condition is met for our kernel. We note that, by our hypothesis on the order of vanishing, $\frac{\mathcal{B}\left(g\right)(t)}{D(e^t)}$ is holomorphic near $t=0$.\ Thus for $\psi\not=\pm\frac{\pi}{2}$ modulo $2\pi$, we get a holomorphic function on $\mathbb{H}_{\psi,C_2}$. Moreover, $$D(\mathsf{p})\cdot f_\psi(s) = \int_{\ell_{-\psi}} \frac{\mathcal{B}\left(g\right)(t)}{D(e^t)} D(e^t) e^{-ts}\, dt = \int_{\ell_{-\psi}} \mathcal{B}\left(g\right)(t) e^{-ts}\, dt = g(s)\ .$$ This functional equation allows us to extend $f_\psi(s)$ as a meromorphic function of $s\in\mathbb{C}$. Moreover, as $|s|\to\infty$, $s\in\Sigma^\delta_\psi$ (see Figure [3](#afig:asym){reference-type="ref" reference="afig:asym"}), we get the following asymptotic expansion of $f_\psi$: let $\displaystyle\frac{\mathcal{B}\left(g\right)(t)}{D(e^t)} = \sum_{n=0}^{\infty} \beta_n \frac{t^n}{n!}$ be its Taylor series expansion. Then, $$f_\psi(s) \sim \sum_{n=0}^{\infty} \beta_n s^{-n-1}\ .$$ By uniqueness of the formal solution to $D(\mathsf{p})\cdot f(s)=g(s)$, the series on the right--hand side is $f(s)$ for all $\psi$.\ Hence, we obtain $f^\uparrow$ for $\psi=0$ and $f^{\downarrow}$ for $\psi=\pi$. This proves the existence of the claimed meromorphic solutions. ## Extension of domains {#appA:doms} We will now show that $f_{\psi_1}(s) = f_{\psi_2}(s)$ for any $\psi_1,\psi_2\in (-\pi/2,\pi/2)$. Similarly, for $\psi_1,\psi_2\in (\pi/2,3\pi/2)$. Therefore, $f^{\uparrow} = f_\psi$ for any $\psi\in (-\pi/2,\pi/2)$, and $f^{\downarrow} = f_\psi$ for any $\psi\in (\pi/2,3\pi/2)$. This shows that $f^\uparrow$ is holomorphic in $\mathcal{P}^\uparrow_\delta$, and $f^\uparrow\sim f$ is valid in $\Sigma^\uparrow_\delta$ for any $\delta>0$.\ Now, let $\psi_1,\psi_2\in (-\pi/2,\pi/2)$, $\psi_1<\psi_2$.
As $f_{\psi_1}(s)=f_{\psi_2}(s)$ is an identity between two meromorphic functions solving the same difference equation, it is enough to verify it for $s\in\mathbb{H}_{\psi_1,C_2}\cap \mathbb{H}_{\psi_2,C_2}$. Note that this set satisfies the hypotheses of Lemma [Lemma 6](#alem:uniqueness){reference-type="ref" reference="alem:uniqueness"}. Fix a constant $C_3>C_2$, and assume that $s\in \mathbb{H}_{\psi_1,C_2}\cap \mathbb{H}_{\psi_2,C_2}$ is such that $\operatorname{Re}(se^{-\iota\theta})\geqslant C_3$ for every $\theta\in [\psi_1,\psi_2]$. Then, we get: $$f_{\psi_1}(s) - f_{\psi_2}(s) = \int_{\ell_{-\psi_1}} - \int_{\ell_{-\psi_2}} \left(\frac{\mathcal{B}\left(g\right)(t)}{D(e^t)} e^{-ts}\right)\, dt\ .$$ Let $R>0$ and let $\mathcal{C}_R$ be the closed contour consisting of three smooth components: (a) straight (directed) line segment $L_{1,R}$ from $0$ to $Re^{-\iota\psi_1}$, (b) circular arc $\gamma_R$ from $Re^{-\iota\psi_1}$ to $Re^{-\iota\psi_2}$, and (c) directed segment from $Re^{-\iota\psi_2}$ to $0$, denoted by $-L_{2,R}$. ![The contour $\mathcal{C}_R$](piePiece.pdf){height="1in"} As the integrand is holomorphic on the right half--plane, by Cauchy's theorem: $$\int_{\mathcal{C}_R} \frac{\mathcal{B}\left(g\right)(t)}{D(e^t)} e^{-ts}\, dt = 0$$ Therefore, we get: $$\begin{aligned} f_{\psi_1}(s) - f_{\psi_2}(s) &= \lim_{R\to\infty} \int_{L_{1,R}} - \int_{L_{2,R}} \left(\frac{\mathcal{B}\left(g\right)(t)}{D(e^t)} e^{-ts}\, dt\right)\\ &= \lim_{R\to\infty} \int_{\mathcal{C}_R} - \int_{\gamma_R} \left(\frac{\mathcal{B}\left(g\right)(t)}{D(e^t)} e^{-ts}\, dt\right)\\ &= -\lim_{R\to\infty} \int_{\gamma_R} \frac{\mathcal{B}\left(g\right)(t)}{D(e^t)} e^{-ts}\, dt \end{aligned}$$ Thus, it remains to show that the last limit is zero. 
Recall that we have the following bound on our kernel: there exist constants $C_1,C_2$ and $R_1$ such that $$\left| \frac{\mathcal{B}\left(g\right)(t)}{D(e^t)} \right| < C_1 e^{C_2|t|}, \text{ for every $t$ with } |t|>R_1\ .$$ And, $s\in\mathbb{C}$ is chosen so that $|e^{-st}| = e^{-\operatorname{Re}(st)} \leqslant e^{-C_3R}$ ($C_3>C_2$ was a fixed constant). Hence, for $R\gg 0$, we obtain: $$\left| \int_{\gamma_R} \frac{\mathcal{B}\left(g\right)(t)}{D(e^t)} e^{-ts}\, dt\right| \leqslant C_1 e^{(C_2-C_3)R} (\psi_2-\psi_1)R.$$ The last quantity clearly approaches $0$, as $R\to \infty$, and we are done. # The augmented symmetrized Cartan matrix {#app:Augmented} In this appendix, we give a full proof of Proposition [Proposition 1](#P:full-rank){reference-type="ref" reference="P:full-rank"}, which states that the augmented matrix $(\mathbf{B}\,|\, \underline{\mu})$ has full rank, where $\underline{\mu}$ is the column vector with $j$-th component $\mu_j=\sum_{i\in {\mathbf I}} a_i (d_ia_{ij})^3$ (see Corollary [Corollary 1](#cor:c123){reference-type="ref" reference="cor:c123"}) and $\mathbf{B}=(d_i a_{ij})_{i,j\in {\mathbf I}}$ is the symmetrized Cartan matrix of $\mathfrak{g}$. ## A characterization of $\mu_j$ {#appB-mu_j} We begin with the following simple lemma. **Lemma 7**. *For each $i,j\in {\mathbf I}$, set $m_{ji}=a_{ji}(1-a_{ji}^2)$. Then $$\mu_j/d_j^3=6a_j-\sum_{\substack{i\in {\mathbf I}\\a_{ji}<-1}}m_{ji}a_i \quad \forall\; j\in {\mathbf I}.$$ In particular, $\mu_j=6d_j^3 a_j$ unless there is $i_j\in {\mathbf I}$ such that $a_{j,i_j}<-1$. If such an index $i_j$ exists then, provided $\mathfrak{g}$ is not of type $\mathsf{C}_2^{(1)}$, it is unique and one has $$\mu_j= \begin{cases} 6d_j^3(a_j-a_{i_j}) \quad &\text{ if }\; a_{j,i_j}=-2,\\ 6d_j^3(a_j-4a_{i_j}) \quad &\text{ if }\; a_{j,i_j}=-3. \end{cases}$$* *[Proof]{.smallcaps}.* Fix $j\in {\mathbf I}$. 
Then, since $\mu_j=\sum_{i\in {\mathbf I}} a_i (d_ia_{ij})^3$ and $d_ia_{ij}=d_ja_{ji}$ by the symmetry of $\mathbf{B}$, we have $$\mu_j/d_j^3=\sum_{i\in {\mathbf I}} a_i a_{ji}^3=\sum_{i\in {\mathbf I}} a_i a_{ji}+\sum_{i\in {\mathbf I}}a_i (a_{ji}^3-a_{ji})=\sum_{i\in {\mathbf I}}a_i (a_{ji}^3-a_{ji}),$$ where we have used that $\mathbf{A}\underline{\delta}=0$, with $\underline{\delta}=(a_j)_{j\in {\mathbf I}}\in \mathbb{Z}^{{\mathbf I}}$. Since $a_{jj}^3-a_{jj}=6$ and $m_{ji}=a_{ji}-a_{ji}^3$ is zero when $a_{ji}=-1$, this proves the first part of the lemma. As for the second assertion, if $i\in {\mathbf I}$ is such that $a_{ji}<-1$, then the vertices $i$ and $j$ of the Dynkin diagram of $\mathfrak{g}$ are connected by multiple edges, with an arrow going from $j$ to $i$. By [@kac §4.8, Tables Aff 1--3], any such index $i=i_j$ is unique except in type $\mathsf{C}_2^{(1)}$, where there is a single short simple root connected to two long simple roots. The claimed formulas now follow from the fact that, for each such type, we have $$\sum_{\substack{i\in {\mathbf I}\\a_{ji}<-1}}m_{ji}a_i = \begin{cases} 6a_i &\text{ if } a_{ji}=-2\\ 24a_i&\text{ if } a_{ji}=-3. \end{cases} \qedhere$$ ◻ ## The rank of $(\mathbf{B}\,|\, \underline{\mu})$ {#appB-full-rank} We now come to the following result, which is a restatement of Proposition [Proposition 1](#P:full-rank){reference-type="ref" reference="P:full-rank"}. **Proposition 8**. *The augmented matrix $(\mathbf{B}\,|\, \underline{\mu})$ has rank $|{\mathbf I}|$.* *[Proof]{.smallcaps}.* Let $\mathbf{B}_j$ denote the $j$-th column of $\mathbf{B}$.
We first note that there does not exist a vector $\underline{\gamma}\in \mathbb{Q}^{{\mathbf I}}$ such that $\mathbf{B}\underline{\gamma}\in (\mathbb{Q}_{>0})^{{\mathbf I}}.$ Indeed, if there did, then $\underline{\delta}=(a_j)_{j\in {\mathbf I}}\in (\mathbb{Z}_{>0})^{{\mathbf I}}$ would satisfy $\underline{\delta}^t (\mathbf{B}\underline{\gamma})>0$, but this contradicts that $\underline{\delta}^t(\mathbf{B}\underline{\gamma})=(\underline{\delta}^t \mathbf{B})\underline{\gamma}=0.$ Hence, to prove the proposition it is sufficient to show that there is a vector $\underline{\gamma}\in \mathbb{Z}^{{\mathbf I}}$ such that $$\label{ul-gamma} \underline{\gamma}\in (\mathbb{Z}\underline{\mu} + \mathrm{range}(\mathbf{B})) \cap (\mathbb{Z}_{>0})^{{\mathbf I}}.$$ If the Dynkin diagram associated to $\mathbf{A}$ is simply laced, then Lemma [Lemma 7](#L:full-rank){reference-type="ref" reference="L:full-rank"} yields $\mu_j=6a_j>0$ for all $j\in {\mathbf I}$, so we may take $\underline{\gamma}=\underline{\mu}$. Suppose instead that the Dynkin diagram of $\mathbf{A}$ belongs to the following list: $$\mathsf{C}_\ell^{(1)}\, (\ell>2), \; \mathsf{F}_4^{(1)},\; \mathsf{A}_{2\ell-1}^{(2)},\; \mathsf{E}_6^{(2)}.$$ Then it follows from Lemma [Lemma 7](#L:full-rank){reference-type="ref" reference="L:full-rank"} and [@kac Tables Aff 1--2] that $\mu_j=6d_j^3 a_j$ for all $j\in {\mathbf I}$ such that $a_{ji}\geqslant -1$ for all $i\in {\mathbf I}$, and $\mu_j=6d_j^3(a_j-a_{i})=6d_j^3$ if $j\in {\mathbf I}$ is such that $a_{ji}=-2$ for some $i\in {\mathbf I}$. Thus, for all these types we may again take $\underline{\gamma}=\underline{\mu}$.
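The elementary identities from Lemma [Lemma 7](#L:full-rank){reference-type="ref" reference="L:full-rank"} that drive these computations can be verified mechanically; the following short script (a sanity check only, not part of the argument) confirms the values $6$ and $24$ used above:

```python
# Check the identities behind Lemma 7: with m(a) = a*(1 - a**2),
# the diagonal term gives a_{jj}^3 - a_{jj} = 6 (since a_{jj} = 2),
# while m(-1) = 0, m(-2) = 6 and m(-3) = 24.
def m(a):
    return a * (1 - a ** 2)

assert 2 ** 3 - 2 == 6   # coefficient of a_j in mu_j / d_j^3
assert m(-1) == 0        # simple edges do not contribute to the sum
assert m(-2) == 6        # a_{j,i_j} = -2 case: mu_j = 6 d_j^3 (a_j - a_{i_j})
assert m(-3) == 24       # a_{j,i_j} = -3 case: mu_j = 6 d_j^3 (a_j - 4 a_{i_j})
print("Lemma 7 arithmetic verified")
```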
To complete the proof, it remains to show that there is $\underline{\gamma}$ as in [\[ul-gamma\]](#ul-gamma){reference-type="eqref" reference="ul-gamma"} associated to each of the following affine types: $$\mathsf{C}_2^{(1)}, \; \mathsf{G}_2^{(1)}, \; \mathsf{D}_4^{(3)},\; \mathsf{B}_\ell^{(1)},\; \mathsf{A}_{2\ell}^{(2)},\; \mathsf{D}_{\ell+1}^{(2)}.$$ We do this case by case, using the classification provided by [@kac Thm. 4.8] and appealing to the notation from Tables Fin and Aff of [@kac §4.8] as needed. *The $\mathsf{B}_\ell^{(1)}$ case*. In this case, the Dynkin diagram of $\mathfrak{g}$ is $$\dynkin [extended,labels={1,1,2,2,2,2,2},scale=1.5]B[1]{}$$ and we take ${\mathbf I}=\{0,1,\ldots, \ell\}$ with $\{1,\ldots,\ell\}$ labelling the type $\mathsf{B}_\ell$ subdiagram with vertices given by the blackened nodes (labelled from left to right in increasing order), and $i=0$ labels the extending node. In the above diagram, the node $j\in {\mathbf I}$ is marked by the number $a_j$. By Lemma [Lemma 7](#L:full-rank){reference-type="ref" reference="L:full-rank"} we have $\mu_j=6d_j^3a_j=48 a_j$ for all $0\leqslant j<\ell$ and $\mu_\ell=0$. Equivalently, $$\underline{\mu}^t=48\cdot(1,1,2,\ldots, 2,0).$$ On the other hand, we have $\mathbf{B}_{\ell}^t=(0,\ldots,-2,2)$, so we may take $$\underline{\gamma}=\underline{\mu}+\mathbf{B}_{\ell}\in (\mathbb{Z}_{>0})^{{\mathbf I}}.$$ *The $\mathsf{D}_{\ell+1}^{(2)}$ case*. We again take ${\mathbf I}=\{0,1,\ldots,\ell\}$, and label the nodes of the Dynkin diagram, which is $$\dynkin [extended, labels={1,1,1,1,1,1,1,1},scale=1.5]D[2]{}$$ from left to right in increasing order.
Since $a_j=1$ for all $j\in {\mathbf I}$, Lemma [Lemma 7](#L:full-rank){reference-type="ref" reference="L:full-rank"} yields $$\underline{\mu}^t=(0,6d_1^3,\ldots, 6d_{\ell-1}^3,0)=48\cdot (0,1,\ldots,1,0).$$ As $\mathbf{B}_0^t=(2,-2,0,\ldots,0)$ and $\mathbf{B}_\ell^t=(0,\ldots,0,-2,2)$, we deduce that $$\underline{\gamma}=\underline{\mu}+\mathbf{B}_0+\mathbf{B}_\ell\in (\mathbb{Z}_{>0})^{{\mathbf I}}.$$ *The $\mathsf{A}_{2\ell}^{(2)}$ case*. In this case, the Dynkin diagram is $$\dynkin [extended, labels={2,2,2,2,2,2,1}, scale=1.5]A[2]{even}$$ and we choose the same labeling convention as in the $\mathsf{D}_{\ell+1}^{(2)}$ case. We have $d_0=1$, $d_j=2$ for $0< j<\ell$, and $d_\ell=4$. Lemma [Lemma 7](#L:full-rank){reference-type="ref" reference="L:full-rank"} thus yields $$\underline{\mu}^t=(0,12d_1^3,\ldots,12d_{\ell-2}^3,6d_{\ell-1}^3(a_{\ell-1}-a_\ell), 6 d_\ell^3)=48\cdot (0, 2, \ldots,2, 1, 8).$$ Since $\mathbf{B}_0^t=(2,-2,0,\ldots,0)$, the vector $\underline{\gamma}=\underline{\mu}+\mathbf{B}_0$ lies in $(\mathbb{Z}_{>0})^{{\mathbf I}}$.
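The verifications in these three infinite families amount to checking positivity of a handful of integer vectors. The following script does this for the illustrative rank $\ell=4$, using only the vectors $\underline{\mu}$, $\mathbf{B}_0$ and $\mathbf{B}_\ell$ stated above (a sanity check, not part of the proof):

```python
# Verify gamma in (Z_{>0})^I for the three infinite families treated above,
# using the vectors mu, B_0, B_ell exactly as stated in the text (here ell = 4).
ell = 4

def add(*vecs):
    return [sum(entries) for entries in zip(*vecs)]

# Type B_ell^(1): mu = 48*(1,1,2,...,2,0), B_ell = (0,...,0,-2,2).
mu_B = [48, 48] + [96] * (ell - 2) + [0]
B_ell_col = [0] * (ell - 1) + [-2, 2]
assert all(g > 0 for g in add(mu_B, B_ell_col))

# Type D_{ell+1}^(2): mu = 48*(0,1,...,1,0), B_0 = (2,-2,0,...,0), B_ell as above.
mu_D = [0] + [48] * (ell - 1) + [0]
B_0_col = [2, -2] + [0] * (ell - 1)
assert all(g > 0 for g in add(mu_D, B_0_col, B_ell_col))

# Type A_{2 ell}^(2): mu = 48*(0,2,...,2,1,8), gamma = mu + B_0.
mu_A = [0] + [96] * (ell - 2) + [48, 384]
assert all(g > 0 for g in add(mu_A, B_0_col))
print("gamma > 0 in all three families for ell =", ell)
```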
*The cases $\mathsf{C}_2^{(1)}$, $\mathsf{G}_2^{(1)}$ and $\mathsf{D}_4^{(3)}$.* In these cases, all the relevant data, together with a choice of $\underline{\gamma}$ satisfying [\[ul-gamma\]](#ul-gamma){reference-type="eqref" reference="ul-gamma"}, is given in the following table: Type Diagram $\mathbf{B}$ $\underline{\mu}$ $\underline{\gamma}$ ---------------------- --------------------------------------------- -------------- ------------------- ---------------------------------- $\mathsf{C}_2^{(1)}$ $\dynkin [extended, labels={1,2,1}]C[1]{2}$ $\underline{\mu}+\mathbf{B}_2$ $\mathsf{G}_2^{(1)}$ $\dynkin [extended, labels={1,2,3}]G[1]{2}$ $\underline{\mu}+16\mathbf{B}_3$ $\mathsf{D}_4^{(3)}$ $\dynkin [extended,labels={1,2,1}]D[3]{4}$ $\underline{\mu}-5\mathbf{B}_3$ Therefore, we may conclude that $(\mathbf{B}\,|\, \underline{\mu})$ has rank $|{\mathbf I}|$ for all affine types, where we continue to exclude types $\mathsf{A}_1^{(1)}$ and $\mathsf{A}_2^{(2)}$. ◻ # Determinant of affine quantum Cartan matrices {#app:QCM} In this short appendix, we prove Lemma [Lemma 3](#lem:order){reference-type="ref" reference="lem:order"}. We provide explicit formulae for both the determinant of the symmetrized affine quantum Cartan matrix $\mathbf{B}(\mathsf{p})$ and $$\mathsf{d}^{(0)}= \left. \frac{\det(\mathbf{B}(\mathsf{p}))}{(\mathsf{p}-\mathsf{p}^{-1})^2} \right|_{\mathsf{p}=1}\,.$$ We exclude type ${\mathsf A}_1^{(1)}$. For the exceptional types, the formulae are obtained by a direct computation while, for the other types, it is enough to proceed by a corank two Lagrange expansion. The resulting formulae are given in Table [3](#table-ADE){reference-type="ref" reference="table-ADE"} for untwisted simply laced types; in Table [4](#table-BCFG){reference-type="ref" reference="table-BCFG"} for untwisted non simply laced types; in Table [5](#table-twisted){reference-type="ref" reference="table-twisted"} for twisted types. 
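The tabulated formulae can be spot-checked numerically. The following script, which plays no role in the proofs, verifies the type $\mathsf{A}_n^{(1)}$ entry $\det(\mathbf{B}(\mathsf{p}))=\mathsf{p}^{-n-1}(\mathsf{p}^{n+1}-1)^2$ for $n=3$ at $\mathsf{p}=2$. We assume here the standard quantum Cartan matrix in this type, with diagonal entries $[2]_\mathsf{p}=\mathsf{p}+\mathsf{p}^{-1}$ and entries $-1$ between cyclically adjacent nodes; this convention is an assumption on our part, as it is not restated in the text.

```python
# Spot-check the type A_n^(1) determinant formula det(B(p)) = p^{-n-1}(p^{n+1}-1)^2.
# Assumed convention: diagonal entries p + 1/p, entries -1 between cyclically
# adjacent nodes of the affine A_n diagram.
from fractions import Fraction

def det(M):
    # Plain Laplace expansion along the first row; exact over Fraction entries
    # and perfectly adequate for a 4x4 matrix.
    if len(M) == 1:
        return M[0][0]
    total = Fraction(0)
    for j, entry in enumerate(M[0]):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * entry * det(minor)
    return total

n, p = 3, Fraction(2)          # type A_3^(1), evaluated at p = 2
size = n + 1
B = [[Fraction(0)] * size for _ in range(size)]
for i in range(size):
    B[i][i] = p + 1 / p
    B[i][(i + 1) % size] = Fraction(-1)
    B[i][(i - 1) % size] = Fraction(-1)

expected = p ** (-n - 1) * (p ** (n + 1) - 1) ** 2
assert det(B) == expected == Fraction(225, 16)
print("det B(p) =", det(B))
```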
By direct inspection, this shows that $\det(\mathbf{B}(\mathsf{p}))$ has order of vanishing $2$ at $\mathsf{p}=1$ and all its zeros lie in $U(1)$. **Remark 9**. For simply laced types , the explicit formulae for $\det(\mathbf{B}(\mathsf{p}))$ first appeared in [@suter-07 Sec. 4]. Note however that in untwisted type BCFG our formulae differ from those given in *loc. cit.*, due to a different definition of the quantum Cartan matrix in these types. Type of $\mathfrak{g}$ $\det(\mathbf{B}(\mathsf{p}))$ $\mathsf{d}^{(0)}$ ------------------------ ---------------------------------------------------------------------------- --------------------- $\mathsf{A}_{n}^{(1)}$ $\mathsf{p}^{-n-1}(\mathsf{p}^{n+1}-1)^2$ $\frac{(n+1)^2}{4}$ $\mathsf{D}_n^{(1)}$ $\mathsf{p}^{-n-1}(\mathsf{p}^2+1)(\mathsf{p}^{2(n-2)}-1)(\mathsf{p}^4-1)$ $4(n-2)$ $\mathsf{E}_6^{(1)}$ $\mathsf{p}^{-7}(\mathsf{p}^2+1)(\mathsf{p}^{6}-1)^2$ 18 $\mathsf{E}_7^{(1)}$ $\mathsf{p}^{-8}(\mathsf{p}^2+1)(\mathsf{p}^{8}-1)(\mathsf{p}^6-1)$ 24 $\mathsf{E}_8^{(1)}$ $\mathsf{p}^{-9}(\mathsf{p}^2+1)(\mathsf{p}^{10}-1)(\mathsf{p}^6-1)$ 30 : Untwisted type ADE Type of $\mathfrak{g}$ $\det(\mathbf{B}(\mathsf{p}))$ $\mathsf{d}^{(0)}$ ------------------------ ---------------------------------------------------------------------------------------------------- -------------------- $\mathsf{B}_n^{(1)}$ $\mathsf{p}^{-3n-1}(\mathsf{p}^2+1)^{n}(\mathsf{p}^{2(2n-3)}-1)(\mathsf{p}^8-1)$ $2^{n+2}(2n-3)$ $\mathsf{C}_n^{(1)}$ $\mathsf{p}^{-3n-11}(\mathsf{p}^2+1)^{n+1}(\mathsf{p}^4+1)(\mathsf{p}^{4(n+2)}-1)(\mathsf{p}^8-1)$ $2^{n+5}(n+2)$ $\mathsf{F}_4^{(1)}$ $\mathsf{p}^{-11}(\mathsf{p}^2+1)^3(\mathsf{p}^{10}-1)(\mathsf{p}^6-1)$ $120$ $\mathsf{G}_2^{(1)}$ $[3]_\mathsf{p}\mathsf{p}^{-9}(\mathsf{p}^2+1)(\mathsf{p}^{10}-1)(\mathsf{p}^6-1)$ $90$ : Untwisted type BCFG Type of $\mathfrak{g}$ $\det(\mathbf{B}(\mathsf{p}))$ $\mathsf{d}^{(0)}$ --------------------------- 
----------------------------------------------------------------------------------- -------------------- $\mathsf{A}_{2n}^{(2)}$ $\mathsf{p}^{-6n-5}(\mathsf{p}^2+1)^{2n}(\mathsf{p}^{2(4n+1)}-1)(\mathsf{p}^8-1)$ $4^{n+1}(4n+1)$ $\mathsf{A}_{2n-1}^{(2)}$ $\mathsf{p}^{-2(n+1)}(\mathsf{p}^2+1)(\mathsf{p}^{2(n-2)}-1)(\mathsf{p}^4-1)$ $4(n-2)$ $\mathsf{D}_{n+1}^{(2)}$ $\mathsf{p}^{-3n-2}(\mathsf{p}^2+1)^n(\mathsf{p}^{4n}-1)(\mathsf{p}^4-1)$ $2^{n+2}n$ $\mathsf{E}_6^{(2)}$ $\mathsf{p}^{-9}(\mathsf{p}^2+1)^2(\mathsf{p}^{8}-1)(\mathsf{p}^6-1)$ $48$ $\mathsf{D}_4^{(3)}$ $\mathsf{p}^{-7}(\mathsf{p}^2+1)(\mathsf{p}^{6}-1)^2$ $18$ : Twisted type This project started during a *Research in Pairs* visit at the Centro Internazionale per la Ricerca Matematica in Trento, May 21 - June 9, 2023. We are grateful to the entire CIRM team for providing us an exceptional research environment. We also would like to thank Maria Angelica Cueto for her help with the more computational aspects of the paper, and Valerio Toledano Laredo for his insightful comments. AA was supported in part by the Programme FIL 2022 of the University of Parma co-sponsored by Fondazione Cariparma. SG was supported through the Simons foundation collaboration grant 526947. CW gratefully acknowledges the support of the Natural Sciences and Engineering Research Council of Canada (NSERC), provided via the Discovery Grants Program (Grant RGPIN-2022-03298 and DGECR-2022-00440). GTLW21 M. J. Ablowitz and A. S. Fokas, *Introduction to complex variables and applications*, Cambridge Texts in Applied Mathematics, Cambridge University Press, Cambridge, 2021. J. Beck and V. G. Kac, *Finite--dimensional representations of quantum affine algebras at roots of unity*, J. Amer. Math. Soc. **9** (1996), no. 2, 391--423. S. I. Boyarchenko and S. Z. Levendorskii, *On affine Yangians*, Lett. Math. Phys. **32** (1994), no. 4, 269--274. C. M. Bender and S. A. Orszag, *Advanced mathematical methods for scientists and engineers. 
I*, Springer-Verlag, New York, 1999, Asymptotic methods and perturbation theory, Reprint of the 1978 original. M. Bershtein and A. Tsymbaliuk, *Homomorphisms between different quantum toroidal and affine Yangian algebras*, J. Pure Appl. Algebra **223** (2019), no. 2, 867--899. O. Costin, *Asymptotics and Borel summability*, Chapman & Hall/CRC Monographs and Surveys in Pure and Applied Mathematics, vol. 141, CRC Press, Boca Raton, FL, 2009. K. Costello, E. Witten, and M. Yamazaki, *Gauge theory and integrability, I*, ICCM Not. **6** (2018), no. 1, 46--119. to3em, *Gauge theory and integrability, II*, ICCM Not. **6** (2018), no. 1, 120--146. I. Damiani, *La $R$--matrice pour les algèbres quantiques de type affine non tordu*, Annales scientifiques de l'ENS **13** (1998), no. 4, 493--523. V. G. Drinfeld, *Hopf algebras and the quantum Yang-Baxter equation*, Soviet Math. Dokl. **32** (1985), no. 1, 254--258. to3em, *A new realization of Yangians and quantum affine algebras*, Soviet Math. Dokl. **36** (1988), no. 2, 212--216. N. Fröman and P. O. Fröman, *JWKB approximation. Contributions to the theory*, North-Holland Publishing Co., Amsterdam, 1965. B. Feigin, M. Jimbo, and E. Mukhin, *Commutative subalgebra of a shuffle algebra associated with quantum toroidal $\mathfrak{gl}_{m|n}$*, 2023. B. Feigin, M. Jimbo, T. Miwa, and E. Mukhin, *Branching rules for quantum toroidal $\mathfrak{gl}_n$*, Adv. Math. **300** (2016), 229--274. A. Garbali and A. Neguţ, *Computing the $R$-matrix of the quantum toroidal algebra*, J. Math. Phys. **64** (2023), no. 1, Paper No. 011702, 32. N. Guay, H. Nakajima, and C. Wendlandt, *Coproduct for Yangians of affine Kac-Moody algebras*, Adv. Math. **338** (2018), 865--911. N. Guay, V. Regelskis, and C. Wendlandt, *Vertex representations for Yangians of Kac-Moody algebras*, J. Éc. polytech. Math. **6** (2019), 665--706. S. Gautam and V. Toledano Laredo, *Yangians and quantum loop algebras*, Selecta Math. (N.S.) **19** (2013), no. 2, 271--336. 
to3em, *Yangians, quantum loop algebras, and abelian difference equations*, J. Amer. Math. Soc. **29** (2016), no. 3, 775--824. to3em, *Meromorphic tensor equivalence for Yangians and quantum loop algebras*, Publ. Math. Inst. Hautes Études Sci. **125** (2017), 267--337. S. Gautam, V. Toledano Laredo, and C. Wendlandt, *The meromorphic $R$-matrix of the Yangian*, Representation theory, mathematical physics, and integrable systems, Progr. Math., vol. 340, Birkhäuser/Springer, Cham, \[2021\] ©2021, pp. 201--269. N. Guay, *Affine Yangians and deformed double current algebras in type A*, Adv. Math. **211** (2007), no. 2, 436--484. S. Gautam and C. Wendlandt, *Poles of finite-dimensional representations of Yangians*, Selecta Math. (N.S.) **29** (2023), no. 1, Paper No. 13, 68. D. Hernandez, *Representations of quantum affinizations and fusion product*, Transform. Groups **10** (2005), no. 2, 163--200. to3em, *Drinfeld coproduct, quantum fusion category and applications*, Proc. London Math. Soc. **95** (2007), no. 3, 567--608. V. G. Kac, *Infinite dimensional Lie algebras*, Cambridge University Press, 1990. R. Kodera, *Affine Yangian action on the Fock space*, Publ. Res. Inst. Math. Sci. **55** (2019), no. 1, 189--234. S. M. Khoroshkin and V. N. Tolstoy, *Yangian double*, Lett. Math. Phys. **36** (1996), 373--402. R. Kodera and M. Ueda, *Coproduct for affine Yangians and parabolic induction for rectangular $W$-algebras*, Lett. Math. Phys. **112** (2022), no. 1, Paper No. 3, 37. D. Maulik and A. Okounkov, *Quantum groups and quantum cohomology*, Astérisque **408** (2019), 209 pp. A. Neguţ, *The $R$-matrix of the quantum toroidal algebra*, Kyoto J. Math. **63** (2023), no. 1, 23--49. F. Riesz and B. Sz.-Nagy, *Functional analysis*, French ed., Dover Books on Advanced Mathematics, Dover Publications, Inc., New York, 1990, Reprint of the 1955 original. R.
Suter, *Quantum affine Cartan matrices, Poincaré series of binary polyhedral groups, and reflection representations*, Manuscripta Math. **122** (2007), no. 1, 1--21. O. Schiffmann and E. Vasserot, *On cohomological Hall algebras of quivers: Yangians*, arXiv:1705.07491 (2017). to3em, *On cohomological Hall algebras of quivers: generators*, J. Reine Angew. Math. **760** (2020), 59--132. A. Tsymbaliuk, *The affine Yangian of $\mathfrak{gl}_1$ revisited*, Adv. Math. **304** (2017), 583--645. to3em, *Classical limits of quantum toroidal and affine Yangian algebras*, J. Pure Appl. Algebra **221** (2017), no. 10, 2633--2646. M. Ueda, *Coproduct for the Yangian of type $A_2^{(2)}$*, RIMS Kokyuroku **2161** (2020), 181--194. to3em, *Affine super Yangians and rectangular $W$-superalgebras*, J. Math. Phys. **63** (2022), no. 5, Paper No. 051701, 34. M. Varagnolo and E. Vasserot, *Double-loop algebras and the Fock space*, Invent. Math. **133** (1998), no. 1, 133--159. C. Wendlandt, *The formal shift operator on the Yangian double*, Int. Math. Res. Not. IMRN (2022), no. 14, 10952--11010. to3em, *The restricted quantum double of the Yangian*, `arXiv:2204.00983`, 2022. E. T. Whittaker and G. N. Watson, *A course of modern analysis*, 4th ed., Cambridge University Press, 1927. Y. Yang and G. Zhao, *The PBW theorem for affine Yangians*, Transform. Groups **25** (2020), no. 4, 1371--1385. [^1]: More precisely, the results from [@guay-nakajima-wendlandt] require the exclusion of types $\mathsf{A}_1^{(1)}$ and $\mathsf{A}_2^{(2)}$. The latter case, however, has been considered separately in [@Ueda-A2]. [^2]: We expect our results to readily extend to type $\mathsf{A}_2^{(2)}$. [^3]: *In this paper we use the classical notion of asymptotic expansions à la Poincaré.
Namely, for an unbounded set $S\subset\mathbb{C}$, we say $f(z)\sim \sum_{r=0}^{\infty} f_r z^{-r}$, as $z\to \infty$ in $S$, if for every $n\geq 0$, we have $$\lim_{\begin{subarray}{c} z\to\infty\\ z\in S\end{subarray}} z^n\left(f(z)-\sum_{r=0}^n f_rz^{-r}\right)= 0.$$* [^4]: The proof does not appear in print. [^5]: Note that the cocycle equation only holds as an identity of rational functions valued in $\operatorname{End}(V_1\otimes V_2\otimes V_3)$, and does not seem to possess a natural lift to $Y_\hbar(\mathfrak{g})$, see [@GTLW Rmk. 4.1].
--- abstract: | We consider branching Brownian motion in which initially there is one particle at $x$, particles produce a random number of offspring with mean $m+1$ at the time of branching events, and each particle branches at rate $\beta = 1/2m$. Particles independently move according to Brownian motion with drift $-1$ and are killed at the origin. It is well-known that this process eventually dies out with positive probability. We condition this process to survive for an unusually large time $t$ and study the behavior of the process at small times $s \ll t$ using a spine decomposition. We show, in particular, that the time when a particle gets furthest from the origin is of the order $t^{5/6}$. Keywords: branching Brownian motion, Yaglom limit theorem, Brownian taboo process, excursion theory MSC 2020 classification: 60J80, 60J70, 60F05 author: - "Pascal Maillard[^1]   and Jason Schweinsberg[^2]" title: The all-time maximum for branching Brownian motion with absorption conditioned on long-time survival --- # Introduction We will consider one-dimensional branching Brownian motion with absorption. This process was first studied in 1978 by Kesten [@kesten]. In addition to its intrinsic mathematical interest, branching Brownian motion with absorption has been applied in the study of partial differential equations [@hhk06] and has been used to model populations undergoing selection in [@bbs; @bdmm1; @bdmm2]. The process evolves as follows. At time zero, there is a single particle at $x > 0$. Each particle moves according to one-dimensional Brownian motion with a drift of $-\mu$. Each particle independently branches at rate $\beta$, and when a branching event occurs, the particle dies and is replaced by a random number of offspring. We assume the numbers of offspring produced at different branching events are independent and identically distributed, and we denote by $p_k$ the probability that an individual has $k$ offspring. 
We define $m$ so that $m + 1 = \sum_{k=0}^{\infty} k p_k$ is the mean of the offspring distribution, and we assume the offspring distribution has finite variance. We also assume that $\beta = 1/2m$, which by a scaling argument can be done without loss of generality. Kesten [@kesten] showed that if $\mu \geq 1$, then the process almost surely goes extinct, whereas if $\mu < 1$, then with positive probability, the process survives forever. We will assume that $\mu = 1$, which is the case of critical drift. Let $\zeta$ denote the time when the process goes extinct, which is almost surely finite. It was shown in [@ms20], building on earlier work in [@kesten; @bbs14], that there is a positive constant $C$ such that $$\label{eq:survival} \ensuremath{\mathbf{P}}_x(\zeta > t) \sim C x e^{x - (3 \pi^2 t/2)^{1/3}},$$ where $\sim$ means that the ratio of the two sides tends to one as $t \rightarrow \infty$ while $x$ is fixed. Yaglom-type limit theorems for the behavior of the process conditioned on the event that it survives for an unusually long time were first proved by Kesten [@kesten]. Kesten obtained estimates on the number of particles at time $t$ and the position of the right-most particle at time $t$ for the process conditioned on survival until time $t$. More precise results along these lines, as well as other results about the behavior of the process conditioned on survival until time $t$, were recently established in [@ms20]. The main results in [@ms20] focused on the behavior of the process either at times close to $t$, or at times in $[\delta t, (1 - \delta)t]$ for a fixed constant $\delta > 0$. In the present paper, we will be concerned primarily with the behavior of the branching Brownian motion at times $s \ll t$, when the process is conditioned to survive for an unusually long time $t$. We now introduce some notation. We use the standard Ulam--Harris labelling, in that particles are labeled by words over the positive integers. 
Let $N(s)$ denote the set of particles alive at time $s$. If $u \in N(s)$, then $X_u(s)$ denotes the position of the particle $u$ at time $s$, and for $0 < r < s$, we denote by $X_u(r)$ the position at time $r$ of the ancestor of this particle. Let $X(s) := \{X_u(s): u \in N(s)\}$ be the set of the locations of all particles at time $s$, and let $$M(s) := \max_{u \in N(s)} X_u(s)$$ be the location of the particle furthest from the origin at time $s$. For $s \in [0, t]$, let $$L_t(s) := c (t - s)^{1/3}, \qquad c := \bigg( \frac{3 \pi^2}{2} \bigg)^{1/3}.$$ Roughly speaking, the significance of $L_t(s)$ is that if a particle reaches $L_t(s)$ at time $s$, then there is a good chance that a descendant of this particle will still be alive at time $t$. See [@bbs14; @ms20] for more precise versions of this statement. To simplify notation, we also write $$L_t := L_t(0) = ct^{1/3}.$$ Because, throughout the paper, we will be considering asymptotics as $t \rightarrow \infty$, we will use the notation $f(t) \sim g(t)$ to mean $\lim_{t \rightarrow \infty} f(t)/g(t) = 1$ and $f(t) \ll g(t)$ to mean $\lim_{t \rightarrow \infty} f(t)/g(t) = 0$. We will write $f(t) \asymp g(t)$ if there exist constants $0 < C_1 < C_2 < \infty$ such that $C_1f(t) < g(t) < C_2f(t)$ for all $t > 0$. We will also say that a random variable $Y_t$, which depends on $t$, is $\Theta_p(f(t))$ if for all $\varepsilon> 0$, there exist constants $0 < C_1 < C_2 < \infty$ such that $P(C_1 f(t) < Y_t < C_2 f(t)) > 1 - \varepsilon$ for sufficiently large $t$. Convergence in law is denoted by $\Rightarrow$, and convergence in probability is denoted by $\rightarrow_p$. ## Main Result Our main result, Theorem [Theorem 1](#th:global_max){reference-type="ref" reference="th:global_max"} below, concerns the maximum location that any particle achieves before time $t$, as well as the time when this all-time maximum occurs, when the process is conditioned to survive until time $t$. 
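Although no simulations are used in this paper, the model defined above is straightforward to simulate, and experimenting with it can help build intuition for what follows. The sketch below uses an Euler scheme with our own illustrative choices (offspring law $p_0=1/4$, $p_2=3/4$, so that $m=1/2$ and $\beta=1/2m=1$; the step size and population cap are arbitrary):

```python
import math
import random

def simulate_bbm(x0=1.0, horizon=50.0, dt=0.01, seed=1, cap=10_000):
    """Euler scheme for BBM with drift -1, absorption at 0, branching rate
    beta = 1 and offspring law p_0 = 1/4, p_2 = 3/4 (so m = 1/2, beta = 1/2m).
    Returns (extinction time, or None if alive at the horizon; running max M)."""
    rng = random.Random(seed)
    particles = [x0]
    t, running_max = 0.0, x0
    while particles and t < horizon:
        t += dt
        nxt = []
        for x in particles:
            x += -dt + math.sqrt(dt) * rng.gauss(0.0, 1.0)  # drift -1 plus noise
            if x <= 0.0:
                continue                                    # absorbed at the origin
            if rng.random() < 1.0 * dt:                     # branch at rate beta = 1
                nxt.extend([x] * (2 if rng.random() < 0.75 else 0))
            else:
                nxt.append(x)
        particles = nxt[:cap]
        if particles:
            running_max = max(running_max, max(particles))
    return (t if not particles else None), running_max

ext, peak = simulate_bbm()
print("extinct at", ext, "all-time max", peak)
```

By construction every recorded position is positive, and the running maximum is the finite-horizon analogue of the quantity $\mathfrak M$ studied below.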
Note, in particular, that the time at which the all-time maximum is achieved is $\Theta_p(t^{5/6})$, which is much smaller than $t$. This occurs because, conditional on survival of the process for a large time $t$, with high probability a particle quickly moves very far away from the origin, getting close to $L_t$. After this initial burst, the value of $M(s)$ decreases over time as the process heads towards its extinction shortly after time $t$. See Figure 5 of [@ds07] for an illustration of this phenomenon when the drift $\mu$ is slightly larger than one. See also Theorems 1.5 and 2.9 of [@ms20] for results about the asymptotic behavior of $M(s)$ when $s \geq \delta t$, conditional on survival of the process until time $t$. **Theorem 1**. *Define $$\mathfrak M = \max_{s\ge0} M(s),\quad \mathfrak m = \operatorname*{arg\,max}_{s\ge0} M(s).$$ Suppose the position $x$ of the initial particle may depend on $t$ but satisfies $L_t - x \gg t^{1/6}$. Then as $t \rightarrow \infty$, conditional on $\zeta > t$ we have the convergence in law $$\label{globmax} \left(\frac{L_t - \mathfrak M}{t^{1/6}}, \frac{\mathfrak m}{t^{5/6}}\right) \Rightarrow \left(c^{1/2}R, 3c^{-1/2}UR\right),$$ where $R$ and $U$ are independent random variables, $R$ is Rayleigh distributed with density $2r e^{-r^2}$ on $\mbox{\msbm R}_+$, and $U$ has a uniform distribution on $[0,1]$.* **Remark 2**. *Note that the convergence in ([\[globmax\]](#globmax){reference-type="ref" reference="globmax"}) can also be written as $$\label{maxrem} \left(L_t^{-1/2}(L_t - \mathfrak M), L_t^{1/2} \frac{\mathfrak m}{t}\right) \Rightarrow (R, 3UR).$$* The proof of Theorem [Theorem 1](#th:global_max){reference-type="ref" reference="th:global_max"} can be found in section [4](#maxsec){reference-type="ref" reference="maxsec"}. 
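The limit law in Theorem [Theorem 1](#th:global_max){reference-type="ref" reference="th:global_max"} is easy to sample: since $R$ has distribution function $1-e^{-r^2}$, inverse transform sampling gives $R=\sqrt{-\log(1-U')}$ for $U'$ uniform on $[0,1]$. The following seeded sketch (sample size and tolerance are our choices) also checks the mean $\mathbf{E}[c^{1/2}R]=c^{1/2}\sqrt{\pi}/2$:

```python
import math
import random

rng = random.Random(2024)
c = (3 * math.pi ** 2 / 2) ** (1 / 3)

def sample_limit():
    # R has density 2 r exp(-r^2), sampled by inverse transform;
    # U is uniform on [0,1], independent of R.
    R = math.sqrt(-math.log(1.0 - rng.random()))
    U = rng.random()
    return c ** 0.5 * R, 3 * c ** -0.5 * U * R   # limit of ((L_t - M)/t^{1/6}, m/t^{5/6})

n = 200_000
samples = [sample_limit() for _ in range(n)]
mean_first = sum(s[0] for s in samples) / n
# E[c^{1/2} R] = c^{1/2} sqrt(pi)/2, since E[R] = sqrt(pi)/2 for this Rayleigh law.
print(mean_first, c ** 0.5 * math.sqrt(math.pi) / 2)
```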
## Related work and comments The branching Brownian motion with absorption (and drift) considered in this article is a particular example of a *branching Markov process*, as defined for example in [@inw]. Since we are interested in the case of critical drift, it is interesting to compare our results with classical results on *critical* branching processes. The behavior of critical branching processes is well-understood in the case of the classical mono-type branching process, i.e. the Bienaymé--Galton--Watson process. Under the assumption that the offspring distribution has finite variance, classical results by Kolmogorov [@Kolmogorov1938] and Yaglom [@yaglom] show that the probability of survival until generation $n$ decays in inverse proportion to $n$ and that, conditioned on survival, the size of the population, rescaled by $1/n$, converges in law to an exponential distribution. Slack [@Slack1968] extended these results to offspring distributions in the domain of attraction of an $\alpha$-stable law, with $\alpha\in (1,2]$, where the survival probability decays like $1/n^{1/(\alpha-1)}$. The books by Harris [@HarrisBook] and Athreya and Ney [@AthreyaNeyBook] contain extensions to finite-type branching processes, under a certain finite-variance condition. Harris [@HarrisBook] also treats a certain class of processes with a countably infinite number of types, which was recently generalized by de Raphélis [@deRaphelis2017; @deRaphelis2022]. The latter articles actually prove convergence of the rescaled tree to a limiting tree. This was done previously for the finite-type case by Miermont [@Miermont2008] ($\alpha=2$) and Berzunza [@Berzunza2018] ($\alpha\in (1,2]$). Critical branching Markov processes in general type space (and in continuous time) were studied in the 1970s by Hering and co-authors [@Hering1971; @Hering1977; @Hering1978; @HH1981].
Under a condition on the first moment semigroup called *uniform primitivity*, they obtain asymptotics for the survival probability of order $1/t^{1/(\alpha-1)}$, for $\alpha \in (1,2]$, as well as a Yaglom limit theorem. See also Harris, Horton, Kyprianou and Wang [@HHKW2022] for a different approach using moments and many-to-few formulae, under more restrictive assumptions. The condition of uniform primitivity is verified for branching diffusions on bounded domains under regularity assumptions on the coefficients and the domain, but it is in general not verified for branching diffusions on unbounded domains, such as the one considered in this article. The notion of *generalized principal eigenvalue* $\lambda_c$ has been successfully applied to the question of *local* survival vs. local extinction for general branching diffusions on unbounded domains in [@EK2004], and to the generalization of limit theorems in the supercritical case $\lambda_c > 0$ in, for example, [@EHK2010]. However, refined limit theorems are missing to date in the critical case $\lambda_c = 0$. In light of these results, the process studied in this article can be considered as a prototypical example of a branching diffusion in an unbounded domain whose asymptotics are radically different from the cases mentioned above. Indeed, it corresponds to the case $\alpha = 1$, as witnessed by the stretched exponential asymptotic for the survival probability from [\[eq:survival\]](#eq:survival){reference-type="eqref" reference="eq:survival"} and the appearance of an underlying continuous-state branching process driven by a $1$-stable Lévy process, called Neveu's continuous state branching process [@ms20]. The results in this article give a precise picture of the behavior of the process conditioned on survival until time $t$.
Underlying Theorem [Theorem 1](#th:global_max){reference-type="ref" reference="th:global_max"} is the perhaps surprising fact that the process conditioned on survival until time $t$ behaves radically differently from the process conditioned to survive forever[^3]. Indeed, using [\[eq:survival\]](#eq:survival){reference-type="eqref" reference="eq:survival"} and classical change-of-measure techniques for branching processes (see below), one can show that the latter can be constructed as a BBM with a distinguished particle commonly called a *spine*, moving according to a Bessel-3 process. As seen below, the process conditioned to survive until time $t$ also behaves like a certain spinal process on time-scales smaller than $t$, but the two spinal processes are comparable only until the time-scale $t^{2/3}$, when their behavior drastically changes. This leads to the global maximum appearing at a time-scale $t^{5/6} \ll t$, as shown in Theorem [Theorem 1](#th:global_max){reference-type="ref" reference="th:global_max"}. ## An auxiliary branching Brownian motion with spine To prove Theorem [Theorem 1](#th:global_max){reference-type="ref" reference="th:global_max"}, we will introduce another process, a branching Brownian motion (BBM) with a distinguished particle called the *spine*, which will approximate the original process conditioned on survival until time $t$. Its definition involves the so-called *Brownian taboo process*, introduced by Knight [@knight]. #### Brownian taboo process. Following Knight [@knight], consider standard Brownian motion started at some point $x\in (0,1)$ and killed as soon as it exits the interval $[0,1]$. Then the function $x\mapsto \sin(\pi x)$ is a non-negative eigenfunction of the infinitesimal generator of this process with eigenvalue $-\pi^2/2$. The *Brownian taboo process* is the stochastic process $(K_t)_{t\ge0}$ defined as the Doob $h$-transform of this process with respect to this eigenfunction.
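The eigenfunction property just invoked, namely $\tfrac12 (\sin \pi x)'' = -\tfrac{\pi^2}{2}\sin(\pi x)$, can be confirmed numerically by finite differences (a sanity check only):

```python
import math

# Finite-difference check that f(x) = sin(pi x) satisfies (1/2) f'' = -(pi^2/2) f,
# i.e. that it is an eigenfunction of the killed Brownian generator with
# eigenvalue -pi^2/2, as used in the Doob h-transform above.
h = 1e-5
for x in (0.1, 0.3, 0.5, 0.8):
    f = math.sin(math.pi * x)
    second = (math.sin(math.pi * (x + h)) - 2 * f + math.sin(math.pi * (x - h))) / h ** 2
    assert abs(0.5 * second - (-math.pi ** 2 / 2) * f) < 1e-4
print("generator eigenvalue -pi^2/2 confirmed numerically")
```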
Its law satisfies that for every $t\ge0$ and every bounded or positive measurable functional $F$, $$\label{eq:taboo} \mbox{\msbm E}_x[F((K_s)_{s\in [0,t]})] = \frac{1}{\sin(\pi x)}e^{\frac{\pi^2}{2}t}\mbox{\msbm E}_x[F((B_s)_{s\in[0,t]})\sin(\pi B_t)],$$ where $(B_t)_{t\ge0}$ is standard Brownian motion started from $x$. The Brownian taboo process can be interpreted as Brownian motion conditioned to stay forever in the interval $(0,1)$. It follows from its definition as a Doob transform of Brownian motion that the Brownian taboo process is a diffusion on $(0,1)$ with speed measure $m(dx)$ and scale measure $s(dx)$ given by (see e.g. [@borodinsalminen Paragraph II.31]) $$m(dx) = 2\sin^2(\pi x)\,dx,\quad s(dx) = \frac{1}{\sin^2(\pi x)}\,dx.$$ Note that $\int_0^1m(dx) = 1$, so that $m$ is the stationary probability for the process. One can check using the formulae in [@borodinsalminen Paragraph II.6] that the boundary points 0 and 1 are both *entrance-not-exit*, i.e. the process can be defined starting from 0 or 1 as the weak limit when $x\to0$ or $x\to 1$, but the process never hits $0$ or $1$ when started inside the interval $(0,1)$. #### Branching Brownian motion with spine. We define a branching Brownian motion with a distinguished particle called the *spine* as follows: - The process starts at time zero from a single particle, the spine, at a point $x\in [0,L_t]$. - For $s \in [0, t]$, define $$\tau(s) = \int_0^s \frac{1}{L_t(u)^2} \: du.$$ The spine's trajectory is equal in law to the process $(L_t(s) K_{\tau(s)})_{s\in[0,t]}$, where $(K_u)_{u\ge0}$ is the Brownian taboo process on $[0,1]$ started at $x/L_t(0)$. - The spine branches at the accelerated rate $(m+1)\beta$ and according to the size-biased offspring distribution, i.e. the probability of it having $k$ offspring is $kp_k/(m+1)$, $k=1,2,\ldots$. These offspring are located at the same position as their parent. - One of these offspring is chosen randomly to continue as the spine.
The others spawn independent branching Brownian motions with drift $-1$ and absorption at $0$ (i.e., independent copies of the process $X$ started from their position). See e.g. Hardy and Harris [@HH] or Harris and Roberts [@HR] for a general theory of spine decompositions for branching Markov processes, including a rigorous construction. For the BBM with spine, we let ${\tilde N}(s)$ be the set of particles alive at time $s$. For $u \in {\tilde N}(s)$, we let ${\tilde X}_u(s)$ be the location of the particle $u$ at time $s$, and we let ${\tilde X}(s) := \{{\tilde X}_u(s): u \in {\tilde N}(s)\}$. Let $\xi_s$ be the spine particle at time $s$, and denote the trajectory of the spine by $({\tilde X}_\xi(s))_{s\in[0,t]}$. The following proposition, which is proved in section [2](#spinesec){reference-type="ref" reference="spinesec"}, says that up to time $\delta t$ for small $\delta$, the BBM with spine approximates well the BBM conditioned on survival until time $t$. **Proposition 3**. *Let $\mu_s$ be the law of $(X(r))_{r\in[0,s]}$ conditioned on $\{\zeta > t\}$, and let $\nu_s$ be the law of $(\tilde X(r))_{r\in[0,s]}$, where both $(X(r))_{r\in[0,s]}$ and $(\tilde X(r))_{r\in[0,s]}$ begin with one particle at $x$. We assume that $x$ may depend on $t$ but satisfies $\lim_{t \rightarrow \infty} (L_t - x) = \infty$. Then for every $\varepsilon>0$, there exists $\delta > 0$ such that for sufficiently large $t$, we have $$d_{TV}(\mu_{\delta t},\nu_{\delta t}) \le \varepsilon,$$ where $d_{TV}$ is the total variation distance between measures.* ## Sketch of the argument for Theorem [Theorem 1](#th:global_max){reference-type="ref" reference="th:global_max"} based on excursion theory {#sketch-of-the-argument-for-theorem-thglobal_max-based-on-excursion-theory} We outline here an argument for Theorem [Theorem 1](#th:global_max){reference-type="ref" reference="th:global_max"} based on the spine construction and excursion theory for the Brownian taboo process. 
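Before carrying out this outline, the two basic facts about the Brownian taboo process quoted above can be checked numerically: $h(x) = \sin(\pi x)$ is an eigenfunction of the generator $\frac12 \frac{d^2}{dx^2}$ of killed Brownian motion with eigenvalue $-\pi^2/2$, and the speed measure $m(dx) = 2\sin^2(\pi x)\,dx$ is a probability measure. A minimal sketch (the grid size and tolerances are arbitrary choices):

```python
import numpy as np

# Check (i): h(x) = sin(pi x) satisfies (1/2) h'' = -(pi^2/2) h on (0,1),
# using a central finite difference for h''.
x = np.linspace(0.0, 1.0, 10001)
dx = x[1] - x[0]
h = np.sin(np.pi * x)
h2 = (h[2:] - 2.0 * h[1:-1] + h[:-2]) / dx**2
residual = np.max(np.abs(0.5 * h2 + (np.pi**2 / 2.0) * h[1:-1]))
print(residual)  # close to 0

# Check (ii): the speed measure m(dx) = 2 sin^2(pi x) dx has total mass 1
# on (0,1) (left Riemann sum).
mass = np.sum(2.0 * np.sin(np.pi * x[:-1]) ** 2) * dx
print(mass)  # close to 1
```

The eigenfunction property is exactly what makes the Doob $h$-transform in ([\[eq:taboo\]](#eq:taboo){reference-type="eqref" reference="eq:taboo"}) a probability-preserving change of measure.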
Let $(K_s)_{s\ge0}$ be a Brownian taboo process starting from an arbitrary point in the interval $(0,1)$. Define for $\gamma > 0$, $$M_\gamma = \min_{s\ge0} \, \big(K_s + \gamma s\big),\qquad m_\gamma = \operatorname*{argmin}_{s\ge0} \, \big(K_s + \gamma s \big).$$ We claim that as $\gamma\to 0$, we have the convergence in law $$\label{Rayleighsimple} \bigg(\frac{M_\gamma}{\sqrt \gamma}, \sqrt \gamma m_\gamma \bigg) \Rightarrow ({\tilde R},U {\tilde R}),$$ where ${\tilde R}$ and $U$ are independent random variables, ${\tilde R}$ has a Rayleigh distribution with density $\pi^2 r e^{-\pi^2r^2/2}$ on $\mbox{\msbm R}_+$, and $U$ has a uniform distribution on $[0,1]$. A stronger version of ([\[Rayleighsimple\]](#Rayleighsimple){reference-type="ref" reference="Rayleighsimple"}) will be proved in Lemma [Lemma 6](#lem:taboo_with_drift){reference-type="ref" reference="lem:taboo_with_drift"} below. We can understand why ([\[Rayleighsimple\]](#Rayleighsimple){reference-type="ref" reference="Rayleighsimple"}) should hold by using the excursion theory for the Brownian taboo process, which was developed by Lambert [@lambert] and will be reviewed in section [3](#taboosec){reference-type="ref" reference="taboosec"}. The Brownian taboo process stays in the interval $(0,1)$, and it can be decomposed according to its excursions away from $1/2$. We will consider the Poisson point process $\mathcal{N}$ consisting of the points $(u, a_u)$, where $u$ is the local time at $1/2$ associated with a particular excursion of the taboo process below $1/2$, and $a_u$ is the minimum value achieved by the taboo process during this excursion. As we will see in ([\[hdlimit\]](#hdlimit){reference-type="ref" reference="hdlimit"}) below, for small $d$, the rate of excursions per unit of local time during which the process goes below $d$ is approximately $\pi^2 d/2$. Therefore, for small values of the second coordinate, the intensity of the Poisson point process $\mathcal{N}$ is approximately $\pi^2/2$. 
When $\gamma$ is small, we expect $M_{\gamma}$ to be close to zero, so we will be concerned primarily with excursions of the Brownian taboo process that get close to zero. The event $M_{\gamma}/\sqrt{\gamma} \leq x$ is the same as the event that $K_s \leq x \sqrt{\gamma} - \gamma s$ for some $s \geq 0$. Also, we will see in ([\[Ls12\]](#Ls12){reference-type="ref" reference="Ls12"}) below that by a large time $s$, the Brownian taboo process will have accumulated a local time at $1/2$ of approximately $2s$. Therefore, if $K_s \leq x \sqrt{\gamma} - \gamma s$, then the excursion underway at time $s$ should correspond to a point $(u, a_u)$ of $\mathcal{N}$ such that $a_u \leq x \sqrt{\gamma} - \gamma u/2$. That is, we have $M_{\gamma}/\sqrt{\gamma} \leq x$ when there is a point of $\mathcal{N}$ in the triangle in the figure below. Because this triangle has area $x^2$ and the Poisson point process $\mathcal{N}$ has intensity approximately $\pi^2/2$, we have $P(M_{\gamma}/\sqrt{\gamma} > x) \approx e^{-\pi^2 x^2/2}$, which is the Rayleigh distribution that appears in ([\[Rayleighsimple\]](#Rayleighsimple){reference-type="ref" reference="Rayleighsimple"}). Furthermore, conditional on $M_{\gamma}/\sqrt{\gamma} = x$, the excursion that produces the minimum value of $K_s + \gamma s$ should correspond to a point located at a uniform position on the diagonal line in the figure below. The local time $u$ of this excursion is therefore approximately uniformly distributed on $[0, 2x/\sqrt{\gamma}]$, which means the actual time $s$ of the excursion is approximately uniformly distributed on $[0, x/\sqrt{\gamma}]$, as indicated in ([\[Rayleighsimple\]](#Rayleighsimple){reference-type="ref" reference="Rayleighsimple"}).
*\[Figure: the triangle in the $(u, a_u)$-plane below the line from $(0, x\sqrt{\gamma})$ to $(2x/\sqrt{\gamma}, 0)$, i.e. the set of points with $a_u \leq x\sqrt{\gamma} - \gamma u/2$.\]*

We now sketch an argument for how to obtain Theorem [Theorem 1](#th:global_max){reference-type="ref" reference="th:global_max"} from ([\[Rayleighsimple\]](#Rayleighsimple){reference-type="ref" reference="Rayleighsimple"}). More details are provided in section [4](#maxsec){reference-type="ref" reference="maxsec"}. We will argue that the location $M(s)$ of the particle at time $s$ that is furthest from the origin is likely to stay close to the location $X_{\xi}(s)$ of the spinal particle. We can write $X_{\xi}(s) = L_t(s) (1-K_{\tau(s)})$, where $(K_u)_{u \geq 0}$ is a Brownian taboo process (note that $K$ is a Brownian taboo process if and only if $1-K$ is a Brownian taboo process). Then $$\label{LX} \frac{L_t - X_{\xi}(s)}{L_t} = \bigg(1 - \frac{L_t(s)}{L_t} \bigg) + \frac{L_t(s)}{L_t} K_{\tau(s)}.$$ The function $\tau$ is strictly increasing on $[0, t]$ with $\tau(t) = (3/c^2) t^{1/3}$, so the inverse function $\tau^{-1}$ is well-defined on the interval $[0, (3/c^2) t^{1/3}]$.
Allowing $s$ and $u$ to vary with $t$ for this paragraph, for $s \ll t$ we have $$\label{tauasymp} \tau(s) \sim s/L_t^2,$$ and therefore if $u \ll t^{1/3}$, then $$\label{tauinverse} \tau^{-1}(u) \sim L_t^2 u.$$ Also, for $s \ll t$, we have $$L_t - L_t(s) \sim \frac{cs}{3t^{2/3}} = \frac{c^3}{3} \cdot \frac{s}{L_t^2} = \frac{\pi^2}{2} \cdot \frac{s}{L_t^2} \sim \frac{\pi^2 \tau(s)}{2}.$$ It follows that for $u \ll t^{1/3}$, we have $$\label{Ltdiff} L_t - L_t(\tau^{-1}(u)) \sim \frac{\pi^2 u}{2}.$$ We can now make the substitution $u = \tau(s)$, and equation ([\[LX\]](#LX){reference-type="ref" reference="LX"}) becomes $$\label{LXu} \frac{L_t - X_{\xi}(\tau^{-1}(u))}{L_t} = \bigg(\frac{L_t - L_t(\tau^{-1}(u))}{L_t} \bigg) + \frac{L_t(\tau^{-1}(u))}{L_t} K_{u}.$$ Let $$\gamma = \frac{\pi^2}{2L_t}.$$ Using ([\[Ltdiff\]](#Ltdiff){reference-type="ref" reference="Ltdiff"}) to approximate the first term on the right-hand side of ([\[LXu\]](#LXu){reference-type="ref" reference="LXu"}) and then approximating $L_t(\tau^{-1}(u))/L_t$ by $1$ in the second term, we obtain the approximation $$\frac{L_t - X_{\xi}(\tau^{-1}(u))}{L_t} \approx \gamma u + K_u.$$ Consequently, as long as the branching Brownian motion attains its all-time maximum at approximately the same time that the spine does, we will be able to obtain from ([\[Rayleighsimple\]](#Rayleighsimple){reference-type="ref" reference="Rayleighsimple"}) that $$\label{LXconv} \bigg( \frac{L_t - \mathfrak{M}}{L_t \sqrt{\gamma}}, \sqrt{\gamma} \tau(\mathfrak{m}) \bigg) \Rightarrow ({\tilde R}, U {\tilde R}).$$ Note that $L_t \sqrt{\gamma} = \sqrt{\pi^2 L_t/2}$. 
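The asymptotic identities of this paragraph can be checked numerically. The displayed chain $L_t - L_t(s) \sim \frac{cs}{3t^{2/3}} = \frac{c^3}{3}\cdot\frac{s}{L_t^2} = \frac{\pi^2}{2}\cdot\frac{s}{L_t^2}$ is consistent with the explicit form $L_t(s) = c(t-s)^{1/3}$ with $c^3 = 3\pi^2/2$; taking this form as an assumption for the sketch:

```python
import numpy as np

# Assumed explicit form (an assumption for this sketch, consistent with the
# identities above): L_t(s) = c (t-s)^{1/3} with c^3 = 3 pi^2/2, so c^3/3 = pi^2/2.
c = (3.0 * np.pi**2 / 2.0) ** (1.0 / 3.0)
t = 1e9
L = lambda s: c * (t - s) ** (1.0 / 3.0)
Lt = L(0.0)

# tau(s) = int_0^s du / L_t(u)^2, by a midpoint Riemann sum.
def tau(s, n=200000):
    u = (np.arange(n) + 0.5) * (s / n)
    return np.sum(1.0 / L(u) ** 2) * (s / n)

s = 1e4                                          # a time-scale with s << t
tau_s = tau(s)
r1 = tau_s / (s / Lt**2)                         # (tauasymp): ratio ~ 1
r2 = (Lt - L(s)) / (np.pi**2 * tau_s / 2.0)      # L_t - L_t(s) ~ pi^2 tau(s)/2: ratio ~ 1
r3 = Lt * np.sqrt(np.pi**2 / (2.0 * Lt)) / np.sqrt(np.pi**2 * Lt / 2.0)  # L_t sqrt(gamma) identity
print(r1, r2, r3)
```

Under this form the relation $L_t - L_t(s) = \frac{\pi^2}{2}\tau(s)$ is in fact exact, since $\tau(s) = \frac{3}{c^2}\big(t^{1/3} - (t-s)^{1/3}\big) = \frac{3}{c^3}\big(L_t - L_t(s)\big)$.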
Also, $\tau(\mathfrak{m}) \approx \mathfrak{m}/L_t^2$ and $$\label{gammaLt2} \frac{\sqrt{\gamma}}{L_t^2} = \sqrt{\frac{\pi^2 L_t}{2}} \cdot \frac{1}{L_t^3} = \sqrt{\frac{\pi^2 L_t}{2}} \cdot \frac{1}{c^3 t} = \sqrt{\frac{2}{\pi^2}} \cdot \frac{\sqrt{L_t}}{3t}.$$ Because ${\tilde R}\sqrt{\pi^2/2}$ has the same distribution as $R$, the result ([\[maxrem\]](#maxrem){reference-type="ref" reference="maxrem"}) can be obtained from ([\[LXconv\]](#LXconv){reference-type="ref" reference="LXconv"}), and Theorem [Theorem 1](#th:global_max){reference-type="ref" reference="th:global_max"} follows. # Proof of Proposition 3 {#spinesec} Our goal in this section is to prove Proposition [Proposition 3](#lem:spine_comparison){reference-type="ref" reference="lem:spine_comparison"}. We begin with the following lemma. **Lemma 4**. *Let $(B_s)_{s\ge0}$ be a standard Brownian motion started from a point $x\in[0,L_t(0)]$ and let $(b_u)_{u\ge0}$ be a standard Brownian motion started from $x/L_t(0)\in [0,1]$. Define $$B'_u = \frac{B_{\tau^{-1}(u)}}{L_t(\tau^{-1}(u))}.$$ Then, there exists a constant $C>0$ such that for every $\delta\in(0,1/2)$, every $t\ge1$, and every positive bounded measurable functional $F$, $$\exp(-C(t^{-1/3}+\delta)) \le \frac{\mbox{\msbm E}\left[F((B'_u)_{u\in[0,\tau(\delta t)]})\mathds{1}_{B'_u\in[0,1]\,\forall u\le \tau(\delta t)}\right]}{\mbox{\msbm E}\left[F((b_u)_{u\in[0,\tau(\delta t)]})\mathds{1}_{b_u\in[0,1]\,\forall u\le \tau(\delta t)}\right]} \le \exp(Ct^{-1/3}).$$* *Proof.* Define $\delta$ and $F$ as in the statement of the lemma. 
Define for $s\in [0, t)$, $$\rho_s = \bigg( \frac{L_t(0)}{L_t(s)} \bigg)^{1/2} \exp \bigg( \frac{L_t'(s) B_s^2}{2L_t(s)} - \frac{L_t'(0)B_0^2}{2L_t(0)} - \int_0^s \frac{L_t''(u)B_u^2}{2L_t(u)} \: du \bigg).$$ By Lemma 5.3 in [@ms20], we have $$\mbox{\msbm E}\left[F((b_u)_{u\in[0,\tau(\delta t)]})\mathds{1}_{b_u\in[0,1]\,\forall u\le \tau(\delta t)}\right] = \mbox{\msbm E}\left[F((B'_u)_{u\in[0,\tau(\delta t)]})\mathds{1}_{B'_u\in[0,1]\,\forall u\le \tau(\delta t)}\,\rho_{\delta t}\right].$$ It remains to show that for some $C>0$, we have that $e^{-Ct^{-1/3}} \le \rho_{\delta t} \le e^{C(\delta+t^{-1/3})}$ on the event $\{B'_u\in[0,1]\,\forall u\le \tau(\delta t)\}$. On this event, we have $B_s\in[0,L_t(s)]$ for every $s\le \delta t$, by the definition of $B'_u$. Furthermore, note that $$L_t(s) =O(t^{1/3}),\quad |L_t'(s)|=O(t^{-2/3}),\quad|L_t''(s)| =O(t^{-5/3})\quad\text{for all $s\le t/2$.}$$ Hence, the absolute value of the quantity in the exponential in the definition of $\rho_s$ is bounded by $Ct^{-1/3}$ for some $C>0$, when $s=\delta t$. Furthermore, the first factor in the definition of $\rho_s$ satisfies, when $s=\delta t$, $$\label{eq:L} 1\le \bigg( \frac{L_t(0)}{L_t(\delta t)} \bigg)^{1/2} = \bigg( \frac{1}{1-\delta} \bigg)^{1/6} \le e^{C\delta},$$ for some $C>0$. The statement follows. ◻ For the original branching Brownian motion process, define, as in [@ms20], $$\begin{aligned} z_t(x,s) &= L_t(s) \sin \bigg( \frac{\pi x}{L_t(s)} \bigg) e^{x - L_t(s)} \mathds{1}_{x \in [0, L_t(s)]} \\ Z_t(s) &= \sum_{u \in N(s)} z_t(X_u(s), s) \\ Z_t'(s) &= \sum_{u \in N(s)} z_t(X_u(s), s) \mathds{1}_{X_u(r)\in [0,L_t(r)]\,\forall r\le s}.\end{aligned}$$ We recall from [@ms20] that these quantities control the survival probabilities of the process. For example, conditional on the process up to time $s$, the probability that the process survives until time $t$ is approximately proportional to $Z_t(s)$, assuming that all particles are far below $L_t(s)$ at time $s$. 
**Lemma 5**. *There exists $C>0$ such that for every $\delta\in(0,1/2)$, $t\ge1$, and $x\in(0,L_t(0))$, and every positive bounded measurable functional $F$, we have $$\exp(-C(t^{-1/3}+\delta)) \le \frac{\mbox{\msbm E}_x\left[F((X(s))_{s\in[0,\delta t]})Z_t'(\delta t)\right]}{\mbox{\msbm E}_x\left[F((\tilde X(s))_{s\in[0,\delta t]})\right] z_t(x,0)} \le \exp(C(t^{-1/3}+\delta)).$$* *Proof.* Let $\delta$, $t$, $x$, and $F$ be defined as in the statement of the lemma. Throughout the proof, we will make use of two auxiliary branching processes with spine, denoted by $X'$ and $X''$, with the position of their respective spines denoted by $X_\xi'$ and $X_\xi''$. These processes are defined in essentially the same way as $\tilde X$, but with the following differences:

- In $X'$, the spine performs standard Brownian motion started at $x$ and killed at 0.

- In $X''$, the spine's motion satisfies $X_\xi''(s) = L_t(s)B''_{\tau(s)}$ for a Brownian motion $(B''_u)_{u\ge0}$ started at $x/L_t(0)$ and killed at 0.

We now relate these processes to the original process $X$ using a *many-to-one lemma*. We use the version from Harris and Roberts [@HR]; see Section 4.1 in that paper. The first step is to define a branching process with spine through a measure change of the original process using a martingale of the form $$\sum_{u \in N(s)} \zeta(u, s) e^{-\beta m s},$$ where we recall that $\beta$ is the branching rate, $m+1$ is the mean of the offspring distribution and $\beta m = 1/2$ by assumption. In our case, we set $$\begin{aligned} \zeta(u, s) &= e^{X_u(s) - X_u(0) + s/2},\quad s\ge 0,\ u\in N(s),\end{aligned}$$ and note that this corresponds to a martingale which transforms the law of a Brownian motion with drift $-1$ to the law of a Brownian motion without drift. With this definition, the law $\mathbb Q^1_x$ from Harris and Roberts [@HR] is precisely the law of our process $X'$.
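As a sanity check on this measure change (not part of the proof): for a single particle, $X_s - X_0 = -s + B_s$ under drift $-1$, and the weight $\zeta = e^{X_s - X_0 + s/2} = e^{B_s - s/2}$ has mean one and removes the drift, i.e. the reweighted law of $X_s - X_0$ is centered. A Monte Carlo sketch (sample size and tolerances are arbitrary):

```python
import numpy as np

# Monte Carlo check: the weight zeta = exp(X_s - X_0 + s/2) is mean-one and
# turns Brownian motion with drift -1 into driftless Brownian motion.
rng = np.random.default_rng(1)
s, n = 2.0, 400000
B = rng.normal(0.0, np.sqrt(s), n)      # B_s ~ N(0, s)
incr = -s + B                           # X_s - X_0 under drift -1
w = np.exp(incr + s / 2.0)              # the weight zeta
m1 = np.mean(w)                         # ~ 1 (martingale property)
m2 = np.mean(w * incr)                  # ~ 0 (drift removed under the new law)
print(m1, m2)
```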
Furthermore, given a positive functional $G$, set for all $u \in N(\delta t)$, $$\begin{aligned} Y(u) &= F((X(s))_{s\in[0,\delta t]})G((X_u(s))_{s\in[0,\delta t]})e^{X_u(\delta t)} \\ &= F((X(s))_{s\in[0,\delta t]})G((X_u(s))_{s\in[0,\delta t]})e^{X_u(0)}\zeta(u,\delta t) e^{-\delta t/2}.\end{aligned}$$ By the many-to-one lemma from [@HR Section 4.1], we then have that $$\begin{aligned} \label{eq:many_to_one} \mbox{\msbm E}_x\left[\sum_{u \in N(\delta t)} Y(u)\right] = e^x \mbox{\msbm E}_x\left[F((X'(s))_{s\in[0,\delta t]})G((X_\xi'(s))_{s\in[0,\delta t]})\right].\end{aligned}$$ We now specialize this to $$G((x(s))_{s\in[0,\delta t]}) = \sin\left(\frac{\pi x(\delta t)}{L_t(\delta t)}\right) \mathds{1}_{x(s)\in [0,L_t(s)]\,\forall s\le \delta t},$$ noting that with this definition, we have $$Z_t'(\delta t) = L_t(\delta t)e^{-L_t(\delta t)} \sum_{u \in N(\delta t)} e^{X_u(\delta t)}G((X_u(s))_{s\in[0,\delta t]}).$$ Using [\[eq:many_to_one\]](#eq:many_to_one){reference-type="eqref" reference="eq:many_to_one"}, we get $$\label{eq:Xprime} \mbox{\msbm E}_x\left[F((X(s))_{s\in[0,\delta t]})Z_t'(\delta t)\right] = L_t(\delta t)e^{x-L_t(\delta t)} \mbox{\msbm E}_x\left[F((X'(s))_{s\in[0,\delta t]})G((X_\xi'(s))_{s\in[0,\delta t]})\right].$$ Conditioning on $X_\xi'$ in the expectation on the right-hand side of [\[eq:Xprime\]](#eq:Xprime){reference-type="eqref" reference="eq:Xprime"} and applying Lemma [Lemma 4](#lem:BM_girsanov){reference-type="ref" reference="lem:BM_girsanov"} with $(X_\xi'(s))_{s\ge0}$ taking the role of $(B_s)_{s\ge0}$ and $(B''(u))_{u \geq 0}$ taking the role of $(b_u)_{u \geq 0}$ gives $$\begin{aligned} \label{eq:Xprimeprime} &\mbox{\msbm E}_x\left[F((X'(s))_{s\in[0,\delta t]})G((X_\xi'(s))_{s\in[0,\delta t]})\right] \nonumber \\ &\qquad = e^{O(\delta+t^{-1/3})}\mbox{\msbm E}_x\left[F((X''(s))_{s\in[0,\delta t]})\sin(\pi B''_{\tau(\delta t)})\mathds{1}_{B''_u \in [0,1]\,\forall u\in [0,\tau(\delta t)]}\right].
\end{aligned}$$ Finally, recall that the Brownian taboo process is obtained from Brownian motion killed outside the interval $[0,1]$ by a Doob transform using the $-(\pi^2/2)$-harmonic function $\sin(\pi x)$; see [\[eq:taboo\]](#eq:taboo){reference-type="eqref" reference="eq:taboo"}. This gives $$\begin{aligned} \label{eq:Xtilde} &\mbox{\msbm E}_x\left[F((X''(s))_{s\in[0,\delta t]})\sin(\pi B''_{\tau(\delta t)})\mathds{1}_{B''_u \in [0,1]\,\forall u\in [0,\tau(\delta t)]}\right] \nonumber \\ &\qquad \qquad = e^{-\pi^2 \tau(\delta t)/2}\sin\bigg( \frac{\pi x}{L_t(0)} \bigg) \mbox{\msbm E}_x\left[F((\tilde X(s))_{s\in[0,\delta t]})\right].\end{aligned}$$ Collecting [\[eq:Xprime\]](#eq:Xprime){reference-type="eqref" reference="eq:Xprime"}, [\[eq:Xprimeprime\]](#eq:Xprimeprime){reference-type="eqref" reference="eq:Xprimeprime"} and [\[eq:Xtilde\]](#eq:Xtilde){reference-type="eqref" reference="eq:Xtilde"}, and noting that $L_t(\delta t) = L_t(0)e^{O(\delta)}$ and $e^{-\pi^2 \tau(\delta t)/2} = e^{L_t(\delta t)-L_t(0)}$, we get $$\mbox{\msbm E}_x\left[F((X(s))_{s\in[0,\delta t]})Z_t'(\delta t)\right] = e^{O(\delta+t^{-1/3})} z_t(x,0) \mbox{\msbm E}_x\left[F((\tilde X(s))_{s\in[0,\delta t]})\right],$$ which was to be proven. ◻ *Proof of Proposition [Proposition 3](#lem:spine_comparison){reference-type="ref" reference="lem:spine_comparison"}.* Let $\varepsilon,\eta,\delta >0$. In the course of the proof, we will choose $\eta$ small depending on $\varepsilon$, and then $\delta$ small depending on $\varepsilon$ and $\eta$.
Define the event $$G = \{Z_t(\delta t) \le \eta,\ M(\delta t) \le L_t(\delta t) - \eta^{-1},\ Z_t(\delta t) = Z_t'(\delta t)\}.$$ By the assumption on the initial condition and results from Maillard and Schweinsberg [@ms20], we will show below that for every $\varepsilon,\eta>0$ there exists $\delta>0$ such that $$\label{eq:prob_G_delta} \ensuremath{\mathbf{P}}_x(G\,|\,\zeta > t) \ge 1-\varepsilon.$$ Furthermore, by the second part of Theorem 1.1 in [@ms20], there exists a constant $\alpha\in(0,\infty)$ such that, choosing $\eta$ small enough, we have for large enough $t$, $$\label{eq:prob_zeta_ge_t} \ensuremath{\mathbf{P}}_x(\zeta > t) = e^{O(\varepsilon)} \alpha z_t(x,0)$$ and $$\label{eq:cond_G_delta} \ensuremath{\mathbf{P}}_x(\zeta > t\,|\,\mathcal F_{\delta t})\mathds{1}_{G} = e^{O(\varepsilon)} \alpha Z_t'(\delta t)\mathds{1}_{G}.$$ We now wrap up the proof, assuming [\[eq:prob_G\_delta\]](#eq:prob_G_delta){reference-type="eqref" reference="eq:prob_G_delta"}. It is enough to show that for every functional $H$ with $0\le H\le 1$, we have for large enough $t$, $$\label{eq:toshow} \mbox{\msbm E}_x[H((X(r))_{r\in[0,\delta t]})\,|\,\zeta>t] \le \mbox{\msbm E}_x[H((\tilde X(r))_{r\in[0,\delta t]})] e^{O(\varepsilon+\delta)} + O(\varepsilon+\delta).$$ Let $H$ be such a functional.
By [\[eq:prob_G\_delta\]](#eq:prob_G_delta){reference-type="eqref" reference="eq:prob_G_delta"}, we have $$\label{eq:exp_H_G_delta} \mbox{\msbm E}_x[H((X(r))_{r\in[0,\delta t]})\,|\,\zeta>t] = \mbox{\msbm E}_x[H((X(r))_{r\in[0,\delta t]})\mathds{1}_{G}\,|\,\zeta>t] + O(\varepsilon).$$ Furthermore, by [\[eq:prob_zeta_ge_t\]](#eq:prob_zeta_ge_t){reference-type="eqref" reference="eq:prob_zeta_ge_t"} and [\[eq:cond_G\_delta\]](#eq:cond_G_delta){reference-type="eqref" reference="eq:cond_G_delta"}, we have $$\begin{aligned} \mbox{\msbm E}_x[H((X(r))_{r\in[0,\delta t]})\mathds{1}_{G}\,|\,\zeta>t] &= \frac{\mbox{\msbm E}_x[H((X(r))_{r \in [0, \delta t]}) \mathds{1}_G \ensuremath{\mathbf{P}}_x(\zeta > t \,|\, \mathcal{F}_{\delta t})]}{\ensuremath{\mathbf{P}}_x(\zeta > t)} \\ &= \frac{e^{O(\varepsilon)} \mbox{\msbm E}_x[H((X(r))_{r \in [0, \delta t]}) \mathds{1}_G Z_t'(\delta t)]}{z_t(x, 0)} \\ &\le \frac{e^{O(\varepsilon)}\mbox{\msbm E}_x[H((X(r))_{r\in[0,\delta t]})Z_t'(\delta t)]}{z_t(x,0)},\end{aligned}$$ and Lemma [Lemma 5](#lem:spine_Z){reference-type="ref" reference="lem:spine_Z"} immediately yields for large enough $t$, $$\label{eq:almost_done} \mbox{\msbm E}_x[H((X(r))_{r\in[0,\delta t]})\mathds{1}_{G}\,|\,\zeta>t] \le e^{O(\varepsilon+\delta)} \mbox{\msbm E}_x[H((\tilde X(r))_{r\in[0,\delta t]})].$$ Equations [\[eq:exp_H\_G_delta\]](#eq:exp_H_G_delta){reference-type="eqref" reference="eq:exp_H_G_delta"} and [\[eq:almost_done\]](#eq:almost_done){reference-type="eqref" reference="eq:almost_done"} now yield [\[eq:toshow\]](#eq:toshow){reference-type="eqref" reference="eq:toshow"}, which finishes the proof of the proposition, assuming [\[eq:prob_G\_delta\]](#eq:prob_G_delta){reference-type="eqref" reference="eq:prob_G_delta"}. We finish by proving [\[eq:prob_G\_delta\]](#eq:prob_G_delta){reference-type="eqref" reference="eq:prob_G_delta"}. We bound the probability of the complement in three steps.
Recall that we assume that $L_t-x\to\infty$ as $t\to\infty$, which in particular implies that $z_t(x,0)\to 0$ as $t\to\infty$. Theorem 2.4 in [@ms20] then says that the finite-dimensional distributions of the process $(Z_t(st))_{s\in [0,1)}$, conditioned on $\zeta > t$, converge, as $t\to\infty$, to those of a certain stochastically continuous process starting from 0. This entails that for every $\varepsilon,\eta>0$, one can choose $\delta$ small enough such that for large enough $t$, $$\ensuremath{\mathbf{P}}_x(Z_t(\delta t) > \eta\,|\,\zeta>t) \le \varepsilon.$$ Furthermore, Theorem 2.9 in [@ms20] yields that for large enough $t$, $$\ensuremath{\mathbf{P}}_x(M(\delta t) > L_t(\delta t) - \eta^{-1}\,|\,\zeta>t) \le \varepsilon.$$ Finally, Lemma 5.8 in [@ms20] applied with $r=A=0$ and $s=\delta t$ gives that for large enough $t$, $$\ensuremath{\mathbf{P}}_x(M(s) > L_t(s)\,\text{for some $s\in[0,\delta t]$}) = O(\delta z_t(x,0)).$$ Using [\[eq:prob_zeta_ge_t\]](#eq:prob_zeta_ge_t){reference-type="eqref" reference="eq:prob_zeta_ge_t"} and using that $Z_t(\delta t) = Z_t'(\delta t)$ on the event that $M(s) \le L_t(s)$ for all $s\in[0,\delta t]$, this implies that $$\ensuremath{\mathbf{P}}_x(Z_t(\delta t) \ne Z_t'(\delta t)\,|\,\zeta > t) = O(\delta).$$ It now follows that one can choose $\delta$ small enough such that for large enough $t$, the result [\[eq:prob_G\_delta\]](#eq:prob_G_delta){reference-type="eqref" reference="eq:prob_G_delta"} holds. This finishes the proof. ◻

# Excursions of the Brownian taboo process {#taboosec}

The main goal of this section is to prove the following lemma, which generalizes ([\[Rayleighsimple\]](#Rayleighsimple){reference-type="ref" reference="Rayleighsimple"}) in the introduction and is the key to the proof of Theorem [Theorem 1](#th:global_max){reference-type="ref" reference="th:global_max"}. **Lemma 6**.
*Suppose that for every $\gamma>0$, we have a Brownian taboo process $(K_\gamma(s))_{s\ge0}$ starting at a (possibly random) point $z_\gamma \in [0,1]$. Assume that we have the following limit in probability: $$\label{initialz} \lim_{\gamma \rightarrow 0} \frac{z_{\gamma}}{\sqrt{\gamma}} = \infty.$$ Suppose furthermore that we have a (possibly random) function $g: \mbox{\msbm R}^+ \rightarrow \mbox{\msbm R}^+$ such that, in probability, $$\label{gcond} \lim_{\gamma \rightarrow 0} \gamma^{1/2} g(\gamma) = \infty,$$ and for all $\gamma > 0$, we have (possibly random) continuous functions $b_{\gamma}, d_{\gamma}: [0, g(\gamma)] \rightarrow [0, \infty)$ such that the following limits hold in probability: $$\label{adhyp} \lim_{\gamma \rightarrow 0} \sup_{0 \leq s \leq g(\gamma)} |1 - b_{\gamma}(s)| = 0, \qquad \lim_{\gamma \rightarrow 0} \sup_{0 < s \leq g(\gamma)} \bigg|1 - \frac{d_{\gamma}(s)}{\gamma s} \bigg| = 0.$$ Define for $\gamma > 0$, $$\label{Mmdef} M_\gamma = \min_{0 \leq s \leq g(\gamma)} \, \big(b_{\gamma}(s) K_s + d_{\gamma}(s)\big),\qquad m_\gamma = \operatorname*{arg\,min}_{0 \leq s \leq g(\gamma)} \, \big(b_{\gamma}(s) K_s + d_{\gamma}(s) \big).$$ Then, as $\gamma\to 0$, we have the convergence in law $$\label{Rayleighconv} \bigg(\frac{M_\gamma}{\sqrt \gamma}, \sqrt \gamma m_\gamma \bigg) \Rightarrow ({\tilde R},U {\tilde R}),$$ where ${\tilde R}$ and $U$ are independent random variables, ${\tilde R}$ has a Rayleigh distribution with density $\pi^2 r e^{-\pi^2r^2/2}$ on $\mbox{\msbm R}_+$, and $U$ has a uniform distribution on $[0,1]$.
Furthermore, define for $\theta > 0$, $$M_{\gamma}^*(\theta) = \min_{\substack{0 \leq s \leq g(\gamma) \\ |s - m_{\gamma}| \geq \theta/\sqrt{\gamma}}} \big(b_{\gamma}(s) K_s + d_{\gamma}(s)\big).$$ Then, for every $\kappa>0$, there exists $\eta > 0$ such that for every $\theta > 0$, $$\label{secondexc} \limsup_{\gamma \rightarrow 0} P(M_{\gamma}^*(\theta) - M_{\gamma} \leq \eta \sqrt{\gamma}) < \kappa.$$* The main tool used in the proof of Lemma [Lemma 6](#lem:taboo_with_drift){reference-type="ref" reference="lem:taboo_with_drift"} is the excursion theory of the Brownian taboo process due to Lambert [@lambert], who also extended the results to more general completely asymmetric Lévy processes. We review this theory here. Let $(K_s)_{s \geq 0}$ be a Brownian taboo process. For $0 < x < 1$, let $$\label{px} p(x) = 2 \sin^2 (\pi x),$$ so that $p(x)$ is the density of the speed measure $m(dx)$ and therefore is the stationary density for the process; see Theorem 3.1 of [@lambert]. Denote the local time at $x \in (0, 1)$ by $$\ell_s^{(x)} = \lim_{\varepsilon\rightarrow 0+} \frac{1}{2 \varepsilon} \int_0^s \mathds{1}_{\{|K_r - x| < \varepsilon\}} \: dr.$$ Corollary 4.3 of [@lambert] states that $$\label{Lsx} \lim_{s \rightarrow \infty} \frac{\ell_s^{(x)}}{s} = p(x) \hspace{.2in}\mbox{a.s.}$$ Let $\sigma_u^x = \inf\{s > 0: \ell^{(x)}_s > u\}$ be the (right-continuous) inverse local time. If $\sigma_{u-}^x < \sigma_u^x$, then let $e_u$ be the excursion defined by $e_u(r) = K_{\sigma_{u-}^x + r}$ for $0 \leq r \leq \sigma_u^x - \sigma_{u-}^x$, and $e_u(r) = x$ for all $r > \sigma_u^x - \sigma_{u-}^x$. Let $\mathcal{U}$ denote the set of all continuous functions $f: [0, \infty) \rightarrow (0, 1)$ such that for some $y > 0$, we have $f(0) = f(y) = x$, $f(r) \neq x$ for all $r \in (0, y)$, and $f(r) = x$ for all $r > y$. We interpret $\mathcal{U}$ as the space of possible excursions, with $y$ corresponding to the excursion length.
Then the set of points $(u, e_u)$ is a Poisson point process on $\mbox{\msbm R}^+ \times \mathcal{U}$ whose intensity measure is given by the product of Lebesgue measure and an excursion measure on $\mathcal{U}$ which, following [@lambert], we denote by $n_x$. For $0 < d < 1 - x$, let $$A_d = \Big\{f \in \mathcal{U}: \sup_{r \geq 0} f(r) > 1 - d\Big\}$$ be the set of excursions whose maximum exceeds $1 - d$. Proposition 4.2 of [@lambert], applied with $1 - d - x$ in place of $\eta$ and using the values of $\rho$ and $W^{-(\rho)}(x)$ from the top of p. 256 in [@lambert], states that $$n_x(A_d) = \frac{\pi}{2} \cdot \frac{\sin (\pi(1-d))}{\sin (\pi(1-d-x)) \sin (\pi x)}.$$ We now consider the case $x = 1/2$. By symmetry, if $0 < d < 1/2$ and we define $$A_d^* = \Big\{f \in \mathcal{U}: \inf_{r \geq 0} f(r) < d\Big\},$$ then $$\label{Adstar} n_{1/2}(A_d^*) = n_{1/2}(A_d) = \frac{\pi}{2} \cdot \frac{\sin(\pi d)}{\sin(\frac{\pi}{2} - \pi d)}.$$ For the excursion $e_u$, write $a_u = \inf_{r \geq 0} e_u(r)$. Let $\mathcal N$ be the point process on $\mbox{\msbm R}^+ \times (0, 1/2)$ consisting of the points $(u, a_u)$ for which $a_u < 1/2$, so we are recording here only excursions below $1/2$. Then $\mathcal N$ is a Poisson point process whose intensity measure is given by the product of Lebesgue measure and a measure $\nu$ such that $\nu((0, d)) = n_{1/2}(A_d^*)$ for $0 < d < 1/2$. 
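Since $\sin(\frac{\pi}{2} - \pi d) = \cos(\pi d)$, the right-hand side of ([\[Adstar\]](#Adstar){reference-type="ref" reference="Adstar"}) equals $\frac{\pi}{2}\tan(\pi d)$, so $\nu((0,d)) \approx \frac{\pi^2}{2}d$ for small $d$, which is the excursion rate used in the heuristic of the introduction. A quick numerical check:

```python
import numpy as np

# nu((0,d)) = (pi/2) sin(pi d)/sin(pi/2 - pi d) = (pi/2) tan(pi d);
# its slope at d = 0 is pi^2/2.
d = np.array([1e-1, 1e-2, 1e-3, 1e-4])
nu = (np.pi / 2.0) * np.sin(np.pi * d) / np.sin(np.pi / 2.0 - np.pi * d)
print(nu / d)            # approaches pi^2/2 = 4.9348... as d -> 0
ratio = nu[-1] / d[-1]
```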
By differentiating the right-hand side of ([\[Adstar\]](#Adstar){reference-type="ref" reference="Adstar"}), we see that the measure $\nu$ has a density $h$, which is positive on $(0, 1/2)$ and satisfies $$\label{hdlimit} \lim_{d \rightarrow 0} h(d) = \frac{\pi^2}{2}.$$ Also, we write $\ell_s = \ell_s^{(1/2)}$ and $\sigma_u = \sigma_u^{1/2}$, and then equations ([\[px\]](#px){reference-type="ref" reference="px"}) and ([\[Lsx\]](#Lsx){reference-type="ref" reference="Lsx"}) imply $$\label{Ls12} \lim_{s \rightarrow \infty} \frac{\ell_s}{s} = 2.$$ In order to prepare the proof of Lemma [Lemma 6](#lem:taboo_with_drift){reference-type="ref" reference="lem:taboo_with_drift"}, we first prove a similar result regarding the Poisson process $\mathcal N$. Denote by $\mathcal N^{(u)}$ the projection of $\mathcal N$, seen as a set, onto the first coordinate, i.e. it is the (random) set of $u \in \mbox{\msbm R}_+$, such that $(u,a_u)\in \mathcal N$ for some $a_u$. Define $$N_\gamma = \min_{u\in \mathcal N^{(u)}} \left\{a_u + \gamma \frac u 2\right\},\quad u^*_\gamma = \operatorname*{argmin}_{u\in \mathcal N^{(u)}} \left\{a_u + \gamma \frac u 2\right\},\quad N_\gamma^* = \min_{u\in \mathcal N^{(u)},\ u\ne u^*_\gamma} \left\{a_u + \gamma \frac u 2\right\}.$$ Note that the minimum in the definition of $N_\gamma$ is attained almost surely and at a unique point, so that $u^*_\gamma$ is well-defined. To see this, first define $N_\gamma$ with the minimum replaced by an infimum, and note that $N_\gamma < 1/2$ almost surely, because the measure $\nu$ gives infinite mass to every interval $[a,1/2)$, for $a<1/2$. Now, for every $A\in (0,1/2)$, on the event $\{N_\gamma < A\}$, every sequence of points $(u_n,a_{u_n})\in \mathcal N$ approaching this infimum has to be contained in the triangle $\{(u,a)\in \mbox{\msbm R}_+^2: a+\gamma u/2 \le A\}$ for large enough $n$. But the number of points in this triangle is finite almost surely, and so $u_n$ is a minimizer for large $n$. 
The uniqueness of the minimizer comes from the continuity of the intensity measure of the point process $\mathcal N$. We now have the following result: **Lemma 7**. *Define $\tilde R$ and $U$ as in the statement of Lemma [Lemma 6](#lem:taboo_with_drift){reference-type="ref" reference="lem:taboo_with_drift"}. Then as $\gamma \rightarrow 0$, we have $$\left(\frac{1}{\sqrt\gamma}N_\gamma,\sqrt\gamma u^*_\gamma\right) \Rightarrow (\tilde R,2U\tilde R),$$ and $$\frac{1}{\sqrt\gamma} \left(N_\gamma^* - N_\gamma\right) \Rightarrow X^*,$$ where $X^*$ is some strictly positive r.v. Moreover, the statement still holds if, in the definitions of $N_\gamma$, $u^*_\gamma$ and $N^*_\gamma$ one adds an additional constraint $u \in [g_1(\gamma),g_2(\gamma)]$ for some functions $g_1,g_2$ satisfying $0\le g_1(\gamma) \ll \gamma^{-1/2} \ll g_2(\gamma) \le +\infty$.* *Proof.* We first consider the case $g_1 \equiv 0$ and $g_2 \equiv +\infty$. Define the point process $\mathcal N_\gamma$ by rescaling the first coordinate of $\mathcal N$ by $\sqrt \gamma$ and the second by $1/\sqrt \gamma$, i.e. $\mathcal N_\gamma$ consists of the points $(v,a^\gamma_v) = (u\sqrt\gamma,a_u/\sqrt\gamma)$ for $(u,a_u)\in \mathcal N$. It follows that $\mathcal N_\gamma$ is a Poisson point process with intensity measure $\nu_\gamma \otimes du$, where $$\nu_\gamma = \nu(\sqrt\gamma\,\cdot )/\sqrt \gamma.$$ We denote by $\mathcal N_\gamma^{(v)}$ the projection of $\mathcal N_\gamma$ onto the first coordinate. 
Then, we have $$\begin{aligned} N_\gamma &= \sqrt\gamma \left(\min_{v\in \mathcal N_\gamma^{(v)}} \left\{ a^\gamma_v + \frac{v}{2} \right\} \right),\\ u^*_\gamma &= (\sqrt\gamma)^{-1} v_\gamma^*,\quad\text{where } v_\gamma^* := \operatorname*{argmin}_{v \in \mathcal N_\gamma^{(v)}} \left\{ a^\gamma_v+ \frac{v}{2} \right\},\\ N_\gamma^* &= \sqrt\gamma \left(\min_{v\in \mathcal N_\gamma^{(v)},\ v\ne v_\gamma^*} \left\{ a^\gamma_v + \frac{v}{2} \right\} \right).\end{aligned}$$ Note that $v_\gamma^* \le 2 \min_{v\in \mathcal N_\gamma^{(v)}} a^\gamma_v + \frac{v}{2}$. Hence, it suffices to show that $$\label{eq:poisson_toshow1} \left(\min_{v\in \mathcal N_\gamma^{(v)}} a^\gamma_v + \frac{v}{2}, v_\gamma^*\right) \Rightarrow (\tilde R,2U\tilde R),$$ and $$\label{eq:poisson_toshow2} \left(\min_{v\in \mathcal N_\gamma^{(v)},\ v\ne v_\gamma^*} a^\gamma_v + \frac{v}{2}\right) - \left(\min_{v\in \mathcal N_\gamma^{(v)}} a^\gamma_v + \frac{v}{2}\right) \Rightarrow X^*,$$ with $X^*$ as above. By [\[hdlimit\]](#hdlimit){reference-type="eqref" reference="hdlimit"}, one sees that $\nu_\gamma$ vaguely converges to $\pi^2/2$ times Lebesgue measure on $\mbox{\msbm R}_+$; in fact, its density converges locally uniformly. By standard thinning and superposition arguments for Poisson processes, it follows that one can couple $\mathcal N_\gamma$ with a Poisson process $\mathcal N_0$ with intensity measure $\pi^2/2$ times Lebesgue measure on $\mbox{\msbm R}_+^2$ in such a way that on every compact set, they are equal with probability going to 1 as $\gamma \to 0$. 
Using similar arguments to the ones used to show that $\min_{v\in \mathcal N_\gamma^{(v)}} \left\{ a^\gamma_v + \frac{v}{2} \right\}$ is attained almost surely and denoting the points of $\mathcal N_0$ by $(v, a_v^0)$, it follows that the following convergence in law holds: $$\label{eq:poisson_convergence} \left(\min_{v\in \mathcal N_\gamma^{(v)}} \left\{ a^\gamma_v + \frac{v}{2} \right\},\ v_\gamma^*,\ \min_{v\in \mathcal N_\gamma^{(v)},\ v\ne v_\gamma^*} \left\{ a^\gamma_v + \frac{v}{2} \right\} \right) \Rightarrow \left(M_0,\ v_0^*,\ M_0^*\right),$$ where $M_0 = \min_{v\in \mathcal N_0^{(v)}} \left\{ a^0_v + \frac{v}{2} \right\}$, $v_0^* = \operatorname*{argmin}_{v\in \mathcal N_0^{(v)}} \left\{ a^0_v + \frac{v}{2} \right\}$ and $M_0^* = \min_{v\in \mathcal N_0^{(v)},\ v\ne v_0^*} \left\{ a^0_v + \frac{v}{2} \right\}$. For $t>0$, let $C_t = \{(v,a)\in \mbox{\msbm R}_+^2, a+v/2\le t\}$, which is a right triangle whose legs have lengths $t$ and $2t$, respectively. Hence, it has Lebesgue measure $t^2$. We therefore have $$\ensuremath{\mathbf{P}}(M_0 > t) = \ensuremath{\mathbf{P}}(\mathcal N_0(C_t) = 0) = e^{-(\pi^2/2) t^2},$$ so that $M_0$ is equal in law to $\tilde R$. Moreover, conditioning on $M_0 = t$, we have that $v_0^*$ is uniformly distributed on $[0,2t]$ (and $a^0_{v_0^*} = t - v_0^*/2$). Hence, we have $$\left(M_0 , v_0^*\right) \stackrel{\mathrm{law}}= (\tilde R,2U\tilde R).$$ Furthermore, the remaining points form again a Poisson process with the same intensity measure but restricted to $\mbox{\msbm R}_+^2\backslash C_t$.
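As a quick sanity check of the two distributional facts just derived (the Rayleigh law of $M_0$ and the conditional uniformity of $v_0^*$), the limiting Poisson process can be simulated directly. A minimal numpy sketch, separate from the proof; the box size, sample count and seed are ad hoc choices:

```python
import numpy as np

# Simulate the Poisson process N_0 with intensity (pi^2/2) * Lebesgue on R_+^2
# and record M_0 = min(a_v + v/2) together with the ratio v_0^*/(2 M_0).
# The box [0, 2T] x [0, T] contains C_T, and P(M_0 > T) = exp(-(pi^2/2) T^2)
# is negligible for T = 1.5, so restricting to the box is harmless.
rng = np.random.default_rng(1)
lam = np.pi ** 2 / 2
T = 1.5
n_sim = 20_000
mins = np.empty(n_sim)
ratios = np.empty(n_sim)
for i in range(n_sim):
    n = rng.poisson(lam * 2 * T * T)  # Poisson number of points in the box
    if n == 0:                        # probability ~ e^{-22}, essentially never
        mins[i], ratios[i] = T, 0.5
        continue
    v = rng.uniform(0.0, 2 * T, n)
    a = rng.uniform(0.0, T, n)
    s = a + v / 2
    j = int(np.argmin(s))
    mins[i] = s[j]
    ratios[i] = v[j] / (2 * s[j])
# For P(M_0 > t) = exp(-(pi^2/2) t^2) one has E[M_0] = (1/2) sqrt(2/pi) ~ 0.399,
# and v_0^*/(2 M_0) should look uniform on [0, 1], hence have mean ~ 0.5.
print(mins.mean(), ratios.mean())
```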
In particular, using the continuity of the intensity measure, $$X^* := M_0^* - M_0 > 0,\quad \text{almost surely.}$$ These two facts, together with [\[eq:poisson_convergence\]](#eq:poisson_convergence){reference-type="eqref" reference="eq:poisson_convergence"}, yield [\[eq:poisson_toshow1\]](#eq:poisson_toshow1){reference-type="eqref" reference="eq:poisson_toshow1"} and [\[eq:poisson_toshow2\]](#eq:poisson_toshow2){reference-type="eqref" reference="eq:poisson_toshow2"} and finish the proof in the case $g_1 \equiv 0$ and $g_2 \equiv +\infty$. To cover the general case, note that since $\sqrt{\gamma}u^*_\gamma$ converges in law to a positive, finite random variable as $\gamma\to 0$, the assumptions imply that $u^*_\gamma\in [g_1(\gamma),g_2(\gamma)]$ with high probability. This proves the first statement in the general case. For the second statement, it follows from the above proof that the minimizer for $N_\gamma^*$ is again of order $\gamma^{-1/2}$, so that it is in the interval $[g_1(\gamma),g_2(\gamma)]$ as well with high probability. This concludes the proof. ◻ *Proof of Lemma [Lemma 6](#lem:taboo_with_drift){reference-type="ref" reference="lem:taboo_with_drift"}.* We wish to reduce the problem so that we can apply Lemma [Lemma 7](#lem:Poisson){reference-type="ref" reference="lem:Poisson"}. We do this in several steps. [Step 1.]{.ul} We first argue that it is sufficient to prove ([\[Rayleighconv\]](#Rayleighconv){reference-type="ref" reference="Rayleighconv"}) and ([\[secondexc\]](#secondexc){reference-type="ref" reference="secondexc"}) when $b_{\gamma}(s) = 1$ and $d_{\gamma}(s) = \gamma s$ for all $s \geq 0$. 
To see this, define $$\label{Mtildedef} {\tilde M}_\gamma = \min_{0 \leq s \leq g(\gamma)} \, \big(K_s + \gamma s \big),\quad {\tilde m}_\gamma = \operatorname*{argmin}_{0 \leq s \leq g(\gamma)} \, \big(K_s + \gamma s \big), \quad {\tilde M}_{\gamma}^*(\theta) = \min_{\substack{0 \leq s \leq g(\gamma) \\ |s - {\tilde m}_{\gamma}| \geq \theta/\sqrt{\gamma}}} \big(K_s + \gamma s \big).$$ Suppose we can show that $$\label{Rayleightilde} \bigg(\frac{{\tilde M}_\gamma}{\sqrt \gamma}, \sqrt \gamma {\tilde m}_\gamma \bigg) \Rightarrow ({\tilde R},U {\tilde R}),$$ and that for every $\kappa>0$, there exists $\eta > 0$, such that for every $\theta > 0$, we have $$\label{secondexctilde} \limsup_{\gamma \rightarrow 0} P({\tilde M}_{\gamma}^*(\theta) - {\tilde M}_{\gamma} \leq \eta \sqrt{\gamma}) < \kappa.$$ For $s \geq 0$, we have $$\bigg|\frac{b_{\gamma}(s) K_s + d_{\gamma}(s)}{K_s + \gamma s} - 1 \bigg| = \bigg| \frac{(b_{\gamma}(s) - 1)K_s + (\frac{d_{\gamma}(s)}{\gamma s} - 1) \gamma s}{K_s + \gamma s} \bigg| \leq |b_{\gamma}(s) - 1| + \bigg|\frac{d_{\gamma}(s)}{\gamma s} - 1 \bigg|.$$ It now follows from ([\[adhyp\]](#adhyp){reference-type="ref" reference="adhyp"}) that $$\label{ratiobound} \lim_{\gamma \rightarrow 0} \sup_{0 \leq s \leq g(\gamma)} \bigg|\frac{b_{\gamma}(s) K_s + d_{\gamma}(s)}{K_s + \gamma s} - 1 \bigg| = 0.$$ Therefore, we have $M_{\gamma}/{\tilde M}_{\gamma} \rightarrow_p 1$ as $\gamma \rightarrow 0$. Also, ([\[Rayleightilde\]](#Rayleightilde){reference-type="ref" reference="Rayleightilde"}), ([\[secondexctilde\]](#secondexctilde){reference-type="ref" reference="secondexctilde"}) and ([\[ratiobound\]](#ratiobound){reference-type="ref" reference="ratiobound"}) imply that for all $\theta > 0$ and $\kappa > 0$, we have $$\limsup_{\gamma \rightarrow 0} P \bigg( |m_{\gamma} - {\tilde m}_{\gamma}| > \frac{\theta}{\sqrt{\gamma}} \bigg) < \kappa,$$ which implies that $\sqrt{\gamma} |m_{\gamma} - {\tilde m}_{\gamma}| \rightarrow_p 0$ as $\gamma \rightarrow 0$. 
Therefore, the convergence ([\[Rayleightilde\]](#Rayleightilde){reference-type="ref" reference="Rayleightilde"}) implies ([\[Rayleighconv\]](#Rayleighconv){reference-type="ref" reference="Rayleighconv"}). Also, by ([\[ratiobound\]](#ratiobound){reference-type="ref" reference="ratiobound"}) and the result $\sqrt{\gamma} |m_{\gamma} - {\tilde m}_{\gamma}| \rightarrow_p 0$, the convergence ([\[secondexctilde\]](#secondexctilde){reference-type="ref" reference="secondexctilde"}) implies ([\[secondexc\]](#secondexc){reference-type="ref" reference="secondexc"}). We will therefore aim to prove ([\[Rayleightilde\]](#Rayleightilde){reference-type="ref" reference="Rayleightilde"}) and ([\[secondexctilde\]](#secondexctilde){reference-type="ref" reference="secondexctilde"}). [Step 2.]{.ul} Next, we note that we can additionally consider the process over an infinite time horizon. For $s\ge g(\gamma)$, we have $$K_s + \gamma s \ge \gamma s \ge \gamma g(\gamma) \gg \sqrt{\gamma},$$ and so, since $M_\gamma$ is of order $\sqrt{\gamma}$ in probability as $\gamma \to 0$, the minimum is attained in the time interval $[0,g(\gamma)]$ with high probability. Therefore, it is enough to prove the result with $g(\gamma) = +\infty$. [Step 3.]{.ul} We recall from the two previous steps that we can and will assume $g(\gamma) = +\infty$, $b_{\gamma}(s) = 1$ and $d_{\gamma}(s) = \gamma s$ for all $s \geq 0$. We now show that we can ignore a certain time interval at the beginning. Recall that $\sigma_u$ denotes the inverse local time at $1/2$ of the Brownian taboo process.
Set $$\begin{aligned} M^{(1)}_\gamma &= \min_{0 \leq s \leq \sigma_{\gamma^{-1/4}}} \, \big(K_s + \gamma s \big).\end{aligned}$$ We claim that $$\label{eq:M1ggsqrtgamma} M^{(1)}_\gamma/\sqrt{\gamma} \rightarrow_p \infty \quad \text{as $\gamma \to 0$.}$$ For this, we first bound from below $$\begin{aligned} \label{eq:M1decomposition} M^{(1)}_\gamma \ge \min_{0 \leq s \leq \sigma_{\gamma^{-1/4}}} \, K_s = \min\left(\min_{s \leq \sigma_0} \, K_s,\ \min_{u\in \mathcal N^{(u)},\,u\le \gamma^{-1/4}} \, a^\gamma_u\right).\end{aligned}$$ We now consider separately both terms on the RHS of [\[eq:M1decomposition\]](#eq:M1decomposition){reference-type="eqref" reference="eq:M1decomposition"}. *Step 3a.* We start with the term $\min_{s \leq \sigma_0} \, K_s$. If $z_\gamma \ge 1/2$, then this term equals 1/2. If $z_{\gamma} < 1/2$, then by the strong Markov property of the Brownian taboo process, the probability that it is smaller than some $z<z_\gamma$ is the probability that for the Brownian taboo process started at $1/2$, the first excursion of the process below $z_{\gamma}$ also goes below $z$. This equals[^4] $$\frac{n_{1/2}\big(A^*_{z}\big)}{n_{1/2}(A^*_{z_{\gamma}})} = \frac{\nu((0, z))}{\nu((0, z_{\gamma}))}.$$ From ([\[hdlimit\]](#hdlimit){reference-type="ref" reference="hdlimit"}), it follows that $\min_{s \leq \sigma_0} \, K_s$ is of the same order as $z_\gamma$, with high probability as $\gamma\to 0$. From ([\[initialz\]](#initialz){reference-type="ref" reference="initialz"}), it then follows that $(\min_{s \leq \sigma_0} \, K_s)/\sqrt{\gamma} \rightarrow_p \infty$ as $\gamma\to0$. 
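To spell out the last deduction in the delicate case $z_\gamma \to 0$ (when $z_\gamma$ stays bounded away from $0$, the conclusion is immediate), note that the excursion measure asymptotics used above give $\nu((0,z)) \sim \frac{\pi^2}{2}\, z$ as $z \to 0$, so that for fixed $\varepsilon \in (0,1)$, $$\ensuremath{\mathbf{P}}\Big( \min_{s \leq \sigma_0} K_s \leq \varepsilon z_{\gamma} \Big) = \frac{\nu((0,\varepsilon z_{\gamma}))}{\nu((0,z_{\gamma}))} \longrightarrow \varepsilon \quad \text{as } \gamma \to 0.$$ Hence $\min_{s \leq \sigma_0} K_s$ is indeed of order $z_\gamma$, and $z_\gamma \gg \sqrt{\gamma}$.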
*Step 3b.* For the second term on the RHS of [\[eq:M1decomposition\]](#eq:M1decomposition){reference-type="eqref" reference="eq:M1decomposition"}, a first moment argument shows that, $$\ensuremath{\mathbf{P}}\left(\min_{u\in \mathcal N^{(u)},\,u\le \gamma^{-1/4}} \, a^\gamma_u \le \gamma^{3/8}\right) \le \gamma^{-1/4} \nu((0,\gamma^{3/8})),$$ and [\[hdlimit\]](#hdlimit){reference-type="eqref" reference="hdlimit"} shows that this goes to 0 as $\gamma\to 0$. This shows that $(\min_{u\in \mathcal N^{(u)},\,u\le \gamma^{-1/4}} \, a^\gamma_u)/\sqrt{\gamma} \rightarrow_p \infty$ as $\gamma\to 0$. Steps 3a and 3b combined with [\[eq:M1decomposition\]](#eq:M1decomposition){reference-type="eqref" reference="eq:M1decomposition"} now yield [\[eq:M1ggsqrtgamma\]](#eq:M1ggsqrtgamma){reference-type="eqref" reference="eq:M1ggsqrtgamma"}. [Step 4.]{.ul} Taking into account the last step, it remains to show that the statement of the lemma still holds if we add the condition $s\ge \sigma_{\gamma^{-1/4}}$ to all quantities (and still assuming $g(\gamma) = +\infty$, $b_{\gamma}(s) = 1$ and $d_{\gamma}(s) = \gamma s$ for all $s \geq 0$). Recall from [\[Ls12\]](#Ls12){reference-type="eqref" reference="Ls12"} that $\ell_s/s \to 2$ almost surely as $s\to\infty$. In fact, the speed in this convergence is uniformly bounded in the starting point $z_\gamma\in[0,1]$. Indeed, note that the time for the Brownian taboo process to reach $1/2$ from any starting point is stochastically bounded by the time for the process to reach $1/2$ started from $0$ or $1$, which is a finite random variable because $0$ and $1$ are entrance boundaries for the taboo process. The same holds for the inverse local time. 
Hence, there exists $\eta(\gamma) \to 0$ as $\gamma\to0$, such that, $$\begin{aligned} \label{eq:uniform_ell_s} &\forall s\ge \sigma_{\gamma^{-1/4}}: 2(1-\eta(\gamma)) \le \frac{\ell_s}{s} \le 2 (1+\eta(\gamma)), \quad \text{with high probability as $\gamma\to 0$},\\ \label{eq:uniform_sigma_u} &\forall u\ge \gamma^{-1/4}: \frac{1-\eta(\gamma)}{2} \le \frac{\sigma_{u-}}{u} \le \frac{\sigma_u}{u} \le \frac{1+\eta(\gamma)}{2}, \quad \text{with high probability as $\gamma\to 0$}.\end{aligned}$$ With the same arguments as in Step 1, it follows from [\[eq:uniform_ell_s\]](#eq:uniform_ell_s){reference-type="eqref" reference="eq:uniform_ell_s"} that it is enough to show the result with $d_\gamma(s) = \gamma \ell_s/2$. Now note that since $\ell_s$ is constant on the open time interval delimiting an excursion, we have the following: $$\begin{aligned} \label{eq:from_s_to_u} \min_{s\ge \sigma_{\gamma^{-1/4}}} \, \big(K_s + \gamma \ell_s/2 \big) = \min_{u\in \mathcal N^{(u)}, u>\gamma^{-1/4}} \, \big(a^\gamma_u + \gamma u/2 \big).\end{aligned}$$ Moreover, the minimizers $s^*$ and $u^*$ satisfy $s^* \in [\sigma_{u^*-},\sigma_{u^*}]$. The statement now readily follows from Lemma [Lemma 7](#lem:Poisson){reference-type="ref" reference="lem:Poisson"} (with $g_1(\gamma) = \gamma^{-1/4}$ and $g_2(\gamma) = +\infty$), and [\[eq:uniform_sigma_u\]](#eq:uniform_sigma_u){reference-type="eqref" reference="eq:uniform_sigma_u"}. ◻ # The all-time maximum {#maxsec} In this section, we prove Theorem [Theorem 1](#th:global_max){reference-type="ref" reference="th:global_max"}. We first recall that if $0 \leq s \leq t$, then $$L_t(s) = c(t - s)^{1/3}, \qquad \tau(s) = \int_0^s \frac{1}{L_t(u)^2} \: du.$$ We next recall two lemmas about the extinction times for branching Brownian motion. 
The result ([\[survivelim\]](#survivelim){reference-type="ref" reference="survivelim"}) below is an immediate consequence of Theorem 1.3 in [@ms20]; note that the function $\phi(v)$ in ([\[survivelim\]](#survivelim){reference-type="ref" reference="survivelim"}) corresponds to $1 - \phi(-cv/3)$ in Theorem 1.3 of [@ms20]. The result ([\[survive2\]](#survive2){reference-type="ref" reference="survive2"}) is part of Theorem 1 of [@bbs14] and is stated in the scaling of the present paper in (1.4) of [@ms20]. **Lemma 8**. *Let $q$ be the extinction probability for a Galton-Watson process with offspring distribution $(p_k)_{k=1}^{\infty}$, and let $v \in \mbox{\msbm R}$. If $x = L_t$, then $$\label{survivelim} \lim_{t \rightarrow \infty} \ensuremath{\mathbf{P}}_x(\zeta > t + vt^{2/3}) = \phi(v),$$ where $\phi$ is a decreasing function satisfying $\lim_{z \rightarrow \infty} \phi(z) = 0$ and $\lim_{z \rightarrow -\infty} \phi(z) = 1-q.$ Also, there is a positive constant $C$ such that if $0 < x \leq L_t - 1$, then $$\label{survive2} \ensuremath{\mathbf{P}}_x(\zeta > t) \leq C L_t \sin \bigg( \frac{\pi x}{L_t} \bigg) e^{x - L_t}.$$* Lemma [Lemma 9](#survival23){reference-type="ref" reference="survival23"} is a special case of Proposition 1.4 in [@ms20]. **Lemma 9**. *Suppose $x > 0$, possibly depending on $t$, and $\lim_{t \rightarrow \infty} (L_t - x) = \infty$. Then, for all $v > 0$, we have $$\lim_{t \rightarrow \infty} \ensuremath{\mathbf{P}}_x(\zeta > t + vt^{2/3} \,|\, \zeta > t) = e^{-cv/3}.$$* We next prove three lemmas concerning the maximum position for branching Brownian motion with absorption started from one particle at $x$. 
Lemma [Lemma 10](#MGbound){reference-type="ref" reference="MGbound"} controls the all-time maximum position for branching Brownian motion with absorption, while Lemmas [Lemma 11](#rtmostms){reference-type="ref" reference="rtmostms"} and [Lemma 12](#rightLD){reference-type="ref" reference="rightLD"} bound the maximum position that a particle achieves after a certain time. Note that in Lemmas [Lemma 11](#rtmostms){reference-type="ref" reference="rtmostms"} and [Lemma 12](#rightLD){reference-type="ref" reference="rightLD"}, the position $x$ of the initial particle is allowed to depend on $t$. **Lemma 10**. *Let $x > 0$, possibly depending on $t$, and let $a > 0$. Then $$\ensuremath{\mathbf{P}}_x \Big( \sup_{s \geq 0} M(s) \geq x + a \Big) \leq e^{-a}.$$* *Proof.* For $s \geq 0$, let $$V(s) = \sum_{u \in N(s)} X_u(s) e^{X_u(s)}.$$ Then the process $(V(s), s \geq 0)$ is a nonnegative martingale, as shown, for example, in Lemma 2 of [@hh07]. Let $\sigma = \inf\{s: M(s) \geq x+a\}$. Then, on the event $\sigma < \infty$, we have $V(\sigma) \geq (x+a)e^{(x+a)}$. Therefore, by the Optional Sampling Theorem, $$\ensuremath{\mathbf{P}}_x(\sigma < \infty) \leq \frac{xe^{x}}{(x+a)e^{(x+a)}} \leq e^{-a},$$ as claimed. ◻ **Lemma 11**. *Suppose $x > 0$, and define $t$ such that $x = L_t$. Then for all $u > 0$ such that $L_{t+u} \ge x+ 1$, we have $$\ensuremath{\mathbf{P}}_x \bigg( \sup_{s \geq u} M(s) \geq x \bigg) \leq C_0 (L_{t+u} - x)e^{-(L_{t+u} - x)},$$ where $C_0$ is a positive constant not depending on $t$ or $u$.* *Proof.* We first suppose that $x = L_t \ge 1$. Then, from equation ([\[survivelim\]](#survivelim){reference-type="ref" reference="survivelim"}) with $v = 0$, one can see that there is a positive constant $C_1$ such that $\ensuremath{\mathbf{P}}_x(\zeta > t) \geq C_1$ for all $x \geq 1$. Let $\sigma = \inf\{s \geq u: M(s) \geq x\}$. 
By applying the strong Markov property at time $\sigma$, we get $$\begin{aligned} \ensuremath{\mathbf{P}}_x(\zeta > t + u) &\geq \ensuremath{\mathbf{P}}_x(\sigma < \infty) \ensuremath{\mathbf{P}}_x(\zeta > t + u \,|\, \sigma < \infty) \\ &\geq \ensuremath{\mathbf{P}}_x(\sigma < \infty) \ensuremath{\mathbf{P}}_x(\zeta > t) \\ &\geq C_1 \ensuremath{\mathbf{P}}_x(\sigma < \infty).\end{aligned}$$ Rearranging this inequality and applying ([\[survive2\]](#survive2){reference-type="ref" reference="survive2"}) with $t+u$ in place of $t$ gives $$\begin{aligned} \ensuremath{\mathbf{P}}_x(\sigma < \infty) &\leq \frac{1}{C_1} \ensuremath{\mathbf{P}}_x(\zeta > t + u) \\ &\leq \frac{C}{C_1} L_{t+u} \sin \bigg( \frac{\pi x}{L_{t+u}} \bigg) e^{x - L_{t+u}} \\ &\leq \frac{C \pi}{C_1} (L_{t+u} - x) e^{x - L_{t+u}},\end{aligned}$$ which implies the statement of the lemma with $C_0 = C \pi/C_1$. It remains to consider the case $x<1$. Then, translating the whole process by $1-x$ but still keeping the absorption at 0 can only increase the maximal displacement. Hence, $$\ensuremath{\mathbf{P}}_x \bigg( \sup_{s \geq u} M(s) \geq x \bigg) \le \ensuremath{\mathbf{P}}_1 \bigg( \sup_{s \geq u} M(s) \geq 1 \bigg).$$ Now, if $1+x\le L_{t+u} \le 2$, then $(L_{t+u}-x)e^{-(L_{t+u}-x)} \ge c_0>0$ for some constant $c_0$, and so the statement of the lemma follows with $C_0 = 1/c_0$. On the other hand, if $L_{t+u} > 2$, let $t_1\ge t$ be such that $L_{t_1} = 1$ and set $u_1 = u + t-t_1 \le u$; we can then apply the lemma with $x = 1 = L_{t_1}$ and $u = u_1$ in order to get (noting that $t_1+u_1 = t+u$) $$\begin{aligned} \ensuremath{\mathbf{P}}_1 \bigg( \sup_{s \geq u} M(s) \geq 1 \bigg) &\le \ensuremath{\mathbf{P}}_1 \bigg( \sup_{s \geq u_1} M(s) \geq 1 \bigg) \\ &\le C_0(L_{t+u}-1)e^{-(L_{t+u}-1)} \\ &\le e^{1-x}C_0(L_{t+u}-x)e^{-(L_{t+u}-x)}.\end{aligned}$$ This finishes the proof of the lemma. ◻ **Lemma 12**.
*Suppose $x > 0$, possibly depending on $t$, and $\lim_{t \rightarrow \infty} (L_t - x) = \infty$. Choose positive constants $a$ and $b$ such that $0 < a + 2/3 < b < 1$. Then $$\lim_{t \rightarrow \infty} \ensuremath{\mathbf{P}}_x \bigg( \sup_{s \geq t^b} M(s) \geq L_t - t^a \, \Big|\, \zeta > t \bigg) = 0.$$* *Proof.* Let $y = L_t - t^a$. Note that $y = L_u$, where $$u = \bigg( \frac{y}{c} \bigg)^3 = \bigg(t^{1/3} - \frac{t^a}{c} \bigg)^3 \geq t - \frac{t^a}{c} \cdot 3 t^{2/3},$$ which is greater than $t - t^b/2$ for sufficiently large $t$ because $a + 2/3 < b$. Therefore, by Lemma [Lemma 8](#survivalms){reference-type="ref" reference="survivalms"}, $$\label{liminf0} \liminf _{t \rightarrow \infty} \ensuremath{\mathbf{P}}_y\bigg(\zeta > t - \frac{t^b}{2} \bigg) \geq \liminf_{t \rightarrow \infty} \ensuremath{\mathbf{P}}_y (\zeta > u) = \phi(0) > 0.$$ Thus, by the strong Markov property applied at the time $\sigma = \inf\{s \geq t^b: M(s) > y\}$, we have $$\begin{aligned} \ensuremath{\mathbf{P}}_x\bigg(\zeta > t + \frac{t^b}{2} \, \Big|\, \zeta > t \bigg) &\geq \ensuremath{\mathbf{P}}_x \bigg( \sup_{s \geq t^b} M(s) \geq y \, \Big|\, \zeta > t \bigg) \ensuremath{\mathbf{P}}_x \bigg( \zeta > t + \frac{t^b}{2} \,\Big| \, \zeta > t, \: \sup_{s \geq t^b} M(s) \geq y \bigg) \nonumber \\ &\geq \ensuremath{\mathbf{P}}_x \bigg( \sup_{s \geq t^b} M(s) \geq y \, \Big|\, \zeta > t \bigg) \ensuremath{\mathbf{P}}_x \bigg( \zeta > t + \frac{t^b}{2} \,\Big| \, \sup_{s \geq t^b} M(s) \geq y \bigg) \nonumber \\ &\geq \ensuremath{\mathbf{P}}_x \bigg( \sup_{s \geq t^b} M(s) \geq y \, \Big|\, \zeta > t \bigg) \ensuremath{\mathbf{P}}_y\bigg( \zeta > t - \frac{t^b}{2} \bigg). \label{zetaab}\end{aligned}$$ Because $b > 2/3$, the term on the left-hand side of ([\[zetaab\]](#zetaab){reference-type="ref" reference="zetaab"}) tends to zero as $t \rightarrow \infty$ by Lemma [Lemma 9](#survival23){reference-type="ref" reference="survival23"}. 
By ([\[liminf0\]](#liminf0){reference-type="ref" reference="liminf0"}), the second of the two factors on the right-hand side of ([\[zetaab\]](#zetaab){reference-type="ref" reference="zetaab"}) stays bounded away from zero as $t \rightarrow \infty$. It follows that the first factor on the right-hand side of ([\[zetaab\]](#zetaab){reference-type="ref" reference="zetaab"}) must tend to zero as $t \rightarrow \infty$, which is precisely the statement of the lemma. ◻ *Proof of Theorem [Theorem 1](#th:global_max){reference-type="ref" reference="th:global_max"}.* We will work with the BBM with spine. The basic idea is that the global maximum and the argmax should be roughly equal to the global maximum and the argmax of the spine. Let $(K_r)_{r \geq 0}$ be a Brownian taboo process started from $K_0 = (L_t-x)/L_t$, and let $${\tilde X}_{\xi}(s) = L_t(s)(1-K_{\tau(s)})$$ denote the position of the spine at time $s$. Let $5/6 < b < 1$, and define $${\tilde M} = \max_{s \in [0, t^b]} {\tilde X}_{\xi}(s), \qquad {\tilde m} = \operatorname*{arg\,max}_{s \in [0, t^b]} \: {\tilde X}_{\xi}(s).$$ We will now aim to prove ([\[maxrem\]](#maxrem){reference-type="ref" reference="maxrem"}). We will first show that ([\[maxrem\]](#maxrem){reference-type="ref" reference="maxrem"}) holds if $\mathfrak M$ and $\mathfrak m$ are replaced by ${\tilde M}$ and ${\tilde m}$. 
That is, we will first show that as $t\to\infty$, $$\label{spineMm} \bigg( L_t^{-1/2} (L_t - {\tilde M}), L_t^{1/2} \frac{{\tilde m}}{t} \bigg) \Rightarrow (R, 3UR).$$ Note that $$L_t - {\tilde X}_{\xi}(s) = L_t - L_t(s) (1-K_{\tau(s)}) = L_t \bigg( \bigg(1 - \frac{L_t(s)}{L_t} \bigg) + \frac{L_t(s)}{L_t} K_{\tau(s)}\bigg).$$ Dividing both sides by $L_t$ and making the substitution $r = \tau(s)$ on the right-hand side, we have $$\frac{L_t - {\tilde X}_{\xi}(s)}{L_t} = \bigg(1 - \frac{L_t(\tau^{-1}(r))}{L_t} \bigg) + \frac{L_t(\tau^{-1}(r))}{L_t} K_r.$$ Now define $$\gamma = \frac{\pi^2}{2L_t}.$$ We will apply Lemma [Lemma 6](#lem:taboo_with_drift){reference-type="ref" reference="lem:taboo_with_drift"} with $g(\gamma) = \tau(t^b)$ (suppose $t\ge 1$ without loss of generality, so that $t^b \le t$). Indeed, it follows from [\[tauasymp\]](#tauasymp){reference-type="eqref" reference="tauasymp"} and the hypothesis on $b$ that, as $t\to\infty$ (equivalently, as $\gamma\to 0$), $$\gamma^{-1/2} \ll \tau(t^b) \ll \gamma^{-1}.$$ In particular, hypothesis [\[gcond\]](#gcond){reference-type="eqref" reference="gcond"} from Lemma [Lemma 6](#lem:taboo_with_drift){reference-type="ref" reference="lem:taboo_with_drift"} is verified. 
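For the reader's convenience, here is the exponent count behind the last display, using only the definitions $L_t(s) = c(t-s)^{1/3}$, $\tau(s) = \int_0^s L_t(u)^{-2}\,du$ and $\gamma = \pi^2/(2L_t)$ recalled above: since $t - u \asymp t$ uniformly for $u \in [0, t^b]$ (because $b < 1$), $$\tau(t^b) = \int_0^{t^b} \frac{du}{c^2 (t-u)^{2/3}} \asymp \frac{t^b}{c^2 t^{2/3}} = c^{-2} t^{\,b - 2/3}, \qquad \gamma^{-1/2} \asymp t^{1/6}, \qquad \gamma^{-1} \asymp t^{1/3},$$ and both relations $t^{1/6} \ll t^{\,b-2/3} \ll t^{1/3}$ hold precisely because $5/6 < b < 1$.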
Furthermore, define for $r\in [0,\tau(t^b)]$, $$b_{\gamma}(r) = \frac{L_t(\tau^{-1}(r))}{L_t}, \qquad d_{\gamma}(r) = 1 - \frac{L_t(\tau^{-1}(r))}{L_t}.$$ Then $$\label{spinemotion} \frac{L_t - {\tilde X}_{\xi}(s)}{L_t} = b_{\gamma}(r) K_r + d_{\gamma}(r).$$ It follows that $$\label{LtMm} \bigg(\frac{L_t - {\tilde M}}{L_t}, \tilde m \bigg) = \bigg( \min_{r \in [0, \tau(t^b)]} \big( b_{\gamma}(r) K_r + d_{\gamma}(r) \big), \: \tau^{-1} \Big( \operatorname*{argmin}_{r \in [0, \tau(t^b)]} \big( b_{\gamma}(r) K_r + d_{\gamma}(r) \big) \Big) \bigg).$$ Because $\tau(t^b) \ll t^{1/3}$, we can see from ([\[Ltdiff\]](#Ltdiff){reference-type="ref" reference="Ltdiff"}) that $$\lim_{t \rightarrow \infty} \sup_{r \in [0, \tau(t^b)]} |1 - b_{\gamma}(r)| = 0$$ and $$\lim_{t \rightarrow \infty} \sup_{r \in [0, \tau(t^b)]} \bigg|1 - \frac{d_{\gamma}(r)}{\gamma r} \bigg| = 0.$$ Because $\gamma \rightarrow 0$ as $t \rightarrow \infty$, the hypotheses ([\[adhyp\]](#adhyp){reference-type="ref" reference="adhyp"}) in Lemma [Lemma 6](#lem:taboo_with_drift){reference-type="ref" reference="lem:taboo_with_drift"} are satisfied. Because $L_t - x \gg t^{1/6}$ by assumption, we have $K_0 \gg t^{-1/6}$, and therefore $K_0 \gg \sqrt{\gamma}$, so the assumption ([\[initialz\]](#initialz){reference-type="ref" reference="initialz"}) in Lemma [Lemma 6](#lem:taboo_with_drift){reference-type="ref" reference="lem:taboo_with_drift"} also holds. 
Thus, by ([\[Rayleighconv\]](#Rayleighconv){reference-type="ref" reference="Rayleighconv"}), $$\label{newRUR} \bigg( \frac{1}{\sqrt{\gamma}} \min_{r \in [0, \tau(t^b)]} \big( b_{\gamma}(r) K_r + d_{\gamma}(r) \big), \sqrt{\gamma} \operatorname*{argmin}_{r \in [0, \tau(t^b)]} \big( b_{\gamma}(r) K_r + d_{\gamma}(r) \big) \bigg) \Rightarrow ({\tilde R}, U {\tilde R}).$$ Combining ([\[LtMm\]](#LtMm){reference-type="ref" reference="LtMm"}) with ([\[newRUR\]](#newRUR){reference-type="ref" reference="newRUR"}) and using ([\[tauinverse\]](#tauinverse){reference-type="ref" reference="tauinverse"}), we get, as $t\to\infty$, $$\label{LtMprelim} \bigg( \frac{L_t - {\tilde M}}{L_t \sqrt{\gamma}}, \frac{\sqrt{\gamma} {\tilde m}}{L_t^2} \bigg) \Rightarrow ({\tilde R}, U {\tilde R}).$$ Note that $L_t \sqrt{\gamma} = \sqrt{\pi^2 L_t/2}$, and recall from ([\[gammaLt2\]](#gammaLt2){reference-type="ref" reference="gammaLt2"}) that $$\frac{\sqrt{\gamma}}{L_t^2} = \sqrt{\frac{2}{\pi^2}} \cdot \frac{\sqrt{L_t}}{3t}.$$ Therefore, we can rewrite ([\[LtMprelim\]](#LtMprelim){reference-type="ref" reference="LtMprelim"}) as $$\bigg( L_t^{-1/2} (L_t - {\tilde M}), L_t^{1/2} \frac{{\tilde m}}{t} \bigg) \Rightarrow \bigg( \sqrt{\frac{\pi^2}{2}} {\tilde R}, 3 \sqrt{\frac{\pi^2}{2}} U {\tilde R} \bigg),\quad t\to\infty.$$ Because ${\tilde R} \sqrt{\pi^2/2}$ has the same distribution as $R$, the result ([\[spineMm\]](#spineMm){reference-type="ref" reference="spineMm"}) follows. The next step is to extend the result to the case of the full BBM with spine, rather than considering only the spinal particle. 
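Incidentally, the two scaling identities just used pin down the constant in $L_t$: with $\gamma = \pi^2/(2L_t)$, the recalled formula for $\sqrt{\gamma}/L_t^2$ holds exactly iff $L_t^3 = 3\pi^2 t/2$. A quick numerical sanity check, assuming $L_t = c\,t^{1/3}$ with $c = (3\pi^2/2)^{1/3}$ (the value used throughout the paper; stated here as an assumption of this sketch):

```python
import numpy as np

# Check sqrt(gamma)/L_t^2 == sqrt(2/pi^2) * sqrt(L_t) / (3t) when
# L_t = c t^(1/3) with c = (3 pi^2 / 2)^(1/3) and gamma = pi^2 / (2 L_t).
c = (3 * np.pi ** 2 / 2) ** (1 / 3)
ratios = []
for t in (10.0, 1e4, 1e8):
    L = c * t ** (1 / 3)
    gamma = np.pi ** 2 / (2 * L)
    lhs = np.sqrt(gamma) / L ** 2
    rhs = np.sqrt(2 / np.pi ** 2) * np.sqrt(L) / (3 * t)
    ratios.append(lhs / rhs)
print(ratios)  # each ratio should equal 1 up to floating-point error
```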
Let $$\tilde{\mathfrak M} = \max_{s \in [0, t^b]} {\tilde X}_1(s), \qquad \tilde{\mathfrak m} = \operatorname*{arg\,max}_{s \in [0, t^b]} {\tilde X}_1(s).$$ To compare $\tilde{\mathfrak M}$ with ${\tilde M}$, let $0 < \delta < 1/6$, and let $A_1$ be the event that there exist $s \in (0, t^b)$ and $z > 0$ such that a particle branches off the spine at time $s$ and location $z$, and eventually has a descendant that gets above $z + t^{\delta}$. To bound $P(A_1)$, recall that the spine branches at rate $(m+1)\beta$ according to the size-biased offspring distribution. Let $\zeta$ denote the mean of the size-biased offspring distribution, which is finite because the offspring distribution has finite variance. Then the expected number of children of the spine by time $t^b$ is $\beta(m+1)\zeta t^b$. By Lemma [Lemma 10](#MGbound){reference-type="ref" reference="MGbound"}, the probability that a particular one of these children has a descendant that gets more than $t^{\delta}$ above the location where it branched off the spine is at most $e^{-t^{\delta}}$. Thus, $$\label{descendant} P(A_1) \leq \beta (m+1) \zeta t^b \cdot e^{-t^{\delta}},$$ which tends to zero as $t \rightarrow \infty$. Because, on the event $A_1^c$, we have $0 \leq \tilde{\mathfrak M} - {\tilde M} \leq t^{\delta}$, and because $t^{\delta} \ll L_t^{1/2}$ by the choice $\delta < 1/6$, it follows that $$\label{changeM} L_t^{-1/2}(\tilde{\mathfrak M} - {\tilde M}) \rightarrow_p 0,$$ where $\rightarrow_p$ denotes convergence in probability as $t \rightarrow \infty$. We next derive a similar result for the argmax. Recall that $\tilde m$ is the time when the spine attains its maximum and note that equation ([\[spineMm\]](#spineMm){reference-type="ref" reference="spineMm"}) implies that ${\tilde m} = \Theta_p(t^{5/6})$, and therefore ([\[tauasymp\]](#tauasymp){reference-type="ref" reference="tauasymp"}) implies that $\tau({\tilde m}) = \Theta_p(t^{1/6})$. We aim to show that $$\label{changem} t^{-5/6}({\tilde{\mathfrak m}} - {\tilde m}) \rightarrow_p 0.$$ We do this in two steps.
Define $\hat m \le \tilde{\mathfrak m}$ to be the time at which the particle that attains the maximum branches off the spine. We bound separately $|\hat m - \tilde m|$ and $\tilde{\mathfrak m} - \hat m$, starting with the first one. Let $\kappa > 0$. By [\[secondexc\]](#secondexc){reference-type="eqref" reference="secondexc"} from Lemma [Lemma 6](#lem:taboo_with_drift){reference-type="ref" reference="lem:taboo_with_drift"} and [\[spinemotion\]](#spinemotion){reference-type="eqref" reference="spinemotion"}, there exists $\eta>0$ such that for every $\theta > 0$, $$\limsup_{t \rightarrow \infty} \ensuremath{\mathbf{P}}_x\bigg( \tilde{X}_{\xi}({\tilde m}) - \sup_{\substack{s \in [0,t^b]\\ |\tau(s)-\tau(\tilde m)| \ge \theta/\sqrt\gamma}} \tilde{X}_{\xi}(s) \leq \eta \sqrt{\gamma} L_t \bigg) < \kappa.$$ Because $\sqrt{\gamma} L_t \asymp t^{1/6}$, it thus follows from ([\[descendant\]](#descendant){reference-type="ref" reference="descendant"}) that with probability tending to one as $t \rightarrow \infty$, $$|\tau(\hat m)-\tau(\tilde m)| < \theta/\sqrt{\gamma}.$$ But since $\tau(\tilde m) = \Theta_p(t^{1/6})$ and $\sqrt\gamma \asymp t^{-1/6}$, and since $\theta>0$ is arbitrary, it follows from [\[tauinverse\]](#tauinverse){reference-type="eqref" reference="tauinverse"} that $$\label{changemhat} t^{-5/6}(\hat m - \tilde m) \rightarrow_p 0.$$ It remains to bound the difference $\tilde{\mathfrak m} - \hat m$. Choose $d$ such that $2/3 < d < 5/6$, and let $u = t^d$. Let $A_2$ be the event that there exist $s \in (0, t^b)$ and $z > 0$ such that a particle branches off the spine at time $s$ and location $z$, and a descendant of this particle gets above $z$ after time $s + u$. In particular, we have $$\label{A2} \tilde{\mathfrak{m}} - \hat m \le t^d,\quad\text{on the event $A_2^c$}.$$ To bound the probability of the event $A_2$, suppose a particle branches off the spine at time $s$ and location $z$, and choose $v$ such that $z = L_v$.
By Lemma [Lemma 11](#rtmostms){reference-type="ref" reference="rtmostms"}, the probability that any descendant of the particle gets above $z$ after time $s + u$ is at most $C_0 (L_{v + u} - z) e^{-(L_{v+u} - z)}$. Note that $L_{v+u} - z = L_{v+u} - L_v$, which is a decreasing function of $v$. Because the spine never gets above $L_t$, the expression $L_{v+u} - z$ is minimized when $z = L_t$, or equivalently when $v = t$. Therefore, the probability that any descendant of the particle gets above $z$ after time $s + u$ is at most $C_0 (L_{t+u} - L_t) e^{-(L_{t+u} - L_t)}$. Thus, $$\label{PA2} P(A_2) \leq \beta (m+1) \zeta t^b \cdot C_0 (L_{t+u} - L_t) e^{-(L_{t+u} - L_t)}.$$ Because $L_{t+u} - L_t \asymp t^{d - 2/3}$, the above expression tends to zero as $t \rightarrow \infty$. Together with [\[changemhat\]](#changemhat){reference-type="eqref" reference="changemhat"} and [\[A2\]](#A2){reference-type="eqref" reference="A2"}, this proves [\[changem\]](#changem){reference-type="eqref" reference="changem"}. Then, from ([\[spineMm\]](#spineMm){reference-type="ref" reference="spineMm"}), ([\[changeM\]](#changeM){reference-type="ref" reference="changeM"}), and ([\[changem\]](#changem){reference-type="ref" reference="changem"}), we get $$\label{BBMwspine} \bigg( L_t^{-1/2} (L_t - \tilde{\mathfrak M}), L_t^{1/2} \frac{\tilde{\mathfrak m}}{t} \bigg) \Rightarrow (R, 3UR).$$ By Proposition [Proposition 3](#lem:spine_comparison){reference-type="ref" reference="lem:spine_comparison"}, the result ([\[BBMwspine\]](#BBMwspine){reference-type="ref" reference="BBMwspine"}) still holds when we replace the BBM with spine by the original branching Brownian motion conditioned on $\zeta > t$ in the definitions of $\tilde{\mathfrak M}$ and $\tilde{\mathfrak m}$. It remains to show that for the branching Brownian motion conditioned on $\zeta > t$, with high probability the maximum will not occur after time $t^b$. 
However, note that ([\[spineMm\]](#spineMm){reference-type="ref" reference="spineMm"}) implies that $L_t - \tilde{\mathfrak M} = \Theta_p(t^{1/6})$. Therefore, we can choose $a$ such that $1/6 < a < b - 2/3$, and then apply Lemma [Lemma 12](#rightLD){reference-type="ref" reference="rightLD"} with these choices of $a$ and $b$. Lemma [Lemma 12](#rightLD){reference-type="ref" reference="rightLD"} implies that, with probability tending to one as $t \rightarrow \infty$, no particle gets above $L_t - t^a$ after time $t^b$, which means that with probability tending to one as $t \rightarrow \infty$, the maximum occurs before time $t^b$. The proof is now complete. ◻ We thank Julien Berestycki for helpful discussions related to this work. K. B. Athreya and P. E. Ney (1972). *Branching Processes*, Grundlehren der mathematischen Wissenschaften (Vol. 196). New York: Springer-Verlag. J. Berestycki, N. Berestycki, and J. Schweinsberg (2013). The genealogy of branching Brownian motion with absorption. *Ann. Probab.* **41**(2), 527--618. J. Berestycki, N. Berestycki, and J. Schweinsberg (2014). Critical branching Brownian motion with absorption: survival probability. *Probab. Theory Related Fields* **160**(3-4), 489--520. G. H. Berzunza Ojeda (2018). On scaling limits of multitype Galton-Watson trees with possibly infinite variance. *ALEA Lat. Am. J. Probab. Math. Stat.* **15**(1), 21--48. A. N. Borodin and P. Salminen (2002). *Handbook of Brownian motion---facts and formulae*, second ed., Probability and its Applications. Basel: Birkhäuser Verlag. E. Brunet, B. Derrida, A. H. Mueller, and S. Munier (2006). Noisy traveling waves: effect of selection on genealogies. *Europhys. Lett.* **76**, 1--7. E. Brunet, B. Derrida, A. H. Mueller, and S. Munier (2007). Effect of selection on ancestry: an exactly soluble case and its phenomenological generalization. *Phys. Rev. E* **76**, 041104. L. de Raphélis (2017). Scaling limit of multitype Galton-Watson trees with infinitely many types.
*Ann. Inst. Henri Poincaré Probab. Stat.* **53**(1), 200--225. L. de Raphélis (2022). Scaling limit of the subdiffusive random ralk on a Galton--Watson tree in random environment. *Ann. Probab.* **50**(1), 339--396. B. Derrida and D. Simon (2007). The survival probability of a branching random walk in presence of an absorbing wall. *EPL* **78**, 60006. J. Engländer, S. C. Harris and A. E. Kyprianou (2010). Strong Law of Large Numbers for branching diffusions. *Ann. Inst. Henri Poincaré Probab. Stat.* **46**(1), 279--298. J. Engländer and A. E Kyprianou (2004). Local extinction versus local exponential growth for spatial branching processes. *The Annals of Probability*, **32**(1), 78--99. R. Hardy and S. C. Harris (2009). A Spine Approach to Branching Diffusions with Applications to $L^p$-Convergence of Martingales. In *Séminaire de Probabilités XLII* 281--330. *Lecture Notes in Math.* **1979**. Springer, Berlin. J. W. Harris and S. C. Harris (2007). Survival probabilities for branching Brownian motion with absorption. *Electron. Comm. Probab.* **12**, 81--92. J. W. Harris, S. C. Harris, and A. E. Kyprianou (2006). Further probabilistic analysis of the Fisher-Kolmogorov-Petrovskii-Piscounov equation: one-sided travelling waves. *Ann. Inst. Henri Poincaré Probab. Statist.* **42**(1), 125--145. S. C. Harris, E. Horton, A. E. Kyprianou, and M. Wang (2022). Yaglom limit for critical nonlocal branching Markov processes. *Ann. Probab.* **50**(6), 2373--2408. S. C. Harris and M. I. Roberts (2017). The many-to-few lemma and multiple spines. *Ann. Inst. Henri Poincaré Probab. Statist.* **53**(1), 226--242. T. E. Harris (1963). *The Theory of Branching Processes*, Grundlehren der Mathematischen Wissenschaften (Vol. 119). Berlin: Springer-Verlag. H. Hering (1971). Critical Markov branching processes with general set of types. *Trans. Amer. Math. Soc.* **160**, 185--202. H. Hering (1977). Minimal moment conditions in the limit theory for general Markov branching processes. 
*Ann. Inst. Henri Poincaré Probab. Stat.* **13**(4), 299--319. H. Hering (1978). Multigroup branching diffusions. In *Branching processes (Conf., Saint Hippolyte, Que., 1976)* (Vol. 5, pp. 177--217). Dekker, New York. H. Hering and F. M. Hoppe (1981). Critical branching diffusions : proper normalization and conditioned limit. *Ann. Inst. Henri Poincaré Probab. Stat.* **17**(3), 251--274. N. Ikeda, M. Nagasawa, and S. Watanabe (1969). Branching Markov processes III. *J. Math. Kyoto Univ.* **9**, 95--160. H. Kesten (1978). Branching Brownian motion with absorption. *Stochastic Process. Appl.* **7**(1), 9--47. F. Knight (1969). Brownian local times and taboo processes. *Trans. Amer. Math. Soc.* **143**, 173--185. A. N. Kolmogorov (1938). Zur Lösung einer biologischen Aufgabe. *Comm. Math. Mech. Chebyshev Univ. Tomsk* **2**(1), 1--12. A. Lambert (2000). Completely asymmetric Lévy processes confined in a finite interval. *Ann. Inst. Henri Poincaré Probab. Statist.* **36**(2), 251--274. G. Miermont (2008). Invariance principles for spatial multitype Galton-Watson trees. *Ann. Inst. Henri Poincaré Probab. Statist.* **44**(6), 1128--1161. P. Maillard and J. Schweinsberg (2020). Yaglom-type limit theorems for branching Brownian motion with absorption. *Ann. H. Lebesgue* **5**, 921--985. E. Powell (2019). An invariance principle for branching diffusions in bounded domains. *Probab. Theory and Related Fields* **173**(3-4), 999--1062. R. S. Slack (1968). A branching process with mean one and possibly infinite variance. *Z. Wahrscheinlichkeitstheorie und Verw. Gebiete* **9**(2), 139--145. A. M. Yaglom (1947). Certain limit theorems of the theory of branching random processes. *Doklady Acad. Nauk SSSR* **56**, 795--798. [^1]: Université de Toulouse and Institut Universitaire de France. Supported in part by ANR grant ANR-20-CE92-0010-01. [^2]: University of California San Diego. 
Supported in part by NSF Grant DMS-1707953. [^3]: By this we mean the process conditioned to survive until time $s$, letting $s\to\infty$. [^4]: One can also calculate this using the scale function of the Brownian taboo process.
--- author: - Guillaume Rialland date: | Université de Paris-Saclay, UVSQ, CNRS, Laboratoire de Mathématiques de Versailles, 78000 Versailles\ `guillaume.rialland@uvsq.fr` title: "**Asymptotic stability of solitary waves for the 1D near-cubic non-linear Schrödinger equation in the absence of internal modes**" --- [Abstract.]{.smallcaps} We consider perturbations of the one-dimensional cubic Schrödinger equation, under the form $i \, \partial_t \psi + \partial_x^2 \psi + |\psi|^2 \psi - g( |\psi|^2 ) \psi = 0$. Under hypotheses on the function $g$ that can be easily verified in some cases, we show that the linearized problem around a solitary wave has no internal mode (nor resonance) and we prove the asymptotic stability of these solitary waves, for small frequencies. [a]{style="color: white"}\ \ [a]{style="color: white"} We consider the non-linear Schrödinger equation $$i \, \partial_t \psi + \partial_x^2 \psi + |\psi|^2 \psi - g( |\psi|^2 ) \psi = 0, \ \, \, \, \, \, \, \, \, \forall (t \, , x) \in \mathbb{R}\times \mathbb{R}, \label{NLS}$$ which is a perturbation of the cubic NLS equation $i \, \partial_t \psi + \partial_x^2 \psi + | \psi |^2 \psi = 0$. Here, $g \, : \, \mathbb{R}_+ \to \mathbb{R}$ is a function such that the term $g ( |\psi|^2 ) \psi$ is small compared to $| \psi|^2 \psi$ for $| \psi |$ small.
We refer to [@Pe] or [@Ki] for the physical interest of such equations.\ \ The corresponding Cauchy problem is globally well-posed in the energy space $H^1 ( \mathbb{R})$ (see for example [@Ca2]) and we recall the Galilean transform, translation and phase invariances of this equation: if $\psi (t \, , x)$ is a solution then, for any $\beta,\sigma,\gamma \in \mathbb{R}$, $\widetilde{\psi} (t \, , x) = e^{i( \beta x - \beta^2 t + \gamma )} \psi (t \, , x-2 \beta t - \sigma )$ is also a solution to the same equation.\ \ Solitary waves are solutions of [\[NLS\]](#NLS){reference-type="eqref" reference="NLS"} which take the form $\psi (t \, , x) = e^{i \omega t} \phi_\omega (x)$ where $$\phi_\omega '' = \omega \phi_\omega - \phi_\omega^3 + \phi_\omega g(\phi_\omega^2). \label{eqphi}$$ It will be proven in the first section below that, under minor hypotheses on $g$ and provided that $\omega$ is small enough, the equation [\[eqphi\]](#eqphi){reference-type="eqref" reference="eqphi"} has a unique solution $\phi_\omega \in H^1 ( \mathbb{R})$ that is nonnegative, even and that vanishes at infinity. The invariances previously described generate a family of traveling waves given by $\psi (t \, , x) = e^{i ( \beta x - \beta^2 t + \omega t + \gamma )} \phi_\omega (x-2 \beta t - \sigma )$. To begin with, we recall the following standard orbital stability result (see [@Ca], [@Gr], [@Il], [@We2]). 
**Proposition 1.** For $\omega_0$ small enough and any $\epsilon > 0$, there exists $\delta > 0$ so that, for any $\psi_0 \in H^1 ( \mathbb{R})$ satisfying $|| \psi_0 - \phi_{\omega_0} ||_{H^1 ( \mathbb{R})} \leqslant \delta$, if we let $\psi$ be the solution of [\[NLS\]](#NLS){reference-type="eqref" reference="NLS"} with initial data $\psi (0) = \psi_0$, then $$\sup_{t \in \mathbb{R}} \inf_{( \gamma , \sigma ) \in \mathbb{R}^2} || \psi (t \, , \cdot + \sigma ) - e^{i \gamma} \phi_{\omega_0} ||_{H^1 ( \mathbb{R})} \leqslant \epsilon.$$ In this paper we are interested in the asymptotic stability of solitary waves. There is a vast literature about the asymptotic stability of solitary waves for nonlinear Schrödinger equations, in different cases (various nonlinearities, with or without potential, in different dimensions), see for example [@Co], [@Cu1], [@Cu2], [@Ma1] and the review [@Ma4]. Before stating our main results, we need to introduce a few hypotheses. First introduce $G (s) = \int_0^s g$. Now let us consider the following hypotheses: $$\begin{array}{rl} (H_1) & \displaystyle{g \in \mathscr{C}^5 ((0 \, , + \infty )) \cap \mathscr{C}^1 ( [0 \, , + \infty )) \, , \, \, g^{(k)}(s) \, \underset{s \to 0}{=} \, o \left ( s^{1-k} \right ) \, \, \text{for all $k \in [\![ 0 \, , 5 ]\!]$} \, \, \text{and} \, \, g \not\equiv 0 \, \, \text{near $0$},} \\ \\ (H_2) & \displaystyle{\lim_{\omega \to 0} \frac{1}{\varepsilon_\omega^2 \sqrt{\omega}} \int_{\mathbb{R}} \left ( -3 g( \phi_\omega^2 ) + \phi_\omega^2 g'(\phi_\omega^2) + 4 \frac{G(\phi_\omega^2)}{\phi_\omega^2} \right ) \, \text{d}x = + \infty,} \end{array}$$ where $\varepsilon_\omega := \sup\limits_{0 \leqslant s \leqslant 3 \omega} | sg''(s)|$. In this definition, as we shall see in the incoming proofs, $3 \omega$ can be replaced by $2^+ \omega$ where $2^+$ is any constant strictly greater than $2$. 
Note that the hypothesis $(H_1)$ implies that $\varepsilon_\omega$ exists and is not zero for $\omega>0$ small enough ($\varepsilon_\omega =0$ for $\omega>0$ small would imply that $g'' \equiv 0$ near $0$, thus $g \equiv 0$ near $0$ since $g(0)=g'(0)=0$). The hypothesis $(H_1)$ also implies that $\varepsilon_\omega \longrightarrow 0$ when $\omega \to 0$.\ \ Depending on the function $g$, the equation [\[NLS\]](#NLS){reference-type="eqref" reference="NLS"} may (or may not) involve what are called *internal modes*. An internal mode is a solution to the system [\[IntM\]](#IntM){reference-type="eqref" reference="IntM"}. It generates periodic solutions to the linearized equation around the solitary wave. For example, $g(s)=s^2$ is a case without internal mode (see the particular study of this case in [@Ma1]) while $g(s) = -s^2$ is a case with an internal mode (see [@Pe]). In the case $g=0$, there is a resonance (see [@Ch]). These considerations justify why we ask for $g \not\equiv 0$ in hypothesis $(H_1)$. The hypothesis $(H_2)$ is a repulsion hypothesis, which involves in particular the sign of the function $g$; the previous remarks let us see that this sign is indeed important. See [@Pe], [@Ch] and [@CG] for related discussions. Internal modes are potential obstacles to the asymptotic stability of solitons; we do not address here the case where they are present. We will show that, under the two hypotheses $(H_1)$ and $(H_2)$, there does not exist any internal mode for our problem, in the sense below. Corollary 2 will also ensure that there is no resonance in this case either.
We introduce the following operators, which appear when we linearize [\[NLS\]](#NLS){reference-type="eqref" reference="NLS"} around $\phi_\omega$: $$L_+ = - \partial_x^2 + \omega - 3 \phi_\omega^2 + g ( \phi_\omega^2) + 2 \phi_\omega^2 g'( \phi_\omega^2 ) \, \, \, \, \, \text{and} \, \, \, \, \, L_- = - \partial_x^2 + \omega - \phi_\omega^2 + g ( \phi_\omega^2 ).$$ **Theorem 1.** Assume that hypotheses $(H_1)$ and $(H_2)$ are satisfied. Then, for $\omega$ small enough, the only solutions $(X \, , Y \, , \lambda ) \in H^1 ( \mathbb{R})^2 \times \mathbb{C}$ to the system $$\left \{ \begin{array}{ccl} L_- X &=& \lambda Y \\ L_+ Y &=& \lambda X \end{array} \right. \label{IntM}$$ are $X=Y=0$ (and any $\lambda \in \mathbb{C}$) or $\lambda = 0$, $X \in \text{span} ( \phi_\omega )$ and $Y \in \text{span} ( \phi_\omega ')$. Under the same assumptions, we get the following result that ensures the asymptotic stability of the solitons of equation [\[NLS\]](#NLS){reference-type="eqref" reference="NLS"}. **Theorem 2.** Assume that hypotheses $(H_1)$ and $(H_2)$ are satisfied. For $\omega_0$ small enough, there exists $\delta > 0$ so that, for any $\psi_0 \in H^1 ( \mathbb{R})$ satisfying $|| \psi_0 - \phi_{\omega_0} ||_{H^1 ( \mathbb{R})} \leqslant \delta$, if we let $\psi$ be the solution of [\[NLS\]](#NLS){reference-type="eqref" reference="NLS"} with initial data $\psi (0) = \psi_0$, then there exist $\beta_+ \in \mathbb{R}$ and $\omega_+ > 0$ such that, for any bounded interval $I \subset \mathbb{R}$, $$\lim_{t \to + \infty} \inf_{( \gamma , \sigma ) \in \mathbb{R}^2} \sup_{x \in I} | \psi (t \, , x+ \sigma ) - e^{i \gamma} e^{i \beta_+ x} \phi_{\omega_+} (x) | = 0.$$ *Remarks.* A few remarks can be given about this result. Most of them are already in the paper [@Ma1] and shall not be recalled here. - The result is written with an \"$\inf\limits_{ \gamma , \sigma}$\" formulation.
It can be stated in another way, which is how the proof will actually proceed: there exist $\mathscr{C}^1$ functions $\beta , \omega , \sigma , \gamma \, : \, [0 \, , + \infty ) \to \mathbb{R}$ such that $\lim\limits_{t \to + \infty} \beta (t) = \beta_+$, $\lim\limits_{t \to + \infty} \omega (t) = \omega_+$ and $$\lim_{t \to + \infty} \sup_{x \in I} \left | \psi (t \, , x + \sigma (t)) - e^{i \gamma (t)} e^{i \beta (t) x} \phi_{\omega (t)} (x) \right | = 0.$$ - The proof will show that $\omega (t) , \omega_+ \in \left ( 0 \, , \frac{3 \omega_0}{2} \right )$. In fact, we could show that, for any $\eta > 0$, $\delta$ can be chosen small enough such that $\omega (t) , \omega_+ \in \left ( 0 \, , \omega_0 + \eta \right )$. [a]{style="color: white"}\ The hypothesis $(H_2)$ might appear a little cryptic. Let us see how it can be verified in simple cases. Consider for example $g(s) = s^\sigma$ with $\sigma > 1$. We have $sg'(s) = \sigma s^\sigma$, $\frac{G(s)}{s} = \frac{s^\sigma}{\sigma +1}$ and $\varepsilon_\omega = \sigma ( \sigma -1) (3 \omega )^{\sigma -1}$. The hypothesis $(H_1)$ is clearly satisfied. To verify the hypothesis $(H_2)$, we need the following lower bound, which will be proved in the first section below: $\phi_\omega (x) \geqslant c \sqrt{\omega} \, e^{- \sqrt{\omega} |x|}$ where $c>0$ does not depend on $\omega$. We see that $$\begin{aligned} \displaystyle{\frac{1}{\varepsilon_\omega^2 \sqrt{\omega}} \int_{\mathbb{R}} \left ( -3 g(\phi_\omega^2) + \phi_\omega^2 g'(\phi_\omega^2) + 4 \frac{G(\phi_\omega^2)}{\phi_\omega^2} \right ) \, \text{d}x} &=& \displaystyle{\frac{1}{\sigma^2 ( \sigma +1)} \frac{\omega^{- \left ( 2 \sigma - \frac{3}{2} \right )}}{3^{2 \sigma -2}} \int_{\mathbb{R}} \phi_\omega^{2 \sigma}} \\ \\ & \geqslant & \displaystyle{c_\sigma \omega^{-(\sigma -1)} \, \underset{\omega \to 0^+}{\longrightarrow} \, + \infty} \end{aligned}$$ therefore $(H_2)$ is satisfied.
Hence the theorem stated above holds for $g(s) = s^\sigma$ (with $\sigma >1$).\ \ Consider a more general situation where $g$ verifies $(H_1)$ and $g''(s) \sim a s^p$ as $s \to 0$, with $a>0$ and $p>-1$. Denote $\sigma := p+2$. Since $\sigma > 1$, $g''(s) \sim as^{\sigma -2}$ leads to $g'(s) \sim \frac{a}{\sigma -1} s^{\sigma -1}$, $g(s) \sim \frac{a}{\sigma (\sigma -1)} s^\sigma$ and $G(s) \sim \frac{a}{( \sigma +1) \sigma ( \sigma - 1)} s^{\sigma+1}$. We get $$-3g(s) + sg'(s) + 4 \frac{G(s)}{s} \sim \frac{(\sigma -1)a}{\sigma (\sigma +1)} s^\sigma \, \, \, \, \, \, \, \text{where} \, \, \frac{(\sigma -1)a}{\sigma (\sigma +1)} > 0,$$ which gives $-3 g(s) + sg'(s) + 4 \frac{G(s)}{s} \geqslant \frac{(\sigma - 1)a}{2 \sigma (\sigma+1)} s^\sigma = c_{a , \sigma} s^\sigma$ for $s$ small enough, with $c_{a,\sigma} > 0$. We will see in the first section below that $||\phi_\omega ||_{\infty} \leqslant \sqrt{3 \omega}$ for $\omega$ small enough. Thus, taking $\omega$ small enough, we see that $$-3 g(\phi_\omega^2) + \phi_\omega^2 g'(\phi_\omega^2) + 4 \frac{G(\phi_\omega^2)}{\phi_\omega^2} \geqslant c_{a,\sigma} \phi_\omega^{2 \sigma} \geqslant c_{a,\sigma} \omega^{\sigma} e^{-2 \sigma \sqrt{\omega} |x|}.$$ On the other hand, from $g''(s) \sim as^{\sigma -2}$ we deduce that, for $s$ small enough, $|sg''(s)| \leqslant 2 as^{\sigma-1}$ and thus, for $\omega$ small enough, $\varepsilon_\omega \leqslant 2 a (3 \omega )^{\sigma -1} = c_{a,\sigma} \omega^{\sigma -1}$. Gathering these estimates and integrating, we get $$\frac{1}{\varepsilon_\omega^2 \sqrt{\omega}} \int_{\mathbb{R}} \left ( -3 g(\phi_\omega^2) + \phi_\omega^2 g'(\phi_\omega^2) + 4 \frac{G(\phi_\omega^2)}{\phi_\omega^2} \right ) \, \text{d}x \geqslant \frac{c_{a,\sigma}}{\omega^{2 (\sigma-1)} \sqrt{\omega}} \, \omega^\sigma \omega^{-1/2} = c_{a,\sigma} \omega^{1-\sigma} \, \underset{\omega \to 0^+}{\longrightarrow} \, + \infty$$ hence $(H_2)$ is satisfied here too. 
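For the reader's convenience, the elementary algebra behind the asymptotic equivalence displayed above can be written out (a short check, using only the stated expansions of $g$, $g'$ and $G$):

```latex
% With g''(s) \sim a s^{\sigma-2} and \sigma = p+2 > 1, integrating twice gives
%   g'(s) \sim \tfrac{a}{\sigma-1}\, s^{\sigma-1}, \quad
%   g(s)  \sim \tfrac{a}{\sigma(\sigma-1)}\, s^{\sigma}, \quad
%   \tfrac{G(s)}{s} \sim \tfrac{a}{(\sigma+1)\sigma(\sigma-1)}\, s^{\sigma},
% hence
-3g(s) + s\,g'(s) + 4\,\frac{G(s)}{s}
  \;\sim\; \frac{a\, s^{\sigma}}{\sigma-1}
     \left( -\frac{3}{\sigma} + 1 + \frac{4}{\sigma(\sigma+1)} \right)
  \;=\; \frac{a\, s^{\sigma}}{\sigma-1} \cdot
     \frac{\sigma(\sigma+1) - 3(\sigma+1) + 4}{\sigma(\sigma+1)}
  \;=\; \frac{(\sigma-1)\,a}{\sigma(\sigma+1)}\, s^{\sigma},
% since \sigma(\sigma+1) - 3(\sigma+1) + 4 = \sigma^2 - 2\sigma + 1 = (\sigma-1)^2,
% and one factor (\sigma-1) cancels.
```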
This case includes functions such as $g(s) = a_1 s^{\sigma_1} + a_2 s^{\sigma_2} + \cdots$ where $1 < \sigma_1 < \sigma_2 < \cdots$, $a_1 > 0$ and the $a_i$ (for $i \geqslant 2$) are real numbers whose signs do not matter.\ \ We will first give a condition on $g$ ensuring that there is no internal mode for our problem (Theorem 1). This will be the object of our second part. The third part of this paper is dedicated to the proof of Theorem 2 itself. The proof extends that of the analogous result for the case $g(s) = s^2$, which can be found in [@Ma1]. It relies on virial arguments, the study of a transformed problem and spectral properties of the linearized operators ($L_+$, $L_-$) and their transformed versions ($M_+$, $M_-$).\ \ One can find in [@Co] a different approach to the asymptotic stability of the solitons of equation [\[NLS\]](#NLS){reference-type="eqref" reference="NLS"}. The functional setting is different, with the use of weighted spaces, and a stronger conclusion about the convergence (often called *full asymptotic stability*). The result of [@Co] relies on a natural spectral assumption, namely the non-existence of internal modes and resonances, which was another motivation for Theorem 1 and Corollary 2. Our hypotheses $(H_1)$ and $(H_2)$ and the discussion above thus give concrete situations where the result in [@Co] can be applied.\ \ The letters $u$, $v$, $w$ and $z$ will denote complex-valued functions; we will index by $1$ their real part and by $2$ their imaginary part (for example, $u=u_1+iu_2$ with $u_1,u_2 \in \mathbb{R}$). The Fourier transform of a function $w$ is denoted by $\widehat{w}$.
For $\alpha > 0$, we will use the operator $$X_\alpha = (1 - \alpha \partial_x^2 )^{-1} \, \, \, \, \, \, \text{i.e.} \, \, \, \, \, \, \widehat{X_\alpha w} ( \xi ) = \frac{\widehat{w} ( \xi )}{1 + \alpha \xi^2} \, \, \text{for $w \in L^2 ( \mathbb{R})$.}$$ The $L^2$ scalar product is denoted by $\langle u \, , v \rangle = \text{Re} \left ( \int_{\mathbb{R}} u \overline{v} \, \text{d}x \right )$ and the $L^2$ norm is denoted by $|| \cdot ||$. The $H^1$ norm will be denoted by $|| \cdot ||_{H^1 ( \mathbb{R})}$.\ \ About the virial arguments, we fix a smooth even function $\chi \, : \, \mathbb{R}\to \mathbb{R}$ satisfying $\chi = 2$ on $[0 \, , 1]$, $\chi = 0$ on $[2 \, , + \infty )$ and $\chi ' \leqslant 0$ on $[0 \, , + \infty )$. For $K>0$ we define $$\begin{array}{ll} \chi_K (x) = \chi \left ( \frac{x}{K} \right ), & \eta_K (x) = \text{sech}\left ( \frac{2 x}{K} \right ), \\ \\ \zeta_K (x) = \exp \left ( - \frac{|x|}{K} \left ( 1 - \chi ( \sqrt{\omega_0} x) \right ) \right ), & \displaystyle{\Phi_K (x) = \int_0^x \zeta_K(y)^2 \, \text{d}y.} \end{array}$$ We take $A$ and $B$ two large constants that we will fix later (and that depend on $\omega_0$); the idea is to have $A \gg B \gg \frac{1}{\sqrt{\omega_0}} \gg 1$. In everything that follows, $A$ and $B$ are constants (that depend on $\omega_0$) which are assumed to satisfy $A>B>\omega_0^{-1/2} > 1$. Such an inequality will be verified when we indeed fix $A$ and $B$ (in the proof of Proposition 4 for $B$, in the proof of Theorem 2 for $A$). We then define $\Psi_{A,B} = \chi_A^2 \Phi_B$. Most of the bounds we will use and the sketches of the proofs are drawn from [@Ma2], [@Ma3], [@Ma1]. Finally we introduce the following weight function $$\rho (x) = \text{sech}\left ( \frac{\sqrt{\omega_0}}{10} \, x \right ).$$ Lastly, in this paper, the letter $C$ denotes various positive constants whose expression change from one line to another. 
These constants do not depend on the parameters $\omega_0$, $\epsilon$, $\alpha$, $A$ and $B$, except in the last part of the proof of Proposition 4, when parameters such as $B$, $\alpha$, $A$ are already fixed.\ \ This paper is the result of many discussions with Yvan Martel. The motivation of this paper and its proof are based on his paper [@Ma1]. May he be warmly thanked for it here. # Preliminaries ## Solitary waves Our proof relies on estimates on the solitons $\phi_\omega$, hence we first have to gather such estimates. The task was easier in the case of the defocusing cubic-quintic NLS equation (see [@Ma1]), where solitons were known explicitly. Here, solitons are not known explicitly, but we can prove the following bounds. **Lemma 1.** Assume $g$ to be $\mathscr{C}^{5} ( (0 \, , + \infty ))$, $\mathscr{C}^1 ([0 \, , \infty ))$ and such that $g(0)=g'(0)=0$. There exists $\omega_0 > 0$ (depending on $g$) such that, for all $\omega \in (0 \, , \omega_0 )$, there exists a unique solution $\phi_\omega \in H^1 ( \mathbb{R})$ to the equation $\phi_\omega '' - \omega \phi_\omega + \phi_\omega^3 - g( \phi_\omega^2 ) \phi_\omega = 0$ such that $\phi_\omega$ is even and nonnegative.\ Moreover, the map $(x \, , \omega) \in \mathbb{R}\times (0 \, , \omega_0) \mapsto \phi_\omega (x)$ is $\mathscr{C}^6$. *Proof.* Let us denote $f_\omega (\zeta) = - \omega \zeta + \zeta^3 - g(\zeta^2) \zeta$ and $F_\omega (\zeta) = \int_0^\zeta f_\omega$. We know from [@Be] that a solution $\phi_\omega$ satisfying all the desired conditions exists if and only if $\zeta_\omega := \inf \{ \zeta > 0 \, \, | \, \, F_\omega ( \zeta ) = 0 \}$ exists and is not zero, and $f_\omega ( \zeta_\omega ) > 0$. In our case, since $g(0)=0$, $f_\omega (\zeta_\omega) > 0$ implies $\zeta_\omega \neq 0$. First, we check that $F_\omega ( \zeta ) = - \frac{\omega \zeta^2}{2} + \frac{\zeta^4}{4} - \frac{G(\zeta^2)}{2}$.
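This expression for $F_\omega$ is a direct antiderivative computation (recalling $G(s) = \int_0^s g$); writing it out:

```latex
F_\omega(\zeta)
 = \int_0^\zeta f_\omega(\tau)\,\mathrm{d}\tau
 = \int_0^\zeta \left( -\omega\tau + \tau^3 - g(\tau^2)\,\tau \right) \mathrm{d}\tau
 = -\frac{\omega\zeta^2}{2} + \frac{\zeta^4}{4}
   - \frac{1}{2}\int_0^{\zeta^2} g(s)\,\mathrm{d}s
 = -\frac{\omega\zeta^2}{2} + \frac{\zeta^4}{4} - \frac{G(\zeta^2)}{2},
% where the substitution s = \tau^2, \mathrm{d}s = 2\tau\,\mathrm{d}\tau
% was used in the last integral.
```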
By the change of variable $s = \zeta^2$, we have the equivalence $$F_\omega ( \zeta )=0 \, \, \, \, \Longleftrightarrow \, \, \, \, \frac{s}{2} - \frac{G(s)}{s} = \omega.$$ Let us denote $J(s) = \frac{s}{2} - \frac{G(s)}{s}$. We take $J(0)$ to be $0$. Indeed, since $g(0)=g'(0)=0$, we have $g(s) = o(s)$ and then $G(s) = o(s^2)$ as $s \to 0^+$. Therefore, $J(s) \sim \frac{s}{2}$. $J$ is clearly $\mathscr{C}^6$ on $(0 \, , + \infty)$ and it is $\mathscr{C}^2$ on $[0 \, , + \infty)$, with $J'(0)=\frac{1}{2}$, $J''(0)=0$. Since $J'(0) \neq 0$, by local inversion we know that there exists $s_0 > 0$ such that $J$ is bijective from $[0 \, , s_0]$ to $[0 \, , J( s_0 )]$. Taking $\omega_0 = J(s_0)$, it is now clear that, for every $\omega \in (0 \, , \omega_0)$, there exists a unique $s_\omega \in (0 \, , s_0)$ such that $J(s_\omega) = \omega$. The uniqueness shows that $\zeta_\omega = \sqrt{s_\omega}$ is the quantity $\inf \{ \zeta > 0 \, \, | \, \, F_\omega ( \zeta ) = 0 \}$ we look for.\ \ Now, $f_\omega ( \zeta_\omega ) = \zeta_\omega ( - \omega - g(s_\omega) + s_\omega)$. We aim to prove that this is positive. First, we have $J \left ( \frac{3 \omega}{2} \right ) = \frac{3 \omega}{4} - \frac{G(3 \omega /2)}{3 \omega /2}$. Since $G(s) = o(s^2)$, $J \left ( \frac{3 \omega}{2} \right ) \sim \frac{3 \omega}{4}$ as $\omega \to 0$; thus we can take a smaller $\omega_0$ to be sure that $J \left ( \frac{3 \omega}{2} \right ) < \omega$ for all $\omega \in (0 \, , \omega_0 )$. From now on we make that assumption. This proves that, for all $\omega \in (0 \, , \omega_0 )$, we have $\frac{3 \omega}{2} < s_\omega$.\ \ Since $g (s) = o(s)$, there exists $s_1 > 0$ such that $|g(s)| \leqslant \frac{s}{3}$ for all $s \in [0 \, , s_1]$. In the foregoing, we may additionally have assumed that $s_0 \leqslant s_1$. From now on we make that assumption.
Now, we can check that, for all $\omega \in (0 \, , \omega_0 )$, $$g(s_\omega) \leqslant \frac{s_\omega}{3} \, \, \, \, \, \text{thus} \, \, \, \, \, - \omega - g(s_\omega) + s_\omega \geqslant - \omega - \frac{s_\omega}{3} + s_\omega = \frac{2 s_\omega}{3} - \omega > 0$$ as we have seen. This shows that $f_\omega ( \zeta_\omega ) > 0$ and completes the first part of the lemma.\ \ The regularity of the function $(x \, , \omega) \mapsto \phi_\omega (x)$ comes from standard arguments. We recall from [@Be] that the solution $\phi_\omega$ is the only solution of the Cauchy problem $$\left \{ \begin{array}{l} \phi_\omega '' - \omega \phi_\omega + \phi_\omega^3 - g( \phi_\omega^2 ) \phi_\omega = 0, \\ \phi_\omega (0) = \zeta_\omega , \, \, \, \phi_\omega '(0)=0. \end{array} \right.$$ We have to check that $\omega \mapsto \zeta_\omega$ is $\mathscr{C}^6$ on $(0 \, , \omega_0 )$. This is the case since $\omega \mapsto \zeta_\omega$ is nothing else than $\sqrt{J^{-1}}$ and $J^{-1}$ is $\mathscr{C}^6$ on $(0 \, , \omega_0 )$. Note that $J$ itself is possibly not $\mathscr{C}^6$ near $0$. That is not a problem, since the solutions $\phi_\omega$ take their values in $(0 \, , + \infty )$; hence $(0 \, , + \infty )$ can be taken as the target domain in the Cauchy-Lipschitz theorem with parameters that we apply. We then get the $\mathscr{C}^6$ regularity we seek. [a]{style="color: white"}\ The hypotheses above about $g$ will always be assumed: they are implied by hypothesis $(H_1)$. We have $G(s) = o(s^2)$ and thus $\omega = \frac{\zeta_\omega^2}{2} - \frac{G(\zeta_\omega^2)}{\zeta_\omega^2} = \frac{\zeta_\omega^2}{2} + o(\zeta_\omega^2)$. Hence, $\zeta_\omega^2 \sim 2 \omega$ i.e. $\zeta_\omega \sim \sqrt{2 \omega}$. We will suppose in the whole paper that $\omega$ is chosen small enough so that $\zeta_\omega \leqslant \sqrt{3 \omega}$. We also suppose that $\omega$ is chosen small enough so that $|g(s)| < s$ for any $s \in [0 \, , 3 \omega ]$.
Moreover, we will need an equivalent of $\frac{\text{d} \zeta_\omega}{\text{d} \omega}$. Recalling that $( \omega \mapsto \zeta_\omega ) = \sqrt{J^{-1}}$, we write that $$\frac{\text{d} \zeta_\omega}{\text{d} \omega} = \frac{1}{2 J'(J^{-1} ( \omega )) \sqrt{J^{-1} ( \omega )}} = \frac{1}{\zeta_\omega \left ( 1 - \frac{2 g(\zeta_\omega^2)}{\zeta_\omega^2} + \frac{2 G ( \zeta_\omega^2)}{\zeta_\omega^4} \right )} \sim \frac{1}{\zeta_\omega} \sim \frac{1}{\sqrt{2 \omega}}$$ since $\frac{g(\zeta_\omega^2)}{\zeta_\omega^2} = o(1)$ and $\frac{G(\zeta_\omega^2)}{\zeta_\omega^4} = o(1)$.\ \ In what follows, we always take $\omega \in (0 \, , \omega_0)$. We drop the notation $\omega_0$ and simply say that $\omega$ is *small enough*. We might have to reduce the range to which $\omega$ belongs in what follows, which is not a problem. Let $Q_\omega$ be the solitary-wave solution of the cubic Schrödinger stationary equation $Q_\omega '' - \omega Q_\omega + Q_\omega^3 = 0$. That is to say, $Q_\omega$ corresponds to the case $g=0$. We know $Q_\omega$ explicitly: denoting $Q(x) = \frac{\sqrt{2}}{\cosh (x)}$, $Q_\omega$ is given by $Q_\omega (x) = \sqrt{\omega} \, Q ( \sqrt{\omega} \, x)$. We can guess that $\phi_\omega$ has decay properties that are similar to those of $Q_\omega$. This is the object of the following lemma.
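The explicit formula for $Q_\omega$ can be verified directly (a routine check, using $\tanh^2 = 1 - \operatorname{sech}^2$ and the scaling of the stationary equation):

```latex
% With Q(x) = \sqrt{2}\,\operatorname{sech}(x):
Q'' = \sqrt{2}\left( \operatorname{sech}\tanh^2 - \operatorname{sech}^3 \right)
    = \sqrt{2}\,\operatorname{sech} - 2\sqrt{2}\,\operatorname{sech}^3
    = Q - Q^3,
% so that Q'' - Q + Q^3 = 0. Then, for Q_\omega(x) = \sqrt{\omega}\,Q(\sqrt{\omega}\,x):
Q_\omega''(x) - \omega\, Q_\omega(x) + Q_\omega(x)^3
 = \omega^{3/2} \left( Q'' - Q + Q^3 \right)(\sqrt{\omega}\,x) = 0.
```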
Besides, since $\phi_\omega$ is $\mathscr{C}^6$ with regards to $\omega$ (provided $(H_1)$ is satisfied), it makes sense to consider $\Lambda_\omega := \omega \, \frac{\partial \phi_\omega}{\partial \omega}$ and we know that $\Lambda_\omega$ is the solution on $\mathbb{R}$ of the following Cauchy system $$\left \{ \begin{array}{l} - \Lambda_\omega '' = - \omega \phi_\omega - \omega \Lambda_\omega + 3 \phi_\omega^2 \Lambda_\omega - 2 \phi_\omega^2 g'(\phi_\omega^2) \Lambda_\omega - g(\phi_\omega^2) \Lambda_\omega \\ \\ \Lambda_\omega (0) = \omega \frac{\text{d} \zeta_\omega}{\text{d} \omega} \sim \sqrt{\frac{\omega}{2}} , \, \, \, \Lambda_\omega '(0) = 0, \end{array} \right.$$ where we recognise the first line to be $L_+ \Lambda_\omega = - \omega \phi_\omega$. Controlling $\Lambda_\omega$ and its derivative will be the object of Lemma 5. **Lemma 2.** Assume $g$ to be $\mathscr{C}^{5} ( (0 \, , + \infty ))$, $\mathscr{C}^1 ([0 \, , \infty ))$ and such that $g(0)=g'(0)=0$. For any $k \in [\![ 0 \, , 6 ]\!]$, there exists $C_k > 0$ such that, for any $\omega > 0$ small enough and any $x \in \mathbb{R}$, $$| \phi_\omega^{(k)} (x) | \leqslant C_k \omega^{\frac{1+k}{2}} e^{- \sqrt{\omega} |x|}.$$ Moreover, for every $\varepsilon > 0$, for any $\omega > 0$ small enough, $$| \phi_\omega (x) - Q_\omega (x) | \leqslant \varepsilon \sqrt{\omega} \, e^{- \sqrt{\omega} |x|}.$$ Lastly, there exists $c>0$ such that $\phi_\omega (x) \geqslant c \sqrt{\omega} \, e^{- \sqrt{\omega} |x|}$. *Proof.* This proof will require several steps and is based on standard ordinary differential equations arguments that can be found in [@Be]. We will denote $P_\omega = \phi_\omega - Q_\omega$. Let $\varepsilon > 0$. Let us take $x_0 > 0$ such that $Q(x) < \varepsilon$ for $x \geqslant x_0$ ($x_0$ does not depend on $\omega$). Now, for $x \geqslant x_0 / \sqrt{\omega}$, $Q_\omega (x) < \varepsilon \sqrt{\omega}$. 
Considering the equations satisfied by $\phi_\omega$ and $Q_\omega$, we get $$P_\omega '' - \omega P_\omega = - P_\omega ( Q_\omega^2 + \phi_\omega Q_\omega + \phi_\omega^2 ) + g ( \phi_\omega^2 ) \phi_\omega.$$ It is clear that $0 \leqslant Q_\omega (x) \leqslant \sqrt{2 \omega}$ for all $x \in \mathbb{R}$. Now, since $\phi_\omega$ is nonincreasing on $\mathbb{R}_+$ and even, $0 \leqslant \phi_\omega (x) \leqslant \phi_\omega (0) = \zeta_\omega \leqslant C \sqrt{\omega}$. Thus, we get $$| P_\omega '' | \leqslant \omega | P_\omega | + 2 \left ( |Q_\omega|^2 + | \phi_\omega |^2 \right ) |P_\omega| + \varepsilon | \phi_\omega |^3 \leqslant C \omega | P_\omega | + C \varepsilon \omega^{3/2}.$$ Considering the vectorial function $\overrightarrow{\textbf{P}}_\omega (x) = (P_\omega (x) \, , P_\omega '(x) / \sqrt{\omega} )$, we have $|| \overrightarrow{\textbf{P}}_\omega ' (x) ||_{1} \leqslant C \sqrt{\omega} || \overrightarrow{\textbf{P}}_\omega (x) ||_1 + C \varepsilon \omega$. We then use Grönwall's lemma and the fact that $P_\omega '(0)=0$ to see that $$|P_\omega (x)| \leqslant || \overrightarrow{\textbf{P}}_\omega ||_1 \leqslant - C \varepsilon \sqrt{\omega} + (P_\omega (0) + \varepsilon \sqrt{\omega} ) e^{C \sqrt{\omega} x}.$$ As $|P_\omega (0)| = | \zeta_\omega - \sqrt{2 \omega} | = o ( \sqrt{\omega} )$, we get that, $|P_\omega (x_0 / \sqrt{\omega})| \leqslant C \varepsilon \sqrt{\omega}$ and thus $| \phi_\omega (x_0 / \sqrt{\omega} ) | \leqslant C \varepsilon \sqrt{\omega}$. Now, $\phi_\omega$ being nonincreasing, we get, for any $x \geqslant x_0 / \sqrt{\omega}$, that $|\phi_\omega (x)| \leqslant C \varepsilon \sqrt{\omega}$. Now, let us use standard arguments from [@Be]. 
Setting $v_\omega (x) = \phi_\omega (x)^2$, we have, for any $x \geqslant x_0 / \sqrt{\omega}$, $$v_\omega ''(x) = 2 \phi_\omega '(x)^2 + 2 \left ( \omega - \phi_\omega (x)^2 + g ( \phi_\omega (x)^2) \right ) v_\omega (x) \geqslant 2 \left ( \omega - 4 C \varepsilon^2 \omega - 4 C \varepsilon^2 \omega \right ) v_\omega (x) \geqslant \omega v_\omega (x)$$ provided we take $\varepsilon$ small enough so that $1 - 8 C \varepsilon^2 > \frac{1}{2}$. Now taking $z_\omega (x) = e^{- \sqrt{\omega}x} (v_\omega '(x) + \sqrt{\omega} \, v_\omega (x))$, we have $z_\omega '(x) = e^{- \sqrt{\omega}x} (v_\omega ''(x) - \omega v_\omega (x)) \geqslant 0$ for $x \geqslant x_0 / \sqrt{\omega}$. Therefore $z_\omega$ is nondecreasing on $\left [ \frac{x_0}{\sqrt{\omega}} \, , + \infty \right )$. Suppose that $z_\omega (y) > 0$ for some $y > x_0 / \sqrt{\omega}$. Then, for all $x \geqslant y$, $z_\omega (x) \geqslant z_\omega (y) > 0$ thus $v_\omega '(x) + \sqrt{\omega} \, v_\omega (x) \geqslant z_\omega (y) e^{\sqrt{\omega}x}$, showing that $v_\omega ' + \sqrt{\omega} \, v_\omega \not\in L^1 ([y \, , + \infty ))$. However, we know that $\phi_\omega \in H^1 ( \mathbb{R})$, hence $\phi_\omega , \phi_\omega ' \in L^2 ( \mathbb{R})$ and $v_\omega = \phi_\omega^2 \in L^1 ( \mathbb{R})$ and $v_\omega ' = 2 \phi_\omega \phi_\omega ' \in L^1 ( \mathbb{R})$ too. Finally, this is absurd: we conclude that $z_\omega$ remains nonpositive for all $x \geqslant x_0 / \sqrt{\omega}$.
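The conclusion on the sign of $z_\omega$ is used in the next step through the following one-line reformulation, valid for $x \geqslant x_0/\sqrt{\omega}$:

```latex
z_\omega(x) \leqslant 0
 \;\Longleftrightarrow\;
 v_\omega'(x) + \sqrt{\omega}\, v_\omega(x) \leqslant 0
 \;\Longleftrightarrow\;
 \frac{\mathrm{d}}{\mathrm{d}x}\!\left( e^{\sqrt{\omega}\,x}\, v_\omega(x) \right)
  = e^{\sqrt{\omega}\,x} \left( v_\omega'(x) + \sqrt{\omega}\, v_\omega(x) \right)
  \leqslant 0.
```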
This shows that $x \mapsto e^{\sqrt{\omega} x} v_\omega (x)$ is nonincreasing on $\left [ \frac{x_0}{\sqrt{\omega}} \, , + \infty \right )$ and then $$\forall x \geqslant \frac{x_0}{\sqrt{\omega}} , \, \, \, \, 0 \leqslant v_\omega (x) \leqslant e^{x_0} v_\omega \left ( \frac{x_0}{\sqrt{\omega}} \right ) e^{- \sqrt{\omega} x}.$$ Since $v_\omega \left ( \frac{x_0}{\sqrt{\omega}} \right ) \leqslant 4 \varepsilon^2 \omega$, we finally get that $v_\omega (x) \leqslant C \varepsilon^2 \omega e^{- \sqrt{\omega} x}$ and thus $\phi_\omega (x) \leqslant C \varepsilon \sqrt{\omega} \, e^{- \frac{\sqrt{\omega}}{2} x}$ for any $x \geqslant x_0 / \sqrt{\omega}$.\ \ Now we see that, by the variation of the constants, $$\phi_\omega (x) = \left ( \zeta_\omega + \frac{I_\omega}{2 \sqrt{\omega}} \right ) e^{- \sqrt{\omega} x} - \frac{e^{\sqrt{\omega} x}}{2 \sqrt{\omega}} \int_x^\infty \ell_\omega (y) e^{- \sqrt{\omega} y} \, \text{d}y + \frac{e^{- \sqrt{\omega} x}}{2 \sqrt{\omega}} \int_x^\infty \ell_\omega (y) e^{\sqrt{\omega} y} \, \text{d}y$$ where $\ell_\omega (y) = - \phi_\omega (y)^3 + g(\phi_\omega (y)^2) \phi_\omega (y)$ and $I_\omega = \int_0^\infty \ell_\omega (y) e^{- \sqrt{\omega} y} \, \text{d}y$. Note that all of these integrals indeed converge, as $|\ell_\omega (y)| \leqslant C \omega^{3/2} e^{- 3 \sqrt{\omega} y/2}$ as $y \to \infty$. Separating the integral $I_\omega$ at $x_0 / \sqrt{\omega}$ and using respectively the control $\phi_\omega (y) \leqslant C \sqrt{\omega} e^{- \sqrt{\omega} y/2}$ if $y \geqslant x_0 / \sqrt{\omega}$, and the control $\phi_\omega (y) \leqslant C \sqrt{\omega}$ if $0 \leqslant y < x_0 / \sqrt{\omega}$, we get that $|I_\omega| \leqslant C \omega$. Hence $\zeta_\omega + \frac{I_\omega}{2 \sqrt{\omega}} \leqslant C \sqrt{\omega}$.\ \ As for the integral terms, we shall separate the integral at the point $x_0 / \sqrt{\omega}$ too. 
If $x \geqslant x_0 / \sqrt{\omega}$, there is no need to separate: the upper bound $\phi_\omega (y) \leqslant C \sqrt{\omega} \, e^{- \sqrt{\omega} y/2}$ directly gives $\left | \int_x^\infty \ell_\omega (y) e^{- \sqrt{\omega} y} \, \text{d}y \right | \leqslant C \omega e^{-5 \sqrt{\omega} x/2}$. If $0 \leqslant x < x_0 / \sqrt{\omega}$, we separate the integral and use the same upper bounds as for $I_\omega$; we get $$\left | \int_x^{+ \infty} \ell_\omega (y) e^{- \sqrt{\omega} y} \, \text{d}y \right | \leqslant C \omega \left ( e^{- \sqrt{\omega} x} - e^{-x_0} \right ) + C \omega e^{-5 x_0/2} \leqslant C \omega .$$ We then get $$\left | \frac{e^{\sqrt{\omega} x}}{2 \sqrt{\omega}} \int_x^\infty \ell_\omega (y) e^{- \sqrt{\omega} y} \, \text{d}y \right | \leqslant C \sqrt{\omega} \, e^{\sqrt{\omega} x} \leqslant C \sqrt{\omega} \, e^{x_0} = C \sqrt{\omega} \leqslant C \sqrt{\omega} \, e^{- \sqrt{\omega} x}$$ thanks to the lower bound $e^{- \sqrt{\omega} x} \geqslant e^{-x_0}$. In the lines above, the important fact is that the constant $C$ (which changes from one expression to another) does not depend on $\omega$. We thus have proved that, for any $x \geqslant 0$, $$\left | \frac{e^{\sqrt{\omega} x}}{2 \sqrt{\omega}} \int_x^\infty \ell_\omega (y) e^{- \sqrt{\omega} y} \, \text{d}y \right | \leqslant C \sqrt{\omega} \, e^{- \sqrt{\omega} x}.$$ The reasoning is exactly the same for the second integral: a direct exponential control when $x \geqslant x_0 / \sqrt{\omega}$ thanks to the previous upper bound, and a bounded control when $x < x_0 / \sqrt{\omega}$, which is sufficient for our purpose. Finally, we get that, for any $x \in \mathbb{R}$, $|\phi_\omega (x)| \leqslant C \sqrt{\omega} \, e^{- \sqrt{\omega} |x|}$ where $C$ does not depend on $\omega$.\ \ The estimates on the derivatives of $\phi_\omega$ follow from the expression obtained previously. 
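When differentiating the representation formula for $\phi_\omega$ above, note that the boundary contributions of the two integral terms cancel: $$\frac{\text{d}}{\text{d}x} \left [ - \frac{e^{\sqrt{\omega} x}}{2 \sqrt{\omega}} \int_x^\infty \ell_\omega (y) e^{- \sqrt{\omega} y} \, \text{d}y + \frac{e^{- \sqrt{\omega} x}}{2 \sqrt{\omega}} \int_x^\infty \ell_\omega (y) e^{\sqrt{\omega} y} \, \text{d}y \right ] = - \frac{e^{\sqrt{\omega} x}}{2} \int_x^\infty \ell_\omega (y) e^{- \sqrt{\omega} y} \, \text{d}y - \frac{e^{- \sqrt{\omega} x}}{2} \int_x^\infty \ell_\omega (y) e^{\sqrt{\omega} y} \, \text{d}y,$$ the two terms $\pm \frac{\ell_\omega (x)}{2 \sqrt{\omega}}$ cancelling each other; a second differentiation produces the source term $\ell_\omega (x)$, which confirms that the representation indeed solves $\phi_\omega '' - \omega \phi_\omega = \ell_\omega$.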
We indeed have $$\phi_\omega '(x) = \left ( - \sqrt{\omega} \zeta_\omega - \frac{I_\omega}{2} \right ) e^{- \sqrt{\omega} x} - \frac{e^{\sqrt{\omega}x}}{2} \int_x^\infty \ell_\omega (y) e^{- \sqrt{\omega} y} \, \text{d}y - \frac{e^{- \sqrt{\omega} x}}{2} \int_x^\infty \ell_\omega (y) e^{\sqrt{\omega} y} \, \text{d}y.$$ With the bounds shown above about the integral terms, we get $|\phi_\omega '(x)| \leqslant C \omega e^{- \sqrt{\omega} x}$ with the same proof. To control $\phi_\omega ''$ and further derivatives, we use the equation satisfied by $\phi_\omega$; the conclusion follows.\ \ Now let us prove the bound on $P_\omega = \phi_\omega - Q_\omega$. To start, let us prove that $||P_\omega||_{\infty} = o ( \sqrt{\omega} )$ as $\omega \to 0$. Let $\varepsilon >0$. We know, from the exponential decays of $\phi_\omega$ and $Q_\omega$, that $|P_\omega (x)| \leqslant C \sqrt{\omega} \, e^{- \sqrt{\omega} x}$ for all $x \in \mathbb{R}$. Let us take $\omega$ sufficiently small so that $\zeta_\omega \leqslant \sqrt{3 \omega}$, $|g(s)| \leqslant \delta_1 s$ for all $s \in [0 \, , 3 \omega ]$, and finally $| \zeta_\omega - \sqrt{2 \omega} | \leqslant \delta_2 \sqrt{\omega}$; where we have denoted $\delta_1 = \frac{\varepsilon}{4} \, e^{-12 \ln (C / \varepsilon )}$ and $\delta_2 = \frac{\varepsilon}{2} \, e^{-12 \ln (C / \varepsilon )}$. The previous lines imply that $\phi_\omega \leqslant \sqrt{3 \omega}$, $g ( \phi_\omega^2 ) \leqslant \delta_1 \phi_\omega^2 \leqslant 3 \delta_1 \omega$ and $|P_\omega (0)| \leqslant \delta_2 \sqrt{\omega}$. 
We then have, thanks to the equation $P_\omega '' - \omega P_\omega = - P_\omega (Q_\omega^2 + \phi_\omega Q_\omega + \phi_\omega^2 ) + g ( \phi_\omega^2 ) \phi_\omega$ verified by $P_\omega$, $$| P_\omega '' | \leqslant \omega | P_\omega | + (2 \omega + \sqrt{6} \omega + 3 \omega ) |P_\omega | + 3 \sqrt{3} \delta_1 \omega^{3/2} \leqslant 12 \omega | P_\omega | + 6 \delta_1 \omega^{3/2}.$$ Let $x_\omega = \frac{\ln (C / \varepsilon)}{\sqrt{\omega}}$, so that $|P_\omega (x)| \leqslant C \sqrt{\omega} \, e^{- \sqrt{\omega} x_\omega} = \varepsilon \sqrt{\omega}$ for all $x \geqslant x_\omega$. Following the same computation as in the first part of the proof, we get by Grönwall's lemma that, for all $x \in [0 \, , x_\omega]$, $$|P_\omega (x)| \leqslant \sqrt{\omega} \left [ \frac{\delta_1}{2} + e^{12 \sqrt{\omega} x} \left ( \delta_2 + \frac{\delta_1}{2} \right ) \right ] \leqslant \sqrt{\omega} \left [ \frac{\delta_1}{2} + e^{12 \sqrt{\omega} x_\omega} \left ( \delta_2 + \frac{\delta_1}{2} \right ) \right ] \leqslant \varepsilon \sqrt{\omega}$$ thanks to the judicious choices of $\delta_1$ and $\delta_2$. Therefore, we have proved that $|P_\omega (x)| \leqslant \varepsilon \sqrt{\omega}$ for all $x \geqslant 0$ and all $\omega >0$ small enough.\ \ Now, the variation of the constants and the fact that $P_\omega '(0)=0$ give the expression $$P_\omega (x) = A \, e^{\sqrt{\omega} x} + \left ( \frac{P_\omega (0)}{2} - \frac{J_\omega}{2 \sqrt{\omega}} \right ) e^{- \sqrt{\omega} x} - \frac{e^{\sqrt{\omega} x}}{2 \sqrt{\omega}} \int_x^{+ \infty} S_\omega (y) e^{- \sqrt{\omega} y} \, \text{d}y + \frac{e^{-\sqrt{\omega} x}}{2 \sqrt{\omega}} \int_x^{+ \infty} S_\omega (y) e^{\sqrt{\omega} y} \, \text{d}y$$ where $S_\omega = Q_\omega^3 - \phi_\omega^3 + g ( \phi_\omega^2 ) \phi_\omega = - P_\omega (Q_\omega^2 + \phi_\omega Q_\omega + \phi_\omega^2 ) + g ( \phi_\omega^2 ) \phi_\omega$ and $J_\omega = \int_0^{+ \infty} S_\omega (y) e^{\sqrt{\omega} y} \, \text{d}y$. 
Taking $\omega$ sufficiently small so that $g(s) \leqslant \varepsilon s$ for all $s \in [0 \, , 3 \omega ]$ and $|\zeta_\omega| \leqslant \sqrt{3 \omega}$, and also using the inequality $||P_\omega||_\infty \leqslant \varepsilon \sqrt{\omega}$ we have just proved, we find that $| S_\omega (y) | \leqslant C \varepsilon \omega^{3/2} e^{-2 \sqrt{\omega}y}$ for all $y \geqslant 0$. This gives $$|J_\omega| \leqslant C \varepsilon \omega , \, \, \, \, \, \, \, \left | \int_x^{+ \infty} S_\omega (y) e^{- \sqrt{\omega} y} \, \text{d}y \right | \leqslant C \varepsilon \omega e^{-3 \sqrt{\omega} x} \, \, \, \, \, \, \, \text{and} \, \, \, \, \, \, \, \left | \int_x^{+ \infty} S_\omega (y) e^{\sqrt{\omega} y} \, \text{d}y \right | \leqslant C \varepsilon \omega e^{- \sqrt{\omega} x}.$$ Since $P_\omega$ vanishes at infinity, this shows that $A=0$, and gathering all the upper bounds we get that $| P_\omega (x) | \leqslant C \varepsilon \sqrt{\omega} e^{- \sqrt{\omega} x}$.\ \ For the last bound, we know from the explicit expression of $Q_\omega$ that $Q_\omega (x) \geqslant c \sqrt{\omega} \, e^{- \sqrt{\omega} |x|}$. Taking $\varepsilon$ small enough in $| \phi_\omega (x) - Q_\omega (x) | \leqslant \varepsilon \sqrt{\omega} \, e^{- \sqrt{\omega} |x|}$, we obtain the desired lower bound: $\phi_\omega (x) \geqslant c \sqrt{\omega} \, e^{- \sqrt{\omega} |x|}$.\ \ We recall that the linearization of [\[NLS\]](#NLS){reference-type="eqref" reference="NLS"} around $\phi_\omega$ involves the operators $$L_+ = - \partial_x^2 + \omega - 3 \phi_\omega^2 + g ( \phi_\omega^2) + 2 \phi_\omega^2 g'( \phi_\omega^2 ) \, \, \, \, \, \text{and} \, \, \, \, \, L_- = - \partial_x^2 + \omega - \phi_\omega^2 + g ( \phi_\omega^2 )$$ as we can see in [@We2] for instance. Some spectral properties are known about $L_+$ and $L_-$ (see [@We1]). Both operators are self-adjoint in $L^2$. 
In $L^2$, the operator $L_+$ has exactly one negative eigenvalue and its kernel is generated by $\phi_\omega '$. On the other hand, still in $L^2$, the kernel of $L_-$ is generated by $\phi_\omega$.\ \ Let us discuss some aspects of the invertibility of $L_+$ and $L_-$, which will intervene in the last part of our proof. We denote by $A_\omega$ the even solution of $L_+ A_\omega = 0$ such that $\phi_\omega '' A_\omega - \phi_\omega ' A_\omega ' = 1$ on $\mathbb{R}$ (this Wronskian is indeed constant, since $\phi_\omega '$ and $A_\omega$ both solve $L_+ u = 0$). The variation of the constants shows that, if $A_\omega$ were bounded, then we would have $A_\omega (x) , A_\omega '(x) \, \underset{x \to + \infty}{\longrightarrow} \, 0$, which clearly contradicts the relation $\phi_\omega '' A_\omega - \phi_\omega ' A_\omega ' = 1$; thus $A_\omega$ is not bounded on $\mathbb{R}$. We will need the following estimate on $A_\omega$. **Lemma 3.** Assume that hypothesis $(H_1)$ holds. For $\omega > 0$ small enough and for any $k \in [\![ 0 \, , 6 ]\!]$, there exist constants $C_k > 0$ such that $|A_\omega^{(k)} (x)| \leqslant C_k \omega^{\frac{k-3}{2}} e^{\sqrt{\omega} |x|}$ for all $x \in \mathbb{R}$. *Proof.* Starting from the Wronskian relation, we have $A_\omega ' - \frac{\phi_\omega ''}{\phi_\omega '} \, A_\omega = - \frac{1}{\phi_\omega '}$ on $(0 \, , + \infty )$ and thus we get $$A_\omega (x) = \phi_\omega '(x) \left [ \alpha_\omega - \int_x^{1/ \sqrt{\omega}} \frac{\text{d}y}{\phi_\omega '(y)^2} \right ]$$ where $\alpha_\omega$ is an unknown constant (that depends on $\omega$). Now, let us define $\text{res}_\omega (x) := \frac{1}{\phi_\omega '(x)^2} - \frac{1}{\phi_\omega ''(0)^2 x^2}$. Using $\phi_\omega '(x) = \phi_\omega ''(0) x + \mathcal{O} (x^3)$ as $x \to 0$, we see that $\text{res}_\omega (x) = \mathcal{O} (1)$. 
Differentiating the expression of $A_\omega$ above, we find that $$A_\omega '(x) \, \underset{x \to 0}{=} \, \frac{\sqrt{\omega}}{\phi_\omega ''(0)} + \left ( \alpha_\omega - \int_0^{1/ \sqrt{\omega}} \text{res}_\omega \right ) \phi_\omega ''(0) + o(1).$$ Since $A_\omega$ is even, $A_\omega '(0) = 0$, thus $\alpha_\omega = \int_0^{1/\sqrt{\omega}} \text{res}_\omega - \frac{\sqrt{\omega}}{\phi_\omega ''(0)^2}$.\ \ Now let us take $\varepsilon >0$ and introduce $D_\omega := P_\omega ' = \phi_\omega ' - Q_\omega '$ where we recall that $P_\omega = \phi_\omega - Q_\omega$. We see that $D_\omega ' = \omega P_\omega - P_\omega (Q_\omega^2 + \phi_\omega Q_\omega + \phi_\omega^2) + g(\phi_\omega^2) \phi_\omega$. Using the estimates of Lemma 2 we obtain, for $\omega >0$ small enough, $|D_\omega '(x)| \leqslant C \varepsilon \omega^{3/2} e^{- \sqrt{\omega} x}$ for all $x>0$. For $x > \omega^{-1/2}$, we get $$|D_\omega (x)| \leqslant \int_x^{+ \infty} C \varepsilon \omega^{3/2} e^{- \sqrt{\omega} y} \, \text{d}y \leqslant C \varepsilon \omega e^{- \sqrt{\omega} x} \leqslant C \varepsilon \omega^{3/2} x e^{- \sqrt{\omega} x},$$ and for $0 < x < \omega^{-1/2}$ we get $$|D_\omega (x)| \leqslant \int_0^x C \varepsilon \omega^{3/2} e^{- \sqrt{\omega} y} \, \text{d}y \leqslant C \varepsilon \omega (1 - e^{- \sqrt{\omega} x}) \leqslant C \varepsilon \omega \sqrt{\omega} x \leqslant C \varepsilon \omega^{3/2} x e^{- \sqrt{\omega} x}.$$ Thus, $|D_\omega (x)| \leqslant C \varepsilon \omega^{3/2} x e^{- \sqrt{\omega} x}$ for all $x>0$. Note that we have used the fact that $D_\omega (0)=0$. Note also that $|D_\omega (x)| \leqslant C \varepsilon \omega e^{- \sqrt{\omega} x}$ holds for all $x>0$ as well. Now, using the explicit expression of $Q_\omega '$, we see that $| Q_\omega '(x) | \geqslant C \omega^{3/2} x$ for $x \in (0 \, , \omega^{-1/2} )$. For such $x$, this leads to $| \phi_\omega '(x) | \geqslant (1 - C \varepsilon ) | Q_\omega '(x)|$. 
Choosing $\varepsilon >0$ correctly, we obtain $\phi_\omega '(x)^2 \geqslant C Q_\omega'(x)^2 \geqslant C \omega^3 x^2$ for all $x \in (0 \, , \omega^{-1/2})$.\ \ On the other hand, differentiating four times the quantity $(\phi_\omega ')^2$ thanks to [\[eqphi\]](#eqphi){reference-type="eqref" reference="eqphi"} and using Lemma 2, we easily see that, for $\omega >0$ small enough and all $x>0$, $$\left | \frac{\text{d}^4}{\text{d}x^4} \left ( \phi_\omega ''(0)^2 x^2 - \phi_\omega '(x)^2 \right ) \right | \leqslant C \omega^4.$$ Since the function $x \mapsto \phi_\omega ''(0)^2 x^2 - \phi_\omega'(x)^2$ and its first three derivatives vanish at $x=0$, we obtain $| \phi_\omega ''(0)^2 x^2 - \phi_\omega '(x)^2 | \leqslant C \omega^4 x^4$ for all $x>0$.\ \ Finally, using [\[eqphi\]](#eqphi){reference-type="eqref" reference="eqphi"}, we see that $\phi_\omega ''(0) \sim - \sqrt{2} \, \omega^{3/2}$ as $\omega \to 0$. Thus, for $\omega > 0$ small enough, $\phi_\omega ''(0)^2 \geqslant C \omega^3$. Putting these estimates together, we find that, for $x \in (0 \, , \omega^{-1/2})$, $$| \text{res}_\omega (x) | = \frac{| \phi_\omega ''(0)^2 x^2 - \phi_\omega'(x)^2 |}{\phi_\omega '(x)^2 \phi_\omega ''(0)^2 x^2} \leqslant \frac{C \omega^4 x^4}{C \omega^3 x^2 \cdot C \omega^3 x^2} \leqslant C \omega^{-2}.$$ Integrating on $(0 \, , \omega^{-1/2})$ and recalling that $\phi_\omega ''(0)^{-2} \leqslant C \omega^{-3}$, we obtain $| \alpha_\omega | \leqslant C \omega^{-3}$. The conclusion now follows easily. 
For $0 < x \leqslant \omega^{-1/2}$, using the previous upper bounds and the explicit expression of $Q_\omega '$ we see that $| \phi_\omega '(x)| \leqslant C \varepsilon \omega^{3/2} x + | Q_\omega '(x)| \leqslant C \omega^{3/2} x$ and thus $$\left | \phi_\omega '(x) \int_x^{1/\sqrt{\omega}} \frac{\text{d}y}{\phi_\omega '(y)^2} \right | \leqslant C \omega^{3/2} x \int_x^{1/\sqrt{\omega}} \frac{C \omega^{-3} \, \text{d}y}{y^2} \leqslant C \omega^{3/2} x \cdot \frac{C \omega^{-3}}{x} \leqslant C \omega^{-3/2} \leqslant C \omega^{-3/2} e^{\sqrt{\omega} x}.$$ On the other hand, for $x> \omega^{-1/2}$, we have $$\int_{1/\sqrt{\omega}}^x \frac{\text{d}y}{\phi_\omega '(y)^2} \leqslant \int_{1/\sqrt{\omega}}^x \frac{C \omega^{-2} \, \text{d}y}{e^{-2 \sqrt{\omega} y}} \leqslant C \omega^{-5/2} e^{2 \sqrt{\omega} x}.$$ Using Lemma 2, we obtain $$\left | \phi_\omega '(x) \int_{1/ \sqrt{\omega}}^x \frac{\text{d}y}{\phi_\omega '(y)^2} \right | \leqslant C \omega e^{-\sqrt{\omega} x} \cdot C \omega^{-5/2} e^{2 \sqrt{\omega} x} \leqslant C \omega^{-3/2} e^{\sqrt{\omega} x}.$$ Hence the bound for $A_\omega$ is proved for all $x>0$. The bounds for its derivatives are similar and present no additional difficulties, now that $\alpha_\omega$ is estimated.\ \ For any bounded continuous function $W$, define $$I_+ [W] (x) := \left | \begin{array}{ll} \displaystyle{- \phi_\omega '(x) \int_0^x A_\omega W - A_\omega (x) \int_x^{+ \infty} \phi_\omega ' W} & \text{if $x \geqslant 0$} \\ \\ \displaystyle{\phi_\omega '(x) \int_x^0 A_\omega W + A_\omega (x) \int_{- \infty}^x \phi_\omega ' W} & \text{if $x<0$.} \end{array} \right.$$ Note that if $\langle W \, , \phi_\omega ' \rangle = 0$ then $- \int_x^{+ \infty} \phi_\omega ' W = \int_{- \infty}^x \phi_\omega ' W$ and therefore the two expressions above coincide at $x=0$ and provide a solution to the equation $L_+U = W$. We will now provide estimates on $\Lambda_\omega$. 
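For the reader's convenience, let us verify this last claim for $x > 0$: writing $U = I_+[W]$ and using $L_+ \phi_\omega ' = L_+ A_\omega = 0$ together with the normalization $\phi_\omega '' A_\omega - \phi_\omega ' A_\omega ' = 1$, two differentiations give $$U''(x) = - \phi_\omega '''(x) \int_0^x A_\omega W - A_\omega ''(x) \int_x^{+ \infty} \phi_\omega ' W - \left ( \phi_\omega ''(x) A_\omega (x) - \phi_\omega '(x) A_\omega '(x) \right ) W(x) = V_\omega (x) U(x) - W(x),$$ where $V_\omega := \omega - 3 \phi_\omega^2 + g ( \phi_\omega^2 ) + 2 \phi_\omega^2 g' ( \phi_\omega^2 )$ denotes the potential of $L_+ = - \partial_x^2 + V_\omega$; hence $L_+ U = W$ on $(0 \, , + \infty )$, and the case $x < 0$ is identical.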
In what follows, let us denote $\Lambda_\omega^Q := \omega \frac{\partial Q_\omega}{\partial \omega}$. First, we shall prove the following result, which will only be used in the next proof. **Lemma 4.** For $\omega > 0$ small enough (as in the previous lemmas), $\Lambda_\omega$ is bounded on $\mathbb{R}$. *Proof.* Our proof relies on spectral arguments. To this end, let $L_+^Q := - \partial_x^2 + \omega - 3 Q_\omega^2$ and $L_+^{Q0} := - \partial_x^2 + 1 - 3 Q^2$. We know from [@Ch] that $L_+^{Q0}$ has only one negative eigenvalue, which is $-3$, associated with the eigenfunction $Q^2$. The kernel of $L_+^{Q0}$ is generated by $Q'$. We know the following spectral coercivity property from [@Tao]: for any $u \in H^1 ( \mathbb{R})$, $$\langle L_+^{Q0} u \, , u \rangle \geqslant c_1 ||u||_{H^1}^2 - c_2 | \langle u \, , Q^2 \rangle |^2 - c_3 | \langle u \, , Q' \rangle |^2$$ with $c_1,c_2,c_3$ positive constants. Let $\text{Ev}_\omega u (x) = u \left ( \frac{x}{\sqrt{\omega}} \right )$. We see that $\text{Ev}_\omega^{-1} u(x) = u ( \sqrt{\omega} \, x)$, $\text{Ev}_\omega^\star = \sqrt{\omega} \, \text{Ev}_\omega^{-1}$ and $L_+^Q = \omega \, \text{Ev}_\omega^{-1} L_+^{Q0} \text{Ev}_\omega$. Using these identities, we compute $$\langle L_+^Q u \, , u \rangle = \sqrt{\omega} \langle L_+^{Q0} ( \text{Ev}_\omega u ) \, , ( \text{Ev}_\omega u ) \rangle \geqslant \sqrt{\omega} \left [ c_1 || \text{Ev}_\omega u ||_{H^1}^2 - c_2 | \langle \text{Ev}_\omega u \, , Q^2 \rangle |^2 - c_3 | \langle \text{Ev}_\omega u \, , Q' \rangle |^2 \right ]$$ where $\langle \text{Ev}_\omega u \, , Q^2 \rangle = \omega^{-1/2} \langle u \, , Q_\omega^2 \rangle$, $\langle \text{Ev}_\omega u \, , Q' \rangle = \omega^{-1/2} \langle u \, , Q_\omega ' \rangle$ and $|| \text{Ev}_\omega u ||_{H^1}^2 = \omega^{-1/2} || u ||_{H_\omega^1}^2$ with $||u||_{H_\omega^1}^2 := \omega ||u||^2 + ||u'||^2$. 
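The conjugation identity $L_+^Q = \omega \, \text{Ev}_\omega^{-1} L_+^{Q0} \text{Ev}_\omega$ can be checked directly: since $Q_\omega (x) = \sqrt{\omega} \, Q ( \sqrt{\omega} \, x )$ and $( \text{Ev}_\omega u )''(y) = \frac{1}{\omega} \, u'' ( y / \sqrt{\omega} )$, we compute $$\omega \, \text{Ev}_\omega^{-1} L_+^{Q0} \text{Ev}_\omega u \, (x) = \omega \left [ - \frac{1}{\omega} u''(x) + u(x) - 3 Q ( \sqrt{\omega} x )^2 u(x) \right ] = - u''(x) + \omega u(x) - 3 Q_\omega (x)^2 u(x) = L_+^Q u (x).$$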
Hence, the following lower bound holds for all $u \in H^1 ( \mathbb{R})$, $$\langle L_+^Q u \, , u \rangle \geqslant c_1 ||u||_{H_\omega^1}^2 - \frac{c_2}{\sqrt{\omega}} | \langle u \, , Q_\omega^2 \rangle |^2 - \frac{c_3}{\sqrt{\omega}} | \langle u \, , Q_\omega ' \rangle |^2.$$ Now, take $\varepsilon > 0$ which we will fix later. We take $\omega_0> 0$ small enough (in a sense to be made precise later) and $\omega > 0$ close enough to $\omega_0$ (we ask that $| \omega - \omega_0 | \leqslant \varepsilon \omega_0$). We denote $\tau := \frac{\phi_\omega - \phi_{\omega_0}}{\omega - \omega_0}$, which satisfies the equation $$\begin{array}{rl} & \displaystyle{\tau '' = \phi_\omega + \omega_0 \tau - ( \phi_\omega^2 + \phi_\omega \phi_{\omega_0} + \phi_{\omega_0}^2 ) \tau + \phi_\omega \, \frac{g(\phi_\omega^2) - g(\phi_{\omega_0}^2)}{\omega - \omega_0} + g(\phi_{\omega_0}^2) \tau} \\ \\ \text{i.e.} & \displaystyle{L_+^{Q} \tau = - \phi_\omega + (\phi_\omega^2 + \phi_\omega \phi_{\omega_0} + \phi_{\omega_0}^2 - 3 Q_{\omega_0}^2 ) \tau - \phi_\omega \, \frac{g(\phi_\omega^2) - g(\phi_{\omega_0}^2)}{\omega - \omega_0} - g(\phi_{\omega_0}^2) \tau,} \end{array}$$ where $L_+^Q$ is the previous operator with frequency $\omega_0$. We take $\omega_0$ small enough such that the bounds in Lemma 2 hold. Moreover, we see that $| Q_\omega - Q_{\omega_0} | \leqslant C | \omega - \omega_0 | \omega_0^{-1/2} \leqslant C \varepsilon \sqrt{\omega_0}$. To see that, recall that $Q_\omega$ is known explicitly, thus we can compute $\Lambda_\omega^Q = \sqrt{\frac{\omega}{2}} \left ( 1 - \sqrt{\omega} x \tanh ( \sqrt{\omega} x) \right ) \, \frac{1}{\cosh ( \sqrt{\omega} x )}$, which gives $| \Lambda_\omega^Q | \leqslant C \sqrt{\omega_0}$ and then $| \partial_\omega Q_\omega | \leqslant C \omega_0^{-1/2}$. This proves the upper bound on $| Q_\omega - Q_{\omega_0} |$. Now, let us estimate $\langle L_+^Q \tau \, , \tau \rangle$. 
First, $$| \langle \phi_\omega \, , \tau \rangle | \leqslant || \phi_\omega || \, || \tau || \leqslant C \omega_0^{1/4} || \tau ||.$$ Now, about the second term, writing $$\begin{array}{rcl} | \phi_\omega^2 + \phi_\omega \phi_{\omega_0} + \phi_{\omega_0}^2 - 3 Q_{\omega_0}^2 | & \leqslant & (\phi_\omega + Q_\omega ) | \phi_\omega - Q_\omega | + (Q_\omega + Q_{\omega_0} ) | Q_\omega - Q_{\omega_0} | \\ \\ & & \, \, \, + \, \phi_\omega | \phi_{\omega_0} - Q_{\omega_0} | + Q_{\omega_0} | \phi_\omega - Q_\omega | + Q_{\omega_0} | Q_\omega - Q_{\omega_0} | \\ \\ & & \, \, \, + \, (\phi_{\omega_0} + Q_{\omega_0} ) | \phi_{\omega_0} - Q_{\omega_0} |, \end{array}$$ we get $| \phi_\omega^2 + \phi_\omega \phi_{\omega_0} + \phi_{\omega_0}^2 - 3Q_{\omega_0}^2 | \leqslant C \varepsilon \omega_0$. Thus, $$\left | \left \langle ( \phi_\omega^2 + \phi_\omega \phi_{\omega_0} + \phi_{\omega_0}^2 - 3 Q_{\omega_0}^2 ) \tau \, , \tau \right \rangle \right | \leqslant C \varepsilon \omega_0 || \tau ||^2.$$ Now, about the third term, we take $\omega_0$ (and $\omega$) small enough such that $| g'(s) | \leqslant \varepsilon$ for all $s \in [0 \, , 5 \omega_0 ]$. This implies $|g(\phi_\omega^2) - g(\phi_{\omega_0}^2)| \leqslant \varepsilon | \phi_\omega^2 - \phi_{\omega_0}^2 |$, which leads to $$\left | \phi_\omega \, \frac{g(\phi_\omega^2) - g(\phi_{\omega_0}^2)}{\omega - \omega_0} \right | \leqslant \phi_\omega \varepsilon | \tau | ( \phi_\omega + \phi_{\omega_0} ) \leqslant C \varepsilon \omega_0 | \tau |.$$ Thus, $$\left | \left \langle \phi_\omega \, \frac{g(\phi_\omega^2) - g(\phi_{\omega_0}^2)}{\omega - \omega_0} \, , \tau \right \rangle \right | \leqslant C \varepsilon \omega_0 || \tau ||^2.$$ Finally, about the last term, $| g(\phi_{\omega_0}^2) | \leqslant \varepsilon \phi_{\omega_0}^2 \leqslant C \varepsilon \omega_0$, thus $\left | \langle g(\phi_{\omega_0}^2) \tau \, , \tau \rangle \right | \leqslant C \varepsilon \omega_0 || \tau ||^2$. 
Gathering these estimates, we have $$| \langle L_+^Q \tau \, , \tau \rangle | \leqslant C \omega_0^{1/4} || \tau || + C \varepsilon \omega_0 || \tau ||^2.$$ Using the spectral inequality, and since $\tau \in H^1 ( \mathbb{R})$, we know that $\langle L_+^Q \tau \, , \tau \rangle \geqslant c_1 || \tau ||_{H_{\omega_0}^1}^2 - \frac{c_2}{\sqrt{\omega_0}} | \langle \tau \, , Q_{\omega_0}^2 \rangle |^2 - \frac{c_3}{\sqrt{\omega_0}} | \langle \tau \, , Q_{\omega_0}' \rangle |^2$. Since $\tau$ is even and $Q_{\omega_0} '$ is odd, $\langle \tau \, , Q_{\omega_0} ' \rangle = 0$. We estimate the other scalar product as follows, using both that $L_+^Q Q_{\omega_0}^2 = -3 \omega_0 Q_{\omega_0}^2$ and that $L_+^Q$ is self-adjoint: $$\begin{array}{rcl} | \langle \tau \, , Q_{\omega_0}^2 \rangle | &=& \displaystyle{\frac{1}{3 \omega_0} | \langle \tau \, , L_+^Q Q_{\omega_0}^2 \rangle | \, \, = \, \, \frac{1}{3 \omega_0} | \langle L_+^Q \tau \, , Q_{\omega_0}^2 \rangle |} \\ \\ & \leqslant & \displaystyle{\frac{1}{3 \omega_0} \left [ | \langle \phi_\omega \, , Q_{\omega_0}^2 \rangle | + | \langle (\phi_\omega^2 + \phi_\omega \phi_{\omega_0} + \phi_{\omega_0}^2 -3 Q_{\omega_0}^2 ) \tau \, , Q_{\omega_0}^2 \rangle | + \left | \left \langle \phi_\omega \, \frac{g(\phi_\omega^2) - g(\phi_{\omega_0}^2)}{\omega - \omega_0} \, , Q_{\omega_0}^2 \right \rangle \right | \right.} \\ \\ & & \displaystyle{\left. \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \, \textcolor{white}{\frac{1}{2}} \, \, \, \, + \, | \langle g(\phi_{\omega_0}^2) \tau \, , Q_{\omega_0}^2 \rangle | \right ].} \end{array}$$ Directly using the exponential controls, we find $| \langle \phi_\omega \, , Q_{\omega_0}^2 \rangle | \leqslant C \omega_0$. 
In order to control the other terms, we recall the estimates proved above: $$| \phi_\omega^2 + \phi_\omega \phi_{\omega_0} + \phi_{\omega_0}^2 - 3 Q_{\omega_0}^2 | \leqslant C \varepsilon \omega_0 , \, \, \, \, \left | \phi_\omega \, \frac{g(\phi_\omega^2) - g(\phi_{\omega_0}^2)}{\omega - \omega_0} \right | \leqslant C \varepsilon \omega_0 | \tau | \, \, \, \, \text{and} \, \, \, \, | g(\phi_{\omega_0}^2) | \leqslant C \varepsilon \omega_0.$$ This leads to: first, $$| \langle (\phi_\omega^2 + \phi_\omega \phi_{\omega_0} + \phi_{\omega_0}^2 -3 Q_{\omega_0}^2 ) \tau \, , Q_{\omega_0}^2 \rangle | \leqslant C \varepsilon \omega_0 \int_{\mathbb{R}} | \tau | Q_{\omega_0}^2 \leqslant C \varepsilon \omega_0 || \tau || \, || Q_{\omega_0}^2 || \leqslant C \varepsilon \omega_0^{7/4} || \tau ||;$$ second, $$\left | \left \langle \phi_\omega \, \frac{g(\phi_\omega^2) - g(\phi_{\omega_0}^2)}{\omega - \omega_0} \, , Q_{\omega_0}^2 \right \rangle \right | \leqslant C \varepsilon \omega_0 \int_{\mathbb{R}} | \tau | Q_{\omega_0}^2 \leqslant C \varepsilon \omega_0^{7/4} || \tau ||;$$ and third, $$| \langle g(\phi_{\omega_0}^2) \tau \, , Q_{\omega_0}^2 \rangle | \leqslant C \varepsilon \omega_0 \int_{\mathbb{R}} | \tau | Q_{\omega_0}^2 \leqslant C \varepsilon \omega_0^{7/4} || \tau ||.$$ Overall, we obtain $| \langle \tau \, , Q_{\omega_0}^2 \rangle | \leqslant C + C \varepsilon \omega_0^{3/4} || \tau ||$, which leads to $| \langle \tau \, , Q_{\omega_0}^2 \rangle |^2 \leqslant C + C \varepsilon \omega_0^{3/4} || \tau || + C \varepsilon^2 \omega_0^{3/2} || \tau ||^2$. 
Hence, going back to the spectral inequality, we obtain $$\begin{array}{rl} & \displaystyle{|| \tau ||_{H_{\omega_0}^1}^2 \leqslant C | \langle L_+^Q \tau \, , \tau \rangle | + \frac{C}{\sqrt{\omega_0}} | \langle \tau \, , Q_{\omega_0}^2 \rangle |^2 \leqslant C \omega_0^{1/4} || \tau || + C \varepsilon \omega_0 || \tau||^2 + C \omega_0^{-1/2} + C \varepsilon \omega_0^{1/4} || \tau || + C \varepsilon^2 \omega_0 || \tau ||^2} \\ \\ \text{thus} & \displaystyle{\omega_0 || \tau ||^2 \leqslant || \tau ||_{H_{\omega_0}^1}^2 \leqslant C \omega_0^{1/4} || \tau || + C \varepsilon \omega_0 || \tau ||^2 + C \omega_0^{-1/2}} \\ \\ \text{thus} & \displaystyle{\omega_0 ( 1 - C \varepsilon ) || \tau ||^2 - C \omega_0^{1/4} || \tau || - C \omega_0^{-1/2} \leqslant 0.} \end{array}$$ Choosing $\varepsilon > 0$ small enough, we may assume $1 - C \varepsilon \geqslant \frac{1}{2}$ and thus $$\frac{\omega_0}{2} || \tau ||^2 - C \omega_0^{1/4} || \tau || - C \omega_0^{-1/2} \leqslant 0.$$ The positive root of the polynomial $\frac{\omega_0}{2} X^2 - C \omega_0^{1/4} X - C \omega_0^{-1/2}$ being $\left ( C + \sqrt{C^2 + 2C} \right ) \omega_0^{-3/4}$, i.e. $C \omega_0^{-3/4}$ (where the constant $C$ is different), we have $|| \tau || \leqslant C \omega_0^{-3/4}$.\ \ Now, recalling that $|| \tau ' ||^2 \leqslant || \tau ||_{H_{\omega_0}^1}^2 \leqslant C \omega_0^{1/4} || \tau || + C \varepsilon \omega_0 || \tau ||^2$ and using the upper bound above on $|| \tau ||$, we get $|| \tau ' ||^2 \leqslant C \omega_0^{-1/2}$. This leads to $|| \tau ||_{L^\infty}^2 \leqslant 2 || \tau || \, || \tau' || \leqslant C \omega_0^{-3/4} \omega_0^{-1/4} = C \omega_0^{-1}$ and thus $|| \tau ||_{L^\infty} \leqslant C \omega_0^{-1/2}$.\ \ Now, take $x \in \mathbb{R}$ fixed. We have $\left | \frac{\phi_\omega (x) - \phi_{\omega_0} (x)}{\omega - \omega_0} \right | = | \tau (x) | \leqslant C \omega_0^{-1/2}$ for $\omega$ taken as before. 
Letting $\omega \to \omega_0$, we obtain $\left | ( \partial_\omega \phi_\omega )_{\omega = \omega_0} (x) \right | \leqslant C \omega_0^{-1/2}$ and thus $| \Lambda_{\omega_0} (x) | \leqslant C \sqrt{\omega_0}$. As we will see in the next lemma, one cannot hope for a better estimate. The constant $C$ is uniform (it does not depend on $x$), showing that $\Lambda_{\omega_0}$ is indeed bounded. This is the result announced.\ \ Now let us give more precise bounds on $\Lambda_\omega$. **Lemma 5.** Assume $g$ to be $\mathscr{C}^{5} ( (0 \, , + \infty ))$, $\mathscr{C}^1 ([0 \, , \infty ))$ and such that $g(0)=g'(0)=0$. For any $k \in [\![ 0 \, , 6 ]\!]$, there exists $C_k > 0$ such that, for any $\omega >0$ small enough and any $x \in \mathbb{R}$, $$| \Lambda_\omega^{(k)} (x) | \leqslant C_k \omega^{\frac{1+k}{2}} (1 + \sqrt{\omega} |x| ) e^{- \sqrt{\omega} |x|}.$$ Moreover, for every $\varepsilon > 0$, for any $\omega >0$ small enough, $$| \Lambda_\omega (x) - \Lambda_\omega^Q (x) | \leqslant \varepsilon \sqrt{\omega} (1 + \sqrt{\omega} |x| ) e^{- \sqrt{\omega} |x|}.$$ Finally, for $\omega$ small enough, $\langle \phi_\omega \, , \Lambda_\omega \rangle \geqslant C \sqrt{\omega}$. *Proof.* The condition $\langle W \, , \phi_\omega ' \rangle = 0$ is in particular satisfied by $W= - \omega \phi_\omega$ since $\phi_\omega \phi_\omega '$ is odd. We know that $L_+ \Lambda_\omega = - \omega \phi_\omega$. Hence, there exist constants $c_\omega^A,c_\omega^\phi$ (possibly depending on $\omega$) such that $\Lambda_\omega = I_+ [ - \omega \phi_\omega ] + c_\omega^A A_\omega + c_\omega^\phi \phi_\omega '$. Since $I_+ [ - \omega \phi_\omega ]$, $A_\omega$ and $\Lambda_\omega$ are even while $\phi_\omega '$ is odd, we obtain $c_\omega^\phi = 0$. Moreover, since $\Lambda_\omega$ is bounded on $\mathbb{R}$ (see Lemma 4), $c_\omega^A = 0$. Hence $\Lambda_\omega = I_+ [ - \omega \phi_\omega ]$. 
We also easily check that, using the bounds on $\phi_\omega$, $\phi_\omega '$ and $A_\omega$, we have $| I_+ [ - \omega \phi_\omega ] (x) | \leqslant C \sqrt{\omega} ( 1 + \sqrt{\omega} |x| ) e^{- \sqrt{\omega} |x|}$. The term $\omega |x| e^{- \sqrt{\omega} |x|}$ comes from the first integral in the definition of $I_+$. Thus, $$| \Lambda_\omega (x) | \leqslant C \sqrt{\omega} ( 1 + \sqrt{\omega} |x| ) e^{- \sqrt{\omega} |x|}.$$ Differentiating the formula $\Lambda_\omega = I_+ [ - \omega \phi_\omega ]$, we similarly get the estimates on the derivatives of $\Lambda_\omega$. Now consider the second point of the lemma: let $\varepsilon > 0$ and $\delta > 0$ which will be fixed later (depending on $\varepsilon$). The proof is similar to the one of the analogous result in Lemma 2. Let us denote $\Theta_\omega := \Lambda_\omega - \Lambda_\omega^Q$. Recalling that $P_\omega = \phi_\omega - Q_\omega$, the equation satisfied by $\Theta_\omega$ is $$\Theta_\omega '' = \omega P_\omega + \omega \Theta_\omega - 3 \phi_\omega^2 \Theta_\omega -3 \Lambda_\omega^Q P_\omega ( \phi_\omega + Q_\omega ) + 2 \Lambda_\omega \phi_\omega^2 g'(\phi_\omega^2) + \Lambda_\omega g ( \phi_\omega^2 ).$$ Taking $\omega$ small enough, we can assume that $|P_\omega| \leqslant \delta \sqrt{\omega}$, $\phi_\omega^2 \leqslant \zeta_\omega^2 \leqslant 3 \omega$, $|g'(\phi_\omega^2)| \leqslant \delta$ and $|g(\phi_\omega^2)| \leqslant \delta \phi_\omega^2 \leqslant C \delta \omega$. We also see, from the bound above about $\Lambda_\omega$, that $| \Lambda_\omega | \leqslant C \sqrt{\omega}$ (for example, observe that $x \mapsto (1 + \sqrt{\omega} x ) e^{- \sqrt{\omega} x}$ is nonincreasing on $[0 \, , + \infty )$). 
Gathering these bounds we obtain $$| \Theta_\omega '' | \leqslant C \delta \omega^{3/2} + 10 \omega | \Theta_\omega |.$$ We can assume $\omega$ small enough such that $\left | \omega \, \frac{\text{d} \zeta_\omega}{\text{d} \omega} - \sqrt{\frac{\omega}{2}} \right | \leqslant \delta \sqrt{\omega}$ i.e. $| \Theta_\omega (0)| \leqslant \delta \sqrt{\omega}$. By Grönwall's lemma, we get that, for any $x>0$, $$| \Theta_\omega (x) | \leqslant \sqrt{\omega} \left [ \frac{C \delta}{10} + e^{10 \sqrt{\omega} x} \left ( \delta + \frac{C \delta}{10} \right ) \right ] \leqslant C \delta \sqrt{\omega} (1 + e^{10 \sqrt{\omega} x}).$$ We also know that $| \Theta_\omega (x)| \leqslant C \sqrt{\omega} ( 1 + \sqrt{\omega} x ) e^{- \sqrt{\omega} x} \leqslant C \sqrt{\omega} \, e^{- \sqrt{\omega} x /2}$. Denoting $x_\omega := 2 \omega^{-1/2} \ln (C/\varepsilon)$, we see that, for any $x \geqslant x_\omega$, $| \Theta_\omega (x) | \leqslant C \sqrt{\omega} \, e^{- \sqrt{\omega} x_\omega / 2} = \varepsilon \sqrt{\omega}$. On the other hand, for any $x \in [0 \, , x_\omega ]$, $| \Theta_\omega (x) | \leqslant C \delta \sqrt{\omega} \left ( 1 + C \varepsilon^{-20} \right ) \leqslant \varepsilon \sqrt{\omega}$, provided we take $\delta$ small enough (depending on $\varepsilon$ only, not depending on $\omega$). Therefore, we have proved that $|| \Theta_\omega ||_\infty \leqslant \varepsilon \sqrt{\omega}$.\ \ Now, consider $\widetilde{T}_\omega := -3 \phi_\omega^2 \Theta_\omega -3 \Lambda_\omega^Q P_\omega ( \phi_\omega + Q_\omega ) + 2 \Lambda_\omega \phi_\omega^2 g'(\phi_\omega^2) + \Lambda_\omega g(\phi_\omega^2)$ and $T_\omega := \omega P_\omega + \widetilde{T}_\omega$, in order that $\Theta_\omega '' - \omega \Theta_\omega = T_\omega$. 
The method of variation of constants and the initial condition $\Theta_\omega '(0)=0$ show that, for $x > 0$, $$\Theta_\omega (x) = \left ( \frac{\Theta_\omega (0)}{2} + \frac{\text{IT}^-}{2 \sqrt{\omega}} \right ) e^{\sqrt{\omega} x} + \frac{\Theta_\omega (0)}{2} e^{- \sqrt{\omega} x} - \frac{e^{\sqrt{\omega} x}}{2 \sqrt{\omega}} \int_x^{+ \infty} T_\omega (y) e^{- \sqrt{\omega} y} \, \text{d}y - \frac{e^{- \sqrt{\omega} x}}{2 \sqrt{\omega}} \int_0^x (\omega P_\omega (y) + \widetilde{T}_\omega (y)) e^{\sqrt{\omega} y} \, \text{d}y,$$ where $\text{IT}^- = \int_0^{+ \infty} T_\omega (y) e^{- \sqrt{\omega} y} \, \text{d}y$. The previous bounds on $\phi_\omega$ and $\Lambda_\omega$ ensure the existence of $\text{IT}^-$ and of all the integral terms in the expression of $\Theta_\omega (x)$. Since $\Theta_\omega (x) \, \underset{x \to + \infty}{\longrightarrow} \, 0$, we must have $\frac{\Theta_\omega (0)}{2} + \frac{\text{IT}^-}{2 \sqrt{\omega}} = 0$. Moreover, using the bounds on $\phi_\omega$ and $\Lambda_\omega$, we see that $$\left | \int_x^{+ \infty} T_\omega (y) e^{- \sqrt{\omega} y} \, \text{d}y \right | \leqslant \varepsilon \omega e^{-2 \sqrt{\omega} x}, \, \, \, \left | \int_0^x \omega P_\omega (y) e^{\sqrt{\omega} y} \, \text{d}y \right | \leqslant \varepsilon \omega^{3/2} x , \, \, \, \left | \int_0^x \widetilde{T}_\omega (y) e^{\sqrt{\omega} y} \, \text{d}y \right | \leqslant \varepsilon \omega.$$ Gathering these estimates in the expression of $\Theta_\omega$, we obtain $$| \Theta_\omega (x) | \leqslant C \varepsilon \sqrt{\omega} ( 1 + \sqrt{\omega} x) e^{- \sqrt{\omega} x},$$ which is the desired result.\ \ For the last point of the lemma, we take $\varepsilon > 0$, to be fixed later.
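The representation formula above can be sanity-checked on a model case admitting a closed-form solution: with $\omega = 1$ and $T(y) = e^{-2y}$, the decaying solution of $\Theta '' - \Theta = T$ on $[0 \, , + \infty)$ with $\Theta '(0) = 0$ is $\Theta (x) = - \frac{2}{3} e^{-x} + \frac{1}{3} e^{-2x}$, and the coefficient of the growing mode vanishes as claimed. A minimal numerical sketch (assuming NumPy and SciPy; this $T$ is an illustrative choice, not the $T_\omega$ of the proof):

```python
import numpy as np
from scipy.integrate import quad

omega = 1.0
s = np.sqrt(omega)
T = lambda y: np.exp(-2.0 * y)   # illustrative source term
theta0 = -1.0 / 3.0              # value of Theta(0) in this model case

# coefficient of the growing mode e^{sqrt(omega) x} must vanish
IT = quad(lambda y: T(y) * np.exp(-s * y), 0, np.inf)[0]
assert abs(theta0 / 2 + IT / (2 * s)) < 1e-8

def theta_formula(x):
    """Variation-of-constants formula (growing-mode coefficient already zero)."""
    i1 = quad(lambda y: T(y) * np.exp(-s * y), x, np.inf)[0]
    i2 = quad(lambda y: T(y) * np.exp(s * y), 0, x)[0]
    return theta0 / 2 * np.exp(-s * x) \
        - np.exp(s * x) / (2 * s) * i1 - np.exp(-s * x) / (2 * s) * i2

theta_exact = lambda x: -2.0 / 3.0 * np.exp(-x) + 1.0 / 3.0 * np.exp(-2.0 * x)
assert all(abs(theta_formula(x) - theta_exact(x)) < 1e-6 for x in (0.0, 0.7, 2.5))
```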
Provided we take $\omega$ small enough, we have $$| \phi_\omega (x) - Q_\omega (x) | \leqslant \varepsilon \sqrt{\omega} \, e^{- \sqrt{\omega} |x|} \, \, \, \, \, \text{and} \, \, \, \, \, | \Lambda_\omega (x) - \Lambda_\omega^Q (x) | \leqslant \varepsilon \sqrt{\omega} ( 1 + \sqrt{\omega} |x| ) e^{- \sqrt{\omega} |x|}$$ where we recall that $$Q_\omega (x) = \frac{\sqrt{2 \omega}}{\cosh ( \sqrt{\omega} \, x )} \, \, \, \, \, \text{and} \, \, \, \, \, \Lambda_\omega^Q (x) = \sqrt{\frac{\omega}{2}} \left ( 1 - \sqrt{\omega} \, x \tanh ( \sqrt{\omega} \, x) \right ) \, \frac{1}{\cosh ( \sqrt{\omega} \, x )}.$$ We write that $\langle \phi_\omega \, , \Lambda_\omega \rangle = \langle Q_\omega \, , \Lambda_\omega^Q \rangle + \langle \phi_\omega - Q_\omega \, , \Lambda_\omega^Q \rangle + \langle \phi_\omega \, , \Lambda_\omega - \Lambda_\omega^Q \rangle$, where, after an integration by parts, $$\langle Q_\omega \, , \Lambda_\omega^Q \rangle = 2 \sqrt{\omega} \int_0^{+ \infty} (1 - y \tanh y ) \frac{\text{d}y}{\cosh^2 (y)} = \sqrt{\omega} \int_0^{+ \infty} \frac{\text{d}y}{\cosh^2 (y)} \geqslant \frac{\sqrt{\omega}}{2}.$$ Using the control on $\phi_\omega - Q_\omega$ we find $$| \langle \phi_\omega - Q_\omega \, , \Lambda_\omega^Q \rangle | \leqslant 2 \varepsilon \sqrt{2 \omega} \int_0^{+ \infty} e^{-2y} (1+y) \, \text{d}y = C \varepsilon \sqrt{\omega}.$$ Using the control on $\Lambda_\omega - \Lambda_\omega^Q$ we similarly find that $| \langle \phi_\omega \, , \Lambda_\omega - \Lambda_\omega^Q \rangle | \leqslant C \varepsilon \sqrt{\omega}$. Gathering these estimates we find $\langle \phi_\omega \, , \Lambda_\omega \rangle \geqslant \left ( \frac{1}{2} - C \varepsilon \right ) \sqrt{\omega} \geqslant \frac{\sqrt{\omega}}{4}$ provided we take $\varepsilon$ small enough (and thus $\omega$ small enough). ## Conjugate identity Let $S = \phi_\omega \cdot \partial_x \cdot \frac{1}{\phi_\omega}$ so that $S^* = - \frac{1}{\phi_\omega} \cdot \partial_x \cdot \phi_\omega$.
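The operators $S$ and $S^*$ act as first-order operators, $Su = u' - \frac{\phi_\omega '}{\phi_\omega} u$ and $S^* u = -u' - \frac{\phi_\omega '}{\phi_\omega} u$, and the composition $S^* S$ is a Schrödinger operator with potential $\phi_\omega '' / \phi_\omega$. These facts use nothing beyond smoothness and nonvanishing of $\phi_\omega$, and can be checked symbolically (a sketch assuming SymPy, with `phi` standing for a generic nonvanishing profile):

```python
import sympy as sp

x = sp.symbols('x', real=True)
phi = sp.Function('phi', positive=True)(x)   # generic nonvanishing profile
u = sp.Function('u')(x)

S  = lambda f: sp.simplify(phi * sp.diff(f / phi, x))    # S  = phi . d/dx . (1/phi)
Ss = lambda f: sp.simplify(-sp.diff(phi * f, x) / phi)   # S* = -(1/phi) . d/dx . phi

assert S(phi) == 0   # S annihilates phi
assert sp.simplify(S(u) - (sp.diff(u, x) - sp.diff(phi, x) / phi * u)) == 0
assert sp.simplify(Ss(u) - (-sp.diff(u, x) - sp.diff(phi, x) / phi * u)) == 0

# S*S u = -u'' + (phi''/phi) u : a Schrödinger operator with potential phi''/phi
residual = Ss(S(u)) - (-sp.diff(u, x, 2) + sp.diff(phi, x, 2) / phi * u)
assert sp.simplify(residual) == 0
```

Combined with the equation satisfied by $\phi_\omega$, this computation is what identifies $S^* S$ and $SS^*$ with second-order operators of the type studied in this section.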
Let us define $$\begin{array}{rl} & M_+ = - \partial_x^2 + \omega - g ( \phi_\omega^2 ) + 2 \, \frac{G ( \phi_\omega^2 )}{\phi_\omega^2} \\ \\ \text{and}& M_- = - \partial_x^2 + \omega - 5 g ( \phi_\omega^2) + 2 \phi_\omega^2 \, g'( \phi_\omega^2) + 6 \, \frac{G ( \phi_\omega^2)}{\phi_\omega^2}. \end{array}$$ **Lemma 6.** We have $S^2 L_+ L_- = M_+ M_- S^2$. *Proof.* From (3.25)-(3.26) of [@Ch] we recall the following general formula: for any nonvanishing function $R$, denoting $V_\pm = R^2 \pm 3R' + \frac{R''}{R}$, we have $$( \partial_x - R ) ( \partial_x^2 - V_+ ) ( \partial_x + R) = ( \partial_x + R) ( \partial_x^2 - V_-) ( \partial_x - R)$$ both sides being equal to $\partial_x^4 - \left ( 2R^2 + \frac{R''}{R} \right ) \partial_x^2 - \left ( 4RR'+ \frac{R'''}{R} +\frac{R'R''}{R^2} \right ) \partial_x - (3(R')^2 + 3RR''-R^4)$. Let us apply this identity with $R = \phi_\omega ' / \phi_\omega$. Thanks to [\[eqphi\]](#eqphi){reference-type="eqref" reference="eqphi"} and the identity $(\phi_\omega ')^2 = \omega \phi_\omega^2 - \frac{1}{2} \phi_\omega^4 + G ( \phi_\omega^2 )$ that is itself derived from [\[eqphi\]](#eqphi){reference-type="eqref" reference="eqphi"}, we find that $$\begin{array}{rl} & \displaystyle{R^2 = \omega - \frac{1}{2} \phi_\omega^2 + \frac{G ( \phi_\omega^2 )}{\phi_\omega^2},} \\ \\ & \displaystyle{R' = - \frac{1}{2} \phi_\omega^2 + g ( \phi_\omega^2 ) - \frac{G ( \phi_\omega^2 )}{\phi_\omega^2},} \\ \\ \text{and} & \displaystyle{\frac{R''}{R} = - \phi_\omega^2 + 2 \left ( \phi_\omega^2 \, g'( \phi_\omega^2) - g ( \phi_\omega^2 ) + \frac{G ( \phi_\omega^2 )}{\phi_\omega^2} \right ).} \end{array}$$ Hence, we get $$V_+ = \omega - 3 \phi_\omega^2 + 2 \phi_\omega^2 \, g'( \phi_\omega^2) + g ( \phi_\omega^2 ) \, \, \, \, \, \text{and} \, \, \, \, \, V_- = \omega - 5 g( \phi_\omega^2) + 2 \phi_\omega^2 g'( \phi_\omega^2) + 6 \, \frac{G ( \phi_\omega^2 )}{\phi_\omega^2}.$$ We easily check that $\partial_x - R= S$, $\partial_x + R = S^*$, $\partial_x^2 
- V_+ = -L_+$ and $\partial_x^2 - V_- = -M_-$. We also check that $S^*S = L_-$ and $SS^* = M_+$. Thus the identity we started with gives $-SL_+S^* = -S^* M_- S$. Composing by $S$ on the left and $S$ on the right, we get $S^2 L_+L_- = M_+M_-S^2$.\ [a]{style="color: white"}\ In what follows, we will denote $a_\omega^- = -5 g(\phi_\omega^2) + 2 \phi_\omega^2 g'(\phi_\omega^2) + 6 \frac{G(\phi_\omega^2)}{\phi_\omega^2}$ and $a_\omega^+ = -g(\phi_\omega^2) + 2 \frac{G(\phi_\omega^2)}{\phi_\omega^2}$ (in order that $M_\pm = - \partial_x^2 + \omega + a_\omega^{\pm}$). These potentials are crucial in our proof. ## Invertibility of $M_-$ In this section we assume that $\text{Ker} (M_-) = \{ 0 \}$. In the next section, Corollary 1 will show that hypotheses $(H_1)$ and $(H_2)$ are sufficient to ensure that this assumption is true. We follow the same reasoning as [@Ma1]. Let $B_1$ and $B_2$ be two solutions of $M_- B_1 = M_- B_2 = 0$ satisfying $$|B_1^{(k)} (x)| \leqslant C_k \omega^{- \frac{1}{4} + \frac{k}{2}} e^{- \sqrt{\omega} x} , \, \, \, \, \, \, | B_2^{(k)} (x) | \leqslant C_k \omega^{- \frac{1}{4} + \frac{k}{2}} e^{\sqrt{\omega} x}$$ for some $C_k > 0$, and $B_1 B_2 ' - B_1 ' B_2 = 1$ on $\mathbb{R}$. These estimates are proved as in Lemma 3. Two such independent solutions exist because $\text{Ker} (M_-) = \{ 0 \}$. For any bounded continuous function $W$, the formula $$J_- [W] (x) := B_1 (x) \int_{- \infty}^x B_2 W + B_2 (x) \int_x^{+ \infty} B_1 W$$ defines a solution to $M_- U = W$. # Non-existence of internal modes As explained in the introduction, we seek hypotheses on $g$ that will ensure that the equation [\[NLS\]](#NLS){reference-type="eqref" reference="NLS"} does not have internal modes. An internal mode is a solution $(X \, , Y \, , \lambda ) \in H^1 ( \mathbb{R})^2 \times \mathbb{C}$ to the following system: $$\left \{ \begin{array}{ccl} L_- X &=& \lambda Y \\ L_+ Y &=& \lambda X.
\end{array} \right.$$ For $\omega$ small enough, let us denote $P_B^{\pm} = - (a_\omega^\pm)' \frac{\Phi_B}{\zeta_B^2}$ and $P_B = \frac{P_B^+ + P_B^-}{2}$. We recall the definition of $\varepsilon_\omega := \sup\limits_{0 \leqslant s \leqslant 3 \omega} |sg''(s)|$. We recall that $\omega$ is always assumed small enough so that $\phi_\omega \leqslant \zeta_\omega \leqslant \sqrt{3 \omega}$. Under the hypothesis $(H_1)$, Taylor's formula gives $$\left | \frac{G ( \phi_\omega^2 )}{\phi_\omega^4} \right | \leqslant \varepsilon_\omega , \, \, \, \, \, \, \, \left | \frac{g( \phi_\omega^2 )}{\phi_\omega^2} \right | \leqslant \varepsilon_\omega , \, \, \, \, \, \, \, \left | g'( \phi_\omega^2 ) \right | \leqslant \varepsilon_\omega , \, \, \, \, \, \, \, \left | \phi_\omega^2 g '' ( \phi_\omega^2 ) \right | \leqslant \varepsilon_\omega.$$ Therefore, using the expressions of $P_B^+$ and $P_B^-$, and also using that $| \phi_\omega ' / \phi_\omega | \leqslant C \sqrt{\omega}$, we see that $$|P_B (x)| \leqslant C \varepsilon_\omega \phi_\omega^2 (x) \left | \frac{\phi_\omega '(x)}{\phi_\omega (x)} \right | \frac{|\Phi_B (x)|}{\zeta_B^2 (x)} \leqslant C \sqrt{\omega} \varepsilon_\omega |x| \phi_\omega^2 (x) \leqslant C \sqrt{\omega} \varepsilon_\omega |x| \omega e^{- \sqrt{\omega} |x|} \leqslant C \varepsilon_\omega \omega e^{- \sqrt{\omega} |x| /10}.$$ From now on, in everything that follows, we assume the hypothesis $(H_1)$ to be satisfied.\ \ The following lemma is a coercivity result about the quadratic form $u \mapsto \int_{\mathbb{R}} P_B u^2$. It is a weaker version of a theorem of Simon; see [@Si]. The proof given here is elementary. This result will be used both in the proof of the spectral result studied here and in the proof of the main theorem later on. **Lemma 7.** Assume that $\displaystyle{\int_{\mathbb{R}} \frac{a_{\omega}^+ + a_{\omega}^-}{2}} > 0$.
For $\omega > 0$ small enough and $B>0$ large enough, for any $u \in H^1 ( \mathbb{R})$, $$\int_{\mathbb{R}} P_B u^2 \geqslant C \gamma_B \varepsilon_\omega \sqrt{\omega} \int_{\mathbb{R}} \rho u^2 - \frac{C \varepsilon_\omega \sqrt{\omega}}{\gamma_B} \int_{\mathbb{R}} (u')^2$$ where $\displaystyle{P_B = \frac{P_B^+ + P_B^-}{2} = - \frac{(a_{\omega}^+ + a_{\omega}^-)'}{2} \, \frac{\Phi_B}{\zeta_B^2}}$ and $\displaystyle{\gamma_B := \int_{\mathbb{R}} \frac{P_B}{\varepsilon_\omega} \in \, ]0 \, , C \sqrt{\omega} [}$.\ \ Setting $P_\infty := - \frac{x(a_\omega^+ + a_\omega^- )'}{2}$ and $\gamma_\infty := \varepsilon_\omega^{-1} \int_{\mathbb{R}} P_\infty$, the same result holds replacing $B$ by $\infty$ everywhere: for $\omega > 0$ small enough and any $u \in H^1 ( \mathbb{R})$, $$\int_{\mathbb{R}} P_\infty u^2 \geqslant C \gamma_\infty \varepsilon_\omega \sqrt{\omega} \int_{\mathbb{R}} \rho u^2 - \frac{C \varepsilon_\omega \sqrt{\omega}}{\gamma_\infty} \int_{\mathbb{R}} (u')^2.$$ *Proof.* We start by writing that, for $x,y \in \mathbb{R}$, $\displaystyle{u^2 (x) = u^2(y) - 2 \int_x^y u'(z) u(z) \, \text{d}z}$. In what follows let us denote $\widetilde{P_B} (y) := \frac{P_B (y)}{C \omega \varepsilon_\omega}$, so that $|\widetilde{P_B} (y)| \leqslant e^{- \sqrt{\omega} |y|/10}$.
We multiply the previous identity by $\widetilde{P_B} (y)$ and integrate in $y$, leading to $$\left ( \int_{\mathbb{R}} \widetilde{P_B} \right ) u^2 (x) = \int_{\mathbb{R}} u^2 \widetilde{P_B} -2 \int_x^{+ \infty} \widetilde{P_B} (y) \int_x^y u'(z) u(z) \, \text{d}z \, \text{d}y + 2 \int_{- \infty}^x \widetilde{P_B} (y) \int_y^x u'(z) u(z) \, \text{d}z \, \text{d}y.$$ We now multiply by $e^{- \sqrt{\omega} |x|/10}$ and integrate in $x$, using $\int_{\mathbb{R}} e^{- \sqrt{\omega} |x|/10} \, \text{d}x = \frac{C}{\sqrt{\omega}}$: $$\begin{array}{rcl} \displaystyle{\left ( \int_{\mathbb{R}} \widetilde{P_B} \right ) \int_{\mathbb{R}} u^2 (x) e^{- \sqrt{\omega} |x|/10} \, \text{d}x} &=& \displaystyle{\frac{C}{\sqrt{\omega}} \int_{\mathbb{R}} u^2 \widetilde{P_B} - 2 \int_{\mathbb{R}} e^{- \sqrt{\omega} |x|/10} \int_x^{+ \infty} \widetilde{P_B} (y) \int_x^y u'(z) u(z) \, \text{d}z \, \text{d}y \, \text{d}x} \\ \\ & & \, \, \, \, \, \, \displaystyle{+ 2 \int_{\mathbb{R}} e^{- \sqrt{\omega} |x|/10} \int_{- \infty}^x \widetilde{P_B} (y) \int_y^x u'(z) u(z) \, \text{d}z \, \text{d}y \, \text{d}x}.
\end{array}$$ By the Fubini theorem, $$\int_{\mathbb{R}} e^{- \sqrt{\omega} |x| /10} \int_x^{+ \infty} \widetilde{P_B} (y) \int_x^y u'(z) u(z) \, \text{d}z \, \text{d}y \, \text{d}x = \int_{\mathbb{R}} \left ( \int_{- \infty}^z e^{- \sqrt{\omega} |x|/10} \, \text{d}x \right ) \left ( \int_z^{+ \infty} \widetilde{P_B} (y) \, \text{d}y \right ) u'(z) u(z) \, \text{d}z.$$ We notice that $$\int_{- \infty}^z e^{- \sqrt{\omega} |x|/10} \, \text{d}x \leqslant \frac{C}{\sqrt{\omega}} \, \, \, \, \text{if $z>0$} \, \, \, \, \, \, \, \, \, \, \, \text{and} \, \, \, \, \, \, \, \, \, \, \, \int_{- \infty}^z e^{- \sqrt{\omega} |x|/10} \, \text{d}x \leqslant \frac{C}{\sqrt{\omega}} e^{- \sqrt{\omega} |z|/10} \, \, \, \, \text{if $z<0$}.$$ Similarly, since $| \widetilde{P_B} (y)| \leqslant e^{- \sqrt{\omega} |y| /10}$, $$\left | \int_z^{+ \infty} \widetilde{P_B} (y) \, \text{d}y \right | \leqslant \frac{C}{\sqrt{\omega}} e^{- \sqrt{\omega} |z|/10} \, \, \, \, \text{if $z>0$} \, \, \, \, \, \, \, \, \, \, \, \text{and} \, \, \, \, \, \, \, \, \, \, \, \left | \int_z^{+ \infty} \widetilde{P_B} (y) \, \text{d}y \right | \leqslant \frac{C}{\sqrt{\omega}} \, \, \, \, \text{if $z<0$}.$$ Thus, for all $z \in \mathbb{R}$, $$\left | \left ( \int_{- \infty}^z e^{- \sqrt{\omega} |x|/10} \, \text{d}x \right ) \left ( \int_z^{+ \infty} \widetilde{P_B} (y) \right ) \right | \leqslant \frac{C}{\omega} e^{- \sqrt{\omega} |z|/10}.$$ By the Cauchy-Schwarz inequality, we get $$\left | \int_{\mathbb{R}} e^{- \sqrt{\omega} |x|/10} \int_x^{+ \infty} \widetilde{P_B} (y) \int_x^y u'(z) u(z) \, \text{d}z \, \text{d}y \, \text{d}x \right | \leqslant \frac{C}{\omega} \left ( \int_{\mathbb{R}} u'(x)^2 e^{- \sqrt{\omega} |x|/10} \, \text{d}x \right )^{1/2} \left ( \int_{\mathbb{R}} u(x)^2 e^{- \sqrt{\omega} |x|/10} \, \text{d}x \right )^{1/2}.$$ Hence, $$\begin{array}{rcl} \displaystyle{\left ( \int_{\mathbb{R}} \widetilde{P_B} \right ) \int_{\mathbb{R}} u(x)^2 e^{- \sqrt{\omega} |x|/10} \, \text{d}x} &
\leqslant & \displaystyle{\frac{C}{\sqrt{\omega}} \int_{\mathbb{R}} u^2 \widetilde{P_B} + \frac{C}{\omega} \left ( \int_{\mathbb{R}} u'(x)^2 e^{- \sqrt{\omega} |x|/10} \, \text{d}x \right )^{1/2} \left ( \int_{\mathbb{R}} u(x)^2 e^{- \sqrt{\omega} |x|/10} \, \text{d}x \right )^{1/2}} \\ \\ & \leqslant & \displaystyle{\frac{C}{\sqrt{\omega}} \int_{\mathbb{R}} u^2 \widetilde{P_B} + \frac{C}{\omega^2 \int_{\mathbb{R}} \widetilde{P_B}} \int_{\mathbb{R}} u'(x)^2 e^{- \sqrt{\omega}|x|/10} \, \text{d}x + \frac{\int_{\mathbb{R}} \widetilde{P_B}}{2} \int_{\mathbb{R}} u(x)^2 e^{- \sqrt{\omega} |x|/10} \, \text{d}x}, \end{array}$$ using Young's inequality in the last line. We finally get that $$\left ( \int_{\mathbb{R}} \widetilde{P_B} \right ) \int_{\mathbb{R}} u(x)^2 e^{- \sqrt{\omega} |x|/10} \, \text{d}x \leqslant \frac{C}{\sqrt{\omega}} \int_{\mathbb{R}} u^2 \widetilde{P_B} + \frac{C}{\omega^2 \int_{\mathbb{R}} \widetilde{P_B}} \int_{\mathbb{R}} u'(x)^2 e^{- \sqrt{\omega}|x|/10} \, \text{d}x.$$ Now recalling the definition of $\widetilde{P_B}$, we see that $\displaystyle{\int_{\mathbb{R}} \widetilde{P_B} = \frac{\gamma_B}{C \omega}}$. Also writing that $e^{- \sqrt{\omega} |x|/10} \leqslant 1$ in the second integral of the right side, and that $e^{- \sqrt{\omega} |x|/10} \geqslant \rho (x) /2$ in the integral of the left side, and multiplying the inequality above by $\varepsilon_\omega \omega^{3/2}$, we obtain the desired inequality. The proof for the analogous result with $B=\infty$ is identical. [a]{style="color: white"}\ Now we prove that hypotheses $(H_1)$ and $(H_2)$ are sufficient to ensure there does not exist an internal mode in our problem. **Proposition 2.** Assume that hypotheses $(H_1)$ and $(H_2)$ hold. Then, for $\omega$ small enough, there does not exist $V,W \in H^1 ( \mathbb{R})$ and $\lambda \in \mathbb{C}$ such that $$\left \{ \begin{array}{ccc} M_- V &=& \lambda W \\ M_+ W &=& \lambda V. \end{array} \right.$$ other than $V=W=0$. 
*Proof.* Note that the hypothesis $(H_2)$ implies that, for any fixed positive constant $K_0>0$, and for $\omega$ small enough (which is the case we will consider in what follows), $$\varepsilon_\omega \gamma_\infty = - \int_{\mathbb{R}} \frac{x (a_\omega^+ + a_\omega^-)'}{2} = \int_{\mathbb{R}} \frac{a_{\omega}^+ + a_{\omega}^-}{2} > K_0 \varepsilon_\omega^2 \sqrt{\omega}.$$ Starting with the system $\left \{ \begin{array}{ccc} M_- V &=& \lambda W \\ M_+ W &=& \lambda V \end{array} \right.$, we multiply the first line by $(2 \Phi_B V' + \Phi_B ' V)$, the second by $(2 \Phi_B W' + \Phi_B ' W)$, we integrate on $\mathbb{R}$ and we sum: $$\begin{array}{rcl} \displaystyle{\int_{\mathbb{R}} ( M_- V)(2 \Phi_B V' + \Phi_B ' V) + \int_{\mathbb{R}} ( M_+ W) (2 \Phi_B W' + \Phi_B ' W)} & = & \displaystyle{\lambda \int_{\mathbb{R}} ((WV'+VW') 2 \Phi_B + 2 \Phi_B ' VW)} \\ \\ &=& \displaystyle{\lambda \left [ 2VW \Phi_B \right ]_{- \infty}^{+ \infty} \, \, = \, \, 0.} \end{array}$$ Now, following virial computations (basically integrating by parts), $$\begin{array}{rcl} \displaystyle{\int_{\mathbb{R}} (M_- V)(2 \Phi_B V' + \Phi_B ' V)} &=& \displaystyle{\int_{\mathbb{R}} -V''(2 \Phi_B V' + \Phi_B ' V) + \omega \underbrace{\int_{\mathbb{R}} V(2 \Phi_B V' + \Phi_B 'V)}_{= \, 0} + \int_{\mathbb{R}} a_\omega^- V (2 \Phi_B V' + \Phi_B ' V)} \\ \\ &=& \displaystyle{\int_{\mathbb{R}} 2 ((\zeta_B V)')^2 + \int_{\mathbb{R}} ( \ln \zeta_B)'' V^2 - \int_{\mathbb{R}} (a_\omega^-)' \Phi_B V^2.} \end{array}$$ Now let $B \to + \infty$. We recall that $V \in H^1 ( \mathbb{R}) \subset L^\infty ( \mathbb{R})$. First, $| ( \ln \zeta_B )'' (x) | \leqslant \frac{C \sqrt{\omega}}{B} \, \mathbbm{1}_{[1,2]} ( \sqrt{\omega} |x| ) \leqslant \frac{C}{B} \rho (x)$, thus $\int_{\mathbb{R}} ( \ln \zeta_B)'' V^2 \longrightarrow 0$ as $B \to + \infty$.
Moreover, since $\Phi_B (x) \longrightarrow x$ as $B \to + \infty$, the dominated convergence theorem shows that $\int_{\mathbb{R}} (a_\omega^-)' \Phi_B V^2 \longrightarrow \int_{\mathbb{R}} x(a_\omega^-)'V^2$ as $B \to + \infty$. Finally, note that $\zeta_B (x) \longrightarrow 1$ as $B \to + \infty$, $| \zeta_B '(x)| \leqslant \frac{C}{B} e^{-|x|/B}$ and $| \zeta_B ''(x)| \leqslant \frac{C}{B^2} e^{-|x| / B} + \frac{C}{B} \theta (x)$ where $\theta$ has compact support that does not depend on $B$. Using these estimates and the dominated convergence theorem, we see that $$\int_{\mathbb{R}} ((\zeta_B V)')^2 = \int_{\mathbb{R}} \zeta_B^2 (V')^2 - \int_{\mathbb{R}} \zeta_B \zeta_B '' V^2 \, \underset{B \to + \infty}{\longrightarrow} \, \int_{\mathbb{R}} (V')^2.$$ Hence, $$\int_{\mathbb{R}} (M_- V)(2 \Phi_B V' + \Phi_B ' V) \, \underset{B \to + \infty}{\longrightarrow} \, \int_{\mathbb{R}} 2 (V')^2 - \int_{\mathbb{R}} x (a_\omega^-)' V^2.$$ We have a similar formula involving $M_+ W$. Combining these two identities, we get $$0 = 2 \int_{\mathbb{R}} ((V')^2 + (W')^2) - \int_{\mathbb{R}} x(a_\omega^-)' V^2 - \int_{\mathbb{R}} x(a_\omega^+)' W^2. \label{sp1}$$ Now, let us take $R_\infty$ a bounded function that we will define later.
Taking the initial system, we multiply the first line by $R_\infty V$ and the second line by $R_\infty W$, before again integrating on $\mathbb{R}$ and taking the difference; this leads to $$\int_{\mathbb{R}} M_- V \cdot R_\infty V - \int_{\mathbb{R}} M_+ W \cdot R_\infty W = \lambda \int_{\mathbb{R}} R_\infty VW - \lambda \int_{\mathbb{R}} R_\infty VW = 0.$$ We compute $$\begin{array}{rcl} \displaystyle{\int_{\mathbb{R}} M_- V \cdot R_\infty V} &=& \displaystyle{\int_{\mathbb{R}} -V'' R_\infty V + \omega \int_{\mathbb{R}} R_\infty V^2 + \int_{\mathbb{R}} a_\omega^- R_\infty V^2} \\ \\ &=& \displaystyle{ \int_{\mathbb{R}} R_\infty (V')^2 - \int_{\mathbb{R}} \frac{R_\infty ''}{2} V^2 + \omega \int_{\mathbb{R}} R_\infty V^2 + \int_{\mathbb{R}} a_\omega^- R_\infty V^2.} \end{array}$$ Here too, we have a similar formula involving $M_+ W$. Taking the difference, we find $$0 = \int_{\mathbb{R}} \left ( \omega R_\infty - \frac{R_\infty ''}{2} \right ) (V^2 - W^2) + \int_{\mathbb{R}} R_\infty ((V')^2 - (W')^2) + \int_{\mathbb{R}} a_\omega^- R_\infty V^2 - \int_{\mathbb{R}} a_\omega^+ R_\infty W^2. \label{sp2}$$ Now summing [\[sp1\]](#sp1){reference-type="eqref" reference="sp1"} and [\[sp2\]](#sp2){reference-type="eqref" reference="sp2"}, we get $$\begin{array}{rcl} 0 &=& \displaystyle{2 \int_{\mathbb{R}} ((V')^2 + (W')^2) + \int_{\mathbb{R}} \left ( -x(a_\omega^-)' + \omega R_\infty - \frac{R_\infty''}{2} \right ) V^2 + \int_{\mathbb{R}} \left ( - x( a_\omega^+)' - \omega R_\infty + \frac{R_\infty''}{2} \right ) W^2} \\ \\ & & \, \, \, \, \displaystyle{+ \, \, \int_{\mathbb{R}} R_\infty ((V')^2 - (W')^2) + \int_{\mathbb{R}} a_\omega^- R_\infty V^2 - \int_{\mathbb{R}} a_\omega^+ R_\infty W^2.} \end{array}$$ Now, let us define $R_\infty$ as the bounded solution of the ordinary differential equation $- \frac{R_\infty''}{2} + \omega R_\infty = D_\infty$ where $D_\infty := - \frac{x(a_\omega^+ - a_\omega^-)'}{2}$.
We finally obtain $$0 = 2 \int_{\mathbb{R}} ((V')^2 + (W')^2) + \int_{\mathbb{R}} P_\infty (V^2 + W^2) + K_{2a} + K_{2b}$$ where $K_{2a} := \int_{\mathbb{R}} R_\infty ((V')^2 - (W')^2)$ and $K_{2b} := \int_{\mathbb{R}} a_\omega^- R_\infty V^2 - \int_{\mathbb{R}} a_\omega^+ R_\infty W^2$.\ \ By Lemma 7, we can assume $\omega$ small enough so that $$\int_{\mathbb{R}} P_\infty V^2 \geqslant C \gamma_\infty \varepsilon_\omega \sqrt{\omega} \int_{\mathbb{R}} \rho V^2 - \frac{C \varepsilon_\omega \sqrt{\omega}}{\gamma_\infty} \int_{\mathbb{R}} (V')^2$$ and that the same inequality holds taking $W$ instead of $V$. Let us now control the error terms $K_{2a}$ and $K_{2b}$.\ \ About $K_{2a}$, we first need to check that $R_\infty$ is indeed bounded and to quantify its size. Indeed, the explicit expression of $R_\infty$ is given by variation of constants: $$R_\infty (x) = \frac{1}{\sqrt{2 \omega}} \left ( \int_{- \infty}^x e^{\sqrt{2 \omega} (y-x)} D_\infty (y) \, \text{d}y + \int_x^{+ \infty} e^{\sqrt{2 \omega} (x-y)} D_\infty (y) \, \text{d}y \right ).$$ Using this expression and the estimate $|D_\infty(x)| \leqslant C \varepsilon_{\omega} \omega^{3/2} |x| e^{- \sqrt{\omega} |x|}$, we show that $|R_\infty| \leqslant \frac{C}{\omega} |D_\infty| \leqslant C \varepsilon_\omega \rho^2$. This leads to $$|K_{2a}| \leqslant C \varepsilon_\omega \int_{\mathbb{R}} ((V')^2 + (W')^2).$$ About $K_{2b}$, we first recall that $|a_\omega^{\pm}| \leqslant \varepsilon_\omega \phi_\omega^2 \leqslant \varepsilon_\omega \omega \rho$.
This and the estimate $||R_\infty||_{\infty} \leqslant C \varepsilon_\omega$ lead to $$|K_{2b}| \leqslant C \varepsilon_\omega^2 \omega \int_{\mathbb{R}} \rho (V^2 + W^2).$$ Putting all this together, we find that $$\begin{array}{rcl} 0 &=& \displaystyle{2 \int_{\mathbb{R}} ((V')^2 + (W')^2) + \int_{\mathbb{R}} P_\infty (V^2 + W^2) + K_{2a} + K_{2b}} \\ \\ & \geqslant & \displaystyle{2 \int_{\mathbb{R}} ((V')^2 + (W')^2) + C \varepsilon_\omega \gamma_\infty \sqrt{\omega} \int_{\mathbb{R}} \rho (V^2 + W^2) - C \varepsilon_\omega \frac{\sqrt{\omega}}{\gamma_\infty} \int_{\mathbb{R}} ((V')^2 + (W')^2)} \\ \\ & & \displaystyle{\, \, \, \, \, \, - \, C \varepsilon_\omega \int_{\mathbb{R}} ((V')^2 + (W')^2) - C \varepsilon_\omega^2 \omega \int_{\mathbb{R}} \rho (V^2 + W^2)} \\ \\ & \geqslant & \displaystyle{\left ( 2 - C \varepsilon_\omega \frac{\sqrt{\omega}}{\gamma_\infty} - C \varepsilon_\omega \right ) \int_{\mathbb{R}} ((V')^2 + (W')^2) + \left ( C \varepsilon_\omega \gamma_\infty \sqrt{\omega} - C \varepsilon_\omega^2 \omega \right ) \int_{\mathbb{R}} \rho (V^2 + W^2).} \end{array}$$ We first see that $2 - C \varepsilon_\omega \frac{\sqrt{\omega}}{\gamma_\infty} - C \varepsilon_\omega \geqslant 2 - \frac{C}{K_0} - C \varepsilon_\omega$. Thus we can assume $\omega$ small enough and $K_0$ large enough such that $2 - \frac{C}{K_0} - C \varepsilon_\omega \geqslant 1$. Note that $K_0$ does not depend on $\omega$. On the other hand, we see that $$C \varepsilon_\omega \gamma_\infty \sqrt{\omega} - C \varepsilon_\omega^2 \omega \geqslant K_0 \varepsilon_\omega^2 \omega - C \varepsilon_\omega^2 \omega = \varepsilon_\omega^2 \omega \left ( K_0 - C \right ).$$ We can assume $\omega$ small enough and $K_0$ large enough (still not depending on $\omega$) such that $K_0 - C \geqslant 1$ for instance. Putting all this together, we get $$0 \geqslant \int_{\mathbb{R}} ((V')^2 + (W')^2) + \varepsilon_\omega^2 \omega \int_{\mathbb{R}} \rho (V^2 + W^2)$$ which leads to $V=W=0$.
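The variation-of-constants representation of $R_\infty$ used above (convolution with the kernel $\frac{1}{\sqrt{2 \omega}} e^{- \sqrt{2 \omega} |x-y|}$, the Green's function of $- \frac{1}{2} \partial_x^2 + \omega$) can be sanity-checked against a closed-form model case: for $\omega = \frac{1}{2}$ and right-hand side $D(y) = e^{-2|y|}$, the bounded solution of $- \frac{R''}{2} + \omega R = D$ is $R(x) = \frac{4}{3} e^{-|x|} - \frac{2}{3} e^{-2|x|}$. A minimal numerical sketch (assuming NumPy and SciPy; this $D$ is an illustrative choice, not the $D_\infty$ of the proof):

```python
import numpy as np
from scipy.integrate import quad

omega = 0.5
s = np.sqrt(2.0 * omega)             # = 1 for this choice of omega
D = lambda y: np.exp(-2.0 * abs(y))  # illustrative right-hand side

def R_formula(x):
    """Variation-of-constants formula for the bounded solution of -R''/2 + omega R = D."""
    i1 = quad(lambda y: np.exp(s * (y - x)) * D(y), -np.inf, x)[0]
    i2 = quad(lambda y: np.exp(s * (x - y)) * D(y), x, np.inf)[0]
    return (i1 + i2) / s

R_exact = lambda x: 4.0 / 3.0 * np.exp(-abs(x)) - 2.0 / 3.0 * np.exp(-2.0 * abs(x))
assert all(abs(R_formula(x) - R_exact(x)) < 1e-6 for x in (-1.3, 0.0, 0.4, 2.0))
```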
[a]{style="color: white"}\ Before concluding the proof of Theorem 1, let us check, as announced in the previous section, that hypotheses $(H_1)$ and $(H_2)$ ensure that $\text{Ker} (M_-) = \{ 0 \}$. **Corollary 1.** Assume that hypotheses $(H_1)$ and $(H_2)$ hold. Then, for $\omega$ small enough, $\text{Ker} (M_-) = \{ 0 \}$. *Proof.* Take $V \in \text{Ker} (M_-)$, $\lambda =0$ and $W=0$. We have $M_- V = \lambda W$ and $M_+ W = \lambda V$, thus Proposition 2 gives $V=0$. [a]{style="color: white"}\ Now we can give the proof of Theorem 1. Let $X,Y \in H^1 ( \mathbb{R})$ and $\lambda \in \mathbb{C}$ be solutions of the system [\[IntM\]](#IntM){reference-type="eqref" reference="IntM"} that we recall here: $$\left \{ \begin{array}{ccl} L_- X &=& \lambda Y \\ L_+ Y &=& \lambda X. \end{array} \right.$$ Thanks to this system we see that $X,Y \in H^6 ( \mathbb{R})$. Then $M_+ M_- S^2 X = S^2 L_+ L_- X = \lambda^2 S^2 X$. Let $V := S^2 X$. First, assume $\lambda \neq 0$. Denoting $W := \lambda^{-1} M_- V$, we have $$\left \{ \begin{array}{ccl} M_- V &=& \lambda W \\ M_+ W &=& \lambda V. \end{array} \right.$$ Therefore we know from Proposition 2 that, provided $\omega$ is small enough, $V=W=0$. As $\text{Ker} (S^2) = \text{span} ( \phi_\omega \, , x \phi_\omega )$, the relation $S^2 X = 0$ gives $X = c_1 \phi_\omega + c_2 x \phi_\omega$. This gives $L_- X = -2 c_2 \phi_\omega '$. Hence, $Y = -2 c_2 \lambda^{-1} \phi_\omega '$. This leads to $L_+ Y = 0$, hence $\lambda X = 0$ i.e. $X=0$, and then $Y = \lambda^{-1} L_- X = 0$.\ \ Now, assume $\lambda = 0$. We have $L_- X = L_+ Y = 0$. Since $\text{Ker} (L_-) = \text{span} ( \phi_\omega )$ and $\text{Ker} (L_+) = \text{span} ( \phi_\omega ' )$, we get $X = c_1 \phi_\omega$ and $Y = c_2 \phi_\omega '$. Conversely, all such pairs are indeed solutions of the system. This completes the proof of Theorem 1. [a]{style="color: white"}\ \ Theorem 1, which is now proved, shows that internal modes do not exist under hypotheses $(H_1)$ and $(H_2)$.
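The kernel identification $\text{Ker} (S^2) = \text{span} ( \phi_\omega \, , x \phi_\omega )$ used in the proof rests on the two facts $S \phi_\omega = 0$ and $S (x \phi_\omega) = \phi_\omega$; since $S^2$ is a second-order operator, these two independent kernel elements span the whole kernel. A minimal symbolic check of the inclusion (a sketch assuming SymPy, with `phi` a generic nonvanishing profile):

```python
import sympy as sp

x = sp.symbols('x', real=True)
phi = sp.Function('phi', positive=True)(x)   # generic nonvanishing profile

S = lambda f: sp.simplify(phi * sp.diff(f / phi, x))

assert S(phi) == 0                         # S kills phi, hence S^2 does too
assert sp.simplify(S(x * phi) - phi) == 0  # S maps x*phi to phi
assert sp.simplify(S(S(x * phi))) == 0     # hence S^2 kills x*phi as well
```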
We can go a little further and show, with the same proof, that *resonances* do not exist under the same hypotheses, in the sense below. See [@Ge] for similar arguments on the Klein-Gordon equation. **Corollary 2.** Assume that hypotheses $(H_1)$ and $(H_2)$ are satisfied and that $\omega$ is small enough. Let $(X \, , Y \, , \lambda)$ be a solution to the system [\[IntM\]](#IntM){reference-type="eqref" reference="IntM"}. Assume that $X,Y$ belong to $L^\infty$ and that $X',Y'$ belong to $H^1$. Such a solution is called a *resonance*. Then, either $X=Y=0$; or $\lambda = 0$, $X \in \text{span} ( \phi_\omega )$ and $Y \in \text{span} ( \phi_\omega ')$. *Proof.* In Proposition 2, one can assume $V$ and $W$ to be $L^\infty$ with derivatives in $L^2$; the result remains true. Indeed, the integrals $\int_{\mathbb{R}} (V')^2$ and $\int_{\mathbb{R}} (W')^2$ still make sense, and so do the other integrals since $V^2$ and $W^2$ are always integrated after multiplication by an appropriate weight. For instance, the virial computations hold thanks to the presence of $\zeta_B$ and $\Phi_B$; and the integrals $\int_{\mathbb{R}} P_\infty V^2$, $\int_{\mathbb{R}} R_\infty V^2$ or $\int_{\mathbb{R}} \rho V^2$ exist since $P_\infty$, $R_\infty$ and $\rho$ are $L^1$ while $V^2$ (and $W^2$) are $L^\infty$. Hence, Proposition 2 remains true after this change.\ \ Now, take $(X \, , Y \, , \lambda)$ a resonance in our problem. As in the proof of Theorem 1, assume first that $\lambda \neq 0$ and let $V := S^2X$ and $W := \lambda^{-1} M_- V$. We can compute $$S^2 = \partial_x^2 - 2 \frac{\phi_\omega '}{\phi_\omega} \cdot \partial_x + \omega - g(\phi_\omega^2) + 2 \frac{G(\phi_\omega^2)}{\phi_\omega^2}.$$ We know that $X' \in H^1 \subset L^\infty$, thus $V=S^2 X \in L^\infty$.
Besides, differentiating the relation $\lambda Y = L_- X$ we see that $X''' \in L^2$, which shows that $$V' = X''' - 2 \frac{\phi_\omega '}{\phi_\omega} X '' - 2 \left ( \frac{\phi_\omega '}{\phi_\omega} \right ) ' X' + \left ( \omega - g(\phi_\omega^2) + 2 \frac{G(\phi_\omega^2)}{\phi_\omega^2} \right ) X' + \left ( - g(\phi_\omega^2) + 2 \frac{G(\phi_\omega^2)}{\phi_\omega^2} \right ) ' X \in L^2.$$ Similarly, we show that $W \in L^\infty$ and $W' \in L^2$. Now, thanks to the new version of Proposition 2, we obtain $V=W=0$. The relation $S^2X=0$ is nothing but a second order ordinary differential equation, therefore here too we find $X=c_1 \phi_\omega + c_2 x \phi_\omega$, then $Y = -2c_2 \lambda^{-1} \phi_\omega '$ and finally $X=Y=0$.\ \ Now assume $\lambda = 0$. We have $L_- X= L_+ Y=0$ but this time $X,Y$ are not assumed to be in $H^1$. However, $L_+ Y = 0$ leads to $Y \in \text{span} ( \phi_\omega ' \, , A_\omega )$ where $A_\omega$ is defined just before Lemma 3. Since $Y$ and $\phi_\omega '$ are bounded while $A_\omega$ is not, we get $Y \in \text{span} ( \phi_\omega ')$. The same argument holds for $X$ and we find that $X \in \text{span} ( \phi_\omega )$. This completes the proof of Corollary 2. # Asymptotic stability ## Modulation decomposition We fix an initial data $\psi_0 \in H^1 ( \mathbb{R})$ close to $\phi_{\omega_0}$. By the orbital stability property we know that the global solution $\psi$ of [\[NLS\]](#NLS){reference-type="eqref" reference="NLS"} remains close to the family of solitary waves for all time.
It is standard to decompose $\psi$ as $$\psi (t \, , y) = e^{i ( \beta (t) (y- \sigma (t)) + \gamma (t))} \left [ \phi_{\omega (t)} (y - \sigma (t)) + u(t \, , y - \sigma (t)) \right ]$$ where the functions $\beta$, $\sigma$, $\gamma$ and $\omega$ are of class $\mathscr{C}^1$ (as functions of time) and uniquely fixed so that, for all $t \geqslant 0$, the following orthogonality relations hold: $$\langle u \, , \phi_\omega \rangle = \langle u \, , x \phi_\omega \rangle = \langle u \, , i \Lambda_\omega \rangle = \langle u \, , i \phi_\omega ' \rangle = 0.$$ This choice of orthogonality relations is known to lead to the following inequality, satisfied for all $t \geqslant 0$, $$\frac{| \dot{\beta} |}{\sqrt{\omega}} + \frac{| \dot{\omega} |}{\omega} + \sqrt{\omega} | \dot{\sigma} - 2 \beta | + | \dot{\gamma} - \omega - \beta^2| \leqslant C \sqrt{\omega} \left \| \text{sech}( \sqrt{ \omega} x/2) u \right \|^2 \leqslant C \sqrt{\omega} || \rho^2 u ||^2. \label{orth}$$ See [@We2]. Furthermore, the orbital stability result can be written as follows: for $\epsilon$ small and for all $t \geqslant 0$, $$|| \partial_x u || + ||u|| + | \beta | + | \omega - \omega_0 | \leqslant \epsilon \label{orbstab}$$ for $\psi_0$ taken sufficiently close to $\phi_{\omega_0}$.\ \ Write $u = u_1 + i u_2$. The equation [\[NLS\]](#NLS){reference-type="eqref" reference="NLS"} satisfied by $\psi$ leads to the following system satisfied by $(u_1 \, , u_2)$: $$\left \{ \begin{array}{ccl} \partial_t u_1 &=& L_- u_2 + \theta_2 + m_2 - q_2 \\ \partial_t u_2 &=& - L_+ u_1 - \theta_1 - m_1 + q_1 \end{array} \right. 
\label{Su}$$ where $$\begin{array}{l} \theta_1 \, = \, \dot{\beta} x \phi_\omega + ( \dot{\gamma} - \omega - \beta^2 ) \phi_\omega - \beta ( \dot{\sigma} - 2 \beta ) \phi_\omega, \\ \\ \theta_2 \, = \, - \frac{\dot{\omega}}{\omega} \Lambda_\omega + ( \dot{\sigma} - 2 \beta ) \phi_\omega ', \\ \\ m_1 \, = \, \dot{\beta} x u_1 + ( \dot{\gamma} - \omega - \beta^2 ) u_1 - ( \dot{\sigma} - 2 \beta ) \partial_x u_2 - \beta ( \dot{\sigma} - 2 \beta ) u_1, \\ \\ m_2 \, = \, \dot{\beta} x u_2 + ( \dot{\gamma} - \omega - \beta^2 ) u_2 + ( \dot{\sigma} - 2 \beta ) \partial_x u_1 - \beta ( \dot{\sigma} - 2 \beta ) u_2, \\ \\ q_1 \, = \, \text{Re} \left [ h(\phi_\omega + u) - h ( \phi_\omega ) - h' ( \phi_\omega ) u \right ], \\ \\ q_2 \, = \, \text{Im} \left [ h ( \phi_\omega + u ) - \frac{h ( \phi_\omega )}{\phi_\omega} \, u \right ] \end{array}$$ where $h(r) = |r|^2 r - g(|r|^2)r$. ## First virial estimate Since $| \omega - \omega_0 | \leqslant \epsilon$, we get, for $\epsilon < \frac{\omega_0}{2}$, that $\frac{\omega_0}{2} \leqslant \omega \leqslant \frac{3 \omega_0}{2}$. This enables us to control $\phi_\omega$, $\Lambda_\omega$ and their derivatives by powers of $\rho$. More precisely, for instance, $\phi_\omega \leqslant C \sqrt{\omega} \, \rho^N$, $| \phi_\omega ' | \leqslant C \omega \rho^N$, $| \Lambda_\omega | \leqslant C \sqrt{\omega} \, \rho^N$ and $| \Lambda_\omega ' | \leqslant C \omega \rho^N$ for any $N \in [\![ 0 \, , 7 ]\!]$. **Proposition 3.** There exists $C>0$ such that, for $\epsilon$ small enough and any $T \geqslant 0$, $$\int_0^T \left ( || \eta_A \partial_x u ||^2 + \frac{1}{A^2} ||\eta_A u||^2 \right ) \, \text{d}t \leqslant C \epsilon + C \omega_0 \int_0^T || \rho^2 u ||^2 \, \text{d}t.$$ *Proof.* We will use a virial argument.
Let $w = \zeta_A u$ and $$\mathcal{I} = \int_{\mathbb{R}} u_1 \left ( 2 \Phi_A \partial_x u_2 + \Phi_A ' u_2 \right ).$$ From the equation [\[Su\]](#Su){reference-type="eqref" reference="Su"} and noticing that $\int_{\mathbb{R}} (2 \Phi_A \partial_x u_1 + \Phi_A ' u_1) u_1 = \int_{\mathbb{R}} (2 \Phi_A \partial_x u_2 + \Phi_A ' u_2 ) u_2 = 0$ (by integration by parts), we get that $$\begin{array}{rcl} \dot{\mathcal{I}} &=& \displaystyle{- \int_{\mathbb{R}} (2 \Phi_A \partial_x u_1 + \Phi_A ' u_1) \partial_x^2 u_1 - \int_{\mathbb{R}} (2 \Phi_A \partial_x u_2 + \Phi_A ' u_2 ) \partial_x^2 u_2} \\ \\ & & \, \, \, \displaystyle{+ \int_{\mathbb{R}} (2 \Phi_A \partial_x u_1 + \Phi_A ' u_1) ( \theta_1 + m_1 ) + \int_{\mathbb{R}} (2 \Phi_A \partial_x u_2 + \Phi_A ' u_2 ) ( \theta_2 + m_2 )} \\ \\ & & \, \, \, \displaystyle{- \, \, \text{Re} \left [ \int_{\mathbb{R}} \left ( 2 \Phi_A \partial_x \overline{u} + \Phi_A ' \overline{u} \right ) \left ( h( \phi_\omega + u) - h ( \phi_\omega ) \right ) \right ].} \end{array}$$ Integrating by parts, we get that, for $k \in \{ 1 \, , 2 \}$, $$- \int_{\mathbb{R}} \left ( 2 \Phi_A \partial_x u_k + \Phi_A ' u_k \right ) \partial_x^2 u_k = 2 \int_{\mathbb{R}} ( \partial_x w_k)^2 + \int_{\mathbb{R}} ( \ln \zeta_A )'' w_k^2$$ where, after computations, $(\ln \zeta_A )'' = - \frac{|x|}{A} \left ( 1 - \chi ( \sqrt{\omega_0} x) \right ) \mathbbm{1}_{[1,2]} (\sqrt{\omega_0} x)$. 
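The identity above can be verified directly; the following sketch uses only $w_k = \zeta_A u_k$ and $\Phi_A ' = \zeta_A^2$ (all integrals over $\mathbb{R}$):

```latex
% Verification sketch of the virial identity.
% Two integrations by parts give
\begin{align*}
- \int 2 \Phi_A (\partial_x u_k)(\partial_x^2 u_k)
  &= - \int \Phi_A \, \partial_x \!\left( (\partial_x u_k)^2 \right)
   = \int \Phi_A ' (\partial_x u_k)^2, \\
- \int \Phi_A ' u_k \, \partial_x^2 u_k
  &= \int \Phi_A '' u_k \, \partial_x u_k + \int \Phi_A ' (\partial_x u_k)^2
   = - \frac{1}{2} \int \Phi_A ''' u_k^2 + \int \Phi_A ' (\partial_x u_k)^2.
\end{align*}
% Since \partial_x w_k = \zeta_A ' u_k + \zeta_A \partial_x u_k, one more
% integration by parts yields
\begin{align*}
\int (\partial_x w_k)^2
  = \int \zeta_A^2 (\partial_x u_k)^2 - \int \zeta_A \zeta_A '' u_k^2 .
\end{align*}
% Combining the three lines with \Phi_A ' = \zeta_A^2, so that
% \Phi_A ''' = 2 (\zeta_A ')^2 + 2 \zeta_A \zeta_A '', gives
\begin{align*}
2 \int \Phi_A ' (\partial_x u_k)^2 - \frac{1}{2} \int \Phi_A ''' u_k^2
  = 2 \int (\partial_x w_k)^2
    + \int \left( \zeta_A \zeta_A '' - (\zeta_A ')^2 \right) u_k^2
  = 2 \int (\partial_x w_k)^2 + \int (\ln \zeta_A)'' w_k^2 ,
\end{align*}
% since (\ln \zeta_A)'' \zeta_A^2 = \zeta_A \zeta_A '' - (\zeta_A ')^2
% and w_k = \zeta_A u_k.
```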
We see that $$| ( \ln \zeta_A )'' (x) | \leqslant \frac{C \sqrt{\omega_0}}{A} \, \mathbbm{1}_{[1,2]} ( \sqrt{\omega_0} |x| ) \leqslant \frac{C \sqrt{\omega_0}}{A} \rho^4 (x).$$ Thus, the first part of $\dot{\mathcal{I}}$ is controlled as follows: $$- \int_{\mathbb{R}} \left ( 2 \Phi_A \partial_x u_k + \Phi_A ' u_k \right ) \partial_x^2 u_k \geqslant 2 \int_{\mathbb{R}} ( \partial_x w_k )^2 - \frac{C \sqrt{\omega_0}}{A} ||\rho^2 w_k||^2.$$ Now, about the second term in $\dot{\mathcal{I}}$, we notice that, denoting $H(r)=\frac{|r|^4}{4} - \frac{G(|r|^2)}{2}$, $$\partial_x \text{Re} \left [ H( \phi_\omega + u) - H ( \phi_\omega ) - h( \phi_\omega ) u \right ] = \text{Re} \left [ ( \partial_x \overline{u} ) \left ( h ( \phi_\omega + u) - h ( \phi_\omega ) \right ) \right ] + \text{Re} \left [ \phi_\omega ' \left ( h ( \phi_\omega + u) - h ( \phi_\omega ) - h ' ( \phi_\omega ) u \right ) \right ].$$ Now integrating by parts, we decompose $$- \text{Re} \left [ \int_{\mathbb{R}} \left ( 2 \Phi_A \partial_x \overline{u} + \Phi_A ' \overline{u} \right ) \left ( h( \phi_\omega + u) - h ( \phi_\omega ) \right ) \right ] = I_1 + I_2 + I_3$$ with $$\begin{array}{rl} & \displaystyle{I_1 = 2 \int_{\mathbb{R}} \Phi_A ' \, \text{Re} \left [ H ( \phi_\omega + u ) - H ( \phi_\omega ) - h( \phi_\omega )u \right ],} \\ \\ & \displaystyle{I_2 = 2 \int_{\mathbb{R}} \Phi_A \, \text{Re} \left [ \phi_\omega ' \left ( h ( \phi_\omega + u ) - h ( \phi_\omega ) - h ' ( \phi_\omega ) u \right ) \right ],} \\ \\ \text{and} & \displaystyle{I_3 = - \int_{\mathbb{R}} \Phi_A ' \, \text{Re} \left [ \overline{u} \left ( h ( \phi_\omega + u ) - h ( \phi_\omega ) \right ) \right ].} \end{array}$$ We recall that $\Phi_A ' = \zeta_A^2$. We note that $0 < \Phi_A ' \leqslant 1$ and $| \Phi_A (x) | \leqslant |x|$ on $\mathbb{R}$. 
Therefore, $$|\Phi_A (x) \phi_\omega (x) | \leqslant \sqrt{\omega} |x| \text{sech}( \sqrt{\omega} x ) \leqslant C \rho^4 (x).$$ Now, about $I_1$, using the definitions of $H$ and $h$ and expanding $|\phi_\omega + u|^4$ we compute that $$\begin{array}{rcl} \text{Re} \left [ H ( \phi_\omega + u ) - H ( \phi_\omega ) - h( \phi_\omega )u \right ] &=& \displaystyle{\frac{|u|^4}{4} + \phi_\omega^2 \text{Re} (u)^2 + \frac{\phi_\omega^2 |u|^2}{2} + \frac{\phi_\omega |u|^2 \text{Re} (u)}{2}} \\ \\ & & \displaystyle{\, \, \, \, \, - \frac{G ( |\phi_\omega + u|^2 )}{2} - \frac{| \phi_\omega |^4}{4} + \frac{G ( \phi_\omega^2 )}{2} + \phi_\omega g ( \phi_\omega^2 ) \text{Re} (u).} \end{array}$$ Now, $G$ is real-valued and we can write Taylor's expansion: $$G ( | \phi_\omega + u|^2 ) = G ( \phi_\omega^2 + |u|^2 + 2 \phi_\omega \text{Re} (u)) = G ( \phi_\omega^2) + \left ( |u|^2 + 2 \phi_\omega \text{Re}(u) \right ) g ( \phi_\omega^2 ) + \int_{\phi_\omega^2}^{| \phi_\omega + u|^2} \left ( | \phi_\omega +u|^2 - t \right ) g'(t) \, \text{d}t$$ where $\displaystyle{\left | \int_{\phi_\omega^2}^{| \phi_\omega + u|^2} \left ( | \phi_\omega +u|^2 - t \right ) g'(t) \, \text{d}t \right |} \leqslant C \left | |\phi_\omega + u|^2 - \phi_\omega^2 \right |^2 \leqslant C |u|^4 + C \phi_\omega |u|^2 |\text{Re} (u)| + C \phi_\omega^2 | \text{Re} (u)|^2$. Putting these estimates together and using the inequalities $| \text{Re} (u) | \leqslant |u|$ and $\phi_\omega |u|^3 = ( \phi_\omega |u|) (|u|^2) \leqslant \frac{\phi_\omega^2 |u|^2}{2} + \frac{|u|^4}{2}$, we ultimately find that $$|I_1| \leqslant C \int_{\mathbb{R}} \Phi_A ' ( \phi_\omega^2 |u|^2 + |u|^4 ) = C \int_{\mathbb{R}} \zeta_A^2 ( \phi_\omega^2 |u|^2 + |u|^4 ) \leqslant C \omega_0 \int_{\mathbb{R}} \rho^4 |u|^2 + C \int_{\mathbb{R}} \zeta_A^2 |u|^4,$$ using $\phi_\omega \leqslant C \sqrt{\omega} \, \rho^2 \leqslant C \sqrt{\omega_0} \, \rho^2$ and $\zeta_A^2 \leqslant 1$ to control the first term.
The control of the third term is similar, writing $$h ( \phi_\omega + u) - h ( \phi_\omega) = |u|^2 \phi_\omega + |u|^2 u + \phi_\omega^2 u + 2 \phi_\omega^2 \text{Re} (u) + 2 \phi_\omega u \text{Re} (u) - u g ( |\phi_\omega + u|^2 ) - \phi_\omega \left ( g (|\phi_\omega + u|^2 ) - g ( \phi_\omega^2 ) \right ).$$ Using the inequalities $|u g ( |\phi_\omega +u|^2)| \leqslant C |u| \, |\phi_\omega + u|^2 \leqslant C|u| \phi_\omega^2 + C |u|^3$ and $|g(|\phi_\omega +u|^2) - g(\phi_\omega^2)| \leqslant C \left | |\phi_\omega + u|^2 - \phi_\omega^2 \right | \leqslant C |u|^2 + 2C \phi_\omega |\text{Re} (u)|$, we find that $\left | \text{Re} \left [ \overline{u} \left ( h ( \phi_\omega + u ) - h ( \phi_\omega ) \right ) \right ] \right | \leqslant C ( \phi_\omega^2 |u|^2 + |u|^4 )$ and then we can control $I_3$ the same way we controlled $I_1$: $$|I_3| \leqslant C \omega_0 \int_{\mathbb{R}} \rho^4 |u|^2 + C \int_{\mathbb{R}} \zeta_A^2 |u|^4.$$ About $I_2$, we compute that $$\begin{array}{rcl} h(\phi_\omega + u) - h ( \phi_\omega) - h'(\phi_\omega ) u &=& \phi_\omega |u|^2 + |u|^2 u + 2 \phi_\omega u \text{Re} (u) - (\phi_\omega + u) g(\phi_\omega^2 + |u|^2 + 2 \phi_\omega \text{Re}(u) ) \\ \\ & & \, \, \, \, + (\phi_\omega + u) g(\phi_\omega^2) + 2 \phi_\omega^2 u g'(\phi_\omega^2).
\end{array}$$ Using Taylor's expansion formula, we write that $$g ( \phi_\omega^2 + |u|^2 + 2 \phi_\omega \text{Re} (u)) = g ( \phi_\omega^2 ) + (|u|^2 + 2 \phi_\omega \text{Re}(u)) g'(\phi_\omega^2) + \underbrace{\int_{\phi_\omega^2}^{\phi_\omega^2 + |u|^2 + 2 \phi_\omega \text{Re}(u)} (\phi_\omega^2 + |u|^2 + 2 \phi_\omega \text{Re}(u) - s) g''(s) \, \text{d}s}_{=: \, \text{IR}}$$ where we control the integral term IR as follows, recalling that $g''(s) = \mathcal{O} (1/s)$ since $(H_1)$ holds, $$\begin{array}{rcl} \displaystyle{\left | \int_{\phi_\omega^2}^{\phi_\omega^2 + |u|^2 + 2 \phi_\omega \text{Re}(u)} (\phi_\omega^2 + |u|^2 + 2 \phi_\omega \text{Re}(u) - s) g''(s) \, \text{d}s \right |} & \leqslant & \displaystyle{\left | |u|^2 + 2 \phi_\omega \text{Re} (u) \right | \, \left | \int_{\phi_\omega^2}^{\phi_\omega^2 + |u|^2 + 2 \phi_\omega \text{Re}(u)} \frac{C \, \text{d}s}{s} \right |} \\ \\ & \leqslant & \displaystyle{\left | |u|^2 + 2 \phi_\omega \text{Re} (u) \right | \, \left | \ln \left ( 1 + \frac{|u|^2}{\phi_\omega^2} + \frac{2 \text{Re} (u)}{\phi_\omega} \right ) \right |.} \end{array}$$ The function $\ln (1 + \cdot )$ is $1$-Lipschitz on $[0 \, , + \infty )$ and, more generally, $C$-Lipschitz on $[-1/2 \, , + \infty )$. We separate two cases. First, assume that $\left | \frac{u}{\phi_\omega} \right | \leqslant \frac{1}{4}$.
Then $\frac{|u|^2}{\phi_\omega^2} + \frac{2 \text{Re}(u)}{\phi_\omega} \geqslant - \frac{1}{2}$ and we have $$\begin{array}{rcl} \displaystyle{\text{IR}} & \leqslant & \displaystyle{C \left | |u|^2 + 2 \phi_\omega \text{Re} (u) \right | \, \left | \frac{|u|^2}{\phi_\omega^2} + \frac{2 \text{Re} (u)}{\phi_\omega} \right | \, \, \leqslant \, \, \frac{C}{\phi_\omega^2} \left | |u|^2 + 2 \phi_\omega \text{Re} (u) \right |^2} \\ \\ & \leqslant & \displaystyle{\frac{C}{\phi_\omega^2} \left ( |u|^4 + \phi_\omega^2 |u|^2 \right ) \, \, \leqslant \, \, C |u|^2} \end{array}$$ recalling, for the last inequality, that $|u / \phi_\omega | \leqslant C$. This gives $$\begin{array}{rl} & \left | - (\phi_\omega + u) g(\phi_\omega^2 + |u|^2 + 2 \phi_\omega \text{Re}(u) ) + (\phi_\omega + u) g(\phi_\omega^2) + 2 \phi_\omega^2 u g'(\phi_\omega^2) \right | \\ \\ = & \left | - |u|^2 ( \phi_\omega + \text{Re}(u)) g'(\phi_\omega^2) - 2 \phi_\omega \text{Re} (u)^2 g'(\phi_\omega^2) + \text{IR} \cdot (\phi_\omega + \text{Re} (u)) \right | \\ \\ \leqslant & C ( |u|^2 \phi_\omega + |u|^3 ) + ( \phi_\omega + |u| ) |\text{IR}| \\ \\ \leqslant & C ( |u|^2 \phi_\omega + |u|^3 ). \end{array}$$ This leads to $\left | \text{Re} \left ( h(\phi_\omega + u) - h ( \phi_\omega ) - h'(\phi_\omega ) u \right ) \right | \leqslant C ( \phi_\omega |u|^2 + |u|^3 )$.\ \ Now, assume that $\left | \frac{u}{\phi_\omega} \right | > \frac{1}{4}$. We have $\phi_\omega \leqslant C |u|$ and everything is easier. 
Using $|g(s)| \leqslant Cs$ and $|g'(s)| \leqslant C$, we see that $\left | \text{Re} \left ( h(\phi_\omega + u) - h ( \phi_\omega ) - h'(\phi_\omega ) u \right ) \right | \leqslant C ( \phi_\omega |u|^2 + |u|^3 )$ in this case too.\ \ Hence, in either case, the above inequality holds and thus $$|I_2| \leqslant C \int_{\mathbb{R}} | \Phi_A \phi_\omega ' | ( \phi_\omega |u|^2 + |u|^3 ) \leqslant C \omega_0 \int_{\mathbb{R}} \rho^4 |u|^2,$$ using the inequalities $|\Phi_A \phi_\omega '| \leqslant |x \phi_\omega '| \leqslant C \sqrt{\omega} \rho^4$, $\phi_\omega \leqslant C \sqrt{\omega_0}$ and $|u| \leqslant C \omega_0 \leqslant C \sqrt{\omega_0}$. The bound $|u| \leqslant C \omega_0$ follows from Sobolev embedding. Indeed, by the orbital stability property, we have $||u||_{H^1 ( \mathbb{R})} \leqslant C \epsilon$ and thus, by Sobolev embedding, $||u||_{L^\infty} \leqslant C ||u||_{H^1 ( \mathbb{R})} \leqslant C \epsilon \leqslant C \omega_0$.\ \ Now, we put the estimates on $I_1$, $I_2$ and $I_3$ together and we use the following inequality (see [@Ma2] or [@Ma1]): $$\int_{\mathbb{R}} \zeta_A^2 |u|^4 \leqslant CA^2 ||u||_{L^\infty}^2 \int_{\mathbb{R}} | \partial_x w|^2 \leqslant CA^2 \epsilon^2 \int_{\mathbb{R}} | \partial_x w|^2.$$ We then obtain that $$|I_1| + |I_2| + |I_3| \leqslant C \omega_0 \int_{\mathbb{R}} \rho^4 |u|^2 + CA^2 \epsilon^2 \int_{\mathbb{R}} | \partial_x w |^2.$$ Now, we integrate by parts to see that, for $k \in \{ 1 \, , 2 \}$, $$\left | \int_{\mathbb{R}} (2 \Phi_A \partial_x u_k + \Phi_A ' u_k ) \theta_k \right | = \left | \int_{\mathbb{R}} u_k ( 2 \Phi_A \partial_x \theta_k + \Phi_A ' \theta_k ) \right | \leqslant C ||u||_{L^\infty} \int_{\mathbb{R}} ( |x| \, |\partial_x \theta_k| + |\theta_k| ),$$ using that $|\Phi_A (x)| \leqslant |x|$ and $|\Phi_A'| \leqslant 1$.
Now, recalling the expressions of $\theta_k$, we see that $\partial_x \theta_1 = \dot{\beta} \phi_\omega + \dot{\beta} x \phi_\omega ' + ( \dot{\gamma} - \omega - \beta^2) \phi_\omega ' - \beta ( \dot{\sigma} - 2 \beta ) \phi_\omega '$ and $\partial_x \theta_2 = - \frac{\dot{\omega}}{\omega} \Lambda_\omega ' + ( \dot{\sigma} - 2 \beta ) \phi_\omega ''$. Using that all of the functions $\phi_\omega$, $x \phi_\omega '$, $\phi_\omega '$, $\phi_\omega ''$ and $\Lambda_\omega '$ are bounded (by $C$, independent of $\omega$ and $\epsilon$), we see that $$\int_{\mathbb{R}} (|x| \, |\partial_x \theta_k| + | \theta_k | ) \leqslant C ||\rho^2 u||^2$$ using [\[orth\]](#orth){reference-type="eqref" reference="orth"} and the fact that $\beta$ is bounded. Thus we get $$\left | \int_{\mathbb{R}} (2 \Phi_A \partial_x u_k + \Phi_A ' u_k ) \theta_k \right | \leqslant C \epsilon ||\rho^2 u||^2 \leqslant C \omega_0 ||\rho^2 u||^2.$$ The last terms remaining in the expression of $\dot{\mathcal{I}}$ are $\displaystyle{\int_{\mathbb{R}} (2 \Phi_A \partial_x u_k + \Phi_A ' u_k)m_k}$. 
Integrating by parts using the expression of $m_1$ (the contributions involving $\Phi_A ' x u_1^2$ cancel), we get $$- \int_{\mathbb{R}} (2 \Phi_A \partial_x u_1 + \Phi_A ' u_1) m_1 = \dot{\beta} \int_{\mathbb{R}} \Phi_A u_1^2 + ( \dot{\sigma} - 2 \beta ) \int_{\mathbb{R}} (2 \Phi_A \partial_x u_1 + \Phi_A ' u_1 ) \partial_x u_2.$$ Combining this identity with the corresponding identity for $\int_{\mathbb{R}} ( 2 \Phi_A \partial_x u_2 + \Phi_A ' u_2 )m_2$, we get $$- \int_{\mathbb{R}} (2 \Phi_A \partial_x u_1 + \Phi_A ' u_1) m_1 - \int_{\mathbb{R}} ( 2 \Phi_A \partial_x u_2 + \Phi_A ' u_2 )m_2 = \dot{\beta} \int_{\mathbb{R}} \Phi_A |u|^2 + ( \dot{\sigma} - 2 \beta ) \int_{\mathbb{R}} \Phi_A ' ( u_2 \partial_x u_1 - u_1 \partial_x u_2 ).$$ Therefore, using the upper bounds $||\Phi_A||_{L^\infty} \leqslant C A$, $|\Phi_A '| \leqslant 1$, $||u||, || \partial_x u|| \leqslant C \epsilon$, [\[orth\]](#orth){reference-type="eqref" reference="orth"} and the fact that $A > \frac{1}{\sqrt{\omega_0}}$, we find that $$\left | \int_{\mathbb{R}} (2 \Phi_A \partial_x u_1 + \Phi_A ' u_1) m_1 + \int_{\mathbb{R}} ( 2 \Phi_A \partial_x u_2 + \Phi_A ' u_2 )m_2 \right | \leqslant C A \epsilon^2 \sqrt{\omega_0} ||\rho^2 u||^2.$$ Putting all these estimates together, noticing that $|| \rho^2 w || \leqslant || \rho^2 u||$ and taking $\epsilon$ small enough so that $C A^2 \epsilon^2 \leqslant \frac{1}{2}$ (which also implies that $CA \epsilon^2 \sqrt{\omega_0} \leqslant CA^2 \epsilon^2 \frac{\sqrt{\omega_0}}{A} \leqslant \frac{\sqrt{\omega_0}}{2A} \leqslant \frac{\omega_0}{2}$), we get that $$\dot{\mathcal{I}} \geqslant \left ( 2 - CA^2 \epsilon^2 \right ) \int_{\mathbb{R}} | \partial_x w|^2 - C \left ( \omega_0 + \frac{\sqrt{\omega_0}}{A} + A \epsilon^2 \sqrt{\omega_0} \right ) ||\rho^2 u||^2 \geqslant \int_{\mathbb{R}} | \partial_x w|^2 - C \omega_0 || \rho^2 u ||^2.$$ This being established, we can conclude the proof.
For any $T \geqslant 0$, the above estimates for $\Phi_A$ and [\[orbstab\]](#orbstab){reference-type="eqref" reference="orbstab"} give, by definition of $\mathcal{I}$, $$| \mathcal{I}(T) | \leqslant C ( || \Phi_A ||_{L^\infty} + || \Phi_A ' ||_{L^\infty} ) ||u(T)||_{H^1 ( \mathbb{R})}^2 \leqslant CA \epsilon^2 \leqslant C \epsilon$$ provided we take $\epsilon$ small enough (which we assume from now on). Integrating on $[0 \, , T]$ the inequality satisfied by $\dot{\mathcal{I}}$, we get $$\int_0^T \int_{\mathbb{R}} | \partial_x w |^2 \leqslant \underbrace{\int_0^T \dot{\mathcal{I}}}_{\leqslant \, | \mathcal{I} (T)| + | \mathcal{I} (0)|} + \, \, C \omega_0 \int_0^T || \rho^2 u ||^2 \leqslant C \epsilon + C \omega_0 \int_0^T ||\rho^2 u||^2.$$ Now recall the following inequality from [@Ma2] or [@Ma1]: $$\int_{\mathbb{R}} \eta_A |w|^2 \leqslant CA^2 \int_{\mathbb{R}} | \partial_x w|^2 + CA \sqrt{\omega_0} \int_{\mathbb{R}} \rho^4 |w|^2,$$ which implies $$\frac{1}{A^2} \int_0^T \int_{\mathbb{R}} \eta_A^2 |u|^2 \leqslant C \epsilon + C \omega_0 \int_0^T || \rho^2 u ||^2$$ using $\eta_A \leqslant C \zeta_A^2$ and $1/A < \sqrt{\omega_0}$.
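To spell out the last step, the chain of inequalities behind this bound (using only $\eta_A \leqslant C \zeta_A^2$, $\zeta_A \leqslant 1$ and $1/A < \sqrt{\omega_0}$, as stated in the text) can be sketched as:

```latex
% Sketch: from the recalled inequality to the weighted bound on u.
\begin{align*}
\frac{1}{A^2} \int_{\mathbb{R}} \eta_A^2 |u|^2
  &\leqslant \frac{C}{A^2} \int_{\mathbb{R}} \eta_A \, \zeta_A^2 |u|^2
   = \frac{C}{A^2} \int_{\mathbb{R}} \eta_A |w|^2
   && \text{($\eta_A \leqslant C \zeta_A^2$ and $w = \zeta_A u$)} \\
  &\leqslant C \int_{\mathbb{R}} | \partial_x w|^2
     + \frac{C \sqrt{\omega_0}}{A} \int_{\mathbb{R}} \rho^4 |w|^2
   && \text{(recalled inequality, divided by $A^2$)} \\
  &\leqslant C \int_{\mathbb{R}} | \partial_x w|^2 + C \omega_0 \, || \rho^2 u ||^2
   && \text{($1/A < \sqrt{\omega_0}$ and $|w| \leqslant |u|$).}
\end{align*}
% Integrating over [0,T] and using the bound already obtained on
% \int_0^T \int |\partial_x w|^2 gives the displayed inequality.
```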
Now, recalling $w= \zeta_A u$ and integrating by parts, we find that $$\int_{\mathbb{R}} \zeta_A^2 | \partial_x w|^2 = \int_{\mathbb{R}} \zeta_A^4 | \partial_x u |^2 - \int_{\mathbb{R}} \zeta_A^3 \zeta_A '' |u|^2 - 2 \int_{\mathbb{R}} \zeta_A^2 ( \zeta_A ')^2 |u|^2$$ and thus, using the inequalities $\frac{1}{C} \eta_A \leqslant \zeta_A^2 \leqslant C \eta_A$ and $\zeta_A | \zeta_A '' | + | \zeta_A'|^2 \leqslant \frac{C}{A^2} \zeta_A^2$, we obtain $$\int_{\mathbb{R}} \eta_A^2 | \partial_x u|^2 \leqslant C \int_{\mathbb{R}} | \partial_x w|^2 + \frac{C}{A^2} \int_{\mathbb{R}} \eta_A^2 |u|^2.$$ Integrating over $[0 \, , T]$ and combining with the previous inequalities, we finally find that $$\int_0^T \left ( || \eta_A \partial_x u ||^2 + \frac{1}{A^2} || \eta_A u ||^2 \right ) \, \text{d}t \leqslant C \int_0^T \int_{\mathbb{R}} | \partial_x w |^2 + \frac{C}{A^2} \int_0^T \int_{\mathbb{R}} \eta_A^2 |u|^2 \leqslant C \epsilon + C \omega_0 \int_0^T || \rho^2 u ||^2 \, \text{d}t,$$ which is the desired result. ## Transformed problem We will later fix a certain $\alpha > 0$, chosen small. For this $\alpha$ we introduce $v_1 = X_\alpha^2 M_- S^2 u_2$, $v_2 = - X_\alpha^2 S^2 L_+ u_1$ and $v = v_1 + iv_2$.
We recall that $$S^2 = \partial_x^2 - 2 \, \frac{\phi_\omega '}{\phi_\omega} \cdot \partial_x + \omega - g ( \phi_\omega^2 ) + 2 \, \frac{G ( \phi_\omega^2 )}{\phi_\omega^2}.$$ We then compute $$\begin{array}{rcl} M_- S^2 &=& - \partial_x^4 + 2 \partial_x^2 \cdot \frac{\phi_\omega '}{\phi_\omega} \cdot \partial_x + \partial_x \cdot \left ( 2 \phi_\omega^2 g'(\phi_\omega^2) - 4 g ( \phi_\omega^2 ) + 4 \, \frac{G ( \phi_\omega^2)}{\phi_\omega^2} \right ) \cdot \partial_x \\ \\ & & \, \, + \left ( 4 \phi_\omega \phi_\omega ' g' ( \phi_\omega^2 ) - 6 \, \frac{\phi_\omega '}{\phi_\omega} \, g( \phi_\omega^2 ) - 4 \phi_\omega ' \phi_\omega^3 g''( \phi_\omega^2 ) + 4 \frac{\phi_\omega'}{\phi_\omega} \frac{G ( \phi_\omega^2 )}{\phi_\omega^2} - 2 \omega \, \frac{\phi_\omega '}{\phi_\omega} \right ) \cdot \partial_x \\ \\ & & \, \, + \, \omega^2 + 2 \omega \left ( g ( \phi_\omega^2 ) - \phi_\omega^2 g'(\phi_\omega^2) + 2 \phi_\omega^4 g''( \phi_\omega^2) \right ) \\ \\ & & \, \, - \, 2 g'(\phi_\omega^2) G ( \phi_\omega^2 ) + \phi_\omega^4 g'(\phi_\omega^2) - 2 \phi_\omega^6 g''( \phi_\omega^2) + 4 \phi_\omega^2 G(\phi_\omega^2) g''(\phi_\omega^2) \\ \\ & & \, \, - \, 2 \phi_\omega^2 g ( \phi_\omega^2 ) + 2 G ( \phi_\omega^2 ) + g ( \phi_\omega^2 )^2 \end{array}$$ and $$\begin{array}{rcl} S^2 L_+ &=& - \partial_x^4 + 2 \partial_x^2 \cdot \frac{\phi_\omega '}{\phi_\omega} \cdot \partial_x + \partial_x \cdot \left ( - \phi_\omega^2 - 2 g ( \phi_\omega^2 ) + 2 \, \frac{G ( \phi_\omega^2 )}{\phi_\omega^2} + 2 \phi_\omega^2 g'( \phi_\omega^2 ) \right ) \cdot \partial_x \\ \\ & & \, \, + \left ( -2 \phi_\omega \phi_\omega ' + 4 \phi_\omega \phi_\omega ' g' ( \phi_\omega^2 ) - 2 \, \frac{\phi_\omega '}{\phi_\omega} \, g( \phi_\omega^2 ) + 4 \phi_\omega ' \phi_\omega^3 g''( \phi_\omega^2 ) - 2 \omega \, \frac{\phi_\omega '}{\phi_\omega} \right ) \cdot \partial_x \\ \\ & & \, \, + \, \omega^2 + \omega \left ( -3 \phi_\omega^2 + 20 \phi_\omega^4 g''(\phi_\omega^2) + 8 \phi_\omega^6 
g'''(\phi_\omega^2) + 2 \phi_\omega^2 g'(\phi_\omega^2) + 2 \, \frac{G ( \phi_\omega^2)}{\phi_\omega^2} \right ) \\ \\ & & \, \, + \, 3 \phi_\omega^4 - 3 \phi_\omega^2 g ( \phi_\omega^2) - 3 \phi_\omega^4 g'(\phi_\omega^2) + 4 \phi_\omega^2 g ( \phi_\omega^2 ) g'( \phi_\omega^2 ) -2 g'( \phi_\omega^2 ) G( \phi_\omega^2 ) \\ \\ & & \, \, - \, 12 \phi_\omega^6 g''(\phi_\omega^2) + 16 \phi_\omega^2 G( \phi_\omega^2 ) g''(\phi_\omega^2) + 4 \phi_\omega^4 g( \phi_\omega^2 ) g''( \phi_\omega^2 ) - 4 \phi_\omega^8 g'''(\phi_\omega^2) \\ \\ & & \, \, + \, 8 \phi_\omega^4 G ( \phi_\omega^2 ) g''' ( \phi_\omega^2 ) - g ( \phi_\omega^2 )^2 + 2 g ( \phi_\omega^2 ) \, \frac{G ( \phi_\omega^2 )}{\phi_\omega^2}. \end{array}$$ We introduce the operators $Q_-$ and $Q_+$, obtained respectively from $M_- S^2$ and $S^2 L_+$ by differentiation with respect to $\omega$ and then multiplication by $\omega$. Their exact expressions are given below. $$\begin{array}{rcl} Q_- &=& 2 \partial_x^2 \cdot \left ( \frac{\Lambda_\omega ' \phi_\omega - \phi_\omega ' \Lambda_\omega}{\phi_\omega^2} \right ) \cdot \partial_x + \partial_x \cdot \left ( -4 \phi_\omega \Lambda_\omega g'( \phi_\omega^2 ) + 4 \phi_\omega^3 \Lambda_\omega g'' ( \phi_\omega^2 ) + 8 \frac{\Lambda_\omega g( \phi_\omega^2 )}{\phi_\omega} - 8 \frac{\Lambda_\omega G ( \phi_\omega^2 )}{\phi_\omega^3} \right ) \cdot \partial_x \\ \\ & & \, \, + \, \left ( 4 \Lambda_\omega ' \phi_\omega g'( \phi_\omega^2 ) -8 \Lambda_\omega \phi_\omega ' g'( \phi_\omega^2 ) -4 \Lambda_\omega \phi_\omega ' \phi_\omega^2 g''( \phi_\omega^2 ) -4 \Lambda_\omega ' \phi_\omega^3 g'' ( \phi_\omega^2 ) -8 \Lambda_\omega \phi_\omega ' \phi_\omega^4 g'''(\phi_\omega^2) \right. \\ \\ & & \, \, \, \, \, \, \, \, \, \, \, \, \, \, \left. 
-6 \frac{\Lambda_\omega ' g ( \phi_\omega^2 )}{\phi_\omega} + 4 \frac{\Lambda_\omega ' G ( \phi_\omega^2 )}{\phi_\omega^3} + 14 \frac{\Lambda_\omega \phi_\omega ' g( \phi_\omega^2 )}{\phi_\omega^2} -12 \frac{\Lambda_\omega \phi_\omega ' G (\phi_\omega^2)}{\phi_\omega^4} -2 \omega \frac{\phi_\omega '}{\phi_\omega} -2 \omega \frac{\Lambda_\omega ' \phi_\omega - \Lambda_\omega \phi_\omega '}{\phi_\omega^2} \right ) \cdot \partial_x \\ \\ & & \, \, + \, 2 \omega^2 + 2 \omega \left ( g ( \phi_\omega^2 ) - \phi_\omega^2 g'( \phi_\omega^2 ) + 2 \phi_\omega^4 g''( \phi_\omega^2 ) \right ) + 4 \omega \left ( 3 \Lambda_\omega \phi_\omega^3 g''( \phi_\omega^2 ) + 2 \Lambda_\omega \phi_\omega^5 g'''(\phi_\omega^2 ) \right ) \\ \\ & & \, \, + \, 4 \Lambda_\omega \phi_\omega g''( \phi_\omega^2 ) G ( \phi_\omega^2 ) -10 \Lambda_\omega \phi_\omega^5 g''( \phi_\omega^2 ) -4 \Lambda_\omega \phi_\omega^7 g'''( \phi_\omega^2 ) + 8 \Lambda_\omega \phi_\omega^3 g( \phi_\omega^2) g''( \phi_\omega^2) + 8 \Lambda_\omega \phi_\omega^3 G ( \phi_\omega^2 ) g'''( \phi_\omega^2 ) \end{array}$$ and $$\begin{array}{rcl} Q_+ &=& 2 \partial_x^2 \cdot \left ( \frac{\Lambda_\omega ' \phi_\omega - \phi_\omega ' \Lambda_\omega}{\phi_\omega^2} \right ) \cdot \partial_x + \partial_x \cdot \left ( -2 \Lambda_\omega \phi_\omega + 4 \frac{\Lambda_\omega g( \phi_\omega^2 )}{\phi_\omega} - 4 \frac{\Lambda_\omega G ( \phi_\omega^2 )}{\phi_\omega^3} + 4 \Lambda_\omega \phi_\omega^3 g'' ( \phi_\omega^2 ) \right ) \cdot \partial_x \\ \\ & & \, \, + \, \left ( -2 \Lambda_\omega \phi_\omega ' -2 \Lambda_\omega ' \phi_\omega + 4 \Lambda_\omega ' \phi_\omega g'( \phi_\omega^2 ) + 20 \Lambda_\omega \phi_\omega ' \phi_\omega^2 g''( \phi_\omega^2 ) + 4 \Lambda_\omega ' \phi_\omega^3 g''(\phi_\omega^2) \right. \\ \\ & & \, \, \, \, \, \, \, \, \, \, \, \, \, \, \left. 
+ 8 \Lambda_\omega \phi_\omega ' \phi_\omega^4 g'''(\phi_\omega^2 ) -2 \frac{\Lambda_\omega ' g(\phi_\omega^2)}{\phi_\omega} + 2 \frac{\Lambda_\omega \phi_\omega ' g ( \phi_\omega^2 )}{\phi_\omega^2} - 2 \omega \frac{\phi_\omega '}{\phi_\omega} - 2 \omega \frac{\Lambda_\omega ' \phi_\omega - \Lambda_\omega \phi_\omega '}{\phi_\omega^2} \right ) \cdot \partial_x \\ \\ & & \, \, + \, 2 \omega^2 + \omega \left ( -3 \phi_\omega^2 + 20 \phi_\omega^4 g''(\phi_\omega^2) + 8 \phi_\omega^6 g'''( \phi_\omega^2 ) + 2 \phi_\omega^2 g'(\phi_\omega^2) + 2 \frac{G ( \phi_\omega^2 )}{\phi_\omega^2} \right ) \\ \\ & & \, \, + \, 2 \omega \left ( -3 \Lambda_\omega \phi_\omega + 42 \Lambda_\omega \phi_\omega^3 g''(\phi_\omega^2) + 44 \Lambda_\omega \phi_\omega^5 g'''(\phi_\omega^2) + 8 \Lambda_\omega \phi_\omega^7 g''''(\phi_\omega^2) \right. \\ \\ & & \, \, \, \, \, \, \, \, \, \, \, \, \, \, \left. + 2 \Lambda_\omega \phi_\omega g'(\phi_\omega^2) + 2 \frac{\Lambda_\omega g(\phi_\omega^2)}{\phi_\omega} - 4 \frac{\Lambda_\omega G ( \phi_\omega^2 )}{\phi_\omega^3} \right ) \\ \\ & & \, \, + \, 12 \Lambda_\omega \phi_\omega^3 -6 \Lambda_\omega \phi_\omega g ( \phi_\omega^2 ) -18 \Lambda_\omega \phi_\omega^3 g'(\phi_\omega^2) -78 \Lambda_\omega \phi_\omega^5 g''(\phi_\omega^2) + 8 \Lambda_\omega \phi_\omega^3 g'(\phi_\omega^2)^2 \\ \\ & & \, \, + \, 56 \Lambda_\omega \phi_\omega^3 g(\phi_\omega^2) g''(\phi_\omega^2) + 28 \Lambda_\omega \phi_\omega g''(\phi_\omega^2) G(\phi_\omega^2) - 56 \Lambda_\omega \phi_\omega^7 g'''(\phi_\omega^2) + 64 \Lambda_\omega \phi_\omega^3 G(\phi_\omega^2) g'''(\phi_\omega^2) \\ \\ & & \, \, + \, 8 \Lambda_\omega \phi_\omega^5 g'(\phi_\omega^2) g''(\phi_\omega^2) + 24 \Lambda_\omega \phi_\omega^5 g ( \phi_\omega^2 ) g'''(\phi_\omega^2 ) -8 \Lambda_\omega \phi_\omega^9 g''''(\phi_\omega^2) + 16 \Lambda_\omega \phi_\omega^5 G(\phi_\omega^2) g''''(\phi_\omega^2) \\ \\ & & \, \, + \, 4 \frac{\Lambda_\omega g(\phi_\omega^2)^2}{\phi_\omega} - 4 
\frac{\Lambda_\omega g ( \phi_\omega^2 ) G ( \phi_\omega^2 )}{\phi_\omega^3} + 4 \frac{\Lambda_\omega G(\phi_\omega^2) g'(\phi_\omega^2)}{\phi_\omega}. \end{array}$$ We give without proof several estimates about the operators $X_\alpha$ that can be found in [@Ma3] or [@Ma1]. **Lemma 8.** There exists $C>0$ such that, for $\alpha > 0$ small enough and any $q \in L^2 ( \mathbb{R})$, $$\begin{array}{ll} ||X_\alpha q|| \leqslant ||q|| , & || \partial_x X_\alpha^{1/2} q || \leqslant \alpha^{-1/2} ||q||, \\ || \rho X_\alpha q|| \leqslant C ||X_\alpha ( \rho q) || , & ||\rho^{-1} X_\alpha ( \rho q ) || \leqslant C ||X_\alpha q|| , \\ || \eta_A X_\alpha q || \leqslant C || X_\alpha ( \eta_A q ) || \leqslant C || \eta_A q || , & || \eta_A^{-1} X_\alpha ( \eta_A q) || \leqslant C || X_\alpha q || , \\ || \rho^{-1} X_\alpha \partial_x^2 ( \rho q) || \leqslant C \alpha^{-1} ||q|| , & || \rho^{-1} X_\alpha \partial_x ( \rho q ) || \leqslant C \alpha^{-1/2} ||q|| , \\ || \eta_A X_\alpha \partial_x^2 q || \leqslant C \alpha^{-1} || \eta_A q || , & || \eta_A X_\alpha \partial_x q || \leqslant C \alpha^{-1/2} || \eta_A q ||. \end{array}$$ We then obtain the following estimates on $M_-$ and $L_+$. **Lemma 9.** There exists $C>0$ such that, for $\alpha > 0$ small enough and any $q \in L^2 ( \mathbb{R})$, $$\begin{array}{ll} || \eta_A X_\alpha^2 M_- S^2 q || + || \eta_A X_\alpha^2 S^2 L_+ q || \leqslant C \left ( \alpha^{-3/2} || \eta_A \partial_x q || + \omega_0^2 || \eta_A q || \right ), \\ \\ || \eta_A \partial_x X_\alpha^2 M_- S^2 q || + || \eta_A \partial_x X_\alpha^2 S^2 L_+ q || \leqslant C \left ( \alpha^{-2} || \eta_A \partial_x q || + \omega_0^{5/2} || \rho^2 q || \right ). \end{array}$$ *Proof.* Let us start with $X_\alpha^2 M_- S^2$, whose explicit expression is given before. We have to analyse each term constituting $M_- S^2$. To do so, notice that $X_\alpha$ and $\partial_x$ commute.
First, $$|| \eta_A X_\alpha^2 \partial_x^4 q || = || \eta_A X_\alpha \partial_x^2 (X_\alpha \partial_x \partial_x q) || \leqslant C \alpha^{-1} || \eta_A X_\alpha \partial_x (\partial_x q) || \leqslant C \alpha^{-3/2} || \eta_A \partial_x q ||.$$ We also have $$|| \eta_A \partial_x X_\alpha^2 \partial_x^4 q || = || \eta_A X_\alpha \partial_x^2 (X_\alpha \partial_x^2 \partial_x q ) || \leqslant C \alpha^{-2} || \eta_A \partial_x q ||$$ for the same reason. Now, let $R = \phi_\omega ' / \phi_\omega$ (as in the proof of Lemma 6). We recall for what follows that $| R | \leqslant C \sqrt{\omega}$. Thus, $$|| \eta_A X_\alpha^2 \partial_x^2 \cdot R \cdot \partial_x q || = || \eta_A X_\alpha \partial_x^2 (X_\alpha \cdot R \cdot \partial_x q) || \leqslant C \alpha^{-1} || \eta_A X_\alpha \cdot R \cdot \partial_x q || \leqslant C \alpha^{-1} || \eta_A R \partial_x q || \leqslant C \alpha^{-1} \sqrt{\omega} || \eta_A \partial_x q ||.$$ And also $$|| \eta_A \partial_x X_\alpha^2 \partial_x^2 \cdot R \cdot \partial_x q || = || \eta_A X_\alpha \partial_x^2 (X_\alpha \partial_x \cdot R \cdot \partial_x q) || \leqslant C \alpha^{-3/2} \sqrt{\omega} || \eta_A \partial_x q ||$$ for the same reason. Then, denoting $b_\omega^1 := 2 \phi_\omega^2 g'( \phi_\omega^2 ) - 4 g ( \phi_\omega^2 ) + 4 \frac{G ( \phi_\omega^2 )}{\phi_\omega^2}$, we find that $| b_\omega^1 | \leqslant C \phi_\omega^2 \leqslant C \omega$. 
Therefore, $$|| \eta_A X_\alpha^2 \partial_x \cdot b_\omega^1 \cdot \partial_x q || \leqslant || \eta_A X_\alpha \partial_x ( X_\alpha \cdot b_\omega^1 \cdot \partial_x q) || \leqslant C \alpha^{-1/2} || \eta_A X_\alpha \cdot b_\omega^1 \cdot \partial_x q || \leqslant C \alpha^{-1/2} || \eta_A b_\omega^1 \partial_x q || \leqslant C \alpha^{-1/2} \omega || \eta_A \partial_x q ||.$$ And also $$|| \eta_A \partial_x X_\alpha^2 \partial_x \cdot b_\omega^1 \cdot \partial_x q || \leqslant || \eta_A X_\alpha \partial_x^2 ( X_\alpha \cdot b_\omega^1 \cdot \partial_x q) || \leqslant C \alpha^{-1} \omega || \eta_A \partial_x q ||$$ for the same reason. Now, denoting $b_\omega^2 := 4R \phi_\omega^2 g'(\phi_\omega^2) - 6 R g ( \phi_\omega^2) - 4 R \phi_\omega^4 g''(\phi_\omega^2) + 4 R \frac{G ( \phi_\omega^2)}{\phi_\omega^2} - 2 \omega R$, we see that $|b_\omega^2| \leqslant C |R| \phi_\omega^2 \leqslant C \omega^{3/2}$. Consequently, $$|| \eta_A X_\alpha^2 b_\omega^2 \cdot \partial_x q || \leqslant C || \eta_A b_\omega^2 \partial_x q || \leqslant C \omega^{3/2} || \eta_A \partial_x q ||.$$ And also $$|| \eta_A \partial_x X_\alpha^2 b_\omega^2 \cdot \partial_x q || = || \eta_A X_\alpha \partial_x (X_\alpha b_\omega^2 \cdot \partial_x q) || \leqslant C \alpha^{-1/2} \omega^{3/2} || \eta_A \partial_x q ||$$ for the same reason. Finally, we denote $b_\omega^3 := \omega^2 + 2 \omega \left ( g ( \phi_\omega^2 ) - \phi_\omega^2 g'(\phi_\omega^2) + 2 \phi_\omega^4 g''( \phi_\omega^2) \right ) - 2 g'(\phi_\omega^2) G ( \phi_\omega^2 ) + \phi_\omega^4 g'(\phi_\omega^2) - 2 \phi_\omega^6 g''( \phi_\omega^2) + 4 \phi_\omega^2 G(\phi_\omega^2) g''(\phi_\omega^2) - 2 \phi_\omega^2 g ( \phi_\omega^2 ) + 2 G ( \phi_\omega^2 ) + g ( \phi_\omega^2 )^2$. We see that $| b_\omega^3 | \leqslant \omega^2 + C \omega \phi_\omega^2 + C \phi_\omega^4 \leqslant C \omega^2$. 
This gives $$|| \eta_A X_\alpha^2 ( b_\omega^3 q) || \leqslant C || \eta_A b_\omega^3 q || \leqslant C \omega^2 || \eta_A q ||.$$ On the other hand, $\partial_x (b_\omega^3 q) = (b_\omega^3)' q + b_\omega^3 \partial_x q$ where $$\begin{array}{rcl} (b_\omega^3)' &=& 4 \omega \left ( 3 \phi_\omega^3 g''(\phi_\omega^2) + 2 \phi_\omega^5 g'''(\phi_\omega^2) \right ) \phi_\omega ' -10 \phi_\omega ' \phi_\omega^5 g''(\phi_\omega^2) - 4 \phi_\omega ' \phi_\omega^7 g'''(\phi_\omega^2) +4 \phi_\omega ' \phi_\omega G ( \phi_\omega^2 ) g''(\phi_\omega^2) \\ \\ & & \, \, \, \, + \, 8 \phi_\omega ' \phi_\omega^3 g ( \phi_\omega^2 ) g''(\phi_\omega^2) + 8 \phi_\omega ' \phi_\omega^3 G ( \phi_\omega^2 ) g'''(\phi_\omega^2). \end{array}$$ Recalling that $| \phi_\omega ' | \leqslant C \omega \rho^2$, we find that $|(b_\omega^3)'| \leqslant C \omega \phi_\omega | \phi_\omega ' | + C \phi_\omega^3 | \phi_\omega '| \leqslant C \omega^{5/2} \rho^2$. This leads to $$|| \eta_A \partial_x X_\alpha^2 (b_\omega^3 q ) || \leqslant || \eta_A X_\alpha^2 (b_\omega^3)'q|| + || \eta_A X_\alpha^2 (b_\omega^3 \partial_x q)|| \leqslant C || \eta_A (b_\omega^3)' q|| + C \omega^2 || \eta_A \partial_x q || \leqslant C \omega^{5/2} || \rho^2 q || + C \omega^2 || \eta_A \partial_x q ||.$$ We conclude simply by noticing that $\omega \leqslant 1$. The proof for $X_\alpha^2 S^2 L_+ q$ is identical and does not add any complication to the proof above.\ \ Applying this lemma to $u_2$ and $u_1$, we obtain the following estimates. **Lemma 10.** There exists $C>0$ such that, for $\alpha > 0$ small enough, $$\begin{array}{ll} || \eta_A v || \leqslant C \left ( \alpha^{-3/2} || \eta_A \partial_x u || + \omega_0^2 || \eta_A u || \right ), \\ \\ || \eta_A \partial_x v || \leqslant C \left ( \alpha^{-2} || \eta_A \partial_x u || + \omega_0^{5/2} || \rho^2 u || \right ). \end{array}$$ We have to check similar estimates on the operators $Q_-$ and $Q_+$.
**Lemma 11.** There exists $C>0$ such that, for $\alpha > 0$ small enough and any $q \in L^2 ( \mathbb{R})$, $$\begin{array}{ll} || \eta_A X_\alpha^2 Q_- q || + || \eta_A X_\alpha^2 Q_+ q || \leqslant C \left ( \alpha^{-1} \sqrt{\omega_0} || \eta_A \partial_x q || + \omega_0^2 || \eta_A q || \right ), \\ \\ || \eta_A \partial_x X_\alpha^2 Q_- q || + || \eta_A \partial_x X_\alpha^2 Q_+ q || \leqslant C \left ( \alpha^{-3/2} \sqrt{\omega_0} || \eta_A \partial_x q || + \omega_0^{5/2} || \rho^2 q || \right ). \end{array}$$ *Proof.* The proof is similar to the one of the previous lemma. We first show that $$\left | \frac{\Lambda_\omega ' \phi_\omega - \Lambda_\omega \phi_\omega '}{\phi_\omega^2} \right | \leqslant C \sqrt{\omega}.$$ Indeed, we first see that $$( \Lambda_\omega ' \phi_\omega - \Lambda_\omega \phi_\omega ' )' = \Lambda_\omega '' \phi_\omega - \Lambda_\omega \phi_\omega '' = \omega \phi_\omega^2 -2 \Lambda_\omega \phi_\omega^3 + 2 \Lambda_\omega \phi_\omega^3 g'(\phi_\omega^2),$$ using the equations satisfied by $\phi_\omega$ and $\Lambda_\omega$. Therefore, writing that $|g'(\phi_\omega^2)| \leqslant 1$, we see that, for any $x \geqslant 0$, $$| \Lambda_\omega ' \phi_\omega - \Lambda_\omega \phi_\omega ' | (x) = \left | - \int_x^{+ \infty} \left ( \omega \phi_\omega^2 - 2 \Lambda_\omega \phi_\omega^3 + 2 \Lambda_\omega \phi_\omega^3 g'(\phi_\omega^2) \right ) \right | \leqslant C \omega \int_x^{+ \infty} \phi_\omega^2 + C \int_x^{+ \infty} | \Lambda_\omega | \phi_\omega^3.$$ Now using the estimates on $\Lambda_\omega$ and $\phi_\omega$ we get $$| \Lambda_\omega ' \phi_\omega - \Lambda_\omega \phi_\omega ' | (x) \leqslant C \omega^{3/2} e^{-2 \sqrt{\omega} x} + C \omega^{3/2} e^{-4 \sqrt{\omega} x} \leqslant C \omega^{3/2} e^{-2 \sqrt{\omega} x}.$$ We recall that $\phi_\omega (x) \geqslant c \sqrt{\omega} e^{- \sqrt{\omega} |x|}$.
Thus, $$\left | \frac{\Lambda_\omega ' \phi_\omega - \Lambda_\omega \phi_\omega '}{\phi_\omega^2} \right | \leqslant C \sqrt{\omega}.$$ We also see, thanks to the estimates on $\Lambda_\omega$ and its derivatives, that $| \Lambda_\omega | \leqslant C \sqrt{\omega}$ and $| \Lambda_\omega ' | \leqslant C \omega$. Now let us write the operator $Q_-$ as $$Q_- = \partial_x^2 \cdot c_\omega^1 \cdot \partial_x + \partial_x \cdot c_\omega^2 \cdot \partial_x + c_\omega^3 \cdot \partial_x + c_\omega^4.$$ Using $(H_1)$, we see that $|c_\omega^1| \leqslant C \sqrt{\omega}$, $|c_\omega^2| \leqslant C \omega$, $|c_\omega^3| \leqslant C \omega^{3/2}$, $|c_\omega^4| \leqslant C \omega^2$ and $|(c_\omega^4)'| \leqslant C \omega^{5/2}$. Reasoning as in the previous proof, we obtain the desired result. The same estimates and the same proof hold for $Q_+$. It is for this proof that we use $(H_1)$ in its entirety: we indeed have to control $g$ up to its fifth derivative (because of the expression of $Q_+$). \ \ Now let us prove a last estimate, more elementary (in the sense that it does not involve any derivative of $q$) but that will be useful. **Lemma 12.** There exists $C>0$ such that, for $\alpha > 0$ small enough and any $q \in L^2 ( \mathbb{R})$, $$|| \eta_A X_\alpha^2 M_- S^2 q || + || \eta_A X_\alpha^2 S^2 L_+ q || \leqslant C \alpha^{-2} || \eta_A q ||.$$ *Proof.* The proof is analogous to the one of Lemma 9. 
For example, see that $$|| \eta_A X_\alpha^2 \partial_x^4 q || = || \eta_A X_\alpha \partial_x^2 ( X_\alpha \partial_x^2 q) || \leqslant C \alpha^{-1} || \eta_A X_\alpha \partial_x^2 q || \leqslant C \alpha^{-2} || \eta_A q ||.$$ For the other terms it is similar and easier; for instance the last term is controlled as follows: $$|| \eta_A \partial_x X_\alpha^2 (b_\omega^3 q) || \leqslant C \alpha^{-1/2} || \eta_A X_\alpha ( b_\omega^3 q) || \leqslant C \alpha^{-1/2} || \eta_A b_\omega^3 q || \leqslant C \alpha^{-1/2} || \eta_A q ||.$$ This completes the proof. ## Second virial estimate Using the system [\[Su\]](#Su){reference-type="eqref" reference="Su"} satisfied by $u$ and the identity of Lemma 6, we find the following system satisfied by $v$: $$\left \{ \begin{array}{ccl} \partial_t v_1 &=& M_- v_2 + Y_\alpha^- v_2 + X_\alpha^2 n_2 - X_\alpha^2 r_2 \\ \partial_t v_2 &=& - M_+ v_1 - Y_\alpha^+ v_1 - X_\alpha^2 n_1 + X_\alpha^2 r_1 \end{array} \right. \label{Sv}$$ where $$\begin{array}{lll} n_1 \, = \, S^2 L_+ m_2 + \frac{\dot{\omega}}{\omega} Q_+ u_1, & r_1 \, = \, S^2 L_+ q_2 , & Y_\alpha^- \, = \, X_\alpha^2 \cdot a_\omega^- \cdot X_\alpha^{-2} - a_\omega^-, \\ \\ n_2 \, = \, -M_- S^2 m_1 + \frac{\dot{\omega}}{\omega} Q_- u_2 , & r_2 \, = \, -M_- S^2 q_1 , & Y_\alpha^+ \, = \, X_\alpha^2 \cdot a_\omega^+ \cdot X_\alpha^{-2} - a_\omega^+. \end{array}$$ **Proposition 4.** Suppose hypotheses $(H_1)$ and $(H_2)$ are satisfied. Assume that $\omega_0$ is small enough. There exists $C>0$ such that, for $B>0$ large enough, $\alpha > 0$ and $\epsilon > 0$ small enough, and for any $T>0$, $$\int_0^T || \rho v ||^2 \, \text{d}t \leqslant C \epsilon^2 + C \int_0^T \left ( \frac{1}{A \sqrt{\omega_0}} || \eta_A \partial_x u ||^2 + \frac{\omega_0^{5/2}}{A^3} || \eta_A u ||^2 + \frac{\omega_0^5}{A} || \rho^2 u ||^2 \right ) \, \text{d}t.$$ *Proof.* We use another virial argument. 
Let $z = \chi_A \zeta_B v$ and $$\mathcal{J} = \int_{\mathbb{R}} v_1 \left ( 2 \Psi_{A,B} \partial_x v_2 + \Psi_{A,B} ' v_2 \right ).$$ Using the equation [\[Sv\]](#Sv){reference-type="eqref" reference="Sv"} and integrating by parts (following computations from [@Ma3] and [@Ma1]), we get that $$\dot{\mathcal{J}} = \int_{\mathbb{R}} \left ( 2 ( \partial_x z_1)^2 + P_B^+ z_1^2 \right ) + \int_{\mathbb{R}} \left ( 2 ( \partial_x z_2 )^2 + P_B^- z_2^2 \right ) + J_1 + J_2 + J_3 + J_4 + J_5$$ where $\displaystyle{P_B^{\pm} := - (a_{\omega_0}^{\pm})' \frac{\Phi_B}{\zeta_B^2}}$ and $$\begin{array}{l} \displaystyle{J_1 \, = \, \sum_{k=1}^2 \int_{\mathbb{R}} (\ln \zeta_B)'' z_k^2} , \\ \\ \displaystyle{J_2 \, = \, - \sum_{k=1}^2 \int_{\mathbb{R}} \left ( \frac{1}{2} ( \chi_A^2 )' (\zeta_B^2) ' + \left ( 3 ( \chi_A ')^2 + \chi_A '' \chi_A \right ) \zeta_B^2 + \frac{1}{2} ( \chi_A^2 )''' \Phi_B \right ) v_k^2 + 2 \sum_{k=1}^2 \int_{\mathbb{R}} (\chi_A^2)' \Phi_B ( \partial_x v_k)^2} , \\ \\ \displaystyle{J_3 \, = \, \int_{\mathbb{R}} ( 2 \Psi_{A,B} \partial_x v_1 + \Psi_{A,B} ' v_1 ) Y_\alpha^+ v_1 + \int_{\mathbb{R}} (2 \Psi_{A,B} \partial_x v_2 + \Psi_{A,B} ' v_2 ) Y_\alpha^- v_2} , \\ \\ \displaystyle{J_4 \, = \, \sum_{k=1}^2 \int_{\mathbb{R}} ( 2 \Psi_{A,B} \partial_x v_k + \Psi_{A,B} ' v_k ) ( X_\alpha^2 n_k - X_\alpha^2 r_k )} , \\ \\ \displaystyle{J_5 \, = \, \int_{\mathbb{R}} \frac{\Phi_B}{\zeta_B^2} \left ( \left ( a_{\omega_0}^- - a_{\omega}^- \right )' z_1^2 + \left ( a_{\omega_0}^+ - a_{\omega}^+ \right )' z_2^2 \right ).} \end{array}$$ Notice the obvious similarities with the notation in Lemma 7 and Proposition 2; however, the frequency involved in $P_B$ is $\omega_0$ (not $\omega$). 
Setting $\mathcal{K} := - \int_{\mathbb{R}} z_1 z_2 R_B$ where $R_B$ is a bounded function to be defined later, we find that $$\dot{\mathcal{J}} + \dot{\mathcal{K}} = \int_{\mathbb{R}} \left [ 2 ( \partial_x z_1)^2 + \left ( P_B^+ + \omega_0 R_B - \frac{R_B ''}{2} \right ) z_1^2 \right ] + \int_{\mathbb{R}} \left [ 2 ( \partial_x z_2)^2 + \left ( P_B^- - \omega_0 R_B + \frac{R_B''}{2} \right ) z_2^2 \right ] + \sum_{j=1}^5 (J_j + K_j )$$ where $$\begin{array}{l} \displaystyle{K_1 \, = \, \sum_{k=1}^2 (-1)^k \int_{\mathbb{R}} \left ( (\chi_A \zeta_B)' \chi_A \zeta_B R_B' + ((\chi_A \zeta_B)')^2 R_B \right ) v_k^2} , \\ \\ \displaystyle{K_2 \, = \, \int_{\mathbb{R}} \left ( ( \partial_x z_1)^2 - ( \partial_x z_2)^2 \right ) R_B - \int_{\mathbb{R}} (a_{\omega_0}^- z_2^2 - a_{\omega_0}^+ z_1^2) R_B} , \\ \\ \displaystyle{K_3 \, = \, \int_{\mathbb{R}} \left ( (Y_\alpha^+ v_1)v_1 - (Y_\alpha^- v_2)v_2 \right ) \chi_A^2 \zeta_B^2 R_B} , \\ \\ \displaystyle{K_4 \, = \, \sum_{k=1}^2 (-1)^{k-1} \int_{\mathbb{R}} (X_\alpha^2 n_k - X_\alpha^2 r_k) v_k \chi_A^2 \zeta_B^2 R_B} , \\ \\ \displaystyle{K_5 \, = \, ( \omega - \omega_0 ) \int_{\mathbb{R}} (z_1^2 - z_2^2) R_B + \int_{\mathbb{R}} \left [ (a_\omega^+ - a_{\omega_0}^+ ) z_1^2 - (a_\omega^- - a_{\omega_0}^-) z_2^2 \right ] R_B.} \end{array}$$ Let us define $R_B$ as the bounded solution of the ordinary differential equation $- \frac{R_B ''}{2} + \omega R_B = D_B$ where $D_B := \frac{P_B^+ - P_B^-}{2}$. Here also, notice the similarities with the notation in Lemma 7 and Proposition 2. We have the control $|R_B| \leqslant C \varepsilon_{\omega_0} \rho$. 
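For completeness, here is a sketch of why such a bounded solution $R_B$ exists and why it decays: the operator $- \frac{1}{2} \partial_x^2 + \omega$ has an explicit exponential Green kernel, and it suffices to assume that the right-hand side $D_B$ is continuous and bounded (which is the case here).

```latex
% Sketch: bounded solution of  -R_B''/2 + omega R_B = D_B  by convolution
% with the Green kernel of  -d_x^2/2 + omega  (any bounded continuous D_B works):
R_B (x) = \frac{1}{\sqrt{2 \omega}} \int_{\mathbb{R}} e^{- \sqrt{2 \omega} \, |x-y|} \, D_B (y) \, \text{d}y ,
\qquad \text{since} \qquad
\Big ( - \tfrac{1}{2} \partial_x^2 + \omega \Big ) \, \frac{e^{- \sqrt{2 \omega} \, |x|}}{\sqrt{2 \omega}} = \delta_0 .
```

The kernel has total mass $\int_{\mathbb{R}} \frac{1}{\sqrt{2 \omega}} e^{- \sqrt{2 \omega} |x-y|} \, \text{d}y = \omega^{-1}$, so $|| R_B ||_\infty \leqslant \omega^{-1} || D_B ||_\infty$, and the exponential decay of $D_B$ transfers to $R_B$ through the convolution, in line with the control $|R_B| \leqslant C \varepsilon_{\omega_0} \rho$ stated above.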
Such a choice leads to $$\dot{\mathcal{J}} + \dot{\mathcal{K}} = \int_{\mathbb{R}} \left [ 2 ( \partial_x z_1)^2 + P_B z_1^2 \right ] + \int_{\mathbb{R}} \left [ 2 ( \partial_x z_2)^2 + P_B z_2^2 \right ] + \sum_{j=1}^5 (J_j + K_j ).$$ We will need a result that enables us to control $|| \rho \partial_x v||$ and $|| \rho v ||$ in terms of $||\partial_x z||$ and $||\rho z||$, plus error terms involving $u$. This is the following lemma. **Lemma 13.** There exists $C>0$ such that, for $A,B>0$ large enough (depending on $\omega_0$) and $\alpha > 0$ small enough, $$|| \rho \partial_x v ||^2 + || \rho v ||^2 \leqslant C \int_{\mathbb{R}} \left ( | \partial_x z |^2 + \frac{1}{B^2} \rho |z|^2 \right ) + \frac{C}{A^3 \omega_0^{3/2}} \left ( \alpha^{-4} || \eta_A \partial_x u ||^2 + \omega_0^4 || \eta_A u ||^2 \right ).$$ *Proof.* First, for $|x| \leqslant A$, $z = \zeta_B v$ and we write that $$\int_{|x| \leqslant A} \rho^2 |v|^2 \leqslant C \int_{|x| \leqslant A} \rho \zeta_B^2 |v|^2 = C \int_{|x| \leqslant A} \rho |z|^2$$ using that $\rho \leqslant C \zeta_B^2$. 
Now, we have $\partial_x z = \zeta_B ' v + \zeta_B \partial_x v$ and $| \zeta_B ' | \leqslant \frac{C}{B} \zeta_B$ which lead to $$\rho^2 | \partial_x v |^2 \leqslant C \rho \zeta_B^2 | \partial_x v |^2 \leqslant C \rho | \partial_x z|^2 + C \rho \frac{\zeta_B^2}{B^2} |v|^2 \leqslant C | \partial_x z |^2 + \frac{C}{B^2} \, \rho |z|^2.$$ Therefore, $$\int_{|x| \leqslant A} \rho^2 | \partial_x v |^2 \leqslant C \int_{|x| \leqslant A} | \partial_x z |^2 + \frac{C}{B^2} \int_{|x| \leqslant A} \rho | z|^2$$ and finally $$\int_{|x| \leqslant A} \left ( \rho^2 | \partial_x v |^2 + \rho^2 |v|^2 \right ) \leqslant C \int_{|x| \leqslant A} | \partial_x z |^2 + \frac{C}{B^2} \int_{|x| \leqslant A} \rho |z|^2 \leqslant C \int_{\mathbb{R}} \left ( | \partial_x z |^2 + \frac{1}{B^2} \rho |z|^2 \right ).$$ Now, for $|x| > A$, we see that $\rho (x)^2 \leqslant C e^{\left ( \frac{4}{A} - \frac{\sqrt{\omega_0}}{5} \right ) |x|} \eta_A (x)^2$. If we take $A$ large enough such that $\frac{4}{A} < \frac{\sqrt{\omega_0}}{5}$, we see that $$\rho^2 \leqslant C e^{- \frac{A \sqrt{\omega_0}}{5}} \eta_A^2 \leqslant \frac{C}{A^N \omega_0^{N/2}} \eta_A^2,$$ the last inequality being true if $A \sqrt{\omega_0}$ is large enough, i.e. if $A$ is large enough (depending on $\omega_0$). 
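The last inequality is nothing deeper than the elementary bound $e^{-s} \leqslant N! \, s^{-N}$ for $s > 0$, spelled out with $s = A \sqrt{\omega_0} / 5$ and the exponent $N$ appearing above:

```latex
% From the Taylor bound  e^s >= s^N / N!  valid for all s > 0:
e^{- \frac{A \sqrt{\omega_0}}{5}} \leqslant N! \left ( \frac{5}{A \sqrt{\omega_0}} \right )^{N} = \frac{5^N \, N!}{A^N \, \omega_0^{N/2}} .
```

In particular the inequality holds for every $A > 0$ once the $N$-dependent constant $5^N N!$ is absorbed into $C$; taking $A \sqrt{\omega_0}$ large enough simply allows the same constant $C$ to be kept. The power actually used in Lemma 13 is $N = 3$.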
Then, using Lemma 10, we obtain $$\begin{array}{rcl} \displaystyle{\int_{|x| > A} \rho^2 ( | \partial_x v |^2 + |v|^2 )} & \leqslant & \displaystyle{\frac{C}{A^3 \omega_0^{3/2}} \left ( || \eta_A \partial_x v ||^2 + || \eta_A v ||^2 \right )} \\ \\ & \leqslant & \displaystyle{\frac{C}{A^3 \omega_0^{3/2}} \left ( \alpha^{-3} || \eta_A \partial_x u ||^2 + \omega_0^4 || \eta_A u ||^2 + \alpha^{-4} || \eta_A \partial_x u ||^2 + \omega_0^5 || \rho^2 u ||^2 \right )} \\ \\ & \leqslant & \displaystyle{\frac{C}{A^3 \omega_0^{3/2}} \left ( \alpha^{-3} || \eta_A \partial_x u ||^2 + \omega_0^4 || \eta_A u ||^2 + \alpha^{-4} || \eta_A \partial_x u ||^2 \right ).} \end{array}$$ Putting these estimates together, we get the desired result. \ \ We now get back to the proof of Proposition 4 and first control the terms $J_j$, $K_j$.\ \ *(About $J_1$.)* We write that $$| ( \ln \zeta_B )''| \leqslant \frac{C \sqrt{\omega_0}}{B} \, \mathbbm{1}_{[1,2]} ( \sqrt{\omega_0} |x| ) \leqslant \frac{C \sqrt{\omega_0}}{B} \rho,$$ which leads to $$|J_1| \leqslant \frac{C \sqrt{\omega_0}}{B} \int_{\mathbb{R}} \rho |z|^2.$$ *(About $K_1$.)* We start by writing that $| \chi_A ' | \leqslant \frac{C}{A} \leqslant \frac{C}{B}$, $| \zeta_B ' | \leqslant \frac{C}{B} \zeta_B$, $|R_B| \leqslant C \varepsilon_{\omega_0} \rho^2$ and $|R_B'| \leqslant C \varepsilon_{\omega_0} \sqrt{\omega_0} \, \rho^2$. The estimates on $R_B$ are shown in the same way as the estimates on $R_\infty$ in the proof of Proposition 2. 
Recalling that $B > \omega_0^{-1/2}$, this leads to $$\left | ( \chi_A \zeta_B )' \chi_A \zeta_B R_B ' + ( ( \chi_A \zeta_B )')^2 R_B \right | \leqslant \frac{C \varepsilon_{\omega_0} \sqrt{\omega_0}}{B} \, \rho^2$$ and then $$\begin{array}{rcl} | K_1 | & \leqslant & \displaystyle{\frac{C \varepsilon_{\omega_0} \sqrt{\omega_0}}{B} \int_{\mathbb{R}} \rho^2 |v|^2} \\ \\ & \leqslant & \displaystyle{ \frac{C \varepsilon_{\omega_0} \sqrt{\omega_0}}{B} \left [ || \partial_x z ||^2 + \frac{1}{B^2} \int_{\mathbb{R}} \rho |z|^2 + \frac{1}{A^3 \omega_0^{3/2}} \left ( \alpha^{-4} || \eta_A \partial_x u ||^2 + \omega_0^4 || \eta_A u ||^2 \right ) \right ],} \end{array}$$ using Lemma 13.\ \ *(About $J_2$.)* We start by recalling that $| \chi_A ' | \leqslant \frac{C}{A} \mathbbm{1}_{A < |x| < 2A}$, $| \chi_A '' | \leqslant \frac{C}{A^2} \mathbbm{1}_{A < |x| < 2A}$ and $| \chi_A ''' | \leqslant \frac{C}{A^3}\mathbbm{1}_{A < |x| < 2A}$. Moreover, for $|x|>A$, $| \zeta_B (x)| \leqslant C e^{-A/B}$ and $| \zeta_B '(x)| \leqslant \frac{C}{B} e^{-A/B}$. Thus, using the fact that $\zeta_B \leqslant C \eta_A^2$ (since $A \gg B$), $$\begin{array}{l} \displaystyle{| ( \chi_A^2 ) ' ( \zeta_B^2 )' | \leqslant \frac{C e^{-A/B}}{AB} \zeta_B^2 \leqslant \frac{C e^{-A/B}}{AB} \eta_A^2 \leqslant \frac{CB}{A^{3}} \eta_A^2 \, , } \\ \\ \displaystyle{ \left ( ( \chi_A')^2 + | \chi_A '' \chi_A | \right ) \zeta_B^2 \leqslant \frac{C e^{-A/B}}{A^2} \zeta_B^2 \leqslant \frac{C e^{-A/B}}{A^2} \eta_A^2 \leqslant \frac{CB}{A^3} \eta_A^2 \, ,} \end{array}$$ for $A/B$ large enough (we recall that $A \gg B$). We also know that $| \Phi_B | \leqslant CB$. 
Using the fact that $\mathbbm{1}_{|x| < 2A} \leqslant C \eta_A^2$, we obtain $$| ( \chi_A^2)' \Phi_B | \leqslant \frac{CB}{A} \, \eta_A^2 \, , \, \, \, \, \, \, | (\chi_A^2)''' \Phi_B | \leqslant \frac{CB}{A^3} \, \eta_A^2.$$ Putting these estimates together we get $$\begin{array}{rcl} |J_2| & \leqslant & \displaystyle{\frac{CB}{A} || \eta_A \partial_x v||^2 + \frac{CB}{A^3} || \eta_A v ||^2} \\ \\ & \leqslant & \displaystyle{\frac{CB}{A} \left ( \alpha^{-4} || \eta_A \partial_x u ||^2 + \omega^{5} || \rho^2 u ||^2 \right ) + \frac{CB}{A^3} \left ( \alpha^{-3} || \eta_A \partial_x u ||^2 + \omega^4 || \eta_A u ||^2 \right )} \\ \\ & \leqslant & \displaystyle{\frac{CB \alpha^{-4}}{A} || \eta_A \partial_x u ||^2 + \frac{CB \omega_0^4}{A} \left ( \frac{1}{A^2} || \eta_A u ||^2 + \omega_0 || \rho^2 u ||^2 \right ). } \end{array}$$ *(About $K_2$.)* We know that $R_B$ is bounded and that $||R_B||_\infty \leqslant C \varepsilon_{\omega_0}$. Moreover, $|a_{\omega_0}^{\pm}| \leqslant C \varepsilon_{\omega_0} \phi_{\omega_0}^2 \leqslant C \varepsilon_{\omega_0} \omega_0 \rho$. This gives $$|K_2| \leqslant C \varepsilon_{\omega_0} || \partial_x z ||^2 + C_2 \varepsilon_{\omega_0}^2 \omega_0 \int_{\mathbb{R}} \rho |z|^2.$$ Here, an explicit name has been given to the constant $C_2$ in order to be clear a little later.\ \ *(About $J_3$.)* We have $| \Psi_{A,B} | \leqslant CB$ and $| \Psi_{A,B} ' | \leqslant C$ (thanks to the bounds $| \chi_A ' | \leqslant C/B$ and $| \Phi_B | \leqslant CB$). 
Using the Cauchy-Schwarz inequality, we find $$\begin{array}{rcl} \displaystyle{\left | \int_{\mathbb{R}} ( 2 \Psi_{A,B} \partial_x v_1 + \Psi_{A,B} ' v_1 ) Y_\alpha^+ v_1 \right |} &=& \displaystyle{\left | \int_{\mathbb{R}} ( 2 \Psi_{A,B} \partial_x v_1 + \Psi_{A,B} ' v_1 ) \rho \cdot \rho^{-1} Y_\alpha^+ v_1 \right |} \\ \\ & \leqslant & \displaystyle{\left \| (2 \Psi_{A,B} \partial_x v_1 + \Psi_{A,B} ' v_1 ) \rho \right \| \, || \rho^{-1} Y_\alpha^+ v_1 ||} \\ \\ & \leqslant & C \left ( B || \rho \partial_x v_1 || + || \rho v_1 || \right ) || \rho^{-1} Y_\alpha^+ v_1 || \end{array}$$ where we recall that $$\begin{array}{rcl} Y_\alpha^{\pm} &=& X_\alpha^2 (a_\omega^{\pm} \cdot X_\alpha^{-2} - X_\alpha^{-2} \cdot a_\omega^{\pm} ) \\ \\ &=& X_\alpha^2 \cdot \left [ 2 \alpha ( 2 \partial_x \cdot (a_\omega^\pm) ' - ( a_\omega^{\pm} ) '' ) + \alpha^2 \left ( -4 \partial_x^3 \cdot (a_\omega^{\pm} )' +6 \partial_x^2 \cdot (a_\omega^{\pm} )'' -4 \partial_x \cdot (a_\omega^{\pm}) ''' + (a_\omega^{\pm})'''' \right ) \right ]. \end{array}$$ Using Lemma 8 and the bounds on $a_\omega^{\pm}$ and its derivatives, we find $$\begin{array}{rcl} || \alpha \rho^{-1} X_\alpha^2 \partial_x ( (a_\omega^\pm)'v_k) || & = & \alpha || \rho^{-1} X_\alpha \partial_x ( \rho \rho^{-1} X_\alpha ((a_\omega^{\pm})'v_k)) || \, \, \leqslant \, \, \alpha \cdot C \alpha^{-1/2} || \rho^{-1} X_\alpha ((a_\omega^\pm)'v_k)|| \\ \\ & \leqslant & C \sqrt{\alpha} || \rho^{-1} (a_\omega^\pm)' v_k || \, \, \leqslant \, \, C \sqrt{\alpha} \, \omega^{3/2} || \rho v_k ||. \end{array}$$ Similarly, we find for instance $|| \alpha^2 \rho^{-1} X_\alpha^2 \partial_x^3 ( (a_\omega^\pm)' v_k ) || \leqslant \alpha^2 \cdot C \alpha^{-3/2} || \rho^{-1} (a_\omega^\pm)' v_k || \leqslant C \sqrt{\alpha} \, \omega^{3/2} || \rho v_k ||$. All the other terms are smaller, for example $|| \alpha \rho^{-1} X_\alpha^2 ( (a_\omega^\pm)'' v_k) || \leqslant C \alpha \omega^2 || \rho v_k ||$. 
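The powers of $\alpha$ appearing in these manipulations (one factor $\alpha^{-1/2}$ per derivative absorbed by a factor $X_\alpha$) can be traced back to two elementary resolvent bounds. The following is only a sketch, under the assumption that $X_\alpha = ( 1 - \alpha \partial_x^2 )^{-1}$, the normalization consistent with the exponents used here; it gives the unweighted $L^2$ bounds, the weighted versions with $\eta_A$ or $\rho^{-1}$ requiring in addition the commutator estimates of Lemma 8.

```latex
% Elementary L^2 bounds for X_alpha = (1 - alpha d_x^2)^{-1}, read on the Fourier side:
\widehat{X_\alpha q} (\xi) = \frac{\widehat{q}(\xi)}{1 + \alpha \xi^2} ,
\qquad
|| \partial_x X_\alpha q || \leqslant \Big ( \sup_{\xi \in \mathbb{R}} \frac{|\xi|}{1 + \alpha \xi^2} \Big ) || q || = \frac{1}{2 \sqrt{\alpha}} \, || q || ,
\qquad
|| X_\alpha \partial_x^2 q || \leqslant \Big ( \sup_{\xi \in \mathbb{R}} \frac{\xi^2}{1 + \alpha \xi^2} \Big ) || q || \leqslant \frac{1}{\alpha} \, || q || .
```

The second bound can also be seen without Fourier analysis from the identity $X_\alpha \partial_x^2 = - \alpha^{-1} ( \mathrm{Id} - X_\alpha )$. Combining the two gives, for instance, $|| X_\alpha^2 \partial_x^3 q || \leqslant C \alpha^{-3/2} || q ||$, which is the factor used for the term with $\partial_x^3$ above.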
We obtain the following estimate: $$|| \rho^{-1} Y_\alpha^+ v_1 || \leqslant C \sqrt{\alpha} \, \omega^{3/2} || \rho v_1 ||$$ and a similar estimate holds for $Y_\alpha^- v_2$. These estimates lead to $$\begin{array}{rcl} |J_3| & \leqslant & \displaystyle{C \sum_{k=1}^2 \left ( B || \rho \partial_x v_k || + || \rho v_k || \right ) \sqrt{\alpha} \, \omega^{3/2} || \rho v_k ||} \\ \\ & \leqslant & \displaystyle{C \sqrt{\alpha} \, \omega^{3/2} \sum_{k=1}^2 \left ( B^2 || \rho \partial_x v_k ||^2 + || \rho v_k ||^2 \right ) } \\ \\ & \leqslant & \displaystyle{C \sqrt{\alpha} \, \omega^{3/2} \left ( B^2 || \rho \partial_x v ||^2 + || \rho v ||^2 \right )} \\ \\ & \leqslant & \displaystyle{C \sqrt{\alpha} \, \omega_0^{3/2} \left [ B^2 || \partial_x z ||^2 + \int_{\mathbb{R}} \rho |z|^2 + \frac{B^2}{A^3 \omega_0^{3/2}} \left ( \alpha^{-4} || \eta_A \partial_x u ||^2 + \omega_0^4 || \eta_A u ||^2 \right ) \right ].} \end{array}$$ *(About $K_3$.)* The estimate is quite similar to $J_3$. We use the bounds $\chi_A^2 \zeta_B^2 \leqslant 1$ and $|R_B| \leqslant C \varepsilon_{\omega_0} \leqslant C$, as well as the Cauchy-Schwarz inequality again: $$\begin{array}{rcl} \displaystyle{\left | \int_{\mathbb{R}} ( Y_\alpha^+ v_1) v_1 \chi_A^2 \zeta_B^2 R_B \right |} & \leqslant & \displaystyle{C \int_{\mathbb{R}} | \rho^{-1} Y_\alpha^+ v_1 | \, | \rho v_1 |} \\ \\ & \leqslant & \displaystyle{C || \rho^{-1} Y_\alpha^+ v_1 || \, || \rho v_1 ||} \\ \\ & \leqslant & \displaystyle{C \sqrt{\alpha} \, \omega^{3/2} \varepsilon_\omega || \rho v_1 ||^2} \\ \\ & \leqslant & \displaystyle{C \sqrt{\alpha} \, \omega_0^{3/2} \varepsilon_{3 \omega_0 /2} \left [ || \partial_x z ||^2 + \frac{1}{B^2} \int_{\mathbb{R}} \rho |z|^2 + \frac{B^2}{A^3 \omega_0^{3/2}} \left ( \alpha^{-4} || \eta_A \partial_x u ||^2 + \omega_0^4 || \eta_A u ||^2 \right ) \right ],} \end{array}$$ using the estimates obtained previously.\ \ *(About $J_4$.)* First, we recall from the proof of Proposition 3 that $$\left | 
\text{Re} \left [ h(\phi_\omega + u) - h ( \phi_\omega ) - h'(\phi_\omega) u \right ] \right | \leqslant C \left ( \phi_\omega |u|^2 + |u|^3 \right ).$$ This shows that $|q_1| \leqslant C |u|^2 \leqslant C \epsilon |u|$. Now, to control $q_2$, let us write $$\text{Im} \left [ h(\phi_\omega + u) - \frac{h ( \phi_\omega )}{\phi_\omega} \, u \right ] = |u|^2 u_2 + 2 \phi_\omega u_1 u_2 - u_2 \left ( g(\phi_\omega^2 + |u|^2 + 2 \phi_\omega u_1) - g(\phi_\omega^2) \right ).$$ Here we notice that $$| g(\phi_\omega^2 + |u|^2 + 2 \phi_\omega u_1 ) - g(\phi_\omega^2) | = \left | \int_{\phi_\omega^2}^{\phi_\omega^2 + |u|^2 + 2 \phi_\omega u_1} g'(s) \, \text{d}s \right | \leqslant \left | |u|^2 + 2 \phi_\omega u_1 \right | \leqslant C |u|$$ which gives $|q_2| \leqslant C |u|^2 \leqslant C \epsilon |u|$. Using the definitions of $r_1$ and $r_2$, we find that, for $k \in \{ 1 \, , 2 \}$, $$|| \eta_A X_\alpha^2 r_k || \leqslant C \alpha^{-2} || \eta_A q_k || \leqslant C \alpha^{-2} \epsilon || \eta_A u ||.$$ Hence, using the Cauchy-Schwarz inequality and the upper bounds $|\Psi_{A,B}| \leqslant CB \eta_A^2$ and $|\Psi_{A,B}'| \leqslant C \eta_A^2$, $$\begin{array}{rcl} \displaystyle{\left | \int_{\mathbb{R}} ( 2 \Psi_{A,B} \partial_x v_k + \Psi_{A,B} ' v_k ) X_\alpha^2 r_k \right |} & \leqslant & \displaystyle{C \left ( B || \eta_A \partial_x v_k || \, || \eta_A X_\alpha^2 r_k || + || \eta_A v_k || \, || \eta_A X_\alpha^2 r_k || \right )} \\ \\ & \leqslant & C \alpha^{-2} \epsilon || \eta_A u || \left [ B \left ( \alpha^{-2} || \eta_A \partial_x u || + \omega_0^{5/2} || \rho^2 u || \right ) + \alpha^{-3/2} || \eta_A \partial_x u || + \omega_0^2 || \eta_A u || \right ] \\ \\ & \leqslant & C \alpha^{-2} \epsilon || \eta_A u || B \left ( \alpha^{-2} || \eta_A \partial_x u || + \omega_0^{5/2} || \eta_A u || \right ) \\ \\ & \leqslant & C \alpha^{-2} B \epsilon \left ( || \eta_A u ||^2 + \alpha^{-4} || \eta_A \partial_x u ||^2 \right ) \end{array}$$ where we have used that $|| 
\rho^2 u || \leqslant || \eta_A u ||$ and $B > \omega_0^{-1/2}$. Now, let us control the other term in $J_4$. We write that $$|| \eta_A X_\alpha^2 n_k || \leqslant C \alpha^{-2} || \eta_A m_k || + \left | \frac{\dot{\omega}}{\omega} \right | \, || \eta_A X_\alpha^2 Q_{\pm} u_k ||.$$ Gathering the estimates $| x \eta_A | \leqslant CA$ and [\[orth\]](#orth){reference-type="eqref" reference="orth"}, we see that $$\begin{array}{rcl} || \eta_A m_k || & = & || \dot{\beta} x \eta_A u_k + ( \dot{\gamma} - \omega - \beta^2 ) \eta_A u_k \pm ( \dot{\sigma} - 2 \beta ) \eta_A \partial_x u_{3-k} - \beta ( \dot{\sigma} - 2 \beta ) \eta_A u_k || \\ \\ & \leqslant & C \left ( \omega_0 A ||u_k|| + \sqrt{\omega} || \eta_A u_k || + || \eta_A \partial_x u_{3-k} || + || \eta_A u_k || \right ) \underbrace{|| \rho^2 u ||^2}_{\leqslant \, C \epsilon || \eta_A u ||} \\ & \leqslant & C \sqrt{\omega_0} A \epsilon^2 || \eta_A u ||, \end{array}$$ using [\[orbstab\]](#orbstab){reference-type="eqref" reference="orbstab"} and the fact that $\sqrt{\omega_0} A > 1$. Besides, $$|| \eta_A X_\alpha^2 Q_{\pm} u_k || \leqslant C \left ( \alpha^{-1} \sqrt{\omega_0} || \eta_A \partial_x u_k|| + \omega_0^2 || \eta_A u_k || \right )$$ which leads to $$\begin{array}{rcl} || \eta_A X_\alpha^2 n_k || & \leqslant & C \alpha^{-2} \sqrt{\omega_0} A \epsilon^2 || \eta_A u || + C |\dot{\omega}| \left ( \alpha^{-1} \omega_0^{-1/2} || \eta_A \partial_x u || + \omega_0 || \eta_A u || \right ) \\ \\ & \leqslant & C \alpha^{-2} \sqrt{\omega_0} A \epsilon^2 || \eta_A u || + C \omega_0^{3/2} \epsilon^2 \left ( \alpha^{-1} \omega_0^{-1/2} || \eta_A \partial_x u || + \omega_0 || \eta_A u|| \right ) \\ \\ & \leqslant & C \alpha^{-2} \sqrt{\omega_0} A \epsilon^2 || \eta_A u || + C \alpha^{-1} \omega_0 \epsilon^2 || \eta_A \partial_x u ||. 
\end{array}$$ Hence, using the same arguments as previously, $$\begin{array}{rcl} \displaystyle{\left | \int_{\mathbb{R}} (2 \Psi_{A,B} \partial_x v_k + \Psi_{A,B} ' v_k ) X_\alpha^2 n_k \right |} & \leqslant & C \left ( B || \eta_A \partial_x v_k || + || \eta_A v_k || \right ) || \eta_A X_\alpha^2 n_k || \\ \\ & \leqslant & C \left ( B \alpha^{-2} || \eta_A \partial_x u || + B \omega_0^{5/2} || \eta_A u || \right ) \left ( \alpha^{-2} \sqrt{\omega_0} A \epsilon^2 || \eta_A u || + \alpha^{-1} \omega_0 \epsilon^2 || \eta_A \partial_x u || \right ) \\ \\ & \leqslant & C (AB \sqrt{\omega_0} \alpha^{-4} \epsilon^2 + B \alpha^{-3} \omega_0 \epsilon^2 ) || \eta_A \partial_x u ||^2 + C(AB \alpha^{-2} \omega_0^{3} \epsilon^2 + AB \alpha^{-4} \sqrt{\omega_0} \epsilon^2) || \eta_A u ||^2 \\ \\ & \leqslant & CAB \sqrt{\omega_0} \alpha^{-4} \epsilon^2 \left ( || \eta_A \partial_x u||^2 + || \eta_A u ||^2 \right ) \end{array}$$ after computations. Gathering these estimates we find $$| J_4 | \leqslant C (AB \sqrt{\omega_0} \alpha^{-4} \epsilon^2 + \alpha^{-2} B \epsilon ) || \eta_A u ||^2 + C (AB \sqrt{\omega_0} \alpha^{-4} \epsilon^2 + \alpha^{-6} B \epsilon ) || \eta_A \partial_x u ||^2.$$ *(About $K_4$.)* The estimates we use are the same as for $J_4$ and the integral upper bounds are slightly easier. We recall that $|\chi_A^2 \zeta_B^2 R_B| \leqslant C \varepsilon_{\omega_0} \eta_A^2$. 
We find $$\begin{array}{rcl} \displaystyle{\left | \int_{\mathbb{R}} (X_\alpha^2 n_k) v_k \chi_A^2 \zeta_B^2 R_B \right |} & \leqslant & \displaystyle{C \varepsilon_{\omega_0} || \eta_A v_k || \, || \eta_A X_\alpha^2 n_k ||} \\ \\ & \leqslant & \displaystyle{C \varepsilon_{\omega_0} \left ( \alpha^{-3/2} || \eta_A \partial_x u || + \omega_0^2 || \eta_A u || \right ) \left ( \alpha^{-2} \sqrt{\omega_0} A \epsilon^2 || \eta_A u || + \alpha^{-1} \omega_0 \epsilon^2 || \eta_A \partial_x u || \right )} \\ \\ & \leqslant & \displaystyle{C \varepsilon_{\omega_0} \alpha^{-7/2} \sqrt{\omega_0} A \epsilon^2 \left ( || \eta_A u ||^2 + || \eta_A \partial_x u ||^2 \right )} \end{array}$$ after computations. And on the other hand, $$\begin{array}{rcl} \displaystyle{\left | \int_{\mathbb{R}} (X_\alpha^2 r_k) v_k \chi_A^2 \zeta_B^2 R_B \right |} & \leqslant & C \varepsilon_{\omega_0} || \eta_A v_k || \, || \eta_A X_\alpha^2 r_k || \\ \\ & \leqslant & C \varepsilon_{\omega_0} \left ( \alpha^{-3/2} || \eta_A \partial_x u || + \omega_0^2 || \eta_A u || \right ) \alpha^{-2} \epsilon || \eta_A u || \\ \\ & \leqslant & C \varepsilon_{\omega_0} \alpha^{-7/2} \epsilon \left ( || \eta_A u ||^2 + || \eta_A \partial_x u ||^2 \right ). 
\end{array}$$ This leads to $$|K_4| \leqslant C \varepsilon_{\omega_0} \alpha^{-7/2} \epsilon \left ( 1 + \sqrt{\omega_0} A \epsilon \right ) \left ( || \eta_A u ||^2 + || \eta_A \partial_x u ||^2 \right ).$$ *(About $J_5$.)* We first notice that $$\begin{array}{rcl} \partial_\omega ( a_\omega^+ ) ' &=& \displaystyle{-2 \partial_\omega \phi_\omega ' \phi_\omega g'(\phi_\omega^2) - 4 \phi_\omega ' \phi_\omega^2 \partial_\omega \phi_\omega g''(\phi_\omega^2) + 4 \frac{\partial_\omega \phi_\omega '}{\phi_\omega} g(\phi_\omega^2) + 6 \phi_\omega ' \partial_\omega \phi_\omega g'(\phi_\omega^2)} \\ \\ & & \, \, \, \displaystyle{- \, 4 \frac{\partial_\omega \phi_\omega '}{\phi_\omega^3} G(\phi_\omega^2) + 12 \frac{\phi_\omega ' \partial_\omega \phi_\omega}{\phi_\omega^4} G(\phi_\omega^2) - 12 \frac{\phi_\omega ' \partial_\omega \phi_\omega}{\phi_\omega^2} g'(\phi_\omega^2).} \end{array}$$ We recall that $\partial_\omega \phi_\omega = \omega^{-1} \Lambda_\omega$ and we know estimates on $\Lambda_\omega$. More precisely, we recall that $| \partial_\omega \phi_\omega ' | \leqslant C \rho^4$, $| \phi_\omega | \leqslant C \sqrt{\omega_0}$, $|g'(\phi_\omega^2)| \leqslant \varepsilon_{3 \omega_0/2}$, $| \partial_\omega \phi_\omega | \leqslant \frac{C \rho^4}{\sqrt{\omega_0}}$, $| g''(\phi_\omega^2)| \leqslant \frac{\varepsilon_{3 \omega_0/2}}{\phi_\omega^2}$, $| g(\phi_\omega^2) | \leqslant \varepsilon_{3 \omega_0/2} \phi_\omega^2$, $|G (\phi_\omega^2)| \leqslant \varepsilon_{3 \omega_0/2} \phi_\omega^4$ and $|\phi_\omega '| \leqslant C \omega_0$. This gives $| \partial_\omega ( a_\omega^+ )' | \leqslant C \varepsilon_{3 \omega_0 /2} \sqrt{\omega_0} \rho^4$. 
Thus, integrating this inequality on $[\omega_0 \, , \omega]$, we get $$\left | \frac{\Phi_B}{\zeta_B^2} \left ( (a_\omega^+)' - (a_{\omega_0}^+)' \right ) \right | \leqslant C |x| \varepsilon_{3 \omega_0/2} \sqrt{\omega_0} \rho^4 | \omega - \omega_0 | \leqslant C \varepsilon_{3 \omega_0 /2} |\omega - \omega_0| \rho.$$ The same proof holds for $a_\omega^-$ with a minor difference. Indeed, $\partial_\omega (a_\omega^-)'$ involves $g'''$ (not only $G$, $g$, $g'$ and $g''$) and this derivative is not controlled by $\varepsilon_{\omega_0}$. We thus have to introduce $\widetilde{\varepsilon}_{\omega} := \sup\limits_{|s| \leqslant 3 \omega} | s^2 g'''(s)|$. We cannot be sure that $\varepsilon_\omega \leqslant \widetilde{\varepsilon}_\omega$, since $g''(0)$ is possibly not zero (it possibly does not even exist). With the same arguments as for $a_{\omega}^+$, we find that $$\left | \frac{\Phi_B}{\zeta_B^2} \left ( (a_\omega^-)' - (a_{\omega_0}^-)' \right ) \right | \leqslant C ( \varepsilon_{3 \omega_0 / 2} + \widetilde{\varepsilon}_{3 \omega_0 /2} ) | \omega - \omega_0 | \rho.$$ Using the upper bound $|\omega - \omega_0 | \leqslant \epsilon$, we finally obtain the following estimate: $$|J_5| \leqslant C ( \varepsilon_{3 \omega_0 / 2} + \widetilde{\varepsilon}_{3 \omega_0 /2} ) \epsilon \int_{\mathbb{R}} \rho |z|^2.$$ *(About $K_5$.)* This estimate is similar. The first part is easier. Using the estimate $|R_B| \leqslant C \varepsilon_{\omega_0} \rho$ (which is analogous to the estimate on $R_\infty$ given in the proof of Proposition 2), we have $$| \omega - \omega_0 | \int_{\mathbb{R}} |R_B| \, |z|^2 \leqslant C \varepsilon_{\omega_0} \epsilon \int_{\mathbb{R}} \rho |z|^2.$$ For the second part of $K_5$, similarly to $J_5$ we write that $| \partial_\omega a_\omega^{\pm} | \leqslant C \varepsilon_{3 \omega_0 /2} \rho$ and thus $| a_\omega^\pm - a_{\omega_0}^\pm | \leqslant C \varepsilon_{3 \omega_0 / 2} \epsilon \rho$. 
Then we get $$|K_5| \leqslant C ( \varepsilon_{\omega_0} + \varepsilon_{\omega_0} \varepsilon_{3 \omega_0 /2} ) \epsilon \int_{\mathbb{R}} \rho |z|^2 \leqslant C \varepsilon_{\omega_0} \epsilon \int_{\mathbb{R}} \rho |z|^2.$$ *(Conclusion.)* We first recall from Lemma 7 that $$\int_{\mathbb{R}} P_B |z|^2 \geqslant C \varepsilon_{\omega_0} \left ( \gamma_B \sqrt{\omega_0} \int_{\mathbb{R}} \rho |z|^2 - \frac{\sqrt{\omega_0}}{\gamma_B} || \partial_x z ||^2 \right ).$$ Let us take $B$ large enough (depending on $\omega_0$) such that $\gamma_B \geqslant \frac{1}{2} \int_{\mathbb{R}} \frac{P_\infty}{\varepsilon_{\omega_0}} \geqslant 10 C_2 \varepsilon_{\omega_0} \sqrt{\omega_0}$. This comes from $(H_2)$. Here, recall that $C_2$ is the constant involved in the control of $K_2$. We obtain $$\int_{\mathbb{R}} P_B |z|^2 \geqslant 10 C_2 \varepsilon_{\omega_0}^2 \omega_0 \int_{\mathbb{R}} \rho |z|^2 - \frac{C}{10} || \partial_x z ||^2.$$ First, let us take $\omega_0$ small enough such that $$|K_2| \leqslant \frac{1}{100} || \partial_x z ||^2 + C_2 \omega_0 \varepsilon_{\omega_0}^2 \int_{\mathbb{R}} \rho |z|^2.$$ Note that the control on $K_2$ does not involve $A$, $B$, $\alpha$ or $\epsilon$: it only depends on $\omega_0$. The fact that we have the quantity $\varepsilon_{\omega_0}^2 \omega_0$ in front of $\int_{\mathbb{R}} \rho |z|^2$ is crucial. It matches the analogous term in the inequality above given by Lemma 7.\ \ Now, we take $B$ large enough so that the previous assumption about $\gamma_B$ holds, and that $$|J_1| \leqslant \frac{\varepsilon_{\omega_0}^2 \omega_0}{100} \int_{\mathbb{R}} \rho |z|^2 \, , \, \, \, |K_1| \leqslant \frac{1}{100} \left [ || \partial_x z ||^2 + C_2 \varepsilon_{\omega_0}^2 \omega_0 \int_{\mathbb{R}} \rho |z|^2 + \frac{1}{A^3 \omega_0^{3/2}} \left ( \alpha^{-4} || \eta_A \partial_x u ||^2 + \omega_0^4 || \eta_A u ||^2 \right ) \right ].$$ From now on, $B$ is considered as a constant. 
Now, let us fix $\alpha$ small enough (depending on $\omega_0$ and $B$) such that $$|J_3|, |K_3| \leqslant \frac{1}{100} \left ( || \partial_x z ||^2 + C_2 \omega_0 \varepsilon_{\omega_0}^2 \int_{\mathbb{R}} \rho |z|^2 \right ) + \frac{C}{A^3 \omega_0^{3/2}} \left ( \alpha^{-4} || \eta_A \partial_x u ||^2 + \omega_0^4 || \eta_A u ||^2 \right ).$$ From now on, $\alpha$ is considered as a constant. We get $$|J_2| \leqslant \frac{C}{A} \left ( || \eta_A \partial_x u ||^2 + \frac{\omega_0^4}{A^2} || \eta_A u ||^2 + \omega_0^5 || \rho^2 u ||^2 \right ).$$ Now, $A$ remains to be fixed; the way we choose $A$ will be given a little later. We choose $\epsilon$ small enough (depending on $\omega_0$ and $A$) such that $$|J_4| , |K_4| \leqslant \frac{1}{100A} \left ( || \eta_A \partial_x u ||^2 + \frac{\omega_0^4}{A^2} || \eta_A u ||^2 \right ) \, \, \, \, \, \text{and} \, \, \, \, \, |J_5| , |K_5| \leqslant \frac{C_2 \varepsilon_{\omega_0}^2 \omega_0}{100} \int_{\mathbb{R}} \rho |z|^2.$$ All of this leads to $$\left | \sum_{j=1}^5 (J_j + K_j) \right | \leqslant 2C_2 \varepsilon_{\omega_0}^2 \omega_0 \int_{\mathbb{R}} \rho |z|^2 + \frac{1}{10} || \partial_x z ||^2 + C \left ( \frac{1}{A^3 \omega_0^{3/2}} + \frac{1}{A} \right ) || \eta_A \partial_x u ||^2 + \frac{C \omega_0^{5/2}}{A^3} || \eta_A u ||^2 + \frac{C \omega_0^5}{A} || \rho^2 u ||^2.$$ Now, we get $$\begin{array}{rcl} \dot{\mathcal{J}} + \dot{\mathcal{K}} & \geqslant & \displaystyle{\left ( 2 - \frac{1}{10} - \frac{1}{10} \right ) || \partial_x z ||^2 + C_2 \varepsilon_{\omega_0}^2 \omega_0 \left ( 10-2 \right ) \int_{\mathbb{R}} \rho |z|^2 - C \left ( \frac{1}{A^3 \omega_0^{3/2}} + \frac{1}{A} \right ) || \eta_A \partial_x u ||^2} \\ \\ & & \displaystyle{ \, \, \, \, \, \, \, \, - \, \frac{C \omega_0^{5/2}}{A^3} || \eta_A u ||^2 - \frac{C \omega_0^5}{A} || \rho^2 u ||^2} \\ \\ & \geqslant & \displaystyle{|| \partial_x z ||^2 + C_2 \varepsilon_{\omega_0}^2 \omega_0 \int_{\mathbb{R}} \rho |z|^2 - \frac{C}{A
\sqrt{\omega_0}} || \eta_A \partial_x u ||^2 - \frac{C \omega_0^{5/2}}{A^3} || \eta_A u ||^2 - \frac{C \omega_0^5}{A} || \rho^2 u ||^2,} \end{array}$$ where we have noticed that $\frac{1}{A} + \frac{1}{A^3 \omega_0^{3/2}} \leqslant \frac{C}{A \sqrt{\omega_0}}$. We can assume that $B$ has been chosen large enough so that $\varepsilon_{\omega_0}^2 \omega_0 \geqslant \frac{1}{B^2}$. Lemma 13 then gives $$\begin{array}{rcl} \displaystyle{|| \partial_x z ||^2 + C_2 \varepsilon_{\omega_0}^2 \omega_0 \int_{\mathbb{R}} \rho |z|^2} & \geqslant & \displaystyle{C \left ( || \partial_x z ||^2 + \frac{1}{B^2} \int_{\mathbb{R}} \rho |z|^2 \right )} \\ \\ & \geqslant & \displaystyle{C || \rho v ||^2 - \frac{C}{A^3 \omega_0^{3/2}} || \eta_A \partial_x u ||^2 - \frac{C \omega_0^{5/2}}{A^3} || \eta_A u ||^2} \end{array}.$$ Finally we obtain $$\dot{\mathcal{J}} + \dot{\mathcal{K}} \geqslant C || \rho v ||^2 - \frac{C}{A \sqrt{\omega_0}} || \eta_A \partial_x u ||^2 - \frac{C \omega_0^{5/2}}{A^3} || \eta_A u ||^2 - \frac{C \omega_0^5}{A} || \rho^2 u ||^2.$$ By the definition of $\mathcal{J}$ and the upper bounds $| \Psi_{A,B} | \leqslant C \eta_A^2$ and $| \Psi_{A,B} ' | \leqslant C \eta_A^2$ (recall that $B$ is now a constant), we have, for any $T>0$, $$\begin{array}{rcl} | \mathcal{J} (T) | &=& \displaystyle{\left | \int_{\mathbb{R}} v_1 ( 2 \Psi_{A,B} \partial_x v_2 + \Psi_{A,B} ' v_2 ) \right | \, \, \leqslant \, \, C \left ( || \eta_A v(T)||^2 + || \eta_A \partial_x v(T)||^2 \right )} \\ \\ & \leqslant & \displaystyle{C \left ( || \eta_A u(T)||^2 + || \eta_A \partial_x u(T)||^2 \right ) \, \, \leqslant \, \, ||u(T)||_{H^1}^2 \, \, \leqslant \, \, C \epsilon^2.} \end{array}$$ Writing that $|z_k| \leqslant |v_k|$ and $|R_B| \leqslant C \rho^2 \leqslant C \eta_A^2$, the same argument gives $| \mathcal{K} (T) | \leqslant C \epsilon^2$ too. 
Therefore, $$\int_0^T ( \dot{\mathcal{J}} + \dot{\mathcal{K}} ) \, \text{d}t \leqslant | \mathcal{J} (T) | + | \mathcal{K} (T) | + | \mathcal{J} (0) | + | \mathcal{K} (0) | \leqslant C \epsilon^2.$$ Using the inequality on $\dot{\mathcal{J}} + \dot{\mathcal{K}}$ and integrating it on $[0 \, , T]$, we finally obtain: $$\int_0^T || \rho v ||^2 \, \text{d}t \leqslant C \epsilon^2 + C \int_0^T \left ( \frac{1}{A \sqrt{\omega_0}} || \eta_A \partial_x u ||^2 + \frac{\omega_0^{5/2}}{A^3} || \eta_A u ||^2 + \frac{\omega_0^5}{A} || \rho^2 u ||^2 \right ) \, \text{d}t.$$ This is the announced result. ## Coercivity property and conclusion Now we will need the following coercivity property. **Proposition 5.** Assume $(H_1)$ and $(H_2)$. We have $$\omega_0^2 || \rho^2 u || \leqslant C || \rho v ||.$$ *Proof.* We follow the exact same proof as in [@Ma1]. We need two lemmas to obtain the desired result. First, if $q \in L^2 ( \mathbb{R})$ satisfies $\langle q \, , \phi_\omega \rangle = \langle q \, , x \phi_\omega \rangle = 0$, then $|| \rho^2 q || \leqslant C \omega_0^{-2} || \rho (X_\alpha^2 S^2 L_+ q) ||$. Following the proof in [@Ma1], we recall that $| \langle \phi_\omega \, , \Lambda_\omega \rangle | \geqslant C \sqrt{\omega}$. We only have to check that we can write $$\begin{array}{rl} & \displaystyle{\frac{q''}{\phi_\omega} = \left ( \frac{q}{\phi_\omega} \right ) '' + (f_3 q)' + f_2 q} \\ \\ \text{and} & \displaystyle{\frac{q''''}{\phi_\omega} = \left ( \frac{q}{\phi_\omega} \right ) '''' + (f_3 q)''' + (f_2 q)'' + (f_1 q)' + f_0 q} \end{array}$$ where $f_j$ are $\mathscr{C}^\infty$ functions (whose expressions change from line to line) which satisfy $|f_j(x)| \leqslant C \omega^{-1/2} e^{\sqrt{\omega} |x|}$. This is easily checked thanks to the lower bound $\phi_\omega (x) \geqslant c \sqrt{\omega} \, e^{- \sqrt{\omega} |x|}$. For example, in the first line, $f_2= -2 \frac{\omega}{\phi_\omega} + \phi_\omega - 2 \frac{G(\phi_\omega^2)}{\phi_\omega^3}$.
The rest of the proof is entirely identical to the proof of Lemma 11 in [@Ma1]. Note that we use the expression and the properties of $I_+$ here.\ \ The second lemma we need is the following one: if $q \in L^2 ( \mathbb{R})$ satisfies $\langle q \, , \Lambda_\omega \rangle = \langle q \, , \phi_\omega ' \rangle = 0$, then $|| \rho^2 q || \leqslant C \omega_0^{-2} || \rho (X_\alpha^2 M_- S^2)q ||$. Here the proof is entirely identical to the proof of Lemma 12 in [@Ma1]. There is only an identity at the end of the proof which is different: in our case we have $\phi_\omega '' \phi_\omega - 2 ( \phi_\omega ')^2 = - \omega \phi_\omega^2 + \phi_\omega^2 g(\phi_\omega^2) - 2 G(\phi_\omega^2)$. The rest of the argument is unchanged. Note that we use the expression and the properties of $J_-$ here; that is why hypothesis $(H_2)$ is needed.\ \ Now we can conclude the proof of Theorem 2. Using Propositions 3, 4 and 5, we obtain $$\begin{array}{rcl} \displaystyle{\int_0^T || \rho^2 u ||^2 \, \text{d}t} & \leqslant & \displaystyle{C \omega_0^{-4} \int_0^T || \rho v ||^2 \, \text{d}t} \\ \\ & \leqslant & \displaystyle{C \omega_0^{-4} \epsilon^2 + C \int_0^T \left ( \frac{\omega_0^{-9/2}}{A} || \eta_A \partial_x u ||^2 + \frac{\omega_0^{-3/2}}{A^3} || \eta_A u ||^2 + \frac{\omega_0}{A} || \rho^2 u ||^2 \right ) \, \text{d}t} \\ \\ & \leqslant & \displaystyle{C \omega_0^{-4} \epsilon^2 + \frac{C \omega_0^{-9/2}}{A} \int_0^T \left ( || \eta_A \partial_x u ||^2 + \frac{1}{A^2} || \eta_A u ||^2 \right ) \, \text{d}t + \frac{C \omega_0}{A} \int_0^T || \rho^2 u ||^2 \, \text{d}t} \\ \\ & \leqslant & \displaystyle{C \omega_0^{-4} \epsilon^2 + \frac{C \omega_0^{-9/2}}{A} \left ( C \epsilon + C \omega_0 \int_0^T || \rho^2 u ||^2 \, \text{d}t \right ) + \frac{C \omega_0}{A} \int_0^T || \rho^2 u ||^2 \, \text{d}t.} \end{array}$$ Since $\omega_0^{-9/2} / A \leqslant \omega_0^{-4}$ and $\omega_0 < \omega_0^{-7/2}$, we have $$\int_0^T || \rho^2 u ||^2 \, \text{d}t
\leqslant C \omega_0^{-4} \epsilon^2 + \frac{C \omega_0^{-7/2}}{A} \int_0^T || \rho^2 u ||^2 \, \text{d}t.$$ Now we fix $A$. We choose $A$ (depending on $\omega_0$, $B$ and $\alpha$) such that $A>B> \omega_0^{-1/2}$ and $\frac{C \omega_0^{-7/2}}{A} \leqslant \frac{1}{100}$. This gives $$\int_0^T || \rho^2 u ||^2 \, \text{d}t \leqslant C \omega_0^{-4} \epsilon^2.$$ Using the first virial property, letting $T \to + \infty$ and recalling that $A$ is now a constant, we obtain $$\int_0^{+ \infty} \left ( || \eta_A \partial_x u||^2 + || \eta_A u ||^2 + \omega_0 || \rho^2 u ||^2 \right ) \leqslant C \epsilon + C \omega_0^{-3} \epsilon^2 \leqslant C \omega_0^{-3} \epsilon^2.$$ Now, we recall the system [\[Su\]](#Su){reference-type="eqref" reference="Su"} verified by $u$ and we integrate by parts, noticing that $u_2 \partial_x^2 u_1 - u_1 \partial_x^2 u_2 = \partial_x (u_2 \partial_x u_1 - u_1 \partial_x u_2)$: $$\begin{array}{rcl} \displaystyle{\frac{\text{d}}{\text{d}t} \left ( \frac{|| \rho^2 u ||^2}{2} \right )} & = & \displaystyle{\int_{\mathbb{R}} \rho^4 ( u_1 \partial_t u_1 + u_2 \partial_t u_2 )} \\ \\ & = & \displaystyle{\int_{\mathbb{R}} ( \rho^4 )' (u_1 \partial_x u_2 - u_2 \partial_x u_1 ) + \int_{\mathbb{R}} 2 \rho^4 u_1 u_2 \phi_\omega^2 ( 1 - g'(\phi_\omega^2))} \\ \\ & & \displaystyle{\, \, + \, \int_{\mathbb{R}} \rho^4 \left ( ( \theta_2 + m_2 - q_2) u_1 - (\theta_1 + m_1 - q_1) u_2 \right ).} \end{array}$$ We write that $| \rho ' | \leqslant C \rho$, so $| (\rho^4)' | \leqslant C \rho^4$. 
Hence, $$\left | \int_{\mathbb{R}} ( \rho^4 )' (u_1 \partial_x u_2 - u_2 \partial_x u_1 ) \right | \leqslant C \int_{\mathbb{R}} \rho^4 \left ( | \partial_x u |^2 + |u|^2 \right ).$$ Another easy bound is the following one (using $| \phi_\omega^2 - \phi_\omega^2 g'(\phi_\omega^2)| \leqslant C$): $$\left | \int_{\mathbb{R}} 2 \rho^4 u_1 u_2 \phi_\omega^2 ( 1 - g'(\phi_\omega^2)) \right | \leqslant C || \rho^2 u ||^2.$$ Recalling that $|q_1|,|q_2| \leqslant C \epsilon |u| \leqslant C |u|$, we have $$\left | \int_{\mathbb{R}} \rho^4 ( -q_2 u_1 + q_1u_2 ) \right | \leqslant C || \rho^2 u ||^2.$$ Now, using [\[orth\]](#orth){reference-type="eqref" reference="orth"} and $|x \phi_\omega |, |\phi_\omega|, |\Lambda_\omega| , |\phi_\omega ' | \leqslant C$, we find $$| \theta_1 | , | \theta_2 | \leqslant C || \rho^2 u ||^2.$$ On the other hand, $$|m_1| \leqslant | \dot{\beta} | \, |x u_1| + | \dot{\gamma} - \omega - \beta^2 | \, |u_1| + | \dot{\sigma} - 2 \beta | \, | \partial_x u_2 | + |\beta| \, | \dot{\sigma} - 2 \beta | \, |u_1| \leqslant C || \rho^2 u ||^2 ( 1 + |x| )$$ and the same estimate holds for $m_2$. Since $\int_{\mathbb{R}} |x| \rho^4 < + \infty$, we finally obtain that: $$\left | \frac{\text{d}}{\text{d}t} || \rho^2 u ||^2 \right | \leqslant C \left ( || \rho^2 \partial_x u ||^2 + || \rho^2 u ||^2 \right ).$$ We recall that $\int_0^{+ \infty} || \rho^2 u ||^2 \, \text{d}t \leqslant C \omega_0^{-4} \epsilon^2 < \infty$; therefore there exists a sequence $t_n \to + \infty$ such that $$|| \rho^2 u(t_n) || \, \underset{n \to + \infty}{\longrightarrow} \, 0.$$ Now let us consider $t>0$ and $n$ such that $t_n > t$. 
We integrate the previous inequality on $[t \, , t_n]$, which gives $$|| \rho^2 u(t) ||^2 \leqslant || \rho^2 u(t_n) ||^2 + C \int_t^{t_n} \left ( || \rho^2 \partial_x u ||^2 + || \rho^2 u ||^2 \right ) \, \text{d} \tau.$$ Passing to the limit $n \to + \infty$, we get $$|| \rho^2 u (t)||^2 \leqslant C \int_t^{+ \infty} \left ( || \rho^2 \partial_x u ||^2 + || \rho^2 u ||^2 \right ) \, \text{d} \tau \, \underset{t \to + \infty}{\longrightarrow} \, 0.$$ The previous integral term exists (and converges to $0$ as $t \to + \infty$) because $$\int_0^{+ \infty} \left ( || \rho^2 \partial_x u ||^2 + || \rho^2 u ||^2 \right ) \leqslant \int_0^{+ \infty} \left ( || \eta_A \partial_x u ||^2 + || \eta_A u ||^2 \right ) < \infty.$$ Hence we have shown that $$|| \rho^2 u(t) || \, \underset{t \to + \infty}{\longrightarrow} \, 0.$$ Now, let us take $x,y \in \mathbb{R}$. Using the Cauchy-Schwarz inequality and the basic inequality $| (\rho^2)'| \leqslant C \rho^2$, we write that $$\begin{array}{rcl} \rho^2 (x) | u(t \, , x) |^2 & = & \displaystyle{\rho^2 (y) | u(t \, , y) |^2 + \int_x^y \left ( 2 \, \text{Re} \left ( \overline{u(t)} \, \partial_x u(t) \right ) \rho^2 + |u(t)|^2 ( \rho^2 )' \right )} \\ \\ & \leqslant & \displaystyle{\rho^2 (y) | u(t \, , y)|^2 + C ||u(t)||_{H^1 ( \mathbb{R})} || \rho^2 u(t) ||.} \end{array}$$ We integrate for $y \in [0 \, , 1]$ and use the Cauchy-Schwarz inequality again, as well as [\[orbstab\]](#orbstab){reference-type="eqref" reference="orbstab"}: $$\rho^2 (x) | u(t \, , x) |^2 \leqslant \int_{\mathbb{R}} \rho^2 |u(t)|^2 + C ||u(t)||_{H^1 ( \mathbb{R})} || \rho^2 u (t) || \leqslant C || u(t)||_{H^1 ( \mathbb{R})} || \rho^2 u(t) || \leqslant C \epsilon || \rho^2 u(t) ||.$$ Hence, $$\sup_{x \in \mathbb{R}} \rho^2 (x) | u(t \, , x) |^2 \leqslant C \epsilon || \rho^2 u(t) || \, \underset{t \to + \infty}{\longrightarrow} \, 0.$$ This ensures that, for any compact $I \subset \mathbb{R}$, $$\sup_{x \in I} | u(t \, , x)|^2 \leqslant
\frac{1}{\displaystyle{\min_I} (\rho^2)} \, \sup_{x \in \mathbb{R}} \rho^2 (x) |u(t \, , x)|^2 \, \underset{t \to + \infty}{\longrightarrow} \, 0.$$ Now, we recall from [\[orth\]](#orth){reference-type="eqref" reference="orth"} that $| \dot{\beta} | + | \dot{\omega} | \leqslant C || \rho^2 u ||^2$ thus $$\int_0^{+ \infty} | \dot{\beta} | \, \text{d}t + \int_0^{+ \infty} | \dot{\omega} | \, \text{d}t \leqslant C \int_0^{+ \infty} || \rho^2 u ||^2 \, \text{d}t < \infty,$$ which shows that $\omega (t)$ and $\beta (t)$ have finite limits when $t \to + \infty$ (namely respectively $\omega_+$ and $\beta_+$). Letting $t \to + \infty$ in [\[orbstab\]](#orbstab){reference-type="eqref" reference="orbstab"} we find that $| \beta_+ | + | \omega_+ - \omega_0 | \leqslant \epsilon$. Finally, to conclude we write that $$| \psi (t \, , x + \sigma (t)) - e^{i \gamma (t)} e^{i \beta_+ x} \phi_{\omega_+} (x) | \leqslant | e^{i \beta (t) x} \phi_{\omega (t)} (x) - e^{i \beta_+ x} \phi_{\omega_+} (x) | + | u(t \, , x) |.$$ First, $$\left | e^{i \beta_+ x} \phi_{\omega (t)} (x) - e^{i \beta_+ x} \phi_{\omega_+} (x) \right | = | \phi_{\omega (t)} (x) - \phi_{\omega_+} (x) | = \left | \int_{\omega_+}^{\omega (t)} \partial_{\widetilde{\omega}} \phi_{\widetilde{\omega}} (x) \, \text{d} \widetilde{\omega} \right | \leqslant \frac{C | \omega (t) - \omega_+ |}{\sqrt{\omega_0}}.$$ This shows that $$\sup_{x \in \mathbb{R}} \left | e^{i \beta_+ x} \phi_{\omega (t)} (x) - e^{i \beta_+ x} \phi_{\omega_+} (x) \right | \, \underset{t \to + \infty}{\longrightarrow} \, 0.$$ And on the other hand, $$\left | e^{i \beta_+ x} \phi_{\omega (t)} (x) - e^{i \beta(t) x} \phi_{\omega (t)} (x) \right | \leqslant \left | e^{i \beta_+ x} - e^{i \beta (t) x} \right | = 2 \left | \sin \left ( \frac{\beta_+ - \beta (t)}{2} \, x \right ) \right |$$ which shows that, for any compact $I \subset \mathbb{R}$, $$\sup_{x \in I} \left | e^{i \beta_+ x} \phi_{\omega (t)} (x) - e^{i \beta(t) x} \phi_{\omega (t)} (x) \right | 
\leqslant \sup_{x \in I} 2 \left | \sin \left ( \frac{\beta_+ - \beta (t)}{2} \, x \right ) \right | \, \underset{t \to + \infty}{\longrightarrow} \, 0.$$ Gathering those two estimates and the fact that $\displaystyle{\sup_{x \in \mathbb{R}} |u(t \, , x) | \, \underset{t \to + \infty}{\longrightarrow} \, 0}$, we finally obtain that $$\sup_{x \in \mathbb{R}} | \psi (t \, , x + \sigma (t)) - e^{i \gamma (t)} e^{i \beta_+ x} \phi_{\omega_+} (x) | \, \underset{t \to + \infty}{\longrightarrow} \, 0,$$ which is the theorem we sought to establish.

H. Berestycki, P.-L. Lions, Nonlinear scalar field equations. I. Existence of a ground state, *Arch. Rational Mech. Anal.* **82** (1983), 313-345.
T. Cazenave, Semilinear Schrödinger equations, *Courant Lecture Notes in Mathematics* **10**. Providence, RI: American Mathematical Society (AMS); New York, NY: Courant Institute of Mathematical Sciences (2003).
T. Cazenave, P.-L. Lions, Orbital stability of standing waves for some nonlinear Schrödinger equations, *Comm. Math. Phys.* **85** (1982), 549-561.
S.-M. Chang, S. Gustafson, K. Nakanishi, T.-P. Tsai, Spectra of linearized operators for NLS solitary waves, *SIAM J. Math. Anal.* **39** (2007/2008), 1070-1111.
M. Coles, S. Gustafson, A degenerate edge bifurcation in the 1D linearized non-linear Schrödinger equation, *Discrete Contin. Dyn. Syst.* **36** (2016), 2991-3009.
C. Collot, P. Germain, Asymptotic stability of solitary waves for one dimensional nonlinear Schrödinger equations, *preprint* `arXiv:2306.03668` (2023).
S. Cuccagna, M. Maeda, A survey on asymptotic stability of ground states of nonlinear Schrödinger equations II, *Discrete Contin. Dyn. Syst., Series S* **14** (2021), 1693-1716.
S. Cuccagna, M. Maeda, On selection of standing wave at small energy in the 1D cubic Schrödinger equation with a trapping potential, *Commun. Math. Phys.* **396**, No. 3 (2022), 1135-1186.
M. Grillakis, J. Shatah, W.A. Strauss, Stability theory of solitary waves in the presence of symmetry, I, *J. Funct. Anal.* **74** (1987), 160-197.
P. Germain, F. Pusateri, Quadratic Klein-Gordon equations with a potential in one dimension, *Forum Math. Pi* **10** (link to arXiv version: `https://arxiv.org/abs/2006.15688`), Paper No. e17 (2022), 148-149.
I.D. Iliev, K.P. Kirchev, Stability and instability of solitary waves for one-dimensional singular Schrödinger equations, *Differential and Integral Equations* **6** (1993), 685-703.
Y.S. Kivshar, B.A. Malomed, Dynamics of solitons in nearly integrable systems, *Reviews of Modern Physics* **61**, No. 4 (1989).
M. Kowalczyk, Y. Martel, C. Muñoz, Soliton dynamics for the 1D NLKG equation with symmetry and in the absence of internal modes, *J. Eur. Math. Soc.* (2021).
M. Kowalczyk, Y. Martel, C. Muñoz, On asymptotic stability of nonlinear waves, *Séminaire Laurent Schwartz - EDP et applications* (2016-2017), Exp. No. 18, 27 pp.
M. Kowalczyk, Y. Martel, C. Muñoz, H. Van Den Bosch, A sufficient condition for asymptotic stability of kinks in general $(1+1)$-scalar field models, *Ann. PDE* **7** (2021), No. 1, Paper No. 10, 98 pp.
Y. Martel, Asymptotic stability of solitary waves for the 1D cubic-quintic Schrödinger equation with no internal mode, *Prob. Math. Phys.* **3** (2022), 839-867.
D.E. Pelinovsky, Y.S. Kivshar, V.V. Afanasjev, Internal modes of envelope solitons, *Phys. D* **116** (1998), 121-142.
B. Simon, The bound state of weakly coupled Schrödinger operators in one and two dimensions, *Annals of Physics* **97** (1976), 279-288.
T. Tao, Nonlinear dispersive equations. Local and global analysis, *CBMS Regional Conference Series in Mathematics* **106**. Providence, RI: American Mathematical Society (AMS) (2006).
M.I. Weinstein, Lyapunov stability of ground states of nonlinear dispersive evolution equations, *Comm. Pure Appl. Math.* **29** (1986), 51-68.
M.I. Weinstein, Modulational stability of ground states of nonlinear Schrödinger equations, *SIAM J. Math. Anal.* **16** (1985), No. 3, 472-491.
--- abstract: | We study two coupled systems, one playing the role of the driver system and the other one of the driven system. The driver system is a time-delayed oscillator, and the driven or response system has a negligible delay. Since the driver system plays the role of the only external forcing of the driven system, we investigate its influence on the response system amplitude, frequency and the conditions for which it triggers a resonance in the response system output. It turns out that, in some ranges of the coupling value, a stronger coupling does not mean stronger synchronization, due to the onset of a resonance. Moreover, coupling means an interchange of information between the driver and the driven system. Thus, a built-in delay should be taken into account. Therefore, we study whether a delayed nonlinear oscillator can pass along its delay to the entire coupled system and, as a consequence, model the lag in the interchange of information between the two coupled systems. author: - Mattia Coccolo - Miguel A.F. Sanjuán title: Nonlinear delayed forcing drives a non-delayed Duffing oscillator --- # Introduction Over the last few years, an important research activity has been devoted to the dynamics of coupled and driven nonlinear oscillators. These systems exhibit complex and rich behaviors due to the interplay of nonlinearity, coupling, and external forcing. The results are of interest for several scientific disciplines such as biomedical sciences [@jiruska; @mormann; @Rulkov], where coupled oscillators are prevalent in biological systems, such as neurons. Studying their dynamics helps us understand phenomena like neuronal synchronization, leading to advancements in medical treatments and diagnostics. In some cases, coupled oscillators can synchronize their motion, where their frequencies and phases align [@jensen; @hramov].
To emphasize the role of the previous ideas, we can mention that coupled chaotic oscillators can be employed to enhance communication security [@Koronovskii; @Naderi]. On the other hand, they are crucial in understanding and controlling vibrations in mechanical systems [@defoort]. Furthermore, they have applications in networked systems, ranging from improving the performance of communication networks to understanding the behavior of interconnected systems [@delellis; @zhang]. Among the fields of interest, we can also mention electronics [@Yao] and mechanical engineering [@sujith]. Coupled and driven systems can be modeled as dynamical systems in which one of their parameters is the dynamical variable that comes from another dynamical system, through a coupling mechanism [@Pecora1; @Pecora2; @Pecora3]. Usually, we refer to the source of the driving signal as *the driver system*, and to the driven one as *the response system*. Consequently, the driver system sends a signal to the response system, altering its behavior according to the received input. The synchronization of the dynamics of the response system with respect to the driver system [@Boccaletti] constitutes a relevant effect observed when an oscillator is driven by another oscillator. Two main cases can be distinguished. When the two oscillators are identical or nearly identical, identical synchronization [@Pecora4] may be observed. However, when they are different, generalized synchronization [@Rulkov:1995; @kocarev:1996] is expected. The driver signal can be either periodic or aperiodic [@Ding]. Among the various coupling mechanisms, such as the replacement method or the subsystem decomposition [@Pecora1; @Pecora2; @Pecora3], we have chosen to use *the continuous control* that was discussed in [@Ding; @Kapitaniak].
The implementation of the coupling mechanism is carried out by introducing in the response system a square matrix, whose elements are constant, multiplied by the vector of the difference between the dynamical variable of the driver system and the dynamical variable of the response system. Nevertheless, the study of the synchronization is not the main goal of this article. In fact, we study the case in which a delayed nonlinear oscillator is the only driver, through the coupling mechanism, of another nonlinear oscillator without delay. Therefore, we analyze the driver system as the only external forcing acting on the response system. We have chosen the continuous control [@Ding; @Kapitaniak] as a coupling mechanism because we think that it better models the implementation of an external forcing in the system. As a matter of fact, the coupling matrix becomes constant, playing the role of the forcing amplitude, and the time delay determines the frequency of the forcing. In the other above-mentioned methods, one or more variables of the driver system are substituted directly into the response system, without the possibility of changing the strength of the coupling. This means that there is nothing playing the role of the forcing amplitude. Thus, with that implementation we investigate the typical effects of an external forcing, here affected by delay, on the oscillation amplitude and frequency of a given dynamical system. Moreover, at the right frequency and for the right amplitude, the forcing triggers a resonance. Some applications of implementing delay in an external forcing or control are discussed in [@Gu; @Sayed1; @Sayed2]. Although the synchronization of the two systems is not the main goal of this article, a subsidiary objective can be considered as a consequence of it. In fact, through the coupling mechanism the driver system transfers some of its features to the response system, and here we focus on the delay transmission.
We want to emphasize that the synchronization achieved here cannot be identical because the space dimensions of the two systems are different, being one infinite dimensional for the delayed oscillator and the other one finite dimensional. There is a reason to study the conditions of the delay transmission. The coupling, as we wrote before, is an interchange of information from the driver system to the response system, and the speed of this interchange is finite, so a built-in delay needs to be considered. Therefore, due to this intrinsic delay, the response system is affected, showing delay-induced behaviors. Hence, we have decided to study the optimal parameter values that model this delay in the driving signal and transfer it from the driver system oscillator to the entire coupled system. The result is that, for coupling values that do not trigger the resonance, the response system exhibits a delay-induced behavior similar to that of the driver system, without being a perfect copy. Also, the coupling constant can be used as a control parameter to determine how much delay-induced behavior we want the response system output to show. The organization of the paper is as follows. In Sec. [2](#s:model){reference-type="ref" reference="s:model"}, we define the model that we have used. We identify the role of the coupling constant in the dynamics of the response system in Sec. [3](#s:CC){reference-type="ref" reference="s:CC"}. We discuss in Sec. [4](#s:CC_tau){reference-type="ref" reference="s:CC_tau"} the influence of the coupling constant and the driver system delay on the dynamics of the response system. In Sec. [5](#s:tau){reference-type="ref" reference="s:tau"}, we analyze some particular values of the coupling constant as a function of the driver system delay. Finally, some concluding remarks appear in Sec. [6](#s:conclucion){reference-type="ref" reference="s:conclucion"}.
# The model and the continuous control {#s:model} The unidirectional coupling can be summarized in a simple way. We can define an autonomous nonlinear dynamical system as the driver system and its dynamical state given by a vector $\mathbf{x_1}\in\mathbb{R}^n$ of $n$ scalar variables. The system dynamics is governed by a set of $n$ nonlinear differential equations $\mathbf{\dot{x}_1}=\mathbf{F}(\mathbf{x_1})$. Then, another nonlinear dynamical system is considered as the response system, whose dynamical state is given by a vector $\mathbf{x_2}\in\mathbb{R}^n$. The differential equations of this second system are $\mathbf{\dot{x}_2}=\mathbf{G}(\mathbf{x_2})$. When the unidirectional drive is established, the response system becomes $\mathbf{\dot{x}_2}=\mathbf{G}(\mathbf{x_1},\mathbf{x_2})$. The continuous control scheme provides a simple form of unidirectional coupling: $$\mathbf{G}(\mathbf{x_1},\mathbf{x_2})=\mathbf{G}(\mathbf{x_2})+\mathbf{C}\cdot(\mathbf{x_1}-\mathbf{x_2}),$$ where $\mathbf{C}$ is a square matrix of dimension $n$ whose elements are constants. This matrix is multiplied by the vector of differences $(\mathbf{x_1}-\mathbf{x_2})$. The numerical values of the constants inside $\mathbf{C}$ measure the strength of the coupling for each forcing signal, which may be constructed from one or more of the variables of the driver. Here, we have decided to study the output of a Duffing oscillator, as the response system, when it is driven by a time-delayed Duffing oscillator, as the driver system, following: $$\begin{aligned} \label{eq:1} Driver&\rightarrow \quad \frac{d^2x_1}{dt^2}+\mu\frac{dx_1}{dt}+\gamma x_1(t-\tau)+\alpha x_1(1-x_1^2)=0\\ Response&\rightarrow \quad \frac{d^2x_2}{dt^2}+\mu\frac{dx_2}{dt}+\alpha x_2(1-x_2^2)=C(x_1-x_2), \end{aligned}$$ where we have fixed the parameters $\mu=0.01$, $\alpha=-1$ and $\gamma=-0.5$.
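As a concrete illustration, the coupled pair above can be integrated with a simple fixed-step scheme. The sketch below is not the Matlab implementation used for the figures of this paper; it is a minimal Python approximation (assuming numpy) in which the delayed value $x_1(t-\tau)$ is frozen over each RK4 step, with the history $(1,1)$ and initial condition $(0.5,0.5)$ used in the text, and with $C=3$ and $\tau=1$ as example choices:

```python
import numpy as np

def simulate(C, tau=1.0, mu=0.01, alpha=-1.0, gamma=-0.5, dt=0.01, T=400.0):
    """Fixed-step RK4 for the coupled driver/response Duffing pair.
    The delayed value x1(t - tau) is held constant over each step,
    an O(dt) approximation of the delay-differential system."""
    n = int(round(T / dt))
    d = int(round(tau / dt))                 # delay measured in steps
    x1 = np.empty(n + 1)
    x2 = np.empty(n + 1)
    s = np.array([1.0, 1.0, 0.5, 0.5])       # state (x1, v1, x2, v2)
    x1[0], x2[0] = s[0], s[2]
    for k in range(n):
        x1d = x1[k - d] if k >= d else 1.0   # constant history for t < 0
        def f(y):
            a1, b1, a2, b2 = y
            return np.array([
                b1,
                -mu * b1 - gamma * x1d - alpha * a1 * (1.0 - a1**2),
                b2,
                -mu * b2 - alpha * a2 * (1.0 - a2**2) + C * (a1 - a2)])
        k1 = f(s)
        k2 = f(s + 0.5 * dt * k1)
        k3 = f(s + 0.5 * dt * k2)
        k4 = f(s + dt * k3)
        s = s + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
        x1[k + 1], x2[k + 1] = s[0], s[2]
    return x1, x2

x1, x2 = simulate(C=3.0)                     # strong coupling at tau = 1
tail = slice(2 * (len(x1) // 3), None)       # asymptotic part of the series
print(np.mean(np.abs(x1[tail] - x2[tail])))  # small: x2 locks to the driver
```

With these example values the response locks into the driver's well and the tail-averaged distance is small, of the order reported later in the text for strong coupling.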
The parameter $C$ is the coupling constant, which is the only nonzero element of the continuous control scheme coupling matrix [@Ding; @Kapitaniak]. It measures the strength of the coupling for the forcing signal and plays the role of the amplitude of the external forcing, i.e., of the time-delayed Duffing oscillator. The dissipation $\mu$ is kept small in order to better appreciate the effects of the variation of the driver system delay $\tau$ and of the coupling constant $C$ on the dynamics of the response system. The history functions of the driver system are $u_0=v_0=1$, and the initial conditions of the response system are $x_0=y_0=0.5$. We expect that our conclusions are of general validity and not specific to the considered boundary conditions. The potentials $$\begin{aligned} \label{eq:Pot} Driver&\rightarrow \quad \frac{\alpha x^2}{2}-\frac{\alpha x^4}{4}+\frac{\gamma x^2}{2}\\ Response&\rightarrow \quad \frac{\alpha x^2}{2}-\frac{\alpha x^4}{4} \end{aligned}$$ and the fixed points of both systems are shown in Fig. [1](#fig:1){reference-type="ref" reference="fig:1"}. The unstable fixed point $x_0=0$ is the same for the two potentials, while the stable fixed points of the driver system are $$\label{eq:2} x^{DS}_{*}=\pm\sqrt{\frac{\alpha+\gamma}{\alpha}}=\pm1.225,$$ and the stable fixed points of the response system are $$\label{eq:3} x^{RS}_{*}=\pm\sqrt{\frac{\alpha}{\alpha}}=\pm1.$$ Moreover, following [@Coccolo], we perform the linear stability analysis of the fixed points. The characteristic equation of the linearized system is $$\label{eq:char} \lambda^2+\mu\lambda+\alpha(1-3(x^{DS}_*)^2)+\gamma e^{-\lambda\tau}=0.$$ Then, we take $\lambda=\rho+i\omega$ as the eigenvalue associated with the equilibrium points. The critical stability curve can be found by fixing $\rho=0$.
Hence, we substitute $\lambda=i\omega$ in the last equation and separate the real and imaginary parts, obtaining the equations $$\begin{aligned} &\omega^2-\alpha(1-3(x^{DS}_*)^2)=\gamma\cos{\omega\tau}\\ &\mu\omega=\gamma\sin{\omega\tau}.\end{aligned}$$ After squaring and adding both equations we obtain $$\label{eq:sqare} (\omega^2-\alpha(1-3(x^{DS}_*)^2))^2+(\mu\omega)^2=\gamma^2.$$ Then, substituting the parameter values $\alpha=-1,\gamma=-0.5,\mu=0.01$, we can find four solutions, among which one is $\omega=2.0004$, giving $\tau=1.5505$ as the solution of the equation $$\label{eq:tau} \tau=\frac{\arccos{\left((\omega^2-\alpha(1-3(x^{DS}_*)^2))/\gamma\right)}}{\omega}.$$ The $\tau$ value just computed is where the fixed points lose stability and is shown as the red asterisk in Fig. [2](#fig:2){reference-type="ref" reference="fig:2"}(a). Then, as already reported in [@Coccolo; @Cantisan], the unforced time-delayed Duffing oscillator undergoes various bifurcations as $\tau$ changes. We show such behaviors in Figs. [2](#fig:2){reference-type="ref" reference="fig:2"}(a) and (b), where we plot the oscillation amplitude and a diagram showing the maxima and minima of the oscillations, respectively. This diagram has been plotted by representing on the figure the maxima and minima of the last $5$ periods of the oscillations for each $\tau$ value. Four regions are discernible in the figures. The first one (**I**), for $\tau<1.53$, in which the oscillations converge to the fixed point. The second one (**II**), $1.53<\tau<2.35$, where the oscillations are sustained and confined to one of the wells. The third one (**III**), $2.35<\tau<3.05$, where the amplitude has jumped to a value bigger than the width of the well, which means that the trajectories move from one well to the other, and they are also aperiodic. The last one (**IV**), for $\tau>3.05$, where the trajectories are no longer confined to either of the wells and the oscillations are periodic.
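As a quick numerical cross-check of the threshold separating regions I and II, the squared equation above can be solved as a quadratic in $u=\omega^2$. The sketch below assumes numpy and uses the linearized stiffness $\alpha(1-3(x^{DS}_*)^2)=3.5$, obtained by differentiating $\alpha x(1-x^2)$ at the fixed points:

```python
import numpy as np

mu, alpha, gamma = 0.01, -1.0, -0.5
xs2 = (alpha + gamma) / alpha            # (x*^DS)^2 = 1.5
stiff = alpha * (1.0 - 3.0 * xs2)        # linearized stiffness = 3.5
# (w^2 - stiff)^2 + (mu*w)^2 = gamma^2 is a quadratic in u = w^2
u = np.roots([1.0, mu**2 - 2.0 * stiff, stiff**2 - gamma**2]).real
w = np.sqrt(u[u > 0])                    # the positive critical frequencies
tau = np.arccos((w**2 - stiff) / gamma) / w
print(w, tau)                            # one branch gives w ~ 2.00, tau ~ 1.55
```

The branch with $\omega\approx 2.00$ yields $\tau\approx 1.55$, consistent with the loss of stability observed near $\tau=1.53$ in the simulations.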
The result is a limit cycle in phase space that spans both wells. All the simulations of the manuscript have been carried out with the DDE tools of Matlab and checked with a fourth-order Runge-Kutta integrator for the non-delayed case, with an integration step of $0.01$. These behaviors of the driver system are depicted in Fig. [3](#fig:2b){reference-type="ref" reference="fig:2b"}. ![The double-well potentials and stable fixed points of the Duffing oscillator (in red) and of the delayed-Duffing oscillator (in blue).](Fig_1.jpg){#fig:1 width="10.0cm"} ![The figures show the oscillations amplitude $A_{x_1}$ (a) and the maxima and minima diagram (b) of the driver system. We can appreciate the oscillations amplitudes (a) and the oscillator behaviors (b) in all the $\tau$ regions of the driver system. The history functions for the time-delayed Duffing oscillator are constant $(u_{0},v_{0})=(1,1)$. The red asterisk in panel (a) is the value of $\tau$ predicted, through the Eqs. ([\[eq:char\]](#eq:char){reference-type="ref" reference="eq:char"} - [\[eq:tau\]](#eq:tau){reference-type="ref" reference="eq:tau"}), by the stability analysis at which the fixed points undergo a change of stability. ](Fig_2.jpg){#fig:2 width="15.0cm"} ![The figure shows the oscillations of the driver system in the stable regions defined in Fig. [2](#fig:2){reference-type="ref" reference="fig:2"}. Panels (a) and (b) show the driver system $x$ oscillations and the orbit in the phase space for $\tau\in$ region I, respectively. Panels (c) and (d) for $\tau\in$ region II. Panels (e) and (f) for $\tau\in$ region III. Panels (g) and (h) for $\tau\in$ region IV. ](Fig_2b.jpg){#fig:2b width="14.0cm"} # The role of the coupling constant {#s:CC} In Fig. [4](#fig:3){reference-type="ref" reference="fig:3"}(a), we show the $x$-time series of the two systems when $C=0$, i.e., without coupling, and with $\tau=1$ and $\mu=0.01$.
In the figure, we can see in blue the driver system $x$-time series oscillations, which from now on we call $x_1$, while in red the response system without coupling, which we call $x_2$. We can appreciate how the two oscillators tend to their specific fixed points independently of each other. Once we have seen how the two oscillators behave independently, from this point on we switch on the coupling constant, so that the time-delayed Duffing oscillator starts to drive the Duffing oscillator without delay. The effects are shown in Fig. [4](#fig:3){reference-type="ref" reference="fig:3"}, in which in black we represent the response system $x$-time series for $C=0.06$, from now on $x_{2C}$. The coupling value has been chosen for explanatory purposes. In fact, it is the value for which it is possible to appreciate that the oscillations of $x_{2C}$ are slightly displaced up towards the fixed point of the driver system, although they have not yet jumped into the other well. In Fig. [4](#fig:3){reference-type="ref" reference="fig:3"}(b), we show (the curve in blue) the absolute value of the asymptotic distance between $x_1$ and $x_{2C}$, $|x_1-x_{2C}|$, for $t>200$. Also, it is shown that the mean distance between the $x$-time series of the coupled response system and the $x$-time series of the driver system, black line, $<x_1-x_{2C}>=2.1339$, is smaller than the mean distance between the $x$-time series of the uncoupled response system and the $x$-time series of the driver system, red line, $<x_1-x_{2}>=2.2031$. This measures the level of synchronization between the two oscillators, following the standard definition of synchronization [@Miranda], $$\label{eq:5} \lim_{t\to\infty}|x_1(t)-x_{2C}(t)|\rightarrow0,$$ stating that if the mean of the asymptotic distance between the solutions of the two oscillators goes to zero, the two oscillators are synchronized. From now on, when we state that one case is more synchronized than another, it means that this definition has been used.
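In practice, this limit is estimated over a finite window. A small helper capturing the measure used in this paper (the tail-averaged distance over the last third of the series) can be sketched as follows; the two signals in the demonstration are synthetic examples, not simulation output:

```python
import numpy as np

def mean_tail_distance(x1, x2):
    """Mean of |x1 - x2| over the last third of the series,
    a finite-time proxy for the synchronization limit above."""
    x1 = np.asarray(x1, dtype=float)
    x2 = np.asarray(x2, dtype=float)
    tail = slice(2 * (len(x1) // 3), None)
    return float(np.mean(np.abs(x1[tail] - x2[tail])))

# Two decaying oscillations locked to different wells differ, in the
# tail, by the gap between their fixed points (1.2247 vs 1.0000).
t = np.linspace(0.0, 100.0, 10001)
a = 1.2247 + 0.1 * np.exp(-0.005 * t) * np.cos(2.0 * t)
b = 1.0000 + 0.1 * np.exp(-0.005 * t) * np.cos(2.0 * t)
print(mean_tail_distance(a, b))   # -> 0.2247 (the fixed-point gap)
```

A smaller value of this measure is read, throughout the paper, as a higher degree of synchronization.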
To obtain the mean of the asymptotic distance, we have computed the absolute value of the difference between the last third of the $x$-series of the two systems and then taken its mean value. Then, we set larger values of $C$ for $\tau=1$, as shown in Fig. [5](#fig:4){reference-type="ref" reference="fig:4"}. We can see that for $C=1$, Fig. [5](#fig:4){reference-type="ref" reference="fig:4"}(a), the response system $x$-time series jumps into the other well, and its mean distance from the driver system, $<x_1-x_{2C}>=0.2528$, becomes significantly smaller than in Fig. [4](#fig:3){reference-type="ref" reference="fig:3"}(b). Naturally, the mean distance between $x_1$ and $x_2$ does not change. Contrary to expectations, when the $C$ value increases to $C=1.66$, Fig. [5](#fig:4){reference-type="ref" reference="fig:4"}(b), the asymptotic oscillations of the response system $x$-time series grow larger and the mean distance increases to $<x_1-x_{2C}>=0.4733$. Then, if the coupling constant increases further to $C=3$, Fig. [5](#fig:4){reference-type="ref" reference="fig:4"}(c), the asymptotic oscillations amplitude decreases and so does the mean distance, $<x_1-x_{2C}>=0.1460$, as it is supposed to. In other words, comparing Figs. [5](#fig:4){reference-type="ref" reference="fig:4"}(a)-(c), the oscillations amplitude of the response system grows, reaches a maximum and decreases as a function of the coupling constant. This effect of the coupling constant is comparable to the effect of an external forcing amplitude that induces a resonance. ![Panel (a) shows the driver system and the response system $x$-time series at fixed $\tau=1$. In blue the driver, in red the Duffing oscillator with $C=0$ and in black the response system with $C=0.06$. Panel (b) shows the distance between the driver system and the response system $x$-time series, $|x_1(t)-x_{2C}(t)|$ (the blue oscillations), and the mean distances $<x_1-x_2>$ and $<x_{1}-x_{2C}>$, the red and black lines, respectively.
](Fig_3.jpg){#fig:3 width="15.0cm"} ![The figure shows the effect of a growing coupling constant on the dynamics of the response system, $C=1$, $C=1.66$ and $C=3$ at fixed $\tau=1$. The panels show the driver system $x$-time series, $x_1$ in blue, and the response system $x_{2C}$ in red. In the panels it can be appreciated that the oscillations amplitude of the response system is larger in the $C=1.66$ case. Moreover, counter-intuitively, the synchronization of the two systems is better at $C=1$ than at $C=1.66$. ](Fig_4.jpg){#fig:4 width="15.0cm"} ![The panels show (a) the gradient plot of the amplitude of the response system oscillations, $A_{x_{2C}}$; (b) the gradient plot of the mean distance between the driver system and the response system; (c) the gradient plot of the response system frequency, $\omega_{2}$; and (d) the gradient plot of the difference between the frequencies of the driver system and the response system, $|\omega_1-\omega_{2}|$. The frequencies have been calculated using the fast Fourier transform. All figures show gradient plots as a function of the coupling constant $C$ and the time delay $\tau$. In panels (c) and (d), only the regions II and IV are represented, because the driver system frequency in region I is zero and region III is aperiodic, making the comparison with the response system meaningless.](Fig_5.jpg){#fig:5 width="14cm"} # The combined effect of the coupling constant and the delay {#s:CC_tau} Now, to start the analysis of the mentioned coupling constant effect and its interaction with the delay $\tau$, we plot Fig. [6](#fig:5){reference-type="ref" reference="fig:5"}. Here, we can find the following gradient plots. In Fig. [6](#fig:5){reference-type="ref" reference="fig:5"}(a), the oscillations amplitude of the response system, $A_{x_{2C}}$. In Fig. [6](#fig:5){reference-type="ref" reference="fig:5"}(b), the mean distance between the driver system and the response system asymptotic behaviors, $<|x_1(t)-x_{2C}(t)|>$, $t>200$. In Fig.
[6](#fig:5){reference-type="ref" reference="fig:5"}(c), the response system oscillation frequency, $\omega_{2}$, for the driver system regions with periodic oscillations. Throughout the manuscript, the frequencies of the driver and the response system are calculated using the fast Fourier transform. In Fig. [6](#fig:5){reference-type="ref" reference="fig:5"}(d), the difference between the oscillation frequencies of the driver system and the response system, $|\omega_1-\omega_{2}|$. All the gradient plots are shown as a function of the delay $\tau$ and the coupling constant $C$. We can see in Fig. [6](#fig:5){reference-type="ref" reference="fig:5"}(a) that the oscillations amplitude of the response system grows with the delay $\tau$ of the driver system, similarly to the driver system itself, as shown in Fig. [2](#fig:2){reference-type="ref" reference="fig:2"}. On the other hand, the oscillations amplitude also grows and changes over some range of the coupling constant $C$ that varies in every region. Then, Fig. [6](#fig:5){reference-type="ref" reference="fig:5"}(b) shows the level of synchronization between the two oscillators. In fact, every point in the figure represents the mean distance of the asymptotic behaviors of the oscillators' $x$-series. Thus, the smaller the mean distance between the two oscillators' $x$-series, the higher the level of synchronization. Counter-intuitively, it is not true throughout the panel that the larger the coupling constant, the better the synchronization. In fact, the synchronization is better in some cases, such as $C = 1$ and $\tau = 1$ in region I, than for $C = 1.66$ at the same $\tau$ value. To better understand the previous figure, we analyze the panels region by region and we study a particular case for a fixed $\tau$ and varying $C$ in the interesting cases. ## The coupling constant effect In region I: the oscillation amplitudes, Fig.
[6](#fig:5){reference-type="ref" reference="fig:5"}(a), are smaller with respect to the other regions. The exception is the zone of lower $\tau$ values and $0\lesssim C\lesssim2$. Then, an area of relatively higher amplitude is visible in the middle of the region. In fact, we can appreciate a higher amplitude "tubular structure" that crosses the region along the $\tau$ values, but only for certain $C$ values. This peak resembles a resonance peak. As a matter of fact, the response system oscillations are small outside "the tube" but are much larger inside it, as a consequence of the driver system. As expected for a resonance, there is a dependence on the amplitude of the external forcing, in this case $C$. However, there is also a dependence on the frequency of the external forcing, here the delay $\tau$. In Fig. [6](#fig:5){reference-type="ref" reference="fig:5"}(b), we can notice a large difference between the two oscillators' $x$-series for $C$ values outside and inside the resonance area. Thus, smaller $C$ values can synchronize the two oscillators better than larger $C$ values located inside the resonance area, as in the already discussed example of $\tau=1$ in Fig. [5](#fig:4){reference-type="ref" reference="fig:4"}. Now, we study a particular case: for the given value $\tau=1$, we consider the range of $C$ values shown in Fig. [6](#fig:5){reference-type="ref" reference="fig:5"}. This $\tau$ value lies in the region I depicted in Fig. [2](#fig:2){reference-type="ref" reference="fig:2"}. Thus, we plot in Fig. [7](#fig:6){reference-type="ref" reference="fig:6"} the asymptotic oscillations amplitude and the asymptotic behavior of the response system oscillations as a maxima-minima diagram, for the coupling constant values $0.01<C<4$. In particular, Fig.
[7](#fig:6){reference-type="ref" reference="fig:6"}(a) shows the amplitude $A_{x_{2C}}$, in which we can recognize a bell-shaped curve reminiscent of a resonance, with its maximum at $C=1.66$, while the red line is the amplitude $A_{x_{2}}\simeq 0.57$ of the response system with $C=0$. Finally, the black line is the amplitude of the driver system, $A_{x_1}\simeq 0$, whose oscillations decay into the fixed point. This peak is a section of the peak spanning the $\tau$ axis seen in Fig. [6](#fig:5){reference-type="ref" reference="fig:5"}. The appearance of the peak is recognizable in the maxima-minima diagram, Fig. [7](#fig:6){reference-type="ref" reference="fig:6"}(b). These last figures have been obtained by plotting the maxima and minima of the last $5$ periods of the oscillations, to show the oscillators' asymptotic behaviors. The driver system asymptotic behavior also appears in the figure as a straight black line, which is constant because the change of $C$ does not affect it. Also, in Fig. [7](#fig:6){reference-type="ref" reference="fig:6"}(a) we can spot a little peak around $C=0.444$ that matches a change in the maxima-minima diagram of the response system in Fig. [7](#fig:6){reference-type="ref" reference="fig:6"}(b). This little peak is related to the small filiform zone around $C\thickapprox0.444$ that crosses the whole region I along the $\tau$ axis in Fig. [6](#fig:5){reference-type="ref" reference="fig:5"}(a). All of this shows that the coupling constant introduces a perturbation into the response system that, in addition to driving it, also forces the system. This triggers a resonance between the driver system and the oscillations of the response system. In region II: we can see that a zone of higher amplitudes, Fig. [6](#fig:5){reference-type="ref" reference="fig:5"}(a), starts for values of $C\thickapprox1$ and is connected to the high amplitude area of the region I.
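The maxima-minima diagrams mentioned above can be sketched by collecting the local extrema of the tail of a time series. A minimal version of our own (the fixed tail fraction is a simplification; the manuscript keeps the last $5$ periods):

```python
import numpy as np

def maxima_minima(x, tail_fraction=0.2):
    # Collect local maxima and minima of the tail of a time series,
    # as done for the maxima-minima diagrams of the asymptotic behavior.
    x = np.asarray(x)
    tail = x[-int(len(x) * tail_fraction):]
    interior = tail[1:-1]
    maxima = interior[(interior > tail[:-2]) & (interior > tail[2:])]
    minima = interior[(interior < tail[:-2]) & (interior < tail[2:])]
    return maxima, minima

t = np.linspace(0, 100, 20001)
x = 1.0 + 0.5 * np.sin(2.0 * t)   # a period-1 orbit around x = 1
maxs, mins = maxima_minima(x)
```

For a period-1 orbit all maxima collapse onto a single value and all minima onto another, giving the single-branch curves seen in the diagrams; period doubling or aperiodicity shows up as a spread of extrema.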
In fact, we can recognize that the resonance area of region I continues in the region II. These amplitude maxima spread over more $C$ values until, at the right of the figure, they reach a wide zone before merging with the aperiodic region III. Then, we can distinguish a variety of peaks in the response system amplitude, most of them for $C\lesssim 1$. These peaks seem generated by an erratic behavior of the response system, as a function of the coupling values, in response to the driver system. We call them *adjustment peaks*, because the response system is adjusting its behavior to the driver system before reaching the higher amplitude trend of the higher $C$ values. In this peaks zone, a little variation in the coupling constant can be determinant for the response system to change its asymptotic behavior into the driver system well. The name adjustment peaks comes from the fact that the coupling constant $C$ at the peaks cannot override the dynamics of the response system, but it is able to stretch the orbits to a larger amplitude. On the other hand, a slightly bigger or smaller value of $C$ can drive the response system into the driver system well, so that the response system amplitude oscillates as a function of $C$. In regions I and II, the gradient plot of the oscillations difference $<|x_1(t)-x_{2C}(t)|>$, Fig. [6](#fig:5){reference-type="ref" reference="fig:5"}(b), shows that to obtain a better synchronization of the two oscillators, values of $C$ outside the resonance area should be chosen. In Figs. [6](#fig:5){reference-type="ref" reference="fig:5"}(c) and (d), we can appreciate that the response system oscillation frequency $\omega_2$ grows and the difference between the two oscillators' frequencies, $|\omega_1-\omega_2|$, decreases as $C$ grows. However, the difference grows at $\tau\thickapprox1.5$ and $C\thickapprox4$, because we are close to the aperiodic region III and some fluctuations can start.
This behavior is not visible in the difference of amplitudes, Fig. [6](#fig:5){reference-type="ref" reference="fig:5"}(b), so the effect is just on the frequency. Here, we repeat what we have done before and study a particular case, here $\tau=2$. In fact, in Figs. [8](#fig:7){reference-type="ref" reference="fig:7"}(a) and (b) we can see how the amplitude of the response system varies as a function of the coupling constant $C$. The results can be visualized in the maxima-minima diagram, as shown in Fig. [8](#fig:7){reference-type="ref" reference="fig:7"}(b). As before, the black straight lines, which are the driver asymptotic behaviors, have been plotted in Fig. [8](#fig:7){reference-type="ref" reference="fig:7"}(b) for comparison with the response system, and the behavior of the driver system does not change for different values of $C$. In this set of figures, we can distinguish a variety of peaks in the response system amplitude, most of them for $C\leq 1$. These peaks are the adjustment peaks that we described before. The peak at $C=1$ is different because it starts the general tendency that continues for bigger values of $C$, i.e., the amplitude decreases in order to adjust to the amplitude of the driver system. However, for $C>1$ some little peaks call our attention, since they disrupt the general tendency of the curve, in particular the one at $C=3$. This can be generated by a resonance between the external forcing (the driver system) and the system (the response system). Finally, in Fig. [8](#fig:7){reference-type="ref" reference="fig:7"}(c), we show the change in the frequency $\omega_2$ of the response system as a function of $C$. Here, we can appreciate that, initially, $\omega_2=1.3684$, then its value oscillates until it becomes $\omega_2=\omega_1=1.5881$, for $C>1$.
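The FFT-based frequency computation used for $\omega_1$ and $\omega_2$ can be sketched with a minimal helper of our own; as a check, the demo below recovers a test signal built with the value $\omega_1=1.5881$ quoted above.

```python
import numpy as np

def dominant_frequency(x, dt):
    # Dominant angular frequency of a series via the FFT. The mean is
    # removed so the zero-frequency bin does not mask the oscillation.
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=dt)        # cycles per unit time
    return 2.0 * np.pi * freqs[np.argmax(spectrum)]  # angular frequency

dt = 0.01
t = np.arange(0, 200, dt)
x = 1.0 + 0.4 * np.cos(1.5881 * t)   # test signal mimicking omega_1 = 1.5881
omega = dominant_frequency(x, dt)
```

The frequency resolution is $2\pi/T$ for a record of length $T$, so long asymptotic stretches of the series are needed to resolve close values of $\omega_1$ and $\omega_2$.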
We have to take into account that the frequency plot has been analyzed only in this case and for $\tau$ values in the region IV, because only in these two regions are the oscillations of the driver system periodic. In region III: the $\tau$ values are in the aperiodic region, defined in Fig. [2](#fig:2){reference-type="ref" reference="fig:2"}, so the response system behaviors become aperiodic when the coupling value exceeds $C\thickapprox0.06$, see Figs. [6](#fig:5){reference-type="ref" reference="fig:5"}(a) and (b). In region IV: we can find high oscillation amplitudes for both oscillators, see Fig. [6](#fig:5){reference-type="ref" reference="fig:5"}(a). Here, we can see that the high oscillations amplitude grows for $C\gtrsim0.06$. Besides, the driver system shows a minimum in the oscillations amplitude at $\tau=3.68$, and the same behavior appears in the response system for almost all the $C$ values. In Fig. [6](#fig:5){reference-type="ref" reference="fig:5"}(b) we can see how the difference between the oscillators' $x$-series is maximum for very small values of $C$, but then it starts to oscillate, growing and shrinking when peaks in amplitude show up, and finally becomes small for $C\thickapprox4$. As a result, the response system behaviors adjust to the driver system behavior in a complicated way inside the chosen range of $C$ values. In fact, there are a lot of fluctuations in the response system amplitude, Fig. [6](#fig:5){reference-type="ref" reference="fig:5"}(a), and in the mean difference $<|x_1(t)-x_{2C}(t)|>$, Fig. [6](#fig:5){reference-type="ref" reference="fig:5"}(b), except for $\tau\thickapprox4$ and $C\thickapprox 4$.
In the last statement we do not take into account values of $\tau$ near the region III, because the influence of the aperiodic oscillations is still noticeable. ## The $\tau$ effect We know that, in a time-delayed system, the variation of the delay $\tau$ is responsible for the modification of the frequency of the delay-induced oscillations [@Cantisan], when they exist. So, an interesting question is: how do the resonance peaks spotted in the previous figures behave when $\tau$ varies? To answer this question, we study the oscillations of the response system while the external forcing frequency, i.e., the driver system delay, changes. Therefore, we analyze Fig. [6](#fig:5){reference-type="ref" reference="fig:5"} along the $\tau$ axis. We start this second part with a coupling constant $\mathbf{C=1}$, at the beginning of the resonance peak of Fig. [7](#fig:6){reference-type="ref" reference="fig:6"}(a) and, by extension, on the border of the tubular structure described in Fig. [6](#fig:5){reference-type="ref" reference="fig:5"}(a). This coupling value gives a particular peak in the amplitude plot for $\tau=2$, Fig. [8](#fig:7){reference-type="ref" reference="fig:7"}(a). Now, by analyzing Fig. [9](#fig:8){reference-type="ref" reference="fig:8"}(a), we can see that the response system amplitudes in region II are high, and the driver system is forcing the response system to oscillate at large amplitudes between its fixed point and the one of the driver system. On the other hand, in the region I the oscillations amplitude follows the driver. In fact, we can see that the $\tau$ value for which there is a change in the stability of the response system corresponds to the value predicted by the stability analysis of the driver. Analyzing the frequencies, Fig. [9](#fig:8){reference-type="ref" reference="fig:8"}(b), we can see that there is a correspondence of the values in region II.
It is worth mentioning a downward spike in $\omega_1$ at $\tau\thickapprox3.1$, which does not appear in all the $\omega-\tau$ plots, or differs when it does appear. It seems that, so close to the chaotic region III, some of its influence remains. In fact, the value of the frequency is still variable for $\tau$ values slightly larger than $3.05$. It is important to mention that this is the coupling value for which the curves of the two frequencies are most similar in region II, except for some values. The most interesting exception is at $\tau=2$, where the oscillations amplitude reaches a maximum; there, it is also possible to spot a peak in the frequency $\omega_2$, Fig. [9](#fig:8){reference-type="ref" reference="fig:8"}(b). Continuing our analysis brings us to the next value, $\mathbf{C=1.66}$, see Figs. [9](#fig:8){reference-type="ref" reference="fig:8"}(c) and (d). This is the value for which the response system amplitudes reach a maximum in Fig. [7](#fig:6){reference-type="ref" reference="fig:6"}(a), and it falls inside the tubular structure of Fig. [6](#fig:5){reference-type="ref" reference="fig:5"}(a). In Fig. [9](#fig:8){reference-type="ref" reference="fig:8"}(c), we can appreciate that the oscillations amplitudes in region I, region II and region III are enhanced with respect to the previous value of $C$. Contrary to the expectations, a larger coupling value with respect to the former one does not give a better synchronization of the two systems, due to the presence of the resonance. In fact, in the above-mentioned regions, the response system oscillations amplitude, at this value of the coupling constant, reaches its maximum and the $\omega_1$ and $\omega_2$ curves in region II lose coherence, Fig. [9](#fig:8){reference-type="ref" reference="fig:8"}(d). In the regions in which the high resonance amplitudes are present, the two oscillators are no longer synchronized in either the oscillations amplitude or the frequency.
The last case is for $\mathbf{C=3}$, Figs. [9](#fig:8){reference-type="ref" reference="fig:8"}(e) and (f), related to the small peak in Fig. [8](#fig:7){reference-type="ref" reference="fig:7"}(a). Here, the coupling constant is big enough to fall outside the resonance tubular structure described in Fig. [6](#fig:5){reference-type="ref" reference="fig:5"}(a) and far beyond the values that produce the peak in Fig. [7](#fig:6){reference-type="ref" reference="fig:6"}. In fact, the oscillations amplitude is smaller than in the previous case and, since the coupling constant is large within the range that we have used, the oscillations amplitude of the response system follows the amplitude of the driver system better than in all the other cases. Also, the $\tau$ value at which the response system changes stability corresponds to the driver system value, see Fig. [9](#fig:8){reference-type="ref" reference="fig:8"}(e). In this case, the driver system and the response system frequencies match almost as well as in the $C=1$ case in region II, and better in region IV. Definitely, they match better than in the $C=1.66$ case. So, again, we recognize how the oscillations amplitude and the difference in oscillations between the driver and the response system grow, reach a maximum and decrease as a function of the coupling constant; how they vary as a function of $\tau$ has also been studied. ![The panels show a slice of the previous Fig. [6](#fig:5){reference-type="ref" reference="fig:5"} for varying values of $C$ and fixed $\tau=1$. In particular, panel (a) portrays the amplitude of the oscillators, $A_{x_{2C}}$ being the amplitude of the response system with the nonzero coupling term, $A_{x_{2}}$ the amplitude of the response system without the coupling term and $A_{x_1}$ the amplitude of the driver system. Then, panel (b) shows the maxima and minima diagram of the asymptotic behavior for the driver (the black straight line) and the response system.
It is interesting to observe the amplitude peaks in panel (a): the smaller one at $C=0.444$ and the larger one at $C=1.66$, which suggests a resonance induced by the coupling of the two oscillators. In panel (b), the maxima and minima of the driver system overlap, since the oscillator goes to the fixed point. It is also shown that the oscillations of the response system tend to the driver system fixed point (straight black line), for values of $C$ bigger than $0.06$ and outside the resonance area. ](Fig_6.jpg){#fig:6 width="14.9cm"} ![The panels show (a) the amplitude of the oscillators, $A_{x_{2C}}$ being the amplitude of the response system with the nonzero coupling term, $A_{x_{2}}$ the amplitude of the response system without the coupling term and $A_{x_1}$ the amplitude of the driver system. Then, (b) the maxima and minima diagram of the asymptotic behavior for the driver (black straight lines) and the response system for varying values of $C$ and fixed $\tau=2$, respectively. Some interesting peaks appear in the amplitude plot, due to a resonance. Finally, panel (c) shows the $\omega_1$ of the driver system in blue, which does not change when $C$ changes, and the $\omega_2$ of the response system in red, which for $C=0$ is $\omega_2=1.3684$ and, when the coupling constant is larger than $C\simeq1.2$, becomes $\omega_2=\omega_1$.](Fig_7.jpg){#fig:7 width="13.5cm"} ![Panels (a), (c) and (e) show the oscillations amplitude and panels (b), (d) and (f) the frequency in the regions II and IV for three coupling values. The first one is before the peak in Fig. [7](#fig:6){reference-type="ref" reference="fig:6"}(a), i.e., $C=1$. The second is the value that gives the top of the peak, $C=1.66$. The third one is beyond the peak, $C=3$. Interestingly, in panel (a) we can see that in the region I for $C=1$, the response system follows the driver better than in the second case, panel (c), although the second coupling constant value is larger than the first one.
The resonance peaks already mentioned are recognizable in regions I, II and III for $C=1.66$ and in region II for $C=1$. Finally, it is interesting that in region II the frequencies follow $\omega_1$ better than in all the other cases.](Fig_8.jpg){#fig:8 width="13.5cm"} ![The panels (a) and (b) show, respectively, for $C=3$ and for $C=4$, the mean of the maxima and minima of the last 5 periods of the driver system (solid line) and of the response system (the dots). Panels (c) and (d) show in black the oscillations amplitude of the driver and in blue that of the response system; the red line is the amplitude of the Duffing oscillator with $C=0$. Panels (e) and (f) show the differences of the $x$-series of the driver and the Duffing oscillator in red, and of the driver and the response system in blue. For these coupling constant values the response system dynamics adapt to the driver system and the delay-induced oscillations are completely transmitted from the driver system to the response system. Hence, we have modeled the built-in delay of the synchronization just using the delay in the driver system.](Fig_9.jpg){#fig:9 width="14cm"} # Delay-induced oscillations in a non-delayed system {#s:tau} In the introduction we wrote about a subsidiary objective of this work. We use the delayed driver system as the only excitation of the driven system and we want to ascertain for which values of the coupling constant its features are better transferred. In particular, coupling means an interchange of information between the driver and the driven system, and the speed of this interchange is finite. Thus, an in-built delay should be taken into account. This is the reason to study whether a delayed nonlinear oscillator can pass along its delay to the entire coupled system. In previous sections, we have seen that the best candidates are values of $C$ outside the resonance areas.
This means that, through the coupling mechanism, the driver system can transfer some of its delay-induced oscillations to the response system in a complicated way that depends on the $\tau$ region. Thus, the response system starts to behave as a delayed oscillator. To visualize this effect we can focus on Fig. [9](#fig:8){reference-type="ref" reference="fig:8"}(e). Then, to analyze the phenomenon more deeply, we plot Fig. [10](#fig:9){reference-type="ref" reference="fig:9"}. In the panels (a) and (b), we show the mean of the maxima-minima diagram of the last $5$ periods of the orbit as a function of $\tau$. The lighter line in the background is the mean of the maxima-minima diagram of the driver system. In Figs. [10](#fig:9){reference-type="ref" reference="fig:9"}(c) and (d) we show the oscillations amplitude of the driver and the response system. In Figs. [10](#fig:9){reference-type="ref" reference="fig:9"}(e) and (f), the distance between the driver and the response system $x$-series in blue and the distance between the driver and the Duffing oscillator for $C=0$ in red. In these panels we can appreciate a good agreement between the driver and the response system. Looking at those figures we can state that, for those values of $C$, the delay is transmitted, although not perfectly, from the driver system to the response system. In Figs. [10](#fig:9){reference-type="ref" reference="fig:9"}(a) and (c), in the region II, the effect of the peak of Fig. [8](#fig:7){reference-type="ref" reference="fig:7"} at $C=3$ is visible as a zone of higher amplitudes. In fact, slightly smaller or larger values of $C$ outside the mentioned peak can guarantee a good enough match with the driver system, as in the $C=4$ case. So, we can confidently say that it is possible for the driver to carry the synchronization built-in delay just with its own delay. As we saw, the coupling constant value has to fall outside the regions of resonance.
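A minimal numerical sketch of the whole setup, with a delayed driver feeding a diffusive coupling term $C(x_1-x_2)$ into the response, can be written with a fixed-step scheme and a history buffer. The equation forms below are assumptions for illustration (the manuscript's model is defined in its earlier sections), and semi-implicit Euler is chosen here for numerical stability, whereas the manuscript itself uses the DDE tools of Matlab.

```python
import numpy as np

def simulate(C=3.0, tau=1.0, mu=0.01, h=0.01, t_end=300.0):
    # Assumed driver:   x1'' + mu*x1' - x1(t - tau) + x1^3 = 0
    # Assumed response: x2'' + mu*x2' - x2 + x2^3 = C*(x1 - x2)
    # Constant history (u0, v0) = (1, 1), as in the figures of the text.
    n = int(t_end / h)
    lag = int(round(tau / h))                 # delay measured in steps
    x1 = np.empty(n + 1); v1 = np.empty(n + 1)
    x2 = np.empty(n + 1); v2 = np.empty(n + 1)
    x1[0], v1[0] = 1.0, 1.0
    x2[0], v2[0] = -1.0, 0.0                  # response starts in the other well
    for i in range(n):
        x1_delayed = x1[i - lag] if i >= lag else 1.0   # constant history
        a1 = -mu * v1[i] + x1_delayed - x1[i] ** 3
        a2 = -mu * v2[i] + x2[i] - x2[i] ** 3 + C * (x1[i] - x2[i])
        # Semi-implicit Euler: update velocities first, then positions.
        v1[i + 1] = v1[i] + h * a1
        v2[i + 1] = v2[i] + h * a2
        x1[i + 1] = x1[i] + h * v1[i + 1]
        x2[i + 1] = x2[i] + h * v2[i + 1]
    return x1, x2

x1, x2 = simulate()
tail = len(x1) // 3
mean_distance = np.mean(np.abs(x1[-tail:] - x2[-tail:]))
```

Sweeping $C$ and $\tau$ over a grid and recording the tail mean distance, the amplitude and the FFT frequency of `x2` reproduces gradient plots of the kind shown in Fig. [6](#fig:5){reference-type="ref" reference="fig:5"}, under the stated model assumptions.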
If the coupling constant is large enough, i.e., $C>3$, the synchronization of the delay-induced oscillations is not just localized to a particular $\tau$ region or value, as we have seen in the case of $C=1$ and the $\tau$ region I, but is generalized to all the regions. For $C>4$ this effect does not change drastically in comparison with the case $C=4$. Finally, we also show here that we can use the coupling constant to control what degree of delay-induced oscillations we want to transfer to the response system, in order to control the in-built delay due to the coupling. # Conclusions {#s:conclucion} Two coupled systems have been studied: a time-delayed Duffing oscillator as the driver system and a Duffing oscillator without delay as the response system. The driver system plays two roles, the first one as the external forcing of the response system and the second one as the one responsible for bringing the coupling built-in delay into the response system. As regards the first role, the driver system, behaving as an external forcing, can induce a resonance in the response system, as we have seen in the case of $C=1.66$ in the regions I, II and III. Also, in the case of $C=1$ a resonance shows up, but just in the region II. Finally, some other resonance peaks pop up, in regions I and II, that can be interpreted, in analogy with a periodic external forcing, as peaks related to other harmonics. An interesting feature is the adjustment peaks, which give birth to fractal-like zones in the $C-\tau$ gradient plots. In those zones, the amplitude of the response system is highly sensitive to the $C$ value. In fact, for really close values of the coupling constant the response system can fall into the driver system well or have its orbits stretched by the influence of both the driver system and the response system fixed points.
On the other hand, when the coupling constant takes values outside the resonance area, the delay-induced behaviors are better transferred from the driver system to the response system. Thus, the response system behaves as a time-delayed system, accounting for the finite velocity of the information transmission in the coupling. The best synchronization along all the $\tau$ values is reached at $C=4$, but there are specific cases, such as particular $\tau$ regions or values, that reach a good synchronization for smaller $C$ values. As it has been shown, the difference between the asymptotic oscillations of the two systems changes in the parameter set in a complicated way. Finally, the coupling constant can be used as a control parameter that allows us to decide how much of the delay-induced oscillations we want to be transmitted from the driver to the response system, always remembering that identical synchronization is impossible, as explained before. This means that a perfect transmission of the delay from the driver system to the response system is unattainable. To summarize, both roles unveiled interesting properties in the response system behavior. First of all, the coupling mechanism can trigger a resonance in the driven system oscillations as an external forcing. Second, it is possible to model the delay due to the coupling interchange of information just with the delay of the driver. Also, we have shown that a previous study can suggest which values of the coupling constant we should use in a specific case. It is not always possible to apply the strongest coupling value. Also, we can decide, through the coupling constant, how much of the delay-induced oscillations are transferred from one oscillator to the other. # Acknowledgment This work has been supported by the Spanish State Research Agency (AEI) and the European Regional Development Fund (ERDF, EU) under Project No. PID2019-105554GB-I00 (MCIN/AEI/10.13039/501100011033).
Jiruska, P., de Curtis, M., Jefferys, J.G.R., Schevon, C.A., Schiff, S.J. & Schindler, K. Synchronization and desynchronization in epilepsy: controversies and hypotheses. J. Physiol. 591, 787-797 (2013). Mormann, F., Lehnertz, K., David, P., & Elger, C.E. Mean phase coherence as a measure for phase synchronization and its application to the EEG of epilepsy patients. Phys. D: Nonlinear Phenom. 144, 358-369 (2000). Rulkov, N. F. Regularization of synchronized chaotic bursts. Phys. Rev. Lett. 86, 183 (2001). Jensen, R.V. Synchronization of randomly driven nonlinear oscillators. Phys. Rev. E 58, R6907 (1998). Hramov, A.E., and Koronovskii, A.A. An approach to chaotic synchronization. Chaos 14, 603-610 (2004). Koronovskii, A.A., Moskalenko, O.I., & Hramov, A.E. On the use of chaotic synchronization for secure communication. Phys.-Usp. 52, 1213 (2009). Naderi, B., and Kheiri, H. Exponential synchronization of chaotic system and application in secure communication. Optik 127, 2407-2412 (2016). Defoort, M., Hentz, S., Shaw, S.W., & Shoshani, O. Amplitude stabilization in a synchronized nonlinear nanomechanical oscillator. Commun. Phys. 5, 93 (2022). DeLellis, P., Di Bernardo, M., Gorochowski, T.E., & Russo, G. Synchronization and control of complex networks via contraction, adaptation and evolution. IEEE Circuits Syst. Mag. 10, 64-82 (2010). Zhang, X., Hu, X., Kurths, J., & Liu, Z. Explosive synchronization in a general complex network. Phys. Rev. E 88, 010802 (2013). Yao, Z., Ma, J., Yao, Y., & Wang, C. Synchronization realization between two nonlinear circuits via an induction coil coupling. Nonlinear Dyn. 96, 205-217 (2019). Sujith, R.I., and Unni, V.R. Complex system approach to investigate and mitigate thermoacoustic instability in turbulent combustors. Phys. Fluids 32, 061401 (2020). Pecora, L.M. and Carroll, T.L. Synchronization in chaotic systems. Phys. Rev. Lett. 64, 821 - 824 (1990). Pecora, L.M. and Carroll, T.L. Driving systems with chaotic signals. Phys.
Rev. A. 44, 2374 - 2383 (1991). Pecora, L.M., and Carroll, T.L. Synchronization of chaotic systems. Chaos 25, 097611 (2015). Boccaletti, S., Kurths, J., Osipov, G., Valladares, D.L., & Zhou, C.S. The synchronization of chaotic systems. Phys. Rep. 366, 1-101 (2002). Pecora, L., Carroll, T., Johnson, G., Mar, D., & Fink, K.S. Synchronization stability in coupled oscillator arrays: Solution for arbitrary configurations. Int. J. Bifurcat. Chaos 10, 273-290 (2000). Rulkov, N.F., Sushchik, M.M., Tsimring, L.S. & Abarbanel, H.D.I. Generalized synchronization of chaos in directionally coupled chaotic systems. Phys. Rev. E51, 980 - 994 (1995). Kocarev, L. and Parlitz, U. Generalized synchronization, predictability, and equivalence of unidirectional coupled dynamical systems, Phys. Rev. Lett. 76, 1816 - 1819 (1996). Ding, M. and Ott, E. Enhancing synchronism of chaotic systems, Phys. Rev. E49, R945 (1994). Kapitaniak, T. Synchronization of chaos using continuous control, Phys. Rev. E 50, 1642 - 1644 (1994). Gu, X., Jia, F., Deng, Z., & Hu, R. Stochastic response of nonlinear viscoelastic systems with time-delayed feedback control force and bounded noise excitation. Int. J. Struct. Stab. Dyn., 21, 2150181 (2021). Hamed, Y. S., Albogamy, K. M., & Sayed, M. Nonlinear vibrations control of a contact-mode AFM model via a time-delayed positive position feedback. Alex. Eng. J, 60, 963-977 (2021). Sayed, M., Mousa, A. A., & Alzaharani, D. Y. Non-linear time delay saturation controller for reduction of a non-linear vibrating system via 1: 4 internal resonance. J. Vibroengineering, 18, 2515-2536 (2016). Cantisán, J., Coccolo, M., Seoane, J.M., & Sanjuán, M.A.F. Delay-induced resonance in the time-delayed duffing oscillator. Int. J. Bifurcat. Chaos 30, 2030007 (2020). Coccolo, M., Cantisán, J., Seoane, J.M., Rajasekar, S., & Sanjuán, M.A.F. Delay-induced resonance suppresses damping-induced unpredictability. Philos. Trans. Royal Soc. A 379, 20200232 (2021). González-Miranda, J. M. 
Synchronization and control of chaos: an introduction for scientists and engineers. World Scientific (2004).
--- abstract: | The pseudoforest version of the Strong Nine Dragon Tree Conjecture states that if a graph $G$ has maximum average degree $mad(G) = 2 \max_{H \subseteq G} \frac{e(H)}{v(H)}$ at most $2(k + \frac{d}{d+k+1})$, then it has a decomposition into $k+1$ pseudoforests where in one pseudoforest $F$ the components of $F$ have at most $d$ edges. This was proven in 2020. We strengthen this theorem by showing that we can find such a decomposition where additionally $F$ is acyclic, the diameter of the components of $F$ is at most $2\ell + 2$, where $\ell = \left\lfloor\frac{d-1}{k+1}\right\rfloor$, and at most $2\ell + 1$ if $d \equiv 1 \mod (k+1)$. Furthermore, for any component $K$ of $F$ and any $z \in \mathbb N$, we have $diam(K) \leq 2z$ if $e(K) \geq d - z(k-1) + 1$. We also show that both diameter bounds are best possible as an extension for both the Strong Nine Dragon Tree Conjecture for pseudoforests and its original conjecture for forests. In fact, they are still optimal even if we only enforce $F$ to have any constant maximum degree, instead of enforcing every component of $F$ to have at most $d$ edges. address: - Institute of Computer Science, Johannes Gutenberg University Mainz - Institute of Science and Technology, Klosterneuburg, Austria - School of Mathematics, Georgia Institute of Technology author: - Sebastian Mies - Benjamin Moore - Evelyne Smith-Roberge bibliography: - bib-refs.bib date: - - title: Beyond the Pseudoforest Strong Nine Dragon Tree Theorem --- # Introduction In this paper we study decompositions. Recall that a *decomposition* of a graph $G$ is a partition of the edge set of the graph. We will use the notation that $e(G)$ is the number of edges of $G$, and $v(G)$ is the number of vertices of $G$. Decompositions of graphs are a heavily studied area, as if one can decompose a graph into only a few simple pieces, then one can in some sense deduce that the entire graph structure is simple. 
One of the simplest classes of graphs is the class of forests; thus a natural question is *when does a graph decompose into $k$ forests?* This question has been answered completely by a beautiful theorem of Nash-Williams: **Theorem 1** ([@nash]). *A graph $G$ decomposes into $k$ forests if and only if $$\gamma(G) := \max_{\substack{H \subseteq G,\\ v(H) \geq 2}} \frac{e(H)}{v(H)-1} \leq k.$$* We will refer to $\gamma(G)$ as the *fractional arboricity of $G$*. The notation $H \subseteq G$ means that $H$ is a subgraph of $G$. Noting that a forest $F \subseteq G$ has at most $v(G) -1$ edges, and that if a graph $G$ decomposes into $k$ forests, then so do all of its subgraphs, we see that Nash-Williams' Theorem says that the obvious necessary conditions are in fact sufficient. A natural approach to strengthening Nash-Williams' Theorem would be to try and control the types of forests that appear in the decomposition. In general, not much can be done, but by observing that the fractional arboricity need not be integral, one might intuitively believe that if a graph $G$ has fractional arboricity closer to $k$ than to $k+1$, then $G$ is very close to decomposing into $k$ forests instead of $k+1$ forests, and thus one could impose more structure on at least one of the forests in the decomposition. For example, applying Nash-Williams' Theorem to cycles gives us that cycles decompose into two forests as they have fractional arboricity exactly $\frac{n}{n-1}$ if $n$ is the number of vertices in the cycle. However, of course a cycle decomposes into a forest and a matching, which is a significantly stronger statement than a decomposition into two forests. More substantially, let us consider what happens for planar graphs. By Euler's formula, a simple planar graph $G$ with at least three vertices has at most $3v(G)-6$ edges, and if $G$ is moreover triangle-free, then $G$ has at most $2v(G) -4$ edges. 
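As an illustrative aside (not from the paper), the fractional arboricity of a tiny graph can be computed by brute force. Since the maximum in the definition is attained on an induced subgraph, scanning vertex subsets suffices; the sketch below confirms the $\frac{n}{n-1}$ value for cycles.

```python
from fractions import Fraction
from itertools import combinations

def fractional_arboricity(vertices, edges):
    """Brute-force gamma(G) = max over subgraphs H of e(H)/(v(H)-1).

    The maximum is attained on an induced subgraph, so it suffices to
    scan vertex subsets of size >= 2.  Exponential time: tiny graphs only.
    """
    best = Fraction(0)
    for r in range(2, len(vertices) + 1):
        for subset in combinations(vertices, r):
            S = set(subset)
            e = sum(1 for (u, v) in edges if u in S and v in S)
            best = max(best, Fraction(e, len(S) - 1))
    return best

# A 5-cycle has gamma = 5/4, so Nash-Williams gives two forests.
n = 5
cycle = [(i, (i + 1) % n) for i in range(n)]
assert fractional_arboricity(range(n), cycle) == Fraction(5, 4)
```

This also makes the planar discussion concrete: any graph with $\gamma(G) \leq 2$ decomposes into two forests by the theorem above.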
Thus Nash-Williams' Theorem says that simple planar graphs decompose into at most $3$ forests, and triangle-free planar graphs decompose into at most $2$ forests. Assuming the planar graph is not a forest, Nash-Williams' Theorem cannot say anything further even if $G$ has larger girth. But it is easy to see that the average degree of a simple planar graph of girth $g$ is bounded by $\frac{2g}{g-2}$ (see [@dischargingtutorial]), and thus the fractional arboricity of planar graphs tends towards $1$ as the girth increases. Multiple papers have been written showing that better decompositions can be found for planar graphs of larger girth: two of the authors of this paper showed that planar graphs with girth at least $5$ decompose into two forests, one of which has every component containing at most five edges [@miesMoore]; Kim et al. [@Kostochkaetal] showed that planar graphs of girth at least $6$ decompose into a forest and a forest where each component has at most $2$ edges; and Montassier et al. [@ndtConjs] showed that planar graphs of girth at least $8$ decompose into a forest and a matching. The Strong Nine Dragon Tree Conjecture, proposed in [@ndtConjs], formalizes this intuition: **Conjecture 1** (Strong Nine Dragon Tree Conjecture). *For any integers $k$ and $d$, every graph $G$ with fractional arboricity at most $k+ \frac{d}{k+d+1}$ decomposes into $k+1$ forests such that in one of the forests, every connected component contains at most $d$ edges.* This conjecture remains wide open despite a large effort. In [@ndtConjs], Montassier et al. proved the $k=1$ and $d=1$ case. In [@Kostochkaetal], Kim et al. proved the $k=1$ and $d=2$ case. In a breakthrough result, Yang proved the Strong Nine Dragon Tree conjecture when $d=1$ [@Yangmatching]. In [@approxArb] Blumenstock and Fischer proved the $k=d$ case for some special graph classes. 
Recently, two of the authors proved the conjecture when $d \leq k+1$ and gave an approximate bound when $d \leq 2(k+1)$ (see [@miesMoore]). The strongest evidence for the Strong Nine Dragon Tree Conjecture comes from the Nine Dragon Tree Theorem, proven by Jiang and Yang: **Theorem 1** (Nine Dragon Tree Theorem, [@ndtt]). *For any integers $k$ and $d$, every graph $G$ with fractional arboricity at most $k+ \frac{d}{k+d+1}$ decomposes into $k+1$ forests, such that one of the forests has maximum degree at most $d$.* In fact, the Nine Dragon Tree Theorem is tight in the following sense: **Theorem 1** ([@ndtConjs]). *For any integers $k$ and $d$, there exist infinitely many graphs $G$ such that there exists a set $D \subseteq E(G)$ where $|D| \leq d$, $\gamma(G-D) = k + \frac{d}{k+d+1}$, but $G$ does not decompose into $k+1$ forests where one of them has maximum degree $d$.* The idea of the construction given in Theorem [Theorem 1](#montassierexample){reference-type="ref" reference="montassierexample"} is that there exists a graph $G-D$ where in every decomposition of $G$ into $k+1$ forests where one of the forests has maximum degree at most $d$, every component of this forest is an isolated vertex, or a star with exactly $d$ edges. By then cleverly adding few edges, one can increase the fractional arboricity only very slightly, while preventing a decomposition that satisfies the conclusion of the Nine Dragon Tree Theorem. In light of this, Xuding Zhu asked if the Strong Nine Dragon Tree Conjecture could be strengthened to the following: **Question 1** (Zhu, personal communication). 
*For any integers $k$ and $d$, does every graph $G$ with fractional arboricity at most $k+ \frac{d}{k+d+1}$ decompose into $k+1$ forests such that the components of one of the forests are isomorphic to stars each containing at most $d$ edges?* We will unfortunately show that the answer to this question is no, and we conjecture that it is still no even if we remove the constraint on the number of edges of the components. Had the answer been yes, this would have had applications that a proof of the Strong Nine Dragon Tree Conjecture would not. For example, Merker and Postle [@boundeddiameter] showed that if a graph decomposes into a forest and a forest where all components are isomorphic to stars (called star forests), then the graph decomposes into two forests, where every component has diameter[^1] at most 18. So had the answer to Question [Question 1](#conj:zhuStars){reference-type="ref" reference="conj:zhuStars"} been yes, using Euler's formula, it would have implied that planar graphs of girth at least five decompose into a forest and a star forest, and thus into two forests where each component has diameter at most $18$ (and in fact $14$ from an improvement in [@mousaviHaji]). Even though the Strong Nine Dragon Tree Conjecture remains open, the Nine Dragon Tree Theorem as well as the partial progress toward it have already had applications. For example, the Nine Dragon Tree Theorem as well as a result of Zhu [@ZHUgamecol] imply best possible bounds on the game chromatic number of planar graphs with girth at least five, and two of the authors used the partial progress towards the Strong Nine Dragon Tree Conjecture to show that all $5$-edge-connected planar graphs have a $\frac{5}{6}$-thin spanning tree [@miesMoore]. While the original motivation for the Nine Dragon Tree Conjectures came from its applications, there is no reason to restrict our attention to just forests. 
One can pose a Strong Nine Dragon Tree-style conjecture for any family of graphs that has a Nash-Williams type theorem. In fact, there are pseudoforest versions [@sndtcPsfs; @ndttPsfs], digraph versions [@digraphndt], and even matroidal versions [@matroidndt]. For this paper, we will be interested in *pseudoforests*. Recall that a pseudoforest is a graph where each connected component contains at most one cycle. Hakimi showed: **Theorem 1** ([@hakimi], Hakimi's Theorem). *A graph $G$ decomposes into $k$ pseudoforests if and only if $$\text{mad}(G) := \max_{\substack{H \subseteq G, \\ V(H) \neq \varnothing}} \frac{2e(H)}{v(H)} \leq 2k.$$* Again, just like Nash-Williams' Theorem, Hakimi's Theorem says that the obvious necessary condition is sufficient. We will refer to $mad(G)$ as the *maximum average degree* of a graph $G$. Similarly to fractional arboricity, the maximum average degree of a graph need not be integral. Fan et al. [@ndttPsfs] proved a pseudoforest version of the Nine Dragon Tree Conjecture: **Theorem 1** (Pseudoforest Nine Dragon Tree Theorem). *Let $k$ and $d$ be integers. Every graph with maximum average degree at most $2(k + \frac{d}{d+k+1})$ decomposes into $k+1$ pseudoforests where one of the pseudoforests has maximum degree $d$. Further, for every $\epsilon >0$ and every pair $k$ and $d$, there exists a graph $G$ such that $G$ has maximum average degree at most $2(k+ \frac{d}{k+d+1} + \epsilon)$ but does not decompose into $k+1$ pseudoforests where one of the pseudoforests has maximum degree $d$.* Similarly, we can ask for a pseudoforest analogue of the Strong Nine Dragon Tree Conjecture. This was proven by Grout and one of the authors [@sndtcPsfs]: **Theorem 1** (Pseudoforest Strong Nine Dragon Tree Theorem). *Let $k$ and $d$ be integers. 
Every graph with maximum average degree at most $2(k + \frac{d}{k+d+1})$ decomposes into $k+1$ pseudoforests where in one of the pseudoforests, every connected component contains at most $d$ edges.* In this paper, we strengthen this theorem, showing that not only can we make the components of one of the pseudoforests have at most $d$ edges, but that we can make this pseudoforest a forest, and further bound the diameter of every component. **Theorem 1**. *Let $k, d \in \mathbb N$, where $k \geq 1$. Let $\ell = \left\lfloor\frac{d-1}{k+1}\right\rfloor$. Then every graph $G$ with maximum average degree at most $2(k + \frac{d}{d+k+1})$ decomposes into $k + 1$ pseudoforests where one of the pseudoforests $F$ satisfies the following:* - *$F$ is acyclic,* - *every component $K$ of $F$ has $e(K) \leq d$,* - *$diam(K) \leq 2\ell + 2$, and if $d \equiv 1 \mod (k+1)$, then $diam(K) \leq 2\ell+1$,* - *for every component $K$ of $F$ satisfying $e(K) \geq d -z(k-1) +1$, we have $diam(K) \leq 2z$ for any $z \in \mathbb{N}$.* We remark that Theorem [Theorem 1](#thm:upperBound){reference-type="ref" reference="thm:upperBound"} holds even if the graphs are allowed to contain loops and parallel edges. Thus we show that in some sense, the spirit of Question [Question 1](#conj:zhuStars){reference-type="ref" reference="conj:zhuStars"} is maintained for pseudoforests, even though we cannot always make the last pseudoforest a star forest where all of the components have at most $d$ edges. We point out an appealing corollary where this does in fact hold: **Corollary 1**. *Let $k$ and $d$ be integers such that $d \leq k+1$. 
Every graph with maximum average degree at most $2(k + \frac{d}{d+k+1})$ decomposes into $k + 1$ pseudoforests where one of the pseudoforests $F$ is a forest, every component $K$ of $F$ has $e(K) \leq d$ and is isomorphic to a star.* A natural class where one can apply Corollary [Corollary 1](#cor:starforests){reference-type="ref" reference="cor:starforests"} is the class of graphs with fixed odd maximum degree. Let $G$ be a graph with odd maximum degree $\Delta$. It follows that $G$ has maximum average degree at most $\Delta$. Setting $k = \frac{\Delta-1}{2}$, and $d = \frac{\Delta+1}{2}$, gives that $2(k + \frac{d}{k+d+1}) = \Delta$. Further, $d \leq k+1$. Therefore such graphs decompose into $\frac{\Delta-1}{2} +1$ pseudoforests, where one of the pseudoforests is a star forest where every component has at most $\frac{\Delta+1}{2}$ edges. In addition, we construct examples showing our theorem is best possible in the following sense: **Theorem 1**. *Let $k, \ell, D \in \mathbb N$, $\epsilon > 0$, $k \geq 1$ and $\alpha \in \{0, 1\}$. There are simple graphs $G$ with $\gamma(G) < k + \frac{\ell(k+1) + \alpha}{(\ell + 1)(k+1) + \alpha}+ \epsilon$ that do not have a decomposition into $k+1$ pseudoforests where one of the pseudoforests has maximum degree at most $D$ and the diameter of every component is less than $2\ell + 1 + \alpha$.* Note that for $\ell = \left\lfloor\frac{d-1}{k+1}\right\rfloor$ we have $\frac{(k+1)\ell}{(k+1)(\ell + 1)}< \frac{d}{d+k+1}$ and if $d \equiv 1 \mod (k+1)$, then $\frac{(k+1)\ell + 1}{(k+1)(\ell + 1) + 1}= \frac{d}{d+k+1}$. Furthermore, our second statement that bounds the diameter of large components even more if $k \geq 2$ is also best possible in a similar way: **Theorem 1**. *Let $k, D, d, z \in \mathbb N$, $\epsilon > 0$, $k \geq 2$, $z \leq \left\lfloor\frac{d-1}{k+1}\right\rfloor$. 
There are simple graphs $G$ with $\gamma(G) < k + \frac{d}{d+k+1}+ \epsilon$, where in any decomposition into $k+1$ pseudoforests such that one pseudoforest $F$ has maximum degree at most $D$, $F$ has a component $K$ with $e(K) \geq d - z(k-1) + 1$ and $diam(K) \geq 2(z+1)$.* Note that since $mad(G)/2 < \gamma(G)$ we can replace $\gamma(G)$ with $mad(G)/2$ in both Theorem [Theorem 1](#thm:lowerBound){reference-type="ref" reference="thm:lowerBound"} and [Theorem 1](#thm:z:lowerBound){reference-type="ref" reference="thm:z:lowerBound"} to prove the optimality of Theorem [Theorem 1](#thm:upperBound){reference-type="ref" reference="thm:upperBound"}. On the other hand we can replace the words *pseudoforest* with *forest* in order to get a lower bound for diameter refinements of the Strong Nine Dragon Tree Conjecture. We now sketch how we prove these theorems. For Theorem [Theorem 1](#thm:upperBound){reference-type="ref" reference="thm:upperBound"}, we follow the approach first developed for pseudoforests in [@ndttPsfs] and extended to some forests in [@Yangmatching], which was later pushed further to prove the Nine Dragon Tree Theorem in [@ndtt], then the Pseudoforest Strong Nine Dragon Tree Theorem in [@sndtcPsfs], and which was finally used to give the current best known result on the Strong Nine Dragon Tree Conjecture [@miesMoore]. Although our focus is on establishing the diameter constraints, we will also reprove Theorem [Theorem 1](#thm:psndt){reference-type="ref" reference="thm:psndt"} without much additional effort. The idea of the proof of Theorem [Theorem 1](#thm:upperBound){reference-type="ref" reference="thm:upperBound"} is as follows: we start with some decomposition into $k+1$ pseudoforests and we pick any of the pseudoforests, say $F$, and try to transform $F$ into a forest with bounded component size and diameter. 
We may assume that the remaining $k$ pseudoforests are maximal (i.e., have $v(G)$ edges), as otherwise we can rearrange the decomposition to reduce the number of edges in $F$. Furthermore, we may assume that $F$ either contains a cycle, a component with too many edges, or a component with too large diameter; such components we will call *bad*. We will search around this bad component to try to find nearby components that are small and acyclic which we can use to "augment\" our decomposition. To formalize this, we will use the well-known fact that a graph decomposes into $k$ pseudoforests if and only if it has an orientation with outdegree at most $k$. Using the orientation of the other $k$ pseudoforests, we will be able to prove that if a bad component is close to many acyclic components with few edges (from here on out, called *small* components), we can augment our decomposition such that we can remove the bad component, while maintaining a pseudoforest decomposition. Unfortunately, there is no guarantee that a bad component will be near many small components and so we will explore further from the components near the bad component to try and find an augmentation which will make the components near the bad component small, so that we can then augment and get rid of the bad component. Continuing the exploration until we can no longer do so, we get what we call an *exploration subgraph*. In this exploration subgraph, we put an order on the components (which we call a *legal order*) that encodes in some sense both the distance of components from the bad component, as well as their sizes. 
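The exploration just described can be sketched as a breadth-first search that follows blue arcs forwards and red edges in either direction. This is a minimal illustration, not the paper's implementation; `blue_out` and `red_adj` are hypothetical adjacency maps from a vertex to its blue out-neighbours and red neighbours, respectively.

```python
from collections import deque

def exploration_vertices(root_vertices, blue_out, red_adj):
    """Vertex set of the exploration subgraph rooted at a component R:
    all vertices reachable from R by paths whose steps are blue arcs
    (followed forwards) or red edges (undirected)."""
    seen = set(root_vertices)
    queue = deque(root_vertices)
    while queue:
        v = queue.popleft()
        for w in blue_out.get(v, []) + red_adj.get(v, []):
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return seen

# Blue arc 0 -> 1, red edge 1-2, blue arc 3 -> 0: vertex 3 is NOT
# reached from root {0}, since blue arcs are only followed forwards.
assert exploration_vertices([0], {0: [1], 3: [0]}, {1: [2], 2: [1]}) == {0, 1, 2}
```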
We will show that in this subgraph, either we can augment the decomposition to make components closer to the bad component smaller or acyclic (which we call *reducing* the legal order), which would eventually result in many components near the bad component being small (and thus allowing us to augment our decomposition and get rid of the bad component), or we cannot do any of these augmentations. In this latter case, we will show this implies that the exploration subgraph actually contains a large number of edges, contradicting the maximum average degree bound. The augmentations performed are similar to those in the proof of Theorem [Theorem 1](#thm:psndt){reference-type="ref" reference="thm:psndt"}, but we require a much more refined analysis to obtain Theorem [Theorem 1](#thm:upperBound){reference-type="ref" reference="thm:upperBound"}. For Theorems [Theorem 1](#thm:lowerBound){reference-type="ref" reference="thm:lowerBound"} and [Theorem 1](#thm:z:lowerBound){reference-type="ref" reference="thm:z:lowerBound"} we do the following: we start with an exploration subgraph $C$ in which $F$ contains a component with large diameter and which cannot be improved any further by our algorithm for the upper bounds. We then add a set $S$ of just a few vertices as well as many copies of $C$ adding edges between $S$ and $C$ carefully. Since each of the vertices of $S$ can only have a few adjacent edges in $F$, we can force the edges between $S$ and at least one copy of $C$ to belong to the other $k$ pseudoforests. This will then force many edges within $C$ to belong to $F$, which results in $F$ having long paths. We end the introduction with some notation and conventions. All mentioned paths are meant to be simple. If $G$ is a graph and $X \subseteq V(G)$, then $G[X]$ denotes the *induced subgraph* of $G$ by $X$. We also write $E(X)$ for $E(G[X])$ and $e(X)$ for $e(G[X])$ if it is clear from context which graph $G$ is underlying. 
Further, $deg_X(v)$ denotes the degree and $deg^+_X(v)$ the outdegree of a vertex $v \in X$ in $G[X]$. If $X = V(G)$ and it is clear which underlying graph $G$ we have, we simply write $deg(v)$ or $deg^+(v)$. If $A$ is a set, then let $A + a := A \cup \{a\}$ and $A - a := A \setminus \{a\}$. If $G$ is a graph and $S \subseteq V(G)$, then $G - S$ is the graph obtained by removing the vertices of $S$ and all incident edges from $V(G)$ and $E(G)$, respectively. Further, let $G - v := G - \{v\}$ for vertices $v$ of $G$. Similarly, if $e$ is an edge of a graph $G$, then $G - e := (V(G), E(G) - e)$. Let $E(V_1, V_2)$ denote the set of all edges in $G$ with exactly one endpoint in each of the sets $V_1, V_2 \subseteq V(G)$, and let $e(V_1, V_2) := |E(V_1, V_2)|$. The structure of the paper is as follows: in Section 2 we will prove Theorem [Theorem 1](#thm:upperBound){reference-type="ref" reference="thm:upperBound"} and in Sections 3 and 4 we prove Theorems [Theorem 1](#thm:lowerBound){reference-type="ref" reference="thm:lowerBound"} and [Theorem 1](#thm:z:lowerBound){reference-type="ref" reference="thm:z:lowerBound"}, respectively. # Proof of the upper bounds Our goal in this section is to prove Theorem [Theorem 1](#thm:upperBound){reference-type="ref" reference="thm:upperBound"}. Throughout this section, graphs are allowed to contain parallel edges and loops. Before we begin, we mention a well-known correspondence between pseudoforests and orientations that we will exploit. Recall that, given a graph $G$, an *orientation* of $G$ is obtained by taking each edge $xy \in E(G)$, and replacing $xy$ with exactly one of the arcs $(x,y)$ and $(y,x)$. **Observation 1**. A graph $G$ is a pseudoforest if and only if $G$ admits an orientation where every vertex has outdegree at most one. From this observation, we get an important corollary. **Corollary 1**. 
*A graph admits a decomposition into $k$ pseudoforests if and only if it admits an orientation such that every vertex has outdegree at most $k$.* ## Picking the counterexample We prove Theorem [Theorem 1](#thm:upperBound){reference-type="ref" reference="thm:upperBound"} by contradiction. In this subsection, we describe how we will pick our counterexample and explain the setup for our proof. To that end: let $k$ and $d$ be fixed positive integers, and suppose that $G$ is a vertex-minimal counterexample to Theorem [Theorem 1](#thm:upperBound){reference-type="ref" reference="thm:upperBound"} for these fixed values of $k$ and $d$. Our first step will be to obtain desirable orientations of $G$. For this, we use a lemma proved in [@ndttPsfs] (Lemma $2.1$). Technically, we need a stronger lemma, but the same proof as Lemma $2.1$ in [@ndttPsfs] suffices to prove the strengthening given below. **Lemma 1** ([@ndttPsfs]). *If $G$ is a vertex-minimal counterexample to Theorem [Theorem 1](#thm:upperBound){reference-type="ref" reference="thm:upperBound"}, then there exists an orientation of $G$ such that for all $v \in V(G)$, we have $k \leq deg^+(v) \leq k+1$.* Let $\mathcal{F}$ be the set of edge-coloured (mixed) subgraphs of $G$ where every vertex $v \in V(G)$ has exactly $k$ outgoing blue (directed) edges and all the remaining edges are red, undirected, and induce a pseudoforest. We will call $f \in \mathcal F$ a *red-blue colouring*. Note that $\mathcal F$ is non-empty by Lemma [Lemma 1](#orientationlemma){reference-type="ref" reference="orientationlemma"} and Corollary [Corollary 1](#cor:pseudodecompiff){reference-type="ref" reference="cor:pseudodecompiff"}. Furthermore, we can extract a decomposition into $k+1$ pseudoforests from $f$, where $k$ pseudoforests are coloured blue and one is coloured red. If $k \geq 2$, we have multiple options to form the blue pseudoforests. 
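A brief illustrative sketch (not from the paper) of how $k$ blue out-edges per vertex yield $k$ blue pseudoforests, per the corollary above: distribute each vertex's out-arcs among $k$ classes, so that every vertex has outdegree at most one in each class; each class is then a pseudoforest.

```python
from collections import defaultdict

def pseudoforests_from_orientation(arcs, k):
    """Split an orientation with outdegree at most k into k classes,
    each with per-vertex outdegree at most one; by the observation
    above, every class is a pseudoforest.  Illustrative sketch."""
    out = defaultdict(list)
    for (u, v) in arcs:
        out[u].append((u, v))
    classes = [[] for _ in range(k)]
    for u, lst in out.items():
        assert len(lst) <= k, "orientation must have outdegree at most k"
        for i, arc in enumerate(lst):
            classes[i].append(arc)
    return classes

# K4 oriented with outdegree at most 2 splits into 2 pseudoforests.
arcs = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (3, 0)]
classes = pseudoforests_from_orientation(arcs, 2)
for cls in classes:
    tails = [u for (u, v) in cls]
    assert len(tails) == len(set(tails))  # outdegree <= 1 per class
```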
The red pseudoforest will be the one in which we wish to eliminate cycles and bound the size and diameter of each connected component. As a note for the reader, as we modify a red-blue colouring in the following pages, it will be easier to focus on maintaining a blue $k$-orientation rather than considering the $k$ individual blue pseudoforests. We now categorize the red components that contradict Theorem [Theorem 1](#thm:upperBound){reference-type="ref" reference="thm:upperBound"}. **Definition 1**. A red component $K$ of a pseudoforest $F$ is *bad* if $K$ satisfies any of the following: 1. contains a cycle, or 2. has more than $d$ edges, or 3. has $diam(K) > 2\ell + 2$, or 4. has $diam(K) = 2\ell + 2$ when $d \equiv 1 \mod (k+1)$, or 5. has $diam(K) > 2z$ and $e(K) \geq d - z(k-1) + 1$ for a $z \in \mathbb N$ with $1 \leq z \leq \ell$. We say $K$ is *(i)-bad* if $(i)$ is true for $K$ and $(j)$ is not true for $K$ for all $j < i$. Note that for (5)-bad components we chose the range $1 \leq z \leq \ell$ since if $z = 0$ then $K$ is (2)-bad, and if $z > \ell$ then $K$ is (3)-bad. As $G$ is a counterexample, in every red-blue colouring there is at least one bad component (of the red pseudoforest). Our goal will be to augment the decomposition in order to reduce the number of bad components, or possibly make a bad component "less bad.\" We will not be able to do that in a single step necessarily, so instead we will focus on one bad component and a specific subgraph that stems from this component where we will perform possibly many augmentations to eventually make the bad component less bad. To that end, we define the following subgraph. **Definition 1**. Suppose that $f$ is a red-blue colouring of $G$. Let $R$ be a bad component of this colouring. 
We define the *exploration subgraph $H_{f, R}$ of $f$ rooted at $R$* in the following manner: let $S \subseteq V(G)$ where $v \in S$ if and only if there exists a path $P = v_{1},\ldots,v_{m}$ such that $v_{m}= v$, $v_{1} \in V(R)$, and for each $i \in \{1, \dots, m-1\}$ either $(v_{i},v_{i+1})$ is a blue arc or $v_{i}v_{i+1}$ is a red edge. Let $H_{f, R}$ be the graph induced by $S$, where $E(H_{f, R})$ inherits the colours of $f$ as well as the orientations of the blue arcs (whereas the red edges of $H_{f, R}$ remain undirected). It might not yet be clear why we made this particular definition for $H_{f, R}$; however, the next observation shows that for any exploration subgraph $H_{f, R}$, the red edge density must be low. Before stating this observation, we fix some notation. Given a subgraph $G'$ of $G$, we will let $E_{b}(G')$ and $E_{r}(G')$ denote the sets of edges of $G'$ coloured blue and red, respectively. We let $e_{b}(G') = |E_{b}(G')|$ and $e_{r}(G') = |E_{r}(G')|$. **Observation 1**. For any red-blue colouring $f$ of $G$ and any choice of root component $R$, the exploration subgraph $H_{f, R}$ satisfies $$\frac{e_{r}(H_{f, R})}{v(H_{f, R})} \leq \frac{d}{d+k+1}.$$ *Proof.* Suppose towards a contradiction that $e_{r}(H_{f, R}) / v(H_{f, R}) > d / (d+k+1).$ As $H_{f, R}$ is an induced subgraph defined in part by directed paths and every vertex $v \in V(G)$ has $k$ outgoing blue edges by definition of $f$, each vertex in $H_{f, R}$ has $k$ outgoing blue edges. Thus, $e_b(H_{f, R}) / v(H_{f, R}) = k.$ Therefore $$\frac{\text{mad}(G)}{2} \geq \frac{e(H_{f, R})}{v(H_{f, R})} = \frac{e_{r}(H_{f, R})}{v(H_{f, R})} + \frac{e_{b}(H_{f, R})}{v(H_{f, R})} > k + \frac{d}{d+k+1}.$$ But this contradicts that $G$ has $\text{mad}(G) \leq 2(k+\frac{d}{d+k+1})$. 
◻ Throughout the proof of Theorem [Theorem 1](#thm:upperBound){reference-type="ref" reference="thm:upperBound"}, we will be attempting to show that we can augment a given decomposition in such a way that either we obtain a decomposition satisfying Theorem [Theorem 1](#thm:upperBound){reference-type="ref" reference="thm:upperBound"} or we can find an exploration subgraph $H_{f, R}$ which contradicts Observation [Observation 1](#finishingobservation){reference-type="ref" reference="finishingobservation"}. As Observation [Observation 1](#finishingobservation){reference-type="ref" reference="finishingobservation"} allows us to focus only on red edges, it is natural to focus on red components which have small average degree. With this in mind, we define the notion of a *small red component*. **Definition 1**. Let $K$ be a connected acyclic red component, and let $\ell$ be as in Theorem [Theorem 1](#thm:upperBound){reference-type="ref" reference="thm:upperBound"}. We say $K$ is *small* if $e(K) \leq \ell$. The following lemma motivates the definition of $\ell := \left\lfloor\frac{d-1}{k+1}\right\rfloor$. **Lemma 1**. *A red component $K$ is small if and only if $\frac{e(K)}{v(K)} < \frac{d}{d+k+1}$. In particular, $\ell$ is the largest integer $n$ such that $\frac{n}{n+1} < \frac{d}{d+k+1}$.* *Proof.* If $K$ contains a cycle, then $K$ is not small and we have $\frac{e(K)}{v(K)} = 1 > \frac{d}{d+k+1}$. If $K$ is acyclic, $\frac{e(K)}{v(K)} \!=\! \frac{e(K)}{e(K) + 1} < \frac{d}{d+k+1}$ is equivalent to $e(K) < \frac{d}{k+1}$, which is equivalent to $e(K) \leq \left\lfloor\frac{d-1}{k+1}\right\rfloor = \ell$. ◻ A small component cannot be bad; we prove this in Corollary [Corollary 1](#cor:smallIsNotBad){reference-type="ref" reference="cor:smallIsNotBad"}. We will want to augment our decomposition, and we will want a measure of progress that our decomposition is improving. Of course, reducing the number of bad components or their size would be a clear improvement. 
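The computation in this proof is easy to check numerically. The sketch below (illustrative, not from the paper) verifies for small $k$ and $d$ that $\ell = \left\lfloor\frac{d-1}{k+1}\right\rfloor$ is indeed the largest integer $n$ with $\frac{n}{n+1} < \frac{d}{d+k+1}$.

```python
from fractions import Fraction

def ell(k, d):
    """The threshold from the theorem: maximum edge count of a small component."""
    return (d - 1) // (k + 1)

# Exhaustive check for small parameters: ell(k, d) is the largest n
# with n/(n+1) < d/(d+k+1), and n+1 already violates the inequality.
for k in range(1, 8):
    for d in range(1, 40):
        bound = Fraction(d, d + k + 1)
        n = ell(k, d)
        assert Fraction(n, n + 1) < bound
        assert Fraction(n + 1, n + 2) >= bound
```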
However, this might not always be possible, so we will introduce a notion of a "legal order\" of the red components. This order keeps track of the number of edges in components which are "close\" to the root component, with the idea being that if we can continually perform augmentations to make components "closer\" to the root component have fewer edges without creating any new bad components, then we might be able to perform an augmentation which makes the root component "less bad" or even not bad at all. We formalize this in the following manner. **Definition 1**. We call an order $(R_{1},\ldots,R_{t})$ of the red components of $H_{f, R}$ *legal* if all red components of $H_{f, R}$ are in the order, $R_{1}$ is the root component $R$, and for each $j \in \{2,\ldots,t\}$ there exists an integer $i$ with $1 \leq i < j$ such that there is a blue arc $(u,v)$ with $u \in V(R_{i})$ and $v \in V(R_{j})$. Let $(R_{1},\ldots,R_{t})$ be a legal order. We will say that $R_{i}$ is a *parent* of $R_{j}$ if $i < j$ and there is a blue arc $(v_{i},v_{j})$ in $H_{f, R}$ where $v_{i} \in R_{i}$ and $v_{j} \in R_{j}$. Note that a red component may have many parents. If $R_{i}$ is a parent of $R_{j}$, then we say that $R_{j}$ is a *child of $R_{i}$ generated by $(v_i, v_j)$*. We say a red component $R_{i}$ is an *ancestor* of $R_{j}$ if there exists a sequence of red components $R_{i_{1}},\ldots,R_{i_{m}}$ such that $R_{i_{1}} = R_{i}$, $R_{i_{m}} = R_{j}$, and $R_{i_{q}}$ is a parent of $R_{i_{q+1}}$ for all $q \in \{1,\ldots,m-1\}$. An important concept is that of vertices *witnessing* a legal order. **Definition 1**. Given a legal order $(R_{1},\ldots,R_{t})$ and integer $j \in \{2, \dots, t\}$, we say a vertex $v \in V(R_j)$ *witnesses the legal order for $R_{j}$* if there is a blue arc $(u,v)$ and integer $1 \leq i< j$ such that $u \in V(R_{i})$. Observe that there may be many vertices that witness the legal order for a given red component. 
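A legal order can be built greedily: start with the root and repeatedly append any component that receives a blue arc from a component already placed. The following sketch is illustrative only; `components` is a hypothetical map from component ids to vertex sets.

```python
def legal_order(components, blue_arcs, root):
    """Greedily build a legal order of red components per the
    definition above.  Illustrative sketch, not from the paper."""
    order = [root]
    placed = {root}
    vert_of = {v: c for c, vs in components.items() for v in vs}
    while len(order) < len(components):
        for (u, w) in blue_arcs:
            cu, cw = vert_of[u], vert_of[w]
            if cu in placed and cw not in placed:
                order.append(cw)  # cw now has the parent cu
                placed.add(cw)
                break
        else:
            break  # remaining components are unreachable from the root
    return order

comps = {"R1": {0, 1}, "R2": {2}, "R3": {3}}
arcs = [(1, 2), (2, 3)]  # blue arcs R1 -> R2 and R2 -> R3
assert legal_order(comps, arcs, "R1") == ["R1", "R2", "R3"]
```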
More importantly, for every red component except the root, there exists a vertex that witnesses the legal order. We also want to compare two different legal orders. **Definition 1**. Let $a = (a_{1},\ldots,a_{t})$ and $a' = (a'_{1},\ldots,a'_{t'})$ be two tuples of natural numbers. Let $a_i = 0$ for $t < i \leq t'$ and $a'_i = 0$ for $t' < i \leq t$. We will say $a$ is *lexicographically smaller* than $a'$ if there is an $m \in \mathbb N$ such that $a_i = a'_i$ for all $i < m$ and $a_m < a'_m$. In this case we write $a <_{lex} a'$. Note that this defines a total order $\leq_{lex}$. We are now able to define a rating for red-blue colourings which indicates how close such a colouring is to satisfying Theorem [Theorem 1](#thm:upperBound){reference-type="ref" reference="thm:upperBound"}. Namely: we say that an $(i)$-bad component is always worse than a $(j)$-bad component if $i < j$; that an $(i)$-bad component $K$ is worse than an $(i)$-bad component $K'$ if $e(K) > e(K')$; and if two colourings (with smallest legal orders $(R_1, \dots, R_t)$ and $(R'_1, \dots, R'_{t'})$, respectively) have the same types of bad components, then we compare $(e(R_1), \dots, e(R_t))$ and $(e(R'_1), \dots, e(R'_{t'}))$ under $\leq_{lex}$; the idea is that each such tuple indicates how close the corresponding colouring is to an augmentation that reduces the "badness" of bad components. We will formalize this now. Given a red-blue colouring $f$, let $\Delta^{(i)}(f) := \big(\Delta^{(i)}_{v(G)}(f), \Delta^{(i)}_{v(G)-1}(f), \ldots, \Delta^{(i)}_0(f)\big)$ where $\Delta^{(i)}_j(f)$ is the number of red components of $f$ that are $(i)$-bad and have exactly $j$ edges. Let $\sigma(f, R)$ denote the lexicographically smallest tuple $(e(R_1), \ldots, e(R_t))$ over all legal orders $(R_1, \ldots, R_t)$ of $H_{f, R}$.
We call a legal order $(R'_1, \ldots, R'_{t'})$ with $\sigma(f, R) = (e(R'_1), \ldots, e(R'_{t'}))$ a *smallest legal order*.\ For our vertex-minimal counterexample, we choose a colouring $f \in \mathcal F$ and a bad component $R$ of the red pseudoforest $F$ of $f$ such that $\Delta(f, R) = \big(\Delta^{(1)}(f), \ldots, \Delta^{(5)}(f), \sigma(f, R)\big)$ is lexicographically smallest (where we also use $\leq_{lex}$ to compare values of $\sigma$ and $\Delta^{(i)}$ for every $i$).\ For the occasions where we only want to compare the bad components of two red pseudoforests, we define $\Delta'(f) = \big(\Delta^{(1)}(f), \ldots, \Delta^{(5)}(f)\big)$. From here on out, we will fix $f, R, F$ picked in the manner described above. If not stated otherwise, parental relationships between red components will be considered in the context of an arbitrary but fixed smallest legal order $(R_1, \ldots, R_t)$. In order to augment our decomposition we will only use the following simple procedure taken from [@sndtcPsfs]. **Definition 1**. Let $K$ be a red component of $H_{f, R}$ and let $C$ be an acyclic child of $K$ generated by $(x, y)$. Suppose that $e = xv$ is a red edge in $K$ incident to $x$. To *exchange $e$ and $(x,y)$* is to perform the following procedure: first, change the colour of $(x,y)$ to red and remove its orientation. Second, change the colour of $e$ to blue and orient it to $(x,v)$. **Observation 1** ([@sndtcPsfs]). Suppose we exchange the edge $e = xv$ and $(x,y)$. Then the resulting red-blue colouring is in $\mathcal F$. We will implicitly use Observation [Observation 1](#flipobservation){reference-type="ref" reference="flipobservation"} throughout the paper. Eventually, we will need to show that if we cannot augment our decomposition then the average degree of $H_{f, R}$ is too high, contradicting Observation [Observation 1](#finishingobservation){reference-type="ref" reference="finishingobservation"}. 
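As an aside, the characterization of small components used throughout this argument can be sanity-checked numerically. The sketch below is illustrative only and not part of the proof; the helper name `is_small` and the parameter ranges are our own choices.

```python
from fractions import Fraction

def is_small(e: int, d: int, k: int) -> bool:
    """An acyclic component with e edges has v = e + 1 vertices; it is
    'small' iff its density e/(e+1) lies below the target d/(d+k+1)."""
    return Fraction(e, e + 1) < Fraction(d, d + k + 1)

# Check the characterization: small  <=>  e <= ell = floor((d-1)/(k+1)).
for d in range(2, 40):
    for k in range(1, 9):
        ell = (d - 1) // (k + 1)
        assert all(is_small(e, d, k) == (e <= ell) for e in range(2 * d + 2))
```

For instance, with $d = 10$ and $k = 2$ we get $\ell = 3$: a tree with $3$ edges is small, while one with $4$ edges is not.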
It is easy to show that if $H_{f, R}$ does not contain any small red components, then the average degree of $H_{f, R}$ is too high, since $\frac{e(K)}{v(K)} \geq \frac{d}{d+k+1}$ if and only if $K$ is not small. However, getting rid of all small components is not realistic. To get around this, we partition the components of $F$ so that each part contains a non-small component along with some of its small children. If the average degree of every part is at least $\frac{d}{d+k+1}$, then we still obtain the same contradiction. This partitioning requires that small components always have a parent that is not small. We will prove this later in Corollary [Corollary 1](#smallchildrencorollary){reference-type="ref" reference="smallchildrencorollary"}. **Definition 1**. Denote the set of red components of $H_{f, R}$ that are not small by $\mathcal K$. In an arbitrary fashion we assign each small component to exactly one of its parents in $\mathcal K$.\ Let $K \in \mathcal K$ and let $C_1, \dots, C_q$ be the small children of $K$ that were assigned to $K$.\ Then $\mathcal{C}(K) := \{C_1, \dots, C_q\}$ and $$K_{\mathcal C}:= \big(V(K) \cup \bigcup_{C \in \mathcal{C}(K)} V(C), \;E(K) \cup \bigcup_{C \in \mathcal{C}(K)} E(C)\big).$$ **Lemma 1**. *Assume that in $H_{f, R}$* 1. *small components do not have small children,* 2. *we have $\frac{e(K_{\mathcal C})}{v(K_{\mathcal C})}> \frac{d}{d+k+1}$ for every bad component $K$, and* 3.
*we have $\frac{e(K_{\mathcal C})}{v(K_{\mathcal C})}\geq \frac{d}{d+k+1}$ for every non-bad red component $K$.* *Then we obtain a contradiction to Observation [Observation 1](#finishingobservation){reference-type="ref" reference="finishingobservation"}, which proves Theorem [Theorem 1](#thm:upperBound){reference-type="ref" reference="thm:upperBound"}.* *Proof.* As small components do not have small children by assumption, it follows that any red component of $H_{f, R}$ is a subgraph of $K_{\mathcal C}$ for a non-small component $K$. We have that $\frac{e(K_{\mathcal C})}{v(K_{\mathcal C})}\geq \frac{d}{d+k+1}$ holds for every $K_{\mathcal C}$ of this partition of $\big(V(H_{f, R}), E_r(H_{f, R})\big)$. Since there is at least one (non-small) bad component for which this inequality is strict, we obtain a contradiction to Observation [Observation 1](#finishingobservation){reference-type="ref" reference="finishingobservation"}. ◻ The setup for our proof is now done. It remains to show that the three conditions of Lemma [Lemma 1](#lemma:howToGetMADContradiction){reference-type="ref" reference="lemma:howToGetMADContradiction"} hold for our optimally chosen colouring $f$. In the following subsections we will show the second condition for all types of bad components. In the final Subsection [2.5](#sec:bad5nonBad){reference-type="ref" reference="sec:bad5nonBad"} we will also show the first and third conditions. ## Density of $K_{\mathcal C}$ if $K$ is (1)-bad In this subsection, we show that if we have a component which is (1)-bad, then it has no small children, and thus contributes to showing that the average degree of our exploration subgraph is too high. **Observation 1**. If $C$ is an acyclic child of $K$ generated by $(x,y)$, then $x$ does not lie in a cycle of $F$. *Proof.* Suppose towards a contradiction that $x$ lies in a cycle of $F$. Let $e$ be an edge of this cycle incident to $x$. Now exchange $e$ and $(x,y)$.
As $(x,y)$ was an arc between two distinct red components and $e$ lay on the red cycle, after performing the exchange, we reduce the number of cycles in $F$ by one and do not affect other cyclic components. Thus the exchange results in a red-blue colouring with a smaller $\Delta^{(1)}$, which is a contradiction. ◻ **Lemma 1**. *If $K$ is a cyclic red component of $H_{f, R}$, then there is no blue arc $(x, y)$ from $K$ to an acyclic red component $C$.* *Proof.* Suppose towards a contradiction that there is such an arc. By Observation [Observation 1](#redcyclesaturationlemma){reference-type="ref" reference="redcyclesaturationlemma"} we know that $x$ does not lie on a cycle of $F$. There is a unique red path from $x$ to the cycle of $K$. On this path let $w$ be the neighbour of $x$. Exchange $xw$ with $(x, y)$. This creates a new acyclic red component containing $x$ and $y$, and a new cyclic red component containing $w$ which has fewer edges than $K$. Again, this results in a red-blue colouring with smaller $\Delta^{(1)}$, a contradiction. ◻ We are now equipped to prove the main result of this subsection. **Lemma 1**. *If a red component $K$ of $H_{f, R}$ is (1)-bad, then $\frac{e(K_{\mathcal C})}{v(K_{\mathcal C})}> \frac{d}{d+k+1}$.* *Proof.* If $K$ is cyclic, then $K$ does not have any small children by Lemma [Lemma 1](#lemma:cyclicDoesntHaveAcyclicChildren){reference-type="ref" reference="lemma:cyclicDoesntHaveAcyclicChildren"}. Thus, $$\frac{e(K_{\mathcal C})}{v(K_{\mathcal C})}= \frac{e(K)}{v(K)} = 1 > \frac{d}{d+k+1}.$$ ◻ ## Density of $K_{\mathcal C}$ if $K$ is (3)-bad In this subsection, we build towards bounding the density of $K_{\mathcal C}$ when $K$ is (3)-bad. Recall that a red component is (3)-bad if it is acyclic, has at most $d$ edges, and has diameter at least $2\ell + 3$. First, we aim to show that small components are not bad. To that end, we prove the following. **Lemma 1**.
*We have that $d > \ell (k+1)$; and if $z \in \{0, \ldots, \ell\}$, we have moreover that $d - z(k-1) + 1 > 2\ell + 1.$* *Proof.* We have $$\ell (k+1) = \left\lfloor\frac{d-1}{k+1}\right\rfloor (k+1) \leq d - 1.$$ For the second part, since $k \geq 1$ and $z \leq \ell$ we obtain $d - z(k-1) + 1 \geq d - \ell(k-1) + 1 > \ell(k+1) - \ell(k-1) + 1 = 2\ell + 1$. ◻ This leads us to the useful corollary below. **Corollary 1**. *If $K$ is an acyclic red component with $e(K) \leq 2\ell + 1$, then $K$ is not bad. In particular, small red components are not bad.* The following technical lemma will be important for several future manipulations we will perform upon $F$. **Lemma 1**. *If $K$ is a red acyclic component of $H_{f, R}$ and there is a colouring $f' \in \mathcal F$ with a red forest $F'$ whose set of components can be obtained from the set of components of $F$ by:* - *removing $K$,* - *possibly adding acyclic components $K_1, ..., K_q$ with $q \in \mathbb N, e(K_i) < e(K)$ and $diam(K_i) \leq diam(K)$ for every $1 \leq i \leq q$, and* - *possibly adding or removing non-bad components,* *then $K, K_1, ..., K_q$ are not bad and thus $\Delta'(f') \leq_{\text{lex}} \Delta'(f)$.* *Proof.* Obviously, the addition and removal of non-bad components does not change $\Delta'$.\ First suppose that $K$ is bad. Since $K$ is acyclic, it follows that $K$ is $(b)$-bad for some $b \in \{2, \ldots, 5\}$. Then $\Delta^{(b)}_{e(K)}$ decreases and $\Delta^{(b)}_i$ remains the same for all $i > e(K)$ when manipulating $F$ as described above. Thus, $\Delta'$ decreases, which is a contradiction.\ Next, suppose that $K_i$ is $(b)$-bad, where $i \in \{1, ..., q\}$ and $b \in \{2, ..., 5\}$. But then $K$ is also $(b)$-bad and we again obtain a contradiction as $\Delta'$ decreases when performing the operations described in the lemma. ◻ In the next two lemmas we will show that the large diameter of $(3)$-bad components prevents them from having small children in our optimal colouring $f$. **Observation 1**.
For any vertex $v$ of a tree $T$, there is a path of length at least $\left\lceil\frac{diam(T)}{2}\right\rceil$ starting at $v$. If $v$ does not lie on a longest path of $T$, then this bound can be increased by one. *Proof.* Let $P$ be a path that attains the diameter of $T$. If $v \in V(P)$, then the result is immediate. Otherwise, $v \not \in V(P)$ and let $P' = v, \dots, v'$ be the shortest path from $v$ to $P$. Concatenating $P'$ and the longest path in $P$ starting at $v'$ gives the result. ◻ **Lemma 1**. *If there is a small child $C$ of a red acyclic component $K$ of $H_{f, R}$ generated by $(x,y)$, then every red path starting at $x$ has length at most $e(C)+1$, and the diameter of $K$ is at most $2e(C) +2$.* *Proof.* Suppose towards a contradiction that there is a path of length at least $e(C) + 2$ starting at $x$. Let $P$ be this path, and let $e$ be the edge on this path that is incident with $x$. Exchange $e$ with $(x,y)$. This does not create any cyclic component. Let $K'$ and $K''$ be the resulting new components, where $x \in V(K')$. We have $e(K'') < e(K)$ and $diam(K'') \leq diam(K)$, since $K''$ is a subgraph of $K - e$.\ Furthermore, we have that $e(K') \leq e(K) - e(P) + e(C) + 1 < e(K)$ and it is also easy to see that $diam(K') \leq diam(K)$. By Lemma [Lemma 1](#lemma:replaceBadCompWithSmallerBadComps){reference-type="ref" reference="lemma:replaceBadCompWithSmallerBadComps"} we get that $K, K'$ and $K''$ are not bad and thus $\Delta'$ has not increased.\ Finally, we can construct a smaller legal order by taking $(R_1, \ldots, R_t)$ up until $K$, and then replacing $K$ with one of $K'$ and $K''$ containing a vertex that witnesses this legal order, and completing the order arbitrarily. By this contradiction, it follows that every path of $K$ starting at $x$ has length at most $e(C) + 1$. 
If $diam(K) \geq 2e(C) + 3$, then by Observation [Observation 1](#obs:diamToPathLB){reference-type="ref" reference="obs:diamToPathLB"} there would be a path of length $e(C) + 2$ starting at $x$, which is a contradiction. ◻ For red acyclic components of large diameter, Lemma [Lemma 1](#lemma:boundDiamBySizeOfChild){reference-type="ref" reference="lemma:boundDiamBySizeOfChild"} gives the following. **Corollary 1**. *Let $K$ be an acyclic red component of $H_{f, R}$. If $diam(K) > 2\ell +2$, then $K$ does not have any small children.* We are now equipped to prove the main result of this subsection. **Lemma 1**. *If a red component $K$ of $H_{f, R}$ is (3)-bad, then $\frac{e(K_{\mathcal C})}{v(K_{\mathcal C})}> \frac{d}{d+k+1}$.* *Proof.* If $K$ is (3)-bad, then by Corollary [Corollary 1](#cor:tooLargeDiamMeansNoChildren){reference-type="ref" reference="cor:tooLargeDiamMeansNoChildren"} it does not have any small children. Thus, $$\frac{e(K_{\mathcal C})}{v(K_{\mathcal C})} = \frac{e(K)}{v(K)} \geq \frac{2\ell +3}{2\ell+4} > \frac{\ell + 1}{\ell + 2} \geq \frac{d}{d+k+1}.$$ The last inequality holds due to Lemma [Lemma 1](#lemma:ellAndDensity){reference-type="ref" reference="lemma:ellAndDensity"}. ◻ ## Density of $K_{\mathcal C}$ if $K$ is (2)-,(4)- or (5)-bad We will not be able to get rid of all the small children of non-small red components that are not (1)- or (3)-bad, as we did in the previous subsections. However, we will bound the number of small children components in the following lemma. A similar lemma can be found in [@sndtcPsfs], but in our case we have to carry out a more careful analysis to ensure that we do not create new bad components when manipulating $f$. **Lemma 1**. *If $K$ is an acyclic red component of $H_{f, R}$, then $K$ has at most $k$ small children.* *Proof.* Suppose that $K$ has $k+1$ small children.
As each vertex has only $k$ outgoing blue edges, there exist two distinct vertices $x_1, x_2 \in V(K)$ such that there are two distinct small children $C_1$ and $C_2$ generated by blue arcs $(x_1, y_1)$ and $(x_2, y_2)$, respectively. Let $e_1$ and $e_2$ be the edges on the red path between $x_1$ and $x_2$ that are incident to $x_1$ and $x_2$, respectively. Note that $e_1$ and $e_2$ are not necessarily distinct. For $i \in \{1, 2\}$ let $K_i$ be the component of $K - e_i$ containing $x_i$ and let $K'_i = (V(K_i) \cup V(C_i), \; E(K_i) \cup E(C_i) + x_i y_i)$. Let $L_i = K - V(K_j)$, where $j = 3 - i$. Furthermore, we define *doing exchange i* to mean exchanging $e_i$ and $(x_i, y_i)$. Note that after doing exchange $i$, the red pseudoforest of the resulting colouring contains the components $K'_i$ and $L_j$ with $e(L_j) < e(K)$. Thus, we call $K'_1, L_2, C_2$ and $K'_2, L_1, C_1$ *new components*. The described components are depicted in Figure [\[fig:componentsInLeqKChildren\]](#fig:componentsInLeqKChildren){reference-type="ref" reference="fig:componentsInLeqKChildren"}. Without loss of generality let $L_2$ contain a vertex that witnesses the legal order (if $K$ is not the root). **Claim 1**. *We have $e(K'_i) < e(K)$ for each $i \in \{1, 2\}$.* *Proof.* Let $i \in \{1,2\}$, $j = 3-i$ and suppose towards a contradiction that we have $e(K'_i) \geq e(K)$. Then we have $$\begin{aligned} e(K_j) &\leq e(K) - 1 - e(K_i) \\ &= e(K) - 1 - (e(K'_i) - e(C_i) - 1) \\ &\leq \ell, \textnormal{\hskip 10mm since $C_i$ is small}. \end{aligned}$$ First, suppose that $j = 2$ and $e(K) \leq 2\ell+1$. It follows that $e(K_{1}) \leq \ell$ since $e(K_{2}) \leq \ell$ and $e(K) \geq e(K_{1}) + e(K_{2}) +1$. After doing exchange $1$ we have $e(K'_1) \leq 2\ell + 1$ and $e(L_2) < e(K)$. Thus, none of the new components are bad by Corollary [Corollary 1](#cor:smallIsNotBad){reference-type="ref" reference="cor:smallIsNotBad"} and also $K$ is not bad. 
We see that after the exchange we obtain a smaller legal order by taking the same legal order up to $K$ but replacing $K$ with $L_2$ and then completing the order arbitrarily.\ Thus, this case cannot happen and it must be that $j = 1$ or $e(K) > 2\ell + 1$. In this case we perform exchange $j$ instead of exchange $i$; then $e(K'_j) \leq 2\ell + 1$, since $e(K_j) \leq \ell$. Note that $K'_j$ is not bad by Corollary [Corollary 1](#cor:smallIsNotBad){reference-type="ref" reference="cor:smallIsNotBad"}, and so by Lemma [Lemma 1](#lemma:replaceBadCompWithSmallerBadComps){reference-type="ref" reference="lemma:replaceBadCompWithSmallerBadComps"} we get that neither $K$ nor any of the new components are bad and in particular, $K$ is not the root component.\ We can again obtain a smaller legal order by taking the same legal order up to $K$ but replacing $K$ with $K'_2$ if $j=2$ and $e(K) \leq 2\ell + 1$, or with $L_2$ otherwise, and then completing the order arbitrarily. ◻ For each $i \in \{1,2\}$, let $j = 3-i$ and let $r_i$ be the number of edges in a longest path in $K_i$ starting at $x_i$. Note that $r_i \leq e(C_j)$, as otherwise there would be a path of size $e(C_j) + 1 + |\{e_j\}|$ starting at $x_j$, which is a contradiction to Lemma [Lemma 1](#lemma:boundDiamBySizeOfChild){reference-type="ref" reference="lemma:boundDiamBySizeOfChild"}. Similarly, $r_i \leq e(C_i) + 1$. **Claim 2**. *$diam(K'_1), diam(K'_2) \leq 2\ell + 1$.* *Proof.* Suppose $diam(K'_i) \geq 2\ell + 2$. As $e(C_i) \leq \ell$ and $r_i \leq e(C_j) \leq \ell$, a longest path in $K'_i$ that contains $x_{i}$ has at most $2\ell + 1$ edges and thus, $x_i$ does not lie on a longest path of $K'_i$. But by Observation [Observation 1](#obs:diamToPathLB){reference-type="ref" reference="obs:diamToPathLB"} this gives a path of size $\ell + 2$ starting at $x_i$, which is a contradiction since every path in $K'_i$ starting at $x_i$ is fully contained in either $K_i$ or $C_i + x_iy_i$.
◻ Let us look at the consequences of the two claims regarding whether or not $K$ is bad: we defined $K$ to be acyclic, thus it is not (1)-bad and neither are any new components.\ Using Claim [Claim 1](#claim:atMostKChildren:numEdges){reference-type="ref" reference="claim:atMostKChildren:numEdges"} we know that $K$ is not (2)-bad or otherwise $\Delta^{(2)}$ would decrease when doing exchange $1$ or $2$. Thus, none of the new components are (2)-bad.\ Analogously, using Claim [Claim 2](#claim:atMostKChildren:diam){reference-type="ref" reference="claim:atMostKChildren:diam"} we could decrease $\Delta'$ if $K$ was (3)-bad or (4)-bad and also none of the new components are (3)-bad or (4)-bad.\ If $K$ was (5)-bad, it is again clear by Claim [Claim 1](#claim:atMostKChildren:numEdges){reference-type="ref" reference="claim:atMostKChildren:numEdges"} that we could decrease $\Delta'$.\ If there is an $i \in \{1, 2\}$ such that $K'_i$ is not (5)-bad (and thus not bad), then this either decreases $\Delta'$ or we find a smaller legal order after doing exchange $i$ by taking the same legal order up to $K$ but replacing $K$ with the component $K'_2$, if $i = 2$, or $L_2$, if $i = 1$ and then completing the order arbitrarily.\ Thus, for the rest of the proof we assume that both $K'_1$ and $K'_2$ are (5)-bad and we aim to show that $K$ is also (5)-bad. This proves the lemma, as in this case we can do either exchange $1$ or $2$ and get a smaller $\Delta'$, by Claim [Claim 1](#claim:atMostKChildren:numEdges){reference-type="ref" reference="claim:atMostKChildren:numEdges"}, a contradiction.\ If a longest path $P$ of $K'_i$ does not contain $x_i$, then either $P \subseteq C_i$ and thus $diam(K'_i) \leq \ell$, or $P \subseteq K_i$, thus $diam(K'_i) = diam(K_i)$ and by Observation [Observation 1](#obs:diamToPathLB){reference-type="ref" reference="obs:diamToPathLB"} we obtain $diam(K'_i) \leq 2r_i$. As $r_{i} \leq e(C_{i}) + 1$, it follows that $diam(K'_i) \leq r_i + e(C_i) + 1$. 
On the other hand if there exists a longest path of $K'_i$ containing $x_i$, we also have $diam(K'_i) \leq r_i + e(C_i) + 1$. Thus, in any case we have that $diam(K'_i) \leq r_i + e(C_i) + 1$. As both $K'_1$ and $K'_2$ are (5)-bad, there are natural numbers $z_1$ and $z_2$ such that for each $i \in \{1,2\}$, we have $diam(K'_i) \geq 2z_i + 1$ and $e(K'_i) \geq d - z_i(k-1) + 1$. Thus $$\begin{aligned} e(K'_i) &\geq d - \left\lfloor\frac{diam(K'_i) - 1}{2}\right\rfloor (k-1) + 1\\ &\geq d - \left\lfloor\frac{r_i + e(C_i)}{2}\right\rfloor (k-1) + 1. \end{aligned}$$ We know that $diam(K) \geq r_1 + r_2 + 1$. Thus, it suffices to prove $e(K) \geq d - \left\lfloor\frac{r_1 + r_2}{2}\right\rfloor (k-1) + 1$ in order to prove that $K$ is (5)-bad, completing the proof of the lemma as explained above. Observe $$\begin{aligned} e(K) &\geq e(K_1) + e(K_2) + 1 \\ &= \sum_{i=1}^2 \big(e(K'_i) - e(C_i) - 1 \big) + 1 \\ &\geq \sum_{i=1}^2 \Big( d - \left\lfloor\frac{r_i + e(C_i)}{2}\right\rfloor (k-1) - e(C_i) \Big) + 1. \end{aligned}$$ From here, let $\alpha_i = 1$ if $r_i \not \equiv e(C_i) \mod 2$ and 0 otherwise. Let $\beta = 1$ if $e(C_1) \not \equiv e(C_2) \mod 2$, and 0 otherwise. Note that $\left\lfloor\frac{r_i + e(C_i)}{2}\right\rfloor = \frac{r_i + e(C_i)-\alpha_i}{2}$. Then $$\begin{aligned} e(K) &\geq 2d - \left(\frac{e(C_1)+r_1-\alpha_1}{2} + \frac{e(C_2) + r_2 - \alpha_2}{2} \right)(k-1)-(e(C_1) + e(C_2)) + 1 \\ &= d - \frac{r_1+r_2-(\alpha_1 + \alpha_2)}{2}(k-1)+1+d-\frac{k+1}{2}(e(C_1) + e(C_2)) \\ &\geq d-\frac{r_1+r_2-(\alpha_1 + \alpha_2)}{2}(k-1)+1+d-\frac{k+1}{2}(2\ell - \beta) \textrm{ since $e(C_i) \leq \ell$ for $i \in \{1,2\}$} \\ &> d -\frac{r_1 + r_2 - (\alpha_1 + \alpha_2 + \beta)}{2}(k-1) + 1.\end{aligned}$$ If $1 \in \{\alpha_1, \alpha_2, \beta\}$, then $\left\lfloor\frac{r_1 + r_2}{2}\right\rfloor \geq \frac{r_1 + r_2 - (\alpha_1 + \alpha_2 + \beta)}{2}$. 
If $1 \not \in \{\alpha_1, \alpha_2, \beta\}$, then by definition of $\alpha_1, \alpha_2$, and $\beta$ we have that $r_1 + r_2$ is even, and so again $\left\lfloor\frac{r_1 + r_2}{2}\right\rfloor \geq \frac{r_1 + r_2 - (\alpha_1 + \alpha_2 + \beta)}{2}$. Thus in either case $K$ is (5)-bad since $$e(K) \geq d - \left\lfloor\frac{r_1 + r_2}{2}\right\rfloor (k-1)+1.$$ ◻ **Lemma 1**. *If a red component $K$ of $H_{f, R}$ is (2)-bad, then $\frac{e(K_{\mathcal C})}{v(K_{\mathcal C})}> \frac{d}{d+k+1}$.* *Proof.* Note that we have $|\mathcal C(K)| \leq k$ by Lemma [Lemma 1](#lemma:atMostKChildren){reference-type="ref" reference="lemma:atMostKChildren"}. Suppose that $K$ is (2)-bad. Then we have $$\frac{e(K_{\mathcal C})}{v(K_{\mathcal C})} = \frac{e(K) + \sum_{C \in \mathcal C(K)} e(C)}{v(K) + \sum_{C \in \mathcal C(K)} v(C)} \geq \frac{(d + 1) + k \cdot 0}{(d + 2) + k \cdot 1} > \frac{d}{d+k+1}.$$ ◻ In order to show the same for (4)-bad components, we only need the following simple corollary from Lemma [Lemma 1](#lemma:boundDiamBySizeOfChild){reference-type="ref" reference="lemma:boundDiamBySizeOfChild"}: **Corollary 1**. *Let $K$ be an acyclic red component of $H_{f, R}$. If $diam(K) \geq 2\ell + 1$, then for all small children $C$ of $K$, we have $e(C) = \ell$.* **Lemma 1**. *If a red component $K$ of $H_{f, R}$ is (4)-bad, then $\frac{e(K_{\mathcal C})}{v(K_{\mathcal C})}> \frac{d}{d+k+1}$.* *Proof.* Let $K$ be (4)-bad. Since $d \equiv 1 \mod (k+1)$ we have $(k+1)\ell = d - 1$. Thus $$\begin{aligned} \frac{e(K_{\mathcal C})}{v(K_{\mathcal C})} &\geq \frac{(2\ell + 2) + k\ell}{(2\ell + 3) + k(\ell + 1)} \\ &= \frac{(k+1)\ell + \ell + 2}{(k+1)\ell + k + \ell + 3}\\ &= \frac{d + (\ell + 1)}{d + k + 1 + (\ell + 1)}\\ &> \frac{d}{d+k+1}. \end{aligned}$$ ◻ **Lemma 1**. *If a red component $K$ of $H_{f, R}$ is (5)-bad, then $\frac{e(K_{\mathcal C})}{v(K_{\mathcal C})}> \frac{d}{d+k+1}$.* *Proof.* Suppose $K$ is (5)-bad. 
Then we have that $e(K) \geq d - z(k-1) + 1$ and $diam(K) > 2z$ for some $z \in \mathbb N$ with $1 \leq z \leq \ell$. Thus, $e(C) \geq z$ for every small child $C$ of $K$ by Lemma [Lemma 1](#lemma:boundDiamBySizeOfChild){reference-type="ref" reference="lemma:boundDiamBySizeOfChild"}, which gives us $$\frac{e(K_{\mathcal C})}{v(K_{\mathcal C})} \geq \frac{e(K) + kz}{e(K) + 1 + k(z+1)} \geq \frac{d - z(k-1) + 1 + kz}{d - z(k-1) + 2 + k(z+1)} \geq \frac{d + (z + 1)}{d + k + 1 + (z + 1)} > \frac{d}{d+k+1}.$$ ◻ ## Density of $K_{\mathcal C}$ if $K$ is not bad {#sec:bad5nonBad} In this subsection we will show that the density of $K_{\mathcal C}$ for non-bad components is large if $K$ is not small. If $K$ is small, we will show that $K$ has no small children. Before this we start with a technical lemma: **Lemma 1**. *Let $K$ be a red acyclic component of $H_{f, R}$ and $C$ be a small child of $K$ generated by $(x, y)$. Then $K' := (V(K) \cup V(C), E(K) \cup E(C) + xy)$ has diameter at most $2e(C) + 2$.* *Proof.* If $K'$ contains a path of size $2e(C) + 3$, then by Lemma [Lemma 1](#lemma:boundDiamBySizeOfChild){reference-type="ref" reference="lemma:boundDiamBySizeOfChild"} this path must contain $xy$ and therefore contain $x$. But again by Lemma [Lemma 1](#lemma:boundDiamBySizeOfChild){reference-type="ref" reference="lemma:boundDiamBySizeOfChild"}, there are only paths of size at most $e(C) + 1$ starting at $x$ in $K$ and $E(C) + xy$ also has at most $e(C) + 1$ edges. ◻ **Lemma 1**. *Let $K$ be a red component of $H_{f, R}$ that is not bad. Let $C$ be a small child of $K$ generated by $(x, y)$. Furthermore, suppose that $K$ is small or $d \not\equiv 1 \mod (k+1)$ or $e(C) < \ell$. Then $e(K) \geq d - e(C)k$.* *Proof.* Assume to the contrary that $e(K) + e(C) + 1 < d - e(C)(k-1) + 1$. Let $L := (V(K) \cup V(C), E(K) \cup E(C) + xy)$. **Claim 3**. *$L$ is not bad.* *Proof.* Note that $L$ is acyclic and thus not $(1)$-bad.
By our assumption we have $e(L) < d - e(C)(k-1) + 1$ and thus it is not $(2)$-bad. Further, $L$ is not $(5)$-bad. To see this, suppose there exists $z \in \{1, \dots, \ell\}$ such that $e(L) \geq d-z(k-1)+1$. By the assumption, $z > e(C)$. By Lemma [Lemma 1](#lemma:diamOfKPlusC){reference-type="ref" reference="lemma:diamOfKPlusC"} it follows that the diameter of $L$ is at most $2e(C)+2$, and thus the diameter of $L$ is at most $2z$, implying that $L$ is not $(5)$-bad. Similarly, by Lemma [Lemma 1](#lemma:diamOfKPlusC){reference-type="ref" reference="lemma:diamOfKPlusC"} we obtain that $diam(L) \leq 2e(C) + 2$ and thus it is not (3)-bad. Finally, we have that $L$ is not (4)-bad, since if $K$ is small we have $e(L) \leq 2\ell + 1$ and if $e(C) < \ell$, then by Lemma [Lemma 1](#lemma:diamOfKPlusC){reference-type="ref" reference="lemma:diamOfKPlusC"} we even have $diam(L) \leq 2\ell$. ◻ Since $K$ is not bad, it is not the root component $R$ and thus there is a vertex $w \in V(K)$ witnessing the legal order. **Case 1:** $w \neq x$.\ Let $e$ be the red edge incident to $x$ in $K$ such that $e$ lies on the path from $x$ to $w$ in $K$. Then exchange $e$ and $(x,y)$ and let $K'$ and $K''$ be the new red components containing $x$ and $w$, respectively. As $K'$ and $K''$ are subgraphs of $L$ and $L$ is not bad by Claim [Claim 3](#claim:Lnotbad){reference-type="ref" reference="claim:Lnotbad"}, it follows that $K'$ and $K''$ are not bad. Furthermore, we find a smaller legal order by taking the same legal order up to $K$ but replacing $K$ with its proper subgraph $K''$ and completing the order arbitrarily. **Case 2:** $w = x$.\ We refer the reader to Figure [\[Case2\]](#Case2){reference-type="ref" reference="Case2"} for an illustration. As $K$ is not the root component, $K$ has an ancestor. As $x$ witnesses the legal order, there is a parent component $S_1$ of $K$ that has a blue arc to $x$.
If $S_1$ does not have any edges and thus only consists of a single vertex $x_1$, then $x_1$ also witnesses the legal order. In this manner we can find an ancestor of $K$ that contains an edge, since the root component $R$ contains at least one edge. Let $S_1, \ldots, S_n$ be a sequence of red components such that $K$ is a child of $S_1$, $e(S_n) \geq 1$ and for $i \in \{1, \ldots, n-1\}$, $S_i$ is a child of $S_{i+1}$ and $e(S_i) = 0$. There is a blue directed path $P = x_n,\ldots,x_1,x,y$ with $x_{i} \in V(S_{i})$ for all $i \in \{1, \ldots n\}$. Let $e$ be a red edge incident to $x_{n}$. Now do the following. Colour $(x,y)$ red, remove its orientation and reverse the direction of all remaining arcs in $P$. Colour $e$ blue, and orient $e$ away from $x_{n}$. The resulting coloured mixed graph is in $\mathcal F$, which contains the red and non-bad component $L$. By Lemma [Lemma 1](#lemma:replaceBadCompWithSmallerBadComps){reference-type="ref" reference="lemma:replaceBadCompWithSmallerBadComps"} we can conclude that neither $S_{n}$ nor the components of $S_{n}-e$ are bad. Thus, none of the components that have been manipulated by the exchange are bad and in particular, $S_{n}$ is not the root.\ Finally, we can find a smaller legal order in the colouring of this orientation, as we simply take the same legal order up to $S_{n}$, replace $S_{n}$ with one of the two components of $S_{n}-e$ and complete the remaining order arbitrarily. ◻ **Corollary 1**. *If $K$ is a small red component of $H_{f, R}$, then $K$ does not have any small red children. Furthermore, every small red component of $H_{f, R}$ has a parent component which is not small.* *Proof.* Suppose $K$ is small and has a small child $C$. Then $e(K) \leq \ell$ by definition of small, and by Lemma [Lemma 1](#lemma:linkKWithCIfBelowZBound){reference-type="ref" reference="lemma:linkKWithCIfBelowZBound"} $e(K) \geq d-e(C)k$. Moreover since $C$ is small, $e(K) \geq d-\ell k$. 
Thus $d-\ell k \leq \ell$, or $d \leq \ell(k+1)$. This contradicts the definition of $\ell$.\ For the second part of the corollary, recall that the root component is bad and therefore not small due to Corollary [Corollary 1](#cor:smallIsNotBad){reference-type="ref" reference="cor:smallIsNotBad"}. ◻ **Lemma 1**. *If a red component $K \in \mathcal K$ of $H_{f, R}$ is not bad, then $\frac{e(K_{\mathcal C})}{v(K_{\mathcal C})}\geq \frac{d}{d+k+1}$.* *Proof.* First, suppose that $d \equiv 1 \mod (k+1)$, so that $(k+1)\ell = d - 1$, and that all small children of $K$ have $\ell$ edges; recall that $K$ has at most $k$ small children by Lemma [Lemma 1](#lemma:atMostKChildren){reference-type="ref" reference="lemma:atMostKChildren"}. Then $$\frac{e(K_{\mathcal C})}{v(K_{\mathcal C})} \geq \frac{e(K) + k\ell}{e(K) + 1 + k(\ell + 1)} \geq \frac{\ell + 1 + k\ell}{\ell + 2 + k(\ell + 1)} = \frac{(k+1)\ell + 1}{(k+1)\ell + k + 2} = \frac{d}{d+k+1}.$$ Otherwise, we can apply Lemma [Lemma 1](#lemma:linkKWithCIfBelowZBound){reference-type="ref" reference="lemma:linkKWithCIfBelowZBound"} and obtain $e(K) \geq d - ke(C)$, where $C$ is a small child of $K$ with the smallest number of edges. In this case we have $$\frac{e(K_{\mathcal C})}{v(K_{\mathcal C})} \geq \frac{d - ke(C) + ke(C)}{d - ke(C) + 1 + k(e(C) + 1)} \geq \frac{d}{d+k+1}.$$ ◻ Theorem [Theorem 1](#thm:z:lowerBound){reference-type="ref" reference="thm:z:lowerBound"} now follows from Lemma [Lemma 1](#lemma:howToGetMADContradiction){reference-type="ref" reference="lemma:howToGetMADContradiction"}. # Lower Bound of the Overall Diameter We will construct a graph $G$ parameterized by $\delta$ and $p$ that satisfies Theorem [Theorem 1](#thm:lowerBound){reference-type="ref" reference="thm:lowerBound"} for fixed $k, D, \epsilon, \alpha$. We do this in phases. First we define a specific tree with a particular orientation and edge-colouring. Given a tree, we will call the set of vertices that are not leaves *inner vertices* of the tree.
Let $T$ be a tree with root $r_T$ and odd depth $\delta \geq \frac{2(\ell + 1)}{\epsilon} - 1$ formed from a 1-coloured directed path of length $\delta$ starting at $r_T$ by adding another $k-1$ outgoing edges (to new vertices) with colours $2, \ldots, k$ to each even-depth vertex of this path. From $T$, we construct a tree $C$ in the following manner. For each $t \in V(T)$, we add a path $P_{t}$ which is disjoint from $T$ except at $t$, and $P_{t}$ has length $l_t$ where $$l_t = \begin{cases*} 2\ell + 1 + \alpha & \text{if $t = r_T$,}\\ \ell + \alpha & \text{if $t \neq r_T$ and $depth_{T}(t)$ is even,}\\ \ell & \text{if $depth_{T}(t)$ is odd.}\\ \end{cases*}$$ If $t \neq r_T$, then $t$ is an endpoint of $P_t$; and $r_T$ is in the middle of $P_{r_T}$ (i.e. there are two edge-disjoint subpaths of $P_{r_T}$ starting at $r_T$ with length at least $\ell$). Further, for all $t \in V(T)$, we colour all edges of $P_{t}$ with $k+1$, and we orient the edges towards $t$. Let us call $C$ a *colourful tree*. We say that $r_C := r_T$ is the root of $C$ as well. Finally, we obtain our desired graph $G$ by taking $p$ pairwise disjoint copies of $C$, where $p$ is any integer such that $p \geq kD + k^2 + 1$, and a set of new vertices $S := \{s_{1},\ldots,s_{k}\}$; for every colour $i \in \{1, \ldots, k\}$ and every vertex $x$ belonging to a colourful tree, we add an edge $(x, s_i)$ if $x$ does not have an outgoing edge coloured $i$. For each copy of $C$, we let $T_{C}$ denote that copy of $T$ contained in $C$. We will refer to the colours $\{1,\ldots,k\}$ as blue, and the colour $k+1$ as red. Note that unlike for the upper bound, where it is useful to think of all of the different blue colours as the same, for this construction each blue monochromatic component should be thought of as having a specific blue colour $i$. We call the established colouring and orientation of $G$ the *example colouring*.
A colourful tree having the colours of the example colouring is depicted in Figure [\[fig:exampleColouring\]](#fig:exampleColouring){reference-type="ref" reference="fig:exampleColouring"}. Note that the example colouring is one in which the tools developed in Section 2 cannot reduce the diameter any further, and the red pseudoforest contains a path of length $2\ell + 1 + \alpha$. The example colouring will be especially useful in Subsection [3.2](#subsec:boundingFracArb){reference-type="ref" reference="subsec:boundingFracArb"} as it allows us to concisely refer to different structures within $G$. Our first point of order is to show that in any decomposition of $G$ into $k+1$ pseudoforests, the red pseudoforest has large diameter. We do this in Subsection [3.1](#subsec:boundingDiam){reference-type="ref" reference="subsec:boundingDiam"}. In Subsection [3.2](#subsec:boundingFracArb){reference-type="ref" reference="subsec:boundingFracArb"}, we lower-bound the fractional arboricity of $G$, which completes the proof of Theorem [Theorem 1](#thm:lowerBound){reference-type="ref" reference="thm:lowerBound"}. ## Bounding the Diameter {#subsec:boundingDiam} **Theorem 1**. *In any decomposition of $G$ into $k$ blue pseudoforests and one red pseudoforest where every vertex has at most $D$ incident red edges, there is a red component that has diameter at least $2\ell + 1 + \alpha$.* We assume there is a colouring $f$ that contradicts Theorem [Theorem 1](#thm:lowerBoundDiam){reference-type="ref" reference="thm:lowerBoundDiam"}. In the following lemma and corollary we will easily force colours on many edges of at least one colourful tree. The result is depicted in Figure [\[fig:forcedColoursOnEdges\]](#fig:forcedColoursOnEdges){reference-type="ref" reference="fig:forcedColoursOnEdges"}.
The rest of the subsection will show that we cannot find colours for the remaining black edges in the figure without creating red paths which are too long or creating red components with too many cycles.\ In what follows, an *$S$-$C$-$S$-path* is a path with endpoints in $S$ and whose inner vertices are all from one colourful tree $C$. **Lemma 1**. *There is a colourful tree $C$ in $G$ such that every edge of $E(C, S)$ is coloured blue in $f$. Furthermore, there is no monochromatic $S$-$C$-$S$-path.* *Proof.* There are at most $kD$ red edges incident to $S$. For any colour $b \in \{1, \ldots, k\}$ there can be at most $k$ $S$-$C$-$S$-paths having colour $b$ in $f$, as otherwise we would have monochromatic components containing two cycles. As $p > kD + k^2$ there is at least one colourful tree $C$ which satisfies the lemma. ◻ For the rest of the subsection we let $C$ be a colourful tree satisfying Lemma [Lemma 1](#lemma:colourfulEdgesToS){reference-type="ref" reference="lemma:colourfulEdgesToS"}. The following corollary follows easily from Lemma [Lemma 1](#lemma:colourfulEdgesToS){reference-type="ref" reference="lemma:colourfulEdgesToS"}. **Corollary 1**. *In $f$, any vertex of $C$ that is not an inner vertex of $T_C$ has a $b$-coloured edge to $S$ for every $b \in \{1, \ldots, k\}$ and any vertex of $T_C$ with odd depth has $k-1$ incident edges to $S$ in pairwise distinct colours of $\{1, \ldots, k\}$. Any edge of $E(C)$ that is not incident with an inner vertex of $T_C$ is coloured red in $f$. Furthermore, every red component containing at least one vertex in $C$ is acyclic in $f$.* For $i \in \{\ell, \ell + 1, 2\ell + 1 + \alpha\}$, let $V_i$ be the set of vertices of $V(T_C)$ contained in red components with exactly $i$ edges in the example colouring.
Given a colour $b \in \{1, \dots, k\}$, we say a vertex $v \in V(C)$ has a *$b$-coloured $S$-path* if there is a path of colour $b$ in $f$ that goes from $v$ to a vertex of $S$ and for any inner vertex $w$ of this path, $w \in V(C)$ and $depth_C(w) \geq depth_C(v)$. Further, we say a vertex *$t$ is the end of a low $i$-path* if $t$ is an endpoint of a red path $P$ in $f$ with $e(P) \geq i$ and $depth_C(v) \geq depth_C(t)$ for every $v \in V(P)$. **Lemma 1**. *In $f$, for each $i \in \{\ell, \ell+1\}$ every $t \in V_i$ is the end of a low $i$-path and has a $b$-coloured $S$-path for every $b \in \{1, \ldots, k\}$.* *Proof.* The lemma is clear for any leaf of $T_C$ due to Corollary [Corollary 1](#cor:forcedColourOfEdges){reference-type="ref" reference="cor:forcedColourOfEdges"}. Next, suppose that $t \in V_{\ell + \alpha}$ has even depth, is not the root of $T_C$ and that the lemma is true for all of its children $c_1, \ldots, c_k$ in $T_C$. Let $u$ be the vertex such that there is a red arc $(u, t)$ in the example colouring. Note that $u$ is also a child of $t$ in $C$. By Corollary [Corollary 1](#cor:forcedColourOfEdges){reference-type="ref" reference="cor:forcedColourOfEdges"}, $u$ is the end of a low $(\ell + \alpha - 1)$-path in $f$ and it also has a $b$-coloured $S$-path (of length 1) for any $b \in \{1, \ldots, k\}$.\ No two edges of $t c_1, \ldots, t c_k, t u$ have the same colour $j \in \{1, \ldots, k\}$ in $f$ or there would be a monochromatic $S$-$C$-$S$-path, contradicting Lemma [Lemma 1](#lemma:colourfulEdgesToS){reference-type="ref" reference="lemma:colourfulEdgesToS"}. Furthermore, no two of these edges are red or $f$ would have a red path with at least $(\ell + \alpha - 1) + 2 + \ell$ edges, a contradiction. Thus the $k+1$ edges have pairwise distinct colours, from which the lemma follows. Now, let $t \in V_\ell$ have odd depth, not be a leaf of $T_C$ and again suppose that the lemma is true for the only child $t'$ of $t$ in $T_C$. 
Let $u$ be the other child of $t$ in $C$, which is the neighbour of $t$ in $P_t$. Note that $t'$ has a low $(\ell + \alpha)$-path, and by Corollary [Corollary 1](#cor:forcedColourOfEdges){reference-type="ref" reference="cor:forcedColourOfEdges"} $u$ has a low $(\ell-1)$-path. Since $t$ has $k-1$ $S$-paths of length $1$, at most one of the edges to the children of $t$ can be blue. If both edges were red, then there would be a red path of length $(\ell+\alpha) + 2 + (\ell-1)$, which is contrary to our assumptions. The lemma follows. ◻ The following corollary shows that $r_C$ is contained in a red component with too large a diameter, contradicting our initial assumption and thus completing the proof of Theorem [Theorem 1](#thm:lowerBoundDiam){reference-type="ref" reference="thm:lowerBoundDiam"}. **Corollary 1**. *The diameter of the red component of $r_C$ in $f$ is at least $2\ell+1 + \alpha$.* *Proof.* By Lemma [Lemma 1](#lemma:inductionOverNonRootVertices){reference-type="ref" reference="lemma:inductionOverNonRootVertices"} we know that the $k$ children of $r_C$ in $T_C$ are the ends of low $\ell$-paths and that they each have a $b$-coloured $S$-path for all $1 \leq b \leq k$. Furthermore, by Corollary [Corollary 1](#cor:forcedColourOfEdges){reference-type="ref" reference="cor:forcedColourOfEdges"} the two neighbours of $r_C$ in $P_{r_C}$ are also ends of low $\ell$- and $(\ell + \alpha - 1)$-paths, respectively, and they have $b$-coloured $S$-paths of length $1$ for every $1 \leq b \leq k$.\ We again conclude that there cannot be two edges from $r_C$ to one of its $k+2$ children in $C$ that have the same colour $b$ with $1 \leq b \leq k$, as otherwise we have a monochromatic $S$-$C$-$S$-path, contradicting Lemma [Lemma 1](#lemma:colourfulEdgesToS){reference-type="ref" reference="lemma:colourfulEdgesToS"}.
Thus, two of the $k+2$ edges from $r_C$ to its children are red, and together with the low $\ell$- (and perhaps $(\ell + \alpha -1)$-) paths described above they form a red path with at least $(\ell + \alpha - 1) + \ell + 2 = 2\ell + 1 + \alpha$ edges. ◻ ## Bounding the Fractional Arboricity {#subsec:boundingFracArb} The last point of order is to bound the fractional arboricity of $G$. As mentioned in the introduction, this will also give us a bound on its maximum average degree. We aim to show that the entire graph is the densest subgraph of $G$. We have chosen $\delta$ large enough that the red components of a colourful tree in the example colouring that do not contain a root of a colourful tree, which have a density of roughly $\frac{\ell(k+1) + \alpha}{(\ell + 1)(k+1) + \alpha}$, compensate for the largest red component (which contains the root). **Theorem 1**. *The fractional arboricity of $G$ is less than $k + \frac{\ell(k+1) + \alpha}{(\ell + 1)(k+1) + \alpha}+ \epsilon$.* For the rest of the subsection, we assume towards a contradiction that there is a subgraph $H \subseteq G$ with $v(H) \geq 2$ and $\frac{e(H)}{v(H) - 1} \geq k + \frac{\ell(k+1) + \alpha}{(\ell + 1)(k+1) + \alpha}+ \epsilon$. Let $\mathcal X$ be the set of all subsets $X \subseteq V(G)$ with $S \subsetneq X$ and $$\frac{e(X)}{|X \setminus S|} > k + \beta,$$ where $$\beta = \frac{ \ell(k+1) + \alpha + \frac{2(\ell+1)}{\delta + 1} } { (\ell+1)(k+1) + \alpha + \frac{2(\ell+1)}{\delta + 1} }.$$ **Lemma 1**. *$\frac{e(G)}{|V(G) \setminus S|} = k + \beta$ and thus $V(G) \not\in \mathcal X$.* *Proof.* In the example colouring every colourful tree $C$ has $k\frac{\delta + 1}{2}$ red components $P_t$ containing $\ell$ edges (where $depth_{T_C}(t)$ is odd) and $C$ has $\frac{\delta - 1}{2}$ red components $P_t$ containing $\ell + \alpha$ edges (where $depth_{T_C}(t)$ is even and non-zero).
Furthermore, note that every vertex of $V(G) \setminus S$ has exactly $k$ outgoing blue edges in the example colouring and there are no outgoing blue edges from $S$. It follows that $$\frac{e(G)}{|V(G) \setminus S|} = k + \frac{ p\big( 2\ell + \alpha + 1 + \frac{\delta - 1}{2} (\ell + \alpha) + k \frac{\delta + 1}{2} \ell \big) }{ p\big( 2\ell + \alpha + 2 + \frac{\delta - 1}{2} (\ell + \alpha + 1) + k \frac{\delta + 1}{2} (\ell + 1) \big) } = k + \beta.$$ ◻ **Lemma 1**. *$$\frac{\ell(k+1) + \alpha}{(\ell + 1)(k+1) + \alpha} < \beta < \frac{\ell(k+1) + \alpha}{(\ell + 1)(k+1) + \alpha}+ \epsilon.$$* *Proof.* The first inequality is easy to see. For the second inequality we use the lower bound of $\delta \geq \frac{2(\ell + 1)}{\epsilon} - 1$ and the fact that for any numbers $a \geq 0, b \geq 1$, and $\gamma > 0$, it holds that $\frac{a + \gamma}{b + \gamma} < \frac{a}{b} + \gamma$.\  ◻ **Corollary 1**. *If Theorem [Theorem 1](#thm:lowerBoundFracArb){reference-type="ref" reference="thm:lowerBoundFracArb"} is false, then $\mathcal X$ is not empty.* *Proof.* Let $H$ be an induced subgraph of $G$ with $v(H) \geq 2$ and $\frac{e(H)}{v(H) - 1} \geq k + \frac{\ell(k+1) + \alpha}{(\ell + 1)(k+1) + \alpha}+ \epsilon$. Note that $S \subseteq V(H)$, as otherwise $H$ decomposes into at most $k-1$ star forests, where the centers lie in $S$, and one forest whose edges are from colourful trees, contradicting Theorem [Theorem 1](#nashthm){reference-type="ref" reference="nashthm"} since $\frac{e(H)}{v(H)-1} > k$ by assumption. We have $$k + \beta < k + \frac{\ell(k+1) + \alpha}{(\ell + 1)(k+1) + \alpha}+ \epsilon \leq \frac{e(H)}{v(H) - 1} \leq \frac{e(H)}{|V(H) \setminus S|}.$$ ◻ Our goal is to show that $V(G) \in \mathcal X$, which contradicts Lemma [Lemma 1](#contradictionforbound){reference-type="ref" reference="contradictionforbound"}. 
For this we will show that if there is an $X \in \mathcal X$, then we can manipulate $X$ in multiple steps such that after every step the new set of vertices is still in $\mathcal X$, and the resulting set is $V(G)$. The technical lemma that we will use implicitly in the remaining lemmas is the following, which follows from the definition of $\mathcal{X}$: **Lemma 1**. *Let $X \in \mathcal X$ and $Z \subseteq V(G) \setminus S$.\ If $Z \subseteq X$ and $$\frac{ e(Z) + e(X \setminus Z, Z) }{|Z|} \leq k + \beta,$$ then $X \setminus Z \in \mathcal X$ and in particular, $X \setminus Z \neq S$. If $X \cap Z = \varnothing$ and $$\frac{e(Z) + e(X, Z)}{|Z|} \geq k + \beta,$$ then $X \cup Z \in \mathcal X$.* Let $C$ be a colourful tree and $t \in V(T_C)$ for the rest of the subsection. **Lemma 1**. *If $X \in \mathcal X$ and $t \in X$, then $X \cup V(P_t) \in \mathcal X$.* *Proof.* If we add all vertices of $V(P_t) \setminus X$ to $X$, then for each vertex $v \in V(P_t) \setminus X$ in the induced subgraph we add at least $k$ edges from $v$ to $S$ and at least one other edge, namely the red outgoing edge of $v$ in the example colouring. Thus, $$e(V(P_t) \setminus X) + e(V(P_t) \setminus X, X) \geq (k+1) \; |V(P_t) \setminus X|.$$ ◻ **Lemma 1**. *If $X \in \mathcal X$ and $t \not\in X$, then $X \setminus V(P_t) \in \mathcal X$.* *Proof.* Let $P' = X \cap V(P_t) \neq \varnothing$. When subtracting $P'$ from $X$ the induced subgraph will lose $k|P'|$ edges of $E(P', S)$ and another $e(P') \leq |P'| - 1$ edges. If $t \neq r_C$ or $G[P']$ has exactly one component, then we have $$\frac{e(P')+e(P',S)}{|P'|} = \frac{k|P'| + e(P')}{|P'|} \leq k + \frac{|P'| - 1}{|P'|} \leq k + \frac{\ell}{\ell + 1} < k + \beta.$$ If $t = r_C$ and $G[P']$ has two components, we have that $e(P') \leq |P'| - 2$ and $|P'| \leq 2\ell + 1 + \alpha$. 
In this case we have $$\frac{e(P')+e(P',S)}{|P'|} =\frac{k|P'| + e(P')}{|P'|} \leq k + \frac{|P'| - 2}{|P'|} \leq k + \frac{\ell + (\ell + \alpha - 1)}{\ell + 1 + (\ell + \alpha)} \leq k + \frac{\ell}{\ell + 1} < k + \beta.$$ ◻ **Lemma 1**. *If $X \in \mathcal X$, $t \in X \cap V_\ell$ and a neighbour of $t$ in $T_C$ is not in $X$, then $X \setminus V(P_t) \in \mathcal X$.* *Proof.* Let $X' = X \cup V(P_t)$. We have $X' \in \mathcal X$ due to Lemma [Lemma 1](#lemma:wholePInX){reference-type="ref" reference="lemma:wholePInX"}. When removing the $\ell + 1$ vertices of $P_t$ from $G[X']$, we remove $k\ell$ edges of $E(V(P_t) - t, S)$, at most $k+1$ edges that were incident to $t$ and all $\ell -1$ edges of $E(P_t -t)$. We have $$\frac{k\ell + (k+1) + (\ell - 1)}{\ell + 1} = \frac{k(\ell + 1) + \ell}{\ell + 1} = k + \frac{\ell}{\ell + 1} < k +\beta$$ and thus $X \setminus V(P_t) = X' \setminus V(P_t) \in \mathcal X$. ◻ By repeated application of the last three lemmas we get the following corollary: **Corollary 1**. *If $\mathcal X \neq \varnothing$, then there is an $X \in \mathcal X$ such that for every $t \in V(T_C) \cap X$ we have $V(P_t) \subseteq X$ and if additionally $t \in V_\ell$, then $deg_{X}(t) = k+2$.* We also want to prove the degree property of Corollary [Corollary 1](#cor:Xexists){reference-type="ref" reference="cor:Xexists"} for every $t \in V(T_C)$. We will tackle this in the following two lemmas. **Lemma 1**. *Let $X \in \mathcal X$ such that for every $t \in V(T_C) \cap X$ we have $V(P_t) \subseteq X$ and if additionally $t \in V_\ell$, then $deg_{X}(t) = k+2$. Furthermore, suppose there is a vertex $r' \in V(T_C)$ such that all vertices of the tree $T'$ containing the vertices $t \in V(T_C)$ with $depth(t) \geq depth(r')$ are also in $X$, but the parent of $r'$ is not in $X$. Let $T'' = T' \cup \bigcup_{t \in V(T')} P_{t}$. 
Then $X \setminus V(T'') \in \mathcal X$.* *Proof.* We have that $r' \in V_{\ell+1}$, as $r'$ is the root of $T'$ and thus has at most $k+1$ neighbours in $X$, and all vertices in $V_{\ell} \cap X$ have $k+2$ neighbours in $X$. It follows that $\alpha =1$. Let $T''$ contain $x$ vertices of $V_{\ell+1}$ and thus $kx$ vertices of $V_\ell$. We have $$\frac{e(T'') + e(X \setminus V(T''), V(T''))}{v(T'')} = k + \frac{ x(\ell+1) + kx\ell }{ x(\ell+2) + kx(\ell+1) } = k + \frac{\ell(k+1) + \alpha}{(\ell + 1)(k+1) + \alpha} < k + \beta.$$ Thus, $X \setminus V(T'') \in \mathcal X$. ◻ **Lemma 1**. *If $\mathcal X \neq \varnothing$, then there is an $X \in \mathcal X$ where every $t \in V_\ell \cap X$ has $deg_{X}(t) = k+2$ and for every $t \in X \cap (V_{\ell+1} \cup V_{2\ell + 1 + \alpha})$ the child of $t$ in $T_C$ is also in $X$.* *Proof.* Note that by Corollary [Corollary 1](#cor:Xexists){reference-type="ref" reference="cor:Xexists"}, there exists an $X \in \mathcal{X}$ where every $t \in V_\ell \cap X$ has $\deg_X(t) = k+2$. From all such $X$, we choose one set $X$ where the number of vertices $t \in X \cap (V_{\ell+1} \cup V_{2\ell + 1 + \alpha})$, where the child of $t$ in $T_C$ is not in $X$, is minimal. Suppose towards a contradiction that this minimum value is greater than zero and let $t$ be such a vertex with maximal depth and its child being $t' \not\in X$.\ Let $T'$ be the subtree of $C$ induced by $t'$ and all of its descendants. By Lemma [Lemma 1](#lemma:noPerfectKAryTrees){reference-type="ref" reference="lemma:noPerfectKAryTrees"} we can choose $X$ such that no vertex of $T'$ is in $X$. Our desired contradiction will be $X \cup V(T') \cup \bigcup_{t\in V(T')} V( P_{t} ) \in \mathcal X$, since this set contains $t'$. Let $x := |V_{\ell+1} \cap V(T')|$ and thus $|V_\ell \cap V(T')| = kx+ |t'|$. 
Since $T' \neq T_C$, we have $x < \frac{\delta - 1}{2}$ and thus $$k + \frac { (\ell + |\{t t'\} | ) + x(\ell + \alpha) + kx\ell }{ (\ell + 1) + x(\ell + \alpha + 1) + kx(\ell + 1) } = k + \frac { \ell(k+1) + \alpha + \frac{\ell+1}{x} }{ (\ell+1)(k+1) + \alpha + \frac{\ell+1}{x} } > k + \beta.$$ ◻ **Corollary 1**. *$\mathcal X$ is empty. Thus, Theorem [Theorem 1](#thm:lowerBoundFracArb){reference-type="ref" reference="thm:lowerBoundFracArb"} is true.* *Proof.* Let $X \in \mathcal X$ such that it satisfies Lemma [Lemma 1](#lemma:ellPlusOneHasKChildren){reference-type="ref" reference="lemma:ellPlusOneHasKChildren"}. We can choose $X$ such that its induced subgraph does not have any subtrees $T'$ with root $r'$ as described in Lemma [Lemma 1](#lemma:noPerfectKAryTrees){reference-type="ref" reference="lemma:noPerfectKAryTrees"}. But then we have either $X = S$ or $X = V(G)$, both of which give a contradiction: the former to the definition of $\mathcal{X}$, and the latter to Lemma [Lemma 1](#contradictionforbound){reference-type="ref" reference="contradictionforbound"}. ◻ # Lower Bound of the Diameter of Big Components In this section we will prove Theorem [Theorem 1](#thm:z:lowerBound){reference-type="ref" reference="thm:z:lowerBound"} using a construction very similar to the one given in the last section. The structure of this section is also identical to that of the last section, and for some proofs we will just refer to the previous section. First we give the construction. We will again describe the *example colouring of $G$*: For every colourful tree $C$ the tree $T_C$ has odd depth $\delta > \frac{2(z+1)}{\epsilon} - 1$, and consists of a directed blue path of colour $1$ and length $\delta - 1$ starting at the root $r_C$ of $T_C$ and $C$, where additionally, every vertex with even depth has $k-1$ other outgoing edges with colours $2, \ldots, k$ in $T_C$. These $k-1$ children are leaves in $T_C$.
We obtain $C$ from $T_C$ by adding the minimum number of new vertices and red edges such that no two vertices of $V(T_C)$ are in the same red component and for every vertex $t \in V(T_C)$ we have: - If $depth(t)$ is odd, then there is an induced red path of length $z$ ending in $t$. - If $t = r_C$, then $t$ is in a red acyclic component $K$ with $e(K) = d - z(k-1) + 1$ and $diam(K) = 2(z+1)$, $t$ is exactly in the middle of a red path $Q_t \subset K$ of length $2(z+1)$ and $K - t$ consists of components with at most $z$ edges. - If $depth(t)$ is even and $t \neq r_C$, then $t$ is in a red acyclic component $K$ having $d - zk$ edges and containing a red path $Q_t$ of length $z+1$ ending in $t$, and $K - t$ consists of components with at most $z$ edges. Furthermore, all edges of a red component in $C$ of a vertex $t \in V(T_C)$ are oriented towards $t$ in the example colouring. Note that the paths $Q_t$ and the diameter of $K$ in the second and third cases are well-defined by Lemma [Lemma 1](#lemma:dLargerThanLKPlus1){reference-type="ref" reference="lemma:dLargerThanLKPlus1"}. For vertices $t \in V(T_C)$, we again denote by $P_t$ the red component containing $t$ in the example colouring. Note that when creating $C$, there are many possible configurations for the red component $P_t$ if $depth(t)$ is even. We obtain $G$ by taking $p \geq kD + k^2 + 1$ disjoint copies of $C$ and adding a set $S = \{s_{1},\ldots,s_{k}\}$ of new vertices; for every colour $i \in \{1, \ldots, k\}$ and every vertex $x$ of a colourful tree we add an edge $xs_i$, oriented towards $s_i$ in the example colouring, if $x$ does not already have an outgoing edge coloured $i$. ## Bounding the Diameter {#bounding-the-diameter} In this subsection, we prove the theorem below. **Theorem 1**.
*In any decomposition of $G$ into $k$ blue pseudoforests and one red pseudoforest where every vertex has at most $D$ incident red edges, there is a red component $K$ with $e(K) \geq d - z(k-1) + 1$ that has diameter at least $2(z+1)$.* For the rest of the subsection we assume that there is a colouring $f$ of $G$ contradicting Theorem [Theorem 1](#thm:z:lowerBoundDiam){reference-type="ref" reference="thm:z:lowerBoundDiam"}. The following lemma and corollary can be proven analogously to Lemma [Lemma 1](#lemma:colourfulEdgesToS){reference-type="ref" reference="lemma:colourfulEdgesToS"} and Corollary [Corollary 1](#cor:forcedColourOfEdges){reference-type="ref" reference="cor:forcedColourOfEdges"}. **Lemma 1**. *There is a colourful tree $C$ such that every edge of $E(C, S)$ is coloured blue in $f$. Furthermore, there is no monochromatic $S$-$C$-$S$-path.* For the rest of the subsection, we let $C$ be a colourful tree satisfying Lemma [Lemma 1](#lemma:z:colourfulEdgesToS){reference-type="ref" reference="lemma:z:colourfulEdgesToS"}. **Corollary 1**. *In $f$, any vertex of $C$ that is not an inner vertex of $T_C$ has a $b$-coloured edge to $S$ for every $b \in \{1, \ldots, k\}$ and any vertex of $T_C$ with odd depth has $k-1$ incident edges to $S$ in pairwise distinct colours of $\{1, \ldots, k\}$. Every edge of $E(C)$ that is not incident to an inner vertex of $T_C$ is coloured red in $f$. Furthermore, every red component containing at least one vertex of $C$ is acyclic in $f$.* For each vertex $t \in V(T_C)$, let $K_t$ be the subgraph induced by the vertices $v$ in the same red component in $f$ as $t$ and with $depth_C(v) \geq depth_C(t)$. Note that $K_t$ is connected and all of its edges are coloured red in $f$. The following lemma bounds $e(K_t)$, and will be used to bound $e(K_{r_C})$ later. 
Recall that by Corollary [Corollary 1](#cor:z:forcedColourOfEdges){reference-type="ref" reference="cor:z:forcedColourOfEdges"}, every edge not incident to an inner vertex of $T_C$ is red. The main idea below is that if an edge from $t$ to $V(C) \setminus V(T_C)$ is coloured blue in $f$ (thus not matching the example colouring), then we will argue that a corresponding edge from $t$ to one of its children $t'$ in $T_C$ is coloured red in $f$, which, together with showing that $t'$ is the end of a low $z$-path, will be enough to bound $e(K_t)$. Note that the diameter bounds proceed similarly to the previous section, and the new argument is in bounding the number of edges. **Lemma 1**. *If $t \in V(T_C)$ with $t \neq r_C$, then $t$ has a $b$-coloured $S$-path for any $b \in \{1, \ldots, k\}$ and $t$ is the end of a low $z$-path $P$. If $depth(t)$ is even, then $e(P) \geq z+1$ and $e(K_t) \geq d - zk$.* *Proof.* The lemma is clear for any leaf of $T_C$ due to Corollary [Corollary 1](#cor:z:forcedColourOfEdges){reference-type="ref" reference="cor:z:forcedColourOfEdges"}. Now, let $t$ have even depth and suppose that the lemma is true for the children of $t$ in $T_C$. All of the children of $t$ in $C$ have a $b$-coloured $S$-path for any $b \in \{1, \ldots, k\}$. Thus, at most $k$ edges from $t$ to the children of $t$ in $C$ can be blue, as otherwise there is a monochromatic $S$-$C$-$S$-path, contradicting Lemma [Lemma 1](#lemma:z:colourfulEdgesToS){reference-type="ref" reference="lemma:z:colourfulEdgesToS"}. At least $k+1$ children of $t$ in $C$ are ends of low $z$-paths ($k$ of them are children of $t$ in $T_C$ and the other one is on $Q_t$; recall by Corollary [Corollary 1](#cor:z:forcedColourOfEdges){reference-type="ref" reference="cor:z:forcedColourOfEdges"} the edges of $Q_t$ not incident with $t$ are red). Let us call the set of edges to these $k+1$ children $E_t$. Since at least one of the edges of $E_t$ is red, $t$ is the end of a low $(z+1)$-path in $f$.
Recall that each of the components of $P_t - t$ has at most $z$ edges, the components of the $k+1$ children of $t$ which are not in $P_t$ have at least $z$ edges, and since at most $k$ edges to the children of $t$ in $C$ are blue (as in the example colouring), we have $e(K_t) \geq e(P_t) = d - zk$. If fewer than $k$ edges of $E_t$ were blue in $f$, then we would have $diam(K_t) \geq 2(z+1)$ and $e(K_t) \geq e(P_t) + (z+1) \geq d - z(k-1) + 1$, which is a contradiction. Thus, $t$ has a $b$-coloured $S$-path for each $b \in \{1, \ldots, k\}$. Now, let $t$ have odd depth and not be a leaf and again suppose that the lemma is true for the only child $t'$ of $t$ in $T_C$. Let $u$ be the other child of $t$ in $C$, which is the neighbour of $t$ in $P_t$. Note that $t'$ has a low $(z+1)$-path and $u$ has a low $(z-1)$-path. Since $t$ has $k-1$ $S$-paths of length $1$, at most one of the edges to the children of $t$ can be blue. If both edges were red, then we would have $e(K_t) \geq (d - zk) + (z-1) + 2 = d - z(k-1) + 1$ and $diam(K_t) \geq (z+1) + 2 + (z-1) = 2(z+1)$, which is contrary to our assumptions. The lemma follows. ◻ **Lemma 1**. *We have $e(K_{r_C}) \geq d - z(k-1) + 1$ and $diam(K_{r_C}) \geq 2(z+1)$, which contradicts our assumptions.* *Proof.* The proof is analogous to the case in the proof of Lemma [Lemma 1](#lemma:z:inductionOverNonRootVertices){reference-type="ref" reference="lemma:z:inductionOverNonRootVertices"} where $t$ has even depth, but this time $|E_t| = k+2$, which forces the statement of the lemma. ◻ ## Bounding the Fractional Arboricity {#bounding-the-fractional-arboricity} **Theorem 1**. *The fractional arboricity of $G$ is less than $k + \frac{d}{d+k+1}+ \epsilon$. Thus the maximum average degree of $G$ is less than $2\left(k + \frac{d}{d+k+1}+ \epsilon \right)$.* For the rest of the subsection, we assume that there is a subgraph $H$ with $\frac{e(H)}{v(H) - 1} \geq k + \frac{d}{d+k+1}+ \epsilon$.
Let $\mathcal X$ be the set of all $X \subseteq V(G)$ having $S \subsetneq X$ and $$\frac{e(X)}{|X \setminus S|} > k + \beta,$$ where $$\beta = \frac{d + \frac{2(z+1)}{\delta + 1}}{d + k + 1 + \frac{2(z+1)}{\delta + 1}}.$$ **Lemma 1**. *$\frac{e(G)}{|V(G) \setminus S|} = k + \beta$ and thus $V(G) \not\in \mathcal X$.* *Proof.* In any colourful tree $C$ there are $\frac{\delta - 1}{2}$ red components $P_t$ in the example colouring, where $depth_{T_C}(t)$ is even and non-zero, and there are $k \frac{\delta + 1}{2}$ such components for which $depth_{T_C}(t)$ is odd. Moreover, every vertex in a colourful tree has exactly one outgoing $i$-edge for each $i \in \{1,2, \dots, k\}$ in the example colouring. We have $$\begin{aligned} \frac{e(G)}{|V(G) \setminus S|} &= k + \frac{ p\big( d - z(k-1) + 1 + \frac{\delta - 1}{2} (d - zk) + k \frac{\delta + 1}{2} z \big) }{ p\big( d - z(k-1) + 2 + \frac{\delta - 1}{2} (d - zk + 1) + k \frac{\delta + 1}{2} (z + 1) \big) } \\ &= k + \frac{\frac{\delta + 1}{2} d + (z+1)}{\frac{\delta + 1}{2} (d+k+1) + (z+1)} \\ &= k + \beta. \end{aligned}$$ ◻ Immediately from the definitions it follows that: **Observation 1**. $$\frac{d}{d+k+1} < \beta < \frac{d}{d+k+1}+ \epsilon$$ **Corollary 1**. *If Theorem [Theorem 1](#thm:z:lowerBoundFracArb){reference-type="ref" reference="thm:z:lowerBoundFracArb"} is false, then $\mathcal X$ is not empty.* *Proof.* Analogously to Corollary [Corollary 1](#cor:connectionWeirdDensityToFracArb){reference-type="ref" reference="cor:connectionWeirdDensityToFracArb"}. ◻ Let $C$ be a colourful tree and $t \in V(T_C)$ for the rest of the subsection and let $\ell := \left\lfloor\frac{d-1}{k+1}\right\rfloor$. Note that since $z \leq \ell$, we have $\frac{z}{z+1} \leq \frac{d}{d+k+1}$. **Lemma 1**. *If $X \in \mathcal X$ and $t \in X$, then $X \cup V(P_t) \in \mathcal X$.* *Proof.* Analogously to Lemma [Lemma 1](#lemma:wholePInX){reference-type="ref" reference="lemma:wholePInX"}. ◻ **Lemma 1**. 
*If $X \in \mathcal X$ and $t \not\in X$, then $X \setminus V(P_t) \in \mathcal X$.* *Proof.* Let $P' = X \cap V(P_t) \neq \varnothing$. When subtracting $P'$ from $X$ the induced subgraph will lose $k|P'|$ edges of $E(P', S)$ and another $e(P')$ edges. Let $c$ be the number of components of $G[P']$. By construction every component of $G[P']$ has at most $z$ edges. We have $$\frac{k|P'| + e(P')}{|P'|} \leq k + \frac{cz}{c(z+1)} < k + \beta.$$ ◻ Let $V_i$, where $i \in \{z, d - zk, d - z(k-1) + 1\}$, be the set of vertices $t \in V(T_C)$ such that $e(P_t) = i$. Note that these three sets are pairwise disjoint, since $z < d - zk$. **Lemma 1**. *If $X \in \mathcal X$, $t \in X \cap V_z$ and at most one of the two neighbours of $t$ in $T_C$ is in $X$, then $X \setminus V(P_t) \in \mathcal X$.* *Proof.* Analogously to Lemma [Lemma 1](#lemma:ellHasDegkPlusTwo){reference-type="ref" reference="lemma:ellHasDegkPlusTwo"} using $z \leq \ell$. ◻ By repeated application of the last three lemmas we get the following corollary: **Corollary 1**. *If $\mathcal X \neq \varnothing$, then there is an $X \in \mathcal X$ such that for every $t \in V(T_C) \cap X$ we have $V(P_t) \subseteq X$ and if additionally $t \in V_z$, then $deg_{X}(t) = k+2$.* As in Section 3, we now prove this degree property for all vertices of $T_C$. **Lemma 1**. *Let $X \in \mathcal X$ such that for every $t \in V(T_C) \cap X$ we have $V(P_t) \subseteq X$ and if additionally $t \in V_z$, then $deg_{X}(t) = k+2$. Furthermore, suppose there is a vertex $r' \in V(T_C)$ such that all vertices of the tree $T'$ containing the vertices $t \in V(T_C)$ with $depth(t) \geq depth(r')$ are also in $X$, but the parent of $r'$ is not in $X$. Let $T'' = T' \cup \bigcup_{t \in V(T')} P_{t}$. Then $X \setminus V(T'') \in \mathcal X$.* *Proof.* It must be that $r' \in V_{d - zk}$ because of the degree property of $X$. Let $T'$ contain $x$ vertices of $V_{d - zk}$ and thus $kx$ vertices of $V_{z}$.
We have $$\begin{aligned} \frac{ e(T'') + e(X \setminus V(T''), V(T''))} { v(T'') } &= k + \frac{ x(d - zk) + xkz }{ x(d+1 - zk) + xk(z+1) } \\ &= k + \frac{d}{d+k+1}\\ &< k + \beta. \end{aligned}$$ Thus, $X \setminus V(T'') \in \mathcal X$. ◻ **Lemma 1**. *If $\mathcal X \neq \varnothing$, then there is an $X \in \mathcal X$ where every $t \in V_{z} \cap X$ has $deg_X(t) = k+2$ and for every $t \in X \cap (V_{d-zk} \cup V_{d - z(k-1) + 1})$ the child of $t$ in $T_C$ is also in $X$.* *Proof.* We choose $X$, $t'$ and $T'$ as in the proof of Lemma [Lemma 1](#lemma:ellPlusOneHasKChildren){reference-type="ref" reference="lemma:ellPlusOneHasKChildren"}. Let $x := |V_{d-zk} \cap V(T')|$ and thus $|V_{z} \cap V(T')| = kx + |t'|$. Since $T' \neq T_C$, we have $x < \frac{\delta - 1}{2}$ and thus $$k + \frac { (z + |\{tt'\}| ) + x(d - zk) + kxz }{ z+1 + x(d - zk + 1) + kx(z+1) } = k + \frac { xd + (z+1) }{ x(d+k+1) + (z+1) } > k + \beta.$$ ◻ **Corollary 1**. *$\mathcal X$ is empty. Thus, Theorem [Theorem 1](#thm:z:lowerBoundFracArb){reference-type="ref" reference="thm:z:lowerBoundFracArb"} is true.* [^1]: Recall the *distance* between vertices $u$ and $v$ in a graph $G$ is the minimum number of edges in a path with endpoints $u$ and $v$. The *diameter* of $G$ is the maximum distance between any two vertices in $G$.
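The algebraic simplification behind the density lemma above (reducing the raw component counts to the closed form involving $\beta$, with the common factor $p$ cancelling) can be verified exactly with rational arithmetic. The following quick check is illustrative only and not part of the paper:

```python
from fractions import Fraction

def density_excess(d, k, z, delta):
    """e(G)/|V(G) \\ S| - k computed from the raw component counts
    (the common factor p cancels from numerator and denominator)."""
    hm = Fraction(delta - 1, 2)      # even-depth red components
    hp = Fraction(delta + 1, 2)      # odd-depth components come in groups of k
    num = d - z * (k - 1) + 1 + hm * (d - z * k) + k * hp * z
    den = d - z * (k - 1) + 2 + hm * (d - z * k + 1) + k * hp * (z + 1)
    return num / den

def beta(d, k, z, delta):
    """beta = (d + 2(z+1)/(delta+1)) / (d + k + 1 + 2(z+1)/(delta+1))."""
    corr = Fraction(2 * (z + 1), delta + 1)
    return (d + corr) / (d + k + 1 + corr)

# The identity holds for any admissible parameters; spot-check a few
# choices with z < d - zk and odd delta.
for d, k, z, delta in [(7, 2, 1, 5), (10, 3, 2, 7), (4, 1, 1, 3)]:
    assert density_excess(d, k, z, delta) == beta(d, k, z, delta)
```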
*Source: arXiv:2310.00931, "Beyond the Pseudoforest Strong Nine Dragon Tree Theorem", by Sebastian Mies, Benjamin Moore, and Evelyne Smith Roberge (math.CO).*
--- abstract: | This paper studies the performative prediction problem where a learner aims to minimize the expected loss with a decision-dependent data distribution. Such a setting is motivated when outcomes can be affected by the prediction model, e.g., in strategic classification. We consider a state-dependent setting where the data distribution evolves according to an underlying controlled Markov chain. We focus on stochastic derivative free optimization (DFO) where the learner is given access to a loss function evaluation oracle with the above Markovian data. We propose a two-timescale DFO($\lambda$) algorithm that features (i) a sample accumulation mechanism that utilizes every observed sample to estimate the overall gradient of the performative risk, and (ii) a two-timescale diminishing step size that balances the rates of DFO updates and bias reduction. Under a general non-convex optimization setting, we show that DFO($\lambda$) requires ${\cal O}( 1 /\epsilon^3)$ samples (up to a log factor) to attain a near-stationary solution with expected squared gradient norm less than $\epsilon > 0$. Numerical experiments verify our analysis. author: - "Haitong Liu, Qiang Li, Hoi-To Wai [^1]" bibliography: - reference.bib title: Two-timescale Derivative Free Optimization for Performative Prediction with Markovian Data --- # Introduction {#sec:submission} Consider the following stochastic optimization problem with decision-dependent data: $$\begin{aligned} \label{perf} \min_{ {\bm \theta}\in \mathbb{R}^d } ~ {\cal L}({\bm \theta}) = \mathbb{E}_{ Z \sim \Pi_{{\bm \theta}} } \big[ \ell( {\bm \theta}; Z ) \big].\end{aligned}$$ Notice that the decision variable ${\bm \theta}$ appears in both the loss function $\ell( {\bm \theta}; Z )$ and the data distribution $\Pi_{{\bm \theta}}$ supported on ${\sf Z}$.
The overall loss function ${\cal L}({\bm \theta})$ is known as the *performative risk*, which captures the distributional shift due to changes in the deployed model. This setting is motivated by the recent studies on *performative prediction* [@perdomo2020performative], which consider outcomes that are affected by the deployed model ${\bm \theta}$ under training. For example, this models strategic classification [@hardt2016strategic; @dong2018strategic] in economic and financial practices, such as the training of a loan classifier for customers who may react to the deployed model ${\bm \theta}$ to maximize their gains; or in price promotion mechanisms [@Zhang2018PriceProm] where customers react to prices with the aim of obtaining a lower price; or in the ride sharing business [@narang2022multiplayer] with customers who adjust their demand according to prices set by the platform. The objective function ${\cal L}({\bm \theta})$ is non-convex in general due to the effects of ${\bm \theta}$ on both the loss function and the distribution. Numerous efforts have been focused on characterizing and finding the so-called *performative stable* solution, which is a fixed point of the repeated risk minimization (RRM) process [@perdomo2020performative; @mendler2020stochastic; @brown2022performative; @li2022state; @roy2022projection; @drusvyatskiy2022stochastic]. While RRM might be a natural algorithm for scenarios when the learner is agnostic to the performative effects in the dynamic data distribution, the obtained solution may be far from being optimal or stationary for [\[perf\]](#perf){reference-type="eqref" reference="perf"}. On the other hand, recent works have studied *performative optimal* solutions that minimize [\[perf\]](#perf){reference-type="eqref" reference="perf"}. This is challenging due to the non-convexity of ${\cal L}({\bm \theta})$ and, more importantly, the absence of knowledge of $\Pi_{{\bm \theta}}$.
In fact, evaluating ${\nabla}{\cal L}({\bm \theta})$ or its stochastic gradient estimate would require learning the distribution $\Pi_{{\bm \theta}}$ *a-priori* [@izzo2021learn]. To design a tractable procedure, prior works have assumed structures for [\[perf\]](#perf){reference-type="eqref" reference="perf"} such as approximating $\Pi_{{\bm \theta}}$ by a Gaussian mixture [@izzo2021learn], assuming $\Pi_{{\bm \theta}}$ depends linearly on ${\bm \theta}$ [@narang2022multiplayer], etc., combined with a two-phase algorithm that separately learns $\Pi_{{\bm \theta}}$ and optimizes ${\bm \theta}$. Other works have assumed a *mixture dominance* structure [@Miller2021OutsideTE] on the combined effect of $\Pi_{{\bm \theta}}$ and $\ell(\cdot)$ on ${\cal L}({\bm \theta})$, which in turn implies that ${\cal L}({\bm \theta})$ is convex. Based on this assumption, a derivative free optimization (DFO) algorithm was analyzed in [@ray2022decision].

  **Stochastic DFO Settings**               **Rate**
  ----------------------------------------- --------------------------
  Decision-indep. [@ghadimi2013]            ${\cal O}(1/\epsilon^2)$
  Decision-depend. (Markov), this paper     ${\cal O}(1/\epsilon^3)$

This paper focuses on approximating the *performative optimal* solution without relying on additional conditions on the distribution $\Pi_{{\bm \theta}}$ and/or using a two-phase algorithm. We concentrate on stochastic DFO algorithms [@ghadimi2013] which do not involve first order information (i.e., gradient) about ${\cal L}({\bm \theta})$. As an advantage, these algorithms avoid the need for estimating $\Pi_{{\bm \theta}}$. Instead, the learner is given access to the loss function evaluation oracle $\ell( {\bm \theta}; Z )$ and receives data samples from a controlled Markov chain. Note that the latter models the *stateful* and *strategic* agent setting considered in [@ray2022decision; @roy2022projection; @li2022state; @brown2022performative].
Such a setting is motivated when the actual data distribution adapts slowly to the decision model, which will be announced by the learner during the (stochastic) optimization process. The proposed $\texttt{DFO}\left(\lambda\right)$ algorithm features (i) a two-timescale step-size design to control the bias-variance tradeoff in the derivative-free gradient estimates, and (ii) a sample accumulation mechanism with forgetting factor $\lambda$ that aggregates every observed sample to control the amount of error in gradient estimates. In addition to the new algorithm design, our main findings are summarized below: - Under the Markovian data setting, we show in Theorem [\[thm1\]](#thm1){reference-type="ref" reference="thm1"} that the $\texttt{DFO}\left(\lambda\right)$ algorithm finds a near-stationary solution $\bar{{\bm \theta}}$ with $\mathbb{E}[ \| {\nabla}{\cal L}( \bar{{\bm \theta}} ) \|^2] \leq \epsilon$ using ${\cal O}(\frac{d^2}{\epsilon^3}\log 1/\epsilon)$ samples/iterations. Compared to prior works, our analysis does not require structural assumptions on the distribution $\Pi_{{\bm \theta}}$ or a convexity condition on the performative risk [@izzo2021learn; @Miller2021OutsideTE; @ray2022decision]. - Our analysis demonstrates the trade-off induced by the forgetting factor $\lambda$ in the $\texttt{DFO}\left(\lambda\right)$ algorithm. We identify the desiderata for the optimal value(s) of $\lambda$. We show that increasing $\lambda$ can reduce the number of samples required by the algorithm if the performative risk gradient has a small Lipschitz constant. For the rest of this paper, §[2](#sec:setup){reference-type="ref" reference="sec:setup"} describes the problem setup and the $\texttt{DFO}\left(\lambda\right)$ algorithm, §[3](#sec:main){reference-type="ref" reference="sec:main"} presents the main results, §[4](#sec:proof){reference-type="ref" reference="sec:proof"} outlines the proofs.
Finally, we provide numerical results to verify our findings in §[5](#sec:num){reference-type="ref" reference="sec:num"}. Moreover, as displayed in Table [\[DFA_table\]](#DFA_table){reference-type="ref" reference="DFA_table"}, we remark that stochastic DFO under *decision dependent* (and Markovian) samples has a convergence rate of ${\cal O}(1/\epsilon^3)$ towards an $\epsilon$-stationary point, which is worse than the decision independent setting that has ${\cal O}(1/\epsilon^2)$ in [@ghadimi2013]. We believe that this is a fundamental limit for DFO-type algorithms when tackling problems with decision-dependent samples due to the challenges in designing a low-variance gradient estimator; see §[4.1](#sec:hardness){reference-type="ref" reference="sec:hardness"}. **Related Works**. The idea of DFO dates back to [@Nemirovski1983], and has been extensively studied thereafter [@flaxman2005; @Agarwal2010; @Nesterov2017; @ghadimi2013]. Results on matching lower bounds were established in [@Jamieson2012]. While a similar DFO framework is adopted in the current paper for performative prediction, our algorithm is limited to using a special design in the gradient estimator to avoid introducing unwanted biases. There are only a few works considering the Markovian data setting in performative prediction. [@brown2022performative] is the first paper to study the dynamic setting, where the response of agents to the learner's deployed classifier is modeled as a function of the classifier and the current distribution of the population; also see [@izzo2022learn]. On the other hand, @li2022state [@roy2022projection] model the unforgetful nature and the reliance on past experiences of *single/batch* agent(s) via a controlled Markov chain. Lastly, @ray2022decision investigated the state-dependent framework where agents' responses may be driven to the best response at a geometric rate.
**Notations**: Let $\mathbb{R}^d$ be the $d$-dimensional Euclidean space equipped with inner product $\langle\cdot, \cdot\rangle$ and induced norm $\|x\|=\sqrt{\langle x, x\rangle}$. Let ${\cal S}$ be a (measurable) sample space, and let $\mu$, $\nu$ be two probability measures defined on ${\cal S}$. Then, we use $\boldsymbol{\delta}_{\text{TV}}\left(\mu, \nu \right) \mathrel{\mathop:}=\sup_{A\subset {\cal S}}\mu(A)-\nu(A)$ to denote the total variation distance between $\mu$ and $\nu$. Denote by $\mathbb{T}_{{\bm \theta}}(\cdot, \cdot)$ the state-dependent Markov kernel, whose stationary distribution is $\Pi_{{\bm \theta}}(\cdot)$. Let $\mathbb{B}^{d}$ and $\mathbb{S}^{d-1}$ be the unit ball and its boundary (i.e., a unit sphere) centered around the origin in $d$-dimensional Euclidean space, respectively, and correspondingly, the ball and sphere of radius $r>0$ are $r\mathbb{B}^{d}$ and $r\mathbb{S}^{d-1}$. # Problem Setup and Algorithm Design {#sec:setup} In this section, we develop the $\texttt{DFO}\left(\lambda\right)$ algorithm for tackling [\[perf\]](#perf){reference-type="eqref" reference="perf"} and describe the problem setup. Assuming that ${\cal L}({\bm \theta})$ is differentiable, we focus on finding an *$\epsilon$-stationary* solution, ${\bm \theta}$, which satisfies $$\begin{aligned} \label{eq:stationary} \| {\nabla}{\cal L} ({\bm \theta}) \|^2 \leq \epsilon.\end{aligned}$$ With the goal of reaching [\[eq:stationary\]](#eq:stationary){reference-type="eqref" reference="eq:stationary"}, there are two key challenges in our stochastic algorithm design: (i) to estimate the gradient ${\nabla}{\cal L}({\bm \theta})$ without prior knowledge of $\Pi_{{\bm \theta}}$, and (ii) to handle the *stateful* setting where one cannot draw samples directly from the distribution $\Pi_{{\bm \theta}}$.
We shall discuss how the proposed $\texttt{DFO}\left(\lambda\right)$ algorithm, which is summarized in Algorithm [\[alg:dfo_lambda\]](#alg:dfo_lambda){reference-type="ref" reference="alg:dfo_lambda"}, tackles the above issues through utilizing two ingredients: (a) two-timescale step sizes, and (b) sample accumulation with the forgetting factor $\lambda \in [0,1)$.

**Algorithm** $\texttt{DFO}\left(\lambda\right)$:

- **Input:** constants $\delta_{0}, \eta_{0}, \tau_0, \alpha, \beta$, maximum epochs $T$, forgetting factor $\lambda$, loss function $\ell\left( \cdot; \cdot \right)$. Set initial ${\bm \theta}_0$ and sample $Z_0$.
- **For** epoch $k = 0, 1, \ldots, T-1$:
  - [\[line:step\]]{#line:step label="line:step"} Set $\delta_{k}\leftarrow \delta_0/(1+k)^{\beta}$, $\eta_{k}\leftarrow \eta_0/(1+k)^{\alpha}$, $\tau_{k} \leftarrow \max\{1, \tau_0\log(1+k)\}$.
  - Update ${\bm \theta}_{k}^{(1)}\leftarrow{\bm \theta}_{k}$, $Z_{k}^{(0)} \leftarrow Z_{k}$, and draw ${\bm u}_{k}\sim \sf \text{Unif}(\mathbb{S}^{d-1})$.
  - **For** $m = 1, \ldots, \tau_{k}$:
    - Deploy the model $\check{{\bm \theta}}_{k}^{(m)} = {\bm \theta}_{k}^{(m)} + \delta_{k} {\bm u}_{k}$.
    - Draw $Z_{k}^{(m)} \sim \mathbb{ T}_{\check{{\bm \theta}}_{k}^{(m)}}(Z_{k}^{(m-1)}, \cdot)$.
    - Update ${\bm \theta}_{k}^{(m)}$ as $$\begin{split} &\textstyle {\bm g}_{k}^{(m)} = \frac{d}{\delta_{k}} \ell \big(\check{{\bm \theta}}_{k}^{(m)}; Z_{k}^{(m)} \big) {\bm u}_{k}, \\ &{\bm \theta}_{k}^{(m+1)} = {\bm \theta}_{k}^{(m)}-\eta_{k}\lambda^{\tau_{k}-m} {\bm g}_{k}^{(m)}. \end{split}$$
  - Set $Z_{k+1}\leftarrow Z_{k}^{(\tau_{k})}$, ${\bm \theta}_{k+1}\leftarrow{\bm \theta}_{k}^{(\tau_{k}+1)}$.
- **Output:** last iterate ${\bm \theta}_{T}$.
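A minimal runnable sketch of the epoch structure above, under purely illustrative assumptions (a scalar quadratic loss oracle and an AR(1)-type controlled kernel whose mean tracks the deployed model; neither is from the paper's experiments):

```python
import numpy as np

rng = np.random.default_rng(1)
d, T = 2, 30
delta0, eta0, tau0, alpha, beta_, lam = 0.5, 0.1, 2.0, 2 / 3, 1 / 6, 0.5

def loss(theta, z):                          # toy loss oracle ell(theta; z)
    return float((theta[0] - z) ** 2)

def kernel(theta, z):                        # toy controlled kernel T_theta(z, .)
    return 0.8 * z + 0.2 * theta[0] + 0.05 * rng.standard_normal()

theta, z = np.zeros(d), 0.0
for k in range(T):
    delta_k = delta0 / (1 + k) ** beta_      # query radius (slow decay)
    eta_k = eta0 / (1 + k) ** alpha          # step size (fast decay)
    tau_k = max(1, int(tau0 * np.log(1 + k)))
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)                   # u_k ~ Unif(S^{d-1})
    for m in range(1, tau_k + 1):
        theta_check = theta + delta_k * u    # deploy perturbed model
        z = kernel(theta_check, z)           # draw Z_k^(m) from the chain
        g = (d / delta_k) * loss(theta_check, z) * u    # one-point estimate
        theta = theta - eta_k * lam ** (tau_k - m) * g  # weighted update
assert np.all(np.isfinite(theta))
```

With $\alpha = 2/3 > \beta = 1/6$ the ratio $\eta_k/\delta_k$ vanishes, matching the two-timescale design in the step-size line.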
**Estimating ${\nabla}{\cal L}({\bm \theta})$ via Two-timescales DFO.** First notice that the gradient of ${\cal L}(\cdot)$ can be derived as $$\begin{aligned} {\nabla}{\cal L}({\bm \theta}) = \mathbb{E}_{ Z \sim \Pi_{{\bm \theta}} } [ {\nabla}\ell( {\bm \theta}; Z) + \ell( {\bm \theta}; Z ) {\nabla}_{{\bm \theta}} \log \Pi_{{\bm \theta}}( Z ) ] .\end{aligned}$$ As a result, constructing the stochastic estimates of ${\nabla}{\cal L}({\bm \theta})$ typically requires knowledge of $\Pi_{{\bm \theta}}( \cdot )$ which may not be known a-priori unless a separate estimation procedure is applied; see e.g., [@izzo2021learn]. To avoid the need for direct evaluations of ${\nabla}_{{\bm \theta}} \log \Pi_{{\bm \theta}}( Z )$, we consider an alternative design via zero-th order optimization [@ghadimi2013]. The intuition comes from observing that as $\delta \to 0^+$, $({\cal L}( {\bm \theta}+ \delta {\bm u} ) - {\cal L}( {\bm \theta}))/\delta$ approximates the directional derivative of ${\cal L}$ along ${\bm u}$. This suggests that an estimate for ${\nabla}{\cal L}({\bm \theta})$ can be constructed using the *objective function values* of $\ell({\bm \theta};Z)$ only. Inspired by the above, we aim to construct a gradient estimate by querying $\ell(\cdot)$ at randomly perturbed points. Formally, given the current iterate ${\bm \theta}\in \mathbb{R}^d$ and a query radius $\delta>0$, we sample a vector ${\bm u} \in \mathbb{R}^{d}$ uniformly from $\mathbb{S}^{d-1}$.
The zero-th order gradient estimator for ${\cal L}({\bm \theta})$ is then defined as $$\begin{aligned} \label{eq:grd} g_{\delta}({\bm \theta}; {\bm u}, Z) \mathrel{\mathop:}=\frac{d}{\delta} \ell(\check{{\bm \theta}}; Z) \, {\bm u} \quad \text{with} \quad \check{{\bm \theta}}\mathrel{\mathop:}={\bm \theta}+\delta {\bm u},~Z \sim \Pi_{\check{{\bm \theta}}}(\cdot).\end{aligned}$$ In fact, as ${\bm u}$ is zero-mean, $g_{\delta}( {\bm \theta}; {\bm u}, Z )$ is an unbiased estimator for ${\nabla}{\cal L}_{\delta} ( {\bm \theta})$. Here, ${\cal L}_{\delta} ( {\bm \theta})$ is a smooth approximation of ${\cal L}({\bm \theta})$ [@flaxman2005; @Nesterov2017] defined as $$\begin{aligned} {\cal L}_{\delta} ( {\bm \theta}) = \mathbb{E}_{ {\bm u} } [ {\cal L} ( \check{{\bm \theta}}) ] = \mathbb{E}_{ {\bm u} } [ \mathbb{E}_{ Z \sim \Pi_{\check{{\bm \theta}}} } [ {\ell} ( \check{{\bm \theta}}; Z ) ] ].\end{aligned}$$ Furthermore, it is known that under mild condition \[cf. Assumption [\[assu:Lip\]](#assu:Lip){reference-type="ref" reference="assu:Lip"} to be discussed later\], $\| {\nabla}{\cal L}_{\delta} ( {\bm \theta}) - {\nabla}{\cal L}( {\bm \theta}) \| = {\cal O}(\delta)$ and thus [\[eq:grd\]](#eq:grd){reference-type="eqref" reference="eq:grd"} is an ${\cal O}(\delta)$-biased estimate for ${\nabla}{\cal L}({\bm \theta})$. We remark that the gradient estimator in [\[eq:grd\]](#eq:grd){reference-type="eqref" reference="eq:grd"} differs from the one used in classical works on DFO such as [@ghadimi2013]. The latter takes the form of $\frac{d}{\delta} ( \ell(\check{{\bm \theta}}; Z) - \ell( {\bm \theta}; Z ) ) \, {\bm u}$. Under the setting of standard stochastic optimization where the sample $Z$ is drawn *independently* of ${\bm u}$ and Lipschitz continuous $\ell (\cdot; Z)$, the said estimator in [@ghadimi2013] is shown to have constant variance while it remains ${\cal O}(\delta)$-biased. 
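To see why the one-point estimator in \[eq:grd\] targets ${\nabla}{\cal L}_\delta$, one can check it on a toy case. The sketch below drops the decision-dependent sample $Z$ (an assumption made only for this check) and uses ${\cal L}({\bm\theta}) = \|{\bm\theta}\|^2$, for which ${\nabla}{\cal L}_\delta({\bm\theta}) = 2{\bm\theta}$; the Monte-Carlo average of the estimator recovers it:

```python
import numpy as np

rng = np.random.default_rng(0)
d, delta, N = 3, 0.1, 200_000
theta = np.array([0.2, -0.1, 0.3])

# Toy deterministic loss L(theta) = ||theta||^2 (Z-dependence dropped for
# this check); its smoothed version satisfies grad L_delta(theta) = 2*theta.
U = rng.standard_normal((N, d))
U /= np.linalg.norm(U, axis=1, keepdims=True)        # rows ~ Unif(S^{d-1})
losses = np.sum((theta + delta * U) ** 2, axis=1)    # ell(theta + delta*u)
est = (d / delta) * np.mean(losses[:, None] * U, axis=0)

assert np.max(np.abs(est - 2 * theta)) < 0.05        # approx. grad L_delta
```

Note that the per-sample magnitude scales like $d/\delta$, so shrinking $\delta$ reduces the ${\cal O}(\delta)$ bias while inflating the variance; this is the trade-off managed by the two-timescale step sizes.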
Such properties *cannot* be transferred to [\[eq:grd\]](#eq:grd){reference-type="eqref" reference="eq:grd"} since $Z$ is drawn from a distribution dependent on ${\bm u}$ via $\check{{\bm \theta}}= {\bm \theta}+ \delta {\bm u}$. In this case, the two-point gradient estimator would become biased; see §[4.1](#sec:hardness){reference-type="ref" reference="sec:hardness"}. However, we note that the variance of [\[eq:grd\]](#eq:grd){reference-type="eqref" reference="eq:grd"} would increase as ${\cal O}(1/\delta^2)$ when $\delta \to 0$, thus the parameter $\delta$ yields a bias-variance trade-off in the estimator design. To remedy the increase in variance, the $\texttt{DFO}\left(\lambda\right)$ algorithm incorporates a *two-timescale step size* design for generating gradient estimates ($\delta_{k}$) and updating models ($\eta_{k}$), respectively. Our design principle is such that the models are updated at a *slower timescale* to adapt to the gradient estimator with ${\cal O}(1/\delta^2)$ variance. Particularly, we will set $\eta_{k+1}/\delta_{k+1} \rightarrow 0$ to handle the bias-variance trade-off, e.g., by setting $\alpha > \beta$ in line 4 of Algorithm [\[alg:dfo_lambda\]](#alg:dfo_lambda){reference-type="ref" reference="alg:dfo_lambda"}. **Markovian Data and Sample Accumulation.** We consider a setting where the sample/data distribution observed by the $\texttt{DFO}\left(\lambda\right)$ algorithm evolves according to a *controlled Markov chain (MC)*. Notice that this describes a stateful agent(s) scenario in which the deployed models (${\bm \theta}$) require time to manifest their influence on the samples obtained; see [@li2022state; @roy2022projection; @brown2022performative; @ray2022decision; @izzo2022learn]. To describe the setting formally, we denote $\mathbb{T}_{{\bm \theta}}: {\sf Z} \times {\cal Z} \to \mathbb{R}_+$ as a Markov kernel controlled by a deployed model ${\bm \theta}$.
For a given ${\bm \theta}$, the kernel has a unique stationary distribution $\Pi_{{\bm \theta}}(\cdot)$. Under this setting, suppose that the previous state/sample is $Z$, the next sample follows the distribution $Z' \sim \mathbb{T}_{{\bm \theta}} ( Z, \cdot )$ which is not necessarily the same as $\Pi_{{\bm \theta}}(\cdot)$. As a consequence, the gradient estimator [\[eq:grd\]](#eq:grd){reference-type="eqref" reference="eq:grd"} is not an unbiased estimator of ${\nabla}{\cal L}_{\delta} ({\bm \theta})$ since $Z \sim \Pi_{\check{{\bm \theta}}} (\cdot)$ cannot be conveniently accessed. A common strategy in settling the above issue is to allow a *burn-in* phase in the algorithm as in [@ray2022decision]; also commonly found in MCMC methods [@robert1999monte]. Using the fact that $\mathbb{T}_{{\bm \theta}}$ admits the stationary distribution $\Pi_{{\bm \theta}}$, if one can wait a sufficiently long time before applying the current sample, i.e., consider initializing with the previous sample $Z^{(0)} = Z$, the procedure $$\begin{aligned} \label{eq:burnin} Z^{(m)} \sim \mathbb{T}_{{\bm \theta}} (Z^{(m-1)}, \cdot),~m=1,\ldots,\tau,\vspace{-.2cm}\end{aligned}$$ would yield a sample $Z^+ = Z^{(\tau)}$ that admits a distribution close to $\Pi_{{\bm \theta}}$ provided that $\tau \gg 1$ is sufficiently large compared to the mixing time of $\mathbb{T}_{{\bm \theta}}$. Intuitively, the procedure [\[eq:burnin\]](#eq:burnin){reference-type="eqref" reference="eq:burnin"} may be inefficient as a number of samples $Z^{(1)}, Z^{(2)}, \ldots, Z^{(\tau-1)}$ will be completely ignored at the end of each iteration. As a remedy, the $\texttt{DFO}\left(\lambda\right)$ algorithm incorporates a sample accumulation mechanism which gathers the gradient estimates generated from possibly non-stationary samples via a forgetting factor of $\lambda \in [0,1)$. 
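The burn-in procedure \[eq:burnin\] can be illustrated on a toy two-state chain (an illustrative kernel with the ${\bm \theta}$-dependence suppressed, not from the paper): after enough steps, the returned sample is nearly stationary.

```python
import numpy as np

rng = np.random.default_rng(0)

def kernel(z, rng):
    """Toy two-state kernel on {0, 1}: stay w.p. 0.7, flip w.p. 0.3.
    Stationary distribution is (1/2, 1/2); mixing rate rho = 0.4."""
    return z if rng.random() < 0.7 else 1 - z

def burn_in(z0, tau, rng):
    """The procedure in eq. (burn-in): run tau steps, return Z^(tau)."""
    z = z0
    for _ in range(tau):
        z = kernel(z, rng)
    return z

# Starting far from stationarity (always z0 = 0), Z^(tau) is near Pi,
# but the tau - 1 intermediate samples are discarded.
samples = [burn_in(0, tau=20, rng=rng) for _ in range(5000)]
assert abs(float(np.mean(samples)) - 0.5) < 0.03
```

This also makes the inefficiency visible: each returned sample costs $\tau$ kernel draws, which is what the accumulation mechanism with factor $\lambda$ recovers.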
Following [\[eq:grd\]](#eq:grd){reference-type="eqref" reference="eq:grd"}, ${\nabla}{\cal L}({\bm \theta})$ is estimated by $$\begin{aligned} \textstyle {\bm g} = \frac{d}{\delta} \sum_{m=1}^{\tau} \lambda^{\tau - m} \ell( {\bm \theta}^{(m)} + \delta {\bm u} ; Z^{(m)} ) \, {\bm u},~~\text{with}~~Z^{(m)} \sim \mathbb{ T}_{ {\bm \theta}^{(m)} + \delta {\bm u} }(Z^{(m-1)}, \cdot).\end{aligned}$$ At a high level, the mechanism works by assigning large weights to samples that are close to the end of an epoch (which are less biased). Moreover, ${\bm \theta}^{(m)}$ is *simultaneously updated* within the epoch to obtain an online algorithm that gradually improves the objective value of [\[perf\]](#perf){reference-type="eqref" reference="perf"}. Note that with $\lambda=0$, the `DFO`(0) algorithm reduces into one that utilizes *burn-in* [\[eq:burnin\]](#eq:burnin){reference-type="eqref" reference="eq:burnin"}. We remark that from the implementation perspective for performative prediction, Algorithm [\[alg:dfo_lambda\]](#alg:dfo_lambda){reference-type="ref" reference="alg:dfo_lambda"} corresponds to a *greedy deployment* scheme [@perdomo2020performative] as the latest model ${\bm \theta}_{k}^{(m)} + \delta_{k} {\bm u}_{k}$ is deployed at every sampling step. Line 6--10 of Algorithm [\[alg:dfo_lambda\]](#alg:dfo_lambda){reference-type="ref" reference="alg:dfo_lambda"} details the above procedure. Lastly, we note that recent works have analyzed stochastic algorithms that rely on a *single trajectory* of samples taken from a Markov Chain, e.g., [@sun2018markov; @karimi2019non; @doan2022finite], that are based on stochastic gradient. [@sun2019decentralized] considered a `DFO` algorithm for general optimization problems but the MC studied is not controlled by ${\bm \theta}$. # Main Results {#sec:main} This section studies the convergence of the $\texttt{DFO}\left(\lambda\right)$ algorithm and demonstrates that the latter finds an $\epsilon$-stationary solution \[cf. 
[\[eq:stationary\]](#eq:stationary){reference-type="eqref" reference="eq:stationary"}\] to ([\[perf\]](#perf){reference-type="ref" reference="perf"}). We first state the assumptions required for our analysis: **Assumption 1**. **(Smoothness)**[\[assu:Lip\]]{#assu:Lip label="assu:Lip"} ${\cal L}({\bm \theta})$ is differentiable, and there exists a constant $L > 0$ such that $$\left\Vert {\nabla}{\cal L}({\bm \theta}) - {\nabla}{\cal L}({\bm \theta}^\prime) \right\Vert \leq L\left\Vert {\bm \theta}-{\bm \theta}^\prime \right\Vert, ~ \forall {\bm \theta}, {\bm \theta}^\prime \in \mathbb{R}^d.$$ **Assumption 2**. **(Bounded Loss)**[\[assu:BoundLoss\]]{#assu:BoundLoss label="assu:BoundLoss"} There exists a constant ${ G}> 0$ such that $$|\ell({\bm \theta}; z)| \leq { G},~\forall~ {\bm \theta}\in \mathbb{R}^d, ~\forall ~z\in {\sf Z}.$$ **Assumption 3**. **(Lipschitz Distribution Map)**[\[assu:smooth_dist\]]{#assu:smooth_dist label="assu:smooth_dist"} There exists a constant $L_1 > 0$ such that $$\boldsymbol{\delta}_{\text{TV}}\left(\Pi_{{\bm \theta}_1}, \Pi_{{\bm \theta}_2} \right)\leq L_1 \left\Vert {\bm \theta}_1-{\bm \theta}_2 \right\Vert\quad\forall{\bm \theta}_1,{\bm \theta}_2\in\mathbb{R}^d.$$ The conditions above state that the gradient of the performative risk is Lipschitz continuous and the state-dependent distribution varies smoothly w.r.t. ${\bm \theta}$. Note that Assumption [\[assu:Lip\]](#assu:Lip){reference-type="ref" reference="assu:Lip"} is found in recent works such as [@izzo2021learn; @ray2022decision], and Assumption [\[assu:BoundLoss\]](#assu:BoundLoss){reference-type="ref" reference="assu:BoundLoss"} can be found in [@izzo2021learn]. Assumption [\[assu:smooth_dist\]](#assu:smooth_dist){reference-type="ref" reference="assu:smooth_dist"} is slightly strengthened from the Wasserstein-1 distance bound in [@perdomo2020performative], and it gives better control of the distribution shift in our Markovian data setting.
Next, we consider the assumptions about the controlled Markov chain induced by $\mathbb{T}_{{\bm \theta}}$: **Assumption 4**. **(Geometric Mixing)**[\[assu:FastMixing\]]{#assu:FastMixing label="assu:FastMixing"} Let $\{Z_{k}\}_{k\geq 0}$ denote a Markov Chain on the state space ${\sf Z}$ with transition kernel $\mathbb{T}_{{\bm \theta}}$ and stationary measure $\Pi_{{\bm \theta}}$. There exist constants $\rho \in [0,1)$, $M \geq 0$, such that for any $k\geq 0$, $z\in {\sf Z}$, $$\boldsymbol{\delta}_{\text{TV}}\left(\mathbb{ P}_{{\bm \theta}}(Z_k\in \cdot | Z_0 =z), \Pi_{{\bm \theta}} \right) \leq M \rho^{k}.$$ **Assumption 5**. **(Smoothness of Markov Kernel)**[\[assu:smooth_kernel\]]{#assu:smooth_kernel label="assu:smooth_kernel"} There exists a constant $L_{2} \geq 0$ such that $$\begin{aligned} \boldsymbol{\delta}_{\text{TV}}\left(\mathbb{T}_{{\bm \theta}_1}(z, \cdot) , \mathbb{T}_{{\bm \theta}_2}(z, \cdot) \right) \leq L_{2} \left\Vert {\bm \theta}_1-{\bm \theta}_2 \right\Vert, ~\forall {\bm \theta}_1, {\bm \theta}_2\in \mathbb{R}^{d},~z\in {\sf Z}.\end{aligned}$$ Assumption [\[assu:FastMixing\]](#assu:FastMixing){reference-type="ref" reference="assu:FastMixing"} is a standard condition on the mixing time of the Markov chain induced by $\mathbb{T}_{{\bm \theta}}$; Assumption [\[assu:smooth_kernel\]](#assu:smooth_kernel){reference-type="ref" reference="assu:smooth_kernel"} imposes a smoothness condition on the Markov transition kernel $\mathbb{T}_{{\bm \theta}}$ with respect to ${\bm \theta}$. For instance, the geometrically decaying dynamic environment in [@ray2022decision] constitutes a special case that satisfies the above conditions. Unlike [@ray2022decision; @izzo2021learn; @Miller2021OutsideTE], we do not impose any additional assumption (such as mixture dominance) other than Assumption [\[assu:smooth_dist\]](#assu:smooth_dist){reference-type="ref" reference="assu:smooth_dist"} on $\Pi_{{\bm \theta}}$.
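Assumption 4 can be verified in closed form for a two-state chain: with transition matrix $[[1-a, a], [b, 1-b]]$, the TV distance from any start decays as $\rho^k$ with $\rho = |1-a-b|$, so $M = 1$ suffices. A quick numerical check under an illustrative kernel (not from the paper):

```python
import numpy as np

a, b = 0.3, 0.5                            # illustrative transition probs
P = np.array([[1 - a, a], [b, 1 - b]])     # two-state Markov kernel
pi = np.array([b, a]) / (a + b)            # stationary distribution
rho = abs(1 - a - b)                       # geometric mixing rate

Pk = np.eye(2)
for k in range(1, 15):
    Pk = Pk @ P                            # k-step transition probabilities
    for z in (0, 1):                       # TV distance from each start state
        tv = 0.5 * float(np.abs(Pk[z] - pi).sum())
        assert tv <= rho ** k + 1e-12      # Assumption 4 with M = 1
```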
As a result, [\[perf\]](#perf){reference-type="eqref" reference="perf"} remains an 'unstructured' non-convex optimization problem. Our main theoretical result on the convergence of the $\texttt{DFO}\left(\lambda\right)$ algorithm towards a near-stationary solution of [\[perf\]](#perf){reference-type="eqref" reference="perf"} is summarized as Theorem [\[thm1\]](#thm1){reference-type="ref" reference="thm1"}, where we have defined the following quantities and constants: $$\begin{aligned} \label{eq:c567} c_5=2 { G}, \quad c_6=\frac{\max\{L^2,{ G}^2(1 - \beta)\}}{ 1-2\beta }, \quad c_7=\frac{L{ G}^2 }{ 2 \beta - \alpha + 1 },\end{aligned}$$ with ${\alpha = \frac{2}{3}, \beta = \frac{1}{6}}$. Observe the following corollary on the iteration complexity of the $\texttt{DFO}\left(\lambda\right)$ algorithm: **Corollary 1**. *($\epsilon$-stationarity) Suppose that the Assumptions of Theorem [\[thm1\]](#thm1){reference-type="ref" reference="thm1"} hold. Fix any $\epsilon>0$; the condition $\min_{0\leq k\leq T-1} \mathbb{E}\left\Vert {\nabla}{\cal L}({\bm \theta}_k) \right\Vert^2 \leq \epsilon$ holds whenever $$\begin{aligned} \textstyle T \geq \left( 12 \max\left\{c_5 (1-\lambda), c_6, \frac{c_7}{1-\lambda}\right\} \right)^3 \frac{d^2}{\epsilon^3} .\end{aligned}$$* In the corollary above, the lower bound on $T$ is expressed in terms of the number of epochs that Algorithm [\[alg:dfo_lambda\]](#alg:dfo_lambda){reference-type="ref" reference="alg:dfo_lambda"} needs to achieve the target accuracy.
Consequently, the total number of samples required (i.e., the number of inner iterations taken in Line 6--9 of Algorithm [\[alg:dfo_lambda\]](#alg:dfo_lambda){reference-type="ref" reference="alg:dfo_lambda"} across all epochs) is: $$\begin{aligned} \label{eq:sample-complex} \textstyle {\tt S}_{\epsilon} = \sum_{k=1}^{T} \tau_k = {\cal O}\left( \frac{d^2}{\epsilon^3} \log(1/\epsilon)\right).\end{aligned}$$ We remark that due to the decision-dependent properties of the samples, the $\texttt{DFO}\left(\lambda\right)$ algorithm exhibits a worse sampling complexity [\[eq:sample-complex\]](#eq:sample-complex){reference-type="eqref" reference="eq:sample-complex"} than prior works on stochastic DFO algorithms, e.g., [@ghadimi2013] which shows a rate of ${\cal O}( d / \epsilon^2 )$ on non-convex smooth objective functions. In particular, the adopted one-point gradient estimator in [\[eq:grd\]](#eq:grd){reference-type="eqref" reference="eq:grd"} admits a variance that can only be controlled by a time-varying $\delta$; see the discussions in §[4.1](#sec:hardness){reference-type="ref" reference="sec:hardness"}. Achieving the desired convergence rate requires setting $\eta_k = \Theta( k^{-2/3} )$, $\delta_k = \Theta( k^{-1/6} )$, i.e., yielding a two-timescale step-size design with $\eta_k / \delta_k \to 0$. Notice that the influence of the forgetting factor $\lambda$ is reflected in the constant factor of [\[eq:main_result\]](#eq:main_result){reference-type="eqref" reference="eq:main_result"}. Particularly, if $c_5 > c_7$ and $c_5\geq c_6$, the optimal choice is $\lambda = 1-\sqrt{\frac{c_7}{c_5}}$, otherwise the optimal choice is $\lambda\in\left[0, 1-c_7/c_6\right]$. Informally, this indicates that when the performative risk is smoother (i.e., its gradient has a small Lipschitz constant), a large $\lambda$ can speed up the convergence of the algorithm; otherwise a smaller $\lambda$ is preferable.
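The rule for choosing $\lambda$ stated above can be expressed as a small helper; this is a sketch of the stated cases (with $c_5, c_6, c_7$ as in \[eq:c567\]), returning the left endpoint of the admissible interval in the second case:

```python
import math

def optimal_lambda(c5, c6, c7):
    """Pick the forgetting factor per the stated rule: balance the terms
    c5*(1-lam) and c7/(1-lam) when c5 dominates; otherwise any
    lam in [0, 1 - c7/c6] is optimal and we return 0."""
    if c5 > c7 and c5 >= c6:
        return 1 - math.sqrt(c7 / c5)
    return 0.0

# A smooth performative risk (c7 small relative to c5) favours a larger lam.
assert abs(optimal_lambda(4.0, 2.0, 1.0) - 0.5) < 1e-12
assert optimal_lambda(1.0, 3.0, 2.0) == 0.0
```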
# Proof Outline of Main Results {#sec:proof} This section outlines the key steps in proving Theorem [\[thm1\]](#thm1){reference-type="ref" reference="thm1"}. Notice that analyzing the $\texttt{DFO}\left(\lambda\right)$ algorithm is challenging due to the two-timescales step sizes and Markov chain samples with time varying kernel. Our analysis departs significantly from prior works such as [@ray2022decision; @izzo2021learn; @brown2022performative; @li2022state] to handle the challenges above. Let ${\cal F}^k = \sigma( {\bm \theta}_0, Z_{s}^{(m)}, u_s, 0 \leq s \leq k, 0 \leq m \leq \tau_{k})$ be the filtration. Our first step is to exploit the smoothness of ${\cal L}({\bm \theta})$ to bound the squared norms of gradient. Observe that: **Lemma 1**. ***(Decomposition)** [\[lem:decomposition\]]{#lem:decomposition label="lem:decomposition"} Under Assumption [\[assu:Lip\]](#assu:Lip){reference-type="ref" reference="assu:Lip"}, it holds that $$\begin{aligned} \label{eq:decompose1} \sum_{k=0}^{t}\mathbb{E}\left\Vert {\nabla}{\cal L}({\bm \theta}_k) \right\Vert^2 \leq {\bf I}_{1}(t) + {\bf I}_{2}(t) + {\bf I}_{3}(t) + {\bf I}_{4}(t),\end{aligned}$$ for any $t \geq 1$, where $$\begin{aligned} &\textstyle {\bf I}_{1}(t) \mathrel{\mathop:}=\sum_{k=1}^{t}\frac{1-\lambda}{\eta_{k}}\left(\mathbb{E}\left[{\cal L}({\bm \theta}_k)\right]-\mathbb{E}\left[{\cal L}({\bm \theta}_{k+1})\right]\right) \\ &\textstyle {\bf I}_{2}(t) \mathrel{\mathop:}=-\sum_{k=1}^{t}\mathbb{E}\left\langle \nabla {\cal L}({\bm \theta}_k) \big| (1-\lambda)\sum_{m=1}^{\tau_{k}}\lambda^{\tau_{k}-m} \cdot \left(g_{k}^{(m)}-\mathbb{E}_{Z\sim \Pi_{\check{{\bm \theta}}_k}}\left[g_{\delta_k}({\bm \theta}_{k};u_k, Z)\right]\right) \right\rangle \\ & \textstyle {\bf I}_{3}(t) \mathrel{\mathop:}=-\sum_{k=1}^{t}\mathbb{E}\left\langle {\nabla}{\cal L}({\bm \theta}_k) {\big|} (1-\lambda) \left(\sum_{m=1}^{\tau_{k}}\lambda^{\tau_{k}-m}{\nabla}{{\cal L}_{\delta_{k}}({\bm \theta}_{k})}\right)-{\nabla}{\cal L}({\bm \theta}_k) 
\right\rangle \\ & \textstyle {\bf I}_{4}(t) \mathrel{\mathop:}=\frac{L(1-\lambda)}{2}\sum_{k=1}^{t}\eta_{k}\mathbb{E}\left\Vert \sum_{m=1}^{\tau_{k}}\lambda^{\tau_{k}-m}g_{k}^{(m)} \right\Vert^2\end{aligned}$$* The lemma is obtained by applying the standard descent lemma implied by Assumption [\[assu:Lip\]](#assu:Lip){reference-type="ref" reference="assu:Lip"} and decomposing the upper bound on $|| {\nabla}{\cal L}({\bm \theta}_k) ||^2$ into the respective terms; see the proof in Appendix [7](#ap:decompose){reference-type="ref" reference="ap:decompose"}. Among the terms on the right hand side of [\[eq:decompose1\]](#eq:decompose1){reference-type="eqref" reference="eq:decompose1"}, we note that ${\bf I}_{1}(t),{\bf I}_{3}(t)$ and ${\bf I}_{4}(t)$ arise directly from Assumption [\[assu:Lip\]](#assu:Lip){reference-type="ref" reference="assu:Lip"}, while ${\bf I}_{2}(t)$ comes from bounding the noise terms due to Markovian data. We bound the four components in Lemma [\[lem:decomposition\]](#lem:decomposition){reference-type="ref" reference="lem:decomposition"} as follows. For simplicity, we denote ${\cal A}(t) \mathrel{\mathop:}=\frac{1}{1+t} \sum_{k=0}^{t} \mathbb{E}\left\Vert {\nabla}{\cal L}({\bm \theta}_k) \right\Vert^2$. Among the four terms, we highlight that the main challenge lies in obtaining a tight bound for ${\bf I}_{2}(t)$. Observe that $$\begin{aligned} \label{eq:aa} {\bf I}_{2}(t)\leq(1-\lambda) \mathbb{E}\left[ \sum_{k=0}^{t} \left\Vert {\nabla}{\cal L}({\bm \theta}_k) \right\Vert \cdot \bigg\|\sum_{m=1}^{\tau_{k}}\lambda^{\tau_{k} -m} \Delta_{k,m} \bigg\| \right] \vspace{-.1cm}\end{aligned}$$ where $\Delta_{k,m} \!\overset{\text{def}}{=}\!\mathbb{E}_{{\cal F}^{k-1}}\![ g_{k}^{(m)}\!-\!\mathbb{E}_{Z\sim \Pi_{\check{{\bm \theta}}_k}}g_{k}({\bm \theta}_{k};u_{k},Z)]$. There are two sources of bias in $\Delta_{k,m}$: one is the noise induced by the drift of the decision variable within each epoch; the other is the bias that depends on the mixing time of the Markov kernel.
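The second source of bias is governed by how fast the kernel forgets its initial state. As a concrete toy illustration (our sketch, not part of the analysis), the AR(1) kernel used later in the numerical section mixes geometrically at rate $\rho = 1-\gamma$: the gap between the conditional mean after $m$ steps and the stationary mean shrinks by a factor $\rho$ per step.

```python
import numpy as np

# Toy illustration (not part of the analysis): for a FIXED decision, the AR(1)
# kernel Z' = (1-gamma) Z + gamma*Zbar, Zbar ~ N(theta, v), mixes geometrically:
# E[Z_m | Z_0] - theta = (1-gamma)^m * (Z_0 - theta), i.e. rho = 1 - gamma.
rng = np.random.default_rng(0)
gamma, theta, z0, n = 0.3, 2.0, 10.0, 100_000
rho = 1.0 - gamma

z = np.full(n, z0)                 # n independent chains started at z0
for m in range(1, 6):
    z = (1 - gamma) * z + gamma * rng.normal(theta, 1.0, size=n)
    predicted = theta + rho ** m * (z0 - theta)   # geometric decay of the bias
    assert abs(z.mean() - predicted) < 0.05
print("geometric mixing verified at rate rho =", rho)
```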
To control these biases, inspired by the proof of [@wu2020finite Theorem 4.7], we introduce a reference Markov chain $\Tilde{Z}_{k}^{(\ell)}$, $\ell=0,...,\tau_k$, whose decision variable remains fixed for a period of length $\tau_k$ and which is initialized with $\Tilde{Z}_{k}^{(0)} = Z_{k}^{(0)}$: $$\begin{aligned} &\Tilde{Z}_{k}^{(0)}\xrightarrow[]{\check{{\bm \theta}}_{k}}\Tilde{Z}_{k}^{(1)}\xrightarrow[]{\check{{\bm \theta}}_{k}}\Tilde{Z}_{k}^{(2)}\xrightarrow[]{\check{{\bm \theta}}_{k}}\Tilde{Z}_{k}^{(3)}\cdots\xrightarrow[]{\check{{\bm \theta}}_{k}}\Tilde{Z}_{k}^{(\tau_{k})} \label{chain:ref}\end{aligned}$$ and we recall that the actual chain in the algorithm evolves as $$Z_{k}^{(0)}\xrightarrow[]{\check{{\bm \theta}}_{k+1}^{(0)}}Z_{k}^{(1)}\xrightarrow[]{\check{{\bm \theta}}_{k+1}^{(1)}}Z_{k}^{(2)}\cdots\xrightarrow[]{\check{{\bm \theta}}_{k+1}^{(\tau_{k}-1)}}Z_{k}^{(\tau_{k})}. \label{chain:real}$$ With the help of the reference chain, we decompose $\Delta_{k,m}$ into $$\begin{aligned} \Delta_{k, m} &= \mathbb{E}_{{\cal F}^{k-1}}\left[\frac{d}{\delta_{k}} \left(\mathbb{E}[\ell(\check{{\bm \theta}}_{k}^{(m)}; Z_{k}^{(m)})|\check{{\bm \theta}}_{k}^{(m)}, Z_{k}^{(0)}] - \mathbb{E}_{\Tilde{Z}_{k}^{(m)}}[\ell(\check{{\bm \theta}}_{k}^{(m)}; \Tilde{Z}_{k}^{(m)})|\check{{\bm \theta}}_{k}^{(m)}, \Tilde{Z}_{k}^{(0)}] \right) u_{k}\right] \\ &+ \mathbb{E}_{{\cal F}^{k-1}} \left[ \frac{d}{\delta_{k}} \left(\mathbb{E}_{\Tilde{Z}_{k}^{(m)}}[\ell(\check{{\bm \theta}}_{k}^{(m)}; \Tilde{Z}_{k}^{(m)})|\check{{\bm \theta}}_{k}^{(m)}, \Tilde{Z}_{k}^{(0)}] - \mathbb{E}_{Z\sim \Pi_{\check{{\bm \theta}}_k}}[\ell(\check{{\bm \theta}}_{k}^{(m)}; Z)|\check{{\bm \theta}}_{k}^{(m)}] \right) u_{k} \right] \\ &+ \mathbb{E}_{{\cal F}^{k-1}} \frac{d}{\delta_{k}} \mathbb{E}_{Z\sim\Pi_{\check{{\bm \theta}}_{k}}} \left[ \ell(\check{{\bm \theta}}_{k}^{(m)}; Z) - \ell(\check{{\bm \theta}}_k; Z)|\check{{\bm \theta}}_{k}^{(m)}, \check{{\bm \theta}}_{k}\right] u_{k} \mathrel{\mathop:}=A_{1} +
A_{2} + A_{3}\end{aligned}$$ We remark that $A_1$ reflects the drift of ([\[chain:real\]](#chain:real){reference-type="ref" reference="chain:real"}) from the initial sample $Z_k^{(0)}$ driven by the varying $\check{{\bm \theta}}_{k}^{(m)}$, $A_2$ captures the statistical discrepancy between the two Markov chains ([\[chain:real\]](#chain:real){reference-type="ref" reference="chain:real"}) and ([\[chain:ref\]](#chain:ref){reference-type="ref" reference="chain:ref"}) at the same step $m$, and $A_3$ captures the drifting gap between $\check{{\bm \theta}}^{}_{k}$ and $\check{{\bm \theta}}^{(m)}_{k}$. Applying Assumption [\[assu:smooth_dist\]](#assu:smooth_dist){reference-type="ref" reference="assu:smooth_dist"}, $A_1$ and $A_2$ can be upper bounded using the smoothness and geometric mixing properties of the Markov kernel. In addition, $A_3$ can be upper bounded using the Lipschitz condition on the (stationary) distribution map $\Pi_{{\bm \theta}}$. Finally, the forgetting factor $\lambda$ helps to keep $\| \check{{\bm \theta}}_{k}^{(\cdot)} - \check{{\bm \theta}}_{k} \|$ of the same order as a single update. Therefore, $\left\Vert \Delta_{k,m} \right\Vert$ can be controlled by an upper bound depending on $\lambda, \rho, L$. The following lemma summarizes the above results as well as the bounds on the other terms: **Lemma 2**. *Under Assumption [\[assu:BoundLoss\]](#assu:BoundLoss){reference-type="ref" reference="assu:BoundLoss"}, [\[assu:smooth_dist\]](#assu:smooth_dist){reference-type="ref" reference="assu:smooth_dist"}, [\[assu:FastMixing\]](#assu:FastMixing){reference-type="ref" reference="assu:FastMixing"} and [\[assu:smooth_kernel\]](#assu:smooth_kernel){reference-type="ref" reference="assu:smooth_kernel"}, with $\eta_{t+1}=\eta_{0}(1+t)^{-\alpha}$, $\delta_{t+1}=\delta_0 (1+t)^{-\beta}$ and $\alpha \in (0,1)$, $\beta \in (0, \frac{1}{2})$.
Suppose that $0 < 2\alpha-4\beta < 1$ and $$\tau_{k}\geq\frac{1}{\log 1/\max\{\rho,\lambda\}}\left(\log(1+k)+\max\{\log\frac{\delta_0}{d}, 0\}\right).$$ Then, it holds that $$\begin{aligned} {\bf I}_{2}(t) & \leq \frac{ c_2 d^{5/2}}{(1-\lambda)^2}{{\cal A}(t)}^{\frac{1}{2}} (1+t)^{1-(\alpha-2\beta)}, \quad \forall ~t \geq \max\{t_1,t_2\} \label{eq:i2} \\ {\bf I}_{1}(t) &\leq c_1 (1-\lambda) (1+t)^{\alpha},~~{\bf I}_{3}(t) \leq c_3 {{\cal A}(t)}^{\frac{1}{2}} (1+t)^{1-\beta}, ~~{\bf I}_{4}(t) \leq \frac{c_4 d^2}{1-\lambda} (1+t)^{1-\left(\alpha-2\beta\right)}, \label{eq:i4-bound}\end{aligned}$$ where $t_1, t_2$ are defined in [\[eq:const-t1\]](#eq:const-t1){reference-type="eqref" reference="eq:const-t1"}, [\[eq:const-t2\]](#eq:const-t2){reference-type="eqref" reference="eq:const-t2"}, and $c_1, c_2, c_3, c_4$ are constants defined as follows: $$\begin{aligned} c_{1} &\mathrel{\mathop:}=2{ G}/ \eta_0, ~~ c_{2} \mathrel{\mathop:}=\frac{\eta_0}{\delta_0^2}\frac{6\cdot (L_1{ G}^2+L_2{ G}^2+\sqrt{L}{ G}^{3/2} )}{\sqrt{1-2\alpha+4\beta}}, \\ c_{3} &\mathrel{\mathop:}=\frac{2}{\sqrt{1-2\beta}}\max\{L\delta_0, { G}\sqrt{1-\beta}\}, ~~ c_{4} \mathrel{\mathop:}=\frac{\eta_{0}}{\delta_{0}^2}\cdot\frac{ L { G}^2}{2\beta-\alpha+1}.\end{aligned}$$* See Appendix [8](#App_bound_four_terms){reference-type="ref" reference="App_bound_four_terms"} for the proof. We comment that the bound for ${\bf I}_{4}(t)$ cannot be improved. As a concrete example, consider the constant function $\ell({\bm \theta}; z) = {\rm c} \neq 0$ for all $z \in {\sf Z}$. It can be shown that $\| g_{k}^{(m)} \|^2 = (d {\rm c}/ \delta_k)^2$ and consequently ${\bf I}_4(t) = \Omega( \sum_{k=0}^{t} \eta_{k}/ \delta_{k}^2 ) = \Omega( t^{1-(\alpha-2\beta)} )$, which matches [\[eq:i4-bound\]](#eq:i4-bound){reference-type="eqref" reference="eq:i4-bound"}.
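Two elementary facts behind these estimates can be sanity-checked numerically (a sketch with illustrative exponents, not the paper's constants): the growth rate $\sum_{k \le t} \eta_k/\delta_k^2 = \Theta(t^{1-(\alpha-2\beta)})$ appearing in the $\Omega(\cdot)$ remark above, and the resolution of quadratic inequalities of the form $x \leq b + c\sqrt{x}$, which forces $x \leq 2b + c^2$.

```python
import math
import numpy as np

# (i) sum_{k<=t} k^{-(alpha-2*beta)} ~ t^{1-(alpha-2*beta)} / (1-(alpha-2*beta))
alpha, beta = 0.6, 0.2          # illustrative exponents with 0 < 2a - 4b < 1
e = alpha - 2 * beta            # eta_k / delta_k^2  scales like  k^{-e}
t = 10 ** 6
s = np.sum(np.arange(1, t + 1, dtype=float) ** (-e))
assert abs(s / t ** (1 - e) - 1.0 / (1.0 - e)) < 1e-2

# (ii) x <= b + c*sqrt(x)  implies  x <= 2b + c^2 (solve the quadratic in sqrt(x))
rng = np.random.default_rng(0)
for b, c in rng.uniform(0.0, 10.0, size=(1000, 2)):
    root = ((c + math.sqrt(c * c + 4 * b)) / 2) ** 2   # largest feasible x
    assert root <= b + c * math.sqrt(root) + 1e-9      # root is feasible
    assert root <= 2 * b + c * c + 1e-9                # implied explicit bound
print("both facts verified numerically")
```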
Finally, plugging Lemma [Lemma 2](#lem:bound_four_terms){reference-type="ref" reference="lem:bound_four_terms"} into Lemma [\[lem:decomposition\]](#lem:decomposition){reference-type="ref" reference="lem:decomposition"} gives: $${\cal A}(t) \leq \frac{c_1 (1-\lambda)}{(1+t)^{1-\alpha}} + \frac{ c_2 d^{5/2}}{(1-\lambda)^2} \frac{ {{\cal A}(t)}^{\frac{1}{2}} }{ (1+t)^{\alpha-2\beta} } + c_3 \frac{ {{\cal A}(t)}^{\frac{1}{2}} }{ (1+t)^\beta } + c_4 \frac{d^2}{1-\lambda} \frac{1}{(1+t)^{\alpha-2\beta} }.$$ Since ${\cal A}(t) \geq 0$, the above is a quadratic inequality that implies the following bound: **Lemma 3**. *Under Assumption [\[assu:Lip\]](#assu:Lip){reference-type="ref" reference="assu:Lip"}--[\[assu:smooth_kernel\]](#assu:smooth_kernel){reference-type="ref" reference="assu:smooth_kernel"}, with the step sizes $\eta_{t+1}=\eta_{0}(1+t)^{-\alpha}$, $\delta_{t+1}=\delta_0 (1+t)^{-\beta}, \tau_{k}\geq\frac{1}{\log 1/\max\{\rho,\lambda\}}\left(\log(1+k)+\max\{\log\frac{\delta_0}{d}, 0\}\right)$, $\eta_0=d^{-2/3}, \delta_0=d^{1/3}$, $\alpha \in (0,1)$, $\beta \in (0, \frac{1}{2})$. If $2\alpha-4\beta < 1$, then there exists a constant $t_0$ such that the iterates $\{{\bm \theta}_k\}_{k\geq 0}$ satisfy $$\begin{aligned} \frac{1}{1+T}\sum_{k=0}^{T}\mathbb{E}\left\Vert {\nabla}{{\cal L}({\bm \theta}_k)} \right\Vert^2 \leq 12 \max\{c_5 (1-\lambda), c_6, \frac{c_7}{1-\lambda}\} d^{2/3} T^{-\min\{2\beta,1-\alpha,\alpha-2\beta\}},~\forall~T \geq t_0. \end{aligned}$$* Optimizing the step size exponents $\alpha, \beta$ in the above concludes the proof of Theorem [\[thm1\]](#thm1){reference-type="ref" reference="thm1"}. ## Discussions {#sec:hardness} We conclude by discussing two alternative zeroth-order gradient estimators to [\[eq:grd\]](#eq:grd){reference-type="eqref" reference="eq:grd"}, and argue that they do not improve over the sample complexity of the proposed $\texttt{DFO}\left(\lambda\right)$ algorithm.
We study: $$\begin{aligned} \textstyle {\bm g}_{\sf 2pt-I} \mathrel{\mathop:}=\frac{d}{\delta} \left[\ell\left({\bm \theta}+\delta {\bm u}; Z \right) - \ell({\bm \theta}; Z) \right] {\bm u}, \quad {\bm g}_{\sf 2pt-II} \mathrel{\mathop:}=\frac{d}{\delta} \left[\ell\left({\bm \theta}+\delta {\bm u}; Z_1 \right) - \ell({\bm \theta}; Z_2) \right] {\bm u},\end{aligned}$$ where ${\bm u} \sim \sf \text{Unif}(\mathbb{S}^{d-1})$. For ease of illustration, we assume that the samples $Z, Z_1, Z_2$ are drawn directly from the stationary distributions $Z \sim \Pi_{{\bm \theta}+\delta u}$, $Z_1 \sim \Pi_{{\bm \theta}+\delta {\bm u} }$, $Z_2\sim \Pi_{{\bm \theta}}$. We recall from §[2](#sec:setup){reference-type="ref" reference="sec:setup"} that the estimator ${\bm g}_{\sf 2pt-I}$ is a finite-difference approximation of the directional derivative of the objective function along the randomized direction ${\bm u}$[^2], as proposed in [@Nesterov2017; @ghadimi2013]. For non-convex stochastic optimization with a decision-independent sample distribution, i.e., $\Pi_{{\bm \theta}} \equiv \bar{\Pi}$ for all ${\bm \theta}$, the DFO algorithm based on ${\bm g}_{\sf 2pt-I}$ is known to admit an optimal sample complexity of ${\cal O}(1/\epsilon^2)$ [@Jamieson2012]; note that in this case $\mathbb{E}_{{\bm u} \sim {\sf \text{Unif}(\mathbb{S}^{d-1})} , Z \sim \bar{\Pi}} [ \ell({\bm \theta};Z) {\bm u} ] = {\bm 0}$. However, in the case of a decision-dependent sample distribution as in [\[perf\]](#perf){reference-type="eqref" reference="perf"}, ${\bm g}_{\sf 2pt-I}$ becomes a *biased* estimator since the sample $Z$ is drawn from $\Pi_{{\bm \theta}+\delta {\bm u}}$, which depends on ${\bm u}$. The DFO algorithm based on ${\bm g}_{\sf 2pt-I}$ may therefore fail to converge to a stationary solution of [\[perf\]](#perf){reference-type="eqref" reference="perf"}.
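To see the bias concretely, consider a hypothetical one-dimensional example (our construction for illustration, not from the analysis): $\ell({\bm \theta}; z) = {\bm \theta} z$ with $\Pi_{{\bm \theta}} = {\cal N}({\bm \theta}, 1)$, so ${\cal L}({\bm \theta}) = {\bm \theta}^2$ and ${\nabla} {\cal L}({\bm \theta}) = 2{\bm \theta}$. In this example ${\bm g}_{\sf 2pt-I}$ only captures the explicit dependence of $\ell$ on ${\bm \theta}$ and misses the distribution-shift half of the gradient:

```python
import numpy as np

# Hypothetical 1-D illustration (not from the paper): ell(theta; z) = theta*z
# with Pi_theta = N(theta, 1), so L(theta) = theta^2 and L'(theta) = 2*theta.
rng = np.random.default_rng(0)
theta, delta, n = 1.0, 0.1, 200_000

u = rng.choice([-1.0, 1.0], size=n)          # u ~ Unif(S^0) = {-1, +1}
z = rng.normal(theta + delta * u, 1.0)       # ONE sample Z ~ Pi_{theta+delta*u}
g = (1.0 / delta) * ((theta + delta * u) * z - theta * z) * u   # g_2pt-I, d = 1

est = g.mean()
print(est)   # ~= 1.0, while the true gradient is L'(1) = 2.0
assert abs(est - theta) < 0.05 and abs(est - 2 * theta) > 0.5
```

In this toy case the estimator concentrates around ${\bm \theta}$, not $2{\bm \theta}$: the missing half is exactly the contribution of the shifting distribution map, which the one-sample two-point scheme cannot see.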
A remedy to handle the above issues is to consider the estimator ${\bm g}_{\sf 2pt-II}$ which utilizes *two samples* $Z_1, Z_2$, each independently drawn at a different decision variable, to form the gradient estimate. In fact, it can be shown that $\mathbb{E}[ {\bm g}_{\sf 2pt-II} ] = {\nabla}{\cal L}_\delta( {\bm \theta})$, i.e., ${\bm g}_{\sf 2pt-II}$ is an unbiased gradient estimator. However, due to the decoupled random samples $Z_1, Z_2$, we have $$\begin{aligned} & {\textstyle \mathbb{E}\left\Vert {\bm g}_{\sf 2pt-II} \right\Vert^2} = \mathbb{E}\left[\left(\ell\left({\bm \theta}+\delta {\bm u}; Z_1 \right) - \ell({\bm \theta}; Z_1) + \ell({\bm \theta}; Z_1) - \ell({\bm \theta}; Z_2)\right)^2\right]\frac{d^2}{\delta^2}\\ &\overset{(a)}{\geq} \mathbb{E}\left[\frac{3}{4}\left(\ell({\bm \theta}; Z_1) - \ell({\bm \theta}; Z_2)\right)^{2} - 3\left(\ell\left({\bm \theta}+\delta {\bm u}; Z_1 \right) - \ell({\bm \theta}; Z_1)\right)^{2}\right]\frac{d^2}{\delta^2}\\ &=\frac{3}{2}{\sf Var}[\ell({\bm \theta};Z)]\frac{d^2}{\delta^2}-3\mathbb{E}\left[\left(\ell\left({\bm \theta}+\delta {\bm u}; Z_1 \right) - \ell({\bm \theta}; Z_1)\right)^{2}\right]\frac{d^2}{\delta^2} \overset{(b)}{\geq} \frac{3}{2}\frac{\sigma^2 d^2}{\delta^2}-3\mu^2 d^2 = \Omega(1 /\delta^2),\end{aligned}$$ where in (a) we use the fact that $(x+y)^2\geq\frac{3}{4}x^2-3 y^2$, and in (b) we assume ${\sf Var}[\ell({\bm \theta};Z)]\mathrel{\mathop:}=\mathbb{E}\left(\ell({\bm \theta};Z)-{\cal L}({\bm \theta})\right)^2\geq \sigma^2>0$ and that $\ell({\bm \theta};z)$ is $\mu$-Lipschitz in ${\bm \theta}$. As such, this two-point gradient estimator does not reduce the variance when compared with the estimator in [\[eq:grd\]](#eq:grd){reference-type="eqref" reference="eq:grd"}. Note that a two-sample estimator also incurs additional sampling overhead in the scenario of Markovian samples.
# Numerical Experiments {#sec:num} ![image](Fig/quartic.pdf){width=".3\\linewidth"} ![image](Fig/pricing.pdf){width=".3\\linewidth"} ![image](Fig/lr.pdf){width=".3\\linewidth"} [\[fig:simulation\]]{#fig:simulation label="fig:simulation"} We examine the efficacy of the $\texttt{DFO}\left(\lambda\right)$ algorithm on a few toy examples by comparing $\texttt{DFO}\left(\lambda\right)$ with a simple stochastic gradient descent scheme with greedy deployment. Unless otherwise specified, we use the step size choices in [\[eq:stepsize_thm\]](#eq:stepsize_thm){reference-type="eqref" reference="eq:stepsize_thm"} for $\texttt{DFO}\left(\lambda\right)$. All experiments are conducted on a server with an Intel Xeon 6318 CPU using Python 3.7. To measure performance, we record the gradient norm $\left\Vert {\nabla}{\cal L}({\bm \theta}) \right\Vert$ and estimate its expected value using at least $10$ trials. **1-Dimensional Case: Quartic Risk.** The first example considers a scalar loss function $\ell : \mathbb{R}\times \mathbb{R}\to \mathbb{R}$ defined by $\ell({\bm \theta}; z)=\frac{1}{12} z {\bm \theta}(3{\bm \theta}^2-8{\bm \theta}-48)$. To simulate the controlled Markov chain scenario, the samples are generated dynamically according to an auto-regressive (AR) process $Z_{t+1}=(1-\gamma) Z_t + \gamma \bar{Z}_{t+1}$, where $\bar{Z}_{t+1} \sim {\cal N}({\bm \theta}, \frac{(2-\gamma)}{\gamma}\sigma^2)$ and $\gamma\in (0,1)$ is a parameter. Note that the stationary distribution of the AR process is $\Pi_{{\bm \theta}}={\cal N}({\bm \theta}, \sigma^2)$. As such, the performative risk function in this case is ${\cal L}({\bm \theta})=\mathbb{E}_{Z\sim\Pi_{{\bm \theta}}}\left[\ell({\bm \theta};Z)\right]=\frac{{\bm \theta}^2}{12}(3{\bm \theta}^2-8{\bm \theta}-48)$, which is quartic in ${\bm \theta}$.
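As a quick sanity check of this setup (a simulation sketch, not one of the reported experiments), one can verify that for a fixed deployed decision the AR process indeed equilibrates to $\Pi_{{\bm \theta}} = {\cal N}({\bm \theta}, \sigma^2)$:

```python
import numpy as np

# Verify that Z_{t+1} = (1-gamma) Z_t + gamma*Zbar_{t+1},
# with Zbar_{t+1} ~ N(theta, (2-gamma)/gamma * sigma^2),
# has stationary distribution N(theta, sigma^2).
rng = np.random.default_rng(0)
gamma, theta, sigma2, T = 0.5, 4.0, 1.0, 200_000

z = np.empty(T)
z[0] = theta
for t in range(T - 1):
    zbar = rng.normal(theta, np.sqrt((2 - gamma) / gamma * sigma2))
    z[t + 1] = (1 - gamma) * z[t] + gamma * zbar

tail = z[1000:]                    # discard burn-in
print(tail.mean(), tail.var())     # ~= (4.0, 1.0)
assert abs(tail.mean() - theta) < 0.05 and abs(tail.var() - sigma2) < 0.05
```

The variance check works out analytically as well: the stationary variance solves $v = (1-\gamma)^2 v + \gamma^2 \cdot \frac{2-\gamma}{\gamma}\sigma^2$, giving $v = \sigma^2$.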
Note that ${\cal L}({\bm \theta})$ is not convex in ${\bm \theta}$ and the set of stationary solutions is $\{ {\bm \theta}: {\nabla}{\cal L}({\bm \theta}) = 0 \} = \{4, 0, -2\}$, among which the optimal solution is ${\bm \theta}_{PO} = \mathop{\mathrm{arg\,min}}_{ {\bm \theta}} {\cal L}( {\bm \theta}) = 4$. In our experiments below, all the algorithms are initialized by ${\bm \theta}_0= 6$. In Figure [\[fig:simulation\]](#fig:simulation){reference-type="ref" reference="fig:simulation"} (left), we compare the norms of the gradient of the performative risk for pure `DFO` (no burn-in), the `DFO`($\lambda$) algorithm, and stochastic gradient descent with greedy deployment (SGD-GD), against the number of samples observed by the algorithms. We first observe from Figure [\[fig:simulation\]](#fig:simulation){reference-type="ref" reference="fig:simulation"} (left) that the pure `DFO` and `SGD-GD` methods do not converge to a stationary point of ${\cal L}({\bm \theta})$ even after more samples are observed. On the other hand, $\texttt{DFO}\left(\lambda\right)$ converges to a stationary point of ${\cal L}( {\bm \theta})$ at the rate of $\| {\nabla}{\cal L}( {\bm \theta}) \|^2 = {\cal O}(1/ S^{0.32} )$, matching Theorem [\[thm1\]](#thm1){reference-type="ref" reference="thm1"}, which predicts a rate of ${\cal O}(1/S^{1/3})$ (up to a logarithmic factor), where $S$ is the total number of samples observed. Besides, we observe that with a large $\lambda = 0.75$, $\texttt{DFO}\left(\lambda\right)$ converges at a faster rate in the transient phase, but its convergence slows down in the steady phase (e.g., when the number of samples observed exceeds $10^6$) compared to running the same algorithm with a smaller $\lambda$.
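The stationary set of this example can be recovered numerically as a sanity check: taking expectations of $\ell$ with $\mathbb{E}[Z]={\bm \theta}$ gives ${\cal L}({\bm \theta})={\bm \theta}^2(3{\bm \theta}^2-8{\bm \theta}-48)/12$, whose gradient factors as ${\bm \theta}({\bm \theta}-4)({\bm \theta}+2)$.

```python
import numpy as np

# L(theta) = theta^2 (3*theta^2 - 8*theta - 48)/12, obtained by substituting
# E[Z] = theta in ell(theta; z) = z*theta*(3*theta^2 - 8*theta - 48)/12.
L = lambda th: th ** 2 * (3 * th ** 2 - 8 * th - 48) / 12.0

# critical points: L'(theta) = theta^3 - 2*theta^2 - 8*theta
crit = np.sort(np.roots([1.0, -2.0, -8.0, 0.0]).real)
print(crit)                    # -> approximately [-2, 0, 4]
theta_po = min(crit, key=L)    # the performative optimum
print(theta_po, L(theta_po))   # -> ~4, the global minimizer of L
assert np.allclose(crit, [-2.0, 0.0, 4.0])
```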
The decision variable ${\bm \theta}\in \mathbb{R}^5$ denotes the prices of $d=5$ goods and $\kappa$ is a drifting parameter for the prices. Our goal is to maximize the average revenue, i.e., minimize $\mathbb{E}_{Z\sim\Pi_{{\bm \theta}} }[\ell({\bm \theta};Z)]$ with $\ell({\bm \theta};z)=-\langle{\bm \theta}\,|\,z\rangle$, where $\Pi_{{\bm \theta}} \equiv {\cal N}( {\bm \mu}_0-\kappa{\bm \theta},\sigma^2 {\bm I})$ is the unique stationary distribution of the Markov process (i.e., an AR process) $$\textstyle Z_{t+1}=(1-\gamma) Z_{t} +\gamma \bar{Z}_{t+1}, \quad \bar{Z}_{t+1} \sim {\cal N}( {\bm \mu}_0-\kappa {\bm \theta}, \frac{2-\gamma}{\gamma}\sigma^2 {\bm I} ). \vspace{-.1cm}$$ Note that in this case, the performative optimal solution is ${\bm \theta}_{PO} = \mathop{\mathrm{arg\,min}}_{ {\bm \theta}} {\cal L}( {\bm \theta}) = {\bm{\mu}_0}/{(2\kappa)}$. We set $\gamma=0.5, \sigma=5$, drifting parameter $\kappa=0.5$, and initial mean of the non-shifted distribution ${\bm \mu}_0 = \left[-2,2,-2,2,-2\right]^\top$. All the algorithms are initialized by ${\bm \theta}_0=\left[2,-2,2,-2,2\right]^\top$. We simulate the convergence behavior of the different algorithms in Figure [\[fig:simulation\]](#fig:simulation){reference-type="ref" reference="fig:simulation"} (middle). Observe that the differences between the $\texttt{DFO}\left(\lambda\right)$ algorithms with different $\lambda$ become less significant than in Figure [\[fig:simulation\]](#fig:simulation){reference-type="ref" reference="fig:simulation"} (left). **Markovian Performative Regression.** The last example considers the linear regression problem in [@nagaraj2020least], which is a prototype problem for studying stochastic optimization with Markovian data (e.g., reinforcement learning). Unlike the previous examples, this problem involves a pair of correlated r.v.s that follows a decision-dependent joint distribution.
We adopt a setting similar to the regression example in [@izzo2021learn], where $(X,Y) \sim \Pi_{{\bm \theta}}$ with $X \sim {\cal N}(0, \sigma_1^2{\bm I}), Y|X \sim {\cal N} \left(\left\langle\beta({\bm \theta})\,|\,X\right\rangle, \sigma_2^2\right)$, $\beta({\bm \theta})= {\bm a}_0+a_1{\bm \theta}$. The loss function is $\ell({\bm \theta};x,y) = (\left\langle x\,|\,{\bm \theta}\right\rangle-y)^2+\frac{\mu}{2}\left\Vert {\bm \theta} \right\Vert^2$. In this case, the performative risk is: $$\begin{aligned} \textstyle {\cal L}({\bm \theta}) &=\mathbb{E}_{\Pi_{{\bm \theta}}}\left[\ell({\bm \theta};X,Y)\right]=(\sigma_1^2 a_1^2-2\sigma_1^2 a_1+\sigma_1^2+\frac{\mu}{2})\left\Vert {\bm \theta} \right\Vert^2 \\ &\quad - 2 \sigma_1^2 (1-a_1) {\bm \theta}^\top {\bm a_0}+\sigma_1^2\left\Vert {\bm a}_0 \right\Vert^2+\sigma_2^2, \vspace{-.1cm}\end{aligned}$$ For simplicity, we assume $\sigma_1^2(1-a_1)=\sigma_1^2 a_1^2-2\sigma_1^2 a_1+\sigma_1^2+\mu/2$, from which we can deduce ${\bm \theta}_{PO}={\bm a}_0$. In this experiment, we consider Markovian samples $(\Tilde{X}_{t},\Tilde{Y}_{t})_{t=1}^{T}$ drawn from an AR process: $$\begin{split} & (\Tilde{X}_{t},\Tilde{Y}_{t}) = (1-\gamma)(\Tilde{X}_{t-1},\Tilde{Y}_{t-1})+ \gamma (X_{t},Y_{t}), \\ & X_{t} \sim {\cal N}(0, {\textstyle \frac{2-\gamma}{\gamma}} \sigma_1^2 \mathit{I}),~Y_{t}|X_t \sim {\cal N}(\left\langle X_t\,|\, \beta({\bm \theta}_{t-1} ) \right\rangle, {\textstyle \frac{2-\gamma}{\gamma}} \sigma_2^2), \end{split}\vspace{-.1cm}$$ for any $t \geq 1$. We set $d=5$, $a_0=[-1,1,-1,1,-1]^\top$, $a_1=0.5, \sigma_1^2=\sigma_2^2=1$, regularization parameter $\mu=0.5$, mixing parameter $\gamma=0.1$. The algorithms are initialized with ${\bm \theta}_0=[1,-1,1,-1,1]^\top$. Figure [\[fig:simulation\]](#fig:simulation){reference-type="ref" reference="fig:simulation"} (right) shows the result of the simulation. 
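For both of the closed-form risks above, the claimed performative optima can be verified directly; a minimal sketch using the experiments' parameter values:

```python
import numpy as np

# Pricing: L(theta) = E_{Z~N(mu0-kappa*theta, .)}[-<theta, Z>]
#        = kappa*||theta||^2 - <theta, mu0>, minimized at mu0/(2*kappa).
kappa = 0.5
mu0 = np.array([-2.0, 2.0, -2.0, 2.0, -2.0])
theta_po_pricing = mu0 / (2 * kappa)
grad_pricing = 2 * kappa * theta_po_pricing - mu0   # gradient of L at theta_PO
assert np.linalg.norm(grad_pricing) == 0.0

# Regression: with s1 = sigma1^2 = 1, a1 = 0.5, mu = 0.5, the quadratic
# coefficient C = s1*(1-a1)^2 + mu/2 equals s1*(1-a1), so theta_PO = a0.
s1, a1, mu = 1.0, 0.5, 0.5
a0 = np.array([-1.0, 1.0, -1.0, 1.0, -1.0])
C = s1 * (1 - a1) ** 2 + mu / 2
assert C == s1 * (1 - a1)                            # the balance condition
grad_regression = 2 * C * a0 - 2 * s1 * (1 - a1) * a0   # gradient of L at a0
assert np.linalg.norm(grad_regression) == 0.0
print("theta_PO (pricing):", theta_po_pricing, "theta_PO (regression):", a0)
```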
Similar to the previous examples, we observe that the pure `DFO` and `SGD-GD` methods fail to find a stationary solution to ${\cal L}({\bm \theta})$. Meanwhile, $\texttt{DFO}\left(\lambda\right)$ converges to a stationary solution after a reasonable number of samples are observed. **Conclusions.** We have described a derivative-free optimization approach for finding a stationary point of the performative risk function. In particular, we consider a non-i.i.d. data setting with samples generated from a controlled Markov chain and propose a two-timescale step-size approach for constructing the gradient estimator. The proposed $\texttt{DFO}\left(\lambda\right)$ algorithm is shown to converge to a stationary point of the performative risk function at the rate of ${\cal O}(1/T^{1/3})$. # Comparison to Related Works {#ass:lit} This section provides a detailed comparison to related works on performative prediction under the stateful agent setting. This setting is relevant as the influence of the updated ${\bm \theta}$ on the agent may not be manifested immediately due to the unforgetful nature of the agent. Recent works can be grouped into two categories in terms of the sought solution to [\[perf\]](#perf){reference-type="eqref" reference="perf"}: (i) finding the *performative stable* solution satisfying ${\bm \theta}_{PS} = \mathop{\mathrm{arg\,min}}_{ {\bm \theta}\in \mathbb{R}^d } \mathbb{E}_{ Z \sim \Pi_{{\bm \theta}_{PS}} } [ \ell( {\bm \theta}; Z ) ]$, (ii) finding or approximating the *performative optimal* solution that tackles [\[perf\]](#perf){reference-type="eqref" reference="perf"} directly. For seeking the *performative stable* solution, [@brown2022performative] is the first to study population-based algorithms where the stateful agent updates the state-dependent distribution iteratively towards $\Pi_{{\bm \theta}}$.
The authors proved that, in a special case where the $k$ groups forming the mixture distribution for $\Pi_{{\bm \theta}}$ respond slowly, classical retraining algorithms converge to the performative stable solution. The follow-up works [@li2022state; @roy2022projection] focus on more sophisticated stateful agents and the reliance on past experiences of agents via controlled Markov chains. In [@li2022state], the authors developed a gradient-type state-dependent stochastic approximation algorithm to reach the performative stable solution. In [@roy2022projection], the authors proposed a stochastic conditional gradient-type algorithm with state-dependent Markovian data to tackle the constrained nonconvex performative prediction problem. The search for the (approximate) *performative optimal* solution is challenging due to the non-convexity of [\[perf\]](#perf){reference-type="eqref" reference="perf"}. @izzo2022learn assumes that the transient distribution is parameterized by a low-dimensional vector and that the distribution converges to $\Pi_{{\bm \theta}}$ geometrically. Under these settings, the authors proposed to learn the distribution as a linear model to form an unbiased estimate of ${\nabla}{\cal L}({\bm \theta})$. The resultant algorithm follows a two-phase update approach: it first estimates the gradient correction term \[cf. second term in [\[eq:all-gradient\]](#eq:all-gradient){reference-type="eqref" reference="eq:all-gradient"}\], followed by stochastic gradient update steps. Such an approach has two main drawbacks: (i) estimating the gradient correction term requires strong prior assumptions on the distribution map (see, e.g., Assumptions 2 and 3 in [@izzo2022learn]), which limits its applicability; (ii) the estimation phase gathers a substantial amount of potentially sensitive information from data reaction patterns, which may incur privacy concerns.
Furthermore, it is noted that such a procedure has a convergence rate of ${\cal O}(T^{-1/5})$ to a stationary solution of ${\cal L}({\bm \theta})$, which is outperformed by the DFO approach in the current paper. As mentioned in the main paper, adopting the DFO setting avoids the need to estimate the gradient correction term, which may necessitate additional assumptions on $\Pi_{{\bm \theta}}$ as seen in [@izzo2022learn]. To this end, one of the first works to address performative optimal points with a DFO method is [@ray2022decision] in the stateful agent setting. Notably, the analysis in [@ray2022decision] relies on (i) a mixture dominance assumption on $\Pi_{{\bm \theta}}$, and (ii) a geometric decay environment assumption on the stateful agent. In addition to relaxing the mixture dominance assumption, we remark that Assumption [\[assu:FastMixing\]](#assu:FastMixing){reference-type="ref" reference="assu:FastMixing"} is a relaxation of the geometric decay environment condition in [@ray2022decision]. For example, our setting covers general MDP models and the controlled AR(1) model; see [@li2022state Appendix A.1]. # Proof of Lemma [\[lem:decomposition\]](#lem:decomposition){reference-type="ref" reference="lem:decomposition"} {#ap:decompose} *Proof.* Throughout this section, we let $\check{{\bm \theta}}_{k}\mathrel{\mathop:}={\bm \theta}_{k}+\delta_{k}u_{k}, g_{k}({\bm \theta};u,z)\mathrel{\mathop:}=g_{\delta_{k}}({\bm \theta};u,z)$ and ${\cal L}_{k}({\bm \theta})\mathrel{\mathop:}={\cal L}_{\delta_{k}}({\bm \theta})$ for simplicity. We begin our analysis from Assumption [\[assu:Lip\]](#assu:Lip){reference-type="ref" reference="assu:Lip"} and the observation that ${\bm \theta}_{k+1}-{\bm \theta}_{k} = -\eta_{k} \sum_{m=1}^{\tau_{k}}\lambda^{\tau_{k}-m}g_{k}^{(m)}$.
Recalling that $g_{k}^{(m)}=\frac{d}{\delta_{k}}\ell\left(\check{{\bm \theta}}_{k}^{(m)}; Z_{k}^{(m)} \right)u_{k}$ and $\check{{\bm \theta}}_{k}^{(m)} = {\bm \theta}_{k}^{(m)} + \delta_{k} u_{k}$, we have $${\cal L}({\bm \theta}_{k+1}) - {\cal L}({\bm \theta}_{k}) + \eta_{k}\left\langle{\nabla}{\cal L}({\bm \theta}_{k})\,|\, \sum_{m=1}^{\tau_{k}}\lambda^{\tau_{k}-m}g_{k}^{(m)}\right\rangle \leq \frac{L}{2}\eta_{k}^2\left\Vert \sum_{m=1}^{\tau_{k}}\lambda^{\tau_{k}-m}g_{k}^{(m)} \right\Vert^2.$$ Rearranging terms and adding $\frac{\eta_{k}}{1-\lambda}\left\Vert {\nabla}{\cal L}({\bm \theta}_k) \right\Vert^2$ to both sides leads to $$\begin{aligned} \frac{\eta_{k}}{1-\lambda}\left\Vert {\nabla}{\cal L}({\bm \theta}_k) \right\Vert^2 &\leq {\cal L}({\bm \theta}_{k}) - {\cal L}({\bm \theta}_{k+1}) -\frac{\eta_{k}}{1-\lambda} \left\langle{\nabla}{\cal L}({\bm \theta}_{k})\,|\,(1-\lambda)\sum_{m=1}^{\tau_{k}}\lambda^{\tau_{k}-m}g_{k}^{(m)}-{\nabla}{\cal L}({\bm \theta}_k)\right\rangle \\ &\quad + \frac{L}{2}\eta_{k}^2\left\Vert \sum_{m=1}^{\tau_{k}}\lambda^{\tau_{k}-m}g_{k}^{(m)} \right\Vert^2\end{aligned}$$ Let ${\cal F}^k = \sigma( {\bm \theta}_0, Z_{s}^{(m)}, u_s, 0 \leq s \leq k, 0 \leq m \leq \tau_{k})$ be the filtration of random variables.
Taking expectation conditioned on ${\cal F}^{k-1}$ gives $$\begin{aligned} \frac{\eta_{k}}{1-\lambda}\left\Vert {\nabla}{\cal L}({\bm \theta}_k) \right\Vert^2 \leq &\mathbb{E}_{{\cal F}^{k-1}}\left[{\cal L}({\bm \theta}_{k}) - {\cal L}({\bm \theta}_{k+1}) \right] \\ &-\frac{\eta_{k}}{1-\lambda} \left\langle{\nabla}{\cal L}({\bm \theta}_k)\,|\,(1-\lambda)\sum_{m=1}^{\tau_{k}}\lambda^{\tau_{k}-m} \mathbb{E}_{{\cal F}^{k-1}}\left[g_{k}^{(m)}\right] -{\nabla}{\cal L}({\bm \theta}_k)\right\rangle \\ &+ \frac{L}{2}\eta_{k}^2\mathbb{E}_{{\cal F}^{k-1}}\left\Vert \sum_{m=1}^{\tau_{k}}\lambda^{\tau_{k}-m}g_{k}^{(m)} \right\Vert^2,\end{aligned}$$ By adding and subtracting, we obtain $$\begin{aligned} &\frac{\eta_{k}}{1-\lambda}\left\Vert {\nabla}{\cal L}({\bm \theta}_k) \right\Vert^2 \leq \mathbb{E}_{{\cal F}^{k-1}}\left[{\cal L}({\bm \theta}_{k}) - {\cal L}({\bm \theta}_{k+1}) \right] \\ &-\frac{\eta_{k}}{1-\lambda} \left\langle{\nabla}{\cal L}({\bm \theta}_k)\,|\,(1-\lambda)\sum_{m=1}^{\tau_{k}}\lambda^{\tau_{k}-m}\left(\mathbb{E}_{{\cal F}^{k-1}}\left[g_{k}^{(m)}\right]-\mathbb{E}_{Z\sim\Pi_{\check{{\bm \theta}}_k}, {\cal F}^{k-1}}\left[g_{k}({\bm \theta}_{k};u_k, Z)\right]\right)\right\rangle \\ &-\frac{\eta_{k}}{1-\lambda} \left\langle{\nabla}{\cal L}({\bm \theta}_k)\,|\,(1-\lambda)\sum_{m=1}^{\tau_{k}}\lambda^{\tau_{k}-m}\mathbb{E}_{Z\sim\Pi_{\check{{\bm \theta}}_k},{\cal F}^{k-1}}\left[g_{k}({\bm \theta}_{k};u_k, Z)\right]-{\nabla}{\cal L}({\bm \theta}_k)\right\rangle \\ &+ \frac{L}{2}\eta_{k}^2\mathbb{E}_{{\cal F}^{k-1}}\left\Vert \sum_{m=1}^{\tau_{k}}\lambda^{\tau_{k}-m}g_{k}^{(m)} \right\Vert^2\end{aligned}$$ By Lemma [\[lem:BoundedBias\]](#lem:BoundedBias){reference-type="ref" reference="lem:BoundedBias"}, the conditional expectation evaluates to $\mathbb{E}_{Z\sim\Pi_{\check{{\bm \theta}}_k}}\left[g_{k}({\bm \theta}_{k};u_k, Z)\right]={\nabla}{\cal L}_{k}({\bm \theta}_{k})$. 
Dividing both sides by $\frac{\eta_{k}}{1-\lambda}$ gives $$\begin{aligned} \left\Vert {\nabla}{\cal L}({\bm \theta}_k) \right\Vert^2 \leq &\frac{1-\lambda}{\eta_{k}} \mathbb{E}_{{\cal F}^{k-1}}\left({\cal L}({\bm \theta}_{k}) - {\cal L}({\bm \theta}_{k+1}) \right) \\ &-\left\langle{\nabla}{\cal L}({\bm \theta}_k)\,|\,(1-\lambda)\sum_{m=1}^{\tau_{k}}\lambda^{\tau_{k}-m}\left(\mathbb{E}_{{\cal F}^{k-1}}\left[g_{k}^{(m)}\right]-\mathbb{E}_{{\cal F}^{k-1}}\mathbb{E}_{Z\sim\Pi_{\check{{\bm \theta}}_k}}\left[g_{k}({\bm \theta}_{k};u_k,Z)|u_k\right]\right)\right\rangle \\ &-\left\langle{\nabla}{\cal L}({\bm \theta}_k)\,|\,(1-\lambda)\left(\sum_{m=1}^{\tau_{k}}\lambda^{\tau_{k}-m}{\nabla}{\cal L}_{k}({\bm \theta}_{k})\right)-{\nabla}{\cal L}({\bm \theta}_k)\right\rangle \\ &+ \frac{L(1-\lambda)}{2}\eta_{k}\mathbb{E}_{{\cal F}^{k-1}}\left\Vert \sum_{m=1}^{\tau_{k}}\lambda^{\tau_{k}-m}g_{k}^{(m)} \right\Vert^2\end{aligned}$$ Summing over $k$ from 0 to $t$ and taking total expectation, we obtain $$\begin{aligned} &\sum_{k=0}^{t}\mathbb{E}\left\Vert {\nabla}{\cal L}({\bm \theta}_k) \right\Vert^2 \\ &\leq\sum_{k=0}^{t}\frac{1-\lambda}{\eta_{k}}\mathbb{E}\left[{\cal L}({\bm \theta}_k)-{\cal L}({\bm \theta}_{k+1})\right] \\ &\quad -\sum_{k=0}^{t}\mathbb{E}\left\langle{\nabla}{\cal L}({\bm \theta}_k)\,|\,(1-\lambda)\sum_{m=1}^{\tau_{k}}\lambda^{\tau_{k}-m}\left(\mathbb{E}_{{\cal F}^{k-1}}\left[ g_{k}^{(m)}\right]-\mathbb{E}_{{\cal F}^{k-1}}\mathbb{E}_{Z\sim\Pi_{\check{{\bm \theta}}_k}}\left[g_{k}({\bm \theta}_{k};u_k,Z)|u_k\right]\right)\right\rangle \\ &\quad -\sum_{k=0}^{t}\mathbb{E}\left\langle{\nabla}{\cal L}({\bm \theta}_k)\,|\,(1-\lambda)\left(\sum_{m=1}^{\tau_{k}}\lambda^{\tau_{k}-m}{\nabla}{{\cal L}_{k}({\bm \theta}_k)}\right)-{\nabla}{\cal L}({\bm \theta}_k)\right\rangle \\ &\quad +\frac{L(1-\lambda)}{2}\sum_{k=0}^{t}\eta_{k}\mathbb{E}\left\Vert \sum_{m=1}^{\tau_{k}}\lambda^{\tau_{k}-m}g_{k}^{(m)} \right\Vert^2 \mathrel{\mathop:}={\bf I}_{1}(t) + {\bf I}_{2}(t) + {\bf I}_{3}(t) + {\bf
I}_{4}(t)\end{aligned}$$ ◻ # Proof of Lemma [Lemma 2](#lem:bound_four_terms){reference-type="ref" reference="lem:bound_four_terms"} {#App_bound_four_terms} **Lemma 4**. *Under Assumption [\[assu:BoundLoss\]](#assu:BoundLoss){reference-type="ref" reference="assu:BoundLoss"} and step size $\eta_{t}=\eta_{0}(1+t)^{-\alpha}$, it holds that $${\bf I}_{1}(t)\leq c_1 (1-\lambda)(1+t)^{\alpha}$$* where the constant $c_1=\frac{2 { G}}{\eta_0}$. *Proof.* We observe the following chain $$\begin{aligned} {\bf I}_{1}(t) &= \sum_{k=0}^{t}\frac{1-\lambda}{\eta_{k}}\left(\mathbb{E}\left[{\cal L}({\bm \theta}_k)\right]-\mathbb{E}\left[{\cal L}({\bm \theta}_{k+1})\right]\right) \\ &= (1-\lambda)\sum_{k=0}^{t} \left(\mathbb{E}[{\cal L}({\bm \theta}_{k})]/\eta_k-\mathbb{E}[{\cal L}({\bm \theta}_{k+1})]/\eta_{k+1}+\mathbb{E}[{\cal L}({\bm \theta}_{k+1})]/\eta_{k+1}-\mathbb{E}[{\cal L}({\bm \theta}_{k+1})]/\eta_{k}\right) \\ &\overset{(a)}{=}(1-\lambda)\left[\mathbb{E}[{\cal L}({\bm \theta}_0)]/\eta_{0}-\mathbb{E}[{\cal L}({\bm \theta}_{t+1})]/\eta_{t+1}+\sum_{k=0}^{t}\left(\frac{1}{\eta_{k+1}}-\frac{1}{\eta_{k}}\right)\mathbb{E}[{\cal L}({\bm \theta}_{k+1})]\right] \\ &\leq (1-\lambda)\max_{k} \left|\mathbb{E}[{\cal L}({\bm \theta}_{k})]\right| \left(\frac{1}{\eta_{0}}+\frac{1}{\eta_{t+1}}+\frac{1}{\eta_{t+1}}-\frac{1}{\eta_{0}}\right)\end{aligned}$$ where equality (a) follows from telescoping the first two terms of each summand, and the last inequality uses that the step size $\eta_k>0$ is decreasing, so that $\frac{1}{\eta_{k+1}}-\frac{1}{\eta_{k}}\geq 0$. Applying Assumption [\[assu:BoundLoss\]](#assu:BoundLoss){reference-type="ref" reference="assu:BoundLoss"} to the last inequality leads to $$\begin{aligned} {\bf I}_{1}(t) &\leq (1-\lambda ) { G}\frac{2}{\eta_{t+1}} \\ &\leq c_{1} (1-\lambda) (1+t)^{\alpha}\end{aligned}$$ where the constant $c_1=\frac{2 { G}}{\eta_0}$. ◻ **Lemma 5**.
*Under Assumptions [\[assu:Lip\]](#assu:Lip){reference-type="ref" reference="assu:Lip"}, [\[assu:BoundLoss\]](#assu:BoundLoss){reference-type="ref" reference="assu:BoundLoss"}, [\[assu:smooth_dist\]](#assu:smooth_dist){reference-type="ref" reference="assu:smooth_dist"}, [\[assu:FastMixing\]](#assu:FastMixing){reference-type="ref" reference="assu:FastMixing"}, [\[assu:smooth_kernel\]](#assu:smooth_kernel){reference-type="ref" reference="assu:smooth_kernel"}, the constraint $0<2\alpha-4\beta<1$, and $\tau_{k}\geq\frac{1}{\log 1/\max\{\rho,\lambda\}}\log(1+k)$ for all $k\geq 0$, there exist universal constants $t_1,t_2>0$ such that $${\bf I}_{2}(t) \leq c_2 \frac{d^2}{(1-\lambda)^2}{{\cal A}(t)}^{1/2}(1+t)^{1-(\alpha-2\beta)} \quad \forall t\geq \max\{t_1,t_2\}$$ where ${\cal A}(t) \mathrel{\mathop:}=\frac{1}{1+t} \sum_{k=0}^{t} \mathbb{E}\left\Vert {\nabla}{\cal L}({\bm \theta}_k) \right\Vert^2$ and $c_2 \mathrel{\mathop:}=\frac{\eta_0}{\delta_0^2}\frac{6\cdot (L_1{ G}^2+L_2{ G}^2+\sqrt{L}{ G}^{3/2} )}{\sqrt{1-2\alpha+4\beta}}$ is a constant.* *Proof.* Fix $k>0$ and recall $\check{{\bm \theta}}_{k}\mathrel{\mathop:}={\bm \theta}_{k}+\delta_{k}u_{k}, \check{{\bm \theta}}_{k}^{(\ell)}\mathrel{\mathop:}={\bm \theta}_{k}^{(\ell)}+\delta_{k}u_{k}$; then consider the following pair of Markov chains: $$\begin{aligned} Z_{k} &=Z_{k}^{(0)}\xrightarrow[]{\check{{\bm \theta}}_{k}^{(1)}}Z_{k}^{(1)}\xrightarrow[]{\check{{\bm \theta}}_{k}^{(2)}}Z_{k}^{(2)}\xrightarrow[]{\check{{\bm \theta}}_{k}^{(3)}}Z_{k}^{(3)}\cdots\xrightarrow[]{\check{{\bm \theta}}_{k}^{(\tau_{k})}}Z_{k}^{(\tau_{k})}=Z_{k+1} \label{chain1} \\ Z_k &=\Tilde{Z}_{k}^{(0)}\xrightarrow[]{\check{{\bm \theta}}_{k}}\Tilde{Z}_{k}^{(1)}\xrightarrow[]{\check{{\bm \theta}}_{k}}\Tilde{Z}_{k}^{(2)}\xrightarrow[]{\check{{\bm \theta}}_{k}}\Tilde{Z}_{k}^{(3)}\cdots\xrightarrow[]{\check{{\bm \theta}}_{k}}\Tilde{Z}_{k}^{(\tau_{k})} \label{chain2}\end{aligned}$$ where the arrow associated with ${\bm \theta}$ represents the
transition kernel $\mathbb{T}_{{\bm \theta}}(\cdot,\cdot)$. Note that Chain [\[chain1\]](#chain1){reference-type="ref" reference="chain1"} is the trajectory of the DFO($\lambda$) algorithm at iteration $k$, while Chain [\[chain2\]](#chain2){reference-type="ref" reference="chain2"} describes the trajectory of the same length generated by a reference Markov chain with fixed transition kernel $\mathbb{T}_{\check{{\bm \theta}}_{k}}(\cdot,\cdot)$. Since $Z_{k}=Z_{k}^{(0)}=\Tilde{Z}_{k}^{(0)}$, we shall use them interchangeably. Define $\Delta_{k, m}\mathrel{\mathop:}=\mathbb{E}_{{\cal F}^{k-1}}\left[g_{k}^{(m)}-\mathbb{E}_{Z\sim \Pi_{\check{{\bm \theta}}_k}}\left[g_{k} ({\bm \theta}_{k};u_{k}, Z)\right]\right]$; then ${\bf I}_{2}(t)$ can be rewritten as $$\begin{aligned} {\bf I}_{2}(t) &= -(1-\lambda)\mathbb{E}\sum_{k=0}^{t}\left\langle{\nabla}{\cal L}({\bm \theta}_k)\,|\,\sum_{m=1}^{\tau_{k}}\lambda^{\tau_{k}-m}\Delta_{k, m}\right\rangle \\ &\leq (1-\lambda) \mathbb{E}\sum_{k=0}^{t} \left\Vert {\nabla}{\cal L}({\bm \theta}_k) \right\Vert \cdot \left\Vert \sum_{m=1}^{\tau_{k}}\lambda^{\tau_{k} -m} \Delta_{k, m} \right\Vert\end{aligned}$$ Next, observe that each $\Delta_{k, m}$ can be decomposed into three bias terms as follows $$\begin{aligned} \Delta_{k, m} &= \mathbb{E}_{{\cal F}^{k-1}} \left[ \frac{d}{\delta_{k}} \left(\mathbb{E}[\ell(\check{{\bm \theta}}_{k}^{(m)}; Z_{k}^{(m)})|\check{{\bm \theta}}_{k}^{(m)}, Z_{k}^{(0)}] - \mathbb{E}_{Z\sim \Pi_{\check{{\bm \theta}}_k}}[\ell(\check{{\bm \theta}}_k; Z)|\check{{\bm \theta}}_{k}] \right) u_{k}\right] \\ &= \mathbb{E}_{{\cal F}^{k-1}}\left[\frac{d}{\delta_{k}} \left(\mathbb{E}[\ell(\check{{\bm \theta}}_{k}^{(m)}; Z_{k}^{(m)})|\check{{\bm \theta}}_{k}^{(m)}, Z_{k}^{(0)}] - \mathbb{E}_{\Tilde{Z}_{k}^{(m)}}[\ell(\check{{\bm \theta}}_{k}^{(m)}; \Tilde{Z}_{k}^{(m)})|\check{{\bm \theta}}_{k}^{(m)}, \Tilde{Z}_{k}^{(0)}] \right) u_{k}\right] \\ &+ \mathbb{E}_{{\cal F}^{k-1}} \left[ \frac{d}{\delta_{k}}
\left(\mathbb{E}_{\Tilde{Z}_{k}^{(m)}}[\ell(\check{{\bm \theta}}_{k}^{(m)}; \Tilde{Z}_{k}^{(m)})|\check{{\bm \theta}}_{k}^{(m)}, \Tilde{Z}_{k}^{(0)}] - \mathbb{E}_{Z\sim \Pi_{\check{{\bm \theta}}_k}}[\ell(\check{{\bm \theta}}_{k}^{(m)}; Z)|\check{{\bm \theta}}_{k}^{(m)}] \right) u_{k} \right] \\ &+ \mathbb{E}_{{\cal F}^{k-1}} \frac{d}{\delta_{k}} \underbrace{ \mathbb{E}_{Z\sim\Pi_{\check{{\bm \theta}}_{k}}} \left[ \ell(\check{{\bm \theta}}_{k}^{(m)}; Z) - \ell(\check{{\bm \theta}}_k; Z)|\check{{\bm \theta}}_{k}^{(m)}, \check{{\bm \theta}}_{k}\right] }_{\leq c_8 \left\Vert \check{{\bm \theta}}_{k}^{(m)}-\check{{\bm \theta}}_{k} \right\Vert+\frac{L}{2}\left\Vert \check{{\bm \theta}}_{k}^{(m)}-\check{{\bm \theta}}_{k} \right\Vert^2} u_{k} \end{aligned}$$ where we use Lemma [\[lem:lip_decoupled_risk\]](#lem:lip_decoupled_risk){reference-type="ref" reference="lem:lip_decoupled_risk"} in the last inequality and $c_8\mathrel{\mathop:}=2\left(\sqrt{L{ G}} + { G}L_1\right)$. Here we bound these three parts separately. 
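The bounds that follow repeatedly reduce weighted sums of the form $\sum_{m=1}^{\tau_{k}}\lambda^{c(\tau_{k}-m)}$ to $\frac{1}{1-\lambda^{c}}\leq\frac{1}{1-\lambda}$ for $c\in\{1,2,3\}$. A minimal numerical check of this elementary step (the particular values of $\lambda$, $\tau$, $c$ below are illustrative only):

```python
def geometric_tail(lam, tau, c):
    """sum_{m=1}^{tau} lam^{c*(tau-m)} = sum_{j=0}^{tau-1} (lam^c)^j."""
    return sum(lam ** (c * (tau - m)) for m in range(1, tau + 1))

def tail_bounds(lam, c):
    """The two bounds used in the proofs: 1/(1 - lam^c) <= 1/(1 - lam) for c >= 1."""
    return 1.0 / (1.0 - lam ** c), 1.0 / (1.0 - lam)
```

Since the finite geometric sum equals $(1-\lambda^{c\tau})/(1-\lambda^{c})$, both bounds hold for every $\tau$.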
For the first term, it holds that $$\begin{aligned} &\left| \mathbb{E}[\ell(\check{{\bm \theta}}_{k}^{(m)}; Z_{k}^{(m)})|\check{{\bm \theta}}_{k}^{(m)}, Z_{k}^{(0)}] - \mathbb{E}_{\Tilde{Z}_{k}^{(m)}}[\ell(\check{{\bm \theta}}_{k}^{(m)}; \Tilde{Z}_{k}^{(m)})|\check{{\bm \theta}}_{k}^{(m)}, \Tilde{Z}_{k}^{(0)}] \right| \\ =&\left| \int_{\sf Z}\ell(\check{{\bm \theta}}_{k}^{(m)}; z)\mathbb{P}(Z_{k}^{(m)}=z|Z_{k}^{(0)})-\ell(\check{{\bm \theta}}_{k}^{(m)}; z)\mathbb{P}(\Tilde{Z}_{k}^{(m)}=z|\Tilde{Z}_{k}^{(0)})dz \right| \\ \leq &{ G}\int_{\sf Z}\left|\mathbb{P}(Z_{k}^{(m)}=z|Z_{k}^{(0)})- \mathbb{P}(\Tilde{Z}_{k}^{(m)}=z|\Tilde{Z}_{k}^{(0)})\right| dz \\ = &2 { G}\boldsymbol{\delta}_{\text{TV}}\left(\mathbb{P}(Z_{k}^{(m)}\in\cdot|Z_{k}^{(0)}), \mathbb{P}(\Tilde{Z}_{k}^{(m)}\in\cdot|\Tilde{Z}_{k}^{(0)}) \right) \\ \leq & 2 { G}L_2 \sum_{\ell=1}^{m-1} \left\Vert \check{{\bm \theta}}_{k}^{(\ell)} -\check{{\bm \theta}}_k \right\Vert = 2 { G}L_2 \sum_{\ell=1}^{m-1} \left\Vert {\bm \theta}_{k}^{(\ell)} -{\bm \theta}_k \right\Vert\end{aligned}$$ where the first inequality is due to Assumption [\[assu:BoundLoss\]](#assu:BoundLoss){reference-type="ref" reference="assu:BoundLoss"} and the second inequality is due to Lemma [Lemma 12](#lem:tv_summation_bound){reference-type="ref" reference="lem:tv_summation_bound"}.
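The passage from the integral to the total variation distance uses only the boundedness $|\ell|\leq{ G}$: for any $|f|\leq G$ and distributions $p,q$, $|\mathbb{E}_{p}[f]-\mathbb{E}_{q}[f]|\leq G\int|p-q| = 2G\,\boldsymbol{\delta}_{\text{TV}}(p,q)$. A randomized numerical sketch of this elementary inequality on discrete distributions (the Dirichlet sampling and $G=1$ are illustrative choices, not part of the analysis):

```python
import numpy as np

rng = np.random.default_rng(0)

def tv(p, q):
    # total variation distance between two probability vectors
    return 0.5 * np.abs(p - q).sum()

def check_tv_bound(n=20, trials=200):
    """Check |E_p[f] - E_q[f]| <= 2*G*TV(p, q) for random f with |f| <= G = 1."""
    for _ in range(trials):
        p = rng.dirichlet(np.ones(n))
        q = rng.dirichlet(np.ones(n))
        f = rng.uniform(-1.0, 1.0, size=n)  # bounded test function, G = 1
        if abs(p @ f - q @ f) > 2.0 * tv(p, q) + 1e-12:
            return False
    return True
```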
For the second term, we have $$\begin{aligned} &\left|\mathbb{E}_{\Tilde{Z}_{k}^{(m)}}[\ell(\check{{\bm \theta}}_{k}^{(m)}; \Tilde{Z}_{k}^{(m)})] - \mathbb{E}_{Z\sim \Pi_{\check{{\bm \theta}}_k}}[\ell(\check{{\bm \theta}}_{k}^{(m)}; Z)]\right| \\ =&\left|\int_{\sf Z}\ell(\check{{\bm \theta}}_{k}^{(m)}; z)\mathbb{P}(\Tilde{Z}_{k}^{(m)}=z|\Tilde{Z}_{k}^{(0)})-\ell(\check{{\bm \theta}}_{k}^{(m)}; z)\Pi_{\check{{\bm \theta}}_k}(z) d z\right| \\ \overset{(a)}{\leq} &{ G}\int_{\sf Z}|\mathbb{P}(\Tilde{Z}_{k}^{(m)}=z|\Tilde{Z}_{k}^{(0)})- \Pi_{\check{{\bm \theta}}_k}(z)| dz \\ = & 2 { G}\boldsymbol{\delta}_{\text{TV}}\left(\mathbb{P}(\Tilde{Z}_{k}^{(m)}\in\cdot|\Tilde{Z}_{k}^{(0)}), \Pi_{\check{{\bm \theta}}_k} \right) \\ \overset{(b)}{\leq} &2{ G}M\rho^{m}\end{aligned}$$ where we use Assumption [\[assu:BoundLoss\]](#assu:BoundLoss){reference-type="ref" reference="assu:BoundLoss"} in inequality (a) and Assumption [\[assu:FastMixing\]](#assu:FastMixing){reference-type="ref" reference="assu:FastMixing"} in inequality (b).
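Inequality (b) is the geometric mixing bound for the frozen chain. On a finite state space the same behavior can be illustrated with $M=1$ and $\rho$ taken as the Dobrushin contraction coefficient of the transition matrix; the specific $3\times 3$ matrix used in the test is a toy example, not part of the analysis:

```python
import numpy as np

def tv(p, q):
    # total variation distance between two probability vectors
    return 0.5 * float(np.abs(np.asarray(p, dtype=float) - np.asarray(q, dtype=float)).sum())

def dobrushin(P):
    # Dobrushin contraction coefficient: max_{i,j} TV(P[i, :], P[j, :])
    n = len(P)
    return max(tv(P[i], P[j]) for i in range(n) for j in range(n))

def stationary(P):
    # stationary distribution: left eigenvector of P for eigenvalue 1
    P = np.asarray(P, dtype=float)
    vals, vecs = np.linalg.eig(P.T)
    v = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    return v / v.sum()

def check_geometric_mixing(P, m_max=25):
    # verify TV(delta_z P^m, pi) <= rho^m for every start state z (i.e. M = 1)
    P = np.asarray(P, dtype=float)
    pi, rho = stationary(P), dobrushin(P)
    for z in range(len(P)):
        for m in range(1, m_max + 1):
            row = np.linalg.matrix_power(P, m)[z]  # law of Z^{(m)} started at z
            if tv(row, pi) > rho ** m + 1e-10:
                return False
    return True
```

The check relies on the standard contraction property $\boldsymbol{\delta}_{\text{TV}}(\mu P,\nu P)\leq\rho\,\boldsymbol{\delta}_{\text{TV}}(\mu,\nu)$ of the Dobrushin coefficient.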
Combining three upper bounds, we obtain that $$\begin{aligned} \left\Vert \Delta_{k,m} \right\Vert&\leq \mathbb{E}_{{\cal F}^{k-1}}\frac{d}{\delta_{k}}\left(2 { G}L_2 \sum_{\ell=1}^{m-1}\left[ \left\Vert {\bm \theta}_{k}^{(\ell)} -{\bm \theta}_k \right\Vert\right] + 2{ G}M\rho^{m} + c_8 \left\Vert \check{{\bm \theta}}_{k}^{(m)}-\check{{\bm \theta}}_k \right\Vert+\frac{L}{2} \left\Vert \check{{\bm \theta}}_{k}^{(m)}-\check{{\bm \theta}}_k \right\Vert^2\right) \\ &\leq\frac{d}{\delta_{k}}\left(2 L_2 { G}\sum_{\ell=1}^{m-1} \sum_{j=1}^{\ell-1}\eta_{k}\lambda^{\tau_{k}-j}\frac{d { G}}{\delta_{k}} + 2{ G}M\rho^{m} + c_8 \sum_{j=1}^{m-1}\eta_{k}\lambda^{\tau_{k}-j}\frac{d { G}}{\delta_{k}}\right) \\ &+\frac{d}{\delta_{k}}\frac{L}{2} \left(\sum_{j=1}^{m-1}\eta_{k}\lambda^{\tau_{k}-j}\frac{d { G}}{\delta_{k}}\right)^2 \\ &<\frac{d}{(1-\lambda)^2}\left(2L_2{ G}^2 d+c_8 { G}d\right)\lambda^{\tau_{k}-m+1}\frac{\eta_{k}}{\delta_{k}^2}+\frac{L{ G}^2 d^3}{2(1-\lambda)^2}\lambda^{2(\tau_{k}-m+1)}\frac{\eta_{k}^2}{\delta_{k}^3}+2{ G}M d\frac{\rho^m}{\delta_{k}}\end{aligned}$$ Then it holds that $$\begin{aligned} \left\Vert \sum_{m=1}^{\tau_{k}}\lambda^{\tau_{k}-m}\Delta_{k,m} \right\Vert &\leq \sum_{m=1}^{\tau_{k}}\lambda^{\tau_{k}-m}\left\Vert \Delta_{k,m} \right\Vert \\ &\leq \frac{d}{(1-\lambda)^2}\left(2L_2{ G}^2d+c_8d{ G}\right)\frac{\eta_{k}}{\delta_{k}^2}\sum_{m=1}^{\tau_{k}}\lambda^{2(\tau_{k}-m)}\lambda \\ &+\frac{L{ G}^2}{2(1-\lambda)^2}d^3\frac{\eta_{k}^2}{\delta_{k}^3}\sum_{m=1}^{\tau_{k}}\lambda^{3(\tau_{k}-m)}\lambda^2 \\ &+2{ G}M d\delta_{k}^{-1}\sum_{m=1}^{\tau_{k}}\lambda^{\tau_{k}-m}\rho^{m} \\ &\leq (2L_2{ G}^2 d+c_8 G d)\frac{d\lambda}{(1-\lambda)^3}\frac{\eta_{k}}{\delta_{k}^2} \\ &+\frac{L{ G}^2}{2}\frac{d^3\lambda^2}{1-\lambda}\frac{\eta_{k}^2}{\delta_{k}^3} +2{ G}M d\delta_{k}^{-1}\tau_{k}\max\{\rho,\lambda\}^{\tau_{k}}\end{aligned}$$ Finally, provided $\tau_{k}\geq\log_{\max\{\rho,\lambda\}}(1+k)^{-1}$ and $0<2\alpha-4\beta<1$, we can bound ${\bf 
I}_{2}(t)$ as follows: $$\begin{aligned} {\bf I}_{2}(t) &\leq (1-\lambda) \mathbb{E}\sum_{k=0}^{t} \left\Vert {\nabla}{\cal L}({\bm \theta}_k) \right\Vert \cdot \left\Vert \sum_{m=1}^{\tau_{k}}\lambda^{\tau_{k} -m} \Delta_{k,m} \right\Vert \\ &\leq (1-\lambda) \mathbb{E}\sum_{k=0}^{t} \left\Vert {\nabla}{\cal L}({\bm \theta}_k) \right\Vert\left[(2 L_2{ G}^2 d+c_8 G d)\frac{d\lambda}{(1-\lambda)^3}\frac{\eta_{k}}{\delta_{k}^2}+\frac{L{ G}^2}{2}\frac{d^3\lambda^2}{1-\lambda}\frac{\eta_{k}^2}{\delta_{k}^3} +2{ G}M d\frac{\tau_{k}}{\delta_{k}(1+k)}\right] \\ &\leq \frac{d\lambda}{(1-\lambda)^2}(2L_2{ G}^2 d+c_8 { G}d)\left(\sum_{k=0}^{t}\mathbb{E}\left\Vert {\nabla}{\cal L}({\bm \theta}_k) \right\Vert^2\right)^{1/2}\left(\sum_{k=0}^{t}\frac{\eta_{k}^2}{\delta_{k}^4}\right)^{1/2} \\ &+d^3\lambda^2 \frac{L{ G}^2}{2}\left(\sum_{k=0}^{t}\mathbb{E}\left\Vert {\nabla}{\cal L}({\bm \theta}_k) \right\Vert^2\right)^{1/2}\left(\sum_{k=0}^t\frac{\eta_{k}^4}{\delta_{k}^6}\right)^{1/2} \\ &+\frac{d}{1-\lambda}{ G}M \left(\sum_{k=0}^{t}\mathbb{E}\left\Vert {\nabla}{\cal L}({\bm \theta}_k) \right\Vert^2\right)^{1/2}\left(\sum_{k=0}^{t}\frac{\tau_{k}^2}{\delta_{k}^2(1+k)^2}\right)^{1/2} \\ &\overset{(b)}{\leq} \frac{d^{2}\lambda}{(1-\lambda)^2}6(L_2{ G}^2+\sqrt{L}{ G}^{3/2}+L_1{ G}^2 )\left(\sum_{k=0}^{t}\mathbb{E}\left\Vert {\nabla}{\cal L}({\bm \theta}_k) \right\Vert^2\right)^{1/2}\left(\sum_{k=0}^{t}\frac{\eta_{k}^2}{\delta_{k}^4}\right)^{1/2} \\ &\leq c_{2}\frac{d^{2}}{(1-\lambda)^2}\left(\frac{1}{1+t}\sum_{k=0}^{t}\mathbb{E}\left\Vert {\nabla}{\cal L}({\bm \theta}_k) \right\Vert^2\right)^{1/2} \cdot (1+t)^{1-(\alpha-2\beta)}\quad \forall t\geq \max\{t_1, t_2\}\end{aligned}$$ where $c_2 \mathrel{\mathop:}=\frac{\eta_0}{\delta_0^2}\frac{6\cdot (L_1{ G}^2+L_2{ G}^2+\sqrt{L}{ G}^{3/2} )}{\sqrt{1-2\alpha+4\beta}}$. 
The inequality (b) holds since $\tau_{k}=\Theta(\log k)$, $4\alpha-6\beta>2\alpha-4\beta$ and $2-2\beta>2\alpha-4\beta$, so there exist constants $$\begin{aligned} t_{1} & \mathrel{\mathop:}=\inf_{t}\left\{ t\geq 0 \, | \, \frac{d^6\lambda^4 L^2{ G}^4}{4}\sum_{k=0}^{t} \frac{\eta_{k}^4}{\delta_{k}^6}\leq \frac{d^2\lambda^2(2L_2{ G}^2d+c_8{ G}d)^2}{(1-\lambda)^4}\sum_{k=0}^{t} \frac{\eta_{k}^{2}}{\delta_{k}^{4}}\right\} \label{eq:const-t1} \\ t_{2} & \mathrel{\mathop:}=\inf_{t}\left\{ t\geq 0 \, | \, d^2 { G}^2M^2\sum_{k=0}^{t} \frac{\tau_{k}^2}{\delta_{k}^2 (1+k)^2}\leq \frac{d^2\lambda^2(2L_2{ G}^2d+c_8{ G}d)^2}{(1-\lambda)^4}\sum_{k=0}^{t} \frac{\eta_{k}^{2}}{\delta_{k}^{4}}\right\} \label{eq:const-t2}\end{aligned}$$ In brief, we have $$\begin{aligned} {\bf I}_{2}(t) \leq c_{2}\frac{d^{2}}{(1-\lambda)^2}\left(\frac{1}{1+t}\sum_{k=0}^{t}\mathbb{E}\left\Vert {\nabla}{\cal L}({\bm \theta}_k) \right\Vert^2\right)^{1/2} \cdot (1+t)^{1-(\alpha-2\beta)}\quad \forall t\geq \max\{t_1,t_2\}\end{aligned}$$ ◻ **Lemma 6**. *Under Assumptions [\[assu:Lip\]](#assu:Lip){reference-type="ref" reference="assu:Lip"}, [\[assu:BoundLoss\]](#assu:BoundLoss){reference-type="ref" reference="assu:BoundLoss"} and $0<\beta< 1/2$, with $\tau_{k}\geq\frac{1}{\log 1/\max\{\rho,\lambda\}}\left(\log(1+k)+\log\frac{d}{\delta_0}\right)$, it holds that $${\bf I}_{3}(t)\leq c_3 {{\cal A}(t)}^{\frac{1}{2}} (1+t)^{1-\beta}$$ where ${\cal A}(t) \mathrel{\mathop:}=\frac{1}{1+t} \sum_{k=0}^{t} \mathbb{E}\left\Vert {\nabla}{\cal L}({\bm \theta}_k) \right\Vert^2$ and the constant $c_3=\frac{2}{\sqrt{1-2\beta}}\max\{L\delta_0, { G}\sqrt{1-\beta}\}$.* *Proof.* Recall that $g_{k}({\bm \theta};u,z)\mathrel{\mathop:}=g_{\delta_{k}}({\bm \theta};u,z)$ and ${\cal L}_{k}({\bm \theta})\mathrel{\mathop:}={\cal L}_{\delta_{k}}({\bm \theta})$.
$$\begin{aligned} {\bf I}_{3}(t) &= -\sum_{k=0}^{t} \mathbb{E}\left\langle{\nabla}{\cal L}({\bm \theta}_k)\,|\,(1-\lambda)\left(\sum_{m=1}^{\tau_{k}}\lambda^{\tau_{k}-m} {\nabla}{{\cal L}_{k}({\bm \theta}_{k})} \right) -{\nabla}{\cal L}({\bm \theta}_k)\right\rangle \\ &= -\sum_{k=0}^{t} \mathbb{E}\left\langle{\nabla}{\cal L}({\bm \theta}_k)\,|\,\left((1-\lambda)\sum_{m=1}^{\tau_{k}}\lambda^{\tau_{k}-m}\right) {\nabla}{{\cal L}_{k}({\bm \theta}_{k})} -{\nabla}{\cal L}({\bm \theta}_k)\right\rangle \\ &= -\sum_{k=0}^{t} \mathbb{E}\left\langle{\nabla}{\cal L}({\bm \theta}_k)\,|\,{\nabla}{{\cal L}_{k}({\bm \theta}_{k})} -{\nabla}{\cal L}({\bm \theta}_k)\right\rangle + \sum_{k=0}^{t}\lambda^{\tau_{k}}\mathbb{E}\left\langle{\nabla}{\cal L}({\bm \theta}_k)\,|\,\mathbb{E}_{Z\sim \Pi_{\check{{\bm \theta}}_k}}[g_{k}({\bm \theta}_{k};u_{k},Z)]\right\rangle\end{aligned}$$ where the last equality uses $(1-\lambda)\sum_{m=1}^{\tau_{k}}\lambda^{\tau_{k}-m}=1-\lambda^{\tau_{k}}$ and Lemma [Lemma 9](#lem:UnbiasedGrad){reference-type="ref" reference="lem:UnbiasedGrad"}. By the triangle inequality, the Cauchy--Schwarz inequality and Assumption [\[assu:BoundLoss\]](#assu:BoundLoss){reference-type="ref" reference="assu:BoundLoss"}, we obtain $$\begin{aligned} {\bf I}_{3}(t) &\leq \sum_{k=0}^{t} \mathbb{E}\left\Vert {\nabla}{\cal L}({\bm \theta}_k) \right\Vert \cdot \left\Vert {\nabla}{{\cal L}_{k}({\bm \theta}_{k})} - {\nabla}{\cal L}({\bm \theta}_k) \right\Vert + \sum_{k=0}^{t} \lambda^{\tau_{k}} \mathbb{E}\left\Vert {\nabla}{\cal L}({\bm \theta}_k) \right\Vert \frac{d{ G}}{\delta_{k}}\end{aligned}$$ Provided $\tau_{k}\geq\frac{\log(1+k)+\log\frac{d}{\delta_0}}{\log 1/\max\{\rho,\lambda\}} \geq \frac{ \log{\delta_0 / d(1+k)^{-1}}}{\log \max\{\rho,\lambda\}} = \log_{\max\{\rho,\lambda\}}\frac{\delta_0}{d} (1+k)^{-1}\geq \log_{\lambda} \frac{\delta_0}{d}(1+k)^{-1}$, with Lemma [\[lem:BoundedBias\]](#lem:BoundedBias){reference-type="ref" reference="lem:BoundedBias"} as a consequence of Assumption [\[assu:Lip\]](#assu:Lip){reference-type="ref" reference="assu:Lip"}, we have
$$\begin{aligned} {\bf I}_{3}(t) &\leq \sum_{k=0}^{t} \mathbb{E}\left\Vert {\nabla}{\cal L}({\bm \theta}_k) \right\Vert \cdot L\delta_{k} + \sum_{k=0}^{t} \mathbb{E}\left\Vert {\nabla}{\cal L}({\bm \theta}_k) \right\Vert \frac{\delta_0}{d} \frac{d{ G}}{\delta_{0}} (1+k)^{\beta-1} \\ &= \sum_{k=0}^{t} \mathbb{E}\left\Vert {\nabla}{\cal L}({\bm \theta}_k) \right\Vert \cdot L\delta_{k} + { G}\sum_{k=0}^{t} \mathbb{E}\left\Vert {\nabla}{\cal L}({\bm \theta}_k) \right\Vert (1+k)^{\beta-1} \\ &\leq L\left( \sum_{k=0}^{t}\mathbb{E}\left\Vert {\nabla}{\cal L}({\bm \theta}_k) \right\Vert^2\right)^{1/2} \left(\sum_{k=0}^{t}\delta_{k}^2 \right)^{1/2} + { G}\left(\sum_{k=0}^{t} \mathbb{E}\left\Vert {\nabla}{\cal L}({\bm \theta}_k) \right\Vert^2\right)^{1/2} \left( \sum_{k=0}^{t} (1+k)^{2(\beta-1)}\right)^{1/2}\end{aligned}$$ Since $\beta<1/2$, it holds that $$\begin{aligned} &\sum_{k=0}^{t} \delta_{k}^2 = \sum_{k=0}^{t} \frac{\delta_{0}^2}{(1+k)^{2\beta}} \leq \frac{\delta_{0}^{2}}{1-2\beta} \left[ 1-2\beta+(1+t)^{1-2\beta}-1\right] \leq \frac{\delta_{0}^{2}}{1-2\beta}(1+t)^{1-2\beta} \\ &\sum_{k=0}^{t} (1+k)^{2(\beta-1)} \leq 1+\int_{0}^{t} (x+1)^{2(\beta-1)} dx < 1+ \frac{1}{1-2\beta}\end{aligned}$$ Then we can conclude $$\begin{aligned} {\bf I}_{3}(t) &\leq c_{3} \left( \frac{1}{1+t}\sum_{k=0}^{t}\mathbb{E}\left\Vert {\nabla}{\cal L}({\bm \theta}_k) \right\Vert^2\right)^{1/2} \cdot (1+t)^{1-\beta}\end{aligned}$$ where $c_{3} \mathrel{\mathop:}=\frac{2}{\sqrt{1-2\beta}}\max\{L\delta_0, { G}\sqrt{1-\beta}\}$. ◻ **Lemma 7**. 
*Under assumption [\[assu:BoundLoss\]](#assu:BoundLoss){reference-type="ref" reference="assu:BoundLoss"} and constraint $0<\alpha<1$, it holds that $${\bf I}_{4}(t)\leq c_4 \frac{d^2}{1-\lambda} (1+t)^{1-\left(\alpha-2\beta\right)}$$ where constant $c_4=\frac{\eta_{0} L { G}^2 }{\delta_{0}^2 (2\beta-\alpha+1)}$.* *Proof.* $$\begin{aligned} {\bf I}_{4}(t) &= \frac{(1-\lambda)L}{2}\sum_{k=0}^{t} \eta_{k} \mathbb{E}\left\Vert \sum_{m=1}^{\tau_{k}}\lambda^{\tau_{k}-m}g_{k}^{(m)} \right\Vert^2 \\ &\leq\frac{(1-\lambda)L}{2}\sum_{k=0}^{t} \eta_{k}\mathbb{E}\left(\sum_{m=1}^{\tau_{k}}\lambda^{\tau_{k}-m}\left\Vert {g_{k}^{(m)}} \right\Vert\right)^2 \\ &\leq\frac{(1-\lambda)L}{2}\sum_{k=0}^{t} \eta_{k}\left(\sum_{m=1}^{\tau_{k}}\lambda^{\tau_{k}-m}\right)^2\frac{(d{ G})^2}{\delta_{k}^2} \\ &\leq\frac{(1-\lambda)L d^2{ G}^2}{2}\sum_{k=0}^{t}\left(\frac{1-\lambda^{\tau_{k}}}{1-\lambda}\right)^2\frac{\eta_{k}}{\delta_{k}^2} \\ &<\frac{d^2 L{ G}^2}{2(1-\lambda)}\sum_{k=0}^{t}\frac{\eta_{k}}{\delta_{k}^2}\end{aligned}$$ Recall that $\eta_{k}=\frac{\eta_{0}}{(k+1)^{\alpha}}$, $\delta_{k}=\frac{\delta_{0}}{(1+k)^{\beta}}$ and $\alpha<1, \beta\geq 0$, it is clear that $\alpha-2\beta<1$, so it holds that $$\begin{aligned} \sum_{k=0}^{t}\frac{\eta_{k}}{\delta_{k}^2} &= \frac{\eta_{0}}{\delta_{0}^{2}} \sum_{k=0}^{t} (1+k)^{2\beta-\alpha} \leq \frac{\eta_{0}}{\delta_{0}^{2}} \left(1+\int_{0}^{t} (1+x)^{2\beta-\alpha} dx\right) \\ &\leq \frac{\eta_{0}}{\delta_{0}^2 (2\beta-\alpha+1)} \left[ (1+t)^{2\beta-\alpha+1} -\alpha+2\beta \right] \leq \frac{2\eta_{0}}{\delta_{0}^2 (2\beta-\alpha+1)} (1+t)^{2\beta-\alpha+1}\end{aligned}$$ In conclusion, we obtain that $$\begin{aligned} {\bf I}_{4}(t) \leq d^2\frac{L{ G}^2}{1-\lambda} \frac{\eta_{0}}{\delta_{0}^2 (2\beta-\alpha+1)} \cdot (1+t)^{2\beta-\alpha+1} = c_{4}\frac{d^2}{1-\lambda} (1+t)^{1-(\alpha-2\beta)}\end{aligned}$$ where $c_{4} \mathrel{\mathop:}=\frac{\eta_{0} }{\delta_{0}^2 }\cdot\frac{L { G}^2}{2\beta-\alpha+1}.$ ◻ # Proof of 
Lemma [Lemma 3](#lem:major_bound){reference-type="ref" reference="lem:major_bound"} {#proof-of-lemma-lemmajor_bound} *Proof.* Combining Lemmas [\[lem:decomposition\]](#lem:decomposition){reference-type="ref" reference="lem:decomposition"} and [Lemma 2](#lem:bound_four_terms){reference-type="ref" reference="lem:bound_four_terms"}, subject to the constraints $0<\alpha<1,0<\beta< 1/2, 0<2\alpha-4\beta< 1$, it holds that for any $t\geq\max\{t_1,t_2\}$, $$\begin{aligned} &\sum_{k=0}^{t}\mathbb{E}\left\Vert {\nabla}{\cal L}({\bm \theta}_k) \right\Vert^2 \\ &\leq{\bf I}_{1}(t) + {\bf I}_{2}(t) + {\bf I}_{3}(t) + {\bf I}_{4}(t) \\ & \leq c_1 (1-\lambda) (1+t)^{\alpha} + c_2 \frac{d^{5/2}}{(1-\lambda)^2} (1+t)^{1-(\alpha-2\beta)} {\cal A}(t)^{1/2} \\ &\quad +c_3 (1+t)^{1-\beta}{\cal A}(t)^{1/2} +c_4 \frac{d^2}{1-\lambda} (1+t)^{1-(\alpha-2\beta)}\end{aligned}$$ Recalling ${\cal A}(t) \mathrel{\mathop:}=\frac{1}{1+t} \sum_{k=0}^{t}\mathbb{E}\left\Vert {\nabla}{\cal L}({\bm \theta}_k) \right\Vert^2$, the above inequality can be rewritten as $$\begin{aligned} {\cal A}(t) &\leq \frac{1}{1+t}\bigg[c_2 \frac{d^{5/2}}{(1-\lambda)^2} (1+t)^{1-(\alpha-2\beta)} {\cal A}(t)^{1/2} \\ &\quad +c_3 (1+t)^{1-\beta}{\cal A}(t)^{1/2} +c_1 (1-\lambda) (1+t)^{\alpha}+c_4 \frac{d^2}{1-\lambda}(1+t)^{1-(\alpha-2\beta)}\bigg] \\ &= \left(c_2 \frac{d^{5/2}}{(1-\lambda)^2} (1+t)^{-(\alpha-2\beta)}+c_3 (1+t)^{-\beta}\right){\cal A}(t)^{1/2} + c_1 (1-\lambda) (1+t)^{-(1-\alpha)} \\ &\quad + c_4 \frac{d^2}{1-\lambda} (1+t)^{-(\alpha-2\beta)}\end{aligned}$$ which is a quadratic inequality in ${\cal A}(t)^{1/2}$. Let $x={\cal A}(t)^{1/2}, a=c_2 \frac{d^{5/2}}{(1-\lambda)^2} (1+t)^{-(\alpha-2\beta)}+c_3 (1+t)^{-\beta}, b=c_1 (1-\lambda) (1+t)^{-(1-\alpha)}+ c_4 \frac{d^2}{1-\lambda} (1+t)^{-(\alpha-2\beta)}$, so that $x^2-ax-b\leq 0$. Since $a,b>0$, the quadratic has two real roots, denoted $x_1$ and $x_2$, with $x_1<0<x_2$.
Moreover, we must have $x\leq x_2$, which implies $x\leq\frac{a+\sqrt{a^2+4b}}{2}\leq \frac{a+a+2\sqrt{b}}{2}=a+\sqrt{b}$. Therefore, ${\cal A}(t)=x^2\leq (a+\sqrt{b})^2\leq 2(a^2+b)$. Substituting $a,b$ back leads to $$\begin{aligned} {\cal A}(t) &\leq 2\left(c_2 \frac{d^{5/2}}{(1-\lambda)^2} (1+t)^{-(\alpha-2\beta)}+c_3 (1+t)^{-\beta}\right)^2 +2c_1 (1-\lambda) (1+t)^{-(1-\alpha)} \\ &\quad + 2c_4 \frac{d^2}{1-\lambda} (1+t)^{-(\alpha-2\beta)} \\ &\overset{(a)}{\leq} 4 c_2^2 \frac{d^5}{(1-\lambda)^4} (1+t)^{-2(\alpha-2\beta)}+ 4 c_3^2 (1+t)^{-2\beta} + 2 c_1 (1-\lambda) (1+t)^{-(1-\alpha)} \\ &\quad + 2 c_4 \frac{d^2}{1-\lambda} (1+t)^{-(\alpha-2\beta)} \\ &\leq 4 c_3^2 (1+t)^{-2\beta} + 2 c_1 (1-\lambda) (1+t)^{-(1-\alpha)}+ 4 c_4 \frac{d^2}{1-\lambda} (1+t)^{-(\alpha-2\beta)},\end{aligned}$$ where inequality (a) is due to the fact that $(x+y)^2\leq 2 (x^2+y^2)$, and the last inequality holds because, given $\alpha>2\beta$, there exists a sufficiently large constant $t_3$ such that $4 c_2^2 \frac{d^5}{(1-\lambda)^4} (1+t)^{-2(\alpha-2\beta)}\leq 2 c_4 \frac{d^2}{1-\lambda} (1+t)^{-(\alpha-2\beta)}$ for all $t\geq t_3$.
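The quadratic-inequality step above is self-contained: if $x\geq 0$ satisfies $x^2-ax-b\leq 0$ with $a,b>0$, then $x\leq\frac{a+\sqrt{a^2+4b}}{2}\leq a+\sqrt{b}$, hence $x^2\leq 2(a^2+b)$. A randomized numerical sketch (the sampling range for $a,b$ is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

def check_quadratic_root_bound(trials=1000):
    """If x >= 0 and x^2 - a x - b <= 0 with a, b > 0, then
    x <= (a + sqrt(a^2 + 4b))/2 <= a + sqrt(b), so x^2 <= 2(a^2 + b)."""
    for _ in range(trials):
        a, b = rng.uniform(0.01, 10.0, size=2)
        x_max = (a + np.sqrt(a * a + 4.0 * b)) / 2.0  # largest admissible x
        if not (x_max <= a + np.sqrt(b) + 1e-9 and x_max ** 2 <= 2.0 * (a * a + b) + 1e-9):
            return False
    return True
```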
Therefore, setting $t_0\mathrel{\mathop:}=\max\{t_1,t_2,t_3\}$, for all $t\geq t_0$ we have $$\begin{aligned} {\cal A}(t) &\leq 4 \max\{ c_1 (1-\lambda), c_3^2 , c_4 \frac{d^2}{1-\lambda}\} \cdot \left((1+t)^{-2\beta}+(1+t)^{-(1-\alpha)}+(1+t)^{-(\alpha-2\beta)}\right) \\ &\leq 12 \max\{ c_1 (1-\lambda), c_3^2 , c_4 \frac{d^2}{1-\lambda}\} (1+t)^{-\min\{2\beta,1-\alpha,\alpha-2\beta\}}\end{aligned}$$ Recall that the constant $c_1$ contains $1/\eta_0$, $c_3$ contains $\delta_0$, and $c_4$ contains $\eta_0/\delta_0^2$; thus we can set $\delta_0 = d^{1/3}, \eta_0 = d^{-2/3}$, which yields $$\begin{aligned} {\cal A}(t) &\leq 12 \max\{c_5 (1-\lambda), c_6, \frac{c_7}{1-\lambda}\} d^{2/3} (1+t)^{-\min\{2\beta,1-\alpha,\alpha-2\beta\}}\end{aligned}$$ where the constants $$\begin{aligned} c_5 =2 { G},\quad c_6 =\frac{4 \max\{L^2, { G}^2(1-\beta)\}}{1-2\beta}, \quad c_7 =\frac{L{ G}^2 }{2\beta-\alpha+1}\end{aligned}$$ do not contain $\eta_0$ and $\delta_0$. Moreover, note that $\max_{\alpha,\beta}\min\{2\beta,1-\alpha,\alpha-2\beta\}=\frac{1}{3}$, thus it holds that $$\begin{aligned} &\frac{1}{1+T}\sum_{k=0}^{T}\mathbb{E}\left\Vert {\nabla}{\cal L}({\bm \theta}_k) \right\Vert^2 \leq 12 \max\{c_5 (1-\lambda), c_6, \frac{c_7}{1-\lambda}\} d^{2/3} (1+T)^{-1/3}\end{aligned}$$ where the rate ${\cal O}(1/T^{1/3})$ can be attained by choosing $\alpha=\frac{2}{3}$, $\beta=\frac{1}{6}.$ This immediately leads to Theorem [\[thm1\]](#thm1){reference-type="ref" reference="thm1"} by observing $$\min_{0\leq k \leq T} \mathbb{E}\left\Vert {\nabla}{\cal L}({\bm \theta}_k) \right\Vert^2\leq \textstyle \frac{1}{1+T}\sum_{k=0}^{T}\mathbb{E}\left\Vert {\nabla}{\cal L}({\bm \theta}_k) \right\Vert^2.$$ ◻ # Non-smooth Analysis In this section, we aim to apply our algorithm to the non-smooth performative risk optimization problem and analyze its convergence rate.
Before presenting the theorem, we need the following Lipschitz loss assumption [\[assu:lip_ell\]](#assu:lip_ell){reference-type="ref" reference="assu:lip_ell"}. **Assumption 6**. **(Lipschitz Loss)**[\[assu:lip_ell\]]{#assu:lip_ell label="assu:lip_ell"} There exists constant $L_0>0$ such that $$\left|\ell({\bm \theta}_{1};z)-\ell({\bm \theta}_{2};z)\right|\leq L_{0}\left\Vert {\bm \theta}_{1}-{\bm \theta}_{2} \right\Vert, ~\forall~ {\bm \theta}_{1},{\bm \theta}_{2}\in \mathbb{R}^d, ~\forall ~z\in {\sf Z}$$ Under Assumption [\[assu:lip_ell\]](#assu:lip_ell){reference-type="ref" reference="assu:lip_ell"} and some other regularity conditions, one can show that the performative risk is also Lipschitz continuous. Formally, this can be stated as follows. **Lemma 8**. *Under Assumption [\[assu:lip_ell\]](#assu:lip_ell){reference-type="ref" reference="assu:lip_ell"}, [\[assu:BoundLoss\]](#assu:BoundLoss){reference-type="ref" reference="assu:BoundLoss"}, [\[assu:smooth_dist\]](#assu:smooth_dist){reference-type="ref" reference="assu:smooth_dist"}, the performative risk ${\cal L}({\bm \theta})$ is ($L_0+2 L_1{ G}$)-Lipschitz continuous.* Under non-smooth settings, the convergence behavior can be characterized in both squared gradient norm and proximity gap. Now, we are ready to show the following theorem: **Theorem 1**. 
***($\texttt{DFO}\left(\lambda\right)$ for Non-smooth Optimization)** Under Assumptions [\[assu:lip_ell\]](#assu:lip_ell){reference-type="ref" reference="assu:lip_ell"}, [\[assu:BoundLoss\]](#assu:BoundLoss){reference-type="ref" reference="assu:BoundLoss"}, [\[assu:smooth_dist\]](#assu:smooth_dist){reference-type="ref" reference="assu:smooth_dist"}, [\[assu:FastMixing\]](#assu:FastMixing){reference-type="ref" reference="assu:FastMixing"}, [\[assu:smooth_kernel\]](#assu:smooth_kernel){reference-type="ref" reference="assu:smooth_kernel"}, with two time-scale step sizes $\eta_k=\eta_0 (1+k)^{-\alpha}, \delta_k=d (1+k)^{-\beta}, \tau_{k}\geq\frac{\log(1+k)}{\log 1/\max\{\rho,\lambda\}}$, where $\alpha,\beta$ satisfy $0<3\beta<\alpha<1$, there exists a constant $t_4$ such that the iterates $\{{\bm \theta}_k\}_{k\geq 1}$ satisfy, for all $T\geq t_4$, $$\frac{1}{1+T}\sum_{k=0}^{T}\mathbb{E}\left\Vert {\nabla}{{\cal L}_{\delta_{k}}({\bm \theta}_k)} \right\Vert^2 ={\cal O}(T^{-\min\{1-\alpha, \alpha-3\beta\}})$$ and the following error estimate holds for all $T > 0$ and ${\bm \theta}\in\mathbb{R}^{d}$ $$\frac{1}{1+T}\sum_{k=0}^{T}\mathbb{E}|{\cal L}_{\delta_{k}}({\bm \theta})-{\cal L}({\bm \theta})|={\cal O}(T^{-\beta})$$* **Corollary 2**. *($\epsilon$-stationarity, $\mu$-proximity) Suppose the assumptions of Theorem [Theorem 1](#thm:non-smooth){reference-type="ref" reference="thm:non-smooth"} hold. Fix any $\epsilon,\mu>0$; for $T= \max\{{\cal O}(1/\epsilon^4), {\cal O}(1/\mu^6)\}$, the following estimates hold simultaneously $$\frac{1}{1+T} \sum_{k=0}^{T}\mathbb{E}\left\Vert {\nabla}{\cal L}_{\delta_{k}}({\bm \theta}_k) \right\Vert^2 \leq \epsilon$$ $$\frac{1}{1+T} \sum_{k=0}^{T}\mathbb{E}\left|{\cal L}_{\delta_{k}}({\bm \theta}_{k})-{\cal L}({\bm \theta}_{k})\right|\leq \mu$$* Next, we present the proof of Theorem [Theorem 1](#thm:non-smooth){reference-type="ref" reference="thm:non-smooth"}.
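As a sanity check on the exponents in Corollary 2: taking the illustrative admissible pair $\alpha=3/4$, $\beta=1/6$ (our choice for this sketch; the text does not fix specific values) gives a squared-gradient exponent $\min\{1-\alpha,\alpha-3\beta\}=1/4$ and a bias exponent $\beta=1/6$, consistent with $T= \max\{{\cal O}(1/\epsilon^4), {\cal O}(1/\mu^6)\}$:

```python
def nonsmooth_exponents(alpha=0.75, beta=1.0 / 6.0):
    # hypothetical admissible step-size exponents: 0 < 3*beta < alpha < 1
    assert 0 < 3 * beta < alpha < 1
    grad_rate = min(1 - alpha, alpha - 3 * beta)  # exponent r in the T^{-r} gradient bound
    bias_rate = beta                              # exponent in the T^{-beta} bias bound
    return grad_rate, bias_rate
```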
*Proof.* This proof resembles the proof of Lemma [Lemma 2](#lem:bound_four_terms){reference-type="ref" reference="lem:bound_four_terms"}, where we reinterpret $\sum_{k=0}^{t}\mathbb{E}\left\Vert {\nabla}{\cal L}({\bm \theta}_{k}) \right\Vert^2$ as $\sum_{k=0}^{t}\mathbb{E}\left\Vert {\nabla}{\cal L}_{\delta_{k}}({\bm \theta}_{k}) \right\Vert^2$, and ${\cal L}({\bm \theta}_{k})$ as ${\cal L}_{\delta_{k}}({\bm \theta}_{k})$, with additional bias terms that, as we shall prove, are not dominant. Due to Lemma [Lemma 8](#lem:composite_lip){reference-type="ref" reference="lem:composite_lip"}, $\cal{L}({\bm \theta})$ is $(L_0+2 L_1{ G})$-Lipschitz. Then by Lemma [Lemma 9](#lem:UnbiasedGrad){reference-type="ref" reference="lem:UnbiasedGrad"}, ${\cal L}_{\delta}({\bm \theta})$ is $\frac{d}{\delta}(L_0 + 2 L_1 { G})$-smooth for all $\delta>0$. Similar to Lemma [\[lem:decomposition\]](#lem:decomposition){reference-type="ref" reference="lem:decomposition"}, we have $$\begin{aligned} &{\cal L}_{\delta_{k}}({\bm \theta}_{k+1})-{\cal L}_{\delta_{k}}({\bm \theta}_{k})+\frac{\eta_{k}}{1-\lambda}\left\langle{\nabla}{\cal L}_{\delta_{k}}({\bm \theta}_{k})\,|\,(1-\lambda)\sum_{m=1}^{\tau_{k}}\lambda^{\tau_{k}-m}g_{k}^{(m)}\right\rangle \\ &\leq\frac{d (L_0+2 L_1{ G}) }{2\delta_{k}}\eta_{k}^2\left\Vert \sum_{m=1}^{\tau_{k}}\lambda^{\tau_{k}-m}g_{k}^{(m)} \right\Vert^2\end{aligned}$$ By adding, subtracting and rearranging terms, after taking conditional expectation on ${\cal F}^{k-1}$, it holds that $$\begin{aligned} \frac{\eta_{k}}{1-\lambda}\left\Vert {\nabla}{\cal L}_{\delta_{k}}({\bm \theta}_k) \right\Vert^2 &\leq \mathbb{E}_{{\cal F}^{k-1}}\left[{\cal L}_{\delta_{k}}({\bm \theta}_{k}) - {\cal L}_{\delta_{k+1}}({\bm \theta}_{k+1}) + {\cal L}_{\delta_{k+1}}({\bm \theta}_{k+1}) - {\cal L}_{\delta_{k}}({\bm \theta}_{k+1}) \right] \\ &+\frac{\eta_{k}}{1-\lambda} \mathbb{E}_{{\cal F}^{k-1}}\left\langle{\nabla}{\cal L}_{\delta_{k}}({\bm \theta}_k)\,|\,{\nabla}{\cal L}_{\delta_{k}}({\bm 
\theta}_k)-(1-\lambda)\sum_{m=1}^{\tau_{k}}\lambda^{\tau_{k}-m}g_{k}^{(m)}\right\rangle \\ &+ \frac{d}{2\delta_{k}}(L_0 + 2 L_1 { G})\eta_{k}^2\mathbb{E}_{{\cal F}^{k-1}}\left\Vert \sum_{m=1}^{\tau_{k}}\lambda^{\tau_{k}-m}g_{k}^{(m)} \right\Vert^2\end{aligned}$$ By Lemma [Lemma 9](#lem:UnbiasedGrad){reference-type="ref" reference="lem:UnbiasedGrad"}, we have $\mathbb{E}_{Z\sim\Pi_{\check{{\bm \theta}}_k},u_{k}} [g_{\delta_{k}}({\bm \theta}_k;u_k,Z)]={\nabla}{\cal L}_{\delta_{k}}({\bm \theta}_{k})$, then by dividing and summing over $k$, it holds that $$\begin{aligned} &(1-\lambda)\sum_{k=0}^{t}\mathbb{E}\left\Vert {\nabla}{\cal L}_{\delta_{k}}({\bm \theta}_k) \right\Vert^2 \\ &\leq \sum_{k=0}^{t} \frac{1-\lambda}{\eta_{k}} \mathbb{E}\left[ {\cal L}_{\delta_{k}}({\bm \theta}_{k})- {\cal L}_{\delta_{k+1}}({\bm \theta}_{k+1}) + {\cal L}_{\delta_{k+1}}({\bm \theta}_{k+1}) - {\cal L}_{\delta_{k}}({\bm \theta}_{k+1}) \right] \\ &\quad +(1-\lambda)\sum_{k=0}^{t} \mathbb{E}\left\langle{\nabla}{\cal L}_{\delta_{k}}({\bm \theta}_k)\,|\,\sum_{m=1}^{\tau_{k}}\lambda^{\tau_{k}-m}\Big(\mathbb{E}_{Z\sim\Pi_{\check{{\bm \theta}}_k}} [g_{\delta_{k}}({\bm \theta}_k;u_k,Z)]-g_{k}^{(m)}\Big)\right\rangle \\ &\quad +\sum_{k=0}^{t}\lambda^{\tau_{k}}\mathbb{E}\left\Vert {\nabla}{\cal L}_{\delta_{k}}({\bm \theta}_k) \right\Vert^2 \\ &\quad + \frac{d (L_0 + 2 L_1 { G})(1-\lambda)}{2}\sum_{k=0}^{t}\frac{\eta_{k}}{\delta_{k}}\mathbb{E}\left\Vert \sum_{m=1}^{\tau_{k}}\lambda^{\tau_{k}-m}g_{k}^{(m)} \right\Vert^2 \\ &\mathrel{\mathop:}={\bf I}_{5}(t) + {\bf I}_{6}(t) + {\bf I}_{7}(t) + {\bf I}_{8}(t)\end{aligned}$$ After splitting RHS into ${\bf I}_{5}(t),{\bf I}_{6}(t),{\bf I}_{7}(t),{\bf I}_{8}(t)$, we can bound them separately. 
Under Assumption [\[assu:BoundLoss\]](#assu:BoundLoss){reference-type="ref" reference="assu:BoundLoss"} and the estimate $\delta_{k}-\delta_{k+1}=\Theta(k^{-\beta-1})$, it holds that $$\begin{aligned} {\bf I}_{5}(t) &=(1-\lambda)\sum_{k=0}^{t}\frac{1}{\eta_{k}}\mathbb{E}\left[{\cal L}_{\delta_{k}}({\bm \theta}_{k})-{\cal L}_{\delta_{k+1}}({\bm \theta}_{k+1})\right]+(1-\lambda)\sum_{k=0}^{t}\frac{1}{\eta_{k}}\mathbb{E}\left[{\cal L}_{\delta_{k+1}}({\bm \theta}_{k+1})-{\cal L}_{\delta_{k}}({\bm \theta}_{k+1})\right] \\ &\overset{(a)}{\leq} (1-\lambda ) { G}\frac{2}{\eta_{t+1}} + (1-\lambda)\sum_{k=0}^{t}\mathbb{E}\frac{{\cal L}_{\delta_{k+1}}({\bm \theta}_{k}) - {\cal L}_{\delta_{k}}({\bm \theta}_{k})}{\eta_{k}} \\ &\overset{(b)}{\leq} (1-\lambda ) { G}\frac{2}{\eta_{t+1}} + (1-\lambda)(L_0+2 L_1{ G}) \sum_{k=0}^{t}\frac{\delta_{k}-\delta_{k+1}}{\eta_{k}} \\ &={\cal O}\left((1+t)^{\alpha}+(1+t)^{\alpha-\beta}\right)={\cal O}\left((1+t)^{\alpha}\right)\end{aligned}$$ where we apply summation by parts in inequality (a) as in Lemma [Lemma 4](#lem:bound_term_1){reference-type="ref" reference="lem:bound_term_1"}, and use the fact $|{\cal L}_{\delta_1}({\bm \theta})-{\cal L}_{\delta_2}({\bm \theta})|\leq\mathbb{E}_{w}|{\cal L}({\bm \theta}+\delta_1 w)-{\cal L}({\bm \theta}+\delta_2 w)|\leq (L_0+2 L_1{ G}) |\delta_1-\delta_2|$, a consequence of Lipschitz continuity, in inequality (b).
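The two growth orders in the bound of ${\bf I}_{5}(t)$ can be checked numerically: $2/\eta_{t+1}={\cal O}((1+t)^{\alpha})$ and $\sum_{k\leq t}(\delta_{k}-\delta_{k+1})/\eta_{k}={\cal O}((1+t)^{\alpha-\beta})$. A sketch with the illustrative values $\alpha=3/4$, $\beta=1/6$, $\eta_0=\delta_0=1$ (these specific numbers are our choice, not the paper's):

```python
import numpy as np

def i5_sums(t, alpha=0.75, beta=1.0 / 6.0, eta0=1.0, delta0=1.0):
    # eta_k = eta0 (1+k)^{-alpha}, delta_k = delta0 (1+k)^{-beta}
    k = np.arange(t + 1)
    eta = eta0 * (1.0 + k) ** (-alpha)
    delta = delta0 * (1.0 + k) ** (-beta)
    delta_next = delta0 * (2.0 + k) ** (-beta)
    lead = 2.0 / (eta0 * (2.0 + t) ** (-alpha))      # 2 / eta_{t+1}
    tail = float(((delta - delta_next) / eta).sum())  # sum_k (delta_k - delta_{k+1}) / eta_k
    return lead, tail
```

The test below uses the explicit constants $(2+t)^{\alpha}\leq 2(1+t)^{\alpha}$ and, by the mean value theorem, $(\delta_{k}-\delta_{k+1})/\eta_{k}\leq(\beta\delta_0/\eta_0)(1+k)^{\alpha-\beta-1}$, whose sum is at most $(\beta\delta_0/\eta_0)\big(1+\tfrac{1}{\alpha-\beta}\big)(1+t)^{\alpha-\beta}$.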
As for ${\bf I}_{6}(t)$, if we let ${\cal B}(t)\mathrel{\mathop:}=\frac{1}{1+t}\sum_{k=0}^{t}\mathbb{E}\left\Vert {\nabla}{\cal L}_{\delta_{k}}({\bm \theta}_{k}) \right\Vert^2$, by definition of $g_{k}^{(m)}$, we can split the term as follows $$\begin{aligned} &\mathbb{E}_{{\cal F}^{k-1}} \frac{d}{\delta_{k}} \left(\mathbb{E}_{Z\sim \Pi_{\check{{\bm \theta}}_k}}[\ell(\check{{\bm \theta}}_k; Z)|\check{{\bm \theta}}_{k}] - \mathbb{E}[\ell(\check{{\bm \theta}}_{k}^{(m)}; Z_{k}^{(m)})|\check{{\bm \theta}}_{k}^{(m)}, Z_{k}^{(0)}] \right) \\ &= \mathbb{E}_{{\cal F}^{k-1}}\frac{d}{\delta_{k}} \mathbb{E}_{Z\sim\Pi_{\check{{\bm \theta}}_{k}}} \left[ \ell(\check{{\bm \theta}}_k; Z) - \ell(\check{{\bm \theta}}_{k}^{(m)}; Z)|\check{{\bm \theta}}_{k}^{(m)}, \check{{\bm \theta}}_{k}\right] \\ &\quad + \mathbb{E}_{{\cal F}^{k-1}} \frac{d}{\delta_{k}} \left(\mathbb{E}_{Z\sim \Pi_{\check{{\bm \theta}}_k}}[\ell(\check{{\bm \theta}}_{k}^{(m)}; Z)|\check{{\bm \theta}}_{k}^{(m)}] - \mathbb{E}_{\Tilde{Z}_{k}^{(m)}}[\ell(\check{{\bm \theta}}_{k}^{(m)}; \Tilde{Z}_{k}^{(m)})|\check{{\bm \theta}}_{k}^{(m)}, \Tilde{Z}_{k}^{(0)}] \right) \\ &\quad + \mathbb{E}_{{\cal F}^{k-1}}\frac{d}{\delta_{k}} \left(\mathbb{E}_{\Tilde{Z}_{k}^{(m)}}[\ell(\check{{\bm \theta}}_{k}^{(m)}; \Tilde{Z}_{k}^{(m)})|\check{{\bm \theta}}_{k}^{(m)}, \Tilde{Z}_{k}^{(0)}] - \mathbb{E}[\ell(\check{{\bm \theta}}_{k}^{(m)}; Z_{k}^{(m)})|\check{{\bm \theta}}_{k}^{(m)}, Z_{k}^{(0)}] \right)\end{aligned}$$ By applying Jensen's inequality and the triangle inequality according to the above splitting, it holds that $$\begin{aligned} &\left\Vert \mathbb{E}_{{\cal F}^{k-1}}\mathbb{E}_{Z\sim\Pi_{\check{{\bm \theta}}_k}} [g_{\delta_{k}}({\bm \theta}_k;u_k,Z)]-g_{k}^{(m)} \right\Vert \\ =&\frac{d}{\delta_{k}}\left|\mathbb{E}_{{\cal F}^{k-1}} \mathbb{E}_{Z\sim \Pi_{\check{{\bm \theta}}_k}}[\ell(\check{{\bm \theta}}_k; Z)|\check{{\bm \theta}}_{k}] - \mathbb{E}[\ell(\check{{\bm \theta}}_{k}^{(m)}; Z_{k}^{(m)})|\check{{\bm
\theta}}_{k}^{(m)}, Z_{k}^{(0)}] \right| \\ \leq&\mathbb{E}_{{\cal F}^{k-1}}\frac{d}{\delta_{k}}\left| \mathbb{E}_{Z\sim \Pi_{\check{{\bm \theta}}_k}}[\ell(\check{{\bm \theta}}_k; Z)|\check{{\bm \theta}}_{k}] - \mathbb{E}[\ell(\check{{\bm \theta}}_{k}^{(m)}; Z_{k}^{(m)})|\check{{\bm \theta}}_{k}^{(m)}, Z_{k}^{(0)}] \right| \\ \leq& \mathbb{E}_{{\cal F}^{k-1}} \frac{d}{\delta_{k}}\left|\mathbb{E}_{Z\sim\Pi_{\check{{\bm \theta}}_{k}}} \left[ \ell(\check{{\bm \theta}}_k; Z) - \ell(\check{{\bm \theta}}_{k}^{(m)}; Z)|\check{{\bm \theta}}_{k}^{(m)}, \check{{\bm \theta}}_{k}\right]\right| \\ +& \mathbb{E}_{{\cal F}^{k-1}} \frac{d}{\delta_{k}} \left|\mathbb{E}_{Z\sim \Pi_{\check{{\bm \theta}}_k}}[\ell(\check{{\bm \theta}}_{k}^{(m)}; Z)|\check{{\bm \theta}}_{k}^{(m)}] - \mathbb{E}_{\Tilde{Z}_{k}^{(m)}}[\ell(\check{{\bm \theta}}_{k}^{(m)}; \Tilde{Z}_{k}^{(m)})|\check{{\bm \theta}}_{k}^{(m)}, \Tilde{Z}_{k}^{(0)}] \right| \\ +& \mathbb{E}_{{\cal F}^{k-1}}\frac{d}{\delta_{k}} \left|\mathbb{E}_{\Tilde{Z}_{k}^{(m)}}[\ell(\check{{\bm \theta}}_{k}^{(m)}; \Tilde{Z}_{k}^{(m)})|\check{{\bm \theta}}_{k}^{(m)}, \Tilde{Z}_{k}^{(0)}] - \mathbb{E}[\ell(\check{{\bm \theta}}_{k}^{(m)}; Z_{k}^{(m)})|\check{{\bm \theta}}_{k}^{(m)}, Z_{k}^{(0)}] \right| \\ \overset{(c)}{\leq} &\frac{d}{\delta_{k}}\mathbb{E}_{{\cal F}^{k-1}}L_0\left\Vert \check{{\bm \theta}}_{k}^{(m)}-\check{{\bm \theta}}_{k} \right\Vert \\ + &\frac{2d{ G}}{\delta_{k}}\mathbb{E}_{{\cal F}^{k-1}} \boldsymbol{\delta}_{\text{TV}}\left(\Pi_{{\bm \theta}_{k}}, \mathbb{P}(\hat{Z}_{k}^{(m)}\in\cdot|\check{{\bm \theta}}_{k}^{(0)},\hat{Z}_{k}^{(0)}) \right)\\ + & \frac{2d{ G}}{\delta_{k}}\mathbb{E}_{{\cal F}^{k-1}} \boldsymbol{\delta}_{\text{TV}}\left(\mathbb{P}(\hat{Z}_{k}^{(m)}\in\cdot|\check{{\bm \theta}}_{k}^{(0)},\hat{Z}_{k}^{(0)}), \mathbb{P}(Z_{k}^{(m)}\in\cdot|\check{{\bm \theta}}_{k}^{(0)},Z_{k}^{(0)}) \right) \\ \overset{(d)}{\leq} & \frac{d L_0}{\delta_{k}} \mathbb{E}_{{\cal F}^{k-1}}\left\Vert \check{{\bm 
\theta}}_{k}^{(m)}-\check{{\bm \theta}}_{k} \right\Vert+\frac{2d{ G}}{\delta_{k}}M\rho^{m}+\frac{2d L_2{ G}}{\delta_{k}}\mathbb{E}_{{\cal F}^{k-1}}\sum_{\ell=1}^{m-1}\left\Vert \check{{\bm \theta}}_{k}^{(\ell)}-\check{{\bm \theta}}_{k} \right\Vert \\ \leq & \frac{d L_0}{\delta_{k}} d{ G}\sum_{j=1}^{m-1}\lambda^{\tau_{k}-j}\frac{\eta_{k}}{\delta_{k}}+\frac{2d{ G}M}{\delta_{k}}\rho^{m}+\frac{2d L_{2}{ G}}{\delta_{k}}d{ G}\sum_{\ell=1}^{m-1}\sum_{j=1}^{\ell-1}\lambda^{\tau_{k}-j}\frac{\eta_{k}}{\delta_{k}} \\ < & d^2 L_0{ G}\frac{\eta_{k}}{\delta_{k}^2}\frac{\lambda^{\tau_{k}-m+1}}{1-\lambda}+\frac{2d{ G}M}{\delta_{k}}\rho^{m}+2d^2 L_2{ G}^2\frac{\eta_{k}}{\delta_{k}^2}\frac{\lambda^{\tau_{k}-m+2}}{(1-\lambda)^2}\end{aligned}$$ where inequality (c) is due to Lipschitzness of decoupled risk, inequality (d) is due to Assumption [\[assu:FastMixing\]](#assu:FastMixing){reference-type="ref" reference="assu:FastMixing"} and Lemma [Lemma 12](#lem:tv_summation_bound){reference-type="ref" reference="lem:tv_summation_bound"} (a consequence of Assumption [\[assu:smooth_kernel\]](#assu:smooth_kernel){reference-type="ref" reference="assu:smooth_kernel"}). 
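When this per-iterate bias bound is summed over $m=1,\dots,\tau_k$ with weights $\lambda^{\tau_k-m}$, only two elementary estimates are needed: the truncated geometric sum $\sum_{m=1}^{\tau}\lambda^{\tau-m}<1/(1-\lambda)$ and $\sum_{m=1}^{\tau}\rho^m\lambda^{\tau-m}\leq\tau\max\{\rho,\lambda\}^{\tau}$. A brief numerical sanity check (Python, with illustrative values of $\lambda,\rho,\tau$):

```python
def check_geometric_sums(lam, rho, tau):
    """Verify, for lam, rho in (0, 1):
      (i)  sum_{m=1}^{tau} lam^(tau-m)          <  1/(1-lam)
      (ii) sum_{m=1}^{tau} rho^m * lam^(tau-m)  <= tau * max(rho, lam)^tau
    (ii) holds term by term: rho^m * lam^(tau-m) <= max(rho, lam)^tau."""
    s1 = sum(lam ** (tau - m) for m in range(1, tau + 1))
    s2 = sum(rho ** m * lam ** (tau - m) for m in range(1, tau + 1))
    r = max(rho, lam)
    return s1 < 1 / (1 - lam) and s2 <= tau * r ** tau + 1e-15

print(all(check_geometric_sums(lam, rho, tau=30)
          for lam in (0.3, 0.9) for rho in (0.2, 0.95)))  # True
```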
Given $\tau_{k}\geq\frac{\log(1+k)}{\log 1/\max\{\rho,\lambda\}}$, then the following deterministic bound holds for all $k>0$, $$\begin{aligned} & \mathbb{E}_{{\cal F}^{k-1}}\sum_{m=1}^{\tau_{k}}\lambda^{\tau_{k}-m}\left\Vert \mathbb{E}_{Z\sim\Pi_{\check{{\bm \theta}}_k}} [g_{\delta_{k}}({\bm \theta}_k;u_k,Z)]-g_{k}^{(m)} \right\Vert \\ \leq & d^2 L_0{ G}\frac{\eta_{k}}{\delta_{k}^2}\frac{\lambda}{1-\lambda}\sum_{m=1}^{\tau_{k}}\lambda^{\tau_{k}-m}+2d^2 L_2{ G}^2\frac{\eta_{k}}{\delta_{k}^2}\frac{\lambda^2}{1-\lambda}\sum_{m=1}^{\tau_{k}}\lambda^{\tau_{k}-m} \\ + & 2d{ G}M/\delta_{k}\sum_{m=1}^{\tau_{k}}\rho^m\lambda^{\tau_{k}-m} \\ < & d^2 \frac{\lambda}{(1-\lambda)^2} L_0{ G}\frac{\eta_{k}}{\delta_{k}^2}+2d^2 \frac{\lambda^2}{(1-\lambda)^2} L_2{ G}^2\frac{\eta_{k}}{\delta_{k}^2}+2d { G}M/\delta_{k}\sum_{m=1}^{\tau_{k}}\max\{\rho,\lambda\}^{\tau_{k}} \\ \leq & d^2 \frac{\lambda}{(1-\lambda)^2} L_0{ G}\frac{\eta_{k}}{\delta_{k}^2}+2d^2 \frac{\lambda^2}{(1-\lambda)^2} L_2{ G}^2\frac{\eta_{k}}{\delta_{k}^2}+2d { G}M \frac{\tau_{k}}{(1+k)\delta_{k}}\end{aligned}$$ So for sufficiently large $t$, it holds that $$\begin{aligned} {\bf I}_{6}(t)&\leq (1-\lambda)\sum_{k=0}^{t}\mathbb{E}\left\Vert {\nabla}{\cal L}({\bm \theta}_{k}) \right\Vert\mathbb{E}_{{\cal F}^{k-1}}\left\Vert \sum_{m=1}^{\tau_{k}}\lambda^{\tau_{k}-m}\mathbb{E}_{Z\sim\Pi_{\check{{\bm \theta}}_k}} [g_{\delta_{k}}({\bm \theta}_k;u_k,Z)]-g_{k}^{(m)} \right\Vert \\ &\leq (1-\lambda)\sum_{k=0}^{t}\mathbb{E}\left\Vert {\nabla}{\cal L}({\bm \theta}_{k}) \right\Vert\sum_{m=1}^{\tau_{k}}\lambda^{\tau_{k}-m}\mathbb{E}_{{\cal F}^{k-1}}\left\Vert \mathbb{E}_{Z\sim\Pi_{\check{{\bm \theta}}_k}} [g_{\delta_{k}}({\bm \theta}_k;u_k,Z)]-g_{k}^{(m)} \right\Vert \\ &\leq \sum_{k=0}^{t}\mathbb{E}\left\Vert {\nabla}{\cal L}({\bm \theta}_{k}) \right\Vert d^2 \frac{\lambda}{1-\lambda}((L_0+2 L_1{ G}) { G}+2 L_1{ G}+2\lambda L_2{ G}^2)\frac{\eta_{k}}{\delta_{k}^2} \\ &+\sum_{k=0}^{t}\mathbb{E}\left\Vert {\nabla}{\cal L}({\bm 
\theta}_{k}) \right\Vert 2d { G}M \frac{\tau_{k}}{(1+k)\delta_{k}} \\ &\leq\sum_{k=0}^{t}\mathbb{E}\left\Vert {\nabla}{\cal L}({\bm \theta}_{k}) \right\Vert d^2 \frac{2\lambda}{(1-\lambda)^2}(L_0 { G}+2 L_1{ G}+2\lambda L_2{ G}^2)\frac{\eta_{k}}{\delta_{k}^2} \\ &= d^2 \frac{2\lambda}{(1-\lambda)^2}( L_0{ G}+2 L_1{ G}+2\lambda L_2{ G}^2) \sum_{k=0}^{t}\mathbb{E}\left\Vert {\nabla}{\cal L}({\bm \theta}_{k}) \right\Vert\frac{\eta_{k}}{\delta_{k}^2} \\ &\leq d^2 \frac{2\lambda}{(1-\lambda)^2}(L_0 { G}+2 L_1{ G}+2\lambda L_2{ G}^2) \Big(\sum_{k=0}^{t}\mathbb{E}\left\Vert {\nabla}{\cal L}({\bm \theta}_{k}) \right\Vert^2\Big)^{1/2} \Big(\sum_{k=0}^{t}\frac{\eta_{k}^2}{\delta_{k}^4}\Big)^{1/2} \\ &\leq c_9 d^2 {\cal B}(t)^{1/2} (1+t)^{\frac{1}{2}+\frac{1}{2}-(\alpha-2\beta)}\end{aligned}$$ Therefore, there exists a constant $c_9>0$ such that $${\bf I}_{6}(t) \leq c_9 d^2 {{\cal B}(t)}^{1/2}(1+t)^{1-(\alpha-2\beta)}$$ where there is an extra $\beta/2$ in the exponent because the $L$ in $c_2$ is now a variable $d (L_0+2 L_1{ G}) /\delta_{k}$. Since ${\cal L}({\bm \theta})$ is $(L_0+2 L_1{ G})$-Lipschitz continuous, for all $\delta>0$ it holds that $\left\Vert {\nabla}{\cal L}_{\delta}({\bm \theta}) \right\Vert\leq (L_0+2 L_1{ G})$. Given $\tau_{k}\geq\frac{\log(1+k)}{\log 1/\max\{\rho,\lambda\}}$, it holds that $\lambda^{\tau_{k}}\mathbb{E}\left\Vert {\nabla}{\cal L}_{\delta_{k}}({\bm \theta}_{k}) \right\Vert^2\leq \frac{d L^2}{\delta_{0}(1+k)}$, so ${\bf I}_{7}(t)$ can be bounded as follows: $${\bf I}_{7}(t) \leq \frac{d L^2}{\delta_{0}}\sum_{k=0}^{t} (1+k)^{-1}={\cal O}(\log(1+t))$$ The term ${\bf I}_{8}(t)$ is handled similarly to ${\bf I}_{4}(t)$. 
For all $0\leq k\leq t$ and $1\leq m\leq \tau_{k}$, it holds that $\left\Vert g_{k}^{(m)} \right\Vert\leq\frac{d { G}}{\delta_{k}}$, which implies $$\begin{aligned} {\bf I}_{8}(t)&\leq (1-\lambda)\frac{d (L_0 + 2 L_1 { G})}{2}\sum_{k=0}^{t}\frac{\eta_{k}}{\delta_{k}}\mathbb{E}\left\Vert \sum_{m=1}^{\tau_{k}}\lambda^{\tau_{k}-m}g_{k}^{(m)} \right\Vert^2 \\ &\leq (1-\lambda)\frac{d (L_0 + 2 L_1 { G})}{2}\sum_{k=0}^{t}\frac{\eta_{k}}{\delta_{k}}\mathbb{E}\Big(\sum_{m=1}^{\tau_{k}}\lambda^{\tau_{k}-m}\left\Vert g_{k}^{(m)} \right\Vert\Big)^2 \\ &\leq (1-\lambda)\frac{d (L_0 + 2 L_1 { G})}{2}\sum_{k=0}^{t}\frac{\eta_{k}}{\delta_{k}}\Big(\sum_{m=1}^{\tau_{k}}\lambda^{\tau_{k}-m}\frac{d { G}}{\delta_{k}}\Big)^2 \\ &= (1-\lambda)\frac{d^3 (L_0 + 2 L_1 { G}) { G}^2}{2}\sum_{k=0}^{t}\frac{\eta_{k}}{\delta_{k}^3}\Big(\sum_{m=1}^{\tau_{k}}\lambda^{\tau_{k}-m}\Big)^2 \\ &\leq \frac{d^3 (L_0 + 2 L_1 { G}) { G}^2}{2(1-\lambda)}\sum_{k=0}^{t}\frac{\eta_{k}}{\delta_{k}^3}\leq c_{10} (1+t)^{1-\left(\alpha-3\beta\right)}\end{aligned}$$ where $c_{10}>0$ is a constant hiding the factor $\frac{\eta_{0}}{\delta_{0}^3}$. Applying the quadratic technique in Lemma [Lemma 3](#lem:major_bound){reference-type="ref" reference="lem:major_bound"}, for all $\alpha,\beta$ satisfying $0<3\beta<\alpha<1$ it is clear that only ${\bf I}_{5}(t)$ and ${\bf I}_{8}(t)$ contribute to the asymptotic rate, so for all $t\geq t_4$ (for some constant $t_4>0$), we have $$\frac{1}{1+T}\sum_{k=0}^{T}\mathbb{E}\left\Vert {\nabla}{{\cal L}_{\delta_{k}}({\bm \theta}_k)} \right\Vert^2 ={\cal O}(T^{-\min\{1-\alpha, \alpha-3\beta\}})$$ The error estimate directly follows from Lemma [Lemma 8](#lem:composite_lip){reference-type="ref" reference="lem:composite_lip"}. ◻ *Remark 1*. Note that Corollary [Corollary 2](#cor:non-smooth){reference-type="ref" reference="cor:non-smooth"} follows directly from Theorem [Theorem 1](#thm:non-smooth){reference-type="ref" reference="thm:non-smooth"} by setting $\alpha=3/4$ and $\beta=1/6$. 
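As a small illustration of the exponent bookkeeping in Remark 1, the snippet below (Python, exact rational arithmetic) confirms that the choice $\alpha=3/4$, $\beta=1/6$ balances the two terms in $\min\{1-\alpha,\alpha-3\beta\}$ and yields the rate exponent $1/4$:

```python
from fractions import Fraction

def rate_exponent(alpha, beta):
    """Exponent in the bound O(T^(-min{1-alpha, alpha-3*beta}))."""
    return min(1 - alpha, alpha - 3 * beta)

alpha, beta = Fraction(3, 4), Fraction(1, 6)
print(rate_exponent(alpha, beta))      # 1/4
print(1 - alpha == alpha - 3 * beta)   # True: the two exponents balance
```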
# Auxiliary Lemmas **Lemma 9**. ***(Smoothing)** For continuous ${\cal L}({\bm \theta}):\mathbb{R}^d\rightarrow\mathbb{R}$, its smoothed approximation ${\cal L}_{\delta}({\bm \theta})\mathrel{\mathop:}=\mathbb{E}_{w\sim \sf \text{Unif}(\mathbb{B}^{d})}[{\cal L}({\bm \theta}+\delta w)]$ is differentiable, and it holds that $$\mathbb{E}_{\substack{u\sim \sf \text{Unif}(\mathbb{S}^{d-1}),\\ Z\sim \Pi_{{\bm \theta}+\delta u}}} [g_{\delta}({\bm \theta}; u, Z)]={\nabla}{\cal L}_{\delta}({\bm \theta})$$ Moreover, if ${\cal L}({\bm \theta})$ is $\bar{L}$-Lipschitz continuous, then ${\cal L}_{\delta}({\bm \theta})$ is $\frac{d}{\delta} \bar{L}$-smooth.* *Proof.* The first fact follows from the (generalized) Stokes theorem. Given continuous ${\cal L}({\bm \theta})$, it holds that $$\begin{aligned} \label{eqn:stokes} {\nabla}\int_{\delta\mathbb{B}^{d}}{\cal L}({\bm \theta}+ v)dv = \int_{\delta \mathbb{S}^{d-1}}{\cal L}({\bm \theta}+ r) \frac{r}{\left\Vert r \right\Vert}dr\end{aligned}$$ Observe that the RHS of Equation ([\[eqn:stokes\]](#eqn:stokes){reference-type="ref" reference="eqn:stokes"}) is continuous in ${\bm \theta}$, which implies ${\cal L}_{\delta}({\bm \theta})=\frac{1}{{\sf vol}(\delta\mathbb{B}^{d})}\int_{\delta\mathbb{B}^{d}}{\cal L}({\bm \theta}+ v)dv$ is differentiable. 
Note that the volume to surface area ratio of $\delta\mathbb{B}^{d}$ is $\delta/d$, so it follows from Equation ([\[eqn:stokes\]](#eqn:stokes){reference-type="ref" reference="eqn:stokes"}) that $$\begin{aligned} {\nabla}{\cal L}_{\delta}({\bm \theta})&=\frac{{\sf vol}(\delta\mathbb{S}^{d-1})}{{\sf vol}(\delta\mathbb{B}^{d})}\int_{\delta \mathbb{S}^{d-1}}{\cal L}({\bm \theta}+ r) \frac{r}{{\sf vol}(\delta\mathbb{S}^{d-1})\left\Vert r \right\Vert}dr =\frac{d}{\delta}\mathbb{E}_{u\sim \sf \text{Unif}(\mathbb{S}^{d-1})}[{\cal L}({\bm \theta}+\delta u)u] \\ &=\mathbb{E}_{u\sim \sf \text{Unif}(\mathbb{S}^{d-1})}\mathbb{E}_{Z\sim\pi_{{\bm \theta}+\delta u}}[\frac{d}{\delta}\ell({\bm \theta}+\delta u;Z)u] =\mathbb{E}_{\substack{u\sim \sf \text{Unif}(\mathbb{S}^{d-1}),\\ Z\sim \Pi_{{\bm \theta}+\delta u}}} [g_{\delta}({\bm \theta}; u, Z)]\end{aligned}$$ where we use the definition of $g_{\delta}({\bm \theta};u,z)$ in the last equality. If further assuming ${\cal L}({\bm \theta})$ is $\bar{L}$-Lipschitz continuous, then we obtain $$\begin{aligned} \left\Vert {\nabla}{\cal L}_{\delta}({\bm \theta}_1)-{\nabla}{\cal L}_{\delta}({\bm \theta}_2) \right\Vert &=\frac{d}{\delta}\cdot \left\Vert \frac{1}{{\sf vol}(\mathbb{S}^{d-1})}\int_{\mathbb{S}^{d-1}} \left[{\cal L}({\bm \theta}_1+\delta u)-{\cal L}({\bm \theta}_2+\delta u)\right] u du \right\Vert \\ &\leq \frac{d}{\delta}\cdot \bar{L} \left\Vert {\bm \theta}_1-{\bm \theta}_2 \right\Vert.\end{aligned}$$ ◻ **Lemma 10**. 
***(${\cal O}(\delta)$-Biased Gradient Estimation)**[\[lem:BoundedBias\]]{#lem:BoundedBias label="lem:BoundedBias"}* *Under Assumption [\[assu:Lip\]](#assu:Lip){reference-type="ref" reference="assu:Lip"}, for a fixed proximity parameter $\delta>0$, it holds that $$\begin{aligned} \left\Vert \mathbb{E}_{\substack{u\sim \sf \text{Unif}(\mathbb{S}^{d-1}),\\ Z\sim \Pi_{{\bm \theta}+\delta u}}} [g_{\delta}({\bm \theta}; u, Z)] - {\nabla}{\cal L}({\bm \theta}) \right\Vert &= \left\Vert {\nabla}{\cal L}_{\delta}({\bm \theta}) - {\nabla}{\cal L}({\bm \theta}) \right\Vert \leq \delta L\end{aligned}$$* *Proof.* By Lemma [Lemma 9](#lem:UnbiasedGrad){reference-type="ref" reference="lem:UnbiasedGrad"}, we have $$\mathbb{E}_{\substack{u\sim \sf \text{Unif}(\mathbb{S}^{d-1}),\\ Z\sim\Pi_{{\bm \theta}+\delta u}}} [g_{\delta}({\bm \theta};u,Z)] = {\nabla}{\cal L}_{\delta}({\bm \theta})$$ Note that when ${\cal L}({\bm \theta})$ is differentiable, we have $${\nabla}{\cal L}_{\delta}({\bm \theta})= {\nabla}\left[\mathbb{E}_{w\sim\sf \text{Unif}(\mathbb{B}^{d})}{\cal L}({\bm \theta}+\delta w)\right]=\mathbb{E}_{w\sim\sf \text{Unif}(\mathbb{B}^{d})}{\nabla}{\cal L}({\bm \theta}+\delta w)$$ Then under Assumption [\[assu:Lip\]](#assu:Lip){reference-type="ref" reference="assu:Lip"}, by linearity of expectation and Jensen's inequality, it holds that $$\begin{aligned} \left\Vert {\nabla}{\cal L}_{\delta}({\bm \theta})-{\nabla}{\cal L}({\bm \theta}) \right\Vert=\left\Vert \mathbb{E}_{w\sim \sf \text{Unif}(\mathbb{B}^{d})}[{\nabla}{\cal L}({\bm \theta}+\delta w) - {\nabla}{\cal L}({\bm \theta})] \right\Vert \leq \delta L.\end{aligned}$$ ◻ **Corollary 3**. *Under Assumption [\[assu:Lip\]](#assu:Lip){reference-type="ref" reference="assu:Lip"} and [\[assu:BoundLoss\]](#assu:BoundLoss){reference-type="ref" reference="assu:BoundLoss"}, for all ${\bm \theta}\in\mathbb{R}^d$, it holds that $$\left\Vert {\nabla}{\cal L}({\bm \theta}) \right\Vert\leq 2\sqrt{LG}$$* *Proof.* Omitted. ◻ **Lemma 11**. 
***(Lipschitz Continuity of Decoupled Risk)**[\[lem:lip_decoupled_risk\]]{#lem:lip_decoupled_risk label="lem:lip_decoupled_risk"} Under Assumption [\[assu:Lip\]](#assu:Lip){reference-type="ref" reference="assu:Lip"}, [\[assu:BoundLoss\]](#assu:BoundLoss){reference-type="ref" reference="assu:BoundLoss"} and [\[assu:smooth_dist\]](#assu:smooth_dist){reference-type="ref" reference="assu:smooth_dist"}, it holds that $$\left|\mathbb{E}_{Z\sim \Pi_{{\bm \theta}_2}}\left[\ell({\bm \theta}_1;Z)-\ell({\bm \theta}_2;Z)\right]\right|\leq 2({ G}L_1+\sqrt{L{ G}})\left\Vert {\bm \theta}_1-{\bm \theta}_2 \right\Vert+\frac{L}{2}\left\Vert {\bm \theta}_1-{\bm \theta}_2 \right\Vert^2$$* *Proof.* Let ${\cal L}({\bm \theta}_1,{\bm \theta}_2)\mathrel{\mathop:}=\mathbb{E}_{Z\sim\Pi_{{\bm \theta}_2}}\ell({\bm \theta}_1;Z)$ denote the decoupled performative risk, then we have $$\begin{aligned} \text{LHS} &=\left|{\cal L}({\bm \theta}_1,{\bm \theta}_2)-{\cal L}({\bm \theta}_2, {\bm \theta}_2)\right| \\ &\leq \left|{\cal L}({\bm \theta}_1)-{\cal L}({\bm \theta}_2)\right| + \left|{\cal L}({\bm \theta}_1,{\bm \theta}_2)-{\cal L}({\bm \theta}_1,{\bm \theta}_1)\right| \\ &\leq \left|{\cal L}({\bm \theta}_1)-{\cal L}({\bm \theta}_2)-\left\langle{\nabla}{\cal L}({\bm \theta}_2)\,|\,{\bm \theta}_1-{\bm \theta}_2\right\rangle\right|+\left|\left\langle{\nabla}{\cal L}({\bm \theta}_2)\,|\,{\bm \theta}_1-{\bm \theta}_2\right\rangle\right| +\left|{\cal L}({\bm \theta}_1,{\bm \theta}_1)-{\cal L}({\bm \theta}_1, {\bm \theta}_2)\right| \\ &\overset{(a)}{\leq}\frac{L}{2}\left\Vert {\bm \theta}_1-{\bm \theta}_2 \right\Vert^2+\left|\left\langle{\nabla}{\cal L}({\bm \theta}_2)\,|\,{\bm \theta}_1-{\bm \theta}_2\right\rangle\right| +\left|{\cal L}({\bm \theta}_1,{\bm \theta}_1)-{\cal L}({\bm \theta}_1, {\bm \theta}_2)\right| \\ &\overset{(b)}{\leq}\frac{L}{2}\left\Vert {\bm \theta}_1-{\bm \theta}_2 \right\Vert^2+2\sqrt{L{ G}}\left\Vert {\bm \theta}_1-{\bm \theta}_2 \right\Vert +\left|{\cal L}({\bm 
\theta}_1,{\bm \theta}_1)-{\cal L}({\bm \theta}_1, {\bm \theta}_2)\right| \\ &=\frac{L}{2}\left\Vert {\bm \theta}_1-{\bm \theta}_2 \right\Vert^2+2\sqrt{ L{ G}}\left\Vert {\bm \theta}_1-{\bm \theta}_2 \right\Vert +\left|\int\ell({\bm \theta}_1;z)\left(\Pi_{{\bm \theta}_1}(z)-\Pi_{{\bm \theta}_2}(z)\right)dz\right| \\ &\overset{(c)}{\leq}\frac{L}{2}\left\Vert {\bm \theta}_1-{\bm \theta}_2 \right\Vert^2+2\sqrt{L{ G}}\left\Vert {\bm \theta}_1-{\bm \theta}_2 \right\Vert +2{ G}\boldsymbol{\delta}_{\text{TV}}\left(\Pi_{{\bm \theta}_1}, \Pi_{{\bm \theta}_2} \right) \\ &\leq\frac{L}{2}\left\Vert {\bm \theta}_1-{\bm \theta}_2 \right\Vert^2+2\sqrt{L{ G}}\left\Vert {\bm \theta}_1-{\bm \theta}_2 \right\Vert +2{ G}L_1\left\Vert {\bm \theta}_1-{\bm \theta}_2 \right\Vert \\ &=2\left(\sqrt{L{ G}} +{ G}L_1\right)\left\Vert {\bm \theta}_1-{\bm \theta}_2 \right\Vert+\frac{L}{2}\left\Vert {\bm \theta}_1-{\bm \theta}_2 \right\Vert^2\end{aligned}$$ where we use Assumption [\[assu:Lip\]](#assu:Lip){reference-type="ref" reference="assu:Lip"} in inequality (a), Corollary [Corollary 3](#cor:bdd_grad){reference-type="ref" reference="cor:bdd_grad"} in inequality (b), Assumption [\[assu:BoundLoss\]](#assu:BoundLoss){reference-type="ref" reference="assu:BoundLoss"} in inequality (c), and Assumption [\[assu:smooth_dist\]](#assu:smooth_dist){reference-type="ref" reference="assu:smooth_dist"} in the last inequality. ◻ **Lemma 12**. 
*Under Assumption [\[assu:smooth_kernel\]](#assu:smooth_kernel){reference-type="ref" reference="assu:smooth_kernel"}, it holds that for all $0\leq\ell\leq m$, $m\geq 1$ $$\boldsymbol{\delta}_{\text{TV}}\left(\mathbb{P}(Z_{k}^{(\ell+1)}\in \cdot|Z_{k}^{(0)}), \mathbb{P}(\tilde{Z}_{k}^{(\ell +1)}\in \cdot|Z_{k}^{(0)}) \right) \leq L_2 \left\Vert \check{{\bm \theta}}_{k}^{(\ell)} -\check{{\bm \theta}}_k \right\Vert + \boldsymbol{\delta}_{\text{TV}}\left(\mathbb{P}(Z_{k}^{(\ell)} \in \cdot|Z_{k}^{(0)}), \mathbb{P}(\Tilde{Z}_{k}^{(\ell)}\in \cdot|Z_{k}^{(0)}) \right)$$ Unfolding the above recursion leads to the following inequality: $$\boldsymbol{\delta}_{\text{TV}}\left(\mathbb{P}(Z_{k}^{(m)}\in \cdot|Z_{k}^{(0)}), \mathbb{P}(\tilde{Z}_{k}^{(m)}\in \cdot|Z_{k}^{(0)}) \right) \leq L_2 \sum_{\ell=1}^{m-1} \left\Vert \check{{\bm \theta}}_{k}^{(\ell)} -\check{{\bm \theta}}_{k} \right\Vert, \quad \forall m\geq 1.$$* *Proof.* Recalling the notation $\check{{\bm \theta}}_{k}^{(\ell)}={\bm \theta}_{k}^{(\ell)}+\delta_{k}u_{k}$, $\check{{\bm \theta}}_{k} = {\bm \theta}_{k}+\delta_{k}u_{k}$, and the fact that $Z_{k}=Z_{k}^{(0)}=\Tilde{Z}_{k}^{(0)}$, we have $$\begin{aligned} 2\cdot\text{LHS} &= \int_{{\sf Z}} \left| \mathbb{P}(Z_{k}^{(\ell+1)}=z|Z_{k}^{(0)}) - \mathbb{P}(\Tilde{Z}_{k}^{(\ell+1)}=z|Z_{k}^{(0)})\right| dz \\ &= \int_{{\sf Z}} \left| \int_{{\sf Z}} \mathbb{P}({Z}_{k}^{(\ell)}=z^\prime, Z_{k}^{(\ell+1)} = z|Z_{k}^{(0)}) - \mathbb{P}(\Tilde{Z}_{k}^{(\ell)}=z^\prime, \Tilde{Z}_{k}^{(\ell+1)} = z|Z_{k}^{(0)}) dz^\prime \right| dz \\ &\leq \int_{\sf Z}\int_{\sf Z} \left| \mathbb{T}_{\check{{\bm \theta}}_k^{(\ell)}}(z^\prime, z) \mathbb{P}(Z_{k}^{(\ell)} = z^\prime|Z_{k}^{(0)}) - \mathbb{T}_{\check{{\bm \theta}}_{k}}(z^\prime, z) \mathbb{P}(\Tilde{Z}_{k}^{(\ell)} = z^\prime|Z_{k}^{(0)}) \right| dz^\prime dz \\ &\leq \int_{\sf Z}\int_{\sf Z} \left| \mathbb{T}_{\check{{\bm \theta}}_{k}^{(\ell)}}(z^\prime, z) \mathbb{P}(Z_{k}^{(\ell)} = z^\prime|Z_{k}^{(0)}) - 
\mathbb{T}_{\check{{\bm \theta}}_{k}}(z^\prime, z) \mathbb{P}(Z_{k}^{(\ell)} = z^\prime|Z_{k}^{(0)}) \right| dz^\prime dz \\ &\quad + \int_{\sf Z}\int_{\sf Z} \left| \mathbb{T}_{\check{{\bm \theta}}_k}(z^\prime, z) \mathbb{P}(Z_{k}^{(\ell)} = z^\prime|Z_{k}^{(0)}) - \mathbb{T}_{\check{{\bm \theta}}_{k}}(z^\prime, z) \mathbb{P}(\Tilde{Z}_{k}^{(\ell)} = z^\prime|Z_{k}^{(0)}) \right| dz^\prime dz \\ &\overset{(a)}{=} \int_{\sf Z} \mathbb{P}(Z_{k}^{(\ell)}=z^\prime|Z_{k}^{(0)}) \int_{\sf Z} \left| \mathbb{T}_{\check{{\bm \theta}}_k}(z^\prime, z) - \mathbb{T}_{\check{{\bm \theta}}_{k}^{(\ell)}}(z^\prime, z)\right| dzdz^\prime \\ &\quad + \int_{\sf Z}\left[\int_{\sf Z} \mathbb{T}_{\check{{\bm \theta}}_k}(z^\prime, z)dz\right] \left| \mathbb{P}(Z_{k}^{(\ell)}=z^\prime|Z_{k}^{(0)}) - \mathbb{P}(\Tilde{Z}_{k}^{(\ell)} = z^\prime|Z_{k}^{(0)})\right| dz^\prime \\ &\leq \int_{\sf Z} \mathbb{P}(Z_{k}^{(\ell)}=z^\prime|Z_{k}^{(0)}) \cdot 2 \boldsymbol{\delta}_{\text{TV}}\left(\mathbb{T}_{\check{{\bm \theta}}_{k}}(z^\prime, \cdot), \mathbb{T}_{\check{{\bm \theta}}_{k}^{(\ell)}}(z^\prime, \cdot) \right) dz^\prime + 2\boldsymbol{\delta}_{\text{TV}}\left(\mathbb{P}(Z_{k}^{(\ell)} \in \cdot|Z_{k}^{(0)}), \mathbb{P}(\Tilde{Z}_{k}^{(\ell)}\in \cdot|Z_{k}^{(0)}) \right) \\ &\leq 2\int_{\sf Z} \mathbb{P}(Z_{k}^{(\ell)} =z^\prime|Z_{k}^{(0)}) dz^\prime \cdot L_2 \left\Vert \check{{\bm \theta}}_{k}^{(\ell)} - \check{{\bm \theta}}_{k} \right\Vert + 2\boldsymbol{\delta}_{\text{TV}}\left(\mathbb{P}(Z_{k}^{(\ell)} \in \cdot|Z_{k}^{(0)}), \mathbb{P}(\Tilde{Z}_{k}^{(\ell)} \in \cdot|Z_{k}^{(0)}) \right) \\ &= 2\left[ L_2 \left\Vert \check{{\bm \theta}}_{k}^{(\ell)} -\check{{\bm \theta}}_k \right\Vert + \boldsymbol{\delta}_{\text{TV}}\left(\mathbb{P}(Z_{k}^{(\ell)} \in \cdot|Z_{k}^{(0)}), \mathbb{P}(\Tilde{Z}_{k}^{(\ell)}\in \cdot|Z_{k}^{(0)}) \right) \right]=2\cdot\text{RHS}\end{aligned}$$ where equality (a) holds due to the (absolutely) integrable condition (which automatically holds for probability density 
functions and kernels), and Assumption [\[assu:smooth_kernel\]](#assu:smooth_kernel){reference-type="ref" reference="assu:smooth_kernel"} is used in the last inequality. ◻ **Assumption 7**. Assume that there exist constants $L_0, L_1$ such that: 1. [\[assu:replace1\]]{#assu:replace1 label="assu:replace1"} $|\ell({\bm \theta};z)-\ell({\bm \theta};z^\prime)|\leq L_0\left\Vert z-z^{\prime} \right\Vert$ for any ${\bm \theta}\in \mathbb{R}^d$, $z,z^{\prime} \in {\sf Z}$, 2. [\[assu:replace2\]]{#assu:replace2 label="assu:replace2"} $W_1(\Pi_{{\bm \theta}},\Pi_{{\bm \theta}^{\prime}})\leq L_1\left\Vert {\bm \theta}-{\bm \theta}^{\prime} \right\Vert$ for any ${\bm \theta}, {\bm \theta}^{\prime} \in \mathbb{R}^d$, where $W_1(\Pi,\Pi')$ denotes the Wasserstein-1 distance between the distributions $\Pi, \Pi'$. We observe that a similar result to Lemma [\[lem:lip_decoupled_risk\]](#lem:lip_decoupled_risk){reference-type="ref" reference="lem:lip_decoupled_risk"} can be proven by replacing Assumption [\[assu:smooth_dist\]](#assu:smooth_dist){reference-type="ref" reference="assu:smooth_dist"} with Assumption [Assumption 7](#assu:alternative){reference-type="ref" reference="assu:alternative"}: **Lemma 13**. ***(Lipschitz Continuity of Decoupled Risk, Alternative Condition based on Wasserstein-1 distance.)**[\[lem:lip_decoupled_risk_wasserstein\]]{#lem:lip_decoupled_risk_wasserstein label="lem:lip_decoupled_risk_wasserstein"} Suppose that Assumptions [\[assu:Lip\]](#assu:Lip){reference-type="ref" reference="assu:Lip"}, [\[assu:BoundLoss\]](#assu:BoundLoss){reference-type="ref" reference="assu:BoundLoss"} and [Assumption 7](#assu:alternative){reference-type="ref" reference="assu:alternative"} hold. 
Then, for any ${\bm \theta}_1, {\bm \theta}_2 \in \mathbb{R}^d$, it holds that $$\left|\mathbb{E}_{Z\sim \Pi_{{\bm \theta}_2}}\left[\ell({\bm \theta}_1;Z)-\ell({\bm \theta}_2;Z)\right]\right|\leq (L_0 L_1+2\sqrt{L{ G}})\left\Vert {\bm \theta}_1-{\bm \theta}_2 \right\Vert+\frac{L}{2}\left\Vert {\bm \theta}_1-{\bm \theta}_2 \right\Vert^2.$$* *Proof.* Observe that $$\begin{aligned} \text{LHS} &=\left|{\cal L}({\bm \theta}_1,{\bm \theta}_2)-{\cal L}({\bm \theta}_2, {\bm \theta}_2)\right| \\ &\leq \left|{\cal L}({\bm \theta}_1)-{\cal L}({\bm \theta}_2)\right| + \left|{\cal L}({\bm \theta}_1,{\bm \theta}_2)-{\cal L}({\bm \theta}_1,{\bm \theta}_1)\right| \\ &\leq \left|{\cal L}({\bm \theta}_1)-{\cal L}({\bm \theta}_2)-\left\langle{\nabla}{\cal L}({\bm \theta}_2)\,|\,{\bm \theta}_1-{\bm \theta}_2\right\rangle\right|+\left|\left\langle{\nabla}{\cal L}({\bm \theta}_2)\,|\,{\bm \theta}_1-{\bm \theta}_2\right\rangle\right| +\left|{\cal L}({\bm \theta}_1,{\bm \theta}_1)-{\cal L}({\bm \theta}_1, {\bm \theta}_2)\right| \\ &\leq\frac{L}{2}\left\Vert {\bm \theta}_1-{\bm \theta}_2 \right\Vert^2+\left|\left\langle{\nabla}{\cal L}({\bm \theta}_2)\,|\,{\bm \theta}_1-{\bm \theta}_2\right\rangle\right| +\left|{\cal L}({\bm \theta}_1,{\bm \theta}_1)-{\cal L}({\bm \theta}_1, {\bm \theta}_2)\right| \\ &\leq \frac{L}{2}\left\Vert {\bm \theta}_1-{\bm \theta}_2 \right\Vert^2+2\sqrt{L{ G}}\left\Vert {\bm \theta}_1-{\bm \theta}_2 \right\Vert +\left|{\cal L}({\bm \theta}_1,{\bm \theta}_1)-{\cal L}({\bm \theta}_1, {\bm \theta}_2)\right| \\ &=\frac{L}{2}\left\Vert {\bm \theta}_1-{\bm \theta}_2 \right\Vert^2+2\sqrt{ L{ G}}\left\Vert {\bm \theta}_1-{\bm \theta}_2 \right\Vert +\left|\mathbb{E}_{Z\sim\Pi_{{\bm \theta}_1},Z^{\prime}\sim\Pi_{{\bm \theta}_2}}[\ell({\bm \theta}_1;Z)-\ell({\bm \theta}_1;Z^{\prime})]\right| \\ &\overset{(a)}{\leq}\frac{L}{2}\left\Vert {\bm \theta}_1-{\bm \theta}_2 \right\Vert^2+2\sqrt{L{ G}}\left\Vert {\bm \theta}_1-{\bm \theta}_2 \right\Vert +L_0 W_1(\Pi_{{\bm 
\theta}_1},\Pi_{{\bm \theta}_2}) \\ &\overset{(b)}{\leq}\frac{L}{2}\left\Vert {\bm \theta}_1-{\bm \theta}_2 \right\Vert^2+2\sqrt{L{ G}}\left\Vert {\bm \theta}_1-{\bm \theta}_2 \right\Vert +L_0 L_1\left\Vert {\bm \theta}_1-{\bm \theta}_2 \right\Vert \\ &=\left(L_0 L_1+2\sqrt{L{ G}}\right)\left\Vert {\bm \theta}_1-{\bm \theta}_2 \right\Vert+\frac{L}{2}\left\Vert {\bm \theta}_1-{\bm \theta}_2 \right\Vert^2 \end{aligned}$$ where the inequality (a) is due to [@perdomo2020performative Lemma D.4] and the alternative assumption [\[assu:replace1\]](#assu:replace1){reference-type="ref" reference="assu:replace1"}, the inequality (b) is due to the alternative assumption [\[assu:replace2\]](#assu:replace2){reference-type="ref" reference="assu:replace2"} (i.e., Lipschitz condition on distribution map $\Pi_{{\bm \theta}}$ in Wasserstein-1 metric). ◻ Consequently, the conclusions in Theorem [\[thm1\]](#thm1){reference-type="ref" reference="thm1"} hold (with slightly different constants) when Assumption [\[assu:smooth_dist\]](#assu:smooth_dist){reference-type="ref" reference="assu:smooth_dist"} is replaced by Assumption [Assumption 7](#assu:alternative){reference-type="ref" reference="assu:alternative"}. We observe that the former assumption is only used in ensuring the bound in Lemma [\[lem:lip_decoupled_risk\]](#lem:lip_decoupled_risk){reference-type="ref" reference="lem:lip_decoupled_risk"}; cf. the proof of Lemma [Lemma 5](#lem:bound_term_2){reference-type="ref" reference="lem:bound_term_2"}. [^1]: H.T.  Liu, Q. Li and H.-T. Wai are with the Department of Systems Engineering and Engineering Management, The Chinese University of Hong Kong, Hong Kong SAR of China. Emails: `antonyhtliu@link.cuhk.edu.hk, {liqiang, htwai}@se.cuhk.edu.hk` [^2]: Note that in [@Nesterov2017; @ghadimi2013], the random vector ${\bm u}$ is drawn from a Gaussian distribution.
{ "id": "2310.05792", "title": "Two-timescale Derivative Free Optimization for Performative Prediction\n with Markovian Data", "authors": "Haitong Liu, Qiang Li, Hoi-To Wai", "categories": "math.OC", "license": "http://creativecommons.org/licenses/by/4.0/" }
--- abstract: | In this note we provide conditions for local invariance of finite dimensional submanifolds for solutions to stochastic partial differential equations (SPDEs) in the framework of the variational approach. For this purpose, we provide a connection to SPDEs in continuously embedded spaces. address: - Indian Institute of Science Education and Research Thiruvanantapuram, Department of Mathematics, Maruthamala P.O, 695551, Kerala, India - Albert Ludwig University of Freiburg, Department of Mathematical Stochastics, Ernst-Zermelo-Straße 1, D-79104 Freiburg, Germany - University of Wuppertal, Department of Mathematics and Natural Sciences, Gauß-straße 20, D-42097 Wuppertal, Germany author: - Rajeev Bhaskaran - Stefan Tappe date: 7 September, 2023 title: A note on invariant manifolds for stochastic partial differential equations in the framework of the variational approach --- [^1] # Introduction In this paper we deal with invariant manifolds for stochastic partial differential equations (SPDEs) in the framework of the variational approach. More precisely, let $G,K$ be Banach spaces, and let $H$ be a separable Hilbert space such that $(G,H,K)$ is a triple of continuously embedded spaces. Consider a $(G,H,K)$-variational SPDE of the form $$\begin{aligned} \label{SPDE} \left\{ \begin{array}{rcl} dY_t & = & L(Y_t) dt + A(Y_t) dW_t \\ Y_0 & = & y_0 \end{array} \right.\end{aligned}$$ driven by an $\mathbb{R}^{\infty}$-Wiener process $W$ with measurable coefficients $L : G \to K$ and $A : G \to \ell^2(H)$. Variational SPDEs of this kind have been studied, for example, in [@Rozovskii; @Prevot-Roeckner; @Liu-Roeckner]. Given a finite dimensional submanifold $\mathcal{M}$, we are interested in the question when $\mathcal{M}$ is locally invariant for the SPDE ([\[SPDE\]](#SPDE){reference-type="ref" reference="SPDE"}); a property, which in particular ensures the existence of local solutions. 
The invariance problem for submanifolds has been investigated, for example, in [@Milian-manifold] for finite dimensional SDEs, in [@Filipovic-inv; @Nakayama] for SPDEs in the framework of the semigroup approach, and recently in [@BT] for SPDEs in continuously embedded spaces. The latter framework is similar to the framework of the variational approach, and this is a crucial observation for providing our results from this paper. Indeed, if $L$ is even a measurable mapping $L : G \to H$ with values in the Hilbert space $H$, then we may regard ([\[SPDE\]](#SPDE){reference-type="ref" reference="SPDE"}) also as a $(G,H)$-embedded SPDE in the spirit of [@BT]. The remainder of this paper is organized as follows. In Section [2](#sec-SPDEs-embedded){reference-type="ref" reference="sec-SPDEs-embedded"} we provide the required background about SPDEs in continuously embedded spaces, and in Section [3](#sec-SPDEs){reference-type="ref" reference="sec-SPDEs"} we provide the required background about SPDEs in the framework of the variational approach. After these preparations, we present our results on invariant manifolds; namely in Section [4](#sec-K-separable){reference-type="ref" reference="sec-K-separable"} we consider the situation where $K$ is a separable Hilbert space, and in Section [5](#sec-general){reference-type="ref" reference="sec-general"} we consider the general situation. Afterwards, we use our findings for the following two applications. In Section [6](#sec-HS){reference-type="ref" reference="sec-HS"} we construct examples of invariant submanifolds in Hermite Sobolev spaces, and in Section [7](#sec-Laplace){reference-type="ref" reference="sec-Laplace"} we characterize linear submanifolds for the stochastic $p$-Laplace equation. For convenience of the reader, in Appendix [8](#app-embedding){reference-type="ref" reference="app-embedding"} we provide the required auxiliary results about continuously embedded spaces. 
# Stochastic partial differential equations in continuously embedded spaces {#sec-SPDEs-embedded} In this section we provide the required prerequisites about SPDEs in continuously embedded spaces. It is similar to [@BT Sec. 2], where further details can be found. Let $G$ be a Banach space and let $H$ be a separable Hilbert space such that $(G,H)$ is a pair of continuously embedded spaces; see Definition [Definition 22](#def-embedding){reference-type="ref" reference="def-embedding"}. Let $\mathcal{N}\subset G$ be a subset, endowed with the relative topology induced by $G$. In particular, we could choose $\mathcal{N}:= G$. Furthermore, let $L : \mathcal{N}\to H$ and $A : \mathcal{N}\to \ell^2(H)$ be measurable coefficients. **Definition 1**. *Let $y_0 \in \mathcal{N}$ be arbitrary. A triplet $(\mathbb{B},W,Y)$ is called a *local martingale solution* to the $(G,H)$-embedded SPDE ([\[SPDE\]](#SPDE){reference-type="ref" reference="SPDE"}) with $Y_0 = y_0$ if the following conditions are fulfilled:* 1. *$\mathbb{B}= (\Omega,\mathscr{F},(\mathscr{F}_t)_{t \in \mathbb{R}_+},\mathbb{P})$ is a stochastic basis; that is, a filtered probability space satisfying the usual conditions.* 2. *$W$ is a standard $\mathbb{R}^{\infty}$-Wiener process on the stochastic basis $\mathbb{B}$.* 3. *$Y$ is an $\mathcal{N}$-valued $\mathcal{P}$-$\mathcal{B}(\mathcal{N})$-measurable process such that for some strictly positive stopping time $\tau > 0$ we have $\mathbb{P}$-almost surely $$\begin{aligned} \label{SPDE-int-cond} \int_0^{t \wedge \tau} \big( \| L(Y_s) \|_H + \| A(Y_s) \|_{\ell^2(H)}^2 \big) ds < \infty, \quad t \in \mathbb{R}_+\end{aligned}$$ and $\mathbb{P}$-almost surely $$\begin{aligned} \label{SPDE-integral-form} Y_{t \wedge \tau} = y_0 + \text{{\rm ($H$-)}} \int_0^{t \wedge \tau} L(Y_s) ds + \text{{\rm ($H$-)}} \int_0^{t \wedge \tau} A(Y_s) dW_s, \quad t \in \mathbb{R}_+.\end{aligned}$$ The stopping time $\tau$ is also called the *lifetime* of $Y$.* **Remark 2**. 
*As usual $\mathcal{P}$ denotes the predictable $\sigma$-algebra on $\Omega \times \mathbb{R}_+$. Moreover, as the notation indicates, the stochastic integral $$\begin{aligned} \text{{\rm ($H$-)}} \int_0^{t \wedge \tau} L(Y_s) ds\end{aligned}$$ appearing in ([\[SPDE-integral-form\]](#SPDE-integral-form){reference-type="ref" reference="SPDE-integral-form"}) is a pathwise Bochner integral in the separable Hilbert space $(H,\| \cdot \|_H)$, and the stochastic integral $$\begin{aligned} \text{{\rm ($H$-)}} \int_0^{t \wedge \tau} A(Y_s) dW_s\end{aligned}$$ appearing in ([\[SPDE-integral-form\]](#SPDE-integral-form){reference-type="ref" reference="SPDE-integral-form"}) is an Itô integral in the separable Hilbert space $(H,\| \cdot \|_H)$.* **Remark 3**. *If there is no ambiguity, we will simply call $Y$ a local martingale solution to the $(G,H)$-embedded SPDE ([\[SPDE\]](#SPDE){reference-type="ref" reference="SPDE"}) with $Y_0 = y_0$.* **Definition 4**. *A subset $\mathcal{M}\subset \mathcal{N}$ is called *locally invariant* for the $(G,H)$-embedded SPDE ([\[SPDE\]](#SPDE){reference-type="ref" reference="SPDE"}) if for each $y_0 \in \mathcal{M}$ there exists a local martingale solution $Y$ to the $(G,H)$-embedded SPDE ([\[SPDE\]](#SPDE){reference-type="ref" reference="SPDE"}) with $Y_0 = y_0$ and lifetime $\tau > 0$ such that $Y^{\tau} \in \mathcal{M}$ up to an evanescent set.* Finite dimensional *submanifolds* in embedded Banach spaces can be defined as in [@BT Sec. 3.1], where Hilbert spaces have been considered. By virtue of the results from [@fillnm Sec. 6.1], where finite dimensional submanifolds in Banach spaces have been studied, all definitions and results from [@BT Sec. 3.1] transfer to the present situation with Banach spaces. Let us recall the following result; for further details concerning the upcoming notation we refer to [@BT Sec. 4]. **Theorem 5**. *[@BT Thm. 4.3] Let $\mathcal{M}$ be a finite dimensional $(G,H)$-submanifold of class $C^2$. 
Suppose that $L|_{\mathcal{M}} : \mathcal{M}\to H$ and $A|_{\mathcal{M}} : \mathcal{M}\to \ell^2(H)$ are continuous. Then the following statements are equivalent:* 1. *The submanifold $\mathcal{M}$ is locally invariant for the $(G,H)$-embedded SPDE ([\[SPDE\]](#SPDE){reference-type="ref" reference="SPDE"}).* 2. *We have $$\begin{aligned} \label{tang-A} &A^j|_{\mathcal{M}} \in \Gamma(T \mathcal{M}), \quad j \in \mathbb{N}, \\ \label{tang-L} &[ L|_{\mathcal{M}} ]_{\Gamma(T \mathcal{M})} - \frac{1}{2} \sum_{j=1}^{\infty} [A^j|_{\mathcal{M}}, A^j|_{\mathcal{M}}]_{\mathcal{M}} = [0]_{\Gamma(T \mathcal{M})}.\end{aligned}$$* 3. *For each $y_0 \in \mathcal{M}$ there exists a local martingale solution $Y$ to the SPDE ([\[SPDE\]](#SPDE){reference-type="ref" reference="SPDE"}) with $Y_0 = y_0$ and lifetime $\tau$ such that $Y^{\tau} \in \mathcal{M}$ up to an evanescent set and the sample paths of $Y^{\tau}$ are continuous with respect to $\| \cdot \|_G$.* # Stochastic partial differential equations in the framework of the variational approach {#sec-SPDEs} In this section we provide the required prerequisites about SPDEs in the framework of the variational approach. Let $G,K$ be Banach spaces and let $H$ be a separable Hilbert space such that $(G,H,K)$ is a triplet of continuously embedded spaces. Let $\mathcal{N}\subset G$ be a subset, endowed with the relative topology induced by $G$. Furthermore, let $L : \mathcal{N}\to K$ and $A : \mathcal{N}\to \ell^2(H)$ be measurable coefficients. **Remark 6**. *Note the difference to the framework from Section [2](#sec-SPDEs-embedded){reference-type="ref" reference="sec-SPDEs-embedded"}, where we had a drift $L : \mathcal{N}\to H$ with values in the Hilbert space $H$.* **Definition 7**. *Let $y_0 \in \mathcal{N}$ be arbitrary. 
A triplet $(\mathbb{B},W,Y)$ is called a *local martingale solution* to the $(G,H,K)$-variational SPDE ([\[SPDE\]](#SPDE){reference-type="ref" reference="SPDE"}) with $Y_0 = y_0$ if the following conditions are fulfilled:* 1. *$\mathbb{B}= (\Omega,\mathscr{F},(\mathscr{F}_t)_{t \in \mathbb{R}_+},\mathbb{P})$ is a stochastic basis.* 2. *$W$ is a standard $\mathbb{R}^{\infty}$-Wiener process on the stochastic basis $\mathbb{B}$.* 3. *$Y$ is an $\mathcal{N}$-valued $\mathcal{P}$-$\mathcal{B}(\mathcal{N})$-measurable process such that for some strictly positive stopping time $\tau > 0$ the image $$\begin{aligned} \{ L(Y_t^{\tau}) : t \in \mathbb{R}_+ \} \subset K\end{aligned}$$ is $\mathbb{P}$-almost surely a separable subset of $(K,\| \cdot \|_K)$, we have $\mathbb{P}$-almost surely $$\begin{aligned} \label{int-cond-var} \int_0^{t \wedge \tau} \big( \| L(Y_s) \|_K + \| A(Y_s) \|_{\ell^2(H)}^2 \big) ds < \infty, \quad t \in \mathbb{R}_+\end{aligned}$$ and $\mathbb{P}$-almost surely $$\begin{aligned} \label{SPDE-integral-form-var} Y_{t \wedge \tau} = y_0 + \text{{\rm ($K$-)}} \int_0^{t \wedge \tau} L(Y_s) ds + \text{{\rm ($H$-)}} \int_0^{t \wedge \tau} A(Y_s) dW_s, \quad t \in \mathbb{R}_+.\end{aligned}$$ The stopping time $\tau$ is also called the *lifetime* of $Y$.* **Remark 8**. *As the notation indicates, the stochastic integral $$\begin{aligned} \text{{\rm ($K$-)}} \int_0^{t \wedge \tau} L(Y_s) ds\end{aligned}$$ appearing in ([\[SPDE-integral-form-var\]](#SPDE-integral-form-var){reference-type="ref" reference="SPDE-integral-form-var"}) is a pathwise Bochner integral in the Banach space $(K,\| \cdot \|_K)$, and the stochastic integral $$\begin{aligned} \text{{\rm ($H$-)}} \int_0^{t \wedge \tau} A(Y_s) dW_s\end{aligned}$$ appearing in ([\[SPDE-integral-form-var\]](#SPDE-integral-form-var){reference-type="ref" reference="SPDE-integral-form-var"}) is an Itô integral in the separable Hilbert space $(H,\| \cdot \|_H)$.* **Remark 9**. 
*If there is no ambiguity, we will simply call $Y$ a local martingale solution to the $(G,H,K)$-variational SPDE ([\[SPDE\]](#SPDE){reference-type="ref" reference="SPDE"}) with $Y_0 = y_0$.* **Definition 10**. *A subset $\mathcal{M}\subset \mathcal{N}$ is called *locally invariant* for the $(G,H,K)$-variational SPDE ([\[SPDE\]](#SPDE){reference-type="ref" reference="SPDE"}) if for each $y_0 \in \mathcal{M}$ there exists a local martingale solution $Y$ to the $(G,H,K)$-variational SPDE ([\[SPDE\]](#SPDE){reference-type="ref" reference="SPDE"}) with $Y_0 = y_0$ and lifetime $\tau > 0$ such that $Y^{\tau} \in \mathcal{M}$ up to an evanescent set.* # The situation where $K$ is a separable Hilbert space {#sec-K-separable} In this section we investigate invariance of finite dimensional submanifolds in the situation where $K$ is a separable Hilbert space. More precisely, let $G$ be a Banach space and let $H,K$ be separable Hilbert spaces such that $(G,H,K)$ is a triplet of continuously embedded spaces. Furthermore, let $L : G \to K$ and $A : G \to \ell^2(H)$ be measurable mappings. Note that the present situation applies to the variational SPDEs considered in [@Rozovskii]. **Remark 11**. *We may and will consider $A$ also as a measurable mapping $A : G \to \ell^2(K)$.* **Proposition 12**. *Let $y_0 \in G$ be arbitrary, and let $Y$ be a local martingale solution to the $(G,H,K)$-variational SPDE ([\[SPDE\]](#SPDE){reference-type="ref" reference="SPDE"}) with lifetime $\tau$ and $Y_0 = y_0$. 
Then $Y$ is also a local martingale solution to the $(G,K)$-embedded SPDE ([\[SPDE\]](#SPDE){reference-type="ref" reference="SPDE"}) with lifetime $\tau$ and $Y_0 = y_0$.* *Proof.* Taking into account condition ([\[int-cond-var\]](#int-cond-var){reference-type="ref" reference="int-cond-var"}), by Lemma [Lemma 25](#lemma-Wiener-int-G-H){reference-type="ref" reference="lemma-Wiener-int-G-H"} we have $\mathbb{P}$-almost surely $$\begin{aligned} \label{int-cond-1} \int_0^{t \wedge \tau} \big( \| L(Y_s) \|_K + \| A(Y_s) \|_{\ell^2(K)}^2 \big) ds < \infty, \quad t \in \mathbb{R}_+\end{aligned}$$ as well as $$\begin{aligned} \label{H-K-int-coincides} \text{{\rm ($H$-)}} \int_0^{t \wedge \tau} A(Y_s) d W_s = \text{{\rm ($K$-)}} \int_0^{t \wedge \tau} A(Y_s) d W_s, \quad t \in \mathbb{R}_+.\end{aligned}$$ Hence, by ([\[SPDE-integral-form-var\]](#SPDE-integral-form-var){reference-type="ref" reference="SPDE-integral-form-var"}) we have $\mathbb{P}$-almost surely $$\begin{aligned} \label{SPDE-integral-form-2} Y_{t \wedge \tau} = y_0 + \text{{\rm ($K$-)}} \int_0^{t \wedge \tau} L(Y_s) ds + \text{{\rm ($K$-)}} \int_0^{t \wedge \tau} A(Y_s) dW_s, \quad t \in \mathbb{R}_+,\end{aligned}$$ completing the proof. ◻ **Proposition 13**. *Let $\mathcal{M}\subset G$ be a subset such that $A|_{\mathcal{M}} : \mathcal{M}\to \ell^2(H)$ is continuous. Let $y_0 \in \mathcal{M}$ be arbitrary, and let $Y$ be a local martingale solution to the $(G,K)$-embedded SPDE ([\[SPDE\]](#SPDE){reference-type="ref" reference="SPDE"}) with lifetime $\tau$ and $Y_0 = y_0$ such that $Y^{\tau} \in \mathcal{M}$ and the sample paths of $Y^{\tau}$ are continuous with respect to $\| \cdot \|_G$. 
Then $Y$ is also a local martingale solution to the $(G,H,K)$-variational SPDE ([\[SPDE\]](#SPDE){reference-type="ref" reference="SPDE"}) with lifetime $\tau$ and $Y_0 = y_0$.* *Proof.* Note that the image $$\begin{aligned} \{ L(Y_t^{\tau}) : t \in \mathbb{R}_+ \} \subset K\end{aligned}$$ is automatically a separable subset of $(K,\| \cdot \|_K)$, because $K$ is a separable Hilbert space. Furthermore, condition ([\[int-cond-1\]](#int-cond-1){reference-type="ref" reference="int-cond-1"}) implies ([\[int-cond-var\]](#int-cond-var){reference-type="ref" reference="int-cond-var"}), because $A|_{\mathcal{M}} : \mathcal{M}\to \ell^2(H)$ is continuous and the sample paths of $Y^{\tau}$ are continuous with respect to $\| \cdot \|_G$. Hence, by Lemma [Lemma 25](#lemma-Wiener-int-G-H){reference-type="ref" reference="lemma-Wiener-int-G-H"} we have ([\[H-K-int-coincides\]](#H-K-int-coincides){reference-type="ref" reference="H-K-int-coincides"}). Consequently, equation ([\[SPDE-integral-form-2\]](#SPDE-integral-form-2){reference-type="ref" reference="SPDE-integral-form-2"}) implies ([\[SPDE-integral-form-var\]](#SPDE-integral-form-var){reference-type="ref" reference="SPDE-integral-form-var"}). ◻ **Theorem 14**. *Let $\mathcal{M}$ be a finite dimensional $(G,K)$-submanifold of class $C^2$. Suppose that $L|_{\mathcal{M}} : \mathcal{M}\to K$ and $A|_{\mathcal{M}} : \mathcal{M}\to \ell^2(H)$ are continuous. Then the following statements are equivalent:* 1. *The submanifold $\mathcal{M}$ is locally invariant for the $(G,K)$-embedded SPDE ([\[SPDE\]](#SPDE){reference-type="ref" reference="SPDE"}).* 2. *The submanifold $\mathcal{M}$ is locally invariant for the $(G,H,K)$-variational SPDE ([\[SPDE\]](#SPDE){reference-type="ref" reference="SPDE"}).* 3. 
*We have ([\[tang-A\]](#tang-A){reference-type="ref" reference="tang-A"}) and ([\[tang-L\]](#tang-L){reference-type="ref" reference="tang-L"}).* *Proof.* (i) $\Rightarrow$ (ii): This is a consequence of Theorem [Theorem 5](#thm-SPDE){reference-type="ref" reference="thm-SPDE"} and Proposition [Proposition 13](#prop-solutions-2){reference-type="ref" reference="prop-solutions-2"}. \(ii\) $\Rightarrow$ (i): This is a consequence of Proposition [Proposition 12](#prop-solutions-1){reference-type="ref" reference="prop-solutions-1"}. \(i\) $\Leftrightarrow$ (iii): This is a consequence of Theorem [Theorem 5](#thm-SPDE){reference-type="ref" reference="thm-SPDE"}. ◻ Taking into account the decomposition (3.2) from [@BT Prop. 3.25], we arrive at the following consequence. **Corollary 15**. *Let $\mathcal{M}$ be a finite dimensional $(G,H,K)$-submanifold of class $C^2$. Suppose that for each $j \in \mathbb{N}$ we have $A^j \in C(G;H)$ with an extension $A^j \in C^1(H;K)$, and that for each $y \in \mathcal{M}$ the series $\sum_{j=1}^{\infty} D A^j(y) A^j(y)$ converges in $K$. Then the following statements are equivalent:* 1. *The submanifold $\mathcal{M}$ is locally invariant for the $(G,K)$-embedded SPDE ([\[SPDE\]](#SPDE){reference-type="ref" reference="SPDE"}).* 2. *The submanifold $\mathcal{M}$ is locally invariant for the $(G,H,K)$-variational SPDE ([\[SPDE\]](#SPDE){reference-type="ref" reference="SPDE"}).* 3. *We have ([\[tang-A\]](#tang-A){reference-type="ref" reference="tang-A"}) and $$\begin{aligned} L|_{\mathcal{M}} - \frac{1}{2} \sum_{j=1}^{\infty} D A^j \cdot A^j|_{\mathcal{M}} \in \Gamma(T \mathcal{M}).\end{aligned}$$* # The general situation {#sec-general} In this section we investigate invariance of finite dimensional submanifolds in the general situation. Let $G,K$ be Banach spaces and let $H$ be a separable Hilbert space such that $(G,H,K)$ is a triplet of continuously embedded spaces. 
Furthermore, let $L : G \to K$ and $A : G \to \ell^2(H)$ be measurable mappings. Note that this situation covers the situation with a Gelfand triplet $(G,H,K) = (V,H,V^*)$ for some reflexive Banach space $V$; see, for example [@Liu-Roeckner]. **Lemma 16**. *Let $\mathcal{M}\subset G$ be a subset such that $L(\mathcal{M}) \subset H$. Then the restriction $L|_{\mathcal{M}} : \mathcal{M}\to H$ is $\mathcal{B}(\mathcal{M})$-$\mathcal{B}(H)$-measurable.* *Proof.* By Lemma [Lemma 23](#lemma-Kuratowski){reference-type="ref" reference="lemma-Kuratowski"} we have $H \in \mathcal{B}(K)$ and $\mathcal{B}(H) = \mathcal{B}(K)_H$. Therefore, for each $B \in \mathcal{B}(K)$ we have $B \cap H \in \mathcal{B}(K)$, and hence $$\begin{aligned} L|_{\mathcal{M}}^{-1}(B \cap H) = L^{-1}(B \cap H) \cap \mathcal{M}\in \mathcal{B}(G)_{\mathcal{M}} = \mathcal{B}(\mathcal{M}),\end{aligned}$$ which completes the proof. ◻ **Proposition 17**. *Let $\mathcal{M}\subset G$ be a subset such that $L(\mathcal{M}) \subset H$ and $L|_{\mathcal{M}} : (\mathcal{M},\| \cdot \|_G) \to (H,\| \cdot \|_H)$ is continuous. Let $y_0 \in \mathcal{M}$ be arbitrary, and let $Y$ be a local martingale solution to the $(G,H,K)$-variational SPDE ([\[SPDE\]](#SPDE){reference-type="ref" reference="SPDE"}) with lifetime $\tau$ and $Y_0 = y_0$ such that $Y^{\tau} \in \mathcal{M}$ and the sample paths of $Y^{\tau}$ are continuous with respect to $\| \cdot \|_G$. Then $Y$ is also a local martingale solution to the $(G,H)$-embedded SPDE ([\[SPDE\]](#SPDE){reference-type="ref" reference="SPDE"}) with lifetime $\tau$ and $Y_0 = y_0$.* *Proof.* Condition ([\[int-cond-var\]](#int-cond-var){reference-type="ref" reference="int-cond-var"}) implies ([\[SPDE-int-cond\]](#SPDE-int-cond){reference-type="ref" reference="SPDE-int-cond"}), because $L|_{\mathcal{M}} : (\mathcal{M},\| \cdot \|_G) \to (H,\| \cdot \|_H)$ is continuous and the sample paths of $Y^{\tau}$ are continuous with respect to $\| \cdot \|_G$. 
Furthermore, taking into account Lemma [Lemma 24](#lemma-Leb-int-G-H){reference-type="ref" reference="lemma-Leb-int-G-H"} we have $\mathbb{P}$-almost surely $$\begin{aligned} \label{H-K-int-Bochner-coincides} \text{{\rm ($H$-)}} \int_0^{t \wedge \tau} L(Y_s) ds = \text{{\rm ($K$-)}} \int_0^{t \wedge \tau} L(Y_s) ds, \quad t \in \mathbb{R}_+.\end{aligned}$$ Hence, the identity ([\[SPDE-integral-form-var\]](#SPDE-integral-form-var){reference-type="ref" reference="SPDE-integral-form-var"}) implies ([\[SPDE-integral-form\]](#SPDE-integral-form){reference-type="ref" reference="SPDE-integral-form"}). ◻ **Proposition 18**. *Let $\mathcal{M}\subset G$ be a subset such that $L(\mathcal{M}) \subset H$. Let $y_0 \in \mathcal{M}$ be arbitrary, and let $Y$ be a local martingale solution to the $(G,H)$-embedded SPDE ([\[SPDE\]](#SPDE){reference-type="ref" reference="SPDE"}) with lifetime $\tau$ and $Y_0 = y_0$ such that $Y^{\tau} \in \mathcal{M}$. Then $Y$ is also a local martingale solution to the $(G,H,K)$-variational SPDE ([\[SPDE\]](#SPDE){reference-type="ref" reference="SPDE"}) with lifetime $\tau$ and $Y_0 = y_0$.* *Proof.* Since $Y$ is predictable, the stopped process $Y^{\tau}$ is also predictable, and thus $\mathscr{F}\otimes \mathcal{B}(\mathbb{R}_+)$-measurable. Hence, for each $\omega \in \Omega$ the sample path $t \mapsto Y_t^{\tau(\omega)}(\omega)$ is $\mathcal{B}(\mathbb{R}_+)$-$\mathcal{B}(\mathcal{M})$-measurable. Therefore, by Lemma [Lemma 16](#lemma-L-restr){reference-type="ref" reference="lemma-L-restr"} for each $\omega \in \Omega$ the mapping $t \mapsto L(Y_t^{\tau}(\omega))$ is $\mathcal{B}(\mathbb{R}_+)$-$\mathcal{B}(H)$-measurable. 
Now, by virtue of Lemma [Lemma 24](#lemma-Leb-int-G-H){reference-type="ref" reference="lemma-Leb-int-G-H"}, condition ([\[SPDE-int-cond\]](#SPDE-int-cond){reference-type="ref" reference="SPDE-int-cond"}) implies ([\[int-cond-var\]](#int-cond-var){reference-type="ref" reference="int-cond-var"}), and we have ([\[H-K-int-Bochner-coincides\]](#H-K-int-Bochner-coincides){reference-type="ref" reference="H-K-int-Bochner-coincides"}). Therefore, the identity ([\[SPDE-integral-form\]](#SPDE-integral-form){reference-type="ref" reference="SPDE-integral-form"}) implies ([\[SPDE-integral-form-var\]](#SPDE-integral-form-var){reference-type="ref" reference="SPDE-integral-form-var"}). ◻ **Theorem 19**. *Let $\mathcal{M}$ be a finite dimensional $(G,H)$-submanifold of class $C^2$. Suppose that $L(\mathcal{M}) \subset H$ and that $L|_{\mathcal{M}} : \mathcal{M}\to (H,\| \cdot \|_H)$ and $A|_{\mathcal{M}} : \mathcal{M}\to \ell^2(H)$ are continuous. Then the following statements are equivalent:* 1. *The submanifold $\mathcal{M}$ is locally invariant for the $(G,H)$-embedded SPDE ([\[SPDE\]](#SPDE){reference-type="ref" reference="SPDE"}).* 2. *The submanifold $\mathcal{M}$ is locally invariant for the $(G,H,K)$-variational SPDE ([\[SPDE\]](#SPDE){reference-type="ref" reference="SPDE"}).* 3. *We have ([\[tang-A\]](#tang-A){reference-type="ref" reference="tang-A"}) and ([\[tang-L\]](#tang-L){reference-type="ref" reference="tang-L"}).* *Proof.* (i) $\Rightarrow$ (ii): This is a consequence of Proposition [Proposition 18](#prop-solutions-G-H-2){reference-type="ref" reference="prop-solutions-G-H-2"}. \(ii\) $\Rightarrow$ (i): This is a consequence of Theorem [Theorem 5](#thm-SPDE){reference-type="ref" reference="thm-SPDE"} and Proposition [Proposition 17](#prop-solutions-G-H-1){reference-type="ref" reference="prop-solutions-G-H-1"}. \(i\) $\Leftrightarrow$ (iii): This is a consequence of Theorem [Theorem 5](#thm-SPDE){reference-type="ref" reference="thm-SPDE"}. 
◻ # Invariant submanifolds in Hermite Sobolev spaces {#sec-HS} In this section we present our first application. Namely, we use our findings from Section [4](#sec-K-separable){reference-type="ref" reference="sec-K-separable"} in order to construct examples of invariant submanifolds in Hermite Sobolev spaces; see [@BT App. A] for the required background about Hermite Sobolev spaces. Let $p \in \mathbb{R}$ be arbitrary and define the Hermite Sobolev spaces $$\begin{aligned} G := \mathscr{S}_{p+1}(\mathbb{R}^d), \quad H := \mathscr{S}_{p+\frac{1}{2}}(\mathbb{R}^d) \quad \text{and} \quad K := \mathscr{S}_p(\mathbb{R}^d). \end{aligned}$$ Then $(G,H,K)$ is a triplet of continuously embedded separable Hilbert spaces, and hence we are in the framework of Section [4](#sec-K-separable){reference-type="ref" reference="sec-K-separable"}. Let $$\begin{aligned} b \in \mathscr{S}_{-(p+1)}(\mathbb{R}^d;\mathbb{R}^d) \quad \text{and} \quad \sigma \in \ell^2(\mathscr{S}_{-(p+1)}(\mathbb{R}^d;\mathbb{R}^d))\end{aligned}$$ be given mappings. We define the coefficients $L : G \to K$ and $A^j : G \to H$ for $j \in \mathbb{N}$ as $$\begin{aligned} L(y) &:= \frac{1}{2} \sum_{i,j=1}^d ( \langle \sigma,y \rangle \langle \sigma,y \rangle^{\top} )_{ij} \partial_{ij}^2 y - \sum_{i=1}^d \langle b_i,y \rangle \partial_i y, \\ A^j(y) &:= - \sum_{i=1}^d \langle \sigma_{i}^j,y \rangle \partial_i y, \quad j \in \mathbb{N}.\end{aligned}$$ This provides well-defined continuous mappings $L : G \to K$ and $A : G \to \ell^2(H)$ for the SPDE ([\[SPDE\]](#SPDE){reference-type="ref" reference="SPDE"}); see [@BT Sec. 6.3] for further details. Such SPDEs were recently studied in [@Rajeev; @Rajeev-2019]. We will call them Itô type SPDEs due to their connection to finite dimensional SDEs; see [@BT Sec. 7]. As an immediate consequence of Theorem [Theorem 14](#thm-K-separable){reference-type="ref" reference="thm-K-separable"} we obtain the following result. **Proposition 20**. 
*Let $\mathcal{M}$ be a finite dimensional $(G,K)$-submanifold of class $C^2$. Then the following statements are equivalent:* 1. *The submanifold $\mathcal{M}$ is locally invariant for the $(G,K)$-embedded Itô type SPDE ([\[SPDE\]](#SPDE){reference-type="ref" reference="SPDE"}).* 2. *The submanifold $\mathcal{M}$ is locally invariant for the $(G,H,K)$-variational Itô type SPDE ([\[SPDE\]](#SPDE){reference-type="ref" reference="SPDE"}).* In [@BT Sec. 6.3] there are several results and examples concerning locally invariant submanifolds for the $(G,K)$-embedded Itô type SPDE ([\[SPDE\]](#SPDE){reference-type="ref" reference="SPDE"}); in particular for submanifolds generated by the translation group $(\tau_x)_{x \in \mathbb{R}^d}$, which are of the form $$\begin{aligned} \mathcal{M}= \{ \tau_x \Phi : x \in \mathcal{N}\}\end{aligned}$$ for some $\Phi \in G$ and a $C^2$-submanifold $\mathcal{N}$ of $\mathbb{R}^d$. As a consequence of Proposition [Proposition 20](#prop-K-separable){reference-type="ref" reference="prop-K-separable"}, all the aforementioned findings from [@BT Sec. 6.3] transfer to the $(G,H,K)$-variational Itô type SPDE ([\[SPDE\]](#SPDE){reference-type="ref" reference="SPDE"}). # Linear submanifolds for the stochastic $p$-Laplace equation {#sec-Laplace} In this section we present our second application. Namely, we use our findings from Section [5](#sec-general){reference-type="ref" reference="sec-general"} in order to characterize linear submanifolds for the stochastic $p$-Laplace equation (cf. [@Liu-Roeckner Example 1.2.6]) $$\begin{aligned} \label{SPDE-Laplace} \left\{ \begin{array}{rcl} dY_t & = & {\rm div} \, \big( | \nabla Y_t |^{p-2} \nabla Y_t \big) dt + A(Y_t) dW_t \\ Y_0 & = & y_0. \end{array} \right.\end{aligned}$$ Let us start with a more general situation. Namely, consider a Gelfand triplet $(V,H,V^*)$. 
We assume that the drift $L : V \to V^*$ in the SPDE ([\[SPDE\]](#SPDE){reference-type="ref" reference="SPDE"}) is a continuous linear operator $L \in L(V,V^*)$. Furthermore, let $A : V \to \ell^2(H)$ be a measurable mapping. Let $v_1,\ldots,v_m \in V$ be eigenvectors or elements from the kernel of $L$ for some $m \in \mathbb{N}$; that is, we have $$\begin{aligned} L v_i = \lambda_i v_i, \quad i=1,\ldots,m\end{aligned}$$ with $\lambda_1,\ldots,\lambda_m \in \mathbb{R}$. We consider the linear submanifold $$\begin{aligned} \mathcal{M}:= {\rm lin}\{ v_1,\ldots,v_m \}.\end{aligned}$$ **Proposition 21**. *Suppose that $A|_{\mathcal{M}} : \mathcal{M}\to \ell^2(H)$ is continuous. Then the following statements are equivalent:* 1. *The submanifold $\mathcal{M}$ is locally invariant for the $(V,H)$-embedded SPDE ([\[SPDE\]](#SPDE){reference-type="ref" reference="SPDE"}).* 2. *The submanifold $\mathcal{M}$ is locally invariant for the $(V,H,V^*)$-variational SPDE ([\[SPDE\]](#SPDE){reference-type="ref" reference="SPDE"}).* 3. *We have ([\[tang-A\]](#tang-A){reference-type="ref" reference="tang-A"}).* *Proof.* Note that $L(\mathcal{M}) \subset V$ and that $L|_{\mathcal{M}} : \mathcal{M}\to V$ is continuous, showing that the assumptions of Theorem [Theorem 19](#thm-general){reference-type="ref" reference="thm-general"} are more than satisfied. Hence, together with [@BT Cor. 4.7] this completes the proof. ◻ Now we consider the stochastic $p$-Laplace equation ([\[SPDE-Laplace\]](#SPDE-Laplace){reference-type="ref" reference="SPDE-Laplace"}). Let us fix some $p \in [2,\infty)$. The space $V$ is given by $V := H_0^{1,p}(\Lambda)$ with an open subset $\Lambda \subset \mathbb{R}^d$; the so-called Sobolev space of order $1$ in $L^p(\Lambda)$; see [@Liu-Roeckner p. 78]. 
The drift $L$ in the SPDE ([\[SPDE\]](#SPDE){reference-type="ref" reference="SPDE"}) is the $p$-Laplacian $L : V \to V^*$ given by $$\begin{aligned} L(y) = {\rm div} \, \big( | \nabla y |^{p-2} \nabla y \big), \quad y \in V.\end{aligned}$$ The $p$-Laplacian $L : V \to V^*$ is indeed a continuous linear operator $L \in L(V,V^*)$; see equation (4.14) in [@Liu-Roeckner p. 81]. Furthermore, let $A : V \to \ell^2(H)$ be a measurable mapping. Then Proposition [Proposition 21](#prop-linear-manifold){reference-type="ref" reference="prop-linear-manifold"} applies to every finite dimensional submanifold $\mathcal{M}$ generated by eigenvectors or elements from the kernel of $L$. # Continuously embedded spaces {#app-embedding} In this appendix we provide the required auxiliary results about continuously embedded spaces. **Definition 22**. *Let $H$ and $K$ be two normed spaces. Then we call $(H,K)$ *continuously embedded normed spaces* (or *normed spaces with continuous embedding*) if the following conditions are fulfilled:* 1. *We have $H \subset K$ as sets.* 2. *The embedding operator ${\rm Id}: (H, \| \cdot \|_H) \to (K, \| \cdot \|_K)$ is continuous; that is, there is a constant $C > 0$ such that $$\begin{aligned} \label{embedding-inequ} \| x \|_K \leq C \| x \|_H \quad \text{for all $x \in H$.}\end{aligned}$$* **Lemma 23**. *Let $(H,K)$ be two continuously embedded Banach spaces. Then we have $H \in \mathcal{B}(K)$ and $\mathcal{B}(H) = \mathcal{B}(K)_H$, where $\mathcal{B}(K)_H$ denotes the trace $\sigma$-algebra $$\begin{aligned} \mathcal{B}(K)_H = \{ B \cap H : B \in \mathcal{B}(K) \}.\end{aligned}$$* *Proof.* This is a consequence of Kuratowski's theorem; see, for example [@Parthasarathy Thm. I.3.9]. ◻ **Lemma 24**. *Let $(H,K)$ be a pair of continuously embedded Banach spaces, and let $(E,\mathcal{E},\mu)$ be a finite measure space. 
Let $f : E \to H$ be an $\mathcal{E}$-$\mathcal{B}(H)$-measurable mapping such that the image $f(E)$ is a separable subset of $(H,\| \cdot \|_H)$ and $$\begin{aligned} \label{f-H-integrable} \int_E \| f \|_H \, d \mu < \infty.\end{aligned}$$ Then the following statements are true:* 1. *The mapping $f$ is also $\mathcal{E}$-$\mathcal{B}(K)$-measurable, the image $f(E)$ is a separable subset of $(K,\| \cdot \|_K)$, and we have $$\begin{aligned} \label{f-K-integrable} \int_E \| f \|_K \, d \mu < \infty.\end{aligned}$$* 2. *We have $$\begin{aligned} \label{H-K-coincide} \text{{\rm ($H$-)}} \int_E f \, d \mu = \text{{\rm ($K$-)}} \int_E f \, d \mu.\end{aligned}$$* *Proof.* For each $B \in \mathcal{B}(K)$ we have $$\begin{aligned} f^{-1}(B \cap H) = f^{-1}(B) \cap f^{-1}(H) = f^{-1}(B) \cap E = f^{-1}(B).\end{aligned}$$ Furthermore, by Lemma [Lemma 23](#lemma-Kuratowski){reference-type="ref" reference="lemma-Kuratowski"} we have $\mathcal{B}(K)_H = \mathcal{B}(H)$, and hence $$\begin{aligned} f^{-1}(\mathcal{B}(K)) = f^{-1}(\mathcal{B}(K)_H) = f^{-1}(\mathcal{B}(H)) \subset \mathcal{E}.\end{aligned}$$ There is a countable subset $D \subset f(E)$ which is dense in $f(E)$ with respect to $\| \cdot \|_H$. Let $x \in f(E)$ be arbitrary. Then there exists a sequence $(x_n)_{n \in \mathbb{N}} \subset D$ such that $\| x_n - x \|_H \to 0$, which implies $\| x_n - x \|_K \to 0$ due to ([\[embedding-inequ\]](#embedding-inequ){reference-type="ref" reference="embedding-inequ"}). Furthermore, by ([\[embedding-inequ\]](#embedding-inequ){reference-type="ref" reference="embedding-inequ"}) and ([\[f-H-integrable\]](#f-H-integrable){reference-type="ref" reference="f-H-integrable"}) we have ([\[f-K-integrable\]](#f-K-integrable){reference-type="ref" reference="f-K-integrable"}), proving the first statement. For the proof of the second statement we proceed in two steps. 
Suppose first that the mapping $f$ is simple; that is, for some $n \in \mathbb{N}$ we have $$\begin{aligned} f = \sum_{i=1}^n c_i \mathbbm{1}_{B_i}\end{aligned}$$ with $c_i \in H$ and $B_i \in \mathcal{E}$ for each $i=1,\ldots,n$. Then we have $$\begin{aligned} \label{integral-simple} \text{{\rm ($H$-)}} \int_E f \, d \mu = \sum_{i=1}^n c_i \cdot \mu(B_i) = \text{{\rm ($K$-)}} \int_E f \, d \mu.\end{aligned}$$ Now, we consider the general situation. According to [@Da_Prato Lemma I.1.3] there exists a sequence $(f_n)_{n \in \mathbb{N}}$ of simple functions $f_n : E \to H$ such that $\| f_n(x) - f(x) \|_H \downarrow 0$ for each $x \in E$. Hence, for each $n \in \mathbb{N}$ and each $x \in E$ we have $$\begin{aligned} \| f_n(x) - f(x) \|_H \leq \| f_1(x) - f(x) \|_H \leq \| f(x) \|_H + \| f_1(x) \|_H.\end{aligned}$$ By Lebesgue's dominated convergence theorem we obtain $$\begin{aligned} \label{fn-f-H} \int_E \| f_n - f \|_H \, d\mu \to 0,\end{aligned}$$ and hence $$\begin{aligned} \label{fn-H} \int_E f_n \, d\mu \to \text{{\rm ($H$-)}} \int_E f \, d \mu \quad \text{in $(H,\| \cdot \|_H)$,}\end{aligned}$$ where $\int_E f_n \, d\mu$ is defined according to ([\[integral-simple\]](#integral-simple){reference-type="ref" reference="integral-simple"}). 
By ([\[embedding-inequ\]](#embedding-inequ){reference-type="ref" reference="embedding-inequ"}) and ([\[fn-f-H\]](#fn-f-H){reference-type="ref" reference="fn-f-H"}) we have $$\begin{aligned} \int_E \| f_n - f \|_K \, d\mu \to 0,\end{aligned}$$ and hence $$\begin{aligned} \label{fn-K} \int_E f_n \, d\mu \to \text{{\rm ($K$-)}} \int_E f \, d \mu \quad \text{in $(K,\| \cdot \|_K)$.}\end{aligned}$$ Moreover, by ([\[embedding-inequ\]](#embedding-inequ){reference-type="ref" reference="embedding-inequ"}) and ([\[fn-H\]](#fn-H){reference-type="ref" reference="fn-H"}) we have $$\begin{aligned} \label{fn-HK} \int_E f_n \, d\mu \to \text{{\rm ($H$-)}} \int_E f \, d \mu \quad \text{in $(K,\| \cdot \|_K)$.}\end{aligned}$$ Combining ([\[fn-K\]](#fn-K){reference-type="ref" reference="fn-K"}) and ([\[fn-HK\]](#fn-HK){reference-type="ref" reference="fn-HK"}) we arrive at ([\[H-K-coincide\]](#H-K-coincide){reference-type="ref" reference="H-K-coincide"}). ◻ With a similar proof, we obtain the following auxiliary result concerning the Itô integral. **Lemma 25**. *Let $(H,K)$ be a pair of continuously embedded separable Hilbert spaces, and let $W$ be a standard $\mathbb{R}^{\infty}$-Wiener process on some stochastic basis $\mathbb{B}$. Let $A$ be an $\ell^2(H)$-valued $\mathcal{P}$-$\mathcal{B}(\ell^2(H))$-measurable process such that $\mathbb{P}$-almost surely $$\begin{aligned} \int_0^t \| A_s \|_{\ell^2(H)}^2 \, ds < \infty, \quad t \in \mathbb{R}_+.\end{aligned}$$ Then the following statements are true:* 1. *$A$ is also an $\ell^2(K)$-valued $\mathcal{P}$-$\mathcal{B}(\ell^2(K))$-measurable process and we have $\mathbb{P}$-almost surely $$\begin{aligned} \int_0^t \| A_s \|_{\ell^2(K)}^2 \, ds < \infty, \quad t \in \mathbb{R}_+.\end{aligned}$$* 2. *We have $\mathbb{P}$-almost surely $$\begin{aligned} \text{{\rm ($H$-)}} \int_0^t A_s dW_s = \text{{\rm ($K$-)}} \int_0^t A_s dW_s, \quad t \in \mathbb{R}_+.\end{aligned}$$*

# References

Bhaskaran, R., Tappe, S. (2022): Invariant manifolds for stochastic partial differential equations in continuously embedded Hilbert spaces. arXiv: 2111.11735v2.

Da Prato, G., Zabczyk, J. (2014): *Stochastic Equations in Infinite Dimensions.* Second Edition. Cambridge University Press, Cambridge.

Filipović, D. (2000): Invariant manifolds for weak solutions to stochastic equations. *Probability Theory and Related Fields* **118**(3), 323--341.

Filipović, D. (2001): *Consistency Problems for Heath--Jarrow--Morton Interest Rate Models.* Springer, Berlin.

Gawarecki, L., Mandrekar, V. (2011): *Stochastic Differential Equations in Infinite Dimensions with Applications to SPDEs.* Springer, Berlin.

Liu, W., Röckner, M. (2015): *Stochastic Partial Differential Equations: An Introduction.* Springer, Heidelberg.

Milian, A. (1997): Invariance for stochastic equations with regular coefficients. *Stochastic Analysis and Applications* **15**(1), 91--101.

Nakayama, T. (2004): Viability Theorem for SPDE's including HJM framework. *J. Math. Sci. Univ. Tokyo* **11**(3), 313--324.

Parthasarathy, K. R. (1967): *Probability Measures on Metric Spaces.* Academic Press, New York.

Prévôt, C., Röckner, M. (2007): *A Concise Course on Stochastic Partial Differential Equations.* Springer, Berlin.

Rajeev, B. (2013): Translation invariant diffusions in the space of tempered distributions. *Indian Journal of Pure and Applied Mathematics* **44**(2), 231--258.

Rajeev, B. (2019): Translation invariant diffusions and stochastic partial differential equations in $\mathscr{S}'$. arXiv: 1901.00277v2.

Rozovskii, B. (1990): *Stochastic Evolution Systems.* Kluwer Academic, Dordrecht.

[^1]: Stefan Tappe gratefully acknowledges financial support from the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) -- project number 444121509.
--- author: - "Jingyu Gao[^1]" - "Xiurui Geng[^2]" title: "A linearly convergent method for solving high-order proximal operator[^3]" --- [^1]: Aerospace Information Research Institute, Chinese Academy of Sciences; School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences; Key Laboratory of Technology in Geo-Spatial Information Processing and Application System, Chinese Academy of Sciences. [^2]: Aerospace Information Research Institute, Chinese Academy of Sciences; School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences; Key Laboratory of Technology in Geo-Spatial Information Processing and Application System, Chinese Academy of Sciences. [^3]: Submitted to the editors DATE.
--- abstract: | In this article, we prove that every idempotent element of a $G$-graded commutative ring $R=\bigoplus\limits_{n\in G}R_{n}$ with $G$ an Abelian group is contained in the subring $\bigoplus\limits_{h\in H}R_{h}$ where $H$ is the torsion subgroup of $G$. This theorem gives an affirmative answer to the generalized version of Kaplansky's idempotent conjecture in the commutative case. Our result also generalizes and unifies several important results in the literature, especially including results of Kirby and Bass on idempotents. Next, we prove that the Jacobson radical of every $G$-graded commutative ring with $G$ a torsion-free (totally ordered) Abelian group is a graded ideal. We prove this theorem by a short and quite elementary method, generalizing Bergman's theorem (which asserts that the Jacobson radical of every $\mathbb{Z}$-graded ring is a graded ideal). These results also allow us to show that an Abelian group $G$ is torsion-free if and only if the Jacobson radical of every $G$-graded ring is a graded ideal, or equivalently, the idempotents of every $G$-graded ring are contained in the base subring, or equivalently, the group-ring $\mathbb{Q}[G]$ has no nontrivial idempotents. Finally, we prove that if a $G$-graded commutative ring $R$ with $G$ a torsion-free Abelian group has a non-zero-divisor homogeneous element of nonzero degree, then the nilradical and the Jacobson radical of $R$ are the same. address: | Department of Mathematics, Faculty of Basic Sciences, University of Maragheh\ P. O. Box 55136-553, Maragheh, Iran. author: - Abolfazl Tarizadeh title: Homogeneity of the Jacobson radical and idempotents in a graded ring --- # Introduction Kaplansky's idempotent conjecture (see e.g. [@Gardam]) claims that if $G$ is a torsion-free group (not necessarily Abelian) and $K$ is a field, then the group-ring $K[G]$ has no nontrivial idempotents. This conjecture is still open in general.
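The torsion-free hypothesis cannot be dropped: if $g\in G$ has finite order $n$, then $f=\frac{1}{n}(\epsilon_{0}+\epsilon_{g}+\cdots+\epsilon_{(n-1)g})$ is a nontrivial idempotent of $\mathbb{Q}[G]$ (this element reappears in the proof of Theorem 18 below). The following minimal Python sketch, our own illustrative code rather than anything from the paper, verifies this for the cyclic group $\mathbb{Z}_{6}$, with elements of $\mathbb{Q}[\mathbb{Z}_{6}]$ stored as coefficient lists and multiplication given by convolution modulo $6$.

```python
from fractions import Fraction

def grp_mul(a, b, n):
    """Multiply two elements of Q[Z_n], each given as a list of n rational
    coefficients indexed by the group elements 0..n-1 (convolution mod n)."""
    c = [Fraction(0)] * n
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[(i + j) % n] += ai * bj
    return c

n = 6
# f = (1/n) * (eps_0 + eps_g + ... + eps_{(n-1)g})
f = [Fraction(1, n)] * n
one = [Fraction(1)] + [Fraction(0)] * (n - 1)  # the identity eps_0

assert grp_mul(f, f, n) == f   # f is idempotent: f^2 = f
assert f != one and any(f)     # and nontrivial: f is neither 1 nor 0
```

Exact rational arithmetic via `Fraction` keeps the idempotency check free of floating-point noise.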
In order to get a deeper insight into the above conjecture, we generalize it in the following way. **Conjecture 1**. *Every idempotent of a $G$-graded ring $R=\bigoplus\limits_{n\in G}R_{n}$ with $G$ a torsion-free group is contained in the base subring $R_{0}$ where 0 is the identity of $G$.* The second version of the generalized idempotent conjecture reads as follows: **Conjecture 2**. *Every idempotent of a $G$-graded ring $R=\bigoplus\limits_{n\in G}R_{n}$ with $G$ an Abelian group is contained in the subring $\bigoplus\limits_{n\in H}R_{n}$ where $H$ is the torsion subgroup of $G$.* Our results completely settle the above conjectures in the commutative case (see Theorems [Theorem 4](#Lemma T-K){reference-type="ref" reference="Lemma T-K"}, [Theorem 8](#Theorem i-T){reference-type="ref" reference="Theorem i-T"} and [Theorem 11](#Theorem 3-III){reference-type="ref" reference="Theorem 3-III"}). In fact, in Theorem [Theorem 11](#Theorem 3-III){reference-type="ref" reference="Theorem 3-III"}, we prove a general result which asserts that if $M$ is a commutative monoid, then every idempotent element of an $M$-graded ring $R=\bigoplus\limits_{m\in M}R_{m}$ is contained in the subring $\bigoplus\limits_{m\in M^{\ast}}R_{m}$ where we call $M^{\ast}=\{x\in M : \exists n\geqslant1, \exists y\in M, nx+y=y\}$ the quasi-torsion submonoid of $M$. We prove this theorem in three main steps. In the first step (Theorem [Theorem 4](#Lemma T-K){reference-type="ref" reference="Lemma T-K"}), we prove a key result which asserts that every idempotent element of a $G$-graded commutative ring $R=\bigoplus\limits_{n\in G}R_{n}$ with $G$ a torsion-free Abelian group is contained in the base subring $R_{0}$.
In the second step (Theorem [Theorem 8](#Theorem i-T){reference-type="ref" reference="Theorem i-T"}), by reducing the problem to the finitely generated case and then using the fundamental theorem of finitely generated Abelian groups and most importantly using the first step, we show that every idempotent element of a $G$-graded commutative ring $R=\bigoplus\limits_{n\in G}R_{n}$ with $G$ an Abelian group is contained in the subring $\bigoplus\limits_{h\in H}R_{h}$ where $H$ is the torsion subgroup of $G$. Finally in the third step (Theorem [Theorem 11](#Theorem 3-III){reference-type="ref" reference="Theorem 3-III"}), by passing to the grading by the Grothendieck group and then using the second step, we complete the proof. As a consequence, we obtain that for any commutative ring $R$ and for any commutative monoid $M$, every idempotent element of the monoid-ring $R[M]$ is contained in the subring $R[M^{\ast}]$.\ In Theorem [Theorem 14](#Theorem II){reference-type="ref" reference="Theorem II"} we prove that the Jacobson radical of every $G$-graded commutative ring with $G$ a torsion-free Abelian group is a graded ideal. We prove this theorem by a natural and quite elementary method. Our result also generalizes Bergman's theorem [@Bergman Corollary 2] which was already proved in the literature by difficult methods (see e.g. [@Bergman; @Cohen-Mongomery; @Ilic; @Kelarev; @Kelarev-Okininski; @Mazurek]). Then inspired by Theorem [Theorem 14](#Theorem II){reference-type="ref" reference="Theorem II"}, it is quite natural to propose the following general problem (here $G$ is not necessarily an Abelian group and the ring is not necessarily commutative): **Conjecture 3**.
*The Jacobson radical of every $G$-graded ring with $G$ a torsion-free group is a graded ideal.* The elegant ideas used in the proofs of key results (including Theorems [Theorem 4](#Lemma T-K){reference-type="ref" reference="Lemma T-K"}, [Theorem 8](#Theorem i-T){reference-type="ref" reference="Theorem i-T"}, [Theorem 11](#Theorem 3-III){reference-type="ref" reference="Theorem 3-III"} and [Theorem 14](#Theorem II){reference-type="ref" reference="Theorem II"}) are another major achievement of this article. These results also allow us to show that an Abelian group $G$ is torsion-free if and only if the Jacobson radical of every $G$-graded ring is a graded ideal, or equivalently, the idempotents of every $G$-graded ring are contained in the base subring, or equivalently, the group-ring $\mathbb{Q}[G]$ has no nontrivial idempotents (see Theorem [Theorem 18](#Bergman's th charact){reference-type="ref" reference="Bergman's th charact"}). Finally, we prove that if a $G$-graded commutative ring $R$ with $G$ a torsion-free Abelian group has a non-zero-divisor homogeneous element of nonzero degree, then the nilradical and the Jacobson radical of $R$ are the same (see Theorem [Theorem 19](#Lemma 3 NZD){reference-type="ref" reference="Lemma 3 NZD"} and Corollaries [Corollary 20](#Coro 2 Gro){reference-type="ref" reference="Coro 2 Gro"}, [Corollary 21](#Coro 3 nice){reference-type="ref" reference="Coro 3 nice"}). This result also generalizes several important results in the literature. # Preliminaries The Preliminary section of the article [@A.; @Tarizadeh] is necessary and quite useful here. For the convenience of the reader, some additional background is also given below. In this article all monoids, groups and rings are assumed to be commutative. Recall that for a given ring $R$, then an element $x$ of an $R$-module $M$ is called a *torsion element* if $ax=0$ for some non-zero-divisor $a\in R$. 
It can be easily seen that $T(M)=\{x\in M:\exists a\in R\setminus Z(R), ax=0\}$, the set of torsion elements, is an $R$-submodule of $M$ (note that $R$ does not need to be an integral domain here) where $Z(R)$ denotes the set of zero-divisors of $R$. The module $T(M)$ is called the *torsion submodule* or the *torsion part of $M$*. An $R$-module $M$ is called *torsion-free* if $T(M)=0$, and *torsion* if $T(M)=M$. For example, every nonzero ideal of an integral domain is a torsion-free module. Every flat module (and hence every free or projective module) is torsion-free. As an example for torsion modules, if $A$ is the total ring of fractions of a ring $R$ and $B$ is an extension ring of $A$, then as $R$-modules $T(B/R)=A/R$, and in particular, $A/R$ is a torsion $R$-module. If $G$ is an Abelian group then as a $\mathbb{Z}$-module, $T(G)=\{x\in G:\exists n\geqslant1, nx=0\}$ is the torsion subgroup of $G$ (the subgroup of elements of finite order). It is well known that an Abelian group is torsion-free (every non-identity element is of infinite order) if and only if it is a totally ordered group. Its proof can be found in [@Lam Theorem 6.31] and [@Levi §3]. More generally, it can be shown that a monoid $M$ with the cancellation property is a totally ordered monoid if and only if $M$ is strongly torsion-free (i.e. if whenever $na=nb$ for some $n\geqslant2$ and some $a,b\in M$, then $a=b$), or equivalently, the Grothendieck group of $M$ is a torsion-free group. For the proof of the first equivalence see [@Northcott §2.12, Theorem 22], the second equivalence is clear. In Theorem [Theorem 18](#Bergman's th charact){reference-type="ref" reference="Bergman's th charact"} we give various characterizations for torsion-free Abelian groups. By the structure theorem over PIDs, if $M$ is a finitely generated module over a principal ideal domain $R$, then we have the factorization $M=T(M)\oplus F$ where $F$ is a free $R$-submodule of $M$ of finite rank (i.e. 
$F\simeq R^{d}$ for some natural number $d\geqslant0$). In particular, if $G$ is a finitely generated Abelian group then we have the decomposition $G=T(G)\oplus F$ where $F$ is a finitely generated torsion-free subgroup of $G$. Let $R=\bigoplus\limits_{n\in G}R_{n}$ be a $G$-graded ring with $G$ an Abelian group. Every nonzero element of $R_{n}$ is called a homogeneous element of $R$ of degree $n\in G$. If $f= \sum\limits_{n\in G} r_{n} \in R$ then it always means that $r_n\in R_n$ for all $n \in G$. We define the *support of $f$* as the finite set $\operatorname{Supp}(f) : = \{n\in G : r_{n} \neq 0\}$. For any ring $R$ and a monoid $M$, the monoid-ring $R[M]=\bigoplus\limits_{m\in M}S_{m}$ is an $M$-graded ring with the homogeneous components $S_{m}=R\epsilon_{m}=\{r\epsilon_{m}: r \in R\}$. The map $R\rightarrow R[M]$ given by $r\mapsto r\epsilon_{0}$ is an injective morphism of rings. If $R$ is a nonzero ring then the map $M\rightarrow R[M]$ given by $m\mapsto\epsilon_{m}$ is an injective morphism of monoids into the multiplicative monoid of $R[M]$. If $f:R\rightarrow S$ is a morphism of rings and $g:M\rightarrow S$ is a morphism of monoids into the multiplicative monoid of $S$, i.e., $g(a+b)=g(a)g(b)$ for all $a,b\in M$ and $g(0)=1$, then there exists a unique morphism of rings $h:R[M]\rightarrow S$ such that $h(r\epsilon_{m})=f(r)g(m)$ for all $r\in R$ and $m\in M$. This is called the universal property of monoid-rings. In particular, the additive map $R[M]\rightarrow R$ given by $r\epsilon_{m}\mapsto r$ is a surjective morphism of rings and its kernel is a free $R$-module with the basis $\{\epsilon_{m}-\epsilon_{0}$ : $0\neq m\in M\}$. If $M'$ is a submonoid of $M$, then by the universal property of monoid-rings, we have an injective morphism of rings $R[M']\rightarrow R[M]$ which is given by $r\epsilon'_{m}\mapsto r\epsilon_{m}$ where $\epsilon'_{m}=(\delta_{a,m})_{a\in M'}$. 
Hence, we can identify the monoid-ring $R[M']$ with the image of this map which is the subring $\sum\limits_{m\in M'}R\epsilon_{m}$. We also recall changing the grading on a given graded ring, as we shall need it later on. Let $R=\bigoplus\limits_{m\in M}R_{m}$ be an $M$-graded ring and $\varphi:M\rightarrow M'$ a monoid morphism. Then $R=\bigoplus\limits_{d\in M'}S_{d}$ can be viewed as an $M'$-graded ring with the homogeneous components $S_{d}=\sum\limits_{\substack{m\in M,\\ \varphi(m)=d}}R_{m}$ (it is clear that if $d\in M'$ is not in the image of $\varphi$, then $S_{d}=0$). In particular, if $H$ and $K$ are subgroups of an Abelian group $G$ with $G=H\oplus K$, i.e. $G=H+K$ and $H\cap K=0$, then we can make a $G$-graded ring $R=\bigoplus\limits_{x\in G}R_{x}$ into an $H$-graded ring $R=\bigoplus\limits_{h\in H}S_{h}$ with $S_{h}=\sum\limits_{\substack{x\in G,\\x-h\in K}}R_{x}$. Finally, using the canonical morphism of additive groups $\mathbb{Z}\rightarrow\mathbb{Z}_{d}=\mathbb{Z}/d\mathbb{Z}$ with $d\geqslant2$, we can make every $\mathbb{Z}$-graded ring $R=\bigoplus\limits_{n\in\mathbb{Z}}R_{n}$ into a $\mathbb{Z}_{d}$-graded ring $R=\bigoplus\limits_{k=0}^{d-1}S_{k}$ with homogeneous components $S_{k}=\sum\limits_{n\in\mathbb{Z}}R_{nd+k}$. In particular, for any ring $R$ and for each $d\geqslant0$, $\sum\limits_{n\geqslant0}Rx^{nd}=R+Rx^{d}+Rx^{2d}+\cdots$ is a subring of the polynomial ring $R[x]$. The nilradical of a ring $R$ is denoted by $\mathfrak{N}(R)$ or simply by $\mathfrak{N}$. The Jacobson radical of $R$ is denoted by $\mathfrak{J}(R)$ or simply by $\mathfrak{J}$. # Idempotents in a graded ring In this section, we give a complete characterization of idempotent elements in graded rings. It is well-known that every idempotent element of an $\mathbb{N}$-graded ring $R=\bigoplus\limits_{n\geqslant0}R_{n}$ is contained in the base subring $R_{0}$.
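The $\mathbb{N}$-graded fact just quoted can be checked concretely in the simplest case, the polynomial ring $(\mathbb{Z}/6\mathbb{Z})[x]$. The brute-force search below is our own illustrative code (not from the paper): over all polynomials of degree at most $2$, the only idempotents found are the constants $0,1,3,4$, i.e. exactly the idempotents of the base ring $\mathbb{Z}/6\mathbb{Z}$.

```python
def poly_mul(a, b, m):
    """Multiply two polynomials over Z/m, given as coefficient
    lists in increasing degree; reduce coefficients mod m."""
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] = (c[i + j] + ai * bj) % m
    return c

m = 6
idems = []
for a in range(m):
    for b in range(m):
        for c in range(m):
            f = [a, b, c]                 # f = a + b*x + c*x^2
            f2 = poly_mul(f, f, m)        # f^2, a degree <= 4 polynomial
            if f2[:3] == f and not any(f2[3:]):
                idems.append((a, b, c))

# every idempotent found is a constant, i.e. lies in the base ring Z/6
assert all(b == 0 and c == 0 for (_, b, c) in idems)
assert sorted(t[0] for t in idems) == [0, 1, 3, 4]
```

Since $\mathbb{Z}/6\mathbb{Z}$ is reduced, a non-constant idempotent is impossible (its leading coefficient would square to zero), which is what the search confirms.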
In the following theorem we will show that the same assertion holds for $\mathbb{Z}$-graded rings and more generally for $G$-graded rings with $G$ a torsion-free group. But note that the $\mathbb{N}$-graded case argument no longer works here (the proof is a bit more subtle). Our theorem also gives a natural and quite simple proof of Kirby's theorem [@Kirby Theorem 1], which was originally proved by very difficult methods and under the additional "strongly graded" (i.e. $R_{m}R_{n}=R_{m+n}$) assumption. **Theorem 4**. *Every idempotent element of a $G$-graded ring $R=\bigoplus\limits_{n\in G}R_{n}$ with $G$ a torsion-free group is contained in the base subring $R_{0}$.* *Proof.* Let $f=\sum\limits_{n\in G}r_{n}$ be an idempotent of $R$ with $r_{n}\in R_{n}$ for all $n$. If $\mathfrak{p}$ is a minimal prime ideal of $R$, then $f+\mathfrak{p}$ is an idempotent of the integral domain $R/\mathfrak{p}$. Thus $f\in\mathfrak{p}$ or $1-f\in\mathfrak{p}$. Since $G$ is a totally ordered group, by [@A.; @Tarizadeh Corollary 3.4], every minimal prime ideal of $R$ is a graded ideal. Hence, $r_{n}\in\mathfrak{p}$ and so $r_{n}$ is nilpotent for all $n\neq0$. It follows that $f+\mathfrak{N}=r_{0}+\mathfrak{N}$ where $\mathfrak{N}$ is the nilradical of $R$. This shows that $r_{0}+\mathfrak{N}_{0}$ is an idempotent of the ring $R_{0}/\mathfrak{N}_{0}$ where $\mathfrak{N}_{0}=\mathfrak{N}\cap R_{0}$ is the nilradical of $R_{0}$. But it is well known that the idempotents of any ring can be lifted modulo its nilradical (see e.g. [@Tarizadeh-Sharma]). So there exists an idempotent $e\in R_{0}$ such that $a:=r_{0}-e$ is nilpotent. It follows that $(1-e)r_{0}=(1-e)a$ and $e(1-r_{0})=-ea$ are nilpotent. Also $(1-e)r_{n}$ and $er_{n}$ are nilpotent for all $n\neq0$. This shows that $(1-e)f$ and $e(1-f)$ are nilpotent elements of $R$. But it can be seen that if $e,e'$ are idempotents of a ring such that $e-e'$ is a member of the Jacobson radical, then $e=e'$ (see [@Tarizadeh-Sharma Lemma 3.8]).
Hence, $f=ef=e\in R_{0}$. ◻ The following corollaries are immediate consequences of Theorem [Theorem 4](#Lemma T-K){reference-type="ref" reference="Lemma T-K"}. **Corollary 5**. *Every idempotent of a $\mathbb{Z}$-graded ring $R=\bigoplus\limits_{n\in\mathbb{Z}}R_{n}$ is contained in the base subring $R_{0}$.* For any ring $R$ and any monoid $M$, the set $S=\{\epsilon_{m}: m\in M\}$ is a multiplicative subset of the monoid-ring $R[M]$. It can be seen that the localization $S^{-1}(R[M])$ is canonically isomorphic (as $G$-graded rings) to the group-ring $R[G]$ where $G$ is the Grothendieck group of $M$. In particular, the localization of the polynomial ring $R[x]$ with respect to the multiplicative set $S=\{1,x,x^{2},\cdots\}$ is denoted by $R[x,x^{-1}]$ (it is called the ring of Laurent polynomials over $R$) and we have the canonical isomorphism of $\mathbb{Z}$-graded rings $R[\mathbb{Z}]\simeq R[x,x^{-1}]$. **Corollary 6**. *For any ring $R$, each idempotent of the ring of Laurent polynomials $R[x,x^{-1}]$ is contained in $R$.* Recall from [@Tarizadeh §3] or [@Tarizadeh-Taheri §4] that an ideal of a ring is called a *regular ideal* if it is generated by a set of idempotents. **Corollary 7**. *If $G$ is a torsion-free group, then the following assertions hold:\ $\mathbf{(i)}$ Every regular ideal of a $G$-graded ring is a graded ideal.\ $\mathbf{(ii)}$ Let $R$ be a $G$-graded ring. Then $\operatorname{Spec}(R)$ is connected if and only if $\operatorname{Spec}(R_{0})$ is connected.* *Proof.* It follows immediately from Theorem [Theorem 4](#Lemma T-K){reference-type="ref" reference="Lemma T-K"}. ◻ The following theorem is the second main result of this article, which also generalizes Theorem [Theorem 4](#Lemma T-K){reference-type="ref" reference="Lemma T-K"}. **Theorem 8**.
*Every idempotent of a $G$-graded ring $R=\bigoplus\limits_{n\in G}R_{n}$ with $G$ an Abelian group is contained in the subring $\bigoplus\limits_{h\in H}R_{h}$ where $H$ is the torsion subgroup of $G$.* *Proof.* Let $f$ be an idempotent of $R$. Let $K$ be the subgroup of $G$ generated by the finite subset $\operatorname{Supp}(f)$. Then $f$ is an idempotent element of the subring $\bigoplus\limits_{x\in K}R_{x}$, because $\operatorname{Supp}(f)\subseteq K$. Hence, without loss of generality, we may assume that $G$ is a finitely generated group. Then by the fundamental theorem of finitely generated Abelian groups, we may write $G=H\oplus F$ where $F$ is a (finitely generated) torsion-free subgroup of $G$. Then by changing the grading, $R=\bigoplus\limits_{x\in F}S_{x}$ can be viewed as an $F$-graded ring with the homogeneous components $S_{x}=\sum\limits_{\substack{n\in G,\\n-x\in H}}R_{n}$. Then by Theorem [Theorem 4](#Lemma T-K){reference-type="ref" reference="Lemma T-K"}, we have $f\in S_{0}=\sum\limits_{n\in H}R_{n}$. ◻ If $M$ is a commutative monoid, then it can be easily seen that the set $M^{\ast}=\{x\in M:\exists n\geqslant1, \exists y\in M, nx+y=y\}$ is a submonoid of $M$. We call $M^{\ast}$ the *quasi-torsion submonoid of $M$*. Note that the torsion submonoid $T(M)=\{x\in M:\exists n\geqslant1, nx=0\}$ is contained in $M^{\ast}$ and the equality holds if $M$ has the cancellation property. **Lemma 9**. *Let $R=\bigoplus\limits_{m\in M}R_{m}$ be an $M$-graded ring with $M$ a monoid. If $M$ has the cancellation property, then $R_{0}$ is a subring of $R$.* *Proof.* It suffices to show that $1\in R_{0}$. Suppose $1=\sum\limits_{m\in M}r_{m}$ where $r_{m}\in R_{m}$ for all $m\in M$. If $x\in M$, then $r_{x}=\sum\limits_{m\in M}r_{x}r_{m}$. Since $M$ has the cancellation property, $r_{x}r_{0}$ is the only homogeneous element of degree $x$ on the right hand side of the above equality. Hence $r_{x}=r_{x}r_{0}$ for all $x\in M$.
This yields that $1=(\sum\limits_{m\in M}r_{m})r_{0}=r_{0}\in R_{0}$. ◻ In general, we have the following result: **Lemma 10**. *Let $R=\bigoplus\limits_{m\in M}R_{m}$ be an $M$-graded ring with $M$ a monoid. Then $\bigoplus\limits_{m\in M^{0}}R_{m}$ is a subring of $R$ where we call $M^{0}=\{x\in M:\exists y\in M, x+y=y\}$ the quasi-zero submonoid of $M$.* *Proof.* Let $G=\{[a,b]: a,b\in M\}$ be the Grothendieck group of $M$. Consider the canonical morphism $f:M\rightarrow G$ which is given by $m\mapsto[m,0]$. By changing the grading, we can make $R=\bigoplus\limits_{x\in G}S_{x}$ into a $G$-graded ring with homogeneous components $S_{x}=\sum\limits_{[m,0]=x}R_{m}$. Since $G$ has the cancellation property and since $M^{0}=\operatorname{Ker}(f)=f^{-1}(0)$, by Lemma [Lemma 9](#Lemma 5-five-bes){reference-type="ref" reference="Lemma 5-five-bes"}, $S_{0}=\bigoplus\limits_{m\in M^{0}}R_{m}$ is a subring of $R$. ◻ By the above lemma, if $M'$ is a submonoid of $M$ containing $M^{0}$, then $\bigoplus\limits_{m\in M'}R_{m}$ is a subring of an $M$-graded ring $R=\bigoplus\limits_{m\in M}R_{m}$. In particular, since $M^{0}\subseteq M^{\ast}$, $\bigoplus\limits_{m \in M^{\ast}}R_{m}$ is a subring of $R$. **Theorem 11**. *Every idempotent of an $M$-graded ring $R=\bigoplus\limits_{m\in M}R_{m}$ with $M$ a monoid is contained in the subring $\bigoplus\limits_{m \in M^{\ast}}R_{m}$ where $M^{\ast}$ is the quasi-torsion submonoid of $M$.* *Proof.* It follows from Theorem [Theorem 8](#Theorem i-T){reference-type="ref" reference="Theorem i-T"} by passing to the Grothendieck group of $M$. More precisely, let $G=\{[a,b]: a,b\in M\}$ be the Grothendieck group of $M$. Consider the canonical morphism of monoids $f:M\rightarrow G$ which is given by $m\mapsto[m,0]$. By changing the grading, we can make $R=\bigoplus\limits_{x\in G}S_{x}$ into a $G$-graded ring with homogeneous components $S_{x}=\sum\limits_{[m,0]=x}R_{m}$.
Then by Theorem [Theorem 8](#Theorem i-T){reference-type="ref" reference="Theorem i-T"}, every idempotent of $R$ is contained in the subring $\bigoplus\limits_{x\in H}S_{x}$ where $H$ is the torsion subgroup of $G$. But $M^{\ast}=f^{-1}(H)$. Hence, $\bigoplus\limits_{x\in H}S_{x}=\bigoplus\limits_{m \in M^{\ast}}R_{m}$. ◻ The following result generalizes Bass' theorem (see [@Bass Lemma 6.7] or [@Kanwar-Leroy Corollary 4]). **Corollary 12**. *For any ring $R$ and any monoid $M$, every idempotent of the monoid-ring $R[M]$ is contained in the subring $R[M^{\ast}]$ where $M^{\ast}$ is the quasi-torsion submonoid of $M$.* *Proof.* We know that the monoid-ring $R[M]=\bigoplus\limits_{m\in M}S_{m}$ is an $M$-graded ring with homogeneous components $S_{m}=R\epsilon_{m}$. Then by Theorem [Theorem 11](#Theorem 3-III){reference-type="ref" reference="Theorem 3-III"}, every idempotent of $R[M]$ is contained in the subring $\bigoplus\limits_{m\in M^{\ast}}S_{m}=R[M^{\ast}]$. ◻ **Corollary 13**. *If $M$ is a torsion-free monoid with the cancellation property, then every idempotent of the monoid-ring $R[M]$ is contained in $R$.* *Proof.* By hypothesis, $M^{\ast}=0$ and so $R[M^{\ast}]=R$. Hence, the assertion follows from the above corollary. ◻ # Homogeneity of the Jacobson radical In Theorem [Theorem 14](#Theorem II){reference-type="ref" reference="Theorem II"} we generalize Bergman's theorem to the setting of $G$-graded rings, and give a new and quite elementary proof of it. **Theorem 14**. *The Jacobson radical of every $G$-graded ring with $G$ a torsion-free group is a graded ideal.* *Proof.* Let $R=\bigoplus\limits_{n\in G}R_{n}$ be a $G$-graded ring. It suffices to show that $\mathfrak{J}(R)=\bigcap \limits_{\mathfrak{m}\in\operatorname{Max}(R)}\mathfrak{m}^{\ast}$, where $\mathfrak{m}^{\ast}$ denotes the largest graded ideal of $R$ contained in $\mathfrak{m}$. The inclusion $\bigcap\limits_{\mathfrak{m}\in\operatorname{Max}(R)}\mathfrak{m}^{\ast} \subseteq\mathfrak{J}(R)$ is clear, since $\mathfrak{m}^{\ast}\subseteq\mathfrak{m}$ for every $\mathfrak{m}$.
To establish the reverse inclusion, take $f=\sum\limits_{n\in G}r_{n}\in\mathfrak{J}(R)$ with $r_{n}\in R_{n}$ for all $n$. Suppose there is a maximal ideal $\mathfrak{m}$ of $R$ such that $f\notin\mathfrak{m}^{\ast}$. Then there exists some $0\neq d\in G$ such that $r_{d}\notin\mathfrak{m}^{\ast}$, because if $r_{n}\in\mathfrak{m}^{\ast}$ for all $n\neq0$, then $r_{0}=f-\sum\limits_{n\neq0}r_{n}\in\mathfrak{m}$ and hence $r_{0}\in\mathfrak{m}^{\ast}$, yielding $f\in\mathfrak{m}^{\ast}$ which is a contradiction. Since $G$ is a totally ordered group, by [@A.; @Tarizadeh Corollary 3.3], the graded ideal $\mathfrak{m}^{\ast}$ is a prime ideal of $R$. We know that $1+r_{d}f$ is invertible in $R$ and thus its image is invertible in the $G$-graded integral domain $R/\mathfrak{m}^{\ast}$. By [@A.; @Tarizadeh Lemma 5.2], in a $G$-graded integral domain, every invertible element is homogeneous. Thus there exists some $k$ such that the homogeneous component $(1+r_{d}f)_{n}\in\mathfrak{m}^{\ast}$ for all $n\neq k$. We have $(1+r_{d}f)_{n}=r_{d}r_{n-d}+\delta_{0,n}$ where $\delta_{0,n}$ is the Kronecker delta. It follows that $k=2d$; indeed, if $k\neq2d$ then $r_{d}^{2}=(1+r_{d}f)_{2d}\in\mathfrak{m}^{\ast}$ (note that $2d\neq0$ since $G$ is torsion-free), and the primality of $\mathfrak{m}^{\ast}$ would give $r_{d}\in\mathfrak{m}^{\ast}$, a contradiction. If $n\neq d, -d$ then $n+d\neq 2d,0$ and thus $r_{d}r_{n}=(1+r_{d}f)_{n+d}\in\mathfrak{m}^{\ast}$. Therefore, $r_{n}\in\mathfrak{m}^{\ast}$ for all $n\neq d, -d$. This yields $r_{d}+r_{-d}-f\in\mathfrak{m}^{\ast}$. Hence, $1+r_{d}+r_{-d}+\mathfrak{m}^{\ast}$ is invertible in $R/\mathfrak{m}^{\ast}$. Again by [@A.; @Tarizadeh Lemma 5.2], we get that $r_{d}\in\mathfrak{m}^{\ast}$ which is a contradiction. ◻ If $M$ is a totally ordered monoid (by an ordering $<$) with the cancellation property, then it can be seen that its Grothendieck group is a totally ordered group whose order is defined by $[a,b]<[c,d]$ if $a+d<b+c$ in $M$.\ As applications of Theorem [Theorem 14](#Theorem II){reference-type="ref" reference="Theorem II"}, we get the following results. **Corollary 15**.
*If $M$ is a totally ordered monoid with the cancellation property, then the Jacobson radical of an $M$-graded ring is a graded ideal.* *Proof.* It follows from Theorem [Theorem 14](#Theorem II){reference-type="ref" reference="Theorem II"} by passing to the $G$-grading where $G$ is the Grothendieck group of $M$ which is a torsion-free group. ◻ **Corollary 16**. *If $I$ is an ideal of a $G$-graded ring $R$ with $G$ a torsion-free group, then the colon ideal $\mathfrak{J}(R):_{R}I$ is a graded ideal.* *Proof.* By Theorem [Theorem 14](#Theorem II){reference-type="ref" reference="Theorem II"}, the Jacobson radical of $R$ is a graded ideal. It is also a radical ideal. Hence, the desired conclusion follows from [@A.; @Tarizadeh Theorem 6.1]. ◻ **Corollary 17**. *Let $f=\sum\limits_{i\in G}r_{i}$ and $g=\sum\limits_{k\in G}r'_{k}$ be elements of a $G$-graded ring $R=\bigoplus\limits_{i\in G}R_{i}$ with $G$ a torsion-free group and $r_{i},r'_{i}\in R_{i}$ for all $i$. Then $fg\in\mathfrak{J}(R)$ if and only if $r_{i}r'_{k}\in\mathfrak{J}(R)$ for all $i,k\in G$.* *Proof.* It follows from Theorem [Theorem 14](#Theorem II){reference-type="ref" reference="Theorem II"} and [@A.; @Tarizadeh Corollary 6.6]. ◻ # A characterization result and quasi-Jacobson rings We have the following theorem which is obtained in the light of the above results. By $\mathbb{Z}_{p}=\mathbb{Z}/p\mathbb{Z}$ we mean the ring of integers modulo $p$ and by $\mathbb{Q}$ we mean the field of rational numbers. **Theorem 18**. 
*For an Abelian group $G$ the following assertions are equivalent:\ $\mathbf{(i)}$ $G$ is a totally ordered group.\ $\mathbf{(ii)}$ $G$ is a torsion-free group.\ $\mathbf{(iii)}$ The nilradical of every $G$-graded ring is a graded ideal.\ $\mathbf{(iv)}$ For each prime number $p$, the nilradical of the group-ring $\mathbb{Z}_{p}[G]$ is a graded ideal.\ $\mathbf{(v)}$ The Jacobson radical of every $G$-graded ring is a graded ideal.\ $\mathbf{(vi)}$ For each prime number $p$, the Jacobson radical of the group-ring $\mathbb{Z}_{p}[G]$ is a graded ideal.\ $\mathbf{(vii)}$ The idempotents of every $G$-graded ring are contained in its base subring.\ $\mathbf{(viii)}$ For any ring $R$, each idempotent of the group-ring $R[G]$ is contained in $R$.\ $\mathbf{(ix)}$ For any field $K$, the group-ring $K[G]$ has no nontrivial idempotents.\ $\mathbf{(x)}$ The group-ring $\mathbb{Q}[G]$ has no nontrivial idempotents.\ $\mathbf{(xi)}$ For any integral domain $R$, the group-ring $R[G]$ is an integral domain.\ $\mathbf{(xii)}$ The group-ring $\mathbb{Q}[G]$ is an integral domain.\ $\mathbf{(xiii)}$ For any integral domain $R$, the units of the group-ring $R[G]$ are precisely of the form $r\epsilon_{g}$ with $r$ invertible in $R$ and $g\in G$.\ $\mathbf{(xiv)}$ The units of the group-ring $\mathbb{Q}[G]$ are precisely of the form $r\epsilon_{g}$ with $0\neq r\in\mathbb{Q}$ and $g\in G$.* *Proof.* (i)$\Leftrightarrow$(ii): This is well-known.\ (i)$\Rightarrow$(iii): See [@A.; @Tarizadeh Corollary 3.4].\ (iii)$\Rightarrow$(iv): There is nothing to prove.\ (iv)$\Rightarrow$(ii): If $G$ is not torsion-free, then it has a nonzero element $h\in G$ of finite order $n\geqslant2$. Let $p$ be a prime divisor of $n$, and note that $g:=(n/p)h\in G$ is a nonzero element of order $p$.
Clearly in the group-ring $\mathbb{Z}_{p}[G]$ the element $\epsilon_{0}-\epsilon_{g}$ is nilpotent, because the Frobenius endomorphism yields that $(\epsilon_{0}-\epsilon_{g})^{p}=(\epsilon_{0})^{p}-(\epsilon_{g})^{p} =\epsilon_{0}-\epsilon_{pg}=\epsilon_{0}-\epsilon_{0}=0$. But $\epsilon_{0}$ (as well as $\epsilon_{g}$) is not nilpotent which is a contradiction.\ (ii)$\Rightarrow$(v): This is Theorem [Theorem 14](#Theorem II){reference-type="ref" reference="Theorem II"}.\ (v)$\Rightarrow$(vi): It is clear.\ (vi)$\Rightarrow$(ii): It is proved exactly like the implication (iv)$\Rightarrow$(ii).\ (ii)$\Rightarrow$(vii): This is Theorem [Theorem 4](#Lemma T-K){reference-type="ref" reference="Lemma T-K"}.\ (vii)$\Rightarrow$(viii)$\Rightarrow$(ix)$\Rightarrow$(x): All implications obvious.\ (x)$\Rightarrow$(ii): If $G$ is not torsion-free, then it has a nonzero element $g\in G$ of finite order $n\geqslant2$. Then the element $f:= (1/n)\sum\limits_{k=0}^{n-1}\epsilon_{kg}=(1/n) \big(\epsilon_{0}+\epsilon_{g}+\cdots+\epsilon_{(n-1)g}\big)$ of the group-ring $\mathbb{Q}[G]$ is a nontrivial idempotent. Indeed, $f^{2}=(1/n^{2})\sum\limits_{0\leqslant k,d\leqslant n-1}\epsilon_{(k+d)g}= n(1/n^{2})\sum\limits_{k=0}^{n-1}\epsilon_{kg}=f$.\ (i)$\Rightarrow$(xi): Suppose $(a\epsilon_{x})(b\epsilon_{y})=0$ for some $a,b\in R$ and some $x,y\in G$. Then $ab\epsilon_{x+y}=0$ and so $ab=0$. This yields that $a=0$ or $b=0$. Thus by [@A.; @Tarizadeh Lemma 3.1], the zero ideal of $R[G]$ is a prime ideal.\ (xi)$\Rightarrow$(xii)$\Rightarrow$(x): Clear.\ (i)$\Rightarrow$(xiii): Let $f$ be an invertible element of $R[G]$. By (xi), $R[G]$ is an integral domain. Thus by [@A.; @Tarizadeh Lemma 5.2], there exist some $r\in R$ and $a\in G$ such that $f=r\epsilon_{a}$. By [@A.; @Tarizadeh Lemma 5.1], $f^{-1}=r'\epsilon_{-a}$ for some $r'\in R$. 
We have $(r\epsilon_{a})(r'\epsilon_{-a})=\epsilon_{0}$ and so $rr'=1$.\ (xiii)$\Rightarrow$(xiv): Obvious.\ (xiv)$\Rightarrow$(ii): If $G$ is not torsion-free, then it has a nonzero element $g\in G$ of finite order $n\geqslant2$. If $n=2$ then $2\epsilon_{0}+\epsilon_{g}$ is invertible in $\mathbb{Q}[G]$, because $(2\epsilon_{0}+\epsilon_{g})\cdot \big((2/3)\epsilon_{0}-(1/3)\epsilon_{g}\big) =\epsilon_{0}=1$. If $n\geqslant3$ then $f:=(1/n)\sum\limits_{k=0}^{n-1}\epsilon_{kg}= (1/n)(\epsilon_{0}+\cdots+\epsilon_{(n-1)g})$ is an idempotent and so $1-2f=\big((n-2)/n\big)\epsilon_{0}-(2/n) \sum\limits_{k=1}^{n-1}\epsilon_{kg}$ is invertible in $\mathbb{Q}[G]$. But $2\epsilon_{0}+\epsilon_{g}$ and $1-2f$ are not homogeneous which contradicts the hypothesis. ◻ We know that the nilradical of any ring is contained in its Jacobson radical. If the nilradical and the Jacobson radical of a ring are the same, then we call it a *quasi-Jacobson ring*. Note that this notion generalizes the classical Jacobson ring notion. Recall that a ring is called a *Jacobson ring* if every prime ideal is an intersection of maximal ideals. One can show that a ring $R$ is Jacobson if and only if each quotient ring $R/I$ is quasi-Jacobson where $I$ is an ideal of $R$. It also can be easily seen that a ring $R$ is quasi-Jacobson if and only if the maximal spectrum $\operatorname{Max}(R)$ is Zariski dense in $\operatorname{Spec}(R)$. In this regard, we have the following result. **Theorem 19**. *If a $G$-graded ring $R=\bigoplus\limits_{x\in G}R_{x}$ with $G$ a torsion-free group has a non-zero-divisor homogeneous element of nonzero degree, then $R$ is quasi-Jacobson.* *Proof.* It suffices to show that $\mathfrak{J}(R)\subseteq\mathfrak{N}(R)$. Take $f=\sum\limits_{x\in G}r_{x}\in\mathfrak{J}(R)$ where $r_{x}\in R_{x}$ for all $x\in G$. Let $a\in R_{y}$ be a non-zero-divisor of $R$ for some $0\neq y\in G$. 
Since $y$ is of infinite order (because $G$ is torsion-free) and since $\operatorname{Supp}(f)$ is a finite set, we can find a natural number $N\geqslant1$ such that $x+Ny\neq0$ for all $x\in\operatorname{Supp}(f)$. We know that $1+fa^{N}$ is invertible in $R$. Note that the homogeneous component of degree zero of the element $1+fa^{N}$ is 1. Then using [@A.; @Tarizadeh Theorem 5.8] and the fact that $a^{N}$ is a non-zero-divisor of $R$, we deduce that $r_{x}a^{N}$, and hence $r_{x}$, is nilpotent for all $x\in\operatorname{Supp}(f)$. Hence, $f=\sum\limits_{x\in\operatorname{Supp}(f)}r_{x}$ is nilpotent. ◻ **Corollary 20**. *Let $R=\bigoplus\limits_{m\in M}R_{m}$ be an $M$-graded ring where $M$ is a totally ordered monoid with the cancellation property. If $R$ has a non-zero-divisor homogeneous element of nonzero degree, then $R$ is quasi-Jacobson.* *Proof.* It follows from Theorem [Theorem 19](#Lemma 3 NZD){reference-type="ref" reference="Lemma 3 NZD"} by passing to the $G$-grading where $G$ is the Grothendieck group of $M$, which is a torsion-free group. ◻ **Corollary 21**. *Let $R$ be a ring and $M$ a nonzero totally ordered monoid which has the cancellation property. Then the monoid-ring $R[M]$ is quasi-Jacobson.* *Proof.* We may assume $R$ is a nonzero ring (the assertion is clear for the zero ring). Since $M$ has the cancellation property, $\epsilon_{m}$ is a non-zero-divisor of $R[M]$ for all $m\in M$. Since $M$ is nonzero, $R[M]$ has a non-zero-divisor homogeneous element of nonzero degree. Then apply the above corollary. ◻ By the above result, for any ring $R$ the polynomial ring $R[x]$ is quasi-Jacobson. If $G$ is a nonzero torsion-free group, then the group-ring $R[G]$ is quasi-Jacobson. In particular, the ring of Laurent polynomials $R[x,x^{-1}]$ is quasi-Jacobson.

H. Bass, Euler Characteristics and Characters of Discrete Groups, Invent. Math. 35 (1976) 155-196.

G.M. Bergman, On Jacobson radicals of graded rings, Unpublished work, (1975) 10 pp.

M. Cohen and S. Montgomery, Group-graded rings, smash products, and group actions, Trans. Amer. Math. Soc. 282 (1984) 237-258.

G. Gardam, A counterexample to the unit conjecture for group rings, Ann. Math. 194(3) (2021) 967-979.

E. Ilić-Georgijević, On the Jacobson radical of a groupoid graded ring, J. Algebra 573 (2021) 561-575.

P. Kanwar, A. Leroy and J. Matczuk, Idempotents in ring extensions, J. Algebra 389 (2013) 128-136.

D. Kirby, Idempotents in a graded ring, J. London Math. Soc. 8(2) (1974) 375-376.

A.V. Kelarev, On the Jacobson radical of graded rings, Comment. Math. Univ. Carolin. 33(1) (1992) 21-24.

A.V. Kelarev and J. Okninski, The Jacobson radical of graded PI-rings and related classes of rings, J. Algebra 186 (1996) 818-830.

T.Y. Lam, A First Course in Noncommutative Rings, Springer-Verlag (2001).

F.W. Levi, Ordered groups, Proc. Indian Acad. Sci. A **16**(4) (1942) 256-263.

R. Mazurek, P.P. Nielsen and M. Ziembowski, The upper nilradical and Jacobson radical of semigroup graded rings, J. Pure Appl. Algebra 219(4) (2015) 1082-1094.

D.G. Northcott, Lessons on Rings, Modules and Multiplicities, Cambridge University Press (1968).

A. Tarizadeh, Flat topology and its dual aspects, Commun. Algebra 47(1) (2019) 195-205.

A. Tarizadeh, Homogeneity of zero-divisors, units and colon ideals in a graded ring, https://doi.org/10.48550/arXiv.2108.10235 (2021).

A. Tarizadeh and Z. Taheri, Stone type representations and dualities by power set ring, J. Pure Appl. Algebra 225(11) (2021) 106737.

A. Tarizadeh and P.K. Sharma, Structural results on lifting, orthogonality and finiteness of idempotents, Rev. R. Acad. Cienc. Exactas Fís. Nat. Ser. A Mat. (RACSAM) **116**(1), 54 (2022).
--- abstract: | We provide a different proof of the equivariant version of the Borsuk-Whitehead-Hanner Theorem in the category of proper $G$-spaces which are metrizable by a $G$-invariant metric. author: - | Luis Martínez [^1]\ e-mail: luchomartinez9816\@hotmail.com\ Departamento de Matemáticas, Facultad de Ciencias, Universidad Nacional Autónoma de México --- **2020 AMS Subject Classification:** 55M15, 54H15.\ **Key Words:** Adjunction space, proper $G$-space, $G$-deformation. # Introduction Given two metrizable spaces $X$ and $Y$ and a continuous map $f:A\rightarrow Y$, where $A$ is a closed subset of $X$, the Borsuk-Whitehead-Hanner Theorem (see [@Hu Chapter IV, Theorem 5.3], or [@Hanner1951 Theorem 8.2]) states that the adjunction space $X\cup_f Y$ is an ANE (absolute neighborhood extensor) for the class of metrizable spaces provided that $X$, $A$ and $Y$ are ANE's and $X\cup_f Y$ is metrizable. More general versions of this fact were shown in [@Hyman1967]. An equivariant version of this theorem, in the context of proper $G$-spaces where $G$ is a locally compact Hausdorff group, was shown in [@Antonyan2001 Theorem 3.11]. That proof follows Hyman's idea and constructs $G$-semicanonical covers. The goal of this paper is to present a proof of the theorem without using $G$-semicanonical covers. To that end, we present Lemma [Lemma 7](#-1R){reference-type="ref" reference="-1R"}, which was shown in [@Antonyan1990] in the context of compact groups, and from this lemma we prove Theorem [Theorem 8](#main){reference-type="ref" reference="main"}. The principal tool of the proof is the construction of equivariant deformations of the adjunction space. # The notions {#pre} Throughout the paper the letter $G$ will always denote a locally compact Hausdorff group and $\mathbb{N}=\{1,2,\dots\}$ denotes the set of natural numbers.
By a topological transformation group we mean a triple $(G,X,\theta)$, where $\theta: G\times X\rightarrow X$ is a continuous action, that is, a continuous map such that $\theta(g,\theta(h,x))=\theta(gh,x)$ and $\theta(e,x)=x$ for any $g, h\in G$ and any $x\in X$, where $e$ is the unity of $G$. A space $X$ together with a fixed action $\theta$ of the group $G$ is called a $G$-space. It is usual to write $gx$ for the image $\theta(g,x)$ of a pair $(g,x)\in G\times X$. In a similar way, we write $G(S)=\theta(G\times S)$ for each $S\subseteq X$. In particular, $G(x)$ denotes the $G$-orbit of $x\in X$. For any $x\in X$ we denote by $G_x=\{g\in G: gx=x\}$ the isotropy subgroup (or stabilizer) of $x\in X$. The orbit space $X/G=\{G(x): x\in X\}$ is endowed with the quotient topology determined by the orbit map $p :X\rightarrow X/G$, $x\mapsto G(x)$, $x\in X$. If $S\subseteq X$ satisfies $S=p^{-1}(p(S))$ (that is, $S=G(S)$), then $S$ is called a $G$-subset (or an invariant subset) of $X$. If $Y$ is another $G$-space, a continuous map $f: X\rightarrow Y$ is called a $G$-map (or an equivariant map) if $f(gx) = gf(x)$ for every $g\in G$ and $x\in X$. If $G$ acts trivially on $Y$, then $f$ is called an invariant map. Let $X$ and $Y$ be $G$-spaces. Two $G$-maps $p, q: X\rightarrow Y$ are called $G$-homotopic provided that there exists a continuous map $F: X\times I\rightarrow Y$ such that $F_t: X\rightarrow Y$, $x\mapsto F(x,t)$, is a $G$-map for any $t\in I$, $F_0=p$ and $F_1=q$. In this case $F$ is called a $G$-homotopy. Take $\mathcal{V}$ an open cover of $Y$. $F$ is said to be *limited* by $\mathcal{V}$ (or a $(\mathcal{V},G)$-homotopy) provided that for any $x\in X$ there exists $V\in \mathcal{V}$ such that $F(\{x\}\times I)\subseteq V$. In this case $p$ and $q$ are called $(\mathcal{V},G)$-homotopic. Let $\epsilon>0$ be a real number and suppose that $Y$ is metrizable.
$F$ is called an $(\epsilon,G)$-homotopy provided that $F(\{x\}\times I)$ has diameter less than $\epsilon$, for each $x\in X$. Take $X$ a $G$-space. A $G$-homotopy $F:X\times I\rightarrow X$ such that $F_0$ is the identity map on $X$ is called a $G$-deformation of $X$. If $\mathcal{V}$ is an open cover of $X$ and $F$ is a $(\mathcal{V},G)$-homotopy, then $F$ is called a $(\mathcal{V},G)$-deformation. If $X$ is metrizable and $F$ is an $(\epsilon,G)$-homotopy, then $F$ is called an $(\epsilon,G)$-deformation. A sequence of $G$-deformations $(F^n)_{n\in \mathbb{N}}$ of $X$ is said to converge to the identity map provided that for every $x\in X$ and every neighborhood $V$ of $x$ in $X$, there exist a neighborhood $W$ of $x$ in $X$ and $k\in \mathbb{N}$ such that $F^n(W\times I)\subseteq V$, for every $n\geq k$. **Example 1**. Let $(X,d)$ be a metric $G$-space. Suppose that for each $n\in\mathbb{N}$ there exists a $(\frac{1}{n},G)$-deformation $F^n$ of $X$. We are going to check that $(F^n)_{n\in \mathbb{N}}$ converges to the identity. Take $x\in X$ and let $V$ be a neighborhood of $x$ in $X$. There exists $n\in\mathbb{N}$ such that $B_d(x,\frac{1}{n})\subseteq V$. Observe that $F^k(B_d(x,\frac{1}{2n+1})\times I)\subseteq V$, for each $k\geq 2n+1$. Indeed, take $k\geq 2n+1$, $y\in B_d(x,\frac{1}{2n+1})$ and $t\in I$. Then: $$\begin{aligned} d(F^k(y,t), x) & \leq d(F^k(y,t),F^k(y,0))+d(F^k(y,0),x) \\ &\leq\mbox{diam}(F^k(\{y\}\times I))+d(y,x)\\ & <\frac{1}{k}+\frac{1}{2n+1}\\ & <\frac{1}{n}, \end{aligned}$$ then $F^k(y,t)\in V$ and we conclude that $F^k(B_d(x,\frac{1}{2n+1})\times I)\subseteq V$. This shows that $(F^n)_{n\in \mathbb{N}}$ converges to the identity. Let $X$ be a $G$-space. A subset $V\subseteq X$ is called small if for every point of $X$ there is a neighborhood $U$ in $X$ with the property that the set $\langle U,V\rangle=\{g\in G: gU\cap V\neq \emptyset\}$ has compact closure in $G$.
Recall that $X$ is called proper (in the sense of Palais [@Palais61 Definition 1.2.2]), if $X$ has an open cover consisting of small sets. In this case, each stabilizer is compact and each orbit is closed [@Palais61 Proposition 1.1.4]. In the present paper we are interested in the class $G$-$\mathcal{M}$ of all metrizable proper $G$-spaces $X$ that admit a $G$-invariant metric. Remember that a compatible metric $d$ on a proper $G$-space $X$ is called $G$-invariant provided that $d(gx,gy)=d(x,y)$ for any $x, y\in X$ and $g\in G$. In this case the orbit space is metrizable. Indeed, the function $\tilde{d}(G(x),G(y))=\mbox{inf}\{d(\tilde{x},\tilde{y}): \tilde{x}\in G(x), \tilde{y}\in G(y)\}$, defines a compatible metric with the quotient topology of $X/G$ [@Palais61 Theorem 4.3.4]. Let $Y$ be a $G$-space. A closed $G$-subset $A$ of $Y$ is called $G$-neighborhood retract (resp. $G$-retract) of $Y$ if there exists a $G$-neighborhood $U$ of $A$ (resp. $U=Y$) and a $G$-retraction $r: U\rightarrow A$ (resp. $r: Y\rightarrow A$). A $G$-space $X$ is called *absolute neighborhood extensor* (resp. *absolute extensor*) of $G$-$\mathcal{M}$ provided that, for every $Y\in G$-$\mathcal{M}$ and every closed $G$-subset $A$ of $Y$, every $G$-map $f: A\rightarrow X$ can be extended to a $G$-map $F: U\rightarrow X$ on some $G$-neighborhood $U$ of $A$ in $Y$ (resp. on all of $Y$). In this case we write $X\in G$-A(N)E. A $G$-space $X\in G$-$\mathcal{M}$ is called *absolute neighborhood retract* (resp. *absolute retract*) of $G$-$\mathcal{M}$ provided that, for every $Y\in G$-$\mathcal{M}$ and every closed $G$-embedding $\iota: X\rightarrow Y$, the image $\iota(X)$ is $G$-retract of some $G$-neighborhood in $Y$ (resp. of all of $Y$). In this case we write $X\in G$-A(N)R. Given $X\in G$-$\mathcal{M}$, it is known by [@Antonyan2014 Corollary 6.3] that $X\in G$-A(N)E if and only if $X\in G$-A(N)R. In the sequel we will need the following results: **Theorem 2**. 
*[@Antonyan2014 Theorem 6.1] For every $X\in G$-$\mathcal{M}$ there exists $L\in G$-$\mathcal{M}$ such that $L\in G$-AE and $X$ can be considered as an invariant closed subset of $L$.* **Lemma 3**. Let $X\in G$-$\mathcal{M}$. Then: 1. If $A$ and $B$ are disjoint closed $G$-subsets of $X$, then there exists an invariant map $F: X\rightarrow I$ such that $F(A)=\{0\}$ and $F(B)=\{1\}$. In particular, there are disjoint open $G$-subsets $U$ and $V$ of $X$ such that $A\subseteq U$ and $B\subseteq V$. 2. Let $A$ be a closed $G$-subset of $X$ and let $O$ be an open $G$-subset of $X$. If $A\subseteq O$, then there exists $W$ an open $G$-subset of $X$ such that $A\subseteq W\subseteq \overline{W}\subseteq O$. *Proof.* (i) Let $p: X\rightarrow X/G$ be the orbit map. Observe that $A=p{}^{-1}( p (A))$ and $B=p{}^{-1}( p(B))$, then $p(A)$ and $p(B)$ are disjoint closed subsets of $X/G$. Since $X/G$ is normal, then there exists an invariant map $f: X/G\rightarrow I$ such that $f(p(A))=\{0\}$ and $f(p(B))=\{1\}$. Notice that $F=f\circ p$ satisfies the desired condition. \(ii\) Put $B=X\setminus O$. Since $A$ and $B$ are disjoint closed $G$-subsets of $X$, by (i) there are disjoint open $G$-subsets $W$ and $V$ of $X$ such that $A\subseteq W$ and $B\subseteq V$. Then $A\subseteq W\subseteq\overline{W}\subseteq X\setminus V\subseteq O$. ◻ **Lemma 4**. Let $X, Y\in G$-$\mathcal{M}$ and let $A$ be a closed $G$-subset of $X$. Let $f: T\rightarrow Y$ be a $G$-map, where $T$ is the closed $G$-subset $T=(X\times \{0\})\cup (A\times I)$ of $X\times I$. If $f$ can be extended to a $G$-map $\tilde{f}: (X\times \{0\})\cup U\rightarrow Y$ for some $G$-neighborhood $U$ of $A\times I$ in $X\times I$, then $f$ extends to a $G$-map $F: X\times I\rightarrow Y$. *Proof.* Since $I$ is compact, then there exists a neighborhood $\tilde{V}$ of $A$ in $X$ such that $\tilde{V}\times I\subseteq U$. Since $U$ is invariant, then $V\times I\subseteq U$, where $V=G(\tilde{V})$. 
Notice that $V$ is a $G$-neighborhood of $A$ in $X$. By Lemma [Lemma 3](#twopro){reference-type="ref" reference="twopro"} there exists an invariant map $e: X\rightarrow I$ such that $e(A)=\{1\}$ and $e(X\setminus V)=\{0\}$. Consider $F: X\times I\rightarrow Y$, given by: $F(x,t)=\tilde{f}(x,e(x)t)$, for each $(x,t)\in X\times I$, which is a $G$-map and extends $f$. ◻ **Lemma 5**. [@Antonyan2001 Lemma 3.10] Let $X\in G$-ANE $\cap\ G$-$\mathcal{M}$. If $A$ is a closed $G$-subset of $X$ and $A\in G$-ANR, then for any open cover $\mathcal{V}$ of $X$ there exists a $(\mathcal{V},G)$-homotopy $H:X\times I \rightarrow X$ such that: 1. $H_0=\mbox{id}_X$. 2. $H(a,t)=a$, for any $a\in A$ and any $t\in I$. 3. There exists a $G$-neighborhood $V$ of $A$ in $X$ such that $H_1(V)=A$. **Lemma 6**. Let $X\in G$-AE and let $A$ be a closed $G$-subset of $X$. Then $A\in G$-AE if and only if $A$ is a strong $G$-deformation retract of $X$. *Proof.* $\Rightarrow)$ Since $A\in G$-AE and $A$ is a closed $G$-subset of $X$, the identity map of $A$ extends to a $G$-retraction $r: X\rightarrow A$. Consider the $G$-map $f: (X\times\{0\})\cup (A\times I)\cup(X\times\{1\})\rightarrow X$ given by: $f(x,t)=\left\{ \begin{array}{ll} x, & \mbox{if } x\in X,\ t=0 \\ x, & \mbox{if } x\in A,\ t\in I \\ r(x), & \mbox{if } x\in X,\ t=1 \end{array} \right.$ Observe that $X\times I\in G$-$\mathcal{M}$ and $X\in G$-AE, so $f$ can be extended to a $G$-map $F: X\times I\rightarrow X$. Then $F_0=\mbox{id}_X$, $F$ is stationary on $A$ and $F_1=r$ takes values in $A$, hence $A$ is a strong $G$-deformation retract of $X$. $\Leftarrow)$ Since $A$ is a $G$-retract of the $G$-AE space $X$, then $A\in G$-AE. ◻ # Adjunction spaces and $G$-ANE's **Lemma 7**. Let $X\in G$-$\mathcal{M}$ and let $L\in G$-$\mathcal{M}\cap G$-AE such that $X$ is a closed $G$-subset of $L$. Then the following statements are equivalent: 1. $X\in G$-ANE. 2. For each open cover $\mathcal{V}$ of $X$ there exists a $(\mathcal{V},G)$-deformation $F:X\times I\rightarrow X$ of $X$ such that $F_1$ is equivariantly extendable to some $G$-neighborhood of $X$ in $L$. 3.
For some metric on $X$ and for any $\epsilon>0$, there exists an $(\epsilon,G)$-deformation $F:X\times I\rightarrow X$ such that $F_1$ is equivariantly extendable to some $G$-neighborhood of $X$ in $L$. 4. There exists a sequence of $G$-deformations $(F^n)_{n\in \mathbb{N}}$ of $X$ converging to the identity map such that $F^n_1$ extends to some $G$-neighborhood of $X$ in $L$, for any $n\in \mathbb{N}$. *Proof.* (i) $\Rightarrow$ (ii) Consider $\mathcal{V}$ an open cover of $X$ and define $F:X\times I\rightarrow X$ by $F(x,t)=x$, $x\in X$, $t\in I$. Then $F$ is trivially limited by $\mathcal{V}$, and $F_1=\mbox{id}_X$ is equivariantly extendable to some $G$-neighborhood of $X$ in $L$ because $X\in G$-ANE. Hence $F$ satisfies (ii). (ii)$\Rightarrow$ (iii) Let $d$ be a compatible metric on $X$. Take $\epsilon>0$ and put $\mathcal{V}_\epsilon=\{B_d(x,\frac{\epsilon}{2}): x\in X\}$, which is an open cover of $X$. By (ii) there exists a $(\mathcal{V}_\epsilon,G)$-deformation $F$ of $X$ such that $F_1$ is equivariantly extendable to some $G$-neighborhood of $X$ in $L$. Notice that $F$ is an $(\epsilon,G)$-deformation. (iii)$\Rightarrow$(iv) For each $n\in \mathbb{N}$ let $F^n$ be a $(\frac{1}{n},G)$-deformation of $X$ such that $F^n_1$ can be extended to a $G$-neighborhood of $X$ in $L$. It was shown in Example [Example 1](#ejeconverid){reference-type="ref" reference="ejeconverid"} that $(F^n)_{n\in \mathbb{N}}$ converges to the identity. \(iv\) $\Rightarrow$ (i) Take a $G$-neighborhood $U$ of $X$ in $L$ and a $G$-extension $f: U\rightarrow X$ of $F^1_1$. Put $t_n=1-2^{-(n-1)}$ for any $n\in \mathbb{N}$. We want to construct a $G$-map $F: U\times [0,1)\rightarrow X$ such that $F(x,t_n)=F^n_1(x)$, for each $x\in X$ and each $n\in \mathbb{N}$. First define $F$ on $U\times \{0\}$ by $F(x,0)=f(x)$, $x\in U$. Since $t_1=0$, suppose that $F$ has been defined on $U\times [t_1,t_n]$ for some $n\geq 1$. Now we shall construct $F$ on $U\times [t_n,t_{n+1}]$.
To do that, consider the closed $G$-subset $T= (X\times I)\cup (U\times \{0\})$ of $U\times I$, and observe that $k: T\rightarrow X$ defined by: $k(x,t)=\left\{ \begin{array}{ll} F(x,t_n), & \mbox{if } x\in U,\ t=0 \\ F^n(x,1-t), & \mbox{if } x\in X,\ t\in I \end{array} \right.$ is a $G$-map. Indeed, take $g\in G$ and $(x,t)\in T$. If $x\in U$ and $t=0$, then: $k(g(x,t))=k(gx,0)=F(gx,t_n)=gF(x,t_n)=gk(x,t)$, in a similar way, if $x\in X$, $t\in I$, then: $k(g(x,t))=k(gx,t)=F^n(gx,1-t)=gF^n(x,1-t)=gk(x,t)$. This shows that $k$ is a $G$-map. Since $F_1^{n+1}: X\rightarrow X$ can be extended to a $G$-neighborhood of $X$ in $L$ and $L\in G$-AE, then $F_1^{n+1}\circ k: T\rightarrow X$ extends to a $G$-neighborhood of $T$ in $U\times I$. It follows from Lemma [Lemma 4](#dowker){reference-type="ref" reference="dowker"} that there exists a $G$-extension $K: U\times I\rightarrow X$ of $F_1^{n+1}\circ k$. Define $F$ on $U\times [t_n,t_{n+1}]$ by: $F(x,(1-t)t_n+tt_{n+1})=\left\{ \begin{array}{ll} F_{2t}^{n+1}(F(x,t_n)), & \mbox{if } 0\leq t\leq \frac{1}{2} \\ K_{2t-1}(x), & \mbox{if } \frac{1}{2}\leq t\leq 1 \end{array} \right.$ Notice that $F$ is a $G$-map. Indeed, take $g\in G$ and $(x,s)\in U\times [t_n,t_{n+1}]$. Then $s=(1-t)t_n+tt_{n+1}$ for some $0\leq t\leq 1$. If $0\leq t\leq \frac{1}{2}$, then: $F(gx,s)=F(gx,(1-t)t_n+tt_{n+1})=F_{2t}^{n+1}(F(gx,t_n))=gF_{2t}^{n+1}(F(x,t_n))=gF(x,s)$. On the other hand, if $\frac{1}{2}\leq t\leq 1$, then: $F(gx,s)=F(gx,(1-t)t_n+tt_{n+1})=K(gx,2t-1)=gK(x,2t-1)=gF(x,s)$, then we are done. In this way we have constructed the $G$-map $F:U\times [0,1)\rightarrow X$. Now define the $G$-map $H:X\times I\rightarrow X$ by $H\restriction_{X\times [0,1)}=F\restriction_{X\times [0,1)}$ and $H(x,1)=x$ for every $x\in X$. To show that $H$ is continuous it only remains to verify the continuity on $X\times \{1\}$. Take $x_0\in X$ and let $V$ be a neighborhood of $H(x_0,1)=x_0$ in $X$.
Since $(F^n)_{n\in \mathbb{N}}$ converges to the identity, there are neighborhoods $W$ and $W_1$ of $x_0$ in $X$ and $k, k_1\in \mathbb{N}$ such that $k\geq k_1$ and $F^{n+1}(W_1\times I)\subseteq V$, for each $n\geq k_1$, and $F^n(W\times I)\subseteq W_1$, for each $n\geq k$. We shall check that $H(W\times [t_k,1])\subseteq V$. Take $x\in W$, $s\in [t_k,1]$, $n\geq k$ and $t\in [0,1]$ such that $s\in [t_n,t_{n+1}]$ and $s=(1-t)t_n+tt_{n+1}$. Then $\{F^n_1(x), F^n_{2-2t}(x)\}\subseteq W_1$ and $\{F^{n+1}_{2t}(F^n_1(x)), F^{n+1}_1(F^n_{2-2t}(x))\}\subseteq V$. But $H(x,s)=\left\{ \begin{array}{ll} F_{2t}^{n+1}(F^n_1(x)), & \mbox{if } 0\leq t\leq \frac{1}{2} \\ F^{n+1}_1(F^n_{2-2t}(x)), & \mbox{if } \frac{1}{2}\leq t\leq 1 \end{array} \right.$ then $H(x,s)\in V$. On $U\times [0,1)$ consider the metric $\rho$ given by $\rho((u_1,t_1),(u_2,t_2))=d(u_1,u_2)+|t_1-t_2|$, where $d$ is an invariant metric on $L$. Consider the $G$-neighborhood of $X\times [0,1)$ given by: $O=\left\lbrace (u,t)\in U\times [0,1): \begin{array}{c} \exists (x,s)\in X\times [0,1), \\ 0\leq \rho((u,t),(x,s)),d(F(u,t),F(x,s)) <1-t \end{array}\right\rbrace$. We shall check that $O$ is invariant. Take $(u,t)\in O$ and $g\in G$. There exists $(x,s)\in X\times [0,1)$ such that $0\leq \rho((u,t),(x,s)),d(F(u,t),F(x,s)) <1-t$. Since $X$ is invariant, then $(gx,s)\in X\times [0,1)$. Moreover, $\rho((gu,t),(gx,s))=d(gu,gx)+|t-s|=d(u,x)+|t-s|=\rho((u,t),(x,s))<1-t$, in a similar way $d(F(gu,t),F(gx,s))<1-t$, then $(gu,t)\in O$ and $O$ is invariant. Notice that $X\times [t_n,t_{n+1}]\subseteq O$, for every $n\geq 1$. For $n=1$, since $X\times [t_1,t_2]\subseteq O$, then there exists a neighborhood $\tilde{U}_1$ of $X$ in $U$ such that $\tilde{U}_1\times [t_1,t_2]\subseteq O$. But $O$ is invariant, then $U_1\times [t_1,t_2]\subseteq O$, where $U_1=G(\tilde{U}_1)\subseteq U$ is a $G$-neighborhood of $X$ in $U$.
In a similar way, for $n=2$ there exists a $G$-neighborhood $\tilde{U}_2$ of $X$ in $U$ such that $\tilde{U}_2\times [t_2,t_3]\subseteq O$. It follows by Lemma [Lemma 3](#twopro){reference-type="ref" reference="twopro"} that there exists a $G$-neighborhood $U_2$ of $X$ in $U$ such that $U_2\subseteq \overline{U}_2\subseteq U_1\cap \tilde{U}_2$, then $U_2\times [t_2,t_3]\subseteq O$. In this way, inductively there is a $G$-neighborhood $U_n$ of $X$ in $U$ such that $U_n\times [t_n,t_{n+1}]\subseteq O$ and $\overline{U_{n+1}}\subseteq U_n$, $n\in \mathbb{N}$. For each $n\geq 1$ let $e_n: U\rightarrow I$ be an invariant map such that: $e_n(u)=\left\{ \begin{array}{ll} 0, & \mbox{if } u\in U\setminus U_n \\ 1, & \mbox{if } u\in \overline{U_{n+1}} \end{array} \right.$ then $e:U\rightarrow I$, $u\mapsto \sum\limits_{n=1}^\infty\frac{e_n(u)}{2^n}$, $u\in U$, is an invariant map. Take $u\in U_1\setminus X$. We shall check that $(u,e(u))\in O$. Since $X$ is closed in $L$, then $X=\cap_{n=1}^\infty U_n$ and there exists $m\geq 1$ such that $u\in U_m\setminus U_{m+1}$. Then: $e_n(u)=\left\{ \begin{array}{ll} 1, & \mbox{if } n<m \\ 0, & \mbox{if } n>m \end{array} \right.$ and $e(u)=\frac{1}{2}+\cdots+\frac{1}{2^{m-1}}+\frac{1}{2^m}e_m(u)=1-\frac{1}{2^{m-1}}+\frac{1}{2^m}e_m(u)=t_m+\frac{1}{2^m}e_m(u)\in [t_m,t_{m+1}]$, hence $(u,e(u))\in U_m\times [t_m,t_{m+1}]\subseteq O$. Consider $r: U\rightarrow X$, given by: $r(u)=\left\{ \begin{array}{ll} F(u,e(u)), & \mbox{if } u\in U\setminus X \\ u, & \mbox{if } u\in X \end{array} \right.$ To prove that $r$ is continuous, take $x\in \mbox{Fr}_U(X)$ and let $(u_n)_{n\in \mathbb{N}}$ be a sequence in $U\setminus X$ such that $\mbox{lim}(u_n)=x$. We shall check that $\lim(r(u_n))=r(x)=x$. Since $x\in U_1$, we can suppose that $u_n\in U_1$, for each $n\in \mathbb{N}$. Take $n\in \mathbb{N}$.
We know that $(u_n,e(u_n))\in O$, then there exists $(x_n,s_n)\in X\times [0,1)$ such that: $\rho((u_n,e(u_n)),(x_n,s_n))<1-e(u_n)$ and $d(F(u_n,e(u_n)),F(x_n,s_n))<1-e(u_n)$. But $\lim(u_n)= x$, then $\lim(e(u_n))=e(x)=1$ and $\lim(u_n,e(u_n))=(x,1)$. This shows that: $\lim(\rho((u_n,e(u_n)),(x_n,s_n)))=\mbox{lim}(d(F(u_n,e(u_n)),F(x_n,s_n)))=0$, then $\lim(x_n,s_n)=(x,1)$ and $\lim(H(x_n,s_n))=H(x,1)=x$. But $H(x_n,s_n)=F(x_n,s_n)$ for each $n\in \mathbb{N}$, then: $\lim(r(u_n))=\lim(F(u_n,e(u_n)))=\lim(F(x_n,s_n))=\lim(H(x_n,s_n))=x$, hence $r$ is continuous. Observe that $r$ is a $G$-map, then $X$ is a $G$-neighborhood retract of $U$. We conclude that $X\in G$-ANE. ◻ Consider $(G,X,\alpha)$ and $(G,Y,\beta)$ topological transformation groups. Let $A$ be a closed $G$-subset of $X$ and let $f: A\rightarrow Y$ be a $G$-map. Remember that the *adjunction space* obtained by adjoining $X$ to $Y$ by means of $f$, denoted by $X\cup_f Y$, is the quotient space $(X\sqcup Y)/\sim$, where $\sim$ is the equivalence relation generated by: $\{(a,f(a))\in (X\sqcup Y)\times (X\sqcup Y): a\in A\}$. There is a canonical action of $G$ on $X\cup_f Y$, defined as follows: first consider the action $\alpha\sqcup\beta: G\times (X\sqcup Y)\rightarrow X\sqcup Y$, given by: $(\alpha\sqcup\beta) (g,z)=\left\{ \begin{array}{ll} \alpha(g,z), & \mbox{if } z\in X\\ \beta(g,z), & \mbox{if } z\in Y \end{array} \right.$ and let $p: X\sqcup Y\rightarrow X\cup_f Y$ be the quotient map. Since $G$ is locally compact, $\mbox{id}_G\times p$ is an identification, and by the universal property of the quotient, there exists a continuous map $\theta: G\times X\cup_fY\rightarrow X\cup_f Y$ such that the following diagram is commutative: $\xymatrix{G\times (X\sqcup Y)\ar[d]_-{\mbox{id}_G\times p}\ar[r]^-{\alpha\sqcup \beta}& X\sqcup Y\ar[d]^-{p}\\ G\times (X\cup_f Y) \ar[r]_-{\theta}& X\cup_f Y }$ In the next result $X\cup_f Y$ is considered as a $G$-space with the action $\theta$. **Theorem 8**.
*Let $X, Y\in G$-$\mathcal{M}$, let $A$ be a closed $G$-subset of $X$ and let $f: A\rightarrow Y$ be a $G$-map. If $X$, $A$ and $Y$ are $G$-ANE's and $X\cup_f Y\in G$-$\mathcal{M}$, then $X\cup_f Y$ is $G$-ANE.* *Proof.* Let $\mathcal{V}$ be an open cover of $X\cup_f Y$. Notice that $\mathcal{U}=p{}^{-1}(\mathcal{V})=\{p{}^{-1}(V): V\in \mathcal{V}\}$ is an open cover of $X\sqcup Y$. Since $\mathcal{W}=\{X\cap U: U\in\mathcal{U}\}$ is an open cover of $X$, by Lemma [Lemma 5](#lemrede){reference-type="ref" reference="lemrede"} there exists a $(\mathcal{W},G)$-homotopy $T: X\times I\rightarrow X$ such that $T_0=\mbox{id}_X$, $T(a,t)=a$ for each $a\in A$ and $t\in I$, and there exists a $G$-neighborhood $V$ of $A$ in $X$ so that $T_1(V)=A$. Consider the $(\mathcal{U},G)$-homotopy $K: (X\sqcup Y)\times I\rightarrow X\sqcup Y$ given by: $K(z,t)=\left\{ \begin{array}{ll} T(z,t), & \mbox{if } z\in X \\ z, & \mbox{if } z\in Y \end{array} \right.$ There is a continuous map $H: (X\cup_fY)\times I\rightarrow X\cup_fY$ such that the following diagram is commutative: $\xymatrix{ (X\sqcup Y)\times I\ar[d]_-{p\times \mbox{id}_I}\ar[r]^-{K}& X\sqcup Y\ar[d]^-{p}\\ (X\cup_f Y)\times I \ar[r]_-{H}& X\cup_f Y }$ Observe that $H$ is a $(\mathcal{V},G)$-deformation of $X\cup_fY$. Indeed, take $z\in X\cup_f Y$, $t\in I$, $g\in G$ and $w\in X\sqcup Y$ such that $p(w)=z$. Since $p(gw)=gz$, then: $H(gz,t)=p(K(gw,t))=p(gK(w,t))=gp(K(w,t))=gH(z,t)$, hence $H$ is a $G$-map. Now take $U\in \mathcal{U}$ and $V\in \mathcal{V}$ such that $p(U)\subseteq V$ and $K(\{w\}\times I)\subseteq U$. Then $H(\{z\}\times I)=p(K(\{w\}\times I))\subseteq V$, and this shows that $H$ is limited by $\mathcal{V}.$ By Theorem [Theorem 2](#embedingAE){reference-type="ref" reference="embedingAE"} there exists $L\in G$-AE such that $X\cup_fY$ can be considered as a closed $G$-subset of $L$. We shall check that $H_1$ can be extended to a $G$-neighborhood of $X\cup_f Y$ in $L$. 
Since $p\restriction_Y: Y\rightarrow p(Y)$ is a closed $G$-equivalence, we identify $Y=p(Y)$ as a $G$-space of $X\cup_f Y$. Observe that $F=p(X\setminus V)$ is a closed $G$-subset of $L$ and $F\cap Y=\emptyset$. By Lemma [Lemma 3](#twopro){reference-type="ref" reference="twopro"} there are $R$ and $S$ open $G$-subsets of $L$ so that $F\subseteq R$, $Y\subseteq S$ and $R\cap S=\emptyset$. Consider the following $G$-subsets of $L$: $B=L\setminus (R\cup S)$, $C=B\cap (X\cup_f Y)\subseteq p(V\setminus A)$, $D=R\cap(X\cup_fY)\subseteq p(X\setminus A)$ and $E=S\cap (X\cup_f Y)\subseteq p(V\cup Y).$ Note that $\phi: C\rightarrow A$ given by $\phi(c)=K_1(p{}^{-1}(c))$ is a $G$-map. Indeed, take $c\in C$, $g\in G$ and $v\in V\setminus A$ such that $p(v)=c$. Then $p(gv)=gc$ and: $\phi(gc)=K_1(gv)=gK_1(v)=gK_1(p{}^{-1}(c))=g\phi(c)$. Since $A\in G$-ANE and $C$ is a closed $G$-subset of $B$, then there exists a $G$-extension $\tilde{\phi}: W_0\rightarrow A$ of $\phi$, where $W_0$ is a $G$-neighborhood of $C$ in $B$. Since $W_0$ and $C\cup D$ are closed $G$-subsets of $W_0\cup D$ and $W_0\cap (C\cup D)= C$, then the following map is equivariant: $\tilde{\varphi}_1: W_0\cup D\rightarrow X$, given by: $\tilde{\varphi}_1(s)=\left\{ \begin{array}{ll} \tilde{\phi}(s), & \mbox{if } s\in W_0 \\ K_1(p{}^{-1}(s)), & \mbox{if } s\in C\cup D \end{array} \right.$ Moreover, since $X$ is a $G$-ANE and $W_0\cup D$ is a closed $G$-subset of $W_0\cup R$ , then there exists a $G$-neighborhood $W_1$ of $W_0\cup D$ in $W_0\cup R$ and a $G$-extension $\varphi_1: W_1\rightarrow X$ of $\tilde{\varphi}_1$. Notice that $W_0\cap (C\cup E)=C$ and $p(\tilde{\phi}(s))=p(\phi(s))=pK_1p{}^{-1}(s)=H_1(s)$, for any $s\in C$. Moreover, $W_0$ and $C\cup E$ are closed $G$-subsets of $W_0\cup E$. 
Since $\tilde{\phi}$ and $p$ are $G$-maps, then $\tilde{\varphi}_2: W_0\cup E\rightarrow Y$ given by $\tilde{\varphi}_2(s)=\left\{ \begin{array}{ll} p(\tilde{\phi}(s)), & \mbox{if } s\in W_0 \\ H_1(s), & \mbox{if } s\in C\cup E \end{array} \right.$ is a $G$-map. But $Y\in G$-ANE and $W_0\cup E$ is a closed $G$-subset of $W_0\cup S$, then there exists $\varphi_2: W_2\rightarrow Y$ a $G$-extension of $\tilde{\varphi}_2$, where $W_2$ is a $G$-neighborhood of $W_0\cup E$ in $W_0\cup S$. Take $s\in W_1\cap W_2=W_0$. Notice that $p(\varphi_1(s))=p(\tilde{\phi}(s))=\varphi_2(s)$. Then the map $\varphi: W\rightarrow X\cup_f Y$ given by: $\varphi(s)=\left\{ \begin{array}{ll} p(\varphi_1(s)), & \mbox{if } s\in W_1 \\ p(\varphi_2(s)), & \mbox{if } s\in W_2 \end{array} \right.$ is well defined, where $W=W_1\cup W_2$. Since $W_1=W\setminus S$ and $W_2=W\setminus R$, then $\varphi$ is continuous. Moreover, $\varphi$ is a $G$-map because $p\circ \varphi_1$ and $p\circ\varphi_2$ are $G$-maps. Notice that $W$ is a $G$-neighborhood of $X\cup_f Y$ in $L$. To prove that $W$ is open, observe that $W_1$ is open in $B\cup R$ and $B\cup R$ is closed in $L$, then $(B\cup R)\setminus W_1$ is closed in $L$. Similarly $(B\cup S)\setminus W_2$ is closed in $L$, then $L\setminus W=[(B\cup R)\setminus W_1] \cup [(B\cup S)\setminus W_2]$ is closed in $L$ and $W$ is open. It only remains to verify that $\varphi$ is an extension of $H_1$. Take $s\in X\cup_f Y$. If $s\in C\cup D$, then $s\in W_1$ and: $\varphi(s)=p(\varphi_1(s))=p(\phi(s))=p(K_1(p{}^{-1}(s)))=H_1(s)$. If $s\in E,$ then $s\in W_2$ and: $\varphi(s)=p(\varphi_2(s))=\varphi_2(s)=H_1(s)$. Then by Lemma [Lemma 7](#-1R){reference-type="ref" reference="-1R"} we conclude that $X\cup_fY$ is $G$-ANE. ◻

N. Antonyan, S. Antonyan, E. Martin-Peinador, Equivariant embeddings of metrizable proper $G$-spaces, *Topol. Appl.* 163 (2014), 11-24.

S. Antonyan, Retraction properties of an orbit space, *Math. USSR Sb.* 65 (2) (1990), 305-321.

S. Antonyan, E. Elfving, A. Mata-Romero, Adjunction spaces and unions of G-ANEs, *Topology Proc.* 26 (1) (2001--2002), 1--28.

O. Hanner, Some theorems on absolute neighborhood retracts, *Ark. Mat.* 1 (1951), 389--408.

S.T. Hu, Theory of Retracts, *Wayne State University Press*, 1965.

D. Hyman, A generalization of the Borsuk-Whitehead-Hanner theorem, *Pacific J. Math.* 23 (1967), 263--271.

R. Palais, On the existence of slices for actions of non-compact Lie groups, *Ann. of Math.* 73 (1961), 295--323.

[^1]: The author was supported by CONAHCYT (México)
--- abstract: | The compact 16-dimensional Moufang plane, also known as the Cayley plane, has traditionally been defined through the lens of octonionic geometry. In this study, we present a novel approach, demonstrating that the Cayley plane can be defined in an equally clean, straightforward and more economical way using two different division and composition algebras: the paraoctonions and the Okubo algebra. The result is quite surprising since the paraoctonions and the Okubo algebra possess a weaker algebraic structure than the octonions: they are non-alternative and do not satisfy the Moufang identities. Intriguingly, the real Okubo algebra has $\text{SU}\left(3\right)$ as its automorphism group, which is a classical Lie group, while the octonions and paraoctonions have an automorphism group of exceptional type $\text{G}_{2}$. This is remarkable, given that the projective plane defined over the real Okubo algebra is nevertheless isomorphic and isometric to the octonionic projective plane, which is at the very heart of the geometric realisations of all types of exceptional Lie groups. Despite its historical ties with octonionic geometry, our research underscores the real Okubo algebra as the weakest algebraic structure allowing the definition of the compact 16-dimensional Moufang plane. author: - Daniele Corradetti$^{*}$, Alessio Marrani $^{\dagger}$, Francesco Zucconi $^{\ddagger}$ title: "[A minimal and non-alternative realisation of the Cayley plane]{.smallcaps}" --- # [Introduction]{.smallcaps} {#introduction .unnumbered} A Moufang plane is a projective plane where every line is a translation line or, alternatively, where the "little Desargues theorem" holds (see Sec. [\[sec:Discussions-and-verifications\]](#sec:Discussions-and-verifications){reference-type="ref" reference="sec:Discussions-and-verifications"}). Among the various characteristics of Moufang planes, a notable one is their dimensionality.
Specifically, it is well-known that all compact, connected Moufang planes are of dimension 2, 4, 8 and 16 and isomorphic to precisely the projective planes over the Hurwitz division algebras $\mathbb{R},\mathbb{C},\mathbb{H}$ and $\mathbb{O}$. Of all these planes, the 16-dimensional Moufang plane stands out due to the historical obstacles in its definition arising from the lack of associativity of the octonions $\mathbb{O}$. This definitional challenge sparked significant interest in mathematical research during the early 20th century, culminating in one of the most fascinating interplays between projective geometry, algebra, and differential geometry. Indeed, one of the most remarkable achievements of the resulting mathematical research activity, mainly due to Cartan [@Car14], Jordan, Wigner and von Neumann [@Jordan] and Freudenthal [@Fr54], is an interesting three-fold description of these planes: as a completion of the affine plane $\mathscr{A}^{2}\left(\mathbb{K}\right)$, for every $\mathbb{K}\in\left\{ \mathbb{R},\mathbb{C},\mathbb{H},\mathbb{O}\right\}$; as the rank-1 idempotent elements of the rank-three Jordan algebra $\mathfrak{J}_{3}\left(\mathbb{K}\right)$; as a coset manifold with a specific isometry and isotropy group. Furthermore, the investigation of octonionic geometry, particularly the study of the octonionic projective plane $\mathbb{O}P^{2}$, unraveled a deep connection between octonions and exceptional Lie groups [@Fr54; @Freud; @1965; @Tits; @Vinberg; @Rosenf98]. This connection, which was first envisaged by Cartan [@Car15] and then explored by Chevalley and Schafer [@ChSch], is so deep that every known realization of compact exceptional Lie groups somehow involves the octonions $\mathbb{O}$ in one form or another [@Yokota]. Notably, each of these realizations of exceptional Lie groups has a geometrical aspect in which the 16-dimensional Moufang plane plays a pivotal role.
Indeed, following Freudenthal [@Fr54; @Freud; @1965], one can obtain all exceptional Lie groups of type $\text{F}_{4},\text{E}_{6},\text{E}_{7}$ and $\text{E}_{8}$ as transformation groups of the 16-dimensional Moufang plane preserving the features of elliptic geometry, projective geometry, symplectic geometry and metasymplectic geometry respectively [@LM01]. Historically, the definition of the compact 16-dimensional Moufang plane arose out of octonionic geometry. However, in this work we show that this plane can be defined in an equally clean, straightforward and more minimal way by means of two different division composition algebras that are endowed with less algebraic structure than the octonions and do not uphold the Moufang identities, historically associated with the Moufang property of the plane. Clearly, in order to define a 16-dimensional plane that satisfies the affine and projective axioms of incidence geometry, an 8-dimensional division algebra is necessary. The Hurwitz theorem [@Hurwitz98] states the existence of only one 8-dimensional division composition algebra with a unit element, i.e. the algebra of octonions $\mathbb{O}$. Yet, when non-unital algebras are considered, three 8-dimensional division composition algebras emerge [@ElDuque; @Comp]: the aforementioned octonions $\mathbb{O}$, the para-octonions $p\mathbb{O}$ (not to be confused with the split-octonions, which are not a division algebra) and the real Okubo algebra $\mathcal{O}$. All three 8-dimensional algebras, being division and composition algebras, allow independent and self-contained definitions of an affine and projective plane over them. Quite unexpectedly, despite the three different algebraic origins, the three definitions give rise to the same incidence plane: the compact 16-dimensional Moufang plane. This result is quite surprising because the three algebras, though deeply related, display very different properties.
For instance, while octonions have a unit element and paraoctonions have a paraunit, the Okubo algebra merely contains idempotent elements. These differences show up in the projective planes defined over these algebras: e.g., as a consequence of not having an identity element, the points $\left(0,0\right),\left(x,x\right)$ and $\left(y,y\right)$ on the Okubic plane are not all three incident to the same Okubic line, nor does there exist an Okubic collineation that switches coordinates, i.e. $\left(x,y\right)\longrightarrow\left(y,x\right)$, as one has in the octonionic case. Despite these apparent differences, in Sec. [\[sec:Three-realizations-of\]](#sec:Three-realizations-of){reference-type="ref" reference="sec:Three-realizations-of"} we show that the projective planes, obtained directly from their corresponding foundational algebras, are all isomorphic and even all isometric to one another.

  Property       $\mathbb{O}$     $p\mathbb{O}$    $\mathcal{O}$
  -------------- ---------------- ---------------- ---------------------------
  Unital         Yes              No               No
  Paraunital     Yes              Yes              No
  Alternative    Yes              No               No
  Flexible       Yes              Yes              Yes
  Composition    Yes              Yes              Yes
  Automorphism   $\text{G}_{2}$   $\text{G}_{2}$   $\text{SU}\left(3\right)$

  : [\[tab:Synoptic-table-of\]]{#tab:Synoptic-table-of label="tab:Synoptic-table-of"}Synoptic table of the algebraic properties of octonions $\mathbb{O}$, paraoctonions $p\mathbb{O}$ and the real Okubic algebra $\mathcal{O}$.

The result is remarkable in itself. However, since the 16-dimensional Moufang plane is so deeply related with exceptional Jordan algebras, exceptional Lie groups and symmetric spaces, it also paves the way for a novel, more minimal algebraic realization of these ubiquitous mathematical objects. A synoptic summary of the algebraic properties of octonions $\mathbb{O}$, paraoctonions $p\mathbb{O}$ and of the real Okubic algebra $\mathcal{O}$ is given in Table [1](#tab:Synoptic-table-of){reference-type="ref" reference="tab:Synoptic-table-of"}.
It is worth noting that the minimal algebraic structure among these three algebras is the Okubo algebra $\mathcal{O}$, which is neither unital nor para-unital; it is non-alternative and has the smallest automorphism group, i.e. $\text{SU}\left(3\right)$, which has dimension 8, compared to $\text{G}_{2}$, which is a 14-dimensional group. Both the paraoctonions $p\mathbb{O}$ and the real Okubic algebra $\mathcal{O}$ are non-alternative, but flexible, algebras. Their relation to the Moufang plane is thus intriguing because, notoriously, Moufang planes are associated to the Moufang identities, which in turn imply the alternativity of the underlying algebra [@Mou35; @HP]. In fact, all this does not give rise to any contradiction, since both the Okubic and paraoctonionic projective planes can be coordinatised by an alternative algebra, i.e. the octonions, through a non-linear planar ternary field, as we show in Sec. [\[sec:Discussions-and-verifications\]](#sec:Discussions-and-verifications){reference-type="ref" reference="sec:Discussions-and-verifications"}. An even more striking observation is that, while the octonions possess an automorphism group that is an exceptional group, the automorphism Lie group of the real Okubo algebra is not exceptional, nor has it any immediate relation to $\text{G}_{2}$ itself. Nevertheless, the projective plane over the Okubo algebra gives rise to a geometric realisation of all types of exceptional Lie groups, i.e. $\text{G}_{2},\text{F}_{4},\text{E}_{6},\text{E}_{7}$ and $\text{E}_{8}$, as the transformation groups respectively preserving: the non-degenerate quadrangles of the plane (type $\text{G}_{2}$); the distances of the plane (type $\text{F}_{4}$); the usual incidence relations between lines and points (type $\text{E}_{6}$); extended incidence relations according to symplectic and metasymplectic geometry (types $\text{E}_{7}$ and $\text{E}_{8}$; for this last part see Freudenthal's work [@Freud; @1965 Sec. 4.13]).
It is well known that all compact exceptional Lie groups have $\text{SU}\left(3\right)$ as a subgroup; this work points out how the presence of an $\text{SU}\left(3\right)$ subgroup is related to an Okubic structure underlying the 16-dimensional Moufang plane. It is here worth recalling (see e.g. [@Ste80; @Ste08]) that Lie groups of type $\text{E}_{6}$ are largely studied and are still viable candidates for GUT theories, and that the real Okubo algebra was discovered by Susumu Okubo in his investigations on $\text{SU}\left(3\right)$ as the gauge group for QCD [@Okubo95]. Thus, we expect the Okubic formulation of the Cayley plane to find a physical application as a concrete alternative to its octonionic realisation and to the octonionic formulation of the rank-3 exceptional Jordan algebra, also known as the Albert algebra. Additionally, it is known that M-theory may display a hidden Cayley-Moufang fibration [@Sa11]. It is thus worth noting that variations in the foundational algebra of this plane could potentially lead to novel physical theories. The present work is structured as follows. In Sec. [\[sec:Octonions,-para-octonions-and\]](#sec:Octonions,-para-octonions-and){reference-type="ref" reference="sec:Octonions,-para-octonions-and"} we review the three algebras we are going to use: the octonions $\mathbb{O}$, the paraoctonions $p\mathbb{O}$ and the real Okubic algebra $\mathcal{O}$. In Sec. [\[sec:Affine-and-projective\]](#sec:Affine-and-projective){reference-type="ref" reference="sec:Affine-and-projective"} we define the three affine and projective planes. Since the constructions are formally very similar, we develop in detail only the Okubic affine and projective planes, pointing out the differences occurring for the other algebras. The main result is in Sec. [\[sec:Three-realizations-of\]](#sec:Three-realizations-of){reference-type="ref" reference="sec:Three-realizations-of"}, where we present the isomorphism between the three planes.
Finally, in Sec. [\[sec:Discussions-and-verifications\]](#sec:Discussions-and-verifications){reference-type="ref" reference="sec:Discussions-and-verifications"} we discuss our findings and introduce a software tool that facilitates direct numerical verification of octonionic, para-octonionic and Okubic computations. This tool has been made publicly available and can be accessed on our GitHub repository at `https://github.com/DCorradetti/OkuboAlgebras`. # [\[sec:Octonions,-para-octonions-and\]]{#sec:Octonions,-para-octonions-and label="sec:Octonions,-para-octonions-and"}Composition algebras Composition algebras are algebras endowed with a norm that enjoys the multiplicative property, i.e. $n\left(x\cdot y\right)=n\left(x\right)n\left(y\right)$. Composition algebras with multiplicative identity are called Hurwitz algebras and are fully classified [@ElDuque; @Comp]. On the other hand, composition algebras without multiplicative identity but with associative norm were discovered by Petersson [@Petersson; @1969] and independently by Okubo [@Okubo; @1978]; they are now called symmetric composition algebras [@KMRT] and are completely classified into para-Hurwitz and Okubo algebras [@ElDuque; @Comp]. Para-Hurwitz algebras are non-unital composition algebras strictly related to their unital companions, i.e. the corresponding Hurwitz algebras, while Okubo algebras are somewhat more singular, appearing only as 8-dimensional algebras and with some very peculiar characteristics that distinguish them from both Hurwitz and para-Hurwitz algebras. It is worth noting that, while it is possible to define an Okubo algebra over any field, here we will focus on the Okubo algebra over the reals $\mathbb{R}$, which is a division composition algebra. In this section we review some useful notions about composition algebras.
Then we focus on Hurwitz algebras and, subsequently, we enter the realm of symmetric composition algebras, specifically highlighting para-Hurwitz and Petersson algebras, which in fact exhaust all algebras of this family. Even if this section consists of known results, we thought it worthwhile to collect them in a few pages of review, given their paramount importance for the understanding of the algebraic context of the subsequent sections. ## Composition Algebras An *algebra*, denoted by $A$, is a vector space over a field $\mathbb{F}$ equipped with a bilinear multiplication. For our discussion, we will restrict our attention to algebras of finite dimension, and the field $\mathbb{F}$ will be taken to be either the field of real numbers $\mathbb{R}$ or that of complex numbers $\mathbb{C}$. The specific properties of the multiplication operation in an algebra lead to various classifications. Specifically, an algebra $A$ is said to be *commutative* if $x\cdot y=y\cdot x$ for every $x,y\in A$; it is *associative* if it satisfies $x\cdot\left(y\cdot z\right)=\left(x\cdot y\right)\cdot z$; it is *alternative* if $x\cdot\left(x\cdot y\right)=\left(x\cdot x\right)\cdot y$ and $x\cdot\left(y\cdot y\right)=\left(x\cdot y\right)\cdot y$; and, finally, it is *flexible* if $x\cdot\left(y\cdot x\right)=\left(x\cdot y\right)\cdot x$. It is worth noting that the last three properties can be seen as successive weakenings of associativity, i.e. $$\text{associative}\Rightarrow\text{alternative}\Rightarrow\text{flexible}.$$ This observation stems from a nontrivial theorem proved by Artin (see [@Scha]), who showed that all alternative algebras are flexible. Since $A$ must be a group with respect to addition, every algebra has a zero element $0\in A$. Furthermore, if the algebra does not have zero divisors, it is referred to as a *division* algebra, i.e. an algebra for which $x\cdot y=0$ implies $x=0$ or $y=0$.
While the zero element is a universal feature of any algebra, the algebra is termed *unital* if there exists an element $1\in A$ such that $1\cdot x=x\cdot1=x$ for all $x\in A$. Consider an algebra $A$. A quadratic form $n$ on $A$ over the field $\mathbb{F}$ is called a *norm*, and its polarization is given by $$\left\langle x,y\right\rangle =n\left(x+y\right)-n\left(x\right)-n\left(y\right),\label{eq:polarNorm}$$ so that the norm can be explicitly recovered as $$n\left(x\right)=\frac{1}{2}\left\langle x,x\right\rangle ,\label{eq:n(x)=00003D1/2(x,x)}$$ for every $x\in A$. An algebra $A$ with a non-degenerate norm $n$ that satisfies the multiplicative property $$\begin{aligned} n\left(x\cdot y\right) & =n\left(x\right)n\left(y\right),\label{eq:comp(Def)}\end{aligned}$$ for every $x,y\in A$, is called a *composition* algebra and is denoted by the triple $\left(A,\cdot,n\right)$, or simply by $A$ if there is no risk of ambiguity. Given a composition algebra $A$, applying equation ([\[eq:polarNorm\]](#eq:polarNorm){reference-type="ref" reference="eq:polarNorm"}) to the multiplicative property of the norm expressed in ([\[eq:comp(Def)\]](#eq:comp(Def)){reference-type="ref" reference="eq:comp(Def)"}), we find that $$\begin{aligned} \left\langle x\cdot y,x\cdot z\right\rangle & =n\left(x\right)\left\langle y,z\right\rangle ,\end{aligned}$$ for every $x,y,z\in A$, which is a useful identity to be aware of. ## Unital composition algebras Composition algebras that possess a unit element are called *Hurwitz algebras*. The interplay between the multiplicative property of the norm in ([\[eq:comp(Def)\]](#eq:comp(Def)){reference-type="ref" reference="eq:comp(Def)"}) and the existence of a unit element is full of interesting implications.
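The multiplicative property ([\[eq:comp(Def)\]](#eq:comp(Def)){reference-type="ref" reference="eq:comp(Def)"}) and the identity just derived are easy to verify numerically on a low-dimensional Hurwitz algebra. The following Python sketch (our own illustration, independent of the paper's companion repository) checks both on random quaternions, using the Hamilton product and the Euclidean norm:

```python
import random

def qmul(x, y):
    """Hamilton product: the quaternions, a 4-dimensional Hurwitz algebra."""
    a1, b1, c1, d1 = x
    a2, b2, c2, d2 = y
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def n(x):
    """Euclidean norm form n(x) = sum of squared coordinates."""
    return sum(t*t for t in x)

def bil(x, y):
    """Polarization <x,y> = n(x+y) - n(x) - n(y)."""
    return n(tuple(a + b for a, b in zip(x, y))) - n(x) - n(y)

random.seed(0)
rq = lambda: tuple(random.uniform(-1, 1) for _ in range(4))
x, y, z = rq(), rq(), rq()

# multiplicativity of the norm: n(x.y) = n(x) n(y)
assert abs(n(qmul(x, y)) - n(x)*n(y)) < 1e-12
# derived identity: <x.y, x.z> = n(x) <y, z>
assert abs(bil(qmul(x, y), qmul(x, z)) - n(x)*bil(y, z)) < 1e-12
print("composition identities verified on random quaternions")
```

The same two checks go through verbatim for the octonions once an 8-dimensional product is supplied.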
Indeed, every Hurwitz algebra is endowed with an order-two antiautomorphism called *conjugation*, defined by $$\overline{x}=\left\langle x,1\right\rangle 1-x.\label{eq:conjugation}$$ The linearization of the norm, when paired with the composition property, results in the notable relation $\left\langle x\cdot y,z\right\rangle =\left\langle y,\overline{x}\cdot z\right\rangle ,$ which implies that $\overline{x\cdot y}=\overline{y}\cdot\overline{x}$ and $$x\cdot\overline{x}=n\left(x\right)1.\label{eq:ConjugNorm}$$ Moreover, from the existence of a unit element in a composition algebra it follows that elements with unit norm form a group and, even more strikingly, that the whole algebra must be alternative (for a proof see [@ElDuque; @Comp Prop. 2.2]). Equation ([\[eq:ConjugNorm\]](#eq:ConjugNorm){reference-type="ref" reference="eq:ConjugNorm"}) can be rephrased as the well-known *Hamilton-Cayley equation,* $x^{2}-\left\langle x,1\right\rangle x+n\left(x\right)1=0,$ which holds true in every unital composition algebra. Finally, a relation that is crucial for the Veronesean representation of the projective plane over a unital composition algebra is the following: $$x\cdot\left(\overline{x}\cdot y\right)=\left(x\cdot\overline{x}\right)\cdot y=n\left(x\right)y,\label{eq:compAlg x.x=0000BA.y=00003Dn(x)y}$$ which has a nice analogue in the case of *symmetric composition* algebras, which we discuss in Section [\[sec:Symmetric-composition-algebras\]](#sec:Symmetric-composition-algebras){reference-type="ref" reference="sec:Symmetric-composition-algebras"}. A major theorem by Hurwitz proves that the only unital composition algebras over the reals are $\mathbb{R},\mathbb{C},\mathbb{H}$ and $\mathbb{O}$, accompanied by their split counterparts $\mathbb{C}_{s},\mathbb{H}_{s},\mathbb{O}_{s}$ (see [@Hurwitz98; @ElDuque; @Comp Cor. 2.12]). Consequently, there are seven Hurwitz algebras, each having real dimension 1, 2, 4, or 8. Out of these, four are also division algebras, i.e.
$\mathbb{R},\mathbb{C},\mathbb{H}$ and $\mathbb{O}$, while three are split algebras and thus have zero divisors, i.e. $\mathbb{C}_{s},\mathbb{H}_{s},\mathbb{O}_{s}$. The properties of these algebras are quite different from one another. More specifically, $\mathbb{R}$ is also totally ordered, commutative and associative; $\mathbb{C}$ is just commutative and associative; $\mathbb{H}$ is only associative and, finally, $\mathbb{O}$ is only alternative.

  **Hurwitz**                      **O.**   **C.**   **A.**   **Alt.**   **F.**        **p-Hurwitz**                      **O.**   **C.**   **A.**   **Alt.**   **F.**
  -------------------------------- -------- -------- -------- ---------- -------- -- ---------------------------------- -------- -------- -------- ---------- --------
  $\mathbb{R}$                     Yes      Yes      Yes      Yes        Yes          $p\mathbb{R}\cong\mathbb{R}$       Yes      Yes      Yes      Yes        Yes
  $\mathbb{C}$, $\mathbb{C}_{s}$   No       Yes      Yes      Yes        Yes          $p\mathbb{C}$, $p\mathbb{C}_{s}$   No       Yes      No       No         Yes
  $\mathbb{H}$,$\mathbb{H}_{s}$    No       No       Yes      Yes        Yes          $p\mathbb{H}$,$p\mathbb{H}_{s}$    No       No       No       No         Yes
  $\mathbb{O}$,$\mathbb{O}_{s}$    No       No       No       Yes        Yes          $p\mathbb{O}$,$p\mathbb{O}_{s}$    No       No       No       No         Yes

  : *[\[tab:Hurwitz-para-Hurwitz\]]{#tab:Hurwitz-para-Hurwitz label="tab:Hurwitz-para-Hurwitz"}On the left,* we have summarized the algebraic properties, i.e. totally ordered (O), commutative (C), associative (A), alternative (Alt), flexible (F), of all Hurwitz algebras, namely $\mathbb{R},\mathbb{C},\mathbb{H}$ and $\mathbb{O}$ along with their split counterparts $\mathbb{C}_{s},\mathbb{H}_{s},\mathbb{O}_{s}$. *On the right*, we have summarized the algebraic properties of all para-Hurwitz algebras, namely $p\mathbb{R},p\mathbb{C},p\mathbb{H}$ and $p\mathbb{O}$ accompanied by their split counterparts $p\mathbb{C}_{s},p\mathbb{H}_{s},p\mathbb{O}_{s}$.
As shown by Table [2](#tab:Hurwitz-para-Hurwitz){reference-type="ref" reference="tab:Hurwitz-para-Hurwitz"}, all properties of $\mathbb{R},\mathbb{C},\mathbb{H}$ and $\mathbb{O}$ are valid also for their split companions, with the only difference that the latter are not division algebras and do have zero divisors. Generalizations of the Hurwitz theorem can be given over arbitrary fields (see [@ZSSS p. 32]), but for our purposes this will not be needed. ## [\[sec:Symmetric-composition-algebras\]]{#sec:Symmetric-composition-algebras label="sec:Symmetric-composition-algebras"}Symmetric composition algebras We now turn our attention to a special class of composition algebras, i.e. symmetric composition algebras, which are not unital but exhibit many properties analogous to those of Hurwitz algebras. Composition algebras with associative norm (see below) were independently studied by Petersson [@Petersson; @1969], Okubo [@Okubo95], and Faulkner [@Fau14]. In [@OO81a], Okubo and Osborn showed that over an algebraically closed field the only two types of symmetric composition algebras are para-Hurwitz algebras and Okubo algebras, but the final classification was given by Elduque and Myung [@Elduque; @91; @Elduque; @Myung; @90]. A symmetric composition algebra $\left(A,*,n\right)$ is a composition algebra whose norm is associative, i.e. satisfies the identity $$\left\langle x*y,z\right\rangle =\left\langle x,y*z\right\rangle ,\label{eq:associativityNorm}$$ where $x,y,z\in A$ and $\left\langle x,y\right\rangle =n\left(x+y\right)-n\left(x\right)-n\left(y\right)$, as stated in ([\[eq:polarNorm\]](#eq:polarNorm){reference-type="ref" reference="eq:polarNorm"}). From equation ([\[eq:associativityNorm\]](#eq:associativityNorm){reference-type="ref" reference="eq:associativityNorm"}), we can extract a significant attribute of symmetric composition algebras.
More precisely, considering $$\begin{aligned} \left\langle \left(x*y\right)*x,z\right\rangle & =\left\langle x*y,x*z\right\rangle =n\left(x\right)\left\langle y,z\right\rangle ,\end{aligned}$$ and given that $n\left(x+y\right)=n\left(x\right)+n\left(y\right)+\left\langle x,y\right\rangle$, we can deduce $$\begin{aligned} n\left(\left(x*y\right)*x-n\left(x\right)y\right) & =2n^{2}\left(x\right)n\left(y\right)-n\left(x\right)\left\langle x*y,x*y\right\rangle =0.\end{aligned}$$ Thus, since the norm $n$ is non-degenerate, we have the following important proposition. **Proposition 1**. *Let $\left(A,*,n\right)$ be a symmetric composition algebra; then $$\left(x*y\right)*x=n\left(x\right)y,\label{eq:x*y*x=00003Dn(x)y}$$ for every $x,y\in A$.* In the realm of Hurwitz algebras, and similarly for symmetric composition algebras, all automorphisms are isometries. Indeed, it suffices to consider that a map $\varphi:A\longrightarrow A$ such that $\varphi\left(x*y\right)=\varphi\left(x\right)*\varphi\left(y\right)$ satisfies $\varphi\left(\left(x*y\right)*x\right)=n\left(x\right)\varphi\left(y\right)$ on one side, while on the other hand $\left(\varphi\left(x\right)*\varphi\left(y\right)\right)*\varphi\left(x\right)=n\left(\varphi\left(x\right)\right)\varphi\left(y\right)$, so that it must be $$n\left(\varphi\left(x\right)\right)=n\left(x\right),$$ for every $x\in A$. In fact, symmetric composition algebras are deeply intertwined with Hurwitz algebras. Indeed, given a symmetric composition algebra $\left(A,*,n\right)$ and a norm-$1$ element $a\in A$, we can utilize Kaplansky's trick to define a new product $$x\cdot y=\left(a*x\right)*\left(y*a\right),$$ for every $x,y\in A$, resulting in a new composition algebra $\left(A,\cdot,n\right)$. Now, consider the element $e=a*a$.
Since ([\[eq:x\*y\*x=00003Dn(x)y\]](#eq:x*y*x=00003Dn(x)y){reference-type="ref" reference="eq:x*y*x=00003Dn(x)y"}) holds and $n\left(a\right)=1$, we then have that $$\begin{aligned} e\cdot x & =\left(a*\left(a*a\right)\right)*\left(x*a\right)=x,\\ x\cdot e & =\left(a*x\right)*\left(\left(a*a\right)*a\right)=x,\end{aligned}$$ for every $x\in A$. Consequently, $\left(A,\cdot,n\right)$ is a unital composition algebra or, equivalently, a Hurwitz algebra. As a direct implication of the Hurwitz theorem, symmetric composition algebras can only have dimension 1, 2, 4, or 8. ### Para-Hurwitz algebras An important class of symmetric composition algebras is that of para-Hurwitz algebras. Given any Hurwitz algebra $\left(A,\cdot,n\right)$, a conjugation is naturally defined as $$\overline{x}=\left\langle x,1\right\rangle 1-x,$$ for every $x\in A$. Then, consider the new product $$x\bullet y=\overline{x}\cdot\overline{y},\label{eq:para-Hurwitz}$$ for every $x,y\in A$. Since $n\left(x\right)=n\left(\overline{x}\right)$, we have that $$n\left(x\bullet y\right)=n\left(\overline{x}\cdot\overline{y}\right)=n\left(x\right)n\left(y\right),$$ and thus the algebra $\left(A,\bullet,n\right)$ is again a composition algebra. On the other hand, $\left(A,\bullet,n\right)$ is not a unital algebra, since $$x\bullet1=1\bullet x=\overline{x}.$$ Moreover, the algebra is a symmetric composition algebra, since it can be shown to uphold $$\begin{aligned} \left\langle x\bullet y,z\right\rangle & =\left\langle x,y\bullet z\right\rangle ,\end{aligned}$$ and it is then called a *para-Hurwitz* algebra [@ElDuque; @Comp]. For every Hurwitz algebra, i.e. unital composition algebra, of dimension $>1$ we thus obtain a para-Hurwitz algebra, i.e. a symmetric composition algebra, which we denote by $p\mathbb{C},$$p\mathbb{C}_{s}$,$p\mathbb{H},$$p\mathbb{H}_{s}$, $p\mathbb{O}$ and $p\mathbb{O}_{s}$ respectively.
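The para-Hurwitz construction can be made concrete in dimension 4. The following Python sketch (our own illustration; the para-quaternions $p\mathbb{H}$ stand in for $p\mathbb{O}$ purely to keep the product short) checks numerically that the product $x\bullet y=\overline{x}\cdot\overline{y}$ loses the unit but keeps the composition property, that its norm is associative, that $\left(x\bullet y\right)\bullet x=n\left(x\right)y$, and that Kaplansky's trick recovers a unital product:

```python
import random

def qmul(x, y):
    """Hamilton product: the quaternions as a 4-dimensional Hurwitz algebra."""
    a1, b1, c1, d1 = x
    a2, b2, c2, d2 = y
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

conj = lambda x: (x[0], -x[1], -x[2], -x[3])
n = lambda x: sum(t*t for t in x)
bil = lambda x, y: 2*sum(a*b for a, b in zip(x, y))   # polarization of n

def pmul(x, y):
    """Para-quaternion product: x bullet y = conj(x) . conj(y)."""
    return qmul(conj(x), conj(y))

random.seed(1)
rq = lambda: tuple(random.uniform(-1, 1) for _ in range(4))
close = lambda u, v: all(abs(a - b) < 1e-12 for a, b in zip(u, v))
x, y, z = rq(), rq(), rq()
one = (1.0, 0.0, 0.0, 0.0)

assert close(pmul(x, one), conj(x))                            # 1 is no longer a unit
assert abs(n(pmul(x, y)) - n(x)*n(y)) < 1e-12                  # composition survives
assert abs(bil(pmul(x, y), z) - bil(x, pmul(y, z))) < 1e-12    # associative norm
assert close(pmul(pmul(x, y), x), tuple(n(x)*t for t in y))    # (x.y).x = n(x) y

# Kaplansky's trick: with n(a) = 1 and x . y := (a*x)*(y*a),
# the element e = a*a is a two-sided unit of the new product.
a = (0.5, 0.5, 0.5, 0.5)                                       # n(a) = 1
kmul = lambda u, v: pmul(pmul(a, u), pmul(v, a))
e = pmul(a, a)
assert close(kmul(e, x), x) and close(kmul(x, e), x)
print("para-Hurwitz identities verified")
```

The same script, with an octonionic product in place of `qmul`, verifies the corresponding statements for $p\mathbb{O}$.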
It is worth noting that all para-Hurwitz algebras are non-alternative, since $$\begin{aligned} x\bullet\left(x\bullet y\right) & =\overline{x}\cdot\left(\overline{\overline{x}\cdot\overline{y}}\right)=\overline{x}\cdot\left(y\cdot x\right),\\ \left(x\bullet x\right)\bullet y & =\left(x\cdot x\right)\cdot\overline{y},\end{aligned}$$ so that, in general, $x\bullet\left(x\bullet y\right)\neq\left(x\bullet x\right)\bullet y$. Nevertheless, by Proposition [1](#prop:x*y*x){reference-type="ref" reference="prop:x*y*x"} they are flexible and, more specifically, $$\left(x\bullet y\right)\bullet x=n\left(x\right)y,$$ for every $x,y\in A$. Moreover, if the Hurwitz algebra $\left(A,\cdot,n\right)$ is a division algebra, then the para-Hurwitz algebra $\left(A,\bullet,n\right)$ defined by ([\[eq:para-Hurwitz\]](#eq:para-Hurwitz){reference-type="ref" reference="eq:para-Hurwitz"}) is also a division algebra. The algebraic properties of Hurwitz and para-Hurwitz algebras are summarized in Table [2](#tab:Hurwitz-para-Hurwitz){reference-type="ref" reference="tab:Hurwitz-para-Hurwitz"}. ### Petersson algebras A generalisation of para-Hurwitz algebras was presented by Petersson in [@Petersson; @1969]. Starting from a Hurwitz algebra $\left(A,\cdot,n\right)$, he introduced a new algebra $\left(A,*,n\right)$ such that $$x*y=\tau\left(\overline{x}\right)\cdot\tau^{2}\left(\overline{y}\right),$$ where $\tau$ is an order-three automorphism, i.e. $\tau^{3}=\text{id}$. The new algebra, typically denoted by $A_{\tau}$, is a non-unital composition algebra. Moreover, Petersson demonstrated that over an algebraically closed field such as $\mathbb{C}$, there exists a specific automorphism that results in a non-para-Hurwitz algebra. This new algebra is a symmetric composition algebra containing idempotent elements. Petersson algebras are crucial in characterizing symmetric composition algebras, since we have the following theorem.
**Theorem 2 [\[thm:(Elduque-Perez-)-An\]]{#thm:(Elduque-Perez-)-An label="thm:(Elduque-Perez-)-An"} (Elduque-Perez [@EP96 Th. 2.5])**. *An algebra $A$ is a symmetric composition algebra with a nonzero idempotent if and only if there exist a Hurwitz algebra $H$ and an automorphism $\tau$ of $H$ such that $A$ is isomorphic to the algebra $H_{\tau}$.* # [\[sec:Octonions,-Paraoctonions-and\]]{#sec:Octonions,-Paraoctonions-and label="sec:Octonions,-Paraoctonions-and"}Octonions, paraoctonions and the real Okubo algebra In this section, we delve into the three division algebras of primary interest: the octonions $\mathbb{O}$, the paraoctonions $p\mathbb{O}$ and the real Okubo algebra $\mathcal{O}$. The main objective of this section is summarized in Table [3](#tab:Oku-Para-Octo){reference-type="ref" reference="tab:Oku-Para-Octo"}, which synoptically illustrates the relationships between the products of the three algebras. It is crucial to note that, although it is possible to switch from one algebra to another by altering the product definition, none of these algebras is isomorphic to the others: e.g. the octonions $\mathbb{O}$ are alternative and unital, the para-octonions $p\mathbb{O}$ are neither alternative nor unital but do have a para-unit, while the Okubo algebra $\mathcal{O}$ is non-alternative and only has idempotent elements. It is also worth highlighting that the Okubo algebra $\mathcal{O}$ is the least structured among these algebras. ## The algebra of octonions The algebra of octonions $\mathbb{O}$ is the only division Hurwitz algebra of dimension eight. We define the composition algebra of octonions $\left(\mathbb{O},\cdot,n\right)$ as the eight-dimensional real vector space with basis $\left\{ \text{i}_{0}=1,\text{i}_{1},...,\text{i}_{7}\right\}$, with a bilinear product encoded through the *Fano plane* and explained in Fig. [1](#fig:octonion fano plane){reference-type="ref" reference="fig:octonion fano plane"}.
![[\[fig:octonion fano plane\]]{#fig:octonion fano plane label="fig:octonion fano plane"} Multiplication rules for the octonions $\mathbb{O}$ as the real vector space $\mathbb{R}^{8}$ in the basis $\left\{ \text{i}_{0}=1,\text{i}_{1},...,\text{i}_{7}\right\}$. Lines in the Fano plane identify associative triples of the product, and the arrow indicates the sign (positive in the sense of the arrow and negative in the opposite sense). In addition to the previous rules, it is intended that $\text{i}_{k}^{2}=-1$.](FanoPlaneWikiConv.png){#fig:octonion fano plane} Given an element $x\in\mathbb{O}$ with decomposition $$x=x_{0}+\sum_{k=1}^{7}x_{k}\text{i}_{k},$$ the norm $n$ is the obvious Euclidean one, defined by $$n\left(x\right)=x_{0}^{2}+x_{1}^{2}+x_{2}^{2}+x_{3}^{2}+x_{4}^{2}+x_{5}^{2}+x_{6}^{2}+x_{7}^{2},\label{eq:octonionic norm}$$ for which the conjugation reads $$\overline{x}=x_{0}-\sum_{k=1}^{7}x_{k}\text{i}_{k},$$ and therefore $$n\left(x\right)=\overline{x}\cdot x,\label{eq:n(x)=00003Dxcx}$$ as happens for every Hurwitz algebra. A look at ([\[eq:octonionic norm\]](#eq:octonionic norm){reference-type="ref" reference="eq:octonionic norm"}) then shows that $n\left(x\right)=0$ if and only if $x=0$, and thus the inverse of a non-zero octonion is easily found as $$x^{-1}=\frac{\overline{x}}{n\left(x\right)}.$$ Also, from ([\[eq:n(x)=00003Dxcx\]](#eq:n(x)=00003Dxcx){reference-type="ref" reference="eq:n(x)=00003Dxcx"}) we have that the octonionic inner product is given by $$\begin{aligned} \left\langle x,y\right\rangle & =x\overline{y}+y\overline{x},\label{eq:octonionic inner}\end{aligned}$$ so that $\left\langle x,x\right\rangle =2n\left(x\right)$. Straightforward calculations show that the algebra of octonions is neither commutative nor associative, but it is alternative.
Now, since any two elements of an alternative algebra generate an associative subalgebra, it is easy to see that $\left(\mathbb{O},\cdot,n\right)$ is indeed a Hurwitz algebra, since it is unital and $$\begin{aligned} n\left(x\cdot y\right) & =\left(\overline{x\cdot y}\right)\cdot\left(x\cdot y\right)\\ & =\left(\overline{y}\cdot\overline{x}\right)\cdot\left(x\cdot y\right)\nonumber \\ & =\overline{y}\cdot\left(\overline{x}\cdot x\right)\cdot y=n\left(x\right)n\left(y\right).\nonumber \end{aligned}$$ Moreover, since any non-zero element has non-zero norm, i.e. $n\left(x\right)\neq0$, the composition algebra $\left(\mathbb{O},\cdot,n\right)$ is also a division algebra: if $x\cdot y=0$, then $$n\left(x\cdot y\right)=n\left(x\right)n\left(y\right)=0,$$ which implies that $x=0$ or $y=0$. Finally, an important relation that will be used later on in the definition of the projective plane is the following consequence of alternativity: $$\overline{x}\cdot\left(x\cdot y\right)=n\left(x\right)y,\label{eq:Oct-xb*x*y=00003Dn(x)y}$$ for every $x,y\in\mathbb{O}$. ### Moufang identities While the octonions are not a group under multiplication, due to their lack of associativity, non-zero octonions form a *Moufang loop*, i.e. a loop that satisfies the following *Moufang identities*: $$\begin{aligned} \left(\left(x\cdot y\right)\cdot x\right)\cdot z & =x\cdot\left(y\cdot\left(x\cdot z\right)\right),\\ \left(\left(z\cdot x\right)\cdot y\right)\cdot x & =z\cdot\left(x\cdot\left(y\cdot x\right)\right),\\ \left(x\cdot y\right)\cdot\left(z\cdot x\right) & =x\cdot\left(\left(y\cdot z\right)\cdot x\right),\label{eq:MoufangIdent3}\end{aligned}$$ for every $x,y,z\in\mathbb{O}$. Moufang identities are particularly relevant, since they are historically linked to geometrical properties of the Moufang plane (see Sec. [\[sec:Discussions-and-verifications\]](#sec:Discussions-and-verifications){reference-type="ref" reference="sec:Discussions-and-verifications"}).
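Alternativity, the composition property and the Moufang identities can all be checked numerically. The sketch below (our own illustration) builds the octonions by Cayley-Dickson doubling of the quaternions, i.e. $\left(a,b\right)\left(c,d\right)=\left(ac-\overline{d}b,\,da+b\overline{c}\right)$, a construction isomorphic to, though not notationally identical with, the Fano-plane rules of Fig. 1:

```python
import random

def qmul(x, y):
    """Hamilton product on the quaternions."""
    a1, b1, c1, d1 = x
    a2, b2, c2, d2 = y
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

qconj = lambda q: (q[0], -q[1], -q[2], -q[3])

def omul(x, y):
    """Octonion product via Cayley-Dickson doubling:
    (a,b)(c,d) = (ac - conj(d)b, da + b conj(c))."""
    a, b, c, d = x[:4], x[4:], y[:4], y[4:]
    sub = lambda u, v: tuple(p - q for p, q in zip(u, v))
    add = lambda u, v: tuple(p + q for p, q in zip(u, v))
    return sub(qmul(a, c), qmul(qconj(d), b)) + add(qmul(d, a), qmul(b, qconj(c)))

n = lambda x: sum(t*t for t in x)
close = lambda u, v: all(abs(p - q) < 1e-10 for p, q in zip(u, v))

random.seed(2)
ro = lambda: tuple(random.uniform(-1, 1) for _ in range(8))
x, y, z = ro(), ro(), ro()

# alternative (but NOT associative), and a composition algebra
assert close(omul(x, omul(x, y)), omul(omul(x, x), y))
assert not close(omul(x, omul(y, z)), omul(omul(x, y), z))
assert abs(n(omul(x, y)) - n(x)*n(y)) < 1e-10

# third Moufang identity: (x.y).(z.x) = x.((y.z).x)
assert close(omul(omul(x, y), omul(z, x)), omul(x, omul(omul(y, z), x)))
print("octonionic Moufang identity verified")
```

Since the octonion algebra is unique up to isomorphism, the same assertions hold for any choice of Fano-plane orientation.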
It is worth noting that any unital algebra satisfying the Moufang identities is an alternative algebra. Indeed, setting $z=1$ the Moufang identities turn into the flexible identity, i.e. $$\left(x\cdot y\right)\cdot x=x\cdot\left(y\cdot x\right),$$ while setting $y=1$ we obtain the identities for left and right alternativity, i.e. $$\begin{aligned} \left(x\cdot x\right)\cdot z & =x\cdot\left(x\cdot z\right),\\ \left(z\cdot x\right)\cdot x & =z\cdot\left(x\cdot x\right).\end{aligned}$$ Thus, non-alternative algebras do not satisfy the Moufang identities. ## Okubo algebras Symmetric composition algebras might have remained relatively unnoticed among algebraists had Petersson [@Petersson; @1969] not demonstrated that for every field $\mathbb{F}$ there exists a unique eight-dimensional such algebra that is not a para-Hurwitz algebra. This result essentially broadened the reach of the Hurwitz classification theorem. On the other hand, Okubo algebras were independently developed by the mathematical physicist Susumu Okubo in the course of his work on quarks and Gell-Mann matrices, while pursuing an algebra featuring $\text{SU}\left(3\right)$ as automorphism group, instead of $\text{G}_{2}$ as in the case of the octonions [@Okubo95]. Even more interestingly, Okubo discovered that this algebra is a division composition algebra and that a deformation of its product gives back the octonions [@Okubo; @1978; @Okubo; @78c; @Okubo1978b]. It was with more recent works [@KMRT], and with the joint efforts of Osborn, Elduque and Myung [@OO81a; @OO81b; @Elduque; @91; @Elduque; @93; @Elduque; @Myung; @90; @ElduQueAut], that the context of Okubo algebras was fully elucidated.
Following [@Okubo; @1978] and [@Elduque; @Myung; @90], we define the real Okubo algebra $\mathcal{O}$ as the set of three-by-three Hermitian traceless matrices over the complex numbers $\mathbb{C}$ with the following bilinear product $$x*y=\mu\, xy+\overline{\mu}\, yx-\frac{1}{3}\text{Tr}\left(xy\right)\mathbf{1},\label{eq:product Ok}$$ where $\mu=\nicefrac{1}{6}\left(3+\text{i}\sqrt{3}\right)$, $\mathbf{1}$ is the identity matrix, and the juxtaposition is the ordinary associative product between matrices. It is worth noting that ([\[eq:product Ok\]](#eq:product Ok){reference-type="ref" reference="eq:product Ok"}) can be seen as a modification of the Jordan product. Indeed, setting $\mu=\nicefrac{1}{2}$ and neglecting the last term, we retrieve the usual Jordan product, i.e. $$x\circ y=\frac{1}{2}xy+\frac{1}{2}yx.$$ Nevertheless, Hermitian traceless matrices are not closed under this product, so the additional term $-\nicefrac{1}{3}\text{Tr}\left(xy\right)\mathbf{1}$ is required for the closure of the algebra. Indeed, setting $\text{Im}\,\mu=0$ in ([\[eq:product Ok\]](#eq:product Ok){reference-type="ref" reference="eq:product Ok"}), one retrieves the product on the traceless part of the Jordan algebra $\mathfrak{J}_{3}\left(\mathbb{C}\right)$, whose derivation Lie algebra is $\mathfrak{su}\left(3\right)$. Analyzing ([\[eq:product Ok\]](#eq:product Ok){reference-type="ref" reference="eq:product Ok"}), it becomes evident that the resulting algebra is neither unital, nor associative, nor alternative. Nonetheless, $\mathcal{O}$ is a *flexible* algebra, i.e. $$x*\left(y*x\right)=\left(x*y\right)*x,$$ which will turn out to be an even more useful property than alternativity in the definition of the projective plane. Even though the Okubo algebra is not unital, it does have idempotents, i.e.
$e*e=e$, such as $$e=\left(\begin{array}{ccc} 2 & 0 & 0\\ 0 & -1 & 0\\ 0 & 0 & -1 \end{array}\right),\label{eq:idemp}$$ that together with $$\begin{array}{ccc} \text{i}_{1}=\sqrt{3}\left(\begin{array}{ccc} 0 & 1 & 0\\ 1 & 0 & 0\\ 0 & 0 & 0 \end{array}\right), & & \text{i}_{2}=\sqrt{3}\left(\begin{array}{ccc} 0 & 0 & 1\\ 0 & 0 & 0\\ 1 & 0 & 0 \end{array}\right),\\ \text{i}_{3}=\sqrt{3}\left(\begin{array}{ccc} 0 & 0 & 0\\ 0 & 0 & 1\\ 0 & 1 & 0 \end{array}\right), & & \text{i}_{4}=\sqrt{3}\left(\begin{array}{ccc} 1 & 0 & 0\\ 0 & -1 & 0\\ 0 & 0 & 0 \end{array}\right),\\ \text{i}_{5}=\sqrt{3}\left(\begin{array}{ccc} 0 & -i & 0\\ i & 0 & 0\\ 0 & 0 & 0 \end{array}\right), & & \text{i}_{6}=\sqrt{3}\left(\begin{array}{ccc} 0 & 0 & -i\\ 0 & 0 & 0\\ i & 0 & 0 \end{array}\right),\\ \text{i}_{7}=\sqrt{3}\left(\begin{array}{ccc} 0 & 0 & 0\\ 0 & 0 & -i\\ 0 & i & 0 \end{array}\right), \end{array}\label{eq:definizione i ottonioniche}$$ form a basis for $\mathcal{O}$, which has real dimension $8$.[^1] It is worth noting that the choice of the idempotent $e$ as in ([\[eq:idemp\]](#eq:idemp){reference-type="ref" reference="eq:idemp"}) does not entail any loss of generality for the subsequent development of our work, since all idempotents are conjugate under the automorphism group (cf. [@ElduQueAut Thm. 20]). The choice of this special basis is motivated by the fact that it will turn out to be an orthonormal basis with respect to the norm in ([\[eq:Norm-Ok\]](#eq:Norm-Ok){reference-type="ref" reference="eq:Norm-Ok"}), and that, through a special bijective map between the Okubo algebra and the octonions, the elements of the basis $\left\{ e,\text{i}_{1},...,\text{i}_{7}\right\}$ defined here will correspond to the octonionic basis defined previously. Let us consider the quadratic form $n$ on the Okubo algebra, given by $$n\left(x\right)=\frac{1}{6}\text{Tr}\left(x^{2}\right),\label{eq:Norm-Ok}$$ for every $x\in\mathcal{O}$.
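These definitions are easy to experiment with numerically. The following sketch is ours and not part of the paper: it realises the product and the norm just defined in NumPy (the helper `random_element` is an illustrative assumption) and checks that $e$ is an idempotent of norm one, together with flexibility and the composition properties discussed in the text that follows.

```python
import numpy as np

MU = (3 + 1j * np.sqrt(3)) / 6           # mu = (3 + i*sqrt(3)) / 6

def okubo(x, y):
    """Okubic product x*y = mu xy + conj(mu) yx - (1/3) Tr(xy) 1."""
    xy, yx = x @ y, y @ x
    return MU * xy + np.conj(MU) * yx - (np.trace(xy) / 3) * np.eye(3)

def norm(x):
    """Okubic norm n(x) = Tr(x^2) / 6."""
    return np.trace(x @ x).real / 6

def random_element(rng):
    """A random element of O: a traceless Hermitian 3x3 complex matrix."""
    a = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
    h = (a + a.conj().T) / 2
    return h - (np.trace(h).real / 3) * np.eye(3)

e = np.diag([2.0, -1.0, -1.0])           # the idempotent of the text

rng = np.random.default_rng(0)
x, y = random_element(rng), random_element(rng)

assert np.allclose(okubo(e, e), e)                       # e*e = e
assert np.isclose(norm(e), 1.0)                          # n(e) = 1
assert np.allclose(okubo(x, okubo(y, x)),                # flexibility
                   okubo(okubo(x, y), x))
assert np.allclose(okubo(okubo(x, y), x), norm(x) * y)   # (x*y)*x = n(x) y
assert np.isclose(norm(okubo(x, y)), norm(x) * norm(y))  # n(x*y) = n(x) n(y)
```

The last two assertions anticipate the composition and symmetric-composition identities established just below; they are floating-point sanity checks, not proofs.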
It is straightforward to see that this *norm* has signature $(8,0)$ and is both associative and multiplicative (composition) over the real Okubo algebra, i.e. $$\begin{aligned} n\left(x*y\right) & =n\left(x\right)n\left(y\right),\\ \left\langle x*y,z\right\rangle & =\left\langle x,y*z\right\rangle ,\label{eq:symmetric polar-1}\end{aligned}$$ where $\left\langle \cdot,\cdot\right\rangle$ is the *polar form* given by $$\left\langle x,y\right\rangle =n\left(x+y\right)-n\left(x\right)-n\left(y\right).\label{eq:polar form}$$ Therefore, the Okubo algebra is a *symmetric composition* algebra [@KMRT Ch. VIII] and thus enjoys the notable relation $$x*\left(y*x\right)=\left(x*y\right)*x=n\left(x\right)y,\label{eq:symm-comp}$$ for every $x,y\in\mathcal{O}$. For our purposes it will be of paramount importance to notice the following [@OkMy80] ** 3**. *The Okubo algebra is a division algebra.* *Proof.* Without any loss of generality, let us suppose that $d\neq0$ is a left divisor of zero, i.e. $d*x=0$ for some $x\neq0$; then $$n\left(d*x\right)=n\left(d\right)n\left(x\right)=0.$$ Moreover, since the algebra is a symmetric composition algebra, by ([\[eq:symm-comp\]](#eq:symm-comp){reference-type="ref" reference="eq:symm-comp"}) we also have $$\begin{aligned} \left(d*x\right)*d & =0=n\left(d\right)x,\end{aligned}$$ and therefore $n\left(d\right)=0$. But the element $d$ is of the form $$d=\left(\begin{array}{ccc} \xi_{1} & x_{1}+\text{i}y_{1} & x_{2}+\text{i}y_{2}\\ x_{1}-\text{i}y_{1} & \xi_{2} & x_{3}+\text{i}y_{3}\\ x_{2}-\text{i}y_{2} & x_{3}-\text{i}y_{3} & -\xi_{1}-\xi_{2} \end{array}\right),$$ where $x_{i},y_{i},\xi_{i}\in\mathbb{R}$, so the norm $n\left(d\right)$ is given by $$n\left(d\right)=\frac{1}{3}\left(x_{1}^{2}+x_{2}^{2}+x_{3}^{2}+y_{1}^{2}+y_{2}^{2}+y_{3}^{2}+\xi_{1}^{2}+\xi_{2}^{2}+\xi_{1}\xi_{2}\right),\label{eq:norma-d}$$ which is positive definite, since $\xi_{1}^{2}+\xi_{2}^{2}+\xi_{1}\xi_{2}=\left(\xi_{1}+\frac{1}{2}\xi_{2}\right)^{2}+\frac{3}{4}\xi_{2}^{2}\geq0$. Hence $n\left(d\right)=0$ forces $d=0$, contradicting $d\neq0$.
◻ Unfortunately, since $\mathcal{O}$ is not a unital algebra, its elements do not have inverses in the usual sense. This implies that, with respect to its product, the Okubo algebra is not a loop (as the non-zero octonions were, forming a Moufang loop) but only a quasigroup. Nevertheless, given the existence of the idempotent $e$, and inspired by the identity $$x*\left(e*x\right)=\left(x*e\right)*x=n\left(x\right)e,$$ we can define $\left(x\right)_{L}^{-1}=n\left(x\right)^{-1}\left(x*e\right)$ and $\left(x\right)_{R}^{-1}=n\left(x\right)^{-1}\left(e*x\right)$ so that, given a definite choice of the idempotent $e$, one has $$\left(x\right)_{L}^{-1}*x=x*\left(x\right)_{R}^{-1}=e.$$ As an implication of the previous argument we have the following ** 4**. *An equation of the kind $$a*x=b,\,\,\text{or}\,\,\,\,x*a=b,$$ has a unique solution, which is respectively given by $$x=\frac{1}{n\left(a\right)}b*a,\,\,\,\text{or}\,\,\,\,x=\frac{1}{n\left(a\right)}a*b,$$ for every $a,b\in\mathcal{O}$ with $a\neq0$.* *Proof.* Let us consider the equation $a*x=b$. Since $\mathcal{O}$ is a division algebra and $a\neq0$, we can multiply both sides on the right by $a$, obtaining $$\left(a*x\right)*a=b*a;$$ but since $\left(a*x\right)*a=n\left(a\right)x$ and $n\left(a\right)\in\mathbb{R}$ is non-zero, we then have $x=n\left(a\right)^{-1}b*a$. A similar argument applies to the case $x*a=b$. ◻ Although the above proposition is straightforward, it has profound geometrical implications, as it confirms the applicability of the affine and projective axioms to planes over the Okubo algebra. This topic will be elaborated upon in subsequent sections. ## [\[subsec:Conjugation-and-the\]]{#subsec:Conjugation-and-the label="subsec:Conjugation-and-the"}Conjugation and the Trivolution In unital composition algebras, as noted earlier, there exists a canonical involution, an order-two antihomomorphism known as *conjugation*.
This can be defined through the unit element as $$x\mapsto\overline{x}=\left\langle x,1\right\rangle 1-x.\label{eq:coniugazione}$$ This canonical involution has the distinctive property of being an antihomomorphism with respect to the product, i.e. $\overline{x\cdot y}=\overline{y}\cdot\overline{x}$, together with the basic property relating it to the norm, $x\cdot\overline{x}=n\left(x\right)1$. For non-unital composition algebras the previous definition is not applicable. However, if an idempotent element $e$ is present in the algebra, one might be tempted to extend the previous definition as $$x\mapsto\widetilde{x}=\left\langle x,e\right\rangle e-x,$$ and investigate whether similar properties remain valid. In the case of a para-Hurwitz algebra $p\mathbb{K}$, obtained from a Hurwitz algebra $\left(\mathbb{K},\cdot,n\right)$ by imposing the new product $x\bullet y=\overline{x}\cdot\overline{y}$ for every $x,y\in\mathbb{K}$, we have a special element, called the para-unit, i.e. $1\in p\mathbb{K}$ such that $1\bullet x=\overline{x}$. Thus, we might want to have a look at the map $L_{1}$ given by left multiplication by the para-unit, i.e. $$\begin{aligned} x & \mapsto L_{1}\left(x\right)=1\bullet x.\end{aligned}$$ Clearly, the same arguments apply to $R_{1}\left(x\right)$, since $x\bullet1=1\bullet x$. Indeed, we notice that $L_{1}^{2}\left(x\right)=x$, so that $L_{1}$ is an involution, and since $$\begin{aligned} L_{1}\left(x\bullet y\right) & =1\bullet\left(x\bullet y\right)=y\cdot x,\end{aligned}$$ and $$\begin{aligned} L_{1}\left(x\right)\bullet L_{1}\left(y\right) & =\overline{x}\bullet\overline{y}=x\cdot y,\end{aligned}$$ the map is also an anti-homomorphism, i.e. $L_{1}\left(x\bullet y\right)=L_{1}\left(y\right)\bullet L_{1}\left(x\right)$.
Finally, we also have $$x\bullet L_{1}\left(x\right)=x\bullet\left(1\bullet x\right)=n\left(x\right)1.$$ Thus, the order-two anti-homomorphism $x\mapsto\widetilde{x}=L_{1}\left(x\right)$ realises over the para-Hurwitz algebra $p\mathbb{K}$ all the main features of the canonical involution, or conjugation, of the Hurwitz algebra $\mathbb{K}$. Unfortunately, the situation within the Okubo algebra is less straightforward. Indeed, if we consider the idempotent $e$ and define the map $$x\mapsto\left\langle x,e\right\rangle e-x,$$ it has order two but is neither a homomorphism nor an antihomomorphism. On the other hand, if we consider the maps $$\begin{aligned} x & \longrightarrow L_{e}\left(x\right)=e*x,\label{eq:azione a sinistra}\\ x & \longrightarrow R_{e}\left(x\right)=x*e,\label{eq:azioni a destra}\end{aligned}$$ then in both cases we do have a nice relation with the norm, since $$\begin{aligned} x*L_{e}\left(x\right) & =n\left(x\right)e,\\ R_{e}\left(x\right)*x & =n\left(x\right)e.\end{aligned}$$ Yet, even if $R_{e}\circ L_{e}=\text{id}$ holds true as in the para-Hurwitz case, neither $L_{e}$ nor $R_{e}$ is an automorphism or an antiautomorphism. On the other hand, if we generalise ([\[eq:coniugazione\]](#eq:coniugazione){reference-type="ref" reference="eq:coniugazione"}) with the following map $$x\mapsto\left\langle x,e\right\rangle e-x*e,$$ we do obtain a special automorphism, which we call $\tau$, that is, nevertheless, not of order two but of order three.
Therefore, while it is not possible to have an involution over the Okubo algebra that enjoys the same properties as the conjugation of Hurwitz algebras, it is possible to define something in a similar fashion, namely an order-three automorphism $\tau$, hereafter referred to as a *trivolution*, defined as $$x\longrightarrow\tau\left(x\right)=\left\langle x,e\right\rangle e-x*e,\label{eq:tau}$$ or, equivalently, as $$\begin{aligned} x & \longrightarrow\tau\left(x\right)=L_{e}^{2}\left(x\right)=e*\left(e*x\right),\\ x & \longrightarrow\tau^{2}\left(x\right)=R_{e}^{2}\left(x\right)=\left(x*e\right)*e.\end{aligned}$$ It is easy to see that the automorphism $\tau$ has order $3$ since, applying flexibility, we have $R_{e}^{2}\circ L_{e}^{2}=\text{id}$. It is also worth noting the striking analogy with the conjugation of unital composition algebras expressed in ([\[eq:coniugazione\]](#eq:coniugazione){reference-type="ref" reference="eq:coniugazione"}) and, at the same time, with the map defined for para-Hurwitz algebras in ([\[eq:azione a sinistra\]](#eq:azione a sinistra){reference-type="ref" reference="eq:azione a sinistra"}). Even more interestingly, the order-three automorphism $\tau$ is also an order-three automorphism of the octonions $\mathbb{O}$.
Indeed, if we consider the basis given in ([\[eq:definizione i ottonioniche\]](#eq:definizione i ottonioniche){reference-type="ref" reference="eq:definizione i ottonioniche"}) and set $e=\text{i}_{0}$, we can define $\tau$ as the linear map given by $$\begin{array}{cc} \tau\left(\text{i}_{k}\right) & =\text{i}_{k},\quad k=0,1,3,7,\\ \tau\left(\text{i}_{2}\right) & =-\frac{1}{2}\left(\text{i}_{2}-\sqrt{3}\text{i}_{5}\right),\\ \tau\left(\text{i}_{5}\right) & =-\frac{1}{2}\left(\text{i}_{5}+\sqrt{3}\text{i}_{2}\right),\\ \tau\left(\text{i}_{4}\right) & =-\frac{1}{2}\left(\text{i}_{4}-\sqrt{3}\text{i}_{6}\right),\\ \tau\left(\text{i}_{6}\right) & =-\frac{1}{2}\left(\text{i}_{6}+\sqrt{3}\text{i}_{4}\right). \end{array}\label{eq:Tau(Octonions)}$$ This definition extends to an order-three automorphism of the octonions $\mathbb{O}$ once we consider $\left\{ \text{i}_{0}=1,\text{i}_{1},...,\text{i}_{7}\right\}$ as a basis for this algebra. It is interesting to note that in the octonions there are two Argand planes, generated by $\left\{ \text{i}_{2},\text{i}_{5}\right\}$ and $\left\{ \text{i}_{4},\text{i}_{6}\right\}$, on which the automorphism $\tau$ acts as multiplication by the cube root of unity $\frac{1}{2}\left(-1+\sqrt{3}\text{i}\right)$. For completeness we also give the action of the inverse $\tau^{-1}=\tau^{2}$ on this basis, i.e. $$\begin{array}{cc} \tau^{2}\left(\text{i}_{k}\right) & =\text{i}_{k},\quad k=0,1,3,7,\\ \tau^{2}\left(\text{i}_{2}\right) & =-\frac{1}{2}\left(\text{i}_{2}+\sqrt{3}\text{i}_{5}\right),\\ \tau^{2}\left(\text{i}_{5}\right) & =-\frac{1}{2}\left(\text{i}_{5}-\sqrt{3}\text{i}_{2}\right),\\ \tau^{2}\left(\text{i}_{4}\right) & =-\frac{1}{2}\left(\text{i}_{4}+\sqrt{3}\text{i}_{6}\right),\\ \tau^{2}\left(\text{i}_{6}\right) & =-\frac{1}{2}\left(\text{i}_{6}-\sqrt{3}\text{i}_{4}\right). \end{array}\label{eq:Tau(Octonions)-1}$$ ## Okubo algebra, octonions and para-octonions An important feature of the Okubo algebra $\mathcal{O}$ is its interplay with the algebra of octonions $\mathbb{O}$.
Indeed, the octonions and the Okubo algebra are linked to one another in such a way that we can easily pass from one to the other simply by changing the definition of the bilinear product over the underlying vector space. Let us consider the Kaplansky trick we introduced earlier and define a new product over the Okubo algebra $\mathcal{O}$ as $$x\cdot y=\left(e*x\right)*\left(y*e\right),$$ where $x,y\in\mathcal{O}$ and $e$ is an idempotent of $\mathcal{O}$. Given that $e*e=e$ and $n\left(e\right)=1$, the element $e$ acts as a left and right identity, since by ([\[eq:symm-comp\]](#eq:symm-comp){reference-type="ref" reference="eq:symm-comp"}) $$\begin{aligned} x\cdot e & =\left(e*x\right)*e=n\left(e\right)x=x,\\ e\cdot x & =e*\left(x*e\right)=n\left(e\right)x=x.\end{aligned}$$ Moreover, since the Okubo algebra is a composition algebra, the same norm $n$ enjoys the relation $$n\left(x\cdot y\right)=n\left(\left(e*x\right)*\left(y*e\right)\right)=n\left(x\right)n\left(y\right),$$ which means that $\left(\mathcal{O},\cdot,n\right)$ is a unital composition algebra of real dimension $8$. Since it is also a division algebra, it must be isomorphic to the algebra of octonions $\mathbb{O}$, as noted by Okubo himself [@Okubo; @1978; @Okubo; @78c]. On the other hand, if we consider the order-three automorphism of the octonions in ([\[eq:Tau(Octonions)\]](#eq:Tau(Octonions)){reference-type="ref" reference="eq:Tau(Octonions)"}), the Okubo algebra is then realised as a Petersson algebra over the octonions by setting $$\begin{aligned} x*y & =\tau\left(\overline{x}\right)\cdot\tau^{2}\left(\overline{y}\right).\end{aligned}$$ Note that ([\[eq:tau\]](#eq:tau){reference-type="ref" reference="eq:tau"}) was formulated assuming knowledge of the Okubic product. Reading the same maps as Okubic maps, we then have the notable relations
$$\begin{aligned} \overline{x} & =R_{e}^{3}\left(x\right)=\left(\left(x*e\right)*e\right)*e,\\ \tau\left(x\right) & =R_{e}^{4}\left(x\right)=\left(\left(\left(x*e\right)*e\right)*e\right)*e,\end{aligned}$$ so that, in fact, the two maps are linked to one another, i.e. $$\begin{aligned} \tau\left(x\right) & =\overline{x}*e,\\ \overline{x} & =\tau\left(e*x\right).\end{aligned}$$ While these maps are intertwined, it is important to highlight their distinct impact on the algebraic structure. While $\tau$ is an automorphism of both the Okubo algebra $\mathcal{O}$ and the octonions $\mathbb{O}$, the map $x\mapsto\overline{x}$ does not respect the algebraic structure of the Okubo algebra $\mathcal{O}$, since it is neither an automorphism nor an anti-automorphism with respect to the Okubic product, while it is an anti-homomorphism of the octonions $\mathbb{O}$. The scenario with the para-octonions $p\mathbb{O}$ is more straightforward. By definition, the para-octonions are obtained from the octonions $\mathbb{O}$ through $$x\bullet y=\overline{x}\cdot\overline{y},$$ while, on the other hand, the octonions $\mathbb{O}$ are obtained from the para-octonions $p\mathbb{O}$ with the aid of the para-unit $1\in p\mathbb{O}$, since $$\begin{aligned} \left(1\bullet x\right)\bullet\left(y\bullet1\right) & =\overline{x}\bullet\overline{y}\\ & =x\cdot y.\end{aligned}$$ The new algebra $\left(p\mathbb{O},\cdot,n\right)$ is again an eight-dimensional composition algebra which is also unital and division, and thus, by Hurwitz's theorem, isomorphic to the octonions $\mathbb{O}$. Moreover, since $\tau\left(\overline{x}\right)=\overline{\tau\left(x\right)}$, we also have that the Okubo algebra is obtainable from the para-Hurwitz algebra with the introduction of a Petersson-like product, i.e. $$x*y=\tau\left(x\right)\bullet\tau^{2}\left(y\right).$$ We thus have that all three algebras are obtainable one from the other, as summarized in Table [3](#tab:Oku-Para-Octo){reference-type="ref" reference="tab:Oku-Para-Octo"}.
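Both the trivolution and the recovery of the octonionic product from the Okubic one can be tested numerically in the matrix model of $\mathcal{O}$ from the previous section. The sketch below is our illustration, not part of the paper; all helper names are ours.

```python
import numpy as np

MU = (3 + 1j * np.sqrt(3)) / 6

def okubo(x, y):
    """Okubic product x*y = mu xy + conj(mu) yx - (1/3) Tr(xy) 1."""
    xy, yx = x @ y, y @ x
    return MU * xy + np.conj(MU) * yx - (np.trace(xy) / 3) * np.eye(3)

def norm(x):
    """Okubic norm n(x) = Tr(x^2) / 6."""
    return np.trace(x @ x).real / 6

def random_element(rng):
    """A random traceless Hermitian 3x3 complex matrix."""
    a = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
    h = (a + a.conj().T) / 2
    return h - (np.trace(h).real / 3) * np.eye(3)

e = np.diag([2.0, -1.0, -1.0])

def polar(x, y):
    """Polar form <x,y> = n(x+y) - n(x) - n(y)."""
    return norm(x + y) - norm(x) - norm(y)

def tau(x):
    """Trivolution tau(x) = <x,e> e - x*e."""
    return polar(x, e) * e - okubo(x, e)

def octonion(x, y):
    """Octonionic product recovered from the Okubic one: (e*x)*(y*e)."""
    return okubo(okubo(e, x), okubo(y, e))

rng = np.random.default_rng(1)
x, y = random_element(rng), random_element(rng)

assert np.allclose(tau(x), okubo(e, okubo(e, x)))            # tau = L_e^2
assert np.allclose(tau(tau(tau(x))), x)                      # order three
assert np.allclose(tau(okubo(x, y)), okubo(tau(x), tau(y)))  # automorphism
assert np.allclose(octonion(x, e), x)                        # e is a unit
assert np.allclose(octonion(e, x), x)
assert np.isclose(norm(octonion(x, y)), norm(x) * norm(y))   # composition
assert np.allclose(octonion(x, octonion(x, y)),              # alternativity
                   octonion(octonion(x, x), y))              # of (O, ., n)
```

The last check is consistent with $\left(\mathcal{O},\cdot,n\right)$ being isomorphic to the octonions: the recovered product is alternative even though the Okubic one is not.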
| Algebra | $\left(\mathcal{O},*\right)$ | $\left(p\mathbb{O},\bullet\right)$ | $\left(\mathbb{O},\cdot\right)$ |
|--------------|---------------------------------------------|----------------------------------------------------------------------------------|-----------------------------------------------------------------------|
| $x*y$ | $x*y$ | $\tau\left(x\right)\bullet\tau^{2}\left(y\right)$ | $\tau\left(\overline{x}\right)\cdot\tau^{2}\left(\overline{y}\right)$ |
| $x\bullet y$ | $\tau^{2}\left(x\right)*\tau\left(y\right)$ | $x\bullet y$ | $\overline{x}\cdot\overline{y}$ |
| $x\cdot y$ | $\left(e*x\right)*\left(y*e\right)$ | $\left(\boldsymbol{1}\bullet x\right)\bullet\left(y\bullet\boldsymbol{1}\right)$ | $x\cdot y$ |

: [\[tab:Oku-Para-Octo\]]{#tab:Oku-Para-Octo label="tab:Oku-Para-Octo"}In this table we show how to obtain the Okubic product $*$, the para-octonionic product $\bullet$ and the octonionic product $\cdot$ starting from the Okubo algebra $\left(\mathcal{O},*\right)$, the para-octonions $\left(p\mathbb{O},\bullet\right)$ and the octonions $\left(\mathbb{O},\cdot\right)$, respectively.

Nonetheless, it is vital to note that, while transitioning from one algebra to another is feasible, these algebras are not isomorphic to one another. For example, while the octonions $\mathbb{O}$ are alternative and unital, the para-octonions $p\mathbb{O}$ are neither alternative nor unital, though they do have a para-unit. In contrast, the Okubo algebra $\mathcal{O}$ is non-alternative and only contains idempotent elements. # [\[sec:Affine-and-projective\]]{#sec:Affine-and-projective label="sec:Affine-and-projective"}Affine and projective planes An *incidence plane* $P^{2}$ is given by the triple $\left\{ \mathscr{P},\mathscr{L},\mathscr{R}\right\}$, where $\mathscr{P}$ is the set of points of the plane, $\mathscr{L}$ is the set of lines and $\mathscr{R}$ is the set of incidence relations between points and lines.
The plane $P^{2}$ is called an *affine plane* if it satisfies the axioms of affine geometry, which means that the relations $\mathscr{R}$ are such that: two distinct points are joined by a single line; any two non-parallel lines intersect in exactly one point; and, finally, for each line and each point there is a unique line which passes through the point and is parallel to the given line. Instead, for $P^{2}$ to be projective, $\mathscr{R}$ must satisfy the following properties: 1. Any two distinct points are incident to a unique line. 2. Any two distinct lines are incident with a unique point. 3. (*non-degeneracy*) There exist four points such that no three of them are incident to the same line. From the definitions provided above, both affine and projective planes can be defined in very abstract terms. However, in this section our focus is on the study of incidence planes defined in a natural way over algebras, with the aim of generalizing the construction of the classical affine and projective planes [@Compact; @Projective]. Our interest lies in the definition wherein the points of the affine plane are characterized by two coordinates, and lines are linear functions of the coordinates with respect to a sum and a product that are those of the algebra itself. To achieve a projective completion, it becomes necessary to introduce an additional line at infinity and a few suitable relations. This foundational approach to constructing affine and projective planes proves effective, with minor yet meaningful modifications, for all three 8-dimensional division composition algebras: the octonions $\mathbb{O}$, the para-octonions $p\mathbb{O}$ and the Okubo algebra $\mathcal{O}$. Thus, we outline the framework for defining the affine and projective planes for all three cases.
However, in order to avoid repetition, we will develop all the details only for the Okubo case $\mathcal{O}$, highlighting in Subsection [\[subsec:Octonionic-and-para-octonionic\]](#subsec:Octonionic-and-para-octonionic){reference-type="ref" reference="subsec:Octonionic-and-para-octonionic"} the differences and variations needed in the other two cases. ## The Okubic affine plane and its completion A direct consequence of Proposition [ 4](#prop:SolutionLinearEq a*x=00003Db){reference-type="ref" reference="prop:SolutionLinearEq a*x=00003Db"} is the feasibility of defining affine geometry over the Okubo algebra or, in other words, an Okubic affine plane $\mathscr{A}_{2}\left(\mathcal{O}\right)$ satisfying all the axioms of affine geometry. ![[\[fig:The affine plane\]]{#fig:The affine plane label="fig:The affine plane"}Representation of the completion of the affine plane: $\left(0,0\right)$ represents the origin, $\left(0\right)$ the point at infinity on the $x$-axis, $\left(s\right)$ is the point at infinity of the line $\left[s,t\right]$ of slope $s$, while $\left(\infty\right)$ is the point at infinity on the $y$-axis and of the vertical lines $\left[c\right]$.](AffinePlane.png){#fig:The affine plane} Indeed, we identify a *point* of the Okubic affine plane $\mathscr{A}_{2}\left(\mathcal{O}\right)$ by two coordinates $\left(x,y\right)$ with $x,y\in\mathcal{O}$, while a *line* of slope $s\in\mathcal{O}$ and offset $t\in\mathcal{O}$ is the set $\left[s,t\right]=\left\{ \left(x,s*x+t\right):x\in\mathcal{O}\right\}$. Thus, the $x$-axis is represented by the line $\left[0,0\right]$. On the other hand, *vertical lines* are identified by $\left[c\right]$, which stands for the set $\left\{ c\right\} \times\mathcal{O}$. Here $c\in\mathcal{O}$ represents the intersection with the $x$-axis; thus $\left[0\right]$ denotes the $y$-axis.
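To make these definitions concrete, here is a small numerical sketch (ours, assuming the matrix model of $\mathcal{O}$ from the previous section; helper names are ours) that constructs the unique non-vertical line through two random points, using the solution formula of Proposition 4, and checks that both points lie on it.

```python
import numpy as np

MU = (3 + 1j * np.sqrt(3)) / 6

def okubo(x, y):
    """Okubic product x*y = mu xy + conj(mu) yx - (1/3) Tr(xy) 1."""
    xy, yx = x @ y, y @ x
    return MU * xy + np.conj(MU) * yx - (np.trace(xy) / 3) * np.eye(3)

def norm(x):
    """Okubic norm n(x) = Tr(x^2) / 6."""
    return np.trace(x @ x).real / 6

def random_element(rng):
    """A random traceless Hermitian 3x3 complex matrix."""
    a = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
    h = (a + a.conj().T) / 2
    return h - (np.trace(h).real / 3) * np.eye(3)

rng = np.random.default_rng(2)
x1, y1 = random_element(rng), random_element(rng)
x2, y2 = random_element(rng), random_element(rng)

# Slope of the line through (x1,y1) and (x2,y2): the unique solution of
# s*(x1 - x2) = y1 - y2, namely s = (x1-x2)*(y1-y2) / n(x1-x2).
s = okubo(x1 - x2, y1 - y2) / norm(x1 - x2)
t = y1 - okubo(s, x1)                      # offset, so the line is [s, t]

assert np.allclose(okubo(s, x1) + t, y1)   # (x1, y1) lies on [s, t]
assert np.allclose(okubo(s, x2) + t, y2)   # (x2, y2) lies on [s, t]
```

The incidence checks work because of the symmetric-composition identity $\left(a*b\right)*a=n\left(a\right)b$; the uniqueness of $s$ is exactly the content of Proposition 4.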
Finally, as for the incidence rules, we say that a point $\left(x,y\right)\in\mathscr{A}_{2}\left(\mathcal{O}\right)$ is *incident* to a line $\left[s,t\right]\subset\mathscr{A}_{2}\left(\mathcal{O}\right)$ if it belongs to such a line, i.e. $\left(x,y\right)\in\left[s,t\right]$. We now proceed to prove that the set of points, lines and incidence relations previously defined forms an affine plane. ** 5**. *The Okubic affine plane $\mathscr{A}_{2}\left(\mathcal{O}\right)$ with the previous incidence rules satisfies the axioms of affine geometry.* *Proof.* First of all, we can straightforwardly see that given any two points $\left(x_{1},y_{1}\right)$ and $\left(x_{2},y_{2}\right)$ there is a unique line joining them. If $x_{1}=x_{2}=x$, the line is simply $\left[x\right]$. On the other hand, if $x_{1}\neq x_{2}$, the line is given by $\left[s,y_{1}-s*x_{1}\right]$, where $s$ is determined by the linear equation $$s*\left(x_{1}-x_{2}\right)=\left(y_{1}-y_{2}\right),$$ which has a unique solution given by $$s=\frac{\left(x_{1}-x_{2}\right)*\left(y_{1}-y_{2}\right)}{n\left(x_{1}-x_{2}\right)}.\label{eq:SlopeOku}$$ Similarly, for two lines $\left[s_{1},t_{1}\right]$ and $\left[s_{2},t_{2}\right]$ with distinct slopes $s_{1}\neq s_{2}$, a unique point of intersection exists, namely $\left(x,s_{1}*x+t_{1}\right)$, where $$x=\frac{\left(t_{2}-t_{1}\right)*\left(s_{1}-s_{2}\right)}{n\left(s_{1}-s_{2}\right)}.\label{eq:InterOku}$$ If two lines have the same slope, they are disjoint; two such lines are called *parallel*. Finally, for each line $\left[s,t\right]$ and each point $\left(x,y\right)$ there is a unique line, given by $\left[s,y-s*x\right]$, which passes through $\left(x,y\right)$ and is parallel to $\left[s,t\right]$. ◻ The projective completion of the affine plane $\overline{\mathscr{A}_{2}}\left(\mathcal{O}\right)$ is obtained by adding a line at infinity $\left[\infty\right]$, i.e.
$$\left[\infty\right]=\left\{ \left(s\right):s\in\mathcal{O}\cup\left\{ \infty\right\} \right\} ,$$ where $\left(s\right)$ identifies the point at infinity of a line with slope $s\in\mathcal{O}\cup\left\{ \infty\right\}$. Finally, we define $\left(\infty\right)$ as the point at infinity of $\left[\infty\right]$. We now proceed to verify that the plane $\overline{\mathscr{A}_{2}}\left(\mathcal{O}\right)$ satisfies the axioms of projective geometry: any two lines intersect in a unique point; through any two points passes a unique line; and there exist at least four points forming a non-degenerate quadrangle. Indeed, we have the following ** 6**. *The extended Okubic affine plane $\overline{\mathscr{A}_{2}}\left(\mathcal{O}\right)$ is a projective plane.* *Proof.* First we need to show that through any two points of the extended plane there still passes a unique line. This is straightforward: if both points belong to the affine plane, the line was already determined; if both are at infinity, i.e. $\left(s\right)$ and $\left(s'\right)$, such a line is $\left[\infty\right]$; finally, if one point $\left(x,y\right)$ is on the affine plane and the other, $\left(s\right)$, is at infinity, the line that joins them is $\left[s,y-s*x\right].$ On the other hand, if two lines are not parallel, their intersection was already determined; if they are parallel lines, such as $\left[s,t_{1}\right]$ and $\left[s,t_{2}\right]$, they now intersect in the point $\left(s\right)$; finally, two vertical lines intersect in $\left(\infty\right)$. The only thing left is to verify that there exists a non-degenerate quadrangle, i.e. four points no three of which are collinear, which in this case can be found easily: e.g. the quadrangle $\diamondsuit=\left\{ \left(0,0\right),\left(e,e\right),\left(0\right),\left(\infty\right)\right\} \subset\overline{\mathscr{A}_{2}}\left(\mathcal{O}\right)$ is such that no three of its elements are incident to the same line.
Indeed, the lines joining those points are $\left[0,0\right],\left[0\right],\left[\infty\right],\left[0,e\right],\left[e,0\right]$ and $\left[e\right]$, and none of them contains three elements of $\diamondsuit$. ◻ * 7*. As in the standard projective plane over a field, we would like to point out the existence of a fundamental triangle in the extended Okubic affine plane as well. More precisely, the entire affine plane is encompassed by a triangle given by three special points: the *origin* $\left(0,0\right)$; the *$0$-point at infinity*, i.e. the point $\left(0\right)$ obtained by prolonging the $x$-axis to infinity; and, finally, the *$\infty$-point at infinity*, i.e. the point $\left(\infty\right)$ obtained by prolonging the $y$-axis to infinity. We designate by $\triangle$ the set made of those three points, i.e. $\triangle=\left\{ \left(0,0\right),\left(0\right),\left(\infty\right)\right\} .$ ## [\[sec:The-Okubic-projective\]]{#sec:The-Okubic-projective label="sec:The-Okubic-projective"}The Okubic projective plane We will now define the projective plane $\mathcal{O}P^{2}$ directly and subsequently illustrate its correspondence with the completion of the affine plane $\overline{\mathscr{A}_{2}}\left(\mathcal{O}\right)$. Historically, numerous devices were used for defining projective planes over non-associative algebras such as the octonions. Here we will use a variation of the one proposed by H. Salzmann [@Compact; @Projective], which is based on what he calls "Veronese coordinates". Let $V$ be the 27-dimensional real vector space $V\cong\mathcal{O}^{3}\times\mathbb{R}^{3}$, with elements of the form $$\left(x_{\nu};\lambda_{\nu}\right)_{\nu}=\left(x_{1},x_{2},x_{3};\lambda_{1},\lambda_{2},\lambda_{3}\right),$$ where $x_{\nu}\in\mathcal{O}$, $\lambda_{\nu}\in\mathbb{R}$ and $\nu=1,2,3$.
We then define the Veronese vectors to be those $w\in V$ that satisfy the following *Veronese conditions*: $$\begin{aligned} \lambda_{1}x_{1} & =x_{2}*x_{3},\,\,\lambda_{2}x_{2}=x_{3}*x_{1},\,\,\lambda_{3}x_{3}=x_{1}*x_{2},\label{eq:Okubo Ver-1}\\ n\left(x_{1}\right) & =\lambda_{2}\lambda_{3},\,\,n\left(x_{2}\right)=\lambda_{3}\lambda_{1},\,\,n\left(x_{3}\right)=\lambda_{1}\lambda_{2}.\label{eq:Okubo Ver-2}\end{aligned}$$ It is straightforward to see that if $w=\left(x_{\nu};\lambda_{\nu}\right)_{\nu}$ is a Veronese vector then $\mu w=\mu\left(x_{\nu};\lambda_{\nu}\right)_{\nu}$ is also Veronese for every $\mu\in\mathbb{R}$. The set of Veronese vectors is therefore a subset of $V$ that we will call $H$, and for every Veronese vector $w$ we denote by $\mathbb{R}w\subset H$ the class of real multiples of $w$. The *Okubic projective plane* $\mathcal{O}P^{2}$ is then the geometry having the 1-dimensional subspaces $\mathbb{R}w$ as *points*, i.e. $$\mathscr{P}_{\mathcal{O}}=\left\{ \mathbb{R}w:w\in H\smallsetminus\left\{ 0\right\} \right\} .\label{eq:Projective plane}$$ The set of *lines* $\mathscr{L}_{\mathcal{O}}$ is formed by the subspaces $\ell_{w}$ of the projective plane $\mathcal{O}P^{2}$ that are orthogonal to a Veronese vector $w\in H$, i.e. $$\ell_{w}=w^{\bot}=\left\{ z\in H:\beta\left(z,w\right)=0\right\} ,\label{eq:Projective line}$$ where the bilinear form $\beta$ is the extension to $V$ of the polarisation of the Okubic norm. More specifically, defining $\left\langle x,y\right\rangle =n\left(x+y\right)-n\left(x\right)-n\left(y\right)$ for any two Okubic elements $x,y\in\mathcal{O}$, the bilinear form $\beta$ is given by $$\beta\left(v,w\right)=\sum_{\nu=1}^{3}\left(\left\langle x_{\nu},y_{\nu}\right\rangle +\lambda_{\nu}\eta_{\nu}\right),\label{eq:beta bilinear}$$ where $v=\left(x_{\nu};\lambda_{\nu}\right)_{\nu}$ and $w=\left(y_{\nu};\eta_{\nu}\right)_{\nu}$ are vectors in $V$.
Finally, the incidence relations are again given by inclusion $\subseteq$, i.e. we say that a point $\mathbb{R}w\in\mathcal{O}P^{2}$ is *incident* to the line $\ell_{v}$ iff $\mathbb{R}w\subseteq v^{\bot}$, i.e. $\beta\left(w,v\right)=0$. * 8*. Since all real multiples of a Veronese vector $v=\left(x_{\nu};\lambda_{\nu}\right)_{\nu}$ identify the same point of the projective plane, we will usually take as representative of the class the one such that $\lambda_{1}+\lambda_{2}+\lambda_{3}=1$, so that an alternative definition of the set of points in ([\[eq:Projective plane\]](#eq:Projective plane){reference-type="ref" reference="eq:Projective plane"}) could be $$\mathcal{O}P^{2}=\left\{ \left(x_{\nu};\lambda_{\nu}\right)_{\nu}\in H\smallsetminus\left\{ 0\right\} ,\lambda_{1}+\lambda_{2}+\lambda_{3}=1\right\} .$$ * 9*. It is also worth noting how the norm $n$, defined over the symmetric composition algebra $\mathbb{\mathcal{O}}$, is intertwined with the geometry of the plane. This relationship becomes evident when considering the quadratic form associated with the symmetric bilinear form $\beta$, i.e. $$q\left(v\right)\coloneqq\frac{1}{2}\beta\left(v,v\right)=n\left(x_{1}\right)+n\left(x_{2}\right)+n\left(x_{3}\right)+\frac{1}{2}\left(\lambda_{1}^{2}+\lambda_{2}^{2}+\lambda_{3}^{2}\right),\label{eq:Norm element projective}$$ where $v=\left(x_{\nu};\lambda_{\nu}\right)_{\nu}$. In the next section we will show that the triple $\mathcal{O}P^{2}=\left\{ \mathscr{P}_{\mathcal{O}},\mathscr{L}_{\mathcal{O}},\subseteq\right\}$ is indeed a projective plane and, even more, is equivalent to the completion of the affine plane $\overline{\mathscr{A}_{2}}\left(\mathcal{O}\right)$. 
## [\[sec:Correspondence-between-affine\]]{#sec:Correspondence-between-affine label="sec:Correspondence-between-affine"}Correspondence between affine and projective plane In establishing a one-to-one correspondence between the completion of the affine plane $\overline{\mathscr{A}_{2}}\left(\mathcal{O}\right)$ and the projective plane $\mathcal{O}P^{2}$, we must ensure that such a correspondence maintains the incidence relations. Specifically, a point incident to a line in $\overline{\mathscr{A}_{2}}\left(\mathcal{O}\right)$ should map to a point in $\mathcal{O}P^{2}$ that is incident to the image of the original line. As demonstrated in [@Corr-OkuboSpin], the map which sends points and lines from $\overline{\mathscr{A}_{2}}\left(\mathcal{O}\right)$ to $\mathcal{O}P^{2}$ defined by $$\begin{array}{ccc} \left(x,y\right) & \mapsto & \mathbb{R}\left(x,y,x*y;n\left(y\right),n\left(x\right),1\right),\\ \left(x\right) & \mapsto & \mathbb{R}\left(0,0,x;n\left(x\right),1,0\right),\\ \left(\infty\right) & \mapsto & \mathbb{R}\left(0,0,0;1,0,0\right),\\ \left[s,t\right] & \mapsto & \left(t*s,-t,-s;1,n\left(s\right),n\left(t\right)\right)^{\bot},\\ \left[c\right] & \mapsto & \left(-c,0,0;0,1,n\left(c\right)\right)^{\bot},\\ \left[\infty\right] & \mapsto & \left(0,0,0;0,0,1\right)^{\bot}, \end{array}\label{eq:correspondence}$$ is indeed well-defined and keeps the incidence relations. ** 10**. 
*The aforementioned correspondence ([\[eq:correspondence\]](#eq:correspondence){reference-type="ref" reference="eq:correspondence"}) is well-defined and is a one-to-one correspondence between points and lines of the affine plane $\overline{\mathscr{A}_{2}}\left(\mathcal{O}\right)$ and points and lines of the projective plane $\mathcal{O}P^{2}$.* *Proof.* In fact, this is just a trivial check that relies on the Veronese conditions and on $\mathcal{O}$ being a symmetric composition algebra, for which just ([\[eq:comp(Def)\]](#eq:comp(Def)){reference-type="ref" reference="eq:comp(Def)"}) and ([\[eq:x\*y\*x=00003Dn(x)y\]](#eq:x*y*x=00003Dn(x)y){reference-type="ref" reference="eq:x*y*x=00003Dn(x)y"}) have to be used. For example, let $\left(x,y\right)$ be a point of the affine plane; then the vector $\left(x,y,x*y;n\left(y\right),n\left(x\right),1\right)$ is a Veronese vector, since a direct check of ([\[eq:Okubo Ver-1\]](#eq:Okubo Ver-1){reference-type="ref" reference="eq:Okubo Ver-1"}) and ([\[eq:Okubo Ver-2\]](#eq:Okubo Ver-2){reference-type="ref" reference="eq:Okubo Ver-2"}) yields $$\begin{array}{ccccc} n\left(y\right)x=y*\left(x*y\right), & & n\left(x\right)y=\left(x*y\right)*x,\,\, & & x*y=x*y,\\ n\left(x\right)=n\left(x\right), & & n\left(y\right)=n\left(y\right), & & n\left(x*y\right)=n\left(x\right)n\left(y\right), \end{array}$$ that are either identically true or obtainable from the fact that the Okubo algebra is a composition algebra, i.e. $n\left(x*y\right)=n\left(x\right)n\left(y\right)$, or from the symmetric composition identity, i.e. $n\left(x\right)y=\left(x*y\right)*x.$ On the other hand, for any Veronese vector $v=\left(x_{\nu};\lambda_{\nu}\right)_{\nu}$ with $\lambda_{3}\neq0$ we have that the subspace $\mathbb{R}v$ is the same as $$\mathbb{R}v=\mathbb{R}\left(x,y,x*y;n\left(y\right),n\left(x\right),1\right),$$ where $x=\lambda_{3}^{-1}x_{1}$ and $y=\lambda_{3}^{-1}x_{2}$, which is again a Veronese vector. 
The check with a generic line proceeds in the same way, but it might be interesting to explicitly check that $$\left[\infty\right]\longrightarrow\left(0,0,0;0,0,1\right)^{\bot},$$ is indeed a line. First of all, we need to find the Veronese vectors orthogonal to $\left(0,0,0;0,0,1\right)$. These are vectors with $\lambda_{3}=0$ and, therefore, with $n\left(x_{1}\right)=n\left(x_{2}\right)=0$, and thus with $x_{1}=x_{2}=0$. Then, elements orthogonal to $\left(0,0,0;0,0,1\right)$ can take only two forms, depending on whether $x_{3}=0$ or $x_{3}\neq0$, i.e. $$\left(0,0,0;0,0,1\right)^{\bot}=\left\{ \mathbb{R}\left(0,0,x;n\left(x\right),1,0\right)\right\} \cup\left\{ \mathbb{R}\left(0,0,0;1,0,0\right)\right\} ,$$ where $x\in\mathcal{O}$. In fact, these are trivially the elements of an Okubic line $\mathcal{O}\cup\left\{ \infty\right\}$, seen as the one-point compactification of the Okubo algebra. ◻ While checking that ([\[eq:correspondence\]](#eq:correspondence){reference-type="ref" reference="eq:correspondence"}) is well defined and gives a one-to-one correspondence between $\overline{\mathscr{A}_{2}}\left(\mathcal{O}\right)$ and $\mathcal{O}P^{2}$ was trivial, the proof that the incidence relations are preserved by ([\[eq:correspondence\]](#eq:correspondence){reference-type="ref" reference="eq:correspondence"}) is a little more involved; for this reason we present it in full with the following ** 11**. *The correspondence in ([\[eq:correspondence\]](#eq:correspondence){reference-type="ref" reference="eq:correspondence"}) preserves the incidence relations between $\overline{\mathscr{A}_{2}}\left(\mathcal{O}\right)$ and $\mathcal{O}P^{2}$.* *Proof.* We need to show that the image of a point $\left(x,y\right)$ incident to the line $\left[s,t\right]$ is mapped by ([\[eq:correspondence\]](#eq:correspondence){reference-type="ref" reference="eq:correspondence"}) into a point of the projective plane, i.e. 
$\mathbb{R}\left(x,y,x*y;n\left(y\right),n\left(x\right),1\right)$, that is incident to the image of $\left[s,t\right]$, i.e. is incident to $\left(t*s,-t,-s;1,n\left(s\right),n\left(t\right)\right)^{\bot}$. By definition of incidence on the projective plane and of ([\[eq:Projective line\]](#eq:Projective line){reference-type="ref" reference="eq:Projective line"}), the image of $\left(x,y\right)$ is incident to the image of $\left[s,t\right]$ if and only if the following condition is satisfied $$\left\langle t*s,x\right\rangle -\left\langle t,y\right\rangle -\left\langle s,x*y\right\rangle +n\left(y\right)+n\left(s\right)n\left(x\right)+n\left(t\right)=0.\label{eq:eqretta}$$ Noting that $$\left\langle s*x,t-y\right\rangle =n\left(s*x+t-y\right)-n\left(s*x\right)-n\left(t-y\right),$$ and since ([\[eq:associativityNorm\]](#eq:associativityNorm){reference-type="ref" reference="eq:associativityNorm"}), we then have $$\begin{array}{cc} \left\langle s*x,t-y\right\rangle & =\left\langle s*x,t\right\rangle -\left\langle s*x,y\right\rangle \\ & =\left\langle t,s*x\right\rangle -\left\langle s,x*y\right\rangle \\ & =\left\langle t*s,x\right\rangle -\left\langle s,x*y\right\rangle , \end{array}$$ and, therefore, $$\left\langle t*s,x\right\rangle -\left\langle s,x*y\right\rangle =n\left(s*x+t-y\right)-n\left(s*x\right)-n\left(t-y\right).$$ Inserting the latter into ([\[eq:eqretta\]](#eq:eqretta){reference-type="ref" reference="eq:eqretta"}) and noting that $n\left(s\right)n\left(x\right)=n\left(s*x\right)$, then ([\[eq:eqretta\]](#eq:eqretta){reference-type="ref" reference="eq:eqretta"}) is equivalent to $$\begin{aligned} n\left(s*x+t-y\right) & =0.\end{aligned}$$ Since Okubo algebra is a division composition algebra, and the only element of zero norm is zero, then ([\[eq:eqretta\]](#eq:eqretta){reference-type="ref" reference="eq:eqretta"}) is satisfied iff $s*x+t-y=0$, that is $\left(x,y\right)\in\left[s,t\right]$. 
The cases for the incidence of $\left(s\right)$ and $\left(\infty\right)$ with $\left[\infty\right]$ can be proved in the same way. ◻ Once it is shown that the map ([\[eq:correspondence\]](#eq:correspondence){reference-type="ref" reference="eq:correspondence"}) gives a one-to-one correspondence that preserves the incidence relations, we thus have the following ** 12**. *The Okubic plane given by the triple $\mathcal{O}P^{2}=\left\{ \mathscr{P}_{\mathcal{O}},\mathscr{L}_{\mathcal{O}},\subseteq\right\}$ is a projective plane and is isomorphic to the completion of the affine plane $\overline{\mathscr{A}_{2}}\left(\mathcal{O}\right)$.* ## [\[subsec:Octonionic-and-para-octonionic\]]{#subsec:Octonionic-and-para-octonionic label="subsec:Octonionic-and-para-octonionic"}Octonionic and para-octonionic planes Previous definitions pertaining to the Okubic plane can be generalized to the para-octonionic case and, with minor variations, to the octonionic case as referenced in [@corr; @Notes; @Octo; @Compact; @Projective]. Indeed, in the para-octonionic case there is no need to alter the definitions formally, provided we replace the Okubic product $*$ with the para-octonionic product $\bullet$. In a manner analogous to the Okubic case, a point of the para-octonionic affine plane $\mathscr{A}_{2}\left(p\mathbb{O}\right)$ is given by a pair of elements $\left(x,y\right)$ with $x,y\in p\mathbb{O}$, while a *line* of slope $s\in p\mathbb{O}$ and offset $t\in p\mathbb{O}$ is the set $\left[s,t\right]=\left\{ \left(x,s\bullet x+t\right):x\in p\mathbb{O}\right\}$ and, of course, we say that a point $\left(x,y\right)\in\mathscr{A}_{2}\left(p\mathbb{O}\right)$ is *incident* to a line $\left[s,t\right]\subset\mathscr{A}_{2}\left(p\mathbb{O}\right)$ if it belongs to such a line, i.e. $\left(x,y\right)\in\left[s,t\right]$. The octonionic case $\mathbb{O}$ with the product $\cdot$ follows the same definitions. 
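Unlike the Okubo product, which requires an explicit order-three automorphism of the octonions to write down, the para-octonionic product $x\bullet y=\overline{x}\cdot\overline{y}$ is immediate to implement via Cayley-Dickson doubling. This makes it possible to sanity-check numerically, in the para-octonionic model, the reduction of the incidence condition to $n\left(s\bullet x+t-y\right)=0$ carried out in the proof above. The following sketch (all helper names are ours, not from any library) verifies that the Veronese bilinear form $\beta$ between the image of a point and the image of a line vanishes precisely on incident pairs:

```python
import random

def cd_conj(x):
    """Cayley-Dickson conjugation: negate all but the first coordinate."""
    if len(x) == 1:
        return x[:]
    h = len(x) // 2
    return cd_conj(x[:h]) + [-t for t in x[h:]]

def cd_mult(x, y):
    """Cayley-Dickson product; on 8-vectors this is octonion multiplication."""
    if len(x) == 1:
        return [x[0] * y[0]]
    h = len(x) // 2
    a, b, c, d = x[:h], x[h:], y[:h], y[h:]
    left = [p - q for p, q in zip(cd_mult(a, c), cd_mult(cd_conj(d), b))]
    right = [p + q for p, q in zip(cd_mult(d, a), cd_mult(b, cd_conj(c)))]
    return left + right

def para(x, y):
    """Para-octonionic product x bullet y = conj(x) . conj(y)."""
    return cd_mult(cd_conj(x), cd_conj(y))

def n(x):
    """Quadratic norm (positive definite, so n(z) = 0 iff z = 0)."""
    return sum(t * t for t in x)

def bracket(u, v):
    """Polarisation <u,v> = n(u+v) - n(u) - n(v)."""
    return n([p + q for p, q in zip(u, v)]) - n(u) - n(v)

def beta(v, w):
    """Veronese bilinear form on (pO)^3 x R^3."""
    x1, x2, x3, l1, l2, l3 = v
    y1, y2, y3, m1, m2, m3 = w
    return (bracket(x1, y1) + bracket(x2, y2) + bracket(x3, y3)
            + l1 * m1 + l2 * m2 + l3 * m3)

random.seed(0)
rnd = lambda: [random.uniform(-1, 1) for _ in range(8)]
neg = lambda u: [-t for t in u]

s, t, x = rnd(), rnd(), rnd()
y = [p + q for p, q in zip(para(s, x), t)]              # (x, y) lies on [s, t]
point = (x, y, para(x, y), n(y), n(x), 1.0)             # image of the point (x, y)
line = (para(t, s), neg(t), neg(s), 1.0, n(s), n(t))    # image of the line [s, t]
assert abs(beta(point, line)) < 1e-8                    # incident: beta vanishes

# for a generic y2 the form equals n(s.x + t - y2), hence is nonzero off the line
y2 = rnd()
off = (x, y2, para(x, y2), n(y2), n(x), 1.0)
residual = n([p + q - r for p, q, r in zip(para(s, x), t, y2)])
assert abs(beta(off, line) - residual) < 1e-8 and residual > 1e-6
print("incidence checks passed")
```

The second assertion checks the stronger identity $\beta=n\left(s\bullet x+t-y\right)$, which encodes the whole computation of the proof in a single line.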
For the affine plane, distinctions primarily manifest in the octonionic equations that describe the slope $s$ of the line passing through two points of the plane and the coordinate $x$ of the intersection of two generic lines as found in ([\[eq:SlopeOku\]](#eq:SlopeOku){reference-type="ref" reference="eq:SlopeOku"}) and ([\[eq:InterOku\]](#eq:InterOku){reference-type="ref" reference="eq:InterOku"}). In the para-octonionic scenario, the expressions remain as $$s=\frac{\left(x_{1}-x_{2}\right)\bullet\left(y_{1}-y_{2}\right)}{n\left(x_{1}-x_{2}\right)},\,\,\,x=\frac{\left(t_{2}-t_{1}\right)\bullet\left(s_{1}-s_{2}\right)}{n\left(s_{1}-s_{2}\right)}.$$ However, the octonionic variant introduces a slight modification due to the unique properties of octonions as a unital composition algebra. Given that $x^{-1}=\overline{x}/n\left(x\right)$, the equations transform to $$s=\frac{\left(y_{1}-y_{2}\right)\cdot\overline{\left(x_{1}-x_{2}\right)}}{n\left(x_{1}-x_{2}\right)},\,\,\,x=\frac{\overline{\left(s_{1}-s_{2}\right)}\cdot\left(t_{2}-t_{1}\right)}{n\left(s_{1}-s_{2}\right)}.$$ Similar modifications are observed in the projective planes' definitions, as seen in ([\[eq:Okubo Ver-1\]](#eq:Okubo Ver-1){reference-type="ref" reference="eq:Okubo Ver-1"}) and ([\[eq:Okubo Ver-2\]](#eq:Okubo Ver-2){reference-type="ref" reference="eq:Okubo Ver-2"}). For the para-octonionic case, given a vector $\left(x_{1},x_{2},x_{3};\lambda_{1},\lambda_{2},\lambda_{3}\right)\in p\mathbb{O}^{3}\times\mathbb{R}^{3}$, one isolates the subset of Veronese vectors satisfying the following conditions $$\begin{aligned} \lambda_{1}x_{1} & =x_{2}\bullet x_{3},\,\,\lambda_{2}x_{2}=x_{3}\bullet x_{1},\,\,\lambda_{3}x_{3}=x_{1}\bullet x_{2},\label{eq:Ver-1-paraOct-1}\\ n\left(x_{1}\right) & =\lambda_{2}\lambda_{3},\,n\left(x_{2}\right)=\lambda_{3}\lambda_{1},\,n\left(x_{3}\right)=\lambda_{1}\lambda_{2},\label{eq:Ver-2-paraOct-1}\end{aligned}$$ which closely resemble the Okubic conditions. 
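Both the para-octonionic Veronese conditions above and the slope formula lend themselves to a direct numerical check. The sketch below (helper names are ours) implements the octonions by Cayley-Dickson doubling, defines $x\bullet y=\overline{x}\cdot\overline{y}$, and verifies that the vector $\left(x,y,x\bullet y;n\left(y\right),n\left(x\right),1\right)$ attached to an affine point is Veronese, and that the slope of the line through two of its points is recovered by the displayed formula:

```python
import random

def cd_conj(x):
    """Cayley-Dickson conjugation: negate all but the first coordinate."""
    if len(x) == 1:
        return x[:]
    h = len(x) // 2
    return cd_conj(x[:h]) + [-t for t in x[h:]]

def cd_mult(x, y):
    """Cayley-Dickson product; on 8-vectors this is octonion multiplication."""
    if len(x) == 1:
        return [x[0] * y[0]]
    h = len(x) // 2
    a, b, c, d = x[:h], x[h:], y[:h], y[h:]
    left = [p - q for p, q in zip(cd_mult(a, c), cd_mult(cd_conj(d), b))]
    right = [p + q for p, q in zip(cd_mult(d, a), cd_mult(b, cd_conj(c)))]
    return left + right

def para(x, y):
    """Para-octonionic product x bullet y = conj(x) . conj(y)."""
    return cd_mult(cd_conj(x), cd_conj(y))

def n(x):
    """Quadratic norm n(x) = sum of squared coordinates."""
    return sum(t * t for t in x)

random.seed(1)
rnd = lambda: [random.uniform(-1, 1) for _ in range(8)]
close = lambda u, v: max(abs(p - q) for p, q in zip(u, v)) < 1e-8
scale = lambda c, u: [c * t for t in u]

x, y = rnd(), rnd()
x1, x2, x3 = x, y, para(x, y)
l1, l2, l3 = n(y), n(x), 1.0
# first block of Veronese conditions: l_i x_i = x_j . x_k (cyclically)
assert close(scale(l1, x1), para(x2, x3))
assert close(scale(l2, x2), para(x3, x1))
assert close(scale(l3, x3), para(x1, x2))
# second block: n(x1) = l2 l3, n(x2) = l3 l1, n(x3) = l1 l2
assert abs(n(x1) - l2 * l3) < 1e-8
assert abs(n(x2) - l3 * l1) < 1e-8
assert abs(n(x3) - l1 * l2) < 1e-8

# slope recovered from two points of the line [s, t]
s, t = rnd(), rnd()
pt = lambda u: (u, [p + q for p, q in zip(para(s, u), t)])
(xa, ya), (xb, yb) = pt(rnd()), pt(rnd())
dx = [p - q for p, q in zip(xa, xb)]
dy = [p - q for p, q in zip(ya, yb)]
assert close(scale(1.0 / n(dx), para(dx, dy)), s)
print("Veronese and slope checks passed")
```

Note that the checks succeed precisely because the para-octonions form a symmetric composition algebra: each condition reduces to $x\bullet\left(y\bullet x\right)=\left(x\bullet y\right)\bullet x=n\left(x\right)y$ or to $n\left(x\bullet y\right)=n\left(x\right)n\left(y\right)$.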
In contrast, the octonionic variant relies on the Veronese conditions applicable to all Hurwitz algebras, i.e., $$\begin{aligned} \lambda_{1}\overline{x_{1}} & =x_{2}\cdot x_{3},\,\,\lambda_{2}\overline{x_{2}}=x_{3}\cdot x_{1},\,\,\lambda_{3}\overline{x_{3}}=x_{1}\cdot x_{2},\label{eq:Ver-1-Oct}\\ n\left(x_{1}\right) & =\lambda_{2}\lambda_{3},\,n\left(x_{2}\right)=\lambda_{3}\lambda_{1},\,n\left(x_{3}\right)=\lambda_{1}\lambda_{2}.\label{eq:Ver-2-Oct}\end{aligned}$$ To conclude, these differences in the Veronese conditions correspond to varied formulations of the one-to-one relationship between the affine and projective planes. Within this relationship, the para-octonions retain the formal mapping in ([\[sec:Correspondence-between-affine\]](#sec:Correspondence-between-affine){reference-type="ref" reference="sec:Correspondence-between-affine"}), and more specifically one still has $$\begin{aligned} \left(x,y\right) & \mapsto\mathbb{R}\left(x,y,x\bullet y;n\left(y\right),n\left(x\right),1\right),\\ \left[s,t\right] & \mapsto\left(t\bullet s,-t,-s;1,n\left(s\right),n\left(t\right)\right)^{\bot},\end{aligned}$$ while for the octonionic plane one has to modify them as follows $$\begin{aligned} \left(x,y\right) & \mapsto\mathbb{R}\left(x,y,\overline{y}\cdot x;n\left(y\right),n\left(x\right),1\right),\\ \left[s,t\right] & \mapsto\left(\overline{s}\cdot t,-t,-s;1,n\left(s\right),n\left(t\right)\right)^{\bot}.\end{aligned}$$ Interestingly, as inferred from the above equations, the Veronese conditions for para-octonions are simpler than those for octonions, since no conjugation is needed. This leads to an intriguing observation: defining the Cayley plane appears more intuitive using para-octonions than octonions. # Collineations on the plane In this section we study the collineations of the Okubic affine and projective plane. We start by presenting explicit forms of elations, more specifically translations and shears, and of the triality collineation (see below). 
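Many of the collineation computations below reduce to the symmetric composition identities $\left(x*y\right)*x=x*\left(y*x\right)=n\left(x\right)y$ and to the resulting division property: for $a\neq0$, the equation $s*a=b$ has the unique solution $s=n\left(a\right)^{-1}\left(a*b\right)$. A quick numerical sanity check of these facts, with the para-octonionic product standing in for a generic symmetric composition product (helper names are ours):

```python
import random

def cd_conj(x):
    """Cayley-Dickson conjugation: negate all but the first coordinate."""
    if len(x) == 1:
        return x[:]
    h = len(x) // 2
    return cd_conj(x[:h]) + [-t for t in x[h:]]

def cd_mult(x, y):
    """Cayley-Dickson product; on 8-vectors this is octonion multiplication."""
    if len(x) == 1:
        return [x[0] * y[0]]
    h = len(x) // 2
    a, b, c, d = x[:h], x[h:], y[:h], y[h:]
    left = [p - q for p, q in zip(cd_mult(a, c), cd_mult(cd_conj(d), b))]
    right = [p + q for p, q in zip(cd_mult(d, a), cd_mult(b, cd_conj(c)))]
    return left + right

def para(x, y):
    """Para-octonionic product x bullet y = conj(x) . conj(y)."""
    return cd_mult(cd_conj(x), cd_conj(y))

def n(x):
    """Quadratic norm n(x) = sum of squared coordinates."""
    return sum(t * t for t in x)

random.seed(2)
rnd = lambda: [random.uniform(-1, 1) for _ in range(8)]
close = lambda u, v: max(abs(p - q) for p, q in zip(u, v)) < 1e-8
scale = lambda c, u: [c * t for t in u]

x, y = rnd(), rnd()
assert close(para(para(x, y), x), scale(n(x), y))   # (x*y)*x = n(x) y
assert close(para(x, para(y, x)), scale(n(x), y))   # x*(y*x) = n(x) y
assert abs(n(para(x, y)) - n(x) * n(y)) < 1e-8      # composition law

# division property: the unique solution of s * a = b
a, b = rnd(), rnd()
s = scale(1.0 / n(a), para(a, b))
assert close(para(s, a), b)
print("symmetric composition checks passed")
```

The same division property, applied in the Okubo algebra, is what makes the fixed-slope and shear formulas in the proofs below well defined.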
The direct study of the motion group is important since it might be an alternative way of proving the isomorphism of the Okubic plane with the Cayley plane. Indeed, it is well known that any 16-dimensional compact plane with a collineation group of dimension greater than 40 is isomorphic to the Cayley plane $\mathbb{O}P^{2}$ (see [@Compact; @Projective Chap. 8]). In fact, this is not needed since we will write an explicit isomorphism between the Okubic plane $\mathcal{O}P^{2}$, the para-octonionic plane $p\mathbb{O}P^{2}$ and the octonionic plane $\mathbb{O}P^{2}$ in the next section. As a result, the collineation groups of the three planes coincide and are given by the exceptional Lie group $\text{E}_{6(-26)}$. Nevertheless, it is noteworthy that a variation in the foundational algebra defining the plane, despite preserving the overall collineation group, alters the algebraic description of the collineations. Consequently, in the Okubic realisation of the 16-dimensional Moufang plane, the reflection $\left(x,y\right)\longrightarrow\left(y,x\right)$ is not a collineation, whereas it is in its octonionic realisation. ## [\[sec:Collineations\]]{#sec:Collineations label="sec:Collineations"}Collineations A *collineation* is a bijection $\varphi$ of the set of points of the plane onto itself, such that lines map to lines. Since the identity map is a collineation, and the inverse $\varphi^{-1}$ and the composition $\varphi\circ\varphi'$ are collineations whenever $\varphi,\varphi'$ are, the set of collineations is in fact a group under composition, which we will denote by $\text{Aut}\left(\mathcal{O}P^{2}\right)$. A notable characteristic of collineations is that they preserve the incidence relations of both the affine and the projective plane. 
Indeed, given two points $p_{1}$ and $p_{2}$, there is only one line passing through them and, clearly, the image of such a line is the only one that passes through $p_{1}^{\varphi}$ and $p_{2}^{\varphi}$ (where we used the classical notation $p^{\varphi}$ and $\ell^{\varphi}$ to indicate the image of the point $p$ and the line $\ell$ through the collineation $\varphi$). We thus have the following ** 13**. *Collineations of the affine plane send parallel lines into parallel lines.* As a consequence of the previous proposition we also have the following ** 14**. *Any affine collineation can be extended uniquely as a projective collineation.* *Proof.* An affine collineation sends parallel lines $\left[s,t\right]$ into parallel lines $\left[s,t\right]^{\varphi}$, so that in fact all lines with slope $s$ go to lines with slope $s^{\varphi}$. Clearly, we can extend the affine collineation to the projective plane if and only if we set that parallel lines go to the same point at infinity, i.e. setting $\left(s\right)^{\varphi}=\left(s^{\varphi}\right)$. ◻ A set of collineations $\triangle$ is called *transitive* on a set $M$ if for every $x,y\in M$ there exists a collineation $\varphi$ such that $x^{\varphi}=y$. On the other hand, a set of collineations $\triangle$ is called *doubly transitive* if for every quadruple of points $x,y,z,w\in M$ with $x\neq z$ and $y\neq w$, there exists a collineation $\varphi$ such that $x^{\varphi}=y$ and $z^{\varphi}=w$. ## Axial collineations Given a collineation $\varphi$, we say that $\varphi$ is *axial* if it fixes every point of a line $\ell$. In this case, the line $\ell$ is called an *axis* of $\varphi$. On the other hand, we say that $\varphi$ is *central* if it fixes every line passing through a point $p$, which in this case is called a *center* of $\varphi$. It is known from the general setting of projective geometry that a collineation of a projective plane is axial if and only if it is central (see [@HP Thm. 4.9]). 
Moreover, it is easy to see that an axial collineation that has two centers or two axes is the identity. Indeed, let us suppose that $\varphi$ has two centers $p$ and $q$; then any other point $r$ outside the line joining $p$ and $q$ would be fixed, since $r$ is given as the intersection of two fixed lines, one passing through $p$ and the other through $q$. On the other hand, we could just replicate the argument for the point $r$ with $p$ and determine that the collineation must also fix the line joining $p$ and $q$. Given a point $p$ and a line $\ell$, we denote an axial collineation with center $p$ and axis $\ell$ as $\varphi_{\left[p,\ell\right]}$ and the group of such collineations as $\Gamma_{\left[p,\ell\right]}$. It is then easy to verify what is known as the *conjugation formula*, i.e. ** 15**. **[\[lem:(Conjugation-formula-)\]]{#lem:(Conjugation-formula-) label="lem:(Conjugation-formula-)"}(Conjugation formula [@HP Lemma. 4.11])* For every collineation $\varphi$ the group $\Gamma_{\left[p^{\varphi},\ell^{\varphi}\right]}$ of collineations with center $p^{\varphi}$ and axis $\ell^{\varphi}$ is just the conjugate of $\Gamma_{\left[p,\ell\right]}$ through $\varphi$ $$\varphi^{-1}\circ\Gamma_{\left[p,\ell\right]}\circ\varphi=\Gamma_{\left[p^{\varphi},\ell^{\varphi}\right]}.$$* Moreover, an axial collineation $\varphi_{\left[p,\ell\right]}$ that fixes a point $q$ outside $p\cup\ell$ is the identity. Indeed, if $q$ is fixed by $\varphi_{\left[p,\ell\right]}$, then joining the points of $\ell$ with $q$ we would see that $q$ is also a center. Since a non-identity axial collineation has a unique center and a unique axis, axial collineations can be easily divided into two classes: 1. those for which the center $p$ is incident to the axis $\ell$, which are called *elations*; 2. those collineations for which the center $p$ is not incident to the axis $\ell$, which are called *homologies*. Finally, a last theorem is worth reviewing, since it is a standard argument that we will use. 
** 16**. **(see [@Compact; @Projective sec. 23.9])* [\[lem:Transitive\]]{#lem:Transitive label="lem:Transitive"}Suppose that $\triangle$ is a set of collineations of center $p$ and axis $\ell$, and let $m$ be a line through $p$ with $m\neq\ell$. If $\triangle$ is transitive on the set of points of $m$ that are not incident with the center or the axis, i.e., $m\smallsetminus\left\{ p,m\wedge\ell\right\}$ then $\triangle$ is the group of all collineations with center $p$ and axis $\ell$, i.e. $\Gamma_{\left[p,\ell\right]}$.* ## [\[sec:Elations\]]{#sec:Elations label="sec:Elations"}Elations We now focus on a special class of axial collineations called *elations*, i.e. those collineations in which the center is incident to the axis. ** 17**. *Collineations of $\overline{\mathscr{A}_{2}}\left(\mathcal{O}\right)$ that have the line at infinity $\left[\infty\right]$ as axis and center incident with the axis are precisely the translations $$\begin{array}{c} \tau_{a,b}:\left(x,y\right)\longrightarrow\left(x+a,y+b\right),\\ \tau_{a,b}\mid_{\left[\infty\right]}=id. \end{array}$$* *Proof.* First of all, we show that these are collineations that have axis $\left[\infty\right]$ and center incident to the axis. Indeed, given a line $\left[s,t\right]$ or $\left[c\right]$, its image through $\tau_{a,b}$ is another line given by $$\begin{aligned} \left[s,t\right]^{\tau_{a,b}} & =\left[s,t-s*a+b\right],\label{eq:t-s*p+q}\\ \left[c\right]^{\tau_{a,b}} & =\left[c+a\right].\end{aligned}$$ Since these are collineations of the affine plane, they extend in a unique way to collineations of the projective plane, and since the slope $\left(s\right)$ is unchanged, the line at infinity is the axis of the collineation. Moreover, let us now consider ([\[eq:t-s\*p+q\]](#eq:t-s*p+q){reference-type="ref" reference="eq:t-s*p+q"}). 
Clearly, if $a\neq0$ there is a unique slope $s$, namely $s=n\left(a\right)^{-1}\left(a*b\right)$, such that $\left[s,t\right]^{\tau_{a,b}}=\left[s,t\right]$ for every $t\in\mathcal{O}$. But the set $\left\{ \left[s,t\right]:t\in\mathcal{O}\right\}$ is exactly the set of parallel lines that pass through the point $\left(s\right)$, i.e. $\left(s\right)$ is a center of the collineation and is incident to the axis $\left[\infty\right]$. The same reasoning can be applied when $a=0$, since in that case all vertical lines $\left[c\right]$ with $c\in\mathcal{O}$ would be invariant and the center of the collineation would be $\left(\infty\right)$. We now need to demonstrate that all elations with axis $\left[\infty\right]$ and center $\left(p\right)$ are of the form $\tau_{a,b}$. First of all, since $\left[\infty\right]$ is the axis, i.e. the collineation fixes the line $\left[\infty\right]$ pointwise, the image of a line of slope $p$ will be a line of the same slope $p$. Now let $q_{1},q_{2}$ be any two points in the affine plane incident to a line of slope $p$. Then there exists a translation of the form $\tau_{a,b}$ that sends $q_{1}$ to $q_{2}$. The group of translations $\tau_{a,b}$ is thus transitive on the line $M$ joining $q_{1}$ and $q_{2}$, which has slope $p$. This means that the group of translations is that of all collineations with center $\left(p\right)$ and axis $\left[\infty\right]$ by Lemma [\[lem:Transitive\]](#lem:Transitive){reference-type="ref" reference="lem:Transitive"}. ◻ We now focus on elations that have the vertical axis $\left[0\right]$ as axis and center $\left(\infty\right)$, which also enjoy an easy and elegant characterisation. ** 18**. 
*Collineations of $\overline{\mathscr{A}_{2}}\left(\mathcal{O}\right)$ that have the vertical axis $\left[0\right]$ as axis and center in $\left(\infty\right)$ are precisely the shears $$\begin{array}{cc} \sigma_{a}: & \left(x,y\right)\longrightarrow\left(x,y+ax\right),\\ & \left(s\right)\longrightarrow\left(s+a\right),\\ & \left(\infty\right)\longrightarrow\left(\infty\right). \end{array}\label{eq:shears}$$* *Proof.* First of all, we show that these are collineations that have axis $\left[0\right]$ and center in $\left(\infty\right)$. Indeed, given a line $\left[s,t\right]$ or $\left[c\right]$, its image through $\sigma_{a}$ is another line given by $$\begin{aligned} \left[s,t\right]^{\sigma_{a}} & =\left[s+a,t\right],\label{eq:t-s*p+q-1}\\ \left[c\right]^{\sigma_{a}} & =\left[c\right],\end{aligned}$$ so that the $\sigma_{a}$ are indeed collineations. Since all lines of the form $\left[c\right]$ are invariant, the point $\left(\infty\right)$ that joins them is the center of all $\sigma_{a}$. On the other hand, looking at ([\[eq:shears\]](#eq:shears){reference-type="ref" reference="eq:shears"}) it is evident that all points of the form $\left(0,t\right)$ are fixed by all $\sigma_{a}$ and thus $\left[0\right]$ is the axis. Since $\left(\infty\right)\in\left[0\right]$, the $\sigma_{a}$ are elations for every $a\in\mathcal{O}$. Now we proceed with the same argument as in the previous theorem to show that all the elations with axis $\left[0\right]$ and center $\left(\infty\right)$ are of the previous form. Let $M$ be the vertical line $\left[c\right]$ with $c\neq0$ and let us consider two points $q_{1},q_{2}\in M\smallsetminus\left(\infty\right)$. Let us suppose $q_{1}=\left(c,y\right)$ and $q_{2}=\left(c,y'\right)$; then the shear $\sigma_{a}$ with $a=n\left(c\right)^{-1}c*\left(y'-y\right)$ sends $q_{1}$ to $q_{2}$. 
Thus the group of shears is transitive over $M\smallsetminus\left(\infty\right)$ and thus coincides with the group of all elations with axis $\left[0\right]$ and center $\left(\infty\right)$, i.e. $\Gamma_{\left[\left(\infty\right),\left[0\right]\right]}$. ◻ Translations and shears occur also in the octonionic realisation of the 16-dimensional Moufang plane. We now point out a transformation that is a collineation when formulated in the octonionic realisation, but is not a collineation on the Okubic projective plane. ** 19**. *The reflection of the coordinates over the Okubic plane given by $\left(x,y\right)\longrightarrow\left(y,x\right)$ is not a collineation.* *Proof.* Let us consider the image of a line $\left[s,t\right]$ through the map that sends $\left(x,y\right)\longrightarrow\left(y,x\right)$. Let us suppose that $$\left[s,t\right]=\left\{ \left(x,s*x+t\right):x\in\mathbb{\mathcal{O}}\right\} \longrightarrow\left[s',t'\right]=\left\{ \left(s*x+t,x\right):x\in\mathbb{\mathcal{O}}\right\} ,$$ and let us determine $s'$ and $t'$. Since by definition $\left[s',t'\right]=\left\{ \left(x',s'*x'+t'\right):x'\in\mathbb{\mathcal{O}}\right\}$, we then have that $$\begin{aligned} \begin{cases} x'=s*x+t,\\ x=s'*x'+t', \end{cases}\end{aligned}$$ which means $$x'=s*\left(s'*x'+t'\right)+t,$$ and thus, by ([\[eq:x\*y\*x=00003Dn(x)y\]](#eq:x*y*x=00003Dn(x)y){reference-type="ref" reference="eq:x*y*x=00003Dn(x)y"}), after multiplying on the right by $s$ we obtain $$\left(x'-t\right)*s=n\left(s\right)\left(s'*x'+t'\right),$$ and, finally, $$n\left(s\right)t'+t*s=x'*s-n\left(s\right)s'*x',$$ which yields a slope $s'$ that varies with $x'$; thus the image is not a line, since the slope is not fixed for all $x'$. ◻ * 20*. In the case of the octonions $\mathbb{O}$, reflections of the affine and projective plane are collineations. 
In fact, the previous map can be defined over the octonionic projective plane as the collineation $$\rho:\begin{cases} \left(x,y\right)\longrightarrow\left(y,x\right),\\ \left(s\right)\longrightarrow\left(s^{-1}\right),\\ \left(\infty\right)\longrightarrow\left(0\right),\\ \left(0\right)\longrightarrow\left(\infty\right), \end{cases}$$ with $x,y,s\in\mathbb{O}$, which is an axial collineation of axis $\left[1,0\right]$ and center $\left(-1\right)$ that sends $$\rho:\begin{cases} \left[s,t\right]\longrightarrow\left[s^{-1},-s^{-1}t\right], & s\neq0\\ \left[0,t\right]\longrightarrow\left[t\right],\\ \left[t\right]\longrightarrow\left[0,t\right],\\ \left[0\right]\longrightarrow\left[\infty\right],\\ \left[\infty\right]\longrightarrow\left[0\right]. \end{cases}$$ From a heuristic point of view the previous theorem is clear: for this reflection to be a collineation, it would require the existence of an inverse on the line at infinity, i.e. $\left(s\right)\longrightarrow\left(s^{-1}\right)$. Also, note that once we try to define such a collineation reading it from the octonions in Tab. [3](#tab:Oku-Para-Octo){reference-type="ref" reference="tab:Oku-Para-Octo"}, i.e. implicitly defining $s^{-1}=x$ so that, read in the octonionic algebra, we would have $x\cdot s=1$, we then have two choices for the implicit definition of $x$, i.e. $$\begin{aligned} \left(x*e\right)*\left(e*s\right) & =e,\text{ or }\left(s*e\right)*\left(e*x\right)=e,\end{aligned}$$ that yield different, though $\tau$-conjugate, definitions of $x$, which would thus violate the uniqueness of the extension of an affine collineation to the projective plane. Another heuristic reason for the lack of such a collineation is that the axis of such a reflection, if it existed as in the octonionic case $\overline{\mathscr{A}_{2}}\left(\mathbb{O}\right)$, would be the line $\left[1,0\right]$ containing all elements of the form $\left(x,x\right)$ with $x\in\mathbb{O}$. 
In our case, it is easy to verify that the points $\left(x,x\right)$ are not all collinear, e.g. the line joining the point $\left(0,0\right)$ with $\left(x,x\right)$ is given by $\left[n\left(x\right)^{-1}x*x,0\right]$ for every $x\in\mathbb{\mathcal{O}}$. * 21*. The previous proposition does not mean that the set of collineations is not transitive over $\mathcal{O}P^{2}$, since for every pair of points $p_{1}=\left(x,y\right)$ and $p_{2}=\left(x',y'\right)$ we can find a collineation that sends $\left(x,y\right)^{\varphi}=\left(x',y'\right)$, such as the translation $\tau_{a,b}$ with $a=x'-x$ and $b=y'-y$. Even more, as a corollary of Theorem [ 24](#thm:Isomorphism){reference-type="ref" reference="thm:Isomorphism"}, the Okubic projective plane is transitive on quadrangles. ## [\[sec:Triality-collineations\]]{#sec:Triality-collineations label="sec:Triality-collineations"}Triality collineations Through the use of the Okubic-Veronese coordinates a special set of collineations can be easily spotted, i.e. the *triality collineation* [@Compact; @Projective] given by a cyclic permutation of the coordinates $$\widetilde{t}:\left(x_{1},x_{2},x_{3};\lambda_{1},\lambda_{2},\lambda_{3}\right)\longrightarrow\left(x_{2},x_{3},x_{1};\lambda_{2},\lambda_{3},\lambda_{1}\right).$$ ** 22**. *The triality collineation can be read on the affine plane in the following way:* *$$\widetilde{t}:\begin{cases} \left(x,y\right) & \longrightarrow\frac{1}{n\left(y\right)}\left(y,x*y\right),\,\,\,y\neq0\\ \left(x\right) & \longrightarrow\frac{1}{n\left(x\right)}\left(0,x\right),x\neq0\\ \left(x,0\right) & \longrightarrow\left(x\right),\,\,\,\\ \left(0\right) & \longrightarrow\left(\infty\right),\\ \left(\infty\right) & \longrightarrow\left(0,0\right). 
\end{cases}\label{eq:triality}$$ In particular it induces a collineation $t\colon\mathscr{A}_{2}\left(\mathcal{O}\right)\rightarrow\mathscr{A}_{2}\left(\mathcal{O}\right)$ on the affine plane.* *Proof.* If $y\neq0$, the image of $t\left(x,y\right)$ by the bijection ([\[eq:correspondence\]](#eq:correspondence){reference-type="ref" reference="eq:correspondence"}) in the projective plane is given by $$\frac{1}{n\left(y\right)}\left(y,x*y\right)\longrightarrow\frac{1}{n\left(y\right)}\left(y,x*y,\frac{y*x*y}{n\left(y\right)};\frac{n\left(x*y\right)}{n\left(y\right)},1,n\left(y\right)\right),$$ and since $y*x*y=n\left(y\right)x$ and $n\left(x*y\right)=n\left(x\right)n\left(y\right)$, the image of $t\left(x,y\right)$ is in $\mathbb{R}\left(y,x*y,x;n\left(x\right),1,n\left(y\right)\right)$, which is the image under the triality collineation $\widetilde{t}$ of the projective point $\mathbb{R}\left(x,y,x*y;n\left(y\right),n\left(x\right),1\right)$. With the same procedure we find the other correspondences. ◻ ![[\[fig:Action-on-the\]]{#fig:Action-on-the label="fig:Action-on-the"}Action on the affine plane $\mathscr{A}_{2}\left(\mathcal{O}\right)$ of the triality collineation defined in ([\[eq:triality\]](#eq:triality){reference-type="ref" reference="eq:triality"}).](trialitycollineation2.png){#fig:Action-on-the} * 23*. As shown in Fig. ([3](#fig:Action-on-the){reference-type="ref" reference="fig:Action-on-the"}), the triality collineation $t$ sends the line at infinity $\left[\infty\right]$ into the line $\left[0\right]$, while the $y$ axis $\left[0\right]$ is sent into the $x$ axis $\left[0,0\right]$; finally the $x$ axis $\left[0,0\right]$ is sent into the line at infinity $\left[\infty\right]$. This phenomenon is dual to what happens, in the reverse order, for the three points $\left(0,0\right)$, $\left(0\right)$ and $\left(\infty\right)$. 
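The proof above can be replayed numerically in the para-octonionic model, where the same symmetric composition identities hold: the Veronese vector of the affine image $\frac{1}{n\left(y\right)}\left(y,x\bullet y\right)$ of a point $\left(x,y\right)$ is proportional, by the factor $n\left(y\right)$, to the cyclic permutation of the Veronese vector of $\left(x,y\right)$. A sketch (helper names are ours):

```python
import random

def cd_conj(x):
    """Cayley-Dickson conjugation: negate all but the first coordinate."""
    if len(x) == 1:
        return x[:]
    h = len(x) // 2
    return cd_conj(x[:h]) + [-t for t in x[h:]]

def cd_mult(x, y):
    """Cayley-Dickson product; on 8-vectors this is octonion multiplication."""
    if len(x) == 1:
        return [x[0] * y[0]]
    h = len(x) // 2
    a, b, c, d = x[:h], x[h:], y[:h], y[h:]
    left = [p - q for p, q in zip(cd_mult(a, c), cd_mult(cd_conj(d), b))]
    right = [p + q for p, q in zip(cd_mult(d, a), cd_mult(b, cd_conj(c)))]
    return left + right

def para(x, y):
    """Para-octonionic product x bullet y = conj(x) . conj(y)."""
    return cd_mult(cd_conj(x), cd_conj(y))

def n(x):
    """Quadratic norm n(x) = sum of squared coordinates."""
    return sum(t * t for t in x)

random.seed(3)
rnd = lambda: [random.uniform(-1, 1) for _ in range(8)]
scale = lambda c, u: [c * t for t in u]

x, y = rnd(), rnd()
u = scale(1.0 / n(y), y)            # affine image of (x, y): first coordinate
v = scale(1.0 / n(y), para(x, y))   # affine image of (x, y): second coordinate
w_img = u + v + para(u, v) + [n(v), n(u), 1.0]   # its Veronese vector (27-list)
w_perm = y + para(x, y) + x + [n(x), 1.0, n(y)]  # cyclic permutation of the original
# proportionality: n(y) * w_img == w_perm componentwise
assert max(abs(n(y) * p - q) for p, q in zip(w_img, w_perm)) < 1e-8
print("triality check passed")
```

The assertion encodes exactly the two identities used in the proof, $y*x*y=n\left(y\right)x$ and $n\left(x*y\right)=n\left(x\right)n\left(y\right)$, read with the para-octonionic product.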
# [\[sec:Three-realizations-of\]]{#sec:Three-realizations-of label="sec:Three-realizations-of"}Three realizations of the 16-dimensional Moufang plane In section [\[sec:Affine-and-projective\]](#sec:Affine-and-projective){reference-type="ref" reference="sec:Affine-and-projective"}, we have constructed three projective planes coming from three division algebras, by modifications of Veronese-type formulas. In the preceding section [\[sec:Collineations\]](#sec:Collineations){reference-type="ref" reference="sec:Collineations"}, we explicitly constructed the primary families of collineations for the Okubic plane and highlighted certain distinctive features of the plane. More specifically, we observed that the points $(0,0)$, $(x,x)$ and $(y,y)$ are not collinear (as they would be in projective planes obtained over Hurwitz algebras), and that the transformation $(x,y)\longrightarrow(y,x)$ does not constitute a collineation. Despite these distinctions, in Theorem [ 24](#thm:Isomorphism){reference-type="ref" reference="thm:Isomorphism"} we construct two collineations, i.e. ([\[eq:isomorfPOPO\]](#eq:isomorfPOPO){reference-type="ref" reference="eq:isomorfPOPO"}) and ([\[eq:isomorfPOPO-1\]](#eq:isomorfPOPO-1){reference-type="ref" reference="eq:isomorfPOPO-1"}), that prove the three planes to be projectively isomorphic. Furthermore, we will show that such collineations are isometries. As a consequence, there exists a complete equivalence among the Okubic, octonionic and para-octonionic planes that we will extensively discuss in section [\[sec:Discussions-and-verifications\]](#sec:Discussions-and-verifications){reference-type="ref" reference="sec:Discussions-and-verifications"}. ## Isomorphism between Okubic and the Cayley plane In the context of projective spaces, an isomorphism refers to a bijection between the points of the spaces that preserves the incidence relations. We have the following ** 24**.
*The Okubic projective plane $\mathcal{O}P^{2}$ is isomorphic to the octonionic projective plane $\mathbb{O}P^{2}$.* *Proof.* Consider the following bijective maps (see sec. [\[subsec:Conjugation-and-the\]](#subsec:Conjugation-and-the){reference-type="ref" reference="subsec:Conjugation-and-the"}) defined over the real Okubo algebra, given by $$\begin{aligned} x\longrightarrow\overline{x} & =\left\langle x,e\right\rangle e-x,\\ x\longrightarrow\tau\left(x\right) & =\left\langle x,e\right\rangle e-x*e,\end{aligned}$$ where $*$ is the Okubic product. Notice that $\tau$, as a bijective map over the octonions, is an order-three automorphism that realizes the Okubo algebra as a Petersson algebra, since $$\begin{aligned} x*y & =\tau\left(\overline{x}\right)\cdot\tau^{2}\left(\overline{y}\right),\end{aligned}$$ for every $x,y\in\mathbb{O}$. Let the Okubic projective plane be $\mathcal{O}P^{2}=\left\{ \mathscr{P}_{\mathcal{O}},\mathscr{L}_{\mathcal{O}},\mathscr{R}_{\mathcal{O}}\right\}$ and consider the bijective map $\Phi:\mathcal{O}P^{2}\longrightarrow\mathbb{O}P^{2}$ given by $$\Phi:\begin{cases} \left(x,y\right) & \longrightarrow\left(\tau^{2}\left(\overline{x}\right),y\right),\\ \left(s\right) & \longrightarrow\left(\tau\left(\overline{s}\right)\right),\\ \left(\infty\right) & \longrightarrow\left(\infty\right),\\ \left[s,t\right] & \longrightarrow\left[\tau\left(\overline{s}\right),t\right],\\ \left[c\right] & \longrightarrow\left[\tau^{2}\left(\overline{c}\right)\right],\\ \left[\infty\right] & \longrightarrow\left[\infty\right].
\end{cases}\label{eq:isomorfPOPO}$$ If we call the image incidence plane $\Phi\left(\mathcal{O}P^{2}\right)=\left\{ \mathscr{P}_{\mathcal{O}}^{\Phi},\mathscr{L}_{\mathcal{O}}^{\Phi},\mathscr{R}_{\mathcal{O}}^{\Phi}\right\}$, then we notice that the octonionic projective plane is given by $\mathbb{O}P^{2}=\left\{ \mathscr{P}_{\mathcal{O}}^{\Phi},\mathscr{L}_{\mathcal{O}}^{\Phi},\mathscr{R}_{\mathbb{O}}\right\} .$ To show the projective isomorphism and complete the theorem we need to show that $\mathscr{R}_{\mathbb{O}}\cong\mathscr{R}_{\mathcal{O}}^{\Phi}$, in other words that every point $\left(x,y\right)$ in the Okubic plane is incident to an Okubic line $\ell$ if and only if the image point $\left(x,y\right)^{\Phi}$ is incident to the image line $\ell^{\Phi}$, i.e. $\Phi\left(\left(x,y\right)\right)\in\Phi\left(\ell\right)$. By definition of the Okubic projective plane $$\begin{aligned} \left(x,y\right) & \in\left[s,t\right]=\left\{ y=s*x+t\right\} ,\\ \left(s\right) & \in\left[s,t\right],\\ \left(\infty\right) & \in\left[\infty\right],\end{aligned}$$ for every $x,y,s\in\mathcal{O}$. But, since the image of the line $\left[s,t\right]$ is $$\left[\tau\left(\overline{s}\right),t\right]=\left\{ \left(x,y\right)\in\mathbb{O}:y=\tau\left(\overline{s}\right)\cdot\tau^{2}\left(\overline{x}\right)+t\right\} ,$$ and since $$s*x=\tau\left(\overline{s}\right)\cdot\tau^{2}\left(\overline{x}\right),$$ we then have that $$\begin{aligned} \left(x,y\right)^{\Phi} & =\left(\tau^{2}\left(\overline{x}\right),y\right)\in\left[\tau\left(\overline{s}\right),t\right]=\left[s,t\right]^{\Phi},\\ \left(s\right)^{\Phi} & =\left(\tau\left(\overline{s}\right)\right)\in\left[\tau\left(\overline{s}\right),t\right]=\left[s,t\right]^{\Phi},\\ \left(\infty\right) & \in\left[\infty\right],\end{aligned}$$ which thus concludes the proof of the theorem.
◻ In the previous theorem we explicitly found an isomorphism between the completion of the affine plane over the Okubo algebra and that over the octonions. For practical reasons it is also useful to have the isomorphism $\widetilde{\Phi}:\mathcal{O}P^{2}\longrightarrow\mathbb{O}P^{2}$ developed in the Veronese formalism, i.e. between Okubic and octonionic Veronese vectors. The isomorphism between the Okubic Veronese vectors and the octonionic Veronese vectors is given by $$\begin{cases} \left(x,y,x*y;n\left(y\right),n\left(x\right),1\right)\longrightarrow\left(\tau^{2}\left(\overline{x}\right),y,y\cdot\overline{\tau^{2}\left(\overline{x}\right)};n\left(y\right),n\left(x\right),1\right),\\ \left(0,0,x;n\left(x\right),1,0\right)\longrightarrow\left(0,0,\tau^{2}\left(\overline{x}\right);n\left(x\right),1,0\right),\\ \left(0,0,0;1,0,0\right)\longrightarrow\left(0,0,0;1,0,0\right), \end{cases}$$ where the first vectors are Veronese under conditions ([\[eq:Okubo Ver-1\]](#eq:Okubo Ver-1){reference-type="ref" reference="eq:Okubo Ver-1"}) and ([\[eq:Okubo Ver-2\]](#eq:Okubo Ver-2){reference-type="ref" reference="eq:Okubo Ver-2"}), while the image vectors are Veronese under conditions ([\[eq:Ver-1-Oct\]](#eq:Ver-1-Oct){reference-type="ref" reference="eq:Ver-1-Oct"}) and ([\[eq:Ver-2-Oct\]](#eq:Ver-2-Oct){reference-type="ref" reference="eq:Ver-2-Oct"}), which involve conjugation and the octonionic product.
## Isomorphism with the para-octonionic plane Recall that a point of the para-octonionic affine plane $\mathscr{A}_{2}\left(p\mathbb{O}\right)$ is given by a pair of elements $\left(x,y\right)$ with $x,y\in p\mathbb{O}$, while a *line* of slope $s\in p\mathbb{O}$ and offset $t\in p\mathbb{O}$ is the set $\left[s,t\right]=\left\{ \left(x,s\bullet x+t\right):x\in p\mathbb{O}\right\}$ and, of course, we say that a point $\left(x,y\right)\in\mathscr{A}_{2}\left(p\mathbb{O}\right)$ is *incident* to a line $\left[s,t\right]\subset\mathscr{A}_{2}\left(p\mathbb{O}\right)$ if it belongs to such a line, i.e. $\left(x,y\right)\in\left[s,t\right]$. While the previous definitions yield the para-octonionic affine plane, a projective plane can be defined directly through the Veronese conditions ([\[eq:Ver-1-paraOct-1\]](#eq:Ver-1-paraOct-1){reference-type="ref" reference="eq:Ver-1-paraOct-1"}) and ([\[eq:Ver-2-paraOct-1\]](#eq:Ver-2-paraOct-1){reference-type="ref" reference="eq:Ver-2-paraOct-1"}), i.e., $$\begin{aligned} \lambda_{1}x_{1} & =x_{2}\bullet x_{3},\,\,\lambda_{2}x_{2}=x_{3}\bullet x_{1},\,\,\lambda_{3}x_{3}=x_{1}\bullet x_{2}\label{eq:Ver-1-paraOct}\\ n\left(x_{1}\right) & =\lambda_{2}\lambda_{3},\,n\left(x_{2}\right)=\lambda_{3}\lambda_{1},\,n\left(x_{3}\right)=\lambda_{1}\lambda_{2}.\label{eq:Ver-2-paraOct}\end{aligned}$$ An explicit isomorphism between the Okubic projective plane $\mathcal{O}P^{2}$ and the para-octonionic projective plane $p\mathbb{O}P^{2}$ is obtained by considering the bijective map $p\Phi:\mathcal{O}P^{2}\longrightarrow p\mathbb{O}P^{2}$ given by $$p\Phi:\begin{cases} \left(x,y\right) & \longrightarrow\left(\tau^{2}\left(x\right),y\right),\\ \left(s\right) & \longrightarrow\left(\tau\left(s\right)\right),\\ \left(\infty\right) & \longrightarrow\left(\infty\right),\\ \left[s,t\right] & \longrightarrow\left[\tau\left(s\right),t\right],\\ \left[c\right] & \longrightarrow\left[\tau^{2}\left(c\right)\right],\\ \left[\infty\right] &
\longrightarrow\left[\infty\right]. \end{cases}\label{eq:isomorfPOPO-1}$$ The proof that the map given in ([\[eq:isomorfPOPO-1\]](#eq:isomorfPOPO-1){reference-type="ref" reference="eq:isomorfPOPO-1"}) is a collineation adheres closely to the steps outlined in Theorem [ 24](#thm:Isomorphism){reference-type="ref" reference="thm:Isomorphism"}. This is expected, given the similarity between the maps. The sole distinction between para-octonions $p\mathbb{O}$ and octonions $\mathbb{O}$ is the presence of a paraunit in the former, as opposed to a unit in the latter, and the fact that while the former is merely flexible, the latter is alternative. ## Isometries Theorem [ 24](#thm:Isomorphism){reference-type="ref" reference="thm:Isomorphism"} and its para-octonionic counterpart ensure projective isomorphisms among the three planes $\mathcal{O}P^{2}$, $\mathbb{O}P^{2}$ and $p\mathbb{O}P^{2}$. If projective spaces are isomorphic, they share the same incidence relations between points and lines. However, this does not imply that the distances or angles between points are preserved. This is of high importance in our case since, according to Lie theory, it is well known that the collineation group of the octonionic plane is the minimally non-compact real form of the exceptional Lie group $\text{E}_{6}$, namely $\text{E}_{6(-26)}$, while the group of elliptic motions, i.e. isometries, over the octonionic projective plane is its maximal compact subgroup, namely $\text{F}_{4(-52)}$ (see [@corr; @Notes; @Octo]). By the map in ([\[eq:isomorfPOPO\]](#eq:isomorfPOPO){reference-type="ref" reference="eq:isomorfPOPO"}) we have an Okubic realization of both the Lie groups $\text{F}_{4(-52)}$ and $\text{E}_{6(-26)}$. Thus, we can write the homogeneous space presentation of the compact Cayley-Moufang plane as $\text{F}_{4(-52)}/\text{Spin}\left(9\right)$, a 16-dimensional symmetric coset. In fact, we have the following ** 25**.
*The map $\Phi$ defined in ([\[eq:isomorfPOPO\]](#eq:isomorfPOPO){reference-type="ref" reference="eq:isomorfPOPO"}) is an isometry.* *Proof.* By construction the Okubic plane comes equipped with the following distance: $$d_{\mathcal{O}}\left(p_{1},p_{2}\right)=n\left(x_{1}-x_{2}\right)^{2}+n\left(y_{1}-y_{2}\right)^{2},$$ with $p_{1}=\left(x_{1},y_{1}\right)\in\mathscr{A}_{2}\left(\mathcal{O}\right)$ and $p_{2}=\left(x_{2},y_{2}\right)\in\mathscr{A}_{2}\left(\mathcal{O}\right)$. By its very construction the octonionic plane comes with the following distance: $$d_{\mathbb{O}}\left(p_{1},p_{2}\right)=n\left(x_{1}-x_{2}\right)^{2}+n\left(y_{1}-y_{2}\right)^{2},$$ with $p_{1}=\left(x_{1},y_{1}\right)\in\mathscr{A}_{2}\left(\mathbb{O}\right)$ and $p_{2}=\left(x_{2},y_{2}\right)\in\mathscr{A}_{2}\left(\mathbb{O}\right)$. Then, the images by $\Phi$ of the two points $p_{1}$ and $p_{2}$ are given by $$p_{1}^{\Phi}=\left(\tau^{2}\left(\overline{x}_{1}\right),y_{1}\right),p_{2}^{\Phi}=\left(\tau^{2}\left(\overline{x}_{2}\right),y_{2}\right).$$ Therefore the octonionic distance between $p_{1}^{\Phi}$ and $p_{2}^{\Phi}$ is given by $$d_{\mathbb{O}}\left(p_{1}^{\Phi},p_{2}^{\Phi}\right)=n\left(\tau^{2}\left(\overline{x}_{1}\right)-\tau^{2}\left(\overline{x}_{2}\right)\right)^{2}+n\left(y_{1}-y_{2}\right)^{2},$$ but since all automorphisms of Hurwitz and para-Hurwitz algebras are isometries and $\tau^{2}$ is an automorphism, we have $$\begin{aligned} d_{\mathbb{O}}\left(p_{1}^{\Phi},p_{2}^{\Phi}\right) & =n\left(\tau^{2}\left(\overline{x}_{1}-\overline{x}_{2}\right)\right)^{2}+n\left(y_{1}-y_{2}\right)^{2}\\ & =n\left(\overline{x}_{1}-\overline{x}_{2}\right)^{2}+n\left(y_{1}-y_{2}\right)^{2}\\ & =n\left(x_{1}-x_{2}\right)^{2}+n\left(y_{1}-y_{2}\right)^{2}\\ & =d_{\mathcal{O}}\left(p_{1},p_{2}\right),\end{aligned}$$ thus completing the proof. ◻ A similar and straightforward proof can be given for the map $p\Phi$ in the para-octonionic case.
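The proof above relies only on the fact that the coordinate maps preserve the norm $n$. A minimal numeric sketch, with octonions modeled as 8-tuples of real coordinates and conjugation (a norm-preserving linear map) standing in for the automorphism $\tau^{2}$, which is not implemented here:

```python
def n(x):
    """Octonionic norm: the sum of squares of the 8 real coordinates."""
    return sum(c * c for c in x)

def conj(x):
    """Conjugation negates the 7 imaginary coordinates; it preserves n."""
    return [x[0]] + [-c for c in x[1:]]

def dist(p1, p2):
    """The distance used in the text: n(x1-x2)^2 + n(y1-y2)^2."""
    (x1, y1), (x2, y2) = p1, p2
    dx = [a - b for a, b in zip(x1, x2)]
    dy = [a - b for a, b in zip(y1, y2)]
    return n(dx) ** 2 + n(dy) ** 2

p1 = ([1, 2, 0, 0, 1, 0, 0, 3], [0, 1, 1, 0, 0, 2, 0, 0])
p2 = ([0, 1, 1, 1, 0, 0, 2, 0], [1, 0, 0, 3, 0, 0, 1, 1])
# applying a coordinate-wise norm-preserving map leaves the distance unchanged
q1 = (conj(p1[0]), p1[1])
q2 = (conj(p2[0]), p2[1])
assert dist(p1, p2) == dist(q1, q2)
```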
## Collineation groups As a corollary of Theorem [ 24](#thm:Isomorphism){reference-type="ref" reference="thm:Isomorphism"} we have that the Lie group of collineations of the Okubic projective plane is the following ** 26**. *The group of collineations of the Okubic projective plane is $\text{E}_{6\left(-26\right)}$.* *Proof.* Let $\Phi:\mathcal{O}P^{2}\longrightarrow\mathbb{O}P^{2}$ be the isomorphism in ([\[eq:isomorfPOPO\]](#eq:isomorfPOPO){reference-type="ref" reference="eq:isomorfPOPO"}) and $\Phi^{-1}:\mathbb{O}P^{2}\longrightarrow\mathcal{O}P^{2}$ its inverse, given by $$\begin{aligned} \Phi^{-1} & :\begin{cases} \left(x,y\right) & \longrightarrow\left(\tau\left(\overline{x}\right),y\right)\\ \left(s\right) & \longrightarrow\left(\tau^{2}\left(\overline{s}\right)\right)\\ \left(\infty\right) & \longrightarrow\left(\infty\right)\\ \left[s,t\right] & \longrightarrow\left[\tau^{2}\left(\overline{s}\right),t\right]\\ \left[c\right] & \longrightarrow\left[\tau\left(\overline{c}\right)\right]\\ \left[\infty\right] & \longrightarrow\left[\infty\right] \end{cases},\end{aligned}$$ where $x,y,s,t,c\in\mathbb{O}$. Then, since both $\Phi$ and $\Phi^{-1}$ send lines to lines, for every collineation $\gamma\in\text{Aut}\left(\mathbb{O}P^{2}\right)$ of the octonionic projective plane the composition of collineations $$\widetilde{\gamma}=\Phi^{-1}\gamma\Phi,\label{eq:isomorf-collinea}$$ is a collineation of the Okubic projective plane. Conversely, any collineation $\widetilde{\gamma}\in\text{Aut}\left(\mathcal{O}P^{2}\right)$ induces a collineation $$\gamma=\Phi\widetilde{\gamma}\Phi^{-1},$$ in $\text{Aut}\left(\mathbb{O}P^{2}\right)$. Moreover, it is clear that $$\widetilde{\gamma}\circ\widetilde{\delta}=\left(\Phi^{-1}\gamma\Phi\right)\left(\Phi^{-1}\delta\Phi\right)=\Phi^{-1}\left(\gamma\delta\right)\Phi,$$ so that the two collineation groups are isomorphic, $\text{Aut}\left(\mathbb{O}P^{2}\right)\cong\text{Aut}\left(\mathcal{O}P^{2}\right)$, i.e.
$\text{Aut}\left(\mathcal{O}P^{2}\right)\cong\text{E}_{6\left(-26\right)}$. ◻ For the sake of completeness, we now recover the collineation $\widetilde{\gamma}=\Phi^{-1}\gamma\Phi$ corresponding to the octonionic collineation $\gamma$ given by the reflection $\left(x,y\right)\longrightarrow\left(y,x\right)$. By ([\[eq:isomorf-collinea\]](#eq:isomorf-collinea){reference-type="ref" reference="eq:isomorf-collinea"}), the octonionic reflection given by switching coordinates corresponds on the Okubic plane to the collineation $$\widetilde{\gamma}:\left(x,y\right)\longrightarrow\left(\tau\left(\overline{y}\right),\tau^{2}\left(\overline{x}\right)\right).$$ Moreover, given that the group of elliptic motions of the octonionic plane is $\text{F}_{4\left(-52\right)}$, we have the following corollary of Theorem [ 25](#thm:The-map-Isometric){reference-type="ref" reference="thm:The-map-Isometric"} ** 27**. *The group of elliptic motions of the Okubic projective plane is $\text{F}_{4\left(-52\right)}$.* Corollaries [ 26](#cor:Collineations){reference-type="ref" reference="cor:Collineations"} and [ 27](#cor:Isometry){reference-type="ref" reference="cor:Isometry"} state the existence of an Okubic geometric realization of the exceptional Lie groups $\text{E}_{6\left(-26\right)}$ and $\text{F}_{4\left(-52\right)}$ which, to our knowledge, had never been pointed out before. ## [\[subsec:Stabilizer-of-a\]]{#subsec:Stabilizer-of-a label="subsec:Stabilizer-of-a"}$G_{2}$ as stabilizer of a quadrangle Another exceptional Lie group with direct geometrical significance in the octonionic plane is $G_{2\left(-14\right)}$. This exceptional Lie group is recognized as the group of automorphisms of the octonions, denoted as $\text{Aut}\left(\mathbb{O}\right)=G_{2\left(-14\right)}$. However, this is also the group $\Gamma\left(\diamondsuit,\mathbb{O}\right)$ of collineations that fix every point of a quadrangle [@corr; @Notes; @Octo; @Compact; @Projective] of the projective octonionic plane.
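The key step in the proof of Corollary 26 is that conjugation by $\Phi$ preserves composition, hence yields a group isomorphism. A toy sketch where bijections of a five-element set stand in for collineations (an illustration only, not the actual geometry):

```python
def compose(f, g):
    """Composition of maps: (f o g)(p) = f(g(p))."""
    return lambda p: f(g(p))

points = list(range(5))
Phi = lambda p: (p + 1) % 5          # a bijection, playing the role of Phi
Phi_inv = lambda p: (p - 1) % 5      # its inverse
conj = lambda gamma: compose(Phi_inv, compose(gamma, Phi))

g1 = lambda p: (2 * p) % 5           # two sample "collineations"
g2 = lambda p: (p + 3) % 5
# conjugation preserves composition: (Phi^-1 g1 Phi)(Phi^-1 g2 Phi) = Phi^-1 (g1 g2) Phi
lhs = compose(conj(g1), conj(g2))
rhs = conj(compose(g1, g2))
assert all(lhs(p) == rhs(p) for p in points)
```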
Given Theorems [ 24](#thm:Isomorphism){reference-type="ref" reference="thm:Isomorphism"} and [ 25](#thm:The-map-Isometric){reference-type="ref" reference="thm:The-map-Isometric"}, our objective is to identify an Okubic realization of $G_{2\left(-14\right)}$. To this end, we examine the subgroups of collineations $\Gamma\left(\triangle,\mathcal{O}\right)$ and $\Gamma\left(\diamondsuit,\mathcal{O}\right)$. Specifically, the former is the group of collineations that fix every point of the triangle $\triangle=\left\{ \left(0,0\right),\left(0\right),\left(\infty\right)\right\}$, while the latter is the group that fixes every point of the quadrangle $\diamondsuit=\left\{ \left(0,0\right),\left(e,e\right),\left(0\right),\left(\infty\right)\right\}$. ** 28**. *The group $\Gamma\left(\triangle,\mathcal{O}\right)$ of collineations that fix every point of $\triangle$ consists of the transformations of the form $$\begin{aligned} \left(x,y\right) & \mapsto\left(A\left(x\right),B\left(y\right)\right)\\ \left(s\right) & \mapsto\left(C\left(s\right)\right)\\ \left(\infty\right) & \mapsto\left(\infty\right)\end{aligned}$$ where $A,B$ and $C$ are automorphisms with respect to the sum over $\mathcal{O}$ and, with respect to the multiplication, satisfy $$B\left(s*x\right)=C\left(s\right)*A\left(x\right).\label{eq:B(xs)=00003DC(s)A(x)}$$* *Proof.* A collineation that fixes $\left(0,0\right)$, $\left(0\right)$ and $\left(\infty\right)$ also leaves the $x$-axis and the $y$-axis invariant. Moreover, since the incidence relations must be preserved, it maps all lines parallel to the $x$-axis and the $y$-axis to lines parallel to the $x$-axis and the $y$-axis. Then, the first coordinate is the image of a function that does not depend on $y$ and the second coordinate is the image of a function that does not depend on $x$, i.e. $\left(x,y\right)\mapsto\left(A\left(x\right),B\left(y\right)\right)$ and $\left(s\right)\mapsto\left(C\left(s\right)\right)$.
Now consider the image of a point on the line $\left[s,t\right]$. The point is of the form $\left(x,s*x+t\right)$ and its image is given by $$\left(x,s*x+t\right)\mapsto\left(A\left(x\right),B\left(s*x+t\right)\right).$$ For this to be a collineation, the images of the points of $\left[s,t\right]$ must all belong to a line which, setting $x=0$, is seen to pass through the points $p_{1}=\left(0,B\left(t\right)\right)$ and $p_{2}=\left(C\left(s\right)\right)$, i.e. the line $\left[C\left(s\right),B\left(t\right)\right]$. Every point $\left(A\left(x\right),B\left(s*x+t\right)\right)$ lying on the line through $p_{1}$ and $p_{2}$ must satisfy the condition $$B\left(s*x+t\right)=C\left(s\right)*A\left(x\right)+B\left(t\right).\label{eq:Condition B=00003D00003DCA}$$ Given ([\[eq:Condition B=00003D00003DCA\]](#eq:Condition B=00003D00003DCA){reference-type="ref" reference="eq:Condition B=00003D00003DCA"}), if $B$ is an automorphism with respect to the sum over $\mathcal{O}$, then $B\left(s*x\right)=C\left(s\right)*A\left(x\right)$. Conversely, if $B\left(s*x\right)=C\left(s\right)*A\left(x\right)$ holds, then $B\left(s*x+t\right)=B\left(s*x\right)+B\left(t\right)$, and $B$ is an automorphism with respect to the sum. ◻ A further corollary of Theorem [ 24](#thm:Isomorphism){reference-type="ref" reference="thm:Isomorphism"} is that the Okubic projective plane, being a realization of the 16-dimensional Moufang plane, has a collineation group transitive on quadrangles. Without loss of generality we can thus consider the quadrangle $\diamondsuit$ given by the points $\left(0,0\right)$, $\left(e,e\right)$, $\left(0\right)$ and $\left(\infty\right)$, that is $\diamondsuit=\triangle\cup\left\{ \left(e,e\right)\right\}$, and consider the collineations that fix the set $\diamondsuit$. For this purpose, it is important to note that a relation analogous to ([\[eq:B(xs)=00003DC(s)A(x)\]](#eq:B(xs)=00003DC(s)A(x)){reference-type="ref" reference="eq:B(xs)=00003DC(s)A(x)"}) holds in the case of both para-octonions and octonions.
Specifically, in the octonionic case, we have $$B\left(s\cdot x\right)=C\left(s\right)\cdot A\left(x\right),$$ for all $x,s\in\mathbb{O}$. This is particularly relevant since, in the case of octonions, the relation simplifies if we require the collineations to also stabilize the point $\left(e,e\right)$, i.e. to stabilize the non-degenerate quadrangle $\diamondsuit=\triangle\cup\left\{ \left(e,e\right)\right\} .$ In this scenario, fixing $\left(e,e\right)$ forces $A\left(e\right)=B\left(e\right)=e$, and evaluating the relation at $x=e$ and at $s=e$ then yields $A=B=C$, so that the previous formula transforms into $$A\left(s\cdot x\right)=A\left(s\right)\cdot A\left(x\right),$$ which implies that the stabilizer of the non-degenerate quadrangle is isomorphic to the automorphism group of the octonions, denoted as $G_{2\left(-14\right)}.$ Switching back to the Okubic algebra from the previous result, we then arrive at the following statement: ** 29**. *The exceptional Lie group $G_{2\left(-14\right)}$ has the following realisation* *$$\begin{array}{cc} G_{2\left(-14\right)}\cong\left\{ \left(A,B,C\right)\in\text{Spin}\left(8\right):\right. & B\left(s*x\right)=C\left(s\right)*A\left(x\right),\\ & \left.A\left(e\right)=B\left(e\right)=C\left(e\right)=e\right\} , \end{array}\label{eq:G2-Ok}$$ or, equivalently* *$$\begin{array}{cc} G_{2\left(-14\right)}\cong\left\{ \left(A,B,C\right)\in\text{Tri}\left(\mathcal{O}\right):\right. & B\left(e\ast x\right)=e\ast A\left(x\right),\\ & \left.B\left(x\ast e\right)=C\left(x\right)\ast e\right\} , \end{array}\label{eq:G2-Ok-1}$$ where $x,s\in\mathcal{O}$ and $e*e=e\in\mathcal{O}$.* We give here two independent proofs of the same statement. The first is a corollary of the isomorphism in Theorem [ 24](#thm:Isomorphism){reference-type="ref" reference="thm:Isomorphism"}, relying on the fact that the stabilizer of a non-degenerate quadrangle on the octonionic projective plane is $\text{G}_{2\left(-14\right)}.$ The second is a proof based on Lie theory, which does not rely on knowledge of the geometry of the octonionic plane.
*Proof.* From the previous proposition the stabilizer of the triangle $\triangle$ is given by the group of triples $\left(A,B,C\right)\in\text{Spin}\left(8\right)$ such that $$B\left(s*x\right)=C\left(s\right)*A\left(x\right),$$ with $x,s\in\mathcal{O}$. To stabilize $\diamondsuit=\triangle\cup\left\{ \left(e,e\right)\right\}$, we have to impose $$\left(e,e\right)\mapsto\left(A\left(e\right),B\left(e\right)\right)=\left(e,e\right),$$ and since $e*e=e$, the relation $B\left(s*x\right)=C\left(s\right)*A\left(x\right)$ gives $e=B\left(e*e\right)=C\left(e\right)*A\left(e\right)=C\left(e\right)*e$, whence $C\left(e\right)=e$ and $A\left(e\right)=B\left(e\right)=C\left(e\right)=e$, obtaining the RHS of ([\[eq:G2-Ok\]](#eq:G2-Ok){reference-type="ref" reference="eq:G2-Ok"}). Knowing that the stabilizer of a non-degenerate quadrangle of the compact 16-dimensional Moufang plane is $\text{G}_{2\left(-14\right)}$, we then obtain the identification with the LHS of ([\[eq:G2-Ok\]](#eq:G2-Ok){reference-type="ref" reference="eq:G2-Ok"}). ◻ We now proceed with the second proof of Theorem 29. *Proof.* Recall that the triality group Tri$\left(\mathcal{O}\right)$ of the real Okubo algebra $\mathcal{O}$ is defined as $$\text{Tri}\left(\mathcal{O}\right):=\left\{ A,B,C\in\text{Spin}\left(\mathcal{O}\right):B\left(s\ast x\right)=C\left(s\right)\ast A\left(x\right),~\forall s,x\in\mathcal{O}\right\} ,\label{def-Tri}$$ and that, as proved in [@KMRT], $\text{Tri}\left(\mathcal{O}\right)\simeq\text{Spin}\left(8\right)\simeq\text{Spin}\left(\mathcal{O}\right).$ Let us consider the action of triality ([\[def-Tri\]](#def-Tri){reference-type="ref" reference="def-Tri"}) in three cases: the first where $s=x=e$, for which $$B\left(e\right)=C\left(e\right)\ast A\left(e\right),\label{eq:1}$$ the second with $s\in\mathcal{O}$ and fixed $x=e$, i.e., $$B\left(s*e\right)=C\left(s\right)\ast A\left(e\right),\label{eq:2}$$ finally, the case $x\in\mathcal{O}$ and $s=e$ for which $$B\left(e*x\right)=C\left(e\right)\ast A\left(x\right).\label{eq:3}$$ Now, we want to determine the subgroup of Tri$\left(\mathcal{O}\right)$ defined by the
following constraints $$\begin{aligned} B\left(e\ast x\right) & = & e\ast A\left(x\right),\label{con-1}\\ B\left(x\ast e\right) & = & C\left(x\right)\ast e.\label{con-2}\end{aligned}$$ Since $\mathcal{O}$ is a division algebra, ([\[eq:3\]](#eq:3){reference-type="ref" reference="eq:3"}) and the constraint ([\[con-1\]](#con-1){reference-type="ref" reference="con-1"}), as well as ([\[eq:2\]](#eq:2){reference-type="ref" reference="eq:2"}) and the constraint ([\[con-2\]](#con-2){reference-type="ref" reference="con-2"}), imply $$C\left(e\right)=e=A\left(e\right),\label{conss}$$ which in turn implies, by ([\[eq:1\]](#eq:1){reference-type="ref" reference="eq:1"}), that $B\left(e\right)=e\ast e=e.$ Thus, within any subgroup of Tri$\left(\mathcal{O}\right)\simeq\text{Spin}\left(8\right)$, one can reformulate the constraints ([\[con-1\]](#con-1){reference-type="ref" reference="con-1"}) and ([\[con-2\]](#con-2){reference-type="ref" reference="con-2"}) as ([\[conss\]](#conss){reference-type="ref" reference="conss"}). A well-known theorem by Dynkin (see Th. 1.5 of [@Dynkin]) states that a maximal (and non-symmetric) embedding of $\text{SU}_{3}=$Aut$\left(\mathcal{O}\right)$ into $\text{Spin}\left(8\right)$ exists such that all 8-dimensional irreducible representations (henceforth, "irreprs.") of $\text{Spin}\left(8\right)$ stay irreducible in $\text{SU}_{3}$, all reducing to the same adjoint representation. By using the Dynkin labels to identify the representations, it holds that $$\begin{gathered} \text{Spin}\left(8\right)\underset{\text{\text{max,ns}}}{\supset}\text{SU}_{3},\\ \begin{array}{l} \left(1,0,0,0\right)=\left(1,1\right),\\ \left(0,0,0,1\right)=\left(1,1\right),\\ \left(0,0,1,0\right)=\left(1,1\right), \end{array}\nonumber \end{gathered}$$ where we adopted the conventions of [@Yamatsu], and the subscripts "max", "s" and "ns" respectively stand for maximal, symmetric and non-symmetric.
Thus, for the triality of Spin$\left(8\right)$, the adjoint irrepr. $\left(1,1\right)$ of $\text{SU}_{3}$, for which the basis ([\[eq:definizione i ottonioniche\]](#eq:definizione i ottonioniche){reference-type="ref" reference="eq:definizione i ottonioniche"}) of the Okubo algebra $\mathcal{O}$ provides a realization $$\mathcal{O}\simeq\left\{ e,\text{i}_{1},..,\text{i}_{7}\right\} \simeq\left(1,1\right)\text{~of~}\text{SU}_{3},$$ can be mapped to any of the three 8-dimensional irreprs. of $\text{Spin}\left(8\right)$ and, with no loss of generality, up to triality of $\text{Spin}\left(8\right)$, one can identify $$\begin{array}{l} C\left(\mathcal{O}\right):=\left\{ C\left(e\right),C\left(\text{i}_{1}\right),..,C\left(\text{i}_{7}\right)\right\} \simeq\left(1,0,0,0\right)~\text{of~Spin}\left(8\right),\\ A\left(\mathcal{O}\right):=\left\{ A\left(e\right),A\left(\text{i}_{1}\right),..,A\left(\text{i}_{7}\right)\right\} \simeq\left(0,0,0,1\right)~\text{of~Spin}\left(8\right),\\ B\left(\mathcal{O}\right):=\left\{ B\left(e\right),B\left(\text{i}_{1}\right),..,B\left(\text{i}_{7}\right)\right\} \simeq\left(0,0,1,0\right)~\text{of~Spin}\left(8\right). \end{array}$$ We now implement the first constraint of ([\[conss\]](#conss){reference-type="ref" reference="conss"}), namely $C\left(e\right)=e$.
The largest subgroup of $\text{Spin}\left(8\right)$ allowing $C\left(e\right)=e$ is its maximal (and symmetric) subgroup Spin$\left(7\right)$, for which it holds that $$\begin{gathered} \text{Spin}(8)\underset{\text{\text{max,s}}}{\supset}\text{Spin}\left(7\right)\ni A,B,C:\left\{ \begin{array}{l} \left(1,0,0,0\right)=\underset{C\left(e\right)=e}{\left(0,0,0\right)}\oplus\underset{\left\{ C\left(\text{i}_{1}\right),..,C\left(\text{i}_{7}\right)\right\} }{\left(1,0,0\right)},\\ \left(0,0,0,1\right)=\underset{\left\{ A\left(e\right),A\left(\text{i}_{1}\right),..,A\left(\text{i}_{7}\right)\right\} }{\left(0,0,1\right)},\\ \left(0,0,1,0\right)=\underset{\left\{ B(e),B(\text{i}_{1}),..,B(\text{i}_{7})\right\} }{\left(0,0,1\right)}, \end{array}\right.\end{gathered}$$ where $\left(1,0,0\right)$ is the 7-dimensional fundamental (*vector*) irrepr. of $\text{Spin}\left(7\right)$ and $\left(0,0,1\right)$ denotes its 8-dimensional (*spinor*) irrepr. Next, one must impose the second constraint of ([\[conss\]](#conss){reference-type="ref" reference="conss"}). This can be implemented by a further symmetry breaking implying $A\left(e\right)=e$, which, together with $C\left(e\right)=e$, also implies that $B\left(e\right)=e$.
The largest subgroup of $\text{Spin}\left(7\right)$ allowing for an action of this kind is $G_{2(-14)}$, which is a maximal and non-symmetric subgroup of $\text{Spin}\left(7\right)$ itself: $$\begin{gathered} \text{Spin}\left(7\right)\underset{\text{\text{max,ns}}}{\supset}G_{2(-14)}\ni A,B,C:\\ \left\{ \begin{array}{l} \left(0,0,0\right)=\underset{C\left(e\right)=e}{\left(0,0\right)},\\ \left(1,0,0\right)=\underset{\left\{ C\left(\text{i}_{1}\right),..,C\left(\text{i}_{7}\right)\right\} }{\left(0,1\right)},\\ \left(0,0,1\right)=\underset{A\left(e\right)=e}{\left(0,0\right)}\oplus\underset{\left\{ A\left(\text{i}_{1}\right),..,A\left(\text{i}_{7}\right)\right\} }{\left(0,1\right)},\\ \left(0,0,1\right)=\underset{B\left(e\right)=e}{\left(0,0\right)}\oplus\underset{\left\{ B\left(\text{i}_{1}\right),..,B\left(\text{i}_{7}\right)\right\} }{\left(0,1\right)}. \end{array}\right.\end{gathered}$$ Thus, the (largest) subgroup of Tri$\left(\mathcal{O}\right)\simeq\text{Spin}\left(8\right)$ defined by the constraints ([\[conss\]](#conss){reference-type="ref" reference="conss"}) (or, equivalently, by the constraints ([\[con-1\]](#con-1){reference-type="ref" reference="con-1"}) and ([\[con-2\]](#con-2){reference-type="ref" reference="con-2"})) is $G_{2(-14)}$, which is next-to-maximal (and symmetric) in $\text{Spin}\left(8\right)$, being determined by the chain of two maximal embeddings: $$\text{Spin}\left(8\right)\supset_{\text{max,s}}\text{Spin}\left(7\right)\supset_{\text{max,ns}}G_{2(-14)}.$$ ◻ # [\[sec:Discussions-and-verifications\]]{#sec:Discussions-and-verifications label="sec:Discussions-and-verifications"}Discussions and verifications Let $A$ be an algebra, and let $x,y,z$ be elements of this algebra. It is well known that the validity of the Moufang identities ([\[eq:MoufangIdent3\]](#eq:MoufangIdent3){reference-type="ref" reference="eq:MoufangIdent3"}) in the algebra, i.e.
$$\begin{aligned} \left(\left(x\cdot y\right)\cdot x\right)\cdot z & =x\cdot\left(y\cdot\left(x\cdot z\right)\right),\\ \left(\left(z\cdot x\right)\cdot y\right)\cdot x & =z\cdot\left(x\cdot\left(y\cdot x\right)\right),\\ \left(x\cdot y\right)\cdot\left(z\cdot x\right) & =x\cdot\left(\left(y\cdot z\right)\cdot x\right),\label{eq:MoufangIdent3-1}\end{aligned}$$ is linked with the Moufang properties of the projective plane over the algebra [@Mou35; @Compact; @Projective Sec. 12.15]. Furthermore, as an immediate corollary, if the algebra is unital, then, setting $y=1$, the Moufang identities imply alternativity, resulting in $$\begin{aligned} \left(x\cdot x\right)\cdot z & =x\cdot\left(x\cdot z\right),\\ \left(z\cdot x\right)\cdot x & =z\cdot\left(x\cdot x\right),\end{aligned}$$ for every $x,z\in A$. In fact, the relation between alternative rings and Moufang planes is a one-to-one correspondence [@Pi75 p. 160][@HP p. 143], and is so deep that Moufang planes are also called "alternative planes" [@Ste72]. Considering this background, the isomorphism between the Okubic projective plane and the Cayley plane appears counterintuitive. The Okubic algebra is non-alternative and non-unital, making it markedly different from the often-considered octonionic algebra used for realizing the 16-dimensional Moufang plane. Notably, the Moufang identities in ([\[eq:MoufangIdent3-1\]](#eq:MoufangIdent3-1){reference-type="ref" reference="eq:MoufangIdent3-1"}) do not hold in the Okubo algebra. Hence, the emergence of a Moufang plane from a projective plane over the Okubo algebra demands a thorough explanation. In fact, the Moufang property of a plane is tied to the alternativity of its associated planar ternary ring (PTR). Typically, this ternary ring is linear (see below) and thus isomorphic to the algebra from which the plane is defined. However, as we will show in this section, the Okubic case is not so simple.
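The Moufang identities, and the alternativity they imply in the unital case, can be checked numerically for the octonions. A minimal sketch building the octonions via Cayley-Dickson doubling (the particular doubling convention used here is an assumption; any standard convention yields an algebra isomorphic to $\mathbb{O}$):

```python
import random

def conj(x):
    """Cayley-Dickson conjugate: conj((a, b)) = (conj(a), -b)."""
    if len(x) == 1:
        return [x[0]]
    h = len(x) // 2
    return conj(x[:h]) + [-c for c in x[h:]]

def mul(x, y):
    """Cayley-Dickson product: (a,b)(c,d) = (ac - conj(d)b, da + b conj(c))."""
    if len(x) == 1:
        return [x[0] * y[0]]
    h = len(x) // 2
    a, b, c, d = x[:h], x[h:], y[:h], y[h:]
    left = [p - q for p, q in zip(mul(a, c), mul(conj(d), b))]
    right = [p + q for p, q in zip(mul(d, a), mul(b, conj(c)))]
    return left + right

random.seed(1)
rnd = lambda: [random.uniform(-1, 1) for _ in range(8)]
for _ in range(20):
    x, y, z = rnd(), rnd(), rnd()
    # first Moufang identity: ((x y) x) z = x (y (x z))
    lhs = mul(mul(mul(x, y), x), z)
    rhs = mul(x, mul(y, mul(x, z)))
    assert max(abs(p - q) for p, q in zip(lhs, rhs)) < 1e-9
    # left alternativity, the y = 1 specialization: (x x) z = x (x z)
    assert max(abs(p - q) for p, q in zip(mul(mul(x, x), z),
                                          mul(x, mul(x, z)))) < 1e-9
```

The same test run on the Okubo product would fail, which is the tension the section goes on to resolve.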
Since the Okubic algebra lacks an identity, it is not a ring and therefore cannot be employed to coordinatize the Okubic projective plane. Clearly the best candidate for such a coordinatization is the ring of octonions $\mathbb{O}$.

## Coordinatizing the Okubic plane with Octonions

In incidence geometry, relationships between incidence planes and algebraic structures are developed after a process of relabeling called coordinatisation that involves a set $\mathscr{C}$ containing the symbols $0,1\in\mathscr{C}$ and not containing the symbol $\infty$. In most cases, when dealing with unital algebras, the set $\mathscr{C}$ is just the original algebra used for the definition of the plane; but since the Okubic algebra is not unital, the set of symbols $\mathscr{C}$ cannot be $\mathcal{O}$. In our case, a natural candidate for the set $\mathscr{C}$ is clearly the ring of octonions $\mathbb{O}$. To coordinatise the Okubic plane with octonionic coordinates we consider the non-degenerate quadrangle $\diamondsuit=\left\{ \left(0,0\right),\left(e,e\right),\left(0\right),\left(\infty\right)\right\}$ of the Okubic projective plane $\mathcal{O}P^{2}$ and use the ring of octonions $\mathbb{O}$ for its coordinatisation so that, in the new coordinates, the quadrangle is $\left\{ \left(0,0\right),\left(1,1\right),\left(0\right),\left(\infty\right)\right\}$. From the general theory we know [@Fau14 Cor. 3.4] that up to isomorphism there is a unique standard coordinatisation that maps $\diamondsuit$ into $\left\{ \left(0,0\right),\left(1,1\right),\left(0\right),\left(\infty\right)\right\}$. In fact, such a coordinatisation is easily obtained by sending the Okubic elements to their respective octonionic representatives following the previous deformation of the product (see Tab.
[3](#tab:Oku-Para-Octo){reference-type="ref" reference="tab:Oku-Para-Octo"}), so that $e\in\mathcal{O}$ is sent to $1\in\mathbb{O}$ and the other elements of the base $\left\{ \text{i}_{1},...,\text{i}_{7}\right\}$ in ([\[eq:definizione i ottonioniche\]](#eq:definizione i ottonioniche){reference-type="ref" reference="eq:definizione i ottonioniche"}) are sent to multiples of the imaginary units of the octonions $\mathbb{O}$. Once the relabeling process called coordinatization is done, we can define a unique ternary ring $\theta\left(s,x,t\right)$ that algebraically encodes the geometrical properties of the incidence plane. Indeed, we define the planar ternary ring (PTR) by the incidence rules of the plane, so that $$\theta\left(s,x,t\right)=y,\,\,\,\text{iff \,\,}\left(x,y\right)\in\left[s,t\right],$$ for all $x,y,s,t\in\mathscr{C}$. For this ternary ring we then define an *associated product* and an *associated sum*, i.e. $$\begin{aligned} sx & \coloneqq\theta\left(s,x,0\right),\\ x+t & \coloneqq\theta\left(1,x,t\right).\end{aligned}$$ The algebraic properties of the associated product and of the associated sum are then studied in order to deduce geometrical properties of the coordinatized projective plane. Note that in the case of the octonionic projective plane $\mathbb{O}P^{2}$ the product and the sum of $\theta$ coincide with those defined over the algebra of octonions $\left(\mathbb{O},+,\cdot\right)$, so that $$\theta\left(s,x,t\right)=sx+t=s\cdot x+t.$$ When this happens, the planar ternary ring is called *linear* [@Compact; @Projective Sec. 22.4]. Unfortunately, this is not the case for the Okubic projective plane: the ternary ring derived by coordinatising the Okubic projective plane with the ring of octonions $\mathbb{O}$ is not linear, since $$\theta\left(s,x,t\right)=sx+t\neq s*x+t,$$ as one might easily expect, since the octonionic product is not the Okubic product.
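The linearity condition above can be made concrete with a toy check in the real affine plane, where the PTR is linear, contrasted with a contrived non-linear ternary operation; all names here are illustrative assumptions, unrelated to the notebook functions discussed in this section:

```python
import random

def theta(s, x, t):
    # PTR of the real affine plane: (x, y) lies on line [s, t] iff y = theta(s, x, t)
    return s * x + t

assoc_prod = lambda s, x: theta(s, x, 0)   # s x := theta(s, x, 0)
assoc_sum  = lambda x, t: theta(1, x, t)   # x + t := theta(1, x, t)

random.seed(1)
samples = [tuple(random.uniform(-5, 5) for _ in range(3)) for _ in range(100)]

# Linearity: theta(s, x, t) == (s x) + t on all samples
linear = all(abs(theta(s, x, t) - assoc_sum(assoc_prod(s, x), t)) < 1e-12
             for s, x, t in samples)

# A contrived non-linear ternary operation (NOT a genuine PTR, purely
# illustrative): its associated product and sum fail to reconstruct it
theta_nl = lambda s, x, t: s * x + t + 0.1 * s * t
nl_prod = lambda s, x: theta_nl(s, x, 0)
nl_sum  = lambda x, t: theta_nl(1, x, t)
nonlinear = any(abs(theta_nl(s, x, t) - nl_sum(nl_prod(s, x), t)) > 1e-9
                for s, x, t in samples)
print(linear, nonlinear)  # prints True True
```

The Okubic case behaves like the second operation: the associated product and sum of its octonionic PTR do not recombine linearly into $\theta$.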
In summary, while the planar ternary ring is alternative (thereby not contradicting the one-to-one relationship between Moufang planes and alternative rings), the originating algebra is non-alternative. This distinction arises because the ternary ring derived from coordinatization is nonlinear.

*Remark 30*. It is worth noting that while we are relabeling the Okubic coordinates with octonions in order to obtain a planar ternary ring, this process does not at all constitute a projective isomorphism between the Okubic plane $\mathcal{O}P^{2}$ and the octonionic projective plane $\mathbb{O}P^{2}$. Indeed, consider any map $\phi:\mathcal{O}\longrightarrow\mathbb{O}$ such that $$\begin{array}{c} 0_{\mathcal{O}}\longrightarrow0_{\mathbb{O}},\\ e_{\mathcal{O}}\longrightarrow1_{\mathbb{O}},\\ x_{\mathcal{O}}\longrightarrow\phi\left(x\right). \end{array}\label{eq:map}$$ Then consider on the octonionic plane $\mathbb{O}P^{2}$ the three points $\left(0_{\mathbb{O}},0_{\mathbb{O}}\right),\left(1_{\mathbb{O}},1_{\mathbb{O}}\right),\left(\phi\left(x\right),\phi\left(x\right)\right)$. These three points are collinear, belonging to the same line $\left[1,0\right]=\left\{ x\in\mathbb{O}:\left(x,1\cdot x\right)\right\}$, i.e. the line passing through the origin with slope $s=1$. Nevertheless, in the Okubic plane $\mathcal{O}P^{2}$ the three points $\left(0_{\mathcal{O}},0_{\mathcal{O}}\right),\left(e_{\mathcal{O}},e_{\mathcal{O}}\right),\left(x,x\right)$ are not collinear. Indeed, the only line joining $\left(0_{\mathcal{O}},0_{\mathcal{O}}\right)$ and $\left(e_{\mathcal{O}},e_{\mathcal{O}}\right)$ is $\left[e_{\mathcal{O}},0\right]$, i.e. $\left(0_{\mathcal{O}},0_{\mathcal{O}}\right),\left(e_{\mathcal{O}},e_{\mathcal{O}}\right)\in\left[e_{\mathcal{O}},0\right]=\left\{ x\in\mathcal{O}:\left(x,e*x\right)\right\}$, while the point $\left(x,x\right)\notin\left[e_{\mathcal{O}},0\right]$ since $x\neq e*x$ for any $x\neq e$.
We thus have that any relabeling process of the Okubic algebra with the octonionic algebra does not yield a collineation and, in fact, changes the incidence rules of the plane.

## Direct verification of the "Little Desargues Theorem"

It is well-known that the *Little Desargues Theorem* is valid in every Moufang plane. This theorem is a weaker version of the *Desargues Theorem*, which holds true for every Moufang plane over an associative algebra. ![[\[fig:Little-Desargues-configuration\]]{#fig:Little-Desargues-configuration label="fig:Little-Desargues-configuration"}Little Desargues configuration: two triangles $a,b,c$ and $a',b',c'$ are perspective, i.e. the lines $\overline{aa'}$, $\overline{bb'}$ and $\overline{cc'}$ intersect at the same point $p$, which is the origin of the perspectivity; then the points of intersection of corresponding sides all lie on one line $\ell$, which is the axis of the perspectivity. In the Little Desargues configuration the perspectivity that relates the two triangles is also an elation, thus the center of the perspectivity $p$ is incident to the axis $\ell$.](LittleDesarguesConfiguration.png){#fig:Little-Desargues-configuration} The Desargues theorem states that if two triangles $a,b,c$ and $a',b',c'$ are perspective, i.e. the lines $\overline{aa'}$, $\overline{bb'}$ and $\overline{cc'}$ intersect at the same point $p$, which is the origin of the perspectivity, then the points of intersection of corresponding sides all lie on one line $\ell$, termed the axis of the perspectivity. However, this theorem is not valid in non-associative Moufang planes. A special case arises when the point $p$ also lies on the axis $\ell$, making the perspectivity an elation (see sec.
[\[sec:Elations\]](#sec:Elations){reference-type="ref" reference="sec:Elations"}); this case is valid in all Moufang planes and is known as the Little Desargues Theorem (see Figure [4](#fig:Little-Desargues-configuration){reference-type="ref" reference="fig:Little-Desargues-configuration"}). Rather than presenting a formal proof of the theorem's validity (which is already established for any Moufang plane), we opted for a numerical verification using a Wolfram Mathematica notebook now available at the repository `https://github.com/DCorradetti/OkuboAlgebras`. The notebook is fully documented, with notation consistent with that used in this article, so that it can easily be used to verify all calculations of this article involving octonions, para-octonions and the real Okubic algebra. To validate the Little Desargues Theorem, we first represented the real Okubic algebra by three-by-three complex Hermitian matrices endowed with the Okubic product in ([\[eq:product Ok\]](#eq:product Ok){reference-type="ref" reference="eq:product Ok"}). Next, we developed the following Mathematica functions:

- `sLine[x_{1},y_{1},x_{2},y_{2}]`: Computes the slope of the line connecting points $\left(x_{1},y_{1}\right)$ and $\left(x_{2},y_{2}\right)$;

- `tLine[x_{1},y_{1},x_{2},y_{2}]`: Determines the offset of the line connecting points $\left(x_{1},y_{1}\right)$ and $\left(x_{2},y_{2}\right)$;

- `xPoint[s_{1},t_{1},s_{2},t_{2}]`: Calculates the $x$-coordinate of the intersection point of lines $\left[s_{1},t_{1}\right]$ and $\left[s_{2},t_{2}\right]$;

- `yPoint[s_{1},t_{1},s_{2},t_{2}]`: Calculates the $y$-coordinate of the intersection point of lines $\left[s_{1},t_{1}\right]$ and $\left[s_{2},t_{2}\right]$;

- `incidence[x,y,s,t]`: Determines whether the point $\left(x,y\right)$ is incident to the line $\left[s,t\right]$.

All function arguments are elements of the real Okubo algebra. Then, to set up the configuration for the Little Desargues Theorem, we: 1.
Defined the center of the perspectivity $p$ and the axis $\ell$ so that $p\in\ell$. 2. Picked a point $a$ not belonging to $\ell$ and a point $a'$ incident to the line $\overset{\rightharpoonup}{ap}$. 3. Picked a point $b$ not belonging to $\ell$ nor $\overset{\rightharpoonup}{ap}$ and defined the line $\overset{\rightharpoonup}{ab}$. 4. Found the intersection $l_{3}$ of $\overset{\rightharpoonup}{ab}$ with $\ell$. 5. Found the point $b'$ given by the intersection of the lines $\overset{\rightharpoonup}{l_{3}a'}$ and $\overset{\rightharpoonup}{bp}$. 6. Picked a point $c$ not belonging to $\ell$ nor $\overset{\rightharpoonup}{ap}$ nor $\overset{\rightharpoonup}{bp}$ and thus defined the line $\overset{\rightharpoonup}{ac}$. 7. Found the intersection $l_{2}$ of $\overset{\rightharpoonup}{ac}$ with $\ell$. 8. Found the point $c'$ given by the intersection of lines $\overset{\rightharpoonup}{l_{2}a'}$ and $\overset{\rightharpoonup}{cp}$. Finally, to check the validity of the "little Desargues theorem", we verified that the point $l_{1}$, given by the intersection of $\overset{\rightharpoonup}{cb}$ with $\overset{\rightharpoonup}{c'b'}$, was incident to $\ell$. As shown in the Wolfram notebook, we verified the validity of the "little Desargues theorem", and, choosing $p\notin\ell$, that the full Desargues theorem is not valid.

# Conclusions

This work provides a novel construction of the 16-dimensional Cayley plane using two flexible algebras: the para-octonions and the real Okubo algebra. Despite lacking many algebraic properties of the octonions, including alternativity and an identity element, both algebras nonetheless give the same projective plane up to isometries. Through two explicit collineations, we established an equivalence between the Okubo, octonionic, and para-octonionic planes, i.e., $\mathcal{O}P^{2}$, $\mathbb{O}P^{2}$ and $p\mathbb{O}P^{2}$. Moreover, numerical computations directly confirmed foundational projective properties like the Little Desargues Theorem.
               $\mathbb{O}$     $p\mathbb{O}$    $\mathcal{O}$
-------------- ---------------- ---------------- ---------------------------
Unital         Yes              No               No
Paraunital     No               Yes              No
Alternative    Yes              No               No
Flexible       Yes              Yes              Yes
Composition    Yes              Yes              Yes
Automorphism   $\text{G}_{2}$   $\text{G}_{2}$   $\text{SU}\left(3\right)$

: [\[tab:Summary-Ok-Oct-pOct\]]{#tab:Summary-Ok-Oct-pOct label="tab:Summary-Ok-Oct-pOct"}Summary of the algebraic properties of the three division and composition algebras that allow a straightforward and natural definition of the Cayley plane with the mathematical setup described in this work.

Among the three constructions of the 16-dimensional Moufang plane, the one based on the Okubo algebra $\mathcal{O}$ is the simplest possible for the definition of such a plane, requiring only an 8-dimensional algebra that is neither alternative nor unital and whose automorphism group has real dimension $8$, compared with dimension $14$ for the octonions and the para-octonions (see Table [4](#tab:Summary-Ok-Oct-pOct){reference-type="ref" reference="tab:Summary-Ok-Oct-pOct"}). Surprisingly, even though the octonions $\mathbb{O}$ were historically the first algebra used for the definition of the compact 16-dimensional Moufang plane, they are the least economical algebra that can be used for the definition of such a plane. For the sake of completeness we should say that, in order to have an affine plane correctly defined with a natural setup such as that described above, the 8-dimensional algebra used for its definition must be a division algebra. Moreover, to have a completion of the affine plane in correspondence with a Veronese-type condition such as those above, one needs a composition algebra. Thus, by the generalised Hurwitz theorem [@ElDuque; @Comp], the only three algebras for which this setup can exist are those recalled in this paper.
As a corollary of the construction presented in this work, concrete geometric realizations of the real forms of the exceptional Lie groups $\text{E}_{6\left(-26\right)}$, $\text{F}_{4\left(-52\right)}$ and $\text{G}_{2\left(-14\right)}$ emerge without recourse to octonions. This challenges the conventional thinking that links exceptional Lie and Jordan structures to octonions, and emphasizes the role of symmetric composition algebras. Future work should further explore algebraic and physical interpretations of this new realization. From the algebraic point of view, the existence of an Okubo Jordan algebra is known [@Elduque; @Gradings; @SC], which we expect to be linked with the 16-dimensional Moufang plane in the same way as the exceptional Jordan algebra is [@Jordan]. Okubic constructions of the Tits-Freudenthal Magic Square were already considered for symmetric composition algebras [@EldMS1; @EldMS2], so we do expect to find a geometrical interpretation of those constructions, as Freudenthal and Rosenfeld did for the Hurwitz version [@Freud; @1965; @Rosenfeld-1993]. From the physical side, given the connections between M-theory and the octonionic Cayley plane, we expect that alternative constructions, such as that over the Okubo algebra, whose automorphism group is $\text{SU}\left(3\right)$ instead of $\text{G}_{2}$, may unravel novel phases of the theory. In particular, the investigation of the physical consequences of the lack of unity of the Okubo algebra may turn out to be rather intriguing. We have also provided a valuable reference implementation of the algebras, with examples, in a Wolfram Mathematica notebook in order to help researchers in practical calculations. All in all, by going beyond the deeply rooted connections between octonions and exceptional mathematics, this study paves the way to reimagining non-associative geometry. Symmetric composition algebras, once considered pathological, may encapsulate geometric worlds as rich as those of their unital cousins.
Much work remains to fully chart the landscape. # Acknowledgments We thank Alberto Elduque for useful suggestions and the anonymous referee of Communications in Algebra for pointing out the isomorphism of the Okubo projective plane with the octonionic plane. The work of AM is supported by a "Maria Zambrano" distinguished researcher fellowship, financed by the European Union within the NextGenerationEU program. SBGHLS Anastasiou A. , Borsten L., Duff M.J., Hughes L.J., Nagy S., *A magic pyramid of supergravities*, JHEP **04** (2014) 178. A. Anastasiou, L. Borsten, M.J. Hughes, S. Nagy, *Global symmetries of Yang-Mills squared in various dimensions*, JHEP **01** (2016) 148. Baez, J. C., *The octonions*. Bull. Amer. Math. Soc. **39** (2002) 145-205. L. Bentz, T. Dray, *Subalgebras of the Split Octonions*, Adv. Appl. Clifford Algebras **28** (2018) 2, 40. Borsten L. , Marrani A., *A Kind of Magic*, Class. Quant. Grav. 34 (2017) 23, 235014. Borsten L., Duff M.J., Ferrara S., Marrani A., Rubens W., *Small Orbits*, Phys. Rev. **D85** (2012) 086002 Cacciatori S. L. , Cerchiai B. L. , Marrani A. , *Squaring the Magic*, Adv. Theor. Math. Phys. 19 (2015) 923-954. Cartan E., *Les groupes réels simples finis et continus*, Ann. Éc. Norm. 31 (1914), 263--355. Cartan E., La theorie des groupes continus et la geometrie, extended translation of a paper by G.Fano for Encycl. Sci. Math., Oeuvres completes III, (1915), 1727-1861. Chevalley C. , Schafer R.D., *The exceptional simple Lie algebras F4 and E6*, Proc. Nat. Acad. Sci. US 36 (1950) 137--141. Ciftci S., Kaya R., Ferrar J. C., *On 4-transitivity in the Moufang plane*, J. Geom. 31 (1988) 65-68. J. H. Conway and D. A. Smith, On quaternions and octonions: their geometry, arithmetic and symmetry, Natick, MA: A. K. Peters (2003). Corradetti D., Marrani A., Chester D., Aschheim R, *Octonionic Planes and Real Forms of $G_{2}$, $F_{4}$ and $E_{6}$*, Geom. Integr. Quantization **23** (2022) 1-19. 
Corradetti D., Marrani A., Chester D., Aschheim R. *Conjugation Matters. Bioctonionic Veronese Vectors and Cayley-Rosenfeld Planes*, Int. J. Geom. Methods Mod. Phys, **19.09** (2022) 2250142. Corradetti D., Marrani A., Chester D., Aschheim R.*, A Magic Approach to octonionic Rosenfeld Planes*, arXiv:2212.06426 (2022) . Corradetti D., Zucconi F., *A Geometrical Interpretation of Okubo Spin Group*, J. Geom. Phys (2022) 104641. Corradetti D., Marrani A., Zucconi F., *A Deformation of the Okubic Albert Algebra and its Relation to the Okubic Affine and Projective Planes*, arXiv:2208.03967. Dynkin E.B., *Maximal subgroups of classical groups*, Trudy Moskov. Mat. Obshch. **1**, 39--166 (1952). English translation in: Amer. Math. Soc. Transl. (**2**) vol. 6 (1957), 245--378. Elduque A., *Okubo Algebras: Automorphisms, Derivations and Idempotent*, Contemporary Mathematics, vol. 652, Amer. Math. Soc. Providence, RI, 2015, pp. 61-73. Elduque A., *Composition algebras*; in *Algebra and Applications I: Non-associative Algebras and Categories*, Chapter 2, pp. 27-57, edited by Abdenacer Makhlouf, Sciences-Mathematics, ISTE-Wiley, London 2021. Elduque A., *The magic square and symmetric compositions*; Revista Mat. Iberoamericana **20.2** (2004) 475-491. Elduque A., *A new look at Freudenthal's magic square*, in Non Associative Algebra and Its Applications, (L. Sabinin, L. Sbitneva and I.P. Shestakov, eds.); Lecture Notes in Pure and Applied Mathematics, vol. 246, pp. 149-165. Chapman and Hall, Boca Raton 2006. A. Elduque, and H.C. Myung, *On Okubo algebras*, in From symmetries to strings: forty years of Rochester Conferences, World Sci. Publ., River Edge, NJ 1990, 299- 310. Elduque A. and H.C. Myung, *Flexible composition algebras and Okubo algebras*, Comm. Algebra 19 (1991), no. 4, 1197--1227. Elduque A. and H.C. Myung, *On flexible composition algebras*, Comm. Algebra 21 (1993) 7, 2481--2505. Elduque, A., & Maria Pérez, J. *Composition algebras with associative bilinear form*. 
Communications in Algebra, **24**(3) (1996). 1091--1116. Elduque A., *Gradings on symmetric composition algebras*, Journal of Algebra 322, 10 (2009) 3542-3579. Faulkner J. R., *The Role of Nonassociative Algebra in Projective Geometry*, Graduate Studies in Mathematics, American Mathematical Society, Volume 159, 2014. Freudenthal H., *Beziehungen der E7 and E8 zur Oktavenebene. I--IV*. Indag. Math. **16**(1954) pp. 218--230, **16**(1954) pp. 363--368, **17**(1955) pp. 151--157, **17**(1955) pp. 277--285. Freudenthal H., *Lie groups in the foundations of geometry*, Advances in Mathematics, volume 1, (1965) pp. 145 - 190. Günaydin M. and Zagermann M., *Unified Maxwell-Einstein and Yang-Mills-Einstein supergravity theories in five dimensions* JHEP **07** (2003) 023. Günaydin M., McReynolds S. and Zagermann M., *Unified Maxwell-Einstein and Yang-Mills-Einstein supergravity theories in four dimensions*, JHEP **09** (2005) 026. Günaydin M., K. Koepsell and H. Nicolai, *Conformal and Quasiconformal Realizations of Exceptional Lie Groups*, Commun. Math. Phys. **221**, **57** (2001), hep-th/0008063 Gürsey F., *Spontaneous Breaking of Exceptional Groups*, 5th International Colloquium on Group Theoretical Methods in Physics (1976) 213-230. Hughes D. R. and Piper F. C., *Projective planes*, Springer-Verlag, Berlin-Heidelberg-New York, 1973. Hurwitz A., *Uber die Komposition der quadratischen Formen von beliebig vielen Variablen*, Nachr. Ges. Wiss. Gottingen, 1898. Jacobson N., *Cayley numbers and simple Lie algebras of type G*, Duke Math. J. **5** (1939), 775--783. Jacobson N., *Some groups of transformations defined by Jordan algebras. II. Groups of type F4.*, Journal für die reine und angewandte Mathematik, vol. 1960, no. 204, 1960, pp. 74-98. Jordan P., von Neumann J. and Wigner E., *On an Algebraic Generalization of the Quantum Mechanical Formalism*, Ann. Math. **35** (1934) 29-64. Knus M.-A., Merkurjev A., Rost M. 
and Tignol J.-P., *The book of involutions.*American Mathematical Society Colloquium Publications 44, American Mathematical Society, Providence, RI, 1998. Landsberg J., Manivel M., *The projective geometry of Freudenthal's magic square*. J. Algebra, **239** (2001), no. 2, pp. 477--512. Manogue, C.A; Dray, T. 2015. *The Geometry of Octonions*. World Scientific. Michel L. and Radicati L., *Properties of the Breaking of Hadronic Internal Symmetry*, Ann. of Phys **66** (1971) 758-783. Moufang, R. (1935), *Zur Struktur von Alternativkörpern*, Math. Ann., **110**: 416--430 Okubo S., *Deformation of Pseudo-quaternion and Pseudo-octonion Algebras*, Hadronic J. **1** (1978) 1383. Okubo S., *Pseudoquaternion and Pseudooctonion Algebras* Hadronic J. **1** (1978) 1250. Okubo S., *octonions as traceless 3 x 3 matrices via a flexible Lie-admissible algebra*, Hadronic J. **1** (1978), 1432-1465. Okubo S., Myung H.C., *Some new classes of division algebras*, J. Algebra **67** (1980), 479--490. Okubo S., Osborn M., *Algebras with nondegenerate associative symmetric bilinear forms permitting composition*, Communications in Algebra, **9:12**, (1981) 1233-1261 Okubo S., Osborn M., Algebras with nondegenerate associative symmetric bilinear forms permitting composition, II, Communications in Algebra, **9:20** (1981) 2015-2073, DOI: Okubo S., *Introduction to octonion and other non-associative algebras in physics*, Montroll Memorial Lecture Series in Mathematical Physics 2, Cambridge University Press, Cambridge, 1995. Pickert G., *Projektive Ebenen*. Springer, Berlin-Heidelberg-New York, 2nd ed. 1975. Petersson H.P. , *Eine Identitat funften Grades, der gewisse Isotope von Kompositions- Algebren genugen*, Math. Z. **109** (1969), 217--238. Rosenfeld B. A., *Spaces with Exceptional Fundamental Groups*, Publications de l'Institut Mathématique, nouvelle série tome **54** (68), 97-119 (1993). Rosenfeld B. A., *Geometry of Lie Groups*, Kluwer 1997. 
Rosenfeld B.A., *Geometry of Planes over Nonassociative Algebras*, Acta Appl. Math. **50** (1998) 103-110. Salzmann H., Betten D., Grundhofer T., Howen R. and Stroppel M., *Compact projective Planes: With an Introduction to octonion Geometry*, Berlin, New York: De Gruyter, 2011. Salzmann H., *16-dimensional compact projective planes with a collineation group of dimension $\ge$ 35*, Arch. Math. 90 (2008), 284--288; R 08m: 51040 Salzmann H., *Compact 16-dimensional planes: an update,* arXiv (2017) 1706.0369. Satake I.*,On Representations and Compactifications of Symmetric Riemannian Spaces*. The Annals of Mathematics, **71**(1) (1960) 77. Sati H., *On the geometry of the supermultiplet in M-theory*. International Journal of Geometric Methods in Modern Physics **8** (07) (2011), 1519-1551 Schafer R. D. , *Introduction to Non-Associative Algebras*, Dover, New York, 1995. Springer T.A., *The projective octave plane. I, II*, Indag. Math. (Proceedings), Volume 63, 1960, Pages 74-88, Springer T., Veldkamp F., *Elliptic and hyperbolic octave planes. I, II* and *III* in Indag. Math. (Proceedings) (1963) pp. 413-438 Springer T., Veldkamp F., *On Hjelmslev-Moufang planes*. Math Z **107**, 249--263 (1968). Springer T., Veldkamp F., *Octonions, Jordan Algebras and Exceptional Groups*, Springer-Verlag Berlin Heidelberg 2000 Springer T.A., *Characterization of a class of cubic forms*, Indag. Math. **24** 1962, pp. 259--265. Stech B., *Exceptional Groups for Grand Unification*, Springer US, Boston, MA, 1980. Stech B., & Tavartkiladze, Z. Generation symmetry and E6 unification. Phys. Rev. **D 77** (2008) 076009. Stevenson F. W., *Projective Planes*, W.H. Freeman & Co, 1972 Tits J., *Algèbres alternatives, algèbres de Jordan et algèbres de Lie exceptionnelles*, Indag. Math. **28** (1962) 530--535. Vinberg E. B., *A Construction of Exceptional Lie Groups* (Russian), Tr. Semin. Vek- torn. Tensorn. Anal., **13** (1966) 7--9. 
Yamatsu N., *Finite-dimensional Lie algebras and their representations for unified model building*, arXiv 1522.08771. Yokota I., *Exceptional Lie Groups*, arXiv:0902.0431 (2009). Yokota I., *Exceptional Lie Group F4 and its Representation Rings*, Jour. Fac. Sci., Vol. 3, No. 1, pp. 35-60 (1968). Yokota I., *Non-compact Simple Lie Group F4,2 of Type F4*, Jour. Fac. Sci., Vol. 12, N. 1 (1977). Zhevlakov K.A., A. M. Slin'ko, I. P. Shestakov and A. I. Shirshov, *Rings that are nearly associative*, Academic Press, New York 1982. $*$ [Departamento de Matemática,]{.smallcaps}\ [Universidade do Algarve,]{.smallcaps}\ [Campus de Gambelas,]{.smallcaps}\ [8005-139 Faro, Portugal]{.smallcaps} a55499@ualg.pt $\dagger$ [Instituto de Física Teorica, Dep.to de Física,]{.smallcaps}\ [Universidad de Murcia,]{.smallcaps}\ [Campus de Espinardo,]{.smallcaps}\ [E-30100, Spain]{.smallcaps} alessio.marrani@um.es $\ddagger$ [Dipartimento di Scienze Matematiche, Informatiche e Fisiche,]{.smallcaps}\ [Università di Udine,]{.smallcaps}\ [Udine, 33100, Italy]{.smallcaps} francesco.zucconi@uniud.it [^1]: Actually, the eight three-by-three matrices (2.19)-(2.20) are, up to an overall factor $\sqrt{3}$, the Gell-Mann matrices. In particular, the idempotent (2.19) is $-\sqrt{3}$ times the eighth Gell-Mann matrix $\lambda_{8}$, with the first and third rows and columns exchanged.
--- abstract: | We set out an elementary approach to derive Visible Point Identities summed on the lattice points of an inverted triangle (2D), pyramid (3D), or hyperpyramid (4D, 5D and so on), utilizing the greatest common divisor for the nD Visible Point Vectors. This enables the study of partitions in nD space into vector parts distributed along straight lines radial from the origin in the first hyperquadrant, where the coordinates of lattice points are all positive integers. We also give several new combinatorial identities for Visible Point Vector partitions. address: Mathematical Sciences Institute, The Australian National University, Canberra, ACT, 0200, Australia author: - Geoffrey B Campbell title: Visible Point Vector Partition Identities for Hyperpyramid Lattices --- [^1]

# VPV identities in square hyperpyramid regions {#S:Intro VPV hyperpyramids}

In the decade up to 2000 the author published papers that culminated in the 2000 paper on hyperpyramid "Visible Point Vector" (VPV) identities. (See Campbell [@gC1992; @gC1993; @gC1994a; @gC1994b; @gC1997; @gC1998] then [@gC2000].) These papers, although breaking new ground, were not at the time taken any further than the statement of a general theorem and a few prominent examples. Since then, these identities have not been developed further in the literature, despite the many evident ways in which the Parametric Euler Sum Identities of the 21st century, along with experimental computational results, are applicable. Partitions and lattice sums have been developed "in their separate lanes" in mathematical analysis, in statistical mechanics, and in chemistry, without any great attempt to relate these areas of contemporary research.
Add to this the possibility that light diffusion lattice models, random walk regimes, and stepping stone weighted partitions seem fundamentally applicable in contexts of VPV identities, and it becomes clear that the transition from integer partitions to vector partitions may be a path for future researches. So, to be clear, we state the basic 2D, 3D and 4D identities as follows. **Statement 1**. *For $|y|, |z|<1,$ $$\label{21.17i} \prod_{\substack{gcd(a,b)=1 \\ a<b \\ a \geq 0, \; b \geq 1}} \left( \frac{1}{1-y^a z^b} \right)^{\frac{1}{b}} = \left(\frac{1-yz}{1-z}\right)^{\frac{1}{1-y}}$$ $$\nonumber = 1 + \frac{z}{1!} + \begin{vmatrix} 1 & -1 \\ \frac{1-y^2}{1-y} & 1 \\ \end{vmatrix} \frac{z^2}{2!} + \begin{vmatrix} 1 & -1 & 0 \\ \frac{1-y^2}{1-y} & 1 & -2 \\ \frac{1-y^3}{1-y} & \frac{1-y^2}{1-y} & 1 \\ \end{vmatrix} \frac{z^3}{3!} + \begin{vmatrix} 1 & -1 & 0 & 0 \\ \frac{1-y^2}{1-y} & 1 & -2 & 0 \\ \frac{1-y^3}{1-y} & \frac{1-y^2}{1-y} & 1 & -3 \\ \frac{1-y^4}{1-y} & \frac{1-y^3}{1-y} & \frac{1-y^2}{1-y} & 1 \\ \end{vmatrix} \frac{z^4}{4!} + etc.$$* **Statement 2**. *For each of $|x|, |y|, |z|<1,$ $$\label{21.18i} \prod_{\substack{gcd(a,b,c)=1 \\ a,b<c \\ a,b\geq0, \; c>0}} \left( \frac{1}{1-x^a y^b z^c} \right)^{\frac{1}{c}} = \left(\frac{(1-xz)(1-yz)}{(1-z)(1-xyz)}\right)^{\frac{1}{(1-x)(1-y)}}$$ $$\nonumber = 1 + \frac{z}{1!} + \begin{vmatrix} 1 & -1 \\ \frac{(1-x^2)(1-y^2)}{(1-x)(1-y)} & 1 \\ \end{vmatrix} \frac{z^2}{2!} + \begin{vmatrix} 1 & -1 & 0 \\ \frac{(1-x^2)(1-y^2)}{(1-x)(1-y)} & 1 & -2 \\ \frac{(1-x^3)(1-y^3)}{(1-x)(1-y)} & \frac{(1-x^2)(1-y^2)}{(1-x)(1-y)} & 1 \\ \end{vmatrix} \frac{z^3}{3!}$$ $$\nonumber + \begin{vmatrix} 1 & -1 & 0 & 0 \\ \frac{(1-x^2)(1-y^2)}{(1-x)(1-y)} & 1 & -2 & 0 \\ \frac{(1-x^3)(1-y^3)}{(1-x)(1-y)} & \frac{(1-x^2)(1-y^2)}{(1-x)(1-y)} & 1 & -3 \\ \frac{(1-x^4)(1-y^4)}{(1-x)(1-y)} & \frac{(1-x^3)(1-y^3)}{(1-x)(1-y)} & \frac{(1-x^2)(1-y^2)}{(1-x)(1-y)} & 1 \\ \end{vmatrix} \frac{z^4}{4!} + etc.$$* **Statement 3**. 
*For each of $|w|, |x|, |y|, |z|<1,$ $$\label{21.19i} \prod_{\substack{gcd(a,b,c,d)=1 \\ a,b,c<d \\ a,b,c\geq0, \; d>0}} \left( \frac{1}{1-w^a x^b y^c z^d} \right)^{\frac{1}{d}} = \left(\frac{(1-wz)(1-xz)(1-yz)(1-wxyz)}{(1-z)(1-wxz)(1-wyz)(1-xyz)}\right)^{\frac{1}{(1-w)(1-x)(1-y)}},$$ $$\nonumber = 1 + \frac{z}{1!} + \begin{vmatrix} 1 & -1 \\ \frac{(1-w^2)(1-x^2)(1-y^2)}{(1-w)(1-x)(1-y)} & 1 \\ \end{vmatrix} \frac{z^2}{2!}$$ $$\nonumber + \begin{vmatrix} 1 & -1 & 0 \\ \frac{(1-w^2)(1-x^2)(1-y^2)}{(1-w)(1-x)(1-y)} & 1 & -2 \\ \frac{(1-w^3)(1-x^3)(1-y^3)}{(1-w)(1-x)(1-y)} & \frac{(1-w^2)(1-x^2)(1-y^2)}{(1-w)(1-x)(1-y)} & 1 \\ \end{vmatrix} \frac{z^3}{3!}$$ $$\nonumber + \begin{vmatrix} 1 & -1 & 0 & 0 \\ \frac{(1-w^2)(1-x^2)(1-y^2)}{(1-w)(1-x)(1-y)} & 1 & -2 & 0 \\ \frac{(1-w^3)(1-x^3)(1-y^3)}{(1-w)(1-x)(1-y)} & \frac{(1-w^2)(1-x^2)(1-y^2)}{(1-w)(1-x)(1-y)} & 1 & -3 \\ \frac{(1-w^4)(1-x^4)(1-y^4)}{(1-w)(1-x)(1-y)} & \frac{(1-w^3)(1-x^3)(1-y^3)}{(1-w)(1-x)(1-y)} & \frac{(1-w^2)(1-x^2)(1-y^2)}{(1-w)(1-x)(1-y)} & 1 \\ \end{vmatrix} \frac{z^4}{4!} + etc.$$* Each of the above identities gives us an exact enumeration of certain functions of vector partitions. Suppose we say the $z$ variable is a "vertical axis" upon which to plot the 2D, 3D, 4D graph for the type of partitions and partition functions under consideration. Then the power series in $z$ give us exact determinant representations at each 1D, 2D or 3D layer, corresponding to $z^n$ with $n=1,2,3,\ldots$. The proofs of the power series determinant coefficient functions in ([\[21.17i\]](#21.17i){reference-type="ref" reference="21.17i"}) to ([\[21.19i\]](#21.19i){reference-type="ref" reference="21.19i"}) rely only on applying Cramer's Rule to the coefficient recurrences, as well as differentiating the logarithms of both sides of the infinite products and their closed form evaluations. So, in the ensuing pages, we give the simplest $n$-space hyperpyramid VPV theorem due to the author in [@gC2000].
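As an independent numerical sanity check (a sketch, not part of the original derivation), the 2D identity ([\[21.17i\]](#21.17i){reference-type="ref" reference="21.17i"}) can be verified by truncating the visible-point product and comparing logarithms with the closed form:

```python
import math

def vpv_lhs_log(y, z, bmax):
    # log of the truncated product over visible points gcd(a,b)=1,
    # 0 <= a < b <= bmax, of (1 - y^a z^b)^(-1/b).
    # Note math.gcd(0, 1) == 1, so the a = 0 ray is correctly included.
    total = 0.0
    for b in range(1, bmax + 1):
        for a in range(b):
            if math.gcd(a, b) == 1:
                total -= math.log(1.0 - y**a * z**b) / b
    return total

y, z = 0.3, 0.2
lhs = vpv_lhs_log(y, z, 60)
# log of the closed form ((1 - y z)/(1 - z))^(1/(1 - y))
rhs = (math.log(1.0 - y * z) - math.log(1.0 - z)) / (1.0 - y)
gap = abs(lhs - rhs)
print(gap)  # tiny: truncation error ~ z^61 plus floating-point rounding
```

Truncating at $b\leq60$ leaves an error of order $z^{61}$, so for $|z|$ well inside the unit disc the two sides agree to machine precision.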
We shall cover the so-called "Skewed Hyperpyramid $n$-space Identities" from [@gC2000] in a later paper. The determinant coefficient technique from our earlier work on hyperquadrant lattices applies here as well, and bears some resemblance to the $q$-binomial variants of earlier papers. Note that for most identities in this paper, the left side products are taken over a set of integer lattice points inside an inverted 2D triangle lattice, a 3D pyramid, or a higher-dimensional hyperpyramid in Euclidean cartesian space. In the first 15 years of the 21st century, the summations found by the Borwein brothers Peter and Jonathan, their father David, and their colleagues (see [@dB2006] to [@jB2013]) renewed interest in the old Euler sums. Their results give us particular values of polylogarithms and related functions popularized by Lewin [@lL1981; @lL1982; @lL1991] involving the generalized harmonic numbers. This work has developed over nearly two decades, so that we now speak of the Mordell-Tornheim-Witten sums, polylogarithm generalizations that all appear applicable to the VPV identities, although that connection is not yet fully worked through in the present literature. Many of these newer results, including experimentally calculated ones, can be substituted into VPV identities to give exact results for weighted vector partitions. To make sense of these new results, we need to go back to fundamental definitions and ideas for partitions of vectors, as distinct from those already well considered for integer partitions. We examine the following correspondences.
From the elementary generating function for unrestricted integer partitions we have, $$\label{21.20i} \prod_{n=1}^{\infty} \left(1+x^{1n}+x^{2n}+x^{3n}+ \ldots \right)=1+p(1)x+p(2)x^2+p(3)x^3+\ldots .$$ So, equations ([\[21.17i\]](#21.17i){reference-type="ref" reference="21.17i"}) to ([\[21.19i\]](#21.19i){reference-type="ref" reference="21.19i"}) similarly imply $$\label{21.21i} \prod_{\substack{gcd(a,b)=1; \; a<b \\ a \geq 0, \; b \geq 1 }} \left( 1+\binom{1/b}{1}(y^a z^b)+\binom{1/b}{2}(y^a z^b)^2+\binom{1/b}{3}(y^a z^b)^3+\ldots \right)$$ $$= \sum_{\substack{a<b \\ a \geq 0, \; b \geq 1 }} V_2(a,b) y^a z^b.$$ $$\label{21.22i} \prod_{\substack{gcd(a,b,c)=1; \\ a,b<c \\ a,b \geq 0, \; c \geq 1 }} \left( 1+\binom{1/c}{1}(x^a y^b z^c)+\binom{1/c}{2}(x^a y^b z^c)^2+\binom{1/c}{3}(x^a y^b z^c)^3+\ldots \right)$$ $$= \sum_{\substack{a,b<c \\ a,b \geq 0, \; c \geq 1 }} V_3(a,b,c) x^a y^b z^c.$$ $$\label{21.23i} \prod_{\substack{gcd(a,b,c,d)=1; \\ a,b,c<d \\ a,b,c \geq 0, \; d \geq 1 }} \left( 1+\binom{1/d}{1}(w^a x^b y^c z^d)+\binom{1/d}{2}(w^a x^b y^c z^d)^2+\binom{1/d}{3}(w^a x^b y^c z^d)^3+\ldots \right)$$ $$= \sum_{\substack{a,b,c<d \\ a,b,c \geq 0, \; d \geq 1 }} V_4(a,b,c,d) w^a x^b y^c z^d.$$ Equations ([\[21.21i\]](#21.21i){reference-type="ref" reference="21.21i"}) to ([\[21.23i\]](#21.23i){reference-type="ref" reference="21.23i"}) ought to give us exact closed form generating functions $V_2$, $V_3$, $V_4$, in 2D, 3D and 4D space respectively. We put this aside for now. # Vector Partitions whose parts are on nD straight lines. 
From equation ([\[21.20i\]](#21.20i){reference-type="ref" reference="21.20i"}) we see that replacing $x$ by $y^a z^b$ for $|y^a z^b|<1$, where $a$ and $b$ are coprime positive integers, gives the equation $$\label{21.24i} \prod_{n=1}^{\infty} \frac{1}{1 - (y^a z^b)^n} =\prod_{n=1}^{\infty} \left(1+(y^a z^b)^{1n}+(y^a z^b)^{2n}+(y^a z^b)^{3n}+ \ldots \right)$$ $$=1+p(1)(y^a z^b)^1+p(2)(y^a z^b)^2+p(3)(y^a z^b)^3+\ldots .$$ Of course this is just a thinly disguised version of the generating function for unrestricted integer partitions, i.e. one-dimensional partitions. In the 2D case we are saying that the number of partitions of an integer lattice point vector $\langle A,B \rangle$ on the line $y=az/b$ for $\gcd(a,b)=1$ into vector parts also on this line is equal to $p(\gcd(A,B))$. So ([\[21.24i\]](#21.24i){reference-type="ref" reference="21.24i"}), enumerating the number of vector partitions for the lattice points along a 1D line in 2D space, applies equally to a 1D line in 3D space as follows, where $a$, $b$ and $c$ are positive integers with $\gcd(a,b,c)=1$. $$\label{21.25i} \prod_{n=1}^{\infty} \frac{1}{1 - (x^a y^b z^c)^n} =\prod_{n=1}^{\infty} \left(1+(x^a y^b z^c)^{1n}+(x^a y^b z^c)^{2n}+(x^a y^b z^c)^{3n}+ \ldots \right)$$ $$=1+p(1)(x^a y^b z^c)^1+p(2)(x^a y^b z^c)^2+p(3)(x^a y^b z^c)^3+\ldots .$$ Similarly, in the 4D case extension corresponding to the 3D equation ([\[21.25i\]](#21.25i){reference-type="ref" reference="21.25i"}) we have, where $a$, $b$, $c$ and $d$ are positive integers with $\gcd(a,b,c,d)=1$, $$\label{21.26i} \prod_{n=1}^{\infty} \frac{1}{1 - (w^a x^b y^c z^d)^n}$$ $$=\prod_{n=1}^{\infty} \left(1+(w^a x^b y^c z^d)^{1n}+(w^a x^b y^c z^d)^{2n}+(w^a x^b y^c z^d)^{3n}+ \ldots \right)$$ $$=1+p(1)(w^a x^b y^c z^d)^1+p(2)(w^a x^b y^c z^d)^2+p(3)(w^a x^b y^c z^d)^3+\ldots .$$ We shall now do something interesting with equation ([\[21.24i\]](#21.24i){reference-type="ref" reference="21.24i"}). We write an "Upper VPV" identity derived from it as follows.
$$\label{21.27i} \prod_{\substack{a,b,n \geq 1; \; a \leq b \\ gcd(a,b)=1}} \frac{1}{1 - (y^a z^b)^n}$$ $$\begin{aligned} &=& \frac{1}{1 - y^1 z^1} \\ &\times& \frac{1}{1 - y^1 z^2} \frac{1}{1 - y^2 z^2} \\ &\times& \frac{1}{1 - y^1 z^3} \frac{1}{1 - y^2 z^3} \frac{1}{1 - y^3 z^3} \\ &\times& \frac{1}{1 - y^1 z^4} \frac{1}{1 - y^2 z^4} \frac{1}{1 - y^3 z^4} \frac{1}{1 - y^4 z^4} \\ &\times& \frac{1}{1 - y^1 z^5} \frac{1}{1 - y^2 z^5} \frac{1}{1 - y^3 z^5} \frac{1}{1 - y^4 z^5} \frac{1}{1 - y^5 z^5} \\ &\times& \frac{1}{1 - y^1 z^6} \frac{1}{1 - y^2 z^6} \frac{1}{1 - y^3 z^6} \frac{1}{1 - y^4 z^6} \frac{1}{1 - y^5 z^6} \frac{1}{1 - y^6 z^6} \\ &\times& etc. \end{aligned}$$ $$=\prod_{\substack{a,b,n \geq 1; \; a \leq b \\ gcd(a,b)=1}} \left(1+(y^a z^b)^{1n}+(y^a z^b)^{2n}+(y^a z^b)^{3n}+ \ldots \right)$$ $$=\prod_{\substack{a,b \geq 1; \; a \leq b \\ gcd(a,b)=1}} \left( 1+p(1)(y^a z^b)^1+p(2)(y^a z^b)^2+p(3)(y^a z^b)^3+\ldots \right)$$ $$= \sum_{n_1,n_2,n_3... \geq 0} p(n_1)(y^1 z^1)^{n_1} p(n_2)(y^1 z^2)^{n_2}p(n_3)(y^1 z^3)^{n_3}p(n_4)(y^2 z^3)^{n_4}\ldots$$ $$= \sum_{n_1,n_2,n_3... \geq 0} p(n_1)p(n_2)p(n_3)p(n_4)\cdots(y^1 z^1)^{n_1}(y^1 z^2)^{n_2}(y^1 z^3)^{n_3}(y^2 z^3)^{n_4}\ldots$$ $$= \sum_{n_1,n_2,n_3... \geq 0} p(n_1)p(n_2)p(n_3)p(n_4)\cdots(y^{(1n_1+1n_2+1n_3+2n_4+\ldots)}z^{(1n_1+2n_2+3n_3+3n_4+\ldots)})$$ where each coefficient of $n_i$ in the index sum of $y$ is coprime to the coefficient of $n_i$ in the index sum of $z$; $$\begin{aligned} &=& 1 + p_{(1|1)}y^1 z^1 \\ & & + \; p_{(1|2)} y^1 z^2 + p_{(2|2)} y^2 z^2 \\ & & + \; p_{(1|3)} y^1 z^3 + p_{(2|3)} y^2 z^3 + p_{(3|3)} y^3 z^3 \\ & & + \; p_{(1|4)} y^1 z^4 + p_{(2|4)} y^2 z^4 + p_{(3|4)} y^3 z^4 + p_{(4|4)} y^4 z^4 \\ & & + \; p_{(1|5)} y^1 z^5 + p_{(2|5)} y^2 z^5 + p_{(3|5)} y^3 z^5 + p_{(4|5)} y^4 z^5 + p_{(5|5)} y^5 z^5 \\ & & + \; etc.
\end{aligned}$$ where $p_{(a|b)} := p_{(a|b)}(y,z)$ is the coefficient function of $y^a z^b$. This enables the study of partitions in $n$D space into vector parts distributed along straight lines radiating from the origin in the first hyperquadrant, where the co-ordinates of lattice points are all positive integers. **2D Vector Partitions whose parts are on two straight lines.** The above analysis is with respect to vector partitions in the upper half first quadrant above and including the line $y=z$. The simplest version of this theory that departs from a single radial line of lattice points is the case of 2D partitions whose parts come from two such radial lines. For example, consider the two lines with equations $y=z/2$ and $y=z/3$. The lattice point vectors along these lines in the first quadrant may be listed as $$\begin{aligned} S_1 &=& \{\langle 1,2\rangle ,\langle 2,4\rangle ,\langle 3,6\rangle ,\langle 4,8\rangle ,\langle 5,10\rangle ,\ldots\}; \\ S_2 &=& \{\langle 1,3\rangle ,\langle 2,6\rangle ,\langle 3,9\rangle ,\langle 4,12\rangle ,\langle 5,15\rangle ,\ldots\}.\end{aligned}$$ Following the above rationale we see that the generating function for 2D vector partitions into parts contained in $S_1$ and $S_2$ is $$\frac{1}{((1-yz^2)(1-y^2z^4)(1-y^3z^6)\cdots)((1-yz^3)(1-y^2z^6)(1-y^3z^9)\cdots)}$$ $$= \left(1+p(1)y^1 z^2+p(2)y^2 z^4+p(3)y^3 z^6+\ldots \right)\left(1+p(1)y^1 z^3+p(2)y^2 z^6+p(3)y^3 z^9+\ldots \right)$$ $$\begin{aligned} &=& 1 + p(1) y z^2 + p(1) y z^3 \\ & & + \; p(2) y^2 z^4 + p(1)^2 y^2 z^5 + p(2) y^2 z^6 \\ & & + \; p(3) y^3 z^6 + p(1) p(2) y^3 z^7 + p(1) p(2) y^3 z^8 + p(3) y^3 z^9 \\ & & + \; etc.\end{aligned}$$ $$\begin{aligned} &=& 1 + y z^2 + y z^3 + 2 y^2 z^4 + y^2 z^5 + y^2 (3 y + 2) z^6 \\ & & + \; 2 y^3 z^7 + y^3 (5 y + 2) z^8 + 3 y^3 (y
+ 1) z^9 + y^4 (7 y + 4) z^{10} \\ & & + \; y^4 (5 y + 3) z^{11} + y^4 (11 y^2 + 6 y + 5) z^{12} + y^5 (7 y + 6) z^{13} \\ & & + \; y^5 (14 y^2 + 10 y + 5) z^{14} + y^5 (11 y^2 + 9 y + 7) z^{15} \\ & & + \; etc.\end{aligned}$$ The coefficients here plot onto the grid $$\begin{array}{c|ccccccccc} \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ 15 & & & & & & 7 & 9 & 11 & \cdots \\ 14 & & & & & & 5 & 10 & 14 & \cdots \\ 13 & & & & & & 6 & 7 & & \cdots \\ 12 & & & & & 5 & 6 & 11 & & \cdots \\ 11 & & & & & 3 & 5 & & & \cdots \\ 10 & & & & & 4 & 7 & & & \cdots \\ 9 & & & & 3 & 3 & & & & \cdots \\ 8 & & & & 2 & 5 & & & & \cdots \\ 7 & & & & 2 & & & & & \cdots \\ 6 & & & 2 & 3 & & & & & \cdots \\ 5 & & & 1 & & & & & & \cdots \\ 4 & & & 2 & & & & & & \cdots \\ 3 & & 1 & & & & & & & \cdots \\ 2 & & 1 & & & & & & & \cdots \\ 1 & & & & & & & & & \cdots \\ 0 & 1 & & & & & & & & \cdots \\ \hline z/y& 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & \cdots \end{array}$$ **Example interpretations reading from this grid.** 1. The number of partitions of vector $\langle 7,15 \rangle$ into unrestricted number of parts from $S_1$ and $S_2$ is 11. 2. The number of partitions of vector $\langle 5,10 \rangle$ into unrestricted number of parts from $S_1$ and $S_2$ is 7. 3. The number of partitions of vector $\langle 4,9 \rangle$ into unrestricted number of parts from $S_1$ and $S_2$ is 3. 4. The number of partitions of vector $\langle 4,7 \rangle$ into unrestricted number of parts from $S_1$ and $S_2$ is 0. **3D Vector Partitions whose parts are on two straight lines.** A further simple version of the above approach allows us to work out exactly the number of partitions into vector parts that lie upon "radial from the origin" lines of lattice points. We state the following example for 3D partitions in the first 3D hyperquadrant (i.e. with lattice points whose co-ordinates are triples of positive integers).
For example, consider the first line through 3D space with equation $x=y/2=z/3$, and then the second line with equation $x=y/3=z/4$. The lattice point vectors along these lines in the first 3D hyperquadrant may be listed as $$\begin{aligned} S_3 &=& \{\langle 1,2,3\rangle ,\langle 2,4,6\rangle ,\langle 3,6,9\rangle ,\langle 4,8,12\rangle ,\langle 5,10,15\rangle ,\ldots\}; \\ S_4 &=& \{\langle 1,3,4\rangle ,\langle 2,6,8\rangle ,\langle 3,9,12\rangle ,\langle 4,12,16\rangle ,\langle 5,15,20\rangle ,\ldots\}.\end{aligned}$$ Applying our rationale we see that the generating function for 3D vector partitions into parts contained in $S_3$ and $S_4$ is $$\frac{1}{((1-xy^2z^3)(1-x^2y^4z^6)(1-x^3y^6z^9)\cdots)((1-xy^3z^4)(1-x^2y^6z^8)(1-x^3y^9z^{12})\cdots)}$$ $$= \left(1+p(1)xy^2z^3+p(2)x^2y^4z^6+p(3)x^3y^6z^9+\ldots \right) \quad \quad \quad$$ $$\quad \quad \times \left(1+p(1)xy^3z^4+p(2)x^2y^6z^8+p(3)x^3y^9z^{12}+\ldots \right)$$ $$\begin{aligned} &=& 1 + p(1) x y^2 z^3 + p(1) x y^3 z^4 \\ & & + \; p(2) x^2 y^4 z^6 + p(1)^2 x^2 y^5 z^7 + p(2) x^2 y^6 z^8 \\ & & + \; p(3) x^3 y^6 z^9 + p(1) p(2) x^3 y^7 z^{10} + p(1) p(2) x^3 y^8 z^{11} + p(3) x^3 y^9 z^{12} \\ & & + \; etc.\end{aligned}$$ There are many extended possibilities for the above enumerations of partitions in higher cartesian space sets of lattice point vectors. There is no reason, other than formal complexity, why we cannot create exact enumerative formulas for "$n$D Vector Partitions whose parts are on $m$ straight lines" for arbitrary fixed positive integers $m$ and $n$.
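The grid counts in the 2D example above can also be confirmed by direct enumeration. The following is a minimal dynamic-programming sketch of our own; the helper name, the truncation of the part lists and the `distinct` switch (anticipating distinct-part partitions) are our own devices.

```python
from functools import lru_cache

def count_vector_partitions(target, parts, distinct=False):
    """Count the ways to write `target` as a sum of vectors from `parts`.
    Each part may be repeated, unless distinct=True (then at most once)."""
    parts = tuple(sorted(set(parts)))

    @lru_cache(maxsize=None)
    def go(ax, ay, i):
        if (ax, ay) == (0, 0):
            return 1                 # empty remainder: exactly one way
        if i == len(parts):
            return 0                 # parts exhausted, target not reached
        px, py = parts[i]
        ways, m = 0, 0               # m = number of copies of parts[i] used
        while m * px <= ax and m * py <= ay and (m <= 1 or not distinct):
            ways += go(ax - m * px, ay - m * py, i + 1)
            m += 1
        return ways

    return go(target[0], target[1], 0)

# Lattice points on the lines y = z/2 and y = z/3, as (y, z) exponent pairs.
S1 = [(k, 2 * k) for k in range(1, 16)]
S2 = [(k, 3 * k) for k in range(1, 16)]

# Unrestricted counts, matching the grid above.
assert count_vector_partitions((7, 15), S1 + S2) == 11
assert count_vector_partitions((5, 10), S1 + S2) == 7
assert count_vector_partitions((4, 9), S1 + S2) == 3
assert count_vector_partitions((4, 7), S1 + S2) == 0
```

With `distinct=True` the same helper enumerates partitions into distinct vector parts, for example `count_vector_partitions((7, 15), S1 + S2, distinct=True)` returns 4.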
**Distinct Vector Partitions along an *n*-space line.** Recall that Euler, in addition to giving us the "unrestricted" integer partitions generating function, also noted that for $|x|<1$, $$\label{21.28i} \prod_{k=1}^{\infty} (1+x^k) = 1 + \sum_{n=1}^{\infty} \mathcal{D}(n)x^n,$$ where $\mathcal{D}(n)$ is the number of partitions of positive integer $n$ into distinct integer parts. From equation ([\[21.28i\]](#21.28i){reference-type="ref" reference="21.28i"}) we see that replacing $x$ by $y^a z^b$ for $|y^a z^b|<1$, where $a$ and $b$ are coprime positive integers, gives the equation $$\label{21.29i} \prod_{n=1}^{\infty} (1 + (y^a z^b)^n) =1+\mathcal{D}(1)(y^a z^b)^1+\mathcal{D}(2)(y^a z^b)^2+\mathcal{D}(3)(y^a z^b)^3+\ldots .$$ As with the earlier rationale for "unrestricted" partitions, this is a thinly disguised version of the generating function for one-dimensional partitions into distinct parts. In the 2D case we are saying that the number of partitions of an integer lattice point vector $\langle A,B \rangle$ on the line $y=az/b$ for $\gcd(a,b)=1$ into "distinct" vector parts also on this line is equal to $\mathcal{D}(\gcd(A,B))$. In a further example, consider again the two lines with equations $y=z/2$ and $y=z/3$.
We list again the lattice point vectors along these lines in the first quadrant as $$\begin{aligned} S_1 &=& \{\langle 1,2\rangle ,\langle 2,4\rangle ,\langle 3,6\rangle ,\langle 4,8\rangle ,\langle 5,10\rangle ,\ldots\}; \\ S_2 &=& \{\langle 1,3\rangle ,\langle 2,6\rangle ,\langle 3,9\rangle ,\langle 4,12\rangle ,\langle 5,15\rangle ,\ldots\}.\end{aligned}$$ Following the above rationale we see that the generating function for 2D vector partitions into distinct parts contained in $S_1$ and $S_2$ is $$((1+yz^2)(1+y^2z^4)(1+y^3z^6)\cdots)((1+yz^3)(1+y^2z^6)(1+y^3z^9)\cdots)$$ $$= \left(1+\mathcal{D}(1)y^1 z^2+\mathcal{D}(2)y^2 z^4+\mathcal{D}(3)y^3 z^6+\ldots \right) \quad \quad$$ $$\quad \times \left(1+\mathcal{D}(1)y^1 z^3+\mathcal{D}(2)y^2 z^6+\mathcal{D}(3)y^3 z^9+\ldots \right)$$ $$\begin{aligned} &=& 1 + \mathcal{D}(1) y z^2 + \mathcal{D}(1) y z^3 \\ & & + \; \mathcal{D}(2) y^2 z^4 + \mathcal{D}(1)^2 y^2 z^5 + \mathcal{D}(2) y^2 z^6 \\ & & + \; \mathcal{D}(3) y^3 z^6 + \mathcal{D}(1) \mathcal{D}(2) y^3 z^7 + \mathcal{D}(1) \mathcal{D}(2) y^3 z^8 + \mathcal{D}(3) y^3 z^9 \\ & & + \; etc.\end{aligned}$$ $$\begin{aligned} &=& 1 + y z^2 + y z^3 + y^2 z^4 + y^2 z^5 + y^2 (2 y + 1) z^6 \\ & & + \; y^3 z^7 + y^3 (2 y + 1) z^8 + 2 y^3 (y + 1) z^9 + y^4 (3 y + 1) z^{10} \\ & & + \; 2 y^4 (y + 1) z^{11} + 2 y^4 (2 y^2 + y + 1) z^{12} + y^5 (3 y + 2) z^{13} \\ & & + \; 2 y^5 (2 y^2 + y + 1) z^{14} + y^5 (4 y^2 + 4 y + 3) z^{15} \\ & & + \; etc.\end{aligned}$$ We can easily plot the coefficients here onto the grid $$\begin{array}{c|ccccccccc} \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ 15 & & & & & & 3 & 4 & 4 & \cdots \\ 14 & & & & & & 2 & 2 & 4 & \cdots \\ 13 & & & & & & 2 & 3 & & \cdots \\ 12 & & & & & 2 & 2 & 4 & & \cdots \\ 11 & & & & & 2 & 2 & & & \cdots \\ 10
& & & & & 1 & 3 & & & \cdots \\ 9 & & & & 2 & 2 & & & & \cdots \\ 8 & & & & 1 & 2 & & & & \cdots \\ 7 & & & & 1 & & & & & \cdots \\ 6 & & & 1 & 2 & & & & & \cdots \\ 5 & & & 1 & & & & & & \cdots \\ 4 & & & 1 & & & & & & \cdots \\ 3 & & 1 & & & & & & & \cdots \\ 2 & & 1 & & & & & & & \cdots \\ 1 & & & & & & & & & \cdots \\ 0 & 1 & & & & & & & & \cdots \\ \hline z/y& 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & \cdots \end{array}$$ **Example interpretations reading from this grid.** 1. The number of partitions of vector $\langle 7,15 \rangle$ using distinct parts from $S_1$ and $S_2$ is 4. 2. The number of partitions of vector $\langle 5,10 \rangle$ using distinct parts from $S_1$ and $S_2$ is 3. 3. The number of partitions of vector $\langle 4,9 \rangle$ using distinct parts from $S_1$ and $S_2$ is 2. 4. The number of partitions of vector $\langle 4,7 \rangle$ using distinct parts from $S_1$ and $S_2$ is 0. # Deriving 2D VPV identities in extended triangle regions {#S:2D VPV hyperpyramids} In this section we derive the $2D$ Visible Point Vector identities by creating them from a simple summation transformation, based on the idea that each integer lattice point in the first quadrant has co-ordinates that are either a coprime integer pair, namely a "lattice point visible from the origin", or an integer multiple of such a coprime pair. As we did in the hyperquadrant paper, we again start with a simple $2D$ summation.
Consider $$\nonumber \sum_{n=1}^{\infty} \left( \sum_{m=1}^{n} \frac{y^m}{m^a} \right) \frac{z^n}{n^b}$$ $$\nonumber =\left(\frac{y^1}{1^a}\right)\frac{z^1}{1^b} +\left(\frac{y^1}{1^a}+\frac{y^2}{2^a}\right)\frac{z^2}{2^b} +\left(\frac{y^1}{1^a}+\frac{y^2}{2^a}+\frac{y^3}{3^a}\right)\frac{z^3}{3^b} +\left(\frac{y^1}{1^a}+\frac{y^2}{2^a}+\frac{y^3}{3^a}+\frac{y^4}{4^a}\right)\frac{z^4}{4^b}$$ $$\nonumber +\left(\frac{y^1}{1^a}+\frac{y^2}{2^a}+\frac{y^3}{3^a}+\frac{y^4}{4^a}+\frac{y^5}{5^a}\right)\frac{z^5}{5^b} +\left(\frac{y^1}{1^a}+\frac{y^2}{2^a}+\frac{y^3}{3^a}+\frac{y^4}{4^a}+\frac{y^5}{5^a}+\frac{y^6}{6^a}\right)\frac{z^6}{6^b}+\cdots$$ $$\nonumber =\frac{y^1 z^1}{1^a 1^b}$$ $$\nonumber +\frac{y^1 z^2}{1^a 2^b}+\frac{y^2 z^2}{2^a 2^b}$$ $$\nonumber +\frac{y^1 z^3}{1^a 3^b}+\frac{y^2 z^3}{2^a 3^b}+\frac{y^3 z^3}{3^a 3^b}$$ $$\nonumber +\frac{y^1 z^4}{1^a 4^b}+\frac{y^2 z^4}{2^a 4^b}+\frac{y^3 z^4}{3^a 4^b}+\frac{y^4 z^4}{4^a 4^b}$$ $$\nonumber +\frac{y^1 z^5}{1^a 5^b}+\frac{y^2 z^5}{2^a 5^b}+\frac{y^3 z^5}{3^a 5^b}+\frac{y^4 z^5}{4^a 5^b}+\frac{y^5 z^5}{5^a 5^b}$$ $$\nonumber +\frac{y^1 z^6}{1^a 6^b}+\frac{y^2 z^6}{2^a 6^b}+\frac{y^3 z^6}{3^a 6^b}+\frac{y^4 z^6}{4^a 6^b}+\frac{y^5 z^6}{5^a 6^b}+\frac{y^6 z^6}{6^a 6^b}$$ $$\nonumber +\frac{y^1 z^7}{1^a 7^b}+\frac{y^2 z^7}{2^a 7^b}+\frac{y^3 z^7}{3^a 7^b}+\frac{y^4 z^7}{4^a 7^b}+\frac{y^5 z^7}{5^a 7^b}+\frac{y^6 z^7}{6^a 7^b}+\frac{y^7 z^7}{7^a 7^b}$$ $$\nonumber + \quad \vdots \quad + \quad \vdots \quad + \quad \vdots \quad + \quad \vdots \quad + \quad \vdots \quad + \quad \vdots \quad + \quad \vdots \quad \ddots$$ $$\nonumber = \sum_{m,n \geq 1; m \leq n}^{\infty} \frac{y^m z^n}{m^a n^b}$$ $$\nonumber = \sum_{\substack{ h,j,k \geq 1 \\ j \leq k ; \, (j,k)=1}} \frac{(y^j z^k)^h}{h^{a+b} (j^a k^b)}$$ $$\nonumber = \sum_{\substack{ j,k \geq 1 \\ j \leq k ; \, (j,k)=1}} \frac{1}{(j^a k^b)} \sum_{h=1}^{\infty} \frac{(y^j z^k)^h}{h^{a+b}}$$ $$\nonumber = \sum_{\substack{ j,k \geq 1 \\ j \leq k ; \, (j,k)=1}} 
\frac{1}{(j^a k^b)} \log \left( \frac{1}{1 - y^j z^k} \right) \quad if \quad a+b=1.$$ Therefore, we have shown that $$\nonumber \sum_{n=1}^{\infty} \left( \sum_{m=1}^{n} \frac{y^m}{m^a} \right) \frac{z^n}{n^b} = \sum_{\substack{ j,k \geq 1 \\ j \leq k ; \, (j,k)=1}} \frac{1}{(j^a k^b)} \log \left( \frac{1}{1 - y^j z^k} \right) \quad if \quad a+b=1.$$ Exponentiating both sides gives us the $2D$ first extended triangle VPV identity, where in this $2D$ case the $nD$ pyramid reduces to the form of a triangle shaped array of lattice point vectors, and so we can state the **Theorem 1**. ***The $2D$ first quadrant triangle VPV identity.** For $|y|<1, |z|<1,$ $$\label{21.01} \prod_{\substack{ j,k \geq 1 \\ j \leq k ; \, (j,k)=1}} \left( \frac{1}{1-y^j z^k} \right)^{\frac{1}{j^a k^b}} = \exp\left\{ \sum_{n=1}^{\infty} \left( \sum_{m=1}^{n} \frac{y^m}{m^a} \right) \frac{z^n}{n^b} \right\} \quad if \quad a+b=1.$$* As with our earlier exploits into the $2D$ first quadrant case, for the present result we take some simple example cases where new and interesting results arise. 
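The theorem lends itself to a quick numerical spot-check. The sketch below is our own; the particular pair $(a,b)$ with $a+b=1$, the sample point $(y,z)$, the truncation order $N$ and the tolerance are all arbitrary choices.

```python
from math import gcd, exp

# Arbitrary sample values (our choice): any a + b = 1 and small |y|, |z|.
a, b = 0.25, 0.75
y, z, N = 0.3, 0.2, 40   # N = truncation order

# Right side: exp of the truncated double sum.
rhs = exp(sum(sum(y**m / m**a for m in range(1, n + 1)) * z**n / n**b
              for n in range(1, N + 1)))

# Left side: truncated product over visible points j <= k with gcd(j, k) = 1.
lhs = 1.0
for k in range(1, N + 1):
    for j in range(1, k + 1):
        if gcd(j, k) == 1:
            lhs *= (1.0 - y**j * z**k) ** (-1.0 / (j**a * k**b))

assert abs(lhs - rhs) < 1e-10
```

Both truncations miss only terms of order $z^{N+1}$, so the agreement is far tighter than the tolerance used here.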
So, let us take the case where $a=0, b=1$, giving us $$\nonumber \prod_{\substack{ j,k \geq 1 \\ j \leq k ; \, (j,k)=1}} \left( \frac{1}{1-y^j z^k} \right)^{\frac{1}{k}} = \exp\left\{ \sum_{n=1}^{\infty} \left( \sum_{m=1}^{n} y^m \right) \frac{z^n}{n} \right\}$$ $$\nonumber = \exp\left\{ \sum_{n=1}^{\infty} \left( y \frac{1-y^n}{1-y} \right) \frac{z^n}{n} \right\} = \exp\left\{ \frac{y}{1-y} \log \left( \frac{1-yz}{1-z} \right) \right\}.$$ So, we arrive then at the following pair of equivalent results, $$\label{21.02} \prod_{\substack{ j,k \geq 1 \\ j \leq k ; \, (j,k)=1}} \left( \frac{1}{1-y^j z^k} \right)^{\frac{1}{k}} = \left( \frac{1-yz}{1-z} \right)^{\frac{y}{1-y}} ,$$ and $$\label{21.03} \prod_{\substack{ j,k \geq 1 \\ j \leq k ; \, (j,k)=1}} \left( 1-y^j z^k \right)^{\frac{1}{k}} = \left( \frac{1-z}{1-yz} \right)^{\frac{y}{1-y}} .$$ From here, multiply both sides of ([\[21.02\]](#21.02){reference-type="ref" reference="21.02"}) by the case of ([\[21.03\]](#21.03){reference-type="ref" reference="21.03"}) with $y \mapsto y^2$ and $z \mapsto z^2$ to get $$\label{21.04} \prod_{\substack{ j,k \geq 1; \, j \leq k \\ gcd(j,k)=1}} \left( 1+y^j z^k \right)^{\frac{1}{k}} = \left( \frac{1-yz}{1-z} \right)^{\frac{y}{1-y}} \left( \frac{1-z^2}{1-y^2z^2} \right)^{\frac{y^2}{1-y^2}} .$$ Obviously, multiplying both sides of ([\[21.03\]](#21.03){reference-type="ref" reference="21.03"}) and ([\[21.04\]](#21.04){reference-type="ref" reference="21.04"}) together gives back ([\[21.03\]](#21.03){reference-type="ref" reference="21.03"}) with $y \mapsto y^2$ and $z \mapsto z^2$.
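Equation ([\[21.04\]](#21.04){reference-type="ref" reference="21.04"}) can be spot-checked the same way; in the sketch below (our own; sample point, truncation and tolerance are arbitrary choices) the truncated product is compared against the closed form.

```python
from math import gcd

# Sample point and truncation are our own choices.
y, z, N = 0.4, 0.3, 60

# Left side of (21.04), truncated at k <= N.
lhs = 1.0
for k in range(1, N + 1):
    for j in range(1, k + 1):
        if gcd(j, k) == 1:
            lhs *= (1.0 + y**j * z**k) ** (1.0 / k)

# Closed form on the right side of (21.04).
rhs = (((1 - y * z) / (1 - z)) ** (y / (1 - y))
       * ((1 - z**2) / (1 - y**2 * z**2)) ** (y**2 / (1 - y**2)))

assert abs(lhs - rhs) < 1e-10
```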
Particular cases: $y = \frac{1}{2}$ gives us from ([\[21.03\]](#21.03){reference-type="ref" reference="21.03"}) and ([\[21.04\]](#21.04){reference-type="ref" reference="21.04"}) the two remarkable results, $$\nonumber \prod_{\substack{ j,k \geq 1; \, j \leq k \\ gcd(j,k)=1}} \left( 1- \frac{z^k}{2^j} \right)^{\frac{1}{k}} = \frac{2-2z}{2-z} = 1 - \frac{z}{2} - \frac{z^2}{4} - \frac{z^3}{8} - \frac{z^4}{16} - \frac{z^5}{32} - \ldots$$ $$\nonumber = \left( 1- \frac{z}{2} \right)$$ $$\nonumber \sqrt{\left( 1- \frac{z^2}{2^1} \right)}$$ $$\nonumber \sqrt[3]{\left( 1- \frac{z^3}{2^1} \right)\left( 1- \frac{z^3}{2^2} \right)}$$ $$\nonumber \sqrt[4]{\left( 1- \frac{z^4}{2^1} \right)\left( 1- \frac{z^4}{2^3} \right)}$$ $$\nonumber \sqrt[5]{\left( 1- \frac{z^5}{2^1} \right)\left( 1- \frac{z^5}{2^2} \right)\left( 1- \frac{z^5}{2^3} \right)\left( 1- \frac{z^5}{2^4} \right)}$$ $$\nonumber \sqrt[6]{\left( 1- \frac{z^6}{2^1} \right)\left( 1- \frac{z^6}{2^5} \right)}$$ $$\nonumber \vdots \, ,$$ $$\nonumber \prod_{\substack{ j,k \geq 1; \, j \leq k \\ gcd(j,k)=1}} \left( 1+ \frac{z^k}{2^j} \right)^{\frac{1}{k}} = \frac{1-\frac{z}{2}}{1-z}\sqrt[3]{\frac{1-z^2}{1-\frac{z^2}{4}}} = 1 +\frac{z}{2} +\frac{z^2}{4} +\frac{3z^3}{8} +\frac{z^4}{4} +\frac{5z^5}{16} + \ldots$$ $$\nonumber = \left( 1+ \frac{z}{2} \right)$$ $$\nonumber \sqrt{\left( 1+ \frac{z^2}{2^1} \right)}$$ $$\nonumber \sqrt[3]{\left( 1+ \frac{z^3}{2^1} \right)\left( 1+ \frac{z^3}{2^2} \right)}$$ $$\nonumber \sqrt[4]{\left( 1+ \frac{z^4}{2^1} \right)\left( 1+ \frac{z^4}{2^3} \right)}$$ $$\nonumber \sqrt[5]{\left( 1+ \frac{z^5}{2^1} \right)\left( 1+ \frac{z^5}{2^2} \right)\left( 1+ \frac{z^5}{2^3} \right)\left( 1+ \frac{z^5}{2^4} \right)}$$ $$\nonumber \sqrt[6]{\left( 1+ \frac{z^6}{2^1} \right)\left( 1+ \frac{z^6}{2^5} \right)}$$ $$\nonumber \vdots .$$ These two equations can be easily verified on a calculating engine like Mathematica or WolframAlpha by expanding each side into its Taylor series around $z=0$ and comparing
coefficients of like powers of $z$. Next, take the cases of ([\[21.03\]](#21.03){reference-type="ref" reference="21.03"}) and ([\[21.04\]](#21.04){reference-type="ref" reference="21.04"}) with $y=2$, both of which converge if $|z|<\frac{1}{2}$; then, after a slight adjustment to both sides, we have $$\nonumber \prod_{\substack{ j,k \geq 1; \, j < k \\ gcd(j,k)=1}} \left( 1- 2^j z^k \right)^{\frac{1}{k}} = 1- \frac{z^2}{(1-z)^2} = 1 - z^2 - 2z^3 - 3z^4 - 4z^5 - \ldots - n z^{n+1} - \ldots$$ $$\nonumber = \sqrt{\left( 1- 2^1 z^2 \right)}$$ $$\nonumber \sqrt[3]{\left( 1- 2^1 z^3 \right)\left( 1- 2^2 z^3 \right)}$$ $$\nonumber \sqrt[4]{\left( 1- 2^1 z^4 \right)\left( 1- 2^3 z^4 \right)}$$ $$\nonumber \sqrt[5]{\left( 1- 2^1 z^5 \right)\left( 1- 2^2 z^5 \right)\left( 1- 2^3 z^5 \right)\left( 1- 2^4 z^5 \right)}$$ $$\nonumber \sqrt[6]{\left( 1- 2^1 z^6 \right)\left( 1- 2^5 z^6 \right)}$$ $$\nonumber \sqrt[7]{\left( 1- 2^1 z^7 \right)\left( 1- 2^2 z^7 \right)\left( 1- 2^3 z^7 \right)\left( 1- 2^4 z^7 \right) \left( 1- 2^5 z^7 \right)\left( 1- 2^6 z^7 \right)}$$ $$\nonumber \vdots \, ,$$ which is also easy to verify on a calculating engine term by term from the power series of each side. The notably simple coefficients make this result somewhat tantalizing, as there seems no obvious reason for such coefficients to come out of the products of binomial series roots. We remark at this juncture that equations ([\[21.03\]](#21.03){reference-type="ref" reference="21.03"}) and ([\[21.04\]](#21.04){reference-type="ref" reference="21.04"}) are amenable to taking the limit as $y \rightarrow 1$.
In fact we have as follows that $$\nonumber \lim_{y \rightarrow 1} \left( \frac{1 - z}{1 - y z}\right)^{\frac{y}{1 - y}} = e^{\frac{z}{z-1}}$$ and also from considering equation ([\[21.04\]](#21.04){reference-type="ref" reference="21.04"}) there is the limit, easily evaluated, $$\nonumber \lim_{y \rightarrow 1} \left( \frac{1 - yz}{1 - z}\right)^{\frac{y}{1 - y}} \left( \frac{1 - z^2}{1 - y^2 z^2}\right)^{\frac{y^2}{1 - y^2}} = e^{\frac{z}{1-z^2}}.$$ Therefore, applying these two limits to equations ([\[21.03\]](#21.03){reference-type="ref" reference="21.03"}) and ([\[21.04\]](#21.04){reference-type="ref" reference="21.04"}) respectively, we obtain the two interesting results ([\[21.05\]](#21.05){reference-type="ref" reference="21.05"}) and ([\[21.06\]](#21.06){reference-type="ref" reference="21.06"}) given here. $$\label{21.05} \prod_{k=1}^{\infty} \left( 1- z^k \right)^{\frac{\varphi(k)}{k}} = e^{\frac{z}{z-1}} = \sum_{k=0}^{\infty} \frac{\alpha(k)z^k}{k!}$$ $$\nonumber = 1 - \frac{z}{1!} - \frac{z^2}{2!} - \frac{z^3}{3!} + \frac{z^4}{4!} + \frac{19 z^5}{5!} + \frac{151 z^6}{6!} + \frac{1091 z^7}{7!}$$ $$\nonumber + \frac{7841 z^8}{8!} + \frac{56519 z^9}{9!} + \frac{396271 z^{10}}{10!} + O(z^{11}),$$ where $\varphi(k)$ is the Euler totient function, the number of positive integers not exceeding $k$ that are coprime to $k$. ([\[21.05\]](#21.05){reference-type="ref" reference="21.05"}) demonstrates that the sequence $\alpha(k)$ has the exponential generating function $e^{\frac{z}{z-1}}$.
The first 31 coefficients generated by the series are, $$\tiny{ \begin{array}{c|c} \hline \mathbf{n} & \mathbf{\alpha(n) \; from \; (\ref{21.05})} \\ \hline 0 & 1 \\ 1 & -1 \\ 2 & -1 \\ 3 & -1 \\ 4 & 1 \\ 5 & 19 \\ 6 & 151 \\ 7 & 1091 \\ 8 & 7841 \\ 9 & 56519 \\ 10 & 396271 \\ 11 & 2442439 \\ 12 & 7701409 \\ 13 & -145269541 \\ 14 & -4833158329 \\ 15 & -104056218421 \\ 16 & -2002667085119 \\ 17 & -37109187217649 \\ 18 & -679877731030049 \\ 19 & -12440309297451121 \\ 20 & -227773259993414719 \\ 21 & -4155839606711748061 \\ 22 & -74724654677947488521 \\ 23 & -1293162252850914402221 \\ 24 & -20381626111249718908319 \\ 25 & -244110863655032038665001 \\ 26 & 267543347653261450406351 \\ 27 & 172316772106087159102974551 \\ 28 & 8944973491570029894272392801 \\ 29 & 361702062324149751903132843499 \\ 30 & 13353699077321671584329389125031 \\ \hline \end{array} }$$ Amazingly, $\gcd(\alpha(k),k!)=1$ for all values of $k$ up to 34, and mostly beyond that; also $\alpha(k) \equiv 1 \; or \; 9 \; (mod \; 10)$, and the recurrence relation $$\alpha(n)+(n-1)(n-2) \, \alpha(n-2)=(2n-3) \, \alpha(n-1)$$ holds (see OEIS sequence A293116 [@nS2023]). This recurrence relation allows us to write continued fractions for the ratios $\alpha(n+1)/\alpha(n)$. $$\label{21.06} \prod_{k=1}^{\infty} \left( 1+ z^k \right)^{\frac{\varphi(k)}{k}} = e^{\frac{z}{1-z^2}} = \sum_{k=0}^{\infty} \frac{\beta(k)z^k}{k!}$$ $$\nonumber = 1 + \frac{z}{1!} + \frac{z^2}{2!} + \frac{7 z^3}{3!} + \frac{25 z^4}{4!} + \frac{181 z^5}{5!} + \frac{1201 z^6}{6!}$$ $$\nonumber + \frac{10291 z^7}{7!} + \frac{97777 z^8}{8!} + \frac{1013545 z^9}{9!} + O(z^{10}),$$ where again, $\varphi(k)$ is the Euler totient function.
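The coefficients $\alpha(n)$ and the stated recurrence are easy to regenerate from the exponential generating function $e^{\frac{z}{z-1}}$ with exact rational arithmetic. A short sketch of our own follows; it uses the standard recurrence for the exponential of a power series, obtained by comparing coefficients in $f'(z)=g'(z)f(z)$.

```python
from fractions import Fraction
from math import factorial

M = 12
# f(z) = exp(g(z)) with g(z) = z/(z-1) = -(z + z^2 + z^3 + ...), so every
# g_k = -1.  Comparing coefficients in f' = g' f gives
#   f_n = (1/n) * sum_{k=1}^{n} k * g_k * f_{n-k}.
f = [Fraction(1)] + [Fraction(0)] * M
for n in range(1, M + 1):
    f[n] = sum(-Fraction(k) * f[n - k] for k in range(1, n + 1)) / n

alpha = [int(f[n] * factorial(n)) for n in range(M + 1)]
assert alpha[:8] == [1, -1, -1, -1, 1, 19, 151, 1091]

# The recurrence alpha(n) + (n-1)(n-2) alpha(n-2) = (2n-3) alpha(n-1)
for n in range(2, M + 1):
    assert alpha[n] + (n - 1) * (n - 2) * alpha[n - 2] == (2 * n - 3) * alpha[n - 1]
```

Raising the cutoff `M` reproduces the full table above; the same routine with $g(z)=z/(1-z^2)$ generates the $\beta(k)$ of ([\[21.06\]](#21.06){reference-type="ref" reference="21.06"}).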
Next we take ([\[21.01\]](#21.01){reference-type="ref" reference="21.01"}) with the case that $a=1$ and $b=0$, so then $$\nonumber \prod_{\substack{ j,k \geq 1 \\ j \leq k ; \, (j,k)=1}} \left( \frac{1}{1-y^j z^k} \right)^{\frac{1}{j}} = \exp\left\{ \sum_{n=1}^{\infty} \left( \sum_{m=1}^{n} \frac{y^m}{m} \right) z^n \right\}$$ $$\nonumber = \exp\left\{ \frac{1}{1-z} \sum_{n=1}^{\infty} \frac{y^n z^n}{n} \right\} = \exp\left\{ \frac{1}{1-z} \log \left( \frac{1}{1-yz} \right) \right\}.$$ This leads us to establish that $$\label{21.07} \prod_{\substack{ j,k \geq 1 \\ j \leq k ; \, (j,k)=1}} \left( \frac{1}{1-y^j z^k} \right)^{\frac{1}{j}} = \left( \frac{1}{1-yz} \right)^{\frac{1}{1-z}} ,$$ which is equivalent to $$\label{21.08} \prod_{\substack{ j,k \geq 1 \\ j \leq k ; \, (j,k)=1}} \left( 1-y^j z^k \right)^{\frac{1}{j}} = \left( 1-yz \right)^{\frac{1}{1-z}} .$$ From multiplying both sides of ([\[21.07\]](#21.07){reference-type="ref" reference="21.07"}) in which $y \mapsto y^2$ and $z \mapsto z^2$ with both sides of ([\[21.08\]](#21.08){reference-type="ref" reference="21.08"}) we obtain $$\label{21.09} \prod_{\substack{ j,k \geq 1 \\ j \leq k ; \, (j,k)=1}} \left( 1+y^j z^k \right)^{\frac{1}{j}} = \frac{\left( 1-y^2z^2 \right)^{\frac{1}{1-z^2}}}{\left( 1-yz \right)^{\frac{1}{1-z}}} .$$ Particular cases: $z = \frac{1}{2}$ gives us from ([\[21.08\]](#21.08){reference-type="ref" reference="21.08"}) and ([\[21.09\]](#21.09){reference-type="ref" reference="21.09"}) the remarkable result that $$\nonumber \prod_{\substack{ j,k \geq 1; \, j \leq k \\ gcd(j,k)=1}} \left( 1- \frac{y^j}{2^k} \right)^{\frac{1}{j}} = \left( 1- \frac{y}{2} \right)^2 = 1 - y + \frac{y^2}{4}$$ $$\nonumber = \left( 1- \frac{y^1}{2^1} \right)$$ $$\nonumber \left( 1- \frac{y^1}{2^2} \right)$$ $$\nonumber \left( 1- \frac{y^1}{2^3} \right)\sqrt{\left( 1- \frac{y^2}{2^3} \right)}$$ $$\nonumber \left( 1- \frac{y^1}{2^4} \right)\sqrt[3]{\left( 1- \frac{y^3}{2^4} \right)}$$ $$\nonumber \left( 1-
\frac{y^1}{2^5} \right)\sqrt{\left( 1- \frac{y^2}{2^5} \right)}\sqrt[3]{\left( 1- \frac{y^3}{2^5} \right)}\sqrt[4]{\left( 1- \frac{y^4}{2^5} \right)}$$ $$\nonumber \left( 1- \frac{y^1}{2^6} \right)\sqrt[5]{\left( 1- \frac{y^5}{2^6} \right)}$$ $$\nonumber \vdots \, ,$$ and the result, $$\nonumber \prod_{\substack{ j,k \geq 1; \, j \leq k \\ gcd(j,k)=1}} \left( 1+ \frac{y^j}{2^k} \right)^{\frac{1}{j}} = \frac{\sqrt[3]{(4-y^2)^4}}{\sqrt[3]{4}(2-y)^2} = 1 + y + \frac{5 y^2}{12} + \frac{y^3}{6} + \frac{11 y^4}{144} + \frac{5 y^5}{144} + O(y^6)$$ $$\nonumber = \left( 1+ \frac{y^1}{2^1} \right)$$ $$\nonumber \left( 1+ \frac{y^1}{2^2} \right)$$ $$\nonumber \left( 1+ \frac{y^1}{2^3} \right)\sqrt{\left( 1+ \frac{y^2}{2^3} \right)}$$ $$\nonumber \left( 1+ \frac{y^1}{2^4} \right)\sqrt[3]{\left( 1+ \frac{y^3}{2^4} \right)}$$ $$\nonumber \left( 1+ \frac{y^1}{2^5} \right)\sqrt{\left( 1+ \frac{y^2}{2^5} \right)}\sqrt[3]{\left( 1+ \frac{y^3}{2^5} \right)}\sqrt[4]{\left( 1+ \frac{y^4}{2^5} \right)}$$ $$\nonumber \left( 1+ \frac{y^1}{2^6} \right)\sqrt[5]{\left( 1+ \frac{y^5}{2^6} \right)}$$ $$\nonumber \vdots \, .$$ These two equations can be verified on a calculating engine like Mathematica or WolframAlpha by expanding each side into its Taylor series around $y=0$ and comparing coefficients of like powers of $y$. However, the calculation is an infinite series for each coefficient, unlike in the previous examples, where it is a finite sum. # Deriving 3D VPV identities in square pyramid regions {#S:3D VPV hyperpyramids} We start by considering the infinite inverted pyramid with square layered arrays of lattice point vectors as per the following diagram, with VPVs bolded.
$$\label{21.09a} \tiny{ \begin{array}{ccccccccccc} & & & & & \mathbf{\langle 1,6,6 \rangle} & & & & & \\ & & & & \mathbf{\langle 1,5,6 \rangle} & & \langle 2,6,6 \rangle & & & & \\ & & & \mathbf{\langle 1,4,6 \rangle} & & \mathbf{\langle 2,5,6 \rangle} & & \langle 3,6,6 \rangle & & & \\ & & \mathbf{\langle 1,3,6 \rangle} & & \langle 2,4,6 \rangle & & \mathbf{\langle 3,5,6 \rangle} & & \langle 4,6,6 \rangle & & \\ & \mathbf{\langle 1,2,6 \rangle} & & \mathbf{\langle 2,3,6 \rangle} & & \mathbf{\langle 3,4,6 \rangle} & & \mathbf{\langle 4,5,6 \rangle} & & \mathbf{\langle 5,6,6 \rangle} & \\ \mathbf{\langle 1,1,6 \rangle} & & \langle 2,2,6 \rangle & & \langle 3,3,6 \rangle & & \langle 4,4,6 \rangle & & \mathbf{\langle 5,5,6 \rangle} & & \langle 6,6,6 \rangle \\ & \mathbf{\langle 2,1,6 \rangle} & & \mathbf{\langle 3,2,6 \rangle} & & \mathbf{\langle 4,3,6 \rangle} & & \mathbf{\langle 5,4,6 \rangle} & & \mathbf{\langle 6,5,6 \rangle} & \\ & & \mathbf{\langle 3,1,6 \rangle} & & \langle 4,2,6 \rangle & & \mathbf{\langle 5,3,6 \rangle} & & \langle 6,4,6 \rangle & & \\ & & & \mathbf{\langle 4,1,6 \rangle} & & \mathbf{\langle 5,2,6 \rangle} & & \langle 6,3,6 \rangle & & & \\ & & & & \mathbf{\langle 5,1,6 \rangle} & & \langle 6,2,6 \rangle & & & & \\ & & & & & \mathbf{\langle 6,1,6 \rangle} & & & & & \\ & & & & & & & & & & \\ & & & & \mathbf{\langle 1,5,5 \rangle} & & & & & & \\ & & & \mathbf{\langle 1,4,5 \rangle} & & \mathbf{\langle 2,5,5 \rangle} & & & & & \\ & & \mathbf{\langle 1,3,5 \rangle} & & \mathbf{\langle 2,4,5 \rangle} & & \mathbf{\langle 3,5,5 \rangle} & & & & \\ & \mathbf{\langle 1,2,5 \rangle} & & \mathbf{\langle 2,3,5 \rangle} & & \mathbf{\langle 3,4,5 \rangle} & & \mathbf{\langle 4,5,5 \rangle} & & & \\ \mathbf{\langle 1,1,5 \rangle} & & \mathbf{\langle 2,2,5 \rangle} & & \mathbf{\langle 3,3,5 \rangle} & & \mathbf{\langle 4,4,5 \rangle} & & \langle 5,5,5 \rangle & & \\ & \mathbf{\langle 2,1,5 \rangle} & & \mathbf{\langle 3,2,5 \rangle} & & \mathbf{\langle 4,3,5 
\rangle} & & \mathbf{\langle 5,4,5 \rangle} & & & \\ & & \mathbf{\langle 3,1,5 \rangle} & & \mathbf{\langle 4,2,5 \rangle} & & \mathbf{\langle 5,3,5 \rangle} & & & & \\ & & & \mathbf{\langle 4,1,5 \rangle} & & \mathbf{\langle 5,2,5 \rangle} & & & & & \\ & & & & \mathbf{\langle 5,1,5 \rangle} & & & & & & \\ & & & & & & & & & & \\ & & & \mathbf{\langle 1,4,4 \rangle} & & & & & & & \\ & & \mathbf{\langle 1,3,4 \rangle} & & \langle 2,4,4 \rangle & & & & & & \\ & \mathbf{\langle 1,2,4 \rangle} & & \mathbf{\langle 2,3,4 \rangle} & & \mathbf{\langle 3,4,4 \rangle} & & & & & \\ \mathbf{\langle 1,1,4 \rangle} & & \langle 2,2,4 \rangle & & \mathbf{\langle 3,3,4 \rangle} & & \langle 4,4,4 \rangle & & & & \\ & \mathbf{\langle 2,1,4 \rangle} & & \mathbf{\langle 3,2,4 \rangle} & & \mathbf{\langle 4,3,4 \rangle} & & & & & \\ & & \mathbf{\langle 3,1,4 \rangle} & & \langle 4,2,4 \rangle & & & & & & \\ & & & \mathbf{\langle 4,1,4 \rangle} & & & & & & & \\ & & & & & & & & & & \\ & & \mathbf{\langle 1,3,3 \rangle} & & & & & & & & \\ & \mathbf{\langle 1,2,3 \rangle} & & \mathbf{\langle 2,3,3 \rangle} & & & & & & & \\ \mathbf{\langle 1,1,3 \rangle} & & \mathbf{\langle 2,2,3 \rangle} & & \langle 3,3,3 \rangle & & & & & & \\ & \mathbf{\langle 2,1,3 \rangle} & & \mathbf{\langle 3,2,3 \rangle} & & & & & & & \\ & & \mathbf{\langle 3,1,3 \rangle} & & & & & & & & \\ & & & & & & & & & & \\ & \mathbf{\langle 1,2,2 \rangle} & & & & & & & & & \\ \mathbf{\langle 1,1,2 \rangle} & & \langle 2,2,2 \rangle & & & & & & & & \\ & \mathbf{\langle 2,1,2 \rangle} & & & & & & & & & \\ & & & & & & & & & & \\ \mathbf{\langle 1,1,1 \rangle} & & & & & & & & & & \end{array} }$$ From this we create a $3D$ summation over integer co-ordinates in the above lattice point vectors.
We consider the sum, $$\nonumber \sum_{n=1}^{\infty} \left( \sum_{l=1}^{n} \frac{x^l}{l^a} \right) \left( \sum_{m=1}^{n} \frac{y^m}{m^b} \right) \frac{z^n}{n^c}$$ $$\nonumber =\left(\frac{x^1}{1^a}\right)\left(\frac{y^1}{1^b}\right)\frac{z^1}{1^c}$$ $$\nonumber +\left(\frac{x^1}{1^a}+\frac{x^2}{2^a}\right)\left(\frac{y^1}{1^b}+\frac{y^2}{2^b}\right)\frac{z^2}{2^c}$$ $$\nonumber +\left(\frac{x^1}{1^a}+\frac{x^2}{2^a}+\frac{x^3}{3^a}\right)\left(\frac{y^1}{1^b}+\frac{y^2}{2^b}+\frac{y^3}{3^b}\right)\frac{z^3}{3^c}$$ $$\nonumber +\left(\frac{x^1}{1^a}+\frac{x^2}{2^a}+\frac{x^3}{3^a}+\frac{x^4}{4^a}\right) \left(\frac{y^1}{1^b}+\frac{y^2}{2^b}+\frac{y^3}{3^b}+\frac{y^4}{4^b}\right)\frac{z^4}{4^c}$$ $$\nonumber +\left(\frac{x^1}{1^a}+\frac{x^2}{2^a}+\frac{x^3}{3^a}+\frac{x^4}{4^a}+\frac{x^5}{5^a}\right) \left(\frac{y^1}{1^b}+\frac{y^2}{2^b}+\frac{y^3}{3^b}+\frac{y^4}{4^b}+\frac{y^5}{5^b}\right)\frac{z^5}{5^c}+\cdots$$ $$\nonumber =\frac{x^1 y^1 z^1}{1^a 1^b 1^c}$$ $$\nonumber +\frac{x^1 y^1 z^2}{1^a 1^b 2^c}+\frac{x^1 y^2 z^2}{1^a 2^b 2^c}$$ $$\nonumber +\frac{x^2 y^1 z^2}{2^a 1^b 2^c}+\frac{x^2 y^2 z^2}{2^a 2^b 2^c}$$ $$\nonumber +\frac{x^1y^1 z^3}{1^a 1^b 3^c}+\frac{x^1 y^2 z^3}{1^a 2^b 3^c}+\frac{x^1 y^3 z^3}{1^a 3^b 3^c}$$ $$\nonumber +\frac{x^2y^1 z^3}{2^a 1^b 3^c}+\frac{x^2 y^2 z^3}{2^a 2^b 3^c}+\frac{x^2 y^3 z^3}{2^a 3^b 3^c}$$ $$\nonumber +\frac{x^3y^1 z^3}{3^a 1^b 3^c}+\frac{x^3 y^2 z^3}{3^a 2^b 3^c}+\frac{x^3 y^3 z^3}{3^a 3^b 3^c}$$ $$\nonumber +\frac{x^1 y^1 z^4}{1^a 1^b 4^c}+\frac{x^1 y^2 z^4}{1^a 2^b 4^c}+\frac{x^1 y^3 z^4}{1^a 3^b 4^c}+\frac{x^1 y^4 z^4}{1^a 4^b 4^c}$$ $$\nonumber +\frac{x^2 y^1 z^4}{2^a 1^b 4^c}+\frac{x^2 y^2 z^4}{2^a 2^b 4^c}+\frac{x^2 y^3 z^4}{2^a 3^b 4^c}+\frac{x^2 y^4 z^4}{2^a 4^b 4^c}$$ $$\nonumber +\frac{x^3 y^1 z^4}{3^a 1^b 4^c}+\frac{x^3 y^2 z^4}{3^a 2^b 4^c}+\frac{x^3 y^3 z^4}{3^a 3^b 4^c}+\frac{x^3 y^4 z^4}{3^a 4^b 4^c}$$ $$\nonumber +\frac{x^4 y^1 z^4}{4^a 1^b 4^c}+\frac{x^4 y^2 z^4}{4^a 2^b 4^c}+\frac{x^4 y^3 z^4}{4^a 3^b 
4^c}+\frac{x^4 y^4 z^4}{4^a 4^b 4^c}$$ $$\nonumber +\frac{x^1y^1z^5}{1^a1^b5^c}+\frac{x^1y^2z^5}{1^a2^b5^c}+\frac{x^1y^3z^5}{1^a3^b5^c}+\frac{x^1y^4z^5}{1^a4^b5^c}+\frac{x^1y^5z^5}{1^a5^b5^c}$$ $$\nonumber +\frac{x^2y^1z^5}{2^a1^b5^c}+\frac{x^2y^2z^5}{2^a2^b5^c}+\frac{x^2y^3z^5}{2^a3^b5^c}+\frac{x^2y^4z^5}{2^a4^b5^c}+\frac{x^2y^5z^5}{2^a5^b5^c}$$ $$\nonumber +\frac{x^3y^1z^5}{3^a1^b5^c}+\frac{x^3y^2z^5}{3^a2^b5^c}+\frac{x^3y^3z^5}{3^a3^b5^c}+\frac{x^3y^4z^5}{3^a4^b5^c}+\frac{x^3y^5z^5}{3^a5^b5^c}$$ $$\nonumber +\frac{x^4y^1z^5}{4^a1^b5^c}+\frac{x^4y^2z^5}{4^a2^b5^c}+\frac{x^4y^3z^5}{4^a3^b5^c}+\frac{x^4y^4z^5}{4^a4^b5^c}+\frac{x^4y^5z^5}{4^a5^b5^c}$$ $$\nonumber +\frac{x^5y^1z^5}{5^a1^b5^c}+\frac{x^5y^2z^5}{5^a2^b5^c}+\frac{x^5y^3z^5}{5^a3^b5^c}+\frac{x^5y^4z^5}{5^a4^b5^c}+\frac{x^5y^5z^5}{5^a5^b5^c}$$ $$\nonumber + \quad \vdots \; \quad \; + \; \quad \vdots \; \; \quad + \; \quad \vdots \; \; \quad + \; \quad \vdots \; \; \quad + \; \quad \vdots \; \ddots$$ $$\nonumber = \sum_{l,m,n \geq 1; \; l,m \leq n}^{\infty} \frac{x^l y^m z^n}{l^a m^b n^c}$$ $$\nonumber = \sum_{\substack{ h,l,m,n \geq 1 \\ l,m \leq n ; \, \gcd(l,m,n)=1}} \frac{(x^l y^m z^n)^h}{h^{a+b+c} (l^a m^b n^c)}$$ $$\nonumber = \sum_{\substack{ l,m,n \geq 1 \\ l,m \leq n ; \, \gcd(l,m,n)=1}} \frac{1}{(l^a m^b n^c)} \sum_{h=1}^{\infty} \frac{(x^l y^m z^n)^h}{h^{a+b+c}}$$ $$\nonumber = \sum_{\substack{ l,m,n \geq 1 \\ l,m \leq n ; \, \gcd(l,m,n)=1}} \frac{1}{(l^a m^b n^c)} \log \left( \frac{1}{1 - x^l y^m z^n} \right) \quad if \quad a+b+c=1.$$ Therefore, we have shown that if $a+b+c=1$ then $$\nonumber \sum_{n=1}^{\infty} \left( \sum_{l=1}^{n} \frac{x^l}{l^a} \right) \left( \sum_{m=1}^{n} \frac{y^m}{m^b} \right) \frac{z^n}{n^c} = \sum_{\substack{ l,m,n \geq 1 \\ l,m \leq n ; \, \gcd(l,m,n)=1}} \frac{1}{(l^a m^b n^c)} \log \left( \frac{1}{1 - x^l y^m z^n} \right).$$ Exponentiating both sides gives us the $3D$ "pyramid VPV identity\". The identity is summarized in the following **Theorem 2**.
***The $3D$ first hyperquadrant pyramid VPV identity.** If $|x|, |y|, |z| < 1$, with $a+b+c=1$, $$\label{21.10} \prod_{\substack{l,m,n \geq 1 \\ l,m \leq n ; \, \gcd(l,m,n)=1}} \left( \frac{1}{1-x^l y^m z^n} \right)^{\frac{1}{l^a m^b n^c}} = \exp\left\{ \sum_{n=1}^{\infty} \left( \sum_{l=1}^{n} \frac{x^l}{l^a} \right) \left( \sum_{m=1}^{n} \frac{y^m}{m^b} \right) \frac{z^n}{n^c} \right\}.$$* As we did for the $2D$ particular cases, we can examine some obvious example corollaries arising from this theorem. Firstly, take the case where $a=b=0, c=1$, so then, $$\nonumber \prod_{\substack{l,m,n \geq 1 \\ l,m \leq n ; \, \gcd(l,m,n)=1}} \left( \frac{1}{1-x^l y^m z^n} \right)^{\frac{1}{n}} = \exp\left\{ \sum_{n=1}^{\infty} \left( \sum_{l=1}^{n} x^l \right) \left( \sum_{m=1}^{n} y^m \right) \frac{z^n}{n} \right\}$$ $$\nonumber = \exp\left\{ \sum_{n=1}^{\infty} xy \left( \frac{1-x^n}{1-x} \right) \left( \frac{1-y^n}{1-y} \right) \frac{z^n}{n} \right\}$$ $$\nonumber = \exp\left\{ \frac{xy}{(1-x)(1-y)} \log \left( \frac{(1-xz)(1-yz)}{(1-z)(1-xyz)} \right) \right\},$$ which brings us after exponentiating both sides to a set of $3D$ infinite products. So, we have $$\label{21.11} \prod_{\substack{l,m,n \geq 1 \\ l,m \leq n ; \, \gcd(l,m,n)=1}} \left( \frac{1}{1-x^l y^m z^n} \right)^{\frac{1}{n}} = \left(\frac{(1-xz)(1-yz)}{(1-z)(1-xyz)}\right)^{\frac{xy}{(1-x)(1-y)}},$$ and the equivalent identity, $$\label{21.12} \prod_{\substack{l,m,n \geq 1 \\ l,m \leq n ; \, \gcd(l,m,n)=1}} \left( 1-x^l y^m z^n \right)^{\frac{1}{n}} = \left( \frac{(1-z)(1-xyz)}{(1-xz)(1-yz)} \right)^{\frac{xy}{(1-x)(1-y)}}.$$ We see that ([\[21.11\]](#21.11){reference-type="ref" reference="21.11"}) and ([\[21.12\]](#21.12){reference-type="ref" reference="21.12"}) are generalizations of the 2D identities ([\[21.02\]](#21.02){reference-type="ref" reference="21.02"}) and ([\[21.03\]](#21.03){reference-type="ref" reference="21.03"}) from the previous section. 
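Identities ([\[21.11\]](#21.11){reference-type="ref" reference="21.11"}) and ([\[21.12\]](#21.12){reference-type="ref" reference="21.12"}) can also be spot-checked numerically. The following short script is our illustration (not part of the original derivation), assuming Python 3 with only the standard library; it compares a truncated form of the left-side product of ([\[21.12\]](#21.12){reference-type="ref" reference="21.12"}) with the closed form on the right.

```python
import math

# Truncated left side of the product identity (21.12): the factors with
# n > N differ from 1 by O(z^N), so the cut-off error is negligible here.
def pyramid_product(x, y, z, N=40):
    p = 1.0
    for n in range(1, N + 1):
        for l in range(1, n + 1):
            for m in range(1, n + 1):
                if math.gcd(math.gcd(l, m), n) == 1:
                    p *= (1.0 - x**l * y**m * z**n) ** (1.0 / n)
    return p

# Closed form on the right side of (21.12).
def closed_form(x, y, z):
    base = (1 - z) * (1 - x * y * z) / ((1 - x * z) * (1 - y * z))
    return base ** (x * y / ((1 - x) * (1 - y)))

x, y, z = 0.3, 0.2, 0.4
print(abs(pyramid_product(x, y, z) - closed_form(x, y, z)))  # rounding-level difference
```

The same loop with the exponent $-1/n$ checks ([\[21.11\]](#21.11){reference-type="ref" reference="21.11"}), since the two identities are reciprocals of one another.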
Writing ([\[21.12\]](#21.12){reference-type="ref" reference="21.12"}) in longhand, referencing the diagram ([\[21.09a\]](#21.09a){reference-type="ref" reference="21.09a"}), we see that $$\nonumber \left( \frac{(1-z)(1-xyz)}{(1-xz)(1-yz)} \right)^{\frac{xy}{(1-x)(1-y)}}$$ $$\nonumber = (1-xyz)$$ $$\nonumber \sqrt{(1-xyz^2)(1-xy^2z^2)(1-x^2yz^2)}$$ $$\nonumber \sqrt[3]{(1-xyz^3)(1-xy^2z^3)(1-xy^3z^3)(1-x^2yz^3)}$$ $$\nonumber \sqrt[3]{(1-x^2y^2z^3)(1-x^2y^3z^3)(1-x^3yz^3)(1-x^3y^2z^3)}$$ $$\nonumber \sqrt[4]{(1-xyz^4)(1-x^2yz^4)(1-x^3yz^4)(1-x^4yz^4)}$$ $$\nonumber \sqrt[4]{(1-xy^2z^4)(1-x^3y^2z^4)}$$ $$\nonumber \sqrt[4]{(1-xy^3z^4)(1-x^2y^3z^4)(1-x^3y^3z^4)(1-x^4y^3z^4)}$$ $$\nonumber \sqrt[4]{(1-xy^4z^4)(1-x^3y^4z^4)}$$ $$\nonumber \sqrt[5]{(1-xyz^5)(1-x^2yz^5)(1-x^3yz^5)(1-x^4yz^5)(1-x^5yz^5)}$$ $$\nonumber \sqrt[5]{(1-xy^2z^5)(1-x^2y^2z^5)(1-x^3y^2z^5)(1-x^4y^2z^5)(1-x^5y^2z^5)}$$ $$\nonumber \sqrt[5]{(1-xy^3z^5)(1-x^2y^3z^5)(1-x^3y^3z^5)(1-x^4y^3z^5)(1-x^5y^3z^5)}$$ $$\nonumber \sqrt[5]{(1-xy^4z^5)(1-x^2y^4z^5)(1-x^3y^4z^5)(1-x^4y^4z^5)(1-x^5y^4z^5)}$$ $$\nonumber \sqrt[5]{(1-xy^5z^5)(1-x^2y^5z^5)(1-x^3y^5z^5)(1-x^4y^5z^5)}$$ $$\nonumber \quad \textmd{etc}.$$ This is easily verified on a calculating application if expanded on both sides as power series in $z$. # VPV identities in nD first hyperquadrant hyperpyramid regions {#S:VPV hyperpyramids} The $n$ dimensional first hyperquadrant hyperpyramid VPV identity is encoded in the following **Theorem 3**.
***The $nD$ first hyperquadrant hyperpyramid VPV identity.** If $i = 1, 2, 3,...,n$ then for each $x_i \in \mathbb{C}$ such that $|x_i|<1$ and $b_i \in \mathbb{C}$ such that $\sum_{i=1}^{n}b_i = 1$, $$\label{21.13} \prod_{\substack{ \gcd(a_1,a_2,...,a_n)=1 \\ a_1,a_2,...,a_{n-1} < a_n \\ a_1,a_2,...,a_n \geq 1}} \left( \frac{1}{1-{x_1}^{a_1}{x_2}^{a_2}{x_3}^{a_3}\cdots{x_n}^{a_n}} \right)^{\frac{1}{{a_1}^{b_1}{a_2}^{b_2}{a_3}^{b_3}\cdots{a_n}^{b_n}}}$$ $$\nonumber = \exp\left\{ \sum_{k=1}^{\infty} \prod_{i=1}^{n-1} \left( \sum_{j=1}^{k} \frac{{x_i}^j}{j^{b_i}} \right)\frac{{x_n}^k}{k^{b_n}} \right\}$$ $$\nonumber = \exp\left\{ \sum_{k=1}^{\infty} \left( \sum_{j=1}^{k} \frac{{x_1}^j}{j^{b_1}} \right) \left( \sum_{j=1}^{k} \frac{{x_2}^j}{j^{b_2}} \right) \left( \sum_{j=1}^{k} \frac{{x_3}^j}{j^{b_3}} \right) \cdots \left( \sum_{j=1}^{k} \frac{{x_{n-1}}^j}{j^{b_{n-1}}} \right) \frac{{x_n}^k}{k^{b_n}} \right\}.$$* This result is quite straightforward to prove using the technique of our two previous sections. It was also given in Campbell [@gC2000] by summing on the VPVs in the $n$-space hyperpyramid, defined by the inequalities $$\label{21.14} x_1<x_n, x_2<x_n, x_3<x_n, ... , x_{n-1}<x_n$$ in the first $n$-space hyperquadrant, and applying the following **Lemma 1**. *Consider an infinite region raying out of the origin in any Euclidean vector space. The set of all lattice point vectors apart from the origin in that region is precisely the set of positive integer multiples of the VPVs in that region.* The corresponding theorem from Campbell [@gC1994] was obtained by summing simply over all lattice point vectors in the first hyperquadrant. Further consequences of the above theorem are given as follows. The 2D case of theorem [Theorem 3](#9.1a){reference-type="ref" reference="9.1a"} is **Corollary 1**.
*If $|yz|$ and $|z|<1$ and $s+t=1$ then, $$\label{21.15} \prod_{\substack{ (a,b)=1 \\ a < b \\ a \geq 0, b \geq 1}} \left( \frac{1}{1-{y}^{a}{z}^{b}} \right)^{\frac{1}{{a}^{s}{b}^{t}}}$$ $$\nonumber = \exp\left\{ \frac{{z}^1}{1^{t}} + \left(1+ \frac{{y}^1}{1^{s}}\right) \frac{{z}^2}{2^{t}} + \left(1+ \frac{{y}^1}{1^{s}}+\frac{{y}^2}{2^{s}}\right)\frac{{z}^3}{3^{t}}+\cdots \right\}$$* The 3D case of theorem [Theorem 3](#9.1a){reference-type="ref" reference="9.1a"} is **Corollary 2**. *If $|xyz|$, $|yz|$ and $|z|<1$ and $r+s+t=1$ then, $$\label{21.16} \prod_{\substack{ (a,b,c)=1 \\ a,b < c \\ a,b \geq 0, c \geq 1}} \left( \frac{1}{1-{x}^{a}{y}^{b}{z}^{c}} \right)^{\frac{1}{{a}^{r}{b}^{s}{c}^{t}}}$$ $$\nonumber = \exp\left\{ \frac{{z}^1}{1^{t}} + \left(1+ \frac{{x}^1}{1^{r}}\right)\left(1+ \frac{{y}^1}{1^{s}}\right) \frac{{z}^2}{2^{t}} + \left(1+ \frac{{x}^1}{1^{r}}+\frac{{x}^2}{2^{r}}\right)\left(1+ \frac{{y}^1}{1^{s}}+\frac{{y}^2}{2^{s}}\right)\frac{{z}^3}{3^{t}}+\cdots \right\}$$* The 4D case of theorem [Theorem 3](#9.1a){reference-type="ref" reference="9.1a"} is **Corollary 3**. 
*If $|wxyz|$, $|xyz|$, $|yz|$ and $|z|<1$ and $r+s+t+u=1$ then, $$\label{21.16a} \prod_{\substack{ (a,b,c,d)=1 \\ a,b,c < d \\ a,b,c \geq 0, d \geq 1}} \left( \frac{1}{1-{w}^{a}{x}^{b}{y}^{c}{z}^{d}} \right)^{\frac{1}{{a}^{r}{b}^{s}{c}^{t}{d}^{u}}} = \exp \left\{ \mathrm{P}_3(r,w;s,x;t,y;u,z)\right\}$$ where $\mathrm{P}_3$, is a 4D hyperpyramid function, $$\begin{gathered} \nonumber \mathrm{P}_3(r,w;s,x;t,y;u,z) = \frac{{z}^1}{1^{u}} + \left(1+ \frac{{w}^1}{1^{r}}\right)\left(1+ \frac{{x}^1}{1^{s}}\right)\left(1+ \frac{{y}^1}{1^{t}}\right) \frac{{z}^2}{2^{u}} \\ + \left(1+ \frac{{w}^1}{1^{r}}+\frac{{w}^2}{2^{r}}\right)\left(1+ \frac{{x}^1}{1^{s}}+\frac{{x}^2}{2^{s}}\right) \left(1+ \frac{{y}^1}{1^{t}}+\frac{{y}^2}{2^{t}}\right)\frac{{z}^3}{3^{u}}+\cdots \end{gathered}$$* The approach we adopt to give the reader an intuitive sense for these identities is to state corollaries and then examples from them. The 2D case through to the 5D case of ([\[21.13\]](#21.13){reference-type="ref" reference="21.13"}) are given in the following examples of the *square hyperpyramid identity*. **Corollary 4**. *For $|y|, |z|<1,$ $$\label{21.17} \prod_{\substack{(a,b)=1 \\ a<b \\ a \geq 0,b \geq 1}} \left( \frac{1}{1-y^a z^b} \right)^{\frac{1}{b}} = \left(\frac{1-yz}{1-z}\right)^{\frac{1}{1-y}}$$ $$\nonumber = 1 + \frac{z}{1!} + \begin{vmatrix} 1 & -1 \\ \frac{1-y^2}{1-y} & 1 \\ \end{vmatrix} \frac{z^2}{2!} + \begin{vmatrix} 1 & -1 & 0 \\ \frac{1-y^2}{1-y} & 1 & -2 \\ \frac{1-y^3}{1-y} & \frac{1-y^2}{1-y} & 1 \\ \end{vmatrix} \frac{z^3}{3!} + \begin{vmatrix} 1 & -1 & 0 & 0 \\ \frac{1-y^2}{1-y} & 1 & -2 & 0 \\ \frac{1-y^3}{1-y} & \frac{1-y^2}{1-y} & 1 & -3 \\ \frac{1-y^4}{1-y} & \frac{1-y^3}{1-y} & \frac{1-y^2}{1-y} & 1 \\ \end{vmatrix} \frac{z^4}{4!} + etc.$$* In this case it is fairly easy to find the Taylor coefficients for the ([\[21.17\]](#21.17){reference-type="ref" reference="21.17"}) right side function. Hence we get a closed form evaluation of the determinant coefficients. 
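As a sanity check of our own (not from the original text), the determinant coefficients displayed in ([\[21.17\]](#21.17){reference-type="ref" reference="21.17"}) can be compared exactly, in rational arithmetic, against the Taylor coefficients of the right-side function. Choosing $y=\frac12$ makes the exponent $\frac{1}{1-y}=2$, so the right side reduces to the elementary series $(1-z/2)^2(1-z)^{-2}$; the sketch below, in standard-library Python 3, checks the $2\times2$ through $4\times4$ determinants shown in the display.

```python
from fractions import Fraction
from math import factorial

def det(m):
    # Laplace expansion along the first row; fine for these small matrices.
    if len(m) == 1:
        return m[0][0]
    total = 0
    for c, entry in enumerate(m[0]):
        minor = [row[:c] + row[c + 1:] for row in m[1:]]
        total += (-1) ** c * entry * det(minor)
    return total

def coeff_det(n, y):
    # The n x n determinant pattern displayed in (21.17): 1 on the diagonal,
    # -(i+1) on the superdiagonal of row i, and (1-y^(i-j+1))/(1-y) below it.
    q = lambda k: sum(y**i for i in range(k))  # (1-y^k)/(1-y) as a polynomial
    return det([[1 if j == i
                 else -(i + 1) if j == i + 1
                 else q(i - j + 1) if j < i
                 else 0
                 for j in range(n)] for i in range(n)])

# At y = 1/2 the exponent 1/(1-y) equals 2, so the right side of (21.17)
# is simply (1 - z/2)^2 (1 - z)^(-2); expand it exactly with Fractions.
y, N = Fraction(1, 2), 4
coeffs = [Fraction(0)] * (N + 1)
for i in range(N + 1):                           # (1-z)^(-2) = sum (i+1) z^i
    for j, c in enumerate((1, -2 * y, y * y)):   # (1 - y z)^2
        if i + j <= N:
            coeffs[i + j] += (i + 1) * c

# Each determinant should equal n! times the z^n Taylor coefficient.
for n in range(2, N + 1):
    assert coeff_det(n, y) == factorial(n) * coeffs[n]
print("determinant coefficients agree up to z^4")
```

Other rational values of $y$ with integer exponent $\frac{1}{1-y}$ can be substituted in the same way.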
In Mathematica and WolframAlpha one easily sees that the Taylor series is $$\nonumber \left(\frac{1-yz}{1-z}\right)^{\frac{1}{1-y}} = 1 + z + (y + 2) \frac{z^2}{2!} + (2 y^2 + 5 y + 6) \frac{z^3}{3!} + (6 y^3 + 17 y^2 + 26 y + 24) \frac{z^4}{4!}$$ $$\nonumber + (24 y^4 + 74 y^3 + 129 y^2 + 154 y + 120) \frac{z^5}{5!} + O(z^6)$$ and that the expansion is encapsulated by $\sum_{n=0}^{\infty} c_n z^n$ where $c_0 = 1$, $c_1 = 1$ with the recurrence $$\nonumber ny c_n + (n+2) c_{n+2} = (2 + n + y + ny) c_{n+1}.$$ Incidentally, also in Mathematica and WolframAlpha one easily sees, for example, that the code $$\nonumber Det[\{\{1,-1,0,0\},\{(1-y^2)/(1-y),1,-2,0\},\{(1-y^3)/(1-y),(1-y^2)/(1-y),1,-3\},$$ $$\nonumber \{(1-y^4)/(1-y),(1-y^3)/(1-y),(1-y^2)/(1-y),1\}\}]$$ nicely verifies the coefficient given by $$\nonumber \begin{vmatrix} 1 & -1 & 0 & 0 \\ \frac{1-y^2}{1-y} & 1 & -2 & 0 \\ \frac{1-y^3}{1-y} & \frac{1-y^2}{1-y} & 1 & -3 \\ \frac{1-y^4}{1-y} & \frac{1-y^3}{1-y} & \frac{1-y^2}{1-y} & 1 \\ \end{vmatrix} = 6 y^3 + 17 y^2 + 26 y + 24. \\$$ It is interesting to compare our identity ([\[21.11\]](#21.11){reference-type="ref" reference="21.11"}) given earlier in this paper with the following result. **Corollary 5**.
*For each of $|x|, |y|, |z|<1,$ $$\label{21.18} \prod_{\substack{(a,b,c)=1 \\ a,b<c \\ a,b\geq0,c>0}} \left( \frac{1}{1-x^a y^b z^c} \right)^{\frac{1}{c}} = \left(\frac{(1-xz)(1-yz)}{(1-z)(1-xyz)}\right)^{\frac{1}{(1-x)(1-y)}}$$ $$\nonumber = 1 + \frac{z}{1!} + \begin{vmatrix} 1 & -1 \\ \frac{(1-x^2)(1-y^2)}{(1-x)(1-y)} & 1 \\ \end{vmatrix} \frac{z^2}{2!} + \begin{vmatrix} 1 & -1 & 0 \\ \frac{(1-x^2)(1-y^2)}{(1-x)(1-y)} & 1 & -2 \\ \frac{(1-x^3)(1-y^3)}{(1-x)(1-y)} & \frac{(1-x^2)(1-y^2)}{(1-x)(1-y)} & 1 \\ \end{vmatrix} \frac{z^3}{3!}$$ $$\nonumber + \begin{vmatrix} 1 & -1 & 0 & 0 \\ \frac{(1-x^2)(1-y^2)}{(1-x)(1-y)} & 1 & -2 & 0 \\ \frac{(1-x^3)(1-y^3)}{(1-x)(1-y)} & \frac{(1-x^2)(1-y^2)}{(1-x)(1-y)} & 1 & -3 \\ \frac{(1-x^4)(1-y^4)}{(1-x)(1-y)} & \frac{(1-x^3)(1-y^3)}{(1-x)(1-y)} & \frac{(1-x^2)(1-y^2)}{(1-x)(1-y)} & 1 \\ \end{vmatrix} \frac{z^4}{4!} + etc.$$* **Corollary 6**. *For each of $|w|, |x|, |y|, |z|<1,$ $$\label{21.19} \prod_{\substack{(a,b,c,d)=1 \\ a,b,c<d \\ a,b,c\geq0,d>0}} \left( \frac{1}{1-w^a x^b y^c z^d} \right)^{\frac{1}{d}} = \left(\frac{(1-wz)(1-xz)(1-yz)(1-wxyz)}{(1-z)(1-wxz)(1-wyz)(1-xyz)}\right)^{\frac{1}{(1-w)(1-x)(1-y)}},$$ $$\nonumber = 1 + \frac{z}{1!} + \begin{vmatrix} 1 & -1 \\ \frac{(1-w^2)(1-x^2)(1-y^2)}{(1-w)(1-x)(1-y)} & 1 \\ \end{vmatrix} \frac{z^2}{2!}$$ $$\nonumber + \begin{vmatrix} 1 & -1 & 0 \\ \frac{(1-w^2)(1-x^2)(1-y^2)}{(1-w)(1-x)(1-y)} & 1 & -2 \\ \frac{(1-w^3)(1-x^3)(1-y^3)}{(1-w)(1-x)(1-y)} & \frac{(1-w^2)(1-x^2)(1-y^2)}{(1-w)(1-x)(1-y)} & 1 \\ \end{vmatrix} \frac{z^3}{3!}$$ $$\nonumber + \begin{vmatrix} 1 & -1 & 0 & 0 \\ \frac{(1-w^2)(1-x^2)(1-y^2)}{(1-w)(1-x)(1-y)} & 1 & -2 & 0 \\ \frac{(1-w^3)(1-x^3)(1-y^3)}{(1-w)(1-x)(1-y)} & \frac{(1-w^2)(1-x^2)(1-y^2)}{(1-w)(1-x)(1-y)} & 1 & -3 \\ \frac{(1-w^4)(1-x^4)(1-y^4)}{(1-w)(1-x)(1-y)} & \frac{(1-w^3)(1-x^3)(1-y^3)}{(1-w)(1-x)(1-y)} & \frac{(1-w^2)(1-x^2)(1-y^2)}{(1-w)(1-x)(1-y)} & 1 \\ \end{vmatrix} \frac{z^4}{4!} + etc.$$* **Corollary 7**. 
*For each of $|v|, |w|, |x|, |y|, |z|<1,$ $$\label{21.20} \prod_{\substack{(a,b,c,d,e)=1 \\ a,b,c,d<e \\ a,b,c,d\geq0,e>0}} \left( \frac{1}{1-v^a w^b x^c y^d z^e} \right)^{\frac{1}{e}}$$ $$\begin{gathered} \nonumber = \left(\frac{(1-vz)(1-wz)(1-xz)(1-yz)}{(1-z)(1-vwz)(1-vxz)(1-vyz)}\right)^{\frac{1}{(1-v)(1-w)(1-x)(1-y)}} \\ \nonumber \times \left(\frac{(1-vwxz)(1-vwyz)(1-vxyz)(1-wxyz)}{(1-wxz)(1-wyz)(1-xyz)(1-vwxyz)}\right)^{\frac{1}{(1-v)(1-w)(1-x)(1-y)}}. \end{gathered}$$ $$\nonumber = 1 + \frac{z}{1!} + \begin{vmatrix} 1 & -1 \\ \frac{(1-v^2)(1-w^2)(1-x^2)(1-y^2)}{(1-v)(1-w)(1-x)(1-y)} & 1 \\ \end{vmatrix} \frac{z^2}{2!}$$ $$\nonumber + \begin{vmatrix} 1 & -1 & 0 \\ \frac{(1-v^2)(1-w^2)(1-x^2)(1-y^2)}{(1-v)(1-w)(1-x)(1-y)} & 1 & -2 \\ \frac{(1-v^3)(1-w^3)(1-x^3)(1-y^3)}{(1-v)(1-w)(1-x)(1-y)} & \frac{(1-v^2)(1-w^2)(1-x^2)(1-y^2)}{(1-v)(1-w)(1-x)(1-y)} & 1 \\ \end{vmatrix} \frac{z^3}{3!}$$ $$\nonumber + \begin{vmatrix} 1 & -1 & 0 & 0 \\ \frac{(1-v^2)(1-w^2)(1-x^2)(1-y^2)}{(1-v)(1-w)(1-x)(1-y)} & 1 & -2 & 0 \\ \frac{(1-v^3)(1-w^3)(1-x^3)(1-y^3)}{(1-v)(1-w)(1-x)(1-y)} & \frac{(1-v^2)(1-w^2)(1-x^2)(1-y^2)}{(1-v)(1-w)(1-x)(1-y)} & 1 & -3 \\ \frac{(1-v^4)(1-w^4)(1-x^4)(1-y^4)}{(1-v)(1-w)(1-x)(1-y)} & \frac{(1-v^3)(1-w^3)(1-x^3)(1-y^3)}{(1-v)(1-w)(1-x)(1-y)} & \frac{(1-v^2)(1-w^2)(1-x^2)(1-y^2)}{(1-v)(1-w)(1-x)(1-y)} & 1 \\ \end{vmatrix} \frac{z^4}{4!}$$ $$\nonumber + etc.$$* # 2D VPV identities for a z-axis symmetric extended triangle lattice {#S:2D VPV right-hyperpyramids} As we did in section [3](#S:2D VPV hyperpyramids){reference-type="ref" reference="S:2D VPV hyperpyramids"} of this paper, we again start with a simple $2D$ summation.
Consider an infinite extension of the inverted triangle 2D lattice point vectors with the Visible Point Vectors bolded, $$\label{21.20a} \tiny{ \begin{array}{ccccccccccc} \langle -5,5 \rangle & \mathbf{\langle -4,5 \rangle} & \mathbf{\langle -3,5 \rangle} & \mathbf{\langle -2,5 \rangle} & \mathbf{\langle -1,5 \rangle} & \langle 0,5 \rangle & \mathbf{\langle 1,5 \rangle} & \mathbf{\langle 2,5 \rangle} & \mathbf{\langle 3,5 \rangle} & \mathbf{\langle 4,5 \rangle} & \langle 5,5 \rangle \\ & \langle -4,4 \rangle & \mathbf{\langle -3,4 \rangle} & \langle -2,4 \rangle & \mathbf{\langle -1,4 \rangle} & \langle 0,4 \rangle & \mathbf{\langle 1,4 \rangle} & \langle 2,4 \rangle & \mathbf{\langle 3,4 \rangle} & \langle 4,4 \rangle & \\ & & \langle -3,3 \rangle & \mathbf{\langle -2,3 \rangle} & \mathbf{\langle -1,3 \rangle} & \langle 0,3 \rangle & \mathbf{\langle 1,3 \rangle} & \mathbf{\langle 2,3 \rangle} & \langle 3,3 \rangle & & \\ & & & \langle -2,2 \rangle & \mathbf{\langle -1,2 \rangle} & \langle 0,2 \rangle & \mathbf{\langle 1,2 \rangle} & \langle 2,2 \rangle & & & \\ & & & & \mathbf{\langle -1,1 \rangle} & \mathbf{\langle 0,1 \rangle} & \mathbf{\langle 1,1 \rangle} & & & & \\ & & & & & \langle 0,0 \rangle & & & & & \end{array} }$$ Next we create the following summation with the sum covering the above co-ordinates in infinite extended form. 
$$\nonumber \sum_{n=1}^{\infty} \left( \sum_{m=-n}^{n} \frac{y^m}{m^a} \right) \frac{z^n}{n^b}$$ $$\nonumber =\left(\frac{y^{-1}}{(-1)^a}+1+\frac{y^1}{1^a}\right)\frac{z^1}{1^b}$$ $$\nonumber +\left(\frac{y^{-2}}{(-2)^a}+\frac{y^{-1}}{(-1)^a}+1+\frac{y^1}{1^a}+\frac{y^2}{2^a}\right)\frac{z^2}{2^b}$$ $$\nonumber +\left(\frac{y^{-3}}{(-3)^a}+\frac{y^{-2}}{(-2)^a}+\frac{y^{-1}}{(-1)^a}+1+\frac{y^1}{1^a}+\frac{y^2}{2^a}+\frac{y^3}{3^a}\right)\frac{z^3}{3^b}$$ $$\nonumber +\left(\frac{y^{-4}}{(-4)^a}+\frac{y^{-3}}{(-3)^a}+\frac{y^{-2}}{(-2)^a}+\frac{y^{-1}}{(-1)^a}+1+\frac{y^1}{1^a}+\frac{y^2}{2^a}+\frac{y^3}{3^a}+\frac{y^4}{4^a}\right)\frac{z^4}{4^b}$$ $$\nonumber + \; etc.$$ $$\nonumber =\frac{y^{-1} z^1}{(-1)^a 1^b}+\frac{y^0 z^1}{1\times 1^b}+\frac{y^1 z^1}{1^a 1^b}$$ $$\nonumber +\frac{y^{-2} z^2}{(-2)^a 2^b}+\frac{y^{-1} z^2}{(-1)^a 2^b}+\frac{y^0 z^2}{1\times 2^b}+\frac{y^1 z^2}{1^a 2^b}+\frac{y^2 z^2}{2^a 2^b}$$ $$\nonumber +\frac{y^{-3} z^3}{(-3)^a 3^b}+\frac{y^{-2} z^3}{(-2)^a 3^b}+ \frac{y^{-1} z^3}{(-1)^a 3^b}+\frac{y^0 z^3}{1\times 3^b}+\frac{y^1 z^3}{1^a 3^b}+\frac{y^2 z^3}{2^a 3^b}+\frac{y^3 z^3}{3^a 3^b}$$ $$\nonumber +\frac{y^{-4} z^4}{(-4)^a 4^b}+\frac{y^{-3} z^4}{(-3)^a 4^b}+\frac{y^{-2} z^4}{(-2)^a 4^b}+\frac{y^{-1} z^4}{(-1)^a 4^b} +\frac{y^0 z^4}{1\times 4^b}+\frac{y^1 z^4}{1^a 4^b}+\frac{y^2 z^4}{2^a 4^b}+\frac{y^3 z^4}{3^a 4^b}+\frac{y^4 z^4}{4^a 4^b}$$ $$\nonumber + \; etc.$$ $$\nonumber = \sum_{|m|,n \geq 1; |m| \leq n}^{\infty} \frac{y^m z^n}{m^a n^b}$$ $$\nonumber = \sum_{\substack{ h,|j|,k \geq 1 \\ |j| \leq k ; \, (j,k)=1}} \frac{(y^j z^k)^h}{h^{a+b} (j^a k^b)}$$ $$\nonumber = \sum_{\substack{ |j|,k \geq 1 \\ |j| \leq k ; \, (j,k)=1}} \frac{1}{(j^a k^b)} \sum_{h=1}^{\infty} \frac{(y^j z^k)^h}{h^{a+b}}$$ $$\nonumber = \sum_{\substack{ |j|,k \geq 1 \\ |j| \leq k ; \, (j,k)=1}} \frac{1}{(j^a k^b)} \log \left( \frac{1}{1 - y^j z^k} \right) \quad if \quad a+b=1.$$ Therefore, we have shown that $$\nonumber \sum_{n=1}^{\infty} \left(
\sum_{m=-n}^{n} \frac{y^m}{m^a} \right) \frac{z^n}{n^b} = \sum_{\substack{ |j|,k \geq 1 \\ |j| \leq k ; \, (j,k)=1}} \frac{1}{(j^a k^b)} \log \left( \frac{1}{1 - y^j z^k} \right) \quad if \quad a+b=1.$$ Exponentiating both sides (and swapping sides) gives us the $2D$ first extended inverted symmetric triangle VPV identity, where in this $2D$ case the $nD$ pyramid reduces to the form of a triangle-shaped array of lattice point vectors having the $z$-axis as the axis of symmetry, and so we can state the following **Theorem 4**. ***The $\mathbf{2D}$ vertical symmetry extended triangle VPV identity.** For $0<|yz|<1$, $0<|z/y|<1$, $0<|z|<1,$ with $a+b=1$, $$\label{21.01r} \prod_{\substack{ |j|,k \geq 1 \\ |j| \leq k ; \, (j,k)=1}} \left( \frac{1}{1-y^j z^k} \right)^{\frac{1}{j^a k^b}} = \exp\left\{ \sum_{n=1}^{\infty} \left( \sum_{m=-n}^{n} \frac{y^m}{m^a} \right) \frac{z^n}{n^b} \right\} .$$* As with our earlier exploits into the $2D$ first quadrant case, for the present result we take some simple example cases where new and interesting results arise.
So, let us take the case where $a=0, b=1$, giving us for $0<|yz|<1$, $0<|z/y|<1$, $0<|z|<1$, $$\nonumber \prod_{\substack{ |j|,k \geq 1 \\ |j| \leq k ; \, (j,k)=1}} \left( \frac{1}{1-y^j z^k} \right)^{\frac{1}{k}} = \exp\left\{ \sum_{n=1}^{\infty} \left( \sum_{m=-n}^{n} y^m \right) \frac{z^n}{n} \right\}$$ $$\nonumber = \exp\left\{ \sum_{n=1}^{\infty} \left(\frac{y^{2n+1} - 1}{y^n (y-1)}\right) \frac{z^n}{n} \right\} = \exp\left\{ \frac{1}{1-y} \log \left( \frac{(1-yz)^y}{1-z/y} \right) \right\}.$$ So, we arrive then at the following pair of equivalent results, for $0<|yz|<1$, $0<|z/y|<1$, $0<|z|<1$, $$\label{21.02r} \prod_{\substack{ |j|,k \geq 1 \\ |j| \leq k ; \, (j,k)=1}} \left( \frac{1}{1-y^j z^k} \right)^{\frac{1}{k}} = \left( \frac{(1-yz)^y}{1-z/y} \right)^{\frac{1}{1-y}} ,$$ and $$\label{21.03r} \prod_{\substack{ |j|,k \geq 1 \\ |j| \leq k ; \, (j,k)=1}} \left( 1-y^j z^k \right)^{\frac{1}{k}} = \left( \frac{1-z/y}{(1-yz)^y} \right)^{\frac{1}{1-y}} .$$ From here, multiply both sides of ([\[21.02r\]](#21.02r){reference-type="ref" reference="21.02r"}) and the case of ([\[21.03r\]](#21.03r){reference-type="ref" reference="21.03r"}) with $y \mapsto y^2$ and $z \mapsto z^2$ to get, $$\label{21.04r} \prod_{\substack{ |j|,k \geq 1 \\ |j| \leq k ; \, (j,k)=1}} \left( 1+y^j z^k \right)^{\frac{1}{k}} = \left( \frac{(1-yz)^y}{1-z/y} \right)^{\frac{1}{1-y}} \left( \frac{1-(z/y)^2}{(1-(yz)^2)^{y^2}} \right)^{\frac{1}{1-y^2}} .$$ Particular cases: $y = \frac{1}{2}$ gives us from ([\[21.03r\]](#21.03r){reference-type="ref" reference="21.03r"}) and ([\[21.04r\]](#21.04r){reference-type="ref" reference="21.04r"}) the two results that $$\nonumber \prod_{\substack{ |j|,k \geq 1 \\ |j| \leq k ; \, (j,k)=1}} \left( 1- \frac{z^k}{2^j} \right)^{\frac{-1}{k}} = \frac{1 - z/2}{(1 - 2z)^2} \sqrt[4]{\left(\frac{1 - 4z^2}{\sqrt[4]{1 - z^2/4}}\right)^3}$$ $$\nonumber = 1 + \frac{7z}{2} + \frac{19z^2}{4} + \frac{61z^3}{8} + \frac{117z^4}{8} + \frac{423z^5}{16} + \frac{4861z^6}{96} + 
\frac{18259z^7}{192}$$ $$\nonumber + \frac{140867z^8}{768} + \frac{538373z^9}{1536} + \frac{696379z^{10}}{1024} + O(z^{11})$$ $$\nonumber = \frac{1}{\left( 1- \frac{z}{2} \right)}$$ $$\nonumber \frac{1}{\sqrt{\left( 1- 2^1 z^2 \right)\left( 1- \frac{z^2}{2^1} \right)}}$$ $$\nonumber \frac{1}{\sqrt[3]{\left( 1- 2^2 z^3 \right)\left( 1- 2^1 z^3 \right) \left( 1- \frac{z^3}{2^1} \right)\left( 1- \frac{z^3}{2^2} \right)}}$$ $$\nonumber \frac{1}{\sqrt[4]{\left( 1- 2^3 z^4 \right)\left( 1- 2^1 z^4 \right) \left( 1- \frac{z^4}{2^1} \right)\left( 1- \frac{z^4}{2^3} \right)}}$$ $$\nonumber \frac{1}{\sqrt[5]{\left( 1- 2^4 z^5 \right)\left( 1- 2^3 z^5 \right)\left( 1- 2^2 z^5 \right)\left( 1- 2^1 z^5 \right) \left( 1- \frac{z^5}{2^1} \right)\left( 1- \frac{z^5}{2^2} \right)\left( 1- \frac{z^5}{2^3} \right)\left( 1- \frac{z^5}{2^4} \right)}}$$ $$\nonumber \frac{1}{\sqrt[6]{\left( 1- 2^5 z^6 \right)\left( 1- 2^1 z^6 \right) \left( 1- \frac{z^6}{2^1} \right)\left( 1- \frac{z^6}{2^5} \right)}}$$ $$\nonumber \vdots \, ,$$ $$\nonumber \prod_{\substack{ |j|,k \geq 1 \\ |j| \leq k ; \, (j,k)=1}} \left( 1+ \frac{z^k}{2^j} \right)^{\frac{1}{k}} = \frac{2-z}{2-2z} \sqrt[3]{\frac{4-z^2}{4-4z^2}}$$ $$\nonumber = 1 +\frac{z}{2} +\frac{3 z^2}{4} +\frac{5 z^3}{8} +\frac{13 z^4}{16}+ \frac{23 z^5}{32} + \frac{167 z^6}{192}$$ $$\nonumber + \frac{305 z^7}{384} + \frac{59 z^8}{64} + \frac{659 z^9}{768} + O(z^{10})$$ $$\nonumber = \left( 1+ 2z \right) \left( 1+ \frac{z}{2} \right)$$ $$\nonumber \sqrt{\left( 1+ 2^1 z^2 \right)\left( 1+ \frac{z^2}{2^1} \right)}$$ $$\nonumber \sqrt[3]{\left( 1+ 2^2 z^3 \right)\left( 1+ 2^1 z^3 \right)\left( 1+ \frac{z^3}{2^1} \right)\left( 1+ \frac{z^3}{2^2} \right)}$$ $$\nonumber \sqrt[4]{\left( 1+ 2^3 z^4 \right)\left( 1+ 2^1 z^4 \right) \left( 1+ \frac{z^4}{2^1} \right)\left( 1+ \frac{z^4}{2^3} \right)}$$ $$\nonumber \sqrt[5]{\left( 1+ 2^4 z^5 \right)\left( 1+ 2^3 z^5 \right)\left( 1+ 2^2 z^5 \right)\left( 1+ 2^1 z^5 \right) \left( 1+ \frac{z^5}{2^1} 
\right)\left( 1+ \frac{z^5}{2^2} \right)\left( 1+ \frac{z^5}{2^3} \right)\left( 1+ \frac{z^5}{2^4} \right)}$$ $$\nonumber \sqrt[6]{\left( 1+ 2^5 z^6 \right)\left( 1+ 2^1 z^6 \right)\left( 1+ \frac{z^6}{2^1} \right)\left( 1+ \frac{z^6}{2^5} \right)}$$ $$\nonumber \vdots .$$ These two equations can be easily verified on a calculating engine like Mathematica or WolframAlpha by expanding each side into its Taylor series around $z=0$ and comparing coefficients of like powers of $z$. One could similarly take the cases of ([\[21.03r\]](#21.03r){reference-type="ref" reference="21.03r"}) and ([\[21.04r\]](#21.04r){reference-type="ref" reference="21.04r"}) with $y=2$, both of which converge if $|z|<2$, after a slight adjustment to both sides by a factor of $1-2z$. We remark at this juncture that equation ([\[21.03r\]](#21.03r){reference-type="ref" reference="21.03r"}) and its reciprocal equation ([\[21.04r\]](#21.04r){reference-type="ref" reference="21.04r"}) are amenable to applying the limit as $y$ approaches $1$. In fact we have that $$\nonumber \lim_{y \rightarrow 1} \left( \frac{1 - z}{1 - y z}\right)^{\frac{y}{1 - y}} = e^{\frac{z}{z-1}}$$ and also from considering equation ([\[21.04r\]](#21.04r){reference-type="ref" reference="21.04r"}) there is the limit, easily evaluated, $$\nonumber \lim_{y \rightarrow 1} \left( \frac{1 - yz}{1 - z}\right)^{\frac{y}{1 - y}} \left( \frac{1 - z^2}{1 - y^2 z^2}\right)^{\frac{y^2}{1 - y^2}} = e^{\frac{z}{1-z^2}}.$$ Therefore, applying these two limits to equations ([\[21.03r\]](#21.03r){reference-type="ref" reference="21.03r"}) and ([\[21.04r\]](#21.04r){reference-type="ref" reference="21.04r"}) respectively we obtain the two interesting results ([\[21.05r\]](#21.05r){reference-type="ref" reference="21.05r"}) and ([\[21.06r\]](#21.06r){reference-type="ref" reference="21.06r"}) given here.
$$\label{21.05r} \prod_{k=1}^{\infty} \left( 1- z^k \right)^{\frac{\varphi(k)}{k}} = e^{\frac{z}{z-1}} = \sum_{k=0}^{\infty} \frac{\alpha(k)z^k}{k!}$$ $$\nonumber = 1 - \frac{z}{1!} - \frac{z^2}{2!} - \frac{z^3}{3!} + \frac{z^4}{4!} + \frac{19 z^5}{5!} + \frac{151 z^6}{6!} + \frac{1091 z^7}{7!}$$ $$\nonumber + \frac{7841 z^8}{8!} + \frac{56519 z^9}{9!} + \frac{396271 z^{10}}{10!} + O(z^{11}),$$ demonstrating that the sequence $\alpha(k)$ has the exponential generating function $e^{\frac{z}{z-1}}$. Amazingly, $\gcd(\alpha(k),k!)=1$ for all values of $k$ up to 34, and mostly beyond that; moreover $\alpha(k) \equiv 1 \text{ or } 9 \pmod{10}$, and the recurrence relation $$\alpha(n)+(n-1)(n-2) \, \alpha(n-2)=(2n-3) \, \alpha(n-1)$$ holds (see OEIS sequence A293116 [@nS2023]). This recurrence relation allows us to write continued fractions for the ratios $\alpha(n+1)/\alpha(n)$. $$\label{21.06r} \prod_{k=1}^{\infty} \left( 1+ z^k \right)^{\frac{\varphi(k)}{k}} = e^{\frac{z}{1-z^2}} = \sum_{k=0}^{\infty} \frac{\beta(k)z^k}{k!}$$ $$\nonumber = 1 + \frac{z}{1!} + \frac{z^2}{2!} + \frac{7 z^3}{3!} + \frac{25 z^4}{4!} + \frac{181 z^5}{5!} + \frac{1201 z^6}{6!}$$ $$\nonumber + \frac{10291 z^7}{7!} + \frac{97777 z^8}{8!} + \frac{202709 z^9}{9!} + O(z^{10}),$$ where $\varphi(k)$ is the Euler totient function, the number of positive integers less than and coprime to $k$.
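Both ([\[21.05r\]](#21.05r){reference-type="ref" reference="21.05r"}) and the quoted recurrence for $\alpha(n)$ are easy to spot-check; the following is our illustration (not from the original text) in Python 3 with only the standard library. The infinite product is truncated numerically, and the coefficients $\alpha(k)$ are generated exactly from the differential equation $f' = g'f$ for $f = e^{z/(z-1)}$ and $g(z) = z/(z-1)$.

```python
from fractions import Fraction
from math import exp, factorial

def phi(n):
    # Euler totient computed by trial division over the prime factors of n
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:
        result -= result // m
    return result

# Numerical check of (21.05r) at z = 0.3, truncating the product at k < 200;
# the neglected factors differ from 1 by O(z^200).
z = 0.3
product = 1.0
for k in range(1, 200):
    product *= (1 - z**k) ** (phi(k) / k)
assert abs(product - exp(z / (z - 1))) < 1e-12

# Exact Taylor coefficients of f = e^(z/(z-1)) from f' = g' f, where
# g(z) = z/(z-1) = -(z + z^2 + ...), so the series of g' has -(j+1) at z^j.
N = 10
c = [Fraction(1)]
for k in range(N):
    c.append(sum(-(j + 1) * c[k - j] for j in range(k + 1)) / (k + 1))
alpha = [int(coeff * factorial(k)) for k, coeff in enumerate(c)]
print(alpha)  # [1, -1, -1, -1, 1, 19, 151, 1091, 7841, 56519, 396271]

# The quoted three-term recurrence from A293116 holds for these values.
for n in range(2, N + 1):
    assert alpha[n] + (n - 1) * (n - 2) * alpha[n - 2] == (2 * n - 3) * alpha[n - 1]
```

The printed values match the numerators displayed in the series expansion above.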
Next we take ([\[21.01r\]](#21.01r){reference-type="ref" reference="21.01r"}) with the case that $a=1$ and $b=0$, so then $$\nonumber \prod_{\substack{ j,k \geq 1 \\ j \leq k ; \, (j,k)=1}} \left( \frac{1}{1-y^j z^k} \right)^{\frac{1}{j}} = \exp\left\{ \sum_{n=1}^{\infty} \left( \sum_{m=1}^{n} \frac{y^m}{m} \right) z^n \right\}$$ $$\nonumber = \exp\left\{ \frac{1}{1-z} \sum_{n=1}^{\infty} \frac{y^n z^n}{n} \right\} = \exp\left\{ \frac{1}{1-z} \log \left( \frac{1}{1-yz} \right) \right\}.$$ This leads us to establish that $$\label{21.07r} \prod_{\substack{ j,k \geq 1 \\ j \leq k ; \, (j,k)=1}} \left( \frac{1}{1-y^j z^k} \right)^{\frac{1}{j}} = \left( \frac{1}{1-yz} \right)^{\frac{1}{1-z}} ,$$ which is equivalent to $$\label{21.08r} \prod_{\substack{ j,k \geq 1 \\ j \leq k ; \, (j,k)=1}} \left( 1-y^j z^k \right)^{\frac{1}{j}} = \left( 1-yz \right)^{\frac{1}{1-z}} .$$ From multiplying both sides of ([\[21.08r\]](#21.08r){reference-type="ref" reference="21.08r"}), in which $y \mapsto y^2$ and $z \mapsto z^2$, with both sides of ([\[21.07r\]](#21.07r){reference-type="ref" reference="21.07r"}), we obtain $$\label{21.09r} \prod_{\substack{ j,k \geq 1 \\ j \leq k ; \, (j,k)=1}} \left( 1+y^j z^k \right)^{\frac{1}{j}} = \frac{\left( 1-y^2z^2 \right)^{\frac{1}{1-z^2}}}{\left( 1-yz \right)^{\frac{1}{1-z}}} .$$ Particular cases: $z = \frac{1}{2}$ in ([\[21.08r\]](#21.08r){reference-type="ref" reference="21.08r"}) and ([\[21.09r\]](#21.09r){reference-type="ref" reference="21.09r"}) gives the remarkable result that $$\nonumber \prod_{\substack{ j,k \geq 1; \, j \leq k \\ gcd(j,k)=1}} \left( 1- \frac{y^j}{2^k} \right)^{\frac{1}{j}} = \left( 1- \frac{y}{2} \right)^2 = 1 - y + \frac{y^2}{4}$$ $$\nonumber = \left( 1- \frac{y^1}{2^1} \right)$$ $$\nonumber \left( 1- \frac{y^1}{2^2} \right)$$ $$\nonumber \left( 1- \frac{y^1}{2^3} \right)\sqrt{\left( 1- \frac{y^2}{2^3} \right)}$$ $$\nonumber \left( 1- \frac{y^1}{2^4} \right)\sqrt[3]{\left( 1- \frac{y^3}{2^4} \right)}$$ $$\nonumber
\left( 1- \frac{y^1}{2^5} \right)\sqrt{\left( 1- \frac{y^2}{2^5} \right)}\sqrt[3]{\left( 1- \frac{y^3}{2^5} \right)}\sqrt[4]{\left( 1- \frac{y^4}{2^5} \right)}$$ $$\nonumber \left( 1- \frac{y^1}{2^6} \right)\sqrt[5]{\left( 1- \frac{y^5}{2^6} \right)}$$ $$\nonumber \vdots \, ,$$ and the result, $$\nonumber \prod_{\substack{ j,k \geq 1; \, j \leq k \\ gcd(j,k)=1}} \left( 1+ \frac{y^j}{2^k} \right)^{\frac{1}{j}} = \frac{\sqrt[3]{(4-y^2)^4}}{\sqrt[3]{4}(2-y)^2} = 1 + y + \frac{5 y^2}{12} + \frac{y^3}{6} + \frac{11 y^4}{144} + \frac{5 y^5}{144} + O(y^6)$$ $$\nonumber = \left( 1+ \frac{y^1}{2^1} \right)$$ $$\nonumber \left( 1+ \frac{y^1}{2^2} \right)$$ $$\nonumber \left( 1+ \frac{y^1}{2^3} \right)\sqrt{\left( 1+ \frac{y^2}{2^3} \right)}$$ $$\nonumber \left( 1+ \frac{y^1}{2^4} \right)\sqrt[3]{\left( 1+ \frac{y^3}{2^4} \right)}$$ $$\nonumber \left( 1+ \frac{y^1}{2^5} \right)\sqrt{\left( 1+ \frac{y^2}{2^5} \right)}\sqrt[3]{\left( 1+ \frac{y^3}{2^5} \right)}\sqrt[4]{\left( 1+ \frac{y^4}{2^5} \right)}$$ $$\nonumber \left( 1+ \frac{y^1}{2^6} \right)\sqrt[5]{\left( 1+ \frac{y^5}{2^6} \right)}$$ $$\nonumber \vdots \, .$$ These two equations can be verified on a calculating engine like Mathematica or WolframAlpha by expanding each side into it's Taylor series around $y=0$ and comparing coefficients of like powers of $y$. However, the calculation is an infinite series for each coefficient, unlike in the previous examples, where it is a finite sum. # 3D VPV identities for a right square pyramid lattice {#S:3D VPV right hyperpyramids} As we did in section [4](#S:3D VPV hyperpyramids){reference-type="ref" reference="S:3D VPV hyperpyramids"}, we start with a simple $3D$ summation. We derive on a $3D$ inverted pyramid shaped lattice that extends infinitely and occupies four adjacent hyperquadrants of the eight hyperquadrants that comprise the $X-Y-Z$ $3$-space. 
We depict this infinite inverted pyramid with square layered arrays of lattice point vectors as per the following diagram, with VPVs bolded. $$\label{21.09ra} \tiny{ \begin{array}{ccccccccccc} \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots& & \\ \mathbf{\langle -1,-3,3 \rangle} & & \mathbf{\langle 0,-2,3 \rangle} & & \mathbf{\langle 1,-1,3 \rangle} & & \mathbf{\langle 2,0,3 \rangle} & & \mathbf{\langle 3,1,3 \rangle} & & \\ & \langle 0,-3,3 \rangle & & \mathbf{\langle 1,-2,3 \rangle} & & \mathbf{\langle 2,-1,3 \rangle} & & \langle 3,0,3 \rangle & & & \\ & & \mathbf{\langle 1,-3,3 \rangle} & & \mathbf{\langle 2,-2,3 \rangle} & & \mathbf{\langle 3,-1,3 \rangle} & & & & \\ & & & \mathbf{\langle 2,-3,3 \rangle} & & \mathbf{\langle 3,-2,3 \rangle} & & & & & \\ & & & & \langle 3,-3,3 \rangle & & & & & & \\ & & & & & & & & & & \\ & & & & \langle -2,2,2 \rangle & & & & & & \\ & & & \mathbf{\langle -2,1,2 \rangle} & & \mathbf{\langle -1,2,2 \rangle} & & & & & \\ & & \langle -2,0,2 \rangle & & \mathbf{\langle -1,1,2 \rangle} & & \langle 0,2,2 \rangle & & & & \\ & \mathbf{\langle -2,-1,2 \rangle} & & \mathbf{\langle -1,0,2\rangle} & & \mathbf{\langle 0,1,2 \rangle} & & \mathbf{\langle 1,2,2 \rangle} & & & \\ \langle -2,-2,2 \rangle & & \mathbf{\langle -1,-1,2 \rangle} & & \langle 0,0,2 \rangle & & \mathbf{\langle 1,1,2 \rangle} & & \langle 2,2,2 \rangle & & \\ & \mathbf{\langle -1,-2,2 \rangle} & & \mathbf{\langle 0,-1,2 \rangle} & & \mathbf{\langle 1,0,2 \rangle} & & \mathbf{\langle 2,1,2 \rangle} & & & \\ & & \langle 0,-2,2 \rangle & & \mathbf{\langle 1,-1,2 \rangle} & & \langle 2,0,2 \rangle & & & & \\ & & & \mathbf{\langle 1,-2,2 \rangle} & & \mathbf{\langle 2,-1,2 \rangle} & & & & & \\ & & & & \langle 2,-2,2 \rangle & & & & & & \\ & & & & & & & & & & \\ & & & & \mathbf{\langle -1,1,1 \rangle} & & & & & & \\ & & & \mathbf{\langle -1,0,1 \rangle} & & \mathbf{\langle 0,1,1 \rangle} & & & & & \\ & & \mathbf{\langle -1,-1,1 \rangle} & & \mathbf{\langle
0,0,1 \rangle} & & \mathbf{\langle 1,1,1 \rangle} & & & & \\ & & & \mathbf{\langle 0,-1,1 \rangle} & & \mathbf{\langle 1,0,1 \rangle} & & & & & \\ & & & & \mathbf{\langle 1,-1,1 \rangle} & & & & & & \\ & & & & & & & & & & \\ & & & & \langle 0,0,0 \rangle & & & & & & \end{array} }$$ So now we consider the sum over the inverted $3D$ right pyramid with apex at the origin $\langle 0,0,0 \rangle$, given (for $0 < |xz|, \, |z/x|, \, |yz|, \, |z/y|, \, |z| < 1$) by $$\nonumber \sum_{n=1}^{\infty} \left( \sum_{l=-n}^{n} \frac{x^l}{l^a} \right) \left( \sum_{m=-n}^{n} \frac{y^m}{m^b} \right) \frac{z^n}{n^c}$$ $$\nonumber =\left(\frac{x^{-1}}{(-1)^a}+1+\frac{x^1}{1^a}\right)\left(\frac{y^{-1}}{(-1)^b}+1+\frac{y^1}{1^b}\right)\frac{z^1}{1^c}$$ $$\nonumber +\left(\frac{x^{-2}}{(-2)^a}+\frac{x^{-1}}{(-1)^a}+1+\frac{x^1}{1^a}+\frac{x^2}{2^a}\right) \left(\frac{y^{-2}}{(-2)^b}+\frac{y^{-1}}{(-1)^b}+1+\frac{y^1}{1^b}+\frac{y^2}{2^b}\right)\frac{z^2}{2^c}$$ $$\nonumber +\left(\frac{x^{-3}}{(-3)^a}+\frac{x^{-2}}{(-2)^a}+\frac{x^{-1}}{(-1)^a}+1+\frac{x^1}{1^a}+\frac{x^2}{2^a}+\frac{x^3}{3^a}\right)$$ $$\nonumber \left(\frac{y^{-3}}{(-3)^b}+\frac{y^{-2}}{(-2)^b}+\frac{y^{-1}}{(-1)^b}+1+\frac{y^1}{1^b}+\frac{y^2}{2^b}+\frac{y^3}{3^b}\right)\frac{z^3}{3^c}$$ $$\nonumber +\left(\frac{x^{-4}}{(-4)^a}+\frac{x^{-3}}{(-3)^a}+\frac{x^{-2}}{(-2)^a} +\frac{x^{-1}}{(-1)^a}+1+\frac{x^1}{1^a}+\frac{x^2}{2^a}+\frac{x^3}{3^a}+\frac{x^4}{4^a}\right)$$ $$\nonumber \left(\frac{y^{-4}}{(-4)^b}+\frac{y^{-3}}{(-3)^b}+\frac{y^{-2}}{(-2)^b} +\frac{y^{-1}}{(-1)^b}+1+\frac{y^1}{1^b}+\frac{y^2}{2^b}+\frac{y^3}{3^b}+\frac{y^4}{4^b}\right)\frac{z^4}{4^c}$$ $$\nonumber +$$ $$\nonumber \vdots$$ $$\nonumber =\frac{x^{-1}y^1 z^1}{(-1)^a\; 1^b\; 1^c} \; \, \; + \; \, \; \frac{x^0 y^1 z^1}{1 \times 1^b\; 1^c} \; \, \; + \; \, \; \frac{x^1 y^1 z^1}{1^a\; 1^b\; 1^c}$$ $$\nonumber +\frac{x^{-1}y^0 z^1}{(-1)^a\times 1\times 1^c}+\frac{x^0 y^0 z^1}{1\times 1\times 1^c}+\frac{x^1 y^0 z^1}{1^a\times 1\times
1^c}$$ $$\nonumber +\frac{x^{-1}y^{-1}z^1}{(-1)^a\; (-1)^b\; 1^c}+\frac{x^0 y^{-1} z^1}{1\times (-1)^b\; 1^c}+\frac{x^1 y^{-1} z^1}{1^a\; (-1)^b\; 1^c}$$ $$\nonumber +\frac{x^{-2} y^2 z^2}{(-2)^a 2^b 2^c} \; \, \; + \; \, \; \frac{x^{-1} y^2 z^2}{(-1)^a 2^b 2^c} \; \, \; + \; \, \; \frac{x^0 y^2 z^2}{1\times 2^b 2^c} \; \, \; + \; \, \; \frac{x^1 y^2 z^2}{1^a 2^b 2^c} \; \, \; + \; \, \; \frac{x^2 y^2 z^2}{2^a 2^b 2^c}$$ $$\nonumber +\frac{x^{-2} y^1 z^2}{(-2)^a 1^b 2^c} \; \, \; + \; \, \; \frac{x^{-1} y^1 z^2}{(-1)^a 1^b 2^c} \; \, \; + \; \, \; \frac{x^0 y^1 z^2}{1\times 1^b 2^c} \; \, \; + \; \, \; \frac{x^1 y^1 z^2}{1^a 1^b 2^c} \; \, \; + \; \, \; \frac{x^2 y^1 z^2}{2^a 1^b 2^c}$$ $$\nonumber +\frac{x^{-2} y^0 z^2}{(-2)^a \times 1 \times 2^c}+\frac{x^{-1} y^0 z^2}{(-1)^a \times 1 \times 2^c}+\frac{x^0 y^0 z^2}{1\times 1 \times 2^c}+\frac{x^1 y^0 z^2}{1^a \times 1 \times 2^c}+\frac{x^2 y^0 z^2}{2^a \times 1 \times 2^c}$$ $$\nonumber +\frac{x^{-2} y^{-1} z^2}{(-2)^a (-1)^b 2^c}+\frac{x^{-1} y^{-1} z^2}{(-1)^a (-1)^b 2^c}+\frac{x^0 y^{-1} z^2}{1\times (-1)^b 2^c}+\frac{x^1 y^{-1} z^2}{1^a (-1)^b 2^c}+\frac{x^2 y^{-1} z^2}{2^a (-1)^b 2^c}$$ $$\nonumber +\frac{x^{-2} y^{-2} z^2}{(-2)^a (-2)^b 2^c}+\frac{x^{-1} y^{-2} z^2}{(-1)^a (-2)^b 2^c}+\frac{x^0 y^{-2} z^2}{1\times (-2)^b 2^c}+\frac{x^1 y^{-2} z^2}{1^a (-2)^b 2^c}+\frac{x^2 y^{-2} z^2}{2^a (-2)^b 2^c}$$ $$\nonumber +$$ $$\nonumber \vdots$$ $$\nonumber = \sum_{\substack{n \geq 1 \\ |l|,|m| \leq n}} \frac{x^l y^m z^n}{l^a m^b n^c}$$ $$\nonumber = \sum_{\substack{ h \geq 1, \; n \geq 1 \\ |l|,|m| \leq n ; \, \gcd(l,m,n)=1}} \frac{(x^l y^m z^n)^h}{h^{a+b+c} (l^a m^b n^c)}$$ $$\nonumber = \sum_{\substack{ n \geq 1, \; |l|,|m| \leq n \\ \gcd(l,m,n)=1}} \frac{1}{(l^a m^b n^c)} \sum_{h=1}^{\infty} \frac{(x^l y^m z^n)^h}{h^{a+b+c}}$$ $$\nonumber = \sum_{\substack{ n \geq 1, \; |l|,|m| \leq n \\ \gcd(l,m,n)=1}} \frac{1}{(l^a m^b n^c)} \log \left( \frac{1}{1 - x^l y^m z^n} \right) \quad \text{if} \quad
a+b+c=1.$$ Therefore, we have shown that if $a+b+c=1$ and $0 < |xz|, \, |z/x|, \, |yz|, \, |z/y|, \, |z| < 1$, then $$\nonumber \sum_{n=1}^{\infty} \left( \sum_{l=-n}^{n} \frac{x^l}{l^a} \right) \left( \sum_{m=-n}^{n} \frac{y^m}{m^b} \right) \frac{z^n}{n^c} = \sum_{\substack{ n \geq 1, \; |l|,|m| \leq n \\ \gcd(l,m,n)=1}} \frac{1}{(l^a m^b n^c)} \log \left( \frac{1}{1 - x^l y^m z^n} \right),$$ where, as in the arrays above, a factor $l^a$ or $m^b$ with zero index is to be read as $1$. Simplifying both sides gives us the $3D$ "right square pyramid VPV identity", where in this $3D$ case the pyramid takes the form of layered square shaped arrays of lattice point vectors as shown in the above workings, with the central axis rising vertically from the apex point $\langle 0,0,0 \rangle$. The identity is summarized in **Theorem 5**. ***The $3D$ right square pyramid VPV identity.** For $0 < |xz|, \, |z/x|, \, |yz|, \, |z/y|, \, |z| < 1$, with $a+b+c=1$, and with the convention that a factor $l^a$ or $m^b$ with zero index equals $1$, $$\label{21.10r} \prod_{\substack{n \geq 1, \; |l|,|m| \leq n \\ \gcd(l,m,n)=1}} \left( \frac{1}{1-x^l y^m z^n} \right)^{\frac{1}{l^a m^b n^c}} = \exp\left\{ \sum_{n=1}^{\infty} \left( \sum_{l=-n}^{n} \frac{x^l}{l^a} \right) \left( \sum_{m=-n}^{n} \frac{y^m}{m^b} \right) \frac{z^n}{n^c} \right\}.$$* As we did for the $2D$ particular cases, we can examine some obvious example corollaries arising from this theorem.
Firstly, take the case where $a=b=0, c=1$, so then $$\nonumber \prod_{\substack{n \geq 1; \; |l|,|m| \leq n \\ \gcd(l,m,n)=1}} \left( \frac{1}{1-x^l y^m z^n} \right)^{\frac{1}{n}} = \exp\left\{ \sum_{n=1}^{\infty} \left( \sum_{l=-n}^{n} x^l \right) \left( \sum_{m=-n}^{n} y^m \right) \frac{z^n}{n} \right\}$$ $$\nonumber = \exp\left\{ \sum_{n=1}^{\infty} \left(\frac{x^{2n+1} - 1}{x^n (x-1)}\right) \left(\frac{y^{2n+1} - 1}{y^n (y-1)}\right) \frac{z^n}{n} \right\}$$ $$\nonumber = \exp\left\{\frac{1}{(1-x)(1-y)}\sum_{n=1}^{\infty}\left((xy)^{n+1} + \left(\frac{1}{xy}\right)^n - x\left(\frac{x}{y}\right)^n - y\left(\frac{y}{x}\right)^n \right) \frac{z^n}{n} \right\}$$ $$\nonumber = \exp\left\{ \frac{1}{(1-x)(1-y)} \log\left(\frac{(1-xz/y)^x(1-yz/x)^y}{(1-xyz)^{xy}(1-z/(xy))}\right) \right\}$$ $$\nonumber = \left\{ \frac{(1-xz/y)^x(1-yz/x)^y}{(1-xyz)^{xy}(1-z/(xy))}\right\}^{\frac{1}{(1-x)(1-y)}}.$$ This then implies **Corollary 8**. *For $0 < |xz|, \, |z/x|, \, |yz|, \, |z/y|, \, |z| < 1$, $$\label{21.11r} \prod_{\substack{n \geq 1; \; |l|,|m| \leq n \\ \gcd(l,m,n)=1}} \left( \frac{1}{1-x^l y^m z^n} \right)^{\frac{1}{n}} = \left\{ \frac{(1-xz/y)^x(1-yz/x)^y}{(1-xyz)^{xy}(1-z/(xy))}\right\}^{\frac{1}{(1-x)(1-y)}},$$ and its reciprocal identity, $$\label{21.12r} \prod_{\substack{n \geq 1; \; |l|,|m| \leq n \\ \gcd(l,m,n)=1}} \left( 1-x^l y^m z^n \right)^{\frac{1}{n}} = \left\{ \frac{(1-xyz)^{xy}(1-z/(xy))}{(1-xz/y)^x(1-yz/x)^y}\right\}^{\frac{1}{(1-x)(1-y)}}.$$* We see that ([\[21.11r\]](#21.11r){reference-type="ref" reference="21.11r"}) and ([\[21.12r\]](#21.12r){reference-type="ref" reference="21.12r"}) are generalizations of the 2D identities ([\[21.02\]](#21.02){reference-type="ref" reference="21.02"}) and ([\[21.03\]](#21.03){reference-type="ref" reference="21.03"}) from an earlier section.
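Corollary 8 lends itself to the same kind of machine check as the $2D$ identities. The sketch below (Python; the sample point $x=0.7$, $y=0.8$, $z=0.3$ and the truncation depth $n \le 60$ are arbitrary choices satisfying the stated inequalities) compares the logarithms of the two sides of ([\[21.11r\]](#21.11r){reference-type="ref" reference="21.11r"}); following the expanded derivation above, the inner indices $l, m$ run over $-n, \dots, n$ including zero.

```python
from math import gcd, log

x, y, z = 0.7, 0.8, 0.3   # satisfies |xz|, |z/x|, |yz|, |z/y|, |z| < 1
N = 60                    # truncation depth; tail terms are O((z/(xy))^N)

lhs = 0.0                 # log of the truncated product side of (21.11r)
for n in range(1, N + 1):
    zn = z ** n
    for l in range(-n, n + 1):
        for m in range(-n, n + 1):
            if gcd(gcd(abs(l), abs(m)), n) != 1:
                continue
            lhs += -log(1.0 - x ** l * y ** m * zn) / n

# log of the closed form on the right of (21.11r)
rhs = (x * log(1 - x * z / y) + y * log(1 - y * z / x)
       - x * y * log(1 - x * y * z) - log(1 - z / (x * y))) / ((1 - x) * (1 - y))

print(lhs, rhs)           # the two values agree to high accuracy
```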
# VPV identities in nD right-square hyperpyramid regions {#S:VPV right-hyperpyramids} The $n$ dimensional square hyperpyramid VPV Identity is encoded in the following **Theorem 6**. ***The $nD$ right-square hyperpyramid VPV identity.** If $i = 1, 2, 3,...,n$ then for each $x_i \in \mathbb{C}$ such that $|x_i|<1$ and $b_i \in \mathbb{C}$ such that $\sum_{i=1}^{n}b_i = 1$, and with the convention that any factor $a_i^{b_i}$ with $a_i = 0$ is replaced by $1$, $$\label{21.13r} \prod_{\substack{ \gcd(|a_1|,|a_2|,...,|a_{n-1}|,a_n)=1 \\ |a_1|,|a_2|,...,|a_{n-1}| \leq a_n \\ a_n \geq 1}} \left( \frac{1}{1-{x_1}^{a_1}{x_2}^{a_2}{x_3}^{a_3}\cdots{x_n}^{a_n}} \right)^{\frac{1}{{a_1}^{b_1}{a_2}^{b_2}{a_3}^{b_3}\cdots{a_n}^{b_n}}}$$ $$\nonumber = \exp\left\{ \sum_{k=1}^{\infty} \prod_{i=1}^{n-1} \left( \sum_{j=-k}^{k} \frac{{x_i}^j}{j^{b_i}} \right)\frac{{x_n}^k}{k^{b_n}} \right\}$$ $$\nonumber = \exp\left\{ \sum_{k=1}^{\infty} \left( \sum_{j=-k}^{k} \frac{{x_1}^j}{j^{b_1}} \right) \left( \sum_{j=-k}^{k} \frac{{x_2}^j}{j^{b_2}} \right) \left( \sum_{j=-k}^{k} \frac{{x_3}^j}{j^{b_3}} \right) \cdots \left( \sum_{j=-k}^{k} \frac{{x_{n-1}}^j}{j^{b_{n-1}}} \right) \frac{{x_n}^k}{k^{b_n}} \right\}.$$* This result is quite straightforward to prove using the technique of our two previous sections. The methodology was also given in Campbell [@gC2000], but corollaries of theorem [Theorem 6](#21.1ar){reference-type="ref" reference="21.1ar"} have not been worked through in the 24 years since. Since in the previous section we gave the 3D example of theorem [Theorem 6](#21.1ar){reference-type="ref" reference="21.1ar"}, we state the 4D case of it now. **Corollary 9**.
*If $0 < |wz|, |z/w|, |xz|, |z/x|, |yz|, |z/y|, |z| < 1$ and $r+s+t+u=1$ then, with zero indices contributing a factor $1$ in the exponent, $$\label{21.16ar} \prod_{\substack{ \gcd(a,b,c,d)=1 \\ |a|,|b|,|c| \leq d \\ d \geq 1}} \left( \frac{1}{1-{w}^{a}{x}^{b}{y}^{c}{z}^{d}} \right)^{\frac{1}{{a}^{r}{b}^{s}{c}^{t}{d}^{u}}} = \exp \left\{ \mathrm{P}(r,w;s,x;t,y;u,z)\right\}$$ where $$\nonumber \mathrm{P}(r,w;s,x;t,y;u,z) = \sum_{n=1}^{\infty} \left( \sum_{k=-n}^{n} \frac{w^k}{k^r} \right) \left( \sum_{k=-n}^{n} \frac{x^k}{k^s} \right) \left( \sum_{k=-n}^{n} \frac{y^k}{k^t} \right) \frac{z^{n}}{n^u}.$$* Take the case where $r=s=t=0, u=1$, so then, $$\nonumber \prod_{\substack{k \geq 1; \; |h|,|i|,|j| \leq k \\ \gcd(h,i,j,k)=1}} \left( \frac{1}{1-w^h x^i y^j z^k} \right)^{\frac{1}{k}}$$ $$\nonumber = \exp\left\{ \sum_{n=1}^{\infty} \left( \sum_{k=-n}^{n} w^k \right) \left( \sum_{k=-n}^{n} x^k \right) \left( \sum_{k=-n}^{n} y^k \right) \frac{z^n}{n} \right\}$$ $$\nonumber = \exp\left\{ \sum_{n=1}^{\infty} \left(\frac{w^{2n+1} - 1}{w^n (w-1)}\right) \left(\frac{x^{2n+1} - 1}{x^n (x-1)}\right) \left(\frac{y^{2n+1} - 1}{y^n (y-1)}\right) \frac{z^n}{n} \right\}$$ $$\nonumber = \left\{ \frac{(1-wxyz)^{wxy}(1-wz/(xy))^w (1-xz/(wy))^x (1-yz/(wx))^y} {(1-wxz/y)^{wx}(1-wyz/x)^{wy}(1-xyz/w)^{xy}(1-z/(wxy))}\right\}^{\frac{1}{(1-w)(1-x)(1-y)}}.$$ This then implies **Corollary 10**.
*For $0 < |wz|, |z/w|, |xz|, |z/x|, |yz|, |z/y|, |z| < 1$, $$\label{21.11r1} \prod_{\substack{k \geq 1; \; |h|,|i|,|j| \leq k \\ \gcd(h,i,j,k)=1}} \left( \frac{1}{1-w^h x^i y^j z^k} \right)^{\frac{1}{k}}$$ $$\nonumber = \left\{ \frac{(1-wxyz)^{wxy}(1-wz/(xy))^w (1-xz/(wy))^x (1-yz/(wx))^y} {(1-wxz/y)^{wx}(1-wyz/x)^{wy}(1-xyz/w)^{xy}(1-z/(wxy))}\right\}^{\frac{1}{(1-w)(1-x)(1-y)}},$$ and its reciprocal identity, $$\label{21.12r1} \prod_{\substack{k \geq 1; \; |h|,|i|,|j| \leq k \\ \gcd(h,i,j,k)=1}} \left( 1-w^h x^i y^j z^k \right)^{\frac{1}{k}}$$ $$\nonumber = \left\{ \frac{(1-wxz/y)^{wx}(1-wyz/x)^{wy}(1-xyz/w)^{xy}(1-z/(wxy))}{(1-wxyz)^{wxy}(1-wz/(xy))^w (1-xz/(wy))^x (1-yz/(wx))^y}\right\}^{\frac{1}{(1-w)(1-x)(1-y)}}.$$* Note that both sides of ([\[21.11r1\]](#21.11r1){reference-type="ref" reference="21.11r1"}) are formally equivalent to the power series $$\nonumber 1 + \frac{z}{1!} + \begin{vmatrix} \frac{(1-w^{3})(1-x^{3})(1-y^{3})}{w^1 x^1 y^1(1-w)(1-x)(1-y)} & -1 \\ \frac{(1-w^{5})(1-x^{5})(1-y^{5})}{w^2 x^2 y^2(1-w)(1-x)(1-y)} & \frac{(1-w^{3})(1-x^{3})(1-y^{3})}{w^1 x^1 y^1(1-w)(1-x)(1-y)} \\ \end{vmatrix} \frac{z^2}{2!}$$ $$\nonumber + \begin{vmatrix} \frac{(1-w^{3})(1-x^{3})(1-y^{3})}{w^1 x^1 y^1(1-w)(1-x)(1-y)} & -1 & 0 \\ \frac{(1-w^{5})(1-x^{5})(1-y^{5})}{w^2 x^2 y^2(1-w)(1-x)(1-y)} & \frac{(1-w^{3})(1-x^{3})(1-y^{3})}{w^1 x^1 y^1(1-w)(1-x)(1-y)} & -2 \\ \frac{(1-w^{7})(1-x^{7})(1-y^{7})}{w^3 x^3 y^3(1-w)(1-x)(1-y)} & \frac{(1-w^{5})(1-x^{5})(1-y^{5})}{w^2 x^2 y^2(1-w)(1-x)(1-y)} & \frac{(1-w^{3})(1-x^{3})(1-y^{3})}{w^1 x^1 y^1(1-w)(1-x)(1-y)} \\ \end{vmatrix} \frac{z^3}{3!}+ etc.$$ # Envoi: Research Exercises - **2D first quadrant Upper VPV combinatorial sum**. For each of $|y|, |z|<1,$ show that $$\label{21.37a} \sum_{\substack{\gcd(a,b)=1 \\ 0 \leq a < b}} \frac{y^a \, z^b}{1 - y^a \, z^b} = \frac{z}{(1-z)(1-yz)}.$$ For this example we work it through in more detail.
Proof: Starting with the left side of ([\[21.37a\]](#21.37a){reference-type="ref" reference="21.37a"}), the coprime pairs $\langle a,b \rangle$ with $0 \leq a < b$ are $\langle 0,1 \rangle$, $\langle 1,2 \rangle$, $\langle 1,3 \rangle$, $\langle 2,3 \rangle$, and so on, so $$\begin{aligned} LHS &=& \frac{y^0 z^1}{1 - y^0 z^1} \\ &+& \frac{y^1 z^2}{1 - y^1 z^2} \\ &+& \frac{y^1 z^3}{1 - y^1 z^3} + \frac{y^2 z^3}{1 - y^2 z^3} \\ &+& \frac{y^1 z^4}{1 - y^1 z^4} + \frac{y^3 z^4}{1 - y^3 z^4} \\ &+& \frac{y^1 z^5}{1 - y^1 z^5} + \frac{y^2 z^5}{1 - y^2 z^5} + \frac{y^3 z^5}{1 - y^3 z^5} + \frac{y^4 z^5}{1 - y^4 z^5} \\ &+& \frac{y^1 z^6}{1 - y^1 z^6} + \frac{y^5 z^6}{1 - y^5 z^6} \\ &+& etc. \end{aligned}$$ $$= \sum_{\substack{\gcd(j,k)=1 \\ 0 \leq j < k}} \sum_{a=1}^{\infty}(y^{aj} z^{ak}) = \sum_{a=1}^{\infty} \sum_{\substack{\gcd(j,k)=1 \\ 0 \leq j < k}}(y^{aj} z^{ak}).$$ Since every pair $\langle A,B \rangle$ with $0 \leq A < B$ is uniquely $a \langle j,k \rangle$ for some $a \geq 1$ and coprime $\langle j,k \rangle$, the double sum collects each such pair exactly once, so the above $$\begin{aligned} &=& y^0 z^1 \\ &+& (y^0 + y^1) z^2 \\ &+& (y^0 + y^1 + y^2) z^3 \\ &+& (y^0 + y^1 + y^2 + y^3) z^4 \\ &+& (y^0 + y^1 + y^2 + y^3 + y^4) z^5 \\ &+& (y^0 + y^1 + y^2 + y^3 + y^4 + y^5) z^6 \\ &+& etc. \end{aligned}$$ $$= \frac{1-y}{1-y}z + \frac{1-y^2}{1-y}z^2 + \frac{1-y^3}{1-y}z^3 + \frac{1-y^4}{1-y}z^4 + \cdots$$ $$= \frac{1}{1-y} \left( \frac{z}{1-z} - \frac{yz}{1-yz} \right)$$ $$= \frac{z}{(1-z)(1-yz)}. \quad \quad \quad \blacksquare$$ - **2D first quadrant Upper VPV zeta function analogy for the previous exercise**.
Show that for $\Re y > 1$, $\Re z > 1$, $$\label{21.37a1} \sum_{\substack{\gcd(a,b)=1 \\ a, b \geq 1}} \frac{1}{a^y b^z} = \frac{\zeta(y)\zeta(z)}{\zeta(y+z)}.$$ Proof: Every pair of positive integers $\langle m,n \rangle$ is uniquely $h \langle j,k \rangle$ with $h = \gcd(m,n) \geq 1$ and coprime $j, k \geq 1$. Hence $$\zeta(y)\zeta(z) = \sum_{m,n \geq 1} \frac{1}{m^y n^z} = \sum_{h=1}^{\infty} \frac{1}{h^{y+z}} \sum_{\substack{\gcd(j,k)=1 \\ j,k \geq 1}} \frac{1}{j^y k^z} = \zeta(y+z) \sum_{\substack{\gcd(j,k)=1 \\ j,k \geq 1}} \frac{1}{j^y k^z},$$ and dividing both sides by $\zeta(y+z)$ gives $$\sum_{\substack{\gcd(a,b)=1 \\ a,b \geq 1}} \frac{1}{a^y b^z} = \frac{\zeta(y)\zeta(z)}{\zeta(y+z)}.
\quad \quad \quad \blacksquare$$ Infer from ([\[21.37a\]](#21.37a){reference-type="ref" reference="21.37a"}) and ([\[21.37a1\]](#21.37a1){reference-type="ref" reference="21.37a1"}) that they encode two similar statements about partitions into lattice point vectors $\langle a,b \rangle$ with $\gcd(a,b)=1$: for ([\[21.37a\]](#21.37a){reference-type="ref" reference="21.37a"}) we have non-negative integers $a$ and positive integers $b$; whereas for ([\[21.37a1\]](#21.37a1){reference-type="ref" reference="21.37a1"}) we have positive integers for both $a$ and $b$. - **Find the particular cases of ([\[21.37a\]](#21.37a){reference-type="ref" reference="21.37a"})**. Prove the following: $$\begin{aligned} \sum_{\substack{\gcd(a,b)=1 \\ 0 \leq a < b}} \frac{1}{3^{an} \, 2^{bn} -1} &=& \frac{2^{-n}}{(1-2^{-n})(1-6^{-n})} \quad (\text{for } \Re n > 1), \\ &=& \frac{1}{2^{n}}\left(1+\frac{1}{2^n}+\frac{1}{4^n}+\frac{1}{6^n}+\frac{1}{8^n}+\frac{1}{12^n}+\frac{1}{16^n}+\frac{1}{24^n}+\cdots \right); \\ \sum_{\substack{\gcd(a,b)=1 \\ 0 \leq a < b}} \frac{z^{a+b}}{1 - z^{a+b}} &=& \frac{z}{(1-z)(1-z^2)}, \quad |z|<1; \\ \sum_{\substack{\gcd(a,b)=1 \\ 0 \leq a < b}} \frac{\tan^{2(a+b)}(\theta)}{1 - \tan^{2(a+b)}(\theta)} &=& \frac{\tan^{2}(2\theta)\cos^{2}(\theta)}{4}, \quad |\tan \theta| < 1.\end{aligned}$$ - **Find the particular cases of ([\[21.37a1\]](#21.37a1){reference-type="ref" reference="21.37a1"})**. Prove the following: $$\begin{aligned} \sum_{\substack{\gcd(a,b)=1 \\ a,b \geq 1}} \frac{1}{a^2 b^2} &=& \frac{5}{2}, \\ \sum_{\substack{\gcd(a,b)=1 \\ a,b \geq 1}} \frac{1}{a^2 b^3} &=& \frac{\pi^2 \zeta(3)}{6 \zeta(5)}, \\ \sum_{\substack{\gcd(a,b)=1 \\ a,b \geq 1}} \frac{1}{a^3 b^5} &=& \frac{9450\zeta(3)\zeta(5)}{\pi^8}.\end{aligned}$$ - **3D first hyperquadrant sums using gcd**.
For each of $|x|, |y|, |z|<1,$ show that $$\label{21.37b} \sum_{\substack{\gcd(a,b,c)=1 \\ a,b \geq 0,c>0}} \frac{x^a \, y^b \, z^c}{1 - x^a \, y^b \, z^c} = \frac{z}{(1-x)(1-y)(1-z)}.$$ Similarly, show that for $\Re x > 1$, $\Re y > 1$, $\Re z > 1$, $$\label{21.37b1} \sum_{\substack{\gcd(a,b,c)=1 \\ a,b,c \geq 1}} \frac{1}{a^x b^y c^z} = \frac{\zeta(x)\zeta(y)\zeta(z)}{\zeta(x+y+z)}.$$ Infer from ([\[21.37b\]](#21.37b){reference-type="ref" reference="21.37b"}) and ([\[21.37b1\]](#21.37b1){reference-type="ref" reference="21.37b1"}) that they encode two similar statements about partitions into lattice point vectors $\langle a,b,c \rangle$ with $gcd(a,b,c)=1$ and for ([\[21.37b\]](#21.37b){reference-type="ref" reference="21.37b"}) we have non-negative integers $a$, $b$ and positive integers $c$; whereas for ([\[21.37b1\]](#21.37b1){reference-type="ref" reference="21.37b1"}) we have positive integers for $a$, $b$ and $c$. - **4D hyperquadrant VPV sums using gcd**. For each of $|w|, |x|, |y|, |z|<1,$ show that $$\label{21.37c} \sum_{\substack{\gcd(a,b,c,d)=1 \\ a,b,c\geq0,d>0}} \frac{w^a \, x^b \, y^c \, z^d}{1 - w^a \, x^b \, y^c \, z^d} = \frac{z}{(1-w)(1-x)(1-y)(1-z)}.$$ Similarly, show that for $\Re w$, $\Re x$, $\Re y$, and $\Re z$ all $> 1$, $$\label{21.37c1} \sum_{\substack{\gcd(a,b,c,d)=1 \\ a,b,c,d \geq 1}} \frac{1}{a^w b^x c^y d^z} = \frac{\zeta(w)\zeta(x)\zeta(y)\zeta(z)}{\zeta(w+x+y+z)}.$$ Infer from ([\[21.37c\]](#21.37c){reference-type="ref" reference="21.37c"}) and ([\[21.37c1\]](#21.37c1){reference-type="ref" reference="21.37c1"}) that they encode two similar statements about partitions into lattice point vectors $\langle a,b,c,d \rangle$ with $gcd(a,b,c,d)=1$ and for ([\[21.37c\]](#21.37c){reference-type="ref" reference="21.37c"}) we have non-negative integers $a$, $b$, $c$ and positive integers $d$; whereas for ([\[21.37c1\]](#21.37c1){reference-type="ref" reference="21.37c1"}) we have positive integers for $a$, $b$, $c$ and $d$. 
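Both of the preceding hyperquadrant assertions are easy to probe numerically. The sketch below (Python; the sample point and truncation bounds are arbitrary) checks ([\[21.37b\]](#21.37b){reference-type="ref" reference="21.37b"}) at $x=0.4$, $y=0.5$, $z=0.3$, and checks ([\[21.37b1\]](#21.37b1){reference-type="ref" reference="21.37b1"}) at $x=y=z=2$, where the truncated coprime triple sum approaches $\zeta(2)^3/\zeta(6) = 945/216$ from below as the cutoff grows.

```python
from math import gcd

# (21.37b): sum over gcd(a,b,c)=1, a,b >= 0, c >= 1 of t/(1-t), with
# t = x^a y^b z^c, versus the closed form z/((1-x)(1-y)(1-z)).
x, y, z = 0.4, 0.5, 0.3          # arbitrary sample point, all moduli < 1
N = 60                           # truncation; the tail is negligible here
s = 0.0
for a in range(N):
    for b in range(N):
        for c in range(1, N):
            if gcd(gcd(a, b), c) == 1:
                t = x**a * y**b * z**c
                s += t / (1.0 - t)
closed = z / ((1 - x) * (1 - y) * (1 - z))

# (21.37b1) at x = y = z = 2: coprime triples a,b,c <= M, compared with
# zeta(2)^3 / zeta(6) = (pi^2/6)^3 / (pi^6/945) = 945/216.
M = 100
s2 = sum(1.0 / (a * b * c) ** 2
         for a in range(1, M + 1)
         for b in range(1, M + 1)
         for c in range(1, M + 1)
         if gcd(gcd(a, b), c) == 1)
print(s, closed, s2, 945 / 216)
```

Note that `gcd(0, k) = k` in Python, so the triple $\langle 0,0,1 \rangle$ is correctly the only one with $a=b=0$ passing the coprimality test.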
- **5D hyperquadrant VPV sums using gcd**. For each of $|v|, |w|, |x|, |y|, |z|<1,$ show that $$\label{21.37d} \sum_{\substack{\gcd(a,b,c,d,e)=1 \\ a,b,c,d \geq 0,\, e > 0}} \frac{v^a \, w^b \, x^c \, y^d \, z^e}{1 - v^a \, w^b \, x^c \, y^d \, z^e} = \frac{z}{(1-v)(1-w)(1-x)(1-y)(1-z)}.$$ Similarly, show that for $\Re v$, $\Re w$, $\Re x$, $\Re y$, and $\Re z$ all $> 1$, $$\label{21.37d1} \sum_{\substack{\gcd(a,b,c,d,e)=1 \\ a,b,c,d,e \geq 1}} \frac{1}{a^v b^w c^x d^y e^z} = \frac{\zeta(v)\zeta(w)\zeta(x)\zeta(y)\zeta(z)}{\zeta(v+w+x+y+z)}.$$ Infer from ([\[21.37d\]](#21.37d){reference-type="ref" reference="21.37d"}) and ([\[21.37d1\]](#21.37d1){reference-type="ref" reference="21.37d1"}) that they encode two similar statements about partitions into lattice point vectors $\langle a,b,c,d,e \rangle$ with $gcd(a,b,c,d,e)=1$ and for ([\[21.37d\]](#21.37d){reference-type="ref" reference="21.37d"}) we have non-negative integers $a$, $b$, $c$, $d$ and positive integers $e$; whereas for ([\[21.37d1\]](#21.37d1){reference-type="ref" reference="21.37d1"}) we have positive integers for $a$, $b$, $c$, $d$ and $e$. - **nD hyperquadrant sums using gcd**.
For each of $|q_1|, |q_2|, |q_3|, \ldots, |q_n|<1,$ show that $$\label{21.37e} \sum_{\substack{\gcd(a_1,a_2,a_3,\ldots,a_n)=1 \\ a_1,a_2,\ldots,a_{n-1} \geq 0,a_n \geq 1}} \frac{q_1^{a_1} \, q_2^{a_2} \, q_3^{a_3} \cdots q_n^{a_n}}{1 - q_1^{a_1} \, q_2^{a_2} \, q_3^{a_3} \cdots q_n^{a_n}} = \frac{q_n}{(1-q_1)(1-q_2)(1-q_3)\cdots(1-q_n)}.$$ Similarly, show that for $\Re q_1$, $\Re q_2$, $\Re q_3$, $\ldots$, and $\Re q_n$ all $> 1$, $$\label{21.37d1r} \sum_{\substack{\gcd(a_1,a_2,a_3, \ldots ,a_n)=1 \\ a_1,a_2, \ldots, a_{n} \geq 1}} \frac{1}{a_1^{q_1} a_2^{q_2} a_3^{q_3} \cdots a_n^{q_n}} = \frac{\zeta(q_1)\zeta(q_2)\zeta(q_3) \ldots \zeta(q_n)}{\zeta({q_1}+ {q_2}+ {q_3}+ \cdots +{q_n})}.$$ Infer from ([\[21.37e\]](#21.37e){reference-type="ref" reference="21.37e"}) and ([\[21.37d1r\]](#21.37d1r){reference-type="ref" reference="21.37d1r"}) that they encode two similar statements about partitions into lattice point vectors $\langle a_1,a_2,a_3,\ldots,a_n \rangle$ with $gcd(a_1,a_2,a_3,\ldots,a_n)=1$ and for ([\[21.37e\]](#21.37e){reference-type="ref" reference="21.37e"}) we have non-negative integers $a_1$, $a_2$, $\ldots$, $a_{n-1}$ and positive integers $a_n$; whereas for ([\[21.37d1r\]](#21.37d1r){reference-type="ref" reference="21.37d1r"}) we have positive integers for $a_1$, $a_2$, $\ldots$, up to $a_{n}$. BAILEY, D.H. and BORWEIN, J.M. *Computation and structure of character polylogarithms with applications to character Mordell-Tornheim-Witten sums.* Math. Comput., 85:295--324, 2016. BAILEY, D.H., BORWEIN, J.M. and GIRGENSOHN, R. *Experimental Evaluation of Euler Sums.* Exper. Math., 3:17--30, 1994. BAILEY, D.H., BORWEIN, P.B. and PLOUFFE, S. *On the Rapid Computation of Various Polylogarithmic Constants.* Mathematics of Computation, 66(218):903--913, April 1997. BAILEY, D.H. and BROADHURST, D.J. *A Seventeenth-Order Polylogarithm Ladder*: http://arxiv.org/abs/math.CA/9906134/. arxiv preprint, June 1999. BERNDT, B.C.
*Ramanujan's Notebooks, Part IV.* New York: Springer-Verlag, 1994. ISBN 0-387-94109-6. BORWEIN, D., BORWEIN, J.M. and GIRGENSOHN, R. *Explicit evaluation of Euler sums.* Proc. Edinburgh Math. Soc., 38:277----294, 1995. BORWEIN, D.; BORWEIN, J.M.; BRADLEY, D.M. *Parametric Euler sum identities.* J. Math. Anal. Appl., 316:328----338, 2006. BORWEIN, J.M. and BAILEY, D.M. *Mathematics by Experiment: Plausible Reasoning in the 21st Century.* Wellesley, MA, 2003. Ed. A K Peters. BORWEIN, J.M., BRADLEY, D.M., BROADHURST, D.J. and LISONEK, P. *Special Values of Multidimensional Polylogarithms.* CECM, 98(106), 1998. BORWEIN, J.M.; BRADLEY, D.M.; BROADHURST, D.J. and LISONEK, P. *Special values of multiple polylogarithms.* Trans. Am. Math. Soc., 353:907----941, 2001. BORWEIN, J.M., GLASSER, M.L., MCPHEDRAN, R.C., WAN, J.G. and ZUCKER, I.J. *Lattice Sums then and now.* volume 150 of Encyclopedia of Mathematics and Its Applications. Cambridge University Press, 2013. CAMPBELL, G. B. *Multiplicative functions over Riemann zeta function products*, J. Ramanujan Soc. 7 No. 1, 1992, 52-63. CAMPBELL, G. B. *Dirichlet summations and products over primes*, Int. J. Math. Math. Sci., Vol 16, No 2, (1993) 359-372. CAMPBELL, G. B. *A generalized formula of Hardy*, Int. J. Math. Math. Sci., Vol 17, No 2, (1994) 369-378. CAMPBELL, G. B. *A new class of infinite products, and Euler's totient*, International Journal of Mathematics and Mathematical Sciences, vol. 17, no. 3, pp. 417-422, 1994. https://doi.org/10.1155/S0161171294000591. CAMPBELL, G. B. *Infinite products over visible lattice points*, International Journal of Mathematics and Mathematical Sciences, vol. 17, no. 4, pp. 637-654, 1994. https://doi.org/10.1155/S0161171294000918. CAMPBELL, G. B. *Combinatorial identities in number theory related to q-series and arithmetical functions*, Doctor of Philosophy Thesis, School of Mathematical Sciences, The Australian National University, October 1997. CAMPBELL, G. B. 
--- abstract: | We analyze the spectrum of the Neumann-Poincaré (NP) operator for a doubly connected domain lying between two level curves defined by a conformal mapping, where the inner boundary of the domain is of general shape. The analysis relies on an infinite-matrix representation of the NP operator involving the Grunsky coefficients of the conformal mapping and an application of the Gershgorin circle theorem. As the thickness of the domain shrinks to zero, the spectrum of the doubly connected domain approaches the interval $[-1/2,1/2]$ in the Hausdorff distance and the density of eigenvalues approaches that of a thin circular annulus. author: - Doosung Choi - "Mikyoung Lim[^1]" - "Stephen P. Shipman[^2]" title: Spectral analysis of the Neumann-Poincaré operator for thin doubly connected domains --- # Introduction Let $\Omega$ be a bounded domain in the real $(x,y)$-plane with an analytic boundary $\Gamma$. The Neumann--Poincaré (NP) operator associated with $\Omega$ is the boundary-integral operator $$\mathcal{K}_\Gamma[\phi](x)= -\frac{1}{2\pi} \int_\Gamma \frac{\left\langle x-y,\nu_y\right\rangle}{|x-y|^2}\phi(y)\, d\sigma(y),\quad x\in\Gamma,$$ in which $\nu_x$ is the unit normal vector at $x$. Its $L^2$-adjoint, which we call the adjoint NP operator, is $$\mathcal{K}_\Gamma^*[\phi](x)= \frac{1}{2\pi} \int_\Gamma \frac{\left\langle x-y,\nu_x\right\rangle}{|x-y|^2}\phi(y)\, d\sigma(y),\quad x\in\Gamma.$$ The normal vector $\nu_x$ is directed into the region exterior to $\Omega$. 
The adjoint NP operator is related to the single-layer potential $$\begin{aligned} \displaystyle&\mathcal{S}_\Gamma[\phi](x)= \frac{1}{2\pi} \int_\Gamma \ln|x-y|\phi(y)\, d\sigma(y)\quad \mbox{for }x\in\mathbb{R}^2,\end{aligned}$$ which is a harmonic function in $\mathbb{R}^2\!\setminus\!\Gamma$, by the "jump relations" on $\Gamma$: $$\label{eqn:Kstarjump} \begin{aligned} \mathcal{S}_\Gamma[\phi]\big|^{+}&=\mathcal{S}_\Gamma[\phi]\big|^{-},\\ \frac{\partial}{\partial\nu}\mathcal{S}_\Gamma[\phi]\Big|^{\pm}&=\left(\pm\frac{1}{2}I+\mathcal{K}^*_\Gamma\right)[\phi]. \end{aligned}$$ We will denote the identity operator in any space by $I$. Setting $H=\mathcal{S}_\Gamma[\phi]$, the jump relations can be written equivalently as $$\label{Kgeom} \begin{aligned} \phi \;=\; \left[ \frac{\partial H}{\partial\nu} \right]_\Gamma &\;:=\; \frac{\partial H}{\partial\nu}\Big|^+ - \frac{\partial H}{\partial\nu}\Big|^- \\ \mathcal{K}^*_\Gamma[\phi] \;=\; \left\langle \frac{\partial H}{\partial\nu} \right\rangle_{\!\!\Gamma} &\;:=\; \frac{1}{2} \left( \frac{\partial H}{\partial\nu}\Big|^+ + \frac{\partial H}{\partial\nu}\Big|^- \right). \end{aligned}$$ The NP operator naturally arises in interface problems for the Laplace equation, and its spectral properties have been intensely studied. The NP operator is not symmetric on $L^2(\Gamma)$ unless $\Omega$ is a disk or a ball [@Lim:2015:SBI], but it can be symmetrized using Plemelj's symmetrization principle $\mathcal{K}_\Gamma\mathcal{S}_\Gamma=\mathcal{S}_\Gamma\mathcal{K}^*_\Gamma$ [@Khavinson:2007:PVP], whereby $\mathcal{K}^*_\Gamma$ becomes self-adjoint on $H^{-1/2}(\Gamma)$ and its spectrum $\sigma(\mathcal{K}_{\Gamma}^*)$ is contained in $(-1/2,1/2]$ [@Escauriaza:2004:TPS; @Fabes:1992:SRC; @Kellogg:1929:FPT]. When $\Gamma$ is of class $C^{1,\alpha}$, the NP operator is compact and, thus, $\sigma(\mathcal{K}_{\Gamma}^*)$ is a sequence that accumulates to $0$. 
In two dimensions, the so-called twin-spectrum relation holds [@Mayergoyz:2005:ERN], that is, the eigenvalues come in plus-minus pairs. The spectrum of the NP operator is known explicitly for special shapes. For a circle, it is $\{0, 1/2\}$, and it is known for a sphere, an ellipse, an ellipsoid [@Neumann:1887:MAM; @Ritter:1995:SEI], confocal ellipses [@Chung:2014:CAL], and concentric disks and spheres [@Ammari:2013:STN; @Ammari:2014:STN2]. For a torus, the NP operator has infinitely many negative eigenvalues [@Ando:2019:SSN]. When $\Gamma$ has corners or cusps, the NP operator has essential spectrum [@PerfektPutinar2017; @Cha:2018:PME], and interesting cases and phenomena have been studied, such as intersecting disks [@Kang:2017:SRN; @Helsing:2017:CSN], wedge [@Perfekt2019] and conical [@HelsingPerfekt2018] shapes, touching disks and crescent domains [@Jung:2023:SAN], and embedded eigenvalues [@LiPerfektShipman2021; @LiShipman2019]. We refer to [@Ammari:2018:MCM; @Kang:2022:SGA; @Khavinson:2007:PVP] and the references therein for more results on the spectrum of the NP operator. For thin domains, which are the focus of this paper, there are some interesting results. Ando *et al.* [@Ando:2022:SSN1] showed that, for any divergent positive sequence $\{R_j\}_{j=1}^\infty$, the union of spectra of the NP operator for planar rectangles $\Omega_j$ with the aspect ratios $R_j$ is densely distributed in $[-1/2,1/2]$, that is, $$\overline{\cup_{j=1}^\infty \sigma(\mathcal{K}_{\partial\Omega_j}^*)} = [-1/2,1/2].$$ Furthermore, Ando *et al.* [@Ando:2022:SSN] showed that the set of eigenvalues of the NP operator on a sequence of prolate spheroids is densely distributed in the interval $[0, 1/2]$ as their eccentricities approach $1$. They also showed that eigenvalues for oblate ellipsoids fill up $[-1/2,1/2]$ as the ellipsoids become flatter. 
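The circle case can be checked with a small numerical experiment (this sketch is ours, not part of the works cited above): on a circle of radius $R$ the kernel of $\mathcal{K}^*_\Gamma$ is identically $1/(4\pi R)$, so a Nyström discretization with $N$ equispaced nodes and trapezoid weights is the matrix with all entries $1/(2N)$, whose eigenvalues are $1/2$ (once) and $0$ ($N-1$ times), matching $\sigma(\mathcal{K}^*_\Gamma)=\{0,1/2\}$. The function name and discretization parameters are illustrative choices.

```python
import numpy as np

def np_star_circle(R=1.0, N=128):
    """Nystrom discretization of the adjoint NP operator
    K*[phi](x) = (1/(2*pi)) * int <x - y, nu_x>/|x - y|^2 phi(y) dsigma(y)
    on a circle of radius R, using N equispaced nodes and trapezoid weights."""
    theta = 2 * np.pi * np.arange(N) / N
    pts = R * np.column_stack([np.cos(theta), np.sin(theta)])
    nu = pts / R                       # outward unit normal on the circle
    w = 2 * np.pi * R / N              # arclength quadrature weight
    K = np.empty((N, N))
    for i in range(N):
        d = pts[i] - pts               # x_i - y_j for all nodes y_j
        r2 = np.einsum('ij,ij->i', d, d)
        r2[i] = 1.0                    # dummy; diagonal is set from the kernel's limit
        K[i] = (d @ nu[i]) / (2 * np.pi * r2) * w
        K[i, i] = w / (4 * np.pi * R)  # continuous limit: the kernel equals 1/(4*pi*R)
    return K

eigs = np.sort(np.linalg.eigvalsh(np_star_circle()))
# Largest eigenvalue approximates 1/2; all others approximate 0.
```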
In this work, we consider a doubly connected domain of general shape, whose inner and outer boundaries are two level curves defined by a conformal mapping. We prove that the spectrum of the NP operator for the doubly connected domain tends to $[-1/2,1/2]$ as the thickness of the shape tends to zero, in the sense of the Hausdorff distance. Moreover, the density of eigenvalues approaches that of a thin circular annulus. The proof applies the Gershgorin circle theorem to an infinite matrix representation of the NP operator involving the Grunsky coefficients associated with the conformal mapping. # The NP operator for a doubly connected domain Let $\Omega$ be a simply connected bounded domain in the complex $z$-plane, with conformal radius $\gamma$. By the Riemann mapping theorem, there is a conformal mapping $z=\Psi(w)$ taking the exterior of the disk of radius $\gamma$ in the complex $w$-plane bijectively to the exterior ${\overline\Omega}^c$ of $\Omega$, such that $z\sim w$ at infinity: $$z = \Psi(w)=w+a_0 + \frac{a_1}{w} + \frac{a_2}{w^2} + \frac{a_3}{w^3} + \cdots.$$ For $r>\gamma$, let $\Gamma_{\!r}$ denote the curve in the $z$-plane equal to the image under $\Psi$ of the circle of radius $r$ in the $w$-plane, $$\Gamma_{\!r} \;=\; \{ \Psi(w) : |w|=r \},$$ which is the boundary of the region $$\Omega_r \;=\; \{ \Psi(w) : |w|<r \}.$$ Choose two real numbers $r_\mathrm{i}$ and $r_\mathrm{e}$ such that $$\gamma < r_\mathrm{i}< r_\mathrm{e},$$ and define inner and outer curves ${\Gamma_{\!\mathrm{i}}}=\Gamma_{\!r_\mathrm{i}}$ and ${\Gamma_{\!\mathrm{e}}}=\Gamma_{\!r_\mathrm{e}}$ bounding the regions $\Omega_\mathrm{i}=\Omega_{r_\mathrm{i}}$ and $\Omega_\mathrm{e}=\Omega_{r_\mathrm{e}}$, that is, $${\Gamma_{\!\mathrm{i}}}\;=\; \{ \Psi(w) : |w|=r_\mathrm{i}\}, \quad {\Gamma_{\!\mathrm{e}}}\;=\; \{ \Psi(w) : |w|=r_\mathrm{e}\}.$$ Their union ${\Gamma_{\!\mathrm{i}}}\cup{\Gamma_{\!\mathrm{e}}}$ is the boundary of a distorted annulus in the $z$-plane, $$A \;=\; \{ \Psi(w) : 
r_\mathrm{i}< |w| < r_\mathrm{e}\}.$$ We are interested in the thin-domain limit $r_\mathrm{i}\to r_\mathrm{e}$, or $r\nearrow1$, where $$\label{ratio} r = \frac{r_\mathrm{i}}{r_\mathrm{e}}.$$ An example is shown in Fig. [2](#doublyctd){reference-type="ref" reference="doublyctd"}. ![The image of a circular annulus under a conformal mapping is the thin distorted annulus. The conformal mapping is $z=\Psi(w) = w + \frac{0.3+0.5i}{w} - \frac{0.2+0.1i}{w^3} + \frac{0.1i}{w^5} + \frac{0.05i}{w^6} + \frac{0.01i}{w^7}$ and the inner and outer radii are $r_\mathrm{i}= 1.1$ and $r_\mathrm{e}= 1.15$.](preimage.pdf){#doublyctd} $\underrightarrow{\quad\Psi\quad}$ ![The image of a circular annulus under a conformal mapping is the thin distorted annulus. The conformal mapping is $z=\Psi(w) = w + \frac{0.3+0.5i}{w} - \frac{0.2+0.1i}{w^3} + \frac{0.1i}{w^5} + \frac{0.05i}{w^6} + \frac{0.01i}{w^7}$ and the inner and outer radii are $r_\mathrm{i}= 1.1$ and $r_\mathrm{e}= 1.15$.](thin2.pdf){#doublyctd} Consider the single-layer potential and the Neumann-Poincaré operator for the doubly connected domain $A$. Let the normal vector $\nu_\mathrm{i}$ on ${\Gamma_{\!\mathrm{i}}}$ be taken to point into the exterior domain to ${\Gamma_{\!\mathrm{i}}}$, and let $\nu_\mathrm{e}$ on ${\Gamma_{\!\mathrm{e}}}$ point into the exterior domain to ${\Gamma_{\!\mathrm{e}}}$. For any density $\phi$ on ${\partial\!A}$, write $\phi=\phi_\mathrm{i}+\phi_\mathrm{e}$, where $\phi_\mathrm{i}$ is supported on ${\Gamma_{\!\mathrm{i}}}$ and $\phi_\mathrm{e}$ on ${\Gamma_{\!\mathrm{e}}}$. 
The single-layer potential of ${\partial\!A}$ is a sum $$\mathcal{S}_{\partial\!A}[\phi] \;=\; \mathcal{S}_{{\Gamma_{\!\mathrm{i}}}}[\phi_\mathrm{i}] + \mathcal{S}_{{\Gamma_{\!\mathrm{e}}}}[\phi_\mathrm{e}]$$ and consequently $$\begin{aligned} \frac{\partial}{\partial\nu_\mathrm{i}} \mathcal{S}_{\partial\!A}[\phi]\big|^\pm_{\Gamma_{\!\mathrm{i}}}(x) &\;=\; \pm{\textstyle \frac{1}{2}}\phi_\mathrm{i}(x) + \mathcal{K}_{{\Gamma_{\!\mathrm{i}}}}^*[\phi_\mathrm{i}] + \frac{\partial}{\partial\nu_\mathrm{i}}\mathcal{S}_{\Gamma_{\!\mathrm{e}}}[\phi_\mathrm{e}], \qquad x\in{\Gamma_{\!\mathrm{i}}}\\ \frac{\partial}{\partial\nu_\mathrm{e}} \mathcal{S}_{\partial\!A}[\phi]\big|^\pm_{\Gamma_{\!\mathrm{e}}}(x) &\;=\; \pm{\textstyle \frac{1}{2}}\phi_\mathrm{e}(x) + \mathcal{K}_{{\Gamma_{\!\mathrm{e}}}}^*[\phi_\mathrm{e}] + \frac{\partial}{\partial\nu_\mathrm{e}}\mathcal{S}_{\Gamma_{\!\mathrm{i}}}[\phi_\mathrm{i}], \qquad x\in{\Gamma_{\!\mathrm{e}}}.\end{aligned}$$ Since the outward normal vector $\nu$ to $A$ on ${\partial\!A}={\Gamma_{\!\mathrm{i}}}\cup{\Gamma_{\!\mathrm{e}}}$ is $\nu=\nu_\mathrm{e}$ on ${\Gamma_{\!\mathrm{e}}}$ and $\nu=-\nu_\mathrm{i}$ on ${\Gamma_{\!\mathrm{i}}}$, one obtains $$\frac{\partial}{\partial\nu} \renewcommand{\arraystretch}{1.5} \left[\!\! \begin{array}{c} \mathcal{S}_{\partial\!A}[\phi]_\mathrm{i} \\ \mathcal{S}_{\partial\!A}[\phi]_\mathrm{e} \end{array} \!\!\right] ^\pm \;=\; \renewcommand{\arraystretch}{1.5} \left[\!\! \begin{array}{c} -\frac{\partial}{\partial\nu_\mathrm{i}}\mathcal{S}_{\partial\!A}[\phi]\big|^\mp_{\Gamma_{\!\mathrm{i}}} \\ \frac{\partial}{\partial\nu_\mathrm{e}}\mathcal{S}_{\partial\!A}[\phi]\big|^\pm_{\Gamma_{\!\mathrm{e}}} \end{array} \!\!\right] \;=\; \renewcommand{\arraystretch}{1.5} \left[\!\! 
\begin{array}{c} \pm{\textstyle \frac{1}{2}}\phi_\mathrm{i}- \mathcal{K}^*_{\Gamma_{\!\mathrm{i}}}[\phi_\mathrm{i}] - \frac{\partial}{\partial\nu_\mathrm{i}}\mathcal{S}_{\Gamma_{\!\mathrm{e}}}[\phi_\mathrm{e}] \\ \pm{\textstyle \frac{1}{2}}\phi_\mathrm{e}+ \mathcal{K}^*_{\Gamma_{\!\mathrm{e}}}[\phi_\mathrm{e}] + \frac{\partial}{\partial\nu_\mathrm{e}}\mathcal{S}_{\Gamma_{\!\mathrm{i}}}[\phi_\mathrm{i}] \end{array} \!\!\right] .$$ In view of ([\[eqn:Kstarjump\]](#eqn:Kstarjump){reference-type="ref" reference="eqn:Kstarjump"}), therefore, with respect to the decomposition of fields on ${\Gamma_{\!\mathrm{e}}}$ and ${\Gamma_{\!\mathrm{i}}}$, the NP operator attains the block form $$\label{NPdoubly} \mathbb{K}_{\partial\!A}^* = \begin{bmatrix} -\mathcal{K}_{{\Gamma_{\!\mathrm{i}}}}^* & -\dfrac{\partial\mathcal{S}_{{\Gamma_{\!\mathrm{e}}}}}{\partial\nu_\mathrm{i}}\\[2mm] \dfrac{\partial\mathcal{S}_{{\Gamma_{\!\mathrm{i}}}}}{\partial\nu_\mathrm{e}} & \mathcal{K}_{{\Gamma_{\!\mathrm{e}}}}^* \end{bmatrix}.$$ # Series expansion via Grunsky coefficients Following [@Jung:2021:SEL], this section describes how to use the Grunsky coefficients to transform the NP operator into an infinite matrix system. Associated with the conformal mapping $z\!=\!\Psi(w)$ are the Faber polynomials $\{F_n(z)\}_{n=0}^\infty$, introduced by G. Faber [@Faber:1903:UPE]. They are determined uniquely by the property that $F_n(z)-w^n = O(w^{-1})$ as $w\to\infty$, that is, there are numbers $\{c_{nm}\}_{m,n\geq1}$, known as the Grunsky coefficients, such that $$\label{eqn:Faberdefinition} F_n(z)\;=\;w^n+\sum_{m=1}^{\infty}c_{nm}{w^{-m}}$$ under the correspondence $z\!=\!\Psi(w)$, for $z\in\overline{\Omega}^c$, that is, $|w|>\gamma$. $F_n(z)$ is monic of degree $n$, and these polynomials can be obtained recursively using the series expansion of $\Psi$. 
The first three are $F_0(z)=1,\ F_1(z)=z-a_0,\ F_2(z)=z^2-2a_0 z+a_0^2-2a_1.$ The Grunsky coefficients enjoy the recursion $$\begin{aligned} \label{grunsky} c_{n,m+1} = c_{n+1,m} - a_{n+m} + \sum_{s=1}^{n-1} a_{n-s}c_{sm} - \sum_{s=1}^{m-1} a_{m-s}c_{ns}, \quad n,m\ge 1\end{aligned}$$ with initial values $c_{1m} = a_m$ and $c_{m1} = ma_m$, $m\ge1$. Using a generating function for the Faber polynomials, one derives the relation $$mc_{nm}=nc_{mn}.$$ The coefficients satisfy the *strong Grunsky inequalities* [@Duren:1983:UF; @Grunsky:1939:KSA]: for $N\in\mathbb{N}$ and nonzero complex numbers $\{\lambda_k\}_{k=1}^N$, $$\label{inequal:strong} \sum_{n=1}^\infty n\left|\sum_{k=1}^N\frac{c_{kn}}{\gamma^{k+n}}\lambda_k \right|^2\leq\sum_{n=1}^N n\left|\lambda_n \right|^2,$$ where equality holds if and only if $\Omega$ is of measure zero. For a fixed $m$, choose $\lambda_k=\frac{1}{\sqrt{m}}\delta_{mk}$ in ([\[inequal:strong\]](#inequal:strong){reference-type="ref" reference="inequal:strong"}) to get $$\label{Gineq} \sum_{n=1}^\infty \left|\sqrt{\frac{n}{m}}\frac{c_{mn}}{\gamma^{m+n}}\right|^2\leq 1.$$ Jung and Lim [@Jung:2021:SEL] developed an explicit infinite matrix formulation of the NP operator for simply connected domains by using the Grunsky coefficients. This formulation has been used in studies of shape recovery, neutral inclusions, and spectral analysis [@Choi:2023:IPP; @Choi:2021:ASR; @Choi:2023:GME; @Choi:2021:EEC; @Ji:2022:SPN; @Jung:2020:DEE]. Below, we use it to analyze the spectrum of the NP operator for the distorted annular region $A$. We now give a brief description of the development in [@Jung:2021:SEL]. 
In the conformal image of $\Psi$, namely ${\overline\Omega}^c=\left\{ z=\Psi(w) : |w|>\gamma \right\}$, the equation $z\!=\!\Psi(w)$ defines $w\!=\!e^{\rho+i\theta}$ as a function of $z$, and therefore $\rho\!>\!\log\gamma\in\mathbb{R}$ and $\theta\in\mathbb{R}/2\pi\mathbb{Z}$ act as global coordinates on ${\overline\Omega}^c$. Define $\rho_\mathrm{i}$ and $\rho_\mathrm{e}$ by $r_\mathrm{i}=e^{\rho_\mathrm{i}}$ and $r_\mathrm{e}=e^{\rho_\mathrm{e}}$. For $\alpha\!\in\!\{\mathrm{i},\mathrm{e}\}$, the coordinate $\theta$ parameterizes the curve ${\Gamma_{\!\alpha}}\!=\!\{z:\rho\equiv\rho_\alpha\}$ and the smooth functions $e^{in\theta}$ form a Riesz basis for $L^2({\Gamma_{\!\alpha}})$. At the point $z\in{\Gamma_{\!\alpha}}$ corresponding to $(\rho,\theta)$, differentiation in the direction of the outward normal vector is given by $$\frac{\partial}{\partial\nu} \;=\; \frac{1}{h(\rho,\theta)}\frac{\partial}{\partial\rho},$$ in which $h$ is the Jacobian of $\Psi$, $$h(\rho,\theta) = \left| \frac{\partial}{\partial\rho} \Psi(e^{\rho+i\theta}) \right| = \left| \frac{\partial}{\partial\theta} \Psi(e^{\rho+i\theta})\right| = e^\rho | \Psi'(w) |.$$ For any integer $n\geq1$, the following function $H^\alpha_n(z)$ is harmonic for $z\in\Omega_\alpha$ and for $z\in\overline{\Omega}_\alpha^c$ and decays as $z^{-1}$ at infinity. With the identification $z=\Psi(w)$ and $w=e^{\rho+i\theta}$ in the exterior region $\overline{\Omega}^c$, define $$\label{Hn1} H^\alpha_n(z) \;:=\; \renewcommand{\arraystretch}{1.5} \left\{ \begin{array}{rcll} F_n(z), & & z\in\overline\Omega_\alpha \\ \tilde F_n^\alpha(w) &:=\; e^{2n\rho_\alpha}\bar w^{-n} + \sum_{m=1}^\infty c_{nm}w^{-m}, &z\in\Omega_\alpha^c. 
\end{array} \right.$$ Importantly, in the region $\overline{\Omega}^c$, which contains the boundary ${\Gamma_{\!\alpha}}$ of $\Omega_\alpha$, $$\label{Hn2} F_n(z) \;=\; w^n + \sum_{m=1}^\infty c_{nm}w^{-m}, \qquad z\in\overline\Omega_\alpha\!\setminus\!\Omega.$$ The factor $e^{2n\rho_\alpha}$ is placed in $\tilde F_n^\alpha(w)$ so that $F_n$ and $\tilde F_n^\alpha$ coincide on ${\Gamma_{\!\alpha}}$. Thus, $H^\alpha_n$ is the single-layer potential for the density equal to the jump of $\partial H^\alpha_n/\partial\nu$ across ${\Gamma_{\!\alpha}}$, $$\label{Hnjump} \left[ \frac{\partial H^\alpha_n}{\partial\nu} \right]_{\Gamma_{\!\alpha}}\;=\; -2n\, r_\alpha^n\frac{e^{in\theta}}{h(\rho_\alpha,\theta)}.$$ The average of $\partial H^\alpha_n/\partial\nu$ across ${\Gamma_{\!\alpha}}$ is $$\label{Hnaverage} \left\langle \frac{\partial H^\alpha_n}{\partial\nu} \right\rangle_{\!{\Gamma_{\!\alpha}}} \;=\; -\sum_{m=1}^\infty m c_{nm} r_\alpha^{-m} \frac{e^{-im\theta}}{h(\rho_\alpha,\theta)}.$$ For each $n\in\mathbb{Z}$, define the density function $$\phi_n^{\alpha}(z) = \frac{e^{in\theta}}{h(\rho_\alpha,\theta)}, \qquad z\in{\Gamma_{\!\alpha}}.$$ Following [@Jung:2021:SEL], we now describe a Hilbert space on which the NP operator acts. 
Let $K^{-1/2}(\partial\Gamma_\alpha)$ be a Hilbert space isometric to $l^2$, in which a complex sequence $a=(a_m)\in l^2$ corresponds to the functional $f_a\in K^{-1/2}(\partial\Gamma_\alpha)$ given by $$f_a\left(\sum_{m\in\mathbb{Z}} b_m \phi_m^\alpha \right) = \sum_{m\in\mathbb{Z}} a_m \overline{b_m}.$$ Thus, we represent $K^{-1/2}(\partial\Gamma_\alpha)$ by $$K^{-1/2}(\partial\Gamma_\alpha) = \left\{ \sum_{m\in\mathbb{Z}} a_m \phi_m^\alpha : \sum_{m\in\mathbb{Z}} |a_m|^2 <\infty \right\}.$$ We can identify the operator $K^*_{\Gamma_{\!\alpha}}: K^{-1/2}(\partial\Gamma_\alpha) \to K^{-1/2}(\partial\Gamma_\alpha)$ with an infinite matrix on $\ell^2(\mathbb{Z})$, namely, $$\left[K^*_{\Gamma_{\!\alpha}}\right]: \ell^2 \to \ell^2.$$ We refer the reader to [@Jung:2021:SEL] for further details on these spaces. We use this idea to derive infinite-matrix forms of the integral operators in this paper. In light of ([\[Kgeom\]](#Kgeom){reference-type="ref" reference="Kgeom"}), ([\[Hnjump\]](#Hnjump){reference-type="ref" reference="Hnjump"}), ([\[Hnaverage\]](#Hnaverage){reference-type="ref" reference="Hnaverage"}), and the relation $mc_{nm}=nc_{mn}$, one obtains, for $n\geq1$, $$\label{K2} K^*_{\Gamma_{\!\alpha}}\left[ \phi_n^\alpha \right] \;=\; \frac{1}{2} \sum_{m=1}^\infty \frac{c_{mn}}{r_\alpha^{m+n}} \phi_{-m}^\alpha,$$ and by conjugation, $$\label{K3} K^*_{\Gamma_{\!\alpha}}\left[ \phi_{-n}^\alpha \right] \;=\; \frac{1}{2} \sum_{m=1}^\infty \frac{\overline{c_{mn}}}{r_\alpha^{m+n}} \phi_{m}^\alpha.$$ The density of the single-layer potential that is constant in $\Omega_\alpha$ and equal to $\log|w|$ in $\overline{\Omega}_\alpha^c$ is $\phi_0^\alpha$, and therefore $$\begin{aligned} \label{K1} \mathcal{K}^*_{{\Gamma_{\!\alpha}}}[\phi_0^\alpha] = \frac{1}{2} \phi_{0}^\alpha.\end{aligned}$$ To obtain the normal derivatives of the single-layer potentials on ${\Gamma_{\!\mathrm{i}}}$ and ${\Gamma_{\!\mathrm{e}}}$, use the second equation of 
([\[Hn1\]](#Hn1){reference-type="ref" reference="Hn1"}) in the exterior region and ([\[Hn2\]](#Hn2){reference-type="ref" reference="Hn2"}) in the interior region and $\partial/\partial\nu=h^{-1}\partial/\partial\rho$. With the observation that, for $n\geq1$, $$S_{{\Gamma_{\!\alpha}}}[\phi^\alpha_n] \;=\; -\frac{H^\alpha_n}{2nr^n_\alpha}, \qquad S_{{\Gamma_{\!\alpha}}}[\phi^\alpha_{-n}] \;=\; -\frac{\overline{H^\alpha_n}}{2nr^n_\alpha}, \qquad S_{{\Gamma_{\!\alpha}}}[\phi^\alpha_0] \;=\; \log|w| = \rho,$$ one can compute $$\begin{aligned} \label{S1} \frac{\partial\mathcal{S}_{\Gamma_{\!\mathrm{i}}}}{\partial\nu_\mathrm{e}}[\phi_0^\mathrm{i}] = \phi_0^\mathrm{e}, \quad \frac{\partial\mathcal{S}_{\Gamma_{\!\mathrm{e}}}}{\partial\nu_\mathrm{i}}[\phi_0^\mathrm{e}] = 0,\end{aligned}$$ and, for $n\geq1$, $$\begin{aligned} \frac{\partial\mathcal{S}_{\Gamma_{\!\mathrm{i}}}}{\partial\nu_\mathrm{e}}[\phi_n^\mathrm{i}] &\;=\; \frac{1}{2} \Big(\frac{r_\mathrm{i}}{r_\mathrm{e}}\Big)^{\!\!n} \phi_n^\mathrm{e}+ \frac{1}{2} \sum_{m=1}^\infty \frac{c_{mn}}{r_\mathrm{e}^m r_\mathrm{i}^n} \phi_{-m}^\mathrm{e},\label{S2}\\ \frac{\partial\mathcal{S}_{\Gamma_{\!\mathrm{i}}}}{\partial\nu_\mathrm{e}}[\phi_{-n}^\mathrm{i}] &\;=\; \frac{1}{2} \Big(\frac{r_\mathrm{i}}{r_\mathrm{e}}\Big)^{\!\!n} \phi_{-n}^\mathrm{e}+ \frac{1}{2} \sum_{m=1}^\infty \frac{\overline{c_{mn}}}{r_\mathrm{e}^m r_\mathrm{i}^n} \phi_{m}^\mathrm{e},\label{S3}\\ \frac{\partial\mathcal{S}_{\Gamma_{\!\mathrm{e}}}}{\partial\nu_\mathrm{i}}[\phi_n^\mathrm{e}] &\;=\; - \frac{1}{2} \Big(\frac{r_\mathrm{i}}{r_\mathrm{e}}\Big)^{\!\!n} \phi_n^\mathrm{i}+ \frac{1}{2} \sum_{m=1}^\infty \frac{c_{mn}}{r_\mathrm{i}^m r_\mathrm{e}^n} \phi_{-m}^\mathrm{i}, \label{S4}\\ \frac{\partial\mathcal{S}_{\Gamma_{\!\mathrm{e}}}}{\partial\nu_\mathrm{i}}[\phi_{-n}^\mathrm{e}] &\;=\; - \frac{1}{2} \Big(\frac{r_\mathrm{i}}{r_\mathrm{e}}\Big)^{\!\!n} \phi_{-n}^\mathrm{i}+ \frac{1}{2} \sum_{m=1}^\infty \frac{\overline{c_{mn}}}{r_\mathrm{i}^m 
r_\mathrm{e}^n} \phi_{m}^\mathrm{i}.\label{S5}\end{aligned}$$ Define the following semi-infinite matrices: $$C = \begin{bmatrix} \ c_{11} & c_{12} & c_{13}& \cdots\\[1mm] \ c_{21} & c_{22} & c_{23} & \cdots\\[1mm] \ c_{31} & c_{32} & c_{33} & \cdots\\ \ \vdots & \vdots & \vdots & \ddots \end{bmatrix}, \qquad G = \begin{bmatrix} \ g_{11} & g_{12} & g_{13}& \cdots\\[1mm] \ g_{21} & g_{22} & g_{23} & \cdots\\[1mm] \ g_{31} & g_{32} & g_{33} & \cdots\\ \ \vdots & \vdots & \vdots & \ddots \end{bmatrix},$$ where the components of $G$ are $$g_{mn} = \frac{c_{mn}}{r_\mathrm{i}^{m+n}},$$ and $S:\mathbb{N}\to-\mathbb{N}$ and $S^*:-\mathbb{N}\to\mathbb{N}$ are $$\label{mat:S} S= \begin{bmatrix} \ \vdots & \vdots & \vdots & \reflectbox{$\ddots$} \\[1mm] \ 0 & 0 & 1 & \cdots\\[1mm] \ 0 & 1 & 0 & \cdots\\[1mm] \ 1 & 0 & 0 & \cdots \end{bmatrix}, \qquad S^*= \begin{bmatrix} \ \hdots & 0 & 0 & 1 \\[1mm] \ \hdots & 0 & 1 & 0\\[1mm] \ \hdots & 1 & 0 & 0\\[1mm] \ \reflectbox{$\ddots$} & \vdots & \vdots & \vdots \end{bmatrix}.$$ Note that $S^*S$ is the identity on $\mathbb{N}$ and $SS^*$ is the identity on $-\mathbb{N}$. 
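The recursion ([\[grunsky\]](#grunsky){reference-type="ref" reference="grunsky"}) and the matrix $G$ are straightforward to implement, which gives a useful sanity check on the identities above. The sketch below is ours, not from the paper; it uses the illustrative Joukowski-type choice $\Psi(w)=w+a_1/w$, for which $F_n(z)=w^n+a_1^n w^{-n}$, so $c_{nn}=a_1^n$ and all other $c_{nm}$ vanish. The coefficients are computed by filling anti-diagonals $n+m=\mathrm{const}$ starting from the known first row and column, and the relation $mc_{nm}=nc_{mn}$ can then be verified numerically.

```python
import numpy as np

def grunsky_coefficients(a, N):
    """Grunsky coefficients c_{nm}, 1 <= n, m <= N, of
    Psi(w) = w + a_0 + a_1/w + a_2/w^2 + ...,  with a = {k: a_k}.

    Uses c_{1m} = a_m, c_{m1} = m*a_m and the recursion
    c_{n,m+1} = c_{n+1,m} - a_{n+m}
                + sum_{s=1}^{n-1} a_{n-s} c_{sm}
                - sum_{s=1}^{m-1} a_{m-s} c_{ns}."""
    A = lambda k: a.get(k, 0.0)
    c = {}
    for m in range(1, 2 * N + 1):           # first row and first column
        c[(1, m)] = A(m)
        c[(m, 1)] = m * A(m)
    for K in range(3, 2 * N + 1):           # anti-diagonal n + m = K
        for n in range(K - 2, 0, -1):       # c_{n+1, K-n-1} is already known
            m = K - n - 1                   # recursion yields c_{n, m+1}
            val = c[(n + 1, m)] - A(n + m)
            val += sum(A(n - s) * c[(s, m)] for s in range(1, n))
            val -= sum(A(m - s) * c[(n, s)] for s in range(1, m))
            c[(n, m + 1)] = val
    return np.array([[c[(n, m)] for m in range(1, N + 1)]
                     for n in range(1, N + 1)])

C = grunsky_coefficients({1: 0.5}, 4)       # Psi(w) = w + 0.5/w (illustrative)
r_i = 1.1                                   # illustrative inner radius
G = C / np.array([[r_i ** (m + n) for m in range(1, 5)] for n in range(1, 5)])
# For this Psi, C is diagonal with c_{nn} = 0.5**n; g_{mn} = c_{mn} / r_i**(m+n).
```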
Define the diagonal semi-infinite matrices $$r^{\mathbb{N}} = \left[ \begin{array}{ccccccc} r & 0 & 0 & \hdots \\[1mm] 0 & r^{2} & 0 & \hdots \\[1mm] 0 & 0 & r^{3} & \hdots \\ \vdots & \vdots & \vdots & \ddots \end{array} \right], \quad r^{2\mathbb{N}} = \left[ \begin{array}{ccccccc} r^{2} & 0 & 0 & \hdots \\[1mm] 0 & r^{4} & 0 & \hdots \\[1mm] 0 & 0 & r^{6} & \hdots \\ \vdots & \vdots & \vdots & \ddots \end{array} \right]$$ and the infinite matrices $\mathcal{C}\in \mathbb{C}^{\mathbb{Z}\times\mathbb{Z}}$, $$\begin{aligned} \mathcal{C} = \left[ \begin{array}{@{}*7r} & & & & \vdots & & \reflectbox{$\ddots$} \\ & 0 & 0 & 0 & c_{21} & c_{22} &\\[1mm] & 0 & 0 & 0 & c_{11} & c_{12} & \hdots \\[1mm] & 0 & 0 & 1 & 0 & 0 & \\[1mm] \hdots & \overline{c_{12}} & \overline{c_{11}} & 0 & 0 & 0 & \\[1mm] & \overline{c_{22}} & \overline{c_{21}} & 0 & 0 & 0 &\\ \reflectbox{$\ddots$} & & \vdots & & & & \end{array} \right] = \begin{bmatrix} & & SC\\[1mm] & 1 & \\[1mm] \overline{C}S^* & & \\ \end{bmatrix}\label{mathcalC}\end{aligned}$$ and $$\begin{aligned} r^{|\mathbb{Z}|} &:= \left[ \begin{array}{ccccccc} \ddots & & & & & & \reflectbox{$\ddots$} \\ & r^2 & 0 & 0 & 0 & 0 & \\[1mm] & 0 & r & 0 & 0 & 0 & \\[1mm] & 0 & 0 & 1 & 0 & 0 & \\[1mm] & 0 & 0 & 0 & r & 0 & \\[1mm] & 0 & 0 & 0 & 0 & r^2 &\\ \reflectbox{$\ddots$} & & & & & & \ddots \end{array} \right] = \begin{bmatrix} S r^{\mathbb{N}} S^* & & \\[1mm] & 1 & \\[1mm] & & r^{\mathbb{N}} \\ \end{bmatrix}.\label{rZ}\end{aligned}$$ Moreover, for $\alpha\in\{\mathrm{i},\mathrm{e}\}$ let $\mathcal{R}_\alpha := r_\alpha^{|\mathbb{Z}|}$ be the infinite matrix obtained by replacing $r$ in ([\[rZ\]](#rZ){reference-type="ref" reference="rZ"}) with $r_\alpha$, and then define $$\label{mathcalG} \mathcal{G} := \mathcal{R}_\mathrm{i}^{-1} \mathcal{C} \mathcal{R}_\mathrm{i}^{-1}.$$ Define the infinite row vector of basis densities $\Phi_\alpha$ ($\alpha\in\{\mathrm{i},\mathrm{e}\}$), $$\Phi_\alpha= \left[ \begin{array}{ccccccc} \hdots & 
\phi^\alpha_{-2} & \phi^\alpha_{-1} & \phi^\alpha_{0} & \phi^\alpha_{1} & \phi^\alpha_{2} & \hdots \end{array} \right].$$ Let $\Phi$ be a density on ${\partial\!A}$ expanded in this basis as $$\Phi := \left[ \begin{array}{c} \displaystyle\sum_{n\in\mathbb{Z}} a_n^\mathrm{i}\phi_n^\mathrm{i}\\[5mm] \displaystyle\sum_{n\in\mathbb{Z}} a_n^\mathrm{e}\phi_n^\mathrm{e} \end{array} \right].$$ Denote by $A^\alpha$ $(\alpha\in\{\mathrm{i},\mathrm{e}\})$ the infinite column vector $$A^\alpha = \left[ \begin{array}{ccccccc} \hdots & a^\alpha_{-2} & a^\alpha_{-1} & a^\alpha_{0} & a^\alpha_{1} & a^\alpha_{2} & \hdots \end{array} \right]^T.$$ Then we obtain $$\begin{aligned} \mathbb{K}_{\partial\!A}^* \Phi &= \begin{bmatrix} \displaystyle - \sum_{n\in\mathbb{Z}} \left( a_n^\mathrm{i}\mathcal{K}_{\Gamma_{\!\mathrm{i}}}^*[\phi_n^\mathrm{i}] + a_n^\mathrm{e}\dfrac{\partial\mathcal{S}_{\Gamma_{\!\mathrm{e}}}}{\partial\nu_\mathrm{i}}[\phi_n^\mathrm{e}] \right)\\[2mm] \displaystyle\sum_{n\in\mathbb{Z}} \left(a_n^\mathrm{i}\dfrac{\partial\mathcal{S}_{\Gamma_{\!\mathrm{i}}}}{\partial\nu_\mathrm{e}}[\phi_n^\mathrm{i}] + a_n^\mathrm{e}\mathcal{K}_{\Gamma_{\!\mathrm{e}}}^*[\phi_n^\mathrm{e}] \right) \end{bmatrix}\\ &=\begin{bmatrix} \displaystyle - \sum_{m\in\mathbb{Z}} \sum_{n\in\mathbb{Z}} \phi_m^\mathrm{i}\left(\left[\mathcal{K}_{\Gamma_{\!\mathrm{i}}}^*\right]_{mn} a_n^\mathrm{i}+ \left[\dfrac{\partial\mathcal{S}_{\Gamma_{\!\mathrm{e}}}}{\partial\nu_\mathrm{i}}\right]_{mn} a_n^\mathrm{e}\right) \\[2mm] \displaystyle\sum_{m\in\mathbb{Z}} \sum_{n\in\mathbb{Z}} \phi_m^\mathrm{e}\left(\left[\dfrac{\partial\mathcal{S}_{\Gamma_{\!\mathrm{i}}}}{\partial\nu_\mathrm{e}}\right]_{mn} a_n^\mathrm{i}+ \left[\mathcal{K}_{\Gamma_{\!\mathrm{e}}}^*\right]_{mn} a_n^\mathrm{e}\right) \end{bmatrix}\\ &=\left[ \begin{array}{c|c} \Phi_\mathrm{i}& \Phi_\mathrm{e} \end{array} \right] \left[ 
\begin{array}{c|c} -\left[\mathcal{K}_{\Gamma_{\!\mathrm{i}}}^*\right] & -\left[\dfrac{\partial\mathcal{S}_{\Gamma_{\!\mathrm{e}}}}{\partial\nu_\mathrm{i}}\right] \\[3mm] \hline\rule{0pt}{4ex} \left[\dfrac{\partial\mathcal{S}_{\Gamma_{\!\mathrm{i}}}}{\partial\nu_\mathrm{e}}\right] & \left[\mathcal{K}_{\Gamma_{\!\mathrm{e}}}^*\right] \end{array} \right] \left[\begin{array}{c} A^\mathrm{i}\\ \hline\rule{0pt}{2.5ex} A^\mathrm{e} \end{array}\right],\end{aligned}$$ in which the bracket notation in the blocks denotes the matrices for the operators with respect to the basis functions $\phi^{\mathrm{i},\mathrm{e}}_n$. We deduce from ([\[K2\]](#K2){reference-type="ref" reference="K2"})--([\[K1\]](#K1){reference-type="ref" reference="K1"}) that $$\begin{aligned} &\left[\mathcal{K}_{\Gamma_{\!\mathrm{i}}}^*\right] = \frac{1}{2} \mathcal{R}_\mathrm{i}^{-1} \mathcal{C} \mathcal{R}_\mathrm{i}^{-1} \\ &\left[\mathcal{K}_{\Gamma_{\!\mathrm{e}}}^*\right] = \frac{1}{2} \mathcal{R}_\mathrm{e}^{-1} \mathcal{C} \mathcal{R}_\mathrm{e}^{-1}.\end{aligned}$$ Similarly, from ([\[S1\]](#S1){reference-type="ref" reference="S1"})--([\[S5\]](#S5){reference-type="ref" reference="S5"}) we obtain $$\begin{aligned} \left[\dfrac{\partial\mathcal{S}_{\Gamma_{\!\mathrm{i}}}}{\partial\nu_\mathrm{e}}\right] &\;=\; \frac{1}{2} \mathcal{R}_\mathrm{i}\mathcal{R}_\mathrm{e}^{-1} + \frac{1}{2} \mathcal{R}_\mathrm{e}^{-1} \mathcal{C} \mathcal{R}_\mathrm{i}^{-1}\\ \left[\dfrac{\partial\mathcal{S}_{\Gamma_{\!\mathrm{e}}}}{\partial\nu_\mathrm{i}}\right] &\;=\; -\frac{1}{2} \mathcal{R}_\mathrm{i}\mathcal{R}_\mathrm{e}^{-1} + \frac{1}{2} \mathcal{R}_\mathrm{i}^{-1} \mathcal{C} \mathcal{R}_\mathrm{e}^{-1}.\end{aligned}$$ Then, we have the block matrix equation $$\begin{aligned} \label{KPhi} \mathbb{K}_{\partial\!A}^* \Phi =\left[ \begin{array}{c|c} \Phi_\mathrm{i}& 0 \\ \hline\rule{0pt}{2.5ex} 0 & \Phi_\mathrm{e} \end{array} \right] \big[\mathbb{K}_{\partial\!A}^*\big] \left[\begin{array}{c} A^\mathrm{i}\\ 
\hline\rule{0pt}{2.5ex} A^\mathrm{e} \end{array}\right],\end{aligned}$$ where $\big[\mathbb{K}_{\partial\!A}^*\big]$ is a block matrix form of $\mathbb{K}_{\partial\!A}^*$ given by $$\begin{aligned} \big[\mathbb{K}_{\partial\!A}^*\big] &:= \frac{1}{2} \left[ \begin{array}{c|c} - \mathcal{R}_\mathrm{i}^{-1} \mathcal{C} \mathcal{R}_\mathrm{i}^{-1} & \mathcal{R}_\mathrm{i}\mathcal{R}_\mathrm{e}^{-1} - \mathcal{R}_\mathrm{i}^{-1} \mathcal{C} \mathcal{R}_\mathrm{e}^{-1} \\ \hline\rule[-1ex]{0pt}{4ex} \mathcal{R}_\mathrm{i}\mathcal{R}_\mathrm{e}^{-1} + \mathcal{R}_\mathrm{e}^{-1} \mathcal{C} \mathcal{R}_\mathrm{i}^{-1} & \mathcal{R}_\mathrm{e}^{-1} \mathcal{C} \mathcal{R}_\mathrm{e}^{-1} \end{array} \right]\notag \\ &= \frac{1}{2} \left[ \begin{array}{c|c} - \mathcal{G} & r^{|\mathbb{Z}|} - \mathcal{G} r^{|\mathbb{Z}|} \\ \hline\rule[-1ex]{0pt}{4ex} r^{|\mathbb{Z}|} + r^{|\mathbb{Z}|} \mathcal{G} & r^{|\mathbb{Z}|} \mathcal{G} r^{|\mathbb{Z}|} \end{array} \right].\label{Kmatrix}\end{aligned}$$ # Spectral analysis of the Neumann-Poincaré operator **Lemma 1**. *For given $\lambda\in(-1/2,1/2)$, the operator $\left(\lambda I - \big[\mathbb{K}_{\partial\!A}^*\big]\right)$ is injective if and only if $\left(\lambda^2 I - \mathcal{B}\right)$ is injective, where $\mathcal{B}$ is an operator defined by $$\begin{aligned} \label{altmatrix} \mathcal{B} := \frac{1}{4} \left[ \begin{array}{c:c} r^{2\mathbb{N}} & -G(I - r^{2\mathbb{N}}) \\[1mm]\hdashline\rule[-1ex]{0pt}{4ex} -\overline{G} (I - r^{2\mathbb{N}}) r^{2\mathbb{N}} & r^{2\mathbb{N}} + \overline{G} (I - r^{2\mathbb{N}}) G (I - r^{2\mathbb{N}}) \\ \end{array} \right]\end{aligned}$$ and $I$ denotes an identity matrix of the appropriate size.* *Proof.* Let $O$ denote a zero matrix of any size. We first prove that the injectivity of $\left(\lambda I - \big[\mathbb{K}_{\partial\!A}^*\big]\right)$ implies that of $\left(\lambda^2 I - \mathcal{B}\right)$. 
We assume that $$\label{injK} \left( \lambda I - \big[\mathbb{K}_{\partial\!A}^*\big] \right) \left[ \begin{array}{c} X_1 \\ \hline\rule[-1ex]{0pt}{4ex} X_2 \end{array} \right] = \left[ \begin{array}{c} O \\ \hline\rule[-1ex]{0pt}{4ex} O \end{array} \right] \implies \left[ \begin{array}{c} X_1 \\ \hline\rule[-1ex]{0pt}{4ex} X_2 \end{array} \right] = \left[ \begin{array}{c} O \\ \hline\rule[-1ex]{0pt}{4ex} O \end{array} \right].$$ By using ([\[Kmatrix\]](#Kmatrix){reference-type="ref" reference="Kmatrix"}), we have $$\begin{aligned} \left( \lambda I - \big[\mathbb{K}_{\partial\!A}^*\big] \right) \left[ \begin{array}{c} X_1 \\ \hline\rule[-1ex]{0pt}{4ex} X_2 \end{array} \right] &= \left[ \begin{array}{c|c} \lambda I + \frac{1}{2}\mathcal{G} & - \frac{1}{2}r^{|\mathbb{Z}|} + \frac{1}{2}\mathcal{G} r^{|\mathbb{Z}|} \\ \hline\rule[-1ex]{0pt}{4ex} -\frac{1}{2} r^{|\mathbb{Z}|} - \frac{1}{2}r^{|\mathbb{Z}|} \mathcal{G} & \lambda I - \frac{1}{2}r^{|\mathbb{Z}|} \mathcal{G} r^{|\mathbb{Z}|} \end{array} \right] \left[ \begin{array}{c} X_1 \\ \hline\rule[-1ex]{0pt}{4ex} X_2 \end{array} \right] \notag\\ &= \left[ \begin{array}{c} (\lambda I + \frac{1}{2}\mathcal{G}) X_1 - \frac{1}{2}( I - \mathcal{G}) r^{|\mathbb{Z}|} X_2 \\ \hline\rule[-1ex]{0pt}{4ex} -\frac{1}{2} r^{|\mathbb{Z}|} (I + \mathcal{G}) X_1 + (\lambda I - \frac{1}{2}r^{|\mathbb{Z}|} \mathcal{G} r^{|\mathbb{Z}|}) X_2 \end{array} \right].\label{P}\end{aligned}$$ Multiplying the first row of ([\[P\]](#P){reference-type="ref" reference="P"}) by $r^{|\mathbb{Z}|}$ and adding it to the second row gives $$\begin{aligned} \left[ \begin{array}{c} (\lambda I + \frac{1}{2}\mathcal{G}) X_1 - \frac{1}{2}( I - \mathcal{G}) r^{|\mathbb{Z}|} X_2 \\ \hline\rule[-1ex]{0pt}{4ex} (\lambda-\frac{1}{2}) r^{|\mathbb{Z}|} X_1 + ( \lambda I - \frac{1}{2} r^{2|\mathbb{Z}|} ) X_2 \end{array} \right] = \left[ \begin{array}{c} O \\ \hline\rule[-1ex]{0pt}{4ex} O \end{array} \right].\label{matrixeqn1}\end{aligned}$$ Solving 
([\[matrixeqn1\]](#matrixeqn1){reference-type="ref" reference="matrixeqn1"}), we can eliminate $X_1 = -(\lambda-\frac{1}{2})^{-1} \, r^{-|\mathbb{Z}|} ( \lambda I - \frac{1}{2} r^{2|\mathbb{Z}|} ) X_2$ and get $$\begin{aligned} O &=\Big( \lambda I + \frac{1}{2}\mathcal{G} \Big) \Big( \lambda I - \frac{1}{2}r^{2|\mathbb{Z}|} \Big) X_2' + \frac{1}{2}\Big(\lambda - \frac{1}{2}\Big)( I - \mathcal{G}) r^{2|\mathbb{Z}|} X_2' \notag\\ &= \left[ \lambda^2 I + \frac{\lambda}{2} \mathcal{G} \left(I - r^{2|\mathbb{Z}|} \right) - \frac{r^{2|\mathbb{Z}|}}{4} \right] X_2',\label{X2Y2}\end{aligned}$$ where we denote $X_2'$ by $$\label{X2'} X_2' = r^{-|\mathbb{Z}|} X_2 =: \begin{bmatrix} \bf{x}_- \\[1mm] x_0 \\[1mm] \bf{x}_+ \end{bmatrix},$$ and using ([\[mathcalC\]](#mathcalC){reference-type="ref" reference="mathcalC"}), ([\[mathcalG\]](#mathcalG){reference-type="ref" reference="mathcalG"}), ([\[rZ\]](#rZ){reference-type="ref" reference="rZ"}), we obtain an equivalent relation of ([\[X2Y2\]](#X2Y2){reference-type="ref" reference="X2Y2"}) as $$\begin{aligned} &\begin{bmatrix} \lambda^2 I -\frac{1}{4} S r^{2\mathbb{N}} S^* & O & \frac{\lambda}{2} SG(I - r^{2\mathbb{N}}) \\[1mm] O & \lambda^2-\frac{1}{4} & O \\[1mm] \frac{\lambda}{2} \overline{G} S^* (I - S r^{2\mathbb{N}} S^*)& O & \lambda^2 I -\frac{1}{4} r^{2\mathbb{N}} \end{bmatrix} \begin{bmatrix} \bf{x}_- \\[1mm] x_0 \\[1mm] \bf{x}_+ \end{bmatrix} = O\notag \\ \iff &\quad \Big(\lambda^2 - \frac{1}{4} \Big)x_0 = 0, \quad \begin{bmatrix} \lambda^2 I -\frac{1}{4} S r^{2\mathbb{N}} S^*& \frac{\lambda}{2} S G(I - r^{2\mathbb{N}}) \\[1mm] \frac{\lambda}{2} \overline{G} S^* (I - S r^{2\mathbb{N}} S^*) & \lambda^2 I -\frac{1}{4} r^{2\mathbb{N}} \end{bmatrix} \begin{bmatrix} \bf{x}_- \\[1mm] \bf{x}_+ \end{bmatrix} = O.\label{yx}\end{aligned}$$ Since $S^*S$ and $SS^*$ are identity matrices, the second relation of ([\[yx\]](#yx){reference-type="ref" reference="yx"}) is equivalent to $$\begin{aligned} & \begin{bmatrix} \lambda^2 I 
-\frac{1}{4} r^{2\mathbb{N}}& \frac{1}{4} G(I - r^{2\mathbb{N}}) \\[1mm] \lambda^2 \overline{G} (I - r^{2\mathbb{N}}) & \lambda^2 I -\frac{1}{4} r^{2\mathbb{N}} \end{bmatrix} \begin{bmatrix} S^* \bf{x}_- \\[1mm] 2\lambda \bf{x}_+ \end{bmatrix} = O \notag \\ \iff & \left(\lambda^2 \begin{bmatrix} I & O \\[1mm] \overline{G} (I - r^{2\mathbb{N}}) & I \end{bmatrix} - \frac{1}{4} \begin{bmatrix} r^{2\mathbb{N}}& -G(I - r^{2\mathbb{N}}) \\[1mm] O & r^{2\mathbb{N}} \end{bmatrix}\right) \begin{bmatrix} S^* \bf{x}_- \\[1mm] 2\lambda \bf{x}_+ \end{bmatrix} = O\notag \\ \iff & \left(\lambda^2 I - \frac{1}{4} \begin{bmatrix} I & O \\[1mm] \overline{G} (I - r^{2\mathbb{N}}) & I \end{bmatrix}^{-1} \begin{bmatrix} r^{2\mathbb{N}}& -G(I - r^{2\mathbb{N}}) \\[1mm] O & r^{2\mathbb{N}} \end{bmatrix}\right) \begin{bmatrix} S^* \bf{x}_- \\[1mm] 2\lambda \bf{x}_+ \end{bmatrix} = O\notag \\ \iff & \left(\lambda^2 I - \frac{1}{4} \begin{bmatrix} I & O \\[1mm] -\overline{G} (I - r^{2\mathbb{N}}) & I \end{bmatrix} \begin{bmatrix} r^{2\mathbb{N}}& -G(I - r^{2\mathbb{N}}) \notag\\[1mm] O & r^{2\mathbb{N}} \end{bmatrix}\right) \begin{bmatrix} S^* \bf{x}_- \\[1mm] 2\lambda \bf{x}_+ \end{bmatrix} = O\\[1mm] \iff & \left(\lambda^2 I - \mathcal{B} \right) \begin{bmatrix} S^* \bf{x}_-\\[1mm] 2\lambda \bf{x}_+ \end{bmatrix} = O.\label{tilde}\end{aligned}$$ From the injectivity ([\[injK\]](#injK){reference-type="ref" reference="injK"}) and the notation ([\[X2\'\]](#X2'){reference-type="ref" reference="X2'"}), we get the injectivity of $\left(\lambda^2 I - \mathcal{B} \right)$ as follows: $$\left(\lambda^2 I - \mathcal{B} \right) \begin{bmatrix} S^* \bf{x}_-\\[1mm] 2\lambda \bf{x}_+ \end{bmatrix} = O \implies \begin{bmatrix} \bf{x}_-\\[1mm] \bf{x}_+ \end{bmatrix} =O.$$ Since $S^*$ is invertible and $\lambda$ is a real number, the injectivity of $\left(\lambda I - \big[\mathbb{K}_{\partial\!A}^*\big]\right)$ implies those of $\left(\lambda^2 I - \mathcal{B} \right)$: $$\left(\lambda^2 I - \mathcal{B} 
\right) \widetilde{X} = O \implies \widetilde{X}=O,$$ where $\widetilde{X} = \begin{bmatrix} S^* \bf{x}_-\\[1mm] 2\lambda \bf{x}_+ \end{bmatrix}$ and $\widetilde{X}$ spans the domain of the operator $\left(\lambda^2 I - \mathcal{B} \right)$. By the equivalent relations ([\[yx\]](#yx){reference-type="ref" reference="yx"}) and ([\[tilde\]](#tilde){reference-type="ref" reference="tilde"}), one finds that the injectivity of $\left(\lambda^2 I - \mathcal{B}\right)$ implies that of $\left(\lambda I - \big[\mathbb{K}_{\partial\!A}^*\big]\right)$. Therefore, we get the desired result. ◻ Based on Lemma [Lemma 1](#injectivity){reference-type="ref" reference="injectivity"}, we obtain the following theorem. **Theorem 2**. *Let $\lambda\in(-1/2,1/2)$. Then $\lambda$ is an eigenvalue of $\mathbb{K}_{\partial\!A}^*$ if and only if $\lambda^2$ is an eigenvalue of $\mathcal{B}$.* *Proof.* The contrapositive of Lemma [Lemma 1](#injectivity){reference-type="ref" reference="injectivity"} states that, for given $\lambda$, the operator $\left(\lambda I - \big[\mathbb{K}_{\partial\!A}^*\big]\right)$ is not injective if and only if $\left(\lambda^2 I - \mathcal{B}\right)$ is not injective. Thus, $\lambda$ is an eigenvalue of $\mathbb{K}_{\partial\!A}^*$ if and only if $\lambda^2$ is an eigenvalue of $\mathcal{B}$. ◻ We will use the following version of the Gershgorin circle theorem. The first statement follows from [@Chonchaiya:2010:CSP p. 39]. **Lemma 3**. *Let $\mathcal{B} = [b_{mn}]_{m,n\in\mathbb{Z}\setminus\{0\}}$ be the infinite matrix defined in ([\[altmatrix\]](#altmatrix){reference-type="ref" reference="altmatrix"}), which operates on all spaces $\ell^p(\mathbb{Z}\setminus\{0\})$, $1\le p \le \infty$.
The eigenvalues of $\mathcal{B}$ lie in the union of the Gershgorin disks, that is, $$\sigma_{\mathrm{point}}(\mathcal{B}) \subseteq \bigcup_{m\in\mathbb{Z}\setminus\{0\}} \mathcal{G}_m,$$ where $\mathcal{G}_m$ is the Gershgorin disk defined by $$\mathcal{G}_m = \left\{ \mu\in\mathbb{C}: |\mu - b_{mm} | \le \max\Big[\sum_{{n\in\mathbb{Z}\setminus\{0\}}, n\neq m} |b_{mn}|, \sum_{{n\in\mathbb{Z}\setminus\{0\}}, n\neq m} |b_{nm}| \Big] \right\}.$$ For each $m\in\mathbb{Z}\setminus\{0\}$, if $\mathcal{G}_m$ is disjoint from $\mathcal{G}_n$ for all $n\neq m$, then $\mathcal{G}_m$ contains an element of $\sigma_{\mathrm{point}}(\mathcal{B})$, that is, there exists an eigenvalue $\lambda_m$ of $\mathcal{B}$ such that $\lambda_m \in \mathcal{G}_m$.* *Proof.* We refer the reader to [@Chonchaiya:2010:CSP] for the extension of the Gershgorin circle theorem to infinite matrices. Since the off-diagonal terms of $\mathcal{B}$ decay exponentially, we can apply [@Chonchaiya:2010:CSP p. 39] to conclude that all eigenvalues of $\mathcal{B}$ lie within the union of the Gershgorin disks. The second statement follows from a standard continuity argument. Let $D$ denote the diagonal part of $\mathcal{B}$, and define $\mathcal{B}_t = t \mathcal{B} + (1-t)D$ for $t\in[0,1]$ so that $\mathcal{B}_0 = D$ and $\mathcal{B}_1 = \mathcal{B}$. The eigenvalues $\{\lambda_{m,0}\}_{m\in\mathbb{Z}\setminus\{0\}}$ of $\mathcal{B}_0$ are the diagonal elements. Let $\mathcal{G}_{m,t}$ denote the Gershgorin disk of $\mathcal{B}_t$ centered at the eigenvalue $\lambda_{m,t}$. Since the sets $\{\mathcal{G}_{m,1}\}$ are disjoint and each disk $\mathcal{G}_{m,t}$ shrinks monotonically to the single point $\{\lambda_{m,0}\}$ as $t\to0$, the sets $\{\mathcal{G}_{m,t}\}$, for each $t\in[0,1]$, are also disjoint. Continuity of $\lambda_{m,t}$ in $t$ now guarantees that $\lambda_{m,t}$ must remain in the disk $\mathcal{G}_{m,t}$ for all $t\in[0,1]$. ◻ **Lemma 4**.
*Let $G = (g_{mn})_{m,n=1}^\infty$ with $g_{mn} = c_{mn}/r_\mathrm{i}^{m+n}$. There exists $\rho\in[0,1)$ such that $|g_{mn}| \le \sqrt{m} \, \rho^{m+n}$ for all $m,n\in\mathbb{N}$.* *Proof.* $\Gamma_\mathrm{i}$ is the image of a circle under the conformal map $z=\Psi(w)$ and thus has an analytic and univalent continuation to $|w|>r_\mathrm{i}-\delta$ for a small enough $\delta>0$. The inequality ([\[Gineq\]](#Gineq){reference-type="ref" reference="Gineq"}) yields $$\sqrt{\frac{n}{m}} \frac{|c_{mn}|}{(r_\mathrm{i}-\delta)^{m+n}} < 1,$$ and thus $$|g_{mn}| = \frac{|c_{mn}|}{r_\mathrm{i}^{m+n}} < \sqrt{\frac{m}{n}} \frac{(r_\mathrm{i}-\delta)^{m+n}}{r_\mathrm{i}^{m+n}} < \sqrt{m}\, \rho^{m+n},$$ where $\rho = (r_\mathrm{i}-\delta)/r_\mathrm{i}$. ◻ **Theorem 5**. *Let $r=\frac{r_\mathrm{i}}{r_\mathrm{e}}<1$ and $\rho\in(0,1)$ be given, and let $\lambda$ be an eigenvalue of $\mathbb{K}_{\partial\!A}^*$ with $\lambda\neq \pm\frac{1}{2}$. Then $$\begin{aligned} \lambda^2 \in \bigcup_{m=1}^\infty B(m,r),\end{aligned}$$ where $B(m,r)$ is a disk defined by $$\begin{aligned} B(m,r) = \left\{\mu\in\left[0,1/4\right]: \left| \mu - \frac{r^{2m}}{4} \right| \le R(m,r) \right\}\end{aligned}$$ with a radius given by $$\label{rad} R(m,r) = \sqrt{m} \, \rho^m (1-r^2) M(m,r)$$ and $M :=\max(M_1^\mathrm{row}, M_1^\mathrm{col}, M_2^\mathrm{row}, M_2^\mathrm{col})$ is a bounded function of $m$ and $r$ with $$\begin{aligned} &M_1^\mathrm{row}(m,r) = \frac{\rho}{4(1-\rho)(1-\rho r^2)},\\ &M_1^\mathrm{col}(m,r) = \frac{1}{4\sqrt{m}} \left(\frac{1-r^{2m}}{1- r^2}\right)r^{2m} \sum_{n=1}^\infty \sqrt{n} \rho^n,\\ &M_2^\mathrm{row}(m,r) = \frac{1}{4} \sum_{n=1}^\infty \rho^{n} \left(\frac{1-r^{2n}}{1- r^2}\right)\Big( r^{2n} + \sum_{k=1}^\infty \sqrt{n} \rho^{n+k} (1-r^{2k}) \Big),\\ &M_2^\mathrm{col}(m,r) = \frac{1}{4}\left(\frac{1-r^{2m}}{1- r^2}\right) \sum_{n=1}^\infty \sqrt{n} \, \rho^n \Big(\frac{1}{\sqrt{m}} + \sum_{k=1}^\infty \rho^{m+k} (1-r^{2k}) \Big).\end{aligned}$$* *Proof.* 
Applying Lemma [Lemma 3](#Gershgorin_type){reference-type="ref" reference="Gershgorin_type"} to the first and the second block rows of ([\[altmatrix\]](#altmatrix){reference-type="ref" reference="altmatrix"}), the union $$\bigcup_{m\in\mathbb{N}, \, i=1,2} B_i(m,r)$$ contains the spectrum of the block matrix ([\[altmatrix\]](#altmatrix){reference-type="ref" reference="altmatrix"}), where $B_1(m,r)$ and $B_2(m,r)$ are defined by $$\label{B1} B_1(m,r) = \left\{ \mu\in\mathbb{C}: \left| \mu - \frac{1}{4}\left[ r^{2\mathbb{N}} \right]_{mm} \right| \le \max\left[R_{1}^\mathrm{row}(m,r), R_{1}^\mathrm{col}(m,r)\right] \right\},$$ with $$\label{B1R} R_{1}^\mathrm{row}(m,r) = \sum_{n=1}^\infty \left| \frac{1}{4} \left[ - G(I-r^{2\mathbb{N}}) \right]_{mn} \right|, \quad R_{1}^\mathrm{col}(m,r) = \sum_{n=1}^\infty \left| \frac{1}{4} \left[ - \overline{G}(I-r^{2\mathbb{N}})r^{2\mathbb{N}} \right]_{nm} \right|,$$ and $$\begin{aligned} B_2(m,r)= \bigg\{\mu\in\mathbb{C}: &\left| \mu - \frac{1}{4} \left[ r^{2\mathbb{N}} + \overline{G} (I - r^{2\mathbb{N}}) G (I - r^{2\mathbb{N}}) \right]_{mm} \right| \le \max\left[R_{2}^\mathrm{row}(m,r), R_{2}^\mathrm{col}(m,r)\right]\bigg\},\label{B2}\end{aligned}$$ with $$\begin{aligned} &R_{2}^\mathrm{row}(m,r) = \sum_{n=1}^\infty \left| \frac{1}{4} \left[ -\overline{G} (I - r^{2\mathbb{N}}) r^{2\mathbb{N}} \right]_{mn} \right| +\sum_{n\neq m} \left| \frac{1}{4} \left[ \overline{G} (I - r^{2\mathbb{N}}) G (I - r^{2\mathbb{N}}) \right]_{mn} \right|, \label{B2Rr}\\ &R_{2}^\mathrm{col}(m,r) = \sum_{n=1}^\infty \left| \frac{1}{4} \left[ -G (I - r^{2\mathbb{N}}) \right]_{nm} \right| +\sum_{n\neq m} \left| \frac{1}{4} \left[ \overline{G} (I - r^{2\mathbb{N}}) G (I - r^{2\mathbb{N}}) \right]_{nm} \right|.\label{B2Rc}\end{aligned}$$ From Lemma [Lemma 4](#grunsky_bound){reference-type="ref" reference="grunsky_bound"}, the radii satisfy $$\begin{aligned} &R_{1}^\mathrm{row}(m,r) = \frac{1}{4} \sum_{n=1}^\infty |g_{mn}| (1-r^{2n}) \le \frac{1}{4} \sum_{n=1}^\infty \sqrt{m} \, \rho^{m+n} (1-r^{2n}) = \sqrt{m} \,
\rho^m (1-r^2) M_1^\mathrm{row}(m,r)\label{R1r}\\ &R_{1}^\mathrm{col}(m,r) = \frac{1}{4} \sum_{n=1}^\infty |g_{nm}| (1-r^{2m})r^{2m} \le \frac{1}{4} \sum_{n=1}^\infty \sqrt{n} \, \rho^{m+n} (1-r^{2m})r^{2m} = \sqrt{m} \, \rho^{m} (1-r^2) M_1^\mathrm{col}(m,r),\label{R1c}\end{aligned}$$ where $M_1^\mathrm{row}(m,r) = \frac{\rho}{4(1-\rho)(1-\rho r^2)}$ and $M_1^\mathrm{col}(m,r) = \frac{r^{2m}(1-r^{2m})}{4\sqrt{m} (1-r^2)} \sum_{n=1}^\infty \sqrt{n} \rho^n$. If $\mu\in B_1(m,r)$, then ([\[R1r\]](#R1r){reference-type="ref" reference="R1r"}), ([\[R1c\]](#R1c){reference-type="ref" reference="R1c"}), and ([\[B1\]](#B1){reference-type="ref" reference="B1"}) imply that $$\label{disk1} \left| \mu - \frac{r^{2m}}{4} \right| \le \sqrt{m} \, \rho^m (1-r^2) \max\left[M_1^\mathrm{row}(m,r), M_1^\mathrm{col}(m,r)\right].$$ The inequality in ([\[B2\]](#B2){reference-type="ref" reference="B2"}) and the triangle inequality yield $$\label{B2'} \left| \mu - \frac{r^{2m}}{4} \right| = \left| \mu - \frac{1}{4} \left[ r^{2\mathbb{N}} \right]_{mm} \right|\le \max(R_2^\mathrm{row},R_2^\mathrm{col}) + \left|\frac{1}{4} \left[\overline{G} (I - r^{2\mathbb{N}}) G (I - r^{2\mathbb{N}}) \right]_{mm} \right|.$$ Let us estimate the right-hand side of ([\[B2\'\]](#B2'){reference-type="ref" reference="B2'"}).
$$\begin{aligned} & R_{2}^\mathrm{row}(m,r) + \left|\frac{1}{4} \left[\overline{G} (I - r^{2\mathbb{N}}) G (I - r^{2\mathbb{N}}) \right]_{mm} \right| \notag\\ &= \sum_{n=1}^\infty \left| \frac{1}{4} \left[ -\overline{G} (I - r^{2\mathbb{N}}) r^{2\mathbb{N}} \right]_{mn} \right| + \sum_{n=1}^\infty \left| \frac{1}{4} \left[ \overline{G} (I - r^{2\mathbb{N}}) G (I - r^{2\mathbb{N}}) \right]_{mn} \right|\notag\\ &\le \frac{1}{4} \sum_{n=1}^\infty |g_{mn}| (1-r^{2n})\Big( r^{2n} + \sum_{k=1}^\infty |g_{nk}|(1-r^{2k}) \Big)\notag\\ &\le \frac{1}{4} \sum_{n=1}^\infty \sqrt{m} \, \rho^{m+n} (1-r^{2n})\Big( r^{2n} + \sum_{k=1}^\infty \sqrt{n} \rho^{n+k} (1-r^{2k}) \Big)\notag\\ &= \sqrt{m} \, \rho^m (1-r^2) M_2^\mathrm{row}(m,r) \label{R2r}\end{aligned}$$ where $$M_2^\mathrm{row}(m,r) = \frac{1}{4} \sum_{n=1}^\infty \rho^{n} (\frac{1-r^{2n}}{1-r^2})\Big( r^{2n} + \sum_{k=1}^\infty \sqrt{n} \rho^{n+k} (1-r^{2k}) \Big),$$ and $$\begin{aligned} & R_{2}^\mathrm{col}(m,r) + \left|\frac{1}{4} \left[\overline{G} (I - r^{2\mathbb{N}}) G (I - r^{2\mathbb{N}}) \right]_{mm} \right| \notag\\ &= \sum_{n=1}^\infty \left| \frac{1}{4} \left[ -G (I - r^{2\mathbb{N}}) \right]_{nm} \right| + \sum_{n=1}^\infty \left| \frac{1}{4} \left[ \overline{G} (I - r^{2\mathbb{N}}) G (I - r^{2\mathbb{N}}) \right]_{nm} \right|\notag\\ &\le \frac{1}{4} \sum_{n=1}^\infty |g_{nm}| (1-r^{2m})\Big( 1 + \sum_{k=1}^\infty |g_{mk}|(1-r^{2k}) \Big)\notag\\ &\le \frac{1}{4} \sum_{n=1}^\infty \sqrt{n} \, \rho^{m+n} (1-r^{2m})\Big(1 + \sum_{k=1}^\infty \sqrt{m} \rho^{m+k} (1-r^{2k}) \Big)\notag\\ &= \sqrt{m} \, \rho^m (1-r^2) M_2^\mathrm{col}(m,r),\label{R2c}\end{aligned}$$ where $$M_2^\mathrm{col}(m,r) = \frac{1}{4}(\frac{1-r^{2m}}{1-r^2}) \sum_{n=1}^\infty \sqrt{n} \, \rho^n \Big(\frac{1}{\sqrt{m}} + \sum_{k=1}^\infty \rho^{m+k} (1-r^{2k}) \Big).$$ If $\mu\in B_2(m,r)$, then ([\[R2r\]](#R2r){reference-type="ref" reference="R2r"}), ([\[R2c\]](#R2c){reference-type="ref" reference="R2c"}), and 
([\[B2\'\]](#B2'){reference-type="ref" reference="B2'"}) imply that $$\label{disk2} \left| \mu - \frac{r^{2m}}{4} \right| \le \sqrt{m} \, \rho^m (1-r^2) \max\left[M_2^\mathrm{row}(m,r), M_2^\mathrm{col}(m,r)\right].$$ By combining ([\[disk1\]](#disk1){reference-type="ref" reference="disk1"}) and ([\[disk2\]](#disk2){reference-type="ref" reference="disk2"}), we get $$\label{UB} \bigcup_{m\in\mathbb{N}, \, i=1,2} B_i(m,r) \subset \bigcup_{m=1}^\infty B(m,r).$$ Then Lemma [Lemma 3](#Gershgorin_type){reference-type="ref" reference="Gershgorin_type"}, Theorem [Theorem 2](#eigenvalueK){reference-type="ref" reference="eigenvalueK"}, and ([\[UB\]](#UB){reference-type="ref" reference="UB"}) give the desired result. ◻ The ball $B(m,r)$ in Theorem [Theorem 5](#estimate){reference-type="ref" reference="estimate"} is disjoint from all other balls for sufficiently large $m$. This disjointness guarantees the existence of an eigenvalue in the ball. **Lemma 6**. *Let $r$ and $\rho$ be given such that $(\frac{1+\rho}{2})^{\frac{1}{2}}< r < 1$. There exists $m_0 = m_0(\rho) \in\mathbb{N}$ such that the following properties hold for all $m\ge m_0$:* 1. *$B(m,r)$ is disjoint from $B(n,r)$ for all $n\in\mathbb{N}, \, n\neq m$.* 2. *$B(m,r)$ contains at least one squared eigenvalue of $\mathbb{K}_{\partial\!A}^*$.* *Proof.* Using Lemma [Lemma 3](#Gershgorin_type){reference-type="ref" reference="Gershgorin_type"} and Theorem [Theorem 5](#estimate){reference-type="ref" reference="estimate"}, we see that property 1 implies property 2. For the proof of property 1, it is enough to show that $B(m,r)$ is disjoint from the consecutive balls, that is, for some $m_0\in\mathbb{N}$, $$\label{disjointness} B(m,r) \cap B(m\pm 1,r)= \emptyset, \quad m\ge m_0.$$ To do this, we will show that the distance between the centers of $B(m,r)$ and $B(m\pm1,r)$ is greater than the sum of the radii of the consecutive balls.
The distance between the centers is $$d := \left| \frac{r^{2m}}{4} - \frac{r^{2(m\pm1)}}{4} \right| = \frac{r^{2m} \left| 1 - r^{\pm2}\right| }{4}.$$ As $r > (\frac{1+\rho}{2})^{\frac{1}{2}}$, we have $$\label{ineq22} 4d = r^{2m} \left| 1 - r^{\pm2}\right| > \bigg( \frac{1+\rho}{2} \bigg)^m \left| 1 - r^{\pm2}\right|.$$ From ([\[rad\]](#rad){reference-type="ref" reference="rad"}), the sum of the radii of $B(m,r)$ and $B(m\pm1,r)$ satisfies $$\label{rrho} R(m,r) + R(m\pm1,r) = (\sqrt{m} \, M(m,r) + \sqrt{m\pm1} \, \rho^{\pm1} M(m\pm 1,r)) (1-r^2) \rho^m \le C_{\rho} \sqrt{m+1} \, \rho^m (1-r^2)$$ for some constant $C_\rho > 0$ independent of $r$ and $m$. Then ([\[ineq22\]](#ineq22){reference-type="ref" reference="ineq22"}) and ([\[rrho\]](#rrho){reference-type="ref" reference="rrho"}) give $$\label{bound} R(m,r) + R(m\pm1,r) \le C_{\rho} \sqrt{m+1} \, \rho^m (1-r^2) < 4d \, C_\rho \sqrt{m+1} \left( \frac{2\rho}{1+\rho} \right)^m$$ by the inequality $$\frac{1-r^2}{\left| 1 - r^{\pm2}\right|} \le \max\left( \frac{1-r^2}{1-r^2}, \frac{1-r^2}{r^{-2}-1} \right) = \max\left(1,r^2\right) \le 1.$$ Since $\frac{2\rho}{1+\rho} < 1$, the factor $4 C_\rho \sqrt{m+1} \left( \frac{2\rho}{1+\rho} \right)^m$ on the right-hand side of ([\[bound\]](#bound){reference-type="ref" reference="bound"}) converges to zero as $m$ grows. Hence, there exists $m_0\in\mathbb{N}$ such that $$R(m,r) + R(m\pm1,r) \le d \quad \mbox{for all }m\ge m_0.$$ Therefore, ([\[disjointness\]](#disjointness){reference-type="ref" reference="disjointness"}) holds, which completes the proof. ◻ **Theorem 7**. *The Hausdorff distance between the spectrum of the NP operator of $A$ and $[-1/2,1/2]$ converges to zero as $r_\mathrm{e}$ approaches $r_\mathrm{i}$, that is, $$\lim_{r \to 1} d_H\left(\sigma(\mathbb{K}_{\partial\!A}^*),[-1/2,1/2]\right) = 0.$$* *Proof.* Let us denote $\sigma(\mathbb{K}_{\partial\!A}^*)^2 := \{ \lambda^2: \lambda \in \sigma(\mathbb{K}_{\partial\!A}^*) \}$.
It is enough to show that $$\label{square} \lim_{r \to 1} d_H\left(\sigma(\mathbb{K}_{\partial\!A}^*)^2,[0,1/4]\right) = 0$$ because the spectrum of the NP operator of a two-dimensional domain has the twin relation [@Mayergoyz:2005:ERN], that is, $-y\in\sigma(\mathbb{K}_{\partial\!A}^*)$ if $y\in\sigma(\mathbb{K}_{\partial\!A}^*)$. Thus, ([\[square\]](#square){reference-type="ref" reference="square"}) is equivalent to $$\lim_{r\to1} d_H\left(\sigma(\mathbb{K}_{\partial\!A}^*),[-1/2,1/2]\right) = 0.$$ Let us prove ([\[square\]](#square){reference-type="ref" reference="square"}). Let $r \in (r_0, 1)$ with $r_0 = (\frac{1+\rho}{2})^{\frac{1}{2}}$. For any $m\ge m_0$, there exists $\lambda_m \in\sigma\left(\mathbb{K}_{\partial\!A}^*\right)$ satisfying $\lambda_m^2\in B(m,r)$ by Theorem [Theorem 5](#estimate){reference-type="ref" reference="estimate"} and Lemma [Lemma 6](#disjoint){reference-type="ref" reference="disjoint"}, i.e., $$\left|\lambda_m^2 - \frac{r^{2m}}{4} \right| < R(m,r), \quad m\ge m_0.$$ From ([\[rad\]](#rad){reference-type="ref" reference="rad"}), we have $$\begin{aligned} \left|\lambda_m^2 - \frac{r^{2m}}{4}\right| < \sqrt{m} \, \rho^m (1-r^2) M(m,r). \label{triangle}\end{aligned}$$ For a given $\epsilon>0$, we can find $m_1\ge m_0$ such that, for all $m\ge m_1$, $$\label{ineq1} \sqrt{m} \, \rho^m (1-r^2) M(m,r) < \frac{\epsilon}{2}.$$ We may choose $r_1<1$ such that, if $0 \le m<m_1$, $$\label{ineq4} \frac{r^{2m}}{4} - \frac{r^{2m_1}}{4} \le \frac{1}{4} - \frac{r^{2m_1}}{4} < \frac{\epsilon}{4} \quad \mbox{for any } r>r_1.$$ Moreover, there is $r_2 = (1-\epsilon)^{\frac{1}{2}}\in(0,1)$ such that $$\label{ineq2} \frac{r^{2m}}{4} - \frac{r^{2m+2}}{4} = \frac{r^{2m}(1-r^2)}{4} \le \frac{1-r^2}{4} < \frac{\epsilon}{4}$$ for all $r\in(r_2,1)$ and $m\ge0$. From now on, we denote $r_{\max} = \max(r_0,r_1,r_2)$. Let $\mu$ be an arbitrary number in $[0,1/4]$.
From ([\[ineq2\]](#ineq2){reference-type="ref" reference="ineq2"}) and the pigeonhole principle, we can find some integer $m'\ge0$ such that $$\label{ineq3} \left| \mu - \frac{r^{2m'}}{4} \right| < \frac{\epsilon}{4}.$$ If $m'\ge m_1$, then ([\[triangle\]](#triangle){reference-type="ref" reference="triangle"}), ([\[ineq1\]](#ineq1){reference-type="ref" reference="ineq1"}), and ([\[ineq3\]](#ineq3){reference-type="ref" reference="ineq3"}) imply that, for $r > r_{\max}$, $$\label{epsil1} \left| \mu - \lambda_{m'}^2 \right| \le \left| \mu - \frac{r^{2m'}}{4} \right| + \left| \lambda_{m'}^2 - \frac{r^{2m'}}{4} \right| < \frac{\epsilon}{4} + \frac{\epsilon}{2} < \epsilon.$$ Otherwise, if $m'< m_1$, then ([\[ineq3\]](#ineq3){reference-type="ref" reference="ineq3"}), ([\[ineq4\]](#ineq4){reference-type="ref" reference="ineq4"}), ([\[triangle\]](#triangle){reference-type="ref" reference="triangle"}), and ([\[ineq1\]](#ineq1){reference-type="ref" reference="ineq1"}) yield that, for $r>r_{\max}$, $$\label{epsil2} \left| \mu - \lambda_{m_1}^2 \right| \le \left| \mu - \frac{r^{2m'}}{4} \right| + \left| \frac{r^{2m'}}{4} - \frac{r^{2m_1}}{4} \right| + \left| \lambda_{m_1}^2 - \frac{r^{2m_1}}{4} \right| < \frac{\epsilon}{4} + \frac{\epsilon}{4} + \frac{\epsilon}{2} = \epsilon.$$ Let $D_\epsilon(x)$ denote a disk of radius $\epsilon$ centered at $x$. From ([\[epsil1\]](#epsil1){reference-type="ref" reference="epsil1"}) and ([\[epsil2\]](#epsil2){reference-type="ref" reference="epsil2"}), we always have $\lambda\in\sigma(\mathbb{K}_{\partial\!A}^*)$ such that $\mu\in D_\epsilon(\lambda^2)$ for any given $\mu\in[0,1/4]$.
As $[0,1/4]$ already contains $\sigma(\mathbb{K}_{\partial\!A}^*)^2$, if $r>\max(r_0,r_1,r_2)$, then the Hausdorff distance between $\sigma(\mathbb{K}_{\partial\!A}^*)^2$ and $[0,1/4]$ satisfies $$d_H\left(\sigma(\mathbb{K}_{\partial\!A}^*)^2,[0,1/4]\right)=\inf\left\{ \delta\ge0: [0,1/4] \subseteq \bigcup_{\lambda\in\sigma(\mathbb{K}_{\partial\!A}^*)} D_\delta(\lambda^2) \right\} < \epsilon$$ for an arbitrary $\epsilon>0$. Therefore, we conclude that $$\lim_{r\to1} d_H\left(\sigma(\mathbb{K}_{\partial\!A}^*)^2,[0,1/4]\right) = 0.$$ The theorem follows by the symmetry of $\sigma(\mathbb{K}_{\partial\!A}^*)\setminus\{1/2\}$. ◻ Finally, ([\[triangle\]](#triangle){reference-type="ref" reference="triangle"}) and ([\[ineq1\]](#ineq1){reference-type="ref" reference="ineq1"}) show that as $r\to1$, $\sigma(\mathbb{K}_{\partial\!A}^*)$ tends to the spectrum of the NP operator for a circular annulus. Indeed, the eigenvalues for a circular annulus with ratio of inner to outer radii equal to $r$ are $\{\pm r^m/2\}$ and $1/2$. # Numerical results We compute eigenvalues of a finitely approximated matrix of $\mathbb{K}_{\Omega_\mathrm{e}\setminus\overline{\Omega}_i}^*$ for the thin distorted annulus $A$ introduced in Figure [2](#doublyctd){reference-type="ref" reference="doublyctd"}. The exterior conformal mapping is $$\Psi(w) = w + \frac{0.3+0.5i}{w} - \frac{0.2+0.1i}{w^3} + \frac{0.1i}{w^5} + \frac{0.05i}{w^6} + \frac{0.01i}{w^7},$$ and the interior and exterior radii are $r_\mathrm{i}= 1.1$ and $r_\mathrm{e}= 1.15$. According to the proof of Theorem [Theorem 7](#conv_eig){reference-type="ref" reference="conv_eig"}, the spectrum of the Neumann-Poincaré operator of $A$ approaches $$\label{circ_eig} \left\{ \pm\frac{r^m}{2} \right\}_{m\in\mathbb{N}}$$ as $r = r_\mathrm{i}/r_\mathrm{e}\to 1$. Indeed, ([\[circ_eig\]](#circ_eig){reference-type="ref" reference="circ_eig"}) is the spectrum of the circular annulus with the interior and exterior radii given by $r_\mathrm{i}$ and $r_\mathrm{e}$.
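The reference spectrum $\{\pm r^m/2\}$ of the circular annulus can be tabulated directly. The following pure-Python sketch (the helper name `annulus_spectrum` is an illustrative assumption, not from the text) checks the geometric decay of the positive eigenvalues with the radii $r_\mathrm{i}=1.1$ and $r_\mathrm{e}=1.15$ used here:

```python
# Sketch: reference spectrum {±r^m/2} of the circular annulus, with
# r = r_i / r_e as in the text. Helper name is illustrative only.
r_i, r_e = 1.1, 1.15
r = r_i / r_e  # ratio of inner to outer radii, r < 1

def annulus_spectrum(m_max):
    """Return the eigenvalues ±r^m/2 for m = 1, ..., m_max."""
    pos = [r**m / 2 for m in range(1, m_max + 1)]
    return pos + [-lam for lam in pos]

spectrum = annulus_spectrum(500)
pos = [lam for lam in spectrum if lam > 0]

# Positive eigenvalues decay geometrically from r/2 toward 0 with ratio r.
assert abs(pos[0] - r / 2) < 1e-12
assert all(abs(b / a - r) < 1e-9 for a, b in zip(pos, pos[1:]))
```

This is the list that the truncated-matrix eigenvalues of the distorted annulus are compared against in the figures below.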
The following Figure [3](#eig_comparing){reference-type="ref" reference="eig_comparing"} compares eigenvalue distributions of both annuli. ![Approximated eigenvalue distributions for the circular and the distorted annuli. We truncate $\mathbb{K}_{\Omega_\mathrm{e}\setminus\overline{\Omega}_i}^*$ as a finite matrix of size $1002\times1002$. The $x$-- and $y$--axes represent an index and the corresponding eigenvalue, respectively. There is a slight difference in the eigenvalues around $\pm1/2$, but as the eigenvalues approach 0, the distributions are fairly consistent.](eigenvalues.pdf){#eig_comparing} ![A distribution for the difference of eigenvalues.](error.pdf){#Difference} Figure [4](#Difference){reference-type="ref" reference="Difference"} shows the relative difference of the eigenvalues in Figure [3](#eig_comparing){reference-type="ref" reference="eig_comparing"}, that is, $$\frac{\hat{\lambda}_n^\pm - \lambda_n^{\pm,\text{disk}} }{\lambda_n^{\pm,\text{disk}}}$$ where $1\le n \le 1002$ and $$\lambda_n^{\pm, \text{disk}} \in \left\{ \pm\frac{1}{2}\left( \frac{r_\mathrm{i}}{r_\mathrm{e}} \right)^m \right\}_{0\le m \le500}.$$ The relative difference apparently decreases exponentially as the eigenvalues approach zero.

# References

H. Ammari, G. Ciraolo, H. Kang, H. Lee, and G. W. Milton. Spectral theory of a Neumann--Poincaré--type operator and analysis of cloaking due to anomalous localized resonance. , 208(2):667--692, 2013. H. Ammari, G. Ciraolo, H. Kang, H. Lee, and G. W. Milton. Spectral theory of a Neumann--Poincaré--type operator and analysis of cloaking by anomalous localized resonance II. In *Inverse problems and applications*, volume 615 of *Contemp. Math.*, pages 1--14. Amer. Math. Soc., Providence, RI, 2014. H. Ammari, B. Fitzpatrick, H. Kang, M. Ruiz, S. Yu, and H. Zhang. , volume 235 of *Mathematical Surveys and Monographs*. American Mathematical Society, Providence, RI, 2018. K. Ando, Y.-G. Ji, H. Kang, D. Kawagoe, and Y. Miyanishi. . , 36(7):1817--1828, 2019. K. Ando, H. Kang, S.
Lee, and Y. Miyanishi. . , 54(6):6164--6185, 2022. K. Ando, H. Kang, and Y. Miyanishi. . , 146(2):791--800, 2022. P. D. Cha and A. Shin. Perturbation methods for the eigencharacteristics of symmetric and asymmetric systems. , 2018(3):1--25, 2018. D. Choi, J. Helsing, S. Kang, and M. Lim. Inverse problem for a planar conductivity inclusion. , 2023. D. Choi, J. Kim, and M. Lim. Analytical shape recovery of a conductivity inclusion based on faber polynomials. , 381(3):1837--1867, 2021. D. Choi, J. Kim, and M. Lim. Geometric multipole expansion and its application to semi-neutral inclusions of general shape. , 74(1):39, 2023. D. Choi, K. Kim, and M. Lim. . , 495(2):124756, 2021. R. Chonchaiya. Computing the spectra and pseudospectra of non-self adjoint random operators arising in mathematical physics. , 2010. D. Chung, H. Kang, K. Kim, and H. Lee. Cloaking due to anomalous localized resonance in plasmonic structures of confocal ellipses. , 74(5):1691--1707, 2014. P. L. Duren. , volume 259 of *Grundlehren der Mathematischen Wissenschaften*. Springer-Verlag New York, 1983. L. Escauriaza and M. Mitrea. Transmission problems and spectral theory for singular integral operators on lipschitz domains. , 216(1):141--171, 2004. G. Faber. Über polynomische Entwickelungen. , 57(3):389--408, 1903. E. Fabes, M. Sand, and J.-K. Seo. The spectral radius of the classical layer potentials on convex domains. In B. Dahlberg, R. Fefferman, Carlos Kenig, Eugene Fabes, David Jerison, and J. Pipher, editors, *Partial Differential Equations with Minimal Smoothness and Applications*, pages 129--137, New York, NY, 1992. H. Grunsky. . , 45(1):29--61, 1939. J. Helsing, H. Kang, and M. Lim. . , 34(4):991--1011, 2017. J. Helsing and K.-M. Perfekt. The spectra of harmonic layer potential operators on domains with rotationally symmetric conical points. , 118:235--287, 2018. Y.-G. Ji and H. Kang. . , 2022. Y. Jung and M. Lim. 
A decay estimate for the eigenvalues of the Neumann-Poincaré operator using the Grunsky coefficients. , 148(2):591--600, 2020. Y. Jung and M. Lim. . , 53(2):1630--1669, 2021. Y. Jung and M. Lim. Spectral analysis of the Neumann--Poincaré operator on the crescent-shaped domain and touching disks and analysis of plasmon resonance. , 74:103951, 2023. H. Kang. Spectral geometry and analysis of the Neumann--Poincaré operator, a review. In *Recent progress in mathematics*, volume 1 of *KIAS Springer Ser. Math.*, pages 119--153. Springer, Singapore, 2022. H. Kang, M. Lim, and S. Yu. . , 226(1):83--115, 2017. O. D. Kellogg. , volume 31 of *Die Grundlehren der Mathematischen Wissenschaften*. Springer-Verlag Berlin Heidelberg, 1929. D. Khavinson, M. Putinar, and H. S. Shapiro. Poincaré's variational problem in potential theory. , 185(1):143--184, 2007. W. Li, K.-M. Perfekt, and S. P. Shipman. Infinitely many embedded eigenvalues for the Neumann-Poincaré operator in 3d. , 2021. W. Li and S. P. Shipman. Embedded eigenvalues for the neumann-poincaré operator. , 31(4):505--534, 2019. M. Lim. Symmetry of a boundary integral operator and a characterization of a ball. , 45(2):537--543, 2001. I. D. Mayergoyz, D. R. Fredkin, and Z. Zhang. Electrostatic (plasmon) resonances in nanoparticles. , 72:155412, 2005. C. Neumann. . , 1887. K.-M. Perfekt. The transmission problem on a three-dimensional wedge. , 231:1745--1780, 2019. K.-M. Perfekt and M. Putinar. The essential spectrum of the neumann--poincaré operator on a domain with corners. , 223(2):1019--1033, 2017. S. Ritter. The spectrum of the electrostatic integral operator for an ellipsoid. In *Inverse scattering and potential problems in mathematical physics (Oberwolfach, 1993)*, volume 40 of *Methoden Verfahren Math. Phys.*, pages 157--167. Peter Lang, Frankfurt am Main, 1995. [^1]: Department of Mathematical Sciences, Korea Advanced Institute of Science and Technology, Daejeon 34141, South Korea (mklim\@kaist.ac.kr). 
[^2]: Department of Mathematics, Louisiana State University, Baton Rouge, LA, USA (dchoi\@lsu.edu, shipman\@lsu.edu).
--- abstract: | Generalized cross-validation (GCV) is a widely-used method for estimating the squared out-of-sample prediction risk that employs a scalar degrees of freedom adjustment (in a multiplicative sense) to the squared training error. In this paper, we examine the consistency of GCV for estimating the prediction risk of arbitrary ensembles of penalized least squares estimators. We show that GCV is inconsistent for any finite ensemble of size greater than one. Towards repairing this shortcoming, we identify a correction that involves an additional scalar correction (in an additive sense) based on degrees of freedom adjusted training errors from each ensemble component. The proposed estimator (termed CGCV) maintains the computational advantages of GCV and requires neither sample splitting, model refitting, or out-of-bag risk estimation. The estimator stems from a finer inspection of ensemble risk decomposition and two intermediate risk estimators for the components in this decomposition. We provide a non-asymptotic analysis of the CGCV and the two intermediate risk estimators for ensembles of convex penalized estimators under Gaussian features and a linear response model. In the special case of ridge regression, we extend the analysis to general feature and response distributions using random matrix theory, which establishes model-free uniform consistency of CGCV. author: - | [^1] ¶ Pierre C. Bellec [^2] ¶\ <pierre.bellec@rutgers.edu> - | Jin-Hong Du [^3] [^4] ¶\ <jinhongd@andrew.cmu.edu> - | Takuya Koriyama$^\ast$ ¶\ [takuya.koriyama\@rutgers.edu](mailto:tk691@stat.rutgers.edu) - | Pratik Patil$^\ast$ [^5] ¶\ <pratikpatil@berkeley.edu> - | Kai Tan ¶\ [kai.tan\@rutgers.edu](mailto:kt536@stat.rutgers.edu) bibliography: - references.bib title: | Corrected generalized cross-validation\ for finite ensembles of penalized estimators --- # Introduction Ensemble methods are an important part of the toolkit in current machine learning practice. 
These methods combine multiple models and improve the predictive accuracy and stability compared to any single component model [@hastie2009elements; @dietterich1998experimental; @dietterich2000ensemble]. Bagging and its variants form an important class of ensemble methods that average predictors fitted on different subsamples of the full dataset. Classical literature on bagging includes works of @breiman_1996 [@buhlmann2002analyzing; @friedman_hall_2007], among others. There has been a flurry of recent work in characterizing the risk behavior of such subsampling-based ensemble methods in high dimensions; see, e.g., @loureiro2022fluctuations [@adlam2020understanding; @mucke_reiss_rungenhagen_klein_2022; @patil2022bagging; @du2023subsample]. These works demonstrate that the risk can be significantly reduced by appropriately choosing the ensemble size. Complementary to characterizing and understanding the risk behavior, it is important to have a reliable method for estimating these risks when implementing ensemble methods. Accurate risk estimation provides a benchmark for comparing the performance of different ensemble configurations and gives insights into the expected performance of the combined model on unseen data. In this paper, our primary focus is on the risk estimation techniques suitable for ensemble methods, especially when these ensembles are composed of penalized least squares estimators. One widely recognized technique for risk estimation is Generalized Cross-Validation (GCV). GCV adjusts the training error multiplicatively based on a factor known as the degrees-of-freedom correction. The consistency of GCV has been extensively studied, initially for fixed-X design settings and linear smoothers [@golub_heath_wabha_1979; @craven_wahba_1979].
Subsequent work has extended this to random-X settings, which are arguably more relevant in modern machine-learning applications where observations are drawn from some underlying distribution rather than being predetermined. Notably, the consistency of GCV for ridge regression has been shown in @patil2021uniform [@patil2022estimating; @wei_hu_steinhardt] under various settings. For linear base estimators (e.g., least squares estimator, ridge, etc.), the degrees of freedom of the ensemble estimator is simply the average of the degrees of freedom of the base predictors. The first question that this paper asks is the following: Is GCV still consistent for ensembles of general penalized estimators? Perhaps the simplest base predictor with which to study this question is ridge regression, which has emerged as a simple but powerful tool, particularly when dealing with high-dimensional data without any special sparsity structure. The recent work of [@du2023subsample] considers risk estimation for ensemble ridge regression. The authors analyze the use of GCV for the full ensemble ridge estimator when the number of ensemble components goes to infinity, demonstrating that GCV is consistent for the prediction risk. However, an intriguing surprise from their work is the inconsistency of GCV for any finite ensemble. This finding suggests a limitation of GCV and prompts the question of whether this inconsistency is special to ridge regression or is a broader issue with GCV's applicability to other models as well. In particular, this raises the second key question of our paper: Is there a GCV correction that is consistent for all possible ensemble sizes? Understanding these two questions forms the basis of our work. We propose a novel risk estimator that addresses the limitations of naive GCV for ensembles of penalized estimators.
Focusing on convex penalized predictors, we show that for a finite ensemble size larger than one, the GCV is always inconsistent and deviates from the true risk by an additive error that we characterize explicitly. Understanding this additive error leads to a novel correction to GCV, and we provide both non-asymptotic and asymptotic analyses of this corrected GCV under two different sets of data-generating assumptions. Our non-asymptotic analysis is restricted to Gaussian features and well-specified linear models, allowing general strongly convex penalty functions. Our asymptotic analysis is more general in accommodating random matrix theory features beyond Gaussianity and arbitrary data models while being restricted to ridge predictors. Our asymptotic analysis also enables the analysis of the "ridgeless" predictors (minimum-norm interpolators) in the overparameterized regime. For the case of ridge regression, this also allows us to show the uniform consistency in the regularization parameter. Before delving into precise details of our contributions, we present our results in , which demonstrates the clear gap between GCV and true risk. Our proposed corrected GCV (CGCV), on the other hand, accurately estimates the risk across varying settings for ensembles of both ridge and lasso predictors. ![ **CGCV is consistent for finite ensembles of penalized estimators while GCV is not.** We show a numerical comparison between the true squared risk (Risk), GCV error (GCV), and corrected GCV (CGCV) for ensembles across different tuning parameters $\lambda$ and ensemble sizes $M$, over 100 repetitions of the datasets. The left panel shows ridge ensemble with different $\lambda$ and $M$. Data is generated from a linear model with Gaussian design, $(n,p) = (2000, 500)$, and a signal-to-noise ratio of $1$. The subsample size is fixed at $k=800$. The right panel shows the plot of lasso ensemble under the same setting as the left panel except for a different range of $\lambda$. 
Further details of the experimental setup are given in . ](figures/Fig1_ridge_lasso.pdf){#fig:Fig1_ridge_lasso width="95%"} ## Summary of results The regime of primary interest for our results is the proportional asymptotic regime, in which the feature size $p$ scales proportionally with the sample size $n$. More formally, we implicitly consider a sequence of regression problems, say, indexed by $n$. The dimension $p{(n)}$, the distribution of the observations, the estimators and penalty under consideration, and the subsample size $k{(n)}$ all implicitly depend on $n$ with $p{(n)},k{(n)}$ growing with $n$. We omit the dependence on $n$ to lighten the notation and write simply, e.g., $p$ for the dimension and $k$ for the subsample size. Throughout the paper, the ratio $p/n$ is bounded from above by a constant independent of $n$, so that by extracting a subsequence if necessary, we may assume that $p/n$ has a finite limit. While several of our results are non-asymptotic with explicit dependence on certain problem parameters, we also state consistency results asymptotically using $\xrightarrow{\textup{p}}$/$\xrightarrow{\textup{a.s.}}$ (convergence in probability/almost sure convergence), $o_{\mathbb{P}}(\cdot)$ and $\mathcal{O}_{\mathbb{P}}(\cdot)$ notations, which are defined with respect to the aforementioned sequence of regression problems. Our contributions are four-fold, as summarized below. 1. **Inconsistency of GCV for finite ensembles.** We establish that ordinary GCV is inconsistent in estimating the squared out-of-sample prediction risk of finite ensembles with more than one penalized estimator (). Here, we assume strongly convex penalties that include ridge regression and elastic net, among others; and the inconsistency result of applies to base estimators that have normalized degrees of freedom bounded away from 0 (see for details). 2. 
**Corrected GCV for finite ensembles.** We introduce a novel risk estimator, termed corrected GCV (CGCV), designed to estimate the prediction risk of an arbitrary ensemble of penalized estimators (). The proposed estimator employs a scalar data-dependent correction term (in an additive sense) that incorporates degrees-of-freedom adjusted training errors of individual component estimators (). Importantly, CGCV preserves all computational advantages inherent to GCV, as it requires neither sample splitting nor model refitting nor out-of-bag risk estimation. 3. **Non-asymptotic analysis under Gaussian designs.** The structure of CGCV stems from inspecting the ensemble risk decomposition and two intermediate risk estimators that we construct for the components in this decomposition (). For Gaussian features and a well-specified linear model, we provide non-asymptotic bounds for these component estimators (). Building on these, we provide a non-asymptotic bound for the proposed corrected GCV estimator (). Our error bounds decrease at the $n^{-1/2}$ rate and provide explicit dependence on problem parameters such as the ensemble size $M$, the subsample size $k$, and the strong convexity parameter of the penalty (). The derived rates are pertinent within the scope of the proportional asymptotic regime. 4. **Asymptotic analysis under general designs.** For a general feature structure composed (multiplicatively) of an arbitrary covariance matrix and independent components with bounded $4+\eta$ moments, together with a response vector with bounded $4+\eta$ moments for some $\eta > 0$, we show that the intermediate risk estimators are asymptotically consistent for ridge ensembles (). We further show uniform consistency in the regularization level $\lambda$, including the case of ridgeless regression, for both the intermediate estimators and consequently for CGCV ( and ).
The asymptotic analysis allows us to relax the data and model assumptions while obtaining model-free uniform consistency of CGCV in the regularization parameter. ## Related work {#sec:related_work} Below, we describe related work on ensemble risk analysis and risk estimation, as well as a detailed comparison to other baseline risk estimation approaches based on sample-splitting and out-of-bag risk estimation. #### GCV for linear smoothers. Risk estimation is a crucial aspect of statistical learning that plays an important role in model assessment and selection. Over the years, a myriad of methods have been proposed, each with its own strengths and limitations. Among these, GCV has emerged as a widely adopted technique for risk estimation. GCV is conceptually motivated as an approximation to leave-one-out cross-validation, a method that provides unbiased estimates of the true prediction error but can be computationally expensive [@hastie2009elements]. The origins of GCV can be traced back to the work of @golub_heath_wabha_1979 and @craven_wahba_1979, where it was studied in the context of fixed-X design settings for linear smoothers. These settings, where the predictors are considered fixed and non-random, are common in experimental designs. The consistency of GCV was subsequently investigated in a series of papers by @li_1985 [@li_1986; @li_1987]. More recently, the focus has shifted toward the random-X and high-dimensional setting. This change in focus is motivated by modern machine-learning applications, which often deal with large datasets and where predictors are typically considered to be random variables drawn from some distribution rather than considered fixed values. In this context, GCV has been the subject of considerable research attention. 
In particular, [@leeb2008evaluation] establishes the consistency of GCV for the ordinary least-squares estimator, and the consistency of GCV for ridge regression has been established in a number of recent studies under various data settings by @adlam_pennington_2020neural [@patil2021uniform; @patil2022estimating; @wei_hu_steinhardt; @han2023distribution], which provide different flavors of results (both asymptotic and non-asymptotic). #### GCV beyond linear smoothers. The generalized cross-validation was initially defined for linear smoothers, where the sum of training errors is adjusted multiplicatively by the trace of the smoothing matrix. There is a general way of understanding this estimator and also extending the definition of GCV for estimators that are not linear smoothers by using degrees of freedom adjustments. The notion of degrees of freedom of a linear estimate ${{\widehat{\bm{\beta}}}}$ dates back to the pioneering paper of [@stein1981estimation]. This literature established the multivariate version of Stein's formula and proposed an unbiased estimator for estimating the mean square error of a Gaussian multivariate mean, which is often called Stein's Unbiased Risk Estimate ($\mathsf{SURE}$). For example, under the linear model $\bm{y}= \bm{X}\bm{\beta}_0 + \bm{\varepsilon}$ with Gaussian covariates $\bm{X}\in\mathbb{R}^{n\times p}$ and Gaussian noise $\bm{\varepsilon}\in\mathbb{R}^{n}$, Stein's Unbiased Risk Estimate has the expression $\widehat{\mathsf{SURE}} = \|\bm{y}- \bm{X}{{\widehat{\bm{\beta}}}}\|_2^2 + 2\sigma^2 {\widehat{\mathsf{df}}}{}- n \sigma^2$, where $\sigma^2$ is the noise variance, and $${\widehat{\mathsf{df}}}{}= \mathop{\mathrm{tr}}[(\partial/\partial \bm{y})\bm{X}{{\widehat{\bm{\beta}}}}] \label{df}$$ is the degrees of freedom of the estimate ${{\widehat{\bm{\beta}}}}$.
The $\mathsf{SURE}$ is unbiased in the sense that $\mathbb{E}[\widehat{\mathsf{SURE}}] = \mathbb{E}[\|\bm{X}{{\widehat{\bm{\beta}}}}- \bm{X}\bm{\beta}_0\|_2^2].$ This definition of degrees of freedom as the trace of the Jacobian yields a natural generalization of the GCV formula, where the trace of the smoothing matrix (for linear smoothers) is replaced by ${\widehat{\mathsf{df}}}{}$, i.e., the trace of the Jacobian $(\partial/\partial\bm{y})\bm{X}{{\widehat{\bm{\beta}}}}$. Moving beyond estimating the in-sample error $\|\bm{X}{{\widehat{\bm{\beta}}}}- \bm{X}\bm{\beta}_0\|_2^2$, this generalization of GCV beyond linear smoothers is known to be consistent in the sense: $$\frac{\|\bm{y}- \bm{X}{{\widehat{\bm{\beta}}}}\|_2^2/n}{(1 - {\widehat{\mathsf{df}}}{}/n)^2} \Big/ \Bigl( \|\bm{\Sigma}^{1/2}({{\widehat{\bm{\beta}}}}- \bm{\beta}_0)\|_2^2 + \sigma^2 \Bigr) \xrightarrow{\textup{p}}1,$$ under linear models with jointly normal observations $(\bm{x}_i, y_i)$ and $\mathbb{E}[\bm{x}_i\bm{x}_i^\top]=\bm{\Sigma}$. This consistency is proven for the lasso [@bayati2011lasso; @bayati2013estimating; @miolane2021distribution; @celentano2020lasso], and for regularized least-squares with general convex penalty [@bellec2020out Section 3]. @bellec2022derivatives generalized the above GCV estimator for the risk $\|\bm{\Sigma}^{1/2}({{\widehat{\bm{\beta}}}}- \bm{\beta}_0)\|_2^2 + \|\bm{\varepsilon}\|_2^2/n$ in the context of regularized M-estimation where the noise has possibly infinite variance. @tan2022noise studied a generalization of GCV to estimate the out-of-sample error in a multi-task regression model where each response $y_i$ is vector-valued. As an alternative to GCV, [@rad2018scalable; @xu2019consistent; @wang2018approximate] developed the Approximate Leave-One Out (ALO) estimator to estimate the out-of-sample error for regularized M-estimators, which is provably consistent under smoothness assumptions on the data-fitting loss and the regularizer. 
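The Jacobian-trace definition of degrees of freedom above can be checked numerically. The following sketch (our own illustration, not code from the paper) verifies for ridge regression that ${\widehat{\mathsf{df}}}{}= \mathop{\mathrm{tr}}[(\partial/\partial \bm{y})\bm{X}{{\widehat{\bm{\beta}}}}]$ coincides with the trace of the hat matrix; since the map $\bm{y}\mapsto \bm{X}{{\widehat{\bm{\beta}}}}$ is linear for ridge, finite differences recover the Jacobian essentially exactly.

```python
import numpy as np

# Sketch (assumed setup): for ridge regression, y -> X beta_hat(y) is linear, so the
# degrees of freedom tr[(d/dy) X beta_hat] equals the trace of the hat matrix
# X (X^T X + n*lam*I)^{-1} X^T. We compare the closed form with finite differences.
rng = np.random.default_rng(0)
n, p, lam = 50, 10, 0.3
X = rng.standard_normal((n, p))
y = X @ rng.standard_normal(p) + rng.standard_normal(n)

def ridge_fit(X, y, lam):
    n, p = X.shape
    return np.linalg.solve(X.T @ X + n * lam * np.eye(p), X.T @ y)

# Closed-form degrees of freedom: trace of the smoothing (hat) matrix.
S = X @ np.linalg.solve(X.T @ X + n * lam * np.eye(p), X.T)
df_closed = np.trace(S)

# Jacobian-trace definition via finite differences in y.
eps = 1e-6
base = X @ ridge_fit(X, y, lam)
df_fd = 0.0
for i in range(n):
    e = np.zeros(n)
    e[i] = eps
    df_fd += (X @ ridge_fit(X, y + e, lam) - base)[i] / eps

print(df_closed, df_fd)  # the two values agree up to rounding
```

For nonlinear-in-$\bm{y}$ estimators such as the lasso, the same definition applies but the Jacobian must be interpreted via the explicit formulae of the next table (e.g., the active-set size).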
#### Ensemble risk characterization. Ensemble methods combine weak predictors to yield strong predictors and are a common part of the statistical machine learning toolkit [@hastie2009elements]. Several variants exist, such as bagging [@breiman_1996; @buhlmann2002analyzing], random forests [@breiman2001random], and stacking [@wolpert1992stacked], among others. These methods are widely used due to their ability to improve prediction accuracy and robustness. The risk behavior of ensemble methods has been a topic of recent interest. [@patil2022bagging] provide a strategy for analyzing the risk of an ensemble of general predictors and present exact risk asymptotics for an ensemble of ridge predictors. Other related work on ensemble risk analysis includes @lejeune2020implicit [@ando2023high]. @loureiro2022fluctuations [@adlam2020understanding] provide insights into the fluctuations in risk of ensemble methods, demonstrating that the risk can be significantly reduced by appropriately choosing the ensemble size and the diversity of the predictors. They also highlight the importance of the correlation structure among the predictors in determining the risk of the ensemble. Recently, @du2023subsample show that the naive extension of the GCV estimator for ridge ensembles is generally inconsistent for finite ensemble size $M>1$, but it becomes consistent again when $M$ approaches infinity. The follow-up work of [@patil2023generalized] provides structural and risk equivalences between subsampling and ridge regularization. This work identifies an inflation factor of $\Delta / M$, for some finite bias term $\Delta$ (independent of $M$), between the risk of the ensemble at a given regularization level and full ridge at a larger "induced" regularization level. This inflation vanishes as $M \to \infty$.
Even though GCV is consistent for ridge regression $(M=1)$ or the infinite-ensemble ridge $(M = \infty)$, these two works suggest the naive extension of GCV for finite-ensemble ridge may be inconsistent for $1< M < \infty$. #### Comparison with sample splitting and out-of-bag risk estimation. In the context of ensemble learning, other types of cross-validation methods include the sample splitting CV and out-of-bag CV, which utilize held-out samples to estimate the risk. The sample-split cross-validation is the most common strategy for risk estimation [@patil2022bagging]. However, this approach presents several drawbacks. The estimator can suffer a loss of finite-sample efficiency due to the inherent sample splitting: $V$-fold CV estimates the risk of an estimator trained on $(V-1)n/V$ observations only instead of the risk of the estimator trained on the full dataset of size $n$. Figure 1 of @rad2018scalable provides a clear illustration of this drawback. Furthermore, the accuracy of the estimator can suffer when dealing with large subsample sizes. On the other hand, @lopes2019estimating [@lopes2020measuring; @du2023extrapolated] use out-of-bag risk estimates to extrapolate the risk estimation. Their estimators are more efficient and avoid sample splitting, but they do not perform well with large subsample sizes, as the out-of-bag sample size becomes small. In light of the aforementioned limitations of existing methods, our aim is to develop a risk estimator that fulfills certain desiderata. First, we would like an estimator that does not require sample splitting. This would address the issue of finite-sample efficiency faced by split cross-validation. Second, we aim to extend the applicability of GCV to accommodate any ensemble size. This would overcome the limitation of GCV being inconsistent for finite ensembles, as identified in [@du2023subsample].
In this paper, we present two intermediate risk estimators that fulfill both these desiderata, which ultimately lead to our final corrected GCV. ## Organization The rest of the paper is organized as follows. In , we set our notation and define the ensemble estimators and their prediction risks. In , we define our risk estimators and provide a qualitative comparison to other related risk estimators. In , we provide a finite-sample analysis under stronger assumptions on the feature and response distributions. In , we show that the risk estimators are consistent under mild assumptions on the feature and response distributions for ridge ensembles. concludes the paper and offers several follow-up directions. Proofs of all the theoretical results are included in the supplement, which also contains a summary of the notational conventions used throughout the paper. The source code for generating all of the experimental figures in this paper can be accessed at: [\[code repository\]](https://github.com/kaitan365/CorrectedGCV). # Background {#sec:preliminaries} Consider the standard supervised regression setting where we observe $n$ i.i.d. samples $\{(\bm{x}_i, y_i) : i \in [n] \}$ in $\mathbb{R}^{p} \times \mathbb{R}$, where $[n]$ stands for the index set $\{1, 2, \ldots, n\}$. Let $\bm{X}\in\mathbb{R}^{n\times p}$ denote the feature matrix whose $i$-th row contains $\bm{x}_{i}^\top$ and $\bm{y}\in\mathbb{R}^n$ denote the response vector whose $i$-th entry is $y_{i}$. ## Ensemble estimator and prediction risk For each $m\in [M]$, let $I_m$ be a non-empty subset of $[n]$. Let $(\bm{X}_{I_m}, \bm{y}_{I_m})$ denote the corresponding random design matrix and response vector associated with the subsampled dataset $\{(\bm{x}_i, y_i) : i \in I_m\}$. 
For each of the subsampled datasets $(\bm{X}_{I_m}, \bm{y}_{I_m})$ for $m \in [M]$, we consider a penalized least squares estimator: $$\label{eq:def-hbeta} {{\widehat{\bm{\beta}}}}_m \in \mathop{\mathrm{argmin}}_{\bm{b}\in \mathbb{R}^{p}} \frac{1}{2|I_m|} \sum_{i\in I_m} (y_i - \bm{x}_i^\top\bm{b})^2 + g_m(\bm{b}) = \mathop{\mathrm{argmin}}_{\bm{b}\in \mathbb{R}^{p}} \frac{1}{2|I_m|} \|\bm{L}_{I_m}(\bm{y}- \bm{X}\bm{b})\|_2^2 + g_m(\bm{b})$$ where $g_m:{{\mathbb{R}}}^p\to{{\mathbb{R}}}$ is a prespecified convex penalty and $\bm{L}_{I_m}$ is the diagonal projection matrix with entries $(\bm{L}_{I_m})_{ii}=I\{i\in I_m\}$. Following [@stein1981estimation], we define the degrees of freedom for the estimator $\widehat{\bm{\beta}}_m$ as: $$\label{eq:def-df} {\widehat{\mathsf{df}}}{}_m = \mathop{\mathrm{tr}}[(\partial/\partial\bm{y}) \bm{X}\widehat{\bm{\beta}}_m].$$ For estimator defined in [\[eq:def-hbeta\]](#eq:def-hbeta){reference-type="eqref" reference="eq:def-hbeta"} using the full data ${(\bm{x}_i, y_i)}_{i\in[n]}$ with particular choices of penalty function $g$, the quantity ${\widehat{\mathsf{df}}}{}$ has explicit expressions; see, for example, @zou2007degrees [@tibshirani2012degrees; @dossal2013degrees; @vaiter2012degrees; @vaiter2017degrees; @bellec2022derivatives]) among others. presents the known explicit formulae for the lasso, ridge, and elastic net. 
| **Estimator** ${{\widehat{\bm{\beta}}}}_m$ | **Regularizer** $g_m(\bm{b})$ | **Degrees of freedom** ${\widehat{\mathsf{df}}}{}_m$ |
|:---|:---|:---|
| Lasso | $\lambda\lVert\bm{b}\rVert_1$ | $\lvert\widehat S\rvert$ |
| Ridge | $\frac{\lambda}{2} \lVert\bm{b}\rVert_2^2$ | $\mathop{\mathrm{tr}}\big[\bm{X}_{I_m}\big(\bm{X}_{I_m}^\top\bm{X}_{I_m} + \lvert I_m\rvert\lambda \bm{I}\big)^{-1}\bm{X}_{I_m}^\top\big]$ |
| Elastic net | $\lambda_1\lVert\bm{b}\rVert_1 + \frac{\lambda_2}{2} \lVert\bm{b}\rVert_2^2$ | $\mathop{\mathrm{tr}}\big[\bm{X}_{\widehat S}\big(\bm{X}_{\widehat S}^\top\bm{X}_{\widehat S} + \lvert I_m\rvert\lambda_2\bm{I}\big)^{-1}\bm{X}^\top_{\widehat S}\big]$ |

: Explicit formulae for the degrees of freedom in [\[eq:def-df\]](#eq:def-df){reference-type="eqref" reference="eq:def-df"} of the estimator [\[eq:def-hbeta\]](#eq:def-hbeta){reference-type="eqref" reference="eq:def-hbeta"} for commonly used penalty functions. Here $\widehat S=\{j\in [p]:e_j^\top \widehat{\bm{\beta}}_m \ne 0\}$ is the set of active variables, and $\bm{X}_{\widehat S}$ is the submatrix of $\bm{X}_{I_m}$ consisting of the columns indexed by $\widehat S$. See [@zou2007degrees; @tibshirani2012degrees; @dossal2013degrees; @bellec2021second] among others.

The ensemble estimator constructed using [\[eq:def-hbeta\]](#eq:def-hbeta){reference-type="eqref" reference="eq:def-hbeta"} from subsample datasets $\{(\bm{X}_{I_m}, \bm{y}_{I_m})\}_{m\in[M]}$ is defined as: $$\begin{aligned} \label{eq:def-M-ensemble} {\widetilde{\bm{\beta}}}_{M}\bigl(\{I_m\}_{m=1}^M\bigr) &:= \frac{1}{M} \sum_{m=1}^M \widehat{\bm{\beta}}_m.\end{aligned}$$ For the sake of brevity, we will omit the dependency on $\{ I_{m} \}_{m\in[M]}$ for the ensemble estimator and simply write ${\widetilde{\bm{\beta}}}_M$ when it is clear from the context.
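The subsampled estimators and their average can be sketched in a few lines. The following is our own minimal illustration (assumed setup, not the authors' code) of building ${\widetilde{\bm{\beta}}}_M$ from ridge components fitted on random subsamples of size $k$.

```python
import numpy as np

# Minimal sketch: build the M-ensemble estimator tilde_beta_M by averaging ridge
# estimators fitted on random subsamples I_1, ..., I_M of size k.
rng = np.random.default_rng(1)
n, p, k, M, lam = 200, 40, 120, 4, 0.5
X = rng.standard_normal((n, p))
beta0 = rng.standard_normal(p) / np.sqrt(p)
y = X @ beta0 + rng.standard_normal(n)

def ridge_on_subsample(X, y, I, lam):
    # beta_hat_m solves: argmin_b ||y_I - X_I b||^2 / (2|I|) + (lam/2) ||b||_2^2
    XI, yI = X[I], y[I]
    return np.linalg.solve(XI.T @ XI + len(I) * lam * np.eye(X.shape[1]), XI.T @ yI)

subsets = [rng.choice(n, size=k, replace=False) for _ in range(M)]
betas = [ridge_on_subsample(X, y, I, lam) for I in subsets]
beta_ens = np.mean(betas, axis=0)  # tilde_beta_M = (1/M) sum_m beta_hat_m
```

Any other penalized component from the table above could be substituted for the ridge solve, as long as its degrees of freedom can also be computed.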
By the linearity property of the trace operator, the degrees of freedom of the ensemble estimator can be easily shown to be the average of the individual degrees of freedom: $$\begin{aligned} {\widetilde{\mathsf{df}}}{}_M = \frac{1}{M} \sum_{m=1}^M {\widehat{\mathsf{df}}}{}_{m}.% \label{eq:def-tdf}\end{aligned}$$ We assess the performance of the $M$-ensemble predictor ${\widetilde{\bm{\beta}}}_{M}$ via the conditional squared prediction risk: $$\begin{aligned} R_{M}&:=\mathbb{E}_{(\bm{x}_0,y_0)}\big[(y_0-\bm{x}_0^{\top} {\widetilde{\bm{\beta}}}_{M})^2\mid (\bm{X}, \bm{y}), \{I_{m}\}_{m = 1}^M\big], \label{eq:R_M}\end{aligned}$$ where $(\bm{x}_0, y_0)$ is an independent test point sampled from the same distribution as the training dataset $(\bm{X}, \bm{y})$. Note that the conditional risk $R_{M}$ is a scalar random variable that depends on both the dataset $(\bm{X}, \bm{y})$ and the random samples $\{I_{m}: m\in [M]\}$. Our goal is to construct estimators for $R_M$ that work for all choices of ensemble sizes $M$. ## GCV for ensemble estimator {#subsec:naive_gcv} Before we delve into our proposed estimators, let us first consider the ordinary Generalized Cross-Validation (GCV) estimator. Suppose we have a linear predictor ${\widehat{f}}(\bm{x}) = \bm{x}^{\top}\widehat{\bm{\beta}}$ where $\widehat{\bm{\beta}}$ is obtained through [\[eq:def-hbeta\]](#eq:def-hbeta){reference-type="eqref" reference="eq:def-hbeta"} based on the dataset $(\bm{X}, \bm{y})$. With the notation ${\widehat{\mathsf{df}}}{}$ in [\[df\]](#df){reference-type="eqref" reference="df"}, the ordinary GCV estimator for the prediction risk of this estimate $\widehat{\bm{\beta}}$ is defined as: $$\label{eq:ogcv} \widehat{R}^\textup{\text{gcv}}= \frac{\|\bm{y}- \bm{X}\widehat{\bm{\beta}} \|_2^2 / n}{(1 - {\widehat{\mathsf{df}}}{}/n)^2}.$$ The numerator of the GCV estimator is the training error, which is typically biased downwards (smaller than the true prediction risk).
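As a concrete illustration, the ordinary GCV estimator above takes a few lines for a single ridge fit, using the hat-matrix trace for ${\widehat{\mathsf{df}}}{}$. This is a hedged sketch under an assumed simulation setup; the variable names are ours.

```python
import numpy as np

# Sketch of the ordinary GCV estimator: degrees-of-freedom adjusted training error
# for a single ridge fit on the full data.
rng = np.random.default_rng(2)
n, p, lam = 300, 60, 0.4
X = rng.standard_normal((n, p))
y = X @ (rng.standard_normal(p) / np.sqrt(p)) + rng.standard_normal(n)

A = np.linalg.solve(X.T @ X + n * lam * np.eye(p), X.T)  # (p, n) solve matrix
beta_hat = A @ y
df = np.trace(X @ A)                       # df_hat = trace of the hat matrix
train_err = np.sum((y - X @ beta_hat) ** 2) / n
gcv = train_err / (1 - df / n) ** 2        # R_hat^gcv

print(gcv >= train_err)  # the adjustment inflates the optimistic training error
```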
On the other hand, the denominator of the GCV estimator accounts for this training "optimism". Note that this definition of the degree-of-freedom adjusted estimator coincides with the classic definition of GCV for linear smoothers [@golub_heath_wabha_1979; @craven_wahba_1979], where the trace of the smoothing matrix represents the degrees of freedom of the linear predictor. Existing literature provides a rich theoretical background for the consistency and efficiency of the ordinary GCV estimator. For instance, its behavior has been analyzed by @bayati2011lasso [@bayati2013estimating; @miolane2021distribution; @celentano2020lasso] for the lasso and in Section 3 of @bellec2020out for penalized M-estimators. For ridge regression, [@patil2021uniform] show the uniform consistency of the ordinary GCV even in the overparameterized regime when the feature size $p$ is larger than the sample size $n$. Beyond ridge regression, the estimator [\[eq:gcv-naive\]](#eq:gcv-naive){reference-type="eqref" reference="eq:gcv-naive"} is shown to be consistent for other convex penalized estimators under suitable design conditions; see, for example, [@bayati2011lasso; @bayati2013estimating; @bellec2020out]. Two natural extensions of GCV for bagging can be considered. For the first extension ([\[eq:gcv-naive\]](#eq:gcv-naive){reference-type="eqref" reference="eq:gcv-naive"} below), the subsamples $I_1,\dots,I_M$ are sampled without looking at the data, and subsequently only the training data $(\bm{x}_i,y_i)_{i\in I_{1:M}}$ is used for training, where $I_{1:M}=\cup_{m=1}^MI_{m}$.
Then, since the training data is $(\bm{x}_i,y_i)_{i\in I_{1:M}}$, a natural extension of the ordinary GCV for the ensemble estimator [\[eq:def-M-ensemble\]](#eq:def-M-ensemble){reference-type="eqref" reference="eq:def-M-ensemble"} is the following estimator: $$\begin{aligned} \frac{\|\bm{L}_{I_{1:M}} (\bm{y}-\bm{X}{\widetilde{\bm{\beta}}}_{M})\|_2^2 / |I_{1:M}| }{(1 - {\widetilde{\mathsf{df}}}{}_M / |I_{1:M}| )^2}, \label{eq:gcv-naive}\end{aligned}$$ where $\bm{L}_I$ is a diagonal matrix whose $i$-th diagonal entry is one if $i\in I$ and zero otherwise. In the special case when the ensemble size is $M = 1$ or the subsample size is $k = n$, the definition reduces to the ordinary GCV [\[eq:ogcv\]](#eq:ogcv){reference-type="eqref" reference="eq:ogcv"}, for which consistency has been established under various settings. For ensemble size $M>1$, there is less understanding of the behavior of this estimator for general predictors. However, it is shown to be inconsistent when the ensemble size is $M=2$ for ridge predictors [@du2023subsample]. The inconsistency largely arises because, for a finite ensemble size $M$, the residuals computed from the bagged predictor mix non-negligible fractions of in-sample and out-of-sample observations, yet all of them are treated equally. For the second extension, we use all of the data $(\bm{x}_i,y_i)_{i\in[n]}$, even if some $i\in[n]$ does not belong to any of the subsamples $I_1,\ldots,I_M$. This leads to the ensemble GCV: $$\begin{aligned} \widetilde{R}_M^{\textup{\text{gcv}}} = \frac{\| (\bm{y}-\bm{X}{\widetilde{\bm{\beta}}}_{M})\|_2^2 / n }{(1 - {\widetilde{\mathsf{df}}}{}_M / n )^2}.
\label{eq:fgcv-naive}\end{aligned}$$ As $M\to \infty$ with uniformly sampled $I_1,\ldots,I_M$, the union $I_{1:M}$ closely approaches the full set $[n]$ and the difference between [\[eq:gcv-naive\]](#eq:gcv-naive){reference-type="eqref" reference="eq:gcv-naive"} and [\[eq:fgcv-naive\]](#eq:fgcv-naive){reference-type="eqref" reference="eq:fgcv-naive"} vanishes. Both [\[eq:gcv-naive\]](#eq:gcv-naive){reference-type="eqref" reference="eq:gcv-naive"} and [\[eq:fgcv-naive\]](#eq:fgcv-naive){reference-type="eqref" reference="eq:fgcv-naive"} are consistent as $M$ tends to infinity for ridge predictors [@du2023subsample]; namely, $\widetilde{R}_{\infty}^{\textup{\text{gcv}}} \xrightarrow{\textup{p}}R_{\infty}$. For finite ensemble sizes, however, [\[eq:fgcv-naive\]](#eq:fgcv-naive){reference-type="eqref" reference="eq:fgcv-naive"} is generally preferred over [\[eq:gcv-naive\]](#eq:gcv-naive){reference-type="eqref" reference="eq:gcv-naive"} because one may gain efficiency from using more observations (except when the ensemble size is extremely large, so that the two estimators nearly coincide, or when no ensembling is performed). As we will show in , the ensemble GCV estimator, as defined in [\[eq:fgcv-naive\]](#eq:fgcv-naive){reference-type="eqref" reference="eq:fgcv-naive"}, is generally inconsistent for finite ensemble sizes. The primary goal of this paper is to fully understand this phenomenon, correct the inconsistency of the GCV estimate [\[eq:fgcv-naive\]](#eq:fgcv-naive){reference-type="eqref" reference="eq:fgcv-naive"}, and develop a risk estimator that is consistent for all ensemble sizes $M$.
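The ensemble GCV just displayed combines the residuals of the averaged predictor on all $n$ observations with the averaged degrees of freedom ${\widetilde{\mathsf{df}}}{}_M$. A minimal sketch (our own assumed simulation setup, ridge components) of computing this quantity:

```python
import numpy as np

# Sketch of the (naive) ensemble GCV: residuals of the averaged predictor on all n
# observations, with tilde_df_M the average of the component degrees of freedom.
rng = np.random.default_rng(3)
n, p, k, M, lam = 400, 80, 240, 3, 0.5
X = rng.standard_normal((n, p))
y = X @ (rng.standard_normal(p) / np.sqrt(p)) + rng.standard_normal(n)

betas, dfs = [], []
for _ in range(M):
    I = rng.choice(n, size=k, replace=False)
    A = np.linalg.solve(X[I].T @ X[I] + k * lam * np.eye(p), X[I].T)
    betas.append(A @ y[I])
    dfs.append(np.trace(X[I] @ A))         # df_hat_m for the m-th ridge component

beta_ens = np.mean(betas, axis=0)          # tilde_beta_M
df_ens = np.mean(dfs)                      # tilde_df_M = (1/M) sum_m df_hat_m
gcv_ens = (np.sum((y - X @ beta_ens) ** 2) / n) / (1 - df_ens / n) ** 2
```

As discussed above, this quantity is generally a biased estimate of $R_M$ for $1 < M < \infty$; the correction of the next section is designed to remove that bias.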
# Proposed GCV correction {#sec:risk-estimators} In this section, we aim to address the limitations of the naive GCV estimators [\[eq:gcv-naive\]](#eq:gcv-naive){reference-type="eqref" reference="eq:gcv-naive"} and [\[eq:fgcv-naive\]](#eq:fgcv-naive){reference-type="eqref" reference="eq:fgcv-naive"}, which, as we demonstrate, fail to produce consistent estimates for the true prediction risk [\[eq:R_M\]](#eq:R_M){reference-type="eqref" reference="eq:R_M"} under finite ensemble sizes $M > 1$. By introducing term-by-term adjustments for each term of the decomposition [\[eq:risk-decomposition\]](#eq:risk-decomposition){reference-type="eqref" reference="eq:risk-decomposition"} below, we derive corrected and consistent risk estimators that hold for any ensemble size $M$. ## Intermediate risk estimators {#sec:intermediate_estimators} We begin by considering the decomposition of [\[eq:R_M\]](#eq:R_M){reference-type="eqref" reference="eq:R_M"} into its constituent components. The motivation behind our proposed risk estimators stems from this decomposition. It is easy to see that the risk of the $M$-ensemble estimator can be decomposed as follows: $$\begin{aligned} R_{M} &= \frac{1}{M^2} \sum_{m, \ell \in [M]} \mathbb{E}_{\bm{x}_0, y_0}\big[(y_0 - \bm{x}_0^\top {{\widehat{\bm{\beta}}}}_m ) (y_0 - \bm{x}_0^\top {{\widehat{\bm{\beta}}}}_\ell ) \mid (\bm{X}, \bm{y}), \{ I_m, I_\ell \}\big] \nonumber \\ &= \frac{1}{M^2}\sum_{m, \ell \in [M]} \underbrace{(\widehat{\bm{\beta}}_{m} - \bm{\beta}_0)^\top \bm{\Sigma}(\widehat{\bm{\beta}}_{\ell} - \bm{\beta}_0) + \sigma^2}_{R_{m, \ell}}. \label{eq:risk-decomposition}\end{aligned}$$ Here, $\bm{\beta}_0 := \mathbb{E}[\bm{x}_0 \bm{x}_0^{\top}]^{-1} \mathbb{E}[\bm{x}_0 y_0]$ denotes the coefficient vector of linear projection of $y_0$ onto $\bm{x}_0$, and $\sigma^2 := \mathbb{E}[(y_0 - \bm{x}_0^\top \bm{\beta}_0)^2]$ denotes the energy in the residual component in the response. 
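The decomposition above is an algebraic identity, which can be checked numerically. The following is our own verification sketch (assumed isotropic Gaussian design with known $\bm{\Sigma}$, $\bm{\beta}_0$, $\sigma^2$): the direct ensemble risk equals the average of the $M^2$ cross terms $R_{m,\ell}$.

```python
import numpy as np

# Numerical check of the risk decomposition: R_M = (1/M^2) sum_{m,l} R_{m,l}, where
# R_{m,l} = (beta_m - beta0)^T Sigma (beta_l - beta0) + sigma^2.
rng = np.random.default_rng(4)
n, p, k, M, lam, sigma2 = 300, 50, 180, 3, 0.5, 1.0
Sigma = np.eye(p)                          # isotropic features for simplicity
X = rng.standard_normal((n, p))
beta0 = rng.standard_normal(p) / np.sqrt(p)
y = X @ beta0 + np.sqrt(sigma2) * rng.standard_normal(n)

betas = []
for _ in range(M):
    I = rng.choice(n, size=k, replace=False)
    betas.append(np.linalg.solve(X[I].T @ X[I] + k * lam * np.eye(p), X[I].T @ y[I]))

# Direct risk of the ensemble: (tilde_beta - beta0)^T Sigma (tilde_beta - beta0) + sigma^2.
d = np.mean(betas, axis=0) - beta0
R_direct = d @ Sigma @ d + sigma2

# Average of the M^2 component terms R_{m,l} from the decomposition.
R_decomp = np.mean([[(bm - beta0) @ Sigma @ (bl - beta0) + sigma2
                     for bl in betas] for bm in betas])

print(R_direct, R_decomp)  # the two quantities coincide
```

The practical difficulty, of course, is that $\bm{\beta}_0$, $\bm{\Sigma}$, and $\sigma^2$ are unknown; the intermediate estimators below target each $R_{m,\ell}$ from data alone.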
As indicated in [\[eq:risk-decomposition\]](#eq:risk-decomposition){reference-type="eqref" reference="eq:risk-decomposition"}, we will denote the component in the risk decomposition corresponding to $\widehat{\bm{\beta}}_m$ and $\widehat{\bm{\beta}}_{\ell}$ by $R_{m,\ell}$. The fundamental idea behind our risk estimators is to estimate each component $R_{m,\ell}$ individually. The final risk estimator is then the average of these estimated risk components over all pairs. We propose an adjustment for each term $R_{m,\ell}$ in the risk decomposition, which leads to a $\sqrt n$-consistent estimator. This adjustment is a key component of our proposed risk estimators and contributes to their improved performance over the ordinary GCV estimator. Before we present the corrected GCV estimators, we first introduce two intermediate estimators that give rise to our final correction. #### Estimator using overlapping observations. Our first estimator is defined analogously to $R_M$, with each $R_{m, \ell}$ replaced by its estimate $\widehat{R}^\textup{\textrm{ovlp}}_{m, \ell}$ defined below: $$\begin{aligned} \widetilde{R}_{M}^{\textup{\textrm{ovlp}}} = \frac{\displaystyle 1}{\displaystyle M^2}\sum_{m,\ell\in[M]} \widehat{R}^\textup{\textrm{ovlp}}_{m, \ell}, \quad \widehat{R}^\textup{\textrm{ovlp}}_{m, \ell} = \frac{ (\bm{y}_{I_m\cap I_\ell} - \bm{X}_{I_m\cap I_\ell}\widehat{\bm{\beta}}_{m})^{\top}(\bm{y}_{I_m\cap I_\ell} - \bm{X}_{I_m\cap I_\ell}\widehat{\bm{\beta}}_{\ell}) / | I_m \cap I_\ell | } { {(1 - {{\widehat{\mathsf{df}}}{}_m} / {|I_m|}) (1 - {{\widehat{\mathsf{df}}}{}_\ell}/ {|I_\ell|})} }. \label{eq:risk-decomp-est-1}\end{aligned}$$ Here, the superscript "ovlp" is used because the numerator of $\widehat{R}^\textup{\textrm{ovlp}}_{m, \ell}$ uses only the overlapping observations of subsets $I_m$ and $I_{\ell}$ when computing the residuals. In contrast, our next estimator will use all the available observations when computing similar residuals.
The intuition behind the ovlp-estimator [\[eq:risk-decomp-est-1\]](#eq:risk-decomp-est-1){reference-type="eqref" reference="eq:risk-decomp-est-1"} is that the summand in [\[eq:risk-decomp-est-1\]](#eq:risk-decomp-est-1){reference-type="eqref" reference="eq:risk-decomp-est-1"} is consistent for the corresponding summand in [\[eq:risk-decomposition\]](#eq:risk-decomposition){reference-type="eqref" reference="eq:risk-decomposition"}. This result is already known for $m=\ell$ under suitable conditions; see, for example, [@bellec2020out] and references therein. In the present paper, we extend this consistency result from $m = \ell$ to arbitrary pairs $(m,\ell) \in [M]^2$; see for a formal statement. **Remark 1** (Special cases of the ovlp-estimator). We consider two special cases under equal subsample sizes with $|I_m| = k$ for all $m\in[M]$: (a) When $k = n$, the estimator in [\[eq:risk-decomp-est-1\]](#eq:risk-decomp-est-1){reference-type="eqref" reference="eq:risk-decomp-est-1"} matches the ordinary GCV estimator [\[eq:ogcv\]](#eq:ogcv){reference-type="eqref" reference="eq:ogcv"}, which is consistent. (b) When $M = 1$, notice that $\widetilde{R}_{M}^{\textup{\textrm{ovlp}}}$ is equal to the GCV in [\[eq:gcv-naive\]](#eq:gcv-naive){reference-type="eqref" reference="eq:gcv-naive"} restricted to the $k$ subsampled observations. Thus, [\[eq:risk-decomp-est-1\]](#eq:risk-decomp-est-1){reference-type="eqref" reference="eq:risk-decomp-est-1"} is consistent for $M = 1$. It is worth noting that the definition of the ovlp-estimator implicitly requires $| I_m \cap I_\ell |>0$, which necessitates a large enough subsample size $k$ to guarantee that the overlapping sets $I_m\cap I_\ell$ are not empty. Furthermore, when the regularization is near zero (e.g., in ridgeless estimators), the term $1 - {\widehat{\mathsf{df}}}{}_m/|I_m|$ in the denominator can approach zero, further degrading the estimator's performance.
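For concreteness, a minimal sketch of the ovlp-estimator for a single pair $(m,\ell)$ follows; the ridge base estimator and isotropic Gaussian data (so that the estimand $R_{m,\ell}$ has the explicit form with $\bm{\Sigma}=\bm{I}$ and $\sigma^2=1$) are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, k, lam = 400, 40, 260, 1.0
beta0 = rng.normal(size=p) / np.sqrt(p)
X = rng.normal(size=(n, p))
y = X @ beta0 + rng.normal(size=n)     # Sigma = I, sigma^2 = 1

def ridge(I):
    """Ridge fit on subsample I; returns coefficients and degrees of freedom."""
    XI = X[I]
    G = np.linalg.inv(XI.T @ XI + lam * len(I) * np.eye(p))
    return G @ XI.T @ y[I], np.trace(XI @ G @ XI.T)

I_m = rng.choice(n, size=k, replace=False)
I_l = rng.choice(n, size=k, replace=False)
(beta_m, df_m), (beta_l, df_l) = ridge(I_m), ridge(I_l)

J = np.intersect1d(I_m, I_l)           # overlapping observations of I_m and I_l
rm = y[J] - X[J] @ beta_m              # residuals of each fit on the overlap
rl = y[J] - X[J] @ beta_l
R_ovlp = (rm @ rl / len(J)) / ((1 - df_m / k) * (1 - df_l / k))

# The estimand: R_{m,l} = (beta_m - beta0)' Sigma (beta_l - beta0) + sigma^2.
R_ml = (beta_m - beta0) @ (beta_l - beta0) + 1.0
```

Here `R_ovlp` targets `R_ml`; the residual inner product on the overlap is deflated by the product of the two GCV-style factors, exactly as in the display above.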
To address these shortcomings, it is beneficial to use data from both $I_m$ and $I_{\ell}$, as well as other observations, to estimate the individual terms in the decomposition [\[eq:risk-decomposition\]](#eq:risk-decomposition){reference-type="eqref" reference="eq:risk-decomposition"}. This motivates us to construct the next estimator. #### Estimator using all observations. We propose the following estimator, which estimates each $R_{m,\ell}$ using all the data $(\bm{X}, \bm{y})$. The expression is given below: $$\begin{aligned} \widetilde{R}_M^{\textup{\textrm{full}}} &= \frac{\displaystyle 1}{\displaystyle M^2} \sum_{m, \ell\in [M]}\widehat{R}_{m,\ell}^{\textup{\textrm{full}}}, \quad \qquad \widehat{R}_{m,\ell}^{\textup{\textrm{full}}} = \frac{ (\bm{y}- \bm{X}\widehat{\bm{\beta}}_{m})^\top (\bm{y}- \bm{X}\widehat{\bm{\beta}}_{\ell}) / n}{1 - {{\widehat{\mathsf{df}}}{}_m} / {n} - {{\widehat{\mathsf{df}}}{}_{\ell}} / {n} + ( {\widehat{\mathsf{df}}}{}_{m} {\widehat{\mathsf{df}}}{}_{\ell} / |I_m| |I_{\ell}| ) \cdot | I_m \cap I_\ell | / {n} }. \label{eq:risk-decomp-est-2}\end{aligned}$$ In contrast to the "ovlp" estimator in [\[eq:risk-decomp-est-1\]](#eq:risk-decomp-est-1){reference-type="eqref" reference="eq:risk-decomp-est-1"}, we use superscript "full" in [\[eq:risk-decomp-est-2\]](#eq:risk-decomp-est-2){reference-type="eqref" reference="eq:risk-decomp-est-2"}, which refers to the risk estimator that uses all the available observations ($\bm{y}$ and $\bm{X}$ are used in the numerator to compute residuals). Because the two intermediate estimators use different proportions of the samples to quantify the optimism, they require different degrees of freedom adjustments presented in the denominators. Compared to $\widetilde{R}^{\textup{\textrm{ovlp}}}_M$, the full-estimator offers several advantages. Firstly, it utilizes all available data, which is beneficial, especially when the subsample sizes $|I_m|$ and $|I_{\ell}|$ are small or the union $I_m \cup I_\ell$ covers only a limited part of the data.
Secondly, its denominator is lower bounded by a positive constant, ensuring numerical stability, especially for base estimators with regularization close to zero. It is worth noting that hybrid estimators can be constructed that utilize data points in a range between the extreme cases of the intersection and the full set of observations. We will not focus on such hybrid strategies in this paper. **Remark 2** (Special cases of the full-estimator). As with , we consider two special cases: (a) When $k = n$, observe from [\[eq:est-full-M=1_1\]](#eq:est-full-M=1_1){reference-type="eqref" reference="eq:est-full-M=1_1"} that [\[eq:risk-decomp-est-2\]](#eq:risk-decomp-est-2){reference-type="eqref" reference="eq:risk-decomp-est-2"} matches [\[eq:fgcv-naive\]](#eq:fgcv-naive){reference-type="eqref" reference="eq:fgcv-naive"}, which is consistent, as argued previously. (b) When $M = 1$, the full-estimator in [\[eq:risk-decomp-est-2\]](#eq:risk-decomp-est-2){reference-type="eqref" reference="eq:risk-decomp-est-2"} reduces to: $$\begin{aligned} \widetilde{R}_{1}^{\textup{\textrm{full}}} &= \frac{ (\bm{y}- \bm{X}{\widetilde{\bm{\beta}}}_{1})^\top (\bm{y}- \bm{X}{\widetilde{\bm{\beta}}}_{1})/n}{{1 - 2{{\widetilde{\mathsf{df}}}{}_1}/n + ({\widetilde{\mathsf{df}}}{}_{1})^2/(kn)}} \label{eq:est-full-M=1_1} \\ &= \frac{ (\bm{y}- \bm{X}{\widetilde{\bm{\beta}}}_{1})^\top \bm{L}_{1} (\bm{y}- \bm{X}{\widetilde{\bm{\beta}}}_{1}) / n + (\bm{y}- \bm{X}{\widetilde{\bm{\beta}}}_{1})^\top \bm{L}_{1^c} (\bm{y}- \bm{X}{\widetilde{\bm{\beta}}}_{1}) / n}{ 1-2{{\widetilde{\mathsf{df}}}{}}_1/n + ({{\widetilde{\mathsf{df}}}{}}_1)^2/(kn) }, \label{eq:est-full-M=1_2}\end{aligned}$$ where $\bm{L}_{1^c}=\bm{I}_n-\bm{L}_{1}$ and $\bm{L}_1 = \bm{L}_{I_1}$ for brevity.
To see that [\[eq:est-full-M=1_1\]](#eq:est-full-M=1_1){reference-type="eqref" reference="eq:est-full-M=1_1"} is consistent for any subsample size $k$, note that the consistency of ordinary GCV for a single base predictor without bagging, applied to the observations $(\bm{x}_i, y_i)_{i\in I_1}$ with sample size $k=|I_1|$ (Section 3 of @bellec2020out), yields $$\label{eq:est-full-M=1_numer1} \frac{ (\bm{y}- \bm{X}{\widetilde{\bm{\beta}}}_{1})^\top \bm{L}_{1} (\bm{y}- \bm{X}{\widetilde{\bm{\beta}}}_{1}) / k } { (1 - {{\widetilde{\mathsf{df}}}{}_1} / {k})^2 R_1 } \xrightarrow{\textup{p}} 1 ,$$ where $R_1$ is the risk of ${\widetilde{\bm{\beta}}}_1$. Moreover, by the law of large numbers as $(n-k)\to\infty$, $$\label{eq:est-full-M=1_numer2} \frac{ (\bm{y}- \bm{X}{\widetilde{\bm{\beta}}}_{1})^\top \bm{L}_{1^c} (\bm{y}- \bm{X}{\widetilde{\bm{\beta}}}_{1})}{ R_1 (n - k) } \xrightarrow{\textup{p}}1.$$ Writing [\[eq:est-full-M=1_numer1\]](#eq:est-full-M=1_numer1){reference-type="eqref" reference="eq:est-full-M=1_numer1"} and [\[eq:est-full-M=1_numer2\]](#eq:est-full-M=1_numer2){reference-type="eqref" reference="eq:est-full-M=1_numer2"} as $1+o_{\mathbb{P}}(1)$ and substituting into [\[eq:est-full-M=1_2\]](#eq:est-full-M=1_2){reference-type="eqref" reference="eq:est-full-M=1_2"}, we get that $$\begin{aligned} \frac{\widetilde{R}_1^\textup{\textrm{full}}}{R_1} &= \frac{[1+o_{\mathbb{P}}(1)]\cdot (1 - {{\widetilde{\mathsf{df}}}{}_1} / {k})^2 \cdot ({k} / {n}) + [1+o_{\mathbb{P}}(1)]\cdot ({n-k})/{n}}{ {1-2{{\widetilde{\mathsf{df}}}{}}_1/n + ({{\widetilde{\mathsf{df}}}{}}_1)^2/(kn)} }\xrightarrow{\textup{p}}1.\end{aligned}$$ ## Corrected GCV for ensembles Though the two intermediate estimators are consistent, the nature of the term-by-term correction requires enumeration over all pairs $m,\ell\in[M]$. This quadratic complexity in $M$ makes them computationally impractical for large $M$.
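The pairwise enumeration just described can be sketched as follows for the full-estimator; the ridge base estimator and the data-generating process are illustrative assumptions, and the double loop over $(m,\ell)$ is exactly the quadratic cost in $M$:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(2)
n, p, k, M, lam = 400, 40, 200, 4, 1.0
beta0 = rng.normal(size=p) / np.sqrt(p)
X = rng.normal(size=(n, p))
y = X @ beta0 + rng.normal(size=n)

def ridge(I):
    XI = X[I]
    G = np.linalg.inv(XI.T @ XI + lam * len(I) * np.eye(p))
    return G @ XI.T @ y[I], np.trace(XI @ G @ XI.T)

subsets = [rng.choice(n, size=k, replace=False) for _ in range(M)]
fits = [ridge(I) for I in subsets]
resid = [y - X @ b for b, _ in fits]   # full-sample residuals of each fit

def R_full(m, l):
    """Full-estimator of the (m, l) risk component."""
    df_m, df_l = fits[m][1], fits[l][1]
    ovlp = len(np.intersect1d(subsets[m], subsets[l]))
    denom = 1 - df_m / n - df_l / n + (df_m * df_l / (k * k)) * ovlp / n
    return (resid[m] @ resid[l] / n) / denom

# The double loop over (m, l) is the quadratic cost in M.
R_M_full = np.mean([R_full(m, l) for m, l in product(range(M), repeat=2)])
```

The corrected GCV estimator introduced next replaces this $M^2$ enumeration with a single pass over the $M$ diagonal terms.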
To mitigate the computational drawbacks of the intermediate estimators, we further propose a corrected GCV estimator as below. ![ Illustration of the corrected generalized cross-validation for subsampled ensembles with equal subsample sizes $k$. ](figures/cgcv-illustration-4.pdf){#fig:cgcv-illustration width="95%"} **Definition 1** (Corrected GCV for ensemble estimator). The corrected GCV estimator for ensemble size $M$ is defined as: $$\begin{aligned} \label{eq:cgcv-proposal-sub} \widetilde{R}^{\textup{\textrm{cgcv}},\textup{\textrm{ovlp}}}_{M} = \underbrace{\frac{ \| \bm{y}- \bm{X}{\widetilde{\bm{\beta}}}_{M} \|_2^2/n}{(1-{\widetilde{\mathsf{df}}}{}_M/n)^2}}_{\widetilde{R}_M^{\textup{\text{gcv}}}} - \underbrace{\frac{1}{M} \Biggl\{ \frac{({\widetilde{\mathsf{df}}}{}_M/n)^2}{(1-{\widetilde{\mathsf{df}}}{}_M/n)^2} \frac{1}{M}\sum_{m \in [M]} \bigg(\frac{n}{|I_m|}-1\bigg) \widehat{R}_{m, m}^\textup{\textrm{ovlp}}\Biggr\}}_{\mathrm{correction}}. \end{aligned}$$ We will also consider the alternative expression: $$\begin{aligned} \label{eq:cgcv-proposal} \widetilde{R}^{\textup{\textrm{cgcv}},\textup{\textrm{full}}}_{M} = \underbrace{\frac{ \| \bm{y}- \bm{X}{\widetilde{\bm{\beta}}}_{M} \|_2^2/n}{(1-{\widetilde{\mathsf{df}}}{}_M/n)^2}}_{\widetilde{R}_M^{\textup{\text{gcv}}}} - \underbrace{\frac{1}{M} \Biggl\{ \frac{({\widetilde{\mathsf{df}}}{}_M/n)^2}{(1-{\widetilde{\mathsf{df}}}{}_M/n)^2} \frac{1}{M}\sum_{m \in [M]} \bigg(\frac{n}{|I_m|}-1\bigg) \widehat{R}_{m, m}^\textup{\textrm{full}}\Biggr\}}_{\mathrm{correction}}, \end{aligned}$$ where the only difference is the use of $\widehat{R}_{m,m}^\textup{\textrm{full}}$ instead of $\widehat{R}^{\textup{\textrm{ovlp}}}_{m,m}$ in the rightmost sum in [\[eq:cgcv-proposal\]](#eq:cgcv-proposal){reference-type="eqref" reference="eq:cgcv-proposal"}. 
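A direct transcription of $\widetilde{R}^{\textup{\textrm{cgcv}},\textup{\textrm{full}}}_{M}$ follows, using only the $M$ diagonal components $\widehat{R}^{\textup{\textrm{full}}}_{m,m}$ in the correction; the ridge base estimator and the data-generating process are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, k, M, lam = 400, 40, 200, 4, 1.0
beta0 = rng.normal(size=p) / np.sqrt(p)
X = rng.normal(size=(n, p))
y = X @ beta0 + rng.normal(size=n)

def ridge(I):
    XI = X[I]
    G = np.linalg.inv(XI.T @ XI + lam * len(I) * np.eye(p))
    return G @ XI.T @ y[I], np.trace(XI @ G @ XI.T)

subsets = [rng.choice(n, size=k, replace=False) for _ in range(M)]
fits = [ridge(I) for I in subsets]
beta_bar = np.mean([b for b, _ in fits], axis=0)
df_bar = np.mean([df for _, df in fits])            # averaged df

def R_full_diag(m):
    """Diagonal full-estimator for R_{m,m}; the overlap of I_m with itself is k."""
    b, df = fits[m]
    r = y - X @ b
    denom = 1 - 2 * df / n + df ** 2 / (k * n)
    return (r @ r / n) / denom

gcv = (np.sum((y - X @ beta_bar) ** 2) / n) / (1 - df_bar / n) ** 2
factor = (df_bar / n) ** 2 / (1 - df_bar / n) ** 2
correction = factor * (n / k - 1) / M ** 2 * sum(R_full_diag(m) for m in range(M))
cgcv = gcv - correction
```

Since $k < n$ here, the correction is strictly positive, so `cgcv` always sits below the naive GCV value, in line with the discussion below.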
The evaluation of [\[eq:cgcv-proposal\]](#eq:cgcv-proposal){reference-type="eqref" reference="eq:cgcv-proposal"} (and [\[eq:cgcv-proposal-sub\]](#eq:cgcv-proposal-sub){reference-type="eqref" reference="eq:cgcv-proposal-sub"}) only requires computing three components: (1) the full training of the ensemble estimator; (2) the average ${\widetilde{\mathsf{df}}}{}_M$ defined in [\[eq:def-tdf\]](#eq:def-tdf){reference-type="eqref" reference="eq:def-tdf"}; and (3) the correction term that is built on the intermediate estimators. More importantly, the proposed estimator can be evaluated in time that grows only linearly in $M$. See for an illustration of computing [\[eq:cgcv-proposal\]](#eq:cgcv-proposal){reference-type="eqref" reference="eq:cgcv-proposal"}. The correction term in [\[eq:cgcv-proposal\]](#eq:cgcv-proposal){reference-type="eqref" reference="eq:cgcv-proposal"} is derived from the insights gained through the intermediate estimators $\widetilde{R}_{M}^{\textup{\textrm{ovlp}}}$ and $\widetilde{R}_M^{\textup{\textrm{full}}}$. It allows us to alleviate the computational challenges posed by the pairwise enumeration in the intermediate estimators, leading to an estimator that retains the advantages of both while being more computationally tractable. While the correction is defined with respect to GCV in [\[eq:fgcv-naive\]](#eq:fgcv-naive){reference-type="eqref" reference="eq:fgcv-naive"}, one can also derive another correction with respect to [\[eq:gcv-naive\]](#eq:gcv-naive){reference-type="eqref" reference="eq:gcv-naive"}. We prefer to present the correction with respect to [\[eq:fgcv-naive\]](#eq:fgcv-naive){reference-type="eqref" reference="eq:fgcv-naive"} because it leads to a slightly simpler expression and because, in this case, the correction is always positive.
Our theoretical results in the next section show that the corrected GCV estimator [\[eq:cgcv-proposal\]](#eq:cgcv-proposal){reference-type="eqref" reference="eq:cgcv-proposal"} is consistent (), and that the naive extension of GCV in [\[eq:fgcv-naive\]](#eq:fgcv-naive){reference-type="eqref" reference="eq:fgcv-naive"} *over-estimates the risk* with positive probability (). **Remark 3** (Special cases of corrected GCV). As with , we consider two special cases: (a) When $k = n$, observe that the correction term is zero and the first term $\widetilde{R}_1^\textup{\text{gcv}}$ is consistent as argued previously; thus, $\widetilde{R}^{\textup{\textrm{cgcv}},\textup{\textrm{full}}}_M$ is consistent. (b) When $M = 1$, it is also not hard to see that $\widetilde{R}^{\textup{\textrm{cgcv}},\textup{\textrm{full}}}_1$ is consistent. As argued in , both $\widehat{R}^\textup{\textrm{ovlp}}_{1,1}$ and $\widehat{R}^\textup{\textrm{full}}_{1,1}$ are consistent. Now writing [\[eq:est-full-M=1_numer1\]](#eq:est-full-M=1_numer1){reference-type="eqref" reference="eq:est-full-M=1_numer1"} and [\[eq:est-full-M=1_numer2\]](#eq:est-full-M=1_numer2){reference-type="eqref" reference="eq:est-full-M=1_numer2"} as $1+o_{\mathbb{P}}(1)$, $$\frac{\widetilde{R}^{\textup{\textrm{cgcv}},\textup{\textrm{full}}}_1}{R_1} = \frac{ [1+o_{\mathbb{P}}(1)] (1 - {\widetilde{\mathsf{df}}}{}_1/k)^2 {k}/{n} + [1+o_{\mathbb{P}}(1)] ({n-k})/{n} }{(1- {\widetilde{\mathsf{df}}}{}_1/n)^2} - \frac{ [1+o_{\mathbb{P}}(1)] ({\widetilde{\mathsf{df}}}{}_1/n)^2 ({n-k})/{k}}{(1- {\widetilde{\mathsf{df}}}{}_1/n)^2} \xrightarrow{\textup{p}}1.$$ Here, the last step follows from the identity $$\begin{aligned} \frac{k}{n}(1 - {\widetilde{\mathsf{df}}}{}_1/k)^2 + \frac{n-k}{n} - \frac{n-k}{k}({\widetilde{\mathsf{df}}}{}_1/n)^2 &= 1 - 2 {\widetilde{\mathsf{df}}}{}_1/n + {\widetilde{\mathsf{df}}}{}_1^2/(kn) - \frac{n-k}{k}({\widetilde{\mathsf{df}}}{}_1/n)^2 \\ &= 1 - 2 {\widetilde{\mathsf{df}}}{}_1/n + {\widetilde{\mathsf{df}}}{}_1^2/n^2 = (1-
{\widetilde{\mathsf{df}}}{}_1/n)^2.\end{aligned}$$ Our goal in the rest of the paper is to establish consistency properties of corrected GCV for any $M$ and provide rates of convergence under various assumptions on the data and the component estimators. We will provide a non-asymptotic analysis in and an asymptotic analysis in . In both of these sections, we will assume the following sampling scheme. **Assumption 1** (Sampling strategy). The subsample sets $\{I_m\}_{m=1}^M$ are independent of $(\bm{X}, \bm{y})$ and i.i.d., uniformly distributed over subsets of $[n]$ with cardinality $k$. Before we delve into our analysis, it is worth reiterating that the intermediate estimators [\[eq:risk-decomp-est-1\]](#eq:risk-decomp-est-1){reference-type="eqref" reference="eq:risk-decomp-est-1"} and [\[eq:risk-decomp-est-2\]](#eq:risk-decomp-est-2){reference-type="eqref" reference="eq:risk-decomp-est-2"} are defined for ensembles fitted on subsamples $\mathcal{D}_{I_m}$ for $m \in [M]$ of arbitrary sizes and component estimators fitted with different penalty functions $g_m$ for $m \in [M]$. While our proofs are simplified by assuming equal subsample sizes, this condition can be relaxed for certain theoretical claims to follow in . The formulation of the modified GCV, on the other hand, assumes that the same penalty function is used for estimators across subsamples and that subsample sizes do not vary much to ensure the validity of certain degrees of freedom concentrations, as we will explain in . # Non-asymptotic analysis for general ensembles {#sec:finite-sample-analysis} In this section, we provide a non-asymptotic analysis of the estimators introduced in the previous section. Our analysis is applicable to general convex penalized estimators such as ridge regression and elastic net, among others. To facilitate our analysis, we will need to impose certain assumptions on the data-generating process.
Specifically, we will assume Gaussian features and linear models as encapsulated in the assumptions below. These assumptions serve as a starting point but are not indispensable for more specialized estimators, as we discuss in . **Assumption 2** (Feature structure). The design matrix $\bm{X}\in {{\mathbb{R}}}^{n\times p}$ has i.i.d. rows $\bm{x}_i$ for $i \in [n]$ each drawn from a Gaussian distribution: $\bm{x}_i \sim \mathcal{N}(\bm{0},\bm{\Sigma})$. **Assumption 3** (Response structure). The response vector $\bm{y}$ has i.i.d. entries $y_i$ for $i \in [n]$ each drawn from a linear model with Gaussian noise: $y_i = \bm{x}_i^\top\bm{\beta}_0 + \varepsilon_i$, where $\varepsilon_i\sim \mathcal{N}(0,\sigma^2)$ is independent of $\bm{x}_i$. Although these assumptions may appear restrictive, they serve to simplify our analysis and enable us to derive risk estimates as well as prove explicit non-asymptotic bounds. These assumptions can be substantially relaxed for specialized base estimators. In , we will demonstrate that we can significantly relax these assumptions by focusing on a specific base estimator of ridge(less) regression. **Assumption 4** (Penalty structure). The penalty function $g_{m}: {{\mathbb{R}}}^p \to {{\mathbb{R}}}$ in [\[eq:def-hbeta\]](#eq:def-hbeta){reference-type="eqref" reference="eq:def-hbeta"} is $\mu$-strongly convex with respect to $\bm{\Sigma}$, i.e., the function $\bm{b}\mapsto g_{m}(\bm{b}) - (\mu/2) \bm{b}^\top \bm{\Sigma}\bm{b}$ is convex in $\bm{b}\in {{\mathbb{R}}}^p$. If $g_m$ is twice continuously differentiable, this condition is equivalent to $\inf_{\bm{b}\in {{\mathbb{R}}}^p} \nabla^2 g_m(\bm{b}) \succeq \mu \bm{\Sigma}$, where $\nabla^2 g_m(\bm{b}) \in {{\mathbb{R}}}^{p\times p}$ is the Hessian matrix.
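For a twice-differentiable penalty, the strong-convexity condition is a matrix inequality on the Hessian and is easy to check numerically. A minimal sketch for a generalized ridge penalty $g(\bm{b}) = \bm{b}^\top \bm{\Sigma}_{w}\bm{b}/2$, whose Hessian is $\bm{\Sigma}_{w}$; the randomly generated matrices are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
p = 10
A = rng.normal(size=(p, p))
Sigma = A @ A.T / p + 0.1 * np.eye(p)      # illustrative feature covariance
B = rng.normal(size=(p, p))
Sigma_w = B @ B.T / p + 0.1 * np.eye(p)    # illustrative penalty matrix

# For g(b) = b' Sigma_w b / 2 the Hessian is Sigma_w, so mu-strong convexity
# with respect to Sigma amounts to Sigma_w - mu * Sigma being PSD.
mu = np.linalg.eigvalsh(Sigma_w).min() / np.linalg.eigvalsh(Sigma).max()
gap = np.linalg.eigvalsh(Sigma_w - mu * Sigma).min()
```

A nonnegative `gap` (up to floating-point error) confirms that this choice of $\mu$ satisfies the condition.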
For instance, if $g_m(\bm{b}) = \bm{b}^\top \bm{\Sigma}_{w}\bm{b}/2$ (generalized ridge penalty) with a positive definite matrix $\bm{\Sigma}_{w}\in {{\mathbb{R}}}^{p\times p}$, the assumption holds with $\mu = \sigma_{\min}(\bm{\Sigma}_{w})/\sigma_{\max}(\bm{\Sigma}).$ In particular, $\mu=\lambda/\sigma_{\max}(\bm{\Sigma})$ when $g_m(\bm{b}) = \lambda\|\bm{b}\|_2^2/2$. Note that the strong convexity of the penalty guarantees a certain differentiability structure of the penalized least squares estimator [\[eq:def-hbeta\]](#eq:def-hbeta){reference-type="eqref" reference="eq:def-hbeta"}; see, for example, Theorem 1 of @bellec2022derivatives and . ## Guarantees for intermediate risk estimators {#sec:intermediate-guarantees-gaussian} Our first result provides a non-asymptotic guarantee on the intermediate estimators. Recall the notation in [\[eq:risk-decomposition\]](#eq:risk-decomposition){reference-type="eqref" reference="eq:risk-decomposition"}: the estimation target is $R_M = M^{-2}\sum_{m, \ell}R_{m,\ell}$, where $R_{m, \ell} = (\widehat{\bm{\beta}}_m - \bm{\beta}_0)^\top\bm{\Sigma}(\widehat{\bm{\beta}}_\ell - \bm{\beta}_0) + \sigma^2$.
For notational convenience, we define, for each $m, \ell\in[M]$, $$\begin{aligned} d_{m,\ell}^\textup{\textrm{ovlp}}:= |I_m\cap I_\ell| \bigg(1-\frac{{\widehat{\mathsf{df}}}{}_m}{|I_m|}\bigg) \bigg(1-\frac{{\widehat{\mathsf{df}}}{}_\ell}{|I_\ell|}\bigg), \quad \qquad d_{m,\ell}^\textup{\textrm{full}}:= n-{\widehat{\mathsf{df}}}{}_m - {\widehat{\mathsf{df}}}{}_\ell + \frac{{\widehat{\mathsf{df}}}{}_m {\widehat{\mathsf{df}}}{}_\ell}{|I_m| |I_{\ell}|} | I_m \cap I_\ell |, \label{eq:d_sub_and_full}\end{aligned}$$ which are the denominators in the definitions of $\widehat{R}_{m, \ell}^\textup{\textrm{ovlp}}$ and $\widehat{R}_{m, \ell}^\textup{\textrm{full}}$ in [\[eq:risk-decomp-est-1\]](#eq:risk-decomp-est-1){reference-type="eqref" reference="eq:risk-decomp-est-1"} and [\[eq:risk-decomp-est-2\]](#eq:risk-decomp-est-2){reference-type="eqref" reference="eq:risk-decomp-est-2"}. Now, we are ready to state the theoretical guarantees for $\widehat{R}_{m, \ell}^\textup{\textrm{ovlp}}$ and $\widehat{R}_{m, \ell}^\textup{\textrm{full}}$ in . theoremThmSubFull [\[th:sub_full\]]{#th:sub_full label="th:sub_full"} Suppose the above assumptions are satisfied. Let $c=k/n$, $\gamma = \max(1, p/n)$, and $\tau = \min(1, \mu)$. Then there exists an absolute constant $C>0$ such that the following holds: $$\begin{aligned} \mathbb{E}\bigg[\Big|\frac{d_{m, \ell}^{\textup{\textrm{ovlp}}}}{n} \cdot \frac{\widehat{R}_{m, \ell}^{\textup{\textrm{ovlp}}} -R_{m,\ell}}{\sqrt{R_{m,m} R_{\ell,\ell}}}\Big| \bigg] \le C \frac{\gamma^{7/2}}{\tau^2 c^3 \sqrt n}, \quad \qquad \mathbb{E}\bigg[\Big|\frac{d_{m, \ell}^{\textup{\textrm{full}}}}{n} \cdot \frac{\widehat{R}_{m, \ell}^{\textup{\textrm{full}}} -R_{m,\ell}}{\sqrt{R_{m,m} R_{\ell,\ell}}}\Big| \bigg]\le C \frac{\gamma^{5/2}}{\tau^2 c^2 \sqrt n}.
\label{eq:sub_full_moment_ineq}\end{aligned}$$ Furthermore, if the same penalty $g_m$ is used for the subsample estimates [\[eq:def-hbeta\]](#eq:def-hbeta){reference-type="eqref" reference="eq:def-hbeta"} across all $m\in [M]$, for any $\epsilon\in(0,1)$, we have $$\begin{aligned} \mathbb{P}\bigg( \Big| 1- \frac{\widetilde{R}_{M}^\textup{\textrm{ovlp}}}{R_M} \Big| > \epsilon \bigg) \le C \frac{M^3 \gamma^{11/2}}{\epsilon \tau^4 c^7 \sqrt{n}}, \quad \qquad \mathbb{P}\bigg( \Big| 1- \frac{\widetilde{R}_{M}^\textup{\textrm{full}}}{R_M} \Big| > \epsilon \bigg) \le C \frac{M^3 \gamma^{9/2}}{\epsilon \tau^4 c^2 \sqrt{n}}. \label{eq:sub_full_relative_error}\end{aligned}$$ Thus, if $(M, \mu^{-1}, p/n, n/k)$ are bounded from above by a constant independent of $n$, we have $$\widetilde{R}_M^\textup{\textrm{ovlp}}/R_M = 1+\mathcal{O}_{\mathbb{P}}(n^{-1/2}), \quad \qquad \widetilde{R}_M^\textup{\textrm{full}}/R_M = 1+\mathcal{O}_{\mathbb{P}}(n^{-1/2}).$$ The theorem shows that the rate of convergence of both the ovlp- and full-estimators is $n^{1/2}$. It is worth noting that the upper bounds regarding $\widetilde{R}_M^\textup{\textrm{ovlp}}$ and $\widetilde{R}_M^\textup{\textrm{full}}$ in [\[eq:sub_full_moment_ineq\]](#eq:sub_full_moment_ineq){reference-type="eqref" reference="eq:sub_full_moment_ineq"} and [\[eq:sub_full_relative_error\]](#eq:sub_full_relative_error){reference-type="eqref" reference="eq:sub_full_relative_error"} are slightly different, especially in their dependence on the ratio $c=k/n$ and on $\gamma$; the upper bounds for the full-estimator are tighter than those for the ovlp-estimator in their dependence on $c$ and $\gamma$. While these upper bounds may not be optimal in the dependence on $c$ and $\gamma$, we empirically observe that the relative error of the full-estimator is smaller than that of the ovlp-estimator, especially when $c$ is small; see . Note that the guarantees in are in multiplicative form.
This is preferable to the more common additive bounds because, in the regimes that we study, the risk $R_M$ may vary in scale across subsample sizes; for example, the risk can be high when the subsample size is near the feature size and the regularization parameter is small [@patil2022bagging]. It is also worth noting that we do not assume either the pure signal energy (i.e., $\| \bm{\beta}_0 \|_2^2$) or the effective signal energy (i.e., $\| \bm{\Sigma}^{1/2} \bm{\beta}_0 \|_2^2$) is bounded. This is possible because of the multiplicative bounds. **Remark 4** (Diminishing strong convexity parameter). The upper bounds in [\[eq:sub_full_moment_ineq\]](#eq:sub_full_moment_ineq){reference-type="eqref" reference="eq:sub_full_moment_ineq"} and [\[eq:sub_full_relative_error\]](#eq:sub_full_relative_error){reference-type="eqref" reference="eq:sub_full_relative_error"} give explicit dependence on the strong convexity parameter $\mu$ through $\tau=\min(1,\mu)$. Our results thus allow choosing $\mu$ to decrease with $n$; for instance, $\mu=n^{-1/8-\epsilon}$ is sufficient to ensure that the upper bounds in [\[eq:sub_full_moment_ineq\]](#eq:sub_full_moment_ineq){reference-type="eqref" reference="eq:sub_full_moment_ineq"} and [\[eq:sub_full_relative_error\]](#eq:sub_full_relative_error){reference-type="eqref" reference="eq:sub_full_relative_error"} go to 0 as $n\to\infty$ while $M,c^{-1},\gamma$ remain bounded by constants. **Remark 5** (Extensions beyond strongly convex regularizers). Without any strong convexity assumption on the penalty (i.e., $\mu=0$), results such as for $M=1$ cannot be established using known techniques even for the well-studied lasso, unless additional assumptions are made on the subsample size.
In particular, Proposition 4 of @celentano2020lasso shows that in the current proportional asymptotic regime with $n\asymp p$, the risk of a single lasso may be unbounded if the number of samples used to train the lasso falls below the Donoho-Tanner phase transition described in Section 3 of [@celentano2020lasso]. While the consistency of GCV for a single lasso is a consequence of this theory above this phase transition [@celentano2020lasso Theorem 9], below the phase transition both $\|\bm{\Sigma}^{1/2}({{\widehat{\bm{\beta}}}}-\bm{\beta}_0)\|_2$ and $1/(1-{\widehat{\mathsf{df}}}{}/n)$ are unbounded for certain $\bm{\beta}_0$. Furthermore, the constants appearing in the upper bounds in [@celentano2020lasso] explode as the sample size approaches the phase transition. We are not aware of any currently known techniques suitable to study GCV below this phase transition, and we believe new ideas are needed. Thus, in the present context, where we subsample $I_m$ of size $k=|I_m|$, small values of $k$ would eventually fall below the phase transition, in which case the theoretical analysis of bagging GCV is currently out of reach. We numerically compare the performance of the ovlp- and full-estimators for the ridge ensemble in . It is clear that the relative error of the full-estimator is smaller than that of the ovlp-estimator across different subsample sizes $k$ and tuning parameters $\lambda$, especially when both $k$ and $\lambda$ are small. We also observe that, as $k$ increases towards the full sample size $n$, the performances of the ovlp- and full-estimators become closer; this makes sense because the two estimators coincide when $k=n$. For the elastic net and lasso ensembles, we observe a similar comparative trend between the ovlp- and full-estimators; the results are shown in .
![ **Full-estimator $\widetilde{R}_{M}^{\textup{\textrm{full}}}$ performs better than ovlp-estimator $\widetilde{R}_{M}^{\textup{\textrm{ovlp}}}$ across different regularization levels and for small subsample sizes $k$.** Plots of relative errors of the "ovlp" and "full" estimators for the ridge ensemble with ensemble size $M=10$. The left panel shows the results with ridge penalty $\lambda=1$ and varying subsample size $k$. The right panel shows the results with subsample size $k=300$ and varying ridge penalty $\lambda$. The data generating process is the same as in . More details on the experimental setup can be found in . ](figures/Fig2_sub_full_ridge.pdf){#fig:k width="75%"} ## Guarantees for corrected GCV {#sec:cgcv-guarantees-gaussian} The next result provides non-asymptotic guarantees for corrected GCV. Recall the form of the two variants of CGCV, $\widetilde{R}^{\textup{\textrm{cgcv}},\textup{\textrm{full}}}_{M}$ and $\widetilde{R}^{\textup{\textrm{cgcv}},\textup{\textrm{ovlp}}}_{M}$ from [\[eq:cgcv-proposal\]](#eq:cgcv-proposal){reference-type="eqref" reference="eq:cgcv-proposal"} and [\[eq:cgcv-proposal-sub\]](#eq:cgcv-proposal-sub){reference-type="eqref" reference="eq:cgcv-proposal-sub"}, respectively: $$\begin{aligned} \widetilde{R}^{\textup{\textrm{cgcv}},\char"0023}_{M} :=\frac{\|\bm{y}- \bm{X}{\widetilde{\bm{\beta}}}_M\|_2^2/n}{(1-{\widetilde{\mathsf{df}}}{}_M/n)^2} - \frac{({\widetilde{\mathsf{df}}}{}_M/n)^2}{(1-{\widetilde{\mathsf{df}}}{}_M/n)^2} \frac{1}{M^2} \left(\frac{n}{k}-1\right)\sum_{m=1}^M \widehat{R}_{m,m}^\char"0023,\end{aligned}$$ where $\char"0023\in\{\textup{\textrm{full}},\textup{\textrm{ovlp}}\}$, depending on which estimate is used in the rightmost sum to estimate the risk component $R_{m,m}$. theoremThmCorrectionGCV[\[th:correction_gcv\]]{#th:correction_gcv label="th:correction_gcv"} Suppose the above assumptions hold. Let $c=k/n$, $\gamma = \max(1, p/n)$, and $\tau = \min(1, \mu)$.
Assume that the same penalty $g_m$ is used for each $m\in[M]$ in [\[eq:def-hbeta\]](#eq:def-hbeta){reference-type="eqref" reference="eq:def-hbeta"}. Then, for any $\epsilon\in (0,1)$, there exists an absolute constant $C>0$ such that the following holds: $$\label{eq:cgcv_relative_error} \mathbb{P}\Bigl(\Big|1-\frac{\widetilde{R}_M^{\textup{\textrm{cgcv}},\textup{\textrm{ovlp}}}}{R_M}\Big| > \epsilon\Bigr) \le C \frac{M^4 \gamma^{15/2}}{\epsilon\tau^6 c^6 \sqrt{n}}, \quad \qquad \mathbb{P}\Bigl(\Big|1-\frac{\widetilde{R}_M^{\textup{\textrm{cgcv}},\textup{\textrm{full}}}}{R_M}\Big| > \epsilon\Bigr) \le C \frac{M^4 \gamma^{13/2}}{\epsilon\tau^6 c^4 \sqrt{n}}.$$ Thus, if $(M, \mu^{-1}, p/n, n/k)$ are bounded by constants independent of $n$, we have $$\widetilde{R}^{\textup{\textrm{cgcv}},\textup{\textrm{ovlp}}}_M/R_M = 1+ \mathcal{O}_{\mathbb{P}}(n^{-1/2}), \quad \qquad \widetilde{R}^{\textup{\textrm{cgcv}},\textup{\textrm{full}}}_M/R_M = 1+ \mathcal{O}_{\mathbb{P}}(n^{-1/2}).$$ The upper bounds presented above show how the accuracy of corrected GCV depends on several problem parameters, such as $M$, $\tau$, $c$, and $\gamma$. The key message is that the corrected GCV estimators (both $\widetilde{R}^{\textup{\textrm{cgcv}},\textup{\textrm{ovlp}}}_M$ and $\widetilde{R}^{\textup{\textrm{cgcv}},\textup{\textrm{full}}}_M$) enjoy the rate of convergence $n^{1/2}$. As expected, numerical comparisons show that full-CGCV outperforms ovlp-CGCV; see . Next, we provide some rough heuristics for how we arrive at the corrected GCV from the full-estimator.
The key lemmas are the concentration of $|I_m\cap I_\ell|$ and ${\widehat{\mathsf{df}}}{}_m$ below ( and , respectively): $$| I_m \cap I_\ell | \approx \frac{|I_m||I_\ell|}{n} \quad\text{ for all } m \neq \ell, \qquad {\widehat{\mathsf{df}}}{}_m \approx {\widetilde{\mathsf{df}}}{}_M = \frac{1}{M} \sum_{m' \in [M]} {\widehat{\mathsf{df}}}{}_{m'} \quad\text{ for all } m.$$ By those concentration properties, we observe that $d_{m,\ell}^\textup{\textrm{full}}$, the denominator in the full-estimator defined in [\[eq:d_sub_and_full\]](#eq:d_sub_and_full){reference-type="eqref" reference="eq:d_sub_and_full"}, has the following simple expression asymptotically: $$\begin{aligned} \frac{d_{m,\ell}^\textup{\textrm{full}}}{n} \approx \begin{dcases} 1 - 2 \, \frac{{\widetilde{\mathsf{df}}}{}_M}{n} + \bigg(\frac{{\widetilde{\mathsf{df}}}{}_M}{n}\bigg)^2 & \text{ if } m \neq \ell \\ 1 - 2 \, \frac{{\widetilde{\mathsf{df}}}{}_M}{n} + \bigg(\frac{{\widetilde{\mathsf{df}}}{}_M}{n}\bigg)^2 \frac{n}{|I_m|} & \text{ if } m=\ell \end{dcases} = \bigg(1-\frac{{\widetilde{\mathsf{df}}}{}_M}{n}\bigg)^2 + \mathop{\mathrm{\mathds{1}}}_{\{m=\ell\}} \bigg(\frac{n}{|I_m|}-1\bigg) \bigg(\frac{{\widetilde{\mathsf{df}}}{}_M}{n}\bigg)^2. 
\end{aligned}$$ Combining the above display and $\widehat{R}_{m, \ell}^\textup{\textrm{full}}\approx R_{m,\ell}$ by [\[eq:sub_full_moment_ineq\]](#eq:sub_full_moment_ineq){reference-type="eqref" reference="eq:sub_full_moment_ineq"}, we have $$\begin{aligned} \frac{\|\bm{y}- \bm{X}{\widetilde{\bm{\beta}}}_M\|_2^2}{n} &= \frac{1}{M^2} \sum_{m, \ell}\frac{(\bm{y}- \bm{X}\widehat{\bm{\beta}}_m)^\top(\bm{y}-\bm{X}\widehat{\bm{\beta}}_\ell)}{n} = \frac{1}{M^2}\sum_{m, \ell} \frac{d_{m,\ell}^\textup{\textrm{full}}}{n} \cdot \widehat{R}_{m, \ell}^\textup{\textrm{full}}\\ &\approx \bigg(1-\frac{{\widetilde{\mathsf{df}}}{}_M}{n}\bigg)^2 R_M + \bigg(\frac{{\widetilde{\mathsf{df}}}{}_M}{n}\bigg)^2 \frac{1}{M^2} \sum_{m=1}^M \bigg(\frac{n}{|I_m|}-1\bigg) R_{m,m}.\end{aligned}$$ Dividing by $(1-{\widetilde{\mathsf{df}}}{}_M/n)^2$ and plugging in $\widehat{R}_{m, m}^\textup{\textrm{ovlp}}$ or $\widehat{R}_{m,m}^\textup{\textrm{full}}$ to estimate $R_{m,m}$ inside the rightmost sum, we obtain the corrected GCV estimators given in . **Remark 6** (GCV is consistent for infinite ensembles). Note that as $M \to \infty$, the correction term vanishes provided $1-{\widetilde{\mathsf{df}}}{}_M/n \ge \delta$ for some positive constant $\delta$ and $M^{-1}\sum_{m=1}^M \widehat{R}_{m,m}^{\char"0023}$ is bounded from above. A special case of this result for the ridge ensemble is proved in [@du2023subsample]. While that work showed that GCV is not consistent for the ridge ensemble with $M=2$, our results go beyond inconsistency by providing the correction term that makes GCV consistent, and they apply not only to ensembles of ridge estimates but also to ensembles of any strongly convex regularized least-squares estimator, including elastic net estimates. In the next theorem, we show that GCV is inconsistent by providing a lower bound on the relative error of the GCV estimator. 
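As an aside, the first concentration property used above, $|I_m \cap I_\ell| \approx |I_m||I_\ell|/n$ for subsamples drawn without replacement, can be verified with a quick simulation (the sizes below are hypothetical and chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, trials = 10_000, 2_000, 200  # hypothetical sizes

# Sample pairs of size-k subsets of [n] without replacement and record the
# overlap |I_m ∩ I_l|; its mean should be close to k * k / n (here 400).
overlaps = []
for _ in range(trials):
    I_m = rng.choice(n, size=k, replace=False)
    I_l = rng.choice(n, size=k, replace=False)
    overlaps.append(np.intersect1d(I_m, I_l).size)

print(np.mean(overlaps), k * k / n)  # both close to 400
```

The per-pair fluctuation is of order $\sqrt{k}$, which is negligible relative to $k^2/n$ in the proportional regime.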
theoremPropGCVInconsistency[\[prop:gcv_inconsistency\]]{#prop:gcv_inconsistency label="prop:gcv_inconsistency"} Suppose hold, and assume that the same penalty $g_m$ is used for each $m\in[M]$ in [\[eq:def-hbeta\]](#eq:def-hbeta){reference-type="eqref" reference="eq:def-hbeta"}. Let $c=k/n$, $\gamma = \max(1, p/n)$, $\tau = \min(1, \mu)$. Then, there exists an absolute constant $C>0$ such that the following holds: The probability that $\widetilde{R}^\textup{\text{gcv}}_M$ in [\[eq:fgcv-naive\]](#eq:fgcv-naive){reference-type="eqref" reference="eq:fgcv-naive"} over-estimates the risk $R_M$ is lower bounded as: $$\text{for all } \delta\in(0,1), \quad \mathbb{P}\bigg( \frac{\widetilde{R}_M^{\textup{\text{gcv}}}}{R_M} \ge 1 +\delta^2 \frac{c(1-c)}{2M} \bigg) \ge \mathbb{P}\bigg(\frac{{\widetilde{\mathsf{df}}}{}_M}k \ge \delta\bigg) - C \frac{M^5\gamma^{15/2}}{\sqrt{n}\tau^5c^5 (1-c)\delta^2}.$$ Therefore, if there exists a positive constant $\delta_0$ independent of $n$ such that $\mathbb{P}({\widetilde{\mathsf{df}}}{}_M/k \ge \delta_0)\ge \delta_0$, and $\{M, \mu^{-1}, p/n, c^{-1}, (1-c)^{-1}\}$ are bounded by a constant independent of $n$, then $\widetilde{R}_M^{\textup{\text{gcv}}}$ is inconsistent and over-estimates the risk $R_M$ with positive probability, in the sense that $$\liminf_{n\to\infty} \ \mathbb{P}\bigg(\frac{\widetilde{R}^\textup{\text{gcv}}_M}{R_M} \ge 1 + \frac{\delta_0^2c(1-c)}{2M}\bigg) \ge \delta_0.$$ In the context of ridge ensembles, @du2023subsample show that the GCV variant [\[eq:gcv-naive\]](#eq:gcv-naive){reference-type="eqref" reference="eq:gcv-naive"} is not consistent when the ensemble size is $M=2$. It is possible to extend the inconsistency results for the variant [\[eq:gcv-naive\]](#eq:gcv-naive){reference-type="eqref" reference="eq:gcv-naive"} to finite ensemble sizes greater than $1$. We provide numerical experiments showing the inconsistency of this variant in . 
As discussed in , for a moderate ensemble size $M$ and subsample size $k$, we expect the GCV variant [\[eq:gcv-naive\]](#eq:gcv-naive){reference-type="eqref" reference="eq:gcv-naive"} to behave similarly to [\[eq:fgcv-naive\]](#eq:fgcv-naive){reference-type="eqref" reference="eq:fgcv-naive"}. Furthermore, the inconsistency of GCV implied by applies even to other penalized least-squares estimators [\[eq:def-hbeta\]](#eq:def-hbeta){reference-type="eqref" reference="eq:def-hbeta"}, extending the previous result to greater generality. **Remark 7** (On the assumption of non-negligibility of degrees of freedom). We argue here that the assumption $\mathbb{P}({\widetilde{\mathsf{df}}}{}_M/k>\delta_0) > \delta_0$ in is unavoidable. If no such constant $\delta_0$ exists as $n,p,k\to\infty$, by extracting a subsequence if necessary, we may assume ${\widetilde{\mathsf{df}}}{}_M/k\xrightarrow{\textup{p}}0$. Herein, let $m\in[M]$ be fixed (e.g., take $m=1$ throughout this remark). Then the positivity of $({\widehat{\mathsf{df}}}{}_{m})_{m=1}^M$ (see [\[eq:property_tr\]](#eq:property_tr){reference-type="eqref" reference="eq:property_tr"} in ) implies ${\widehat{\mathsf{df}}}{}_m/k \le M\cdot {\widetilde{\mathsf{df}}}{}_M/k = o_p(1)$. The fact that GCV is consistent for ${{\widehat{\bm{\beta}}}}_m$, combined with ${\widehat{\mathsf{df}}}{}_m/k\xrightarrow{\textup{p}}0$, gives $k^{-1} \sum_{i\in I_m}(y_i-\bm{x}_i^\top{{\widehat{\bm{\beta}}}}_m)^2 / R_{m,m} \xrightarrow{\textup{p}}1$. Now consider the deterministic $\bar\bm{\beta}$ defined as $\bar\bm{\beta}= \mathop{\mathrm{argmin}}_{\bm{b}\in {{\mathbb{R}}}^p} (\sigma^2 + \|\bm{\Sigma}^{1/2}(\bm{\beta}_0 - \bm{b})\|_2^2)/2 + g_m(\bm{b}),$ where we replaced the objective function in [\[eq:def-hbeta\]](#eq:def-hbeta){reference-type="eqref" reference="eq:def-hbeta"} inside the minimum by its expectation. 
The strong convexity of the objective function of $\bar\bm{\beta}$ and the optimality condition of $\widehat{\bm{\beta}}_m$ give $$\begin{aligned} \|\bm{\Sigma}^{1/2}({{\widehat{\bm{\beta}}}}_m - \bar\bm{\beta})\|_2^2 &\le \bigl[ \|\bm{\Sigma}^{1/2}({{\widehat{\bm{\beta}}}}_m - \bm{\beta}_0)\|_2^2 + \sigma^2 \bigr] - \bigl[ \|\bm{\Sigma}^{1/2}(\bar\bm{\beta}-\bm{\beta}_0)\|_2^2 +\sigma^2 \bigr] + 2(g_m({{\widehat{\bm{\beta}}}}_m) - g_m(\bar\bm{\beta})), \\ 0 &\le - \|\bm{L}_{I_m} (\bm{y}- \bm{X}{{\widehat{\bm{\beta}}}}_m)\|_2^2/k + \|\bm{L}_{I_m} (\bm{y}- \bm{X}\bar\bm{\beta})\|_2^2/k + 2(g_m(\bar\bm{\beta})-g_m({{\widehat{\bm{\beta}}}}_m)). \end{aligned}$$ Summing these two inequalities, the penalty terms cancel, and using the law of large numbers for $\|\bm{L}_{I_m} (\bm{y}- \bm{X}\bar\bm{\beta})\|_2^2/k$ we get $\|\bm{\Sigma}^{1/2}({{\widehat{\bm{\beta}}}}_m - \bar\bm{\beta})\|_2^2 \le o_{\mathbb{P}}(1) [ \sigma^2 \|\bm{\Sigma}^{1/2}({{\widehat{\bm{\beta}}}}_m - \bm{\beta}_0)\|_2^2 + \sigma^2 \|\bm{\Sigma}^{1/2}(\bar\bm{\beta}- \bm{\beta}_0)\|_2^2 ]$ so that $\|\bm{\Sigma}^{1/2}({{\widehat{\bm{\beta}}}}_m - \bar\bm{\beta})\|_2^2/ [\sigma^2 + \|\bm{\Sigma}^{1/2}(\bar\bm{\beta}- \bm{\beta}_0)\|_2^2] \xrightarrow{\textup{p}}0$. This means that, with respect to the norm $\|\bm{\Sigma}^{1/2}(\cdot)\|_2$, ${{\widehat{\bm{\beta}}}}_m$ behaves like the point mass at $\bar\bm{\beta}$ up to a negligible multiplicative factor. This contradicts the typical asymptotics seen in the proportional regime with $p \asymp n$ [@bayati2011lasso; @thrampoulidis2018precise; @miolane2021distribution; @loureiro2022fluctuations; @celentano2020lasso; @wang2020bridge among others], so that the assumption $\mathbb{P}({\widetilde{\mathsf{df}}}{}_M/k>\delta_0)>\delta_0$ is intrinsic to the proportional regime assumed throughout the paper. 
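Before turning to the numerical illustrations, the construction of the corrected GCV can be made concrete in a minimal end-to-end sketch. The sizes, the ridge penalty, and the isotropic Gaussian design below are hypothetical choices for illustration only; the sketch computes the naive GCV, the corrected GCV built from the full-estimator of $R_{m,m}$, and the true risk of a small ridge ensemble:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, k, M, lam, sigma2 = 1000, 400, 500, 2, 0.1, 1.0  # hypothetical sizes

# Linear Gaussian data with isotropic features (Sigma = I) and unit SNR.
beta0 = rng.standard_normal(p)
beta0 /= np.linalg.norm(beta0)
X = rng.standard_normal((n, p))
y = X @ beta0 + np.sqrt(sigma2) * rng.standard_normal(n)

# Fit M ridge estimators on subsamples of size k drawn without replacement.
betas, dfs = [], []
for _ in range(M):
    I = rng.choice(n, size=k, replace=False)
    A = X[I].T @ X[I] / k
    betas.append(np.linalg.solve(A + lam * np.eye(p), X[I].T @ y[I] / k))
    dfs.append(np.trace(np.linalg.solve(A + lam * np.eye(p), A)))  # tr[S_m]

beta_tilde = np.mean(betas, axis=0)
df_bar = np.mean(dfs) / n  # average degrees of freedom divided by n

# Naive GCV, the full-estimator of R_{m,m}, and the corrected GCV.
gcv = np.mean((y - X @ beta_tilde) ** 2) / (1 - df_bar) ** 2
R_mm = [np.mean((y - X @ b) ** 2)
        / ((1 - df_bar) ** 2 + df_bar ** 2 * (n / k - 1)) for b in betas]
cgcv = gcv - df_bar ** 2 / (1 - df_bar) ** 2 * (n / k - 1) * np.mean(R_mm) / M

risk = sigma2 + np.sum((beta_tilde - beta0) ** 2)  # true risk under Sigma = I
```

Because the correction term is a product of nonnegative factors, the corrected value never exceeds the naive GCV; on synthetic data of this kind it typically tracks the true risk while the naive GCV overshoots.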
## Numerical illustrations: Linear models with Gaussian designs {#sec:numerical-illustrations-gaussian} In this section, we corroborate our theoretical results with numerical experiments on synthetic data, showing the validity of our proposed corrected GCV ($\widetilde{R}_M^{\textup{\textrm{cgcv}},\textup{\textrm{full}}}$ in [\[eq:cgcv-proposal\]](#eq:cgcv-proposal){reference-type="eqref" reference="eq:cgcv-proposal"}). The purpose is to show empirically that our proposed corrected GCV estimator (shown as CGCV) accurately estimates the true risk $R_M$ (shown as Risk), while the ordinary GCV in [\[eq:fgcv-naive\]](#eq:fgcv-naive){reference-type="eqref" reference="eq:fgcv-naive"} (shown as GCV) is a poor estimator of the true risk $R_M$. Experiments for non-Gaussian designs are discussed in below. ![ **CGCV is consistent for finite ensembles across different subsample sizes.** Plot of risks versus the subsample size $k$ for the elastic net ensemble with different $\lambda_1$ and $M$ and fixed $\lambda_2 = 0.01$. ](figures/ElasticNet_k.pdf){#fig:ridge-k width="95%"} We first describe the simulation setting. We set $(n, p) =(2000, 500)$ and consider different combinations of $(k, M)$ and tuning parameters $(\lambda,\lambda_1,\lambda_2)$ for the three penalty functions in . We consider a linear model satisfying with noise variance $\sigma^2=1$. To be specific, we consider a non-isotropic covariance matrix $\bm{\Sigma}$ whose diagonal entries are evenly spaced in $[0.1, 10]$ and whose off-diagonal entries are 0. We generate the true regression coefficient $\bm{\beta}_0$ as follows: generate $\bm{b}_0\in \mathbb{R}^p$ with its first $p-100$ entries sampled i.i.d. from $\mathcal{N}(0,1)$, and set the remaining entries to 0. We then scale $\bm{b}_0$ to obtain the coefficient $\bm{\beta}_0$ by $\bm{\beta}_0= \bm{b}_0(\sigma^2 / \bm{b}_0^\top\bm{\Sigma}\bm{b}_0)^{1/2}$ so that the signal-to-noise ratio $\bm{\beta}_0^\top\bm{\Sigma}\bm{\beta}_0 /\sigma^2$ is 1. 
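For concreteness, the covariance and coefficient construction just described can be sketched as follows (the seed is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 2000, 500

# Non-isotropic diagonal covariance with entries evenly spaced in [0.1, 10].
Sigma_diag = np.linspace(0.1, 10, p)

# Draft coefficient: first p-100 entries i.i.d. N(0, 1), remaining entries 0.
b0 = np.zeros(p)
b0[: p - 100] = rng.standard_normal(p - 100)

# Rescale so the signal-to-noise ratio beta0' Sigma beta0 / sigma^2 equals 1.
sigma2 = 1.0
beta0 = b0 * np.sqrt(sigma2 / (b0 @ (Sigma_diag * b0)))

# Generate design and responses from the linear model y = X beta0 + eps.
X = rng.standard_normal((n, p)) * np.sqrt(Sigma_diag)  # rows ~ N(0, Sigma)
y = X @ beta0 + np.sqrt(sigma2) * rng.standard_normal(n)

print(beta0 @ (Sigma_diag * beta0) / sigma2)  # SNR = 1 by construction
```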
For the ridge and lasso ensembles, the tuning parameter $\lambda$ of the corresponding penalty in is fixed across all the subsample estimates $\widehat{\bm{\beta}}_m$. For the elastic net ensemble, the tuning parameter $\lambda_2$ is fixed as $0.01$ and the same sequence of $\lambda_1$ is used across all subsample estimates $\widehat{\bm{\beta}}_m$. For each simulation setting, we perform 100 dataset repetitions and report the average of relevant risks. We present the simulation results of the elastic net ensemble in , and leave the results of the ridge and lasso ensembles to the appendix. shows the curve of the risk and its estimators as a function of the ensemble size $M$. We observe a clear gap between the curves of the GCV estimator and the true risk for different settings of $(M, k, \lambda_1)$, especially for small subsample size $k$ or small ensemble size $M$. In stark contrast, our proposed corrected GCV aligns closely with the true risk curve. again confirms that our proposal outperforms GCV in estimating the true risk across different values of $M$ and different combinations of $k$ and $\lambda_1$. We provide more simulation results on the ridge and lasso ensembles in . ![ **GCV gets closer to the risk as ensemble size increases across different subsample sizes.** Plot of risks versus the ensemble size $M$ for the elastic net ensemble with different $\lambda_1$ and $k$ and fixed $\lambda_2 = 0.01$. ](figures/ElasticNet_M.pdf){#fig:M width="95%"} # Asymptotic analysis for ridge ensembles {#sec:asm-consistency} The GCV estimator [\[eq:gcv-naive\]](#eq:gcv-naive){reference-type="eqref" reference="eq:gcv-naive"} for ridge ensembles can be inconsistent when the ensemble size is larger than 1, as mentioned in . The inconsistency persists when the features are sampled from non-Gaussian distributions [@du2023subsample]. 
In this section, we will show that the result in the previous section can be generalized to a wide class of data distributions when focusing on ridge predictors, using standard results from random matrix theory. Recall the base estimator $\widehat{\bm{\beta}}_m$ defined in [\[eq:def-hbeta\]](#eq:def-hbeta){reference-type="eqref" reference="eq:def-hbeta"} for $m \in [M]$. If we use the ridge penalty with regularization parameter $\lambda>0$, then the base estimator reduces to the *ridge* estimator fitted on the dataset $\mathcal{D}_{I_m}$: $$\widehat{\bm{\beta}}_{m,\lambda} = \mathop{\mathrm{argmin}}\limits_{\bm{\beta}\in\mathbb{R}^p} \sum_{i \in I_m} (y_i - \bm{x}_i^\top \bm{\beta})^2 / k + \lambda \| \bm{\beta}\|_2^2 = (\bm{X}^{\top} \bm{L}_{I_m} \bm{X}/ k + \lambda\bm{I}_p)^{-1}{\bm{X}^{\top} \bm{L}_{I_m} \bm{y}}/{k} .\label{eq:ingredient-estimator}$$ Taking $\lambda\rightarrow0^+$, $\widehat{\bm{\beta}}_{m,0}:=(\bm{X}^{\top} \bm{L}_{I_m} \bm{X}/|I_m|)^{+}\bm{X}^{\top} \bm{L}_{I_m} \bm{y}/|I_m|$ becomes the so-called *ridgeless* estimator, or minimum-norm interpolator, where $\bm{A}^+$ denotes the Moore-Penrose inverse of matrix $\bm{A}$. To prepare for our upcoming results, we impose the following assumptions on the responses, features, and subsample aspect ratios. **Assumption 5** (Response structure). The response $y$ satisfies $\mathbb{E}[y]=0$ and $\mathbb{E}[|y|^{4+\delta}] \leq C_0$ for some $\delta>0$ and $C_0>0$. **Assumption 6** (Feature structure). The feature vector decomposes as $\bm{x}= \bm{\Sigma}^{1/2}\bm{z}$, where $\bm{z}\in\mathbb{R}^{p}$ contains i.i.d. entries with mean $0$, variance $1$, and bounded moments of order $4+\delta$ for some $\delta > 0$, and $\bm{\Sigma}\in \mathbb{R}^{p \times p}$ is deterministic and symmetric with eigenvalues uniformly bounded between $r_{\min}>0$ and $r_{\max}<\infty$. We make several remarks on the data assumptions in our analysis. 
First, we do not impose any specific model assumptions between the response variable $y$ and the feature vector $\bm{x}$. Thus, our results are model-free, relying solely on the bounded moment assumption stated in , which ensures the generality of our subsequent findings. Although we assume zero mean for $y$, this is purely for simplicity, as centering can always be achieved by subtracting the sample mean in practice. Furthermore, the bounded moment condition can also be satisfied by imposing a stronger distributional assumption, such as sub-Gaussianity. **Assumption 7** (Proportional asymptotics). The sample size $n$, subsample size $k$, and the feature dimension $p$ tend to infinity such that $p/n\rightarrow\phi\in(0,\infty)$ and $p/k\rightarrow\psi\in[\phi,\infty]$. Even though we assume that both the data aspect ratio $p/n$ and the subsample aspect ratio $p/k$ converge to fixed constants, this is for the sake of simplicity of exposition and proofs, and is not essential. It suffices to require that they scale proportionally such that $0 < \liminf p/n \le \limsup p/k < \infty$ since, by compactness, we can always extract a subsequence of regression problems in which $p/n$ and $p/k$ have a finite limit. Both , concerning the feature vector, and , clarifying the proportional asymptotics regime, are commonly used assumptions in the study of random matrix theory [@bai2010spectral; @el2010spectrum] and the analysis of ridge and ridgeless regression [@karoui2011geometric; @karoui_2013; @dobriban_wager_2018; @hastie2022surprises]. **Remark 8** (Extreme points in proportional asymptotics). In , we allow the limiting data aspect ratio $p/n$ and subsample aspect ratio $p/k$ to be in $(0,\infty)$ and $[\phi,\infty]$, respectively. In the extreme case when $p / k \to \infty$, the prediction risks of ridge ensembles admit simple asymptotic formulas. 
More specifically, both the prediction risk and its estimates converge to the null risk $\mathbb{E}[y^2]$, the risk of the null predictor that always outputs zero. ## Guarantees for intermediate risk estimators {#guarantees-for-intermediate-risk-estimators} Since the ridge predictor is a linear smoother, its degrees of freedom ${\widehat{\mathsf{df}}}{}_{m}$ defined in [\[eq:def-df\]](#eq:def-df){reference-type="eqref" reference="eq:def-df"} equals the trace of its smoothing matrix $\bm{S}_m = \bm{X}_{I_m}(\bm{X}_{I_m}^{\top}\bm{X}_{I_m}/k + \lambda\bm{I}_p)^{-1}\bm{X}_{I_m}^{\top}/k$; namely, ${\widehat{\mathsf{df}}}{}_m = \mathop{\mathrm{tr}}[\bm{S}_m]$. For ridge regression, the ovlp-estimator $\widehat{R}_{m,\ell,\lambda}^{\textup{\textrm{ovlp}}}$ and the full-estimator $\widehat{R}_{m,\ell,\lambda}^{\textup{\textrm{full}}}$ as defined in [\[eq:risk-decomp-est-1\]](#eq:risk-decomp-est-1){reference-type="eqref" reference="eq:risk-decomp-est-1"} and [\[eq:risk-decomp-est-2\]](#eq:risk-decomp-est-2){reference-type="eqref" reference="eq:risk-decomp-est-2"} are respectively given explicitly as follows: $$\begin{aligned} \widehat{R}_{m,\ell,\lambda}^{\textup{\textrm{ovlp}}} &= \frac{\displaystyle (\bm{y}- \bm{X}\widehat{\bm{\beta}}_{m,\lambda})^{\top}\bm{L}_{I_m}\bm{L}_{I_\ell}(\bm{y}- \bm{X}\widehat{\bm{\beta}}_{\ell,\lambda}) / |I_m\cap I_{\ell}|}{\displaystyle 1 - \frac{\mathop{\mathrm{tr}}[\bm{S}_m]}{|I_m|} - \frac{\mathop{\mathrm{tr}}[\bm{S}_\ell]}{|I_{\ell}|} + \frac{\mathop{\mathrm{tr}}[\bm{S}_m]}{|I_m|} \frac{\mathop{\mathrm{tr}}[\bm{S}_\ell]}{|I_{\ell}|} }, \\ \widehat{R}_{m,\ell,\lambda}^{\textup{\textrm{full}}} &= \frac{\displaystyle (\bm{y}- \bm{X}\widehat{\bm{\beta}}_{m,\lambda})^\top (\bm{y}- \bm{X}\widehat{\bm{\beta}}_{\ell,\lambda}) / n}{\displaystyle 1 - \frac{\mathop{\mathrm{tr}}[\bm{S}_m]}{n} - \frac{\mathop{\mathrm{tr}}[\bm{S}_\ell]}{n} + \frac{\mathop{\mathrm{tr}}[\bm{S}_m]}{|I_m|} \frac{\mathop{\mathrm{tr}}[\bm{S}_\ell]}{|I_{\ell}|} \cdot 
\frac{| I_m \cap I_\ell |}{n}}.\end{aligned}$$ Here, we use the subscript $\lambda$ to indicate the dependency of the estimators for the risk component on the ridge penalty parameter $\lambda$. Similarly, we write $\widetilde{R}_{M,\lambda}^{\textup{\textrm{ovlp}}}$, $\widetilde{R}^{\textup{\textrm{full}}}_{M,\lambda}$, and $R_{M,\lambda}$ for the two risk estimators and the risk, respectively. The following theorem shows that both estimators are pointwise consistent for the risk component $R_{m,\ell,\lambda}$. theoremThmRiskEst[\[thm:risk-est\]]{#thm:risk-est label="thm:risk-est"} Under with $\sqrt{n}/k=\mathcal{O}(1)$ for the ovlp-estimator, for $\lambda\geq 0$, for any $M \in \mathbb{N}$ and $m, \ell \in [M]$, it holds that $|\widehat{R}_{m,\ell,\lambda}^{\textup{\textrm{ovlp}}} - R_{m, \ell,\lambda}| \xrightarrow{\textup{a.s.}}0$ and $|\widehat{R}_{m,\ell,\lambda}^{\textup{\textrm{full}}} - R_{m, \ell,\lambda}| \xrightarrow{\textup{a.s.}}0$. Consequently, it holds that $| \widetilde{R}_{M,\lambda}^{\textup{\textrm{ovlp}}} - R_{M,\lambda} | \xrightarrow{\textup{a.s.}}0$ and $| \widetilde{R}^{\textup{\textrm{full}}}_{M,\lambda} - R_{M,\lambda} | \xrightarrow{\textup{a.s.}}0$. For the ovlp-estimator to work in the extreme case when $\psi = \infty$, we also need $\sqrt{n}/k=\mathcal{O}(1)$. We note that this is not required for the full-estimator precisely because the full-estimator uses both subsample observations and out-of-subsample observations. So, even if the number of overlapping subsample observations is small, the number of out-of-subsample observations is large, and the estimator is able to track the risk. The regime when $k$ is small is the regime where the full-estimator is substantially better than the ovlp-estimator, as we have seen in . One natural question that remains for ridge predictors is how to select the ridge penalty $\lambda$. 
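A natural strategy is to evaluate the ensemble risk estimate over a grid of penalties and select the minimizer. The following sketch (hypothetical sizes and grid, isotropic Gaussian design with a well-specified linear model) does this with the full-estimator:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, k, M, sigma2 = 1000, 400, 500, 3, 1.0  # hypothetical sizes

beta0 = rng.standard_normal(p)
beta0 /= np.linalg.norm(beta0)               # unit signal strength, Sigma = I
X = rng.standard_normal((n, p))
y = X @ beta0 + rng.standard_normal(n)

subsets = [rng.choice(n, size=k, replace=False) for _ in range(M)]

def full_estimate(lam):
    """Ensemble risk estimate R~^full_{M,lam} and the true risk (Sigma = I)."""
    betas, trs = [], []
    for I in subsets:
        A = X[I].T @ X[I] / k
        betas.append(np.linalg.solve(A + lam * np.eye(p), X[I].T @ y[I] / k))
        trs.append(np.trace(np.linalg.solve(A + lam * np.eye(p), A)))
    est = 0.0
    for m in range(M):
        for l in range(M):
            ov = np.intersect1d(subsets[m], subsets[l]).size
            num = (y - X @ betas[m]) @ (y - X @ betas[l]) / n
            den = (1 - trs[m] / n - trs[l] / n
                   + (trs[m] / k) * (trs[l] / k) * ov / n)
            est += num / den / M ** 2
    risk = sigma2 + np.sum((np.mean(betas, axis=0) - beta0) ** 2)
    return est, risk

grid = [0.1, 1.0, 10.0]  # hypothetical penalty grid
results = {lam: full_estimate(lam) for lam in grid}
best = min(grid, key=lambda lam: results[lam][0])
```

Selecting the minimizer over a grid is justified once the estimator is consistent uniformly in $\lambda$, which is the subject of the next result.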
Building on the previous results, we are able to show a stronger notion of consistency of the two proposed estimators over $\lambda\in\Lambda:=[0,\infty]$. When $\lambda=\infty$, the ensemble ridge predictor reduces to the null predictor that always outputs zero. Under the above assumptions, uniform consistency is established through the following theorem. theoremThmUniformConsistencyLambda[\[thm:unifrom-consistency-lambda\]]{#thm:unifrom-consistency-lambda label="thm:unifrom-consistency-lambda"} Under the same conditions as in , when $\psi\neq 1$, it holds that $\sup_{\lambda\in\Lambda}|\widetilde{R}_{M,\lambda}^{\textup{\textrm{ovlp}}} -R_{M,\lambda}| \xrightarrow{\textup{a.s.}}0$ and $\sup_{\lambda\in\Lambda}|\widetilde{R}_{M,\lambda}^{\textup{\textrm{full}}} -R_{M,\lambda}| \xrightarrow{\textup{a.s.}}0$. The restriction on $\psi$ is because both the risk and the risk estimate diverge when $\psi = 1$ for the ridgeless predictor ($\lambda=0$); such a phenomenon has been observed in [@hastie2022surprises]. ## Guarantees for corrected GCV {#guarantees-for-corrected-gcv} The average degrees of freedom [\[eq:def-tdf\]](#eq:def-tdf){reference-type="eqref" reference="eq:def-tdf"} reduces to ${\widetilde{\mathsf{df}}}{}_M = M^{-1}\sum_{m=1}^M \mathop{\mathrm{tr}}(\bm{S}_m)$. 
The corrected GCV estimators from [\[eq:cgcv-proposal\]](#eq:cgcv-proposal){reference-type="eqref" reference="eq:cgcv-proposal"} and [\[eq:cgcv-proposal-sub\]](#eq:cgcv-proposal-sub){reference-type="eqref" reference="eq:cgcv-proposal-sub"} for ridge regression are given for $\char"0023\in\{\textup{\textrm{ovlp}},\textup{\textrm{full}}\}$ by: $$\widehat{R}_{M,\lambda}^{\textup{\textrm{cgcv}},\char"0023}:= \widetilde{R}^{\textup{\text{gcv}}}_{M,\lambda} - \frac{1}{M} \left\{ \frac{(M^{-1}\sum_{m=1}^M \mathop{\mathrm{tr}}(\bm{S}_m)/n)^2}{(1-M^{-1}\sum_{m=1}^M \mathop{\mathrm{tr}}(\bm{S}_m)/n)^2} \bigg(\frac{n}{k} - 1\bigg) \frac{1}{M}\sum_{m=1}^M \widehat{R}_{m,m}^\char"0023\right\},\label{eq:asym-corr-term}$$ where $\widetilde{R}^{\textup{\text{gcv}}}_{M,\lambda}= (1-M^{-1}\sum_{m=1}^M \mathop{\mathrm{tr}}(\bm{S}_m)/n)^{-2} \|\bm{y}-\bm{X}{\widetilde{\bm{\beta}}}_{M}\|_2^2/n$ is the GCV estimator and the quantity $\widehat{R}_{m,m, \lambda}^\char"0023$ appearing in the correction term is given, for $\char"0023\in\{\textup{\textrm{ovlp}},\textup{\textrm{full}}\}$, by: $$\begin{aligned} \widehat{R}_{m, m, \lambda}^\textup{\textrm{ovlp}} &= \frac{\|\bm{y}_{I_m} - \bm{X}_{I_m} \widehat{\bm{\beta}}_{m,\lambda}\|_2^2/k}{(1-M^{-1}\sum_{m=1}^M \mathop{\mathrm{tr}}(\bm{S}_m)/k)^2},\\ \widehat{R}_{m, m, \lambda}^\textup{\textrm{full}} &= \frac{\|\bm{y}-\bm{X}\widehat{\bm{\beta}}_{m,\lambda}\|_2^2/n}{(1-M^{-1}\sum_{m=1}^M \mathop{\mathrm{tr}}(\bm{S}_m)/n)^2 + (M^{-1}\sum_{m=1}^M \mathop{\mathrm{tr}}(\bm{S}_m)/n)^2 \cdot ({n}/{k} - 1)}.\end{aligned}$$ theoremPropCorrectGCV[\[prop:correct-gcv\]]{#prop:correct-gcv label="prop:correct-gcv"} Under the same conditions as in , it holds that $| \widehat{R}_{M,\lambda}^{\textup{\textrm{cgcv}},\textup{\textrm{ovlp}}} - R_{M,\lambda} | \xrightarrow{\textup{a.s.}}0$ and $| \widehat{R}_{M,\lambda}^{\textup{\textrm{cgcv}},\textup{\textrm{full}}} - R_{M,\lambda} | \xrightarrow{\textup{a.s.}}0$. 
Moreover, when $\psi\neq 1$, it holds that $\sup_{\lambda\in\Lambda}|\widehat{R}_{M,\lambda}^{\textup{\textrm{cgcv}},\textup{\textrm{ovlp}}} - R_{M,\lambda}| \xrightarrow{\textup{a.s.}}0$ and $\sup_{\lambda\in\Lambda}|\widehat{R}_{M,\lambda}^{\textup{\textrm{cgcv}},\textup{\textrm{full}}} - R_{M,\lambda}| \xrightarrow{\textup{a.s.}}0$. specializes the results of for general convex penalties to the ridge predictor. In particular, the correction term decreases as $1/M$. On the other hand, compared to , it also extends the corrected GCV to ridgeless predictors. Some remarks are as follows. **Remark 9** (Consistency of GCV under general models). The GCV estimator $\widetilde{R}^{\textup{\text{gcv}}}_{M,\lambda}$ is known to be consistent for the prediction risk $R_M$ for ridge predictors [@patil2021uniform] (when $M=1$ or $k=n$) and in the infinite ensemble [@du2023subsample]. Therein, the infinite ensemble is defined by letting the ensemble size $M$ tend to infinity for any given sample size $n$ and feature dimension $p$. Since the correction term [\[eq:asym-corr-term\]](#eq:asym-corr-term){reference-type="eqref" reference="eq:asym-corr-term"} vanishes as $M$ tends to infinity, the result above also indirectly shows that the corrected GCV estimator is consistent in the infinite ensemble under more general data models. On the other hand, the correction term in [\[eq:asym-corr-term\]](#eq:asym-corr-term){reference-type="eqref" reference="eq:asym-corr-term"} implies that the GCV estimator $\widetilde{R}^{\textup{\text{gcv}}}_{M,\lambda}$ is generally inconsistent for finite ensemble sizes when $n>k$ and ${\widetilde{\mathsf{df}}}{}_M > 0$. 
In the case of ridge regression, the latter condition is generally satisfied for $\lambda \in [0, \infty)$ because $${\widetilde{\mathsf{df}}}{}_M=M^{-1}\sum_{m=1}^M\mathop{\mathrm{tr}}(\bm{S}_m) = M^{-1} \sum_{m=1}^{M} \mathop{\mathrm{tr}}[\bm{X}_{I_m}^{\top} \bm{X}_{I_m}/k (\bm{X}_{I_m}^{\top} \bm{X}_{I_m}/k+\lambda \bm{I}_p )^{-1}] > 0,$$ unless all the singular values of $\bm{X}_{I_m}$ are zero for all $m\in[M]$. Compared to @du2023subsample [Proposition 3], which shows the inconsistency of $\widetilde{R}^{\textup{\text{gcv}}}_{M,\lambda}$ for $M=2$ and $\lambda=0$ under a well-specified linear model, our result is more general, because it allows any ridge parameter $\lambda \ge 0$, any ensemble size $M \ge 2$, and an arbitrary response model. **Remark 10** (Model selection). The ensemble ridge predictors involve three key parameters: the ridge penalty $\lambda$, the subsample size $k$, and the ensemble size $M$. The problems of model selection of ridge ensembles over the subsample size $k$ and the ensemble size $M$ have already been discussed by @patil2022bagging [@du2023extrapolated]. Tuning over $M$ is relatively easy because the risk is decreasing in $M$ [@patil2022bagging]. As a consequence of , the tuned risk using the corrected GCV estimator converges to the oracle optimal risk over a grid of ridge penalties used to fit the ensemble. In other words, the optimality of the data-dependent model selection can be guaranteed. **Remark 11** (Non-asymptotic versus asymptotic analyses). When restricted to ridge estimators, the asymptotic analysis presented in the current section reveals a few benefits compared to the finite-sample analysis in . Firstly, the asymptotic analysis in the current section relaxes the assumptions on the data-generating process. More specifically, no explicit response-feature model is needed beyond the bounded moment assumptions and feature structure assumed in . 
Secondly, the consistency provided in the current section does not require strong convexity as in . It also applies to ridgeless ensembles when $\lambda = 0$, in which case the base predictor reduces to the minimum $\ell_2$-norm least squares. Last but not least, the asymptotic analysis enables uniformity over $\lambda \in [0,\infty]$. ## Numerical illustrations: Non-linear models with non-Gaussian designs {#subsec:asym-numerical} To numerically evaluate our proposed corrected GCV estimator, we generate data from a nonlinear model $y=\bm{x}^{\top}\bm{\beta}_0 + (\|\bm{x}\|_2^2/p - 1 ) + \epsilon$, where $\bm{x}\sim\mathcal{N}_p(\mathbf{0},\bm{\Sigma}_{\textsc{ar1},\rho=0.25})$, $\bm{\beta}_0$ is the average of the top-5 eigenvectors of the covariance matrix of the AR(1) process $\bm{\Sigma}_{\textsc{ar1},\rho=0.25}=(\rho^{|i-j|})_{i,j\in[p]}$, and $\epsilon\sim\mathcal{N}(0,1)$. This setup has a linearized signal-to-noise ratio of 1.67. We set the sample size $n=6000$ and the feature dimension $p=1200$. Our first experiment compares the naive GCV estimator $\widetilde{R}^{\textup{\text{gcv}}}_M$ and the corrected GCV estimator $\widetilde{R}_M^{\textup{\textrm{cgcv}},\textup{\textrm{full}}}$ as a function of the subsample size $k$, or equivalently, the subsample aspect ratio $p/k$. In , we visualize the results for both the ridge and lasso predictors with different ensemble sizes $M$ and subsample aspect ratios $p/k$. Both GCV and CGCV are consistent when $M=1$ and $\psi=\phi$. For ridge predictors, we see that the naive GCV estimator is inconsistent for different finite ensemble sizes and subsample sizes. For the lasso predictors, the naive GCV estimates are not even close to the prediction risks when the subsample size is large, even for ensemble size $M=1$. Nevertheless, the corrected GCV estimates are in line with the true prediction risks for different ensemble sizes $M$. 
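The data-generating process of this subsection can be sketched as follows (sizes reduced from $(n,p)=(6000,1200)$ for speed; the seed is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, rho = 1500, 300, 0.25  # reduced from (6000, 1200) for speed

# AR(1) covariance Sigma_{ij} = rho^{|i-j|} and its top-5 eigenvectors.
idx = np.arange(p)
Sigma = rho ** np.abs(idx[:, None] - idx[None, :])
eigvals, eigvecs = np.linalg.eigh(Sigma)     # eigenvalues in ascending order
beta0 = eigvecs[:, -5:].mean(axis=1)         # average of top-5 eigenvectors

# Gaussian features with covariance Sigma; nonlinear response with N(0,1) noise.
X = rng.standard_normal((n, p)) @ np.linalg.cholesky(Sigma).T
y = X @ beta0 + (np.sum(X ** 2, axis=1) / p - 1) + rng.standard_normal(n)
```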
Although our theoretical results do not cover the lasso, the numerical results in the current subsection indicate the generality and robustness of our proposed corrected GCV estimator beyond ridge ensembles. On the other hand, as shown in , the inconsistency of the naive GCV vanishes as the ensemble size $M$ tends to infinity. ![ **CGCV is consistent for finite ensembles across different subsample aspect ratios.** The GCV estimates for the ridge and lasso ensembles with penalty $\lambda=10^{-2}$ and varying subsample size in a problem setup of , over 50 repetitions of the datasets. The left panel shows the results for the ridge predictors. The right panel shows the results for the lasso predictors. ](figures/fig_psi.pdf){#fig:ridge-lasso-psi width="90%"} Finally, we compare the two estimators as a function of the ridge penalty $\lambda$ in . As we can see, the naive GCV estimator does not provide a valid estimate of the true risk. The inconsistency becomes more pronounced when either the subsample size $k$ is small relative to the feature dimension $p$ or the ridge penalty is close to zero. Similar observations hold for the lasso estimators; see . On the other hand, our corrected GCV estimator is uniformly consistent across various values of $\lambda$ for all ensemble sizes $M$, as proved in . ![ **GCV gets closer to the risk as ensemble size increases.** The GCV estimates for the ridge ensemble with varying ensemble size in a problem setup of over 50 repetitions of the datasets. Here, the feature dimension is $p=1200$. The left panel shows the case when the subsample size is $k=2400$ (an underparameterized regime). The right panel shows the case when the subsample size is $k=800$ (an overparameterized regime). 
](figures/fig_ridge_M.pdf){#fig:ridge-M width="90%"} ![ **CGCV is consistent for finite ensembles across different regularization levels.** The GCV estimates for the ridge ensemble with varying ridge penalty $\lambda$ in a problem setup of over 50 repetitions of the datasets. Here, the feature dimension is $p=1200$. The left panel shows the case when the subsample size is $k=2400$ (an underparameterized regime). The right panel shows the case when the subsample size is $k=800$ (an overparameterized regime). ](figures/fig_ridge_lam.pdf){#fig:ridge-lambda width="90%"} # Discussion {#sec:discussion} The primary goal of this paper has been to examine GCV for risk estimation of ensembles of penalized estimators. Our main contribution lies in showing that ordinary GCV fails to be consistent for finite ensembles and in formulating a corrected GCV that is consistent for any finite ensemble. The main high-level idea in constructing the correction is to derive term-by-term corrections to the ordinary GCV estimator, which together yield the corrected GCV. Assuming Gaussian features and a linear model, we derived non-asymptotic bounds showing that our corrected GCV estimator enjoys $\sqrt{n}$-consistency, with explicit dependence on the subsample size $k$, the ensemble size $M$, and the form and level of the regularization penalty $\lambda$. These assumptions, although seemingly stringent, accommodate a wide range of general convex penalized estimators. Moreover, for ensembles of ridge regression, we prove the asymptotic consistency of our estimator under very mild assumptions, and in particular, without assuming Gaussianity or a linear response model. As alluded to earlier, our intermediate risk estimators can incorporate several extensions and modifications, thereby further broadening their applicability. First, we can use different subsample sizes for different ensemble estimators. 
Second, we can use different types of penalties or different regularization parameters for different base estimators. In such cases, the components of the ovlp-estimator [\[eq:risk-decomp-est-1\]](#eq:risk-decomp-est-1){reference-type="eqref" reference="eq:risk-decomp-est-1"} and the full-estimator [\[eq:risk-decomp-est-2\]](#eq:risk-decomp-est-2){reference-type="eqref" reference="eq:risk-decomp-est-2"} remain consistent. Note that the additive bounds, as indicated in , already accommodate this flexibility. It is also possible to establish the multiplicative bounds assuming certain restrictions on the component risks. Finally, provided that the average cardinality concentrates and the cardinality of the intersection is lower bounded with high probability, a slightly more general form of CGCV is expected to work. While the techniques of the present paper can be applied to estimate the risk of hybrid ensembles (with the $I_m$ having different cardinalities and the $\widehat{\bm{\beta}}_m$ having different penalty functions), such flexibility poses challenges when it comes to hyperparameter tuning. Tuning an ensemble model, especially when equipped with a plethora of hyperparameters, is computationally challenging. In such cases, a greedy strategy often employed is to minimize the risk for each subsample estimator separately. However, it is important to note that the optimal regularization parameter $\lambda$ for an individual ensemble component (i.e., for a fixed $m\in [M]$) may not be the best choice for the entire ensemble (i.e., after averaging over $m\in[M]$). This computational complexity of tuning ensembles becomes especially challenging as ensemble components employ different penalties or regularization amounts. 
For this reason, we recommend training each individual estimator ${{\widehat{\bm{\beta}}}}_m$ with the same penalty and tuning parameter (say, $\lambda$) and selecting the tuning parameter $\lambda$ for the ensemble average ${\widetilde{\bm{\beta}}}_{M} = {M}^{-1}\sum_{m=1}^M {{\widehat{\bm{\beta}}}}_m$ in [\[eq:def-M-ensemble\]](#eq:def-M-ensemble){reference-type="eqref" reference="eq:def-M-ensemble"} that yields the smallest corrected GCV criterion. Furthermore, one can also combine CGCV with the extrapolated CV method of @du2023extrapolated for tuning over $k$ and $M$.

There are several natural avenues for future investigation. We discuss three of them below.

#### Beyond squared loss for training.

We have considered penalized estimators trained using squared loss in [\[eq:def-hbeta\]](#eq:def-hbeta){reference-type="eqref" reference="eq:def-hbeta"}. One may consider penalized estimators trained on different loss functions, such as the Huber loss or other robust loss functions, especially if the noise random variables $\epsilon_i$ are heavy-tailed and/or have infinite variance. Estimators of the squared risk in this context are obtained in [@rad2018scalable; @wang2018approximate; @bellec2020out; @bellec2022derivatives] for the non-ensemble setting, with the method in [@rad2018scalable; @wang2018approximate] being applicable beyond the squared risk. It would be of interest to extend (with necessary modifications) our corrected GCV criterion to enable ensembles of base estimators ${{\widehat{\bm{\beta}}}}_m$ trained on robust loss functions. We leave this research direction for future work.

#### Beyond squared loss for testing.

Our primary focus in this paper has been on evaluating the squared prediction risk.
The special form of the squared risk allows the useful decomposition in [\[eq:risk-decomposition\]](#eq:risk-decomposition){reference-type="eqref" reference="eq:risk-decomposition"}, which subsequently leads to the estimators [\[eq:risk-decomp-est-1\]](#eq:risk-decomp-est-1){reference-type="eqref" reference="eq:risk-decomp-est-1"}, [\[eq:risk-decomp-est-2\]](#eq:risk-decomp-est-2){reference-type="eqref" reference="eq:risk-decomp-est-2"} and corrected GCV. In practice, one may be interested in error metrics other than squared error. More broadly, one may be interested in functionals of the out-of-sample error distribution, such as quantiles of the error distribution. For regular penalized estimators, one can construct estimators for such functionals by first constructing estimators for the out-of-sample distribution and then using plug-in functionals of these distributions. See [@patil2022estimating] for an example that uses GCV-based correction of the in-sample residuals. Another avenue for loss estimation beyond the squared risk is [@rad2018scalable; @wang2018approximate], wherein the proposed approximate leave-one-out method handles the non-ensemble setting. Whether one can further additively correct these residuals and construct consistent estimators of the out-of-sample distribution for the ensemble of penalized estimators is an interesting direction of future work.

#### Risk estimation for generic ensembles.

The subsamples $I_1,\ldots,I_M$ are assumed to be sampled without replacement throughout this paper. It is of interest to investigate risk estimators for other ensemble techniques like bagging (where we sample with replacement) or other random weighting schemes, i.e., where the data-fitting loss in [\[eq:def-hbeta\]](#eq:def-hbeta){reference-type="eqref" reference="eq:def-hbeta"} is replaced by the loss $\sum_{i=1}^n w_{m,i}(y_i-\bm{x}_i^\top\bm{\beta})^2$ for weights $(w_{m,i})_{i\in[n]}$ sampled independently of the data $(\bm{X},\bm{y})$.
This includes weights $w_{m,i} \overset{\text{i.i.d.}}{\sim}\mathrm{Poisson}(1)$ or $(w_{m,1},\ldots,w_{m,n}) \sim \mathrm{Multinomial}(n, (n^{-1},\ldots,n^{-1}))$ typically used in the pair bootstrap; see [@karoui2016can] for a study of such weights in unregularized estimation in the proportional asymptotics regime. While the correction in may need to be adapted to accommodate these variations, we believe the approach of obtaining the correction through intermediate estimators (as explained in ) will generalize to these variations.

# Acknowledgements {#acknowledgements .unnumbered}

We warmly thank Arun K. Kuchibhotla for orchestrating this ensemble as well as for many insightful discussions and feedback while developing the ideas in the paper. We also warmly thank Ryan J. Tibshirani, Alessandro Rinaldo, Yuting Wei, Matey Neykov, and Daniel LeJeune for many insightful discussions surrounding generalized cross-validation and ensembles in general (and especially of the ridge and lasso estimators). P.C. Bellec's research was partially supported by NSF Grant DMS-1945428. P. Patil gratefully acknowledges the computing support provided by grant MTH230020, which enabled experiments on the Pittsburgh Supercomputing Center Bridges-2 RM partition.

This document serves as a supplement to the paper "Corrected generalized cross-validation for finite ensembles of penalized estimators." The initial (unnumbered) section outlines the supplement's structure and summarizes the general notation used in the main paper and supplement in .

## Organization {#organization-1 .unnumbered}

The content of this appendix is organized as follows.

- In , we provide proofs of results in . Its contents are:

  - Preparatory derivative formulae
  - Proof of
  - Proof of
  - Proof of
  - Helper lemmas (and their proofs) used in the proofs of
  - Miscellaneous useful facts used in the proofs of

- In , we provide proofs of results in .
  - Preparatory definitions
  - Proofs of (and proofs of )
  - Proof of
  - Helper lemmas (and their proofs) used in the proofs of

- In , we provide additional illustrations for , respectively:

  - Intermediate ovlp- vs. full-estimators in $k$ and $\lambda$ for elastic net and lasso
  - CGCV ovlp- vs. full-estimators in $k$ and $\lambda$ for ridge, elastic net, and lasso
  - CGCV vs. GCV in $k$ for ridge, elastic net, and lasso (Gaussian)
  - CGCV vs. GCV in $M$ for ridge, elastic net, and lasso (Gaussian)
  - CGCV vs. GCV in $\lambda$ for ridge, elastic net, and lasso (Gaussian)
  - CGCV vs. GCV in $\lambda$ for ridge, elastic net, and lasso (Non-Gaussian)

## Notation {#notation .unnumbered}

The notational conventions used in this paper are as follows. Any section-specific notation is introduced in-line as needed.

- Non-bold lower or upper case: scalars (e.g., $k$, $\lambda$, $M$)
- Bold lower case: vectors (e.g., $\bm{x}$, $\bm{y}$, $\bm{\beta}_0$)
- Bold upper case: matrices (e.g., $\bm{X}$, $\bm{\Sigma}$, $\bm{L}$)
- Script font: certain limiting functions (e.g., $\mathscr{R}$)
- $\mathbb{R}$, $\mathbb{R}_{\ge 0}$: set of real and non-negative real numbers
- $[n]$: set $\{1, \dots, n\}$ for a natural number $n$
- $|I|$: cardinality of a set $I$
- $(x)_{+}$: positive part of a real number $x$
- $\nabla f$, $\nabla^2 f$: gradient and Hessian of a function $f$
- $\mathop{\mathrm{\mathds{1}}}_A$, $\mathbb{P}(A)$: indicator random variable associated with an event $A$ and probability of $A$
- $\mathbb{E}[X]$, $\mathop{\mathrm{{\rm Var}}}(X)$: expectation and variance of a random variable $X$
- $\mathcal{V}^\perp$: orthogonal complement of a vector space $\mathcal{V}$
- $\mathop{\mathrm{tr}}[\bm{A}]$, $\bm{A}^{-1}$: trace and inverse (if invertible) of a square matrix $\bm{A}\in \mathbb{R}^{p \times p}$
- $\mathrm{rank}(\bm{B})$, $\bm{B}^\top$, $\bm{B}^{+}$: rank, transpose, and Moore--Penrose inverse of a matrix $\bm{B}\in \mathbb{R}^{n \times p}$
- $\bm{C}^{1/2}$: principal square root of a positive semidefinite matrix $\bm{C}$
- $\bm{I}_p$ or $\bm{I}$: the $p \times p$ identity matrix
- $\langle \bm{u}, \bm{v}\rangle$: inner product of vectors $\bm{u}$ and $\bm{v}$
- $\| \bm{u}\|_{p}$: the $\ell_p$ norm of a vector $\bm{u}$ for $p \ge 1$
- $\|f\|_{L_p}$: the $L_p$ norm of a function $f$ for $p \ge 1$
- $\| \bm{A}\|_{\mathrm{op}}$: operator (or spectral) norm of a real matrix $\bm{A}$
- $\| \bm{A}\|_{\mathrm{tr}}$: trace (or nuclear) norm of a real matrix $\bm{A}$
- $\| \bm{A}\|_F$: Frobenius norm of a real matrix $\bm{A}$
- $X \lesssim Y$: $X \le C Y$ for some absolute constant $C$
- $X \lesssim_\alpha Y$: $X \le C_\alpha Y$ for some constant $C_\alpha$ that may depend on ambient parameter $\alpha$
- $X = \mathcal{O}_\alpha(Y)$: $| X | \le C_\alpha Y$ for some constant $C_\alpha$ that may depend on ambient parameter $\alpha$
- $\bm{u}\le \bm{v}$: lexicographic ordering for vectors $\bm{u}$ and $\bm{v}$
- $\bm{A}\preceq \bm{B}$: Loewner ordering for symmetric matrices $\bm{A}$ and $\bm{B}$
- $\mathcal{O}_{\mathbb{P}}$, $o_{\mathbb{P}}$: probabilistic big-O and little-o notation
- $\bm{C}\simeq\bm{D}$: asymptotic equivalence of matrices $\bm{C}$ and $\bm{D}$ (see for more details)
- $\stackrel{\mathrm{d}}{=}$: equality in distribution
- $\xrightarrow{\textup{d}}$: convergence in distribution
- $\xrightarrow{\textup{p}}$: convergence in probability
- $\xrightarrow{\textup{a.s.}}$: almost sure convergence

[\[tab:notation\]]{#tab:notation label="tab:notation"}

Note: Throughout, $C$, $C'$ denote positive absolute constants. If no subscript is present for norm $\| \bm{u}\|$ of a vector $\bm{u}$, then it is assumed to be the $\ell_2$ norm of $\bm{u}$. If a proof of a statement is separated from the statement, the statement is restated (while keeping the original numbering) along with the proof for convenience.

# Proofs in {#app:finite-sample-analysis}

## Preparatory derivative formulae {#sec:derivatives_formulae}

For a pair of possibly overlapping subsamples $(I_m, I_\ell)$ (written as $(I, \widetilde{I})$ in this proof for ease of writing) of $\{1,2, \ldots, n\}$ with $|I_m|= |I_\ell|=k \le n$, we define $\widehat{\bm{\beta}}$ and ${\widetilde{\bm{\beta}}}$ as the estimators using subsamples $(\bm{x}_i, y_i)_{i\in I}$ and $(\bm{x}_i, y_i)_{i\in \widetilde I}$ respectively, i.e., $$\begin{aligned} \widehat{\bm{\beta}}:= \mathop{\mathrm{argmin}}_{\bm{\beta}\in {{\mathbb{R}}}^p} \frac{1}{k} \sum_{i\in I} (y_i - \bm{x}_i^\top \bm{\beta})^2 + g(\bm{\beta}), \quad \text{and} \quad {\widetilde{\bm{\beta}}}:= \mathop{\mathrm{argmin}}_{\bm{\beta}\in {{\mathbb{R}}}^p} \frac{1}{k} \sum_{i\in \widetilde I} (y_i - \bm{x}_i^\top \bm{\beta})^2 + \widetilde{g}(\bm{\beta}),\end{aligned}$$ and we denote the corresponding residuals and degrees of freedom by $$\bm{r}:= \bm{y}- \bm{X}\widehat{\bm{\beta}}, \quad \widetilde\bm{r}:= \bm{y}-\bm{X}{\widetilde{\bm{\beta}}}, \quad {\widehat{\mathsf{df}}}{}:= \mathop{\mathrm{tr}}\Big[\frac{\partial}{\partial \bm{y}} \bm{X}\widehat{\bm{\beta}}\Big],\quad {\widetilde{\mathsf{df}}}{}:=
\mathop{\mathrm{tr}}\Big[\frac{\partial}{\partial\bm{y}}\bm{X}{\widetilde{\bm{\beta}}}\Big].$$ For clarity, we use the notation below throughout : $$\bm{Z}:= \biggl[\bm{X}\bm{\Sigma}^{-1/2} \mathrel{\Big|} \sigma^{-1} \bm{\epsilon}\biggr] \in {{\mathbb{R}}}^{n\times(p+1)}, ~ \bm{h}:= \begin{pmatrix} \bm{\Sigma}^{1/2}(\widehat{\bm{\beta}}- \bm{\beta}_0) \\ -\sigma \end{pmatrix} \in {{\mathbb{R}}}^{p+1} , ~ \widetilde\bm{h}:= \begin{pmatrix} \bm{\Sigma}^{1/2}({\widetilde{\bm{\beta}}}- \bm{\beta}_0) \\ -\sigma \end{pmatrix}\in {{\mathbb{R}}}^{p+1}.$$ Under , the matrix $\bm{Z}$ has i.i.d. $\mathcal{N}(0,1)$ entries. The individual risk component $R_{m,\ell}$ in [\[eq:R_M\]](#eq:R_M){reference-type="eqref" reference="eq:R_M"} and the residuals $(\bm{r}, \widetilde{\bm{r}})$ can be written as functions of $(\bm{Z}, \bm{h}, \widetilde\bm{h})$ as follows: $$(\widehat{\bm{\beta}}-\bm{\beta}_0)^\top\bm{\Sigma}({\widetilde{\bm{\beta}}}-\bm{\beta}_0) + \sigma^2 = \bm{h}^{\top}\widetilde\bm{h}, \quad \bm{r}= -\bm{Z}\bm{h}, \quad \widetilde\bm{r}= -\bm{Z}\widetilde\bm{h}.$$ Our first lemma below provides the derivative formula of $\bm{h}$ with respect to $\bm{Z}$, which is an important component of our proof. Similar derivative formulae have been studied in M-estimation [@bellec2022derivatives], the multi-task linear model [@bellec2021chi; @tan2022noise], and the multinomial logistic model [@tan2023multinomial].

lemmaLemDerivative[\[lem:derivative\]]{#lem:derivative label="lem:derivative"} Suppose the penalty $g$ satisfies with a constant $\mu>0$.
Then there exists a matrix $\bm{B}\in {{\mathbb{R}}}^{(p+1)\times (p+1)}$ depending on $(\bm{z}_i)_{i\in I}$ such that $$\label{eq:property_B} \|\bm{B}\|_{\mathop{\mathrm{op}}} \le (k\mu)^{-1}, \quad \mathrm{rank}(\bm{B}) \le p, \quad \mathop{\mathrm{tr}}[\bm{B}] \ge 0$$ and the derivative of $\bm{h}$ with respect to $\bm{Z}=(z_{ij})_{i\in [n], j\in[p+1]}$ is given by $$\label{eq:derivartive_formula} \text{for all } i\in [n] \text{ and } j\in [p+1], \quad \frac{\partial \bm{h}}{\partial z_{ij}} = \bm{B}\bm{e}_j \bm{e}_i^\top \bm{L}\bm{r}-\bm{B}\bm{Z}^{\top} \bm{L}\bm{e}_i \bm{e}_j^\top \bm{h}\quad \text{with} \quad \bm{L}= \bm{L}_I = \sum_{i\in I} \bm{e}_i\bm{e}_i^\top.$$ Furthermore, $\bm{V}:= \bm{I}_n - \bm{Z}\bm{B}\bm{Z}^\top \bm{L}$ satisfies the following: $$\begin{aligned} \label{eq:property_tr} {\widehat{\mathsf{df}}}{}= k - \mathop{\mathrm{tr}}[\bm{L}\bm{V}], \quad 0 < k \bigg(1+ \frac{\|\bm{L}\bm{G}\|_{\mathop{\mathrm{op}}}^2}{k\mu}\bigg)^{-1} \le \mathop{\mathrm{tr}}[\bm{L}\bm{V}] \le k \quad \text{and} \quad \|\bm{L}\bm{V}\|_{\mathop{\mathrm{op}}} \le 1,\end{aligned}$$ where $\bm{G}=\bm{X}\bm{\Sigma}^{-1/2}\in{{\mathbb{R}}}^{n\times p}$ is a standard Gaussian matrix. is proved in . Note that holds for the derivatives of $\widetilde\bm{h}$, using $\widetilde\bm{B}$, $\widetilde\bm{L}= \bm{L}_{\widetilde{I}}$, and $\widetilde\bm{V}$ instead.

## Proof of {#proof:th:sub_full}

### Part 1: Proof of

\[[Proof for the $\textup{\textrm{ovlp}}$-estimator]{.ul}\] Using the notation in with $I$ and $\widetilde I$ meaning $I_m$ and $I_\ell$, the inequality we want to show can be written as $$\label{eq:moment_ineq_sub} \mathbb{E}\Bigl[\Bigl| \frac{\bm{r}^\top \bm{L}\widetilde\bm{L}\widetilde\bm{r}- |I\cap \widetilde I|(1-{\widehat{\mathsf{df}}}{}/k) (1-{\widetilde{\mathsf{df}}}{}/k) \bm{h}^\top\widetilde\bm{h}}{\|\bm{h}\| \|\widetilde\bm{h}\|} \Bigr|\Bigr] \lesssim \sqrt{n}\tau^{-2} c^{-3} \gamma^{7/2},$$ where $c=k/n$, $\tau=\min(1, \mu)$, and $\gamma=\max(1, p/n)$.
Using $\mathop{\mathrm{tr}}[\bm{L}\bm{V}] = k - {\widehat{\mathsf{df}}}{}$ and $\mathop{\mathrm{tr}}[\widetilde\bm{L}\widetilde\bm{V}] = k - {\widetilde{\mathsf{df}}}{}$ by [\[eq:property_tr\]](#eq:property_tr){reference-type="eqref" reference="eq:property_tr"}, we can rewrite the error inside the expectation on the left-hand side as a sum of three terms $$\frac{\bm{r}^\top \bm{L}\widetilde\bm{L}\widetilde\bm{r}- |I\cap \widetilde I|(1-{\widehat{\mathsf{df}}}{}/k) (1-{\widetilde{\mathsf{df}}}{}/k) \bm{h}^\top\widetilde\bm{h}}{\|\bm{h}\| \|\widetilde\bm{h}\|} = {{\mathsf{Rem}}}_1+{{\mathsf{Rem}}}_2+{{\mathsf{Rem}}}_3,$$ where ${{\mathsf{Rem}}}_1, {{\mathsf{Rem}}}_2, {{\mathsf{Rem}}}_3$ are defined as $$\begin{aligned} {{\mathsf{Rem}}}_1 &= \frac{\bm{h}^\top \widetilde\bm{h}}{\|\bm{h}\| \|\widetilde\bm{h}\|} \frac{\mathop{\mathrm{tr}}[\widetilde\bm{L}\widetilde\bm{V}]}{k} \Big(\mathop{\mathrm{tr}}[\widetilde\bm{L}\bm{L}\bm{V}] - \frac{|I\cap\widetilde I|\mathop{\mathrm{tr}}[\bm{L}\bm{V}]}{k} \Big),\\ {{\mathsf{Rem}}}_2 &= \frac{\mathop{\mathrm{tr}}[\widetilde\bm{L}\widetilde\bm{V}]}{k} \frac{\bm{r}^\top \widetilde\bm{L}\bm{L}\bm{r}(1 + \mathop{\mathrm{tr}}[\widetilde\bm{B}]) - \mathop{\mathrm{tr}}[\widetilde\bm{L}\bm{L}\bm{V}]\bm{h}^\top\widetilde\bm{h}}{\|\bm{h}\|\|\widetilde\bm{h}\|},\\ {{\mathsf{Rem}}}_3 &= \frac{\bm{r}^\top \widetilde\bm{L}\bm{L}\widetilde\bm{r}}{\|\bm{h}\|\|\widetilde\bm{h}\| } \frac{k - \mathop{\mathrm{tr}}[\widetilde\bm{L}\widetilde\bm{V}] (1+\mathop{\mathrm{tr}}[\widetilde{\bm{B}}])}{k}.\end{aligned}$$ Next, we bound the moments of $|{{\mathsf{Rem}}}_1|, |{{\mathsf{Rem}}}_2|, |{{\mathsf{Rem}}}_3|$ one by one.
**Control of ${{\mathsf{Rem}}}_1$.** Since $\mathop{\mathrm{tr}}[\widetilde\bm{L}\widetilde\bm{V}] \in (0,k]$ by [\[eq:property_tr\]](#eq:property_tr){reference-type="eqref" reference="eq:property_tr"} and $|\widetilde\bm{h}^\top \bm{h}| \le \|\bm{h}\|\|\widetilde\bm{h}\|$ by the Cauchy--Schwarz inequality, we have $$|{{\mathsf{Rem}}}_1| \le |\mathop{\mathrm{tr}}[\widetilde\bm{L}\bm{L}\bm{V}] - k^{-1} |I\cap\widetilde I|\mathop{\mathrm{tr}}[\bm{L}\bm{V}]| = \text{RHS}.$$ Below we bound the second moment of $\text{RHS}$. Here, the key fact is that conditionally on $(|I\cap \widetilde I|, I)$, the intersection $I\cap \widetilde I$ is uniformly distributed over all subsets of $I$ of size $|I\cap \widetilde I|$. Then, the variance formula for sampling without replacement (see with $x_i = V_{ii}$, $M=|I|=k$, and $m= |I\cap\widetilde I|$) implies that the conditional expectation of $(\text{RHS})^2$ given $(|I\cap \widetilde I|, I, \bm{V})$ is bounded from above as $$\begin{aligned} \mathbb{E}\Bigl[\Bigl(\mathop{\mathrm{tr}}[\widetilde\bm{L}\bm{L}\bm{V}] - \frac{|I\cap\widetilde I|}{k} \mathop{\mathrm{tr}}[\bm{L}\bm{V}]\Bigr)^2\Bigm| |I\cap \widetilde I|, I, \bm{V}\Bigr] &= |I\cap \widetilde I|^2 \mathbb{E}\Bigl[\bigg(\frac{\sum_{i\in I\cap \widetilde I} V_{ii}}{|I\cap\widetilde I|} - \frac{\sum_{i\in I} V_{ii}}{|I|}\bigg)^2\Bigm| |I\cap \widetilde I|, I, \bm{V}\Bigr]\\ &= |I\cap \widetilde I|^2 \mathop{\mathrm{{\rm Var}}}\Bigl[\frac{\sum_{i\in I\cap \widetilde I} V_{ii}}{|I\cap \widetilde I|} \Bigm| |I\cap \widetilde I|, I, \bm{V}\Bigr] \\ &\le |I\cap \widetilde I|^2 \frac{\sum_{i\in I} V_{ii}^2}{|I||I\cap \widetilde I|}\\ &= |I\cap \widetilde I|\frac{\sum_{i\in I} V_{ii}^2}{|I|}. \end{aligned}$$ Note that the above inequality holds even when $|I\cap \widetilde I|=0$, since both sides are then $0$.
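The variance bound for sampling without replacement used in the display above can be confirmed by exact enumeration over all size-$m$ subsets of a small population (a self-contained numerical check with arbitrary values standing in for the diagonal entries $V_{ii}$; it is not part of the proof):

```python
from itertools import combinations

import numpy as np

x = np.array([0.3, -1.2, 0.7, 2.0, -0.5, 0.9])   # stand-ins for the diagonal values V_ii
M = len(x)
for m in range(1, M + 1):
    # exact expectation of (sum_{i in J} x_i - (m/M) sum_i x_i)^2 over a
    # uniformly random subset J of size m, by enumerating all such subsets
    dev2 = np.mean([(x[list(J)].sum() - m * x.mean()) ** 2
                    for J in combinations(range(M), m)])
    bound = m * np.sum(x ** 2) / M               # the claimed bound m * sum_i x_i^2 / M
    assert dev2 <= bound + 1e-12
```

The enumeration is exact, so the check holds deterministically rather than in a Monte Carlo sense.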
Note in passing that $\sum_{i\in I} V_{ii}^2/|I| \le \|\bm{L}\bm{V}\|_{F}^2/|I| \le \|\bm{L}\bm{V}\|_{\mathop{\mathrm{op}}}^2 \le 1$ by [\[eq:property_tr\]](#eq:property_tr){reference-type="eqref" reference="eq:property_tr"}. Then, we obtain $$\label{eq:var_trace} \mathbb{E}\Bigl[\Bigl|\mathop{\mathrm{tr}}[\widetilde\bm{L}\bm{L}\bm{V}] - \frac{|I\cap\widetilde I|\mathop{\mathrm{tr}}[\bm{L}\bm{V}]}{k}\Bigr|^2\Bigr] \le \mathbb{E}\Bigl[|I\cap \widetilde I|\frac{\sum_{i\in I} V_{ii}^2}{|I|}\Bigr] \le \mathbb{E}[|I\cap \widetilde I|] \le k.$$ Thus, [\[eq:var_trace\]](#eq:var_trace){reference-type="eqref" reference="eq:var_trace"} and the Cauchy--Schwarz inequality $\mathbb{E}[\cdot]^2 \le \mathbb{E}[(\cdot)^2]$ yield $$\label{eq:rem_1_sub} \mathbb{E}|{{\mathsf{Rem}}}_1| \le \sqrt{\mathbb{E}[|{{\mathsf{Rem}}}_1|^2]} \le \sqrt{\mathbb{E}[(\mathop{\mathrm{tr}}[\widetilde\bm{L}\bm{L}\bm{V}] - k^{-1}{|I\cap\widetilde I|\mathop{\mathrm{tr}}[\bm{L}\bm{V}]})^2]} \le \sqrt{k}.$$ **Control of ${{\mathsf{Rem}}}_2$.** Since $\mathop{\mathrm{tr}}[\widetilde\bm{L}\widetilde\bm{V}]\in (0,k]$ by [\[eq:property_tr\]](#eq:property_tr){reference-type="eqref" reference="eq:property_tr"}, we have $$|{{\mathsf{Rem}}}_2| \le |\bm{r}^\top \widetilde\bm{L}\bm{L}\bm{r}(1 + \mathop{\mathrm{tr}}[\widetilde\bm{B}]) - \mathop{\mathrm{tr}}[\widetilde\bm{L}\bm{L}\bm{V}]\bm{h}^\top\widetilde\bm{h}|/(\|\bm{h}\|\|\widetilde\bm{h}\|).$$ The lemma below bounds the second moment of the right-hand side.
lemmaLemPsiBound[\[lem:psi_bound\]]{#lem:psi_bound label="lem:psi_bound"} For any subset $J \subset [n]$ that is independent of $\bm{Z}$, we have $$\mathbb{E}\biggl[ \Bigl( \frac{-\bm{r}^{\top} \bm{L}_J \widetilde\bm{r} - \widetilde\bm{r}^\top \widetilde\bm{L}\bm{L}_J \bm{r} \mathop{\mathrm{tr}}[\widetilde\bm{B}] + \mathop{\mathrm{tr}}[\bm{L}_J \bm{V}] \bm{h}^\top \widetilde\bm{h} }{\|\bm{h}\| \|\widetilde\bm{h}\|}\Bigr)^2 \ \mathrel{\Big|} J \ \biggr] \lesssim \tau^{-2} \Big(|J| + \frac{p(|J|+p)^2}{k^2}\Big),$$ where $\tau=\min(1, \mu)$ and $\bm{L}_J = \sum_{i\in J}\bm{e}_i\bm{e}_i^\top$. is proved in . A sketch is as follows: the bound follows from the derivative formula [\[eq:derivartive_formula\]](#eq:derivartive_formula){reference-type="eqref" reference="eq:derivartive_formula"} and the second-order Stein formula, where $-\bm{r}^\top \bm{L}_J\widetilde\bm{r}= \bm{r}^\top \bm{L}_J \bm{Z}\widetilde\bm{h}$ is regarded as a component-wise inner product of the Gaussian matrix $\bm{L}_J \bm{Z}\in {{\mathbb{R}}}^{n \times (p+1)}$ and $\bm{L}_J \bm{r}\widetilde\bm{h}^\top \in {{\mathbb{R}}}^{n \times (p+1)}$.

Applying with $J=I\cap\widetilde I$ and using $\mathbb{E}|{{\mathsf{Rem}}}_2| \le \sqrt{\mathbb{E}[|{{\mathsf{Rem}}}_2|^2]}$, we obtain $$\begin{aligned} \mathbb{E}|{{\mathsf{Rem}}}_2| &\lesssim \tau^{-1} \sqrt{\mathbb{E}[|I\cap\widetilde I| + p(|I\cap\widetilde I| + p)^2 k^{-2}]}\notag\\ &\le \tau^{-1} \sqrt{k + p(k+p)^2 k^{-2}} \lesssim \sqrt{k} \tau^{-1} (1 + p/k)^{3/2} \label{eq:rem_2_sub}\end{aligned}$$ thanks to $\tau = \min(1, \mu)$.
**Control of ${{\mathsf{Rem}}}_3$.** Note $$|{{\mathsf{Rem}}}_3| = \frac{|\bm{r}^\top \widetilde\bm{L}\bm{L}\widetilde\bm{r}|}{\|\bm{h}\|\|\widetilde\bm{h}\| } \cdot \frac{|k - \mathop{\mathrm{tr}}[\widetilde\bm{L}\widetilde\bm{V}] (1+\mathop{\mathrm{tr}}[\widetilde{\bm{B}}])|}{k}\le \|\widetilde\bm{L}\bm{Z}\|_{\mathop{\mathrm{op}}}^2 \frac{|k - \mathop{\mathrm{tr}}[\widetilde\bm{L}\widetilde\bm{V}] (1+\mathop{\mathrm{tr}}[\widetilde{\bm{B}}])|}{k},$$ thanks to $|\bm{r}^\top \widetilde\bm{L}\bm{L}\widetilde\bm{r}| = |\bm{h}^\top \bm{Z}^\top \widetilde\bm{L}\bm{L}\bm{Z}\widetilde\bm{h}| \le \|\bm{h}\|\|\widetilde\bm{h}\|\|\widetilde\bm{L}\bm{L}\bm{Z}\|_{\mathop{\mathrm{op}}}^2 \le \|\bm{h}\|\|\widetilde\bm{h}\|\|\widetilde\bm{L}\bm{Z}\|_{\mathop{\mathrm{op}}}^2$. Let us define the random variable $Y$ and the event $\Omega$ as $$Y = k^{-1} (k - \mathop{\mathrm{tr}}[\widetilde\bm{L}\widetilde\bm{V}] (1+\mathop{\mathrm{tr}}[\widetilde{\bm{B}}])), \quad \Omega = \{ \|\widetilde\bm{L}\bm{Z}\|_{\mathop{\mathrm{op}}} \le 2(\sqrt{k} + \sqrt{p+1})\}.$$ Then, $\mathbb{E}[|{{\mathsf{Rem}}}_3|]$ is bounded from above as $$\begin{aligned} \mathbb{E}|{{\mathsf{Rem}}}_3| &\le \mathbb{E}[\|\widetilde{\bm{L}}\bm{Z}\|_{\mathop{\mathrm{op}}}^2 |Y|] \le \|Y\|_{\infty} \mathbb{E}[\mathop{\mathrm{\mathds{1}}}_{\Omega^c} \|\widetilde{\bm{L}}\bm{Z}\|_{\mathop{\mathrm{op}}}^2] + [2(\sqrt{k} + \sqrt{p+1})]^2 \mathbb{E}|Y|\\ &\lesssim \|Y\|_{\infty} \cdot \mathbb{P}(\Omega^c) \cdot \mathbb{E}[\|\widetilde{\bm{L}}\bm{Z}\|_{\mathop{\mathrm{op}}}^4]^{1/2} + (k+p) \mathbb{E}|Y| =: {{\mathsf{Rem}}}_3^1 + {{\mathsf{Rem}}}_3^2\end{aligned}$$ thanks to Hölder's inequality. Below we bound ${{\mathsf{Rem}}}_3^1$ and ${{\mathsf{Rem}}}_3^2$. 1.
Bound of ${{\mathsf{Rem}}}_3^1$: Since $\|\widetilde{\bm{L}}\bm{Z}\|_{\mathop{\mathrm{op}}} \stackrel{\mathrm{d}}{=} \|\bm{G}\|_{\mathop{\mathrm{op}}}$ with $\bm{G}\in{{\mathbb{R}}}^{k\times(p+1)}$ a standard Gaussian matrix, the concentration of the operator norm (see ) implies $$\mathbb{P}(\Omega^c) \le e^{-(\sqrt{k} + \sqrt{p+1})^2/2} \le e^{-(k+p)/2} \text{ and } \mathbb{E}\|\widetilde\bm{L}\bm{Z}\|_{\mathop{\mathrm{op}}}^4 \lesssim (k + p)^2.$$ On the other hand, $\|\widetilde\bm{B}\|_{\mathop{\mathrm{op}}} \le (k\mu)^{-1}$ and $\mathrm{rank}(\widetilde\bm{B}) \le p$ by [\[eq:property_B\]](#eq:property_B){reference-type="eqref" reference="eq:property_B"}, together with $\mathop{\mathrm{tr}}[\widetilde\bm{L}\widetilde\bm{V}]\in(0,k]$ by [\[eq:property_tr\]](#eq:property_tr){reference-type="eqref" reference="eq:property_tr"}, lead to $$\|Y\|_{\infty} = \|1 - k^{-1}\mathop{\mathrm{tr}}[\widetilde{\bm{L}}\widetilde{\bm{V}}](1+\mathop{\mathrm{tr}}[\widetilde\bm{B}])\|_{\infty} \le 1 + |k^{-1}\mathop{\mathrm{tr}}[\widetilde{\bm{L}}\widetilde{\bm{V}}]| (1+|\mathop{\mathrm{tr}}[\widetilde\bm{B}]|) \le 2 + p(k\mu)^{-1} \lesssim \tau^{-1} (1+p/k)$$ with $\tau=\min(1, \mu)$. Therefore, we have $$\begin{aligned} {{\mathsf{Rem}}}_3^1 = \|Y\|_{\infty} \mathbb{P}(\Omega^c) \mathbb{E}[\|\widetilde{\bm{L}}\bm{Z}\|_{\mathop{\mathrm{op}}}^4]^{1/2} \lesssim\tau^{-1} (1+p/k) e^{-(k+p)/2} (k + p) \lesssim \sqrt{k}\tau^{-1} (1+p/k)^{3/2},\end{aligned}$$ thanks to $e^{x} \ge \sqrt{x/2}$ for all $x>0$. 2. Bound of ${{\mathsf{Rem}}}_3^2$: The following lemma bounds ${{\mathsf{Rem}}}_3^2 = (k+p)\mathbb{E}[|Y|]$. lemmaLemTrBdf[\[lem:trB_df\]]{#lem:trB_df label="lem:trB_df"} Let $\xi_1 = (1+\mathop{\mathrm{tr}}[\bm{B}]) \|\bm{L}\bm{\psi}\|^2/\|\bm{h}\|^2 - \mathop{\mathrm{tr}}[\bm{L}\bm{V}]$ and $\xi_2 = k - (1+\mathop{\mathrm{tr}}[\bm{B}])^2 \|\bm{L}\bm{\psi}\|^2/\|\bm{h}\|^2$.
Then $$\mathbb{E}|k - \mathop{\mathrm{tr}}[\bm{L}\bm{V}] (1+\mathop{\mathrm{tr}}[\bm{B}])| = \mathbb{E}| (1+\mathop{\mathrm{tr}}[\bm{B}]) \xi_1 + \xi_2 | \lesssim \sqrt{k} \tau^{-2}(1+p/k)^{5/2}.$$ is proved in . A sketch is as follows: The quantity $\xi_1$ is bounded using with $J=\widetilde I = I$. For $\xi_2$, we use a chi-square type moment inequality [@bellec2020out Theorem 7.1]. Recall ${{\mathsf{Rem}}}_3^2 = (k+p) \mathbb{E}|Y|$ with $Y:= k^{-1} (k - \mathop{\mathrm{tr}}[\widetilde\bm{L}\widetilde\bm{V}] (1+\mathop{\mathrm{tr}}[\widetilde{\bm{B}}]))$. Then, implies $${{\mathsf{Rem}}}_3^2 = (1+p/k) \mathbb{E}[|k - \mathop{\mathrm{tr}}[\widetilde\bm{L}\widetilde\bm{V}] (1+\mathop{\mathrm{tr}}[\widetilde{\bm{B}}])|] \lesssim \sqrt{k} \tau^{-2} (1+p/k)^{7/2}.$$ Combining the bounds of ${{\mathsf{Rem}}}_3^1$ and ${{\mathsf{Rem}}}_3^2$, we have $$\label{eq:rem_3_sub} \mathbb{E}|{{\mathsf{Rem}}}_3| \lesssim {{\mathsf{Rem}}}_3^1 + {{\mathsf{Rem}}}_3^2 \lesssim \sqrt{k} \tau^{-2} (1 + p/{k})^{7/2} + \sqrt{k}\tau^{-1} (1+p/k)^{3/2} \lesssim \sqrt{k} \tau^{-2} (1 + p/{k})^{7/2},$$ thanks to $\tau = \min(1, \mu) \le 1$. We have now controlled each of $\mathbb{E}|{{\mathsf{Rem}}}_1|,\mathbb{E}|{{\mathsf{Rem}}}_2|,\mathbb{E}|{{\mathsf{Rem}}}_3|$. [\[eq:rem_1\_sub\]](#eq:rem_1_sub){reference-type="eqref" reference="eq:rem_1_sub"}, [\[eq:rem_2\_sub\]](#eq:rem_2_sub){reference-type="eqref" reference="eq:rem_2_sub"} and [\[eq:rem_3\_sub\]](#eq:rem_3_sub){reference-type="eqref" reference="eq:rem_3_sub"} result in $$\begin{aligned} \mathbb{E}[|{{\mathsf{Rem}}}_1|] + \mathbb{E}[|{{\mathsf{Rem}}}_2|] + \mathbb{E}[|{{\mathsf{Rem}}}_3|] &\lesssim \sqrt{k}\tau^{-1} (1+p/k)^{3/2} + \sqrt{k} \tau^{-2} (1 + p/k)^{7/2} + \sqrt{k} \\ &\lesssim \sqrt{k} \tau^{-2} (1 + p/k)^{7/2} \lesssim \sqrt{n}\tau^{-2} c^{-3} \gamma^{7/2}\end{aligned}$$ thanks to $\gamma=\max(1, p/n) \ge 1$, $c=k/n \le 1$ and $\tau=\min(1, \mu) \le 1$.
This finishes the proof of [\[eq:sub_full_moment_ineq\]](#eq:sub_full_moment_ineq){reference-type="eqref" reference="eq:sub_full_moment_ineq"} for the $\textup{\textrm{ovlp}}$-estimator.

------------------------------------------------------------------------

\[[Proof for the $\textup{\textrm{full}}$-estimator]{.ul}\] Using the notation in , our goal here is to show $$\mathbb{E}\Bigl[\Bigl| \frac{(n - {\widehat{\mathsf{df}}}{}-{\widetilde{\mathsf{df}}}{}+ \frac{|I\cap \widetilde I|}{k^2} {\widehat{\mathsf{df}}}{}{\widetilde{\mathsf{df}}}{}) {\bm{h}^\top \widetilde\bm{h}} -{\bm{r}^\top\widetilde\bm{r}}}{\|\bm{h}\|\|\widetilde{\bm{h}}\|} \Bigr|\Bigr] \lesssim \sqrt{n} \tau^{-2} c^{-2} \gamma^{5/2}.$$ Here, the identities $\mathop{\mathrm{tr}}[\bm{V}] = n- {\widehat{\mathsf{df}}}{}$, $\mathop{\mathrm{tr}}[\bm{L}\bm{V}] = k-{\widehat{\mathsf{df}}}{}$ and $\mathop{\mathrm{tr}}[\widetilde\bm{L}\bm{V}] - \mathop{\mathrm{tr}}[\widetilde\bm{L}\bm{L}\bm{V}] = k - |I\cap \widetilde I|$ imply $$\begin{aligned} &\Big(n - {\widehat{\mathsf{df}}}{}-{\widetilde{\mathsf{df}}}{}+ \frac{|I\cap \widetilde I|}{k^2} {\widehat{\mathsf{df}}}{}{\widetilde{\mathsf{df}}}{}\Big) \bm{h}^\top \widetilde\bm{h}= \Bigl(\mathop{\mathrm{tr}}[\bm{V}] - \frac{{\widetilde{\mathsf{df}}}{}}{k}\big\{\mathop{\mathrm{tr}}[\widetilde\bm{L}\bm{V}] - \mathop{\mathrm{tr}}[\widetilde\bm{L}\bm{L}\bm{V}] + \frac{|I\cap\widetilde I|}{k}\mathop{\mathrm{tr}}[\bm{L}\bm{V}]\big\}\Bigr) \bm{h}^\top \widetilde\bm{h},\end{aligned}$$ so that, by simple algebra, the error can be decomposed as $$\begin{aligned} \Big(n - {\widehat{\mathsf{df}}}{}-{\widetilde{\mathsf{df}}}{}+ \frac{|I\cap \widetilde I|}{k^2} {\widehat{\mathsf{df}}}{}{\widetilde{\mathsf{df}}}{}\Big) {\bm{h}^\top \widetilde\bm{h}} -{\bm{r}^\top\widetilde\bm{r}} = \|\bm{h}\| \|\widetilde\bm{h}\| ({{\mathsf{Rem}}}_1 + {{\mathsf{Rem}}}_2 + {{\mathsf{Rem}}}_3 + {{\mathsf{Rem}}}_4),\end{aligned}$$ where the following four remainders will be bounded separately: $$\begin{aligned}
{{\mathsf{Rem}}}_1 &= \frac{- \bm{r}^\top \widetilde\bm{r}- \widetilde\bm{r}^\top \widetilde\bm{L}\bm{r}\mathop{\mathrm{tr}}[\widetilde\bm{B}] + \mathop{\mathrm{tr}}[\bm{V}] \bm{h}^\top\widetilde\bm{h}}{\|\bm{h}\|\|\widetilde\bm{h}\|}, \\ {{\mathsf{Rem}}}_2 &= \frac{\mathop{\mathrm{tr}}[\widetilde\bm{B}]}{1+\mathop{\mathrm{tr}}[\widetilde\bm{B}]} \frac{ (1+\mathop{\mathrm{tr}}[\widetilde\bm{B}]) \widetilde\bm{r}^\top \widetilde\bm{L}\bm{r}- \mathop{\mathrm{tr}}[\widetilde\bm{L}\bm{V}] \bm{h}^\top \widetilde\bm{h}}{\|\bm{h}\|\|\widetilde\bm{h}\|},\\ {{\mathsf{Rem}}}_3 &= \frac{(1+\mathop{\mathrm{tr}}[\widetilde\bm{B}]) \mathop{\mathrm{tr}}[\widetilde\bm{L}\widetilde\bm{V}] - k}{(1+\mathop{\mathrm{tr}}[\widetilde\bm{B}])} \frac{\mathop{\mathrm{tr}}[\widetilde\bm{L}\bm{V}]}{k} \frac{\bm{h}^\top\widetilde\bm{h}}{\|\bm{h}\| \| \widetilde\bm{h}\|}, \\ {{\mathsf{Rem}}}_4 &= \frac{{\widetilde{\mathsf{df}}}{}}{k} \Big(\mathop{\mathrm{tr}}[\widetilde\bm{L}\bm{L}\bm{V}] - \frac{|I\cap \widetilde I|}{k} \mathop{\mathrm{tr}}[\bm{L}\bm{V}]\Big) \frac{\bm{h}^\top \widetilde\bm{h}}{\|\bm{h}\|\|\widetilde\bm{h}\|}.\end{aligned}$$ **Control of ${{\mathsf{Rem}}}_1$ and ${{\mathsf{Rem}}}_2$.** with $J=[n]$ implies $$\begin{aligned} \mathbb{E}[|{{\mathsf{Rem}}}_1|] \le \sqrt{\mathbb{E}[|{{\mathsf{Rem}}}_1|^2]} &\lesssim \sqrt{\tau^{-2} (n + p(n+p)^2/k^2)} \\ &\le \tau^{-1} c^{-1} \sqrt{n + p(1+p/n)^2}\\ &\lesssim \sqrt{n} \tau^{-1} c^{-1} (1+p/n)^{3/2},\end{aligned}$$ thanks to $k/n = c\in (0,1]$, while $\mathop{\mathrm{tr}}[\widetilde\bm{B}] \ge 0$ by [\[eq:property_B\]](#eq:property_B){reference-type="eqref" reference="eq:property_B"} and with $J=\widetilde I$ lead to $$\mathbb{E}[|{{\mathsf{Rem}}}_2|] \le \sqrt{\mathbb{E}[|{{\mathsf{Rem}}}_2|^2]} \lesssim \sqrt{\tau^{-2} (k + p(k+p)^2/k^2)} \lesssim \tau^{-1} \sqrt{k} (1 + p/k)^{3/2} \lesssim \sqrt{n} \tau^{-1} c^{-1} (1+p/n)^{3/2}.$$ **Control of ${{\mathsf{Rem}}}_3$.** According to
[\[eq:property_tr\]](#eq:property_tr){reference-type="eqref" reference="eq:property_tr"}, it holds that $\mathop{\mathrm{tr}}[\widetilde\bm{L}\bm{V}] = \mathop{\mathrm{tr}}[\widetilde\bm{L}\bm{L}\bm{V}] + k - |I\cap \widetilde I|$ and $\|\bm{L}\bm{V}\|_{\mathop{\mathrm{op}}} \le 1$. Thus, $$k^{-1} |\mathop{\mathrm{tr}}[\widetilde\bm{L}\bm{V}] | \le k^{-1} |\mathop{\mathrm{tr}}[\widetilde\bm{L}\bm{L}\bm{V}]| + k^{-1} |k- |I\cap \widetilde I|| \le \|\bm{L}\bm{V}\|_{\mathop{\mathrm{op}}} + 1 \le 2.$$ Combining the above display and $\mathop{\mathrm{tr}}[\widetilde\bm{B}] \ge 0$ by [\[eq:property_B\]](#eq:property_B){reference-type="eqref" reference="eq:property_B"}, as well as , we obtain $$\mathbb{E}[|{{\mathsf{Rem}}}_3|] \le 2 \mathbb{E}[|(1+\mathop{\mathrm{tr}}[\widetilde\bm{B}]) \mathop{\mathrm{tr}}[\widetilde\bm{L}\widetilde\bm{V}] - k|] \lesssim \tau^{-2} k^{1/2} (1 + p/k)^{5/2} \lesssim \sqrt{n} \tau^{-2} c^{-2} (1+p/n)^{5/2}.$$ **Control of ${{\mathsf{Rem}}}_4$.** implies $$\mathbb{E}[|{{\mathsf{Rem}}}_4|] \le \mathbb{E}|\mathop{\mathrm{tr}}[\widetilde\bm{L}\bm{L}\bm{V}] - k^{-1}|I\cap \widetilde I| \mathop{\mathrm{tr}}[\bm{L}\bm{V}]| \le \sqrt{k} \le \sqrt{n}.$$ From the above displays, we observe that the dominating upper bound is that of $\mathbb{E}[|{{\mathsf{Rem}}}_3|]$. Thus, we obtain $$\mathbb{E}[|{{\mathsf{Rem}}}_1|] + \mathbb{E}[|{{\mathsf{Rem}}}_2|] + \mathbb{E}[|{{\mathsf{Rem}}}_3|] + \mathbb{E}[|{{\mathsf{Rem}}}_4|] \lesssim \sqrt{n} \tau^{-2} c^{-2} (1+p/n)^{5/2} \lesssim \sqrt{n} \tau^{-2} c^{-2} \gamma^{5/2}$$ thanks to $\gamma=\max(1, p/n)$. This finishes the proof of [\[eq:sub_full_moment_ineq\]](#eq:sub_full_moment_ineq){reference-type="eqref" reference="eq:sub_full_moment_ineq"} for the $\textup{\textrm{full}}$-estimator.
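As an aside, the degrees-of-freedom identity ${\widehat{\mathsf{df}}}{}=\mathop{\mathrm{tr}}[\partial(\bm{X}\widehat{\bm{\beta}})/\partial\bm{y}]$ manipulated throughout this proof can be sanity-checked numerically in the one case where $\widehat{\bm{\beta}}$ has a closed form, namely the ridge penalty $g(\bm{\beta})=\lambda\|\bm{\beta}\|^2$, for which $\widehat{\bm{\beta}}=(\bm{X}_I^\top\bm{X}_I+k\lambda\bm{I})^{-1}\bm{X}_I^\top\bm{y}_I$ and ${\widehat{\mathsf{df}}}{}=\mathop{\mathrm{tr}}[\bm{X}_I(\bm{X}_I^\top\bm{X}_I+k\lambda\bm{I})^{-1}\bm{X}_I^\top]$. The sketch below (all dimensions are arbitrary illustrative choices, and it plays no role in the argument) compares the analytic trace with a finite-difference approximation:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, k, lam = 40, 15, 25, 0.5                 # illustrative sizes and penalty level
X = rng.normal(size=(n, p))
y = rng.normal(size=n)
I = np.sort(rng.choice(n, size=k, replace=False))
S = np.linalg.inv(X[I].T @ X[I] + k * lam * np.eye(p))

def predictions(yv):
    # full-sample predictions X beta_hat for the ridge fit on subsample I:
    # beta_hat = (X_I^T X_I + k*lam*I)^{-1} X_I^T y_I
    return X @ (S @ (X[I].T @ yv[I]))

df_analytic = np.trace(X[I] @ S @ X[I].T)      # trace of d(X beta_hat)/dy
base, eps, df_fd = predictions(y), 1e-6, 0.0
for i in range(n):
    ye = y.copy()
    ye[i] += eps
    df_fd += (predictions(ye)[i] - base[i]) / eps
# beta_hat is linear in y, so the finite difference is exact up to float error
assert abs(df_fd - df_analytic) < 1e-6
```

Since the ridge map $\bm{y}\mapsto\bm{X}\widehat{\bm{\beta}}$ is linear, the finite-difference trace matches the analytic one to floating-point accuracy.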
------------------------------------------------------------------------ ### Part 2: Proof of For $\char"0023=\textup{\textrm{ovlp}}$ and $\textup{\textrm{full}}$, by algebra, the relative error is bounded from above as $$\begin{aligned} \Big|\frac{\widetilde{R}_M^\char"0023}{R_M} - 1\Big| &= \frac{|\sum_{m,\ell} (\widehat{R}_{m,\ell}^\char"0023- \bm{h}_m^\top \bm{h}_\ell)|}{\|\sum_{m'} \bm{h}_{m'}\|^2}\notag\relax &\le \frac{(\sum_{m'} \|\bm{h}_{m'}\|)^2}{\|\sum_{m'} \bm{h}_{m'}\|^2} \cdot \sum_{m,\ell} \frac{\|\bm{h}_m\|\|\bm{h}_\ell\|}{(\sum_{m'} \|\bm{h}_{m'}\|)^2} \cdot \frac{|\widehat{R}_{m, \ell}^\char"0023- \bm{h}_m^\top \bm{h}_\ell|}{\|\bm{h}_m\|\|\bm{h}_\ell\|} \notag \relax &\le \frac{(\sum_{m'} \|\bm{h}_{m'}\|)^2}{\|\sum_{m'} \bm{h}_{m'}\|^2} \cdot \max_{m, \ell} \frac{|\widehat{R}_{m, \ell}^\char"0023- \bm{h}_m^\top \bm{h}_\ell|}{\|\bm{h}_m\|\|\bm{h}_\ell\|} \notag \relax &\le M \cdot \frac{\sum_{m'} \|\bm{h}_{m'}\|^2}{\|\sum_{m'} \bm{h}_{m'}\|^2} \cdot \max_{m, \ell} \frac{|\widehat{R}_{m, \ell}^\char"0023- \bm{h}_m^\top \bm{h}_\ell|}{\|\bm{h}_m\|\|\bm{h}_\ell\|}\notag\relax &= M \cdot \text{Ratio} \cdot \max_{m, \ell} \frac{n} {|d_{m,\ell}^\char"0023|} |E_{m,\ell}^\char"0023| \label{eq:relative_error_ineq}.\end{aligned}$$ Here, we have defined $\text{Ratio}$ and $E_{m,\ell}^\char"0023$ by $$\label{eq:df_ratio_E_ml_est} \text{Ratio} := \frac{\sum_{m'} \|\bm{h}_{m'}\|^2}{\|\sum_{m'} \bm{h}_{m'}\|^2}, \quad E_{m,\ell}^\char"0023:= \frac{d_{m,\ell}^\char"0023(\widehat{R}_{m,\ell}^\char"0023-\bm{h}_m^\top\bm{h}_\ell)}{n\|\bm{h}_m\|\|\bm{h}_\ell\|},$$ where $d_{m,\ell}^\char"0023$ is the denominator of the corresponding estimator for $\char"0023\in\{\textup{\textrm{ovlp}}, \textup{\textrm{full}}\}$ defined in [\[eq:d_sub_and_full\]](#eq:d_sub_and_full){reference-type="eqref" reference="eq:d_sub_and_full"}. 
We restate their expressions for convenience: $$\label{eq:df_denominator_sub_full} d_{m,\ell}^\textup{\textrm{ovlp}}= |I_m\cap I_\ell| (1-{\widehat{\mathsf{df}}}{}_m/k)(1-{\widehat{\mathsf{df}}}{}_\ell/k), \quad d_{m,\ell}^\textup{\textrm{full}}= n-{\widehat{\mathsf{df}}}{}_m - {\widehat{\mathsf{df}}}{}_\ell + k^{-2}{|I_m\cap I_\ell|} {\widehat{\mathsf{df}}}{}_m{\widehat{\mathsf{df}}}{}_\ell$$ Below we bound $|E_{m,\ell}^\char"0023|$, $|d_{m,\ell}^\char"0023|^{-1}$, and Ratio. **Control of $E_{m, \ell}^\char"0023$.** Markov's inequality applied with the moment bound [\[eq:sub_full_moment_ineq\]](#eq:sub_full_moment_ineq){reference-type="eqref" reference="eq:sub_full_moment_ineq"} yields $$\label{eq:bound_E_ml_est} \text{ for all $\epsilon > 0$}, \quad \mathbb{P}(|E_{m,\ell}^\char"0023| > \epsilon) \le \epsilon^{-1} \mathbb{E}[|E_{m,\ell}^\char"0023|] \lesssim \frac{1}{\epsilon \sqrt{n} \tau^2} \times \begin{dcases} c^{-3} \gamma^{7/2} & \char"0023=\textup{\textrm{ovlp}}\relax c^{-2} \gamma^{5/2} & \char"0023=\textup{\textrm{full}} \end{dcases}$$ **Control of Ratio.** Here the key lemma to bound Ratio := ${\sum_{m'} \|\bm{h}_{m'}\|^2}/{\|\sum_{m'} \bm{h}_{m'}\|^2}$ is the concentration of the correlation $\bm{h}_m^\top \bm{h}_\ell/\|\bm{h}_m\|\|\bm{h}_\ell\|$: lemmaLemContractionCor[\[lem:contraction_cor\]]{#lem:contraction_cor label="lem:contraction_cor"} If the same penalty is used for all $m\in[M]$, i.e., $g_m=g$ in [\[eq:def-hbeta\]](#eq:def-hbeta){reference-type="eqref" reference="eq:def-hbeta"}, then $$\eta_{m,\ell} := \mathbb{E}\bigg[\frac{\bm{h}_m^\top \bm{h}_\ell}{\|\bm{h}_m\|\|\bm{h}_\ell \| } \mid (\bm{z}_i)_{i\in I_m\cap I_\ell} \bigg] \ge 0, \quad \text{and} \quad \mathbb{E}\bigg[\Bigl(\frac{\bm{h}_m^\top \bm{h}_\ell}{\|\bm{h}_m\|\|\bm{h}_\ell\|} -\eta_{m,\ell}\Bigr)^2\bigg]\lesssim \frac{\gamma }{nc^2\tau^2}.$$ is proved in . 
The proof is based on a symmetry argument for the non-negativity of $\eta_{m,\ell}$ and the Gaussian Poincaré inequality for the upper bound. Now we provide a high-probability bound for the Ratio. Let $U_{m,\ell} := -\bm{h}_m^\top\bm{h}_\ell/\|\bm{h}_m\|\|\bm{h}_\ell\| + \eta_{m,\ell}$. Then, using the non-negativity of $\eta_{m,\ell}$ by , we find $$\begin{aligned} \sum_{m}\|\bm{h}_m\|^2 -\|\sum_{m} \bm{h}_m\|^2 &= \sum_{m\neq \ell} (-\bm{h}_m^\top\bm{h}_\ell + \eta_{m, \ell}\|\bm{h}_m\|\|\bm{h}_\ell\| - \eta_{m, \ell}\|\bm{h}_m\|\|\bm{h}_\ell\|)\relax &\le \sum_{m\neq \ell} U_{m, \ell} \|\bm{h}_m\|\|\bm{h}_\ell\| \relax &\le \Bigl(\sum_{m\neq\ell}U_{m\ell}^2\Bigr)^{1/2} \Bigl(\sum_{m\neq \ell}\|\bm{h}_m\|^2\|\bm{h}_\ell\|^2\Bigr)^{1/2}\relax &\le \Bigl(\sum_{m,\ell} U_{m\ell}^2\Bigr)^{1/2} \sum_{m}\|\bm{h}_m\|^2.\end{aligned}$$ Dividing both sides by $\sum_m\|\bm{h}_m\|^2$, we have $$\frac{\|\sum_{m} \bm{h}_m\|^2}{\sum_{m}\|\bm{h}_m\|^2} \ge 1-\Bigl(\sum_{m, \ell} U_{m, \ell}^2\Bigr)^{1/2}.$$ Then, Markov's inequality applied with $\mathbb{E}[U_{m,\ell}^2] \lesssim \gamma/(nc^2\tau^2)$ by yields $$\begin{aligned} \mathbb{P}\bigg(\frac{\sum_{m} \|\bm{h}_{m}\|^2}{\|\sum_{m} \bm{h}_{m}\|^2} \ge 2\bigg) &\le \mathbb{P}\bigg( 1- \Bigl(\sum_{m,\ell} U_{m,\ell}^2\Bigr)^{1/2} \le \frac{1}{2}\bigg) \le \mathbb{P}\Big(\sum_{m, \ell} U_{m, \ell}^2 \ge 1/4\Big) \notag\relax &\le 4\mathbb{E}\Big[\sum_{m, \ell} U_{m, \ell}^2\Big] \lesssim \frac{M^2\gamma} {nc^2\tau^2} \label{eq:correlation_bound},\end{aligned}$$ which gives the desired upper bound on Ratio $= \sum_{m} \|\bm{h}_{m}\|^2/{\|\sum_{m} \bm{h}_{m}\|^2}$. **Control of $|d_{m,\ell}^\char"0023|^{-1}$.** The lemma below gives a lower bound on $d_{m,\ell}^\char"0023$ for $\char"0023\in\{\textup{\textrm{ovlp}}, \textup{\textrm{full}}\}$.
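The algebraic chain in the control of Ratio above — the identity for $\sum_m\|\bm{h}_m\|^2 - \|\sum_m \bm{h}_m\|^2$ followed by Cauchy–Schwarz — can be sanity-checked numerically. A minimal sketch in Python (assuming NumPy, and taking $\eta_{m,\ell}=0$ for illustration, which makes the first inequality an equality; the rows of `H` stand in for $\bm{h}_1,\dots,\bm{h}_M$):

```python
import numpy as np

rng = np.random.default_rng(1)
M_, n = 5, 50
H = rng.standard_normal((M_, n))  # rows play the role of h_1, ..., h_M

norms = np.linalg.norm(H, axis=1)
# U_{m,l} = -h_m.h_l / (||h_m|| ||h_l||) + eta_{m,l}, with eta_{m,l} = 0 here
U = -(H @ H.T) / np.outer(norms, norms)
np.fill_diagonal(U, 0.0)

# sum_m ||h_m||^2 - ||sum_m h_m||^2  <=  (sum_{m,l} U_{m,l}^2)^{1/2} sum_m ||h_m||^2
lhs = (norms ** 2).sum() - np.linalg.norm(H.sum(axis=0)) ** 2
rhs = np.sqrt((U ** 2).sum()) * (norms ** 2).sum()
assert lhs <= rhs + 1e-9
```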
lemmaLemDenomLowerBound[\[lem:denom_lower_bound\]]{#lem:denom_lower_bound label="lem:denom_lower_bound"} Let $d_{m, \ell}^\textup{\textrm{ovlp}}$ and $d_{m,\ell}^\textup{\textrm{full}}$ be the random variables defined in [\[eq:df_denominator_sub_full\]](#eq:df_denominator_sub_full){reference-type="eqref" reference="eq:df_denominator_sub_full"}. Then, there exists an absolute constant $C>0$ such that for $\char"0023\in\{\textup{\textrm{ovlp}},\textup{\textrm{full}}\}$, $$\begin{aligned} \mathbb{P}\Bigl(\frac{d_{m,\ell}^\char"0023}{n} < C d^\char"0023(\tau, c, \gamma) \Bigr) \lesssim \frac{\gamma^4}{n c^2 \tau^4} \text{ where } d^\textup{\textrm{ovlp}}(\tau, c,\gamma) = \tau^2 c^4\gamma^{-2}, \text{ and } d^\textup{\textrm{full}}(\tau,c,\gamma)=\tau^2\gamma^{-2}.\end{aligned}$$ is proved in . The proof uses [\[eq:property_tr\]](#eq:property_tr){reference-type="eqref" reference="eq:property_tr"} and a property of the hypergeometric distribution (). Combining [\[eq:relative_error_ineq\]](#eq:relative_error_ineq){reference-type="eqref" reference="eq:relative_error_ineq"}, [\[eq:bound_E\_ml_est\]](#eq:bound_E_ml_est){reference-type="eqref" reference="eq:bound_E_ml_est"}, [\[eq:correlation_bound\]](#eq:correlation_bound){reference-type="eqref" reference="eq:correlation_bound"}, and yields the following for all $\epsilon>0$: $$\begin{aligned} \mathbb{P}\Big(\Big|\frac{\widetilde{R}_M^\char"0023}{R_M} - 1\Big|>\epsilon\Big) &\le \mathbb{P}\Big(\text{Ratio} \cdot \max_{m,\ell} \frac{n}{|d_{m,\ell}^\char"0023|} |E_{m,\ell}^\char"0023| > \frac{\epsilon}{M}\Big) \relax &\lesssim \frac{M^2\gamma}{n\tau^2 c^2} + \mathbb{P}\Big(2 \max_{m,\ell} \frac{n}{|d_{m,\ell}^\char"0023|} |E_{m,\ell}^\char"0023| > \frac{\epsilon}{M}\Big)\relax &\le \frac{M^2\gamma}{n\tau^2 c^2} + \sum_{m, \ell} \mathbb{P}\Big(\frac{n}{|d_{m,\ell}^\char"0023|}|E_{m,\ell}^\char"0023| > \frac{\epsilon}{2M}\Big) \relax &\le \frac{M^2\gamma}{n\tau^2 c^2} + \sum_{m,\ell} \bigg\{
\mathbb{P}\Big(\frac{d_{m,\ell}^\char"0023}{n} < Cd^\char"0023(\tau, c, \gamma)\Big) + \mathbb{P}\Big(|E_{m,\ell}^\char"0023| > \frac{\epsilon C d^\char"0023(\tau, c, \gamma)}{2M}\Big) \bigg\}\relax &\lesssim \frac{M^2\gamma}{n\tau^2 c^2} + M^2 \Bigl[\frac{\gamma^4}{n c^2 \tau^4} + \frac{2M}{\epsilon C d^\char"0023(\tau, c, \gamma)} \frac{1}{\sqrt{n}\tau^2} \cdot \begin{dcases} c^{-3} \gamma^{7/2} & \char"0023=\textup{\textrm{ovlp}}\relax c^{-2} \gamma^{5/2} & \char"0023=\textup{\textrm{full}}, \end{dcases} \Bigr]\end{aligned}$$ where $C>0$ is an absolute constant, and $d^\char"0023(\tau, c, \gamma)$ is given by $d^\textup{\textrm{ovlp}}(\tau, c, \gamma) = \tau^2 c^4 \gamma^{-2}$ and $d^\textup{\textrm{full}}(\tau, c, \gamma) = \tau^2 \gamma^{-2}$. Substituting this expression into the last bound, we obtain $$\begin{aligned} \text{for all } \epsilon>0, \quad \mathbb{P}\bigg(\Big|\frac{\widetilde{R}_M^\char"0023}{R_M} - 1\Big|>\epsilon\bigg) &\lesssim \frac{M^2\gamma}{n\tau^2 c^2} + \frac{M^2\gamma^4}{nc^2\tau^4} + \frac{M^3}{\epsilon\sqrt{n} \tau^{2+2}} \cdot \begin{dcases} c^{-3-4} \gamma^{7/2+2} &\char"0023=\textup{\textrm{ovlp}}\relax c^{-2} \gamma^{5/2+2} & \char"0023=\textup{\textrm{full}} \end{dcases} \notag\relax &\lesssim \frac{M^2\gamma^4}{nc^2\tau^4} + \frac{M^3}{\epsilon\sqrt{n} \tau^{4}} \times \begin{dcases} c^{-7} \gamma^{11/2} &\char"0023=\textup{\textrm{ovlp}}\relax c^{-2} \gamma^{9/2} & \char"0023=\textup{\textrm{full}} \end{dcases} \label{eq:relative_error_epsilon_any},\end{aligned}$$ where, for $\epsilon\in(0,1)$, the second term dominates the first. This concludes the proof of [\[eq:sub_full_relative_error\]](#eq:sub_full_relative_error){reference-type="eqref" reference="eq:sub_full_relative_error"}. ## Proof of {#proof:th:correction_gcv} The goal in this section is to show $\widetilde{R}_M^{\textup{\textrm{cgcv}}, \char"0023}/R_M \approx 1$ for $\char"0023\in\{\textup{\textrm{ovlp}}, \textup{\textrm{full}}\}$.
The definition of $\widetilde{R}_M^{\textup{\textrm{cgcv}}, \char"0023}$ is recalled for convenience: $$\label{eq:df_cgcv_est} \widetilde{R}_M^{\textup{\textrm{cgcv}}, \char"0023} = \frac{\|M^{-1}\sum_m\bm{r}_m\|^2}{n(1-{\widetilde{\mathsf{df}}}{}_M/n)^2} - (c^{-1}-1) \frac{({\widetilde{\mathsf{df}}}{}_M/n)^2}{(1-{\widetilde{\mathsf{df}}}{}_M/n)^2} \frac{1}{M^2} \sum_m \widehat{R}_{m,m}^\char"0023.$$ Now, we define $\widetilde{R}^\textup{\textrm{cgcv}}_M$ by $$\label{eq:df_cgcv_without_est} \widetilde{R}^\textup{\textrm{cgcv}}_M := \frac{\|M^{-1}\sum_m\bm{r}_m\|^2}{n(1-{\widetilde{\mathsf{df}}}{}_M/n)^2} - (c^{-1}-1) \frac{({\widetilde{\mathsf{df}}}{}_M/n)^2}{(1-{\widetilde{\mathsf{df}}}{}_M/n)^2} \frac{1}{M^2} \sum_m \|\bm{h}_m\|^2.$$ The difference between $\widetilde{R}_M^\textup{\textrm{cgcv}}$ and $\widetilde{R}_M^{\textup{\textrm{cgcv}},\char"0023}$ is whether it uses $\|\bm{h}_m\|^2$ or its estimate $\widehat{R}^\char"0023_{m,m}$ in the rightmost sum. Considering $\widehat{R}_{m,m}^\char"0023\approx \|\bm{h}_m\|^2$ from [\[eq:sub_full_moment_ineq\]](#eq:sub_full_moment_ineq){reference-type="eqref" reference="eq:sub_full_moment_ineq"}, $\widetilde{R}_M^\textup{\textrm{cgcv}}$ is naturally expected to be close to $\widetilde{R}_M^{\textup{\textrm{cgcv}}, \char"0023}$. We state this approximation more precisely as follows: $$\label{eq:cgcv_cgcv_est_close} \text{for all } \epsilon\in(0,1),\quad \mathbb{P}\bigg(\Big|\frac{\widetilde{R}_M^{\textup{\textrm{cgcv}}} - \widetilde{R}_M^{\textup{\textrm{cgcv}}, \char"0023}}{R_M}\Big| > \epsilon\bigg) \lesssim \frac{M^2}{\sqrt{n} \epsilon \tau^6} \times \begin{dcases} c^{-6} \gamma^{15/2} & \char"0023=\textup{\textrm{ovlp}}\relax c^{-2} \gamma^{13/2} & \char"0023= \textup{\textrm{full}} \end{dcases}.$$ is proved in . The second task is to show $\widetilde{R}^\textup{\textrm{cgcv}}_M \approx R_M$. 
lemmaLemCgcvRiskClose[\[lem:cgcv_risk_close\]]{#lem:cgcv_risk_close label="lem:cgcv_risk_close"} For $\widetilde{R}_M^{\textup{\textrm{cgcv}}}$ defined in [\[eq:df_cgcv_without_est\]](#eq:df_cgcv_without_est){reference-type="eqref" reference="eq:df_cgcv_without_est"}, we have $$\text{for all } \epsilon\in(0,1), \quad \mathbb{P}\bigg(\Big|\frac{\widetilde{R}_{M}^\textup{\textrm{cgcv}}-R_M}{R_M}\Big| > \epsilon\bigg) \lesssim \frac{M^4}{\sqrt{n}\epsilon \tau^5} \cdot c^{-4} \gamma^{13/2}.$$ is proved in , where we use the concentration of ${\widehat{\mathsf{df}}}{}_m$ around its average ${\widetilde{\mathsf{df}}}{}_M = M^{-1}\sum_{m=1}^M{\widehat{\mathsf{df}}}{}_m$. Now we assume . Then, and together yield $$\begin{aligned} \mathbb{P}\Bigl(|{\widetilde{R}_M^{\textup{\textrm{cgcv}},\char"0023}}/{R_M} -1| > \epsilon\Bigr) &\le \mathbb{P}\Big(|({\widetilde{R}^{\textup{\textrm{cgcv}},\char"0023}_M-\widetilde{R}_M^\textup{\textrm{cgcv}}})/{R_M}| > \epsilon/2 \Big) + \mathbb{P}\Big(|{\widetilde{R}_M^\textup{\textrm{cgcv}}}/{R_M} - 1| > \epsilon/2\Bigr) \relax &\lesssim \frac{M^4}{\sqrt{n}\epsilon \tau^5} \cdot c^{-4} \gamma^{13/2} + \frac{M^2}{\sqrt{n} \epsilon \tau^6} \cdot \begin{dcases} c^{-6} \gamma^{15/2} & \char"0023=\textup{\textrm{ovlp}}\relax c^{-2} \gamma^{13/2} & \char"0023= \textup{\textrm{full}} \end{dcases}\relax &\lesssim \frac{M^4}{\sqrt{n}\epsilon\tau^6} \cdot \begin{dcases} c^{-6} \gamma^{15/2} & \char"0023=\textup{\textrm{ovlp}}\relax c^{-4} \gamma^{13/2} & \char"0023= \textup{\textrm{full}} \end{dcases},\end{aligned}$$ for all $\epsilon\in(0,1)$. This completes the proof of . In the following sections, we prove and .
### Proof of {#proof:lem:cgcv_cgcv_est_close} By the definition of $\widetilde{R}^{\textup{\textrm{cgcv}}, \char"0023}$ in [\[eq:df_cgcv_est\]](#eq:df_cgcv_est){reference-type="eqref" reference="eq:df_cgcv_est"} and $\widetilde{R}^\textup{\textrm{cgcv}}$ in [\[eq:df_cgcv_without_est\]](#eq:df_cgcv_without_est){reference-type="eqref" reference="eq:df_cgcv_without_est"}, $$\frac{\widetilde{R}_M^{\textup{\textrm{cgcv}}} - \widetilde{R}_M^{\textup{\textrm{cgcv}}, \char"0023}}{R_M} = (c^{-1}-1) \bigg(\frac{{\widetilde{\mathsf{df}}}{}_M/n}{1-{\widetilde{\mathsf{df}}}{}_M/n}\bigg)^2 \frac{\sum_m(\widehat{R}_{mm}^\char"0023- \|\bm{h}_m\|^2)}{\|\sum_m\bm{h}_m\|^2}.$$ Here, thanks to $(c^{-1}-1) ({\widetilde{\mathsf{df}}}{}_M/n)^2 \le (c^{-1}-1) (k/n)^2 = (c^{-1}-1) c^2 \le c$ by [\[eq:property_tr\]](#eq:property_tr){reference-type="eqref" reference="eq:property_tr"}, we have $$\begin{aligned} \bigg|\frac{\widetilde{R}_M^{\textup{\textrm{cgcv}}} - \widetilde{R}_M^{\textup{\textrm{cgcv}}, \char"0023}}{R_M}\bigg| &\le \frac{c}{(1-{\widetilde{\mathsf{df}}}{}_M/n)^2} \frac{\sum_m\|\bm{h}_m\|^2}{\|\sum_m\bm{h}_m\|^2}\max_{m} \Big|\frac{\widehat{R}_{mm}^\char"0023}{\|\bm{h}_m\|^2}-1\Big| = c \cdot U \cdot \max_{m} V_m^\char"0023,\end{aligned}$$ where $U$ and $V_m^\char"0023$ are defined as $$\label{eq:df_U_V_m} U:= \frac{1}{(1-{\widetilde{\mathsf{df}}}{}_M/n)^2} \frac{\sum_m\|\bm{h}_m\|^2}{\|\sum_m\bm{h}_m\|^2}, \quad\qquad V_{m}^\char"0023:= \bigg|\frac{\widehat{R}_{mm}^\char"0023}{\|\bm{h}_m\|^2}-1\bigg|.$$ **Control of $U$.** By (introduced later), there exists an absolute constant $C\in(0,1)$ such that $$\mathbb{P}(1-{\widehat{\mathsf{df}}}{}_m/n \le C\tau\gamma^{-1}) \le e^{-nc/2}.$$ Applying the above display to $1-{\widetilde{\mathsf{df}}}{}_M/n = M^{-1} \sum_{m=1}^M (1-{\widehat{\mathsf{df}}}{}_m/n)$ with the union bound, we have $$\mathbb{P}\Bigl(\frac{1}{(1-{\widetilde{\mathsf{df}}}{}_M/n)^2}\ge \frac{\gamma^2}{C^2 \tau^2}\Bigr) = \mathbb{P}(1-{\widetilde{\mathsf{df}}}{}_M/n \le
C \tau \gamma^{-1}) \le \sum_{m=1}^M \mathbb{P}(1-{\widehat{\mathsf{df}}}{}_m/n \le C\tau\gamma^{-1}) \lesssim M e^{-nc/2}.$$ Combining the above display and the upper bound of ${\sum_{m} \|\bm{h}_{m}\|^2}/{\|\sum_{m} \bm{h}_{m}\|^2}$ given by [\[eq:correlation_bound\]](#eq:correlation_bound){reference-type="eqref" reference="eq:correlation_bound"}, we have $$\begin{aligned} \mathbb{P}\Bigl(U > \frac{2\gamma^2}{C^2 \tau^2}\Bigr) &\le \mathbb{P}\Bigl(\frac{\sum_{m} \|\bm{h}_{m}\|^2}{\|\sum_{m} \bm{h}_{m}\|^2}>2\Bigr) + \mathbb{P}\Bigl(\frac{1}{(1-{\widetilde{\mathsf{df}}}{}_M/n)^2}\ge \frac{\gamma^2}{C^2 \tau^2}\Bigr)\notag \relax &\lesssim \frac{M^2\gamma^2}{n \tau^2 c^2} + \frac{M}{e^{nc/2}} \lesssim\frac{M^2\gamma^2}{n \tau^2 c^2} \label{eq:U_upper_bound}\end{aligned}$$ **Control of $V_m^\char"0023$.** with $M=1$ implies $$\label{eq:V_m_upper_bound} \text{for all } \epsilon>0, \quad \mathbb{P}(V_m^\char"0023>\epsilon) \lesssim \frac{\gamma^4}{nc^2\tau^4} + \frac{1}{\epsilon \sqrt{n}\tau^4} \cdot \begin{dcases} c^{-7} \gamma^{11/2} & \char"0023=\textup{\textrm{ovlp}}\relax c^{-2} \gamma^{9/2} & \char"0023= \textup{\textrm{full}} \end{dcases}$$ Now we have controlled $U$ and $V_m^\char"0023$. 
[\[eq:U_upper_bound\]](#eq:U_upper_bound){reference-type="eqref" reference="eq:U_upper_bound"} and [\[eq:V_m\_upper_bound\]](#eq:V_m_upper_bound){reference-type="eqref" reference="eq:V_m_upper_bound"} together yield, for all $\epsilon\in(0,1)$, $$\begin{aligned} &\mathbb{P}\bigg( \Big|\frac{\widetilde{R}_M^{\textup{\textrm{cgcv}}} - \widetilde{R}_M^{\textup{\textrm{cgcv}}, \char"0023}}{R_M}\Big|>\epsilon\bigg)\relax &\le \mathbb{P}(c U \max_m V_{m}^\char"0023> \epsilon) \le \mathbb{P}\Bigl(U > \frac{2\gamma^2}{C^2 \tau^2}\Bigr) + \mathbb{P}\Bigl(c\frac{2\gamma^2}{C^2\tau^2} \max_{m} V_m^\char"0023> \epsilon\Bigr)\relax &\lesssim \frac{M^2\gamma^2}{n \tau^{2} c^2} + \sum_m \mathbb{P}\Big(V_m^\char"0023> \frac{\epsilon C^2\tau^2}{2c \gamma^2}\Big) \text{ (by \eqref{eq:U_upper_bound})}\relax &\lesssim \frac{M^2\gamma^2}{n \tau^{2} c^2} + \frac{M\gamma^4}{n\tau^4c^2} + \frac{M}{\sqrt{n} \tau^4} \bigg(\frac{\epsilon C^2\tau^2}{2c \gamma^2}\bigg)^{-1} \cdot \begin{dcases} c^{-7} \gamma^{11/2} & \char"0023=\textup{\textrm{ovlp}}\relax c^{-2} \gamma^{9/2} & \char"0023= \textup{\textrm{full}} \end{dcases} \text{ (by \eqref{eq:V_m_upper_bound})} \relax &\lesssim \frac{M^2\gamma^4}{n \tau^{4} c^2} + \frac{M}{\sqrt{n} \epsilon \tau^6} \times \begin{dcases} c^{-6} \gamma^{15/2} & \char"0023=\textup{\textrm{ovlp}}\relax c^{-1} \gamma^{13/2} & \char"0023= \textup{\textrm{full}} \end{dcases} \text{ (thanks to $\tau\in(0,1]$ and $\gamma\ge 1$)}\relax &\lesssim \frac{M^2}{\sqrt{n} \epsilon \tau^6} \times \begin{dcases} c^{-6} \gamma^{15/2} & \char"0023=\textup{\textrm{ovlp}}\relax c^{-2} \gamma^{13/2} & \char"0023= \textup{\textrm{full}} \end{dcases} \text{ (thanks to $\epsilon\in(0,1)$).}\end{aligned}$$ This concludes the proof.
### Proof of {#proof:lem:cgcv_risk_close} If we define $d_{m,\ell}^\textup{\textrm{cgcv}}$ by $$d_{m,\ell}^\textup{\textrm{cgcv}}:= n\bigg\{\Big(1-\frac{{\widetilde{\mathsf{df}}}{}_M}{n}\Big)^2 + (c^{-1}-1) \mathop{\mathrm{\mathds{1}}}_{\{m=\ell\}}\Big(\frac{{\widetilde{\mathsf{df}}}{}_M}{n}\Big)^2\bigg\},$$ the relative error can be written as $$\frac{\widetilde{R}_M^\textup{\textrm{cgcv}}}{R_M} - 1 =\frac{\|\sum_m \bm{r}_m\|^2 - n(c^{-1}-1) ({\widetilde{\mathsf{df}}}{}_M/n)^2 \sum_m \|\bm{h}_m\|^2}{n(1-{\widetilde{\mathsf{df}}}{}_M/n)^2\|\sum_m \bm{h}_m\|^2} - 1= \frac{\sum_{m, \ell} (\bm{r}_m^\top \bm{r}_\ell - d_{m,\ell}^\textup{\textrm{cgcv}}\bm{h}_m^\top\bm{h}_\ell)}{n(1-{\widetilde{\mathsf{df}}}{}_M/n)^2 \|\sum_m\bm{h}_m\|^2}.$$ By the same argument as in [\[eq:relative_error_ineq\]](#eq:relative_error_ineq){reference-type="eqref" reference="eq:relative_error_ineq"}, the relative error is bounded from above as $$\begin{aligned} \Bigl|\frac{\widetilde{R}_M^\textup{\textrm{cgcv}}}{R_M} - 1 \Bigr| &\le \frac{1}{(1-{\widetilde{\mathsf{df}}}{}_M/n)^2} \frac{(\sum_{m=1}^M\|\bm{h}_m\|)^2}{\|\sum_{m=1}^M\bm{h}_m\|^2} \cdot \max_{m, \ell} \frac{|\bm{r}_m^\top\bm{r}_\ell - d_{m,\ell}^\textup{\textrm{cgcv}}\bm{h}_m^\top\bm{h}_\ell|}{n\|\bm{h}_m\|\|\bm{h}_\ell\|}\relax &\le M \cdot \frac{1}{(1-{\widetilde{\mathsf{df}}}{}_M/n)^2} \frac{\sum_m\|\bm{h}_m\|^2}{\|\sum_m\bm{h}_m\|^2} \cdot \max_{m,\ell} \Big(\frac{|\bm{r}_m^\top\bm{r}_\ell - d_{m,\ell}^\textup{\textrm{full}}\bm{h}_m^\top \bm{h}_\ell |}{n \|\bm{h}_m\|\|\bm{h}_\ell\|} + \frac{|d_{m,\ell}^\textup{\textrm{cgcv}}- d_{m,\ell}^\textup{\textrm{full}}|}{n}\Big)\relax &= M \cdot U \cdot \max_{m,\ell}\Big(|E_{m,\ell}^\textup{\textrm{full}}| + |d_{m,\ell}^\textup{\textrm{cgcv}}-d_{m,\ell}^\textup{\textrm{full}}|/n\Big),\end{aligned}$$ where $E_{m,\ell}^\textup{\textrm{full}}$ and $U$ were defined before by [\[eq:df_ratio_E\_ml_est\]](#eq:df_ratio_E_ml_est){reference-type="eqref" reference="eq:df_ratio_E_ml_est"} and
[\[eq:df_U\_V_m\]](#eq:df_U_V_m){reference-type="eqref" reference="eq:df_U_V_m"}. Their expressions are recalled here for convenience: $$U := \frac{1}{(1-{\widetilde{\mathsf{df}}}{}_M/n)^2} \frac{\sum_m\|\bm{h}_m\|^2}{\|\sum_m\bm{h}_m\|^2}, \quad E_{m,\ell}^\textup{\textrm{full}}:= \frac{\bm{r}_m^\top \bm{r}_\ell - d_{m,\ell}^\textup{\textrm{full}}\bm{h}_m^\top\bm{h}_\ell}{n\|\bm{h}_m\|\|\bm{h}_\ell\|} = \frac{d_{m,\ell}^\textup{\textrm{full}}(\widehat{R}_{m,\ell}^\textup{\textrm{full}}-\bm{h}_m^\top\bm{h}_\ell)}{n\|\bm{h}_m\|\|\bm{h}_\ell\|},$$ where $d_{m,\ell}^\textup{\textrm{full}}= n-{\widehat{\mathsf{df}}}{}_m-{\widehat{\mathsf{df}}}{}_\ell + k^{-2}{|I_m\cap I_\ell|}{\widehat{\mathsf{df}}}{}_m{\widehat{\mathsf{df}}}{}_\ell$ is the denominator in the $\textup{\textrm{full}}$-estimator. Note in passing that $U$ and $E_{m,\ell}^\textup{\textrm{full}}$ have already been bounded by $\eqref{eq:bound_E_ml_est}$ with $\char"0023=\textup{\textrm{full}}$ and [\[eq:U_upper_bound\]](#eq:U_upper_bound){reference-type="eqref" reference="eq:U_upper_bound"}: there exists an absolute constant $C>0$ such that $$\label{eq:bound_E_ml_full_U} \mathbb{P}(U > C \tau^{-2}\gamma^2) \lesssim \frac{M^2\gamma^2}{n\tau^2 c^2}, \quad\quad \mathbb{P}(|E_{m,\ell}^\textup{\textrm{full}}| > \epsilon) \lesssim\frac{\gamma^{5/2}}{\epsilon\sqrt{n}\tau^2 c^2} \text{ for all $\epsilon>0$}.$$ It remains to bound $|d_{m,\ell}^\textup{\textrm{cgcv}}-d_{m,\ell}^\textup{\textrm{full}}|/n$. Here, we will argue that $$\label{eq:concentrate_denom_full} \text{for all } m, \ell\in[M], \quad \text{for all }\epsilon>0, \quad \mathbb{P}\Big(\frac{|d_{m,\ell}^\textup{\textrm{full}}- d_{m,\ell}^\textup{\textrm{cgcv}}|}{n} > \epsilon\Big) \lesssim M \Bigl(e^{-nc/2} + \frac{\gamma^{9/2}}{\epsilon\sqrt{n} \tau^3 c^4}\Bigr).$$ The proof of [\[eq:concentrate_denom_full\]](#eq:concentrate_denom_full){reference-type="eqref" reference="eq:concentrate_denom_full"} is given at the end of this section. Now we assume it.
Then [\[eq:bound_E\_ml_full_U\]](#eq:bound_E_ml_full_U){reference-type="eqref" reference="eq:bound_E_ml_full_U"} and [\[eq:concentrate_denom_full\]](#eq:concentrate_denom_full){reference-type="eqref" reference="eq:concentrate_denom_full"} together yield, for all $\epsilon\in(0,1)$, $$\begin{aligned} \mathbb{P}\Big(|\frac{\widetilde{R}_M^\textup{\textrm{cgcv}}}{R_M} - 1|>\epsilon\Big) &\le \mathbb{P}\Big( U \cdot \max_{m,\ell} \Big(|E_{m,\ell}^\textup{\textrm{full}}| + \frac{|d_{m,\ell}^\textup{\textrm{cgcv}}-d_{m,\ell}^\textup{\textrm{full}}|}{n}\Big) > \frac{\epsilon}{M}\Big)\relax &\le \mathbb{P}\Big(U > \frac{C \gamma^2}{\tau^2}\Big) + \mathbb{P}\Big(\max_{m,\ell} \Big(|E_{m,\ell}^\textup{\textrm{full}}| + \frac{|d_{m,\ell}^\textup{\textrm{cgcv}}-d_{m,\ell}^\textup{\textrm{full}}|}{n}\Big) > \frac{\epsilon}{M} \frac{\tau^2}{C\gamma^2}\Big)\relax &\lesssim\frac{M^2\gamma^2}{n\tau^2 c^2} + \sum_{m,\ell} \mathbb{P}\Big(|E_{m,\ell}^\textup{\textrm{full}}|> \frac{\epsilon\tau^2}{2CM\gamma^2}\Big) + \mathbb{P}\Big(\frac{|d_{m,\ell}^\textup{\textrm{cgcv}}-d_{m,\ell}^\textup{\textrm{full}}|}{n}> \frac{\epsilon\tau^2}{2CM\gamma^2}\Big)\relax &\lesssim \frac{M^2\gamma^2}{n\tau^2 c^2} + M^2 \bigg\{ \frac{2CM\gamma^2}{\epsilon\tau^2} \cdot \frac{\gamma^{5/2}}{\sqrt{n}\tau^2 c^2} + M e^{-nc/2} + \frac{2CM\gamma^2}{\epsilon\tau^2} \cdot \frac{M\gamma^{9/2}}{\sqrt{n}\tau^3 c^4} \bigg\}\relax &\lesssim \frac{M^4 \gamma^{13/2}}{\sqrt{n}\epsilon\tau^5 c^4} \quad \text{(thanks to $\epsilon, c, \tau \in(0,1]$ and $\gamma\ge1$}),\end{aligned}$$ which concludes the proof of . 
------------------------------------------------------------------------ **[Proof of Equation [\[eq:concentrate_denom_full\]](#eq:concentrate_denom_full){reference-type="eqref" reference="eq:concentrate_denom_full"}]{.ul}.** The expressions of $d_{m,\ell}^\textup{\textrm{full}}$ and $d_{m,\ell}^\textup{\textrm{cgcv}}$ are recalled here for convenience: $$\frac{d_{m,\ell}^\textup{\textrm{full}}}{n} = 1 - \frac{{\widehat{\mathsf{df}}}{}_m}{n} - \frac{{\widehat{\mathsf{df}}}{}_\ell}{n} + \frac{|I_m\cap I_\ell|}{nk^2}{\widehat{\mathsf{df}}}{}_m{\widehat{\mathsf{df}}}{}_\ell, \quad \frac{d_{m,\ell}^\textup{\textrm{cgcv}}}{n} = \Big(1-\frac{{\widetilde{\mathsf{df}}}{}_M}{n}\Big)^2 + (c^{-1}-1) \mathop{\mathrm{\mathds{1}}}_{\{m=\ell\}}\Big(\frac{{\widetilde{\mathsf{df}}}{}_M}{n}\Big)^2.$$ Below we prove $d_{m,\ell}^\textup{\textrm{full}}/n \approx d_{m,\ell}^\textup{\textrm{cgcv}}/n$. The key lemma is the concentration of ${\widehat{\mathsf{df}}}{}_m$ around its average ${\widetilde{\mathsf{df}}}{}_M = \sum_m {\widehat{\mathsf{df}}}{}_m/M$. lemmaLemConcentrationDf[\[lem:concentration_df\]]{#lem:concentration_df label="lem:concentration_df"} Suppose the same penalty is used for $(\bm{h}_m)_{m=1}^M$. Then, we have $$\text{for all } m\in[M], \quad \text{for all }\epsilon>0, \quad \mathbb{P}\Big(\frac{|{\widehat{\mathsf{df}}}{}_m - {\widetilde{\mathsf{df}}}{}_M|}{n} > \epsilon\Big) \lesssim M\Big(e^{-nc/2} + \frac{\gamma^{9/2}}{\epsilon\sqrt{n}\tau^3 c^4}\Big).$$ The proof of is given in . From , it suffices to bound $|d_{m,\ell}^\textup{\textrm{full}}-d_{m,\ell}^\textup{\textrm{cgcv}}|/n$ from above by $|{\widehat{\mathsf{df}}}{}_m-{\widetilde{\mathsf{df}}}{}_M|/n$ and $|{\widehat{\mathsf{df}}}{}_\ell-{\widetilde{\mathsf{df}}}{}_M|/n$ up to an absolute constant. Below, we prove [\[eq:concentrate_denom_full\]](#eq:concentrate_denom_full){reference-type="eqref" reference="eq:concentrate_denom_full"} for $m=\ell$ and $m\neq \ell$ separately. 
**When $m=\ell$.** Letting $f$ be the function $f(x) = 1 -2x + c^{-1} x^2 = (1-x)^2 + (c^{-1}-1)x^2$, we have $$\frac{d_{m,m}^\textup{\textrm{full}}}{n} = 1- 2 \frac{{\widehat{\mathsf{df}}}{}_m}{n} + \frac{{\widehat{\mathsf{df}}}{}_m^2}{nk}= f\Big(\frac{{\widehat{\mathsf{df}}}{}_m}{n}\Big), \quad \frac{d_{m,m}^\textup{\textrm{cgcv}}}{n} = \Big(1-\frac{{\widetilde{\mathsf{df}}}{}_M}{n}\Big)^2 + (c^{-1}-1) \Big(\frac{{\widetilde{\mathsf{df}}}{}_M}{n}\Big)^2 = f\Big(\frac{{\widetilde{\mathsf{df}}}{}_M}{n}\Big).$$ Here, ${\widehat{\mathsf{df}}}{}_m/n, {\widetilde{\mathsf{df}}}{}_M/n \in [0, c)$ by [\[eq:property_tr\]](#eq:property_tr){reference-type="eqref" reference="eq:property_tr"}, while $f$ is $2$-Lipschitz on $[0,c]$ since $\sup_{x \in [0,c]} |f'(x)| = \sup_{x\in [0,c]} 2(1-x/c) = 2$. Thus, we obtain that, for all $\epsilon>0$, $$\begin{aligned} \mathbb{P}\Big(\frac{|d_{m,m}^\textup{\textrm{full}}- d_{m,m}^\textup{\textrm{cgcv}}|}{n} >\epsilon\Big) &= \mathbb{P}\Big(\Big|f\Big(\frac{{\widehat{\mathsf{df}}}{}_m}{n}\Big)-f\Big(\frac{{\widetilde{\mathsf{df}}}{}_M}{n}\Big)\Big|> \epsilon\Big) \relax &\le \mathbb{P}\Big( 2\Big|\frac{{\widehat{\mathsf{df}}}{}_m}{n}-\frac{{\widetilde{\mathsf{df}}}{}_M}{n}\Big| > \epsilon\Big)\relax &\lesssim M\Big(e^{-nc/2} + \frac{\gamma^{9/2}}{\epsilon\sqrt{n}\tau^3 c^4}\Big),\end{aligned}$$ thanks to the Lipschitz property of $f$ and . This completes the proof of [\[eq:concentrate_denom_full\]](#eq:concentrate_denom_full){reference-type="eqref" reference="eq:concentrate_denom_full"} for $m=\ell$. 
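The $2$-Lipschitz property of $f$ on $[0,c]$ used above is elementary, since $|f(x)-f(y)| = |x-y|\,|2-(x+y)/c| \le 2|x-y|$ for $x,y\in[0,c]$. A minimal numerical check in Python (assuming NumPy; the value of `c` is arbitrary and not from the paper):

```python
import numpy as np

c = 0.7  # an arbitrary subsample ratio c = k/n in (0, 1]
f = lambda x: (1.0 - x) ** 2 + (1.0 / c - 1.0) * x ** 2  # = 1 - 2x + x^2/c

# check |f(x) - f(y)| <= 2|x - y| on a grid of [0, c]^2
xs = np.linspace(0.0, c, 201)
X, Y = np.meshgrid(xs, xs)
gap = np.abs(f(X) - f(Y)) - 2.0 * np.abs(X - Y)
assert gap.max() <= 1e-12
```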
**When $m\neq \ell$.** Letting $g$ be the function $g(x,y)=(1-x)(1-y)$, we have $$\begin{aligned} \frac{d_{m, \ell}^\textup{\textrm{full}}}{n} = g\Big(\frac{{\widehat{\mathsf{df}}}{}_m}{n}, \frac{{\widehat{\mathsf{df}}}{}_\ell}{n}\Big) + \frac{{\widehat{\mathsf{df}}}{}_m}{n}\frac{{\widehat{\mathsf{df}}}{}_\ell}{n} \Big(\frac{|I_m\cap I_\ell| \cdot n}{k^2}-1\Big), \qquad \frac{d_{m,\ell}^\textup{\textrm{cgcv}}}{n} = g \Big(\frac{{\widetilde{\mathsf{df}}}{}_M}{n}, \frac{{\widetilde{\mathsf{df}}}{}_M}{n}\Big).\end{aligned}$$ Here, $g(x,y)=(1-x)(1-y)$ satisfies the following inequality: for all $x,x', y,y'\in [0,1]$, $$\begin{aligned} |g(x, y) - g(x',y')| \le |g(x, y)-g(x',y)| + |g(x', y)-g(x', y')| \le |x-x'| + |y-y'|. \end{aligned}$$ From this property of $g$ and ${\widehat{\mathsf{df}}}{}_m/n, {\widehat{\mathsf{df}}}{}_\ell/n, {\widetilde{\mathsf{df}}}{}_M/n \in [0,c) \subset[0,1]$ by [\[eq:property_tr\]](#eq:property_tr){reference-type="eqref" reference="eq:property_tr"}, we find $$\begin{aligned} \frac{|d_{m, \ell}^\textup{\textrm{full}}- d_{m,\ell}^\textup{\textrm{cgcv}}|}{n} &\le \frac{{\widehat{\mathsf{df}}}{}_m}{n}\frac{{\widehat{\mathsf{df}}}{}_\ell}{n} \Bigl|\frac{|I_m\cap I_\ell| \cdot n}{k^2}-1\Bigr| + \Big|g\Big(\frac{{\widehat{\mathsf{df}}}{}_m}{n}, \frac{{\widehat{\mathsf{df}}}{}_\ell}{n}\Big)-g\Big(\frac{{\widetilde{\mathsf{df}}}{}_M}{n}, \frac{{\widetilde{\mathsf{df}}}{}_M}{n}\Big)\Big| \relax &\le c^2 \Bigl|\frac{|I_m\cap I_\ell| \cdot n}{k^2}-1\Bigr| + \frac{|{\widehat{\mathsf{df}}}{}_m-{\widetilde{\mathsf{df}}}{}_M|}{n} + \frac{|{\widehat{\mathsf{df}}}{}_\ell-{\widetilde{\mathsf{df}}}{}_M|}{n}.\end{aligned}$$ Here, thanks to the bound on $\mathop{\mathrm{{\rm Var}}}[|I_m \cap I_\ell|]$ (see ), the first absolute moment of the first term on the right-hand side is bounded from above as $$\mathbb{E}\Bigl[c^2 \Bigl|\frac{|I_m\cap I_\ell| \cdot n}{k^2}-1\Bigr|\Bigr] = c^2 \frac{n}{k^2} \mathbb{E}\Bigl[\Bigl||I_m\cap I_\ell|-\frac{k^2}{n}\Bigr|\Bigr] \le \frac{1}{n}
\sqrt{ \mathbb{E}\Bigl[\Bigl||I_m\cap I_\ell|-\frac{k^2}{n}\Bigr|^2\Bigr] } \le \frac{1}{n} \sqrt{\frac{k^2}{n}} = \frac{c}{\sqrt{n}}.$$ Therefore, Markov's inequality applied with the above moment bound results in $$\begin{aligned} \mathbb{P}\Big(\frac{|d_{m, \ell}^\textup{\textrm{full}}- d_{m,\ell}^\textup{\textrm{cgcv}}|}{n}>\epsilon\Big) &\le \mathbb{P}\Big(c^2 \Bigl|\frac{|I_m\cap I_\ell| \cdot n}{k^2}-1\Bigr|>\frac{\epsilon}{3}\Big)\relax &+ \mathbb{P}\Big(\frac{|{\widehat{\mathsf{df}}}{}_m-{\widetilde{\mathsf{df}}}{}_M|}{n} > \frac{\epsilon}{3}\Big) + \mathbb{P}\Big(\frac{|{\widehat{\mathsf{df}}}{}_\ell-{\widetilde{\mathsf{df}}}{}_M|}{n} > \frac{\epsilon}{3}\Big)\relax &\lesssim \frac{c}{\epsilon\sqrt{n}} + M\Big(e^{-nc/2} + \frac{\gamma^{9/2}}{\epsilon\sqrt{n}\tau^3 c^4}\Big) \relax &\lesssim M\Big(e^{-nc/2} + \frac{\gamma^{9/2}}{\epsilon\sqrt{n}\tau^3 c^4}\Big),\end{aligned}$$ for all $\epsilon>0$. This completes the proof of Equation [\[eq:concentrate_denom_full\]](#eq:concentrate_denom_full){reference-type="eqref" reference="eq:concentrate_denom_full"} for $m\neq\ell$.
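The hypergeometric moment bound $\mathbb{E}[||I_m\cap I_\ell|-k^2/n|^2]\le k^2/n$ used above can be verified by exact enumeration, since $|I_m\cap I_\ell|$ for two independent uniform size-$k$ subsets of $[n]$ follows a hypergeometric distribution with mean $k^2/n$. A minimal check in Python (the values of `n` and `k` are arbitrary and not from the paper):

```python
from math import comb

n, k = 30, 12
mean = k * k / n  # E|I_m ∩ I_l| = k^2/n

# |I_m ∩ I_l| ~ Hypergeometric(n, k, k): exact second central moment
total = comb(n, k)
second_moment = sum(
    comb(k, j) * comb(n - k, k - j) / total * (j - mean) ** 2
    for j in range(max(0, 2 * k - n), k + 1)
)
assert second_moment <= k * k / n
```

This matches the closed form $\mathop{\mathrm{{\rm Var}}}[|I_m\cap I_\ell|] = \frac{k^2}{n}\cdot\frac{(n-k)^2}{n(n-1)} \le \frac{k^2}{n}$, which holds since $(n-k)^2 \le n(n-1)$ for $k\ge 1$.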
## Proof of {#app:proof:prop:gcv_inconsistency} Define $\text{Correction}$ as $$\text{Correction} := (c^{-1}-1) \frac{({\widetilde{\mathsf{df}}}{}_M/n)^2}{(1-{\widetilde{\mathsf{df}}}{}_M/n)^2} \frac{\sum_m \|\bm{h}_m\|^2}{\|\sum_m\bm{h}_m\|^2},$$ so that can be written as $$\label{eq:gcv_C_n_1} \mathbb{P}\bigg(\Big|\frac{\widetilde{R}_M^\textup{\text{gcv}}}{R_M} - \text{Correction} - 1\Big| > \epsilon\bigg) \lesssim \frac{M^4 \gamma^{13/2}}{\epsilon \sqrt{n}\tau^5 c^4} \text{ for all } \epsilon\in(0,1).$$ Since $\|\sum_m\bm{h}_m\|^2 \le (\sum_m \|\bm{h}_m\|)^2 \le M\sum_m \|\bm{h}_m\|^2$ by the triangle inequality, it holds that $$\Bigl\{{{\widetilde{\mathsf{df}}}{}_M}/{k} \ge \delta\Bigr\} = \Bigl\{{{\widetilde{\mathsf{df}}}{}_M}/{n} \ge c\delta\Bigr\} \subset \Bigl\{\text{Correction} \ge (c^{-1}-1) \Big(\frac{c\delta}{1-c\delta}\Big)^2 \frac{1}{M} \ge \frac{\delta^2 c(1-c)}{M}\Bigr\}$$ for all $\delta\in(0,1)$. Therefore, we obtain that, for all $\delta\in(0,1)$, $$\begin{aligned} \mathbb{P}\bigg(\frac{{\widetilde{\mathsf{df}}}{}_M}{k} \ge \delta\bigg) &\le \mathbb{P}\bigg(\text{Correction} \ge \frac{\delta^2 c(1-c)}{M}\bigg)\relax &\le \mathbb{P}\bigg(\frac{\widetilde{R}_M^\textup{\text{gcv}}}{R_M} - \text{Correction} - 1 \le -\frac{\delta^2 c(1-c)}{2M}\bigg) + \mathbb{P}\bigg(\frac{\widetilde{R}^\textup{\text{gcv}}_M}{R_M}-1 > \frac{\delta^2 c(1-c)}{2M}\bigg)\relax &\le C \bigg(\frac{\delta^2 c(1-c)}{2M}\bigg)^{-1} \frac{M^4 \gamma^{13/2}}{\sqrt{n}\tau^5 c^4} + \mathbb{P}\bigg(\frac{\widetilde{R}^\textup{\text{gcv}}_M}{R_M} > 1 + \frac{c(1-c)\delta^2}{2M}\bigg) \relax & \text{ (by \eqref{eq:gcv_C_n_1} with $\epsilon=\frac{\delta^2c(1-c)}{2M}$), } \relax &\le C' \frac{M^5}{\sqrt{n}\tau^5 \delta^2} \frac{\gamma^{13/2}}{c^5(1-c)} + \mathbb{P}\bigg(\frac{\widetilde{R}^\textup{\text{gcv}}_M}{R_M} > 1 + \frac{c(1-c)\delta^2}{2M}\bigg),\end{aligned}$$ where $C,C'>0$ are absolute constants. This finishes the proof.
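The elementary inequality $(c^{-1}-1)\bigl(\frac{c\delta}{1-c\delta}\bigr)^2 \ge \delta^2 c(1-c)$ used in the inclusion above follows from $(1-c\delta)^2 \le 1$. A minimal numerical check in Python (assuming NumPy; the grids of `c` and `delta` values are arbitrary):

```python
import numpy as np

cs = np.linspace(0.05, 0.95, 19)      # values of c in (0, 1)
deltas = np.linspace(0.05, 0.95, 19)  # values of delta in (0, 1)
C, D = np.meshgrid(cs, deltas)

# (1/c - 1) * (c*delta / (1 - c*delta))^2  >=  delta^2 * c * (1 - c)
lhs = (1.0 / C - 1.0) * (C * D / (1.0 - C * D)) ** 2
rhs = D ** 2 * C * (1.0 - C)
assert np.all(lhs >= rhs - 1e-12)
```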
------------------------------------------------------------------------ ## Technical lemmas and their proofs {#sec:technical-lemmas-nonasymp} ### Proof of {#proof:lem:derivative} By the change of variable $\bm{\beta}\mapsto \bm{u}= \bm{\Sigma}^{1/2}(\bm{\beta}-\bm{\beta}_0)$ and $\bm{G}= \bm{X}\bm{\Sigma}^{-1/2}$, we have $$\bm{\Sigma}^{1/2}(\widehat{\bm{\beta}} -\bm{\beta}_0) = \widehat{\bm{u}}, \quad \bm{r}= \bm{y}- \bm{X}\widehat{\bm{\beta}}=\bm{\epsilon}- \bm{G}\widehat{\bm{u}}, \quad {\widehat{\mathsf{df}}}{}= \mathop{\mathrm{tr}}[\bm{X}(\partial/\partial \bm{y}) \widehat{\bm{\beta}}] = \mathop{\mathrm{tr}}[\bm{G}(\partial/\partial\bm{\epsilon})\widehat{\bm{u}}],$$ where $\widehat{\bm{u}}$ is a penalized estimator with an isotropic design $\bm{G}= \bm{X}\bm{\Sigma}^{-1/2}$: $$\widehat{\bm{u}} = \widehat{\bm{u}}(\bm{\epsilon}, \bm{G}) := \mathop{\mathrm{argmin}}_{\bm{u}\in{{\mathbb{R}}}^{p}} \frac{1}{k}\sum_{i\in I} (\epsilon_i - \bm{g}_i^\top \bm{u})^2 + f(\bm{u}) \text{ with } f(\bm{u}) := g(\bm{\Sigma}^{-1/2} \bm{u}+ \bm{\beta}_0).$$ Note in passing that the map $\bm{u}\in{{\mathbb{R}}}^{p} \mapsto f(\bm{u}) - \mu \|\bm{u}\|^2/2$ is convex thanks to . 
Then, @bellec2022derivatives [Theorem 1] with $\bm{\Sigma}=\bm{I}_p$ implies the following: there exists a matrix $\bm{A}\in {{\mathbb{R}}}^{p\times p}$ depending on $(\epsilon_i, \bm{g}_i)_{i\in I}$ such that $$\|\bm{A}\|_{\mathop{\mathrm{op}}} \leq (|I| \mu)^{-1} = (k\mu)^{-1},$$ and the derivatives of $\widehat{\bm{u}}$ with respect to $\bm{G}=(g_{ij})_{ij}\in {{\mathbb{R}}}^{n\times p}$ and $\bm{\epsilon}=(\epsilon_i)\in{{\mathbb{R}}}^n$ are given by $$\begin{aligned} \text{for all } i\in [n], j\in [p], \quad &\frac{\partial\widehat{\bm{u}}}{\partial g_{ij}}(\bm{\epsilon},\bm{G}) = \begin{dcases} \bm{A}[\bm{e}_j \bm{e}_{i}^\top\bm{r}- \bm{G}^\top \bm{e}_i \bm{e}_j^\top \widehat{\bm{u}}] & (i\in I)\relax \bm{0}_p & (i\notin I) \end{dcases} = \bm{A}[{\bm{e}_j} (\bm{L}\bm{r})_i - \bm{G}^\top\bm{L}\bm{e}_i \widehat{u}_j ], \relax \text{for all } i\in[n], \quad &\frac{\partial\widehat{\bm{u}}}{\partial \epsilon_i}(\bm{\epsilon},\bm{G}) = \begin{dcases} \bm{A}\bm{G}^\top \bm{e}_i & (i\in I)\relax \bm{0}_p & (i\notin I) \end{dcases} = \bm{A}\bm{G}^\top \bm{L}\bm{e}_i,\end{aligned}$$ where $\bm{L}=\sum_{i\in I} \bm{e}_i\bm{e}_i^\top$.
Furthermore, equation (D.8) (and the following argument) in [@bellec2022derivatives] imply that $\bm{V}:= \bm{I}_n - \bm{G}\bm{A}\bm{G}^\top \bm{L}$ satisfies $$0 < k(1+\|\bm{G}\|_{\mathop{\mathrm{op}}}^2/(k\mu))^{-1} \le \mathop{\mathrm{tr}}[\bm{L}\bm{V}] = k - {\widehat{\mathsf{df}}}{}\le k, \quad \|\bm{L}\bm{V}\|_{\mathop{\mathrm{op}}} \le 1.$$ Thus, if we define the matrix $\bm{B}\in{{\mathbb{R}}}^{(p+1)\times(p+1)}$ as $\bm{B}:= \begin{pmatrix} \bm{A}& \bm{0}_p \relax \bm{0}_p^\top & 0 \end{pmatrix},$ the derivative of $\bm{h}= (\widehat{\bm{u}}^\top, -\sigma)^\top\in{{\mathbb{R}}}^{p+1}$ with respect to $\bm{Z}= [\bm{G}| \sigma^{-1}\bm{\epsilon}] = (z_{ij})_{ij} \in {{\mathbb{R}}}^{n\times (p+1)}$ can be written as $$\begin{aligned} 1\le j\le p, \quad \frac{\partial\bm{h}}{\partial z_{ij}} &= \frac{\partial}{\partial g_{ij}} \begin{pmatrix} \widehat{\bm{u}}\relax -\sigma \end{pmatrix} = \begin{pmatrix} \bm{A}[{\bm{e}_j} (\bm{L}\bm{r})_i - \bm{G}^\top\bm{L}\bm{e}_i \widehat{h}_j ]\relax 0 \end{pmatrix} = \bm{B}\Bigl[\begin{pmatrix} \bm{e}_j\relax 0 \end{pmatrix} (\bm{L}\bm{r})_i - \bm{Z}^\top \bm{L}\bm{e}_i \widehat{h}_j\Bigr],\relax \frac{\partial\bm{h}}{\partial z_{i, p+1}} &=\sigma \frac{\partial}{\partial \epsilon_{i}} \begin{pmatrix} \widehat{\bm{u}}\relax -\sigma \end{pmatrix} = \begin{pmatrix} \sigma \bm{A}\bm{G}^\top \bm{L}\bm{e}_i\relax 0 \end{pmatrix} = \bm{B}\Bigl[\begin{pmatrix} \bm{0}_p \relax 1 \end{pmatrix} (\bm{L}\bm{r})_i - \bm{Z}^\top \bm{L}\bm{e}_i \widehat{h}_{p+1}\Bigr].\end{aligned}$$ This finishes the proof of the derivative formula [\[eq:derivartive_formula\]](#eq:derivartive_formula){reference-type="eqref" reference="eq:derivartive_formula"}. It remains to prove $\mathop{\mathrm{tr}}[\bm{B}] \ge 0$ in [\[eq:property_B\]](#eq:property_B){reference-type="eqref" reference="eq:property_B"}. By the definition of $\bm{B}$, it suffices to show $\mathop{\mathrm{tr}}[\bm{A}] \ge 0$. 
Equation (7.4) in [@bellec2022observable] implies $\bm{v}^\top \bm{A}\bm{v}\ge 0$ for all $\bm{v}\in \ker(\bm{A})^\perp$. Then, letting $\bm{V}=(\bm{v}_1, \bm{v}_2, \dots, \bm{v}_p)\in{{\mathbb{R}}}^{p\times p}$ be an orthogonal matrix with its columns including an orthonormal basis of $\ker(\bm{A})^\perp$, we have $0 \le \sum_{i=1}^p \bm{v}_i^\top \bm{A}\bm{v}_i = \mathop{\mathrm{tr}}[\bm{V}^\top \bm{A}\bm{V}] = \mathop{\mathrm{tr}}[\bm{A}\bm{V}\bm{V}^\top] = \mathop{\mathrm{tr}}[\bm{A}].$ This completes the proof. ------------------------------------------------------------------------ ### Proof of {#proof:psi_bound} The proof is based on the moment inequality in ahead. We will bound $\Xi_J$ in using the derivative formula [\[eq:derivartive_formula\]](#eq:derivartive_formula){reference-type="eqref" reference="eq:derivartive_formula"} and the bound of the operator norm $\|\bm{B}\|_{\mathop{\mathrm{op}}} \le (k\mu)^{-1}$ by [\[eq:property_B\]](#eq:property_B){reference-type="eqref" reference="eq:property_B"}.
Note in passing that the derivative formula [\[eq:derivartive_formula\]](#eq:derivartive_formula){reference-type="eqref" reference="eq:derivartive_formula"} implies $$\sum_{i\in J} \sum_{j=1}^{p+1} \Big\|\frac{\partial \bm{h}}{\partial z_{ij}}\Big\|^2 = \sum_{i\in J}\sum_{j=1}^{p+1} \|\bm{B}\bm{e}_j (\bm{L}\bm{r})_i - \bm{B}\bm{Z}^\top \bm{L}\bm{e}_i h_j\|^2 \lesssim \|\bm{B}\|_F^2 \|\bm{L}_J \bm{r}\|^2 + \|\bm{L}_J\bm{Z}\bm{B}^\top\|_F^2\|\bm{h}\|^2,$$ so that $\bm{r}= -\bm{Z}\bm{h}$ and $\|\bm{B}\|_{F}^2 \le \mathrm{rank}(\bm{B})\|\bm{B}\|_{\mathop{\mathrm{op}}}^2 \le p (k\mu)^{-2}$ lead to $$\label{eq:Xi_bound} \Xi_J = \sum_{i\in J} \|\bm{h}\|^{-2} \sum_{j=1}^{p+1} \Big\| \frac{\partial \bm{h}}{\partial z_{ij}} \Big\|^2 \lesssim \|\bm{B}\|_F^2 \|\bm{L}_J \bm{Z}\|_{\mathop{\mathrm{op}}}^2 \lesssim p (k\mu)^{-2}\|\bm{L}_J \bm{Z}\|_{\mathop{\mathrm{op}}}^2.$$ Thus, [\[eq:Xi_bound\]](#eq:Xi_bound){reference-type="eqref" reference="eq:Xi_bound"} and $\mathbb{E}[\|\bm{L}_J\bm{Z}\|_{\mathop{\mathrm{op}}}^4] \lesssim (|J|+p)^2$ by [\[lem:gaussian_oper_bound\]](#lem:gaussian_oper_bound){reference-type="eqref" reference="lem:gaussian_oper_bound"} yield $$\mathbb{E}\bigg[ \frac{1}{\|\bm{h}\|^2 \|\widetilde\bm{h}\|^2} \Bigl(\bm{r}^\top\bm{L}_J \widetilde\bm{r}+ \sum_{i\in J}\sum_{j=1}^{q} \frac{\partial r_i \widetilde h_j }{\partial z_{ij}} \Bigr)^2 \mathrel{\Big |} J \bigg] \lesssim |J| + p (|J|+p)^2 (k\mu)^{-2} \lesssim \tau^{-2} (|J| + p(|J|+p)^2/k^2),$$ thanks to $\tau=\min(1, \mu)$.
It remains to bound the error inside the square in the LHS; the derivative formula [\[eq:derivartive_formula\]](#eq:derivartive_formula){reference-type="eqref" reference="eq:derivartive_formula"} leads to $$\frac{1}{\|\bm{h}\|\|\widetilde\bm{h}\|} \sum_{i\in J}\sum_{j=1}^{p+1}\Bigl( \frac{\partial (\widetilde h_j r_i)}{\partial z_{ij}} - \widetilde\bm{r}^{\top}\widetilde\bm{L}\bm{L}_J \bm{r}\mathop{\mathrm{tr}}[\widetilde\bm{B}] + \mathop{\mathrm{tr}}[\bm{L}_J \bm{V}] \widetilde\bm{h}^{\top}\bm{h}\Bigr) = \frac{\displaystyle - \widetilde\bm{h}^{\top} \widetilde\bm{B}\bm{Z}^{\top} \widetilde\bm{L}\bm{L}_J \bm{r} - \bm{r}^{\top} \bm{L}\bm{L}_J \bm{Z}\bm{B}\widetilde\bm{h}}{\displaystyle \|\bm{h}\|\|\widetilde\bm{h}\|}.$$ Here, the square of the RHS can be bounded from above by $4\mu^{-2} k^{-2} \|\bm{L}_J \bm{Z}\|_{\mathop{\mathrm{op}}}^4$. From this and $\mathbb{E}[\|\bm{L}_J\bm{Z}\|_{\mathop{\mathrm{op}}}^4] \lesssim (|J|+p)^2$ by [\[lem:gaussian_oper_bound\]](#lem:gaussian_oper_bound){reference-type="eqref" reference="lem:gaussian_oper_bound"}, we find that the expectation of the square of the RHS is bounded from above by $(|J|+p)^2 (k\mu)^{-2}$ up to some absolute constant. This finishes the proof. ------------------------------------------------------------------------ **Lemma 1** (Variant of @bellec2020out [Proposition 6.1]). Assume that $\bm{h}$ and $\widetilde\bm{h}$ are locally Lipschitz functions from ${{\mathbb{R}}}^{n\times q} \to {{\mathbb{R}}}^q$ such that $\|\bm{h}\|^2, \|\widetilde\bm{h}\|^2 \neq 0$, and let $\bm{r}= -\bm{Z}\bm{h}$ and $\widetilde\bm{r}= -\bm{Z}\widetilde\bm{h}$, where $\bm{Z}\in {{\mathbb{R}}}^{n\times q}$ has i.i.d. $\mathcal{N}(0,1)$ entries.
If $J\subset[n]$ is independent of $\bm{Z}$, we have $$\begin{aligned} \mathbb{E}\bigg[ \frac{1}{\|\bm{h}\|^2 \|\widetilde\bm{h}\|^2} \Bigl(\bm{r}^\top\bm{L}_J \widetilde\bm{r}+ \sum_{i\in J}\sum_{j=1}^{q} \frac{\partial r_i \widetilde h_j }{\partial z_{ij}} \Bigr)^2 \mathrel{\Big |} J \bigg] \lesssim |J| + \mathbb{E}[\|\bm{L}_J \bm{Z}\|_{\mathop{\mathrm{op}}}^2 (1 + \Xi_J + \widetilde\Xi_J)],\end{aligned}$$ where $\bm{L}_J = \sum_{i\in J} \bm{e}_i\bm{e}_i^\top$, $\Xi_J = \sum_{i\in J} \sum_{j=1}^{q} \|\bm{h}\|^{-2} \Big\|\frac{\partial \bm{h}}{\partial z_{ij}}\Big\|^2$, and $\widetilde\Xi_J = \sum_{i\in J} \sum_{j=1}^{q} \|\widetilde\bm{h}\|^{-2} \Big\|\frac{\partial \widetilde\bm{h}}{\partial z_{ij}}\Big\|^2$. Let $\bm{\varrho}= \bm{L}_J \bm{r}/\|\bm{h}\| = - \bm{L}_J \bm{Z}\bm{h}/\|\bm{h}\|\in {{\mathbb{R}}}^n$ and $\widetilde{\bm \eta}:= \widetilde\bm{h}/\|\widetilde\bm{h}\|\in {{\mathbb{R}}}^{q}$. Then, Proposition 6.1 in [@bellec2020out] implies $$\label{eq:sure_quadratic} \mathbb{E}\bigg[\Big(-\bm{\varrho}^\top \bm{Z}\widetilde{\bm\eta} + \sum_{i,j} \frac{\partial (\rho_i \widetilde\eta_j)}{\partial z_{ij}}\Big)^2\bigg] \le \mathbb{E}[\|\bm{\varrho}\|^2 \|\widetilde{\bm\eta}\|^2] + \sum_{ij} \mathbb{E}\bigg[\|\bm{\varrho}\|^2 \Big\|\frac{\partial\widetilde{\bm \eta}}{\partial z_{ij}}\Big\|^2 + \|\widetilde{\bm\eta}\|^2 \Big\|\frac{\partial \bm{\varrho}}{\partial z_{ij}}\Big\|^2\bigg],$$ where $\mathbb{E}[\|\bm{\varrho}\|^2 \|\widetilde{\bm\eta}\|^2] \le \mathbb{E}[\|\bm{L}_J\bm{Z}\|_{\mathop{\mathrm{op}}}^2]$ thanks to $\|\widetilde{\bm\eta}\|^2 = 1$ and $\|\bm{\varrho}\|^2 \le \|\bm{L}_J \bm{Z}\|_{\mathop{\mathrm{op}}}^2$. It remains to bound the second term.
Here, we use the following identity: if $\bm{h}: {{\mathbb{R}}}^{n\times q}\to {{\mathbb{R}}}^{q}$ is locally Lipschitz with $\|\bm{h}\|^2\neq 0$, we have $$\label{eq:identity_frac} \frac{\partial}{\partial z_{ij}} \Big(\frac{\bm{h}}{\|\bm{h}\|}\Big) = \frac{\bm{P}^\perp}{\|\bm{h}\|} \frac{\partial \bm{h}}{\partial z_{ij}}\text{ with }\bm{P}^\perp = \bm{I}_{q} - \frac{\bm{h}\bm{h}^\top}{\|\bm{h}\|^2} \text{ so that } \Big\| \frac{\partial}{\partial z_{ij}} \Big(\frac{\bm{h}}{\|\bm{h}\|}\Big) \Big\|^2 \le \frac{1}{\|\bm{h}\|^2} \Big\|\frac{\partial\bm{h}}{\partial z_{ij}}\Big\|^2,$$ where the inequality follows from $\|\bm{P}^\perp\|_{\mathop{\mathrm{op}}} \le 1$. Then, [\[eq:identity_frac\]](#eq:identity_frac){reference-type="eqref" reference="eq:identity_frac"} and $\|\bm{\varrho}\|^2 \le \|\bm{L}_J \bm{Z}\|_{\mathop{\mathrm{op}}}^2$ yield $$\sum_{i\in J}\sum_{j=1}^q \|\bm{\varrho}\|^2 \Big\|\frac{\partial\widetilde{\bm \eta}}{\partial z_{ij}}\Big\|^2 \le \|\bm{L}_J\bm{Z}\|_{\mathop{\mathrm{op}}}^2 \sum_{i\in J}\sum_j \frac{1}{\|\widetilde\bm{h}\|^2} \Big\|\frac{\partial \widetilde\bm{h}}{\partial z_{ij}}\Big\|^2 = \|\bm{L}_J \bm{Z}\|_{\mathop{\mathrm{op}}}^2 \widetilde\Xi_J.$$ In the same way, $\bm{\varrho}=-\bm{L}_J\bm{Z}\bm{h}/\|\bm{h}\|$, $\|\widetilde{\bm\eta}\|^2 = 1$, and [\[eq:identity_frac\]](#eq:identity_frac){reference-type="eqref" reference="eq:identity_frac"} lead to $$\sum_{i\in J}\sum_{j=1}^q\|\widetilde{\bm\eta}\|^2 \Big\|\frac{\partial \bm{\varrho}}{\partial z_{ij}}\Big\|^2 = \sum_{i\in J}\sum_{j=1}^q \Big\|\bm{L}_J \Big(\bm{e}_i \frac{h_j}{\|\bm{h}\|} + \bm{Z}\frac{\partial}{\partial z_{ij}} \Big(\frac{\bm{h}}{\|\bm{h}\|}\Big)\Big)\Big\|^2 \le 2|J| + 2\|\bm{L}_J\bm{Z}\|_{\mathop{\mathrm{op}}}^2 \Xi_J.$$ Therefore, the RHS of [\[eq:sure_quadratic\]](#eq:sure_quadratic){reference-type="eqref" reference="eq:sure_quadratic"} is bounded from above by $|J| + \mathbb{E}[\|\bm{L}_J \bm{Z}\|_{\mathop{\mathrm{op}}}^2 (1 + \Xi_J + \widetilde\Xi_J)]$ up to some absolute
constant. It remains to control the error inside the square: $$\begin{aligned} & \frac{1}{\|\bm{h}\|\|\widetilde\bm{h}\|}\sum_{i\in J} \sum_{j\in[q]} \frac{\partial r_i\widetilde h_j}{\partial z_{ij}} - \sum_{i\in J}\sum_{j\in[q]} \frac{\partial \rho_i \widetilde\eta_j}{\partial z_{ij}} =\sum_{i\in J}\sum_{j=1}^q \frac{r_i \widetilde h_j}{\|\bm{h}\|\|\widetilde\bm{h}\|} \bigg( \frac{\bm{h}^\top}{\|\bm{h}\|^2}\frac{\partial\bm{h}}{\partial z_{ij}} + \frac{\widetilde\bm{h}^\top }{\|\widetilde\bm{h}\|^2} \frac{\partial \widetilde\bm{h}}{\partial z_{ij}} \bigg)\end{aligned}$$ By multiple applications of the Cauchy--Schwarz inequality, the square of the RHS is bounded from above by $2\|\bm{L}_J \bm{Z}\|_{\mathop{\mathrm{op}}}^2 (\Xi_J + \widetilde{\Xi}_J)$. This finishes the proof. ------------------------------------------------------------------------ ### Proof of {#proof:trB_df} with $J=I=\widetilde I$ and $0 \le \mathop{\mathrm{tr}}[\bm{B}] \le p \|\bm{B}\|_{\mathop{\mathrm{op}}} \le p/(k\mu)$ by [\[eq:property_B\]](#eq:property_B){reference-type="eqref" reference="eq:property_B"} yield $$\mathbb{E}|(1+\mathop{\mathrm{tr}}[\bm{B}]) \xi_1| \lesssim \mu^{-1} (p/k ) \mathbb{E}|\xi_1| \lesssim \tau^{-1} (p/k) \sqrt{k} \tau^{-1} (1 + p/k)^{3/2} \lesssim \sqrt{k}\tau^{-2} (1+p/k)^{5/2},$$ while $\mathbb{E}|\xi_2| \lesssim \sqrt{k} \tau^{-2} (1+p/k)^2$ by below. Thus, we obtain $$\mathbb{E}[|k - \mathop{\mathrm{tr}}[\bm{L}\bm{V}](1+\mathop{\mathrm{tr}}[\bm{B}])|] \lesssim \sqrt{k}\tau^{-2} (1+p/k)^{5/2} + \sqrt{k} \tau^{-2} (1+p/k)^2 \lesssim \sqrt{k}\tau^{-2} (1+p/k)^{5/2}.$$ This finishes the proof. ------------------------------------------------------------------------ **Lemma 2**. We have $\mathbb{E}[|k - (1+\mathop{\mathrm{tr}}[\bm{B}])^2 \|\bm{L}\bm{r}\|^2/\|\bm{h}\|^2|] \lesssim k^{1/2} \tau^{-2} (1 + p/k)^2$.
Define $\bm{a}\in {{\mathbb{R}}}^{n}$ and $\bm{b}\in {{\mathbb{R}}}^{n}$ as $$\text{for all } i\in [n], \quad a_i = \frac{1}{\|\bm{h}\|} (1+\mathop{\mathrm{tr}}[\bm{B}])(\bm{L}\bm{r})_i \quad \text{and} \quad b_i = \frac{1}{\|\bm{h}\|} (\bm{L}\bm{r})_i + \frac{1}{\|\bm{h}\|} \sum_{j=1}^{p+1} \frac{\partial h_j}{\partial z_{ij}}$$ so that $(1+\mathop{\mathrm{tr}}[\bm{B}])^2 \|\bm{L}\bm{r}\|^2/\|\bm{h}\|^2 = \|\bm{a}\|^2$. ahead and the Cauchy--Schwarz inequality yield $$\begin{aligned} \mathbb{E}[|k-\|\bm{a}\|^2|] &\le \mathbb{E}[2 (\sqrt{k}\|\bm{a}-\bm{b}\| + |k-\|\bm{b}\|^2| + \|\bm{a}-\bm{b}\|^2)] \notag \relax &\lesssim \sqrt{k \mathbb{E}[\|\bm{a}-\bm{b}\|^2]} + \mathbb{E}[\|\bm{a}-\bm{b}\|^2] + \mathbb{E}[|k-\|\bm{b}\|^2|]. \label{eq:basic_triangle_cauchy}\end{aligned}$$ Thus, it suffices to bound $\mathbb{E}[\|\bm{a}-\bm{b}\|^2]$ and $\mathbb{E}[|k-\|\bm{b}\|^2|]$. For $\mathbb{E}\|\bm{a}-\bm{b}\|^2$, [\[eq:property_B\]](#eq:property_B){reference-type="eqref" reference="eq:property_B"}, [\[eq:derivartive_formula\]](#eq:derivartive_formula){reference-type="eqref" reference="eq:derivartive_formula"}, and yield $$\mathbb{E}\|\bm{a}-\bm{b}\|^2 = \mathbb{E}[\|\bm{h}\|^{-2} \|\bm{L}\bm{Z}\bm{B}^\top \bm{h}\|^2] \le \mathbb{E}[\|\bm{B}\|_{\mathop{\mathrm{op}}}^2 \|\bm{L}\bm{Z}\|_{\mathop{\mathrm{op}}}^2] \lesssim (k\mu)^{-2} (k+p) \lesssim k^{-1} \tau^{-2} (1+p/k),$$ where $\tau = \min(1, \mu)$. If we denote $\|\bm{h}\|^{-2} \sum_{i\in I} \sum_{j=1}^{p+1} \|(\partial/\partial z_{ij})\bm{h}\|^2$ by $\Xi_I$, below leads to $$\begin{aligned} \mathbb{E}[|k-\|\bm{b}\|^2|] &\lesssim \sqrt{k(1+\mathbb{E}[\Xi_I])} + \mathbb{E}[\Xi_I] \text{ (by \Cref{prop:chi_square})}\relax &\lesssim \sqrt{k(1+ \mu^{-2} (1+p/k)^2)} + \mu^{-2} (1+p/k)^2 \text{ (thanks to \eqref{eq:Xi_bound} with $J=I$)}\relax &\lesssim \sqrt{k}\tau^{-2} (1+p/k)^2 \text{ (thanks to $\tau=\min(1, \mu)$)}.\end{aligned}$$ This concludes the proof.
------------------------------------------------------------------------ **Lemma 3** (Variant of @bellec2020out [Theorem 7.1]). Let $\bm{h}: {{\mathbb{R}}}^{k \times q}\to {{\mathbb{R}}}^{q}$ be a locally Lipschitz function. If $\bm{Z}\in {{\mathbb{R}}}^{k\times q}$ has i.i.d. $\mathcal{N}(0,1)$ entries, we have $$\mathbb{E}\bigg[\bigg|k - \frac{1}{\|\bm{h}\|^2}\sum_{i=1}^k \Big(\bm{e}_i^\top \bm{Z}\bm{h}- \sum_{j=1}^{q} \frac{\partial h_j}{\partial z_{ij}}\Big)^2\bigg|\bigg] \lesssim \sqrt{k(1 + \mathbb{E}\Xi)} + \mathbb{E}\Xi \text{ with } \Xi = \frac{1}{\|\bm{h}\|^2} \sum_{i=1}^k \sum_{ j=1}^{q} \Big\|\frac{\partial \bm{h}}{\partial z_{ij}}\Big\|^2.$$ Define vectors $\bm{a}, \bm{b}\in {{\mathbb{R}}}^{k}$ by $$\text{for all } i\in [k], \quad a_i = \frac1{\|\bm{h}\|} \Big(\bm{e}_i^\top \bm{Z}\bm{h}- \sum_j \frac{\partial h_j}{\partial z_{ij}}\Big) \quad \text{and} \quad b_i = \bm{e}_i^\top \bm{Z}\frac{\bm{h}}{\|\bm{h}\|} - \sum_j \frac{\partial}{\partial z_{ij}} \Bigl(\frac{h_j}{\|\bm{h}\|}\Bigr) ,$$ so that the LHS of the assertion is $\mathbb{E}|k - \|\bm{a}\|^2|$. The same argument in [\[eq:basic_triangle_cauchy\]](#eq:basic_triangle_cauchy){reference-type="eqref" reference="eq:basic_triangle_cauchy"} leads to $$\mathbb{E}[|k-\|\bm{a}\|^2|] \lesssim \sqrt{k \mathbb{E}[\|\bm{a}-\bm{b}\|^2]} + \mathbb{E}[\|\bm{a}-\bm{b}\|^2] + \mathbb{E}[|k-\|\bm{b}\|^2|].$$ Below we bound $\mathbb{E}[\|\bm{a}-\bm{b}\|^2]$ and $\mathbb{E}[|k-\|\bm{b}\|^2|]$.
For $\|\bm{a}-\bm{b}\|^2$, multiple applications of the Cauchy--Schwarz inequality lead to $$\begin{aligned} \|\bm{a}-\bm{b}\|^2 &= \sum_{i=1}^k \Bigr\{-\frac{1}{\|\bm{h}\|} \sum_{j}\frac{\partial h_j}{\partial z_{ij}} + \sum_j \frac{\partial}{\partial z_{ij}} \Big(\frac{h_j}{\|\bm{h}\|}\Big)\Bigr\}^2 \relax &= \sum_{i=1}^k \Bigl( - \sum_{j} \frac{\bm{h}^\top}{\|\bm{h}\|^3} \frac{\partial \bm{h}}{\partial z_{ij}} h_j\Bigr)^2 \relax &\le \sum_{i,j} \frac{1}{\|\bm{h}\|^2} \Big\|\frac{\partial \bm{h}}{\partial z_{ij}}\Big\|^2 = \Xi.\end{aligned}$$ For $\mathbb{E}[|k-\|\bm{b}\|^2|]$, Theorem 7.1 in [@bellec2020out] applied to the unit vector $\bm{h}/\|\bm{h}\|\in {{\mathbb{R}}}^q$ implies $$\mathbb{E}|k -\|\bm{b}\|^2| \lesssim \sqrt{k(1 + \mathbb{E}\Xi')} + \mathbb{E}\Xi' \quad \text{with} \quad \Xi' := \sum_{i=1}^k \sum_{j=1}^q \Big\|\frac{\partial}{\partial z_{ij}} \Bigl(\frac{\bm{h}}{\|\bm{h}\|} \Bigr)\Big\|^2.$$ Since $\Xi' \le \Xi$ by [\[eq:identity_frac\]](#eq:identity_frac){reference-type="eqref" reference="eq:identity_frac"}, we have $\mathbb{E}[|k-\|\bm{b}\|^2|] \lesssim \sqrt{k (1 + \mathbb{E}[\Xi])} + \mathbb{E}[\Xi].$ This finishes the proof. ------------------------------------------------------------------------ **Lemma 4**. We have $|k-\|\bm{a}\|^2| \le 2 (\sqrt{k}\|\bm{a}-\bm{b}\| + |k-\|\bm{b}\|^2| + \|\bm{a}-\bm{b}\|^2)$ for all vectors $\bm{a}, \bm{b}$ in the same Euclidean space. By multiple applications of the triangle inequality and $2ab \le a^2 + b^2$, we have $$\begin{aligned} \bigl||k- \|\bm{a}\|^2| - |k-\|\bm{b}\|^2|\bigr| &\le \|\bm{a}- \bm{b}\|\|\bm{a}+ \bm{b}\| \relax &\le \|\bm{a}-\bm{b}\|^2 + 2\|\bm{a}-\bm{b}\|\|\bm{b}\| \relax&\le \|\bm{a}-\bm{b}\|^2 + 2\|\bm{a}-\bm{b}\| (\sqrt{|\|\bm{b}\|^2 - k |} + \sqrt{k}) \relax &\le 2\|\bm{a}-\bm{b}\|^2 + |\|\bm{b}\|^2-k| + 2\|\bm{a}-\bm{b}\| \sqrt{k}.\end{aligned}$$ Adding $|\|\bm{b}\|^2 - k|$ to both sides and using the triangle inequality, we conclude the proof.
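The deterministic inequality of Lemma 4 can be stress-tested numerically; a quick sketch (not part of the proof; the random dimensions and scales are arbitrary test choices, assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(1)
for _ in range(1000):
    d = int(rng.integers(1, 10))               # dimension of a, b
    k = int(rng.integers(1, 50))               # the scalar k in the lemma
    a = rng.standard_normal(d) * rng.uniform(0.1, 10)
    b = rng.standard_normal(d) * rng.uniform(0.1, 10)
    lhs = abs(k - a @ a)                       # |k - ||a||^2|
    rhs = 2 * (np.sqrt(k) * np.linalg.norm(a - b)
               + abs(k - b @ b)
               + np.linalg.norm(a - b) ** 2)
    assert lhs <= rhs + 1e-9                   # Lemma 4's bound holds
```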
------------------------------------------------------------------------ ### Proof of {#proof:contraction_cor} Letting $\bm{v}= \bm{h}/\|\bm{h}\|$ and $\widetilde{\bm{v}}=\widetilde{\bm{h}}/\|\widetilde\bm{h}\|$, the statement of can be written as follows; if the same penalty is used for $\bm{h}$ and $\widetilde\bm{h}$, we have $$\mathbb{E}\Bigl[\bm{v}^\top \widetilde\bm{v}\mathrel{\big |} (\bm{z}_i)_{i\in I\cap \widetilde I}\Bigr] \ge 0, \quad \text{and} \quad \mathbb{E}\Bigl[\mathop{\mathrm{{\rm Var}}}\Bigl[\bm{v}^\top\widetilde\bm{v}\mathrel{\big |} (\bm{z}_i)_{i\in I\cap \widetilde I}\Bigr]\Bigr] \lesssim \frac{\gamma}{nc^2\tau^2},$$ Below we prove this claim. Here, the key fact is that conditionally on $(\bm{z}_i)_{i\in I \cap \widetilde I}$, the random vectors $\bm{v}$ and $\widetilde{\bm{v}}$ are independent and identically distributed. Then, it immediately follows from this fact that $$\mathbb{E}\Bigl[\bm{v}^\top\widetilde\bm{v}\mathrel{\big |} (\bm{z}_i)_{i \in I\cap \widetilde I}\Bigr] = \Big\Vert\mathbb{E}\Bigl[\bm{v}\mathrel{\big |} (\bm{z}_{i})_{i\in I\cap \widetilde I} \Bigr]\Big\Vert^2 \ge 0.$$ Next we derive the bound of the variance: first using the Gaussian Poincaré inequality for the inequality below, second using that $\bm{v}$ does not depend on $(\bm{z}_i)_{i\in \widetilde{I}\setminus I}$ and $\widetilde{\bm{v}}$ does not depend on $(\bm{z}_{i})_{i\in I\setminus\widetilde I}$ for the equality below, $$\begin{aligned} \mathop{\mathrm{{\rm Var}}}\Bigl[\bm{v}^\top\widetilde\bm{v}\mid (\bm{z}_i)_{i\in I\cap \widetilde I}\Bigr] &\le \mathbb{E}\Bigl[\sum_{j\in[p+1]} \sum_{i\in (I\setminus\widetilde I) \cup (\widetilde I\setminus I)} \Bigl(\frac{\partial }{\partial z_{ij}} \bm{v}^\top \widetilde\bm{v}\Bigr)^2 \mid (\bm{z}_i)_{i\in I\cap \widetilde I} \Bigr] \relax &= \sum_{j\in[p+1]} \Bigg(\sum_{i\in I\setminus\widetilde I}\mathbb{E}\Bigl[\Bigl( \widetilde\bm{v}^\top \frac{\partial\bm{v}}{\partial z_{ij}} \Bigr)^2 \mid (\bm{z}_i)_{i\in I\cap 
\widetilde I}\Bigr] + \sum_{i\in \widetilde I\setminus I} \mathbb{E}\Bigl[\Bigl( \bm{v}^\top \frac{\partial\widetilde\bm{v}}{\partial z_{ij}} \Bigr)^2 \mid (\bm{z}_i)_{i\in I\cap \widetilde I} \Big]\Bigg).\end{aligned}$$ By the symmetry of $(\bm{h}, \widetilde\bm{h})$, it suffices to bound the first term. Letting $\bm{P}^\perp = \bm{I}_{p+1} - \bm{v}\bm{v}^\top$, we obtain the following from the identity [\[eq:identity_frac\]](#eq:identity_frac){reference-type="eqref" reference="eq:identity_frac"}: $$\begin{aligned} \sum_{j\in[p+1]}\sum_{i\in I\setminus\widetilde I}\Bigl( \widetilde\bm{v}^\top \frac{\partial\bm{v}}{\partial z_{ij}} \Bigr)^2 &= \frac{1}{\|\bm{h}\|^2} \sum_{j\in[p+1]}\sum_{i\in I} (\widetilde\bm{v}^\top \bm{P}^\perp \bm{B}\bm{e}_j (\bm{L}\bm{r})_i - \widetilde\bm{v}^\top \bm{P}^\perp \bm{B}\bm{Z}^\top \bm{L}\bm{e}_i h_j)^2\relax &\lesssim \|\bm{B}\|_{\mathop{\mathrm{op}}}^2 \|\bm{L}\bm{Z}\|_{\mathop{\mathrm{op}}}^2 + \|\bm{L}\bm{Z}\bm{B}^\top\|_{\mathop{\mathrm{op}}}^2 \lesssim (k\mu)^{-2} \|\bm{L}\bm{Z}\|_{\mathop{\mathrm{op}}}^2. \end{aligned}$$ Thus, the moment bound $\mathbb{E}[\|\bm{L}\bm{Z}\|_{\mathop{\mathrm{op}}}^2]\lesssim (k+p)$ by leads to $$\mathbb{E}\Bigl[\mathop{\mathrm{{\rm Var}}}\Bigl[\bm{v}^\top\widetilde\bm{v}\mid (\bm{z}_i)_{i\in I\cap \widetilde I}\Bigr]\Bigr] \lesssim (k\mu)^{-2} \mathbb{E}\Bigl[\|\bm{L}\bm{Z}\|_{\mathop{\mathrm{op}}}^2 + \|\widetilde{\bm{L}}\bm{Z}\|_{\mathop{\mathrm{op}}}^2\Bigr] \lesssim (k\mu)^{-2} (k+p) \lesssim n^{-1} \tau^{-2} c^{-2}\gamma,$$ thanks to $\tau=\min(1, \mu)$, $c=k/n\in(0,1)$, and $\gamma=\max(1, p/n)$. This completes the proof. ------------------------------------------------------------------------ ### Proof of {#proof:df_lower_bound} Below, we prove this assertion for $\char"0023=\textup{\textrm{ovlp}}$ and $\char"0023=\textup{\textrm{full}}$, separately. 
**Proof for $\char"0023=\textup{\textrm{ovlp}}$.** Recall that $d_{m,\ell}^\textup{\textrm{ovlp}}= |I_m\cap I_\ell|(1-{\widehat{\mathsf{df}}}{}_m/k)(1-{\widehat{\mathsf{df}}}{}_\ell/k)$. with $I=I_m$ and $I=I_\ell$ implies that there exists an absolute constant $C\in(0,1)$ such that $$\begin{aligned} \text{for all } m, \ell, \quad \mathbb{P}((1-{\widehat{\mathsf{df}}}{}_m/k)(1-{\widehat{\mathsf{df}}}{}_\ell/k) \le C^2 \tau^{2} c^2 \gamma^{-2}) \le 2 e^{-nc/2} \label{eq:df_k_df_ell}. \end{aligned}$$ (introduced later) and Markov's inequality lead to the following: $$\text{for all } m\neq \ell, \quad \mathbb{P}(\bigl||I_m\cap I_\ell|n/k^2-1\bigr| > 1/2) \le 4 \mathbb{E}[(|I_m\cap I_\ell|n/k^2-1)^2] \le 4 n^2 k^{-4} k^2 n^{-1} = 4 n^{-1} c^{-2},$$ which implies $$\mathbb{P}(|I_m\cap I_\ell| \le k^2/(2n) = 2^{-1} nc^2) \le 4n^{-1} c^{-2}.$$ Note in passing that the above inequality also holds for $m=\ell$ since $|I_m\cap I_\ell| = k \ge k^2/(2n)$ with probability $1$. Therefore, we have, for all $m, \ell$, $$\begin{aligned} \mathbb{P}(d_{m, \ell}^{\textup{\textrm{ovlp}}} \le 2^{-1} C^2 n c^4\tau^2 \gamma^{-2}) &\le \mathbb{P}(|I_m\cap I_\ell| \le 2^{-1}nc^2) + \mathbb{P}((1-{\widehat{\mathsf{df}}}{}_m/k)(1-{\widehat{\mathsf{df}}}{}_\ell/k) \le C^2 \tau^{2} c^2 \gamma^{-2})\relax &\le 4n^{-1} c^{-2} + 2e^{-nc/2} \lesssim n^{-1}c^{-2}.\end{aligned}$$ **Proof for $\char"0023=\textup{\textrm{full}}$.** Recall that $d_{m,\ell}^\textup{\textrm{full}}= n - {\widehat{\mathsf{df}}}{}_m-{\widehat{\mathsf{df}}}{}_\ell + k^{-2} |I_m\cap I_\ell| {\widehat{\mathsf{df}}}{}_m {\widehat{\mathsf{df}}}{}_\ell$. We consider $m=\ell$ and $m\neq \ell$ separately.
When $m=\ell$, [\[eq:df_k\_df_ell\]](#eq:df_k_df_ell){reference-type="eqref" reference="eq:df_k_df_ell"} with $m=\ell$ implies that the following holds with probability at least $1-2e^{-nc/2}$: $$n^{-1} d_{m,m}^\textup{\textrm{full}}= c (1-{\widehat{\mathsf{df}}}{}_m/k)^2 + 1-c \ge c C^2 \tau^2 c^2 \gamma^{-2} + 1 - c \ge C^2 \tau^2 \gamma^{-2} (c^3 + 1- c) \ge C' \tau^2\gamma^{-2},$$ where $C' = C^2 \min_{c\in[0,1]}(c^3+1-c) = C^2(1-2\sqrt{3}/9)$ is a positive absolute constant. When $m\neq \ell$, we decompose $$\frac{d_{m, \ell}^\textup{\textrm{full}}}{n} = \Big(1-\frac{{\widehat{\mathsf{df}}}{}_m}{n}\Big)\Big(1-\frac{{\widehat{\mathsf{df}}}{}_\ell}{n}\Big) + \frac{{\widehat{\mathsf{df}}}{}_m}{n}\frac{{\widehat{\mathsf{df}}}{}_\ell}{n} \Big(\frac{|I_m\cap I_\ell| \cdot n}{k^2}-1\Big) =: A_{m, \ell} + B_{m,\ell}.$$ implies that there exists an absolute constant $C\in(0,1)$ such that $$\mathbb{P}(A_{m,\ell} \le C^2 \tau^2 \gamma^{-2}) \le 2e^{-nc/2}.$$ For $B_{m,\ell}$, $0<{\widehat{\mathsf{df}}}{}_m <k = nc$ and imply $$\mathbb{E}[|B_{m,\ell}|^2] \le (\frac{k^2}{n^2} \frac{n}{k^2})^2 \mathbb{E}[||I_m\cap I_\ell|-\frac{k^2}{n}|^2] \le \frac{1}{n^2} \frac{k^2}{n} = \frac{c^2}{n}.$$ Thus, Markov's inequality leads to $$\begin{aligned} \mathbb{P}(d_{m, \ell}^\textup{\textrm{full}}\le 2^{-1} n C^2 \tau^2\gamma^{-2}) &\le \mathbb{P}(A_{m, \ell} \le C^2 \tau^2\gamma^{-2}) + \mathbb{P}(|B_{m, \ell}| > 2^{-1} C^2 \tau^2\gamma^{-2}) \relax &\le 2e^{-nc/2} + (2^{-1} C^2\tau^2\gamma^{-2})^{-2} \mathbb{E}[|B_{m,\ell}|^2] \relax &\lesssim (nc)^{-1} + \tau^{-4}\gamma^{4} c^2 n^{-1} \lesssim n^{-1} c^{-1} \tau^{-4} \gamma^4.\end{aligned}$$ This concludes the proof. ------------------------------------------------------------------------ **Lemma 5**.
There exists an absolute constant $C\in(0,1)$ such that $$\mathbb{P}(1- {\widehat{\mathsf{df}}}{}/k \ge C \tau c \gamma^{-1}) \ge 1- e^{-nc/2}, \quad \mathbb{P}(1-{\widehat{\mathsf{df}}}{}/n \ge C\tau\gamma^{-1}) \ge 1 - e^{-nc/2}.$$ Equation [\[eq:property_tr\]](#eq:property_tr){reference-type="eqref" reference="eq:property_tr"} implies $$(1-{\widehat{\mathsf{df}}}{}/k)^{-1} \le 1+\mu^{-1} \|\bm{L}\bm{G}\|_{\mathop{\mathrm{op}}}^2/k \le \tau^{-1} (1+\|\bm{L}\bm{G}\|_{\mathop{\mathrm{op}}}^2/k),$$ thanks to $\tau=\min(1,\mu)$, where $\bm{G}\in{{\mathbb{R}}}^{n\times p}$ has i.i.d. Gaussian entries. Since $\|\bm{L}\bm{G}\|_{\mathop{\mathrm{op}}} \stackrel{\mathrm{d}}{=} \|\bm{G}'\|_{\mathop{\mathrm{op}}}$ with $\bm{G}'\in {{\mathbb{R}}}^{k\times p}$ having i.i.d. $\mathcal{N}(0,1)$ entries, with $t=\sqrt{k} + \sqrt{p}$ implies $$\mathbb{P}(\|\bm{L}\bm{G}\|_{\mathop{\mathrm{op}}}^2 > \{2(\sqrt{k} + \sqrt{p})\}^2) \le e^{-(\sqrt{k} + \sqrt{p})^2/2} \le e^{-(k+p)/2} \le e^{-k/2}.$$ Since $\{2(\sqrt{k} + \sqrt{p})\}^2 \le 8(k+p)$, the following holds with probability at least $1-e^{-k/2}$: $$(1-{\widehat{\mathsf{df}}}{}/k)^{-1} \le \tau^{-1} (1 + 8(1+p/k)) \le 9 \tau^{-1} (1+p/k) \le 9 \tau^{-1} (1+c^{-1}p/n) \le 18 \tau^{-1} c^{-1}\gamma,$$ thanks to $\gamma=\max(1, p/n)$. This completes the proof for $1-{\widehat{\mathsf{df}}}{}/k$. For $1-{\widehat{\mathsf{df}}}{}/n$, $0 < {\widehat{\mathsf{df}}}{} < k = nc$ and the above display lead to $$\mathbb{P}(1-{\widehat{\mathsf{df}}}{}/n \ge (1-c) \vee \{(18)^{-1} \tau c \gamma^{-1}\}) \ge 1-e^{-nc/2},$$ where $(1-c) \vee \{(18)^{-1} \tau c \gamma^{-1}\} \ge (18)^{-1} \tau \gamma^{-1} ((1-c) \vee c) \ge (18)^{-1} \tau \gamma^{-1} 2^{-1}$ thanks to $c,\tau\in(0,1]$ and $\gamma>1$. Therefore taking $C=(36)^{-1}$ concludes the proof.
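The elementary step $\{2(\sqrt{k}+\sqrt{p})\}^2 \le 8(k+p)$ is just $(x+y)^2 \le 2(x^2+y^2)$; a throwaway numerical check (not part of the proof; the random test values assume NumPy):

```python
import numpy as np

rng = np.random.default_rng(4)
k = rng.uniform(0.0, 1e6, size=10_000)     # nonnegative test values for k
p = rng.uniform(0.0, 1e6, size=10_000)     # nonnegative test values for p
lhs = (2 * (np.sqrt(k) + np.sqrt(p))) ** 2
assert np.all(lhs <= 8 * (k + p) + 1e-6)   # {2(sqrt k + sqrt p)}^2 <= 8(k+p)
```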
------------------------------------------------------------------------ ### Proof of {#proof:concentrate_denom_full} We first show that there exists a deterministic scalar $d_n$, not depending on $m$, such that $$\label{eq:df_concentrate_constant} \mathbb{P}(|{\widehat{\mathsf{df}}}{}_m/n - d_n| > \epsilon) \lesssim e^{-nc/2} + \frac{\gamma^{9/2}}{\epsilon\sqrt{n}\tau^3 c^4}.$$ Then, multiple applications of the triangle inequality lead to $$\begin{aligned} \mathbb{P}(|{\widehat{\mathsf{df}}}{}_m - {\widetilde{\mathsf{df}}}{}_M| > n \epsilon) &\le \mathbb{P}\bigg(\Big|\frac{{\widehat{\mathsf{df}}}{}_m}{n} - d_n\Big|> \frac\epsilon2\bigg) + \mathbb{P}\bigg(\Big|\frac{M^{-1}\sum_m{\widehat{\mathsf{df}}}{}_m}{n} - d_n\Big|>\frac\epsilon2\bigg)\relax &\le \mathbb{P}\bigg(\Big|\frac{{\widehat{\mathsf{df}}}{}_m}{n} - d_n\Big|> \frac\epsilon2\bigg) + \sum_{m=1}^M \mathbb{P}\bigg(\Big|\frac{{\widehat{\mathsf{df}}}{}_m}{n} - d_n\Big|>\frac\epsilon2\bigg)\relax &\lesssim M \Big(e^{-nc/2} + \frac{\gamma^{9/2}}{\epsilon\sqrt{n}\tau^3 c^4}\Big),\end{aligned}$$ which completes the proof. Below we prove [\[eq:df_concentrate_constant\]](#eq:df_concentrate_constant){reference-type="eqref" reference="eq:df_concentrate_constant"}. Let $\bm{L}_m=\bm{L}_{I_m}$ for brevity. Define $(W_m, d_n)$ by $$W_m := \frac{k\|\bm{L}_m \bm{r}_m\|^2}{n^2 \|\bm{h}_m\|^2} = \frac{c\|\bm{L}_m \bm{r}_m\|^2}{n\|\bm{h}_m\|^2}, \quad d_n = \frac{k}{n} - \sqrt{\mathbb{E}[W_m]}.$$ Note that $d_n$ does not depend on $m$ by symmetry.
Since ${\widehat{\mathsf{df}}}{}_m = k - \mathop{\mathrm{tr}}[\bm{L}_m\bm{V}_m]$, our goal is to bound $${{\widehat{\mathsf{df}}}{}_m}/{n} - d_n = \sqrt{\mathbb{E}[W_m]} - {\mathop{\mathrm{tr}}[\bm{L}_m\bm{V}_m]}/{n}.$$ with $I = \widetilde{I} = I_m$ implies $$\mathbb{E}\bigg[\Big|W_m -\frac{\mathop{\mathrm{tr}}[\bm{L}_m\bm{V}_m]^2}{n^2}\Big|\bigg] = \frac{k}{n^2} \mathbb{E}\bigg[\Big|\frac{\|\bm{L}_m\bm{r}_m\|^2}{\|\bm{h}_m\|^2} - k^{-1} \mathop{\mathrm{tr}}[\bm{L}_m\bm{V}_m]^2\Big|\bigg] \lesssim \frac{k}{n^2}\cdot \sqrt{n} \tau^{-2} c^{-3} \gamma^{7/2} = \frac{\gamma^{7/2}}{\sqrt{n}\tau^2 c^2},$$ while the Gaussian Poincaré inequality leads to (see below) $$\mathop{\mathrm{{\rm Var}}}[W_m] = \mathop{\mathrm{{\rm Var}}}\bigg[\frac{c}{n} \frac{\|\bm{L}_m\bm{r}_m\|^2}{\|\bm{h}_m\|^2}\bigg] = \frac{c^2}{n^2} \mathop{\mathrm{{\rm Var}}}\bigg[\frac{\|\bm{L}_m\bm{r}_m\|^2}{\|\bm{h}_m\|^2}\bigg] \lesssim \frac{c^2}{n^2} \cdot \frac{n\gamma^3}{c^2 \tau^2} = \frac{\gamma^3}{n\tau^2}.$$ By the above displays, we obtain $$\begin{aligned} \mathbb{E}\bigg[\Big|\frac{\mathop{\mathrm{tr}}[\bm{L}_m\bm{V}_m]^2}{n^2} - \mathbb{E}[W_m]\Big|\bigg] &\le \mathbb{E}\bigg[\Big|\frac{\mathop{\mathrm{tr}}[\bm{L}_m\bm{V}_m]^2}{n^2} - W_m\Big|\bigg] + \mathbb{E}\Big[\big|W_m - \mathbb{E}[W_m]\big|\Big] \relax &\le \mathbb{E}\bigg[\Big|\frac{\mathop{\mathrm{tr}}[\bm{L}_m\bm{V}_m]^2}{n^2} - W_m\Big|\bigg] + \sqrt{\mathop{\mathrm{{\rm Var}}}[W_m]} \relax &\lesssim \frac{\gamma^{7/2}}{\sqrt{n}\tau^2 c^2} + \sqrt{\frac{\gamma^3}{n\tau^2}} \lesssim \frac{\gamma^{7/2}}{\sqrt{n}\tau^2 c^2}.\end{aligned}$$ Now, implies that there exists an absolute constant $C>0$ such that $$\mathbb{P}(\Omega^c) \le e^{-(k+p)/2} \le e^{-nc/2} \text{ with } \Omega = \{\mathop{\mathrm{tr}}[\bm{L}_m\bm{V}_m]/n \ge C \gamma^{-1} c\tau\}.$$ Notice that under $\Omega$, $$\begin{aligned} \Bigl|\frac{\mathop{\mathrm{tr}}[\bm{L}_m\bm{V}_m]^2}{n^2} - \mathbb{E}[W_m]\Bigr| &= \Bigl|\frac{\mathop{\mathrm{tr}}[\bm{L}_m\bm{V}_m]}{n} -
\sqrt{\mathbb{E}[W_m]}\Bigr| \cdot \Bigl|\frac{\mathop{\mathrm{tr}}[\bm{L}_m\bm{V}_m]}{n} + \sqrt{\mathbb{E}[W_m]}\Bigr|\relax &= \Bigl|\frac{{\widehat{\mathsf{df}}}{}_m}{n} - d_n\Bigr| \cdot \Big|\frac{\mathop{\mathrm{tr}}[\bm{L}_m\bm{V}_m]}{n} + \sqrt{\mathbb{E}[W_m]}\Big| \relax & \ge \Bigl|\frac{{\widehat{\mathsf{df}}}{}_m}{n} - d_n\Bigr| \cdot C \gamma^{-1} c^2 \tau.\end{aligned}$$ Combining the above displays together, we obtain $$\begin{aligned} \mathbb{P}\bigg(\Big|\frac{{\widehat{\mathsf{df}}}{}_m}{n} - d_n\Big| > {\epsilon}\bigg) &\le \mathbb{P}(\Omega^c) + \mathbb{P}\bigg(\Big|\frac{\mathop{\mathrm{tr}}[\bm{L}_m\bm{V}_m]^2}{n^2} - \mathbb{E}[W_m]\Big| > \epsilon C \gamma^{-1} c^2\tau\bigg) \relax &\le e^{-nc/2} + \frac{\gamma}{\epsilon C c^2\tau} \mathbb{E}\bigg[\Big|\frac{\mathop{\mathrm{tr}}[\bm{L}_m\bm{V}_m]^2}{n^2} - \mathbb{E}[W_m]\Big|\bigg]\relax &\lesssim e^{-nc/2} + \frac{\gamma}{\epsilon c^2\tau} \cdot \frac{\gamma^{7/2}}{\sqrt{n}\tau^2 c^2} = e^{-nc/2} + \frac{\gamma^{9/2}}{\epsilon\sqrt{n}\tau^3 c^4}.\end{aligned}$$ This finishes the proof. ------------------------------------------------------------------------ **Lemma 6**. We have $\mathop{\mathrm{{\rm Var}}}(\|\bm{L}\bm{r}\|^2/\|\bm{h}\|^2) \lesssim n c^{-2} \tau^{-2} \gamma^3$. Let $\bm{u}= \bm{L}\bm{r} /\|\bm{h}\| = -\bm{L}\bm{Z}\bm{h}/\|\bm{h}\| = -\bm{L}\bm{Z}\bm{v}$ with $\bm{v}= \bm{h}/\|\bm{h}\|$.
The Gaussian Poincaré inequality yields $$\mathop{\mathrm{{\rm Var}}}(\|\bm{u}\|^2) \le \mathbb{E}\bigg[ \sum_{ij} \bigg(\frac{\partial \|\bm{u}\|^2}{\partial z_{ij}}\bigg)^2 \bigg] = 4 \sum_{ij} \mathbb{E}\bigg(\bm{u}^\top \frac{\partial \bm{u}}{\partial z_{ij}}\bigg)^2,$$ where $$\bm{u}^\top \frac{\partial \bm{u}}{\partial z_{ij}} = \bm{v}^\top \bm{Z}^\top \bm{L}( \bm{e}_i v_j +\bm{Z}(\frac{\partial}{\partial z_{ij}} \frac{\bm{h}}{\|\bm{h}\|})) = \bm{v}^\top \bm{Z}^\top \bm{L}\bm{e}_i v_j +\bm{v}^\top \bm{Z}^\top \bm{L}\bm{Z}\bm{P}^\perp \frac{1}{\|\bm{h}\|} \frac{\partial \bm{h}}{\partial z_{ij}} = {{\mathsf{Rem}}}^1_{ij} + {{\mathsf{Rem}}}^2_{ij}.$$ Note that $\sum_{ij} ({{\mathsf{Rem}}}^1_{ij})^2 \le \|\bm{L}\bm{Z}\|_{\mathop{\mathrm{op}}}^2$. For ${{\mathsf{Rem}}}^2_{ij}$, the derivative formula [\[eq:derivartive_formula\]](#eq:derivartive_formula){reference-type="eqref" reference="eq:derivartive_formula"} leads to $$\begin{aligned} \sum_{ij} ({{\mathsf{Rem}}}^2_{ij})^2 &= \frac{1}{\|\bm{h}\|^2} \sum_{ij} (\bm{v}^\top \bm{Z}^\top \bm{L}\bm{Z}\bm{P}^\perp (\bm{B}\bm{e}_j (\bm{L}\bm{r})_i - \bm{B}\bm{Z}^\top \bm{L}\bm{e}_i h_j))^2 \\ &\lesssim \frac{1}{\|\bm{h}\|^2} (\| \bm{B}^\top \bm{P}^\perp \bm{Z}^\top \bm{L}\bm{Z}\bm{v}\|^2 \|\bm{L}\bm{r}\|^2 + \| \bm{L}\bm{Z}\bm{B}^\top \bm{P}^\perp \bm{Z}^\top \bm{L}\bm{Z}\bm{v}\|^2\|\bm{h}\|^2)\\ &\lesssim \|\bm{B}\|_{\mathop{\mathrm{op}}}^2 \|\bm{L}\bm{Z}\|_{\mathop{\mathrm{op}}}^6 \lesssim (k\mu)^{-2} \|\bm{L}\bm{Z}\|_{\mathop{\mathrm{op}}}^6. \end{aligned}$$ Thus, using , we obtain $\mathop{\mathrm{{\rm Var}}}(\|\bm{u}\|^2) \lesssim (k+p) + k^{-2}\mu^{-2} (k^3 + p^3) \lesssim n c^{-2} \tau^{-2} \gamma^3$ thanks to $\tau=\min(1, \mu), c=k/n \in(0,1]$ and $\gamma=\max(1,p/n)$. ------------------------------------------------------------------------ ## Miscellaneous useful facts {#app:elementary-inequality} In this section, we collect several useful known results used in the proofs. **Lemma 7**.
If $\bm{G}\in {{\mathbb{R}}}^{k\times q}$ has i.i.d. $\mathcal{N}(0,1)$ entries, the tail bound $\mathbb{P}(\|\bm{G}\|_{\mathop{\mathrm{op}}} > \sqrt{k} + \sqrt{q} + t) \le \Phi(-t) \le e^{-t^2/2}$ holds for all $t\ge 0$, where $\Phi(\cdot)$ is the CDF of the standard normal. Therefore, we have $\mathbb{E}[\|\bm{G}\|_{\mathop{\mathrm{op}}}^m] \le C(m) (\sqrt{k} + \sqrt{q})^m$ for all $m\ge 1$, where $C(m)>0$ is a constant depending only on $m$. See Theorem 2.13 of @davidson2001local for the tail bound. The moment bound is obtained by integrating the tail bound. ------------------------------------------------------------------------ **Lemma 8** (Simple random sampling properties; see, e.g., page 13 of [@chaudhuri2014modern]). Fix an array $(x_i)_{i=1}^M$ of length $M\ge 1$ and let $\mu_M$ be the mean $M^{-1}\sum_{i\in [M]} x_i$ and $\sigma_M^2$ be the variance $M^{-1} \sum_{i\in [M]} x_i^2 - \mu_M^2$. Suppose $J$ is uniformly distributed on $\{A \subset [M]: |A|=m\}$ for a fixed integer $m\le M$. Then, the mean and variance of the sample mean $\widehat{\mu}_J = m^{-1}\sum_{i\in J} x_i$ are given by $$\mathbb{E}[\widehat{\mu}_J] = \mu_M, \quad \text{and} \quad \mathop{\mathrm{{\rm Var}}}[\widehat{\mu}_J] = \frac{\sigma_M^2}{m} \left(1-\frac{m-1}{M-1}\right) \le \frac{\sum_{i\in [M]} x_i^2}{m M}.$$ **Lemma 9**. If $I$ and $\widetilde I$ are independent and uniformly distributed over the subsets of $[n]$ with cardinality $k\le n$, we have $\mathbb{E}[(|I \cap \widetilde I| - n^{-1}k^2)^2] \le n^{-1} k^2$.
A random variable $X$ is said to follow a hypergeometric distribution, denoted as $X \sim \text{Hypergeometric} (n,K,N)$, if its probability mass function can be expressed as: $$\mathbb{P}(X=k) = \frac{ \binom{K}{k}\binom{N-K}{n-k} }{\binom{N}{n} }, \text{ where } \max\{0, n+K-N\} \leq k \leq \min\{n,K\}.$$ The mean and variance of $X$ are respectively given by (e.g., @greene2017exponential): $$\mathbb{E}[X] = \frac{nK}{N}, \quad \text{and} \quad \mathop{\mathrm{{\rm Var}}}(X) = \frac{nK(N-K)(N-n)}{N^2(N-1)} \le \frac{nK}{N}.$$ Since $|I \cap \widetilde I|\sim \text{Hypergeometric} (k,k,n)$, we have $\mathbb{E}[(|I \cap \widetilde I| - n^{-1}k^2)^2] = \mathop{\mathrm{{\rm Var}}}(|I\cap \widetilde I|) \le k^2/n$. ------------------------------------------------------------------------ # Proofs in {#app:asm-consistency} ## Preparatory definitions {#sec:preparation-asm-consistency} **Fixed-point parameters.** In the study of ridge ensembles under proportional asymptotics, a key quantity that appears is the solution of a fixed-point equation. For any $\lambda > 0$ and $\theta > 0$, define $v_p(-\lambda; \theta)$ as the unique nonnegative solution to the fixed-point equation $$\begin{aligned} v_p(-\lambda;\theta)^{-1} &= \lambda + \theta\int r(1 + v_p(-\lambda;\theta)r)^{-1}{\,\mathrm{d}}H_p(r) ,\label{eq:v_ridge} \end{aligned}$$ where $H_{p}(r) = p^{-1}\sum_{i=1}^{p} \mathop{\mathrm{\mathds{1}}}_{\{r_i \le r\}}$ is the empirical spectral distribution of $\bm{\Sigma}$ and $r_i$'s are the eigenvalues of $\bm{\Sigma}$. For $\lambda=0$, define $v_p(0; \theta):=\lim_{\lambda \to 0^{+}} v_p(-\lambda; \theta)$ for $\theta > 1$ and $\infty$ otherwise. 
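For intuition, the fixed point $v_p(-\lambda;\theta)$ can be computed numerically by simple iteration. The sketch below is our illustration only (not part of the analysis): it takes $H_p$ to be a point mass at $r=1$, in which case the equation reduces to $v^{-1} = \lambda + \theta/(1+v)$ and also has a closed form as the positive root of $\lambda v^2 + (\lambda+\theta-1)v - 1 = 0$.

```python
import math

def v_fixed_point(lam, theta, iters=10_000):
    """Solve v^{-1} = lam + theta / (1 + v) by fixed-point iteration,
    i.e. v_p(-lam; theta) when H_p is a point mass at r = 1."""
    v = 1.0
    for _ in range(iters):
        v = 1.0 / (lam + theta / (1.0 + v))
    return v

def v_closed_form(lam, theta):
    # Positive root of lam*v^2 + (lam + theta - 1)*v - 1 = 0.
    b = lam + theta - 1.0
    return (-b + math.sqrt(b * b + 4.0 * lam)) / (2.0 * lam)

lam, theta = 0.5, 2.0
v = v_fixed_point(lam, theta)
# The iterate satisfies the fixed-point equation and matches the closed form;
# residual should be numerically zero.
residual = abs(1.0 / v - (lam + theta / (1.0 + v)))
```

The same iteration also illustrates the behavior used later in the boundary-case analysis: as $\theta$ grows, the solution $v$ tends to zero.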
**Best linear projection.** Since the ensemble ridge estimators are linear estimators, we evaluate their performance relative to the oracle parameter: $$\bm{\beta}_0 = \mathbb{E}[\bm{x}\bm{x}^{\top}]^{-1} \mathbb{E}[\bm{x}y],$$ which is the best (population) linear projection of $y$ onto $\bm{x}$ and minimizes the linear regression error. Note that we can decompose any response $y$ into: $$\begin{aligned} y = {f_{\textsc{li}}}(\bm{x}) + {f_{\textsc{nl}}}(\bm{x}), \end{aligned}$$ where ${f_{\textsc{li}}}(\bm{x})=\bm{\beta}_0^{\top}\bm{x}$ is the oracle linear predictor, and ${f_{\textsc{nl}}}(\bm{x})=y-{f_{\textsc{li}}}(\bm{x})$ is the nonlinear component that is not explained by ${f_{\textsc{li}}}(\bm{x})$. The best linear projection has the useful property that ${f_{\textsc{li}}}(\bm{x})$ is (linearly) uncorrelated with ${f_{\textsc{nl}}}(\bm{x})$, although they are generally dependent. It is worth mentioning that this does not imply that $y$ and $\bm{x}$ follow a linear regression model. Indeed, our framework allows any nonlinear dependence structure between them and is model-free for the joint distribution of $(\bm{x},y)$. For $n$ i.i.d. samples from the same distribution as $(\bm{x},y)$, we define analogously the vector decomposition: $$\begin{aligned} \label{eq:li-nl-decomposition} \bm{y}= {\bm{f}_{\textsc{li}}}+{\bm{f}_{\textsc{nl}}}, \end{aligned}$$ where ${\bm{f}_{\textsc{li}}}=\bm{X}\bm{\beta}_0$ and ${\bm{f}_{\textsc{nl}}}= [{f_{\textsc{nl}}}(\bm{x}_i)]_{i\in[n]}$. **Covariance and resolvents.** For $j\in[M]$, let $\bm{L}_j$ be a diagonal matrix with $i$-th diagonal entry being 1 if $i\in I_j$ and 0 otherwise. Let $\widehat{\bm{\Sigma}}\xspace_j = \bm{X}^{\top}\bm{L}_{j}\bm{X}/k$ and $\bm{M}_j = (\widehat{\bm{\Sigma}}\xspace_j + \lambda\bm{I}_p)^{-1}$.
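As a quick numerical illustration of the best linear projection above (our addition; the Gaussian design and the particular quadratic nonlinearity are assumptions made only for this example), the sketch below estimates $\bm{\beta}_0$ from a large sample and checks that the linear and nonlinear components are empirically uncorrelated even though the model is not linear.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200_000, 3
beta = np.array([1.0, -2.0, 0.5])

# Gaussian features; response = linear part + a nonlinearity that is
# uncorrelated with x (for x1 ~ N(0,1), E[x * (x1^2 - 1)] = 0).
X = rng.standard_normal((n, p))
y = X @ beta + (X[:, 0] ** 2 - 1.0) + 0.1 * rng.standard_normal(n)

# Sample analogue of beta_0 = E[x x^T]^{-1} E[x y].
beta0_hat = np.linalg.solve(X.T @ X / n, X.T @ y / n)

f_li = X @ beta0_hat          # linear component f_li(x)
f_nl = y - f_li               # nonlinear component f_nl(x)
corr = np.corrcoef(f_li, f_nl)[0, 1]
# beta0_hat is close to beta, and corr is close to 0.
```

By construction the empirical residual is orthogonal to the columns of `X`, so the correlation vanishes up to Monte Carlo error, mirroring the population statement that $f_{\textsc{li}}$ and $f_{\textsc{nl}}$ are uncorrelated yet dependent.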
**Risk and risk estimator.** Recall that for $m,\ell\in[M]$, the risk component is defined as $$\begin{aligned} R_{m,\ell} &= \|{f_{\textsc{nl}}}\|_{L_2}^2+ (\widehat{\bm{\beta}}_m - \bm{\beta}_0)^\top \bm{\Sigma}(\widehat{\bm{\beta}}_{\ell} - \bm{\beta}_0), \end{aligned}$$ while its two estimators are defined as $$\begin{aligned} \widehat{R}_{m,\ell}^{\textup{\textrm{ovlp}}} &= \frac{N_{m,\ell}^{\textup{\textrm{ovlp}}}}{D_{m,\ell}^{\textup{\textrm{ovlp}}}}, \quad \text{and} \quad \widehat{R}_{m,\ell}^{\textup{\textrm{full}}}=\frac{N_{m,\ell}^{\textup{\textrm{full}}}}{D_{m,\ell}^{\textup{\textrm{full}}}}, \end{aligned}$$ where $$\begin{aligned} N_{m,\ell}^{\textup{\textrm{ovlp}}} &= |I_m\cap I_{\ell}|^{-1}(\bm{y}- \bm{X}\widehat{\bm{\beta}}_m)^{\top}\bm{L}_{m\cap \ell}(\bm{y}- \bm{X}\widehat{\bm{\beta}}_{\ell}),\\ D_{m,\ell}^{\textup{\textrm{ovlp}}} &= 1 - \frac{\mathop{\mathrm{tr}}(\bm{S}_m)}{|I_m|} - \frac{\mathop{\mathrm{tr}}(\bm{S}_{\ell})}{|I_{\ell}|} + \frac{\mathop{\mathrm{tr}}(\bm{S}_{m})\mathop{\mathrm{tr}}(\bm{S}_{\ell})}{|I_m||I_{\ell}|},\\ N_{m,\ell}^{\textup{\textrm{full}}} &= n^{-1} (\bm{y}- \bm{X}\widehat{\bm{\beta}}_m)^\top (\bm{y}- \bm{X}\widehat{\bm{\beta}}_{\ell}),\\ D_{m,\ell}^{\textup{\textrm{full}}} &=1 - \frac{\mathop{\mathrm{tr}}(\bm{S}_m)}{n} - \frac{\mathop{\mathrm{tr}}(\bm{S}_{\ell})}{n} + \frac{\mathop{\mathrm{tr}}(\bm{S}_{m}) \mathop{\mathrm{tr}}(\bm{S}_{\ell}) | I_m \cap I_\ell |}{n |I_m| |I_{\ell}|}, \end{aligned}$$ and $\bm{L}_{m\cap \ell}=\bm{L}_{I_m\cap I_\ell}$. **Asymptotic equivalence.** Let $\bm{A}_p$ and $\bm{B}_p$ be sequences of additively conformable random matrices with arbitrary dimensions (including vectors and scalars as special cases).
We define $\bm{A}_p$ and $\bm{B}_p$ to be *asymptotically equivalent*, denoted as $\bm{A}_p \simeq\bm{B}_p$, if $\lim_{p \to \infty} \left| \mathop{\mathrm{tr}}[\bm{C}_p (\bm{A}_p - \bm{B}_p)] \right| = 0$ almost surely for any sequence of random matrices $\bm{C}_p$ with bounded trace norm that are multiplicatively conformable to $\bm{A}_p$ and $\bm{B}_p$ and are independent of $\bm{A}_p$ and $\bm{B}_p$. Observe that when dealing with sequences of scalar random variables, this definition simplifies to the standard notion of almost sure convergence for the involved sequences. ## Proofs of {#app:subsec:asym-thm:risk-est} is a direct consequence of . Thus, below we focus on proving . The proof of builds on the following lemmas. (1) *Risk.* establishes the asymptotics $\mathscr{R}_{m,\ell}$ of the prediction risk $R_{m,\ell}$. (2) *Ovlp-estimator.* and establish the asymptotics $\mathscr{N}_{m,\ell}^{\textup{\textrm{ovlp}}}$ and $\mathscr{D}_{m,\ell}^{\textup{\textrm{ovlp}}}$ of the numerator $N_{m,\ell}^{\textup{\textrm{ovlp}}}$ and the denominator $D_{m,\ell}^{\textup{\textrm{ovlp}}}$, respectively, for the ovlp-estimator. (3) *Full-estimator.* and establish the asymptotics $\mathscr{N}_{m,\ell}^{\textup{\textrm{full}}}$ and $\mathscr{D}_{m,\ell}^{\textup{\textrm{full}}}$ of the numerator $N_{m,\ell}^{\textup{\textrm{full}}}$ and the denominator $D_{m,\ell}^{\textup{\textrm{full}}}$, respectively, for the full-estimator.
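The denominator asymptotics that enter this matching reduce to elementary scalar identities, with $x$ standing for $\lambda v_p(-\lambda;\psi)$ and $a$ for $\phi/\psi$. As a quick sanity check (our addition, purely illustrative), they can be verified numerically:

```python
# Scalar identities behind the denominator asymptotics of the two estimators;
# x plays the role of lambda * v_p(-lambda; psi), a the role of phi / psi.
def d_ovlp(x):
    # ovlp denominator: 1 - 2(1 - x) + (1 - x)^2, which equals x^2
    return 1 - 2 * (1 - x) + (1 - x) ** 2

def d_full_same(a, x):
    # full denominator, m = l: 1 - 2a(1 - x) + a(1 - x)^2 = (1 - a) + a x^2
    return 1 - 2 * a * (1 - x) + a * (1 - x) ** 2

def d_full_diff(a, x):
    # full denominator, m != l: 1 - 2a(1 - x) + a^2 (1 - x)^2 = ((1 - a) + a x)^2
    return 1 - 2 * a * (1 - x) + a ** 2 * (1 - x) ** 2

for a in (0.2, 0.5, 1.0):
    for x in (0.0, 0.3, 0.9):
        assert abs(d_ovlp(x) - x ** 2) < 1e-12
        assert abs(d_full_same(a, x) - ((1 - a) + a * x ** 2)) < 1e-12
        assert abs(d_full_diff(a, x) - ((1 - a) + a * x) ** 2) < 1e-12
```

Note that at $a = 1$ (i.e., $k = n$) the two full-denominator forms collapse to the ovlp one, consistent with the two estimators coinciding without subsampling.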
For $\lambda>0$ and $k\in\mathcal{K}_n$ such that $p/k\rightarrow\psi\in[\phi,\infty)$, the consistency of the two estimators follows by showing that the ratio of the numerator asymptotics and the denominator asymptotics in (2) and (3) matches the risk asymptotics in (1), respectively: $$R_{m,\ell} \simeq\mathscr{R}_{m,\ell} = \frac{\mathscr{N}_{m,\ell}^{\textup{\textrm{ovlp}}}}{\mathscr{D}_{m,\ell}^{\textup{\textrm{ovlp}}}} \simeq\frac{N_{m,\ell}^{\textup{\textrm{ovlp}}}}{D_{m,\ell}^{\textup{\textrm{ovlp}}}} = \widehat{R}_{m,\ell}^{\textup{\textrm{ovlp}}},$$ and $$R_{m,\ell} \simeq\mathscr{R}_{m,\ell} = \frac{\mathscr{N}_{m,\ell}^{\textup{\textrm{full}}}}{\mathscr{D}_{m,\ell}^{\textup{\textrm{full}}}} \simeq\frac{N_{m,\ell}^{\textup{\textrm{full}}}}{D_{m,\ell}^{\textup{\textrm{full}}}} = \widehat{R}_{m,\ell}^{\textup{\textrm{full}}}.$$ Next, takes care of the boundary cases when $\psi=\infty$ and $\lambda=0$. Finally, from , we also have that the sequences of functions $\{R_{m,\ell}-\mathscr{R}_{m,\ell}\}_{p\in\mathbb{N}}$, $\{\widehat{R}_{m,\ell}^{\textup{\textrm{ovlp}}}-\mathscr{R}_{m,\ell}\}_{p\in\mathbb{N}}$, and $\{\widehat{R}_{m,\ell}^{\textup{\textrm{full}}}-\mathscr{R}_{m,\ell}\}_{p\in\mathbb{N}}$ are uniformly equicontinuous on $\lambda\in\Lambda=[0,\infty]$ when $\psi\neq 1$. This implies that the sequences of functions $\{R_{m,\ell}-\widehat{R}_{m,\ell}^{\textup{\textrm{ovlp}}}\}_{p\in\mathbb{N}}$ and $\{R_{m,\ell}-\widehat{R}_{m,\ell}^{\textup{\textrm{full}}}\}_{p\in\mathbb{N}}$ are also uniformly equicontinuous on $\lambda\in\Lambda=[0,\infty]$ almost surely. From Theorem 21.8 of [@davidson1994stochastic], it further follows that the sequences converge to zero uniformly over $\Lambda$ almost surely. The rest of the current subsection is committed to presenting and proving the important lemmas . **Lemma 10** (Asymptotic equivalence for prediction risk).
Under , for $\psi\in[\phi,\infty)$ and $\lambda>0$, it holds that $$R_{m,\ell} \simeq\mathscr{R}_{m,\ell} := (\|{f_{\textsc{nl}}}\|_{L_2}^2 + \widetilde{c}_p(-\lambda;\psi)) (1+\widetilde{v}_p(-\lambda;\varphi_{m\ell},\psi)),$$ where $\varphi_{m\ell}=\psi\mathop{\mathrm{\mathds{1}}}_{\{m=\ell\}}+\phi\mathop{\mathrm{\mathds{1}}}_{\{m\neq\ell\}}$ and $\widetilde{v}_p(-\lambda;\phi,\psi),\widetilde{c}_p(-\lambda;\psi)$ are defined in . From the definition of the ridge estimator [\[eq:ingredient-estimator\]](#eq:ingredient-estimator){reference-type="eqref" reference="eq:ingredient-estimator"} and the decomposition of the response [\[eq:li-nl-decomposition\]](#eq:li-nl-decomposition){reference-type="eqref" reference="eq:li-nl-decomposition"}, we have $$\begin{aligned} \widehat{\bm{\beta}}_m - \bm{\beta}_0 &= \bm{M}_m\frac{\bm{X}^{\top}\bm{L}_m}{k} (\bm{X}\bm{\beta}_0 + {\bm{f}_{\textsc{nl}}}) - \bm{\beta}_0 \\ &= - \lambda \bm{M}_m\bm{\beta}_0 + \bm{M}_m\frac{\bm{X}^{\top}\bm{L}_m}{k} {\bm{f}_{\textsc{nl}}}. \end{aligned}$$ Then it follows that $$\begin{aligned} &\|{f_{\textsc{nl}}}\|_{L_2}^2 + (\widehat{\bm{\beta}}_{m} - \bm{\beta}_0)^\top \bm{\Sigma}(\widehat{\bm{\beta}}_{\ell} - \bm{\beta}_0) \\ &= \lambda^2 \bm{\beta}_0^{\top}\bm{M}_m\bm{\Sigma}\bm{M}_{\ell}\bm{\beta}_0\\ &\qquad+ (\|{f_{\textsc{nl}}}\|_{L_2}^2 + {\bm{f}_{\textsc{nl}}}^{\top} \frac{\bm{L}_m\bm{X}}{k} \bm{M}_m \bm{\Sigma}\bm{M}_{\ell}\frac{\bm{X}^{\top}\bm{L}_{\ell}}{k} {\bm{f}_{\textsc{nl}}}) \\ &\qquad - (\lambda \bm{\beta}_0^{\top}\bm{M}_m \bm{\Sigma}\bm{M}_{\ell}\frac{\bm{X}^{\top}\bm{L}_{\ell}}{k} {\bm{f}_{\textsc{nl}}}+ \lambda \bm{\beta}_0^{\top}\bm{M}_{\ell} \bm{\Sigma}\bm{M}_m\frac{\bm{X}^{\top}\bm{L}_m}{k} {\bm{f}_{\textsc{nl}}})\\ &= T_B + T_V + T_C.
\end{aligned}$$ **Bias term.** From , we have $$\begin{aligned} T_B \simeq\widetilde{c}_p(-\lambda;\psi) (1+\widetilde{v}_p(-\lambda;\varphi_{m\ell},\psi)), \end{aligned}$$ where $\varphi_{m\ell}=\psi\mathop{\mathrm{\mathds{1}}}_{\{m=\ell\}}+\phi\mathop{\mathrm{\mathds{1}}}_{\{m\neq\ell\}}$ and $\widetilde{v}_p(-\lambda;\phi,\psi),\widetilde{c}_p(-\lambda;\psi)$ are defined in . **Variance term.** By conditional independence and Lemma D.2 of @patil2023generalized, the variance term converges to the quadratic term of $\bm{f}_0=\bm{L}_{m\cap\ell}{\bm{f}_{\textsc{nl}}}$: $$\begin{aligned} T_V &\simeq\|{f_{\textsc{nl}}}\|_{L_2}^2 + \frac{1}{k^2}\bm{f}_0^{\top} \bm{X}\bm{M}_m \bm{\Sigma}\bm{M}_{\ell}\bm{X}^{\top} \bm{f}_0 \simeq\|{f_{\textsc{nl}}}\|_{L_2}^2 ( 1 + \widetilde{v}_p(-\lambda;\varphi_{m\ell},\psi) ), \end{aligned}$$ where the last equivalence is from . **Cross term.** From @patil2023generalized [Lemma A.3], the cross term vanishes, i.e., $T_C \xrightarrow{\textup{a.s.}}0$. This completes the proof. ------------------------------------------------------------------------ **Lemma 11** (Asymptotic equivalence for numerator of the ovlp-estimator). Under , for $\psi\in[\phi,\infty)$ and $\lambda>0$, and index sets $I_m,I_{\ell}\overset{\textup{\texttt{SRS}}}{\sim}\mathcal{I}_k$, it holds that $$\begin{aligned} N_{m,\ell}^{\textup{\textrm{ovlp}}} &\simeq\mathscr{N}_{m,\ell}^{\textup{\textrm{ovlp}}}:= \lambda^2v_p(-\lambda;\psi)^2(\|{f_{\textsc{nl}}}\|_{L_2}^2 + \widetilde{c}_p(-\lambda;\psi)) (1+\widetilde{v}_p(-\lambda;\varphi_{m\ell},\psi)), \end{aligned}$$ where $\varphi_{m\ell}=\psi\mathop{\mathrm{\mathds{1}}}_{\{m=\ell\}}+\phi\mathop{\mathrm{\mathds{1}}}_{\{m\neq\ell\}}$ and $v_p(-\lambda;\psi)$, $\widetilde{v}_p(-\lambda;\phi,\psi)$ and $\widetilde{c}_p(-\lambda;\psi)$ are defined in .
From the definition of the ridge estimator [\[eq:ingredient-estimator\]](#eq:ingredient-estimator){reference-type="eqref" reference="eq:ingredient-estimator"} and the decomposition of the response [\[eq:li-nl-decomposition\]](#eq:li-nl-decomposition){reference-type="eqref" reference="eq:li-nl-decomposition"}, we have $$\begin{aligned} \bm{L}_{m\cap \ell}(\bm{y}- \bm{X}\widehat{\bm{\beta}}_m) &= \bm{L}_{m\cap \ell}\left(\bm{I}_n-\bm{X}\bm{M}_m\frac{\bm{X}^{\top}\bm{L}_m}{k}\right) \bm{y}\\ &= \lambda\bm{L}_{m\cap \ell}\bm{X}\bm{M}_m\bm{\beta}_0 + \bm{L}_{m\cap \ell}\left(\bm{I}_n-\bm{X}\bm{M}_m\frac{\bm{X}^{\top}\bm{L}_m}{k}\right) {\bm{f}_{\textsc{nl}}}. \end{aligned}$$ Then it follows that $$\begin{aligned} &\frac{1}{|I_m\cap I_{\ell}|} (\bm{y}- \bm{X}\widehat{\bm{\beta}}_m)^{\top}\bm{L}_{m\cap \ell}(\bm{y}- \bm{X}\widehat{\bm{\beta}}_{\ell}) \\ &= \lambda^2\bm{\beta}_0^{\top}\bm{M}_m\widehat{\bm{\Sigma}}\xspace_{m\cap\ell}\bm{M}_{\ell}\bm{\beta}_0\\ &\qquad + \frac{1}{|I_m\cap I_{\ell}|}(\bm{L}_m{\bm{f}_{\textsc{nl}}})^{\top}\left(\bm{I}_n-\frac{\bm{X}\bm{M}_m\bm{X}^{\top}}{k}\right)\bm{L}_{m\cap\ell}\left(\bm{I}_n-\frac{\bm{X}\bm{M}_{\ell}\bm{X}^{\top}}{k}\right)(\bm{L}_{\ell}{\bm{f}_{\textsc{nl}}}) \\ &\qquad + \frac{\lambda}{|I_m\cap I_{\ell}|} \left( \bm{\beta}_0^{\top} \bm{M}_{\ell}\bm{X}^{\top}\bm{L}_{m\cap\ell}\left(\bm{I}_n-\bm{X}\bm{M}_m\frac{\bm{X}^{\top}\bm{L}_m}{k}\right) {\bm{f}_{\textsc{nl}}}+ \bm{\beta}_0^{\top} \bm{M}_{m}\bm{X}^{\top}\bm{L}_{m\cap\ell}\left(\bm{I}_n-\bm{X}\bm{M}_{\ell}\frac{\bm{X}^{\top}\bm{L}_{\ell}}{k}\right) {\bm{f}_{\textsc{nl}}} \right)\\ &= T_B + T_V + T_C. \end{aligned}$$ Next, we analyze the three terms separately.
**Bias term.** From , $$\begin{aligned} T_B \simeq\lambda^2 v_p(-\lambda;\psi)^2 \widetilde{c}_p(-\lambda;\psi) (1+\widetilde{v}_p(-\lambda;\varphi_{m\ell},\psi)), \end{aligned}$$ where $\varphi_{m\ell}=\psi\mathop{\mathrm{\mathds{1}}}_{\{m=\ell\}}+\phi\mathop{\mathrm{\mathds{1}}}_{\{m\neq\ell\}}$ and $v_p(-\lambda;\psi)$, $\widetilde{v}_p(-\lambda;\phi,\psi)$ and $\widetilde{c}_p(-\lambda;\psi)$ are defined in . **Variance term.** By conditional independence and Lemma D.2 of [@patil2023generalized], the variance term converges to the quadratic term of $\bm{f}_0=\bm{L}_{m\cap\ell}{\bm{f}_{\textsc{nl}}}$: $$\begin{aligned} T_V &\simeq\frac{1}{|I_m\cap I_{\ell}|}(\bm{L}_{m\cap\ell}{\bm{f}_{\textsc{nl}}})^{\top}\left(\bm{I}_n-\frac{\bm{X}\bm{M}_m\bm{X}^{\top}}{k}\right)\bm{L}_{m\cap\ell}\left(\bm{I}_n-\frac{\bm{X}\bm{M}_{\ell}\bm{X}^{\top}}{k}\right)(\bm{L}_{m\cap\ell}{\bm{f}_{\textsc{nl}}}) \\ &= \frac{1}{|I_m\cap I_{\ell}|}\|\bm{f}_0\|_2^2 - \frac{1}{k}\sum_{j\in\{m,\ell\}} \bm{f}_0^{\top} \frac{\bm{X}\bm{M}_j\bm{X}^{\top}}{|I_m\cap I_{\ell}|} \bm{f}_0 + \frac{|I_m\cap I_{\ell}|}{k^2} \bm{f}_0^{\top} \frac{\bm{X}\bm{M}_m\widehat{\bm{\Sigma}}\xspace_{m\cap\ell} \bm{M}_{\ell}\bm{X}^{\top}}{|I_m\cap I_{\ell}|} \bm{f}_0\\ & \simeq\lambda^2 v_p(-\lambda;\psi)^2 (1+\widetilde{v}_p(-\lambda;\varphi_{m\ell},\psi)), \end{aligned}$$ where the last equivalence is from . **Cross term.** From @patil2023generalized [Lemma A.3], the cross term vanishes, i.e., $T_C \xrightarrow{\textup{a.s.}}0$. ------------------------------------------------------------------------ **Lemma 12** (Asymptotic equivalence for denominator of the ovlp-estimator).
Under , for $\psi\in[\phi,\infty)$ and $\lambda>0$, and index sets $I_m,I_{\ell}\overset{\textup{\texttt{SRS}}}{\sim}\mathcal{I}_k$, it holds that $$D_{m,\ell}^{\textup{\textrm{ovlp}}} \simeq\mathscr{D}_{m,\ell}^{\textup{\textrm{ovlp}}}:= \lambda^2v_p(-\lambda;\psi)^2,$$ where $v_p(-\lambda;\psi)$ is the unique nonnegative solution to fixed-point equation [\[eq:v_ridge\]](#eq:v_ridge){reference-type="eqref" reference="eq:v_ridge"}. Since $I_m,I_{\ell}\overset{\textup{\texttt{SRS}}}{\sim}\mathcal{I}_k$, we have that $|I_m|=|I_{\ell}|=k$. From @du2023subsample [Lemma C.1], we have that $$\frac{\mathop{\mathrm{tr}}(\bm{S}_j)}{|I_j|} = \frac{1}{k^2}\mathop{\mathrm{tr}}(\bm{X}\bm{M}_j\bm{X}^{\top}\bm{L}_j) = \frac{1}{k}\mathop{\mathrm{tr}}( \bm{M}_j\widehat{\bm{\Sigma}}\xspace_j) \simeq 1 - \lambda v_p(-\lambda;\psi).$$ By the continuous mapping theorem, $$\begin{aligned} 1 - \frac{\mathop{\mathrm{tr}}(\bm{S}_m)}{|I_m|} - \frac{\mathop{\mathrm{tr}}(\bm{S}_{\ell})}{|I_{\ell}|} + \frac{\mathop{\mathrm{tr}}(\bm{S}_{m})\mathop{\mathrm{tr}}(\bm{S}_{\ell})}{|I_m||I_{\ell}|} &\simeq 1 - 2(1 - \lambda v_p(-\lambda;\psi)) + (1 - \lambda v_p(-\lambda;\psi))^2 = \lambda^2v_p(-\lambda;\psi)^2, \end{aligned}$$ which completes the proof. ------------------------------------------------------------------------ **Lemma 13** (Asymptotic equivalence for numerator of the full-estimator).
Under , for $\psi\in[\phi,\infty)$ and $\lambda>0$, it holds that $$N_{m,\ell}^{\textup{\textrm{full}}} \simeq\mathscr{N}_{m,\ell}^{\textup{\textrm{full}}}:= d_p(-\lambda;\varphi_{m\ell},\psi) (\|{f_{\textsc{nl}}}\|_{L_2}^2 + \widetilde{c}_p(-\lambda;\psi)) (1+\widetilde{v}_p(-\lambda;\varphi_{m\ell},\psi)),$$ where $$d_p(-\lambda;\varphi_{m\ell},\psi) = \begin{dcases} \frac{\psi-\phi}{\psi} + \frac{\phi}{\psi}\lambda^2 v_p(-\lambda;\psi)^2, &\text{ if }\varphi_{m\ell}=\psi,\\ \left(\frac{\psi-\phi}{\psi} + \frac{\phi}{\psi} \lambda v_p(-\lambda;\psi) \right)^2, &\text{ if }\varphi_{m\ell} = \phi, \end{dcases}$$ $\varphi_{m\ell}=\psi\mathop{\mathrm{\mathds{1}}}_{\{m=\ell\}}+\phi\mathop{\mathrm{\mathds{1}}}_{\{m\neq\ell\}}$, and $\widetilde{v}_p(-\lambda;\phi,\psi),\widetilde{c}_p(-\lambda;\psi)$ are defined in . Analogous to the proof of , the numerator splits into $$\begin{aligned} &\frac{1}{n} {(\bm{y}- \bm{X}\widehat{\bm{\beta}}_m)^\top (\bm{y}- \bm{X}\widehat{\bm{\beta}}_{\ell})} \\ &= \frac{1}{n}\lambda^2\bm{\beta}_0^{\top}\bm{M}_m\widehat{\bm{\Sigma}}\xspace\bm{M}_{\ell}\bm{\beta}_0\\ &\qquad + \frac{1}{n}{\bm{f}_{\textsc{nl}}}^{\top}\left(\bm{I}_n-\frac{\bm{L}_m\bm{X}\bm{M}_m\bm{X}^{\top}}{k}\right)\left(\bm{I}_n-\frac{\bm{X}\bm{M}_{\ell}\bm{X}^{\top}\bm{L}_{\ell}}{k}\right){\bm{f}_{\textsc{nl}}}\\ &\qquad \qquad + \frac{\lambda}{n} \left( \bm{\beta}_0^{\top} \bm{M}_{\ell}\bm{X}^{\top}\left(\bm{I}_n-\bm{X}\bm{M}_m\frac{\bm{X}^{\top}\bm{L}_m}{k}\right) {\bm{f}_{\textsc{nl}}}+ \bm{\beta}_0^{\top} \bm{M}_{m}\bm{X}^{\top}\left(\bm{I}_n-\bm{X}\bm{M}_{\ell}\frac{\bm{X}^{\top}\bm{L}_{\ell}}{k}\right) {\bm{f}_{\textsc{nl}}} \right)\\ &= T_B + T_V + T_C. \end{aligned}$$ Next, we analyze the three terms separately.
The bias and the cross term are analyzed as in the proof of , by involving and @patil2023generalized [Lemma A.3], respectively: $$\begin{aligned} T_B & \simeq d_p(-\lambda;\varphi_{m\ell},\psi) \widetilde{c}_p(-\lambda;\psi) (1+\widetilde{v}_p(-\lambda;\varphi_{m\ell},\psi)) ,\\ T_C & \simeq 0. \end{aligned}$$ For the variance term, by conditional independence and Lemma D.2 of [@patil2023generalized], the variance term converges to the quadratic terms of $\bm{f}_1=\bm{L}_{m\cup\ell}{\bm{f}_{\textsc{nl}}}$ and $\bm{f}_0=\bm{L}_{m\cap\ell}{\bm{f}_{\textsc{nl}}}$, respectively: $$\begin{aligned} &\frac{1}{n}{\bm{f}_{\textsc{nl}}}^{\top} \left(\bm{I}_n-\frac{\bm{L}_m\bm{X}\bm{M}_m\bm{X}^{\top}}{k}\right)\left(\bm{I}_n-\frac{\bm{X}\bm{M}_{\ell}\bm{X}^{\top}\bm{L}_{\ell}}{k}\right) {\bm{f}_{\textsc{nl}}}\\ &=\frac{|I_m\cup I_{\ell}|}{n}{\bm{f}_{\textsc{nl}}}^{\top} \left(\bm{I}_n-\frac{\bm{L}_m\bm{X}\bm{M}_m\bm{X}^{\top}}{k}\right)\bm{L}_{m\cup\ell}\left(\bm{I}_n-\frac{\bm{X}\bm{M}_{\ell}\bm{X}^{\top}\bm{L}_{\ell}}{k}\right) {\bm{f}_{\textsc{nl}}}\\ &\qquad + \frac{|(I_m\cup I_{\ell})^c|}{n}{\bm{f}_{\textsc{nl}}}^{\top} \left(\bm{I}_n-\frac{\bm{L}_m\bm{X}\bm{M}_m\bm{X}^{\top}}{k}\right)\bm{L}_{(m\cup\ell)^c}\left(\bm{I}_n-\frac{\bm{X}\bm{M}_{\ell}\bm{X}^{\top}\bm{L}_{\ell}}{k}\right) {\bm{f}_{\textsc{nl}}}\\ &=\frac{|I_m\cup I_{\ell}|}{n}\bm{f}_1^{\top} \left(\bm{I}_n-\frac{\bm{X}\bm{M}_m\bm{X}^{\top}}{k}\right)\bm{L}_{m\cup\ell}\left(\bm{I}_n-\frac{\bm{X}\bm{M}_{\ell}\bm{X}^{\top}}{k}\right) \bm{f}_1 \\ &\qquad + \frac{|(I_m\cup I_{\ell})^c|}{n} \|\bm{f}_0\|_2^2 + \frac{|(I_m\cup I_{\ell})^c|^2}{nk^2} \bm{f}_0^{\top} \bm{X}\bm{M}_m\widehat{\bm{\Sigma}}\xspace_{(m\cup\ell)^c } \bm{M}_{\ell}\bm{X}^{\top}\bm{f}_0. \end{aligned}$$ From , it follows that $$\begin{aligned} T_V &\simeq d_p(-\lambda;\varphi_{m\ell},\psi) \|{f_{\textsc{nl}}}\|_{L_2}^2 (1+\widetilde{v}_p(-\lambda;\varphi_{m\ell},\psi)).
\end{aligned}$$ Combining the above results completes the proof. ------------------------------------------------------------------------ **Lemma 14** (Asymptotic equivalence for denominator of the full-estimator). Under , for $\psi\in[\phi,\infty)$ and $\lambda>0$, and index sets $I_m,I_{\ell}\overset{\textup{\texttt{SRS}}}{\sim}\mathcal{I}_k$, it holds that $$D_{m,\ell}^{\textup{\textrm{full}}} \simeq\mathscr{D}_{m,\ell}^{\textup{\textrm{full}}}:= \begin{dcases} \frac{\psi-\phi}{\psi} + \frac{\phi}{\psi}\lambda^2 v_p(-\lambda;\psi)^2, &m=\ell,\\ \left(\frac{\psi-\phi}{\psi} + \frac{\phi}{\psi} \lambda v_p(-\lambda;\psi) \right)^2, &m\neq\ell, \end{dcases}$$ where $v_p(-\lambda;\psi)$ is the unique nonnegative solution to fixed-point equation [\[eq:v_ridge\]](#eq:v_ridge){reference-type="eqref" reference="eq:v_ridge"}. From the proof of , we have $\mathop{\mathrm{tr}}(\bm{S}_j)/|I_j| \xrightarrow{\textup{a.s.}}1 - \lambda v_p(-\lambda;\psi),$ for $j\in\{m,\ell\}$. From @du2023subsample [Lemma G.2], we also have $k/n\xrightarrow{\textup{a.s.}}\phi/\psi$. For $m=\ell$, we have $$\begin{aligned} 1 - \frac{\mathop{\mathrm{tr}}(\bm{S}_m)}{n} - \frac{\mathop{\mathrm{tr}}(\bm{S}_{\ell})}{n} + \frac{\mathop{\mathrm{tr}}(\bm{S}_{m}) \mathop{\mathrm{tr}}(\bm{S}_{\ell})}{n k} &\simeq 1 - \frac{2\phi}{\psi}(1-\lambda v_p(-\lambda;\psi)) + \frac{\phi}{\psi}(1-\lambda v_p(-\lambda;\psi))^2\\ &= \frac{\psi-\phi}{\psi} + \frac{\phi}{\psi}(\lambda v_p(-\lambda;\psi))^2. \end{aligned}$$ For $m\neq \ell$, from @du2023subsample [Lemma G.2], we also have $|I_m\cap I_{\ell}|/k\xrightarrow{\textup{a.s.}}\phi/\psi$.
By the continuous mapping theorem, it follows that $$\begin{aligned} 1 - \frac{\mathop{\mathrm{tr}}(\bm{S}_m)}{n} - \frac{\mathop{\mathrm{tr}}(\bm{S}_{\ell})}{n} + \frac{\mathop{\mathrm{tr}}(\bm{S}_{m}) \mathop{\mathrm{tr}}(\bm{S}_{\ell}) | I_m \cap I_\ell |}{n |I_m| |I_{\ell}|} &\simeq 1 - \frac{2\phi}{\psi}(1-\lambda v_p(-\lambda;\psi)) + \frac{\phi^2}{\psi^2}(1 - \lambda v_p(-\lambda;\psi))^2\\ &= \left(\frac{\psi-\phi}{\psi} + \frac{\phi}{\psi} \lambda v_p(-\lambda;\psi) \right)^2, \end{aligned}$$ which finishes the proof. ------------------------------------------------------------------------ **Lemma 15** (Boundary cases: diverging subsample aspect ratio and ridgeless). Under , the conclusions in hold for $\psi = \infty$ or $\lambda=0$, provided that $k=\omega(\sqrt{n})$ for the first estimator; no restriction on $k$ is needed for the second estimator. Furthermore, for $\mathscr{R}_{m,\ell},\mathscr{N}_{m,\ell}^{\textup{\textrm{ovlp}}},\mathscr{D}_{m,\ell}^{\textup{\textrm{ovlp}}},\mathscr{N}_{m,\ell}^{\textup{\textrm{full}}},\mathscr{D}_{m,\ell}^{\textup{\textrm{full}}}$ defined in , the sequences of functions $\{R_{m,\ell}-\mathscr{R}_{m,\ell}\}_{p\in\mathbb{N}}$, $\{N_{m,\ell}^{\textup{\textrm{ovlp}}}/D_{m,\ell}^{\textup{\textrm{ovlp}}}-\mathscr{N}_{m,\ell}^{\textup{\textrm{ovlp}}}/\mathscr{D}_{m,\ell}^{\textup{\textrm{ovlp}}}\}_{p\in\mathbb{N}}$, and $\{N_{m,\ell}^{\textup{\textrm{full}}}/D_{m,\ell}^{\textup{\textrm{full}}}-\mathscr{N}_{m,\ell}^{\textup{\textrm{full}}}/\mathscr{D}_{m,\ell}^{\textup{\textrm{full}}}\}_{p\in\mathbb{N}}$ are uniformly bounded, equicontinuous and approaching zero on $\lambda\in[0,\infty]$ almost surely. We split the proof into three parts.
**Part (1) Diverging subsample aspect ratio for $\lambda>0$.** Recall that $$\begin{aligned} &\frac{1}{|I_m\cap I_{\ell}|} (\bm{y}- \bm{X}\widehat{\bm{\beta}}_m)^{\top}\bm{L}_{m\cap \ell}(\bm{y}- \bm{X}\widehat{\bm{\beta}}_{\ell}) \\ &= (\bm{\beta}_0 - \widehat{\bm{\beta}}_m)^{\top}\widehat{\bm{\Sigma}}\xspace_{m\cap \ell}(\bm{\beta}_0 - \widehat{\bm{\beta}}_{\ell}) + \frac{1}{|I_m\cap I_{\ell}|} \|\bm{L}_{m\cap\ell}{\bm{f}_{\textsc{nl}}}\|_2^2 + \frac{1}{|I_m\cap I_{\ell}|}\sum_{j\in\{m,\ell\}} {\bm{f}_{\textsc{nl}}}^{\top}\bm{L}_{m\cap\ell}\bm{X}(\bm{\beta}_0 - \widehat{\bm{\beta}}_j). \end{aligned}$$ From the law of large numbers, $\|\bm{L}_{m\cap\ell}{\bm{f}_{\textsc{nl}}}\|_2^2 / |I_m\cap I_{\ell}| \xrightarrow{\textup{a.s.}}\|{f_{\textsc{nl}}}\|_{L_2}^2$ as $|I_m\cap I_{\ell}|$ tends to infinity, which is guaranteed when $k=\omega(\sqrt{n})$. From @patil2023generalized [Lemma A.3], ${\bm{f}_{\textsc{nl}}}^{\top}\bm{L}_{m\cap\ell}\bm{X}\bm{\beta}_0/|I_m\cap I_{\ell}| \xrightarrow{\textup{a.s.}}0$. For the other term, note that $$\begin{aligned} \|\widehat{\bm{\beta}}_m\|_2 &\leq \|(\bm{X}^{\top}\bm{L}_m\bm{X}/k+\lambda\bm{I}_p)^{-1}(\bm{X}^{\top}\bm{L}_m\bm{y}/k)\|_2 \\ &\leq \|(\bm{X}^{\top}\bm{L}_m\bm{X}/k+\lambda\bm{I}_p)^{-1}\bm{X}^{\top}\bm{L}_m/\sqrt{k}\|_{\mathop{\mathrm{op}}}\cdot\|\bm{L}_m\bm{y}/\sqrt{k}\|_2\\ &\leq C\sqrt{\mathbb{E}[y_1^2]}\cdot \|(\bm{X}^{\top}\bm{L}_m\bm{X}/k+\lambda\bm{I}_p)^{-1}\bm{X}^{\top}\bm{L}_m/\sqrt{k}\|_{\mathop{\mathrm{op}}}, \end{aligned}$$ where the last inequality holds eventually almost surely since Assumption [Assumption 5](#asm:model){reference-type="ref" reference="asm:model"} implies that the entries of $\bm{y}$ have bounded $4$-th moment, and thus from the strong law of large numbers, $\| \bm{L}_m \bm{y}/ \sqrt{k} \|_2$ is eventually almost surely bounded above by $C\sqrt{\mathbb{E}[y_1^2]}$ for some constant $C$.
On the other hand, the operator norm of the matrix $(\bm{X}^\top \bm{L}_m\bm{X}/ k+ \lambda\bm{I}_p)^{-1} \bm{X}^{\top}\bm{L}_m / \sqrt{k}$ is upper bounded by $\max_i s_i/(s_i^2+\lambda)\leq 1/s_{\min}$, where the $s_i$'s are the singular values of $\bm{L}_m\bm{X}/\sqrt{k}$ and $s_{\min}$ is the smallest nonzero singular value. As $k, p \to \infty$ such that $p / k \to \infty$, $s_{\min} \to \infty$ almost surely (e.g., from results of @bloemendal2016principal) and therefore, $\|\widehat{\bm{\beta}}_m\|_2 \to 0$ almost surely. Because $\|\widehat{\bm{\Sigma}}\xspace\|_{\mathop{\mathrm{op}}}$ is upper bounded almost surely, we further have ${\bm{f}_{\textsc{nl}}}^{\top}\bm{L}_{m\cap\ell}\bm{X}\widehat{\bm{\beta}}_m/|I_m\cap I_{\ell}| \xrightarrow{\textup{a.s.}}0$. Consequently we have ${\bm{f}_{\textsc{nl}}}^{\top}\bm{L}_{m\cap\ell}\bm{X}(\bm{\beta}_0-\widehat{\bm{\beta}}_m)/|I_m\cap I_{\ell}| \xrightarrow{\textup{a.s.}}0$ and $$\begin{aligned} \frac{1}{|I_m\cap I_{\ell}|} (\bm{y}- \bm{X}\widehat{\bm{\beta}}_m)^{\top}\bm{L}_{m\cap \ell}(\bm{y}- \bm{X}\widehat{\bm{\beta}}_{\ell}) & \xrightarrow{\textup{a.s.}}\bm{\beta}_0^{\top}\bm{\Sigma}\bm{\beta}_0 + \|{f_{\textsc{nl}}}\|_{L_2}^2 . \end{aligned}$$ Since $\bm{S}_{j}\bm{y}= \bm{L}_j\bm{X}\widehat{\bm{\beta}}_j$ for $j\in\{m,\ell\}$, we have that $\mathop{\mathrm{tr}}(\bm{S}_{j} )/k\xrightarrow{\textup{a.s.}}0$. So the denominator converges to 1, almost surely. On the other hand, the asymptotic formula for $\lambda>0$ satisfies that $$\begin{aligned} \lim_{\psi\rightarrow\infty} \frac{\lambda^2v_p(-\lambda;\psi)^2(\|{f_{\textsc{nl}}}\|_{L_2}^2 + \widetilde{c}_p(-\lambda;\psi)) (1+\widetilde{v}_p(-\lambda;\varphi_{m\ell},\psi))}{\lambda^2v_p(-\lambda;\psi)^2}= \|{f_{\textsc{nl}}}\|_{L_2}^2 + \bm{\beta}_0^{\top}\bm{\Sigma}\bm{\beta}_0, \end{aligned}$$ where we use the property that $\lim_{\psi\rightarrow\infty}v_p(-\lambda;\psi)=0$. Thus, the asymptotic formula is also well-defined and right continuous at $\psi=\infty$.
**Part (2) Ridgeless predictor when $\lambda=0$.** Below, we analyze the numerator and the denominator separately for the first estimator. To indicate the dependency on $p$ and $\lambda$, we denote the denominator and its asymptotic equivalent by $$\begin{aligned} P_{p,\lambda}:= D_{m,\ell}^{\textup{\textrm{ovlp}}}, \qquad Q_{p,\lambda}:= \mathscr{D}_{m,\ell}^{\textup{\textrm{ovlp}}}, \end{aligned}$$ where we view $k$ and $n$ as sequences $\{ k_p \}$ and $\{ n_p \}$ that are indexed by $p$. For $j\in\{m,\ell\}$, we have $\bm{S}_j\succeq {\bm 0}_{n\times n}$ and $\|\bm{S}_j\|_{\mathop{\mathrm{op}}} = \|\bm{L}_j\bm{X}(\bm{X}^{\top}\bm{L}_j\bm{X}/k+\lambda\bm{I}_p)^{+}\bm{X}^{\top}\bm{L}_j/|I_j|\|_{\mathop{\mathrm{op}}}\leq 1$ for $\lambda\geq 0$, with equality attained only if $\lambda=0$. When $\lambda=0$ and $\psi< 1$, we have $\mathop{\mathrm{tr}}(\bm{S}_j)/|I_j|=\mathop{\mathrm{tr}}(\bm{L}_j\bm{X}(\bm{X}^{\top}\bm{L}_j\bm{X}/k)^{-1}\bm{X}^{\top}\bm{L}_j/|I_j|)/|I_j| = p/|I_j|\leq\psi$ almost surely. When $\lambda=0$ and $\psi>1$, we have that $\mathop{\mathrm{tr}}(\bm{S}_j)/|I_j|\geq r_{\min}r_{\max}^{-1} p/|I_j|\xrightarrow{\textup{a.s.}}r_{\min}r_{\max}^{-1}\psi>\psi$. Thus, we have that $0<P_{p,\lambda}<(1-\psi)^2$ for $\lambda>0$, and for $\lambda=0$ with $\psi\neq 1$. Next we inspect the boundedness of the derivative of $P_{p,\lambda}$: $$\begin{aligned} \frac{\partial}{\partial \lambda}P_{p,\lambda} &= - \frac{\mathop{\mathrm{tr}}(\frac{\partial}{\partial \lambda}\bm{S}_m)}{|I_m|} - \frac{\mathop{\mathrm{tr}}(\frac{\partial}{\partial \lambda}\bm{S}_{\ell})}{|I_{\ell}|} + \frac{\mathop{\mathrm{tr}}(\frac{\partial}{\partial \lambda}\bm{S}_{m})\mathop{\mathrm{tr}}(\bm{S}_{\ell}) + \mathop{\mathrm{tr}}(\bm{S}_{m})\mathop{\mathrm{tr}}(\frac{\partial}{\partial \lambda}\bm{S}_{\ell})}{|I_m||I_{\ell}|}.
\end{aligned}$$ For $j\in\{m,\ell\}$, note that $$\frac{\partial}{\partial \lambda}\bm{S}_j = -\bm{L}_j\bm{X}\left(\frac{\bm{X}^{\top}\bm{L}_j\bm{X}}{k}+\lambda\bm{I}\right)^{-2} \frac{\bm{X}^{\top}\bm{L}_j}{|I_j|} .$$ In particular, $\|\partial \bm{S}_j/\partial \lambda\|_{\mathop{\mathrm{op}}}$ is upper bounded almost surely on $\Lambda$, and thus so is $|\partial P_{p,\lambda}/\partial \lambda|$. We know that $P_{p,\lambda} - Q_{p,\lambda} \xrightarrow{\textup{a.s.}}0$ for $\lambda>0$. Define $Q_{p,0}:=\lim_{\lambda\rightarrow0^+}Q_{p,\lambda} = (1-\psi)^2$, which is well-defined according to @du2023subsample [Proposition E.2]. By the equicontinuity property, together with the Moore-Osgood theorem, we have that, almost surely, $$\begin{aligned} \lim_{p\rightarrow\infty}|P_{p,0}-Q_{p,0}| &= \lim_{\lambda\rightarrow0^+}\lim_{p\rightarrow\infty}|P_{p,\lambda}-Q_{p,\lambda}| = \lim_{\lambda\rightarrow0^+} 0 =0, \end{aligned}$$ which proves that the denominator formula $Q_{p,0}$ is valid for $\lambda=0$. For the numerator, similarly, we define $$\begin{aligned} P_{p,\lambda}'&:=N_{m,\ell}^{\textup{\textrm{ovlp}}} = \frac{1}{|I_m\cap I_{\ell}|} \bm{y}^{\top}\bm{L}_{m\cap\ell}\bm{y}- \sum_{j\in\{m,\ell\}}\bm{y}^{\top}\bm{S}_j\bm{L}_{m\cap\ell}\bm{y}+ \bm{y}^{\top}\bm{S}_m\bm{S}_{\ell}\bm{y}, \end{aligned}$$ and $Q_{p,\lambda}'=\mathscr{N}_{m,\ell}^{\textup{\textrm{ovlp}}}$. By a similar argument as above, the sequence of functions $P_{p,\lambda}'-Q_{p,\lambda}'$ is equicontinuous on $\Lambda$ and thus $Q_{p,0}'=\lim_{\lambda\rightarrow0^+}Q_{p,\lambda}'$ is well-defined. Finally, since the proof for the second estimator and the risk is similar, we omit it here for simplicity. **Part (3) Equicontinuity (in $\lambda$) of the ratio.** We again focus on the proof for the first estimator. 
Define $P_{p,\lambda}'' := N_{m,\ell}^{\textup{\textrm{ovlp}}}/D_{m,\ell}^{\textup{\textrm{ovlp}}}$ and $Q_{p,\lambda}'' := \mathscr{N}_{m,\ell}^{\textup{\textrm{ovlp}}}/\mathscr{D}_{m,\ell}^{\textup{\textrm{ovlp}}}$. From the proof of Part (1), we know that $D_{m,\ell}^{\textup{\textrm{ovlp}}}$ is positive and upper bounded almost surely, and $N_{m,\ell}^{\textup{\textrm{ovlp}}}$ is nonnegative and upper bounded almost surely. Thus, we have that $P_{p,\lambda}''$ is upper bounded almost surely over $\Lambda$. On the other hand, from the monotonicity and boundedness of fixed-point quantities [@du2023subsample Lemma F.12], it follows that when $\lambda=0$ and $\psi\neq 1$, $\mathscr{D}_{m,\ell}^{\textup{\textrm{ovlp}}}=(1 - \psi)^2$; when $\lambda>0$, $\mathscr{D}_{m,\ell}^{\textup{\textrm{ovlp}}}$ is a continuous function whose limit as $\lambda\rightarrow0^+$ is bounded away from zero and infinity and whose limit as $\lambda\rightarrow\infty$ is $\lim_{\lambda\rightarrow\infty}\lambda^2v_p(-\lambda;\psi)^2=1$; it is therefore also bounded away from zero and infinity over $\Lambda$. This implies that $Q_{p,\lambda}''$ is upper bounded over $\Lambda$. Next, we examine the derivatives of $P_{p,\lambda}''$ and $Q_{p,\lambda}''$: $$\begin{aligned} \frac{\partial }{\partial \lambda} P_{p,\lambda}'' &= \frac{\frac{\partial N_{m,\ell}^{\textup{\textrm{ovlp}}}}{\partial \lambda} D_{m,\ell}^{\textup{\textrm{ovlp}}} - N_{m,\ell}^{\textup{\textrm{ovlp}}}\frac{\partial D_{m,\ell}^{\textup{\textrm{ovlp}}}}{\partial \lambda} }{(D_{m,\ell}^{\textup{\textrm{ovlp}}})^2}, \quad \text{and} \quad \frac{\partial }{\partial \lambda} Q_{p,\lambda}'' = \frac{\frac{\partial \mathscr{N}_{m,\ell}^{\textup{\textrm{ovlp}}}}{\partial \lambda} \mathscr{D}_{m,\ell}^{\textup{\textrm{ovlp}}} - \mathscr{N}_{m,\ell}^{\textup{\textrm{ovlp}}}\frac{\partial \mathscr{D}_{m,\ell}^{\textup{\textrm{ovlp}}}}{\partial \lambda} }{(\mathscr{D}_{m,\ell}^{\textup{\textrm{ovlp}}})^2}. 
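The large-$\lambda$ behavior used here, namely $\lambda v_p(-\lambda;\psi)\rightarrow 1$ and hence $\lambda^2 v_p(-\lambda;\psi)^2\rightarrow 1$, can also be verified numerically. A minimal sketch, again assuming the isotropic spectrum $H_p=\delta_1$ (so the fixed-point equation becomes $1/v=\lambda+\psi/(1+v)$); the solver name is ours:

```python
def v_fixed_point(lam, psi, tol=1e-13):
    # Solve 1/v = lam + psi/(1+v), the fixed-point equation for
    # v_p(-lam; psi) when all eigenvalues of Sigma equal 1.
    v = 0.0
    for _ in range(200_000):
        v_new = 1.0 / (lam + psi / (1.0 + v))
        if abs(v_new - v) < tol:
            return v_new
        v = v_new
    return v

# As lam -> infinity, v ~ 1/lam, so lam * v -> 1 and lam^2 * v^2 -> 1.
for lam in (1e2, 1e4, 1e6):
    v = v_fixed_point(lam, 2.0)
    print(lam, lam * v, (lam * v) ** 2)
```

Heuristically, for large $\lambda$ the equation gives $1/v\approx\lambda+\psi\int r\,{\mathrm{d}}H_p(r)$, so $\lambda v = \lambda/(\lambda+O(1))\rightarrow 1$.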
\end{aligned}$$ By a similar argument as above, we can also show that $|\partial P_{p,\lambda}''/\partial \lambda|$ and $|\partial Q_{p,\lambda}''/\partial \lambda|$ are upper bounded over $\Lambda$. Applying the Moore-Osgood theorem, the conclusion follows. ------------------------------------------------------------------------ ## Proof of {#app:subsec:asym-prop:correct-gcv} The proofs of pointwise and uniform consistency are separated below. **Pointwise consistency.** From and , we have that $$\begin{aligned} R_M &= \|{f_{\textsc{nl}}}\|_{L_2}^2 + \frac{1}{M^2}\sum_{m=1}^M \|\widehat{\bm{\beta}}(\mathcal{D}_{I_m}) - \bm{\beta}_0\|_{\bm{\Sigma}}^2\notag\\ &\qquad + \frac{1}{M^2} \sum_{m\neq\ell} (\widehat{\bm{\beta}}(\mathcal{D}_{I_m}) - \bm{\beta}_0)^\top \bm{\Sigma}(\widehat{\bm{\beta}}_{\ell} - \bm{\beta}_0) \notag\\ &\simeq\left( \frac{1}{M} (1+\widetilde{v}_p(-\lambda;\psi,\psi)) \right.\notag\\ &\qquad\left.+ \frac{M-1}{M} (1+\widetilde{v}_p(-\lambda;\phi,\psi))\right) (\|{f_{\textsc{nl}}}\|_{L_2}^2 + \widetilde{c}_p(-\lambda;\psi)), \label{eq:prop:correct-gcv-eq-1}\\ \frac{1}{n}\|\bm{y}- \bm{X}\widehat{\bm{\beta}}_m\|_2^2 &\simeq d_p(-\lambda;\psi,\psi) (1+\widetilde{v}_p(-\lambda;\psi,\psi)) (\|{f_{\textsc{nl}}}\|_{L_2}^2 + \widetilde{c}_p(-\lambda;\psi)), \label{eq:prop:correct-gcv-eq-2} \end{aligned}$$ and $$\begin{aligned} &\frac{1}{n}\|\bm{y}- \bm{X}{\widetilde{\bm{\beta}}}_M\|_2^2 \notag\\ &= \frac{1}{M^2}\sum_{m=1}^M \frac{1}{n} \|\bm{y}- \bm{X}\widehat{\bm{\beta}}_m\|_2^2 + \frac{1}{M^2}\sum_{m\neq\ell} \frac{1}{n} {(\bm{y}- \bm{X}\widehat{\bm{\beta}}_m)^\top (\bm{y}- \bm{X}\widehat{\bm{\beta}}_{\ell})} \notag\\ &\simeq\left( \frac{1}{M}d_p(-\lambda;\psi,\psi) (1+\widetilde{v}_p(-\lambda;\psi,\psi)) \right. \notag\\ &\qquad \left.+ \frac{M-1}{M}d_p(-\lambda;\phi,\psi) (1+\widetilde{v}_p(-\lambda;\phi,\psi))\right) (\|{f_{\textsc{nl}}}\|_{L_2}^2 + \widetilde{c}_p(-\lambda;\psi)). 
\label{eq:prop:correct-gcv-eq-3} \end{aligned}$$ On the other hand, from @du2023subsample [Lemma 3.4.], the average degrees of freedom ${\widetilde{\mathsf{df}}}{}$ and the denominator $(1-{\widetilde{\mathsf{df}}}{}/n)^2$ of the naive GCV estimator satisfy $$\begin{aligned} \frac{1}{n}{\widetilde{\mathsf{df}}}{}\simeq\frac{\phi}{\psi}(1-\lambda v_p(-\lambda;\psi)),\label{eq:prop:correct-gcv-eq-4} \end{aligned}$$ and $$\begin{aligned} (1-{\widetilde{\mathsf{df}}}{}/n)^2 \simeq\left(\frac{\psi-\phi}{\psi} + \frac{\phi}{\psi}\lambda v_p(-\lambda;\psi)\right)^2 = d_p(-\lambda;\phi,\psi).\label{eq:prop:correct-gcv-eq-5} \end{aligned}$$ Thus, from [\[eq:prop:correct-gcv-eq-1\]](#eq:prop:correct-gcv-eq-1){reference-type="eqref" reference="eq:prop:correct-gcv-eq-1"}, [\[eq:prop:correct-gcv-eq-3\]](#eq:prop:correct-gcv-eq-3){reference-type="eqref" reference="eq:prop:correct-gcv-eq-3"} and [\[eq:prop:correct-gcv-eq-5\]](#eq:prop:correct-gcv-eq-5){reference-type="eqref" reference="eq:prop:correct-gcv-eq-5"}, the difference between the prediction risk and the naive GCV estimator admits the following asymptotic representation: $$\begin{aligned} &R_M - \frac{\|\bm{y}- \bm{X}{\widetilde{\bm{\beta}}}_M\|_2^2/n}{(1-{\widetilde{\mathsf{df}}}{}/n)^2 } \notag\\ &\simeq\frac{1}{M}\left( 1 - \frac{d_p(-\lambda;\psi,\psi)}{d_p(-\lambda;\phi,\psi)} \right)(1+\widetilde{v}_p(-\lambda;\psi,\psi))(\|{f_{\textsc{nl}}}\|_{L_2}^2 + \widetilde{c}_p(-\lambda;\psi)).\label{eq:prop:correct-gcv-eq-6} \end{aligned}$$ On the other hand, from [\[eq:prop:correct-gcv-eq-2\]](#eq:prop:correct-gcv-eq-2){reference-type="eqref" reference="eq:prop:correct-gcv-eq-2"}, [\[eq:prop:correct-gcv-eq-4\]](#eq:prop:correct-gcv-eq-4){reference-type="eqref" reference="eq:prop:correct-gcv-eq-4"} and [\[eq:prop:correct-gcv-eq-5\]](#eq:prop:correct-gcv-eq-5){reference-type="eqref" reference="eq:prop:correct-gcv-eq-5"}, we also have that for all $m\in[M]$, $$\begin{aligned} &
\frac{1}{(1-{\widetilde{\mathsf{df}}}{}/n)^2} \cdot \frac{1}{M}\cdot \frac{(\psi-\phi) ({\widetilde{\mathsf{df}}}{}/n)^2}{\phi (1-{\widetilde{\mathsf{df}}}{}/n)^2 + (\psi-\phi) ({\widetilde{\mathsf{df}}}{}/n)^2} \cdot \frac{\|\bm{y}-\bm{X}\widehat{\bm{\beta}}_m\|_2^2}{n} \notag\\ &\simeq\frac{1}{M}\frac{d_p(-\lambda;\psi,\psi)}{d_p(-\lambda;\phi,\psi)}\frac{(\psi - \phi)\frac{\phi^2}{\psi^2}(1-\lambda v_p(-\lambda;\psi))^2}{\phi d_p(-\lambda;\phi,\psi) + (\psi - \phi)\frac{\phi^2}{\psi^2}(1-\lambda v_p(-\lambda;\psi))^2 } (1+\widetilde{v}_p(-\lambda;\psi,\psi)) (\|{f_{\textsc{nl}}}\|_{L_2}^2 + \widetilde{c}_p(-\lambda;\psi)) \notag\\ &= \frac{1}{M} \frac{\frac{\phi(\psi - \phi)}{\psi^2}\left(1-\lambda v_p(-\lambda;\psi)\right)^2}{d_p(-\lambda;\phi,\psi)} (1+\widetilde{v}_p(-\lambda;\psi,\psi)) (\|{f_{\textsc{nl}}}\|_{L_2}^2 + \widetilde{c}_p(-\lambda;\psi)) \notag\\ &= - \frac{1}{M} \left(1 - \frac{d_p(-\lambda;\psi,\psi)}{d_p(-\lambda;\phi,\psi)}\right) (1+\widetilde{v}_p(-\lambda;\psi,\psi)) (\|{f_{\textsc{nl}}}\|_{L_2}^2 + \widetilde{c}_p(-\lambda;\psi)),\label{eq:prop:correct-gcv-eq-7} \end{aligned}$$ by noting that $d_p(-\lambda;\psi,\psi) - d_p(-\lambda;\phi,\psi) = \phi(\psi - \phi)\psi^{-2}(1-\lambda v_p(-\lambda;\psi))^2$. Matching [\[eq:prop:correct-gcv-eq-6\]](#eq:prop:correct-gcv-eq-6){reference-type="eqref" reference="eq:prop:correct-gcv-eq-6"} and [\[eq:prop:correct-gcv-eq-7\]](#eq:prop:correct-gcv-eq-7){reference-type="eqref" reference="eq:prop:correct-gcv-eq-7"} finishes the proof when $\widehat{R}_{m,m}$ is the full-estimator. The proof when $\widehat{R}_{m,m}$ is the ovlp-estimator follows similarly. 
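The argument above relies on the compatibility of the degrees-of-freedom formula and the denominator equivalent: since $1-\frac{\phi}{\psi}(1-\lambda v_p) = \frac{\psi-\phi}{\psi}+\frac{\phi}{\psi}\lambda v_p$, squaring the complement of the first display reproduces $d_p(-\lambda;\phi,\psi)$. The following small numeric sanity check sketches this identity; the function name `d_p` is ours.

```python
import random

def d_p(lam_v, phi, psi):
    # d_p(-lam; phi, psi) written in terms of u = lam * v_p(-lam; psi).
    return ((psi - phi) / psi + (phi / psi) * lam_v) ** 2

random.seed(0)
for _ in range(1000):
    psi = random.uniform(0.1, 10.0)
    phi = random.uniform(0.0, psi)   # subsample ratio phi <= psi
    u = random.uniform(0.0, 1.0)     # u = lam * v_p lies in [0, 1]
    lhs = (1.0 - (phi / psi) * (1.0 - u)) ** 2   # squared (1 - df/n)
    assert abs(lhs - d_p(u, phi, psi)) < 1e-12
print("ok")
```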
**Uniform consistency.** From the proof of , we have that $$\begin{aligned} (1-{\widetilde{\mathsf{df}}}{}/n)^2\simeq\left(1-\frac{\phi}{\psi}(1-\lambda v_p(-\lambda;\psi))\right)^2=:\mathscr{D}_{m,\ell}^{\textup{\textrm{full}}},\qquad \text{for all }\ m\neq\ell, \end{aligned}$$ and $$\begin{aligned} \widetilde{R}^{\textup{\text{gcv}}}_M&\simeq\frac{1}{M^2}\sum_{m,\ell\in[M]}\frac{\mathscr{N}_{m,\ell}^{\textup{\textrm{full}}}}{\mathscr{D}_{1,2}^{\textup{\textrm{full}}}} \simeq\frac{1}{M^2}\sum_{m,\ell\in[M]} R_{m,\ell} - \frac{1}{M^2}\sum_{m\in[M]} \frac{\mathscr{D}_{m,m}^{\textup{\textrm{full}}}}{\mathscr{D}_{1,2}^{\textup{\textrm{full}}}} R_{m}\\ \widehat{R}_{m,m}^{\#}&\simeq R_{m,m}. \end{aligned}$$ Then we have $$\begin{aligned} \widehat{R}_M^{\textup{\textrm{cgcv}},\#}&=\widetilde{R}^{\textup{\text{gcv}}}_M- \frac{1}{M} \frac{({\widetilde{\mathsf{df}}}{}/n)^2}{(1-{\widetilde{\mathsf{df}}}{}/n)^2} \frac{\psi-\phi}{\phi}\frac{1}{M}\sum_{m=1}^M \widehat{R}_{m,m}^{\#}\\ &\simeq\widetilde{R}^{\textup{\text{gcv}}}_M- \frac{1}{M^2}\sum_{m=1}^{M} \frac{\mathscr{D}_{1,1}^{\textup{\textrm{full}}} - \mathscr{D}_{1,2}^{\textup{\textrm{full}}}}{\mathscr{D}_{1,2}^{\textup{\textrm{full}}}} R_{m,m}\\ &\simeq\frac{1}{M^2}\sum_{m,\ell\in[M]}R_{m,\ell}\\ &= R_M, \end{aligned}$$ which establishes the pointwise consistency. Similar to the proof of , the uniform equicontinuity of $\widetilde{R}_M^{\textup{\text{gcv}}}$ and $R_{m,\ell}$ to their asymptotic limits follows from . The uniformity for $|\widehat{R}_M^{\textup{\textrm{cgcv}},\#} - R_M|$ follows similarly. ------------------------------------------------------------------------ ## Technical lemmas and their proofs {#app:subsec:asym-technical-lemmas} **Lemma 16** (Bias term of risk). 
Suppose the same assumptions in hold and let $\varphi_{m\ell}=\psi\mathop{\mathrm{\mathds{1}}}_{\{m=\ell\}}+\phi\mathop{\mathrm{\mathds{1}}}_{\{m\neq\ell\}}$, then it holds that: (1) $\lambda^2\bm{\beta}_0^{\top} \bm{M}_{m} \bm{\Sigma}\bm{M}_{\ell} \bm{\beta}_0 \simeq\widetilde{c}_p(-\lambda;\psi) (1+\widetilde{v}_p(-\lambda;\varphi_{m\ell},\psi))$. (2) $\lambda^2\bm{\beta}_0^{\top} \bm{M}_{m} \widehat{\bm{\Sigma}}\xspace_{m\cap\ell}\bm{M}_{\ell} \bm{\beta}_0 \simeq\lambda^2v_p(-\lambda;\psi)^2\widetilde{c}_p(-\lambda;\psi) (1+\widetilde{v}_p(-\lambda;\varphi_{m\ell},\psi))$. (3) $\lambda^2\bm{\beta}_0^{\top} \bm{M}_{m} \widehat{\bm{\Sigma}}\xspace\bm{M}_{\ell} \bm{\beta}_0 \simeq \begin{dcases} \left(\frac{\psi-\phi}{\psi} + \frac{\phi}{\psi } \lambda^2v_p(-\lambda;\psi)^2 \right) \widetilde{c}_p(-\lambda;\psi) (1+\widetilde{v}_p(-\lambda;\varphi_{m\ell},\psi)), & m=\ell,\\ \left(\frac{\psi-\phi}{\psi} + \frac{\phi}{\psi}\lambda v_p(-\lambda;\psi)\right)^2 \widetilde{c}_p(-\lambda;\psi) (1+\widetilde{v}_p(-\lambda;\varphi_{m\ell},\psi)), & m\neq\ell. \end{dcases}$ Here the nonnegative constants $v_p(-\lambda;\psi)$, $\widetilde{v}_p(-\lambda;\phi,\psi)$ and $\widetilde{c}_p(-\lambda;\psi)$ are defined through the following equations: $$\begin{aligned} \frac{1}{v_p(-\lambda;\psi)} &= \lambda+\psi \int\frac{r}{1+v_p(-\lambda;\psi)r }{\,\mathrm{d}}H_p(r),\\ \widetilde{v}_p(-\lambda;\phi,\psi) &= \frac{\displaystyle \phi \int\frac{r^2}{(1+v_p(-\lambda;\psi)r)^2}{\,\mathrm{d}}H_p(r) }{\displaystyle v_p(-\lambda;\psi)^{-2}-\phi \int\frac{r^2}{(1+v_p(-\lambda;\psi)r)^2}{\,\mathrm{d}}H_p(r)},\\ \widetilde{c}_p(-\lambda;\psi) &= \bm{\beta}_0^{\top}(v_p(-\lambda;\psi)\bm{\Sigma}+\bm{I}_p)^{-1}\bm{\Sigma}(v_p(-\lambda;\psi)\bm{\Sigma}+\bm{I}_p)^{-1}\bm{\beta}_0 . 
\end{aligned}$$ Note that $\bm{\beta}_0$ is independent of $\bm{M}_{m} \bm{\Sigma}\bm{M}_{\ell}$, $\bm{M}_{m} \widehat{\bm{\Sigma}}\xspace_{m\cap\ell}\bm{M}_{\ell}$ and $\bm{M}_{m} \widehat{\bm{\Sigma}}\xspace\bm{M}_{\ell}$. We analyze the deterministic equivalents of the latter for the three cases. **Part (1)** From @patil2022bagging [Lemma S.2.4], we have that $$\begin{aligned} \lambda^2\bm{M}_m\bm{\Sigma}\bm{M}_{\ell} \simeq\left(v_p(-\lambda;\psi) \bm{\Sigma}+ \bm{I}_p\right)^{-1} (1+\widetilde{v}_p(-\lambda;\varphi_{m\ell},\psi))\bm{\Sigma}\left(v_p(-\lambda;\psi) \bm{\Sigma}+ \bm{I}_p\right)^{-1}. \end{aligned}$$ Since $\|\bm{\beta}_0\|_2$ is almost surely bounded from @patil2023generalized [Lemma D.5.], by the trace property of deterministic equivalents in @patil2023generalized [Lemma E.3 (4)], we have $$\begin{aligned} \bm{\beta}_0^{\top} \bm{M}_{m} \bm{\Sigma}\bm{M}_{\ell} \bm{\beta}_0 &\xlongequal{\textup{a.s.}} \bm{\beta}_0^{\top} \left(v_p(-\lambda;\psi) \bm{\Sigma}+ \bm{I}_p\right)^{-1} (1+\widetilde{v}_p(-\lambda;\varphi_{m\ell},\psi))\bm{\Sigma}\left(v_p(-\lambda;\psi) \bm{\Sigma}+ \bm{I}_p\right)^{-1} \bm{\beta}_0 \\ & = \widetilde{c}_p(-\lambda;\psi) (1+\widetilde{v}_p(-\lambda;\varphi_{m\ell},\psi)). \end{aligned}$$ **Part (2)** From @du2023subsample [Lemma D.6 (1) and Lemma F.8 (3)], we have that $$\begin{aligned} \lambda^2\bm{M}_m\widehat{\bm{\Sigma}}\xspace_{m\cap\ell}\bm{M}_{\ell} \simeq\lambda^2v_p(-\lambda;\psi)^2\left(v_p(-\lambda;\psi) \bm{\Sigma}+ \bm{I}_p\right)^{-1} (1+\widetilde{v}_p(-\lambda;\varphi_{m\ell},\psi))\bm{\Sigma}\left(v_p(-\lambda;\psi) \bm{\Sigma}+ \bm{I}_p\right)^{-1}. 
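The fixed-point quantities entering these equivalents can be evaluated concretely. Below is a sketch in the isotropic case $\bm{\Sigma}=\bm{I}_p$ (so $H_p=\delta_1$ and the integrals collapse to point evaluations at $r=1$, with $\widetilde{c}_p = \|\bm{\beta}_0\|_2^2/(1+v_p)^2$); the helper `fixed_point_quantities` is our hypothetical name, not notation from the cited lemmas.

```python
def fixed_point_quantities(lam, psi, phi, beta_norm_sq=1.0, tol=1e-13):
    # Isotropic case H_p = delta_1: solve 1/v = lam + psi/(1+v), then
    # evaluate v_tilde and c_tilde from their defining expressions.
    v = 0.0
    v_new = 0.0
    for _ in range(200_000):
        v_new = 1.0 / (lam + psi / (1.0 + v))
        if abs(v_new - v) < tol:
            break
        v = v_new
    v = v_new
    integral = 1.0 / (1.0 + v) ** 2                 # int r^2/(1+vr)^2 dH_p
    v_tilde = phi * integral / (v ** -2 - phi * integral)
    c_tilde = beta_norm_sq / (1.0 + v) ** 2         # beta0' (vI+I)^-2 beta0
    return v, v_tilde, c_tilde

v, v_tilde, c_tilde = fixed_point_quantities(lam=1.0, psi=2.0, phi=1.0)
print(v, v_tilde, c_tilde)   # all three should be nonnegative
```

For $\phi\leq\psi$ the denominator $v^{-2}-\phi\int r^2/(1+vr)^2\,{\mathrm{d}}H_p$ stays positive, consistent with the nonnegativity asserted in the lemma statement.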
\label{eq:bias-2} \end{aligned}$$ Then, by the trace property of deterministic equivalents in @patil2023generalized [Lemma E.3 (4)], we have $$\begin{aligned} \bm{\beta}_0^{\top} \bm{M}_{m} \widehat{\bm{\Sigma}}\xspace_{m\cap\ell}\bm{M}_{\ell} \bm{\beta}_0 &\xlongequal{\textup{a.s.}} \lambda^2v_p(-\lambda;\psi)^2\bm{\beta}_0^{\top} \left(v_p(-\lambda;\psi) \bm{\Sigma}+ \bm{I}_p\right)^{-1} (1+\widetilde{v}_p(-\lambda;\varphi_{m\ell},\psi))\bm{\Sigma}\left(v_p(-\lambda;\psi) \bm{\Sigma}+ \bm{I}_p\right)^{-1} \bm{\beta}_0 \\ &= \lambda^2v_p(-\lambda;\psi)^2\widetilde{c}_p(-\lambda;\psi) (1+\widetilde{v}_p(-\lambda;\varphi_{m\ell},\psi)). \end{aligned}$$ **Part (3)** Since $\widehat{\bm{\Sigma}}\xspace= \frac{|I_m\cup I_\ell|}{n}\widehat{\bm{\Sigma}}\xspace_{m\cup \ell} + \frac{|(I_m\cup I_\ell)^c|}{n}\widehat{\bm{\Sigma}}\xspace_{(m\cup \ell)^c}$, we have $$\begin{aligned} \bm{M}_{m} \widehat{\bm{\Sigma}}\xspace\bm{M}_{\ell} = \frac{|I_m\cup I_\ell|}{n} \bm{M}_{m} \widehat{\bm{\Sigma}}\xspace_{m\cup \ell} \bm{M}_{\ell} + \frac{|(I_{m}\cup I_\ell)^c|}{n} \bm{M}_{m} \widehat{\bm{\Sigma}}\xspace_{(m\cup \ell)^c} \bm{M}_{\ell} . 
\label{eq:lem:risk-bias-1} \end{aligned}$$ For the first term in [\[eq:lem:risk-bias-1\]](#eq:lem:risk-bias-1){reference-type="eqref" reference="eq:lem:risk-bias-1"}, from @du2023subsample [Equation (54)], when $m\neq \ell$, it holds that $$\begin{aligned} \lambda^2\bm{M}_m\widehat{\bm{\Sigma}}\xspace_{m\cup \ell}\bm{M}_{\ell} &\simeq\lambda^2v_p(-\lambda; \psi)^2(1+\widetilde{v}_p(-\lambda;\phi,\psi))\left(\frac{2(\psi-\phi)}{2\psi-\phi} \frac{1}{\lambda v_p(-\lambda;\psi)} + \frac{\phi}{2\psi-\phi}\right)( v_p(-\lambda; \psi) \bm{\Sigma}+ \bm{I}_p)^{-2}\bm{\Sigma}. \end{aligned}$$ From Part (2), when $m=\ell$, it holds that $$\begin{aligned} \lambda^2\bm{M}_m\widehat{\bm{\Sigma}}\xspace_{m\cup \ell}\bm{M}_{\ell} &\simeq\lambda^2v_p(-\lambda;\psi)^2\left(v_p(-\lambda;\psi) \bm{\Sigma}+ \bm{I}_p\right)^{-1} (1+\widetilde{v}_p(-\lambda;\varphi_{m\ell},\psi))\bm{\Sigma}\left(v_p(-\lambda;\psi) \bm{\Sigma}+ \bm{I}_p\right)^{-1}. \end{aligned}$$ For the second term in [\[eq:lem:risk-bias-1\]](#eq:lem:risk-bias-1){reference-type="eqref" reference="eq:lem:risk-bias-1"}, from @du2023subsample [Lemma F.8 (1)], we have $$\begin{aligned} \lambda^2\bm{M}_m\widehat{\bm{\Sigma}}\xspace_{(m\cup \ell)^c}\bm{M}_{\ell} &\simeq\lambda^2\bm{M}_1\bm{\Sigma}\bm{M}_2 \simeq\left(v_p(-\lambda;\psi) \bm{\Sigma}+ \bm{I}_p\right)^{-1} (1+\widetilde{v}_p(-\lambda;\varphi_{m\ell},\psi))\bm{\Sigma}\left(v_p(-\lambda;\psi) \bm{\Sigma}+ \bm{I}_p\right)^{-1}, \end{aligned}$$ where the last equivalence is from Part (1). When $m=\ell$, the coefficients in [\[eq:lem:risk-bias-1\]](#eq:lem:risk-bias-1){reference-type="eqref" reference="eq:lem:risk-bias-1"} concentrate as $|I_m\cup I_\ell|/n\xrightarrow{\textup{a.s.}}\phi/\psi$ and $|(I_m\cup I_\ell)^c|/n\xrightarrow{\textup{a.s.}}(\psi-\phi)/\psi$ from @du2023subsample [Lemma G.6]. 
Then [\[eq:lem:risk-bias-1\]](#eq:lem:risk-bias-1){reference-type="eqref" reference="eq:lem:risk-bias-1"} implies that $$\begin{aligned} \lambda^2\bm{M}_m\widehat{\bm{\Sigma}}\xspace\bm{M}_{\ell} &\simeq\left(\frac{\psi-\phi}{\psi} + \frac{\phi}{\psi } \lambda^2v_p(-\lambda;\psi)^2 \right) \left(v_p(-\lambda;\psi) \bm{\Sigma}+ \bm{I}_p\right)^{-1} (1+\widetilde{v}_p(-\lambda;\varphi_{m\ell},\psi))\bm{\Sigma}\left(v_p(-\lambda;\psi) \bm{\Sigma}+ \bm{I}_p\right)^{-1}. \end{aligned}$$ When $m\neq\ell$, the coefficients in [\[eq:lem:risk-bias-1\]](#eq:lem:risk-bias-1){reference-type="eqref" reference="eq:lem:risk-bias-1"} concentrate as $|I_m\cup I_\ell|/n\xrightarrow{\textup{a.s.}}\phi(2\psi-\phi)/\psi^2$ and $|(I_m\cup I_\ell)^c|/n\xrightarrow{\textup{a.s.}}(\psi-\phi)^2/\psi^2$ from @du2023subsample [Lemma G.6]. Then [\[eq:lem:risk-bias-1\]](#eq:lem:risk-bias-1){reference-type="eqref" reference="eq:lem:risk-bias-1"} implies that $$\begin{aligned} \lambda^2\bm{M}_m\widehat{\bm{\Sigma}}\xspace\bm{M}_{\ell} &\simeq\left(\frac{\psi-\phi}{\psi} + \frac{\phi}{\psi}\lambda v_p(-\lambda;\psi)\right)^2 \left(v_p(-\lambda;\psi) \bm{\Sigma}+ \bm{I}_p\right)^{-1} (1+\widetilde{v}_p(-\lambda;\varphi_{m\ell},\psi))\bm{\Sigma}\left(v_p(-\lambda;\psi) \bm{\Sigma}+ \bm{I}_p\right)^{-1}. \end{aligned}$$ Finally, applying the trace property of deterministic equivalents in @patil2023generalized [Lemma E.3 (4)] as in the previous parts completes the proof. ------------------------------------------------------------------------ **Lemma 17** (Variance term of risk). Suppose the same assumptions in hold and let $\varphi_{m\ell}=\psi\mathop{\mathrm{\mathds{1}}}_{\{m=\ell\}}+\phi\mathop{\mathrm{\mathds{1}}}_{\{m\neq\ell\}}$, then it holds that: (1) $k^{-2}\bm{f}_0^{\top} \bm{X}\bm{M}_m \bm{\Sigma}\bm{M}_{\ell}\bm{X}^{\top} \bm{f}_0 \simeq\|{f_{\textsc{nl}}}\|_{L_2}^2 \widetilde{v}_p(-\lambda;\varphi_{m\ell},\psi)$. 
(2) $\frac{1}{|I_m\cap I_{\ell}|}\bm{f}_0^{\top} \left(\bm{I}_p-\frac{\bm{X}\bm{M}_m\bm{X}^{\top}}{k}\right)\bm{L}_{m\cap\ell}\left(\bm{I}_p-\frac{\bm{X}\bm{M}_{\ell}\bm{X}^{\top}}{k}\right) \bm{f}_0 \simeq\|{f_{\textsc{nl}}}\|_{L_2}^2 \lambda^2v_p(-\lambda;\psi)^2(1+\widetilde{v}_p(-\lambda;\varphi_{m\ell},\psi))$. (3) $\frac{|I_m\cup I_{\ell}|}{n}\bm{f}_1^{\top} \left(\bm{I}_p-\frac{\bm{X}\bm{M}_m\bm{X}^{\top}}{k}\right)\bm{L}_{m\cup\ell}\left(\bm{I}_p-\frac{\bm{X}\bm{M}_{\ell}\bm{X}^{\top}}{k}\right) \bm{f}_1$ + $\frac{|(I_m\cup I_{\ell})^c|}{n} \|\bm{f}_0\|_2^2 + \frac{|(I_m\cup I_{\ell})^c|^2}{nk^2} \bm{f}_0^{\top} \bm{X}\bm{M}_m\widehat{\bm{\Sigma}}\xspace_{(m\cup\ell)^c } \bm{M}_{\ell}\bm{X}^{\top}\bm{f}_0$ $~~~~\simeq \begin{dcases} \|{f_{\textsc{nl}}}\|_{L_2}^2 \left(\frac{\psi-\phi}{\psi} + \frac{\phi}{\psi } \lambda^2v_p(-\lambda;\psi)^2 \right) (1+\widetilde{v}_p(-\lambda;\varphi_{m\ell},\psi)), & m=\ell,\\ \|{f_{\textsc{nl}}}\|_{L_2}^2 \left(\frac{\psi-\phi}{\psi} + \frac{\phi}{\psi}\lambda v_p(-\lambda;\psi)\right)^2(1+\widetilde{v}_p(-\lambda;\varphi_{m\ell},\psi)), & m\neq\ell. \end{dcases}$ Here $\bm{f}_0=\bm{L}_{m\cap\ell}{\bm{f}_{\textsc{nl}}}$, $\bm{f}_1=\bm{L}_{m\cup\ell}{\bm{f}_{\textsc{nl}}}$ and the nonnegative constants $v_p(-\lambda;\psi)$ and $\widetilde{v}_p(-\lambda;\phi,\psi)$ are defined in . Since $\|{f_{\textsc{nl}}}\|_{4+\delta}<\infty$ from @patil2023generalized [Lemma D.5], we have that $\|\bm{f}_0\|_2^2/|I_m\cap I_{\ell}| \xrightarrow{\textup{a.s.}}\|{f_{\textsc{nl}}}\|_{L_2}^2$ by the strong law of large numbers. 
Then, from , we have that the quadratic term $$\begin{aligned} \frac{1}{|I_m\cap I_{\ell}|}\bm{f}_0^{\top}\bm{X}\bm{A}\bm{X}^{\top}\bm{f}_0 &\simeq\frac{1}{|I_m\cap I_{\ell}|}\|{f_{\textsc{nl}}}\|_{L_2}^2 \mathop{\mathrm{tr}}(\bm{L}_{m\cap\ell}\bm{X}\bm{A}\bm{X}^{\top}\bm{L}_{m\cap\ell}) = \|{f_{\textsc{nl}}}\|_{L_2}^2 \mathop{\mathrm{tr}}(\bm{A}\widehat{\bm{\Sigma}}\xspace_{m\cap\ell}) \label{eq:lem:risk-var-eq-1} \end{aligned}$$ for any symmetric matrix $\bm{A}$ with bounded operator norm. We next apply this result for different values of $\bm{A}$. **Part (1)** Let $\bm{A}=\bm{M}_m \bm{\Sigma}\bm{M}_{\ell}$. Since from [\[eq:bias-2\]](#eq:bias-2){reference-type="eqref" reference="eq:bias-2"} and the product rule [@patil2022bagging Lemma S.7.4 (3)], we have that $$\bm{M}_{\ell} \widehat{\bm{\Sigma}}\xspace_{m\cap \ell} \bm{M}_m \bm{\Sigma}\simeq v_p(-\lambda;\psi)^2\left(v_p(-\lambda;\psi) \bm{\Sigma}+ \bm{I}_p\right)^{-1} (1+\widetilde{v}_p(-\lambda;\varphi_{m\ell},\psi))\bm{\Sigma}\left(v_p(-\lambda;\psi) \bm{\Sigma}+ \bm{I}_p\right)^{-1}\bm{\Sigma}.$$ Then by the trace property of deterministic equivalents in @patil2023generalized [Lemma E.3 (4)], we have $$\begin{aligned} \frac{p}{k\mathop{\mathrm{\mathds{1}}}_{\{m=\ell\}}+n\mathop{\mathrm{\mathds{1}}}_{\{m\neq\ell\}}}\cdot\frac{1}{p}\mathop{\mathrm{tr}}(\bm{A}\widehat{\bm{\Sigma}}\xspace_{m\cap\ell}) & \simeq\varphi_{m\ell}(1+\widetilde{v}_p(-\lambda;\varphi_{m\ell},\psi)) \int \left(\frac{v_p(-\lambda;\psi) r}{1 + v_p(-\lambda;\psi) r}\right)^2 {\,\mathrm{d}}H_p(r) \\ &= \widetilde{v}_p(-\lambda;\varphi_{m\ell},\psi). \end{aligned}$$ Finally, note that $|I_m\cap I_{\ell}|(k\mathop{\mathrm{\mathds{1}}}_{\{m=\ell\}}+n\mathop{\mathrm{\mathds{1}}}_{\{m\neq\ell\}}) \simeq k^2$. 
This implies that $$k^{-2}\bm{f}_0^{\top} \bm{X}\bm{M}_m \bm{\Sigma}\bm{M}_{\ell}\bm{X}^{\top} \bm{f}_0 \simeq\|{f_{\textsc{nl}}}\|_{L_2}^2 \widetilde{v}_p(-\lambda;\varphi_{m\ell},\psi).$$ **Part (2)** Note that $$\begin{aligned} &\frac{1}{|I_m\cap I_{\ell}|}\bm{f}_0^{\top} \left(\bm{I}_p-\frac{\bm{X}\bm{M}_m\bm{X}^{\top}}{k}\right)\bm{L}_{m\cap\ell}\left(\bm{I}_p-\frac{\bm{X}\bm{M}_{\ell}\bm{X}^{\top}}{k}\right) \bm{f}_0 \\ &= \frac{1}{|I_m\cap I_{\ell}|}\bm{f}_0^{\top} \bm{f}_0 - \frac{1}{|I_m\cap I_{\ell}|}\sum_{j\in\{m,\ell\}}\bm{f}_0^{\top} \frac{\bm{X}\bm{M}_j\bm{X}^{\top}}{k}\bm{f}_0 + \frac{1}{k^2}\bm{f}_0^{\top} \bm{X}\bm{M}_m\widehat{\bm{\Sigma}}\xspace_{m\cap\ell}\bm{M}_{\ell}\bm{X}^{\top} \bm{f}_0. \end{aligned}$$ We next analyze the three terms separately for $m\neq \ell$. By the law of large numbers, the first term converges as $\bm{f}_0^{\top} \bm{f}_0/|I_m\cap I_{\ell}| \xrightarrow{\textup{a.s.}}\|{f_{\textsc{nl}}}\|_{L_2}^2$. For the second term, let $\bm{A}$ be $\bm{M}_m$ or $\bm{M}_{\ell}$. From @patil2023generalized [Corollary F.5 and Lemma F.8 (4)], it follows that for $j\in\{m,\ell\}$, $$\begin{aligned} \bm{M}_j\widehat{\bm{\Sigma}}\xspace_{m\cap\ell} \simeq\bm{I}_p - (v_p(-\lambda;\psi) \bm{\Sigma}+ \bm{I}_p)^{-1}, \end{aligned}$$ and thus, we have $$\begin{aligned} \frac{1}{p}\mathop{\mathrm{tr}}(\bm{M}_j \widehat{\bm{\Sigma}}\xspace_{m\cap\ell} ) &\simeq\int \frac{v_p(-\lambda;\psi)r}{1+v_p(-\lambda;\psi)r}{\,\mathrm{d}}H_p(r) = \psi^{-1}(1 -\lambda v_p(-\lambda;\psi)). \end{aligned}$$ It then follows that $$\begin{aligned} \frac{1}{|I_m\cap I_{\ell}|}\sum_{j\in\{m,\ell\}}\bm{f}_0^{\top} \frac{\bm{X}\bm{M}_j\bm{X}^{\top}}{k}\bm{f}_0 \simeq 2\|{f_{\textsc{nl}}}\|_{L_2}^2(1 -\lambda v_p(-\lambda;\psi)). \end{aligned}$$ For the third term, let $\bm{A}= \bm{M}_m \widehat{\bm{\Sigma}}\xspace_{m\cap \ell} \bm{M}_{\ell}$. 
When $m\neq \ell$, from @patil2023generalized [Lemma D.7 (1) and Lemma F.8 (5)], we have that $$\begin{aligned} \bm{M}_m \widehat{\bm{\Sigma}}\xspace_{m\cap \ell} \bm{M}_{\ell}\widehat{\bm{\Sigma}}\xspace_{m\cap \ell} &\simeq\frac{\psi}{\phi}\left(v_p(-\lambda;\psi)-\frac{\psi-\phi}{\psi}\lambda \widetilde{v}_v(-\lambda;\phi,\psi) \right)(v_p(-\lambda;\psi)\bm{\Sigma}+\bm{I}_p)^{-1}\bm{\Sigma}\notag\\ &\qquad - \lambda \widetilde{v}_v(-\lambda;\phi,\psi)(v_p(-\lambda;\psi)\bm{\Sigma}+\bm{I}_p)^{-2}\bm{\Sigma}, \end{aligned}$$ where $\widetilde{v}_v(-\lambda;\phi,\psi) = v_p(-\lambda;\psi)^2(1+\widetilde{v}_p(-\lambda;\phi,\psi))$. Then from [\[eq:lem:risk-var-eq-1\]](#eq:lem:risk-var-eq-1){reference-type="eqref" reference="eq:lem:risk-var-eq-1"}, we have $$\begin{aligned} &\frac{1}{|I_m\cap I_{\ell}|}\bm{f}_0^{\top} \left(\bm{I}_p-\frac{\bm{X}\bm{M}_m\bm{X}^{\top}}{k}\right)\bm{L}_{m\cap\ell}\left(\bm{I}_p-\frac{\bm{X}\bm{M}_{\ell}\bm{X}^{\top}}{k}\right) \bm{f}_0 \\ &\simeq\|{f_{\textsc{nl}}}\|_{L_2}^2 \left(1 - 2(1-\lambda v_p(-\lambda;\psi)) + \psi\left(v_p(-\lambda;\psi)-\frac{\psi-\phi}{\psi}\lambda \widetilde{v}_v(-\lambda;\phi,\psi) \right) \int \frac{r}{1+v_p(-\lambda;\psi) r}{\,\mathrm{d}}H_p(r) \right.\\ &\qquad \left. 
- \phi \lambda\widetilde{v}_v(-\lambda;\phi,\psi) \int \frac{r}{(1+v_p(-\lambda;\psi) r)^2}{\,\mathrm{d}}H_p(r) \right)\\ &= \|{f_{\textsc{nl}}}\|_{L_2}^2 \lambda \widetilde{v}_v(-\lambda;\phi,\psi) \left(\frac{1}{v_p(-\lambda;\psi)} + \phi \int \left(\frac{r}{1+v_p(-\lambda;\psi) r}\right)^2{\,\mathrm{d}}H_p(r) - (\psi-\phi) \int \frac{r}{1+v_p(-\lambda;\psi) r}{\,\mathrm{d}}H_p(r)\right.\\ &\qquad\left.- \phi \int \frac{r}{(1+v_p(-\lambda;\psi) r)^2}{\,\mathrm{d}}H_p(r)\right)\\ &=\|{f_{\textsc{nl}}}\|_{L_2}^2 \lambda \widetilde{v}_v(-\lambda;\phi,\psi) \left(\frac{1}{v_p(-\lambda;\psi)} - \psi\int \frac{r}{1+v_p(-\lambda;\psi) r}{\,\mathrm{d}}H_p(r) \right)\\ &= \|{f_{\textsc{nl}}}\|_{L_2}^2 \lambda^2 \widetilde{v}_v(-\lambda;\phi,\psi)\\ &= \|{f_{\textsc{nl}}}\|_{L_2}^2 \lambda^2v_p(-\lambda;\psi)^2(1+\widetilde{v}_p(-\lambda;\phi,\psi)), \end{aligned}$$ when $m\neq \ell$. When $m=\ell$, from [\[eq:lem:risk-var-eq-1\]](#eq:lem:risk-var-eq-1){reference-type="eqref" reference="eq:lem:risk-var-eq-1"} and @du2023subsample [Lemma D.7 (1)], we have $$\begin{aligned} &\frac{1}{|I_m\cap I_{\ell}|}\bm{f}_0^{\top} \left(\bm{I}_p-\frac{\bm{X}\bm{M}_m\bm{X}^{\top}}{k}\right)\bm{L}_{m\cap\ell}\left(\bm{I}_p-\frac{\bm{X}\bm{M}_{\ell}\bm{X}^{\top}}{k}\right) \bm{f}_0 \\ &= \frac{1}{k} \bm{f}_0^{\top} \left(\bm{I}_p-\frac{\bm{X}\bm{M}_m\bm{X}^{\top}}{k}\right)\bm{L}_{m}\left(\bm{I}_p-\frac{\bm{X}\bm{M}_{m}\bm{X}^{\top}}{k}\right) \bm{f}_0\\ &\simeq\|{f_{\textsc{nl}}}\|_{L_2}^2 (1 - \frac{2}{k}\mathop{\mathrm{tr}}(\bm{M}_m\widehat{\bm{\Sigma}}\xspace_m) + \frac{1}{k}\mathop{\mathrm{tr}}(\bm{M}_m\widehat{\bm{\Sigma}}\xspace_m\bm{M}_m\widehat{\bm{\Sigma}}\xspace_m))\\ &\simeq\|{f_{\textsc{nl}}}\|_{L_2}^2 \lambda^2v_p(-\lambda;\psi)^2(1+\widetilde{v}_p(-\lambda;\psi,\psi)). \end{aligned}$$ Combining the above results finishes the proof for Part (2). **Part (3)** We analyze the three terms separately for $m\neq\ell$. 
From @du2023subsample [Lemma D.7 (3)], we have $$\begin{aligned} &\frac{1}{|I_m\cup I_{\ell}|}\mathop{\mathrm{tr}}\left( \left(\bm{I}_p-\frac{\bm{X}\bm{M}_m\bm{X}^{\top}}{k}\right)\bm{L}_{m\cup\ell}\left(\bm{I}_p-\frac{\bm{X}\bm{M}_{\ell}\bm{X}^{\top}}{k}\right) \right) \\ &\simeq\lambda^2v_p(-\lambda;\psi)^2\left(\frac{2(\psi-\phi)}{2\psi-\phi} \frac{1}{\lambda v_p(-\lambda;\psi)}+ \frac{\phi}{2\psi-\phi}\right)(1+\widetilde{v}_p(-\lambda;\psi,\psi)) . \end{aligned}$$ From , it then follows that $$\begin{aligned} &\frac{|I_m\cup I_{\ell}|}{n}\bm{f}_1^{\top} \left(\bm{I}_p-\frac{\bm{X}\bm{M}_m\bm{X}^{\top}}{k}\right)\bm{L}_{m\cup\ell}\left(\bm{I}_p-\frac{\bm{X}\bm{M}_{\ell}\bm{X}^{\top}}{k}\right) \bm{f}_1 \\ &\simeq\|{f_{\textsc{nl}}}\|_{L_2}^2 \frac{\phi(2\psi-\phi)}{\psi^2}\lambda^2v_p(-\lambda;\psi)^2\left(\frac{2(\psi-\phi)}{2\psi-\phi} \frac{1}{\lambda v_p(-\lambda;\psi)}+ \frac{\phi}{2\psi-\phi}\right)(1+\widetilde{v}_p(-\lambda;\psi,\psi)) . \end{aligned}$$ By the strong law of large numbers, the second term converges as $\frac{|(I_m\cup I_{\ell})^c|}{n} \|\bm{f}_0\|_2^2 \xrightarrow{\textup{a.s.}}\|{f_{\textsc{nl}}}\|_{L_2}^2$. For the third term, let $\bm{A}= \bm{M}_m\widehat{\bm{\Sigma}}\xspace_{(m\cup\ell)^c } \bm{M}_{\ell}$; then we have $$\begin{aligned} \frac{1}{p}\mathop{\mathrm{tr}}(\bm{M}_m\widehat{\bm{\Sigma}}\xspace_{(m\cup\ell)^c } \bm{M}_{\ell}\widehat{\bm{\Sigma}}\xspace_{m\cap \ell}) \simeq\frac{1}{p}\mathop{\mathrm{tr}}(\bm{M}_m\bm{\Sigma}\bm{M}_{\ell}\widehat{\bm{\Sigma}}\xspace_{m\cap \ell}) \simeq\psi^{-1}\widetilde{v}_p(-\lambda;\phi,\psi), \end{aligned}$$ where the first equivalence is from the conditional independence property and the second is from @du2023subsample [Lemma F.8 (3)]. 
Again, from , it follows that $$\frac{|(I_m\cup I_{\ell})^c|^2}{nk^2} \bm{f}_0^{\top} \bm{X}\bm{M}_m\widehat{\bm{\Sigma}}\xspace_{(m\cup\ell)^c } \bm{M}_{\ell}\bm{X}^{\top}\bm{f}_0 \simeq\|{f_{\textsc{nl}}}\|_{L_2}^2\left(\frac{\psi-\phi}{\psi}\right)^2\widetilde{v}_p(-\lambda;\phi,\psi).$$ Combining the above results finishes the proof of Part (3) for $m\neq\ell$. When $m=\ell$, the formula simplifies to $$\frac{1}{n}\bm{f}_0^{\top} \left(\bm{I}_p-\frac{\bm{X}\bm{M}_m\bm{X}^{\top}}{k}\right)\left(\bm{I}_p-\frac{\bm{X}\bm{M}_{\ell}\bm{X}^{\top}}{k}\right) \bm{f}_0.$$ From [\[eq:lem:risk-var-eq-1\]](#eq:lem:risk-var-eq-1){reference-type="eqref" reference="eq:lem:risk-var-eq-1"}, Part (2) and @patil2022bagging [Lemma S.2.5 (1)], we have $$\begin{aligned} &\frac{1}{n}\bm{f}_0^{\top} \left(\bm{I}_p-\frac{\bm{X}\bm{M}_m\bm{X}^{\top}}{k}\right)\left(\bm{I}_p-\frac{\bm{X}\bm{M}_{\ell}\bm{X}^{\top}}{k}\right) \bm{f}_0 \\ &= \frac{1}{n} \bm{f}_0^{\top} \left(\bm{I}_p-\frac{\bm{X}\bm{M}_m\bm{X}^{\top}}{k}\right)\left(\bm{I}_p-\frac{\bm{X}\bm{M}_{m}\bm{X}^{\top}}{k}\right) \bm{f}_0\\ &\simeq\|{f_{\textsc{nl}}}\|_{L_2}^2 \frac{k}{n}\left(1 - \frac{2}{k}\mathop{\mathrm{tr}}(\bm{M}_m\widehat{\bm{\Sigma}}\xspace_{m}) + \frac{1}{k}\mathop{\mathrm{tr}}(\bm{M}_m\widehat{\bm{\Sigma}}\xspace_{m}\bm{M}_{m}\widehat{\bm{\Sigma}}\xspace_{m})\right) \\ &\qquad + \|{f_{\textsc{nl}}}\|_{L_2}^2 \frac{n-k}{n}\left( 1 + \frac{1}{n}\mathop{\mathrm{tr}}(\bm{M}_m\widehat{\bm{\Sigma}}\xspace_{m}\bm{M}_{m}\widehat{\bm{\Sigma}}\xspace_{m^c})\right)\\ &\simeq\|{f_{\textsc{nl}}}\|_{L_2}^2 \frac{\phi}{\psi } \lambda^2v_p(-\lambda;\psi)^2 (1+\widetilde{v}_p(-\lambda;\varphi_{m\ell},\psi)) + \frac{\psi-\phi}{\psi} \left( 1 + \frac{1}{k}\mathop{\mathrm{tr}}(\bm{M}_m\widehat{\bm{\Sigma}}\xspace_{m}\bm{M}_{m}\bm{\Sigma}) \right)\\ &\simeq\|{f_{\textsc{nl}}}\|_{L_2}^2 \frac{\phi}{\psi } \lambda^2v_p(-\lambda;\psi)^2 (1+\widetilde{v}_p(-\lambda;\varphi_{m\ell},\psi)) + \frac{\psi-\phi}{\psi} 
\left( 1 + \widetilde{v}_p(-\lambda;\psi,\psi)\right) \\ &\simeq\|{f_{\textsc{nl}}}\|_{L_2}^2 \left(\frac{\psi-\phi}{\psi} + \frac{\phi}{\psi } \lambda^2v_p(-\lambda;\psi)^2 \right) (1+\widetilde{v}_p(-\lambda;\varphi_{m\ell},\psi)). \end{aligned}$$ Combining the above results finishes the proof of Part (3). ------------------------------------------------------------------------ **Lemma 18** (Quadratic concentration with uncorrelated components). Let $\bm{X}\in\mathbb{R}^{n\times p}$ be the design matrix whose rows $\bm{x}_i$'s are independent samples drawn according to . Let $\bm{f}\in \mathbb{R}^{n}$ be a random vector with i.i.d. entries $f_i$, where $f_i$ has bounded $L_2$ norm and is uncorrelated with $\bm{x}_i$. Let $\bm{A}\in\mathbb{R}^{p\times p}$ be a symmetric matrix such that $\limsup\|\bm{A}\|_{\mathop{\mathrm{op}}}\leq M_0$ almost surely as $p\rightarrow\infty$ for some constant $M_0<\infty$. Then as $n,p \to \infty$ such that $p/n\rightarrow\phi\in(0,\infty)$, it holds that $$\begin{aligned} \frac{1}{n}\bm{f}^{\top}\bm{X}\bm{A}\bm{X}^{\top} \bm{f}\simeq\frac{1}{n}\|{f_{\textsc{nl}}}\|_{L_2}^2\mathop{\mathrm{tr}}(\bm{X}\bm{A}\bm{X}^{\top}). \end{aligned}$$ Let $\widehat{\bm{\Sigma}}\xspace=\bm{X}^{\top}\bm{X}/n$ and $\bm{M}=(\widehat{\bm{\Sigma}}\xspace+\lambda\bm{I}_p)^{-1}$ be the resolvent. Note that $$\begin{aligned} \frac{1}{n}\bm{f}^{\top}\bm{X}\bm{A}\bm{X}^{\top} \bm{f}= \frac{1}{n}\bm{f}^{\top}\bm{X}\bm{M}(\bm{M}^{-1}\bm{A}\bm{M}^{-1}) \bm{M}\bm{X}^{\top} \bm{f}. \label{eq:lem:quad-uncorr-eq-1} \end{aligned}$$ Since $\bm{X}=\bm{Z}\bm{\Sigma}^{1/2}$, we have $\bm{M}\bm{X}^{\top} = \bm{\Sigma}^{-1/2}(\bm{Z}^{\top}\bm{Z}/n+\lambda\bm{\Sigma}^{-1})^{-1}\bm{Z}^{\top} = \bm{\Sigma}^{1/2}\bm{Z}^{\top}(\bm{Z}\bm{\Sigma}\bm{Z}^{\top}/n+\lambda\bm{I}_n)^{-1}$. 
Let $\bm{B}_1= \bm{Z}\bm{\Sigma}\bm{Z}^{\top}/n+\lambda\bm{I}_n$ and $\bm{B}_2=\bm{Z}\bm{\Sigma}^{1/2}\bm{M}^{-1}\bm{A}\bm{M}^{-1} \bm{\Sigma}^{1/2}\bm{Z}^{\top}/n$, then [\[eq:lem:quad-uncorr-eq-1\]](#eq:lem:quad-uncorr-eq-1){reference-type="eqref" reference="eq:lem:quad-uncorr-eq-1"} becomes $$\begin{aligned} \frac{1}{n}\bm{f}^{\top}\bm{X}\bm{A}\bm{X}^{\top} \bm{f}= \bm{f}^{\top}\bm{B}_1^{-1} \bm{B}_2 \bm{B}_1^{-1} \bm{f}. \end{aligned}$$ Next, we adapt the idea of @bartlett2021deep [Lemma A.16] to show the diagonal concentration and trace concentration successively.

**Diagonal concentration.** From a matrix identity in @patil2023generalized [Lemma D.4], we have that, for any $t>0$, $$\begin{aligned} \bm{B}_1^{-1}\bm{B}_2\bm{B}_1^{-1} &= \frac{1}{t}(\bm{B}_1^{-1} - (\bm{B}_1 + t \bm{B}_2)^{-1}) + t \bm{B}_1^{-1} \bm{B}_2 (\bm{B}_1 + t \bm{B}_2)^{-1} \bm{B}_2 \bm{B}_1^{-1} . \end{aligned}$$ Let $\bm{U}\in\mathbb{R}^{n\times n}$ with $U_{ij}=[\bm{f}]_i[\bm{f}]_j\mathop{\mathrm{\mathds{1}}}\{i\neq j\}$. We then have $$\begin{aligned} &\left|\sum_{1\leq i\neq j\leq n}[\bm{B}_1^{-1}\bm{B}_2\bm{B}_1^{-1}]_{ij}[\bm{f}]_i[\bm{f}]_j\right| \notag\\ &= |\langle \bm{B}_1^{-1}\bm{B}_2\bm{B}_1^{-1} ,\bm{U}\rangle| \notag\\ &\leq \frac{1}{t}|\langle \bm{B}_1^{-1},\bm{U}\rangle| + \frac{1}{t}|\langle(\bm{B}_1 + t \bm{B}_2)^{-1},\bm{U}\rangle| + t \|\bm{B}_1^{-1}\|_{\mathop{\mathrm{op}}}^2 \|\bm{B}_2\|_{\mathop{\mathrm{op}}}^2 \|(\bm{B}_1 + t \bm{B}_2)^{-1}\|_{\mathop{\mathrm{op}}}\|\bm{U}\|_{\mathop{\mathrm{tr}}}. \label{eq:lem:gen-risk-nonlinear-eq-4} \end{aligned}$$ For the first two terms, @patil2023generalized [Lemma D.3] implies that $$\begin{aligned} \frac{1}{n}|\langle \bm{B}_1^{-1} ,\bm{U}\rangle| \xrightarrow{\textup{a.s.}}0, \quad \text{and} \quad \frac{1}{n}|\langle (\bm{B}_1 + t\bm{B}_2)^{-1},\bm{U}\rangle|\xrightarrow{\textup{a.s.}}0, \end{aligned}$$ where the second convergence is due to $|\langle (\bm{B}_1 + t\bm{B}_2)^{-1},\bm{U}\rangle|\lesssim t|\langle \bm{B}_1 ^{-1},\bm{U}\rangle|$ because of the Woodbury matrix identity and the bounded spectra of $\bm{B}_1$ and $\bm{B}_2$. For the last term, note that $\|\bm{B}_1^{-1}\|_{\mathop{\mathrm{op}}}\leq \lambda^{-1}$ and $\|\bm{B}_2\|_{\mathop{\mathrm{op}}}\leq \|\bm{A}\|_{\mathop{\mathrm{op}}}\|\widehat{\bm{\Sigma}}\xspace\|_{\mathop{\mathrm{op}}}$, where $\|\bm{A}\|_{\mathop{\mathrm{op}}}$ is almost surely bounded as assumed, and $\|\bm{Z}\bm{\Sigma}^{1/2}\|_{\mathop{\mathrm{op}}}^2/n=\|\widehat{\bm{\Sigma}}\xspace\|_{\mathop{\mathrm{op}}}\leq r_{\max}(1+\sqrt{\phi})^2$ almost surely as $n,p\rightarrow\infty$ and $p/n\rightarrow\phi\in(0,\infty)$ (see, e.g., @bai2010spectral). Also, $\|\bm{U}\|_{\mathop{\mathrm{tr}}}/n\leq 2\|\bm{f}\|_2^2/n \xrightarrow{\textup{a.s.}}2\|{f_{\textsc{nl}}}\|_{L_2}^2<\infty$ from the strong law of large numbers. Thus, the last term is almost surely bounded. By choosing $t=\sqrt{|\langle \bm{B}_1^{-1},\bm{U}\rangle|/\|\bm{U}\|_{\mathop{\mathrm{tr}}}}$, it then follows that $n^{-1}|\langle \bm{B}_1^{-1}\bm{B}_2\bm{B}_1^{-1} ,\bm{U}\rangle| \xrightarrow{\textup{a.s.}}0$. Therefore, $$\begin{aligned} \left|\frac{1}{n}\bm{f}^{\top}\bm{B}_1^{-1}\bm{B}_2\bm{B}_1^{-1}\bm{f}- \frac{1}{n}\sum_{i=1}^n [\bm{B}_1^{-1}\bm{B}_2\bm{B}_1^{-1}]_{ii}[\bm{f}]_i^2\right| \xrightarrow{\textup{a.s.}}0.
\end{aligned}$$ **Trace concentration.** From the results in @knowles2017anisotropic, it holds that $$\begin{aligned} \max_{1\leq i\leq n}\left|[\bm{B}_1^{-1}\bm{B}_2\bm{B}_1^{-1}]_{ii} - \frac{1}{n}\mathop{\mathrm{tr}}[\bm{B}_1^{-1}\bm{B}_2\bm{B}_1^{-1}]\right| \xrightarrow{\textup{a.s.}}0. \end{aligned}$$ Further, since $n^{-1}\|\bm{f}\|_2^2\xrightarrow{\textup{a.s.}}\|{f_{\textsc{nl}}}\|_{L^2}^2$, we have $$\begin{aligned} \frac{1}{n}|\bm{f}^{\top} \bm{B}_1^{-1}\bm{B}_2\bm{B}_1^{-1}\bm{f}- \mathop{\mathrm{tr}}[\bm{B}_1^{-1}\bm{B}_2\bm{B}_1^{-1}]\|{f_{\textsc{nl}}}\|_{L^2}^2|\xrightarrow{\textup{a.s.}}0, \end{aligned}$$ which finishes the proof.

------------------------------------------------------------------------

# Additional numerical illustrations in {#app:experiments-gaussian}

## Comparison of intermediate ovlp- and full-estimators for elastic net and lasso {#app:experiments-gaussian-sub-vs-full}

[Elastic net:]{.ul} ![ **Full-estimator performs better than ovlp-estimator across different subsample sizes.** The relative errors of the "ovlp" and "full" estimators for the elastic net ensemble with ensemble size $M=10$. The left panel shows the results with penalty $\lambda=1$ and varying subsample size $k$. The right panel shows the results with subsample size $k=300$ and varying penalty $\lambda$. The data generating process is the same as in . ](figures/Fig2_sub_full_ElasticNet.pdf){#fig:Fig2_sub_full_ElasticNet.pdf width="80%"}

[Lasso:]{.ul} ![ **Full-estimator performs better than ovlp-estimator across different subsample sizes.** The relative errors of the "ovlp" and "full" estimators for the lasso ensemble with ensemble size $M=10$. The left panel shows the results with penalty $\lambda=1$ and varying subsample size $k$. The right panel shows the results with subsample size $k=300$ and varying penalty $\lambda$. The data generating process is the same as in .
](figures/Fig2_sub_full_lasso.pdf){#fig:Fig2_sub_full_k.pdf width="80%"} ## Comparison of ovlp- and full-CGCV for ridge, elastic net, and lasso {#app:experiments-gaussian-sub-vs-full-CGCV} [Ridge:]{.ul} ![ **The CGCV\_full estimator ($\widetilde{R}_M^{\textup{\textrm{cgcv}},\textup{\textrm{full}}}$) performs better than CGCV_ovlp estimator ($\widetilde{R}_M^{\textup{\textrm{cgcv}},\textup{\textrm{ovlp}}}$).** Plots of relative errors of the $\widetilde{R}_M^{\textup{\textrm{cgcv}},\textup{\textrm{full}}}$ and $\widetilde{R}_M^{\textup{\textrm{cgcv}},\textup{\textrm{ovlp}}}$ for ridge ensemble with ensemble size $M=10$. The left panel shows the results with ridge penalty $\lambda=0.001$ and varying subsample size $k$. The right panel shows the results with subsample size $k=500$ and varying ridge penalty $\lambda$. The data generating process is the same as in . ](figures/CGCV_sub_full_ridge.pdf){#fig:Fig2_sub_full_ridge-cgcv.pdf width="80%"} [Elastic net:]{.ul} ![ **The CGCV\_full estimator ($\widetilde{R}_M^{\textup{\textrm{cgcv}},\textup{\textrm{full}}}$) performs better than CGCV_ovlp estimator ($\widetilde{R}_M^{\textup{\textrm{cgcv}},\textup{\textrm{ovlp}}}$).** Plots of relative errors of the $\widetilde{R}_M^{\textup{\textrm{cgcv}},\textup{\textrm{full}}}$ and $\widetilde{R}_M^{\textup{\textrm{cgcv}},\textup{\textrm{ovlp}}}$ for elastic net ensemble with ensemble size $M=10$ and $\lambda_2=0.01$. The left panel shows the results with elastic net penalty $\lambda_1=0.002$ and varying subsample size $k$. The right panel shows the results with subsample size $k=200$ and varying elastic net penalty $\lambda_1$. The data generating process is the same as in . 
](figures/CGCV_sub_full_ElasticNet.pdf){#fig:Fig2_sub_full_ElasticNet-cgcv.pdf width="80%"}

[Lasso:]{.ul} ![ **The CGCV\_full estimator ($\widetilde{R}_M^{\textup{\textrm{cgcv}},\textup{\textrm{full}}}$) performs similarly to the CGCV\_ovlp estimator ($\widetilde{R}_M^{\textup{\textrm{cgcv}},\textup{\textrm{ovlp}}}$).** Plots of relative errors of the $\widetilde{R}_M^{\textup{\textrm{cgcv}},\textup{\textrm{full}}}$ and $\widetilde{R}_M^{\textup{\textrm{cgcv}},\textup{\textrm{ovlp}}}$ for lasso ensemble with ensemble size $M=10$. The left panel shows the results with lasso penalty $\lambda=0.003$ and varying subsample size $k$. The right panel shows the results with subsample size $k=200$ and varying lasso penalty $\lambda$. The data generating process is the same as in . ](figures/CGCV_sub_full_lasso.pdf){#fig:Fig2_sub_full_lasso.pdf width="80%"}

## Comparison of CGCV and GCV in $k$ for ridge, elastic net, and lasso {#app:experiments-gaussian-cgcv-vs-gcv-in-k}

[Ridge:]{.ul} ![ **CGCV is consistent for finite ensembles across different subsample sizes.** Plot of risks versus the subsample size $k$ for the ridge ensemble with different $\lambda$ and $M$. The data generating process is the same as in . ](figures/ridge_k.pdf){#fig:surface-plot width="\\textwidth"}

[Elastic Net:]{.ul} ![ **CGCV is consistent for finite ensembles across different subsample sizes.** Plot of risks versus the subsample size $k$ for the elastic net ensemble with different $\lambda$ and $M$. ](figures/ElasticNet_k.pdf){#fig:ElasticNet_k.pdf width="\\textwidth"}

[Lasso:]{.ul} ![ **CGCV is consistent for finite ensembles across different subsample sizes.** Plot of risks versus the subsample size $k$ for the lasso ensemble with different $\lambda$ and $M$.
](figures/lasso_k.pdf){#fig:lasso_k.pdf width="\\textwidth"}

## Comparison of CGCV and GCV in $M$ for ridge, elastic net, and lasso {#app:experiments-gaussian-cgcv-vs-gcv-in-M}

[Ridge:]{.ul} ![ **GCV gets closer to the risk as ensemble size increases across different subsample sizes.** Plot of risks versus the ensemble size $M$ for the ridge ensemble with different $\lambda$ and $k$. The data generating process is the same as in . ](figures/ridge_M.pdf){#fig:lasso-surface-plot width="\\textwidth"}

[Elastic Net:]{.ul} ![ **GCV gets closer to the risk as ensemble size increases across different subsample sizes.** Plot of risks versus the ensemble size $M$ for the elastic net ensemble with different $\lambda$ and $k$. ](figures/ElasticNet_M.pdf){#fig:ElasticNet_M.pdf width="\\textwidth"}

[Lasso:]{.ul} ![ **GCV gets closer to the risk as ensemble size increases across different subsample sizes.** Plot of risks versus the ensemble size $M$ for the lasso ensemble with different $\lambda$ and $k$. ](figures/lasso_M.pdf){#fig:lasso_M.pdf width="\\textwidth"}

## Comparison of CGCV and GCV in $\lambda$ for ridge, elastic net and lasso {#app:experiments-gaussian-cgcv-vs-gcv-in-lambda}

[Ridge:]{.ul} ![ **CGCV is consistent for finite ensembles across different regularization levels.** Plot of risks versus the regularization level $\lambda$ for the ridge ensemble with different $M$ and $k$. ](figures/ridge_lam.pdf){#fig:lasso-lam width="\\textwidth"}

[Elastic Net:]{.ul} ![ **CGCV is consistent for finite ensembles across different regularization levels.** Plot of risks versus the regularization level $\lambda$ for the elastic net ensemble with different $M$ and $k$. ](figures/ElasticNet_lam.pdf){#fig:lam-ridge width="\\textwidth"}

[Lasso:]{.ul} ![ **CGCV is consistent for finite ensembles across different regularization levels.** Plot of risks versus the regularization level $\lambda$ for the lasso ensemble with different $M$ and $k$.
](figures/lasso_lam.pdf){#fig:lam-lasso width="\\textwidth"} ## Illustration of inconsistency for GCV variant {#sec:naive-gcv-inconsistency} ![ **The GCV variant [\[eq:gcv-naive\]](#eq:gcv-naive){reference-type="eqref" reference="eq:gcv-naive"} is not consistent for finite $M>1$.** Plot of risks versus the regularization parameter for ridge, lasso, and elastic net ensembles, under different $M$ and fixed $k=800$. The data generating process is the same as in . ](figures/fig_GCV_naive.pdf){#fig:gcv_naive_lam width="\\textwidth"} # Additional numerical illustrations for {#app:experiments-nongaussian} ## Comparison of CGCV and GCV in $\lambda$ for elastic net and lasso {#app:experiments-nongaussian-cgcv-vs-gcv-in-lambda} [Elastic net:]{.ul} ![ **CGCV is consistent for finite ensembles across different regularization levels.** The GCV estimates for the Elastic Net ensemble with varying lasso penalty $\lambda_1$ and fixed $\lambda_2=0.01$ in a problem setup of over 50 repetitions of the datasets. Here the feature dimension is $p=1200$. The left and the right panels show the cases when the subsample sizes are $k=2400$ and $k=800$, respectively.](figures/fig_elastic_net_lam.pdf){#fig:elasticnet-lambda width="90%"} [Lasso:]{.ul} ![ **CGCV is consistent for finite ensembles across different regularization levels.** The GCV estimates for the lasso ensemble with varying lasso penalty $\lambda$ in a problem setup of over 50 repetitions of the datasets. Here the feature dimension is $p=1200$. The left and the right panels show the cases when the subsample sizes are $k=2400$ and $k=800$, respectively. ](figures/fig_lasso_lam.pdf){#fig:lasso-lambda width="90%"} [^1]: Author names sorted alphabetically. $^\ast$Corresponding authors. [^2]: Department of Statistics, Rutgers University, New Brunswick, NJ 08854, USA. [^3]: Department of Statistics and Data Science, Carnegie Mellon University, Pittsburgh, PA 15213, USA. 
[^4]: Machine Learning Department, Carnegie Mellon University, Pittsburgh, PA 15213, USA. [^5]: Department of Statistics, University of California, Berkeley, CA 94720, USA.
{ "id": "2310.01374", "title": "Corrected generalized cross-validation for finite ensembles of penalized\n estimators", "authors": "Pierre Bellec, Jin-Hong Du, Takuya Koriyama, Pratik Patil, Kai Tan", "categories": "math.ST stat.ME stat.ML stat.TH", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | In this paper, we study a version of the Iitaka conjectures for anticanonical divisors over perfect fields of positive characteristic. That is, we prove the inequality $\kappa(X,-K_X)\leq \kappa(X_y,-K_{X_y})+\kappa(Y,-K_Y)$, for a contraction $f\colon X\rightarrow Y$ with general fibre $X_y$ having good arithmetic properties. address: - Imperial College London, London, SW7 2AZ, UK - University College London, London, WC1E 6BT, UK - National Center for Theoretical Sciences, Taipei, 106, Taiwan - Astronomy Mathematics Building 5F, No. 1, Sec. 4, Roosevelt Rd., Taipei 10617, Taiwan author: - Marta Benozzo - Iacopo Brivio - Chi-Kang Chang bibliography: - bib.bib title: Superadditivity of anticanonical Iitaka dimension for contractions with $F$-split fibres --- # Introduction A projective algebraic variety $X$ is classified according to the positivity properties of its canonical divisor $K_X$. The most basic measure of such positivity is its *Iitaka dimension* $\kappa(X,K_X)$, an invariant which measures the rate of growth of the spaces $H^0(X,mK_X)$ as a function of $m$. For varieties over the complex numbers, Iitaka proposed the following Conjecture **Conjecture 1** (Iitaka's Conjecture, $C_{n,m}$). *Let $f\colon X\to Y$ be a contraction of smooth projective complex varieties, of dimensions $n$ and $m$ respectively, and let $y\in Y$ be a general point. Then $$\kappa(X,K_X)\geq \kappa(X_y,K_{X_y})+\kappa(Y,K_Y).$$* Although still open in general, over fields of characteristic $0$ this conjecture is proven for many important classes of contractions ([@Vie1; @Vie2; @Vie3; @Kaw_KDAFSOC; @Kaw_CAV; @Kaw_MMKDAFS; @Fuj_AFSGFMAD; @HPS; @Bir_IC6; @Cao; @CP; @CH]). In particular, $C_{n,m}$ holds when $\dim(Y) \leq 2$, and when $X_y$ admits a good minimal model. More recently it was shown in [@Chang] that a similar inequality holds for the anticanonical divisors. **Theorem 1** ([@Chang Theorem 1.1], $C_{n,m}^-$). 
*Let $f\colon X\to Y$ be a contraction of smooth complex projective varieties, of dimensions $n$ and $m$ respectively, such that the stable base locus $\mathbb{B}(-K_X)$ does not dominate $Y$, and let $y\in Y$ be a general point. Then $$\kappa(X,-K_X)\leq \kappa(X_y,-K_{X_y})+\kappa(Y,-K_Y).$$* The condition on the stable base locus is necessary, as shown by [@Chang Example 1.7]. Broadly speaking, for both and the main tools are *semipositivity* results for the sheaves $f_*\omega_{X/Y}^m$. In particular, [@Chang] makes use of the canonical bundle formula for klt-trivial fibrations ([@Amb_MBD]), the proof of which relies on Hodge-theoretic methods. Given that these techniques are not available in characteristic $p>0$ ([@MB]), it is natural to ask whether the above statements still hold in this case. As it turns out, both $C_{n,m}$ and $C_{n,m}^-$ can fail in general ([@counterexamples; @Benozzo]). However, has been proven for generically smooth contractions of relative dimension one, or when $\dim(X)\leq 3$ and $p>5$ ([@CZ], [@ejiri2017weak], [@Iitaka3folds], [@EZ] and [@zhang2019subadditivity]). Due to the lack of generic smoothness results over fields of positive characteristic, the general fibre of a contraction may be singular and even non-reduced. In this case, Patakfalvi showed when $Y$ is of general type and $X_y$ has non-nilpotent Hasse-Witt matrix ([@Pat]). This suggests that we may expect $C_{n,m}$ to hold for contractions whose fibres have "arithmetically nice" singularities. Similarly, it was proven in [@Benozzo] that, when we have enough control on the singularities of the general fibre, $C_{n,1}^-$ and $C_{3,m}^-$ hold (the latter provided that $p\geq 5$), as well as $C_{n,n-1}^-$ if $\kappa(Y,-K_Y)=0$. In this paper we extend to certain contractions in positive characteristic in any dimension. **Theorem 2** (see ).
*Let $f \colon X \to Y$ be a contraction of smooth projective varieties over an algebraically closed field $k$ of characteristic $p>0$, such that a general fibre $X_{y}$ is $K$-globally $F$-regular[^1]. Assume there exist $m\in\mathbb{N}\setminus p\mathbb{N}$ such that $-mK_X$ is Cartier, and a linear subsystem $|V|\subseteq |-mK_X|$ such that $|V|_{X_y}$ induces a morphism whose Stein degree is not divisible by $p$. Then $$\kappa(X,-K_X)\leq \kappa(X_y,-K_{X_y})+\kappa(Y,-K_Y).$$*

## Proof outline

The key statement is [@Chang Proposition 4.2], a result stating that positivity of $-K_X$ descends along contractions in a controlled way. More precisely: if $|-K_X-f^*E|_{\mathbb{Q}}\neq \emptyset$ then $|-K_Y-\epsilon E|_{\mathbb{Q}}\neq\emptyset$ for small $\epsilon>0$. This can be used to prove an injectivity theorem ([@Chang Theorem 4.3]) which implies when $\kappa(Y,-K_Y)=0$. The proof of [@Chang Proposition 4.2] is a consequence of the following facts.

1. As $\mathbb{B}(-K_X)$ does not dominate $Y$, we can take $\Delta\in |-K_X|_{\mathbb{Q}}$ such that $(X_y,\Delta_{X_y})$ has "nice" (i.e. klt) singularities;
2. $(X,\Delta)\xrightarrow{f}Y$ is now a log Calabi-Yau contraction, hence we have a canonical bundle formula $K_X+\Delta\sim_{\mathbb{Q}}f^*(K_Y+B_Y+M_Y)$.
3. If $\Gamma\in |-K_X-f^*E|_{\mathbb{Q}}$, then a small perturbation $(X,\epsilon\Gamma)$ will still have "nice" singularities over the generic point of $Y$ and $\mathbb{B}(-K_X-\epsilon\Gamma)$ does not dominate $Y$, hence we can apply (1) and (2) to the pair $(X,\epsilon\Gamma)$.

There are several obstacles to generalising this approach to characteristic $p>0$. First of all, the canonical bundle formula is known to be false in general ([@Wit_CBF Example 3.5]). On the positive side, a weak form of it is known to hold for contractions whose generic fibre is globally $F$-split ([@DS; @ejiri2017weak]), so we chose to restrict ourselves to this setup. Even so, there are still issues with the first and last point.
If $X_y$ is globally $F$-split, it is known that we can find a $\Delta_{X_y}\in |-K_{X_y}|_{\mathbb{Q}}$ such that $(X_y,\Delta_{X_y})$ is a globally $F$-split Calabi-Yau variety. However, one must show that such a divisor lifts to an element of $|-K_X|_{\mathbb{Q}}$, and we are able to show that this is the case when, for some $m\geq 1$ not divisible by $p$, the rational map $\phi_{|-mK_X|}$ restricts to a morphism on $X_y$ with Stein degree not divisible by $p$. Lastly, globally $F$-split singularities behave poorly under perturbations, so this makes point (3) problematic. Our solution is to introduce *$K$-globally $F$-regular* varieties, a notion that interpolates between globally $F$-split and globally $F$-regular varieties. Roughly speaking, $X_y$ is $K$-globally $F$-regular if it is globally $F$-split and the Iitaka fibration of $-K_{X_y}$ maps $X_y$ to a variety which is globally $F$-regular. The advantage is that this class of varieties is stable under small perturbations by members of the anticanonical $\mathbb{Q}$-linear system. Under these assumptions we are able to prove the injectivity result . In [@Chang], the author concludes by reducing to the $\kappa(Y,-K_Y)=0$ case by considering $g\colon Y\to Z$, the Iitaka fibration of $-K_Y$. Over fields of positive characteristic, this contraction may have highly singular fibres. However, the singularities of the general fibre manifest themselves on the total space after a base-change by a sufficiently high power of the Frobenius. This allows us to circumvent the issue by considering where $X_e$ (resp. $Y_e$) is the normalisation of the reduction of $X\times_Z Z^e$ (resp. of $Y\times_Z Z^e$). The resulting contractions $f_e,g_e$ and $h_e$ will be (universally) homeomorphic to the original ones, but their fibres will now be normal, thus the arguments of [@Chang] apply in this case.
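The reduction just described can be pictured as follows (a sketch: the vertical arrows are the induced purely inseparable morphisms, and we take $h_e$ to be the composite $g_e\circ f_e$, matching the labels in the text):

```latex
\begin{tikzcd}
X_e \arrow[r, "f_e"] \arrow[d] \arrow[rr, bend left=20, "h_e"] & Y_e \arrow[r, "g_e"] \arrow[d] & Z^e \arrow[d, "\textup{F}^e_Z"] \\
X \arrow[r, "f"] & Y \arrow[r, "g"] & Z
\end{tikzcd}
```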
We can then conclude by comparing the canonical divisors of $X,Y$ and $Z$ with those of $X_e,Y_e$ and $Z_e$ using the correspondence between purely inseparable morphisms and foliations ([@LMMPZsoltJoe]).

## Acknowledgements {#acknowledgements .unnumbered}

We would like to thank our advisors Paolo Cascini and Jungkai Chen for their guidance and helpful suggestions throughout the project. We thank Karl Schwede, Yoshinori Gongyo, and Hiromu Tanaka for answering our questions and pointing out useful papers. The first author was supported by the Engineering and Physical Sciences Research Council \[EP/S021590/1\], the EPSRC Centre for Doctoral Training in Geometry and Number Theory (The London School of Geometry and Number Theory), University College London. The second author is supported by the National Center for Theoretical Sciences and a grant from the Ministry of Science and Technology, grant number MOST-110-2123-M-002-005. The third author is supported by the PhD student's scholarship of National Taiwan University, and by the scholarship of National Science and Technology Council for PhD students to study abroad, which he used to visit the University of Tokyo. We would also like to thank the National Centre for Theoretical Sciences Mathematics division for their support that allowed the first two authors to meet in person in Taipei.

# Conventions and notation

- All of our schemes $X$ will be irreducible, separated, essentially of finite type over a field $k$ of characteristic $p>0$, and $F$-finite. Note that for any such scheme the dualising complex is well-defined ([@SS Section 2.4]).
- A *k-variety* $X$ is a separated integral $k$-scheme of $k$-finite type. We denote by $\textup{Sing}(X)$ the locus of singular points of $X$.
- When $X$ is an integral $k$-scheme we denote by $k(X)$ its field of rational functions.
- A $\mathbb{K}$*-divisor* $D$ on a scheme $X$ is a formal finite linear combination $D=\sum_ia_iD_i$, where $D_i$ are irreducible closed codimension-one subsets of $X$ and $a_i\in\mathbb{K}$. We will take $\mathbb{K}\in \lbrace \mathbb{Z},\mathbb{Z}_{(p)},\mathbb{Q}\rbrace$. If $\mathbb{K}=\mathbb{Z}$ we refer to $D$ as an *integral divisor* or simply a *divisor*. We define the *positive part* (resp. *negative part*) of $D$ to be $D^+\coloneqq\sum_{a_i>0}a_iD_i$ (resp. $D^-\coloneqq\sum_{a_i<0}(-a_i)D_i$).
- Given a divisor $D$ on a normal variety $X$, we denote by $|D|$ the complete linear system it defines. If $V\subseteq H^0(X,D)$ is a subspace, we denote by $|V|\subseteq |D|$ the corresponding linear subsystem. If $Y\subseteq X$ is a closed subvariety, we denote by $|V|_{Y}$ the linear subsystem given by the restriction of $V$ to $Y$. The corresponding rational maps will be denoted by $\phi_{|D|},\phi_{|V|}$ and $\phi_{|V|_Y}$ respectively.
- A $\mathbb{Q}$-divisor $D$ is *$\mathbb{Q}$-Cartier* if $mD$ is Cartier for some $m$. If there exists such $m$ with $p \nmid m$, then we say $D$ is a *$\mathbb{Z}_{(p)}$-Cartier* $\mathbb{Z}_{(p)}$-divisor.
- If $D_1,D_2$ are $\mathbb{Q}$-divisors on a scheme $X$ such that $mD_i$ is integral for $i=1,2$ and $mD_1\sim mD_2$ for some positive integer $m$, then we say $D_1$ and $D_2$ are *$\mathbb{Q}$-linearly equivalent* $\mathbb{Q}$-divisors, denoted by $D_1\sim_{\mathbb{Q}}D_2$. If $p\nmid m$ then we say $D_1$ and $D_2$ are *$\mathbb{Z}_{(p)}$-linearly equivalent* $\mathbb{Z}_{(p)}$-divisors, denoted $D_1\sim_{\mathbb{Z}_{(p)}}D_2$.
- Let $f\colon X\to Y$ be a morphism of schemes and let $D$ be a divisor on $X$: we write $D\sim_{Y}0$ if $D\sim f^*M$ where $M$ is a Cartier divisor on $Y$. If $D$ is a $\mathbb{Q}$-divisor (resp. a $\mathbb{Z}_{(p)}$-divisor) we write $D\sim_{\mathbb{Q},Y}0$ (resp. $D\sim_{\mathbb{Z}_{(p)},Y}0$) if for some $m\geq 1$ (resp.
for some $m\geq 1$ such that $p\nmid m$) we have that $mD$ is integral and $mD\sim_Y0$. In particular, we have that $D$ is Cartier (resp. $\mathbb{Q}$ or $\mathbb{Z}_{(p)}$-Cartier).
- Let $D$ be a $\mathbb{Q}$-divisor on a scheme $X$: we say $D$ is *effective* ($D\geq 0$) if all of its coefficients are non-negative. We say $D$ is $\mathbb{Q}$*-effective* (resp. $\mathbb{Z}_{(p)}$*-effective*) if, for some $m\geq 1$ (resp. for some $m\geq 1$ such that $p\nmid m$) $mD$ is integral and $H^0(X,mD)\neq 0$. Given $\mathbb{Q}$-divisors $D_1,D_2$ we write $D_1\geq D_2$ if $D_1-D_2$ is effective, and we write $D_1\geq_{\mathbb{Q}}D_2$ (resp. $D_1\geq_{\mathbb{Z}_{(p)}}D_2$) if $D_1-D_2$ is $\mathbb{Q}$-effective (resp. $\mathbb{Z}_{(p)}$-effective).
- A *sub-couple* $(X,B)$ consists of an integral normal scheme $X$ and a $\mathbb{Q}$-divisor $B$. If $B\geq 0$ we say $(X,B)$ is a *couple*. A *sub-pair* is a sub-couple $(X,B)$ such that $K_X+B$ is $\mathbb{Q}$-Cartier. If $B\geq 0$ we say $(X,B)$ is a *pair*.
- Let $f\colon X\to Y$ be a morphism of $k$-schemes, where $k$ is algebraically closed. A *general fibre of f* is $X_y\coloneqq f^{-1}(y)$ where $y$ is a $k$-point belonging to a dense open subset of $Y$. We say $X_y$ is a *very general fibre* if $y$ is a $k$-point belonging to a countable intersection of dense open subsets of $Y$.
- A *contraction* is a projective morphism of schemes $f\colon X\to Y$ such that $f_*\mathcal{O}_X=\mathcal{O}_Y$.
- Let $f\colon X \rightarrow Y$ be a contraction with general fibres that are normal, $D$ a divisor on $X$ and $X_y$ a general fibre. Then $D$ is $\mathbb{Q}$-Cartier along any codimension $1$ point of $X_y$, hence we can define its restriction to $X_y$, $D_{X_y}:=D|_{X_y}$.
- Given a contraction $f \colon X \rightarrow Y$ and $D$ a divisor on $X$, if $\eta$ is the generic point of $Y$, we denote by $D_{\eta} \coloneqq D|_{X_{\eta}}$. As above, this restriction is well-defined.
When the geometric generic fibre is normal, we use an analogous notation $D_{\overline{\eta}} \coloneqq D|_{X_{\overline{\eta}}}$.
- Let $f\colon X\to Y$ be a surjective projective morphism of schemes, and let $f\colon X\xrightarrow{g} Z\xrightarrow{h} Y$ be its Stein factorisation. The degree of $h$ is called the *Stein degree of $f$* and it is denoted by $\textup{St.deg}(f)$. Note that, when $Y$ is normal and $p\nmid\textup{St.deg}(f)$, then $f$ is a split morphism, that is the natural map $\mathcal{O}_Y\to f_*\mathcal{O}_X$ is a split inclusion, via the trace map $\textup{Tr}_{X/Y}\colon f_*\mathcal{O}_X\to\mathcal{O}_Y$.
- Let $(X,B)$ be a sub-couple over $\mathbb{C}$. A *model* of $(X,B)$ is a normal, integral, separated, projective $A$-scheme of finite type $\mathcal{X}\to\mathop{\mathrm{Spec}}(A)$, where $A$ is a finitely generated $\mathbb{Z}$-algebra, together with a $\mathbb{Q}$-divisor $\mathcal{B}$ such that $(X,B)=(\mathcal{X},\mathcal{B})\times_{\mathop{\mathrm{Spec}}(A)}\mathop{\mathrm{Spec}}(\mathbb{C})$.

We refer the reader to [@Kol_SMMP] for the definitions of the various classes of singularities appearing in the Minimal Model Program.

*Remark 1*. In the remainder, we will mostly deal with normal varieties, thus we will tacitly use their $S2$ property. More precisely, when we use reflexive sheaves, we work locally over the regular locus, then extend the result to all of $X$. This is needed, for example, when applying Grothendieck duality to the Frobenius morphism.

*Remark 2*. In the sequel, sometimes we will pull-back divisors under equidimensional morphisms $f\colon X \to Y$ of normal varieties, without requiring any Cartier assumption. This operation is well-defined since, given a prime divisor $D$ on $Y$, its preimage under $f$ is a divisor on $X$.

# Preliminaries

## Iitaka dimension {#ss-Iitaka_dimension}

In this section, after recalling its definition, we collect some results on the behaviour of the Iitaka dimension.
We refer to [@Laz1] for more details. **Definition-Proposition 1** ([@Laz1 2.1.A]). *Let $X$ be a normal projective variety and $L$ a $\mathbb{Q}$-divisor on it. For every $m>0$ such that $mL$ is integral and the linear system $|mL|$ is not empty, $|mL|$ defines a rational map $\phi_{|mL|} \colon X \dashrightarrow \mathbb{P}^{N_m}$. For $m \gg 0$ and sufficiently divisible, $\dim(\phi_{|mL|}(X))$ stabilises. The *Iitaka dimension of $L$* is defined as: $$\kappa(X, L) \coloneqq \begin{cases} -\infty \quad \text{if} \; |mL|= \emptyset \; \text{for all} \; m \geq 0;\\ \max_{m\geq 1}\dim(\phi_{|mL|}(X)) \quad \text{otherwise}. \end{cases}$$* *Remark 3*. Sometimes it is useful to work with different characterisations of the Iitaka dimension. We recall one here. Let $X$ be a normal projective variety over a field $k$ and $L$ a Cartier divisor on it. Define the *section ring of $L$* as $R(X,L) \coloneqq \bigoplus_{m=0}^{\infty} H^0(X, mL)$. If $R(X,L) \neq 0$, it is an integral domain. Denote by $Q(X, L)$ its fraction field. If $\kappa(X,L) \geq 0$, then $\kappa(X,L)= \mathrm{tr.deg}_k Q(X,L)$. For more details, we refer to [@Mori 1.3]. **Lemma 1**. *Let $\varphi \colon X' \rightarrow X$ be a surjective morphism between normal projective varieties and $L$ a Cartier divisor on $X$. Then $\kappa(X', \varphi^*L) = \kappa(X, L)$.* *Proof.* Note that, as $\varphi$ is surjective, $\kappa(X,L) \leq \kappa(X', \varphi^*L)$. If $\varphi$ is a contraction, the result follows from the projection formula. By considering the Stein factorisation of $\varphi$, we can thus reduce to the case where $\varphi$ is finite. Now suppose $\varphi=\textup{F}^e$ for some $e >0$, then the statement is trivial since $\textup{F}^{e*}L=p^eL$. If $\varphi$ is purely inseparable, there exists $\psi \colon X \rightarrow X'$ such that $\varphi \circ \psi = \textup{F}^e$, for some $e\geq 0$. 
Then: $$\kappa(X,L) \leq \kappa(X', \varphi^*L) \leq \kappa(X, \psi^*\varphi^*L) = \kappa(X,L).$$ If, instead, $\varphi$ is a Galois cover, the result is proven in [@Mori Proposition 1.5]. Note that they prove it over fields of characteristic $0$, but, assuming $\varphi$ is Galois, the same proof works also over fields of positive characteristic. If $\varphi$ is separable, there exists $\psi\colon X'' \rightarrow X'$ such that $\varphi \circ \psi$ is Galois. Thus: $$\kappa(X,L) \leq \kappa(X', \varphi^*L) \leq \kappa(X'', \psi^*\varphi^*L) = \kappa(X,L).$$ In general, we can factor $\varphi$ into its separable and purely inseparable parts and conclude by the above discussion. ◻ **Lemma 2**. *Let $f \colon X \rightarrow Y$ be a projective morphism between varieties. Assume that a very general fibre $X_y$ is reduced and normal. Let $\eta$ be the generic point of $Y$ and $\overline{\eta}$ its geometric generic point. Let $L$ be a Cartier divisor on $X$. Then, $$\kappa(X_{\overline{\eta}}, L_{\overline{\eta}}) =\kappa(X_\eta, L_\eta) = \kappa({X_y}, L_{X_y}).$$ Moreover, if $\kappa(X_{\eta}, L_{\eta}) \geq 0$, the above equalities hold also for the general fibre $X_y$.* *Proof.* The first equality is a consequence of the flat base change theorem. As for the second, note that we can assume $f$ is flat without loss of generality, hence we conclude by [@Har_AG Theorem III.12.8]. Now, suppose $\kappa(X_{\eta}, L_{\eta}) \geq 0$. Let $H$ be an ample enough Cartier divisor on $Y$ such that $L+f^*H$ is $\mathbb{Q}$-effective. By the easy additivity theorem ([@fujita1977some Proposition 1] and [@BCZ Lemma 2.20]), for a general fibre $X_y$ we have: $$\kappa(X, L+ f^*H) = \kappa(X_y, L_{X_y}) + \dim(Y) \quad \text{and} \quad \kappa(X, L+f^*H) = \kappa(X_\eta, L_{\eta}) + \dim(Y).$$ Thus, $\kappa(X_y, L_{X_y})=\kappa(X_\eta, L_{\eta})$. ◻ *Remark 4*.
In the above Lemma, if $\kappa(X_\eta, L_\eta) = -\infty$, it may be false that the general fibre $X_y$ satisfies $\kappa({X_y}, L_{X_y})= - \infty$. Let $\hat{Y}$ be an abelian variety and $Y$ its dual. The Poincaré bundle $L$ on $X\coloneqq \hat{Y} \times Y\to Y$ gives a counterexample. ## Frobenius In this section, we define the different Frobenius morphisms we will use and outline their relations. **Definition 1**. Let $X$ be an $F$-finite scheme over a field of characteristic $p>0$. The *Frobenius* morphism on $X$, $\textup{F}^e_X \colon X^e \rightarrow X$, for $e \in \mathbb{N}$, is defined to be the identity on points and the $(p^{e})^{\text{th}}$-power on the structure sheaf. When the underlying scheme is clear from the context we will just write $\textup{F}^e$. Note that $X^e$ and $X$ are the same scheme, the index is just used to differentiate between the target and the source of $\textup{F}^e$. Given a morphism of $k$-schemes $\pi\colon X\to V$, we have the following commutative diagram: $$\label{e-relfrob} \begin{tikzcd} X^e \arrow[ddrr, "\pi^{e}", bend right=20, swap] \arrow[drr, "\textup{F}^{e}_{X^e/V^e}"] \arrow[drrrr, bend left=20, "\textup{F}^{e}_{X}"] & & \\ & &X_{V^e} \arrow[d, "\pi_{V^e}", swap] \arrow[rr, "(\textup{F}^{e}_{V})_{X}",swap] & & X \arrow[d, "\pi"]\\ & & V^e \arrow[rr, "\textup{F}^{e}_{V}"] & & V \end{tikzcd}$$ where $X_{V^e}$ is the fibre product $X \times_{V} V^e$ and $(\textup{F}^e_V)_X$ is the induced map. The morphism $\textup{F}^e_{X^e/V^e}$ denotes the $e^{\text{th}}$ *relative Frobenius of $X$ over $V$*. When $V=\mathop{\mathrm{Spec}}(k)$ the relative Frobenius is also called the *$k$-linear Frobenius*. *Remark 5*. Note that $\textup{F}^e \colon X^e \rightarrow X$ is not $k$-linear. However, if $k$ is perfect, it differs from the $k$-linear Frobenius only by an automorphism of $\mathop{\mathrm{Spec}}(k)$.
On the other hand, if $\mathop{\mathrm{Spec}}(k^e)\to V^e$ is a $k^e$-point, then the base-change $$\textup{F}^e_{X^e/V^e}\otimes_{V^e}k^e\colon X_{k^e}^e\to X_{k^e}$$ coincides with the $k$-linear Frobenius of $X_k \coloneqq X \times_V \mathop{\mathrm{Spec}}(k)$. ## Frobenius base change In positive characteristic, it is hard to control the singularities of the general fibre of a contraction due to the failure of generic smoothness theorems. However, after a base change with a high power of the Frobenius morphism, all the singularities of the general fibre appear on the total space. **Lemma 3** ([@Lena-Joe Lemma 2.4]). *Let $f: X \rightarrow Y$ be a contraction between normal projective varieties over a perfect field of characteristic $p>0$ and let $\overline{\eta}$ be the geometric generic point of $Y$. Consider the base change of $f$ along a power of the Frobenius morphism $\textup{F}^e_Y\colon Y^e\to Y$.* *Then, for $e \gg 0$, $((X_{Y^e,\textup{red}})^{\nu})_{\overline{\eta}^e}=(X_{\overline{\eta},\textup{red}})^\nu$.* In positive characteristic, there is a correspondence between height one purely inseparable morphisms and foliations. Thanks to this, we are able to study the behaviour of the canonical divisors under purely inseparable base changes. **Definition 2**. A purely inseparable morphism of schemes $a \colon X' \rightarrow X$ is said to be of *height one* if there exists $\alpha \colon X \rightarrow X'$ such that $a\circ \alpha = \textup{F}$. **Definition 3**. Let $X$ be a normal variety over a perfect field of characteristic $p>0$. A *foliation* on $X$ is a subsheaf of the tangent sheaf, $\mathcal{F} \subseteq T_X$, which is saturated and closed under $p$-powers. *Remark 6*. One normally requires that $\mathcal{F}$ is *involutive*, that is, closed under Lie brackets. However, this follows from closure under $p$-powers by [@Gerstenhaber]. **Proposition 2** ([@LMMPZsoltJoe Proposition 2.9]). *Let $X'$ be a normal variety over a perfect field of characteristic $p>0$.
There is a $1$-to-$1$ correspondence between foliations $\mathcal{F} \subseteq T_{X'}$ and normal varieties $X$ equipped with a finite purely inseparable morphism $X' \rightarrow X$ of height one,* *given by:* - *$X:= \mathop{\mathrm{Spec}}_{X'} \mathcal{O}_{X'}^{\mathcal{F}}$, where $\mathcal{O}_{X'}^{\mathcal{F}}\subseteq \mathcal{O}_{X'}$ is the subsheaf of $\mathcal{O}_{X'}$ that is taken to zero by all the sections of $\mathcal{F}$;* - *$\mathcal{F}\coloneqq\lbrace \partial\in T_{X'} \textup{ s.t. } \partial\mathcal{O}_{X}=0\rbrace$.* *Moreover, morphisms of degree $p^r$ correspond to foliations of rank $r$.* **Proposition 3** ([@LMMPZsoltJoe Proposition 2.10]). *Let $X' \rightarrow X$ be a purely inseparable morphism of height one between normal varieties over a perfect field of characteristic $p>0$ and let $\mathcal{F}$ be the corresponding foliation. Then $$\omega_{X'/X} \simeq (\det \mathcal{F})^{[p-1]}.$$* As a consequence of the flattening lemma [@RG Théorème 5.2.2] we have the following. **Lemma 4** ([@Lena-Joe Lemma 2.19]). *Let $f\colon X \rightarrow Y$ be a projective dominant morphism of normal varieties. Then, there is an open subset $U \subseteq Y$ with $\textup{codim}(Y \setminus U) \geq 2$ such that $X_U\coloneqq f^{-1}(U)$ is flat over $U$.* **Theorem 3** ([@LMMPZsoltJoe Theorem 3.1]). *Let $X$ be a normal variety over a perfect field $k$ of characteristic $p>0$ and let $f \colon X \rightarrow Y$ be a morphism to a normal variety over $k$. Let $a \colon Y' \rightarrow Y$ be a finite purely inseparable $k$-morphism of height one from a normal variety, let $X'$ be the normalisation of the reduction of $X \times_Y Y'$ and $f' \colon X' \rightarrow Y'$ the induced morphism. Set $\mathcal{A}$ to be the foliation induced by $a$. Then:* - *$K_{X'/X} \sim (p-1)D$ for some Weil divisor $D$ on $X'$;* - *there is a non-empty open subset $U \subseteq Y'$ and an effective divisor $C$ on $f'^{-1}(U)$ such that $C \sim -D|_{f'^{-1}(U)}$.* *Moreover, assume $X_{\overline{\eta}}$ is reduced, where $\overline{\eta}$ is the geometric generic point of $Y$, and $f$ is equidimensional.
Then:* - *$f'^*( \det \mathcal{A}) -D \sim C$ for some effective divisor $C$ on $X'$.* *Proof.* Points (i) and (ii) are just points (a) and (b) of [@LMMPZsoltJoe Theorem 3.1]. As for point (iii), first assume $X \times_Y Y'$ is reduced. Note that $f'$ is equidimensional since so is $f$. In particular, as $f$ and $f'$ are both equidimensional, we can freely replace $Y$ by $Y_0\coloneqq Y\setminus(\textup{Sing}(Y)\cup a(\textup{Sing}(Y')))$, $Y'$ by $Y_0'\coloneqq a^{-1}(Y_0)$, $X$ by $f^{-1}(Y_0)$ and $X'$ by $f'^{-1}(Y_0')$. Then point (iii) follows from point (d) of [@LMMPZsoltJoe Theorem 3.1]. If $X \times_Y Y'$ is not reduced, by Lemma 4, there exists $U \subseteq Y$ with $\textup{codim}(Y \setminus U) \geq 2$ such that $f|_{X_U} \colon X_U \rightarrow U$ is flat, where $X_U\coloneqq f^{-1}(U)$. Let $U'\coloneqq a^{-1}(U)$. By [@Wit_CBF Remark 2.5], the fibre product $X_U \times_U U'$ is reduced since $f|_{X_U}$ is flat and $X_{\overline{\eta}}$ is reduced. Let $X'_{U'} \subseteq X'$ be the normalisation of $X_U \times_U U'$. By the above discussion, we conclude $(f'^*( \det \mathcal{A}) -D)|_{X'_{U'}} \sim C|_{X'_{U'}}$ for some divisor $C$ on $X'$ such that $C|_{X'_{U'}} \geq 0$. Since $f'$ is equidimensional, $\textup{codim}(X' \setminus X'_{U'}) \geq 2$, therefore, by normality of $X'$, we can extend the above equation to all of $X'$. ◻ In the sequel, we will need to consider base changes with purely inseparable maps that are not necessarily of height one. The previous results extend to this situation by induction on the height. **Corollary 1**. *Let $f \colon X \rightarrow Y$ be an equidimensional contraction between normal projective varieties and $g\colon Y \rightarrow Z$ a morphism between normal varieties. Let $Y_e$ be the normalisation of the reduction of $Y \times_Z Z^e$, $X_e$ the normalisation of the reduction of $X \times_Y Y_e$. Assume that $X_{\overline{\eta}}$ is reduced, where $\overline{\eta}$ is the geometric generic point of $Y$.
Let $f_e \colon X_e \rightarrow Y_e$ and $g_e\colon Y_e\to Z^e$ be the induced morphisms. Then:* 1. *$K_{X_e/X} - f_e^*K_{Y_e/Y} \sim -C$ for some effective Weil divisor $C$ on $X_e$;* 2. *$K_{Y_e/Y} \sim D$ for some Weil divisor $D$ on $Y_e$ and there is a non-empty open subset $U \subseteq Z^e$ such that $-D|_{g_e^{-1}(U)}$ is effective.* *Proof.* We proceed by induction. When $e=1$, let $\mathcal{A}$ be the foliation on $Y_1$ corresponding to $Y_1\to Y$. By Proposition 3, $K_{Y_1/Y} \simeq (\det \mathcal{A})^{\left[ p-1 \right]}$ and, by Theorem 3, $$K_{X_1/X}-(p-1)f_1^*(\det \mathcal{A}) \sim -C$$ for some effective divisor $C$ on $X_1$, giving point (i). Point (ii) corresponds to points (i) and (ii) of Theorem 3. If $e>1$, consider the corresponding diagram of base changes, where $\pi_1$, $\pi_2$, $p_1$ and $p_2$ are the induced maps. By the inductive assumptions, - $K_{X_{e-1}/X} -f_{e-1}^* K_{Y_{e-1}/Y} \sim -C_1$ and $C_1\geq 0$; - $K_{X_e/X_{e-1}} - f_e^*K_{Y_e/Y_{e-1}} \sim -C_2$ and $C_2\geq 0$; - $K_{Y_{e-1}/Y} \sim D_1$ and there exists an open $U_1\subseteq Z^{e-1}$ such that $-D_1|_{g_{e-1}^{-1}(U_1)}\geq 0$; - $K_{Y_e/Y_{e-1}} \sim D_2$ and there exists an open $U_2\subseteq Z^{e}$ such that $-D_2|_{g_{e}^{-1}(U_2)}\geq 0$. Setting $C\coloneqq\pi_2^*C_1 + C_2$, $D\coloneqq p_2^*D_1 + D_2$, and $U\coloneqq U_1\cap U_2$ we get the claim. ◻ ## $F$-singularities Throughout this subsection we will denote by $(X,B)$ a sub-couple such that $K_X+B$ is a $\mathbb{Z}_{(p)}$-divisor. If $(1-p^e)(K_X+B)$ is integral for some $e\geq 1$, we will denote by $\mathcal{L}_{X,B}^{(e)}$ or $\mathcal{L}^{(e)}_{B}$ the divisorial sheaf $\mathcal{O}_X((1-p^e)(K_X+B))$. In particular, $\mathcal{L}^{(e)}=\mathcal{L}^{(e)}_X=\mathcal{O}_X((1-p^e)K_X)$. **Definition 4**. Suppose $B \geq 0$, let $e\geq 1$ be such that $(p^e-1)(K_X+B)$ is integral, and let $L$ be a divisor on $X$.
We have a natural map of $\mathcal{O}_X$-modules induced by Grothendieck duality $$T^e_{B}\colon \textup{F}^e_* \mathcal{L}^{(e)}_B\subseteq \textup{F}_*^e\mathcal{L}^{(e)} \to \mathcal{O}_X,$$ which in turn induces $$T^e_{B}(L)\colon \textup{F}^e_* \mathcal{L}^{(e)}_B\otimes_{\mathcal{O}_X}\mathcal{O}_X(L) \to \mathcal{O}_X(L).$$ The space of *Frobenius stable sections of $\mathcal{O}_X(L)$* is defined as $$S^0(X,B;L)\coloneqq\bigcap_{e>0:\, (1-p^e)(K_X+B) \textup{ is Weil }}\textup{Image}(H^0(X,T_B^e(L)))\subseteq H^0(X,\mathcal{O}_X(L)).$$ Note that $S^0(X,B;L)=\textup{Image}(H^0(X,T_B^e(L)))$ for some $e\gg 0$. **Proposition 4** ([@DS Section 2]). *We have correspondences between $\mathbb{Q}$-divisors $\Delta$ on $X$ such that $(p^e-1)(K_X+\Delta)$ is integral and nonzero $\mathcal{O}_X$-linear maps $\phi\colon \textup{F}^e_*\mathcal{L} \to k(X)$, where $\mathcal{L}=\mathcal{O}_X((1-p^e)(K_X+\Delta))$; effective divisors correspond to maps with image in $\mathcal{O}_X$, the correspondences are bijections, and two maps which agree up to multiplication by a unit of $H^0(X, \mathcal{O}_X)$ are identified.* *Sketch of proof.* We outline the main ideas for the sake of completeness. When $\Delta\geq 0$ we set $\mathcal{L}\coloneqq \mathcal{O}_X((1-p^e)(K_X+\Delta))$ and $\phi\coloneqq T_\Delta^e$. Conversely, given $\phi$, by Grothendieck duality we have $$\begin{split} \phi\in \textup{Hom}_{\mathcal{O}_X} (\textup{F}^e_*\mathcal{L},\mathcal{O}_X)&\simeq \textup{Hom}_{\mathcal{O}_X} (\textup{F}^e_*(\mathcal{L}(p^eK_X)),\mathcal{O}_X(K_X))\\ &\simeq \textup{F}_*^e \textup{Hom}_{\mathcal{O}_X} (\mathcal{L}(p^eK_X),\mathcal{O}_X(K_X))\\ &\simeq \textup{F}_*^e H^0(X,\mathcal{L}^{-1}((1-p^e)K_X)). \end{split}$$ Hence we can identify $\phi$ with an element $D_{\phi}\in H^0(X,\mathcal{L}^{-1}((1-p^e)K_X))$, and we can set $\Delta\coloneqq D_{\phi}/(p^e-1)$. Note that if we change $\phi$ by multiplication by a unit, we obtain the same $\Delta$. It is easy to check that the two constructions are mutually inverse.
When $\Delta=\Delta^+-\Delta^-$ is not effective, one can see, by arguing locally, that there exists a non-zero $\mathcal{O}_X$-linear map $\phi\colon \textup{F}^e_*\mathcal{O}_X((1-p^e)(K_X+\Delta))\to\mathcal{O}_X(E)$ for some effective Weil divisor $E\geq (p^e-1)\Delta^-$. Indeed, working locally and letting $s\in\mathcal{O}_X((1-p^e)(K_X+\Delta^+))$ and $s/f\in\mathcal{O}_X((1-p^e)(K_X+\Delta))$, for $f$ a regular function, we have $$\phi\left(\textup{F}^e_*\left(\dfrac{s}{f}\right)\right)=\dfrac{T_{\Delta^+}^e(\textup{F}_*^e(f^{p^e-1}s))}{f}.$$ Conversely, given $\phi$ as in the hypothesis, a similar local computation shows that $\textup{Image}(\phi)\subseteq\mathcal{O}_X(E)$, for some effective Weil divisor $E$. Then the same argument as in the effective case yields the required divisor $\Delta$ (see [@DS Subsection 2.1 and Lemma 2.3]). ◻ **Definition 5**. We say $(X,B)$ is *globally sub-F-split* (GsFS) if for all $e\geq 1$ sufficiently divisible, letting $\phi_e\colon \textup{F}^e_* \mathcal{L}^{(e)}_B \to k(X)$ be the map associated to $B$ by Proposition 4, there exists a map of $\mathcal{O}_X$-modules $\sigma_e\colon \mathcal{O}_X\to \textup{F}^e_* \mathcal{L}^{(e)}_B$ such that $\phi_e\circ \sigma_e=\textup{id}_{\mathcal{O}_X}$. If $B\geq 0$, we say $(X,B)$ is *globally F-split* (GFS). **Definition 6**. We say $(X,B)$ is *globally F-regular* (GFR) if $B\geq 0$ and for every effective Weil divisor $E$ and all $e\geq 1$ sufficiently divisible, the $\mathcal{O}_X$-linear map $$\textup{F}^e_*\mathcal{L}^{(e)}_{B+\frac{E}{p^e-1}}\hookrightarrow\textup{F}^e_*\mathcal{L}^{(e)}_B\xrightarrow{T_B^e}\mathcal{O}_X$$ admits a splitting $$\textup{id}_{\mathcal{O}_X}\colon \mathcal{O}_X\xrightarrow{\sigma_{E,e}}\textup{F}^e_*\mathcal{L}^{(e)}_{B+\frac{E}{p^e-1}}\to\mathcal{O}_X.$$ Globally $F$-split and globally $F$-regular pairs should be thought of as pairs of log Calabi-Yau type, resp. log Fano type, with arithmetically well-behaved Frobenius. This is made somewhat more precise by the next statements.
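Before turning to those statements, a classical example may help fix ideas; the following sketch is standard in the literature (it is not taken from the surrounding argument) and uses only the definitions just given.

```latex
% A standard illustration of global F-splitting (added only for
% orientation; this example is classical and not part of the
% surrounding argument).
\begin{example}
Let $E$ be an elliptic curve over an algebraically closed field $k$ of
characteristic $p>0$ and take $B=0$. Since $K_E \sim 0$, we have
$\mathcal{L}^{(e)} = \mathcal{O}_E((1-p^e)K_E) \simeq \mathcal{O}_E$, and
$(E,0)$ is GFS if and only if the trace map
$T^e \colon \textup{F}^e_* \mathcal{O}_E \to \mathcal{O}_E$ is surjective
on global sections. Dually, this holds if and only if Frobenius acts
bijectively on $H^1(E,\mathcal{O}_E)$, i.e.\ if and only if the Hasse
invariant of $E$ is nonzero. Hence $(E,0)$ is GFS precisely when $E$ is
ordinary, while no supersingular elliptic curve is globally $F$-split.
\end{example}
```

This matches the heuristic above: for a Calabi-Yau curve, arithmetically well-behaved Frobenius is exactly ordinarity.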
**Theorem 4** ([@SS Theorem 5.1]). *Let $(X,B)$ be a klt pair over $\mathbb{C}$ such that $-K_X-B$ is ample. Then $(X,B)$ has open GFR type, that is for every model $(\mathcal{X},\mathcal{B})\to \mathop{\mathrm{Spec}}(A)$ of $(X,B)$ the set of primes $\mathfrak{p}\subset A$ such that $(\mathcal{X}_{\mathfrak{p}},\mathcal{B}_{\mathfrak{p}})$ is GFR is open and dense in $\mathop{\mathrm{Spec}}(A)$.* A similar statement is expected to hold for Calabi-Yau pairs. **Conjecture 2** ([@HaWa Problem 5.1.2]). *Let $(X,B)$ be a log canonical pair over $\mathbb{C}$ such that $-K_X-B\sim_{\mathbb{Q}}0$. Then $(X,B)$ has dense GFS type, that is for every model $(\mathcal{X},\mathcal{B})\to \mathop{\mathrm{Spec}}(A)$ of $(X,B)$ the set of primes $\mathfrak{p}\subset A$ such that $(\mathcal{X}_{\mathfrak{p}},\mathcal{B}_{\mathfrak{p}})$ is GFS is dense in $\mathop{\mathrm{Spec}}(A)$.* The next lemma shows that the class of GFR couples is stable under small perturbations of the boundary. **Lemma 5** ([@SS Corollary 3.10, Remark 3.11]). *Let $(X,B)$ be a GFR projective couple where $B$ is a $\mathbb{Z}_{(p)}$-divisor and let $D\geq 0$ be a $\mathbb{Q}$-divisor. Then $(X,B+\epsilon D)$ is GFR for all $0 \leq \epsilon\ll 1$ such that $B+\epsilon D$ is a $\mathbb{Z}_{(p)}$-divisor.* **Lemma 6** ([@ejiri2017weak Example 3.4]). *If $B\geq 0$, then $(X, B)$ is GFS if and only if $S^0(X, B; \mathcal{O}_X)= H^0(X, \mathcal{O}_X)$.* **Lemma 7** ([@SS Proposition 3.8.(i)]). *Let $(X,B)$ be a couple such that $B$ is a $\mathbb{Z}_{(p)}$-divisor. Then $(X,B)$ is GFR if and only if for all $\mathbb{Q}$-divisors $D\geq 0$ the couple $(X,B+D/(p^e-1))$ is GFS for all $e\gg 1$.* ## Calabi-Yau deformations of globally $F$-split varieties In this subsection we will study how GFS singularities behave in families. *Remark 7*. Let $(X,B)$ be a sub-couple over a perfect field $k$, such that $K_X+B$ is a $\mathbb{Z}_{(p)}$-divisor.
Then all the different classes of $F$-singularities can equivalently be defined by replacing the absolute Frobenius $\textup{F}_X^e$ by the $k$-linear Frobenius $\textup{F}^e_{X^e/k^e}$, as the two differ by the automorphism $\textup{F}^e_{k}$. **Lemma 8**. *Let $k$ be a perfect field, and let $R$ be a smooth local $k$-algebra, essentially of finite type, with fraction field $K$ and residue field $k$. Let $\pi\colon \left( X,B=\sum_i a_iB_i\right)\to \mathop{\mathrm{Spec}}R$ be a pair over $R$, where $\pi$ is a flat contraction with geometrically normal fibres and $\mathrm{Supp}(B_i)\to\mathop{\mathrm{Spec}}R$ is flat for all $i$. Assume $(1-p^e)(K_X+B)\sim 0$ and that $(X_{\overline{k}},B_{\overline{k}})$ is GFS. Then $(X_{\overline{K}},B_{\overline{K}})$ is GFS too.* *Proof.* Let $g_e\colon \mathop{\mathrm{Spec}}\overline{k}^e\to\mathop{\mathrm{Spec}}R^e$, and consider the corresponding base-change of the leftmost triangle in $$\begin{tikzcd} X^e_{\overline{k}^e}\arrow[rr]\arrow[d,"\textup{F}^e_{X^e_{\overline{k}^e}/\overline{k}^e}",swap] & & X^e\arrow[d,"\textup{F}^e_{X^e/R^e}"]\\ X_{\overline{k}^e} \arrow[rr,"q_e"]\arrow[d] & & X_{R^e}\arrow[d]\\ \mathop{\mathrm{Spec}}\overline{k}^e\arrow[rr,"g_e"] & & \mathop{\mathrm{Spec}}R^e. \end{tikzcd}$$ In what follows we will freely and implicitly replace $X$ with the complement of a closed subset having codimension and relative codimension $\geq 2$, since all the sheaves we will be dealing with are reflexive. In particular, as $\pi$ has geometrically normal fibres, we may actually assume that $\pi$ is smooth.
By [@PSZ Lemma 2.18] the trace map of the relative Frobenius is compatible with base-change, that is we have a natural commutative diagram $$\begin{tikzcd} \textup{F}^e_{X^e_{\overline{k}^e}/\overline{k}^e,*}\omega_{X^e_{\overline{k}^e}/\overline{k}^e} \arrow[d] & & q_e^*\textup{F}^e_{X^e/R^e,*}\omega_{X^e/R^e} \arrow[d]\arrow[ll]\\ \omega_{X_{\overline{k}^e}/\overline{k}^e} & & q_e^*\omega_{X_{R^e}/R^e}\arrow[ll] \end{tikzcd}$$ where the horizontal arrows are isomorphisms. Note that we have $$(\textup{F}^e_{X^e/R^e})^*\omega_{X_{R^e}/R^e}=\omega_{X^e/R^e}^{p^e}$$ by [@ejiri2019albanese Lemma 3.1]. After twisting the above commutative square by $\omega_{X_{R^e}/R^e}^{-1}$ and composing with the inclusion $\mathcal{L}_{X^e,B^e}^{(e)} \subseteq \mathcal{L}_{X^e}^{(e)}$, we obtain $$\begin{tikzcd} \textup{F}^e_{X^e_{\overline{k}^e}/\overline{k}^e,*}\mathcal{L}^{(e)}_{X^e_{\overline{k}^e},B^e_{\overline{k}^e}} \arrow[d] & & q_e^*\textup{F}^e_{X^e/R^e,*}\mathcal{L}_{X^e,B^e}^{(e)} \arrow[d]\arrow[ll]\\ \mathcal{O}_{X_{\overline{k}^e}} & & q_e^*\mathcal{O}_{X_{R^e}}.\arrow[ll] \end{tikzcd}$$ Hence, by taking global sections we obtain $$\begin{tikzcd} H^0(X_{\overline{k}^e},\mathcal{L}^{(e)}_{X^e_{\overline{k}^e},B^e_{\overline{k}^e}}) \arrow[d] & & H^0(X^e,\mathcal{L}_{X^e,B^e}^{(e)}) \arrow[d]\arrow[ll,"q_e^*"]\\ H^0(X_{\overline{k}^e},\mathcal{O}_{X_{\overline{k}^e}}) & & H^0(X_{R^e},\mathcal{O}_{X_{R^e}}).\arrow[ll,"q_e^*"] \end{tikzcd}$$ Now, as $(1-p^e)(K_{X^e/R^e}+B^e)\sim 0$ we have a unique (up to $R^*$) nonzero section $s\in H^0(X_{R^e},\mathcal{O}_{X^e}((1-p^e)(K_{X^e/R^e}+B^e)))$. As $(X_{\overline{k}},B_{\overline{k}})$ is GFS, by Remark 7 we have that $s_{\overline{k}^e}\coloneqq q_e^*(s)$ gets mapped to a unit in $\overline{k}^e$ by the (twisted) trace map of $\textup{F}^e_{X^e_{\overline{k}^e}/\overline{k}^e}$. Hence $s$ gets mapped to a unit in $R^e$ by the (twisted) trace map of $\textup{F}^e_{X^e/R^e}$.
As the trace map is compatible with base-change, by restricting to the geometric generic fibre the same argument yields that $s_{\overline{K}^e}$ gets mapped to a unit in $\overline{K}^e$ by the (twisted) trace map of $\textup{F}^e_{X^e_{\overline{K}^e}/\overline{K}^e}$. Thus, Remark 7 gives the conclusion. ◻ # Stein-separable morphisms and canonical bundle formulae By combining the results of [@ejiri2017weak; @DS] with [@ST] one can show that morphisms with $p$-prime Stein degree and GFS general fibre satisfy a canonical bundle formula. **Definition-Proposition 5**. *Let $g\colon X\to Z$ be a contraction of normal schemes and let $B\geq 0$ be a $\mathbb{Q}$-divisor on $X$ such that $(1-p^e)(K_X+B)\sim_Z 0$ for some $e\geq 1$ and $(X_{\overline{\zeta}},B_{\overline{\zeta}})$ is globally $F$-split, where $\zeta \in Z$ is the generic point. Then there exists a canonically defined effective $\mathbb{Q}$-divisor $B^Z$ on $Z$ such that* 1. *$(1-p^e)(K_X+B)\sim g^*((1-p^e)(K_Z+B^Z))$;* 2. *$(X,B)$ is GFS if and only if $(Z,B^Z)$ is GFS;* 3. *if $\Lambda\geq 0$ is a $\mathbb{Q}$-Cartier $\mathbb{Q}$-divisor on $Z$ such that $K_Z+B^Z+\Lambda$ is $\mathbb{Z}_{(p)}$-Cartier, then $(B+g^*\Lambda)^Z=B^Z+\Lambda$.* *Proof.* Point (a) follows by [@ejiri2017weak Theorem 3.17] (see also [@DS Theorem 5.2]). This boils down to the following: write $(1-p^e)(K_X+B)\sim g^*M$ for some Cartier divisor $M$ on $Z$, so that we have $T^e_{B}\colon \textup{F}_*^e\mathcal{O}_X(g^*M)\to\mathcal{O}_X$. By pushing forward via $g$ and using the projection formula we obtain $\textup{F}_*^e\mathcal{O}_Z(M)\xrightarrow{\phi}\mathcal{O}_Z$. As $(X_{\overline{\zeta}},B_{\overline{\zeta}})$ is GFS we have $\phi\neq 0$ ([@ejiri2017weak Observation 3.19]), and Proposition 4 yields a canonically defined $\mathbb{Q}$-divisor $B^Z\geq 0$ such that $M\sim(1-p^e)(K_Z+B^Z)$ and $\phi=T^e_{B^Z}$.
As for point (b), Lemma 6 implies it is enough to show $$S^0(X, B; \mathcal{O}_X) = S^0(Z, B^Z; \mathcal{O}_Z).$$ By the construction in point (a) we have a commutative diagram $$\label{e-pushforward_Fsplitting_contraction} \begin{tikzcd} g_*\textup{F}_*^e\mathcal{L}^{(e)}_{X,B} \arrow[r,"g_*T_{B}^e"] & g_*\mathcal{O}_X\\ \textup{F}_*^e\mathcal{L}^{(e)}_{Z,B^Z}\arrow[u] \arrow[r,"T_{{B}^Z}^e"] & \mathcal{O}_Z,\arrow[u] \end{tikzcd}$$ where the vertical arrows are isomorphisms. Taking global sections for $e\geq 1$ large enough yields $S^0(X,B;\mathcal{O}_X)=S^0(Z,B^Z;\mathcal{O}_Z)$, thus we conclude by Lemma 6. Point (c) follows by the same argument as in (a), noting that the above diagram can be completed to $$\begin{tikzcd} g_*\textup{F}_*^e\mathcal{L}^{(e)}_{X,B+g^*\Lambda}\arrow[rr,bend left=20,"T_{B+g^*\Lambda}^e"]\arrow[r] & g_*\textup{F}_*^e\mathcal{L}^{(e)}_{X,B} \arrow[r,"T_{B}^e",swap] & g_*\mathcal{O}_X\\ \textup{F}_*^e\mathcal{L}^{(e)}_{Z,B^Z+\Lambda}\arrow[r]\arrow[rr,bend right=20,"T_{B^Z+\Lambda}^e",swap]\arrow[u] & \textup{F}_*^e\mathcal{L}^{(e)}_{Z,B^Z}\arrow[u] \arrow[r,"T_{B^Z}^e"] & \mathcal{O}_Z.\arrow[u] \end{tikzcd}$$ Observe that, since we are twisting by a divisor on $Z$, we still have $T^e_{B+g^*\Lambda,\overline{\zeta}}\neq 0 \neq T^e_{B^Z+\Lambda,\overline{\zeta}}$. ◻ We are now ready to define KGFR pairs. **Definition 7**. A projective couple $(X,B)$ is said to be *$K$-globally F-regular* (KGFR) if - $-K_X-B$ is semiample with induced contraction $f\colon X\to Y$; - the geometric generic fibre of $f$, $(X_{\overline{\eta}},B_{\overline{\eta}})$, is GFS; - $K_X+B\sim_{\mathbb{Z}_{(p)},Y}0$; - the pair $(Y,B^Y)$ induced by Definition-Proposition 5 is GFR. Note that $(X,B)$ is GFS *a posteriori* thanks to Definition-Proposition 5 (b). **Definition-Proposition 6**. *Let $h\colon Z\to Y$ be a separable finite morphism of integral normal $k$-schemes such that $p\nmid \deg(h)$, and let $B\geq 0$ be a $\mathbb{Q}$-divisor on $Z$ such that $(Z,B)$ is globally $F$-split and $(1-p^e)(K_Z+B)\sim_Y 0$ for some $e\geq 1$.
Then there exists a canonically defined effective $\mathbb{Q}$-divisor $B^Y$ on $Y$ such that* 1. *$(1-p^e)(K_Z+B)\sim h^*((1-p^e)(K_Y+B^Y))$;* 2. *$(Y,B^Y)$ is GFS;* 3. *if $\Lambda\geq 0$ is a $\mathbb{Q}$-Cartier $\mathbb{Q}$-divisor on $Y$ such that $K_Y+B^Y+\Lambda$ is $\mathbb{Z}_{(p)}$-Cartier and $(Y,B^Y+\Lambda)$ is GFS, then $(Z,B+h^*\Lambda)$ is GFS too, and $(B+h^*\Lambda)^Y=B^Y+\Lambda$;* 4. *If $(Z,B)$ is GFR then $(Y,B^Y)$ is GFR.* *Proof.* Let $M$ be a Cartier divisor such that $(1-p^e)(K_Z+B)\sim h^*(M)$, let $\mathcal{L}\coloneqq\mathcal{O}_Y(M)$ and $d\coloneqq\deg(h)$. By [@ST Corollary 4.2] we have the following commutative diagram $$\label{e-st_finitemor} \begin{tikzcd} h_*\textup{F}^e_*h^*\mathcal{L}\arrow[rrr,"h_*\phi_Z"] \arrow[dd,"\textup{F}^e_*\left(\frac{\textup{Tr}_{Z/Y}}{d}\otimes\textup{id}_{\mathcal{L}}\right)",bend left=15] & & & h_*\mathcal{O}_Z \arrow[dd,"\frac{\textup{Tr}_{Z/Y}}{d}",bend left=15] \\ &&&\\ \textup{F}^e_*\mathcal{L}\arrow[rrr,"\phi_Y"] \arrow[uu,"\textup{F}^e_*(h^\sharp\otimes\textup{id}_{\mathcal{L}})",bend left=15] & & & \mathcal{O}_Y, \arrow[uu,"h^\sharp",bend left=15]\\ \end{tikzcd}$$ where $\phi_Z$ is defined by $B$ using the correspondence in Proposition 4 and $\phi_Y$ is defined by composition with the natural maps. Note that, for each column, going up and then down yields the identity. By taking global sections and using GFS-ness of $(Z,B)$ we obtain $1\in\textup{Image}(H^0(Y,\phi_Y))$. In particular $\phi_Y\neq 0$, thus by Proposition 4 we obtain an effective $\mathbb{Q}$-divisor $B^Y$ such that $(Y,B^Y)$ is GFS too and $(1-p^e)(K_Z+B)\sim h^*((1-p^e)(K_Y+B^Y))$.
To show point (c), denote by $\lambda\colon \mathcal{L}((1-p^e)\Lambda)\to\mathcal{L}$ the natural map, and observe that the above diagram can be completed to $$\begin{tikzcd} h_*\textup{F}^e_*h^*\mathcal{L}((1-p^e)\Lambda) \arrow[rrrrrr,"\psi_Z",bend left=20] \arrow[rrr,"h_*\textup{F}^e_*h^*\lambda"] \arrow[dd,"\textup{F}^e_*\left(\frac{\textup{Tr}_{Z/Y}}{d}\otimes\textup{id}_{\mathcal{L}((1-p^e)\Lambda)}\right)",bend left=15] & & & h_*\textup{F}^e_*h^*\mathcal{L}\arrow[rrr,"h_*\phi_Z"] \arrow[dd,"\textup{F}^e_*\left(\frac{\textup{Tr}_{Z/Y}}{d}\otimes\textup{id}_{\mathcal{L}}\right)",bend left=15] & & & h_*\mathcal{O}_Z \arrow[dd,"\frac{\textup{Tr}_{Z/Y}}{d}",bend left=15] \\ & & & \\ \textup{F}^e_*\mathcal{L}((1-p^e)\Lambda)\arrow[rrrrrr,"\psi_Y",bend right=20] \arrow[rrr,"\textup{F}^e_*\lambda"] \arrow[uu,"\textup{F}^e_*(h^\sharp\otimes\textup{id}_{\mathcal{L}((1-p^e)\Lambda)})",bend left=15] & & &\textup{F}^e_*\mathcal{L}\arrow[rrr,"\phi_Y"] \arrow[uu,"\textup{F}^e_*(h^\sharp\otimes\textup{id}_{\mathcal{L}})",bend left=15] & & & \mathcal{O}_Y. \arrow[uu,"h^\sharp",bend left=15]\\ \end{tikzcd}$$ As $(Y,B^Y+\Lambda)$ is GFS, we have $1\in \textup{Image}(H^0(Y,\psi_Y))$, hence $1\in\textup{Image}(H^0(Z,\psi_Z))$. By Proposition 4 we see that the maps $$\textup{F}^e_*h^*\mathcal{L}((1-p^e)\Lambda)\xrightarrow{\textup{F}^e_* h^*\lambda}\textup{F}^e_* h^*\mathcal{L}\xrightarrow{\phi_Z}\mathcal{O}_Z\hspace{3mm}\textup{and}\hspace{3mm}\textup{F}^e_*\mathcal{L}((1-p^e)\Lambda)\xrightarrow{\textup{F}^e_*\lambda}\textup{F}^e_* \mathcal{L}\xrightarrow{\phi_Y}\mathcal{O}_Y$$ correspond to the divisors $B+h^*\Lambda$ and $B^Y+\Lambda$, respectively. To show point (d) let $D\geq 0$ be a divisor on $Y$. By Lemma 7 it is enough to show that $(Y,B^Y+D/(p^e-1))$ is GFS whenever $e$ is large enough. As $(Z,B)$ is GFR, we have that $(Z,B+h^*\left(D/(p^e-1)\right))$ is GFS provided $e\gg 0$, again by Lemma 7.
Arguing as in point (a), we can push the splitting forward to $Y$ via the trace map of $h$ $$\begin{tikzcd} h_*\mathcal{O}_Z\arrow[r]\arrow[dd,"\frac{\textup{Tr}_{Z/Y}}{d}",bend left=15] & h_*\textup{F}^{e'}_*\mathcal{L}^{(e')}_{B+h^*\left(D/(p^e-1)\right)} \arrow[rrr,"h_*T_{B+h^*\left(D/(p^e-1)\right)}"] \arrow[dd,"\textup{F}^e_*\left(\frac{\textup{Tr}_{Z/Y}}{d}\otimes\textup{id}\right)",bend left=15] & & & h_*\mathcal{O}_Z \arrow[dd,"\frac{\textup{Tr}_{Z/Y}}{d}",bend left=15]\\ & & & \\ \mathcal{O}_Y\arrow[r]\arrow[uu,"h^\sharp",bend left=15] & \textup{F}^{e'}_*\mathcal{L}^{(e')}_{B^Y+D/(p^e-1)} \arrow[rrr,"T_{B^Y+D/(p^e-1)}"] \arrow[uu,"\textup{F}^e_*(h^\sharp\otimes\textup{id})",bend left=15] & & & \mathcal{O}_Y, \arrow[uu,"h^\sharp",bend left=15] \end{tikzcd}$$ hence $(Y,B^Y+D/(p^e-1))$ is GFS. ◻ *Remark 8*. It might be tempting to try and extend this result to the case of a split finite morphism $h$ (i.e. such that the natural map $\mathcal{O}_Y\to h_*\mathcal{O}_Z$ is split). However, this can easily be seen to fail by taking $Z=Y=\mathbb{P}^1$ and $h=\textup{F}$. As explained in [@ST] a necessary and sufficient condition for $\psi_Z$ to descend to a nonzero $\psi_Y$ is that the splitting of $\mathcal{O}_Y\to h_*\mathcal{O}_Z$ is given by the trace map. In this case it then follows that $B=h^*B^Y+\mathrm{Ram}(h)$, where $\mathrm{Ram}(h)$ denotes the ramification divisor of $h$. **Definition-Proposition 7**. *Let $f\colon X \to Y$ be a morphism of integral normal schemes such that $p\nmid\textup{St.deg}(f)$, and let $B$ be an effective $\mathbb{Q}$-divisor on $X$ such that $(X,B)$ is globally $F$-split and $(1-p^e)(K_X + B) \sim_Y 0$ for some $e\geq 1$. Then there exists a canonically determined effective $\mathbb{Q}$-divisor $B^Y$ on $Y$ such that* 1. *$(1-p^e)(K_X + B) \sim f^*((1-p^e)(K_Y + B^Y))$;* 2. *$(Y,B^Y)$ is GFS;* 3.
*if $\Lambda\geq 0$ is a $\mathbb{Q}$-Cartier $\mathbb{Q}$-divisor on $Y$ such that $K_Y+B^Y+\Lambda$ is $\mathbb{Z}_{(p)}$-Cartier, then $(B+f^*\Lambda)^Y=B^Y+\Lambda$;* 4. *$(X, B+f^*\Lambda)$ is GFS if and only if $(Y,B^Y+\Lambda)$ is GFS.* 5. *$(Y,B^Y)$ is GFR whenever $(X,B)$ is KGFR.* *Proof.* Follows immediately from applying Definition-Proposition 5 and Definition-Proposition 6 to the Stein factorisation $f\colon X\xrightarrow{g}Z\xrightarrow{h}Y$. ◻ # F-complements Complements were first introduced by Shokurov in [@shokurov19933]: given a pair $(X,B)$ a *complement* is a $\mathbb{Q}$-divisor $\Gamma\geq 0$ such that $(X,B+\Gamma)$ is lc and $K_X+B+\Gamma\sim_{\mathbb{Q}}0$. In this section we introduce an analogous notion for $F$-singularities in the relative setting. **Definition 8**. Let $f\colon (X,B)\to Y$ be a contraction of normal quasi-projective varieties, where $B=B^+-B^-$ is a $\mathbb{Q}$-divisor such that $\mathrm{Supp}(B^-)$ does not dominate $Y$, and such that the geometric generic fibre $(X_{\overline{\eta}},B_{\overline{\eta}})$ is globally $F$-split. Let $L$ be a $\mathbb{Q}$-effective $\mathbb{Q}$-divisor on $X$. We say $L$ *admits an $F$-complement for $(X/Y,B)$* if there exists $\Lambda\in |L|_{\mathbb{Q}}$ such that $(1-p^e)(K_X+B+\Lambda)\sim_Y 0$ for some $e\geq 1$, and $(X_{\overline{\eta}},B_{\overline{\eta}}+\Lambda_{\overline{\eta}})$ is globally $F$-split. In this case we say $\Lambda$ is an *F-complement for $(X/Y,B)$*. When $X$ is a $k$-variety and $Y=\mathop{\mathrm{Spec}}(k)$ we just refer to $\Lambda$ as an $F$-complement for $(X,B)$. By results of Schwede and Smith we have that quasi-projective GFS couples admit $F$-complements. **Theorem 5** ([@SS Theorem 4.3.(ii)]). *Let $k$ be an $F$-finite field and let $(X, B)$ be a globally $F$-split couple, with $X$ a quasi-projective normal variety over $k$. Then there exists an F-complement $\Gamma$ for $(X,B)$.* We give a sufficient condition for the existence of $F$-complements on contractions with KGFR fibres. **Theorem 6**.
*Let $f \colon X \to Y$ be a contraction of normal quasi-projective varieties with general fibre $X_y$, and let $B$ be a $\mathbb{Q}$-divisor on $X$ such that $\mathrm{Supp}(B^-)$ does not dominate $Y$ and $(X_y,B_{X_y})$ is KGFR. Let $D$ be a $\mathbb{Q}$-Cartier $\mathbb{Q}$-divisor on $Y$, set $L\coloneqq-K_X-B-f^*D$, and assume there is $m\in\mathbb{N}\setminus p\mathbb{N}$ such that $mL$ is Cartier and a linear system $|V|\subseteq |mL|$ such that $\phi_{|V|_{X_y}}$ is a morphism and $p\nmid \textup{St.deg}(\phi_{|V|_{X_y}})$. Then $L$ admits an $F$-complement for $(X/Y,B)$.* Before we give a proof, we need the following result. **Proposition 8**. *Let $G$ be a non-normal projective $k$-variety, let $A$ be an ample $\mathbb{Q}$-divisor on $G$, let $\pi\colon \overline{G}\to G$ be the normalisation morphism and let $\overline{\Delta}\geq 0$ be a $\mathbb{Q}$-divisor on $\overline{G}$ such that* - *$(\overline{G},\overline{\Delta})$ is a GFR pair;* - *$-K_{\overline{G}}-\overline{\Delta}\sim_{\mathbb{Q}}\pi^*A.$* *Then there exists an $F$-complement $\overline{\Gamma}$ for $(\overline{G},\overline{\Delta})$ such that $\overline{\Gamma}=\pi^*\Gamma$ for some $\Gamma\in|A|_{\mathbb{Q}}$.* *Proof.* Let $m\geq 1$ be divisible enough, and let $L\coloneqq mA$ and $\overline{L}\coloneqq m(-K_{\overline{G}}-\overline{\Delta})\sim\pi^*L$. Denote by $\overline{\mathcal{C}}\subset \mathcal{O}_{\overline{G}}$ and $\mathcal{C}\subset\mathcal{O}_G$ the conductor ideals, and by $\overline{C}\subset\overline{G}$ and $C\subset G$ the respective closed subschemes, so that we have an induced finite morphism $\pi|_{\overline{C}}\colon \overline{C}\to C$.
We then have the following morphism of short exact sequences ([@Reid 2.1]) for all $l\geq 1$: $$\label{e-normalisationses} \begin{tikzcd} 0 \arrow[r] & \mathcal{O}_G(lL) \otimes_{\mathcal{O}_G}\mathcal{C}\arrow[r] \arrow[d,equal] & \mathcal{O}_G(lL) \arrow[r] \arrow[d,hook] & \mathcal{O}_C(lL)\arrow[r] \arrow[d,hook] & 0\\ 0 \arrow[r] & \pi_*(\mathcal{O}_{\overline{G}}(l\overline{L})\otimes_{\mathcal{O}_{\overline{G}}}\overline{\mathcal{C}}) \arrow[r] & \pi_*\mathcal{O}_{\overline{G}}(l\overline{L})\arrow[r] & (\pi|_{\overline{C}})_*\mathcal{O}_{\overline{C}}(l\overline{L})\arrow[r] & 0. \end{tikzcd}$$ Now let $R$ be an effective Cartier divisor on $\overline{G}$ such that $\mathcal{O}_{\overline{G}}(-R)\subseteq\overline{\mathcal{C}}$. For $l\gg 0$ we have that $(\overline{G},\overline{\Delta}+R/l)$ is still GFR by Lemma 5, thus we have an $F$-complement $\overline{\Xi}$ for $(\overline{G},\overline{\Delta}+R/l)$ by Theorem 5. In particular, $\overline{\Gamma}\coloneqq\overline{\Xi}+R/l$ is an $F$-complement for $(\overline{G},\overline{\Delta})$. We now need to show that $\overline{\Gamma}$ descends to $G$. After multiplying by $lmn$ for some $n \geq 1$ to clear denominators, we obtain that $lmn\overline{\Gamma}$ is the divisor of a section $\overline{\gamma}\in H^0(\overline{G},\mathcal{O}_{\overline{G}}(ln\overline{L})\otimes_{\mathcal{O}_{\overline{G}}}\overline{\mathcal{C}})$, and \eqref{e-normalisationses} shows that $\overline{\gamma}=\pi^*\gamma$ for some $\gamma\in H^0(G,\mathcal{O}_G(lnL)\otimes_{\mathcal{O}_G}\mathcal{C})$. We conclude by letting $\Gamma\coloneqq (\gamma=0)/lmn$. ◻ *Proof of Theorem 6.* Up to replacing $m$ with a multiple (cf. [@PST Exercise 3.5]) we may assume $m=p^e-1$ for some $e\geq 1$. Throughout the rest of the proof we will always freely and implicitly replace $m$ with a suitable multiple (equivalently, replace $e$ with a suitable multiple).
Consider the following diagram $$\label{e-restrictionmorphism} \begin{tikzcd} & \overline{G}\arrow[rd,"\pi"] &\\ X_y \arrow[dd,hook]\arrow[ur,"\overline{\varphi}"]\arrow[rr,"\varphi"] & & G\arrow[dd,hook]\arrow[dr,hook] & \\ & & & \mathbb{P}V\\ X\arrow[rr,dashed,"\Phi"] & & W\arrow[ur,hook] & \\ \end{tikzcd}$$ where notation is as follows: $\Phi\coloneqq \phi_{|V|}$, $\varphi\coloneqq\phi_{|V|_{X_y}}$, $\pi$ is the normalisation morphism, and $\overline{\varphi}$ is the naturally induced morphism. Let $A\coloneqq\mathcal{O}_W(1)$ and $A_G\coloneqq\mathcal{O}_{G}(1)$. Then yields an effective $\mathbb{Q}$-divisor $(B_{X_y})^{\overline{G}}$ on $\overline{G}$, such that $(\overline{G},(B_{X_y})^{\overline{G}})$ is GFR and $-m(K_{X_y}+B_{X_y})\sim \overline{\varphi}^*(-m(K_{\overline{G}}+(B_{X_y})^{\overline{G}}))$. By there is an $F$-complement $\Lambda_{\overline{G}}\in |-K_{\overline{G}}-(B_{X_y})^{\overline{G}}|_{\mathbb{Q}}$ for $(\overline{G},(B_{X_y})^{\overline{G}})$ such that $\Lambda_{\overline{G}}=\pi^*\Lambda_{G}$ for some $\Lambda_G\in |A_G/m|_{\mathbb{Q}}$. Letting $\Lambda_{X_y}\coloneqq\varphi^*\Lambda_G$ we then have that $(X_y,B_{X_y}+\Lambda_{X_y})$ is GFS and $K_{X_y}+B_{X_y}+\Lambda_{X_y}\sim_{\mathbb{Z}_{(p)}}0$, by . The above diagram induces the following on global sections for all $l\geq 0$: As the two rightmost maps are surjective for all $l\gg 0$ by Serre Vanishing, we conclude we can lift $\Lambda_{X_y}$ to $\Lambda\in |L|_{\mathbb{Q}}$. As $(X_y,B_{X_y}+\Lambda_{X_y})$ is GFS, we have that $(X_{\overline{\eta}},B_{\overline{\eta}}+\Lambda_{\overline{\eta}})$ is also GFS by . ◻ Thanks to the KGFR condition on the fibres, we can find $F$-complements even after a small perturbation of the boundary. **Corollary 2**. 
*Let $f \colon X \to Y$ be an equidimensional contraction of normal quasi-projective varieties with general fibre $X_y$, and let $B$ be a $\mathbb{Q}$-divisor on $X$ such that $\mathrm{Supp}(B^-)$ does not dominate $Y$ and $(X_y,B_{X_y})$ is KGFR. Let $D$ be a $\mathbb{Q}$-divisor on $Y$, set $L\coloneqq-K_X-B-f^*D$, and assume there is $m\in\mathbb{N}\setminus p\mathbb{N}$ such that $mL$ is Cartier and $|V|\subseteq |mL|$ such that $\phi_{|V|_{X_y}}$ is a morphism and $p\nmid\textup{St.deg}(\phi_{|V|_{X_y}})$. Let $E$ be a $\mathbb{Q}$-divisor on $Y$ and suppose there exists a $\mathbb{Q}$-divisor $0\leq \Gamma\sim_{\mathbb{Q}}L-f^*E$. Then $L_\epsilon\coloneqq(1-\epsilon)L$ admits an $F$-complement for $(X/Y,B+\epsilon\Gamma)$ for all $\epsilon\in \mathbb{Z}_{(p),>0}$ small enough.* *Proof.* Let $$B_\epsilon\coloneqq B+\epsilon\Gamma ,\hspace{3mm} D_\epsilon\coloneqq D+\epsilon E, \hspace{3mm} L_\epsilon\coloneqq -K_X-B_\epsilon-f^*D_\epsilon$$ so that $L_\epsilon\sim_{\mathbb{Q}}(1-\epsilon)L$. The corollary will follow from as soon as we verify that - $(X_y,B_{\epsilon,F})$ is KGFR, and - there is $n\in \mathbb{N}\setminus p\mathbb{N}$ s.t. $nL_{\epsilon}$ is Cartier and there exists $|V_\epsilon|\subseteq |nL_\epsilon|$ s.t. $|V_\epsilon|_{X_y}$ induces a morphism with Stein degree not divisible by $p$. Let $\psi\colon X_y\to H$ be the semiample contraction of $-K_{X_y}-B_{X_y}$. Then yields an effective $\mathbb{Q}$-divisor $(B_{X_y})^H$ such that $(H,(B_{X_y})^H)$ is GFR. In particular we can write $\Gamma_{X_y}=\psi^*\Gamma_H$ for some $\Gamma_H\in |-K_H-(B_{X_y})^H|_{\mathbb{Q}}$. As $(H,(B_{X_y})^H)$ is GFR we have $(H,(B_{X_y})^H+\epsilon\Gamma_H)$ is GFR for $\epsilon$ small enough such that $(B_{X_y})^H+\epsilon\Gamma_H$ is a $\mathbb{Z}_{(p)}$-divisor by , and $K_H+(B_{X_y})^H+\epsilon\Gamma_H$ is $\mathbb{Z}_{(p)}$-Cartier, hence $(X_y,B_{X_y}+\epsilon\Gamma_{X_y})$ is also KGFR by (b) and (c). This shows (1). 
To show (2), let $l$ be a positive integer prime to $p$ such that $(1-\epsilon)l$ is also an integer. Then we conclude by letting $V_\epsilon\coloneqq V^{\otimes (1-\epsilon)l}\subseteq H^0(X,mlL_{\epsilon})$. ◻ # Injectivity Theorem The main result of this section is the following injectivity theorem (see [@Chang Theorem 4.3]). **Theorem 7** (Injectivity Theorem). *Let $f \colon X \to Y$ be an equidimensional contraction between normal quasi-projective varieties over a perfect field. Assume the general fibre $X_y$ is normal. Let $B=B^+-B^-$ be a $\mathbb{Q}$-divisor on $X$ such that $\mathrm{Supp}(B^-)$ does not dominate $Y$ and $(X_y,B_{X_y})$ is KGFR. Let $D$ be a $\mathbb{Q}$-divisor on $Y$, set $L\coloneqq -K_X-B-f^*D$ and assume* 1. *there is $m\in\mathbb{N}\setminus p\mathbb{N}$ such that $mL$ is Cartier and $|V|\subseteq |mL|$ such that $\phi_{|V|_{X_y}}$ is a morphism with $p\nmid\textup{St.deg}(\phi_{|V|_{X_y}})$;* 2. *there exists a $\mathbb{Q}$-Cartier $\mathbb{Q}$-divisor $P \geq B^-$ such that $\kappa(X, f^*(-K_Y-D)+P)=0$.* *Then, the natural map $$H^0(X, mL) \to H^0(X_y,mL_{X_y})$$ is injective for all $m \geq 0$. In particular we have $\kappa(X, L)\leq \kappa(X_y, L_{X_y})$.* Provided that our contraction admits $F$-complements, we can follow the same proof as in [@Chang Theorem 3.8 and Proposition 4.2]. **Proposition 9**. *Let $f \colon X \to Y$ be a contraction of normal quasi-projective varieties, and let $B=B^+-B^-$ be a $\mathbb{Q}$-divisor such that $\mathrm{Supp}(B^-)$ does not dominate $Y$. Let $D$ be a $\mathbb{Q}$-Cartier $\mathbb{Q}$-divisor on $Y$, let $L\coloneqq-K_X-B-f^*D$ and suppose $L$ admits an $F$-complement for $(X/Y,B)$. Lastly assume* 1. *$f$ is equidimensional or* 2.
*$B\geq 0$ and $Y$ is $\mathbb{Q}$-Gorenstein.* *Then, $f^*(-K_Y-D)+B^-$ is $\mathbb{Q}$-effective.* *Proof.* Let $0\leq \Lambda\sim_{\mathbb{Q}}L$ be an $F$-complement for $(X/Y,B)$ and consider $\Delta\coloneqq B+\Lambda$, so that we have $(X_{\overline{\eta}},\Delta_{\overline{\eta}})$ is globally $F$-split and $K_X+\Delta\sim_{\mathbb{Z}_{(p)},Y}0$. Then [@ejiri2017weak Theorem 3.17] yields a canonically defined $\mathbb{Q}$-divisor $\Delta_Y$ such that $$K_X+\Delta\sim_{\mathbb{Z}_{(p)}}f^*(K_Y+\Delta_Y)\sim_{\mathbb{Q}} f^*(-D).$$ Hence it is enough to show that $f^*(\Delta_Y)+B^-$ is $\mathbb{Q}$-effective. If $B\geq 0$ then $\Delta_Y=\Delta^Y$ ([@ejiri2017weak Theorem 3.17], ), in particular, it is effective. Suppose now $f$ is equidimensional: then every component $P$ of $\mathrm{Supp}(\Delta^v)$ is mapped to a prime divisor $Q\subset Y$, hence $f^*(\Delta_Y)\geq \Delta^v$. Indeed, [@DS Proposition 5.7] yields $\textup{coeff}_Q(\Delta_Y)=1-d_Q$, where $$d_Q\coloneqq\sup\lbrace t \textup{ s.t. } (X,\Delta+f^*(tQ)) \textup{ is globally sub \textit{F}-split over }\eta_Q\rbrace.$$ In particular, as globally sub-$F$-split sub-couples are sub-log canonical in codimension one ([@DS Lemma 2.14]), we have: $$\textup{coeff}_P(\Delta^v)\leq 1-d_Q\textup{coeff}_P(f^*(Q))\leq \textup{coeff}_P(f^*(Q))(1-d_Q)=\textup{coeff}_P(f^*(\Delta_Y)).$$ As $\Delta^v+B^-\geq 0$, we conclude that $$f^*(\Delta_Y)+B^-\geq 0.$$ ◻ **Corollary 3**. *Let $f \colon X \to Y$ be a contraction of normal quasi-projective varieties, with $Y$ $\mathbb{Q}$-Gorenstein, and let $B=B^+-B^-$ be a $\mathbb{Q}$-divisor on $X$. If $B^- \neq 0$, assume moreover that $f$ is equidimensional and that $\mathrm{Supp}(B^-)$ does not dominate $Y$. Let $D,E$ be $\mathbb{Q}$-Cartier $\mathbb{Q}$-divisors on $Y$, set $L\coloneqq-K_X-B-f^*D$, and assume that* 1. *there exists $0\leq\Gamma\sim_{\mathbb{Q}} L-f^*E$;* 2. 
*$L_{\epsilon}\coloneqq(1-\epsilon)L$ admits an $F$-complement for $(X,B+\epsilon \Gamma)$, for $\epsilon\in [0,1)\cap\mathbb{Q}$.* *Then, $f^*(-K_Y-D-\epsilon E)+B^-$ is $\mathbb{Q}$-effective.* *Proof.* Let $$B_\epsilon\coloneqq B+\epsilon\Gamma ,\hspace{3mm} D_\epsilon\coloneqq D+\epsilon E,$$ so that $L_\epsilon=-K_X-B_\epsilon-f^*D_\epsilon$ and all the hypotheses of are satisfied with respect to $f\colon (X,B_\epsilon)\to Y$, $L_\epsilon$, and $D_\epsilon$. Then yields that $f^*(-K_Y-D_\epsilon)+B_\epsilon^-=f^*(-K_Y-D-\epsilon E)+B^-$ is $\mathbb{Q}$-effective. ◻ We are now ready to prove our injectivity theorem. *Proof of .* Let $X_y\coloneqq f^{-1}(y)$ for a general $y \in Y$ and note that we may assume $f$ is flat over a neighborhood of $y$. Let $U \subseteq Y$ be the regular locus of $Y$; its complement has codimension $\geq 2$. We may assume $y \in U$. Since $f$ is equidimensional, the complement of $X_U \coloneqq f^{-1}(U)$ has codimension $\geq 2$. In particular, it is enough to prove $H^0(X_U, mL|_{X_U}) \to H^0(X_y,mL_{X_y})$ is injective. By substituting $Y$ with $U$ and $X$ with $X_U$, we can assume $Y$ is smooth. By contradiction, suppose the map $H^0(X, mL) \to H^0(X_y,mL_{X_y})$ is not injective. Then there exists a $\mathbb{Q}$-divisor $0\leq N\sim_{\mathbb{Q}}L$ such that $X_y\subseteq \mathrm{Supp}(N)$. Note that, by hypothesis, there exists a unique effective $\mathbb{Q}$-divisor $M\sim_{\mathbb{Q}}f^*(-K_Y-D)+P$, hence we may assume $X_y\not\subseteq \mathrm{Supp}(M)$.
Consider now the diagram where notation is as follows: - $Y'$ is the blowup of $Y$ at $y$ with exceptional divisor $E$, so that $\mu^*K_Y+aE=K_{Y'}$, with $a=\dim Y-1$; - $X'$ is the fibre product (hence it is also the blowup of $X$ at $X_y$ with exceptional divisor $G$, since blowup and flat base-change commute); - $f'$ is the induced morphism (note that since $f$ is equidimensional, so is $f'$); - $B'$ is the strict transform of $B$, so that $\pi^*(K_X+B)+bG=K_{X'}+B'$, with $b\leq a$. Let also $D'\coloneqq\mu^*D-aE$ and $L'\coloneqq-K_{X'}-B'-f'^*D'\sim_{\mathbb{Q}}\pi^*L+(a-b)G\geq \pi^*L$. As $\mathrm{Supp}(N)\supseteq X_y$ we have $\mathrm{Supp}(\pi^*N)\supseteq G$. Thus, letting $N'\coloneqq\pi^*N+(a-b)G$, $\mathrm{Supp}(N') \supseteq G$ too. In particular, for $0<\delta\ll 1$, we have an effective divisor $$0\leq \Gamma'\coloneqq N'-\delta G\sim_{\mathbb{Q}}L'-f'^*E',\hspace{3mm} E'\coloneqq\delta E.$$ Note that the data $f'\colon (X',B')\to Y',D',E',\Gamma'$ satisfy the hypotheses of , so $L'$ admits an $F$-complement for $(X'/Y',B'+\epsilon \Gamma')$. Applying to $f'\colon (X',B')\to Y'$, $L',D',E',\Gamma'$ yields the existence of an effective $\mathbb{Q}$-divisor $\overline{\Gamma}$ such that $$\begin{split} 0\leq \overline{\Gamma} \sim_{\mathbb{Q}} f'^*(-K_{Y'}-D'-\epsilon E')+(B')^-&=f'^*(\mu^*(-K_Y-D)-\epsilon E')+(B')^-\\ &\leq f'^*(\mu^*(-K_Y-D)-\epsilon E')+\pi^*P\\ &=\pi^*(f^*(-K_Y-D)+P)-\epsilon\delta G\\ &\sim_{\mathbb{Q}}\pi^*M-\epsilon\delta G, \end{split}$$ contradicting the assumption $X_y\not\subseteq \mathrm{Supp}(M)$. ◻ # Proof Now, we have all the ingredients to prove the main theorem. **Theorem 8**. *Let $f \colon X \to Y$ be a contraction of normal quasi-projective varieties over a perfect field $k$ of positive characteristic. Assume the general fibre $X_y$ is normal and let $B$ be an effective $\mathbb{Q}$-divisor on $X$ such that $(X_y,B_{X_y})$ is $K$-globally $F$-regular. 
Assume $Y$ is $\mathbb{Q}$-Gorenstein and $D$ is a $\mathbb{Q}$-Cartier $\mathbb{Q}$-divisor on $Y$. Set $L\coloneqq-K_X-B-f^*D$ and suppose there is $m\in\mathbb{N}\setminus p\mathbb{N}$ such that $mL$ is Cartier and $|V|\subseteq |mL|$ such that $\phi_{|V|_{X_y}}$ is a morphism with $p\nmid \textup{St.deg}(\phi_{|V|_{X_y}})$. Then, for a general fibre $X_y$, $$\kappa(X,L)\leq \kappa(X_y,L_{X_y})+\kappa(Y,-K_Y-D).$$* *Proof.* By and we may assume that $-K_Y-D$ is $\mathbb{Q}$-effective. We consider the contraction $g$ induced by a sufficiently divisible power of $-K_Y-D$ and we resolve it: In the above diagram, $X'$ is the normalisation of the main component of $X \times_Y Y'$ and $f'$, $\mu$, $\pi$ are the induced maps. We can choose $g'$ such that $f'$ is a sufficiently high equidimensional birational model of $f$ and $\textup{Exc}(\pi)\subseteq (f')^{-1}(\textup{Exc}(\mu))$. Specifically, $f'$ is constructed by the following diagram where $\mu_2$ is the normalised blowup of the base ideal of $|m(-K_Y-D)|$, $\pi_2$ is the normalisation of $(X\times_Y Y_1)_{\textup{main}}$ and $f'$ is an equidimensional model of $f_1$ such that every $\mu$-exceptional divisor is $\mathbb{Q}$-Cartier. Note that the condition on the exceptional loci is satisfied by $f_1$ by construction, and by $f'$ too, since $f_1$ is flat over codimension one points of $Y_1$ ([@Har_AG Proposition 9.7]). Let $h'\coloneqq g' \circ f'$. Since the fibres of $g'$ and of $h'$ may be very singular, we consider the base change with a high enough power of the Frobenius morphism on $Z$: where - $X_e$ is the normalisation of the reduction of $X'_{Z^e}$; - $Y_e$ is the normalisation of the reduction of $Y'_{Z^e}$; - $f_e$ and $g_e$ are the induced morphisms; - $a$ and $b$ are the induced morphisms; - $h_e\coloneqq g_e \circ f_e$. 
Choose $e \gg 0$ such that, if $\overline{\zeta}$ is the geometric generic point of $Z$, $X_{e,\overline{\zeta}}= (X_{\overline{\zeta},\mathrm{red}})^{\nu}$ and $Y_{e,\overline{\zeta}}= (Y_{\overline{\zeta},\mathrm{red}})^{\nu}$. Such an $e$ exists by . In particular, the general fibres of $f_e$, $g_e$ and $h_e$ are normal and reduced by [@LMMPZsoltJoe Proposition 2.1]. By easy additivity applied to $h_e$ we obtain $$\begin{split} \kappa(X_e,(\pi\circ b)^*L) & \leq \kappa(X_{e,z},((\pi \circ b)^*L)|_{X_{e,z}})+\dim (Z)\\ & =\kappa(X_{e,z},((\pi \circ b)^*L)|_{X_{e,z}})+\kappa(Y,-K_Y-D), \end{split}$$ where $X_{e,z}$ is a general fibre of $h_e$. As implies $\kappa(X_e,(\pi \circ b)^*L)=\kappa(X,L)$, we just need to show $$\kappa(X_{e,z},((\pi \circ b)^*L)|_{X_{e,z}})\leq \kappa(X_y,(\pi^*L)|_{X_y}).$$ Note that, since $f$ is separable and its general fibre is normal by assumption, the general fibre of $f_e$ is isomorphic to $X_y$ and the same holds for the geometric generic fibres. In particular, $X'_{\overline{\eta}}$ is reduced, where $\overline{\eta}$ is the geometric generic point of $Y$. Note that $X_e$ can also be described as the normalisation of the reduction of $X' \times_{Y'} Y_e$. Since $f'$ is equidimensional, we can apply : there exist $C_X$, $C_Y$ $\mathbb{Q}$-divisors on $X_e$ and $Y_e$ respectively, and $C$, $\mathbb{Q}$-divisor on $X_e$, such that - $K_{X_e}+ C_X \sim b^*(K_{X'})$; - $K_{Y_e}+ C_Y \sim a^*(K_{Y'})$; - $K_{X_e/X'}- f_e^*K_{Y_e/Y'} \sim -C$; - there exists $U \subseteq Z$ such that $C_X|_{X_{e,z}} \geq 0$, $C_Y|_{Y_{e,z}} \geq 0$ and $C|_{X_{e,z}} \geq 0$, where $X_{e,z}$ is a fibre of $h_e$ over any point $z \in U$ and $Y_{e,z}$ is a fibre of $g_e$ over any point $z \in U$. 
In particular, for $X_{e,z}$ a general fibre of $h_e$, $$(C_X-f_e^*C_Y)|_{X_{e,z}} \sim C|_{X_{e,z}} \geq 0.$$ The goal now is to apply the injectivity theorem to the contraction induced on the general fibres: where $z$ is a general closed point of $Z$ and $y$ is a general closed point of $Y_{e,z}$. In order to do so, we need to carefully define our divisors. Define $K_{X'}+B'\coloneqq \pi^*(K_X+B)$; note that $B'$ is not necessarily effective. However, we can find an effective $\mu$-exceptional divisor $\Theta'$ on $Y'$ such that $f'^*(\Theta')\geq (B')^-$. Define: $$\begin{split} D'\coloneqq\mu^*(D)-\Theta'\hspace{3mm}\textup{and}\hspace{3mm}L' &\coloneqq-K_{X'}-(B')^+-f'^*(D')\\ &=\pi^*L+(f'^*(\Theta')-(B')^-), \end{split}$$ so that $L'\geq \pi^*L$. Note that $L'$ is $\mathbb{Q}$-Cartier since it differs from $\pi^*L$ only by exceptional divisors. Moreover, we can choose $\Theta'$ so that $mL'$ is Cartier as well. Now, let $$B_e\coloneqq b^*(B')^+ + C_X -f_e^*(C_Y); \hspace{3mm} D_e\coloneqq a^*(D') + C_Y \hspace{3mm}\textup{and}\hspace{3mm} L_e\coloneqq b^*(L'),$$ so that: - $a^*(K_{Y'}+D')=K_{Y_e}+D_e$; - $L_e=-K_{X_e}-B_e-f_e^*(D_e)$. Note that $B_e|_{X_{e,z}} \geq 0$ by bullet $(4)$ in the above discussion. Condition (a) in holds on $(X_y, B_{e,{X_y}})$. Indeed, $\pi$ is an isomorphism over an open subset of $Y$ and the general fibres of $f'$ are isomorphic to the ones of $f_e$. In particular, if $\overline{\eta}$ is the geometric generic point of $Y_e$ (or of $Y'$), $X_{e,\overline{\eta}} \simeq X'_{\overline{\eta}}$ and $$K_{X_{e,\overline{\eta}}} = (b^*(K_{X'})-C_X)|_{X_{e,\overline{\eta}}} = K_{X'_{\overline{\eta}}} -C_X|_{X_{e,\overline{\eta}}},$$ whence $C_X|_{X_{e,\overline{\eta}}}=0$ and $B_{e,X_y}= B'_{X_y}=B_{X_y}$. As for condition (b), we need to find $P \geq (B_e)^-_{X_{e,z}} = 0$ such that $$\kappa(X_{e,z},(f_{X_{e,z}})^*(-K_{Y_{e,z}}-D_{e,Y_{e,z}})+P)=0.$$ We claim $P=0$ does the job.
If $\zeta$ is the generic point of $Z$, $\kappa(Y'_\zeta, \mu^*(-K_Y-D)|_{Y'_\zeta})=0$. Indeed, if $k$ is uncountable, by construction of $g'$, $\kappa(Y'_z, \mu^*(-K_Y-D)|_{Y'_z})=0$, for the very general fibre $Y'_z$ of $g'$. By , this implies $\kappa(Y'_\zeta, \mu^*(-K_Y-D)|_{Y'_\zeta})=0$. If $k$ is countable, let $\overline{k} \supset k$ be a perfect uncountable extension of $k$ and $\overline{g'} \colon \overline{Y'} \to \overline{Z'}$ the base change of $g'$ with $\overline{k}$. If $\mathcal{D}$ is a $\mathbb{Q}$-divisor on $Y'$, denote by $\overline{\mathcal{D}}$ its base change with $\overline{k}$. By flat base change, $H^0(Y', \mathcal{D}) \otimes_{k} \overline{k}= H^0(\overline{Y'}, \overline{\mathcal{D}})$. By the above discussion, we can thus conclude that $\kappa(\overline{Y'_\zeta}, \overline{\mu^*(-K_Y-D)}|_{\overline{Y'_\zeta}})=0=\kappa(Y'_\zeta, \mu^*(-K_Y-D)|_{Y'_\zeta})$. Moreover, $$\begin{split} \mu^*(-K_Y-D)&=-K_{Y'}-\Xi'-\mu^*D\\ &=-K_{Y'}-D'-(\Xi'+\Theta'), \end{split}$$ where $\Xi'$ is a $\mu$-exceptional divisor, not necessarily effective. Note that, after possibly enlarging $\Theta'$ while keeping it $\mu$-exceptional, we can also assume $\Xi'+\Theta'\geq 0$ and $\mu$-exceptional. Then the projection formula on $\mu$ yields that $\mu^*(-K_Y-D)$ and $-K_{Y'}-D'$ have the same sections, thus, by : $$\begin{split} 0=\kappa(Y'_\zeta,(\mu^*(-K_Y-D))|_{Y'_\zeta})= \kappa(Y'_\zeta, (-K_{Y'}-D')|_{Y'_{\zeta}})=\\ \kappa(Y_{e,\zeta},(a^*(-K_{Y'}-D'))|_{Y_{e,\zeta}})= \kappa(Y_{e,\zeta}, (-K_{Y_e}-D_e)|_{Y_{e,\zeta}}). 
\end{split}$$ Since the general fibre of $g_e$ is normal and reduced, by , for the general fibre $Y_{e,z}$, we have: $$0= \kappa(Y_{e,\zeta},(-K_{Y_e}-D_e)|_{Y_{e,\zeta}})= \kappa(Y_{e,z}, (-K_{Y_e}-D_e)|_{Y_{e,z}}) = \kappa(Y_{e,z}, -K_{Y_{e,z}}-D_{e,Y_{e,z}}).$$ Thus, we can apply , which yields the inequality: $$\kappa(X_{e,z},L_{e,X_{e,z}})\leq \kappa(X_y,L_{e,X_y}).$$ Since $\pi$ is an isomorphism over the generic point of $Y$ and $b|_{X_y}$ is the identity morphism on the general fibre $X_y$ of $f_e$, we have $\kappa(X_y,L_{e,X_y})=\kappa(X_y, L'_{X_y})=\kappa(X_y,L_{X_y})$. As $b^*L'\geq b^*\pi^*L$ we conclude $$\kappa(X_{e,z},((\pi \circ b)^*L)_{X_{e,z}})\leq \kappa(X_y,L_{X_y}).$$ ◻ [^1]: *See*
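A quick sanity check on the superadditivity statement (our remark, not taken from the paper): whenever the hypotheses of the theorem are satisfied for a trivial product (e.g. $X_y$ a globally $F$-regular Fano variety), the bound is attained with equality. Take $$X= X_y\times Y,\qquad f=\mathrm{pr}_2,\qquad B=0,\qquad D=0,\qquad L=-K_X=\mathrm{pr}_1^*(-K_{X_y})+\mathrm{pr}_2^*(-K_Y).$$ Then $L_{X_y}=-K_{X_y}$, and the Künneth formula gives $H^0(X,-mK_X)\cong H^0(X_y,-mK_{X_y})\otimes_k H^0(Y,-mK_Y)$ for all $m\geq 0$, whence $$\kappa(X,L)=\kappa(X_y,L_{X_y})+\kappa(Y,-K_Y),$$ with the usual conventions when one of the terms equals $-\infty$.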
arxiv_math
{ "id": "2309.16580", "title": "Superadditivity of anticanonical Iitaka dimension for contractions with\n F-split fibres", "authors": "Marta Benozzo, Iacopo Brivio, Chi-Kang Chang", "categories": "math.AG", "license": "http://creativecommons.org/licenses/by/4.0/" }
--- abstract: | We construct the first example of a lattice on an irreducible Euclidean building that is not residually finite. Conjecturally, the normal subgroup theorem extends to this lattice making it virtually simple. author: - Thomas Titz Mite, Stefan Witzel bibliography: - reference.bib date: October 2023 title: A $\tilde{C}_2$-lattice that is not residually finite --- Lattices on Euclidean buildings naturally arise as arithmetic subgroups of semisimple groups over local fields. Margulis's celebrated normal subgroup theorem shows that all their quotients are finite (the lattice on the building being center-free). Being linear, they are residually finite, meaning they have enough finite quotients to embed into their product. Quite possibly, every irreducible lattice on a Euclidean building of dimension at least $3$ is arithmetic. On $2$-dimensional buildings, however, there exist *exotic* lattices that have recently attracted attention, in particular for the following reasons: on the one hand, work in progress indicates that the normal subgroup theorem still holds, while on the other hand they tend to have few known quotients. What is more, in type $\tilde{A}_2$ work of Bader--Caprace--Lécureux [@BaderCapraceLecureux] shows that lattices are linear if and only if they are arithmetic. This caused B-C-L to conjecture the following in type $\tilde{A}_2$: **Conjecture 1**. *Let $X$ be an irreducible, locally finite, exotic Euclidean building and let $\Gamma < \mathop{\mathrm{Aut}}(X)$ be a lattice. Then $\Gamma$ is virtually simple.* Assuming the normal subgroup theorem to hold, in order to prove that a lattice is virtually simple it suffices to show that it is not residually finite. Here we provide the first example of a building lattice for which this can be proven: **Theorem 2**. *Let $\mathcal T$ be the complex consisting of seven vertices, 45 edges, and 45 triangles indicated in Figure [\[fig:t\]](#fig:t){reference-type="ref" reference="fig:t"}.
The universal cover $\widetilde{\mathcal T}$ is a building of type $\tilde{C}_2$ and its rank 2 residues are isomorphic to the incidence graph of the symplectic quadrangle over $\mathbb F_2$ and the 3-regular complete bipartite graph. The fundamental group $\pi_1(\mathcal T)$ is not residually finite. In particular, $\pi_1(\mathcal T)$ is a uniform $\tilde{C}_2$-lattice that is not residually finite.* This is a first report on the lattice, describing it to the community. A future article version will contain more information on how it was found. The proof decomposes into the following three statements. The first one is due to Radu [@Radu] while the other two are routine. Our main contribution is therefore to have constructed the complex $\mathcal T$. The statements are proven as Theorem [Theorem 15](#thm:radu_text){reference-type="ref" reference="thm:radu_text"}, Proposition [Proposition 20](#prop:pi1_text){reference-type="ref" reference="prop:pi1_text"}, and Proposition [Proposition 21](#prop:cover_text){reference-type="ref" reference="prop:cover_text"} below. **Theorem 3**. *Let $\mathcal S$ be the complex of four vertices, twelve edges and nine squares indicated in Figure [\[fig:radu\]](#fig:radu){reference-type="ref" reference="fig:radu"}. The fundamental group $\pi_1(\mathcal S)$ is not residually finite.* **Proposition 4**. *Let $\mathcal S$ and $\mathcal T$ be as above. There is a topological embedding $\mathcal S \to \mathcal T$ such that the induced map $\pi_1(\mathcal S) \to \pi_1(\mathcal T)$ is injective.* **Proposition 5**. *The universal cover $\widetilde{\mathcal T}$ is a building of type $\tilde{C}_2$.* *Remark 6*. None of our complexes involve loops so the orientation in which two edges need to be identified is always apparent from their vertices. *Remark 7*. The complex $\mathcal T$ was generated through a computer search. To perform this search we modified Algorithm 2 in [@Radu2 Section 3.6] to effectively address our specific problem. 
The concepts presented in this article serve as a significant source of inspiration and are crucial to obtain our results. We implemented our search algorithm in the computer algebra system GAP [@GAP4] and made great use of the GAP-package GRAPE [@GRAPE] developed by Soicher. # Non-residually finite lattices on products of trees In this section we summarize [@Radu Section 3.1] in order to prove Theorem [Theorem 3](#thm:radu){reference-type="ref" reference="thm:radu"}. We fix two natural numbers $d_1, d_2 \geq 3$. **Definition 8**. A $(d_1, d_2)$-complex is a square complex with 1. four vertices $v_{00}, v_{10}, v_{11}, v_{01}$, 2. $d_1$ edges between $v_{00}$ and $v_{10}$ and $d_1$ edges between $v_{11}$ and $v_{01}$ (call them horizontal edges), 3. $d_2$ edges between $v_{10}$ and $v_{11}$ and $d_2$ edges between $v_{01}$ and $v_{00}$ (call them vertical edges), 4. $d_1 d_2$ squares attached to the four vertices, such that for each pair $(a,x)$ of a horizontal and a vertical edge, there is exactly one square adjacent to both $a$ and $x$. **Observation 9**. *In a $(d_1, d_2)$-complex $S$ the link of each vertex is a complete bipartite graph. We deduce that its universal cover $\widetilde S$ is the product of the $d_1$-regular tree and the $d_2$-regular tree. Its fundamental group $\pi_1(S)$ acts regularly on the vertices of each type of $\widetilde S$.* **Observation 10**. *Let $S$ be a $(d_1, d_2)$-complex. Assume we have an action of $D_4 = \left\langle \sigma, \rho \mid \rho^2, \sigma^2, (\rho \sigma)^2 \right \rangle$ on $S$ with: $$\begin{aligned} \sigma &: v_{00} \leftrightarrow v_{01}, \quad v_{10} \leftrightarrow v_{11}, & \rho &: v_{00} \leftrightarrow v_{11}, \quad v_{10} \leftrightarrow v_{01} \; . \end{aligned}$$ Let $\pi : \widetilde S \to S$ be the canonical projection.
We can extend $\pi_1(S)$ by $D_4$ by considering deck transformations covering it: $$\widetilde{D_4}:= \{g \in \mathop{\mathrm{Aut}}(\tilde S) \mid \exists h \in D_4 : \pi \circ g = h \circ \pi \} \;.$$ The group $\widetilde{D_4}$ now acts regularly on the vertices of $\widetilde S$. It contains $\pi_1(S)$ as a normal subgroup of index 4.* If $\Gamma := \widetilde{D_4}$ and $S$ are as in Observation [Observation 10](#obs:goodaction){reference-type="ref" reference="obs:goodaction"}, a short presentation for $\Gamma$ can be given in terms of the combinatorics of $S$ (Lemma [Lemma 13](#lem:presentation){reference-type="ref" reference="lem:presentation"}). This motivates the definition of a $(d_1, d_2)$-datum, which is due to Radu. **Definition 11**. A $(d_1, d_2)$-datum $(A,X, \varphi_A, \varphi_X, R)$ consists of two finite sets $A, X$ with $\lvert A \rvert = d_1$ and $\lvert X \rvert= d_2$, two involutions $\varphi_A : A \to A$ and $\varphi_X : X \to X$ and a subset $R \subseteq A \times X \times A \times X$ satisfying conditions (1) and (2) below. Define two involutions $\widehat \sigma, \widehat \rho : A\times X \times A \times X \to A\times X \times A \times X$: $$\begin{aligned} (a_1,x_1,a_2,x_2) & \overset{\widehat \sigma}{\longleftrightarrow} (\varphi_A(a_2), \varphi_X(x_1), \varphi_A(a_1), \varphi_X(x_2))\;, \\[2mm] (a_1,x_1,a_2,x_2) & \overset{\widehat \rho}{\longleftrightarrow} (a_2,x_2,a_1,x_1) \;. \end{aligned}$$ 1. Each of the four projections of $R$ onto subproducts of the form $A\times X$ or $X\times A$ is bijective. 2. $R$ is invariant under the action of $\langle \, \widehat \sigma, \widehat \rho \, \rangle \cong D_4$. **Observation 12**. 1. *Given a $(d_1, d_2)$-datum we can construct a $(d_1, d_2)$-complex $S$ admitting a $D_4$-action as in Observation [Observation 10](#obs:goodaction){reference-type="ref" reference="obs:goodaction"} as follows:\ We start with four vertices $v_{00}, v_{10}, v_{11}, v_{01}$.
For each $a \in A$ we glue in an edge $a$ between $v_{00}$ and $v_{10}$ and an edge $a'$ between $v_{11}$ and $v_{01}$. Similarly we glue in edges $x$ between $v_{10}$ and $v_{11}$ and edges $x'$ between $v_{01}$ and $v_{00}$. Afterwards we glue in a square along the edges $a_1, x_1, a_2', x_2'$ for each $(a_1, x_1, a_2, x_2) \in R$.* 1. *We define an involution $\sigma$ acting on $S$ as follows: $$\begin{aligned} v_{00} & \leftrightarrow v_{01}, & v_{10} & \leftrightarrow v_{11}, & a &\leftrightarrow \varphi_A(a)', & x &\leftrightarrow \varphi_X(x), & x' &\leftrightarrow \varphi_X(x)', & \text{for $a\in A, x \in X$}. \end{aligned}$$* 2. *Analogously we define an involution $\rho$: $$\begin{aligned} v_{00} & \leftrightarrow v_{11}, & v_{10} & \leftrightarrow v_{01}, & a & \leftrightarrow a', & x & \leftrightarrow x', && &\hspace{13mm} & \text{for $a\in A, x \in X$}. \end{aligned}$$* *Now property (2) in the definition of a $(d_1, d_2)$-datum ensures that the squares in $S$ are indeed invariant under the action of $\langle \sigma, \rho \rangle \cong D_4$.* 2. *Conversely, if $D_4 = \langle \sigma, \rho\rangle$ acts on a $(d_1, d_2)$-complex $S$ as in Observation [Observation 10](#obs:goodaction){reference-type="ref" reference="obs:goodaction"} we construct a $(d_1, d_2)$-datum as follows: Let $A$ be the set of edges between $v_{00}$ and $v_{10}$ and let $X$ be the set of edges between $v_{10}$ and $v_{11}$. Let $\sigma' := \rho \sigma$. Define $\varphi_A$ and $\varphi_X$ as follows: $$\begin{aligned} a &\overset{\varphi_A}{\longleftrightarrow} \sigma'(a) \quad \text{for $a\in A$}, & x &\overset{\varphi_X}{\longleftrightarrow} \sigma(x) \quad \text{for $x \in X$}. \end{aligned}$$ Finally we define $R$: $$R := \{ (a_1, x_1, a_2, x_2) \mid (a_1, x_1, \rho(a_2), \rho(x_2)) \; \text{is a square of } S \}\;.$$ One easily checks that $(A,X, \varphi_A, \varphi_X, R)$ is indeed a $(d_1, d_2)$-datum.* **Lemma 13**.
*Let $D_4 \curvearrowright S$ be as in Observation [Observation 10](#obs:goodaction){reference-type="ref" reference="obs:goodaction"}. Let $(A,X, \varphi_A, \varphi_X, R)$ be the corresponding $(d_1, d_2)$-datum. Then the group $\widetilde {D_4}$ is presented by: $$\langle A \cup X \mid a \varphi_A(a), x \varphi_X(x), a_1 x_1 a_2 x_2 \; \; \; \text{for} \; a \in A, x \in X, (a_1, x_1, a_2, x_2) \in R \rangle \; .$$* This follows from the discussion in [@Radu] after Definition 3.4. **Example 14**. 1. Let $\mathcal S$ be the $(3,3)$-complex indicated in Figure [\[fig:radu\]](#fig:radu){reference-type="ref" reference="fig:radu"}. We have an action by $D_4 := \langle \sigma, \rho\rangle$ on $\mathcal S$: $$\begin{aligned} \sigma: && v_{00} & \leftrightarrow v_{01}, & v_{10} & \leftrightarrow v_{11}, \\ && a &\leftrightarrow a', & b &\leftrightarrow b', & c &\leftrightarrow c', & x &\leftrightarrow x, & y &\leftrightarrow y, & z &\leftrightarrow z, & x' &\leftrightarrow x', & y' &\leftrightarrow y', & z' &\leftrightarrow z', \\ && s_1 &\leftrightarrow s_1, & s_2 &\leftrightarrow s_2, & s_3 &\leftrightarrow s_6, & s_4 &\leftrightarrow s_4, & s_5 &\leftrightarrow s_8, & s_7 &\leftrightarrow s_7, & s_9 &\leftrightarrow s_9, \\[2mm] \rho: && v_{00} & \leftrightarrow v_{11}, & v_{10} & \leftrightarrow v_{01}, \\ && a &\leftrightarrow a', & b &\leftrightarrow b', & c &\leftrightarrow c', & x &\leftrightarrow x', & y &\leftrightarrow y', & z &\leftrightarrow z', \\ && s_1 &\leftrightarrow s_1, & s_2 &\leftrightarrow s_2, & s_3 &\leftrightarrow s_6, & s_4 &\leftrightarrow s_4, & s_5 &\leftrightarrow s_8, & s_7 &\leftrightarrow s_9 . 
\end{aligned}$$ Note that $\sigma' := \rho \sigma$ acts as follows: $$\begin{aligned} \sigma': && v_{00} & \leftrightarrow v_{10}, & v_{11} & \leftrightarrow v_{01}, \\ && a &\leftrightarrow a, & b &\leftrightarrow b, & c &\leftrightarrow c, & a' &\leftrightarrow a', & b' &\leftrightarrow b', & c' &\leftrightarrow c', & x &\leftrightarrow x', & y &\leftrightarrow y', & z &\leftrightarrow z', \\ && s_1 &\leftrightarrow s_1, & s_2 &\leftrightarrow s_2, & s_3 &\leftrightarrow s_3, & s_4 &\leftrightarrow s_4, & s_5 &\leftrightarrow s_5, & s_6 &\leftrightarrow s_6, & s_7 &\leftrightarrow s_9, & s_8 &\leftrightarrow s_8. \end{aligned}$$ 2. The $(d_1, d_2)$-datum corresponding to this action $D_4 \curvearrowright \mathcal S$ is $$\begin{aligned} A & := \{a,b,c\}, & X & := \{x,y,z\}, & \varphi_A &= \mathop{\mathrm{id}}_A, & \varphi_X &= \mathop{\mathrm{id}}_X, \end{aligned}$$ $$R := \{ (a,x,a,x), (a,y,a,y), (a,z,b,z), (b,x,b,x), (b,y,c,y), (b,z,a,z), (c,x,c,z), (c,y,b,y), (c,z,c,x) \}\;.$$ We deduce that $\widetilde{D_4}$ arising from this action is presented by: $$\begin{aligned} &\langle a,b,c,x,y,z \mid a^2, b^2, c^2, x^2, y^2, z^2, a x a x, a y a y, a z b z, b x b x, b y c y, b z a z, c x c z, c y b y, c z c x \rangle\\ =& \langle a,b,c,x,y,z \mid a^2, b^2, c^2, x^2, y^2, z^2, a x a x, a y a y, a z b z, b x b x, b y c y, c x c z \rangle\;. 
\end{aligned}$$ $$\begin{aligned} % \begin{tikzpicture}[baseline=(current bounding box.center)] \def\t{1.5} % \node[circle, scale = 0.1] (A) at (0,0) {}; \node[circle, scale = 0.1] (B) at (1*\t,0) {}; \node[circle, scale = 0.1] (C) at (1*\t,1*\t) {}; \node[circle, scale = 0.1] (D) at (0,1*\t) {}; % \draw[line width = 0.5mm, color = black,->-] (A) -- (B); \draw[line width = 0.5mm, color = black,->-] (B) -- (C); \draw[line width = 0.5mm, color = black,->-] (C) -- (D); \draw[line width = 0.5mm, color = black,->-] (D) -- (A); % \draw[fill = white, thick] (A) circle (0.7mm); \draw[fill = white, thick] (B) circle (0.7mm); \draw[fill = white, thick] (C) circle (0.7mm); \draw[fill = white, thick] (D) circle (0.7mm); % \node[below] at (A) {$v_{00}$}; \node[below] at (B) {$v_{10}$}; \node[above] at (C) {$v_{11}$}; \node[above] at (D) {$v_{01}$}; % \node[below] at ($(A)!.5!(B)$) {$a$}; % bottom \node[right] at ($(B)!.5!(C)$) {$x$}; % right \node[left] at ($(D)!.5!(A)$) {$x'$}; % left \node[above] at ($(C)!.5!(D)$) {$a'$}; % top % \node (M) at (0.5*\t, 0.5*\t) {$s_1$}; \end{tikzpicture} && % \begin{tikzpicture}[baseline=(current bounding box.center)] \def\t{1.5} % \node[circle, scale = 0.1] (A) at (0,0) {}; \node[circle, scale = 0.1] (B) at (1*\t,0) {}; \node[circle, scale = 0.1] (C) at (1*\t,1*\t) {}; \node[circle, scale = 0.1] (D) at (0,1*\t) {}; % \draw[line width = 0.5mm, color = black,->-] (A) -- (B); \draw[line width = 0.5mm, color = black,->-] (B) -- (C); \draw[line width = 0.5mm, color = black,->-] (C) -- (D); \draw[line width = 0.5mm, color = black,->-] (D) -- (A); % \draw[fill = white, thick] (A) circle (0.7mm); \draw[fill = white, thick] (B) circle (0.7mm); \draw[fill = white, thick] (C) circle (0.7mm); \draw[fill = white, thick] (D) circle (0.7mm); % \node[below] at (A) {$v_{00}$}; \node[below] at (B) {$v_{10}$}; \node[above] at (C) {$v_{11}$}; \node[above] at (D) {$v_{01}$}; % \node[below] at ($(A)!.5!(B)$) {$a$}; % bottom \node[right] at ($(B)!.5!(C)$) {$y$}; % 
right \node[left] at ($(D)!.5!(A)$) {$y'$}; % left \node[above] at ($(C)!.5!(D)$) {$a'$}; % top % \node (M) at (0.5*\t, 0.5*\t) {$s_2$}; \end{tikzpicture} && % \begin{tikzpicture}[baseline=(current bounding box.center)] \def\t{1.5} % \node[circle, scale = 0.1] (A) at (0,0) {}; \node[circle, scale = 0.1] (B) at (1*\t,0) {}; \node[circle, scale = 0.1] (C) at (1*\t,1*\t) {}; \node[circle, scale = 0.1] (D) at (0,1*\t) {}; % \draw[line width = 0.5mm, color = black,->-] (A) -- (B); \draw[line width = 0.5mm, color = black,->-] (B) -- (C); \draw[line width = 0.5mm, color = black,->-] (C) -- (D); \draw[line width = 0.5mm, color = black,->-] (D) -- (A); % \draw[fill = white, thick] (A) circle (0.7mm); \draw[fill = white, thick] (B) circle (0.7mm); \draw[fill = white, thick] (C) circle (0.7mm); \draw[fill = white, thick] (D) circle (0.7mm); % \node[below] at (A) {$v_{00}$}; \node[below] at (B) {$v_{10}$}; \node[above] at (C) {$v_{11}$}; \node[above] at (D) {$v_{01}$}; % \node[below] at ($(A)!.5!(B)$) {$a$}; % bottom \node[right] at ($(B)!.5!(C)$) {$z$}; % right \node[left] at ($(D)!.5!(A)$) {$z'$}; % left \node[above] at ($(C)!.5!(D)$) {$b'$}; % top % \node (M) at (0.5*\t, 0.5*\t) {$s_3$}; \end{tikzpicture} && % \begin{tikzpicture}[baseline=(current bounding box.center)] \def\t{1.5} % \node[circle, scale = 0.1] (A) at (0,0) {}; \node[circle, scale = 0.1] (B) at (1*\t,0) {}; \node[circle, scale = 0.1] (C) at (1*\t,1*\t) {}; \node[circle, scale = 0.1] (D) at (0,1*\t) {}; % \draw[line width = 0.5mm, color = black,->-] (A) -- (B); \draw[line width = 0.5mm, color = black,->-] (B) -- (C); \draw[line width = 0.5mm, color = black,->-] (C) -- (D); \draw[line width = 0.5mm, color = black,->-] (D) -- (A); % \draw[fill = white, thick] (A) circle (0.7mm); \draw[fill = white, thick] (B) circle (0.7mm); \draw[fill = white, thick] (C) circle (0.7mm); \draw[fill = white, thick] (D) circle (0.7mm); % \node[below] at (A) {$v_{00}$}; \node[below] at (B) {$v_{10}$}; \node[above] at (C) 
{$v_{11}$}; \node[above] at (D) {$v_{01}$}; % \node[below] at ($(A)!.5!(B)$) {$b$}; % bottom \node[right] at ($(B)!.5!(C)$) {$x$}; % right \node[left] at ($(D)!.5!(A)$) {$x'$}; % left \node[above] at ($(C)!.5!(D)$) {$b'$}; % top % \node (M) at (0.5*\t, 0.5*\t) {$s_4$}; \end{tikzpicture} && % \begin{tikzpicture}[baseline=(current bounding box.center)] \def\t{1.5} % \node[circle, scale = 0.1] (A) at (0,0) {}; \node[circle, scale = 0.1] (B) at (1*\t,0) {}; \node[circle, scale = 0.1] (C) at (1*\t,1*\t) {}; \node[circle, scale = 0.1] (D) at (0,1*\t) {}; % \draw[line width = 0.5mm, color = black,->-] (A) -- (B); \draw[line width = 0.5mm, color = black,->-] (B) -- (C); \draw[line width = 0.5mm, color = black,->-] (C) -- (D); \draw[line width = 0.5mm, color = black,->-] (D) -- (A); % \draw[fill = white, thick] (A) circle (0.7mm); \draw[fill = white, thick] (B) circle (0.7mm); \draw[fill = white, thick] (C) circle (0.7mm); \draw[fill = white, thick] (D) circle (0.7mm); % \node[below] at (A) {$v_{00}$}; \node[below] at (B) {$v_{10}$}; \node[above] at (C) {$v_{11}$}; \node[above] at (D) {$v_{01}$}; % \node[below] at ($(A)!.5!(B)$) {$b$}; % bottom \node[right] at ($(B)!.5!(C)$) {$y$}; % right \node[left] at ($(D)!.5!(A)$) {$y'$}; % left \node[above] at ($(C)!.5!(D)$) {$c'$}; % top % \node (M) at (0.5*\t, 0.5*\t) {$s_5$}; \end{tikzpicture} && % \begin{tikzpicture}[baseline=(current bounding box.center)] \def\t{1.5} % \node[circle, scale = 0.1] (A) at (0,0) {}; \node[circle, scale = 0.1] (B) at (1*\t,0) {}; \node[circle, scale = 0.1] (C) at (1*\t,1*\t) {}; \node[circle, scale = 0.1] (D) at (0,1*\t) {}; % \draw[line width = 0.5mm, color = black,->-] (A) -- (B); \draw[line width = 0.5mm, color = black,->-] (B) -- (C); \draw[line width = 0.5mm, color = black,->-] (C) -- (D); \draw[line width = 0.5mm, color = black,->-] (D) -- (A); % \draw[fill = white, thick] (A) circle (0.7mm); \draw[fill = white, thick] (B) circle (0.7mm); \draw[fill = white, thick] (C) circle (0.7mm); \draw[fill 
= white, thick] (D) circle (0.7mm); % \node[below] at (A) {$v_{00}$}; \node[below] at (B) {$v_{10}$}; \node[above] at (C) {$v_{11}$}; \node[above] at (D) {$v_{01}$}; % \node[below] at ($(A)!.5!(B)$) {$b$}; % bottom \node[right] at ($(B)!.5!(C)$) {$z$}; % right \node[left] at ($(D)!.5!(A)$) {$z'$}; % left \node[above] at ($(C)!.5!(D)$) {$a'$}; % top % \node (M) at (0.5*\t, 0.5*\t) {$s_6$}; \end{tikzpicture} \\ % \begin{tikzpicture}[baseline=(current bounding box.center)] \def\t{1.5} % \node[circle, scale = 0.1] (A) at (0,0) {}; \node[circle, scale = 0.1] (B) at (1*\t,0) {}; \node[circle, scale = 0.1] (C) at (1*\t,1*\t) {}; \node[circle, scale = 0.1] (D) at (0,1*\t) {}; % \draw[line width = 0.5mm, color = black,->-] (A) -- (B); \draw[line width = 0.5mm, color = black,->-] (B) -- (C); \draw[line width = 0.5mm, color = black,->-] (C) -- (D); \draw[line width = 0.5mm, color = black,->-] (D) -- (A); % \draw[fill = white, thick] (A) circle (0.7mm); \draw[fill = white, thick] (B) circle (0.7mm); \draw[fill = white, thick] (C) circle (0.7mm); \draw[fill = white, thick] (D) circle (0.7mm); % \node[below] at (A) {$v_{00}$}; \node[below] at (B) {$v_{10}$}; \node[above] at (C) {$v_{11}$}; \node[above] at (D) {$v_{01}$}; % \node[below] at ($(A)!.5!(B)$) {$c$}; % bottom \node[right] at ($(B)!.5!(C)$) {$x$}; % right \node[left] at ($(D)!.5!(A)$) {$z'$}; % left \node[above] at ($(C)!.5!(D)$) {$c'$}; % top % \node (M) at (0.5*\t, 0.5*\t) {$s_7$}; \end{tikzpicture} && % \begin{tikzpicture}[baseline=(current bounding box.center)] \def\t{1.5} % \node[circle, scale = 0.1] (A) at (0,0) {}; \node[circle, scale = 0.1] (B) at (1*\t,0) {}; \node[circle, scale = 0.1] (C) at (1*\t,1*\t) {}; \node[circle, scale = 0.1] (D) at (0,1*\t) {}; % \draw[line width = 0.5mm, color = black,->-] (A) -- (B); \draw[line width = 0.5mm, color = black,->-] (B) -- (C); \draw[line width = 0.5mm, color = black,->-] (C) -- (D); \draw[line width = 0.5mm, color = black,->-] (D) -- (A); % \draw[fill = white, thick] 
(A) circle (0.7mm); \draw[fill = white, thick] (B) circle (0.7mm); \draw[fill = white, thick] (C) circle (0.7mm); \draw[fill = white, thick] (D) circle (0.7mm); % \node[below] at (A) {$v_{00}$}; \node[below] at (B) {$v_{10}$}; \node[above] at (C) {$v_{11}$}; \node[above] at (D) {$v_{01}$}; % \node[below] at ($(A)!.5!(B)$) {$c$}; % bottom \node[right] at ($(B)!.5!(C)$) {$y$}; % right \node[left] at ($(D)!.5!(A)$) {$y'$}; % left \node[above] at ($(C)!.5!(D)$) {$b'$}; % top % \node (M) at (0.5*\t, 0.5*\t) {$s_8$}; \end{tikzpicture} && % \begin{tikzpicture}[baseline=(current bounding box.center)] \def\t{1.5} % \node[circle, scale = 0.1] (A) at (0,0) {}; \node[circle, scale = 0.1] (B) at (1*\t,0) {}; \node[circle, scale = 0.1] (C) at (1*\t,1*\t) {}; \node[circle, scale = 0.1] (D) at (0,1*\t) {}; % \draw[line width = 0.5mm, color = black,->-] (A) -- (B); \draw[line width = 0.5mm, color = black,->-] (B) -- (C); \draw[line width = 0.5mm, color = black,->-] (C) -- (D); \draw[line width = 0.5mm, color = black,->-] (D) -- (A); % \draw[fill = white, thick] (A) circle (0.7mm); \draw[fill = white, thick] (B) circle (0.7mm); \draw[fill = white, thick] (C) circle (0.7mm); \draw[fill = white, thick] (D) circle (0.7mm); % \node[below] at (A) {$v_{00}$}; \node[below] at (B) {$v_{10}$}; \node[above] at (C) {$v_{11}$}; \node[above] at (D) {$v_{01}$}; % \node[below] at ($(A)!.5!(B)$) {$c$}; % bottom \node[right] at ($(B)!.5!(C)$) {$z$}; % right \node[left] at ($(D)!.5!(A)$) {$x'$}; % left \node[above] at ($(C)!.5!(D)$) {$c'$}; % top % \node (M) at (0.5*\t, 0.5*\t) {$s_9$}; \end{tikzpicture} \end{aligned}$$ The following is a reformulation of Theorem [Theorem 3](#thm:radu){reference-type="ref" reference="thm:radu"}. **Theorem 15** (Radu). *The group $\widetilde{D_4}$ arising from the action $D_4 \curvearrowright \mathcal S$ in Example [Example 14](#exa:s){reference-type="ref" reference="exa:s"} is not residually-finite. 
In particular the fundamental group $\pi_1(\mathcal S)$ is not residually-finite.* For the first claim see Proposition 5.4 in [@Radu] or Corollary 4.19 in [@Caprace]. The second follows since being residually finite is invariant under commensurability. # Embedding products of trees in $\tilde{C}_2$-buildings $$\begin{aligned} \littletriangle{x'}{a}{s_1}{v_{00}}{v_{10}}{v_{01}}{t_1} && \littletriangle{a'}{x}{s_1}{v_{11}}{v_{10}}{v_{01}}{t_2}[-<-] && \littletriangle{y'}{a}{s_2}{v_{00}}{v_{10}}{v_{01}}{t_3} && \littletriangle{a'}{y}{s_2}{v_{11}}{v_{10}}{v_{01}}{t_4}[-<-] && \littletriangle{z'}{a}{s_3}{v_{00}}{v_{10}}{v_{01}}{t_5} && \littletriangle{b'}{z}{s_3}{v_{11}}{v_{10}}{v_{01}}{t_6}[-<-] \\ \littletriangle{x'}{b}{s_4}{v_{00}}{v_{10}}{v_{01}}{t_7} && \littletriangle{b'}{x}{s_4}{v_{11}}{v_{10}}{v_{01}}{t_8}[-<-] && \littletriangle{y'}{b}{s_5}{v_{00}}{v_{10}}{v_{01}}{t_9} && \littletriangle{c'}{y}{s_5}{v_{11}}{v_{10}}{v_{01}}{t_{10}}[-<-] && \littletriangle{z'}{b}{s_6}{v_{00}}{v_{10}}{v_{01}}{t_{11}} && \littletriangle{a'}{z}{s_6}{v_{11}}{v_{10}}{v_{01}}{t_{12}}[-<-] \\ \littletriangle{z'}{c}{s_7}{v_{00}}{v_{10}}{v_{01}}{t_{13}} && \littletriangle{c'}{x}{s_7}{v_{11}}{v_{10}}{v_{01}}{t_{14}}[-<-] && \littletriangle{y'}{c}{s_8}{v_{00}}{v_{10}}{v_{01}}{t_{15}} && \littletriangle{b'}{y}{s_8}{v_{11}}{v_{10}}{v_{01}}{t_{16}}[-<-] && \littletriangle{x'}{c}{s_9}{v_{00}}{v_{10}}{v_{01}}{t_{17}} && \littletriangle{c'}{z}{s_9}{v_{11}}{v_{10}}{v_{01}}{t_{18}}[-<-] \end{aligned}$$ **Observation 16**. *Diagonally subdividing each square of the complex $\mathcal S$ from Figure [\[fig:radu\]](#fig:radu){reference-type="ref" reference="fig:radu"}, we obtain the triangle complex $\mathcal T_0$ in Figure [\[fig:radu_subdivided\]](#fig:radu_subdivided){reference-type="ref" reference="fig:radu_subdivided"} consisting of four vertices, 21 edges and 18 triangles.* **Lemma 17**. 
*The following assignment describes cellular embedding of $\mathcal T_0$ to the triangle complex $\mathcal T$, which is indicated in Figure [\[fig:t\]](#fig:t){reference-type="ref" reference="fig:t"}: $$\begin{aligned} t_i & \mapsto \boldsymbol{t_i}, & v_{00} & \mapsto u_1, & v_{11} & \mapsto u_2, & v_{01} & \mapsto w_1, & v_{10} & \mapsto v_1, & s_i & \mapsto g_i, & \\ a & \mapsto f_1, & b & \mapsto f_2, & c & \mapsto f_3, & x & \mapsto f_4, & y & \mapsto f_5, & z & \mapsto f_6, \\ a' & \mapsto e_4, & b' & \mapsto e_5, & c' & \mapsto e_6, & x' & \mapsto e_1, & y' & \mapsto e_2, & z' & \mapsto e_3. \end{aligned}$$* $$\begin{aligned} \littletriangle{e_1}{f_1}{g_1}{u_1}{v}{w}{\boldsymbol{t_1}} && \littletriangle{e_4}{f_4}{g_1}{u_2}{v}{w}{\boldsymbol{t_2}}[-<-] && \littletriangle{e_2}{f_1}{g_2}{u_1}{v}{w}{\boldsymbol{t_3}} && \littletriangle{e_4}{f_5}{g_2}{u_2}{v}{w}{\boldsymbol{t_4}}[-<-] && \littletriangle{e_3}{f_1}{g_3}{u_1}{v}{w}{\boldsymbol{t_5}} && \littletriangle{e_5}{f_6}{g_3}{u_2}{v}{w}{\boldsymbol{t_6}}[-<-] \\ \littletriangle{e_1}{f_2}{g_4}{u_1}{v}{w}{\boldsymbol{t_7}} && \littletriangle{e_5}{f_4}{g_4}{u_2}{v}{w}{\boldsymbol{t_8}}[-<-] && \littletriangle{e_2}{f_2}{g_5}{u_1}{v}{w}{\boldsymbol{t_9}} && \littletriangle{e_6}{f_5}{g_5}{u_2}{v}{w}{\boldsymbol{t_{10}}}[-<-] && \littletriangle{e_3}{f_2}{g_6}{u_1}{v}{w}{\boldsymbol{t_{11}}} && \littletriangle{e_4}{f_6}{g_6}{u_2}{v}{w}{\boldsymbol{t_{12}}}[-<-] \\ \littletriangle{e_3}{f_3}{g_7}{u_1}{v}{w}{\boldsymbol{t_{13}}} && \littletriangle{e_6}{f_4}{g_7}{u_2}{v}{w}{\boldsymbol{t_{14}}}[-<-] && \littletriangle{e_2}{f_3}{g_8}{u_1}{v}{w}{\boldsymbol{t_{15}}} && \littletriangle{e_5}{f_5}{g_8}{u_2}{v}{w}{\boldsymbol{t_{16}}}[-<-] && \littletriangle{e_1}{f_3}{g_9}{u_1}{v}{w}{\boldsymbol{t_{17}}} && \littletriangle{e_6}{f_6}{g_9}{u_2}{v}{w}{\boldsymbol{t_{18}}}[-<-] \\ \littletriangle{e_7}{f_9}{g_{15}}{u_3}{v}{w}{t_{19}} && \littletriangle{e_7}{f_{10}}{g_{13}}{u_3}{v}{w}{t_{20}} && 
\littletriangle{e_7}{f_{12}}{g_6}{u_3}{v}{w}{t_{21}} && \littletriangle{e_8}{f_9}{g_{13}}{u_3}{v}{w}{t_{22}} && \littletriangle{e_8}{f_{10}}{g_4}{u_3}{v}{w}{t_{23}} && \littletriangle{e_8}{f_{12}}{g_{12}}{u_3}{v}{w}{t_{24}} \\ \littletriangle{e_9}{f_9}{g_3}{u_3}{v}{w}{t_{25}} && \littletriangle{e_9}{f_{10}}{g_{14}}{u_3}{v}{w}{t_{26}} && \littletriangle{e_9}{f_{12}}{g_{11}}{u_3}{v}{w}{t_{27}} && \littletriangle{e_{10}}{f_8}{g_{14}}{u_4}{v}{w}{t_{28}} && \littletriangle{e_{10}}{f_{13}}{g_{15}}{u_4}{v}{w}{t_{29}} && \littletriangle{e_{10}}{f_{15}}{g_9}{u_4}{v}{w}{t_{30}} \\ \littletriangle{e_{11}}{f_8}{g_{12}}{u_4}{v}{w}{t_{31}} && \littletriangle{e_{11}}{f_{13}}{g_7}{u_4}{v}{w}{t_{32}} && \littletriangle{e_{11}}{f_{15}}{g_{10}}{u_4}{v}{w}{t_{33}} && \littletriangle{e_{12}}{f_8}{g_2}{u_4}{v}{w}{t_{34}} && \littletriangle{e_{12}}{f_{13}}{g_{12}}{u_4}{v}{w}{t_{35}} && \littletriangle{e_{12}}{f_{15}}{g_{14}}{u_4}{v}{w}{t_{36}} \\ \littletriangle{e_{13}}{f_7}{g_{10}}{u_5}{v}{w}{t_{37}} && \littletriangle{e_{13}}{f_{11}}{g_{15}}{u_5}{v}{w}{t_{38}} && \littletriangle{e_{13}}{f_{14}}{g_8}{u_5}{v}{w}{t_{39}} && \littletriangle{e_{14}}{f_7}{g_{11}}{u_5}{v}{w}{t_{40}} && \littletriangle{e_{14}}{f_{11}}{g_5}{u_5}{v}{w}{t_{41}} && \littletriangle{e_{14}}{f_{14}}{g_{13}}{u_5}{v}{w}{t_{42}} \\ \littletriangle{e_{15}}{f_7}{g_1}{u_5}{v}{w}{t_{43}} && \littletriangle{e_{15}}{f_{11}}{g_{10}}{u_5}{v}{w}{t_{44}} && \littletriangle{e_{15}}{f_{14}}{g_{11}}{u_5}{v}{w}{t_{45}} \end{aligned}$$ *Remark 18*. From now on, we regard $\mathcal T_0$ and $\mathcal T$ as $M_0$-complexes in the sense of [@BridsonHaefliger Section I.7] with the triangles right-angled as indicated. **Lemma 19**. 
*The geometric links of $\mathcal T$ are metric spherical buildings of type $C_2$ and $A_1 \times A_1$.* *Proof.* The links of $u_1, u_2, u_3, u_4$ and $u_5$ have edges of length $\pi/2$ and are $3$-regular, complete bipartite graphs (see Figure [\[fig:A1A1\]](#fig:A1A1){reference-type="ref" reference="fig:A1A1"}). The links of $v$ and $w$ have edges of length $\pi/4$ and are isomorphic to the incidence graph of the symplectic quadrangle over $\mathbb F_2$ (see Figure [\[fig:C2\]](#fig:C2){reference-type="ref" reference="fig:C2"}). ◻ $$\begin{aligned} % \begin{tikzpicture}[baseline=(current bounding box.center)] \def\t{1.5} \def\s{1.9} % \node[circle, scale = 0.5] (A) at (1*\t,0) {}; \node[circle, scale = 0.5] (B) at (0.5*\t,0.866*\t) {}; \node[circle, scale = 0.5] (C) at (-0.5*\t,0.866*\t) {}; \node[circle, scale = 0.5] (D) at (-1*\t,0) {}; \node[circle, scale = 0.5] (E) at (-0.5*\t,-0.866*\t) {}; \node[circle, scale = 0.5] (F) at (0.5*\t,-0.866*\t) {}; % \draw[line width = 0.5mm, color = black] (A) -- (B); \draw[line width = 0.5mm, color = black] (B) -- (C); \draw[line width = 0.5mm, color = black] (C) -- (D); \draw[line width = 0.5mm, color = black] (D) -- (E); \draw[line width = 0.5mm, color = black] (E) -- (F); \draw[line width = 0.5mm, color = black] (F) -- (A); % \draw (A) edge[line width = 0.5mm, color = black, bend left=20] (D); \draw (B) edge[line width = 0.5mm, color = black, bend left=20] (E); \draw (C) edge[line width = 0.5mm, color = black, bend left=20] (F); % \draw[fill = Red, thick] (A) circle (1mm); \draw[fill = ForestGreen, thick] (B) circle (1mm); \draw[fill = Red, thick] (C) circle (1mm); \draw[fill = ForestGreen, thick] (D) circle (1mm); \draw[fill = Red, thick] (E) circle (1mm); \draw[fill = ForestGreen, thick] (F) circle (1mm); % \node[color = Red] (H) at (1*\s, 0) {$1$}; \node[color = ForestGreen] (I) at (0.5*\s,0.866*\s) {$1$}; \node[color = Red] (J) at (-0.5*\s,0.866*\s) {$2$}; \node[color = ForestGreen] (K) at (-1*\s,0*\s) {$2$}; 
\node[color = Red] (L) at (-0.5*\s,-0.866*\s) {$3$}; \node[color = ForestGreen] (M) at (0.5*\s,-0.866*\s) {$3$}; \end{tikzpicture} && % \begin{tikzpicture}[baseline=(current bounding box.center)] \def\t{1.5} \def\s{1.9} % \node[circle, scale = 0.5] (A) at (1*\t,0) {}; \node[circle, scale = 0.5] (B) at (0.5*\t,0.866*\t) {}; \node[circle, scale = 0.5] (C) at (-0.5*\t,0.866*\t) {}; \node[circle, scale = 0.5] (D) at (-1*\t,0) {}; \node[circle, scale = 0.5] (E) at (-0.5*\t,-0.866*\t) {}; \node[circle, scale = 0.5] (F) at (0.5*\t,-0.866*\t) {}; % \draw[line width = 0.5mm, color = black] (A) -- (B); \draw[line width = 0.5mm, color = black] (B) -- (C); \draw[line width = 0.5mm, color = black] (C) -- (D); \draw[line width = 0.5mm, color = black] (D) -- (E); \draw[line width = 0.5mm, color = black] (E) -- (F); \draw[line width = 0.5mm, color = black] (F) -- (A); % \draw (A) edge[line width = 0.5mm, color = black, bend left=20] (D); \draw (B) edge[line width = 0.5mm, color = black, bend left=20] (E); \draw (C) edge[line width = 0.5mm, color = black, bend left=20] (F); % \draw[fill = Red, thick] (A) circle (1mm); \draw[fill = ForestGreen, thick] (B) circle (1mm); \draw[fill = Red, thick] (C) circle (1mm); \draw[fill = ForestGreen, thick] (D) circle (1mm); \draw[fill = Red, thick] (E) circle (1mm); \draw[fill = ForestGreen, thick] (F) circle (1mm); % \node[color = Red] (H) at (1*\s, 0) {$4$}; \node[color = ForestGreen] (I) at (0.5*\s,0.866*\s) {$4$}; \node[color = Red] (J) at (-0.5*\s,0.866*\s) {$5$}; \node[color = ForestGreen] (K) at (-1*\s,0*\s) {$5$}; \node[color = Red] (L) at (-0.5*\s,-0.866*\s) {$6$}; \node[color = ForestGreen] (M) at (0.5*\s,-0.866*\s) {$6$}; \end{tikzpicture} && % \begin{tikzpicture}[baseline=(current bounding box.center)] \def\t{1.5} \def\s{1.9} % \node[circle, scale = 0.5] (A) at (1*\t,0) {}; \node[circle, scale = 0.5] (B) at (0.5*\t,0.866*\t) {}; \node[circle, scale = 0.5] (C) at (-0.5*\t,0.866*\t) {}; \node[circle, scale = 0.5] (D) at (-1*\t,0) {}; 
\node[circle, scale = 0.5] (E) at (-0.5*\t,-0.866*\t) {}; \node[circle, scale = 0.5] (F) at (0.5*\t,-0.866*\t) {}; % \draw[line width = 0.5mm, color = black] (A) -- (B); \draw[line width = 0.5mm, color = black] (B) -- (C); \draw[line width = 0.5mm, color = black] (C) -- (D); \draw[line width = 0.5mm, color = black] (D) -- (E); \draw[line width = 0.5mm, color = black] (E) -- (F); \draw[line width = 0.5mm, color = black] (F) -- (A); % \draw (A) edge[line width = 0.5mm, color = black, bend left=20] (D); \draw (B) edge[line width = 0.5mm, color = black, bend left=20] (E); \draw (C) edge[line width = 0.5mm, color = black, bend left=20] (F); % \draw[fill = Red, thick] (A) circle (1mm); \draw[fill = ForestGreen, thick] (B) circle (1mm); \draw[fill = Red, thick] (C) circle (1mm); \draw[fill = ForestGreen, thick] (D) circle (1mm); \draw[fill = Red, thick] (E) circle (1mm); \draw[fill = ForestGreen, thick] (F) circle (1mm); % \node[color = Red] (H) at (1*\s, 0) {$7$}; \node[color = ForestGreen] (I) at (0.5*\s,0.866*\s) {$9$}; \node[color = Red] (J) at (-0.5*\s,0.866*\s) {$8$}; \node[color = ForestGreen] (K) at (-1*\s,0*\s) {$10$}; \node[color = Red] (L) at (-0.5*\s,-0.866*\s) {$9$}; \node[color = ForestGreen] (M) at (0.5*\s,-0.866*\s) {$12$}; \end{tikzpicture} \\ % \begin{tikzpicture}[baseline=(current bounding box.center)] \def\t{1.5} \def\s{1.9} % \node[circle, scale = 0.5] (A) at (1*\t,0) {}; \node[circle, scale = 0.5] (B) at (0.5*\t,0.866*\t) {}; \node[circle, scale = 0.5] (C) at (-0.5*\t,0.866*\t) {}; \node[circle, scale = 0.5] (D) at (-1*\t,0) {}; \node[circle, scale = 0.5] (E) at (-0.5*\t,-0.866*\t) {}; \node[circle, scale = 0.5] (F) at (0.5*\t,-0.866*\t) {}; % \draw[line width = 0.5mm, color = black] (A) -- (B); \draw[line width = 0.5mm, color = black] (B) -- (C); \draw[line width = 0.5mm, color = black] (C) -- (D); \draw[line width = 0.5mm, color = black] (D) -- (E); \draw[line width = 0.5mm, color = black] (E) -- (F); \draw[line width = 0.5mm, color = black] (F) -- 
(A); % \draw (A) edge[line width = 0.5mm, color = black, bend left=20] (D); \draw (B) edge[line width = 0.5mm, color = black, bend left=20] (E); \draw (C) edge[line width = 0.5mm, color = black, bend left=20] (F); % \draw[fill = Red, thick] (A) circle (1mm); \draw[fill = ForestGreen, thick] (B) circle (1mm); \draw[fill = Red, thick] (C) circle (1mm); \draw[fill = ForestGreen, thick] (D) circle (1mm); \draw[fill = Red, thick] (E) circle (1mm); \draw[fill = ForestGreen, thick] (F) circle (1mm); % \node[color = Red] (H) at (1*\s, 0) {$10$}; \node[color = ForestGreen] (I) at (0.5*\s,0.866*\s) {$8$}; \node[color = Red] (J) at (-0.5*\s,0.866*\s) {$11$}; \node[color = ForestGreen] (K) at (-1*\s,0*\s) {$13$}; \node[color = Red] (L) at (-0.5*\s,-0.866*\s) {$12$}; \node[color = ForestGreen] (M) at (0.5*\s,-0.866*\s) {$15$}; \end{tikzpicture} && % \begin{tikzpicture}[baseline=(current bounding box.center)] \def\t{1.5} \def\s{1.9} % \node[circle, scale = 0.5] (A) at (1*\t,0) {}; \node[circle, scale = 0.5] (B) at (0.5*\t,0.866*\t) {}; \node[circle, scale = 0.5] (C) at (-0.5*\t,0.866*\t) {}; \node[circle, scale = 0.5] (D) at (-1*\t,0) {}; \node[circle, scale = 0.5] (E) at (-0.5*\t,-0.866*\t) {}; \node[circle, scale = 0.5] (F) at (0.5*\t,-0.866*\t) {}; % \draw[line width = 0.5mm, color = black] (A) -- (B); \draw[line width = 0.5mm, color = black] (B) -- (C); \draw[line width = 0.5mm, color = black] (C) -- (D); \draw[line width = 0.5mm, color = black] (D) -- (E); \draw[line width = 0.5mm, color = black] (E) -- (F); \draw[line width = 0.5mm, color = black] (F) -- (A); % \draw (A) edge[line width = 0.5mm, color = black, bend left=20] (D); \draw (B) edge[line width = 0.5mm, color = black, bend left=20] (E); \draw (C) edge[line width = 0.5mm, color = black, bend left=20] (F); % \draw[fill = Red, thick] (A) circle (1mm); \draw[fill = ForestGreen, thick] (B) circle (1mm); \draw[fill = Red, thick] (C) circle (1mm); \draw[fill = ForestGreen, thick] (D) circle (1mm); \draw[fill = Red, 
thick] (E) circle (1mm); \draw[fill = ForestGreen, thick] (F) circle (1mm); % \node[color = Red] (H) at (1*\s, 0) {$13$}; \node[color = ForestGreen] (I) at (0.5*\s,0.866*\s) {$7$}; \node[color = Red] (J) at (-0.5*\s,0.866*\s) {$14$}; \node[color = ForestGreen] (K) at (-1*\s,0*\s) {$11$}; \node[color = Red] (L) at (-0.5*\s,-0.866*\s) {$15$}; \node[color = ForestGreen] (M) at (0.5*\s,-0.866*\s) {$14$}; \end{tikzpicture} \end{aligned}$$ The following is Proposition [Proposition 4](#prop:pi1){reference-type="ref" reference="prop:pi1"}. **Proposition 20**. *The embedding $\mathcal T_0 \to \mathcal T$ lifts to an embedding of universal covers $\widetilde{\mathcal T}_0 \to \widetilde{\mathcal T}$. In particular, $\pi_1(\mathcal T_0) \to \pi_1(\mathcal T)$ is injective.* *Proof.* Since the vertex links are spherical buildings and therefore [cat]{.smallcaps}(1)-spaces, the complex $\mathcal T$ is non-positively curved by the link criterion [@BridsonHaefliger Theorem II.5.2]. We claim that the embedding $\mathcal T_0 \to \mathcal T$ is locally isometric, meaning that its image is locally convex. The claim then follows from [@BridsonHaefliger Proposition II.4.14]. Let us identify $\mathcal T_0$ with its image. Since $\mathcal T_0$ is a subcomplex, local convexity is clear at relative interior points of triangles and edges. At a vertex $x$ it amounts to showing that for any two vertices in $\operatorname{lk}_{\mathcal T_0} x$ of distance $< \pi$ the unique shortest edge path connecting them is contained in $\operatorname{lk}_{\mathcal T_0} x$. For $u_1$ and $u_2$ this is clearly satisfied, since $\operatorname{lk}_{\mathcal T_0} (u_i) = \operatorname{lk}_{\mathcal T} (u_i)$ for $i =1,2$. In the pictures of $\operatorname{lk}_\mathcal T v$ and $\operatorname{lk}_\mathcal T w$ the subgraphs $\operatorname{lk}_{\mathcal T_0} v$ and $\operatorname{lk}_{\mathcal T_0} w$ are shown in bold and are readily verified to be convex. 
In fact, they are of course non-thick subbuildings isomorphic to subdivisions of $\operatorname{lk}_\mathcal S v_{01}$ and $\operatorname{lk}_\mathcal S v_{10}$. ◻ Finally we prove Proposition [Proposition 5](#prop:cover){reference-type="ref" reference="prop:cover"}. **Proposition 21**. *The universal cover $\widetilde{\mathcal T}$ is a building of type $\tilde{C}_2$.* *Proof.* That $\widetilde{\mathcal T}$ is a Euclidean building follows from Tits's local approach to buildings [@Tits Corollary 3]. In our metric context it is most convenient to refer to [@CharneyLytchak Theorem 7.3]. The type $\tilde{C}_2$ is recognizable by the fact that the rank-$2$ residues of a triangle have type $C_2$, $C_2$ and $A_1 \times A_1$ respectively. They are indeed isomorphic to the incidence graph of the symplectic quadrangle over $\mathbb F_2$ and the 3-regular complete bipartite graph respectively. ◻ # Further Computer Experiments This section presents the results of computer experiments we performed on both the building and the lattice. We performed these computer experiments using the computer algebra systems GAP and MAGMA [@MAGMA]. We also made use of the GAP-packages GRAPE and HAP [@HAP]. **Proposition 22**. *The fundamental group $\pi_1(\mathcal T)$ is presented by: $$\begin{aligned} \langle x,y \mid \; & yx^3yx^{-1}y^{-1}xy^2(x^{-1}y^{-1})^2x^{-1}yxy^{-1}x^{-1}y, \\ & xy^{-1}x^{-1}y^{-1}xyx^{-1}y^{-1}(y^{-1}xy^{-1}x^{-1})^2yxy^{{-2}}x^{-1}y^{-1}, \\ & x^{-1}y^{-1}xyxy^{-1}x^{-1}y^{-1}xyx^{-1}y^{-1}(y^{-1}x)^2y^{-1}x^{-1}yxy^{-1}x^{-2}, \\ & y^{-1}xy^{-1}x^{-1}yxy^{-1}(y^{-1}x^{-1})^2(y^{-1}x^{-1}yx)^2yx^2yx^{-1}y^{-1}, \\ & (y^{-1}x^{-1}yx)^2yx^{-2}yxy^{-1}x^{-1}y^2xy^{-1}x^{-3}y^{-1}xy^2x, \\ & yxy^2x^{-1}y^{-2}xy^{-1}x^{-1}yxy^{-2}x^{-2}y^2xy^{-2}x^{-1}y^{-1}xyx^{-1}y^{-2}x, \\ & xy^2x^{-1}y^{-1}xyx^{-1}yx^2yx^{-1}y^{-1}xy^{-1}x^{-1}yxy^{-1}x^{-1}(x^{-1}y^{-1})^2xy^{-1}x^{-1}yx, \\ & (xy^2x^{-1}y^{-1})^2y^{-1}x^3yx^{-1}y^{-1}xyx^{-1}yx^2(xyx^{-1}y^{-2})^2x \rangle \; . 
\end{aligned}$$* **Proposition 23**. *The fundamental group $\pi_1(\mathcal T)$* 1. *is perfect.* 2. *does not admit a finite simple quotient isomorphic to $\mathop{\mathrm{PSL}}(2,q)$ for any prime power $q$.* 3. *does not admit a finite simple quotient of size smaller than $5\cdot 10^7$.* **Proposition 24**. *The automorphism group of the complex $\mathcal T$ is of order 2. The non-trivial automorphism is determined by: $$\begin{aligned} u_1 & \leftrightarrow u_2, & u_3 & \leftrightarrow u_3, & u_4 & \leftrightarrow u_4, & u_5 & \leftrightarrow u_5, & v & \leftrightarrow w, & e_1 & \leftrightarrow f_4, & e_2 & \leftrightarrow f_5, & e_3 & \leftrightarrow f_6, \\ e_4 & \leftrightarrow f_1, & e_5 & \leftrightarrow f_2, & e_6 & \leftrightarrow f_3, & e_7 & \leftrightarrow f_9, & e_8 & \leftrightarrow f_{10}, & e_9 & \leftrightarrow f_{12}, & e_{10} & \leftrightarrow f_{13}, & e_{11} & \leftrightarrow f_{15}, & e_{12} & \leftrightarrow f_8, \\ e_{13} & \leftrightarrow f_{11}, & e_{14} & \leftrightarrow f_{14}, & e_{15} & \leftrightarrow f_7, & g_1 & \leftrightarrow g_1, & g_2 & \leftrightarrow g_2, & g_3 & \leftrightarrow g_6, & g_4 & \leftrightarrow g_4, & g_5 & \leftrightarrow g_8, & g_7 & \leftrightarrow g_9, \\ g_{10} & \leftrightarrow g_{10}, & g_{11} & \leftrightarrow g_{11}, & g_{12} & \leftrightarrow g_{14}, & g_{13} & \leftrightarrow g_{13}, & g_{15} & \leftrightarrow g_{15}. \end{aligned}$$* In particular, we can extend $\pi_1(\mathcal T)$ by the cyclic group of order two, acting by deck transformations. We denote this extension by $\Gamma$. **Proposition 25**. 1. *Let $\alpha$ be an automorphism of the combinatorial ball of radius 3 in $\widetilde{\mathcal T}$ centred at a special vertex $\tilde v$. Then $\alpha$ fixes the combinatorial ball of radius 2 pointwise. In particular, the stabiliser of $\tilde v$ in the full automorphism group of the building is trivial.* 2. 
*The discrete group $\Gamma$ is the full automorphism group of the building $\widetilde{\mathcal T}$. The type-preserving automorphism group is $\pi_1(\mathcal T)$.*
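The link verifications above also lend themselves to an independent sanity check. The following Python sketch (our illustration only; the experiments reported in this section were carried out in GAP and MAGMA) constructs the incidence graph of the symplectic quadrangle over $\mathbb F_2$ from scratch and confirms the combinatorics used in Lemma 19 and Proposition 21: it is a $3$-regular bipartite graph on $15 + 15$ vertices of girth $8$.

```python
from itertools import product, combinations
from collections import deque

# Points of the symplectic quadrangle over F_2: the 15 nonzero vectors of F_2^4.
points = [v for v in product((0, 1), repeat=4) if any(v)]

def form(u, v):
    # A standard symplectic form on F_2^4.
    return (u[0] * v[1] + u[1] * v[0] + u[2] * v[3] + u[3] * v[2]) % 2

# Lines: totally isotropic 2-dimensional subspaces, recorded by their 3 nonzero points.
lines = {frozenset({u, v, tuple((a + b) % 2 for a, b in zip(u, v))})
         for u, v in combinations(points, 2) if form(u, v) == 0}

# Bipartite incidence graph on 15 + 15 vertices.
adj = {("p", p): set() for p in points}
adj.update({("l", l): set() for l in lines})
for l in lines:
    for p in l:
        adj[("l", l)].add(("p", p))
        adj[("p", p)].add(("l", l))

def girth(adj):
    # Exact girth: minimum over all BFS roots of dist[u] + dist[w] + 1
    # taken over non-tree edges (u, w).
    best = float("inf")
    for s in adj:
        dist, parent = {s: 0}, {s: None}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w], parent[w] = dist[u] + 1, u
                    queue.append(w)
                elif parent[u] != w:
                    best = min(best, dist[u] + dist[w] + 1)
    return best
```

Each point lies on exactly $3$ lines and each line carries exactly $3$ points, so both link types appearing in Lemma 19 are recovered.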
--- abstract: | The main aim of this paper is to develop extreme value theory for $\theta$-expansions. We obtain the limit distribution of the largest value of a $\theta$-continued fraction mixing stationary stochastic process and some related results. These are analogous to the theorems of J. Galambos and W. Philipp for regular continued fractions. We also note that a Borel-Bernstein type theorem plays an important role. address: - | Faculty of Applied Sciences, Politehnica University of Bucharest,\ Splaiul Independentei 313, 060042, Bucharest, Romania - Gheorghe Mihoc-Caius Iacob Institute of Mathematical Statistics and Applied Mathematics of the Romanian Academy, Calea 13 Sept. 13, 050711 Bucharest, Romania - Mircea cel Batran Naval Academy, 1 Fulgerului, 900218 Constanta, Romania author: - Gabriela Ileana Sebe - Dan Lascu title: "Some extreme value theory for $\\theta$-expansions" --- $\theta$-expansions, Borel-Bernstein type theorem, extreme value theory, Fréchet law, $\psi$-mixing. # Introduction The investigation initiated by Bhattacharya and Goswami [@BG-2000] in random number generation led to the concept of a continued fraction expansion of a number in terms of an irrational $\theta \in (0, 1)$. This new expansion of positive reals, different from the regular continued fraction (RCF) expansion, is called the *$\theta$-expansion*. We mention that the case $\theta = 1$ recovers RCF-expansions. For a fixed $\theta \in (0, 1)$, Chakraborty and Rao [@CR-2003] have considered a generalization of the Gauss map $T_{\theta}: [0,\theta] \to [0,\theta]$, $$T_{\theta}(x):= \left\{ \begin{array}{ll} {\displaystyle \frac{1}{x} - \theta \left \lfloor \frac{1}{x \theta} \right\rfloor} & {\displaystyle \hbox{if } x \in (0, \theta],}\\ \\ 0 & \hbox{if } x=0. \end{array} \right. \label{1.1}$$ Here $\left\lfloor \cdot \right\rfloor$ stands for the integer part. 
Then every $x \in \left(0, \theta \right)$ can be expanded into a finite or infinite $\theta$-expansion $$x = \frac{1}{\displaystyle a_1\theta +\frac{1}{\displaystyle a_2\theta + \frac{1}{\displaystyle a_3\theta + \ddots} }} =: [a_1 \theta, a_2 \theta, a_3 \theta, \ldots], \label{1.2}$$ where $$a_1=a_1(x) := \left\{\begin{array}{ll} \left\lfloor \displaystyle \frac{1}{x \theta}\right\rfloor & \hbox{if } x \neq 0, \\ \\ \infty & \hbox{if } x = 0 \end{array} \right.$$ and $$a_n=a_n(x) := a_1\left(T_{\theta}^{n-1}(x)\right), \quad n \in \mathbb{N}_+ : = \left\{1, 2, 3, \ldots\right\},$$ with $T_{\theta}^0 (x) = x$. The positive integers $a_n \in \mathbb{N}_+$ are called the *digits* or *partial quotients* of $x$. Let $\mathcal{B}_{[0,\theta]}$ denote the $\sigma$-algebra of all Borel subsets of $[0,\theta]$. It is obvious that the digits $a_n$, $n \in \mathbb{N}_+$, are random variables which are defined almost surely on $\left( [0, \theta], \mathcal{B}_{[0,\theta]}\right)$ with respect to any probability measure on $\mathcal{B}_{[0,\theta]}$ that assigns probability $0$ to the set of rationals in $[0, \theta]$. An example of such a measure is Lebesgue measure $\lambda_{\theta}$ on $[0, \theta]$. It was shown in [@CR-2003; @SL-2014] that this expansion has many of the usual properties of RCFs. A natural question is whether the dynamical system given by the transformation $T_{\theta}$ admits an absolutely continuous invariant probability like the Gauss measure in the case $\theta = 1$. 
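To make the digit algorithm concrete, here is a short numerical sketch (our illustration, not taken from [@CR-2003]) for the case $\theta = 1/\sqrt{2}$, so that $\theta^2 = 1/2$ and $m = 2$. Floating point only approximates the orbit of the expanding map $T_\theta$, but the first few dozen digits are reliable.

```python
import math

THETA = 1 / math.sqrt(2)           # theta^2 = 1/2, i.e. m = 2

def theta_digits(x, n):
    """First n digits a_1(x), ..., a_n(x), obtained by iterating T_theta."""
    digits = []
    for _ in range(n):
        if x <= 0:                 # rational x: the expansion terminates
            break
        a = math.floor(1 / (x * THETA))
        digits.append(a)
        x = 1 / x - THETA * a      # the map T_theta of (1.1)
    return digits

def evaluate(digits):
    """Value of the finite continued fraction [a_1 theta, ..., a_n theta]."""
    value = 0.0
    for a in reversed(digits):
        value = 1 / (a * THETA + value)
    return value

x = math.pi / 10                   # an irrational point of (0, theta)
digs = theta_digits(x, 25)
```

Every computed digit is at least $m = 2$, as required when $\theta^2 = 1/m$, and evaluating the truncated expansion recovers $x$ to high accuracy.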
Chakraborty and Rao [@CR-2003] have identified, for certain values of $\theta$ (for example, if $\theta^2 = \frac{1}{m}$, $m \in \mathbb{N}_+$), the invariant measure for the transformation $T_{\theta}$ as $$\label{1.3} \mathrm{d}\gamma_{\theta} := \frac{1}{\log \left(1+\theta^{2}\right)} \frac{\theta \,\mathrm{d}x}{1 + \theta x}.$$ Moreover, if $\theta^2 = \frac{1}{m}$, $m \in \mathbb{N}_+$, $[a_1 \theta, a_2 \theta, a_3 \theta, \ldots]$ is the $\theta$-expansion of any $x \in (0, \theta)$ if and only if the following conditions hold: 1. $a_n \geq m$ for any $n \in \mathbb{N}_+$; 2. in the case when $x$ has a finite expansion, i.e., $x = [a_1 \theta, a_2 \theta, \ldots, a_n \theta]$, then $a_n \geq m+1$. It was proved in [@CR-2003] that the dynamical system $([0,\theta], T_{\theta})$ is ergodic and the measure $\gamma_{\theta}$ is invariant under $T_{\theta}$, that is, $\gamma_{\theta} (A) = \gamma_{\theta} (T^{-1}_{\theta}(A))$ for any $A \in {\mathcal{B}}_{[0, \theta]}$. Therefore, $(a_n)_{n \in \mathbb{N}_+}$ is a strictly stationary sequence on $([0,\theta],{\mathcal B}_{[0,\theta]},\gamma_{\theta})$. For more results about $\theta$-expansions see [@CD-2004; @Sebe-2017; @SL-2014; @SL-2019] and references therein. Every irrational $x \in (0, \theta) \setminus \mathbb{Q}=: \Omega$ has an infinite $\theta$-expansion. Note that for all $n \in \mathbb{N}_+$, $a_n(x) \geq m$ and $T^n_{\theta}([a_1 \theta, a_2 \theta, \ldots]) = [a_{n+1} \theta, a_{n+2} \theta, \ldots]$. For all $n \in \mathbb{N}_+$, we call the finite truncation of ([\[1.2\]](#1.2){reference-type="ref" reference="1.2"}) $$\frac{p_n(x)}{q_n(x)} = [a_1(x) \theta, a_2(x) \theta, \ldots, a_n(x) \theta]$$ the $n$-*th convergent* of the $\theta$-expansion of $x$. 
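Ergodicity makes the invariant measure ([\[1.3\]](#1.3){reference-type="ref" reference="1.3"}) easy to probe numerically: by the Birkhoff ergodic theorem, a long $T_\theta$-orbit should spend a fraction $\gamma_\theta(A)$ of its time in an interval $A$. A rough sketch for $\theta = 1/\sqrt{2}$ (ours; a finite floating-point orbit only approximates the ergodic average, so the tolerance is deliberately loose):

```python
import math

THETA = 1 / math.sqrt(2)                     # theta^2 = 1/2

def T(x):
    """The Gauss-like map T_theta of (1.1)."""
    return 1 / x - THETA * math.floor(1 / (x * THETA))

def gamma(a, b):
    """gamma_theta([a, b]) computed from the density (1.3)."""
    F = lambda t: math.log(1 + THETA * t) / math.log(1 + THETA ** 2)
    return F(b) - F(a)

x0 = math.sqrt(2) - 1.3                      # irrational seed in (0, theta)
x, hits, n = x0, 0, 200_000
for _ in range(n):
    x = T(x)
    if not 0 < x < THETA:                    # guard against floating-point escape
        x = x0
    if x < THETA / 2:
        hits += 1

empirical = hits / n                         # orbit frequency of [0, theta/2]
```

The orbit frequency of $[0, \theta/2]$ comes out close to $\gamma_\theta([0,\theta/2]) = \log(1 + \theta^2/2)/\log(1+\theta^2) \approx 0.55$, consistent with invariance and ergodicity.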
For every infinite $\theta$-expansion $[a_1 \theta, a_2 \theta, \ldots]$ the sequences $\{p_n\}_{n \geq -1}$ and $\{q_n\}_{n \geq -1}$ can be obtained by the following recursive relations $$\begin{aligned} p_n(x) &=& a_n(x) \theta p_{n-1}(x) + p_{n-2}(x), \quad \label{1.4} \\ q_n(x) &=& a_n(x) \theta q_{n-1}(x) + q_{n-2}(x), \quad \label{1.5}\end{aligned}$$ with $p_{-1}(x) := 1$, $p_0(x) := 0$, $q_{-1}(x) := 0$ and $q_{0}(x) := 1$. Then by induction, we have $$p_{n-1}(x)q_{n}(x) - p_{n}(x)q_{n-1}(x) = (-1)^{n}, \quad n \in \mathbb{N}. \label{1.6}$$ By using ([\[1.4\]](#1.4){reference-type="ref" reference="1.4"}) and ([\[1.5\]](#1.5){reference-type="ref" reference="1.5"}), we can verify that $$x = \frac{p_n(x) + T^n_{\theta}(x)p_{n-1}(x)} {q_n(x) + T^n_{\theta}(x)q_{n-1}(x)}, \quad n \geq 1. \label{1.7}$$ Using ([\[1.6\]](#1.6){reference-type="ref" reference="1.6"}) and ([\[1.7\]](#1.7){reference-type="ref" reference="1.7"}) we obtain $$\left| x - \frac{p_n(x)}{q_n(x)} \right| = \frac{1}{q_n(x)\left(\left(T^n_{\theta}(x)\right)^{-1} q_n(x)+q_{n-1}(x) \right)}, \quad n \geq 1. \label{1.8}$$ Since $a_{n+1}(x)\theta \leq \left(T^n_{\theta}(x)\right)^{-1} \leq (a_{n+1}(x)+1)\theta$, using ([\[1.4\]](#1.4){reference-type="ref" reference="1.4"}) and ([\[1.5\]](#1.5){reference-type="ref" reference="1.5"}) in ([\[1.8\]](#1.8){reference-type="ref" reference="1.8"}) we get $$\frac{1}{q_n(x)(q_{n+1}(x)+\theta q_{n}(x))} \leq \left| x - \frac{p_n(x)}{q_n(x)} \right| \leq \frac{1}{q_n(x)q_{n+1}(x)}, \quad n \geq 1. \label{1.9}$$ From ([\[1.5\]](#1.5){reference-type="ref" reference="1.5"}), we have that $q_n(x) \geq \theta$, $n \in \mathbb{N}_+$. Further, also from ([\[1.5\]](#1.5){reference-type="ref" reference="1.5"}) and by induction we have that $$q_n(x) \geq \left\lfloor \frac{n}{2} \right \rfloor \theta^2. 
\label{1.10}$$ Finally, from ([\[1.9\]](#1.9){reference-type="ref" reference="1.9"}) and ([\[1.10\]](#1.10){reference-type="ref" reference="1.10"}) it follows that $[a_1(x) \theta, a_2(x) \theta, \ldots, a_n(x) \theta] \to x$ as $n \to \infty$. Relation ([\[1.9\]](#1.9){reference-type="ref" reference="1.9"}) shows that the accuracy of this approximation depends on the growth rate of the partial quotients. In the case of RCFs, Borel [@Borel] and Bernstein [@Bern] gave a result, called the *Borel-Bernstein theorem* or *"$0-1$" law*, describing the growth rate of the partial quotients in the sense of Lebesgue measure. Our first result, Theorem [Theorem 3](#th.B-B){reference-type="ref" reference="th.B-B"}, is an analogue of the Borel-Bernstein theorem for $\theta$-expansions. We also show in Section 5 that the Borel-Bernstein type theorem plays an important role in the case of $\theta$-expansions. In Sections 4 and 5 we state some results concerning extreme value theory for $\theta$-expansions. These results appear to be new, in the sense that they have not been stated elsewhere before. Extreme value theory for RCF digits first emerged in the 1970s. The results of Galambos [@G-1972; @G-1973] concerning the maximal RCF digit were improved by Philipp [@Ph-1976], who gave a complete answer to a conjecture of Erdös. In Section 4 we derive a Fréchet law concerning the partial maxima of the growth rate of the digit sequence. Theorems [Theorem 9](#Th.4.4){reference-type="ref" reference="Th.4.4"} and [Theorem 10](#Th.4.5){reference-type="ref" reference="Th.4.5"} extend previous work of Galambos [@G-1972] and Philipp [@Ph-1976] on the asymptotic behavior of the largest digit of RCF-expansions. To obtain these results we need the mixing property of $\theta$-continued fractions, together with a condition on the speed of convergence of the mixing coefficients.
In Section 5 we give some iterated logarithm results (Theorem [Theorem 12](#th.5.2){reference-type="ref" reference="th.5.2"} and Corollary [Corollary 14](#cor.5.4){reference-type="ref" reference="cor.5.4"}) for the largest digit of $\theta$-expansions. # Preliminaries Let us fix $\theta^2 = 1/m$, $m \in \mathbb{N}_+$. Putting $\mathbb{N}_m := \{m, m+1, \ldots\}$, $m \in \mathbb{N}_+$, the partial quotients $a_n$, $n \in \mathbb{N}_+$, take positive integer values in $\mathbb{N}_m$. We now introduce a partition of the interval $[0, \theta]$ which is natural to the $\theta$-expansions. Such a partition is generated by the cylinders of rank $n$. For any $n \in \mathbb{N}_+$ and $i^{(n)}=(i_1, \ldots, i_n) \in \mathbb{N}_m^n$, define the *$n$-th cylinder* of $\theta$-expansion by $$C \left(i^{(n)}\right) = \{x \in \Omega: a_k(x) = i_k \mbox{ for } k=1, \ldots, n \},$$ where $C \left(i^{(0)}\right) = [0, \theta]$. For any $i \in \mathbb{N}_m$, we have $$\label{2.01} C\left(i\right) = \left\{x \in \Omega: a_1(x) = i \right\} = \left( \frac{1}{(i+1)\theta}, \frac{1}{i \theta} \right).$$ If $n \in \mathbb{N}_+$ and $i_n \in \mathbb{N}_m$, we will write $$C(a_1, \ldots, a_n) = C \left(i^{(n)}\right).$$ Next we recall some known results for later use. From the definition of $T_{\theta}$ and ([\[1.7\]](#1.7){reference-type="ref" reference="1.7"}) we have for any $n \in \mathbb{N}_+$ and $(a_1, \ldots, a_n) \in \mathbb{N}_m^n$, $$C(a_1, \ldots, a_n) = \left\{ \begin{array}{lll} \left[ \displaystyle \frac{p_n}{q_n}, \frac{p_n+ \theta p_{n-1}}{q_n+ \theta q_{n-1}} \right) & \quad \mbox{if $n$ is even}, \\ \\ \left(\displaystyle \frac{p_n+ \theta p_{n-1}}{q_n+ \theta q_{n-1}}, \frac{p_n}{q_n} \right] & \quad \mbox{if $n$ is odd}. \\ \end{array} \right. 
\label{2.1}$$ Using ([\[1.6\]](#1.6){reference-type="ref" reference="1.6"}) we get $$\begin{aligned} \lambda_{\theta}\left(C\left(a_1, \ldots, a_n\right)\right) = \frac{1}{\theta} \left| \frac{p_n}{q_n} - \frac{p_n+ \theta p_{n-1}}{q_n+\theta q_{n-1}} \right| = \frac{1}{q_n (q_n + \theta q_{n-1})} = \frac{1}{q^2_n (1 + \theta s_{n})}, \label{2.2}\end{aligned}$$ where $s_n = \frac{q_{n-1}}{q_n}$, $n \in \mathbb{N}_+$ and $s_0=0$. Since $s_n \in [0, \theta]$, it follows from ([\[2.2\]](#2.2){reference-type="ref" reference="2.2"}) that $$\frac{1}{2q^2_n} \leq \frac{1}{(1+\theta^2) q^2_n} \leq \lambda_{\theta}\left(C\left(a_1, \ldots, a_n\right)\right) \leq \frac{1}{q^2_n}. \label{2.3}$$ It is of interest to calculate the approximate proportion of the $n$-th level cylinder set $C\left(a_1, \ldots, a_n\right)$ that is occupied by each of the $(n+1)$-th level cylinder sets $C\left(a_1, \ldots, a_n, k\right)$. Notice that the endpoints of the interval $C\left(a_1, \ldots, a_n, k\right)$ are $\frac{p_{n+1}}{q_{n+1}}$ and $\frac{p_{n+1}+ \theta p_{n}}{q_{n+1}+\theta q_{n}}$ with $p_{n+1} = k \theta p_n + p_{n-1}$ and $q_{n+1} = k \theta q_n + q_{n-1}$. 
So we obtain $$\frac{p_{n+1}}{q_{n+1}} = \frac{k \theta p_n + p_{n-1}}{k \theta q_n + q_{n-1}}, \quad \frac{p_{n+1}+\theta p_n}{q_{n+1}+\theta q_n} = \frac{(k+1) \theta p_n + p_{n-1}}{(k+1) \theta q_n + q_{n-1}}.$$ Direct computation yields $$\lambda_{\theta}\left(C\left(a_1, \ldots, a_n, k\right)\right) = \frac{1}{(k\theta q_n + q_{n-1})((k+1)\theta q_n + q_{n-1})} = \frac{1}{k^2q^2_n\left(\theta + \displaystyle \frac{s_n}{k}\right)\left( \left( 1+\displaystyle\frac{1}{k} \right) \theta +\displaystyle \frac{s_n}{k} \right) }.$$ By ([\[2.2\]](#2.2){reference-type="ref" reference="2.2"}) it follows that $$\begin{aligned} \frac{\lambda_{\theta}\left(C\left(a_1, \ldots, a_n, k\right)\right)}{\lambda_{\theta}\left(C\left(a_1, \ldots, a_n\right)\right)} &=& \frac{q^2_n(1+\theta s_n)}{k^2q^2_n\left(\theta + \displaystyle \frac{s_n}{k}\right)\left( \left( 1+\displaystyle\frac{1}{k} \right) \theta +\displaystyle \frac{s_n}{k} \right)} \\ &=& \frac{1+\theta s_n}{k^2 \left(\theta + \displaystyle \frac{s_n}{k}\right)\left( \left( 1+\displaystyle\frac{1}{k} \right) \theta +\displaystyle \frac{s_n}{k} \right)}. \end{aligned}$$ Since $$\theta^2 < \left(\theta + \displaystyle \frac{s_n}{k}\right)\left( \left( 1+\displaystyle\frac{1}{k} \right) \theta +\displaystyle \frac{s_n}{k} \right) < \theta^2 \left( 1+\displaystyle\frac{1}{k} \right) \left( 1+\displaystyle\frac{2}{k} \right) < 6 \theta^2 < 6,$$ for $k \geq m$, we find that $$\frac{1}{6k^2} < \frac{\lambda_{\theta}\left(C\left(a_1, \ldots, a_n, k\right)\right)}{\lambda_{\theta}\left(C\left(a_1, \ldots, a_n\right)\right)} < \frac{1+\theta^2}{k^2\theta^2} = \frac{m+1}{k^2}. \label{2.4}$$ Next, we give some lemmas for later use. **Lemma 1**.
*Let $k \geq m$. Then $$\frac{1}{6k^2} < \lambda_{\theta}\left( \bigcup_{a_1,\ldots,a_n \geq m} C\left(a_1, \ldots, a_n, k\right)\right) < \frac{m+1}{k^2}.$$* *Proof.* Using ([\[2.4\]](#2.4){reference-type="ref" reference="2.4"}), since $\displaystyle\sum_{a_1,\ldots,a_n \geq m} \lambda_{\theta}\left( C\left(a_1, \ldots, a_n \right)\right)=1$, the proof is complete. ◻ We recall the following well-known and extremely useful result. **Lemma 2** (Borel-Cantelli). *Let $(X,{\cal X}, \mu)$ be a measure space. Let $\{C_n\}_{n\geq 1}$ be a sequence of ${\cal X}$-measurable sets and define the limsup set $$C_{\infty} = \limsup_{n \to \infty} C_n = \bigcap_{n\geq 1} \bigcup_{m\geq n} C_m = \{x \in X: x \in C_n \mbox{ for infinitely many } n \in \mathbb{N}_+ \}.$$ Then, if $\displaystyle \sum_{n \geq 1} \mu(C_n) < \infty$, we have that $\mu(C_{\infty})=0$.* # A Borel-Bernstein-type theorem Our first main result is the following theorem. **Theorem 3** (Borel-Bernstein-type theorem). *Let $\varphi : \mathbb{N}_+ \to (0, +\infty)$ be a function and $$A_{\varphi} = \{x \in \Omega: a_n(x) > \varphi(n) \, \mbox{ for infinitely many } \, n \in \mathbb{N}_+ \}.$$ Then we have $$\lambda_{\theta} (A_{\varphi}) = \left\{ \begin{array}{lll} 0 & \quad \mbox{if } \displaystyle \sum_{n \geq 1} \frac{1}{\varphi(n)} < \infty, \\ \\ 1 & \quad \mbox{if } \displaystyle \sum_{n \geq 1} \frac{1}{\varphi(n)} = \infty. \\ \end{array} \right.$$* *Proof.* Let $A_{n} = \{x \in \Omega: a_n(x) > \varphi(n)\}$, so that $A_{\varphi} = \displaystyle \limsup_{n \to \infty} A_n = \displaystyle \bigcap_{j \geq 1}\displaystyle \bigcup_{n \geq j} A_n$.
By Lemma [Lemma 1](#lema2.1){reference-type="ref" reference="lema2.1"}, one has $$\lambda_{\theta}(A_n) = \lambda_{\theta}\left( \bigcup_{a_1,\ldots,a_{n-1} \geq m} \, \bigcup_{k>\varphi(n)} C\left(a_1, \ldots, a_{n-1}, k\right)\right) < \sum_{k \geq \left\lfloor \varphi(n) \right\rfloor+1} \frac{m+1}{k^2} < \frac{m+1}{\left\lfloor \varphi(n) \right\rfloor} < \frac{2(m+1)}{\varphi(n)}.$$ Thus the convergence part of the Borel-Cantelli lemma enables us to conclude that $\lambda_{\theta} (A_{\varphi}) = 0$ when $\displaystyle \sum_{n \geq 1} \frac{1}{\varphi(n)} < \infty$. Suppose now $\displaystyle \sum_{n \geq 1} \frac{1}{\varphi(n)} = \infty$. Notice that $$\lambda_{\theta} (A_{\varphi}) = \lambda_{\theta} \left( \bigcap_{j\geq 1} \, \bigcup_{n \geq j} A_n\right) =1 \iff \lambda_{\theta} (A^c_{\varphi}) = \lambda_{\theta} \left( \bigcup_{j\geq 1} \, \bigcap_{n \geq j} A^c_n\right) = 0.$$ Thus we only need to show $\lambda_{\theta} \left(\displaystyle \bigcap_{n \geq j} A^c_n \right)=0$, where $$A^c_n = \{x \in \Omega: a_n(x) \leq \varphi (n) \}.$$ Let $$B_{j,\ell} = \bigcap_{j<n \leq j+\ell} A^c_n.$$ Then $$\lambda_{\theta}\left(\bigcap_{n \geq j+1} A^c_n \right) = \lim_{\ell \to \infty} \lambda_{\theta} (B_{j,\ell}).$$ By the definition of $B_{j, \ell}$, we have $$B_{j,1} = \bigcup_{a_1,\ldots,a_{j} \geq m} \, \bigcup_{m \leq k \leq \varphi(j+1)} C\left(a_1, \ldots, a_{j}, k\right) = \bigcup_{a_1,\ldots,a_{j} \geq m} \left( C(a_1, \ldots, a_j) \setminus \bigcup_{k > \varphi(j+1)} C\left(a_1, \ldots, a_{j}, k\right) \right).$$ By ([\[2.4\]](#2.4){reference-type="ref" reference="2.4"}), we obtain that $$\sum_{k \geq \left\lfloor \varphi(j+1)\right\rfloor+1} \frac{\lambda_{\theta}\left(C\left(a_1, \ldots, a_j, k\right)\right)}{\lambda_{\theta}\left(C\left(a_1, \ldots, a_j\right)\right)} > \sum_{k \geq \left\lfloor \varphi(j+1)\right\rfloor+1} \frac{1}{6k^2} > \sum_{k \geq \left\lfloor \varphi(j+1)\right\rfloor+1} \frac{1}{6k(k+1)} > \frac{1}{6(\varphi(j+1)+1)}.$$
Hence $$\lambda_{\theta} (B_{j,1}) \leq \sum_{a_1,\ldots,a_j \geq m} \lambda_{\theta}\left( C\left(a_1, \ldots, a_j \right)\right) \cdot \left( 1 - \frac{1}{6(\varphi(j+1)+1)}\right)= 1 - \frac{1}{6(\varphi(j+1)+1)}.$$ Since $$\begin{aligned} B_{j, \ell+1} &=& \{x \in \Omega: a_{j+1} \leq \varphi (j+1), \ldots, a_{j+\ell+1} \leq \varphi (j+\ell+1)\} \\ &=& \{x \in B_{j, \ell}: a_{j+\ell+1} \leq \varphi (j+\ell+1)\}, \end{aligned}$$ $$\lambda_{\theta} (B_{j,\ell+1}) \leq \left( 1 - \frac{1}{6(\varphi(j+\ell+1)+1)}\right) \lambda_{\theta} (B_{j,\ell}).$$ By induction, $$\lambda_{\theta} (B_{j,\ell}) \leq \prod^{\ell}_{i=1} \left( 1 - \frac{1}{6(\varphi(j+i)+1)}\right) \leq \prod^{\ell}_{i=1} \left( 1 - \frac{1}{12\varphi(j+i)}\right) \leq \exp\left(-\sum_{i=1}^{\ell} \frac{1}{12\varphi(j+i)}\right).$$ Here we use the fact that $1-x \leq \exp(-x)$ if $x \geq 0$. One has $\displaystyle \lim_{\ell \to \infty} \lambda_{\theta} (B_{j,\ell})=0$ by the fact that $\displaystyle \sum_{n \geq 1} \frac{1}{\varphi(n)} = \infty$. Therefore, $\lambda_{\theta} \left(\displaystyle A^c_{\varphi} \right)=0$, which completes the proof. ◻ **Corollary 4**. *For $\lambda_{\theta}$-a.e. $x \in [0, \theta]$, we have that $$a_n(x) > n \log n \, \mbox{ for infinitely many }\, n \in \mathbb{N}_+,$$ whereas for every $\varepsilon >0$, we have that $$a_n(x) < n (\log n)^{1+\varepsilon} \, \mbox{ for all sufficiently large }\, n \in \mathbb{N}_+.$$* *Proof.* This follows immediately from Theorem [Theorem 3](#th.B-B){reference-type="ref" reference="th.B-B"}, on observing that $$\sum_{n \geq 1} \frac{1}{n \log n} = \infty \, \mbox{ and } \sum_{n \geq 1} \frac{1}{n (\log n)^{1+\varepsilon}} < \infty$$ for all $\varepsilon > 0$. ◻ # The asymptotic behavior of the largest digit in $\theta$-expansions In the sequel we shall use the fundamental facts of the metric theory of $\theta$-expansions.
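A first such fact is the common distribution of a single digit under $\gamma_{\theta}$ (computed in Lemma 6 below); it can be sanity-checked by simulation, sampling from $\gamma_{\theta}$ by inverting its distribution function $y \mapsto \log(1+\theta y)/\log(1+\theta^2)$ and counting how often $a_1 \geq w$. A minimal Python sketch (the function names and sample size are ours):

```python
import math
import random

def sample_gamma_theta(theta, rng):
    """Inverse-CDF sample from d(gamma_theta) = theta dx / ((1 + theta*x) log(1 + theta^2))."""
    u = rng.random()
    # invert u = log(1 + theta*x) / log(1 + theta^2)
    return ((1.0 + theta * theta) ** u - 1.0) / theta

def empirical_digit_tail(theta, w, n=200_000, seed=1):
    """Monte Carlo estimate of gamma_theta(a_1 >= w), with a_1 = floor(1/(x*theta))."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x = sample_gamma_theta(theta, rng)
        if x > 0 and math.floor(1.0 / (x * theta)) >= w:
            hits += 1
    return hits / n
```

For $\theta^2 = 1/2$ and $w = 3$ the empirical frequency comes out within Monte Carlo error of the closed form $\log(1 + 1/3)/\log(3/2) \approx 0.7095$.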
One of these facts is that the stochastic process arising from $\theta$-expansion digits has the $\psi$-mixing property. **Definition 5** ($\psi$-mixing). *Let $(X, \cal{X}, \mu)$ denote a probability space and let $\xi_j :X \to \mathbb{R}$ denote a stationary sequence of random variables. For any $k \in \mathbb{N}_+$ let $\mathcal{B}^{k}_1 = \sigma(\xi_1, \ldots, \xi_{k})$ and $\mathcal{B}_{k}^{\infty} = \sigma(\xi_{k}, \xi_{k+1}, \ldots)$ denote the $\sigma$-algebras generated by the random variables $\xi_1, \ldots, \xi_{k}$, respectively $\xi_{k}, \xi_{k+1}, \ldots$. Then $\{\xi_j\}$ is said to be $\psi$-mixing if for any sets $A \in \mathcal{B}^{k}_1$ and $B \in \mathcal{B}_{k+n}^{\infty}$ we have $$\left| \mu(A \cap B) - \mu(A) \mu(B)\right| \leq \psi(n) \mu(A) \mu(B),$$ where $\psi : \mathbb{N}_+ \to \mathbb{R}$ is a function for which $\psi (n) \to 0$ as $n \to \infty$.* The random variables $a_n(x)$, $n \in \mathbb{N}_+$, form a stationary sequence due to the invariance of the measure $\gamma_{\theta}$ with respect to $T_{\theta}$. **Lemma 6**. *For all $n \in \mathbb{N}_+$ and $w \in \mathbb{N}_m$, $$\gamma_{\theta}(a_n(x) \geq w) = \frac{1}{\log\left(1+\theta^2\right)} \log\left( 1+\frac{1}{w} \right) =: p_{\theta}(w).$$* *Proof.* Using ([\[2.01\]](#2.01){reference-type="ref" reference="2.01"}) and the fact that $(a_n)_{n \in \mathbb{N}_+}$ is a strictly stationary sequence as the transformation $T_{\theta}$ is measure-preserving with respect to $\gamma_{\theta}$, we have $$\begin{aligned} \gamma_{\theta}(a_n(x) \geq w) &=&\gamma_{\theta}(a_1(x) \geq w) = \displaystyle \frac{1}{\log\left(1+\theta^2\right)} \displaystyle \sum_{k \geq w} \displaystyle \int^{\frac{1}{k \theta}}_{\frac{1}{(k+1)\theta}} \frac{\theta \mathrm{d}x}{1+\theta x} \\ &=& \frac{1}{\log\left(1+\theta^2\right)} \sum_{k \geq w} \left( \log\left( 1+\frac{1}{k}\right) - \log\left( 1+\frac{1}{k+1}\right) \right) \\ &=& \frac{1}{\log\left(1+\theta^2\right)} \log\left( 1+\frac{1}{w} \right). 
\end{aligned}$$ ◻ **Lemma 7**. *Let $\{f_{\theta,n}(x)\}_{n \geq 1}$ be a sequence of functions $f_{\theta,n} \in C^2[0, \theta]$ defined recursively by $$f_{\theta,n+1}(x) = \sum_{i \geq m} \left(f_{\theta,n}\left(\frac{1}{ \theta i}\right) - f_{\theta,n}\left(\frac{1}{x + \theta i}\right)\right), \quad n \in \mathbb{N}$$ with $f_{\theta, 0}(0) = 0$ and $f_{\theta, 0}(\theta) = 1$.* *Set $$g_{\theta, n}(x) = (\theta x+1) f'_{\theta, n}(x), \, \, x \in [0, \theta]. \label{4.001}$$ Then $$\label{4.01} \left\| g'_{\theta, n} \right\| \leq q^n_{\theta} \cdot \left\| g'_{\theta, 0} \right\|, \, n \in \mathbb{N}_+$$ with $$\label{4.02} q_{\theta} := m \left( \sum_{i \geq m} \left(\frac{m}{i^3(i+1)}+ \frac{i+1-m}{i(i+1)^3}\right) \right) < 1.$$ Here $\left\| \cdot \right\|$ stands for the supremum norm.* *Proof.* Since $$g_{\theta,n+1}(x) = \sum_{i \geq m} P_{\theta,i}(x)g_{\theta,n}\left(u_{\theta,i}(x)\right),$$ where $$P_{\theta,i}(x) := \frac{\theta x + 1}{(x + \theta i)(x + \theta (i+1))} = \frac{1}{\theta}\left[\frac{1-\theta^2 i}{x+\theta i} - \frac{1-\theta^2 (i+1)}{x+\theta (i+1)} \right]$$ and $$u_{\theta,i}(x) := \frac{1}{x+ \theta i}$$ we have $$g'_{\theta,n+1}(x) = \sum_{i \geq m} \frac{1-\theta^2 (i+1)}{(x+\theta i)(x+\theta (i+1))^3 } f'_{\theta,n}(\alpha_{\theta,i}) - \sum_{i \geq m} \frac{P_{\theta,i}(x)}{(x+\theta i)^2 } f'_{\theta,n}(u_{\theta,i}(x)),$$ where $u_{\theta,i+1}(x) < \alpha_{\theta,i} < u_{\theta,i}(x)$. 
Then $$\left\| g'_{\theta, n+1} \right\| \leq \left\| g'_{\theta, n} \right\| \cdot \max_{x \in [0, \theta]} \left( \sum_{i \geq m} \frac{\theta^2 (i+1) - 1}{(x+ \theta i)(x + \theta (i+1))^3 } + \sum_{i \geq m} \frac{P_{\theta,i}(x)}{(x+\theta i)^2 } \right).$$ Using that $$\frac{\theta^2 (i+1)- 1}{(x+ \theta i)(x + \theta (i+1))^3 } \leq m^2 \frac{\theta^2 (i+1) - 1}{i(i+1)^3}$$ and $$\sum_{i \geq m} \frac{P_{\theta,i}(x)}{(x+\theta i)^2 } \leq m^2 \sum_{i \geq m} \frac{1}{i^3(i+1)},$$ we get ([\[4.01\]](#4.01){reference-type="ref" reference="4.01"}) and ([\[4.02\]](#4.02){reference-type="ref" reference="4.02"}). ◻ The random variables $a_n(x)$, $n \in \mathbb{N}_+$, are not independent. However, they satisfy a $\psi$-mixing condition. In fact, the sequence $\{a_n\}_{n \in \mathbb{N}_+}$ is $\psi$-mixing under $\gamma_{\theta}$ and the function $\psi$ vanishes at an exponential rate. **Lemma 8**. *For any sets $A \in \mathcal{B}^{k}_1= \sigma(a_1, \ldots, a_{k})$ and $B \in \mathcal{B}_{k+n}^{\infty}= \sigma(a_{k+n}, a_{k+n+1}, \ldots)$ we have $$\label{4.1} \left| \gamma_{\theta}(A \cap B) - \gamma_{\theta}(A) \gamma_{\theta}(B)\right| \leq K_{\theta}q^n_{\theta} \gamma_{\theta}(A) \gamma_{\theta}(B),$$ where $0 < q_{\theta} < 1$ and $K_{\theta}$ is a positive constant.* *Proof.* Let $C_k$ be the $k$-th cylinder with endpoints $\displaystyle\frac{p_k}{q_k}$ and $\displaystyle\frac{p_k+\theta p_{k-1}}{q_k+\theta q_{k-1}}$. Let $$f_{\theta, n} (x) = \gamma_{\theta} \left(T^{n+k}_{\theta} (\omega) < x \, | \, C_k\right) := \frac{\gamma_{\theta} \left(\left( T^{n+k}_{\theta} (\omega) < x \right) \cap C_k\right)}{\gamma_{\theta}(C_k)}$$ be the conditional distribution function of $T^{n+k}_{\theta} (\omega)$ given $C_k$. Obviously $\left(\left( T^{k}_{\theta} (\omega) < x \right) \cap C_k\right)$ is an interval with endpoints $\displaystyle\frac{p_k}{q_k}$ and $\displaystyle\frac{p_k+x \theta p_{k-1}}{q_k+x \theta q_{k-1}}$.
Thus we obtain $$f_{\theta, 0} (x) = \frac{1}{\gamma_{\theta}(C_k)} \frac{(-1)^k}{\log\left( 1+\theta^2 \right) } \left( \log\left( 1+ \theta \frac{p_k+x \theta p_{k-1}}{q_k+x \theta q_{k-1}} \right) - \log\left( 1+ \theta \frac{p_k}{q_k} \right) \right).$$ If $g_{\theta, n}$ is defined as in Lemma [Lemma 7](#lema4.2a){reference-type="ref" reference="lema4.2a"} let us put $$K_{\theta}: = \sup_{x \in [0, \theta]} \left| g'_{\theta, 0}(x) \right| = \left\| g'_{\theta, 0} \right\|.$$ Hence by ([\[4.001\]](#4.001){reference-type="ref" reference="4.001"}) and ([\[4.01\]](#4.01){reference-type="ref" reference="4.01"}) $$\label{4.005} \left| f'_{\theta, n}(x) - \frac{\theta}{(\theta x+1)\log\left( 1+ \theta^2\right) } \right| \leq \frac{\left| g_{\theta, n}(x) - g_{\theta, n}(0) \right| }{\theta x+1} + \frac{\left| g_{\theta, n}(0) - \frac{\theta}{\log\left(1+\theta^2\right)} \right| }{\theta x+1}$$ and $$\left| g_{\theta, n}(x) - g_{\theta, n}(0) \right| = \left| \int_{0}^{x} g'_{\theta, n}(t) \mathrm{d}t \right| \leq \left\| g'_{\theta, n} \right\| \cdot x \leq K_{\theta} q^n_{\theta} x.$$ Also, for some $0 < v_{\theta} < 1$ $$\begin{aligned} 1 &=& f_{\theta, n}(\theta) = \int_{0}^{\theta} f'_{\theta, n}(t) \mathrm{d}t = \int_{0}^{\theta} \frac{g_{\theta,n}(t)}{\theta t+1}\mathrm{d}t \\ &=& g_{\theta,n}(0) \frac{\log \left( 1+ \theta^2\right)}{\theta} + \int_{0}^{\theta} \frac{g_{\theta,n}(t) - g_{\theta,n}(0) }{\theta t+1}\mathrm{d}t \\ &=& g_{\theta,n}(0) \frac{\log \left( 1+ \theta^2\right)}{\theta} + v_{\theta} K_{\theta} q^n_{\theta} \left( 1- \frac{\log \left( 1+ \theta^2\right)}{\theta^2} \right)\end{aligned}$$ and so $$g_{\theta,n}(0) = \frac{\theta}{\log\left(1+\theta^2\right)} + v_{\theta} \frac{ K_{\theta}q^n_{\theta}}{\theta} \left( 1 - \frac{\theta^2}{\log\left(1+\theta^2\right)}\right).$$ Thus, from ([\[4.005\]](#4.005){reference-type="ref" reference="4.005"}) $$\begin{aligned} \label{4.006} \left| f'_{\theta, n}(x) - \frac{\theta}{(\theta x+1)\log\left( 
1+ \theta^2\right) } \right| &\leq& \frac{ K_{\theta} q^n_{\theta} }{\theta x+1} + \frac{ K_{\theta}q^n_{\theta}}{\theta} \left( \frac{\theta^2}{\log\left(1+\theta^2\right)} - 1\right) \frac{1}{\theta x+1} \\ &<& \frac{ K_{\theta} q^n_{\theta} }{\theta x+1} \frac{\theta}{\log\left(1+\theta^2\right)}. \end{aligned}$$ Integrating ([\[4.006\]](#4.006){reference-type="ref" reference="4.006"}) over an arbitrary $F \in \mathcal{B}_{[0, \theta]}$ we obtain $$\left| \gamma_{\theta} \left( T^{-(n+k)}_{\theta}(F) \, \left| \, C_k \, \right. \right) - \gamma_{\theta}(F) \right| \leq K_{\theta}q^n_{\theta} \gamma_{\theta}(F).$$ Since each $A \in \mathcal{B}^{k}_1$ is a countable union of disjoint $C_k$ we obtain ([\[4.1\]](#4.1){reference-type="ref" reference="4.1"}) and thus the proof is complete. ◻ Define $$L_N := \max_{1 \leq n \leq N} a_n(x), \quad x \in \Omega.$$ In the sequel we discuss the asymptotic behavior of the largest digit $L_N$. **Theorem 9**. *For any $y > 0$, we have $$\label{4.2} \lim_{N \to \infty} \gamma_{\theta}\left( x \in \Omega: L_N(x) < \frac{N y}{\log\left(1+\theta^2 \right) } \right) = \exp\left( -\frac{1}{y}\right) .$$* *Proof.* *$1$st step.* Let $$A_n = \{x \in \Omega: a_n(x)\geq w \},$$ so that $$\bigcap^{N}_{n=1} A^c_{n}= \{x \in \Omega: L_N(x) < w \} =: B_N.$$ Since $B_N$ is the event that none of the $A_n$ occurs, the Poincaré inclusion-exclusion identity gives $$\label{4.3} \gamma_{\theta} (B_N) = \sum_{k=0}^{N} (-1)^k S_k$$ with $$S_0 = 1, \, S_k = \sum_{1\leq n_1 < n_2 < \ldots < n_k \leq N} \gamma_{\theta} \left( A_{n_1} \cap \ldots \cap A_{n_k}\right).$$ Thus, equation ([\[4.3\]](#4.3){reference-type="ref" reference="4.3"}) provides an expression for the distribution function of $L_N$. By selecting $w = \left\lfloor \frac{N y}{\log(1+\theta^2)} \right\rfloor$, we demonstrate that the tail $\displaystyle\sum_{k \geq Z} S_k$, where $Z$ is a sufficiently large but fixed value, can be made arbitrarily small.
By repeatedly applying equation ([\[4.1\]](#4.1){reference-type="ref" reference="4.1"}) and referring to Lemma [Lemma 6](#lema4.2){reference-type="ref" reference="lema4.2"}, we obtain that $$\label{4.4} \gamma_{\theta} \left( A_{n_1} \cap \ldots \cap A_{n_k}\right) \leq (1+K_{\theta})^{k-1} \gamma_{\theta}(A_{n_1})\gamma_{\theta}(A_{n_2})\cdot \ldots \cdot \gamma_{\theta}(A_{n_k}) < (1+K_{\theta})^{k} p^k_{\theta}(w).$$ For sufficiently large values of $N$, we obtain $$\label{4.5} w = \left\lfloor \displaystyle \frac{N y}{\log\left(1+\theta^2 \right) } \right\rfloor \geq \frac{1}{2} \displaystyle \frac{N y}{\log\left(1+\theta^2 \right) },$$ whence $$p_{\theta}(w) \leq \frac{1}{w \log \left(1+\theta^2 \right)} \leq \frac{2}{Ny}.$$ Therefore $$\begin{aligned} \label{4.6} \sum_{k \geq Z} S_k &<& \sum_{k \geq Z} \frac{N!}{(N-k)!k!} (1+K_{\theta})^k p^k_{\theta} (w) \leq \sum_{k \geq Z} \frac{N!}{(N-k)!k!} N^{-k} \left(\frac{2(1+K_{\theta})}{y} \right)^k \nonumber \\ &\leq& \sum_{k \geq Z} \frac{1}{k!} \left(\frac{2(1+K_{\theta})}{y} \right)^k < \frac{1}{Z!} \left( \frac{4K_{\theta}}{y} \right)^Z \exp\left( \frac{4K_{\theta}}{y} \right) \leq \varepsilon \end{aligned}$$ provided $Z$ is sufficiently large. *$2$nd step.* For $k < Z$, let us split $S_k$ into two terms: $$\label{4.7} S_k= S^*_k + R_k.$$ Here, $S^*_k$ represents the sum over all $n_1 < n_2 < \ldots < n_k$ with $n_{i+1}-n_i \geq t$ ($i \geq 1$), where $t$ is a positive integer determined as follows. Let $\eta > 0$ be an arbitrary real number, and let $t$ be the smallest integer $n$ such that $K_{\theta}q^n_{\theta} < \eta$. Next, we proceed to estimate $S_k^*$.
Using repeated applications of ([\[4.1\]](#4.1){reference-type="ref" reference="4.1"}) and another reference to Lemma [Lemma 6](#lema4.2){reference-type="ref" reference="lema4.2"}, we find that for any term belonging to $S^*_k$, $$\gamma_{\theta} \left( A_{n_1} \cap \ldots \cap A_{n_k}\right) = p^k_{\theta}(w) \left( 1+\mathcal{O}_k(\eta)\right), \, n_{i}+t \leq n_{i+1}.$$ In the estimation of $S^*_k$, the constant involved in $\mathcal{O}_k(\eta)$ depends exclusively on $k$. Hence $$\label{4.8} S^*_k = \frac{(N-(t-1)(k-1))!}{(N-(t-1)(k-1)-k)!k!} p^k_{\theta}(w) \left( 1+\mathcal{O}_k(\eta)\right).$$ In order to estimate $R_k$ in ([\[4.7\]](#4.7){reference-type="ref" reference="4.7"}), it is important to observe that the overall estimation ([\[4.4\]](#4.4){reference-type="ref" reference="4.4"}) is applicable to each of its individual terms and that its number of terms is $$\frac{N!}{(N-k)!k!} - \frac{(N-(t-1)(k-1))!}{(N-(t-1)(k-1)-k)!k!} = {o}\left( N^k \right).$$ We thus have $$\label{4.9} R_k = {o}\left( N^k p^k_{\theta}(w) \right).$$ Considering that $$p_{\theta}(w) = p_{\theta}\left(\left\lfloor \displaystyle \frac{N y}{\log\left(1+\theta^2 \right) } \right\rfloor \right) = \left( 1+\mathcal{O}(N^{-1}) \right) \frac{1}{Ny}$$ by ([\[4.7\]](#4.7){reference-type="ref" reference="4.7"}), ([\[4.8\]](#4.8){reference-type="ref" reference="4.8"}) and ([\[4.9\]](#4.9){reference-type="ref" reference="4.9"}) we can deduce that $$\label{4.10} S_k = \left( 1+\mathcal{O}_k(\eta)\right) \frac{y^{-k}}{k!} + o_N (1),$$ where $k$ is fixed and $o_N (1) \to 0$ as $N \to \infty$.
*$3$rd step.* Finally, by ([\[4.3\]](#4.3){reference-type="ref" reference="4.3"}), ([\[4.6\]](#4.6){reference-type="ref" reference="4.6"}) and ([\[4.10\]](#4.10){reference-type="ref" reference="4.10"}), we establish that for any given positive integer $Z$ $$\gamma_{\theta} \left( L_N < \left\lfloor \displaystyle \frac{N y}{\log\left(1+\theta^2 \right) } \right\rfloor \right) = \sum_{k=0}^{Z-1} (-1)^k \left( 1+\mathcal{O}_k(\eta)\right) \frac{y^{-k}}{k!} + o_N(1) + o_Z(1),$$ where the last term approaches $0$ as $Z \to \infty$. Letting $N \to \infty$, and then $\eta \to 0$, we deduce that for any positive integer $Z$ $$\lim_{N \to \infty} \gamma_{\theta} \left( L_N < \left\lfloor \displaystyle \frac{N y}{\log\left(1+\theta^2 \right) } \right\rfloor \right) = \sum_{k=0}^{Z-1} (-1)^k \frac{y^{-k}}{k!} + o_Z(1).$$ Given that the left-hand side remains independent of $Z$, letting $Z \to \infty$, we obtain the limit relation ([\[4.2\]](#4.2){reference-type="ref" reference="4.2"}) while taking into account that the argument $w$ in $\{L_N < w\}$ is considered to be an integer in $\mathbb{N}_m$. Since $$\gamma_{\theta} \left( L_N < \left\lfloor \displaystyle \frac{N y}{\log\left(1+\theta^2 \right) } \right\rfloor \right) \leq \gamma_{\theta} \left( L_N < \displaystyle \frac{N y}{\log\left(1+\theta^2 \right) } \right) \leq \gamma_{\theta} \left( L_N < \left\lfloor \displaystyle \frac{N y}{\log\left(1+\theta^2 \right) } \right\rfloor + 1\right)$$ the proof is complete. ◻ **Theorem 10**. *For any $0<\delta <1$ and $y>0$, we have $$\label{4.11} \gamma_{\theta}\left( x \in \Omega: L_N(x) < \frac{N y}{\log\left(1+\theta^2 \right) } \right) = \exp\left( -\frac{1}{y}\right) +\mathcal{O}\left( \exp\left( -(\log N)^{\delta}\right) \right)$$ where the constant involved in $\mathcal{O}$ depends exclusively on $\delta$.* *Proof.* We follow the proof of Theorem [Theorem 9](#Th.4.4){reference-type="ref" reference="Th.4.4"} with a particular choice of $Z$ and $t$.
We choose $Z = \displaystyle \left \lfloor \frac{\log N}{\log \log N}\right \rfloor$. For a specific $0<\delta <1$, we choose $\delta < \delta'<1$, $\varepsilon >0$ and $\zeta > 0$ so that $1 - \delta'> \varepsilon + \zeta$. We assume that $y \geq (\log N)^{-\delta}$. Applying Stirling's formula, we derive $$\frac{1}{Z!} \sim \frac{1}{\sqrt{2\pi} \, Z^{Z+1/2} \exp(-Z)} \asymp \frac{\exp\left(\frac{\log N}{\log \log N} \right)}{\left( \frac{\log N}{\log \log N} \right)^{\frac{\log N}{\log \log N} +\frac{1}{2}} } .$$ For $N$ sufficiently large, $$e \leq \left( \frac{\log N}{\log \log N} \right)^{\zeta}.$$ Thus, we obtain $$\exp\left(\frac{\log N}{\log \log N} \right) \leq \left( \left( \frac{\log N}{\log \log N} \right)^{\zeta} \right)^{\frac{\log N}{\log \log N}} < N^{\zeta}.$$ Furthermore, for $N$ sufficiently large $$\left( \frac{\log N}{\log \log N} \right)^{\displaystyle\frac{\log N}{\log \log N} } > N^{1-\varepsilon}.$$ As a result, we obtain $$\label{4.12} \frac{1}{Z!} \ll \frac{1}{N^{1-\varepsilon-\zeta}}.$$ Alternatively, it is obvious that $$\left(\frac{4K_{\theta}}{y}\right)^Z \exp\left(\frac{4K_{\theta}}{y}\right) \leq \left( 4K_{\theta}\left( \log N \right)^{\delta} \right)^{\displaystyle\frac{\log N}{\log \log N}} \exp \left( 4K_{\theta}\left( \log N \right)^{\delta} \right) < N^{\delta'}$$ when $N$ is sufficiently large. Finally, using ([\[4.6\]](#4.6){reference-type="ref" reference="4.6"}), we obtain $$\label{4.13} \sum_{k \geq Z} S_k \ll N^{-a}$$ for $0 < a < 1-\varepsilon - \zeta - \delta'$. Setting $t = \left \lfloor (\log N)^2\right\rfloor$, we estimate $R_k$ for $k<Z$: $$\begin{aligned} R_k &\leq& \left( \frac{N!}{(N-k)!k!} - \frac{(N-(t-1)(k-1))!}{(N-(t-1)(k-1)-k)!k!} \right) (1+K_{\theta})^k p^k_{\theta}(w) \ll t \, Z\, N^{k-1} (2K_{\theta})^k p^k_{\theta}(w) \\ &\ll& (\log N)^3 \frac{1}{\log \log N} N^{k-1} (2K_{\theta})^k p^k_{\theta}(w) \ll (\log N)^3 N^{k-1} \left( \frac{4K_{\theta}}{Ny}\right)^k .
\end{aligned}$$ In cases where $\displaystyle \frac{4K_{\theta}}{y} > 1$, we proceed to evaluate, for $N$ sufficiently large, $$\label{4.14} R_{k} \leq \displaystyle\frac{1}{N^{1-\varepsilon}}\left(\displaystyle \frac{4K_{\theta}}{y}\right)^Z \leq \frac{1}{N^{1-\varepsilon}} \left( 4K_{\theta} (\log N)^{\delta}\right)^{\displaystyle\frac{\log N}{\log \log N}} < N^{-a}$$ for   $0 < a < 1-\varepsilon - \delta$. When $\displaystyle \frac{4K_{\theta}}{y} < 1$, the estimation becomes relatively straightforward. We can select the value of $a$ to be the same as that in equation ([\[4.13\]](#4.13){reference-type="ref" reference="4.13"}). As a result, the number of terms in $S^*_k$, $k < Z$, is given by $\displaystyle \frac{N!}{(N-k)!k!} + \mathcal{O}\left( N^{k-1}(\log N)^3\right)$. We have $$\begin{aligned} \label{4.15} S^*_k &=& \left( \displaystyle\frac{N!}{(N-k)!k!} + \mathcal{O}\left( N^{k-1}(\log N)^3\right) \right) \left( 1+ \mathcal{O}\left( N^{-1}\right) \right)^k \left(Ny \right)^{-k} \left(1+\beta K_{\theta}q^{(\log N)^2}_{\theta} \right)^k \nonumber \\ &=& \displaystyle \frac{y^{-k}}{k!} + \mathcal{O}\left( N^{-a} \right) \end{aligned}$$ where $|\beta| \leq 1$. Subsequently, using ([\[4.14\]](#4.14){reference-type="ref" reference="4.14"}) and ([\[4.15\]](#4.15){reference-type="ref" reference="4.15"}), we deduce that $S_k = \displaystyle \frac{y^{-k}}{k!} + \mathcal{O}\left( N^{-a} \right)$. In conclusion, using ([\[4.13\]](#4.13){reference-type="ref" reference="4.13"}), we obtain $$\gamma_{\theta} (B_N) = \sum_{k=0}^{Z-1} \left( (-1)^k \frac{y^{-k}}{k!} + \mathcal{O}\left( N^{-a} \right) \right) + \mathcal{O}\left( N^{-a} \right) = \sum_{k=0}^{Z-1} (-1)^k \frac{y^{-k}}{k!} + \mathcal{O}\left( N^{-a'} \right) = \exp\left(-\frac{1}{y} \right) + \mathcal{O}\left( N^{-a'} \right),$$ with $0 < a'< a$. ◻ # Some iterated logarithm results We begin with the following quantitative Borel-Cantelli lemma. **Lemma 11**. 
*[@Ph-1967] [\[lema5.1\]]{#lema5.1 label="lema5.1"} Let $\{E_n\}_{n\geq 1}$ be a sequence of measurable sets in a probability space $(X, \mathcal{X}, \mu)$. Denote by $A(N,x)$ the number of integers $n \leq N$ such that $x \in E_n$, i.e., $A(N,x) = \displaystyle \sum_{n \leq N} \chi_{E_n}(x)$, where $\chi_{E_n}$ is the characteristic function of $E_n$. Define $$\varphi(N) := \sum_{n \leq N} \mu(E_n).$$ Suppose that there exists a convergent series $\displaystyle \sum_{k \geq 1} c_k$ with $c_k \geq 0$ such that for all integers $n > \ell$ we have $$\label{5.1} \mu(E_n \cap E_{\ell}) \leq \mu(E_n)\mu(E_{\ell}) + \mu(E_n)c_{n-\ell}.$$ Then for any $\varepsilon > 0$ $$\label{5.2} A(N,x) = \varphi(N) + \mathcal{O}\left(\varphi^{1/2}(N) \log^{3/2+\varepsilon} \varphi(N)\right) \quad \mu \mbox{-a.s.}$$* **Theorem 12**. *For a.e. $x \in [0, \theta]$ we have $$\liminf_{N \to \infty} \frac{L_N(x) \log\log N}{N} = \frac{1}{\log \left( 1+ \theta^2\right)}.$$* *Proof.* Since for all $A \in \mathcal{B}_{[0, \theta]}$ $$\frac{\lambda_{\theta}(A)}{\left( 1+ \theta^2\right)\log \left( 1+ \theta^2\right)} \leq \gamma_{\theta} (A) \leq \frac{\lambda_{\theta}(A)}{\log \left( 1+ \theta^2\right)}$$ the measures $\gamma_{\theta}$ and $\lambda_{\theta}$ are equivalent. Hence it suffices to prove the statement for all $x$ outside a set of $\gamma_{\theta}$-measure $0$. Consider integers $M$ and $N$ with $M, N \geq 0$. 
Define $$L(M, N, x) := \max_{M < n \leq M+N } a_n(x),$$ $$\varphi(n) := \frac{n}{\log \log n \log\left( 1+ \theta^2\right)}$$ and $$E_k := \left( x \in \Omega: L\left( k^{2k}, k^{2(k+1)}, x\right) \leq \varphi \left(k^{2(k+1)} \right) \right).$$ Due to the $T_{\theta}$-invariance of $\gamma_{\theta}$, we can deduce from Theorem [Theorem 10](#Th.4.5){reference-type="ref" reference="Th.4.5"} that, for any integer $k \geq k_0$, $$\begin{aligned} \label{5.3} \gamma_{\theta} (E_k) &=& \gamma_{\theta} \left( x \in \Omega: L\left( k^{2k}, k^{2(k+1)}, x\right) \leq \varphi \left(k^{2(k+1)} \right) \right) \nonumber \\ &=&\gamma_{\theta} \left( x \in \Omega: L\left( 0, k^{2(k+1)}, x\right)\leq \varphi \left(k^{2(k+1)} \right) \right) \nonumber \\ &\geq& \frac{1}{2} \exp \left( -\log \log k^{2(k+1)}\right) \geq \frac{1}{8} (k \log k)^{-1}. \end{aligned}$$ Obviously $E_k$ depends only on $a_n (x)$ with $k^{2k} < n \leq k^{2(k+1)} + k^{2k}$. Consequently, according to Lemma [Lemma 8](#lema4.3){reference-type="ref" reference="lema4.3"}, we can establish that for any pair of integers $k < \ell$ $$\left| \gamma_{\theta}(E_k \cap E_{\ell}) - \gamma_{\theta}(E_k) \gamma_{\theta}(E_{\ell})\right| \leq K_{\theta}q^{\ell-k}_{\theta} \gamma_{\theta}(E_k) \gamma_{\theta}(E_{\ell}),$$ since $(k+1)^{2(k+1)} - k^{2(k+1)} - k^{2k} \geq 1$. As a result, Lemma [\[lema5.1\]](#lema5.1){reference-type="ref" reference="lema5.1"} implies that $x \in E_k$ for infinitely many $k$ (a.e. $x$), since $\varphi (N) \gg \log \log N$ by ([\[5.3\]](#5.3){reference-type="ref" reference="5.3"}). 
On the other hand, by Lemma [Lemma 6](#lema4.2){reference-type="ref" reference="lema4.2"} $$\begin{aligned} \gamma_{\theta}(F_k) &:=& \gamma_{\theta} \left( x \in \Omega: L\left( 0, k^{2k}, x\right)\geq \varphi \left(k^{2(k+1)} \right) \right) \\ &\leq& \sum_{n \leq k^{2k}} \gamma_{\theta} \left(x \in \Omega: a_n(x) \geq \varphi\left(k^{2(k+1)} \right) \right) = k^{2k} p_{\theta} \left( \varphi \left( k^{2(k+1)}\right) \right) \\ &\leq& k^{2k} \log \log k^{2(k+1)} \cdot k^{-2(k+1)} \leq k^{-3/2}.\end{aligned}$$ Therefore, according to Lemma [\[lema5.1\]](#lema5.1){reference-type="ref" reference="lema5.1"} $x \in F_k$ only for finitely many $k$ (a.e. $x$). Thus $$x \in E_k \setminus F_k = \left( x\in \Omega: L\left( 0, k^{2k}+k^{2(k+1)}, x\right)\leq \varphi \left(k^{2(k+1)} \right) \right)$$ for infinitely many $k$ (a.e. $x$), which implies that $L\left( 0, k^{2(k+1)}, x\right)\leq \varphi \left(k^{2(k+1)} \right)$ holds for infinitely many $k$ (a.e. $x$). Hence, $$\label{5.4} \liminf_{N \to \infty} \frac{L_N(x) \log\log N}{N} \leq \frac{1}{\log \left( 1+ \theta^2\right)} \, \, \mbox{ a.e.}$$ Now, we proceed to prove the converse inequality. Let $b > 1$. Again by Theorem [Theorem 10](#Th.4.5){reference-type="ref" reference="Th.4.5"} $$\begin{aligned} \gamma_{\theta}(G_k) &:=& \gamma_{\theta} \left( x \in \Omega: L\left( 0, \left\lfloor b^k \right \rfloor, x\right)\leq b^{-2} \varphi \left (\left \lfloor b^{k+1} \right \rfloor \right) \right) \\ &\ll& \exp \left( -b \log \log b^{k}\right) \ll k^{-b}.\end{aligned}$$ By Lemma [\[lema5.1\]](#lema5.1){reference-type="ref" reference="lema5.1"}, since $\sum k^{-b} < \infty$, it follows that $x \in G_k$ only for finitely many $k$ (a.e. $x$), which means that $$L\left( 0, \left\lfloor b^k \right \rfloor, x\right) > b^{-2} \varphi \left( \left\lfloor b^{k+1} \right \rfloor \right)$$ holds for all $k \geq k_0 (x,b)$. 
For a given $N$ with $\left\lfloor b^k \right \rfloor \leq N < b^{k+1}$, where $k \geq k_0 (x,b)$, since $L\left( 0, \left\lfloor b^k \right \rfloor, x\right) \leq L_N(x)$ and $\varphi (N) \leq \varphi \left( \left\lfloor b^{k+1} \right \rfloor \right)$, we conclude that $$L_N(x) > b^{-2} \varphi(N) \, \mbox{ a.e. } x.$$ Since this holds for any $b > 1$, we obtain $$\liminf_{N \to \infty} \frac{L_N(x) \log\log N}{N} \geq \frac{1}{\log \left( 1+ \theta^2\right)} \, \, \mbox{ a.e.}$$ Together with ([\[5.4\]](#5.4){reference-type="ref" reference="5.4"}), this completes the proof. ◻ There is no analogue of Theorem [Theorem 12](#th.5.2){reference-type="ref" reference="th.5.2"} with a finite nonzero limit superior. This follows from the following theorem. **Theorem 13**. *Let $\{\varphi(n)\}_{n \geq 1}$ be a positive nondecreasing sequence. Then for a.e. $x \in [0, \theta]$ $$\label{5.5} L_N(x) > \varphi (N)$$ has finitely many or infinitely many solutions in integers $N$ according as the series $$\label{5.6} \sum_{n \geq 1} \frac{1}{\varphi(n)}$$ converges or diverges.* *Proof.* Indeed, if $\sup \varphi(n) < \infty$, then the series ([\[5.6\]](#5.6){reference-type="ref" reference="5.6"}) diverges, and Theorem [Theorem 3](#th.B-B){reference-type="ref" reference="th.B-B"} implies that $a_n(x) > \varphi(n)$ holds for infinitely many $n$ (a.e. $x$). On the other hand, when $\varphi(n) \nearrow \infty$, the behavior of ([\[5.5\]](#5.5){reference-type="ref" reference="5.5"}) is determined by whether the inequality $a_n(x) > \varphi(n)$ holds finitely or infinitely often. This, in turn, leads to the conclusion that, by Theorem [Theorem 3](#th.B-B){reference-type="ref" reference="th.B-B"}, this behavior holds for a.e. $x$ based on whether the series ([\[5.6\]](#5.6){reference-type="ref" reference="5.6"}) converges or diverges. ◻ **Corollary 14**. *Let $\{\varphi(n)\}_{n \geq 1}$ be as in Theorem [Theorem 13](#th.5.3){reference-type="ref" reference="th.5.3"}. 
Then for a.e. $x \in [0, \theta]$ $$\label{5.7} \limsup_{N \to \infty} \frac{L_N(x)}{\varphi(N)}$$ is either $0$ or $\infty$.* *Proof.* We distinguish the cases where the series ([\[5.6\]](#5.6){reference-type="ref" reference="5.6"}) converges or diverges. If the series ([\[5.6\]](#5.6){reference-type="ref" reference="5.6"}) converges, we choose a monotone sequence $\{\alpha_n\}_{n \geq 1}$ tending to $\infty$ but so slowly that still $\displaystyle \sum_{n \geq 1} \frac{\alpha_n}{\varphi(n)} < \infty$. Therefore, in accordance with Theorem [Theorem 13](#th.5.3){reference-type="ref" reference="th.5.3"}, the inequality $L_N(x) > \displaystyle\frac{\varphi(N)}{\alpha_N}$ holds only for finitely many $N$ (a.e. $x$). Hence ([\[5.7\]](#5.7){reference-type="ref" reference="5.7"}) vanishes for a.e. $x$. If the series ([\[5.6\]](#5.6){reference-type="ref" reference="5.6"}) diverges, we consider a monotone sequence $\{\alpha_n\}_{n \geq 1}$ tending to $0$ such that $\displaystyle\sum_{n \geq 1} \frac{\alpha_n}{\varphi(n)} = \infty$. Hence, $L_N(x) > \displaystyle \frac{\varphi(N)}{\alpha_N}$ holds for infinitely many $N$ (a.e. $x$) and thus ([\[5.7\]](#5.7){reference-type="ref" reference="5.7"}) is infinite for a.e. $x$. ◻ Bernstein, F., *Über eine Anwendung der Mengenlehre auf ein aus der Theorie der säkularen Störungen herrührendes Problem*, Math. Ann. **71** (1912) 417--439. Bhattacharya, R., Goswami, A., *A class of random continued fractions with singular equilibria*, Perspectives in Statistical Sciences, eds A. K. Basu et al. (2000), Oxford University Press. Borel, E., *Les probabilités dénombrables et leurs applications arithmétiques*, Rend. Circ. Mat. Palermo **27** (1909) 247--271. Chakraborty, S., Rao, B.V., *$\theta$-expansions and the generalized Gauss map*, In Athreya, K., Majumdar, M., Puri, M., and Waymire, E. 
(eds.), \"Probability, Statistics, and Their Applications: Papers in Honor of Rabi Bhattacharya\", Institute of Mathematical Statistics, Lecture Notes-Monograph Series **41** (2003) 49--64. Chakraborty, P.S., Dasgupta, A., *Invariant measure and a limit theorem for some generalized Gauss maps*, J. Theoret. Probab. **17** (2) (2004) 387--401. Galambos, J., *The distribution of the largest coefficient in continued fraction expansions*, Quart. J. Math. Oxford **23** (1972) 147--151. Galambos, J., *The largest coefficient in continued fractions and related problems*, in Diophantine Approximation and its Applications, ed. by Charles Osgood, Academic Press, 1973. Philipp, W., *Some metrical theorems in number theory*, Pacific J. Math. **20** (1967) 109--127. Philipp, W., *A conjecture of Erdös on continued fractions*, Acta Arith. **28** (1976) 379--386. Sebe, G.I., *A near-optimal solution to the Gauss-Kuzmin-Lévy problem for $\theta$-expansions*, J. Number Theory **171** (2017) 43--55. Sebe, G.I., Lascu, D., *A Gauss-Kuzmin theorem and related questions for $\theta$-expansions*, Journal of Function Spaces, vol. **2014** (2014) 12 pages. Sebe, G.I., Lascu, D., *On convergence rate in the Gauss-Kuzmin problem for $\theta$-expansions*, J. Number Theory **195** (2019) 51--71.
{ "id": "2309.12654", "title": "Some extreme value theory for $\\theta$-expansions", "authors": "Gabriela Ileana Sebe and Dan Lascu", "categories": "math.PR math.NT", "license": "http://creativecommons.org/licenses/by/4.0/" }
--- abstract: | We construct a Hopf superalgebra structure for a Drinfeld super Yangian of Lie superalgebra $B(m,n)$ relative to all possible choices of Borel subalgebras. In order to do this, we introduce a simplified definition of Drinfeld super Yangians. address: MCCME (Moscow Center for Continuous Mathematical Education) author: - Alexander Mazurenko title: Hopf superalgebra structure for Drinfeld super Yangian of Lie superalgebra B(m,n) --- Lie superalgebra, Basic Lie superalgebra, Drinfeld Super Yangian, Hopf Superalgebra, Minimalistic Presentation MSC Primary 16W35, Secondary 16W55, 17B37, 81R50, 16W30 # Acknowledgements {#acknowledgements .unnumbered} This work is funded by the Russian Science Foundation, grant 21-11-00283. With gratitude to Stukopin V.A., who introduced the author to the world of superalgebras. # Introduction The Drinfeld super Yangian, a concept rooted in the rich theory of superalgebras and Lie superalgebras, has gained substantial attention in recent mathematical research. For more details about the structure of Drinfeld super Yangians, refer to the following works: [@S04], [@M21a], [@A02], [@Z96], [@U19]. In particular, the exploration of its Hopf superalgebra structure has been an area of significant interest. This paper delves into precisely this area of study, focusing on the Drinfeld super Yangian associated with the Lie superalgebra $B(m,n)$. The concept of the Drinfeld super Yangian for the Lie superalgebra $A(m, n)$, which serves as a precursor to our exploration, has been introduced relative to specific subclasses of Borel subalgebras, as detailed in [@MS22]. Building on this groundwork, we extend our focus to the Lie superalgebra $B(m, n)$. The formal definition of the Drinfeld super Yangian for $B(m, n)$ relative to the distinguished Borel subalgebra is presented in [@M21], establishing a foundation for our current study. 
One of the central objectives of this paper is to provide insights into how to generalize our results across a broader spectrum of basic Lie superalgebras, encompassing all possible choices of Borel subalgebras. This extension necessitates the introduction of additional relations in the definitions of both Lie superalgebras and Drinfeld super Yangians, along with supplementary lemmas. However, the fundamental approach to our proofs remains consistent. Our paper is structured as follows: Section [2](#sec:prelim){reference-type="ref" reference="sec:prelim"} - Preliminaries: This section provides essential background information, including fundamental algebraic concepts (Subsection [2.1](#subs:LS){reference-type="ref" reference="subs:LS"}) and general definitions and results pertaining to the Lie superalgebra $B(m, n)$ (Subsection [2.2](#subsc:bs){reference-type="ref" reference="subsc:bs"}). Section [3](#sec:DSY){reference-type="ref" reference="sec:DSY"} - Drinfeld Super Yangians: Here, we introduce the fundamental definitions and concepts associated with Drinfeld super Yangians. Subsection [3.1](#sect:DSY){reference-type="ref" reference="sect:DSY"} presents the definition of the Drinfeld super Yangian of Lie superalgebra $B(m, n)$ relative to all possible choices of Borel subalgebras. In Subsection [3.2](#sb:mpsy){reference-type="ref" reference="sb:mpsy"}, we provide a minimalistic presentation of the Drinfeld super Yangian, a crucial prerequisite for constructing the Hopf superalgebra structure discussed in Section [\[sec:hssdsy\]](#sec:hssdsy){reference-type="ref" reference="sec:hssdsy"}. Section [\[sec:hssdsy\]](#sec:hssdsy){reference-type="ref" reference="sec:hssdsy"} - Hopf Superalgebra Structure: This section presents the core result of our paper. 
In Theorem [Theorem 2](#th:hssdsy){reference-type="ref" reference="th:hssdsy"}, we establish the Hopf superalgebra structure for the Drinfeld super Yangian of Lie superalgebra $B(m, n)$, extending our findings from the minimalistic presentation given in Section [3.2](#sb:mpsy){reference-type="ref" reference="sb:mpsy"}. Section [5](#sec:par){reference-type="ref" reference="sec:par"} - Proof of Theorem [Theorem 1](#th:mpy){reference-type="ref" reference="th:mpy"}: The final section is dedicated to proving Theorem [Theorem 1](#th:mpy){reference-type="ref" reference="th:mpy"}, which is foundational for our exploration of the Hopf superalgebra structure. Subsection [5.1](#sub:grdsy){reference-type="ref" reference="sub:grdsy"} introduces special elements necessary for constructing the minimalistic presentation, while Subsection [5.2](#sub:pmpt){reference-type="ref" reference="sub:pmpt"} contains auxiliary lemmas crucial for establishing the theorem. This structured approach allows us to build a comprehensive understanding of the Hopf superalgebra structure of the Drinfeld super Yangian for Lie superalgebra $B(m, n)$, and it serves as a stepping stone for broader generalizations and applications in the field. # Preliminaries {#sec:prelim} We introduce general notations and definitions and state important results about Lie superalgebras. ## Notations {#subs:LS} By $\mathbb{N}$, $\mathbb{N}_{0}$, $\mathbb{Z}$ we denote the sets of natural numbers, natural numbers with zero, and integers, respectively. Recall that $\mathbb{Z}_{2} = \{ \bar{0}, \bar{1} \}$. Let $\Bbbk$ be an algebraically closed field of characteristic zero. Denote by $\mathcal{M}_{n,m}(A)$ a ring of $n \times m$ $(n,m \in \mathbb{N} )$ matrices over a ring $A$. We also use the Iverson bracket defined by $[P] = \begin{cases} 1 \text{ if } P \text{ is true;} \\ 0 \text{ otherwise}, \end{cases}$ where $P$ is a statement that can be true or false. $[1,n]$ denotes $\{1, 2, \dots ,n\}$ for any $n \in \mathbb{N}$. 
Note that all equations in this subsection can be extended by linearity from homogeneous to all elements in the corresponding algebraic structure. Let $A$ be an associative superalgebra. Denote for all homogeneous $x \in A$ the $\mathbb{Z}_2$-grading of $x$ by $|x| \in \mathbb{Z}_2$. One defines 1. the Lie superbracket by $$\label{eq:LS} [x,y] := xy - (-1)^{|x||y|} y x$$ 2. the anticommutator by $$\{x,y\} := xy + (-1)^{|x||y|} yx;$$ 3. the graded Lie superbracket by $$[x,y]_{v} := xy - (-1)^{|x||y|} v y x$$ for all homogeneous elements $x, y \in A$ and all $v\in \Bbbk$. Consider super vector spaces $V$ and $W$. Define the linear function $\tau_{V, W}: V \otimes W \to W \otimes V$ by $$\label{eq:taudef} \tau_{V, W}(v \otimes w) = (-1)^{|v| |w|} w \otimes v$$ for all homogeneous $v \in V$ and $w \in W$. A Lie superalgebra is a super vector space $\mathfrak{g} = \mathfrak{g}_{\bar{0}} \oplus \mathfrak{g}_{\bar{1}}$ together with a bilinear map (the Lie superbracket) $[\cdot, \cdot]: \mathfrak{g} \times \mathfrak{g} \to \mathfrak{g}$ which satisfies the following axioms: $$[ \mathfrak{g}_{\bar{ \alpha }}, \mathfrak{g}_{\bar{\beta}} ] \subseteq \mathfrak{g}_{\bar{ \alpha } + \bar{\beta}} \text{ for } \bar{ \alpha }, \bar{\beta} \in \mathbb{Z}_{2},$$ $$[x, y] = - (-1)^{|x| |y|} [y, x],$$ $$\label{eq:sje} (-1)^{|x||z|} [x,[y,z]] + (-1)^{|x||y|} [y,[z,x]] + (-1)^{|z||y|}[z,[x,y]] = 0$$ for all homogeneous $x, y, z \in \mathfrak{g}$. The linear mapping $\textnormal{ad}_{x}: \mathfrak{g} \to \mathfrak{g}$ is called the adjoint action and is defined by $\textnormal{ad}_{x}(y) = [x,y]$ for all $x, y \in \mathfrak{g}$. Denote by $\textnormal{id}_{\mathfrak{g}}$ the identity map on $\mathfrak{g}$. We use results about formal power series proved in [@N69]. Let $A$ be a ring. Consider the ring of formal power series $A[[X]]$. 
Define for each $f \in 1+XA[[X]]$ $$\log(f) := \sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n} (f-1)^{n} \in XA[[X]].$$ ## Lie Superalgebras {#subsc:bs} Let $\mathfrak{g}$ be the Lie superalgebra $B(m,n) = \mathfrak{osp}(2m+1|2n)$ with $m \ge 0$, $n \ge 1$ relative to a Borel subalgebra $\mathfrak{b}$. Recall that $\mathfrak{g}$ has rank $m+n$ and set $I = \{1,2,\dots,m+n\}$. Suppose that $\mathfrak{g}$ has a simple root system $\Delta^{0} = \{ \alpha_{i} \}_{i \in I}$ and a Cartan subalgebra $\mathcal{H} = \langle h_{i} \rangle_{i \in I}$. Denote the corresponding simple root generators by $e_{i}$ and $f_{i}$ for all $i \in I$. For $\tau \subseteq I$, we define a $\mathbb{Z}_{2}$-gradation by setting $|e_{i}| = \bar{0}$ ($|f_{i}| = \bar{0}$) if $i \notin \tau$ else $|e_{i}| = \bar{1}$ ($|f_{i}| = \bar{1}$) for all $i \in I$. Moreover, set a $\mathbb{Z}_{2}$-gradation on $I$ by setting $|i| = \bar{0}$ if $i \notin \tau$ else $|i| = \bar{1}$ for all $i \in I$. Decompose $\mathfrak{g}$ using its canonical root decomposition as follows: $$\mathfrak{g} = \mathcal{H} \oplus \bigoplus_{ \alpha \in \Delta } \mathfrak{g}^{ \alpha }$$ where $\Delta$ is a root system of $\mathfrak{g}$ and $$\mathfrak{g}^{ \alpha } = \{ x \in \mathfrak{g} \; | \; [h,x] = \alpha(h) x, h \in \mathcal{H} \}$$ is the root space corresponding to a root $\alpha \in \Delta$. It is well known that $\dim(\mathfrak{g}^{ \alpha }) = 1$ as a super vector space for all $\alpha \in \Delta$. Thus we can fix basis elements $\{ e_{\alpha}, f_{\alpha} \}_{ \alpha \in \Delta^{+} }$ such that $\mathfrak{g}^{ \alpha } = \langle e_{\alpha} \rangle$ and $\mathfrak{g}^{ -\alpha } = \langle f_{\alpha} \rangle$ for all $\alpha \in \Delta^{+}$. Define $\textnormal{ht}( \sum_{i \in I} n_{i} \alpha_{i} ) = \sum_{i \in I} n_{i}$ for all $( n_{i} )_{i \in I} \in \mathbb{Z}^{|I|}$ where $\alpha_{i} \in \Delta^{0}$ for all $i \in I$. 
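As a quick sanity check on the sign conventions of Subsection [2.1](#subs:LS){reference-type="ref" reference="subs:LS"} (an illustration, not part of the paper), one can verify the graded Jacobi identity [\[eq:sje\]](#eq:sje){reference-type="eqref" reference="eq:sje"} on the matrix superalgebra $\mathfrak{gl}(1|1)$, where diagonal elementary matrices are even and off-diagonal ones are odd; the helper names below are ours.

```python
from itertools import product

def mat(i, j):
    """Elementary 2x2 matrix E_{ij}; in gl(1|1) it is even iff i == j."""
    m = [[0, 0], [0, 0]]
    m[i][j] = 1
    return m

def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def bracket(a, pa, b, pb):
    """Lie superbracket [a,b] = ab - (-1)^{|a||b|} ba for homogeneous a, b.
    Returns the matrix together with its parity |a| + |b| (mod 2)."""
    s = (-1) ** (pa * pb)
    ab, ba = mul(a, b), mul(b, a)
    return ([[ab[i][j] - s * ba[i][j] for j in range(2)] for i in range(2)],
            (pa + pb) % 2)

def super_jacobi_holds():
    """Check the graded Jacobi identity on all homogeneous basis triples."""
    basis = [(mat(i, j), (i + j) % 2) for i in range(2) for j in range(2)]
    for (x, px), (y, py), (z, pz) in product(basis, repeat=3):
        yz, pyz = bracket(y, py, z, pz)
        zx, pzx = bracket(z, pz, x, px)
        xy, pxy = bracket(x, px, y, py)
        t1, _ = bracket(x, px, yz, pyz)
        t2, _ = bracket(y, py, zx, pzx)
        t3, _ = bracket(z, pz, xy, pxy)
        total = [[(-1) ** (px * pz) * t1[i][j]
                  + (-1) ** (px * py) * t2[i][j]
                  + (-1) ** (pz * py) * t3[i][j]
                  for j in range(2)] for i in range(2)]
        if total != [[0, 0], [0, 0]]:
            return False
    return True
```

For the two odd elements the bracket reduces to the anticommutator, so $[E_{12}, E_{21}] = E_{11} + E_{22}$, as the sketch confirms.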
By definition of basic Lie superalgebra (see [@K77]) there exists a unique (up to a constant factor) even nondegenerate $\mathfrak{g}$-invariant supersymmetric bilinear form $\langle . , . \rangle : \mathfrak{g} \times \mathfrak{g} \to \Bbbk$, i. e. 1. $\langle \mathfrak{g}_{\bar{\alpha}} , \mathfrak{g}_{\bar{\beta}} \rangle = 0$ unless $\bar{\alpha} + \bar{\beta} = \bar{0}$ for $\bar{\alpha}, \bar{\beta} \in \mathbb{Z}_{2}$; 2. the form induces an isomorphism $\mathfrak{g} \cong (\mathfrak{g})^{*}$; 3. $\langle [x,y] , z \rangle = \langle x , [y,z] \rangle$ for all $x,y,z \in \mathfrak{g}$; 4. $\langle x , y \rangle = (-1)^{|x||y|} \langle y , x \rangle$ for all homogeneous $x,y \in \mathfrak{g}$. Without loss of generality we choose a bilinear form on $\mathfrak{g}$ in such a way that 1. $\langle e_{i}, f_{j} \rangle = \delta_{i,j}$ for all $i,j \in I$; 2. $\langle e_{\alpha}, f_{\beta} \rangle = [\alpha + \beta = 0]$ for all $\alpha, \beta \in \Delta$; 3. the restriction of $\langle . , . \rangle$ to $\langle \mathfrak{g}^{\alpha} , \mathfrak{g}^{-\alpha} \rangle$ is nondegenerate for all $\alpha \in \Delta$. Using the last property note that for any $\alpha \in \mathcal{H}^{*}$ there exists a unique element $h_{\alpha} \in \mathcal{H}$ such that $\langle h_{\alpha} , h \rangle = \alpha(h)$ for all $h \in \mathcal{H}$. Thus we can define a nondegenerate symmetric bilinear form $( . , .)$ on $\mathcal{H}^{*}$ by $( \alpha, \beta) = \langle h_{\alpha} , h_{\beta} \rangle$ for all $\alpha, \beta \in \mathcal{H}^{*}$. Now we are able to construct the symmetric Cartan matrix $C = (c_{ij})_{i,j \in I}$ of $\mathfrak{g}$ by supposing $C = ( (\alpha_{i}, \alpha_{j}) )_{i,j \in I}$. Therefore the following arithmetic conditions for elements of the Cartan matrix $C$ hold: 1. $\langle h_{i}, h_{j} \rangle = (\alpha_{i}, \alpha_{j}) = c_{ij}$ for all $i,j \in I$; 2. $c_{ij} \in \mathbb{Z}$, for all $i, j \in I$; 3. $c_{ij} \le 0$, if $i \notin \tau$, for all $i, j \in I$. 
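To make the construction of $C$ concrete, here is a short sketch (not taken from the paper) that computes $C = ( (\alpha_{i}, \alpha_{j}) )$ for one assumed choice of Borel subalgebra, the distinguished simple root system $\delta_{1}-\delta_{2}, \dots, \delta_{n}-\varepsilon_{1}, \varepsilon_{1}-\varepsilon_{2}, \dots, \varepsilon_{m-1}-\varepsilon_{m}, \varepsilon_{m}$, using the pairing $(\varepsilon_{i}, \varepsilon_{i'}) = \delta_{i,i'}$, $(\delta_{j}, \delta_{j'}) = -\delta_{j,j'}$, $(\varepsilon_{i}, \delta_{j}) = 0$ fixed in this subsection; the function name is ours, and other Borel subalgebras give different matrices.

```python
def cartan_matrix(m, n):
    """Symmetric Cartan matrix C = ((alpha_i, alpha_j)) for the distinguished
    simple roots of B(m,n) = osp(2m+1|2n).  Coordinates 0..n-1 stand for
    delta_1..delta_n (norm -1), coordinates n..n+m-1 for eps_1..eps_m (norm +1)."""
    dim = n + m

    def e(k):  # k-th standard basis vector of H*
        v = [0] * dim
        v[k] = 1
        return v

    # delta_1-delta_2, ..., delta_n-eps_1, eps_1-eps_2, ..., eps_{m-1}-eps_m
    roots = [[a - b for a, b in zip(e(i), e(i + 1))] for i in range(dim - 1)]
    roots.append(e(dim - 1))  # the short last simple root
    norms = [-1] * n + [1] * m

    def form(u, v):  # the bilinear form ( . , . ) on H* in these coordinates
        return sum(norms[k] * u[k] * v[k] for k in range(dim))

    return [[form(a, b) for b in roots] for a in roots]
```

For $B(1,1)$ this yields $C = \begin{pmatrix} 0 & -1 \\ -1 & 1 \end{pmatrix}$: the node $\delta_{1}-\varepsilon_{1}$ is isotropic ($c_{ii} = 0$), and $C$ is symmetric with integer entries, as stated above.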
Let $\{ \epsilon_{i}, \delta_{j} \; | \; i \in [1,m], j \in [1,n] \}$ be the dual basis of $\mathcal{H}^{*}$. Then for $\mathfrak{g}$ the bilinear form is defined by $$\begin{aligned} ( \epsilon_{i}, \epsilon_{i^{'}} ) = \delta_{i,i^{'}}, \; (\delta_{j}, \delta_{j^{'}}) = -\delta_{j,j^{'}}, \; (\epsilon_{i}, \delta_{j}) = 0\end{aligned}$$ for all $i, i^{'} \in [1,m]$ and $j, j^{'} \in [1,n]$. We also require the following well-known **Lemma 1**. 1. *If $x \in \mathfrak{g}^{\alpha}$, $y \in \mathfrak{g}^{-\alpha}$, then $[x,y] = \langle x,y \rangle h_{\alpha}$ for all $\alpha \in \Delta$.* 2. *$[ \mathfrak{g}^{\alpha}, \mathfrak{g}^{-\alpha} ] = \langle h_{\alpha} \rangle$ for all $\alpha \in \Delta$.* 3. *$[ h, \mathfrak{g}^{\alpha} ] = \pm \langle h, h_{\alpha} \rangle \mathfrak{g}^{\alpha}$ for all $\alpha \in \Delta^{\pm}$.* Using all previous definitions we formulate the well-known result (for more details see [@Y91], [@Y94], [@Z11]): **Proposition 1**. *The Lie superalgebra $\mathfrak{g}$ is generated by elements $h_{i}$, $e_{i}$ and $f_{i}$ for $i \in I$ subject to the relations* *for $i, j \in I$ $$[h_{i}, h_{j}] = 0, \; [h_{i}, e_{j}] = c_{ij} e_{j}, \; [h_{i}, f_{j}] = - c_{ij} f_{j}, \; [e_{i}, f_{j}] = \delta_{ij} h_{i}$$* *and the standard Serre relations* *for $i, j \in I$ $$\label{eq:scstr1} [e_{i}, e_{i}] = [f_{i}, f_{i}] = 0, \; \text{for} \; c_{ii} = 0,$$ $$\label{eq:scstr2} [e_{i}, e_{j}] = [f_{i}, f_{j}] = 0, \; \text{for} \; i \ne j, \; c_{ij} = 0,$$ $$\label{eq:scstr2a} [ e_{i} , [ e_{i}, e_{j} ] ] = [ f_{i} , [ f_{i}, f_{j} ] ] = 0, \; \text{for} \; c_{ii} \neq 0, \; i \neq m+n, \; j \in \{i-1, i+1\},$$ $$\label{eq:scstr2b} [ e_{m+n} , [ e_{m+n} , [ e_{m+n}, e_{m+n-1} ] ] ] = [ f_{m+n} , [ f_{m+n} , [ f_{m+n}, f_{m+n-1} ] ] ] = 0,$$* *plus higher order Serre relations for type [$\romannumeral1$](#fig:dynkin-diagrams-type1), [$\romannumeral2a$](#fig:dynkin-diagrams-type2a) and [$\romannumeral2b$](#fig:dynkin-diagrams-type2b) vertices* *for $j-1, j, j+1 \in I$ 
$$\begin{aligned} \label{eq:scstr3} [ [ [e_{j+1}, e_{j}], e_{j-1} ] , e_{j} ] = [ [ [f_{j+1}, f_{j}], f_{j-1} ] , f_{j} ] = 0. \end{aligned}$$* **Remark 1**. *Below, $\times$ dots represent either white dots (associated to even roots) or grey dots (associated to isotropic odd roots). A black dot corresponds to a non-isotropic odd root.* *[\[fig:dynkin-diagrams-type1\]]{#fig:dynkin-diagrams-type1 label="fig:dynkin-diagrams-type1"}* *[\[fig:dynkin-diagrams-type2a\]]{#fig:dynkin-diagrams-type2a label="fig:dynkin-diagrams-type2a"}* *[\[fig:dynkin-diagrams-type2b\]]{#fig:dynkin-diagrams-type2b label="fig:dynkin-diagrams-type2b"}* [\[rm:lsr\]]{#rm:lsr label="rm:lsr"} **Remark 2**. 1. *Relations [\[eq:scstr1\]](#eq:scstr1){reference-type="eqref" reference="eq:scstr1"} are also valid for $i \notin \tau$ by [\[eq:LS\]](#eq:LS){reference-type="eqref" reference="eq:LS"}.* 2. *Relations [\[eq:scstr2a\]](#eq:scstr2a){reference-type="eqref" reference="eq:scstr2a"} are also valid for $c_{ii} = 0$, $i \neq m+n$, and $j \in \{i-1, i+1\}$, but in that case, they already follow from [\[eq:LS,eq:sje,eq:scstr1\]](#eq:LS,eq:sje,eq:scstr1){reference-type="ref" reference="eq:LS,eq:sje,eq:scstr1"}.* 3. *Relations [\[eq:scstr3\]](#eq:scstr3){reference-type="eqref" reference="eq:scstr3"} are also valid for $c_{jj} \neq 0$ (in that case $|e_{j}| = |f_{j}| = \bar{0}$), but in that case, they already follow from [\[eq:LS,eq:sje,eq:scstr1,eq:scstr2\]](#eq:LS,eq:sje,eq:scstr1,eq:scstr2){reference-type="ref" reference="eq:LS,eq:sje,eq:scstr1,eq:scstr2"} (see [@Y94 Lemma 6.1.1]).* For the Dynkin diagrams and simple root systems corresponding to all possible Borel subalgebras of $\mathfrak{g}$, see for example [@Z11]. By assumption $\mathcal{H} = \langle h_{i} \rangle_{i \in I}$. We can construct an orthonormal basis $\{ h^{(k)} \}_{k \in I}$ of $\mathcal{H}$ with respect to $\langle . , . \rangle$, i. e. $\langle h^{(i)}, h^{(j)} \rangle = \delta_{ij}$ for all $i,j \in I$. 
Then the Casimir operator $\Omega \in \mathfrak{g} \otimes \mathfrak{g}$ is given by the formula $$\begin{aligned} \nonumber \Omega = \sum_{k \in I} h^{(k)} \otimes h^{(k)} + \sum_{\alpha \in \Delta^{+}_{\bar{0}}} ( e_{\alpha} \otimes f_{\alpha} + f_{\alpha} \otimes e_{\alpha} ) + \sum_{\alpha \in \Delta^{+}_{\bar{1}}} ( f_{\alpha} \otimes e_{\alpha} - e_{\alpha} \otimes f_{\alpha} ) = \\ \label{eq:CE} \sum_{k \in I} h^{(k)} \otimes h^{(k)} + \sum_{ \alpha \in \Delta^{+}} (-1)^{|e_{\alpha}|} e_{\alpha} \otimes f_{\alpha} + \sum_{ \alpha \in \Delta^{+}} f_{\alpha} \otimes e_{\alpha}.\end{aligned}$$ The element $\Omega$ is even, invariant and supersymmetric, i. e. 1. $|\Omega| = 0$; 2. $[ x \otimes 1 + 1 \otimes x, \Omega ] = 0$ for all $x \in \mathfrak{g}$; 3. $\Omega = \tau_{\mathfrak{g},\mathfrak{g}}(\Omega)$ where $\tau_{\mathfrak{g},\mathfrak{g}}$ is defined by [\[eq:taudef\]](#eq:taudef){reference-type="eqref" reference="eq:taudef"}. Note that $$\sum_{k \in I} \langle h^{(k)} , h_{i} \rangle h^{(k)} = h_{i},$$ for all $i \in I$. # Drinfeld super Yangians {#sec:DSY} In this section, we present the definition of the Drinfeld super Yangian associated with the Lie superalgebra $B(m,n)$ relative to all possible Borel subalgebras. Furthermore, we develop a concise framework for expressing this Drinfeld super Yangian in a minimalistic manner. ## Definition of Drinfeld super Yangian {#sect:DSY} We use the notations introduced in Subsection [2.2](#subsc:bs){reference-type="ref" reference="subsc:bs"}. Following [@T19a; @T19b] (see also [@D88], [@S94] and [@G07]), define the Drinfeld super Yangian of $\mathfrak{g}$, denoted by $Y_{\hbar}(\mathfrak{g})$, to be the unital, associative $\Bbbk[[\hbar]]$-Hopf superalgebra generated by $\{ h_{i,r} , x_{i,r}^{\pm} \}^{r \in \mathbb{N}_{0}}_{i \in I}$. The $\mathbb{Z}_2$-grading of these generators is specified as follows: $|h_{i,r}| = \bar{0}$, $|x_{i,r}^{\pm}| = |i|$ for all $r \in \mathbb{N}_{0}$, $i \in I$. 
$Y_{\hbar}(\mathfrak{g})$ is subject to the following defining relations for all $i,j \in I$ and $r,s \in \mathbb{N}_{0}$ $$\label{eq:Cer} [h_{i,r}, h_{j,s}] = 0,$$ for all $i,j \in I$ and $s \in \mathbb{N}_{0}$ $$\label{eq:SYrc1} [h_{i,0}, x_{j,s}^{\pm}] = \pm c_{ij} x_{j,s}^{\pm},$$ for $i,j \in I$ and $r,s \in \mathbb{N}_{0}$ $$\label{eq:SYrc3} [x_{i,r}^{+}, x_{j,s}^{-}] = \delta_{i,j} h_{i,r+s},$$ for $i,j \in I$ and $r,s \in \mathbb{N}_{0}$; if $i=j$, then $|i| = \bar{0}$ or ($|i|=\bar{1}$, $c_{ii} \ne 0$ and $r = 0$) $$\label{eq:SYrc2} [ h_{i,r+1} , x_{j,s}^{\pm} ] - [h_{i,r} , x_{j,s+1}^{\pm}] = \pm \frac{c_{ij} \hbar}{2} \{h_{i,r}, x_{j,s}^{\pm}\},$$ for $i,j \in I$ and $r,s \in \mathbb{N}_{0}$ $$\label{eq:SYrc4} [x_{i,r+1}^{\pm}, x_{j,s}^{\pm}] - [x_{i,r}^{\pm}, x_{j,s+1}^{\pm}] = \pm \frac{c_{ij} \hbar}{2} \{ x_{i,r}^{\pm} , x_{j,s}^{\pm} \}, \text{ unless } i=j \text{ and } |i| = \bar{1},$$ for $i \in I$ and $r,s \in \mathbb{N}_{0}$ $$\label{eq:SYrc5} [h_{i,r}, x_{i,s}^{\pm}] = 0 \text{ if } c_{ii} = 0,$$ for $i,j \in I$ and $r,s \in \mathbb{N}_{0}$ $$\label{eq:bssr} [x_{i,r}^{\pm}, x_{j,s}^{\pm}] = 0 \text{ if } c_{ij} = 0,$$ as well as cubic super Serre relations for $i,j \in I$ ($i \neq m+n$ in case of [$\romannumeral2b$](#fig:dynkin-diagrams-type2b) vertices), $j \in \{i-1, i+1\}$ and $r,s,t \in \mathbb{N}_{0}$ $$\label{eq:cssr} [ x_{i,r}^{\pm} , [ x_{i,s}^{\pm} , x_{j,t}^{\pm} ] ] + [ x_{i,s}^{\pm} , [x_{i,r}^{\pm} , x_{j,t}^{\pm}] ] = 0 \text{ if } c_{ii} \neq 0,$$ in case of [$\romannumeral2b$](#fig:dynkin-diagrams-type2b) vertices for $r_{1},r_{2},r_{3},t \in \mathbb{N}_{0}$ and $S_{3}$ - symmetric group on the set $\{1,2,3\}$ $$\label{eq:cssra} \sum_{\sigma \in S_{3}} [ x_{m+n,r_{\sigma(1)}}^{\pm} , [ x_{m+n,r_{\sigma(2)}}^{\pm} , [ x_{m+n,r_{\sigma(3)}}^{\pm} , x_{m+n-1,t}^{\pm} ] ] ] = 0,$$ and quartic super Serre relations for type [$\romannumeral1$](#fig:dynkin-diagrams-type1), [$\romannumeral2a$](#fig:dynkin-diagrams-type2a) and 
[$\romannumeral2b$](#fig:dynkin-diagrams-type2b) vertices for $j-1,j,j+1 \in I$ and $r,s \in \mathbb{N}_{0}$ $$\label{eq:qssr} [ [ x_{j-1,r}^{\pm} , x_{j,0}^{\pm} ] , [ x_{j,0}^{\pm}, x_{j+1,s}^{\pm} ] ] = 0.$$ **Remark 3**. 1. *Similar to Remark [\[rm:lsr\]](#rm:lsr){reference-type="ref" reference="rm:lsr"}, cubic super Serre relations [\[eq:cssr\]](#eq:cssr){reference-type="eqref" reference="eq:cssr"} also hold for $c_{ii} = 0$, $i \neq m+n$, and $j \in \{i-1, i+1\}$, but in that case they already follow from [\[eq:sje,eq:bssr\]](#eq:sje,eq:bssr){reference-type="ref" reference="eq:sje,eq:bssr"}.* 2. *Similar to Remark [\[rm:lsr\]](#rm:lsr){reference-type="ref" reference="rm:lsr"}, quartic super Serre relations [\[eq:qssr\]](#eq:qssr){reference-type="eqref" reference="eq:qssr"} also hold for $c_{jj} \neq 0$, but in that case they already follow from [\[eq:sje,eq:bssr,eq:cssr\]](#eq:sje,eq:bssr,eq:cssr){reference-type="ref" reference="eq:sje,eq:bssr,eq:cssr"}.* 3. *Generalizing the quartic super Serre relations [\[eq:qssr\]](#eq:qssr){reference-type="eqref" reference="eq:qssr"}, the following relations also hold [@T19b]: $$[ [x_{j-1,r}^{\pm}, x_{j,k}^{\pm}], [x_{j,l}^{\pm}, x_{j+1,s}^{\pm}]] + [ [x_{j-1,r}^{\pm}, x_{j,l}^{\pm}], [x_{j,k}^{\pm},x_{j+1,s}^{\pm}]] = 0.$$ This, in turn, using [\[eq:cssr\]](#eq:cssr){reference-type="eqref" reference="eq:cssr"} can be rewritten as $$[[[x^{\pm}_{j-1,r}, x^{\pm}_{j,k}] , x^{\pm}_{j+1,s}], x^{\pm}_{j,l}] + [[[x^{\pm}_{j-1,r}, x^{\pm}_{j,l}] , x^{\pm}_{j+1,s}], x^{\pm}_{j,k}] = 0.$$* *[\[rm:yangrelrew\]]{#rm:yangrelrew label="rm:yangrelrew"}* We note that the universal enveloping superalgebra $U(\mathfrak{g})$ is naturally embedded in $Y_{\hbar}(\mathfrak{g})$ as a Hopf superalgebra, and the embedding is given by the formulas $h_{i} \to h_{i,0}$, $e_{i} \to x_{i,0}^{+}$, $f_{i} \to x_{i,0}^{-}$. 
We shall identify the universal enveloping superalgebra $U(\mathfrak{g})$ with its image in the Drinfeld super Yangian $Y_{\hbar}(\mathfrak{g})$. Let $Y_{\hbar}^{0}(\mathfrak{g})$ be the subalgebra generated by the elements $\{ h_{i,r} \}_{i \in I, r \in \mathbb{N}_{0}}$. ## Minimalistic presentation for Drinfeld super Yangian {#sb:mpsy} To establish the Hopf superalgebra properties of a Drinfeld super Yangian, we introduce a more convenient presentation of the superalgebras $Y_{\hbar}(\mathfrak{g})$. Such work was done in [@S94] for $Y_{\hbar}(\mathfrak{g})$ associated only with the distinguished Dynkin diagram (the result is stated without proof), and for the non-super case in [@GNW18] and [@L93]. From the defining relations we see that $Y_{\hbar}(\mathfrak{g})$ is generated by elements $\{ h_{ir} , x_{i0}^{\pm} \}^{r \in \{0,1\}}_{i \in I}$. We use equations [\[eq:SYrc1\]](#eq:SYrc1){reference-type="eqref" reference="eq:SYrc1"}, [\[eq:SYrc3\]](#eq:SYrc3){reference-type="eqref" reference="eq:SYrc3"} and [\[eq:SYrc2\]](#eq:SYrc2){reference-type="eqref" reference="eq:SYrc2"} to obtain the recurrence formulas $$\label{eq:rec1} x_{i,r+1}^{\pm} = \pm (c_{ij})^{-1} [ h_{j1} - \frac{\hbar}{2} h_{j0}^2 , x_{ir}^{\pm} ],$$ where $r \in \mathbb{N}_{0}$; if $c_{ii} \ne 0$, then $j=i$, and if $c_{ii} = 0$, then $j=i+1$ ($i \in I$); $$\label{eq:rec2} h_{ir} = [ x_{ir}^{+}, x_{i0}^{-} ],$$ where $r \ge 2$ and $i \in I$. We introduce the auxiliary generators for $i \in I$ by setting $$\widetilde{h}_{i 1} \overset{\operatorname{def}}{=} h_{i1} - \frac{\hbar}{2} h_{i0}^2.$$ The $\mathbb{Z}_2$-grading of these generators is specified as follows: $|\widetilde{h}_{i 1}|=|h_{i,r}| = \bar{0}$, $|x_{i,r}^{\pm}| = |i|$ for all $r \in \mathbb{N}_{0}$ and $i \in I$. **Theorem 1**. 
*$Y_{\hbar}(\mathfrak{g})$ is isomorphic to the superalgebra generated by $\{ h_{ir} , x_{ir}^{\pm} \}^{r \in \{0,1\}}_{i \in I}$ subject only to the relations* MY1 : *for all $i, j \in I$ and $0 \le r,s \le 1$ $$\label{eq:mpy1} [h_{ir}, h_{js}] = 0,$$* MY2 : *for all $i, j \in I$ and $0 \le s \le 1$ $$\label{eq:mpy2} [h_{i0}, x_{js}^{\pm}] = \pm c_{ij} x_{js}^{\pm},$$* MY3 : *for $i, j \in I$ and $0 \le r+s \le 1$ $$\label{eq:mpy3} [x_{ir}^{+}, x_{js}^{-}] = \delta_{ij} h_{i,r+s},$$* MY4 : *for $i, j \in I$ $$\label{eq:mpy4} [\widetilde{h}_{i 1}, x_{j0}^{\pm}] = \pm c_{ij} x_{j1}^{\pm}, \text{ unless } i=j \text{ and } c_{ii} = 0,$$* MY5 : *for $i, j \in I$ $$\label{eq:mpy5} [x_{i1}^{\pm}, x_{j0}^{\pm}] - [x_{i0}^{\pm}, x_{j1}^{\pm}] = \pm \frac{c_{ij} \hbar}{2} \{ x_{i0}^{\pm} , x_{j0}^{\pm} \}, \text{ unless } i=j \text{ and } |i| = \bar{1},$$* MY6 : *for $i \in I$ and $0 \le s \le 1$ $$\label{eq:mpy6} [h_{i1}, x_{is}^{\pm}] = 0, \text{ if } c_{ii}=0,$$* MY7 : *for $i,j \in I$ $$\label{eq:mpy7} [x_{i0}^{\pm}, x_{j0}^{\pm}] = 0, \text{ if } c_{ij} = 0,$$* MY8 : *for $i \in I$ ($i \neq m+n$ in case of [$\romannumeral2b$](#fig:dynkin-diagrams-type2b) vertices), $j \in \{i-1, i+1\}$ $$\label{eq:mpy8} [ x_{i0}^{\pm} , [ x_{i0}^{\pm} , x_{j0}^{\pm} ] ] = 0, \text{ if } c_{ii} \neq 0,$$* MY9 : *in case of [$\romannumeral2b$](#fig:dynkin-diagrams-type2b) vertices $$\label{eq:mpy8a} [ x_{m+n,0}^{\pm} , [ x_{m+n,0}^{\pm} , [ x_{m+n,0}^{\pm}, x_{m+n-1,0}^{\pm} ] ] ] = 0,$$* MY10 : *for $j-1,j,j+1 \in I$ and for type [$\romannumeral1$](#fig:dynkin-diagrams-type1), [$\romannumeral2a$](#fig:dynkin-diagrams-type2a) and [$\romannumeral2b$](#fig:dynkin-diagrams-type2b) vertices $$\label{eq:mpy9} [ [ x_{j-1,0}^{\pm} , x_{j0}^{\pm} ] , [ x_{j0}^{\pm}, x_{j+1,0}^{\pm} ] ] = 0.$$* *Proof.* The proof of the theorem is established through a sequence of logical steps based on various remarks and lemmas. We summarize these key steps as follows: 1. 
The equality [\[eq:Cer\]](#eq:Cer){reference-type="eqref" reference="eq:Cer"} follows from Lemmas [Lemma 14](#lm:cartelrelii){reference-type="ref" reference="lm:cartelrelii"} and [Lemma 15](#lm:cartelrelinej){reference-type="ref" reference="lm:cartelrelinej"}. 2. The relation [\[eq:SYrc1\]](#eq:SYrc1){reference-type="eqref" reference="eq:SYrc1"} follows from Lemma [Lemma 5](#lm:h0eq){reference-type="ref" reference="lm:h0eq"}. 3. The equality [\[eq:SYrc3\]](#eq:SYrc3){reference-type="eqref" reference="eq:SYrc3"} follows from Lemmas [Lemma 8](#lm:SYrc34){reference-type="ref" reference="lm:SYrc34"} and [Lemma 14](#lm:cartelrelii){reference-type="ref" reference="lm:cartelrelii"}. 4. The relation [\[eq:SYrc2\]](#eq:SYrc2){reference-type="eqref" reference="eq:SYrc2"} follows from Lemma [Lemma 17](#lm:hixi){reference-type="ref" reference="lm:hixi"}. 5. The relation [\[eq:SYrc4\]](#eq:SYrc4){reference-type="eqref" reference="eq:SYrc4"} follows from Lemma [Lemma 16](#lm:xijxijii){reference-type="ref" reference="lm:xijxijii"}. 6. The relation [\[eq:SYrc5\]](#eq:SYrc5){reference-type="eqref" reference="eq:SYrc5"} follows from Lemma [Lemma 20](#lm:SYrc5){reference-type="ref" reference="lm:SYrc5"}. 7. The relations [\[eq:bssr\]](#eq:bssr){reference-type="eqref" reference="eq:bssr"} and [\[eq:cssr\]](#eq:cssr){reference-type="eqref" reference="eq:cssr"} follow from Lemma [Lemma 19](#lm:bcssr){reference-type="ref" reference="lm:bcssr"}. 8. The relation [\[eq:cssra\]](#eq:cssra){reference-type="eqref" reference="eq:cssra"} follows from Lemma [Lemma 21](#lm:cssra){reference-type="ref" reference="lm:cssra"}. 9. The relation [\[eq:qssr\]](#eq:qssr){reference-type="eqref" reference="eq:qssr"} follows from Lemma [Lemma 22](#lm:leqmp){reference-type="ref" reference="lm:leqmp"}. 10.
Finally, the relation [\[rm:lsar\]](#rm:lsar){reference-type="eqref" reference="rm:lsar"} follows from Lemma [Lemma 12](#lm:lsar){reference-type="ref" reference="lm:lsar"}. Combining these steps completes the proof of the theorem. ◻ In this superalgebra, we also define elements $x_{ir}^{\pm}$ ($r \ge 2$) and $h_{ir}$ ($r \ge 2$) for $i \in I$ using [\[eq:rec1\]](#eq:rec1){reference-type="eqref" reference="eq:rec1"} and [\[eq:rec2\]](#eq:rec2){reference-type="eqref" reference="eq:rec2"}. The $\mathbb{Z}_2$-grading of these generators is specified as in $Y_{\hbar}(\mathfrak{g})$. **Remark 4**. *As noted in [@GNW18], in order to prove Theorem [Theorem 1](#th:mpy){reference-type="ref" reference="th:mpy"} we must prove that the equation $$\label{rm:lsar} [ [\widetilde{h}_{j1},x_{i1}^{+}] , x_{i1}^{-} ] + [ x_{i1}^{+} , [ \widetilde{h}_{j1},x_{i1}^{-} ] ] = 0$$ or $$\label{eq:hia12} [h_{j1}, h_{i2}] = 0$$ can be deduced from relations [\[eq:mpy1\]](#eq:mpy1){reference-type="eqref" reference="eq:mpy1"} - [\[eq:mpy9\]](#eq:mpy9){reference-type="eqref" reference="eq:mpy9"}, where if $c_{ii} \ne 0$, then $j=i$, and if $c_{ii} = 0$, then $j=i+1$ ($i \in I$). Note that it follows from Lemmas [Lemma 6](#lm:l2){reference-type="ref" reference="lm:l2"} and [Lemma 10](#lm:cel02){reference-type="ref" reference="lm:cel02"} that [\[rm:lsar\]](#rm:lsar){reference-type="eqref" reference="rm:lsar"} is equivalent to [\[eq:hia12\]](#eq:hia12){reference-type="eqref" reference="eq:hia12"}.* # Hopf superalgebra structure on Drinfeld super Yangians [\[sec:hssdsy\]]{#sec:hssdsy label="sec:hssdsy"} In this section we explicitly describe a Hopf superalgebra structure on a Drinfeld super Yangian of the Lie superalgebra $B(m,n)$. We use the notation introduced in Subsection [3.2](#sb:mpsy){reference-type="ref" reference="sb:mpsy"}. **Theorem 2**.
*$Y_{\hbar}(\mathfrak{g})$ is a Hopf superalgebra.* ***Counit:** the counit $\epsilon: Y_{\hbar}(\mathfrak{g}) \to \Bbbk$ is defined by the following equations:* Co1 : *$$\epsilon(1) = 1,$$* Co2 : *for $x \in \{ h_{ir} , x_{ir}^{\pm} \}^{r \in \mathbb{N}_{0}}_{i \in I}$ $$\epsilon(x) = 0.$$* ***Comultiplication:** The comultiplication $\Delta: Y_{\hbar}(\mathfrak{g}) \to Y_{\hbar}(\mathfrak{g}) \otimes Y_{\hbar}(\mathfrak{g})$ is given by the following relations:* Com1 : *for all $x \in \mathfrak{g}$ $$\Delta(x) = \square(x),$$* Com2 : *$$\begin{aligned} \Delta(h_{i1}) &= \square(h_{i1}) + \hbar( h_{i0} \otimes h_{i0} + [ h_{i0} \otimes 1, \Omega^{+}] ) \nonumber \\ &= \square(h_{i1}) + \hbar( h_{i0} \otimes h_{i0} - \sum_{ \alpha \in \Delta^{+}} (\alpha_{i}, \alpha) x_{\alpha}^{-} \otimes x_{\alpha}^{+} ). \end{aligned}$$* Com3 : *for all $r \in \mathbb{N}_{0}$; if $c_{ii} \ne 0$, then $j=i$, and if $c_{ii} = 0$, then $j=i+1$ ($i \in I$) $$\Delta(x_{i,r+1}^{\pm}) = \pm (c_{ij})^{-1} [ \Delta(h_{j1}) - \frac{\hbar}{2} \Delta^2(h_{j0}) , \Delta(x_{ir}^{\pm}) ],$$* Com4 : *for all $i \in I$ and $r \ge 2$ $$\Delta(h_{ir}) = [ \Delta(x_{ir}^{+}), \Delta(x_{i0}^{-}) ],$$* ***Antipode:** The antipode $S: Y_{\hbar}(\mathfrak{g}) \to Y_{\hbar}(\mathfrak{g})^{op \; cop}$ is described by the following equations:* Ant1 : *for all $x \in \mathfrak{g}$ $$\label{eq:antipodeY1} S(x) = -x,$$* Ant2 : *$$S(h_{i1}) = -h_{i1} + \hbar ( h_{i0}^2 + \sum_{ \alpha \in \Delta^{+}} (-1)^{1+|\alpha|} (\alpha_{i}, \alpha) x_{\alpha}^{-} x_{\alpha}^{+} ).$$* Ant3 : *for all $r \in \mathbb{N}_{0}$; if $c_{ii} \ne 0$, then $j=i$, and if $c_{ii} = 0$, then $j=i+1$ ($i \in I$) $$S(x_{i,r+1}^{\pm}) = \mp (c_{ij})^{-1} [ S(h_{j1}) - \frac{\hbar}{2} S^2(h_{j0}) , S(x_{ir}^{\pm}) ],$$* Ant4 : *for all $i \in I$ and $r \ge 2$ $$\label{eq:antipodeY2} S(h_{ir}) = - [ S(x_{ir}^{+}), S(x_{i0}^{-}) ].$$* *Proof.* By combining the insights from the Subsection [2.2](#subsc:bs){reference-type="ref" 
reference="subsc:bs"} with the results of Theorem [Theorem 1](#th:mpy){reference-type="ref" reference="th:mpy"}, we see that the proof coincides with the one presented in [@MS22 Section 4]. ◻ # Proofs of auxiliary results {#sec:par} The proof of Theorem [Theorem 1](#th:mpy){reference-type="ref" reference="th:mpy"} is split into several lemmas and propositions. We give all necessary proofs in this section. ## General results about Drinfeld super Yangians {#sub:grdsy} Set for any $i \in I$ $$\label{eq:hln} \widetilde{h}_{i}(t) := \hbar \sum_{r \ge 0} \widetilde{h}_{i,r} t^{-r-1} = \log ( 1 + \hbar \sum_{r \ge 0} h_{i,r} t^{-r-1} ) \in Y_{\hbar}(\mathfrak{g})^{0}[[t^{-1}]].$$ **Lemma 2**. *Let [\[eq:Cer\]](#eq:Cer){reference-type="eqref" reference="eq:Cer"} hold for $i,j \in I$ and $0 \le r,s \le v$, let [\[eq:SYrc2\]](#eq:SYrc2){reference-type="eqref" reference="eq:SYrc2"} hold for $i,j \in I$, $0 \le r \le v$ and $s \in \mathbb{N}_{0}$, and let [\[eq:SYrc5\]](#eq:SYrc5){reference-type="eqref" reference="eq:SYrc5"} hold for $i \in I$ ($c_{ii}=0$), $0 \le r,s \le v$. Then for $i,j \in I$, $0 \le r \le v$, $s \in \mathbb{N}_{0}$, $$\label{eq:thx} [ \widetilde{h}_{i,r}, x_{j,s}^{\pm} ] = \pm c_{ij} x_{j,r+s}^{\pm} \pm c_{ij} \sum_{p=1}^{\lfloor r/2 \rfloor} \binom{r}{2p} \frac{(\hbar c_{ij} / 2)^{2p}}{2p+1} x_{j,r+s-2p}^{\pm}.$$* *Proof.* Note that it follows from [\[eq:hln\]](#eq:hln){reference-type="eqref" reference="eq:hln"} that for arbitrary $r \in \mathbb{N}_{0}$ we have $\widetilde{h}_{i,r} = f(h_{i,0}, h_{i,1} , ..., h_{i,r})$ for some element $f \in \Bbbk \langle x_{1},...,x_{r+1} \rangle$ of the free algebra. Hence, while deriving [\[eq:thx\]](#eq:thx){reference-type="eqref" reference="eq:thx"}, we may and shall assume that [\[eq:Cer\]](#eq:Cer){reference-type="eqref" reference="eq:Cer"} and [\[eq:SYrc2\]](#eq:SYrc2){reference-type="eqref" reference="eq:SYrc2"} hold for all $r,s \in \mathbb{N}_{0}$.
Then the result follows from [@GT13 Lemma 2.7, Lemma 2.9, Remark 3.1]. ◻ Set $\accentset{\approx}{h}_{ij,0} = h_{i,0}$ ($i,j \in I$), and define inductively for $r \in \mathbb{N}$ $$\accentset{\approx}{h}_{ij,r} = \widetilde{h}_{i,r} - \sum_{p=1}^{\lfloor r/2 \rfloor} \binom{r}{2p} \frac{(\hbar (\alpha_{i},\alpha_{j}) / 2)^{2p}}{2p+1} \accentset{\approx}{h}_{ij,r-2p}.$$ We have **Lemma 3**. *Suppose that Lemma [Lemma 2](#lm:thfdpr){reference-type="ref" reference="lm:thfdpr"} holds. Then in the same notations $$\label{eq:thxo} [ \accentset{\approx}{h}_{ij,r} , x_{j,s}^{\pm} ] = \pm (\alpha_{i}, \alpha_{j})x_{j,r+s}^{\pm},$$ for $i, j \in I$, $0 \le r \le v$, $s \in \mathbb{N}_{0}$, and $$\label{eq:thxs} \accentset{\approx}{h}_{ij,r} = h_{i,r} + f(h_{i,0}, h_{i,1}, ... , h_{i,r-1})$$ for some polynomial $f(x_{1},x_{2},...,x_{r}) \in \Bbbk[x_{1},x_{2},...,x_{r}]$.* *Proof.* The proof is the same as in [@MS22 Lemma 3.16]. It is based on Lemma [Lemma 2](#lm:thfdpr){reference-type="ref" reference="lm:thfdpr"}. ◻ **Lemma 4**. *Let $p, n, m \in I$ and $z \in \mathbb{N}_{0}$. Suppose that $[h_{pz} , h_{nv}] = 0$, $[h_{pz}, h_{mv}] = 0$ for $0 \le v \le s^{'}-1$ $(s^{'} \in \mathbb{N}_{0})$; $(\alpha_{p}, \alpha_{n}) \ne 0$ and $(\alpha_{p}, \alpha_{m}) \ne 0$. Moreover, let $[h_{nv_1} , h_{n v_2}] = 0$ and $[h_{mv_1} , h_{m v_2}] = 0$ for $0 \le v_1, v_2 \le s^{'}$ and [\[eq:SYrc2\]](#eq:SYrc2){reference-type="eqref" reference="eq:SYrc2"} hold for $(i,j)=(n,p)=(m,p)$, $0 \le r \le s^{'}$ and any $s \in \mathbb{N}_{0}$. Then $[h_{pz}, h_{ns^{'}}] = \frac{(\alpha_{n} , \alpha_{p})}{(\alpha_{m} , \alpha_{p})} [h_{pz}, h_{ms^{'}}]$.* *Proof.* The proof is the same as in [@MS22 Lemma 3.17]. It is based on Lemmas [Lemma 2](#lm:thfdpr){reference-type="ref" reference="lm:thfdpr"} and [Lemma 3](#lm:hwdpr){reference-type="ref" reference="lm:hwdpr"}. ◻ ## Proof of minimalistic presentation theorem {#sub:pmpt} **Lemma 5**. 
*The relation [\[eq:SYrc1\]](#eq:SYrc1){reference-type="eqref" reference="eq:SYrc1"} is satisfied for all $i,j \in I$ and $r \in \mathbb{N}_{0}$. Moreover, for the same parameters $$\label{eq:h1eq} [\widetilde{h}_{i1} , x_{jr}^{\pm}] = \pm c_{ij} x_{j,r+1}^{\pm}.$$* *Proof.* The proof is the same as in [@MS22 Lemma 3.1]. ◻ **Lemma 6**. *The relation [\[eq:SYrc3\]](#eq:SYrc3){reference-type="eqref" reference="eq:SYrc3"} holds when $i=j$, $0 \le r +s \le 2$.* *Proof.* From [\[eq:mpy1\]](#eq:mpy1){reference-type="eqref" reference="eq:mpy1"}, [\[eq:mpy3\]](#eq:mpy3){reference-type="eqref" reference="eq:mpy3"} and Lemma [Lemma 5](#lm:h0eq){reference-type="ref" reference="lm:h0eq"} for all $i \in I$ ($j \in I$ such that $c_{ij} \ne 0$), we have $$0 = [ h_{i1}, \widetilde{h}_{j1} ] = [ [x_{i1}^{+}, x_{i0}^{-}] , \widetilde{h}_{j1} ] = c_{ij} ( [ x_{i1}^{+}, x_{i1}^{-} ] - [ x_{i2}^{+} , x_{i0}^{-} ] ).$$ On the other hand, $$0 = [ h_{i1}, \widetilde{h}_{j1} ] = [ [x_{i0}^{+}, x_{i1}^{-}] , \widetilde{h}_{j1} ] = c_{ij} ( [ x_{i0}^{+}, x_{i2}^{-} ] - [ x_{i1}^{+} , x_{i1}^{-} ] ).$$ Therefore $$[ x_{i0}^{+}, x_{i2}^{-} ] = [ x_{i1}^{+}, x_{i1}^{-} ] = [ x_{i2}^{+} , x_{i0}^{-} ] = h_{i2}.$$ ◻ **Lemma 7**. *The relation [\[eq:SYrc2\]](#eq:SYrc2){reference-type="eqref" reference="eq:SYrc2"} holds when $i=j$, $(r,s) =(1,0)$, and $|i| = \bar{0}$, i.e. $$[ h_{i2} , x_{i0}^{\pm} ] - [h_{i1} , x_{i1}^{\pm}] = \pm \frac{c_{ii} \hbar}{2} \{h_{i1}, x_{i0}^{\pm}\},$$* *Proof.* The proof is analogous to that in [@GNW18 Lemma 2.26]. ◻ **Lemma 8**. *Let $i,j \in I$ and $i \ne j$. The relations [\[eq:SYrc3\]](#eq:SYrc3){reference-type="eqref" reference="eq:SYrc3"}, [\[eq:SYrc4\]](#eq:SYrc4){reference-type="eqref" reference="eq:SYrc4"}, and [\[eq:bssr\]](#eq:bssr){reference-type="eqref" reference="eq:bssr"} hold for any $r,s \in \mathbb{N}_{0}$.* *Proof.* We prove [\[eq:SYrc4\]](#eq:SYrc4){reference-type="eqref" reference="eq:SYrc4"} by induction on $r$ and $s$. 
The initial case $r=s=0$ is our assumption [\[eq:mpy5\]](#eq:mpy5){reference-type="eqref" reference="eq:mpy5"}. Let $X^{\pm}(r,s)$ be the result of subtracting the right-hand side of [\[eq:SYrc4\]](#eq:SYrc4){reference-type="eqref" reference="eq:SYrc4"} from the left-hand side. Suppose that $X^{\pm}(r,s)=0$. Note that if we apply $[\widetilde{h}_{m1} , \cdot]$ and $[\widetilde{h}_{n1} , \cdot]$ ($m,n \in I$) to $X^{\pm}(r,s)=0$ we get from [\[eq:h1eq\]](#eq:h1eq){reference-type="eqref" reference="eq:h1eq"} $$0 = (\alpha_{i}, \alpha_{m}) X^{\pm}(r+1,s) + (\alpha_{j}, \alpha_{m}) X^{\pm}(r,s+1),$$ $$0 = (\alpha_{i}, \alpha_{n}) X^{\pm}(r+1,s) + (\alpha_{j}, \alpha_{n}) X^{\pm}(r,s+1).$$ Consider the matrix $A = \bigl( \begin{smallmatrix}(\alpha_{i},\alpha_{m}) & (\alpha_{j},\alpha_{m})\\ (\alpha_{i},\alpha_{n}) & (\alpha_{j},\alpha_{n})\end{smallmatrix}\bigr)$. When the determinant of $A$ is non-zero, we have $X^{\pm}(r+1,s)=X^{\pm}(r,s+1)=0$. To determine when the determinant of $A$ is non-zero, depending on the grading of the roots, it suffices to consider the following Dynkin (sub)diagrams. 1. In [case 1.1](#fig:dynkin-diagrams-01), [case 1.2](#fig:dynkin-diagrams-02) and [case 1.3](#fig:dynkin-diagrams-03) select $m=i$, $n=j$ to obtain a nonzero determinant. [\[fig:dynkin-diagrams-01\]]{#fig:dynkin-diagrams-01 label="fig:dynkin-diagrams-01"} [\[fig:dynkin-diagrams-02\]]{#fig:dynkin-diagrams-02 label="fig:dynkin-diagrams-02"} [\[fig:dynkin-diagrams-03\]]{#fig:dynkin-diagrams-03 label="fig:dynkin-diagrams-03"} 2. In [case 2.1](#fig:dynkin-diagrams-1), [case 2.2](#fig:dynkin-diagrams-2) and [case 2.3](#fig:dynkin-diagrams-3) select $|i|=|j|=\bar{0} \Rightarrow m=i, n=j$;\ $|i|=\bar{1}, |j|=\bar{0} \Rightarrow m=i^{'}, n=j$;\ $|i|=\bar{0}, |j|=\bar{1} \Rightarrow m=i, n=i^{'}$.
[\[fig:dynkin-diagrams-1\]]{#fig:dynkin-diagrams-1 label="fig:dynkin-diagrams-1"} [\[fig:dynkin-diagrams-2\]]{#fig:dynkin-diagrams-2 label="fig:dynkin-diagrams-2"} [\[fig:dynkin-diagrams-3\]]{#fig:dynkin-diagrams-3 label="fig:dynkin-diagrams-3"} In [case 2.3](#fig:dynkin-diagrams-3), when $|i|=|j|=\bar{1}$, select $m=i^{'}$, $n=j$. Note that [case 2.1](#fig:dynkin-diagrams-1) can occur only when $|I| > 3$. Thus in [case 2.4](#fig:dynkin-diagrams-4) or [case 2.5](#fig:dynkin-diagrams-5), when $|i|=|j|=\bar{1}$, select $m=i^{'}$, $n=j^{'}$. [\[fig:dynkin-diagrams-4\]]{#fig:dynkin-diagrams-4 label="fig:dynkin-diagrams-4"} [\[fig:dynkin-diagrams-5\]]{#fig:dynkin-diagrams-5 label="fig:dynkin-diagrams-5"} 3. In [case 3.1](#fig:dynkin-diagrams-6), [case 3.2](#fig:dynkin-diagrams-7) and [case 3.3](#fig:dynkin-diagrams-8) select $|i|=|j|=\bar{0} \Rightarrow m=i, n=j$; $|i|=\bar{1}, |j|=\bar{0} \Rightarrow m=i^{'}, n=j$; $|i|=\bar{0}, |j|=\bar{1} \Rightarrow m=i, n=j^{'}$; $|i|=|j|=\bar{1} \Rightarrow m=i^{'}, n=j^{'}$. [\[fig:dynkin-diagrams-6\]]{#fig:dynkin-diagrams-6 label="fig:dynkin-diagrams-6"} [\[fig:dynkin-diagrams-7\]]{#fig:dynkin-diagrams-7 label="fig:dynkin-diagrams-7"} [\[fig:dynkin-diagrams-8\]]{#fig:dynkin-diagrams-8 label="fig:dynkin-diagrams-8"} The result follows by induction hypothesis. We prove [\[eq:SYrc3\]](#eq:SYrc3){reference-type="eqref" reference="eq:SYrc3"} by induction on $r$ and $s$. The initial case $r=s=0$ is our assumption [\[eq:mpy3\]](#eq:mpy3){reference-type="eqref" reference="eq:mpy3"}. Let $X^{\pm}(r,s)$ be the result of subtracting the right-hand side of [\[eq:SYrc3\]](#eq:SYrc3){reference-type="eqref" reference="eq:SYrc3"} from the left-hand side. Suppose that $X^{\pm}(r,s)=0$.
Note that if we apply $[\widetilde{h}_{m1} , \cdot]$ and $[\widetilde{h}_{n1} , \cdot]$ ($m,n \in I$) to $X^{\pm}(r,s)=0$ we get from [\[eq:h1eq\]](#eq:h1eq){reference-type="eqref" reference="eq:h1eq"} that $$0 = (\alpha_{i}, \alpha_{m}) X^{\pm}(r+1,s) + (-1) (\alpha_{j}, \alpha_{m}) X^{\pm}(r,s+1),$$ $$0 = (\alpha_{i}, \alpha_{n}) X^{\pm}(r+1,s) + (-1) (\alpha_{j}, \alpha_{n}) X^{\pm}(r,s+1).$$ Consider the matrix $A = \bigl( \begin{smallmatrix}(\alpha_{i},\alpha_{m}) & -(\alpha_{j},\alpha_{m})\\ (\alpha_{i},\alpha_{n}) & -(\alpha_{j},\alpha_{n})\end{smallmatrix}\bigr)$. When the determinant of $A$ is non-zero, we have $X^{\pm}(r+1,s)=X^{\pm}(r,s+1)=0$. We apply the same arguments as above to deduce that it is always possible to select $m$ and $n$ in such a way that $\det(A) \ne 0$. The result follows by induction hypothesis. We prove [\[eq:bssr\]](#eq:bssr){reference-type="eqref" reference="eq:bssr"} by induction on $r$ and $s$. Denote the left hand side of [\[eq:bssr\]](#eq:bssr){reference-type="eqref" reference="eq:bssr"} by $X^{\pm}(r,s)$. The initial case $r=s=0$ is our assumption [\[eq:mpy7\]](#eq:mpy7){reference-type="eqref" reference="eq:mpy7"}. Suppose that $X^{\pm}(r,s)=0$. We apply $[\widetilde{h}_{m1} , \cdot]$ and $[\widetilde{h}_{n1} , \cdot]$ to $X^{\pm}(r,s) = 0$ to get $$0 = (\alpha_{i}, \alpha_{m}) X^{\pm}(r+1,s) + (\alpha_{j}, \alpha_{m}) X^{\pm}(r,s+1),$$ $$0 = (\alpha_{i}, \alpha_{n}) X^{\pm}(r+1,s) + (\alpha_{j}, \alpha_{n}) X^{\pm}(r,s+1).$$ Consider the matrix $A = \bigl( \begin{smallmatrix}(\alpha_{i},\alpha_{m}) & (\alpha_{j},\alpha_{m})\\ (\alpha_{i},\alpha_{n}) & (\alpha_{j},\alpha_{n})\end{smallmatrix}\bigr)$. When the determinant of $A$ is non-zero, we have $X^{\pm}(r+1,s)=X^{\pm}(r,s+1)=0$. We apply the same arguments as above to deduce that it is always possible to select $m$ and $n$ in such a way that $\det(A) \ne 0$. The result follows by induction hypothesis. ◻ **Lemma 9**. *Suppose that $i,j \in I$ and $i \ne j$. 
The equation [\[eq:SYrc2\]](#eq:SYrc2){reference-type="eqref" reference="eq:SYrc2"} holds for any $r,s \in \mathbb{N}_{0}$.* *Proof.* The proof is analogous to that in [@MS22 Lemma 3.13]. ◻ **Lemma 10**. *For all $i,j \in I$ and $s \in \mathbb{N}_{0}$ $$[h_{i0}, h_{js}] = 0.$$* *Proof.* By Lemma [Lemma 6](#lm:l2){reference-type="ref" reference="lm:l2"} and [\[eq:rec2\]](#eq:rec2){reference-type="eqref" reference="eq:rec2"} $$[ h_{i0} , h_{js} ] = [ h_{i0} , [x_{js}^{+} , x_{j0}^{-}] ] = (\alpha_{i} , \alpha_{j}) ( h_{js} - h_{js} ) = 0.$$ ◻ **Lemma 11**. *$[h_{j1}, h_{i2}] = 0$ for all $i,j \in I$.* *Proof.* For any $j \in I$ we have by Lemmas [Lemma 5](#lm:h0eq){reference-type="ref" reference="lm:h0eq"}, [Lemma 6](#lm:l2){reference-type="ref" reference="lm:l2"} and relation [\[eq:mpy6\]](#eq:mpy6){reference-type="eqref" reference="eq:mpy6"} $$\begin{aligned} & [ h_{j1} , h_{i2} ] = [ h_{j1} , [ x_{i1}^{+}, x_{i1}^{-} ] ] = (-1) ( [ x_{i1}^{+}, [ x_{i1}^{-}, h_{j1} ] ] + (-1)^{|i|} [ x_{i1}^{-}, [ h_{j1}, x_{i1}^{+} ] ] ) = \nonumber \\ & [ x_{i1}^{+}, [ h_{j1}, x_{i1}^{-} ] ] + [ [ h_{j1}, x_{i1}^{+} ], x_{i1}^{-} ] = \nonumber \\ & [ x_{i1}^{+}, - c_{ij} x_{i2}^{-} - \frac{\hbar}{2} [ h_{j0}^2 , x_{i1}^{-} ] ] + [ c_{ij} x_{i2}^{+} + \frac{\hbar}{2} [ h_{j0}^2 , x_{i1}^{+} ], x_{i1}^{-} ] = \nonumber \\ & c_{ij} ( [x_{i2}^{+} , x_{i1}^{-} ] - [ x_{i1}^{+}, x_{i2}^{-} ] ) + \frac{\hbar}{2} ( [[ h_{j0}^2 , x_{i1}^{+} ], x_{i1}^{-} ] - [ x_{i1}^{+}, [ h_{j0}^2 , x_{i1}^{-} ] ] ) = \nonumber \\ & c_{ij} ( [x_{i2}^{+} , x_{i1}^{-} ] - [ x_{i1}^{+}, x_{i2}^{-} ] ) + \frac{\hbar}{2} ( [[ h_{j0}^2 , x_{i1}^{+} ], x_{i1}^{-} ] + [ h_{j0}^2, h_{i2} ] - [[ h_{j0}^2 , x_{i1}^{+} ], x_{i1}^{-} ] ) = \nonumber \\ & c_{ij} ( [x_{i2}^{+} , x_{i1}^{-} ] - [ x_{i1}^{+}, x_{i2}^{-} ] ). 
\label{eq:hi12f} \end{aligned}$$ On the other hand, by Lemmas [Lemma 6](#lm:l2){reference-type="ref" reference="lm:l2"}, [Lemma 10](#lm:cel02){reference-type="ref" reference="lm:cel02"} $$[ h_{j1} , h_{i2} ] = [ \widetilde{h}_{j1} , h_{i2} ] = [ \widetilde{h}_{j1} , [x_{i1}^{+},x_{i1}^{-}] ] = c_{ij} ( [x_{i1}^{+},x_{i2}^{-}] - [x_{i2}^{+},x_{i1}^{-}] ).$$ Thus we get $$[ h_{j1} , h_{i2} ] = 0.$$ ◻ **Lemma 12**. *The equation [\[rm:lsar\]](#rm:lsar){reference-type="eqref" reference="rm:lsar"} holds for all $i \in I$.* *Proof.* The result follows from [\[eq:hia12\]](#eq:hia12){reference-type="eqref" reference="eq:hia12"} and Lemma [Lemma 11](#lm:h12oe){reference-type="ref" reference="lm:h12oe"}. ◻ **Lemma 13**. *Equations [\[eq:Cer\]](#eq:Cer){reference-type="eqref" reference="eq:Cer"}, [\[eq:SYrc3\]](#eq:SYrc3){reference-type="eqref" reference="eq:SYrc3"} $($for $0 \le r+s \le 3)$ and [\[eq:SYrc2\]](#eq:SYrc2){reference-type="eqref" reference="eq:SYrc2"} $($for $0 \le r \le 1; \; s \in \mathbb{N}_{0})$ hold for $i=j$ $(i \in I)$.* *Proof.* Consider the equation [\[eq:Cer\]](#eq:Cer){reference-type="eqref" reference="eq:Cer"}. The proof follows from [\[eq:mpy1\]](#eq:mpy1){reference-type="eqref" reference="eq:mpy1"} and Lemmas [Lemma 10](#lm:cel02){reference-type="ref" reference="lm:cel02"}, [Lemma 11](#lm:h12oe){reference-type="ref" reference="lm:h12oe"}. Consider the equation [\[eq:SYrc3\]](#eq:SYrc3){reference-type="eqref" reference="eq:SYrc3"}. It follows from Lemma [Lemma 6](#lm:l2){reference-type="ref" reference="lm:l2"} for $0 \le r+s \le 2$. By Lemmas [Lemma 6](#lm:l2){reference-type="ref" reference="lm:l2"} and [Lemma 11](#lm:h12oe){reference-type="ref" reference="lm:h12oe"} we have $$0 = [ h_{i2} , \widetilde{h}_{i^{'}1} ] = [ [ x_{i2}^{+} , x_{i0}^{-} ] , \widetilde{h}_{i^{'}1} ] = ( \alpha_{i^{'}} , \alpha_{i} ) ( [ x_{i2}^{+} , x_{i1}^{-} ] - [ x_{i3}^{+} , x_{i0}^{-} ] ),$$ where if $c_{ii} \ne 0$, then $i^{'}=i$; otherwise $i^{'}=i+1$.
Apply $[\widetilde{h}_{i^{'}1}, \cdot]$ to $$[ x_{i2}^{+} , x_{i0}^{-} ] = [ x_{i1}^{+} , x_{i1}^{-} ] = [ x_{i0}^{+} , x_{i2}^{-} ].$$ Then $$0 = [ x_{i2}^{+} , x_{i1}^{-} ] - [ x_{i3}^{+} , x_{i0}^{-} ] = [ x_{i1}^{+} , x_{i2}^{-} ] - [ x_{i2}^{+} , x_{i1}^{-} ] = [ x_{i0}^{+} , x_{i3}^{-} ] - [ x_{i1}^{+} , x_{i2}^{-} ].$$ Thus we get $$[ x_{i3}^{+} , x_{i0}^{-} ] = [ x_{i2}^{+} , x_{i1}^{-} ] = [ x_{i1}^{+} , x_{i2}^{-} ] = [ x_{i0}^{+} , x_{i3}^{-} ].$$ Consider the equation [\[eq:SYrc2\]](#eq:SYrc2){reference-type="eqref" reference="eq:SYrc2"}. The case $0 \le r \le 1$, $s=0$ follows from [\[eq:mpy4\]](#eq:mpy4){reference-type="eqref" reference="eq:mpy4"} and Lemmas [Lemma 5](#lm:h0eq){reference-type="ref" reference="lm:h0eq"}, [Lemma 7](#lm:hx20){reference-type="ref" reference="lm:hx20"}. We prove by induction that [\[eq:SYrc2\]](#eq:SYrc2){reference-type="eqref" reference="eq:SYrc2"} holds for $0 \le r \le 1$ and all $s \in \mathbb{N}_{0}$. Suppose that [\[eq:SYrc2\]](#eq:SYrc2){reference-type="eqref" reference="eq:SYrc2"} holds for $0 \le r \le 1$ and $0 \le s \in \mathbb{N}_{0}$. Then we apply $[ \widetilde{h}_{i1} , \cdot]$ to get $$[ \widetilde{h}_{i1} , [h_{i,r+1} , x_{is}^{\pm}] ] = [ \widetilde{h}_{i1} , [h_{ir} , x_{i,s+1}^{\pm}] \pm \frac{c_{ii} \hbar}{2} \{h_{ir}, x_{is}^{\pm}\} ] \Leftrightarrow$$ $$[h_{i,r+1} , x_{i,s+1}^{\pm}] = [h_{ir} , x_{i,s+2}^{\pm}] \pm \frac{c_{ii} \hbar}{2} \{h_{ir}, x_{i,s+1}^{\pm}\}.$$ Thus by induction [\[eq:SYrc2\]](#eq:SYrc2){reference-type="eqref" reference="eq:SYrc2"} is true for $0 \le r \le 1$ and all $s \in \mathbb{N}_{0}$. ◻ **Lemma 14**. *Let $i, j \in I$. Equations [\[eq:Cer\]](#eq:Cer){reference-type="eqref" reference="eq:Cer"} and [\[eq:SYrc3\]](#eq:SYrc3){reference-type="eqref" reference="eq:SYrc3"} hold for $i=j$ and all $r,s \in \mathbb{N}_{0}$.* *Proof.* The proof is completely the same as in [@L93] (see considerations after Lemma 2.2) where the proof is based on mathematical induction. 
We therefore explain here only how to deal with equations (2.25)-(2.29) in [@L93], where some extra arguments are needed to deal with the case $|i| = \bar{1}$. First note that the base of the induction is provided by Lemma [Lemma 13](#lm:r233){reference-type="ref" reference="lm:r233"}. Throughout this proof, in each equation we use the same notation as in the aforementioned paper. From now on suppose that $|i| = \bar{1}$, and if $c_{ii} = 0$ then $i^{'} = i+1$; otherwise $i^{'} = i-1$. Consider the equation (2.25). We are able to define $\accentset{\approx}{h}_{i^{'}i,p}$ by Lemma [Lemma 2](#lm:thfdpr){reference-type="ref" reference="lm:thfdpr"} and to use Lemma [Lemma 4](#lm:ijtrancart){reference-type="ref" reference="lm:ijtrancart"} to get $$0 = [ h_{i^{'}p} , h_{i^{'}p} ] = \frac{(\alpha_{i^{'}} , \alpha_{i^{'}})}{(\alpha_{i} , \alpha_{i^{'}})} [ h_{ip} , h_{i^{'}p} ] = [ h_{ip} , \accentset{\approx}{h}_{i^{'}i,p} ].$$ Further steps are the same as in the paper. In the equation (2.26) we use $[\widetilde{h}_{i^{'}1}, \cdot]$. Further steps are the same. In the equation between (2.26) and (2.27): $$[ h_{i^{'},r-q} , h_{i^{'}, q+1} ] = \frac{(\alpha_{i^{'}} , \alpha_{i^{'}})}{(\alpha_{i} , \alpha_{i^{'}})} [ h_{i,r-q} , h_{i^{'}, q+1} ] = \frac{(\alpha_{i^{'}} , \alpha_{i^{'}})}{(\alpha_{i} , \alpha_{i^{'}})} [ h_{i,r-q} , \accentset{\approx}{h}_{i^{'}i,q+1} ].$$ Further steps are the same. Consider the equation (2.27). Here we use $[\widetilde{h}_{i^{'}1}, \cdot]$. Further steps are the same. Consider the equation (2.28).
We are able to define $\accentset{\approx}{h}_{i^{'}i,p}$ by Lemma [Lemma 2](#lm:thfdpr){reference-type="ref" reference="lm:thfdpr"} and to use Lemma [Lemma 4](#lm:ijtrancart){reference-type="ref" reference="lm:ijtrancart"} to get $$[ h_{i^{'}, p+1} , h_{i^{'}p} ] = \frac{(\alpha_{i^{'}} , \alpha_{i^{'}})}{(\alpha_{i} , \alpha_{i^{'}})} [ h_{i, p+1} , h_{i^{'}p} ] = \frac{(\alpha_{i^{'}} , \alpha_{i^{'}})}{(\alpha_{i} , \alpha_{i^{'}})} [ h_{i, p+1} , \accentset{\approx}{h}_{i^{'}i,p} ].$$ Further steps are the same. In the equation (2.29) we use the same arguments as for (2.28), together with Lemma [Lemma 9](#lm:hxinej){reference-type="ref" reference="lm:hxinej"}, to get $$[ h_{i^{'}, p+1} , h_{i^{'}p} ] = \frac{(\alpha_{i^{'}} , \alpha_{i^{'}})}{(\alpha_{i} , \alpha_{i^{'}})} [ h_{i^{'},p+1} , h_{ip} ].$$ Further steps are the same. It is easy to see that all considerations after the equation (2.29) are also true in our case. ◻ **Lemma 15**. *Suppose that $i,j \in I$ and $i \ne j$. The equation [\[eq:Cer\]](#eq:Cer){reference-type="eqref" reference="eq:Cer"} holds for any $r,s \in \mathbb{N}_{0}$.* *Proof.* The proof is the same as in [@MS22 Lemma 3.19]. ◻ **Lemma 16**. *The relation [\[eq:SYrc4\]](#eq:SYrc4){reference-type="eqref" reference="eq:SYrc4"} is satisfied for all $i,j \in I$ and $r,s \in \mathbb{N}_{0}$.* *Proof.* The case $i \ne j$ ($i,j \in I$) is proved in Lemma [Lemma 8](#lm:SYrc34){reference-type="ref" reference="lm:SYrc34"}. Suppose that $i=j$. Note that $|i| = \bar{0}$. We prove by induction on $r$ and $s \in \mathbb{N}_{0}$. The initial case $(r,s)=(0,0)$ is our initial assumption [\[eq:mpy5\]](#eq:mpy5){reference-type="eqref" reference="eq:mpy5"}. Let $X^{\pm}(r,s)$ be the result of subtracting the right-hand side of [\[eq:SYrc4\]](#eq:SYrc4){reference-type="eqref" reference="eq:SYrc4"} from the left-hand side.
We are able to define $\accentset{\approx}{h}_{i^{'}i,r}$ ($i \in I$, $|i^{'} - i|=1$, $r \in \mathbb{N}_{0}$) by Lemma [Lemma 2](#lm:thfdpr){reference-type="ref" reference="lm:thfdpr"}. Using the relation [\[eq:thxo\]](#eq:thxo){reference-type="eqref" reference="eq:thxo"} we have for an arbitrary $r \in \mathbb{N}_{0}$: $$0 = [ \accentset{\approx}{h}_{i^{'}i,r}, X^{\pm}(0,0) ] = \pm c_{i^{'} i} ( X^{\pm}(r,0) + X^{\pm}(0,r) ) = \pm 2 c_{i^{'} i} X^{\pm}(r,0) \Rightarrow X^{\pm}(r,0) = 0.$$ Now for an arbitrary $s \in \mathbb{N}_{0}$: $$0 = [ \accentset{\approx}{h}_{i^{'}i,s}, X^{\pm}(r,0) ] = \pm c_{i^{'} i} ( X^{\pm}(r+s,0) + X^{\pm}(r,s) ) = \pm c_{i^{'} i} X^{\pm}(r,s) \Rightarrow X^{\pm}(r,s) = 0.$$ The result follows by induction hypothesis. ◻ **Lemma 17**. *The equation [\[eq:SYrc2\]](#eq:SYrc2){reference-type="eqref" reference="eq:SYrc2"} holds for all $i, j \in I$ and $r,s \in \mathbb{N}_{0}$.* *Proof.* The case $i \ne j$ is proved in Lemma [Lemma 9](#lm:hxinej){reference-type="ref" reference="lm:hxinej"}. Suppose that $i=j$. We prove by induction on $r$ and $s \in \mathbb{N}_{0}$. Let $X^{\pm}(r,s)$ be the result of subtracting the right-hand side of [\[eq:SYrc2\]](#eq:SYrc2){reference-type="eqref" reference="eq:SYrc2"} from the left-hand side. The case $r=0$ and arbitrary $s \in \mathbb{N}_{0}$ follows by Lemma [Lemma 5](#lm:h0eq){reference-type="ref" reference="lm:h0eq"}. Suppose that $r \ge 1$ and $X^{\pm}(r,s) = 0$ for all $s \in \mathbb{N}_{0}$. The case $r=1$ follows from Lemma [Lemma 13](#lm:r233){reference-type="ref" reference="lm:r233"}. Note that $|i| = \bar{0}$. We consider the case $\pm = +$ (the case $\pm=-$ is proved in the same way).
Apply $[ x^{-}_{i1}, \cdot ]$ to [\[eq:SYrc4\]](#eq:SYrc4){reference-type="eqref" reference="eq:SYrc4"} (we can do it by Lemma [Lemma 16](#lm:xijxijii){reference-type="ref" reference="lm:xijxijii"}) to get by Lemma [Lemma 14](#lm:cartelrelii){reference-type="ref" reference="lm:cartelrelii"} $$[ x^{-}_{i1}, [x_{i,r+1}^{+}, x_{i,s}^{+}] - [x_{i,r}^{+}, x_{i,s+1}^{+}] ] = \frac{c_{ii} \hbar}{2} [ x^{-}_{i1}, \{ x_{i,r}^{+} , x_{i,s}^{+} \} ] \iff$$ $$[ h_{i,s+1} , x_{i,r+1}^{+} ] - [ h_{i,r+2} , x_{i,s}^{+} ] - [ h_{i,s+2} , x_{ir}^{+} ] + [ h_{i,r+1} , x_{i,s+1}^{+} ] =$$ $$\frac{c_{ii} \hbar}{2} (-1) ( \{ h_{i,r+1} , x_{is}^{+} \} + \{ h_{i,s+1} , x_{i,r}^{+} \} ) \iff$$ $$X^{+}(r+1,s) + X^{+}(s+1,r) = 0.$$ Select $s=0$ and note that by Lemma [Lemma 13](#lm:r233){reference-type="ref" reference="lm:r233"} $X^{+}(1,r) = 0$. Thus $X^{+}(r+1,0) = 0$. Suppose that $X^{\pm}(r+1,s) = 0$ for $s \ge 0$. Apply $[ \widetilde{h}_{i,1}, \cdot ]$ to $X^{\pm}(r+1,s) = 0$. By Lemma [Lemma 14](#lm:cartelrelii){reference-type="ref" reference="lm:cartelrelii"} we have $$[ \widetilde{h}_{i1} , [h_{i,r+2} , x_{is}^{\pm}] ] = [ \widetilde{h}_{i1} , [h_{i,r+1} , x_{i,s+1}^{\pm}] \pm \frac{c_{ii} \hbar}{2} \{h_{i,r+1}, x_{is}^{\pm}\} ] \iff$$ $$[h_{i,r+2} , x_{i,s+1}^{\pm}] = [h_{i,r+1} , x_{i,s+2}^{\pm}] \pm \frac{c_{ii} \hbar}{2} \{h_{i,r+1}, x_{i,s+1}^{\pm}\}.$$ Thus $X^{\pm}(r+1,s+1) = 0$. The result follows by induction hypothesis. ◻ **Lemma 18**. *Relation [\[eq:cssr\]](#eq:cssr){reference-type="eqref" reference="eq:cssr"} holds in the following cases:* 1. *$(r,s,t)=(0,0,z)$ ($z \in \mathbb{N}_{0}$);* 2. *$(r,s,t)=(1,0,z)$ ($z \in \mathbb{N}_{0})$.* *Equation [\[eq:cssra\]](#eq:cssra){reference-type="eqref" reference="eq:cssra"} is satisfied for* 3. *$(r_1, r_2, r_3, t) = (0,0,0,z)$ ($z \in \mathbb{N}_{0}$); [\[item:lemma8\]]{#item:lemma8 label="item:lemma8"}* 4. 
*$(r_1, r_2, r_3, t) = (1,0,0,z)$ ($z \in \mathbb{N}_{0}$).* *Proof.* The proof is the same as in [@GNW18 Lemma 2.33]: in both cases one obtains a system of homogeneous linear equations which admits only the trivial solution. ◻ **Lemma 19**. *Relations [\[eq:bssr\]](#eq:bssr){reference-type="eqref" reference="eq:bssr"} and [\[eq:cssr\]](#eq:cssr){reference-type="eqref" reference="eq:cssr"} are satisfied for all $i,j \in I$ and $r,s,t \in \mathbb{N}_{0}$.* *Proof.* The proof is the same as in [@MS22 Lemma 3.21]. ◻ **Lemma 20**. *The relation [\[eq:SYrc5\]](#eq:SYrc5){reference-type="eqref" reference="eq:SYrc5"} is satisfied for all $i,j \in I$ and $r,s \in \mathbb{N}_{0}$.* *Proof.* The proof is the same as in [@MS22 Lemma 3.22]. ◻ **Lemma 21**. *The relation [\[eq:cssra\]](#eq:cssra){reference-type="eqref" reference="eq:cssra"} is satisfied for all $r_1, r_2, r_3, t \in \mathbb{N}_{0}$.* *Proof.* Let $X^{\pm}(r_1,r_2,r_3,t)$ be the left-hand side of [\[eq:cssra\]](#eq:cssra){reference-type="eqref" reference="eq:cssra"}. We prove by induction on $r_1$, $r_2$, $r_3$, and $t \in \mathbb{N}_{0}$. The initial case $(r_1,r_2,r_3,t)=(0,0,0,0)$ is our initial assumption [\[eq:mpy8a\]](#eq:mpy8a){reference-type="eqref" reference="eq:mpy8a"}. We have proved in Lemma [Lemma 18](#lm:fassrс){reference-type="ref" reference="lm:fassrс"} that $X^{\pm}(0,0,0,t)=0$ for all $t \in \mathbb{N}_{0}$. Note that for any $r,t \in \mathbb{N}_{0}$ $$0 = [ \accentset{\approx}{h}_{m+n-1,m+n,r} , X^{\pm}(0,0,0,t) ] = \sum_{z=t}^{t+r} a_{z} X^{\pm}(0,0,0,z)$$ $$\pm (\alpha_{m+n-1}, \alpha_{m+n}) X^{\pm}(r,0,0,t) = \pm (\alpha_{m+n-1}, \alpha_{m+n}) X^{\pm}(r,0,0,t) \Rightarrow X^{\pm}(r,0,0,t) = 0,$$ where $a_z \in \Bbbk$ ($t \le z \le t+r$).
It follows that for any $r,s,t \in \mathbb{N}_{0}$ $$0 = [\accentset{\approx}{h}_{m+n-1,m+n,s} , X^{\pm}(r,0,0,t)] = \sum_{z=t}^{t+s} a_{z} X^{\pm} ( r,0,0,z )$$ $$\pm (\alpha_{m+n-1}, \alpha_{m+n}) ( X^{\pm} ( r+s,0,0,t ) + X^{\pm} ( r,s,0,t ) ) = \pm (\alpha_{m+n-1}, \alpha_{m+n}) X^{\pm} ( r,s,0,t ) \Rightarrow$$ $$X^{\pm} ( r,s,0,t ) = 0,$$ where $a_z \in \Bbbk$ ($t \le z \le t+s$). Suppose that $X^{\pm}(r,s,l,t)=0$ for all $r, s, t \in \mathbb{N}_{0}$ and all $0 \le l \le u$. Then $$0 = [\widetilde{h}_{m+n,1} , X^{\pm}(r,s,u,t)] = (\alpha_{m+n}, \alpha_{m+n-1}) X^{\pm} ( r,s,u,t+1 ) +$$ $$(\alpha_{m+n}, \alpha_{m+n}) ( X^{\pm}(r+1,s,u,t) + X^{\pm} (r,s+1,u,t) + X^{\pm} (r,s,u+1,t) ) =$$ $$(\alpha_{m+n}, \alpha_{m+n}) X^{\pm} (r,s,u+1,t) \Rightarrow X^{\pm} (r,s,u+1,t) = 0.$$ Thus the result follows by the induction hypothesis. ◻ **Lemma 22**. *The relation [\[eq:qssr\]](#eq:qssr){reference-type="eqref" reference="eq:qssr"} is satisfied for all $j \in I$ and $r,s \in \mathbb{N}_{0}$.* *Proof.* Let $X^{\pm}(r,0,0,s)$ be the left-hand side of [\[eq:qssr\]](#eq:qssr){reference-type="eqref" reference="eq:qssr"}. We prove by induction on $r$ and $s \in \mathbb{N}_{0}$. The initial case $(r,0,0,s)=(0,0,0,0)$ is our initial assumption [\[eq:mpy9\]](#eq:mpy9){reference-type="eqref" reference="eq:mpy9"}.
Note that if we apply $[\widetilde{h}_{m1} , \cdot]$, $[\widetilde{h}_{n1} , \cdot]$ and $[\widetilde{h}_{k1} , \cdot]$ ($m,n,k \in I$) to $X^{\pm}(r,0,0,s)$, we get from [\[eq:h1eq\]](#eq:h1eq){reference-type="eqref" reference="eq:h1eq"} $$0 = (\alpha_{m}, \alpha_{j-1}) X^{\pm}(r+1,0,0,s) + (\alpha_{m}, \alpha_{j}) ( X^{\pm}(r,1,0,s) + X^{\pm}(r,0,1,s) ) + (\alpha_{m}, \alpha_{j+1}) X^{\pm}(r,0,0,s+1),$$ $$0 = (\alpha_{n}, \alpha_{j-1}) X^{\pm}(r+1,0,0,s) + (\alpha_{n}, \alpha_{j}) ( X^{\pm}(r,1,0,s) + X^{\pm}(r,0,1,s) ) + (\alpha_{n}, \alpha_{j+1}) X^{\pm}(r,0,0,s+1),$$ $$0 = (\alpha_{k}, \alpha_{j-1}) X^{\pm}(r+1,0,0,s) + (\alpha_{k}, \alpha_{j}) ( X^{\pm}(r,1,0,s) + X^{\pm}(r,0,1,s) ) + (\alpha_{k}, \alpha_{j+1}) X^{\pm}(r,0,0,s+1).$$ Consider the matrix $$A = \begin{pmatrix}(\alpha_{m},\alpha_{j-1}) & (\alpha_{m},\alpha_{j}) & (\alpha_{m},\alpha_{j+1})\\ (\alpha_{n},\alpha_{j-1}) & (\alpha_{n},\alpha_{j}) & (\alpha_{n},\alpha_{j+1}) \\ (\alpha_{k},\alpha_{j-1}) & (\alpha_{k},\alpha_{j}) & (\alpha_{k},\alpha_{j+1}) \end{pmatrix}.$$ When the determinant of $A$ is nonzero, we have $X^{\pm}(r+1,0,0,s)=X^{\pm}(r,0,0,s+1)=0$. In order to determine, depending on the grading of the roots, when the determinant of $A$ is nonzero, it is sufficient to consider the following Dynkin (sub)diagrams: [$\romannumeral1$](#fig:dynkin-diagrams-type1), [$\romannumeral2a$](#fig:dynkin-diagrams-type2a) and [$\romannumeral2b$](#fig:dynkin-diagrams-type2b). Select $m=j-1, n=j, k=j+1$ for [$\romannumeral2a$](#fig:dynkin-diagrams-type2a) and [$\romannumeral2b$](#fig:dynkin-diagrams-type2b). Then $\det(A) = - ( ( \alpha_{j-1} , \alpha_{j-1}) + ( \alpha_{j+1} , \alpha_{j+1} ) )$. It is easy to see that the determinant is nonzero in that case. Note that case [$\romannumeral1$](#fig:dynkin-diagrams-type1) can occur only when $|I| > 3$. 1.
Select for [case 1](#fig:dynkin-diagrams-9) $|j-1|=|j+1|=\bar{0} \Rightarrow m=j-2, n=j, k=j+1$; $|j-1|=\bar{1}, |j+1|=\bar{0} \Rightarrow m=j-1, n=j, k=j+1$; $|j-1|=\bar{0}, |j+1|=\bar{1} \Rightarrow m=j-1, n=j, k=j+1$; $|j-1|=|j+1|=\bar{1} \Rightarrow m=j-2, n=j, k=j+1$. 2. Select for [case 2](#fig:dynkin-diagrams-10) $|j-1|=|j+1|=\bar{0} \Rightarrow m=j+2, n=j, k=j+1$; $|j-1|=\bar{1}, |j+1|=\bar{0} \Rightarrow m=j-1, n=j, k=j+1$; $|j-1|=\bar{0}, |j+1|=\bar{1} \Rightarrow m=j-1, n=j, k=j+1$; $|j-1|=|j+1|=\bar{1} \Rightarrow m=j+2, n=j, k=j+1$. [\[fig:dynkin-diagrams-9\]]{#fig:dynkin-diagrams-9 label="fig:dynkin-diagrams-9"} [\[fig:dynkin-diagrams-10\]]{#fig:dynkin-diagrams-10 label="fig:dynkin-diagrams-10"} It is easy to see that the determinant is nonzero in all these cases. The result follows by the induction hypothesis. ◻

D. Arnaudon, N. Crampé, L. Frappat, E. Ragoucy, Super Yangian $Y(osp(1|2))$ and the universal R-matrix of its quantum double, arXiv:math/0209167. https://doi.org/10.1007/s00220-003-0879-4

V. Drinfeld, A new realization of Yangians and quantized affine algebras, Sov. Math. Dokl. 36 (1988), no. 2, 212--216.

S. Gautam, V. Toledano Laredo, Yangians and quantum loop algebras, Selecta Math. New Ser. 19 (2013), 271--336. https://doi.org/10.1007/s00029-012-0114-2

N. Guay, H. Nakajima, C. Wendlandt, Coproduct for Yangians of affine Kac-Moody algebras, Adv. Math. 338 (2018), 865--911. https://doi.org/10.1016/j.aim.2018.09.013

L. Gow, Gauss decomposition of the Yangian $Y(gl_{m|n})$, Commun. Math. Phys. 276 (2007), no. 3, 799--825.

V.G. Kac, Lie superalgebras, Advances in Math. 26 (1977), 8--96. https://doi.org/10.1016/0001-8708(77)90017-2

S.Z. Levendorskii, On generators and defining relations of Yangians, J. Geom. Phys. 12 (1993), no. 1, 1--11.

I. Niven, Formal power series, The American Mathematical Monthly 76 (1969), no. 8, 871--889. https://doi.org/10.2307/2317940

A.I. Molev, A Drinfeld-type presentation of the orthosymplectic Yangians, arXiv:2112.10419. https://doi.org/10.48550/arXiv.2112.10419

A.I. Molev, Representations of the super Yangians of types A and C, arXiv:2110.12784. https://doi.org/10.48550/arXiv.2110.12784

A. Mazurenko, V.A. Stukopin, Classification of Hopf superalgebra structures on Drinfeld super Yangians, arXiv:2210.08365. https://doi.org/10.48550/arXiv.2210.08365

V. Stukopin, Yangians of Lie superalgebras of type $A(m, n)$, (Russian) Funktsional. Anal. i Prilozhen. 28 (1994), no. 3, 85--88; translation in Funct. Anal. Appl. 28 (1994), no. 3, 217--219.

V. Stukopin, Yangians of classical Lie superalgebras: basic constructions, quantum double and universal $R$-matrix, Proceedings of the Institute of Mathematics of NAS of Ukraine 50 (2004), no. 3, 1195--1201.

A. Tsymbaliuk, PBWD bases and shuffle algebra realizations for $U_v(L\mathfrak{sl}_n), U_{v_1,v_2}(L\mathfrak{sl}_n), U_v(L\mathfrak{sl}(m|n))$ and their integral forms, arXiv:1808.09536.

A. Tsymbaliuk, Shuffle algebra realizations of type A super Yangians and quantum affine superalgebras for all Cartan data, arXiv:1909.13732.

M. Ueda, Affine super Yangian, arXiv:1911.06666. https://doi.org/10.48550/arXiv.1911.06666

H. Yamane, Universal $R$-matrices for quantum groups associated to simple Lie superalgebras, Proc. Japan Acad. Ser. A Math. Sci. 67 (1991), 108--112. https://doi.org/10.3792/pjaa.67.108

H. Yamane, Quantized enveloping algebras associated with simple Lie superalgebras and their universal $R$-matrices, Publ. Res. Inst. Math. Sci. 30 (1994), 15--87. https://doi.org/10.2977/prims/1195166275

R.B. Zhang, The $gl(M|N)$ super Yangian and its finite-dimensional representations, Lett. Math. Phys. 37 (1996), 419--434. https://doi.org/10.1007/BF00312673

R.B. Zhang, Serre presentations of Lie superalgebras, arXiv:1101.3114. https://doi.org/10.48550/arXiv.1101.3114
{ "id": "2309.09735", "title": "Hopf superalgebra structure for Drinfeld super Yangian of Lie\n superalgebra $B(m,n)$", "authors": "Alexander Mazurenko", "categories": "math.QA", "license": "http://creativecommons.org/licenses/by/4.0/" }
--- abstract: | In this paper we investigate the relation between the Deddense and spectral radius algebras of two similar bounded linear operators. Also, the Deddense and spectral radius algebras related to rank one operators, operators that are similar to a rank one operator, operators that are majorized by a rank one operator, and quasi-isometry operators are characterized. Moreover, we apply the mentioned results to the class of weighted conditional type operators on the Hilbert space $L^2(\mu)$. address: - Z. Huang - Huxley Building, Department of Mathematics, South Kensington Campus, Imperial College London, London, UK - Y. Estaremi - Department of Mathematics, Faculty of Sciences, Golestan University, Gorgan, Iran. - S. Shamsigamchi - Department of Mathematics, Noor University (NU), Tehran, Iran. author: - "**Z. Huang, Y. Estaremi and N. Shamsi**" title: on the Deddense algebras of a class of bounded operators --- [^1] [^2] # **Introduction and Preliminaries** Let $\mathcal{H}$ be a complex Hilbert space, let $\mathcal{B}(\mathcal{H})$ be the Banach algebra of all bounded linear operators on $\mathcal{H}$, and let $I=I_{\mathcal{H}}$ denote the identity operator on $\mathcal{H}$. If $T\in \mathcal{B}(\mathcal{H})$, then $T^*$ is the adjoint of $T$. Let $\mathcal{C}$ be a class of operators on the Hilbert space $\mathcal{H}$ and $T\in \mathcal{B}(\mathcal{H})$. We say that $T$ is similar to an element of $\mathcal{C}$ if there exist $C\in \mathcal{C}\cap \mathcal{B}(\mathcal{H})$ and an invertible operator $A\in \mathcal{B}(\mathcal{H})$ such that $AT=CA$. In this case we say that $A$ is a similarity between $T$ and $C$, or that $T$ is similar to $C$ by $A$. Since $A$ is invertible, we have $T=A^{-1}CA$ and $C=ATA^{-1}$. Recall that a bounded linear operator $T$ is called quasinormal if $T$ commutes with $T^*T$, i.e., $TT^*T=T^*TT$.\ Let $A, T\in \mathcal{B}(\mathcal{H})$, where $A$ is a non-zero positive operator.
The operator $T$ is called an $A$-isometry if $T^*AT=A$. It is easy to see that if $T$ is an $A$-isometry, then $T^n$ is also an $A$-isometry, for every $n\in \mathbb{N}$. Following [@cs], for $n\in \mathbb{N}$, we say that $T$ is an $n$-quasi-isometry if $T$ is a $T^{*^n}T^n$-isometry. Hence $T$ is an $n$-quasi-isometry if and only if $T$ is an isometry on $\mathcal{R}(T^n)$. Moreover, $T$ is called a quasi-isometry if it is a 1-quasi-isometry.\ If $A$ is an invertible operator in $\mathcal{B}(\mathcal{H})$, then the collection $$\{T\in \mathcal{B}(\mathcal{H}): \sup_{n\in \mathbb{N}}\|A^nTA^{-n}\|<\infty\}$$ is called the Deddense algebra of $A$ and is denoted by $\mathcal{D}_A$. It is easy to see that $S\in \mathcal{D}_T$ if and only if there exists $M>0$ such that $$\|T^nSx\|\leq M\|T^nx\|, \ \ \forall n\in \mathbb{N}, \ \ x\in \mathcal{H}.$$ Let $T\in \mathcal{B}(\mathcal{H})$ and let $r(T)$ be the spectral radius of $T$. For $m\geq1$ we define $$\label{e1} R_m(T)=R_m:=\left(\sum^{\infty}_{n=0}d^{2n}_mT^{\ast ^n}T^n\right)^{\frac{1}{2}},$$ where $d_m=\frac{1}{1/m+r(T)}$. Since $d_m\uparrow 1/r(T)$, the sum in [\[e1\]](#e1){reference-type="ref" reference="e1"} is norm convergent and the operators $R_m$ are well defined, positive and invertible. The spectral radius algebra $\mathcal{B}_T$ of $T$ consists of all operators $S\in \mathcal{B}(\mathcal{H})$ such that $$\sup_{m\in \mathbb{N}}\|R_mSR^{-1}_m\|<\infty,$$ or equivalently $S\in \mathcal{B}_T$ if and only if there exists $M>0$ such that $$\sum^{\infty}_{n=0}d^{2n}_m\|T^nSx\|\leq M\sum^{\infty}_{n=0}d^{2n}_m\|T^nx\|, \forall m\in \mathbb{N}, \ \ \forall x\in \mathcal{H}.$$ The set $\mathcal{B}_T$ is an algebra and it contains all operators that commute with $T$, i.e., $\{T\}'\subseteq \mathcal{B}_T$. By the above definitions, for each $T\in \mathcal{B}(\mathcal{H})$ we have $$\{T\}'\subseteq \mathcal{D}_T\subseteq \mathcal{B}_T.$$ The spectral radius and Deddense algebras help us to find invariant and hyperinvariant subspaces for a bounded linear operator.
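For intuition, these definitions can be exercised in finite dimensions. The sketch below is an illustration only (none of it comes from the paper): it takes a $2\times 2$ nilpotent $T$, for which $r(T)=0$, $d_m=m$, and the series defining $R_m$ terminates after two terms, together with $S=I+2T\in \{T\}'$, and checks numerically that the norms $\|R_mSR_m^{-1}\|$ stay bounded.

```python
import numpy as np

T = np.array([[0.0, 1.0], [0.0, 0.0]])   # nilpotent: T @ T = 0, so r(T) = 0
S = np.eye(2) + 2.0 * T                  # S commutes with T, hence S lies in D_T

def R(m):
    """R_m = (sum_n d_m^{2n} T*^n T^n)^{1/2}; only n = 0, 1 contribute here."""
    d = 1.0 / (1.0 / m + 0.0)            # d_m = 1/(1/m + r(T)) = m for this T
    A = np.eye(2) + d**2 * (T.T @ T)
    w, V = np.linalg.eigh(A)             # positive square root via the spectrum
    return V @ np.diag(np.sqrt(w)) @ V.T

norms = [np.linalg.norm(R(m) @ S @ np.linalg.inv(R(m)), 2) for m in range(1, 51)]
print(max(norms))                        # the supremum is finite, so S is in B_T
```

By contrast, replacing $S$ with the transpose of $T$ gives $\|R_mT^{t}R_m^{-1}\|=\sqrt{1+m^2}\rightarrow\infty$, so $T^{t}\notin\mathcal{B}_T$ for this $T$.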
Many mathematicians have investigated the problem of finding invariant subspaces for special classes of bounded linear operators by studying invariant subspaces of spectral radius and Deddense algebras; for instance, see [@blpw; @ej1; @lp; @ml]. In the present paper we are concerned with Deddense and spectral radius algebras of some classes of bounded linear operators on Hilbert spaces. In Section 2, we first find the relation between the Deddense and spectral radius algebras of two similar bounded linear operators. Also, we investigate Deddense and spectral radius algebras of rank one operators and of operators that are majorized by a rank one operator. In the sequel we obtain the Deddense and spectral radius algebras of quasi-isometry operators. In Section 3 we apply the results of Section 2 to the class of WCT operators on the Hilbert space $L^2(\mu)$. # **Deddense and spectral radius algebras of similar operators** In this section we first investigate the relation between the Deddense and spectral radius algebras of two bounded linear operators that are similar by an invertible operator. **Proposition 1**. *Let $T, A, C\in\mathcal{B}(\mathcal{H})$ and $A$ be invertible such that $T$ is similar to $C$ by $A$. Then $T^n$ is similar to $C^n$ by $A$, for every $n\in \mathbb{N}$, and $A\mathcal{D}_T=\mathcal{D}_CA$ (or equivalently $A\mathcal{D}_TA^{-1}=\mathcal{D}_C$, $\mathcal{D}_T=A^{-1}\mathcal{D}_CA$). Also, $A\mathcal{B}_T=\mathcal{B}_{C}A$ (or equivalently $A\mathcal{B}_TA^{-1}=\mathcal{B}_C$, $\mathcal{B}_T=A^{-1}\mathcal{B}_CA$).* *Proof.* Let $S\in \mathcal{D}_T$; then there exists $M>0$ such that $$\|T^nSx\|\leq M\|T^nx\|, \ \ \forall n\in \mathbb{N}, \ \ x\in \mathcal{H}.$$ So we have $$\begin{aligned} \|C^nASA^{-1}x\|&=\|AT^nSA^{-1}x\|\\ &\leq \|A\|\|T^nSA^{-1}x\|\\ &\leq M\|A\|\|T^nA^{-1}x\|\\ &= M\|A\|\|A^{-1}C^nx\|\\ &\leq M\|A\|\|A^{-1}\|\|C^nx\|,\end{aligned}$$ for all $n\in \mathbb{N}, \ \ x\in \mathcal{H}$.
Thus we have $ASA^{-1}\in \mathcal{D}_C$ and consequently $A\mathcal{D}_TA^{-1}\subseteq\mathcal{D}_C$. Similarly we get the converse, i.e., $\mathcal{D}_C\subseteq A\mathcal{D}_TA^{-1}$, and so $A\mathcal{D}_TA^{-1}=\mathcal{D}_C$.\ Let $S\in \mathcal{B}_T$; then there exists $M>0$ such that $$\sum^{\infty}_{n=0}d^{2n}_m\|T^nSx\|\leq M\sum^{\infty}_{n=0}d^{2n}_m\|T^nx\|, \forall m\in \mathbb{N}, \ \ \forall x\in \mathcal{H}.$$ Hence we have $$\begin{aligned} \sum^{\infty}_{n=0}d^{2n}_m\|C^nASA^{-1}x\|&=\sum^{\infty}_{n=0}d^{2n}_m\|AT^nSA^{-1}x\|\\ &\leq\|A\|\sum^{\infty}_{n=0}d^{2n}_m\|T^nSA^{-1}x\|\\ &\leq M\|A\|\sum^{\infty}_{n=0}d^{2n}_m\|T^nA^{-1}x\|\\ &= M\|A\|\sum^{\infty}_{n=0}d^{2n}_m\|A^{-1}C^nx\|\\ &\leq M\|A\|\|A^{-1}\|\sum^{\infty}_{n=0}d^{2n}_m\|C^nx\|,\end{aligned}$$ for all $m\in \mathbb{N}$ and for all $x\in \mathcal{H}$. This means that $ASA^{-1}\in \mathcal{B}_C$ and so $A\mathcal{B}_TA^{-1}\subseteq\mathcal{B}_C$. Similarly one can prove the converse. Therefore $A\mathcal{B}_TA^{-1}=\mathcal{B}_C$. ◻ Let $X, Y, Z$ be Banach spaces and $\mathcal{B}(X,Y)$ be the Banach space of all bounded linear operators from $X$ into $Y$. Also, $\mathcal{R}(T)$, $\mathcal{N}(T)$ are the range and the kernel of $T$, respectively. If $T\in \mathcal{B}(X,Y)$ and $S\in \mathcal{B}(X,Z)$, then we say that $T$ majorizes $S$ if there exists $M>0$ such that $$\|Sx\|\leq M\|Tx\|, \ \ \ \ \ \text{for all} \ x\in X.$$ The following characterizations are known in the case of Hilbert spaces. **Theorem 2**. *[@do][\[t0\]]{#t0 label="t0"} For $T, S\in \mathcal{B}(\mathcal{H})$, the following conditions are equivalent:\ (1) $\mathcal{R}(S)\subseteq \mathcal{R}(T)$;\ (2) $T^*$ majorizes $S^*$;\ (3) $S=TU$ for some $U\in \mathcal{B}(\mathcal{H})$.* For $x,y\in \mathcal{H}$, $x\otimes y\in \mathcal{B}(\mathcal{H})$ and $\|x\otimes y\|=\|x\|\|y\|$, where $(x\otimes y)h=\langle h,y\rangle_{\mathcal{H}} x$, for every $h\in \mathcal{H}$.
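The basic facts about $x\otimes y$ just stated can be verified numerically. In the sketch below (plain numpy with randomly chosen test vectors, an illustration rather than part of the paper) the matrix of $x\otimes y$ is $xy^{*}$, and we check its action on a vector, the norm identity $\|x\otimes y\|=\|x\|\|y\|$, its rank, and the identity $(x\otimes y)^2=\langle x,y\rangle(x\otimes y)$ used in the sequel.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(4) + 1j * rng.standard_normal(4)   # arbitrary test vectors
y = rng.standard_normal(4) + 1j * rng.standard_normal(4)
h = rng.standard_normal(4) + 1j * rng.standard_normal(4)

K = np.outer(x, y.conj())                 # matrix of the rank one operator x (tensor) y

# (x ⊗ y)h = <h, y> x  (inner product linear in the first slot: np.vdot(y, h))
assert np.allclose(K @ h, np.vdot(y, h) * x)
# ||x ⊗ y|| = ||x|| ||y||, and the operator has rank one
assert np.isclose(np.linalg.norm(K, 2), np.linalg.norm(x) * np.linalg.norm(y))
assert np.linalg.matrix_rank(K) == 1
# (x ⊗ y)^2 = <x, y> (x ⊗ y)
assert np.allclose(K @ K, np.vdot(y, x) * K)
```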
It is known that all rank one operators are of the form $x\otimes y$ and that they generate the finite rank operators on $\mathcal{H}$. In the next theorem we determine the elements of the Deddense algebras of rank one operators on Hilbert spaces. Also, we obtain that $\mathcal{D}_{(x\otimes y)^n}=\mathcal{D}_{x\otimes y}$, for every $n\in \mathbb{N}$. **Theorem 3**. *Let $x,y\in \mathcal{H}$ and $T\in \mathcal{B}(\mathcal{H})$. Then $T\in \mathcal{D}_{x\otimes y}$ if and only if $x\otimes y$ majorizes $(x\otimes T^*y)$ if and only if there exists $M>0$ such that $|\langle z, T^*y\rangle|\leq M |\langle z, y\rangle|$, for all $z\in \mathcal{H}$. Moreover, $\mathcal{D}_{(x\otimes y)^n}=\mathcal{D}_{x\otimes y}$, for every $n\in \mathbb{N}$.* *Proof.* If $x, y \in \mathcal{H}$ are such that $\langle x, y\rangle\neq 0$, then $(x\otimes y)^n=(\langle x,y\rangle)^{n-1} (x\otimes y)$, for every $n\in \mathbb{N}$. If $\langle x, y\rangle=0$, then $(x\otimes y)^2=0$, and so $x\otimes y$ is nilpotent. Hence for the case $\langle x, y\rangle=0$, we have $$\mathcal{D}_{x\otimes y}=\{T\in \mathcal{B}(\mathcal{H}):\exists M>0, \|(x\otimes y)Tz\|\leq M \|(x\otimes y)z\|, \ \ \forall z\in \mathcal{H} \}.$$ And for the case $\langle x, y\rangle\neq 0$, $$\begin{aligned} \mathcal{D}_{x\otimes y}&=\{T\in \mathcal{B}(\mathcal{H}):\exists M>0, \|(x\otimes y)^nTz\|\leq M \|(x\otimes y)^nz\|, \ \ \forall n\in \mathbb{N}, \ z\in \mathcal{H}\}\\ &=\{T\in \mathcal{B}(\mathcal{H}):\exists M>0, |\langle x, y\rangle |^{n-1} \|(x\otimes y)Tz\|\leq M |\langle x, y\rangle |^{n-1} \|(x\otimes y)z\|, \ \ \forall n\in \mathbb{N}, \ z\in \mathcal{H}\}\\ &=\{T\in \mathcal{B}(\mathcal{H}):\exists M>0, \|(x\otimes y)Tz\|\leq M \|(x\otimes y)z\|, \ \ \forall z\in \mathcal{H} \}\\ &=\{T\in \mathcal{B}(\mathcal{H}):\exists M>0, |\langle Tz, y\rangle|\|x\|\leq M |\langle z, y\rangle|\|x\|, \ \ \forall z\in \mathcal{H} \}\\ &=\{T\in \mathcal{B}(\mathcal{H}):\exists M>0, |\langle z, T^*y\rangle|\leq M |\langle z, y\rangle|, \ \ \forall z\in \mathcal{H}\}.\\\end{aligned}$$ So $T\in \mathcal{D}_{x\otimes y}$ if and only if $(x\otimes y)$ majorizes $(x\otimes T^*y)$ if and only if there
exists $M>0$ such that $|\langle z, T^*y\rangle|\leq M |\langle z, y\rangle|$, for all $z\in \mathcal{H}$. By these observations and the fact that $(x\otimes y)^n=(\langle x,y\rangle)^{n-1} (x\otimes y)$, we get that $\mathcal{D}_{(x\otimes y)^n}=\mathcal{D}_{x\otimes y}$, for every $n\in \mathbb{N}$. ◻ By these observations we get that the Deddense algebra of $x\otimes y$, $\mathcal{D}_{x\otimes y}$, is independent of $x$. This implies that for all $x,y \in \mathcal{H}$ and all $S\in \mathcal{B}(\mathcal{H})$ with $\langle Sx, y\rangle \neq 0$, we have $\mathcal{D}_{x\otimes y}=\mathcal{D}_{Sx\otimes y}$. More generally, for all $x, z\in \mathcal{H}$ such that $x, z\notin \{y\}^{\perp}$, we have $\mathcal{D}_{x\otimes y}=\mathcal{D}_{z\otimes y}$.\ From Theorem 2.8 of [@lp], for unit vectors $x,y\in \mathcal{H}$, $$\label{e2} \mathcal{B}_{x\otimes y}=\{T\in \mathcal{B}(\mathcal{H}): y \ \text{is an eigenvector for} \ T^*\}.$$ In the next proposition we characterize elements of Deddense and spectral radius algebras of operators that are similar to rank one operators. **Proposition 4**. *Let $T, A\in\mathcal{B}(\mathcal{H})$, $x,y\in \mathcal{H}$ and $A$ be invertible such that $T$ is similar to $x\otimes y$ by $A$. Then $S\in \mathcal{D}_{T}$ if and only if $(A^{-1}x\otimes A^*y)$ majorizes $(A^{-1}x\otimes S^*A^*y)$ if and only if there exists $M>0$ such that $|\langle z, S^*A^*y\rangle|\leq M |\langle z, A^*y\rangle|$, for all $z\in \mathcal{H}$. Moreover, $$\mathcal{B}_{T}=\{S\in \mathcal{B}(\mathcal{H}): A^*y \ \text{is an eigenvector for} \ S^*\}.$$* *Proof.* Since $T$ is similar to $x\otimes y$ by $A$, we have $AT=(x\otimes y)A$ and so $$T=A^{-1}(x\otimes y)A=(A^{-1}x\otimes A^*y).$$ Hence by Theorem 2.8 of [@lp] and Theorem [Theorem 3](#t2.1){reference-type="ref" reference="t2.1"} we get the proof. ◻ In the next lemma we show that a bounded linear operator which is majorized by a rank one operator is itself of rank one. **Lemma 5**.
*Let $x,y\in \mathcal{H}$ and $T\in \mathcal{B}(\mathcal{H})$. If $x\otimes y$ majorizes $T$, then $T$ is a rank one operator and therefore there exists $h\in \mathcal{H}$ such that $T=h\otimes y$.* *Proof.* If $x\otimes y$ majorizes $T$, then by Theorem [\[t0\]](#t0){reference-type="ref" reference="t0"}, we have $$\mathcal{R}(T^*)\subseteq \mathcal{R}(y\otimes x)=\{\alpha y: \alpha\in \mathbb{C}\}.$$ Hence $T^*$ is a rank one operator and so there exists $h\in \mathcal{H}$ such that $T^*=y\otimes h$. Consequently $T=h\otimes y$. This completes the proof. ◻ In the next theorem we characterize Deddense and spectral radius algebras of operators majorized by rank one operators. **Theorem 6**. *Let $x,y\in \mathcal{H}$ and $T\in \mathcal{B}(\mathcal{H})$. If $x\otimes y$ majorizes $T$, then $S\in \mathcal{D}_{T}$ if and only if $h\otimes y$ majorizes $(h\otimes S^*y)$, for some $h\in \mathcal{H}$, if and only if there exists $M>0$ such that $|\langle z, S^*y\rangle|\leq M |\langle z, y\rangle|$, for all $z\in \mathcal{H}$. Also, $$\mathcal{B}_{T}=\{S\in \mathcal{B}(\mathcal{H}): y \ \text{is an eigenvector for} \ S^*\}=\mathcal{B}_{x\otimes y}.$$* *Proof.* Since $x\otimes y$ majorizes $T$, by Lemma [Lemma 5](#p2.3){reference-type="ref" reference="p2.3"} there exists $h\in \mathcal{H}$ such that $T=h\otimes y$. Therefore by Theorem [Theorem 3](#t2.1){reference-type="ref" reference="t2.1"} we get that $S\in \mathcal{D}_{T}$ if and only if $h\otimes y$ majorizes $(h\otimes S^*y)$, for some $h\in \mathcal{H}$, if and only if there exists $M>0$ such that $|\langle z, S^*y\rangle|\leq M |\langle z, y\rangle|$, for all $z\in \mathcal{H}$.
Also, by Theorem 2.8 of [@lp] we have $$\mathcal{B}_{T}=\{S\in \mathcal{B}(\mathcal{H}): y \ \text{is an eigenvector for} \ S^*\}=\mathcal{B}_{x\otimes y}.$$ ◻ Now we consider quasi-isometry operators on the Hilbert space $\mathcal{H}$ and characterize their Deddense and spectral radius algebras. Recall that the operator $T\in \mathcal{B}(\mathcal{H})$ is a quasi-isometry if and only if $T^*(T^*T)T=T^*T$. **Theorem 7**. *Let $S,T\in \mathcal{B}(\mathcal{H})$. If $T$ is a quasi-isometry, then $S\in \mathcal{D}_T$ if and only if $T$ majorizes $TS$. Moreover, if $r(T)<1$, then $\mathcal{B}_T=\mathcal{B}(\mathcal{H})$. Also, for the case $r(T)\geq 1$, $S\in \mathcal{B}_T$ if and only if there exists $M>0$ such that $$\|Sx\|+\|TSx\|\alpha_m\leq M(\|x\|+\|Tx\|\alpha_m), \forall m\in \mathbb{N}, \ \ \forall x\in \mathcal{H},$$ where $\alpha_m=\sum^{\infty}_{n=1}d^{2n}_m$.* *Proof.* If $T$ is a quasi-isometry, i.e., $T^*(T^*T)T=T^*T$, then for every $n\in \mathbb{N}$, $T^{*^n}T^n=T^*T$. This implies that for every $n\in \mathbb{N}$ and $x\in \mathcal{H}$, $$\|T^nx\|^2=\langle T^nx, T^nx\rangle=\langle T^{*^n}T^nx,x\rangle=\langle T^*Tx,x\rangle=\langle Tx,Tx\rangle=\|Tx\|^2.$$ Let $S\in \mathcal{B}(\mathcal{H})$. Then $\|T^nSx\|=\|TSx\|$ and $\|T^nx\|=\|Tx\|$, for each $x\in \mathcal{H}$ and $n\in \mathbb{N}$, and so $S\in \mathcal{D}_T$ if and only if there exists $M>0$ such that $$\|T^nSx\|\leq M\|T^nx\|, \ \ \forall n\in \mathbb{N}, \ \ x\in \mathcal{H}$$ if and only if there exists $M>0$ such that $$\|TSx\|\leq M\|Tx\|, \ \ \ \ \ \text{for all} \ x\in \mathcal{H}.$$ This implies that $S\in \mathcal{D}_T$ if and only if $T$ majorizes $TS$.\ By our assumptions we have $T^{*^n}T^n=T^*T$, for every $n\in \mathbb{N}$.
So $S\in \mathcal{B}_T$ if and only if there exists $M>0$ such that $$\sum^{\infty}_{n=0}d^{2n}_m\|T^nSx\|\leq M\sum^{\infty}_{n=0}d^{2n}_m\|T^nx\|, \forall m\in \mathbb{N}, \ \ \forall x\in \mathcal{H}$$ if and only if (using $\|T^nSx\|=\|TSx\|$ and $\|T^nx\|=\|Tx\|$ for $n\geq 1$) $$\|Sx\|+\|TSx\|\sum^{\infty}_{n=1}d^{2n}_m\leq M(\|x\|+\|Tx\|\sum^{\infty}_{n=1}d^{2n}_m), \forall m\in \mathbb{N}, \ \ \forall x\in \mathcal{H}.$$ By definition, $\{d_m\}$ is an increasing sequence convergent to $\frac{1}{r(T)}$; indeed $$\sup_{m\in \mathbb{N}}d_m=\frac{1}{r(T)}.$$ Since $T$ is a bounded operator, $r(T)<\infty$. Hence the series $\alpha_m=\sum^{\infty}_{n=1}d^{2n}_m$ is convergent for all $m\in \mathbb{N}$ if and only if $r(T)\geq1$; otherwise it is divergent for all sufficiently large $m$. Hence for the case $r(T)\geq1$, $S\in \mathcal{B}_T$ if and only if there exists $M>0$ such that $$\|Sx\|+\|TSx\|\alpha_m\leq M(\|x\|+\|Tx\|\alpha_m), \forall m\in \mathbb{N}, \ \ \forall x\in \mathcal{H}.$$ Also, for the case $r(T)<1$, $\sum^{\infty}_{n=1}d^{2n}_m=\infty$ for all sufficiently large $m$, and consequently $\mathcal{B}_T=\mathcal{B}(\mathcal{H})$. ◻ # **Applications to WCT operators** Let $(X, \mathcal{F}, \mu)$ be a complete $\sigma$-finite measure space. All statements about sets and functions are to be interpreted as holding up to sets of measure zero. For a $\sigma$-subalgebra $\mathcal{A}$ of $\mathcal{F}$, the conditional expectation operator associated with $\mathcal{A}$ is the mapping $f\rightarrow E^{\mathcal{A}}f,$ defined for all non-negative $f$ as well as for all $f\in L^2(\mathcal{F})=L^2(X, \mathcal{F}, \mu)$, where $E^{\mathcal{A}}f$ is the unique $\mathcal{A}$-measurable function satisfying $$\int_{A}(E^{\mathcal{A}}f)d\mu=\int_{A}fd\mu \ \ \ \ \ \ \ \forall A\in \mathcal{A}.$$ We will often write $E$ for $E^{\mathcal{A}}$.
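On a finite measure space the conditional expectation just defined is simply the $\mu$-weighted average over the atoms of $\mathcal{A}$. The sketch below (an assumed five-point space with an assumed two-atom partition, for illustration only) verifies the defining averaging property, together with the module property $E(fg)=E(f)g$ for $\mathcal{A}$-measurable $g$ and the idempotence $E\circ E=E$.

```python
import numpy as np

# Assumed finite measure space: X = {0,...,4} with weights mu, and a
# sub-sigma-algebra A generated by the partition {0,1}, {2,3,4}.
mu = np.array([1.0, 2.0, 1.0, 3.0, 1.0])
atoms = [np.array([0, 1]), np.array([2, 3, 4])]

def E(f):
    """Conditional expectation: mu-weighted average of f over each atom of A."""
    out = np.empty_like(f, dtype=float)
    for B in atoms:
        out[B] = np.dot(f[B], mu[B]) / mu[B].sum()
    return out

rng = np.random.default_rng(1)
f = rng.standard_normal(5)
g = E(rng.standard_normal(5))          # g is A-measurable (constant on atoms)

# Defining property: the integral of E(f) over each A-set equals that of f.
for B in atoms:
    assert np.isclose(np.dot(E(f)[B], mu[B]), np.dot(f[B], mu[B]))
# Module property E(fg) = E(f) g, and idempotence E(E(f)) = E(f).
assert np.allclose(E(f * g), E(f) * g)
assert np.allclose(E(E(f)), E(f))
```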
This operator will play a major role in our work and we list here some of its useful properties: $\bullet$  If $g$ is $\mathcal{A}$-measurable, then $E(fg)=E(f)g$. $\bullet$  If $f\geq 0$, then $E(f)\geq 0$; if $E(|f|)=0$, then $f=0$. $\bullet$  $|E(fg)|\leq (E(|f|^2))^{\frac{1}{2}}(E(|g|^{2}))^{\frac{1}{2}}$. $\bullet$  For each $f\geq 0$, $z(E(f))$ is the smallest $\mathcal{A}$-set containing $z(f)$, where $z(f)=\{x\in X: f(x)\neq 0\}$. A detailed discussion and verification of most of these properties may be found in [@rao]. **Definition 8**. Let $(X,\mathcal{F},\mu)$ be a $\sigma$-finite measure space and $\mathcal{A}$ be a $\sigma$-sub-algebra of $\mathcal{F}$ such that $(X,\mathcal{A},\mu_{\mathcal{A}})$ is also $\sigma$-finite. Let $E$ be the conditional expectation operator relative to $\mathcal{A}$. If $u,w:X\rightarrow \mathbb{C}$ are $\mathcal{F}$-measurable functions such that $uf$ is conditionable (i.e., $E(uf)$ exists) and $wE(uf)\in L^{2}(\mathcal{F})$ for all $f\in L^{2}(\mathcal{F})$, then the corresponding weighted conditional type (or briefly WCT) operator is the linear transformation $M_wEM_u:L^2(\mathcal{F})\rightarrow L^{2}(\mathcal{F})$ defined by $f\rightarrow wE(uf)$. As was proved in [@ej], the WCT operator $M_wEM_u$ on $L^2(\mathcal{F})$ is bounded if and only if $(E(|u|^{2}))^{\frac{1}{2}}(E(|w|^2))^{\frac{1}{2}}\in L^{\infty}(\mathcal{A})$. Now in the next theorem we characterize the Deddense algebra of the WCT operator $T=M_{\bar{u}}EM_u$. **Theorem 9**. *Let $T=M_{\bar{u}}EM_u$ and $S\in \mathcal{B}(L^2(\mathcal{F}))$.
Then $S\in \mathcal{D}_T$ if and only if $PSP=PS$ and $XP=PSP\in \mathcal{D}_{M_{E(|u|^2)}}$, in which $P=P_{\mathcal{N}(EM_u)^{\perp}}$.* *Proof.* We consider the Hilbert space $L^2(\mathcal{F})$ as a direct sum $\mathcal{H}_1\oplus \mathcal{H}_2$, in which $$\mathcal{H}_2=\mathcal{N}(EM_u)=\{f\in L^2(\mathcal{F}): E(uf)=0\}$$ and $$\mathcal{H}_1=\mathcal{H}^{\perp}_2=\overline{\bar{u}L^2(\mathcal{A})}.$$ One easily gets that $\mathcal{N}(EM_u)=\mathcal{N}(M_{\bar{u}}EM_u)$, because $$\langle M_{\bar{u}}EM_uf,f\rangle=\|EM_uf\|^2.$$ Since $T=M_{\bar{u}}EM_u$ is bounded, $E(|u|^2)\in L^{\infty}(\mathcal{A})$ and so $E(uf)\in L^2(\mathcal{A})$, for all $f\in L^2(\mathcal{F})$. This implies that $TP=M_{E(|u|^2)}P$ and consequently $T^nP=M_{(E(|u|^2))^n}P$, for every $n\in \mathbb{N}$. Thus, with respect to this decomposition, the block matrix of $T^n$ is $T^n=\left( \begin{array}{cc} M_{(E(|u|^2))^n} & 0 \\ 0 & 0 \\ \end{array} \right)$, and any $S\in \mathcal{B}(L^2(\mathcal{F}))$ has block matrix $S=\left( \begin{array}{cc} X & Y \\ Z & W \\ \end{array} \right)$. Hence for every $f\in L^2(\mathcal{F})$, we have $f=Pf+(P^{\perp}f)=\left( \begin{array}{cc} Pf \\ P^{\perp}f \\ \end{array} \right)$, in which $P=P_{\mathcal{H}_1}$ and $P^{\perp}=I-P$. So we have $T^nSf= \left( \begin{array}{cc} M_{(E(|u|^2))^n}XPf+M_{(E(|u|^2))^n}YP^{\perp}f \\ 0 \\ \end{array} \right)$. Therefore $S\in \mathcal{D}_T$ if and only if there exists $M>0$ such that $$\|M_{(E(|u|^2))^n}XPf+M_{(E(|u|^2))^n}YP^{\perp}f\|\leq M\|M_{(E(|u|^2))^n}Pf\|, \ \ \ \forall n\in \mathbb{N}, \ \ f\in L^2(\mathcal{F}).$$ By Theorem 2.4 of [@ej1] we have $S\in \mathcal{B}_T$ if and only if $\mathcal{N}(EM_u)$ is invariant under $S$, and so $S\in \mathcal{B}_T$ if and only if $P^{\perp}SP^{\perp}=SP^{\perp}$ if and only if $PSP=PS$. Moreover, we know that $\mathcal{D}_T\subseteq \mathcal{B}_T$. This means that if $S\in \mathcal{D}_T$, then it has to have the property $PSP=PS$. So we have $Y=0$.
Also, $$XP=PSP^2=PSP=P^2SP=PX=PXP.$$ Hence $S\in \mathcal{D}_T$ if and only if there exists $M>0$ such that $$\|M_{(E(|u|^2))^n}PSPf\|\leq M\|M_{(E(|u|^2))^n}Pf\|, \ \ \ \forall n\in \mathbb{N}, \ \ f\in L^2(\mathcal{F}).$$ By all these observations we get that $S\in \mathcal{D}_T$ if and only if $PSP=PS$ and $XP=PSP\in \mathcal{D}_{M_{E(|u|^2)}}$. ◻ We then have the following corollary. **Corollary 10**. *Let $T=M_{a\bar{u}}EM_u\in \mathcal{B}(L^2(\mathcal{F}))$, $a$ be an $\mathcal{A}$-measurable function and $S\in \mathcal{B}(L^2(\mathcal{F}))$. Then $S\in \mathcal{D}_T$ if and only if $PSP=PS$ and $XP=PSP\in \mathcal{D}_{M_{aE(|u|^2)}}$, in which $P=P_{\mathcal{N}(EM_u)^{\perp}}$.* Let $S=S(E(|u|^2))$, $S_0=S(E(u))$, $G=S(E(|w|^2))$, $G_0=S(w)$, $F=S(E(uw))$. By the conditional type Hölder inequality we get that $F\subseteq S\cap G$ and $S(wE(uf))\subseteq S\cap G$, for all $f\in L^2(\mu)$, and also by the elementary properties of the conditional expectation $E$ we have $S_0\subseteq S$ and $G_0\subseteq G$.\ For the WCT operator $T=M_wEM_u$, as a bounded linear operator on the Hilbert space $L^2(\mu)$, we have $T^*=M_{\bar{u}}EM_{\bar{w}}$ and the following properties hold [@ej]: $$T^*T=M_{\bar{u}E(|w|^2)}EM_u, \ \ \ \ \ \ \ TT^*=M_{wE(|u|^2)}EM_{\bar{w}},$$ $$TT^*T=M_{E(|u|^2)E(|w|^2)}M_{w}EM_{u}=M_{E(|u|^2)E(|w|^2)}T, \ \ \ \ \ \ \ T^*TT=M_{E(wu)E(|w|^2)}M_{\bar{u}}EM_u.$$ **Proposition 11**. *The WCT operator $T=M_wEM_u:L^2(\mu)\rightarrow L^2(\mu)$ is quasinormal if and only if $T=M_{v\bar{u}}EM_u$, in which $v=\frac{E(uw)}{E(|u|^2)}\chi_{S\cap G}$.* *Proof.* The WCT operator $T=M_wEM_u$ is quasinormal if and only if $TT^*T=T^*TT$ if and only if $$E(|u|^2)E(|w|^2)wE(uf)=E(wu)E(|w|^2)\bar{u}E(uf), \ \ \ \ \text{for all} \ \ \ f\in L^2(\mu).$$ Since $T$ is bounded, $E(|u|^2)E(|w|^2)\in L^{\infty}(\mathcal{A})$ and $\|T\|=\|(E(|u|^2))^{\frac{1}{2}}(E(|w|^2))^{\frac{1}{2}}\|_{\infty}$, [@ej].
By the fact that $(X,\mathcal{A}, \mu_{\mathcal{A}})$ is a $\sigma$-finite measure space, we have an increasing sequence $\{A_n\}_{n\in \mathbb{N}}\subseteq \mathcal{A}$, with $0<\mu(A_n)<\infty$ and $X=\cup_{n\in \mathbb{N}}A_n$. Now we set $f_n=\bar{u}\sqrt{E(|w|^2)}\chi_{A_n}$, for every $n\in \mathbb{N}$. So $$\begin{aligned} \|f_n\|^2_{2}&=\int_X|u|^2E(|w|^2)\chi_{A_n}d\mu\\ &=\int_XE(|u|^2)E(|w|^2)\chi_{A_n}d\mu\\ &\leq \|E(|u|^2)E(|w|^2)\|_{\infty}\mu(A_n)\\ &<\infty,\end{aligned}$$ and hence $f_n\in L^2(\mu)$, for all $n\in \mathbb{N}$.\ Suppose that $T=M_wEM_u$ is quasinormal; then by the above observations we have $$\begin{aligned} E(|u|^2)E(|w|^2)wE(|u|^2)\sqrt{E(|w|^2)}\chi_{A_n}&=E(|u|^2)E(|w|^2)wE(uf_n)\\ &=E(wu)E(|w|^2)\bar{u}E(uf_n)\\ &=E(wu)E(|w|^2)\bar{u}E(|u|^2)\sqrt{E(|w|^2)}\chi_{A_n}.\end{aligned}$$ Moreover, by taking the limit we get $$\begin{aligned} E(|u|^2)E(|w|^2)wE(|u|^2)\sqrt{E(|w|^2)}&=\lim _{n\rightarrow \infty}E(|u|^2)E(|w|^2)wE(|u|^2)\sqrt{E(|w|^2)}\chi_{A_n}\\ &=\lim _{n\rightarrow \infty}E(wu)E(|w|^2)\bar{u}E(|u|^2)\sqrt{E(|w|^2)}\chi_{A_n}\\ &=E(wu)E(|w|^2)\bar{u}E(|u|^2)\sqrt{E(|w|^2)}.\end{aligned}$$ So we have $$E(|u|^2)E(|w|^2)wE(|u|^2)\sqrt{E(|w|^2)}=E(wu)E(|w|^2)\bar{u}E(|u|^2)\sqrt{E(|w|^2)}.$$ Therefore $$wE(|u|^2)\chi_{S\cap G}=E(uw)\bar{u}\chi_{S\cap G},$$ and hence $w=\frac{E(uw)}{E(|u|^2)}\bar{u}\chi_{S\cap G}$. Consequently we have $$T(f)=wE(uf)=w\chi_{S\cap G} E(uf)=\frac{E(uw)}{E(|u|^2)}\chi_{S\cap G}\bar{u}E(uf),$$ that is, $T=M_{v\bar{u}}EM_u$, in which $v=\frac{E(uw)}{E(|u|^2)}\chi_{S\cap G}$. ◻ In the next theorem we characterize the Deddense algebra of quasinormal WCT operators. **Theorem 12**. *Let the WCT operator $T=M_wEM_u:L^2(\mu)\rightarrow L^2(\mu)$ be quasinormal and $S\in \mathcal{B}(L^2(\mathcal{F}))$.
Then $S\in \mathcal{D}_T$ if and only if $PSP=PS$ and $XP=PSP\in \mathcal{D}_{M_{vE(|u|^2)}}$, in which $P=P_{\mathcal{N}(EM_u)^{\perp}}$ and $v=\frac{E(uw)}{E(|u|^2)}\chi_{S\cap G}$.* *Proof.* It is a direct consequence of Corollary [Corollary 10](#cor3.1){reference-type="ref" reference="cor3.1"} and Proposition [Proposition 11](#p3.1){reference-type="ref" reference="p3.1"}. ◻ Here we provide two technical lemmas for later use. **Lemma 13**. *Let $g\in L^{\infty}(\mathcal{A})$ and let $T:L^{2}(\Sigma)\rightarrow L^{2}(\Sigma)$ be the WCT operator $T=M_wEM_u$. Then $M_gT=0$ if and only if $g=0$ on $S(E(|w|^{2})E(|u|^{2}))=S\cap G$.\ * *Proof.* By Theorem 2.1 of [@ej] we have $$\|M_gT\|^{2}=\||g|^{2}E(|w|^{2})E(|u|^{2})\|_{\infty}.$$ Hence $M_gT=0$ if and only if $$\||g|^{2}E(|w|^{2})E(|u|^{2})\|_{\infty}=0$$ if and only if $g=0$ on $S(E(|w|^{2})E(|u|^{2}))$. ◻ As we know from [@es], the Moore-Penrose inverse of the WCT operator $T=M_wEM_u$ is $$T^{\dagger}=M_{\frac{\chi_{S\cap G}}{E(|u|^2)E(|w|^2)}}M_{\bar{u}}EM_{\bar{w}}=M_{\frac{\chi_{S\cap G}}{E(|u|^2)E(|w|^2)}}T^*.$$ **Lemma 14**. *Let $T=M_wEM_u$. Then $T^{\dagger}=T^*$ if and only if $E(|u|^2)E(|w|^2)=\chi_{S\cap G}$.* *Proof.* It is obvious that $$T^{\dagger}=M_{\frac{\chi_{S\cap G}}{E(|u|^2)E(|w|^2)}}T^*$$ and so $T^{\dagger}=T^*$ if and only if $(1-M_{\frac{\chi_{S\cap G}}{E(|u|^2)E(|w|^2)}})T^*=0$. Therefore by Lemma [Lemma 13](#l1){reference-type="ref" reference="l1"} we get that $T^{\dagger}=T^*$ if and only if $E(|u|^2)E(|w|^2)=1$, $\mu$-a.e., on $S\cap G$ if and only if $E(|u|^2)E(|w|^2)=\chi_{S\cap G}$, $\mu$-a.e. ◻ We recall that $T\in \mathcal{B}(\mathcal{H})$ is a partial isometry if and only if $TT^*T=T$. From Theorem 3.2 of [@ej] we have that $T=M_wEM_u$ is a partial isometry if and only if $E(|u|^2)E(|w|^2)=\chi_{S\cap G}$, $\mu$-a.e., on $S\cap G$. Now we have the following corollary. **Corollary 15**. *Let $T=M_wEM_u\in \mathcal{B}(\mathcal{H})$.
Then $T$ is a partial isometry if and only if $T^{\dagger}=T^*$.* Now in the next theorem we characterize quasi-isometry WCT operators. **Theorem 16**. *Let $T=M_wEM_u$. Then for each $n\in \mathbb{N}$, $T$ is an $n$-quasi-isometry if and only if $T$ is a $1$-quasi-isometry if and only if $|E(uw)|= 1$, $\mu$-a.e.* *Proof.* Let $T=M_wEM_u$. Then for each $n\in \mathbb{N}$, $$T^{*^n}T^n=T^{*^{n+1}}T^{n+1}$$ if and only if $$M_{E(|w|^2)|E(uw)|^{2(n-1)}}M_{\bar{u}}EM_u=M_{E(|w|^2)|E(uw)|^{2n}}M_{\bar{u}}EM_u$$ if and only if $$M_{E(|w|^2)|E(uw)|^{2(n-1)}(1-|E(uw)|^2)}M_{\bar{u}}EM_u=0.$$ Since $E(|w|^2)|E(uw)|^{2(n-1)}(1-|E(uw)|^2)$ is an $\mathcal{A}$-measurable function, by Lemma [Lemma 13](#l1){reference-type="ref" reference="l1"} we get that $$M_{E(|w|^2)|E(uw)|^{2(n-1)}(1-|E(uw)|^2)}M_{\bar{u}}EM_u=0$$ if and only if $E(|w|^2)|E(uw)|^{2(n-1)}(1-|E(uw)|^2)=0$, $\mu$-a.e., on $S$ if and only if $|E(uw)|=1$, $\mu$-a.e., on $F=S(E(uw))$ if and only if $|E(uw)|=1$, $\mu$-a.e.\ Moreover, for $n=1$, $T^{*}T=T^{*^{2}}T^{2}$ if and only if $E(|w|^2)(1-|E(uw)|^2)=0$, $\mu$-a.e., on $S$, if and only if $1-|E(uw)|^2=0$, $\mu$-a.e., on $S\cap G$. Since $F\subseteq S\cap G$, we have $1-|E(uw)|^2=0$, $\mu$-a.e., on $S\cap G$ if and only if $1-|E(uw)|^2=0$, $\mu$-a.e., on $F$ if and only if $|E(uw)|=1$, $\mu$-a.e. ◻ **Theorem 17**. *Let $T=M_wEM_u$ be a WCT operator on $L^2(\mu)$, $|E(uw)|=1$, $\mu$-a.e., on $F=S(E(uw))$ and $S\in \mathcal{B}(L^2(\mu))$. Then $S\in \mathcal{D}_{T}$ if and only if $T$ majorizes $TS$. Also, $S\in \mathcal{B}_T$ if and only if there exists $M>0$ such that $$\|Sf\|+\|TSf\|\alpha_m\leq M(\|f\|+\|Tf\|\alpha_m), \ \ \ \forall m\in \mathbb{N}, \ \ \forall f\in L^2(\mu),$$ where $\alpha_m=\sum^{\infty}_{n=1}d^{2n}_m$.* *Proof.* We know that $T^nf=E(uw)^{n-1}Tf$, for all $n\in \mathbb{N}$ and $f\in L^2(\mu)$. If $|E(uw)|=1$, $\mu$-a.e., on $F=S(E(uw))$, then $\|T^nf\|=\|Tf\|$, for all $n\in \mathbb{N}$ and $f\in L^2(\mu)$.
Hence for $S\in \mathcal{B}(L^2(\mu))$, we have $S\in \mathcal{D}_{T}$ if and only if $T$ majorizes $TS$. Similarly, we get that $r(T)=1$ and so by Theorem [Theorem 7](#t2.6){reference-type="ref" reference="t2.6"}, $S\in \mathcal{B}_T$ if and only if there exists $M>0$ such that $$\|Sf\|+\|TSf\|\alpha_m\leq M(\|f\|+\|Tf\|\alpha_m), \ \ \ \forall m\in \mathbb{N}, \ \ \forall f\in L^2(\mu).$$ ◻ Now by Theorems [Theorem 16](#t3.5){reference-type="ref" reference="t3.5"} and [Theorem 17](#t3.6){reference-type="ref" reference="t3.6"} we have the following corollary. **Corollary 18**. *Let $T=M_wEM_u$ be a WCT operator on $L^2(\mu)$ and $S\in \mathcal{B}(L^2(\mu))$. If $T$ is an $n$-quasi-isometry for some $n\in \mathbb{N}$ (equivalently, for every $n\in \mathbb{N}$, by Theorem [Theorem 16](#t3.5){reference-type="ref" reference="t3.5"}), then $S\in \mathcal{D}_{T}$ if and only if $T$ majorizes $TS$. Also, $S\in \mathcal{B}_T$ if and only if there exists $M>0$ such that $$\|Sx\|+\|TSx\|\alpha_m\leq M(\|x\|+\|Tx\|\alpha_m), \ \ \ \forall m\in \mathbb{N}, \ \ \forall x\in \mathcal{H},$$ where $\alpha_m=\sum^{\infty}_{n=1}d^{2n}_m$.* **Theorem 19**. *Let $f_1,f_2\in L^2(\mu)$ and $T=M_wEM_u$. If $f_1\otimes f_2$ majorizes $T$, then $S\in \mathcal{D}_{T}$ if and only if $g\otimes f_2$ majorizes $(g\otimes S^*f_2)$, for some $g\in L^2(\mu)$ if and only if there exists $M>0$ such that $\int_X h\overline{S^*(f_2)}d\mu\leq M \int_X h\overline{f_2}d\mu$, for all $h\in L^2(\mu)$. Also, $$\mathcal{B}_{T}=\{S\in \mathcal{B}(L^2(\mu)): h \ \text{is an eigenvector for} \ S^*\}=\mathcal{B}_{f\otimes g}.$$* *Proof.* The inner product of the Hilbert space $L^2(\mu)$ is given by $$\langle f, g\rangle=\int_Xf\bar{g}d\mu, \ \ \ \text{for all} \ \ f,g \in L^2(\mu).$$ Hence by Theorem [Theorem 6](#t2.2){reference-type="ref" reference="t2.2"} we get the proof. ◻ # **Declarations** **Ethical Approval** Not applicable. **Competing interests** The authors declare that there is no conflict of interest.
**Authors' contributions** The authors declare that all the research and writing in the manuscript has been done together. **Funding** Not applicable. **Availability of data and materials** Our manuscript has no associated data. A. Biswas, A. Lambert, S. Petrovic and B. Weinstock, On spectral radius algebras, Oper. Matrices **2** (2008), 167--176. G. Cassier and L. Suciu, Mapping theorems and similarity to contractions for classes of $A$-contractions, Hot Topics in Operator Theory, Theta Series in Advanced Mathematics, (2008), 39--58. R. Douglas, On majorization, factorization and range inclusion of operators on Hilbert space, Proc. Amer. Math. Soc. **17** (1966), 413--415. Y. Estaremi and M. R. Jabbarzadeh, Weighted Lambert type operators on $L^p$-spaces, Oper. Matrices **1** (2013), 101--116. Y. Estaremi and M. R. Jabbarzadeh, Spectral radius algebras of WCE operators, Oper. Matrices **11** (2017), 337--346. Y. Estaremi and S. Shamsigamchi, Unbounded WCT operators and applications to linear equations, Comp. Appl. Math. **238** (2022). M. Lacruz, Invariant subspaces and Deddens algebras, Expositiones Mathematicae **33** (2015), 116--120. A. Lambert and S. Petrovic, Beyond hyperinvariance for compact operators, J. Functional Analysis **219** (2005), 93--108. M. M. Rao, Conditional Measures and Applications, Marcel Dekker, New York, 1993.
Connected graphs with a given dissociation number attaining the minimum spectral radius Zejun Huang$^{1}$, Jiahui Liu$^{1}$[^1], Xinwei Zhang$^{1}$\ 1. School of Mathematical Sciences, Shenzhen University, Shenzhen 518060, China \ **Abstract** A dissociation set of a graph is a set of vertices which induces a subgraph with maximum degree at most one. The dissociation number of a graph is the maximum cardinality of its dissociation sets. In this paper, we study the connected graphs of order $n$ with a given dissociation number that attain the minimum spectral radius. We characterize these graphs when the dissociation number is in $\{n-1,~n-2,~\lceil2n/3\rceil,~\lfloor2n/3\rfloor,~2\}$. We also prove that these graphs are trees when the dissociation number is larger than $\lceil {2n}/{3}\rceil$. **Keywords:** connected graph; dissociation number; independence number; spectral radius **Mathematics Subject Classification:** 05C50, 05C20, 15A99 # Introduction and main results Graphs in this paper are simple. For a graph $G$, we denote by $V(G)$ its vertex set and $E(G)$ its edge set. The cardinalities of $V(G)$ and $E(G)$ are called the *order* and the *size* of $G$, respectively. For a set $S\subseteq V(G)$, $G[S]$ is the subgraph of $G$ induced by $S$. We denote by $G-S$ the induced subgraph $G[V\setminus S]$, which is also denoted by $G-v$ if $S=\{v\}$ is a singleton. If $e\in E(G)$, $G-e$ is the graph obtained from $G$ by deleting the edge $e$; if $e\not\in E(G)$, $G+e$ is the graph obtained from $G$ by adding the edge $e$. The *union* of two graphs $G_1$ and $G_2$ is the graph $G_1\cup G_2$ with vertex set $V(G_1)\cup V(G_2)$ and edge set $E(G_1)\cup E(G_2)$, which is also called a *disjoint union* and denoted by $G_1+G_2$ if $V(G_1)\cap V(G_2)=\emptyset$. We denote by $kG$ the disjoint union of $k$ copies of $G$.
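The disjoint-union notation just introduced can be made concrete with a small sketch (the encoding of a graph as a pair `(n, edges)` on vertex set $\{0,\ldots,n-1\}$ and the helper names are our own illustrative assumptions, not from the paper):

```python
from functools import reduce

# Illustrative encoding (an assumption of this sketch): a graph is a pair
# (n, edges) with vertices labelled 0, ..., n-1.
def disjoint_union(g1, g2):
    (n1, e1), (n2, e2) = g1, g2
    # relabel the vertices of g2 as n1, ..., n1+n2-1 so the vertex sets are disjoint
    return (n1 + n2, e1 | {(a + n1, b + n1) for (a, b) in e2})

def k_copies(k, g):
    """kG: the disjoint union of k copies of G."""
    return reduce(disjoint_union, [g] * k)

K2 = (2, {(0, 1)})
n, e = k_copies(3, K2)   # 3K_2: a perfect matching on 6 vertices
assert n == 6 and len(e) == 3
```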
The *join* of two vertex disjoint graphs $G_1$ and $G_2$, denoted by $G_1\vee G_2$, is obtained from $G_1+G_2$ by adding all edges $uv$ with $u\in V(G_1)$ and $v\in V(G_2)$. Given two graphs $G$ and $H$, if there is a bijection $\phi: V(G)\rightarrow V(H)$ such that $uv\in E(G)$ if and only if $\phi(u)\phi(v)\in E(H)$, then $G$ and $H$ are said to be *isomorphic*, written $G\cong H$. The *spectral radius* of a graph $G$, denoted by $\rho(G)$, is the largest eigenvalue of its adjacency matrix $A(G)$. When $G$ is connected, by the Perron-Frobenius theorem (see [@Zhan]), there is a unique positive unit eigenvector of $A(G)$ corresponding to the eigenvalue $\rho(G)$, which is called the *Perron vector* of $G$. Given a set of graphs, it is natural to consider the possible values of certain parameters on these graphs. In 1986, Brualdi and Solheid [@BS] proposed the general problem of determining the maximum or minimum spectral radius of graphs from a given set as well as the extremal graphs attaining the maximum or minimum spectral radius. A lot of work has been done along this line. Brualdi and Hoffman [@BRUALDI1985133] determined the maximum spectral radius of a graph $G$ of order $n$ with size exactly ${s\choose 2}$, which is attained if and only if $G\cong K_s+ (n-s)K_1$. Hong, Shu and Fang [@HONG2001177] determined the maximum spectral radius of connected graphs with given order, size and minimum degree as well as the extremal graphs. Berman and Zhang [@BERMAN2001233] characterized the graphs with the maximum spectral radius among connected graphs of order $n$ with a given number of cut vertices. Feng, Yu and Zhang [@FENG2007133] characterized the graphs with the maximum spectral radius among graphs of order $n$ with a given matching number. Liu, Lu and Tian [@LIU2007449] determined the graphs with the maximum spectral radius among unicyclic graphs of order $n$ with a given diameter.
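Since the spectral radius is the objective throughout the paper, a quick numerical sketch may help fix ideas. The helper below (our own illustration, not part of the paper) estimates $\rho(G)$ by power iteration on $A(G)+I$; the identity shift makes the iteration matrix primitive for a connected graph, so it converges even on bipartite graphs:

```python
import math

def spectral_radius(adj, iters=500):
    """Estimate rho(G) by power iteration on A(G) + I.

    For a connected graph A + I is primitive, so by the Perron-Frobenius
    theorem the iteration converges; rho(A) = rho(A + I) - 1.
    """
    n = len(adj)
    b = [[adj[i][j] + (i == j) for j in range(n)] for i in range(n)]
    x = [1.0] * n
    est = 1.0
    for _ in range(iters):
        y = [sum(b[i][j] * x[j] for j in range(n)) for i in range(n)]
        est = max(y)                 # sup-norm estimate of rho(A + I)
        x = [yi / est for yi in y]
    return est - 1.0

# Sanity checks against known values: rho(P_4) = 2cos(pi/5), rho(C_4) = 2.
P4 = [[0,1,0,0],[1,0,1,0],[0,1,0,1],[0,0,1,0]]
C4 = [[0,1,0,1],[1,0,1,0],[0,1,0,1],[1,0,1,0]]
assert abs(spectral_radius(P4) - 2 * math.cos(math.pi / 5)) < 1e-9
assert abs(spectral_radius(C4) - 2.0) < 1e-9
```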
Van Dam and Kooij [@VANDAM2007408] characterized the connected graphs with the minimum spectral radius among connected graphs of order $n$ with a given diameter from $\{1,~2,~\lfloor\frac{n}{2}\rfloor,~n-3,~n-2,~n-1\}$. For graphs with a given independence number, the characterization of the connected graphs of order $n$ attaining the maximum spectral radius is trivial. However, it is not easy to characterize the connected graphs of order $n$ with a given independence number $\alpha$ that attain the minimum spectral radius. Xu, Hong, Shu and Zhai [@XU2009937] proposed this problem in 2009 and they solved the problem for the cases $\alpha\in\{1,~2,~\lceil\frac{n}{2}\rceil,~\lceil\frac{n}{2}\rceil+1,~n-3,~n-2,~n-1\}$; Du and Shi [@Du2013GraphsWS] solved the problem when $\alpha=3,4$ and the order $n$ is divisible by $\alpha$; Lou and Guo [@LOU2022112778] solved the problem for the case $\alpha=n-4$ and they proved that the extremal graphs are trees when $\alpha\ge \lceil n/2\rceil$; Hu, Huang and Lou [@hu2022graphs] solved the problem for the cases $\alpha\in\{n-5,~n-6\}$, and they gave the structural features for the extremal graphs in detail as well as a constructing theorem for the extremal graphs when $\alpha\ge\lceil\frac{n}{2}\rceil$; Choi and Park [@choi2023minimal] solved the problem for the case $\alpha=\lceil\frac{n}{2}\rceil-1$. For the other cases, the problem is still open.
Problems on the dissociation number and dissociation sets have been widely studied; see [@bock2022bound; @bock2022relating; @das2023spectral; @ORLOVICH20111352; @SL; @TU2022127107; @TZS; @doi:10.1137/0210022]. The dissociation number is a natural generalization of the independence number. Similarly to the minimum spectral radius problem on the independence number, we study the following problem in this paper. **Problem 1**. *Let $n,k$ be integers such that $2\le k\le n$. Denote by $\mathcal{G}_{n,k}$ the connected graphs of order $n$ with dissociation number $k$. Characterize the graphs in $\mathcal{G}_{n,k}$ that attain the minimum spectral radius.* Notice that the characterization of graphs in $\mathcal{G}_{n,k}$ attaining the maximum spectral radius is trivial. In fact, suppose $G\in \mathcal{G}_{n,k}$. Then by choosing an arbitrary dissociation set with cardinality $k$, it is not difficult to prove that $G$ attains the maximum spectral radius if and only if $G\cong K_{n-k}\vee \frac{k}{2}K_2$ when $k$ is even and $G\cong K_{n-k}\vee (\frac{k-1}{2}K_2+K_1)$ when $k$ is odd. We denote by $P_n$ and $C_n$ the path and the cycle of order $n$, respectively. The path $P_{n+1}$ is also called an *$n$-path*, since its length is $n$. In a tree $T$, a vertex with degree larger than 2 is called a *branch vertex*. If $v$ is a branch vertex of a tree $T$ such that $T-v$ has at most one component containing a branch vertex, then $v$ is called an *end branch vertex*. If a $k$-path $P$ is an induced subgraph of $T$ with one of its end adjacent to a branch vertex $v\in V(T)$, then $P$ is called a *branch $k$-path* attached to $v$, which is also called a *branch edge* attached to $v$ if $P$ is an edge. If a leaf is adjacent to a branch vertex $v\in V(T)$, we say the leaf is attached to $v$. 
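The dissociation number defined above is easy to check exhaustively on small graphs. The following brute-force sketch (illustrative only; the helper names are our own) confirms the values $diss(P_n)=\lceil 2n/3\rceil$ and $diss(C_n)=\lfloor 2n/3\rfloor$ established in Lemma 11 below:

```python
import math
from itertools import combinations

# Exhaustive computation of diss(G): the largest S whose induced subgraph
# has maximum degree at most 1 (exponential; small graphs only).
def diss(n, edges):
    for size in range(n, -1, -1):
        for s in map(set, combinations(range(n), size)):
            # every vertex of S may have at most one neighbour inside S
            if all(sum(1 for (a, b) in edges if {a, b} <= s and w in (a, b)) <= 1
                   for w in s):
                return size

def path(n):
    return [(i, i + 1) for i in range(n - 1)]

def cycle(n):
    return path(n) + [(n - 1, 0)]

assert diss(7, path(7)) == math.ceil(14 / 3) == 5
assert diss(7, cycle(7)) == 14 // 3 == 4
```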
A *complete $k$-partite graph* $G$ of order $n$ is a graph whose vertex set can be partitioned into $k$ subsets $V_1,\ldots,V_k$ such that $uv\in E(G)$ if and only if $u\in V_i,v\in V_j$ with $i\ne j$. Moreover, if $|V_i|\in \{\lceil n/k\rceil,\lfloor n/k\rfloor\}$ for $i=1,2,\ldots,k$, $G$ is said to be *balanced*. Denote by $S_n$ the star graph of order $n$. Let $S(r,t)$ be the graph obtained from $S_{r+1}$ by attaching $t$ branch edges to the center of $S_{r+1}$. If $n$ is even, we denote by $H(n)$ the graph obtained from $P_4$ by attaching $\lceil(n-4)/2\rceil$ and $\lfloor(n-4)/2\rfloor$ leaves to the two ends of $P_4$ respectively; if $n$ is odd, we denote by $H(n)$ the graph obtained from $H(n-1)$ by attaching a leaf to the branch vertex of $H(n-1)$ with degree $\lfloor (n-5)/2\rfloor+1$. What follows are the diagrams of $S(r,t)$ and $H(n)$. Our main results are stated as follows. **Theorem 2**. *Let $n$ and $k$ be positive integers with $k>\lceil\frac{2n}{3}\rceil$. If $G$ attains the minimum spectral radius in $\mathcal{G}_{n,k}$, then $G$ is a tree.* **Theorem 3**. *Let $n$ and $k$ be positive integers with $2\le k\le n-1$. Suppose $G$ attains the minimum spectral radius in $\mathcal{G}_{n,k}$.* *(i) If $k=n-1$, then $G\cong S(r,\lfloor(n-1)/2\rfloor),$ where $r=0$ if $n$ is odd and $r=1$ if $n$ is even.* *(ii) If $k=n-2$ with $n\ge 10$, then $G\cong H(n)$.* *(iii) If $k=\lceil 2n/3\rceil$, then $G$ is a path.* *(iv) If $k=\lfloor 2n/3\rfloor$ with $n\not\equiv 0 \pmod 3$, then $G$ is a cycle.* *(v) If $k=2$, then $G$ is a balanced complete $\lceil n/2\rceil$-partite graph.* # Preliminary In this section, we present some preliminary lemmas. Some of these lemmas concern the dissociation number of graphs and are of independent interest. **Lemma 4**. *[@Hoffman_1975] If $G_2$ is a proper subgraph of a graph $G_1$, then $\rho(G_1)>\rho(G_2)$.* **Lemma 5**.
*[@0] Let $v$ be a vertex in a connected graph $G$, and let $k, m$ be nonnegative integers such that $k\ge m$. Denote by $G_{k,m}$ the graph obtained from $G$ by attaching two new paths $vv_1v_2\dots v_k$ and $vu_1u_2\dots u_m$ to $v$, where $v_1,\ldots, v_k,u_1,\ldots,u_m$ are distinct and $\{v_1,\ldots,v_k,u_1,\ldots,u_m\}\cap V(G)=\emptyset$. Then $\rho(G_{k,m})>\rho(G_{k+1,m-1})$.* Let $v$ be a vertex in a graph $G$. We denote by $N_G(v)$ the set of neighbors of $v$ in $G$, which will also be abbreviated as $N(v)$. **Lemma 6**. *[@WU2005343] Let $G$ be a connected graph with $V(G)=\{1,2,\ldots,n\}$. Suppose $u , v \in V(G)$ are distinct vertices and $\{v_1,v_2,\ldots,v_s\} \subseteq N_G(v)\setminus N_G(u)$. Suppose $X=(x_1,x_2,\dots,x_n)^T$ is the Perron vector of $G$ with $x_k$ corresponding to the vertex $k$ for $1\le k\le n$. Let $G^*=G-\{vv_i|1\le i\le s\}+\{uv_i|1\le i\le s\}$. If $x_u\ge x_v$, then $\rho(G^*)>\rho(G)$.* **Lemma 7**. *[@Cvetkovi1995] Among all the connected graphs on $n$ vertices, the path $P_n$ has the minimum spectral radius; $\rho(P_n)=2\cos\frac{\pi}{n+1}$.* **Lemma 8**. *[@Smith1970] The only connected graphs on $n$ vertices with spectral radius less than 2 are the path $P_n$ and $W_n$, with additional cases $E_6, E_7, E_8$ when $n=6,7,8$, where $W_n, E_6, E_7, E_8$ have the following diagrams.* **Lemma 9**. *[@Smith1970] The only connected graphs on $n$ vertices with spectral radius 2 are the cycle $C_n$ and $\tilde{W}_{n}$, with additional cases $\tilde{E}_6, \tilde{E}_7, \tilde{E}_8$ when $n=6,7,8$, where $\tilde{W}_{n}, \tilde{E}_6, \tilde{E}_7$ and $\tilde{E}_8$ have the following diagrams.* [\[fig:4\]]{#fig:4 label="fig:4"} It is clear that $diss(E_6)=5$, $diss(E_7)=5$, $diss(E_8)=6$, $diss(\tilde{E}_6)=6$, $diss(\tilde{E}_7)=6$, $diss(\tilde{E}_8)=7$. **Lemma 10**. *Let $G$ be a connected graph and let $P$ be a 2-path disjoint from $G$. Suppose $G'=G\cup P+e$ with $e$ being an edge connecting $G$ and $P$.
Then $$diss(G')=diss(G)+2.$$* *Proof.* Note that a maximum dissociation set of $G'$ contains at most two vertices from $P$. Hence we have $diss(G')\le diss(G)+2$. On the other hand, suppose $S$ is a maximum dissociation set of $G$ and $v$ is the vertex in $V(P)$ incident with $e$. Then $S\cup \left(V(P)\setminus\{v\}\right)$ is a dissociation set of $G'$. Therefore, we have $diss(G')\ge diss(G)+2$, and hence $diss(G')= diss(G)+2$. ◻ **Lemma 11**. *Let $n\ge 3$ be an integer. Then $$diss(P_n)=diss(W_n)=diss(\tilde{W}_{n})=\left\lceil\frac{2n}{3}\right\rceil\quad and \quad diss(C_n)= \left\lfloor\frac{2n}{3} \right\rfloor.$$* *Proof.* Note that $S$ is a dissociation set of a graph $G$ if and only if $V(G)\setminus S$ is a vertex 3-path cover of $G$. By Proposition 1.1 of [@BRESAR20131943], we have $diss(P_n)=\left\lceil {2n}/{3}\right\rceil$ and $diss(C_n)= \left\lfloor {2n}/{3} \right\rfloor.$ Notice that $W_n$ is obtained from $P_{n-3}+P_3$ by adding an edge between the center of $P_3$ and an end of $P_{n-3}$. Applying Lemma [Lemma 10](#lemma13){reference-type="ref" reference="lemma13"} we have $$diss(W_n)=diss(P_{n-3})+2=\left\lceil\frac{2n}{3}\right\rceil.$$ Similarly, since $\tilde{W}_{n}$ is obtained from $W_{n-3}+P_3$ by adding an edge, we have $diss(\tilde{W}_{n})=\left\lceil {2n}/{3}\right\rceil.$ ◻ An *internal path* of a graph $G$ is a sequence of vertices $v_1, v_2,\ldots, v_k$ with $k\ge2$ such that:\ (1) the vertices in the sequence are distinct (except possibly $v_1=v_k$);\ (2) $v_i$ is adjacent to $v_{i+1}$ $(i=1,2,\ldots,k-1)$;\ (3) the vertex degrees satisfy $d(v_1)\ge3$, $d(v_2)=\cdots=d(v_{k-1})=2$ (unless $k=2$) and $d(v_k)\ge3$. Let $uv$ be an edge of a graph $G$. The *subdivision* of the edge $uv$ is replacing $uv$ with a 2-path, i.e., deleting $uv$ and adding a new vertex $w$ and two new edges $uw$, $wv$. Generally, the *$k$-subdivision* of $uv$ is replacing $uv$ with a $(k+1)$-path.
We denote by $G_{uv}^{(k)}$ the graph obtained from $G$ by doing a $k$-subdivision of $uv$ for $k=1,2,\ldots$, where $G_{uv}^{(1)}$ will be simplified to $G_{uv}$. We also use $G_{uv}(w)$ and $G_{uv}^{(k)}(w_1,\ldots,w_k)$ to denote $G_{uv}$ and $G_{uv}^{(k)}$ if we need to emphasize the newly added vertices $w$ and $w_1,\ldots,w_k$, respectively. **Lemma 12**. *[@0] Suppose that $G\ncong \tilde{W}_{n}$ and $uv$ is an edge on an internal path of $G$. Then $\rho(G_{uv})<\rho(G)$.* **Lemma 13**. *Let $uv$ be an edge of a tree $T$. Then $$\begin{aligned} diss(T_{uv})\in\{diss(T),~diss(T)+1\},\label{eqh1} \\ diss(T^{(2)}_{uv})\in\{diss(T)+1,~diss(T)+2\}\label{eqh2}\end{aligned}$$ and $$\label{eqh3} diss(T^{(3)}_{uv})= diss(T)+2.$$* *Proof.* Suppose $T_{uv}=T_{uv}(w)$. It is clear that $diss(T_{uv})\ge diss(T)$. Let $S$ be a maximum dissociation set of $T_{uv}(w)$. If $w\notin S$, then $S \backslash\{u\}$ is a dissociation set of $T$, and thus $diss(T)\ge|S \backslash\{u\}|\ge diss(T_{uv})-1$. If $w \in S$, then at least one of $u$ and $v$ is not in $S$. Thus $S \backslash\{w\}$ is a dissociation set of $T$, which also leads to $diss(T)\ge|S \backslash\{w\}|=diss(T_{uv})-1$. In both cases we have $$diss(T_{uv})\le diss(T)+1.$$ Therefore, we have ([\[eqh1\]](#eqh1){reference-type="ref" reference="eqh1"}). Suppose $T_{uv}^{(2)}=T_{uv}^{(2)}(w_1,w_2)$. Let $S$ be a maximum dissociation set of $T$. It is easy to verify $$diss(T_{uv}^{(2)}(w_1,w_2))\ge diss(T)+1,$$ since we can always add one of $w_1$ and $w_2$ to $S$ to get a dissociation set of $T_{uv}^{(2)}{(w_1,w_2)}$. On the other hand, suppose $S'$ is a maximum dissociation set of $T_{uv}^{(2)}(w_1,w_2)$. Then $S'-\{u,w_1,w_2\}$ is a dissociation set of $T$ and $$diss(T_{uv}^{(2)}(w_1,w_2))- 2\le |S'-\{u,w_1,w_2\} |\le diss(T).$$ Therefore, we have ([\[eqh2\]](#eqh2){reference-type="ref" reference="eqh2"}). Suppose $T_{uv}^{(3)}=T_{uv}^{(3)}(w_1,w_2, w_3)$.
Given a maximum dissociation set $S$ of $T$, if $\{u,v\}\subseteq S$, then $S\cup \{w_1,w_3\}$ is a dissociation set of $T_{uv}^{(3)}(w_1,w_2, w_3)$; if at least one of $u$ and $v$ is not in $S$, say $u\not \in S$, then $S\cup\{w_1,w_2\}$ is a dissociation set of $T_{uv}^{(3)}(w_1,w_2, w_3)$. Therefore, we have $$diss(T_{uv}^{(3)}(w_1,w_2, w_3))\ge diss(T)+2.$$ On the other hand, suppose $S'$ is a maximum dissociation set of $T_{uv}^{(3)}(w_1,w_2, w_3)$. If $\{u,v,w_2\}\subseteq S'$, then $S'-\{u,w_2\}$ is a dissociation set of $T$, which leads to $$diss(T_{uv}^{(3)}(w_1,w_2, w_3))-2=|S'|-2=|S'-\{u,w_2\}|\le diss(T).$$ If $\{u,v\}\subseteq S'$ and $w_2\not\in S'$, let $N(u)$ and $N(v)$ be the sets of neighbors of $u$ and $v$ in $T_{uv}^{(3)}(w_1,w_2, w_3)$, respectively. Then $S'-(N(u)\cup N(v))$ is a dissociation set of $T$, which leads to $$diss(T_{uv}^{(3)}(w_1,w_2, w_3))-2=|S'|-2=|S'-(N(u)\cup N(v))|\le diss(T),$$ since $|N(u)\cap S'|=|N(v)\cap S'|=1$. If $|\{u,v\}\cap S'|\le 1$, then $S'-\{w_1,w_2,w_3\}$ is a dissociation set of $T$, which leads to $$diss(T_{uv}^{(3)}(w_1,w_2, w_3))-2=|S'|-2=|S'-\{w_1,w_2,w_3\}|\le diss(T),$$ since $|S'\cap \{w_1,w_2,w_3\}|=2$. In all cases we have $$diss(T_{uv}^{(3)}(w_1,w_2, w_3))\le diss(T)+2.$$ Therefore, we have ([\[eqh3\]](#eqh3){reference-type="ref" reference="eqh3"}). ◻ **Lemma 14**. *Every tree has a maximum dissociation set containing all its leaves.* *Proof.* Let $L(T)$ be the set of all leaves of a tree $T$. Let $S$ be a maximum dissociation set of $T$ such that $|S \cap L(T)|$ is as large as possible. Suppose there exists a leaf $v\in L(T)- S$. Let $u$ be the support vertex of $v$. Then $u\in S$, since otherwise $\{v\}\cup S$ is a larger dissociation set of $T$. Let $S' =(S \setminus \{u\})\cup\{v\}$. Then $S'$ is a maximum dissociation set of $T$ such that $|S' \cap L(T)|>|S \cap L(T)|$, a contradiction. Hence, we have $L(T)\subseteq S$.
◻ A *subdividing transformation graph* of a tree $T$, denoted by $T^\lozenge$, is a tree obtained from $T$ by subdividing an internal edge of $T$ once, twice or three times and deleting the same number of vertices of $T$ simultaneously. A subdividing transformation graph $T^\lozenge$ of $T$ is said to be *well* if $diss(T^\lozenge)\in\{diss(T),diss(T)-1\}$. **Lemma 15**. *For any tree $T$ with at least two branch vertices, there always exists a well subdividing transformation graph.* *Proof.* Let $T$ be a tree with at least two branch vertices. Then it has an internal edge $uv$. Take a diameter path $P=x_1x_2\cdots x_t$ of $T$. Then the neighbors of $x_2$ in $V(T)\setminus V(P)$ are all leaves, which are denoted by $v_1,\ldots,v_k$. If $k\ge 2$, then $diss(T_{uv}-x_1)=diss(T_{uv})-1$, since each maximum dissociation set of $T_{uv}$ contains $x_1,v_1,\ldots,v_k$ and each maximum dissociation set of $T_{uv}-x_1$ contains $v_1,\ldots,v_k$. It follows that $$diss(T_{uv}-x_1)\in\{diss(T),~diss(T)-1\},$$ since we have $diss(T_{uv})\in\{diss(T),diss(T)+1\}$ by Lemma [Lemma 13](#lemma14){reference-type="ref" reference="lemma14"}. Therefore $T_{uv}-x_1$ is a well subdividing transformation graph of $T$. If $k=1$, since each maximum dissociation set of $T_{uv}^{(3)}$ contains exactly two of $x_1,x_2, v_1$, we have $$diss(T_{uv}^{(3)}-\{x_1,x_2,v_1\})=diss(T_{uv}^{(3)})-2=diss(T),$$ where the last equality holds by Lemma [Lemma 13](#lemma14){reference-type="ref" reference="lemma14"}. Therefore $T_{uv}^{(3)}-\{x_1,x_2,v_1\}$ is a well subdividing transformation graph of $T$. Now we are left to verify the case $k=0$, i.e., $d(x_2)=2$. If $T$ contains a diameter path $P'$ such that a neighbor $y$ of one of its end vertices has degree larger than 2, then we can find a well subdividing transformation graph of $T$ by replacing the roles of $P$ and $x_2$ with $P'$ and $y$ in the above argument. So we may assume that each neighbor of $x_3$ from $V(T)\setminus V(P)$ has degree one or two in $T$.
Denote by $T_1$ the component of $T-x_3x_4$ containing the vertex $x_3$. Since $P$ is a diameter path, $T_1$ has the following diagram. If $d(x_3)=2$, then each maximum dissociation set of $T_{uv}^{(3)}$ contains exactly two of $x_1,x_2,x_3$. Hence, we have $$diss(T_{uv}^{(3)}-\{x_1,x_2,x_3\})=diss(T_{uv}^{(3)})-2=diss(T)$$ and $T_{uv}^{(3)}-\{x_1,x_2,x_3\}$ is a well subdividing transformation graph of $T$. If $d(x_3)\ge 3$, suppose $w$ is a leaf in $T_1$ distinct from $x_1$. Since each maximum dissociation set of $T_{uv}^{(3)}$ contains $x_1,x_2$ and all leaves of $T_1$, we have $$diss(T_{uv}^{(3)}-\{x_1,x_2,w\})\in \{diss(T_{uv}^{(3)})-2,diss(T_{uv}^{(3)})-3\}=\{diss(T),diss(T)-1\}.$$ Therefore, $T_{uv}^{(3)}-\{x_1,x_2,w\}$ is a well subdividing transformation graph of $T$. ◻ # The proof of Theorem [Theorem 2](#th2){reference-type="ref" reference="th2"} {#the-proof-of-theorem-th2} Denote by $B(n, s,t)$ the graph of order $n$ obtained from a path $P_{n-s-2t}$ by attaching $s$ leaves and $t$ edges to the same end of $P_{n-s-2t}$, which has the following diagram. [\[fig:5.5\]]{#fig:5.5 label="fig:5.5"} If $s+t=1$, then $B(n, s,t)$ is a path with dissociation number $\lceil 2n/3\rceil$. If $s+t\ge 2$, since there is a maximum dissociation set of $B(n, s,t)$ that does not contain the unique branch vertex, we have $$\label{eqh4} diss(B(n,s,t))=s+2t+diss(P_{n-s-2t-1})=s+2t+\lceil2(n-s-2t-1)/3\rceil.$$ Now we adopt a similar scheme to the proof of Theorem 1.2 in [@LOU2022112778] to prove Theorem [Theorem 2](#th2){reference-type="ref" reference="th2"}. *Proof of Theorem [Theorem 2](#th2){reference-type="ref" reference="th2"}.* Suppose $G$ attains the minimum spectral radius in $\mathcal{G}_{n,k}$ and it is not a tree. Then $G$ contains a spanning tree $T$.
Applying Lemma [Lemma 4](#lemma3){reference-type="ref" reference="lemma3"}, we have $\rho(T)<\rho(G).$ Since $G$ attains the minimum spectral radius in $\mathcal{G}_{n,k}$, we have $$\label{eqh5} diss(T)> diss(G)>\left\lceil\frac{2n}{3}\right\rceil.$$ Thus $T$ is not a path, recalling that $diss(P_n)=\lceil 2n/3\rceil$. Now we distinguish two cases. *Case 1.* $T$ has exactly one branch vertex $u_0$. Suppose $d_{T}(u_0)=m$ and $P^{(1)},\ldots,P^{(m)}$ are the $m$ branch paths attached to $u_0$. Without loss of generality, we assume $P^{(1)}$ is the longest branch path. If $P^{(i)}$ has length larger than 1 for some $i\ge 2$, by Lemma [Lemma 5](#lemma2){reference-type="ref" reference="lemma2"} and Lemma [Lemma 10](#lemmah13){reference-type="ref" reference="lemmah13"}, we can cut a $P_3$ from $P^{(i)}$ and attach it to $P^{(1)}$ to obtain a tree with the same dissociation number and smaller spectral radius. Repeating this process until all but one branch path have length at most 1, we can find a tree $T'=B(n,s,t)$ such that $$\label{eqh6} |V(T')|=|V(T)|, \quad diss(T')=diss(T)\quad\text{ and}\quad \rho(T')\le \rho(T).$$ Moreover, since $diss(T')=diss(T)>\lceil 2n/3\rceil+1$, we have $s +2t\ge 6.$ Let $s_1=s,t_1=t$. We construct a tree sequence $T_{i}=B(n,s_i,t_i)$, $i=1,2,\ldots$, as follows: \(i\) if $s_i=0$, let $t_{i+1}=t_{i}-2,~ s_{i+1}=s_{i}+1$; \(ii\) if $s_i=1$, let $t_{i+1}=t_i-1, ~s_{i+1}=s_i-1$; \(iii\) if $s_i\ge 2$, let $t_{i+1}=t_i+1,~ s_{i+1}=s_i-2$. Then by ([\[eqh4\]](#eqh4){reference-type="ref" reference="eqh4"}) and Lemma [Lemma 5](#lemma2){reference-type="ref" reference="lemma2"}, we have $$\label{eqh7} diss(T_{i+1})\in \{diss(T_i), diss(T_i)-1\}\quad \text{ and}\quad \rho(T_{i+1})<\rho(T_i).$$ In the above tree sequence, there exists a tree $T_k=B(n,s_k,t_k)$ such that $diss(T_k)=diss(G)$, since $diss(T_1)>diss(G)$ and $diss(B(n,s,t))<diss(G)$ when $2t+s<6$.
On the other hand, by ([\[eqh6\]](#eqh6){reference-type="ref" reference="eqh6"}) and ([\[eqh7\]](#eqh7){reference-type="ref" reference="eqh7"}), we have $\rho(T_k)<\rho(G)$, which contradicts the assumption on $G$. *Case 2.* Suppose $T$ has at least two branch vertices. Let $T_1=T$. For $i=1,2,\ldots,$ if $T_i$ has at least two branch vertices, we construct the tree $T_{i+1}$ as follows. \(i\) If $T_{i}$ has a branch path $P_\ell$ with $\ell\ge 3$, $T_{i+1}$ is obtained from $T_i$ by doing a $3$-subdivision of an internal edge, and then deleting a pendant $P_3$ from the branch path $P_\ell$. \(ii\) If all branch paths in $T_i$ have lengths 1 or 2, we distinguish two cases. If there is an end branch vertex attached with $s$ leaves $v_1,\ldots,v_s$ and $t$ disjoint edges $u_1w_1,\ldots,u_tw_t$ such that $s+2t\ge 3$, $T_{i+1}$ is obtained from $T_i$ by doing a subdivision of an internal edge, and then deleting a leaf from $\{v_1,\ldots,v_s,u_1,\ldots,u_t,w_1,\ldots,w_t\}$. If every end branch vertex is attached with only two leaves, $T_{i+1}$ is obtained from $T_i$ by doing a 3-subdivision of an internal edge, and then deleting an end branch vertex together with the leaves attached to it. Suppose $k$ is the smallest integer such that $diss(T_k)=diss(G)$ or $T_k$ has exactly one branch vertex. Then applying Lemma [Lemma 13](#lemma14){reference-type="ref" reference="lemma14"}, we have $$diss(T_{i+1})\in\{diss(T_i),diss(T_i)-1\} \quad \text{for}\quad i=1,2,\ldots,k-1.$$ Since $diss(T_i)>\lceil{2n/3}\rceil$, we have $T_i\not\cong \tilde{W}_n$ for $i=1,\ldots,k-1$. Applying Lemma [Lemma 4](#lemma3){reference-type="ref" reference="lemma3"} and Lemma [Lemma 12](#lemma12){reference-type="ref" reference="lemma12"}, we have $$\rho(T_{i+1})<\rho(T_i)\quad \text{for}\quad i=1,2,\ldots,k-1.$$ If $diss(T_k)= diss(G)$, then $\rho(T_{k})<\rho(T_1)<\rho(G)$ contradicts the assumption on $G$.
If $diss(T_k)> diss(G)$, then using the same construction as in Case 1, we can always find a tree $T''$ such that $diss(T'')=diss(G)$ and $\rho(T'')<\rho(G)$, which contradicts the assumption on $G$. Therefore, $G$ is a tree. This completes the proof. $\square$\ From the above proof we have the following. **Corollary 16**. *Let $\lceil 2n/3\rceil< k\le n-1$. Suppose a tree $T$ attains the minimum spectral radius in $\mathcal{G}_{n,k}$. Then either every branch path in $T$ has length at most one or $T\cong B(n,s,t)$ for some positive integers $s,t$.* # Proof of Theorem [Theorem 3](#th1){reference-type="ref" reference="th1"} {#proof-of-theorem-th1} In this section, we present the proof of Theorem [Theorem 3](#th1){reference-type="ref" reference="th1"}. We divide the proof into two parts. In the first part we discuss the cases $k\in\{n-1,~\lceil\frac{2n}{3}\rceil,~\lfloor\frac{2n}{3}\rfloor,~2\}$, and in the second part we discuss the case $k=n-2$. ## The cases $k\in\{n-1,~\lceil\frac{2n}{3}\rceil,~\lfloor\frac{2n}{3}\rfloor,~2\}$ Similarly to the proofs of Theorem 2.2 and Theorem 2.5 in [@XU2009937], the cases $k=\lceil2n/3\rceil$ and $k=\lfloor 2n/3\rfloor$ follow directly from Lemma [Lemma 7](#lemma1){reference-type="ref" reference="lemma1"}, Lemma [Lemma 8](#lemma6){reference-type="ref" reference="lemma6"}, Lemma [Lemma 9](#lemma7){reference-type="ref" reference="lemma7"} and Lemma [Lemma 11](#lemma8){reference-type="ref" reference="lemma8"}. *Proof of the case $k=2$.*   Suppose $k=2$. Since $diss(K_n-e)=2$ for every edge $e$ in $K_n$, $G$ contains at least two nonadjacent vertices. Given an arbitrary pair of nonadjacent vertices $u,v$ in $G$, since $diss(G)=2$, both $u$ and $v$ are adjacent to all the other vertices in $G$. Suppose $G$ has exactly $m$ disjoint pairs of nonadjacent vertices $(u_i,v_i),i=1,...,m$. Then $G=\{u_1,v_1\}\vee\{u_2,v_2\}\vee\cdots \vee\{u_m,v_m\}\vee K_{n-2m}$. If $n-2m\ge 2$, then deleting an edge $uv$ from $G$ with $u,v\in V(G)\setminus \{u_1,\ldots,u_m,v_1,\ldots,v_m\}$, we get a subgraph $H$ of $G$ with $\rho(H)<\rho(G)$ and $diss(H)=2$, a contradiction. Therefore, $n-2m<2$ and $G$ is a balanced complete $\lceil n/2\rceil$-partite graph. $\square$ *Proof of the case $k=n-1$.*   Suppose $k=n-1$. By Theorem [Theorem 2](#th2){reference-type="ref" reference="th2"}, $G$ is a tree. Let $S$ be a maximum dissociation set of $G$ such that $G[S]$ has exactly $r$ isolated vertices and $t$ disjoint edges with $r+2t=n-1$. Denote by $v$ the unique vertex in $V(G)\setminus S$. Since $G$ is a tree, $v$ is adjacent to exactly one end of each edge from $G[S]$, and $v$ is adjacent to all isolated vertices of $G[S]$. Therefore, we have $G\cong S(r,t)$. Applying Lemma [Lemma 5](#lemma2){reference-type="ref" reference="lemma2"}, we have $G\cong S(0,\frac{n-1}{2})$ when $n$ is odd, and $G\cong S(1,\frac{n-2}{2})$ when $n$ is even. $\square$ ## The case $k=n-2$ Suppose $G$ attains the minimum spectral radius in $\mathcal{G}_{n,n-2}$. If $n=5$, then $n-2=\lfloor 2n/3\rfloor$ and $G\cong C_5$. If $n\in\{6,7,8\}$, $n-2=\lceil 2n/3\rceil$ and $G\cong P_n$. If $n=9$, applying Lemma [Lemma 9](#lemma7){reference-type="ref" reference="lemma7"}, we have $G\cong \tilde{E}_8$, since $diss(\tilde{E}_8)=7$ and $diss(P_9)=6$. We present the proof for the case $n\ge 10$ of Theorem [Theorem 3](#th1){reference-type="ref" reference="th1"} in this subsection. 
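Before turning to the case $k=n-2$, the case $k=n-1$ admits a quick numerical sanity check (our own addition, not part of the paper; the helper names `spider` and `spectral_radius` are ours). Assuming, as the notation suggests, that $S(r,t)$ is the spider with $r$ legs of length 1 and $t$ legs of length 2, Claim 1 below gives $\rho(S(0,t))=\sqrt{t+1}$, and for odd $n=7$ the minimizer among $S(r,t)$ with $r+2t=6$ should be $S(0,3)$. Power iteration is applied to $A+I$ rather than $A$, since trees are bipartite and the max-norm iteration on $A$ alone oscillates between two values:

```python
import math

def spider(r, t):
    """Adjacency lists of the spider S(r, t): one center vertex with
    r pendant edges (legs of length 1) and t legs of length 2."""
    n = 1 + r + 2 * t
    adj = [[] for _ in range(n)]
    v = 1
    for _ in range(r):                      # legs of length 1
        adj[0].append(v); adj[v].append(0)
        v += 1
    for _ in range(t):                      # legs of length 2
        mid, leaf = v, v + 1
        adj[0].append(mid); adj[mid].append(0)
        adj[mid].append(leaf); adj[leaf].append(mid)
        v += 2
    return adj

def spectral_radius(adj, iters=1000):
    """Largest adjacency eigenvalue, via power iteration on A + I
    (the shift makes the Perron eigenvalue strictly dominant)."""
    n = len(adj)
    x = [1.0] * n
    m = 1.0
    for _ in range(iters):
        y = [x[v] + sum(x[w] for w in adj[v]) for v in range(n)]
        m = max(y)
        x = [c / m for c in y]
    return m - 1.0

# Claim 1: rho(S(0, t)) = sqrt(t + 1).
for t in range(1, 6):
    assert abs(spectral_radius(spider(0, t)) - math.sqrt(t + 1)) < 1e-9

# Case k = n - 1 with n = 7: S(0, 3) minimizes rho among S(r, t), r + 2t = 6.
radii = {(r, t): spectral_radius(spider(r, t))
         for (r, t) in [(0, 3), (2, 2), (4, 1), (6, 0)]}
assert min(radii, key=radii.get) == (0, 3)
```

The same check with $r+2t=n-1$ for even $n$ singles out the graph $S(1,\frac{n-2}{2})$, matching the statement above.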
Denote by $G_1(r,s,p,q),\ldots,G_4(r,s,p,q)$ the following graphs (see Figure [\[fig:11\]](#fig:11){reference-type="ref" reference="fig:11"}): - $G_1(r,s,p,q)$: the graph obtained from an edge $v_1v_2$ by attaching $r$ leaves and $s$ disjoint edges to $v_1$, and attaching $p$ leaves and $q$ disjoint edges to $v_2$; - $G_2(r,s,p,q)$: the graph obtained from a path $v_1v_3v_2$ by attaching $r$ leaves and $s$ disjoint edges to $v_1$, and attaching $p$ leaves and $q$ disjoint edges to $v_2$; - $G_3(r,s,p,q)$: the graph obtained from a path $v_1v_3v_4v_2$ by attaching $r$ leaves and $s$ disjoint edges to $v_1$, and attaching $p$ leaves and $q$ disjoint edges to $v_2$; - $G_4(r,s,p,q)$: the graph obtained from a path $v_1v_3v_2$ by attaching $r$ leaves and $s$ disjoint edges to $v_1$, attaching $p$ leaves and $q$ disjoint edges to $v_2$ and attaching a vertex $v_4$ to $v_3$. Firstly we prove the following claims. **Claim 1.** *Let $k>0$ be an integer and $T^*_k=S(0,k)$. Then $\rho(T^*_k)=\sqrt{k+1}$.* *Proof.* Suppose $u_1$ is the center of $T^*_k$ with neighbors $u_2,\ldots, u_{k+1}$. We label the leaf adjacent to $u_i$ by $u_{k+i}$ for $i=2,\ldots, k+1$. Let $Z=(z_1,z_2,...,z_n)^T$ be the Perron vector of $T^*_k$ with $z_i$ corresponding to $u_i$. By symmetry of the components of $Z$ and $\rho(T^*_k)Z=A(T^*_k)Z$, we have $$\label{equa4} \rho(T^*_k)z_{1}=kz_{2},\quad \rho(T^*_k)z_{2}=z_{1}+z_{k+2}, \quad \rho(T^*_k)z_{k+2}=z_{2},$$ which lead to $$\rho(T^*_k)z_1=k\rho(T^*_k)z_{k+2}=k\rho(T^*_k)[\rho(T^*_k)z_2-z_1]=\rho^3(T^*_k)z_1-k\rho(T^*_k)z_1.$$ Hence, $\rho(T^*_k)=\sqrt{k+1}$.\ **Claim 2.** *Let $s$ and $q$ be integers such that $s+1 \geq q \geq2$. Then $$\label{eqc2} \rho(H(2s+2q+4))\le\rho(G_3(0,s,0,q))<\rho(G_3(1,s,1,q-1)),$$ where equality in the left inequality holds if and only if $G_3(0,s,0,q)\cong H(2s+2q+4)$.* *Proof.* For convenience, suppose $L=G_3(0,s,0,q)$ and $R=G_3(1,s,1,q-1)$ with their vertices labelled as follows. 
Denote by $\rho_1=\rho(L)$ and $\rho_2=\rho(R)$. Since $T^*_{q+1}$ is a proper subgraph of $L$, it follows from Claim 1 that $$\rho_1>\rho(T^*_{q+1})=\sqrt{q+2}.$$ Let $X=(x_1,x_2,...,x_n)^T$ be the Perron vector of $L$ with $x_i$ corresponding to the vertex $v_i$, and let $Y=(y_1,y_2,...,y_n)^T$ be the Perron vector of $R$ with $y_i$ corresponding to the vertex $v_i$. Suppose $s\geq q$. By symmetry and $\rho_1X=A(L)X$, we have $$\begin{aligned} \rho_1(x_{2}-x_{7})&=&qx_{7}+x_{4}-x_{2}-x_{8} > x_{7}-x_{2}+x_{4}-x_{8},\label{eqhhh4}\\ \rho_1(x_{3}-x_{4})&=&x_{1}+x_{4}-x_{2}-x_{3},\label{eqhhh5}\\ \rho_1(x_{5}-x_{7})&=&x_{1}-x_{2}+x_{6}-x_{8},\label{eqhhh6}\\ \rho_1(x_6-x_8)&=&x_5-x_7,\label{eqhhh7}\\ \rho_1(x_{1}-x_{2})&=&sx_{5}+x_{3}-qx_{7}-x_{4}.\label{eqhhh8}\end{aligned}$$ By ([\[eqhhh4\]](#eqhhh4){reference-type="ref" reference="eqhhh4"}), we have $$\begin{aligned} (\rho_1+1)(x_{2}-x_{7})&> x_{4}-x_{8}. \end{aligned}$$ If $x_{4}\leq x_{8}$, then applying Lemma [Lemma 6](#lemma13){reference-type="ref" reference="lemma13"}, we have $\rho(L-v_3v_4+v_3v_8)>\rho_1$. On the other hand, applying Lemma [Lemma 12](#lemma12){reference-type="ref" reference="lemma12"}, we have $\rho(L_{v_3v_4}-v_8)<\rho_1$. Since $L_{v_3v_4}-v_8\cong L-v_3v_4+v_3v_8$, we get a contradiction. 
Therefore, we have $$\begin{aligned} x_{4}> x_{8} \quad\text{ and } \quad x_{2}>x_{7}.\label{equ9}\end{aligned}$$ Note that ([\[eqhhh5\]](#eqhhh5){reference-type="ref" reference="eqhhh5"}) implies $$(\rho_1+1)(x_{3}-x_{4})=x_{1}-x_{2},$$ and ([\[eqhhh6\]](#eqhhh6){reference-type="ref" reference="eqhhh6"}), ([\[eqhhh7\]](#eqhhh7){reference-type="ref" reference="eqhhh7"}) imply $$(\rho_1^2-1)(x_{5}-x_{7})=\rho_1(x_{1}-x_{2}).$$ Since $s\ge q$, ([\[eqhhh8\]](#eqhhh8){reference-type="ref" reference="eqhhh8"}) leads to $$\begin{aligned} \rho_1(x_{1}-x_{2}) \geq q(x_{5}-x_{7})+x_{3}-x_{4} =\Big(\frac{q\rho_1}{\rho_1^2-1}+\frac{1}{\rho_1+1}\Big)(x_{1}-x_{2}),\end{aligned}$$ which is equivalent to $$\Big(\rho_1-\frac{q\rho_1}{\rho_1^2-1}-\frac{1}{\rho_1+1}\Big)(x_{1}-x_{2})\geq0.$$ Since $$\rho_1-\frac{q\rho_1}{\rho_1^2-1}-\frac{1}{\rho_1+1}>0\quad\text{for}\quad \rho_1>\sqrt{q+2},$$ we have $$\label{eqx1} x_{1}\geq x_{2}>x_{7}.$$ Since $R\cong L- v_7 v_8 + v_1 v_8$, applying Lemma [Lemma 6](#lemma13){reference-type="ref" reference="lemma13"}, we have the right inequality in ([\[eqc2\]](#eqc2){reference-type="ref" reference="eqc2"}). Now suppose $q=s+1$. Let $r=s-1$. Then $s=r+1$ and $q=r+2$. 
By Claim 1, we have $$\rho_1>\rho(T^*_{q+1})=\sqrt{r+4}\quad \text{and}\quad \rho_2>\rho(T^*_{r+2})=\sqrt{r+3}.$$ By symmetry and $\rho_1X=A(L)X$, we have $$\begin{cases} \text{$\rho_1x_{3}=x_{1}+x_{4}$,} &\quad\text{$\rho_1x_{4}=x_{2}+x_{3}$,}\\ \text{$\rho_1x_{1}=(r+1)x_{5}+x_{3}$,} &\quad\text{$\rho_1x_{2}=(r+2)x_{7}+x_{4}$,}\\ \text{$\rho_1x_{5}=x_{1}+x_{6}$,} &\quad\text{$\rho_1x_{7}=x_{2}+x_{8}$,}\\ \text{$\rho_1x_{6}=x_{5}$,} &\quad\text{$\rho_1x_{8}=x_{7}$,}\\ \end{cases}$$ which lead to (see Appendix A.1) $$\label{eqa1} \rho_1^6-(2r+7)\rho_1^4+(r+3)(r+4)\rho_1^2-1=0.$$ By symmetry and $\rho_2Y=A(R)Y$, we have $$\begin{cases} \text{$\rho_2y_{3}=y_{1}+y_{3},$}\\ \text{$\rho_2y_{1}=(r+1)y_{5}+y_{3}+y_{9},$}\\ \text{$\rho_2y_{5}=y_{1}+y_{6},$}\\ \text{$\rho_2y_{6}=y_{5},$}\\ \text{$\rho_2y_{9}=y_{1},$}\\ \end{cases}$$ which lead to (see Appendix A.2) $$\label{eqa2} \rho_2^4-(r+4)\rho_2^2-\rho_2+1=0.$$ Notice that the function $g(x)=x^4-(r+4)x^2-x+1$ is increasing when $x>\sqrt{r+3}$. Since $g(\sqrt{r+4})<0$, we have $$\label{eqc14} \rho_2>\sqrt{r+4}.$$ Let $$f(x)=x^6-(2r+7)x^4+(r+3)(r+4)x^2-1.$$ Then by ([\[eqa1\]](#eqa1){reference-type="ref" reference="eqa1"}) and ([\[eqc14\]](#eqc14){reference-type="ref" reference="eqc14"}) we have $$\begin{aligned} f(\rho_2)-f(\rho_1)&=\rho_2^6-(2r+7)\rho_2^4+(r+3)(r+4)\rho_2^2-1\\ &=\rho_2^2\left[\rho_2^4-(r+4)\rho_2^2\right]-(r+3)\left[\rho_2^4-(r+4)\rho_2^2\right]-1\\ &=\rho_2^2(\rho_2-1)-(r+3)(\rho_2-1)-1\\ &=\rho_2^3-\rho_2^2-(r+3)\rho_2+r+2\\ &>0.\end{aligned}$$ Since $f(x)=x^6-(2r+7)x^4+(r+3)(r+4)x^2-1$ is increasing when $x>\sqrt{r+4}$ and both $\rho_1,\rho_2>\sqrt{r+4}$, it follows that $\rho_2>\rho_1$, which gives the right inequality in ([\[eqc2\]](#eqc2){reference-type="ref" reference="eqc2"}). 
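The comparison of $\rho_1$ and $\rho_2$ above can also be confirmed numerically (our own addition, not part of the proof). Each of the polynomials in ([\[eqa1\]](#eqa1){reference-type="ref" reference="eqa1"}) and ([\[eqa2\]](#eqa2){reference-type="ref" reference="eqa2"}) is negative at $\sqrt{r+4}$ and increasing beyond it, so its relevant root can be located by bisection, and the root of the quartic always exceeds the root of the sextic:

```python
import math

def bisect_root(h, a, b, tol=1e-12):
    """Root of an increasing function h on [a, b], assuming h(a) < 0 < h(b)."""
    while b - a > tol:
        m = 0.5 * (a + b)
        if h(m) < 0:
            a = m
        else:
            b = m
    return 0.5 * (a + b)

for r in range(6):
    f = lambda x, r=r: x**6 - (2*r + 7)*x**4 + (r + 3)*(r + 4)*x**2 - 1  # (eqa1)
    g = lambda x, r=r: x**4 - (r + 4)*x**2 - x + 1                       # (eqa2)
    lo = math.sqrt(r + 4)
    assert f(lo) < 0 and g(lo) < 0          # both polynomials negative at sqrt(r+4)
    rho1 = bisect_root(f, lo, r + 10.0)     # = rho(L)
    rho2 = bisect_root(g, lo, r + 10.0)     # = rho(R)
    assert rho1 < rho2                      # the right inequality in (eqc2)
```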
By ([\[eqx1\]](#eqx1){reference-type="ref" reference="eqx1"}), applying Lemma [Lemma 6](#lemma13){reference-type="ref" reference="lemma13"} we can see that a graph in $\{G_3(0,s,0,q): s+q=(n-4)/2\}$ attains the minimum spectral radius if and only if $\{s,q\}=\{\lceil(n-4)/4\rceil,\lfloor(n-4)/4\rfloor\}$, i.e., $G_3(0,s,0,q)\cong H(2s+2q+4)$. This completes the proof of Claim 2.\ **Claim 3.** *Let $s,q\geq1$ be integers. If $s> q$, then $$\rho(G_3(0,s,1,q))<\rho(G_3(1,s,0,q))$$ and $$\rho(G_3(0,s,1,q))<\rho(G_3(0,s+1,1,q-1)).$$ If $s=q$, then $$\rho(G_3(1,s,0,q))=\rho(G_3(0,s,1,q))<\rho(G_3(1,s-1,0,q+1)).$$ Moreover, we have $$\rho(H(2s+2q+5))\le \rho(G_3(r,s,p,q))\quad \text{for} \quad r+p=1$$ with equality if and only if $G_3(r,s,p,q)\cong H(2s+2q+5)$.* *Proof.* Let $W=G_3(0,s,1,q)$ and label its vertices as in Figure [\[fig:12.4\]](#fig:12.4){reference-type="ref" reference="fig:12.4"}. Let $\rho_3=\rho(W)$ with Perron vector $Z=(z_1,z_2,...,z_n)^T$, where $z_i$ corresponds to the vertex $v_i$ for $i=1,\ldots,n$. Suppose $s> q$. By $\rho_3Z=A(W)Z$, we have $$\begin{aligned} \rho_3(z_{3}-z_{4})&=&z_{1}+z_{4}-z_{2}-z_{3},\label{eqc31}\\ \rho_3(z_{5}-z_{7})&=& z_{1}+z_{6}-z_{2}-z_{8},\label{eqc32}\\ \rho_3(z_{1}-z_{2})&=&sz_{5}+z_{3}-qz_{7}-z_{4}-z_9.\label{eqc33} \end{aligned}$$ By ([\[eqc31\]](#eqc31){reference-type="ref" reference="eqc31"}), we have $$z_{3}-z_{4}=\frac{1}{\rho_3+1}(z_{1}-z_{2}).$$ By ([\[eqc32\]](#eqc32){reference-type="ref" reference="eqc32"}), we have $$\rho_3^2(z_{5}-z_{7})=\rho_3(z_{1}-z_{2})+\rho_3z_6-\rho_3z_8=\rho_3(z_{1}-z_{2})+z_{5}-z_{7},$$ which implies $$z_{5}-z_{7}=\frac{\rho_3}{\rho_3^2-1}(z_{1}-z_{2}).$$ Since $\rho_3z_9=z_2$ and $\rho_3z_7=z_2+z_8$, we have $z_7>z_9$. 
Now by ([\[eqc33\]](#eqc33){reference-type="ref" reference="eqc33"}) we have $$\begin{aligned} \rho_3(z_{1}-z_{2}) >s(z_{5}-z_{7})+z_{3}-z_{4} =\Big(\frac{s\rho_3}{\rho_3^2-1}+\frac{1}{\rho_3+1}\Big)(z_{1}-z_{2}), \end{aligned}$$ which leads to $$\Big(\rho_3-\frac{s\rho_3}{\rho_3^2-1}-\frac{1}{\rho_3+1}\Big)(z_{1}-z_{2})>0.$$ Let $$h(x)=x-\frac{sx}{x^2-1}-\frac{1}{x+1}.$$ Then $h(\sqrt{s+2})>0$ and $h(x)$ is increasing when $x>1$. Recall that $\rho_3>\rho(T^*_{s+1})=\sqrt{s+2}$. We have $$\label{eqc37} h(\rho_3)=\rho_3-\frac{s\rho_3}{\rho_3^2-1}-\frac{1}{\rho_3+1}>0.$$ Hence, $z_{1}>z_{2}$. Applying Lemma [Lemma 6](#lemma13){reference-type="ref" reference="lemma13"}, we have $$\rho_3=\rho(G_3(0,s,1,q))<\rho(G_3(0,s,1,q)-v_2 v_9+v_1 v_9)=\rho(G_3(1,s,0,q)).$$ Similarly, considering the graph obtained from $W$ by cutting a $P_2$ attached to $v_2$ and attaching it to $v_1$, we have $$\rho(G_3(0,s,1,q))<\rho(G_3(0,s+1,1,q-1)).$$ Suppose $s=q$. By $\rho_3Z=A(W)Z$ we have $$\begin{aligned} \rho_3(z_9-z_{5})&=&z_{2}-z_{1}-z_{6},\label{eqc34}\\ \rho_3(z_{4}-z_{3})&=&z_{2}+z_{3}-z_{1}-z_{4},\label{eqc35}\\ \rho_3(z_{2}-z_{1})&=&sz_{7}+z_{4}+z_9-sz_{5}-z_{3}.\label{eqc36} \end{aligned}$$ As above, we have $z_{7}>z_9$ and $$\begin{aligned} \rho_3^2(z_9-z_{5})=\rho_3(z_{2}-z_{1}-z_{6}) =s(z_{7}-z_{5})+z_9-z_{5}+z_{4}-z_{3} >(s+1)(z_9-z_{5})+z_{4}-z_{3}, \end{aligned}$$ which is equivalent to $$\begin{aligned} \left[\rho_3^2-(s+1)\right](z_9-z_{5})>z_{4}-z_{3}.\label{equ10} \end{aligned}$$ On the other hand, by ([\[eqc35\]](#eqc35){reference-type="ref" reference="eqc35"}) and ([\[eqc36\]](#eqc36){reference-type="ref" reference="eqc36"}) we have $$z_{2}-z_{1}=(\rho_3+1)(z_{4}-z_{3})$$ and $$\begin{aligned} \rho_3(z_{2}-z_{1}) >s(z_9-z_{5})+z_{4}-z_{3}, \end{aligned}$$ which lead to $$\begin{aligned} \left[\rho_3(\rho_3+1)-1\right](z_{4}-z_{3})>z_9-z_{5}.\label{equ11} \end{aligned}$$ By ([\[equ10\]](#equ10){reference-type="ref" reference="equ10"}) and 
([\[equ11\]](#equ11){reference-type="ref" reference="equ11"}), we have $$\left[\rho_3^2-(s+1)-\frac{1}{\rho_3(\rho_3+1)-1}\right](z_9-z_{5})>0.$$ Using arguments similar to those in the derivation of ([\[eqc37\]](#eqc37){reference-type="ref" reference="eqc37"}), we have $$\rho_3^2-(s+1)-\frac{1}{\rho_3(\rho_3+1)-1}>0,$$ since $\rho_3>\rho(T^*_{s+1})=\sqrt{s+2}$. Therefore, $z_9>z_{5}$. Applying Lemma [Lemma 6](#lemma13){reference-type="ref" reference="lemma13"}, we have $$\rho(G_3(1,s,0,q))=\rho(G_3(0,s,1,q))=\rho(W)<\rho(W-v_5v_6+v_9v_6)=\rho(G_3(1,s-1,0,q+1)).$$ This completes the proof of Claim 3.\ Now we are ready to present the proof for the case $k=n-2$ of Theorem [Theorem 3](#th1){reference-type="ref" reference="th1"}.\ *Proof for the case $k=n-2$ of Theorem [Theorem 3](#th1){reference-type="ref" reference="th1"}.* Suppose $G$ attains the minimum spectral radius in $\mathcal{G}_{n,n-2}$ with $n\ge 10$. Applying Theorem [Theorem 2](#th2){reference-type="ref" reference="th2"}, $G$ is a tree. Suppose $S$ is a maximum dissociation set of $G$ such that $G[S]$ consists of $\gamma$ isolated vertices $x_1,\ldots,x_{\gamma}$ and $\tau$ disjoint edges $y_1z_1,\ldots, y_{\tau}z_{\tau}$. Let $V(G)\backslash S=\{v_1,v_2\}$. Since $G$ is connected, the vertex $x_{i}$ is adjacent to at least one of $\{v_1,v_2\}$ for all $i\in\{1,\ldots,\gamma\}$. Similarly, the edge $y_jz_j$ is adjacent to at least one of $\{v_1,v_2\}$ for all $j\in\{1,\ldots,\tau\}$. We claim that $G\cong G_i(r,s,p,q)$ with $i\in\{1,2,3,4\},$ where $(r,s),(p,q)\not\in\{(0,0),(1,0)\}$ in the case $i=1$, and $(r,s)\ne (0,0)$, $(p,q)\ne (0,0)$ in the case $i=2$. In fact, if $v_1 v_2\in E(G)$, then $G\cong G_1(r,s,p,q)$. Moreover, we have $(r,s),(p,q)\not\in\{(0,0),(1,0)\}$, since $diss(G)=n-2$. If $v_1 v_2\not\in E(G)$, then either there is an isolated vertex in $G[S]$ adjacent to both of $v_1$ and $v_2$, or there is an edge in $G[S]$ adjacent to both of $v_1$ and $v_2$. 
In the former case, we have $G\cong G_2(r,s,p,q)$ with $(r,s)\ne (0,0)$ and $(p,q)\ne (0,0)$, since $diss(G)=n-2$; while in the latter case, we have $G\cong G_i(r,s,p,q)$ with $i\in \{3,4\}$. Next we distinguish two cases to prove $G\cong G_3(r,s,p,q)$ for some nonnegative integers $r,s,p,q$. *Case 1.* $G$ contains an internal path with $v_1$ and $v_2$ being its ends. Then $$G\cong G_i(r,s,p,q)\quad \text{ with}\quad i\in \{1,2,3\}.$$ Suppose $G\cong G_1(r,s,p,q)$ with $(r,s),(p,q)\not\in\{(0,0),(1,0)\}$. Since $n\ge 10$, at least one of $r+2s$, $p+2q$ is larger than or equal to 4, say, $r+2s\ge 4$. Let $T$ be the tree obtained from $G_{v_1v_2}^{(2)}$ by deleting two vertices from the leaves and the edges attached to $v_1$. Then $T \cong G_3(r',s',p,q)$ for some integers $r'$ and $s'$. Moreover, we have $diss(T)=n-2$. Applying Lemma [Lemma 4](#lemma3){reference-type="ref" reference="lemma3"} and Lemma [Lemma 12](#lemma12){reference-type="ref" reference="lemma12"}, we have $\rho(T )<\rho(G)$, which contradicts the assumption on $G$. Similarly, if $G\cong G_2(r,s,p,q)$ with $(r,s)\ne (0,0)$, $(p,q)\ne (0,0)$, we can obtain a tree $T$ from $G_{v_1v_3}$ by deleting one vertex such that $$diss(T)=n-2\quad \text{and}\quad \rho(T )<\rho(G),$$ a contradiction. Therefore, we have $G\cong G_3(r,s,p,q)$. *Case 2.* $G$ does not contain any internal path with $v_1$ and $v_2$ being its ends. Then either $G\cong G_4(r,s,p,q)$ or $$\label{sep927} G\cong G_i(r,s,p,q) ~~\text{with}~~ i\in\{1,2,3\} ~~s.t.~~ d(v_1)\le 2 ~~\text{or}~~ d(v_2)\le 2.$$ Suppose $G\cong G_4(r,s,p,q)$. If $d(v_1)\ge3$ and $d(v_2)\ge3$, let $T'$ be the tree obtained from $G-v_4$ by subdividing the edge $v_1v_3$. Then by Lemma [Lemma 4](#lemma3){reference-type="ref" reference="lemma3"} and Lemma [Lemma 12](#lemma12){reference-type="ref" reference="lemma12"}, we have $\rho(T')<\rho(G)$. Since $diss(T')=n-2$, we get a contradiction. If $d(v_1)\le2$ or $d(v_2)\le2$, say, $d(v_1)=2$. 
If $v_1$ is adjacent to a leaf, then $G\cong G_1(1,1,p,q)$ and we can deduce a contradiction as in Case 1. If a pendant edge $e$ is attached to $v_1$, then we construct a tree $T$ from $G_{v_2v_3}$ by deleting the leaf incident with $e$. Again, by Lemma [Lemma 4](#lemma3){reference-type="ref" reference="lemma3"} and Lemma [Lemma 12](#lemma12){reference-type="ref" reference="lemma12"}, we have $\rho(T)<\rho(G)$ and $diss(T)=n-2$, a contradiction. Therefore, we have ([\[sep927\]](#sep927){reference-type="ref" reference="sep927"}), and hence $G\cong G_3(r,s,p,q)$ for some nonnegative integers $r,s,p,q$.\ Now if $d(v_i)\ge 3$ and $v_i$ is adjacent to two leaves $u_1,u_2$, let $G'=G-v_iu_1+u_1u_2$. Then applying Lemma [Lemma 5](#lemma2){reference-type="ref" reference="lemma2"}, we have $diss(G')=n-2$ and $\rho(G')<\rho(G)$, a contradiction. Therefore, we have $r\le 1$ and $p\le 1$. Moreover, applying Claim 2, we have $r+p\le 1$. Note that $$diss(G_3(r,s,p,q))=n-2 ~~\text{for all}~~ r,s,p,q~~s.t.~~ r+p+2s+2q=n-4.$$ If $n$ is even, then $r=p=0$ and $G\cong G_3(0,s,0,q)$. Since $G_3(0,s,0,q)\cong G_3(0,q,0,s)$, we may assume $s\ge q$. By Claim 2, we have $G\cong H(n)$. If $n$ is odd, then we have $(r,p)=(1,0)$ or $(0,1)$. By Claim 3, we also have $G\cong H(n)$. This completes the proof of Theorem [Theorem 3](#th1){reference-type="ref" reference="th1"}. $\square$ # Acknowledgement {#acknowledgement .unnumbered} This work was supported by the National Natural Science Foundation of China (No. 12171323), Guangdong Basic and Applied Basic Research Foundation (No. 2022A1515011995) and the Science and Technology Foundation of Shenzhen City (No. JCYJ20210324095813036). # References {#references .unnumbered} A. Berman, X.D. Zhang, On the spectral radius of graphs with cut vertices, J. Combin. Theory Ser. B. 83 (2001) 233-240. F. Bock, J. Pardey, L.D. Penso, D. Rautenbach, A bound on the dissociation number, arXiv:2202.09190v1. F. Bock, J. Pardey, L.D. Penso, D. 
Rautenbach, Relating dissociation, independence, and matchings, Discrete Appl. Math. 322 (2022) 160-165. B. Bre${\rm \check{s}}$ar, F. Kardo${\rm \check{s}}$, J. Katreni${\rm \check{c}}$, G. Semani${\rm \check{s}}$in, Minimum $k$-path vertex cover, Discrete Appl. Math. 159 (2011) 1189-1195. B. Bre${\rm \check{s}}$ar, M. Jakovac, J. Katreni${\rm \check{c}}$, G. Semani${\rm \check{s}}$in, A. Taranenko, On the vertex $k$-path cover, Discrete Appl. Math. 161 (2013) 1943-1949. R.A. Brualdi, A.J. Hoffman, On the spectral radius of (0,1)-matrices, Linear Algebra Appl. 65 (1985) 133-146. R.A. Brualdi, E.S. Solheid, On the spectral radius of complementary acyclic matrices of zeros and ones, SIAM J. Algebraic Discrete Methods 7 (1986) 265-272. L. Chen, X. Huang, Z. Zhang, A simpler PTAS for connected k-path vertex cover in homogeneous wireless sensor network, J. Comb. Optim. 36 (2018) 35--43. J. Choi, J. Park, The minimal spectral radius with given independence number, arXiv:2304.06290. D. M. Cvetković, M. Doob, H. Sachs, Spectra of graphs, third ed. Johann Ambrosius Barth Verlag, Heidelberg-Leipzig, 1995. J. Das, S. Mohanty, On the spectral radius of block graphs with a given dissociation number, arXiv:2301.12790. X. Du, L. Shi, Graphs with Small Independence Number Minimizing the spectral radius, Discret. Math. Algorithms Appl. 5 (2013) 1350017. L. Feng, G. Yu, X.D. Zhang, Spectral radius of graphs with given matching number, Linear Algebra Appl. 422 (2007) 133-138. A.J. Hoffman, J.H. Smith, Recent Advances in Graph Theory, Academic Praha, 1975. Y. Hong, J.L. Shu, K. Fang, A Sharp Upper Bound of the Spectral Radius of Graphs, J. Combin. Theory Ser. B. 81 (2001) 177-183. Y. Hu, Q. Huang, Z. Lou, Graphs with the minimum spectral radius for given independence number, arXiv:2206.09152. Q. Li, K.Q. Feng, On the largest eigenvalues of graphs, Acta Math. Appl. 2 (1979) 167-175. H. Liu, M. Lu, F. 
Tian, On the spectral radius of unicyclic graphs with fixed diameter, Linear Algebra Appl. 420 (2007) 449-457. Z. Lou, J.M. Guo, The spectral radius of graphs with given independence number, Discrete Math. 345 (2022) 112778. Y. Orlovich, A. Dolgui, G. Finke, V. Gordon, F. Werner, The complexity of dissociation set problems in graphs, Discrete Appl. Math. 159 (2011) 1352-1366. J. H. Smith, Some properties of the spectrum of a graph, in: R. Guy et al.(eds.), Combinatorial Structures and their applications, Proc. Conf. Calgary, (1969). Gordon and Breach, New York, (1970) 403-406. W. Sun, S. Li, On the maximal number of maximum dissociation sets in forests with fixed order and dissociation number, Taiwanese J. Math. 1 (2023) 137 . J. Tu, Y. Li, J. Du, Maximal and maximum dissociation sets in general and triangle-free graphs, Appl. Math. Comput. 426 (2022) 127107. J. Tu, Z. Zhang, Y. Shi, The maximum number of maximum dissociation sets in trees, J. Graph Theory 96 (2021) 472-489. E.R. van Dam, R.E. Kooij, The minimal spectral radius of graphs with a given diameter, Linear Algebra Appl. 432 (2007) 408-419. B. Wu, E. Xiao, Y. Hong, The spectral radius of trees on $k$ pendant vertices, Linear Algebra Appl. 395 (2005) 343-349. M. Xu, Y. Hong, J. Shu, M. Zhai, The minimum spectral radius of graphs with a given independence number, Linear Algebra Appl. 431(2009) 937-945. M. Yannakakis, Node-Deletion Problems on Bipartite Graphs, SIAM J. Comput. 10 (1981) 310-327. X. Zhan, Matrix Theory, Graduate Studies in Mathematics, vol. 147, American Mathematical Society, Providence, RI, 2013. 
# Appendix {#appendix .unnumbered} ## **A.1 Proof of the equation ([\[eqa1\]](#eqa1){reference-type="ref" reference="eqa1"})** {#a.1-proof-of-the-equation-eqa1 .unnumbered} Recall that $$\rho_1x_{3}=x_{1}+x_{4},$$ [\[16a\]]{#16a label="16a"}\ $$\rho_1x_{1}=(r+1)x_{5}+x_{3},$$ [\[16b\]]{#16b label="16b"}\ $$\rho_1x_{5}=x_{1}+x_{6},$$ [\[16c\]]{#16c label="16c"}\ $$\rho_1x_{6}=x_{5}$$ [\[16d\]]{#16d label="16d"} and $$\rho_1x_{4}=x_{2}+x_{3},$$ [\[17a\]]{#17a label="17a"}\ $$\rho_1x_{2}=(r+2)x_{7}+x_{4},$$ [\[17b\]]{#17b label="17b"}\ $$\rho_1x_{7}=x_{2}+x_{8},$$ [\[17c\]]{#17c label="17c"}\ $$\rho_1x_{8}=x_{7}.$$ [\[17d\]]{#17d label="17d"} By ([\[16d\]](#16d){reference-type="ref" reference="16d"}), substituting $x_6$ with ${x_{5}}/{\rho_1}$ in ([\[16c\]](#16c){reference-type="ref" reference="16c"}), we have $$\rho_1x_{5}=x_{1}+\frac{1}{\rho_1}x_{5},$$ which leads to $$x_{5}=\frac{\rho_1}{\rho_1^2-1}x_{1}.\label{19}$$ Combining this with ([\[16b\]](#16b){reference-type="ref" reference="16b"}), we have $$\rho_1x_{1}=\frac{(r+1)\rho_1}{\rho_1^2-1}x_{1}+x_{3},$$ which implies $$x_{3}=\left[\rho_1-\frac{(r+1)\rho_1}{\rho_1^2-1}\right]x_1=\left[\rho_1-\frac{(r+1)\rho_1}{\rho_1^2-1}\right](\rho_1x_{3}-x_{4}),$$ where the last equality follows from ([\[16a\]](#16a){reference-type="ref" reference="16a"}). 
It follows that $$\left[\rho_1^2-1-\frac{(r+1)\rho_1^2}{\rho_1^2-1}\right]x_{3}=\left[\rho_1-\frac{(r+1)\rho_1}{\rho_1^2-1}\right]x_{4}.\label{22}$$ Similarly, by ([\[17a\]](#17a){reference-type="ref" reference="17a"}), ([\[17b\]](#17b){reference-type="ref" reference="17b"}), ([\[17c\]](#17c){reference-type="ref" reference="17c"}), ([\[17d\]](#17d){reference-type="ref" reference="17d"}) we have $$\left[\rho_1^2-1-\frac{(r+2)\rho_1^2}{\rho_1^2-1}\right]x_{4} =\left[\rho_1-\frac{(r+2)\rho_1}{\rho_1^2-1}\right]x_{3}.\label{23}$$ Now combining ([\[22\]](#22){reference-type="ref" reference="22"}) with ([\[23\]](#23){reference-type="ref" reference="23"}), we have $$\begin{aligned} &\left[\rho_1^2-1-\frac{(r+1)\rho_1^2}{\rho_1^2-1}\right] \left[\rho_1^2-1-\frac{(r+2)\rho_1^2}{\rho_1^2-1}\right] =\left[\rho_1-\frac{(r+1)\rho_1}{\rho_1^2-1}\right]\left[\rho_1-\frac{(r+2)\rho_1}{\rho_1^2-1}\right],\\\end{aligned}$$ which is equivalent to ([\[eqa1\]](#eqa1){reference-type="ref" reference="eqa1"}). 
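The rational-function manipulations above, and the analogous substitution identity in Appendix A.2 below, can be double-checked numerically (our own addition, not part of the paper): clearing the denominator $x^2-1$ must recover the left-hand sides of ([\[eqa1\]](#eqa1){reference-type="ref" reference="eqa1"}) and ([\[eqa2\]](#eqa2){reference-type="ref" reference="eqa2"}) exactly at every test point:

```python
import random

random.seed(0)
for r in range(5):
    for _ in range(20):
        x = random.uniform(2.5, 6.0)
        u = x * x
        # A.1: the bracket identity, times (x^2 - 1), equals the LHS of (eqa1).
        A = u - 1 - (r + 1) * u / (u - 1)
        B = u - 1 - (r + 2) * u / (u - 1)
        C = x - (r + 1) * x / (u - 1)
        D = x - (r + 2) * x / (u - 1)
        eqa1 = x**6 - (2*r + 7)*x**4 + (r + 3)*(r + 4)*x**2 - 1
        assert abs((A * B - C * D) * (u - 1) - eqa1) < 1e-6 * max(1.0, abs(eqa1))
        # A.2: the substitution identity, times (x^2 - 1), equals the LHS of (eqa2).
        lhs = (u - (r + 1) * u / (u - 1) - x / (x - 1) - 1) * (u - 1)
        eqa2 = x**4 - (r + 4)*x**2 - x + 1
        assert abs(lhs - eqa2) < 1e-6 * max(1.0, abs(eqa2))
```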
## **A.2 Proof of the equation ([\[eqa2\]](#eqa2){reference-type="ref" reference="eqa2"})** {#a.2-proof-of-the-equation-eqa2 .unnumbered} Recall that $$\rho_2y_{3}=y_{1}+y_{3},$$ [\[11a\]]{#11a label="11a"}\ $$\rho_2y_{1}=(r+1)y_{5}+y_{3}+y_{9},$$ [\[11b\]]{#11b label="11b"}\ $$\rho_2y_{5}=y_{1}+y_{6},$$ [\[11c\]]{#11c label="11c"}\ $$\rho_2y_{6}=y_{5},$$ [\[11d\]]{#11d label="11d"}\ $$\rho_2y_{9}=y_{1}.$$ [\[11e\]]{#11e label="11e"}\ Combining ([\[11a\]](#11a){reference-type="ref" reference="11a"}) with ([\[11e\]](#11e){reference-type="ref" reference="11e"}), we have $$\label{a21} y_3=\frac{\rho_2}{\rho_2-1}y_9.$$ By ([\[11c\]](#11c){reference-type="ref" reference="11c"}), ([\[11d\]](#11d){reference-type="ref" reference="11d"}) and ([\[11e\]](#11e){reference-type="ref" reference="11e"}), we have $$y_6=\frac{\rho_2}{\rho_2^2-1}y_9\quad \text{and}\quad y_5= \frac{\rho_2^2}{\rho_2^2-1}y_9.\label{equ12}$$ Now substituting $y_1=\rho_2y_9, y_3= {\rho_2}y_9/({\rho_2-1}), y_5= {\rho_2^2}y_9/({\rho_2^2-1})$ in ([\[11b\]](#11b){reference-type="ref" reference="11b"}), we have $$\rho_2^2y_9=\frac{(r+1)\rho_2^2}{\rho_2^2-1}y_{9}+\frac{\rho_2}{\rho_2-1}y_9+ y_{9}.$$ Therefore, we have $$\rho_2^2=\frac{(r+1)\rho_2^2}{\rho_2^2-1}+\frac{\rho_2}{\rho_2-1}+ 1,$$ which is equivalent to ([\[eqa2\]](#eqa2){reference-type="ref" reference="eqa2"}). [^1]: Corresponding author.\    Email: mathzejun\@gmail.com (Huang), mathjiahui\@163.com (Liu), 2015110112\@email.szu.edu.cn (Zhang)
{ "id": "2309.15597", "title": "Connected graphs with a given dissociation number attaining the minimum\n spectral radius", "authors": "Zejun Huang, Jiahui Liu, Xinwei Zhang", "categories": "math.CO", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | Let $X$ and $t$ be real numbers, $X\ge 1$ and $t>1$. The family of sets $$\left\{\left\lfloor \frac{X}{n^t} \right\rfloor ~:~ 1\leq n\leq X\right\}$$ are very sparse sets that satisfy the prime number theorem as $X$ increases. We give an exact formula for the cardinality of these sets. Let $F$ be a $p$-adic field. For certain non-abelian nilpotent algebraic groups $U$ over $\bar\mathbb{Z}_p$ equipped with $\operatorname{Gal}_F$-action, we study the associated *Heisenberg varieties* which model the non-abelian cohomology set "$H^1(\operatorname{Gal}_F, U)$". The construction of Heisenberg varieties involves the Herr complexes, including their cup product structure. Write $U_n$ for a quasi-split unitary group and assume $p\ne 2$. We classify mod $p$ Langlands parameters for $U_n$ (quasi-split), $\operatorname{SO}_{2n+1}$, $\operatorname{SO}_{2n}$, $\operatorname{Sp}_{2n}$, ${\operatorname{GSpin}}_{2m}$ and ${\operatorname{GSpin}}_{2m+1}$ (split) over $F$, and show that they are successive Heisenberg-type extensions of elliptic Langlands parameters. We employ the Heisenberg variety to study the obstructions for lifting a non-abelian cocycle along the map $H^1(\operatorname{Gal}_F, U(\bar\mathbb{Z}_p))\to H^1(\operatorname{Gal}_F, U(\bar{\mathbb{F}}_p))$. We present a precise theorem that reduces the task of finding de Rham lifts of mod $p$ Langlands parameters for unitary, symplectic, orthogonal, and spin similitude groups to the dimension analysis of specific closed substacks of the reduced Emerton-Gee stacks for the corresponding group. Finally, we carry out the dimension analysis for the unitary Emerton-Gee stacks using the geometry of Grassmannian varieties. The paper culminates in the proof of the existence of potentially crystalline lifts of regular Hodge type for all mod $p$ Langlands parameters for $p$-adic (possibly ramified) unitary groups $U_n$. It is the first general existence result for de Rham lifts for non-split (ramified) groups, and provides evidence for the topological Breuil-Mézard conjecture for more general groups. 
author: - Zhongyipan Lin bibliography: - heisenberg.bib title: "**Heisenberg varieties and the existence of de Rham lifts**" --- # Introduction   Let $G$ be a reductive group over a $p$-adic field $F$ which splits over a tame extension $K/F$, and let ${^{L}\!G} = \widehat{G} \rtimes \operatorname{Gal}(K/F)$ be the Langlands dual group of $G$. In our previous work [@L23], we classified elliptic mod $p$ Langlands parameters for $G$ and constructed their de Rham lifts. In this paper, we shift our attention to parabolic mod $p$ Langlands parameters. Let $P\subset \widehat{G}$ be a $\operatorname{Gal}(K/F)$-stable parabolic. Write ${^{L}\!P}$ for $P\rtimes \operatorname{Gal}(K/F)$. A mod $p$ Langlands parameter is either elliptic, or factors through some ${^{L}\!P}$. Let $\bar\rho:\operatorname{Gal}_F\to {^{L}\!P}(\bar{\mathbb{F}}_p)$ be a parabolic mod $p$ Langlands parameter. We are interested in the following question: > **Question**: Does there exist a de Rham lift $\rho:\operatorname{Gal}_F\to {^{L}\!P}(\bar\mathbb{Z}_p)$ of regular Hodge type? This question is addressed for $G=\operatorname{GL}_n$ in the book by Emerton and Gee, and has important applications to the geometric Breuil-Mézard conjecture (see [@EG23] and [@LLHLM23]). We briefly describe the general strategy employed in [@EG23] and explain where the proof breaks down for groups other than $\operatorname{GL}_n$. Let ${^{L}\!M}$ denote a $\operatorname{Gal}(K/F)$-stable Levi subgroup of ${^{L}\!P}$ and write $U$ for the unipotent radical of $P$. Write $\bar\rho_M:\operatorname{Gal}_F\xrightarrow{\bar\rho} {^{L}\!P}(\bar{\mathbb{F}}_p)\to {^{L}\!M}(\bar{\mathbb{F}}_p)$ for the ${^{L}\!M}$-semisimplification of $\bar\rho$. The construction of $\rho$ follows a two-step process: - Step 1: Carefully choose a lift $\rho_M:\operatorname{Gal}_F\to {^{L}\!M}(\bar\mathbb{Z}_p)$ of $\bar\rho_M$. - Step 2: Endow $U(\bar\mathbb{Z}_p)$ with the $\operatorname{Gal}_F$-action induced by $\rho_M$. 
Show that the image of $$H^1(\operatorname{Gal}_F, U(\bar\mathbb{Z}_p)) \to H^1(\operatorname{Gal}_F, U(\bar{\mathbb{F}}_p))$$ contains the cocycle corresponding to $\bar\rho$. ## Partial lifts after abelianization and the work of Emerton-Gee   Let $[U, U]$ denote the derived subgroup of $U$ and write $U^{\operatorname{ab}}:=U/[U, U]$ for the abelianization of $U$. The first approximation of a de Rham lift of $\bar\rho$ is a continuous group homomorphism $$\operatorname{Gal}_F\to \frac{{^{L}\!P}}{[U,U]}(\bar\mathbb{Z}_p)$$ lifting $\bar\rho$ modulo $[U,U]$. The groundbreaking idea presented in [@EG23] is that we can achieve the following: - Step 1: Choose a lift $\rho_M:\operatorname{Gal}_F\to {^{L}\!M}(\bar\mathbb{Z}_p)$ of $\bar\rho_M$. - Step 2: Guarantee the image of $$H^1(\operatorname{Gal}_F, U^{\operatorname{ab}}(\bar\mathbb{Z}_p)) \to H^1(\operatorname{Gal}_F, U^{\operatorname{ab}}(\bar{\mathbb{F}}_p))$$ contains the cocycle corresponding to $\bar\rho \mod [U,U]$, as long as we can estimate the dimension of certain substacks of the reduced Emerton-Gee stacks for $M$. We formalize the output of their geometric argument as follows: **Property EPL 1**. *(Existence of partial lifts) Let $\operatorname{Spec}R$ be a non-empty potentially crystalline deformation ring of $\bar\rho_M$ such that for some $x \in \operatorname{Spec}R(\bar\mathbb{Q}_p)$, $H^1_{{\operatorname{crys}}}(\operatorname{Gal}_K, U^{\operatorname{ab}}(\bar\mathbb{Q}_p)) =H^1(\operatorname{Gal}_K, U^{\operatorname{ab}}(\bar\mathbb{Q}_p))$. Then there exists a point $y\in \operatorname{Spec}R(\bar\mathbb{Z}_p)$ such that the image of $H^1(\operatorname{Gal}_F, U^{\operatorname{ab}}(\bar\mathbb{Z}_p)) \to H^1(\operatorname{Gal}_F, U^{\operatorname{ab}}(\bar{\mathbb{F}}_p))$ contains the cocycle corresponding to $\bar\rho \mod [U,U]$.* The geometric input is as follows: **Property SSD 1**. 
*(Sufficiently small dimension) Write $\mathcal{X}_{F, {^{L}\!M}, {\operatorname{red}}}$ for the reduced Emerton-Gee stacks for $M$ and write $X_s\subset \mathcal{X}_{F, {^{L}\!M}, {\operatorname{red}}}$ for the (scheme-theoretic closure of the) locus $$\{x| \dim_{\bar{\mathbb{F}}_p}H^2(\operatorname{Gal}_F, U^{\operatorname{ab}}(\bar{\mathbb{F}}_p))\ge s\}.$$ Then $$\dim X_s + s \le [F:\mathbb{Q}_p]\dim M/B_M$$ where $B_M$ is a Borel of $M$.* Here we define the dimension of an empty set to be $-\infty$ to avoid confusion. **Theorem 1**. *Property SSD implies Property EPL.* *Proof.* The proof of [@EG23 Theorem 6.1.1, Theorem 6.3.2] works verbatim. See also [@L21 Theorem 5.1.2]. The proof needs the algebraicity of the reduced Emerton-Gee stacks, which is established in [@L23B], as well as the basic properties of the potentially crystalline deformation rings, which are the main results of [@BG19]. ◻ For $G=\operatorname{GL}_n$, we have $F=K$ and we can choose $P$ such that $U=U^{\operatorname{ab}}$. However, for general groups $G$, we immediately run into the problem that $U\ne U^{\operatorname{ab}}$. ## Heisenberg-type extensions   The good news is that for classical groups, we can always choose $P$ such that $U$ is the next best thing after abelian groups, namely, unipotent algebraic groups of nilpotency class $2$. **Theorem 2**. 
*(Lemma [7.2](#lem:unitary-1){reference-type="ref" reference="lem:unitary-1"}, Lemma [8.1](#lem:symplectic-1){reference-type="ref" reference="lem:symplectic-1"}, Lemma [9.1](#lem:orth-1){reference-type="ref" reference="lem:orth-1"}) Let ${^{L}\!G}$ be any of ${^{L}\!U}_n, \operatorname{GSp}_{2n}$ or ${\operatorname{GSO}}_{n}$.* *Then each mod $p$ Langlands parameter $\bar\rho:\operatorname{Gal}_F \to {^{L}\!G}(\bar{\mathbb{F}}_p)$ is either elliptic, or factors through a maximal proper parabolic ${^{L}\!P}$ such that $\bar\rho$ is a Heisenberg-type extension (see Definition [5.1](#def:H-type){reference-type="ref" reference="def:H-type"}) of some $\bar\rho_M:\operatorname{Gal}_F\to {^{L}\!M}(\bar{\mathbb{F}}_p)$ where ${^{L}\!M}$ is the Levi factor of ${^{L}\!P}$.* A *Heisenberg-type extension* is, roughly speaking, an extension which has the least amount of "non-linearity". More precisely, if $\bar\rho_P:\operatorname{Gal}_F\to {^{L}\!P}(\bar{\mathbb{F}}_p)$ is a Heisenberg-type extension of $\bar\rho_M$, then $[U,U]$ is an abelian group, and $$\dim_{\bar{\mathbb{F}}_p}H^2(\operatorname{Gal}_K, [U, U](\bar{\mathbb{F}}_p))\le 1.$$ The key technical result of this paper is Theorem [5.5](#thm:Galois){reference-type="ref" reference="thm:Galois"}. Roughly speaking, for Heisenberg-type extensions, the non-linear part of the obstruction to lifting is so mild that it can be killed through manipulating cup products. To make this idea work, we need a resolution of Galois cohomology supported on degrees $[0,2]$, which is compatible with cup products on the cochain level. In this paper, the resolutions used are the Herr complexes. Although Herr complexes are infinite-dimensional resolutions, we can truncate them to a finite system while still retaining the structure of cup products. The Heisenberg equations are defined through cup products on the truncated Herr cochain groups. The main technical work is done in Sections 1-5.
In Sections 6-9, we study unitary groups, symplectic groups and orthogonal groups on a case-by-case basis and prove the following. **Theorem 3**. *Let ${^{L}\!G}_n$ be any of ${^{L}\!U}_n, \operatorname{Sp}_{2n}, \operatorname{SO}_n, \operatorname{GSp}_{2n}$ or ${\operatorname{GSO}}_{n}$, and assume $p\ne 2$.* *Assume for each $n$ and each maximal proper Levi ${^{L}\!M}$ of ${^{L}\!U}_n$, Property SSD holds for ${^{L}\!M}$.* *Then all $L$-parameters $\operatorname{Gal}_F\to {^{L}\!G}_n(\bar{\mathbb{F}}_p)$ admit a potentially crystalline lift of regular Hodge type.* We remark that $\operatorname{GSp}_{2n}$ and ${\operatorname{GSO}}_{2n}$ are the Langlands dual groups of the spin similitude groups, while $\operatorname{Sp}_{2n}$, $\operatorname{SO}_{2n}$ and $\operatorname{SO}_{2n+1}$ are the Langlands dual groups of $\operatorname{SO}_{2n+1}$, $\operatorname{SO}_{2n}$ and $\operatorname{Sp}_{2n}$, respectively. ## The Emerton-Gee stacks for unitary groups   We prove the following theorem. **Theorem 4**. *(Theorem [10.1](#thm:EGU){reference-type="ref" reference="thm:EGU"}) [\[thm:intro-EGU\]]{#thm:intro-EGU label="thm:intro-EGU"} Let $\bar \alpha:\operatorname{Gal}_K\to \operatorname{GL}_a(\bar{\mathbb{F}}_p)$ be an irreducible Galois representation.* *The locus of $\bar x\in \mathcal{X}_{F, {^{L}\!U}_n, {\operatorname{red}}}$ such that $\dim\operatorname{Hom}_{\operatorname{Gal}_K}(\bar \alpha, \bar x|_{\operatorname{Gal}_K})\ge r$ is of dimension at most $[F:\mathbb{Q}_p]\frac{n(n-1)}{2} - r^2+ \frac{r}{2}$.* Since Theorem [\[thm:intro-EGU\]](#thm:intro-EGU){reference-type="ref" reference="thm:intro-EGU"} is stronger than Property SSD, we have established the existence of de Rham lifts for $U_n$. **Theorem 5**.
*If $p\ne 2$, all $L$-parameters $\operatorname{Gal}_F\to {^{L}\!U}_n(\bar{\mathbb{F}}_p)$ admit a potentially crystalline lift of regular Hodge type.* *Proof.* Note that $H^2(\operatorname{Gal}_F, U^{\operatorname{ab}}(\bar{\mathbb{F}}_p))\cong H^2(\operatorname{Gal}_K, \bar\alpha\otimes \bar x|_{\operatorname{Gal}_K}^\vee)$ and $\lfloor -r^2+r/2 \rfloor \le -r$ for all $r\ge 1$. ◻ The proof of Theorem [\[thm:intro-EGU\]](#thm:intro-EGU){reference-type="ref" reference="thm:intro-EGU"} is much more involved than its $\operatorname{GL}_n$-analogue worked out in [@EG23]. The $\operatorname{GL}_n$-case only requires the computation of the rank of certain vector bundles, and is a completely linear problem. For $U_n$, we need to compute the relative dimension of certain quadratic cones. To prove the required bound for $U_n$, we need very precise control of the rank of cup products for extensions of the form $\begin{bmatrix} \bar\alpha(1)^{\oplus r} & * & * \\ & \bar\tau & * \\ & & \bar\alpha^{\oplus r} \end{bmatrix}$, and in order to do that, we need to relate the rank of cup products to the dimension of Grassmannian manifolds (with the key lemma being [\[lem:codim-lem\]](#lem:codim-lem){reference-type="ref" reference="lem:codim-lem"}). The inequality is extremely tight at many steps. After we have obtained an estimate for the rank of cup products, we still need to divide the question into multiple cases and perform min-max optimization on multi-variable polynomial functions in each of these cases. The method of proof we present in this paper also applies to orthogonal/symplectic/spin similitude groups. However, we do not treat these groups in this paper due to the complexity of the analysis involved. ## Final remarks   In the literature, the geometric Breuil-Mézard conjecture is often proved for sufficiently generic tame inertial types (for example, see [@LLHLM23]).
For $\operatorname{GL}_d$, if we only care about the generic situation, then the existence of de Rham lifts is straightforward. Let $\bar r = \begin{bmatrix} \bar r_1 & * &\dots & * \\ & \bar r_2 & \dots & *\\ & & \dots & *\\ & & & \bar r_m \end{bmatrix}$ be a Galois representation which is maximally non-split, meaning it factors through a unique minimal parabolic. If $\bar r_i(1) \ne \bar r_{i+1}$ for each $i$, then $\bar r$ admits a crystalline lift for trivial reasons. Indeed, put $\bar r^i := \begin{bmatrix} \bar r_i & * &\dots & * \\ & \bar r_{i+1} & \dots & *\\ & & \dots & *\\ & & & \bar r_m \end{bmatrix}$; once an arbitrary lift $r^i$ of $\bar r^i$ is chosen, we can construct a lift of $\bar r^{i-1}$ as an extension of $r^i$ by a lift of $\bar r_{i-1}$. However, for general groups such as the unitary groups, regardless of how generic the situation is, we do not have an easy way of constructing de Rham lifts. The reason is that "maximal non-splitness" is not very useful for general groups. Although it remains a strong constraint, it is not easy to utilize directly. Consider the symplectic similitude group situation for the sake of reusing notation. The argument in the previous paragraph breaks unless $\bar r_i(1)\ne \bar r_j$ for all $i, j$. Put $\bar\tau := \begin{bmatrix} \bar r_2 & \dots &* \\ & \dots & *\\ && \bar r_{m-1} \end{bmatrix}$ and thus $\bar\rho = \begin{bmatrix} \bar r_1 & \bar c_1 & \bar c_3 \\ & \bar\tau & \bar c_2\\ && \bar r_{m} \end{bmatrix}$. Suppose we have chosen a lift $(r_1, \tau, r_m)$ of $(\bar r_1, \bar \tau, \bar r_m)$. A lift $c_1$ of $\bar c_1$ uniquely determines a lift $c_2$ of $\bar c_2$ and vice versa. Let's choose a $c_1$ and, thus, a $c_2$. Now we run into the problem that a lift $c_3$ of $\bar c_3$ does not exist for all choices of $c_1$. For a lift $c_3$ to exist, we must have $c_1\cup c_2=0$, which is a non-linear condition.
Even if $c_1\cup c_2=0$, we can only ensure there exists a $c_3$ which makes $\begin{bmatrix} r_1 & c_1 & c_3 \\ & \tau & c_2\\ && r_{m} \end{bmatrix}$ a group homomorphism; there is no guarantee that $c_3$ lifts $\bar c_3$! The obstruction disappears if $\bar r_1(1)\ne \bar r_m$; but such restrictions will force the Serre weights to lie within very narrow strips of a chosen alcove. From this perspective, Theorem [Theorem 5](#thm:Ulift){reference-type="ref" reference="thm:Ulift"} is necessary even if we only aim to prove the Breuil-Mézard conjecture in the generic situation. # Heisenberg equations   Let $r, s, t\in \mathbb{Z}_+$ be integers. Let $\Lambda$ be a DVR with uniformizer $\varpi$. Let $d\in \operatorname{Mat}_{s\times t}(\Lambda)$ and $\Sigma_1,\dots,\Sigma_s\in \operatorname{Mat}_{r\times r}(\Lambda)$ be constant matrices. For ease of notation, for $x\in \operatorname{Mat}_{r\times 1}(\Lambda)$, write $$x^t \Sigma x := \begin{bmatrix} x^t \Sigma_1 x \\ \dots\\ x^t \Sigma_s x \end{bmatrix} \in \operatorname{Mat}_{s \times 1}(\Lambda).$$ Here $x^t$ denotes the transpose of $x$. We are interested in solving systems of equations in $(r+t)$ variables of the form $$x^t \Sigma x + d y = 0 \tag{$\dagger$}$$ where $x\in \Lambda^{\oplus r}$ and $y\in \Lambda^{\oplus t}$ are the $(r+t)$ variables. We will call ($\dagger$) the quadratic equation with coefficient matrix $(\Sigma,d)$. ## Lemma {#lem:Heisenberg-1} Let $\Lambda$ be a DVR with uniformizer $\varpi$. Let $M$ be a finite flat $\Lambda$-module. If $N\subset M$ is a submodule such that $M/N\cong \Lambda/\varpi^n$ ($n>0$), then there exists a $\Lambda$-basis $\{x_1, x_2, \dots, x_s\}$ of $M$ such that $N = \operatorname{span}(\varpi^n x_1, x_2, \dots, x_s)$. *Proof.* Let $\{e_1,\dots, e_s\}$ be a $\Lambda$-basis of $M$ and let $\{f_1,\dots,f_s\}$ be a $\Lambda$-basis of $N$. There exists a matrix $X\in \operatorname{GL}_{s}(\Lambda[1/\varpi])$ such that $(f_1|\dots|f_s) = X (e_1|\dots|e_s)$. 
By the theory of Smith normal form, $X = S D T$ where $S,T\in \operatorname{GL}_{s}(\Lambda)$ and $D$ is a diagonal matrix. We have $D = {\operatorname{Diag}}(\varpi^n,1,\dots,1)$. Set $(x_1|\dots|x_s):=T(e_1|\dots|e_s)$, and we are done. ◻ ## Definition {#def:Heisenberg-1} A quadratic equation with coefficient matrix $(\Sigma, d)$ is said to be *Heisenberg* if - (H1) $\operatorname{coker}d\cong \Lambda$ or $\Lambda/\varpi^n$; and - (H2) there exists $f\in \Lambda^{\oplus r}$ such that $f^t \Sigma f \ne 0$ mod $(\varpi, \operatorname{Im}(d))$. ## Theorem {#thm:Heisenberg-1} Let $(\Sigma, d)$ be a Heisenberg equation over $\Lambda$. If there exists a mod $\varpi$ solution $(\bar x, \bar y)\in (\Lambda/\varpi)^{\oplus r+t}$ to $(\Sigma, d)$ (that is, $\bar x^t \Sigma \bar x + d \bar y \in \varpi \Lambda^{\oplus s}$), then there exists an extension of DVRs $\Lambda\subset \Lambda'$ and a solution $(x, y)\in \Lambda^{\prime \oplus r+t}$ of $(\Sigma, d)$ lifting $(\bar x, \bar y)$. *Proof.* Write $\{e_1,\dots,e_s\}$ for the standard basis for $\Lambda^{\oplus s}$. By Lemma [2.1](#lem:Heisenberg-1){reference-type="ref" reference="lem:Heisenberg-1"}, we can assume $\operatorname{Im}(d)=\operatorname{span}(\varpi^n e_1,e_2,\dots,e_s)$ or $\operatorname{span}(e_2,\dots,e_s)$. Write $d= \begin{bmatrix} d_1 \\ \dots \\d_s \end{bmatrix}$. By Definition [2.2](#def:Heisenberg-1){reference-type="ref" reference="def:Heisenberg-1"}, there exists an element $f\in \Lambda^{\oplus r}$ such that $f^t\Sigma f \ne 0$ mod $(\varpi, \operatorname{Im}(d))$; equivalently, $f^t\Sigma_1 f \ne 0$ mod $\varpi$. Let $(x, y)$ be an arbitrary lift of $(\bar x, \bar y)$. Let $\Lambda'$ be the ring of integers of the algebraic closure of $\Lambda[1/\varpi]$. Let $\lambda\in \Lambda'$.
Consider $$\begin{aligned} (x + \lambda f)^t\Sigma_1(x + \lambda f) + d_1 y &= (f^t\Sigma_1 f) \lambda^2 + (x^t\Sigma_1 f + f^t\Sigma_1 x) \lambda + (x^t\Sigma_1 x + d_1y);\end{aligned}$$ note that the $\varpi$-adic valuation of the quadratic term is $0$ while the $\varpi$-adic valuation of the constant term is positive. By inspecting the Newton polygon, the quadratic equation above admits a solution $\lambda \in \Lambda'$ of positive $\varpi$-adic valuation. By replacing $x$ by $x+ \lambda f$, we can assume $$x^t \Sigma_1 x + d_1 y =0.$$ Equivalently, $x^t \Sigma x + d y \in \operatorname{span}(e_2,\dots,e_s)\subset \operatorname{Im}(d)$. By replacing $\Lambda$ by $\Lambda[\lambda]$, we may assume $(x,y)\in \Lambda^{\oplus r + t}$. In particular, there exists an element $z\in \Lambda^{\oplus t}$ such that $$x^t \Sigma x + d y = d z,$$ and it remains to show we can ensure $z=0$ mod $\varpi$. We do know $d z = 0$ mod $\varpi$. So $d z \in \operatorname{span}(\varpi e_2,\dots, \varpi e_s)$. Say $d z = \varpi u$; then $u\in \operatorname{span}(e_2,\dots, e_s)\subset \operatorname{Im}(d)$. Say $u=d v$. So $d z = \varpi d v = d \varpi v$. By replacing $z$ by $\varpi v$, we have $$x^t \Sigma x + d y = d \varpi v.$$ Finally, replacing $y$ by $(y-\varpi v)$, we are done. ◻ We will call affine varieties defined by Heisenberg equations *Heisenberg varieties*. Theorem [2.3](#thm:Heisenberg-1){reference-type="ref" reference="thm:Heisenberg-1"} says all $\Lambda/\varpi$-points of a Heisenberg variety admit a $\varpi$-adic thickening. # Extensions of $(\varphi, \Gamma)$-modules and non-abelian $(\varphi, \Gamma)$-cohomology   Fix a pinned split reductive group $(\widehat{G}, \widehat{B}, \widehat{T}, \{Y_\alpha\})$ over $\mathbb{Z}$. Fix a parabolic $P\subset \widehat{G}$ containing $\widehat{B}$ with Levi subgroup $\widehat{M}$ and unipotent radical $U$. Let $K$ be a $p$-adic field. Let $A$ be a $\mathbb{Z}_p$-algebra.
The ring $\mathbb{A}_{K, A}$ is defined as in [@L23B Definition 4.2.8]. See [@L23B Section 4.2] for the definition of the procyclic group $H_K$. Fix a topological generator $\gamma$ of $H_K$. Note that $\mathbb{A}_{K, A}$ admits a Frobenius action $\varphi$ which commutes with $\gamma$. ## Framed parabolic $(\varphi, \Gamma)$-modules   A *framed $(\varphi, \gamma)$-module* with $P$-structure and $A$-coefficients is a pair of matrices $[\phi], [\gamma]\in P(\mathbb{A}_{K, A})$, satisfying $[\phi]\varphi([\gamma])=[\gamma]\gamma([\phi])$. A *framed $(\varphi, \Gamma)$-module* with $P$-structure and $A$-coefficients is a framed $(\varphi, \gamma)$-module $([\phi], [\gamma])$ with $P$-structure and $A$-coefficients such that there exists a closed algebraic group embedding $P\hookrightarrow \operatorname{GL}_d = \operatorname{GL}(V)$ and $1-[\gamma]$ induces a topologically nilpotent $\gamma$-semilinear endomorphism of $V(\mathbb{A}_{K, A})$. To make it concrete, if $\{e_1,\dots,e_d\}$ is the standard basis of $V$, then $[\gamma]$ sends $\alpha e_i$ to $\gamma(\alpha)[\gamma]e_i$. ## Levi factor of $(\varphi, \gamma)$-modules Let $([\phi], [\gamma])$ be a framed $(\varphi, \gamma)$-module with $P$-structure and $A$-coefficients. Write $([\phi]_{\widehat{M}}, [\gamma]_{\widehat{M}})$ for its image under the projection $P\to \widehat{M}$. Note that $([\phi]_{\widehat{M}}, [\gamma]_{\widehat{M}})$ is a framed $(\varphi, \gamma)$-module with $\widehat{M}$-structure. ## Lemma {#lem:ext-1} Let $([\phi], [\gamma])$ be a framed $(\varphi, \gamma)$-module with $P$-structure and $A$-coefficients. Then $([\phi], [\gamma])$ is a framed $(\varphi, \Gamma)$-module with $P$-structure and $A$-coefficients if and only if $([\phi]_{\widehat{M}}, [\gamma]_{\widehat{M}})$ is a framed $(\varphi, \Gamma)$-module with $\widehat{M}$-structure and $A$-coefficients. *Proof.* Note that $P = U \rtimes \widehat{M}$ and $\widehat{M}$ is a subgroup of $P$.
We will regard both $P$ and $\widehat{M}$ as subgroups of $\operatorname{GL}_d\subset \operatorname{Mat}_{d\times d}$ by fixing an embedding $P\hookrightarrow \operatorname{GL}_d$. Write $[u]:=[\gamma]-[\gamma]_{\widehat{M}}$. Note that the Jordan decompositions of $[\gamma]$ and $[\gamma]_{\widehat{M}}$ have the same semisimple part; write $[\gamma]=g_s g_u$ and $[\gamma]_{\widehat{M}}=g_s g_u'$ for the Jordan decompositions, where both $g_u$ and $g_u'$ lie in the unipotent radical of a Borel of $P$ (if we replace $\widehat{M}$ by one of its conjugates in $P$, then $g_u$ and $g_u'$ lie in the unipotent radical of the same Borel of $P$). By the Lie-Kolchin theorem, $(g_u-g_u')$ is nilpotent. Since $g_s$ commutes with $g_u$ and $g_u'$, $[u]$ is nilpotent. We have $(1-[\gamma]_{\widehat{M}}) = (1-[\gamma] + [u])$. Since $[u]$ is nilpotent, $(1-[\gamma]_{\widehat{M}})$ is topologically nilpotent if and only if $(1-[\gamma])$ is topologically nilpotent. ◻ ## Extensions of $(\varphi, \Gamma)$-modules   In this paragraph, we classify all framed $(\varphi, \Gamma)$-modules with $P$-structure and $A$-coefficients whose Levi factor is equal to a fixed $(\varphi, \Gamma)$-module with $\widehat{M}$-structure $([\phi]_{\widehat{M}}, [\gamma]_{\widehat{M}})$. For ease of notation, write $f =[\phi]_{\widehat{M}}$ and $g = [\gamma]_{\widehat{M}}$. We denote by $$H^1_{{\operatorname{Herr}}}(f, g)$$ the set of equivalence classes of all extensions of $(f, g)$ to a framed $(\varphi, \Gamma)$-module with $P$-structure. Let $u_f, u_g\in U(\mathbb{A}_{K, A})$. Set $[\phi]=u_f f$ and $[\gamma]=u_g g$. Note that $([\phi], [\gamma])$ is a $(\varphi, \Gamma)$-module if and only if $$u_f {\operatorname{Int}}_g(\gamma(u_f^{-1})) = u_g {\operatorname{Int}}_f(\varphi(u_g^{-1}))$$ by Lemma [3.3](#lem:ext-1){reference-type="ref" reference="lem:ext-1"}. Here ${\operatorname{Int}}_{?}(*) = ? * ?^{-1}$.
## Assumption {#ass:ext} We assume $U$ is a unipotent algebraic group of nilpotency class $2$ and $p\ne 2$. Assume there exists an embedding $\iota:U\hookrightarrow \operatorname{GL}_N$ such that $(\iota(x)-1)^2=0$ for all $x\in U$. In particular, there is a well-defined truncated log map $$\begin{aligned} \log:U \to&\operatorname{Lie}U \\ u \mapsto& (u-1) - \frac{(u-1)^2}{2}\end{aligned}$$ whose inverse is the truncated exponential map $$\exp: \operatorname{Lie}U\to U.$$ Here we embed $U$ into $\operatorname{Mat}_{d\times d}$ in order to define addition and subtraction (the embedding is not important). Write $x = \log(u_f)$ and $y = \log (u_g)$. ## Lemma {#lemma} $([\phi], [\gamma])$ is a $(\varphi, \Gamma)$-module with $P$-structure extending $([\phi]_{\widehat{M}}, [\gamma]_{\widehat{M}})$ if and only if $$(1 - {\operatorname{Int}}_g\circ \gamma)(x) - \frac{1}{2}[x, {\operatorname{Int}}_g \gamma(x)] = (1 - {\operatorname{Int}}_f \circ\varphi)(y) - \frac{1}{2}[y, {\operatorname{Int}}_f \varphi(y)].$$ *Proof.* It follows from the Baker-Campbell-Hausdorff formula. ◻ Recall that a nilpotent Lie algebra of nilpotency class $2$ is isomorphic to its associated graded Lie algebra (with respect to either the lower or the upper central filtration). We fix such an isomorphism $\operatorname{Lie}U \cong \operatorname{gr}^\bullet \operatorname{Lie}U = \operatorname{gr}^1\operatorname{Lie}U \oplus \operatorname{gr}^0\operatorname{Lie}U$ where $\operatorname{gr}^0\operatorname{Lie}U$ is the derived subalgebra of $\operatorname{Lie}U$. Note that $\operatorname{gr}^0\operatorname{Lie}U$ is contained in the center of $\operatorname{Lie}U$. In particular, if $x\in \operatorname{Lie}U$, we can write $x = x_0+ x_1$ where $x_i\in \operatorname{gr}^i\operatorname{Lie}U$.
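The truncated log/exp maps and the class-$2$ Baker-Campbell-Hausdorff identity behind the lemma above can be sanity-checked in a toy model. The following sketch (assuming SymPy is available) uses $3\times 3$ upper unitriangular matrices as an illustration only — this is not the embedding $\iota$ of Assumption 3.5, but here $(u-1)^3=0$, so the truncated series are already exact:

```python
from sympy import Matrix, eye

def tlog(u):
    # truncated log: (u-1) - (u-1)^2/2; exact here since (u-1)^3 = 0
    n = u - eye(3)
    return n - n*n/2

def texp(x):
    # truncated exp: 1 + x + x^2/2; inverse of tlog on this toy model
    return eye(3) + x + x*x/2

u = Matrix([[1, 2, 5], [0, 1, 3], [0, 0, 1]])
v = Matrix([[1, 1, 4], [0, 1, 7], [0, 0, 1]])
x, y = tlog(u), tlog(v)

# tlog and texp are mutually inverse
assert texp(x) == u and texp(y) == v
# Baker-Campbell-Hausdorff in nilpotency class 2:
# log(uv) = log(u) + log(v) + (1/2)[log(u), log(v)]
assert tlog(u*v) == x + y + (x*y - y*x)/2
```

Note that the commutator term lands in the corner entry, i.e. in the center of the Lie algebra, matching the decomposition $x = x_0 + x_1$ above.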
## Lemma {#lem:Herr-2} $([\phi], [\gamma])$ is a $(\varphi, \Gamma)$-module extending $([\phi]_{\widehat{M}}, [\gamma]_{\widehat{M}})$ if and only if $$\begin{cases} (1 - {\operatorname{Int}}_g\circ \gamma)(x_1) - (1 - {\operatorname{Int}}_f \circ\varphi)(y_1) = 0 \\ (1 - {\operatorname{Int}}_g\circ \gamma)(x_0) - (1 - {\operatorname{Int}}_f \circ\varphi)(y_0) = \frac{1}{2}[x_1, {\operatorname{Int}}_g \gamma(x_1)] - \frac{1}{2}[y_1, {\operatorname{Int}}_f \varphi(y_1)]. \end{cases}$$ *Proof.* Arrange terms according to their degree in the graded Lie algebra. ◻ ## Herr complexes {#par:Herr} Let $V$ be a vector bundle over $\operatorname{Spec}A$, and let $(s, t)$ be a framed $(\varphi, \Gamma)$-module with $\operatorname{GL}(V)$-structure and $A$-coefficients. Then the Herr complex associated to $(s, t)$ is by definition the following $$C^\bullet_{{\operatorname{Herr}}}(s, t):=[V(\mathbb{A}_{K, A}) \xrightarrow{(s\circ \varphi-1, t\circ \gamma-1)} V(\mathbb{A}_{K, A}) \oplus V(\mathbb{A}_{K, A}) \xrightarrow{(t\circ \gamma-1, 1-s\circ \varphi)^t} V(\mathbb{A}_{K, A})]$$ Write $Z^\bullet_{{\operatorname{Herr}}}(s, t)$, $B^\bullet_{{\operatorname{Herr}}}(s, t)$ and $H^\bullet_{{\operatorname{Herr}}}(s, t)$ for the cocycle groups, the coboundary groups and the cohomology groups of $C^\bullet_{{\operatorname{Herr}}}(s, t)$. The reader can easily check that our definition is consistent with that of [@EG23 Section 5.1]. We will denote by $d$ the differential operators in $C^\bullet_{{\operatorname{Herr}}}(f, g)$. Note that $\widehat{M}$ acts on $\operatorname{gr}^1\operatorname{Lie}(U)$ and $\operatorname{gr}^0\operatorname{Lie}(U)$ by conjugation. Write $$\begin{aligned} {\operatorname{Int}}^0: \widehat{M} \to \operatorname{GL}(\operatorname{gr}^0\operatorname{Lie}(U)),\\ {\operatorname{Int}}^1: \widehat{M} \to \operatorname{GL}(\operatorname{gr}^1\operatorname{Lie}(U))\end{aligned}$$ for the conjugation actions.
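The shape of the Herr complex can be modeled by plain finite-dimensional linear algebra: replacing the operators $s\circ\varphi-1$ and $t\circ\gamma-1$ by two commuting matrices $A$ and $B$ gives a three-term complex with $d^1\circ d^0=0$, whose cohomology is computed by rank-nullity. A minimal sketch with hypothetical matrices (assuming SymPy; this illustrates the shape of the complex only, not actual $(\varphi,\Gamma)$-module data):

```python
from sympy import Matrix, zeros

# Commuting "operators": A plays the role of s∘φ - 1, B of t∘γ - 1, on V = Q^3.
A = Matrix([[0, 1, 0], [0, 0, 1], [0, 0, 0]])
B = A * A  # a polynomial in A, hence commutes with A

# d0 : V -> V ⊕ V,  v ↦ (A v, B v)
d0 = Matrix.vstack(A, B)
# d1 : V ⊕ V -> V,  (a, b) ↦ B a - A b
d1 = Matrix.hstack(B, -A)

# d1 ∘ d0 = BA - AB = 0 because A and B commute
assert d1 * d0 == zeros(3, 3)

# cohomology dimensions via rank-nullity
h0 = 3 - d0.rank()                 # dim ker d0
h1 = (6 - d1.rank()) - d0.rank()   # dim ker d1 - dim im d0
h2 = 3 - d1.rank()                 # dim coker d1
assert (h0, h1, h2) == (1, 2, 1)
```
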
## Cup products {#par:cup} Define a map $$\begin{aligned} Q: C^1_{{\operatorname{Herr}}}({\operatorname{Int}}^1(f, g)) & \to C^2_{{\operatorname{Herr}}}({\operatorname{Int}}^0(f, g))\\ (x_1,y_1) &\mapsto \frac{1}{2}[x_1, {\operatorname{Int}}_g^1 \gamma(x_1)] - \frac{1}{2}[y_1, {\operatorname{Int}}_f^1 \varphi(y_1)]\end{aligned}$$ and a symmetric bilinear pairing $$\begin{aligned} \cup: C^1_{{\operatorname{Herr}}}({\operatorname{Int}}^1(f, g)) \times C^1_{{\operatorname{Herr}}}({\operatorname{Int}}^1(f, g)) &\to C^2_{{\operatorname{Herr}}}({\operatorname{Int}}^0(f, g)) \\ ((x_1, y_1), (x_1',y_1')) & \mapsto \frac{1}{2}(Q(x_1+x_1', y_1+y_1') - Q(x_1,y_1)-Q(x_1',y_1')).\end{aligned}$$ ## Proposition {#prop:Herr-1} Define $$\begin{aligned} Z_{{\operatorname{Herr}}}^1(f, g):= \{ (x_0+x_1,y_0+y_1)\in &C^1_{{\operatorname{Herr}}}({\operatorname{Int}}^0(f, g)\oplus {\operatorname{Int}}^1(f,g)) |\\ & \begin{cases} (x_0, y_0)\in C^1_{{\operatorname{Herr}}}({\operatorname{Int}}^0(f, g))\\ (x_1, y_1)\in C^1_{{\operatorname{Herr}}}({\operatorname{Int}}^1(f, g))\\ d(x_1,y_1) = 0\\ d(x_0, y_0) + (x_1,y_1)\cup (x_1,y_1)=0 \end{cases} \}.\end{aligned}$$ There exists a surjective map $$\begin{aligned} Z_{{\operatorname{Herr}}}^1(f,g) &\to H^1_{{\operatorname{Herr}}}(f, g) \\ (x,y)&\mapsto (\exp(x)f, \exp(y) g).\end{aligned}$$ *Proof.* It is a reformulation of Lemma [3.7](#lem:Herr-2){reference-type="ref" reference="lem:Herr-2"}. ◻ ## Lemma {#lem:Herr-3} The cup product induces a well-defined symmetric bilinear pairing $$\cup_H: H^1_{{\operatorname{Herr}}}({\operatorname{Int}}^1(f, g)) \times H^1_{{\operatorname{Herr}}}({\operatorname{Int}}^1(f, g)) \to H^2_{{\operatorname{Herr}}}({\operatorname{Int}}^0(f, g)).$$ *Proof.* The proof is formally similar to [@L21 Lemma 2.3.3.2]. ◻ ## Non-split groups {#rem:non-split} We remark that all results in this section hold for non-split groups.
More precisely, let $F\subset K$ be a $p$-adic field and fix an action of $\Delta:=\operatorname{Gal}(K/F)$ on the pinned group $(\widehat{G}, \widehat{B}, \widehat{T}, \{Y_\alpha\})$ and assume both $\widehat{M}$ and $P$ are $\Delta$-stable. Set ${^{L}\!G} := \widehat{G} \rtimes \Delta$, ${^{L}\!P}:= P\rtimes \Delta$, and ${^{L}\!M} := \widehat{M} \rtimes \Delta$. Denote by $\operatorname{GL}^!(\operatorname{Lie}U)$ the (parabolic) subgroup of the general linear group $\operatorname{GL}(\operatorname{Lie}U)$ that preserves the lower central filtration of $\operatorname{Lie}U$. Using the truncated log/exp map, we have a group scheme homomorphism ${^{L}\!M} \to \operatorname{GL}^!(\operatorname{Lie}U)$, which extends to a group scheme homomorphism $${^{L}\!P} = U \rtimes {^{L}\!M} \to U \rtimes \operatorname{GL}^!(\operatorname{Lie}U).$$ Name the group $U \rtimes \operatorname{GL}^!(\operatorname{Lie}U)$ as $\widetilde{P}$, and name the homomorphism ${^{L}\!P} \to \widetilde{P}$ as $\Xi$. ## Definition {#definition} In the non-split setting, a framed $(\varphi, \Gamma)$-module with ${^{L}\!P}$-structure is a $(\varphi, \Gamma)$-module $(F, \phi_F, \gamma_F)$ with ${^{L}\!P}$-structure, and a framed $(\varphi, \Gamma)$-module $([\phi], [\gamma])$ with $\widetilde{P}$-structure, together with an identification $\Xi_*(F, \phi_F, \gamma_F)\cong ([\phi], [\gamma])$. The reason we make the definition above is that $(\varphi, \Gamma)$-modules with $H$-structure are not represented by a pair of matrices if $H$ is a *disconnected* group. So by choosing the map ${^{L}\!P}\to \widetilde{P}$, we are able to work with the connected group $\widetilde{P}$. Since the whole purpose of this section is to understand extensions of $(\varphi, \Gamma)$-modules and we fix the ${^{L}\!M}$-semisimplification of a framed $(\varphi, \Gamma)$-module with ${^{L}\!P}$-structure, the reader can easily see all results carry over to the non-split case by using $\widetilde{P}$ in place of $P$.
# Cohomologically Heisenberg lifting problems   We keep the notation from the previous section. Let $\Lambda\subset \bar\mathbb{Z}_p$ be a DVR. Let $([\bar \phi], [\bar \gamma])$ be a framed $(\varphi, \Gamma)$-module with $P$-structure and $\bar{\mathbb{F}}_p$-coefficients. Write $(\bar f, \bar g)$ for $([\bar \phi]_{\widehat{M}}, [\bar \gamma]_{\widehat{M}})$. Fix a framed $(\varphi, \Gamma)$-module $(f, g)$ with $\widehat{M}$-structure and $\Lambda$-coefficients lifting $(\bar f, \bar g)$. By Proposition [3.10](#prop:Herr-1){reference-type="ref" reference="prop:Herr-1"}, there exists an element $(\bar x, \bar y)\in Z^1_{{\operatorname{Herr}}}(\bar f, \bar g)$ representing $([\bar \phi], [\bar \gamma])$. We can write $\bar x=\bar x_0 + \bar x_1$ and $\bar y =\bar y_0+\bar y_1$ such that $(\bar x_i, \bar y_i)\in C^1_{{\operatorname{Herr}}}({\operatorname{Int}}^i(\bar f, \bar g))$. ## Definition {#def:coh} A *cohomologically Heisenberg lifting problem* is a tuple $(f, g, \bar x, \bar y, H)$ consisting of - a framed $(\varphi, \Gamma)$-module with $\widehat{M}$-structure and $\Lambda$-coefficients $(f, g)$; - an element $(\bar x, \bar y)\in Z^1_{{\operatorname{Herr}}}(\bar f, \bar g)$, and - a $\Lambda$-submodule $H\subset H^1_{{\operatorname{Herr}}}({\operatorname{Int}}^1(f, g))$, such that - (HL1) $(\bar x_1, \bar y_1)$ lies in the image of $H$ in $H^1_{{\operatorname{Herr}}}({\operatorname{Int}}^1(\bar f, \bar g))$, - (HL2) the pairing $\cup_H|_H: H \times H \to H^2_{{\operatorname{Herr}}}({\operatorname{Int}}^0(f, g))$ is surjective, - (HL3) $H^2_{{\operatorname{Herr}}}({\operatorname{Int}}^0(\bar f, \bar g))\cong \Lambda/\varpi$. A *solution* to the lifting problem $(f, g, \bar x, \bar y, H)$ is an element $(x, y)\in Z^1_{{\operatorname{Herr}}}(f, g)$ lifting $(\bar x, \bar y)$ such that the image of $(x_1, y_1)$ in $H^1_{{\operatorname{Herr}}}({\operatorname{Int}}^1(f, g))$ is contained in $H$.
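Solving such a lifting problem ultimately reduces to the Heisenberg equations of Section 2, whose key analytic step lifts a mod $\varpi$ solution along a quadratic equation in $\lambda$. A numerical sketch of that step over $\mathbb{Z}_p$ (hypothetical small coefficients; we assume the linear coefficient is a unit, the case where the Newton polygon already produces a root in $\mathbb{Z}_p$ and no ramified extension $\Lambda'$ is needed):

```python
def lift_quadratic_root(a, b, c, p, prec):
    """For f(λ) = a·λ² + b·λ + c with p | c and p ∤ b, lift the root
    λ ≡ 0 (mod p) to precision p^prec by Newton iteration (Hensel's
    lemma applies: f(0) ≡ 0 mod p and f'(0) = b is a unit mod p)."""
    assert c % p == 0 and b % p != 0
    f = lambda t: a * t * t + b * t + c
    df = lambda t: 2 * a * t + b
    lam, mod = 0, p
    while mod < p ** prec:
        mod = min(mod * mod, p ** prec)
        # Newton step: λ ← λ - f(λ)/f'(λ), doubling the precision
        lam = (lam - f(lam) * pow(df(lam), -1, mod)) % mod
    return lam

p, prec = 5, 20
lam = lift_quadratic_root(3, 2, 35, p, prec)
assert lam % p == 0                                # lifts λ ≡ 0 mod p
assert (3*lam*lam + 2*lam + 35) % p**prec == 0     # root to precision p^20
```

When the linear coefficient also vanishes mod $p$, the Newton polygon produces a root of fractional valuation, which is exactly why Theorem 2.3 passes to an extension $\Lambda'$.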
At first sight, a cohomologically Heisenberg lifting problem defines an infinite system of quadratic polynomial equations. In the following theorem, we show that we may truncate the infinite system to a finite system and solve cohomologically Heisenberg lifting problems. ## Theorem {#thm:coh} Each cohomologically Heisenberg lifting problem is solvable after replacing $\Lambda$ by an extension of DVRs $\Lambda'\subset \bar\mathbb{Z}_p$. *Proof.* Before we start, we remark that $C^\bullet_{{\operatorname{Herr}}}({\operatorname{Int}}^i(f, g))\otimes_{\Lambda}\Lambda/\varpi=C^\bullet_{{\operatorname{Herr}}}({\operatorname{Int}}^i(\bar f, \bar g))$ while it is *not* generally true that $H^\bullet_{{\operatorname{Herr}}}({\operatorname{Int}}^i(f, g))\otimes_{\Lambda}\Lambda/\varpi=H^\bullet_{{\operatorname{Herr}}}({\operatorname{Int}}^i(\bar f, \bar g))$. Write $Z_H$ for the preimage of $H$ in $Z^1_{{\operatorname{Herr}}}({\operatorname{Int}}^1(f, g))$. Since $Z^1_{{\operatorname{Herr}}}({\operatorname{Int}}^1(f, g))$ is $\Lambda$-torsion-free, so is $Z_H$. Let $X\subset Z_H$ be a finite $\Lambda$-submodule which maps surjectively onto $H$. Such an $X$ exists because $H^1_{{\operatorname{Herr}}}({\operatorname{Int}}^1(f, g))$ is a finite $\Lambda$-module ([@EG23 Theorem 5.1.22]). Since $\Lambda$ is a DVR, $X$ is finite free over $\Lambda$. Let $W\subset C^2_{{\operatorname{Herr}}}({\operatorname{Int}}^0(f, g))$ be a finite free $\Lambda$-submodule containing $X\cup X$. By (HL2), $W\to H^2_{{\operatorname{Herr}}}({\operatorname{Int}}^0(f, g))$ is surjective. Set $B_W:=B^2_{{\operatorname{Herr}}}({\operatorname{Int}}^0(f, g))\cap W$; then $H^2_{{\operatorname{Herr}}}({\operatorname{Int}}^0(f, g)) = W/B_W$. Again, $B_W$ is a finite free $\Lambda$-module. Finally, let $Y\subset C^1_{{\operatorname{Herr}}}({\operatorname{Int}}^0(f, g))$ be a finite free $\Lambda$-submodule which maps surjectively onto $B_W$ under $d$ and contains at least one lift of $(\bar x_0, \bar y_0)$.
Now consider the system of equations $$\mathfrak{x}\cup \mathfrak{x}+ d \mathfrak{y}= 0 \in W \tag{$\dagger$}$$ where $\mathfrak{x}\in X$ and $\mathfrak{y}\in Y$ are the variables and $W$ is the value space. We check that ($\dagger$) is a Heisenberg equation in the sense of Definition [2.2](#def:Heisenberg-1){reference-type="ref" reference="def:Heisenberg-1"}. (H1) follows from (HL3) and the Nakayama lemma, while (H2) follows from (HL2). The equation ($\dagger$) admits a mod $\varpi$ solution $\bar\mathfrak{x},\bar\mathfrak{y}$ defined by $(\bar x, \bar y)\in Z^1_{{\operatorname{Herr}}}(\bar f, \bar g)$. By Theorem [2.3](#thm:Heisenberg-1){reference-type="ref" reference="thm:Heisenberg-1"}, ($\dagger$) admits a solution lifting $\bar\mathfrak{x},\bar\mathfrak{y}$ after extending the coefficient ring $\Lambda$. The solution to the equation ($\dagger$) is also a solution to the lifting problem. ◻ # Applications to Galois cohomology   Let $F/\mathbb{Q}_p$ be a $p$-adic field, and let $G$ be a tamely ramified quasi-split reductive group over $F$ which splits over $K$. Write $\Delta:=\operatorname{Gal}(K/F)$. There exists a $\Delta$-stable pinning $(G, B, T, \{X_\alpha\})$ of $G$; fix one, and let $(\widehat{G}, \widehat{B}, \widehat{T}, \{Y_\alpha\})$ be the dual pinned group. Let $P\subset \widehat{G}$ be a $\Delta$-stable parabolic of $\widehat{G}$ with $\Delta$-stable Levi subgroup $\widehat{M}$ and unipotent radical $U$. Denote by ${^{L}\!P}$ the semi-direct product $U\rtimes {^{L}\!M}$ where ${^{L}\!M} = \widehat{M} \rtimes \Delta$. In the terminology of [@L21], ${^{L}\!P}$ is a big pseudo-parabolic of ${^{L}\!G} = \widehat{G}\rtimes \Delta$ and all big pseudo-parabolics of ${^{L}\!G}$ are of the form ${^{L}\!P}$ (see [@L21 Section 3]). We enforce Assumption [3.5](#ass:ext){reference-type="ref" reference="ass:ext"} throughout this section.
Note that ${^{L}\!M}$ acts on $\operatorname{Lie}U = \operatorname{gr}^0\operatorname{Lie}U \oplus \operatorname{gr}^1\operatorname{Lie}U$ by adjoint, and we denote the adjoint actions by ${\operatorname{Int}}^i$ as in Paragraph [3.8](#par:Herr){reference-type="ref" reference="par:Herr"}. ## Definition {#def:H-type} Let $\bar\rho_P: \operatorname{Gal}_F\to {^{L}\!P}(\bar{\mathbb{F}}_p)$ be a Langlands parameter with Levi factor $\bar\rho_M:\operatorname{Gal}_F\to {^{L}\!M}(\bar{\mathbb{F}}_p)$. We say $\bar\rho_P$ is a Heisenberg-type extension of $\bar\rho_M$ if $$\dim_{\bar{\mathbb{F}}_p}H^2(\operatorname{Gal}_K, \operatorname{gr}^0\operatorname{Lie}U(\bar{\mathbb{F}}_p)) \le 1.$$ Here the $\operatorname{Gal}_F$-action on $\operatorname{gr}^0\operatorname{Lie}U(\bar{\mathbb{F}}_p)$ is obtained from composing $\bar\rho_M$ and ${\operatorname{Int}}^0:{^{L}\!M} \to \operatorname{GL}(\operatorname{gr}^0\operatorname{Lie}U)$. ## Cup products on Galois cohomology Let $A$ be either $\bar{\mathbb{F}}_p$ or $\bar\mathbb{Z}_p$. If $\rho_M:\operatorname{Gal}_F\to {^{L}\!M}(A)$ is an $L$-parameter, we can equip $\operatorname{Lie}U(A)$ with $\operatorname{Gal}_F$-action via $\rho_M$. Note that there exists a symmetric bilinear pairing $$H^1(\operatorname{Gal}_F, \operatorname{gr}^1\operatorname{Lie}U(A)) \times H^1(\operatorname{Gal}_F, \operatorname{gr}^1\operatorname{Lie}U(A)) \to H^2(\operatorname{Gal}_F, \operatorname{gr}^0\operatorname{Lie}U(A)),$$ which is defined in [@L21 Section 3.2]. Alternatively, we can transport the symmetric cup product on $(\varphi, \Gamma)$-cohomology defined in Definition [3.9](#par:cup){reference-type="ref" reference="par:cup"} and Lemma [3.11](#lem:Herr-3){reference-type="ref" reference="lem:Herr-3"}, and later generalized in [3.12](#rem:non-split){reference-type="ref" reference="rem:non-split"} to Galois cohomology. 
## Partial extensions and partial lifts A *partial extension of $\rho_M$* is a continuous group homomorphism $\rho':\operatorname{Gal}_F\to \frac{{^{L}\!P}}{[U,U]}(\bar\mathbb{Z}_p)$ extending $\rho_M:\operatorname{Gal}_F \to {^{L}\!M}(\bar\mathbb{Z}_p) = \frac{{^{L}\!P}}{U}(\bar\mathbb{Z}_p)$. Here $[U,U]$ is the derived subgroup of $U$. The set of equivalence classes of partial extensions of $\rho_M$ is in natural bijection with $H^1(\operatorname{Gal}_F, \frac{U}{[U,U]}(\bar\mathbb{Z}_p))=H^1(\operatorname{Gal}_F, \operatorname{gr}^1\operatorname{Lie}U(\bar\mathbb{Z}_p))$. Let $\bar\rho_P:\operatorname{Gal}_F\to {^{L}\!P}(\bar{\mathbb{F}}_p)$ be an $L$-parameter. A *partial lift* of $\bar\rho_P$ is a group homomorphism $\rho':\operatorname{Gal}_F\to \frac{{^{L}\!P}}{[U,U]}(\bar\mathbb{Z}_p)$ which lifts $\bar\rho_P$ mod $[U,U]$. ## Lemma {#lem:partial} A partial extension $\rho':\operatorname{Gal}_F\to \frac{{^{L}\!P}}{[U,U]}(A)$ of $\rho_{M, A}$ extends to a full extension $\rho:\operatorname{Gal}_F\to{^{L}\!P}(A)$ if and only if $c \cup c = 0$, where $c\in H^1(\operatorname{Gal}_F, \operatorname{gr}^1\operatorname{Lie}U(A))$ is the cohomology class corresponding to $\rho'$. *Proof.* This follows immediately from Proposition [3.10](#prop:Herr-1){reference-type="ref" reference="prop:Herr-1"}. ◻ ## Theorem {#thm:Galois} Assume $p\ne 2$. Let $\bar\rho_P:\operatorname{Gal}_F\to {^{L}\!P}(\bar{\mathbb{F}}_p)$ be an extension of $\bar\rho_M$.
Assume $\bar\rho_M$ admits a lift $\rho_M:\operatorname{Gal}_F\to {^{L}\!M}(\bar\mathbb{Z}_p)$ such that - $\bar\rho_P$ is a Heisenberg-type extension of $\bar\rho_M$, - $\bar\rho_P|_{\operatorname{Gal}_K}$ admits a partial lift which is a partial extension of $\rho_M|_{\operatorname{Gal}_K}$, and - the pairing $$H^1(\operatorname{Gal}_F, \operatorname{gr}^1\operatorname{Lie}U(\bar\mathbb{Z}_p))\underset{\bar\mathbb{Z}_p}{\otimes}\bar{\mathbb{F}}_p \times H^1(\operatorname{Gal}_F, \operatorname{gr}^1\operatorname{Lie}U(\bar\mathbb{Z}_p))\underset{\bar\mathbb{Z}_p}{\otimes}\bar{\mathbb{F}}_p \to H^2(\operatorname{Gal}_F, \operatorname{gr}^0\operatorname{Lie}U(\bar{\mathbb{F}}_p))$$ is non-trivial unless $H^2(\operatorname{Gal}_F, \operatorname{gr}^0\operatorname{Lie}U(\bar{\mathbb{F}}_p))=0$. Then $\bar\rho_P$ admits a lift $\rho_P:\operatorname{Gal}_F\to {^{L}\!P}(\bar\mathbb{Z}_p)$ with Levi factor $\rho_M$. *Proof.* Let $A$ be either $\bar{\mathbb{F}}_p$ or $\bar\mathbb{Z}_p$, and let $\rho_{M, A}$ be either $\bar\rho_M$ or $\rho_M$, respectively. The set of equivalence classes of $L$-parameters $\operatorname{Gal}_F\to {^{L}\!P}/[U,U](A)$ extending $\rho_{M, A}$ is in natural bijection with the $A$-module $H^1(\operatorname{Gal}_F, \operatorname{gr}^1\operatorname{Lie}U(A))$. Since $K/F$ is assumed to have prime-to-$p$ degree, we have $$H^1(\operatorname{Gal}_F, \operatorname{gr}^1\operatorname{Lie}U(A)) = H^1(\operatorname{Gal}_K, \operatorname{gr}^1\operatorname{Lie}U(A))^\Delta$$ by [@Ko02 Theorem 3.15]. The $L$-parameter $\bar\rho_P$ defines an element $\bar c \in H^1(\operatorname{Gal}_K, \operatorname{gr}^1\operatorname{Lie}U(\bar{\mathbb{F}}_p))^\Delta$. By item (ii), there exists $c'\in H^1(\operatorname{Gal}_K, \operatorname{gr}^1\operatorname{Lie}U(\bar\mathbb{Z}_p))$ lifting $\bar c$. Define $c:=\frac{1}{[K:F]}\sum_{\gamma\in \Delta}\gamma c'\in H^1(\operatorname{Gal}_F, \operatorname{gr}^1\operatorname{Lie}U(\bar\mathbb{Z}_p))$.
It is clear that $c$ lifts $\bar c$. There are two possibilities: either $$\dim_{\bar{\mathbb{F}}_p}H^2(\operatorname{Gal}_F, \operatorname{gr}^0\operatorname{Lie}U(\bar{\mathbb{F}}_p)) =0,$$ or $$\dim_{\bar{\mathbb{F}}_p}H^2(\operatorname{Gal}_F, \operatorname{gr}^0\operatorname{Lie}U(\bar{\mathbb{F}}_p)) =1.$$ In the former case, there is no obstruction to extension and lifting, and the theorem follows from [@L21 Proposition 5.3.1]. Now we consider the latter case. By Fontaine's theory of $(\varphi, \Gamma)$-modules, $\rho_M$ corresponds to a framed $(\varphi, \Gamma)$-module $(f, g)$ with ${^{L}\!M}$-structure (or rather $\operatorname{Gal}^!(\operatorname{Lie}U)$-structure by Paragraph [3.12](#rem:non-split){reference-type="ref" reference="rem:non-split"}), and $\bar\rho_P$ corresponds to an element $(\bar x, \bar y)\in Z^1_{{\operatorname{Herr}}}(\bar f, \bar g)$. Here $(\bar f, \bar g)$ is the reduction of $(f, g)$. Since Galois cohomology is naturally isomorphic to the cohomology of Herr complexes ([@EG23 Theorem 5.1.29]), we can identify $H^1_{{\operatorname{Herr}}}({\operatorname{Int}}^i(f, g))$ with $H^1(\operatorname{Gal}_F, \operatorname{gr}^i\operatorname{Lie}U(\bar\mathbb{Z}_p))$. Consider the tuple $(f, g, \bar x, \bar y, H^1(\operatorname{Gal}_F, \operatorname{gr}^1\operatorname{Lie}U(\bar\mathbb{Z}_p)))$. We want to check that this tuple is a cohomologically Heisenberg lifting problem in the sense of Definition [4.1](#def:coh){reference-type="ref" reference="def:coh"}. (HL3) follows from assumption (i), (HL2) follows from assumption (iii) and (HL3), and (HL1) follows from assumption (ii) and the discussion in the second paragraph of this proof. We finish the proof by invoking Theorem [4.2](#thm:coh){reference-type="ref" reference="thm:coh"}. ◻ # Interaction of cup products with $\mathbb{Z}/2$-action   [\[sec:cup\]]{#sec:cup label="sec:cup"} Let $K$ be a $p$-adic field. Let $a$, $b$, and $c$ be positive integers with $a=c$.
Fix a Galois representation $$\bar \tau = \begin{bmatrix} \bar \tau_a & & \\ & \bar \tau_b & \\ & & \bar \tau_c \end{bmatrix} :\operatorname{Gal}_K \to \begin{bmatrix} \operatorname{GL}_a & & \\ & \operatorname{GL}_b & \\ & & \operatorname{GL}_c \end{bmatrix}(\bar{\mathbb{F}}_p),$$ as well as a lift $$\tau= \begin{bmatrix} \tau_a & & \\ & \tau_b & \\ & & \tau_c \end{bmatrix} :\operatorname{Gal}_K \to \begin{bmatrix} \operatorname{GL}_a & & \\ & \operatorname{GL}_b & \\ & & \operatorname{GL}_c \end{bmatrix}(\bar\mathbb{Z}_p)$$ of $\bar \tau$. Write $\operatorname{gr}^0\operatorname{Lie}U := \operatorname{Mat}_{a\times c}$, and $\operatorname{gr}^1\operatorname{Lie}U :=\operatorname{Mat}_{a\times b} \oplus \operatorname{Mat}_{b\times c}$. Recall that we have defined a (symmetrized) cup product $$H^1(\operatorname{Gal}_K, \operatorname{gr}^1\operatorname{Lie}U(A)) \times H^1(\operatorname{Gal}_K, \operatorname{gr}^1\operatorname{Lie}U(A)) \to H^2(\operatorname{Gal}_K, \operatorname{gr}^0\operatorname{Lie}U(A))$$ for $A=\bar{\mathbb{F}}_p, \bar\mathbb{Z}_p$. If $c\in H^i(\operatorname{Gal}_K, \operatorname{gr}^1\operatorname{Lie}U(A))$, write $c = (c_1, c_2)$ where $c_1\in H^i(\operatorname{Gal}_K, \operatorname{Mat}_{a\times b}(A))$ and $c_2\in H^i(\operatorname{Gal}_K, \operatorname{Mat}_{b\times c}(A))$. ## Lemma {#lem:unitary-cup-1} For the cup product $$H^1(\operatorname{Gal}_K, \operatorname{gr}^1\operatorname{Lie}U(A)) \times H^1(\operatorname{Gal}_K, \operatorname{gr}^1\operatorname{Lie}U(A)) \to H^2(\operatorname{Gal}_K, \operatorname{gr}^0\operatorname{Lie}U(A)),$$ we have $$\begin{aligned} (c_1, 0) \cup (c_1, 0) &= 0 \\ (0, c_2) \cup (0, c_2) &= 0\\ (c_1, 0) \cup (0, c_2) &= \frac{1}{2}(c_1, c_2) \cup (c_1, c_2)\\ (c_1, 0) \cup (c_1', 0) &= 0 \\ (0, c_2) \cup (0, c_2') &= 0,\end{aligned}$$ for any $c_1, c_2, c_1', c_2'$. *Proof.* The first two identities follow from Lemma [5.4](#lem:partial){reference-type="ref" reference="lem:partial"}.
The last three identities follow from the first two by bilinearity and symmetry of the pairing; for instance, $(c_1, c_2) \cup (c_1, c_2) = (c_1, 0)\cup(c_1, 0) + 2(c_1, 0)\cup(0, c_2) + (0, c_2)\cup(0, c_2) = 2(c_1, 0)\cup(0, c_2)$. ◻ Write $\Delta$ for the finite group $\{1, \j\}$ with two elements. While $\Delta$ denotes the Galois group $\operatorname{Gal}(K/F)$ in the previous sections, $\Delta$ is merely an abstract group in this section. ## Definition {#def:classical} An action of $\Delta$ on each of $H^i(\operatorname{Gal}_K, \operatorname{gr}^j\operatorname{Lie}U(A))$ is said to be *classical* if - $\j H^1(\operatorname{Gal}_K, \operatorname{Mat}_{a\times b}(A))\subset H^1(\operatorname{Gal}_K, \operatorname{Mat}_{b\times c}(A))$, - $\j H^1(\operatorname{Gal}_K, \operatorname{Mat}_{b\times c}(A))\subset H^1(\operatorname{Gal}_K, \operatorname{Mat}_{a\times b}(A))$, - the $\Delta$-action is compatible with the cup products. We also call such $\Delta$-actions *classical $\Delta$-structures*. ## Proposition {#prop:unitary-cup-1} Fix a classical $\Delta$-structure. If $$H^2(\operatorname{Gal}_K, \operatorname{gr}^0\operatorname{Lie}U(\bar{\mathbb{F}}_p))=H^2(\operatorname{Gal}_K, \operatorname{gr}^0\operatorname{Lie}U(\bar{\mathbb{F}}_p))^\Delta=\bar{\mathbb{F}}_p,$$ then the cup product $$H^1(\operatorname{Gal}_K, \operatorname{gr}^1\operatorname{Lie}U(\bar\mathbb{Z}_p))^\Delta\underset{\bar\mathbb{Z}_p}{\otimes}\bar{\mathbb{F}}_p \times H^1(\operatorname{Gal}_K, \operatorname{gr}^1\operatorname{Lie}U(\bar\mathbb{Z}_p))^\Delta\underset{\bar\mathbb{Z}_p}{\otimes}\bar{\mathbb{F}}_p \to H^2(\operatorname{Gal}_K, \operatorname{gr}^0\operatorname{Lie}U(\bar{\mathbb{F}}_p))^\Delta$$ is non-trivial if and only if $$H^1(\operatorname{Gal}_K, \operatorname{gr}^1\operatorname{Lie}U(\bar\mathbb{Z}_p))\underset{\bar\mathbb{Z}_p}{\otimes}\bar{\mathbb{F}}_p \times H^1(\operatorname{Gal}_K, \operatorname{gr}^1\operatorname{Lie}U(\bar\mathbb{Z}_p))\underset{\bar\mathbb{Z}_p}{\otimes}\bar{\mathbb{F}}_p \to H^2(\operatorname{Gal}_K, \operatorname{gr}^0\operatorname{Lie}U(\bar{\mathbb{F}}_p))$$ is non-trivial.
*Proof.* Since $\cup$ is a symmetric pairing, non-triviality of $\cup$ on $H^1(\operatorname{Gal}_K, \operatorname{gr}^1\operatorname{Lie}U(\bar\mathbb{Z}_p))$ implies $(c_1, c_2)\cup(c_1, c_2)\ne 0$ for some $(c_1, c_2)\in H^1(\operatorname{Gal}_K, \operatorname{gr}^1\operatorname{Lie}U(\bar\mathbb{Z}_p))\underset{\bar\mathbb{Z}_p}{\otimes}\bar{\mathbb{F}}_p$. By Lemma [6.1](#lem:unitary-cup-1){reference-type="ref" reference="lem:unitary-cup-1"}, $(c_1, 0)\cup (0, c_2)\ne 0$. Since $$H^2(\operatorname{Gal}_K, \operatorname{gr}^0\operatorname{Lie}U(\bar{\mathbb{F}}_p))^\Delta=H^2(\operatorname{Gal}_K, \operatorname{gr}^0\operatorname{Lie}U(\bar{\mathbb{F}}_p)),$$ we conclude that $\Delta$ acts trivially on $H^2(\operatorname{Gal}_K, \operatorname{gr}^0\operatorname{Lie}U(\bar{\mathbb{F}}_p))$. We argue by contradiction and assume $\cup$ is trivial on $H^1(\operatorname{Gal}_K, \operatorname{gr}^1\operatorname{Lie}U(\bar\mathbb{Z}_p))^\Delta\underset{\bar\mathbb{Z}_p}{\otimes}\bar{\mathbb{F}}_p$. We claim $x\cup y = 0$ for each $x\in H^1(\operatorname{Gal}_K, \operatorname{gr}^1\operatorname{Lie}U(\bar\mathbb{Z}_p))^\Delta\underset{\bar\mathbb{Z}_p}{\otimes}\bar{\mathbb{F}}_p$ and $y\in H^1(\operatorname{Gal}_K, \operatorname{gr}^1\operatorname{Lie}U(\bar\mathbb{Z}_p))\underset{\bar\mathbb{Z}_p}{\otimes}\bar{\mathbb{F}}_p$. Indeed, $$2(x\cup y) = x\cup y + \j(x\cup y) = x\cup y + (\j x)\cup (\j y) = x\cup y + x \cup \j y = x\cup (y+\j y)=0$$ because both $x$ and $y +\j y$ lie in $H^1(\operatorname{Gal}_K, \operatorname{gr}^1\operatorname{Lie}U(\bar\mathbb{Z}_p))^\Delta\underset{\bar\mathbb{Z}_p}{\otimes}\bar{\mathbb{F}}_p$; since $p\ne 2$, this gives $x\cup y=0$. Since $(c_1, 0) + \j (c_1, 0)\in H^1(\operatorname{Gal}_K, \operatorname{gr}^1\operatorname{Lie}U(\bar\mathbb{Z}_p))^\Delta\underset{\bar\mathbb{Z}_p}{\otimes}\bar{\mathbb{F}}_p$, we have $((c_1, 0) + \j (c_1, 0)) \cup (0, c_2) = 0$. However, since $(c_1, 0)\cup (0, c_2)\ne 0$, we must have $\j (c_1, 0) \cup (0, c_2)\ne 0$.
By the classicality of the $\Delta$-structure, $\j (c_1, 0) = (0, c_2')$ for some $c_2'$, and thus by Lemma [6.1](#lem:unitary-cup-1){reference-type="ref" reference="lem:unitary-cup-1"}, $\j(c_1, 0)\cup (0, c_2)=0$, a contradiction. ◻ Next, we establish a general non-triviality result for cup products; before that we need a non-degeneracy result. ## Lemma {#lem:unitary-cup-3} Assume $\bar\tau_a$ and $\bar\tau_c$ are irreducible. Then the cup product $$H^1(\operatorname{Gal}_K, \operatorname{Mat}_{a\times b}(\bar{\mathbb{F}}_p)) \times H^1(\operatorname{Gal}_K, \operatorname{Mat}_{b\times c}(\bar{\mathbb{F}}_p)) \to H^2(\operatorname{Gal}_K, \operatorname{Mat}_{a\times c}(\bar{\mathbb{F}}_p))$$ is non-degenerate. *Proof.* Fix a non-zero element $x \in H^1(\operatorname{Gal}_K, \operatorname{Mat}_{b\times c}(\bar{\mathbb{F}}_p))$. Such an extension class $x$ corresponds to a non-split extension $\bar\tau_d = \begin{bmatrix} \bar\tau_b & * \\ & \bar\tau_c \end{bmatrix}$. In particular, the map $$H^0(\operatorname{Gal}_K, \bar\tau_a^\vee \otimes \bar\tau_d(1)) \to H^0(\operatorname{Gal}_K, \bar\tau_a^\vee \otimes \bar\tau_c(1))$$ is the zero map (otherwise the socle of $\bar\tau_d$ would be strictly larger than the socle of $\bar\tau_b$, and $\bar\tau_d$ would be a split extension). By local Tate duality, the map $$H^2(\operatorname{Gal}_K, \bar\tau_c^\vee \otimes \bar\tau_a) \to H^2(\operatorname{Gal}_K, \bar\tau_d^\vee \otimes \bar\tau_a)$$ is also the zero map.
The short exact sequence $$0\to \bar\tau_b \to \bar\tau_d \to \bar\tau_c \to 0$$ induces the long exact sequence $$H^1(\operatorname{Gal}_K, \bar\tau_c^\vee \otimes \bar\tau_a) \to H^1(\operatorname{Gal}_K, \bar\tau_d^\vee \otimes \bar\tau_a) \to H^1(\operatorname{Gal}_K, \bar\tau_b^\vee \otimes \bar\tau_a) \to H^2(\operatorname{Gal}_K, \bar\tau_c^\vee \otimes \bar\tau_a) \xrightarrow{0} H^2(\operatorname{Gal}_K, \bar\tau_d^\vee \otimes \bar\tau_a).$$ Since $H^2(\operatorname{Gal}_K, \bar\tau_c^\vee \otimes \bar\tau_a) \ne 0$, there exists an element $y\in H^1(\operatorname{Gal}_K, \bar\tau_b^\vee \otimes \bar\tau_a)$ which maps to a non-zero element of $H^2(\operatorname{Gal}_K, \bar\tau_c^\vee \otimes \bar\tau_a)$; in particular, $y$ does not extend to an element of $H^1(\operatorname{Gal}_K, \bar\tau_d^\vee \otimes \bar\tau_a)$. By Lemma [5.4](#lem:partial){reference-type="ref" reference="lem:partial"}, $(x, y)\cup (x, y) \ne 0$, and thus by Lemma [6.1](#lem:unitary-cup-1){reference-type="ref" reference="lem:unitary-cup-1"}, $x\cup y = \frac{1}{2}((x, y)\cup (x, y) )\ne 0$. ◻ ## Lemma {#lem:unitary-cup-4} Let $X, Y$ be finite-dimensional vector spaces over a field $\kappa$. Let $$\cup: X\times Y \to \kappa$$ be a non-degenerate bilinear pairing. Let $H_X\subset X$ and $H_Y\subset Y$ be subspaces such that $x\cup y=0$ for all $x\in H_X$ and $y\in H_Y$. Then either $\dim X \ge 2\dim H_X$ or $\dim Y \ge 2 \dim H_Y$. *Proof.* It suffices to show $\dim X + \dim Y \ge 2(\dim H_X+\dim H_Y)$ (if both alternatives failed, this inequality would fail). We define a symmetric bilinear form on $X\oplus Y$ by setting $(x_1, y_1)\cdot (x_2, y_2) = x_1 \cup y_2 + x_2\cup y_1$; it is non-degenerate since $\cup$ is, and $H_X\oplus H_Y$ is totally isotropic. The lemma now follows since a totally isotropic subspace of a non-degenerate symmetric bilinear space has dimension at most half the total dimension (e.g., by the Gram-Schmidt process). ◻ ## Lemma {#lem:unitary-cup-5} Assume both $\bar\tau_a$ and $\bar\tau_c$ are irreducible.
The cup product $$H^1(\operatorname{Gal}_K, \operatorname{Mat}_{a\times b}(\bar\mathbb{Z}_p))\underset{\bar\mathbb{Z}_p}{\otimes}\bar{\mathbb{F}}_p \times H^1(\operatorname{Gal}_K, \operatorname{Mat}_{b\times c}(\bar\mathbb{Z}_p))\underset{\bar\mathbb{Z}_p}{\otimes}\bar{\mathbb{F}}_p \to H^2(\operatorname{Gal}_K, \operatorname{Mat}_{a\times c}(\bar{\mathbb{F}}_p))$$ is non-trivial unless all of the following hold - $a=1$, - $K=\mathbb{Q}_p$, - either $\bar\tau_b = \bar\tau_a(-1)^{\oplus b}$ and $\bar\tau_c = \bar\tau_a(-1)$; or $\bar\tau_b = \bar\tau_a^{\oplus b}$ and $\bar\tau_c = \bar\tau_a(-1)$. *Proof.* By Lemma [6.5](#lem:unitary-cup-4){reference-type="ref" reference="lem:unitary-cup-4"} and Lemma [6.4](#lem:unitary-cup-3){reference-type="ref" reference="lem:unitary-cup-3"}, the lemma holds if $$\dim_{\bar{\mathbb{F}}_p} H^1(\operatorname{Gal}_K, \operatorname{Mat}_{a\times b}(\bar\mathbb{Z}_p))\underset{\bar\mathbb{Z}_p}{\otimes}\bar{\mathbb{F}}_p > \frac{1}{2} \dim_{\bar{\mathbb{F}}_p} H^1(\operatorname{Gal}_K, \operatorname{Mat}_{a\times b}(\bar{\mathbb{F}}_p)) \tag{$*$}$$ and $$\dim_{\bar{\mathbb{F}}_p} H^1(\operatorname{Gal}_K, \operatorname{Mat}_{b\times c}(\bar\mathbb{Z}_p))\underset{\bar\mathbb{Z}_p}{\otimes}\bar{\mathbb{F}}_p > \frac{1}{2} \dim_{\bar{\mathbb{F}}_p} H^1(\operatorname{Gal}_K, \operatorname{Mat}_{b\times c}(\bar{\mathbb{F}}_p)). \tag{$**$}$$ We argue by contradiction and assume either ($*$) or ($**$) fails.
Since ($*$) and ($**$) are completely analogous, we assume ($*$) fails, i.e., $$\label{eq:cup-1} \dim_{\bar{\mathbb{F}}_p} H^1(\operatorname{Gal}_K, \operatorname{Mat}_{a\times b}(\bar\mathbb{Z}_p))\underset{\bar\mathbb{Z}_p}{\otimes}\bar{\mathbb{F}}_p \le \frac{1}{2} \dim_{\bar{\mathbb{F}}_p} H^1(\operatorname{Gal}_K, \operatorname{Mat}_{a\times b}(\bar{\mathbb{F}}_p)).$$ By the universal coefficient theorem, we have the short exact sequence $$0 \to H^1(\operatorname{Gal}_K, \operatorname{Mat}_{a\times b}(\bar\mathbb{Z}_p))\underset{\bar\mathbb{Z}_p}{\otimes}\bar{\mathbb{F}}_p \to H^1(\operatorname{Gal}_K, \operatorname{Mat}_{a\times b}(\bar{\mathbb{F}}_p)) \to {\operatorname{Tor}}^{\bar\mathbb{Z}_p}_1(H^2(\operatorname{Gal}_K, \operatorname{Mat}_{a\times b}(\bar\mathbb{Z}_p)), \bar{\mathbb{F}}_p)\to 0.$$ Therefore the assumption ($\ref{eq:cup-1}$) is equivalent to $$\label{eq:cup-2} \dim_{\bar{\mathbb{F}}_p} H^1(\operatorname{Gal}_K, \operatorname{Mat}_{a\times b}(\bar\mathbb{Z}_p))\underset{\bar\mathbb{Z}_p}{\otimes}\bar{\mathbb{F}}_p \le \dim_{\bar{\mathbb{F}}_p} {\operatorname{Tor}}^{\bar\mathbb{Z}_p}_1(H^2(\operatorname{Gal}_K, \operatorname{Mat}_{a\times b}(\bar\mathbb{Z}_p)), \bar{\mathbb{F}}_p).$$ Note that $$\dim_{\bar{\mathbb{F}}_p} {\operatorname{Tor}}^{\bar\mathbb{Z}_p}_1(H^2(\operatorname{Gal}_K, \operatorname{Mat}_{a\times b}(\bar\mathbb{Z}_p)), \bar{\mathbb{F}}_p) \le\dim_{\bar{\mathbb{F}}_p}H^2(\operatorname{Gal}_K, \operatorname{Mat}_{a\times b}(\bar{\mathbb{F}}_p)),$$ since $H^2$ commutes with base change; also see [@We94 Example 3.1.7] for the computation of ${\operatorname{Tor}}$.
Also note that $$\dim_{\bar{\mathbb{F}}_p} H^1(\operatorname{Gal}_K, \operatorname{Mat}_{a\times b}(\bar\mathbb{Z}_p))\underset{\bar\mathbb{Z}_p}{\otimes}\bar{\mathbb{F}}_p \ge \operatorname{rank}_{\bar\mathbb{Z}_p} H^1(\operatorname{Gal}_K, \operatorname{Mat}_{a\times b}(\bar\mathbb{Z}_p))_{\text{torsion-free}},$$ and $$\operatorname{rank}_{\bar\mathbb{Z}_p} H^1(\operatorname{Gal}_K, \operatorname{Mat}_{a\times b}(\bar\mathbb{Z}_p))_{\text{torsion-free}} = \dim_{\bar{\mathbb{Q}}_p}H^1(\operatorname{Gal}_K, \operatorname{Mat}_{a\times b}(\bar{\mathbb{Q}}_p)).$$ By the local Euler characteristic formula, we have $$\begin{aligned} \dim_{\bar{\mathbb{Q}}_p}H^1(\operatorname{Gal}_K, \operatorname{Mat}_{a\times b}(\bar{\mathbb{Q}}_p)) =& \dim_{\bar{\mathbb{Q}}_p}H^0(\operatorname{Gal}_K, \operatorname{Mat}_{a\times b}(\bar{\mathbb{Q}}_p))\\ &+ \dim_{\bar{\mathbb{Q}}_p}H^2(\operatorname{Gal}_K, \operatorname{Mat}_{a\times b}(\bar{\mathbb{Q}}_p)) + [K:\mathbb{Q}_p] a b\\ \ge & [K:\mathbb{Q}_p] ab.\end{aligned}$$ On the other hand, by local Tate duality $$\dim_{\bar{\mathbb{F}}_p}H^2(\operatorname{Gal}_K, \operatorname{Mat}_{a\times b}(\bar{\mathbb{F}}_p)) =\dim_{\bar{\mathbb{F}}_p}H^0(\operatorname{Gal}_K, \bar\tau_a^\vee \otimes\bar\tau_b(1)) \le a b.$$ Combining all of the above, ([\[eq:cup-2\]](#eq:cup-2){reference-type="ref" reference="eq:cup-2"}) becomes $$[K:\mathbb{Q}_p]a b \le a b.$$ So all inequalities above must be equalities, and we are forced to have - $K=\mathbb{Q}_p$, - $H^1(\operatorname{Gal}_K, \operatorname{Mat}_{a\times b}(\bar\mathbb{Z}_p))$ is torsion-free, and - $\dim_{\bar{\mathbb{F}}_p}H^2(\operatorname{Gal}_K, \operatorname{Mat}_{a\times b}(\bar{\mathbb{F}}_p))=ab$. Item (iii) further forces $a=1$ because $\bar\tau_a$ is assumed to be irreducible. ◻ ## Corollary {#cor:unitary-cup-1} Fix a classical $\Delta$-structure. Assume both $\bar\tau_a$ and $\bar\tau_c$ are irreducible.
The cup product $$H^1(\operatorname{Gal}_K, \operatorname{gr}^1\operatorname{Lie}U(\bar\mathbb{Z}_p))^\Delta\underset{\bar\mathbb{Z}_p}{\otimes}\bar{\mathbb{F}}_p \times H^1(\operatorname{Gal}_K, \operatorname{gr}^1\operatorname{Lie}U(\bar\mathbb{Z}_p))^\Delta\underset{\bar\mathbb{Z}_p}{\otimes}\bar{\mathbb{F}}_p \to H^2(\operatorname{Gal}_K, \operatorname{Mat}_{a\times c}(\bar{\mathbb{F}}_p))^\Delta$$ is non-trivial unless all of the following hold - $a=1$, - $K=\mathbb{Q}_p$, - either $\bar\tau_b = \bar\tau_a(-1)^{\oplus b}$ and $\bar\tau_c = \bar\tau_a(-1)$; or $\bar\tau_b = \bar\tau_a^{\oplus b}$ and $\bar\tau_c = \bar\tau_a(-1)$. *Proof.* Combine Lemma [6.6](#lem:unitary-cup-5){reference-type="ref" reference="lem:unitary-cup-5"} and Proposition [6.3](#prop:unitary-cup-1){reference-type="ref" reference="prop:unitary-cup-1"}. ◻ # Example A: unitary groups   Now assume $G=U_n$ is a quasi-split tamely ramified unitary group over $F$ which splits over the quadratic extension $K/F$ (thus we have implicitly assumed $p\ne 2$). The Dynkin diagram of $G$ is a chain of $(n-1)$ vertices, and $\Delta=\operatorname{Gal}(K/F)$ acts on ${\operatorname{Dyn}}(G)$ by reflection. The maximal proper $\Delta$-stable subsets of ${\operatorname{Dyn}}(G)$ are given by removing either two symmetric vertices, or the middle vertex. Therefore, the Levi subgroups of maximal proper $F$-parabolics of $G$ are of the form $$M_k:=\operatorname{Res}_{K/F}\operatorname{GL}_k \times U_{n-2k}.$$ If ${^{L}\!P}$ is a maximal proper parabolic of ${^{L}\!G}$, then the Levi of ${^{L}\!P}$ is of the form ${^{L}\!M}_k$; we will write ${^{L}\!P}_k$ for ${^{L}\!P}$ to emphasize its type. ## Proposition {#prop:unitary} Let $\bar\rho: \operatorname{Gal}_F\to {^{L}\!G}(\bar{\mathbb{F}}_p)$ be an $L$-parameter.
Then either $\bar\rho$ is elliptic, or $\bar\rho$ factors through ${^{L}\!P}_k(\bar{\mathbb{F}}_p)$ for some $k$ such that the composite $\bar r:\operatorname{Gal}_F \xrightarrow{\bar\rho^{\operatorname{ss}}} {^{L}\!M}_k(\bar{\mathbb{F}}_p) \to {^{L}\!\operatorname{Res}}_{K/F}\operatorname{GL}_k(\bar{\mathbb{F}}_p)$ is elliptic. *Proof.* By [@L23 Theorem B], $\bar\rho$ is either elliptic, or factors through some ${^{L}\!P}_k(\bar{\mathbb{F}}_p)$. By the non-abelian Shapiro's lemma, $L$-parameters $\operatorname{Gal}_F\to {^{L}\!\operatorname{Res}}_{K/F}\operatorname{GL}_k(\bar{\mathbb{F}}_p)$ are in natural bijection with $L$-parameters $\operatorname{Gal}_K\to \operatorname{GL}_k(\bar{\mathbb{F}}_p)$; and this bijection clearly preserves ellipticity. If $\bar r$ is not elliptic, then $\bar r$, when regarded as a Galois representation $\operatorname{Gal}_K\to \operatorname{GL}_k(\bar{\mathbb{F}}_p)$, contains a proper irreducible subrepresentation $\bar r_0:\operatorname{Gal}_K\to \operatorname{GL}_s(\bar{\mathbb{F}}_p)$. It is easy to see that $\bar\rho$ also factors through ${^{L}\!P}_s(\bar{\mathbb{F}}_p)$. So we are done. ◻ We take a closer look at ${^{L}\!P}_k$: $${^{L}\!P}_k= \begin{bmatrix} \operatorname{GL}_k & \operatorname{Mat}_{k\times (n-2k)} & \operatorname{Mat}_{k \times k} \\ & \operatorname{GL}_{n-2k} & \operatorname{Mat}_{(n-2k)\times k}\\ & & \operatorname{GL}_{k} \end{bmatrix} \rtimes \Delta.$$ ## Lemma {#lem:unitary-1} If $\bar\rho:\operatorname{Gal}_F \to {^{L}\!G}(\bar{\mathbb{F}}_p)$ is not elliptic, then there exists a parabolic ${^{L}\!P}_k$ through which $\bar\rho$ factors and $\bar\rho$ is a Heisenberg-type extension of some $\bar\rho_M:\operatorname{Gal}_F \to {^{L}\!M}_k(\bar{\mathbb{F}}_p)$.
*Proof.* By Proposition [7.1](#prop:unitary){reference-type="ref" reference="prop:unitary"}, there exists a parabolic ${^{L}\!P}_k$ such that $\bar r:\operatorname{Gal}_F \xrightarrow{\bar\rho^{\operatorname{ss}}} {^{L}\!M}_k(\bar{\mathbb{F}}_p) \to {^{L}\!\operatorname{Res}}_{K/F}\operatorname{GL}_k(\bar{\mathbb{F}}_p)$ is elliptic. Write $$\bar r|_{\operatorname{Gal}_K} = \begin{bmatrix} \bar r_1 & & \\ & 1_{n-2k} & \\ & & \bar r_2 \end{bmatrix},$$ where $\bar r_1, \bar r_2:\operatorname{Gal}_K\to \operatorname{GL}_k(\bar{\mathbb{F}}_p)$. By the non-abelian Shapiro's lemma (see [@GHS Subsection 9.4] for details), $\bar r$ can be fully reconstructed from $\bar r_1$, and $\bar r_2$ is completely determined by $\bar r_1$; in particular, both $\bar r_1$ and $\bar r_2$ are irreducible Galois representations. We have $$H^2(\operatorname{Gal}_K, \operatorname{gr}^0\operatorname{Lie}U(\bar{\mathbb{F}}_p)) =H^2(\operatorname{Gal}_K, \operatorname{Hom}(\bar r_2, \bar r_1)).$$ Since both $\bar r_1$ and $\bar r_2$ are irreducible, by local Tate duality, we have $\dim H^2(\operatorname{Gal}_K, \operatorname{Hom}(\bar r_2, \bar r_1)) = \dim H^0(\operatorname{Gal}_K, \operatorname{Hom}(\bar r_1, \bar r_2(1)))\le 1$. ◻ Next, we study cup products. Fix the parabolic type ${^{L}\!P}_k$. We have $$\operatorname{gr}^1\operatorname{Lie}U = \operatorname{Mat}_{k\times(n-2k)} \oplus \operatorname{Mat}_{(n-2k)\times k}$$ and $$\operatorname{gr}^0\operatorname{Lie}U = \operatorname{Mat}_{k\times k}.$$ We will use all the notation introduced in Section [\[sec:cup\]](#sec:cup){reference-type="ref" reference="sec:cup"}. By [@Ko02 Theorem 3.15], we have $$H^i(\operatorname{Gal}_F, \operatorname{gr}^j\operatorname{Lie}U(A)) = H^i(\operatorname{Gal}_K, \operatorname{gr}^j\operatorname{Lie}U(A))^{\operatorname{Gal}(K/F)} = H^i(\operatorname{Gal}_K, \operatorname{gr}^j\operatorname{Lie}U(A))^{\Delta},$$ for all $i$ and $j$.
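The identification $H^i(\operatorname{Gal}_F, -) = H^i(\operatorname{Gal}_K, -)^\Delta$ rests on the standard restriction-corestriction argument; the following sketch (our summary of the argument behind [@Ko02 Theorem 3.15], included for the reader's convenience) records why prime-to-$p$ degree is what matters:

```latex
% Sketch: restriction-corestriction for the prime-to-p extension K/F.
% The first identity holds on H^i(Gal_F, -), the second on H^i(Gal_K, -).
\operatorname{cor}\circ\operatorname{res} = [K:F]\cdot\operatorname{id},
\qquad
\operatorname{res}\circ\operatorname{cor} = \sum_{\gamma\in\Delta}\gamma .
```

Since $[K:F]$ is invertible in $A$, $\operatorname{res}$ is split injective, and for a $\Delta$-invariant class $c$ one has $\operatorname{res}\bigl(\frac{1}{[K:F]}\operatorname{cor}(c)\bigr) = \frac{1}{[K:F]}\sum_{\gamma\in\Delta}\gamma c = c$, so $\operatorname{res}$ maps onto the $\Delta$-invariants. The same averaging operator $\frac{1}{[K:F]}\sum_{\gamma\in\Delta}\gamma$ appears in the proof of Theorem [5.5](#thm:Galois){reference-type="ref" reference="thm:Galois"}.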
## Lemma {#lem:unitary-cup-2} The $\Delta=\operatorname{Gal}(K/F)$-action on $H^1(\operatorname{Gal}_K, \operatorname{gr}^1\operatorname{Lie}U(A))$ satisfies $$\begin{aligned} \j(c_1, 0) = (0, *)\\ \j(0, c_2) = (*, 0),\end{aligned}$$ for any $c_1, c_2$. *Proof.* Write $$w=\begin{bmatrix} 0 & 0 & J_1 \\ 0 & J_2 & 0 \\ J_3 & 0 & 0 \end{bmatrix}$$ for (a representative of) the longest Weyl group element. Let $$\rho= \begin{bmatrix} A & B & * \\ & D & E \\ & & F \end{bmatrix}: \operatorname{Gal}_K\to P(A)$$ be a group homomorphism. Note that each of $A, B, D, E, F$ is a matrix-valued function on $\operatorname{Gal}_K$. Write $A'$ for $\gamma\mapsto A(\j^{-1} \gamma \j)$ and similarly define $B', D', E', F'$. We have $$\j \rho (\j^{-1} - \j) \j^{-1} = w \rho(\j^{-1} - \j)^{-t} w^{-1} = \begin{bmatrix} J_1 F^{\prime-t} J_1^{-1} & -J_1 F^{\prime-t}E^{\prime t} D^{\prime -t} J_2^{-1} & * \\ & J_2 D^{\prime -t} J_2^{-1} & -J_2 D^{\prime -t} B^{\prime t} A^{\prime -t}J_3^{-1}\\ && J_3 A^{\prime -t} J_3^{-1} \end{bmatrix}.$$ In particular, we see that the $\j$-involution on $H^1(\operatorname{Gal}_K, \operatorname{gr}^1\operatorname{Lie}U(A))$ permutes the two direct summands. ◻ The lemma above immediately implies the following. ## Corollary {#corollary} The Galois action of $\operatorname{Gal}(K/F)$ on $H^i(\operatorname{Gal}_K, \operatorname{gr}^j \operatorname{Lie}U(A))$ is a classical $\Delta$-structure in the sense of Definition [6.2](#def:classical){reference-type="ref" reference="def:classical"}. ## Theorem {#thm:main-unitary} Theorem [Theorem 3](#thm:main){reference-type="ref" reference="thm:main"} holds for ${^{L}\!G}_n={^{L}\!U}_n$. Moreover, for unramified unitary groups, $\bar\rho$ admits a crystalline lift. *Proof.* If $\bar\rho$ is elliptic, then it is [@L23 Theorem C].
Suppose $\bar\rho$ is not elliptic. Then by Lemma [7.2](#lem:unitary-1){reference-type="ref" reference="lem:unitary-1"}, there exists a parabolic ${^{L}\!P}_k$ through which $\bar\rho$ factors and such that $\bar\rho$ is a Heisenberg-type extension of some $\bar\rho_M:\operatorname{Gal}_F\to {^{L}\!M}_k(\bar{\mathbb{F}}_p)$. We have $${^{L}\!M}_k ={^{L}\!(}\operatorname{Res}_{K/F}\operatorname{GL}_k\times U_{n-2k}) = \begin{bmatrix} \operatorname{GL}_k & & \\ & \operatorname{GL}_{n-2k} & \\ & & \operatorname{GL}_k \end{bmatrix} \rtimes \{1, \j\}$$ where $$\begin{bmatrix} \operatorname{GL}_k & & \\ & I_{n-2k} & \\ & & \operatorname{GL}_k \end{bmatrix} \rtimes \{1, \j\} \cong {^{L}\!\operatorname{Res}}_{K/F}\operatorname{GL}_k, ~{\text{and}}~ \begin{bmatrix} I_k & & \\ & \operatorname{GL}_{n-2k} & \\ & & I_k \end{bmatrix} \rtimes \{1, \j\} \cong {^{L}\!U}_{n-2k}.$$ Write $$\bar\rho_M = \begin{bmatrix} \bar\rho_a & & \\ & \bar\rho_b & \\ & & \bar\rho_c \end{bmatrix} \rtimes *.$$ By induction on the semisimple rank of $G_n$, we may assume $\bar\rho_b$ admits a lift $\rho_b:\operatorname{Gal}_F\to {^{L}\!U}_{n-2k}(\bar\mathbb{Z}_p)$. Write $w=\begin{bmatrix} J_1 & & \\& J_2 & \\& & J_3 \end{bmatrix}$ for a longest Weyl group element. Let $(\rho_a, \rho_c):\operatorname{Gal}_F\to {^{L}\!\operatorname{Res}}_{K/F}\operatorname{GL}_k(\bar\mathbb{Z}_p)$ be a potentially crystalline lift of $(\bar\rho_a, \bar\rho_c)$. We have $$\rho_c(-) = J_3\rho_a(\j - \j^{-1})^{-t} J_3^{-1}.$$ In particular, if $\lambda$ is a potentially crystalline character with trivial mod $p$ reduction, then $(\lambda\rho_a, \lambda^{-1}\rho_c)$ is another potentially crystalline lift of $(\bar\rho_a, \bar\rho_c)$.
By choosing $\lambda:=\bar\mathbb{Z}_p(m)$ with trivial reduction (i.e., with $p-1$ dividing $m$) for $m$ sufficiently large, we may assume the Hodge-Tate weights of $\operatorname{Hom}(\rho_b, \rho_a)$, $\operatorname{Hom}(\rho_c, \rho_a)$ and $\operatorname{Hom}(\rho_c, \rho_b)$ are all integers $\ge 2$; in particular $H^1(\operatorname{Gal}_K, \operatorname{Lie}U(\bar\mathbb{Q}_p)) =H^1_{{\operatorname{crys}}}(\operatorname{Gal}_K, \operatorname{Lie}U(\bar\mathbb{Q}_p))$. By Theorem [Theorem 1](#thm:EG){reference-type="ref" reference="thm:EG"}, we can modify $\rho_b$ without changing its Hodge-Tate weights and reduction mod $p$ such that $\bar\rho|_{\operatorname{Gal}_K}$ admits a partial lift which is a partial extension of $(\rho_a|_{\operatorname{Gal}_K}, \rho_b|_{\operatorname{Gal}_K}, \rho_c|_{\operatorname{Gal}_K})$. Since $K$ is a quadratic extension of $F$, we have $K\ne \mathbb{Q}_p$, and the non-triviality of the cup product follows from Lemma [6.6](#lem:unitary-cup-5){reference-type="ref" reference="lem:unitary-cup-5"} and Proposition [6.3](#prop:unitary-cup-1){reference-type="ref" reference="prop:unitary-cup-1"}. Now the theorem follows from Theorem [5.5](#thm:Galois){reference-type="ref" reference="thm:Galois"} and the main theorem of [@L23D]. For the moreover part, note that we can choose $\rho_a$, $\rho_b$, and $\rho_c$ such that they are crystalline after restricting to $\operatorname{Gal}_K$; if $G$ is unramified, this means $\rho_a$, $\rho_b$, and $\rho_c$ are already crystalline. ◻ # Example B: symplectic groups   Since $G=\operatorname{GSp}_{2n}$ is a split group, we have $K=F$. The Dynkin diagram for $G$ is of type $C_n$. Thus the maximal proper Levi subgroups of $G$ are of the form $$\operatorname{GL}_k \times \operatorname{GSp}_{2(n-k)},~\text{or}~\operatorname{GL}_n.$$ Set $$M_k := \begin{cases} \operatorname{GL}_k \times \operatorname{GSp}_{2(n-k)} & k < n\\ \operatorname{GL}_n & k=n \end{cases}$$ and write $P_k$ for the corresponding parabolic subgroup.
Set $\Omega_k = \begin{bmatrix} & I_{n-k} \\ -I_{n-k} & \end{bmatrix}$. We use the following presentation of $\operatorname{GSp}_{2n}$: $$\operatorname{GSp}_{2n} = \{X\in \operatorname{GL}_{2n}~|~ X^t \begin{bmatrix} & & I_k \\ & \Omega_k & \\ -I_k & & \end{bmatrix} X = \lambda \begin{bmatrix} & & I_k \\ & \Omega_k & \\ -I_k & & \end{bmatrix} ~\text{for some}~\lambda \}.$$ We have $$P_k = \operatorname{GSp}_{2n} \cap \begin{bmatrix} \operatorname{GL}_k & \operatorname{Mat}_{k\times (2n-2k)} & \operatorname{Mat}_{k \times k} \\ & \operatorname{GL}_{2n-2k} & \operatorname{Mat}_{(2n-2k)\times k} \\ & & \operatorname{GL}_k \end{bmatrix} =:\operatorname{GSp}_{2n} \cap Q_k$$ where $Q_k$ is the corresponding parabolic of $\operatorname{GL}_{2n}$. ## Lemma {#lem:symplectic-1} If $\bar\rho:\operatorname{Gal}_F \to \operatorname{GSp}_{2n}(\bar{\mathbb{F}}_p)$ is not elliptic, then there exists a parabolic $P_k$ through which $\bar\rho$ factors and $\bar\rho$ is a Heisenberg-type extension of some $\bar\rho_M:\operatorname{Gal}_F \to M_k(\bar{\mathbb{F}}_p)$. *Proof.* The proof is similar to that of Lemma [7.2](#lem:unitary-1){reference-type="ref" reference="lem:unitary-1"}. Since $\bar\rho$ is not elliptic, it factors through $P_k$ for some $k$. Write $$\bar\rho = \begin{bmatrix} \bar r_1 & * & * \\& \bar r_2 & * \\& & \bar r_3 \end{bmatrix}.$$ If $\bar r_1$ or $\bar r_3$ is not irreducible, then $\bar\rho$ also factors through $P_s$ for some $s$ strictly less than $k$. So we can assume both $\bar r_1$ and $\bar r_3$ are irreducible. Finally, local Tate duality ensures $\bar\rho$ is a Heisenberg-type extension. ◻ Write $U$ and $V$ for the unipotent radicals of $Q_k$ and $P_k$, respectively.
We have $$\operatorname{gr}^0 \operatorname{Lie}U = \operatorname{Mat}_{k\times k}$$ and $$\operatorname{gr}^1 \operatorname{Lie}U = \operatorname{Mat}_{k\times 2(n-k)} \times \operatorname{Mat}_{2(n-k)\times k}$$ Define a $\Delta:=\{1, \j\}$-action on $\operatorname{Lie}U$ by $$\begin{aligned} \label{eqn:symp} \j (x, y) :=& (y^t\Omega_k, \Omega_k x^t),& (x, y)\in \operatorname{Mat}_{k\times 2(n-k)} \times \operatorname{Mat}_{2(n-k)\times k}\\ \j z :=& z^t,& z\in \operatorname{Mat}_{k\times k}.\end{aligned}$$ ## Lemma {#lem:symp-2} We have $\operatorname{Lie}V = (\operatorname{Lie}U)^\Delta$. *Proof.* Clear. ◻ ## Lemma {#lem:symp-3} The $\Delta$-action on $\operatorname{Lie}U$ induces a classical $\Delta$-action on $H^i(\operatorname{Gal}_K, \operatorname{gr}^j\operatorname{Lie}U(A))$ for each $i$, $j$, and $A=\bar{\mathbb{F}}_p,~\bar\mathbb{Z}_p$. *Proof.* Clear by Equation ([\[eqn:symp\]](#eqn:symp){reference-type="ref" reference="eqn:symp"}). ◻ ## Corollary {#cor:symp-1} Let $\rho_M = \begin{bmatrix} \tau_a & & \\ & \tau_b & \\ & & \tau_c \end{bmatrix} :\operatorname{Gal}_K\to M_k(\bar\mathbb{Z}_p)$ be a Galois representation. The cup product $$H^1(\operatorname{Gal}_K, \operatorname{gr}^1\operatorname{Lie}U(\bar\mathbb{Z}_p))^\Delta\underset{\bar\mathbb{Z}_p}{\otimes}\bar{\mathbb{F}}_p \times H^1(\operatorname{Gal}_K, \operatorname{gr}^1\operatorname{Lie}U(\bar\mathbb{Z}_p))^\Delta\underset{\bar\mathbb{Z}_p}{\otimes}\bar{\mathbb{F}}_p \to H^2(\operatorname{Gal}_K, \operatorname{Mat}_{a\times c}(\bar{\mathbb{F}}_p))^\Delta$$ is non-trivial. 
*Proof.* By Corollary [6.7](#cor:unitary-cup-1){reference-type="ref" reference="cor:unitary-cup-1"} and Lemma [8.3](#lem:symp-3){reference-type="ref" reference="lem:symp-3"}, the cup product is non-trivial unless $K=\mathbb{Q}_p$, $k = 1$, and either $$\bar\tau_b = \bar\tau_a(-1)^{\oplus 2n-2},~\text{and}~\bar\tau_c = \bar\tau_a(-1)$$ or $$\bar\tau_b = \bar\tau_a^{\oplus 2n-2},~\text{and}~\bar\tau_c = \bar\tau_a(-1).$$ The symplecticity of $\rho_M$ implies $$\bar\tau_a\bar\tau_c = \lambda$$ and $$\bar\tau_b^t\Omega_k \bar\tau_b = \lambda \Omega_k$$ where $\lambda$ is the similitude character. Since $\bar\tau_b=\bar\tau_a(m)I_{2n-2}$ ($m=0,~-1$) is forced to be a scalar matrix, we have $$\bar\tau_a(m)^2=\lambda.$$ Thus $$\bar\tau_a^2(2m) = \bar\tau_a\bar\tau_c = \bar\tau_a^2(-1)$$ which implies $\bar{\mathbb{F}}_p(2m+1)=\bar{\mathbb{F}}_p$; as $m=0$ or $-1$, this forces $\bar{\mathbb{F}}_p(1)=\bar{\mathbb{F}}_p$, which contradicts the fact that $K=\mathbb{Q}_p$. ◻ ## Theorem {#thm:main-symplectic} Theorem [Theorem 3](#thm:main){reference-type="ref" reference="thm:main"} holds for ${^{L}\!G}_n=\operatorname{GSp}_{2n}$ and $\operatorname{Sp}_{2n}$. *Proof.* We will only treat $\operatorname{GSp}_{2n}$; for $\operatorname{Sp}_{2n}$, the reader can check that in the proof it is possible to ensure the similitude character is always $1$. If $\bar\rho$ is elliptic, then the statement is [@L23 Theorem C]. Suppose $\bar\rho$ is not elliptic; then by Lemma [8.1](#lem:symplectic-1){reference-type="ref" reference="lem:symplectic-1"}, there exists a parabolic $P_k$ through which $\bar\rho$ factors and such that $\bar\rho$ is a Heisenberg-type extension of some $\bar\rho_M:\operatorname{Gal}_F\to M_k(\bar{\mathbb{F}}_p)$. Write $$\bar\rho_M = \begin{bmatrix} \bar\rho_a & & \\ & \bar\rho_b & \\ & & \bar\rho_c \end{bmatrix}$$ By induction on the semisimple rank of $G_n$, we assume $\bar\rho_b$ admits a lift $\rho_b:\operatorname{Gal}_F\to \operatorname{GSp}_{2(n-k)}(\bar\mathbb{Z}_p)$ with similitude character $\mu$. 
Let $\rho_a:\operatorname{Gal}_F\to \operatorname{GL}_k(\bar\mathbb{Z}_p)$ be a crystalline lift of $\bar\rho_a$. Set $\rho_c:=\mu\rho_a^{-t}$. If $\lambda$ is a potentially crystalline character with trivial mod $p$ reduction, then $(\lambda\rho_a, \rho_b, \lambda^{-1}\rho_c)$ is a crystalline lift of $\bar\rho_M$. By choosing $\lambda=\bar\mathbb{Z}_p(n)$ with trivial reduction for $n$ sufficiently large, we may assume the Hodge-Tate weights of $\operatorname{Hom}(\rho_b, \rho_a)$, $\operatorname{Hom}(\rho_c, \rho_a)$ and $\operatorname{Hom}(\rho_c, \rho_b)$ are all positive integers $\ge 2$; in particular $H^1(\operatorname{Gal}_K, \operatorname{Lie}U(\bar\mathbb{Q}_p)) =H^1_{{\operatorname{crys}}}(\operatorname{Gal}_K, \operatorname{Lie}U(\bar\mathbb{Q}_p))$. By Theorem [Theorem 1](#thm:EG){reference-type="ref" reference="thm:EG"}, we can modify $\rho_b$ without changing its Hodge-Tate weights and reduction mod $p$ such that $\bar\rho$ admits a partial lift which is a partial extension of $(\rho_a, \rho_b, \rho_c)$. By Corollary [8.4](#cor:symp-1){reference-type="ref" reference="cor:symp-1"}, the theorem follows from Theorem [5.5](#thm:Galois){reference-type="ref" reference="thm:Galois"} and the main theorem of [@L23D]. ◻ # Example C: odd and even orthogonal groups   Let $G={\operatorname{GSO}}_{n}$ be the split form of orthogonal similitude groups. We have $K=F$. The Dynkin diagram for $G$ is of type $B_{(n-1)/2}$ ($n$ odd) or $D_{n/2}$ ($n$ even). Thus the maximal proper Levi subgroups of $G$ are of the form $$\operatorname{GL}_k \times {\operatorname{GSO}}_{n-2k},~\text{or}~\operatorname{GL}_{n/2}.$$ Set $$M_k := \begin{cases} \operatorname{GL}_k \times {\operatorname{GSO}}_{n-2k} & 2k < n\\ \operatorname{GL}_{n/2} & 2k=n \end{cases}$$ and write $P_k$ for the corresponding parabolic subgroup. 
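As in the symplectic case, the similitude condition in the presentation of ${\operatorname{GSO}}_n$ given below can be illustrated concretely. The following stdlib-only Python snippet is our own toy check (not part of the paper): for the symmetric form $S$ with $I_k$ corner blocks and $I_{n-2k}$ middle block, the diagonal element $\operatorname{diag}(A, c I_{n-2k}, \lambda A^{-t})$ with $\lambda = c^2$ satisfies $M^t S M = \lambda S$.

```python
# Toy check of the similitude condition in the presentation of GSO_n.
from fractions import Fraction as F

def mul(X, Y):
    return [[sum(X[i][t] * Y[t][j] for t in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def tr(X):
    return [list(r) for r in zip(*X)]

def scal(c, X):
    return [[c * v for v in r] for r in X]

n, k = 5, 1                       # GSO_5 with Levi GL_1 x GSO_3
S = [[F(v) for v in row] for row in
     [[0, 0, 0, 0, 1],
      [0, 1, 0, 0, 0],
      [0, 0, 1, 0, 0],
      [0, 0, 0, 1, 0],
      [1, 0, 0, 0, 0]]]           # I_k corners, I_{n-2k} middle
assert tr(S) == S                 # S is symmetric

a, c = F(2), F(3)
lam = c * c                       # g = c * I_{n-2k} satisfies g^t g = lam * I
M = [[F(0)] * n for _ in range(n)]
M[0][0] = a                       # the GL_k block A = (a)
for i in range(1, n - 1):
    M[i][i] = c
M[n - 1][n - 1] = lam / a         # lam * A^{-t}

assert mul(mul(tr(M), S), M) == scal(lam, S)   # similitude condition holds
```

The block sizes and the choice $g = cI$ are ours, made only to keep the example minimal; any $g$ with $g^t g = \lambda I_{n-2k}$ works the same way.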
We use the following presentation of ${\operatorname{GSO}}_{n}$: $${\operatorname{GSO}}_{n} = \{X\in \operatorname{GL}_{n}| X^t \begin{bmatrix} & & I_k \\ & I_{n-2k} & \\ I_k & & \end{bmatrix} X = \lambda \begin{bmatrix} & & I_k \\ & I_{n-2k} & \\ I_k & & \end{bmatrix} ~\text{for some}~\lambda \}$$ We have $$P_k = {\operatorname{GSO}}_{n} \cap \begin{bmatrix} \operatorname{GL}_k & \operatorname{Mat}_{k\times (n-2k)} & \operatorname{Mat}_{k \times k} \\ & \operatorname{GL}_{n-2k} & \operatorname{Mat}_{(n-2k)\times k} \\ & & \operatorname{GL}_k \end{bmatrix} =:{\operatorname{GSO}}_{n} \cap Q_k$$ where $Q_k$ is the corresponding parabolic of $\operatorname{GL}_{n}$. ## Lemma {#lem:orth-1} If $\bar\rho:\operatorname{Gal}_F \to {\operatorname{GSO}}_{n}(\bar{\mathbb{F}}_p)$ is not elliptic, then there exists a parabolic $P_k$ through which $\bar\rho$ factors and $\bar\rho$ is a Heisenberg-type extension of some $\bar\rho_M:\operatorname{Gal}_F \to M_k(\bar{\mathbb{F}}_p)$. *Proof.* The proof is completely similar to that of Lemma [8.1](#lem:symplectic-1){reference-type="ref" reference="lem:symplectic-1"}. ◻ Write $U$ and $V$ for the unipotent radicals of $Q_k$ and $P_k$, respectively. We have $$\operatorname{gr}^0 \operatorname{Lie}U = \operatorname{Mat}_{k\times k}$$ and $$\operatorname{gr}^1 \operatorname{Lie}U = \operatorname{Mat}_{k\times (n-2k)} \times \operatorname{Mat}_{(n-2k)\times k}$$ Define a $\Delta:=\{1, \j\}$-action on $\operatorname{Lie}U$ by $$\begin{aligned} \label{eqn:orth} \j (x, y) :=& (-y^t, - x^t),& (x, y)\in \operatorname{Mat}_{k\times (n-2k)} \times \operatorname{Mat}_{(n-2k)\times k}\\ \j z :=& -z^t,& z\in \operatorname{Mat}_{k\times k}.\end{aligned}$$ ## Lemma {#lem:orth-2} We have $\operatorname{Lie}V = (\operatorname{Lie}U)^\Delta$. *Proof.* Clear. 
◻ ## Lemma {#lem:orth-3} The $\Delta$-action on $\operatorname{Lie}U$ induces a classical $\Delta$-action on $H^i(\operatorname{Gal}_K, \operatorname{gr}^j\operatorname{Lie}U(A))$ for each $i$, $j$, and $A=\bar{\mathbb{F}}_p,~\bar\mathbb{Z}_p$. *Proof.* Clear by Equation ([\[eqn:orth\]](#eqn:orth){reference-type="ref" reference="eqn:orth"}). ◻ ## Corollary {#cor:orth-1} Let $\rho_M = \begin{bmatrix} \tau_a & & \\ & \tau_b & \\ & & \tau_c \end{bmatrix} :\operatorname{Gal}_K\to M_k(\bar\mathbb{Z}_p)$ be a Galois representation. The cup product $$H^1(\operatorname{Gal}_K, \operatorname{gr}^1\operatorname{Lie}U(\bar\mathbb{Z}_p))^\Delta\underset{\bar\mathbb{Z}_p}{\otimes}\bar{\mathbb{F}}_p \times H^1(\operatorname{Gal}_K, \operatorname{gr}^1\operatorname{Lie}U(\bar\mathbb{Z}_p))^\Delta\underset{\bar\mathbb{Z}_p}{\otimes}\bar{\mathbb{F}}_p \to H^2(\operatorname{Gal}_K, \operatorname{Mat}_{a\times c}(\bar{\mathbb{F}}_p))^\Delta$$ is non-trivial. *Proof.* By Corollary [6.7](#cor:unitary-cup-1){reference-type="ref" reference="cor:unitary-cup-1"} and Lemma [9.3](#lem:orth-3){reference-type="ref" reference="lem:orth-3"}, the cup product is non-trivial unless $K=\mathbb{Q}_p$, $k = 1$, and either $$\bar\tau_b = \bar\tau_a(-1)^{\oplus (n-2)},~\text{and}~\bar\tau_c = \bar\tau_a(-1)$$ or $$\bar\tau_b = \bar\tau_a^{\oplus (n-2)},~\text{and}~\bar\tau_c = \bar\tau_a(-1).$$ The orthogonality of $\rho_M$ implies $$\bar\tau_a\bar\tau_c = \lambda$$ and $$\bar\tau_b^t \bar\tau_b = \lambda I_{n-2}$$ where $\lambda$ is the similitude character. Since $\bar\tau_b=\bar\tau_a(m)I_{n-2}$ ($m=0,~-1$) is forced to be a scalar matrix, we have $$\bar\tau_a(m)^2=\lambda.$$ Thus $$\bar\tau_a^2(2m) = \bar\tau_a\bar\tau_c = \bar\tau_a^2(-1)$$ which implies $\bar{\mathbb{F}}_p(2m+1)=\bar{\mathbb{F}}_p$; as $m=0$ or $-1$, this forces $\bar{\mathbb{F}}_p(1)=\bar{\mathbb{F}}_p$, which contradicts the fact that $K=\mathbb{Q}_p$. 
◻ ## Theorem {#thm:main-orthogonal} Theorem [Theorem 3](#thm:main){reference-type="ref" reference="thm:main"} holds for ${^{L}\!G}_n={\operatorname{GSO}}_{n}$ and $\operatorname{SO}_n$. *Proof.* We will only treat ${\operatorname{GSO}}_n$; the reader can verify that in the proof it is possible to ensure the similitude character is always $1$. If $\bar\rho$ is elliptic, then the statement is [@L23 Theorem C]. Suppose $\bar\rho$ is not elliptic; then by Lemma [9.1](#lem:orth-1){reference-type="ref" reference="lem:orth-1"}, there exists a parabolic $P_k$ through which $\bar\rho$ factors and such that $\bar\rho$ is a Heisenberg-type extension of some $\bar\rho_M:\operatorname{Gal}_F\to M_k(\bar{\mathbb{F}}_p)$. Write $$\bar\rho_M = \begin{bmatrix} \bar\rho_a & & \\ & \bar\rho_b & \\ & & \bar\rho_c \end{bmatrix}$$ By induction on the semisimple rank of $G_n$, we assume $\bar\rho_b$ admits a lift $\rho_b:\operatorname{Gal}_F\to {\operatorname{GSO}}_{n-2k}(\bar\mathbb{Z}_p)$ with similitude character $\mu$. Let $\rho_a:\operatorname{Gal}_F\to \operatorname{GL}_k(\bar\mathbb{Z}_p)$ be a crystalline lift of $\bar\rho_a$. Set $\rho_c:=\mu\rho_a^{-t}$. If $\lambda$ is a potentially crystalline character with trivial mod $p$ reduction, then $(\lambda\rho_a, \rho_b, \lambda^{-1}\rho_c)$ is a crystalline lift of $\bar\rho_M$. By choosing $\lambda=\bar\mathbb{Z}_p(n)$ with trivial reduction for $n$ sufficiently large, we may assume the Hodge-Tate weights of $\operatorname{Hom}(\rho_b, \rho_a)$, $\operatorname{Hom}(\rho_c, \rho_a)$ and $\operatorname{Hom}(\rho_c, \rho_b)$ are all positive integers $\ge 2$; in particular $H^1(\operatorname{Gal}_K, \operatorname{Lie}U(\bar\mathbb{Q}_p)) =H^1_{{\operatorname{crys}}}(\operatorname{Gal}_K, \operatorname{Lie}U(\bar\mathbb{Q}_p))$. 
By Theorem [Theorem 1](#thm:EG){reference-type="ref" reference="thm:EG"}, we can modify $\rho_b$ without changing its Hodge-Tate weights and reduction mod $p$ such that $\bar\rho$ admits a partial lift which is a partial extension of $(\rho_a, \rho_b, \rho_c)$. By Corollary [9.4](#cor:orth-1){reference-type="ref" reference="cor:orth-1"}, the theorem follows from Theorem [5.5](#thm:Galois){reference-type="ref" reference="thm:Galois"} and the main theorem of [@L23D]. ◻ # The Emerton-Gee stacks for unitary groups   When we speak of > "the locus of ... in the moduli stack of something", we mean > "the scheme-theoretic closure of the scheme-theoretic image of all families of something whose $\bar{\mathbb{F}}_p$-points are of the form ... in the moduli stack of something". So a "locus" is technically always a closed substack. However, since we are interested in dimension analysis only, it is almost always harmless to replace a "locus" by its dense open substacks. When we speak of "the moduli of $L$-parameters", we always mean "the moduli of $(\varphi, \Gamma)$-modules" in the sense of [@L23B]. When we write $H^\bullet(\operatorname{Gal}_F, -)$, we always mean $(\varphi, \Gamma)$-cohomology (or the cohomology of the corresponding Herr complex). ## Theorem {#thm:EGU} Let $\bar \alpha:\operatorname{Gal}_K\to \operatorname{GL}_a(\bar{\mathbb{F}}_p)$ be an irreducible Galois representation. The locus of $\bar x\in \mathcal{X}_{F, {^{L}\!U}_n, {\operatorname{red}}}$ such that $\dim\operatorname{Hom}_{\operatorname{Gal}_K}(\bar \alpha, \bar x|_{\operatorname{Gal}_K})\ge r$ is of dimension at most $$d_{n,r}:=[F:\mathbb{Q}_p]\frac{n(n-1)}{2} - r^2+ \frac{r}{2}.$$ The whole section is devoted to the proof of Theorem [10.1](#thm:EGU){reference-type="ref" reference="thm:EGU"}. We denote by $\mathcal{X}^{\bar\alpha^{\oplus r}}_n$ the locus considered in Theorem [10.1](#thm:EGU){reference-type="ref" reference="thm:EGU"}. 
We will prove Theorem [10.1](#thm:EGU){reference-type="ref" reference="thm:EGU"} by induction on $n$, and assume it holds for $n' < n$ throughout the section. ## Involution of Galois representations Write $\Delta = \operatorname{Gal}(K/F) = \{1, \j\}$. For each irreducible representation $\beta: \operatorname{Gal}_K\to \operatorname{GL}_a(\bar{\mathbb{F}}_p)$, set $\theta(\beta):= \beta(\j^{-1} \circ - \circ \j)^{-t}$. Suppose $$\begin{bmatrix} \bar \alpha & & \\ & \bar \tau & \\ & & \bar\beta \end{bmatrix}\rtimes *$$ is an $L$-parameter $\operatorname{Gal}_F\to {^{L}\!M}_a$. If $\j$ acts on $\operatorname{GL}_n$ via $x\mapsto w x^{-t} w^{-1}$, then direct computation shows $$\bar\beta = b w\theta(\bar\alpha)w^{-1}b^{-1}$$ where $w$ is the longest Weyl group element and $b=\bar\beta(\j)\j^{-1}\in \operatorname{GL}_a(\bar{\mathbb{F}}_p)$. In particular, $\bar\beta$ is isomorphic to $\theta(\bar\alpha)$ as a $\operatorname{Gal}_K$-representation. ## Base case of the induction We first consider the elliptic locus of $\mathcal{X}_{n}^{\bar\alpha^{\oplus r}}$ (that is, the locus consisting of elliptic $L$-parameters). We only need to understand the case where $n=ra$. Let $x:\operatorname{Gal}_F\to {^{L}\!U}_n(\bar{\mathbb{F}}_p)$ be an $L$-parameter in the elliptic locus of $\mathcal{X}_{ra}^{\bar\alpha^{\oplus r}}$. Since $x|_{\operatorname{Gal}_K} = \begin{bmatrix} \bar\alpha(-1) & & \\ & \dots & \\ & & \bar \alpha(-1) \end{bmatrix}$, it is immediate that we must have $\bar\alpha(-1)\cong \theta(\bar\alpha(-1))$. Moreover, $x(\j)$ permutes all diagonal blocks of $x|_{\operatorname{Gal}_K}$, with all orbits of size at most $2$; it follows that $x(\j)w^{-1}\j^{-1}$ is a diagonal matrix. We have the following: ## Lemma {#lem:EGU-1} If $\bar\alpha(-1) \not\cong \theta(\bar\alpha(-1))$, the elliptic locus of $\mathcal{X}_{ra}^{\bar\alpha^{\oplus r}}$ is empty. 
If $\bar\alpha(-1) \cong \theta(\bar\alpha(-1))$, the elliptic locus of $\mathcal{X}_{ra}^{\bar\alpha^{\oplus r}}$ has dimension at most $ra-r^2$. Moreover, if $r=1$, then the elliptic locus has dimension $-1$. *Proof.* We can take $x(\j)w^{-1}\j^{-1}$ to be any diagonal matrix (which completely determines $x$), and we can quotient out the block diagonal with scalar block entries. So there exists a surjective map from $[\operatorname{G}_m^{\times n}/\operatorname{GL}_r]$ to the elliptic locus of $\mathcal{X}_{ra}^{\bar\alpha^{\oplus r}}$ (here $\operatorname{G}_m^{\times n}$ parameterizes all diagonal matrices and $\operatorname{GL}_r$ parameterizes block matrices with $r\times r$ scalar matrix blocks). For the moreover part, note that if $r=1$, then by Schur's lemma, $x(\j)w^{-1}\j^{-1}$ is a scalar matrix. Since $x(\j)^2=x(\j^2) = \bar\alpha(-1)(\j^2)$ is fixed, $x(\j)$ has at most $2$ choices while the automorphism group of $x$ includes all scalar matrices and is at least one-dimensional. Finally, we want to show $d_{ra, r}\ge ra-r^2$ for $r>1$. We can rewrite it as $\frac{a[F:\mathbb{Q}_p]}{2}(ra-1) + 1/2 \ge a$. If $a>1$, then $\frac{a[F:\mathbb{Q}_p]}{2}(ra-1)\ge \frac{3}{2}a>a$. If $a=1$, then $\frac{a[F:\mathbb{Q}_p]}{2}(ra-1)+1/2\ge 1/2+1/2=1$. ◻ ## The shape of $x\in \mathcal{X}_{n}^{\bar\alpha^{\oplus r}}$   If $\bar\alpha \not\cong \theta(\bar\alpha)$, then such $x$ is of the form $$\tag{$\dagger$} \begin{bmatrix} \bar \alpha(-1)^{\oplus r} & * & * \\ & \bar\tau & * \\ & & \theta(\bar\alpha(-1))^{\oplus r} \end{bmatrix}\rtimes *$$ where $\bar\tau\rtimes *$ is an $L$-parameter for $U_{n-2ar}$. 
If $\bar\alpha \cong \theta(\bar\alpha)$, then $x$ can also be of the form $$\tag{$\ddagger$} \begin{bmatrix} \bar\alpha(-1)^{\oplus k_1} & 0 & 0 & 0 & 0 & 0 & 0 \\ & \bar\alpha(-1)^{\oplus k_2} & * & 0 & * & * & 0 \\ & & \bar \tau_1 & 0 & * & * & 0\\ & & & \bar \tau & 0 & 0 & 0\\ & & & & \bar \tau_2 & * & 0\\ & & & & &\bar\alpha(-1)^{\oplus k_2} & 0 \\ & & & & &&\bar\alpha(-1)^{\oplus k_1} \end{bmatrix}\rtimes *$$ where $\bar\tau$ is an elliptic $L$-parameter for $U_{a k_3}$, with $2 k_1 + k_2 + k_3=r$. Intuitively, the dimension of the locus of ($\dagger$) should be larger than the dimension of the locus of ($\ddagger$) because there are more zero entries in ($\ddagger$); the next lemma partially confirms our intuition and allows us to consider only ($\dagger$). ## Lemma {#lem:EGU-2} \(1\) Either $\dim \mathcal{X}^{\bar\alpha^{\oplus r}}_n \le d_{n,r}$, or the dimension of $\mathcal{X}^{\bar\alpha^{\oplus r}}_n$ is equal to the dimension of the locus of $$\begin{bmatrix} \bar\alpha(-1)^{\oplus r} & * & * \\ & \bar \tau & * \\ & & \theta(\bar\alpha(-1))^{\oplus r} \end{bmatrix}\rtimes * ,$$ where $~\bar \tau\rtimes *$ is an $L$-parameter for $U_{n-2ar}$. \(2\) The semisimple locus of $\mathcal{X}_n^{\bar\alpha^{\oplus r}}$ has dimension at most $d_{n, r}$. *Proof.* (1) It is clear if $\theta(\bar\alpha(-1))\not\cong \bar\alpha(-1)$. So, suppose $\theta(\bar\alpha(-1)) \cong \bar\alpha(-1)$. Consider ($\ddagger$). There are two cases: $k_3=0$ and $k_3\ne 0$. We first assume $k_3=0$. Set $k=k_1$ and thus $k_2=r-2k$. 
Since $$\operatorname{Aut}_{\operatorname{Gal}_K}(\bar\alpha^{\oplus k})= \operatorname{Aut}_{\operatorname{Gal}_F}( \begin{bmatrix} \bar\alpha^{\oplus k} & \\ & \theta(\bar\alpha)^{\oplus k} \end{bmatrix}\rtimes * )$$ has dimension $k^2$, it remains to show $$\label{eq:u-1} -k^2 + d_{n-2ka, r-2k} \le d_{n, r},$$ which is equivalent to $$\label{eq:ue-1} [F:\mathbb{Q}_p]a (-2a k + 2n -1) - (4r -5k-1) \ge 0.$$ If $a=1$, then the derivative of the LHS of ([\[eq:ue-1\]](#eq:ue-1){reference-type="ref" reference="eq:ue-1"}) with respect to $k$ is positive. So we only need to consider the $k=k_{\min}=0$ case, which is clear. If $a>1$, then the derivative with respect to $k$ is negative, and we only need to consider the $k=k_{\max} = r$-case: $$[F:\mathbb{Q}_p]a(-2a r + 2n -1) \ge -r -1$$ whose LHS $>0$ and RHS $<0$. Next assume $k_3\ne 0$. Then $$\begin{bmatrix} \bar \alpha(-1)^{\oplus k_2} & * & * & * \\ & \bar\tau_1 & * & *\\ & & \bar \tau_2 & *\\ & & & \bar \alpha(-1)^{\oplus k_2} \end{bmatrix}\rtimes *$$ is an $L$-parameter for $U_{n-2k_1a-k_3a}$. By reusing the inequality ([\[eq:u-1\]](#eq:u-1){reference-type="ref" reference="eq:u-1"}), it suffices to show $$\label{eq:ue-2} d_{n-2k_1a-k_3a, k_2} + d_{k_3a, k_3} \le d_{n-2k_1a, r-2k_1}$$ We can set $$\begin{aligned} n' & = n-2k_1 a \\ r' & = r-2k_1 \\ k' & = k_2\end{aligned}$$ and rename $n', r', k'$ to $n, r, k$. By Lemma [10.4](#lem:EGU-1){reference-type="ref" reference="lem:EGU-1"}, ([\[eq:ue-2\]](#eq:ue-2){reference-type="ref" reference="eq:ue-2"}) becomes $$d_{n-(r-k)a, k} + (r-k)a - (r-k)^2 \le d_{n, r} \label{eq:ue-3}$$ Set $g(n, r, k) = d_{n, r} - (d_{n-(r-k)a, k} + (r-k)a - (r-k)^2)$. We have $\frac{\partial^2 g}{\partial r^2} = - [F:\mathbb{Q}_p] a^2/2<0$. Thus $g$ achieves minimum at the boundary points of the range for $r$. Since $k\le r \le \frac{n}{a}-k$ and $g|_{r=k}=0$, it suffices to show $f(n, k) = g(n, n/a-k, k)\ge 0$. We have $\frac{\partial^2 f}{\partial k^2} = 2(2- [F:\mathbb{Q}_p] a^2)$. 
When $2 \le [F:\mathbb{Q}_p] a^2$, $f$ achieves minimum at the boundary points of the range for $k$. Since $0\le k \le n/a$, $f|_{k=0}=\frac{n}{2a} + \frac{1}{2}n([F:\mathbb{Q}_p]n -[F:\mathbb{Q}_p]-2)>0$ (because $n\ge 3$), and $f|_{k=n/a} = 0$, ([\[eq:ue-3\]](#eq:ue-3){reference-type="ref" reference="eq:ue-3"}) holds. When $2 > [F:\mathbb{Q}_p] a^2$, we must have $a=1$ and $F=\mathbb{Q}_p$; now since $\frac{\partial f}{\partial k} = 4k -2n +2 <0$, $f_{\min} = f|_{k=n/a}=0$. So we are done. \(2\) It has been implicitly proved in the proof of part (1). ◻ To analyze the locus of $$\begin{bmatrix} \bar\alpha(-1)^{\oplus r} & * & * \\ & \bar \tau & * \\ & & \theta(\bar\alpha(-1))^{\oplus r} \end{bmatrix}\rtimes *,$$ we need to consider parabolic Emerton-Gee stacks. ## Parabolic Emerton-Gee stacks Let $A$ be a reduced finite type $\bar{\mathbb{F}}_p$-algebra. For any morphism $\operatorname{Spec}A\to \mathcal{X}_{{^{L}\!M}_{ra}, {\operatorname{red}}}$, there is always a scheme-theoretically surjective morphism $\operatorname{Spec}B\to \operatorname{Spec}A$ such that $\operatorname{Spec}B\to \mathcal{X}_{{^{L}\!M}_{ra}, {\operatorname{red}}}$ is a basic morphism (see [@L23B Lemma 10.1.1, Definition 10.1.2]). Here $B$ is also a reduced finite type $\bar{\mathbb{F}}_p$-algebra. By replacing $A$ by $B$, we assume $\operatorname{Spec}A \to \mathcal{X}_{{^{L}\!M}_{ra}, {\operatorname{red}}}$ is a basic morphism. Then $$\operatorname{Spec}A \times_{\mathcal{X}_{{^{L}\!M}_{ra}, {\operatorname{red}}}} \mathcal{X}_{{^{L}\!P}_{ra}}$$ is an algebraic stack ([@L23B Proposition 10.1.8]). Write $[U, U]$ for the derived subgroup of $U$, where $U$ is the unipotent radical of ${^{L}\!P}_{ra}$. Note that $[U, U] \cong \operatorname{Mat}_{ra\times ra}$, $U^{\operatorname{ab}}:=U/[U, U]\cong \operatorname{Mat}_{ra\times (n-2ra)}\oplus \operatorname{Mat}_{(n-2ra)\times ra}$. 
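The identification of $[U, U]$ with the corner block can be seen on a single commutator. The snippet below is our own stdlib-only illustration (the block sizes $p = ra$, $q = n-2ra$ are placeholder values): writing $u_i = \begin{bmatrix} I & x_i & z_i \\ & I & y_i \\ & & I\end{bmatrix}$, the commutator $u_1u_2u_1^{-1}u_2^{-1}$ has zero $x$- and $y$-blocks and corner block $x_1y_2 - x_2y_1 \in \operatorname{Mat}_{p\times p}$.

```python
# Commutators in a three-step block-unipotent group land in the corner block.
from fractions import Fraction as F

p, q = 2, 3          # p = ra, q = n - 2ra, so U sits inside GL_{2p+q} = GL_7

def mul(X, Y):
    return [[sum(X[i][t] * Y[t][j] for t in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def unip(x, y, z):   # [[I_p, x, z], [0, I_q, y], [0, 0, I_p]]
    n = 2 * p + q
    M = [[F(int(i == j)) for j in range(n)] for i in range(n)]
    for i in range(p):
        for j in range(q):
            M[i][p + j] = F(x[i][j])
        for j in range(p):
            M[i][p + q + j] = F(z[i][j])
    for i in range(q):
        for j in range(p):
            M[p + i][p + q + j] = F(y[i][j])
    return M

def inv_unip(u):     # (I + N)^{-1} = I - N + N^2 - ... for nilpotent N
    n = len(u)
    I = [[F(int(i == j)) for j in range(n)] for i in range(n)]
    N = [[u[i][j] - I[i][j] for j in range(n)] for i in range(n)]
    out = [row[:] for row in I]
    P = [row[:] for row in I]
    sign = 1
    for _ in range(n):
        P = mul(P, N)
        sign = -sign
        out = [[out[i][j] + sign * P[i][j] for j in range(n)] for i in range(n)]
    return out

x1 = [[1, 0, 2], [0, 1, 1]]; y1 = [[1, 0], [2, 1], [0, 3]]; z1 = [[0, 0], [0, 0]]
x2 = [[0, 1, 1], [1, 0, 0]]; y2 = [[0, 2], [1, 0], [1, 1]]; z2 = [[1, 0], [0, 1]]
u1, u2 = unip(x1, y1, z1), unip(x2, y2, z2)
comm = mul(mul(u1, u2), mul(inv_unip(u1), inv_unip(u2)))

expected = [[sum(x1[i][t] * y2[t][j] - x2[i][t] * y1[t][j] for t in range(q))
             for j in range(p)] for i in range(p)]
assert comm == unip([[0] * q for _ in range(p)], [[0] * p for _ in range(q)], expected)
```

The same computation done symbolically gives $[u_1, u_2] = u(0, 0, x_1y_2 - x_2y_1)$, which is the concrete form of $[U, U]\cong \operatorname{Mat}_{ra\times ra}$ and $U^{\operatorname{ab}}\cong \operatorname{Mat}_{ra\times (n-2ra)}\oplus \operatorname{Mat}_{(n-2ra)\times ra}$.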
For ease of notation, set $$X_A:=\operatorname{Spec}A,$$ $$Y_A:= \operatorname{Spec}A \times_{\mathcal{X}_{{^{L}\!M}_{ra}, {\operatorname{red}}}} \mathcal{X}_{{^{L}\!P}_{ra}/[U,U]},$$ and $$Z_A:= \operatorname{Spec}A \times_{\mathcal{X}_{{^{L}\!M}_{ra}, {\operatorname{red}}}} \mathcal{X}_{{^{L}\!P}_{ra}}.$$ We can regard $Y_A$ as a sheaf of groupoids over $X_A$. Denote by $Y_A^C$ the coarse moduli sheaf (of sets) of $Y_A$ over $X_A$. Then $Y_A^C$ is representable by a scheme, and is a vector bundle over $X_A$ of rank $\operatorname{rank}H^1(\operatorname{Gal}_F, U^{\operatorname{ab}}(A))$, and $$Y_A = [Y_A^C/H^0(\operatorname{Gal}_F, U^{\operatorname{ab}}(A))]$$ (see [@L23B Corollary 10.1.7]). In like manner, denote by $Z_A^C$ the coarse moduli sheaf of $Z_A$ over $X_A$. Then $Z_A^C$ is an affine bundle over $Y_A^C$ of rank $\operatorname{rank}H^1(\operatorname{Gal}_F, [U, U](A))$; and $$Z_A=[Z_A^C/H^0(\operatorname{Gal}_F, [U, U](A))\rtimes H^0(\operatorname{Gal}_F, U^{\operatorname{ab}}(A))].$$ Write $W_A$ for the scheme-theoretic image of $Z_A^C$ in $Y_A^C$. Indeed, the image of $Z_A^C$ in $Y_A^C$ is already closed and (the underlying set of) $W_A$ is the set-theoretic image. Assume the codimension of $W_A$ in $Y_A^C$ is $c$. We have $$\label{eq:u-2} \dim Z_A^C = \dim X_A - c + \operatorname{rank}H^1(\operatorname{Gal}_F, \operatorname{Lie}U(A))$$ by the discussion above. ## Lemma {#lem:EGU-3} Let $\operatorname{Spec}A$ be an irreducible, finite type $\bar{\mathbb{F}}_p$-variety. Let $\operatorname{Spec}A\to \mathcal{X}_{F, {^{L}\!M}_{ra}, {\operatorname{red}}}$ be a basic morphism of finite type such that each $x\in \operatorname{Spec}A(\bar{\mathbb{F}}_p)$ corresponds to an $L$-parameter of the form $$\begin{bmatrix} \bar\alpha(-1)^{\oplus r} & & \\ & \bar \tau & \\ & & \theta(\bar\alpha(-1))^{\oplus r} \end{bmatrix}\rtimes *$$ such that $\bar\tau \in \mathcal{X}_{n-2ra}^{\bar\alpha(-1)^{\oplus s}}\backslash\mathcal{X}_{n-2ra}^{\bar\alpha(-1)^{\oplus (s+1)}}$. 
Assume the scheme-theoretic image of $f:X_A \to \mathcal{X}_{F, {^{L}\!M}_{ra}, {\operatorname{red}}}$ has dimension $d_X$, and the scheme-theoretic image of $g:Z_A\to\mathcal{X}_{F, {^{L}\!U}_n, {\operatorname{red}}}$ has dimension $d_Z$. Then $$d_Z \le d_X + r s - c + [F:\mathbb{Q}_p] (2nra -3r^2a^2) + \begin{cases} r^2 & \theta(\bar \alpha(-1)) = \bar\alpha(-2) \\ 0 & \theta(\bar \alpha(-1)) \ne \bar\alpha(-2). \end{cases}$$ We also have $d_X \le d_{n-2ra, s} - r^2$. *Proof.* Note that - $\operatorname{rank}H^2(\operatorname{Gal}_F, [U, U](A))\le \begin{cases} r^2 & \theta(\bar \alpha(-1)) = \bar\alpha(-2) \\ 0 & \theta(\bar \alpha(-1)) \ne \bar\alpha(-2). \end{cases}$, - $\operatorname{rank}H^2(\operatorname{Gal}_F, U^{\operatorname{ab}}(A)) \le rs$ (see the sublemma below), and - $\dim U = 2ra(n-2ra)+r^2a^2=2nra-3r^2a^2$. **Sublemma** $\operatorname{rank}H^2(\operatorname{Gal}_F, U^{\operatorname{ab}}(A)) \le rs$. *Proof.* It is clear that $\operatorname{rank}H^2(\operatorname{Gal}_K, \operatorname{Mat}_{ra\times(n-2ra)}(A)) = \operatorname{rank}H^2(\operatorname{Gal}_K, \operatorname{Mat}_{(n-2ra)\times ra}(A)) \le rs$. Note that $$H^2(\operatorname{Gal}_K, U^{\operatorname{ab}}(A)) = H^2(\operatorname{Gal}_K, \operatorname{Mat}_{ra\times(n-2ra)}(A)) \oplus H^2(\operatorname{Gal}_K, \operatorname{Mat}_{(n-2ra)\times ra}(A)),$$ and the $\operatorname{Gal}(K/F)$-action swaps the two direct summands (Lemma [7.3](#lem:unitary-cup-2){reference-type="ref" reference="lem:unitary-cup-2"}). 
In particular, $\operatorname{rank}H^2(\operatorname{Gal}_K, U^{\operatorname{ab}}(A))^{\operatorname{Gal}(K/F)} \le rs.$ ◻ By the local Euler characteristic $$\operatorname{rank}H^0(\operatorname{Gal}_F, \operatorname{Lie}U(A)) - \operatorname{rank}H^1(\operatorname{Gal}_F, \operatorname{Lie}U(A)) + \operatorname{rank}H^2(\operatorname{Gal}_F, \operatorname{Lie}U(A)) = -[F:\mathbb{Q}_p]\dim U$$ we have $$\begin{aligned} \operatorname{rank}H^1(\operatorname{Gal}_F, \operatorname{Lie}U(A)) &\le [F:\mathbb{Q}_p](2nra-3r^2a^2) + \begin{cases} r^2 & \theta(\bar \alpha(-1)) = \bar\alpha(-2) \\ 0 & \theta(\bar \alpha(-1)) \ne \bar\alpha(-2) \end{cases}\\ &\hspace{5mm} +rs + \operatorname{rank}H^0(\operatorname{Gal}_F, \operatorname{Lie}U(A)).\end{aligned}$$ By Equation ([\[eq:u-2\]](#eq:u-2){reference-type="ref" reference="eq:u-2"}), it suffices to show $$\label{eq:u-3} d_Z -d_X \le \dim Z_A^C - \dim X_A - \operatorname{rank}H^0(\operatorname{Gal}_F, \operatorname{Lie}U(A)).$$ Let $W_A'\subset W_A$ be an irreducible component of largest dimension (see [@Stacks 0DR4] if the reader is not familiar with irreducible components of algebraic stacks). Set $(Z_A^C)':= Z_A^C\times_{W_A}W_A'$. We have $\dim Z_A^C = \dim (Z_A^C)'$ since $Z_A^C\to W_A$ is an affine bundle. Moreover, since $W_A$ has only finitely many irreducible components, we can assume $d_Z$ is the dimension of the scheme-theoretic image of $(Z_A^C)'$ in $\mathcal{X}_{F, {^{L}\!U}_n, {\operatorname{red}}}$. So, after suitable replacements, it is harmless to assume $W_A, X_A$ and $Z_A^C$ are irreducible varieties when proving ([\[eq:u-3\]](#eq:u-3){reference-type="ref" reference="eq:u-3"}). 
Now we can invoke [@Stacks Lemma Tag 0DS4]: after replacing $X_A$ by a dense open (which does not change any quantity in ([\[eq:u-3\]](#eq:u-3){reference-type="ref" reference="eq:u-3"}) by the irreducibility of $X_A$), we can assume $\dim_t (X_A)_{f(t)} = \dim X_A - d_X$ for all $t\in W_A(\bar{\mathbb{F}}_p)$; similarly, after replacing $Z_A^C$ by a dense open, we can assume $\dim_x (Z_A^C)_{g(x)} = \dim Z_A^C - d_Z$ for all $x\in Z_A^C(\bar{\mathbb{F}}_p)$. Label the projection $Z_A^C\to W_A$ by $\pi$. Now ([\[eq:u-3\]](#eq:u-3){reference-type="ref" reference="eq:u-3"}) becomes $$\dim_{\pi(x)} (X_A)_{f(\pi(x))} \le \dim_x (Z_A^C)_{x} - \operatorname{rank}H^0(\operatorname{Gal}_F, \operatorname{Lie}U(A)).$$ Denote by $G_{\pi(x)}\subset (\widehat{M_{ra}})_{\bar{\mathbb{F}}_p}$ and $G_{x}\subset (\widehat{U_n})_{\bar{\mathbb{F}}_p}$ the automorphism groups of the $L$-parameters corresponding to $\pi(x)$ and $x$, respectively. Note that the immersion $[\operatorname{Spec}\bar{\mathbb{F}}_p/G_{\pi(x)}]\hookrightarrow \mathcal{X}_{F, {^{L}\!M}_{ra}, {\operatorname{red}}}$ induces an immersion $[(X_A)_{f(\pi(x))}/G_{\pi(x)}]\hookrightarrow X_A$. Similarly, we have an immersion $[(Z_A^C)_x/G_x]\hookrightarrow Z_A^C$. Note that the image of the composite $[(Z_A^C)_x/G_x]\hookrightarrow Z_A^C\to X_A$ contains the image of $[(X_A)_{f(\pi(x))}/G_{\pi(x)}]\hookrightarrow X_A$ (by, for example, the moduli interpretation), and therefore $\dim [(Z_A^C)_x/G_x] \ge \dim [(X_A)_{f(\pi(x))}/G_{\pi(x)}]$. Since $$\begin{aligned} \dim [(Z_A^C)_x/G_x] & = \dim (Z_A^C)_x - \dim G_x \\ \dim [(X_A)_{f(\pi(x))}/G_{\pi(x)}] &= \dim (X_A)_{f(\pi(x))} - \dim G_{\pi(x)},\end{aligned}$$ it remains to show $$\dim G_{\pi(x)} \le \dim G_x - \operatorname{rank}H^0(\operatorname{Gal}_F, \operatorname{Lie}U(A)),$$ which is clear since $G_{\pi(x)} \rtimes (H^0(\operatorname{Gal}_F, [U,U]) \rtimes H^0(\operatorname{Gal}_F, U^{\operatorname{ab}})) \subset G_x$. 
Finally, $d_X \le d_{n-2ra, s} - r^2$ is clear since $$\operatorname{Aut}_{\operatorname{Gal}_K}(\bar\alpha^{\oplus r})= \operatorname{Aut}_{\operatorname{Gal}_F}( \begin{bmatrix} \bar\alpha^{\oplus r} & \\ & \theta(\bar\alpha)^{\oplus r} \end{bmatrix}\rtimes * )$$ has dimension $r^2$. ◻ ## Initial estimates To prove Theorem [10.1](#thm:EGU){reference-type="ref" reference="thm:EGU"}, we want to show $d_Z \le d_{n, r}$ for all $0\le s \le \frac{n-2ra}{2a}$. By Lemma [10.8](#lem:EGU-3){reference-type="ref" reference="lem:EGU-3"}, it suffices to prove $$\label{eq:un-1} (d_{n-2ra, s} - r^2) + r^2 + r s - c + [F:\mathbb{Q}_p] (2nra -3r^2a^2) \le d_{n, r}.$$ for all $0\le s \le \frac{n-2ra}{2a}$. Expanding ([\[eq:un-1\]](#eq:un-1){reference-type="ref" reference="eq:un-1"}), we get $$\label{eq:un-2} r^2 + r s -s^2 +\frac{s-r}{2} \le [F:\mathbb{Q}_p] (a^2r^2-ar) +c.$$ Regarding the LHS of ([\[eq:un-2\]](#eq:un-2){reference-type="ref" reference="eq:un-2"}) as a quadratic polynomial in $s$, it achieves maximum at $s=r/2+1/4$; since $s$ only takes integral values, we only need to prove ([\[eq:un-2\]](#eq:un-2){reference-type="ref" reference="eq:un-2"}) for $s=r/2$: $$\frac{5}{4} r^2 -\frac{1}{4}r \le [F:\mathbb{Q}_p] (a^2r^2-ar) + c \label{eq:un-3}$$ Using the trivial estimate $c\ge 0$, we only need to prove $$\frac{5}{4} r -\frac{1}{4} \le [F:\mathbb{Q}_p] (a^2r-a) \label{eq:un-4}$$ which clearly holds when $a\ge 2$. ## Lemma {#lemma-1} (The codimension lemma) [\[lem:codim-lem\]]{#lem:codim-lem label="lem:codim-lem"} Write $h$ for $\operatorname{rank}H^1(\operatorname{Gal}_K, \bar\tau \otimes \theta(\bar\alpha(-1))^\vee)$. Assume - $H^2(\operatorname{Gal}_K, [U, U](A))\ne 0$ and - $a=1$. Then $$c\ge c(r, h):=\min_{1\le k \le r}\frac{1}{2}(k^2+hr-hk) = \begin{cases} r^2/2 & h > 2r \\ (hr-h^2/4)/2 & h\le 2r. \end{cases}$$ Note that the minimum of $C=\min_{1\le k \le r}\frac{1}{2}(k^2+hr-hk)$ is achieved at either $k=r$ or $k=h/2$. When $k=r$, $C=r^2/2$. 
When $k=h/2$, $C=\frac{1}{2}(hr - h^2/4)\le r^2/2$. We will postpone the proof of the codimension lemma to the end of this section. Next, we prove that the codimension lemma implies Theorem [10.1](#thm:EGU){reference-type="ref" reference="thm:EGU"}. *Proof of Theorem [10.1](#thm:EGU){reference-type="ref" reference="thm:EGU"}.* We've already settled the $a>1$ case. So, assume $a=1$. If $h > 2r$, after plugging the codimension lemma [\[lem:codim-lem\]](#lem:codim-lem){reference-type="ref" reference="lem:codim-lem"} into ([\[eq:un-3\]](#eq:un-3){reference-type="ref" reference="eq:un-3"}), we only need to prove $$\frac{5}{4} r^2 - \lceil \frac{1}{2}r^2\rceil -\frac{1}{4}r \le [F:\mathbb{Q}_p] (a^2r^2-ar) \label{eq:un-4a}$$ which holds for all integers $r$ and $a$. We remark that the inequality ([\[eq:un-4a\]](#eq:un-4a){reference-type="ref" reference="eq:un-4a"}) is tight, and it achieves equality when $r=1$, $a=1$, and $[F:\mathbb{Q}_p]$ arbitrary. Suppose $h\le 2r$. If $n\le 3r$, then the LHS of ([\[eq:un-2\]](#eq:un-2){reference-type="ref" reference="eq:un-2"}) achieves its maximum at $s = \frac{n-2r}{2}$. So we need to prove $$\lfloor r^2 + r\frac{n-2r}{2} - (\frac{n-2r}{2})^2 + \frac{n-2r-r}{2} -\frac{h r- h^2/4}{2} \rfloor \le [F:\mathbb{Q}_p](r^2-r) \label{eq:un-5}$$ because the dimension only takes integral values. By the local Euler characteristic, $2r \ge h \ge n-2r+1$; and thus the LHS of ([\[eq:un-5\]](#eq:un-5){reference-type="ref" reference="eq:un-5"}) achieves its maximum at $h=n-2r+1$. So we get $$\lfloor -n^2/8 + nr/2 + r^2/2 + n/2 -2r +1/8 \rfloor \le [F:\mathbb{Q}_p](r^2-r), \label{eq:un-6}$$ whose LHS achieves maximum at the critical point $n=2r+2$: $$\lfloor r^2-r + 5/8 \rfloor \le [F:\mathbb{Q}_p](r^2-r) \label{eq:un-7}$$ which clearly holds. If $n> 3r$, then the LHS of ([\[eq:un-2\]](#eq:un-2){reference-type="ref" reference="eq:un-2"}) achieves its maximum at $s = r/2$. 
So we need to prove $$\lfloor \frac{5}{4}r^2 - \frac{1}{4}r - \frac{hr-h^2/4}{2} \rfloor \le [F:\mathbb{Q}_p](r^2-r). \label{eq:un-8}$$ By the local Euler characteristic, $2r \ge h \ge n-2r+1$; and thus the LHS of ([\[eq:un-8\]](#eq:un-8){reference-type="ref" reference="eq:un-8"}) achieves its maximum at $h=n-2r+1$. So we get $$\lfloor n^2/8 -n r +11 r^2/4+n/4-5r/4+1/8 \rfloor \le [F:\mathbb{Q}_p](r^2-r), \label{eq:un-9}$$ whose LHS achieves its maximum at the boundary point $n = 3r + 1$: $$\lfloor \frac{7}{8}r^2 -\frac{3}{4}r + 1/2 \rfloor \le [F:\mathbb{Q}_p](r^2-r), \label{eq:un-10}$$ which clearly holds. ◻ Finally, we turn to the codimension lemma. ## Lemma {#lem:ag-1} Let $\kappa$ be a field. Let $f_1(\mathbf{x}), \dots, f_m(\mathbf{x})\in \kappa[\mathbf{x}]:=\kappa[x_1,\dots,x_n]$ be homogeneous polynomials in $n$ variables. Define $$X = \operatorname{Spec}\kappa[\mathbf{x}]/(f_1(\mathbf{x}), \dots f_m(\mathbf{x}))$$ and $$Y = \operatorname{Spec}\kappa[\mathbf{x}, \mathbf{y}]/(f_1(\mathbf{x})-f_1(\mathbf{y}), \dots, f_m(\mathbf{x})-f_m(\mathbf{y})).$$ Then $\dim X\le \frac{1}{2}\dim Y$. Equivalently, the codimension of $X$ in $\operatorname{Spec}\kappa[\mathbf{x}]$ is at least half of the codimension of $Y$ in $\operatorname{Spec}\kappa[\mathbf{x}, \mathbf{y}]$. *Proof.* There exists an embedding $X\times X \hookrightarrow Y$, $(\mathbf{x}, \mathbf{x}')\mapsto (\mathbf{x}, \mathbf{y}) = (\mathbf{x}, \mathbf{x}')$. ◻ We remark that the inequality in Lemma [10.11](#lem:ag-1){reference-type="ref" reference="lem:ag-1"} is sharp. If $X = \operatorname{Spec}\mathbb{F}[x_1, x_2, x_3]/(x_1x_2, x_1x_3)$ and $Y = \operatorname{Spec}\mathbb{F}[x_1, x_2, x_3, y_1, y_2, y_3]/(x_1x_2-y_1y_2, x_1x_3-y_1y_3)$, then $\dim X = 2$ and $\dim Y=4$. ## Lemma {#lem:ag-2} Let $Y\to X$ be a morphism of finite type schemes over a field. Then there exists a point $x:\operatorname{Spec}\kappa\to X$ such that $\dim Y -\dim X\le \dim Y\times_{X, x}\operatorname{Spec}\kappa$. 
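The polynomial substitutions above can be double-checked mechanically. The following standalone sketch (using exact rational arithmetic; the helper names are ours, not from the paper) evaluates the quadratic-in-$s$ left-hand side at $s=(n-2r)/2$ and $s=r/2$ with $h=n-2r+1$, and checks the values at the points $n=2r+2$ and $n=3r+1$:

```python
from fractions import Fraction as F

def lhs_quadratic(r, s):
    # quadratic-in-s left-hand side (floors ignored): r^2 + rs - s^2 + (s-r)/2
    return F(r*r) + r*s - s*s + (s - r)/F(2)

def with_h(val, r, n):
    # subtract (hr - h^2/4)/2 at h = n - 2r + 1
    h = n - 2*r + 1
    return val - (F(h*r) - F(h*h, 4))/F(2)

for r in range(1, 16):
    # the quadratic is maximized near s = r/2 + 1/4 over the integers
    peak = lhs_quadratic(r, F(2*r + 1, 4))
    assert all(lhs_quadratic(r, F(s)) <= peak for s in range(0, 2*r + 2))
    for n in range(2*r + 1, 6*r + 2):
        # case n <= 3r: substituting s = (n-2r)/2 gives the quadratic in n
        lhs6 = with_h(lhs_quadratic(r, F(n - 2*r, 2)), r, n)
        assert lhs6 == -F(n*n, 8) + F(n*r, 2) + F(r*r, 2) + F(n, 2) - 2*r + F(1, 8)
        # case n > 3r: substituting s = r/2
        lhs9 = with_h(lhs_quadratic(r, F(r, 2)), r, n)
        assert lhs9 == F(n*n, 8) - n*r + F(11*r*r, 4) + F(n, 4) - F(5*r, 4) + F(1, 8)
    # values at the critical/boundary points n = 2r+2 and n = 3r+1
    assert with_h(lhs_quadratic(r, F(1)), r, 2*r + 2) == F(r*r) - r + F(5, 8)
    assert with_h(lhs_quadratic(r, F(r, 2)), r, 3*r + 1) == F(7*r*r, 8) - F(3*r, 4) + F(1, 2)
```

The identities hold exactly for every $(n, r)$; the grid above is only a finite spot-check.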
In particular, if $Y'\subset Y$ is a closed subscheme, then the codimension of $Y'$ in $Y$ is at least as large as the largest codimension of $Y'\times_{X, x}\operatorname{Spec}\kappa$ in $Y\times_{X, x}\operatorname{Spec}\kappa$ for some point $x$. *Proof.* It is a standard fact. See, for example, [@Stacks Tag 0DS4]. ◻ Recall that $$H^1(\operatorname{Gal}_K, U^{\operatorname{ab}}(A)) = H^1(\operatorname{Gal}_K, \operatorname{Mat}_{ra\times(n-2ra)}(A)) \oplus H^1(\operatorname{Gal}_K, \operatorname{Mat}_{(n-2ra)\times ra}(A)),$$ and the $\operatorname{Gal}(K/F)=\{1, \j\}$-action swaps the two direct summands. We can also decompose $H^1(\operatorname{Gal}_K, U^{\operatorname{ab}}(A))$ according to the eigenvalues of $\j$: $$H^1(\operatorname{Gal}_K, U^{\operatorname{ab}}(A)) = H^1(\operatorname{Gal}_K, U^{\operatorname{ab}}(A))^+ \oplus H^1(\operatorname{Gal}_K, U^{\operatorname{ab}}(A))^-$$ where $$\begin{aligned} H^1(\operatorname{Gal}_K, U^{\operatorname{ab}}(A))^+ & = \{x^+:= (x, \j x)\} =H^1(\operatorname{Gal}_F, U^{\operatorname{ab}}(A))\\ H^1(\operatorname{Gal}_K, U^{\operatorname{ab}}(A))^- & = \{x^-:= (x, -\j x)\}\end{aligned}$$ for $x\in H^1(\operatorname{Gal}_K, \operatorname{Mat}_{ra\times(n-2ra)}(A))$. There exists a bijection $$H^1(\operatorname{Gal}_K, U^{\operatorname{ab}}(A))^+ \xrightarrow{x^+\mapsto x^-} H^1(\operatorname{Gal}_K, U^{\operatorname{ab}}(A))^-.$$ Note that $$\begin{aligned} x^+\cup x^+ &= 2(x, 0)\cup(0, \j x) \\ x^-\cup x^- &= -2(x, 0)\cup(0, \j x)\end{aligned}$$ and thus $x^+\cup x^+ = -x^-\cup x^-$. The upshot is that there exists an isomorphism respecting cup products $$H^1(\operatorname{Gal}_K, U^{\operatorname{ab}}(A)) \to H^1(\operatorname{Gal}_F, U^{\operatorname{ab}}(A))\oplus H^1(\operatorname{Gal}_F, U^{\operatorname{ab}}(A));$$ here we define cup products on the RHS by $(a, b)\cup(c, d) = a\cup c - b \cup d$. 
*Proof of the codimension lemma [\[lem:codim-lem\]](#lem:codim-lem){reference-type="ref" reference="lem:codim-lem"}.* By Lemma [10.11](#lem:ag-1){reference-type="ref" reference="lem:ag-1"}, Lemma [10.12](#lem:ag-2){reference-type="ref" reference="lem:ag-2"} and the discussion above, it suffices to show for each $\bar{\mathbb{F}}_p$-point of $A$, the codimension of the locus $W^C:=\{x\in H^1(\operatorname{Gal}_K, U^{\operatorname{ab}}(\bar{\mathbb{F}}_p))|x\cup x = 0\}$ in $H^1(\operatorname{Gal}_K, U^{\operatorname{ab}}(\bar{\mathbb{F}}_p))$ (when regarded as a vector bundle over $\operatorname{Spec}\bar{\mathbb{F}}_p$) is at least $2 c(r, h)$; as it forces the codimension of $\{x\in H^1(\operatorname{Gal}_F, U^{\operatorname{ab}}(\bar{\mathbb{F}}_p))|x\cup x = 0\}$ in $H^1(\operatorname{Gal}_F, U^{\operatorname{ab}}(\bar{\mathbb{F}}_p))$ to be at least $c(r, h)$. Consider all extensions of $\operatorname{Gal}_K$-modules $$\begin{bmatrix} \bar \alpha(-1)^{\oplus r} & * & * & * & * \\ & \bar \alpha(-2)^{\oplus s} & ? & ? & * \\ & & ? & ? & * \\ & & & \bar \alpha(-1)^{\oplus s} & * \\ & & & & \bar \alpha(-2)^{\oplus r} \end{bmatrix} =: \begin{bmatrix} \bar \alpha(-1)^{\oplus r} & * & * \\ & \bar\tau & * \\ & & \bar \alpha(-2)^{\oplus r} \end{bmatrix} =: \begin{bmatrix} \bar \alpha(-1)^{\oplus r} & * \\ & \bar \eta \end{bmatrix} =:\bar\rho$$ where $?$ means fixed and $*$ means undetermined. The coarse moduli space $Y^C$ of all extensions modulo $[U, U]$ is the vector space $H^1(\operatorname{Gal}_K, U^{\operatorname{ab}}(\bar{\mathbb{F}}_p))$. There is another way to think about extensions. We can first extend $\bar\alpha(-1)^{\oplus r}\oplus \bar\tau\oplus \bar\alpha(-2)^{\oplus r}$ to $\bar\alpha(-1)^{\oplus r}\oplus\bar\eta$, and then extend $\bar\alpha(-1)^{\oplus r}\oplus\bar\eta$ to $\bar\rho$. The coarse moduli space $T^C$ of all extensions $\bar\eta$ is the vector space $H^1(\operatorname{Gal}_K, \bar\tau\otimes \bar\alpha(-2)^{\oplus r\vee})$. 
Denote by $T^C_k\subset T^C$ the subvariety consisting of $\bar\eta$ such that $$\dim H^2(\operatorname{Gal}_K, \bar\alpha(-1)\otimes \bar\eta^\vee) - \dim H^2(\operatorname{Gal}_K, \bar\alpha(-1)\otimes \bar\tau^\vee) = r-k.$$ Write $Z^C$ for the coarse moduli space of all extensions $\bar\rho$. Set $Z^C_k := Z^C \times_{T^C}T^C_k$. If $\bar\eta = \begin{bmatrix} \bar\tau & * \\ & \bar\alpha(-2)^{\oplus r} \end{bmatrix}$ lies in $T^C_k$, then the column space of $*$ is $k$-dimensional. To specify a point of $T^C_k$ is the same as specifying a point of the Grassmannian ${\operatorname{Gr}}(k, r)$ and a point of $H^1(\operatorname{Gal}_K, \bar\tau\otimes \bar\alpha(-2)^{\oplus k\vee})$: $$\begin{aligned} \dim T^C_k &\le\dim {\operatorname{Gr}}(k, r) + k h \\ &=k(r-k)+ kh \\ &=\dim H^1(\operatorname{Gal}_K, \bar\tau\otimes \bar\alpha(-2)^{\oplus r\vee}) -rh + k(r-k)+ kh.\end{aligned}$$ Note that there exists a stratification of locally closed subvarieties of $T^C_k$ such that $H^\bullet(\operatorname{Gal}_K, \bar\alpha(-1)^{\oplus r} \otimes \bar\eta^\vee)$ has constant dimension over each stratum. 
After replacing $T^C_k$ by the disjoint union of its strata, $Z^C_k$ is a vector bundle over $T^C_k$ of rank $$\begin{aligned} \dim H^1(\operatorname{Gal}_K, \bar\alpha(-1)^{\oplus r} \otimes \bar\eta^\vee) &= [K:\mathbb{Q}_p]r(n-2r) + \dim H^0(\operatorname{Gal}_K, \bar\alpha(-1)^{\oplus r} \otimes \bar\eta^\vee)\\ &\hspace{10mm} +\dim H^2(\operatorname{Gal}_K, \bar\alpha(-1)^{\oplus r} \otimes \bar\eta^\vee) \\ &\le \dim H^1(\operatorname{Gal}_K, \bar\alpha(-1)^{\oplus r} \otimes \bar\tau^\vee) + \dim H^1(\operatorname{Gal}_K, \bar\alpha(-1)^{\oplus r} \otimes \bar\alpha(-2)^{\oplus r\vee})\\ &\hspace{10mm} +\dim H^2(\operatorname{Gal}_K, \bar\alpha(-1)^{\oplus r} \otimes \bar\eta^\vee) -\dim H^2(\operatorname{Gal}_K, \bar\alpha(-1)^{\oplus r} \otimes \bar\tau^\vee)\\ &\hspace{10mm} -\dim H^2(\operatorname{Gal}_K, \bar\alpha(-1)^{\oplus r} \otimes \bar\alpha(-2)^{\oplus r\vee}) \\ &=\dim H^1(\operatorname{Gal}_K, \bar\alpha(-1)^{\oplus r} \otimes \bar\tau^\vee) + \dim H^1(\operatorname{Gal}_K, \bar\alpha(-1)^{\oplus r} \otimes \bar\alpha(-2)^{\oplus r\vee})\\ &\hspace{10mm} + r(r-k)-r^2 \\ &=\dim H^1(\operatorname{Gal}_K, \bar\alpha(-1)^{\oplus r} \otimes \bar\tau^\vee) + \dim H^1(\operatorname{Gal}_K, \bar\alpha(-1)^{\oplus r} \otimes \bar\alpha(-2)^{\oplus r\vee}) -kr.\end{aligned}$$ Therefore $$\begin{aligned} \dim Z^C_k &= \dim T^C_k + \dim H^1(\operatorname{Gal}_K, \bar\alpha(-1)^{\oplus r} \otimes \bar\eta^\vee) \\ & \le \dim H^1(\operatorname{Gal}_K, \bar\tau\otimes \bar\alpha(-2)^{\oplus r\vee}) -rh + k(r-k)+ kh \\ &\hspace{10mm} + \dim H^1(\operatorname{Gal}_K, \bar\alpha(-1)^{\oplus r} \otimes \bar\tau^\vee) + \dim H^1(\operatorname{Gal}_K, \bar\alpha(-1)^{\oplus r} \otimes \bar\alpha(-2)^{\oplus r\vee}) -kr\\ &= \dim H^1(\operatorname{Gal}_K, \operatorname{Lie}U(\bar{\mathbb{F}}_p)) -rh + k(r-k)+kh -kr \\ &= \dim H^1(\operatorname{Gal}_K, \operatorname{Lie}U(\bar{\mathbb{F}}_p)) -rh -k^2+kh.\end{aligned}$$ Since $Z^C$ is the union of $Z^C_k$, we have $$\begin{aligned} 
\dim Z^C &\le \dim H^1(\operatorname{Gal}_K, \operatorname{Lie}U(\bar{\mathbb{F}}_p)) -\min_k(rh +k^2 - kh)\\ &=\dim Y^C + \dim H^1(\operatorname{Gal}_K, [U, U](\bar{\mathbb{F}}_p)) -\min_k(rh +k^2 - kh).\end{aligned}$$ On the other hand, $Z^C$ is an affine bundle over $W^C$ of rank $\dim H^1(\operatorname{Gal}_K, [U, U](\bar{\mathbb{F}}_p))$. Thus $$\dim Z^C \ge \dim W^C + \dim H^1(\operatorname{Gal}_K, [U, U](\bar{\mathbb{F}}_p)).$$ Finally, $$\begin{aligned} \dim Y^C -\dim W^C &\ge \dim Y^C - (\dim Z^C - \dim H^1(\operatorname{Gal}_K, [U, U](\bar{\mathbb{F}}_p)))\\ &\ge \dim Y^C - (\dim Y^C + \dim H^1(\operatorname{Gal}_K, [U, U](\bar{\mathbb{F}}_p)) -\min_k(rh +k^2 - kh)\\ &\hspace{10mm}- \dim H^1(\operatorname{Gal}_K, [U, U](\bar{\mathbb{F}}_p)))\\ &=\min_k(rh +k^2 - kh). \qedhere\end{aligned}$$ ◻
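The closed form for $c(r, h)$ in the codimension lemma minimizes over real $k$, so it is in particular a lower bound for the minimum over the integers $1\le k\le r$ that appears in the proof. This can be confirmed by brute force; the script below is an illustrative standalone check with helper names of our choosing:

```python
from fractions import Fraction as F

def c_closed(r, h):
    # closed form from the codimension lemma (minimum over real k)
    return F(r*r, 2) if h > 2*r else F(h*r, 2) - F(h*h, 8)

def c_integer(r, h):
    # minimum of (k^2 + hr - hk)/2 over integers 1 <= k <= r
    return min(F(k*k + h*r - h*k, 2) for k in range(1, r + 1))

for r in range(1, 40):
    for h in range(0, 3*r + 2):
        # the closed form never exceeds the integer minimum ...
        assert c_closed(r, h) <= c_integer(r, h)
        # ... with equality when h > 2r, or when h is even and 2 <= h <= 2r
        if h > 2*r or (h % 2 == 0 and 2 <= h <= 2*r):
            assert c_closed(r, h) == c_integer(r, h)
```

For odd $h\le 2r$ the vertex $k=h/2$ is not an integer, so the closed form is a strict (but still valid) lower bound there.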
--- abstract: | Heteroclinic structures organize global features of dynamical systems. We analyze whether heteroclinic structures can arise in network dynamics with higher-order interactions, which describe the nonlinear interactions between three or more units. We find that while commonly analyzed model equations such as network dynamics on undirected hypergraphs may be useful to describe local dynamics such as cluster synchronization, they give rise to obstructions that prevent the design of heteroclinic structures in phase space. By contrast, directed hypergraphs break the homogeneity and lead to vector fields that support heteroclinic structures. address: - ${}^1$Department of Mathematics, Vrije Universiteit Amsterdam, The Netherlands (<c.bick@vu.nl>) - ${}^2$Department of Mathematics, Paderborn University, Germany (<soeren.von.der.gracht@uni-paderborn.de>) author: - Christian Bick${}^1$ and Sören von der Gracht${}^2$ bibliography: - bibliography.bib date: - - title: Heteroclinic Dynamics in Network Dynamical Systems with Higher-Order Interactions --- # Introduction {#Sec:Intro} Networks of interacting dynamical units have been extremely successful in describing emergent collective dynamics---such as synchronization---seen in many real-world systems [@Pikovsky2003; @Strogatz2004]: The state of node $k\in\left\{1, \dotsc, N\right\}$ is given by $x_k\in\mathbb{R}^d$ and evolves according to the network interactions. If the interactions take place between pairs of nodes, then a graph $\mathcal{G}=(\mathcal{V}, \mathcal{E})$ is the traditional combinatorial object that captures the interaction structure, where each unit corresponds to a vertex $v\in\mathcal{V}$ and the (additive) pairwise interactions take place along edges $e\in\mathcal{E}$. 
However, recent work has highlighted the importance of "higher-order" nonadditive interactions between three or more units [@Battiston2020; @Bick2021]: In analogy to dynamical systems on graphs, nonadditive interactions have been associated with hyperedges $e\in\mathcal{E}$ in a hypergraph $\mathcal{H}= (\mathcal{V},\mathcal{E})$. While numerous definitions of network dynamical systems on hypergraphs (whether undirected, directed, weighted, etc.) have appeared in the literature, this approach allows one to link the associated network structure (a hypergraph or simplicial complex) to dynamical features (such as synchronization behavior). Indeed, local dynamical features such as the existence and stability of (cluster) synchronization can be phrased in terms of the higher-order interaction structure [@Mulas2020; @Salova2021a; @Salova.2021b; @Bick2020; @Aguiar2020; @vonderGracht.2023]. At the same time, to understand the dynamics of real-world systems it is essential to go beyond local dynamics and linear stability and understand global features of the network dynamics. Heteroclinic structures in phase space consist (in the simplest case) of equilibria $\xi_q$ together with heteroclinic trajectories $\gamma_{p,q}$ that lie in the intersection of the unstable manifold of $\xi_p$ and the stable manifold of $\xi_q$. They have received particular attention since they can organize periodic or chaotic dynamics of nearby vector fields [@Weinberger.2018]. Moreover, they have been associated with dynamics that show metastability, for example in neuroscience, where one observes discrete states (represented by $\xi_q$) and transitions between them (along the heteroclinic connections) [@Afraimovich2004c; @Rabinovich2006]. 
Indeed, given a heteroclinic structure (a set of equilibria with directed connections between pairs of equilibria) there are different ways to construct dynamical systems whose phase space has the desired heteroclinic structure [@Ashwin2013; @Aguiar.2011; @Field.2015; @Field2017]. How these constructions are affected by considering vector fields that reflect specific network interactions has not been systematically investigated: Field remarked that heteroclinic structures can be realized in networks with pairwise interactions if the inputs are sufficiently heterogeneous (i.e., they need to be distinguishable) [@Field.2015]. Here, we analyze how higher-order interactions affect the emergence of heteroclinic structures in phase space. Commonly considered network dynamics on hypergraphs naturally come with homogeneity assumptions on the coupling, which in turn may affect the emergence of heteroclinic dynamics: First, if hyperedges are sets then the order of the inputs should not matter and the corresponding coupling function needs to be symmetric in the arguments. Moreover, higher-order interaction networks are typically considered on undirected hypergraphs, which impose even more constraints than directed hypergraphs as for each edge all units are affected in the same way by all other units that are contained in the edge. Second, one typically considers hypergraphs with hyperedges of a single type; this constrains the coupling functions. We systematically analyze how different types of interactions yield obstructions to constructing heteroclinic structures in phase space. Specifically, we consider heteroclinic structures in Lotka--Volterra type dynamical systems, which includes the classical Guckenheimer--Holmes example [@Guckenheimer.1988], and the construction of heteroclinic structures in network dynamical systems by Field [@Field.2015] subject to constraints imposed by the network structure. 
For example, network dynamics on undirected hypergraphs are typically too symmetric for Field's construction to apply. By contrast, both---the Guckenheimer--Holmes cycle as well as Field's construction---can be realized for network dynamics on directed hypergraphs. Interestingly though, additional restriction to specific types of interactions or coupling functions may again lead to the obstruction of one or the other construction. For example, $m$-uniform hypergraphs support the Guckenheimer--Holmes cycle but not Field's construction. On the other hand, coupling that does not explicitly depend on the state of the node makes the emergence of the Guckenheimer--Holmes cycle impossible but does not obstruct Field's construction---see [2](#sec:prelim){reference-type="ref" reference="sec:prelim"} below for details on these types. Note that for all heteroclinic structures under consideration, we prove (or disprove) the existence of heteroclinic connections. Here we focus on these fundamental existence properties rather than on whether multiple connecting trajectories occur---i.e., the cleanliness of the heteroclinic structure [@Field2017]---or on the stability properties the structures may have (see for example [@Podvigina2011] and references therein). This paper is organized as follows: In Section [2](#sec:prelim){reference-type="ref" reference="sec:prelim"}, we provide necessary preliminaries. In particular, Section [2.1](#sec:HetCycles){reference-type="ref" reference="sec:HetCycles"} contains a brief overview over heteroclinic structures as well as the Guckenheimer--Holmes cycle and Field's construction, while Section [2.2](#Sec:Setup){reference-type="ref" reference="Sec:Setup"} introduces hypergraphs and the class of dynamics on those hypergraphs that we investigate in this article. In Section [3](#Sec:Undirected){reference-type="ref" reference="Sec:Undirected"}, we observe that undirected hypergraphs support neither of the two constructions. The two subsequent sections contain detailed expositions showing that the contrary is true for directed hypergraphs, for the two constructions respectively. In the penultimate section, we classify several hypergraph dynamics from the literature according to the types investigated before to clarify which of those support heteroclinic dynamics according to the constructions. 
We conclude with a brief discussion and outlook. # Heteroclinic Cycles and Higher-Order Interactions {#sec:prelim} We now introduce heteroclinic dynamics on the one hand and network dynamical systems with higher-order interactions on the other. This will set the stage for the rest of the manuscript since showing what type of heteroclinic dynamics we may expect in network dynamical systems with higher-order interactions is the main topic of this paper. ## Robust Heteroclinic Cycles {#sec:HetCycles} Heteroclinic trajectories arise when the stable and unstable manifolds of distinct equilibria intersect. For the dynamical system $\dot x= f(x)$ on $\mathbb{R}^n$ let $\alpha(x)$, $\omega(x)$ be the usual $\alpha$ and $\omega$ limit sets for the flow generated by $f$ as $t\to\pm\infty$ [@Katok1995]. For a hyperbolic equilibrium $\xi\in\mathbb{R}^n$ we define $$\begin{aligned} W^{\mathrm{s}}(\xi) &:= \{x\in\mathbb{R}^n:\omega(x)=\xi\}, & W^{\mathrm{u}}(\xi) &:= \{x\in\mathbb{R}^n:\alpha(x)=\xi\}\end{aligned}$$ to be its stable and unstable manifold, respectively. A *heteroclinic cycle $\mathsf{C}$* now consists of a finite number of hyperbolic equilibria $\xi_q\in\mathbb{R}^n$, $q=1,\dotsc,Q$, together with heteroclinic trajectories $$[\xi_q\to\xi_{q+1}] \subset W^{\mathrm{u}}(\xi_q)\cap W^{\mathrm{s}}(\xi_{q+1})\neq\emptyset,$$ where indices are taken modulo $Q$. See [@Weinberger.2018] for a recent overview of heteroclinic dynamics including heteroclinic cycles between more general invariant sets and larger heteroclinic structures that contain more than one distinct cycle.[^1] Heteroclinic cycles do not persist under generic perturbations of the vector field. Hence, one often considers heteroclinic cycles such that the heteroclinic trajectories $[\xi_p\to\xi_q]$ are contained in dynamically invariant subspaces; these heteroclinic cycles are then *robust* with respect to perturbations that preserve the invariant subspaces. 
An important class of such dynamical systems is given by vector fields that are equivariant with respect to a symmetry group. Let $\dot x = f(x)$ with vector field $f:\mathbb{R}^n\to\mathbb{R}^n$ determine the dynamics and suppose that a group $\Gamma$ acts on $\mathbb{R}^n$. If the vector field commutes with the action of $\Gamma$, that is, $\gamma f = f\gamma$, then $f$ is *$\Gamma$-equivariant* and $\Gamma$ are symmetries of the dynamical system; see [@Golubitsky2002] for an introduction to equivariant dynamical systems. As a consequence, for any subgroup $\Gamma'\subset\Gamma$ the fixed point subspace $\mathop{\mathrm{Fix}}(\Gamma')=\{x\in\mathbb{R}^n: \gamma x=x \text{ for all }\gamma\in\Gamma'\}$ is dynamically invariant. If all heteroclinic connections of a cycle are now contained in fixed point subspaces of the symmetry action, then the heteroclinic cycle is robust with respect to perturbations to the vector field $f$ that preserve $\Gamma$-equivariance. ### Guckenheimer--Holmes Cycle {#subsubsec:gh} Guckenheimer and Holmes [@Guckenheimer.1988] famously considered the heteroclinic cycle that arises in the system $$\label{eq:gh_cubic} \begin{split} \dot{x}_1 &= x_1 + ax_1^3 + bx_1x_2^2 + cx_1x_3^2 \\ \dot{x}_2 &= x_2 + ax_2^3 + bx_2x_3^2 + cx_1^2x_2 \\ \dot{x}_3 &= x_3 + ax_3^3 + bx_1^2x_3 + cx_2^2x_3 \end{split}$$ where $x=(x_1, x_2, x_3)\in\mathbb{R}^3$. Note that these equations are equivariant with respect to the group $\Gamma = \langle\tau_1,\tau_2,\tau_3,\rho\rangle$, where the reflections $\tau_j$ act by sending $x_j\mapsto-x_j$ (keeping the other coordinates fixed) and the cyclic permutation $\rho:(x_1,x_2,x_3)\mapsto(x_2,x_3,x_1)$ permutes the coordinates. 
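The stated symmetries can be verified numerically. The following standalone sketch (the parameter values $a=-0.2$, $b=-0.1$, $c=-0.7$ are an illustrative choice satisfying the existence conditions discussed below) confirms the $\rho$-equivariance of the vector field and that the axis points of the form $(1/\sqrt{-a},0,0)$ are equilibria:

```python
import math

# illustrative parameters with a+b+c = -1, -1/3 < a < 0 and c < a < b < 0
A, B, C = -0.2, -0.1, -0.7

def gh(x):
    # right-hand side of the Guckenheimer--Holmes system above
    x1, x2, x3 = x
    return (x1 + A*x1**3 + B*x1*x2**2 + C*x1*x3**2,
            x2 + A*x2**3 + B*x2*x3**2 + C*x1**2*x2,
            x3 + A*x3**3 + B*x1**2*x3 + C*x2**2*x3)

def rho(x):
    # cyclic symmetry (x1, x2, x3) -> (x2, x3, x1)
    return (x[1], x[2], x[0])

# equivariance f(rho(x)) = rho(f(x)) at a generic point
x = (0.7, -1.3, 0.4)
assert all(abs(u - v) < 1e-12 for u, v in zip(gh(rho(x)), rho(gh(x))))

# the axis point (1/sqrt(-a), 0, 0) and its rho-images are equilibria
xi = (1/math.sqrt(-A), 0.0, 0.0)
for _ in range(3):
    assert all(abs(u) < 1e-12 for u in gh(xi))
    xi = rho(xi)
```

Since the three axis equilibria are $\rho$-images of one another, checking one of them together with equivariance already covers all three.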
These equations can be interpreted as a *network dynamical system* with three nodes (we will make this interpretation more precise below), where the state of node $k\in\left\{1,2,3\right\}$ is determined by $x_k\in\mathbb{R}$ and the interaction with the other nodes is determined by the parameters $b,c$. As in Lotka--Volterra-type systems [@Afraimovich2004], the "extinction subspaces" $P_{12}, P_{23}, P_{31}$ where one of the coordinates is zero---e.g., $P_{12}=\mathop{\mathrm{Fix}}(\langle\tau_3\rangle)=\{x_3=0\}$---are dynamically invariant as fixed point subspaces of the reflections. The *Guckenheimer--Holmes cycle* now connects the hyperbolic saddle equilibria $$\begin{aligned} \xi_1 &= (1/\sqrt{-a},0,0)^\mathsf{T},\\ \xi_2 &= (0,1/\sqrt{-a},0)^\mathsf{T},\\ \xi_3 &= (0,0,1/\sqrt{-a})^\mathsf{T}\end{aligned}$$ that lie on the three coordinate axes; see (a) for a sketch of the heteroclinic cycle. The cycle exists under the conditions $a+b+c=-1$, $-\frac{1}{3}<a<0$, $c<a<b<0$ and is robust as the heteroclinic connections are contained in the subspaces $P_{kj}$. Moreover, it attracts all trajectories that are not on the coordinate planes or the diagonals $\{x_1=\pm x_2=\pm x_3 \}$. [Figure: sketches of (a) the Guckenheimer--Holmes cycle and (b) the Field cycle.] ### Constructing Robust Heteroclinic Cycles {#subsubsec:fields-cycle} While the Guckenheimer--Holmes cycle is an example of a specific heteroclinic cycle for a particular system, it is---more generally---possible to *realize* general classes of graphs as heteroclinic structures in the phase space of a dynamical system [@Ashwin2013; @Field.2015]. 
Here we consider the approach by Field [@Field.2015] where a graph is realized as a heteroclinic structure in a network dynamical system (more specifically, a coupled cell system [@Stewart2003]) in the following way: If $\mathsf{G}=(\mathsf{V},\mathsf{E})$ is the given graph to be realized as a heteroclinic structure with vertices $\mathsf{V}$ and $N-1$ edges $\mathsf{E}$, consider a network dynamical system with $N$ nodes such that the state of node $k$ is given by $x_k\in\mathbb{R}$. Let $\Delta := \left\{(x_1, \dotsc, x_N)\in\mathbb{R}^N: x_1=x_2=\dotsb=x_N\right\}$ denote the diagonal where all nodes are synchronized and let $S_j = \left\{(x_1, \dotsc, x_N)\in\mathbb{R}^N: x_l=x_k \text{ for }k,l\neq j\right\}$ denote the set where all nodes except node $j$ are synchronized. For the specific class of systems in which the heteroclinic structure is realized, these subspaces are dynamically invariant. Then for each vertex $v_p\in\mathsf{V}$ there is a synchronized equilibrium $\xi_p\in\Delta$ and for each edge $(v_p,v_q)\in\mathsf{E}$ there is a $j(p,q)$ and a heteroclinic connection $[\xi_p\to\xi_q]\subset S_{j(p,q)}$ along which the $j(p,q)$th node is not synchronized to the others. For concreteness, we will focus on a minimal example of this general construction: We consider a heteroclinic cycle that consists of two equilibria $\xi_1$ and $\xi_2$ with reciprocal heteroclinic connections $C_1:=[\xi_1\to\xi_2]$ and $C_2:=[\xi_2\to\xi_1]$ in a network dynamical system that consists of $N=3$ nodes [@Aguiar.2011]. In the following we will refer to this as the *Field cycle*; see (b) for a sketch of the heteroclinic cycle. Since the heteroclinic connection $C_1$ lies in the invariant subspace $S_2 = \left\{x_1=x_3\right\}$ and $C_2$ in $S_3=\left\{x_2=x_3\right\}$, the heteroclinic cycle is robust with respect to perturbations that preserve these invariant subspaces (e.g., perturbations with symmetry). 
## Hypergraphs and Network Dynamics {#Sec:Setup} Hypergraphs as combinatorial objects are convenient to capture the interaction structure of network dynamical systems. A *directed hypergraph $\mathcal{H}=(\mathcal{V},\mathcal{E})$* on $N$ vertices consists of a set of vertices $\mathcal{V}= \left\{1, \dotsc, N\right\}$ and a set of hyperedges $\mathcal{E}$; each directed hyperedge $e\in\mathcal{E}$ (in the following simply *edge*) can be written as $e=(T,H)$ with *tail* $T=T(e)\in \mathfrak{P}(\mathcal{V})$ and *head* $H=H(e)\in \mathfrak{P}(\mathcal{V})$. (We discuss the special case of an undirected hypergraph below.) We call $m=\left|T(e)\right|+1=:|e|$ the *order* of a hyperedge $e$ and write $\mathcal{E}^{(m)}$ for all hyperedges of order $m$; obviously $\mathcal{E}=\bigcup_{m\geq2}\mathcal{E}^{(m)}$. A hypergraph $\mathcal{H}$ is *$m$-uniform* if $\mathcal{E}= \mathcal{E}^{(m)}$. ### Network Dynamics on Hypergraphs We now consider network dynamical systems consisting of $N$ nodes that are compatible with the network structure determined by a hypergraph $\mathcal{H}$. Suppose that the state of node $k$ is given by $x_k\in \mathbb{X}= \mathbb{R}^d$. If uncoupled, each node will evolve according to its (identical) intrinsic dynamics, determined by $F:\mathbb{X}\to\mathbb{X}$. For an edge $e$ with $k\in H(e)$ the state $x_k$ will be influenced by the states of the nodes in $T(e)$ through an interaction function $G_e: \mathbb{X}^{1+\left|T(e)\right|}\to\mathbb{X}$. Since the hyperedges are sets by definition---which do not depend on the ordering of their elements---it is natural to assume some invariance properties of the interaction functions $G_e$. For $e=(T,H)$ and $k\in H(e)$ we write $G_e(x_k; x_T)$ and assume that $G_e$ is invariant under permutations of the elements of $x_T$. A coupling function is *nodespecific* if the dependency on the first coordinate is nontrivial and *nodeunspecific* otherwise, i.e., if $G_e(x_k; x_T)=G_e(x_T)$. 
Note that a nodeunspecific coupling function does not imply that the coupling is not state dependent: The receiving node $k$ can still be contained in the tail of the hyperedge. Taken together, the state of node $k$ for the network dynamical system on a hypergraph $\mathcal{H}$ evolves according to $$\label{eq:hypernetdyn} \dot x_k = F(x_k) + \sum_{e\in\mathcal{E}: k\in H(e)} G_e(x_k;x_{T}),$$ where $F:\mathbb{X}\to\mathbb{X}$ describes the intrinsic dynamics of each node and the coupling function $G_e:\mathbb{X}^{\left|e\right|}\to\mathbb{X}$ describes how nodes interact along the hyperedge $e$. A common assumption is that there is only a single interaction function for each hyperedge order $m$, that is, $G_e = G^{(m)}$ for all $e\in \mathcal{E}^{(m)}$---in other words, the interaction is *homogeneous in each order $m$*. We will typically make this assumption. Since it is much more common to consider dynamics on undirected hypergraphs, we briefly discuss this special case. A directed hypergraph $\mathcal{H}$ is *undirected* if all edges are of the form $e=(A,A)$ for $A\in\mathfrak{P}(\mathcal{V})$. For notational simplicity we identify an undirected hyperedge $e=(A,A)$ with the set $A$. This means that for a network dynamical system [\[eq:hypernetdyn\]](#eq:hypernetdyn){reference-type="eqref" reference="eq:hypernetdyn"}, each node in an undirected hyperedge gets the same input from all other nodes contained in the edge. ### Interactions for One-Dimensional Node Dynamics {#sec:HypDyn} Since the aim is to relate the emergence of heteroclinic cycles to higher-order interactions, we expand the network vector field [\[eq:hypernetdyn\]](#eq:hypernetdyn){reference-type="eqref" reference="eq:hypernetdyn"}. Specifically, we focus on one-dimensional node dynamics[^2] $\mathbb{X}= \mathbb{R}$. 
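The general model above is straightforward to evaluate numerically. The following standalone sketch uses an illustrative directed hypergraph and illustrative choices for the intrinsic dynamics and the coupling (none of these choices are taken from the paper):

```python
# a directed hyperedge e = (T, H) is stored as a pair of vertex sets
edges = [({0, 1}, {2}), ({1, 2}, {0}), ({0}, {1})]  # illustrative hypergraph

def F(x):
    return x - x**3  # illustrative intrinsic dynamics

def G(z, tail_states):
    # illustrative coupling; it depends on the tail only through the sum,
    # hence it is invariant under permutations of the tail states as required
    return 0.5 * z * sum(tail_states)

def vector_field(x, edges):
    # dx_k = F(x_k) + sum of G(x_k; x_T) over all edges e with k in H(e)
    dx = [F(xk) for xk in x]
    for tail, head in edges:
        for k in head:
            dx[k] += G(x[k], [x[j] for j in tail])
    return dx

x = [0.3, -0.1, 0.5]
dx = vector_field(x, edges)
```

Because the tails are sets, the coupling function must not depend on the order in which the tail states are passed; the symmetric choice above guarantees this.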
For order $m$ interactions, that is, the state $z\in\mathbb{R}$ of a node in the head is influenced by the states $y_1, \dotsc, y_{m-1}\in\mathbb{R}$ in the tail, we expand the interaction functions $G^{(m)}(z; y_1, \dotsc, y_{m-1})$ formally as [\[eq:expansion\]]{#eq:expansion label="eq:expansion"} $$\begin{aligned} G^{(2)}(z; y_1) &= a^{(2)}_{0}(z) + a^{(2)}_{1}(z)y_1 + a^{(2)}_{2}(z)y_1^2 + \dotsb\\ \begin{split} G^{(3)}(z; y_1,y_2) &= a^{(3)}_{00}(z) + a^{(3)}_{10}(z)y_1+ a^{(3)}_{01}(z)y_2\\ &\qquad + a^{(3)}_{20}(z)y_1^2+ a^{(3)}_{11}(z)y_1y_2+ a^{(3)}_{02}(z)y_2^2+\dotsb \end{split}\\ &\ \ \vdots\nonumber \intertext{or written more compactly for order~$m+1$ with input vector $y\in\mathbb{R}^m$ and an $m$-dimensional multi-index $\mathbf{n}=(n_1, \dotsc, n_m)\in\mathbb{N}^m$, $\left|\mathbf{n}\right|=n_1+\dotsb+n_m$, $y^\mathbf{n}=y_1^{n_1}\dotsb y_m^{n_m}$ as}\nonumber G^{(m+1)}(z; y) &= \sum_{o=0}^{\infty}\sum_{\left|\mathbf{n}\right|=o}a^{(m+1)}_\mathbf{n}(z)y^\mathbf{n}.\end{aligned}$$ Note that the requirement that the functions are invariant under permutations of the inputs $y$ imposes conditions on the coefficients $a^{(m+1)}_\mathbf{n}(z)$: For example, we must have $a^{(3)}_{10}(z)=a^{(3)}_{01}(z)$. If the coupling functions are nodeunspecific, then the coefficients are constant with respect to the node state $z$, that is, $a^{(m+1)}_\mathbf{n}(z)=a^{(m+1)}_\mathbf{n}$. A particular choice of interaction function still leaves some ambiguity in terms of the network; see also [@Aguiar2020] for a more detailed discussion. First, the interaction function $G^{(m)}(z; y_1, \dotsc, y_{m-1})$ may not depend on one (or more) of the $y_l$ (as the relevant coefficients $a^{(m)}_\mathbf{n}(z)$ vanish). This means that the interactions along edges of order $m$ are actually of lower order, say $n$, and there is a possibility of two "types" of order $n$ interactions, the ones determined by $G^{(m)}$ and those by $G^{(n)}$. 
Second, we do not impose that $G^{(m)}$ is of minimal order $m$. That means that $G^{(m)}(z; y_1, \dotsc, y_{m-1}) = y_1+\dotsb+y_{m-1}$---an interaction function that can be realized with a graph with pairwise edges---is a valid choice of interaction function for a hyperedge of order $m$. In both cases, we say that the coupling (realized by the coupling function $G^{(m)}$) is *effectively* of a lower order $n$. # Obstruction to Heteroclinic Cycles for Network Dynamics on Undirected Hypergraphs {#Sec:Undirected} In this section, we consider network dynamical systems with higher-order interactions on *undirected* hypergraphs and see whether the heteroclinic cycles in Section [2.1](#sec:HetCycles){reference-type="ref" reference="sec:HetCycles"} can be realized in the resulting vector fields. Recall that network dynamics on undirected hypergraphs have strong homogeneity properties: Each node in a hyperedge is affected by each other node in the same way. As a result, we find that the undirected setup is quite restrictive as the resulting vector fields are constrained by the symmetries. Consider a network dynamical system with one-dimensional node phase space (cf. Section [2.2](#Sec:Setup){reference-type="ref" reference="Sec:Setup"}) on an undirected hypergraph $\mathcal{H}$ with three vertices $\mathcal{V}=\left\{1,2,3\right\}$. 
As there are precisely $7$ nontrivial undirected edges, the complete undirected hypergraph $\mathcal{K}$ has edges $$\begin{aligned} \mathcal{E}^{(1)} &= \{\{1\}, \{2\}, \{3\} \}, & \mathcal{E}^{(2)} &= \{\{1,2\}, \{1,3\}, \{2,3\}\},\\ \mathcal{E}^{(3)} &= \{\{1,2,3\}\}, \text{ and}& \mathcal{E}^{(\ell)} &= \emptyset \text{ whenever } \ell >3.\end{aligned}$$ In the most general case, the coupling function may depend on the specific edge and the equations [\[eq:hypernetdyn\]](#eq:hypernetdyn){reference-type="eqref" reference="eq:hypernetdyn"} for dynamics on $\mathcal{K}$ read $$\label{eq:hypernetdyn-undirected} \begin{split} \dot x_1 &= F(x_1) + G_{\{1\}}(x_1;x_1) + G_{\{1,2\}}(x_1; x_1, x_2) + G_{\{1,3\}}(x_1;x_1,x_3) \\ &\qquad + G_{\{1,2,3\}}(x_1;x_1,x_2,x_3) \\ \dot x_2 &= F(x_2) + G_{\{2\}}(x_2;x_2) + G_{\{1,2\}}(x_2; x_1, x_2) + G_{\{2,3\}}(x_2;x_2,x_3) \\ &\qquad + G_{\{1,2,3\}}(x_2;x_1,x_2,x_3) \\ \dot x_3 &= F(x_3) + G_{\{3\}}(x_3;x_3) + G_{\{1,3\}}(x_3; x_1, x_3) + G_{\{2,3\}}(x_3;x_2,x_3) \\ &\qquad + G_{\{1,2,3\}}(x_3;x_1,x_2,x_3). \end{split}$$ We summarize the right hand side of [\[eq:hypernetdyn-undirected\]](#eq:hypernetdyn-undirected){reference-type="eqref" reference="eq:hypernetdyn-undirected"} as $\mathbf{F}(x_1,x_2,x_3)$. Of course, an undirected hypergraph on three vertices could also comprise only a subset of the edges presented above. While this can be incorporated in the complete graph setup (e.g., by setting the coupling function of an edge to zero), we will comment explicitly below that this does not affect our main points. ## The Guckenheimer--Holmes Cycle We first consider the Guckenheimer--Holmes cycle in network dynamics on undirected hypergraphs. **Theorem 1**. 
*The Guckenheimer--Holmes system [\[eq:gh_cubic\]](#eq:gh_cubic){reference-type="eqref" reference="eq:gh_cubic"} cannot be realized in a network dynamical system on an undirected hypergraph $\mathcal{H}$ on three vertices [\[eq:hypernetdyn-undirected\]](#eq:hypernetdyn-undirected){reference-type="eqref" reference="eq:hypernetdyn-undirected"}.* *Proof.* The Guckenheimer--Holmes cycle is a cycle between three equilibria that are related by a symmetry of the system (cf. ). Specifically, the vector field [\[eq:gh_cubic\]](#eq:gh_cubic){reference-type="eqref" reference="eq:gh_cubic"} is equivariant with respect to the cyclic symmetries generated by the linear map $\rho(x_1,x_2,x_3)=(x_2,x_3,x_1)$. This places additional restrictions on the vector fields [\[eq:hypernetdyn-undirected\]](#eq:hypernetdyn-undirected){reference-type="eqref" reference="eq:hypernetdyn-undirected"} for undirected hypergraph dynamics: For $\rho$, equivariance requires $$\label{eq:GHundir} \mathbf{F}(\rho(x_1,x_2,x_3))=\rho(\mathbf{F}(x_1,x_2,x_3)).$$ Writing each side of [\[eq:GHundir\]](#eq:GHundir){reference-type="eqref" reference="eq:GHundir"} explicitly yields $$\mathbf{F}(\rho(x_1,x_2,x_3)) = \left(\begin{array}{l} F(x_2) + G_{\{1\}}(x_2;x_2) + G_{\{1,2\}}(x_2; x_2, x_3) \\ \qquad +\ G_{\{1,3\}}(x_2;x_2,x_1) + G_{\{1,2,3\}}(x_2;x_2,x_3,x_1) \\ F(x_3) + G_{\{2\}}(x_3;x_3) + G_{\{1,2\}}(x_3; x_2, x_3) \\ \qquad +\ G_{\{2,3\}}(x_3;x_3,x_1) + G_{\{1,2,3\}}(x_3;x_2,x_3,x_1) \\ F(x_1) + G_{\{3\}}(x_1;x_1) + G_{\{1,3\}}(x_1; x_2, x_1) \\ \qquad +\ G_{\{2,3\}}(x_1;x_3,x_1) + G_{\{1,2,3\}}(x_1;x_2,x_3,x_1) \end{array}\right),$$ while the right hand side is $$\rho(\mathbf{F}(x_1,x_2,x_3)) = \left(\begin{array}{l} F(x_2) + G_{\{2\}}(x_2;x_2) + G_{\{1,2\}}(x_2; x_1, x_2)\\ \qquad +\ G_{\{2,3\}}(x_2;x_2,x_3) + G_{\{1,2,3\}}(x_2;x_1,x_2,x_3) \\ F(x_3) + G_{\{3\}}(x_3;x_3) + G_{\{1,3\}}(x_3; x_1, x_3)\\ \qquad +\ G_{\{2,3\}}(x_3;x_2,x_3) + G_{\{1,2,3\}}(x_3;x_1,x_2,x_3) \\ F(x_1) + G_{\{1\}}(x_1;x_1) + 
G_{\{1,2\}}(x_1; x_1, x_2)\\ \qquad +\ G_{\{1,3\}}(x_1;x_1,x_3) + G_{\{1,2,3\}}(x_1;x_1,x_2,x_3) \end{array}\right).$$ So for [\[eq:GHundir\]](#eq:GHundir){reference-type="eqref" reference="eq:GHundir"} to hold while using the fact that each interaction function $G_e(x_k;x_{T(e)})$ is invariant under arbitrary permutations of the arguments in $x_{T(e)}$, we obtain the restrictions $$\begin{aligned} &G_{\{1\}}(z;y_1) = G_{\{2\}}(z;y_1) = G_{\{3\}}(z;y_1), \\ &G_{\{1,2\}}(z;y_1,y_2) = G_{\{1,3\}} (z;y_1,y_2) = G_{\{2,3\}}(z;y_1,y_2). \end{aligned}$$ In particular, the coupling is homogeneous in every order $m=1,2,3$. Note that this observation changes neither when arbitrary hyperedges are absent nor when all coupling functions are nodeunspecific. As a result, each row of [\[eq:hypernetdyn-undirected\]](#eq:hypernetdyn-undirected){reference-type="eqref" reference="eq:hypernetdyn-undirected"} is invariant under permutation of the two input variables---the $k$-th row is invariant under exchanging $x_i, x_j$ for $i,j\in\{1,2,3\}\setminus\{k\}$. This, however, is not true for the Guckenheimer--Holmes system [\[eq:gh_cubic\]](#eq:gh_cubic){reference-type="eqref" reference="eq:gh_cubic"} where the existence of the heteroclinic cycle depends crucially on $b\ne c$. Hence, this system cannot be realized by [\[eq:hypernetdyn-undirected\]](#eq:hypernetdyn-undirected){reference-type="eqref" reference="eq:hypernetdyn-undirected"}. ◻ *Remark 1*. The fact that the coupling is homogeneous in every order $m=1,2,3$ in the previous proof is a restriction that is imposed by the symmetry rather than a priori. The symmetry observations that are made for undirected hypergraphs can be summarized as follows: The Guckenheimer--Holmes system has a certain set of symmetries. On the other hand, restricting to undirected edges imposes additional symmetries on the system by the fact that certain terms are present in multiple equations.
The combination of both results in the fact that there are in some sense 'too many symmetries' present in the system for the heteroclinic cycle to emerge. On a more technical note, this is caused by the fact that the presence of all symmetries simultaneously yields that the equilibria in the cycle cannot have the necessary saddle stability properties. ## The Field Cycle {#subsec:oscar-undirected} Next, we investigate the Field cycle as an example of the more general construction in which heteroclinic cycles occur between fully synchronous equilibria in a system of three interacting nodes. Contrary to the Guckenheimer--Holmes cycle, the model in [@Aguiar.2011; @Field2017; @Weinberger.2018] does not specify a precise system of ordinary differential equations but rather proves that a given network structure allows for the realization of a heteroclinic cycle between two fully synchronous equilibria. While the key components for the Guckenheimer--Holmes cycle are the symmetries of the system, the main ingredients for this construction are specific synchrony subspaces that are dynamically invariant independently of the specific governing functions. In particular, the construction requires the dynamical invariance of the fully synchronous subspace $\Delta = \{x_1=x_2=x_3\}$ and of two of the partially synchronous subspaces $S_3 = \{x_1=x_2\}, S_2 = \{x_1=x_3\}$, and $S_1 = \{x_2=x_3\}$. **Theorem 2**. *Dynamics on an undirected hypergraph on three vertices does not allow for the realization of the Field cycle.* *Proof.* We evaluate the restrictions on general dynamics on an undirected hypergraph [\[eq:hypernetdyn-undirected\]](#eq:hypernetdyn-undirected){reference-type="eqref" reference="eq:hypernetdyn-undirected"} of three vertices imposed by the dynamical invariance of the (partial) synchrony subspaces.
To that end, first consider the dynamics on $S_3 = \{x_1=x_2\}$ by substituting $(x_1,x_2,x_3) = (z,z,y)$ into [\[eq:hypernetdyn-undirected\]](#eq:hypernetdyn-undirected){reference-type="eqref" reference="eq:hypernetdyn-undirected"}. The subspace is dynamically invariant if $\dot x_1=\dot x_2$, that is, $$\begin{aligned} &F(z) + G_{\{1\}}(z;z) + G_{\{1,2\}}(z;z,z) + G_{\{1,3\}}(z;z,y) + G_{\{1,2,3\}}(z;z,z,y) \\ &= F(z) + G_{\{2\}}(z;z) + G_{\{1,2\}}(z;z,z) + G_{\{2,3\}}(z;z,y) + G_{\{1,2,3\}}(z;z,z,y). \end{aligned}$$ Canceling out equal terms gives $$G_{\{1\}}(z;z) + G_{\{1,3\}}(z;z,y) = G_{\{2\}}(z;z) + G_{\{2,3\}}(z;z,y).$$ In particular, the coupling functions have to satisfy $$G_{\{1\}} = G_{\{2\}} \quad \text{and} \quad G_{\{1,3\}} = G_{\{2,3\}}.$$ Identical considerations for $S_2 = \{x_1=x_3\}$ and $S_1 = \{x_2=x_3\}$ yield $$\begin{aligned} &G_{\{1\}} = G_{\{3\}}, \quad G_{\{1,2\}} = G_{\{2,3\}} \quad \text{and} \\ &G_{\{2\}} = G_{\{3\}}, \quad G_{\{1,2\}} = G_{\{1,3\}} \end{aligned}$$ respectively. Thus, for invariance of any two of the subspaces $S_k$, the coupling functions have to satisfy $$G_{\{1\}} = G_{\{2\}} = G_{\{3\}} \quad \text{and} \quad G_{\{1,2\}} = G_{\{1,3\}} = G_{\{2,3\}}.$$ As a result, the coupling is homogeneous in each order $m$. Invariance of the fully synchronous subspace follows trivially from the intersection of the two-dimensional subspaces. Again, note that none of these observations change when any hyperedges are not present (they could be represented by a coupling function $0$) or when the coupling functions are all nodeunspecific. The goal of the construction is to obtain heteroclinic connections in partially synchronous spaces between two hyperbolic equilibria in the fully synchronous subspace. In order for such a heteroclinic connection to be possible, equilibria in $\Delta = \{x_1=x_2=x_3\}$ need both a stable and an unstable direction outside of the fully synchronous subspace.
However, a quick calculation shows that the network structure in the equations is too restrictive for this to happen: The linearization of [\[eq:hypernetdyn-undirected\]](#eq:hypernetdyn-undirected){reference-type="eqref" reference="eq:hypernetdyn-undirected"} at $\mathbf{x}=(z,z,z)^\mathsf{T}\in \Delta$ is $$\label{eq:oscar1-lin} D\mathbf{F}(\mathbf{x}) = \begin{pmatrix} \theta & \eta + \zeta & \eta + \zeta \\ \eta + \zeta & \theta & \eta + \zeta \\ \eta + \zeta & \eta + \zeta & \theta \end{pmatrix},$$ where, with $\partial_j$ denoting differentiation with respect to the $j$th component, $$\begin{aligned} \alpha &= F'(z), \\ \beta &= \partial_1G^{(1)}(z;z), \\ \gamma &= \partial_1 G^{(2)}(z;z,z), \\ \delta &= \partial_1 G^{(3)}(z;z,z,z) \\ \epsilon &= \partial_2 G^{(1)}(z;z), \\ \eta &= \partial_2 G^{(2)}(z;z,z) = \partial_3 G^{(2)}(z;z,z), \\ \zeta &= \partial_2 G^{(3)}(z;z,z,z) = \partial_3 G^{(3)}(z;z,z,z) = \partial_4 G^{(3)}(z;z,z,z),\\ \theta &= \alpha + \beta + 2 \gamma + \delta + \epsilon + 2 \eta + \zeta. \end{aligned}$$ This matrix has eigenvalues $\lambda_1=\theta+2(\eta + \zeta)$ and $\lambda_2=\lambda_3=\theta-(\eta+\zeta)$ with corresponding eigenvectors $(1,1,1)^\mathsf{T}$ and $(1,-1,0)^\mathsf{T}, (1,0,-1)^\mathsf{T}$. In particular, in the generic situation where the two eigenvalues are not identical, any equilibrium in the fully synchronous space can either have a $2$-dimensional stable or a $2$-dimensional unstable manifold outside of $\Delta$. Hence, reciprocal heteroclinic connections outside of $\Delta$ are impossible. ◻ *Remark 2*. Similar to , the fact that the coupling functions need to be homogeneous in every order results from the restrictions on the vector field to realize the heteroclinic cycle rather than a priori. # The Guckenheimer--Holmes Cycle in Directed Hypergraphs {#sec:gh-directed} The situation for directed hypergraphs is more complex than the undirected case. 
On three vertices $\mathcal{V}=\{1,2,3\}$ there are $49$ non-trivial directed hyperedges of order at most three---including the seven investigated in the previous section. Rather than investigating all possible configurations of hyperedges that exist in the hypergraph, we will describe specific cases in which the heteroclinic constructions are possible due to the increased complexity. Specifically, we interpret the system [\[eq:gh_cubic\]](#eq:gh_cubic){reference-type="eqref" reference="eq:gh_cubic"} as a (higher-order) network dynamical system with additive coupling as in [\[eq:hypernetdyn\]](#eq:hypernetdyn){reference-type="eqref" reference="eq:hypernetdyn"}. This includes the construction of a suitable interaction structure as well as of the correct coupling functions. As opposed to dynamics on an undirected hypergraph, this construction is possible when directed hyperedges are present. One of the major tools for the construction is the observation that system [\[eq:gh_cubic\]](#eq:gh_cubic){reference-type="eqref" reference="eq:gh_cubic"} can be represented as $$\begin{aligned} \dot{x}_1 &= f(x_1; x_2, x_3) \\ \dot{x}_2 &= f(x_2; x_3, x_1) \\ \dot{x}_3 &= f(x_3; x_1, x_2),\end{aligned}$$ where $$\label{eq:GH-F} f(z, y_1, y_2) = z + az^3 + bzy_1^2 + czy_2^2.$$ In particular, the system can be interpreted as a homogeneous system of three interacting dynamical vertices, as all three internal dynamics are governed by the same function. Hence, it suffices to construct suitable couplings for one vertex and extend the construction to the remaining ones. We can make one immediate observation. **Theorem 3**. 
*The Guckenheimer--Holmes system [\[eq:gh_cubic\]](#eq:gh_cubic){reference-type="eqref" reference="eq:gh_cubic"} cannot be realized on a directed hypergraph $\mathcal{H}$ on three vertices as a network dynamical system with nodeunspecific coupling.* *Proof.* The governing function $f$ in the cubic Guckenheimer--Holmes system [\[eq:GH-F\]](#eq:GH-F){reference-type="eqref" reference="eq:GH-F"} contains internal dynamics $z+az^3$ and two coupling terms $bzy_1^2$ and $czy_2^2$. Heuristically speaking, these couplings are nodespecific and thus cannot be realized by nodeunspecific coupling functions. More precisely, the coupling occurs in mixed terms of the state of the vertex $z$ and the state of one of its neighbors $y_1$ or $y_2$. Hence, a nodeunspecific coupling function would require the head vertex to be an element of the tail as well to be able to generate such a term, since it is of the form $G^{(m)}_e(z;y_T)=G^{(m)}_e(y_T)$. This can only be the case in hyperedges of order three or greater. If the term $bzy_1^2$ is contained in one of the nodeunspecific coupling functions, e.g., $G^{(3)}_e(z,y_1)$ for a hyperedge $e\in\mathcal{E}^{(3)}$, then so is $bz^2y_1$ due to the symmetry of the coupling function. This term then has to be removed by another coupling function, that is, $-bz^2y_1$ has to be contained in another nodeunspecific coupling function, e.g., $G^{(3)}_{e'}(z,y_1)$. But then, by the same symmetry argument, so is $-bzy_1^2$ and we also remove the wanted term from the equation. This shows that the function $f$ cannot be realized by nodeunspecific coupling functions. ◻ ## Realisation in a Directed Classical Network {#subsubsec:gh-classical} Observe that the governing function $f$ contains effective pairwise interactions only. Hence, one may want to realize the vector field as the network vector field of a classical network dynamical system with pairwise coupling.
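Before constructing such a realization, the cyclic structure of the governing-function representation above can be double-checked numerically. The following is a minimal sketch (the parameter and state values are arbitrary illustrative choices, not taken from the text): applying the vector field after the cyclic permutation $\rho(x_1,x_2,x_3)=(x_2,x_3,x_1)$ gives the same result as permuting the vector field.

```python
# Sketch: check that the representation of the Guckenheimer-Holmes system
# via a single governing function f(z; y1, y2) = z + a z^3 + b z y1^2 + c z y2^2
# is equivariant under the cyclic permutation rho(x1, x2, x3) = (x2, x3, x1).
# All numeric values below are arbitrary illustrative choices.

a, b, c = -0.3, -0.2, -0.5

def f(z, y1, y2):
    return z + a * z**3 + b * z * y1**2 + c * z * y2**2

def vector_field(x):
    x1, x2, x3 = x
    # cyclic structure: each vertex is governed by the same function f
    return (f(x1, x2, x3), f(x2, x3, x1), f(x3, x1, x2))

def rho(x):
    x1, x2, x3 = x
    return (x2, x3, x1)

x = (0.4, -1.1, 0.7)
lhs = vector_field(rho(x))   # vector field evaluated at the permuted state
rhs = rho(vector_field(x))   # permuted vector field
assert all(abs(p - q) < 1e-12 for p, q in zip(lhs, rhs))
```

Note that the check only confirms the cyclic symmetry of the representation; it says nothing about whether $b \ne c$, which is what the undirected setup forces away.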
For now we consider the most general case where the coupling function may depend on the edge; we will later discuss the strict constraints a homogeneity assumption would place on the dynamics. The three subsystems in [\[eq:gh_cubic\]](#eq:gh_cubic){reference-type="eqref" reference="eq:gh_cubic"} are all-to-all coupled. Hence, the network $\mathcal{H}$ in consideration has to have edges $$\label{eq:gh-H1} \begin{split} \mathcal{E}^{(2)} &= \{ (\{1\}, \{1\}), (\{1\}, \{2\}), (\{1\}, \{3\}), \\ &\phantom{= \{} (\{2\}, \{1\}), (\{2\}, \{2\}), (\{2\}, \{3\}), \\ &\phantom{= \{} (\{3\}, \{1\}), (\{3\}, \{2\}), (\{3\}, \{3\}) \}, \text{ and}\\ \mathcal{E}^{(m)} &= \emptyset \text{ whenever } m >2, \end{split}$$ as represented by (note that self-loops $(\{1\}, \{1\}), (\{2\}, \{2\})$, and $(\{3\}, \{3\})$ are not included in the figure). **Theorem 4**. *The (hyper)graph $\mathcal{H}$ defined in [\[eq:gh-H1\]](#eq:gh-H1){reference-type="eqref" reference="eq:gh-H1"} allows for the realization of the Guckenheimer--Holmes system [\[eq:gh_cubic\]](#eq:gh_cubic){reference-type="eqref" reference="eq:gh_cubic"} as a network dynamical system if the coupling is not homogeneous.* *Proof.* Without loss of generality, we may realize all terms that do not describe interactions exclusively via the internal dynamics by setting $$F(z) = z + az^3.$$ This would allow us to discard the self-loops from the network entirely. To generate the remaining terms $bzy_1^2 + czy_2^2$ as the sum of two pairwise coupling functions, we need two types thereof: $$G^{(2.1)}(z;y_1) = bzy_1^2, \quad G^{(2.2)}(z;y_1) = czy_1^2.$$ This yields $$f(z,y_1,y_2) = F(z) + G^{(2.1)}(z;y_1) + G^{(2.2)}(z;y_2).$$ To realize the explicit system [\[eq:gh_cubic\]](#eq:gh_cubic){reference-type="eqref" reference="eq:gh_cubic"} we obtain the desired vector field for $$\begin{aligned} G_{(\{1\}, \{2\})}=G_{(\{2\}, \{3\})}=G_{(\{3\}, \{1\})} &= G^{(2.1)} ,\\ G_{(\{1\}, \{3\})}=G_{(\{2\}, \{1\})}=G_{(\{3\}, \{2\})} &= G^{(2.2)}.
\end{aligned}$$ ◻ The assumption of two different coupling functions can be regarded as two types of edges and the network would more precisely be represented as in . If we assume homogeneity in the coupling, i.e., $G_e = G^{(2)}$ for all $e\in\mathcal{E}$, this construction would force $b=c$. In this scenario, however, the equilibria are not saddles any longer and the heteroclinic cycle cannot exist. Thus, the emergence of the Guckenheimer--Holmes cycle depends crucially on the fact that the governing function $f$ may distinguish between the interaction of a given vertex with the other two. In a classical network with pairwise coupling, this requires two edge types and we say the network has *asymmetric inputs*. *Heterogeneity in the coupling function is necessary* to realize the Guckenheimer--Holmes heteroclinic cycle. In the remainder of , we restrict ourselves to the case of homogeneous coupling in every order and explore how higher-order interactions can be used to break the symmetry in the inputs so that the Guckenheimer--Holmes cycle may arise. ## Only True $2$-to-$1$ Connections {#subsubsec:gh-2-to-1-only} First, we note that a hypergraph can have 'too few' hyperedges to induce heterogeneity in the inputs necessary for the Guckenheimer--Holmes cycle. In [\[eq:gh_cubic\]](#eq:gh_cubic){reference-type="eqref" reference="eq:gh_cubic"}, each node receives inputs from both other nodes. One might consider this a triplet interaction so that the obvious choice would be to consider dynamics on the hypergraph with edges $$\label{eq:gh-H2} \begin{split} \mathcal{E}^{(3)} &= \{(\{2,3\}, \{1\}), (\{1,3\}, \{2\}), (\{1,2\}, \{3\})\}, \text{ and}\\ \mathcal{E}^{(m)} &= \emptyset \text{ whenever } m \ne3. \end{split}$$ Note that these are 'true' triplet interactions as the head and tail sets of all edges do not intersect. **Theorem 5**.
*Dynamics on a hypergraph $\mathcal{H}$ with three vertices that only contains effective $2$-to-$1$ hyperedges does not allow for the realization of the Guckenheimer--Holmes system [\[eq:gh_cubic\]](#eq:gh_cubic){reference-type="eqref" reference="eq:gh_cubic"} as a network dynamical system.* *Proof.* A hypergraph $\mathcal{H}$ with three vertices that only contains effective $2$-to-$1$ hyperedges is necessarily of the form [\[eq:gh-H2\]](#eq:gh-H2){reference-type="eqref" reference="eq:gh-H2"}. We may trivially generate the interaction terms $bzy_1^2+czy_2^2$ via a third order coupling function $G^{(3)}(z;y_1,y_2)$. However, this requires the coupling function not to be invariant under transposition of the inputs: $G^{(3)}(z;y_1,y_2) \not\equiv G^{(3)}(z;y_2,y_1)$. In particular, this would require that each (directed) hyperedge (of order $3$) is sensitive to the order of its inputs, which is not part of the framework. Here, $G^{(3)}$---and as a result also $f$---is symmetric in its inputs which forces $b=c$ so that the Guckenheimer--Holmes cycle cannot arise. ◻ ## Directed Edges and True $2$-to-$1$ Connections {#subsubsec:gh-edge+2-to-1} In contrast to , even if hyperedges are not sensitive to the order of their inputs, we may still break the symmetry in the inputs. The straightforward way to do so is to include only one pairwise edge targeting each vertex in addition to the triplet interaction. This allows each vertex to distinguish the inputs coming from any other vertex (only triplet or triplet and edge). Again, the head and tail sets of all hyperedges do not intersect. **Theorem 6**.
*Dynamics on the hypergraph $\mathcal{H}=(\mathcal{V},\mathcal{E})$ on three vertices with edges $$\label{eq:gh-H3} \begin{split} \mathcal{E}^{(2)} &= \{(\{2\}, \{1\}), (\{3\}, \{2\}), (\{1\}, \{3\})\},\\ \mathcal{E}^{(3)} &= \{(\{2,3\}, \{1\}), (\{1,3\}, \{2\}), (\{1,2\}, \{3\})\}, \text{ and}\\ \mathcal{E}^{(m)} &= \emptyset \text{ whenever } m >3, \end{split}$$ allows for the realization of the Guckenheimer--Holmes system [\[eq:gh_cubic\]](#eq:gh_cubic){reference-type="eqref" reference="eq:gh_cubic"} as a network dynamical system.* *Proof.* To prove this assertion, we just give possible coupling functions explicitly. Define $$\begin{aligned} F(z) &= z+az^3 \\ G^{(2)}(z;y_1) &= (b-c)zy_1^2 \\ G^{(3)}(z;y_1,y_2) &= czy_1^2 + czy_2^2 = G^{(3)}(z;y_2,y_1). \end{aligned}$$ They realize the desired vector field as in [\[eq:GH-F\]](#eq:GH-F){reference-type="eqref" reference="eq:GH-F"} governed by $$f(z,y_1,y_2) = F(z) + G^{(2)}(z;y_1) + G^{(3)}(z;y_1,y_2).$$ ◻ An analogous construction is possible in the hypergraph in which the classical edges---i.e., hyperedges of order $2$---have exchanged sources for each vertex simultaneously. As this example shows that it is possible to generate the cubic Guckenheimer--Holmes system [\[eq:gh_cubic\]](#eq:gh_cubic){reference-type="eqref" reference="eq:gh_cubic"} if the coupling is homogeneous in every order, we will restrict to this case in the remainder of this section. ## Self-Influence Through Intersecting Head and Tail So far, we have always assumed that the hyperedges have disjoint heads and tails, that is, the (hyper)edges are actually of effective order two and three. By contrast, hyperedges where the head and tail sets intersect have a lower effective order. For example, the hyperedge $(\{1,2\}, \{1\})$ is degenerate in the sense that it corresponds to an effective coupling between a pair of nodes. Such coupling gives more flexibility to choose interaction functions to obtain a desired vector field.
This allows us to construct the Guckenheimer--Holmes system [\[eq:gh_cubic\]](#eq:gh_cubic){reference-type="eqref" reference="eq:gh_cubic"} similarly to . **Theorem 7**. *The hypergraph $\mathcal{H}=(\mathcal{V},\mathcal{E})$ on three vertices with hyperedges $$\label{eq:gh-H4} \begin{split} \mathcal{E}^{(2)} &= \{(\{2\}, \{1\}), (\{3\}, \{2\}), (\{1\}, \{3\})\},\\ \mathcal{E}^{(3)} &= \{(\{1,3\}, \{1\}), (\{1,2\}, \{2\}), (\{2,3\}, \{3\})\}, \text{ and}\\ \mathcal{E}^{(m)} &= \emptyset \text{ whenever } m >3, \end{split}$$ allows for the realization of the Guckenheimer--Holmes system [\[eq:gh_cubic\]](#eq:gh_cubic){reference-type="eqref" reference="eq:gh_cubic"} as a network dynamical system.* *Proof.* With coupling functions $$\begin{aligned} F(z) &= z+(a-c)z^3 \\ G^{(2)}(z;y_1) &= bzy_1^2 \\ G^{(3)}(z;y_1,y_2) &= czy_1^2+czy_2^2 = G^{(3)}(z;y_2,y_1) \end{aligned}$$ the hypergraph $\mathcal{H}$ realizes the desired governing function $f$ as in [\[eq:GH-F\]](#eq:GH-F){reference-type="eqref" reference="eq:GH-F"}, as $$f(z,y_1,y_2) = F(z) + G^{(2)}(z;y_1) + G^{(3)}(z;z,y_2).$$ ◻ Note that the symmetry of $G^{(3)}$ causes an additional term $cz^3$. This can be compensated by the internal dynamics function $F(z)$---we could have also included classical self-loop edges for this purpose. Inspecting the interaction terms in [\[eq:gh_cubic\]](#eq:gh_cubic){reference-type="eqref" reference="eq:gh_cubic"}---or equivalently those of the governing function $f$ in [\[eq:GH-F\]](#eq:GH-F){reference-type="eqref" reference="eq:GH-F"}---note that the monomial terms describe either self-influences or pairwise interactions. This is also the reason why we can generate [\[eq:gh_cubic\]](#eq:gh_cubic){reference-type="eqref" reference="eq:gh_cubic"} using only classical edges (cf. ) as long as there are edges of two types.
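The compensation of the extra $cz^3$ term can be verified directly. The following is a minimal numerical sketch (all numeric values are arbitrary illustrative choices): since the degenerate hyperedge feeds the vertex its own state, the third-order coupling is evaluated as $G^{(3)}(z;z,y_2)$, which produces the $cz^3$ term absorbed by the adjusted internal dynamics.

```python
# Sketch: verify the degenerate-hyperedge construction for one vertex.
# The degenerate edge supplies the vertex's own state as a coupling input,
# so G3 is evaluated as G3(z; z, y2), creating an extra c*z^3 term that is
# absorbed into F(z) = z + (a - c) z^3.
# All numeric values are arbitrary illustrative choices.

a, b, c = -0.3, -0.2, -0.5

def F(z):
    return z + (a - c) * z**3

def G2(z, y1):
    return b * z * y1**2

def G3(z, u, v):                      # symmetric in (u, v)
    return c * z * u**2 + c * z * v**2

def f(z, y1, y2):                     # target governing function
    return z + a * z**3 + b * z * y1**2 + c * z * y2**2

z, y1, y2 = 0.4, -1.1, 0.7
realized = F(z) + G2(z, y1) + G3(z, z, y2)   # z appears twice: degenerate edge
assert abs(realized - f(z, y1, y2)) < 1e-12
```

The same bookkeeping works for degenerate hyperedges of any higher order: each self-input contributes a pure $z$-monomial that the internal dynamics can cancel.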
Dynamics on a hypergraph with degenerate higher-order interactions that are effectively pairwise interactions, as considered here, are a different way to generate two types of interactions necessary for the Guckenheimer--Holmes cycle. While the example in has degenerate hyperedges of order three, this generalizes to degenerate hyperedges of any higher order. ## Uniform Hypergraphs {#subsec:gh-directed-uniform} The final class of dynamical systems we consider in the context of the Guckenheimer--Holmes cycle are network dynamics on $m$-uniform hypergraphs; these are commonly considered in the literature. In particular, all hyperedges are of the same order. Since the network will have three vertices, we only have to consider $2$-, $3$-, and $4$-uniform hypergraphs. First, note that $2$-uniform hypergraphs are graphs and thus the question reduces to realizing the Guckenheimer--Holmes cycle in a classical network dynamical system with pairwise interactions. As we have seen in , this requires two different coupling functions for hyperedges of order two. Hence, under the condition of homogeneous coupling in every order it is not possible to construct the Guckenheimer--Holmes cycle in a $2$-uniform hypergraph. Second, we consider dynamics on $3$-uniform hypergraphs. If we do not allow self-influences---meaning no degenerate hyperedges---the only $3$-uniform hypergraph on three vertices is the one considered in . As we have shown, dynamics on such hypergraphs cannot realize the Guckenheimer--Holmes system [\[eq:gh_cubic\]](#eq:gh_cubic){reference-type="eqref" reference="eq:gh_cubic"}. If we allow for $3$-uniform hypergraphs with self-couplings, the resulting class of hypergraphs on three vertices is much larger. Dynamics on many such hypergraphs allow for the generation of the Guckenheimer--Holmes system [\[eq:gh_cubic\]](#eq:gh_cubic){reference-type="eqref" reference="eq:gh_cubic"} as summarized in the following statement. **Theorem 8**.
*The Guckenheimer--Holmes system [\[eq:gh_cubic\]](#eq:gh_cubic){reference-type="eqref" reference="eq:gh_cubic"} can be realized as a network dynamical system on $3$-uniform hypergraphs. The admissible hypergraphs can be organized into $20$ different categories that are listed in .* Parts of the proof are simply technical and not very enlightening. For this reason, we present only the general idea with an example here. The remainder of the proof, as well as the list of suitable categories together with example hypergraphs can be found in . *Proof.* We allow for all hyperedges of the forms $(\{i,j\}, \{k\}), (\{i,j\}, \{k,l\})$, and $(\{i,j\}, \{1,2,3\})$ where $i,j,k,l\in\{1,2,3\}$ with $i\ne j$ as well as $k\ne l$. These are all hyperedges of order three. In particular, the dynamics on any vertex is governed by the sum of the internal function $F$ and multiple instances of $G^{(3)}$ whose inputs are determined by the hyperedges that have the vertex in their head. To generate the cubic Guckenheimer--Holmes system, this sum has to equal $f$ in [\[eq:GH-F\]](#eq:GH-F){reference-type="eqref" reference="eq:GH-F"} for any vertex. We may focus on one arbitrary vertex, say vertex $k$, for the construction. We denote the state variable of the corresponding node in the network dynamical system by $z\in\mathbb{R}$ and those corresponding to the two neighbors by $y_1, y_2 \in\mathbb{R}$. To generate $f$ in [\[eq:GH-F\]](#eq:GH-F){reference-type="eqref" reference="eq:GH-F"}, the intrinsic dynamics $F$ and coupling function $G^{(3)}$ have to be polynomial in their arguments. In fact, $F$ has to contain the monomials $z$ and $z^3$, while $G^{(3)}(z;y_1,y_2)$ has to contain the monomials $zy_1^2$ and $zy_2^2$ to be able to generate the corresponding terms in $f$.
They are thus of the form $$\begin{aligned} F(z) &= \alpha_1z + \alpha_2z^3 + P(z), \\ G^{(3)}(z;y_1,y_2) &= \beta zy_1^2+ \beta zy_2^2 +P'(z;y_1,y_2), \end{aligned}$$ where $P$ and $P'$ collect all terms such that $P$ does not contain the monomials $z$ and $z^3$, while $P'$ does not contain the monomials $zy_1^2$ and $zy_2^2$. Without loss of generality, we also assume that $P'$ does not contain the monomials $z$ and $z^3$ either, as these would merely cause shifted values for $\alpha_1$ and $\alpha_2$ in the argumentation and results that follow, cf. [\[eq:gh-directed-uniform\]](#eq:gh-directed-uniform){reference-type="eqref" reference="eq:gh-directed-uniform"} and below. Note that there is only one coefficient $\beta\in\mathbb{R}$ in $G^{(3)}$ due to the symmetry in $y_1$ and $y_2$. Let $E_0$ denote the set of true $2$-to-$1$ hyperedges whose head contains the chosen vertex $k$. Moreover, let $E_1$ and $E_2$ denote the sets of degenerate hyperedges whose head and tail contain $k$ and whose tail contains one of the other neighbors resulting in a $y_1$- or a $y_2$-influence, respectively. Then the dynamics of the vertex $k$ in focus is governed by $$\label{eq:gh-directed-uniform} \begin{split} &F(z) + \sum_{e\in E_0} G^{(3)}(z;y_1,y_2) + \sum_{e\in E_1} G^{(3)}(z;z,y_1) + \sum_{e\in E_2} G^{(3)}(z;z,y_2) \\ &\quad = F(z) + \Pi G^{(3)}(z;y_1,y_2) + \Phi G^{(3)}(z;z,y_1) + \Psi G^{(3)}(z;z,y_2) \\ &\quad= \alpha_1z+\alpha_2z^3 + \Pi\beta (zy_1^2+zy_2^2) + \Phi\beta (z^3+zy_1^2) \\ &\phantom{=\alpha_1X} + \Psi\beta (z^3+zy_2^2) + Q(z,y_1,y_2) \\ &\quad = \alpha_1z + (\alpha_2 + \beta (\Phi+\Psi))z^3 + \beta(\Pi+\Phi)zy_1^2\\ &\phantom{=\alpha_1X} + \beta(\Pi+\Psi)zy_2^2 + Q(z,y_1,y_2), \end{split}$$ where $\Pi=|E_0|, \Phi=|E_1|, \Psi=|E_2|$ and $Q(z,y_1,y_2)$ is a polynomial that does not contain the monomials $z$, $z^3$, $zy_1^2$ and $zy_2^2$. 
In particular, $\beta(\Pi+\Phi)zy_1^2 + \beta(\Pi+\Psi)zy_2^2$ cannot be equal to $bzy_1^2+czy_2^2$ for *arbitrary* $b, c\in\mathbb{R}$, since $\Pi, \Phi, \Psi$ are integers. For suitable $\Pi, \Phi, \Psi$, we can however choose suitable values for $\alpha_1, \alpha_2, \beta \in\mathbb{R}$ to satisfy the sufficient conditions for the emergence of the Guckenheimer--Holmes cycle (cf. below [\[eq:gh_cubic\]](#eq:gh_cubic){reference-type="eqref" reference="eq:gh_cubic"})---in fact, we always need $\alpha_1=1$ and thus abbreviate $\alpha=\alpha_2$. For example, assume $\Pi=0, \Phi=1, \Psi=2$. Then we choose $$\begin{aligned} F(z) &= z+\frac{2}{5}z^3, \\ G^{(3)}(z;y_1,y_2) &= -\frac{7}{30} zy_1^2-\frac{7}{30}zy_2^2, \end{aligned}$$ that is, $\alpha=\frac{2}{5}$ and $\beta=-\frac{7}{30}$, and $P\equiv0, P'\equiv0$. Then, $$\begin{aligned} &F(z) + \sum_{e\in E_0} G^{(3)}(z;y_1,y_2) + \sum_{e\in E_1} G^{(3)}(z;z,y_1) + \sum_{e\in E_2} G^{(3)}(z;z,y_2) \\ &\quad= z-\frac{3}{10}z^3-\frac{7}{30}zy_1^2-\frac{7}{15}zy_2^2. \end{aligned}$$ This equals $f(z;y_1,y_2)$ in [\[eq:GH-F\]](#eq:GH-F){reference-type="eqref" reference="eq:GH-F"} with $a=-\frac{3}{10}, b=-\frac{7}{30}, c=-\frac{7}{15}$. In particular, these parameters satisfy $a+b+c=-1, -\frac{1}{3}<a<0, c<a<b<0$ so that the Guckenheimer--Holmes cycle exists for this governing function. It remains to be shown that there is indeed a configuration of hyperedges such that $\Pi=0, \Phi=1, \Psi=2$ for all three vertices. In fact, choosing $$\begin{aligned} \mathcal{E}= \mathcal{E}^{(3)} = \{ &(\{1,3\}, \{1\}), (\{1,2\}, \{2\}), (\{2,3\}, \{3\}), \\ & (\{1,3\}, \{1,3\}), (\{1,2\}, \{1,2\}), (\{2,3\}, \{2,3\}) \}, \end{aligned}$$ we obtain a hypergraph in which no vertex is in the head of a true $2$-to-$1$ hyperedge, and each vertex receives one degenerate input from its next neighbor and two degenerate inputs from its second neighbor.
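The coefficient bookkeeping in this example can be double-checked with exact rational arithmetic; a minimal sketch using the values from the proof:

```python
# Sketch: verify the coefficient arithmetic for the 3-uniform example with
# (Pi, Phi, Psi) = (0, 1, 2), alpha = 2/5, beta = -7/30, and check the
# sufficient conditions for the Guckenheimer-Holmes cycle stated in the text.
from fractions import Fraction

Pi, Phi, Psi = 0, 1, 2
alpha = Fraction(2, 5)
beta = Fraction(-7, 30)

# Collected coefficients of z^3, z*y1^2, z*y2^2:
a = alpha + beta * (Phi + Psi)
b = beta * (Pi + Phi)
c = beta * (Pi + Psi)

assert (a, b, c) == (Fraction(-3, 10), Fraction(-7, 30), Fraction(-7, 15))
# Sufficient conditions for the Guckenheimer-Holmes cycle:
assert a + b + c == -1
assert Fraction(-1, 3) < a < 0
assert c < a < b < 0
```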
In total, there are only $20$ configurations of $\Pi, \Phi, \Psi$ that can be realized by a $3$-uniform hypergraph and that allow for the realization of the Guckenheimer--Holmes cycle. A full list together with example hypergraphs can be found in . ◻ Finally, we turn to $4$-uniform hypergraphs. **Theorem 9**. *The Guckenheimer--Holmes system [\[eq:gh_cubic\]](#eq:gh_cubic){reference-type="eqref" reference="eq:gh_cubic"} cannot be realized as a network dynamical system on a $4$-uniform hypergraph.* *Proof.* All hyperedges in a $4$-uniform hypergraph on three vertices are of the form $(\{1,2,3\}, \{k\}), (\{1,2,3\}, \{k,l\})$, or $(\{1,2,3\}, \{1,2,3\})$. Thus, the state of the $k$th vertex evolves according to $$\dot{x}_k=F(x_k)+ \sum_{\substack{e\in\mathcal{E}\\ k\in H(e)}} G^{(4)}(x_k;x_1,x_2,x_3).$$ Due to the symmetry properties of $G^{(4)}$, the right hand side is invariant under exchanging $x_i$ and $x_j$ for $i,j\ne k$. As we established before, this cannot realize the cubic Guckenheimer--Holmes system [\[eq:gh_cubic\]](#eq:gh_cubic){reference-type="eqref" reference="eq:gh_cubic"}. ◻ ## Structural Stability and Generic Dynamics on Hypergraphs So far, we have focused on explicitly realizing the cubic Guckenheimer--Holmes system as dynamical systems on different hypergraphs. However, the classical result in [@Guckenheimer.1988] is much stronger: It states that the heteroclinic cycle not only exists for the particular cubic vector field given but also for any perturbation of the vector field that preserves the symmetry properties. Specifically, there exists an open subset of the set of all vector fields of the form $$\label{eq:gh-sym1} \begin{pmatrix} f(x_1,x_2,x_3) \\ f(x_2,x_3,x_1) \\ f(x_3,x_1,x_2) \end{pmatrix},$$ with $f$ satisfying the symmetry condition $$\label{eq:gh-sym2} f(x_1,x_2,x_3)=-f(-x_1,x_2,x_3)=f(x_1,-x_2,x_3)=f(x_1,x_2,-x_3)$$ for which the Guckenheimer--Holmes cycle exists. This has several consequences that relate to structural stability.
First, for any of the given hypergraphs above where the Guckenheimer--Holmes cycle could be realized, it is actually *stable under perturbations of the coupling functions* (as long as they preserve the symmetry properties). For example, perturbations of the coupling function by weak higher-order monomials do not destroy the cycle. Second, if we allow for distinct coupling functions for each edge, then we have persistence of the heteroclinic cycle under *structural perturbations of the hypergraph* (for example, adding an edge with a coupling function that is uniformly bounded and small) as long as the symmetry properties of the vector field are preserved. Third, one can get existence for the class of hypergraphs that contain all possible hyperedges as shown in the following statement. **Theorem 10**. *The Guckenheimer--Holmes cycle emerges in network dynamical systems on a hypergraph on three vertices with *all* possible hyperedges.* *Proof.* A generic hypergraph dynamical system on three vertices with homogeneous coupling in every order is a system in which *all* hyperedges are present and the coupling functions are generic in the sense that their formal series expansions [\[eq:expansion\]](#eq:expansion){reference-type="eqref" reference="eq:expansion"} contain generic coefficients. Since all hyperedges are present and there is only one coupling function per hyperedge order, the system is necessarily equivariant under arbitrary permutations of the three vertices. In particular, it is a special case of [\[eq:gh-sym1\]](#eq:gh-sym1){reference-type="eqref" reference="eq:gh-sym1"}, which are precisely the ones that are equivariant under cyclic permutations.
If we additionally assume that the coupling functions satisfy $$\begin{aligned} F(z) &= -F(-z) \\ G^{(2)}(z;y_1) &= -G^{(2)}(-z;y_1)= G^{(2)}(z;-y_1) \\ G^{(3)}(z;y_1,y_2) &= -G^{(3)}(-z;y_1,y_2) = G^{(3)}(z;-y_1,y_2) = G^{(3)}(z;y_1,-y_2) \\ G^{(4)}(z;y_1,y_2,y_3) &= -G^{(4)}(-z;y_1,y_2,y_3) = G^{(4)}(z;-y_1,y_2,y_3) \\ &= G^{(4)}(z;y_1,-y_2,y_3) = G^{(4)}(z;y_1,y_2,-y_3) \end{aligned}$$ then any sum of them also satisfies [\[eq:gh-sym2\]](#eq:gh-sym2){reference-type="eqref" reference="eq:gh-sym2"}. This holds even if the internal variable $z$ is also one of the coupling variables. In particular, we assume that the formal series expansions [\[eq:expansion\]](#eq:expansion){reference-type="eqref" reference="eq:expansion"} contain only terms that are simultaneously of odd degree in the internal variable and of even degree in the input variables. Note that for $G^{(4)}$ we have $y_j=z$ for some $j$, as there are only three different variables and no variable can be a coupling variable twice. Thus, under these assumptions, a small enough perturbation of any of the cubic Guckenheimer--Holmes systems we were able to construct on a hypergraph by a generic hypergraph dynamical system preserves the heteroclinic cycle. In fact, this implies that the hypergraph with all hyperedges also supports the Guckenheimer--Holmes cycle. ◻ *Remark 3*. Note that the same argumentation yields the same result for any hypergraph that guarantees the cyclic equivariance [\[eq:gh-sym1\]](#eq:gh-sym1){reference-type="eqref" reference="eq:gh-sym1"}.
Heuristically speaking, this is satisfied if and only if the different types of couplings are distributed cyclically over the three vertices with the types being - classical pairwise edges, - true $2$-to-$1$ couplings, - degenerate hyperedges of order three that realize a $1$-to-$1$ coupling from the right neighbor, - degenerate hyperedges of order three that realize a $1$-to-$1$ coupling from the left neighbor, - and inputs by all three vertices (here the precise structure is not important, e.g., $(\{1,2,3\}, \{1,2,3\})$ yields the same terms in the equations of motion as the combination of $(\{1,2,3\}, \{1\})$ and $(\{1,2,3\}, \{2,3\})$). # The Field Cycle for Dynamics on Directed Hypergraphs {#sec:oscar-directed} Now we turn to the Field cycle as an example of the more general construction to obtain dynamical systems with prescribed heteroclinic structures. In contrast to the previous section, the Field cycle has two saddle equilibria in the subspace $\Delta$ corresponding to full synchrony that are connected by heteroclinic trajectories that lie in two different subspaces that correspond to partially synchronous states. As we have seen in , undirected hyperedges preserve 'too many symmetries' in the equations for the construction to work. Similar to the Guckenheimer--Holmes cycle, this is no longer the case if directed hyperedges are considered. However, the different setup leads to some significant differences in realizing the Field cycle compared to the Guckenheimer--Holmes cycle. For example, Field's construction cannot work in $m$-uniform hypergraphs. Before investigating the construction in more detail below, we can make several straightforward observations similar to the Guckenheimer--Holmes cycle in . The Field cycle, as an example of the construction in [@Aguiar.2011; @Field2017; @Weinberger.2018], is realized for a classical network with two types of couplings (see ), i.e., two different coupling functions for edges.
In that regard, it is very similar to the Guckenheimer--Holmes system and many of the general observations made in can be made here just as well. We can obviously follow the same construction if the hypergraph is a classical network without the assumption of homogeneous coupling in every order---in fact, it was shown in [@Field.2015] that the construction can be performed for dyadic networks with additive input structure. The same can still be done if we restrict to homogeneous coupling in every order by replacing one type of pairwise couplings with degenerate $2$-to-$1$ hyperedges. These allow us to distinguish two different types of pairwise coupling through $G^{(2)}$ and $G^{(3)}$. On the other hand, if we include hyperedges in a symmetric manner---e.g., all edges, all true $2$-to-$1$ hyperedges etc.---the construction is not possible. ## Preliminary Observations Recall that the construction of the Field cycle crucially depends on the (partial) synchrony subspaces that are dynamically invariant for the given network independent of the specific governing functions. In a network with three vertices there are four synchrony subspaces $\Delta=\{x_1=x_2=x_3\}$, $S_3=\{x_1=x_2\}$, $S_2=\{x_1=x_3\}$, $S_1=\{x_2=x_3\}$. If the intrinsic node dynamics are one-dimensional, as we assume here, the first subspace is one-dimensional while all other ones are two-dimensional. We require that the fully synchronous subspace is robust, which requires some form of homogeneity in the hypergraph and the equations of motion as summarized in the following statement. **Lemma 11**. *Consider system [\[eq:hypernetdyn\]](#eq:hypernetdyn){reference-type="eqref" reference="eq:hypernetdyn"} as a network dynamical system on a hypergraph with three vertices. Assume that the coupling is homogeneous in every order. 
The fully synchronous subspace $\Delta$ is dynamically invariant for each choice of functions $F, G^{(2)}, G^{(3)}, \dotsc$ if and only if the number of hyperedges of order $m$ targeting a given vertex is the same for each vertex.* *Proof.* Assume dynamical invariance of $\Delta = \{x_1 = x_2 = x_3\}$. Then for any point $\mathbf{x}=(z,z,z)^\mathsf{T}\in\Delta$, the right hand sides of [\[eq:hypernetdyn\]](#eq:hypernetdyn){reference-type="eqref" reference="eq:hypernetdyn"} must be equal for each $k$. Write $N(m; k):=\# \left\{e\in \mathcal{E}^{(m)} \mid k\in H(e) \right\}$ for the number of hyperedges of order $m$ whose heads contain vertex $k$. Substituting $\mathbf{x}$ into the system we have $$\label{eq:fullsynch} \begin{split} \dot z &= F(z) + \sum_{\substack{e\in \mathcal{E}^{(2)} \\ k\in H(e)}} G^{(2)} (z; z) + \sum_{\substack{e\in \mathcal{E}^{(3)} \\ k\in H(e)}} G^{(3)} (z; z, z) + \dotsb\\ & = F(z) + N(2; k)G^{(2)} (z; z) +N(3; k)G^{(3)} (z; z, z) + \dotsb. \end{split}$$ This expression is independent of $k$ if and only if the $N(m; k)$ are independent of $k$, which proves the lemma. ◻ *Remark 4*. The result is not restricted to the case $N=3$. The same proof works for arbitrary values of $N$. *Remark 5*. We will see below that it is possible to construct the Field cycle for hypergraphs without degenerate couplings and with homogeneous coupling in every order (cf. ). We will make this assumption in the remainder of this section. The goal of the construction is to generate heteroclinic connections in partially synchronous spaces between two hyperbolic equilibria in the fully synchronous subspace $\Delta$. In order for such a heteroclinic connection to be possible, equilibria in $\Delta$ need both a stable and an unstable direction outside of the fully synchronous subspace. In particular, the construction relies on the existence of two invariant partially synchronous subspaces in addition to the fully synchronous subspace. We state the following necessary condition.
**Lemma 12**. *There are no local obstructions to the construction of the Field cycle if and only if in addition to the fully synchronous subspace there are precisely two partially synchronous dynamically invariant subspaces.* *Proof.* Since the construction relies on the presence of two partially synchronous dynamically invariant subspaces, it is clear that it is obstructed when the network structure allows for only one of the partially synchronous subspaces. On the other hand, whenever the network allows for all three partially synchronous subspaces, the procedure always fails due to a double eigenvalue of the linearization at a fully synchronous point. In fact, any $3\times3$ matrix leaving $\Delta, S_3, S_2, S_1$ invariant is automatically of the form $$\begin{pmatrix} \alpha & \beta & \gamma \\ \delta & \alpha + \beta - \delta & \gamma \\ \delta & \beta & \alpha + \gamma - \delta \end{pmatrix}.$$ This matrix has an eigenvalue $\alpha+\beta+\gamma$ with eigenvector $(1,1,1)^\mathsf{T}$ as well as a double eigenvalue $\alpha-\delta$ with eigenvectors $(1,-\frac{\delta}{\beta},0)^\mathsf{T},(1,0,-\frac{\delta}{\gamma})^\mathsf{T}$. In particular, no steady state in the fully synchronous subspace can have a stable direction in one partial synchrony space and an unstable direction in another. This prevents the heteroclinic cycle from being realized. Any $3\times3$ matrix leaving $\Delta, S_3, S_2$ invariant but not $S_1$ is of the form $$\begin{pmatrix} \alpha & \beta & \gamma \\ \delta & \alpha + \beta - \delta & \gamma \\ \epsilon & \beta & \alpha + \gamma - \epsilon \end{pmatrix}.$$ This matrix has eigenvalues $\alpha+\beta+\gamma, \alpha-\epsilon, \alpha-\delta$ with corresponding eigenvectors $(1,1,1)^\mathsf{T}, (1,1, -\frac{\beta+\epsilon}{\gamma})^\mathsf{T},(1,-\frac{\gamma+\delta}{\beta},1)^\mathsf{T}$ which poses no obstructions to the realization. Analogous observations can be made for the other two combinations of the two-dimensional subspaces.
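Although not part of the formal argument, the eigenvalue structure of both matrix families can be double-checked symbolically. The following sketch (using Python's SymPy, an illustrative choice rather than anything used in the paper) confirms the double eigenvalue $\alpha-\delta$ when all four subspaces are left invariant, and its splitting once $S_1$-invariance is dropped:

```python
import sympy as sp

al, be, ga, de, ep = sp.symbols('alpha beta gamma delta epsilon')

# A 3x3 matrix leaving Delta, S_3, S_2, and S_1 invariant: its spectrum is a
# simple eigenvalue alpha+beta+gamma and a double eigenvalue alpha-delta,
# so no fully synchronous saddle can have a stable direction in one partial
# synchrony space and an unstable direction in another.
M1 = sp.Matrix([
    [al, be, ga],
    [de, al + be - de, ga],
    [de, be, al + ga - de],
])
ev1 = M1.eigenvals()  # dict {eigenvalue: algebraic multiplicity}

# Dropping invariance of S_1 splits the double eigenvalue into
# alpha-delta and alpha-epsilon.
M2 = sp.Matrix([
    [al, be, ga],
    [de, al + be - de, ga],
    [ep, be, al + ga - ep],
])
ev2 = M2.eigenvals()

print(ev1)
print(ev2)
```

The multiplicities returned by `eigenvals` make the dichotomy in the proof explicit: three invariant planes force a repeated transverse eigenvalue, two do not.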
◻ This yields a necessary condition for the construction of the Field cycle that is easy to check: Does the network structure allow for the fully synchronous plus exactly two partially synchronous subspaces to be dynamically invariant independent of the governing functions? *Remark 6*. It can readily be seen that the dynamical invariance of a (partial) synchrony subspace is independent of the coupling functions being nodespecific or nodeunspecific. Consider two coupling functions $G^{(m)}(x_i;\cdot)$ and $G^{(m)}(x_j;\cdot)$ that target vertices $i$ and $j$, respectively. If, in a (partial) synchrony subspace to be checked, $i$ and $j$ are synchronous, both functions receive the same first argument. The same is true if both functions depend trivially on their first argument. *Remark 7*. The question of which (partial) synchrony subspaces are dynamically invariant independent of the specific governing functions boils down to the combinatorial problem of finding so-called *balanced partitions* in the network. These have famously been introduced and algebraically studied in [@Field.2004; @Golubitsky.2006; @DeVille.2015; @Nijholt.2020] for coupled cell networks. More recently, first advances have been made to generalize balanced partitions to network dynamical systems with higher-order interactions  [@Aguiar2020; @Salova.2021b; @Nijholt.2022c; @vonderGracht.2023]. For instance, in order for vertices $1$ and $2$ to synchronize, the partition $\{1,2\}, \{3\}$ needs to be balanced, meaning that any vertex in an element of the partition needs to receive the same 'kind' of inputs from any element of the partition as any other vertex in that same element. So assume there is a hyperedge $(\{2,3\}, \{1\})$. For the partition to be balanced, vertex $2$ needs to receive a hyperedge with one input from the same element in the partition and one from outside. This leaves two options $(\{1,3\}, \{2\})$ and $(\{2,3\}, \{2\})$.
In a similar manner, one may then construct networks with hyperedges of order $3$ that have precisely two balanced partitions. In order to keep the presentation as straightforward as possible, we will not use this method in the remainder of this section. Instead, we use elementary combinatorics to enumerate possible hypergraphs and check the corresponding equations of motion for invariant (partial) synchrony subspaces. *Remark 8*. Finally, we remark that the construction for the heteroclinic dynamics in the hypergraphs presented or mentioned in this section might bear subtle differences from the one in [@Field.2015] (which has been specified for the three-node case in [@Weinberger.2018]). However, it always follows the same lines. In particular, it can always be arranged that two trajectories that connect two fully synchronous equilibria and locally coincide with their respective one-dimensional unstable manifolds do not intersect when projected to the partial synchrony subspaces identified with $\mathbb{R}^2$. Hence, this kind of global obstruction to the construction cannot occur (see below for an example and [@Aguiar.2011; @Field.2015] for details). ## No Degenerate Hyperedges {#subsec:oscar-nondegenerate} The realization of the Field cycle is possible in a network dynamical system on a hypergraph that contains only non-degenerate hyperedges, assuming that the coupling is homogeneous in every order. The reasoning is similar to . The combination of a classical edge and a true $2$-to-$1$ hyperedge allows a targeted vertex to distinguish between the inputs of its two neighbors. We present the details to illustrate the restrictions that have to be checked for the construction to work. **Theorem 13**.
*There is a dynamical system on a hypergraph with three vertices and edges $$\label{eq:field-H1} \begin{split} \mathcal{E}^{(2)} &= \{(\{2\}, \{1\}), (\{1\}, \{2\}), (\{2\}, \{3\})\},\\ \mathcal{E}^{(3)} &= \{(\{2,3\}, \{1\}), (\{1,3\}, \{2\}), (\{1,2\}, \{3\})\}, \text{ and}\\ \mathcal{E}^{(m)} &= \emptyset \text{ whenever } m >3 \end{split}$$ that realizes the Field cycle.* *Proof.* Dynamics on the hypergraph on three vertices with edges [\[eq:field-H1\]](#eq:field-H1){reference-type="eqref" reference="eq:field-H1"} evolve according to $$\label{eq:oscar-nondegenerate} \begin{split} \dot{x}_1 &= F(x_1) + G^{(2)} (x_1; x_2) + G^{(3)} (x_1;x_2,x_3) \\ \dot{x}_2 &= F(x_2) + G^{(2)} (x_2; x_1) + G^{(3)} (x_2;x_1,x_3) \\ \dot{x}_3 &= F(x_3) + G^{(2)} (x_3; x_2) + G^{(3)} (x_3;x_1,x_2). \end{split}$$ This system has dynamically invariant subspaces $\Delta$, $S_3$, and $S_2$; however, $S_1$ is not dynamically invariant. Thus, there are no local obstructions. We now look at the local situation in more detail. The linearization of [\[eq:oscar-nondegenerate\]](#eq:oscar-nondegenerate){reference-type="eqref" reference="eq:oscar-nondegenerate"} at a fully synchronous equilibrium $\mathbf{x}=(z,z,z)^\mathsf{T}\in\Delta$ is of the form $$\begin{pmatrix} \alpha + \beta + \gamma & \delta + \epsilon & \epsilon \\ \delta + \epsilon & \alpha + \beta + \gamma & \epsilon \\ \epsilon & \delta + \epsilon & \alpha + \beta + \gamma \\ \end{pmatrix},$$ where [\[eq:ABGDE\]]{#eq:ABGDE label="eq:ABGDE"} $$\begin{aligned} \alpha &= F'(z), \\ \beta &= \partial_1G^{(2)}(z;z), \\ \gamma &= \partial_1 G^{(3)}(z;z,z), \\ \delta &= \partial_2 G^{(2)}(z;z), \\ \epsilon &= \partial_2 G^{(3)}(z;z,z) = \partial_3 G^{(3)}(z;z,z).
\end{aligned}$$ This matrix has eigenvalues and corresponding eigenvectors $$\begin{aligned} \lambda_1&=\alpha+\beta+\gamma+\delta+2\epsilon, & v_1 &= (1,1,1)^\mathsf{T}, \\ \lambda_2&=\alpha+\beta+\gamma-\delta-\epsilon, & v_2 &= \left(1,-\frac{\delta+2\epsilon}{\delta+ \epsilon},1\right)^\mathsf{T}\\ \lambda_3&=\alpha+\beta+\gamma-\epsilon, & v_3 &= \left(1,1,-\frac{\delta+2\epsilon}{\epsilon}\right)^\mathsf{T}. \end{aligned}$$ For two different fully synchronous equilibria $\mathbf{p}$, $\mathbf{q}$ it can then be arranged that they each have a one-dimensional unstable manifold in different partially synchronous subspaces by choosing suitable parameter values $\alpha, \dotsc, \epsilon\in\mathbb{R}$. Furthermore, projecting $S_3$ and $S_2$ to $\mathbb{R}^2$, it can be arranged that the slopes of the corresponding eigenlines have opposite signs---this requires $(\delta+\epsilon)\epsilon<0$, i.e., the two factors have opposite signs, at both equilibria. Then two trajectories from $\mathbf{p}$ to $\mathbf{q}$ and vice versa that coincide with the corresponding unstable manifolds locally can be assumed not to intersect when projected onto $\mathbb{R}^2$ (cf. Figure 18 in [@Aguiar.2011]). Since these projections are related by the network structure, there are no global obstructions to the simultaneous existence of both connecting trajectories. Hence, one can choose suitable coupling functions realizing the Field cycle (see [@Aguiar.2011; @Field.2015] and below for additional details). ◻ *Remark 9*. In network dynamical systems, projections onto different (dynamically invariant) subspaces are typically related by the network structure. For example, it may happen that the function governing the dynamics of one coordinate in one projection equals that of a different coordinate in the other projected system. If the local properties furthermore force connecting trajectories to intersect when projected onto the identified lower dimensional subspaces (cf.
Figure 18 in [@Aguiar.2011]) one cannot construct connecting trajectories independently of each other. In such a situation the construction of the Field cycle can be prohibited entirely by these global features; we say it is *globally obstructed*. In the previous proof, however, we may arrange for the connecting trajectory from $\mathbf{p}$ to $\mathbf{q}$ to lie on one side of the (dynamically invariant) diagonal in the projection to $\mathbb{R}^2$ and for the connecting trajectory from $\mathbf{q}$ to $\mathbf{p}$ to lie on the other side. Thus, no global obstructions occur. This approach realizes the heteroclinic connections without additional control over the other half of respective stable and unstable manifolds. The observation in does not change if the coupling is nodeunspecific. Note that these network dynamics on a hypergraph are different from the one in , which allowed for the Guckenheimer--Holmes cycle. An argument similar to the one in this section applies to dynamics on a hypergraph with hyperedges of order four, as these are necessarily degenerate and model $2$-to-$1$ couplings. ## Uniform Hypergraphs {#uniform-hypergraphs} While the Guckenheimer--Holmes cycle may be realized in network dynamical systems on $m$-uniform hypergraphs (cf. ), such hypergraphs do not provide enough degrees of freedom to realize the Field cycle. **Theorem 14**. *The Field cycle cannot be realized in a network dynamical system on an $m$-uniform hypergraph on three vertices with homogeneous coupling.* *Proof.* The following observations can be made immediately: $2$-uniform hypergraphs are classical networks, which we have discussed at the beginning of . On the other hand, $3$-uniform hypergraphs without non-degenerate hyperedges as well as all $4$-uniform hypergraphs are symmetric in the inputs and therefore do not allow for the construction (see ). The situation for $3$-uniform hypergraphs with degenerate hyperedges is more subtle.
Without specifying the hyperedges that are present in the hypergraph, the equations of motion are $$\label{eq:oscar-directed-uniform} \begin{split} \dot{x}_1 &= F(x_1) + \Pi_1G^{(3)}(x_1;x_2,x_3) + \Phi_1G^{(3)} (x_1;x_1,x_2) + \Psi_1G^{(3)}(x_1;x_1,x_3) \\ \dot{x}_2 &= F(x_2) + \Pi_2G^{(3)}(x_2;x_1,x_3) + \Phi_2G^{(3)} (x_2;x_2,x_3) + \Psi_2G^{(3)}(x_2;x_1,x_2) \\ \dot{x}_3 &= F(x_3) + \Pi_3G^{(3)}(x_3;x_1,x_2) + \Phi_3G^{(3)} (x_3;x_1,x_3) + \Psi_3G^{(3)}(x_3;x_2,x_3). \end{split}$$ Here, the (nonnegative) integers $\Pi_k, \Phi_k, \Psi_k$ for $k=1,2,3$ count the number of true hyperedge inputs as well as degenerate inputs from the left and right neighbors respectively that vertex $k$ receives. Due to , robustness of the fully synchronous subspace $\Delta$ requires $\Pi_1+\Phi_1+\Psi_1=\Pi_2+\Phi_2+\Psi_2=\Pi_3+\Phi_3+\Psi_3 =: \Xi$. Furthermore, following , the construction requires robustness of precisely two partial synchrony subspaces. System [\[eq:oscar-directed-uniform\]](#eq:oscar-directed-uniform){reference-type="eqref" reference="eq:oscar-directed-uniform"} is symmetric in $x_1,x_2,x_3$ and, without loss of generality, we assume $S_3$ and $S_2$ to be dynamically invariant. Substituting these assumptions into [\[eq:oscar-directed-uniform\]](#eq:oscar-directed-uniform){reference-type="eqref" reference="eq:oscar-directed-uniform"} we additionally obtain $\Xi-\Phi_1=\Pi_2+\Phi_2$ and $\Xi-\Phi_3=\Pi_1+\Phi_1$. 
With the above assumptions and, since only $G^{(3)}$ is present, $\alpha = F'(z)$, $\beta = \partial_1 G^{(3)}(z;z,z)$, and $\gamma = \partial_2 G^{(3)}(z;z,z) = \partial_3 G^{(3)}(z;z,z)$ (in analogy to [\[eq:ABGDE\]](#eq:ABGDE){reference-type="eqref" reference="eq:ABGDE"}), the linearization of the right hand side of [\[eq:oscar-directed-uniform\]](#eq:oscar-directed-uniform){reference-type="eqref" reference="eq:oscar-directed-uniform"} at a fully synchronous equilibrium has the form $$\begin{pmatrix} \alpha + \Xi\beta + (\Xi-\Pi_1)\gamma & (\Xi+\Pi_1-\Pi_2-\Phi_2)\gamma & (\Pi_2+\Phi_2)\gamma \\ (\Xi-\Phi_2)\gamma & \alpha + \Xi\beta + (\Xi-\Pi_2)\gamma & (\Pi_2+\Phi_2)\gamma \\ (-\Pi_1+\Pi_2+\Phi_2+\Pi_3)\gamma & (\Xi+\Pi_1-\Pi_2-\Phi_2)\gamma & \alpha + \Xi\beta + (\Xi-\Pi_3)\gamma \\ \end{pmatrix}.$$ This matrix has eigenvalues and corresponding eigenvectors $$\begin{aligned} \lambda_1&=\alpha+\Xi\beta+2\Xi\gamma, & v_1 &= (1,1,1)^\mathsf{T}, \\ \lambda_2&=\alpha+\Xi\beta+(\Xi-\Pi_2-\Phi_2-\Pi_3)\gamma, & v_2 &= \left(1,1,-\frac{\Xi+\Pi_3}{\Pi_2+\Phi_2}\right)^\mathsf{T}\\ \lambda_3&=\alpha+\Xi\beta+(-\Pi_1+\Phi_2)\gamma, & v_3 &= \left(1,-\frac{\Xi+\Pi_2}{\Xi+\Pi_1-\Pi_2-\Phi_2},1\right)^\mathsf{T}. \end{aligned}$$ One readily observes $$\begin{aligned} \lambda_2&=\lambda_1 + (-\Xi-\Pi_2-\Phi_2-\Pi_3)\gamma, \\ \lambda_3&=\lambda_1 + (-2\Xi-\Pi_1+\Phi_2)\gamma. \end{aligned}$$ In both equations, the integer coefficient of $\gamma$ is negative---recall that $\Phi_2<\Xi$. Consider the case that $\lambda_1$ and $\gamma$ have opposing signs, $\lambda_1\gamma\le0$. Then, we either have $\lambda_2,\lambda_3\le0$ or $\lambda_2,\lambda_3\ge0$, i.e., the two eigenvalues have the same sign. In this situation, the fully synchronous equilibrium is not a saddle with a partially synchronous stable and unstable eigendirection which prohibits the construction. On the other hand, if $\lambda_1$ and $\gamma$ have the same sign, $\lambda_1\gamma\ge0$, either $\lambda_2$ or $\lambda_3$ can be arranged to be positive (or even both).
However, note that $$\lambda_2-\lambda_3 = (\Xi+\Pi_1-\Pi_2-2\Phi_2-\Pi_3)\gamma.$$ The sign of the integer coefficient $(\Xi+\Pi_1-\Pi_2-2\Phi_2-\Pi_3)$ determines which of the two eigenvalues is larger. This coefficient, however, is fixed for a given hypernetwork and fully determined by its hyperedges. In particular, if precisely one of the two eigenvalues is arranged to be positive, it can only be either $\lambda_2$ or $\lambda_3$, independent of the precise value of $\gamma$. As a result, any saddle equilibrium with one-dimensional unstable subspace has this unstable subspace contained in the same partial synchrony subspace. This is a *global* obstruction to the realization of the Field cycle, which requires the unstable directions of two fully synchronous saddle equilibria to be contained in opposite partial synchrony subspaces (cf. ). Finally, note that the observations do not change in the case of nodeunspecific coupling. In fact, nodeunspecific coupling does not alter the derivation of the necessary assumptions on the integer coefficients and is reflected by $\beta=0$ in the linear stability analysis. ◻ # For Which Classes of Network Dynamics on Hypergraphs Can We Realize Heteroclinic Cycles? {#sec:HOI} The framework for network dynamical systems on hypergraphs described in is sufficiently general to encompass a number of examples discussed in the literature. In the following we will consider explicit examples from the literature and assess---based on our results---whether the class of network dynamical systems on hypergraphs allows for the realization of the Guckenheimer--Holmes or Field heteroclinic cycle. Note that these classes of network dynamical systems also restrict the set of coupling functions $G_e$ associated to each hyperedge $e$. Thus, we assess here whether there are obstructions to the realization of either heteroclinic cycle rather than making a statement that there are parameters for which a heteroclinic cycle exists.
## Dynamics on Undirected Hypergraphs With Different Edge Types In [@Mulas2020], Mulas and coauthors consider dynamics on an undirected hypergraph $\mathcal{H}^ \text{\cite{Mulas2020}} = (\mathcal{V}^ \text{\cite{Mulas2020}} , \mathcal{E}^ \text{\cite{Mulas2020}} )$, where the state $x_k$ of node $k$ evolves according to $$\label{eq:mulas_lit} \dot x_k = F(x_k) + \sum_{e\in\mathcal{E}: k\in e}G_{k;e}(x_e),$$ where $F$ determines the internal dynamics and $G_{k;e}$ the target-specific coupling function associated to each hyperedge. Note that while $\mathcal{H}^ \text{\cite{Mulas2020}}$ may be undirected, having a set of coupling functions assigned to each hyperedge corresponds to different edge 'types' that effectively introduce directionality (e.g., some of them could vanish). In our framework, this general setup corresponds to a network dynamical system on a directed hypergraph $\mathcal{H}= (\mathcal{V}, \mathcal{E})$: Each edge $e = \{v_{j_1}, \dotsc, v_{j_m}\}\in\mathcal{E}^ \text{\cite{Mulas2020}}$ of order $m+1$ generates $m$ edges of $\mathcal{E}$ of the form $(\{v_{j_1}, \dotsc, v_{j_m}\}, \{v_{j_k}\})$; the coupling is nodeunspecific. While this setup allows one to generate the Field cycle (cf. ), the realization of the Guckenheimer--Holmes cycle in such network dynamical systems is not possible (). The main results on synchronization of [@Mulas2020] are stated for a class of coupling functions which are homogeneous in both target node and edge order by assuming $G^{(m)}_{k;e}=G^{(m)}$ where $m$ is the order of $e$. As the possible directionality in the more general setup is lost, neither the Guckenheimer--Holmes nor the Field cycle can be realized (). ## Dynamics on Undirected Hypergraphs With Directed Coupling Salova and D'Souza, in [@Salova2021a], consider dynamics on undirected hypergraphs $\mathcal{H}^ \text{\cite{Salova2021a}} = (\mathcal{V}^ \text{\cite{Salova2021a}} , \mathcal{E}^ \text{\cite{Salova2021a}} )$. For a hyperedge $e$ let $|e|$ denote its order.
Now the dynamics of a node $k\in\mathcal{V}^{ \text{\cite{Salova2021a}} }$ depend on the state of all nodes that are incident to it according to $$\label{eq:salova_lit} \dot x_k = F(x_k) + \sum_{\substack{e\in\mathcal{E}^ \text{\cite{Salova2021a}} :k\in e}} G^{(|e|)}(x_k; x_{e\smallsetminus k}),$$ with a coupling function $G^{(m)}$ that only depends on the order of the hyperedge $e$---$e\smallsetminus k$ is shorthand notation for $e\smallsetminus\{k\}$. Due to how $G^{(m)}$ depends on the arguments, this dynamics corresponds in our setup to a network dynamical system on a directed hypergraph $\mathcal{H}= (\mathcal{V}, \mathcal{E})$ despite $\mathcal{H}^{ \text{\cite{Salova2021a}} }$ being undirected: Each (undirected) hyperedge $e = \{v_{j_1}, \dotsc, v_{j_m}\}\in\mathcal{E}^ \text{\cite{Salova2021a}}$ generates $m$ directed hyperedges of $\mathcal{E}$ of order $m$ that are of the form $(\{v_{j_1}, \dotsc, v_{j_m}\}\smallsetminus\{v_{j_k}\}, \{v_{j_k}\})$. Moreover, the coupling is nodespecific but homogeneous in every hyperedge order. Thus, network dynamical systems [\[eq:salova_lit\]](#eq:salova_lit){reference-type="eqref" reference="eq:salova_lit"} can realize the Guckenheimer--Holmes as well as the Field cycle. ## Dynamics on Directed Weighted Hypergraphs Aguiar and coworkers [@Aguiar2020] developed a coupled cell network framework for network dynamics with higher order interactions. They consider dynamics on directed hypergraphs $\mathcal{H}^ \text{\cite{Aguiar2020}} = (\mathcal{V}^ \text{\cite{Aguiar2020}} , \mathcal{E}^ \text{\cite{Aguiar2020}} )$ where each edge $e\in\mathcal{E}^ \text{\cite{Aguiar2020}}$ carries weight $w_e\in\mathbb{R}$; while they allow the tail $T(e)$ to be a multiset, we restrict to the subclass where $T(e)$ is a set.
The dynamics of node $k$ is determined by $$\label{eq:porto_lit} \dot x_k = F(x_k) + \sum_{ \substack{e\in\mathcal{E}^ \text{\cite{Aguiar2020}} :k\in H(e)} } w_{e} G^{(|e|)}\!\left(x_k; x_{T(e)} \right).$$ This setup is quite general as the coupling can be nodespecific and allows for the realization of both the Guckenheimer--Holmes cycle as well as the Field cycle. ## Generalized Laplacian Dynamics on Undirected Hypergraphs In [@Carletti2020], Carletti and coauthors consider network dynamics on an undirected hypergraph $\mathcal{H}^ \text{\cite{Carletti2020}} = (\mathcal{V}^ \text{\cite{Carletti2020}} , \mathcal{E}^ \text{\cite{Carletti2020}} )$. The evolution of node $k$ is determined by $$\label{eq:carletti_lit} \dot x_k = F(x_k) -\epsilon \sum_{e\in\mathcal{E}^ \text{\cite{Carletti2020}} : k\in e} \sum_{j\in e \smallsetminus k}(|e|-1)\left(G(x_k)-G(x_j)\right)$$ for each node $k$, where $\epsilon$ denotes the coupling strength---as before $e\smallsetminus k$ is shorthand notation for $e\smallsetminus\{k\}$. Due to the specific form of the coupling functions, [\[eq:carletti_lit\]](#eq:carletti_lit){reference-type="eqref" reference="eq:carletti_lit"} describes generalized Laplacian dynamics. In fact, it can readily be seen that the internal sum can be considered to be taken over the index set $e$ instead of $e\smallsetminus k$, since the term for $j=k$ vanishes. Furthermore, the coefficient depends only on the hyperedge order $m=|e|+1$. Thus, in our framework this dynamics corresponds to an undirected hypergraph $\mathcal{H}=(\mathcal{V},\mathcal{E})=\mathcal{H}^ \text{\cite{Carletti2020}}$ with nodespecific coupling that is homogeneous in every order given by the coupling functions $$G^{(m)}(x_k;x_e)=-\epsilon(m-2)\sum_{j\in e}\left(G(x_k)-G(x_j)\right)$$ (the number of terms in the sum is independent of the edge $e$).
In particular, neither the Guckenheimer--Holmes cycle nor the Field cycle can be realized in systems of the form [\[eq:carletti_lit\]](#eq:carletti_lit){reference-type="eqref" reference="eq:carletti_lit"}; cf. . ## Replicator Equations on Directed Hypergraphs One type of network dynamics with higher-order interactions discussed in [@Bairey2016] is given by replicator equations. These are related to generalized Lotka--Volterra equations describing interacting populations which support heteroclinic dynamics [@Afraimovich2004c]. Specifically, the evolution of node $k$ is given by $$\label{eq:rep_lit} \dot x_k = x_k\left(f_k(\mathbf{x}) - \sum_{j=1}^N x_jf_j(\mathbf{x})\right)$$ where $\mathbf{x}=(x_1,\dotsc,x_N)$ and $$f_l(\mathbf{x}) = -x_l + \sum_{r=1}^N a_{lr}x_r + \sum_{r=1}^N \sum_{p=1}^N b_{lrp}x_rx_p + \dotsb.$$ Note that [\[eq:rep_lit\]](#eq:rep_lit){reference-type="eqref" reference="eq:rep_lit"} contains terms that are quadratic or of higher polynomial degree only. As an immediate consequence, the Guckenheimer--Holmes system [\[eq:gh_cubic\]](#eq:gh_cubic){reference-type="eqref" reference="eq:gh_cubic"} cannot be realized by systems of this form. Nonetheless, in its most general form, the equations model dynamics on a directed hypergraph with very general nodespecific couplings. Therefore, the possibility to realize the Field cycle with systems of the form [\[eq:rep_lit\]](#eq:rep_lit){reference-type="eqref" reference="eq:rep_lit"} is to be expected despite the specific form of the equations of motion. In a related context, consider the variations described in [@Gibbs.2022] where node $k$ evolves according to $$\begin{aligned} \dot x_k &= x_k\left(R_k - \sum_{j=1}^NA_{kj}x_j -\sum_{j,l=1}^NB_{kjl}x_jx_l \right) \label{eq:gibbs_lit}\\ \intertext{and, from~\cite{Grilli.2017},} \dot x_k &=x_k\sum_{j,l=1}^NP_{kjl}x_jx_l \label{eq:grilli_lit}\end{aligned}$$ for ecosystems of multiple species exhibiting higher order interactions.
Compared to [\[eq:rep_lit\]](#eq:rep_lit){reference-type="eqref" reference="eq:rep_lit"} they do not contain the average fitness term (the second term in the bracket). Hence, there are no internal relations between the coupling coefficients. It can readily be seen that [\[eq:gibbs_lit\]](#eq:gibbs_lit){reference-type="eqref" reference="eq:gibbs_lit"} can realize the Guckenheimer--Holmes system [\[eq:gh_cubic\]](#eq:gh_cubic){reference-type="eqref" reference="eq:gh_cubic"} while [\[eq:grilli_lit\]](#eq:grilli_lit){reference-type="eqref" reference="eq:grilli_lit"} cannot, since there are no linear terms in the equations of motion. The coupling topology of both systems is that of a directed hypergraph with nodespecific coupling. The coupling functions are restricted to cubic polynomials. Nonetheless, the Field cycle can be realized. In fact, it was shown in [@Aguiar.2011] that the Field cycle exists in cubic systems. For [\[eq:grilli_lit\]](#eq:grilli_lit){reference-type="eqref" reference="eq:grilli_lit"} additional investigations have to be made to give a definite answer as to whether its realization is possible in a system with cubic terms only. Finally, note that [@Levine.2017] describes periodic fluctuations of species abundances in a model similar to [\[eq:gibbs_lit\]](#eq:gibbs_lit){reference-type="eqref" reference="eq:gibbs_lit"} and [\[eq:grilli_lit\]](#eq:grilli_lit){reference-type="eqref" reference="eq:grilli_lit"}. This dynamical phenomenon can be organized by heteroclinic cycles. # Discussion {#sec:Discussion} The class of hypergraphs considered restricts what heteroclinic cycles can be realized in network dynamical systems with higher-order interactions. Specifically, we see that the vector fields one obtains through an underlying class of undirected hypergraphs (under the assumption that the coupling functions are compatible with the combinatorial structure) typically do not allow for the realization of the heteroclinic cycles considered here ().
A key issue is that an undirected hypergraph imposes symmetries on the resulting vector field: Each hyperedge is a set of vertices, which means that each node in the edge is affected by every other node in the same way (independently of how they are ordered). These symmetries prevent the emergence of heteroclinic cycles, for example, by forcing the transverse linear stability in all directions to be the same, or by allowing the required linearization at one equilibrium in the cycle but not at all of them simultaneously. Dynamics on directed (hyper)graphs provide more flexibility to realize heteroclinic cycles. Indeed, dynamics on directed hypergraphs naturally arise as phase dynamics that emerge through phase reduction [@Bick2023]. Thus, while dynamics on directed hypergraphs have only received limited attention, we expect this to be the natural class of network dynamical systems that capture real-world processes. Whether a given heteroclinic cycle can be realized depends not only on the underlying class of hypergraphs but also on the choice of interaction functions. As discussed in , certain model equations come with a natural choice of interaction function, such as "Laplacian-like" coupling in [@Salova2021a]. Whether it is possible to realize a given heteroclinic cycle within a given class of coupling functions depends on the particular class; an analysis of this for each model in is beyond the scope of this article. Here we focused on the realization of two key examples of heteroclinic cycles in network dynamical systems with higher-order interactions. In particular, the Field cycle is an example of a more general construction to realize heteroclinic structures in dynamical systems. The analysis was restricted predominantly to the *existence* of the heteroclinic structures in phase space (apart from some comments on the Guckenheimer--Holmes cycle).
Additional features of heteroclinic structures, such as (dynamical) stability, the number of connecting heteroclinic trajectories, or whether the stable manifolds of the equilibria contain all their unstable manifolds, will necessarily lead to further conditions on the network structure and coupling functions. # Acknowledgements CB acknowledges support from the Engineering and Physical Sciences Research Council (EPSRC) through the grant EP/T013613/1. SvdG was partially funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation), project 453112019. # The Guckenheimer--Holmes Cycle in Directed $3$-Uniform Hypergraphs {#sec:app-gh-3-uniform} In this appendix, we fill the gaps in the considerations on realizing the Guckenheimer--Holmes system [\[eq:gh_cubic\]](#eq:gh_cubic){reference-type="eqref" reference="eq:gh_cubic"} as a network dynamical system on directed $3$-uniform hypergraphs by completing the proof of . *Proof of , continued.* The expression in [\[eq:gh-directed-uniform\]](#eq:gh-directed-uniform){reference-type="eqref" reference="eq:gh-directed-uniform"} has to be equal to the governing function $f(z,y_1,y_2)=z+az^3+bzy_1^2+czy_2^2$ from [\[eq:gh_cubic\]](#eq:gh_cubic){reference-type="eqref" reference="eq:gh_cubic"}. That is, we have $$\label{eq:app-gh-param} \begin{split} 1 &= \alpha_1 \\ a &= \alpha_2 + \beta (\Phi+\Psi) \\ b &= \beta (\Pi+\Phi) \\ c &= \beta (\Pi+\Psi) \end{split}$$ and $P\equiv 0, P' \equiv 0$. In fact, it suffices to choose $P, P'$ such that $Q\equiv0$; however, the more restrictive assumption does not change the argument. In the remainder, we abbreviate $\alpha=\alpha_2$. For the Guckenheimer--Holmes cycle to emerge in the system, the parameters need to satisfy the following inequalities. $$\begin{aligned} & a+b+c = -1 \label{eq:app-gh-1} \\ & -\frac{1}{3} < a < 0 \label{eq:app-gh-2} \\ & c < a < b < 0 \label{eq:app-gh-3}. 
\end{aligned}$$ As a first observation, [\[eq:app-gh-3\]](#eq:app-gh-3){reference-type="eqref" reference="eq:app-gh-3"} implies that $\Pi, \Phi, \Psi$ cannot vanish simultaneously. This fact is frequently used in the upcoming transformations without mention. Substitute $a,b,c$ from [\[eq:app-gh-param\]](#eq:app-gh-param){reference-type="eqref" reference="eq:app-gh-param"} into [\[eq:app-gh-1\]](#eq:app-gh-1){reference-type="eqref" reference="eq:app-gh-1"} to obtain $$\beta = - \frac{\alpha+1}{2(\Pi+\Phi+\Psi)}.$$ Substituting this back into [\[eq:app-gh-param\]](#eq:app-gh-param){reference-type="eqref" reference="eq:app-gh-param"}, we obtain $$\begin{split} a &= \alpha - \frac{(\alpha+1)(\Phi+\Psi)}{2(\Pi+\Phi+\Psi)} \\ b &= - \frac{(\alpha+1)(\Pi+\Phi)}{2(\Pi+\Phi+\Psi)} \\ c &= - \frac{(\alpha+1)(\Pi+\Psi)}{2(\Pi+\Phi+\Psi)} \end{split}$$ We now substitute these expressions into the inequalities to obtain four inequalities for $\alpha$ depending on $\Pi, \Phi, \Psi$. First, substitute $a$ into [\[eq:app-gh-2\]](#eq:app-gh-2){reference-type="eqref" reference="eq:app-gh-2"}. Omitting any details, this can equivalently be transformed into $$\frac{-2\Pi+\Phi+\Psi}{6\Pi+3\Phi+3\Psi} < \alpha < \frac{3\Phi+3\Psi}{6\Pi+3\Phi+3\Psi}.$$ We refer to these two inequalities as and . Substituting $a,b,c$ into [\[eq:app-gh-3\]](#eq:app-gh-3){reference-type="eqref" reference="eq:app-gh-3"}, the first two inequalities can equivalently be transformed into $$-\frac{\Pi-\Phi}{3\Pi+\Phi+2\Psi} < \alpha < -\frac{\Pi-\Psi}{3\Pi+2\Phi+\Psi},$$ which we refer to as inequalities and . The third inequality in [\[eq:app-gh-3\]](#eq:app-gh-3){reference-type="eqref" reference="eq:app-gh-3"} is equivalent to $\alpha>-1$ if $\Pi+\Phi\ne0$, which we see to be true below. This inequality is automatically satisfied whenever is satisfied. 
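The parametrization $a = \alpha + \beta(\Phi+\Psi)$, $b = \beta(\Pi+\Phi)$, $c = \beta(\Pi+\Psi)$ with $\beta = -(\alpha+1)/(2(\Pi+\Phi+\Psi))$ can be spot-checked in exact rational arithmetic. The following sketch is an illustration added here, not part of the proof; it takes the configuration $(\Pi,\Phi,\Psi)=(0,1,2)$ and a value of $\alpha$ inside its admissible window:

```python
from fractions import Fraction as F

Pi, Phi, Psi = F(0), F(1), F(2)
alpha = F(2, 5)                    # lies in the window 1/3 < alpha < 1/2

beta = -(alpha + 1) / (2 * (Pi + Phi + Psi))
a = alpha + beta * (Phi + Psi)     # a = alpha_2 + beta (Phi + Psi)
b = beta * (Pi + Phi)
c = beta * (Pi + Psi)

assert a + b + c == -1             # eq. (app-gh-1)
assert F(-1, 3) < a < 0            # eq. (app-gh-2)
assert c < a < b < 0               # eq. (app-gh-3)
print(a, b, c)                     # -3/10 -7/30 -7/15
```

Exact fractions avoid any floating-point ambiguity when checking the strict inequalities.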
These considerations show that the cubic Guckenheimer--Holmes system [\[eq:gh_cubic\]](#eq:gh_cubic){reference-type="eqref" reference="eq:gh_cubic"} can be realized if and only if the hypergraph is such that $\Pi,\Phi,\Psi$ are independent of the targeted vertex and the inequalities -- can be satisfied simultaneously. In the next step, we investigate which configurations of $\Pi, \Phi, \Psi$ are possible if these conditions are satisfied. The first immediate observations are that the third inequality in [\[eq:app-gh-3\]](#eq:app-gh-3){reference-type="eqref" reference="eq:app-gh-3"}---$b<0$---can only be satisfied if $$\label{eq: app-gh-cond1} \Pi+\Phi\ne0$$ and that and can only both be satisfied if $$\label{eq: app-gh-cond2} \Phi<\Psi.$$ All hyperedges in a $3$-uniform hypergraph on three vertices stem from the set $$\left\{ \begin{array}{lll} (\{1,2\}, \{1\}), & (\{1,2\}, \{1,2\}), & (\{1,2\}, \{1,2,3\}) \\ (\{1,3\}, \{1\}), & (\{1,3\}, \{1,2\}), & (\{1,3\}, \{1,2,3\}) \\ (\{2,3\}, \{1\}), & (\{2,3\}, \{1,2\}), & (\{2,3\}, \{1,2,3\}) \\ (\{1,2\}, \{2\}), & (\{1,2\}, \{1,3\}) & \\ (\{1,3\}, \{2\}), & (\{1,3\}, \{1,3\}) & \\ (\{2,3\}, \{2\}), & (\{2,3\}, \{1,3\}) & \\ (\{1,2\}, \{3\}), & (\{1,2\}, \{2,3\}) & \\ (\{1,3\}, \{3\}), & (\{1,3\}, \{2,3\}) & \\ (\{2,3\}, \{3\}), & (\{2,3\}, \{2,3\}) & \end{array} \right\}.$$ We define subsets $\omega^0_k$ for $k=1,2,3$ of hyperedges that are a true $2$-to-$1$ input for vertex $k$: $$\begin{aligned} \omega^0_1 & = \{ (\{2,3\}, \{1\}), (\{2,3\}, \{1,2\}), (\{2,3\}, \{1,3\}), (\{2,3\}, \{1,2,3\}) \} \\ \omega^0_2 &= \{ (\{1,3\}, \{2\}), (\{1,3\}, \{1,2\}), (\{1,3\}, \{2,3\}), (\{1,3\}, \{1,2,3\}) \} \\ \omega^0_3 &= \{ (\{1,2\}, \{3\}), (\{1,2\}, \{1,3\}), (\{1,2\}, \{2,3\}), (\{1,2\}, \{1,2,3\}) \}. \end{aligned}$$ That is, $E_0\subset \omega^0_k$ for vertex $k$. 
Similarly, we define $$\begin{aligned} \omega^1_1 & = \{ (\{1,2\}, \{1\}), (\{1,2\}, \{1,2\}), (\{1,2\}, \{1,3\}), (\{1,2\}, \{1,2,3\}) \} \\ \omega^1_2 &= \{ (\{2,3\}, \{2\}), (\{2,3\}, \{1,2\}), (\{2,3\}, \{2,3\}), (\{2,3\}, \{1,2,3\}) \} \\ \omega^1_3 &= \{ (\{1,3\}, \{3\}), (\{1,3\}, \{1,3\}), (\{1,3\}, \{2,3\}), (\{1,3\}, \{1,2,3\}) \} \end{aligned}$$ to be the degenerate inputs from the right neighbor, i.e., $E_1 \subset \omega^1_k$ for vertex $k$, and $$\begin{aligned} \omega^2_1 & = \{ (\{1,3\}, \{1\}), (\{1,3\}, \{1,2\}), (\{1,3\}, \{1,3\}), (\{1,3\}, \{1,2,3\}) \} \\ \omega^2_2 &= \{ (\{1,2\}, \{2\}), (\{1,2\}, \{1,2\}), (\{1,2\}, \{2,3\}), (\{1,2\}, \{1,2,3\}) \} \\ \omega^2_3 &= \{ (\{2,3\}, \{3\}), (\{2,3\}, \{1,3\}), (\{2,3\}, \{2,3\}), (\{2,3\}, \{1,2,3\}) \} \end{aligned}$$ to be the degenerate inputs from the left neighbor, i.e., $E_2 \subset \omega^2_k$ for vertex $k$. We immediately observe $$\label{eq:app-gh-cond3} |E_i| \le 4 \quad \text{for}\quad i=0,1,2.$$ As these subsets intersect non-trivially, we cannot arbitrarily distribute hyperedges to generate arbitrary combinations of $\Pi,\Phi,\Psi$. In fact, we observe that the true $2$-to-$1$ connections are all in exactly one of those sets, the $2$-to-$2$ connections are in $\omega_k^i\cap\omega_l^j$ for some $k\ne l$ and $i\ne j$, and the $2$-to-$3$ connections are in $\omega_1^i\cap\omega_2^j\cap\omega_3^s$ for $\{i,j,s\}=\{0,1,2\}$. Since $\Pi,\Phi,\Psi$ are necessarily independent of the vertex $k$, it is handy to investigate the sets $\Omega_i = \omega_1^i\cup\omega_2^i\cup\omega_3^i$ for $i=0,1,2$ to deduce the restrictions. 
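The bookkeeping behind these sets can be verified mechanically. In the sketch below (an added check, not from the paper; the dictionaries `right` and `left` simply encode the source pairs appearing in $\omega^1_k$ and $\omega^2_k$ above), we rebuild $\omega^i_k$ and $\Omega_i = \omega^i_1\cup\omega^i_2\cup\omega^i_3$ and confirm that a $2$-to-$1$ hyperedge lies in exactly one $\Omega_i$, a $2$-to-$2$ hyperedge in exactly two, and a $2$-to-$3$ hyperedge in all three:

```python
from itertools import combinations

V = {1, 2, 3}
# all 21 directed hyperedges (S, T) with a 2-element source and nonempty target
edges = {(frozenset(S), frozenset(T))
         for S in combinations(V, 2)
         for n in (1, 2, 3) for T in combinations(V, n)}
assert len(edges) == 21

right = {1: {1, 2}, 2: {2, 3}, 3: {1, 3}}   # source pairs of omega^1_k
left = {1: {1, 3}, 2: {1, 2}, 3: {2, 3}}    # source pairs of omega^2_k

def omega(i, k):
    """omega^i_k: hyperedges with the prescribed source pair that target vertex k."""
    src = {0: V - {k}, 1: right[k], 2: left[k]}[i]
    return {(S, T) for (S, T) in edges if S == frozenset(src) and k in T}

Omega = {i: omega(i, 1) | omega(i, 2) | omega(i, 3) for i in (0, 1, 2)}
assert all(len(Omega[i]) == 12 for i in Omega)
assert all(len(Omega[i] & Omega[j]) == 6 for i in Omega for j in Omega if i < j)
for S, T in edges:
    # membership count over the three Omega_i equals the target size |T|
    assert sum((S, T) in Omega[i] for i in Omega) == len(T)
```

The last loop is exactly the statement in the text: true $2$-to-$1$ connections belong to one of the sets, $2$-to-$2$ connections to a pairwise intersection, and $2$-to-$3$ connections to the triple intersection.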
In fact, these sets have the following relations $$\begin{aligned} \Omega_0\cap\Omega_1 &= \left\{ \begin{array}{ll} (\{1,2\}, \{1,3\}), & (\{1,2\}, \{1,2,3\}), \\ (\{2,3\}, \{1,2\}), & (\{2,3\}, \{1,2,3\}), \\ (\{1,3\}, \{2,3\}), & (\{1,3\}, \{1,2,3\}) \end{array} \right\}, \\[5pt] \Omega_0\setminus\Omega_1 &= \left\{ \begin{array}{ll} (\{2,3\}, \{1\}), & (\{2,3\}, \{1,3\}), \\ (\{1,3\}, \{2\}), & (\{1,3\}, \{1,2\}), \\ (\{1,2\}, \{3\}), & (\{1,2\}, \{2,3\}) \end{array} \right\}, \\[5pt] \Omega_1\setminus\Omega_0 &= \left\{ \begin{array}{ll} (\{1,2\}, \{1\}), & (\{1,2\}, \{1,2\}), \\ (\{2,3\}, \{2\}), & (\{2,3\}, \{2,3\}), \\ (\{1,3\}, \{3\}), & (\{1,3\}, \{1,3\}) \end{array} \right\}, \\[5pt] \Omega_0\cap\Omega_2 &= \left\{ \begin{array}{ll} (\{1,3\}, \{1,2\}), & (\{1,3\}, \{1,2,3\}), \\ (\{1,2\}, \{2,3\}), & (\{1,2\}, \{1,2,3\}), \\ (\{2,3\}, \{1,3\}), & (\{2,3\}, \{1,2,3\}) \end{array} \right\}, \\[5pt] \Omega_0\setminus\Omega_2 &= \left\{ \begin{array}{ll} (\{2,3\}, \{1\}), & (\{2,3\}, \{1,2\}), \\ (\{1,3\}, \{2\}), & (\{1,3\}, \{2,3\}), \\ (\{1,2\}, \{3\}), & (\{1,2\}, \{1,3\}) \end{array} \right\}, \\[5pt] \Omega_2\setminus\Omega_0 &= \left\{ \begin{array}{ll} (\{1,3\}, \{1\}), & (\{1,3\}, \{1,3\}), \\ (\{1,2\}, \{2\}), & (\{1,2\}, \{1,2\}), \\ (\{2,3\}, \{3\}), & (\{2,3\}, \{2,3\}) \end{array} \right\}, \\[5pt] \Omega_1\cap\Omega_2 &= \left\{ \begin{array}{ll} (\{1,3\}, \{1,3\}), & (\{1,3\}, \{1,2,3\}), \\ (\{1,2\}, \{1,2\}), & (\{1,2\}, \{1,2,3\}), \\ (\{2,3\}, \{2,3\}), & (\{2,3\}, \{1,2,3\}) \end{array} \right\}, \\[5pt] \Omega_1\setminus\Omega_2 &= \left\{ \begin{array}{ll} (\{1,2\}, \{1\}), & (\{1,2\}, \{1,3\}), \\ (\{2,3\}, \{2\}), & (\{2,3\}, \{1,2\}), \\ (\{1,3\}, \{3\}), & (\{1,3\}, \{2,3\}) \end{array} \right\}, \\[5pt] \Omega_2\setminus\Omega_1 &= \left\{ \begin{array}{ll} (\{1,3\}, \{1\}), & (\{1,3\}, \{1,2\}), \\ (\{1,2\}, \{2\}), & (\{1,2\}, \{2,3\}), \\ (\{2,3\}, \{3\}), & (\{2,3\}, \{1,3\}) \end{array} \right\} \end{aligned}$$ Using these 
sets, we can deduce restrictions on $\Pi,\Phi,\Psi$ by the following combinatorial considerations: - If $|E_i|=2$ the corresponding set $\Omega_i$ needs to contain elements besides the $2$-to-$1$ connections. Thus, $\Omega_i\cap\Omega_j \ne\emptyset$ for some $j\ne i$. This implies $|E_j|\ge1$. To summarize, $$\label{eq: app-gh-cond4} |E_i|=2 \implies |E_j|\ge1 \text{ for some } j\ne i.$$ - A similar argument shows that whenever $|E_i|\ge3$ there are elements in both intersections $\Omega_i\cap\Omega_j$ and $\Omega_i\cap\Omega_s$ for $\{i,j,s\}=\{0,1,2\}$, which are therefore non-empty. This can be summarized as $$|E_i|\ge3 \implies |E_j|\ge1 \text{ for all } j\ne i.$$ - Similarly, whenever $|E_i|=4$ all elements in both intersections $\Omega_i\cap\Omega_j$ and $\Omega_i\cap\Omega_s$ for $\{i,j,s\}=\{0,1,2\}$ are present. Hence, $$|E_i|=4 \implies |E_j|\ge2 \text{ for all } j\ne i.$$ - The previous two restrictions can further be summarized as $$\label{eq: app-gh-cond5} \left| |E_i| - |E_j| \right| \le 2 \text{ for all } i,j.$$ - Whenever $\Pi=\Psi=4$ all $2$-to-$2$ and all $2$-to-$3$ connections must be present. These are all elements in $\Omega_0\cap\Omega_1$ and $\Omega_1\cap\Omega_2$. In particular, we have $$\label{eq: app-gh-cond6} \Pi=\Psi=4 \implies \Phi\ge3.$$ There are exactly $20$ combinations of $\Pi,\Phi,\Psi$ that satisfy the conditions in . In fact, all of these configurations can be realized by a $3$-uniform hypergraph. Furthermore, inequalities -- can be satisfied for each of these configurations. Thus, the Guckenheimer--Holmes system [\[eq:gh_cubic\]](#eq:gh_cubic){reference-type="eqref" reference="eq:gh_cubic"} can be realized. We list all possible configurations and inequalities together with an example of a suitable hypergraph in the table below. 
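The count of $20$ and the solvability claim can be confirmed by brute-force enumeration. The following sketch (an added check; it transcribes the conditions above and the two $\alpha$-windows derived earlier in the proof) finds exactly $20$ admissible configurations and verifies that the windows overlap for each of them:

```python
from fractions import Fraction as F
from itertools import product

def admissible(Pi, Phi, Psi):
    """Conditions (app-gh-cond1)-(app-gh-cond6); the 0..4 range is (cond3)."""
    E = (Pi, Phi, Psi)
    if Pi + Phi == 0 or not Phi < Psi:                   # (cond1), (cond2)
        return False
    if max(E) - min(E) > 2:                              # (cond5)
        return False
    for i, e in enumerate(E):
        others = [E[j] for j in range(3) if j != i]
        if e == 2 and not any(o >= 1 for o in others):   # (cond4)
            return False
        if e >= 3 and not all(o >= 1 for o in others):
            return False
        if e == 4 and not all(o >= 2 for o in others):
            return False
    if Pi == 4 and Psi == 4 and Phi < 3:                 # (cond6)
        return False
    return True

configs = [t for t in product(range(5), repeat=3) if admissible(*t)]
assert len(configs) == 20

for Pi, Phi, Psi in configs:
    lo = max(F(-2 * Pi + Phi + Psi, 6 * Pi + 3 * Phi + 3 * Psi),
             F(-(Pi - Phi), 3 * Pi + Phi + 2 * Psi))
    hi = min(F(3 * Phi + 3 * Psi, 6 * Pi + 3 * Phi + 3 * Psi),
             F(-(Pi - Psi), 3 * Pi + 2 * Phi + Psi))
    assert lo < hi      # the two alpha-windows intersect, so alpha can be chosen
```

The enumerated triples coincide with the rows of the table, and the exact-arithmetic window check reproduces the listed inequality bounds.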
In particular, the cubic Guckenheimer--Holmes system with admissible parameters is realized for any of those hypergraphs if $\alpha$ is chosen to satisfy the inequalities in the second column and $\beta$, and consequently $f$ and $G^{(3)}$, are as presented above. Note that there can be hypergraphs realizing a particular configuration of $\Pi, \Phi, \Psi$ other than the one listed in the table. ◻ c\|c\|l \ \ Parameters & Inequalities -- & Hyperedges\ \ $\begin{array}{l} \Pi=0 \\ \Phi=1 \\ \Psi=2\end{array}$ & $\begin{array}{c} \frac{1}{5}<\alpha<\frac{1}{2} \\[2pt] \frac{1}{3}<\alpha<1 \end{array}$ & $\begin{array}{ll} (\{1,3\}, \{1\}), & (\{1,3\}, \{1,3\}), \\ (\{1,2\}, \{2\}), & (\{1,2\}, \{1,2\}), \\ (\{2,3\}, \{3\}), & (\{2,3\}, \{2,3\}) \end{array}$\ $\begin{array}{l} \Pi=1 \\ \Phi=0 \\ \Psi=1\end{array}$ & $\begin{array}{c} -\frac{1}{5}<\alpha<0 \\[2pt] -\frac{1}{9}<\alpha<\frac{1}{3} \end{array}$ & $\begin{array}{ll} (\{2,3\}, \{1\}), & (\{1,3\}, \{1\}), \\ (\{1,3\}, \{2\}), & (\{1,2\}, \{1\}), \\ (\{1,2\}, \{3\}), & (\{2,3\}, \{2\}) \end{array}$\ $\begin{array}{l} \Pi=1 \\ \Phi=0 \\ \Psi=2\end{array}$ & $\begin{array}{c} -\frac{1}{7}<\alpha<\frac{1}{5} \\[2pt] 0<\alpha<\frac{1}{2} \end{array}$ & $\begin{array}{ll} (\{1,3\}, \{1\}), & (\{1,3\}, \{1,2\}), \\ (\{1,2\}, \{2\}), & (\{1,2\}, \{2,3\}), \\ (\{2,3\}, \{3\}), & (\{2,3\}, \{1,3\}) \end{array}$\ $\begin{array}{l} \Pi=1 \\ \Phi=1 \\ \Psi=2\end{array}$ & $\begin{array}{c} 0<\alpha<\frac{1}{7} \\[2pt] \frac{1}{15}<\alpha<\frac{3}{5} \end{array}$ & $\begin{array}{ll} (\{1,3\}, \{1\}), & (\{1,3\}, \{1,2\}), \\ (\{1,2\}, \{2\}), & (\{1,2\}, \{2,3\}), \\ (\{2,3\}, \{3\}), & (\{2,3\}, \{1,3\}), \\ (\{1,2\}, \{1\}), & \\ (\{2,3\}, \{2\}), & \\ (\{1,3\}, \{3\}) & \end{array}$\ $\begin{array}{l} \Pi=1 \\ \Phi=1 \\ \Psi=3\end{array}$ & $\begin{array}{c} 0<\alpha<\frac{1}{4} \\[2pt] \frac{1}{9}<\alpha<\frac{2}{3} \end{array}$ & $\begin{array}{ll} (\{1,3\}, 
\{1\}), & (\{1,3\}, \{1,2\}), \\ (\{1,2\}, \{2\}), & (\{1,2\}, \{2,3\}), \\ (\{2,3\}, \{3\}), & (\{2,3\}, \{1,3\}), \\ & (\{1,3\}, \{1,3\}), \\ & (\{1,2\}, \{1,2\}), \\ & (\{2,3\}, \{2,3\}) \end{array}$\ $\begin{array}{l} \Pi=1 \\ \Phi=2 \\ \Psi=3\end{array}$ & $\begin{array}{c} \frac{1}{11}<\alpha<\frac{1}{5} \\[2pt] \frac{1}{7}<\alpha<\frac{5}{7} \end{array}$ & $\begin{array}{ll} (\{1,3\}, \{1\}), & (\{1,3\}, \{1,2\}), \\ (\{1,2\}, \{2\}), & (\{1,2\}, \{2,3\}), \\ (\{2,3\}, \{3\}), & (\{2,3\}, \{1,3\}), \\ (\{1,2\}, \{1\}), & (\{1,3\}, \{1,3\}), \\ (\{2,3\}, \{2\}), & (\{1,2\}, \{1,2\}), \\ (\{1,3\}, \{3\}), & (\{2,3\}, \{2,3\}) \end{array}$\ $\begin{array}{l} \Pi=2 \\ \Phi=0 \\ \Psi=1\end{array}$ & $\begin{array}{c} -\frac{1}{4}<\alpha<-\frac{1}{7} \\[2pt] -\frac{1}{5}<\alpha<\frac{1}{5} \end{array}$ & $\begin{array}{ll} (\{2,3\}, \{1\}), & (\{1,3\}, \{1,2\}), \\ (\{1,3\}, \{2\}), & (\{1,2\}, \{2,3\}), \\ (\{1,2\}, \{3\}), & (\{2,3\}, \{1,3\}) \end{array}$\ $\begin{array}{l} \Pi=2 \\ \Phi=0 \\ \Psi=2\end{array}$ & $\begin{array}{c} -\frac{1}{5}<\alpha<0 \\[2pt] -\frac{1}{9}<\alpha<\frac{1}{3} \end{array}$ & $\begin{array}{ll} (\{2,3\}, \{1\}), & (\{1,3\}, \{1,2\}), \\ (\{1,3\}, \{2\}), & (\{1,2\}, \{2,3\}), \\ (\{1,2\}, \{3\}), & (\{2,3\}, \{1,3\}), \\ (\{1,3\}, \{1\}), & \\ (\{1,2\}, \{2\}), & \\ (\{2,3\}, \{3\}) \end{array}$\ $\begin{array}{l} \Pi=2 \\ \Phi=1 \\ \Psi=2\end{array}$ & $\begin{array}{c} -\frac{1}{11}<\alpha<0 \\[2pt] -\frac{1}{21}<\alpha<\frac{3}{7} \end{array}$ & $\begin{array}{ll} (\{2,3\}, \{1\}), & (\{1,3\}, \{1,2\}), \\ (\{1,3\}, \{2\}), & (\{1,2\}, \{2,3\}), \\ (\{1,2\}, \{3\}), & (\{2,3\}, \{1,3\}), \\ (\{1,3\}, \{1\}), & \\ (\{1,2\}, \{2\}), & \\ (\{2,3\}, \{3\}), & \\ (\{1,2\}, \{1\}), & \\ (\{2,3\}, \{2\}), & \\ (\{1,3\}, \{3\}) \end{array}$\ $\begin{array}{l} \Pi=2 \\ \Phi=1 \\ \Psi=3\end{array}$ & $\begin{array}{c} -\frac{1}{13}<\alpha<\frac{1}{11} \\[2pt] 0<\alpha<\frac{1}{2} \end{array}$ & $\begin{array}{ll} (\{2,3\}, \{1\}), & 
(\{1,3\}, \{1,2\}), \\ (\{1,3\}, \{2\}), & (\{1,2\}, \{2,3\}), \\ (\{1,2\}, \{3\}), & (\{2,3\}, \{1,3\}), \\ (\{1,3\}, \{1\}), & (\{1,3\}, \{1,3\}), \\ (\{1,2\}, \{2\}), & (\{1,2\}, \{1,2\}), \\ (\{2,3\}, \{3\}), & (\{2,3\}, \{2,3\}) \end{array}$\ $\begin{array}{l} \Pi=2 \\ \Phi=2 \\ \Psi=3\end{array}$ & $\begin{array}{c} 0<\alpha<\frac{1}{13} \\[2pt] \frac{1}{27}<\alpha<\frac{5}{9} \end{array}$ & $\begin{array}{ll} (\{2,3\}, \{1\}), & (\{1,3\}, \{1,2\}), \\ (\{1,3\}, \{2\}), & (\{1,2\}, \{2,3\}), \\ (\{1,2\}, \{3\}), & (\{2,3\}, \{1,3\}), \\ (\{1,3\}, \{1\}), & (\{1,3\}, \{1,3\}), \\ (\{1,2\}, \{2\}), & (\{1,2\}, \{1,2\}), \\ (\{2,3\}, \{3\}), & (\{2,3\}, \{2,3\}), \\ (\{1,2\}, \{1\}), & \\ (\{2,3\}, \{2\}), & \\ (\{1,3\}, \{3\}) \end{array}$\ $\begin{array}{l} \Pi=2 \\ \Phi=2 \\ \Psi=4\end{array}$ & $\begin{array}{c} 0<\alpha<\frac{1}{7} \\[2pt] \frac{1}{15}<\alpha<\frac{3}{5} \end{array}$ & $\begin{array}{lll} (\{1,3\}, \{1\}), & (\{1,3\}, \{1,3\}), & (\{1,3\}, \{1,2,3\}), \\ (\{1,2\}, \{2\}), & (\{1,2\}, \{1,2\}), & (\{1,2\}, \{1,2,3\}), \\ (\{2,3\}, \{3\}), & (\{2,3\}, \{2,3\}), & (\{2,3\}, \{1,2,3\}), \\ & (\{1,3\}, \{1,2\}), & \\ & (\{1,2\}, \{2,3\}), & \\ & (\{2,3\}, \{1,3\}) \end{array}$\ $\begin{array}{l} \Pi=2 \\ \Phi=3 \\ \Psi=4\end{array}$ & $\begin{array}{c} \frac{1}{17}<\alpha<\frac{1}{8} \\[2pt] \frac{1}{11}<\alpha<\frac{7}{11} \end{array}$ & $\begin{array}{lll} (\{1,3\}, \{1\}), & (\{1,3\}, \{1,3\}), & (\{1,3\}, \{1,2,3\}), \\ (\{1,2\}, \{2\}), & (\{1,2\}, \{1,2\}), & (\{1,2\}, \{1,2,3\}), \\ (\{2,3\}, \{3\}), & (\{2,3\}, \{2,3\}), & (\{2,3\}, \{1,2,3\}), \\ (\{1,2\}, \{1\}), & (\{1,3\}, \{1,2\}), & \\ (\{2,3\}, \{2\}), & (\{1,2\}, \{2,3\}), & \\ (\{1,3\}, \{3\}), & (\{2,3\}, \{1,3\}) \end{array}$\ $\begin{array}{l} \Pi=3 \\ \Phi=1 \\ \Psi=2\end{array}$ & $\begin{array}{c} -\frac{1}{7}<\alpha<-\frac{1}{13} \\[2pt] -\frac{1}{9}<\alpha<\frac{1}{3} \end{array}$ & $\begin{array}{ll} (\{2,3\}, \{1\}), & (\{1,3\}, \{1,2\}), \\ (\{1,3\}, \{2\}), & 
(\{1,2\}, \{2,3\}), \\ (\{1,2\}, \{3\}), & (\{2,3\}, \{1,3\}), \\ (\{1,3\}, \{1\}), & (\{1,2\}, \{1,3\}), \\ (\{1,2\}, \{2\}), & (\{2,3\}, \{1,2\}), \\ (\{2,3\}, \{3\}), & (\{1,3\}, \{2,3\}) \end{array}$\ $\begin{array}{l} \Pi=3 \\ \Phi=1 \\ \Psi=3\end{array}$ & $\begin{array}{c} -\frac{1}{8}<\alpha<0 \\[2pt] -\frac{1}{15}<\alpha<\frac{2}{5} \end{array}$ & $\begin{array}{lll} (\{2,3\}, \{1\}), & (\{1,3\}, \{1,2\}), & (\{1,3\}, \{1,2,3\}), \\ (\{1,3\}, \{2\}), & (\{1,2\}, \{2,3\}), & (\{1,2\}, \{1,2,3\}), \\ (\{1,2\}, \{3\}), & (\{2,3\}, \{1,3\}), & (\{2,3\}, \{1,2,3\}), \\ (\{1,3\}, \{1\}), & & \\ (\{1,2\}, \{2\}), & & \\ (\{2,3\}, \{3\}) & & \end{array}$\ $\begin{array}{l} \Pi=3 \\ \Phi=2 \\ \Psi=3\end{array}$ & $\begin{array}{c} -\frac{1}{17}<\alpha<0 \\[2pt] -\frac{1}{33}<\alpha<\frac{5}{11} \end{array}$ & $\begin{array}{lll} (\{2,3\}, \{1\}), & (\{1,3\}, \{1,2\}), & (\{1,3\}, \{1,2,3\}), \\ (\{1,3\}, \{2\}), & (\{1,2\}, \{2,3\}), & (\{1,2\}, \{1,2,3\}), \\ (\{1,2\}, \{3\}), & (\{2,3\}, \{1,3\}), & (\{2,3\}, \{1,2,3\}), \\ (\{1,3\}, \{1\}), & & \\ (\{1,2\}, \{2\}), & & \\ (\{2,3\}, \{3\}), & & \\ (\{1,2\}, \{1\}), & & \\ (\{2,3\}, \{2\}), & & \\ (\{1,3\}, \{3\}) & & \end{array}$\ $\begin{array}{l} \Pi=3 \\ \Phi=2 \\ \Psi=4\end{array}$ & $\begin{array}{c} -\frac{1}{19}<\alpha<\frac{1}{17} \\[2pt] 0<\alpha<\frac{1}{2} \end{array}$ & $\begin{array}{lll} (\{2,3\}, \{1\}), & (\{1,3\}, \{1,2\}), & (\{1,3\}, \{1,2,3\}), \\ (\{1,3\}, \{2\}), & (\{1,2\}, \{2,3\}), & (\{1,2\}, \{1,2,3\}), \\ (\{1,2\}, \{3\}), & (\{2,3\}, \{1,3\}), & (\{2,3\}, \{1,2,3\}), \\ (\{1,3\}, \{1\}), & (\{1,3\}, \{1,3\}), & \\ (\{1,2\}, \{2\}), & (\{1,2\}, \{1,2\}), & \\ (\{2,3\}, \{3\}), & (\{2,3\}, \{2,3\}) & \end{array}$\ $\begin{array}{l} \Pi=3 \\ \Phi=3 \\ \Psi=4\end{array}$ & $\begin{array}{c} 0<\alpha<\frac{1}{19} \\[2pt] \frac{1}{39}<\alpha<\frac{7}{13} \end{array}$ & $\begin{array}{lll} (\{2,3\}, \{1\}), & (\{1,3\}, \{1,2\}), & (\{1,3\}, \{1,2,3\}), \\ (\{1,3\}, \{2\}), & (\{1,2\}, 
\{2,3\}), & (\{1,2\}, \{1,2,3\}), \\ (\{1,2\}, \{3\}), & (\{2,3\}, \{1,3\}), & (\{2,3\}, \{1,2,3\}), \\ (\{1,3\}, \{1\}), & (\{1,3\}, \{1,3\}), & \\ (\{1,2\}, \{2\}), & (\{1,2\}, \{1,2\}), & \\ (\{2,3\}, \{3\}), & (\{2,3\}, \{2,3\}), & \\ (\{1,2\}, \{1\}), & & \\ (\{2,3\}, \{2\}), & & \\ (\{1,3\}, \{3\}) \end{array}$\ $\begin{array}{l} \Pi=4 \\ \Phi=2 \\ \Psi=3\end{array}$ & $\begin{array}{c} -\frac{1}{10}<\alpha<-\frac{1}{19} \\[2pt] -\frac{1}{13}<\alpha<\frac{5}{13} \end{array}$ & $\begin{array}{lll} (\{2,3\}, \{1\}), & (\{1,3\}, \{1,2\}), & (\{1,3\}, \{1,2,3\}), \\ (\{1,3\}, \{2\}), & (\{1,2\}, \{2,3\}), & (\{1,2\}, \{1,2,3\}), \\ (\{1,2\}, \{3\}), & (\{2,3\}, \{1,3\}), & (\{2,3\}, \{1,2,3\}), \\ (\{1,3\}, \{1\}), & (\{1,3\}, \{2,3\}), & \\ (\{1,2\}, \{2\}), & (\{1,2\}, \{1,3\}), & \\ (\{2,3\}, \{3\}), & (\{2,3\}, \{1,2\}) & \\ \end{array}$\ $\begin{array}{l} \Pi=4 \\ \Phi=3 \\ \Psi=4\end{array}$ & $\begin{array}{c} -\frac{1}{23}<\alpha<0 \\[2pt] -\frac{1}{45}<\alpha<\frac{7}{15} \end{array}$ & $\begin{array}{lll} (\{2,3\}, \{1\}), & (\{1,3\}, \{1,2\}), & (\{1,3\}, \{1,2,3\}), \\ (\{1,3\}, \{2\}), & (\{1,2\}, \{2,3\}), & (\{1,2\}, \{1,2,3\}), \\ (\{1,2\}, \{3\}), & (\{2,3\}, \{1,3\}), & (\{2,3\}, \{1,2,3\}), \\ (\{1,3\}, \{1\}), & (\{1,3\}, \{2,3\}), & \\ (\{1,2\}, \{2\}), & (\{1,2\}, \{1,3\}), & \\ (\{2,3\}, \{3\}), & (\{2,3\}, \{1,2\}), & \\ & (\{1,3\}, \{1,3\}), & \\ & (\{1,2\}, \{1,2\}), & \\ & (\{2,3\}, \{2,3\}) \end{array}$ [^1]: Such heteroclinic structures are typically called *heteroclinic networks*. To avoid confusion with network dynamical systems that determine the class of vector fields we consider, we avoid the term heteroclinic network and talk about heteroclinic structures instead. [^2]: This is not a restriction as one can always see a network of $N$ $d$-dimensional nodes as a network of $Nd$ one-dimensional nodes.
arXiv:2309.02006 [math.DS]. "Heteroclinic Dynamics in Network Dynamical Systems with Higher-Order Interactions" by Christian Bick and Sören von der Gracht.
--- abstract: | We denote by $\{W_{\lambda, t}^\alpha\}_{t>0}$ the semigroup generated by $-\mathbb L^{\alpha}_\lambda$, where $\mathbb L^{\alpha}_\lambda$ is a Hardy operator on a half space. The operator $\mathbb L^{\alpha}_\lambda$ involves a fractional Laplacian and is defined by $$\mathbb L^{\alpha}_\lambda=(-\Delta)^{\alpha/2}_{\mathbb{R}^d_+}+\lambda x_d^{-\alpha}, \quad \alpha\in (0,2], \lambda\geq 0.$$ We prove that, for every $k\in \mathbb N$, the $\rho$-variation operator $\mathcal{V}_\rho\left(\left\{t^k\partial_t^k W_{\lambda,t}^\alpha\right\}\right)$ is bounded on $L^p(\mathbb{R}^d_+, w)$ for each $1<p<\infty$ and $w\in A_p(\mathbb{R}^d_+)$, where $A_p(\mathbb{R}^d_+)$ is the Muckenhoupt $p$-class of weights on $\mathbb{R}^d_+$. address: - Jorge J. Betancor Departamento de Análisis Matemático, Universidad de La Laguna, Campus de Anchieta, Avda. Astrofísico Sánchez, s/n, La Laguna (Sta. Cruz de Tenerife), Spain - Estefanía Dalmasso, Pablo Quijano Instituto de Matemática Aplicada del Litoral, UNL, CONICET, FIQ. Colectora Ruta Nac. Nº 168, Paraje El Pozo, S3007ABA, Santa Fe, Argentina author: - Jorge J. Betancor - Estefanía Dalmasso - Pablo Quijano title: Variation operators associated with semigroups generated by Hardy operators involving fractional Laplacians in a half space --- # Introduction In this paper we consider the non-local Hardy-type operator $\mathbb L^{\alpha}_\lambda$ defined, in a formal way, by $$\mathbb L^{\alpha}_\lambda:=(-\Delta)^{\alpha/2}_{\mathbb{R}^d_+}+\lambda x_d^{-\alpha},$$ on $\mathbb{R}^d_+:=\{x=(x_1, \dots, x_d)\in \mathbb R^d: x_d>0\}$. Here, $\alpha\in (0,2]$ and $\lambda\geq \lambda_*$, where $$\lambda_*=-\frac{\Gamma(\frac{\alpha+1}{2})}{\pi} \left(\Gamma\left(\frac{\alpha+1}{2}\right)-\frac{2^{\alpha-1}\sqrt{\pi}}{\Gamma\left(1-\frac\alpha 2\right)}\right).$$ In order to give a precise definition of the operator $\mathbb L^{\alpha}_\lambda$, we consider the following quadratic forms. 
We define, for every $u,v\in C^1_c(\mathbb{R}^d_+)$ (the space of continuously differentiable functions having compact support on $\mathbb{R}^d_+$), when $\alpha\in (0,2)$, $$Q_\lambda^\alpha(u,v)=\frac12 \mathcal A(d,\alpha)\int_{\mathbb{R}^d_+\times \mathbb{R}^d_+} \frac{(u(x)-u(y))\overline{(v(x)-v(y))}}{|x-y|^{d+\alpha}}\, dxdy+\lambda \int_{\mathbb{R}^d_+} \frac{u(x)\overline{v(x)}}{x_d^\alpha}\, dx,$$ where $\mathcal A(d,\alpha):=\frac{-\alpha}{2^{1+\alpha}\pi^{d/2}}\frac{\Gamma((d-\alpha)/2)}{\Gamma(1+\frac\alpha 2)}$, and, when $\alpha=2$, $$Q_\lambda^2(u,v)=\int_{\mathbb{R}^d_+} \nabla u(x)\overline{\nabla v(x)}\, dx+\lambda \int_{\mathbb{R}^d_+} \frac{u(x)\overline{v(x)}}{x_d^2}\, dx.$$ By using the classical Hardy inequality when $\alpha=2$ and those proved by Bogdan and Dyda ([@BD Theorem 1]) when $\alpha\in (0,2)$, we have that for every $\alpha\in (0,2]$, $$Q_\lambda^\alpha(u,u)\geq 0, \quad u\in C^1_c(\mathbb{R}^d_+),$$ provided that $\lambda\geq\lambda_*$. Note that the Hardy inequalities proved in [@BD Theorem 1] are sharp. According to Friedrichs's Extension Theorem, the quadratic form $Q_\lambda^\alpha$ defines a self-adjoint and non-negative operator $L_\lambda^{\alpha}$ in $L^2(\mathbb{R}^d_+)$ for which $C^1_c(\mathbb{R}^d_+)$ is a form core. When $f$ is smooth enough in $\mathbb{R}^d_+$, $\mathbb L_\lambda^{\alpha} f=L_\lambda^{\alpha} f$. The operator $-L_\lambda^{\alpha}$ generates in $L^2(\mathbb{R}^d_+)$ a contractive analytic $C_0$-semigroup of angle $\frac\pi 2$ ([@Sch Corollary 9]). We denote by $\{W_{\lambda,z}^{\alpha}\}_{\mathop{\mathrm{Re}}z>0}$ the semigroup generated by $-L_\lambda^{\alpha}$ in $L^2(\mathbb{R}^d_+)$. From [@AB Theorem 3.1], for every $z\in \mathbb C$ with $\mathop{\mathrm{Re}}z>0$, $W_{\lambda,z}^{\alpha}$ is an integral operator whose kernel, denoted by $W^{\alpha}_{\lambda,z}(x,y)$, is analytic in $\{z\in \mathbb C: \mathop{\mathrm{Re}}z>0\}$ for every $x,y\in \mathbb{R}^d_+$. 
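Two consistency checks on the threshold $\lambda_*$ (numerical illustrations added here, not part of the paper): at $\alpha=1$ the two terms in the bracket cancel, so $\lambda_*=0$, and as $\alpha\to 2^-$ the pole of $\Gamma(1-\alpha/2)$ kills the second term, recovering the classical Hardy constant $-1/4$:

```python
from math import gamma, pi, sqrt

def lambda_star(alpha):
    """The sharp constant lambda_* quoted above, for 0 < alpha < 2."""
    g = gamma((alpha + 1) / 2)
    return -(g / pi) * (g - 2 ** (alpha - 1) * sqrt(pi) / gamma(1 - alpha / 2))

# at alpha = 1: Gamma(1) - sqrt(pi)/Gamma(1/2) = 1 - 1 = 0, so lambda_* = 0
assert abs(lambda_star(1.0)) < 1e-12
# letting alpha -> 2: the second term vanishes, and -Gamma(3/2)^2/pi = -1/4
assert abs(lambda_star(2 - 1e-9) + 0.25) < 1e-6
print(lambda_star(0.5), lambda_star(1.5))   # strictly negative values
```

That $\lambda_*$ vanishes at $\alpha=1$ while being negative nearby is a feature of the sharp half-space fractional Hardy constant.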
In Section [2](#sec: heat kernels estimates){reference-type="ref" reference="sec: heat kernels estimates"} we establish some estimates involving the kernel $W^{\alpha}_{\lambda,z}$ that will be useful to prove the main results. In [@CKS10] and [@CK03] one can find how the operators $L_0^{\alpha}$ arise in connection with censored processes, as well as two-sided estimates for the heat kernels associated with the semigroup generated by $-L_0^{\alpha}$ (see [@CK03 Theorem 1.1] and [@CKS10 Theorem 1.1]). We will now define the variation operator. Let $\rho>0$ and suppose $\{a_t\}_{t>0}$ is a set of complex numbers. We define the $\rho$-variation $\mathcal V_\rho\left(\{a_t\}_{t>0}\right)$ as follows $$\mathcal V_\rho\left(\{a_t\}_{t>0}\right)=\sup_{0<t_0<\dots<t_n,\ n\in \mathbb N} \left(\sum_{j=0}^{n-1}\left|a_{t_{j+1}}-a_{t_j}\right|^\rho\right)^{1/\rho}.$$ Assume that, for some $p\in (1,\infty)$, $T_t$ is a bounded operator on $L^p(\Omega,\mu)$ for every $t>0$, where $(\Omega,\mu)$ is a measure space. The variation operator $\mathcal V_\rho\left(\{T_t\}_{t>0}\right)$ is defined by $$\mathcal V_\rho\left(\{T_t\}_{t>0}\right)(f)(x):=\mathcal V_\rho\left(\{T_t(f)(x)\}_{t>0}\right), \quad f\in L^p(\Omega,\mu).$$ In order to have measurability of the function $\mathcal V_\rho\left(\{T_t\}_{t>0}\right)(f)$ we need to have some continuity property in the time variable for $T_t(f)$ (see the comments after [@CJRW1 Theorem 1.2]). Boundedness properties of the $\rho$-variation operator (as well as of oscillation operators and the $\lambda$-jump counting function) give us quantitative measures of the pointwise convergence of the family $\{T_t(f)\}_{t>0}$. Lépingle ([@Lep]) studied $\rho$-variations of martingales. He established strong $L^p$-inequalities and weak-type $(1,1)$ estimates for the $\rho$-variation operator when $\rho>2$. In [@JW] an example with $\rho=2$ shows that $\rho>2$ is the best possible exponent in the above estimates (see also [@Qi]). 
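For a family observed at finitely many times, the supremum in the definition runs over all increasing subsequences of the sampling times, so the $\rho$-variation can be computed by brute force. A small illustrative sketch (an addition, not from the paper):

```python
from itertools import combinations

def rho_variation(values, rho):
    """Brute-force rho-variation of a finite family: supremum of
    (sum_j |a_{t_{j+1}} - a_{t_j}|^rho)^(1/rho) over all increasing
    subsequences of the sampling times."""
    n = len(values)
    best = 0.0
    for size in range(2, n + 1):
        for idx in combinations(range(n), size):
            s = sum(abs(values[idx[j + 1]] - values[idx[j]]) ** rho
                    for j in range(size - 1))
            best = max(best, s ** (1 / rho))
    return best

a = [0.0, 1.0, -1.0, 0.5, 0.0]       # an oscillating family of numbers
v1, v2, v3 = (rho_variation(a, r) for r in (1, 2, 3))
assert v3 <= v2 <= v1                # the rho-variation decreases in rho
# for rho = 1 it is the total variation, attained on the full partition
assert abs(v1 - sum(abs(a[j + 1] - a[j]) for j in range(len(a) - 1))) < 1e-12
```

The monotonicity in $\rho$ reflects the comparison of $\ell^\rho$-norms; larger $\rho$ gives a smaller, hence weaker, measure of oscillation.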
Variational inequalities for martingales can be seen as an extension of Doob's maximal inequalities. Bourgain, in a series of papers (see [@Bo2; @Bo3; @Bo]), proved $L^p$-variational inequalities in ergodic settings. Bourgain's ideas were the starting point for using the variation operator in ergodic theory ([@K1; @K2; @KMT; @MS; @Wi; @ZK]), probability theory ([@AJS; @MSZ; @PX]) and harmonic analysis ([@BORSS; @BCT; @CJRW1; @CJRW2; @DPDU; @DL; @GT; @K3; @MTX; @MT; @MiT; @OSTTW]). Our objective is to prove weighted $L^p$-inequalities for the $\rho$-variation operator $\mathcal V_\rho\left(\{t^k\partial_t^k W_{\lambda,t}^{\alpha}\}_{t>0}\right)$ where $k\in \mathbb N$. Our study is motivated by those in [@ABR] and [@N], where the Schrödinger operator in $\mathbb R^d$ with inverse square potentials is considered. As mentioned above, $\{W_{\lambda,t}^{\alpha}\}_{t>0}$ can be seen as the semigroup generated by the Hardy-type operator $\mathbb L_\lambda^{\alpha}$ on the half space. Frank and Merz ([@FM23JFA]) have recently proved that the scales of homogeneous Sobolev spaces generated by $\mathbb L_\lambda^{\alpha}$ and by $(-\Delta)_{\mathbb{R}^d_+}^{\alpha/2}$ are comparable. Our first result is the following. **Theorem 1**. *Let $0<\alpha\leq 2$, $\rho>2$, $\lambda\geq 0$ and $1<p<\infty$. Then, the $\rho$-variation operator $\mathcal V_\rho\left(\{t^k\partial_t^k W_{\lambda,t}^{\alpha}\}_{t>0}\right)$ is bounded on $L^p(\mathbb{R}^d_+)$ for each $k\in \mathbb N$.* For the proof of this theorem, we shall start with the case $\lambda=0$. We denote by $\{\mathbb W_t^{\alpha}\}_{t>0}$ the semigroup of operators generated by $-(-\Delta)^{\alpha/2}$ on $\mathbb R^d$. 
We have that $$0<W_{0,t}^{\alpha}(x,y)=W_{0,t}^{\alpha}(y,x)\leq \mathbb W_t^{\alpha}(x,y), \quad x,y\in \mathbb{R}^d_+, t>0.$$ Then, $$\int_{\mathbb{R}^d_+} W_{0,t}^{\alpha}(x,y)\, dy=\int_{\mathbb{R}^d_+} W_{0,t}^{\alpha}(x,y)\, dx\leq \int_{\mathbb R^d} \mathbb W_t^{\alpha}(x,y)\, dy=1, \quad x\in \mathbb{R}^d_+, t>0.$$ We deduce that, for every $t>0$, $W_{0,t}^{\alpha}$ is a contraction in $L^1(\mathbb{R}^d_+)$ and in $L^\infty(\mathbb{R}^d_+)$. According to [@LeMX Corollary 6.1] the $\rho$-variation operator $\mathcal V_\rho\left(\{t^k\partial_t^k W_{0,t}^{\alpha}\}_{t>0}\right)$ is bounded on $L^p(\mathbb{R}^d_+)$ for every $1<p<\infty$ and $k\in \mathbb N$. In a second step, we prove that the $\rho$-variation operator $$\mathcal V_\rho\left(\left\{t^k\partial_t^k \left(W_{\lambda,t}^{\alpha}-W_{0,t}^{\alpha}\right)\right\}_{t>0}\right)$$ is bounded on $L^p(\mathbb{R}^d_+)$ for every $1<p<\infty$ and $k\in \mathbb N$. In order to do so, we will consider operators that control this $\rho$-variation operator. Suppose that $F:(0,\infty)\to \mathbb C$ is a differentiable function on $(0,\infty)$. For $n\in \mathbb N$, let $0<t_0<\dots<t_n$. 
We have that $$\left(\sum_{j=0}^{n-1}\left|F(t_{j+1})-F(t_j)\right|^\rho\right)^{1/\rho}\leq \sum_{j=0}^{n-1}\left|\int_{t_j}^{t_{j+1}}F'(s)\, ds\right|\leq \int_0^\infty |F'(s)|\, ds.$$ Thus, $$\mathcal V_\rho\left(\{F(t)\}_{t>0}\right)\leq \int_0^\infty |F'(s)|\, ds.$$ From this inequality we deduce that $$\label{eq: 1.1} \mathcal V_\rho\left(\left\{t^k\partial_t^k \left(W_{\lambda,t}^{\alpha}-W_{0,t}^{\alpha}\right)\right\}_{t>0}\right)(f)(x)\leq \int_{\mathbb{R}^d_+} |f(y)| K_\lambda^{\alpha}(x,y)\, dy, \quad x\in \mathbb{R}^d_+,$$ where $$K_\lambda^{\alpha}(x,y):=\int_0^\infty \left|\partial_t\left(t^k \partial_t^k \left(W_{\lambda,t}^{\alpha}(x,y)-W_{0,t}^{\alpha}(x,y)\right)\right)\right|\, dt, \quad x,y\in \mathbb{R}^d_+.$$ For $m\in \mathbb N$, we consider the operator $T_\lambda^{\alpha, m}$ defined by $$T_\lambda^{\alpha, m}(f)(x)=\int_{\mathbb{R}^d_+} T_\lambda^{\alpha, m}(x,y) f(y)\, dy, \quad x\in \mathbb{R}^d_+,$$ with $$T_\lambda^{\alpha, m}(x,y)=\int_0^\infty \left|t^m \partial_t^{m+1} \left(W_{\lambda,t}^{\alpha}(x,y)-W_{0,t}^{\alpha}(x,y)\right)\right|\, dt, \quad x,y\in \mathbb{R}^d_+.$$ Taking into account [\[eq: 1.1\]](#eq: 1.1){reference-type="eqref" reference="eq: 1.1"}, to see that $\mathcal V_\rho\left(\left\{t^k\partial_t^k \left(W_{\lambda,t}^{\alpha}-W_{0,t}^{\alpha}\right)\right\}_{t>0}\right)$ is bounded on $L^p(\mathbb{R}^d_+)$ it will be sufficient to show that the operator $T_\lambda^{\alpha, m}$ has this property for $m=0$ when $k=0$, and for $m=k-1$ and $m=k$ when $k\geq 1$. 
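The bound $\mathcal V_\rho\left(\{F(t)\}_{t>0}\right)\leq \int_0^\infty |F'(s)|\, ds$ can be watched numerically. In this added sketch, $F(t)=te^{-t}$ is only a stand-in for $t\mapsto t^k\partial_t^k$ of the kernel difference; its derivative integral is $\int_0^\infty e^{-s}|1-s|\, ds = 2/e$, and a dynamic program computes the $\rho$-variation of the sampled family:

```python
from math import exp

def F(t):
    return t * exp(-t)

def rho_variation(vals, rho):
    """O(n^2) dynamic program: dp[j] is the largest sum of |increments|^rho
    over increasing subsequences of indices ending at j."""
    dp = [0.0] * len(vals)
    for j in range(len(vals)):
        for i in range(j):
            dp[j] = max(dp[j], dp[i] + abs(vals[j] - vals[i]) ** rho)
    return max(dp) ** (1 / rho)

vals = [F(0.1 * j) for j in range(1, 80)]   # samples of F on (0, 8)
bound = 2 / exp(1)                          # integral of |F'| over (0, infty)
for rho in (2, 3, 4):
    assert rho_variation(vals, rho) <= bound
print(bound)
```

The dynamic program is exact for the sampled family because the sum in the definition decomposes along consecutive pairs of any chosen subsequence.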
We will split the operator $T_\lambda^{\alpha, m}$ into two parts, determined by the sets $$\begin{aligned} \label{eq: G} \nonumber G=&G_1\cup G_2\\ \nonumber :=&\{(x,y,t)\in \mathbb{R}^d_+\times\mathbb{R}^d_+\times(0,\infty): x_d \vee y_d\leq t^{1/\alpha}\}\\ &\cup \{(x,y,t)\in \mathbb{R}^d_+\times\mathbb{R}^d_+\times(0,\infty): x_d \vee y_d> t^{1/\alpha},\ 2|x-y|\geq x_d\wedge y_d\}\end{aligned}$$ and $$\label{eq: L} L:=\left(\mathbb{R}^d_+\times\mathbb{R}^d_+\times(0,\infty)\right)\setminus G.$$ Here, the letter $G$ stands for *global* and the letter $L$ for *local*. Hence, for $m\in \mathbb N$, we decompose $T_\lambda^{\alpha, m}$ as follows $$T_\lambda^{\alpha, m}=T_{\lambda, \textup{loc}}^{\alpha, m}+T_{\lambda, \textup{glob}}^{\alpha, m}$$ where $$T_{\lambda, \textup{loc}/\textup{glob}}^{\alpha, m}(f)(x)=\int_{\mathbb{R}^d_+} T_{\lambda, \textup{loc}/\textup{glob}}^{\alpha, m}(x,y)f(y)\, dy, \quad x\in \mathbb{R}^d_+,$$ with $$T_{\lambda, \textup{loc}}^{\alpha, m}(x,y)=\int_0^\infty \left|t^m \partial_t^{m+1} \left(W_{\lambda,t}^{\alpha}(x,y)-W_{0,t}^{\alpha}(x,y)\right)\right|\chi_L(x,y,t)\, dt, \quad x,y\in \mathbb{R}^d_+,$$ and $$T_{\lambda, \textup{glob}}^{\alpha, m}(x,y)=T_\lambda^{\alpha, m}(x,y)-T_{\lambda, \textup{loc}}^{\alpha, m}(x,y), \quad x,y\in \mathbb{R}^d_+.$$ Actually, the cancellation in $W_{\lambda,t}^{\alpha}(x,y)-W_{0,t}^{\alpha}(x,y)$ is only relevant for $T_{\lambda, \textup{loc}}^{\alpha, m}$; in the global operator this cancellation does not play any role. In Section [3](#sec: proof-teo1.1-alfa<2){reference-type="ref" reference="sec: proof-teo1.1-alfa<2"} we will prove the $L^p$-boundedness of the operators $T_{\lambda, \textup{loc}}^{\alpha, m}$ and $T_{\lambda, \textup{glob}}^{\alpha, m}$ for $\alpha\in (0,2)$, and we shall deal with the case $\alpha=2$ in Section [4](#sec: proof-teo1.1-alfa=2){reference-type="ref" reference="sec: proof-teo1.1-alfa=2"}.
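For concreteness, the membership conditions defining $G_1$, $G_2$ and $L$ can be encoded directly. The helper below is an illustration only (the function name is ours, not from the paper); it classifies a triple $(x,y,t)$ according to the decomposition, with $\vee$ and $\wedge$ denoting maximum and minimum as in the text.

```python
import math

def region(x, y, t, alpha):
    """Classify (x, y, t) in R^d_+ x R^d_+ x (0, infty) into 'G1', 'G2'
    or 'L', following the decomposition used for T_lambda^{alpha, m}.
    x and y are d-tuples whose last coordinate x_d, y_d is positive."""
    xd, yd = x[-1], y[-1]
    dist = math.dist(x, y)  # Euclidean distance |x - y|
    if max(xd, yd) <= t ** (1.0 / alpha):
        return "G1"
    if 2.0 * dist >= min(xd, yd):
        return "G2"
    return "L"  # x_d v y_d > t^{1/alpha} and 2|x - y| < x_d ^ y_d
```

The local region $L$ thus consists of nearby points, far from the boundary at the scale $t^{1/\alpha}$; this is where the cancellation in $W_{\lambda,t}^{\alpha}-W_{0,t}^{\alpha}$ matters.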
We distinguish these two cases since the estimates for the heat kernel $W^{\alpha}_{\lambda,t}(x,y)$, $x,y\in \mathbb{R}^d_+$ and $t>0$, are different (see Propositions [Proposition 3](#prop: 2.1){reference-type="ref" reference="prop: 2.1"} and [Proposition 4](#prop: 2.2){reference-type="ref" reference="prop: 2.2"}). We now establish the weighted $L^p$-inequalities for $\mathcal V_\rho\left(\{t^k\partial_t^k W_{\lambda,t}^{\alpha}\}_{t>0}\right)$. First of all, we recall the definitions of the Muckenhoupt and Reverse Hölder classes in $\mathbb{R}^d_+$. A weight in $\mathbb{R}^d_+$ is a non-negative measurable function defined on $\mathbb{R}^d_+$. A weight $w$ is said to be in $A_p(\mathbb{R}^d_+)$ with $1<p<\infty$ when $$\sup_B \left(\frac{1}{|B|}\int_B w(x)\, dx\right)\left(\frac{1}{|B|}\int_B w^{-1/(p-1)}(x)\, dx\right)^{p-1}<\infty,$$ where the supremum is taken over all balls $B$ in $\mathbb{R}^d_+$. If $1<q<\infty$, we say that a weight $w$ belongs to the class $\textup{RH}_q(\mathbb{R}^d_+)$ if there exists $C>0$ such that $$\left(\frac{1}{|B|}\int_B w^q(x)\, dx\right)^{1/q}\leq C \frac{1}{|B|}\int_B w(x)\, dx$$ for every ball $B$ in $\mathbb{R}^d_+$. The $p$-Lebesgue space with weight $w$ is denoted by $L^p(\mathbb{R}^d_+,w)$. **Theorem 2**. *Let $0<\alpha\leq 2$, $\rho>2$, $\lambda\geq 0$, $k\in \mathbb N$ and $1<p<\infty$. Then, the $\rho$-variation operator $\mathcal V_\rho\left(\{t^k\partial_t^k W_{\lambda,t}^{\alpha}\}_{t>0}\right)$ is bounded on $L^p(\mathbb{R}^d_+,w)$ provided that $w\in A_p(\mathbb{R}^d_+)$.* Throughout this paper $c$ and $C$ will always represent positive constants that may vary on each occurrence. The expression $a\lesssim b$ will indicate that $a\leq C b$ for some positive constant $C$, whilst $a\sim b$ means that both $a\lesssim b$ and $b\lesssim a$ hold.
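To make the $A_p$ condition concrete in the simplest case $d=1$, the sketch below (illustrative only; the function name is ours) evaluates the $A_p$ expression on a single interval by a midpoint rule. By the discrete Cauchy–Schwarz inequality the value is always at least $1$, with equality exactly for constant weights.

```python
def a_p_quantity(w, a, b, p, n=10_000):
    """Midpoint-rule value of the A_p expression
    (|B|^{-1} int_B w) * (|B|^{-1} int_B w^{-1/(p-1)})^{p-1}
    on a single interval B = (a, b) of the half-line (the case d = 1)."""
    xs = [a + (k + 0.5) * (b - a) / n for k in range(n)]
    avg_w = sum(w(x) for x in xs) / n
    avg_dual = sum(w(x) ** (-1.0 / (p - 1.0)) for x in xs) / n
    return avg_w * avg_dual ** (p - 1.0)
```

A weight belongs to $A_p(\mathbb{R}^d_+)$ precisely when this quantity stays uniformly bounded over all balls, not just one.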
# Heat kernel estimates {#sec: heat kernels estimates} Our goal in this section is to establish estimates for the time derivatives of the heat kernel $W^{\alpha}_{\lambda,t}$ that will be useful in the sequel. We define the exponent $\mathfrak{p}$ as in [@FM23JFA]. We consider $$M_\alpha = \begin{cases} \alpha & \text{if } \alpha \in (0,2), \\ \infty & \text{if } \alpha = 2, \end{cases}$$ and, for every $\alpha \in (0,2]$, we define $$\begin{aligned} \Omega_\alpha: (-1,M_\alpha) & \longrightarrow \mathbb{R} \\ s & \longmapsto \Omega_\alpha(s) = \frac{1}{\pi}\left( \Gamma(\alpha)\sin \frac{\pi \alpha}{2} + \Gamma(1+s) \Gamma(\alpha-s) \sin \frac{\pi (2s-\alpha)}{2} \right).\end{aligned}$$ We understand $\Omega_2(s) = s(s-1)$, $s\in (-1,\infty)$, and ${\Omega_1(s) = \frac{1}{\pi} (1-\pi s \cot (\pi s))}$, $s\in (-1,1)$. The main properties of the function $\Omega_\alpha$ can be found in [@FM23JFA Appendix A]. For every $\lambda \geq \lambda_*$ there exists a unique $\mathfrak{p}\in \left[ \frac{\alpha-1}{2}, M_\alpha \right)$ such that $\Omega_\alpha(\mathfrak{p}) = \lambda$. Note that $\mathfrak{p}$ depends on $\lambda$ and $\alpha$ but we write $\mathfrak{p}$ instead of $\mathfrak{p}(\lambda,\alpha)$. We have that 1. $\mathfrak{p} = \max\{\alpha-1, 0\}:=(\alpha-1)_+$, when $\lambda = 0$; 2. $\mathfrak{p} > (\alpha-1)_+$, when $\lambda > 0$; 3. $\mathfrak{p} = \frac{1}{2} \left(1+ \sqrt{1+4\lambda}\right)$, when $\alpha = 2$. For simplicity, we shall write $\gamma=(\alpha-1)_+$ for any $\alpha\in (0,2]$. The following pointwise bounds for the heat kernel $W^{\alpha}_{\lambda,t}$, $t>0$, were established in [@FM23JFA Theorem 9 and Theorem 10]. **Proposition 3**. 1. *[\[item: prop 2.1 a\]]{#item: prop 2.1 a label="item: prop 2.1 a"} Let $\alpha \in (0,2)$ and $\lambda \geq 0$.
Then, for every $x$, $y\in \mathbb{R}^d_+$ and $t>0$, $$W^{\alpha}_{\lambda,t}(x,y) \sim \left( 1 \wedge \frac{x_d}{t^{1/\alpha}}\right)^{\mathfrak{p}} \left( 1 \wedge \frac{y_d}{t^{1/\alpha}}\right)^{\mathfrak{p}} t^{-d/\alpha} \left( 1 \wedge \frac{t^{1/\alpha}}{|x-y|}\right)^{d+\alpha}.$$* 2. *Let $\lambda \geq -1/4$. Then, for every $x$, $y\in \mathbb{R}^d_+$ and $t>0$, $$W^{2}_{\lambda,t}(x,y) \asymp \left( 1 \wedge \frac{x_d}{t^{1/2}}\right)^{\mathfrak{p}} \left( 1 \wedge \frac{y_d}{t^{1/2}}\right)^{\mathfrak{p}} t^{-d/2} e^{-\frac{c|x-y|^2}{t}}.$$* *Here the symbol $\asymp$ means the same as $\sim$ but admitting different values of $c$ in the exponential function in the upper and lower bounds.* Our next objective is to extend the upper bounds in Proposition [Proposition 3](#prop: 2.1){reference-type="ref" reference="prop: 2.1"} to the complex plane. Note first that from [\[item: prop 2.1 a\]](#item: prop 2.1 a){reference-type="ref" reference="item: prop 2.1 a"} we deduce that $$\begin{aligned} \label{eq: 2.0} W^{\alpha}_{\lambda,t}(x,y) \lesssim \left( \frac{x_d}{x_d + t^{1/\alpha}}\right)^{\mathfrak{p}} \left( \frac{y_d}{y_d + t^{1/\alpha}}\right)^{\mathfrak{p}} t^{-d/\alpha} \left( \frac{t^{1/\alpha}}{t^{1/\alpha} + |x-y|}\right)^{d+\alpha},\end{aligned}$$ for every $x$, $y\in \mathbb{R}^d_+$ and $t>0$. Since the function $\phi(s) = \frac{s}{s+a}$, $s> -a$, is increasing for every $a>0$, it follows that $$\label{eq: 2.1} W^{\alpha}_{\lambda,t}(x,y) \lesssim \left( \frac{|x|}{|x| + t^{1/\alpha}}\right)^{\mathfrak{p}} \left( \frac{|y|}{|y| + t^{1/\alpha}}\right)^{\mathfrak{p}} t^{-d/\alpha} \left( \frac{t^{1/\alpha}}{t^{1/\alpha} + |x-y|}\right)^{d+\alpha},$$ for every $x$, $y\in \mathbb{R}^d_+$ and $t>0$. 
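The passage from the two-sided estimate in Proposition [Proposition 3](#prop: 2.1){reference-type="ref" reference="prop: 2.1"} to [\[eq: 2.0\]](#eq: 2.0){reference-type="eqref" reference="eq: 2.0"} rests on the elementary comparison, recorded here for completeness: for every $u>0$, $$\frac{u}{1+u}\leq 1\wedge u\leq \frac{2u}{1+u}.$$ Indeed, if $u\leq 1$ then $1\wedge u=u$ and $u\leq 2u/(1+u)$ because $1+u\leq 2$, while if $u\geq 1$ then $1\wedge u=1$ and $1\leq 2u/(1+u)$ because $1+u\leq 2u$. Taking $u=x_d/t^{1/\alpha}$ gives $$1\wedge \frac{x_d}{t^{1/\alpha}}\sim \frac{x_d}{x_d+t^{1/\alpha}},$$ and similarly for the remaining factors.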
By using [@BM23 Proposition 2.2(a)] we obtain that if $\alpha\in (0,2)$, $\lambda\geq 0$ and $\epsilon\in (0,1)$, for every $z\in \mathbb{C}$, $|\arg(z)|<\epsilon\pi/4$, and $x$, $y\in \mathbb{R}^d_+$, $$W^{\alpha}_{\lambda,z}(x,y) \lesssim \left( \frac{|x|}{|x| + |z|^{1/\alpha}}\right)^{\mathfrak{p}} \left( \frac{|y|}{|y| + |z|^{1/\alpha}}\right)^{\mathfrak{p}} |z|^{-d/\alpha} \left( \frac{|z|^{1/\alpha}}{|z|^{1/\alpha} + |x-y|}\right)^{(d+\alpha)(1-\epsilon)}.$$ By using the Cauchy integral formula we deduce that, if $\alpha \in (0,2)$, $\lambda\geq 0$, $k\in \mathbb{N}$ and $\epsilon\in (0,1)$, $$|t^k \partial^k_t W^{\alpha}_{\lambda,t}(x,y)| \lesssim \left( \frac{|x|}{|x| + t^{1/\alpha}}\right)^{\mathfrak{p}} \left( \frac{|y|}{|y| + t^{1/\alpha}}\right)^{\mathfrak{p}} t^{-d/\alpha} \left( \frac{t^{1/\alpha}}{t^{1/\alpha} + |x-y|}\right)^{(d+\alpha)(1-\epsilon)},$$ for every $x$, $y\in \mathbb{R}^d_+$ and $t>0$. In a similar way we get that if $\alpha = 2$, $\lambda \geq -1/4$ and $k\in \mathbb{N}$, $$\label{eq: 2.2} |t^k \partial^k_t W^{2}_{\lambda,t}(x,y) | \lesssim \left( \frac{|x|}{|x| + t^{1/2}}\right)^{\mathfrak{p}} \left( \frac{|y|}{|y| + t^{1/2}}\right)^{\mathfrak{p}} t^{-d/2} e^{-\frac{c|x-y|^2}{t}}$$ for every $x$, $y\in \mathbb{R}^d_+$ and $t>0$. However, we need to improve [\[eq: 2.1\]](#eq: 2.1){reference-type="eqref" reference="eq: 2.1"} and [\[eq: 2.2\]](#eq: 2.2){reference-type="eqref" reference="eq: 2.2"} by replacing $|x|$ and $|y|$ by $x_d$ and $y_d$ respectively. **Proposition 4**. 1. *[\[item: prop 2.2.a\]]{#item: prop 2.2.a label="item: prop 2.2.a"} Let $\alpha\in (0,2)$, $\lambda\geq 0$, $\epsilon\in(0,1)$ and $k\in\mathbb{N}$. For every $x$, $y\in \mathbb{R}^d_+$ and $t>0$ $$|t^k \partial^k_t W^{\alpha}_{\lambda,t}(x,y)| \lesssim \left( \frac{x_d}{x_d + t^{1/\alpha}}\right)^{\mathfrak{p}} \left( \frac{y_d}{y_d + t^{1/\alpha}}\right)^{\mathfrak{p}} t^{-d/\alpha} \left( \frac{t^{1/\alpha}}{t^{1/\alpha} + |x-y|}\right)^{(d+\alpha)(1-\epsilon)}.$$* 2.
*[\[item: prop 2.2.b\]]{#item: prop 2.2.b label="item: prop 2.2.b"} Let $\lambda \geq -1/4$ and $k \in \mathbb{N}$. For every $x$, $y\in \mathbb{R}^d_+$ and $t>0$, $$|t^k \partial^k_t W^{2}_{\lambda,t}(x,y)| \lesssim \left( \frac{x_d}{x_d + t^{1/2}}\right)^{\mathfrak{p}} \left( \frac{y_d}{y_d + t^{1/2}}\right)^{\mathfrak{p}} t^{-d/2} e^{-\frac{c|x-y|^2}{t}}.$$* *Proof.* We prove [\[item: prop 2.2.a\]](#item: prop 2.2.a){reference-type="ref" reference="item: prop 2.2.a"} first. According to [\[eq: 2.1\]](#eq: 2.1){reference-type="eqref" reference="eq: 2.1"} and proceeding as in the proof of [@BuiD Proposition 3.4] (see also [@BM23 Proposition 2.2(a)]) we deduce that $$\label{eq: 2.4} |w(x_d,z) W^{\alpha}_{\lambda,z}(x,y) w(y_d,z)| \leq \frac{C}{|z|^{d/\alpha}},$$ for $x$, $y\in\mathbb{R}^d_+$ and $z\in \mathbb{C}$, $|\arg(z)|\leq \pi/4$, where $$w(s,z) =\left( \frac{s}{s + z^{1/\alpha}}\right)^{-\mathfrak{p}},$$ for $s>0$ and $z\in\mathbb{C}$, $|\arg(z)|\leq \pi/2$. Indeed, since the operator $L^\alpha_\lambda$ is self-adjoint and non-negative, for every $s>0$, the operator $W^\alpha_{\lambda,is}$ is contractive in $L^2(\mathbb{R}^d_+)$. In order to prove our claim it is sufficient to see that $$\| W^\alpha_{\lambda,t}\|_{L^1(\mathbb{R}^d_+, w^{-1}) \hookrightarrow L^2(\mathbb{R}^d_+)} + \|W^\alpha_{\lambda,t} \|_{L^2(\mathbb{R}^d_+) \hookrightarrow wL^{\infty}(\mathbb{R}^d_+)} \lesssim t^{-\frac{d}{2\alpha}}, \;\; t>0,$$ where $wL^{\infty}(\mathbb{R}^d_+) = \{f \text{ measurable in } \mathbb{R}^d_+: \, wf\in L^{\infty}(\mathbb{R}^d_+)\}$. Suppose $f\in L^{2}(\mathbb{R}^d_+)$.
According to [\[eq: 2.0\]](#eq: 2.0){reference-type="eqref" reference="eq: 2.0"} and using [@BuiD equation (18)] we get $$\begin{split} |W^{\alpha}_{\lambda,t} (f)(x)| w(x_d,t) & \lesssim \int_{\mathbb{R}^d_+} \left(\frac{y_d}{y_d + t^{1/\alpha}}\right)^{\mathfrak{p}} \frac{t}{\left(t^{1/\alpha}+ |x-y|\right)^{d+\alpha}} |f(y)|dy \\ & \lesssim \left( \int_{\mathbb{R}^d_+} \left| \left(\frac{|y|}{|y| + t^{1/\alpha}}\right)^{\mathfrak{p}} \frac{t}{\left(t^{1/\alpha}+ |x-y|\right)^{d+\alpha}} \right|^2 dy \right)^{1/2} \|f\|_{L^2(\mathbb{R}^d_+)} \\ & \lesssim t^{-\frac{d}{2\alpha}} \|f\|_{L^2(\mathbb{R}^d_+)}, \end{split}$$ for $x\in \mathbb{R}^d_+$ and $t>0$. In a similar way we can see that $$\|W^{\alpha}_{\lambda,t} f\|_{L^2(\mathbb{R}^d_+)} \lesssim t^{-\frac{d}{2\alpha}} \|f\|_{L^1(\mathbb{R}^d_+,w^{-1})},$$ for $f\in L^1(\mathbb{R}^d_+,w^{-1})$ and $t>0$. We conclude that [\[eq: 2.4\]](#eq: 2.4){reference-type="eqref" reference="eq: 2.4"} holds. On the other hand, according to [\[eq: 2.0\]](#eq: 2.0){reference-type="eqref" reference="eq: 2.0"} it follows that $$\begin{split} |w(x_d,t) W^{\alpha}_{\lambda,t} (x,y) w(y_d,t)| & \lesssim t^{-d/\alpha} \left( \frac{t^{1/\alpha}}{t^{1/\alpha} + |x-y|}\right)^{d+\alpha} \\ & \lesssim t^{-d/\alpha} \left( 1+ \frac{|x-y|}{t^{1/\alpha}} \right)^{-d-\alpha} \\ & \lesssim t^{-d/\alpha} \left( 1+ \left(\frac{|x-y|^{\alpha}}{t}\right)^{1/\alpha} \right)^{-d-\alpha}, \end{split}$$ for $x$, $y\in\mathbb{R}^d_+$ and $t>0$. Let us define the function $b(s) = \left(1+s^{1/\alpha}\right)^{-d-\alpha}$, $s>0$. It is clear that $b$ is bounded and decreasing in $(0,\infty)$.
Then, by [@DR Proposition 3.3], for each $\epsilon\in (0,1)$ and $\theta \in (0, \epsilon\pi/2)$ there exists $C>0$ such that $$|W^{\alpha}_{\lambda,z} (x,y)| \leq C \left|\frac{x_d}{x_d+z}\right|^{\mathfrak{p}} \left|\frac{y_d}{y_d+z}\right|^{\mathfrak{p}} (\mathop{\mathrm{Re}}(z))^{-d/\alpha} b\left( \frac{|x-y|^{\alpha}}{|z|}\right)^{1-\epsilon},$$ for $x$, $y\in\mathbb{R}^d_+$ and $z\in \mathbb{C}$ such that $|\arg(z)|\leq \theta$. If $z\in\mathbb{C}$ and $|\arg(z)|\leq \theta$, with $0<\theta<\pi/2$, then $$|a+z| = \sqrt{(a + \mathop{\mathrm{Re}}(z))^2 + (\textup{Im}(z))^2} \geq \sqrt{a^2 + |z|^2} \gtrsim a + |z|, \; a>0.$$ We conclude that for each $\epsilon\in (0,1)$ and $\theta \in (0,\epsilon\pi/2)$ $$\label{eq: 2.5} |W^{\alpha}_{\lambda,z} (x,y)| \lesssim \left(\frac{x_d}{x_d+|z|}\right)^{\mathfrak{p}} \left(\frac{y_d}{y_d+|z|}\right)^{\mathfrak{p}} |z|^{-d/\alpha} \left( \frac{|z|^{1/\alpha}}{|z|^{1/\alpha} + |x-y|}\right)^{(d+\alpha)(1-\epsilon)},$$ where $z\in \mathbb{C}$ and $|\arg(z)|\leq \theta$. Note that $\mathop{\mathrm{Re}}(z)\sim |z|$ for $z\in \mathbb{C}$ with $|\arg(z)|\leq \theta$. Estimates like [\[eq: 2.5\]](#eq: 2.5){reference-type="eqref" reference="eq: 2.5"} can also be proved by using the methods in [@Me Theorem 2.1]. The Cauchy integral formula allows us to obtain [\[item: prop 2.2.a\]](#item: prop 2.2.a){reference-type="ref" reference="item: prop 2.2.a"}. In order to prove [\[item: prop 2.2.b\]](#item: prop 2.2.b){reference-type="ref" reference="item: prop 2.2.b"} we can proceed in a similar way. We can also prove it using Davies' method as in the proof of [@Da Theorem 3.4.8] and the Cauchy integral formula.
◻ # Proof of Theorem [Theorem 1](#thm: unweighted case){reference-type="ref" reference="thm: unweighted case"} for $\alpha\in (0,2)$ {#sec: proof-teo1.1-alfa<2} ## Global part We define, for $k\in \mathbb N$, $k\geq 1$, $$\label{eq: def nucleo Jalfa} J_\lambda^{\alpha, k} (x,y)=\int_0^\infty \left|t^{k-1}\partial_t^k (W^{\alpha}_{\lambda,t}(x,y)-W^{\alpha}_{0,t}(x,y))\right|\chi_G(x,y,t)\, dt, \quad x,y\in \mathbb{R}^d_+,$$ where $G$ is as in [\[eq: G\]](#eq: G){reference-type="eqref" reference="eq: G"}. By using Proposition [Proposition 4](#prop: 2.2){reference-type="ref" reference="prop: 2.2"} [\[item: prop 2.2.a\]](#item: prop 2.2.a){reference-type="ref" reference="item: prop 2.2.a"} we get $$\begin{aligned} \left|J_\lambda^{\alpha, k} (x,y)\right|&\leq \int_0^\infty \left(\left|t^{k-1}\partial_t^k W^{\alpha}_{\lambda,t}(x,y)\right|+\left|t^{k-1}\partial_t^k W^{\alpha}_{0,t}(x,y)\right|\right)\chi_G(x,y,t)\, dt\\ &\lesssim \int_0^\infty \frac{1}{t^{1+d/\alpha}} \left(\frac{x_d}{x_d+t^{1/\alpha}}\right)^{\gamma} \left(\frac{y_d}{y_d+t^{1/\alpha}}\right)^{\gamma}\\ &\qquad \times\left(\frac{t^{1/\alpha}}{t^{1/\alpha}+|x-y|}\right)^{(d+\alpha)(1-\epsilon)}\chi_G(x,y,t)\, dt, \quad x,y\in \mathbb{R}^d_+,\end{aligned}$$ where $0<\epsilon<1$ and, as we set before, $\gamma=(\alpha-1)_+$. Here, we can actually use that the power $\mathfrak{p}$ can be replaced by any non-negative number less than or equal to $\mathfrak{p}$ in the estimate given in Proposition [Proposition 4](#prop: 2.2){reference-type="ref" reference="prop: 2.2"} [\[item: prop 2.2.a\]](#item: prop 2.2.a){reference-type="ref" reference="item: prop 2.2.a"}, because the factors $\frac{x_d}{x_d+t^{1/\alpha}}$ and $\frac{y_d}{y_d+t^{1/\alpha}}$ are at most $1$.
We shall split the last integral over $G_1$ and $G_2$, and denote $$\begin{aligned} S_{\epsilon, j}^\alpha(x,y)&=\int_0^\infty \frac{1}{t^{1+d/\alpha}} \left(\frac{x_d}{x_d+t^{1/\alpha}}\right)^{\gamma} \left(\frac{y_d}{y_d+t^{1/\alpha}}\right)^{\gamma}\\ &\quad \times\left(\frac{t^{1/\alpha}}{t^{1/\alpha}+|x-y|}\right)^{(d+\alpha)(1-\epsilon)}\chi_{G_j}(x,y,t)\, dt, \quad j=1,2.\end{aligned}$$ From the definition of $G_1$ we can write $$\begin{aligned} S_{\epsilon, 1}^\alpha &(x,y)\\ &=\int_{(x_d\vee y_d)^\alpha}^\infty \left(\frac{x_d}{x_d+t^{1/\alpha}}\right)^{\gamma} \left(\frac{y_d}{y_d+t^{1/\alpha}}\right)^{\gamma}\left(\frac{t^{1/\alpha}}{t^{1/\alpha}+|x-y|}\right)^{(d+\alpha)(1-\epsilon)}\, \frac{dt}{t^{1+d/\alpha}}.\end{aligned}$$ If we suppose that $x,y\in \mathbb{R}^d_+$ with $|x-y|\leq x_d\vee y_d$, we get $$S_{\epsilon, 1}^\alpha (x,y)\lesssim \int_{(x_d\vee y_d)^\alpha}^\infty \frac{(x_d y_d)^{\gamma}}{t^{1+d/\alpha+2\gamma/\alpha}}\, dt\leq C \frac{(x_d y_d)^{\gamma}}{(x_d\vee y_d)^{d+2\gamma}}, \quad 0<\epsilon<1.$$ On the other hand, if $|x-y|> x_d\vee y_d$, $$\begin{aligned} S_{\epsilon, 1}^\alpha &(x,y)\\ &=\int_{(x_d\vee y_d)^\alpha}^{|x-y|^\alpha} \left(\frac{x_d}{x_d+t^{1/\alpha}}\right)^{\gamma} \left(\frac{y_d}{y_d+t^{1/\alpha}}\right)^{\gamma}\left(\frac{t^{1/\alpha}}{t^{1/\alpha}+|x-y|}\right)^{(d+\alpha)(1-\epsilon)}\, \frac{dt}{t^{1+d/\alpha}}\\ &\quad +\int_{|x-y|^\alpha}^\infty \left(\frac{x_d}{x_d+t^{1/\alpha}}\right)^{\gamma} \left(\frac{y_d}{y_d+t^{1/\alpha}}\right)^{\gamma}\left(\frac{t^{1/\alpha}}{t^{1/\alpha}+|x-y|}\right)^{(d+\alpha)(1-\epsilon)}\, \frac{dt}{t^{1+d/\alpha}}\\ &\lesssim \int_{0}^{|x-y|^\alpha} \frac{(x_d y_d)^{\gamma}}{t^{1+d/\alpha+2\gamma/\alpha}} \left(\frac{t^{1/\alpha}}{|x-y|}\right)^{(d+\alpha)(1-\epsilon)}\, dt+\int_{|x-y|^\alpha}^\infty \frac{(x_d y_d)^{\gamma}}{t^{1+d/\alpha+2\gamma/\alpha}}\, dt\\ &\lesssim (x_d y_d)^{\gamma}\left(\frac{1}{|x-y|^{(d+\alpha)(1-\epsilon)}}\int_{0}^{|x-y|^\alpha} t^{\frac{(d+\alpha)(1-\epsilon)}{\alpha}-1-\frac{d}{\alpha}-\frac{2\gamma}{\alpha}}\, dt\right.\\ &\quad +\left. \int_{|x-y|^\alpha}^\infty \frac{dt}{t^{1+d/\alpha+2\gamma/\alpha}} \right)\\ &\lesssim \frac{(x_d y_d)^{\gamma}}{|x-y|^{d+2\gamma}},\end{aligned}$$ provided that $(d+\alpha)(1-\epsilon)-d-2\gamma>0$, that is, whenever $0<\epsilon<(\alpha-2\gamma)/(d+\alpha)$. Note that this is possible since $\gamma=(\alpha-1)_+$ implies $\alpha-2\gamma>0$. We conclude that $$S_{\epsilon, 1}^\alpha (x,y)\lesssim \frac{(x_d y_d)^{\gamma}}{(|x-y|\vee x_d \vee y_d)^{d+2\gamma}}, \quad x,y\in \mathbb{R}^d_+$$ when $0<\epsilon<(\alpha-2\gamma)/(d+\alpha)$. For $j=2$, we take into account that $|x-y|\geq (x_d\wedge y_d)/2$, so we have $$\begin{aligned} S_{\epsilon, 2}^\alpha (x,y)&\lesssim \int_0^{(x_d\vee y_d)^\alpha} \left(\frac{x_d\wedge y_d}{x_d\wedge y_d+t^{1/\alpha}}\right)^{\gamma} \left(\frac{t^{1/\alpha}}{t^{1/\alpha}+|x-y|}\right)^{(d+\alpha)(1-\epsilon)}\, \frac{dt}{t^{1+d/\alpha}}\\ &=\int_0^{(x_d\wedge y_d)^\alpha} \left(\frac{x_d\wedge y_d}{x_d\wedge y_d+t^{1/\alpha}}\right)^{\gamma} \left(\frac{t^{1/\alpha}}{t^{1/\alpha}+|x-y|}\right)^{(d+\alpha)(1-\epsilon)}\, \frac{dt}{t^{1+d/\alpha}}\\ &\quad +\int_{(x_d\wedge y_d)^\alpha}^{(x_d\vee y_d)^\alpha}\left(\frac{x_d\wedge y_d}{x_d\wedge y_d+t^{1/\alpha}}\right)^{\gamma} \left(\frac{t^{1/\alpha}}{t^{1/\alpha}+|x-y|}\right)^{(d+\alpha)(1-\epsilon)}\, \frac{dt}{t^{1+d/\alpha}}\\ &\lesssim \int_0^{(x_d\wedge y_d)^\alpha} \left(\frac{t^{1/\alpha}}{|x-y|}\right)^{(d+\alpha)(1-\epsilon)} \frac{dt}{t^{1+d/\alpha}}\\ &\quad +\int_{(x_d\wedge y_d)^\alpha}^{(x_d\vee y_d)^\alpha} \left(\frac{x_d\wedge y_d}{t^{1/\alpha}}\right)^{\gamma} \left(\frac{t^{1/\alpha}}{|x-y|}\right)^{(d+\alpha)(1-\epsilon)} \frac{dt}{t^{1+d/\alpha}}\\ &\lesssim \int_0^{(x_d\wedge y_d)^\alpha} \frac{t^{(d+\alpha)(1-\epsilon)/\alpha-1-d/\alpha}}{|x-y|^{(d+\alpha)(1-\epsilon)}}\, dt\\ &\quad +(x_d\wedge y_d)^{\gamma}\int_0^{(x_d\vee y_d)^\alpha}\frac{t^{\frac{(d+\alpha)(1-\epsilon)}{\alpha}-1-\frac{d}{\alpha}-\frac{\gamma}{\alpha}}}{|x-y|^{(d+\alpha)(1-\epsilon)}}\, dt\\ &\lesssim \frac{(x_d\wedge y_d)^{(d+\alpha)(1-\epsilon)-d}}{|x-y|^{(d+\alpha)(1-\epsilon)}} +\frac{(x_d\wedge y_d)^{\gamma} (x_d\vee y_d)^{(d+\alpha)(1-\epsilon)-d-\gamma}}{|x-y|^{(d+\alpha)(1-\epsilon)}}\\ &\lesssim \frac{1}{|x-y|^{(d+\alpha)(1-\epsilon)}} (x_d\vee y_d)^{(d+\alpha)(1-\epsilon)-d-\gamma} (x_d\wedge y_d)^{\gamma}\end{aligned}$$ provided that $(d+\alpha)(1-\epsilon)-d-\gamma>0$, i.e., $0<\epsilon<(\alpha-\gamma)/(d+\alpha)$. Actually, this is possible since the restriction obtained in the case $j=1$ implies this one. Recall that the above estimates for $S_{\epsilon, j}^\alpha$, $j=1,2$, are valid for any non-negative power less than or equal to $\gamma$; in particular, for the power zero. Thus, in this case we get $$\begin{aligned} \left|J_\lambda^{\alpha, k} (x,y)\right|&\lesssim \frac{1}{(|x-y|\vee x_d \vee y_d)^{d}}+\chi_{\left\{|x-y|\geq \frac{x_d\wedge y_d}{2}\right\}}(x,y)\frac{(x_d \vee y_d)^{(d+\alpha)(1-\epsilon)-d}}{|x-y|^{(d+\alpha)(1-\epsilon)}}\end{aligned}$$ whenever $0<\epsilon<\alpha/(d+\alpha)$. Fix now $0<\epsilon<\alpha/(d+\alpha)$ and define $\alpha'=(d+\alpha)(1-\epsilon)-d$. Hence (see [@FM23JFA p. 31]), $$\frac{1}{(|x-y|\vee x_d \vee y_d)^{d}}\lesssim \frac{(|x-y|\vee x_d \vee y_d)^{\alpha'}}{(|x-y|\vee (x_d \wedge y_d))^{d+\alpha'}}$$ and $$\frac{(x_d \vee y_d)^{\alpha'}}{|x-y|^{d+\alpha'}}\chi_{\left\{|x-y|\geq \frac{x_d\wedge y_d}{2}\right\}}(x,y)\lesssim \frac{(|x-y|\vee x_d \vee y_d)^{\alpha'}}{(|x-y|\vee (x_d \wedge y_d))^{d+\alpha'}}.$$ Therefore, by proceeding as in the proof of [@FM23JFA Proposition 19, Step 1], we deduce that $$\int_{\mathbb R^{d-1}} \left|J_\lambda^{\alpha, k} (x,y)\right|\, dy'\lesssim \frac{(x_d \vee y_d)^{\alpha'}}{(|x_d-y_d|\vee (x_d \wedge y_d))^{\alpha'+1}},$$ where $y'=(y_1,\dots, y_{d-1})$.
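For the reader's convenience, every power integral appearing in the estimates of $S_{\epsilon, j}^\alpha$ above is an instance of the two elementary formulas $$\int_0^A t^{\delta-1}\, dt=\frac{A^{\delta}}{\delta}, \quad \delta>0, \qquad \int_A^\infty t^{-1-\beta}\, dt=\frac{A^{-\beta}}{\beta}, \quad \beta>0.$$ For instance, taking $\beta=(d+2\gamma)/\alpha$ and $A=(x_d\vee y_d)^\alpha$ in the second formula produces the factor $(x_d\vee y_d)^{-(d+2\gamma)}$ in the first bound for $S_{\epsilon, 1}^\alpha$.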
By considering $w_\beta(x,y)=\left(\frac{x_d}{y_d}\right)^\beta$, $x,y\in \mathbb{R}^d_+$ and $\beta>0$, we get, as in [@FM23JFA Proposition 19, Step 2], that for any $1<p<\infty$, $$\begin{aligned} \sup_{x\in \mathbb{R}^d_+} &\int_{\mathbb{R}^d_+} w_\beta(x,y)^{1/p} \left|J_\lambda^{\alpha, k} (x,y)\right|\, dy <\infty, \quad 0<\beta/p<1,\\ \sup_{x\in \mathbb{R}^d_+} &\int_{\mathbb{R}^d_+} w_\beta(x,y)^{-1/p'} \left|J_\lambda^{\alpha, k} (x,y)\right|\, dx <\infty, \quad 0<\beta/p'<1.\end{aligned}$$ According to Schur's test with weights (see, for instance, [@KMVZZ Lemma 3.3]), we can conclude that the integral operator defined by $$J_\lambda^{\alpha, k} (f)(x)=\int_{\mathbb{R}^d_+} J_\lambda^{\alpha, k} (x,y) f(y)\, dy, \quad x\in \mathbb{R}^d_+,$$ is bounded on $L^p(\mathbb{R}^d_+)$ for every $1<p<\infty$. ## Local part {#sec: proof-teo1.1-alfa<2 local} According to the Duhamel formula we can write, for every $x,y\in \mathbb{R}^d_+$ and $t>0$, that $$\begin{aligned} \label{eq: duhamel} \nonumber D_{\lambda,t}^{\alpha, 0}(x,y)&:=\lambda \int_0^t \int_{\mathbb{R}^d_+} W^{\alpha}_{0,t-s} (x,z) z_d^{-\alpha} W^{\alpha}_{\lambda,s} (z,y)\, dzds\\ \nonumber &=\lambda \int_0^{t/2} \int_{\mathbb{R}^d_+} W^{\alpha}_{0,t-s} (x,z) z_d^{-\alpha} W^{\alpha}_{\lambda,s} (z,y)\, dzds\\ &\quad +\lambda \int_0^{t/2} \int_{\mathbb{R}^d_+} W^{\alpha}_{0,s} (x,z) z_d^{-\alpha} W^{\alpha}_{\lambda,t-s} (z,y)\, dzds. \end{aligned}$$ The second identity follows by splitting the integral at $t/2$ and performing the change of variables $s\mapsto t-s$ on $(t/2,t)$. We have that $D_{\lambda,t}^{\alpha, 0}(x,y)=t^{-d/\alpha}F\left(t^{-1/\alpha}x, t^{-1/\alpha}y\right)$, $x,y\in\mathbb{R}^d_+$, $t>0$ (see comments after [@FM23JFA Theorem 9]) for a certain smooth function $F:\mathbb{R}^d_+\times \mathbb{R}^d_+\to \mathbb R$.
Thus, for every $k\in \mathbb N$, there exists a smooth function $F_k:\mathbb{R}^d_+\times \mathbb{R}^d_+\to \mathbb R$ for which $$\label{eq: P1} t^k\partial_t^k D_{\lambda,t}^{\alpha, 0}(x,y)=t^{-d/\alpha}F_k\left(t^{-1/\alpha}x, t^{-1/\alpha}y\right), \quad x,y\in\mathbb{R}^d_+, t>0.$$ Given $k\in \mathbb N$, from [\[eq: duhamel\]](#eq: duhamel){reference-type="eqref" reference="eq: duhamel"} we can write $$\begin{aligned} \partial_t^k D_{\lambda,t}^{\alpha, 0}(x,y)&=\sum_{j=0}^{k-1}a_{j,k} \int_{\mathbb{R}^d_+} \partial_t^j W^{\alpha}_{0,t/2} (x,z) z_d^{-\alpha} \partial_t^{k-1-j} W^{\alpha}_{\lambda,t/2} (z,y) \, dz\\ &\quad +\lambda \int_0^{t/2} \int_{\mathbb{R}^d_+} \partial_t^k W^{\alpha}_{0,t-s} (x,z) z_d^{-\alpha} W^{\alpha}_{\lambda,s} (z,y)\, dzds\\ &\quad +\lambda \int_0^{t/2} \int_{\mathbb{R}^d_+} W^{\alpha}_{0,s} (x,z) z_d^{-\alpha} \partial_t^k W^{\alpha}_{\lambda,t-s} (z,y)\, dzds, \quad x,y\in\mathbb{R}^d_+, t>0,\end{aligned}$$ where $a_{j,k}\in \mathbb R$ for each $j=0,\dots, k-1$. Assume now that $x,y\in \mathbb{R}^d_+$ with $|x-y|\leq (x_d\wedge y_d)/2$ and $0<t^{1/\alpha}\leq x_d\vee y_d$. Taking into account the scaling property [\[eq: P1\]](#eq: P1){reference-type="eqref" reference="eq: P1"}, we are going to estimate $$\left.
\partial_t^k D_{\lambda,t}^{\alpha, 0}(x,y)\right|_{t=1}.$$ We define, for every $j=0,\dots, k-1$, the integrals involved in the above summation as $$H_j(x,y,t)=\int_{\mathbb{R}^d_+} \partial_t^j W^{\alpha}_{0,t/2} (x,z) z_d^{-\alpha} \partial_t^{k-1-j} W^{\alpha}_{\lambda,t/2} (z,y) \, dz, \quad x,y\in\mathbb{R}^d_+, t>0,$$ and we decompose each one of them as follows $$\begin{aligned} H_j(x,y,t)&=\int_{\{z\in\mathbb{R}^d_+: z_d>x_d/2\}}\partial_t^j W^{\alpha}_{0,t/2} (x,z) z_d^{-\alpha} \partial_t^{k-1-j} W^{\alpha}_{\lambda,t/2} (z,y) \, dz\\ &\quad +\int_{\{z\in\mathbb{R}^d_+: z_d\leq x_d/2\}}\partial_t^j W^{\alpha}_{0,t/2} (x,z) z_d^{-\alpha} \partial_t^{k-1-j} W^{\alpha}_{\lambda,t/2} (z,y) \, dz\\ &:=H_{j,1}(x,y,t)+H_{j,2}(x,y,t), \quad x,y\in\mathbb{R}^d_+, t>0.\end{aligned}$$ We notice first that, since $x_d\sim y_d$ (because $|x-y|\leq (x_d\wedge y_d)/2$), when $t=1$ we have $x_d\sim y_d\gtrsim 1$. By using Proposition [Proposition 4](#prop: 2.2){reference-type="ref" reference="prop: 2.2"}[\[item: prop 2.2.a\]](#item: prop 2.2.a){reference-type="ref" reference="item: prop 2.2.a"}, for $\epsilon\in (0,1)$ we have $$\begin{aligned} |H_{j,1}(x,y,1)|&\lesssim x_d^{-\alpha} \int_{\mathbb{R}^d_+} \left(\frac{x_d}{x_d+1}\right)^{(\alpha-1)_+}\left(\frac{z_d}{z_d+1}\right)^{(\alpha-1)_+}\left(\frac{1}{1+|x-z|}\right)^{(d+\alpha)(1-\epsilon)}\\ &\quad \times\left(\frac{y_d}{y_d+1}\right)^{(\alpha-1)_+}\left(\frac{z_d}{z_d+1}\right)^{(\alpha-1)_+}\left(\frac{1}{1+|y-z|}\right)^{(d+\alpha)(1-\epsilon)}\, dz.\end{aligned}$$ We define $\alpha'=\alpha(1-\epsilon)-d\epsilon$ and consider $0<\epsilon<\alpha/(\alpha+d)$.
Then, $0<\alpha'<\alpha<2$, and we get $$\begin{aligned} |H_{j,1}(x,y,1)|&\lesssim x_d^{-\alpha} \int_{\mathbb{R}^d_+} \left(\frac{x_d}{x_d+1}\right)^{(\alpha'-1)_+}\left(\frac{z_d}{z_d+1}\right)^{2(\alpha'-1)_+}\left(\frac{y_d}{y_d+1}\right)^{(\alpha'-1)_+}\\ &\quad \times\left(\frac{1}{1+|x-z|}\right)^{d+\alpha'}\left(\frac{1}{1+|y-z|}\right)^{d+\alpha'}\, dz. \end{aligned}$$ According to [@FM23JFA Theorem 9], we have $$\begin{aligned} |H_{j,1}(x,y,1)|&\lesssim x_d^{-\alpha} \int_{\mathbb{R}^d_+} W^{\alpha'}_{0,1}(x,z)W^{\alpha'}_{0,1}(z,y)\, dz\\ &=Cx_d^{-\alpha} W^{\alpha'}_{0,2}(x,y)\\ &\lesssim x_d^{-\alpha} \left(\frac{x_d}{x_d+1}\right)^{(\alpha'-1)_+}\left(\frac{y_d}{y_d+1}\right)^{(\alpha'-1)_+}\left(\frac{1}{1+|x-y|}\right)^{d+\alpha'}.\end{aligned}$$ We now estimate $H_{j,2}$. If $z_d\leq x_d/2$, since $|x-y|<(x_d\wedge y_d)/2$, we deduce that $|y_d-z_d|\sim y_d\sim x_d$ and $|x_d-z_d|\sim x_d$. Then, if $z_d\leq x_d/2$, we have $|x-z|\sim |x'-z'|+x_d$ and $|y-z|\sim |y'-z'|+x_d$. By [@FM23JFA Lemma 22] with $N=d-1\geq 1$, $\beta=\alpha(1-\epsilon)+1-d\epsilon$ and $r=s=1+x_d$ we get that whenever $z_d\leq x_d/2$, $$\begin{aligned} \int_{\mathbb R^{d-1}} & \left|\left. \partial_t^j W^{\alpha}_{0,t/2}(x,z)\right|_{t=1}\right|\left|\left.
\partial_t^{k-1-j} W^{\alpha}_{\lambda,t/2}(z,y)\right|_{t=1}\right| dz'\\ &\lesssim \left(\frac{x_d}{x_d+1}\right)^{(\alpha-1)_+}\left(\frac{z_d}{z_d+1}\right)^{(\alpha-1)_++\mathfrak{p}}\left(\frac{y_d}{y_d+1}\right)^{\mathfrak{p}} \\ &\quad \times \int_{\mathbb R^{d-1}} \left(\frac{1}{1+|x'-z'|+x_d}\right)^{(d+\alpha)(1-\epsilon)}\left(\frac{1}{1+|y'-z'|+x_d}\right)^{(d+\alpha)(1-\epsilon)} dz'\\ &\lesssim \left(\frac{x_d}{x_d+1}\right)^{(\alpha-1)_+}\left(\frac{z_d}{z_d+1}\right)^{(\alpha-1)_++\mathfrak{p}}\left(\frac{y_d}{y_d+1}\right)^{\mathfrak{p}} \frac{1}{x_d^{\alpha(1-\epsilon)+1-d\epsilon}}\\ &\quad \times \frac{1}{(1+x_d+|y'-x'|)^{(d+\alpha)(1-\epsilon)}}\\ &\lesssim \left(\frac{x_d}{x_d+1}\right)^{(\alpha-1)_+}\left(\frac{z_d}{z_d+1}\right)^{(\alpha-1)_++\mathfrak{p}}\left(\frac{y_d}{y_d+1}\right)^{\mathfrak{p}} \frac{1}{x_d^{\alpha(1-\epsilon)+1-d\epsilon+(d+\alpha)(1-\epsilon)}}.\end{aligned}$$ From [@FM23JFA p. 26] we also have that $$\int_0^{x_d/2} \left(\frac{z_d}{z_d+1}\right)^{(\alpha-1)_++\mathfrak{p}} \frac{dz_d}{z_d^\alpha}\sim \boldsymbol{1}_{\alpha\geq 1}+(\ln(1+x_d))\boldsymbol{1}_{\alpha=1}+x_d^{1-\alpha}\boldsymbol{1}_{\alpha< 1}.$$ Then, [\[pag: cota xd\>=1\]]{#pag: cota xd>=1 label="pag: cota xd>=1"} $$\begin{aligned} &\frac{1}{x_d^{\alpha(1-\epsilon)+1-d\epsilon+(d+\alpha)(1-\epsilon)}} \int_0^{x_d/2} \left(\frac{z_d}{z_d+1}\right)^{(\alpha-1)_++\mathfrak{p}} \frac{dz_d}{z_d^\alpha}\\ &\lesssim \frac{1}{x_d^{\alpha+(d+\alpha)(1-\epsilon)}} \frac{1}{x_d^{1-(d+\alpha)\epsilon}} \left(\boldsymbol{1}_{\alpha\geq 1}+(\ln(1+x_d))\boldsymbol{1}_{\alpha=1}+x_d^{1-\alpha}\boldsymbol{1}_{\alpha< 1}\right)\\ &\lesssim \frac{1}{x_d^{\alpha+(d+\alpha)(1-\epsilon)}}\lesssim \frac{1}{x_d^\alpha} \left(\frac{1}{1+|x-y|}\right)^{(d+\alpha)(1-\epsilon)},\end{aligned}$$ provided that $0<\epsilon<\alpha/(d+\alpha)$ and $x_d\geq 1$. 
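Purely as a numerical illustration, outside the proof, the three regimes of the last $z_d$-integral can be observed with a midpoint rule; the function name and parameters below are ours, with $c$ standing for $(\alpha-1)_++\mathfrak{p}$.

```python
def z_integral(x, alpha, c, n=200_000):
    """Midpoint-rule approximation of int_0^{x/2} (z/(z+1))^c z^(-alpha) dz,
    the z_d-integral above with c = (alpha-1)_+ + p; illustrative only."""
    h = (x / 2.0) / n
    total = 0.0
    for k in range(n):
        z = (k + 0.5) * h
        total += (z / (z + 1.0)) ** c * z ** (-alpha)
    return total * h
```

For $\alpha<1$ with $c=0$ (the case $\lambda=0$) the value grows like $x^{1-\alpha}$, while for $\alpha>1$ it stabilises as $x\to\infty$, matching the indicator cases displayed above.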
We consider now $$\begin{aligned} H_k(x,y,t)&=\int_0^{t/2}\int_{\{z\in\mathbb{R}^d_+: z_d>x_d/2\}}\partial_t^k W^{\alpha}_{0,t-s} (x,z) z_d^{-\alpha} W^{\alpha}_{\lambda,s} (z,y) \, dz ds\\ &\quad +\int_0^{t/2}\int_{\{z\in\mathbb{R}^d_+: z_d\leq x_d/2\}}\partial_t^k W^{\alpha}_{0,t-s} (x,z) z_d^{-\alpha} W^{\alpha}_{\lambda,s} (z,y) \, dz ds\\ &:=H_{k,1}(x,y,t)+H_{k,2}(x,y,t), \quad x,y\in\mathbb{R}^d_+, t>0.\end{aligned}$$ First, we estimate both terms using [\[eq: 2.0\]](#eq: 2.0){reference-type="eqref" reference="eq: 2.0"} and Proposition [Proposition 4](#prop: 2.2){reference-type="ref" reference="prop: 2.2"}[\[item: prop 2.2.a\]](#item: prop 2.2.a){reference-type="ref" reference="item: prop 2.2.a"}. Recalling that $\mathfrak{p}\geq (\alpha-1)_+$, for $\ell=1,2$ we get $$\begin{aligned} \label{eq: estimacion Hkl} |& H_{k,\ell}(x,y,1)|\nonumber\\ &\lesssim \int_0^{1/2} \int_{\mathbb{R}^d_+} (1-s)^{-k-d/\alpha} \left(\frac{x_d}{(1-s)^{1/\alpha}+x_d}\right)^{(\alpha-1)_+} \left(\frac{z_d}{(1-s)^{1/\alpha}+z_d}\right)^{(\alpha-1)_+}\nonumber\\ &\quad \times \left(\frac{(1-s)^{1/\alpha}}{(1-s)^{1/\alpha}+|x-z|}\right)^{(d+\alpha)(1-\epsilon)} z_d^{-\alpha} \left(\frac{y_d}{s^{1/\alpha}+y_d}\right)^{\mathfrak{p}}\left(\frac{z_d}{s^{1/\alpha}+z_d}\right)^{\mathfrak{p}}\frac{1}{s^{d/\alpha}} \nonumber\\ &\quad \times \left(\frac{s^{1/\alpha}}{s^{1/\alpha}+|z-y|}\right)^{(d+\alpha)(1-\epsilon)}\,dzds. \end{aligned}$$ We define, as before, $\alpha'=\alpha(1-\epsilon)-d\epsilon$ for $0<\epsilon<\alpha/(d+\alpha)$, so $0<\alpha'<\alpha<2$. 
Thus, from Proposition [Proposition 3](#prop: 2.1){reference-type="ref" reference="prop: 2.1"}[\[item: prop 2.1 a\]](#item: prop 2.1 a){reference-type="ref" reference="item: prop 2.1 a"} with $\lambda=0$, we have $$\begin{aligned} &|H_{k,1}(x,y,1)|\lesssim \frac{1}{x_d^\alpha} \int_0^{1/2} \int_{\mathbb{R}^d_+} \left(\frac{x_d}{(1-s)^{1/\alpha}+x_d}\right)^{(\alpha'-1)_+} \left(\frac{z_d}{(1-s)^{1/\alpha}+z_d}\right)^{(\alpha'-1)_+} \\ &\quad \times\frac{1}{s^{d/\alpha}} \left(\frac{y_d}{s^{1/\alpha}+y_d}\right)^{(\alpha'-1)_+} \left(\frac{z_d}{s^{1/\alpha}+z_d}\right)^{(\alpha'-1)_+}\\ &\quad\times \left(\frac{(1-s)^{1/\alpha}}{(1-s)^{1/\alpha}+|x-z|}\right)^{d+\alpha'}\left(\frac{s^{1/\alpha}}{s^{1/\alpha}+|z-y|}\right)^{d+\alpha'}\, dzds\\ &\lesssim \frac{1}{x_d^\alpha} \int_0^{1/2} \int_{\mathbb{R}^d_+} \left(\frac{x_d}{((1-s)^{\alpha'/\alpha})^{1/\alpha'}+x_d}\right)^{(\alpha'-1)_+} \left(\frac{z_d}{((1-s)^{\alpha'/\alpha})^{1/\alpha'}+z_d}\right)^{(\alpha'-1)_+} \\ &\quad \times\frac{1}{(s^{\alpha'/\alpha})^{d/\alpha'}}\left(\frac{y_d}{(s^{\alpha'/\alpha})^{1/\alpha'}+y_d}\right)^{(\alpha'-1)_+} \left(\frac{z_d}{(s^{\alpha'/\alpha})^{1/\alpha'}+z_d}\right)^{(\alpha'-1)_+}\\ &\quad \times \left(\frac{((1-s)^{\alpha'/\alpha})^{1/\alpha'}}{((1-s)^{\alpha'/\alpha})^{1/\alpha'}+|x-z|}\right)^{d+\alpha'}\left(\frac{(s^{\alpha'/\alpha})^{1/\alpha'}}{(s^{\alpha'/\alpha})^{1/\alpha'}+|z-y|}\right)^{d+\alpha'}\, dzds\\ &\sim \frac{1}{x_d^\alpha} \int_0^{1/2} \int_{\mathbb{R}^d_+} W^{\alpha'}_{0,(1-s)^{\alpha'/\alpha}}(x,z) W^{\alpha'}_{0,s^{\alpha'/\alpha}}(z,y)\, dzds\\ & \lesssim \frac{1}{x_d^\alpha} \int_0^{1/2} W^{\alpha'}_{0,(1-s)^{\alpha'/\alpha}+s^{\alpha'/\alpha}}(x,y)\, ds\\ &\lesssim \frac{1}{x_d^\alpha} \int_0^{1/2} \left(\frac{x_d}{((1-s)^{\alpha'/\alpha}+s^{\alpha'/\alpha})^{1/\alpha'}+x_d}\right)^{(\alpha'-1)_+} \\ &\qquad \times\left(\frac{y_d}{((1-s)^{\alpha'/\alpha}+s^{\alpha'/\alpha})^{1/\alpha'}+y_d}\right)^{(\alpha'-1)_+}\\ &\qquad
\times\left(\frac{((1-s)^{\alpha'/\alpha}+s^{\alpha'/\alpha})^{1/\alpha'}}{((1-s)^{\alpha'/\alpha}+s^{\alpha'/\alpha})^{1/\alpha'}+|x-y|}\right)^{\alpha'+d}\, ds\\ &\lesssim \frac{1}{x_d^\alpha} \int_0^{1/2} \left(\frac{x_d}{2^{-1/\alpha}+x_d}\right)^{(\alpha'-1)_+} \left(\frac{y_d}{2^{-1/\alpha}+y_d}\right)^{(\alpha'-1)_+} \left(\frac{2^{1/\alpha}}{2^{1/\alpha}+|x-y|}\right)^{\alpha'+d}\, ds\\ &\lesssim \frac{1}{x_d^\alpha} \left(\frac{x_d}{1+x_d}\right)^{(\alpha'-1)_+} \left(\frac{y_d}{1+y_d}\right)^{(\alpha'-1)_+} \left(\frac{1}{1+|x-y|}\right)^{\alpha'+d}.\end{aligned}$$ In order to estimate $H_{k,2}$, we notice first that from [@FM23JFA Lemma 22], $$\begin{aligned} \int_{\mathbb R^{d-1}} & \left(\frac{(1-s)^{1/\alpha}}{(1-s)^{1/\alpha}+|x-z|}\right)^{(d+\alpha)(1-\epsilon)} \left(\frac{s^{1/\alpha}}{s^{1/\alpha}+|z-y|}\right)^{(d+\alpha)(1-\epsilon)}\,dz'\\ &\lesssim \int_{\mathbb R^{d-1}} \left(\frac{(1-s)^{1/\alpha}}{(1-s)^{1/\alpha}+x_d+|x'-z'|}\right)^{(d+\alpha)(1-\epsilon)} \\ &\qquad \times\left(\frac{s^{1/\alpha}}{s^{1/\alpha}+x_d+|z'-y'|}\right)^{(d+\alpha)(1-\epsilon)}\,dz'\\ &\lesssim \frac{((1-s)s)^{(d+\alpha)(1-\epsilon)/\alpha}}{x_d^{\alpha(1-\epsilon)-d\epsilon+1}}\frac{1}{(x_d+|x'-y'|)^{(d+\alpha)(1-\epsilon)}},\end{aligned}$$ provided that $z_d\leq x_d/2$. 
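The absolute constants $2^{-1/\alpha}$ and $2^{1/\alpha}$ appearing in the previous chain come from the elementary bound $1\leq (1-s)^{\beta}+s^{\beta}\leq 2$ for $s\in[0,1/2]$ and $0<\beta<1$, applied with $\beta=\alpha'/\alpha$. A minimal numerical spot check of this inequality (illustrative only, not part of the argument):

```python
def phi(s, beta):
    # (1-s)^beta + s^beta is increasing on [0, 1/2] when 0 < beta < 1,
    # so it stays between phi(0, beta) = 1 and phi(1/2, beta) = 2^(1-beta) < 2.
    return (1.0 - s) ** beta + s ** beta

vals = [phi(i / 200.0, b) for b in (0.1, 0.5, 0.9) for i in range(101)]
assert 1.0 <= min(vals) and max(vals) < 2.0
```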
Then, by taking $0<\epsilon<(\alpha-\mathfrak{p})/(d+\alpha)$ (which is possible since $\mathfrak{p}< \alpha$), from [\[eq: estimacion Hkl\]](#eq: estimacion Hkl){reference-type="eqref" reference="eq: estimacion Hkl"} we get $$\begin{aligned} |H_{k,2}(x,y,1)|&\lesssim \int_0^{1/2} \int_0^{x_d/2} \left(\frac{x_d}{(1-s)^{1/\alpha}+x_d}\right)^{(\alpha-1)_+} \left(\frac{z_d}{(1-s)^{1/\alpha}+z_d}\right)^{(\alpha-1)_+} \\ &\quad \times z_d^{-\alpha} \left(\frac{y_d}{s^{1/\alpha}+y_d}\right)^{\mathfrak{p}}\left(\frac{z_d}{s^{1/\alpha}+z_d}\right)^{\mathfrak{p}}\frac{1}{s^{d/\alpha}}\\ &\quad \times\frac{((1-s)s)^{(d+\alpha)(1-\epsilon)/\alpha}}{x_d^{\alpha(1-\epsilon)-d\epsilon+1}}\frac{1}{(x_d+|x'-y'|)^{(d+\alpha)(1-\epsilon)}}\, dz_d ds\\ &\lesssim \int_0^{x_d/2} \left(\int_0^{1/2} \left(\frac{z_d}{(1-s)^{1/\alpha}+z_d}\right)^{(\alpha-1)_+}\left(\frac{z_d}{s^{1/\alpha}+z_d}\right)^{\mathfrak{p}}\right.\\ & \hspace{-0.55cm}\left.\phantom{\int_0^{1/2}}\times\frac{((1-s)s)^{(d+\alpha)(1-\epsilon)/\alpha} s^{-d/\alpha}}{(x_d+|x'-y'|)^{(d+\alpha)(1-\epsilon)}} ds\right)\frac{z_d^{-\alpha}}{x_d^{\alpha(1-\epsilon)-d\epsilon+1}} dz_d\\ &\lesssim \int_0^{x_d/2} \frac{z_d^{-\alpha}}{x_d^{\alpha(1-\epsilon)-d\epsilon+1+(d+\alpha)(1-\epsilon)}} (1\wedge z_d)^{\mathfrak{p}+(\alpha-1)_+}\, dz_d.\end{aligned}$$ Here, we have used that, for the values of $\epsilon$ indicated above, $$\begin{aligned} s^{(d+\alpha)(1-\epsilon)/\alpha-d/\alpha} \left(1\wedge \frac{z_d}{s^{1/\alpha}}\right)^{\mathfrak{p}} &= \left(s^{\left(1-\epsilon(d+\alpha)/\alpha\right)/\mathfrak{p}}\wedge \left(s^{\left(1-\epsilon(d+\alpha)/\alpha\right)/\mathfrak{p}-1/\alpha}\right)z_d\right)^{\mathfrak{p}}\\ &\lesssim \left(1\wedge s^{(\alpha-\mathfrak{p}-\epsilon(d+\alpha))/(\alpha\mathfrak{p})}z_d\right)^{\mathfrak{p}}\leq (1\wedge z_d)^{\mathfrak{p}}.\end{aligned}$$ Finally, since $0<\epsilon< \alpha/(d+\alpha)$, by proceeding as in page  we get $$|H_{k,2}(x,y,1)|\lesssim \frac{1}{x_d^\alpha} 
\left(\frac{1}{1+|x-y|}\right)^{(d+\alpha)(1-\epsilon)}, \quad x_d\geq 1.$$ In a similar way, we can also see that $$|\widetilde{H}_{k}(x,y,1)|\lesssim \frac{1}{x_d^\alpha} \left(\frac{1}{1+|x-y|}\right)^{(d+\alpha)(1-\epsilon)}, \quad x_d\geq 1,$$ where $$\widetilde{H}_{k}(x,y,t)=\int_0^{t/2}\int_{\mathbb{R}^d_+} W^{\alpha}_{0,s} (x,z) z_d^{-\alpha} \partial_t^k W^{\alpha}_{\lambda,t-s} (z,y) \, dz ds.$$ By putting together all of the above estimates, and using the scaling property [\[eq: P1\]](#eq: P1){reference-type="eqref" reference="eq: P1"}, we obtain $$\left|t^{k-1} \partial_t^k D_{\lambda, t}^{\alpha, 0}(x,y)\right|\lesssim \frac{t^{-d/\alpha}}{(x_d\vee y_d)^\alpha} \left(\frac{t^{1/\alpha}}{t^{1/\alpha}+|x-y|}\right)^{(d+\alpha)(1-\epsilon)},$$ for $0<\epsilon<(\alpha-\mathfrak{p})/(d+\alpha)$. Consequently, recalling the definition of the local part $L$ in [\[eq: L\]](#eq: L){reference-type="eqref" reference="eq: L"}, for a certain constant $C>0$ we have $$\begin{aligned} &\int_{\mathbb{R}^d_+} \int_0^\infty \left|t^{k-1} \partial_t^k D_{\lambda, t}^{\alpha, 0}(x,y)\right|\chi_{L} (x,y,t) \, dt dy\\ &\lesssim \int_{\{y\in \mathbb{R}^d_+: C^{-1}x_d\leq y_d\leq Cx_d\}} \int_0^{(x_d\vee y_d)^\alpha} \frac{t^{-d/\alpha}}{(x_d\vee y_d)^\alpha} \left(\frac{t^{1/\alpha}}{t^{1/\alpha}+|x-y|}\right)^{(d+\alpha)(1-\epsilon)} \, dt dy\\ &\lesssim \int_0^{(Cx_d)^\alpha} t^{-d/\alpha} \int_{\{y\in \mathbb{R}^d_+: C^{-1}x_d\leq y_d\leq Cx_d\}} x_d^{-\alpha}\left(\frac{t^{1/\alpha}}{t^{1/\alpha}+|x-y|}\right)^{(d+\alpha)(1-\epsilon)} \, dy dt\\ &\lesssim x_d^{-\alpha}\int_0^{(Cx_d)^\alpha} t^{-d/\alpha} \int_{\mathbb R^d} \left(\frac{t^{1/\alpha}}{t^{1/\alpha}+|x-y|}\right)^{(d+\alpha)(1-\epsilon)} \, dy dt\\ &\lesssim x_d^{-\alpha}\int_0^{(Cx_d)^\alpha} \int_{\mathbb R^d} \frac{1}{(1+|z|)^{(d+\alpha)(1-\epsilon)}} \, dz dt\\ &\lesssim 1,\end{aligned}$$ for each $x\in \mathbb{R}^d_+$, provided that $0<\epsilon<(\alpha-\mathfrak{p})/(d+\alpha)$.
By symmetry, we also deduce that $$\int_{\mathbb{R}^d_+} \int_0^\infty \left|t^{k-1} \partial_t^k D_{\lambda, t}^{\alpha, 0}(x,y)\right|\chi_{L} (x,y,t) \, dt dx\lesssim 1, \quad y\in \mathbb{R}^d_+,$$ when $0<\epsilon<\alpha/(d+\alpha)$. Schur's test allows us to conclude that the integral operator $$\mathbb J_\lambda^{\alpha, k}(f)(x):=\int_{\mathbb{R}^d_+} \int_0^\infty \left|t^{k-1} \partial_t^k D_{\lambda, t}^{\alpha, 0}(x,y)\right|\chi_{L} (x,y,t) f(y)\, dt dy, \quad x\in \mathbb{R}^d_+,$$ is bounded on $L^p(\mathbb{R}^d_+)$ for every $1\leq p\leq \infty$. # Proof of Theorem [Theorem 1](#thm: unweighted case){reference-type="ref" reference="thm: unweighted case"} for $\alpha=2$ {#sec: proof-teo1.1-alfa=2} ## Global part For the case $\alpha=2$ we know, by Proposition [Proposition 4](#prop: 2.2){reference-type="ref" reference="prop: 2.2"}[\[item: prop 2.2.b\]](#item: prop 2.2.b){reference-type="ref" reference="item: prop 2.2.b"}, that $$\begin{aligned} \left|t^k \partial_t^k W^{2}_{\lambda,t}(x,y)\right|&\lesssim \left( 1\wedge \frac{x_d}{t^{1/2}}\right)^{\mathfrak{p}} \left( 1\wedge \frac{y_d}{t^{1/2}}\right)^{\mathfrak{p}} \frac{1}{t^{d/2}} e^{-\frac{c|x-y|^2}{t}}\\ & \lesssim \left( 1\wedge \frac{x_d}{t^{1/2}}\right)^{\mathfrak{p}} \left( 1\wedge \frac{y_d}{t^{1/2}}\right)^{\mathfrak{p}} \frac{1}{t^{d/2}} \left(\frac{t^{1/2}}{t^{1/2}+|x-y|}\right)^{d+2}, \end{aligned}$$ for any $x,y\in \mathbb{R}^d_+, t>0$.
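The second inequality in the last display uses that a Gaussian factor absorbs any fixed polynomial decay: for each $c>0$ and $N>0$ one has $e^{-cu^2}\leq C_N(1+u)^{-N}$ for $u\geq 0$. A quick numerical illustration with sample values of $c$ and $N$ (illustrative only; the constants are not those of the proof):

```python
import math

def ratio(u, c, N):
    # (1+u)^N * exp(-c*u^2); its boundedness in u >= 0 gives
    # exp(-c*u^2) <= C_N * (1+u)^(-N) with C_N the supremum of this ratio.
    return (1.0 + u) ** N * math.exp(-c * u * u)

c, N = 0.5, 6  # e.g. N = d + 2 with d = 4
grid = [k / 100.0 for k in range(5001)]  # u in [0, 50]
sup = max(ratio(u, c, N) for u in grid)
# far from the origin the ratio has already collapsed, so the grid
# maximum is an honest approximation of the supremum
assert ratio(grid[-1], c, N) < 1e-6 * sup
```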
With this estimate, we can proceed as in the case $\alpha\in (0,2)$ (see §[3](#sec: proof-teo1.1-alfa<2){reference-type="ref" reference="sec: proof-teo1.1-alfa<2"}) and prove that, for every $k\in \mathbb N$, $k\geq 1$, the operator $J_\lambda^{2, k}$ given by $$J_\lambda^{2, k} (f)(x)=\int_{\mathbb{R}^d_+} J_\lambda^{2, k} (x,y) f(y)\, dy, \quad x\in \mathbb{R}^d_+,$$ whose kernel $J_\lambda^{2, k} (x,y)$ is as in [\[eq: def nucleo Jalfa\]](#eq: def nucleo Jalfa){reference-type="eqref" reference="eq: def nucleo Jalfa"} with $\alpha=2$, is bounded on $L^p(\mathbb{R}^d_+)$ for every $1<p<\infty$. ## Local part {#local-part} When dealing with the local part and $\alpha=2$, we keep the notation used in Section [3.2](#sec: proof-teo1.1-alfa<2 local){reference-type="ref" reference="sec: proof-teo1.1-alfa<2 local"}. We recall that in the local part, we are considering $x,y\in \mathbb{R}^d_+$ with $|x-y|\leq (x_d\wedge y_d)/2$ and $0<t^{1/\alpha}\leq x_d\vee y_d$. Let $j=1,\dots, k-1$. According to Proposition [Proposition 4](#prop: 2.2){reference-type="ref" reference="prop: 2.2"}[\[item: prop 2.2.b\]](#item: prop 2.2.b){reference-type="ref" reference="item: prop 2.2.b"} we get the following estimate $$\begin{aligned} |H_{j,1}(x,y,1)|&\lesssim x_d^{-2} \int_{\mathbb{R}^d_+} \left(\frac{x_d}{1+x_d}\right)\left(\frac{z_d}{1+z_d}\right)^{1+\mathfrak{p}}\left(\frac{y_d}{1+y_d}\right)^{\mathfrak{p}} e^{-c(|x-z|^2+|z-y|^2)} \, dz\\ &\lesssim x_d^{-2} \int_{\mathbb{R}^d_+} \left(\frac{x_d}{1+x_d}\right)\left(\frac{z_d}{1+z_d}\right)^{1+\mathfrak{p}}\left(\frac{y_d}{1+y_d}\right)^{\mathfrak{p}} e^{-\frac c2 |x-y|^2}e^{-\frac c2|z-y|^2} \, dz\\ &\lesssim x_d^{-2} \left(\frac{x_d}{1+x_d}\right) \left(\frac{y_d}{1+y_d}\right)^{\mathfrak{p}} e^{-\frac c2|x-y|^2}\int_{\mathbb R^d} e^{-\frac c2|x-z|^2}\, dz\\ &\lesssim x_d^{-2}e^{-\frac c2|x-y|^2}. 
\end{aligned}$$ Therefore, for $x,y\in \mathbb{R}^d_+$ with $|x-y|\leq (x_d\wedge y_d)/2$ and $1 \leq x_d\vee y_d$, we get $$|H_{j,1}(x,y,1)|\lesssim \frac{1}{x_d^2} \left(\frac{1}{1+|x-y|}\right)^{d+\alpha}.$$ Similarly, we get $$|H_{j,2}(x,y,1)|\lesssim \frac{1}{x_d^2} \left(\frac{1}{1+|x-y|}\right)^{d+\alpha}.$$ On the other hand, Proposition [Proposition 4](#prop: 2.2){reference-type="ref" reference="prop: 2.2"}[\[item: prop 2.2.b\]](#item: prop 2.2.b){reference-type="ref" reference="item: prop 2.2.b"} leads to $$\begin{aligned} |H_{k,1}&(x,y,1)|\\ &\lesssim \int_0^{1/2} \int_{\{z\in \mathbb{R}^d_+: z_d>x_d/2\}} (1-s)^{-k-d/2}\left(\frac{x_d}{(1-s)^{1/2}+x_d}\right) \left(\frac{z_d}{(1-s)^{1/2}+z_d}\right) \nonumber\\ &\quad \times z_d^{-2} \left(\frac{y_d}{s^{1/2}+y_d}\right)^{\mathfrak{p}}\left(\frac{z_d}{s^{1/2}+z_d}\right)^{\mathfrak{p}}\frac{1}{s^{d/2}} e^{-c\frac{|x-z|^2}{1-s}-c\frac{|z-y|^2}{s}}\,dzds\\ &\lesssim x_d^{-2}\int_0^{1/2} \frac{x_d}{(1-s)^{1/2}+x_d}\left(\frac{y_d}{s^{1/2}+y_d}\right)^{\mathfrak{p}} e^{-\frac c2\frac{|x-y|^2}{1-s}} \int_{\mathbb R^d} \frac{e^{-\frac c2\frac{|y-z|^2}{s}}}{s^{d/2}} \,dz ds\\ &\lesssim x_d^{-2}\int_0^{1/2} \frac{x_d}{(1-s)^{1/2}+x_d}\left(\frac{y_d}{s^{1/2}+y_d}\right)^{\mathfrak{p}} e^{-\frac c2\frac{|x-y|^2}{1-s}}\, ds\\ &\lesssim x_d^{-2} e^{-\frac c2|x-y|^2},\end{aligned}$$ for $(x,y,1)\in L$. We now study $H_{k,2}$. 
Using again Proposition [Proposition 4](#prop: 2.2){reference-type="ref" reference="prop: 2.2"}[\[item: prop 2.2.b\]](#item: prop 2.2.b){reference-type="ref" reference="item: prop 2.2.b"}, it yields, as before, $$\begin{aligned} |H_{k,2}&(x,y,1)|\\ &\lesssim \int_0^{1/2} \int_{\{z\in \mathbb{R}^d_+: z_d\leq x_d/2\}} (1-s)^{-k-d/2}\left(\frac{x_d}{(1-s)^{1/2}+x_d}\right) \left(\frac{z_d}{(1-s)^{1/2}+z_d}\right) \nonumber\\ &\quad \times z_d^{-2} \left(\frac{y_d}{s^{1/2}+y_d}\right)^{\mathfrak{p}}\left(\frac{z_d}{s^{1/2}+z_d}\right)^{\mathfrak{p}}\frac{1}{s^{d/2}} e^{-c\frac{|x-z|^2}{1-s}-c\frac{|z-y|^2}{s}}\,dzds.\end{aligned}$$ Note that, for each $s\in (0,\frac12)$, we can write $$\int_{\mathbb R^{d-1}} e^{-c\frac{|x'-z'|^2}{1-s}-c\frac{|y'-z'|^2}{s}} dz'\lesssim \int_{\mathbb R^{d-1}} e^{-c\frac{|w|^2}{s}} dw\lesssim s^{(d-1)/2}.$$ Then, for $s\in (0,\frac12)$, $$\begin{aligned} &\int_{\{z\in \mathbb{R}^d_+: z_d\leq x_d/2\}} \left(\frac{z_d}{(1-s)^{1/2}+z_d}\right) \left(\frac{z_d}{s^{1/2}+z_d}\right)^{\mathfrak{p}} z_d^{-2} e^{-c\frac{|x-z|^2}{1-s}-c\frac{|z-y|^2}{s}}\,dz\\ &\lesssim \int_0^{x_d/2} \left(\frac{z_d}{(1-s)^{1/2}+z_d}\right) \left(\frac{z_d}{s^{1/2}+z_d}\right)^{\mathfrak{p}} z_d^{-2} s^{(d-1)/2} e^{-\frac c2\frac{|x-y|^2}{1-s}-c\frac{|z_d-y_d|^2}{s}}\, dz_d.\end{aligned}$$ Combining the above estimates, we get $$\begin{aligned} |H_{k,2}(x,y,1)|&\lesssim \int_0^{x_d/2} \int_0^{1/2} s^{(d-1)/2} \left(\frac{z_d}{(1-s)^{1/2}+z_d}\right) \left(\frac{z_d}{s^{1/2}+z_d}\right)^{\mathfrak{p}} z_d^{-2}\\ &\quad \times\left(\frac{(1-s)^{1/2}}{(1-s)^{1/2}+x_d+|y'-x'|}\right)^{d+4}\, ds dz_d\\ &\lesssim \int_0^{x_d/2} \int_0^{1/2} s^{(d-1)/2} \left(1\wedge \frac{z_d}{(1-s)^{1/2}}\right)\left(1\wedge \frac{z_d}{s^{1/2}}\right)^{\mathfrak{p}} \, ds \\ &\quad \times \frac{dz_d}{z_d^2} \left(\frac{1}{x_d+|y'-x'|}\right)^{d+4}\\ &\lesssim \int_0^{x_d/2} \int_0^{1/2} s^{(d-1)/2-1/4}(1\wedge z_d)^{1+1/4}\frac{dz_d}{z_d^2} \frac{1}{x_d^{d+4}}\\ &\lesssim \int_0^1 
z_d^{-3/4}\, \frac{dz_d}{x_d^{d+4}} +\int_1^\infty \frac{dz_d}{z_d^2} \frac{1}{x_d^{d+4}}\\ &\lesssim \frac{1}{x_d^{d+4}}\\ &\lesssim \frac{1}{x_d^2}\left(\frac{1}{1+|x-y|}\right)^{d+2}, \quad \frac{x_d}{2}\geq 1.\end{aligned}$$ In a similar way, we deduce that $$|\widetilde{H}_k(x,y,1)|\lesssim \frac{1}{x_d^2}\left(\frac{1}{1+|x-y|}\right)^{d+2}, \quad x_d\geq 1.$$ We now put all of the above estimates together, obtaining $$\left|\left.\partial_t^k D_{\lambda, t}^{2,0}(x,y)\right|_{t=1}\right|\lesssim \frac{1}{x_d^2}\left(\frac{1}{1+|x-y|}\right)^{d+2}.$$ Therefore, $$\left|t^{k-1}\partial_t^k D_{\lambda, t}^{2,0}(x,y)\right|\lesssim \frac{1}{t^{d/2}}\left(\frac{t^{1/2}}{t^{1/2}+|x-y|}\right)^{(d+2)(1-\epsilon)} \frac{1}{(x_d\vee y_d)^2},$$ with $0<\epsilon<(2-\mathfrak{p})/(d+2)=(\alpha-\mathfrak{p})/(d+\alpha)$. # Proof of Theorem [Theorem 2](#thm: weighted case){reference-type="ref" reference="thm: weighted case"} {#proof-of-theorem-thm-weighted-case} We proceed as in the proof of [@N Proposition 3.5] and of the weighted property in [@ABR Theorem 1.1] by using [@BB Proposition 2.3] (see also [@BZ Theorem 6.6]). We define, for every $t>0$, $$A^\alpha_{\lambda, t} = (I - W^\alpha_{\lambda,t})^m,$$ for $m\in \mathbb{N}$ to be fixed later. For every $x\in\mathbb{R}^d$, $r$, $s>0$, let $B(x,r) = \{y\in \mathbb{R}^d: |x-y|<r\}$ be the usual ball in $\mathbb{R}^d$, $sB(x,r) = B(x,sr)$ and $\mathcal{B}(x,r) = B(x,r)\cap \mathbb{R}^d_+$. Also, let us consider for a ball $B\subset\mathbb{R}^d$, the sets $S_0(B) = B$ and for $j\in\mathbb{N}$, $j\geq 1$, $S_j(B) = 2^{j}B\setminus 2^{j-1}B$. We denote also, for $B\subset\mathbb{R}^d$ and $j\in\mathbb{N}$, $S_j(\mathcal{B}) = S_j(B) \cap \mathbb{R}^d_+$. From now on, we will assume that $1<p\leq q<\infty$. Suppose that $\mathcal{B}=B\cap \mathbb{R}^d_+$, where $B = B(x_B,r_B)$ with $x_B\in\mathbb{R}^d_+$, $r_B>0$ and that $f$ is a smooth function supported in $\mathcal{B}$. We consider first the case $\alpha = 2$.
By using [@ABR (2.17)], from Proposition [Proposition 4](#prop: 2.2){reference-type="ref" reference="prop: 2.2"} [\[item: prop 2.2.b\]](#item: prop 2.2.b){reference-type="ref" reference="item: prop 2.2.b"} we deduce $$\left(\frac{1}{|S_j(\mathcal{B})|}\int_{S_j(\mathcal{B})} |A^2_{\lambda,r^2_B}(f)(x)|^q dx \right)^{1/q} \leq C \frac{e^{-c2^{2j}}}{2^{j\alpha/q}}\left( \frac{1}{|\mathcal{B}|} \int_\mathcal{B} |f(x)|^p dx \right)^{1/p},$$ for $j\in\mathbb{N}$, where $C>0$ does not depend on $j$, $\mathcal{B}$ or $f$. On the other hand, the arguments in [@ABR pages 13 and 14] allow us to obtain that $$\begin{aligned} & \left(\frac{1}{|S_j(\mathcal{B})|}\int_{S_j(\mathcal{B})} |V_\rho(\{t^k \partial^k_t W^2_{\lambda,t}\}_{t>0})(I - A^2_{\lambda,r^2_B})(f)(x)|^q dx \right)^{1/q}&\lesssim 2^{-j(d/q + 2m - d)} f_{q, \mathcal{B}}, \end{aligned}$$ where $f_{q,\mathcal{B}}=\left( \frac{1}{|\mathcal{B}|}\int_{\mathcal{B}} |f(x)|^q dx\right)^{1/q}$ and the constant does not depend on $j$, $\mathcal{B}$ or $f$. We choose now $m\in\mathbb{N}$ such that $m>d/2$. Then $$\sum_{j=1}^{\infty} 2^{-j(d/q +2m -d)}<\infty.$$ According to Theorem [Theorem 1](#thm: unweighted case){reference-type="ref" reference="thm: unweighted case"}, the $\rho$-variation operator $V_{\rho}(\{t^k \partial^k_t W^2_{\lambda,t}\}_{t>0})$ is bounded on $L^q(\mathbb{R}^d_+)$. By using [@BB Proposition 2.3] we conclude that $V_{\rho}(\{t^k \partial^k_t W^2_{\lambda,t}\}_{t>0})$ is bounded on $L^{r}(\mathbb{R}^d_+,w)$, provided that $p<r<q$ and $w\in A_{r/p} (\mathbb{R}^d_+)\cap \textup{RH}_{(q/r)'}(\mathbb{R}^d_+)$. Taking $p=1$, we obtain that $V_{\rho}(\{t^k \partial^k_t W^2_{\lambda,t}\}_{t>0})$ is bounded on $L^{r}(\mathbb{R}^d_+,w)$, provided that $w\in A_r(\mathbb{R}^d_+)\cap \textup{RH}_{s'}(\mathbb{R}^d_+)$, $1<r<s<\infty$.
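The summability used above is elementary: $\sum_{j\geq 1}2^{-j\theta}$ converges precisely when $\theta=d/q+2m-d>0$, which the choice $m>d/2$ guarantees for every $q\geq 1$. A short check against the closed form of the geometric series (illustrative; the sample values of $d$, $q$, $m$ are arbitrary):

```python
def theta(d, q, m):
    # exponent in the series sum_{j>=1} 2^(-j*theta)
    return d / q + 2 * m - d

def tail_sum(th, terms=200):
    # partial sum of the geometric series sum_{j>=1} 2^(-j*th)
    return sum(2.0 ** (-j * th) for j in range(1, terms + 1))

d, q = 3, 2.0
m = 2  # any integer m > d/2 works
th = theta(d, q, m)
assert th > 0
r = 2.0 ** (-th)  # the full series sums to r / (1 - r)
assert abs(tail_sum(th) - r / (1.0 - r)) < 1e-12
```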
Since, if $w\in A_r(\mathbb{R}^d_+)$ with $1<r<\infty$, there exists $\beta>1$ such that $w\in \textup{RH}_\beta(\mathbb{R}^d_+)$, we conclude that $V_{\rho}(\{t^k \partial^k_t W^2_{\lambda,t}\}_{t>0})$ is bounded on $L^{r}(\mathbb{R}^d_+,w)$ for every $1<r<\infty$ and $w\in A_r(\mathbb{R}^d_+)$. Assume now $0<\alpha<2$. According to Proposition [Proposition 4](#prop: 2.2){reference-type="ref" reference="prop: 2.2"} [\[item: prop 2.2.a\]](#item: prop 2.2.a){reference-type="ref" reference="item: prop 2.2.a"} and by proceeding as in the proof of [@BD Theorem 5.1] we deduce that if $1\leq p<q<\infty$, $\ell\in\mathbb{N}$ and $0<\epsilon<1$, there exists $C>0$ such that for every ball $\mathcal{B} = B(x_B,r_B)\cap \mathbb{R}^d_+$, with $x_B\in\mathbb{R}^d_+$ and $r_B>0$, every $t>0$ and $j\in\mathbb{N}$, we have that $$\begin{aligned} \label{eq: 5.3} & \left(\frac{1}{|S_j(\mathcal{B})|}\int_{S_j(\mathcal{B})} |t^\ell \partial^\ell_t W^\alpha_{\lambda,t}(f)(x)|^q dx \right)^{1/q}\nonumber \\ &\leq C \max \left\{ \left( \frac{r_B}{t^{1/\alpha}}\right)^{d/p}, \left( \frac{r_B}{t^{1/\alpha}}\right)^d \right\}\nonumber \\ & \, \times \left( 1+ \frac{t^{1/\alpha}}{2^jr_B}\right)^{d/q} \left(1+ \frac{2^jr_B}{t^{1/\alpha}}\right)^{-(d+\alpha)(1-\epsilon)} \left( \frac{1}{|{\mathcal{B}}|} \int_{\mathcal{B}} |f(x)|^p dx \right)^{1/p}, \end{aligned}$$ for every $f\in L^p(\mathbb{R}^d_+)$ supported in $\mathcal{B}$, and $$\label{eq: 5.4} \begin{split} & \left(\frac{1}{|S_j(\mathcal{B})|}\int_{S_j(\mathcal{B})} |t^\ell \partial^\ell_t W^\alpha_{\lambda,t}(f)(x)|^q dx \right)^{1/q} \\ &\lesssim \max \left\{ \left( \frac{2^j r_B}{t^{1/\alpha}}\right)^{d/p}, \left( \frac{2^j r_B}{t^{1/\alpha}}\right)^d \right\} \\ & \, \times \left( 1+ \frac{t^{1/\alpha}}{2^jr_B}\right)^{d/q} \left(1+ \frac{2^jr_B}{t^{1/\alpha}}\right)^{-(d+\alpha)(1-\epsilon)} \left( \frac{1}{|{S_j(\mathcal{B})}|} \int_{S_j(\mathcal{B})} |f(x)|^p dx \right)^{1/p}, \end{split}$$ for every $f\in
L^p(S_j(\mathcal{B}))$. According to [\[eq: 1.1\]](#eq: 1.1){reference-type="eqref" reference="eq: 1.1"} we obtain $$V_{\rho}\left(\{t^k \partial^k_t W^2_{\lambda,t}\}_{t>0}\right)(f)(x) \leq \int_0^\infty \left|\partial_t (t^k \partial^k_t W^2_{\lambda,t}(f)(x))\right|dt, \quad x\in\mathbb{R}^d_+.$$ Now, for $\ell\in\mathbb{R}\setminus \{0\}$, we consider the operator $$\mathcal{T}_\ell (f)(x) = \int_0^\infty |t^{\ell-1} \partial_t^\ell W^\alpha_{\lambda,t} (f) (x) | dt, \; x\in\mathbb{R}^d.$$ Assume that $j\in\mathbb{N}$ and let $\mathcal{B} = B(x_B,r_B)\cap \mathbb{R}^d_+$, with $x_B\in\mathbb{R}^d_+$ and $r_B>0$. By using Minkowski's inequality we get $$\begin{aligned} &\left( \frac{1}{|S_j(\mathcal{B})|} \int_{S_j(\mathcal{B})} |\mathcal{T}_\ell ((I-W^\alpha_{\lambda,r_B^\alpha})^m(f))(x)|^q dx\right)^{1/q} \\ &\leq \int_0^\infty \left( \frac{1}{|S_j(\mathcal{B})|} \int_{S_j(\mathcal{B})} |t^{\ell-1} \partial^\ell_t W^\alpha_{\lambda,t} ((I-W^\alpha_{\lambda,r_B^\alpha})^m(f))(x)|^q dx\right)^{1/q}dt \\ &\leq \left( \int_0^{r_B^\alpha} + \int_{r_B^\alpha}^\infty \right) \left( \frac{1}{|S_j(\mathcal{B})|} \int_{S_j(\mathcal{B})} |t^{\ell-1} \partial^\ell_t W^\alpha_{\lambda,t} ((I-W^\alpha_{\lambda,r_B^\alpha})^m(f))(x)|^q dx\right)^{1/q}dt \\ &:= \textup{I}_1 + \textup{I}_2. \end{aligned}$$ We now adapt the arguments in [@N p. 15 and p. 16]. We begin analysing $\textup{I}_2$.
First observe that $$(I - W^\alpha_{\lambda,r_B^\alpha})^m = \int_0^{r_B^\alpha} \dots \int_0^{r_B^\alpha} \partial_{s_1} \dots\partial_{s_m} W^\alpha_{\lambda,s_1+\dots+s_m}\, ds_1 \dots ds_m.$$ Then, by [\[eq: 5.3\]](#eq: 5.3){reference-type="eqref" reference="eq: 5.3"} and calling $\mathfrak{s} = s_1+\dots +s_m$ and $d\mathfrak{s} = ds_1\dots ds_m$, $$\begin{aligned} \textup{I}_2 & \leq \int_{r_B^\alpha}^\infty \int_0^{r_B^\alpha} \dots \int_0^{r_B^\alpha}\left( \frac{1}{|S_j(\mathcal{B})|} \int_{S_j(\mathcal{B})} |t^{\ell-1} \partial^\ell_t \partial_{s_1} \dots\partial_{s_m} W^\alpha_{\lambda,t +\mathfrak{s}} (f)(x)|^q dx\right)^{1/q}d\mathfrak{s}dt\\ & \leq \int_{r_B^\alpha}^\infty \int_0^{r_B^\alpha} \dots \int_0^{r_B^\alpha} \frac{1}{(t+ \mathfrak{s})^{m+1}}\\ &\qquad \times \left( \frac{1}{|S_j(\mathcal{B})|} \int_{S_j(\mathcal{B})} \left|\left. u^{\ell+m} \partial^{\ell+m}_u W^\alpha_{\lambda,u} (f)(x)\right|_{u=t+\mathfrak{s}}\right|^q dx\right)^{1/q} d\mathfrak{s}dt \\ &\lesssim f_{q,\mathcal{B}} \left(\int_{r_B^\alpha}^\infty \int_0^{r_B^\alpha} \dots \int_0^{r_B^\alpha} \frac{1}{(t+ \mathfrak{s})^{m+1}} \left( \frac{r_B}{(t+\mathfrak{s})^{1/\alpha}}\right)^{d/q} \left( 1+ \frac{(t+\mathfrak{s})^{1/\alpha}}{2^jr_B}\right)^{d/q}\right. \\ &\quad \times \left.\left( 1+ \frac{2^j r_B}{(t+ \mathfrak{s})^{1/\alpha}}\right)^{-(d+\alpha)(1-\epsilon)} d\mathfrak{s} dt\right) \\ &\lesssim f_{q,\mathcal{B}} \left(\int_{r_B^\alpha}^{(2^jr_B)^\alpha} \int_0^{r_B^\alpha} \dots \int_0^{r_B^\alpha} \frac{1}{t^{m+1}} \left(\frac{r_B}{t^{1/\alpha}}\right)^{d/q} \left( \frac{t^{1/\alpha}}{2^{j}r_B}\right)^{d/q} d\mathfrak{s} dt\right. \\ & \quad \left.
+ \int_{(2^jr_B)^\alpha}^\infty \int_0^{r_B^\alpha} \dots \int_0^{r_B^\alpha} \frac{1}{t^{m+1}} \left(\frac{r_B}{t^{1/\alpha}}\right)^{d/q} \left( \frac{t^{1/\alpha}}{2^{j}r_B}\right)^{(d+\alpha)(1-\epsilon)} d\mathfrak{s} dt\right)\\ &\lesssim f_{q,\mathcal{B}}\left( 2^{-j(d+\alpha)(1-\epsilon)} \int_{r_B^\alpha}^{(2^j r_B)^{\alpha}} t^{-m-1-\frac{d}{q\alpha}+\frac{(d+\alpha)(1-\epsilon)}{\alpha}} dt \right.\\ & \quad \times \left. r_B^{\frac{d}{q} + m\alpha - (d+\alpha)(1-\epsilon)} +\int_{(2^jr_B)^\alpha}^\infty 2^{-dj/q} r_B^{\alpha m} t^{-m-1}dt \right) \\ &\lesssim f_{q,\mathcal{B}} \left( 2^{-j(d+\alpha)(1-\epsilon)} + 2^{-j(\alpha m + d/q)} \right) \\ & \lesssim f_{q,\mathcal{B}} 2^{-j(d+\alpha)(1-\epsilon)},\end{aligned}$$ provided that $m>\frac{(d+\alpha)(1-\epsilon)}{\alpha} - \frac{d}{q\alpha}$. Hence, if $m>\frac{d+\alpha}{\alpha} - \frac{d}{q\alpha}$, the estimates hold by choosing $0<\epsilon<1$ such that $m> \frac{(d+\alpha)(1-\epsilon)}{\alpha} - \frac{d}{q\alpha}$. We can conclude that $$\textup{I}_2 \leq C 2^{-j(d+\alpha)(1-\epsilon)} f_{q,\mathcal{B}}.$$ On the other hand, we can prove by using [\[eq: 5.4\]](#eq: 5.4){reference-type="eqref" reference="eq: 5.4"} (see [@N p. 15 and p.16]) that $$\textup{I}_1 \leq C 2^{-j(d+\alpha)(1-\epsilon)} f_{q,\mathcal{B}},$$ where $0<\epsilon<1$. We obtain that $$\begin{aligned} \left( \frac{1}{|S_j(\mathcal{B})|} \int_{S_j(\mathcal{B})} |\mathcal{T}_\ell ((I-W^\alpha_{\lambda,r_B^\alpha})^m(f))(x)|^q dx\right)^{1/q} & \lesssim 2^{-j(d+\alpha)(1-\epsilon)} f_{q,\mathcal{B}}, \end{aligned}$$ provided that $0<\epsilon<1$ and $m> \frac{(d+\alpha)(1-\epsilon)}{\alpha} - \frac{d}{q\alpha}$. Let $\epsilon \in (0,1)$.
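The condition on $m$ above is exactly what makes the $t$-exponent $-m-1-d/(q\alpha)+(d+\alpha)(1-\epsilon)/\alpha$ strictly less than $-1$, so that the $t$-integral converges and is controlled by its lower endpoint. A quick arithmetic check with sample parameters (illustrative only; the values are arbitrary):

```python
def t_exponent(m, d, alpha, q, eps):
    # exponent of t in the integrand after collecting powers
    return -m - 1 - d / (q * alpha) + (d + alpha) * (1 - eps) / alpha

d, alpha, q, eps = 2, 1.5, 2.0, 0.1
threshold = (d + alpha) * (1 - eps) / alpha - d / (q * alpha)
for m in range(1, 10):
    if m > threshold:
        # m > threshold is equivalent to t_exponent < -1
        assert t_exponent(m, d, alpha, q, eps) < -1
```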
According to [\[eq: 5.3\]](#eq: 5.3){reference-type="eqref" reference="eq: 5.3"} we deduce that $$\begin{aligned} &\left( \frac{1}{|S_j(\mathcal{B})|} \int_{S_j(\mathcal{B})} |(I-A^\alpha_{\lambda,r_B^\alpha}) (f)(x)|^q dx\right)^{1/q}\\ &\lesssim \sum_{i=1}^m \left( \frac{1}{|S_j(\mathcal{B})|} \int_{S_j(\mathcal{B})} |W^{\alpha}_{\lambda,i r_B^{\alpha}}(f)(x)|^q dx\right)^{1/q} \\ &\lesssim \sum_{i=1}^m \left(\frac{1}{i^{1/\alpha}}\right)^{d/p}\left(1+\frac{i^{1/\alpha}}{2^j}\right)^{d/q}\left(1+\frac{2^j}{i^{1/\alpha}}\right)^{-(d+\alpha)(1-\epsilon)} \left(\frac{1}{|\mathcal{B}|}\int_{\mathcal{B}} |f(x)|^p dx\right)^{1/p}\\ &\lesssim 2^{-j(d+\alpha)(1-\epsilon)}\left(\frac{1}{|\mathcal{B}|}\int_{\mathcal{B}} |f(x)|^p dx\right)^{1/p}.\end{aligned}$$ We define $\alpha_j=2^{-j(d+\alpha)(1-\epsilon)}$, $j\in \mathbb N$. We have that $\sum_{j=1}^\infty \alpha_j 2^{j\alpha}<\infty$ provided that $0<\epsilon<\alpha/(d+\alpha)$. By [@BB Proposition 2.3] (see also [@BZ Theorem 6.6]) and by proceeding as in the case $\alpha=2$, we conclude that the operator $\mathcal{T}_\ell$ is bounded on $L^p(\mathbb{R}^d_+, w)$ for every $1<p<\infty$ and $w\in A_p(\mathbb{R}^d_+)$. Hence, the $\rho$-variation operator $V_{\rho}\left(\{t^k\partial_t^kW^{\alpha}_{\lambda,t}\}_{t>0}\right)$ is also bounded on $L^p(\mathbb{R}^d_+, w)$ for every $1<p<\infty$ and $w\in A_p(\mathbb{R}^d_+)$. ## Statements and Declarations {#statements-and-declarations .unnumbered} ### Funding {#funding .unnumbered} The first author is partially supported by grant PID2019-106093GB-I00 from the Spanish Government.
The second author is partially supported by grants PICT-2019-2019-00389 (Agencia Nacional de Promoción Científica y Tecnológica), PIP-1220200101916O (Consejo Nacional de Investigaciones Científicas y Técnicas) and CAI+D 2019-015 (Universidad Nacional del Litoral). ### Competing Interests {#competing-interests .unnumbered} The authors have no relevant financial or non-financial interests to disclose. ### Author Contributions {#author-contributions .unnumbered} All authors whose names appear on the submission made substantial contributions to the conception and design of the work, drafted the work and revised it critically, approved this version to be published, and agree to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. ## Data availability {#data-availability .unnumbered} Data sharing not applicable to this article as no datasets were generated or analysed during the current study. Akcoglu, M. A., Jones, R. L., and Schwartz, P. O. Variation in probability, ergodic theory and analysis. , 1 (1998), 154--177. Almeida, V., Betancor, J. J., and Rodríguez-Mesa, L. Variation operators associated with the semigroups generated by Schrödinger operators with inverse square potentials. , 2 (2023), 353--383. Arendt, W., and Bukhvalov, A. V. Integral representations of resolvents and semigroups. , 1 (1994), 111--135. Beltran, D., Oberlin, R., Roncal, L., Seeger, A., and Stovall, B. Variation bounds for spherical averages. , 1-2 (2022), 459--512. Bernicot, F., and Zhao, J. New abstract Hardy spaces. , 7 (2008), 1761--1796. Betancor, J. J., Crescimbeni, R., and Torrea, J. L. The $\rho$-variation of the heat semigroup in the Hermitian setting: behaviour in $L^\infty$. , 3 (2011), 569--585. Bogdan, K., and Dyda, B. The best constant in a fractional Hardy inequality. , 5-6 (2011), 629--638. Bourgain, J.
On the maximal ergodic theorem for certain subsets of the integers. , 1 (1988), 39--72. Bourgain, J. On the pointwise ergodic theorem on $L^p$ for arithmetic sets. , 1 (1988), 73--84. Bourgain, J. Pointwise ergodic theorems for arithmetic sets. , 69 (1989), 5--45. With an appendix by the author, Harry Furstenberg, Yitzhak Katznelson and Donald S. Ornstein. Bui, T. A., and Bui, T. Q. Maximal regularity of parabolic equations associated to generalized Hardy operators in weighted mixed-norm spaces. (2021), 547--574. Bui, T. A., and D'Ancona, P. Generalized Hardy operators. , 1 (2023), 171--198. Bui, T. A., and Merz, K. Equivalence of Sobolev norms in Lebesgue spaces for Hardy operators in a half-space, 2023. [arXiv:2309.02928](https://arxiv.org/abs/2309.02928). Campbell, J. T., Jones, R. L., Reinhold, K., and Wierdl, M. Oscillation and variation for the Hilbert transform. , 1 (2000), 59--83. Campbell, J. T., Jones, R. L., Reinhold, K., and Wierdl, M. Oscillation and variation for singular integrals in higher dimensions. , 5 (2003), 2115--2137. Chen, Z.-Q., Kim, P., and Song, R. Two-sided heat kernel estimates for censored stable-like processes. , 3-4 (2010), 361--399. Chen, Z.-Q., and Kumagai, T. Heat kernel estimates for stable-like processes on $d$-sets. , 1 (2003), 27--62. Davies, E. B. , vol. 92 of *Cambridge Tracts in Mathematics*. Cambridge University Press, Cambridge, 1989. Di Plinio, F., Do, Y. Q., and Uraltsev, G. N. Positive sparse domination of variational Carleson operators. , 4 (2018), 1443--1458. Do, Y., and Lacey, M. Weighted bounds for variational Fourier series. , 2 (2012), 153--190. Duong, X. T., and Robinson, D. W. Semigroup kernels, Poisson bounds, and holomorphic functional calculus. , 1 (1996), 89--128. Frank, R. L., and Merz, K. On Sobolev norms involving Hardy operators in a half-space. , 10 (2023), Paper No. 110104. Gillespie, T. A., and Torrea, J. L. Dimension free estimates for the oscillation of Riesz transforms. (2004), 125--144. Jones, R. 
L., and Wang, G. Variation inequalities for the Fejér and Poisson kernels. , 11 (2004), 4493--4518. Killip, R., Miao, C., Visan, M., Zhang, J., and Zheng, J. Sobolev spaces adapted to the Schrödinger operator with inverse-square potential. , 3-4 (2018), 1273--1298. Krause, B. Polynomial ergodic averages converge rapidly: Variations on a theorem of Bourgain, 2014. [arXiv:1402.1803](https://arxiv.org/abs/1402.1803). Krause, B. Discrete fractional integration operators along the primes, 2019. [arXiv:1905.02767](https://arxiv.org/abs/1905.02767). Krause, B. Pointwise ergodic theory: examples and entropy, 2023. [arXiv:2301.01511](https://arxiv.org/abs/2301.01511). Krause, B., Mirek, M., and Tao, T. Pointwise ergodic theorems for non-conventional bilinear polynomial averages. , 3 (2022), 997--1109. Le Merdy, C., and Xu, Q. Strong $q$-variation inequalities for analytic semigroups. , 6 (2012), 2069--2097 (2013). Lépingle, D. La variation d'ordre $p$ des semi-martingales. , 4 (1976), 295--316. Ma, T., Torrea, J. L., and Xu, Q. Weighted variation inequalities for differential operators and singular integrals in higher dimensions. , 8 (2017), 1419--1442. Mas, A., and Tolsa, X. Variation and oscillation for singular integrals with odd kernel on Lipschitz graphs. , 1 (2012), 49--86. Mehlhop, N., and Słomian, W. Oscillation and jump inequalities for the polynomial ergodic averages along multi-dimensional subsets of primes. (2023). Merz, K. On complex-time heat kernels of fractional Schrödinger operators via Phragmén-Lindelöf principle. , 3 (2022), Paper No. 62, 30. Mirek, M., Stein, E. M., and Zorin-Kranich, P. Jump inequalities via real interpolation. , 1-2 (2020), 797--819. Mirek, M., and Trojan, B. Cotlar's ergodic theorem along the prime numbers. , 4 (2015), 822--848. Nader, G. Variation and oscillation operators associated to semigroup generated by Schrödinger operator with fractional power. , 1 (2023), Paper No. 127000, 26.
Oberlin, R., Seeger, A., Tao, T., Thiele, C., and Wright, J. A variation norm Carleson theorem. , 2 (2012), 421--464. Pisier, G., and Xu, Q. H. The strong $p$-variation of martingales and orthogonal series. , 4 (1988), 497--514. Qian, J. The $p$-variation of partial sum processes and the empirical process. , 3 (1998), 1370--1383. Schnaubelt, R. Lecture notes: Evolution equations. <https://www.math.kit.edu/iana3/~schnaubelt/media/evgl-skript.pdf>, 2023. Wierdl, M. Pointwise ergodic theorem along the prime numbers. , 3 (1988), 315--336 (1989). Zorin-Kranich, P. Variation estimates for averages along primes and polynomials. , 1 (2015), 210--238.
{ "id": "2310.03540", "title": "Variation operators associated with semigroups generated by Hardy\n operators involving fractional Laplacians in a half space", "authors": "Jorge J. Betancor, Estefan\\'ia D. Dalmasso and Pablo Quijano", "categories": "math.AP", "license": "http://creativecommons.org/licenses/by/4.0/" }
--- abstract: | Building on the work of Nozaki, Sato and Taniguchi, we develop an instanton-theoretic invariant aimed at studying strong corks and equivariant bounding. Our construction utilizes the Chern-Simons filtration and is qualitatively different from previous Floer-theoretic methods used to address these questions. As an application, we give an example of a cork whose boundary involution does not extend over any 4-manifold $X$ with $H_1(X, \mathbb{Z}_2) = 0$ and $b_2(X) \leq 1$, and a strong cork which survives stabilization by either of $n\smash{\mathbb{CP}^2}$ or $n\smash{\overline{\mathbb{CP}}}^2$. We also prove that every nontrivial linear combination of $1/n$-surgeries on the strongly invertible knot $\overline{9}_{46}$ constitutes a strong cork. Although Yang-Mills theory has been used to study corks via the Donaldson invariant, this is the first instance where the critical values of the Chern-Simons functional have been utilized to produce such examples. Finally, we discuss the geography question for nonorientable surfaces in the case of extremal normal Euler number. address: - Centre de Recherche Mathématiques, Montreal - Department of Mathematics, The University of Texas at Austin - Department of Mathematics, Rutgers University - New Brunswick - Department of Mathematics, Graduate School of Science, Kyoto University, Kitashirakawa Oiwake-cho, Sakyo-ku, Kyoto 606-8502, Japan author: - Antonio Alfieri - Irving Dai - Abhishek Mallick - Masaki Taniguchi bibliography: - tex.bib title: Involutions and the Chern-Simons filtration in instanton Floer homology --- # Introduction {#sec:1} Let $Y$ be an integer homology $3$-sphere equipped with an involution $\tau$. If $Y$ bounds a smooth $4$-manifold $W$, then it is natural to ask whether $\tau$ extends as a diffeomorphism over $W$. By work of Akbulut [@Ak91_cork] and Akbulut-Ruberman [@AR16], this question is closely related to the study of exotic phenomena in four dimensions. 
Indeed, in the case that $W$ is contractible, a theorem of Freedman [@Fr82] shows that $\tau$ always extends as a homeomorphism. This gives the notion of a *cork* [@Ak91_cork], which is known to capture the difference between any pair of exotic structures on the same simply-connected, closed $4$-manifold [@MAt; @CFHS]. Investigating the extendability of $\tau$ has also led to new sliceness obstructions via branched coverings [@ASA20; @DHM20] and (in particular) a recent proof that the $(2, 1)$-cable of the figure-eight is not slice [@dai20222]. Several authors have studied the extension question through the lens of Heegaard Floer homology, monopole Floer homology, and Seiberg-Witten theory for families; see for example [@LRS18; @ASA20; @DHM20; @dai20222; @Kang2022; @KMT23A]. These developments have led to a wide range of striking applications to exotica and concordance. The aim of the present work is to introduce new tools for obstructing the extendability of $\tau$ through the use of the Chern-Simons filtration from instanton Floer homology. As we will see, this will allow us to establish different results than were previously accessible via the aforementioned methods. In this paper, we present applications to equivariant bounding, corks and exotica, surgeries on symmetric knots, and nonorientable slice surfaces. Each of these is discussed below, along with topological consequences and motivations from Floer theory. To the best of the authors' knowledge, this is the first instance in which the Chern-Simons filtration has been utilized to address questions of this nature. Indeed, while Yang-Mills theory has been heavily utilized in smooth 4-manifold topology, this is the first time that the critical *values* of the Chern-Simons functional have had any bearing on corks or exotic phenomena. 
Our examples are obtained through a combined analysis of both the Chern-Simons filtration and an understanding of the behavior of the Donaldson invariant under certain cork twists. ## Statement of results {#sec:1.1} Recent results of Daemi [@Da20] and Nozaki-Sato-Taniguchi [@NST19] have suggested a fundamental difference between the information contained in the Chern-Simons filtration and that of other Floer homologies. These works include several surprising applications to bordism and Dehn surgery which were previously unobtainable via Heegaard-Floer-theoretic methods. The present paper aims to develop similar techniques in the equivariant setting, which will have an additional connection to the theory of corks and equivariant bordism. Our main technical construction is an involutive refinement of the $r_s$-invariant from the work of Nozaki-Sato-Taniguchi [@NST19]: **Theorem 1**. *Let $Y$ be an oriented integer homology $3$-sphere and $\tau$ be a smooth, orientation-preserving involution on $Y$. For any $s \in [-\infty, 0]$, we define a real number $$r_s(Y, \tau) \in (0, \infty]$$ which is an invariant of the diffeomorphism class of $(Y, \tau)$. Moreover, let $(W, \widetilde{\tau})$ be an equivariant negative-definite cobordism from $(Y, \tau)$ to $(Y', \tau')$ with $H_1(W, \mathbb{Z}_2)=0$. Then $$r_s(Y,\tau) \leq r_s(Y', \tau').$$ If $r_s(Y, \tau)$ is finite and $W$ is simply connected, then in fact $$r_s(Y,\tau) < r_s(Y', \tau').$$* We refer to a cobordism $W$ from $(Y, \tau)$ to $(Y', \tau')$ as *equivariant* if it is equipped with a self-diffeomorphism $\widetilde{\tau}$ which restricts to $\tau$ and $\tau'$ on $Y$ and $Y'$, respectively. The assumption that $\tau$ is an involution is not essential; our $r_s$-invariant can be defined for any orientation-preserving diffeomorphism on $Y$. As an initial application, recall the definition of a *strong cork* from the work of Lin-Ruberman-Saveliev [@LRS18]. 
This is a cork for which the boundary involution $\tau$ does not extend over *any* homology ball $W$ that $Y$ bounds. Techniques for detecting strong corks via Heegaard Floer theory were developed by Dai-Hedden-Mallick in [@DHM20]; these led to many novel families of corks, some of which have recently been used in the construction of new closed $4$-manifold exotica [@LLP]. It follows from Theorem [Theorem 1](#thm:1.1){reference-type="ref" reference="thm:1.1"} that if $\tau$ extends over some homology ball $W$ with $Y = \partial W$, then $r_s(Y, \tau) = \infty$ for all $s$, and hence $r_s(Y, \tau)$ can be used for strong cork detection. As we will see, the additional information of the Chern-Simons filtration will allow us to derive new results and examples that were previously out of reach using other Floer-theoretic techniques. ### Equivariant bounding {#sec:1.1.1} An obvious extension of the notion of a strong cork is the problem of obstructing *equivariant bounding*. Indeed, it is natural to ask whether there is a cork for which $\tau$ does not extend over any definite manifold (of either sign). This is surprisingly difficult to answer, since even in the nonequivariant case, there are few examples in the literature where Floer theory has been used to provide constraints on both positive- and negative-definite boundings of a homology sphere $Y$.[^1] Indeed, the first example of an integer homology sphere with no definite bounding was exhibited recently by Nozaki-Sato-Taniguchi [@NST19] using the Chern-Simons filtration.[^2] We provide a partial answer to this question by placing homological constraints on the action of the extension $\widetilde{\tau}$. We say $\widetilde{\tau}$ is *homology-fixing* if $\widetilde{\tau}_* = \operatorname{id}$ on $H_2(W, \mathbb{Q})$ and *homology-reversing* if $\widetilde{\tau}_* = -\operatorname{id}$ on $H_2(W, \mathbb{Q})$. Combining the involutive $r_s$-invariant with techniques of [@DHM20], we prove: **Theorem 2**. 
*There exists a cork $Y = \partial W$ such that the boundary involution $\tau$:* 1. *Does not extend as a diffeomorphism over any negative-definite $4$-manifold $W^-$ with $H_1(W^-, \mathbb{Z}_2)=0$ bounded by $Y$; and,* 2. *Does not extend as a homology-fixing or homology-reversing diffeomorphism over any positive-definite $4$-manifold $W^+$ with $H_1(W^+, \mathbb{Z}_2)=0$ bounded by $Y$.* Note that all previous results regarding equivariant bounding have either been restricted to manifolds which are spin or concern definite manifolds of a fixed sign. The first part of Theorem [Theorem 2](#thm:1.2){reference-type="ref" reference="thm:1.2"} is established using the methods of this paper, while the second part is a consequence of the Heegaard Floer-theoretic formalism developed in [@DHM20]. (This leads to the conditions in the second part of the theorem.) In principle, the involutive $r_s$-invariant is capable of establishing *both* the negative- and positive-definite cases of Theorem [Theorem 2](#thm:1.2){reference-type="ref" reference="thm:1.2"} without any restrictions on the action of $\widetilde{\tau}_*$; this would follow from finding a cork with $r_s(Y, \tau)$ and $r_s(-Y, \tau)$ both nontrivial. However, we presently lack the computational tools to exhibit such an example. Theorem [Theorem 2](#thm:1.2){reference-type="ref" reference="thm:1.2"} trivially provides the following corollary: **Corollary 3**. *There exists a cork $Y = \partial W$ such that the boundary involution $\tau$ does not extend as a diffeomorphism over any $4$-manifold $X$ with $H_1(X, \mathbb{Z}_2) = 0$ and $b_2 (X) \leq 1$ bounded by $Y$.* Note that if $X$ is simply connected, then $\tau$ extends over $X$ as a homeomorphism by work of Freedman [@Fr82]. Corollary [Corollary 3](#cor:1.3){reference-type="ref" reference="cor:1.3"} thus emphasizes the difference between smooth and continuous topology in the context of the extension problem for definite manifolds. 
We are also able to strengthen Corollary [Corollary 3](#cor:1.3){reference-type="ref" reference="cor:1.3"} in the case that $X$ is obtained via stabilizing a homology ball by $n \smash{\mathbb{CP}^2}$ or $n \smash{\overline{\mathbb{CP}}}^2$. The following should be thought of as establishing the existence of a strong cork which persists under such a stabilization: **Corollary 4**. *There exists a cork $Y = \partial W$ such that the boundary involution $\tau$ does not extend as a diffeomorphism over $W \# n \smash{\mathbb{CP}^2}$ or $W \# n \smash{\overline{\mathbb{CP}}}^2$ for any $n$; or, more generally, over $W' \# n \smash{\mathbb{CP}^2}$ or $W' \# n \smash{\overline{\mathbb{CP}}}^2$ for any homology ball $W'$ which $Y$ bounds.* Corollary [Corollary 4](#cor:cp2){reference-type="ref" reference="cor:cp2"} is somewhat similar in spirit to recent stabilization results regarding corks [@Kang2022], except that we deal with stabilization by $n\smash{\mathbb{CP}^2}$ and $n\smash{\overline{\mathbb{CP}}}^2$ rather than spin manifolds such as $S^2 \times S^2$. Utilizing work of Akbulut-Ruberman [@AR16], Corollary [Corollary 4](#cor:cp2){reference-type="ref" reference="cor:cp2"} can be applied to produce pairs of compact, contractible manifolds which remain absolutely exotic even after summing with either of $n\smash{\mathbb{CP}^2}$ and $n\smash{\overline{\mathbb{CP}}}^2$. However, this latter application can also be obtained in a straightforward manner using more standard techniques. The authors thank Anubhav Mukherjee and Kouichi Yasui for bringing this to their attention; see Remark [Remark 96](#rem:7.4){reference-type="ref" reference="rem:7.4"} and the techniques of [@Y19]. ### Surgeries on slice knots {#sec:1.1.2} We also use our involutive $r_s$-invariant to establish various examples previously inaccessible via Heegaard Floer theory. In [@DHM20], it was shown that if $K$ is a symmetric slice knot, then $1/n$-surgery on $K$ often constitutes a strong cork. 
This approach should be contrasted with previous methods for constructing corks in the literature, which emphasize the handle decomposition of the relevant $4$-manifold $W$. The formalism of [@DHM20] has led to many novel examples of corks, including surgeries on the stevedore. From the viewpoint of homology cobordism, it is natural to consider linear combinations of such examples, and especially linear combinations of $1/n$-surgeries on the same knot $K$. Again, this turns out to be surprisingly difficult. Traditionally, Heegaard Floer homology has had difficulty distinguishing between different surgeries on the same knot up to homology cobordism. For example, the surgery formula of [@hendricks2020surgery] shows that if $K$ is a fixed knot, then (involutive) Heegaard Floer homology cannot be used to establish the linear independence (in $\Theta^3_\mathbb{Z})$ of any infinite family of $1/n$-surgeries on $K$; see [@hendricks2020surgery Proposition 22.9]. This is in contrast to Yang-Mills theory, whose application to the linear independence of families such as $\smash{\{S^3_{1/n}(T_{p, q})\}_{n \in \mathbb N}}$ is well-known [@Fu90; @FS90]. In our context, techniques such as the equivariant surgery formula and the use of Seiberg-Witten theory for families are similarly expected to fail. In Theorem [Theorem 83](#thm:connectedsum){reference-type="ref" reference="thm:connectedsum"}, we establish a connected sum inequality for the involutive $r_s$-invariant. (See Section [2.1](#sec:2.1){reference-type="ref" reference="sec:2.1"} for a discussion of connected sums.) Using this, we prove: **Theorem 5**. *Let $K$ be any of the strongly invertible slice knots in Figure [1](#fig:1.1){reference-type="ref" reference="fig:1.1"}. 
Then any nontrivial linear combination of elements in $\{ (S^3_{1/n}(K), \tau) \}_{n \in \mathbb N}$ yields a strong cork.* The first member $(m = 0)$ of the family displayed in Figure [1](#fig:1.1){reference-type="ref" reference="fig:1.1"} is $\overline{9}_{46} = P(-3, 3, -3)$; note that $(+1)$-surgery on $\overline{9}_{46}$ gives the (boundary of the) Akbulut-Mazur cork. Historically, the Akbulut-Mazur cork was the first example of a cork presented in the literature, and was established using the Donaldson invariant by Akbulut [@Ak91_cork]. ![The strongly invertible slice knots used in Theorems [Theorem 5](#thm:1.4){reference-type="ref" reference="thm:1.4"} and [Theorem 6](#thm:1.5){reference-type="ref" reference="thm:1.5"} parameterized by $m \geq 0$. Figure [1](#fig:1.1){reference-type="ref" reference="fig:1.1"} is taken from [@AKMR Figure 12]. Here, $-2m - 1$ denotes the number of half twists.](9_46_family.pdf){#fig:1.1} ### Simply-connected cobordisms {#sec:1.1.3} Another advantage of Yang-Mills theory is that it can obstruct the existence of simply-connected cobordisms; see for example [@T87; @Fu19; @Da20; @NST19; @ADHLP22; @Ta22]. This has its roots in a (variant of a) question of Akbulut [@K78 Problem 4.95], which asks (for example) whether there is a simply-connected cobordism from $\Sigma(2, 3, 5)$ to itself. (This was resolved negatively in [@T87 Proposition 1.7].) Here, we answer an equivariant version of this question: **Theorem 6**. *Let $K$ be any of the strongly invertible slice knots in Figure [1](#fig:1.1){reference-type="ref" reference="fig:1.1"}. Then for any $n \in \mathbb N$, there is no simply-connected, equivariant definite cobordism from $(S^3_{1/n}(K), \tau)$ to itself.* In particular, by considering $\smash{S^3_{1/n}(K) \# - S^3_{1/n}(K)}$ we obtain a cork $(Y, \tau)$ such that $\tau$ extends over some homology ball but not over any contractible manifold which $Y$ bounds. 
Note that this provides an example of a cork which is not strong; see [@DHM20 Question 1.14] and [@HP20]. ### Nonorientable surfaces {#sec:1.1.4} We now give some applications to the geography question for nonorientable surfaces. Let $K$ be a knot and $F$ be a nonorientable surface in $B^4$ with $K = \partial F$. There are two algebraic invariants associated to $F$: its *nonorientable genus*, defined by $h(F) = b_1(F)$, and its *normal Euler number* $e(F)$. The *geography question* asks which pairs $(e, h)$ are realized by the set of (smooth) nonorientable slice surfaces for $K$. Work of Gordon-Litherland [@GL78] gives the well-known classical bound $$\label{eq:1.1} \left | \sigma(K) - e(F)/2 \right | \leq h(F).$$ A similar inequality was obtained by Ozsváth-Stipsicz-Szabó [@OSSunoriented], who replaced the knot signature $\sigma(K)$ in ([\[eq:1.1\]](#eq:1.1){reference-type="ref" reference="eq:1.1"}) with the Floer-theoretic refinement $\upsilon(K)=2\Upsilon_K(1)$. Understanding the geography question in general is quite difficult; a complete answer is known only for a small handful of knots. See [@GL11; @MG18; @Allen] for further results and discussion. In this paper, we consider the case where $F$ satisfies the equality $$\label{eq:1.2} \left | \sigma(K) - e(F)/2 \right | = h(F).$$ We call such an $F$ an *extremal surface*. As we explain in Section [2.3](#sec:2.3){reference-type="ref" reference="sec:2.3"}, ([\[eq:1.2\]](#eq:1.2){reference-type="ref" reference="eq:1.2"}) occurs precisely when the branched double cover $\Sigma_2(F)$ is definite. We thus apply our results regarding definite bounding to produce knots with no extremal slice surface. Note that such examples cannot be replicated by any single inequality of the same form as ([\[eq:1.1\]](#eq:1.1){reference-type="ref" reference="eq:1.1"}), including that of [@OSSunoriented]. 
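The bound ([\[eq:1.1\]](#eq:1.1){reference-type="ref" reference="eq:1.1"}) and the extremality condition ([\[eq:1.2\]](#eq:1.2){reference-type="ref" reference="eq:1.2"}) are elementary to evaluate for a candidate pair $(e, h)$. The following is a minimal sketch in Python; the helper names are ours, not the paper's:

```python
# Check the Gordon-Litherland bound (1.1) and the extremality equality (1.2)
# for a candidate pair (e, h), given the knot signature sigma(K).

def gl_bound_holds(sigma: int, e: int, h: int) -> bool:
    """True iff |sigma(K) - e(F)/2| <= h(F), i.e. inequality (1.1)."""
    return abs(sigma - e / 2) <= h

def is_extremal(sigma: int, e: int, h: int) -> bool:
    """True iff |sigma(K) - e(F)/2| = h(F), i.e. equality (1.2)."""
    return abs(sigma - e / 2) == h

# Example: for a slice knot (sigma = 0), summing a slice disk with a
# standard cross-cap RP^2 (e = +-2, h = 1) gives an extremal surface.
assert is_extremal(0, 2, 1) and is_extremal(0, -2, 1)
# A pair violating (1.1) is realized by no slice surface for K at all:
assert not gl_bound_holds(-8, 0, 3)
```

This only tests the algebraic constraints; whether a pair satisfying ([\[eq:1.1\]](#eq:1.1){reference-type="ref" reference="eq:1.1"}) is actually realized smoothly is precisely the geography question.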
Indeed, even combining the most commonly-used obstructions from [@Ba14] and [@OSSunoriented] necessarily fails to obstruct the set of all extremal surfaces; see Section [7.5](#sec:7.5){reference-type="ref" reference="sec:7.5"}. The authors are not aware of any previous example of a knot with no extremal slice surface appearing in the literature. **Theorem 7**. *There exists a knot $J$ such that:* 1. *$J$ does not bound any extremal surface; and,* 2. *$\Sigma_2(J)$ bounds a contractible manifold.* The knot $J$ is given by a certain linear combination of the knots $A_n$ and $B_n$ in Figure [2](#fig:1.2){reference-type="ref" reference="fig:1.2"}. These are constructed as follows: note that $\overline{9}_{46}$ admits two strong inversions, which we denote by $\tau$ and $\sigma$. (See for example Figure [8](#fig:7.4){reference-type="ref" reference="fig:7.4"}.) The knots $A_n$ and $B_n$ both have branched double cover $\smash{S^3_{1/n}(\overline{9}_{46})}$, with branching involutions corresponding to $\tau$ and $\sigma$, respectively. ![Knots $A_n$ (left) and $B_n$ (right) with branched double cover $\smash{S^3_{1/n}(\overline{9}_{46})}$. The branching involution over $A_n$ corresponds to $\tau$; the branching involution over $B_n$ corresponds to $\sigma$. Here, $-n$ denotes the number of half twists.](9_46_quotient_tau_and_sigma.pdf){#fig:1.2} If the second condition is removed, one can recover Theorem [Theorem 7](#thm:1.6){reference-type="ref" reference="thm:1.6"} via the instanton-theoretic formalism of [@NST19] by taking any example for which $\Sigma_2(K)$ bounds no definite manifold. However, if $\Sigma_2(K)$ bounds a contractible manifold, then such a strategy clearly fails. In order to prove Theorem [Theorem 7](#thm:1.6){reference-type="ref" reference="thm:1.6"}, we thus refine this approach by passing to the equivariant category. 
More precisely, note that if $K = \partial F$, then in fact $\Sigma_2(K) = \partial \Sigma_2(F)$ in the equivariant setting simply by remembering the branching involution over $F$. Theorem [Theorem 2](#thm:1.2){reference-type="ref" reference="thm:1.2"} can then be leveraged to provide the desired examples. Similar ideas were used in [@ASA20; @DHM20; @dai20222] to define new sliceness obstructions; see Section [2.3](#sec:2.3){reference-type="ref" reference="sec:2.3"} for further discussion. It is also natural to consider surfaces $F$ for which $\pi_1(B^4 - F)$ is as simple as possible; i.e., $\mathbb{Z}_2$. We refer to such an $F$ as a *$\mathbb{Z}_2$-surface*. The condition on the fundamental group of $B^4 - F$ naturally arises in the setting of topological classification results; see [@COP]. One can refine the geography question by asking for which pairs $(e, h)$ we can find a $\mathbb{Z}_2$-surface for $K$.[^3] It is easily checked that $\pi_1(\Sigma_2(F))$ is trivial for such $F$; thus, this problem may be approached by applying appropriate obstructions to simply-connected bounding. Here, we prove: **Theorem 8**. *There exists a slice knot $J$ such that:* 1. *$J$ does not bound any extremal $\mathbb{Z}_2$-surface; and,* 2. *$\Sigma_2(J)$ bounds a contractible manifold.* Explicitly, we may take $J = A_n \# -A_n$ for any $n \in \mathbb N$, where $A_n$ is the knot in Figure [2](#fig:1.2){reference-type="ref" reference="fig:1.2"}. Note that since $J$ is slice, we have that $J$ does bound *some* extremal surface (by taking the connected sum of any slice disk with $\smash{\mathbb{RP}^2}$). However, the complement of this surface will have complicated fundamental group. Once again, if the second condition is removed, it is possible to use the results of [@NST19] to produce examples that bound an extremal surface but no extremal $\mathbb{Z}_2$-surface. 
Here, to prove Theorem [Theorem 8](#thm:1.7){reference-type="ref" reference="thm:1.7"}, we use Theorem [Theorem 6](#thm:1.5){reference-type="ref" reference="thm:1.5"} to obstruct simply-connected, equivariant definite boundings of $\Sigma_2(K)$. The fact that $\Sigma_2(K)$ bounds a contractible manifold shows that if we forget the equivariant category, then any such obstruction vanishes. ### Comparison to other invariants We stress that the examples obtained in the present work are qualitatively different from those obtained via Heegaard Floer homology [@ASA20; @DHM20], monopole Floer homology [@LRS18], and Seiberg-Witten theory for families [@KMT23A]. For instance, as discussed in Section [1.1.2](#sec:1.1.2){reference-type="ref" reference="sec:1.1.2"}, these theories are ill-equipped to handle linear combinations of $1/n$-surgeries on the same knot $K$. Likewise, the TQFT nature of such invariants prohibits applications such as Theorem [Theorem 6](#thm:1.5){reference-type="ref" reference="thm:1.5"}, since this requires distinguishing contractible manifolds from homology balls. Moreover, even for simple manifolds such as Brieskorn spheres, the information contained in our instanton-theoretic construction differs from the output of [@LRS18; @ASA20; @DHM20; @KMT23A]; see Section [7.2](#sec:brieskorn){reference-type="ref" reference="sec:brieskorn"}. While the examples obtained using the latter are often qualitatively similar to each other, the usage of the Chern-Simons filtration in the present work appears to produce fundamentally new results in the study of equivariant bordism. There are also several subtle differences between our involutive $r_s$-invariant and the work of Nozaki-Sato-Taniguchi [@NST19]. Our involutive instanton theory is related to studying the Chern-Simons functional on the mapping torus of the configuration space with respect to a diffeomorphism action on $Y$. 
Thus, the involutive $r_s$-invariant may be viewed as a $1$-parameter version of the usual $r_s$-invariant of [@NST19]. This is related to the fact that when we prove invariance under equivariant homology cobordism, we will need to count points in $1$-parameter families of ASD moduli spaces. In contrast, invariance of the original $r_s$-invariant can be established by counting points in usual (unparameterized) ASD moduli spaces. ### Instanton-theoretic aspects of the construction We close by discussing some of the technical difficulties regarding the use of instanton Floer theory in the present work. As is standard when defining homology cobordism invariants, it will be necessary to work with a formulation of instanton Floer homology which takes into account the reducible connection. The usual method for doing this is via the maps $D_1$ and $D_2$ of [@Do02]; see for example [@Do02; @Fr02; @Da20]. (This is usually what is meant by $SO(3)$-equivariant instanton theory.) While several such constructions are present in the literature, the standard versions are generally defined over fields such as $\mathbb{Q}$ [@Do02; @Fr02; @Da20]. See work of Miller Eismeier [@Mike19] and Daemi-Miller Eismeier [@DaMi22] for more general coefficients. However, in order to define an involutive $r_s$-invariant, it will be necessary to work over $\mathbb{Z}_2$. (See Remark [Remark 87](#rem:differentcoefficients){reference-type="ref" reference="rem:differentcoefficients"}.) To do this, we consider a restricted subcomplex of the construction in [@NST19], which corresponds to only using the $D_1$-map (or equivalently $D_2$-map) originally from [@Do02]. Unfortunately, this means that we do not have a complete dualization or tensor product formula for such complexes. This is partially why our current proof of [Theorem 2](#thm:1.2){reference-type="ref" reference="thm:1.2"} requires input from the Heegaard-Floer-theoretic side. 
We direct the reader to recent work of Frøyshov [@Fr23] regarding instanton Floer homology over $\mathbb{Z}_2$. **Remark 9**. It is also possible to define the formalism of the current paper using instanton Floer homology with $\mathbb{Z}$-coefficients. However, none of the examples presented here differ significantly when working over $\mathbb{Z}$ rather than $\mathbb{Z}_2$, so we have chosen to work with the latter out of convenience. Finally, note that computing instanton Floer chain complexes (especially with a $\tau$-action) is quite difficult. While partial information can sometimes be obtained by an analysis of the $SU(2)$-representation variety of $Y$, in general there are few tools for constraining the action of $\tau$; see the work of Saveliev [@Sa03] and Ruberman-Saveliev [@RS04]. Here, we use the Donaldson invariant to constrain the involutive structure of the Akbulut-Mazur cork. We approach other examples via a topological argument involving equivariant negative-definite cobordisms, as in [@DHM20].\ **Organization.** In Section [2](#sec:2){reference-type="ref" reference="sec:2"}, we introduce various topological definitions and constructions related to equivariant bordism. In Sections [3](#sec:3){reference-type="ref" reference="sec:3"} and [4](#sec:4){reference-type="ref" reference="sec:4"}, we develop the main algebraic framework of the paper and define the involutive $r_s$-invariant. Analytic details of the construction are discussed in Section [5](#sec:5){reference-type="ref" reference="sec:5"}. In Section [6](#sec:6){reference-type="ref" reference="sec:6"}, we prove Theorem [Theorem 1](#thm:1.1){reference-type="ref" reference="thm:1.1"} and give an extended list of properties of the $r_s$-invariant. 
Finally, in Section [7](#sec:7){reference-type="ref" reference="sec:7"} we establish the remaining applications listed in the introduction.\ **Acknowledgements.** The authors would like to thank Aliakbar Daemi, Hokuto Konno, Tomasz Mrowka, Daniel Ruberman, and Kouki Sato for helpful conversations. Part of this work was carried out in the program entitled "Floer homotopy theory" held at MSRI/SLMath (Fall 2022); this work was thus supported by NSF DMS-1928930. The second author was partially supported by NSF DMS-1902746 and NSF DMS-2303823. The third author was partially supported by MSRI and NSF DMS-2019396. The fourth author was partially supported by JSPS KAKENHI Grant Numbers 20K22319 and 22K13921, and the RIKEN iTHEMS Program. # Background and definitions {#sec:2} In this section, we give several preliminary definitions and discuss equivariant bordism. ## Strong corks {#sec:2.1} We begin with the definition of a strong cork, as introduced by Lin, Ruberman, and Saveliev [@LRS18 Section 1.2.2]: **Definition 10**. Let $Y$ be an integer homology $3$-sphere and $\tau$ be an orientation-preserving involution on $Y$. We say that $(Y, \tau)$ is a *strong cork* if $\tau$ does not extend as a diffeomorphism over any homology ball which $Y$ bounds. We also require that $Y$ bound at least one contractible manifold, so that $Y$ is a cork boundary in the traditional sense. We have the following notion of equivariant cobordism: **Definition 11**. Let $(Y, \tau)$ and $(Y', \tau')$ be two integer homology spheres equipped with orientation-preserving involutions. We say $(W, \widetilde{\tau})$ is an *equivariant cobordism* from $(Y, \tau)$ to $(Y', \tau')$ if $W$ is a cobordism from $Y$ to $Y'$ and $\widetilde{\tau}$ is a self-diffeomorphism of $W$ which restricts to $\tau$ and $\tau'$ on $Y$ and $Y'$, respectively. 
We may further specialize Definition [Definition 11](#def:2.2){reference-type="ref" reference="def:2.2"} by requiring $W$ to be a homology cobordism, in which case we refer to $(W, \widetilde{\tau})$ as an *equivariant homology cobordism*. This notion defines an equivalence relation on the set of pairs $(Y, \tau)$; we denote this by $\sim$. Clearly, $(Y, \tau)$ is a strong cork if and only if $Y$ bounds a contractible manifold and $(Y, \tau) \not\sim (S^3, \operatorname{id})$. Note that by the third part of Theorem [Theorem 1](#thm:1.1){reference-type="ref" reference="thm:1.1"}, $r_s(Y, \tau)$ is an invariant of the equivalence class of $(Y, \tau)$. As discussed in Section [1.1.1](#sec:1.1.1){reference-type="ref" reference="sec:1.1.1"}, we will sometimes have cause to place homological constraints on the extension $\widetilde{\tau}$. We define: **Definition 12**. Let $(Y, \tau)$ and $(Y', \tau')$ be two integer homology spheres equipped with orientation-preserving involutions and $(W, \widetilde{\tau})$ be an equivariant cobordism between them. - We say that $\widetilde{\tau}$ is *homology-fixing* if $\widetilde{\tau}_* = \operatorname{id}$ on $H_2(W, \mathbb{Q})$. - We say that $\widetilde{\tau}$ is *homology-reversing* if $\widetilde{\tau}_* = - \operatorname{id}$ on $H_2(W, \mathbb{Q})$.[^4] If $(Y, \tau)$ and $(Y', \tau')$ are two equivariant homology spheres, one can attempt to form their equivariant connected sum. To this end, let $p \in Y$ and $p' \in Y'$ be fixed points for $\tau$ and $\tau'$, respectively, which have equivariant neighborhoods that are identified via a diffeomorphism intertwining $\tau$ and $\tau'$. Then there is an obvious equivariant connected sum operation $$(Y, \tau)\#(Y',\tau')=(Y\#Y',\tau\#\tau').$$ This may depend on the choice (and existence) of $p$ and $p'$. In all of the examples of this paper, the fixed-point set of $\tau$ will be diffeomorphic to $S^1$, in which case it is clear that the connected sum operation is well-defined. 
**Remark 13**. We stress that requiring $\widetilde{\tau}$ to be an involution is quite different than requiring $\widetilde{\tau}$ to be a diffeomorphism. Although obstructing the latter clearly obstructs the former, there are many examples in which the two notions are not equivalent. For instance, let $\tau$ be the involution on any Brieskorn sphere $\Sigma(p, q, r)$ coming from the obvious $\mathbb{Z}_2$-subgroup of the $S^1$-action. In [@AH21 Theorem A], it is shown that $\tau$ cannot extend as an involution over any contractible manifold which $\Sigma(p, q, r)$ bounds. On the other hand, it is easy to produce an extension of $\tau$ as a diffeomorphism by using the fact that $\tau$ is isotopic to the identity. In the present paper, we study the case where $\widetilde{\tau}$ is required to be a diffeomorphism, so as to preserve the original motivation coming from the theory of corks. ## Equivariant surgery {#sec:2.2} One particularly flexible method for constructing a symmetric $3$-manifold is via surgery on an equivariant knot. As these will provide a robust source of examples in this paper, we include a brief discussion here. Recall that by Smith theory, any orientation-preserving involution on $S^3$ is conjugate to rotation about a standard unknot. We say that a knot $K \subseteq S^3$ is *equivariant* if it is preserved by such an involution $\tau$. If $\tau$ has two fixed points on $K$, then we say that $K$ is *strongly invertible*. If $\tau$ has no fixed points on $K$, then we say that $K$ is *periodic*. In [@DHM20 Section 5.1], it is shown that if $(K, \tau)$ is an equivariant knot, then any surgered manifold $\smash{S^3_{p/q}(K)}$ inherits an involution from the symmetry on $K$, which we also denote by $\tau$. Note that any symmetric $3$-manifold constructed in this way has fixed-point set diffeomorphic to $S^1$. Such examples fit naturally into the formalism of this paper, as the following lemmas show: **Lemma 14**. 
*Let $K$ be a strongly invertible or periodic knot with symmetry $\tau$. For any $n \in \mathbb N$, there is a simply-connected, equivariant negative-definite cobordism $(W, \widetilde{\tau})$ from $$(S^3_{1/(n+1)}(K), \tau) \quad \text{to} \quad (S^3_{1/n}(K), \tau).$$ This cobordism has $H_2(W, \mathbb{Z}) = \mathbb{Z}$. Moreover:* - *If $K$ is strongly invertible, then $\widetilde{\tau}$ is homology-reversing.* - *If $K$ is periodic, then $\widetilde{\tau}$ is homology-fixing.* *Proof.* We explicitly construct a $2$-handle attachment cobordism from $$Y_{n+1} = S^3_{1/(n+1)}(K) \quad \text{to} \quad Y_n = S^3_{1/n}(K)$$ as follows. The reader may check that if $K$ is a strongly invertible or periodic knot, then we may find an equivariant Seifert framing $\lambda$ of $K$. View $\lambda$ as lying inside a solid torus neighborhood $N(K)$ of $K$. We remove $N(K)$ from $S^3$ and re-glue it along the matrix $$\left(\begin{array}{cc}1 & 0 \\n+1 & 1\end{array}\right)$$ to obtain $Y_{n+1}$. For clarity, let $K'$ and $\lambda'$ denote the images of $K$ and $\lambda$, respectively, in $Y_{n+1}$. We construct our cobordism by attaching a $2$-handle along $K'$, with framing $-1$ relative to $\lambda'$. This means that the outgoing boundary is the surgery along $K \subseteq S^3$ with surgery matrix $$\left(\begin{array}{cc}1 & 0 \\n+1 & 1\end{array}\right) \left(\begin{array}{cc}1 & 0 \\-1 & 1\end{array}\right) = \left(\begin{array}{cc}1 & 0 \\n & 1\end{array}\right);$$ that is, we obtain $Y_n$. It is not hard to check that the intersection form of this cobordism is $(-1)$. Indeed, examining the surgery matrix for $Y_{n+1}$, note that $K'$ and $\lambda'$ may be pushed into $Y_{n+1} - N(K') = S^3 - N(K)$. After doing so, we obtain two Seifert framings for $K$ in $S^3$. Hence $K'$ and $\lambda'$ bound disjoint Seifert surfaces in $Y_{n+1}$. The fact that we attached our $2$-handle with framing $-1$ relative to $\lambda'$ shows that the intersection form is indeed $(-1)$. 
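The surgery-matrix identity used in this step is elementary and can be checked directly; the following minimal Python sketch verifies it for a range of $n$ (the helper `mat_mul` is ours, not from the paper):

```python
# Verify that composing the re-gluing matrix for Y_{n+1} with the matrix of
# a 2-handle attached with framing -1 (relative to lambda') yields the
# surgery matrix for Y_n, as claimed in the proof.

def mat_mul(A, B):
    """Multiply two 2x2 integer matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

for n in range(10):
    glue = [[1, 0], [n + 1, 1]]    # re-gluing matrix producing Y_{n+1}
    handle = [[1, 0], [-1, 1]]     # framing -1 relative to lambda'
    # The product is the surgery matrix for 1/n-surgery, i.e. Y_n.
    assert mat_mul(glue, handle) == [[1, 0], [n, 1]]
```

This is only a check of the linear algebra; the topological content of the proof (the choice of equivariant Seifert framing and the identification of the intersection form) is as argued above.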
Thus our cobordism is evidently simply-connected and negative-definite. If $K$ is strongly invertible, then it is not hard to check that the extension $\widetilde{\tau}$ discussed in [@DHM20 Section 5.1] reverses orientation on the generator of $H_2(W, \mathbb{Z})$, while if $K$ is periodic, then $\widetilde{\tau}$ sends the generator of $H_2(W, \mathbb{Z})$ to itself. This completes the proof. ◻ This yields: **Lemma 15**. *Let $K$ be a strongly invertible or periodic knot with symmetry $\tau$. For any nonzero $n \in \mathbb{Z}$, the surgered manifold $\smash{(S^3_{1/n}(K), \tau)}$ is the boundary of a simply-connected, equivariant definite manifold $\smash{(W, \widetilde{\tau})}$. Moreover:* - *If $n > 0$, then $W$ is positive definite.* - *If $n < 0$, then $W$ is negative definite.* *and:* - *If $K$ is strongly invertible, then $\widetilde{\tau}$ is homology-reversing.* - *If $K$ is periodic, then $\widetilde{\tau}$ is homology-fixing.* *Proof.* Suppose $n > 0$. By repeatedly applying Lemma [Lemma 14](#lem:knotsA){reference-type="ref" reference="lem:knotsA"}, we obtain a simply-connected, equivariant negative-definite cobordism from $$(S^3_{1/n}(K), \tau) \quad \text{to} \quad (S^3_{+1}(K), \tau).$$ Turning this around gives a positive-definite cobordism from $(+1)$-surgery to $1/n$-surgery on $K$. Moreover, $(S^3_{+1}(K), \tau)$ itself bounds a simply-connected, equivariant positive-definite manifold given by the trace of $(+1)$-surgery on $K$. If $K$ is strongly invertible, then $\widetilde{\tau}$ on each of these handle attachment cobordisms acts as multiplication by $-1$, and hence the composite cobordism clearly satisfies the desired properties. Similarly, if $K$ is periodic, then $\widetilde{\tau}$ acts as the identity on each handle attachment cobordism. The case $n < 0$ follows by mirroring and negating $n$. ◻ ## Branched covers {#sec:2.3} Another context in which equivariant manifolds naturally arise is the setting of branched covers. 
Recall that if $K$ is a knot with slice disk $D$, then the branched double cover $\Sigma_2(K)$ bounds a $\mathbb{Z}_2$-homology ball given by $\Sigma_2(D)$. This provides a well-known obstruction to the sliceness of $K$. Following [@ASA20; @DHM20; @dai20222], we may refine this approach by remembering the branching involution on $\Sigma_2(K)$. This gives $\Sigma_2(K)$ the structure of an equivariant $\mathbb{Z}_2$-homology sphere. If $K$ bounds a slice disk $D$, then $\Sigma_2(D)$ is likewise equivariant and $\Sigma_2(K) = \partial \Sigma_2(D)$ in the sense of Definition [Definition 11](#def:2.2){reference-type="ref" reference="def:2.2"}. Asking for the existence of an *equivariant* homology ball with boundary $\Sigma_2(K)$ is (in principle) stronger than the corresponding obstruction in the nonequivariant category. In [@ASA20; @DHM20; @dai20222] this was used to obtain new linear independence and sliceness results with the help of Heegaard Floer homology; and, in particular, a proof that the $(2, 1)$-cable of the figure-eight knot is not slice [@dai20222]. In each case, the corresponding nonequivariant obstruction was trivial and the use of the branching involution proved to be essential. As discussed in Section [1.1.4](#sec:1.1.4){reference-type="ref" reference="sec:1.1.4"}, we will also have cause to consider more general slice surfaces $F$ for $K$. We thus collect together various results regarding the topology of $\Sigma_2(F)$. We first have the following result of Gordon-Litherland [@GL78]: **Theorem 16**. *[@GL78][\[thm:2.7\]]{#thm:2.7 label="thm:2.7"} Let $F$ be a nonorientable slice surface for $K$ in $B^4$. Then $$\sigma(K) = \mathrm{sign}(\Sigma_2(F)) + e(F)/2.$$* Next, we have the following facts regarding the basic algebraic topology of $\Sigma_2(F)$: **Lemma 17**. *Let $F$ be a nonorientable slice surface for $K$ in $B^4$. 
Then: $$H_1(\Sigma_2(F), \mathbb{Z}_2) = 0 \quad \text{and} \quad b_2(\Sigma_2(F)) = b_1(F).$$* *Proof.* For the first part, see for example [@Na00 Lemma 3.8]. For the second, see for example [@Massey Lemma 2]. ◻ Note that since $| \mathrm{sign}(\Sigma_2(F)) | \leq b_2(\Sigma_2(F)) = b_1(F)$, Theorem [\[thm:2.7\]](#thm:2.7){reference-type="ref" reference="thm:2.7"} then establishes the inequality ([\[eq:1.1\]](#eq:1.1){reference-type="ref" reference="eq:1.1"}) of Section [1.1.4](#sec:1.1.4){reference-type="ref" reference="sec:1.1.4"}. Moreover, equality occurs precisely when $| \mathrm{sign}(\Sigma_2(F)) | = b_2(\Sigma_2(F))$; that is, $\Sigma_2(F)$ is definite. Finally, we have: **Lemma 18**. *[@Massey Lemma 2][\[lem:2.9\]]{#lem:2.9 label="lem:2.9"} The branching action $\widetilde{\tau}$ on $\Sigma_2(F)$ acts as $- \operatorname{id}$ on $H_2(\Sigma_2(F), \mathbb{Q})$.* We thus have: **Lemma 19**. *If $K$ bounds an extremal surface $F$, then $\Sigma_2(K)$ bounds a homology-reversing, equivariant definite manifold $W$ with $H_1(W, \mathbb{Z}_2) = 0$.* *Proof.* Follows immediately from Lemmas [Lemma 17](#lem:2.8){reference-type="ref" reference="lem:2.8"} and [\[lem:2.9\]](#lem:2.9){reference-type="ref" reference="lem:2.9"}. ◻ # Algebraic preliminaries {#sec:3} In this section, we review the set-up of [@NST19] and re-cast it in an algebraic framework which will more easily generalize to the equivariant setting. ## Instanton-type complexes {#sec:3.1} We begin with a modification to the usual instanton Floer complex [@Fl88; @Do02] which takes into account the reducible connection, and the Chern-Simons filtration. Throughout the paper, we consider $\mathbb{Z}$-graded, $\mathbb R$-filtered chain complexes over the ring $\mathbb{Z}_2[y^{\pm 1}]$ of Laurent polynomials. We denote by $\deg_\mathbb{Z}$ the homological grading, and by $\deg_I$ the filtration level. Under the differential, $\deg_\mathbb{Z}$ is decreased by one, and $\deg_I$ is nonincreasing. 
The gradings are such that multiplication by the variable $y$ satisfies $\deg_\mathbb{Z}(y) = 8 \text{ and } \deg_I(y) = 1$. When regarded as a chain complex, $\mathbb{Z}_2[y^{\pm 1}]$ is the trivial complex spanned by a single generator with $\deg_\mathbb{Z}= \deg_I = 0$, and no differential. We use brackets to denote a shift in homological grading, so that (for example) the generator $1 \in \mathbb{Z}_2[y^{\pm 1}][-3]$ has homological grading $\deg_\mathbb{Z}= -3$, and $\deg_I = 0$. **Definition 20**. Let $(\underline{C},\underline{d})$ be a $\mathbb{Z}$-graded, $\mathbb R$-filtered, finitely-generated, free chain complex over $\mathbb{Z}_2[y^{\pm1}]$. We say $(\underline{C},\underline{d})$ is an *instanton-type chain complex* if it is equipped with a subcomplex $C \subseteq \underline{C}$ such that there is a graded, filtered isomorphism of $\mathbb{Z}_2[y^{\pm 1}]$-complexes $$\underline{C}/C \cong \mathbb{Z}_2[y^{\pm 1}][-3].$$ Denote $$\pi \colon \underline{C} \rightarrow \underline{C}/C \cong \mathbb{Z}_2[y^{\pm 1}][-3].$$ We will often think of $C$ as defining a two-step filtration $C \subseteq \underline{C}$. We denote by $\underline{C}_i$ or $C_i$ the appropriate complex in homological grading $i$. If $\underline{C}$ is an instanton-type complex, then we may choose a splitting of graded, filtered $\mathbb{Z}_2[y^{\pm 1}]$-modules $$\underline{C} = C \oplus \mathbb{Z}_2[y^{\pm 1}][-3].$$ With respect to this splitting, $$\underline{d} = d + D_2,$$ where $d$ is the restriction of $\underline{d}$ to $C$ and $D_2$ is a filtered map from $\mathbb{Z}_2[y^{\pm 1}][-3]$ to $C$. While such a splitting is not canonical, we often assume that a particular splitting has been fixed whenever we discuss $\underline{C}$. We then denote the generator $1 \in \mathbb{Z}_2[y^{\pm 1}][-3]$ by $$\theta = (0, 1).$$ Note that $D_2$ is completely determined by $D_2(\theta) \in C_{-4}$. **Remark 21**.
Definition [Definition 20](#def:3.1){reference-type="ref" reference="def:3.1"} should be thought of as the analog of the fact that in Heegaard Floer homology $\mathit{HF}^-(Y)/U\text{-torsion} \cong \mathbb{Z}_2[U]$ for any integer homology sphere $Y$. In that case we similarly have a noncanonical splitting $\mathit{HF}^-(Y) = (U\text{-torsion}) \oplus \mathbb{Z}_2[U]$. **Remark 22**. We sketch the relevance of Definition [Definition 20](#def:3.1){reference-type="ref" reference="def:3.1"} to instanton Floer homology. Recall that instanton Floer homology [@Fl88; @Do02] associates to an integer homology sphere $Y$ equipped with a Riemannian metric a $\mathbb{Z}$-graded chain complex. This is done by looking at the flow lines of a suitable perturbation $f_\pi$ of the Chern-Simons functional $$f: \mathcal{A}(P)/\mathcal{G}_0(P) \to \mathbb R.$$ Here $P\to Y$ denotes the trivial $SU(2)$-bundle over $Y$, $\mathcal{A}(P)$ the affine space of $SU(2)$-connections on $P$, and $\mathcal{G}_0(P)$ the space of degree zero gauge transformations of $P$. In the usual setting, only irreducible connections on $P$ are considered. We denote the resulting chain complex by $(C(Y,g,\pi), d)$, where $g$ denotes the Riemannian metric on $Y$, and $\pi$ an admissible holonomy perturbation; see [@Fl88 Section (1b)] and [@BD95 Section 5.5.1]. Note that $C(Y,g,\pi)$ has the structure of a $\mathbb{Z}_2[y^{\pm 1}]$-module where multiplication by $y^k$ acts as the gauge action of a gauge transformation of degree $k\in \mathbb{Z}$. In order to take into account the reducible connection, we define $$\underline{C}(Y,g,\pi)= C(Y,g,\pi) \oplus \mathbb{Z}_2[y^{\pm1}] \cdot \theta,$$ where $\theta$ is a formal generator (in homological grading $-3$) representing the class of the trivial connection on $P$.
The differential on $\underline{C}(Y,g,\pi)$ is then $\underline{d} = d + D_2$, where $d$ is the usual differential on the irreducible complex $C(Y,g,\pi)$ and $D_2$ counts flow lines out of the reducible connection; see [@Do02 Section 7.1]. We put a filtration on $\underline{C}(Y, g, \pi)$ by defining $$\deg_I(\bold{x})=f_\pi(\bold{x})$$ for every irreducible critical point $\bold{x}\in \text{crit}(f_\pi)$ and setting the filtration of $\theta$ to be zero. **Remark 23**. Note that $\underline{C}(Y,g,\pi)$ depends on the perturbation $\pi$ (even up to an appropriate notion of filtered homotopy equivalence) and thus is not an invariant of $Y$. In order to remedy this, we introduce the definition of an enriched complex in Section [4.4](#sec:4.4){reference-type="ref" reference="sec:4.4"}, which will capture the notion of taking a sequence of perturbations converging to zero. We gloss over this subtlety for now. The following auxiliary complexes capture information from the Chern-Simons filtration. **Definition 24**. Let $(\underline{C}, \underline{d})$ be an instanton-type complex. For any real number $s$, define $$(\underline{C}^{[-\infty,s]}, \underline{d}^{[-\infty,s]}) = (\{ \zeta \in \underline{C} | \deg_I (\zeta ) \leq s\} , \underline{d}|_{\underline{C}^{[-\infty,s]}}).$$ This is a subcomplex of $\underline{C}$. For a given pair of real numbers $r<s$, we may also consider the quotient complex $$\underline{C}^{[r,s]} = \underline{C}^{[-\infty,s]}/ \underline{C}^{[-\infty,r]}.$$ When discussing $\underline{C}^{[r,s]}$, we will usually assume $r < 0 \leq s$. Define complexes $C^{[-\infty, s]}$ and $C^{[r, s]}$ similarly. Denote $$\pi^{[r, s]} \colon \underline{C}^{[r, s]} \rightarrow \underline{C}^{[r, s]}/C^{[r, s]} \cong \mathbb{Z}_2[y^{\pm 1}][-3]^{[r, s]}.$$ Note that $\mathbb{Z}_2[y^{\pm 1}][-3]^{[r, s]}$ is generated by the cycles $y^{i}$ with $r < \deg_I(y^{i}) \leq s$. 
(Or, more precisely, the classes of these cycles in the quotient $\mathbb{Z}_2[y^{\pm 1}][-3]^{[-\infty, s]}/\mathbb{Z}_2[y^{\pm 1}][-3]^{[-\infty, r]}$.) As in the discussion following Definition [Definition 20](#def:3.1){reference-type="ref" reference="def:3.1"}, we will often assume that a splitting of $\underline{C}$ has been chosen. Denote $$\theta^{[r, s]} = \pi^{[r, s]}(\theta).$$ This is nonzero if and only if $r < 0 \leq s$. Sometimes we will continue to write $\theta$ in place of $\theta^{[r, s]}$ when the context is clear. We now discuss maps between instanton-type chain complexes. **Definition 25**. Let $(\underline{C}, \underline{d})$ and $(\underline{C}', \underline{d}')$ be two instanton-type complexes. A *morphism* $$f \colon \underline{C} \rightarrow \underline{C}'$$ is a $\deg_\mathbb{Z}$-preserving, $\mathbb{Z}_2[y^{\pm 1}]$-linear chain map that preserves the two-step filtration: $$f(C) \subseteq C'.$$ This induces a map $$f \colon \underline{C}/C \cong \mathbb{Z}_2[y^{\pm 1}][-3] \rightarrow \underline{C}'/C' \cong \mathbb{Z}_2[y^{\pm 1}][-3],$$ which by abuse of notation we also denote by $f$. We emphasize that an instanton-type complex has two filtrations: the $\deg_I$-filtration and the two-step filtration afforded by Definition [Definition 20](#def:3.1){reference-type="ref" reference="def:3.1"}. As per Definition [Definition 25](#def:3.5){reference-type="ref" reference="def:3.5"}, in the present work we will only ever consider maps which preserve the two-step filtration. We thus refer to a map as *filtered* (with no qualification) to mean that it is filtered with respect to $\deg_I$. While we will primarily be interested in filtered maps, it will also be useful to have a notion of a chain map that increases the filtration by at most a fixed parameter: **Definition 26**. Let $(\underline{C},\underline{d})$ and $(\underline{C}',\underline{d}')$ be two instanton-type chain complexes and fix any real number $\delta \geq 0$.
We say that a morphism $f \colon \underline{C} \rightarrow \underline{C}'$ has *level* $\delta$ if $$\deg_I(f(\zeta)) \leq \deg_I(\zeta) + \delta$$ for all $\zeta \in \underline{C}$. Note that a level-zero morphism is filtered in the usual sense. Clearly, a level-$\delta$ morphism induces a morphism: $$f^{[r, s]} \colon \underline{C}^{[r, s]} \rightarrow \underline{C}'^{[r + \delta, s + \delta]}$$ for any $r < s$. We have the obvious notion of homotopy equivalence: **Definition 27**. Let $(\underline{C},\underline{d})$ and $(\underline{C}',\underline{d}')$ be two instanton-type chain complexes. We say that $\underline{C}$ and $\underline{C}'$ are *homotopy equivalent* if there exist morphisms $f$ and $g$ between them and $\mathbb{Z}_2[y^{\pm 1}]$-linear homotopies $$H_{\underline{C}} \colon \underline{C}_* \rightarrow \underline{C}_{* + 1} \quad \text{and} \quad H_{\underline{C}'} \colon \underline{C}'_* \rightarrow \underline{C}'_{* + 1}$$ such that $$gf + \operatorname{id}= \underline{d}H_{\underline{C}} + H_{\underline{C}}\underline{d} \quad \text{and} \quad fg + \operatorname{id}= \underline{d}'H_{\underline{C}'} + H_{\underline{C}'}\underline{d}'.$$ We require that $H_{\underline{C}}$ and $H_{\underline{C}'}$ preserve the two-step filtrations on $\underline{C}$ and $\underline{C}'$, respectively. - If $f$, $g$, $H_{\underline{C}}$, and $H_{\underline{C}'}$ are filtered then we say that $\underline{C}$ and $\underline{C}'$ are *filtered homotopy equivalent*. - If $f$, $g$, $H_{\underline{C}}$, and $H_{\underline{C}'}$ have level $\delta$, then we say that $\underline{C}$ and $\underline{C}'$ are *level-$\delta$ homotopy equivalent*. It is clear that filtered homotopy equivalence is an equivalence relation. Note, however, that for a fixed value of $\delta$, level-$\delta$ homotopy equivalence is *not* an equivalence relation, since the composition of two level-$\delta$ morphisms need not have level $\delta$. 
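To spell this out, if $f \colon \underline{C} \rightarrow \underline{C}'$ has level $\delta$ and $g \colon \underline{C}' \rightarrow \underline{C}''$ has level $\delta'$, then for any $\zeta \in \underline{C}$ we can only estimate

```latex
\deg_I\bigl(g(f(\zeta))\bigr) \;\leq\; \deg_I(f(\zeta)) + \delta'
                              \;\leq\; \deg_I(\zeta) + \delta + \delta',
```

so the composite $g \circ f$ is guaranteed to have level $\delta + \delta'$, but in general no better.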
We now turn to the following important notion: **Definition 28**. Let $(\underline{C},\underline{d})$ and $(\underline{C}',\underline{d}')$ be two instanton-type chain complexes. We say that a morphism $\lambda \colon \underline{C} \rightarrow \underline{C}'$ is a *local map* if the induced map on the quotient $$\lambda \colon \underline{C}/C \cong \mathbb{Z}_2[y^{\pm 1}][-3] \rightarrow \underline{C}'/C' \cong \mathbb{Z}_2[y^{\pm 1}][-3]$$ is an isomorphism. - If $\lambda$ is filtered, then we refer to $\lambda$ as a *filtered local map*. - If $\lambda$ has level $\delta$, then we refer to $\lambda$ as a *level-$\delta$ local map*. If we have local maps in both directions between $\underline{C}$ and $\underline{C}'$, we say that $\underline{C}$ and $\underline{C}'$ are *locally equivalent*. - If both maps are filtered, then we refer to this as a *filtered local equivalence*. - If both maps have level $\delta$, then we refer to this as a *level-$\delta$ local equivalence*. It is clear that filtered local equivalence is an equivalence relation. Note, however, that for a fixed value of $\delta$, level-$\delta$ local equivalence is *not* an equivalence relation, since the composition of two level-$\delta$ morphisms need not have level $\delta$. Definition [Definition 28](#def:3.8){reference-type="ref" reference="def:3.8"} should be thought of as the rough analogue of the notion of a local map in Heegaard Floer homology. Recall that in Heegaard Floer theory, a map $f \colon CF^-(Y) \rightarrow CF^-(Y')$ is said to be local if it induces an isomorphism between the localizations $$U^{-1} \mathit{HF}^-(Y) \rightarrow U^{-1} \mathit{HF}^-(Y').$$ We have thus borrowed this terminology, even though $\underline{C}/C$ is not really the localization of $\underline{C}$ with respect to any variable. ## $\theta$-supported cycles and the $r_s$-invariant {#sec:3.2} We now review the definition of the $r_s$-invariant of [@NST19]. 
For this, we will need the notion of a $\theta$-supported cycle, which may be regarded as capturing partial information about the special cycles introduced in [@DISST22]. Roughly speaking, a $\theta$-supported cycle should be regarded as the analog of a $U$-nontorsion element in $\mathit{HF}^-(Y)$. **Definition 29**. Let $(\underline{C}, d)$ be an instanton-type complex. We say that a cycle $z \in \underline{C}$ is a *$\theta$-supported cycle* if $\pi_*([z]) = 1 \in \mathbb{Z}_2[y^{\pm 1}][-3]$, where $$\pi_* \colon H_*(\underline{C}) \rightarrow H_*(\underline{C}/C) \cong \mathbb{Z}_2[y^{\pm 1}][-3].$$ Similarly, we say that a cycle $z \in \underline{C}^{[r, s]}$ is a *$\theta$-supported $[r, s]$-cycle* if $\pi_*^{[r, s]}([z]) = 1 \in \mathbb{Z}_2[y^{\pm 1}][-3]^{[r, s]}$, where $$\pi_*^{[r, s]} \colon H_*(\underline{C}^{[r, s]}) \rightarrow H_*(\underline{C}^{[r, s]}/C^{[r, s]}) \cong \mathbb{Z}_2[y^{\pm 1}][-3]^{[r, s]}.$$ Here, we require that $r < 0 \leq s$, so that $1$ is a nonzero element of $\mathbb{Z}_2[y^{\pm 1}][-3]^{[r, s]}$. It will be convenient to think of $\theta$-supported cycles explicitly by choosing a splitting of $\underline{C}$ as in the discussion after Definition [Definition 20](#def:3.1){reference-type="ref" reference="def:3.1"}. Then any element $z \in \underline{C}$ can be written $z=\zeta+c\cdot \theta$ for some $\zeta \in C$ and $c\in\mathbb{Z}_2[y^{\pm1}]$. A $\theta$-supported cycle is a cycle $z$ for which $c = 1$.[^5] While the splitting of $\underline{C}$ is not canonical, it is straightforward to see that this property is equivalent to Definition [Definition 29](#def:3.9){reference-type="ref" reference="def:3.9"} and thus independent of the choice of splitting. A similar characterization holds regarding $\theta^{[r, s]}$ and $\theta$-supported $[r, s]$-cycles. We define the $r_s$-invariant of an instanton-type chain complex using the notion of a filtered $\theta$-supported cycle: **Definition 30**.
Let $\underline{C}$ be an instanton-type complex and $s \in [-\infty, 0]$. Define $$r_s ( \underline{C})= - \inf_{r < 0} \{ r | \text{ there exists a $\theta$-supported $[r, -s]$-cycle}\} \in [0, \infty],$$ with the caveat that if the above set is empty, we set $r_s(\underline{C}) = - \infty$. The signs in Definition [Definition 30](#def:3.10){reference-type="ref" reference="def:3.10"} are for consistency with the conventions of [@NST19]. See Remark [Remark 86](#rem:differenceinrs){reference-type="ref" reference="rem:differenceinrs"}. **Example 31**. Several examples of instanton-type complexes and graphs of their $r_s$-invariants are given in Figure [3](#fig:3.1){reference-type="ref" reference="fig:3.1"}. - In (a), $\theta$ itself is a $\theta$-supported $[r, -s]$-cycle for all values of $r$ and $s$; hence $r_s(\underline{C}) = \infty$ for all $s$. - In (b), there is a $\theta$-supported $[r, -s]$-cycle if and only if $r \geq \beta$; this is again given by $\theta$ itself. - In (c), there is a $\theta$-supported $[r, -s]$-cycle for all values of $r$ and $s$. This is given by $\theta$ if $r \geq \beta$ and $\theta - y$ if $r < \beta$. - The most nontrivial case occurs in (d). If $-s \geq \alpha$, then $\theta - y$ forms a $\theta$-supported $[r, -s]$-cycle for any value of $r$. If $-s < \alpha$, then there is a $\theta$-supported $[r, -s]$-cycle if and only if $r \geq \beta$, which is given by $\theta$. ![Some instanton-type complexes and graphs of their $r_s$-invariants. Dotted lines represent the $\deg_I$-filtration. The differential is given by the black arrows; if no black arrow is drawn, the differential on a given generator is zero.](F3.pdf){#fig:3.1} We refer the reader to [@NST19] for further discussion of $r_s(Y)$. ## Properties {#sec:3.3} We now give several important properties of instanton-type complexes and the $r_s$-invariant. 
### Local maps In Section [5.4](#sec:5.4){reference-type="ref" reference="sec:5.4"}, we show that negative-definite cobordisms induce morphisms in instanton Floer homology. Hence it will be useful to have the following: **Lemma 32**. *Let $(\underline{C},\underline{d})$ and $(\underline{C}',\underline{d}')$ be instanton-type complexes and $\lambda \colon \underline{C} \rightarrow \underline{C}'$ be a local map.* - *If $z$ is a $\theta$-supported cycle in $\underline{C}$, then $\lambda(z)$ is a $\theta$-supported cycle in $\underline{C}'$.* - *If $z$ is a $\theta$-supported $[r, s]$-cycle in $\underline{C}$ and $\lambda$ is a level-$\delta$ local map, then $\lambda(z)$ is a $\theta$-supported $[r + \delta, s + \delta]$-cycle in $\underline{C}'$.[^6]* *Proof.* Since $\lambda$ is a chain map, it maps cycles to cycles; the fact that a local map induces an isomorphism $\underline{C}/C \rightarrow \underline{C}'/C'$ shows that $\lambda(z)$ is also $\theta$-supported. The second part of the claim follows from the fact that if $\lambda$ has level $\delta$, then it induces a chain map from $\underline{C}^{[r,s]}$ to $\underline{C}'^{[r+ \delta, s+ \delta]}$. ◻ This immediately yields: **Lemma 33**. *Let $\lambda \colon \underline{C} \rightarrow \underline{C}'$ be a level-$\delta$ local map. Then for any $s \in [-\infty, 0]$, we have $$r_s (\underline{C}) - \delta \leq r_{s - \delta}(\underline{C}').$$* *Proof.* This follows from [Lemma 32](#under local map filtration){reference-type="ref" reference="under local map filtration"} and Definition [Definition 30](#def:3.10){reference-type="ref" reference="def:3.10"}. ◻ ### Tensor products In order to understand connected sums of $3$-manifolds, we will have to understand tensor products of instanton-type complexes. **Definition 34**. Let $(\underline{C},\underline{d})$ and $(\underline{C}',\underline{d}')$ be instanton-type complexes.
We define their tensor product by taking the usual graded tensor product over $\mathbb{Z}_2[y^{\pm 1}]$, but with an upwards grading shift of three: $$\underline{C} \otimes \underline{C}' = (( \underline{C} \otimes \underline{C}') [3], \underline{d}\otimes \operatorname{id}+ \operatorname{id}\otimes \underline{d}').$$ The $\mathbb R$- and two-step filtrations on $\underline{C} \otimes \underline{C}'$ are both given by the usual tensor product of filtrations on $\underline{C}$ and $\underline{C}'$, respectively. In the latter case, this means that the subcomplex of $\underline{C} \otimes \underline{C}'$ required by Definition [Definition 20](#def:3.1){reference-type="ref" reference="def:3.1"} is given by $$(C \otimes \underline{C'} + \underline{C} \otimes C')[3].$$ It is straightforward to check that this makes $\underline{C} \otimes \underline{C}'$ into an instanton-type complex. Explicitly, if we fix splittings for $\underline{C}$ and $\underline{C}'$, then the $\theta$-chain in $\underline{C} \otimes \underline{C}'$ is given by $\theta \otimes \theta'$. Suppose that $z \in \underline{C}^{[r, s]}$ and $z' \in \underline{C}'^{[r', s']}$ are cycles. Then we may choose representatives (which we also call $z$ and $z'$) in $\underline{C}$ and $\underline{C}'$ such that $$\deg_I(z) \leq s \text{ and } \deg_I(\underline{d}z) \leq r$$ and $$\deg_I(z') \leq s' \text{ and } \deg_I(\underline{d}'z') \leq r'.$$ Then $\deg_I(z \otimes z') \leq s + s'$. Since the boundary of $z \otimes z'$ is given by $(\underline{d}z) \otimes z' + z \otimes (\underline{d}'z')$ (signs are immaterial over $\mathbb{Z}_2$), we see that the boundary of $z \otimes z'$ has filtration level at most $\max\{r + s', r' + s\}$. Hence $z \otimes z'$ is a cycle in $(\underline{C} \otimes \underline{C}')^{[\max\{r + s', r' + s\}, s + s']}$. The behavior of $\theta$-supported cycles under tensor products is straightforward: **Lemma 35**.
*Let $(\underline{C},\underline{d})$ and $(\underline{C}',\underline{d}')$ be instanton-type complexes.* - *If $z \in \underline{C}$ and $z' \in \underline{C}'$ are $\theta$-supported cycles, then $z \otimes z'$ is likewise a $\theta$-supported cycle.* - *If $z \in \underline{C}$ and $z' \in \underline{C}'$ are $\theta$-supported $[r, s]$- and $[r', s']$-cycles, respectively, then $z \otimes z'$ is a $\theta$-supported $[\max\{r + s', r' + s\} , s + s']\text{-cycle}$.* *Proof.* It is clear that the tensor product of two $\theta$-supported cycles is a $\theta$-supported cycle, since if $z= \theta + \zeta$ and $z'= \theta' + \zeta'$, we have $z \otimes z' = \theta \otimes \theta' + \theta\otimes \zeta' + \zeta \otimes \theta'$. This implies $z \otimes z'$ is a $\theta$-supported cycle in $\underline{C} \otimes \underline{C}'$. The filtered version of the claim then follows from the preceding paragraph. ◻ This immediately yields: **Lemma 36**. *Let $(\underline{C},\underline{d})$ and $(\underline{C}',\underline{d}')$ be instanton-type complexes. For any $s$ and $s'$ in $[-\infty, 0]$, we have $$r_{s + s'}(\underline{C} \otimes \underline{C}') \geq \min \{r_{s} (\underline{C}) + s' , r_{s'} (\underline{C}') + s\}.$$* *Proof.* This follows from [Lemma 35](#tensor product of theta supp cycle){reference-type="ref" reference="tensor product of theta supp cycle"} and Definition [Definition 30](#def:3.10){reference-type="ref" reference="def:3.10"}. ◻ ### Dualization As discussed in Section [3.1](#sec:3.1){reference-type="ref" reference="sec:3.1"}, Definition [Definition 20](#def:3.1){reference-type="ref" reference="def:3.1"} is motivated by counting flow lines out of the reducible connection $\theta$. One can instead take into account flow lines *into* the reducible connection via the map $D_1$ of [@Do02 Section 7.1]. There are thus actually two flavors of instanton-type complexes. 
When clarity is needed, we refer to a complex as in Definition [Definition 20](#def:3.1){reference-type="ref" reference="def:3.1"} as a ($D_2$-) instanton-type complex. In contrast: **Definition 37**. Let $(\overline{C},\overline{d})$ be a $\mathbb{Z}$-graded, $\mathbb R$-filtered, finitely-generated free chain complex over $\mathbb{Z}_2[y^{\pm1}]$. We say $(\overline{C},\overline{d})$ is a ($D_1$-) *instanton-type chain complex* if it is equipped with a subcomplex isomorphic to $\mathbb{Z}_2[y^{\pm 1}]$. We think of this as again defining a two-step filtration $\mathbb{Z}_2[y^{\pm 1}] \subseteq \overline{C}$; we denote the quotient complex by $C = \overline{C}/\mathbb{Z}_2[y^{\pm 1}]$. In this situation, we may again choose a splitting of graded, filtered $\mathbb{Z}_2[y^{\pm 1}]$-modules $$\overline{C} = C \oplus \mathbb{Z}_2[y^{\pm 1}],$$ where the first summand is a choice of graded, filtered lift of the quotient complex $C$. With respect to this splitting, $$\overline{d} = d + D_1,$$ where $d$ is the differential on $C$ and $D_1$ is some filtered map from $C$ to $\mathbb{Z}_2[y^{\pm 1}]$. Note that $\overline{d}$ is zero on $\mathbb{Z}_2[y^{\pm 1}]$. We again write $\theta = (0, 1)$ for the generator of $\mathbb{Z}_2[y^{\pm 1}]$. It is not hard to see that up to a grading shift, the dual of a ($D_2$-) instanton-type complex may be given the structure of a ($D_1$-) instanton-type complex, and vice versa. Although we will not discuss this here, it turns out that (setting aside the dependence on $g$ and $\pi$ discussed in Remark [Remark 23](#rem:3.3){reference-type="ref" reference="rem:3.3"}), $$\overline{C}(Y) \cong \underline{C}(-Y)^*.$$ This mirrors the Heegaard Floer setting, where $\mathit{HF}^+(Y)$ is isomorphic to the dual of $\mathit{HF}^-(-Y)$. However, it turns out that $\overline{C}(Y)$ cannot in general be determined from $\underline{C}(Y)$.
Indeed, the differential in $\overline{C}$ counts flows into the reducible connection on $Y$, while the differential in $\underline{C}$ counts flows out of the reducible connection. In this paper, we will almost exclusively work with $\underline{C}$, rather than $\overline{C}$. We will thus often consider both $\underline{C}(Y)$ and $\underline{C}(-Y)$, which do not contain the same information. See Section [7.1](#sec:7.1){reference-type="ref" reference="sec:7.1"} for further discussion.\ ### Local triviality We will often be interested in whether a complex $\underline{C}$ admits maps to or from the trivial complex. It turns out that the former is trivial, while the existence of a local map in the other direction is characterized by $r_0(\underline{C})$: **Lemma 38**. *Let $\underline{C}$ be an instanton-type complex. Then:* - *There always exists a filtered local map from $\underline{C}$ to the trivial complex $\mathbb{Z}_2[y^{\pm 1}][-3]$.* - *There exists a filtered local map from $\mathbb{Z}_2[y^{\pm 1}][-3]$ to $\underline{C}$ if and only if $r_0(\underline{C}) = \infty$.* *Proof.* The first part of the lemma is trivial, as the quotient map $$\pi \colon \underline{C} \rightarrow \underline{C}/C \cong \mathbb{Z}_2[y^{\pm 1}][-3]$$ constitutes the desired local map. For the second part, assume that we have a filtered local map $\lambda \colon \mathbb{Z}_2[y^{\pm 1}][-3] \rightarrow \underline{C}$. Then $\lambda(1)$ is a $\theta$-supported cycle in $\underline{C}$ of filtration level zero; this shows that $r_0(\underline{C}) = \infty$. Conversely, suppose that $r_0(\underline{C}) = \infty$. This easily implies that there is a $\theta$-supported cycle of filtration level zero. (Note that this is in fact a cycle in $\underline{C}$, not just a particular quotient of $\underline{C}$.) We construct the desired filtered local map $$\lambda \colon \mathbb{Z}_2[y^{\pm 1}][-3] \to \underline{C}$$ by sending the generator of the left-hand side to the aforementioned cycle. 
◻ # Involutive instanton complexes {#sec:4} We now introduce the main invariants discussed in this paper. ## Involutive complexes {#sec:4.1} In Section [5.3](#sec:5.3){reference-type="ref" reference="sec:5.3"}, we will show that an involution on $Y$ induces a homotopy involution on the instanton complex $\underline{C}(Y)$. (We set aside the dependence on $g$ and $\pi$ for now; see Section [4.4](#sec:4.4){reference-type="ref" reference="sec:4.4"}.) We make this notion precise in the following definition: **Definition 39**. An *involutive instanton-type complex* is a pair $(\underline{C}, \tau)$, where $\underline{C}$ is an instanton-type complex and $$\tau: \underline{C} \to \underline{C}$$ is a filtered morphism satisfying the following: - The induced map on the quotient $$\tau \colon \underline{C}/C \cong \mathbb{Z}_2[y^{\pm 1}][-3] \rightarrow \underline{C}/C \cong \mathbb{Z}_2[y^{\pm 1}][-3]$$ is the identity. Note here that we use the same isomorphism in the domain and the range afforded by Definition [Definition 20](#def:3.1){reference-type="ref" reference="def:3.1"}. - $\tau$ is a homotopy involution; that is, there exists a $\mathbb{Z}_2[y^{\pm 1}]$-linear map $$H : \underline{C}_* \to \underline{C}_{*+1}$$ such that $$\underline{d} H + H \underline{d} = \tau^2+ \operatorname{id}.$$ We require $H$ to be filtered with respect to both $\deg_I$ and the two-step filtration. Note that since $\tau$ is filtered with respect to $\deg_I$, we also have the morphism $$\tau^{[r, s]} \colon \underline{C}^{[r, s]} \rightarrow \underline{C}^{[r, s]}.$$ This satisfies the same properties as above with $\underline{C}$ replaced by $\underline{C}^{[r, s]}$. We will sometimes continue to write $\tau$ in place of $\tau^{[r, s]}$ when the meaning is clear. If we choose a splitting $\underline{C} = C \oplus \mathbb{Z}_2[y^{\pm 1}]$ as in Section [3.1](#sec:3.1){reference-type="ref" reference="sec:3.1"}, then it is clear that $\tau \theta = \xi + \theta$ for some $\xi \in C$. 
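Indeed, since the induced map on the quotient is the identity, we have

```latex
\pi(\tau \theta) \;=\; \operatorname{id}\bigl(\pi(\theta)\bigr) \;=\; 1 \;\in\; \mathbb{Z}_2[y^{\pm 1}][-3],
```

so $\tau\theta$ and $\theta$ have the same image under $\pi$, and hence $\xi = \tau\theta + \theta$ lies in $\ker \pi = C$ (recall that we work over $\mathbb{Z}_2$, so $\tau\theta + \theta = \tau\theta - \theta$).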
Maps between involutive instanton-type complexes are exactly as in Section [3.1](#sec:3.1){reference-type="ref" reference="sec:3.1"}, but with the additional requirement that they homotopy commute with $\tau$. More precisely: **Definition 40**. Let $(\underline{C}, \tau)$ and $(\underline{C}', \tau')$ be two involutive complexes. We say that a morphism $f \colon \underline{C} \rightarrow \underline{C}'$ is *equivariant* if there exists a $\mathbb{Z}_2[y^{\pm 1}]$-linear map $$H : \underline{C}_* \to \underline{C}'_{*+1}$$ such that $$\underline{d}' H + H \underline{d} = \tau' f + f \tau.$$ We require $H$ to be filtered with respect to the two-step filtration. If $f$ is filtered with respect to $\deg_I$, then we require $H$ to also be filtered with respect to $\deg_I$; if $f$ has level $\delta$, then we require $H$ to also have level $\delta$. This gives the obvious notion of an equivariant local map, an equivariant homotopy equivalence, and so on. For the sake of being explicit, note that a level-$\delta$ equivariant homotopy equivalence is formed by a pair of level-$\delta$ equivariant morphisms $f$ and $g$, such that $f \circ g \simeq \operatorname{id}$ and $g \circ f \simeq \operatorname{id}$ by level-$\delta$ homotopies. This means that the maps $f$ and $g$, as well as the homotopies witnessing $$\tau' f \simeq f \tau, \quad \tau g \simeq g \tau', \quad f \circ g \simeq \operatorname{id}\quad \text{and} \quad g \circ f \simeq \operatorname{id},$$ all have level $\delta$. ## Equivariant $\theta$-supported cycles and the involutive $r_s$-invariant {#sec:4.2} We now turn to the involutive analog of the $r_s$-invariant. We begin with the following: **Definition 41**. Let $(\underline{C},\tau)$ be an involutive instanton-type complex. 
A chain $z \in \underline{C}$ is called an *equivariant $\theta$-supported cycle* if $z$ is a $\theta$-supported cycle in the sense of Definition [Definition 29](#def:3.9){reference-type="ref" reference="def:3.9"} and $[z]$ is fixed by $\tau_*$; that is, $$\tau_* [z]=[z] \in H_*(\underline{C}).$$ Similarly, a chain $z \in \underline{C}^{[r, s]}$ is called an *equivariant $\theta$-supported $[r, s]$-cycle* if $z$ is a $\theta$-supported $[r, s]$-cycle and $$\tau^{[r, s]}_* [z] = [z] \in H_* ( \underline{C}^{[r,s]}) .$$ Thus an equivariant $\theta$-supported cycle is just a $\theta$-supported cycle whose homology class is fixed by $\tau_*$. This leads immediately to an involutive analog of the $r_s$-invariant: **Definition 42**. Let $(\underline{C}, \tau)$ be an involutive instanton-type complex and $s \in [- \infty, 0]$. Define $$r_s ( \underline{C}, \tau)= -\inf_{r < 0} \{ r | \text{ there exists an equivariant $\theta$-supported $[r, -s]$-cycle} \} \in [0,\infty],$$ with the caveat that if the above set is empty, we set $r_s(\underline{C}, \tau) = - \infty$. **Example 43**. Several examples of involutive instanton-type complexes and graphs of their involutive $r_s$-invariants are given in Figure [4](#fig:4.1){reference-type="ref" reference="fig:4.1"}. In each case, the noninvolutive $r_s$-invariant is $\infty$ for all $s$. - An archetypal example is given in $(a)$, where $\theta$ itself is a $\theta$-supported $[r, -s]$-cycle for all values of $r$ and $s$, but is equivariant only when $r \geq \beta$. - In $(b)$, we present an example which is nontrivial even though $\theta$ itself is fixed by $\tau$. Indeed, if $r < \beta$ then $\theta$ is not a cycle: the $\theta$-supported $[r, -s]$-cycles are instead $\theta - y$ and $\theta - z$, which are interchanged by $\tau$. If $r \geq \beta$, then $\theta$ forms an equivariant $\theta$-supported $[r, -s]$-cycle. - Finally, in $(c)$, we present an example which depends on $s$.
If $-s \geq \alpha$, then $x$ is homologically trivial, so $\theta$ forms an equivariant $\theta$-supported $[r, -s]$-cycle. Otherwise, the analysis of $(c)$ is the same as that of $(a)$. ![Some involutive instanton-type complexes and graphs of their involutive $r_s$-invariants. Dotted lines represent the $\deg_I$-filtration. The differential is given by the black arrows; if no black arrow is drawn, the differential on a given generator is zero. The action of $\tau$ is given by the (sum of the) red arrows; if no red arrow is drawn, the action of $\tau$ on a given generator is the identity.](F4.pdf){#fig:4.1} ## Properties {#sec:4.3} The properties of involutive instanton-type complexes and the involutive $r_s$-invariant are largely the same as those discussed in Section [3.3](#sec:3.3){reference-type="ref" reference="sec:3.3"}. ### Local maps The involutive $r_s$-invariant is functorial under equivariant local maps: **Lemma 44**. *Let $(\underline{C}, \tau)$ and $(\underline{C}', \tau')$ be two involutive complexes and $\lambda \colon \underline{C} \rightarrow \underline{C}'$ be an equivariant local map. Then:* - *If $z$ is an equivariant $\theta$-supported cycle in $\underline{C}$, then $\lambda(z)$ is an equivariant $\theta$-supported cycle in $\underline{C}'$.* - *If $z$ is an equivariant $\theta$-supported $[r, s]$-cycle in $\underline{C}$ and $\lambda$ has level $\delta$, then $\lambda(z)$ is an equivariant $\theta$-supported $[r + \delta, s + \delta]$-cycle in $\underline{C}'$.* *Proof.* If $[z]$ is $\tau_*$-invariant, then clearly $\lambda_*[z]$ is $\tau'_*$-invariant. The claim then follows from the proof of Lemma [Lemma 32](#under local map filtration){reference-type="ref" reference="under local map filtration"}. ◻ This immediately yields: **Lemma 45**. *Let $(\underline{C}, \tau)$ and $(\underline{C}', \tau')$ be two involutive complexes and $\lambda \colon \underline{C} \rightarrow \underline{C}'$ be an equivariant level-$\delta$ local map.
Then for any $s \in [-\infty, 0]$, we have $$r_s (\underline{C}, \tau) - \delta \leq r_{s - \delta}(\underline{C}', \tau').$$* *Proof.* This follows from Lemma [Lemma 44](#lem:4.6){reference-type="ref" reference="lem:4.6"} and Definition [Definition 42](#def:4.4){reference-type="ref" reference="def:4.4"}. ◻ ### Tensor products As in Section [3.3](#sec:3.3){reference-type="ref" reference="sec:3.3"}, we have tensor products of involutive complexes: **Definition 46**. Let $(\underline{C}, \tau)$ and $(\underline{C}', \tau')$ be involutive complexes. We define their tensor product by taking the tensor product as in Definition [Definition 34](#def:3.14){reference-type="ref" reference="def:3.14"}, equipped with the obvious tensor product of $\tau$ and $\tau'$: $$(\underline{C}, \tau) \otimes (\underline{C}', \tau') = (( \underline{C} \otimes \underline{C}') [3], \tau \otimes \tau').$$ It is straightforward to check that $\tau \otimes \tau'$ satisfies all the conditions of Definition [Definition 39](#def:4.1){reference-type="ref" reference="def:4.1"}. We have: **Lemma 47**. *Let $(\underline{C}, \tau)$ and $(\underline{C}', \tau')$ be involutive complexes.* - *If $z \in \underline{C}$ and $z' \in \underline{C}'$ are equivariant $\theta$-supported cycles, then $z \otimes z'$ is likewise an equivariant $\theta$-supported cycle.* - *If $z \in \underline{C}$ and $z' \in \underline{C}'$ are equivariant $\theta$-supported $[r, s]$- and $[r', s']$-cycles, respectively, then $z \otimes z'$ is an equivariant $\theta$-supported $[\max\{r + s', r' + s\} , s + s']\text{-cycle}$.* *Proof.* If $\tau_*[z] = [z]$ and $\tau'_*[z'] = [z']$, then $(\tau \otimes \tau')_*[z \otimes z'] = [z \otimes z']$. The claim then follows from the proof of Lemma [Lemma 35](#tensor product of theta supp cycle){reference-type="ref" reference="tensor product of theta supp cycle"}. ◻ This immediately yields: **Lemma 48**. 
*Let $(\underline{C},\tau)$ and $(\underline{C}',\tau')$ be involutive complexes. For any $s$ and $s'$ in $[-\infty, 0]$, we have $$r_{s + s'}(\underline{C} \otimes \underline{C}', \tau \otimes \tau') \geq \min \{r_{s} (\underline{C}, \tau) + s' , r_{s'} (\underline{C}', \tau') + s\}.$$* *Proof.* This follows from Lemma [Lemma 47](#lem:4.9){reference-type="ref" reference="lem:4.9"} and Definition [Definition 42](#def:4.4){reference-type="ref" reference="def:4.4"}. ◻ ### Dualization As discussed in Section [3.3](#sec:3.3){reference-type="ref" reference="sec:3.3"}, the dual of a ($D_2$-) instanton-type complex is not a ($D_2$-) instanton-type complex, but rather a ($D_1$-) instanton-type complex. **Definition 49**. An *involutive $(D_1$-$)$ instanton-type complex* is a pair $(\overline{C}, \tau)$, where $\overline{C}$ is a $(D_1$-$)$ instanton-type chain complex and $\tau: \overline{C} \to \overline{C}$ is a grading-preserving, $\mathbb{Z}_2[y^{\pm 1}]$-linear chain map satisfying the following: - $\tau$ is filtered with respect to $\deg_I$. - $\tau$ preserves the two-step filtration; that is, it sends the subcomplex $\mathbb{Z}_2[y^{\pm 1}][-3] \subseteq \overline{C}$ to itself. We moreover require that $\tau$ act as the identity on $\mathbb{Z}_2[y^{\pm 1}][-3]$. - $\tau$ is a homotopy involution; that is, there exists a $\mathbb{Z}_2[y^{\pm 1}]$-linear map $$H : \overline{C}_* \to \overline{C}_{*+1}$$ such that $$\overline{d} H + H \overline{d} = \tau^2+\operatorname{id}.$$ We require $H$ to be filtered with respect to both $\deg_I$ and the two-step filtration. Note that in the $D_1$-setting, the action of $\tau$ always fixes $\theta$, whereas in the $D_2$-setting, we may have $\tau \theta = \xi + \theta$ for some $\xi \in C$. Conversely, in the $D_1$-setting the action of $\tau$ may send a chain in $C$ to a chain which is supported by $\theta$, whereas in the $D_2$-setting, the action of $\tau$ preserves $C$.
Once again, the dual of a ($D_2$-) involutive complex may be given the structure of a ($D_1$-) involutive complex, and $$\overline{C}(Y) \cong \underline{C}(-Y)^*$$ as involutive complexes. ### Local triviality We have the following analog of Lemma [Lemma 38](#lem:3.18){reference-type="ref" reference="lem:3.18"}: **Lemma 50**. *Let $(\underline{C}, \tau)$ be an involutive complex. Then:* - *There always exists a filtered equivariant local map from $(\underline{C}, \tau)$ to the trivial complex $(\mathbb{Z}_2[y^{\pm 1}][-3], \operatorname{id})$.* - *There exists a filtered equivariant local map from $(\mathbb{Z}_2[y^{\pm 1}][-3], \operatorname{id})$ to $(\underline{C}, \tau)$ if and only if $r_0(\underline{C}, \tau) = \infty$.* *Proof.* For the first part of the lemma, observe that the quotient map $$\pi \colon \underline{C} \rightarrow \underline{C}/C \cong \mathbb{Z}_2[y^{\pm 1}][-3]$$ still constitutes the desired local map. The fact that $\tau$ acts as the identity on $\underline{C}/C$ shows that $\pi$ is equivariant. The second part of the lemma follows as in Lemma [Lemma 38](#lem:3.18){reference-type="ref" reference="lem:3.18"}, noting that all cycles in this context are equivariant. ◻ ## Enriched involutive complexes {#sec:4.4} The formalism discussed so far adequately reflects the analytic situation in the setting where no perturbation of the Chern-Simons functional is needed. In general, however, several technical modifications to the preceding sections are required. Firstly, it turns out that the action of $\tau$ may not quite be filtered with respect to $\deg_I$. This necessitates the following mild modification of Definition [Definition 39](#def:4.1){reference-type="ref" reference="def:4.1"} in which $\tau$ and all related homotopies are only required to be level-$\delta$ maps: **Definition 51**.
A *level-$\delta$ involutive instanton-type complex* is a pair $(\underline{C}, \tau)$, where $\underline{C}$ is an instanton-type complex and $$\tau: \underline{C} \to \underline{C}$$ is a level-$\delta$ morphism satisfying the following: - The induced map on the quotient $$\tau \colon \underline{C}/C \cong \mathbb{Z}_2[y^{\pm 1}][-3] \rightarrow \underline{C}/C \cong \mathbb{Z}_2[y^{\pm 1}][-3]$$ is the identity. Note here that we use the same isomorphism, afforded by Definition [Definition 20](#def:3.1){reference-type="ref" reference="def:3.1"}, in both the domain and the range. - $\tau$ is a homotopy involution; that is, there exists a $\mathbb{Z}_2[y^{\pm 1}]$-linear map $$H : \underline{C}_* \to \underline{C}_{*+1}$$ such that $$\underline{d} H + H \underline{d} = \tau^2+\operatorname{id}.$$ We require $H$ to be filtered with respect to the two-step filtration and have level $\delta$ with respect to $\deg_I$. A level-zero involutive complex is of course an involutive complex in the previous sense of Definition [Definition 39](#def:4.1){reference-type="ref" reference="def:4.1"}. We may still speak of equivariant morphisms between involutive instanton-type complexes of different levels. For example, let $(\underline{C}_1, \tau_1)$ be a level-$\delta_1$ involutive complex and $(\underline{C}_2, \tau_2)$ be a level-$\delta_2$ involutive complex. A level-$\delta$ equivariant morphism $$f \colon \underline{C}_1 \rightarrow \underline{C}_2$$ is still simply a level-$\delta$ morphism in the sense of Definition [Definition 25](#def:3.5){reference-type="ref" reference="def:3.5"} which commutes with $\tau_1$ and $\tau_2$ up to a level-$\delta$ homotopy $H$. Note that the parameter $\delta$ is independent of $\delta_1$ and $\delta_2$, even though the homotopies which make $\tau_1$ and $\tau_2$ into homotopy involutions have levels $\delta_1$ and $\delta_2$.
In practice, it will not really be crucial to keep track of the different level shifts; here, we are explicit for the sake of completeness. More importantly, as discussed in Remark [Remark 22](#rem:3.2){reference-type="ref" reference="rem:3.2"}, instead of associating a single involutive complex to $(Y, \tau)$, we will need to associate a sequence of complexes which represents taking a sequence of perturbations converging to zero. The following definition captures this notion: **Definition 52**. An *enriched involutive instanton complex* $\underline{\mathfrak{E}}_\tau$ is a sequence of involutive instanton complexes (of varying levels) $$(\underline{C}_i,\underline{d}_i, \tau_i) \text{ of level } \delta_i$$ for $i \in \mathbb{Z}^{\geq 0}$, together with a sequence of equivariant local maps (of varying levels) $$\psi^j_i : (\underline{C}_i,\underline{d}_i, \tau_i) \to (\underline{C}_j,\underline{d}_j, \tau_j) \text{ of level } \delta_{i,j}$$ for $i, j \in \mathbb{Z}^{\geq 0}$, satisfying the following conditions: - (Clustering condition): The $\deg_I$-levels of homogeneous chains in the $\underline{C}_i$ cluster around a discrete subset of $\mathbb R$. More precisely, we require that there exists a discrete subset $\mathfrak{K} \subseteq \mathbb R$ such that for any $\delta>0$, there exists $N$ such that for all $i > N$ and all homogeneous $\zeta \in \underline{C}_i$, $$\deg_I(\zeta) \in B_\delta(\mathfrak{K}).$$ Note that necessarily $0 \in \mathfrak{K}$. - (Composition of maps): Each $\psi_i^i$ is the identity and each $\psi_j^k \circ \psi_i^j$ is homotopic to $\psi^k_i$ via a homotopy of level $\smash{\delta_{i,j,k}}$. - (Perturbations converging to zero): We have $$\delta_i \rightarrow 0, \quad \delta_{i,j} \rightarrow 0, \quad \text{and} \quad \delta_{i,j,k} \rightarrow 0$$ as $i, j, k \rightarrow \infty$. More precisely, for any $\delta > 0$, there exists $N$ such that for all $i, j, k > N$, we have $\delta_i, \delta_{i,j}, \delta_{i,j,k} < \delta$.
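The clustering condition is elementary to state computationally. The following minimal sketch checks it for a single complex in the sequence; the discrete set $\mathfrak{K}$ and the sample $\deg_I$-values below are invented purely for illustration.

```python
# Check the clustering condition of Definition 52 for one complex: every
# deg_I-value of a homogeneous chain must lie in the delta-neighborhood
# B_delta(K) of the discrete set K.  (K and the sample values are made up.)

def clusters_around(degrees, K, delta):
    """Return True if every value in `degrees` lies within delta of some point of K."""
    return all(min(abs(x - k) for k in K) < delta for x in degrees)

K = [0.0, 1.3, 2.7]      # hypothetical critical-value set; note 0 must lie in K
assert clusters_around([0.01, 1.29, 2.71], K, delta=0.05)
assert not clusters_around([0.5], K, delta=0.05)
```

In the analytic setting, $\mathfrak{K}$ plays the role of the critical values of the unperturbed Chern-Simons functional, and the condition says that small perturbations move $\deg_I$-levels only slightly off this set.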
Usually we will suppress writing the subscripts on the differentials $\underline{d}_i$. There is an unfortunate collision of notation, in that the subscript of $\underline{C}_i$ has (up to this point) meant the $\mathbb{Z}$-grading on $\underline{C}$, while we now use it to refer to the index of $\underline{C}_i$ in a sequence of involutive complexes. However, the former will be comparatively rare moving forward; the distinction will be clear from context. One can similarly define a *$(D_1$-$)$ enriched complex* $\overline{\mathfrak{E}}_\tau$ by replacing $\underline{C}$ with $\overline{C}$. The import of Definition [Definition 52](#def:4.14){reference-type="ref" reference="def:4.14"} is that for sufficiently large indices, all of the maps and homotopies considered in Definition [Definition 52](#def:4.14){reference-type="ref" reference="def:4.14"} are "almost" filtered. We generalize the notion of homotopy equivalences and local maps: **Definition 53**. An *(enriched) homotopy equivalence* between two enriched involutive complexes $\underline{\mathfrak{E}}_\tau$ and $\underline{\mathfrak{E}}'_\tau$ is a sequence of equivariant homotopy equivalences $f_i$ and $g_i$ making $$\underline{C}_i \simeq \underline{C}'_i \text{ of level } \epsilon_i$$ for $i \in \mathbb{Z}^{\geq 0}$, such that $\epsilon_i \rightarrow 0$ as $i \rightarrow \infty$. We require $f_j \psi_i^j \simeq (\psi')_i^j f_i$ via a chain homotopy whose level goes to zero as $i, j \rightarrow \infty$, and similarly for the $g_i$. **Definition 54**. An *(enriched) local map* from an enriched involutive complex $\underline{\mathfrak{E}}_\tau$ to $\underline{\mathfrak{E}}'_\tau$ is a sequence of equivariant local maps $$\lambda_i :\underline{C}_i \to \underline{C}'_i \text{ of level } \epsilon_i$$ for $i \in \mathbb{Z}^{\geq 0}$, such that $\epsilon_i \rightarrow 0$ as $i \rightarrow \infty$.
If enriched local maps in both directions exist, then we say $\underline{\mathfrak{E}}_\tau$ and $\underline{\mathfrak{E}}'_\tau$ are *(enriched) locally equivalent*. Note that we do not impose any commutation requirement with the $\psi_i^j$. Finally, we define the tensor product and dualization operations. The following are easily checked to be enriched complexes: **Definition 55**. Let $\underline{\mathfrak{E}}_\tau$ and $\underline{\mathfrak{E}'}_\tau$ be two enriched complexes. Then we define $$\underline{\mathfrak{E}}_\tau \otimes \underline{\mathfrak{E}}_\tau' = \{ \underline{C}_i\otimes \underline{C}'_i, \tau\otimes \tau' \}$$ with the following data: - the maps $\psi^j_i \otimes (\psi')^j_i$ - the discrete set $\mathfrak{K}^\otimes$ is given by $$\mathfrak{K}^\otimes = \{ r_1 + r_2 | r_1 \in \mathfrak{K}, r_2 \in \mathfrak{K}'\}.$$ - various homotopies obtained as tensor products of the homotopies from $\underline{\mathfrak{E}}_\tau$ and $\underline{\mathfrak{E}'}_\tau$. **Definition 56**. Let $\overline{\mathfrak{E}}_\tau$ be a $(D_1$-$)$ enriched complex. Then we define $$\overline{\mathfrak{E}}_\tau^* = \{ (\overline{C}_i)^* , \tau^* \}$$ with the following data: - the maps $(\psi^j_i)^*$ - the discrete set $\mathfrak{K}^*$ is given by $$\mathfrak{K}^* = \{ -r | r \in \mathfrak{K}\}.$$ - various homotopies obtained as duals of the homotopies from $\overline{\mathfrak{E}}_\tau$. ## The involutive $r_s$-invariant for an enriched complex {#sec:4.5} We now define the involutive $r_s$-invariant for an enriched complex. Note that if $(\underline{C}, \tau)$ is a level-$\delta$ involutive complex with $\delta > 0$, then $\tau$ does *not* induce an automorphism of $\underline{C}^{[r, s]}$ and hence Definition [Definition 42](#def:4.4){reference-type="ref" reference="def:4.4"} is not quite valid. We thus need the following important lemma: **Lemma 57**. *Let $\underline{\mathfrak{E}}_\tau$ be an enriched involutive complex. Fix any $r, s \in [-\infty, \infty] \setminus \mathfrak{K}$.
Then for all $i$ sufficiently large, $\tau_i$ induces a homotopy involution $$\tau_i^{[r, s]} \colon \underline{C}_i^{[r, s]} \rightarrow \underline{C}_i^{[r, s]}.$$ Moreover, the equivariant chain homotopy type of $$(\underline{C}_i^{[r,s]}, \tau_i^{[r, s]})$$ is independent of $i$ for $i$ sufficiently large.* *Proof.* Let $r \in [-\infty, \infty] \setminus \mathfrak{K}$. Then $\tau_i$ induces a map $$\label{eq:taui} \tau_i \colon \underline{C}_i^{[-\infty, r]} \rightarrow \underline{C}_i^{[-\infty, r + \delta_i]}.$$ Let $\delta = d(r, \mathfrak{K}) > 0$. By the clustering condition, we know that for $i$ sufficiently large, every homogeneous chain in $\underline{C}_i$ lies within distance $\delta/2$ of $\mathfrak{K}$ and hence is at least distance $\delta/2$ from $r$. In this situation, it follows that $$\smash{\underline{C}_i^{[-\infty, r]} = \underline{C}_i^{[-\infty, r - \delta/2]}}.$$ If moreover $\delta_i < \delta/2$, then clearly the image of ([\[eq:taui\]](#eq:taui){reference-type="ref" reference="eq:taui"}) lies in $\smash{\underline{C}_i^{[-\infty, r]}}$. For $i$ sufficiently large, we thus have that $\tau_i$ may be considered as a map $$\tau_i \colon \underline{C}_i^{[-\infty, r]} \rightarrow \underline{C}_i^{[-\infty, r]}.$$ Note that $\tau_i$ is a homotopy involution; by taking $i$ sufficiently large, we may likewise assume that the homotopy in question sends $\smash{\underline{C}_i^{[-\infty, r]}}$ to itself. Since $$\underline{C}_i^{[r, s]} = \underline{C}_i^{[- \infty, s]}/\underline{C}_i^{[- \infty, r]},$$ a similar argument for $s$ gives the desired construction of $\smash{\tau_i^{[r, s]}}$. The second part of the lemma is proven similarly. From Definition [Definition 52](#def:4.14){reference-type="ref" reference="def:4.14"}, we have equivariant morphisms $$\psi_i^j : \underline{C}_i \to \underline{C}_j$$ such that $\psi_j^i \psi_i^j \simeq \operatorname{id}$ and $\psi_i^j \psi_j^i \simeq \operatorname{id}$.
Although these morphisms and homotopies are not filtered, by taking $i$ and $j$ sufficiently large and applying the same argument as the previous paragraph, we may assume that they in fact map $\underline{C}_i^{[r, s]}$ and $\underline{C}_j^{[r, s]}$ to themselves/each other. ◻ For $i$ sufficiently large, we thus obtain a complex $\underline{C}_i^{[r, s]}$ with a well-defined involution $\tau_i^{[r, s]}$. (At least under the assumption that $r, s \in [-\infty, \infty] \setminus \mathfrak{K}$.) Moreover, the homotopy type of this pair stabilizes as $i \rightarrow \infty$. We denote this stable value by: **Definition 58**. Let $\underline{\mathfrak{E}}_\tau$ be an enriched involutive complex. For $r, s \in [-\infty, \infty] \setminus \mathfrak{K}$, define the equivariant chain homotopy type $$\underline{\mathfrak{E}}_\tau^{[r, s]} = (\underline{C}_i^{[r, s]}, \tau_i^{[r, s]})$$ for $i$ sufficiently large as in Lemma [Lemma 57](#lem:4.19){reference-type="ref" reference="lem:4.19"}. It will be helpful to have the following lemma: **Lemma 59**. *Let $\underline{\mathfrak{E}}_\tau$ be an enriched involutive complex. Suppose $$[r, r'] \subseteq [-\infty, \infty] \setminus \mathfrak{K}\quad \text{and} \quad [s, s'] \subseteq [-\infty, \infty] \setminus \mathfrak{K}.$$ Then $$\underline{\mathfrak{E}}_\tau^{[r, s]} \simeq \underline{\mathfrak{E}}_\tau^{[r', s']}.$$* *Proof.* The same argument as in Lemma [Lemma 57](#lem:4.19){reference-type="ref" reference="lem:4.19"} shows that for $i$ sufficiently large, we have $$\underline{C}_i^{[- \infty, r]} = \underline{C}_i^{[- \infty, r']}.$$ The analogous observation for $s$ and $s'$ gives the claim. ◻ Note that $\underline{\mathfrak{E}}_\tau^{[r, s]}$ inherits a two-step filtration, since all of our maps are filtered with respect to the two-step filtration on each $\underline{C}_i$. 
If $r < 0 < s$, this means that we still have the notion of an equivariant $\theta$-supported cycle in $\smash{\underline{\mathfrak{E}}_\tau^{[r, s]}}$. As before, we use this to define the $r_s$-invariant: **Definition 60**. Let $\underline{\mathfrak{E}}_\tau$ be an enriched involutive complex and $s \in [- \infty, 0]$. If $-s \notin \mathfrak{K}$, define $$r_s( \underline{\mathfrak{E}}_\tau) = -\inf_{r < 0 \text{ and } r \notin \mathfrak{K}} \{ r | \text{ there exists an equivariant $\theta$-supported cycle in }\mathfrak{E}_\tau^{[r, -s]} \} \in [0,\infty],$$ with the caveat that if the above set is empty, we set $r_s(\underline{\mathfrak{E}}_\tau) = - \infty$. For $-s \in \mathfrak{K}$, we define $$r_s( \underline{\mathfrak{E}}_\tau) = \lim_{t\to s^-}r_t( \underline{\mathfrak{E}}_\tau).$$ Note that the right-hand side is eventually constant due to Lemma [Lemma 59](#lem:4.21){reference-type="ref" reference="lem:4.21"}. By Lemma [Lemma 59](#lem:4.21){reference-type="ref" reference="lem:4.21"}, it is clear that $r_s(\underline{\mathfrak{E}}_\tau)$ is valued in $-\mathfrak{K}\cup \{\pm \infty\}$. Moreover, it is not hard to see that (as a function of $s$) $r_s$ is continuous from the right and is constant on each connected component of $[- \infty, 0] \setminus - \mathfrak{K}$. Note that due to Lemma [Lemma 59](#lem:4.21){reference-type="ref" reference="lem:4.21"}, we may exclude any discrete collection of points from the infimum in the definition of $\smash{r_s( \underline{\mathfrak{E}}_\tau)}$ without changing its value. The reader may check that if $\underline{\mathfrak{E}}_\tau$ consists of a constant sequence of level-zero involutive complexes, then Definition [Definition 60](#def:4.22){reference-type="ref" reference="def:4.22"} coincides with Definition [Definition 42](#def:4.4){reference-type="ref" reference="def:4.4"}. 
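As a sanity check on Definition 42 (and hence, via constant sequences, Definition 60), one can compute the involutive $r_s$-invariant of a tiny complex by brute force. The sketch below is only loosely modeled on Figure 4(a), with invented data and invented filtration conventions: two generators $\theta$ (with $\deg_I = 0$) and $x$ (with $\deg_I = -\beta$), zero differential, $\tau(\theta) = \theta + x$, and $\tau(x) = x$. Since the differential vanishes, homology classes are just chains, and under these conventions one finds $r_s = \beta$ for every $s$.

```python
# Brute-force involutive r_s computation for an invented two-generator
# complex over Z/2 with zero differential.  Chains are frozensets of
# generator names; passing to the quotient C^[r, -s] kills generators of
# deg_I <= r (an invented convention for this toy model).

beta = 0.6

def surviving(chain, r):
    """Image of a chain in the quotient killing generators of deg_I <= r."""
    deg = {"theta": 0.0, "x": -beta}
    return frozenset(g for g in chain if deg[g] > r)

def tau(chain):
    """tau(theta) = theta + x, tau(x) = x, extended additively mod 2."""
    out = set()
    for g in chain:
        out ^= {"theta", "x"} if g == "theta" else {g}
    return frozenset(out)

def has_equivariant_theta_cycle(r):
    # With d = 0, the theta-supported cycles are exactly theta and theta + x.
    for c in [frozenset({"theta"}), frozenset({"theta", "x"})]:
        if surviving(tau(c), r) == surviving(c, r):
            return True
    return False

# r_s = -inf { r < 0 : an equivariant theta-supported cycle exists }
grid = [i / 100 - 1.0 for i in range(100)]        # sample r in [-1, 0)
admissible = [r for r in grid if has_equivariant_theta_cycle(r)]
r_s = -min(admissible)
assert abs(r_s - beta) < 0.011                    # r_s is (approximately) beta
```

For $r$ below $-\beta$ the generator $x$ survives the quotient and $\tau$ moves every $\theta$-supported class, while for $r$ above $-\beta$ it is killed and $\theta$ itself becomes equivariant; the infimum therefore sits at the $\deg_I$-level of $x$, mirroring how the jumps of $r_s$ are tied to the discrete set $\mathfrak{K}$.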
We have the analogs of Lemmas [Lemma 45](#lem:4.7){reference-type="ref" reference="lem:4.7"} and [Lemma 48](#lem:4.10){reference-type="ref" reference="lem:4.10"}: **Lemma 61**. *[\[lem:4.23\]]{#lem:4.23 label="lem:4.23"} If there is an enriched local map from $\underline{\mathfrak{E}}_\tau$ to $\underline{\mathfrak{E}}'_\tau$, then $$r_s( \underline{\mathfrak{E}}_\tau) \leq r_s( \underline{\mathfrak{E}}'_\tau)$$ for every $s \in [-\infty, 0]$.* *Proof.* Fix any $r, s\in [-\infty, \infty] \setminus (\mathfrak{K}\cup \mathfrak{K}')$. Let $i$ be large enough so that $$\mathfrak{E}_\tau^{[r, s]} = (\underline{C}_i^{[r, s]}, \tau_i^{[r, s]}) \quad \text{and} \quad (\mathfrak{E}'_\tau)^{[r, s]} = ((\underline{C}'_i)^{[r, s]}, (\tau'_i)^{[r, s]}).$$ By the same argument as in the proof of Lemma [Lemma 57](#lem:4.19){reference-type="ref" reference="lem:4.19"}, by increasing $i$ we may in fact assume that $\lambda_i$ maps $$\lambda_i \colon \underline{C}_i^{[r, s]} \rightarrow (\underline{C}'_i)^{[r, s]}.$$ Thus, if $\mathfrak{E}_\tau^{[r, s]}$ admits an equivariant $\theta$-supported cycle, then $\smash{(\mathfrak{E}'_\tau)^{[r, s]}}$ admits an equivariant $\theta$-supported cycle. It follows that if $-s \in [-\infty, \infty] \setminus (\mathfrak{K}\cup \mathfrak{K}')$, then $$r_s( \underline{\mathfrak{E}}_\tau) \leq r_s( \underline{\mathfrak{E}}'_\tau).$$ Here, there is a slight technicality: we have assumed $r \in [- \infty, \infty] \setminus (\mathfrak{K}\cup \mathfrak{K}')$ throughout, but in the definition of $r_s$ on each side of the above inequality, $r$ ranges over $[-\infty, \infty] \setminus \mathfrak{K}$ or $[-\infty, \infty] \setminus \mathfrak{K}'$, respectively. However, due to Lemma [Lemma 59](#lem:4.21){reference-type="ref" reference="lem:4.21"}, we may exclude any discrete collection of points from the infimum in the definition of $\smash{r_s( \underline{\mathfrak{E}}_\tau)}$ without changing its value. 
For $-s \in \mathfrak{K} \cup \mathfrak{K}'$, a limiting argument with $t \rightarrow s^-$ gives the same inequality. ◻ **Lemma 62**. *[\[lem:4.24\]]{#lem:4.24 label="lem:4.24"} Let $\underline{\mathfrak{E}}_\tau$ and $\underline{\mathfrak{E}'}_\tau$ be two enriched complexes. For any $s$ and $s'$ in $[-\infty, 0]$, we have $$r_{s+s'}( \underline{\mathfrak{E}}_\tau \otimes \underline{\mathfrak{E}}'_\tau) \geq \min \{r_{s} (\underline{\mathfrak{E}}_\tau) + s' , r_{s'} (\underline{\mathfrak{E}}'_\tau) + s\}.$$* *Proof.* First suppose $$r, s \in [-\infty, \infty] \setminus \mathfrak{K}, \quad r', s' \in [-\infty, \infty] \setminus \mathfrak{K}', \quad \text{and} \quad r + s', s + r', s + s' \in [-\infty, \infty] \setminus \mathfrak{K}^\otimes.$$ By taking $i$ sufficiently large, we obtain the conclusion of Lemma [Lemma 47](#lem:4.9){reference-type="ref" reference="lem:4.9"} with $\underline{C}$ and $\underline{C}'$ replaced by $\underline{\mathfrak{E}}_\tau$ and $\underline{\mathfrak{E}}'_\tau$. This gives the desired inequality when $s, s'$, and $s + s'$ are not in $\mathfrak{K}\cup \mathfrak{K}' \cup \mathfrak{K}^\otimes$; otherwise, we apply a limiting argument as in Lemma [\[lem:4.23\]](#lem:4.23){reference-type="ref" reference="lem:4.23"}. ◻ For completeness, we record the following formal definition: **Definition 63**. Let $$\Theta^\mathfrak{E}_\tau = \{\text{all enriched complexes}\}/\text{local equivalence}.$$ By Lemma [\[lem:4.23\]](#lem:4.23){reference-type="ref" reference="lem:4.23"}, $r_s$ defines a function $$r_s: \Theta^\mathfrak{E}_\tau \to [0, \infty].$$ The operation of $\otimes$ makes $\Theta^\mathfrak{E}_\tau$ into a commutative monoid, with the identity element being the trivial enriched complex $\mathfrak{T}$ consisting of the constant sequence $\mathbb{Z}_2[y^{\pm 1}][-3]$ and each $\psi^j_i = \operatorname{id}$. We call $\smash{\Theta^\mathfrak{E}_\tau}$ the *(enriched) local equivalence monoid*.
Finally, we have the analog of Lemma [Lemma 50](#lem:4.12){reference-type="ref" reference="lem:4.12"}: **Lemma 64**. *Let $\underline{\mathfrak{E}}_\tau$ be an enriched complex. Then:* - *There always exists an enriched local map from $\underline{\mathfrak{E}}_\tau$ to $\mathfrak{T}$.* - *There exists an enriched local map from $\mathfrak{T}$ to $\underline{\mathfrak{E}}_\tau$ if and only if $r_0(\underline{\mathfrak{E}}_\tau) = \infty$.* *Proof.* The first part of the lemma is obvious, as the sequence of filtered local maps $$\pi_i \colon \underline{C}_i \rightarrow \underline{C}_i/C_i \cong \mathbb{Z}_2[y^{\pm 1}][-3]$$ gives the claim. For the second part of the lemma, assume that there is an enriched local map from $\mathfrak{T}$ to $\underline{\mathfrak{E}}_\tau$. This means that we have a sequence of equivariant local maps $$\lambda_i \colon \mathbb{Z}_2[y^{\pm 1}][-3] \rightarrow \underline{C}_i \text{ of level } \epsilon_i.$$ Then $\lambda_i(1)$ is an equivariant $\theta$-supported cycle in $\underline{C}_i^{[-\infty, \epsilon_i]}$ for each $i$. Now fix any $-s \notin \mathfrak{K}$. Since in particular $-s > 0$, there is some $i$ such that $\epsilon_i < -s$. For $i$ sufficiently large, we thus obtain an equivariant $\theta$-supported cycle in $$\underline{\mathfrak{E}}_\tau^{[-\infty, -s]} = \underline{C}_i^{[-\infty, -s]}.$$ This shows $r_s(\underline{\mathfrak{E}}_\tau) = \infty$ for all such $s$, and hence $r_0(\underline{\mathfrak{E}}_\tau) = \infty$ by Definition [Definition 60](#def:4.22){reference-type="ref" reference="def:4.22"}. Conversely, assume $r_0(\underline{\mathfrak{E}}_\tau) = \infty$. Then we have a sequence $t_k \rightarrow 0^-$ with each $t_k \notin \mathfrak{K}$ and $r_{t_k}(\underline{\mathfrak{E}}_\tau) = \infty$. 
For each $k$, select an index $i_k$ sufficiently large such that $$\underline{\mathfrak{E}}_\tau^{[-\infty, t_k]} = \underline{C}_{i_k}^{[-\infty, t_k]}$$ as in Definition [Definition 58](#def:4.20){reference-type="ref" reference="def:4.20"}. Then there is an equivariant $\theta$-supported cycle in $\smash{\underline{C}_{i_k}^{[-\infty, t_k]}}$. Construct an equivariant local map $$\lambda_{i_k} \colon \mathbb{Z}_2[y^{\pm 1}][-3] \rightarrow \underline{C}_{i_k} \text{ of level } t_k$$ by setting $\lambda_{i_k}(1)$ equal to this cycle. This partially defines an enriched local map from $\mathfrak{T}$ to $\underline{\mathfrak{E}}_\tau$, in the sense that we have defined local maps $\lambda_{i_k}$ for all $k$. Without loss of generality, we may assume the $i_k$ form an increasing sequence $\mathfrak{S}$. To define a local map $\lambda_i$ for every $i$, recall that we have local maps $$\psi_i^j \colon \underline{C}_i \rightarrow \underline{C}_j \text{ of level } \delta_{i,j}.$$ For arbitrary $i$, let $i_k$ thus be the greatest element of $\mathfrak{S}$ which is less than or equal to $i$ and set $$\lambda_i = \psi_{i_k}^i \circ \lambda_{i_k}.$$ This is an equivariant local map of level $t_k + \delta_{i_k, i}$. Since $t_k \rightarrow 0$ and the $\delta_{i,j} \rightarrow 0$, it is clear that the maps $\lambda_i$ constitute an enriched local map, as desired. ◻ # The analytic construction {#sec:5} In this section, we review the construction of instanton Floer homology and show that a homology sphere equipped with an involution gives rise to an enriched involutive complex. We then discuss equivariant cobordisms and give some results involving connected sums. ## Notation {#sec:5.1} We begin with some notation. Our discussion here is from [@NST19 Section 2]. ### Holonomy perturbations {#sec:5.1.1} Let $Y$ be an oriented integer homology $3$-sphere.
Denote: - the product $SU(2)$-bundle by $P_Y$, - the product connection on $P_Y$ by $\theta$; and, - the set of $SU(2)$-connections on $P_Y$ by $\mathcal A(Y)$. We fix a preferred trivialization of $P_Y$ in order to form the product connection $\theta$. Recall that the *gauge group* $\mathrm{Map}(Y, SU(2))$ is the set of smooth maps from $Y$ into $SU(2)$. We have the usual gauge action of $\mathrm{Map}(Y, SU(2))$ on $\mathcal A(Y)$ given by $$a\cdot g = g^{-1} dg + g^{-1} ag.$$ The *degree-zero gauge group* is the subgroup $\mathrm{Map}_0(Y, SU(2))$ of $\mathrm{Map}(Y, SU(2))$ consisting of gauge transformations with mapping degree zero. In this paper, we consider the quotient of $\mathcal A(Y)$ by the degree-zero gauge group, rather than the full gauge group. Note that the former is in $\mathbb{Z}$-to-$1$ correspondence with the latter. Denote: - $\smash{\widetilde{B}(Y) = \mathcal A(Y) /\mathrm{Map}_0(Y, SU(2))}$; and, - $\widetilde{B}^*(Y) =\{[a] \in \widetilde{B}(Y) \ | \ a \text{ is irreducible} \}$. Here, a connection is said to be *irreducible* if its stabilizer under the action of $\mathrm{Map}_0(Y, SU(2))$ consists of the two constant gauge transformations $\pm I$. Given a fixed trivialization of $P_Y$, the *Chern-Simons functional on $\mathcal A(Y)$* is the map $cs_Y$ from $\mathcal A(Y)$ to $\mathbb R$ defined by $$cs_Y(a) = \frac{1}{8\pi^2}\int_Y \operatorname{Tr}\left(a\wedge da +\frac{2}{3}a\wedge a\wedge a\right).$$ It is a standard fact that $$\label{eq:csgauge} cs_Y(a \cdot g) - cs_Y(a)= \text{deg} (g)$$ for any $g \in \mathrm{Map}(Y,SU(2))$, where $\text{deg} (g)$ is the degree of $g$. Thus $cs_Y$ descends to a map $$cs_Y \colon \widetilde{B}(Y)\rightarrow\mathbb R,$$ which we also denote by $cs_Y$. We write $\mathfrak{K}_Y$ for the set of critical values of $cs_Y$. **Definition 65**.
For any $d \in \mathbb{Z}^{\geq 0}$ and fixed $l \gg 2$, define the set of orientation-preserving embeddings of $d$ disjoint solid tori into $Y$: $$\mathcal{F}_d= \left\{ (f_i \colon S^1\times D^2 \hookrightarrow Y )_{i = 1}^d \ \middle| \ f_i(S^1 \times D^2) \text{ are mutually disjoint} \right\}$$ and denote by $C_{\mathrm{ad}}^{l}(SU(2)^d,\mathbb R)$ the set of adjoint-invariant $C^l$-class functions on $SU(2)^d$. The *set of holonomy perturbations on $Y$* is defined by $$\mathcal{P}(Y)= \bigcup_{d \in \mathbb{Z}^{\geq 0}}\mathcal{F}_d\times C_{\mathrm{ad}}^{l}(SU(2)^d,\mathbb R).$$ A holonomy perturbation gives rise to a perturbation of $cs_Y$, constructed as follows: **Definition 66**. Fix a 2-form $d\mathcal{S}$ on $D^2$ supported in the interior of $D^2$ with $\int_{D^2}d\mathcal{S}=1$. For any $\pi = (f,h)\in \mathcal{P}(Y)$, we define the *$\pi$-perturbed Chern-Simons functional* $$cs_{Y,\pi} \colon\widetilde{\mathcal B}(Y) \rightarrow\mathbb R$$ by $$cs_{Y,\pi}(a)= cs_Y(a) + h_\pi(a) = cs_Y(a) + \int_{x \in D^2} h(\mathrm{hol}_{f_1(-,x)}(a), \dots, \mathrm{hol}_{f_d(-,x)}(a)) d\mathcal{S},$$ where $\mathrm{hol}_{f_i(-,x)}(a)$ is the holonomy around the loop $s \mapsto f_i(s,x)$ for each $1 \leq i \leq d$. We denote $\|h\|_{C^l}$ by $\|\pi\|$. ### Gradient-line trajectories {#sec:5.1.2} The gradient-line equation of $cs_{Y,\pi}$ with respect to the $L^2$-metric is given by: $$\begin{aligned} \label{grad} \frac{\partial}{\partial t} a_t=\operatorname{grad}_{a_t} cs_{Y,\pi} = *_{g_Y}F(a_t) + \operatorname{grad}_{a_t} h_\pi ,\end{aligned}$$ where $*_{g_Y}$ is the Hodge star operator. For the precise form of $\operatorname{grad}_{a_t} h_\pi$, see for instance [@NST19 Section 2.1.3].
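For orientation, it may help to record the unperturbed special case explicitly (a standard observation, not used elsewhere in this paper): setting $h_\pi = 0$ in the discussion above,

```latex
% With h_pi = 0, stationary points of the flow (grad) are exactly the
% flat connections on Y:
*_{g_Y} F(a) = 0 \quad \Longleftrightarrow \quad F(a) = 0,
% and gradient-lines correspond to connections A on R x Y solving the
% unperturbed ASD equation
F^+(A) = 0.
```

The perturbed setting of the next definition recovers this picture when the perturbation vanishes.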
Let $$\widetilde{R}_\pi(Y) = \left\{a \in \widetilde{\mathcal B}(Y) \Biggm | *_{g_Y}F(a)+\operatorname{grad}_a h_\pi =0 \right\}$$ and $$\widetilde{R}_\pi^*(Y) = \widetilde{R}_\pi(Y) \cap \widetilde{\mathcal B}^*(Y).$$ We furthermore assume that $\pi$ has been chosen such that $\widetilde{R}_\pi(Y) = \widetilde{R}_\pi^*(Y) \cup ([\theta] \times \mathbb{Z})$, i.e. we shall take a perturbation $h$ so that $h$ vanishes in a neighborhood of $(\operatorname{id}, \ldots, \operatorname{id})$. **Definition 67**. The solutions of [\[grad\]](#grad){reference-type="eqref" reference="grad"} correspond to $SU(2)$-connections $A$ on the trivial $SU(2)$-bundle over $\mathbb R\times Y$ which satisfy the *perturbed ASD equations*: $$\begin{aligned} \label{pASD} F^+(A)+ \pi^+(A) = 0,\end{aligned}$$ where $\pi(A)$ is a particular $\mathfrak{su}(2)$-valued $2$-form over $Y \times \mathbb R$. See [@NST19 Section 2.1.3] for the explicit form of $\pi(A)$. The superscript $+$ denotes the self-dual part of a 2-form with respect to the product metric on $\mathbb R\times Y$; that is, $(1+*)/2$ where $*$ is the Hodge star operator. Fix a holonomy perturbation $\pi$ on $Y$. For $a$ and $b$ in $\widetilde{R}^*_\pi(Y)$, define the *moduli space of trajectories* $M_\pi(a,b)$ as follows. Let $A_{a,b}$ be an $SU(2)$-connection on $Y \times \mathbb R$ satisfying $$A_{a,b}|_{Y\times (-\infty,-1]}=p^*a \quad \text{and} \quad A_{a,b}|_{Y\times [1,\infty)}=p^*b,$$ where $p$ is the projection $\mathbb R\times Y \rightarrow Y$.
Then we define $$\begin{aligned} \label{*} M_\pi(a,b) =\left\{A = A_{a,b}+c \ \middle | \ c \in \Omega^1(\mathbb R\times Y)\otimes \mathfrak{su}(2)_{L^2_q}\text{ with } A \text{ satisfying } \eqref{pASD} \right\}/ \mathcal G(a,b),\end{aligned}$$ where $\mathcal G(a,b)$ is given by $$\mathcal G(a,b)=\left\{ g \in \operatorname{Aut}(P_{\mathbb R\times Y})\subset { \Gamma (\mathbb R\times Y; \underline{\operatorname{End}(\mathbb{C}^2)}) }_{L^2_{q+1,\text{loc}}} \ \middle | \ \nabla_{A_{a,b}}(g) \in L^2_q \right\}.$$ Here, $$\|f\|^2_{L^2_q}=\sum_{0\leq j \leq q} \int_{\mathbb R\times Y} |\nabla^j_{A_{a,b}}f|^2$$ for $f \in \Omega^i(\mathbb R\times Y) \otimes \mathfrak{su}(2)$ with compact support, where $|-|$ is the norm induced by the product metric on $\mathbb R\times Y$, $q \geq 3$ and $\underline{\operatorname{End}(\mathbb{C}^2)}$ is the product bundle on $\mathbb R\times Y$ whose fiber is $\operatorname{End}(\mathbb{C}^2)$. The action of $g \in \mathcal G(a,b)$ on the numerator of ([\[\*\]](#*){reference-type="ref" reference="*"}) is given by pulling back connections along $g$. We also allow $a = \theta$ so long as $b$ is irreducible. In this setting, we construct a slightly different moduli space $\smash{M_{\pi, \delta}(\theta, b)}$ by replacing $A_{a, b}$ with a similarly-defined reference connection $A_{\theta, b}$ and using the $\smash{L^2_{q,\delta}}$-norm in ([\[\*\]](#*){reference-type="ref" reference="*"}) instead of the $L^2_q$-norm. The $\smash{L^2_{q,\delta}}$-norm is given by $$\|f\|^2_{L^2_{q,\delta}}=\sum_{0\leq j \leq q} \int_{\mathbb R\times Y}e^{\delta \sigma} |\nabla^j_{A_{\theta, b}}f|^2$$ for some small $\delta > 0$. Here, $\sigma \colon \mathbb R\times Y \rightarrow\mathbb R$ is a smooth function with $$\sigma(y, t) = \begin{cases} - t &\text{ for } t < 0\\ 0 &\text{ for } t > 1. \end{cases}$$ For convenience of notation, we continue to write $\smash{M_{\pi}(a, b)}$ in place of $\smash{M_{\pi, \delta}(\theta, b)}$ when $a = \theta$.
A similar construction holds when $a$ is a $y^i$-multiple of $\theta$, or when $a$ is irreducible and $b$ is a $y^i$-multiple of $\theta$, although we will not need the latter. In each case, we have an $\mathbb R$-action on $\smash{M_{\pi}(a, b)}$ given by translation. We say that a perturbation $\pi$ is *nondegenerate* if at each critical point of $cs_{Y, \pi}$, the formal Hessian of $cs_{Y, \pi}$ has no kernel. We say that a perturbation $\pi$ is *regular* if for any pair of critical points $a$ and $b$ and $A \in M_\pi(a,b)$, the linearization $$\begin{aligned} D_A(F^+(A)+ \pi^+(A)) : \Omega^1(\mathbb R\times Y)\otimes \mathfrak{su}(2)_{L^2_{q} } \to \Omega^+ (\mathbb R\times Y)\otimes \mathfrak{su}(2)_{L^2_{q-1}}\end{aligned}$$ is surjective, with the understanding that the $\smash{L^2_q}$- and $\smash{L^2_{q-1}}$-norms should be replaced with the $\smash{L^2_{q, \delta}}$- and $\smash{L^2_{q-1, \delta}}$-norms if $a = \theta$. For more detailed arguments involving holonomy perturbations (such as questions regarding smoothness), see [@Kr05 Proposition 7] and [@SaWe08 Proposition D.1]. ### ASD moduli spaces {#sec:5.1.3} Now let $W$ be a negative-definite, connected cobordism from $Y$ to $Y'$ with $b_1(W)=0$. Suppose $Y$ and $Y'$ are connected. Throughout, we fix nondegenerate regular holonomy perturbations $\pi$ and $\pi'$ on $Y$ and $Y'$, respectively. Let $W^*$ be the cylindrical-end $4$-manifold $$W^*= \left( (-\infty, 0] \times Y \right) \cup W \cup \left( [0, \infty) \times Y' \right).$$ Choose a metric $g_{W^*}$ on $W^*$ which coincides with the product metric on $(-\infty, 0] \times Y$ and $[0, \infty) \times Y'$. **Definition 68**.
For any $d \in \mathbb{Z}^{\geq 0}$ and fixed $l \gg 2$, define the set of orientation-preserving embeddings of $d$ disjoint copies of $S^1 \times D^3$ into $W$: $$\mathcal{F}_d(W) = \left\{ (f_i \colon S^1\times D^3 \hookrightarrow W )_{i = 1}^d \ \middle| \ f_i(S^1 \times D^3) \text{ are mutually disjoint} \right\}.$$ The *set of holonomy perturbations on $W$* is given by: $$\mathcal{P}^* (W) = \bigcup_{d \in \mathbb{Z}^{\geq 0}} \mathcal{F}_d(W) \times C^l_{\text{ad}}(SU(2), \mathbb R)^d \times C^l (\mathbb R^d, \mathbb R).$$ Given $\pi_W \in \mathcal{P}^*(W)$, one can define a perturbation $2$-form $\pi_W(A)$ of the usual ASD equations on $W$, just as in Section [5.1.2](#sec:5.1.2){reference-type="ref" reference="sec:5.1.2"}. See [@Ta22 Equation (14)] for the explicit form of $\pi_W(A)$. More generally, we consider perturbations of the ASD equations on $W^*$ taking into account a choice of holonomy perturbation on the ends: **Definition 69**. Let $\pi_W$ be a holonomy perturbation on $W$ and $\pi$ and $\pi'$ be holonomy perturbations on $Y$ and $Y'$. We define the *perturbed ASD equations on $W^*$* to be $$\label{cob} F^+(A)+ \pi_W^+(A) + \rho_- \cdot \pi^+(A) + \rho_+ \cdot (\pi')^+(A) =0.$$ Here, $\rho_\pm$ are cutoff functions on $(-\infty, 0] \times Y$ and $[0, \infty) \times Y'$, respectively, satisfying $$\begin{aligned} \rho_- ( t,y)= \begin{cases} 1 \text{ if } t<-1\\ 0 \text{ if } -\frac{1}{2} <t \leq 0 \end{cases} \text{and} \quad \rho_+ ( t,y)= \begin{cases} 0 \text{ if } 0 \leq t < \frac{1}{2} \\ 1 \text{ if } 1<t. \end{cases} \end{aligned}$$ The terms $\pi^+(A)$ and $(\pi')^+(A)$ are the gradient-line perturbations discussed in Section [5.1.2](#sec:5.1.2){reference-type="ref" reference="sec:5.1.2"}, applied to the restriction of $A$ over the ends $(-\infty, 0] \times Y$ and $[0, \infty) \times Y'$. We refer to the triple $(\pi_W, \pi, \pi')$ as a *holonomy perturbation on $W^*$ with ends $\pi$ and $\pi'$*.
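As a consistency check on this definition (immediate from the support properties of the terms involved, since the term $\pi_W^+(A)$ is supported in the interior of $W$), equation [\[cob\]](#cob){reference-type="eqref" reference="cob"} reduces to the perturbed equations of the two ends far down the cylinders:

```latex
% On (-infty, -1] x Y: rho_- = 1, rho_+ = 0 and pi_W^+(A) = 0, so (cob) reads
F^+(A) + \pi^+(A) = 0,
% while on [1, infty) x Y': rho_- = 0, rho_+ = 1, so (cob) reads
F^+(A) + (\pi')^+(A) = 0.
```

In other words, solutions are genuine perturbed trajectories near the ends, with the interpolation confined to the compact piece.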
We often only write $\pi_W$, leaving $\pi$ and $\pi'$ implicit. Note that the perturbation term $\smash{\pi_W^+(A)}$ is supported in the interior of $W$, while the terms $\smash{\rho_- \cdot \pi^+(A)}$ and $\smash{\rho_+ \cdot (\pi')^+(A)}$ are supported on the ends of $W^*$. As in the case of holonomy perturbations on $Y$, we define the norm $\| \pi_W \|$ of $\pi_W$ in terms of the induced perturbation of the ASD equations on $W$. When we speak of the norm of a holonomy perturbation on $W^*$, we will mean the maximum of $\| \pi_W \|$, $\| \pi \|$, and $\| \pi' \|$, which are defined as the $C^l$ norms of $j_W \circ h_W$, $h$ and $h'$ if $\pi_W= (g_W, h_W, j_W)$, $\pi= (f, h)$ and $\pi'=(f', h')$. Fix a holonomy perturbation on $W^*$. For $a\in \widetilde{R}_{\pi}(Y)$ and $b\in \widetilde{R}_{\pi'}(Y')$, define the *ASD moduli space* by $${M}_{\pi_W} (a,W^* ,b)= \left\{ A = A_{a,b} + c \ \middle | \ c \in \Omega^1(W^*) \otimes \mathfrak{su}(2)_{L^2_q} \text{ with } A \text{ satisfying } \eqref{cob} \right\} /\mathcal G(a,W^*, b).$$ Here, we have suppressed the data of $\pi$ and $\pi'$ in the subscript. The reference connection $A_{a,b}$ and the group $\mathcal G(a,W^*, b)$ are defined in a similar way as in the gradient-line case. As before, we allow $a$ to be a $y^i$-multiple of $\theta$, in which case we must replace the $\smash{L^2_{q}}$-norm with the $\smash{L^2_{q, \delta}}$-norm. We say that a given moduli space ${M}_{\pi_W} (a,W^* ,b)$ is *regular* if for any point $A \in {M}_{\pi_W} (a,W^* ,b)$, the linearization $$\begin{aligned} D_{A}( F^+(A)+ \pi_W^+(A) + \rho_- \pi(A)^ + + \rho_+ \pi'(A)^ + ) : &\\ \Omega^1 (W^*)\otimes \mathfrak{su}(2)_{L^2_q} \to & \Omega^+ (W^*)\otimes \mathfrak{su}(2)_{L^2_{q-1}}\end{aligned}$$ is surjective. If $a$ is reducible, then we use the weighted $\smash{L^2_{q, \delta}}$-norm instead of the $\smash{L^2_q}$-norm, as in the case of $\mathbb R\times Y$.
We will also need to consider instanton moduli spaces associated to a family of perturbations. Let $B$ be a smooth manifold with boundary or corners; we will usually have $B=[0,1]$ or $B= [0,1]^2$. For the family setting, we usually fix a collection of embeddings ${\bf f} = (f_i) \in \mathcal{F}_d(W)$. For a smooth map ${\bf h}: B\to C^l_{\text{ad}}(SU(2), \mathbb R)^d \times C^l (\mathbb R^d, \mathbb R)$, define the *moduli space with family $B$* by $${\bf M}_{ \pi_W = ({\bf f}, {\bf h} ) } (a, W^*, b) = \bigcup_{s \in B} M_{({\bf f} , {\bf h}(s) )} (a, W^*, b) ,$$ where, for each $s \in B$, we consider the moduli space of the solutions to $$F^+(A)+ \pi_{W, s}^+(A) + \rho_- \cdot \pi^+(A) + \rho_+ \cdot (\pi')^+(A) =0$$ and $\pi_{W, s}^+$ is the 4-dimensional perturbation determined by $({\bf f} ,{ \bf h}(s) )$. We say that $(s, A) \in {\bf M}_{\pi_W} (a, W^*, b)$ is *regular* if the linearization $$\begin{aligned} D_{(s,A)}( F^+(A)+ \pi_W^+ (A) + \rho_- \pi(A)^ + + \rho_+ \pi'(A)^ + ) : & \\ \Omega^1 (W^*) \otimes \mathfrak{su}(2)_{L^2_q} \oplus T_{s} B \to & \Omega^+ (W^*) \otimes \mathfrak{su}(2)_{L^2_{q-1}} \end{aligned}$$ is surjective. If every point in ${\bf M}_{\pi_W} (a, W^*, b)$ is regular, we say that ${\bf M}_{\pi_W} (a, W^*, b)$ is regular. Note that the formal dimension of ${\bf M}_{\pi_W} (a, W^*, b)$ is $\operatorname{dim}B + \operatorname{ind}(a)-\operatorname{ind}(b)$ when $a$ and $b$ are irreducible and $\operatorname{dim}B -3 -\operatorname{ind}(b)$ when $a$ is reducible. **Lemma 70**. *Let $C$ be a positive real number. Let $\pi$ and $\pi'$ be nondegenerate regular perturbations on $Y$ and $Y'$, respectively.
Let $B$ be a manifold with boundary or corners and $$\pi_{W, \partial} = ({\bf f_\partial } = (f_i)_{(1 \leq i \leq n)} , {\bf h}) : \partial B\to \mathcal{P}^* (W)$$ be a perturbation that is regular as a family for ${\bf M}_{\pi_{W, \partial} } (a, W^*, b)$ with respect to critical points $a$ and $b$ of $cs_{Y, \pi}$ and $cs_{Y', \pi'}$, respectively, satisfying $\operatorname{ind}(a) - \operatorname{ind}(b) \leq C$. (We assume one of $a$ and $b$ is irreducible.)* *Then, there is an extension $$\pi_W = ({\bf f } = (f_i)_{(1 \leq i \leq m)} , {\bf h}, {\bf h} ') : B \to \mathcal{P}^* (W)$$ such that $\pi_{W, \partial, s }^+(A) = \pi_{W, s }^+(A)$ for $s \in \partial B$ and the moduli space ${\bf M}_{\pi_W} (a, W^*, b)$ is regular for given $a$ and $b$ satisfying $\operatorname{ind}(a) - \operatorname{ind}(b) \leq C$, where $n \leq m$.* The bound $\leq C$ is not essential, but in this paper we only use finitely many components of moduli spaces, so this weaker statement suffices. (For transversality for infinitely many moduli spaces using holonomy perturbations, see [@Kr05 Definition 9].) Since every point in the moduli spaces is irreducible, the proof is essentially the same as the proof of the usual transversality argument used to prove invariance of the chain homotopy type of instanton complexes; see for example [@Kr05 Corollary 14] and [@SaWe08 Theorem 8.3]. Family ASD moduli spaces with family holonomy perturbations have also been treated in several earlier works; see, for example, [@KM11; @Sc15]. ## Instanton Floer homology {#sec:5.2} We now show that choosing a holonomy perturbation on $Y$ gives rise to an instanton-type complex in the sense of Definition [Definition 20](#def:3.1){reference-type="ref" reference="def:3.1"}. This construction is due to Donaldson and may be found in [@Do02 Section 7]. The involvement of the Chern-Simons filtration parallels the formalism of enriched instanton knot Floer theory developed in [@DS19]. ### The instanton chain complex.
{#sec:5.2.1} Let $Y$ be an oriented homology 3-sphere equipped with a Riemannian metric. Fix a nondegenerate regular holonomy perturbation $\pi$ on $Y$. We define the *(irreducible) instanton chain group* to be the formal $\mathbb{Z}_2$-span of points in $\smash{\widetilde{R}_\pi^*(Y)}$: $$C(Y, \pi) = \mathrm{span}_{\mathbb{Z}_2} \{ [a] \in \widetilde{R}_{\pi}^*(Y) \}.$$ The relative index difference $\operatorname{ind}(a) - \operatorname{ind}(b) = \operatorname{dim}M_\pi(a, b)$ remains well-defined after quotienting out by the degree-zero gauge group and hence descends to a relative $\mathbb{Z}$-grading on $C(Y, \pi)$. We convert this to an absolute $\mathbb{Z}$-grading by setting $\operatorname{ind}_- \theta = -3$[^7] and defining $\operatorname{ind}_-(\theta) - \operatorname{ind}(b) = \operatorname{dim}M_{\pi}(\theta, b)$. We thus denote $\operatorname{ind}$ by $\deg_\mathbb{Z}$. The differential is given by $$da = \sum_{\deg_\mathbb{Z}(a) - \deg_\mathbb{Z}(b) =1} \# (M_{\pi} (a, b)/\mathbb R) \cdot b,$$ extending $\mathbb{Z}_2$-linearly. When we work over $\mathbb{Z}$, we must orient each moduli space $M_{\pi} (a, b)/\mathbb R$ to obtain a signed count of points; see [@NST19 Section 2.2] for details. There is a free action of $\mathbb{Z}_2[y^{\pm 1}]$ on $C(Y, \pi)$ where $y^{\pm 1}$ represents a degree-$(\pm 1)$ gauge transformation; it is well-known that $\deg_\mathbb{Z}(y) = 8$. It is not hard to check that $d$ is in fact $\mathbb{Z}_2[y^{\pm 1}]$-equivariant and hence $C(Y, \pi)$ is $\mathbb{Z}/8\mathbb{Z}$-periodic. Finally, we also have a map $D_2: \mathbb{Z}_2[y^{\pm 1}][-3] \to C_{-4} (Y, \pi)$ given by $$D_2 (1) = \sum_{\deg_\mathbb{Z}(b) =-4} \# (M_{\pi} (\theta , b)/\mathbb R) \cdot b.$$ **Definition 71**.
Define $$\underline{C}(Y, \pi) = C(Y, \pi) \oplus \mathbb{Z}_2[y^{\pm 1}][-3] \quad \text{and} \quad \underline{d} = d + D_2.$$ We put a filtration on $C(Y, \pi)$ by setting $\deg_I(a) = cs_{\pi} (a)$; it follows from ([\[eq:csgauge\]](#eq:csgauge){reference-type="ref" reference="eq:csgauge"}) that $\deg_I(y) = 1$. Similarly, we define $\deg_I(y^k) = k$ on $\mathbb{Z}_2[y^{\pm 1}][-3]$. This gives an instanton-type complex in the sense of Definition [Definition 20](#def:3.1){reference-type="ref" reference="def:3.1"}, with $C(Y, \pi) \subseteq \underline{C}(Y, \pi)$ forming the desired subcomplex. Now let $W$ be a negative-definite cobordism from $Y$ to $Y'$ with $b_1(W) = 0$. Choose a regular holonomy perturbation $\pi_W$ on $W^*$ with ends $\pi$ and $\pi'$. We obtain a cobordism map $$F_{\pi_W} : \underline{C} (Y, \pi) \to \underline{C} (Y', \pi')$$ as follows. For any $a \in \underline{C}(Y, \pi)$, we let $$F_{\pi_W} a = \sum_{\substack{b\in \underline{C}(Y', \pi') \\ \deg_\mathbb{Z}(a) = \deg_\mathbb{Z}(b)}} \# M_{\pi_W}(a,W^*, b) \cdot b$$ with the additional convention that $$\# M_{\pi_W}(a, W^*, b) = \begin{cases} | H_1(W, \mathbb{Z}) | \operatorname{mod} 2 & \text{ if } a = \theta \text{ and } b = \theta \\ 0 & \text{ if } a \neq \theta \text{ and } b = \theta. \end{cases}$$ The case where $a = \theta$ and $b$ is irreducible was already covered in the discussion of Section [5.1.3](#sec:5.1.3){reference-type="ref" reference="sec:5.1.3"}. In order to see that $F_{\pi_W}$ is a two-step filtered chain map, we need to identify $M_{\pi_W}(\theta, W^*, \theta)$ with sufficiently perturbed flat connections over $W$. This is already done in the non-equivariant setting. See [@Da20 Section 2.2] and [@NST19] for more precise arguments.
Note that in the present formalism, we thus have $\smash{F_{\pi_W} \theta = z + | H_1(W, \mathbb{Z}) | \cdot \theta}$ for some $z \in C(Y', \pi')$, while $\theta$ does not appear in the image of any generator in $C(Y, \pi)$. It is thus clear that if $H_1(W, \mathbb{Z}_2) = 0$, then $| H_1(W, \mathbb{Z})|$ is odd, and therefore $\smash{F_{\pi_W}}$ is a local map. It can be shown that the level of $\smash{F_{\pi_W}}$ is a function of $\| \pi_W \|$ (and $\| \pi \|$ and $\| \pi' \|$), which goes to zero as $\| \pi_W \|$ (and $\| \pi \|$ and $\| \pi' \|$) go to zero. Here, as in Section [5.1.3](#sec:5.1.3){reference-type="ref" reference="sec:5.1.3"}, the norm of a holonomy perturbation on $W^*$ means the maximum of $\| \pi_W \|$, $\| \pi \|$, and $\| \pi' \|$. ### Enriched complexes. {#sec:5.2.2} Now suppose that we have chosen a sequence of nondegenerate regular holonomy perturbations $\pi_i$ on $Y$ with $\| \pi_i \| \rightarrow 0$. For any pair of perturbations in this sequence, let $\pi^j_i$ be a regular holonomy perturbation on $Y \times [0,1]^*$ with ends $\pi_i$ and $\pi_j$. Treating $Y \times [0, 1]$ as a cobordism from $Y$ to itself, this gives a local map $$\psi_i^j: \underline{C}(Y, \pi_i) \to \underline{C}(Y, \pi_j ).$$ We may also assume that $\|\pi_i^j\| \to 0$ as $i, j \rightarrow \infty$, so that the level of $\smash{\psi_i^j}$ goes to zero. Setting aside the action of $\tau$, it is shown in [@NST19 Section 2.3] that the sequence $\underline{C}(Y, \pi_i)$ together with the maps $\smash{\psi_i^j}$ defines an enriched complex. We similarly claim that a cobordism $W$ from $Y$ to $Y'$ induces a map of enriched complexes. Let $\pi_i$ and $\pi'_i$ be sequences of nondegenerate regular holonomy perturbations on $Y$ and $Y'$ with $\| \pi_i \|, \| \pi'_i \| \rightarrow 0$. For each $i$, let $(\pi_W)_i$ be a regular holonomy perturbation on $W^*$ with ends $\pi_i$ and $\pi'_i$.
Then we obtain a sequence of cobordism maps $$F_i = F_{(\pi_W)_i} : \underline{C} (Y, \pi_i) \to \underline{C} (Y', \pi'_i).$$ Moreover, we may choose $(\pi_W)_i$ such that $\|(\pi_W)_i \| \rightarrow 0$. Setting aside the action of $\tau$, this defines a map of enriched complexes in the sense of [Definition 54](#def:4.16){reference-type="ref" reference="def:4.16"}. ## Construction of $\tau$ {#sec:5.3} Now suppose that $Y$ is equipped with an orientation-preserving involution $\tau$. Henceforth, we assume that we have chosen a $\tau$-invariant metric on $Y$. Let $\pi_i$ be a sequence of holonomy perturbations on $Y$ as in the previous subsection. For each $i$, let $\pi_i^\tau$ be a holonomy perturbation on $Y \times [0, 1]^*$ with ends $\pi_i$ and $\tau^* \pi_i$. We construct a chain map $$\label{eq:deftau} \tau_i: \underline{C}(Y, \pi_i) \to \underline{C}(Y, \tau^* \pi_i) = \underline{C}(Y, \pi_i)$$ as follows. The map from $\underline{C}(Y, \pi_i)$ to $\underline{C}(Y, \tau^* \pi_i)$, which by abuse of notation we also denote by $\tau_i$, is just the cobordism map associated to $Y \times [0, 1]$ with the perturbation $\pi^\tau_i$. Explicitly, for any $a \in \underline{C}(Y, \pi_i)$, let $$\tau_i a = \sum_{\substack{b \in \underline{C}(Y, \tau^* \pi_i) \\ \deg_\mathbb{Z}(a) = \deg_\mathbb{Z}(b)}} \# M_{\pi^\tau_i}(a, Y \times [0,1]^*, b) \cdot b$$ with the additional convention that $$\# M_{\pi^\tau_i}(a, Y \times [0,1]^*, b) = \begin{cases} 1 & \text{ if } a = \theta \text{ and } b = \theta \\ 0 & \text{ if } a \neq \theta \text{ and } b = \theta. \end{cases}$$ The case where $a = \theta$ and $b$ is irreducible was already covered in the discussion of Section [5.1.3](#sec:5.1.3){reference-type="ref" reference="sec:5.1.3"}. The signs of the moduli spaces are also given as for the usual cobordism maps. Note that this means $\tau_i \theta = z + \theta$ for some $z \in C(Y, \tau^* \pi_i)$, while $\theta$ does not appear in the image of any generator in $C(Y, \pi_i)$.
We then complete ([\[eq:deftau\]](#eq:deftau){reference-type="ref" reference="eq:deftau"}) by composing with the tautological identification $\underline{C}(Y, \tau^* \pi_i) = \underline{C}(Y, \pi_i)$ given by the pullback of connections. If we furthermore assume $\| \pi_i^\tau \| \rightarrow 0$, then we see that $\tau_i$ is a local map whose level goes to zero as $i \rightarrow \infty$. We now show that the sequence $\tau_i$ makes the family $\underline{C}(Y, \pi_i)$ into an enriched complex in the sense of Definition [Definition 52](#def:4.14){reference-type="ref" reference="def:4.14"}. Let $\smash{\pi_i^j}$ and $\smash{\psi^j_i}$ be defined as in the previous subsection. We claim: **Lemma 72**. *The following hold:* - *For each $i$ and $j$, we have $$\psi^j_i \tau_i \simeq \tau_j \psi^j_i$$ via a chain homotopy $H_i^j$ of level $\delta_{i, j}$, where $\delta_{i, j} \rightarrow 0$.* - *For each $i$, we have $$\tau_i^2 \simeq \operatorname{id}$$ via a chain homotopy $H_i$ of level $\delta_i$, where $\delta_i \rightarrow 0$.* *Proof.* Let $a \in \underline{C}(Y, \pi_i)$ and $c \in \underline{C}(Y, \pi_j)$. The coefficient of $c$ in $(\tau_j \psi^j_i)(a)$ is easily seen to be the number of points in the product $$\label{eq:581} \bigcup_{b \in \underline{C}(Y, \pi_j) } M_{\pi_i ^j} (a,Y \times [0,1]^*, b) \times M_{\pi_j^\tau} (b, Y \times [0,1]^* , \tau^*c),$$ where $\tau^* c$ is the pullback of $c$. Likewise, the coefficient of $c$ in $(\psi^j_i \tau_i)(a)$ is easily seen to be the number of points in the product $$\label{eq:582} \bigcup_{b \in \underline{C}(Y, \tau^* \pi_i)} M_{\pi_i ^\tau} (a,Y \times [0,1]^*, b) \times M_{\pi_i^j} (\tau^* b, Y \times [0,1]^* , c),$$ where $\tau^* b$ is the pullback of $b$.
Note that we clearly have a bijection between $$M_{\pi_i^j} (\tau^* b, Y \times [0,1]^* , c) \quad \text{and} \quad M_{\tau^* \pi_i^j} (b, Y \times [0,1]^* , \tau^*c)$$ by taking the pullback of the entire moduli space along $\tau \times \operatorname{id}$ on $Y \times [0, 1]$, which by abuse of notation we also denote by $\tau^*$. Hence ([\[eq:582\]](#eq:582){reference-type="ref" reference="eq:582"}) is in bijection with $$\label{eq:583} \bigcup_{b \in \underline{C}(Y, \tau^* \pi_i)} M_{\pi_i ^\tau} (a,Y \times [0,1]^*, b) \times M_{\tau^* \pi_i^j} (b, Y \times [0,1]^* , \tau^*c).$$ By gluing theory, ([\[eq:581\]](#eq:581){reference-type="ref" reference="eq:581"}) and ([\[eq:583\]](#eq:583){reference-type="ref" reference="eq:583"}) are in bijection with $$M_{\pi_i^j \# \pi_j ^\tau} (a,Y \times [0,1]^*, \tau^* c) \quad \text{ and } \quad M_{\pi_i^\tau \# \tau^* \pi_i^j} (a,Y \times [0,1]^*, \tau^*c),$$ respectively, where the perturbations $\pi_i^j \# \pi_j ^\tau$ and $\pi_i^\tau \# \tau^* \pi_i^j$ are defined by $$\pi_i^j \# \pi_j ^\tau|_{Y \times (-\infty , -T_0]} = \pi_i^j \quad \text{and} \quad \pi_i^j \# \pi_j ^\tau|_{Y \times [T_0, \infty)} = \pi_j ^\tau$$ and $$\pi_i^\tau \# \tau^* \pi_i^j|_{Y \times (-\infty , -T_0]} = \pi_i^\tau \quad \text{and} \quad \pi_i^\tau \# \tau^* \pi_i^j|_{Y \times [T_0, \infty)} = \tau^*\pi_i^j$$ for sufficiently large $T_0$. Here, we mean (for example) that $\smash{\pi_i^j \# \pi_j ^\tau}$ agrees with a $t$-shifted copy of $\pi_i^j$ for $t \ll 0$ and a $t$-shifted copy of $\pi_j^\tau$ for $t \gg 0$. Strictly speaking, this means that $\smash{\pi_i^j \# \pi_j ^\tau}$ is a holonomy perturbation on some $Y \times [-T, T]^*$, but we continue to write $Y \times [0,1]^*$. Since both $\smash{\pi_i^j \# \pi_j ^\tau}$ and $\smash{\pi_i^\tau \# \tau^* \pi_i^j}$ have ends $\pi_i$ and $\tau^* \pi_j$, we may take a one-parameter family of holonomy perturbations on $Y \times [0, 1]^*$ that interpolates between them and has fixed ends.
Denote this family by $\pi(s)$, where $$\pi(0) = \pi_i^j \# \pi_j ^\tau \quad \text{and} \quad \pi(1) = \pi_i^\tau \# \tau^* \pi_i^j.$$ For $a \in \underline{C}(Y, \pi_i)$ and $c \in \underline{C}(Y, \pi_j)$, we have the instanton moduli space associated to the family $\{\pi(s)\}$ discussed in Section [5.1.3](#sec:5.1.3){reference-type="ref" reference="sec:5.1.3"}: $${\bf M} _{\{\pi(s)\}}(a,Y \times [0,1]^*, \tau^* c ) = \bigcup_{s \in [ 0,1] } M_{\pi(s)}(a,Y \times [0,1]^*, \tau^*c ).$$ If $\deg_\mathbb{Z}(a) = \deg_\mathbb{Z}(c)$, then generically this moduli space has the structure of a one-dimensional manifold. Gluing theory tells us that after compactifying and orienting, there are four kinds of endpoints: $$\displaystyle \bigcup_{\substack{b \in \underline{C}(Y, \tau^* \pi_j) \\ \deg_\mathbb{Z}(b) = \deg_\mathbb{Z}(a) + 1}} \mathbf{M}_{\{\pi(s)\}}(a, Y \times [0,1]^*, b) \times \left(M_{\tau^* \pi_j}(b, \tau^*c) /\mathbb R\right)$$ and $$\displaystyle \bigcup_{\substack{b \in \underline{C}(Y, \pi_i) \\ \deg_\mathbb{Z}(b) = \deg_\mathbb{Z}(c) - 1}} \left(M_{\pi_i}(a, b)/\mathbb R\right) \times \mathbf{M}_{\{\pi(s)\}}(b, Y \times [0,1]^*, \tau^* c)$$ together with $$M_{\pi(0)}(a, Y \times [0,1]^*, \tau^*c ) \quad \text{and} \quad M_{\pi(1)}(a, Y \times [0,1]^*, \tau^*c).$$ After appropriately introducing signs, this gives the equality $$\psi^j_i \tau_i + \tau_j \psi^j_i = \underline{d} H_i^j + H_i^j \underline{d},$$ where the homotopy $H_i^j$ is defined by counting points in $\smash{{\bf M} _{\{\pi(s)\}}(a,Y \times [0,1]^*, b)}$ whenever $\deg_\mathbb{Z}(b) = \deg_\mathbb{Z}(a) + 1$. We may moreover assume that $\| \{\pi(s)\} \| \rightarrow 0$ as $i, j \rightarrow \infty$. This means that the level of $H_i^j$ goes to zero, as desired. The proof of the second part of the lemma is similar. Let $a$ and $c$ be in $\underline{C}(Y, \pi_i)$.
The coefficient of $c$ in $\tau_i^2 a$ is easily seen to be the number of points in the product $$\bigcup_{b \in \underline{C}(Y, \tau^* \pi_i) } M_{\pi_i ^\tau} (a,Y \times [0,1]^*, b) \times M_{\pi_i^\tau} (\tau^* b, Y \times [0,1]^* , \tau^* c),$$ which is in bijection with $$\bigcup_{b \in \underline{C}(Y, \tau^* \pi_i) } M_{\pi_i ^\tau} (a,Y \times [0,1]^*, b) \times M_{\tau^* \pi_i^\tau} (b, Y \times [0,1]^* , c).$$ The family of homotopies between $\tau_i^2$ and $\operatorname{id}$ is obtained by taking an interpolating family of perturbations between $\pi_i^\tau \# \tau^* \pi_i^\tau$ and the constant family $\pi_i$. ◻ Putting everything together, we obtain: **Definition 73**. Let $Y$ be an oriented homology 3-sphere equipped with an orientation-preserving involution $\tau$. We obtain an enriched complex $\underline{\mathfrak{E}}(Y, \tau)$ by taking any sequence of nondegenerate regular holonomy perturbations $\pi_i$ on $Y$ with $\| \pi_i \| \rightarrow 0$ and considering the family $$(\underline{C}(Y, \pi_i), \underline{d}, \tau_i)$$ together with the maps $\smash{\psi^j_i}$ of Section [5.2](#sec:5.2){reference-type="ref" reference="sec:5.2"} and the homotopies of Lemma [Lemma 72](#ex prf of enriched local inv str){reference-type="ref" reference="ex prf of enriched local inv str"}. The clustering subset is given by the set of critical points $\mathfrak{K}_Y$ of the Chern-Simons functional. We refer to $\underline{\mathfrak{E}}(Y, \tau)$ as the *enriched involutive complex associated to $(Y, \tau)$*. **Definition 74**. Let $Y$ be an oriented homology 3-sphere equipped with an orientation-preserving involution $\tau$. Define $$r_s (Y, \tau) =r_s (\underline{\mathfrak{E}}(Y, \tau)).$$ **Remark 75**. According to Definition [Definition 60](#def:4.22){reference-type="ref" reference="def:4.22"}, the abstract $r_s$-invariant takes values in $[0, \infty] \cup \{- \infty\}$. 
However, it is not hard to show that in fact $r_s(Y, \tau) > 0$ by essentially the same argument as in [@NST19; @DISST22]. The point here is that the component of $\tau$ from the reducible to irreducible part of $\underline{C}$ strictly decreases the Chern-Simons filtration. The same holds for $D_2$, which should be thought of as the component of $\underline{d}$ from the reducible to irreducible part of $\underline{C}$. It follows from this that there is always some $\epsilon < 0$ such that $\theta$ itself constitutes an equivariant $\theta$-supported cycle in $\underline{C}^{[\epsilon, -s]}$. ## Cobordism maps {#sec:5.4} We now show that an equivariant cobordism induces a map of enriched involutive complexes. Let $(W, \widetilde{\tau})$ be an equivariant negative-definite cobordism from $(Y, \tau)$ to $(Y', \tau')$ with $b_1(W) = 0$. Choose holonomy perturbations $$\pi_i^\tau \text{ on } Y \times [0, 1]^* \text{ with ends } \pi_i \text{ and } \tau^* \pi_i$$ and $$\pi_i^{\tau'} \text{ on } Y' \times [0, 1]^* \text{ with ends } \pi'_i \text{ and } (\tau')^* \pi'_i$$ which define $\tau_i$ and $\tau'_i$, respectively. In addition, choose a sequence of holonomy perturbations $$\pi_{W, i} \text{ on } W^* \text{ with ends } \pi_i \text{ and }\pi'_i.$$ This defines a sequence of cobordism maps $F_i = F_{W, i}$. The norms of all perturbations go to zero as $i \rightarrow \infty$. **Lemma 76**. *We have $$F_i \tau_i \simeq \tau'_i F_i$$ via a homotopy $H_i$ of level $\delta_i$, where $\delta_i \rightarrow 0$ as $i \rightarrow \infty$.* *Proof.* The proof is similar to that of [Lemma 72](#ex prf of enriched local inv str){reference-type="ref" reference="ex prf of enriched local inv str"}, so we restrict ourselves to listing the moduli spaces involved. Let $a \in \underline{C}(Y, \pi_i)$ and $c \in \underline{C}(Y', \pi'_i)$. 
Then the coefficient of $c$ in $(\tau'_i F_i)(a)$ is given by counting points in $$\label{eq:5101} \bigcup_{b \in \underline{C}(Y', \pi'_i)} M_{\pi_{W, i}} (a,W^*, b) \times M_{\pi_i^{\tau'}} (b, Y' \times [0,1]^* , (\tau')^* c),$$ while the coefficient of $c$ in $(F_i \tau_i)(a)$ is given by counting points in $$\label{eq:5102} \bigcup_{b \in \underline{C}(Y, \tau^* \pi_i )} M_{\pi_i^\tau} (a,Y \times [0,1]^*, b) \times M_{\pi_{W, i}} (\tau^* b, W^* , c).$$ Pulling back under the self-diffeomorphism $\widetilde{\tau}$ on $W$, we have a bijection $$M_{\pi_{W, i}} (\tau^* b, W^* , c) = M_{\widetilde{\tau}^* \pi_{W, i}} (b, W^* , (\tau')^* c).$$ Thus ([\[eq:5102\]](#eq:5102){reference-type="ref" reference="eq:5102"}) is in bijection with $$\label{eq:5103} \bigcup_{b \in \underline{C}(Y, \tau^* \pi_i )} M_{\pi_i^\tau} (a,Y \times [0,1]^*, b) \times M_{\widetilde{\tau}^* \pi_{W, i}} (b, W^* , (\tau')^*c).$$ We stress that this requires the existence of $\widetilde{\tau}$. As in the proof of Lemma [Lemma 72](#ex prf of enriched local inv str){reference-type="ref" reference="ex prf of enriched local inv str"}, the moduli spaces ([\[eq:5101\]](#eq:5101){reference-type="ref" reference="eq:5101"}) and ([\[eq:5103\]](#eq:5103){reference-type="ref" reference="eq:5103"}) are then in bijection with ASD moduli spaces over $W^*$ corresponding to regular holonomy perturbations $$\pi_{W, i} \# \pi_i^{\tau'} \quad \text{and} \quad \pi_i^\tau \# \widetilde{\tau}^* \pi_{W, i},$$ respectively. These are defined similarly to the perturbations in Lemma [Lemma 72](#ex prf of enriched local inv str){reference-type="ref" reference="ex prf of enriched local inv str"}. Take a one-parameter family of holonomy perturbations on $W^*$ which interpolates between $\smash{\pi_{W, i} \# \pi_i^{\tau'}}$ and $\smash{\pi_i^\tau \# \widetilde{\tau}^* \pi_{W, i}}$.
The same argument as in Lemma [Lemma 72](#ex prf of enriched local inv str){reference-type="ref" reference="ex prf of enriched local inv str"} produces the desired homotopy. ◻ In particular: **Lemma 77**. *Let $(W, \widetilde{\tau})$ be an equivariant negative-definite cobordism from $(Y, \tau)$ to $(Y', \tau')$ with $H_1(W, \mathbb{Z}_2) = 0$. Then we obtain an enriched local map $$\lambda_W \colon \underline{\mathfrak{E}}(Y, \tau) \rightarrow \underline{\mathfrak{E}}(Y', \tau').$$* *Proof.* Immediate from the definitions. ◻ ## Connected sums {#sec:5.5} We now study the behavior of our complexes under connected sums. Let $(Y, \tau)$ and $(Y', \tau')$ be two equivariant homology spheres such that the equivariant connected sum $(Y \# Y', \tau \# \tau')$ is defined. Then we may form the pair-of-pants cobordism $W^\#$ from $Y\# Y'$ to $Y \cup Y'$ obtained by attaching a $3$-handle along the connected sum sphere of $Y \# Y'$. It is clear that $W^\#$ may be made equivariant. For 4-manifolds with disconnected boundary, we can similarly define ASD moduli spaces as in the discussion in Section [5.1.3](#sec:5.1.3){reference-type="ref" reference="sec:5.1.3"}. However, to obtain a local map, we need to be careful about the number of components; see [Remark 79](#local pants){reference-type="ref" reference="local pants"}. **Lemma 78**. *Let $(Y, \tau)$ and $(Y', \tau')$ be two equivariant homology spheres such that the equivariant connected sum $(Y \# Y', \tau \# \tau')$ is defined.
Then we have enriched local maps: $$\overline{\lambda}_{W^\#} : \overline{\mathfrak{E}}(Y\# Y', \tau\# \tau' ) \to \overline{\mathfrak{E}}(Y,\tau ) \otimes \overline{\mathfrak{E}}( Y' , \tau' )$$ and $$\underline{\lambda}_{W^\#} : \underline{\mathfrak{E}}(Y,\tau ) \otimes \underline{\mathfrak{E}}( Y' , \tau' ) \to \underline{\mathfrak{E}}(Y\# Y', \tau\# \tau' ).$$* *Proof.* The existence of the first map was essentially established in [@NST19], in which it is shown that $W^\#$ induces an enriched local map $$\overline{\lambda}_{W^\#}: \overline{\mathfrak{E}}(Y\# Y') \rightarrow \overline{\mathfrak{E}}(Y ) \otimes \overline{\mathfrak{E}}( Y' ).$$ The verification that $\overline{\lambda}_{W^\#}$ is homotopy equivariant is essentially the same as the proof of [Lemma 76](#involutive local map){reference-type="ref" reference="involutive local map"}. The second map is obtained as the dual of the first map after reversing orientation of the boundaries. ◻ **Remark 79**. Note that we do *not* have a local map $$\overline{\lambda}_{W^\#} : \overline{\mathfrak{E}}(Y,\tau ) \otimes \overline{\mathfrak{E}}( Y' , \tau' ) \to \overline{\mathfrak{E}}(Y\# Y', \tau\# \tau' ) .$$ This is due to certain subtleties involving the notion of a reducible connection in the case of a disconnected manifold. The essential reason we cannot have such a local map is that the formal dimension of the reducible solution on the upside-down cobordism $(W^\#)^\dagger$ of $W^\#$ is $-6$, while the stabilizer has dimension $3$. Therefore, the moduli space is obstructed in this case and there is no natural map associated with the upside-down cobordism $(W^\#)^\dagger$. It turns out that this allows for good behavior for cobordism maps with domain $\overline{\mathfrak{E}}(Y,\tau ) \otimes \overline{\mathfrak{E}}( Y' , \tau' )$, but not with range $\overline{\mathfrak{E}}(Y,\tau ) \otimes \overline{\mathfrak{E}}( Y' , \tau' )$.
# Properties of $r_s(Y, \tau)$ {#sec:6} We now prove Theorem [Theorem 1](#thm:1.1){reference-type="ref" reference="thm:1.1"}, together with several other properties of the involutive $r_s$-invariant. We then explain the connection between the action of $\tau$ and the effect of the cork twist on the Donaldson polynomial. ## Proof of Theorem [Theorem 1](#thm:1.1){reference-type="ref" reference="thm:1.1"} {#sec:6.1} Having constructed the involutive $r_s$-invariant, it remains to establish its claimed behavior under negative-definite cobordisms. For the convenience of the reader, we recall:\ \ **Theorem 1.1.** *Let $Y$ be an oriented integer homology $3$-sphere and $\tau$ be a smooth, orientation-preserving involution on $Y$. For any $s \in [-\infty, 0]$, we define a real number $$r_s(Y, \tau) \in (0, \infty]$$ which is an invariant of the diffeomorphism class of $(Y, \tau)$. Moreover, let $(W, \widetilde{\tau})$ be an equivariant negative-definite cobordism from $(Y, \tau)$ to $(Y', \tau')$ with $H_1(W; \mathbb{Z}_2)=0$. Then $$r_s(Y,\tau) \leq r_s(Y', \tau').$$ If $r_s(Y, \tau)$ is finite and $W$ is simply connected, then in fact $$r_s(Y,\tau) < r_s(Y', \tau').$$* *Proof.* The first inequality is straightforward. Since there is an equivariant negative-definite cobordism $(W, \widetilde{\tau})$ from $(Y, \tau)$ to $(Y', \tau')$ with $H_1(W; \mathbb{Z}_2)=0$, [Lemma 76](#involutive local map){reference-type="ref" reference="involutive local map"} gives an enriched local map from $\underline{\mathfrak{E}}(Y, \tau)$ to $\underline{\mathfrak{E}}(Y', \tau')$. [Lemma 61](#monotonicity of involtuive rs){reference-type="ref" reference="monotonicity of involtuive rs"} then implies $r_s(Y ,\tau) \leq r_s(Y', \tau')$, as desired. The strict inequality is more subtle. The proof is an equivariant analog of [@Da20 Theorem 3] and [@NST19 Theorem 1.1 (1)]; we give a sketch here for the reader.
Let $W$ be simply connected and suppose for the sake of contradiction that $r_s(Y,\tau) = r_s(Y', \tau') < \infty$. Fix any sequence of analytic data used to define the enriched complexes $\underline{\mathfrak{E}}(Y, \tau)$ and $\underline{\mathfrak{E}}(Y', \tau')$ and the enriched map between them. This consists of: 1. A Riemannian metric $g$ (resp. $g'$) on $Y$ (resp. $Y'$); 2. A cylindrical-end Riemannian metric on $W^*$, as in Section [5.1.3](#sec:5.1.3){reference-type="ref" reference="sec:5.1.3"}; 3. A sequence of nondegenerate regular perturbations $\pi_i$ (resp. $\pi'_i$) such that $\|\pi_i\| \to 0$ (resp. $\|\pi'_i\| \to 0$). This gives sequences of complexes $$\underline{C}(Y, \pi_i) \quad \text{and} \quad \underline{C}(Y', \pi'_i)$$ with homotopy involutions $\tau_i$ and $\tau'_i$; and, 4. A sequence of perturbations $\pi_i^W$ of the ASD equations on $W^*$ such that $\|\pi_i^W\| \to 0$. This gives a sequence of local maps $$\lambda_i \colon \underline{C}(Y, \pi_i) \rightarrow \underline{C}(Y', \pi'_i) \text{ of level } \delta_i$$ with homotopies $H_i$ (also of level $\delta_i$) such that $$\lambda_i \tau_i + \tau'_i \lambda_i = \underline{d} H_i + H_i \underline{d}$$ for each $i$, where $\delta_i \rightarrow 0$. In Lemma [Lemma 80](#lem:technical){reference-type="ref" reference="lem:technical"} below, we construct an increasing sequence $(n_k)_{k = 1}^\infty$ such that for each $i$ in $(n_k)_{k = 1}^\infty$, we have a pair of chains $$\alpha_i \in C^{[-\infty, s]}(Y, \pi_i) \quad \text{and} \quad \alpha'_i \in C^{[-\infty, s]}(Y', \pi'_i)$$ for which the following hold: 1. [\[item:lem611\]]{#item:lem611 label="item:lem611"} We have $$\lim \deg_I(\alpha_i) = - r_s(Y, \tau) = -r_s(Y', \tau') = \lim \deg_I(\alpha'_i)$$ along $(n_k)_{k = 1}^\infty$; and, 2. 
[\[item:lem612\]]{#item:lem612 label="item:lem612"} Either $$\alpha'_i = \lambda_i \alpha_i \text{ along } (n_k)_{k = 1}^\infty \quad \text{or} \quad \alpha'_i = H_i \alpha_i \text{ along } (n_k)_{k = 1}^\infty.$$ For convenience of notation, we restrict our attention to $(n_k)_{k = 1}^\infty$ and speak of the above holding for every $i$. The proof of Lemma [Lemma 80](#lem:technical){reference-type="ref" reference="lem:technical"} and the motivation behind it are rather involved, so we defer them until after the proof of Theorem [Theorem 1](#thm:1.1){reference-type="ref" reference="thm:1.1"}. However, a modification of the proof of [@NST19 Theorem 1.1 (1)] quickly completes the argument. Indeed, recall that $\alpha_i$ (respectively, $\alpha'_i$) is a linear combination of generators corresponding to critical points of the (perturbed) Chern-Simons functional. By ([\[item:lem612\]](#item:lem612){reference-type="ref" reference="item:lem612"}) of Lemma [Lemma 80](#lem:technical){reference-type="ref" reference="lem:technical"}, each generator $a_i'$ from $\alpha'_i$ must appear in the image $\lambda_i a_i$ or $H_i a_i$ of some generator $a_i$ from $\alpha_i$. In particular, it is then easily checked that $$\label{eq:11proofA} M(a_i, W^*, a_i') \neq \emptyset$$ for some sequence of generators $a_i \in C^{[-\infty, s]}(Y, \pi_i)$ and $a_i' \in C^{[-\infty, s]}(Y', \pi'_i)$ satisfying $$\label{eq:11proofB} \lim \deg_I(a_i) = -r_s(Y, \tau) \quad \text{and} \quad \lim \deg_I(a_i') = -r_s(Y', \tau').$$ Here, by $M(a_i, W^*, a_i')$ we either mean the usual ASD-moduli space used to define $\lambda_i$, or we mean the 1-parameter family moduli space used to define $H_i$ in [Lemma 76](#involutive local map){reference-type="ref" reference="involutive local map"}, depending on which case of ([\[item:lem612\]](#item:lem612){reference-type="ref" reference="item:lem612"}) is applicable.
Note that since $r_s(Y, \tau) = r_s(Y', \tau') > 0$, in the limit $a_i$ and $a'_i$ have filtration level bounded above by a negative constant. For each $i$, now choose any $A_i \in M(a_i, W^*, a'_i)$ using ([\[eq:11proofA\]](#eq:11proofA){reference-type="ref" reference="eq:11proofA"}). Then ([\[eq:11proofB\]](#eq:11proofB){reference-type="ref" reference="eq:11proofB"}), together with the fact that $r_s(Y, \tau) = r_s(Y', \tau')$, shows that the energy of $A_i$ goes to zero. After possibly passing to a subsequence, it follows that the $A_i$ converge to a flat connection with respect to the $\smash{L^2_{k, loc}}$-topology (for arbitrarily large $k$), up to a sequence of gauge transformations obtained by pasting local Coulomb gauge transformations. Denote this limit by $A_\infty$. Then $A_\infty$ is a flat $SU(2)$-connection which we claim is irreducible. To see this, first observe that we may select a finite number of disjoint neighborhoods $\{U_\beta\}$ of the components of $R(Y)$ in $\mathcal B(Y)$ such that: - Each $U_\beta$ is open with respect to the $C^\infty$-topology; and, - There exists $\epsilon > 0$ such that any $a \in \mathcal B(Y)$ with $\|F_{a}\|_{L^2} \leq \epsilon$ must lie in some $U_\beta$. The existence of $\{U_\beta\}$ follows from Uhlenbeck's compactness theorem. We denote by $U_{\beta_0}$ the neighborhood containing $\theta$; by passing to a subsequence, we may assume all of the $a_i$ and $a'_i$ from ([\[eq:11proofA\]](#eq:11proofA){reference-type="ref" reference="eq:11proofA"}) lie in a single $U_{\beta}$ for some fixed $\beta \neq \beta_0$. (Note that $a_i$ and $a'_i$ are certainly irreducible, as they have filtration level which is bounded away from zero.)
Next, we claim that there is some $\epsilon' >0$ such that: - For any $SU(2)$-connection $A$ on $Y \times [0,1]$ in temporal gauge and any sufficiently small perturbation $\pi$, if $A$ satisfies $$F^+_A + \pi^+(A) =0 \quad \text{and} \quad \|F_A\|_{L^2} \leq \epsilon',$$ then $A|_{\{t\}\times Y} \in \bigcup_\beta U_\beta$ for each $t$. This claim also follows from Uhlenbeck's compactness theorem. Writing each $A_i$ in temporal gauge, we thus have that $$A_i |_{[-T_0, -T_0 +1]\times Y } \in \bigcup_\beta U_\beta.$$ In fact, since $A_i$ limits to $a_i$, we have $$A_i |_{[-T_0, -T_0 +1]\times Y } \in U_\beta \quad \text{for the fixed} \quad \beta \neq \beta_0.$$ This implies $A_\infty$ is irreducible on $Y \times (-\infty, 0]$. The holonomy of $A_\infty$ thus gives rise to an irreducible $SU(2)$-representation of $\pi_1(W)$. This contradicts the assumption that $W$ is simply connected. ◻ We now establish Lemma [Lemma 80](#lem:technical){reference-type="ref" reference="lem:technical"}. Fix analytic data as in the proof of Theorem [Theorem 1](#thm:1.1){reference-type="ref" reference="thm:1.1"}. **Lemma 80**. *Let $(W, \widetilde{\tau})$ be an equivariant negative-definite cobordism from $(Y, \tau)$ to $(Y', \tau')$ with $H_1(W, \mathbb{Z}_2)=0$. Suppose that there is some $s$ such that $$r_s(Y, \tau) = r_s(Y', \tau') < \infty.$$ Then there exists an increasing sequence $(n_k)_{k = 1}^\infty$ such that for each $i$ in $(n_k)_{k = 1}^\infty$, we have chains $$\alpha_i \in C^{[-\infty, s]}(Y, \pi_i) \quad \text{and} \quad \alpha'_i \in C^{[-\infty, s]}(Y', \pi'_i)$$ for which the following hold:* 1. *We have $$\lim \deg_I(\alpha_i) = - r_s(Y, \tau) = -r_s(Y', \tau') = \lim \deg_I(\alpha'_i)$$ along $(n_k)_{k = 1}^\infty$; and,* 2.
*Either $$\alpha'_i = \lambda_i \alpha_i \text{ along } (n_k)_{k = 1}^\infty \quad \text{or} \quad \alpha'_i = H_i \alpha_i \text{ along } (n_k)_{k = 1}^\infty.$$* *Proof.* Roughly speaking, the idea for producing $\alpha_i$ and $\alpha'_i$ is the following: we start by selecting a sequence of chains $\smash{z_i \in \underline{C}^{[-\infty, s]}(Y, \pi_i)}$ realizing $r_s(Y, \tau)$. As we explain below, this means that the obstruction to $z_i$ being an equivariant cycle will be a chain $\alpha_i$ lying in some filtration level $\rho_i$, where $\rho_i$ limits to $- r_s(Y, \tau)$ as $i \rightarrow \infty$. We then consider the sequence of chains $z'_i = \lambda_i z_i$. The obstruction to $z_i'$ being an equivariant cycle will be a chain $\alpha'_i$ lying in some filtration level $\rho'_i$; we show that $\rho_i'$ likewise limits to $- r_s(Y', \tau')$. We then prove that $\alpha'_i$ is the image of $\alpha_i$ under $\lambda_i$ or $H_i$, giving the claim. We now make this precise. For simplicity, assume $-s \notin \mathfrak{K}\cup \mathfrak{K}'$; the general case follows by a limiting argument. By Definition [Definition 60](#def:4.22){reference-type="ref" reference="def:4.22"} we have a sequence of negative real numbers $r_i \notin \mathfrak{K}$ with $$\displaystyle \lim_{i\to \infty} r_i = - r_s(Y, \tau)$$ from above, such that each stable complex $\underline{\mathfrak{E}}^{[r_i, -s]}(Y, \tau)$ admits an equivariant $\theta$-supported cycle in the sense of Definition [Definition 60](#def:4.22){reference-type="ref" reference="def:4.22"}. For each $i$, fix a perturbation $\pi_{n_i}$ realizing $\smash{\underline{\mathfrak{E}}^{[r_i, -s]}(Y, \tau)}$, so that $$\underline{\mathfrak{E}}^{[r_i, -s]}(Y, \tau) = \underline{C}^{[r_i, -s]}(Y, \pi_{n_i}).$$ Without loss of generality, assume that the sequence $n_i$ is increasing with $i$.
For each $i$, we have an explicit $\theta$-supported cycle $z_i$ in $\underline{C}^{[r_i, -s]}(Y, \pi_{n_i})$ whose homology class is fixed by $\smash{(\tau_{n_i}^{[r_i, -s]})_*}$. View $z_i$ as a chain in the complex $\underline{C}^{[-\infty, -s]}(Y, \pi_{n_i})$ with $$\deg_I(\underline{d} z_i) \leq r_i.\footnote{Here and throughout, we use $\deg_I(x)$ when $x$ is not homogeneous to refer to the filtration level of $x$.}$$ To say that the homology class of $z_i \in \underline{C}^{[r_i, -s]}(Y, \pi_{n_i})$ is fixed by $\smash{(\tau_{n_i}^{[r_i, -s]})_*}$ means that we have $h_i$ and $\xi_i$ in $\underline{C}^{[-\infty, -s]}(Y, \pi_{n_i})$ such that $$\label{eq:xi} \underline{d} h_i = (z_i + \tau_{n_i} z_i) + \xi_i \quad \text{with} \quad \deg_I(\xi_i) \leq r_i.$$ Let $$\rho_i = \max\{ \deg_I(\underline{d} z_i), \deg_I(\xi_i) \}.\footnote{By convention, $\deg_I(0) = - \infty$.}$$ We claim that $$\label{eq:6.1a} \limsup \rho_i = -r_s(Y, \tau).$$ Indeed, suppose not. Then for some $\delta > 0$, there exists an infinite sequence of the $z_i$ with $\deg_I(\underline{d}z_i) \leq -r_s(Y, \tau) - \delta$ and $\deg_I(\xi_i) \leq -r_s(Y, \tau) - \delta$. It is straightforward to check that this produces an equivariant $\theta$-supported cycle in the stable complex $\underline{\mathfrak{E}}^{[-r_s(Y, \tau) - \delta, -s]}(Y, \tau)$, contradicting the infimum in Definition [Definition 60](#def:4.22){reference-type="ref" reference="def:4.22"}. For notational convenience, henceforth we assume that $n_i = i$ and we restrict to a subsequence so that in fact $\lim \rho_i = -r_s(Y, \tau)$.
Now let $$z'_i = \lambda_i z_i.$$ Then we may compute $$\underline{d} z'_i = \underline{d} \lambda_i z_i = \lambda_i \underline{d} z_i$$ and, using ([\[eq:xi\]](#eq:xi){reference-type="ref" reference="eq:xi"}), $$\begin{aligned} z'_i + \tau'_i z'_i &= \lambda_i z_i + \tau'_i \lambda_i z_i \\ &= \lambda_i z_i + \lambda_i \tau_i z_i + (\underline{d} H_i + H_i\underline{d}) (z_i) \\ &= \lambda_i (\underline{d} h_i + \xi_i) + \underline{d} H_i z_i + H_i \underline{d} z_i \\ &= \underline{d} \left( \lambda_i h_i + H_i z_i \right) + \left( \lambda_i \xi_i + H_i \underline{d}z_i \right).\end{aligned}$$ If we write $$\label{eq:6.1b} \xi'_i = \lambda_i \xi_i + H_i \underline{d} z_i,$$ then the resulting equation $z'_i + \tau'_i z'_i = \underline{d} ( \lambda_i h_i + H_i z_i ) + \xi'_i$ is analogous to the defining equation ([\[eq:xi\]](#eq:xi){reference-type="ref" reference="eq:xi"}) for $\xi_i$. Define $$\rho'_i = \max\{ \deg_I(\underline{d} z'_i), \deg_I(\xi'_i) \}.$$ The same argument as in the previous paragraph then shows that $\limsup \rho'_i \geq - r_s(Y', \tau')$. On the other hand, $\rho'_i \leq \rho_i + \delta_i$. Since $\delta_i \rightarrow 0$, this shows that $\limsup \rho'_i \leq \limsup \rho_i = -r_s(Y, \tau) = -r_s(Y', \tau')$. Hence $$\label{eq:6.1c} \limsup \rho'_i = -r_s(Y', \tau').$$ For notational convenience, we again pass to a subsequence so that $\lim \rho'_i = -r_s(Y', \tau')$. With ([\[eq:6.1c\]](#eq:6.1c){reference-type="ref" reference="eq:6.1c"}) in hand, observe that up to passing to a further subsequence, we either have 1. [\[item:61alt1\]]{#item:61alt1 label="item:61alt1"} $\lim \deg_I(\underline{d} z_i') = -r_s(Y', \tau')$; or, 2. [\[item:61alt2\]]{#item:61alt2 label="item:61alt2"} $\lim \deg_I(\xi_i') = -r_s(Y', \tau')$. Suppose that ([\[item:61alt1\]](#item:61alt1){reference-type="ref" reference="item:61alt1"}) is the case.
Since $$\deg_I(\underline{d} z_i') = \deg_I (\lambda_i \underline{d} z_i) \leq \deg_I(\underline{d} z_i) + \delta_i$$ with $\delta_i \rightarrow 0$, the fact that the limits in ([\[eq:6.1a\]](#eq:6.1a){reference-type="ref" reference="eq:6.1a"}) and ([\[eq:6.1c\]](#eq:6.1c){reference-type="ref" reference="eq:6.1c"}) coincide shows that $\lim \deg_I(\underline{d} z_i) = -r_s(Y, \tau)$. Now suppose ([\[item:61alt2\]](#item:61alt2){reference-type="ref" reference="item:61alt2"}) is the case. Again up to passing to a subsequence, examining ([\[eq:6.1b\]](#eq:6.1b){reference-type="ref" reference="eq:6.1b"}) we see that either $$\lim \deg_I ( \lambda_i \xi_i) = -r_s(Y', \tau') \quad \text{or} \quad \lim \deg_I (H_i \underline{d} z_i) = -r_s(Y', \tau').$$ In the former case, the same argument as in ([\[item:61alt1\]](#item:61alt1){reference-type="ref" reference="item:61alt1"}) shows that $\lim \deg_I( \xi_i ) = - r_s(Y, \tau)$. In the latter, the fact that $H_i$ is of level $\delta_i$ shows that $\lim \deg_I( \underline{d} z_i ) = -r_s(Y, \tau)$. In either case, we obtain the chains claimed in the statement of the lemma. That is, there is some increasing sequence $(n_k)_{k = 1}^\infty$ such that for all $i$ in $(n_k)_{k = 1}^\infty$, we may define chains $\alpha_i \in \underline{C}(Y, \pi_i)$ and $\alpha'_i \in \underline{C}(Y', \pi'_i)$ such that $$\lim \deg_I(\alpha_i) = - r_s(Y, \tau) = -r_s(Y', \tau') = \lim \deg_I(\alpha'_i)$$ and $$f_i(\alpha_i) = \alpha'_i$$ along $(n_k)_{k = 1}^\infty$, where $f_i$ is formed by counting instantons on $W^*$. Indeed, either $\alpha_i = \underline{d}z_i$ and $\alpha'_i = \lambda_i \underline{d}z_i$; or $\alpha_i = \xi_i$ and $\alpha'_i = \lambda_i \xi_i$; or $\alpha_i = \underline{d}z_i$ and $\alpha'_i = H_i \underline{d}z_i$. The reader may refer to Figure [5](#fig:11cases){reference-type="ref" reference="fig:11cases"} for a schematic depiction of these three cases. ◻ **Example 81**.
Figure [5](#fig:11cases){reference-type="ref" reference="fig:11cases"} gives a schematic depiction of the three possibilities in the proof of Lemma [Lemma 80](#lem:technical){reference-type="ref" reference="lem:technical"}. In each case, we display two complexes $C$ and $C'$ with the same involutive $r_s$-invariant, together with examples of maps $\lambda$ and $H$ between them. - In $(a)$, the obstruction to $z$ being a cycle is given by $\underline{d} z = x$. The obstruction to $z'$ being a cycle is likewise given by $\underline{d} z' = x'$; and we have $\lambda x = x'$. - In $(b)$, the obstruction to $z$ being equivariant is given by $z + \tau z = y$. The obstruction to $z'$ being equivariant is likewise given by $z' + \tau' z' = y'$; and we have $\lambda y = y'$. For simplicity, we suppose $h = 0$ in ([\[eq:xi\]](#eq:xi){reference-type="ref" reference="eq:xi"}), so that $\xi = z + \tau z$. - In $(c)$, we have displayed the most complicated situation. Here, the obstruction to $z$ being an equivariant cycle is given by $\underline{d} z = x$, while the obstruction to $z'$ being an equivariant cycle is given by $z' + \tau' z' = y'$. Unlike in $(a)$ and $(b)$, however, these two cycles are not related by $\lambda$, but rather by $H$. Note that in this example, $\lambda$ is not $\tau$-equivariant, as $$\lambda \tau z = \lambda (z + y) = z' \quad \text{while} \quad \tau' \lambda z = \tau' z' = z' + y'.$$ These two are equal only up to addition of the term $(\underline{d} H + H \underline{d})(z) = H \underline{d} z = H x = y'$. ![Schematic depiction of the possibilities in the proof of Lemma [Lemma 80](#lem:technical){reference-type="ref" reference="lem:technical"}. Generators are arranged by height according to their filtration level. The differential is given by the black arrows; if no black arrow is drawn, the differential on a given generator is zero. 
The action of $\tau$ is given by the (sum of) the red arrows; if no red arrow is drawn, the action of $\tau$ on a given generator is the identity. The map $\lambda$ is in blue and the map $H$ is in green.](F6.pdf){#fig:11cases} We end this subsection by establishing several other properties of the involutive $r_s$-invariant. We first record the following straightforward corollary of Theorem [Theorem 1](#thm:1.1){reference-type="ref" reference="thm:1.1"}: **Theorem 82** (Equivariant definite bounding). *Let $(Y, \tau)$ be an equivariant homology sphere. Suppose that $(Y, \tau)$ bounds an equivariant definite manifold $(W, \tilde{\tau})$ with $H_1(W, \mathbb{Z}_2) = 0$.* - *If $W$ is positive definite, then $r_s(-Y, \tau) = \infty$ for all $s$.* - *If $W$ is negative definite, then $r_s(Y, \tau) = \infty$ for all $s$.* *In particular, suppose that $(Y, \tau)$ is obtained by $1/n$-surgery on an equivariant knot.* - *If $n > 0$, then $r_s(-Y, \tau) = \infty$ for all $s$.* - *If $n < 0$, then $r_s(Y, \tau) = \infty$ for all $s$.* *Proof.* Assume $W$ is negative definite. Isotope $\tilde{\tau}$ so that it fixes a ball in the interior of $W$; puncturing $W$ then gives an equivariant cobordism from $(S^3, \operatorname{id})$ to $(Y, \tau)$. It is easily checked that $r_s(S^3, \operatorname{id}) = \infty$ for all $s$. Applying Theorem [Theorem 1](#thm:1.1){reference-type="ref" reference="thm:1.1"} then establishes the claim for $r_s(Y, \tau)$ and reversing orientation gives the positive-definite case. The last two claims then follow from applying Lemma [Lemma 15](#lem:knotsB){reference-type="ref" reference="lem:knotsB"}. ◻ We now have the connected sum inequality: **Theorem 83** (Connected sum inequality). *Let $(Y, \tau)$ and $(Y', \tau')$ be two equivariant homology spheres such that the equivariant connected sum $(Y \# Y', \tau \# \tau')$ is defined. 
Then for any $s$ and $s'$ in $[-\infty, 0]$, we have $$r_{s+s'}( Y \# Y', \tau \# \tau' ) \geq \min \{r_{s} (Y, \tau) + s' , r_{s'} (Y', \tau') + s\}.$$ In particular, $$r_0(Y \# Y', \tau \# \tau') \geq \min \{ r_0(Y, \tau), r_0(Y', \tau') \}.$$* *Proof.* [Lemma 78](#connected sum local map ){reference-type="ref" reference="connected sum local map "} implies there is an enriched local map $$\underline{\lambda}_{W^\#} : \underline{\mathfrak{E}}(Y,\tau ) \otimes \underline{\mathfrak{E}}( Y' , \tau' ) \to \underline{\mathfrak{E}}(Y\# Y', \tau\# \tau' ).$$ Thus, [Lemma 61](#monotonicity of involtuive rs){reference-type="ref" reference="monotonicity of involtuive rs"} implies $$r_s ( \underline{\mathfrak{E}}(Y,\tau ) \otimes \underline{\mathfrak{E}}( Y' , \tau' ) ) \leq r_s( \underline{\mathfrak{E}}(Y\# Y', \tau\# \tau' ) ).$$ Combining this with [Lemma 62](#conn sum of rs for inv enriched){reference-type="ref" reference="conn sum of rs for inv enriched"} gives the claim. ◻ We also clarify the range of the $r_s$-invariant: **Theorem 84** (Range of $r_s$). *The involutive $r_s$-invariant is a nonincreasing function of $s$. Moreover, the range of the involutive $r_s$-invariant is a subset of $$-\{\text{critical values of }cs_Y\} \cup \{\infty\},$$ where $cs_Y$ is the $SU(2)$-Chern-Simons functional.* *Proof.* The fact that $r_s$ is nonincreasing easily follows from the observation that there is a local map $${\underline{\mathfrak{E}}_\tau^{[r, s]} \rightarrow \underline{\mathfrak{E}}_\tau^{[r, s']}}$$ whenever $s \leq s'$. For the second claim, note that Lemma [Lemma 59](#lem:4.21){reference-type="ref" reference="lem:4.21"} implies $r_s(\underline{\mathfrak{E}}_\tau)$ is valued in $-\mathfrak{K}\cup \{\pm \infty\}$. In our case, $\mathfrak{K} = \mathfrak{K}_Y$ is the set of critical values of the $SU(2)$-Chern-Simons functional. Combined with Remark [Remark 75](#rem:rangeofrs){reference-type="ref" reference="rem:rangeofrs"}, this completes the proof.
◻ Finally, we compare the involutive $r_s$-invariant with its usual (nonequivariant) counterpart. While technically we have only defined the latter for abstract instanton-type complexes, it is straightforward to carry out the enriched complex construction of Section [4.5](#sec:4.5){reference-type="ref" reference="sec:4.5"} to obtain an invariant for homology $3$-spheres, which we denote by $r_s(Y)$. **Theorem 85** (Comparison with the nonequivariant $r_s$-invariant). *Let $r_s(Y)$ be the nonequivariant $r_s$-invariant constructed using Definition [Definition 30](#def:3.10){reference-type="ref" reference="def:3.10"}. Then $$r_s(Y, \tau) \leq r_s(Y)$$ for every $s$. Moreover, if $\tau$ is isotopic to the identity, then $$r_s(Y, \tau) = r_s(Y).$$* *Proof.* It is clear that if $\tau$ is isotopic to the identity, then $r_s(Y, \tau) = r_s(Y, \operatorname{id}) = r_s(Y)$. Indeed, the isotopy-invariance of the induced action of $\tau$ can easily be shown directly. Alternatively, note that if $\tau_1$ is isotopic to $\tau_2$, then the track of isotopy gives an equivariant homology cobordism from $(Y, \tau_1)$ to $(Y, \tau_2)$. For the more general claim, note that a comparison of Definitions [Definition 30](#def:3.10){reference-type="ref" reference="def:3.10"} and [Definition 42](#def:4.4){reference-type="ref" reference="def:4.4"} shows $r_s(\underline{C}, \tau) \leq r_s(\underline{C})$. ◻ **Remark 86**. Note, however, that the $r_s$-invariant constructed using Definition [Definition 30](#def:3.10){reference-type="ref" reference="def:3.10"} is not quite the same as the $r_s$-invariant of [@NST19 Definition 3.2]: the two agree when working over field coefficients. (It is straightforward to carry out the present paper using more general coefficient rings.) 
To see this, we re-write Definition [Definition 30](#def:3.10){reference-type="ref" reference="def:3.10"} as $$\begin{aligned} r_s ( \underline{C}) & = -\inf \{ \deg_I (d \alpha - D_2 (1) ) \ | \ \alpha \in C_*, \deg_I(\alpha) \leq -s \} \\ & = \sup \{ \deg_I (f ) \ | \ f \in C^*(-Y), \deg_I(\alpha) \leq -s, d^* f(\alpha) - D_1^* (f) \neq 0 \}\\ & = \inf \{ r \leq0 \ | \ 0\neq [D_1] \in H^* (C^{\leq -s}(-Y) / C^{\leq r} (-Y) ) \} \\ & = \sup \{ r \leq0 \ | \ 0 = [D_1] \in H^* (C^{\leq -s}(-Y) / C^{\leq r} (-Y) ) \}.\end{aligned}$$ In the second line, we have used the fact that we are working with field coefficients to invoke the duality $\smash{(C_*^{\leq s }(Y, \tau) )^* = C^*_{\geq -s} (-Y, \tau)}$ and $\smash{D_2^* = D_1}$. The last line is the definition of the $r_s$-invariant in [@NST19 Definition 3.2] for $-Y$. However, in [@NST19], the Chern-Simons functional is defined with an additional negative sign in comparison to Section [5.1.1](#sec:5.1.1){reference-type="ref" reference="sec:5.1.1"}. Hence Definition [Definition 30](#def:3.10){reference-type="ref" reference="def:3.10"} agrees with that of [@NST19 Definition 3.2]. **Remark 87**. While the formalism of this paper may be carried out over any coefficient ring, in order to obtain a nontrivial equivariant invariant it is best to either use $\mathbb{Z}$ or a field with characteristic equal to the order of $\tau$. For instance, suppose that $\tau$ is of order three and suppose we work over $\mathbb{Z}_2$. Then if $z$ is any $\theta$-supported cycle, we may average over the orbit of $\tau$ to form the equivariant $\theta$-supported cycle $z + \tau z + \tau^2 z$. Hence in this case there is no distinction between the existence of a $\theta$-supported cycle and an equivariant $\theta$-supported cycle. ## Relation with the Donaldson invariant {#sec:6.2} We now discuss the relation between the action of $\tau$ and the $SO(3)$-Donaldson polynomial. 
Let $X$ be a closed, simply-connected oriented 4-manifold with $b^+(X) \geq 3$. Assume $X=X_1 \cup_Y X_2$ along an oriented homology 3-sphere $Y$, where: - $X_1$ is a negative-definite 4-manifold with $H_1(X_1; \mathbb{Z}_2)=0$, - $X_2$ is a 4-manifold with $b^+(X_2)>1$. Fix a cohomology class $w \in H^2(X; \mathbb{Z}_2 )$; this will correspond to the second Stiefel-Whitney class of an $SO(3)$-bundle. We will assume $w^2 \neq 0 \bmod 4$ in order to guarantee compactness of various moduli spaces. Denote the $SO(3)$-Donaldson polynomial invariant by $$\Psi (X) : \Lambda^* H_2(X;\mathbb{Z}) \to \mathbb{Z}.$$ This is formally defined by $$\Psi (X)(x_1, \cdots , x_n ) = \int_{\overline{M(X, g, m,w)} } \mu ( x_1) \wedge \cdots \wedge \mu (x_n) ,$$ where $\mu : H_2 (X) \to H^2 (\mathcal B_{m,w} (X))$ is as in [@Do02 Section 6.3] and $\overline{M(X, g, m,w)}$ is the compactified ASD-moduli space with respect to the $SO(3)$-bundle $P_{m,w}$ satisfying $p_1=m$ and $w_2=w$. Here, $\mathcal B_{m,w} (X)$ is the space of $SO(3)$-connections on $P_{m,w}$ divided by gauge transformations, and $m$ is chosen so that $$-2m-3(1+b^+(X)) = 2n.$$ In order to make this definition rigorous, we realize $\mu ( x_1), \ldots, \mu (x_n)$ as divisors $V(x_1), \ldots , V(x_n)$ in the ASD-moduli space $\overline{M(X, g, m, w)}$ and set $$\int_{\overline{M(X, g, m,w)} } \mu ( x_1) \wedge \cdots \wedge \mu (x_n) = \# \left( V(x_1 ) \cap \cdots \cap V(x_n) \cap \overline{M(X, g, m,w)} \right).$$ In this paper, we focus on the case $X=K3\# \smash{\overline{\mathbb{CP}}}^2$ or $X=3\smash{\mathbb{CP}^2}\# 20 \smash{\overline{\mathbb{CP}}}^2$. **Remark 88**. Although we have used $SU(2)$ as the structure group in our definition of instanton Floer homology, one can equally well use $SO(3)$. For integer homology spheres, the groups defined in this way are naturally isomorphic; hence we will sometimes identify $SO(3)$- and $SU(2)$-instanton Floer homology.
Counting relative ASD moduli spaces over $X_1$ defines a cycle $\underline{\psi}(X_1)$ in the chain complex $\underline{C}(Y)$; this in turn defines $$\underline{\Psi}(X_1): \Lambda^* H_2(X_1; \mathbb{Z}) \to \underline{I}(Y),$$ where $\underline{I}_*(Y)$ is regarded as absolutely $\mathbb{Z}/8\mathbb{Z}$-graded. We similarly have $$\overline{\Psi}(X_2): \Lambda^* H_2(X_2; \mathbb{Z}) \to \overline{I}(Y).$$ This gives the pairing formula: **Theorem 89**. *[@Do02 Section 7] In the above setting, we have $$\langle \underline{\Psi}(X_1), \overline{\Psi}(X_2) \rangle = \Psi(X).$$* Note that this pairing formula holds with $\mathbb{Z}$ coefficients, although we will use it with $\mathbb{Z}_2$ coefficients. We now assume $b^+(X)=3$. Let us summarize the computations of the $SO(3)$-Donaldson polynomial invariants of $\smash{X=K3\# \smash{\overline{\mathbb{CP}}}^2}$ or $\smash{X=3\smash{\mathbb{CP}^2}\# 20 \smash{\overline{\mathbb{CP}}}^2}$. We focus on the $SO(3)$-bundle $P_{m,w}$ satisfying $$m=6 \quad \text{and} \quad w^2 = 1 \bmod 4.$$ In this case, the ASD moduli space $\overline{M(X, g, m,w)} =M(X, g, m,w)$ has dimension zero and is compact. Thus, we do not need to cut down the moduli space with any divisor $V(x_i)$. We denote by $\Psi (X)(1) \in \mathbb{Z}$ the $SO(3)$-Donaldson invariant, defined as the signed count of points in $M(X, g, m,w)$ with respect to a homology orientation of $X$. The invariant $\Psi (X)(1) \in \mathbb{Z}$ is called the *simple invariant*; see [@DK90 Section 9.1.1]. In the simple invariant case, the corresponding relative Donaldson invariants $\underline{\Psi}(X_1)$ and $\overline{\Psi}(X_2)$ are concentrated in a single degree within $\underline{I}(Y)$ and $\overline{I}(Y)$, respectively. By combining known computations of simple invariants of $K3$ with the blow-up formula [@DK90 Proposition 9.3.14], we see the following: - ([@K91] and [@DK90 Proposition 9.3.14]) Suppose $X= K3 \# \smash{\overline{\mathbb{CP}}}^2$.
Then there is a cohomology class $w\in H^2(X; \mathbb{Z}_2)$ with $w^2 = 1 \bmod 4$ whose Donaldson invariant is computed as $$\Psi (X)(1)= \pm 1 \in \mathbb{Z}.$$ - ([@DK90 Theorem 9.3.4]) Suppose $X= 3\smash{\mathbb{CP}^2}\# 20 \smash{\overline{\mathbb{CP}}}^2$. Then for every cohomology class $w\in H^2(X; \mathbb{Z}_2)$ with $w^2 = 1 \bmod 4$, the simple Donaldson invariant of $X$ is $$\Psi (X)(1)= 0 \in \mathbb{Z}.$$ Now consider the decomposition $$X = K3 \# \smash{\overline{\mathbb{CP}}}^2= X_1 \cup_Y X_2,$$ where $X_1$ is the usual Akbulut cork. Then cutting out and re-gluing $X_1$ via the involution $\tau \colon Y \rightarrow Y$ of [@Ak91_cork] gives $$X' = 3\smash{\mathbb{CP}^2}\# 20 \smash{\overline{\mathbb{CP}}}^2= X_1 \cup_{\tau} X_2.$$ By the pairing formula, we have $$\begin{aligned} \Psi (X')(1) = \langle \tau_* \underline{\Psi}(X_1) , \overline{\Psi}(X_2) \rangle =0 \neq \pm 1 = \langle \underline{\Psi}(X_1) , \overline{\Psi}(X_2) \rangle = \Psi(X)(1).\end{aligned}$$ Note that $\underline{\Psi}(X_1)$ induces a local map from $\underline{C}(S^3 )$ to $\underline{C}(Y)$. **Lemma 90**. *The action $\tau_* : \underline{I}(Y) \to \underline{I}(Y)$ induced by $\tau$ satisfies the following condition: There is a $\theta$-supported element $\psi(X_1)$ and a map $T: \underline{I}(Y) \to \mathbb{Z}_2$ such that $T\tau_* \psi(X_1)= 0$ and $T\psi(X_1)=1$.* *Proof.* Immediate from the pairing formula: take $\psi(X_1) = \underline{\Psi}(X_1)$ and $T = \langle - , \overline{\Psi}(X_2) \rangle$, so that $T\tau_* \psi(X_1) = \Psi(X')(1) = 0$ and $T\psi(X_1) = \Psi(X)(1) = 1$ in $\mathbb{Z}_2$. ◻ This trivially gives: **Lemma 91**. *There exists a $\theta$-supported cycle in $\underline{I}_{-3}(Y)$ which is not fixed by $\tau_*$.* *Proof.* If not, we would have $\tau_* \psi(X_1) = \psi(X_1)$ for the element $\psi(X_1)$ of Lemma [Lemma 90](#lem:6.11){reference-type="ref" reference="lem:6.11"}, contradicting $T\tau_* \psi(X_1)= 0 \neq 1 = T\psi(X_1)$. ◻ # Examples and applications {#sec:7} In this section, we provide several computations and partial computations of our invariants. We use these to establish the remaining claims of Section [1](#sec:1){reference-type="ref" reference="sec:1"}.
## The Akbulut-Mazur cork {#sec:7.1} Our first (and most fundamental) example is the Akbulut-Mazur cork $Y = S_{+1}(\overline{9}_{46})$. In [@Sa03 Theorem 1], it is shown that the (irreducible) instanton Floer homology $$I(Y) = H_* (C(Y), d )$$ is isomorphic to $\mathbb{Z}$ in odd gradings and is zero in even gradings. Moreover, since all critical points are known to be nondegenerate, we do not need to take holonomy perturbations to make the Chern-Simons functional Morse. For trajectories, we take small perturbations which are zero near the critical points and make all moduli spaces regular. We claim that the action of $\tau$ on the instanton homology $$\underline{I}(Y) = H_* (\underline{C}(Y), \underline{d} )$$ is locally nontrivial. To see this, observe that $I(Y)$ is one-dimensional in grading $-3$. Together with the fact that the Frøyshov invariant of $Y$ is zero, this shows that $\underline{I}(Y)$ is two-dimensional in grading $-3$. Let $x$ be a generator of $\underline{I}(Y)$ in grading $-3$ whose image generates the quotient $\underline{I}(Y)/I(Y)$ and let $a$ be the class of a cycle in $C(Y) \subset \underline{C}(Y)$ generating $I(Y)$ in the same grading as $x$. Note that $x$ is the class of a $\theta$-supported cycle. The fact that $\tau_*$ preserves $I(Y)$, which is one-dimensional in this grading, shows that $\tau_* a = a$; hence $\tau_* x$ is either $x$ or $x + a$. It follows immediately from Lemma [Lemma 91](#lem:6.12){reference-type="ref" reference="lem:6.12"} that $$\tau_* x = x + a.$$ Note that this is consistent with the condition $\tau_*^2 = \operatorname{id}$, since over $\mathbb{Z}_2$ we have $$\tau_*^2 x = \tau_*(x+a) = x + 2a = x.$$ This is easily seen to be locally nontrivial just from the action of $\tau$ on homology. Hence: **Lemma 92**. *The involutive chain complex $(\underline{C}(Y),\tau)$ is not locally trivial. In fact, we have: $$r_0(Y, \tau) < \infty \quad \text{and} \quad r_s(-Y, \tau)=\infty.$$* *Proof.* By [Lemma 64](#lem:4.26){reference-type="ref" reference="lem:4.26"}, $r_0(Y, \tau)$ characterizes local triviality.
Hence the first inequality follows from the discussion at the beginning of this subsection. The claim regarding $r_s(-Y, \tau)$ follows from the fact that $Y$ is $(+1)$-surgery on the strongly invertible knot $\overline{9}_{46}$, combined with Theorem [Theorem 82](#thm:definitebounding){reference-type="ref" reference="thm:definitebounding"}. ◻ Although not strictly necessary, for completeness we describe the chain complex $\underline{C}(Y)$. The irreducible part of this complex was computed by Saveliev in [@Sa03] via an analysis of the representation variety of $Y$; see also [@RS04]. As shown in [@Sa03 Section 2], the irreducible representation variety of $Y$ (modulo conjugation) is transversely cut out and consists of six points, denoted by $\beta_1, \beta_2, \alpha_1, \alpha_2, \alpha_3$, and $\alpha_4$. The induced action of $\tau$ on the representation variety interchanges $\beta_1$ and $\beta_2$ and leaves each $\alpha_i$ fixed. Together with the aforementioned computation of $I(Y)$, this shows that one generator of $C(Y)$ (say the generator corresponding to $\alpha_1$) lies in even grading and has $$\underline{d} \alpha_1 = \beta_1 + \beta_2,$$ while all other generators lie in odd grading and have zero differential; see Figure [6](#fig:7.1){reference-type="ref" reference="fig:7.1"}. Hence $$\tau \beta_1 = \beta_2, \quad \tau \beta_2 = \beta_1, \quad \text{and} \quad \tau \alpha_i = \alpha_i \text{ for all } i.$$ Here, we abuse notation by writing $\beta_1$ to mean the generator of $C(Y)$ corresponding to the representation $\beta_1$, and so on. See [@RS04 Section 9.3] for the relevant computation over $\mathbb{Z}$. We thus have that $$\underline{C}(Y) = \mathrm{span}\{\theta, \beta_1, \beta_2, \alpha_1, \alpha_2, \alpha_3, \alpha_4\}.$$ It is somewhat difficult to determine the absolute gradings of these generators, although we know that $\beta_1, \beta_2, \alpha_2, \alpha_3$, and $\alpha_4$ lie in the same mod two grading as $\theta$.
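As a sanity check, the combinatorics just described can be verified mechanically in a finite toy model over $\mathbb{Z}_2$ (a hypothetical stand-in for the irreducible complex, with our own generator names; it is not the analytic Floer complex itself): $\tau$ is a chain map, an involution on the nose, and acts trivially on homology, since $\tau\beta_1 - \beta_1 = \beta_1 + \beta_2 = \underline{d}\alpha_1$ is a boundary.

```python
# Toy Z/2-model of the irreducible complex C(Y) described above
# (generator names are ours): d(a1) = b1 + b2, tau swaps b1 <-> b2
# and fixes each a_i.  Chains are sets of generators; addition over
# Z/2 is symmetric difference.
gens = ["b1", "b2", "a1", "a2", "a3", "a4"]

def apply(f, vec):
    """Apply a Z/2-linear map, given as {generator: image-set}, to a chain."""
    out = set()
    for g in vec:
        out ^= f.get(g, set())
    return out

d = {"a1": {"b1", "b2"}}                  # all other differentials vanish
tau = {g: {g} for g in gens}              # tau fixes each alpha_i ...
tau["b1"], tau["b2"] = {"b2"}, {"b1"}     # ... and swaps beta_1, beta_2

for g in gens:
    # tau is a chain map ...
    assert apply(tau, apply(d, {g})) == apply(d, apply(tau, {g}))
    # ... and an involution on the nose:
    assert apply(tau, apply(tau, {g})) == {g}

# ker d is spanned by b1, b2, a2, a3, a4 (dimension 5) and im d by
# b1 + b2 (dimension 1), so the irreducible homology has dimension 4.
# Moreover tau(b1) - b1 = b1 + b2 = d(a1) is a boundary, so
# [b1] = [b2] and tau acts trivially on homology.
assert apply(tau, {"b1"}) ^ {"b1"} == apply(d, {"a1"})
```

The resulting homology has dimension $5 - 1 = 4$, consistent with $I(Y)$ having rank one in each odd grading.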
It follows from these grading considerations that either $D_2 \theta = 0$ or $D_2 \theta = \alpha_1$; the latter is impossible as $\alpha_1$ is not a cycle (whereas $D_2 \theta$ must be). There are two possibilities for the action of $\tau$ on $\underline{C}(Y)$ consistent with our analysis of $\underline{I}(Y)$. If some $\alpha_i$ (for $i = 2, 3, 4$) is in the same grading as $\theta$, then $$\tau \theta = \theta + \alpha_i.$$ If $\beta_1$ and $\beta_2$ are in the same grading as $\theta$, then up to filtered homotopy equivalence we have $$\tau \theta = \theta + \beta_1.\footnote{To see that the homotopy equivalence is filtered, write $\tau \theta = \theta + m \beta_1 + n \beta_2$. Our previous observation involving the Donaldson invariant implies $m + n \equiv 1 \bmod 2$; in particular, $m \neq n$. It follows that $\tau$ is not literally an involution. The fact that $\tau$ is a homotopy involution in the filtered sense then implies that $\deg_I(\alpha_1) \leq 0$.}$$ These two possibilities are displayed in Figure [6](#fig:7.1){reference-type="ref" reference="fig:7.1"}. ![Possibilities for the complex of $Y = S_{+1}(\overline{9}_{46})$. Generators are arranged by height according to their homological grading. The differential is given by the black arrows; if no black arrow is drawn, the differential on a given generator is zero. The action of $\tau$ is given by the (sum of) the red arrows; if no red arrow is drawn, the action of $\tau$ on a given generator is the identity.](F1.pdf){#fig:7.1} Finally, we describe the chain complex $\underline{C}(-Y)$ to illustrate the fact that $\underline{C}(Y)$ and $\underline{C}(-Y)$ do not necessarily contain the same information. The generators of $\underline{C}(-Y)$ are easily calculated to be $$\underline{C}(-Y) = \mathrm{span}\{\theta, \beta_1^\vee, \beta_2^\vee, \alpha_1^\vee, \alpha_2^\vee, \alpha_3^\vee, \alpha_4^\vee\}.$$ However, now the only generator in the same mod two grading as $\theta$ is $\alpha_1^\vee$.
Since $-Y$ bounds a homology ball, $D_2 \theta$ is a nullhomologous cycle in $\underline{C}(-Y)$. It is easily checked that $D_2 \theta = 0$ from the fact that the Frøyshov invariant is zero; see Figure [7](#fig:7.2){reference-type="ref" reference="fig:7.2"}. There are again two possibilities for the action of $\tau$ on $\theta$. If there are no other generators in the same grading as $\theta$, then clearly $$\tau \theta = \theta.$$ Otherwise, we have $\tau \theta = \theta + n \alpha_1^\vee$ for $n= 0$ or $1$. Note that on the level of homology, we still have $\tau_*[\theta] = [\theta]$; moreover, one can check that this action of $\tau$ is filtered homotopy equivalent to the trivial action on $\theta$.[^8] In either case, it is straightforward to see that the resulting complex is locally trivial. ![Possibilities for the complex of $-Y$. Generators are arranged by height according to their homological grading. The differential is given by the black arrows; if no black arrow is drawn, the differential on a given generator is zero. The action of $\tau$ is given by the (sum of) the red arrows; if no red arrow is drawn, the action of $\tau$ on a given generator is the identity.](F2.pdf){#fig:7.2} ## Brieskorn homology spheres {#sec:brieskorn} We now discuss the involutive $r_{s}$-invariant for Brieskorn homology spheres $\Sigma(p, q, r)$. In all cases other than $S^3$ or $\Sigma(2, 3, 5)$, this has mapping class group $\mathbb{Z}/2\mathbb{Z}$. (For an exposition of this fact, see [@DHM20 Theorem 7.6].) The generator of $\textit{MCG}(\Sigma(p, q, r))$ is the involution $\tau$ obtained by viewing $\Sigma(p, q, r)$ as the link of a complex singularity and restricting the complex conjugation action to $\Sigma(p, q, r)$. We first explain the importance of this analysis. In [@DHM20], it was shown that the Heegaard-Floer-theoretic invariants associated to a pair $(Y, \tau)$ are monotonic under certain types of equivariant negative-definite cobordisms. 
To constrain the invariants associated to $(Y_1, \tau_1)$, it thus suffices to produce a cobordism of this kind from $(Y_1, \tau_1)$ to another pair $(Y_2, \tau_2)$ whose invariants are already understood. This strategy formed a crucial technique for establishing many of the examples presented in [@DHM20]. In particular, the set of Brieskorn spheres (equipped with the complex conjugation involution) formed an essential class of target pairs $(Y_2, \tau_2)$ in the discussion of [@DHM20], since in [@ASA20] the invariants of these manifolds were computed and shown to be appropriately nontrivial. While in this paper we likewise use equivariant negative-definite cobordisms to compare pairs, a crucial difference is that the involutive instanton-theoretic invariants are trivial for Brieskorn spheres. In particular, the methods of this paper cannot immediately be used to re-establish the examples of [@DHM20]. We have: **Theorem 93**. *Let $Y = \Sigma(p, q, r)$ be a Brieskorn sphere with $h(Y) = 0$ and $\tau$ be the complex conjugation action on $Y$. Then $$r_s(Y, \tau) = \infty \quad \text{and} \quad r_s(-Y, \tau) = \infty.$$ Here, $Y$ is oriented as the link of its corresponding singularity and $h(Y)$ is the instanton Frøyshov invariant of $Y$.* *Proof.* Resolving the singularity gives an equivariant negative-definite manifold with boundary $(Y, \tau)$. Puncturing this manifold gives a cobordism from $S^3$ to $(Y, \tau)$; the first claim then follows from Theorem [Theorem 1](#thm:1.1){reference-type="ref" reference="thm:1.1"}. To prove the second claim, observe that by [@FS90 Proposition 3.10], the irreducible instanton chain complex for $-Y$ (with our orientation conventions) is supported in even gradings and has no differential. Since $h(Y) = h(-Y) = 0$, it follows that the $D_2$-map is zero on the chain level. Since there are no other generators in the same grading as the reducible $\theta$, the action of $\tau$ must send $\theta$ to itself. 
It is then easily seen that $r_s(-Y, \tau)$ is identically $\infty$. ◻ Readers should contrast the above with [@DHM20 Lemma 7.7]. Note that as discussed in Remark [Remark 9](#rem:1.8){reference-type="ref" reference="rem:1.8"}, it is also possible to define the involutive $r_s$-invariant using $\mathbb{Z}$-coefficients. However, the same argument as above shows that the involutive $r_s$-invariant with $\mathbb{Z}$-coefficients is also trivial for Brieskorn spheres with $h(Y) = 0$. (The analogue of Lemma [Lemma 64](#lem:4.26){reference-type="ref" reference="lem:4.26"} likewise applies to show that the local equivalence class is then trivial.) **Remark 94**. Our local equivalence formalism roughly corresponds to Donaldson's Theorem A; that is, local maps are induced by equivariant negative-definite cobordisms. We expect that a theory more analogous to Donaldson's Theorem B or C, such as that of [@Sa13 Theorem 4.1], might capture the nontriviality of the local equivalence class of $\Sigma(2,3,7)$ with the complex conjugation action. (Such variants of instanton Floer theory were first developed by Fukaya, Furuta and Ohta.) ## Definite bounding {#sec:7.3} We now turn to the proof of Theorem [Theorem 2](#thm:1.2){reference-type="ref" reference="thm:1.2"}. As shown in Figure [8](#fig:7.4){reference-type="ref" reference="fig:7.4"}, the knot $\overline{9}_{46}$ admits two strong inversions, which we denote by $\tau$ and $\sigma$. Let $$Y_i = S^3_{1/i}(\overline{9}_{46}).$$ Write $(Y_i, \tau)$ and $(Y_i, \sigma)$ to mean these surgered $3$-manifolds equipped with the involutions induced by $\tau$ and $\sigma$, respectively; see [@DHM20 Section 5.1]. Note that $(Y_1, \tau)$ is the Akbulut cork. It will also be useful to point out that Lemma [Lemma 92](#cal for Akbulut){reference-type="ref" reference="cal for Akbulut"} holds with $(Y_1, \tau)$ replaced by $(Y_1, \sigma)$.
Indeed, $\tau$ and $\sigma$ differ by an involution on $Y_1$ which extends over the contractible $4$-manifold $X_1$ discussed in Section [6.2](#sec:6.2){reference-type="ref" reference="sec:6.2"}. Hence the cork twist along $\sigma$ gives the same $4$-manifold (up to diffeomorphism) as the cork twist along $\tau$. In particular, the Donaldson invariants of the two resulting cork twists agree. Thus the analysis of Section [7.1](#sec:7.1){reference-type="ref" reference="sec:7.1"} applies to calculate the action of $\sigma$. ![Strong inversions on $\overline{9}_{46}$.](9_46.pdf){#fig:7.4} The cork $Y$ of Theorem [Theorem 2](#thm:1.2){reference-type="ref" reference="thm:1.2"} will be constructed as an equivariant connected sum of pairs $(Y_i, \tau)$ and $(Y_i, \sigma)$. For convenience, we use addition to denote the operation of connected sum and negation to denote orientation reversal. We recall the statement of Theorem [Theorem 2](#thm:1.2){reference-type="ref" reference="thm:1.2"}: **Theorem 1.2** There exists a cork $Y = \partial W$ such that the boundary involution $\tau$: 1. Does not extend as a diffeomorphism over any negative-definite $4$-manifold $W^-$ with $H_1(W^-; \mathbb{Z}_2)=0$ bounded by $Y$; and, 2. Does not extend as a homology-fixing or homology-reversing diffeomorphism over any positive-definite $4$-manifold $W^+$ with $H_1(W^+; \mathbb{Z}_2)=0$ bounded by $Y$. The first part of Theorem [Theorem 2](#thm:1.2){reference-type="ref" reference="thm:1.2"} is an application of the present paper: *Proof of Theorem [Theorem 2](#thm:1.2){reference-type="ref" reference="thm:1.2"} (1).* We show that at least one of $$\label{eq:combA} (Y, \tau) = - 2(Y_1, \tau) - 2(Y_1, \sigma) + (Y_2, \tau)$$ and $$\label{eq:combB} (Y, \tau) = -2(Y_1, \tau) - 2(Y_1, \sigma) + (Y_2, \sigma)$$ constitutes the desired example.
Indeed, note that if $(Y, \tau)$ bounds an equivariant negative-definite manifold $W^-$ with $H_1(W^-; \mathbb{Z}_2)=0$, then the inequality of Theorem [Theorem 1](#thm:1.1){reference-type="ref" reference="thm:1.1"} implies $r_0(Y, \tau) = \infty$. It thus suffices to see that $r_0(Y, \tau) < \infty$ for at least one of [\[eq:combA\]](#eq:combA){reference-type="eqref" reference="eq:combA"} and [\[eq:combB\]](#eq:combB){reference-type="eqref" reference="eq:combB"}. We know from Lemma [Lemma 92](#cal for Akbulut){reference-type="ref" reference="cal for Akbulut"} and the discussion at the beginning of this subsection that $r_0(Y_1, \tau)$ and $r_0(Y_1, \sigma)$ are both finite. Moreover, by Lemma [Lemma 14](#lem:knotsA){reference-type="ref" reference="lem:knotsA"}, we have a simply-connected, negative-definite equivariant cobordism from $(Y_2, \tau)$ to $(Y_1, \tau)$, and similarly from $(Y_2, \sigma)$ to $(Y_1, \sigma)$. Hence $$r_0(Y_2, \tau) < r_0(Y_1, \tau) \quad \text{and} \quad r_0(Y_2, \sigma) < r_0(Y_1, \sigma).$$ Now suppose that $$r_0(Y_1, \tau) \leq r_0(Y_1, \sigma).$$ In this situation we let $(Y, \tau)$ be given by ([\[eq:combA\]](#eq:combA){reference-type="ref" reference="eq:combA"}). The connected sum inequality for the involutive $r_s$-invariant gives $$r_0 (Y_2, \tau) \geq \min \{ r_0( Y, \tau) , r_0( 2(Y_1, \tau) + 2(Y_1, \sigma))\}.$$ Under the supposition that $r_0(Y_1, \tau) \leq r_0(Y_1, \sigma)$, repeatedly applying the connected sum inequality as in the proof of Theorem [Theorem 98](#key:sequence){reference-type="ref" reference="key:sequence"} gives $$r_0 (Y_2, \tau) < r_0(Y_1, \tau) = \min \{r_0(Y_1, \tau), r_0(Y_1, \sigma)\} \leq r_0( 2(Y_1, \tau) + 2(Y_1, \sigma)).$$ Combining these two inequalities, the minimum in the first display cannot be attained by $r_0( 2(Y_1, \tau) + 2(Y_1, \sigma))$, so $r_0( Y, \tau) \leq r_0 (Y_2, \tau) < \infty$, as desired. If instead $r_0(Y_1, \sigma) \leq r_0(Y_1, \tau)$, we let $(Y, \tau)$ be given by ([\[eq:combB\]](#eq:combB){reference-type="ref" reference="eq:combB"}).
A similar argument with $r_0 (Y_2, \sigma)$ in place of $r_0 (Y_2, \tau)$ then gives the result. ◻ We now establish the second part of Theorem [Theorem 2](#thm:1.2){reference-type="ref" reference="thm:1.2"}. This will require a general understanding of the results of [@DHM20]. In [@DHM20 Section 4], it is shown that an involution $\tau$ on $Y$ induces a homotopy involution (which we also denote by $\tau$) on the Heegaard Floer complex $\mathit{CF}^-(Y)$. The pair $(\mathit{CF}^-(Y), \tau)$ defines an equivalence class in the *local equivalence group* $\mathfrak{I}$ introduced by Hendricks-Manolescu-Zemke [@HMZ Section 8.3]. We refer to the local equivalence class of $(\mathit{CF}^-(Y), \tau)$ as the *$\tau$-class* of $(Y, \tau)$, even in the case that the involution on $Y$ is denoted by something other than $\tau$, such as $\sigma$. The most salient feature of $\mathfrak{I}$ is that it admits a partial order which restricts the existence of equivariant negative-definite cobordisms, as follows. Suppose that $(Y_1, \tau_1)$ and $(Y_2, \tau_2)$ are two homology spheres equipped with involutions and let $(W, \widetilde{\tau})$ be an equivariant negative-definite cobordism between them with $b_1(W) = 0$. Suppose that there is a $\text{spin}^c$-structure $\mathfrak{s}$ on $W$ such that the Heegaard Floer grading shift $\Delta_{W, \mathfrak{s}}$ (associated to the cobordism map $F_{W, \mathfrak{s}}$) is zero and $\widetilde{\tau}_*(\mathfrak{s}) = \mathfrak{s}$. Then $$\mathit{CF}^-(Y_1, \tau_1) \leq \mathit{CF}^-(Y_2, \tau_2).$$ We call such a cobordism a *$\text{spin}^c$-fixing cobordism*. Thus, if $(\mathit{CF}^-(Y), \tau) > 0$, then there does not exist any $\text{spin}^c$-fixing, equivariant negative-definite manifold with boundary $(Y, \tau)$. Here, we write $0$ to mean the local equivalence class of the trivial complex.
A crucial insight of [@DHM20] was that the action of $\tau$ may also be combined with the involution $\iota$ defined by Hendricks and Manolescu [@HM]. The composition $\iota \tau$ also constitutes a homotopy involution on $\mathit{CF}^-(Y)$ and hence defines another class $(\mathit{CF}^-(Y), \iota \tau)$ in the local equivalence group. We refer to this class as the *$\iota \tau$-class* of $(Y, \tau)$. Crucially, this class has slightly different functoriality properties with respect to the partial order. In particular, let $(W, \widetilde{\tau})$ be a negative-definite equivariant cobordism as before, but with a $\text{spin}^c$-structure $\mathfrak{s}$ such that the Heegaard Floer grading shift $\Delta_{W, \mathfrak{s}}$ is zero and $\widetilde{\tau}_*(\mathfrak{s}) = \bar{\mathfrak{s}}$. Then $$\mathit{CF}^-(Y_1, \iota_1\tau_1) \leq \mathit{CF}^-(Y_2, \iota_2\tau_2).$$ We call such a cobordism a *$\text{spin}^c$-reversing cobordism*. Thus, if $(\mathit{CF}^-(Y), \iota \tau) > 0$, then there does not exist any $\text{spin}^c$-reversing, equivariant negative-definite manifold with boundary $(Y, \tau)$. The fact that the invariants of [@DHM20] are functorial only in the presence of a $\text{spin}^c$-fixing or $\text{spin}^c$-reversing cobordism leads to the conditions on $\widetilde{\tau}$ in the second part of Theorem [Theorem 2](#thm:1.2){reference-type="ref" reference="thm:1.2"}: *Proof of Theorem [Theorem 2](#thm:1.2){reference-type="ref" reference="thm:1.2"} (2).* We show that a positive-definite $W^+$ as in the statement of Theorem [Theorem 2](#thm:1.2){reference-type="ref" reference="thm:1.2"} cannot exist for either of the possibilities ([\[eq:combA\]](#eq:combA){reference-type="ref" reference="eq:combA"}) or ([\[eq:combB\]](#eq:combB){reference-type="ref" reference="eq:combB"}) given in the proof of the first part of Theorem [Theorem 2](#thm:1.2){reference-type="ref" reference="thm:1.2"}. Suppose for the sake of contradiction that such a $W^+$ did exist. 
Reversing orientation gives a negative-definite cobordism $\smash{-W^+}$ from $S^3$ to $-Y$ (or, equivalently, a negative-definite cobordism from $Y$ to $S^3$). Since $\pm Y$ bounds a homology ball, it has $d$-invariant zero. It follows from the argument of [@ozsvath2003absolutely Section 9.1] that any definite manifold with boundary $\pm Y$ must have diagonalizable intersection form. This easily implies the existence of a $\text{spin}^c$-structure $\mathfrak{s}$ such that the Heegaard Floer grading shift $\Delta_{-W^+, \mathfrak{s}}$ is zero. If $\widetilde{\tau}$ is homology-fixing, as in the statement of the theorem, then it is not hard to see that $\smash{-W^+}$ is a $\text{spin}^c$-fixing cobordism as defined previously. If $\widetilde{\tau}$ is homology-reversing, then $-W^+$ will be a $\text{spin}^c$-reversing cobordism.[^9] To produce a contradiction, it thus suffices to prove $$(\mathit{CF}^-(Y), \tau) > 0 \quad \text{and} \quad (\mathit{CF}^-(Y), \iota \tau) > 0$$ for both ([\[eq:combA\]](#eq:combA){reference-type="ref" reference="eq:combA"}) and ([\[eq:combB\]](#eq:combB){reference-type="ref" reference="eq:combB"}), as reversing orientation then gives the desired result. We begin by understanding the Heegaard Floer homologies of $Y_1$ and $Y_2$. The Heegaard Floer homology of $Y_1$ is displayed on the left in Figure [9](#fig:7.5){reference-type="ref" reference="fig:7.5"}. The action of $\tau$ was computed in [@DHM20 Lemma 7.5] up to a change-of-basis; this is likewise shown on the left in Figure [9](#fig:7.5){reference-type="ref" reference="fig:7.5"}. On the right in Figure [9](#fig:7.5){reference-type="ref" reference="fig:7.5"}, we have displayed the Heegaard Floer homology of $Y_2$, which may be calculated via the rational surgery formula [@OS11]. ![The Heegaard Floer homologies $\mathit{HF}^-(Y_1)$ (left) and $\mathit{HF}^-(Y_2)$ (right).
The action of $\tau$ is displayed on the left as the sum of red arrows.](F5.pdf){#fig:7.5} We first analyze the actions of $\tau$ and $\sigma$ on $Y_1$. The following was established in [@DHM20 Lemma 7.5]: $$\label{eq:y1A} (\mathit{CF}^-(Y_1), \tau) < 0 \quad \text{and} \quad (\mathit{CF}^-(Y_1), \sigma) = 0$$ and $$\label{eq:y1B} (\mathit{CF}^-(Y_1), \iota \tau) = 0 \quad \text{and} \quad (\mathit{CF}^-(Y_1), \iota \sigma) < 0.$$ In fact, we have $(\mathit{CF}^-(Y_1), \tau) = (\mathit{CF}^-(Y_1), \iota\sigma)$. The actions of $\tau$ and $\sigma$ on $Y_2$ are not so immediate. However, it is not hard to constrain their possibilities up to local equivalence. Note that if $C_1$ and $C_2$ are two Heegaard Floer complexes whose homologies are supported in even gradings, then chain maps between $C_1$ and $C_2$ (up to homotopy) are determined by their maps on homology. (See for example the argument of [@dai2019involutive Lemma 4.4].) Let $\omega$ be any involution on $\mathit{HF}^-(Y_2)$. There are two possibilities: - First suppose that $\omega$ fixes the $U$-nontorsion generator $x$. Note that $\omega y_i$ cannot be supported by $x$ for any $i$, since $\omega$ cannot map a $U$-torsion element of homology to a $U$-nontorsion element of homology. Then $(\mathit{CF}^-(Y_2), \omega)$ is easily seen to be locally trivial. - Now suppose that $\omega$ does not fix $x$. Perform a change-of-basis on $\mathit{HF}^-(Y_2)$ so that $$y_1 = x + \omega x$$ and $y_2$, $y_3$, and $y_4$ are still $U$-torsion. Then $(\mathit{CF}^-(Y_1), \tau)$ admits an equivariant local map into $(\mathit{CF}^-(Y_2), \omega)$ by sending $x$ and $y_1$ in $\mathit{HF}^-(Y_1)$ to $x$ and $y_1$ in $\mathit{HF}^-(Y_2)$. (Up to local equivalence, $y_2$ may be discarded.) Thus, in either of the above two cases, we see that $$\label{eq:y1C} (\mathit{CF}^-(Y_1), \tau) = (\mathit{CF}^-(Y_1), \iota\sigma) \leq (\mathit{CF}^-(Y_2), \omega)$$ for any involution $\omega$ on $\mathit{HF}^-(Y_2)$.
In particular, we may take $\omega \in \{\tau, \sigma, \iota\tau, \iota\sigma\}$. Now consider the complex of ([\[eq:combA\]](#eq:combA){reference-type="ref" reference="eq:combA"}). The $\tau$-class of ([\[eq:combA\]](#eq:combA){reference-type="ref" reference="eq:combA"}) is given by $$\begin{aligned} - 2(\mathit{CF}^-(Y_1), \tau) - 2(\mathit{CF}^-(Y_1), \sigma) + (\mathit{CF}^-(Y_2), \tau) &= -2(\mathit{CF}^-(Y_1), \tau) + (\mathit{CF}^-(Y_2), \tau) \\ & \geq -(\mathit{CF}^-(Y_1), \tau) \\ &> 0\end{aligned}$$ where the first equality follows from ([\[eq:y1A\]](#eq:y1A){reference-type="ref" reference="eq:y1A"}), the second inequality follows from ([\[eq:y1C\]](#eq:y1C){reference-type="ref" reference="eq:y1C"}), and the final inequality follows from ([\[eq:y1A\]](#eq:y1A){reference-type="ref" reference="eq:y1A"}). Similarly, the $\iota \tau$-class of ([\[eq:combA\]](#eq:combA){reference-type="ref" reference="eq:combA"}) is given by $$\begin{aligned} - 2(\mathit{CF}^-(Y_1), \iota\tau) - 2(\mathit{CF}^-(Y_1), \iota \sigma) + (\mathit{CF}^-(Y_2), \iota \tau) &= -2(\mathit{CF}^-(Y_1), \iota \sigma) + (\mathit{CF}^-(Y_2), \iota \tau) \\ & \geq -(\mathit{CF}^-(Y_1), \iota \sigma) \\ &> 0\end{aligned}$$ where the first equality follows from ([\[eq:y1B\]](#eq:y1B){reference-type="ref" reference="eq:y1B"}), the second inequality follows from ([\[eq:y1C\]](#eq:y1C){reference-type="ref" reference="eq:y1C"}), and the final inequality follows from ([\[eq:y1B\]](#eq:y1B){reference-type="ref" reference="eq:y1B"}). Thus, both the $\tau$-class and the $\iota\tau$-class of ([\[eq:combA\]](#eq:combA){reference-type="ref" reference="eq:combA"}) are strictly greater than zero. A similar computation holds for ([\[eq:combB\]](#eq:combB){reference-type="ref" reference="eq:combB"}). This completes the proof. 
◻ Having established Theorem [Theorem 2](#thm:1.2){reference-type="ref" reference="thm:1.2"}, the proof of Corollary [Corollary 3](#cor:1.3){reference-type="ref" reference="cor:1.3"} is immediate: *Proof of Corollary [Corollary 3](#cor:1.3){reference-type="ref" reference="cor:1.3"}.* Consider the cork obtained from Theorem [Theorem 2](#thm:1.2){reference-type="ref" reference="thm:1.2"}. The first part of Theorem [Theorem 2](#thm:1.2){reference-type="ref" reference="thm:1.2"} shows that there is no extension of $\tau$ over any negative-definite $X$ as in the corollary statement. On the other hand, any extension of $\tau$ over a positive-definite such $X$ would clearly be either homology-fixing or homology-reversing. Hence such an extension is ruled out by the second part of Theorem [Theorem 2](#thm:1.2){reference-type="ref" reference="thm:1.2"}. Since $Y$ is a homology sphere, these are the only two possibilities for $X$. ◻ **Remark 95**. In principle, it may be possible to replicate Corollary [Corollary 3](#cor:1.3){reference-type="ref" reference="cor:1.3"} using the methods of [@DHM20]. It suffices to find an example of a cork $(Y, \tau)$ such that $$\underline{d}_\tau(Y) < 0 < \overline{d}_\tau(Y) \quad \text{and} \quad \underline{d}_{\iota \tau}(Y) < 0 < \overline{d}_{\iota \tau}(Y);$$ see [@DHM20 Remark 4.5]. In the involutive Heegaard Floer setting, such algebraic examples can be constructed through connected sums of Brieskorn spheres of varying sign; see [@DaiStoff19 Corollary 1.7]. Although the corks considered in [@DHM20 Theorem 1.13] are conjecturally similar in behavior to these, at present we do not have the computational tools necessary to verify this. The authors do not expect that the specific examples presented in this paper can be re-established using any of the methods of [@LRS18; @ASA20; @DHM20; @KMT23A].
In the case that $X$ is of the form $W' \# n \smash{\mathbb{CP}^2}$ or $W' \# n \smash{\overline{\mathbb{CP}}}^2$ for a homology ball $W'$, it turns out that we can remove the constraint on the action of $\widetilde{\tau}$ in Theorem [Theorem 2](#thm:1.2){reference-type="ref" reference="thm:1.2"}. This leads to the proof of Corollary [Corollary 4](#cor:cp2){reference-type="ref" reference="cor:cp2"}: *Proof of Corollary [Corollary 4](#cor:cp2){reference-type="ref" reference="cor:cp2"}.* Let $(Y, \tau)$ be any pair for which $r_0(Y, \tau) < \infty$ and $(\mathit{CF}^-(Y), \tau) > 0$. For instance, we may consider the strong cork $(Y, \tau)$ established in the proof of Corollary [Corollary 3](#cor:1.3){reference-type="ref" reference="cor:1.3"}. (The additional property that $(\mathit{CF}^-(Y), \iota\tau) > 0$ will not be needed; simpler examples are thus possible.) The same argument as before shows $\tau$ does not extend as a diffeomorphism over any negative-definite $X$ with $H_1(X; \mathbb{Z}_2) = 0$ and does not extend as a homology-fixing diffeomorphism over any positive-definite $X$ with $H_1(X; \mathbb{Z}_2) = 0$. Clearly, the first claim shows $\tau$ does not extend over any $W' \# n \smash{\overline{\mathbb{CP}}}^2$ as in the corollary statement. Now suppose that there were an extension $\widetilde{\tau}$ over some $W' \# n \smash{\mathbb{CP}^2}$. It is not difficult to see that any automorphism of the intersection form of $n \smash{\mathbb{CP}^2}$ can be realized by a self-diffeomorphism of $n \smash{\mathbb{CP}^2}$; see for example [@Ba23 Section 2]. After isotopy, we may assume that such a self-diffeomorphism fixes a ball. It follows that by postcomposing $\widetilde{\tau}$ with an appropriate self-diffeomorphism of (punctured) $n \smash{\mathbb{CP}^2}$, we obtain a new extension $\widetilde{\tau}'$ of $\tau$ which acts as the identity on second homology. This contradicts the second claim in the previous paragraph. ◻ **Remark 96**.
If the contractible manifold $W$ is fixed in Corollary [Corollary 4](#cor:cp2){reference-type="ref" reference="cor:cp2"}, then it is not difficult to give several families of corks which survive definite stabilization of either sign. One method (following the formalism of [@DHM20]) is as follows: let $W$ be any cork for which the Heegaard Floer relative invariant $F_W(1)$ is not fixed by the action of $\tau$. It is not hard to choose $W$ so that $F_W(1)$ is also not fixed by $\iota \tau$.[^10] Then the blow-up formula for the Heegaard Floer cobordism maps shows that $\tau$ does not extend over $W \# \smash{\overline{\mathbb{CP}}}^2$. If we now take any two such corks $W_1$ and $W_2$, then the tensor product formula easily establishes the same for $W_1 \# -W_2$ and $W_2 \# - W_1$; hence $W_1 \# -W_2$ is a cork that survives stabilization by both $\smash{\mathbb{CP}^2}$ and $\smash{\overline{\mathbb{CP}}}^2$ (at least in the case $n = 1$). Many other arguments are possible (see also the techniques of [@Y19]); the authors thank Anubhav Mukherjee and Kouichi Yasui for related discussions. Finally, we list an additional motivation for Theorem [Theorem 2](#thm:1.2){reference-type="ref" reference="thm:1.2"}. If $K$ is a strongly invertible or periodic knot with involution $\tau$, then any $1/n$-surgery on $K$ inherits an involution coming from $\tau$. It is natural to ask whether every symmetry on an integer homology sphere arises in this manner: **Question 97**. *Does there exist a pair $(Y, \tau)$ such that $Y$ is surgery on a knot, but $(Y, \tau)$ does not arise via surgery on any equivariant knot?* The question of whether a manifold is Dehn surgery on a knot is well-studied; see for example [@HKL16] for a Floer-theoretic obstruction. Question [Question 97](#qn:surgery){reference-type="ref" reference="qn:surgery"} may be viewed as the natural equivariant version of this problem. 
The relation between Question [Question 97](#qn:surgery){reference-type="ref" reference="qn:surgery"} and Theorem [Theorem 2](#thm:1.2){reference-type="ref" reference="thm:1.2"} is given by Lemma [Lemma 15](#lem:knotsB){reference-type="ref" reference="lem:knotsB"}. This shows that Theorem [Theorem 2](#thm:1.2){reference-type="ref" reference="thm:1.2"} can be used to produce pairs $(Y, \tau)$ which do not arise via surgery on any equivariant knot. Unfortunately, it is not difficult to check that the example given in the proof of Theorem [Theorem 2](#thm:1.2){reference-type="ref" reference="thm:1.2"} does not even arise as (nonequivariant) Dehn surgery on a knot. The obstruction provided by Theorem [Theorem 2](#thm:1.2){reference-type="ref" reference="thm:1.2"} actually applies to any pair in the equivariant homology cobordism class of $(Y, \tau)$; the authors do not know if this equivariant homology cobordism class contains a representative which arises as (nonequivariant) Dehn surgery on a knot. ## Proof of Theorems [Theorem 5](#thm:1.4){reference-type="ref" reference="thm:1.4"} and [Theorem 6](#thm:1.5){reference-type="ref" reference="thm:1.5"} {#sec:7.4} We now prove Theorems [Theorem 5](#thm:1.4){reference-type="ref" reference="thm:1.4"} and [Theorem 6](#thm:1.5){reference-type="ref" reference="thm:1.5"}. We begin with a general method for establishing the linear independence of a sequence of equivariant homology spheres. For convenience, we use addition to denote the operation of connected sum and negation to denote orientation reversal throughout. **Theorem 98**. *Let $\{(Y_i, \tau_i)\}_{i = 1}^\infty$ be a sequence of oriented integer homology $3$-spheres equipped with orientation-preserving involutions $\tau_i$. Assume the fixed-point set of each $\tau_i$ is a copy of $S^1$, so that the equivariant connected sum operation is well-defined. Suppose that:* 1. 
*[\[it:seq1\]]{#it:seq1 label="it:seq1"} $r_0(Y_1, \tau_1) > r_0( Y_2, \tau_2 ) > \cdots > r_0(Y_i, \tau_i) > \cdots$;* 2. *[\[it:seq2\]]{#it:seq2 label="it:seq2"} $r_0(Y_1, \tau_1) < \infty$; and,* 3. *[\[it:seq3\]]{#it:seq3 label="it:seq3"} $r_0(- Y_i, \tau_i) = \infty$ for each $i$.* *Then any nontrivial linear combination of elements from $\{(Y_i, \tau_i)\}_{i = 1}^\infty$ yields a strong cork.* *Proof.* We follow the proof of [@NST19 Corollary 5.6], replacing the original $r_s$-invariant with the involutive $r_s$-invariant. Suppose that we had a linear combination $$\sum_i n_i (Y_i, \tau_i) = 0.$$ Let $k$ be the maximal index for which $n_k \neq 0$. Without loss of generality, suppose $n_k > 0$. Then $$(Y_k, \tau_k) = - \sum_{i = 1}^{k-1} n_i(Y_i, \tau_i) - (n_k - 1) (Y_k, \tau_k).$$ Consider the involutive $r_0$-invariant of the right-hand side. By repeatedly applying the connected sum inequality and utilizing ([\[it:seq1\]](#it:seq1){reference-type="ref" reference="it:seq1"}) and ([\[it:seq3\]](#it:seq3){reference-type="ref" reference="it:seq3"}), we see that this is greater than or equal to $r_0(Y_{k-1}, \tau_{k-1})$. (Indeed, it is greater than or equal to $r_0(Y_i, \tau_i)$ for the largest value of $i < k$ such that $n_i < 0$.) Considering the involutive $r_0$-invariant of the left-hand side then contradicts ([\[it:seq1\]](#it:seq1){reference-type="ref" reference="it:seq1"}), given that this is finite by ([\[it:seq2\]](#it:seq2){reference-type="ref" reference="it:seq2"}). ◻ Theorem [Theorem 5](#thm:1.4){reference-type="ref" reference="thm:1.4"} follows quickly from Theorem [Theorem 98](#key:sequence){reference-type="ref" reference="key:sequence"}: *Proof of [Theorem 5](#thm:1.4){reference-type="ref" reference="thm:1.4"}.* Denote by $$Y_n = S^3_{1/n}(K).$$ It suffices to prove that $\{(Y_n, \tau)\}_{n = 1}^\infty$ satisfies the assumptions of [Theorem 98](#key:sequence){reference-type="ref" reference="key:sequence"}. 
The first and third conditions are immediate from Theorem [Theorem 1](#thm:1.1){reference-type="ref" reference="thm:1.1"} (combined with Lemma [Lemma 14](#lem:knotsA){reference-type="ref" reference="lem:knotsA"}) and Theorem [Theorem 82](#thm:definitebounding){reference-type="ref" reference="thm:definitebounding"}. If $K$ is the first knot $K_0$ in the family displayed in Figure [1](#fig:1.1){reference-type="ref" reference="fig:1.1"}, then the second condition is just Lemma [Lemma 92](#cal for Akbulut){reference-type="ref" reference="cal for Akbulut"}. If $K$ is a different element in this family, then we have an equivariant negative-definite cobordism from $S^3_{+1}(K)$ to $S^3_{+1}(K_0)$, as displayed in Figure [10](#fig:7.3){reference-type="ref" reference="fig:7.3"}. Hence Theorem [Theorem 1](#thm:1.1){reference-type="ref" reference="thm:1.1"} again shows that $r_0(Y_n(K), \tau)$ is finite. Applying [Theorem 98](#key:sequence){reference-type="ref" reference="key:sequence"} completes the proof. ◻ ![An equivariant negative-definite cobordism from $S^3_{+1}(K)$ to $S^3_{+1}(K_0)$ given by attaching the $-1$-framed green 2-handles.](9_46_cobordism.pdf){#fig:7.3} Finally, we have: *Proof of Theorem [Theorem 6](#thm:1.5){reference-type="ref" reference="thm:1.5"}.* Let $K$ be any of the strongly invertible knots in Figure [1](#fig:1.1){reference-type="ref" reference="fig:1.1"}. We have already checked in the proof of Theorem [Theorem 5](#thm:1.4){reference-type="ref" reference="thm:1.4"} that $$r_0(S^3_{1/n}(K), \tau) < \infty$$ for any $n \in \mathbb N$. A simply-connected, equivariant negative-definite cobordism from $\smash{(S^3_{1/n}(K), \tau)}$ to itself would violate the strict inequality in Theorem [Theorem 1](#thm:1.1){reference-type="ref" reference="thm:1.1"}. (In the positive-definite case, we turn the cobordism around.) This completes the proof. 
◻ Another application of Theorem [Theorem 6](#thm:1.5){reference-type="ref" reference="thm:1.5"} is as follows: let $K$ be any of the knots in Theorem [Theorem 6](#thm:1.5){reference-type="ref" reference="thm:1.5"}. Using the Montesinos trick, write $$S^3_{1/n}(K) = \Sigma_2(K'_n)$$ for some knot $K'_n$ in $S^3$. Here, $\Sigma_2(K'_n)$ is the branched double cover of $K'_n$ and the induced symmetry $\tau$ on the left is identified with the branching involution on the right. If $C$ is any concordance from $K'_n$ to $K'_n$, then $\Sigma_2(C)$ constitutes an equivariant homology cobordism from $\smash{S^3_{1/n}(K)}$ to itself. Moreover, if we had $$\pi_1([0,1]\times S^3 \setminus C ) = \mathbb{Z},$$ then this homology cobordism would be simply connected, which is ruled out by Theorem [Theorem 6](#thm:1.5){reference-type="ref" reference="thm:1.5"}. Hence the methods of this paper can be used to rule out self-concordances with fundamental group $\mathbb{Z}$. Such results can already be obtained using singular knot instanton Floer theory with a general holonomy parameter [@DISST22], at least in the case that $K_n'$ is not algebraically slice. ## Nonorientable surfaces {#sec:7.5} We now prove Theorems [Theorem 7](#thm:1.6){reference-type="ref" reference="thm:1.6"} and [Theorem 8](#thm:1.7){reference-type="ref" reference="thm:1.7"}. These will follow immediately from the discussion of Section [2.3](#sec:2.3){reference-type="ref" reference="sec:2.3"} and our prior results regarding equivariant bounding. For the benefit of the reader, we first briefly discuss some details regarding the geography question for nonorientable surfaces. Recall that we have the Gordon-Litherland bound [@GL78]: $$\left | \sigma(K) - e(F)/2 \right | \leq h(F).$$ This corresponds to the region $\Sigma$ displayed below on the left in Figure [11](#fig:nonorientable){reference-type="ref" reference="fig:nonorientable"}. 
In general (ignoring issues of parity), the set of realizable pairs is a subset of $\Sigma$ consisting of the union of sets of the form $\{|a - e/2| \leq h - b\}$, as shown in Figure [11](#fig:nonorientable){reference-type="ref" reference="fig:nonorientable"}. (To see this, note that we may take the connected sum of any surface with $\smash{\mathbb{RP}^2}$, which increases $h$ by one and changes $e$ by $\pm 2$.) Extremal surfaces correspond to the points on the boundary of $\Sigma$. ![From left-to-right: $(a)$ the region due to the Gordon-Litherland bound; $(b)$ a schematic example of the set of realizable pairs; $(c)$ the region due to the Ozsváth-Stipsicz-Szabó inequality; and $(d)$ the region due to Batson's bound.](F7.pdf){#fig:nonorientable} Several authors have used Floer theory to derive additional constraints on the set of realizable pairs. The most well-known of these are the following: in [@Ba14], Batson showed that $$\label{eq:constraint1} e(F)/2 - 2d(S^3_{-1}(K)) \leq h(F),$$ while in [@OSSunoriented], Ozsváth-Stipsicz-Szabó proved $$\label{eq:constraint2} \left | 2\Upsilon_K(1) - e(F)/2 \right | \leq h(F).$$ These are schematically displayed in Figure [11](#fig:nonorientable){reference-type="ref" reference="fig:nonorientable"}. Either constraint can be used (in conjunction with the Gordon-Litherland bound) to prove that the nonorientable slice genus may be arbitrarily large and (additionally) rule out a subset of extremal surfaces. Note, however, that neither inequality is individually capable of ruling out the set of *all* extremal surfaces. Moreover, using the fact that $$2V_0(-K) \geq \Upsilon_K(1)$$ as shown in [@Sa23 Proposition 2.54], it is possible to show that such a result cannot be obtained even by combining ([\[eq:constraint1\]](#eq:constraint1){reference-type="ref" reference="eq:constraint1"}) and ([\[eq:constraint2\]](#eq:constraint2){reference-type="ref" reference="eq:constraint2"}). 
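For orientation, the three displayed constraints can be bundled into a single admissibility test on a pair $(e, h)$. The sketch below is illustrative only: the inputs standing for $\sigma(K)$, $d(S^3_{-1}(K))$, and $\Upsilon_K(1)$ are hypothetical placeholder values, not computed invariants of any particular knot, and parity issues are ignored.

```python
def pair_allowed(e, h, sigma, d_minus1, upsilon1):
    """Check (e, h) against the Gordon-Litherland bound, Batson's bound,
    and the Ozsvath-Stipsicz-Szabo inequality (parity ignored)."""
    gordon_litherland = abs(sigma - e / 2) <= h   # |sigma(K) - e/2| <= h
    batson = e / 2 - 2 * d_minus1 <= h            # e/2 - 2 d(S^3_{-1}(K)) <= h
    oss = abs(2 * upsilon1 - e / 2) <= h          # |2 Upsilon_K(1) - e/2| <= h
    return gordon_litherland and batson and oss

# Hypothetical invariants, for illustration only:
sigma, d_minus1, upsilon1 = -8, 0, -1

# Scanning (e, h) pairs traces out the intersection of the three regions
# from Figure 11; the intersection is nonempty for these sample values.
allowed = [(e, h) for e in range(-20, 21, 2) for h in range(0, 8)
           if pair_allowed(e, h, sigma, d_minus1, upsilon1)]
print(len(allowed))
```

With these sample values the regions overlap (for instance $(e,h) = (-10,3)$ passes all three tests), illustrating that the combined bounds carve out, but do not empty, the Gordon-Litherland region.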
The authors thank Kouki Sato for bringing this observation to their attention. *Proof of Theorem [Theorem 7](#thm:1.6){reference-type="ref" reference="thm:1.6"}.* Let $Y$ be the cork constructed in Theorem [Theorem 2](#thm:1.2){reference-type="ref" reference="thm:1.2"}. Using the Montesinos trick, we may write $Y$ as the branched double cover of some knot $J$ in such a way that the branching involution over $J$ corresponds to the relevant involution on $Y$. Explicitly, this means that $J$ is either $$-2 A_1 \# - 2B_1 \# A_2 \quad \text{or} \quad -2A_1 \# -2B_1 \# B_2,$$ with $A_n$ and $B_n$ as in Figure [2](#fig:1.2){reference-type="ref" reference="fig:1.2"}. Suppose that $J$ bounded an extremal surface $F$. Then $\Sigma_2(J) = \partial \Sigma_2(F)$. As discussed in Section [2.3](#sec:2.3){reference-type="ref" reference="sec:2.3"}, $\Sigma_2(F)$ is a homology-reversing equivariant definite manifold with $H_1(\Sigma_2(F), \mathbb{Z}_2) = 0$. This contradicts Theorem [Theorem 2](#thm:1.2){reference-type="ref" reference="thm:1.2"}. ◻ *Proof of Theorem [Theorem 8](#thm:1.7){reference-type="ref" reference="thm:1.7"}.* Consider any $K$ and $n$ as in Theorem [Theorem 6](#thm:1.5){reference-type="ref" reference="thm:1.5"}. Consider the cork $$Y = S^3_{1/n}(K) \# - S^3_{1/n}(K).$$ Using the Montesinos trick, we may write $Y$ as the branched double cover of some knot $J$ in such a way that the branching involution over $J$ corresponds to the relevant involution on $Y$. For instance, when $K = \overline{9}_{46}$, we have $$J = A_n \# -A_n,$$ with $A_n$ as in Figure [1](#fig:1.1){reference-type="ref" reference="fig:1.1"}. Clearly, $J$ is slice. Taking the connected sum of any slice disk with $\smash{\mathbb{RP}^2}$ gives a trivial extremal surface. Suppose that $J$ bounded an extremal $\mathbb{Z}_2$-surface $F$. Then $\Sigma_2(J) = \partial \Sigma_2(F)$ and it is easily checked that $\Sigma_2(F)$ is simply-connected. 
As discussed in Section [2.3](#sec:2.3){reference-type="ref" reference="sec:2.3"}, $\Sigma_2(F)$ is a homology-reversing equivariant definite manifold. This contradicts Theorem [Theorem 6](#thm:1.5){reference-type="ref" reference="thm:1.5"}. ◻ [^1]: For *rational* homology spheres, the fact that there are multiple $d$-invariants makes this problem much more approachable; see [@GL20]. However, for an *integer* homology sphere, note that any straightforward approach using the $d$- or $h$-invariant necessarily fails. [^2]: We generally use "definite bounding" to refer to a definite bounding $W$ with $H_1(W, \mathbb{Z}_2) = 0$. [^3]: Every $K$ bounds *some* $\mathbb{Z}_2$-surface by taking the connected sum of a Seifert surface with an unknotted $\smash{\mathbb{RP}^2}$. [^4]: We could require that $\widetilde{\tau}_*$ act as $\pm \operatorname{id}$ on $H_2(W, \mathbb{Z})$, but here we will use $H_2(W, \mathbb{Q})$ due to Theorem [\[lem:2.9\]](#lem:2.9){reference-type="ref" reference="lem:2.9"}. [^5]: When we work over $\mathbb{Z}$, $c$ is allowed to be $\pm1$. [^6]: *Here, we assume that $r < 0 \leq s$ and $r + \delta < 0 \leq s + \delta$.* [^7]: There are two absolute $\mathbb{Z}$-degrees for $\theta$, $\operatorname{ind}_+ (\theta)=0$ and $\operatorname{ind}_- (\theta)=-3$. The dimension of the moduli space $M(a, \theta)$ can be computed by $\operatorname{ind}(a) - \operatorname{ind}_+(\theta) = \operatorname{ind}(a)$. [^8]: If $n \neq 0$, then $\tau$ will not literally be an involution. The fact that $\tau$ is a homotopy involution in the filtered sense then implies $\deg_I(\beta_1^\vee) = \deg_I(\beta_2^\vee) \leq 0$. [^9]: The second part of Theorem [Theorem 2](#thm:1.2){reference-type="ref" reference="thm:1.2"} can thus actually be strengthened to exclude all $\text{spin}^c$-fixing and $\text{spin}^c$-reversing equivariant positive-definite cobordisms; we have suppressed this for the sake of clarity of the theorem statement. 
[^10]: This follows from computations of [@DHM20]. As an explicit example, we may take $W$ to be the boundary connected sum of two copies of the Akbulut-Mazur cork with the involution $\tau \# \sigma$.
--- abstract: | In 2009-2011 Pelayo and Vũ Ngọc classified semitoric integrable systems in terms of five invariants. Four of the invariants were already well-understood prior to the classification, but the fifth invariant, the so-called *twisting index invariant*, came as a surprise. Having a better understanding of the twisting index invariant of a semitoric system is a necessary step towards extending the symplectic classification result to more general situations, such as almost-toric systems, hypersemitoric systems, or higher dimensional systems which admit underlying complexity-one torus actions. The twisting index encodes how the structure in a neighborhood of a focus-focus fiber compares to the large-scale structure of the semitoric system. Pelayo and Vũ Ngọc originally defined the twisting index in terms of comparing certain momentum maps. The first half of the paper is devoted to giving an equivalent definition of the twisting index in terms of topological-geometric objects, such as homology cycles. The second half of the paper is concerned with computing the twisting index of a specific family of systems (the *generalized coupled angular momenta*) with two focus-focus singular points, which is the first time that the twisting index has been computed for a system with more than one focus-focus point. Along the way, we also compute the terms of the Taylor series invariant up to second order, completing the computation of all five semitoric invariants for this system. Thus there is now a fully classified third family of semitoric systems after the completion of the classification of spin oscillators and coupled angular momenta (Alonso $\&$ Hohloch $\&$ Dullin in 2019 and 2020). 
author: - Jaume Alonso $\&$ Sonja Hohloch $\&$ Joseph Palmer bibliography: - twistingindexref.bib title: The twisting index in semitoric systems --- # Introduction A Hamiltonian dynamical system is known as *integrable* if it admits the maximal number of independent Poisson commuting quantities conserved by the dynamics. Integrable systems form an important class of dynamical systems, for instance in classical mechanics. Furthermore, they arise often in nature, and though they enjoy certain symmetries, they can also exhibit very complicated behavior. Many examples of integrable systems admit a circular symmetry in the form of an ${\mathbb S}^1$-action, and this class will be the focus of the present paper. The classification of toric integrable systems due to Delzant [@Delzant1988] was extended in dimension four in 2009-2011 by Pelayo & Vũ Ngọc [@PVNinventiones; @PVNacta] to a more general class of integrable systems called *semitoric*. While a toric integrable system in dimension four admits an underlying $\mathbb{T}^2$-action, a semitoric integrable system in dimension four admits an underlying ${\mathbb S}^1\times{\mathbb R}$-action. This allows for semitoric systems to have a certain class of singular points known as *focus-focus singularities* which cannot appear in toric integrable systems. The original classification of Pelayo & Vũ Ngọc [@PVNinventiones; @PVNacta] applied only to systems satisfying the generic condition that the momentum map of the ${\mathbb S}^1$-action contained at most one focus-focus point in each fiber (such systems are called *simple semitoric systems*), but this was extended by Palmer & Pelayo & Tang [@PPT] to a classification of all semitoric systems, simple or not. The classification of semitoric systems represents a substantial generalization of the toric classification. Semitoric systems have been an active area of research in recent years, see for instance the survey papers [@HHM; @AH; @Pe; @GH]. 
The classification of simple semitoric systems is in terms of five invariants. Four of these invariants were already conceptually well-understood: the definitions of the *number of focus-focus points invariant* and the *height invariant* are straightforward, and the *Taylor series* and *semitoric polygon invariants* had already been defined and geometrically interpreted in this context by Vũ Ngọc several years earlier [@VuNgoc03; @VuNgoc07]. Surprisingly, these four invariants were not enough to classify simple semitoric systems. In order to complete the classification, Pelayo & Vũ Ngọc had to introduce a fifth invariant, the so-called *twisting index invariant*. The twisting index invariant is defined by comparing the fibration induced by the integrable system locally near a focus-focus singular point to the global structure determined by a choice of the semitoric polygon invariant. This original definition was stated in terms of comparing momentum maps. The present paper will provide geometric-topological interpretations of the twisting index invariant by giving several equivalent descriptions in terms of homology cycles. Pelayo & Vũ Ngọc formulated the invariants and classified simple semitoric systems, but actually computing these invariants for explicit examples, and even for relatively elementary systems, turns out to be very involved, for instance see [@ADH; @ADH2; @Du; @LFPecoupledangular]. In this paper, we do not only provide geometric formulations of the twisting index invariant, but we compute for the first time the twisting index for an explicit system with two focus-focus points. The twisting index has never been computed for a system with more than one focus-focus point before, and we will see that new phenomena appear in this case compared with the case of a single focus-focus point, which was studied in Alonso & Dullin & Hohloch [@ADH; @ADH2]. 
These calculations are of high computational complexity, and involve combining theoretical knowledge of the twisting index and Taylor series invariants with computational techniques related to solving elliptic integrals. In summary, the main contributions of this paper are: 1. We give an overview of the construction of the semitoric invariants by Pelayo and Vũ Ngọc (for systems satisfying the simplicity condition), taking into account the most recent developments (Section [3](#ss:invariants){reference-type="ref" reference="ss:invariants"}), 2. We give a geometric interpretation of the twisting index, which is a necessary prerequisite for any attempts to extend it to any broader class of integrable systems (Theorem [Theorem 1](#thm:geometry){reference-type="ref" reference="thm:geometry"}), 3. We compute for the first time the twisting index invariant (and along the way we compute several terms of the Taylor series invariant) for a system with two focus-focus points (Theorems [Theorem 3](#thm:twist-intro){reference-type="ref" reference="thm:twist-intro"} and [Theorem 4](#thm:Taylor-intro){reference-type="ref" reference="thm:Taylor-intro"}), 4. In order to compute these invariants, we investigate in general how the Taylor series invariant is affected by transformations that change the signs of the components of the momentum map (Proposition [Proposition 5](#prop:symmetry-general){reference-type="ref" reference="prop:symmetry-general"}). The system that we consider in the current paper is actually a one-parameter family of systems, which is semitoric for all but a finite number of values of the parameter. In fact, since the symplectic manifold and underlying ${\mathbb S}^1$-action are independent of the parameter, it is an example of a *semitoric family*, as studied by Le Floch & Palmer [@LFP-fam1; @LFP-fam2]. The underlying ${\mathbb S}^1$-action is an important feature of semitoric systems. 
Recall that a group action is called *effective* if the identity is the only element that fixes the entire space. A 4-dimensional symplectic manifold equipped with an effective Hamiltonian ${\mathbb S}^1$-action is called an ${\mathbb S}^1$-space, and such spaces were classified by Karshon [@karshon] in terms of a labeled graph. The relationship between the classification of ${\mathbb S}^1$-spaces and the classification of semitoric systems was studied by Hohloch & Sabatini & Sepe [@HSS], and in particular the twisting index is one of the invariants of semitoric systems which is not encoded in the invariants of the underlying ${\mathbb S}^1$-space; it is a purely semitoric invariant. The problem of lifting Hamiltonian ${\mathbb S}^1$-actions to integrable systems has also been considered by Hohloch & Palmer [@Hohloch2022]. The problem of recovering the semitoric invariants from the joint spectrum of the corresponding quantum integrable system has also been studied, such as by Le Floch & Pelayo & Vũ Ngọc [@LFPeVN2016], and in particular a technique for recovering the twisting index was first discovered by Le Floch & Vũ Ngọc [@LFVN21]. ## Main results The twisting index can be encoded as one integer for each focus-focus point assigned to each representative of the semitoric polygon invariant. As mentioned above, it encodes the way the semilocal normal form around the focus-focus fiber lies with respect to the global integral affine structure. The original definition was in terms of a local preferred momentum map, and in the present paper we will show that there are several equivalent formulations. Let $m$ be a focus-focus point. 
For the following theorem we use the notation: - $\kappa^\Delta$ denotes the twisting index of $m$ relative to the semitoric polygon $(\Delta,b,\varepsilon)$ (for the definition of $(\Delta,b,\varepsilon)$, see Section [3.2](#sss:polygon){reference-type="ref" reference="sss:polygon"}); - $\mu_\Delta$ denotes the generalized momentum map associated to the polygon $(\Delta,b,\varepsilon)$ (for the definition, see Section [3.2](#sss:polygon){reference-type="ref" reference="sss:polygon"}); - $\nu$ is the preferred momentum map near $m$, as in Pelayo & Vũ Ngọc [@PVNinventiones] (see Section [3.5](#sss:twisting){reference-type="ref" reference="sss:twisting"}); - $I_\Delta$ is the action associated to $(\Delta,b,\varepsilon)$ (see Section [4](#sec:geometricinterpret){reference-type="ref" reference="sec:geometricinterpret"}); - $\xi$ is the preferred action near $m$ (see Section [3.5](#sss:twisting){reference-type="ref" reference="sss:twisting"}); - $(S^\Delta)^\infty$ and $(S^\text{pref})^\infty$ are the Taylor series associated to $\Delta$ and the preferred Taylor series respectively, both are Taylor series in the variables $l,j$ (see Section [4](#sec:geometricinterpret){reference-type="ref" reference="sec:geometricinterpret"}); - $\gamma_L^z,\gamma_\Delta^z,\gamma_\text{pref}^z\in H_1(\Lambda_z)$ are cycles on a fiber $\Lambda_z$ near $m$ (see Section [4](#sec:geometricinterpret){reference-type="ref" reference="sec:geometricinterpret"}); Denote by $\mathrm{Re}(z)$ the real part of a complex number $z$ and let $T$ be the matrix given in Equation [\[eqn:T\]](#eqn:T){reference-type="eqref" reference="eqn:T"}. Then we have the following: **Theorem 1**. *The twisting index can equivalently be described in the following ways:* 1. *[\[itemthm:original\]]{#itemthm:original label="itemthm:original"} The original definition, which is a difference of momentum maps: $T^{\kappa^\Delta} \mu_\Delta = \nu$;* 2. 
*[\[itemthm:actions\]]{#itemthm:actions label="itemthm:actions"} A difference of actions: $I^\Delta(z)-\xi(z) = \kappa^\Delta \mathrm{Re}(z)$;* 3. *[\[itemthm:series\]]{#itemthm:series label="itemthm:series"} A difference of Taylor series: $\kappa^\Delta 2\pi l = (S^\Delta)^\infty - (S^\text{pref})^\infty$;* 4. *[\[itemthm:cycles\]]{#itemthm:cycles label="itemthm:cycles"} A difference of cycles: $\kappa^\Delta \gamma_L^z = \gamma_\Delta^z - \gamma_\text{pref}^z$ as elements of $H_1(\Lambda_z)$.* In each case, we compare an object determined by the choice of polygon with a local preferred object described in a neighborhood of the fiber containing the focus-focus point $m$. Item [\[itemthm:original\]](#itemthm:original){reference-type="eqref" reference="itemthm:original"} is the original definition of Pelayo & Vũ Ngọc [@PVNinventiones], and the other items are all proved in Section [4](#sec:geometricinterpret){reference-type="ref" reference="sec:geometricinterpret"}. Item [\[itemthm:actions\]](#itemthm:actions){reference-type="eqref" reference="itemthm:actions"} is an easy consequence of the original definition and is stated as Lemma [Lemma 25](#lem:twist-action){reference-type="ref" reference="lem:twist-action"}. Item [\[itemthm:series\]](#itemthm:series){reference-type="eqref" reference="itemthm:series"} is from Proposition [Proposition 26](#prop:difference-Taylor){reference-type="ref" reference="prop:difference-Taylor"}, and item [\[itemthm:cycles\]](#itemthm:cycles){reference-type="eqref" reference="itemthm:cycles"} is given in Proposition [Proposition 28](#prop:cycles){reference-type="ref" reference="prop:cycles"}. The way in which the twisting index is linked to the Taylor series (item [\[itemthm:series\]](#itemthm:series){reference-type="eqref" reference="itemthm:series"} above) is similar to the way that it was treated in [@LFVN21; @PPT; @Jaume-thesis]. Now we introduce the system for which we will compute the twisting index. 
This family, which is a one-parameter subfamily of the family of systems studied in Hohloch & Palmer [@HoPa2018] and Alonso & Hohloch [@AH2], has a relatively simple dependence on the parameter. Let ${\mathbb S}^2$ be equipped with Cartesian coordinates $(x,y,z)$ induced from the inclusion ${\mathbb S}^2\subset{\mathbb R}^3$ as the unit sphere and symplectic form $\omega_{{\mathbb S}^2} = x \mathrm{d}y\wedge \mathrm{d}z + y \mathrm{d}z\wedge \mathrm{d}x +z \mathrm{d}x\wedge \mathrm{d}y$. Let $(M,\omega)$ be the symplectic manifold given by $M:={\mathbb S}^2 \times {\mathbb S}^2$ with coordinates $(x_1,y_1,z_1,x_2,y_2,z_2)$ and symplectic form $\omega:= - (\omega_{{\mathbb S}^2} \oplus 2 \omega_{{\mathbb S}^2})$. Then we define, for $s\in[0,1]$, the parameter-dependent family of systems $(M,\omega,F_s)$, with $F_s:=(L,H_s):M \to {\mathbb R}^2$ given by $$\label{eqn_ssys} \left\{ \begin{aligned} L(x_1, y_1, z_1 , x_2, y_2, z_2)& := z_1 + 2 z_2 , \\ H_s(x_1, y_1, z_1 , x_2, y_2, z_2) & := (1 - s) z_1 + s z_2 +2 (1 - s) s (x_1 x_2 + y_1 y_2). \end{aligned} \right.$$ The Hamiltonian flow of the function $L$ corresponds to a simultaneous rotation on both spheres about their respective vertical axes. The Hamiltonian flow of $H_s$ corresponds to an interpolation between rotations on each sphere and the scalar product of the horizontal projections of both spheres. Note that we took this symplectic form, in which the second factor is scaled by two, so that the focus-focus values occur at different values of $L$ (i.e. so that this integrable system satisfies the condition of *simplicity*). **Remark 2**. The momentum map of a semitoric system is often written in the form $F = (J,H)$, where $J$ generates the $\mathbb{S}^1$-action. In this paper we use $F = (L,H)$ since $L$ is an angular momentum and we use $J$ to denote the imaginary action (see Section [5.4](#sec:prep-for-TS){reference-type="ref" reference="sec:prep-for-TS"}). 
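That $L$ and $H_s$ Poisson-commute can be checked symbolically. The sketch below assumes that the bracket induced by $\omega = -(\omega_{{\mathbb S}^2}\oplus 2\omega_{{\mathbb S}^2})$ is, on functions of the ambient coordinates, $\{f,g\} = \sum_i c_i\, v_i \cdot (\nabla_i f \times \nabla_i g)$ with $c_1 = -1$ and $c_2 = -\tfrac12$; the overall signs are a convention assumption, and only the relative factor $c_1 = 2c_2$ (coming from the scaling of the second sphere) matters for the cancellation.

```python
import sympy as sp

x1, y1, z1, x2, y2, z2, s = sp.symbols('x1 y1 z1 x2 y2 z2 s')

def bracket(f, g):
    # Poisson bracket on S^2 x S^2 for omega = -(w (+) 2w): on each sphere,
    # {f, g} = c * v . (grad f x grad g). The second factor carries c = -1/2,
    # since scaling the symplectic form by 2 scales the bracket by 1/2.
    total = sp.Integer(0)
    for coords, c in [((x1, y1, z1), sp.Integer(-1)),
                      ((x2, y2, z2), sp.Rational(-1, 2))]:
        v = sp.Matrix(coords)
        gf = sp.Matrix([sp.diff(f, w) for w in coords])
        gg = sp.Matrix([sp.diff(g, w) for w in coords])
        total += c * v.dot(gf.cross(gg))
    return sp.expand(total)

L = z1 + 2*z2
H = (1 - s)*z1 + s*z2 + 2*(1 - s)*s*(x1*x2 + y1*y2)

print(bracket(L, H))  # 0: L and H_s commute for every s
```

The cross terms cancel precisely because the coefficient $2$ in $L = z_1 + 2z_2$ matches the scaling of the second symplectic factor, which is one way to see why that scaling is natural here.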
Let $\mathcal{N}=(0,0,1)$ and $\mathcal{S}=(0,0,-1)$ denote the poles of the sphere. Let $s_-,s_+\in [0,1]$ be as in Proposition [\[prop:nff\]](#prop:nff){reference-type="ref" reference="prop:nff"}. The result of that proposition implies that if $s\notin \,\,]s_-,s_+[\,$ then the system has no focus-focus points, and if $s\in \,\,]s_-,s_+[\,$, then the system has exactly two focus-focus points, given by $m_1:=\mathcal{N}\times\mathcal{S}$ and $m_2:=\mathcal{S}\times\mathcal{N}$. The twisting index consists of an integer ("the integer label") assigned to each focus-focus point on each semitoric polygon representative. Given such labels on a single representative, the group action given in Equation [\[eq:twisact\]](#eq:twisact){reference-type="eqref" reference="eq:twisact"} can be used to determine the labels on all representatives. That is, to specify the twisting index of our system we just have to state the integer labels on a single choice of semitoric polygon representative, which is precisely what we do in the following theorem. **Theorem 3**. *Let $(M,\omega, F_s = (L,H_s))$ be as in Equation [\[eqn_ssys\]](#eqn_ssys){reference-type="eqref" reference="eqn_ssys"} with $s\in \,\,]s_-,s_+[\,$. Then the twisting index invariant of $(M,\omega, F_s)$ is independent of the choice of $s$. 
It is the unique one which assigns the tuple of indices $(\kappa_1^s,\kappa_2^s)=(-1,-2)$ to the representative of the semitoric polygon invariant with upwards cuts which is the convex hull of $(-3,2)$, $(-1,2)$, $(1,0)$, and $(3,-4)$ (the upper left representative in Figure [\[fig:polygons-with-kappa-2\]](#fig:polygons-with-kappa-2){reference-type="ref" reference="fig:polygons-with-kappa-2"}).* ![ $(\kappa_1^s,\kappa_2^s)=(-1,-2)$ ](fig14_1_n.pdf){width="80%"} ![ $(\kappa_1^s,\kappa_2^s)=(-1,-1)$ ](fig14_2_n.pdf){width="80%"} \ ![ $(\kappa_1^s,\kappa_2^s)=(0,-1)$ ](fig14_3_n.pdf){width="80%"} ![ $(\kappa_1^s,\kappa_2^s)=(0,0)$ ](fig13_2_n.pdf){width="80%"} Theorem [Theorem 3](#thm:twist-intro){reference-type="ref" reference="thm:twist-intro"} is proved in Section [5.6](#sec:twist-proof){reference-type="ref" reference="sec:twist-proof"}. Note that the twisting index is independent of the parameter $s$ for this particular system, but this may not be true for other one-parameter families of systems. To our knowledge, this paper is the first to explicitly compute the twisting index of a system with more than one focus-focus singular point. In the process of performing these calculations, we noticed a small oversight in the original formula for how the twisting index changes between different polygons: there is an extra term which was missing from the original formulation in Pelayo & Vũ Ngọc [@PVNinventiones]. This term is automatically zero if the system has fewer than two focus-focus singular points, which has been the case for all systems whose twisting index has been treated so far in the literature. We include a corrected formula in Equation [\[eq:twisact\]](#eq:twisact){reference-type="eqref" reference="eq:twisact"} and discuss this in Remark [Remark 19](#rmk:twist-changes){reference-type="ref" reference="rmk:twist-changes"}.
In order to compute the twisting index in Theorem [Theorem 3](#thm:twist-intro){reference-type="ref" reference="thm:twist-intro"}, we also needed to obtain the lower order terms of the Taylor series invariant at each focus-focus point of the system. **Theorem 4**. *Let $(M,\omega, F_s = (L,H_s))$ be as in Equation [\[eqn_ssys\]](#eqn_ssys){reference-type="eqref" reference="eqn_ssys"} with $s\in \,\,]s_-,s_+[\,$. Let $S_{i,s}^\infty(l,j)$ denote the Taylor series invariant at the focus-focus point $m_i$, for $i\in\{1,2\}$. Then $$\begin{aligned} S_{1,s}^\infty(l,j) &= l \arctan \left( \dfrac{6-9s}{{\rho_2}} \right) + j \log \left( \dfrac{{\rho_2}^3}{\sqrt{2}{\rho_1}(1-s)^2 {s}^2} \right) \\&+ \dfrac{l^2}{8 {\rho_2}^3 {\rho_1}^2} \left( 3(2-3s)(16 - 96 {s} - 216 {s}^2 + 1944 {s}^3 - 3211 {s}^4 + 424 {s}^5 \right. \\ & \qquad \left. + 3252 {s}^6 - 2816 {s}^7 + 704 {s}^8) \right) \\&+ \dfrac{lj}{16 {\rho_2}^3 {\rho_1}^2} \left( {\rho_2}(16 - 96 {s} + 360 {s}^2 - 936 {s}^3 + 2693 {s}^4 - 6200 {s}^5 + 8004 {s}^6 \right. \\ & \qquad \left.- 5120 {s}^7 + 1280 {s}^8)\right) \\&+ \dfrac{j^2}{8 {\rho_2}^3 {\rho_1}^2} \left( (-96 + 720 {s} - 7248 {s}^2 + 36312 {s}^3 - 99558 {s}^4 + 174957 {s}^5 \right. \\&\qquad \left. - 211536 {s}^6 + 171924 {s}^7 - 83328 {s}^8 + 17856 {s}^9) \right) + \mathcal O(3), \end{aligned}$$ where $\mathcal O(3)$ denotes terms of order greater than or equal to three in the variables $l$ and $j$, and $\rho_1,\rho_2$ are the functions of $s$ given in Equation [\[eqn:rho\]](#eqn:rho){reference-type="eqref" reference="eqn:rho"}. Furthermore, due to the symmetries of the system, $$\label{eqn:symmetry-intro} S_{2,s}^\infty(l,j) = -S_{1,s}^\infty(-l,-j)+\pi l.$$* Theorem [Theorem 4](#thm:Taylor-intro){reference-type="ref" reference="thm:Taylor-intro"} is proved in Section [5.5](#sec:proof-of-Taylor){reference-type="ref" reference="sec:proof-of-Taylor"}. 
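For concreteness (our reading, not spelled out at this point in the paper): Equation [\[eqn:symmetry-intro\]](#eqn:symmetry-intro){reference-type="eqref" reference="eqn:symmetry-intro"} has exactly the shape of the $\varepsilon_1=\varepsilon_2=-1$ case of Proposition 5 below, applied to a hypothetical symplectomorphism $\Phi$ of the system with $\Phi^*(L,H_s)=(-L,-H_s)$ and $\Phi(m_2)=m_1$:

```latex
% Proposition 5 with M' = M, \Phi(m_2) = m_1 and
% \varepsilon_1 = \varepsilon_2 = -1 gives
\[
  S_{2,s}^\infty(l,j)
  \;=\; (-1)\, S_{1,s}^\infty(-l,-j) + \frac{1-(-1)}{2}\,\pi l
  \;=\; -S_{1,s}^\infty(-l,-j) + \pi l
  \quad (\mathrm{mod}\ 2\pi l).
\]
```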
One important ingredient of the proof is the following proposition, which explains how the Taylor series invariant changes under symplectomorphisms that reverse the signs of the components of the momentum map. **Proposition 5**. *Let $(M,\omega,F=(L,H))$ and $(M',\omega',F'=(L',H'))$ be semitoric integrable systems, and let $m\in M$ be a focus-focus point which we further assume is the only singular point in $F^{-1}(F(m))$. Moreover, let $\Phi\colon M\to M'$ be a symplectomorphism such that $\Phi^*(L',H') = (\varepsilon_1 L, \varepsilon_2 H)$ for some $\varepsilon_1,\varepsilon_2\in \{-1,+1\}$. Then $\Phi(m)$ is a focus-focus point, and $$S_m^\infty(l,j) = \varepsilon_2 (S_{\Phi(m)}')^\infty(\varepsilon_1 l, \varepsilon_2 j) + \left(\frac{1-\varepsilon_1}{2}\right) \pi l \quad (\textrm{mod }2\pi l),$$ where $S_m^\infty(l,j)$ denotes the Taylor series invariant at $m$ and $(S_{\Phi(m)}')^\infty(l,j)$ denotes the Taylor series invariant at $\Phi(m)$.* Proposition [Proposition 5](#prop:symmetry-general){reference-type="ref" reference="prop:symmetry-general"} is proved in Section [5.2](#sec:symmetry){reference-type="ref" reference="sec:symmetry"}, relying on some results by Sepe & Vũ Ngọc [@SepeVN-notes]. ## Impact and future applications The semitoric classification represents important progress in the task of obtaining global symplectic classifications of integrable systems. In dimension four, the classes of almost toric systems [@Sy2003] and hypersemitoric systems [@Hohloch2022] are natural candidates for the next broader class of systems to be symplectically classified, and similar classifications could also be performed in higher dimensions. In any of these cases, an invariant similar to the twisting index invariant would have to appear in the classification, either directly or indirectly encoded in other invariants.
Thus, to further expand the classification of integrable systems from semitoric to some broader class, one must first understand the true nature of the twisting index, on both a conceptual and computational level, to be able to adapt it properly. This is the aim of the present paper. ### Structure of the article: {#structure-of-the-article .unnumbered} In Section [2](#sec:preliminaries){reference-type="ref" reference="sec:preliminaries"}, we explain the necessary background knowledge. In Section [3](#ss:invariants){reference-type="ref" reference="ss:invariants"} we describe, in detail, the five invariants which appear in the semitoric classification. In Section [4](#sec:geometricinterpret){reference-type="ref" reference="sec:geometricinterpret"}, we give a variety of geometric interpretations and equivalent definitions of the twisting index invariant. In Section [5](#sec:specificexample){reference-type="ref" reference="sec:specificexample"} we compute the twisting index and some terms of the Taylor series for a specific example, the system given in Equation [\[eqn_ssys\]](#eqn_ssys){reference-type="eqref" reference="eqn_ssys"} which has two focus-focus singular points. ### Acknowledgements: {#acknowledgements .unnumbered} We would like to thank Holger Dullin, San Vũ Ngọc, Konstantinos Efstathiou, Yohann Le Floch, and Marine Fontaine for helpful discussions and Wim Vanroose for sharing his computational resources. The authors have been partially funded by the FWO-EoS project G0H4518N. The last author was partially supported by UA-BOF project with Antigoon-ID 31722 and an FWO senior postdoctoral fellowship 12ZW320N. # Preliminaries {#sec:preliminaries} In this section we introduce the background, concepts, notation, and results necessary for this paper. In particular, we summarise some of the properties of semitoric systems and describe their classification by Pelayo & Vũ Ngọc [@PVNinventiones; @PVNacta] and Palmer & Pelayo & Tang [@PPT]. 
For general overviews of semitoric systems we recommend the following surveys: Pelayo & Vũ Ngọc [@PV2], Pelayo [@Pe], Sepe & Vũ Ngọc [@SepeVN-notes], Alonso & Hohloch [@AH], Henriksen & Hohloch & Martynchuk [@HHM], and Gullentops $\&$ Hohloch [@GH]. ## Integrable systems {#ss:integrable} Let $(M,\omega)$ be a symplectic manifold. Since the symplectic form $\omega$ is non-degenerate, we can associate to each $f \in \mathcal{C}^\infty(M, {\mathbb R})$ a vector field $\mathcal{X}_f$ using the relation $\omega(\mathcal{X}_f,\cdot) = -\mathrm{d}f(\cdot)$. In this situation, we say that $f$ is a *Hamiltonian function* and $\mathcal{X}_f$ its *Hamiltonian vector field*. The flow of $\mathcal X_f$ is called the *Hamiltonian flow generated by $f$*. The *Poisson bracket* of $f,g \in \mathcal{C}^\infty(M, {\mathbb R})$ is defined by $\{f,g\} := \omega(\mathcal{X}_f,\mathcal{X}_g)$. **Definition 6**. A *$2n$-dimensional (completely) integrable system* is a triple $(M,\omega,F)$, where $(M,\omega)$ is a $2n$-dimensional symplectic manifold and $F:=(f_1, \ldots, f_n):M \to \mathbb{R}^n$, called the *momentum map*, satisfies the following conditions: 1. $\{f_i,f_j\}=0$ for all $1 \leq i,j \leq n$. 2. The Hamiltonian vector fields $\mathcal X_{f_1}, \ldots, \mathcal X_{f_n}$ are linearly independent almost everywhere in $M$. **Definition 7**. Two integrable systems $(M,\omega,F)$ and $(M',\omega',F')$ are said to be *isomorphic* if there exists a pair of maps $(\varphi, \varrho)$, where $\varphi:(M,\omega) \to (M',\omega')$ is a symplectomorphism and $\varrho:F(M) \to F'(M')$ is a diffeomorphism, such that $\varrho \circ F = F' \circ \varphi$. The rank of $dF$ at any point $x\in M$ is equal to the dimension of the span of $\mathcal X_{f_1}, \ldots, \mathcal X_{f_n}$ at $x$. A point $x\in M$ is called a *regular* point of $F$ if $dF(x)$ has maximal rank, and otherwise it is called *singular* and the rank of $dF(x)$ is called the *rank of $x$*. 
A point $c\in F(M)$ is called a *regular value of $F$* if $F^{-1}(c)$ only contains regular points, in which case the fibre $F^{-1}(c)$ is also called *regular*. The symplectic form vanishes on the fibers of $F$, and in particular, the regular fibers of $F$ are Lagrangian submanifolds of $M$. Thus, the map $F$ induces what is called a *singular Lagrangian fibration* on $M$. ## Regular points An example of an integrable system is the cotangent bundle $T^*{\mathbb T}^n$ of the $n$-torus ${\mathbb T}^n$. If $(q_1, \ldots, q_n, p_1, \ldots, p_n)$ are local coordinates in $T^*{\mathbb T}^n\cong \mathbb{T}^n\times {\mathbb R}^n$, then the symplectic form is locally given by $\omega_{T^*{\mathbb T}^n} = \sum_{j=1}^n dp_j \wedge dq_j$ and the momentum map by $F_{T^*{\mathbb T}^n} = (p_1, \ldots, p_n)$. This example serves as a model for neighbourhoods of regular fibres, as shown in the following theorem. **Theorem 8** (Liouville-Arnold-Mineur theorem [@Ar]). *Let $(M,\omega,F)$ be an integrable system and let $c\in F(M)$ be a regular value. If $\Lambda_c := F^{-1}(c)$ is a regular, compact and connected fibre, then there exist neighbourhoods $U \subset F(M)$ of $c$ and $V \subset {\mathbb R}^n$ of the origin, such that taking $$\mathcal U:= \bigsqcup_{r \in U}F^{-1}(r) \text{ and } \mathcal V:= {\mathbb T}^n \times V \subset T^*{\mathbb T}^n$$ we have that $(\mathcal U,\omega|_\mathcal U,F|_\mathcal U)$ and $(\mathcal V,\omega_{T^*{\mathbb T}^n}|_{\mathcal V}, F_{T^*{\mathbb T}^n}|_\mathcal V)$ are isomorphic integrable systems as in Definition [Definition 7](#def:isom){reference-type="ref" reference="def:isom"}.* In particular this means that $\Lambda_c \simeq {\mathbb T}^n$ and that $F|_\mathcal U$ is a trivial torus bundle. The local coordinates $p_j$ on $T^*{\mathbb T}^n$ introduced above are called *action coordinates* and the $q_j$ are called *angle coordinates*.
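A standard one-degree-of-freedom illustration of Theorem 8 (textbook material, not taken from this paper):

```latex
% Harmonic oscillator on (\mathbb{R}^2, \mathrm{d}p \wedge \mathrm{d}q)
% with H = \tfrac{1}{2}(q^2 + p^2).  The fibre H = E > 0 is the circle
% of radius \sqrt{2E}; taking the primitive \varpi = p\,\mathrm{d}q,
\[
  p_1(E) \;=\; \frac{1}{2\pi}\oint_{\{H=E\}} p\,\mathrm{d}q
         \;=\; \frac{1}{2\pi}\cdot \pi\bigl(\sqrt{2E}\bigr)^{2}
         \;=\; E,
\]
% i.e. the action coordinate is the enclosed area divided by 2\pi, and
% the conjugate angle coordinate is the polar angle, of period 2\pi.
```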
Furthermore, following [@Ar], if $\{\gamma_1(c), \ldots, \gamma_n(c)\}$ form a basis of the homology group $H_1(\Lambda_c)$, varying smoothly with $c \in U$, then the action coordinates are given by $$\label{eqn:action-coords} p_j(c) = \frac{1}{2\pi} \oint_{\gamma_j(c)} \varpi,$$ where $\varpi$ is any one-form such that $d \varpi|_{\mathcal U} = \omega$ on $\mathcal U$. **Remark 9**. As above, consider $\mathbb{T}^n\times {\mathbb R}^n$ with coordinates $(q_1,\ldots, q_n,p_1,\ldots, p_n)$ and symplectic form $\omega_{T^*{\mathbb T}^n}$. Note that $\omega= \mathrm{d}\alpha$ where $\alpha = \sum p_i\mathrm{d}q_i$. Note that the Hamiltonian vector field of $p_i$ is simply $\mathcal{X}_{p_i}=\frac{\partial}{\partial q_i}$. Fix some value $x\in{\mathbb R}^n$ and consider $\Lambda = \pi^{-1}(x)$ where $\pi$ is the projection onto ${\mathbb R}^n$. Let $\gamma$ denote an orbit of the flow of $\mathcal{X}_{p_i}$ contained in $\Lambda$. Then $$\frac{1}{2\pi}\int_{\gamma}\alpha = \frac{1}{2\pi}\int_0^{2\pi} p_i \mathrm{d}t = p_i.$$ Let $\tilde{p}$ be an action variable of an integrable system, let $x$ be a regular value of the integrable system, and suppose that $\gamma_x$ is a closed orbit of the flow generated by $\tilde{p}$ contained in the fiber over $x$. Then combining the above argument with Theorem [Theorem 8](#thm:action-angle){reference-type="ref" reference="thm:action-angle"}, we conclude that $\frac{1}{2\pi} \int_{\gamma_x} \alpha = \tilde{p}(x)$ for any $\alpha$ such that $\mathrm{d}\alpha = \omega$. ## Integral affine structures {#sec:int-affine} The group $\mathrm{GL}(n,{\mathbb Z}) \ltimes {\mathbb R}^n$ is called the group of integral affine maps of ${\mathbb R}^n$, and this group acts on ${\mathbb R}^n$ where the first component acts by composition and the second acts by translation. 
An *integral affine structure* on an $n$-manifold $X$ is an atlas of charts on $X$ such that the transition functions between these charts are integral affine maps on ${\mathbb R}^n$. If $X$ and $Y$ are manifolds equipped with integral affine structures, we call a map $g\colon X \to Y$ an integral affine map if it sends the integral affine structure of $X$ to the integral affine structure of $Y$. Now suppose that $(M,\omega,F)$ is an integrable system with connected fibers, and let $B=F(M)$ denote the momentum map image. Let $B_r \subseteq B$ denote the set of regular values of $F$. Given any $c\in B_r$, applying Theorem [Theorem 8](#thm:action-angle){reference-type="ref" reference="thm:action-angle"} induces action coordinates $p_1,\ldots, p_n$ in a neighborhood of $c$. Since they arise from a choice of basis of $H_1(\Lambda_c)$ and a choice of primitive of $\omega$ via Equation [\[eqn:action-coords\]](#eqn:action-coords){reference-type="eqref" reference="eqn:action-coords"}, the action coordinates are unique up to the action of $\mathrm{GL}(n,{\mathbb Z}) \ltimes {\mathbb R}^n$. Thus, the action coordinates induce an integral affine structure on $B_r$. Note that this construction also works if the fibers of $F$ are disconnected, in which case the base $B$ of the Lagrangian fibration cannot be identified with the image $F(M)$ so the description is somewhat more complicated. ## Singular points Theorem [Theorem 8](#thm:action-angle){reference-type="ref" reference="thm:action-angle"} gives a normal form for a neighborhood of a regular fiber. Now, we move on to gaining a local understanding of singular points. In order to classify singular points of an integrable system, we first introduce a notion of non-degeneracy. **Definition 10**. Let $(M,\omega,F=(f_1,\ldots,f_n))$ be an integrable system. 
A rank $0$ singular point $p\in M$ of $F$ is *non-degenerate* if the Hessians $d^2f_1(p),\ldots, d^2f_n(p)$ span a Cartan subalgebra of the Lie algebra of quadratic forms on the tangent space $T_pM$. The notion of non-degenerate singular points can also be extended to points of all ranks; see Bolsinov & Fomenko [@bolsinov-fomenko]. We will explain an equivalent formulation of this condition for four dimensional integrable systems in Section [2.5](#sec:dim4){reference-type="ref" reference="sec:dim4"}. The following theorem gives a local normal form for non-degenerate singular points. It is based on works by Rüssmann [@Russmann64], Vey [@Vey78], Colin de Verdière and Vey [@ColindeVerdiereVey79], Eliasson [@Eliasson84; @Eliasson90], Dufour and Molino [@DufourMolino88], Miranda [@Miranda2003; @Miranda2014], Miranda and Zung [@MirandaZung04], Miranda and Vũ Ngọc [@MirandaVuNgoc05], Vũ Ngọc and Wacheux [@VuNgocWacheux13] and Chaperon [@Chaperon13]. **Theorem 11** (Local normal form for non-degenerate singular points). *Let $(M,\omega,F)$ be a $2n$-dimensional integrable system and let $m\in M$ be a non-degenerate singular point. Consider also $(\mathbb{R}^{2n},\omega_{\mathbb{R}^{2n}})$ with coordinates $(q,p):=(q_1, \ldots, q_n, p_1, \ldots, p_n)$, where $\omega_{\mathbb{R}^{2n}} = \sum_i \mathrm{d}p_i\wedge \mathrm{d}q_i$. Then there exist neighborhoods $U$ of $m$ and $V$ of the origin in $\mathbb{R}^{2n}$, a symplectomorphism $\varphi:U \to V$ and a function $G:=(g_1, \ldots, g_n): \mathbb{R}^{2n}\to{\mathbb R}^n$ where each $g_i$ is given by one of the following:* 1. *non-singular: $g_i = p_i$,* 2. *elliptic: $g_i = \frac{1}{2}(q_i^2+p_i^2)$,* 3. *hyperbolic: $g_i = q_ip_i$,* 4. *focus-focus: $\begin{cases} g_i = q_ip_{i+1}-q_{i+1}p_i,\\ g_{i+1} = q_ip_i+q_{i+1}p_{i+1},\end{cases}$* *such that $\{f_i, g_j \circ \varphi \}=0$ for all $i,j$. 
Moreover, if there are no hyperbolic components, then there is a local diffeomorphism $\varrho:({\mathbb R}^n,F(m)) \to ({\mathbb R}^n,0)$ such that $\varrho \circ F = G \circ \varphi$ on $U$, so $(U,\omega|_U,F|_U)$ and $(V,\omega_{\mathbb{R}^{2n}}|_V,G|_V)$ are isomorphic as in Definition [Definition 7](#def:isom){reference-type="ref" reference="def:isom"}.* ## Singular points in dimension four {#sec:dim4} Now we specialize to the situation that $n=2$, and so $\mathrm{dim}(M)=4$. We use the notation $f_1=L$ and $f_2=H$, so $F=(L,H)$. In this case, the following lemma becomes very useful for the verification of non-degeneracy. **Lemma 12** (Bolsinov $\&$ Fomenko [@bolsinov-fomenko]). *Let $(M,\omega,F = (L,H))$ be a four dimensional integrable system. Let $p\in M$ be a rank $0$ singular point of $F$ and $\omega_p$ the matrix of the symplectic form with respect to a basis of $T_p M$ and $\omega_p^{-1}$ its inverse. Denote by $d^2L(p)$ and $d^2H(p)$ the matrices of the Hessians of $L$ and $H$ with respect to the same basis. Then $p$ is non-degenerate if and only if $d^2L(p)$ and $d^2H(p)$ are linearly independent and there exists a linear combination of $\omega_p^{-1}d^2 L(p)$ and $\omega_p^{-1}d^2 H(p)$ with four distinct eigenvalues.* Let $p$ be a singular point of rank 1 in a 4-dimensional integrable system $(M,\omega,F=(L,H))$. Then there exist $\alpha,\beta\in\mathbb{R}$, not both zero, with $\alpha d H(p) + \beta dL(p) = 0$. The flows of $\mathcal X_L$ and $\mathcal X_H$ induce an $\mathbb{R}^2$-action which has a one-dimensional orbit through $p$. Let $P_p\subset T_pM$ be the tangent line of this orbit at $p$ and denote by $P^\omega_p$ the symplectic orthogonal complement of $P_p$. Notice that $P_p\subset P^\omega_p$, and since $L$ and $H$ Poisson commute they are invariant under the ${\mathbb R}^2$-action. Thus $\alpha d^2H(p) + \beta d^2L(p)$ descends to the quotient $P^\omega_p/P_p$. **Definition 13** (Bolsinov $\&$ Fomenko [@bolsinov-fomenko]). 
Let $(M,\omega,F = (L,H))$ be a four dimensional integrable system. A rank 1 singular point $p$ is *non-degenerate* if $\alpha d^2H(p) + \beta d^2L(p)$ is invertible on $P^\omega_p/P_p$. # Semitoric systems {#ss:invariants} Here we introduce the class of systems that we will be focusing on in this paper. We will also give a precise and concise description of the invariants of a simple semitoric system. In what follows we identify ${\mathbb S}^1$ with ${\mathbb R}/2\pi{\mathbb Z}$, and so an effective action of ${\mathbb S}^1$ is equivalent to a periodic ${\mathbb R}$-action with minimal period $2\pi$. Recall that a map is called *proper* if the preimage of every compact set is compact. **Definition 14**. A *semitoric system* is a four dimensional integrable system $(M,\omega,F=(L,H))$, such that: 1. The Hamiltonian flow generated by $L$ is an effective ${\mathbb S}^1$-action. 2. $L$ is proper. 3. The singular points of $F$ are non-degenerate and have no hyperbolic components. If, moreover, the following condition is satisfied, we say that the semitoric system is *simple*: 4. In each level set of $L$, there is at most one singularity of focus-focus type. Note that, if $M$ is compact, then $L$ is automatically proper. Given semitoric systems $(M,\omega,(L,H))$ and $(M',\omega',(L',H'))$, an *isomorphism of semitoric systems* is a pair $(\varphi,\varrho)$, where $\varphi$ is a symplectomorphism $\varphi \colon (M, \omega)\to (M', \omega')$ and $\varrho = (\varrho_1, \varrho_2):F(M) \to F'(M')$ is a diffeomorphism satisfying $\varrho \circ F = F' \circ \varphi$ and $\varrho$ is of the form $\varrho(l,h) = (l, \varrho_2(l,h))$ with $\partial_h \varrho_2 >0$. **Remark 15**. Isomorphisms of semitoric systems are a particular case of the isomorphisms of integrable systems from Definition [Definition 7](#def:isom){reference-type="ref" reference="def:isom"}, where $(\varphi,\varrho)$ is chosen to preserve the first coordinate and the orientation of the second one. 
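The non-degeneracy condition in Definition 14 can be made concrete on the focus-focus model of Theorem 11. The following sketch (ours, not from the paper) verifies the rank-$0$ criterion of Lemma 12 for the pair $g_1=q_1p_2-q_2p_1$, $g_2=q_1p_1+q_2p_2$ at the origin, using the standard form $\omega_{{\mathbb R}^4}=\sum_i \mathrm{d}p_i\wedge \mathrm{d}q_i$ in the basis $(q_1,q_2,p_1,p_2)$; the combination $g_1+g_2$ already has the four distinct eigenvalues $\pm 1\pm i$.

```python
import sympy as sp

q1, q2, p1, p2 = sp.symbols('q1 q2 p1 p2')
coords = [q1, q2, p1, p2]

# Matrix of omega = dp1 ^ dq1 + dp2 ^ dq2 in the basis (q1, q2, p1, p2)
Omega = sp.Matrix([[0, 0, -1, 0],
                   [0, 0, 0, -1],
                   [1, 0, 0, 0],
                   [0, 1, 0, 0]])

# Focus-focus pair from the local normal form
g1 = q1*p2 - q2*p1
g2 = q1*p1 + q2*p2

def hess0(f):
    """Hessian matrix of f evaluated at the origin."""
    return sp.hessian(f, coords).subs({v: 0 for v in coords})

A = Omega.inv() * (hess0(g1) + hess0(g2))
eigs = sorted((complex(v) for v in A.eigenvals()),
              key=lambda c: (c.real, c.imag))
assert eigs == [-1 - 1j, -1 + 1j, 1 - 1j, 1 + 1j]  # four distinct eigenvalues
```

The conclusion is insensitive to the overall sign convention for $\omega$, since flipping $\Omega$ only negates the four eigenvalues.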
A consequence of Theorem [Theorem 11](#EliassonMZ){reference-type="ref" reference="EliassonMZ"} is that the only types of singularities which can occur in semitoric systems are: - *elliptic-elliptic points*: rank 0 points with two elliptic components, - *focus-focus points*: rank 0 points with one focus-focus pair of components, - *elliptic-regular points*: rank 1 points with one elliptic component and one non-singular component. Furthermore, due to Vũ Ngọc [@VuNgoc07], the fibers of the momentum map $F\colon M\to {\mathbb R}^2$ in a semitoric system are connected, and thus we may identify the image $F(M)$ with the base of the singular Lagrangian fibration that $F$ induces on $M$. Pelayo $\&$ Vũ Ngọc [@PVNinventiones; @PVNacta] classified simple semitoric systems by means of five invariants. Their original classification only applied to simple semitoric systems, but this classification has since been extended by Palmer & Pelayo & Tang [@PPT] to include all semitoric systems, simple or not. Note that the twisting index invariant appears rather differently in the non-simple case, so for the present paper we will focus on the original classification which only applies to simple semitoric systems. Now we will briefly describe each of the five invariants. More details can be found in the original papers by Pelayo & Vũ Ngọc [@PVNinventiones; @PVNacta]. In the following, let $(M,\omega,F=(L,H))$ be a simple semitoric system. Note that the invariant we are concerned with in this paper is the twisting index invariant, and we will now see that the twisting index is mainly concerned with the relationship between the Taylor series invariant and the polygon invariant. ## The number of focus-focus points invariant {#sss:nff} The first symplectic invariant of a semitoric system is the *number of focus-focus singularities* ${n_{\text{FF}}}\in \mathbb{N}_0$, which is finite by Vũ Ngọc [@VuNgoc07]. 
Since our system is simple, we can order the focus-focus singularities $m_1, \ldots, m_{n_{\text{FF}}}$ according to their $L$-values, so that $L(m_1) < \ldots < L(m_{n_{\text{FF}}})$. ## The polygon invariant {#sss:polygon} Let $B=F(M) \subset {\mathbb R}^2$ be the image of the momentum map and $B_r \subseteq B$ be the set of regular values of $F$. As discussed in Section [2.3](#sec:int-affine){reference-type="ref" reference="sec:int-affine"}, the action coordinates of Theorem [Theorem 8](#thm:action-angle){reference-type="ref" reference="thm:action-angle"} induce an integral affine structure on $B_r$, which in general does not agree with the one induced by its inclusion in ${\mathbb R}^2$. Moreover, if the semitoric system has at least one focus-focus point, then there cannot exist an integral affine map $g\colon B_r\to{\mathbb R}^2$ due to the presence of monodromy in the integral affine structure of $B_r$. In this section we will explain a procedure, introduced by Vũ Ngọc [@VuNgoc07], to obtain a map from $B_r$ to ${\mathbb R}^2$ which is an integral affine map away from certain sets in $B_r$. Such a map can be thought of as "straightening out" the integral affine structure of $B_r$, and the image of such a map is a rational convex polygon. The equivalence class of such polygons (up to the freedom in the choice of map) is the semitoric polygon invariant. We explain the details now. Recall that $\{m_1,\ldots,m_{{n_{\text{FF}}}}\}$ denote the focus-focus points of the system. For $r \in \{1, \ldots, {n_{\text{FF}}}\}$, let $c_r:=F(m_r)=(\lambda_r,\eta_r)\in{\mathbb R}^2$ denote the focus-focus singular value and let $b_{\lambda_r}: = \{(\lambda_r,y)\;|\; y \in {\mathbb R}\}$ be the vertical line through $c_r$. We associate a sign $\varepsilon_r \in {\mathbb Z}_2: = \{-1,+1\}$ with each singular value and let $b^{\varepsilon_r}_{\lambda_r}$ denote the half-line starting in $c_r$ which goes upwards if $\varepsilon_r=+1$ and downwards if $\varepsilon_r=-1$. 
For each choice of signs $\varepsilon=(\varepsilon_r)_{r=1}^{n_{\text{FF}}}\in ({\mathbb Z}_2)^{n_{\text{FF}}}$, we remove the union of half-lines $b^\varepsilon := \bigcup_r b_{\lambda_r}^{\varepsilon_r}$ from $B$, obtaining the simply connected set $B \backslash b^\varepsilon \subset {\mathbb R}^2$. Vũ Ngọc [@VuNgoc07] showed that for each vector $\varepsilon$ there exists a map $f_\varepsilon:B \to {\mathbb R}^2$ such that: 1. $\Delta:= f_\varepsilon(B)$ is a rational convex polygon, 2. $f_\varepsilon$ is a homeomorphism, 3. $f_\varepsilon$ preserves the first coordinate, i.e. $f_\varepsilon(l,h)=\bigl(f_\varepsilon^{(1)},f_\varepsilon^{(2)}\bigr)(l,h)=\bigl(l,f_\varepsilon^{(2)}(l,h)\bigr)$, 4. $(f_\varepsilon)|_{B\backslash b^\varepsilon}$ is a diffeomorphism into its image which sends the integral affine structure of $B_r\backslash b^\varepsilon$ to the integral affine structure of ${\mathbb R}^2$. The map $f_\varepsilon$ is commonly referred to as a *cartographic homeomorphism*. Cartographic homeomorphisms are not unique: let $$\label{eqn:T} T := \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix} \in \text{GL}(2,{\mathbb Z}),$$ let $\mathcal G:=\{T^n \;|\; n \in {\mathbb Z}\}$, and let $\mathcal V:= \mathcal G\ltimes {\mathbb R}$. Then, given a choice of $\varepsilon$, the map $f_\varepsilon$ obtained via the construction above is unique up to left composition by an element of $\mathcal V$. Note that the components of the cartographic homeomorphism are the action coordinates of the system. By definition, the map $f_\varepsilon$ depends on the choice of $\varepsilon$, the effect of which we describe now. Denote by $t_{b_{\lambda_r}} : {\mathbb R}^2 \to {\mathbb R}^2$ the map given by $$t_{b_{\lambda_r}}(x,y) = \begin{cases} (x,y), &\text{ if }x\leq \lambda_r\\ (x,y + x-\lambda_r), &\text{ if }x>\lambda_r. 
\end{cases}$$ That is, intuitively $t_{b_{\lambda_r}}$ leaves the half-plane to the left of $b_{\lambda_r}$ invariant and applies $T$, relative to a choice of origin on $b_{\lambda_r}$, to the half-plane on the right of $b_{\lambda_r}$. Given a vector $\kappa=(\kappa_1,\ldots, \kappa_{n_{\text{FF}}})\in {\mathbb Z}^{n_{\text{FF}}}$, set $t_\kappa:= t_{b_{\lambda_1}}^{\kappa_1} \circ \cdots \circ t_{b_{\lambda_{n_{\text{FF}}}}}^{\kappa_{n_{\text{FF}}}}$, and note that $t_\kappa$ is a piecewise integral affine map. A different choice of signs changes the cartographic homeomorphism by composition with such a map, as we will describe below. We now define the polygon invariant. The triple $\big(\Delta, (b_{\lambda_r})_{r=1}^{n_{\text{FF}}},(\varepsilon_r)_{r=1}^{n_{\text{FF}}}\big)$ is called a *weighted polygon*. The freedom in the definition of $f_\varepsilon$ can be expressed as an action of the group $({\mathbb Z}_2)^{n_{\text{FF}}}\times \mathcal G$ on the space of weighted polygons: letting $(\varepsilon',T^n) \in ({\mathbb Z}_2)^{n_{\text{FF}}}\times \mathcal G$ and taking $u := \tfrac{1}{2}(\varepsilon - \varepsilon\varepsilon')$, the action is given by $$( \varepsilon', T^n) \cdot \big(\Delta, (b_{\lambda_r})_{r=1}^{n_{\text{FF}}},(\varepsilon_r)_{r=1}^{n_{\text{FF}}}\big) = \bigl( t_u(T^n(\Delta)), (b_{\lambda_r})_{r=1}^{n_{\text{FF}}},(\varepsilon_r'\varepsilon_r)_{r=1}^{n_{\text{FF}}}\bigr).$$ Since the non-uniqueness of the cartographic homeomorphism is encoded in the group action, the assignment described in the following definition is well-defined. **Definition 16**. Given a choice of cartographic homeomorphism, we define the *polygon invariant* of $(M,\omega,F)$ to be the $({\mathbb Z}_2)^{n_{\text{FF}}}\times \mathcal G$-orbit of weighted polygons obtained from its image, briefly denoted by $(\Delta, b, \varepsilon)$. 
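The piecewise map $t_{b_{\lambda_r}}$ and its powers are simple enough to sketch directly. The following illustrative snippet (ours, not from the paper) implements $t^{\kappa}_{b_\lambda}(x,y)=(x,\,y+\kappa(x-\lambda))$ for $x>\lambda$ and checks that the left half-plane is fixed while the right half-plane is sheared by $T^{\kappa}$ relative to the cut:

```python
def t_cut(lam, kappa=1):
    """The piecewise integral affine map t_{b_lambda}^kappa: identity on
    the half-plane x <= lam, and the shear (x, y) -> (x, y + kappa*(x - lam))
    on x > lam (i.e. T^kappa applied relative to an origin on the cut)."""
    def f(point):
        x, y = point
        return (x, y) if x <= lam else (x, y + kappa * (x - lam))
    return f

t = t_cut(1.0)
assert t((0.0, 5.0)) == (0.0, 5.0)   # left of the cut: unchanged
assert t((3.0, 0.0)) == (3.0, 2.0)   # right of the cut: sheared upwards

# the power t^2 agrees with composing t with itself
t2 = t_cut(1.0, kappa=2)
assert t2((3.0, 0.0)) == t(t((3.0, 0.0)))
```

The map $t_\kappa$ is then the composition of such maps, one per focus-focus value.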
Now, let $f_\varepsilon$ be any choice of cartographic homeomorphism with cut directions $\varepsilon$, let $M^\varepsilon := M\setminus F^{-1}(b^\varepsilon)$, and let $\mu:=f_\varepsilon\circ F\colon M\to {\mathbb R}^2$. Then $\mu$ restricted to $M^\varepsilon$ is an integrable system all of whose points are regular, elliptic-regular, or elliptic-elliptic, and by definition the induced integral affine structure on the regular points of $\mu(M^\varepsilon)$ is equal to the integral affine structure on ${\mathbb R}^2$. Thus, the Hamiltonian flows of the components of $\mu$ form a Hamiltonian $2$-torus action on $M^\varepsilon$. For this reason, we call $\mu$ a *generalized toric momentum map*. Note that the semitoric polygon is the image of $\mu$, that is, $\mu(M) = f_\varepsilon(B) = \Delta$. ## The height invariant {#sss:height} Let $f_\varepsilon$ be a cartographic homeomorphism and let $\mu = (\mu_1,\mu_2):=f_\varepsilon\circ F$ denote the associated generalized toric momentum map. For $r \in \{1, \ldots, {n_{\text{FF}}}\}$, we define $h_r$ as the height of the corresponding focus-focus value measured from the lower boundary of the polygon $\Delta=f_\varepsilon(B)$: $$h_r := \mu_2(m_r) - \min_{p \in \Delta\cap b_{\lambda_r}} \operatorname{proj}_2(p),$$ where $\operatorname{proj}_2:{\mathbb R}^2 \to {\mathbb R}$ is the natural projection on the second coordinate. Geometrically, the value $h_r$ measures the symplectic volume of the submanifold $$\{p \in M \;|\; L(p)=L(m_r) \text{ and }H(p)<H(m_r)\}$$ and therefore is independent of the choice of $f_\varepsilon$. The *height invariant* of the simple semitoric system $(M,\omega,F)$ is the ${n_{\text{FF}}}$-tuple $(h_1, \ldots, h_{n_{\text{FF}}})$ of heights of all focus-focus values. ## The Taylor series invariant {#sss:taylor} In this section, we outline a construction due to Vũ Ngọc [@VuNgoc03] of a semilocal invariant of a focus-focus fiber. 
Let $m \in M$ be a focus-focus singularity of the semitoric system $(M,\omega,F)$. By Theorem [Theorem 11](#EliassonMZ){reference-type="ref" reference="EliassonMZ"} there exist neighbourhoods $U\subset M$ of $m$ and $V \subset {\mathbb R}^4$ of the origin and an isomorphism of integrable systems $(\varphi, \varrho)\colon(U,\omega|_U,F|_U)\to (V,\omega_{{\mathbb R}^4}|_V,G|_V)$, where $$\label{eq:defG} G(q_1,q_2,p_1,p_2)=(q_1p_2-q_2p_1,q_1p_1+q_2p_2).$$ As discussed in Sepe & Vũ Ngọc [@SepeVN-notes], for a semi-local neighborhood of a focus-focus point in a general integrable system, there is still a degree of freedom in the choice of such a $(\varphi,\varrho)$. In this section, following [@VuNgoc03], we will describe how a choice of $(\varphi,\varrho)$ around a focus-focus point produces an invariant of the fiber containing that point, called the Taylor series invariant. Note that different choices of $(\varphi,\varrho)$ produce different Taylor series invariants (related by a simple formula; we discuss this in Section [5.2](#sec:symmetry){reference-type="ref" reference="sec:symmetry"}), but for a semitoric system there are preferred choices. If $(M,\omega,(L,H))$ is semitoric, then $\varrho = (\varrho_1,\varrho_2)$ is of the form $\varrho(l,h) = (\pm l,\varrho_2(l,h))$ with $\frac{\partial \varrho_2}{\partial h}\neq 0$, and we make the preferred choice of $\varphi$ and $\varrho$ such that $\varrho(l,h)=(l,\varrho_2(l,h))$ where $\frac{\partial \varrho_2}{\partial h}>0$. With this choice the invariant we construct in this section is well-defined. Understanding this choice is important in Section [5.2](#sec:symmetry){reference-type="ref" reference="sec:symmetry"}, where we discuss how changing the signs of the components of the momentum map of a semitoric integrable system impacts the invariants. Now we will consider a neighborhood of the focus-focus fiber. Let $W := F^{-1}(F(U))$ and let $\Phi=(\Phi_1,\Phi_2): W \to G(V)$ be defined by $\Phi = \varrho \circ F$. 
Let $z:=(l,j) := l+ ij$ denote the coordinate on $G(V) \subseteq {\mathbb R}^2 \simeq {\mathbb C}$ induced by the identification with ${\mathbb C}$, and let $\Lambda_z:= \Phi^{-1}(z)$. For any nonzero $z\in\Phi(W)$, note that $\Phi^{-1}(z)$ is a regular fiber, and thus by Theorem [Theorem 8](#thm:action-angle){reference-type="ref" reference="thm:action-angle"} and the fact that $L$ is proper, this fiber is diffeomorphic to a $2$-torus. Note that $\Phi_1=L$, so $\mathcal X_{\Phi_1} = \mathcal X_L$ and thus the flow of $\mathcal X_{\Phi_1}$ generates an effective ${\mathbb S}^1$-action on $\Lambda_z$. In contrast, the flow of $\mathcal X_{\Phi_2}$ will in general be quasi-periodic. Note that the monodromy induced by a focus-focus point discussed in Section [3.2](#sss:polygon){reference-type="ref" reference="sss:polygon"} will appear again here in the following way: in principle, to directly emulate the situation of Equation [\[eqn:action-coords\]](#eqn:action-coords){reference-type="eqref" reference="eqn:action-coords"}, a smoothly varying basis of the fundamental group of $\Lambda_z$ for $z\in\Phi(W)$ is needed, but this unfortunately does not exist due to the monodromy. Instead, for $\varepsilon\in {\mathbb Z}_2$ let $b_\varepsilon\subset {\mathbb R}^2$ be a ray starting at the origin and going up if $\varepsilon=+1$ and down if $\varepsilon=-1$. Then let $\tilde{W} = F^{-1}(F(W)\setminus b_\varepsilon)$. Now we choose a basis $\{\gamma_1^z,\gamma_2^z\}$ of the fundamental group of the torus $\Lambda_z$ which varies smoothly with $z\in\Phi(\tilde{W})$, and where $\gamma_1^z = \gamma_L^z \subset \Lambda_z$ is the cycle corresponding to the flow of $\mathcal X_L$ with the same orientation.
From Theorem [Theorem 11](#EliassonMZ){reference-type="ref" reference="EliassonMZ"} the actions of the system are given by $$L(z) = \dfrac{1}{2\pi} \oint_{\gamma_L^z} \varpi = l,\qquad I(z):= \dfrac{1}{2\pi} \oint_{\gamma_2^z} \varpi, \label{eq:semigact}$$ where $\varpi$ is a primitive of the symplectic form $\omega|_W$. We moreover impose that the orientation of $\gamma_2^z$ is such that $\tfrac{\partial I}{\partial j}>0$, where $z=l+ij$, which is equivalent to taking the preferred choice of $\varrho$ for semitoric systems. The function $I(z)$ can be extended continuously to all of $\Phi(W)$ but not smoothly. To address this, now let $\log$ denote a determination of the complex logarithm with branch cut along the ray $b_\varepsilon$ (i.e. the ray is excluded from the domain of definition of $\log$) which we used above for determining a basis of cycles. Vũ Ngọc [@VuNgoc03] showed that $$\label{eqn:def-of-S} S(z) := 2\pi I(z) - 2\pi I(0) + \text{Im}(z \log z -z)$$ can be extended to a smooth function on all of $\Phi(W)$. The function $S$ is often referred to as the *desingularised* or *regularised* action, cf. Pelayo & Vũ Ngọc [@PV2]. The Taylor series invariant $S^\infty$ associated to the focus-focus singularity $m$ is the Taylor series of the function $S$. Note that $S(z)$ is normalized to satisfy $S(0)=0$ in Equation [\[eqn:def-of-S\]](#eqn:def-of-S){reference-type="eqref" reference="eqn:def-of-S"}. Though $S$ is not unique (see Remark [Remark 18](#rmk:nonunique){reference-type="ref" reference="rmk:nonunique"}), its Taylor series $S^\infty$ is uniquely defined up to the addition of integer multiples of $2\pi l$. That is, letting ${\mathbb R}_0[[l,j]]$ denote the set of power series in the variables $l$ and $j$ with zero constant term, $S^\infty$ is uniquely determined as an element of ${\mathbb R}_0[[l,j]]/(2\pi l\, {\mathbb Z})$.
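The role of the regularising term $\operatorname{Im}(z\log z - z)$ can be checked symbolically: its partial derivatives are exactly $\operatorname{Im}(\log z)$ and $\operatorname{Re}(\log z)$, the logarithmically singular terms that the smooth function $S$ must absorb. The following sketch (ours, not from [@VuNgoc03]) verifies this for $z$ in the open first quadrant, safely away from any branch cut:

```python
import sympy as sp
import cmath

l, j = sp.symbols('l j', positive=True)  # open first quadrant: l > 0, j > 0

# Explicit real form of Im(z*log(z) - z) for z = l + i*j with l > 0,
# derived by hand: there Im(log z) = atan(j/l) and Re(log z) = log|z|.
reg = j*sp.log(sp.sqrt(l**2 + j**2)) + l*sp.atan(j/l) - j

# d/dl Im(z log z - z) = Im(log z) and d/dj Im(z log z - z) = Re(log z)
dl = sp.simplify(sp.diff(reg, l) - sp.atan(j/l))
dj = sp.simplify(sp.diff(reg, j) - sp.log(sp.sqrt(l**2 + j**2)))

# Numeric cross-check of the hand-derived form against Im(z log z - z)
zc = complex(0.3, 0.7)
num_err = abs(float(reg.subs({l: sp.Rational(3, 10), j: sp.Rational(7, 10)}))
              - (zc*cmath.log(zc) - zc).imag)
```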
For the purposes of the calculations in this paper, we will take the representative which satisfies $-\tfrac{\pi}{2} \leq \partial_l S(0) < \tfrac{3\pi}{2}$. By performing this construction for all focus-focus singularities $m_1, \ldots, m_{n_{\text{FF}}}$ of the system, we obtain the Taylor series $$S^\infty_1, \ldots, S^\infty_{n_{\text{FF}}}\in {\mathbb R}_0[[l,j]]/(2\pi l\, {\mathbb Z})$$ corresponding to each singularity. **Definition 17**. The Taylor series invariant associated to the simple semitoric system $(M,\omega,F)$ is the ${n_{\text{FF}}}$-tuple of Taylor series $(S^\infty_1, \ldots, S^\infty_{n_{\text{FF}}})$. [\[def:taylorinv\]]{#def:taylorinv label="def:taylorinv"} **Remark 18**. Equation [\[eqn:def-of-S\]](#eqn:def-of-S){reference-type="eqref" reference="eqn:def-of-S"} can be solved for $S(z)$, but the function $S(z)$ is still not uniquely defined because it depends on certain choices encoded in the function $I(z)$. First of all, $I(z)$ depends on a choice of complementary cycle $\gamma^z_2$ chosen so that $\{\gamma_L^z, \gamma_2^z\}$ generates the fundamental group of the torus $\Lambda_z$. Such a choice is not unique and changing the choice of $\gamma_2^z$ will change $S(z)$ by an integer multiple of $2\pi l$. This dependence of the function $S(z)$ on a choice of basis of the fundamental group of $\Lambda_z$ is related to a geometric interpretation of the twisting index, see Section [4](#sec:geometricinterpret){reference-type="ref" reference="sec:geometricinterpret"}. Furthermore, $S(z)$ also depends on the choice of a chart for the local normal form from Theorem [Theorem 11](#EliassonMZ){reference-type="ref" reference="EliassonMZ"}, and different choices of such charts change $I(z)$, and in turn $S(z)$, by adding on a function for which all derivatives vanish at $z=0$. Such functions are called *flat functions*.
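A one-variable analogue (for illustration only) of why flat functions are invisible at the level of Taylor series: every derivative of $e^{-1/x^2}$ vanishes at the origin, so adding it to any function leaves the Taylor expansion at $0$ unchanged.

```python
import sympy as sp

x = sp.symbols('x')
# The classic flat function: smooth, nonzero away from 0, but all of its
# derivatives vanish at 0, so its Taylor series at 0 is identically zero.
f = sp.exp(-1/x**2)
derivs_at_0 = [sp.limit(sp.diff(f, x, n), x, 0) for n in range(4)]
```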
This is why $S(z)$ is not well-defined, and instead we have to take its Taylor series $S^\infty$ which is invariant under the addition of flat functions. Following Vũ Ngọc [@VuNgoc03], the Taylor series $S^\infty$ associated with a focus-focus singularity is the only semiglobal invariant of a focus-focus fibre. That is, if two systems have a focus-focus singularity with the same Taylor series $S^\infty$, then there exists an isomorphism of integrable systems between semilocal neighbourhoods of the respective singular fibres. Moreover, every power series in two real variables $l,j$ with no constant term and with the linear term in $l$ lying in $[-\tfrac{\pi}{2},\tfrac{3\pi}{2}[$ appears as the Taylor series invariant of a focus-focus point. Putting these two facts together, we obtain that the map sending a semilocal neighborhood of a focus-focus point, considered up to isomorphism, to its Taylor series invariant in ${\mathbb R}_0[[l,j]]/(2\pi l \,{\mathbb Z})$ is a bijection. ![The red curve is along the Hamiltonian flow of the ${\mathbb S}^1$-action generated by $L$ and the orange curve is along the Hamiltonian flow of $\Phi_2$. The value of $\tau_1(z)$ is the time spent along the flow of the red curve and $\tau_2(z)$ represents the time spent along the flow of the orange curve.](torustorus.pdf){#fig:torustorus width="250pt"} An alternative interpretation of the Taylor series invariant which does not directly make use of the function $I(z)$, also due to Vũ Ngọc [@VuNgoc03], is as follows. Let $z\in\Phi(W)$ and let $p \in \Lambda_z= \Phi^{-1}(z)$ be any point. Consider the closed curve constructed by following the Hamiltonian flow of $\mathcal X_{\Phi_2}$ until the $L$-orbit of $p$ is reached and then following the Hamiltonian flow of $\mathcal X_L$ to go back to $p$, see Figure [1](#fig:torustorus){reference-type="ref" reference="fig:torustorus"}.
Let $\tau_1(z) \in {\mathbb R}/2\pi{\mathbb Z}$ be the time spent along the flow of $\mathcal X_L$ and $\tau_2(z) >0$ be the time spent along the flow of $\mathcal X_{\Phi_2}$. The functions $\tau_1(z)$ and $\tau_2(z)$ are independent of the choice of $p$. Letting $\log$ be any choice of determination of the complex logarithm and taking a choice of lift of $\tau_1(z)$ to ${\mathbb R}$ discontinuous along the same branch cut, the functions $$\begin{cases} \sigma_1(z):= \tau_1(z) - \text{Im}(\log z), \\ \sigma_2(z):= \tau_2(z) + \text{Re}(\log z) \end{cases} \label{eq:sigmas}$$ extend to smooth single-valued functions around the origin. As in Equation [\[eqn:def-of-S\]](#eqn:def-of-S){reference-type="eqref" reference="eqn:def-of-S"}, subtracting logarithms is necessary to compensate for the discontinuity in the lift of $\tau_1$, which is unavoidable because of the monodromy, and for the rate at which $\tau_2(z)$ diverges to infinity as $z$ approaches the origin. Moreover, the 1-form $\sigma := \sigma_1 \mathrm{d}l + \sigma_2 \mathrm{d}j$ is closed and therefore exact. Given such a $\sigma$, we take the function $S$ to be the unique smooth function that satisfies $\mathrm{d}S=\sigma$ and $S(0)=0$. As before, there are choices encoded in $\sigma$, such as the choice of lift of $\tau_1(z)$, but the Taylor series of $S$ is well-defined as an element of ${\mathbb R}[[l,j]]/(2\pi l\,{\mathbb Z})$. ## The twisting index invariant {#sss:twisting} In this section, we summarize the construction of the twisting index invariant due to Pelayo & Vũ Ngọc [@PV2]. Roughly speaking, the twisting index invariant is a label consisting of ${n_{\text{FF}}}$ integers assigned to each of the polygons $\Delta$ of the polygon invariant.
These integers will be obtained by comparing the semiglobal action coordinates from Equation [\[eq:semigact\]](#eq:semigact){reference-type="eqref" reference="eq:semigact"} with the generalised toric momentum map $\mu$ of §[3.3](#sss:height){reference-type="ref" reference="sss:height"} around each focus-focus point. We now explain the details. For each focus-focus singularity $m_r$, where $r\in \{1, \ldots, {n_{\text{FF}}}\}$, we describe how to construct a so-called *privileged momentum map*. Fix a choice of signs $\varepsilon = (\varepsilon_1,\ldots,\varepsilon_{n_{\text{FF}}}) \in ({\mathbb Z}_2)^{n_{\text{FF}}}$ and let $\tilde{W}:=F^{-1}\big(F(W)\backslash b^\varepsilon\big)$ be the set $W$ without the preimage of the cuts. Recall the map $\Phi= (L,\Phi_2): W \to {\mathbb R}^2$ introduced in §[3.4](#sss:taylor){reference-type="ref" reference="sss:taylor"}. Let ${\tau_1^{\text{pref}}}\colon \Phi(W)\to{\mathbb R}$ be the lift of $\tau_1$ to ${\mathbb R}$ that is continuous on $\Phi(\tilde{W})$ and also satisfies the condition that if we take ${\sigma_1^{\text{pref}}}$ to be defined by Equation [\[eq:sigmas\]](#eq:sigmas){reference-type="eqref" reference="eq:sigmas"} using ${\tau_1^{\text{pref}}}$ then $$\label{eqn:pref-tau1} {\sigma_1^{\text{pref}}}(0) = \lim_{z\to 0} \Big({\tau_1^{\text{pref}}}(z) - \text{Im}(\log(z))\Big) \in [-\tfrac{\pi}{2} ,\tfrac{3\pi}{2}[.$$ Note that changing the choice of lift will change $\sigma_1(0)$ by integer multiples of $2\pi$, so such a ${\tau_1^{\text{pref}}}$ always exists.
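Concretely, selecting the lift satisfying Equation [\[eqn:pref-tau1\]](#eqn:pref-tau1){reference-type="eqref" reference="eqn:pref-tau1"} amounts to shifting a given value by the unique integer multiple of $2\pi$ that places it in $[-\tfrac{\pi}{2},\tfrac{3\pi}{2}[$. A hypothetical helper (ours, not from the paper) illustrating this normalisation:

```python
import math

def normalize_lift(sigma0: float) -> float:
    """Shift by the unique integer multiple of 2*pi placing the value
    in the half-open interval [-pi/2, 3*pi/2)."""
    return sigma0 - 2*math.pi*math.floor((sigma0 + math.pi/2) / (2*math.pi))
```

For instance, `normalize_lift(7.0)` returns `7.0 - 2*pi`, and any two inputs differing by a multiple of $2\pi$ are sent to the same representative.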
We now define the vector field $$2 \pi \mathcal X^r := ({\tau_1^{\text{pref}}}\circ \Phi) \mathcal X_L + (\tau_2 \circ \Phi) \mathcal X_{\Phi_2}, \label{eqHamvf}$$ which is smooth on $\tilde{W}$ and, by Pelayo & Vũ Ngọc [@PVNinventiones; @PVNacta], it turns out that there is a unique continuous function $$\label{eqn:Xi} \Xi_r:W \to {\mathbb R}$$ which satisfies: - $\Xi_r$ is smooth on $\tilde{W}$, - the Hamiltonian vector field of $\Xi_r$ on $\tilde{W}$ is $\mathcal X^r$, - $\Xi_r(p)$ tends to $0$ as $p$ approaches $m_r$. The *privileged momentum map at $m_r$* is defined by $\nu_r :=(L,\Xi_r)\colon W\to{\mathbb R}^2$ and it is smooth on $\tilde{W}$. Let $f_\varepsilon$ be a choice of cartographic homeomorphism. Recall $\mu = f_\varepsilon\circ F$ and $\Delta= \mu (M)$. Both $\mu$ and $\nu_r$ are continuous on $W$ and smooth on $\tilde{W}$, they both have $L$ as their first component, and on $\tilde{W}$ they both generate an effective Hamiltonian $2$-torus action. Thus, there exists $\kappa_r^\Delta\in{\mathbb Z}$ such that $\mu$ and $\nu_r$ are related via $$\label{eqn:twist-def} \mu = T^{\kappa_r^\Delta} \circ \nu_r,$$ where $T$ is as in Equation [\[eqn:T\]](#eqn:T){reference-type="eqref" reference="eqn:T"}. The integer $\kappa_r^\Delta \in {\mathbb Z}$ is called the *twisting index* of $\Delta$ at the focus-focus value $c_r$. Note that the integer $\kappa_r^\Delta$ depends on the choice of the cartographic homeomorphism $f_\varepsilon$, so the assignment of the integers $\kappa_r^\Delta$ differs for each representative of the semitoric polygon.
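To make Equation [\[eqn:twist-def\]](#eqn:twist-def){reference-type="eqref" reference="eqn:twist-def"} concrete, assume (as a hedged illustration; the precise definition of $T$ is given in Equation [\[eqn:T\]](#eqn:T){reference-type="eqref" reference="eqn:T"} earlier in the paper) that $T$ acts on momentum-map values as the vertical shear $(l,h)\mapsto(l,h+l)$. Then $\mu = T^{\kappa_r^\Delta}\circ\nu_r$ simply adds $\kappa_r^\Delta\, L$ to the second component:

```python
import numpy as np

# Assumption (hedged): T is the vertical shear (l, h) -> (l, h + l), i.e. the
# matrix [[1, 0], [1, 1]] acting on column vectors (l, h).
T = np.array([[1, 0], [1, 1]])
kappa = 3                                 # a sample twisting index
nu_val = np.array([2.0, 0.5])             # a sample value (L, Xi_r)(p)
mu_val = np.linalg.matrix_power(T, kappa) @ nu_val
# mu = T^kappa o nu_r then reads: mu_2 = Xi_r + kappa * L
```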
Given any such set of integer labels, the labels on the other representatives of the semitoric polygon are obtained via the following group action: $$(\varepsilon', T^n) \cdot \Big(\Delta, (b_{\lambda_r})_{r=1}^{n_{\text{FF}}},(\varepsilon_r)_{r=1}^{n_{\text{FF}}},(\kappa_r^\Delta)_{r=1}^{n_{\text{FF}}}\Big)\, = \left(t_u(T^n(\Delta)), (b_{\lambda_r})_{r=1}^{n_{\text{FF}}},(\varepsilon_r'\varepsilon_r)_{r=1}^{n_{\text{FF}}},\left(\kappa_r^\Delta+n+\sum_{i=1}^{r}u_i\right)_{r=1}^{n_{\text{FF}}}\right), \label{eq:twisact}$$ for $(\varepsilon',T^n) \in ({\mathbb Z}_2)^{n_{\text{FF}}}\times \mathcal G$ where again $u_r = \frac{1}{2}(\varepsilon_r - \varepsilon_r \varepsilon_r')$. **Remark 19**. There are two differences between Equation [\[eq:twisact\]](#eq:twisact){reference-type="eqref" reference="eq:twisact"} and the original equation for this group action contained in Pelayo & Vũ Ngọc [@PVNinventiones], both contained in the extra term $\sum_{i=1}^{r}u_i$ which does not appear in Pelayo & Vũ Ngọc [@PVNinventiones]. - The first $r-1$ terms of the sum, $u_1+\ldots+u_{r-1}$, had to be added to this group action for the following reason: changing the cut direction at any focus-focus point with smaller $x$-component than $m_r$ will have the effect of applying $T$ to the generalized toric momentum map near $m_r$, but will not change the privileged momentum map. Therefore, such a change in cut direction changes the twisting index; see Example [Example 22](#ex:twist-change){reference-type="ref" reference="ex:twist-change"} for an explicit example of this. Since the twisting index of a system with at least two focus-focus points had never been computed until the present paper, this oversight has no impact on the existing literature. This was noticed independently by both Yohann Le Floch and the authors of the present paper. 
- The last term $u_r$ of the summation comes from a change in convention compared to Pelayo & Vũ Ngọc [@PVNinventiones], which is independent of the previous item. In the original formulation, the definition of the preferred momentum map near each focus-focus point was slightly different depending on whether the corresponding cut was upwards or downwards. Specifically, they take the lift of $\tau_1$ to lie in $[0,2\pi\,[$ if the cut was up and in $[\pi,3\pi\,[$ if the cut was down. In this paper, we have opted to take a more unified approach, and the construction of $\nu_r$ presented in the current section does not depend on the cut direction, which comes at the cost of the appearance of the term $u_r$. **Definition 20**. The *twisting index invariant* of the simple semitoric system $(M,\omega,F)$ is the $({\mathbb Z}_2)^{n_{\text{FF}}}\times \mathcal G$-orbit of the quadruple $\left(\Delta, (b_{\lambda_r})_{r=1}^{n_{\text{FF}}},(\varepsilon_r)_{r=1}^{n_{\text{FF}}},(\kappa_r^\Delta)_{r=1}^{n_{\text{FF}}}\right)$ constructed as above with group action as given in Equation [\[eq:twisact\]](#eq:twisact){reference-type="eqref" reference="eq:twisact"}. Thus, the twisting index can be thought of as an assignment of an integer to each focus-focus point for each choice of semitoric polygon. For $r\in\{1,\ldots, n_{\text{FF}}\}$, given a representative of the semitoric polygon $\Delta$, we call the associated $\kappa_r^\Delta$ the *twisting index of $m_r$ relative to $\Delta$*. **Remark 21**.
Notice that since the Hamiltonian vector field of $\Xi_r$ is $\mathcal X^r$, the function $\Xi_r$ can be written as $\Xi_r = \xi_r \circ \Phi$, where $\xi_r$ is determined by $$\label{eqn:xi_deriv} \frac{\partial \xi_r}{\partial l}(z) = \dfrac{1}{2\pi} {\tau_1^{\text{pref}}}(z),\qquad \frac{\partial \xi_r}{\partial j}(z) = \dfrac{1}{2\pi} \tau_2(z)$$ and $\displaystyle \lim_{z \to 0} \xi_r (z) =0$. **Example 22**. Consider the two polygon representatives shown in Figure [\[fig:twist-change\]](#fig:twist-change){reference-type="ref" reference="fig:twist-change"}, which differ by the action of $(\varepsilon'=(-1,1), T^0)$, i.e. by changing the cut direction at the first focus-focus point $m_1$. Let $\mu$ be the generalized toric momentum map for the polygon on the left, and $\mu'$ be the generalized toric momentum map for the polygon on the right. Since $\mu' = t_{b_{\lambda_1}}\circ \mu$, near the second focus-focus point $m_2$, the maps $\mu$ and $\mu'$ differ by translation and an application of $T$. Since the privileged momentum map $\nu_2$ does not depend on the choice of polygon, we conclude that $\mu = T^{\kappa_2^\Delta}\circ\nu_2$ implies that $\mu' = T^{\kappa_2^\Delta+1}\circ\nu_2$. Thus, we see that changing the cut direction at $m_1$ increases the twisting index at $m_2$ by 1. ![](pol-10pp.pdf){width="70%"} ![](pol-10mp.pdf){width="70%"} ## The classification Pelayo & Vũ Ngọc [@PVNinventiones] describe how to obtain the five invariants from a given simple semitoric system. Then, in [@PVNacta], they describe exactly which data can appear as invariants of a semitoric system, and call the abstract list of such data a *list of semitoric ingredients*. Furthermore, they proved that these five invariants completely classify semitoric integrable systems: **Theorem 23** (Pelayo $\&$ Vũ Ngọc [@PVNinventiones; @PVNacta]). 1. *Two simple semitoric systems are isomorphic if and only if they have the same five semitoric invariants.* 2.
*Given a list of semitoric ingredients there exists a simple semitoric system which has those as its five invariants.* Theorem [Theorem 23](#thm:PVN-classification){reference-type="ref" reference="thm:PVN-classification"} thus implies that there is a natural bijection between isomorphism classes of simple semitoric systems and lists of semitoric ingredients. # Geometric interpretations of the twisting index {#sec:geometricinterpret} The primary goal of this section is to give an equivalent formulation of the twisting index in terms of comparing homology cycles near a focus-focus point to those which arise from a choice of semitoric polygon. The main results of this section are collected in Theorem [Theorem 1](#thm:geometry){reference-type="ref" reference="thm:geometry"}. ## The local and semilocal maps First let us briefly review the relevant maps, as described in Section [3](#ss:invariants){reference-type="ref" reference="ss:invariants"}. Each focus-focus point $m_r$ of the system has a neighborhood $U_r$ which is isomorphic as an integrable system to a neighbourhood $V_r$ of the origin in the system on ${\mathbb R}^4$ given by $$G(q_1,q_2,p_1,p_2)=(q_1p_2-q_2p_1,q_1p_1+q_2p_2)$$ via an isomorphism $(\varphi_r,\varrho_r)$. Let $W_r = F^{-1}(F(U_r))$. We can then define a map $\Phi_r\colon W_r\to{\mathbb R}^2$ via $\Phi_r = \varrho_r \circ F$, which is of the form $\Phi_r = (L,\Phi_2)$ and essentially amounts to extending the momentum maps of the local normal form for $m_r$ to a neighborhood of the fiber $F^{-1}(F(m_r))$. Given $z\in\Phi_r(W_r)$, we consider the fiber $\Lambda_z = \Phi_r^{-1}(z)$. An action variable $I_r$ is obtained by integrating a primitive $\varpi$ of $\omega$ along a cycle in $\Lambda_z$. There are choices of such cycles, and therefore various options for $I_r$. Any such $I_r$ induces a momentum map $\mu_r^{I_r}$.
Due to the monodromy around the focus-focus value $F(m_r)$, such an $I_r$ and $\mu_r^{I_r}$ cannot be smoothly defined in all of $U_r$ or $W_r$, respectively. There is, however, a preferred choice of action and momentum map. This is the action $\xi_r$ and corresponding momentum map $\Xi_r$ discussed in Section [3.5](#sss:twisting){reference-type="ref" reference="sss:twisting"} and in particular Remark [Remark 21](#rmk:Xi-and-xi){reference-type="ref" reference="rmk:Xi-and-xi"}. The local maps are shown in Figure [\[fig:diagram-local\]](#fig:diagram-local){reference-type="ref" reference="fig:diagram-local"}, and the semilocal maps are shown in Figure [\[fig:diagram-semilocal\]](#fig:diagram-semilocal){reference-type="ref" reference="fig:diagram-semilocal"}. Later we will see that for the specific system we study in Section [5](#sec:specificexample){reference-type="ref" reference="sec:specificexample"} there is a relationship between the map $\varrho_r$ and a quantity called the imaginary action, see Remark [Remark 37](#rmk:varrho_is_J){reference-type="ref" reference="rmk:varrho_is_J"}.
$$\begin{tikzcd} M\arrow[r, phantom, sloped, "\supset"]&[-2em] U_r \arrow{r}{\varphi_r} \arrow[d,"F"] \arrow[dr,"\Phi_r"] & V_r \arrow[d,"G"] &[-2em] {\mathbb R}^4 \arrow[l,phantom,sloped,"\subset"]\\ {\mathbb R}^2 \arrow[r, phantom, sloped, "\supset"] &F(U_r) \arrow{r}{\varrho_r} & {\mathbb R}^2 \\[-1em] & (l,h)\arrow[u, phantom, sloped, "\in"] & (l,j)\arrow[u, phantom, sloped, "\in"] & \end{tikzcd}$$ $$\begin{tikzcd} M\arrow[r, phantom, sloped, "\supset"] &[-2em] W_r \arrow[bend left = 20, looseness = 1]{rrrd}{\mu_r^{I_r} := I_r\circ \Phi_r} \arrow[d,"F"] \arrow[dr,"\Phi_r"] & &[-2em] & \\ {\mathbb R}^2 \arrow[r, phantom, sloped, "\supset"] &F(W_r) \arrow{r}{\varrho_r} & {\mathbb R}^2 \arrow[r, phantom, sloped, "\simeq"] & {\mathbb C}\arrow[r,"I_r"] & {\mathbb R}\\[-1em] & (l,h)\arrow[u, phantom, sloped, "\in"] & (l,j)\arrow[u, phantom, sloped, "\in"] & z \arrow[u, phantom, sloped, "\in"] & \end{tikzcd}$$ ## Geometric interpretations Throughout this section we will often make use of the following fact: any free continuous ${\mathbb S}^1$-action on a connected manifold $N$ determines a well-defined cycle in $H_1(N)$ by taking the orbit of any point $p\in N$. We denote the orbit through $p$ by ${\mathbb S}^1\cdot p$. The resulting cycle is well-defined in $H_1(N)$ because if $p,q\in N$ then any path $\gamma\colon [0,1]\to N$ with $\gamma(0)=p$ and $\gamma(1)=q$ determines a homotopy between the orbit through $p$ and the orbit through $q$ by ${\mathbb S}^1\cdot (\gamma(s))$ for $0\leq s \leq 1$. Let $(M,\omega,F)$ be a simple semitoric system and let $(\Delta,b,\varepsilon)$ be a representative of its semitoric polygon. Let $M^\varepsilon := M\setminus F^{-1}(b^\varepsilon)$ denote the manifold $M$ without the preimage of the cuts.
Recall, as in Section [3.2](#sss:polygon){reference-type="ref" reference="sss:polygon"}, that there is a generalized toric momentum map $\mu_\Delta=(L,\mu_2^\Delta)\colon M\to {\mathbb R}^2$ such that - $\mu_\Delta$ is continuous on $M$ and smooth on $M^\varepsilon$, - $\mu_\Delta(M)=\Delta$, - the Hamiltonian flows of $L$ and $\mu_2^\Delta$ generate an effective $\mathbb{T}^2$-action on $M^\varepsilon$. Roughly speaking, the twisting index measures the difference between $\mu_\Delta$ and the dynamics near a given focus-focus point, which can be expressed in several different, but equivalent, ways. Throughout this section, let $m:=m_r$ be a focus-focus point, and we will drop the subscript $r$ from all notation. Following the construction in Section [3.4](#sss:taylor){reference-type="ref" reference="sss:taylor"}, we obtain a neighborhood $W$ of the focus-focus fiber and a map $\Phi\colon W \to {\mathbb C}$. As before, let $\Lambda_z :=\Phi^{-1}(z)$, which is a torus for $z\neq 0$ and a pinched torus for $z=0$. For $z\neq 0$, the Hamiltonian flow of $L$ generates a free ${\mathbb S}^1$-action on $\Lambda_z$, and therefore determines a cycle in $H_1(\Lambda_z)$ which we denote by $\gamma^z_L$. Now consider $\tilde{W} = W\setminus F^{-1}(b^\varepsilon)$ and for $z\in\Phi(\tilde{W})$ denote by $\gamma^z_\Delta$ the cycle determined by the flow of $\mu_2^{\Delta}$ on $\Lambda_z$. Notice that $\{\gamma_L^z,\gamma_\Delta^z\}$ form a basis of $H_1(\Lambda_z)$ for any $z\in\Phi(\tilde{W})$. Analogously to Equation [\[eq:semigact\]](#eq:semigact){reference-type="eqref" reference="eq:semigact"}, we integrate the primitive $\varpi$ of $\omega$ over this cycle to determine an action $I_\Delta\colon \Phi(\tilde{W})\to{\mathbb R}$ via $$\label{eqn:I-Delta} I_\Delta(z) := \frac{1}{2\pi}\int_{\gamma_\Delta^z}\varpi.$$ Note that $I_\Delta(z)$ is not defined when $z=0$, but the limit as $z$ goes to zero exists, so we denote $I_\Delta(0) := \lim_{z\to 0}I_\Delta(z)$.
Due to the discussion in Remark [Remark 9](#rmk:integrate-cycles){reference-type="ref" reference="rmk:integrate-cycles"}, we have that $I_\Delta \circ \Phi=\mu_2^\Delta$. Now we let $$\label{eqn:S-Delta} S^\Delta(z) := 2\pi I_\Delta(z)-2\pi I_\Delta(0) + \text{Im}(z\log z - z)$$ and obtain its Taylor series $(S^\Delta)^\infty\in {\mathbb R}_0[[l,j]]$. Note that, in Section [3.4](#sss:taylor){reference-type="ref" reference="sss:taylor"}, the Taylor series $S^\infty$ defined via Equation [\[eqn:def-of-S\]](#eqn:def-of-S){reference-type="eqref" reference="eqn:def-of-S"} is only well-defined up to the addition of integer multiples of $2\pi l$. On the other hand, in the present section the polygon $\Delta$ and associated generalized toric momentum map $\mu_\Delta$ give a *preferred choice* of cycle $\gamma_\Delta^z$ to use when computing $I_\Delta(z)$ in Equation [\[eqn:I-Delta\]](#eqn:I-Delta){reference-type="eqref" reference="eqn:I-Delta"}. Thus, the Taylor series $(S^\Delta)^\infty$ is completely determined in the process described in this section. We now want to compare this to another preferred Taylor series near the focus-focus point $m$, which is related to the preferred momentum map $\nu = (L,\Xi)$ from Section [3.5](#sss:twisting){reference-type="ref" reference="sss:twisting"}. Let $\Xi\colon W\to{\mathbb R}$ be as in Equation [\[eqn:Xi\]](#eqn:Xi){reference-type="eqref" reference="eqn:Xi"}, and as in Remark [Remark 21](#rmk:Xi-and-xi){reference-type="ref" reference="rmk:Xi-and-xi"}, there exists a map $\xi\colon \Phi(W)\to {\mathbb R}$ such that $\xi \circ\Phi = \Xi$. **Lemma 24**. *Let $(S^{\text{pref}})^\infty\in{\mathbb R}_0[[l,j]]$ denote the representative of $S^\infty$ in which the coefficient of $l$ lies in $[-\frac{\pi}{2},\frac{3\pi}{2}[$.
Then $(S^{\text{pref}})^\infty$ is equal to the Taylor series of $$\hat{S}(z) := 2\pi \xi(z) - 2\pi \xi(0) + \textup{Im}(z \log z -z),$$ where $\xi \circ\Phi = \Xi$ as in Remark [Remark 21](#rmk:Xi-and-xi){reference-type="ref" reference="rmk:Xi-and-xi"}.* *Proof.* Let $(\hat{S})^\infty$ denote the Taylor series of $\hat{S}$ at the origin. Then $(S^{\text{pref}})^\infty, (\hat{S})^{\infty}\in{\mathbb R}[[l,j]]$ are representatives of the Taylor series $S^\infty\in{\mathbb R}[[l,j]]/(2\pi l{\mathbb Z})$, so $$(S^{\text{pref}})^\infty - (\hat{S})^\infty \in 2\pi l {\mathbb Z}.$$ Thus, to show that $(S^{\text{pref}})^\infty = (\hat{S})^\infty$, and therefore complete the proof, it is sufficient to show that the coefficient of $l$ in the series $(\hat{S})^\infty$ lies in the interval $[-\frac{\pi}{2},\frac{3\pi}{2}[$. Recall from [\[eqn:xi_deriv\]](#eqn:xi_deriv){reference-type="eqref" reference="eqn:xi_deriv"} that $\frac{\partial \xi}{\partial l}= \frac{1}{2\pi} {\tau_1^{\text{pref}}}$. By direct calculation from $z =l+ij$, it can be shown that $\frac{\partial }{\partial l}(\text{Im}(z\log(z)-z))=\text{Im}(\log(z))$. Now we compute $$\begin{aligned} \frac{\partial }{\partial l}\hat{S}(z) &= \frac{\partial}{\partial l}\left(2\pi\xi(z) - 2\pi \xi(0) + \text{Im}(z\log(z)-z)\right)\\ &= {\tau_1^{\text{pref}}}(z) - \frac{\partial}{\partial l}\big( \text{Im}(z\log(z)-z)\big)\\ &= {\tau_1^{\text{pref}}}(z) - \text{Im}(\log(z)).\end{aligned}$$ Since the $l$ coefficient of the Taylor series $(\hat{S})^\infty$ is given by $\frac{\partial\hat{S}}{\partial l}\big|_{z=0}$, by [\[eqn:pref-tau1\]](#eqn:pref-tau1){reference-type="eqref" reference="eqn:pref-tau1"}, the proof is complete. ◻ **Lemma 25**.
*The functions $I_\Delta(z)$ and $\xi(z)$, wherever both are defined, are related by $$I_\Delta(z) - \xi(z) = \kappa^\Delta l,$$ where $z = l + ij$.* *Proof.* The first component of both $\mu_\Delta$ and $\nu$ is $L$, and therefore $T^{\kappa^\Delta}\nu = \mu_\Delta$ is equivalent to $\kappa^\Delta L = \mu_2^\Delta-\Xi$. This implies the result. ◻ **Proposition 26**. *Let $r\in\{1,\ldots, n_{\text{FF}}\}$. Write briefly $m=m_r$ and drop the subscript $r$ from all notation below. Fix a semitoric polygon $(\Delta,b, \varepsilon)$ of $(M,\omega,F)$ and let:* - *$(S^\Delta)^\infty$ denote the Taylor series obtained from Equation [\[eqn:S-Delta\]](#eqn:S-Delta){reference-type="eqref" reference="eqn:S-Delta"},* - *$(S^{\text{pref}})^\infty$ denote the representative of $S^\infty$ in which the coefficient of $l$ lies in $[-\frac{\pi}{2},\frac{3\pi}{2}[$,* - *$\kappa^\Delta$ denote the twisting index of $m$ relative to the semitoric polygon representative $\Delta$.* *Then $\kappa^\Delta = \frac{1}{2\pi l}\left((S^{\Delta})^\infty-(S^{\text{pref}})^\infty \right)$.* *Proof.* Let $\mu_\Delta= (L, \mu_2^\Delta)$ be the generalized toric momentum map satisfying $\mu_\Delta(M) = \Delta$, and let $I_\Delta\colon \Phi(\tilde{W})\to {\mathbb R}$ be as in Equation [\[eqn:I-Delta\]](#eqn:I-Delta){reference-type="eqref" reference="eqn:I-Delta"}, so that $I_\Delta \circ \Phi = \mu_2^\Delta$. Recall that, as in Section [3.5](#sss:twisting){reference-type="ref" reference="sss:twisting"}, near $m$ there is a preferred local momentum map $\nu = (L,\Xi)$, which is smooth on $\tilde{W}$, such that $T^{\kappa^\Delta}\nu = \mu_\Delta.$ Furthermore, as in Remark [Remark 21](#rmk:Xi-and-xi){reference-type="ref" reference="rmk:Xi-and-xi"}, there is a map $\xi\colon \Phi(W)\to{\mathbb R}$ such that $\xi\circ\Phi = \Xi$.
Using Lemma [Lemma 24](#lem:Spref){reference-type="ref" reference="lem:Spref"}, Equation [\[eqn:S-Delta\]](#eqn:S-Delta){reference-type="eqref" reference="eqn:S-Delta"}, and Lemma [Lemma 25](#lem:twist-action){reference-type="ref" reference="lem:twist-action"}, we conclude that $$(S^\Delta)^\infty - (S^{\text{pref}})^\infty = 2\pi (I_\Delta - \xi)^\infty = 2\pi \kappa^\Delta l,$$ where $(I_\Delta - \xi)^\infty$ denotes the Taylor series of $I_\Delta(z) - \xi(z)$ expanded at the origin. ◻ The result of Proposition [Proposition 26](#prop:difference-Taylor){reference-type="ref" reference="prop:difference-Taylor"} is similar to how the twisting index was treated in [@LFVN21; @PPT; @Jaume-thesis], by packaging it along with the Taylor series. Let $\gamma_{\text{pref}}^z\in H_1(\Lambda_z)$ be the cycle defined by following $\mathcal{X}_{\Phi_2}$ for time $\tau_2(z)$ and following $\mathcal{X}_L$ for time ${\tau_1^{\text{pref}}}(z)$. In other words, $\gamma_\text{pref}^z$ is the piecewise smooth loop shown in Figure [1](#fig:torustorus){reference-type="ref" reference="fig:torustorus"}. Now we will show that $\gamma_\text{pref}^z$ is homotopic to $\gamma_{\Xi}^z$, which is the cycle determined by the flow of $\Xi$. For $s\in [0,1]$, define a vector field $\mathcal{X}(s)$ on $\Lambda_z$ by: $$2\pi \mathcal{X}(s) = s({\tau_1^{\text{pref}}}\circ\Phi)\mathcal{X}_L + (\tau_2\circ\Phi)\mathcal{X}_{\Phi_2}.$$ Let $\gamma^z(s)$ be the cycle determined by flowing along $\mathcal{X}(s)$ for time $2\pi$, and then flowing along $\mathcal{X}_L$ for time $(1-s){\tau_1^{\text{pref}}}(z)$. Then $\gamma^z(s)$ is a loop for all $s\in [0,1]$, $\gamma^z(0)=\gamma_\text{pref}^z$, and $\gamma^z(1) = \gamma^z_{\Xi}$. Thus, applying this equivalence and Remark [Remark 9](#rmk:integrate-cycles){reference-type="ref" reference="rmk:integrate-cycles"}, we have that $$\label{eqn:gamma-pref-Xi} \frac{1}{2\pi} \int_{\gamma_\text{pref}^z}\varpi = \frac{1}{2\pi} \int_{\gamma_\Xi^z}\varpi = \xi(z).$$ **Remark 27**.
Because of [\[eqn:gamma-pref-Xi\]](#eqn:gamma-pref-Xi){reference-type="eqref" reference="eqn:gamma-pref-Xi"}, the function $\xi(z)$ can be understood as a *preferred local action* around the focus-focus value. **Proposition 28**. *$\gamma_\Delta^z - \gamma_\text{pref}^z = \kappa^\Delta \gamma_L^z$.* *Proof.* Since $\{\gamma_L^z, \gamma_\Delta^z\}$ and $\{\gamma_L^z, \gamma_\text{pref}^z\}$ are each a basis of $H_1(\Lambda_z)\cong {\mathbb Z}^2$, there exists $K \in{\mathbb Z}$ such that $\gamma_\Delta^z - \gamma_\text{pref}^z = K \gamma_L^z$. By Equation [\[eqn:gamma-pref-Xi\]](#eqn:gamma-pref-Xi){reference-type="eqref" reference="eqn:gamma-pref-Xi"}, $\gamma_\Delta^z - \gamma_\text{pref}^z = K \gamma_L^z$ implies that $$I_\Delta(z) - \xi(z) = \frac{1}{2\pi} \int_{\gamma_\Delta^z} \varpi - \frac{1}{2\pi} \int_{\gamma_\text{pref}^z} \varpi = \frac{1}{2\pi} \int_{\gamma_\Delta^z-\gamma_\text{pref}^z} \varpi = \frac{1}{2\pi} \int_{K \gamma_L^z} \varpi = K l$$ again applying Remark [Remark 9](#rmk:integrate-cycles){reference-type="ref" reference="rmk:integrate-cycles"} in the last equality, where $z = l+ij$. Hence, $I_\Delta(z) - \xi(z) = K l$, so $K = \kappa^\Delta$ by Lemma [Lemma 25](#lem:twist-action){reference-type="ref" reference="lem:twist-action"}. ◻ # A two-parameter family with two focus-focus points {#sec:specificexample} In this section, we compute the symplectic invariants of the family of simple semitoric systems given in Equation [\[eqn_ssys\]](#eqn_ssys){reference-type="eqref" reference="eqn_ssys"}, which can have up to two focus-focus singularities. When this is the case, the Taylor series invariant (§[3.4](#sss:taylor){reference-type="ref" reference="sss:taylor"}), the height invariant (§[3.3](#sss:height){reference-type="ref" reference="sss:height"}) and the twisting index invariant (§[3.5](#sss:twisting){reference-type="ref" reference="sss:twisting"}) have two components, so we can compare them with each other. **Remark 29**.
The one-parameter family of systems given in Equation [\[eqn_ssys\]](#eqn_ssys){reference-type="eqref" reference="eqn_ssys"} is a special case of the four-parameter family of systems studied by Alonso & Hohloch [@AH2], which in turn is a special case of the broad six-parameter family of systems studied by Hohloch & Palmer [@HoPa2018]. In particular, the system in Equation [\[eqn_ssys\]](#eqn_ssys){reference-type="eqref" reference="eqn_ssys"} is a particular case of the family of systems studied in Hohloch & Palmer [@HoPa2018] with parameters $$\begin{aligned} R_1=1,&\hspace{1.5cm}&&t_1 = (1-s), &\hspace{1.5cm}& t_3 = 2s(1 - s), \\ R_2=2,&&&t_2 = s, && t_4 =0, \end{aligned}$$ and of the family of systems studied in Alonso & Hohloch [@AH2] with parameters $$R_1 = 1,\qquad R_2=2, \qquad s_1 = 0, \qquad s_2=s.$$ [\[re:otherpapers\]]{#re:otherpapers label="re:otherpapers"} Since the system in Equation [\[eqn_ssys\]](#eqn_ssys){reference-type="eqref" reference="eqn_ssys"} is a special case of the system from Hohloch & Palmer [@HoPa2018], it follows from Hohloch & Palmer [@HoPa2018 Theorem 1.1] that for any $s\in [0,1]$ the system in Equation [\[eqn_ssys\]](#eqn_ssys){reference-type="eqref" reference="eqn_ssys"} is a completely integrable system with four singularities of rank 0, located at the products of the poles of the spheres. We will denote these points by $\mathcal{N} \times \mathcal{N}$, $\mathcal{N} \times \mathcal{S}$, $\mathcal{S} \times \mathcal{N}$ and $\mathcal{S} \times \mathcal{S}$, where $\mathcal{N},\mathcal{S}$ denote the North and South poles respectively. As a consequence of Alonso $\&$ Hohloch [@AH2 *Prop. 8, Prop. 9, Thm. 11*], we have: **Proposition 30**.
*The system [\[eqn_ssys\]](#eqn_ssys){reference-type="eqref" reference="eqn_ssys"} is semitoric for all values of $s \in [0,1] \backslash \{s_-, s_+\}$, where $$s_+ = \dfrac{1}{16}\left( 8 - 3 \sqrt{2} + \sqrt{82 + 16 \sqrt{2}} \right), \qquad s_- = \dfrac{1}{16}\left( 8 + 3 \sqrt{2} - \sqrt{82 - 16 \sqrt{2}} \right).$$ The points $\mathcal{N} \times \mathcal{S}$ and $\mathcal{S} \times \mathcal{N}$ are focus-focus if $s_- < s < s_+$ and elliptic-elliptic if $s<s_-$ or $s>s_+$. The points $\mathcal{N} \times \mathcal{N}$ and $\mathcal{S} \times \mathcal{S}$ are always elliptic-elliptic. This situation is displayed in Figure [2](#fig:momimg){reference-type="ref" reference="fig:momimg"}. [\[prop:nff\]]{#prop:nff label="prop:nff"}* In Figure [2](#fig:momimg){reference-type="ref" reference="fig:momimg"}, we can observe how the image of the momentum map $F$ evolves as we change the parameter $s$: two of the singular values move from the border to the interior and back to the border. This corresponds to the transition of the singular points from elliptic-elliptic to focus-focus and focus-focus to elliptic-elliptic respectively, which are Hamiltonian-Hopf bifurcations. ![ The image $B_s :=F_s(M)$ of the momentum map as the parameter moves from $s=0$ on the left to $s=1$ on the right. Notice that two of the singular points start as elliptic-elliptic at $s=0$, become focus-focus at $s_- \approx 0.284$, and transition back into elliptic-elliptic at $s_+ \approx 0.874$.](4X2array.pdf){#fig:momimg width="430pt"} ## Polygon and height invariants The polygon invariant and the height invariant of the system in Equation [\[eqn_ssys\]](#eqn_ssys){reference-type="eqref" reference="eqn_ssys"} have been already calculated by Alonso & Hohloch [@AH2]. In this subsection, we recall those results specialized to the parameter values we are interested in for this paper. **Theorem 31** (Alonso & Hohloch [@AH2 Theorem 2]). 
*The number of focus-focus points invariant and the polygon invariant of the system from Equation [\[eqn_ssys\]](#eqn_ssys){reference-type="eqref" reference="eqn_ssys"} are as follows (see also Figure [\[fig:pols\]](#fig:pols){reference-type="ref" reference="fig:pols"}):* - *For $s < s_-$, we have ${n_{\text{FF}}}=0$ and the polygon invariant is the equivalence class of the polygon which is the convex hull of $(-3,-1)$, $(-1,1)$, $(1,-1)$, and $(3,1)$.* - *For $s_- < s < s_+$, we have ${n_{\text{FF}}}=2$ and the polygon invariant is the equivalence class of $\big(\Delta, (b_{\lambda_r})_{r=1}^2,(\varepsilon_r)_{r=1}^2\big)$ where $\lambda_1 = -1$, $\lambda_2=1$, $\varepsilon_1 = 1$, $\varepsilon_2=1$, and $\Delta$ is the convex hull of $(-3,-1)$, $(-1,1)$, $(1,1)$, and $(3,-1)$.* - *For $s > s_+$, we have ${n_{\text{FF}}}=0$ and the polygon invariant is the equivalence class of the polygon which is the convex hull of $(-3,-1)$, $(-1,-1)$, $(1,1)$, and $(3,1)$.* *[\[thm:poly\]]{#thm:poly label="thm:poly"}* ![*$(\varepsilon_1,\varepsilon_2)=(+1,+1)$*](4X2pol00-3.pdf){#fig:pols00 width=".8\\textwidth"} ![*$(\varepsilon_1,\varepsilon_2)=(+1,-1)$*](4X2pol11-3.pdf){#fig:pols10 width=".8\\textwidth"} ![*$(\varepsilon_1,\varepsilon_2)=(-1,+1)$*](4X2pol01-3.pdf){#fig:pols01 width=".8\\textwidth"} ![*$(\varepsilon_1,\varepsilon_2)=(-1,-1)$*](4X2pol10-3.pdf){#fig:pols11 width=".8\\textwidth"} Let $u\colon {\mathbb R}\to \{0,1\}$ denote the usual Heaviside step function, $$u(t) := \begin{cases} 0, & \text{if }t \leq 0,\\ 1, & \text{if }t > 0.\end{cases}$$ Also, for $s_- < s < s_+$ we define $$\label{eqn:rho} \begin{aligned} \rho_1 &:= \sqrt{4 - 12 s + 13 s^2 - 8 s^3 + 4 s^4} , \\ \rho_2 &:= \sqrt{-4 + 12 s + 23 s^2 - 64 s^3 + 32 s^4}.
\end{aligned}$$ Note that $s_-$ and $s_+$ are precisely the two roots in $[0,1]$ of the polynomial $$\label{eqn:sminus-plus} -4 + 12 s + 23 s^2 - 64 s^3 + 32 s^4.$$ Thus, the radicand of $\rho_2$ is strictly positive when $s_- < s < s_+$. Furthermore, the radicand of $\rho_1$ is strictly positive for all $s\in{\mathbb R}$. **Theorem 32** (Alonso & Hohloch [@AH2 Theorem 22]). *The height invariant $(h_1(s),h_2(s))$ associated to the system [\[eqn_ssys\]](#eqn_ssys){reference-type="eqref" reference="eqn_ssys"} for $s_- < s < s_+$ is given by $$\begin{aligned} h_1(s) &= -\dfrac{1}{2\pi} \mathcal F(s) + 2u (2-3s), \\ h_2(s) &= 2 - h_1(s), \end{aligned}$$ where $$\begin{aligned} \mathcal F(s) := &-4 \arctan \left( \dfrac{-4 - 16 s^3 + 8 s^4 + 4 s (3 + \rho_1) - s^2 (1 + 4 \rho_1)}{(-2 + 3 s) \rho_2}\right) \\&-2 \arctan \left( \dfrac{-4 + 32 s^3 - 16 s^4 + 4 s (3 + 2 \rho_1) - s^2 (25 + 8 \rho_1)}{(-2 + 3 s) \rho_2} \right) \\& + \dfrac{2-3s}{2(s-1)s} \log \left( \dfrac{- \rho_1}{-6s + 6s^2 + \rho_2} \right). \end{aligned}$$* In Figure [7](#fig:hei){reference-type="ref" reference="fig:hei"} we can see the height invariant represented as a function of the parameter $s$. ![ The height invariant as a function of the parameter $s$ exists only for $s_- < s < s_+$. The height $h_1(s)$ (blue) corresponds to the focus-focus singularity $\mathcal{N} \times \mathcal{S}$ and the height $h_2(s)$ (yellow) to $\mathcal{S} \times \mathcal{N}$. ](4X2height.pdf){#fig:hei width="200pt"} ## Symmetry between the Taylor series {#sec:symmetry} Before computing the Taylor series for the focus-focus singularity $\mathcal{N}\times\mathcal{S}$, in this section we will show that symmetries of the system can be used to determine a relationship between the Taylor series invariants of the two focus-focus points of the system.
To do this we will make use of Proposition [Proposition 5](#prop:symmetry-general){reference-type="ref" reference="prop:symmetry-general"}, which explains how changing the signs of the components of the momentum map affects the Taylor series invariant. Sepe & Vũ Ngọc [@SepeVN-notes Theorem 4.56] show how different choices of local normal form charts (as in Theorem [Theorem 11](#EliassonMZ){reference-type="ref" reference="EliassonMZ"}) impact the resulting Taylor series. In the following proof, we show how changing the signs of the components of the momentum map impacts the preferred choice of chart in a semitoric system, and then apply the result of Sepe & Vũ Ngọc to obtain Proposition [Proposition 5](#prop:symmetry-general){reference-type="ref" reference="prop:symmetry-general"}. *Proof of Proposition [Proposition 5](#prop:symmetry-general){reference-type="ref" reference="prop:symmetry-general"}.* Since $\Phi$ is a fiber-preserving symplectomorphism, the fact that $\Phi(m)$ is focus-focus is immediate. Since $m$ is focus-focus, by Theorem [Theorem 11](#EliassonMZ){reference-type="ref" reference="EliassonMZ"} there exists a map $\varphi\colon U\to {\mathbb R}^4$ from a neighborhood $U$ of $m$ which is a symplectomorphism onto its image, and a local diffeomorphism $\varrho\colon {\mathbb R}^2\to{\mathbb R}^2$ around $0$ such that $\varrho(F(m)) = 0$ and $\varrho \circ F = G \circ \varphi$, where $$G(x_1,y_1,x_2,y_2):= (G_1,G_2)(x_1,y_1,x_2,y_2) := (x_1y_2-x_2y_1, x_1 y_1 + x_2 y_2)$$ and ${\mathbb R}^4$ is equipped with the symplectic form $\omega_{{\mathbb R}^4} = \mathrm{d}x_1 \wedge \mathrm{d}y_1 + \mathrm{d}x_2 \wedge \mathrm{d}y_2$. As discussed in Section [3.4](#sss:taylor){reference-type="ref" reference="sss:taylor"}, we may, and do, choose these maps such that $\varrho_1(l,h) = l$ and that $\varrho_2(l,h)$ satisfies $\frac{\partial \varrho_2}{\partial h}>0$.
Following Sepe & Vũ Ngọc [@SepeVN-notes] and adapting to our notation, let $$A_{-1,1}(x_1,y_1,x_2,y_2) := (x_2,y_2,x_1,y_1), \qquad A_{1,-1}(x_1,y_1,x_2,y_2) := (y_1, -x_1, y_2, -x_2),$$ $A_{-1,-1} := A_{-1,1}\circ A_{1,-1}$, and let $A_{1,1}$ denote the identity on ${\mathbb R}^4$. Notice that $A_{\varepsilon_1,\varepsilon_2}\colon {\mathbb R}^4 \to {\mathbb R}^4$ is a symplectomorphism and that $G\circ A_{\varepsilon_1,\varepsilon_2} = (\varepsilon_1 G_1,\varepsilon_2 G_2)$ for any $\varepsilon_1,\varepsilon_2\in\{-1,+1\}$. Now fix a choice of $\varepsilon_1,\varepsilon_2\in\{-1,1\}$ and let $\tilde{F}: =(\varepsilon_1 L,\varepsilon_2 H)$. Note that $(M,\omega,\tilde{F})$ is a semitoric system, $\Phi$ satisfies $\Phi^*F' = \tilde{F}$, and $m$ is a focus-focus point of $(M,\omega,\tilde{F})$. Let $\tilde{S}_m^\infty$ denote the Taylor series invariant of $m$ in $(M,\omega,\tilde{F})$, and note that $\tilde{S}_m^\infty=(S_{\Phi(m)}')^\infty$. To complete the proof, we will now show that $$\label{eqn:tildeS-formula} S_m^\infty(l,j) = \varepsilon_2 \tilde{S}^\infty_{m}(\varepsilon_1 l, \varepsilon_2 j) + \left(\frac{1-\varepsilon_1}{2}\right) \pi l \quad (\textrm{mod }2\pi l).$$ We identify the Klein group $K_4$ with $\{-1,1\}^2$. Note that since $(L,H)$ and $(\varepsilon_1 L, \varepsilon_2 H)$ induce the same fibration, due to Sepe & Vũ Ngọc [@SepeVN-notes Theorem 4.56], the Taylor series in each case will be related by the $K_4$-action described in Sepe & Vũ Ngọc [@SepeVN-notes Lemma 4.52]. The remainder of the proof is showing that the preferred choices of each Taylor series, relative to the semitoric systems, are the ones which satisfy Equation [\[eqn:tildeS-formula\]](#eqn:tildeS-formula){reference-type="eqref" reference="eqn:tildeS-formula"}. Define $\tilde{\varrho}= (\varepsilon_1\varrho_1,\varepsilon_2\varrho_2)$.
Then $\tilde{\varphi} := A_{\varepsilon_1,\varepsilon_2}\circ \varphi\colon U \to {\mathbb R}^4$ is a symplectomorphism onto its image and $\tilde{\varrho}$ is a local diffeomorphism around zero such that $\tilde{\varrho}(\tilde{F}(m)) = 0$ and $\tilde{\varrho} \circ \tilde{F} = G \circ \tilde{\varphi}$. Furthermore, writing $\tilde{\varrho} = (\tilde{\varrho}_1, \tilde{\varrho}_2)$, we see that $\tilde{\varrho}_1(\varepsilon_1 l,\varepsilon_2 h) = l$ and $\frac{\partial} {\partial h}\left(\tilde{\varrho}_2(\varepsilon_1 l, \varepsilon_2 h)\right) >0$. Recall that associated to each local normal form chart $(\varphi,\varrho)$ around a focus-focus point, there is a well-defined choice of Taylor series invariant, and recall that (as described in the beginning of Section [3.4](#sss:taylor){reference-type="ref" reference="sss:taylor"}) there is a preferred choice of such a pair $(\varphi,\varrho)$ around any focus-focus point in a semitoric system (up to flat functions). Given the preferred isomorphism $(\varphi,\varrho)$ of $(M,\omega,F)$, we have produced a new preferred isomorphism $(\tilde{\varphi},\tilde{\varrho})$ of $(M,\omega,\tilde{F})$. By Sepe & Vũ Ngọc [@SepeVN-notes Lemma 4.55], the map which assigns the Taylor series invariant to a local normal form chart around a focus-focus point is equivariant with respect to the actions of $K_4 \cong \{-1,1\}^2$. The action of $(\varepsilon_1,\varepsilon_2)\in K_4$ on the charts is given by $(\varphi,(\varrho_1,\varrho_2))\mapsto (A_{\varepsilon_1,\varepsilon_2}\circ\varphi,(\varepsilon_1\varrho_1,\varepsilon_2\varrho_2))$. Since $(\varphi,\varrho)$ and $(\tilde{\varphi},\tilde{\varrho})$ are related by this action, we conclude that the Taylor series are related by the formula given in Sepe & Vũ Ngọc [@SepeVN-notes Lemma 4.52], which is Equation [\[eqn:tildeS-formula\]](#eqn:tildeS-formula){reference-type="eqref" reference="eqn:tildeS-formula"}, as desired. ◻ Now we will apply this result to our system.
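The sign identities $G\circ A_{-1,1} = (-G_1, G_2)$ and $G\circ A_{1,-1} = (G_1, -G_2)$ used in the proof above are easy to check numerically. The following sketch is an illustrative aside, not part of the original argument:

```python
import random

def G(v):
    # Focus-focus normal form map G = (G1, G2) from the proof above
    x1, y1, x2, y2 = v
    return (x1 * y2 - x2 * y1, x1 * y1 + x2 * y2)

def A_m1_p1(v):
    # A_{-1,1}: swap the two symplectic factors
    x1, y1, x2, y2 = v
    return (x2, y2, x1, y1)

def A_p1_m1(v):
    # A_{1,-1}: quarter rotation (x_i, y_i) -> (y_i, -x_i) in each factor
    x1, y1, x2, y2 = v
    return (y1, -x1, y2, -x2)

random.seed(0)
for _ in range(500):
    v = tuple(random.uniform(-2.0, 2.0) for _ in range(4))
    g1, g2 = G(v)
    # G o A_{-1,1} = (-G1, G2), i.e. (eps1, eps2) = (-1, +1)
    h1, h2 = G(A_m1_p1(v))
    assert abs(h1 + g1) < 1e-12 and abs(h2 - g2) < 1e-12
    # G o A_{1,-1} = (G1, -G2), i.e. (eps1, eps2) = (+1, -1)
    k1, k2 = G(A_p1_m1(v))
    assert abs(k1 - g1) < 1e-12 and abs(k2 + g2) < 1e-12
print("sign action of A_{eps1,eps2} on (G1, G2) verified")
```

Composing the two checks also confirms $G\circ A_{-1,-1} = (-G_1,-G_2)$, since $A_{-1,-1} = A_{-1,1}\circ A_{1,-1}$.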
Recall that $m_1=\mathcal{N}\times\mathcal{S}$ and $m_2=\mathcal{S}\times\mathcal{N}$ are both focus-focus singular points for $s\in \,\,]s_-,s_+[\,$. **Lemma 33**. *Let $s\in \,\,]s_-,s_+[\,$, and for such $s$ let $S^\infty_{i,s}$ denote the Taylor series invariant at the focus-focus point $m_i$ for $i\in\{1,2\}$. Then $$S_{2,s}^\infty(l,j) = -S_{1,s}^\infty(-l,-j)+\pi l.$$* *Proof.* Consider the map $\Phi\colon {\mathbb S}^2\times {\mathbb S}^2 \to {\mathbb S}^2\times{\mathbb S}^2$ given by $$\Phi(x_1,y_1,z_1,x_2,y_2,z_2) = (-x_1,y_1,-z_1,x_2,-y_2,-z_2).$$ Note that $\Phi^*\omega=\omega$, $\Phi(\mathcal{N}\times\mathcal{S}) = \mathcal{S}\times\mathcal{N}$, and $\Phi^*(L,H) = (-L,-H)$. Then we apply Proposition [Proposition 5](#prop:symmetry-general){reference-type="ref" reference="prop:symmetry-general"}, taking $\varepsilon_1=\varepsilon_2=-1$, which proves the claim. ◻ In order to calculate the twisting index invariant of the system, and therefore prove Theorem [Theorem 3](#thm:twist-intro){reference-type="ref" reference="thm:twist-intro"}, we need to know the lower order terms of the Taylor series invariant for each focus-focus point, which is the content of Theorem [Theorem 4](#thm:Taylor-intro){reference-type="ref" reference="thm:Taylor-intro"}. The above lemma is an important part of Theorem [Theorem 4](#thm:Taylor-intro){reference-type="ref" reference="thm:Taylor-intro"}, since it implies that to obtain both Taylor series it is sufficient to only explicitly compute $S^\infty_{1,s}(l,j)$. This is what we will do now. ## The action integral {#ss:action} In order to obtain the remaining symplectic invariants of this system, we next need to compute the action integral. To do so, we perform singular symplectic reduction by the Hamiltonian ${\mathbb S}^1$-action generated by $L$, see Sjamaar & Lerman [@SL] for details of this concept. 
We start by rewriting the system [\[eqn_ssys\]](#eqn_ssys){reference-type="eqref" reference="eqn_ssys"} using the usual cylindrical coordinates $(\theta_1,z_1,\theta_2,z_2)$ where $\theta_i$ measures the angle in the $x_iy_i$-plane in the counterclockwise direction starting from the positive $x_i$-axis for $i\in\{1,2\}$. We then have: $$\left\{ \begin{aligned} L(\theta_1,z_1,\theta_2,z_2) &= z_1 + 2 z_2,\\ H_s(\theta_1,z_1,\theta_2,z_2) &= (1-s) z_1 + s z_2 + 2(1-s)s \sqrt{(1 - {z_1}^2) (1 - {z_2}^2)} \cos(\theta_1 - \theta_2). \end{aligned} \right. \label{eqncyl}$$ The symplectic form in cylindrical coordinates is given by $\omega = \mathrm{d}z_1 \wedge \mathrm{d}\theta_1 + 2 \mathrm{d}z_2 \wedge \mathrm{d}\theta_2$, since the standard symplectic form on the sphere is $\omega_{{\mathbb S}^2} = \mathrm{d}\theta \wedge \mathrm{d}z$ and $\omega = -(\omega_{{\mathbb S}^2} \oplus 2 \omega_{{\mathbb S}^2})$. We now perform the affine coordinate change $$\begin{array}{lcl} q_1 := -\theta_1, && p_1 := z_1 + 2 z_2, \\ q_2 := \theta_1-\theta_2, && p_2 := 2(1+z_2), \end{array} \label{eq:changevars}$$ and obtain $L(q_1,p_1,q_2,p_2)=p_1$ and $$\begin{aligned} H_s(q_1,p_1,q_2,p_2) =& p_1 - p_2+2 - s(p_1 - p_2+2) + \frac{s}{2} (p_2-2) \\&+ (1-s)s \sqrt{p_2(p_2-p_1-1)(p_2-4)(p_2-p_1-3)} \cos(q_2). \end{aligned} \label{eq:LH}$$ In these coordinates, the symplectic form becomes $\omega = \mathrm{d}q_1 \wedge \mathrm{d}p_1 + \mathrm{d}q_2 \wedge \mathrm{d}p_2$. Moreover, it is obvious that $L=p_1$ is a constant of motion because $H_s$ does not depend on $q_1$. This notation is thus particularly suitable to express the reduction by the ${\mathbb S}^1$-action induced by $L$. Instead of $p_1$, it is more convenient to use the variable $l:=p_1+1\in[-2,4]$, so that $\mathcal{N} \times \mathcal{S}\in \{l=0\}$.
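This change of variables can be checked numerically. The following sketch (an illustrative aside, not part of the paper) samples random points on the two spheres and confirms that $H_s$ in cylindrical coordinates agrees with the expression in the coordinates $(q_1,p_1,q_2,p_2)$:

```python
import math
import random

def H_cyl(theta1, z1, theta2, z2, s):
    # H_s in cylindrical coordinates (first display above)
    return ((1 - s) * z1 + s * z2
            + 2 * (1 - s) * s
              * math.sqrt((1 - z1**2) * (1 - z2**2))
              * math.cos(theta1 - theta2))

def H_qp(q1, p1, q2, p2, s):
    # H_s after the affine change of variables; q1 does not appear
    rad = p2 * (p2 - p1 - 1) * (p2 - 4) * (p2 - p1 - 3)
    return (p1 - p2 + 2 - s * (p1 - p2 + 2) + (s / 2) * (p2 - 2)
            + (1 - s) * s * math.sqrt(max(rad, 0.0)) * math.cos(q2))

random.seed(1)
for _ in range(1000):
    s = random.uniform(0.0, 1.0)
    theta1 = random.uniform(-math.pi, math.pi)
    theta2 = random.uniform(-math.pi, math.pi)
    z1, z2 = random.uniform(-1.0, 1.0), random.uniform(-1.0, 1.0)
    q1, p1 = -theta1, z1 + 2 * z2          # the affine change of variables
    q2, p2 = theta1 - theta2, 2 * (1 + z2)
    assert abs(H_cyl(theta1, z1, theta2, z2, s)
               - H_qp(q1, p1, q2, p2, s)) < 1e-9
print("cylindrical and (q, p) expressions for H_s agree")
```

The agreement reflects the factorizations $1-z_1^2 = (p_2-p_1-1)\big(-(p_2-p_1-3)\big)$ and $1-z_2^2 = \tfrac{1}{4}p_2\big(-(p_2-4)\big)$ under this substitution.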
We obtain the reduced space $$M^{\text{red},l} := \{L = l-1\}/{\mathbb S}^1.$$ The reduced space at the levels $l=0$ and $l=2$ is a stratified symplectic space with the shape of a sphere with a conic singular point; at levels $l=-2$ and $l=4$ it is a point; otherwise it is a smooth sphere. Let $H^{\text{red},l}_s$ be the function on $M^{\text{red},l}$ induced by descending $H_s$ to the quotient. We denote the coordinates on the reduced space by $(q_2,p_2)$, which are induced from the coordinates $(q_2,p_2)$ on $M$, defined above. The bounds for these coordinates on the reduced space depend on the level $l$: $$\label{eqn:coor-bounds} -\pi \leq q_2 \leq \pi, \quad \max\{0,l\}\leq p_2 \leq \min\{l+2,4\}.$$ In these coordinates, $$\label{eqn:H-red_NS} H_s^{\text{red},l}(q_2,p_2)=A^l_s(p_2) + \cos(q_2) \sqrt{B_s^l(p_2)},$$ where $$\left\{ \begin{aligned} A_s^l(p_2) &= l+1-p_2-2s-l s + \tfrac{3}{2} s p_2, \\ B_s^l(p_2) &= s^2 (1-s)^2 p_2 (p_2-l) (p_2-4) (p_2-2-l). \end{aligned} \right.$$ For each $s\in [0,1]$, let $h$ be a constant and consider the level set $$\label{eqn:beta} \beta^s_{l,h} := \left\{ (q_2,p_2)\in M^{\text{red},l} \mid H_s^{\text{red},l}(q_2,p_2)=h+(1-2s)\right\}.$$ The solutions of the above equation define closed curves in the reduced space, and the curve going through the focus-focus point $\mathcal{N} \times \mathcal{S}$ will correspond to the value $h=0$, see Figure [\[4X2orbitsNS\]](#4X2orbitsNS){reference-type="ref" reference="4X2orbitsNS"}. The next step is to compute the (second) action integral from Equation [\[eq:semigact\]](#eq:semigact){reference-type="eqref" reference="eq:semigact"}, $$\mathcal I(l,h) := \dfrac{1}{2\pi} \oint_{\beta^s_{l,h}} q_2 \mathrm{d}p_2, \label{eq:defAct}$$ where we use $\varpi = q_2 \mathrm{d}p_2$ as the primitive of the symplectic form in $M^{\text{red},l}$ and the curve $\beta_{l,h}^s$ as our choice of cycle $\gamma_2^z$, cf. [\[eq:semigact\]](#eq:semigact){reference-type="eqref" reference="eq:semigact"}.
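As a quick numerical sanity check (illustrative only), substituting $p_1 = l - 1$ into the formula for $H_s$ from the previous subsection reproduces $A^l_s(p_2) + \cos(q_2)\sqrt{B^l_s(p_2)}$ within the coordinate bounds:

```python
import math
import random

def A_red(l, s, p2):
    # A_s^l from the reduced Hamiltonian above
    return l + 1 - p2 - 2 * s - l * s + 1.5 * s * p2

def B_red(l, s, p2):
    # B_s^l from the reduced Hamiltonian above
    return s**2 * (1 - s)**2 * p2 * (p2 - l) * (p2 - 4) * (p2 - 2 - l)

def H_qp(q2, p1, p2, s):
    # H_s in the (q, p) coordinates of the previous subsection
    rad = p2 * (p2 - p1 - 1) * (p2 - 4) * (p2 - p1 - 3)
    return (p1 - p2 + 2 - s * (p1 - p2 + 2) + (s / 2) * (p2 - 2)
            + (1 - s) * s * math.sqrt(max(rad, 0.0)) * math.cos(q2))

random.seed(2)
for _ in range(1000):
    s = random.uniform(0.0, 1.0)
    l = random.uniform(-2.0, 4.0)
    lo, hi = max(0.0, l), min(l + 2.0, 4.0)   # bounds for p2 at level l
    p2 = random.uniform(lo, hi)
    q2 = random.uniform(-math.pi, math.pi)
    lhs = A_red(l, s, p2) + math.cos(q2) * math.sqrt(max(B_red(l, s, p2), 0.0))
    assert abs(lhs - H_qp(q2, l - 1.0, p2, s)) < 1e-9
print("reduced Hamiltonian matches H_s with p1 = l - 1")
```

Note that the prefactor $s(1-s)\geq 0$ for $s\in[0,1]$, so pulling $s^2(1-s)^2$ inside the square root in $B^l_s$ is harmless.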
The integral [\[eq:defAct\]](#eq:defAct){reference-type="eqref" reference="eq:defAct"} measures the symplectic volume of one of the two regions bounded by the curve $\beta_{l,h}^s$. The condition $\partial_h \mathcal I(l,h)>0$ implies that, from the two regions that the curve bounds, we have to choose the one with points satisfying $H_s^{\text{red},l}(q_2,p_2) -(1-2s) < h$. Note that here our choice of coordinates has given us a choice of cycle, so the $\mathcal{I}$ defined in Equation [\[eq:defAct\]](#eq:defAct){reference-type="eqref" reference="eq:defAct"} satisfies $$\mathcal{I} = I \circ \varrho$$ where $\varrho$ is as in the diagram in Figure [\[fig:diagram-semilocal\]](#fig:diagram-semilocal){reference-type="ref" reference="fig:diagram-semilocal"} and $I$ is one of the possible choices of action as discussed in Section [3.4](#sss:taylor){reference-type="ref" reference="sss:taylor"}, and in particular Equation [\[eq:semigact\]](#eq:semigact){reference-type="eqref" reference="eq:semigact"}. In fact, it will turn out to be the preferred action $\xi$ discussed in Remark [Remark 21](#rmk:Xi-and-xi){reference-type="ref" reference="rmk:Xi-and-xi"}, but we cannot see this a priori, as we discuss in Remark [Remark 41](#rmk:right_cycle){reference-type="ref" reference="rmk:right_cycle"}. Figure [\[fig:areas\]](#fig:areas){reference-type="ref" reference="fig:areas"} shows an overview of the areas corresponding to the action integral. As $h$ increases (from left to right), so does the area in colour, which represents the area being integrated over. To obtain a circle-valued coordinate $q_2$, we identify $(-\pi, p_2)$ with $(\pi,p_2)$ for each $p_2$, and thus obtain a cylinder with coordinates $(q_2,p_2)$. We distinguish between three types of orbits on this cylinder: - Type I: The curve $\beta^s_{l,h}$ crosses the line $q_2=0$ and is homotopic to a point in the cylinder. 
- Type II: The curve $\beta^s_{l,h}$ crosses both $q_2=0$ and $q_2=\pm \pi$ and is not homotopic to a point in the cylinder. - Type III: The curve $\beta^s_{l,h}$ crosses the line $q_2=\pm \pi$ and is homotopic to a point in the cylinder. ![*$s<\tfrac{2}{3}$, $h<h_{l^+}$*](area_11.pdf){width="3cm"} ![*$s<\tfrac{2}{3}$, $h_{l^+}<h<h_{l^-}$*](area_12.pdf){width="3cm"} ![*$s<\tfrac{2}{3}$, $h_{l^-}<h$*](area_13.pdf){width="3cm"} \ ![*$s=\tfrac{2}{3}$, $h<h_{l^\pm}$*](area_21.pdf){width="3cm"} ![*$s=\tfrac{2}{3}$, $h=h_{l^\pm}$*](area_22.pdf){width="3cm"} ![*$s=\tfrac{2}{3}$, $h_{l^\pm}<h$*](area_23.pdf){width="3cm"} \ ![*$s>\tfrac{2}{3}$, $h<h_{l^-}$*](area_31.pdf){width="3cm"} ![*$s>\tfrac{2}{3}$, $h_{l^-}<h<h_{l^+}$*](area_32.pdf){width="3cm"} ![*$s>\tfrac{2}{3}$, $h_{l^+}<h$*](area_33.pdf){width="3cm"} The different types of curves are distinguished by means of special separatrix curves, which correspond to the values $h_{[p_2=0]},h_{[p_2=l]},h_{[p_2=4]},h_{[p_2=2+l]}$ of $H^{\text{red},l}_s(q_2,p_2) -(1-2s)$ along the bounds of the coordinates on $M^{\text{red},l}$, namely $p_2=0$, $p_2=l$, $p_2=4$ and $p_2=2+l$: $$\label{eqn:h0-hl-h4-h2l} \begin{aligned} h_0 := h_{[p_2=0]} &= l(1-s), & \qquad h_4 := h_{[p_2=4]} &= 6\left( s-\frac{2}{3}\right) + l (1-s), \\[0.3cm] h_l := h_{[p_2=l]} &= \dfrac{s}{2} l, & \qquad h_{2+l}:=h_{[p_2=2+l]} &= 3\left(s-\frac{2}{3} \right) + \frac{s}{2}l. \end{aligned}$$ That is, setting $$l^- := \max\{0,l\}, \quad l^+ := \min\{2+l,4\}, \label{lminmax}$$ we have that for each value of $l$, there are two values $h_{l^-}:=h_{[p_2=l^-]}$ and $h_ {l^+}:=h_{[p_2=l^+]}$ that separate the three types of curves, see Figures [\[fig:h_vs_l\]](#fig:h_vs_l){reference-type="ref" reference="fig:h_vs_l"} and [\[4X2orbitsNS\]](#4X2orbitsNS){reference-type="ref" reference="4X2orbitsNS"}. Note that $l^-$ and $l^+$ are the extreme values of $p_2$, i.e. $l^- \leq p_2 \leq l^+$. 
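The four boundary values above follow by evaluating $H^{\text{red},l}_s - (1-2s)$ on the boundary lines, where $B^l_s$ vanishes and hence $H^{\text{red},l}_s = A^l_s$. A brief numerical confirmation (an illustrative aside, not part of the paper):

```python
import random

def A_red(l, s, p2):
    # A_s^l from the reduced Hamiltonian
    return l + 1 - p2 - 2 * s - l * s + 1.5 * s * p2

def h_boundary(l, s, p2):
    # Value of H^{red,l}_s - (1 - 2s) on the line {p2 = const}, where B_s^l = 0
    return A_red(l, s, p2) - (1 - 2 * s)

random.seed(3)
for _ in range(1000):
    s = random.uniform(0.0, 1.0)
    l = random.uniform(-2.0, 4.0)
    assert abs(h_boundary(l, s, 0.0) - l * (1 - s)) < 1e-12                        # h_0
    assert abs(h_boundary(l, s, l) - s * l / 2) < 1e-12                            # h_l
    assert abs(h_boundary(l, s, 4.0) - (6 * (s - 2 / 3) + l * (1 - s))) < 1e-12    # h_4
    assert abs(h_boundary(l, s, 2.0 + l) - (3 * (s - 2 / 3) + s * l / 2)) < 1e-12  # h_{2+l}
print("separatrix values h_0, h_l, h_4, h_{2+l} confirmed")
```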
The value of the parameter $s$ then determines the type of the curve $\beta^s_{l,h}$. Note that if $s< \tfrac{2}{3}$, then $h_{l^-} > h_{l^+}$, if $s=\tfrac{2}{3}$, then $h_{l^-}=h_{l^+}$ and if $s> \tfrac{2}{3}$, then $h_{l^-} < h_{l^+}$, as illustrated in Figure [\[fig:h_vs_l\]](#fig:h_vs_l){reference-type="ref" reference="fig:h_vs_l"}. ![$s< \tfrac{2}{3}$](h_vs_l_1.pdf){width="4cm"} ![$s= \tfrac{2}{3}$](h_vs_l_2.pdf){width="4cm"} ![$s> \tfrac{2}{3}$](h_vs_l_3.pdf){width="4cm"} ![*$s<\tfrac{2}{3}$, $l<0$*](sneglneg.pdf){width="3cm"} ![*$s<\tfrac{2}{3}$, $l=0$*](sneglzer.pdf){width="3cm"} ![*$s<\tfrac{2}{3}$, $l>0$*](sneglpos.pdf){width="3cm"} \ ![*$s=\tfrac{2}{3}$, $l<0$*](szerlneg.pdf){width="3cm"} ![*$s=\tfrac{2}{3}$, $l=0$*](szerlzer.pdf){width="3cm"} ![*$s=\tfrac{2}{3}$, $l>0$*](szerlpos.pdf){width="3cm"} \ ![*$s>\tfrac{2}{3}$, $l<0$*](sposlneg.pdf){width="3cm"} ![*$s>\tfrac{2}{3}$, $l=0$*](sposlzer.pdf){width="3cm"} ![*$s>\tfrac{2}{3}$, $l>0$*](sposlpos.pdf){width="3cm"} The curve types are shown in Table [1](#table:types){reference-type="ref" reference="table:types"}. $s< \tfrac{2}{3}$ $s= \tfrac{2}{3}$ $s> \tfrac{2}{3}$ ------------------ --------------------------------- --------------------------- ---------------------------------- $h$ large $h > h_{l^-}$: type I $h > h_{l^\pm}$: type I $h > h_{l^+}$: type I $h$ intermediate $h_{l^-}> h > h_{l^+}$: type II *not possible* $h_{l^+} > h > h_{l^-}$: type II $h$ small $h_{l^+} > h$: type III $h_{l^\pm} > h$: type III $h_{l^-} > h$: type III : The various types of curves depending on the values of $h$ and $s$. In Figure [\[4X2orbitsNS\]](#4X2orbitsNS){reference-type="ref" reference="4X2orbitsNS"} the relation between the different types of curves and the separatrices around the values $l=0$ and $s = \tfrac{2}{3}$ is shown. 
Considering the subfigures from left to right, note that when $l<0$, the level curve $H^{\text{red},l}_s(q_2,p_2)-(1-2s)=h_{[p_2=0]}$ (orange colour) acts as a separatrix, while the level curve $H^{\text{red},l}_s(q_2,p_2)-(1-2s)=h_{[p_2=l]}$ (red colour) is an ordinary curve. For $l=0$ both curves coincide and for $l>0$ the roles of the curves are exchanged. Considering the subfigures from top to bottom, note that for $s<\tfrac{2}{3}$, the curve $H^{\text{red},l}_s(q_2,p_2)-(1-2s)=h_{l^-}$ (orange or red colour) separates orbits of types I and II and the curve $H^{\text{red},l}_s(q_2,p_2)-(1-2s)=h_{l^+}$ (purple colour) separates the orbits of types II and III. For $s=\tfrac{2}{3}$, these curves coincide and for $s>\tfrac{2}{3}$, the roles are exchanged. We now define the polynomial $$\label{eq:polP} \begin{aligned} P^s_{l,h}(p_2) := B_s^l(p_2) - (h+(1-2s)-A_s^l(p_2))^2, \end{aligned}$$ which is of degree 4 in $p_2$. Let $\zeta_1, \zeta_2, \zeta_3, \zeta_4$ denote the roots of this polynomial. As in Alonso & Dullin & Hohloch [@ADH2 Equation (3.13)], it turns out that within the bounds of the coordinate $p_2$ given in Equation [\[eqn:coor-bounds\]](#eqn:coor-bounds){reference-type="eqref" reference="eqn:coor-bounds"} all roots are real. Moreover, if we assume that they are ordered as $$\zeta_1\leq \zeta_2\leq \zeta_3\leq \zeta_4,$$ then $$\zeta_1 \leq \min\{0,l\}, \quad \max\{0,l\} \leq \zeta_2 \leq \zeta_3 \leq \min\{4, l+2\}, \quad \max\{4,l+2\} \leq \zeta_4. \label{eq:rootsP}$$ That is, only $\zeta_2$ and $\zeta_3$ are within the bounds of the coordinate $p_2$ and therefore only these two roots are meaningful on the reduced space $M^{\text{red},l}$. In particular, $\beta_{l,h}^s \subset \{ (q_2,p_2) \mid \zeta_2 \leq p_2 \leq \zeta_3\}$. We now express the action integral as follows. **Proposition 34**.
*The action integral $\mathcal I(l,h)$ from Equation [\[eq:defAct\]](#eq:defAct){reference-type="eqref" reference="eq:defAct"} can be written as $$\mathcal I(l,h) = C^B_{s,h}(l) + \mathfrak I(l,h) \label{4X2actintNS}$$ where the values of $C_{s,h}^B(l)$ are given in Table [3](#table:areaB){reference-type="ref" reference="table:areaB"} and $\mathfrak I(l,h)$ is the elliptic integral $$\mathfrak I(l,h):= \dfrac{1}{2\pi} \oint_{\beta^s_{l,h}} R_\mathcal I(p_2) \frac{ \mathrm{d}p_2}{\sqrt{P^s_{l,h}(p_2)}}.$$ The elliptic curve $\beta^s_{l,h}$ is given by $\vartheta^2 = P^s_{l,h}(p_2)$, where $P^s_{l,h}$ is as in Equation [\[eq:polP\]](#eq:polP){reference-type="eqref" reference="eq:polP"}, and $$\begin{aligned} \label{eqn:R} R_\mathcal I(p_2) =& \dfrac{h_0 + h_l - h_4 - h_{2+l}}{6}p_2 + \dfrac{4h-h_0 - h_l - h_4 -h_{2+l}}{2} \\& + \dfrac{l}{2}\dfrac{(h-h_l)}{p_2-l} + \dfrac{4}{2} \dfrac{(h-h_4)}{p_2-4} + \dfrac{(2+l)}{2} \dfrac{(h-h_{2+l})}{p_2-2-l}.\nonumber\end{aligned}$$* Note that the values of $p_2$ for which the denominators of each of the last three terms of $R_\mathcal I(p_2)$ are zero correspond to the separatrix curves discussed above. *Proof.* We want to compute the integral in Equation [\[eq:defAct\]](#eq:defAct){reference-type="eqref" reference="eq:defAct"}, which measures the areas represented in Figure [\[fig:areas\]](#fig:areas){reference-type="ref" reference="fig:areas"}. The curve $\beta^s_{l,h}$ is defined by the equation $$\label{eqn:beta-proof} h = A_s^l(p_2) + \sqrt{B_s^l(p_2)}\cos(q_2) -(1-2s).$$ Because of the symmetry $q_2\mapsto -q_2$ in Equation [\[eqn:beta-proof\]](#eqn:beta-proof){reference-type="eqref" reference="eqn:beta-proof"}, to compute the integral over $\beta^s_{l,h}$ we can restrict to the portion of $\beta^s_{l,h}$ with $q_2\geq 0$ and multiply by $2$.
Moreover, we can solve for $q_2$ in Equation [\[eqn:beta-proof\]](#eqn:beta-proof){reference-type="eqref" reference="eqn:beta-proof"}, $$q_2(p_2) =\arccos \left(\dfrac{h+(1-2s)-A_s^l(p_2)}{\sqrt{B_s^l(p2)}} \right).$$ We can thus integrate $q_2=q_2(p_2)$ between $\zeta_2$ and $\zeta_3$. However, we will also need to deal with the regions to the left and/or right of the curve by including certain correction terms. Let us illustrate this with the case $s < \tfrac{2}{3}$. We want to compute the "half-areas" represented in Figure [\[fig:halfareas\]](#fig:halfareas){reference-type="ref" reference="fig:halfareas"}. ![$h<h_{l^+}$ (Type III)](halfarea_1.pdf){width="4.5cm"} ![$h_{l^+}<h<h_{l^-}$ (Type II)](halfarea_2.pdf){width="4.5cm"} ![$h_{l^-}<h$ (Type I)](halfarea_3.pdf){width="4.5cm"} The half-rectangles have area $(2+l)\pi$ if $l<0$ and $2\pi$ if $l \geq 0$. From this total area, we need to subtract the area below the curve $q_2=q_2(p_2)$ and, depending on the type of the curve, some additional area. More specifically, for the right situation (type I) we need not subtract anything extra, for the middle situation (type II) we need to subtract the area to the left of $\zeta_2$ and for the left situation (type III) we need to subtract the area to the left of $\zeta_2$ and to the right of $\zeta_3$. In general, we can write the action integral as $$\begin{aligned} \label{eqn:action-compute} \mathcal I(l,h) = C_{s,h}^A(l) - \frac{1}{\pi}\int_{\zeta_2}^{\zeta_3} \arccos \left(\dfrac{h+(1-2s)-A_s^l(p_2)}{\sqrt{B_s^l(p_2)}} \right) \mathrm{d}p_2,\end{aligned}$$ where $C_{s,h}^A(l)$ is given in Table [2](#table:areaA){reference-type="ref" reference="table:areaA"}.
$C_{s,h}^A(l)$ $s< \tfrac{2}{3}$ $s= \tfrac{2}{3}$ $s> \tfrac{2}{3}$ ---------------- -------------------- -------------------- -------------------- Type I $l^+-l^-$ $l^+-l^-$ $l^+-l^-$ Type II $l^+-\zeta_2$ *not possible* $\zeta_3-l^-$ Type III $\zeta_3 -\zeta_2$ $\zeta_3 -\zeta_2$ $\zeta_3 -\zeta_2$ : The value of $C_{s,h}^A(l)$ depending on the type of the curve being integrated over. [\[table:areaA\]]{#table:areaA label="table:areaA"} The next step is to do integration by parts $$\begin{aligned} \mathcal I(l,h) &= C_{s,h}^A(l) - \frac{p_2}{\pi}\left[ \arccos \left(\dfrac{h+(1-2s)-A^l_s(p_2)}{\sqrt{B^l_s(p_2)}} \right) \right]_{p_2=\zeta_2}^{p_2=\zeta_3} \\ &\quad + \dfrac{1}{\pi}\int_{\zeta_2}^{\zeta_3} p_2 \dfrac{d}{dp_2} \left(\arccos \left(\dfrac{h+(1-2s)-A^l_s(p_2)}{\sqrt{B^l_s(p_2)}} \right)\right) \mathrm{d}p_2 \\ &= \, C_{s,h}^B(l) + \dfrac{1}{\pi}\int_{\zeta_2}^{\zeta_3} R_\mathcal I(p_2) \frac{ \mathrm{d}p_2}{\sqrt{P^s_{l,h}(p_2)}}\\ &= \,C_{s,h}^B(l) +\dfrac{1}{2\pi} \oint_{\beta^s_{l,h}} R_\mathcal I(p_2) \frac{ \mathrm{d}p_2}{\sqrt{P^s_{l,h}(p_2)}},\end{aligned}$$ where $R_\mathcal I(p_2)$ is as in Equation [\[eqn:R\]](#eqn:R){reference-type="eqref" reference="eqn:R"} and $$\begin{aligned} C_{s,h}^B(l) &:= C_{s,h}^A(l)- \dfrac{p_2}{\pi} \left[ \arccos \left(\dfrac{h+(1-2s)-A^l_s(p_2)}{\sqrt{B^l_s(p_2)}} \right) \right]_{p_2=\zeta_2}^{p_2=\zeta_3}.\end{aligned}$$ We list all possible values of $C_{s,h}^B(l)$ in Table [3](#table:areaB){reference-type="ref" reference="table:areaB"}. $C_{s,h}^B(l)$ $s< \tfrac{2}{3}$ $s= \tfrac{2}{3}$ $s> \tfrac{2}{3}$ ---------------- ------------------- ------------------- ------------------- Type I $l^+-l^-$ $l^+-l^-$ $l^+-l^-$ Type II $l^+$ *not possible* $-l^-$ Type III $0$ $0$ $0$ : The value of $C_{s,h}^B(l)$ depending on the type of the curve being integrated over.  
◻ We now compute the *reduced period* $\mathcal T$ and the *rotation number* $\mathcal W$, defined by $$\mathcal T(l,h) := 2\pi \dfrac{\partial \mathcal I}{\partial h},\qquad \mathcal W(l,h) := -\dfrac{\partial \mathcal I}{\partial l}. \label{def:TW}$$ **Remark 35**. Each of the quantities in Equation [\[def:TW\]](#def:TW){reference-type="eqref" reference="def:TW"} has a geometric interpretation: since $M$ is compact, any regular fiber of the integrable system $(L,H)$ is a torus, and the quotient of this torus by the circle action generated by $L$ is a circle. The flow of the Hamiltonian vector field of $H$ descends to this circle, and thus this flow is necessarily periodic, and the reduced period $\mathcal T(l,h)$ is the minimal period of this flow. This is because $\mathcal T(l,h)$ computes the relative speed of the flow of $\mathcal{I}$ (which has period $2\pi$) against the speed of the flow of $H$ to determine the time taken for the flow of $H$ to go once around and return to the orbit of the ${\mathbb S}^1$-action that it started in. The rotation number $\mathcal W(l,h)$ computes the relative speed between the given periodic flow (generated by $L$) and the additional periodic flow generated by $\mathcal{I}$ on a regular fiber. **Corollary 36**. *The reduced period $\mathcal T(l,h)$ and the rotation number $\mathcal W(l,h)$ are given by the complete elliptic integrals $$\mathcal T(l,h) = \oint_{\beta^s_{l,h}} \frac{ \mathrm{d}p_2}{\sqrt{P^s_{l,h}(p_2)}},\qquad \mathcal W(l,h) = C_{s,h}^C(l)+\dfrac{1}{2\pi} \oint_{\beta^s_{l,h}} R_{\mathcal W}(p_2) \frac{ \mathrm{d}p_2}{\sqrt{P^s_{l,h}(p_2)}} \label{eq:defTW}$$ over the elliptic curve $\vartheta^2 = P^s_{l,h}(p_2)$, where $$\begin{aligned}
R_{\mathcal W}&(p_2) = \dfrac{s}{2} - \dfrac{1}{2} \dfrac{(h-h_l)}{p_2-l} - \dfrac{1}{2} \dfrac{(h-h_{2+l})}{p_2-2-l}.\end{aligned}$$* *Proof.* We compute the derivatives given in Equation [\[def:TW\]](#def:TW){reference-type="eqref" reference="def:TW"} using the expression for $\mathcal I(l,h)$ from Proposition [Proposition 34](#prop:4X2act){reference-type="ref" reference="prop:4X2act"} and the explicit formula in Equation [\[eqn:action-compute\]](#eqn:action-compute){reference-type="eqref" reference="eqn:action-compute"}. For the reduced period, we have $$\begin{aligned} \mathcal T(l,h) &= 2\pi \dfrac{\partial \mathcal I}{\partial h}(l,h) = -2 \frac{\partial}{\partial h}\int_{\zeta_2}^{\zeta_3} \arccos \left(\dfrac{h+(1-2s)-A_s^l(p_2)}{\sqrt{B_s^l(p_2)}} \right) \mathrm{d}p_2 \\&= 2 \int_{\zeta_2}^{\zeta_3} \frac{ \mathrm{d}p_2}{\sqrt{P^s_{l,h}(p_2)}} = \oint_{\beta^s_{l,h}} \frac{ \mathrm{d}p_2}{\sqrt{P^s_{l,h}(p_2)}}\end{aligned}$$ and we proceed similarly for $\mathcal W$, $$\begin{aligned} \mathcal W(l,h) &= - \dfrac{\partial \mathcal I}{\partial l}(l,h) = - \dfrac{\partial C_{s,h}^B(l)}{\partial l} + \dfrac{1}{\pi} \int_{\zeta_2}^{\zeta_3} R_{\mathcal W}(p_2) \frac{ \mathrm{d}p_2}{\sqrt{P^s_{l,h}(p_2)}} \\&= C_{s,h}^C(l) + \dfrac{1}{2\pi} \oint_{\beta^s_{l,h}} R_{\mathcal W}(p_2) \frac{ \mathrm{d}p_2}{\sqrt{P^s_{l,h}(p_2)}},\end{aligned}$$ where $$\begin{aligned} R_{\mathcal W}&(p_2) = \dfrac{s}{2} - \dfrac{1}{2} \dfrac{h-h_l}{p_2-l} - \dfrac{1}{2} \dfrac{h-h_{2+l}}{p_2-2-l}\end{aligned}$$ and the values of $$C_{s,h}^C(l) := - \dfrac{\partial C_{s,h}^B(l)}{\partial l}$$ are given in Table [4](#table:areaC){reference-type="ref" reference="table:areaC"}. 
◻

                           Range of $h$              $l<0$   $0<l<2$   $2<l$
-------------------------- ------------------------- ------- --------- -------
$s< \tfrac{2}{3} \qquad$   $h> h_{l^-}$              $-1$    $0$       $1$
                           $h_{l^-} > h > h_{l^+}$   $-1$    $-1$      $0$
                           $h_{l^+} > h$             $0$     $0$       $0$
$s= \tfrac{2}{3} \qquad$   $h> h_{l^\pm}$            $-1$    $0$       $1$
                           $h_{l^\pm} > h$           $0$     $0$       $0$
$s> \tfrac{2}{3} \qquad$   $h> h_{l^+}$              $-1$    $0$       $1$
                           $h_{l^+} > h > h_{l^-}$   $0$     $1$       $1$
                           $h_{l^-} > h$             $0$     $0$       $0$

: Values of $C_{s,h}^C(l)$.

The integrals in [\[eq:defTW\]](#eq:defTW){reference-type="eqref" reference="eq:defTW"} can be expressed in terms of two basic integrals, namely $$\mathcal N_A := \int^{\zeta_3}_{\zeta_2} \dfrac{\mathrm{d}p_2}{\vartheta},\qquad \mathcal N_{B,\eta} := \int^{\zeta_3}_{\zeta_2} \dfrac{1}{p_2-\eta} \dfrac{\mathrm{d}p_2}{\vartheta}, \label{NANB}$$ where $\eta$ is a constant. More precisely, we have $$\mathcal T= 2 \mathcal N_A,\qquad \mathcal W= C^C_{s,h} + \dfrac{1}{2\pi} \left( \dfrac{s}{2} \mathcal N_A - \dfrac{h-h_l}{2} \mathcal N_{B,l} - \dfrac{h-h_{2+l}}{2} \mathcal N_{B,2+l} \right).
\label{eq:TWN}$$ The two integrals in [\[NANB\]](#NANB){reference-type="eqref" reference="NANB"} can be rewritten in Legendre's standard form by changing the integration variable to $x$ and defining the parameters $k$ and $n_\eta$, where $$x := \sqrt{\dfrac{\zeta_3-p_2}{\zeta_3-\zeta_2}},\qquad k^2 := \dfrac{\zeta_3-\zeta_2}{\zeta_3-\zeta_1},\qquad n_\eta := \dfrac{\zeta_3-\zeta_2}{\zeta_3-\eta}.$$ We now write $\mathcal N_A$ in terms of the complete elliptic integral of the first kind $$\mathcal N_A = \dfrac{\sqrt{2}}{\sqrt{\zeta_3-\zeta_1}} \int_0^1 \dfrac{\mathrm{d}x}{\sqrt{(1-x^2)(1-k^2x^2)}} = \dfrac{\sqrt{2}}{\sqrt{\zeta_3-\zeta_1}} K(k) \label{eq:NAEl}$$ and $\mathcal N_{B,\eta}$ as a complete elliptic integral of the third kind $$\begin{aligned} \nonumber \mathcal N_{B,\eta} &= \dfrac{\sqrt{2}}{(\zeta_3 - \eta)\sqrt{\zeta_3-\zeta_1}} \int_0^1 \dfrac{\mathrm{d}x}{(1-n_\eta x^2)\sqrt{(1-x^2)(1-k^2x^2)}} \\&= \dfrac{\sqrt{2}}{(\zeta_3 - \eta)\sqrt{\zeta_3-\zeta_1}} \Pi(n_\eta,k). \label{eq:NBEl}\end{aligned}$$ ## Preparations for the Taylor series invariant {#sec:prep-for-TS} We now set up the general theory to compute the Taylor series invariant of the system given in Equation [\[eqn_ssys\]](#eqn_ssys){reference-type="eqref" reference="eqn_ssys"}. We will use the method from Alonso & Dullin & Hohloch [@ADH2], based on the properties of complex elliptic curves and the expansions of the reduced period and the rotation number. It is similar to the method introduced by Dullin [@Du] and used in Alonso & Dullin & Hohloch [@ADH]. The elliptic integrals in Equations [\[eq:defAct\]](#eq:defAct){reference-type="eqref" reference="eq:defAct"} and [\[eq:defTW\]](#eq:defTW){reference-type="eqref" reference="eq:defTW"} run along the real elliptic curve $\vartheta^2 = P^s_{l,h}(p_2)$, where $P^s_{l,h}$ was defined in [\[eq:polP\]](#eq:polP){reference-type="eqref" reference="eq:polP"}. However, we can also consider the variables as complex numbers, $$\Gamma_l = \left\{ \left.
(p_2,\vartheta) \in \overline{{\mathbb C}}^2 \;\right|\; \vartheta^2 = P^s_{l,h}(p_2) \right\},$$ where $\overline{\mathbb{C}} = \mathbb{C}\cup\{\infty\}$ is the Riemann sphere. Since $P^s_{l,h}(p_2)$ is a polynomial of degree 4, the curve $\Gamma_l$ is homeomorphic to a two-torus, and thus its first homotopy group is generated by two cycles, as represented in Figure [\[fig:elliptic\]](#fig:elliptic){reference-type="ref" reference="fig:elliptic"}. We will work with the distinguished cycles $\alpha=\alpha^s_{l,h}$ and $\beta=\beta^s_{l,h}$ which have the properties we describe now. The *real cycle* $\beta$ corresponds to the real elliptic curve and connects the roots $\zeta_2$ and $\zeta_3$, which is the one we defined in [\[eqn:beta\]](#eqn:beta){reference-type="eqref" reference="eqn:beta"}. The other cycle, $\alpha$, is known as the *imaginary cycle* and connects the roots $\zeta_1$ and $\zeta_2$. ![Cycles $\alpha$ and $\beta$ on ${\mathbb C}$.](EL-Plane2.pdf){width="90%"} ![Cycles $\alpha$ and $\beta$ on ${\mathbb T}^2$](EL-TorusCycles.pdf){width="80%"} As we approach the focus-focus value $(l,h)=(0,0)$, the roots $\zeta_1$ and $\zeta_2$ move closer to each other and coincide in the limit, so one representative of the cycle $\alpha$ collapses. For this reason, $\alpha$ is known as the *vanishing cycle*. The elliptic integrals of the action, the reduced period and the rotation number along this cycle are purely imaginary complex numbers, so we can divide by the imaginary unit and obtain real numbers. 
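As an aside before passing to the imaginary versions of these quantities, the Legendre-form reductions in Equations [\[eq:NAEl\]](#eq:NAEl){reference-type="eqref" reference="eq:NAEl"} and [\[eq:NBEl\]](#eq:NBEl){reference-type="eqref" reference="eq:NBEl"} can be sanity-checked numerically. The sketch below is not the system's actual quartic $P^s_{l,h}$: it assumes the toy normalization $P(p_2) = 2(p_2-\zeta_1)(p_2-\zeta_2)(\zeta_3-p_2)$, for which the same substitution $x = \sqrt{(\zeta_3-p_2)/(\zeta_3-\zeta_2)}$ yields the identities exactly. The third-kind integral $\Pi(n_\eta,k)$ is evaluated here by quadrature of its defining integral; note that SciPy's `ellipk` takes the parameter $m = k^2$.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import ellipk

# Toy model (an assumption, not the paper's P^s_{l,h}): roots
# zeta1 < zeta2 < zeta3 and P(p2) = 2 (p2-zeta1)(p2-zeta2)(zeta3-p2),
# which is positive on (zeta2, zeta3).
z1, z2, z3 = 0.0, 1.0, 3.0
eta = 5.0  # pole location for N_{B,eta}, chosen outside [z2, z3]

P = lambda p: 2.0 * (p - z1) * (p - z2) * (z3 - p)

k2 = (z3 - z2) / (z3 - z1)      # k^2 as in the text
n_eta = (z3 - z2) / (z3 - eta)  # n_eta as in the text

# N_A: direct quadrature vs. sqrt(2)/sqrt(zeta3 - zeta1) * K(k)
NA_quad, _ = quad(lambda p: 1.0 / np.sqrt(P(p)), z2, z3)
NA_leg = np.sqrt(2.0) / np.sqrt(z3 - z1) * ellipk(k2)

# N_{B,eta}: direct quadrature vs. its Legendre form, with Pi(n,k)
# computed by quadrature of the defining integral
NB_quad, _ = quad(lambda p: 1.0 / ((p - eta) * np.sqrt(P(p))), z2, z3)
Pi_quad, _ = quad(lambda x: 1.0 / ((1.0 - n_eta * x**2)
                                   * np.sqrt((1.0 - x**2) * (1.0 - k2 * x**2))),
                  0.0, 1.0)
NB_leg = np.sqrt(2.0) / ((z3 - eta) * np.sqrt(z3 - z1)) * Pi_quad

print(abs(NA_quad - NA_leg))  # agreement up to quadrature tolerance
print(abs(NB_quad - NB_leg))  # agreement up to quadrature tolerance
```

The endpoint singularities of $1/\vartheta$ at $\zeta_2$ and $\zeta_3$ are integrable, so adaptive quadrature converges; the substitution removes them entirely on the Legendre side, which is one practical reason for working in standard form.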
We thus define the *imaginary action* $$\label{eqn:im-action} J(l,h):=\dfrac{1}{2\pi i} \oint_{\alpha^s_{l,h}} R_\mathcal I(p_2) \frac{ \mathrm{d}p_2}{\sqrt{P^s_{l,h}(p_2)}},$$ the *imaginary reduced period* and the *imaginary rotation number*, $$\mathcal{T}^{\alpha}(l,h) := \dfrac{1}{i}\oint_{\alpha^s_{l,h}} \frac{ \mathrm{d}p_2}{\sqrt{P^s_{l,h}(p_2)}},\qquad \mathcal{W}^{\alpha}(l,h) := \dfrac{1}{2\pi i} \oint_{\alpha^s_{l,h}} R_{\mathcal W}(p_2) \frac{ \mathrm{d}p_2}{\sqrt{P^s_{l,h}(p_2)}}$$ all defined along the vanishing cycle $\alpha^s_{l,h}$ of the elliptic curve $\vartheta^2 = P^s_{l,h}(p_2)$, where $P^s_{l,h}$ is as given in Equation [\[eq:polP\]](#eq:polP){reference-type="eqref" reference="eq:polP"}. Since the cycle $\alpha_{l,h}^s$ vanishes as we approach the focus-focus singular value, the Taylor expansion of these quantities around the singular value can be obtained using the residue theorem, which we perform in the following section. **Remark 37**. It was first observed by Dullin [@Du] that the imaginary action coincides with the second component of the local diffeomorphism $\varrho$ from Theorem [Theorem 11](#EliassonMZ){reference-type="ref" reference="EliassonMZ"}, that is, $\varrho(l,h) = (l,J(l,h))$. See also the diagrams in Figures [\[fig:diagram-local\]](#fig:diagram-local){reference-type="ref" reference="fig:diagram-local"} and [\[fig:diagram-semilocal\]](#fig:diagram-semilocal){reference-type="ref" reference="fig:diagram-semilocal"}. By inverting $j = J(l,h)$ we will obtain the Birkhoff normal form of the focus-focus singularity as $h = Z(l,j)$. This will allow us to express the action integral [\[eq:defAct\]](#eq:defAct){reference-type="eqref" reference="eq:defAct"} as a function of $(l,j)$ instead of $(l,h)$. In other words, $\varrho^{-1}(l,j) = (l,Z(l,j))$. For more details on this construction, see Dullin [@Du] and Alonso & Dullin & Hohloch [@ADH; @ADH2]. **Remark 38**.
Note that when approaching the other focus-focus point of the system, the roots $\zeta_3$ and $\zeta_4$ come together and coincide, causing a different representative of the same cycle $\alpha$ to collapse. Since the Taylor series invariant has no constant term, it is fully determined by its partial derivatives. To determine the partial derivatives of the Taylor series invariant associated to the singularity $\mathcal{N} \times \mathcal{S}$, we will combine the Birkhoff normal form with the following result, which relates the real and imaginary versions of the reduced period and the rotation number. **Theorem 39** (Theorem 3.9 of [@ADH2]). *Let $p\in M$ be a focus-focus singular point. Let $Z(l,j)$ be the associated Birkhoff normal form and $w = l+ij$ where $j$ is the value of the imaginary action $J$. Let $S(l,j)$ denote the desingularized action at $p$. Then $$\begin{aligned} \frac{\partial S}{\partial l} (l,j) &= 2\pi \left.\left( \mathcal{W}^\alpha (l,h) \frac{\mathcal{T}(l,h)}{\mathcal{T}^\alpha(l,h)} - \mathcal{W}(l,h) \right)\right|_{h=Z(l,j)} \hspace{-0.2cm}+ \arg(w), \\[0.2cm] \frac{\partial S}{\partial j}(l,j) &= 2\pi \left.\frac{\mathcal{T}(l,h)}{\mathcal{T}^\alpha(l,h)}\right|_{h=Z(l,j)}\hspace{-0.2cm}+\ln |w|. \end{aligned}$$* Note that there are various choices of desingularized action $S$, cf. Remark [Remark 18](#rmk:nonunique){reference-type="ref" reference="rmk:nonunique"}. In this theorem we will obtain the one corresponding to our action [\[eq:defAct\]](#eq:defAct){reference-type="eqref" reference="eq:defAct"}, which is related to $S$ by [\[eqn:def-of-S\]](#eqn:def-of-S){reference-type="eqref" reference="eqn:def-of-S"}. The function $S$ is thus defined up to addition of flat functions, but since we are only using the series expansion $Z(l,j)$, the theorem holds for any such choice. 
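The inversion $j = J(l,h) \mapsto h = Z(l,j)$ mentioned in Remark [Remark 37](#rmk:varrho_is_J){reference-type="ref" reference="rmk:varrho_is_J"} is an order-by-order coefficient matching. The following sketch carries it out in SymPy on a made-up quadratic truncation of $J$; the coefficients of `J` below are placeholders for illustration, not those of the system's imaginary action.

```python
import sympy as sp

l, h, j = sp.symbols('l h j')

# Made-up quadratic truncation of an "imaginary action" j = J(l, h)
# (placeholder coefficients, not the paper's).
J = 4*h - 2*l + sp.Rational(1, 10)*(l**2 + 3*h*l - 2*h**2)

# Ansatz for the inverse series h = Z(l, j), truncated at total degree 2.
a1, a2, b1, b2, b3 = sp.symbols('a1 a2 b1 b2 b3')
Z = a1*j + a2*l + b1*j**2 + b2*j*l + b3*l**2

# Match coefficients of j - J(l, Z(l, j)) up to total degree 2 in (l, j);
# all higher-degree monomials belong to the O(3) remainder.
res = sp.Poly(sp.expand(j - J.subs(h, Z)), j, l)
eqs = [res.coeff_monomial(j**i * l**k)
       for (i, k) in [(1, 0), (0, 1), (2, 0), (1, 1), (0, 2)]]
sol = sp.solve(eqs, [a1, a2, b1, b2, b3], dict=True)[0]

print(sol[a1], sol[a2])  # 1/4 1/2
```

With the linear coefficients fixed first, each quadratic equation is linear in the remaining unknowns, so the inversion is unique order by order; the same mechanism, truncated at higher degree, produces the expansion of $Z(l,j)$ used below.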
## Computing the Taylor series invariant {#sec:proof-of-Taylor} With Theorem [Theorem 39](#thm:birk-to-partials){reference-type="ref" reference="thm:birk-to-partials"} in hand, we now proceed to complete the computation of the Taylor series. In order to do this, we must compute the Birkhoff normal form, the reduced period, the rotation number, the imaginary reduced period, and the imaginary rotation number. Proceeding analogously to [@ADH2], we now calculate expansions of each of these quantities. To obtain the expansions in this article, we scaled the parameters by $\varepsilon$ and expanded with respect to $\varepsilon$. That is, we replaced $(l,h)$ by $(\varepsilon l, \varepsilon h)$ and expanded around $\varepsilon=0$, which corresponds to the focus-focus point $(l,h)=(0,0)$. Finally, we set $\varepsilon=1$ to obtain the desired Taylor series expansions. In the following, we use $\mathcal O(N)$ to denote terms of order $N$ or higher in the variables $l$ and $j$. **Lemma 40**. *The Birkhoff normal form around the focus-focus point is given by* *$$\begin{aligned} Z(l,j) &= \dfrac{1}{4} (j\, {\rho_2}+ l\, (2 - s)) + \dfrac{(1 - s)^2 {s}^2 (2 j\ l\ {\rho_2}- 9 j^2 (2 - 3 s) - 3 l^2 (2 - 3 s)) }{4\ {\rho_2}^2} + \mathcal O(3), \end{aligned}$$ where $\rho_2$ is as in Equation [\[eqn:rho\]](#eqn:rho){reference-type="eqref" reference="eqn:rho"}.* *Proof.* We apply the same technique as in the proof of [@ADH2 Lemma 3.8]. The imaginary action [\[eqn:im-action\]](#eqn:im-action){reference-type="eqref" reference="eqn:im-action"} is an elliptic integral along the cycle $\alpha_{l,h}^s$, which vanishes as $(l,h)$ approaches the singular value $(0,0)$. This means that we can use the residue theorem of complex analysis and obtain a series expansion of $J(l,h)$ around $(0,0)$: $$\begin{aligned} J(l,h) &= \dfrac{1}{{\rho_2}} ( - (2-s)l+4h) \\&+ \dfrac{8s^2(1-s^2)}{{\rho_2}^5} \left((44s^5-128 s^4+115 s^3-28 s^2+2s-4)l^2 \right. \\& \hspace{2cm}\left. 
+ 2(16s^4-32s^3+25s^2-30 s+16)hl-18(2-3s)h^2 \right) + \mathcal O(3).\end{aligned}$$ We then solve $j=J(l,h)$ for $h$, obtaining the Birkhoff normal form $h = Z(l,j)$. ◻ We are now prepared to prove Theorem [Theorem 4](#thm:Taylor-intro){reference-type="ref" reference="thm:Taylor-intro"}. *Proof of Theorem [Theorem 4](#thm:Taylor-intro){reference-type="ref" reference="thm:Taylor-intro"}.* We start by computing the Taylor series invariant at the $\mathcal{N}\times\mathcal{S}$ singularity, which we denote by $S_{1,s}^\infty(l,h)$. Since we wrote $\mathcal{T}$ and $\mathcal{W}$ as elliptic integrals in Legendre's canonical form in [\[eq:TWN\]](#eq:TWN){reference-type="eqref" reference="eq:TWN"}, [\[eq:NAEl\]](#eq:NAEl){reference-type="eqref" reference="eq:NAEl"}, and [\[eq:NBEl\]](#eq:NBEl){reference-type="eqref" reference="eq:NBEl"}, we may apply expansions for elliptic integrals to obtain expansions for $\mathcal{T}$ and $\mathcal{W}$, similarly to what was done in [@ADH]. Furthermore, $\mathcal{T}^\alpha$ and $\mathcal{W}^\alpha$ are obtained as derivatives of the Birkhoff normal form given in Lemma [Lemma 40](#lem:birkhoff){reference-type="ref" reference="lem:birkhoff"}. Thus, we can apply the equation given in Theorem [Theorem 39](#thm:birk-to-partials){reference-type="ref" reference="thm:birk-to-partials"} to obtain expansions for the derivatives of $S_{1,s}^\infty(l,h)$. The partial derivative of the Taylor series invariant with respect to $j$ is: $$\label{eqn:dSdj} \begin{aligned} \frac{\partial S_{1,s}^\infty}{\partial j}(l,j) &= \log \left( \dfrac{{\rho_2}^3}{ (1-s)^2 {s}^2\sqrt{2}{\rho_1}} \right) \\ &+ \dfrac{1}{16 {\rho_2}^3 {\rho_1}^2} \left( l{\rho_2}(16 - 96 {s} + 360 {s}^2 - 936 {s}^3 + 2693 {s}^4 \right. \\&\qquad \left.- 6200 {s}^5 + 8004 {s}^6 - 5120 {s}^7 + 1280 {s}^8) \right. \\&\qquad \left.+ j(-96 + 720 {s} - 7248 {s}^2 + 36312 {s}^3 - 99558 {s}^4 \right. \\&\qquad \left. + 174957 {s}^5 - 211536 {s}^6 + 171924 {s}^7 - 83328 {s}^8 \right.
\\&\qquad \left.+ 17856 {s}^9) \right) + \dfrac{d_1}{d_2} + \mathcal O(3), \end{aligned}$$ where $d_1$ and $d_2$ are as given in Appendix [6](#appendix:d1d2){reference-type="ref" reference="appendix:d1d2"} and $\rho_1,\rho_2$ are as in Equation [\[eqn:rho\]](#eqn:rho){reference-type="eqref" reference="eqn:rho"}. From the explicit formula in Appendix [6](#appendix:d1d2){reference-type="ref" reference="appendix:d1d2"}, for each fixed $s$, we note that $\frac{d_1}{d_2} = d_3^sj^2 + d_4^s jl + d_5^sl^2$ for some constants $d_3^s, d_4^s, d_5^s\in{\mathbb R}$, each depending only on the parameter $s$. The partial derivative of the Taylor series invariant with respect to $l$ is given by $$\label{eqn:dSdl} \begin{aligned} \frac{\partial S_{1,s}^\infty}{\partial l}(l,j) &= \arctan \left( \dfrac{6-9s}{{\rho_2}} \right) + \dfrac{1}{16 {\rho_2}^3 {\rho_1}^2} \left( 3l(2-3s)(16 - 96 {s} \right. \\& \qquad \left.- 216 {s}^2 + 1944 {s}^3 - 3211 {s}^4 + 424 {s}^5 + 3252 {s}^6 \right. \\& \qquad \left.- 2816 {s}^7 + 704 {s}^8) + j {\rho_2}(16 - 96 {s} + 360 {s}^2 \right. \\& \qquad \left.- 936 {s}^3 + 2693 {s}^4 - 6200 {s}^5 + 8004 {s}^6 - 5120 {s}^7 + 1280 {s}^8)\right) \\&+ \mathcal O(2) \end{aligned}$$ We can now combine Equations [\[eqn:dSdj\]](#eqn:dSdj){reference-type="eqref" reference="eqn:dSdj"} and [\[eqn:dSdl\]](#eqn:dSdl){reference-type="eqref" reference="eqn:dSdl"} to obtain the Taylor series invariant up to $\mathcal O(3)$, as in the statement of the theorem. The formula for $S_{2,s}^\infty(l,j)$ in terms of $S_{1,s}^\infty(l,j)$ is from Lemma [Lemma 33](#lem:symmetry){reference-type="ref" reference="lem:symmetry"}. ◻ **Remark 41**. In Equation [\[eqn:dSdl\]](#eqn:dSdl){reference-type="eqref" reference="eqn:dSdl"} notice that the constant term of $\frac{\partial S}{\partial l}$ lies in $[-\frac{\pi}{2},\frac{3\pi}{2}[$. 
Therefore, the desingularised action $S$ that we have chosen is actually the preferred choice $\hat{S}$ described in Lemma [Lemma 24](#lem:Spref){reference-type="ref" reference="lem:Spref"}, and therefore the Taylor series expansion in Theorem [Theorem 4](#thm:Taylor-intro){reference-type="ref" reference="thm:Taylor-intro"} is the preferred one $({S^{\text{pref}}})^\infty$ discussed in Theorem [Theorem 3](#thm:twist-intro){reference-type="ref" reference="thm:twist-intro"}. In other words, it turns out that our choice of cycle $\gamma_2^z$ that we made in [\[eq:defAct\]](#eq:defAct){reference-type="eqref" reference="eq:defAct"} is actually the preferred cycle $\gamma_\text{pref}^z$ defined in Section [4](#sec:geometricinterpret){reference-type="ref" reference="sec:geometricinterpret"}, and therefore $\mathcal{I}$ from Equation [\[eq:defAct\]](#eq:defAct){reference-type="eqref" reference="eq:defAct"} satisfies $$\mathcal{I} = \xi \circ \varrho$$ where $\varrho$ is as in the diagram in Figure [\[fig:diagram-semilocal\]](#fig:diagram-semilocal){reference-type="ref" reference="fig:diagram-semilocal"}. Had this not been the case, we would simply have had to add or subtract multiples of $2\pi l$ to obtain $\hat{S}$ from $S$. ## The twisting index invariant {#sec:twist-proof} The goal of this section is to prove Theorem [Theorem 3](#thm:twist-intro){reference-type="ref" reference="thm:twist-intro"}, that is, we compute the twisting index invariant of the system given in Equation [\[eqn_ssys\]](#eqn_ssys){reference-type="eqref" reference="eqn_ssys"}. We extend the procedure used in Alonso & Dullin & Hohloch [@ADH; @ADH2] to a situation with two focus-focus singularities. That is, we use the Taylor series invariants at each of the focus-focus points to obtain local privileged momentum maps, extend these local maps to the entire manifold, and then compare the images of these maps to the polygon invariant to compute the twisting index.
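The comparison of polygon representatives used in this procedure can be pictured with a toy computation. Changing the twisting index label by an integer $k$ corresponds to applying the $k$-th power of a fixed linear map to the part of the polygon on one side of a cut. The matrix and the piecewise rule below are assumptions for illustration (the standard shear commonly used for semitoric polygons; the system's actual map is not reproduced here):

```python
import numpy as np

# Assumed shear matrix (a common convention for semitoric polygons;
# illustrative only).
T = np.array([[1, 0], [1, 1]])

def act(vertices, lam, k):
    """Apply T^k to every vertex with first coordinate >= lam (the cut),
    leaving the remaining vertices fixed -- a toy model of how changing
    the twisting index label by k transforms a polygon representative."""
    Tk = np.linalg.matrix_power(T, k)
    return np.array([Tk @ v if v[0] >= lam else v for v in vertices])

verts = np.array([[0, 0], [1, 1], [2, 1], [3, 0]], dtype=float)
print(act(verts, lam=2.0, k=1))
# vertices with l >= 2 are sheared: (2,1) -> (2,3), (3,0) -> (3,3)
```

Even a single application visibly distorts the polygon, which is why a moderately accurate approximation of the privileged momentum map image suffices to single out the representative with label zero.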
To be more precise, recall that the twisting index invariant is an assignment of a tuple of integers (one for each focus-focus point) to each representative of the polygon invariant, and furthermore recall that the value of such an assignment on a single given representative uniquely determines its value on all representatives by the group action given in Equation [\[eq:twisact\]](#eq:twisact){reference-type="eqref" reference="eq:twisact"}. To calculate the twisting index we start by extending the privileged momentum maps $\nu_1$ and $\nu_2$, which are defined in small neighborhoods of the focus-focus points $m_1$ and $m_2$, respectively, to the entire manifold. Recall that, as explained in Subsections [3.2](#sss:polygon){reference-type="ref" reference="sss:polygon"} and [3.5](#sss:twisting){reference-type="ref" reference="sss:twisting"}, each choice of representative of the polygon invariant $\Delta$ corresponds to a momentum map $\mu_{\Delta}\colon M\to {\mathbb R}^2$ which is toric away from the preimages of the cuts and satisfies $\Delta= \mu_{\Delta}(M)$. Near each focus-focus point $m_r$ we have that $$\mu_{\Delta} = T^{\kappa_r^{\Delta}} \circ \nu_r$$ for some $\kappa_r^{\Delta}\in{\mathbb Z}$. For each $r\in\{1,2\}$, our goal is to use the image of $\nu_r$ to determine the choice of polygon $\Delta$ for which $\kappa_r^\Delta=0$. After fixing a choice of cut directions, the associated representatives of the polygon invariant are related by applying iterates of the linear map $T$ from Equation [\[eqn:T\]](#eqn:T){reference-type="eqref" reference="eqn:T"}, which changes the representatives drastically. See for instance Figure [\[fig:polygons-with-kappa-1\]](#fig:polygons-with-kappa-1){reference-type="ref" reference="fig:polygons-with-kappa-1"}. Thus, a sufficiently accurate approximation of the images of the privileged momentum maps is enough to determine which choice of polygon has a momentum map locally equal to the given privileged momentum map, i.e.
the choice of $\Delta$ such that $\mu_\Delta= \nu_r$. This representative has index $\kappa_r^\Delta=0$, and therefore repeating this for $r=1$ and $r=2$ completely determines the twisting index invariant of the system. The approximate image shown in Figure [\[fig:twisComp\]](#fig:twisComp){reference-type="ref" reference="fig:twisComp"} is produced from our expansions of the Taylor series invariant from Theorem [Theorem 4](#thm:Taylor-intro){reference-type="ref" reference="thm:Taylor-intro"}. It is accurate enough to determine the twisting index by comparing it to the polygons in Figure [\[fig:polygons-with-kappa-1\]](#fig:polygons-with-kappa-1){reference-type="ref" reference="fig:polygons-with-kappa-1"}, and we will see that the polygon with $\kappa_r^\Delta=0$ for both $r=1$ and $r=2$ is the representative shown in Figure [8](#fig:twis-cuts-down){reference-type="ref" reference="fig:twis-cuts-down"}. The twisting index of other polygons is then determined by the action given in Equation [\[eq:twisact\]](#eq:twisact){reference-type="eqref" reference="eq:twisact"}, as shown in Figures [\[fig:polygons-with-kappa-1\]](#fig:polygons-with-kappa-1){reference-type="ref" reference="fig:polygons-with-kappa-1"} and [\[fig:polygons-with-kappa-2\]](#fig:polygons-with-kappa-2){reference-type="ref" reference="fig:polygons-with-kappa-2"}. ![A representative of the semitoric polygon. In the proof of Theorem [Theorem 3](#thm:twist-intro){reference-type="ref" reference="thm:twist-intro"} we prove that the associated twisting index labels are $(\kappa_1,\kappa_2)=(0,0)$.
The indices of the other polygons are then found by applying the action defined in Equation [\[eq:twisact\]](#eq:twisact){reference-type="eqref" reference="eq:twisact"}, see Figures [\[fig:polygons-with-kappa-1\]](#fig:polygons-with-kappa-1){reference-type="ref" reference="fig:polygons-with-kappa-1"} and [\[fig:polygons-with-kappa-2\]](#fig:polygons-with-kappa-2){reference-type="ref" reference="fig:polygons-with-kappa-2"}. ](4X2pol10-3-lab.pdf){#fig:twis-cuts-down width="250pt"} ![ Application of $T^{-1}$ ](fig13_1_n.pdf){width="\\textwidth"} ![ Application of $T^0$ ](fig13_2_n.pdf){width="\\textwidth"} ![ Application of $T^1$ ](fig13_3_n.pdf){width="\\textwidth"} \ **Remark 42**. Note that in our case there is a single representative of the polygon invariant which turns out to have both $\kappa_1=0$ and $\kappa_2=0$ (the one shown in Figure [8](#fig:twis-cuts-down){reference-type="ref" reference="fig:twis-cuts-down"}). This is just a coincidence for this system. In general the representatives of the polygon invariant with $\kappa_1=0$ and $\kappa_2=0$ can be distinct. **Remark 43**. If there is only one focus-focus point the situation is much easier, see Alonso & Dullin & Hohloch [@ADH2]. There the twisting index is deduced by comparing [@ADH2 Figure 13] with [@ADH2 Figure 14], which correspond to our Figures [8](#fig:twis-cuts-down){reference-type="ref" reference="fig:twis-cuts-down"} and [\[fig:twisComp\]](#fig:twisComp){reference-type="ref" reference="fig:twisComp"}, respectively. *Proof of Theorem [Theorem 3](#thm:twist-intro){reference-type="ref" reference="thm:twist-intro"}.* Let $s_-,s_+$ be the lower and upper bounds for the range of parameters $s$ for which the system has two focus-focus points, as given in Proposition [\[prop:nff\]](#prop:nff){reference-type="ref" reference="prop:nff"}. Let $\rho_2 = \rho_2(s)$ be as given in Equation [\[eqn:rho\]](#eqn:rho){reference-type="eqref" reference="eqn:rho"} and let $s\in \,\,]s_-,s_+[$. 
Consider the first order factor of $l$ in the Taylor series $S_r^\infty(l,j)$ from Theorem [Theorem 4](#thm:Taylor-intro){reference-type="ref" reference="thm:Taylor-intro"}, which is given by $$\label{eqn:arctan} \arctan \left( \dfrac{6-9s}{{\rho_2}} \right)\quad \text{for }r=1,\qquad\quad \arctan \left( \dfrac{6-9s}{{\rho_2}} \right) + \pi \quad \text{for }r=2.$$ No matter the value of $s$, the values of the terms in Equation [\[eqn:arctan\]](#eqn:arctan){reference-type="eqref" reference="eqn:arctan"} stay in the interval $]-\tfrac{\pi}{2},\tfrac{3\pi}{2}[$. Since the twisting index only changes when this term surpasses either $-\tfrac{\pi}{2}$ or $\tfrac{3\pi}{2}$, we conclude that the twisting index invariant at both focus-focus points $m_1$ and $m_2$ is independent of the value of $s$. For the remainder of the proof we thus suppress all dependencies on $s$ in the notation. To obtain the twisting index invariant we need to compare the privileged momentum maps $\nu_1$ and $\nu_2$ associated to each of the focus-focus singularities with the momentum map $\mu_\Delta$ associated to a polygon $\Delta$ of the polygon invariant from Theorem [\[thm:poly\]](#thm:poly){reference-type="ref" reference="thm:poly"}. Following §[3.2](#sss:polygon){reference-type="ref" reference="sss:polygon"}, we make the choice of signs $\varepsilon = (\varepsilon_1, \varepsilon_2) = (-1,-1)$ and we consider the half-lines $$b_{\lambda_1}^{\varepsilon_1} = \{ (0,y) \;|\; y<0\}\quad \mbox{and} \quad b_{\lambda_2}^{\varepsilon_2} = \{ (2,y) \;|\; y<0 \}$$ where $\lambda_1=0$, $\lambda_2=2$. For each $r\in\{1,2\}$, we have seen in §[3.4](#sss:taylor){reference-type="ref" reference="sss:taylor"} that we can find a local neighbourhood $U_r \subset M$ of the focus-focus singularity $m_r$, a neighbourhood $W_r := F^{-1}(F(U_r))$ of the corresponding focus-focus fibre and $\tilde{W}_r = F^{-1}(F(W_r) \backslash b_{\lambda_r}^{\varepsilon_r})$.
We also have a map $\Phi_r = \varrho_r \circ F$ and we use a local coordinate $z = l + ij \in \Phi_r(W_r) \subset {\mathbb C}$. In §[3.5](#sss:twisting){reference-type="ref" reference="sss:twisting"} we have defined the privileged momentum map $\nu_r = (L,\Xi_r)$, where $\Xi_r = \xi_r \circ \Phi_r : W_r \to {\mathbb R}$ and the function $\xi_r(z)$ is understood as a preferred local action (cf. Remark [Remark 27](#re:xi_as_action){reference-type="ref" reference="re:xi_as_action"}). In Lemma [Lemma 24](#lem:Spref){reference-type="ref" reference="lem:Spref"} we have defined its corresponding desingularised action $\hat{S}$, which is related to $\xi_r$ by $$\label{eqn:proof-xi} 2\pi \xi_r(z) = \hat{S}_r(z) - \textup{Im}(z \log z -z) + 2\pi \xi_r(0)$$ for $z\in\Phi_r(W_r)$. In Theorem [Theorem 4](#thm:Taylor-intro){reference-type="ref" reference="thm:Taylor-intro"} we have computed the Taylor series invariant up to second order, which consists of the Taylor series of the functions $\hat{S}_r(z)$, $r\in\{1,2\}$. We will use this result to approximate the functions $\xi_r(z)$. ![Singularity $\mathcal{N} \times \mathcal{S}$](TwisNS-cut.png){width="90%"} ![Singularity $\mathcal{S} \times \mathcal{N}$](TwisSN-cut.png){width="90%"} In Remark [Remark 37](#rmk:varrho_is_J){reference-type="ref" reference="rmk:varrho_is_J"}, we have seen that $\varrho_r(l,h) = (l,J_r(l,h)),$ where $J_r(l,h)$ is the imaginary action defined in [\[eqn:im-action\]](#eqn:im-action){reference-type="eqref" reference="eqn:im-action"}. For $x\in W_r$, we thus have that $$\label{eq:approxtwis} \Xi_r (x) = \xi_r \circ \varrho_r \circ F(x) = \xi_r(L(x), J_r(L(x),H(x))).$$ Note that $J_r$ is smooth around the focus-focus value and can thus be expanded around it, as in the proof of Lemma [Lemma 40](#lem:birkhoff){reference-type="ref" reference="lem:birkhoff"}.
Furthermore, $\xi_r$ is determined by $\hat{S}$ via Equation [\[eqn:proof-xi\]](#eqn:proof-xi){reference-type="eqref" reference="eqn:proof-xi"}, and $\hat{S}$ can be approximated by the finite expansion from Theorem [Theorem 4](#thm:Taylor-intro){reference-type="ref" reference="thm:Taylor-intro"}. Putting this together, from Equation [\[eq:approxtwis\]](#eq:approxtwis){reference-type="eqref" reference="eq:approxtwis"} and Theorem [Theorem 4](#thm:Taylor-intro){reference-type="ref" reference="thm:Taylor-intro"} we have obtained an approximation for $\Xi_r$. We take $W_1$ to be the focus-focus point $m_1$ and all regular points in the range $-3<L<1$, and for $W_2$ we take $m_2$ and all regular points in the range $-1<L<3$. Using the approximation of $\Xi_r$ from Equation [\[eq:approxtwis\]](#eq:approxtwis){reference-type="eqref" reference="eq:approxtwis"}, we now plot the image of the map $\nu_r = (L,\Xi_r)$ using Mathematica, which yields the image shown in Figure [\[fig:twisComp\]](#fig:twisComp){reference-type="ref" reference="fig:twisComp"}. For simplicity in the computations we have taken $s = \frac{1}{2}$, since as discussed above the resulting value of the twisting index will be independent of $s$. Necessarily, for $r\in\{1,2\}$, the map $\nu_r$ is equal in a neighborhood of $c_r$ to one of the possible generalized toric momentum maps $\mu_\Delta$, for some choice of $\Delta$ with downwards cuts. All such choices are related by a global application of the linear map $T$; several such polygons are shown in Figure [\[fig:polygons-with-kappa-1\]](#fig:polygons-with-kappa-1){reference-type="ref" reference="fig:polygons-with-kappa-1"}. We now compare the polygons from Figure [\[fig:polygons-with-kappa-1\]](#fig:polygons-with-kappa-1){reference-type="ref" reference="fig:polygons-with-kappa-1"} with the images of the approximations of $\nu_r = (L,\Xi_r)$ shown in Figure [\[fig:twisComp\]](#fig:twisComp){reference-type="ref" reference="fig:twisComp"}. 
We see that the order of our approximation is sufficiently high to clearly identify that the images from Figure [\[fig:twisComp\]](#fig:twisComp){reference-type="ref" reference="fig:twisComp"} are most similar to the polygon singled out in Figure [8](#fig:twis-cuts-down){reference-type="ref" reference="fig:twis-cuts-down"}, since the other options (such as those shown in Figure [\[fig:polygons-with-kappa-1\]](#fig:polygons-with-kappa-1){reference-type="ref" reference="fig:polygons-with-kappa-1"}) have a sufficiently different shape from the one shown in Figure [8](#fig:twis-cuts-down){reference-type="ref" reference="fig:twis-cuts-down"}. Thus, the indices associated to the polygon in Figure [8](#fig:twis-cuts-down){reference-type="ref" reference="fig:twis-cuts-down"} are $(\kappa_1,\kappa_2)=(0,0)$. This determines the twisting index labels for all representatives of the polygon by applying the group action [\[eq:twisact\]](#eq:twisact){reference-type="eqref" reference="eq:twisact"}. ◻ # Values of constants {#appendix:d1d2} The values of the terms $d_1$ and $d_2$ which appear in Equation [\[eqn:dSdj\]](#eqn:dSdj){reference-type="eqref" reference="eqn:dSdj"} are given by: $$\begin{aligned} d_1 &= -\left(16 - 96 (1 + {\rho_1}) {s} + 8 (139 + 48 {\rho_1}) {s}^2 - 8 (587 + 183 {\rho_1}) {s}^3 \right. \\& \quad \left. + 3 (3515 + 1032 {\rho_1}) {s}^4 - 64 (241 + 45 {\rho_1}) {s}^5 + 96 (157 + 10 {\rho_1}) {s}^6 \right. \\& \quad \left.- 8704 {s}^7 + 2176 {s}^8\right) \left(-6 j l {\rho_2}(-512 + 6912 {s} - 32256 {s}^2 + 29952 {s}^3 \right. \\& \quad \left.+ 651072 {s}^4 - 5470176 {s}^5 + 25000480 {s}^6 - 78708528 {s}^7 + 181951998 {s}^8 \right. \\& \quad \left.- 308836805 {s}^9 + 366523680 {s}^{10} - 264177144 {s}^{11} + 45690784 {s}^{12} + 122026416 {s}^{13} \right. \\& \quad \left.- 141477888 {s}^{14} + 75192576 {s}^{15} - 20766720 {s}^{16} + 2396160 {s}^{17}) \right.
\\& \quad \left.+ l^2 (5120 - 76800 {s} + 536832 {s}^2 - 2331648 {s}^3 + 9224832 {s}^4 - 43900032 {s}^5 \right. \\& \quad \left.+ 196060832 {s}^6 - 664211520 {s}^7 + 1701591876 {s}^8 - 3591176044 {s}^9 \right. \\& \quad \left.+ 6833670885 {s}^{10} - 11790448080 {s}^{11} + 17057911016 {s}^{12} - 19014926976 {s}^{13}\right. \\& \quad \left. + 15302126928 {s}^{14} - 8262067200 {s}^{15} + 2541006336 {s}^{16} - 132489216 {s}^{17} \right. \\& \quad \left.- 200669184 {s}^{18} + 66846720 {s}^{19} - 6684672 {s}^{20}) \right. \\& \quad \left.+ j^2 (-5120 + 76800 {s} + 118528 {s}^2 - 6843392 {s}^3 + 69320064 {s}^4 - 441754496 {s}^5 \right. \\& \quad \left.+ 1994226016 {s}^6 - 6694907840 {s}^7 + 17301290172 {s}^8 - 35191498900 {s}^9 \right. \\& \quad \left.+ 57016666395 {s}^{10} - 74143143472 {s}^{11} + 78236327192 {s}^{12} - 68381724032 {s}^{13} \right. \\& \quad \left.+ 50940505264 {s}^{14} - 32953613312 {s}^{15} + 18193326592 {s}^{16} - 8071077888 {s}^{17} \right. \\& \quad \left.+ 2625804288 {s}^{18} - 547880960 {s}^{19} + 54788096 {s}^{20})\right)\\[0.2cm] d_2 &= 256 {\rho_2}^6 {\rho_1}^4 (16 - 96 (1 + {\rho_1}) {s} + 8 (139 + 48 {\rho_1}) {s}^2 - 8 (587 + 183 {\rho_1}) {s}^3 \\ & \quad + 3 (3515 + 1032 {\rho_1}) {s}^4 - 64 (241 + 45 {\rho_1}) {s}^5 + 96 (157 + 10 {\rho_1}) {s}^6 \\& \quad- 8704 {s}^7 + 2176 {s}^8). \end{aligned} \label{d1d2}$$ Note that $\frac{d_1}{d_2} = d^s_3j^2 + d^s_4 jl + d^s_5l^2$ for some $d^s_3, d^s_4, d^s_5\in{\mathbb R}$ depending on the parameter $s$, since $\rho_1$ and $\rho_2$ are also determined by $s$, see Equation [\[eqn:rho\]](#eqn:rho){reference-type="eqref" reference="eqn:rho"}. **Jaume Alonso**\ Technische Universität Berlin\ Institute of Mathematics\ Str. des 17. 
Juni 136\ D-10623 Berlin, Germany\ *E-mail*: `alonso@math.tu-berlin.de`\ **Sonja Hohloch**\ University of Antwerp\ Department of Mathematics\ Middelheimlaan 1\ B-2020 Antwerpen, Belgium\ *E-mail*: `sonja.hohloch@uantwerpen.be`\ **Joseph Palmer**\ University of Illinois at Urbana-Champaign\ Department of Mathematics\ 1409 W. Green St\ Urbana, IL 61801 USA\ *E-mail*: `jpalmer5@illinois.edu`\
arxiv_math
{ "id": "2309.16614", "title": "The twisting index in semitoric systems", "authors": "Jaume Alonso, Sonja Hohloch, Joseph Palmer", "categories": "math.SG math.DS", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | Consider the elliptic operator given by $$\label{fx2} \mathscr{L}_{\epsilon}f\,=\, \boldsymbol{b} \cdot \nabla f \,+\, \epsilon\, \Delta f$$ for some smooth vector field ${\boldsymbol b}\colon {\mathbb R}^d\to{\mathbb R}^d$ and a small parameter $\epsilon>0$. Consider the initial-valued problem $$\label{fx00} \left\{ \begin{aligned} & \partial_t u_\epsilon\,=\, {\mathscr L}_\epsilon u_\epsilon\;, \\ & u_\epsilon (0, \cdot) = u_0(\cdot) \;, \end{aligned} \right.$$ for some bounded continuous function $u_0$. Denote by ${\mathcal M}_0$ the set of critical points of ${\boldsymbol b}$, ${\mathcal M}_0 =\{{\boldsymbol x}\in {\mathbb R}^d : {\boldsymbol b}({\boldsymbol x})=0\}$, assumed to be finite. Under the hypothesis that ${\boldsymbol b} = -(\nabla U + {\boldsymbol \ell})$, where ${\boldsymbol \ell}$ is a divergence-free field orthogonal to $\nabla U$, the main result of this article states that there exist a time-scale $\theta^{(1)}_\epsilon$, $\theta^{(1)}_\epsilon \to \infty$ as $\epsilon \rightarrow 0$, and a Markov semigroup $\{p_t : t\ge 0\}$ defined on ${\mathcal M}_0$ such that $$\lim_{\epsilon\to 0} u_\epsilon ( t \, \theta^{(1)}_\epsilon, {\boldsymbol x} ) \;=\; \sum_{{\boldsymbol m}'\in {\mathcal M}_0} p_t({\boldsymbol m}, {\boldsymbol m}')\, u_0({\boldsymbol m}')\;,$$ for all $t>0$ and ${\boldsymbol x}$ in the domain of attraction of ${\boldsymbol m}$ \[for the ODE $\dot {{\boldsymbol x}}(t) = {\boldsymbol b}({\boldsymbol x}(t))$\]. The time scale $\theta^{(1)}$ is critical in the sense that, for every time scale $\varrho_\epsilon$ such that $\varrho_\epsilon \to \infty$, $\varrho_\epsilon/\theta^{(1)}_\epsilon \to 0$, $$\lim_{\epsilon\to 0} u_\epsilon ( \varrho_\epsilon, {\boldsymbol x} ) \;=\; u_0({\boldsymbol m})$$ for all ${\boldsymbol x} \in {\mathcal D}({\boldsymbol m})$. Namely, $\theta_\epsilon^{(1)}$ is the first scale at which the solution to the initial-valued problem starts to change.
In a companion paper [@LLS2] we extend this result by finding all critical time-scales at which the solution of the initial-valued problem [\[fx00\]](#fx00){reference-type="eqref" reference="fx00"} evolves smoothly in time, and we show that the solution $u_\epsilon$ is expressed in terms of the semigroup of some Markov chain taking values in sets formed by unions of critical points of ${\boldsymbol b}$. address: - | IMPA, Estrada Dona Castorina 110, J. Botanico, 22460 Rio de Janeiro, Brazil and Univ. Rouen Normandie, CNRS, LMRS UMR 6085, F-76000 Rouen, France.\ e-mail: `landim@impa.br` - | School of Mathematics, Korea Institute for Advanced Study, Republic of Korea.\ e-mail: `jklee@kias.re.kr` - | Department of Mathematical Sciences, Seoul National University and Research Institute of Mathematics, Republic of Korea.\ e-mail: `insuk.seo@snu.ac.kr` author: - Claudio Landim, Jungkyoung Lee, and Insuk Seo title: "Metastability and time scales for parabolic equations with drift 1: the first time scale" --- # Introduction {#sec0} The main concern of the current article is the behavior of the solution $u_{\epsilon}$ of the equation [\[fx00\]](#fx00){reference-type="eqref" reference="fx00"} in the regime $\epsilon\rightarrow0$. This problem is connected to the metastable behavior of the diffusion process induced by the generator $\mathscr{L}_{\epsilon}$ given in [\[fx2\]](#fx2){reference-type="eqref" reference="fx2"}, which has been a topic of serious interest in the probability community. Freidlin and Koralov [@fk10a; @fk10b] found a critical depth $D>0$ and showed that, for all $\eta>0$, the solution $u_{\epsilon}(t,\,x)$ differs significantly between the intervals $t\in[0,\,e^{(D-\eta)/\epsilon}]$ and $t\in[e^{(D+\eta)/\epsilon},\,\infty)$. Therefore, a dramatic phase transition occurs at the scale $\theta_{\epsilon}=e^{D/\epsilon}$. This result has been extended by Koralov and Tcheuko [@kt16] to cases which exhibit multiple metastable time-scales.
Ishii and Souganidis [@is15; @is17] derived similar results with purely analytical methods. In this article and the companion paper [@LLS2], we characterize the solution $u_{\epsilon}$ assuming that the diffusion process induced by the generator $\mathscr{L}_{\epsilon}$ has a Gibbs invariant measure. More precisely, fix a smooth potential $U\colon\mathbb{R}^{d}\to\mathbb{R}$, and a smooth vector field $\boldsymbol{\ell}\colon\mathbb{R}^{d}\to\mathbb{R}^{d}$. Assume that the vector field $\boldsymbol{\ell}$ is divergence free and is orthogonal to the gradient of $U$: $$(\nabla\cdot\boldsymbol{\ell})(\boldsymbol{x})\,=\,0\;,\quad(\nabla U)(\boldsymbol{x})\cdot\boldsymbol{\ell}(\boldsymbol{x})\,=\,0\;,\quad\boldsymbol{x}\in\mathbb{R}^{d}\;. \label{27}$$ For $\epsilon>0$, denote by $\mathscr{L}_{\epsilon}$ the elliptic operator given by $$\mathscr{L}_{\epsilon}f\,=\,-\,(\,\nabla U\,+\,\boldsymbol{\ell}\,)\cdot\nabla f\,+\,\epsilon\,\Delta f\;,\quad f\in C^{2}(\mathbb{R}^{d})\;, \label{46}$$ which corresponds to the operator [\[fx2\]](#fx2){reference-type="eqref" reference="fx2"} when $\boldsymbol{b}= - (\nabla U+\boldsymbol{\ell})$. It has been shown in [@LS-22] that this decomposition of $\boldsymbol{b}$ is a necessary and sufficient condition for the diffusion process induced by the generator $\mathscr{L}_{\epsilon}$ to have a Gibbs invariant measure. Denote by $\mathscr{L}_{\epsilon}$ the generator [\[46\]](#46){reference-type="eqref" reference="46"}, unless otherwise specified. Fix a bounded and continuous function $u_{0}\colon\mathbb{R}^{d}\to\mathbb{R}$ and consider the initial-valued problem $$\left\{ \begin{aligned} & \partial_{t}u_{\epsilon}\,=\,\mathscr{L}_{\epsilon}u_{\epsilon}\;,\\ & u_{\epsilon}(0,\cdot)=u_{0}(\cdot)\;. \end{aligned} \right.
\label{44}$$ The tools developed in [@BL1; @BL2; @LLM; @LMS2; @LMS] make it possible to describe the solution of the parabolic equation [\[44\]](#44){reference-type="eqref" reference="44"}, in the domain of attraction of a local minimum $\boldsymbol{m}$, at the time-scale in which the solution is transformed from the value of the initial condition $u_{0}(\cdot)$ at the local attractor $\boldsymbol{m}$ of the field $\boldsymbol{b}=-(\nabla U+\boldsymbol{\ell})$ to a convex combination of the initial condition calculated at several different local attractors $\boldsymbol{m}'$. A similar result appeared in [@bgl2] in the case of sequences of continuous-time Markov chains on a fixed finite state space. ## The first critical time scale {#the-first-critical-time-scale .unnumbered} Let us now explain our main result in more detail. For two positive sequences $(\alpha_{\epsilon}:\epsilon>0)$, $(\beta_{\epsilon}:\epsilon>0)$, we write ${\color{blue}\alpha_{\epsilon}\prec\beta_{\epsilon}}$, or equivalently $\color{blue} \beta_{\epsilon} \succ \alpha_{\epsilon}$, if $\alpha_{\epsilon}/\beta_{\epsilon}\to0$ as $\epsilon\rightarrow0$. The main results of the current article and the companion paper [@LLS2] assert that there exist critical time scales $\theta_{\epsilon}^{(1)}\prec\cdots\prec\theta_{\epsilon}^{(\mathfrak{q})}$ associated with the potential function $U$ at which the asymptotic behavior of the solution $u_{\epsilon}$ changes dramatically. We not only characterize these time scales explicitly but also provide precise asymptotics of $u_{\epsilon}$ along these scales. We also derive the asymptotics of the solution between these time-scales, completely analyzing the behavior of $u_{\epsilon}$. The current article concerns the first time-scale in this complex multi-scale structure.
We explicitly find a time-scale $\theta_{\epsilon}^{(1)} \succ 1$, and a Markov semigroup $\{p_{t}:t\ge0\}$ defined on the set of local minima $\mathcal{M}_{0}$ of $U$ such that, for each local minimum $\boldsymbol{m}\in\mathcal{M}_{0}$, $$\lim_{\epsilon\to0}u_{\epsilon}(t\,\theta_{\epsilon}^{(1)},\boldsymbol{x})\;=\;\sum_{\boldsymbol{m}'\in\mathcal{M}_{0}}p_{t}(\boldsymbol{m},\boldsymbol{m}')\,u_{0}(\boldsymbol{m}')\;,\label{43}$$ for all $t>0$, $\boldsymbol{x}\in\mathcal{D}(\boldsymbol{m})$. Here, $\mathcal{D}(\boldsymbol{m})$ represents the domain of attraction of $\boldsymbol{m}$ for the ODE $\dot{\boldsymbol{x}}(t)=\boldsymbol{b}(\boldsymbol{x}(t))$, where $\boldsymbol{b}=-(\nabla U+\boldsymbol{\ell})$. We also show that for any sequence $1 \prec \varrho_{\epsilon}\prec\theta_{\epsilon}^{(1)}$, $$\lim_{\epsilon\to0}u_{\epsilon}(\varrho_{\epsilon}, \boldsymbol{x})\;=\;u_{0}(\boldsymbol{m}) \;, \label{51}$$ for all $\boldsymbol{x}\in\mathcal{D}(\boldsymbol{m})$. Hence, the solution does not change until the time-scale $\theta_{\epsilon}^{(1)}$, and it starts to change exactly at $\theta_{\epsilon}^{(1)}$ in view of [\[43\]](#43){reference-type="eqref" reference="43"} and [\[51\]](#51){reference-type="eqref" reference="51"}. The main achievement of the current article is the verification of [\[43\]](#43){reference-type="eqref" reference="43"} and [\[51\]](#51){reference-type="eqref" reference="51"}. We remark that this scale $\theta_{\epsilon}^{(1)}$ is the scale $\theta_{\epsilon}=e^{D/\epsilon}$ obtained in [@fk10a; @fk10b]. ## Multi-scale structure {#multi-scale-structure .unnumbered} The characterization of the remaining scales is the content of the companion paper [@LLS2]. We briefly explain the main result. Let us start from the second scale, which can be inferred from [\[43\]](#43){reference-type="eqref" reference="43"}.
The theory of finite-state continuous-time Markov chains asserts that there exist probability measures $\pi_{j}^{(1)}$, $1\le j\le\mathfrak{n}_{1}$, on $\mathcal{M}_{0}$ with disjoint supports, and probability measures $\omega^{(1)}(\boldsymbol{m},\cdot)$, $\boldsymbol{m}\in\mathcal{M}_{0}$, on $\{1,\dots,\mathfrak{n}_{1}\}$ such that $$\lim_{t\to\infty}p_{t}(\boldsymbol{m},\boldsymbol{m}') \;=\;\sum_{k=1}^{\mathfrak{n}_{1}} \omega^{(1)}(\boldsymbol{m},k)\,\pi_{k}^{(1)}(\boldsymbol{m}') \label{eq:52}$$ for all $\boldsymbol{m}$, $\boldsymbol{m}'\in\mathcal{M}_{0}$. If $\boldsymbol{m}'$ is a transient state, all terms in the previous sum vanish. Indeed, the measures $\pi_{j}^{(1)}$ represent the stationary states of the Markov chain restricted to the closed irreducible classes, which in turn are the supports of the measures $\pi_{j}^{(1)}$. The weight $\omega^{(1)}(\boldsymbol{m},k)$ stands for the probability that the Markov chain starting from $\boldsymbol{m}$ is absorbed at the support of the measure $\pi_{k}^{(1)}$. If there is only one stationary state or, equivalently, one closed irreducible class, namely $\mathfrak{n}_{1}=1$, then for all time-scales $\varrho_{\epsilon}$ such that $\theta_{\epsilon}^{(1)}\prec\varrho_{\epsilon}$, we can readily guess from [\[43\]](#43){reference-type="eqref" reference="43"} and [\[eq:52\]](#eq:52){reference-type="eqref" reference="eq:52"} that (note that $\omega^{(1)}(\boldsymbol{m},1)=1$ in this case) $$\lim_{\epsilon\to0}u_{\epsilon}(\varrho_{\epsilon},\boldsymbol{x})\;=\;\sum_{\boldsymbol{m}'\in\mathcal{M}_{0}}\pi_{1}^{(1)}(\boldsymbol{m}')\,u_{0}(\boldsymbol{m}')$$ for every local minimum $\boldsymbol{m}\in\mathcal{M}_{0}$ and $\boldsymbol{x}\in\mathcal{D}(\boldsymbol{m})$. Note that the limit does not depend on $\boldsymbol{m}$ or $\boldsymbol{x}$.
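The limit decomposition above can be made concrete on a toy example. The following sketch (the three-state chain is hypothetical, not derived from any potential $U$) builds a continuous-time chain with one transient state and two closed irreducible classes, so that $\mathfrak{n}_{1}=2$ here, and computes the long-time limit of $p_t$ numerically:

```python
import numpy as np

# Toy illustration (hypothetical chain, not derived from any potential U):
# a 3-state continuous-time chain in which state 0 is transient and states
# 1, 2 are absorbing, i.e. there are two closed irreducible classes.
L = np.array([[-2.0, 1.0, 1.0],   # state 0 jumps to 1 or 2 at rate 1 each
              [ 0.0, 0.0, 0.0],   # state 1 is absorbing
              [ 0.0, 0.0, 0.0]])  # state 2 is absorbing

def expm(A, k=30, terms=12):
    """exp(A) via scaling-and-squaring with a truncated Taylor series."""
    B = A / 2.0**k
    E = np.eye(A.shape[0])
    T = np.eye(A.shape[0])
    for j in range(1, terms):
        T = T @ B / j
        E = E + T
    for _ in range(k):            # exp(A) = exp(A / 2^k)^(2^k)
        E = E @ E
    return E

# For large t, p_t(0, .) converges to omega(0,1) pi_1 + omega(0,2) pi_2,
# with pi_1 = delta_1, pi_2 = delta_2 and omega(0,1) = omega(0,2) = 1/2.
p = expm(50.0 * L)
print(np.round(p[0], 6))          # approximately [0, 0.5, 0.5]
```

The printed row is (approximately) $\tfrac12\delta_1+\tfrac12\delta_2$: the transient mass is split between the two absorbing classes according to the absorption probabilities, exactly the role played by the weights $\omega^{(1)}(\boldsymbol{m},k)$.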
This behavior occurs when (a) all the wells associated to local minima of $U$ which are not global minima have the same depth, and (b) either there is only one global minimum, or there is more than one and the depths of all wells are the same. In this case of a unique closed irreducible class, the support of the measure $\pi_{1}^{(1)}$ corresponds to the set of global minima of $U$. This finishes the description of the multi-scale structure for the case $\mathfrak{n}_{1}=1$. In contrast, if there is more than one closed irreducible class, the limit of $u_{\epsilon}(t\theta_{\epsilon}^{(1)},\boldsymbol{x})$ as $\epsilon\to0$ and then $t\to\infty$ depends on the local minimum attracting $\boldsymbol{x}$. In this case, there exists a second and longer time-scale $\theta_{\epsilon}^{(2)}$ such that $\theta_{\epsilon}^{(1)}\prec\theta_{\epsilon}^{(2)}$ and a Markov semigroup $\{p_{t}^{(2)}:t\ge0\}$ defined on the set of closed irreducible classes $\{1,\dots,\mathfrak{n}_{1}\}$ obtained at the first time-scale such that $$\lim_{\epsilon\to0}u_{\epsilon}(t\,\theta_{\epsilon}^{(2)},\boldsymbol{x})\;=\;\sum_{k=1}^{\mathfrak{n}_{1}}\omega^{(1)}(\boldsymbol{m},k)\,\sum_{\ell=1}^{\mathfrak{n}_{1}}p_{t}^{(2)}(k,\ell)\,\sum_{\boldsymbol{m}'\in\mathcal{M}_{0}}\pi_{\ell}^{(1)}(\boldsymbol{m}')\,u_{0}(\boldsymbol{m}')$$ for all $t>0$, where $\boldsymbol{m}\in\mathcal{M}_{0}$ is a local minimum of $U$ and $\boldsymbol{x}\in\mathcal{D}(\boldsymbol{m})$. Mind that we may restrict the sum over $\boldsymbol{m}'$ to local minima in the support of the measure $\pi_{\ell}^{(1)}$.
We can also verify that, for any sequence $\varrho_{\epsilon}$ such that $\theta_{\epsilon}^{(1)}\prec\varrho_{\epsilon}\prec\theta_{\epsilon}^{(2)}$, we have $$\lim_{\epsilon\to0}u_{\epsilon}(\varrho_{\epsilon},\boldsymbol{x})\;=\;\sum_{k=1}^{\mathfrak{n}_{1}}\omega^{(1)}(\boldsymbol{m},k)\,\sum_{\boldsymbol{m}'\in\mathcal{M}_{0}}\pi_{k}^{(1)}(\boldsymbol{m}')\,u_{0}(\boldsymbol{m}')$$ for all $\boldsymbol{m}\in\mathcal{M}_{0}$ and $\boldsymbol{x}\in\mathcal{D}(\boldsymbol{m})$. This is exactly the behavior of the solution $u_{\epsilon}$ in the time scale $t\theta_{\epsilon}^{(1)}$ as $\epsilon\to0$ and then $t\to\infty$, and the one in the time scale $t\theta_{\epsilon}^{(2)}$ as $\epsilon\to0$ and then $t\to0$. This completes the description of the asymptotics of $u_{\epsilon}$ until the second scale $\theta_{\epsilon}^{(2)}$. More generally, there exist $\mathfrak{q}\ge1$ and time-scales $\theta_{\epsilon}^{(1)}\prec\cdots\prec\theta_{\epsilon}^{(\mathfrak{q})}$ such that $$\lim_{\epsilon\to0}u_{\epsilon}(t\theta_{\epsilon}^{(p)},\boldsymbol{x})\;=\;\sum_{k=1}^{\mathfrak{n}_{p-1}}\omega^{(p-1)}(\boldsymbol{m},k)\,\sum_{\ell=1}^{\mathfrak{n}_{p-1}}p_{t}^{(p)}(k,\ell)\,\sum_{\boldsymbol{m}'\in\mathcal{M}_{0}}\pi_{\ell}^{(p-1)}(\boldsymbol{m}')\,u_{0}(\boldsymbol{m}')\label{lls2-1}$$ for each $1\le p\le\mathfrak{q}$, $t>0$, $\boldsymbol{m}\in\mathcal{M}_{0}$, $\boldsymbol{x}\in\mathcal{D}(\boldsymbol{m})$. 
Furthermore, for each $1\le p\le\mathfrak{q}+1$, sequence $(\varrho_{\epsilon}:\epsilon>0)$ such that $\theta_{\epsilon}^{(p-1)}\,\prec\,\varrho_{\epsilon}\,\prec\,\theta_{\epsilon}^{(p)}$, $\boldsymbol{m}\in\mathcal{M}_{0}$, and $\boldsymbol{x}\in\mathcal{D}(\boldsymbol{m})$, $$\lim_{\epsilon\to0}u_{\epsilon}(\varrho_{\epsilon},\boldsymbol{x})\;=\;\sum_{k=1}^{\mathfrak{n}_{p-1}}\omega^{(p-1)}(\boldsymbol{m},k)\,\sum_{\boldsymbol{m}'\in\mathcal{M}_{0}}\pi_{k}^{(p-1)}(\boldsymbol{m}')\,u_{0}(\boldsymbol{m}')\;.\label{lls2-2}$$ In this formula, $\theta_{\epsilon}^{(0)}$, $\theta_{\epsilon}^{(\mathfrak{q}+1)}$ are the constant sequences equal to $1$, $+\infty$, respectively. Summing up, - Denote by $\color{blue} \mathfrak{n}_{0}$ the number of local minima of $U$ so that $\mathfrak{n}_{0}>\mathfrak{n}_{1}>\cdots>\mathfrak{n}_{\mathfrak{q}}=1$. - $p_{t}^{(p)}$, $t\ge0$, is a Markov semigroup on $\{1,\dots,\mathfrak{n}_{p-1}\}$, $1\le p \le {\mathfrak q}$. Here, the semigroup $p_t$, introduced in [\[43\]](#43){reference-type="eqref" reference="43"}, has been represented by $p^{(1)}_t$. - For a fixed $1\le p\le\mathfrak{q}$, $\pi_{j}^{(p)}$, $1\le j\le\mathfrak{n}_{p}$, are probability measures on $\mathcal{M}_{0}$ with disjoint supports. They correspond to the extremal invariant probability measures of the Markov chain with transition probability $p_{t}^{(p)}$. - $\omega^{(p)}(\boldsymbol{m},\cdot)$ are probability measures on $\{1,\dots,\mathfrak{n}_{p}\}$, where $\omega^{(p)}(\boldsymbol{m},j)$ stands for the probability that $\boldsymbol{m}$ is absorbed at the support of the probability measure $\pi_{j}^{(p)}$. It turns out that all local minima which belong to the support of a measure $\pi_{j}^{(p)}$ are at the same height: $U(\boldsymbol{m}')=U(\boldsymbol{m}'')$ if $\boldsymbol{m}'$, $\boldsymbol{m}''$ belong to the support of the same measure $\pi_{j}^{(p)}$. 
On the other hand, the support of a measure $\pi_{j}^{(p+1)}$ is formed by the union of the supports of measures $\pi_{k}^{(p)}$, $k\in\{1,\dots,\mathfrak{n}_{p}\}$. Moreover, $\pi_{j}^{(p+1)}$ is a convex combination of the corresponding measures $\pi_{k}^{(p)}$. The rigorous recursive construction of this multi-scale structure is a delicate and complicated task and will be carried out in the companion paper [@LLS2]. Assertions [\[lls2-1\]](#lls2-1){reference-type="eqref" reference="lls2-1"} and [\[lls2-2\]](#lls2-2){reference-type="eqref" reference="lls2-2"} will be proven there as well. ## Comments on the proof {#comments-on-the-proof .unnumbered} The analysis of the asymptotics of the solution $u_{\epsilon}(t,\,\boldsymbol{x})$ of [\[44\]](#44){reference-type="eqref" reference="44"} is closely related to that of the metastable behavior of the process $$d\boldsymbol{x}_{\epsilon}(t)\,=\,-\,(\,\nabla U\,+\,\boldsymbol{\ell}\,)(\boldsymbol{x}_{\epsilon}(t))\,dt\,+\,\sqrt{2\epsilon}\,dW_{t} \label{sde0}$$ where $\epsilon>0$ denotes a small parameter corresponding to the temperature of the system, and $W_{t}$ a $d$-dimensional Brownian motion. This relation comes from the well-known expression $$u_{\epsilon}(t,\,\boldsymbol{x})=\mathbb{E}_{\boldsymbol{x}}^{\epsilon}\left[u_{0}(\boldsymbol{x}_{\epsilon}(t))\right]\;,\;\;\;\;t\ge0,\,\boldsymbol{x}\in\mathbb{R}^{d},$$ where $\mathbb{E}_{\boldsymbol{x}}^{\epsilon}$ denotes the expectation with respect to the diffusion process [\[sde0\]](#sde0){reference-type="eqref" reference="sde0"} starting at $\boldsymbol{x}\in\mathbb{R}^{d}$. The proof of the result described above is purely probabilistic and relies on the theory of metastable Markov processes developed in [@BL1; @LLM; @LMS2; @RS; @LMS].
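The probabilistic representation above lends itself to direct simulation. The sketch below (an assumed one-dimensional double well with $\boldsymbol{\ell}=0$, chosen only for illustration and not taken from the paper) estimates $u_{\epsilon}(t,\boldsymbol{x})=\mathbb{E}_{\boldsymbol{x}}^{\epsilon}[u_{0}(\boldsymbol{x}_{\epsilon}(t))]$ by an Euler-Maruyama Monte Carlo scheme:

```python
import numpy as np

# Minimal sketch (assumed example, not the paper's method): Monte Carlo
# evaluation of u_eps(t, x) = E_x[u_0(x_eps(t))] by Euler-Maruyama for
# dx = b(x) dt + sqrt(2 eps) dW, with the 1d double well
# U(x) = (x^2 - 1)^2 / 4 and l = 0, so b = -U' gives b(x) = x - x^3.
def b(x):
    return x - x**3

def u0(x):                        # a bounded continuous initial condition
    return np.tanh(x)

def u_eps(t, x0, eps, n_paths=5000, dt=2e-3, seed=1):
    rng = np.random.default_rng(seed)
    x = np.full(n_paths, x0, dtype=float)
    for _ in range(int(t / dt)):
        x += b(x) * dt + np.sqrt(2.0 * eps * dt) * rng.normal(size=n_paths)
    return u0(x).mean()

# x0 = 0.5 lies in the domain of attraction of the minimum m = 1; for t
# well below the first critical time-scale, paths rarely cross the barrier.
val = u_eps(t=5.0, x0=0.5, eps=0.05)
print(round(val, 2))              # close to u_0(1) = tanh(1) ~ 0.76
```

For times well below the first critical scale the estimate stays near $u_{0}(\boldsymbol{m})$ for the attracting minimum $\boldsymbol{m}$, which is the qualitative content of [\[51\]](#51){reference-type="eqref" reference="51"}.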
The metastable behavior of the process [\[sde0\]](#sde0){reference-type="eqref" reference="sde0"} has been recently studied in several articles: [@LM] provided sharp asymptotics for the low-lying spectrum, which is closely related to the metastability of the process $\boldsymbol{x}_{\epsilon}(\cdot)$, [@LS-22] established the Eyring-Kramers law, precisely estimating the mean transition time from a local minimum of $U$ to another one, and finally [@LS-22b] investigated the metastability among the global minima (i.e., ground states) of $U$. The last work can be regarded as the analysis of the metastability at the final scale $\theta_{\epsilon}^{(\mathfrak{q})}$ described above. Robust methods to analyze the metastable multi-scale structure, which appears when the potential function has a complicated structure, had not been examined before in the context of diffusion processes. We refer to [@bl3; @lx16; @fk17] for the context of finite state Markov chains. The analysis of the multi-scale structure is based on the resolvent approach to metastability developed in [@LMS]. The crucial point consists in showing that the solution of a resolvent equation is asymptotically constant in neighborhoods of local minima. More precisely, denote by $\mathcal{E}(\boldsymbol{m})$ a small neighborhood of a local minimum $\boldsymbol{m}$. Fix $\lambda>0$, $\boldsymbol{g}\colon\mathcal{M}_{0}\rightarrow\mathbb{R}$, and let $\phi_{\epsilon}$ be the unique solution of the resolvent equation $$(\lambda-\theta_{\epsilon}^{(1)}\mathscr{L}_{\epsilon})\,\phi_{\epsilon}\;=\;G\;:=\;\sum_{\boldsymbol{m}\in\mathcal{M}_{0}}{\boldsymbol g}(\boldsymbol{m})\,\chi_{_{\mathcal{E}(\boldsymbol{m})}}\;,$$ where $\chi_{\mathcal{A}}$, $\mathcal{A}\subset\mathbb{R}^{d}$, represents the indicator function of the set $\mathcal{A}$. The function on the right-hand side vanishes on $(\cup_{\boldsymbol{m}\in\mathcal{M}_{0}}\mathcal{E}(\boldsymbol{m}))^{c}$ and is constant on each well $\mathcal{E}(\boldsymbol{m}')$.
One of the main results of this article asserts that the solution $\phi_{\epsilon}$ is asymptotically constant in each well $\mathcal{E}(\boldsymbol{m})$: $$\lim_{\epsilon\rightarrow0}\,\max_{\boldsymbol{m}\in\mathcal{M}_{0}}\,\sup_{\boldsymbol{x}\in\mathcal{E}({\boldsymbol m})}\vert\,\phi_{\epsilon}(\boldsymbol{x})-\boldsymbol{f}({\boldsymbol m})\,\vert\;=\;0\;, \label{fx3}$$ where ${\boldsymbol f}$ is the solution of the reduced resolvent equation $$(\lambda-\mathfrak{L}_{1})\,\boldsymbol{f}\;=\;{\boldsymbol g}\;,\label{fx1}$$ and $\mathfrak{L}_{1}$ is the generator of the $\mathcal{M}_{0}$-valued, continuous-time Markov chain whose associated semigroup is the one appearing in [\[43\]](#43){reference-type="eqref" reference="43"}. Property [\[43\]](#43){reference-type="eqref" reference="43"} of the solution of the initial-valued problem [\[44\]](#44){reference-type="eqref" reference="44"} is deduced from this property of the resolvent equation. Uniform estimates, similar to [\[fx3\]](#fx3){reference-type="eqref" reference="fx3"}, for solutions of Dirichlet problems go back at least to Devinatz and Friedman [@DF78], and Day [@Day82]. In the literature, the convergence to a constant is called the leveling property of the equation. We refer to Lelièvre, Le Peutrec and Nectoux [@LLP22] for a recent account and further references. ## Organization {#organization .unnumbered} The paper is organized as follows. In Section [2](#sec1){reference-type="ref" reference="sec1"}, we state the main results. The proof of Theorem [2](#t01){reference-type="ref" reference="t01"} is divided into two parts.
In Section [4](#sec2){reference-type="ref" reference="sec2"}, we prove that the solution of the resolvent equation is asymptotically constant on each well, and, in Section [9](#sec9){reference-type="ref" reference="sec9"}, that the solution of the resolvent equation restricted to the set of local minima of $U$ is asymptotically the solution of the reduced resolvent equation [\[fx1\]](#fx1){reference-type="eqref" reference="fx1"}. The proof of the local constancy relies on a diffusion mixing time estimate presented in Section [3](#sec-ap3){reference-type="ref" reference="sec-ap3"}. The proof of the second property of the resolvent equation solution requires an estimate of the time it takes to exit a neighborhood of an unstable equilibrium point, presented in Section [5](#sec5){reference-type="ref" reference="sec5"}, estimates on the time needed to reach a local minimum of $U$, the subject of Section [6](#sec6){reference-type="ref" reference="sec6"}, and test functions which approximate the equilibrium potential between wells, introduced in Section [7](#sec10){reference-type="ref" reference="sec10"}. In Section [8](#sec4){reference-type="ref" reference="sec4"}, we add the last piece of the proof, extending the results of Section [4](#sec2){reference-type="ref" reference="sec2"} by showing that the solution of the resolvent equation is actually asymptotically constant in the domain of attraction of a local minimum. In Section [9](#sec9){reference-type="ref" reference="sec9"} we prove Theorem [2](#t01){reference-type="ref" reference="t01"}, and Theorem [1](#t00){reference-type="ref" reference="t00"} in Section [10](#sec3){reference-type="ref" reference="sec3"}. In the appendices, we present some results needed in the proofs. # Model and Main Results {#sec1} Fix a function $U\colon {\mathbb R}^d \to {\mathbb R}$ in $C^{3}(\mathbb{R}^{d})$ admitting only a finite number of critical points, all non-degenerate (hence $U$ is a Morse function, cf. [@Nic18 Definition 1.7]).
Assume that $$\label{26} \begin{gathered} \lim_{n\to\infty}\inf_{|\bm{x}|\geq n}\frac{U(\bm{x})}{|\bm{x}|} \,=\,\infty\;,\quad \lim_{|\bm{x}|\to\infty}\frac{\bm{x}}{|\bm{x}|} \cdot\nabla U(\bm{x})\,=\,\infty\;, \\ \lim_{|\bm{x}|\to\infty} \big\{ \, |\nabla U(\bm{x})| \,-\, 2\, \Delta U(\bm{x}) \, \big\} \,=\,\infty\;. \end{gathered}$$ In this formula and below, $\color{blue} |\boldsymbol{x}|$ represents the Euclidean norm of ${\boldsymbol x}\in \mathbb{R}^{d}$. Suppose, without loss of generality, that $\min_{{\boldsymbol x}\in {\mathbb R}^d} U({\boldsymbol x}) = 0$. Consider a vector field ${\boldsymbol \ell}\colon {\mathbb R}^d \to {\mathbb R}^d$ in $C^2({\mathbb R}^d)$, assumed to be divergence free and orthogonal to the gradient of $U$ as stated in [\[27\]](#27){reference-type="eqref" reference="27"}. ## Time-scale {#time-scale .unnumbered} Denote by $\color{blue} \mathcal{M}_{0}$ the set of local minima of $U$. For each pair $\boldsymbol{m}' \neq \boldsymbol{m}''\in\mathcal{M}_{0}$, denote by $\Theta(\boldsymbol{m}',\,\boldsymbol{m}'')$ the communication height between $\boldsymbol{m}'$ and $\boldsymbol{m}''$: $$\label{Theta} {\color{blue} \Theta(\boldsymbol{m}',\,\boldsymbol{m}'') } \;:=\; \inf_{\substack{\boldsymbol{z}:[0,\,1]\rightarrow\mathbb{R}^{d}}} \max_{t\in[0,\,1]}U(\boldsymbol{z}(t))\;,$$ where the infimum is carried over all continuous paths $\boldsymbol{z}(\cdot)$ such that $\boldsymbol{z}(0)=\boldsymbol{m}'$ and $\boldsymbol{z}(1)=\boldsymbol{m}''$. Clearly, $\Theta(\boldsymbol{m}',\,\boldsymbol{m}'') = \Theta(\boldsymbol{m}'',\,\boldsymbol{m}')$.
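In one dimension the infimum over paths is attained by the segment joining the two points, so $\Theta$ can be computed on a grid by a running maximum. A small sketch (the tilted double-well potential is an assumed example, not from the paper):

```python
import numpy as np

# Assumed 1d example (not from the paper): on a fine grid, the optimal
# path between two points is the segment joining them, so the
# communication height Theta reduces to a running maximum of U.
xs = np.linspace(-2.0, 2.0, 4001)
U = 0.5 * xs**4 - xs**2 + 0.2 * xs        # tilted double-well potential
U -= U.min()                              # normalise so that min U = 0

mins = [i for i in range(1, len(xs) - 1)  # grid local minima
        if U[i] < U[i - 1] and U[i] < U[i + 1]]

def Theta(i, j):
    lo, hi = sorted((i, j))
    return U[lo:hi + 1].max()             # max of U along the segment

i, j = mins                               # exactly two minima here
assert Theta(i, j) == Theta(j, i)         # symmetry, as noted above
# depth of the shallower well: the smaller of Theta minus U at the minima
d1 = min(Theta(i, j) - U[i], Theta(i, j) - U[j])
print(round(d1, 3))                       # ~ 0.315 for this potential
```

The quantity `d1` computed here is the depth of the shallowest well, anticipating the depths $\Gamma(\boldsymbol{m})$ and the scale $\theta^{(1)}_\epsilon = e^{d^{(1)}/\epsilon}$ defined next.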
Denote by $\Gamma({\boldsymbol m})$ the depth of the local minimum ${\boldsymbol m}\in{\mathcal M}_0$: $$\label{53} {\color{blue} \Gamma({\boldsymbol m})} \, :=\, \min_{{\boldsymbol m}' \neq {\boldsymbol m}} \Theta({\boldsymbol m}, {\boldsymbol m}') \,-\, U(\boldsymbol{m})\;.$$ Denote by $d^{(1)}$ the depth of the shallowest well, and by $\theta^{(1)}_\epsilon$ the corresponding time-scale: $${\color{blue} d^{(1)} } \;:=\; \min_{{\boldsymbol m}\in {\mathcal M}_0} \Gamma({\boldsymbol m})\;, \quad {\color{blue} \theta^{(1)}_\epsilon } \; :=\; e^{d^{(1)}/\epsilon}\;.$$ ## Gates {#gates .unnumbered} Denote by $\color{blue} \Upsilon ({\boldsymbol m})$ the set of gates of ${\boldsymbol m}\in{\mathcal M}_0$. This is the set of points ${\boldsymbol x}\in {\mathbb R}^d$ for which there exist ${\boldsymbol m}'\in {\mathcal M}_0$, ${\boldsymbol m}'\neq {\boldsymbol m}$, and a continuous path $z\colon [0,1]\to {\mathbb R}^d$ such that $z(0) ={\boldsymbol m}$, $z(1)={\boldsymbol m}'$, $z(1/2) = {\boldsymbol x}$ and $U(z(t)) < U({\boldsymbol x}) = U({\boldsymbol m}) + \Gamma ({\boldsymbol m})$ for all $t\in [0,1]$, $t\neq 1/2$. Mind that there might be more than one local minimum ${\boldsymbol m}'$ for the same gate ${\boldsymbol x}\in \Upsilon ({\boldsymbol m})$: there might exist ${\boldsymbol m}_1\neq {\boldsymbol m}_2$, both different from ${\boldsymbol m}$, ${\boldsymbol x}\in\Upsilon ({\boldsymbol m})$, and continuous paths $z_i\colon [0,1]\to {\mathbb R}^d$, $i=1$, $2$, such that $z_i(0) ={\boldsymbol m}$, $z_i(1)={\boldsymbol m}_i$, $z_i(1/2) = {\boldsymbol x}$ and $U(z_i(t)) < U({\boldsymbol x}) = U({\boldsymbol m}) + \Gamma ({\boldsymbol m})$ for all $t\in [0,1]$, $t\neq 1/2$. Mind also that in the definition of gate, we require ${\boldsymbol m}'$ to be different from ${\boldsymbol m}$.
In this way, we exclude from the set of gates points ${\boldsymbol x}$ for which there exists a continuous path $z\colon [0,1]\to {\mathbb R}^d$ such that $z(0) ={\boldsymbol m}$, $z(1)={\boldsymbol m}$, $z(1/2) = {\boldsymbol x}$ and $U(z(t)) < U({\boldsymbol x}) = U({\boldsymbol m}) + \Gamma ({\boldsymbol m})$ for all $t\in [0,1]$, $t\neq 1/2$. Recall that $\color{blue} {\boldsymbol b} \,=\, -\, (\nabla U \,+\, {\boldsymbol \ell})$ and that a heteroclinic orbit $\phi$ from ${\boldsymbol x}$ to ${\boldsymbol y}\in {\mathbb R}^d$ is a solution $\phi\colon {\mathbb R} \to {\mathbb R}^d$ of the ODE $$\label{31} \dot {{\boldsymbol x}} (t) \,=\, {\boldsymbol b}({\boldsymbol x} (t)) \;,$$ such that $$\lim_{t\to - \infty} \phi(t) \,=\, {\boldsymbol x}\,,\quad \lim_{t\to \infty} \phi(t) \,=\, {\boldsymbol y} \;.$$ We represent this relation by $\color{blue} {\boldsymbol x} \curvearrowright {\boldsymbol y}$. In other words, ${\boldsymbol x} \curvearrowright {\boldsymbol y}$ indicates the existence of a heteroclinic orbit from ${\boldsymbol x}$ to ${\boldsymbol y}$. 
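Heteroclinic relations of this kind can be checked numerically by integrating the ODE above from a point slightly off a saddle. A minimal sketch, assuming the concrete potential $U(x,y)=(x^2-1)^2/4+y^2/2$ with $\boldsymbol{\ell}=0$, so that the saddle is $\boldsymbol{\sigma}=(0,0)$ and the minima are $(\pm 1,0)$:

```python
import numpy as np

# Numerical sketch (assumed potential, l = 0): checking heteroclinic
# relations sigma -> m by integrating dx/dt = b(x) from a point slightly
# off the saddle.  Here U(x, y) = (x^2 - 1)^2 / 4 + y^2 / 2, so
# b = -grad U, the saddle is sigma = (0, 0), the minima are (+-1, 0).
def b(p):
    x, y = p
    return np.array([x - x**3, -y])

def flow(p, dt=1e-2, steps=5000):   # explicit Euler integration
    p = np.array(p, dtype=float)
    for _ in range(steps):
        p = p + dt * b(p)
    return p

# perturbing sigma to either side of its unstable direction selects the
# two orbits: one converging to (1, 0), the other to (-1, 0)
m_plus = flow([1e-3, 0.2])
m_minus = flow([-1e-3, 0.2])
print(np.round(m_plus, 3), np.round(m_minus, 3))
```

The two trajectories realize $\boldsymbol{\sigma}\curvearrowright(1,0)$ and $\boldsymbol{\sigma}\curvearrowright(-1,0)$, which is precisely the configuration required of gates in the assumption below.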
We assume also that for all ${\boldsymbol m}\in {\mathcal M}_0$ such that $\Gamma({\boldsymbol m}) = d^{(1)}$, and ${\boldsymbol \sigma }\in \Upsilon ({\boldsymbol m})$, there exists ${\boldsymbol m}'\in {\mathcal M}_0$, ${\boldsymbol m}' \neq {\boldsymbol m}$, such that $$\label{48} {\boldsymbol \sigma }\curvearrowright {\boldsymbol m} \;\;\text{and}\;\; {\boldsymbol \sigma} \curvearrowright {\boldsymbol m}'\;.$$ By Proposition [ 45](#l05a){reference-type="ref" reference="l05a"}, any gate ${\boldsymbol x} \in {\color{blue} \Upsilon := \cup_{{\boldsymbol m}\in {\mathcal M}_0} \Upsilon ({\boldsymbol m})}$ belongs to the set of critical points of $U$, denoted by $\color{blue} {\mathcal C}$: $${\color{blue} {\mathcal C}} \;:=\; \{\, {\boldsymbol x} \in {\mathbb R}^d : (\nabla U)({\boldsymbol x}) = 0\,\}\;.$$ By [@LS-22 Theorem 2.1], the divergence-free field ${\boldsymbol \ell}$ vanishes at the critical points of $U$: ${\boldsymbol \ell}({\boldsymbol x}) =0$ for all ${\boldsymbol x}\in {\mathcal C}$. Denote by $\color{blue} (\nabla^{2}U)( {\boldsymbol x})$ the Hessian of $U$ at ${\boldsymbol x}$. Since $U$ is a Morse function, for all ${\boldsymbol \sigma }\in \Upsilon$, $$\label{47} \text{ $(\nabla^{2}U)( {\boldsymbol \sigma})$ has only one negative eigenvalue, all the others being strictly positive} \;.$$ Indeed, by definition, ${\boldsymbol \sigma}$ can not be a local minimum. On the other hand, assume that ${\boldsymbol \sigma}$ is a gate between ${\boldsymbol m}$ and ${\boldsymbol m}'$. If the number of negative eigenvalues is greater than $1$, the set $\{{\boldsymbol x} : U({\boldsymbol x}) < U({\boldsymbol \sigma})\}$ would be locally connected, and there would be a continuous path from ${\boldsymbol m}$ to ${\boldsymbol m}'$ staying strictly below $U({\boldsymbol \sigma})$, which is a contradiction. 
Denote by ${\mathscr V}({\boldsymbol m})$ the set of points ${\boldsymbol m}' \in{\mathcal M}_0$, ${\boldsymbol m}'\neq {\boldsymbol m}$, for which [\[48\]](#48){reference-type="eqref" reference="48"} holds for some ${\boldsymbol \sigma }\in \Upsilon ({\boldsymbol m})$. Hence, ${\mathscr V}({\boldsymbol m})$ is the set of local minima ${\boldsymbol m}' \in {\mathcal M}_0$, ${\boldsymbol m}'\neq {\boldsymbol m}$, for which there exist a critical point ${\boldsymbol \sigma }\in \Upsilon ({\boldsymbol m})$ and heteroclinic orbits from ${\boldsymbol \sigma}$ to ${\boldsymbol m}$ and ${\boldsymbol \sigma}$ to ${\boldsymbol m}'$: $$\begin{gathered} {\color{blue} {\mathscr V}({\boldsymbol m}) } \; :=\; \big \{ \, {\boldsymbol m}' \in {\mathcal M}_0 \setminus \{{\boldsymbol m}\}: \exists \, {\boldsymbol \sigma }\in \Upsilon ({\boldsymbol m}) \;\; \text{such that}\;\; {\boldsymbol \sigma}\curvearrowright {\boldsymbol m} \;,\;\; {\boldsymbol \sigma}\curvearrowright {\boldsymbol m}' \, \big\}\;.\end{gathered}$$ Elements of ${\mathscr V}({\boldsymbol m})$ are called neighbors of the local minimum ${\boldsymbol m}$ of $U$. Denote by ${\mathcal S} ({\boldsymbol m} , {\boldsymbol m}')$, ${\boldsymbol m}' \neq {\boldsymbol m}$, the set of critical points which separate ${\boldsymbol m}$ from ${\boldsymbol m}'$: $$\label{38a} {\color{blue} {\mathcal S} ({\boldsymbol m} , {\boldsymbol m}')} \; :=\; \big \{ \, {\boldsymbol \sigma} \in \Upsilon ({\boldsymbol m}) : {\boldsymbol \sigma}\curvearrowright {\boldsymbol m} \;,\;\; {\boldsymbol \sigma}\curvearrowright {\boldsymbol m}' \, \big\}\;.$$ ## Reduced model {#reduced-model .unnumbered} Denote by $\color{blue} (D {\boldsymbol \ell})({\boldsymbol x})$ the Jacobian of ${\boldsymbol \ell}$ at ${\boldsymbol x}$. By [\[47\]](#47){reference-type="eqref" reference="47"}, $(\nabla^{2}U)( {\boldsymbol \sigma})$, ${\boldsymbol \sigma}\in\Upsilon$, has only one negative eigenvalue.
By [@LS-22 Lemma 3.3], $(\nabla^{2}U)( {\boldsymbol \sigma}) + (D {\boldsymbol \ell})({\boldsymbol \sigma})$ also has exactly one negative eigenvalue, denoted by $\color{blue} -\mu({\boldsymbol \sigma})<0$. For ${\boldsymbol m} \in {\mathcal M}_0$, ${\boldsymbol \sigma }\in \Upsilon$, let the weights $\nu ({\boldsymbol m})$, $\omega ({\boldsymbol \sigma})$ be given by $$\label{eq:nu} {\color{blue} \nu(\boldsymbol{m}) } \;:=\; \frac{1}{\sqrt{\det(\nabla^{2}U)(\boldsymbol{m})}} \;, \quad {\color{blue} \omega(\boldsymbol{\sigma})} \;:=\; \frac{\mu (\boldsymbol{\sigma})} {2\pi\sqrt{-\det\nabla^{2}U(\boldsymbol{\sigma})}} \; \cdot$$ Let $\omega ({\boldsymbol m}, {\boldsymbol m}')$, ${\boldsymbol m} \neq {\boldsymbol m}' \in {\mathcal M}_0$, be the weight given by $$\label{39a} {\color{blue} \omega ({\boldsymbol m}, {\boldsymbol m}') } \; :=\; \sum_{{\boldsymbol \sigma }\in {\mathcal S} ({\boldsymbol m} , {\boldsymbol m}')} \omega ({\boldsymbol \sigma})\;.$$ Note that neither ${\mathcal S}(\,\cdot\,,\, \cdot\,)$ nor $\omega (\,\cdot\,,\, \cdot\,)$ is symmetric in its arguments. To include the depth of the local minimum ${\boldsymbol m}$ in the definition of the weight $\omega ({\boldsymbol m}, {\boldsymbol m}')$, set $$\label{40} {\color{blue} \omega_1 ({\boldsymbol m}, {\boldsymbol m}') } \; :=\; \omega ({\boldsymbol m}, {\boldsymbol m}') \, {\boldsymbol 1} \{ \Gamma ({\boldsymbol m}) = d^{(1)} \, \}\;.$$ Denote by ${\mathfrak L}_1$ the generator of the ${\mathcal M}_0$-valued, continuous-time Markov chain given by $$\label{28} ({\mathfrak L}_1 {\boldsymbol h})({\boldsymbol m}) \;=\; \frac{1}{\nu({\boldsymbol m})}\, \sum_{{\boldsymbol m}'\in {\mathcal M}_0} \omega_1 ({\boldsymbol m}, {\boldsymbol m}') \, [\, {\boldsymbol h} ({\boldsymbol m}') \,-\, {\boldsymbol h} ({\boldsymbol m}) \,] \;.$$ ** 1**. *Assume that hypotheses [\[26\]](#26){reference-type="eqref" reference="26"}, [\[48\]](#48){reference-type="eqref" reference="48"} are in force.
Fix a bounded and continuous function $u_0\colon {\mathbb R}^d \to {\mathbb R}$. Denote by $u_\epsilon$ the solution of the parabolic equation [\[44\]](#44){reference-type="eqref" reference="44"}. Then, [\[43\]](#43){reference-type="eqref" reference="43"} and [\[51\]](#51){reference-type="eqref" reference="51"} hold for all $t>0$, where $p_t(\cdot,\cdot)$ is the semigroup associated to the generator ${\mathfrak L}_1$.* ## Resolvent equation {#resolvent-equation .unnumbered} The proof of Theorem [ 1](#t00){reference-type="ref" reference="t00"} is based on properties of the resolvent equation presented in this subsection. Denote by $\color{blue} B({\boldsymbol x}, r)$, ${\boldsymbol x}\in {\mathbb R}^d$, $r>0$, the open ball of radius $r$ centered at ${\boldsymbol x}$. Let $\color{blue} \mathcal{W}^{r}(\bm{m})$, $\bm{m}\in\mathcal{M}_{0}$, $r>0$, be the connected component of the set $\{\boldsymbol{x}\in\mathbb{R}^{d}:U(\boldsymbol{x})\le U(\bm{m})+ r \}$ containing $\bm{m}$. Fix ${\boldsymbol m}\in{\mathcal M}_0$. All constants $r_i$ below depend on ${\boldsymbol m}$ and ${\boldsymbol b} (\cdot)$, though this does not appear in the notation. Equation [\[eq:cond1\]](#eq:cond1){reference-type="eqref" reference="eq:cond1"} introduces a positive constant $r_5>0$. Choose $r_4$ small enough for [\[eq:condr_4\]](#eq:condr_4){reference-type="eqref" reference="eq:condr_4"} to hold with $r_3=r_5$. By Proposition [ 63](#pap4){reference-type="ref" reference="pap4"} and conditions (1), (2) in Section [3](#sec-ap3){reference-type="ref" reference="sec-ap3"}, $B({\boldsymbol m}, r_5)$ does not contain critical points of $U$ besides ${\boldsymbol m}$. 
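To make the limiting object concrete, the weights $\nu$, $\omega$ and the generator ${\mathfrak L}_1$ defined above can be assembled numerically for an illustrative one-dimensional double well with ${\boldsymbol \ell}=0$. In the gradient case $\mu({\boldsymbol \sigma})$ reduces to $-U''({\boldsymbol \sigma})$; the potential and all numbers below are assumptions of this sketch, not part of the proof.

```python
import numpy as np

# Illustrative 1D double well (an assumption of the sketch): U(x) = (x^2 - 1)^2 / 4.
# Minima at x = +-1 with U''(+-1) = 2; single gate at x = 0 with U''(0) = -1.
def d2U(x):
    return 3.0 * x**2 - 1.0

minima = [-1.0, 1.0]
sigma = 0.0

# Weights of (eq:nu): in dimension d = 1 the determinant is just U''.
nu = {m: 1.0 / np.sqrt(d2U(m)) for m in minima}
mu_sigma = -d2U(sigma)    # gradient case: mu(sigma) = -U''(sigma) > 0
omega_sigma = mu_sigma / (2.0 * np.pi * np.sqrt(-d2U(sigma)))

# Generator (28): the two minima are separated by the single gate sigma,
# so omega(m, m') = omega(sigma) in both directions.
L1 = np.zeros((2, 2))
for i, m in enumerate(minima):
    j = 1 - i
    L1[i, j] = omega_sigma / nu[m]
    L1[i, i] = -L1[i, j]

print(L1)  # rows sum to zero, as required of a Markov generator
```

Here both jump rates equal $\sqrt{2}/(2\pi)$, the familiar Eyring--Kramers prefactor for this symmetric well.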
Choose $r_{0}>0$ small enough so that for all ${\boldsymbol m}\in {\mathcal M}_0$, - $\overline{\mathcal{W}^{2r_{0}}(\bm{m})} \setminus\{\boldsymbol{m}\}$ does not contain critical points of $U$; - ${\mathcal W}^{2r_0}({\boldsymbol m})$ is contained in the domain of attraction of ${\boldsymbol m}$ for the ODE [\[31\]](#31){reference-type="eqref" reference="31"}; - ${\boldsymbol b}({\boldsymbol x}) \cdot {\boldsymbol n}({\boldsymbol x}) <0$ for all ${\boldsymbol x}\in \partial {\mathcal W}^{2r_0}({\boldsymbol m})$, where ${\boldsymbol n}(\cdot)$ is the exterior normal of the boundary of ${\mathcal W}^{2r_0}({\boldsymbol m})$; - ${\mathcal W}^{3r_0}({\boldsymbol m}) \subset B({\boldsymbol m}, r_5)$; - ${\mathcal W}^{2r_0}({\boldsymbol m}) \subset {\mathcal D}_{r_4} ({\boldsymbol m})$. Set $$\label{30} {\color{blue} \mathcal{E}(\boldsymbol{m}) } :=\mathcal{W}^{r_{0}}(\bm{m})\;, \quad \boldsymbol{m}\in\mathcal{M}_{0}\;.$$ For $\lambda>0$, ${\boldsymbol g}\colon {\mathcal M}_0 \rightarrow\mathbb{R}$, denote by $\phi_{\epsilon} = \phi_{\epsilon}^{\lambda, {\boldsymbol g}}$ the unique solution of the resolvent equation $$\label{e_res} (\lambda-\theta_{\epsilon}^{(1)} \mathscr{L}_{\epsilon})\, \phi_{\epsilon} \;=\; G \;:=\; \sum_{{\boldsymbol m}\in {\mathcal M}_0} {\boldsymbol g} ({\boldsymbol m})\, \chi_{_{\mathcal{E}({\boldsymbol m})}} \;,$$ where $\color{blue} \chi_{{\mathcal A}}$, ${\mathcal A}\subset {\mathbb R}^d$, represents the indicator function of the set ${\mathcal A}$. The function on the right-hand side vanishes at $(\cup_{{\boldsymbol m}\in{\mathcal M}_0} {\mathcal E}({\boldsymbol m}))^c$ and is constant on each well ${\mathcal E}({\boldsymbol m}')$. The second main result of this article reads as follows. ** 2**.
*For all $\lambda>0$ and ${\boldsymbol g}\colon {\mathcal M}_0 \rightarrow\mathbb{R}$, $$\lim_{\epsilon\rightarrow0}\, \max_{{\boldsymbol m}\in {\mathcal M}_0} \, \sup_{x\in {\mathcal E}({\boldsymbol m})} \vert\, \phi_{\epsilon} (x) - {\boldsymbol f}({\boldsymbol m}) \, \vert \;=\; 0\;,$$ where ${\boldsymbol f}$ is the solution of the reduced resolvent equation $$(\lambda - {\mathfrak L}_1)\, {\boldsymbol f} \;=\; {\boldsymbol g} \;,$$ and ${\mathfrak L}_1$ is the generator introduced in [\[28\]](#28){reference-type="eqref" reference="28"}.* ## Comments and Remarks {#comments-and-remarks .unnumbered} The proofs of Theorems [ 1](#t00){reference-type="ref" reference="t00"} and [ 2](#t01){reference-type="ref" reference="t01"} are entirely based on the metastable behavior of the stochastic differential equation $$\label{sde} d\boldsymbol{x}_{\epsilon}(t) \,=\, {\boldsymbol b}({\boldsymbol x}_\epsilon (t)) \, dt \,+\, \sqrt{2\epsilon}\,dW_{t}\;,$$ where $\epsilon>0$ denotes a small parameter corresponding to the temperature of the system, and $W_t$ a $d$-dimensional Brownian motion. The proof of Theorem [ 2](#t01){reference-type="ref" reference="t01"} is divided into two parts. We first show in Section [4](#sec2){reference-type="ref" reference="sec2"} that $\phi_{\epsilon}$ is asymptotically constant on each well ${\mathcal E}({\boldsymbol m})$. Then, we prove that the average of the solution $\phi_{\epsilon}$ on a well ${\mathcal E}({\boldsymbol m})$ converges to ${\boldsymbol f}$. In Section [10](#sec3){reference-type="ref" reference="sec3"}, we deduce from Theorem [ 2](#t01){reference-type="ref" reference="t01"}, with ideas introduced in [@LLM], the convergence of the finite-dimensional distributions of the process $\boldsymbol{x}_{\epsilon}(\cdot)$. A similar result has been obtained by Sugiura in [@su] with different ideas in the case ${\boldsymbol \ell }=0$.
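The structure ${\boldsymbol b}=-\nabla U-{\boldsymbol \ell}$ with $\nabla U\cdot{\boldsymbol \ell}=\nabla\cdot{\boldsymbol \ell}=0$ is easy to realize numerically: in the plane, ${\boldsymbol \ell}=J\nabla U$, with $J$ the rotation by $\pi/2$, is automatically orthogonal to $\nabla U$ and divergence free. The Euler--Maruyama sketch below (toy potential and all parameters are assumptions of the example, not part of the proof) shows a path of the stochastic differential equation above remaining in the well of its starting minimum at small temperature.

```python
import numpy as np

def grad_U(p):
    x, y = p
    # U(x, y) = (x^2 - 1)^2 / 4 + y^2 / 2  (illustrative double well)
    return np.array([x**3 - x, y])

def ell(p):
    gx, gy = grad_U(p)
    # J grad U: orthogonal to grad U and divergence free by construction
    return np.array([-gy, gx])

def drift(p):
    return -grad_U(p) - ell(p)

rng = np.random.default_rng(0)
eps, dt, n_steps = 0.01, 1e-3, 10_000     # temperature eps, step dt, horizon T = 10
p = np.array([1.0, 0.0])                  # start at the local minimum (1, 0)
for _ in range(n_steps):
    p = p + drift(p) * dt + np.sqrt(2.0 * eps * dt) * rng.standard_normal(2)

print(p)  # at this temperature the path has not crossed the gate at the origin
```

Crossing the barrier of height $1/4$ takes a time of order $e^{1/4\epsilon}$, far beyond the simulated horizon, which is the metastability phenomenon the reduced chain captures.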
# Mixing time of diffusions {#sec-ap3} The main result of this section, Theorem [ 3](#t_main2){reference-type="ref" reference="t_main2"}, provides an estimate on the mixing time of a diffusion on ${\mathbb R}^d$. The proof of this result can be skipped in a first reading as the ideas and techniques used to derive the bound on the mixing time will not be used in the next sections. Fix a function $U_0\colon {\mathbb R}^d \to {\mathbb R}$ of class $C^{3}$ and a vector field $\boldsymbol{\ell}_0 \colon {\mathbb R}^d \to {\mathbb R}^d$ of class $C^{2}$ such that $$\label{eq:orth} (\nabla U_0) ({\boldsymbol x}) \cdot\boldsymbol{\ell}_0 ({\boldsymbol x}) \,=\, (\nabla\cdot\boldsymbol{\ell}_0) ({\boldsymbol x}) \, = \, 0 \quad \text{for all} \; {\boldsymbol x} \in {\mathbb R}^d\;.$$ Suppose that $U_0$ has a local minimum at ${\boldsymbol x} = \boldsymbol{0}$ and that it has no other critical point in a neighborhood of the origin. Furthermore, we assume, for convenience, that $U_0(\boldsymbol{0})=0$. Consider a vector field ${\boldsymbol b}_0: {\mathbb R}^d \to {\mathbb R}^d$ of class $C^1$ such that 1. ${\boldsymbol b}_0$ vanishes only at the origin, which is a stable equilibrium point for the dynamical system $$\label{eq:x_0} \dot {\boldsymbol{y}} (t) \,=\, \boldsymbol{b}_0(\boldsymbol{y}(t)) \;.$$ 2. There exists $r_3>0$ such that $$\boldsymbol{b}_0(\boldsymbol{x}) \,=\, -\, (\nabla U_0) (\boldsymbol{x}) \,-\, \boldsymbol{\ell}_0(\boldsymbol{x}) \;, \quad \boldsymbol{x}\in B(0,r_3)\;.$$ 3. There exist $R>0$ and a finite constant $C_{1}$ such that $$\label{growth} |\boldsymbol{b}_0 (\boldsymbol{x})|\le C_{1}|\, \boldsymbol{x}|\;\;\;\;\text{and}\;\;\;\; \left\Vert D\boldsymbol{b}_0(\boldsymbol{x})\right\Vert \le C_{1}\, |\boldsymbol{x}|$$ for all $|\boldsymbol{x}|>R$, where the matrix norm is defined as $$\Vert\mathbb{M}\Vert=\sup_{|\boldsymbol{y}|=1} |\mathbb{M}\boldsymbol{y}|\;.$$ 4.
Let $\mathbb{H}_0=(\nabla^{2}U_0)(\boldsymbol{0})$ and $\mathbb{L}_0= (D\boldsymbol{\ell}_0) (\boldsymbol{0})$. Assume that $$-\, \left\langle \boldsymbol{b}_0(\boldsymbol{x}),\, \mathbb{H}_0 \boldsymbol{x}\right\rangle \,\ge\, \frac{1}{2}\, |\mathbb{H}_0\boldsymbol{x}|^{2}\;\;\;\; \text{for all }\boldsymbol{x}\in\mathbb{R}^{d}\;, \label{eq:contraction}$$ where $\langle\,\cdot\,,\,\cdot\,\rangle$ represents the scalar product in ${\mathbb R}^d$. The main result of this section requires some notation. Let $$\mathbb{A}(\boldsymbol{x}) \,:=\, ( D\boldsymbol{b}_0) (\boldsymbol{x})\;, \quad \mathbb{A} \,:=\, \mathbb{A}(\boldsymbol{0}) \;, \quad \text{so that}\quad \mathbb{A} \,=\, -\, (\mathbb{H}_0 +\mathbb{L}_0)\;. \label{eq:A(x)}$$ By [@LS-22 Lemmata 4.5 and 4.1], all the eigenvalues of the matrix ${\mathbb A}$ have negative real parts. Therefore, by [@LT85 Theorems 2 and 3, p.414], there exists a positive definite matrix $\mathbb{K}$ such that $$\mathbb{A}^{\dagger}\mathbb{K}+\mathbb{K}\mathbb{A} \,= \, -\, \mathbb{I}\;, \label{eq:ricatti1}$$ where ${\mathbb I}$ is the identity. Let $\mathcal{D}_{r}\subset\mathbb{R}^{d}$, $r>0$, be the set given by $${\color{blue} \mathcal{D}_{r}} \;:=\; \{\boldsymbol{x}\in\mathbb{R}^{d}: \left\langle \boldsymbol{x},\,\mathbb{H}_0\, \boldsymbol{x}\right\rangle \le r^{2}\}\;. \label{eq:D_r}$$ By [\[eq:ricatti1\]](#eq:ricatti1){reference-type="eqref" reference="eq:ricatti1"}, there exists $r'_4>0$ such that $$\left\Vert \, (\mathbb{A}(\boldsymbol{x})-\mathbb{A})^{\dagger} \mathbb{K}+\mathbb{K}(\mathbb{A}(\boldsymbol{x}) -\mathbb{A})\right\Vert \leq\frac{1}{2} \label{eq:A(x)-A}$$ for all $\boldsymbol{x}\in B(0, r'_4)$. By [\[eq:D_r\]](#eq:D_r){reference-type="eqref" reference="eq:D_r"}, $\mathcal{D}_{2r_4} \subset B(0, r'_4)$ for some $r_4>0$. Take $r_4$ small enough so that $$\mathcal{D}_{2r_4} \,\subset\, B(0, r_3) \;, \label{eq:condr_4}$$ where $r_3$ has been introduced in condition (2) above.
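The matrix $\mathbb{K}$ can be computed by vectorizing the Lyapunov equation $\mathbb{A}^{\dagger}\mathbb{K}+\mathbb{K}\mathbb{A}=-\mathbb{I}$. The sketch below (the $2\times 2$ matrices $\mathbb{H}_0$, $\mathbb{L}_0$ are assumptions of the example) does this for a non-symmetric $\mathbb{A}=-(\mathbb{H}_0+\mathbb{L}_0)$ whose eigenvalues have negative real parts, and checks that the resulting $\mathbb{K}$ is symmetric positive definite.

```python
import numpy as np

# Illustrative choice (assumption of the example): H0 symmetric positive definite,
# L0 antisymmetric, A = -(H0 + L0); eigenvalues of A have real part -3/2.
H0 = np.array([[2.0, 0.0], [0.0, 1.0]])
L0 = np.array([[0.0, 1.0], [-1.0, 0.0]])
A = -(H0 + L0)

d = A.shape[0]
I = np.eye(d)
# Row-major vectorization of A^T K + K A = -I:
# vec(A^T K) = (A^T kron I) vec(K),  vec(K A) = (I kron A^T) vec(K).
M = np.kron(A.T, I) + np.kron(I, A.T)
K = np.linalg.solve(M, (-I).reshape(-1)).reshape(d, d)
K = 0.5 * (K + K.T)               # symmetrize away round-off

residual = A.T @ K + K @ A + I    # should vanish
print(np.linalg.eigvalsh(K))      # strictly positive: K is positive definite
```

This is the quantitative content of the cited stability theorems: negative real parts of the spectrum of $\mathbb{A}$ suffice, even when $\mathbb{A}$ itself is not negative definite.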
The main result of this section reads as follows. Denote by $\color{blue} d_{\textup{TV}}(\mu, \nu)$ the total variation distance between the probability measures $\mu$ and $\nu$. Let ${\boldsymbol y}_\epsilon (\cdot)$ be the diffusion given by $$d\boldsymbol{y}_{\epsilon}(t) \,=\, \boldsymbol{b}_0(\boldsymbol{y}_{\epsilon}(t))\, dt \,+\, \sqrt{2\epsilon}\,dW_{t}\;. \label{eq:x_eps}$$ The process ${\boldsymbol y}_\epsilon (\cdot)$ starting at $\boldsymbol{x}\in\mathbb{R}^{d}$ is represented by $\boldsymbol{y}_{\epsilon}(t;\boldsymbol{x})$. Let $t_{\epsilon}=\epsilon^{- \theta}$ for some $\theta\in (0,\,1/3)$. ** 3**. *Denote by $\pi_\epsilon$ the stationary state of the diffusion ${\boldsymbol y}_\epsilon (\cdot)$. Then, $$\lim_{\epsilon\rightarrow0} \sup_{\boldsymbol{x}\in\mathcal{D}_{r_4}} d_{\textup{TV}} \big(\, \boldsymbol{y}_{\epsilon}(t_{\epsilon};\boldsymbol{x}),\, \pi_\epsilon\, \big) \,= \, 0\;.$$* **Remark 4**. *The proof of this result is largely based on [@BJ; @LeeRamSeo]. Theorem [ 3](#t_main2){reference-type="ref" reference="t_main2"} follows from [@BJ Theorem 2.2] when ${\mathbb A}$ is negative definite. As mentioned above, all the eigenvalues of matrix ${\mathbb A}$ have negative real parts, but ${\mathbb A}$ might not be negative definite. The purpose of this section is to extend [@BJ Theorem 2.2] to this situation.* ## Proof of Theorem [ 3](#t_main2){reference-type="ref" reference="t_main2"} {#proof-of-theorem-t_main2 .unnumbered} The main idea of the proof is to approximate the difference $\boldsymbol{y}_{\epsilon}(t)-\boldsymbol{y}(t)$ by a Gaussian process.
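This approximation can be previewed numerically. In the one-dimensional sketch below (drift and all parameters are assumptions of the example), the diffusion $y_\epsilon$ and the Gaussian surrogate $z_\epsilon = y + \sqrt{2\epsilon}\,\xi$ are driven by the same Brownian increments, and their pathwise distance stays well below the noise scale $\sqrt{2\epsilon}$.

```python
import numpy as np

def b0(x):
    return -x - x**3          # illustrative drift; A(x) = b0'(x) = -1 - 3 x^2

def A(x):
    return -1.0 - 3.0 * x**2

rng = np.random.default_rng(1)
eps, dt, n_steps = 1e-4, 1e-3, 5_000
y_eps = y = 0.3               # common starting point x
xi = 0.0                      # xi(0) = 0
max_err = 0.0
for _ in range(n_steps):
    dW = np.sqrt(dt) * rng.standard_normal()
    y_eps = y_eps + b0(y_eps) * dt + np.sqrt(2.0 * eps) * dW  # the diffusion
    xi = xi + A(y) * xi * dt + dW                             # linearized along y(t)
    y = y + b0(y) * dt                                        # deterministic flow
    z_eps = y + np.sqrt(2.0 * eps) * xi                       # Gaussian surrogate
    max_err = max(max_err, abs(y_eps - z_eps))

print(max_err)   # well below the noise scale sqrt(2 * eps) ~ 1.4e-2
```

The coupling error is of higher order in $\epsilon$ than the fluctuations themselves, which is exactly what the fourth-moment estimate of Proposition 10 quantifies.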
Let $$\widehat{\boldsymbol{\xi}}(t) \,=\, \frac{1}{\sqrt{2\epsilon}}\left(\boldsymbol{y}_{\epsilon}(t) -\boldsymbol{y}(t)\right)\;.$$ By ([\[eq:x_eps\]](#eq:x_eps){reference-type="ref" reference="eq:x_eps"}) and ([\[eq:x_0\]](#eq:x_0){reference-type="ref" reference="eq:x_0"}), $$d\widehat{\boldsymbol{\xi}}(t) \,=\, \frac{1}{\sqrt{2\epsilon}} \big\{\, \boldsymbol{b}_0(\boldsymbol{y}_{\epsilon}(t)) -\boldsymbol{b}_0(\boldsymbol{y}(t))\, \big\}\, dt \,+\, dW_{t} \simeq (D\boldsymbol{b}_0) (\boldsymbol{y}(t)) \, \widehat{\boldsymbol{\xi}}(t)dt+dW_{t}\;.$$ Hence, it is natural to conjecture that $\widehat{\boldsymbol{\xi}}(t)\simeq\boldsymbol{\xi}(t)$ where $\boldsymbol{\xi}(t)$ is the Gaussian process defined by the SDE $$d\boldsymbol{\xi}(t)=\mathbb{A}(\boldsymbol{y}(t)) \, \boldsymbol{\xi}(t) \, dt \,+\, dW_{t} \;, \quad \boldsymbol{\xi}(0)=\boldsymbol{0} \;. \label{eq:xi}$$ Let $$\boldsymbol{z}_{\epsilon}(t)\, :=\, \boldsymbol{y}(t) \,+\, \sqrt{2\epsilon}\, \boldsymbol{\xi}(t)\;. \label{eq:z(t)}$$ By the previous discussion, we expect that $\boldsymbol{y}_{\epsilon}(t)\simeq\boldsymbol{z}_{\epsilon}(t)$. ** 5**. *There exists $r_4>0$ such that $$\lim_{\epsilon\rightarrow0}\sup_{x\in\mathcal{D}_{r_4}} d_{\mathrm{TV}}\left(\boldsymbol{y}_{\epsilon} (t_{\epsilon};\boldsymbol{x}),\,\boldsymbol{z}_{\epsilon} (t_{\epsilon};\boldsymbol{x})\right)=0\;. \label{eq:tvm1-0}$$* Denote by $\mathcal{N}(\boldsymbol{\mu},\,\Sigma)$ the normal distribution with mean $\boldsymbol{\mu}$ and covariance $\Sigma$. ** 6**. *There exists $r_4>0$ such that $$\lim_{\epsilon\rightarrow0}\sup_{x\in\mathcal{D}_{r_4}} d_{\mathrm{TV}}\left(\boldsymbol{z}_{\epsilon} (t_{\epsilon};\boldsymbol{x}),\, \mathcal{N}(0,2\epsilon\mathbb{H}^{-1})\right)=0\;.$$* *Proof.* The proof is presented in [@BJ Proposition 3.6], and relies on the fact that $\boldsymbol{z}_{\epsilon}(\cdot;\boldsymbol{x})$ is a Gaussian process. 
In particular, $\boldsymbol{z}_{\epsilon}(t;\boldsymbol{x})$ is a normal random variable whose mean and variance can be expressed explicitly. The assertion is thus reduced to a computation of the total variation distance between two normal random variables. Denote by $\lambda>0$ the smallest eigenvalue of ${\mathbb H}_0$. The proof starts at [@BJ display (3.22)], and requires the bound $$\left|\boldsymbol{y}(t)\right|^{2} \, \le\, |\boldsymbol{y}(0)|^{2}\, e^{-\lambda t} \;,$$ and [@BJ Lemma B.2]. In the present context, Lemma [ 7](#lem:stability){reference-type="ref" reference="lem:stability"} replaces the first estimate, and [@BJ Lemma B.2] holds because it only needs all the eigenvalues of $(D\boldsymbol{b}_0)(\boldsymbol{0})$ to have a negative real part, a property satisfied by our model as mentioned in Remark [Remark 4](#rem:posev){reference-type="ref" reference="rem:posev"}. ◻ *Proof of Theorem [ 3](#t_main2){reference-type="ref" reference="t_main2"}.* Denote by $p_{t}^{\epsilon}(\cdot,\,\cdot)$ the transition kernel of the process $\boldsymbol{y}_{\epsilon}(\cdot)$ and by $\pi_{\epsilon}(\cdot)$ the density of the measure $\pi_{\epsilon}(d{\boldsymbol x})$: $\pi_{\epsilon}(d\boldsymbol{x}) = \pi_{\epsilon}(\boldsymbol{x}) d\boldsymbol{x}$.
By definition, and since $\pi_\epsilon$ is the stationary state of the process ${\boldsymbol y}_\epsilon(\cdot)$, $$\begin{aligned} \mathrm{d}_{\mathrm{TV}}\left(\boldsymbol{y}_{\epsilon} (t_{\epsilon};\boldsymbol{x}),\,\pi_{\epsilon}\right) & =\frac{1}{2}\int_{\mathbb{R}^{d}}\left|p_{t_{\epsilon}}^{\epsilon} (\boldsymbol{x},\,\boldsymbol{y}) -\pi_{\epsilon}(\boldsymbol{y})\right|d\boldsymbol{y} \\ & =\frac{1}{2}\int_{\mathbb{R}^{d}}\Big|\, \int_{\mathbb{R}^{d}} \big[ \, p_{t_{\epsilon}}^{\epsilon} (\boldsymbol{x},\,\boldsymbol{y}) - p_{t_{\epsilon}}^{\epsilon}(\boldsymbol{x}',\,\boldsymbol{y})\,\big] \, \pi_{\epsilon}(\boldsymbol{x}')\, d\boldsymbol{x}'\, \Big| \, d\boldsymbol{y} \;.\end{aligned}$$ The previous expression is bounded by $$\frac{1}{2}\int_{\mathbb{R}^{d}} \int_{\mathbb{R}^{d}} \big| \, p_{t_{\epsilon}}^{\epsilon} (\boldsymbol{x},\,\boldsymbol{y}) - p_{t_{\epsilon}}^{\epsilon}(\boldsymbol{x}',\,\boldsymbol{y})\,\big| \, \pi_{\epsilon}(\boldsymbol{x}')\, d\boldsymbol{x}' \, d\boldsymbol{y} \;=\; \int_{\mathbb{R}^{d}} \mathrm{d}_{\mathrm{TV}} (\, \boldsymbol{y}_{\epsilon} (t_{\epsilon};\boldsymbol{x}),\, \boldsymbol{y}_{\epsilon}(t_{\epsilon};\boldsymbol{x}') \,) \, \pi_{\epsilon}(\boldsymbol{x}')\, d\boldsymbol{x}'\;.$$ By [\[apf1\]](#apf1){reference-type="eqref" reference="apf1"}, the right-hand side is less than or equal to $$\int_{\mathcal{D}_{r_4}}\mathrm{d}_{\mathrm{TV}} \left(\, \boldsymbol{y}_{\epsilon}(t_{\epsilon}; \boldsymbol{x}),\,\boldsymbol{y}_{\epsilon}(t_{\epsilon};\boldsymbol{x}') \, \right) \, \pi_{\epsilon}(\boldsymbol{x}') \,d\boldsymbol{x}' \,+\, C_0 \, \epsilon$$ for some finite constant $C_0$.
By Lemma [ 5](#prop:tvm1){reference-type="ref" reference="prop:tvm1"}, and the triangle inequality, $$\limsup_{\epsilon\rightarrow0} \sup_{\boldsymbol{x}\in\mathcal{D}_{r_4}} \int_{\mathcal{D}_{r_4}}\left|\, \mathrm{d}_{\mathrm{TV}} \left(\boldsymbol{y}_{\epsilon}(t_{\epsilon};\boldsymbol{x}),\, \boldsymbol{y}_{\epsilon}(t_{\epsilon};\boldsymbol{x}')\right) \,-\, \mathrm{d}_{\mathrm{TV}}\left(\boldsymbol{z}_{\epsilon} (t_{\epsilon};\boldsymbol{x}),\,\boldsymbol{z}_{\epsilon} (t_{\epsilon};\boldsymbol{x}')\right)\, \right|\, \pi_{\epsilon}(\boldsymbol{x}')\,d\boldsymbol{x}' \;=\; 0\;.$$ It remains to show that $$\limsup_{\epsilon\rightarrow0} \sup_{\boldsymbol{x}\in\mathcal{D}_{r_4}} \int_{\mathcal{D}_{r_4}} \mathrm{d}_{\mathrm{TV}}\left(\boldsymbol{z}_{\epsilon} (t_{\epsilon};\boldsymbol{x}),\,\boldsymbol{z}_{\epsilon} (t_{\epsilon};\boldsymbol{x}')\right)\, \pi_{\epsilon}(\boldsymbol{x}')\,d\boldsymbol{x}' \;=\; 0\;. \label{eq:dtt3}$$ Since the integrand is bounded by $$\mathrm{d}_{\mathrm{TV}}\left( \boldsymbol{z}_{\epsilon}(t_{\epsilon};\boldsymbol{x}),\, \mathcal{N}(0,2\epsilon\mathbb{H}^{-1})\right) \,+\, \mathrm{d}_{\mathrm{TV}}\left( \mathcal{N}(0,2\epsilon\mathbb{H}^{-1}),\, \boldsymbol{z}_{\epsilon}(t_{\epsilon};\boldsymbol{x}')\right)\;,$$ assertion ([\[eq:dtt3\]](#eq:dtt3){reference-type="ref" reference="eq:dtt3"}) follows from Lemma [ 6](#prop:tvm2){reference-type="ref" reference="prop:tvm2"}. ◻ ## Proof of Lemma [ 5](#prop:tvm1){reference-type="ref" reference="prop:tvm1"} {#sec52 .unnumbered} The proof is similar to the one presented in [@BJ Section 3.3], which is based on conditions (C) or (H) of that article.
These conditions, however, are only used in the proof of Lemma [ 5](#prop:tvm1){reference-type="ref" reference="prop:tvm1"} to derive the estimates presented in Lemmata [ 7](#lem:stability){reference-type="ref" reference="lem:stability"}, [ 9](#lem:mom_est){reference-type="ref" reference="lem:mom_est"}, [ 12](#lem:mom_Yt){reference-type="ref" reference="lem:mom_Yt"}, [ 13](#lem:mom_Yt_sup){reference-type="ref" reference="lem:mom_Yt_sup"}, and Proposition [ 10](#prop:main_apprx){reference-type="ref" reference="prop:main_apprx"}. Fix $\delta_{\epsilon}=\epsilon^{c}$ for some $c>0$. As $$\boldsymbol{y}_{\epsilon}(t_\epsilon;\boldsymbol{x}) \;=\; \boldsymbol{y}_{\epsilon}(\delta_{\epsilon}; \boldsymbol{y}_{\epsilon}(t_{\epsilon}-\delta_{\epsilon}; \boldsymbol{x}))\;, \quad \boldsymbol{z}_{\epsilon}(t_\epsilon;\boldsymbol{x}) \;=\;\boldsymbol{z}_{\epsilon}(\delta_{\epsilon}; \boldsymbol{z}_{\epsilon}(t_{\epsilon}- \delta_{\epsilon};\boldsymbol{x})) \;,$$ we have that $$\label{eq:dtvdec} \begin{aligned} d_{\mathrm{TV}}\left(\, \boldsymbol{y}_{\epsilon}(t_{\epsilon}; \boldsymbol{x}),\,\boldsymbol{z}_{\epsilon}(t_{\epsilon}; \boldsymbol{x}) \, \right) \; & \le \; d_{\mathrm{TV}}\left(\, \boldsymbol{y}_{\epsilon}(\delta_{\epsilon}; \boldsymbol{y}_{\epsilon}(t_{\epsilon}-\delta_{\epsilon}; \boldsymbol{x})) \, ,\,\boldsymbol{z}_{\epsilon}(\delta_{\epsilon}; \boldsymbol{y}_{\epsilon}(t_{\epsilon}-\delta_{\epsilon}; \boldsymbol{x})) \, \right) \\ & + \; d_{\mathrm{TV}}\left(\, \boldsymbol{z}_{\epsilon}(\delta_{\epsilon}; \boldsymbol{y}_{\epsilon}(t_{\epsilon}-\delta_{\epsilon}; \boldsymbol{x})) \, ,\,\boldsymbol{z}_{\epsilon}(\delta_{\epsilon}; \boldsymbol{z}_{\epsilon}(t_{\epsilon}-\delta_{\epsilon}; \boldsymbol{x})) \, \right)\;. \end{aligned}$$ The first term on the right-hand side is bounded in [@BJ Proposition 3.3] and the second one in [@BJ Proposition 3.4]. 
The proof relies on the estimate presented in Proposition [ 10](#prop:main_apprx){reference-type="ref" reference="prop:main_apprx"} below. We sketch the proof of these bounds. For the first one, fix $\boldsymbol{x}\in\mathcal{D}_{r_4}$ and denote by $\mathbb{P}_{Y}$ and $\mathbb{P}_{Z}$ the laws of the processes $(\boldsymbol{y}_{\epsilon}(s;\boldsymbol{x}))_{s\in[0,\,\delta_{\epsilon}]}$ and $(\boldsymbol{z}_{\epsilon}(s;\boldsymbol{x}))_{s\in[0,\,\delta_{\epsilon}]}$, respectively. By Pinsker's inequality, $$d_{\mathrm{TV}}\left(\boldsymbol{y}_{\epsilon} (\delta_{\epsilon};\boldsymbol{x}),\, \boldsymbol{z}_{\epsilon}(\delta_{\epsilon}; \boldsymbol{x})\right)^{2} \;\le\; -\,2\, \mathbb{E}_{\mathbb{P}_{Y}} \Big[\log\frac{\textup{d}\mathbb{P}_{Z}} {\textup{d}\mathbb{P}_{Y}}\Big]\;. \label{eq:pinsker}$$ The SDE describing the process $\boldsymbol{z}_{\epsilon}(\cdot)$ can be written as $$d\boldsymbol{z}_{\epsilon}(t) \;=\; \Big\{\, \boldsymbol{b}_0(\boldsymbol{y}(t)) +D\boldsymbol{b}_0(\boldsymbol{y}(t))\, [\, \boldsymbol{z}_{\epsilon}(t)-\boldsymbol{y}(t)\,]\, \Big\}\,dt \,+\, \sqrt{2\epsilon}\,dW_{t}\;.
\label{eq:sdeZ}$$ Hence, by the Girsanov theorem, ([\[eq:x_eps\]](#eq:x_eps){reference-type="ref" reference="eq:x_eps"}), and ([\[eq:sdeZ\]](#eq:sdeZ){reference-type="ref" reference="eq:sdeZ"}), $$\begin{aligned} \log\frac{\textup{d}\mathbb{P}_{Z}}{\textup{d}\mathbb{P}_{Y}} = & \frac{1}{\sqrt{2\epsilon}}\int_{0}^{\delta_{\epsilon}} \big \langle \, \boldsymbol{b}_0(\boldsymbol{y}_{\epsilon}(t)) -\boldsymbol{b}_0(\boldsymbol{y}(t)) +D\boldsymbol{b}_0(\boldsymbol{y}(t))\, [\boldsymbol{y}_{\epsilon}(t) -\boldsymbol{y}(t)] \, ,\,dW_{t}\, \big \rangle \\ - & \frac{1}{4\epsilon}\, \int_{0}^{\delta_{\epsilon}} \big|\, \boldsymbol{b}_0(\boldsymbol{y}_{\epsilon}(t)) -\boldsymbol{b}_0 (\boldsymbol{y}(t)) +D\boldsymbol{b}_0(\boldsymbol{y}(t)) \, [ \boldsymbol{y}_{\epsilon}(t) -\boldsymbol{y}(t)\,] \,\big |^{2}\, dt\;.\end{aligned}$$ Thus, the left-hand side of ([\[eq:pinsker\]](#eq:pinsker){reference-type="ref" reference="eq:pinsker"}) is bounded by $$\frac{1}{2\epsilon}\int_{0}^{\delta_{\epsilon}} \mathbb{E}_{\boldsymbol{x}}\left[ \, \big| \, \boldsymbol{b}_0 (\boldsymbol{y}_{\epsilon}(t)) -\boldsymbol{b}_0 (\boldsymbol{y}(t)) +D\boldsymbol{b}_0 (\boldsymbol{y}(t)) \, [\, \boldsymbol{y}_{\epsilon}(t) -\boldsymbol{y}(t) \,] \, \big|^{2}\, \right] dt\;. \label{eq:pnbd1}$$ By condition ([\[growth\]](#growth){reference-type="ref" reference="growth"}) on $D\boldsymbol{b}_0$ (which is milder than that of [@BJ]), and the argument presented in [@BJ Proposition 3.3], we can conclude that $d_{\mathrm{TV}}\left(\boldsymbol{y}_{\epsilon} (\delta_{\epsilon};\boldsymbol{x}) \, ,\, \boldsymbol{z}_{\epsilon}(\delta_{\epsilon}; \boldsymbol{x})\right) \le \delta_{\epsilon}^{1/2}$. We emphasize that in order to control the term $\boldsymbol{b}_0(\boldsymbol{y}_{\epsilon}(t)) - \boldsymbol{b}_0(\boldsymbol{y}(t))$, we need the estimate of the fourth moment stated in Proposition [ 10](#prop:main_apprx){reference-type="ref" reference="prop:main_apprx"}.
In all other places, a bound of the second moment suffices. By Lemma [ 9](#lem:mom_est){reference-type="ref" reference="lem:mom_est"}, the probability that the starting point $\boldsymbol{y}_{\epsilon}(t_{\epsilon}-\delta_{\epsilon};\boldsymbol{x})$ does not belong to $\mathcal{D}_{r_4}$ vanishes as $\epsilon\to 0$. This fact together with the bound obtained in the previous paragraph yields that $$\lim_{\epsilon\rightarrow0}\sup_{\boldsymbol{x}\in\mathcal{D}_{r_4}} d_{\mathrm{TV}}\left(\boldsymbol{y}_{\epsilon}(\delta_{\epsilon}; \boldsymbol{y}_{\epsilon}(t_{\epsilon} -\delta_{\epsilon};\boldsymbol{x})) \,,\, \boldsymbol{z}_{\epsilon}(\delta_{\epsilon}; \boldsymbol{y}_{\epsilon}(t_{\epsilon} -\delta_{\epsilon};\boldsymbol{x}))\right)=0\;.$$ This completes the estimate of the first term on the right-hand side of ([\[eq:dtvdec\]](#eq:dtvdec){reference-type="ref" reference="eq:dtvdec"}). We turn to the second term. By Proposition [ 10](#prop:main_apprx){reference-type="ref" reference="prop:main_apprx"} the starting points $\boldsymbol{y}_{\epsilon}(t_{\epsilon}-\delta_{\epsilon};\boldsymbol{x})$ and $\boldsymbol{z}_{\epsilon}(t_{\epsilon}-\delta_{\epsilon};\boldsymbol{x})$ are close. Since the process ${\boldsymbol z}_\epsilon (\cdot)$ is Gaussian, the distance $$d_{\mathrm{TV}}\left(\boldsymbol{z}_{\epsilon}(\delta_{\epsilon}; \boldsymbol{w}),\,\boldsymbol{z}_{\epsilon}(\delta_{\epsilon}; \boldsymbol{w}')\right)$$ is completely determined by $\boldsymbol{w}$ and $\boldsymbol{w}'$, and one can follow the arguments presented in [@BJ Proposition 3.3]. All error terms appearing in the proof are uniform in the starting point $\boldsymbol{x}\in\mathcal{D}_{r_4}$ because all estimates obtained in the next subsections are uniform.
Thus, $$\lim_{\epsilon\rightarrow0}\sup_{\boldsymbol{x}\in\mathcal{D}_{r_4}} d_{\mathrm{TV}}\left(\boldsymbol{z}_{\epsilon}(\delta_{\epsilon}; \boldsymbol{y}_{\epsilon}(t_{\epsilon}-\delta_{\epsilon}; \boldsymbol{x})),\,\boldsymbol{z}_{\epsilon}(\delta_{\epsilon}; \boldsymbol{z}_{\epsilon}(t_{\epsilon} -\delta_{\epsilon};\boldsymbol{x}))\right) \;=\; 0\;.$$ This completes the proof of Lemma [ 5](#prop:tvm1){reference-type="ref" reference="prop:tvm1"}. ## Exponential Stability {#exponential-stability .unnumbered} In this subsection and in the next we provide the estimates used in the proofs of Lemmata [ 5](#prop:tvm1){reference-type="ref" reference="prop:tvm1"} and [ 6](#prop:tvm2){reference-type="ref" reference="prop:tvm2"}. Recall that we denote by $\lambda>0$ the smallest eigenvalue of $\mathbb{H}_0$. The following lemma substitutes [@BJ display (2.2)]. ** 7**. *For all $t\ge0$, $$\left\langle \boldsymbol{y}(t),\,\mathbb{H}_0 \, \boldsymbol{y}(t)\right\rangle \, \leq\, \mathrm{e}^{-\lambda t} \left\langle \boldsymbol{y}(0),\,\mathbb{H}_0\, \boldsymbol{y}(0)\right\rangle \;. \label{eq:conv_uld0}$$* *Proof.* By ([\[eq:x_0\]](#eq:x_0){reference-type="ref" reference="eq:x_0"}) and ([\[eq:contraction\]](#eq:contraction){reference-type="ref" reference="eq:contraction"}), $$\frac{d}{dt}\left\langle \boldsymbol{y}(t),\, \mathbb{H}_0 \, \boldsymbol{y}(t)\right\rangle \,=\, 2\, \left\langle \boldsymbol{b}_0(\boldsymbol{y}(t)),\, \mathbb{H}_0 \, \boldsymbol{y}(t)\right\rangle \, \le\, -\, \left|\, \mathbb{H}_0 \, \boldsymbol{y}(t) \, \right|^{2} \, \le\, -\, \lambda\, \left\langle \boldsymbol{y}(t),\, \mathbb{H}_0 \, \boldsymbol{y}(t)\right\rangle \label{eq:conv1}$$ since $\lambda$ is the smallest eigenvalue of $\mathbb{H}_0$. ◻ **Remark 8**. *Fix $r>0$.
By the previous lemma, $\boldsymbol{y}(t)\in\mathcal{D}_{r}$ for all $t\ge0$ provided $\boldsymbol{y}(0)\in\mathcal{D}_{r}$.* A similar computation for $\boldsymbol{y}_{\epsilon}(t)$ instead of $\boldsymbol{y}(t)$ yields the moment estimate stated in the next lemma. This bound plays the role of [@BJ condition (H)]. Denote by $\mathbb{E}_{\boldsymbol{x}}$ the expectation with respect to $\boldsymbol{y}_{\epsilon}(\cdot)$ starting at $\boldsymbol{x}$. Moreover, from now on, all estimates presented hold only for sufficiently small $\epsilon$. ** 9**. *Fix $r>0$. For all $n\ge1$, there exists a constant $C(n)>0$ such that, for all $t\ge 0$ and $\boldsymbol{x}\in\mathcal{D}_{r}$, $$\mathbb{E}_{{\boldsymbol x}} \left\langle \boldsymbol{y}_{\epsilon}(t),\, \mathbb{H}_0 \, \boldsymbol{y}_{\epsilon}(t)\right\rangle ^{n} \, \leq \, e^{-(n \lambda/ 4)\, t} \left\langle \boldsymbol{x},\,\mathbb{H}_0\, \boldsymbol{x}\right\rangle^{n} \,+ \, C(n)\, \epsilon\;.$$* *Proof.* By Ito's formula, ([\[eq:contraction\]](#eq:contraction){reference-type="ref" reference="eq:contraction"}), and a computation similar to ([\[eq:conv1\]](#eq:conv1){reference-type="ref" reference="eq:conv1"}), we get $$d\left\langle \boldsymbol{y}_{\epsilon}(t),\, \mathbb{H}_0\, \boldsymbol{y}_{\epsilon}(t)\right\rangle \, \le\, \left[-\lambda\left\langle \boldsymbol{y}_{\epsilon}(t),\, \mathbb{H}_0 \, \boldsymbol{y}_{\epsilon}(t)\right\rangle +2\mathfrak{h}\epsilon\right]dt+\sqrt{2\epsilon}\left\langle 2\mathbb{H}_0 \, \boldsymbol{y}_{\epsilon}(t),\,dW_{t}\right\rangle \;, \label{eq:itoy}$$ where $\mathfrak{h}=\text{tr}(\mathbb{H}_0)$.
Thus, by Ito's formula and ([\[eq:itoy\]](#eq:itoy){reference-type="ref" reference="eq:itoy"}), $$\begin{aligned} & d\left\langle \boldsymbol{y}_{\epsilon}(t),\,\mathbb{H}_0\, \boldsymbol{y}_{\epsilon}(t)\right\rangle ^{n} \; \le\; n\left\langle \boldsymbol{y}_{\epsilon}(t),\,\mathbb{H}_0\, \boldsymbol{y}_{\epsilon}(t)\right\rangle ^{n-1}\left[\, -\lambda\left\langle \boldsymbol{y}_{\epsilon}(t),\,\mathbb{H}_0\, \boldsymbol{y}_{\epsilon}(t)\right\rangle +2\mathfrak{h}\epsilon \, \right]dt \\ & \qquad\qquad + \; n\left\langle \boldsymbol{y}_{\epsilon}(t),\,\mathbb{H}_0\, \boldsymbol{y}_{\epsilon}(t)\right\rangle ^{n-1}\sqrt{2\epsilon}\left\langle 2\mathbb{H}_0\, \boldsymbol{y}_{\epsilon}(t),\,dW_{t}\right\rangle +\frac{n(n-1)}{2}\left\langle \boldsymbol{y}_{\epsilon}(t),\,\mathbb{H}_0\, \boldsymbol{y}_{\epsilon}(t)\right\rangle^{n-2} \times8\epsilon\left|\mathbb{H}_0\, \boldsymbol{y}_{\epsilon}(t)\right|^{2}dt\;.\end{aligned}$$ For $\epsilon$ sufficiently small, this expression is bounded by $$\left\langle \boldsymbol{y}_{\epsilon}(t),\, \mathbb{H}_0\, \boldsymbol{y}_{\epsilon}(t)\right\rangle ^{n-1} \Big[\, -\frac{n\lambda}{2} \left\langle \boldsymbol{y}_{\epsilon}(t),\,\mathbb{H}_0\, \boldsymbol{y}_{\epsilon}(t)\right\rangle +2\mathfrak{h}n\epsilon\, \Big] \, dt \;+\; n\left\langle \boldsymbol{y}_{\epsilon}(t),\,\mathbb{H}_0\, \boldsymbol{y}_{\epsilon}(t)\right\rangle ^{n-1}\sqrt{2\epsilon}\left\langle 2\mathbb{H}_0\, \boldsymbol{y}_{\epsilon}(t),\,dW_{t}\right\rangle \;.$$ Since $$\left\langle \boldsymbol{y}_{\epsilon}(t),\,\mathbb{H}_0\, \boldsymbol{y}_{\epsilon}(t)\right\rangle ^{n-1} \; \le\; \frac{n-1}{n}\left\langle \boldsymbol{y}_{\epsilon}(t),\, \mathbb{H}_0\, \boldsymbol{y}_{\epsilon}(t)\right\rangle ^{n}+\frac{1}{n}\;,$$ for small enough $\epsilon>0$, $$d\left\langle \boldsymbol{y}_{\epsilon}(t),\, \mathbb{H}_0\, \boldsymbol{y}_{\epsilon}(t)\right\rangle ^{n} \;\le\; \Big [\, -\frac{n\lambda}{4}\left\langle \boldsymbol{y}_{\epsilon}(t),\, \mathbb{H}_0\, \boldsymbol{y}_{\epsilon}(t)\right\rangle ^{n}+c(n)\epsilon\, \Big ]\, dt \;+\; n\left\langle \boldsymbol{y}_{\epsilon}(t),\,\mathbb{H}_0\, \boldsymbol{y}_{\epsilon}(t)\right\rangle ^{n-1}\sqrt{2\epsilon}\left\langle 2\mathbb{H}_0\, \boldsymbol{y}_{\epsilon}(t),\,dW_{t}\right\rangle$$ for some finite constant $c(n)$.
Hence, by Gronwall's inequality, $$\mathbb{E}_{\boldsymbol{x}}\left\langle \boldsymbol{y}_{\epsilon}(t),\,\mathbb{H}_0\, \boldsymbol{y}_{\epsilon}(t)\right\rangle ^{n}\le e^{- (n\lambda/4) \, t}\left\langle \boldsymbol{x},\,\mathbb{H}_0\, \boldsymbol{x}\right\rangle ^{n}+\frac{4c(n)\epsilon}{n\lambda}\;,$$ as claimed. ◻ It follows from the estimates derived in the previous lemma, the argument presented in [@BJ page 1192] (cf. the last line of the proof of [@BJ Proposition 3.7]) and the dominated and monotone convergence theorems that there exists a finite constant $C_0$ such that $$\label{apf1} \pi_{\epsilon}((\mathcal{D}_{r_4})^{c}) \,\leq\, C_0 \, \epsilon$$ for all $\epsilon$ sufficiently small. ## Gaussian Approximation {#gaussian-approximation .unnumbered} Hereafter, we couple the processes $\boldsymbol{y}_{\epsilon}(\cdot)$, $\boldsymbol{\xi}(\cdot)$, and $\boldsymbol{z}_{\epsilon}(\cdot)$ by using the same driving Brownian motion $W_{t}$. This coupled probability law and associated expectation will be denoted by ${\boldsymbol P}_{\boldsymbol{x}}$ and ${\boldsymbol E}_{\boldsymbol{x}}$. ** 10** (Gaussian approximation). *There exist constants $\alpha_{1},\,\alpha_{2}>0$ such that $$\sup_{t\le t_{\epsilon}} \sup_{\boldsymbol{x}\in\mathcal{D}_{r_4}} {\boldsymbol E}_{\boldsymbol{x}}\left[\, |\, \boldsymbol{y}_{\epsilon}(t; {\boldsymbol x})- \boldsymbol{z}_{\epsilon}(t; {\boldsymbol x})\, |^{4} \, \right] \, \leq\, \alpha_{1}\, \epsilon^{2+\alpha_{2}}\;.$$* This proposition corresponds to [@BJ display (3.12)] which plays a crucial role in the proof of the main result. Since the proof of [@BJ display (3.12)] requires conditions (C) and (H) of [@BJ], and these conditions are not assumed here, we develop an alternative approach below, based on [@LeeRamSeo]. ** 11**.
*There exists $c>0$ such that $$\langle\mathbb{K}\boldsymbol{x},\, \mathbb{A}(\boldsymbol{y}(t))\boldsymbol{x}\rangle \, \le\, -\, c\, \langle\boldsymbol{x},\,\mathbb{K}\boldsymbol{x}\rangle$$ for all $t\ge0$ and $\boldsymbol{x}\in\mathcal{D}_{2r_4}$.* *Proof.* By ([\[eq:A(x)-A\]](#eq:A(x)-A){reference-type="ref" reference="eq:A(x)-A"}) and ([\[eq:ricatti1\]](#eq:ricatti1){reference-type="ref" reference="eq:ricatti1"}), $$\begin{aligned} 2\langle\mathbb{K}\boldsymbol{x},\, \mathbb{A}(\boldsymbol{y}(t))\boldsymbol{x}\rangle & =\left\langle \boldsymbol{x},\, \left[\mathbb{A}(\boldsymbol{y}(t))^{\dagger}\mathbb{K} +\mathbb{K}\mathbb{A}(\boldsymbol{y}(t))\right] \boldsymbol{x}\right\rangle \\ & \le\left\langle \boldsymbol{x},\, \left[\mathbb{A}^{\dagger}\mathbb{K}+\mathbb{K}\mathbb{A}\right] \boldsymbol{x}\right\rangle +\frac{1}{2} |\boldsymbol{x}|^{2}=-\frac{1}{2}|\boldsymbol{x}|^{2} \;.\end{aligned}$$ As $\mathbb{K}$ is bounded, the previous term is less than or equal to $-\, c\, \langle\boldsymbol{x},\,\mathbb{K}\boldsymbol{x}\rangle$ for some positive constant $c$, as claimed. ◻ **Lemma 12**.
*For all $n\ge1$, there exists a finite constant $C(n)>0$ such that $$\sup_{t\ge0}\sup_{\boldsymbol{x}\in\mathcal{D}_{r_4}} {\boldsymbol E}_{\boldsymbol{x}}\left[\left\langle \boldsymbol{\xi}(t),\, \mathbb{K}\boldsymbol{\xi}(t)\right\rangle^{n}\right]\leq C(n)\;.$$* *Proof.* By Ito's formula and Lemma [ 11](#lem:drift){reference-type="ref" reference="lem:drift"}, $$\begin{aligned} d\langle\boldsymbol{\xi}(t),\,\mathbb{K}\boldsymbol{\xi}(t)\rangle & =\; \left[\, 2\, \langle\, \mathbb{K}\boldsymbol{\xi}(t),\, \mathbb{A}({\boldsymbol y} (t) )\boldsymbol{\xi}(t)\, \rangle +\mathfrak{k}\, \right]dt +2\, \langle\mathbb{K}\boldsymbol{\xi}(t),\,\mathrm{d}W_{t}\rangle \nonumber \\ & \le\left[-2c\langle\boldsymbol{\xi}(t),\, \mathbb{K}\boldsymbol{\xi}(t)\rangle+\mathfrak{k}\right]dt +2\langle\mathbb{K}\boldsymbol{\xi}(t),\,\mathrm{d}W_{t}\rangle \;, \label{eq:dxi}\end{aligned}$$ where $\mathfrak{k}=\text{tr}(\mathbb{K})$. The remainder of the proof is identical to the proof of Lemma [ 9](#lem:mom_est){reference-type="ref" reference="lem:mom_est"}. ◻ **Lemma 13**. *For all $n\ge1$, there exists a finite constant $C(n)>0$ such that $$\sup_{\boldsymbol{x}\in\mathcal{D}_{r_4}}{\boldsymbol E}_{\boldsymbol{x}} \big[\, \sup_{t\in[0,\,t_{\epsilon}]} \left\langle \boldsymbol{\xi}(t),\, \mathbb{K}\boldsymbol{\xi}(t)\right\rangle ^{n}\,\big] \;\leq\; C(n)\, t_\epsilon^{n}\;.$$* *Proof.* By Hölder's inequality, it is enough to prove the lemma for $n$ even. Assume that this is the case.
Integrating ([\[eq:dxi\]](#eq:dxi){reference-type="ref" reference="eq:dxi"}), as the first term on the right-hand side is negative, $$\langle\boldsymbol{\xi}(t),\, \mathbb{K}\boldsymbol{\xi}(t)\rangle \leq\langle\boldsymbol{x},\,\mathbb{K}\boldsymbol{x}\rangle +2\int_{0}^{t}\langle\mathbb{K}\boldsymbol{\xi}(s),\, \mathrm{d}W_{s}\rangle+\mathfrak{k}t\;.$$ Therefore, $${\boldsymbol E}_{\boldsymbol{x}}\Big[\, \sup_{t\in[0, t_\epsilon]} \left\langle \boldsymbol{\xi}(t),\,\mathbb{K} \boldsymbol{\xi}(t)\right\rangle ^{n}\,\Big] \;\leq\; C(n)\, \Big(\, {\boldsymbol E}_{\boldsymbol{x}} \Big[\, \sup_{t\in[0, t_\epsilon ]} \left|\int_{0}^{t}\langle\mathbb{K}\boldsymbol{\xi}(s),\, \mathrm{d}W_{s}\rangle\right|^{n}\,\Big] \,+\, t_\epsilon^{n} \, \Big) \label{eq:bdx}$$ for some finite constant $C(n)$. By the Burkholder-Davis-Gundy inequality and the Hölder inequality, the expectation on the right-hand side is bounded by $$C(n)\, {\boldsymbol E}_{\boldsymbol{x}}\Big[\, \Big(\, \int_{0}^{t_\epsilon} |\mathbb{K}\boldsymbol{\xi}(s)|^{2}ds\, \Big)^{n/2}\, \Big] \, \le\, C(n)\, t_\epsilon^{(n/2)-1}\, \, {\boldsymbol E}_{\boldsymbol{x}}\Big[\, \int_{0}^{t_\epsilon} \left\langle \boldsymbol{\xi}(s),\, \mathbb{K}\boldsymbol{\xi}(s)\right\rangle ^{n/2}ds \,\Big]\;.$$ By Fubini's theorem and by Lemma [ 12](#lem:mom_Yt){reference-type="ref" reference="lem:mom_Yt"} \[since $n$ is even\], this expression is less than or equal to $C(n)\, t_\epsilon^{n/2}$. Inserting this bound in ([\[eq:bdx\]](#eq:bdx){reference-type="ref" reference="eq:bdx"}) completes the proof of the lemma. ◻ The proof below is developed in [@LeeRamSeo] based on ideas of [@BJ].
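Before turning to the proof, the content of the proposition can be visualised on a one-dimensional toy model. In the sketch below the drift $b_0(y) = -y + y^3$, the starting point, and all numerical parameters are illustrative assumptions, not the objects of the proposition; the three processes are driven by the same Brownian increments, as in the coupling described above.

```python
import numpy as np

# One-dimensional sketch of the Gaussian approximation: with a common
# Brownian path, compare y_eps (full SDE), y (deterministic flow) and
# z_eps = y + sqrt(2 eps) * xi, where xi solves the linearized equation.
# The drift b0 and every constant here are illustrative choices only.
rng = np.random.default_rng(0)
b0 = lambda y: -y + y**3            # stable equilibrium at 0 for |y| < 1
db0 = lambda y: -1.0 + 3.0 * y**2   # derivative of the drift

eps, dt, n_steps = 1e-4, 1e-3, 2000
y_eps = y = 0.3                     # common starting point x
xi = 0.0                            # xi(0) = 0
err = fluct = 0.0
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt))
    y_eps += b0(y_eps) * dt + np.sqrt(2 * eps) * dW   # full dynamics
    xi += db0(y) * xi * dt + dW                        # linearized around y(t)
    y += b0(y) * dt                                    # noiseless flow
    z_eps = y + np.sqrt(2 * eps) * xi                  # Gaussian approximation
    err = max(err, abs(y_eps - z_eps))
    fluct = max(fluct, abs(y_eps - y))

print(err < 0.1 * fluct)   # approximation error well below the fluctuation size
```

The printed comparison reflects the mechanism of the proof: the error $y_\epsilon - z_\epsilon$ is of smaller order than the fluctuation $\sqrt{2\epsilon}\,\xi$ itself.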
*Proof of Proposition [ 10](#prop:main_apprx){reference-type="ref" reference="prop:main_apprx"}.* Fix $\boldsymbol{x}\in\mathcal{D}_{r_4}$, and remember from [\[eq:xi\]](#eq:xi){reference-type="eqref" reference="eq:xi"} that ${\boldsymbol \xi }(t)$ depends on ${\boldsymbol x}$ through the dynamical system ${\boldsymbol y}(\cdot)$ which starts from ${\boldsymbol x}$. Let $$\boldsymbol{r}_{\epsilon}(t) \,:=\, \frac{\boldsymbol{y}_{\epsilon}(t)-\boldsymbol{z}_{\epsilon}(t)} {\sqrt{2\epsilon}} \,=\, \frac{\boldsymbol{y}_{\epsilon}(t)-\boldsymbol{y}(t)} {\sqrt{2\epsilon}}-\boldsymbol{\xi}(t)\;. \label{eq:wte}$$ We need to prove that there exist positive finite constants $\alpha_{1}$, $\alpha_{2}$ such that $$\sup_{t\le t_\epsilon} \, \sup_{{\boldsymbol x}\in {\mathcal D}_{r_4}}\, {\boldsymbol E}_{\boldsymbol{x}}\left[\langle\boldsymbol{r}_{\epsilon}(t),\, \mathbb{K}\boldsymbol{r}_{\epsilon}(t)\rangle^{2}\right] \, \leq\, \alpha_{1}\epsilon^{\alpha_{2}}\;. \label{eq:obj0}$$ Since $\boldsymbol{y}_{\epsilon}(t)$ and $\boldsymbol{\xi}(t)$ share the same driving Brownian motion, it follows from ([\[eq:x_eps\]](#eq:x_eps){reference-type="ref" reference="eq:x_eps"}), ([\[eq:x_0\]](#eq:x_0){reference-type="ref" reference="eq:x_0"}), and ([\[eq:xi\]](#eq:xi){reference-type="ref" reference="eq:xi"}) that $$\label{apf02} \frac{\mathrm{d}}{\mathrm{d}t}\langle\boldsymbol{r}_{\epsilon}(t),\, \mathbb{K}\boldsymbol{r}_{\epsilon}(t)\rangle \, =\, 2\, \langle\mathbb{K}\boldsymbol{r}_{\epsilon}(t),\, \mathbb{A}(\boldsymbol{y}(t)) \, \boldsymbol{r}_{\epsilon}(t)\rangle \, -\, 2\, \langle\mathbb{K}\boldsymbol{r}_{\epsilon}(t),\, \boldsymbol{q}_{\epsilon}(t)\rangle\;,$$ where $$\boldsymbol{q}_{\epsilon}(t) \,=\, \frac{1}{\sqrt{2\epsilon}} \, \big\{\, \boldsymbol{b}_0(\boldsymbol{y}_{\epsilon}(t)) \,-\, \boldsymbol{b}_0(\boldsymbol{y}(t)) \,-\, (D\boldsymbol{b}_0) (\boldsymbol{y}(t)) \, (\boldsymbol{y}_{\epsilon}(t)-\boldsymbol{y}(t)) \,\big\} \;.$$ Let
$$\mathcal{A}_{\epsilon}=\mathcal{A}_{\epsilon}(\boldsymbol{x}) :=\big\{\, \boldsymbol{y}_{\epsilon}(t)\in\mathcal{D}_{2r_4}\; \text{for all }t\in[0,\, t_\epsilon]\,\big\} \;.$$ By Lemma [ 11](#lem:drift){reference-type="ref" reference="lem:drift"} and since ${\mathbb K}$ is positive-definite and bounded, on the event ${\mathcal A}_\epsilon$ the right-hand side of [\[apf02\]](#apf02){reference-type="eqref" reference="apf02"} is bounded by $$\begin{aligned} -\, c\, \langle\boldsymbol{r}_{\epsilon}(t),\, \mathbb{K}\boldsymbol{r}_{\epsilon}(t)\rangle \,-\, 2\, \langle\mathbb{K}\boldsymbol{r}_{\epsilon}(t),\, \boldsymbol{q}_{\epsilon}(t)\rangle \,\le\, -\, c_{1}\, \langle\boldsymbol{r}_{\epsilon}(t),\, \mathbb{K}\boldsymbol{r}_{\epsilon}(t)\rangle \,+\, C_{2}\, |\boldsymbol{q}_{\epsilon}(t)|^{2} \label{eq:dW1}\end{aligned}$$ for some finite positive constants $c_{1}$, $C_{2}$. Fix $t\in[0,\, t_\epsilon]$. Since $\boldsymbol{b}_0\in C^{2}(B({\boldsymbol 0}, r_3),\,\mathbb{R}^{d})$, by ([\[eq:condr_4\]](#eq:condr_4){reference-type="ref" reference="eq:condr_4"}) on the event $\mathcal{A}_{\epsilon}$, $$|\boldsymbol{q}_{\epsilon}(t)| \,\leq\, \frac{C_0}{\sqrt{\epsilon}}\, |\boldsymbol{y}_{\epsilon}(t)-\boldsymbol{y}(t)|^{2} \,\le\, C_0\, \sqrt{\epsilon}\, \big\{ \, |\boldsymbol{r}_{\epsilon}(t)|^{2}+|\boldsymbol{\xi}(t)|^{2}\,\big\}\;,$$ for some finite constant $C_0$, whose value may change from line to line. The second inequality follows from ([\[eq:wte\]](#eq:wte){reference-type="ref" reference="eq:wte"}). 
Therefore, by ([\[eq:dW1\]](#eq:dW1){reference-type="ref" reference="eq:dW1"}), $$\frac{\mathrm{d}}{\mathrm{d}t}\langle\boldsymbol{r}_{\epsilon}(t),\, \mathbb{K}\boldsymbol{r}_{\epsilon}(t)\rangle \,\leq\, -\, c_{1}\, \langle\boldsymbol{r}_{\epsilon}(t),\, \mathbb{K}\boldsymbol{r}_{\epsilon}(t)\rangle \,+\, C_3\, \epsilon \left[\langle\boldsymbol{r}_{\epsilon}(t),\, \mathbb{K}\boldsymbol{r}_{\epsilon}(t)\rangle^{2}+ \left\langle \boldsymbol{\xi}(t),\, \mathbb{K}\boldsymbol{\xi}(t)\right\rangle ^{2}\right]\;.$$ Let $\mathcal{B}_{\epsilon}=\mathcal{B}_{\epsilon}({\boldsymbol x})$ be the event defined by $$\mathcal{B}_{\epsilon} \,:=\, \Big\{\, \frac{C_3\, \epsilon}{c_{1}}\, \Big(\, C_3\, \epsilon \, t_\epsilon\, \sup_{s\in[0,\,t_\epsilon]} \left\langle \boldsymbol{\xi}(s),\, \mathbb{K}\boldsymbol{\xi}(s)\right\rangle ^{2}\,\Big) \,\leq\, \frac{1}{2}\, \Big\} \;.$$ By Perov's inequality [@Webb Theorem 3.1], as $\boldsymbol{r}_{\epsilon}(0)=\boldsymbol{0}$, it follows from the previous inequality that on the event ${\mathcal B}_\epsilon$, $$\begin{aligned} \langle\boldsymbol{r}_{\epsilon}(t),\, \mathbb{K}\boldsymbol{r}_{\epsilon}(t)\rangle \,\leq\, 2\, C_{3}\, \epsilon\, t_\epsilon\, e^{-c_{1}t}\, \sup_{s\in[0,\,t_\epsilon]} \left\langle \boldsymbol{\xi}(s),\, \mathbb{K}\boldsymbol{\xi}(s)\right\rangle ^{2}\end{aligned}$$ for all $t\in[0,\,t_\epsilon]$. Hence, by Lemma [ 13](#lem:mom_Yt_sup){reference-type="ref" reference="lem:mom_Yt_sup"}, $$\sup_{t\in[0,\, t_\epsilon]} \, \sup_{{\boldsymbol x}\in {\mathcal D}_{r_4}}\, {\boldsymbol E}_{\boldsymbol{x}}\left[\langle\boldsymbol{r}_{\epsilon}(t),\, \mathbb{K}\boldsymbol{r}_{\epsilon}(t)\rangle^{2}\, {\boldsymbol 1}_{\mathcal{A}_{\epsilon}\cap\mathcal{B}_{\epsilon}}\right] \,\leq\, C_0\, \epsilon^2\, t_\epsilon^6 \;=\; C_0\, \epsilon^{2-6\theta}\;.$$ Since $\theta<1/3$, this proves ([\[eq:obj0\]](#eq:obj0){reference-type="ref" reference="eq:obj0"}) on the event $\mathcal{A}_{\epsilon}\cap\mathcal{B}_{\epsilon}$.
We turn to the event $(\mathcal{A}_{\epsilon}\cap\mathcal{B}_{\epsilon})^{c}$. By the Cauchy-Schwarz inequality, $${\boldsymbol E}_{\boldsymbol{x}}\left[\langle\boldsymbol{r}_{\epsilon}(t),\, \mathbb{K}\boldsymbol{r}_{\epsilon}(t)\rangle^{2} \, {\boldsymbol 1}_{(\mathcal{A}_{\epsilon}\cap\mathcal{B}_{\epsilon})^{c}}\right]^{2} \, \leq\, {\boldsymbol E}_{\boldsymbol{x}} \left[\langle\boldsymbol{r}_{\epsilon}(t),\, \mathbb{K}\boldsymbol{r}_{\epsilon}(t)\rangle^{4}\right] \, \left\{\, {\boldsymbol P}_{\boldsymbol{x}}(\mathcal{A}_{\epsilon}^{c}) +{\boldsymbol P}_{\boldsymbol{x}}(\mathcal{B}_{\epsilon}^{c}) \, \right\}\;.$$ By ([\[eq:wte\]](#eq:wte){reference-type="ref" reference="eq:wte"}) and the Cauchy-Schwarz inequality, $${\boldsymbol E}_{\boldsymbol{x}}\left[\langle\boldsymbol{r}_{\epsilon}(t),\, \mathbb{K}\boldsymbol{r}_{\epsilon}(t)\rangle^{4}\right] \,\leq\, \frac{C_0}{\epsilon^{4}} \, \left({\boldsymbol E}_{{\boldsymbol x}} \left[\langle\boldsymbol{y}_{\epsilon}(t),\, \mathbb{K}\boldsymbol{y}_{\epsilon}(t)\rangle^{4} \,+\, \langle\boldsymbol{y}(t),\,\mathbb{K}\boldsymbol{y}(t)\rangle^{4} \, +\, \epsilon^{4}\, \langle\boldsymbol{\xi}(t),\, \mathbb{K}\boldsymbol{\xi}(t)\rangle^{4}\right]\right)\;.$$ Hence, by Lemmata [ 7](#lem:stability){reference-type="ref" reference="lem:stability"}, [ 9](#lem:mom_est){reference-type="ref" reference="lem:mom_est"} and [ 12](#lem:mom_Yt){reference-type="ref" reference="lem:mom_Yt"}, $${\boldsymbol E}_{\boldsymbol{x}}\left[\langle\boldsymbol{r}_{\epsilon}(t),\, \mathbb{K}\boldsymbol{r}_{\epsilon}(t)\rangle^{4}\right] \,\le\, \frac{C_0}{\epsilon^{4}}$$ for all $t\ge0$, ${\boldsymbol x}\in{\mathcal D}_{r_4}$. 
It remains to show that there exist $c_0>0$ and $C_0<\infty$ such that $$\sup_{{\boldsymbol x}\in {\mathcal D}_{r_4}}\, {\boldsymbol P}_{\boldsymbol{x}}(\mathcal{A}_{\epsilon}^{c}) \,\le\, C_0\, \epsilon^{4+c_0}\;\;\;\text{and}\;\;\; \sup_{{\boldsymbol x}\in {\mathcal D}_{r_4}}\, {\boldsymbol P}_{\boldsymbol{x}}(\mathcal{B}_{\epsilon}^{c}) \,\le\, C_0 \, \epsilon^{4+c_0}\;. \label{eq:obj1}$$ Consider the event ${\mathcal A}_\epsilon$. On the complement of this set, $$\sup_{t\le t_\epsilon} \left\langle \boldsymbol{y}_{\epsilon}(t),\, \mathbb{H}_0 \boldsymbol{y}_{\epsilon}(t)\right\rangle \,\ge\, (2r_4)^2\;.$$ By ([\[eq:itoy\]](#eq:itoy){reference-type="ref" reference="eq:itoy"}), $$\left\langle \boldsymbol{y}_{\epsilon}(t),\, \mathbb{H}_0 \boldsymbol{y}_{\epsilon}(t)\right\rangle \,\le\, 2\, \mathfrak{h}\, \epsilon\, t \,+\, 2\, \sqrt{2\epsilon} \int_{0}^{t}\left\langle \mathbb{H}_0 \boldsymbol{y}_{\epsilon}(s),\,dW_{s}\right\rangle\;.$$ Thus, as $\epsilon\, t_\epsilon \to 0$, for $\epsilon$ small enough, by the Markov inequality, $${\boldsymbol P}_{\boldsymbol{x}}(\mathcal{A}_{\epsilon}^{c})\, \leq\, {\boldsymbol P}_{\boldsymbol{x}} \Big[\, \sup_{t\le t_\epsilon} \int_{0}^{t} \left\langle \mathbb{H}_0 \, \boldsymbol{y}_{\epsilon}(s),\,dW_{s}\right\rangle > \frac{r^2_1}{\sqrt{\epsilon}}\, \Big] \,\le\, C_0 \, \epsilon^{8}\, {\boldsymbol E}_{\boldsymbol{x}}\Big[\, \sup_{t\le t_\epsilon} \Big|\, \int_{0}^{t} \left\langle \mathbb{H}_0\, \boldsymbol{y}_{\epsilon}(s),\,dW_{s}\right\rangle \, \Big|^{16} \,\Big]\;.$$ By the Burkholder-Davis-Gundy and Hölder inequalities, the right-hand side is bounded by $$C_0\, \epsilon^{8}\, {\boldsymbol E}_{\boldsymbol{x}}\Big[\, \Big(\, \int_{0}^{t_\epsilon} \left|\, \mathbb{H}_0 \, \boldsymbol{y}_{\epsilon}(s) \, \right|^{2} \, \mathrm{d}s\, \Big)^{8} \Big] \,\le\, C_0\, \epsilon^{8}\, t^7_\epsilon\, {\boldsymbol E}_{\boldsymbol{x}}\Big[\, \int_{0}^{t_\epsilon} \left|\, \mathbb{H}_0\, \boldsymbol{y}_{\epsilon}(s) \, \right|^{16} \,
\mathrm{d}s\, \Big] \;.$$ Hence, by Lemma [ 9](#lem:mom_est){reference-type="ref" reference="lem:mom_est"}, $$\sup_{{\boldsymbol x}\in {\mathcal D}_{r_4}}\, {\boldsymbol P}_{\boldsymbol{x}}(\mathcal{A}_{\epsilon}^{c}) \,\le\, C_0\, \epsilon^{8}\, t^8_\epsilon \,=\, C_0\, \epsilon^{8(1-\theta)}\;.$$ As $\theta<1/3$, the first assertion of ([\[eq:obj1\]](#eq:obj1){reference-type="ref" reference="eq:obj1"}) holds. We turn to the second assertion. By definition, there exists a positive constant $c_0$ such that $$\mathcal{B}_{\epsilon}^{c} \,=\, \Big\{ \, \sup_{s\in[0, t_\epsilon]} \left\langle \boldsymbol{\xi}(s),\, \mathbb{K}\boldsymbol{\xi}(s)\right\rangle \,\ge\, \frac{c_0}{\epsilon\, \sqrt{t_\epsilon}}\, \Big\} \;.$$ By the Markov inequality and Lemma [ 13](#lem:mom_Yt_sup){reference-type="ref" reference="lem:mom_Yt_sup"}, $$\sup_{{\boldsymbol x}\in {\mathcal D}_{r_4}}\, {\boldsymbol P}_{\boldsymbol{x}}(\mathcal{B}_{\epsilon}^{c}) \,\le\, C_0 \, \epsilon^8 \, t^4_\epsilon \, \sup_{{\boldsymbol x}\in {\mathcal D}_{r_4}}\, {\boldsymbol E}_{\boldsymbol{x}}\Big[\, \sup_{s\in[0,t_\epsilon]} \left\langle \boldsymbol{\xi}(s),\, \mathbb{K}\boldsymbol{\xi}(s)\right\rangle ^{8}\, \Big] \,\le\, C_0\, \epsilon^{8-12\theta}\;.$$ This proves the second assertion in ([\[eq:obj1\]](#eq:obj1){reference-type="ref" reference="eq:obj1"}) since $\theta<1/3$. ◻ # Local ergodicity {#sec2} Fix $\lambda>0$, ${\boldsymbol g}\colon {\mathcal M}_0 \rightarrow\mathbb{R}$, and recall that we denote by $\phi_{\epsilon} = \phi_{\epsilon}^{\lambda, {\boldsymbol g}}$ the unique solution of the resolvent equation [\[e_res\]](#e_res){reference-type="eqref" reference="e_res"}. The main result of this section states that the solution $\phi_{\epsilon}$ is asymptotically constant on each well ${\mathcal E}({\boldsymbol m})$. **Theorem 14**. *Fix $\lambda>0$ and ${\boldsymbol g}\colon {\mathcal M}_0 \rightarrow\mathbb{R}$.
For all ${\boldsymbol m} \in {\mathcal M}_0$, $$\lim_{\epsilon\rightarrow0}\, \sup_{x, y\in {\mathcal E}({\boldsymbol m})} \vert\, \phi_{\epsilon} (x) - \phi_{\epsilon} (y) \, \vert \;=\; 0\;.$$* Recall from [\[sde\]](#sde){reference-type="eqref" reference="sde"} that we represent by ${\boldsymbol x}_\epsilon (\cdot)$ the diffusion process induced by the generator ${\mathcal L}_\epsilon$. The proof of Theorem [ 14](#p_flat){reference-type="ref" reference="p_flat"} is based on mixing properties of ${\boldsymbol x}_\epsilon (\cdot)$ obtained in [@fw98; @BJ; @LeeRamSeo]. Denote by $\color{blue} \mathbb{P}_{\boldsymbol{z}}^{\epsilon}$, ${\boldsymbol z}\in {\mathbb R}^d$, the law of $\bm{x}_{\epsilon}(\cdot)$ starting from $\boldsymbol{z}$. Expectation with respect to $\mathbb{P}_{\boldsymbol{z}}^{\epsilon}$ is represented by $\color{blue} \mathbb{E}_{\boldsymbol{z}}^{\epsilon}$. We start with elementary facts. By equation (1.3) in [@BEGK], the conditions in [\[26\]](#26){reference-type="eqref" reference="26"} guarantee that the partition function $Z_\epsilon$, defined by $$\label{e: def_Zeps} {\color{blue} Z_{\epsilon}} \;:=\; \int_{\mathbb{R}^{d}}\,e^{-U(\bm{x})/\epsilon}\, d\bm{x}$$ is finite. In particular, the Gibbs measure $$\mu_{\epsilon}(d\boldsymbol{x}) \;:=\; Z_{\epsilon}^{-1} \,e^{-U(\bm{x})/\epsilon}\,d\boldsymbol{x} \;:=\; \mu_{\epsilon}(\bm{x}) \,d\boldsymbol{x}$$ is well defined. Moreover, by Theorems 2.2 and 2.3 in [@LS-22], the diffusion ${\boldsymbol x}_\epsilon (\cdot)$ is positive recurrent and $\mu_\epsilon$ is its unique invariant measure.
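For intuition on the small-$\epsilon$ behaviour of $Z_\epsilon$, one may compare the integral with the classical one-dimensional Laplace approximation $\int e^{-U/\epsilon}\,dx \approx (2\pi\epsilon)^{1/2}\sum_{\boldsymbol m} (U''({\boldsymbol m}))^{-1/2}$, the sum running over the absolute minima of $U$. The double-well potential in the sketch below is a purely illustrative choice.

```python
import numpy as np

# Toy check of the Laplace asymptotics Z_eps ~ (2 pi eps)^{d/2} nu_star in d = 1,
# for the illustrative double well U(x) = (x^2 - 1)^2, with minima at +1 and -1,
# U''(+-1) = 8, and min U = 0.
U = lambda x: (x**2 - 1.0)**2
eps = 0.01

x = np.linspace(-3.0, 3.0, 200001)
Z_num = np.sum(np.exp(-U(x) / eps)) * (x[1] - x[0])   # Riemann sum for Z_eps

nu_star = 2.0 / np.sqrt(8.0)                          # two minima, det U'' = 8
Z_laplace = np.sqrt(2.0 * np.pi * eps) * nu_star

print(round(Z_num / Z_laplace, 2))   # ratio close to 1 for small eps
```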
On the other hand, as we assumed that $\min_{{\boldsymbol x}\in{\mathbb R}^d} U({\boldsymbol x})=0$, by [@LS-22b Proposition 3.2] or a straightforward computation, if $\color{blue} {\mathcal M}_\star$ represents the set of absolute minima of $U$, $$\label{32} Z_{\epsilon} \,=\,[\,1+o_{\epsilon}(1)\,] \,(2\pi\epsilon)^{d/2}\,\nu_{\star}\;, \quad \text{where}\quad {\color{blue} \nu_{\star}} \;=\; \sum_{{\boldsymbol m}\in {\mathcal M}_\star} \frac{1} {\sqrt{\det\nabla^{2}U(\boldsymbol{{\boldsymbol m}})}} \;,$$ and, for a local minimum ${\boldsymbol m}\in {\mathcal M}_0$, $$\label{55} \mu_\epsilon ({\mathcal E}({\boldsymbol m})) \, e^{U({\boldsymbol m})/\epsilon} \;=\; [\,1+o_{\epsilon}(1)\,] \, \frac{\nu({\boldsymbol m})}{\nu_{\star}} \;\cdot$$ In this formula, and throughout the article, $\color{blue} o_{\epsilon}(1)$ represents a remainder which vanishes as $\epsilon\to0$, and $\nu({\boldsymbol m})$ has been introduced in [\[eq:nu\]](#eq:nu){reference-type="eqref" reference="eq:nu"}. Denote by $\tau_{\mathcal{A}}$, $\mathcal{A} \subset \mathbb{R}^{d}$, the hitting time of the set $\mathcal{A}$: $$\label{41} {\color{blue} \tau_{\mathcal{A}}} \;:=\; \inf\{\, t\ge0 : {\boldsymbol x}_\epsilon (t) \in {\mathcal A}\,\} \;.$$ Recall from [\[30\]](#30){reference-type="eqref" reference="30"} the definition of ${\mathcal W}^r({\boldsymbol m})$. Conditions (b) and (c) in the definition of ${\mathcal E}({\boldsymbol m})$ guarantee that the hypotheses of Theorem 6.2 in Chapter 6 of [@fw98] are fulfilled. This result asserts the following. **Proposition 15**. *Fix $h<H$, and denote by ${\mathcal A}$, ${\mathcal B}$ a connected component of the set $\{{\boldsymbol x}: U({\boldsymbol x}) < h\}$, $\{{\boldsymbol x}: U({\boldsymbol x}) < H\}$, respectively. Assume that ${\mathcal A} \subset {\mathcal B}$. Suppose that all critical points ${\boldsymbol c}$ of $U$ in ${\mathcal A}$ are such that $U({\boldsymbol c}) \le h_0$ for some $h_0 <h$.
Then, for all $\eta>0$, $$\label{60} \limsup_{\epsilon\to 0} \sup_{\boldsymbol{x}\in {\mathcal A}} \,\mathbb{P}_{\boldsymbol{x}}^{\epsilon} \left[\,\tau_{\partial {\mathcal B}} <e^{(H-h_0-\eta)/\epsilon}\,\right] \;=\; 0\;.$$ In particular, for all ${\boldsymbol m}\in{\mathcal M}_0$, $\eta>0$, $$\limsup_{\epsilon\to 0} \sup_{\boldsymbol{x}\in \mathcal{E} ({\boldsymbol m})} \,\mathbb{P}_{\boldsymbol{x}}^{\epsilon} \left[\,\tau_{\partial\mathcal{W}^{2r_0}({\boldsymbol m})} <e^{(r_0-\eta)/\epsilon}\,\right] \;=\; 0\;.$$* The estimate in [@fw98 Theorem 6.6.2] is uniform over initial points ${\boldsymbol z}$ belonging to neighborhoods of critical points. We claim that it holds uniformly over initial points ${\boldsymbol x}\in{\mathcal A}$. Indeed, by [@fw98 Theorem 2.1.2], since the set ${\mathcal A}$ is bounded, if we denote by ${\mathcal N}$ the union of neighborhoods of all critical points of $U$ in ${\mathcal A}$, there exists $T_0<\infty$ such that $$\label{61} \liminf_{\epsilon\to 0} \inf_{\boldsymbol{x}\in {\mathcal A}} \,\mathbb{P}_{\boldsymbol{x}}^{\epsilon} \left[\,\tau_{\partial {\mathcal N}} < T_0 \,,\, \tau_{\partial {\mathcal N}} < \tau_{\partial {\mathcal B}} \,\right] \;=\; 1\;.$$ Assertion [\[60\]](#60){reference-type="eqref" reference="60"} follows from [\[61\]](#61){reference-type="eqref" reference="61"}, the strong Markov property and [@fw98 Theorem 6.6.2]. Moreover, we could replace $h_0$ by the minimal value of $U$ on ${\mathcal A}$, but that will not be needed below. ## Mixing times {#mixing-times .unnumbered} Fix ${\boldsymbol m}\in {\mathcal M}_0$. All constants, functions, processes which appear in this subsection depend on ${\boldsymbol m}$, but this dependence is omitted in the notation. Let ${\boldsymbol b}_0 \colon {\mathbb R}^d \to {\mathbb R}^d$ be the field of class $C^1$ defined in Appendix [12](#sec-ap4){reference-type="ref" reference="sec-ap4"}.
By [\[eq:vecb\]](#eq:vecb){reference-type="eqref" reference="eq:vecb"} and condition (d) in the definition of $r_0$, ${\boldsymbol b}_0({\boldsymbol x}) = {\boldsymbol b} ({\boldsymbol x})$ for ${\boldsymbol x}\in {\mathcal W}^{3r_0}({\boldsymbol m})$. By Proposition [ 63](#pap4){reference-type="ref" reference="pap4"}, the vector field ${\boldsymbol b}_0$ satisfies the hypotheses of Section [3](#sec-ap3){reference-type="ref" reference="sec-ap3"}. Denote by $\color{blue} \boldsymbol{x}^{F}_{\epsilon}(\cdot)$ the diffusion process [\[sde\]](#sde){reference-type="eqref" reference="sde"} with the vector field ${\boldsymbol b}_0$ replacing ${\boldsymbol b}$. Let $\color{blue} \mathbb{P}_{\boldsymbol{z}}^{\epsilon,\,F}$, ${\boldsymbol z}\in {\mathbb R}^d$, be the law of $\bm{x}^{F}_{\epsilon}(\cdot)$ starting from $\boldsymbol{z}$, and $\color{blue} p_{\epsilon}^{F}(\bm{z},\,\cdot \, ;t)$ its transition kernel: $$p_{\epsilon}^{F}(\bm{z},\,\mathcal{B}\,;t) \;=\; \mathbb{P}_{\boldsymbol{z}}^{\epsilon,\,F} \left[\,\bm{x}^{F}_{\epsilon}(t)\in\mathcal{B}\,\right]\;, \quad \boldsymbol{z}\in {\mathbb R}^d \;,\;\; \mathcal{B}\subseteq {\mathbb R}^d \;.$$ Denote by $\color{blue} \mu_{\epsilon}^{F}$ the stationary state of the process $\bm{x}^{F}_{\epsilon}(\cdot)$. *Proof of Theorem [ 14](#p_flat){reference-type="ref" reference="p_flat"}.* Fix $\bm{m}\in\mathcal{M}_{0}$. Let $$\label{35} {\color{blue} {{\boldsymbol f}}_{\epsilon}(\bm{m})} \,:=\, \int_{{\mathbb R}^d}\phi_{\epsilon} (\bm{x})\, \mu_{\epsilon}^{F} (d\boldsymbol{x})\ .$$ It is enough to prove that for all $\bm{m}\in\mathcal{M}_{0}$, $$\lim_{\epsilon\to0}\,\sup_{\bm{x}\in\mathcal{E}(\bm{m})}\, |\, \phi_{\epsilon}(\bm{x})-{\boldsymbol f}_{\epsilon}(\bm{m})\,| \,=\, 0 \;.$$ Recall from [\[e_res\]](#e_res){reference-type="eqref" reference="e_res"} the definition of the function $G\colon {\mathbb R}^d \to {\mathbb R}$.
By the stochastic representation of the resolvent equation, $$\label{exp_phi-1} \phi_{\epsilon}(\bm{x}) \;=\; \mathbb{E}_{\bm{x}}^{\epsilon}\Big[\, \int_{0}^{\infty}e^{-\lambda s}\, G(\bm{x}_{\epsilon}(\theta_{\epsilon}^{(1)}s))\,ds\, \Big]\ .$$ Fix $0< a <1/3$, $0<\eta < r_0/2$, and let $\varrho_\epsilon = \epsilon^{-a}$. By the definition of $\theta_{\epsilon}^{(1)}$, $$\label{58} \varrho_{\epsilon} \prec e^{(r_0-\eta)/\epsilon} \prec \theta_{\epsilon}^{(1)} \;.$$ Since $\varrho_{\epsilon}\prec\theta_{\epsilon}^{(1)}$ and $G$ is bounded, $$\phi_{\epsilon}(\bm{x}) \;=\; \mathbb{E}_{\bm{x}}^{\epsilon} \Big[\, \int_{\varrho_{\epsilon}/\theta_{\epsilon}^{(1)}}^{\infty} e^{-\lambda s}\,G(\bm{x}_{\epsilon}(\theta_{\epsilon}^{(1)}s))\,ds \,\Big] \;+\; R_{\epsilon} ({\boldsymbol x}) \;,$$ where, here and below, $R_{\epsilon} ({\boldsymbol x})$ represents an error whose value may change from line to line and such that $$\limsup_{\epsilon\to 0} \sup_{{\boldsymbol y} \in {\mathcal E}({\boldsymbol m})} |\, R_{\epsilon} ({\boldsymbol y})\,| \;=\; 0\;.$$ By the Markov property, $$\begin{aligned} \phi_{\epsilon}(\bm{x}) \;&=\; [ 1 + R_{\epsilon} ({\boldsymbol x})] \, \mathbb{E}_{\bm{x}}^{\epsilon} \Big[\, \mathbb{E}_{\bm{x}_{\epsilon}(\varrho_{\epsilon})} \Big[\, \int_{0}^{\infty}e^{-\lambda s}\, G(\bm{x}_{\epsilon}(\theta_{\epsilon}^{(1)}s))\,ds\, \Big]\,\Big] \;+\; R_{\epsilon} ({\boldsymbol x}) \\ & =\; \mathbb{E}_{\bm{x}}^{\epsilon} \big[\, \phi_{\epsilon} (\bm{x}_{\epsilon}(\varrho_{\epsilon})) \,\big] \;+\; R_{\epsilon} ({\boldsymbol x}) \end{aligned}$$ because $G$ is bounded.
As $\varrho_{\epsilon}\prec e^{(r_0-\eta)/\epsilon}$, by Proposition [ 15](#p_FW){reference-type="ref" reference="p_FW"} and since $\phi_{\epsilon}$ is uniformly bounded by $(1/\lambda)\, \Vert {\boldsymbol g}\Vert_\infty$, $$\mathbb{E}_{\bm{x}}^{\epsilon} \left[\phi_{\epsilon} (\bm{x}_{\epsilon}(\varrho_{\epsilon}))\right] \;=\; \mathbb{E}_{\bm{x}}^{\epsilon}\left[ \phi_{\epsilon}(\bm{x}_{\epsilon}(\varrho_{\epsilon})) \, {\bf 1}\{\varrho_{\epsilon}<\tau_{\left(\mathcal{W}^{2r_0}(\bm{m})\right)^{c}}\} \right] \,+\, R_{\epsilon} ({\boldsymbol x})\;.$$ Recall that ${\boldsymbol b}$ and ${\boldsymbol b}_0$ coincide on ${\mathcal W}^{3r_0} ({\boldsymbol m})$. By coupling the diffusions ${\boldsymbol x}_\epsilon (\cdot)$, ${\boldsymbol x}^F_\epsilon (\cdot)$, and in view of Proposition [ 15](#p_FW){reference-type="ref" reference="p_FW"}, the previous expectation is equal to $$\begin{aligned} \mathbb{E}_{\bm{x}}^{\epsilon, F }\left[\phi_{\epsilon} (\bm{x}^{F}_{\epsilon} (\varrho_{\epsilon}))\, {\bf 1}\{\varrho_{\epsilon}<\tau_{\left(\mathcal{W}^{2r_0}(\bm{m})\right)^{c}}\} \right] \; =\, \mathbb{E}_{\bm{x}}^{\epsilon , F} \left[\phi_{\epsilon}(\bm{x}^F_{\epsilon} (\varrho_{\epsilon}))\right] \,+\, R_{\epsilon} ({\boldsymbol x}) \ .\end{aligned}$$ Mind that we changed the measure. By condition (e) in the definition of $r_0$, ${\mathcal E}({\boldsymbol m}) \subset {\mathcal W}^{2r_0} ({\boldsymbol m}) \subset {\mathcal D}_{r_4} ({\boldsymbol m})$.
Hence, by Theorem [ 3](#t_main2){reference-type="ref" reference="t_main2"} and since $\phi_{\epsilon}$ is uniformly bounded, $$\mathbb{E}_{\bm{x}}^{\epsilon, F}\left[\phi_{\epsilon} (\bm{x}^F_{\epsilon} (\varrho_{\epsilon}))\right] \;=\; \int_{{\mathbb R}^d}\phi_{\epsilon} (\bm{y})\, p_{\epsilon}^{F}(\boldsymbol{x},\, d \boldsymbol{y}; \varrho_{\epsilon}) \;= \; \int_{{\mathbb R}^d}\phi_{\epsilon}(\bm{y})\, \mu_{\epsilon}^{F} (d\bm{y}) \,+\, R_{\epsilon} ({\boldsymbol x}) \;.$$ As the right-hand side is equal to ${{\boldsymbol f}}_{\epsilon}(\bm{m}) \,+\, R_{\epsilon} ({\boldsymbol x})$, the theorem is proved. ◻ Recall the definition of the sequence $\varrho_\epsilon$ introduced in [\[58\]](#58){reference-type="eqref" reference="58"}. The proof of Theorem [ 14](#p_flat){reference-type="ref" reference="p_flat"} yields the following result. **Lemma 16**. *Fix $\bm{m}\in\mathcal{M}_{0}$, $b>0$. Then, for all $\mathcal{A}\subset {\mathbb R}^d$, $$\limsup_{\epsilon\rightarrow0}\sup_{t\in[2b,\,4b]} \sup_{\boldsymbol{x}\in\mathcal E(\bm{m})} \left|\, {\mathbb P}_{\boldsymbol{x}}^{\epsilon} \big[\,\bm{x}_{\epsilon}(t\theta_{\epsilon}^{(1)})\in\mathcal{A}\,\big] \,-\, {\mathbb P}_{\mu_{\epsilon}^{F}}^{\epsilon} \big[\,\bm{x}_{\epsilon}(t\theta_{\epsilon}^{(1)} - \varrho_\epsilon)\in\mathcal{A}\,\big]\, \right|=0 \;.$$* Denote by $\color{blue} {\boldsymbol x}^{\rm R}_\epsilon (\cdot)$ the diffusion ${\boldsymbol x}_\epsilon (\cdot)$ reflected at the boundary of ${\mathcal W}^{2r_{0}}(\bm{m})$. Note that we omitted the dependence of ${\boldsymbol x}^{\rm R}_\epsilon (\cdot)$ on ${\boldsymbol m}$. Denote by $\color{blue} \mu^{\rm R}_\epsilon$ the measure $\mu_\epsilon$ conditioned to ${\mathcal W}^{2r_{0}}(\bm{m})$, which is the invariant measure of the diffusion ${\boldsymbol x}^{\rm R}_\epsilon (\cdot)$.
Let finally $\color{blue} \mathbb{P}^{\epsilon,\rm R}_{\boldsymbol{z}}$, ${\boldsymbol z}\in {\mathcal W}^{2r_{0}}(\bm{m})$, be the law of $\bm{x}^{\rm R}_{\epsilon}(\cdot)$ starting from $\boldsymbol{z}$. Recall that we denote by $d_{\rm TV}(\mu, \nu)$ the total variation distance between two probability measures $\nu$, $\mu$ defined on ${\mathbb R}^d$. Let $\mu^{{\mathcal E}({\boldsymbol m})}_\epsilon$ be the measure $\mu_\epsilon$ conditioned to ${\mathcal E}({\boldsymbol m})$. We claim that $$\label{59} \limsup_{\epsilon\rightarrow0} d_{\rm TV}(\mu^{{\mathcal E}({\boldsymbol m})}_\epsilon , \mu^{F}_\epsilon) \,=\, 0\;.$$ Indeed, fix ${\mathcal A} \subset {\mathbb R}^d$, ${\boldsymbol x}\in {\mathcal E}({\boldsymbol m})$, and recall the sequence $\varrho_\epsilon$ introduced in [\[58\]](#58){reference-type="eqref" reference="58"}. By stationarity and Theorem [ 3](#t_main2){reference-type="ref" reference="t_main2"}, $$\mu^{F}_\epsilon ({\mathcal A}) \,=\, {\mathbb P}_{\mu_{\epsilon}^{F}}^{\epsilon, F} \big[\,\bm{x}^F_{\epsilon}(\varrho_\epsilon)\in\mathcal{A}\,\big] \,=\, {\mathbb P}_{{\boldsymbol x}}^{\epsilon, F} \big[\,\bm{x}^F_{\epsilon}(\varrho_\epsilon)\in\mathcal{A}\,\big] \,+\, R_\epsilon ({\boldsymbol x}) \;,$$ where we adopted the convention established in the proof of Theorem [ 14](#p_flat){reference-type="ref" reference="p_flat"} for the remainder $R_\epsilon ({\boldsymbol x})$. As in the proof of Theorem [ 14](#p_flat){reference-type="ref" reference="p_flat"}, introduce the event $\{\tau_{\partial \mathcal{W}^{2r_0}(\bm{m}) } \le \varrho_{\epsilon}\}$ and its complement. On the event $\{\tau_{\partial \mathcal{W}^{2r_0}(\bm{m})} > \varrho_{\epsilon}\}$ we may replace the set ${\mathcal A}$ by ${\mathcal A} \cap {\mathcal W}^{2r_0}(\bm{m})$, and couple the processes $\bm{x}^F_{\epsilon}(\cdot)$ and $\bm{x}^{\rm R}_{\epsilon} (\cdot)$ up to time $\varrho_{\epsilon}$.
Therefore, the probability on the right-hand side of the previous displayed equation is equal to $${\mathbb P}_{{\boldsymbol x}}^{\epsilon, \rm R} \big[\,\bm{x}^R_{\epsilon}(\varrho_\epsilon)\in\mathcal{A} \cap {\mathcal W}^{2r_0}(\bm{m})\,\big] \,+\, R^{(2)}_\epsilon \;=\; {\mathbb P}_{{\boldsymbol x}}^{\epsilon, \rm R} \big[\,\bm{x}^R_{\epsilon}(\varrho_\epsilon)\in\mathcal{A} \,\big] \,+\, R^{(2)}_\epsilon \;,$$ where $|R^{(2)}_\epsilon| \le 2 \sup_{{\boldsymbol z}\in {\mathcal E}({\boldsymbol m})} {\mathbb P}_{{\boldsymbol z}}^{\epsilon} [\, \tau_{\partial \mathcal{W}^{2r_0}(\bm{m}) } \le \varrho_{\epsilon} \,]$. Here, we removed the set ${\mathcal W}^{2r_0}(\bm{m})$ because ${\boldsymbol x}^{\rm R}$ takes values in this set. By Proposition [ 15](#p_FW){reference-type="ref" reference="p_FW"}, $R^{(2)}_\epsilon \to 0$. Since the previous estimates are uniform over ${\boldsymbol x}\in {\mathcal E}({\boldsymbol m})$, we may average the probability appearing on the right-hand side of the previous displayed equation with respect to the measure $\mu^{{\mathcal E}({\boldsymbol m})}_\epsilon$ to get that $$\mu^{F}_\epsilon ({\mathcal A}) \,=\, {\mathbb P}_{\mu^{{\mathcal E}({\boldsymbol m})}_\epsilon}^{\epsilon, \rm R} \big[\,\bm{x}^R_{\epsilon}(\varrho_\epsilon)\in\mathcal{A} \,\big] \,+\, o_\epsilon (1) \;.$$ Rewrite the previous probability as $${\mathbb P}_{\mu^{{\mathcal E}({\boldsymbol m})}_\epsilon}^{\epsilon, \rm R} \big[\,\bm{x}^R_{\epsilon}(\varrho_\epsilon)\in\mathcal{A} \,\big] \;=\; \frac{1}{\mu_\epsilon ({\mathcal E}({\boldsymbol m}))} \, \int_{{\mathcal E}({\boldsymbol m})} {\mathbb P}_{{\boldsymbol y}}^{\epsilon, \rm R} \big[\,\bm{x}^R_{\epsilon}(\varrho_\epsilon)\in\mathcal{A} \,\big] \mu_\epsilon (d{\boldsymbol y}) \;.$$ The measure $\mu^{{\mathcal E}({\boldsymbol m})}_\epsilon$ is also the measure $\mu^{\rm R}_\epsilon$ conditioned to ${\mathcal E}(\bm{m})$.
Since $\mu_\epsilon ({\mathcal W}^{2r_0}(\bm{m}) \setminus {\mathcal E}(\bm{m})) / \mu_\epsilon ({\mathcal E}(\bm{m})) \to 0$, the previous expression is equal to $${\mathbb P}_{\mu^{\rm R}_\epsilon}^{\epsilon, \rm R} \big[\,\bm{x}^R_{\epsilon}(\varrho_\epsilon)\in\mathcal{A} \,\big] \,+\, R^{(3)}_\epsilon\;,$$ where $R^{(3)}_\epsilon \to 0$. Since $\mu^{\rm R}_\epsilon$ is the stationary state, the previous probability is equal to $\mu^{\rm R}_\epsilon ({\mathcal A})$. Putting together the previous estimates yields that $$\limsup_{\epsilon \to 0} \sup_{{\mathcal A} \subset {\mathbb R}^d} \, \big|\, \mu^{F}_\epsilon ({\mathcal A}) \,-\, \mu^{\rm R}_\epsilon ({\mathcal A})\,\big|\; = \; 0\;,$$ as claimed in [\[59\]](#59){reference-type="eqref" reference="59"}. The next result follows from Lemma [ 16](#l14){reference-type="ref" reference="l14"} and [\[59\]](#59){reference-type="eqref" reference="59"}. Note that the measure $\mu_{\epsilon}^{F}$ has been replaced by $\mu_{\epsilon}^{\rm R}$. **Corollary 17**. *Fix $\bm{m}\in\mathcal{M}_{0}$, $b>0$, $\mathcal{A}\subset {\mathbb R}^d$. Then, $$\limsup_{\epsilon\rightarrow0}\sup_{t\in[2b,\,4b]} \sup_{\boldsymbol{x}\in\mathcal E(\bm{m})} \left|\, {\mathbb P}_{\boldsymbol{x}}^{\epsilon} \big[\,\bm{x}_{\epsilon}(t\theta_{\epsilon}^{(1)})\in\mathcal{A}\,\big] \,-\, {\mathbb P}_{\mu_{\epsilon}^{\rm R}}^{\epsilon} \big[\,\bm{x}_{\epsilon}(t\theta_{\epsilon}^{(1)} - \varrho_\epsilon)\in\mathcal{A}\,\big]\, \right|=0 \;.$$* # Exiting neighborhoods of unstable critical points {#sec5} The main result of this section, Proposition [ 22](#p:Kifer){reference-type="ref" reference="p:Kifer"}, asserts that the time necessary for the diffusion ${\boldsymbol x}_\epsilon(\cdot)$ to leave neighborhoods of unstable critical points is bounded by $\epsilon^{-1}$. It also characterizes the exiting sets.
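As a numerical illustration of this time scale (the linear drift and all constants below are toy choices, not the objects of the proposition): near a hyperbolic unstable point the noise, of size $\sqrt{2\epsilon}$, is amplified exponentially by the linearised dynamics, so the exit from a fixed neighbourhood occurs after a time of order $\log(1/\epsilon)$, well below $\epsilon^{-1}$.

```python
import numpy as np

# Exit of dX = X dt + sqrt(2 eps) dW from the interval (-1, 1), starting at the
# unstable equilibrium X = 0.  The linear drift and all constants are toy
# choices; exit times concentrate around (1/2) log(1/eps), well below 1/eps.
rng = np.random.default_rng(1)
eps, dt = 1e-2, 1e-3

def exit_time():
    x, t = 0.0, 0.0
    while abs(x) < 1.0:
        x += x * dt + np.sqrt(2 * eps * dt) * rng.normal()  # Euler-Maruyama step
        t += dt
    return t

mean_exit = np.mean([exit_time() for _ in range(50)])
print(0.0 < mean_exit < 1.0 / eps)   # exit happens long before time eps^{-1}
```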
Recall that $\mathcal{C}_{0}$ denotes the set of critical points of $U$ and set $${\color{blue} \mathcal{Y}_{0}} \,:=\, \mathcal{C}_{0}\setminus\mathcal{M}_{0}\;,$$ so that $\mathcal{Y}_{0}$ stands for the collection of critical points of $U$ with index larger than $0$. By [@LS-22 Theorem 2.1], $\mathcal{M}_{0}$ and $\mathcal{Y}_{0}$ are the set of stable and unstable equilibria of the dynamical system [\[31\]](#31){reference-type="eqref" reference="31"}, respectively. Let $\color{blue} \mathbb{H}^{\boldsymbol{c}}=(\nabla^{2}U)(\boldsymbol{c})$, $\color{blue} \mathbb{L}^{\boldsymbol{c}}=(\nabla\cdot\boldsymbol{\ell})(\boldsymbol{c})$, $\boldsymbol{c}\in\mathcal{C}_{0}$, so that $\mathbb{H}^{\boldsymbol{c}}+\mathbb{L}^{\boldsymbol{c}}$ denotes the Jacobian of the drift ${\boldsymbol b}$ at the critical point ${\boldsymbol c}$. Next result asserts that critical points in $\mathcal{Y}_{0}$ are hyperbolic. ** 18**. *Fix $\bm{c}\in\mathcal{Y}_{0}$. Then, the matrix $\mathbb{H}^{\bm{c}}+\mathbb{L}^{\bm{c}}$ is invertible and does not have a pure imaginary eigenvalue.* *Proof.* Suppose, by contradiction, that $ai$, $a\in\mathbb{R}$, is an eigenvalue of $\mathbb{H}^{\bm{c}}+\mathbb{L}^{\bm{c}}$. Denote by $\bm{v}$ the unit eigenvector corresponding to $ai$ so that $(\mathbb{H}^{\bm{c}}+\mathbb{L}^{\bm{c}})\bm{v}=ai\bm{v}$. 
Thus, if $\color{blue} {\mathbb A}^\dagger$ represents the transpose of the matrix ${\mathbb A}$, $$\begin{aligned} ai\bm{v}\cdot ai\bm{v} & =\bm{v}\cdot(\mathbb{H}^{\bm{c}} +\mathbb{L}^{\bm{c}})^{\dagger}(\mathbb{H}^{\bm{c}} +\mathbb{L}^{\bm{c}})\bm{v} \\ & =\bm{v}\cdot\left\{ (\mathbb{H}^{\bm{c}})^{\dagger} \mathbb{H}^{\bm{c}}+(\mathbb{H}^{\bm{c}})^{\dagger} \mathbb{L}^{\bm{c}}+(\mathbb{L}^{\bm{c}})^{\dagger} \mathbb{H}^{\bm{c}}+(\mathbb{L}^{\bm{c}})^{\dagger} \mathbb{L}^{\bm{c}}\right\} \bm{v}\;.\end{aligned}$$ By [@LS-22 Lemma 4.5], the matrix ${\mathbb H}^{\bm{c}} {\mathbb L}^{\bm{c}}$ is skew-symmetric. Since $\mathbb{H}^{\bm{c}}$ is symmetric, $(\mathbb{H}^{\bm{c}})^{\dagger} \mathbb{L}^{\bm{c}}+(\mathbb{L}^{\bm{c}})^{\dagger} \mathbb{H}^{\bm{c}} = \mathbb{H}^{\bm{c}} \mathbb{L}^{\bm{c}} + (\mathbb{H}^{\bm{c}} \mathbb{L}^{\bm{c}})^{\dagger} = 0$, so that the cross terms cancel and $$\begin{aligned} -\, a^{2}\|\bm{v}\|^{2} & =\|\mathbb{H}^{\bm{c}}\bm{v}\|^{2} +\|\mathbb{L}^{\bm{c}}\bm{v}\|^{2}\;,\end{aligned}$$ which is a contradiction if $a\neq0$. If $a=0$, $\mathbb{H}^{\bm{c}}\bm{v}=0$, which implies that $\boldsymbol{v}=0$ since $\mathbb{H}^{\boldsymbol{c}}$ is invertible. This is also a contradiction to the fact that $\boldsymbol{v}$ is a unit vector. ◻ ## The Hartman-Grobman theorem {#the-hartman-grobman-theorem .unnumbered} Fix from now on a critical point ${\boldsymbol c}\in {\mathcal Y}_0$ of index $k\ge1$. In this subsection, we use the Hartman-Grobman theorem [@Chicone Theorem 1.47], [@Perko Section 2.8], to define a neighborhood of ${\boldsymbol c}$. Denote by $\color{blue} \upsilon_{{\boldsymbol x}} (t)$, $\bm{x} \in{\mathbb R}^d$, $t\ge0$, the solution of the ODE [\[31\]](#31){reference-type="eqref" reference="31"} starting from ${\boldsymbol x}$, and by $\color{blue} \upsilon_{L, {\boldsymbol x}} (t) = \upsilon^{{\boldsymbol c}}_{L, {\boldsymbol x}} (t)$ the solution of the linear ODE $$\label{34} \dot {{\boldsymbol x}} (t) \,=\, -\, ( \mathbb{H}^{\bm{c}}+ \mathbb{L}^{\bm{c}} )\, (\boldsymbol{x} (t) -\boldsymbol{c} )$$ starting from ${\boldsymbol x}$. The Hartman-Grobman theorem, which can be applied in view of Lemma [ 18](#lem:hyper){reference-type="ref" reference="lem:hyper"}, reads as follows. ** 19**. 
*Fix $\boldsymbol{c}\in\mathcal{Y}_{0}$. There exist open neighborhoods $\mathcal{U}_{\boldsymbol{c}},\,\mathcal{U}^L_{\boldsymbol{c}}$ of $\boldsymbol{c}$ and a homeomorphism $\Xi \colon \mathcal{U}_{\boldsymbol{c}}\rightarrow\mathcal{U}^L_{\boldsymbol{c}}$ such that $\Xi(\boldsymbol{c})=\boldsymbol{c}$ and $\Xi ( \upsilon_{{\boldsymbol x}} (t) ) = \upsilon_{L,\Xi({\boldsymbol x})} (t)$ for all $({\boldsymbol x}, t)$ such that $\upsilon_{{\boldsymbol x}} (t) \in {\mathcal U}_{{\boldsymbol c}}$. In particular, $\boldsymbol{c}$ is the unique critical point of $U$ in $\mathcal{U}_{\boldsymbol{c}}$.* Denote by $\color{blue} {\mathcal M}_s = \mathcal{M}_s (\boldsymbol{c})$, $\color{blue} {\mathcal M}_u = \mathcal{M}_u (\boldsymbol{c})$ the stable, unstable manifold of $\boldsymbol{c}$ for the dynamical system [\[31\]](#31){reference-type="eqref" reference="31"}, respectively. Hence, for all ${\boldsymbol x} \in {\mathcal M}_s$, $\lim_{t\rightarrow \infty} \upsilon_{{\boldsymbol x}} (t) = \boldsymbol{c}$. In contrast, for all ${\boldsymbol y} \in {\mathcal M}_u$ there exists a solution ${\boldsymbol x}(t)$, $t\le 0$, of [\[31\]](#31){reference-type="eqref" reference="31"} such that $${\boldsymbol x}(0) = {\boldsymbol y} \;, \quad \lim_{t\to - \infty} {\boldsymbol x}(t) = {\boldsymbol c} \;.$$ Let $\color{blue} \mathcal{M}_{L,s}$, $\color{blue} \mathcal{M}_{L,u}$ be the stable, unstable manifold of ${\boldsymbol c}$ for the linear ODE [\[34\]](#34){reference-type="eqref" reference="34"}. By Theorem [ 19](#thm:H-G){reference-type="ref" reference="thm:H-G"}, on the set ${\mathcal U}^L_{{\boldsymbol c}}$, $\mathcal{M}_{L,s} = \Xi ({\mathcal M}_s)$, $\mathcal{M}_{L,u} = \Xi ({\mathcal M}_u)$. Choose $r_{1}>0$ so that $\color{blue} B({\boldsymbol c},\,r_{1})\subset{\mathcal U}^L_{{\boldsymbol c}}$. Let $\color{blue} \widehat {{\mathcal N}} = \widehat {{\mathcal N}} (\bm{c}) := \Xi^{-1}(B({\boldsymbol c} ,\,r_{1}))$. 
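For orientation, the linear flow [\[34\]](#34){reference-type="eqref" reference="34"} can be written in closed form; the following is a standard linear-algebra observation, not part of the argument above: $$\upsilon_{L, {\boldsymbol x}} (t) \,=\, {\boldsymbol c} \,+\, e^{- ( \mathbb{H}^{\bm{c}}+ \mathbb{L}^{\bm{c}} )\, t}\, ({\boldsymbol x} - {\boldsymbol c})\;, \qquad t \in {\mathbb R}\;.$$ In particular, $\mathcal{M}_{L,s} = {\boldsymbol c} + E_+$ and $\mathcal{M}_{L,u} = {\boldsymbol c} + E_-$, where $E_+$, $E_-$ denote the invariant subspaces of $\mathbb{H}^{\bm{c}}+ \mathbb{L}^{\bm{c}}$ associated with the eigenvalues of positive, negative real part, respectively. By Lemma [ 18](#lem:hyper){reference-type="ref" reference="lem:hyper"}, there are no eigenvalues on the imaginary axis, so that $E_+ \oplus E_- = {\mathbb R}^d$.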
For each $\boldsymbol{y}\in\widehat{{\mathcal N}} \setminus{\mathcal M}_s$, let $t(\boldsymbol{y})=t_{\boldsymbol{c}}(\boldsymbol{y})$ be the exit time from $\widehat{{\mathcal N}}$: $$\label{eq:overlinet} {\color{blue} t(\boldsymbol{y})} \, :=\, \inf \{t\ge0: \upsilon_{{\boldsymbol y}} (t) \not\in \widehat{{\mathcal N}} \} \;.$$ Clearly, $t(\boldsymbol{y}) = t_L (\Xi(\boldsymbol{y}))$ if $t_L ({\boldsymbol z})$ represents the exit time from $B({\boldsymbol c} ,\,r_{1})$ for the linear ODE [\[34\]](#34){reference-type="eqref" reference="34"} starting from ${\boldsymbol z}$. Denote by $\boldsymbol{e}(\boldsymbol{y}) = \boldsymbol{e}_{\boldsymbol{c}}(\boldsymbol{y})$ the exit location of the dynamical system [\[31\]](#31){reference-type="eqref" reference="31"} from the set $\widehat{{\mathcal N}}$: ${\color{blue} \boldsymbol{e}(\boldsymbol{y})} \,:=\; \upsilon_{{\boldsymbol y}} (t(\boldsymbol{y}))$. Here again, $$\label{eq:relp} \Xi(\bm{e}(\bm{y})) \;=\; {\boldsymbol e}_L (\Xi(\boldsymbol{y}))$$ provided $\color{blue} {\boldsymbol e}_L ({\boldsymbol z})$ stands for the exit location from the set $B({\boldsymbol c},\,r_{1})$ of the linear dynamical system [\[34\]](#34){reference-type="eqref" reference="34"} starting from ${\boldsymbol z}$. Let ${\mathcal J}^{a}_L = {\mathcal J}^{a}_L(\boldsymbol{c})$ be the set of elements of $\partial B (\boldsymbol{c},\,r_{1})$ at distance less than $a$ from ${\mathcal M}_{L,u} \cap\partial\mathcal{B}(\boldsymbol{c},\,r_{1})$: $${\color{blue} {\mathcal J}^a_L } \,:=\, \big\{ \boldsymbol{x}\in\partial B (\boldsymbol{c},\,r_{1}): \exists\, \boldsymbol{y}\in {\mathcal M}_{L,u} \cap\partial\mathcal{B}(\boldsymbol{c},\,r_{1}) \text{ such that }\Vert\boldsymbol{x}-\boldsymbol{y}\Vert<a\big\} \;.$$ Next result is an assertion about the linear ODE [\[34\]](#34){reference-type="eqref" reference="34"}. Its proof is presented in Appendix [14](#sec:ODE){reference-type="ref" reference="sec:ODE"}. ** 20**. 
*Fix $\boldsymbol{c}\in\mathcal{Y}_{0}$ and $a>0$. Then, there exists $0< r(a)<r_1$ such that $\boldsymbol{e}_L({\boldsymbol z}) \in {\mathcal J}^a_L$ for all ${\boldsymbol z}\in \mathcal{B}(\bm{c},\,r(a)) \setminus {\mathcal M}_{L,s}$.* We turn to the construction of a second neighborhood ${\mathcal N} \subset \widehat {{\mathcal N}}$. Since $\nabla U\cdot\boldsymbol{\ell}\equiv0$, $(d/dt) U(\upsilon_{{\boldsymbol x}}(t)) = -\,| \nabla U (\upsilon_{{\boldsymbol x}}(t)) |^{2} < 0$ for all ${\boldsymbol x} \notin \mathcal{C}_{0}$, $t>0$. Therefore, if ${\boldsymbol x}$ is not a critical point, $U (\upsilon_{{\boldsymbol x}}(t))$ is strictly decreasing in $t$, and there exists $\eta_{0}=\eta_{0}(r_1)>0$ such that $$\label{eq:Ubdr} \max_{\boldsymbol{x}\in {\mathcal M}_{u} \cap \partial\widehat{{\mathcal N}} } U(\boldsymbol{x})<U(\boldsymbol{c})-3\eta_{0}\;.$$ Take $\eta_{0}$ small enough so that there is no critical point $\boldsymbol{c}' \in {\mathcal C}_0$ such that $$\label{eq:nocr} U(\boldsymbol{c}')\in[U(\boldsymbol{c})-\eta_{0},\, U(\boldsymbol{c})) \;.$$ ** 21**. 
*For all $\boldsymbol{c}\in\mathcal{Y}_{0}$, there exists $r_{2}=r_{2}(\boldsymbol{c})>0$ such that $$\sup_{\boldsymbol{y}\in\Xi^{-1}(B(\bm{c},\,r_{2})) \setminus{\mathcal M}_s} U(\boldsymbol{e}(\boldsymbol{y}))\le U(\boldsymbol{c})-2\eta_{0}\;.$$* *Proof.* For $a>0$, let $$\mathcal{J}^{a} \,=\, \big\{ \, \boldsymbol{x}\in\partial\widehat{\mathcal{N}} : \exists\, \boldsymbol{y}\in{\mathcal M}_u \cap \partial\widehat{{\mathcal N}} \text{ such that }\Vert\boldsymbol{x}-\boldsymbol{y}\Vert<a\, \big\} \;.$$ By [\[eq:Ubdr\]](#eq:Ubdr){reference-type="eqref" reference="eq:Ubdr"} and the fact that $|\nabla U|$ is bounded on compact sets, there exists $a_{0}>0$ such that $$\label{eq:esc1} \sup_{\boldsymbol{x}\in\mathcal{J}^{a_{0}} } U (\boldsymbol{x})\le U(\boldsymbol{c})-2\eta_{0}\;.$$ Since $\Xi^{-1}\colon \mathcal{U}^L_{\bm{c}}\to\mathcal{U}_{\bm{c}}$ is continuous, it is uniformly continuous on the compact set $\overline {B(\bm{c},\,r_{1})}$. Therefore, there exists $b_{0}>0$ such that $\|\Xi^{-1}(\bm{x})-\Xi^{-1}(\bm{y})\|\le a_{0}$ for all $\boldsymbol{x},\,\boldsymbol{y}\in B(\bm{c},\,r_{1})$ satisfying $\|\bm{x}-\bm{y}\|\le b_{0}$. Consequently, $$\label{esc2} \Xi^{-1}(\mathcal{J}_L^{b_{0}}) \subset \mathcal{J}^{a_{0}}\ .$$ Let $r(b_0)>0$ be the positive constant whose existence is asserted in Lemma [ 20](#lem_esclin){reference-type="ref" reference="lem_esclin"}. Set $r_2 = r(b_0) \wedge r_1$. By Lemma [ 20](#lem_esclin){reference-type="ref" reference="lem_esclin"}, ${\boldsymbol e}_L (\Xi(\boldsymbol{y})) \in \mathcal{J}_L^{b_{0}}$ for $\Xi(\bm{y})\in B(\bm{c},\,r_{2})\setminus\Xi({\mathcal M}_s)$. 
Therefore, by [\[eq:relp\]](#eq:relp){reference-type="eqref" reference="eq:relp"} and [\[esc2\]](#esc2){reference-type="eqref" reference="esc2"}, for $\bm{y}\in\Xi^{-1}(B(\bm{c},\,r_{2}))\setminus{\mathcal M}_s$, $$\bm{e}(\bm{y}) = \Xi^{-1}({\boldsymbol e}_L (\Xi(\boldsymbol{y}))) \in \mathcal{J}^{a_{0}} \;.$$ This, along with [\[eq:esc1\]](#eq:esc1){reference-type="eqref" reference="eq:esc1"}, implies that $$\sup_{\boldsymbol{y}\in\Xi^{-1}(B(\bm{c},\,r_{2})) \setminus {\mathcal M}_s} U(\boldsymbol{e}(\boldsymbol{y}))\le U(\boldsymbol{c})-2\eta_{0}\;,$$ which completes the proof of the lemma. ◻ ## Exit problem from $\widehat{{\mathcal N}}$ {#exit-problem-from-widehatmathcal-n .unnumbered} Denote by $\color{blue} {\mathcal N} = {\mathcal N}({\boldsymbol c})$ the closure of the set $\Xi^{-1}(B(\boldsymbol{c},\,r_{2}))$, where $r_{2}$ has been introduced in Lemma [ 21](#lem_esc){reference-type="ref" reference="lem_esc"}. As the set $\widehat{{\mathcal N}}$ contains an unstable equilibrium $\boldsymbol{c}$, the exit problem from $\widehat{{\mathcal N}}$ does not follow from the Freidlin-Wentzell theory, but has been investigated in [@Kif]. ** 22**. *Fix $\bm{c}\in\mathcal{Y}_{0}$. Then, $$\limsup_{\epsilon\rightarrow0} \sup_{\bm{z}\in{\mathcal N} } \mathbb{P}_{\bm{z}}^{\epsilon} \big[\, U(\bm{x}_{\epsilon} (\tau_{\partial\widehat{{\mathcal N}}}))>U(\bm{c})-\eta_{0}\, \big]=0\;.$$ Moreover, for all $C>0$, $$\limsup_{\epsilon\rightarrow0} \sup_{\bm{z}\in{\mathcal N} } \mathbb{P}_{\bm{z}}^{\epsilon} \Big[\, \tau_{\partial\widehat{{\mathcal N}}} >\frac{C}{\epsilon}\, \Big]=0\;.$$* *Proof.* Since the set $\widehat{{\mathcal N}}$ contains only one unstable equilibrium, the second assertion of the proposition follows from [@Kif Theorem 2.1], which presents an estimate for a fixed starting point in the interior of $\widehat{{\mathcal N}}$. 
However, a careful reading of the proof reveals that all estimates hold uniformly on compact subsets of the interior of $\widehat{{\mathcal N}}$, such as ${\mathcal N}$. We turn to the first assertion of the proposition. Let $\mathcal{Q} \subset\partial\widehat{{\mathcal N}}$ be given by $$\mathcal{Q} = \{ \boldsymbol{e}(\boldsymbol{y}): \boldsymbol{y}\in{\mathcal N} \setminus\mathcal{M}_s \} \cup ( \mathcal{M}_u \cap \partial\widehat{{\mathcal N}} ) \;.$$ By [@Kif Theorem 2.3], for any open neighborhood $\mathcal{U}\subset\partial\widehat{{\mathcal N}}$ of $\mathcal{Q}$ in $\partial\widehat{{\mathcal N}}$, $$\limsup_{\epsilon\rightarrow0} \sup_{\bm{z}\in{\mathcal N}} \mathbb{P}_{\bm{z}}^{\epsilon} \big[\, \bm{x}_{\epsilon}(\tau_{\partial\widehat{{\mathcal N}}}) \notin\mathcal{U} \, \big] = 0\;.$$ Note that [@Kif Theorem 2.3] is stated for a fixed starting point in the interior of $\widehat{{\mathcal N}}$, but, as in the first part of the proof, all estimates in the proof of this result hold uniformly on compact subsets of the interior of $\widehat{{\mathcal N}}$. By [\[eq:Ubdr\]](#eq:Ubdr){reference-type="eqref" reference="eq:Ubdr"} and Lemma [ 21](#lem_esc){reference-type="ref" reference="lem_esc"}, $$\sup_{\boldsymbol{x}\in\mathcal{\mathcal{Q}} } U(\boldsymbol{x})\le U(\boldsymbol{c})-2\eta_{0}\;.$$ To complete the proof, it remains to choose a neighborhood $\mathcal{U}$ small enough so that $$\sup_{\boldsymbol{x}\in\mathcal{U}} U(\boldsymbol{x})\le U(\boldsymbol{c})-\eta_{0}\;.$$ ◻ # Hitting wells {#sec6} The main result of this section, Theorem [ 23](#t_hitting2){reference-type="ref" reference="t_hitting2"} below, asserts that starting from a compact set, the diffusion ${\boldsymbol x}_\epsilon (\cdot)$ hits some well ${\mathcal E}({\boldsymbol m})$ in a time bounded by $\epsilon^{-1}$. 
Denote by ${\mathcal E}({\mathcal A})$, $\mathcal{A}\subset\mathbb{R}^{d}$, the union of the wells in ${\mathcal A}$: $$\mathcal{E}(\mathcal{A})= \bigcup_{\bm{m}\in\mathcal{M}_{0}\cap\mathcal{A}}\mathcal{E}(\bm{m})\;.$$ Let $$\Lambda_H = \{\boldsymbol{x}\in\mathbb{R}^{d}:U(\boldsymbol{x})\le H\} \;, \quad H \in {\mathbb R} \;.$$ ** 23**. *Fix $H>\min_{\bm{x}\in\mathbb{R}^{d}}U(\bm{x})$. Suppose that there is no critical point $\boldsymbol{c}\in\mathcal{C}_{0}$ such that $U(\boldsymbol{c})=H$. Then, for all $C>0$, $$\limsup_{\epsilon\rightarrow0} \sup_{\bm{z}\in \Lambda_H} \mathbb{P}_{\bm{z}}^{\epsilon} \Big[\, \tau_{\mathcal{E}(\Lambda_H)}> \frac{C}{\epsilon}\, \Big]=0\;.$$ Fix $h_0<h_1$, and denote by ${\mathcal A}$, ${\mathcal B}$ a connected component of the set $\{{\boldsymbol x}\in{\mathbb R}^d : U({\boldsymbol x}) < h_0\}$, $\{{\boldsymbol x} \in{\mathbb R}^d : U({\boldsymbol x}) < h_1\}$, respectively. Assume that ${\mathcal A} \subset {\mathcal B}$, and that there are no critical points ${\boldsymbol c}$ of $U$ in ${\mathcal B} \setminus {\mathcal A}$. Then, for all $C>0$, $$\limsup_{\epsilon\rightarrow0} \sup_{\bm{z}\in {\mathcal A}} \mathbb{P}_{\bm{z}}^{\epsilon} \Big[\, \tau_{\mathcal{E}({\mathcal B})}> \frac{C}{\epsilon}\, \Big]=0\;.$$* **Corollary 24**. *Fix $R>0$ large enough for $\Lambda_R$ to contain all the local minima of $U$. For every constant $C>0$, $$\limsup_{\epsilon\rightarrow0} \sup_{\bm{x}\in \Lambda_R} \mathbb{P}_{\bm{x}}^{\epsilon} \Big[ \, \tau_{\mathcal{E}(\mathcal{M}_{0})}> \frac{C}{\epsilon}\,\Big]=0\ .$$* Next result follows from Proposition [ 15](#p_FW){reference-type="ref" reference="p_FW"} and Theorem [ 23](#t_hitting2){reference-type="ref" reference="t_hitting2"}. **Corollary 25**. *Fix $h_0<h_1$, and denote by ${\mathcal A}$, ${\mathcal B}$ a connected component of the set $\{{\boldsymbol x} \in{\mathbb R}^d: U({\boldsymbol x}) < h_0\}$, $\{{\boldsymbol x} \in{\mathbb R}^d : U({\boldsymbol x}) < h_1\}$, respectively. 
Assume that ${\mathcal A} \subset {\mathcal B}$, and that there are no critical points ${\boldsymbol c}$ of $U$ in ${\mathcal B} \setminus {\mathcal A}$. Then, $$\limsup_{\epsilon\rightarrow0} \sup_{\bm{x}\in {\mathcal A}} \mathbb{P}_{\bm{x}}^{\epsilon} \big[ \, \tau_{\partial {\mathcal B}} < \tau_{\mathcal{E}({\mathcal B})} \,\big] \;=\; 0\ .$$* **Remark 26**. *We expect the optimal time scale to be $O(\log \epsilon^{-1})$ instead of $O(\epsilon^{-1})$.* Denote by ${\mathcal N}({\mathcal A})$, $\mathcal{A}\subset\mathbb{R}^{d}$, the union, carried over all critical points ${\boldsymbol c}$ in ${\mathcal Y}_0 \cap {\mathcal A}$, of the neighborhoods ${\mathcal N} ({\boldsymbol c})$ introduced in the previous section: $$\mathcal{N}(\mathcal{A})= \bigcup_{\bm{c}\in\mathcal{Y}_{0}\cap\mathcal{A}}\mathcal{N}(\bm{c})\;.$$ ** 27**. *Under the hypotheses of Theorem [ 23](#t_hitting2){reference-type="ref" reference="t_hitting2"}, for all $C>0$, $$\begin{gathered} \limsup_{\epsilon\rightarrow0} \sup_{\bm{z}\in \Lambda_H} \mathbb{P}_{\bm{z}}^{\epsilon} \Big[\, \tau_{{\mathcal N}(\Lambda_H) \cup {\mathcal E}(\Lambda_H)} >\frac{C}{\epsilon} \, \Big] \,=\, 0\;, \\ \limsup_{\epsilon\rightarrow0} \sup_{\bm{z}\in {\mathcal A}} \mathbb{P}_{\bm{z}}^{\epsilon} \Big[\, \tau_{{\mathcal N} ({\mathcal B}) \cup {\mathcal E}({\mathcal B})} >\frac{C}{\epsilon} \, \Big] \,=\, 0\;. \end{gathered}$$* *Proof.* For each $\boldsymbol{z}\in\Lambda_H$, $\upsilon_{{\boldsymbol z}}(t)$ reaches the set ${\mathcal N}(\Lambda_H)\cup\mathcal{E}(\Lambda_H)$ in finite time. Therefore, the assertion of the lemma follows from [@fw98 Theorem 2.1.2]. ◻ ## Proof of Theorem [ 23](#t_hitting2){reference-type="ref" reference="t_hitting2"} {#proof-of-theorem-t_hitting2 .unnumbered} We prove the first assertion of the theorem. The arguments for the second one are similar. 
Recall the definition of $\eta_{0}=\eta_{0}(\boldsymbol{c})$, $\bm{c}\in\mathcal{Y}_{0}$, introduced at [\[eq:nocr\]](#eq:nocr){reference-type="eqref" reference="eq:nocr"}, and let $$\mathcal{H}(\bm{c})= \left\{ \boldsymbol{x}:U(\boldsymbol{x})\le U(\bm{c})-\eta_{0}\right\}\;.$$ By definition of $\eta_{0}$, there is no critical point $\boldsymbol{c}'$ of $U$ such that $U(\boldsymbol{c}')=U(\bm{c})-\eta_{0}$. The proof is carried out by induction on $| \mathcal{Y}_{0}\cap\Lambda_H |$, the number of critical points ${\boldsymbol c} \in {\mathcal Y}_0$ which belong to $\Lambda_H$. If there are no such critical points in $\Lambda_H$, the assertion of the theorem follows from Lemma [ 27](#p: FW2){reference-type="ref" reference="p: FW2"}. Consider the general case. Decompose the probability $\mathbb{P}_{\bm{z}}^{\epsilon} [\, \tau_{\mathcal{E}(\Lambda_H)}> C {\epsilon}^{-1} \, ]$ into $$\mathbb{P}_{\bm{z}}^{\epsilon}\Big[\, \tau_{\mathcal{E}(\Lambda_H)}>\frac{C}{\epsilon},\, \tau_{\mathcal{E}(\Lambda_H)}= \tau_{\mathcal{E}(\Lambda_H)\cup{\mathcal N}(\Lambda_H)}\, \Big] +\mathbb{P}_{\bm{z}}^{\epsilon}\Big[\, \tau_{\mathcal{E}(\Lambda_H)} >\frac{C}{\epsilon},\,\tau_{\mathcal{E}(\Lambda_H)} >\tau_{\mathcal{E}(\Lambda_H)\cup{\mathcal N}(\Lambda_H)}\, \Big]\;.$$ The first probability is bounded above by $\mathbb{P}_{\bm{z}}^{\epsilon} [\, \tau_{\mathcal{E}(\Lambda_H)\cup{\mathcal N}(\Lambda_H)}> C {\epsilon}^{-1} \, ]$. By Lemma [ 27](#p: FW2){reference-type="ref" reference="p: FW2"} this expression vanishes as $\epsilon \to 0$. 
In view of this result, it remains to show that $$\limsup_{\epsilon\rightarrow0}\sup_{\bm{z}\in\Lambda_H} \mathbb{P}_{\bm{z}}^{\epsilon} \Big[\, \tau_{\mathcal{E}(\Lambda_H)} >\frac{C}{\epsilon},\;\tau_{\mathcal{E}(\Lambda_H)} >\tau_{\mathcal{E}(\Lambda_H)\cup{\mathcal N}(\Lambda_H)},\; \tau_{\mathcal{E}(\Lambda_H)\cup{\mathcal N}(\Lambda_H)} \le\frac{C}{2\epsilon}\, \Big]=0\;.$$ By the strong Markov property, the last display is bounded by $$\limsup_{\epsilon\rightarrow0} \sup_{\bm{z}\in\mathcal{E}(\Lambda_H)\cup{\mathcal N}(\Lambda_H)} \mathbb{P}_{\bm{z}}^{\epsilon} \Big[\, \tau_{\mathcal{E}(\Lambda_H)} >\frac{C}{2\epsilon}\,\Big]\;.$$ Since $\mathbb{P}_{\bm{z}}^{\epsilon} [\, \tau_{\mathcal{E}(\Lambda_H)}>\frac{C}{2\epsilon}\,]=0$ if $\boldsymbol{z}\in\mathcal{E}(\Lambda_H)$, it suffices to show that, for each $\boldsymbol{c}\in\mathcal{Y}_{0} \cap \Lambda_H$, $$\limsup_{\epsilon\rightarrow0} \sup_{\bm{z}\in{\mathcal N}(\boldsymbol{c})} \mathbb{P}_{\bm{z}}^{\epsilon}\Big[\, \tau_{\mathcal{E}(\Lambda_H)}>\frac{C}{2\epsilon}\,\Big]=0\;.$$ By Proposition [ 22](#p:Kifer){reference-type="ref" reference="p:Kifer"}, it is enough to prove that $$\limsup_{\epsilon\rightarrow0} \sup_{\bm{z}\in{\mathcal N}(\boldsymbol{c})} \mathbb{P}_{\bm{z}}^{\epsilon} \Big[\, \tau_{\mathcal{E}(\Lambda_H)}>\frac{C}{2\epsilon} \;, \;\; \tau_{\partial\widehat{{\mathcal N}}(\bm{c})} \le\frac{C}{4\epsilon} \; ,\;\; \bm{x}_{\epsilon}(\tau_{\partial\widehat{{\mathcal N}}(\bm{c})}) \in {\mathcal H} (\bm{c})\, \Big]=0\;.$$ By the strong Markov property, the left-hand side is bounded from above by $$\limsup_{\epsilon\rightarrow0} \sup_{\bm{z}\in{\mathcal H}(\bm{c})}\mathbb{P}_{\bm{z}}^{\epsilon} \Big[\, \tau_{\mathcal{E}(\Lambda_H)}>\frac{C}{4\epsilon}\,\Big]=0\;.$$ As ${\boldsymbol c}$ belongs to $\Lambda_H$, $U({\boldsymbol c}) \le H$ and ${\mathcal H}({\boldsymbol c}) \subset \Lambda_H$. 
Thus, $\tau_{\mathcal{E}(\Lambda_H)}\le\tau_{\mathcal{E}({\mathcal H}(\bm{c}))}$, and it is enough to prove that $$\limsup_{\epsilon\rightarrow0}\sup_{\bm{z}\in{\mathcal H}(\bm{c})} \mathbb{P}_{\bm{z}}^{\epsilon}\Big [ \, \tau_{\mathcal{E}({\mathcal H}(\bm{c}))}>\frac{C}{4\epsilon} \, \Big]=0 \;.$$ This assertion follows from the induction hypothesis. Indeed, as the critical point ${\boldsymbol c}$ belongs to $\Lambda_H$ and not to ${\mathcal H}(\boldsymbol{c})$, the number of critical points in ${\mathcal Y}_0 \cap \Lambda_H$ is strictly greater than that in ${\mathcal Y}_0 \cap {\mathcal H}(\boldsymbol{c})$. ◻ We conclude this section with two results on hitting times of wells. The first one follows from Theorem [ 2](#t01){reference-type="ref" reference="t01"} and [@LMS Lemma 4.2]. It will be used in Section [10](#sec3){reference-type="ref" reference="sec3"} in the proof of Theorem [ 1](#t00){reference-type="ref" reference="t00"}. We state it here, before the proof of Theorem [ 2](#t01){reference-type="ref" reference="t01"}, to have all hitting time estimates of wells in the same section. ** 28**. *For all ${\boldsymbol m} \in \mathcal{M}_0$, $$\limsup_{a\rightarrow0}\, \limsup_{\epsilon\rightarrow0}\, \sup_{\boldsymbol{x}\in\mathcal{E}({\boldsymbol m})}\, \mathbb{P}_{\boldsymbol{x}}^{\epsilon} \big[\,\tau_{\mathcal{E} ({\mathcal M}_0) \setminus\mathcal{E}({\boldsymbol m})} \le a\,\theta^{(1)}_\epsilon \, \big]=0\;.$$* The last result asserts that starting from the domain of attraction of a local minimum, the well associated to this local minimum is attained before the other ones. Recall that we denote by $\upsilon_{{\boldsymbol x}} (t)$, $\bm{x} \in{\mathbb R}^d$, $t\ge0$, the solution of the ODE [\[31\]](#31){reference-type="eqref" reference="31"} starting from ${\boldsymbol x}$. 
Denote by $\color{blue} \mathcal{D}(\boldsymbol{m})$, $\boldsymbol{m}\in\mathcal{M}_{0}$, the domain of attraction of $\boldsymbol{m}$: $$\mathcal{D}(\boldsymbol{m})= \big\{ \boldsymbol{x}\in\mathbb{R}^{d}: \lim_{t\rightarrow\infty} \upsilon_{{\boldsymbol x}} (t) =\boldsymbol{m} \big\} \;.$$ ** 29**. *Let $\boldsymbol{m}\in\mathcal{M}_{0}$ and $\mathcal{K}$ be a compact subset of $\mathcal{D}(\boldsymbol{m})$. Then, $$\liminf_{\epsilon\rightarrow0} \inf_{\boldsymbol{x}\in\mathcal{K}} \mathbb{P}_{\boldsymbol{x}}^{\epsilon} \left[\tau_{\mathcal{E}(\mathcal{M}_{0})} =\tau_{\mathcal{E}(\boldsymbol{m})}\right]=1\;.$$* *Proof.* Let $\mathcal{F}(\boldsymbol{m}) :=\mathcal{D}(\boldsymbol{m})\setminus\mathcal{E}(\boldsymbol{m})$ so that $\partial\mathcal{F}(\boldsymbol{m}) =\partial\mathcal{D}(\boldsymbol{m})\cup\partial\mathcal{E}(\boldsymbol{m})$. Then, $$\mathbb{P}_{\boldsymbol{x}}^{\epsilon} \left[\tau_{\mathcal{E}(\mathcal{M}_{0})} =\tau_{\mathcal{E}(\boldsymbol{m})}\right] \ge\mathbb{P}_{\boldsymbol{x}}^{\epsilon} \left[\tau_{\partial\mathcal{F}(\boldsymbol{m})} =\tau_{\partial\mathcal{E}(\boldsymbol{m})}\right]\;.$$ Therefore, it suffices to show that $$\label{P_exit} \liminf_{\epsilon\rightarrow0} \inf_{\boldsymbol{x}\in\mathcal{K}} \mathbb{P}_{\boldsymbol{x}}^{\epsilon} \left[\tau_{\partial\mathcal{F}(\boldsymbol{m})} =\tau_{\partial\mathcal{E}(\boldsymbol{m})}\right]=1\;.$$ Since $\mathcal{K}$ is contained in the domain of attraction of ${\boldsymbol m}$, the solution $\upsilon_{\boldsymbol{x}}(t)$ of the ODE [\[31\]](#31){reference-type="eqref" reference="31"} starting from $\boldsymbol{x}\in\mathcal{K}$ exits the domain $\mathcal{F}(\boldsymbol{m})$ at $\partial\mathcal{E}(\boldsymbol{m})$. Thus, by [@fw98 Chapter 2, Theorem 1.2], [\[P_exit\]](#P_exit){reference-type="eqref" reference="P_exit"} holds. 
The estimate in [@fw98 Chapter 2, Theorem 1.2] is not uniform in ${\boldsymbol x}$, and just asserts that $$\liminf_{\epsilon\rightarrow0} \mathbb{P}_{\boldsymbol{x}}^{\epsilon} \left[\tau_{\partial\mathcal{F}(\boldsymbol{m})} =\tau_{\partial\mathcal{E}(\boldsymbol{m})}\right]=1$$ for all $\boldsymbol{x}\in\mathcal{D}(\boldsymbol{m})$. However, the bound [@fw98 Chapter 2, Theorem 1.2] holds uniformly over $\boldsymbol{x}\in\mathcal{K}$ (the variable $a(t)$ in the proof depends on $\boldsymbol{x}$, but can be bounded uniformly on any compact subset of ${\mathcal D}({\boldsymbol m})$; see the displayed equation above (1.6)). This completes the proof of the lemma. ◻ # Test functions {#sec10} In this section, we construct the test functions used in Section [9](#sec9){reference-type="ref" reference="sec9"} to estimate the solution of the resolvent equation. These test functions appeared before in [@BEGK; @LMS2; @LS-22]. For this reason we just present their definition and main properties, and refer the reader to [@LS-22b] for proofs. ## Around a saddle point {#around-a-saddle-point .unnumbered} Fix a saddle point $\boldsymbol{\sigma}$ of $U$ such that ${\boldsymbol m} \curvearrowleft {\boldsymbol \sigma }\curvearrowright {\boldsymbol m}'$ for distinct local minima ${\boldsymbol m}$, ${\boldsymbol m}'$ of $U$. Let $\color{blue} \mathbb{H}^{\boldsymbol{\sigma}} = \nabla^{2}U(\boldsymbol{\sigma})$, $\color{blue} \mathbb{L}^{\boldsymbol{\sigma}} = (D {\boldsymbol \ell})({\boldsymbol \sigma})$. By [\[47\]](#47){reference-type="eqref" reference="47"}, $\mathbb{H}^{{\boldsymbol \sigma}}$ has a unique negative eigenvalue. Denote by $\color{blue} -\lambda_{1},\,\lambda_{2},\,\dots,\,\lambda_{d}$ the eigenvalues of $\mathbb{H}^{{\boldsymbol \sigma}}$, where $-\lambda_{1}$ represents the unique negative eigenvalue. Mind that we omit the dependence on ${\boldsymbol \sigma}$ which is fixed. 
Let $\color{blue} \boldsymbol{e}_{1}$, $\color{blue} \boldsymbol{e}_{k}$, $k\ge2$, be the unit eigenvector associated with the eigenvalue $-\lambda_{1}$, $\lambda_{k}$, respectively. Choose ${\boldsymbol e}_1$ pointing towards ${\boldsymbol m}$: for all sufficiently small $a>0$, $\bm{\sigma}+a\boldsymbol{e}_{1}$ belongs to the domain of attraction of ${\boldsymbol m}$. For $\boldsymbol{x}\in\mathbb{R}^{d}$ and $k=1,\,\dots,\,d$, write $\color{blue} \hat x_{k}=(\boldsymbol{x}-\boldsymbol{\sigma}) \cdot \boldsymbol{e}_{k}$, so that $\boldsymbol{x}=\bm{\sigma}+\sum_{m=1}^{d} \hat x_{m}\bm{e}_{m}$. Let $${\color{blue} \delta} = \delta(\epsilon) := (\epsilon\log\frac{1}{\epsilon})^{1/2}\ .$$ Fix a large constant $J>0$ to be chosen later, and denote by ${\mathcal A}^\pm_\epsilon$, $\mathcal{C}_{\epsilon}$ the $d$-dimensional rectangles defined by $$\begin{gathered} {\color{blue} \mathcal{A}^-_{\epsilon}} \,:= \, \Big\{\,\boldsymbol{x}\in\mathbb{R}^{d}\,:\, \hat x_{1}\in \Big[\,-\frac{J\delta}{\sqrt{\lambda_{1}}} - \epsilon^2,\, - \frac{J\delta}{\sqrt{\lambda_{1}}}\,\Big] \,,\, \hat x_{k}\in \Big[\,-\frac{2J\delta}{\sqrt{\lambda_{k}}},\, \frac{2J\delta}{\sqrt{\lambda_{k}}}\,\Big] \,,\, 2\leq k\leq d\,\Big\} \\ {\color{blue} \mathcal{C}_{\epsilon}} \,:= \, \Big\{\,\boldsymbol{x}\in\mathbb{R}^{d}\,:\, \hat x_{1}\in \Big[\,-\frac{J\delta}{\sqrt{\lambda_{1}}},\, \frac{J\delta}{\sqrt{\lambda_{1}}}\,\Big] \,,\, \hat x_{k}\in \Big[\,-\frac{2J\delta}{\sqrt{\lambda_{k}}},\, \frac{2J\delta}{\sqrt{\lambda_{k}}}\,\Big] \,,\, 2\leq k\leq d\,\Big\} \\ {\color{blue} \mathcal{A}^+_{\epsilon}} \,:= \, \Big\{\,\boldsymbol{x}\in\mathbb{R}^{d}\,:\, \hat x_{1}\in \Big[\, \frac{J\delta}{\sqrt{\lambda_{1}}} ,\, \frac{J\delta}{\sqrt{\lambda_{1}}} + \epsilon^2 \,\Big] \,,\, \hat x_{k}\in \Big[\,-\frac{2J\delta}{\sqrt{\lambda_{k}}},\, \frac{2J\delta}{\sqrt{\lambda_{k}}}\,\Big] \,,\, 2\leq k\leq d\,\Big\}\ .\end{gathered}$$ Figure [1](#fig1){reference-type="ref" reference="fig1"} illustrates these 
definitions and the next ones. ![The sets around a saddle point $\boldsymbol{\sigma}$ appearing in the definition of the test function.](fig1){#fig1} Recall from [\[eq:nu\]](#eq:nu){reference-type="eqref" reference="eq:nu"} that $\mathbb{H}^{\boldsymbol{\sigma}}+\mathbb{L}^{\boldsymbol{\sigma}}$ has a unique negative eigenvalue, denoted by $-\mu$. Denote by $\mathbb{A}^{\dagger}$ the transpose of a matrix $\mathbb{A}$. By [@LS-22 display (8.1)], the matrix $\mathbb{H}^{\boldsymbol{\sigma}}-(\mathbb{L}^{\boldsymbol{\sigma}})^{\dagger}$ also has a unique negative eigenvalue equal to $-\mu$. Denote by $\boldsymbol{v}$ the unit eigenvector of $\mathbb{H}^{\boldsymbol{\sigma}}-(\mathbb{L}^{\boldsymbol{\sigma}})^{\dagger}$ associated with $-\mu$. By [@LS-22 Lemma 8.1], $\boldsymbol{v}\cdot\boldsymbol{e}_{1}\neq0$. We assume that $\boldsymbol{v}\cdot\boldsymbol{e}_{1}>0$, as we can take $-\boldsymbol{v}$ instead of $\boldsymbol{v}$ if this inner product is negative. Let $p_{\epsilon}\colon \mathcal{C}_{\epsilon} \to \mathbb{R}$ be given by $$\label{e_pesB} p_{\epsilon}(\bm{x})\,:=\, \frac{1}{M_{\epsilon}} \int_{-\infty}^{(\bm{x}-\bm{\sigma})\cdot\bm{v}}\, e^{-\frac{\mu}{2\epsilon}t^{2}} \,dt \;,$$ where the normalizing constant $M_{\epsilon}$ is given by $$\label{e_Ces} M_{\epsilon}\,=\, \int_{-\infty}^{\infty}\,e^{-\frac{\mu}{2\epsilon}t^{2}}\,dt \,=\,\sqrt{\frac{2\pi\epsilon}{\mu}}\;.$$ We extend the function $p_{\epsilon}$ continuously to the $d$-dimensional rectangle $\color{blue} {\mathcal R}_\epsilon = {\mathcal A}^-_\epsilon \cup {\mathcal C}_\epsilon \cup {\mathcal A}^+_\epsilon$ as follows. 
For $\bm{x}=\bm{\sigma}+\sum_{k=1}^{d} \widehat x_{k}\bm{e}_{k} \in {\mathcal A}^+_\epsilon$, let $$\label{33} \overline{\boldsymbol{x}}_r \,=\, \bm{\sigma}\, + \,\frac{J\delta}{ \sqrt{\lambda_{1}}}\, \bm{e}_{1} \,+\, \sum_{k=2}^{d} \widehat{x}_{k}\,\bm{e}_{k} \;.$$ We define $\overline{\boldsymbol{x}}_l$ similarly for $\bm x\in {\mathcal A}^-_\epsilon$, replacing on the right-hand side of the previous formula the first plus sign by a minus sign. Clearly, $\overline{{\boldsymbol x}}_r$ and $\overline{{\boldsymbol x}}_l$ belong to ${\mathcal C}_\epsilon$. We extend the definition of $p_{\epsilon}$ to ${\mathcal R}_\epsilon$ by setting $p_{\epsilon} \colon {\mathcal A}^-_\epsilon \cup {\mathcal A}^+_\epsilon \to {\mathbb R}$ as $$\label{e_pes} \begin{gathered} p_{\epsilon}(\bm{x})\,=\, 1\,+\, \epsilon^{-2} \,\Big[\, \hat x_1 - \frac{J\delta}{ \sqrt{\lambda_{1}}} -\epsilon^2 \,\Big]\, [\, 1-p_{\epsilon} (\overline{\boldsymbol{x}}_r)\,] \;, \quad \bm{x} \in \mathcal{A}^+_{\epsilon}\;, \\ p_{\epsilon}(\bm{x})\,=\, \epsilon^{-2} \, \Big[\, \hat x_1 +\frac{J\delta}{\sqrt{\lambda_{1}}}+ \epsilon^2\,\Big]\, p_{\epsilon}(\overline{\boldsymbol{x}}_l) \;, \quad \bm{x}\in \mathcal{A}^-_{\epsilon} \;. \end{gathered}$$ The function $p_{\epsilon}$ is an approximating solution of the Dirichlet problem $\mathscr{L}_{\epsilon}^\dagger f = 0$ in $\mathcal{R}_{\epsilon}$ with boundary conditions $f = 1$ on the points of ${\mathcal R}_\epsilon$ where $\hat x_1 = J\delta/ \sqrt{\lambda_{1}} + \epsilon^2$ and $f = 0$ on the ones such that $\hat x_1 = - J\delta/ \sqrt{\lambda_{1}} - \epsilon^2$. This is the content of [@LS-22b Proposition 6.2], which states that the integral of $\theta^{(1)}_\epsilon \,|\mathscr{L}_{\epsilon}^\dagger f|$ on a set slightly smaller than $\mathcal{R}_{\epsilon}$ vanishes as $\epsilon\to 0$. This result also justifies the definition of the test function $p_{\epsilon}$. 
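As a sanity check (our verification, not part of the cited results), the extension [\[e_pes\]](#e_pes){reference-type="eqref" reference="e_pes"} indeed matches $p_{\epsilon}$ continuously at the faces of $\mathcal{C}_{\epsilon}$: for $\bm{x}\in\mathcal{A}^+_{\epsilon}$ with $\hat x_{1} = J\delta/\sqrt{\lambda_{1}}$, the bracket in [\[e_pes\]](#e_pes){reference-type="eqref" reference="e_pes"} equals $-\epsilon^{2}$, so that $$p_{\epsilon}(\bm{x}) \,=\, 1 \,-\, [\, 1-p_{\epsilon} (\overline{\boldsymbol{x}}_r)\,] \,=\, p_{\epsilon} (\overline{\boldsymbol{x}}_r)\;,$$ while $p_{\epsilon}(\bm{x}) = 1$ on the face $\hat x_{1} = J\delta/\sqrt{\lambda_{1}} + \epsilon^{2}$. Symmetrically, on $\mathcal{A}^-_{\epsilon}$ the extension interpolates linearly in $\hat x_{1}$ between $p_{\epsilon}(\overline{\boldsymbol{x}}_l)$ and $0$. Hence $p_{\epsilon}$ is continuous on ${\mathcal R}_\epsilon$ and satisfies the boundary conditions of the Dirichlet problem described above.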
The test function $p_{\epsilon}(\cdot)$ constructed above depends on ${\boldsymbol \sigma}$ and ${\boldsymbol m}$. To stress this fact, it is sometimes represented by $\color{blue} p^{{\boldsymbol \sigma}, {\boldsymbol m}}_{\epsilon}(\cdot)$. ## Inside a well {#inside-a-well .unnumbered} In this subsection we define a test function $Q_\epsilon$ on ${\mathbb R}^d$ with the help of the test functions $p^{{\boldsymbol \sigma}, {\boldsymbol m}}_\epsilon$ introduced in the previous subsection. Recall that we denote by $B({\boldsymbol x}, r)$ the open ball of radius $r$ centered at ${\boldsymbol x}$. Fix a height $H$ such that $U({\boldsymbol \sigma}) = H$ for some saddle point ${\boldsymbol \sigma}$ of $U$. Denote by ${\mathcal W}$ a connected component of the set $\{{\boldsymbol x}\in {\mathbb R}^d: U({\boldsymbol x}) < H \}$. Assume that there exists a saddle point ${\boldsymbol \sigma}' \in \partial {\mathcal W}$ satisfying condition (a) below and that condition (b) is fulfilled for all saddle points ${\boldsymbol \sigma}' \in \partial {\mathcal W}$ satisfying (a). Here, 1. There exists $\delta_0>0$ such that $B({\boldsymbol \sigma}', \delta) \cap \{{\boldsymbol x}\in {\mathbb R}^d: U({\boldsymbol x}) < H \}$ is not contained in ${\mathcal W}$ for all $0<\delta <\delta_0$; 2. There exist ${\boldsymbol m}$, ${\boldsymbol m}'\in {\mathcal M}_0$ such that $$\label{56} {\boldsymbol \sigma}' \curvearrowright {\boldsymbol m} \;\;\text{and}\;\; {\boldsymbol \sigma}' \curvearrowright {\boldsymbol m}'\;.$$ Condition (b) prevents the existence of a heteroclinic orbit from ${\boldsymbol \sigma}'$ to a critical point which is not a local minimum. Clearly, if conditions (a) and (b) hold, either ${\boldsymbol m}\in {\mathcal W}$ and ${\boldsymbol m}'\not\in {\mathcal W}$, or the other way around. 
Let $\color{blue} {\mathcal S}_H({\mathcal W}) = \{ {\boldsymbol \sigma}_1, \dots, {\boldsymbol \sigma}_p\}$ be the (non-empty) set of saddle points ${\boldsymbol \sigma }\in \partial {\mathcal W}$ satisfying (a) and (b). Note that we have excluded saddle points whose heteroclinic orbits lead to local minima in ${\mathcal W}$. On the other hand, as ${\boldsymbol \sigma }\in \partial {\mathcal W}$, $U({\boldsymbol \sigma}) = H$ for all ${\boldsymbol \sigma }\in {\mathcal S}_H({\mathcal W})$. Write, whenever needed to clarify, ${\mathcal W}$ as ${\mathcal W}_1$, and denote by ${\mathcal W}_j$, $2\le j\le m$, the connected components of the set $\{{\boldsymbol x}\in {\mathbb R}^d: U({\boldsymbol x}) < H \}$ which share with ${\mathcal W}$ a common saddle point in ${\mathcal S}_H({\mathcal W})$. Hence, for each ${\mathcal W}_j$, $2\le j\le m$, there exist ${\boldsymbol \sigma }\in {\mathcal S}_H({\mathcal W}) \cap \overline{{\mathcal W}_j}$, ${\boldsymbol m}\in {\mathcal W} \cap {\mathcal M}_0$, ${\boldsymbol m}'\in {\mathcal W}_j \cap {\mathcal M}_0$ such that ${\boldsymbol \sigma }\curvearrowright {\boldsymbol m}$, ${\boldsymbol \sigma }\curvearrowright {\boldsymbol m}'$. Note that there might be saddle points ${\boldsymbol \sigma}$ in $\partial {\mathcal W}$ which do not belong to ${\mathcal S}_H({\mathcal W})$ because they lead to critical points in ${\mathcal W}$: ${\boldsymbol \sigma }\curvearrowright {\boldsymbol x}$, ${\boldsymbol \sigma }\curvearrowright {\boldsymbol y}$ for ${\boldsymbol x}$, ${\boldsymbol y}\in {\mathcal W} \cap {\mathcal C}_0$. Figure [2](#fig2){reference-type="ref" reference="fig2"} illustrates this possibility. ![The saddle point ${\boldsymbol \sigma}_2$ in $\partial {\mathcal W}_2$ does not belong to ${\mathcal S}_H({\mathcal W}_2)$ because it leads to critical points in ${\mathcal W}_2$.](fig2){#fig2} Fix $\eta>0$ small enough so that there is no critical point ${\boldsymbol x}$ with height in the interval $(H,H+2\eta)$.
Let $\color{blue}\Omega$ be the connected component of the set $\{{\boldsymbol x}\in {\mathbb R}^d: U({\boldsymbol x}) \le H +\eta \}$ which contains ${\mathcal W}$ (and thus all connected components ${\mathcal W}_j$), and set $$\begin{gathered} {\color{blue} \mathcal{K}_{\epsilon} } \,:=\, \big \{ \bm{x}\in\mathbb{R}^{d}\,:\,U(\bm{x})\le H +J^{2}\delta^{2}\big\} \cap \Omega\;.\end{gathered}$$ Denote by $\partial_{0}\mathcal{R}^{{\boldsymbol \sigma}}_{\epsilon}$, ${\boldsymbol \sigma }\in {\mathcal S}_H({\mathcal W})$, the boundary of the set $\mathcal{R}^{{\boldsymbol \sigma}}_{\epsilon}$, introduced in the previous subsection, given by $$\begin{gathered} \partial_{0}\mathcal{R}^{{\boldsymbol \sigma}}_{\epsilon} =\Big\{ \,\boldsymbol{x}\in\mathcal{R}^{{\boldsymbol \sigma}}_{\epsilon}\,:\, \hat x_{k}=\pm\frac{2J\delta}{\sqrt{\lambda_{k}}}\ \ \text{for some}\ 2\leq k\leq d\,\Big\} \;.\end{gathered}$$ By the proof of [@LS-22 Lemma 8.3], $$\label{23} U(\bm{x})\geq U(\bm{\sigma})+ \frac{3}{2}\,J^{2}\,\delta^{2}\,[\,1+o_{\epsilon}(1)\,]$$ for all $\bm{x}\in\partial_{0}\mathcal{R}_{\epsilon}$. In particular, $\partial_{0}\mathcal{R}_{\epsilon}$ is contained in the complement of $\mathcal{K}_{\epsilon}$ provided that $\epsilon$ is sufficiently small. Let $\color{blue} \mathcal{E}_{\epsilon}^{\bm{\sigma}} \,:=\, \mathcal{R}_{\epsilon}^{\bm{\sigma}}\cap \mathcal{K}_{\epsilon}$, ${\boldsymbol \sigma }\in {\mathcal S}_H({\mathcal W})$. Denote by $\mathcal{W}^{\epsilon}_1$ the connected component of $\mathcal{K}_{\epsilon}\setminus( \bigcup_{\bm{\sigma}\in{\mathcal S}_H({\mathcal W})}\mathcal{E}_{\epsilon}^{\bm{\sigma}})$ which intersects $\mathcal{W}_1$, and let $\mathcal{W}^{\epsilon}_2 = \mathcal{K}_{\epsilon} \setminus( \mathcal{W}^{\epsilon}_1 \cup \bigcup_{\bm{\sigma}\in{\mathcal S}_H({\mathcal W})}\mathcal{E}_{\epsilon}^{\bm{\sigma}})$. 
With this notation, $$\label{36} \Omega \;=\; \bigcup_{\bm{\sigma}\in {\mathcal S}_H({\mathcal W})} \mathcal{E}_{\epsilon}^{\bm{\sigma}} \,\cup\, \mathcal{W}^{\epsilon}_{1} \,\cup\, \mathcal{W}^{\epsilon}_{2} \, \cup\, \big( \Omega \setminus\mathcal{K}_{\epsilon} \,\big)\;.$$ For each ${\boldsymbol \sigma }\in {\mathcal S}_H({\mathcal W})$, denote by $\color{blue} {\boldsymbol m}_{{\boldsymbol \sigma}}$ the local minimum ${\boldsymbol m}$ in ${\mathcal W}$ such that ${\boldsymbol \sigma }\curvearrowright {\boldsymbol m}$. Recall the notation introduced at the end of the previous subsection, and let $\color{blue} q^{{\boldsymbol \sigma}}_{\epsilon} = p^{{\boldsymbol \sigma}, {\boldsymbol m}_{{\boldsymbol \sigma}}}_{\epsilon}$. Consider the test function $Q_{\epsilon} \colon \mathcal{K}_{\epsilon} \to\mathbb{R}$ given by $$\begin{gathered} \label{e: def_Q} Q_{\epsilon} (\boldsymbol{x}) \,=\, 1 \;, \;\; \bm{x}\in\mathcal{W}_1^{\epsilon} \;; \quad Q_{\epsilon} (\boldsymbol{y}) \,=\, 0 \;, \;\; \bm{y}\in\mathcal{W}_2^{\epsilon} \;; \\ Q_{\epsilon} (\boldsymbol{x}) \,=\, q_{\epsilon}^{\bm{\sigma}}(\bm{x}) \;, \;\; \bm{x}\in\mathcal{E}_{\epsilon}^{\bm{\sigma}},\, \bm{\sigma}\in {\mathcal S}_H({\mathcal W}) \;. \nonumber\end{gathered}$$ By [\[e_pes\]](#e_pes){reference-type="eqref" reference="e_pes"}, the function $Q_{\epsilon}$ is continuous on $\mathcal{K}_{\epsilon}$.
Moreover, if ${\mathcal G}_\epsilon$ represents the open set formed by the union of the interiors of the sets ${\mathcal E}_{\epsilon}^{\bm{\sigma}}$, $\bm{\sigma} \in {\mathcal S}_H({\mathcal W})$, and the interiors of the sets $\mathcal{W}_i^{\epsilon}$, $i=1$, $2$, then $$\lVert\nabla Q_{\epsilon} \rVert_{L^{\infty}(\mathcal{G}_{\epsilon} )} \, =\, O(\epsilon^{-1/2})\;\ \text{and}\;\ \|\Delta Q_{\epsilon}\|_{L^{\infty}(\mathcal{G}_{\epsilon} )} =O(\epsilon^{-3/2})\ .$$ We can extend $Q_{\epsilon}$ to $\Omega$ keeping these bounds outside a $(d-1)$-dimensional manifold: $$\label{18} \lVert Q_{\epsilon}\rVert_{L^{\infty}(\Omega_0)} \, \le\ 1\;,\ \ \lVert \nabla Q_{\epsilon} \rVert_{L^{\infty}(\Omega_0)}\,= O(\epsilon^{-1/2})\;,\ \;\text{and\;\;} \|\Delta Q_{\epsilon} \|_{L^{\infty}(\Omega_0)}= O(\epsilon^{-3/2})\ ,$$ where $\Omega_0 = \Omega \setminus {\mathfrak M}$, and ${\mathfrak M}$ is the $(d-1)$-dimensional manifold at which the gradient of $Q_{\epsilon}$ is discontinuous. We further impose the condition that $Q_{\epsilon}$ vanishes away from $\Omega$: $$\label{e: boundary_Q} Q_{\epsilon}\equiv0 \,\, \text{ on }\,\, \{\boldsymbol{x}\in\mathbb{R}^{d}:U(\boldsymbol{x}) >H +\frac{\eta}{2}\}\;,$$ respecting the previous bounds. The function $Q_\epsilon$ is the test function associated to the well ${\mathcal W}$ and the height $H$. ## Main estimate {#main-estimate .unnumbered} The next lemma is a crucial step in the proof of Theorem [ 2](#t01){reference-type="ref" reference="t01"}. To stress below the dependence of the sets $\mathcal{C}_{\epsilon}$, $\mathcal{A}^{\pm}_{\epsilon}$ on a saddle point ${\boldsymbol \sigma}$, we add a superscript ${\boldsymbol \sigma}$ in the notation.
Denote by $\partial_{\pm}\mathcal{C}^{{\boldsymbol \sigma}}_{\epsilon}$ the boundary of the set $\mathcal{C}^{{\boldsymbol \sigma}}_{\epsilon}$ given by $$\begin{gathered} \partial_{\pm}\mathcal{C}^{{\boldsymbol \sigma}}_{\epsilon} \,=\, \big\{ \,\boldsymbol{x}\in\mathcal{C}^{{\boldsymbol \sigma}}_{\epsilon}\,:\, \hat x_{1} =\pm\frac{J\delta}{\sqrt{\lambda_{1}}}\,\big\} \;, \;\; \text{and let}\;\; {\color{blue} \mathcal{B}_{\epsilon}^{\bm{\sigma}}} \, :=\, \mathcal{C}_{\epsilon}^{\bm{\sigma}}\cap \mathcal{K}_{\epsilon}\;,\;\; {\color{blue} \partial_{\pm}\mathcal{B}_{\epsilon}^{\bm{\sigma}}} \,:=\, \partial_{\pm}\mathcal{C}_{\epsilon}^{\bm{\sigma}} \cap\mathcal{K}_{\epsilon}\;.\end{gathered}$$ Recall from [\[32\]](#32){reference-type="eqref" reference="32"} the definition of $\nu_\star$, and from [\[33\]](#33){reference-type="eqref" reference="33"} the definition of $\overline{{\boldsymbol x}}_r$, $\overline{{\boldsymbol x}}_l$. In the statement of Lemma [ 30](#p03){reference-type="ref" reference="p03"}, the vectors ${\boldsymbol v}$ and ${\boldsymbol e}_1$ depend on ${\boldsymbol \sigma}$, but this dependence is omitted from the notation. For $c>0$, let $${\color{blue} \Lambda_{c,\epsilon} } \,:=\, \big \{ \bm{x}\in\mathbb{R}^{d}\,:\,U(\bm{x})\le H - c\, J^{2} \delta^{2} \big\} \;.$$ ** 30**. 
*There exists $c_0>0$, such that for all $0<c<c_0$, ${\bf g}\colon {\mathcal M}_0 \to {\mathbb R}$, $$\label{24} e^{H/\epsilon} \, \int_{\Omega}\, Q_{\epsilon} \,(-\mathscr{L}_{\epsilon}\,\phi_{\epsilon}) \,d\mu_{\epsilon} \; =\; \sum_{\bm{\sigma}\in {\mathcal S}_H({\mathcal W}) } J ({\boldsymbol \sigma}) \,+\, o_{\epsilon}(1) \;,$$ where $J ({\boldsymbol \sigma}) = J_+ ({\boldsymbol \sigma}) - J_- ({\boldsymbol \sigma})$, and $$\begin{aligned} & J_{+} ({\boldsymbol \sigma}) \, =\, -\, [\,1+ o_{\epsilon}(1) \,]\, \frac{\epsilon\, \sqrt{\mu^{{\boldsymbol \sigma}}} \,(\bm{v}\cdot\bm{e}_{1})}{(2\pi\epsilon)^{(d+1)/2}\, \nu_\star }\,\int_{\partial_{+}\mathcal{B}^{{\boldsymbol \sigma}}_{\epsilon} \cap \Lambda_{c,\epsilon}}\, e^{-\frac{1}{2\epsilon}\bm{x}\cdot\,(\mathbb{H}^{{\boldsymbol \sigma}} + \mu^{{\boldsymbol \sigma}}\,\bm{v}\otimes\bm{v})\,\bm{x}}\, \phi_{\epsilon} (\bm{x})\, {\rm S} (d\bm{x}) \\ &\quad -\, [\,1+ o_{\epsilon}(1) \,]\, \frac{1} {(2\pi)^{(d+1)/2}\ \nu_\star \, \sqrt{\mu^{{\boldsymbol \sigma}}} \,\epsilon{}^{(d+3)/2}}\, \int_{{\mathcal A}^{{\boldsymbol \sigma}, +}_{\epsilon} \cap \Lambda_{c,\epsilon} }\, \phi_{\epsilon} (\bm{x}) \,\frac{\mathbb{L}^{{\boldsymbol \sigma}} \overline{\boldsymbol{x}}_r \cdot\boldsymbol{e}_{1}} {\overline{\boldsymbol{x}}_r \cdot\bm{v}}\, e^{-\frac{1}{2\epsilon}\overline{\boldsymbol{x}}_r \cdot\,(\mathbb{H}^{{\boldsymbol \sigma}} +\mu^{{\boldsymbol \sigma}}\,\bm{v}\otimes\bm{v})\, \overline{\boldsymbol{x}}_r}\,d\bm{x}\;. 
\end{aligned}$$ In this formula, ${\rm S} (d\bm{x})$ represents the surface measure on the $(d-1)$-dimensional manifold $\partial_{+}\mathcal{B}^{{\boldsymbol \sigma}}_{\epsilon} \cap \Lambda_{c,\epsilon}$, and $J_{-}({\boldsymbol \sigma})$ is obtained from $J_{+}({\boldsymbol \sigma})$ by removing the minus sign and replacing $\partial_{+}\mathcal{B}^{{\boldsymbol \sigma}}_{\epsilon}$, ${\mathcal A}^{{\boldsymbol \sigma}, +}_{\epsilon}$ by $\partial_{-}\mathcal{B}^{{\boldsymbol \sigma}}_{\epsilon}$, ${\mathcal A}^{{\boldsymbol \sigma}, -}_{\epsilon}$, respectively.* The proof of this result is omitted as it is the content of [@LS-22b Section 7]. # Domain of attraction {#sec4} Fix ${\boldsymbol \sigma }\in {\mathcal S}_H({\mathcal W})$. Denote by $\color{blue} {\boldsymbol n}_{{\boldsymbol \sigma}}$ the local minimum ${\boldsymbol m}$ of $U$ which does not belong to ${\mathcal W}$ and such that ${\boldsymbol \sigma }\curvearrowright {\boldsymbol m}$. The main result of this section asserts that we may replace $\phi_{\epsilon} (\bm{x})$ in the formula for $J_+({\boldsymbol \sigma})$, $J_-({\boldsymbol \sigma})$ by $\phi_{\epsilon} (\bm{m}_{{\boldsymbol \sigma}})$, $\phi_{\epsilon} (\bm{n}_{{\boldsymbol \sigma}})$, respectively. ** 31**. *There exists $c_0>0$, such that for all $0<c<c_0$, $$\lim_{\epsilon\to0}\, \sup_{\bm{x}\in\partial_{+}\mathcal{B}_{\epsilon}^{\bm{\sigma}}\cap \Lambda_{c,\epsilon}} \,|\phi_{\epsilon} (\bm{x}) - \phi_{\epsilon} (\bm{m}_{{\boldsymbol \sigma}})|=0\;, \quad \lim_{\epsilon\to0}\, \sup_{\bm{x}\in\partial_{-}\mathcal{B}_{\epsilon}^{\bm{\sigma}}\cap \Lambda_{c,\epsilon}} \,|\phi_{\epsilon} (\bm{x})-\phi_{\epsilon} (\bm{n}_{{\boldsymbol \sigma}})|=0$$ for all ${\boldsymbol \sigma }\in {\mathcal S}_H({\mathcal W})$.
A similar result holds if we replace $\partial_{+}\mathcal{B}_{\epsilon}^{\bm{\sigma}}$, $\partial_{-}\mathcal{B}_{\epsilon}^{\bm{\sigma}}$ by ${\mathcal A}^{{\boldsymbol \sigma}, +}_{\epsilon}$, ${\mathcal A}^{{\boldsymbol \sigma}, -}_{\epsilon}$, respectively.* The proof of Proposition [ 31](#l10){reference-type="ref" reference="l10"} is based on the following general result. Recall that we denote by $\mathcal{D}(\boldsymbol{m})$, $\boldsymbol{m}\in\mathcal{M}_{0}$, the domain of attraction of $\boldsymbol{m}$. ** 32**. *Fix $\boldsymbol{m}\in\mathcal{M}_{0}$, and a sequence $(\mathcal{K}_{\epsilon})_{\epsilon>0}$ of subsets of $\mathcal{D}(\boldsymbol{m})$. Assume that $\bigcup_{\epsilon>0}\mathcal{K}_{\epsilon}$ is a bounded set, and $$\label{P_exit0} \liminf_{\epsilon\rightarrow0} \inf_{\boldsymbol{x}\in\mathcal{K}_{\epsilon}} \mathbb{P}_{\boldsymbol{x}}^{\epsilon} \left[\tau_{\mathcal{E}(\mathcal{M}_{0})} =\tau_{\mathcal{E}(\boldsymbol{m})}\right]=1\;.$$ Then, $$\limsup_{\epsilon\rightarrow0} \sup_{\boldsymbol{x}\in\mathcal{K}_{\epsilon}} \big|\, \phi_{\epsilon} (\boldsymbol{x}) -\phi_{\epsilon} (\boldsymbol{m})\, \big| \,=\, 0\;.$$* *Proof.* Recall the definition of the function $G\colon{\mathbb R}^d\to{\mathbb R}$ introduced in [\[e_res\]](#e_res){reference-type="eqref" reference="e_res"}. By the stochastic representation of the solution of the resolvent equation, $$\label{exp_phi} \phi_{\epsilon}(\bm{x})\,=\, \mathbb{E}_{\bm{x}}^{\epsilon} \Big[\, \int_{0}^{\infty}e^{-\lambda s}\, G(\bm{x}_{\epsilon}(\theta_{\epsilon}^{(1)}s))\,ds \, \Big]\;.$$ As $G$ is bounded, the absolute value of the time integral is bounded by $\lambda^{-1} \, \Vert {\boldsymbol g}\Vert_\infty$.
Therefore, as $\bigcup_{\epsilon>0}\mathcal{K_{\epsilon}}$ is a bounded set and $U({\boldsymbol x}) \to \infty$ as $|{\boldsymbol x}|\to\infty$, taking $R$ sufficiently large in Corollary [Corollary 24](#t_hitting){reference-type="ref" reference="t_hitting"}, $$\label{exp_phi2} \phi_{\epsilon}(\bm{x}) \,=\, \mathbb{E}_{\bm{x}}^{\epsilon}\Big[\, \int_{0}^{\infty} e^{-\lambda s}\,G(\bm{x}_{\epsilon} (\theta_{\epsilon}^{(1)}s))\,ds\; {\bf 1}\{\tau_{\mathcal{E}(\mathcal{M}_{0})} \le\frac{C_0}{\epsilon}\}\, \Big] \;+\; R_{\epsilon}({\boldsymbol x})\;,$$ where, here and below, $R_{\epsilon}({\boldsymbol x})$ is an error term such that $$\lim_{\epsilon\to 0} \sup_{\boldsymbol{x}\in\bigcup_{\epsilon>0}\mathcal{K_{\epsilon}}} |\, R_{\epsilon}({\boldsymbol x})\,| \;=\; 0\;.$$ Consider the time integral in the interval $[0, \tau_{\mathcal{E}(\mathcal{M}_{0})}/\theta_{\epsilon}^{(1)}]$. As $G$ is bounded and $\epsilon \, \theta_{\epsilon}^{(1)} \to\infty$, the expectation of this piece is bounded by $R_{\epsilon}({\boldsymbol x})$. 
By the strong Markov property, the second piece is equal to $$\mathbb{E}_{\bm{x}}^{\epsilon} \Big[\, \mathbb{E}_{\bm{x}_{\epsilon} (\tau_{\mathcal{E}(\mathcal{M}_{0})})}^{\epsilon} \Big[\, \int_{0}^{\infty}e^{-\lambda s}\, G(\bm{x}_{\epsilon}(\theta_{\epsilon}^{(1)}s))\,ds\, \Big]\, e^{-\lambda\tau_{\mathcal{E}(\mathcal{M}_{0})}/ \theta_{\epsilon}^{(1)}}\, {\bf 1}\{\tau_{\mathcal{E}(\mathcal{M}_{0})} \le\frac{C_0}{\epsilon}\}\, \Big]\;.$$ For the same reasons invoked above, this expression is equal to $$\mathbb{E}_{\bm{x}}^{\epsilon} \Big[\, \mathbb{E}_{\bm{x}_{\epsilon} (\tau_{\mathcal{E}(\mathcal{M}_{0})})}^{\epsilon} \Big[\, \int_{0}^{\infty}e^{-\lambda s}\, G(\bm{x}_{\epsilon}(\theta_{\epsilon}^{(1)}s))\,ds\, \Big]\, {\bf 1}\{\tau_{\mathcal{E}(\mathcal{M}_{0})} \le\frac{C_0}{\epsilon}\}\, \Big] \;+\; R_{\epsilon}({\boldsymbol x}) \;.$$ In conclusion, $$\phi_{\epsilon} (\bm{x}) \,=\, \mathbb{E}_{\bm{x}}^{\epsilon} \Big[\, \phi_{\epsilon}(\bm{x}_{\epsilon} (\tau_{\mathcal{E}(\mathcal{M}_{0})}))\, {\bf 1}\{\tau_{\mathcal{E}(\mathcal{M}_{0})}\le \frac{C_0}{\epsilon}\}\, \Big] \;+\; R_{\epsilon}({\boldsymbol x}) \;.$$ Applying Corollary [Corollary 24](#t_hitting){reference-type="ref" reference="t_hitting"} once more, as $\phi_{\epsilon}$ is uniformly bounded, the right-hand side is equal to $$\mathbb{E}_{\bm{x}}^{\epsilon} \Big[\, \phi_{\epsilon}(\bm{x}_{\epsilon} (\tau_{\mathcal{E}(\mathcal{M}_{0})})) \, \Big] \;+\; R_{\epsilon}({\boldsymbol x}) \;=\; \mathbb{E}_{\bm{x}}^{\epsilon} \Big[\, \phi_{\epsilon}(\bm{x}_{\epsilon} (\tau_{\mathcal{E}(\boldsymbol{m})}))\, \Big] \;+\; R_{\epsilon}({\boldsymbol x}) \;,$$ where we used hypothesis [\[P_exit0\]](#P_exit0){reference-type="eqref" reference="P_exit0"} and the uniform boundedness of $\phi_{\epsilon}$ in the last step.
To complete the proof, it remains to recall the assertion of Theorem [ 14](#p_flat){reference-type="ref" reference="p_flat"}. ◻ By combining Proposition [ 32](#prop:DoC){reference-type="ref" reference="prop:DoC"} and Lemma [ 29](#lem_exit){reference-type="ref" reference="lem_exit"}, we directly obtain the following corollary. **Corollary 33**. *Let $\boldsymbol{m}\in\mathcal{M}_{0}$ and let $\mathcal{K}$ be a compact subset of $\mathcal{D}(\boldsymbol{m})$. Then, $$\limsup_{\epsilon\rightarrow0} \sup_{\boldsymbol{x}\in\mathcal{K}} \big|\, \phi_{\epsilon} (\boldsymbol{x}) -\phi_{\epsilon}(\boldsymbol{m})\, \big| \,=\, 0\;.$$* Recall that ${\boldsymbol \sigma }\in {\mathcal S}_H({\mathcal W})$ is fixed and that ${\boldsymbol \sigma }\curvearrowright {\boldsymbol m}_{{\boldsymbol \sigma}}$, ${\boldsymbol \sigma }\curvearrowright {\boldsymbol n}_{{\boldsymbol \sigma}}$. Denote by $\color{blue} B[{\boldsymbol x}, r]$ the closed ball of radius $r$ centered at ${\boldsymbol x}$, and by ${\mathcal W}'$ the connected component of the set $\{{\bm x} \in{\mathbb R}^d : U({\bm x}) < U({\boldsymbol \sigma}) \}$ whose closure contains ${\boldsymbol \sigma}$ and ${\boldsymbol n}_{{\boldsymbol \sigma}}$. The next lemma is a consequence of Theorem [ 19](#thm:H-G){reference-type="ref" reference="thm:H-G"}. ** 34**. *There exists $\delta>0$ such that $(B[{\boldsymbol \sigma}, \delta] \cap \overline{{\mathcal W}} ) \setminus \{{\boldsymbol \sigma}\}$ is contained in the domain of attraction ${\mathcal D}({{\boldsymbol m}}_{{\boldsymbol \sigma}})$ of ${{\boldsymbol m}}_{{\boldsymbol \sigma}}$, and $(B[{\boldsymbol \sigma}, \delta] \cap \overline{{\mathcal W}'} ) \setminus \{{\boldsymbol \sigma}\}$ is contained in the domain of attraction ${\mathcal D}({{\boldsymbol n}}_{{\boldsymbol \sigma}})$ of ${{\boldsymbol n}}_{{\boldsymbol \sigma}}$.* *Proof of Proposition [ 31](#l10){reference-type="ref" reference="l10"}.* We prove the first assertion, as the second is similar.
By Lemma [ 34](#l08){reference-type="ref" reference="l08"}, there exists $\epsilon_{1}>0$ such that $\partial_+ \mathcal{B}_{\epsilon}^{\bm{\sigma}} \cap \Lambda_{c,\epsilon} \subset\mathcal{D}(\boldsymbol{m}_{{\boldsymbol \sigma}})$ for all $\epsilon<\epsilon_1$. Therefore, by Proposition [ 32](#prop:DoC){reference-type="ref" reference="prop:DoC"}, it suffices to show that $$\label{eq:flat} \liminf_{\epsilon\rightarrow0}\, \inf_{\bm{x}\in\partial_{+}\mathcal{B}_{\epsilon}^{\bm{\sigma}} \cap\Lambda_{c,\epsilon}} \mathbb{P}_{\bm{x}}^{\epsilon} [\, \tau_{\mathcal{E}({\boldsymbol m}_{{\boldsymbol \sigma}})} = \tau_{\mathcal{E}(\mathcal{M}_{0})} \,] \,=\, 1\;.$$ Recall that ${\mathcal W}$ represents the well that contains ${\boldsymbol m}_{{\boldsymbol \sigma}}$. Let ${\color{blue} {\mathcal G}_\epsilon} = \partial_{+}\mathcal{B}_{\epsilon}^{\bm{\sigma}} \cap \overline{{\mathcal W}}$. By Lemma [ 34](#l08){reference-type="ref" reference="l08"}, there exists $\epsilon_0>0$ such that ${\mathcal G}_\epsilon \subset {\mathcal D}({\boldsymbol m}_{{\boldsymbol \sigma}})$ for all $\epsilon \le \epsilon_0$. We claim that $$\label{eq_esc1} \inf_{\bm{x}\in\partial_{+}\mathcal{B}_{\epsilon}^{\bm{\sigma}} \cap\Lambda_{c, \epsilon}} \mathbb{P}_{\bm{x}}^{\epsilon}[ \tau_{\partial\mathcal{C}_{\epsilon_{0}}^{\bm{\sigma}}} = \tau_{{\mathcal G}_{\epsilon_0}}]=1-o_{\epsilon}(1)\ ,$$ where $\partial \mathcal{C}_{\epsilon}^{\bm{\sigma}}$ represents the boundary of ${\mathcal C}^{\bm{\sigma}}_\epsilon$.
![The sets $\mathcal{G}_{\rm ext}$ and $\mathcal{P}_{\rm ext}$ introduced in the proof of Proposition [ 31](#l10){reference-type="ref" reference="l10"}.](fig3){#fig3} To prove [\[eq_esc1\]](#eq_esc1){reference-type="eqref" reference="eq_esc1"}, let $\color{blue} \mathcal{P} \,:=\, \partial \mathcal{C}_{\epsilon_0}^{\bm{\sigma}} \setminus {\mathcal G}_{\epsilon_0}$, $\color{blue} \mathcal{G}_{\rm ext} = \overline{{\mathcal W} \setminus {\mathcal C}^{{\boldsymbol \sigma}}_{\epsilon_0}}$, $\color{blue} \mathcal{P}_{\rm ext} = \overline {{\mathbb R}^d \setminus [ \mathcal{G}_{\rm ext} \cup {\mathcal C}_{\epsilon_0}^{\bm{\sigma}}] }$. Figure [3](#fig3){reference-type="ref" reference="fig3"} illustrates these sets. By definition, $$\{\, \tau_{\partial\mathcal{C}_{\epsilon_{0}}^{\bm{\sigma}}} = \tau_{{\mathcal G}_{\epsilon_0}} \,\} \;=\; \{\, \tau_{{\mathcal G}_{\epsilon_0}} < \tau_{{\mathcal P}} \,\} \;=\; \{\, \tau_{{\mathcal G}_{\rm ext}} < \tau_{{\mathcal P}_{\rm ext}} \,\}$$ for all $\bm{x}\in\partial_{+}\mathcal{B}_{\epsilon}^{\bm{\sigma}} \cap\Lambda_{c, \epsilon}$, $\epsilon < \epsilon_0$. Therefore, by definition of the set ${\mathcal P}_{\rm ext}$, $\{\tau_{\mathcal{E}({\boldsymbol m}_{{\boldsymbol \sigma}})} < \tau_{\mathcal{P}_{\rm ext}}\} \subset \{ \tau_{\partial\mathcal{C}_{\epsilon_{0}}^{\bm{\sigma}}} = \tau_{{\mathcal G}_{\epsilon_{0}}}\}$. Fix $\epsilon < \epsilon_0$. Recall that we denote by $B({\boldsymbol x}, r)$ the open ball of radius $r$ centered at ${\boldsymbol x}$. By [@LS-22 Lemma 9.2], there exists a finite constant $C_0$, whose value may change from line to line, such that $$\textup{cap}_{\epsilon}( B({\boldsymbol y}, {\epsilon}) \,,\, \mathcal{E}({\boldsymbol m}_{{\boldsymbol \sigma}})) \,\geq\, C_0\,\epsilon^{d}\,Z_{\epsilon}^{-1}\,e^{-U(\bm{y})/\epsilon}$$ for all $\bm{y}\in\partial_{+}\mathcal{B}_{\epsilon}^{\bm{\sigma}}\cap\Lambda_{c, \epsilon}$. 
In this formula, ${\rm cap}_\epsilon ({\mathcal A}, {\mathcal B})$ stands for the capacity between the sets ${\mathcal A}$, ${\mathcal B}$ for the diffusion ${\boldsymbol x}_\epsilon (\cdot)$ and is defined in Appendix [13](#sec-ap2){reference-type="ref" reference="sec-ap2"}. On the other hand, by the proof of [@LS-22 Lemma 9.3], there exists a finite constant $C_0$ such that $$\textup{cap}_{\epsilon}(B(\bm{y}, \epsilon) \,,\, \mathcal{P}_{\rm ext}) \le C_0 \,Z_{\epsilon}^{-1} \,e^{-U(\bm{\sigma})/\epsilon}$$ for all $\bm{y}\in\partial_{+}\mathcal{B}_{\epsilon}^{\bm{\sigma}}\cap\Lambda_{c, \epsilon}$. By [@LMS2 Proposition 7.2], there exists a finite constant $C_0$ such that $$\mathbb{P}_{\bm{x}}^{\epsilon}[\,\tau_{\mathcal{P}_{\rm ext}} < \tau_{\mathcal{E}({\boldsymbol m}_{{\boldsymbol \sigma}})}\,] \,\le\, C_0\,\frac{\textup{cap}_{\epsilon}( B({\boldsymbol x} , \epsilon) \, ,\, \mathcal{P}_{\rm ext})} {\textup{cap}_{\epsilon} (B({\boldsymbol x}, \epsilon) \, ,\,\mathcal{E}({\boldsymbol m}_{{\boldsymbol \sigma}}))}$$ for all ${\boldsymbol x}\in {\mathbb R}^d$. Combining the previous estimates yields that for all $\bm{x}\in\partial_{+}\mathcal{B}_{\epsilon}^{\bm{\sigma}}\cap\Lambda_{c, \epsilon}$ $$\mathbb{P}_{\bm{x}}^{\epsilon} [\,\tau_{\mathcal{P}_{\rm ext}}<\tau_{\mathcal{E}({\boldsymbol m}_{{\boldsymbol \sigma}})}\,] \,\le\, C_0 \, \epsilon^{-d} \, e^{(U(\bm{x})-U(\bm{\sigma}))/\epsilon} \,\le\, C_0 \, \epsilon^{-d}\, e^{-c J^2\delta^{2}/\epsilon} \,= \, C_0 \, \epsilon^{cJ^2-d}\;.$$ This expression vanishes as $\epsilon\to 0$ for large enough $J$. To complete the proof of assertion [\[eq_esc1\]](#eq_esc1){reference-type="eqref" reference="eq_esc1"}, it remains to recall that $\{\tau_{\mathcal{E}({\boldsymbol m}_{{\boldsymbol \sigma}})} < \tau_{\mathcal{P}_{\rm ext}}\} \subset \{ \tau_{\partial\mathcal{C}_{\epsilon_{0}}^{\bm{\sigma}}} = \tau_{{\mathcal G}_{\epsilon_{0}}}\}$.
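The arithmetic in the last display can be unpacked as follows; this restatement is ours, and it takes as given the convention $\delta^{2} = \epsilon \log \epsilon^{-1}$, which is the choice that makes the final identity exact.

```latex
% Unpacking the last display (ours; assumes the convention
% $\delta^{2} = \epsilon \log \epsilon^{-1}$).  On $\Lambda_{c,\epsilon}$
% one has $U(\bm{x}) \le H - c J^{2}\delta^{2} = U(\bm{\sigma}) - c J^{2}\delta^{2}$,
% so that
\[
e^{(U(\bm{x})-U(\bm{\sigma}))/\epsilon}
\;\le\; e^{-c J^{2}\delta^{2}/\epsilon}
\;=\; e^{-c J^{2}\log \epsilon^{-1}}
\;=\; \epsilon^{\,c J^{2}}\;,
\]
% and the probability is bounded by $C_0\,\epsilon^{cJ^{2}-d}$, which
% vanishes as $\epsilon \to 0$ as soon as $J^{2} > d/c$.
```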
We turn to the proof of [\[eq:flat\]](#eq:flat){reference-type="eqref" reference="eq:flat"}. For $\bm{x}\in\partial_{+}\mathcal{B}_{\epsilon}^{\bm{\sigma}}\cap\Lambda_{c, \epsilon}$, by the strong Markov property and [\[eq_esc1\]](#eq_esc1){reference-type="eqref" reference="eq_esc1"}, $$\begin{aligned} \mathbb{P}_{\bm{x}}^{\epsilon}[\tau_{\mathcal{E}({\boldsymbol m}_{{\boldsymbol \sigma}})} =\tau_{\mathcal{E}(\mathcal{M}_{0})}] & \, \ge\, \mathbb{P}_{\bm{x}}^{\epsilon}[\tau_{\mathcal{E}({\boldsymbol m}_{{\boldsymbol \sigma}})} =\tau_{\mathcal{E}(\mathcal{M}_{0})},\, \tau_{\partial\mathcal{C}_{\epsilon_{0}}^{\bm{\sigma}}} =\tau_{{\mathcal G}_{\epsilon_0}}] \\ & \,\ge\, \inf_{\boldsymbol{y}\in {\mathcal G}_{\epsilon_0}} \mathbb{P}_{\bm{y}}^{\epsilon}[\tau_{\mathcal{E}({\boldsymbol m}_{{\boldsymbol \sigma}})} =\tau_{\mathcal{E}(\mathcal{M}_{0})}] \, \mathbb{P}_{\bm{x}}^{\epsilon}[\tau_{\partial\mathcal{C}_{\epsilon_{0}}^{\bm{\sigma}}} =\tau_{{\mathcal G}_{\epsilon_0}}] \\ & \,=\, (1-o_{\epsilon}(1)) \inf_{\boldsymbol{y}\in {\mathcal G}_{\epsilon_0}} \mathbb{P}_{\bm{y}}^{\epsilon}[\tau_{\mathcal{E}({\boldsymbol m}_{{\boldsymbol \sigma}})} =\tau_{\mathcal{E}(\mathcal{M}_{0})}]\;.\end{aligned}$$ The last infimum is $1-o_{\epsilon}(1)$ by Lemma [ 29](#lem_exit){reference-type="ref" reference="lem_exit"} because ${\mathcal G}_{\epsilon_0} \subset {\mathcal D}({\boldsymbol m}_{{\boldsymbol \sigma}})$. ◻ Proposition [ 31](#l10){reference-type="ref" reference="l10"} provides a simple formula for the quantities $J_\pm ({\boldsymbol \sigma})$ introduced in Lemma [ 30](#p03){reference-type="ref" reference="p03"}. ** 35**.
*For all ${\bf g}\colon {\mathcal M}_0 \to {\mathbb R}$, $$\label{37} e^{H/\epsilon} \, \int_{\Omega}\, Q_{\epsilon} \,(-\mathscr{L}_{\epsilon}\,\phi_{\epsilon}) \,d\mu_{\epsilon} \; =\; \frac{1}{2\pi\nu_\star}\, \sum_{\bm{\sigma}\in {\mathcal S}_H({\mathcal W}) } \, [\phi_\epsilon ({\boldsymbol m}_{{\boldsymbol \sigma}}) - \phi_\epsilon ({\boldsymbol n}_{{\boldsymbol \sigma}})]\, \frac{\mu^{{\boldsymbol \sigma}}} {\sqrt{ - \det \mathbb{H}^{{\boldsymbol \sigma}}}} \,+\, o_{\epsilon}(1) \;.$$* *Proof.* By Proposition [ 31](#l10){reference-type="ref" reference="l10"}, in the formula for $J_+({\boldsymbol \sigma})$ presented in Lemma [ 30](#p03){reference-type="ref" reference="p03"}, we may replace $\phi_\epsilon({\boldsymbol x})$ by $\phi_\epsilon ({\boldsymbol m}_{{\boldsymbol \sigma}})$ at a cost $o_\epsilon(1)$, and we are left with a Gaussian-type integral. A straightforward computation, presented in the proof of [@LS-22b Lemma 7.5], together with [@LS-22b Lemma 7.3] yields that $$\frac{\epsilon\,\sqrt{\mu^{{\boldsymbol \sigma}}} \,(\bm{v}\cdot\bm{e}_{1})} {(2\pi\epsilon)^{(d+1)/2}\, \nu_\star} \, \int_{\partial_{+}\mathcal{B}^{{\boldsymbol \sigma}}_{\epsilon} \cap \Lambda_{c,\epsilon}}\, e^{-\frac{1}{2\epsilon}\bm{x}\cdot\, (\mathbb{H}^{{\boldsymbol \sigma}} +\mu^{{\boldsymbol \sigma}} \,\bm{v}\otimes\bm{v})\,\bm{x}}\, {\rm S} (d\bm{x}) \;=\; \frac{1}{2\pi \, \nu_\star}\, \frac{\lambda^{{\boldsymbol \sigma}}_1} { \sqrt{ - \det \mathbb{H}^{{\boldsymbol \sigma}}}} + o_{\epsilon}(1) \;.$$ Similarly, by the proof of [@LS-22b Lemma 7.7], $$\begin{aligned} & \frac{1} {(2\pi)^{(d+1)/2}\ \nu_\star\, \sqrt{\mu^{{\boldsymbol \sigma}}}\,\epsilon{}^{(d+3)/2}}\, \int_{{\mathcal A}^{{\boldsymbol \sigma}, +}_{\epsilon} \cap \Lambda_{c,\epsilon} }\, \frac{\mathbb{L}^{{\boldsymbol \sigma}} \overline{\boldsymbol{x}}\cdot\boldsymbol{e}_{1}} {\overline{\boldsymbol{x}}\cdot\bm{v}}\, e^{-\frac{1}{2\epsilon}\overline{\boldsymbol{x}} \cdot\,(\mathbb{H}^{{\boldsymbol \sigma}}+\mu^{{\boldsymbol
\sigma}}\,\bm{v}\otimes\bm{v})\, \overline{\boldsymbol{x}}}\,d\bm{x} \\ & \quad \;=\; \frac{1}{2\pi\nu_\star}\, \frac{\lambda^{{\boldsymbol \sigma}}_1}{ \sqrt{ - \det \mathbb{H}^{{\boldsymbol \sigma}}}} \, \frac{(\mathbb{L}^{{\boldsymbol \sigma}} (\mathbb{H}^{{\boldsymbol \sigma}})^{-1} \bm{v}) \cdot\bm{e}_{1} } {{\boldsymbol v} \cdot {\boldsymbol e}_1}\, + \, o_{\epsilon}(1) \;. \end{aligned}$$ By the proof of [@LS-22b Proposition 5.7], $${\boldsymbol v} \cdot {\boldsymbol e}_1 \;+\; (\mathbb{L}^{{\boldsymbol \sigma}} (\mathbb{H}^{{\boldsymbol \sigma}})^{-1} \bm{v}) \cdot\bm{e}_{1} \;=\; \frac{\mu^{{\boldsymbol \sigma}}} {\lambda^{{\boldsymbol \sigma}}_1} \, {\boldsymbol v} \cdot {\boldsymbol e}_1\;.$$ Combining the previous estimates yields that $$J_+({\boldsymbol \sigma}) \;=\; -\, \frac{1}{2\pi \, \nu_\star}\, \frac{\mu^{{\boldsymbol \sigma}}} { \sqrt{ - \det \mathbb{H}^{{\boldsymbol \sigma}}}} \, \phi_\epsilon({\boldsymbol m}_{{\boldsymbol \sigma}}) \,+\, o_{\epsilon}(1) \;.$$ The same argument leads to the same formula for $J_-({\boldsymbol \sigma})$ with a plus sign and $\phi_\epsilon({\boldsymbol n}_{{\boldsymbol \sigma}})$ instead of $\phi_\epsilon({\boldsymbol m}_{{\boldsymbol \sigma}})$. This completes the proof of the lemma. ◻ # Proof of Theorem [ 2](#t01){reference-type="ref" reference="t01"} {#sec9} Recall from [\[28\]](#28){reference-type="eqref" reference="28"}, [\[35\]](#35){reference-type="eqref" reference="35"} the definitions of the generator ${\mathfrak L}_1$ and the function ${\boldsymbol f}_{\epsilon} \colon {\mathcal M}_0 \to {\mathbb R}$, respectively. The main result of this section reads as follows. ** 36**. *For all $\lambda >0$, ${\boldsymbol g} \colon {\mathcal M}_0 \to {\mathbb R}$, $$(\lambda \,-\, {\mathfrak L}_1)\, {\boldsymbol f}_{\epsilon} \;=\; {\boldsymbol g} \;+\; o_\epsilon (1)\;.$$* *Proof of Theorem [ 2](#t01){reference-type="ref" reference="t01"}.* The assertion follows from two observations.
The sequence ${\boldsymbol f}_{\epsilon}$ is uniformly bounded and the equation $(\lambda \,-\, {\mathfrak L}_1)\, {\boldsymbol f} \;=\; {\boldsymbol g}$ has a unique solution. ◻ The remainder of this section is devoted to the proof of Theorem [ 36](#p01){reference-type="ref" reference="p01"}. Fix ${\boldsymbol m} \in {\mathcal M}_0$. Let ${\mathcal W}$ be the connected component of the set $\{{\boldsymbol x}\in{\mathbb R}^d: U({\boldsymbol x}) < U({\boldsymbol m}) + \Gamma ({\boldsymbol m})\}$ which contains ${\boldsymbol m}$. By definition, ${\mathcal W}$ does not contain any other local minimum of $U$ (in particular, the present situation is different from the one represented in Figure [2](#fig2){reference-type="ref" reference="fig2"}, where ${\mathcal W}$ contains more than one local minimum). Recall from [\[56\]](#56){reference-type="eqref" reference="56"} the definition of the set ${\mathcal S}_H({\mathcal W})$. ** 37**. *There exists a saddle point ${\boldsymbol \sigma }\in \partial {\mathcal W}$ satisfying condition (a) in [\[56\]](#56){reference-type="eqref" reference="56"}. Condition (b) is fulfilled for all saddle points ${\boldsymbol \sigma}' \in \partial {\mathcal W}$ satisfying (a). 
Moreover, ${\mathcal S}_H({\mathcal W}) = \Upsilon ({\boldsymbol m})$, where $H= U({\boldsymbol m}) + \Gamma({\boldsymbol m})$.* *Proof.* By Proposition [ 45](#l05a){reference-type="ref" reference="l05a"}, there exist a local minimum ${\boldsymbol m}'$ of $U$ different from ${\boldsymbol m}$ and a continuous path $\boldsymbol{z} \colon [0,1] \to {\mathbb R}^d$ such that $U({\boldsymbol m}) + \Gamma ({\boldsymbol m}) = \Theta({\boldsymbol m}, {\boldsymbol m}')$, $\boldsymbol{z}(0)=\boldsymbol{m}$, $\boldsymbol{z}(1)=\boldsymbol{m}'$, and $$\label{54a} \max_{t\in[0,\,1]}U(\boldsymbol{z}(t)) \;=\; U(\boldsymbol{z}(1/2)) \;=\; \Theta(\boldsymbol{m},\,\boldsymbol{m}')\;, \quad U({\boldsymbol z}(s)) \,<\, U({\boldsymbol z}(1/2))\;,\;\; s\in [0,1]\setminus\{1/2\} \;,$$ and ${\boldsymbol \sigma }:= {\boldsymbol z}(1/2)$ is a saddle point of $U$. In particular, ${\boldsymbol \sigma }\in \partial {\mathcal W}$. Condition (a) is satisfied because ${\boldsymbol m}' \neq {\boldsymbol m}$ and ${\mathcal W}$ contains only the local minimum ${\boldsymbol m}$. We turn to condition (b). Let ${\boldsymbol \sigma}$ be a saddle point in $\partial {\mathcal W}$ satisfying (a). By definition of ${\mathcal W}$ and with the help of the solution of the ODE [\[31\]](#31){reference-type="eqref" reference="31"}, it is possible to construct a continuous path $\boldsymbol{z} \colon [0,1] \to {\mathbb R}^d$ such that $\boldsymbol{z}(0)=\boldsymbol{m}' \in {\mathcal M}_0$, $\boldsymbol{z}(1/2) \;=\; {\boldsymbol \sigma}$, $\boldsymbol{z}(1)=\boldsymbol{m}'' \in {\mathcal M}_0$, and $$U({\boldsymbol z}(s)) \,<\, U({\boldsymbol \sigma})\;,\;\; s\, \neq\, 1/2 \;,$$ for some $\boldsymbol{m}'$, ${\boldsymbol m}'' \in {\mathcal M}_0$. As ${\boldsymbol \sigma}$ satisfies (a), we may assume without loss of generality that $\boldsymbol{m}' \in {\mathcal W}$, $\boldsymbol{m}'' \in \overline{{\mathcal W}}^c$. Since ${\mathcal W}$ contains a unique local minimum, ${\boldsymbol m}'={\boldsymbol m}$. 
Therefore, since $U({\boldsymbol \sigma}) = U({\boldsymbol m}) + \Gamma({\boldsymbol m})$, by definition of $\Upsilon ({\boldsymbol m})$, ${\boldsymbol \sigma }\in \Upsilon ({\boldsymbol m})$. Hence, by condition [\[48\]](#48){reference-type="eqref" reference="48"}, there exists ${\boldsymbol m}''' \in {\mathcal M}_0$, ${\boldsymbol m}'''\not\in {\mathcal W}$, such that ${\boldsymbol \sigma }\curvearrowright {\boldsymbol m}$, ${\boldsymbol \sigma }\curvearrowright {\boldsymbol m}'''$, which is condition (b). Assume that ${\boldsymbol \sigma }\in {\mathcal S}_H({\mathcal W})$. By definition, it satisfies (a). Thus, by the previous paragraph, ${\boldsymbol \sigma}\in \Upsilon ({\boldsymbol m})$. Conversely, suppose that ${\boldsymbol \sigma }\in \Upsilon ({\boldsymbol m})$. By definition, there exist a local minimum ${\boldsymbol m}'\neq {\boldsymbol m}$ and a continuous path $\boldsymbol{z} \colon [0,1] \to {\mathbb R}^d$ such that $\boldsymbol{z}(0)=\boldsymbol{m}$, $\boldsymbol{z}(1)=\boldsymbol{m}'$ for which [\[54a\]](#54a){reference-type="eqref" reference="54a"} holds. By Proposition [ 45](#l05a){reference-type="ref" reference="l05a"}, ${\boldsymbol \sigma }= {\boldsymbol z}(1/2)$ is a saddle point of $U$. Since ${\mathcal W}$ has a unique local minimum, ${\boldsymbol m}'\not \in {\mathcal W}$. Thus, condition (a) holds for ${\boldsymbol \sigma}$. By [\[48\]](#48){reference-type="eqref" reference="48"}, condition (b) also holds, so that ${\boldsymbol \sigma}\in {\mathcal S}_H({\mathcal W})$. ◻ *Proof of Theorem [ 36](#p01){reference-type="ref" reference="p01"}.* Fix ${\boldsymbol m} \in {\mathcal M}_0$. Let ${\mathcal W}$ be the connected component of the set $\{{\boldsymbol x}\in{\mathbb R}^d: U({\boldsymbol x}) < U({\boldsymbol m}) + \Gamma ({\boldsymbol m})\}$ which contains ${\boldsymbol m}$.
By Lemma [ 37](#l13){reference-type="ref" reference="l13"}, there exists a saddle point ${\boldsymbol \sigma }\in \partial {\mathcal W}$ satisfying condition (a) in [\[56\]](#56){reference-type="eqref" reference="56"}, and condition (b) is fulfilled for all saddle points ${\boldsymbol \sigma}' \in \partial {\mathcal W}$ satisfying (a). We may therefore apply Lemma [ 30](#p03){reference-type="ref" reference="p03"}. Let $Q_\epsilon$ be the test function constructed in Section [7](#sec10){reference-type="ref" reference="sec10"} associated to the well ${\mathcal W}$, and recall that $H=U({\boldsymbol m}) + \Gamma ({\boldsymbol m})$. Multiply both sides of [\[e_res\]](#e_res){reference-type="eqref" reference="e_res"} by the test function $Q_{\epsilon}$ and integrate over ${\mathbb R}^d$ to deduce that $$\label{14} \int_{\Omega}\,Q_{\epsilon}\,(\lambda-\theta_{\epsilon}^{(1)}\, \mathscr{L}_{\epsilon})\,\phi_{\epsilon}\,d\mu_{\epsilon} \;=\; {\boldsymbol g} ({\boldsymbol m}) \, \int_{\mathcal{E}({\boldsymbol m})} \, Q_{\epsilon} \, \,d\mu_{\epsilon}\;,$$ where $\Omega$ is given by [\[36\]](#36){reference-type="eqref" reference="36"}. By definition of $Q_{\epsilon}$, the right-hand side is equal to ${\bf g}({\boldsymbol m}) \, \mu_{\epsilon}(\mathcal{E}({\boldsymbol m}))$. 
Similarly, since $\phi_{\epsilon}$ is uniformly bounded, since $$\mu_{\epsilon}\Big(\, \bigcup_{\bm{\sigma}\in{\mathcal S}_H ({\mathcal W})} \mathcal{E}_{\epsilon}^{\bm{\sigma}} \,\cup\, (\Omega \setminus \mathcal{K}_{\epsilon}) \,\Big) \, =\, o_{\epsilon}(1)\, \mu_{\epsilon} (\mathcal{E}({\boldsymbol m})) \;,$$ and since $Q_\epsilon$ vanishes on ${\mathcal W}_2$ and is equal to $1$ on ${\mathcal W}_1$, by the definition of ${\bf f}_\epsilon$, $$\label{25} \lambda \int_{\Omega}\, Q_{\epsilon} \,\phi_{\epsilon}\,d\mu_{\epsilon} \;=\; \lambda \, {\bf f}_\epsilon ({\boldsymbol m}) \, \mu_{\epsilon}(\mathcal{E}({\boldsymbol m})) \; + \; o_\epsilon (1) \, \mu_{\epsilon} (\mathcal{E}({\boldsymbol m})) \, \,.$$ It remains to consider the term in [\[14\]](#14){reference-type="eqref" reference="14"} involving the generator ${\mathscr L}_\epsilon$. We examine two cases separately. Case 1. Assume that $\Gamma ({\boldsymbol m}) > d^{(1)}$. As $\Gamma ({\boldsymbol m}) > d^{(1)}$ and $e^{-U({\boldsymbol m})/\epsilon} / \mu_{\epsilon} (\mathcal{E}({\boldsymbol m})) \le C_0$ for some finite constant $C_0$ independent of $\epsilon$, $\theta_{\epsilon}^{(1)} = e^{d^{(1)}/\epsilon} \prec e^{\Gamma ({\boldsymbol m})/\epsilon} \le C_0\, e^{[\Gamma ({\boldsymbol m}) + U ({\boldsymbol m})]/\epsilon} \, \mu_{\epsilon} (\mathcal{E}({\boldsymbol m}))$.
Hence, by Lemma [ 35](#p05){reference-type="ref" reference="p05"}, as the right-hand side of [\[37\]](#37){reference-type="eqref" reference="37"} is bounded and $H=\Gamma({\boldsymbol m}) + U ({\boldsymbol m})$, $$\theta_{\epsilon}^{(1)}\, \int_{\Omega}\,Q_\epsilon \, (-\mathscr{L}_{\epsilon})\, \phi_{\epsilon} \,d\mu_{\epsilon} \;=\; o_\epsilon (1) \, \mu_{\epsilon} (\mathcal{E}({\boldsymbol m}))\;.$$ Combining the previous estimates yields that $${\bf f}_{\epsilon}({\boldsymbol m}) = \frac{1}{\lambda} \, {\bf g}({\boldsymbol m}) \;+\; o_\epsilon (1) \;.$$ By [\[40\]](#40){reference-type="eqref" reference="40"} and the definition of ${\mathfrak L}_1$, as $\Gamma ({\boldsymbol m}) > d^{(1)}$, $({\mathfrak L}_1 {\boldsymbol f}_\epsilon)({\boldsymbol m}) =0$, which completes the proof of the theorem in Case 1. Case 2. Assume that $\Gamma ({\boldsymbol m}) = d^{(1)}$. Multiply both sides of [\[37\]](#37){reference-type="eqref" reference="37"} by $e^{- U({\boldsymbol m})/\epsilon}$. Since $\theta_{\epsilon}^{(1)} = e^{d^{(1)}/\epsilon} = e^{\Gamma ({\boldsymbol m})/\epsilon} = e^{[H - U({\boldsymbol m})]/\epsilon}$, by Lemma [ 35](#p05){reference-type="ref" reference="p05"}, $$\theta_{\epsilon}^{(1)}\, \int_{\Omega}\,Q_\epsilon \, (-\mathscr{L}_{\epsilon})\, \phi_{\epsilon} \,d\mu_{\epsilon} \;=\; \Big\{\frac{1}{2\pi\nu_\star}\, \sum_{\bm{\sigma}\in {\mathcal S}_H({\mathcal W}) } \, [\phi_\epsilon ({\boldsymbol m}) - \phi_\epsilon ({\boldsymbol n}_{{\boldsymbol \sigma}})]\, \frac{\mu^{{\boldsymbol \sigma}}} {\sqrt{ - \det \mathbb{H}^{{\boldsymbol \sigma}}}} \,+\, o_{\epsilon}(1) \Big\} \, e^{-U({\boldsymbol m})/\epsilon}$$ because ${\boldsymbol m}_\sigma = {\boldsymbol m}$ for all ${\boldsymbol \sigma }\in {\mathcal S}_H({\mathcal W})$, as ${\boldsymbol m}$ is the only local minimum of $U$ in ${\mathcal W}$.
Since $e^{-U({\boldsymbol m})/\epsilon} / \mu_{\epsilon} (\mathcal{E}({\boldsymbol m})) \le C_0$ for some finite constant independent of $\epsilon$, we may replace in the previous formula, $o_{\epsilon}(1) \, e^{-U({\boldsymbol m})/\epsilon}$ by $o_{\epsilon}(1) \, \mu_{\epsilon} (\mathcal{E}({\boldsymbol m}))$. On the other hand, by [\[55\]](#55){reference-type="eqref" reference="55"}, $e^{-U({\boldsymbol m})/\epsilon}/\nu_\star = [1+o(\epsilon)]\, \mu_{\epsilon} (\mathcal{E}({\boldsymbol m}))/\nu({\boldsymbol m})$. We may therefore rewrite the right-hand side of the previous equation as $$\Big\{\frac{1}{2\pi\nu({\boldsymbol m})}\, \sum_{\bm{\sigma}\in {\mathcal S}_H({\mathcal W}) } \, [\phi_\epsilon ({\boldsymbol m}) - \phi_\epsilon ({\boldsymbol n}_{{\boldsymbol \sigma}})]\, \frac{\mu^{{\boldsymbol \sigma}}} {\sqrt{ - \det \mathbb{H}^{{\boldsymbol \sigma}}}} \,+\, o_{\epsilon}(1) \Big\} \, \mu_{\epsilon} (\mathcal{E}({\boldsymbol m}))\;.$$ By Lemma [ 37](#l13){reference-type="ref" reference="l13"}, ${\mathcal S}_H({\mathcal W}) = \Upsilon ({\boldsymbol m})$. Thus, by [\[38a\]](#38a){reference-type="eqref" reference="38a"} and by definition of ${\boldsymbol n}_{{\boldsymbol \sigma}}$, introduced at the beginning of Section [8](#sec4){reference-type="ref" reference="sec4"}, $\{{\boldsymbol n}_{{\boldsymbol \sigma}} : \bm{\sigma}\in {\mathcal S}_H({\mathcal W}) \} = \{{\boldsymbol n}_{{\boldsymbol \sigma}} : \bm{\sigma}\in \Upsilon ({\boldsymbol m}) \} = {\mathscr V}({\boldsymbol m})$. 
Hence, by Theorem [ 14](#p_flat){reference-type="ref" reference="p_flat"}, the previous expression can be rewritten as $$\Big\{\frac{1}{2\pi\nu({\boldsymbol m})}\, \sum_{{\boldsymbol m}' \in {\mathscr V}({\boldsymbol m})} [{\boldsymbol f}_\epsilon ({\boldsymbol m}) - {\boldsymbol f}_\epsilon ({\boldsymbol m}')]\, \sum_{\bm{\sigma}\in {\mathcal S}({\boldsymbol m}, {\boldsymbol m}') } \, \frac{\mu^{{\boldsymbol \sigma}}} {\sqrt{ - \det \mathbb{H}^{{\boldsymbol \sigma}}}} \,+\, o_{\epsilon}(1) \Big\} \, \mu_{\epsilon} (\mathcal{E}({\boldsymbol m}))\;.$$ By [\[eq:nu\]](#eq:nu){reference-type="eqref" reference="eq:nu"}, [\[39a\]](#39a){reference-type="eqref" reference="39a"}, [\[40\]](#40){reference-type="eqref" reference="40"} and [\[28\]](#28){reference-type="eqref" reference="28"}, the previous expression is equal to $$\big\{ \, (-\, {\mathfrak L}_1 {\boldsymbol f}_\epsilon)({\boldsymbol m}) \,+\, o_{\epsilon}(1) \,\big\} \, \mu_{\epsilon} (\mathcal{E}({\boldsymbol m}))\;.$$ To complete the proof of the theorem, it remains to combine the estimates obtained at the beginning of the proof with this last one. ◻ ## Trace processes {#trace-processes .unnumbered} Let ${\boldsymbol y}_\epsilon (t)$ be the process ${\boldsymbol x}_\epsilon(t)$ sped up by $\theta^{(1)}_\epsilon$: $\color{blue} {\boldsymbol y}_\epsilon(t) = {\boldsymbol x}_\epsilon(t\theta^{(1)}_\epsilon)$. Denote by $\color{blue} {\mathbb Q}^{\epsilon}_{{\boldsymbol x}}$ the probability measure on $C({\mathbb R}_+, {\mathbb R}^d)$ induced by the process ${\boldsymbol y}_\epsilon (t)$ starting from ${\boldsymbol x}$. We use the same symbol ${\mathbb Q}^{\epsilon}_{{\boldsymbol x}}$ to represent the expectation with respect to the measure ${\mathbb Q}^{\epsilon}_{{\boldsymbol x}}$.
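The occupation-time change defining the trace process in the next paragraphs can be illustrated numerically. The following is a minimal sketch, not part of the paper's argument: a toy grid path stands in for ${\boldsymbol y}_\epsilon$, the set $A=\{0\}$ stands in for $\mathcal{E}({\mathcal M}_0)$, and the two functions compute the occupation time $T(t)$, its generalized inverse $S(t)=\sup \{s\ge 0 : T(s) \le t\}$, and the trace value $y(S(t))$.

```python
import numpy as np

# Minimal numerical sketch of the trace construction: for a path y sampled on
# a uniform grid, compute the occupation time T(t) of a set A, its generalized
# inverse S(t), and the trace value y(S(t)).  The path and the set A = {0} are
# hypothetical stand-ins for y_eps and E(M_0); dt is a power of 2 so that all
# accumulated times are exact binary fractions.

def occupation_time(in_A, dt):
    """T[k] = time spent in A up to grid time k*dt (T is non-decreasing)."""
    return np.concatenate(([0.0], np.cumsum(in_A) * dt))

def generalized_inverse(T, dt, t):
    """S(t) = sup{s >= 0 : T(s) <= t}, evaluated on the grid."""
    k = np.searchsorted(T, t, side="right") - 1  # largest k with T[k] <= t
    return k * dt

dt = 0.125
y = np.array([0, 0, 1, 1, 0, 0, 0, 1, 0, 0], dtype=float)  # 0 = "in a well"
in_A = (y == 0.0).astype(float)

T = occupation_time(in_A, dt)
S = generalized_inverse(T, dt, 0.375)  # run the "well clock" up to time 0.375
trace_value = y[int(round(S / dt))]    # y(S(t)); here it lies in A

print(T[-1])        # total time in A: 7 steps * 0.125 = 0.875
print(S)            # 0.625: the path has spent exactly 0.375 in A by then
print(trace_value)  # 0.0
```

On the grid, $S(t)$ is the last grid time at which the accumulated time in $A$ does not exceed $t$, so the trace value belongs to $A$, mirroring the fact that ${\boldsymbol y}^{\rm T}_\epsilon$ is an $\mathcal{E}({\mathcal M}_0)$-valued process.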
Denote by $T_{\epsilon}(t)$ the time spent by ${\boldsymbol y}_{\epsilon}(\cdot)$ on $\mathcal{E}({\mathcal M}_0)$ up to time $t>0$: $${\color{blue} T_{\epsilon}(t)} \; :=\; \int_{0}^{t}\, \chi_{_{\mathcal{E} ({\mathcal M}_0)}}({\boldsymbol y}_{\epsilon}(s))\,ds\;.$$ Let $S_{\epsilon}(\cdot)$ be the generalized inverse of the non-decreasing process $T_{\epsilon}(\cdot)$: $${\color{blue} S_{\epsilon}(t)} \;:=\; \sup \{s\ge 0 : T_\epsilon (s) \le t\,\}\;, \quad t\ge 0 \;.$$ Define the trace process of ${\boldsymbol y}_\epsilon(\cdot)$ on ${\mathcal E}({\mathcal M}_0)$ by $$\label{eq:trace} {\color{blue} {\boldsymbol y}^{\rm T}_\epsilon(t)} \, :=\, {\boldsymbol y}_{\epsilon} (\, S_{\epsilon}(t)\,)\;, \quad t\ge0 \;,$$ which is an ${\mathcal E}({\mathcal M}_0)$-valued Markov process. Let $\Phi: {\mathcal E}({\mathcal M}_0) \to {\mathcal M}_0$ be the projection given by $\Phi = \sum_{{\boldsymbol m}\in {\mathcal M}_0} {\boldsymbol m} \, \chi_{{\mathcal E}({\boldsymbol m})}$. The next result is a consequence of Theorem [ 2](#t01){reference-type="ref" reference="t01"} and [@LMS Theorem 2.3]. **Theorem 38**. *Fix ${\boldsymbol m} \in {\mathcal M}_0$, and a sequence ${\boldsymbol x}_\epsilon \in {\mathcal E}({\boldsymbol m})$. Starting from ${\boldsymbol x}_\epsilon$, the process $\Phi({\boldsymbol y}^{\rm T}_\epsilon(t))$ converges in the Skorohod topology to the ${\mathcal M}_0$-valued continuous-time Markov chain induced by the generator ${\mathfrak L}_1$ starting from ${\boldsymbol m}$. Moreover, for all $T>0$, $$\label{57} \lim_{\epsilon\rightarrow0} \sup_{\boldsymbol{x}\in {\mathcal E}({\mathcal M}_0)} \mathbb{Q}_{\bm{x}}^{\epsilon} \Big[\, \int_{0}^{T} \chi_{_{{\mathbb R}^d \setminus {\mathcal E}({\mathcal M}_0)}} (\boldsymbol{y}_{\epsilon}(t)) \; dt \, \Big] \,=\, 0\;.$$* The next assertion is a consequence of [\[57\]](#57){reference-type="eqref" reference="57"}. We refer to [@LLM display (3.2)] for a proof. **Lemma 39**.
*For all $t\ge0$ and $\delta>0$, $$\limsup_{\epsilon\rightarrow0}\, \sup_{\boldsymbol{x}\in\mathcal{E} ({\mathcal M}_0) }\, \mathbb{Q}_{\boldsymbol{x}}^{\epsilon} [\,S_{\epsilon}(t)>t+\delta\,]=0\;.$$* # Proof of Theorem [ 1](#t00){reference-type="ref" reference="t00"}: finite-dimensional distributions {#sec3} The main result of this section, Theorem [ 40](#t_fdd){reference-type="ref" reference="t_fdd"}, states that in the time-scale $\theta^{(1)}_\epsilon$ the finite-dimensional distributions of the diffusion ${\boldsymbol x}_\epsilon (t)$ converge to those of the ${\mathcal M}_0$-valued Markov chain whose generator is given by ${\mathfrak L}_1$ introduced in [\[28\]](#28){reference-type="eqref" reference="28"}. Denote by $\color{blue} D({\mathbb R}_+, {\mathcal M}_0)$ the space of right-continuous functions ${\boldsymbol y}: {\mathbb R}_+ \to {\mathcal M}_0$ with left limits endowed with the Skorohod topology. Let $\color{blue} {\mathcal Q}_{{\boldsymbol m}}$, ${\boldsymbol m}\in{\mathcal M}_0$, be the measure on $D({\mathbb R}_+, {\mathcal M}_0)$ induced by the continuous-time ${\mathcal M}_0$-valued Markov chain associated to the generator ${\mathfrak L}_1$ starting from ${\boldsymbol m}$. Expectation with respect to ${\mathcal Q}_{{\boldsymbol m}}$ is also represented by ${\mathcal Q}_{{\boldsymbol m}}$. **Theorem 40**. *Fix ${\boldsymbol m}\in {\mathcal M}_0$, and $\boldsymbol{x}\in\mathcal{D}({\boldsymbol m})$.
Then, $$\lim_{\epsilon\rightarrow0} \mathbb{E}_{\boldsymbol{x}}^{\epsilon} \Big[\, \prod_{j=1}^{{{\mathfrak n}}} F_j (\boldsymbol{x}_{\epsilon}(\theta^{(1)}_{\epsilon} t_{j})) \, \Big] \,=\, \mathcal{Q}_{{\boldsymbol m}} \Big[\, \prod_{j=1}^{{{\mathfrak n}}} F_j ({\boldsymbol y} (t_{j})) \, \Big]$$ for all ${{\mathfrak n}}\ge1$, $0<t_{1}<\cdots<t_{{{\mathfrak n}}}$ and bounded continuous functions $F_j\colon {\mathbb R}^d \to {\mathbb R}$, $1\le j\le {{\mathfrak n}}$.* The proof of Theorem [ 40](#t_fdd){reference-type="ref" reference="t_fdd"} is based on the next result. **Proposition 41**. *Fix $r_0>0$ small enough to fulfill the conditions above equation [\[30\]](#30){reference-type="eqref" reference="30"}, and recall from this equation the definition of the wells ${\mathcal E}({\boldsymbol m}')$, ${\boldsymbol m}'\in {\mathcal M}_0$. Fix ${\boldsymbol m}\in {\mathcal M}_0$. Then, $$\lim_{\epsilon\rightarrow0} \mathbb{P}_{\boldsymbol{x}_\epsilon}^{\epsilon} \Big[\, \bigcap_{j=1}^{{{\mathfrak n}}} \{\, \boldsymbol{x}_{\epsilon}(\theta^{(1)}_{\epsilon} t_{j,\epsilon} ) \in {\mathcal E}({\boldsymbol m}_j)\, \} \, \Big] \,=\, \mathcal{Q}_{{\boldsymbol m}} \big[\, \bigcap_{j=1}^{{{\mathfrak n}}} \{ {\boldsymbol y} (t_{j}) = {\boldsymbol m}_j \, \} \, \, \big]$$ for all ${{\mathfrak n}}\ge 1$, $0<t_{1}<\cdots<t_{{{\mathfrak n}}}$, ${\boldsymbol m}_1, \dots, {\boldsymbol m}_{{\mathfrak n}} \in {\mathcal M}_0$, and sequences ${\boldsymbol x}_\epsilon \in {\mathcal E}({\boldsymbol m})$, $t_{j,\epsilon} \to t_j$.* It follows from this result that $$\label{42} \lim_{\epsilon\rightarrow0} \mathbb{P}_{\boldsymbol{x}_\epsilon}^{\epsilon} \Big[\, \bigcap_{j=1}^{{{\mathfrak n}}} \{\, \boldsymbol{x}_{\epsilon}(\theta^{(1)}_{\epsilon} t_{j,\epsilon} ) \in {\mathcal E}({\mathcal M}_0 )\, \} \, \Big] \;=\; 1$$ for all ${\boldsymbol m}\in {\mathcal M}_0$, ${{\mathfrak n}}\ge 1$, $0<t_{1}<\cdots<t_{{{\mathfrak n}}}$, ${\boldsymbol m}_1, \dots, {\boldsymbol m}_{{\mathfrak n}} \in {\mathcal
M}_0$, and sequences ${\boldsymbol x}_\epsilon \in {\mathcal E}({\boldsymbol m})$, $t_{j,\epsilon} \to t_j$. ## Proof of Theorem [ 40](#t_fdd){reference-type="ref" reference="t_fdd"} {#proof-of-theorem-t_fdd .unnumbered} We prove the result for ${{\mathfrak n}}=1$ as the arguments in the general case are identical. Fix $t>0$, $\eta>0$ and a bounded continuous function $F\colon {\mathbb R}^d\to {\mathbb R}$. By continuity, there exists $\delta_0>0$ such that $$\label{39} \max_{{\boldsymbol m} \in{\mathcal M}_0}\, \sup_{{\boldsymbol x} \in {\mathcal W}^{2\delta_0}({\boldsymbol m})} |\, F({\boldsymbol x}) - F({\boldsymbol m})\,| \;\le\; \eta\;.$$ Fix $r_0 <\delta_0$ small enough to fulfill the conditions of Proposition [ 41](#l11){reference-type="ref" reference="l11"}. Consider the wells ${\mathcal E}({\boldsymbol m})$ defined by [\[30\]](#30){reference-type="eqref" reference="30"}. Recall from [\[41\]](#41){reference-type="eqref" reference="41"} that we represent by $\tau_{{\mathscr A}}$ the hitting time of the set ${\mathscr A}$, and let $\color{blue} \tau = \tau_{{\mathcal E}({\boldsymbol m})}$. By Lemma [ 29](#lem_exit){reference-type="ref" reference="lem_exit"}, Corollary [ 24](#t_hitting){reference-type="ref" reference="t_hitting"}, and the strong Markov property, as ${\boldsymbol x}\in {\mathcal D}({\boldsymbol m})$ and $F$ is bounded, $$\mathbb{E}_{\boldsymbol{x}}^{\epsilon} \big[\, F (\boldsymbol{x}_{\epsilon}(\theta^{(1)}_{\epsilon} t)) \, \big] \;=\; \mathbb{E}_{\boldsymbol{x}}^{\epsilon} \Big[\, \mathbb{E}_{\boldsymbol{x}_\epsilon (\tau)}^{\epsilon} \big[\, F (\boldsymbol{x}_{\epsilon}(\theta^{(1)}_{\epsilon} t -\tau )) \, \big]\, \chi_{\tau \le \epsilon^{-1}} \, \Big] \,+\, R^{(1)}_\epsilon\;,$$ where $|R^{(1)}_\epsilon|\to 0$.
The expectation on the right-hand side has to be understood as the expectation of $\mathbb{E}_{\boldsymbol{x}_\epsilon (\tau)}^{\epsilon} \big[\, F (\boldsymbol{x}_{\epsilon}(\theta^{(1)}_{\epsilon} t - s )) \, \big]$ for $s=\tau$. By definition of $r_0$, the wells ${\mathcal E}({\boldsymbol m}')$, ${\boldsymbol m}'\in{\mathcal M}_0$, and [\[39\]](#39){reference-type="eqref" reference="39"}, the right-hand side of the previous equation is equal to $$\sum_{{\boldsymbol m}'\in {\mathcal M}_0} F({\boldsymbol m}') \; \mathbb{E}_{\boldsymbol{x}}^{\epsilon} \Big[\, \mathbb{P}_{\boldsymbol{x}_\epsilon (\tau)}^{\epsilon} \big[\, \boldsymbol{x}_{\epsilon}(\theta^{(1)}_{\epsilon} t -\tau ) \in {\mathcal E}({\boldsymbol m}') \, \big]\, \chi_{\tau \le \epsilon^{-1}} \, \Big] \,+\, R^{(1)}_\epsilon \,+\, R^{(2)}_\epsilon \,+\, R_\eta \;,$$ where $|R_\eta|\le \eta$ and $$|R^{(2)}_\epsilon| \;\le\; \Vert F\Vert_\infty\, \sup_{{\boldsymbol y}\in {\mathcal E}({\boldsymbol m})} \sup_{t - (\epsilon \theta^{(1)}_\epsilon)^{-1} \le s\le t} \mathbb{P}_{\boldsymbol{y}}^{\epsilon} \big[\, \boldsymbol{x}_{\epsilon}(\theta^{(1)}_{\epsilon} s ) \not\in {\mathcal E}({\mathcal M}_0) \, \big]\;.$$ By [\[42\]](#42){reference-type="eqref" reference="42"}, $R^{(2)}_\epsilon \to 0$. By Proposition [ 41](#l11){reference-type="ref" reference="l11"}, Lemma [ 29](#lem_exit){reference-type="ref" reference="lem_exit"} and Corollary [ 24](#t_hitting){reference-type="ref" reference="t_hitting"}, as $\epsilon\to 0$, the sum converges to $$\sum_{{\boldsymbol m}'\in {\mathcal M}_0} F({\boldsymbol m}') \; {\mathcal Q}_{{\boldsymbol m}} \big[\, {\boldsymbol y} (t) ={\boldsymbol m}'\, \big] \;=\; {\mathcal Q}_{{\boldsymbol m}} \big[\, F({\boldsymbol y} (t))\, \big] \;,$$ which completes the proof of the theorem.
◻ ## Proof of Proposition [ 41](#l11){reference-type="ref" reference="l11"} {#proof-of-proposition-l11 .unnumbered} The proof relies on a lemma, which appeared before in [@LLM Lemma 3.1] for discrete-valued Markov processes. **Lemma 42**. *Fix $t>0$ and ${\boldsymbol m}$, ${\boldsymbol m}' \in {\mathcal M}_0$. Then, for all $\boldsymbol{x}\in\mathcal{E}({\boldsymbol m})$, $b\in(0,\,t/3)$ and sequence $t_\epsilon \to t$, $$\mathbb{Q}_{\boldsymbol{x}}^{\epsilon} \big[\,{\boldsymbol y}^{\rm T}_\epsilon\left(t-3b\right)\in\mathcal{E}({\boldsymbol m}')\,\big] \, \le\, \mathbb{Q}_{\boldsymbol{x}}^{\epsilon}\big[\, \boldsymbol{y}_{\epsilon}(t_\epsilon)\in\mathcal{E}({\boldsymbol m}')\,\big] \,+ \, R_{\epsilon}(t,\,b)\;,$$ where $$\lim_{b\rightarrow0}\,\limsup_{\epsilon\to0}\,R_{\epsilon}(t,\,b)=0\;.$$* *Proof.* Fix $t>0$, ${\boldsymbol m}$, ${\boldsymbol m}' \in {\mathcal M}_0$, $\boldsymbol{x}\in\mathcal{E}({\boldsymbol m})$, a sequence $t_\epsilon \to t$, and $2<\alpha < 3$. By [\[eq:trace\]](#eq:trace){reference-type="eqref" reference="eq:trace"} and the trivial fact that $S_{\epsilon}(t)\ge t$, for $b\in(0,\,t/3)$, $$\mathbb{Q}_{\boldsymbol{x}}^{\epsilon} [\,{\boldsymbol y}^{\rm T}_\epsilon(t-3b)\in\mathcal{E}({\boldsymbol m}')\,] \,=\, \mathbb{Q}_{\boldsymbol{x}}^{\epsilon} [\,\bm{y}_{\epsilon} (S_{\epsilon}(t-3b)) \in\mathcal{E}({\boldsymbol m}')\,] \;\le\; \mathbb{Q}_{\boldsymbol{x}}^{\epsilon}[\,A_{\epsilon}(t,\,b)\,] \,+\, \mathbb{Q}_{\boldsymbol{x}}^{\epsilon} [\,B_{\epsilon}(t,\,b)\,]\;,$$ where $$\begin{gathered} A_{\epsilon}(t,\,b) \,=\, \{\,S_{\epsilon}(t-3b)>t-\alpha b\,\}\;, \\ B_{\epsilon}(t,\,b) \,=\,\{\,{\boldsymbol y}_{\epsilon}(s)\in\mathcal{E}({\boldsymbol m}')\ \ \text{for some}\ s\in[t-3b,\,t- \alpha b]\,\}\ .\end{gathered}$$ By Lemma [ 39](#l: lem1){reference-type="ref" reference="l: lem1"}, as $\alpha <3$, $$\limsup_{\epsilon\rightarrow0}\, \sup_{\boldsymbol{x}\in\mathcal{E}(\mathcal{M}_0)}\,
\mathbb{Q}_{\boldsymbol{x}}^{\epsilon}[\,A_{\epsilon}(t,\,b)\,]=0\ .$$ On the other hand, $$\mathbb{Q}_{\boldsymbol{x}}^{\epsilon}[B_{\epsilon}(t,\,b)] \le\mathbb{Q}_{\boldsymbol{x}}^{\epsilon} [\, {\boldsymbol y}_{\epsilon}(t_\epsilon)\in\mathcal{E}({\boldsymbol m}') \,] +\mathbb{Q}_{\boldsymbol{x}}^{\epsilon} [\,B_{\epsilon}(t,\,b),\,\bm{y}_{\epsilon}(t_\epsilon) \notin\mathcal{E}({\boldsymbol m}')\,]\;.$$ It remains to prove that $$\limsup_{b\rightarrow0}\,\limsup_{\epsilon\to 0}\, \sup_{{\boldsymbol x}\in {\mathcal E}({\boldsymbol m})} \mathbb{Q}_{\boldsymbol{x}}^{\epsilon} [\, B_{\epsilon}(t,\,b),\,\bm{y}_{\epsilon}(t_\epsilon) \notin\mathcal{E}({\boldsymbol m}')\,] \,=\, 0\;.$$ By Lemma [ 28](#l17){reference-type="ref" reference="l17"}, the definition of $B_{\epsilon}(t,\,b)$, and the strong Markov property, $$\limsup_{b\rightarrow0}\,\limsup_{\epsilon\to0}\, \sup_{\boldsymbol{x}\in\mathcal{E}({\boldsymbol m})}\, \mathbb{Q}_{\boldsymbol{x}}^{\epsilon} [\,B_{\epsilon}(t,\,b),\,{\boldsymbol y}_{\epsilon}(t_\epsilon)\in \mathcal{E}({\mathcal M}_0) \setminus\mathcal{E}({\boldsymbol m}')\,] \,=\, 0\;.$$ On the other hand, as $\alpha>2$, for $\epsilon$ sufficiently small, $t_\epsilon -s \in [2b,4b]$ for all $s\in [t-3b, t-\alpha b]$. Hence, by the strong Markov property and Proposition [ 43](#p06){reference-type="ref" reference="p06"}, $$\limsup_{b\rightarrow0}\, \limsup_{\epsilon\to0}\, \sup_{\boldsymbol{x}\in\mathcal{E}({\boldsymbol m})}\, \mathbb{Q}_{\boldsymbol{x}}^{\epsilon} [\, B_{\epsilon}(t,\,b),\, \bm{y}_{\epsilon}(t_\epsilon) \notin\mathcal{E}({\mathcal M}_0)\, ] \;=\; 0\;.$$ The assertion of the lemma follows from the previous estimates. ◻ *Proof of Proposition [ 41](#l11){reference-type="ref" reference="l11"}.* The proof is similar to the one of [@LLM Proposition 2.1]. We consider the case ${\mathfrak n}=1$, the general one being similar. 
Fix $t>0$, ${\boldsymbol m}$, ${\boldsymbol m}' \in {\mathcal M}_0$, and sequences ${\boldsymbol x}_\epsilon \in {\mathcal E}({\boldsymbol m})$, $t_\epsilon \to t$. By Theorem [ 38](#t02){reference-type="ref" reference="t02"}, $${\mathcal Q}_{{\boldsymbol m}} [ {\boldsymbol y}(t) = {\boldsymbol m}'] \;=\; \lim_{\delta \to 0} {\mathcal Q}_{{\boldsymbol m}} [ {\boldsymbol y}(t-3\delta) = {\boldsymbol m}'] \;=\; \lim_{\delta \to 0} \lim_{\epsilon \to 0} {\mathbb Q}^\epsilon_{{\boldsymbol x}_\epsilon} [ \, {\boldsymbol y}^{\rm T}_\epsilon (t-3\delta) \in {\mathcal E}({\boldsymbol m}') \, ] \;.$$ Thus, by Lemma [ 42](#l: lem0){reference-type="ref" reference="l: lem0"}, $${\mathcal Q}_{{\boldsymbol m}} [ {\boldsymbol y}(t) = {\boldsymbol m}'] \;\le \; \liminf_{\epsilon\to 0} {\mathbb Q}^\epsilon_{{\boldsymbol x}_\epsilon} [ \, {\boldsymbol y}_\epsilon (t_\epsilon) \in {\mathcal E}({\boldsymbol m}') \, ] \;\le \; \limsup_{\epsilon\to 0} {\mathbb Q}^\epsilon_{{\boldsymbol x}_\epsilon} [ \, {\boldsymbol y}_\epsilon (t_\epsilon) \in {\mathcal E}({\boldsymbol m}') \, ]\;.$$ Since $$1 \;=\; \sum_{{\boldsymbol m}'\in {\mathcal M}_0} {\mathcal Q}_{{\boldsymbol m}} [ {\boldsymbol y}(t) = {\boldsymbol m}'] \quad\text{and}\quad \sum_{{\boldsymbol m}'\in {\mathcal M}_0} {\mathbb Q}^\epsilon_{{\boldsymbol x}_\epsilon} [ \, {\boldsymbol y}_\epsilon (t_\epsilon) \in {\mathcal E}({\boldsymbol m}') \, ] \;\le\; 1\;,$$ the inequalities in the penultimate formula must be identities for each ${\boldsymbol m}'\in {\mathcal M}_0$, which completes the proof of the proposition. ◻ ## Avoiding wells {#avoiding-wells .unnumbered} We complete the proof of Proposition [ 41](#l11){reference-type="ref" reference="l11"} by showing that the probability that the process is not in a well when it starts from a well is very small. This is the content of Proposition [ 43](#p06){reference-type="ref" reference="p06"} below, the main result of this subsection. **Proposition 43**.
*For all ${\boldsymbol m} \in {\mathcal M}_0$, $$\limsup_{b\rightarrow0}\,\limsup_{\epsilon\rightarrow0}\, \sup_{\boldsymbol{x}\in\mathcal{E}({\boldsymbol m})}\, \sup_{t\in[2b,\,4b]}\,\mathbb{Q}_{\boldsymbol{x}}^{\epsilon} [\,{\boldsymbol y}_{\epsilon}(t)\notin\mathcal{E} ({\mathcal M}_0) \,] \;=\; 0\;.$$* The proof of this proposition requires some preliminary estimates. Fix $\color{blue} \eta\in(0,\,r_{0}/2)$ so that there is no critical point $\bm{c} \in {\mathcal C}_0$ such that $U(\bm{c})\in(U({\boldsymbol m}'),\,U({\boldsymbol m}')+\eta)$ for some ${\boldsymbol m}'\in{\mathcal M}_0$. Fix ${\boldsymbol m}\in {\mathcal M}_0$, and let $${\color{blue} {\mathcal R}} \,=\, \mathcal{R}({\boldsymbol m}) \,=\, (\mathbb{R}^{d}\setminus\mathcal{E} ({\mathcal M}_0) ) \cap \big\{\bm{x}\in\mathbb{R}^{d}:U(\bm{x}) < U({\boldsymbol m} )+\eta/2\, \big\}\; .$$ Denote by ${\mathcal W}$ the connected component of the set $\{{\boldsymbol x}\in {\mathbb R}^d: U({\boldsymbol x}) < U({\boldsymbol m}) + d^{(1)}\}$ which contains ${\boldsymbol m}$. We claim that $$\label{50} {\mathcal W} \cap {\mathcal R} \;=\; \varnothing \;.$$ Indeed, if ${\boldsymbol y}\in {\mathcal R}$, $U({\boldsymbol y}) < U({\boldsymbol m}) + \eta/2$. By definition of $d^{(1)}$ and ${\mathcal E}({\boldsymbol m})$, all points ${\boldsymbol z} \in {\mathcal W}$ such that $U({\boldsymbol z}) \le U({\boldsymbol m}) + \eta/2$ are contained in ${\mathcal E}({\boldsymbol m})$. Hence, ${\mathcal W} \cap {\mathcal R} = {\mathcal E}({\boldsymbol m}) \cap {\mathcal R}$. By definition of ${\mathcal R}$, ${\mathcal E}({\boldsymbol m}) \cap {\mathcal R} = \varnothing$, which completes the proof of the claim. **Lemma 44**. *Fix ${\boldsymbol m}\in {\mathcal M}_0$.
Then, $$\limsup_{a\rightarrow0}\, \limsup_{\epsilon\rightarrow0}\, \sup_{\boldsymbol{x}\in\mathcal{E}({\boldsymbol m})}\, \mathbb{Q}_{\boldsymbol{x}}^{\epsilon} \big[\,\tau_{_{\mathcal{R}}} \le a \,\big]=0\;.$$* *Proof.* By Lemma [ 28](#l17){reference-type="ref" reference="l17"}, it suffices to show that $$\mathbb{Q}_{\boldsymbol{x}}^{\epsilon}\big[\, \tau_{_{\mathcal{R}}} \le a \,\big] \;\le\; 2\, \max_{{\boldsymbol m}'\in {\mathcal M}_0} \sup_{\bm{z}\in \mathcal{E}({\boldsymbol m}')} \mathbb{Q}_{\bm{z}}^{\epsilon}\big[\, \tau_{{\mathcal E}({\mathcal M}_0) \setminus \mathcal{E}({\boldsymbol m}')} < 2 a \,\big ] \,+\, R_{\epsilon}({\boldsymbol x}) \;,$$ where $\sup_{{\boldsymbol x} \in {\mathcal E}({\boldsymbol m})} |R_{\epsilon}({\boldsymbol x})| \to 0$. To prove the previous bound, first observe that $$\label{45} \mathbb{Q}_{\bm{x}}^{\epsilon} [\, \tau_{_{\mathcal{R}}} \le a \, ] \,=\, \mathbb{Q}_{\bm{x}}^{\epsilon} [\, \tau_{_{\mathcal{R}}} \le a \,,\, \sigma_{_{{\mathcal E}({\mathcal M}_0)}} \le \iota_\epsilon\,] \,+\, \mathbb{Q}_{\bm{x}}^{\epsilon} [\, \tau_{_{\mathcal{R}}} \le a \,,\, \sigma_{_{{\mathcal E}({\mathcal M}_0)}} > \iota_\epsilon \, ] \;,$$ where $\sigma_{_{{\mathcal A}}}$, ${\mathcal A}\subset{\mathbb R}^d$, is the first time after $\tau_{{\mathcal R}}$ that the process visits ${\mathcal A}$: $${\color{blue} \sigma_{_{{\mathcal A}}}} \;:=\; \inf\{t> \tau_{{\mathcal R}} : {\boldsymbol x}_\epsilon (t) \in {\mathcal A} \,\}\;,$$ and $\iota_\epsilon = a + \epsilon^{-1}/\theta^{(1)}_\epsilon$. By the strong Markov property, the second term on the right-hand side is bounded by $$\sup_{\bm{z}\in\mathcal{R}} \mathbb{P}_{\bm{z}}^{\epsilon}\big[ \, \tau_{{\mathcal E}({\mathcal M}_0)} \ge \epsilon^{-1} \, \big]\ .$$ To keep notation simple, we replaced the measure ${\mathbb Q}^\epsilon_{{\boldsymbol z}}$ by ${\mathbb P}^\epsilon_{{\boldsymbol z}}$.
By Corollary [ 24](#t_hitting){reference-type="ref" reference="t_hitting"}, this expression is bounded by a remainder $R_{\epsilon}({\boldsymbol x})$ such that $\sup_{{\boldsymbol x} \in {\mathcal E}({\boldsymbol m})} |R_{\epsilon}({\boldsymbol x})| \to 0$. We turn to the first term on the right-hand side of [\[45\]](#45){reference-type="eqref" reference="45"}. It can be written as $$\label{49} \mathbb{Q}_{\bm{x}}^{\epsilon} [\, \tau_{_{\mathcal{R}}} \le a \,,\, \sigma_{_{{\mathcal E}({\boldsymbol m})}} \le \iota_\epsilon\,] \;+\; \mathbb{Q}_{\bm{x}}^{\epsilon} [\, \tau_{_{\mathcal{R}}} \le a \,,\, \sigma_{_{{\mathcal E}({\mathcal M}_0) \setminus {\mathcal E}({\boldsymbol m}) }} \le \iota_\epsilon\,]\;.$$ By the strong Markov property, the first term is bounded by $$\sup_{\bm{z}\in\mathcal{R}} \mathbb{P}_{\bm{z}}^{\epsilon} \big[\, \tau_{\mathcal{E}({\boldsymbol m})}< 2 a \theta^{(1)}_{\epsilon} \,\big] \;=\; \max_{{\mathcal A}} \sup_{\bm{z}\in\mathcal{A}} \mathbb{P}_{\bm{z}}^{\epsilon} \big[\, \tau_{\mathcal{E}({\boldsymbol m})}< 2 a \theta^{(1)}_{\epsilon} \,\big] \;,$$ where the maximum is carried over all connected components of ${\mathcal R}$. The number of connected components is finite because $U({\boldsymbol x}) \to\infty$ as $|{\boldsymbol x}|\to\infty$. Fix a connected component ${\mathcal A}$ of ${\mathcal R}$, and let ${\mathcal B}$ be the connected component of $\{\bm{x}\in\mathbb{R}^{d}:U(\bm{x})<U({\boldsymbol m})+ \eta \}$ containing $\mathcal{A}$.
Since there are no critical points ${\boldsymbol c}\in {\mathcal C}_0$ such that $U({\boldsymbol c}) \in (U({\boldsymbol m}), U({\boldsymbol m}) +\eta)$, by Corollary [ 25](#l16){reference-type="ref" reference="l16"}, $$\lim_{\epsilon \to 0} \sup_{\bm{z}\in\mathcal{A}} \mathbb{P}_{\bm{z}}^{\epsilon}\big[\, \tau_{\partial {\mathcal B}} < \tau_{_{\mathcal{E}({\mathcal B})}} \,\big] \; =\; 0 \;.$$ On the other hand, by [\[50\]](#50){reference-type="eqref" reference="50"}, $\mathcal{E}({\boldsymbol m}) \subset \mathbb{R}^{d} \setminus \mathcal{B}$, so that $\tau_{\partial {\mathcal B}} < \tau_{{\mathcal E}({\boldsymbol m})}$. Hence, $$\sup_{\bm{z}\in\mathcal{A}} \mathbb{P}_{\bm{z}}^{\epsilon}\big[\, \tau_{_{\mathcal{E}({\boldsymbol m})}} < 2 a \theta^{(1)}_{\epsilon} \,\big] \;\le\; \sup_{\bm{z}\in\mathcal{A}} \mathbb{P}_{\bm{z}}^{\epsilon}\big[\, \tau_{\mathcal{E}({\boldsymbol m})} < 2 a \theta^{(1)}_{\epsilon} \,,\, \tau_{_{\mathcal{E}({\mathcal B})}} < \tau_{\mathcal{E}({\boldsymbol m})} \,\big ] \;+\; o_\epsilon(1) \;.$$ By the strong Markov property, this expression is bounded by $$\sup_{\bm{z}\in \mathcal{E}({\mathcal B})} \mathbb{P}_{\bm{z}}^{\epsilon}\big[\, \tau_{\mathcal{E}({\boldsymbol m})} < 2 a \theta^{(1)}_{\epsilon} \,\big ] \;+\; o_\epsilon(1) \;.$$ Since ${\mathcal B}$ and ${\mathcal E}({\boldsymbol m})$ are disjoint, this expression is less than or equal to $$\max_{{\boldsymbol m}'\in {\mathcal M}_0} \sup_{\bm{z}\in \mathcal{E}({\boldsymbol m}')} \mathbb{P}_{\bm{z}}^{\epsilon}\big[\, \tau_{{\mathcal E}({\mathcal M}_0) \setminus \mathcal{E}({\boldsymbol m}')} < 2 a \theta^{(1)}_{\epsilon} \,\big ] \;+\; o_\epsilon(1) \;.$$ We turn to the second term of [\[49\]](#49){reference-type="eqref" reference="49"}.
Since $\epsilon^{-1} \prec\theta_{\epsilon}^{(1)}$, it is bounded by $$\mathbb{P}^{\epsilon}_{\bm{x}} \big[\, \tau_{_{\mathcal{E} ({\mathcal M}_0) \setminus\mathcal{E}({\boldsymbol m})}} < 2a\theta_{\epsilon}^{(1)} \,\big] \;,$$ which completes the proof of the lemma. ◻ *Proof of Proposition [ 43](#p06){reference-type="ref" reference="p06"}.* Recall the definition of the set ${\mathcal R}$ introduced just before Lemma [ 44](#l12){reference-type="ref" reference="l12"}. Denote by ${\mathcal W}$ the connected component of the set $\{{\boldsymbol x} \in\mathbb{R}^{d}: U({\boldsymbol x})<U({\boldsymbol m}) + d^{(1)}\, \}$ which contains ${\boldsymbol m}$. By [\[50\]](#50){reference-type="eqref" reference="50"}, ${\mathcal R} \cap {\mathcal W} = \varnothing$. Clearly, $$\begin{aligned} \mathbb{Q}_{\boldsymbol{x}}^{\epsilon} \big[\, \boldsymbol{y}_{\epsilon}(t) \in\mathbb{R}^{d}\setminus\mathcal{E} ({\mathcal M}_0) \,\big ] \; \le \; \mathbb{Q}_{\boldsymbol{x}}^{\epsilon} \big[\, {\boldsymbol y}_{\epsilon} (t) \in \mathbb{R}^{d} \setminus \{ \mathcal{E} ({\mathcal M}_0) \cup \mathcal{R} \} \, \big ] \;+\; \mathbb{Q}_{\boldsymbol{x}}^{\epsilon} \big[\, \tau_{_{\mathcal{R}}} \le t\,\big]\;.\end{aligned}$$ Recall that $\eta<r_0/2$, choose a time-scale $\varrho_\epsilon$ satisfying [\[58\]](#58){reference-type="eqref" reference="58"}, and let $\kappa_\epsilon = \varrho_\epsilon/\theta^{(1)}_\epsilon$. By Corollary [ 17](#l15){reference-type="ref" reference="l15"}, the first term on the right-hand side is bounded by $$\mathbb{Q}_{\mu_{\epsilon}^{\rm R}}^{\epsilon} \big[\, \bm{y}_{\epsilon} (t-\kappa_{\epsilon}) \in \mathbb{R}^{d}\setminus \{\mathcal{E} ({\mathcal M}_0) \cup\mathcal{R} \} \big] \,+\, o_{\epsilon}(1)\;,$$ where the error is uniform over $t\in [2b, 4b]$, ${\boldsymbol x}\in {\mathcal E}({\boldsymbol m})$.
As $\mu_\epsilon$ is the stationary state and $\mu_{\epsilon}^{\rm R}$ the measure $\mu_\epsilon$ conditioned to $\mathcal{W}^{2r_{0}}(\bm{m})$, the previous expression is equal to $$\begin{aligned} \frac{\mu_{\epsilon}(\mathbb{R}^{d}\setminus \{ \mathcal{E} ({\mathcal M}_0) \cup\mathcal{R} \} )} {\mu_{\epsilon}(\mathcal{W}^{2r_{0}}(\bm{m}))} \,+\, o_{\epsilon}(1) \; = \; o_{\epsilon}(1)\ ,\end{aligned}$$ where the error terms are uniform on $\bm{x}\in\mathcal{E}(\bm{m})$ and $t\in[2b,\,4b]$. It remains to show that $$\limsup_{b\rightarrow0}\,\limsup_{\epsilon\rightarrow0}\, \sup_{\boldsymbol{x}\in\mathcal{E}(\bm{m})}\, \sup_{t\in[2b,\,4b]}\mathbb{Q}_{\boldsymbol{x}}^{\epsilon} \left[\tau_{\mathcal{R}}\le t\right]=0\;.$$ This is a direct consequence of Lemma [ 44](#l12){reference-type="ref" reference="l12"} since $\mathbb{Q}_{\boldsymbol{x}}^{\epsilon} [\tau_{\mathcal{R}} \le t ] \le \mathbb{Q}_{\boldsymbol{x}}^{\epsilon} [\tau_{\mathcal{R}}\le 4b ]$ for all $t\le 4b$. ◻ ## Proof of Theorem [ 1](#t00){reference-type="ref" reference="t00"} {#proof-of-theorem-t00 .unnumbered} The assertion of Theorem [ 1](#t00){reference-type="ref" reference="t00"} in the time scale $\theta^{(1)}_\epsilon$ is a particular case of Theorem [ 40](#t_fdd){reference-type="ref" reference="t_fdd"}. We turn to the second claim. Fix a time-scale $\varrho_\epsilon$ such that $1\prec \varrho_\epsilon \prec \theta^{(1)}_\epsilon$, ${\boldsymbol m}\in {\mathcal M}_0$, ${\boldsymbol x}\in {\mathcal D}({\boldsymbol m})$, $\eta>0$, and a bounded continuous function $F$. Define the wells ${\mathcal E}({\boldsymbol m}')$, ${\boldsymbol m}'\in{\mathcal M}_0$, as in the proof of Proposition [ 41](#l11){reference-type="ref" reference="l11"}, to fulfill [\[39\]](#39){reference-type="eqref" reference="39"}. First, assume that there exists $\epsilon_0$ such that $\varrho_\epsilon \ge \epsilon^{-2}$ for all $\epsilon<\epsilon_0$. 
By [@fw98 Theorem 2.1.2], there exists $T>0$ such that $${\mathbb P}^\epsilon_x [ {\boldsymbol x}_\epsilon (T) \not\in {\mathcal E} ({\boldsymbol m}) \,] \,=\, o_\epsilon(1)\;.$$ Hence, by the Markov property, $${\mathbb E}^\epsilon_x [\, F({\boldsymbol x}_\epsilon (\varrho_\epsilon))\,] \;=\; {\mathbb E}^\epsilon_x \Big[ {\mathbb E}^\epsilon_{ {\boldsymbol x}_\epsilon (T) } \big[\, F({\boldsymbol x}_\epsilon (\varrho_\epsilon - T)) \,\big] \, {\boldsymbol 1}\{ {\boldsymbol x}_\epsilon (T) \in {\mathcal E} ({\boldsymbol m})\,\} \, \Big] \,+\, o_\epsilon(1)\;.$$ As ${\boldsymbol x}_\epsilon (T)$ belongs to ${\mathcal E} ({\boldsymbol m})$ and $\varrho_\epsilon \prec \theta^{(1)}_\epsilon$, by Lemma [ 28](#l17){reference-type="ref" reference="l17"}, inside the second expectation on the right-hand side we may insert the indicator of the set ${\mathcal A}_1 = \{{\boldsymbol x}_\epsilon (\varrho_\epsilon - T) \not \in {\mathcal E}({\mathcal M}_0) \setminus {\mathcal E}({\boldsymbol m}) \}$ at a cost $o_\epsilon(1)$. By Proposition [ 15](#p_FW){reference-type="ref" reference="p_FW"}, we may also insert the indicator of the set ${\mathcal A}_2 = \{ U({\boldsymbol x}_\epsilon (\varrho_\epsilon - T - \epsilon^{-1})) \le U({\boldsymbol m}) + d^{(1)} + 2r_0\}$ at the same cost.
Hence, the left-hand side of the previous displayed equation is equal to $${\mathbb E}^\epsilon_x \Big[ {\mathbb E}^\epsilon_{ {\boldsymbol x}_\epsilon (T) } \big[\, F({\boldsymbol x}_\epsilon (\varrho_\epsilon - T)) \, {\boldsymbol 1}\{{\mathcal A}_1 \cap {\mathcal A}_2\}\, \,\big] \, {\boldsymbol 1}\{ {\boldsymbol x}_\epsilon (T) \in {\mathcal E} ({\boldsymbol m})\,\} \, \Big] \,+\, o_\epsilon(1)\;.$$ By the Markov property the previous expectation is equal to $${\mathbb E}^\epsilon_x \Big[ \, {\mathbb E}^\epsilon_{{\boldsymbol x}_\epsilon (T)} \Big[ {\boldsymbol 1}\{{\mathcal A}_2\} {\mathbb E}^\epsilon_{ {\boldsymbol x}_\epsilon (\varrho_\epsilon - T - (1/\epsilon)) } \big[\, F({\boldsymbol x}_\epsilon (1/\epsilon)) \, {\boldsymbol 1}\{{\mathcal A}'_1\}\,\big] \, \Big] {\boldsymbol 1}\{ {\boldsymbol x}_\epsilon (T) \in {\mathcal E} ({\boldsymbol m})\,\} \, \Big] \;,$$ where ${\mathcal A}'_1 = \{{\boldsymbol x}_\epsilon (\epsilon^{-1}) \not \in {\mathcal E}({\mathcal M}_0) \setminus {\mathcal E}({\boldsymbol m}) \}$. Since $U({\boldsymbol x}_\epsilon (\varrho_\epsilon - T - \epsilon^{-1})) \le U({\boldsymbol m}) + d^{(1)} + 2r_0$, by Theorem [ 23](#t_hitting2){reference-type="ref" reference="t_hitting2"} and Proposition [ 15](#p_FW){reference-type="ref" reference="p_FW"}, in the third expectation, we may insert the indicator of the set ${\mathcal A}_3 = \{{\boldsymbol x}_\epsilon (1/\epsilon) \in {\mathcal E}({\mathcal M}_0) \}$ at a cost $o_\epsilon(1)$. If by bad luck, there are critical points ${\boldsymbol c}$ such that $U({\boldsymbol c}) = U({\boldsymbol m}) + d^{(1)} + 2r_0$, we add to this constant a positive value to make sure that this does not happen. 
As ${\mathcal A}_4 = {\mathcal A}'_1 \cap {\mathcal A}_3 = \{{\boldsymbol x}_\epsilon (\epsilon^{-1}) \in {\mathcal E}({\boldsymbol m}) \}$, by [\[39\]](#39){reference-type="eqref" reference="39"}, the previous expression is equal to $$F({\boldsymbol m}) \, {\mathbb E}^\epsilon_x \Big[ \, {\mathbb E}^\epsilon_{{\boldsymbol x}_\epsilon (T)} \Big[ {\boldsymbol 1}\{{\mathcal A}_2\} {\mathbb P}^\epsilon_{ {\boldsymbol x}_\epsilon (\varrho_\epsilon - T - (1/\epsilon)) } \big[\, \, {\mathcal A}_4 \,\big] \, \Big] {\boldsymbol 1}\{ {\boldsymbol x}_\epsilon (T) \in {\mathcal E} ({\boldsymbol m})\,\} \, \Big] \;+\; R(\epsilon , \eta) \;,$$ where $|R(\epsilon , \eta)| \le \eta + o_\epsilon(1)$. We may now go backward in the argument to conclude that the previous expression is equal to $F({\boldsymbol m}) + R(\epsilon , \eta)$, which completes the proof of the theorem in the case where $\varrho_\epsilon \ge \epsilon^{-2}$ for all $\epsilon$ small. Assume that this is not the case. We may suppose that $\varrho_\epsilon \le \epsilon^{-2}$ for all $\epsilon$ small enough. If there is a subsequence which does not satisfy this condition, it is treated as in the first part of the proof. 
By [@fw98 Theorem 2.1.2], there exists $T>0$ such that $${\mathbb P}^\epsilon_x [ {\boldsymbol x}_\epsilon (T) \not\in {\mathcal W}^{r_0/2} ({\boldsymbol m}) \,] \,=\, o_\epsilon(1)\;.$$ Hence, by the Markov property, $${\mathbb E}^\epsilon_x [\, F({\boldsymbol x}_\epsilon (\varrho_\epsilon))\,] \;=\; {\mathbb E}^\epsilon_x \Big[ {\mathbb E}^\epsilon_{ {\boldsymbol x}_\epsilon (T) } \big[\, F({\boldsymbol x}_\epsilon (\varrho_\epsilon - T)) \,\big] \, {\boldsymbol 1}\{ {\boldsymbol x}_\epsilon (T) \in {\mathcal W}^{r_0/2} ({\boldsymbol m}) \,\} \, \Big] \,+\, o_\epsilon(1)\;.$$ As ${\boldsymbol x}_\epsilon (T) \in {\mathcal W}^{r_0/2} ({\boldsymbol m})$, by Proposition [ 15](#p_FW){reference-type="ref" reference="p_FW"}, in the second expectation, we may insert the indicator of the set ${\mathcal A} = \{{\boldsymbol x}_\epsilon (\varrho_\epsilon - T ) \in {\mathcal E}({\boldsymbol m}) \}$ at a cost $o_\epsilon(1)$. At this point, we may repeat the arguments presented at the end of the first part of the proof to conclude. ◻ # The potential $U$ We present in this section elementary properties of the potential $U$ and the dynamical system [\[31\]](#31){reference-type="eqref" reference="31"}. The main result establishes the existence of a path which attains the infimum in [\[Theta\]](#Theta){reference-type="eqref" reference="Theta"}. ** 45**. *Fix a local minimum ${\boldsymbol m} \in{\mathcal M}_0$. Then, there exist a local minimum $\boldsymbol{m}'\in\mathcal{M}_{0}$, ${\boldsymbol m}'\neq {\boldsymbol m}$, and a continuous path $\boldsymbol{z} \colon [0,1] \to {\mathbb R}^d$ such that $\boldsymbol{z}(0)=\boldsymbol{m}$, $\boldsymbol{z}(1)=\boldsymbol{m}'$, and $$\begin{gathered} \max_{t\in[0,\,1]}U(\boldsymbol{z}(t)) \;=\; U(\boldsymbol{z}(1/2)) \;=\; U({\boldsymbol m}) \,+\, \Gamma({\boldsymbol m}) \,=\, \Theta(\boldsymbol{m},\,\boldsymbol{m}')\;, \\ U({\boldsymbol z}(s)) \,<\, U({\boldsymbol z}(1/2))\;,\;\; s\in [0,1]\setminus\{1/2\} \;.
\end{gathered}$$ Moreover, if ${\boldsymbol z}(\cdot)$ is such a path, then ${\boldsymbol z}(1/2)$ is a saddle point of $U$.* ## Proof of Proposition [ 45](#l05a){reference-type="ref" reference="l05a"} {#proof-of-proposition-l05a .unnumbered} The proof is based on three lemmata. Fix ${\boldsymbol m}\in{\mathcal M}_0$, and let $\mathcal{W}$ be the connected component of $\{\bm{x}\in\mathbb{R}^{d}:U(\bm{x})<U(\bm{m})+\Gamma(\bm{m})\}$ containing $\bm{m}$. By definition of $\Gamma({\boldsymbol m})$, $$\label{a01} {\mathcal M}_0 \cap {\mathcal W} \;=\; \{{\boldsymbol m}\}\;.$$ ** 46**. *Fix ${\boldsymbol m}\in{\mathcal M}_0$. There is a connected component $\mathcal{W}'$ of $\{\bm{x}\in\mathbb{R}^{d}:U(\bm{x})<U(\bm{m})+\Gamma(\bm{m})\}$ such that $\mathcal{W}\cap\mathcal{W}'=\varnothing$ and $\overline{\mathcal{W}}\cap\overline{\mathcal{W}'}\ne\varnothing$.* The proof of this result is given in a subsection below. Recall that we denote by $B(\bm{x},\,r)$ the open ball of radius $r$ centered at ${\boldsymbol x}$. Let $$\mathcal{A}(\bm{x},\,r)=\left(B (\bm{x},\,r) \setminus\{\bm{x}\}\right)\cap\{\bm{y} \in\mathbb{R}^{d}:U(\bm{y})<U(\bm{x})\}\ .$$ ** 47**. *Fix $H\in{\mathbb R}$, and let $\mathcal{W}_{1}$ and $\mathcal{W}_{2}$ be two disjoint connected components of $\{\bm{x}\in\mathbb{R}^{d}:U(\bm{x})<H\}$. If $\overline{\mathcal{W}_{1}}\cap\overline{\mathcal{W}_{2}}\ne\varnothing$, then $\overline{\mathcal{W}_{1}}\cap\overline{\mathcal{W}_{2}} = \partial\mathcal{W}_{1}\cap\partial\mathcal{W}_{2}$ and any element $\bm{\sigma}$ of $\overline{\mathcal{W}_{1}}\cap\overline{\mathcal{W}_{2}}$ is a saddle point such that $U({\boldsymbol \sigma})=H$. Moreover, for all $r>0$ small enough, $\mathcal{A}(\bm{\sigma},\,r)$ has two connected components: $\mathcal{A}(\bm{\sigma},\,r)\cap\mathcal{W}_{1}$ and $\mathcal{A}(\bm{\sigma},\,r)\cap\mathcal{W}_{2}$.* The proof of this lemma is presented in a later subsection. ** 48**.
*Let $\bm{m}',\,\bm{m}''\in\mathcal{M}_{0}$ and let $\bm{z}:[0,1]\to\mathbb{R}^{d}$ be a continuous path such that $$\bm{z}(0)=\bm{m}',\quad \bm{z}(1)=\bm{m}'', \quad U(\bm{z}(1/2))=\Theta(\bm{m}',\,\bm{m}'')\ ,$$ $$\label{f01} U(\bm{z}(t))<U(\bm{z}(1/2))\ \text{for}\ t\in[0,\,1]\setminus\{1/2\}\ .$$ Then, $\bm{z}(1/2)$ is a saddle point.* *Proof.* Recall that we denote by $\upsilon_{{\boldsymbol x}} (t)$ the solution of the ODE [\[31\]](#31){reference-type="eqref" reference="31"} starting from ${\boldsymbol x}$. For $s\ge 0$, let $\psi_s \colon [0,1]\to\mathbb{R}^{d}$ be the continuous path defined by $$\psi_s(t)=\upsilon_{\bm{z}(t)} (s)\ .$$ As $U$ decreases along the solutions of the ODE, $$U(\psi_s (t)) \,=\, U(\upsilon_{\bm{z}(t)} (s)) \,\le\, U(\bm{z}(t))\ .$$ We claim that $$\label{f03} U(\psi_s(1/2)) \,=\, U(\bm{z}(1/2)) \,=\, U(\psi_0(1/2)) \quad \text{for all $s>0$}\;.$$ Suppose, by contradiction, that there exists $s_{0}>0$ such that $U(\psi_{s_0}(1/2))<U(\bm{z}(1/2))$. By [\[f01\]](#f01){reference-type="eqref" reference="f01"}, for all $t\ne1/2$, $$U(\psi_{s_0}(t))\le U(\bm{z}(t))<U(\bm{z}(1/2)) \;.$$ Thus, since, by hypothesis, $U(\psi_{s_0}(1/2))<U(\bm{z}(1/2))$, $$\label{f02} \max_{t\in[0,1]}U(\psi_{s_0}(t))<U(\bm{z}(1/2))=\Theta(\bm{m}',\,\bm{m}'')\ .$$ As $\bm{m}'$, ${\boldsymbol m}''$ are critical points, $\upsilon_{{\boldsymbol n}}(s)={\boldsymbol n}$ for ${\boldsymbol n}={\boldsymbol m}'$, ${\boldsymbol m}''$, $s>0$, so that $\psi_{s_0} (0) = \upsilon_{{\boldsymbol z}(0)}(s_0) = \upsilon_{{\boldsymbol m}'}(s_0) = {\boldsymbol m}'$, $\psi_{s_0} (1) = {\boldsymbol m}''$. Therefore, the continuous path $\psi_{s_0}\colon [0,1]\to {\mathbb R}^d$ satisfies $\psi_{s_0}(0)=\bm{m}'$, $\psi_{s_0}(1)=\bm{m}''$ and [\[f02\]](#f02){reference-type="eqref" reference="f02"}. This contradicts the definition of $\Theta(\bm{m}',\,\bm{m}'')$, and completes the proof of claim [\[f03\]](#f03){reference-type="eqref" reference="f03"}.
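The claim just proved rests on the monotonicity of $U$ along the solutions of the ODE [\[31\]](#31){reference-type="eqref" reference="31"}. As a purely numerical illustration, not part of the proof, the sketch below integrates the gradient flow of a toy double-well potential by the Euler scheme and checks that the energy never increases along the discrete trajectory; the potential, the starting point, and the step size are arbitrary choices made only for this illustration.

```python
# Toy illustration (not part of the proof): U is non-increasing along
# the gradient flow dx/dt = -grad U.  The double-well potential below
# is an arbitrary example chosen for the illustration.

def U(x, y):
    return (x * x - 1.0) ** 2 + y * y

def grad_U(x, y):
    return (4.0 * x * (x * x - 1.0), 2.0 * y)

def flow_energies(x, y, h=1e-3, steps=5000):
    """Euler discretization of the flow; returns U along the trajectory."""
    energies = [U(x, y)]
    for _ in range(steps):
        gx, gy = grad_U(x, y)
        x, y = x - h * gx, y - h * gy
        energies.append(U(x, y))
    return energies

energies = flow_energies(0.3, 0.8)
# Numerically non-increasing, and strictly smaller at the end.
assert all(b <= a + 1e-12 for a, b in zip(energies, energies[1:]))
assert energies[-1] < energies[0]
```

With a step size below $2/\lambda_{\max}$, where $\lambda_{\max}$ bounds the Hessian of the toy potential along the trajectory, the Euler scheme inherits the monotonicity of $U$ enjoyed by the continuous flow.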
It follows from [\[f03\]](#f03){reference-type="eqref" reference="f03"} and from the fact that $U$ strictly decreases along trajectories which do not start from critical points that $\bm{z}(1/2)$ is a critical point of $U$. It remains to show that $\bm{z}(1/2)$ is a saddle point. Clearly, $\bm{z}(1/2)$ is not a local minimum. Suppose, by contradiction, that $\bm{z}(1/2)$ is not a saddle point. Then, by Lemma [ 52](#l_ind2_conn){reference-type="ref" reference="l_ind2_conn"} below, the set $\mathcal{A}(\bm{z}(1/2),\,r)$ is connected for sufficiently small $r>0$. Since $\bm{z}$ is continuous, there is $\eta_{0}=\eta_{0}(r)>0$ such that $\bm{z}(t)\in B(\bm{z}(1/2),\,r)$ for all $t\in[1/2-\eta_{0},\,1/2+\eta_{0}]$. Therefore, $\bm{z}(t)\in\mathcal{A}(\bm{z}(1/2),\,r)$ for all $t\in[1/2-\eta_{0},\,1/2+\eta_{0}]\setminus\{1/2\}$. Since $\mathcal{A}(\bm{z}(1/2),\,r)$ is connected and open, it is path-connected. Therefore, there is a continuous path $\bm{z}_{1}:[1/2-\eta_{0},\,1/2+\eta_{0}]\to\mathcal{A}(\bm{z}(1/2),\,r)$ such that $\bm{z}_{1}(1/2\pm \eta_{0})=\bm{z}(1/2\pm \eta_{0})$. Define a path $\bm{z}_{2}:[0,1]\to\mathbb{R}^{d}$ as $$\bm{z}_{2}(t)=\begin{cases} \bm{z}(t) & t\in[0,1/2-\eta_{0})\cup(1/2+\eta_{0},1]\ ,\\ \bm{z}_{1}(t) & t\in[1/2-\eta_{0},\,1/2+\eta_{0}]\ . \end{cases}$$ Thus, $\bm{z}_{2}$ is a continuous trajectory from $\bm{m}'$ to $\bm{m}''$ such that $U(\bm{z}_{2}(t))<U(\bm{z}(1/2))$ for all $t\in[0,\,1]$. This contradicts the definition of $\Theta(\bm{m}',\,\bm{m}'')$, and completes the proof of the lemma. ◻ *Proof of Proposition [ 45](#l05a){reference-type="ref" reference="l05a"}.* Fix ${\boldsymbol m}\in{\mathcal M}_0$. Let $\mathcal{W}'$ be given by Lemma [ 46](#l_105a-1){reference-type="ref" reference="l_105a-1"}, and denote by $\bm{\sigma}$ an element of $\overline{\mathcal{W}}\cap\overline{\mathcal{W}}'$.
By Lemma [ 47](#l_cap_saddle){reference-type="ref" reference="l_cap_saddle"}, ${\boldsymbol \sigma}$ is a saddle point, ${\boldsymbol \sigma }\in \Upsilon ({\boldsymbol m})$, and, for sufficiently small $r>0$, $\mathcal{A}(\bm{\sigma},\,r)$ has two connected components $\mathcal{A}(\bm{\sigma},\,r)\cap\mathcal{W}$ and $\mathcal{A}(\bm{\sigma},\,r)\cap\mathcal{W}'$. By the Hartman--Grobman theorem, there are two continuous paths $\phi_{1},\,\phi_{2} \colon (-\infty, 0] \to\mathbb{R}^{d}$ such that $$\lim_{t\to-\infty}\phi_{j}(t)=\bm{\sigma}\ ,\quad \phi_{1}(s)\in\mathcal{A}(\bm{\sigma},\,r)\cap\mathcal{W}\ ,\quad \phi_{2}(s) \in \mathcal{A}(\bm{\sigma},\,r)\cap\mathcal{W}'$$ for all $s\le 0$. Since ${\mathcal W}$, ${\mathcal W}'$ are connected, we may extend these trajectories continuously to $s>0$ in such a way that $\phi_{1}(s) \in{\mathcal W}$, $\phi_{2}(s) \in{\mathcal W}'$ for all $s\ge 0$. As ${\boldsymbol \sigma}\in \Upsilon ({\boldsymbol m})$, by [\[48\]](#48){reference-type="eqref" reference="48"}, $$\lim_{s\to\infty} \phi_{1}(s)= {\boldsymbol m}\;, \quad \lim_{s\to\infty} \phi_{2}(s)= {\boldsymbol m}'\;,$$ where ${\boldsymbol m}'$ is a local minimum of $U$ lying in ${\mathcal W}'$. Concatenating the paths $\phi_1$, $\phi_2$ and reparametrizing the resulting path, we obtain a continuous path $\bm{z}:[0,1]\to\mathbb{R}^{d}$ from ${\boldsymbol m}$ to ${\boldsymbol m}'$ such that ${\boldsymbol z}(1/2) = {\boldsymbol \sigma}$. By Lemma [ 47](#l_cap_saddle){reference-type="ref" reference="l_cap_saddle"}, $U({\boldsymbol \sigma}) = U({\boldsymbol m}) + \Gamma({\boldsymbol m})$. Therefore, by construction, ${\boldsymbol z}(\cdot)$ fulfills all conditions required in Proposition [ 45](#l05a){reference-type="ref" reference="l05a"}. It remains to check the final assertion of the proposition, which follows from Lemma [ 48](#l_path_max_saddle){reference-type="ref" reference="l_path_max_saddle"}.
◻ ## Proof of Lemma [ 47](#l_cap_saddle){reference-type="ref" reference="l_cap_saddle"} {#proof-of-lemma-l_cap_saddle .unnumbered} Throughout this subsection, we will use the fact that an open connected subset of ${\mathbb R}^d$ is path-connected. ** 49**. *Homeomorphisms between open sets preserve the number of connected components.* *Proof.* Let $\mathcal{U}_1$ and $\mathcal{U}_2$ be open sets, and let $\varphi:\mathcal{U}_1\to\mathcal{U}_2$ be a homeomorphism. Denote by ${\mathcal U}_{j, 1}, \dots, {\mathcal U}_{j,n_j}$ the connected components of $\mathcal{U}_j$, $j=1$, $2$. Since $\varphi$ is continuous, $\varphi(\mathcal{U}_{1,k})$ is connected. As $\varphi$ is surjective, ${\mathcal U}_2 = \cup_{1\le k\le n_1} \varphi({\mathcal U}_{1,k})$, so that $n_2 \le n_{1}$. Since $\varphi^{-1}$ is continuous, the same argument yields the reverse inequality. ◻ ** 50**. *Let $\bm{p}$ be a non-critical point of $U$. Then, for sufficiently small $r>0$, the manifold $\{\bm{x}\in\mathbb{R}^{d}:U(\bm{x})=U(\bm{p})\}$ divides $B(\bm{p},\,r)$ into two connected components which are $B(\bm{p},\,r)\cap\{\bm{x}\in\mathbb{R}^{d}:U(\bm{x})<U(\bm{p})\}$ and $B(\bm{p},\,r)\cap\{\bm{x}\in\mathbb{R}^{d}:U(\bm{x})>U(\bm{p})\}$. In particular, $\mathcal{A}(\bm{p},\,r)$ is connected. Furthermore, there is a continuous path $\bm{z}:[0,1]\to B(\bm{p},\,r)$ such that $$\bm{z}(0)=\bm{p}\ ,\ \bm{z}\left((0,1]\right) \subset B(\bm{p},\,r)\cap\{\bm{x}\in\mathbb{R}^{d}: U(\bm{x})<U(\bm{p})\}\ .$$* *Proof.* Fix a non-critical point $\bm{p}=(p_{1},\,\dots,\,p_{d})\in\mathbb{R}^{d}$. Then, $\nabla U(\bm{p})\ne0$ so that there is $1\le j\le d$ such that $$\frac{\partial U}{\partial x_{j}}(\bm{p})\ne0\ .$$ Assume, without loss of generality, that $j=d$.
For $\bm{x}\in\mathbb{R}^{d}$, let $$\widetilde{\bm{x}}=(x_{1},\,\dots,\,x_{d-1})\ .$$ By the implicit function theorem, there exist $r>0$ and a $C^{1}$-function $g:B_{d-1}(\widetilde{\bm{p}},\,r)\to\mathbb{R}$ such that $$g(\widetilde{\bm{p}})=p_{d} \;, \quad U(\widetilde{\bm{x}},\,g(\widetilde{\bm{x}}))=U(\bm{p})\ \text{for all}\ \widetilde{\bm{x}}\in B_{d-1}(\widetilde{\bm{p}},\,r)\ ,$$ where $B_{d-1}(\widetilde{\bm{p}},\,r)$ is a $(d-1)$-dimensional ball with radius $r>0$ centered at $\widetilde{\bm{p}}$. Decompose the set $B(\bm{p},\,r)$ into three parts: $$\mathcal{P}_{1}=B(\bm{p},\,r)\cap\{(\widetilde{\bm{x}}, \,\bm{y})\in\mathbb{R}^{d}:\bm{y}>g(\widetilde{\bm{x}})\}\ ,$$ $$\mathcal{P}_{2}=B(\bm{p},\,r)\cap\{(\widetilde{\bm{x}}, \,\bm{y})\in\mathbb{R}^{d}:\bm{y}<g(\widetilde{\bm{x}})\}\ ,$$ $$\mathcal{P}_{3}=B(\bm{p},\,r)\cap\{(\widetilde{\bm{x}}, \,\bm{y})\in\mathbb{R}^{d}:\bm{y}=g(\widetilde{\bm{x}})\}\ .$$ By definition of $g$, $\mathcal{P}_{3}=B(\bm{p},\,r)\cap\{\bm{x}\in\mathbb{R}^{d}: U(\bm{x})=U(\bm{p})\}$ and $$U(\bm{x})\ne U(\bm{p})\ \text{for all}\, \bm{x}\in\mathcal{P}_{1}\cup\mathcal{P}_{2}\ . \label{e_nocri_divide-1}$$ Suppose that there are $\bm{x},\,\bm{y}\in\mathcal{P}_{1}$ such that $U(\bm{x})<U(\bm{p})<U(\bm{y})$. As $\mathcal{P}_{1}$ is path-connected, there is a path in $\mathcal{P}_{1}$ connecting $\bm{x}$ to $\bm{y}$. Since $U$ is continuous, by the intermediate value theorem, this path must pass through a point $\bm{z}\in\mathcal{P}_{1}$ such that $U(\bm{z})=U(\bm{p})$, and this contradicts [\[e_nocri_divide-1\]](#e_nocri_divide-1){reference-type="eqref" reference="e_nocri_divide-1"}. Therefore, $U(\bm{x})>U(\bm{p})$ for all $\bm{x}\in\mathcal{P}_{1}$ or $U(\bm{x})<U(\bm{p})$ for all $\bm{x}\in\mathcal{P}_{1}$. Let $\bm{v}=\nabla U(\bm{p})$. For sufficiently small $\eta>0$, $U(\bm{p}+\eta\bm{v})>U(\bm{p})$ and $U(\bm{p}-\eta\bm{v})< U({\boldsymbol p})$. Thus, there are $\bm{x},\,\bm{y}\in B(\bm{p},\,r)$ such that $U(\bm{x})<U(\bm{p})<U(\bm{y})$.
Therefore, one of the sets $\mathcal{P}_{1}$, $\mathcal{P}_{2}$ is $B(\bm{p},\,r)\cap\{\bm{x}\in\mathbb{R}^{d}:U(\bm{x})<U(\bm{p})\}$ and the other one is $B(\bm{p},\,r)\cap\{\bm{x}\in\mathbb{R}^{d}:U(\bm{x})>U(\bm{p})\}$. Finally, since $\mathcal{P}_{3}$ is the graph of a $C^{1}$ function, there are paths $\bm{z}_{i}:[0,1]\to B(\bm{p},\,r)$ such that $$\bm{z}_{i}(0)=\bm{p}\ ,\ \bm{z}_{i}\left((0,1]\right) \subset\mathcal{P}_{i}\ .$$ This completes the proof of the lemma. ◻ The next two lemmata provide the number of connected components of the set ${\mathcal A}({\boldsymbol c}, r)$, ${\boldsymbol c} \in {\mathcal C}_0$, in terms of the index of the critical point ${\boldsymbol c}$. ** 51**. *Let $\bm{\sigma}$ be a saddle point of $U$. Then, for sufficiently small $r>0$, the set $\mathcal{A}(\bm{\sigma},\,r)=\left(B(\bm{\sigma},\,r) \setminus\{\bm{\sigma}\}\right)\cap\{{\boldsymbol x}\in{\mathbb R}^d : U({\boldsymbol x}) <U(\bm{\sigma})\}$ has exactly two connected components.* *Proof.* By [@M69 Lemma 2.2], since $U$ is nondegenerate at $\bm{\sigma}$, in a neighborhood of $\bm{\sigma}$, in suitable local coordinates, $U$ coincides with $U(\bm{\sigma})$ plus the quadratic function $F:{\mathbb R}^d\to {\mathbb R}$ given by $$F({\boldsymbol x}) \;=\; -\, x_{1}^{2}+\sum_{i=2}^{d}x_{i}^{2}\ .$$ Therefore, for sufficiently small $r>0$, $\mathcal{A}(\bm{\sigma},\,r)$ is diffeomorphic to the set $$[\, B({\boldsymbol 0} ,\,r) \setminus \{\bm{0}\} \,] \cap F^{-1}((-\infty, 0)) \,=\, [\, B({\boldsymbol 0} ,\,r) \setminus \{\bm{0}\} \,] \cap \{\bm{x}\in\mathbb{R}^{d}: -x_{1}^{2}+\sum_{i=2}^{d}x_{i}^{2}<0\}\ .$$ Since the set on the right-hand side has two connected components, by Lemma [ 49](#l_homeo){reference-type="ref" reference="l_homeo"}, $\mathcal{A}(\bm{\sigma},\,r)$ also has two connected components. ◻ ** 52**. *Let $\bm{c}$ be a critical point of $U$ with index greater than or equal to $2$.
Then, for sufficiently small $r>0$, $\mathcal{A}(\bm{c},\,r)$ is path-connected.* *Proof.* By [@M69 Lemma 2.2], since $U$ is nondegenerate at $\bm{c}$, in a neighborhood of $\bm{c}$, in suitable local coordinates, $U$ coincides with $U(\bm{c})$ plus the quadratic function $F:{\mathbb R}^d\to {\mathbb R}$ given by $$F({\boldsymbol x}) \;=\; -\, \sum_{i=1}^{k} x_{i}^{2} \,+\, \sum_{i=k+1}^{d}x_{i}^{2}\ ,$$ where $k\ge 2$ is the index of $\bm{c}$. Therefore, for sufficiently small $r>0$, $\mathcal{A}(\bm{c},\,r)$ is diffeomorphic to $$[\, B({\boldsymbol 0},\,r) \setminus \{\bm{0}\} \,] \cap \Big\{\bm{x}\in\mathbb{R}^{d}: -\, \sum_{i=1}^{k} x_{i}^{2} \,+\, \sum_{i=k+1}^{d}x_{i}^{2}<0 \Big\}\;.$$ Since $k\ge 2$, this set is connected; hence, by Lemma [ 49](#l_homeo){reference-type="ref" reference="l_homeo"}, $\mathcal{A}(\bm{c},\,r)$ is also connected, and therefore path-connected. ◻ In this subsection, we examine the connected components of the level sets of $U$. ** 53**. *Fix $H\in\mathbb{R}$. Let $\mathcal{H}$ be a connected component of $\{\bm{x}\in\mathbb{R}^{d}:U(\bm{x})<H\}$. Let $\mathcal{G}\subset\{\bm{x}\in\mathbb{R}^{d}:U(\bm{x})<H\}$ be a connected set satisfying $\mathcal{G}\cap\mathcal{H}\ne\varnothing$. Then, $\mathcal{G}\subset\mathcal{H}$. The same assertion holds if we replace all strict inequalities by non-strict inequalities.* *Proof.* As ${\mathcal G} \cap {\mathcal H} \neq\varnothing$, fix ${\boldsymbol x}_0 \in {\mathcal G} \cap {\mathcal H}$. Then, $\mathcal{H}$ is the largest connected set ${\mathcal F}$ satisfying $$\bm{x}_{0}\in\mathcal{F}\subset\{\bm{x}\in\mathbb{R}^{d}:U(\bm{x})<H\}\ .$$ As $\mathcal{G}$ is a connected set belonging to this class, $\mathcal{G}\subset\mathcal{H}$. The same proof yields the second assertion of the lemma. ◻ ** 54**. *Fix $H\in\mathbb{R}$. Let $\mathcal{H}$ be a connected component of the set $\{\bm{x}\in\mathbb{R}^{d}:U(\bm{x})<H\}$ or of the set $\{\bm{x}\in\mathbb{R}^{d}:U(\bm{x})\le H\}$. Then, $U(\bm{x}_{0})=H$ for all $\bm{x}_{0}\in\partial\mathcal{H}$. Moreover,* 1.
*If $\mathcal{H}$ is an open set, then $\bm{x}_{0}$ is not a local minimum.* 2. *If $\mathcal{H}$ is a closed set, then $\bm{x}_{0}$ is not a local maximum.* *Proof.* Fix $\bm{x}_{0}\in\partial\mathcal{H}$. Since $U$ is continuous, $U(\bm{x}_{0})\le H$. Assume, by contradiction, that $U(\bm{x}_{0})<H$. Let $\mathcal{G}$ be the connected component of the set $\{\bm{x}\in\mathbb{R}^{d}:U(\bm{x})<H\}$ containing $\bm{x}_{0}$. Since $U$ is smooth, there exists $r>0$ such that $$\max_{\bm{y}\in B(\bm{x}_{0},\,r)}U(\bm{y})<H\;.$$ As $\bm{x}_{0}\in \partial {\mathcal H}$, there exists $\bm{z}\in B(\bm{x}_{0},\,r) \cap {\mathcal H}$. Hence, by the previous displayed equation and Lemma [ 53](#l_level_connected){reference-type="ref" reference="l_level_connected"}, $B(\bm{x}_{0},\,r) \subset {\mathcal H}$, so that $\bm{x}_{0}\in{\mathcal H}$, in contradiction to the fact that $\bm{x}_{0}\in\partial\mathcal{H}$. This completes the proof of the first assertion. Suppose that $\mathcal{H}$ is a connected component of the set $\{ {\boldsymbol x} \in{\mathbb R}^d : U({\boldsymbol x}) <H\}$, and fix $\bm{x}_{0}\in\partial\mathcal{H}$. By the first assertion of the lemma, $U(\bm{x}_{0})=H$. Suppose, by contradiction, that $\bm{x}_{0}$ is a local minimum. Then, there exists $r>0$ such that $U({\boldsymbol y}) \ge U({\boldsymbol x}_0)$ for all ${\boldsymbol y}\in B(\bm{x}_{0},\,r)$. Therefore, $B(\bm{x}_{0},\,r)\cap\{\bm{x}\in\mathbb{R}^{d}:U(\bm{x})<H\}=\varnothing$, so that $B(\bm{x}_{0},\,r)\cap\mathcal{H}=\varnothing$. This contradicts the fact that $\bm{x}_{0}\in\partial\mathcal{H}$. Suppose that $\mathcal{H}$ is a connected component of the set $\{ {\boldsymbol x} \in{\mathbb R}^d : U({\boldsymbol x}) \le H\}$, and fix $\bm{x}_{0}\in\partial\mathcal{H}$. By the first assertion of the lemma, $U(\bm{x}_{0})=H$. Suppose, by contradiction, that $\bm{x}_{0}$ is a local maximum. Then, there exists $r>0$ such that $U({\boldsymbol y}) < U({\boldsymbol x}_0)$ for all ${\boldsymbol y}\in B(\bm{x}_{0},\,r) \setminus \{{\boldsymbol x}_0\}$.
Therefore, $B(\bm{x}_{0},\,r)\setminus\{\bm{x}_{0}\}\subset \{ {\boldsymbol x} \in{\mathbb R}^d : U({\boldsymbol x}) < H\}$. Since $\bm{x}_{0}\in\partial\mathcal{H}$, $B(\bm{x}_{0},\,r)\cap\mathcal{H}^{o}\ne\varnothing$, where $\mathcal{H}^{o}$ is the interior of $\mathcal{H}$. Fix $\bm{x}_{1}\in B(\bm{x}_{0},\,r)\cap\mathcal{H}^{o}$ and let $\mathcal{G}$ be the connected component of $\{\bm{x}\in\mathbb{R}^{d}:U(\bm{x})<H\}$ containing $\bm{x}_{1}$. As ${\boldsymbol x}_1 \in {\mathcal H}^o \subset {\mathcal H}$, by definition of ${\mathcal H}$, ${\mathcal G} \subset {\mathcal H}$. On the other hand, by Lemma [ 53](#l_level_connected){reference-type="ref" reference="l_level_connected"}, $B(\bm{x}_{0},\,r)\setminus\{\bm{x}_{0}\}\subset\mathcal{G}$, so that $B(\bm{x}_{0},\,r) \setminus\{\bm{x}_{0}\}\subset\mathcal{H}$. As $\bm{x}_{0}\in\mathcal{H}$, $B(\bm{x}_{0},\,r) \subset\mathcal{H}$. This contradicts the fact that ${\boldsymbol x}_0 \in\partial\mathcal{H}$, and completes the proof of the lemma. ◻ For the next lemma, we extend the definition of $\Theta({\boldsymbol m}, {\boldsymbol m}')$ to subsets of ${\mathcal M}_0$. For two disjoint non-empty subsets $\mathcal{M}'$ and $\mathcal{M}''$ of $\mathcal{M}_{0}$, define $$\label{ap02} \Theta(\mathcal{M}',\,\mathcal{M}'') \;=\; \min_{\boldsymbol{m}'\in\mathcal{M}',\,\boldsymbol{m}''\in\mathcal{M}''} \Theta(\boldsymbol{m}',\,\boldsymbol{m}'')\;.$$ ** 55**. *Let $\mathcal{H}\subset\mathbb{R}^{d}$ be a connected component of the level set $\{\boldsymbol{x}\in\mathbb{R}^{d}:U(\boldsymbol{x})<c_{0}\}$ for some $c_{0}\in\mathbb{R}$. Let $\mathcal{M},\,\mathcal{M}'$ be disjoint non-empty subsets of $\mathcal{M}_{0}$.* 1. *If $\mathcal{M},\,\mathcal{M}'\subset\mathcal{H}$, then $\Theta(\mathcal{M},\,\mathcal{M}')<c_{0}$.* 2.
*If $\mathcal{M}\subset\mathcal{H}$ and $\mathcal{M}' \subset\mathbb{R}^{d}\setminus\mathcal{H}$, then $\Theta(\mathcal{M},\,\mathcal{M}')\ge c_{0}$.* *Proof.* Let $\mathcal{M},\,\mathcal{M}'$ be disjoint non-empty subsets of $\mathcal{M}_{0}$ contained in ${\mathcal H}$. Since $\mathcal{H}$ is an open connected set, it is path-connected. Thus, there exists a continuous path $\boldsymbol{z}:[0,\,1]\rightarrow\mathcal{H}$ such that $\boldsymbol{z}(0)\in\mathcal{M}$ and $\boldsymbol{z}(1)\in\mathcal{M}'$. Since $\boldsymbol{z}(t)\in\mathcal{H}$ for all $t\in[0,\,1]$, we have $\max_{t\in[0,\,1]}U(\boldsymbol{z}(t))<c_{0}$ and thus, by [\[ap02\]](#ap02){reference-type="eqref" reference="ap02"}, $\Theta(\mathcal{M},\,\mathcal{M}')<c_{0}$. This proves the first assertion. To prove the second assertion, note that any path connecting $\mathcal{M}$ and $\mathcal{M}'$ must pass through $\partial\mathcal{H}$, on which the value of $U$ is $c_{0}$, so that the maximum of $U$ along any such path is at least $c_{0}$. ◻ *Proof of Lemma [ 47](#l_cap_saddle){reference-type="ref" reference="l_cap_saddle"}.* Let $\bm{\sigma}\in\overline{\mathcal{W}_1}\cap\overline{\mathcal{W}_2}$. We claim that ${\boldsymbol \sigma }\in \partial {\mathcal W}_1 \cap \partial {\mathcal W}_2$. Indeed, by definition ${\boldsymbol \sigma }\in \overline{ {\mathcal W}_1}$. It remains to show that ${\boldsymbol \sigma }\not\in{\mathcal W}_1$. Assume, by contradiction, that ${\boldsymbol \sigma }\in{\mathcal W}_1$. Then, there exists $r>0$ such that $B({\boldsymbol \sigma}, r) \subset {\mathcal W}_1$. Since ${\mathcal W}_1 \cap {\mathcal W}_2 = \varnothing$, $B({\boldsymbol \sigma}, r) \cap {\mathcal W}_2 = \varnothing$, which contradicts the fact that $\bm{\sigma}\in\overline{\mathcal{W}_2}$. Thus, ${\boldsymbol \sigma }\in \partial {\mathcal W}_1$. The same argument shows that ${\boldsymbol \sigma }\in \partial {\mathcal W}_2$, proving the claim.
By Lemma [ 54](#l_level_boundary){reference-type="ref" reference="l_level_boundary"}, $U({\boldsymbol \sigma}) = H$, and ${\boldsymbol \sigma}$ is not a local minimum. Since ${\boldsymbol \sigma}\in\overline{\mathcal{W}_1}\cap\overline{\mathcal{W}_2}$, for every $r>0$, $B(\bm{\sigma},\,r)\cap\mathcal{W}_1\ne\varnothing$ and $B(\bm{\sigma},\,r)\cap\mathcal{W}_2\ne\varnothing$. Since ${\boldsymbol \sigma }\in \partial {\mathcal W}_1 \cap \partial {\mathcal W}_2$, ${\boldsymbol \sigma }\not\in {\mathcal W}_1 \cup {\mathcal W}_2$, so that $\left(B(\bm{\sigma},\,r)\setminus\{\bm{\sigma}\}\right) \cap\mathcal{W}_1\ne\varnothing$ and $\left(B(\bm{\sigma},\,r)\setminus\{\bm{\sigma}\}\right) \cap\mathcal{W}_2\ne\varnothing$. Hence, by definition of $\mathcal{W}_1$ and $\mathcal{W}_2$, $\mathcal{A}(\bm{\sigma},\,r)$ is not empty. We claim that $\mathcal{A}(\bm{\sigma},\,r)$ is not connected. Suppose, by contradiction, that $\mathcal{A}(\bm{\sigma},\,r)$ is connected. Let $\bm{x}_1\in B(\bm{\sigma},\,r)\cap\mathcal{W}_1$, $\bm{x}_2\in B(\bm{\sigma},\,r)\cap\mathcal{W}_2$. Since $U(\bm{x}_{j})< U({\boldsymbol \sigma})$, $\bm{x}_{1},\,\bm{x}_{2}\in\mathcal{A}(\bm{\sigma},\,r)$. Since $\mathcal{A}(\bm{\sigma},\,r)$ is open and, by assumption, connected, it is path-connected, so that there exists a continuous path ${\boldsymbol z} \colon [0,1] \to {\mathbb R}^d$ connecting $\bm{x}_{1}$ to $\bm{x}_{2}$ in $\mathcal{A}(\bm{\sigma},\,r)$. In particular, $\sup_{0\le t\le 1} U({\boldsymbol z}(t)) < U({\boldsymbol \sigma}) = H$. Since ${\boldsymbol x}_1\in {\mathcal W}_1$ and ${\mathcal W}_1$ is a connected component of the set $\{{\boldsymbol x}: U({\boldsymbol x}) <H\}$, all points in this path, including ${\boldsymbol x}_2$, belong to ${\mathcal W}_1$. As ${\boldsymbol x}_2 \in{\mathcal W}_2$ and ${\mathcal W}_1 \cap {\mathcal W}_2 = \varnothing$, this is a contradiction, which proves the claim.
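The dichotomy exploited in this proof, exactly two components of $\mathcal{A}(\bm{\sigma},\,r)$ at a saddle point (Lemma [ 51](#l_saddle_2comp){reference-type="ref" reference="l_saddle_2comp"}) against a connected set at a critical point of index at least $2$ (Lemma [ 52](#l_ind2_conn){reference-type="ref" reference="l_ind2_conn"}), can be observed numerically on the model quadratics of the Morse lemma. The sketch below, which is only an illustration and not part of the argument, counts the connected components of a grid approximation of $\{\bm{x} : 0<|\bm{x}|<r,\; F(\bm{x})<0\}$; the grid spacing and the radius are arbitrary choices.

```python
# Numerical sanity check (not part of the proof): count the connected
# components of a grid approximation of {0 < |x| < r, F(x) < 0} for
# the model quadratics of the Morse lemma.  Spacing and radius are
# arbitrary choices for this illustration.
import itertools
from collections import deque

def grid_components(d, F, r=1.0, h=0.25):
    """Components of {x on grid: 0 < |x| < r, F(x) < 0}, axis adjacency."""
    rng = [i * h for i in range(int(-r / h), int(r / h) + 1)]
    pts = {x for x in itertools.product(rng, repeat=d)
           if 0.0 < sum(c * c for c in x) < r * r and F(x) < 0.0}
    seen, count = set(), 0
    for p in pts:
        if p in seen:
            continue
        count, queue = count + 1, deque([p])
        seen.add(p)
        while queue:
            q = queue.popleft()
            for i, s in itertools.product(range(d), (-h, h)):
                nb = q[:i] + (q[i] + s,) + q[i + 1:]
                if nb in pts and nb not in seen:
                    seen.add(nb)
                    queue.append(nb)
    return count

# Saddle (index 1) in d = 2: two components, as in Lemma 51.
saddle = grid_components(2, lambda x: -x[0] ** 2 + x[1] ** 2)
# Index 2 in d = 3: a single component, as in Lemma 52.
index2 = grid_components(3, lambda x: -x[0] ** 2 - x[1] ** 2 + x[2] ** 2)
assert (saddle, index2) == (2, 1)
```

Axis-aligned adjacency suffices here because, on this grid, the two wedges $\{x_{1}>0\}$ and $\{x_{1}<0\}$ of the saddle case can only communicate through points where $F\ge0$, which are excluded from the set.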
Since ${\boldsymbol \sigma}$ is not a local minimum, and $\mathcal{A}(\bm{\sigma},\,r)$ is not empty and not connected, by Lemmata [ 50](#l_noncri_divide){reference-type="ref" reference="l_noncri_divide"}, [ 52](#l_ind2_conn){reference-type="ref" reference="l_ind2_conn"}, $\bm{\sigma}$ is a saddle point. By Lemma [ 51](#l_saddle_2comp){reference-type="ref" reference="l_saddle_2comp"}, $\mathcal{A}(\bm{\sigma},\,r)$ has exactly two components. Let $\mathcal{A}_{1}$, ${\mathcal A}_2$ be the connected components which intersect ${\mathcal W}_1$, $\mathcal{W}_2$, respectively. Since $\mathcal{A}_{j}$ is a connected set contained in $\{\bm{x}\in\mathbb{R}^{d}:U(\bm{x})< U({\boldsymbol \sigma})\}$, by Lemma [ 53](#l_level_connected){reference-type="ref" reference="l_level_connected"}, $\mathcal{A}_{1}\subset\mathcal{W}_1$ and $\mathcal{A}_{2}\subset\mathcal{W}_2$. Hence, $\mathcal{A}_{1}\ne\mathcal{A}_{2}$ and $\mathcal{A}_{1}=\mathcal{A}(\bm{\sigma},\,r)\cap\mathcal{W}_1$, $\mathcal{A}_{2}=\mathcal{A}(\bm{\sigma},\,r)\cap\mathcal{W}_2$. ◻ ## Proof of Lemma [ 46](#l_105a-1){reference-type="ref" reference="l_105a-1"} {#proof-of-lemma-l_105a-1 .unnumbered} The proof relies on several lemmata. ** 56**. *Let $\mathcal{K}_{n}$ be a decreasing sequence of compact connected sets and let $\mathcal{K}:=\bigcap_{n=1}^{\infty}\mathcal{K}_{n}$. Then, $\mathcal{K}$ is connected.* *Proof.* Suppose, by contradiction, that $\mathcal{K}$ is not connected. In consequence, there are two disjoint open sets $\mathcal{U}$ and $\mathcal{V}$ such that $\mathcal{K}\cap\mathcal{U}\ne\varnothing$, $\mathcal{K}\cap\mathcal{V}\ne\varnothing$, and $\mathcal{K}\subset\mathcal{U}\cup\mathcal{V}$. Since $\mathcal{K}_{n}\cap\mathcal{V} \ne\varnothing$ and ${\mathcal U} \cap {\mathcal V} = \varnothing$, $\mathcal{K}_{n}\setminus\mathcal{U}\ne\varnothing$. We claim that $\mathcal{K}_{n}\cap\partial\mathcal{U}\ne\varnothing$.
Suppose by contradiction that $\mathcal{K}_{n}\cap\partial\mathcal{U}=\varnothing$. In this case, ${\mathbb R}^d = [\mathcal{K}_{n}\cap\partial\mathcal{U}]^c = \mathcal{K}_{n}^c\cup (\partial\mathcal{U})^c$, so that ${\mathcal K}_n = {\mathcal K}_n \cap (\partial\mathcal{U})^c$. Hence, ${\mathcal K}_n \setminus {\mathcal U} = {\mathcal K}_n \cap {\mathcal U}^c = {\mathcal K}_n \cap (\partial\mathcal{U})^c \cap {\mathcal U}^c = {\mathcal K}_n \cap [(\partial\mathcal{U}) \cup {\mathcal U}]^c = {\mathcal K}_n \cap \overline{ {\mathcal U}}^c \subset \overline{ {\mathcal U}}^c$. Therefore, as $\overline{ {\mathcal U}}^c$ is an open set, for all $\bm{x}\in\mathcal{K}_{n}\setminus\mathcal{U}$, there exists $r(\bm{x})>0$ such that $B(\bm{x},\,r(\bm{x}))\subset \overline{ {\mathcal U}}^c$. Since $\mathcal{K}_{n}$ is compact, $\mathcal{K}_{n}\setminus\mathcal{U}$ is compact so that there are finitely many $\bm{x}_{1},\,\dots,\,\bm{x}_{k}\in\mathcal{K}_{n}\setminus\mathcal{U}$ such that $$\mathcal{K}_{n}\setminus\mathcal{U} \subset\bigcup_{j=1}^{k}B(\bm{x}_{j},\,r(\bm{x}_{j}))\ .$$ Therefore, $\mathcal{K}_{n}\subset\mathcal{U}\cup\bigcup_{j=1}^{k} B(\bm{x}_{j},\,r(\bm{x}_{j}))$. However, since $B(\bm{x}_{j},\,r(\bm{x}_{j}))\subset \overline{ {\mathcal U}}^c$ for all $j$, $\mathcal{U}\cap\bigcup_{j=1}^{k}B(\bm{x}_{j},\,r(\bm{x}_{j}))=\varnothing$, in contradiction with the connectedness of $\mathcal{K}_{n}$. This proves the claim. As $(\mathcal{K}_{n}\cap\partial\mathcal{U})$ is a decreasing sequence of compact sets, $\mathcal{K}\cap\partial\mathcal{U}= \bigcap_{n=1}^{\infty}(\mathcal{K}_{n}\cap\partial\mathcal{U})\ne\varnothing$ by Cantor's intersection theorem. Let $\bm{x}_{0}\in\mathcal{K}\cap\partial\mathcal{U}$. Since $\mathcal{U}$ is open, $\bm{x}_{0}\notin\mathcal{U}$ so that $\bm{x}_{0}\in\mathcal{V}$. 
Since $\mathcal{V}$ is open, there exists $r_{0}>0$ such that $B(\bm{x}_{0},\,r_{0})\subset\mathcal{V}$ so that $B(\bm{x}_{0},\,r_{0})\cap\mathcal{U}=\varnothing$, which contradicts the fact that $\bm{x}_{0}\in\partial\mathcal{U}$. This completes the proof of the lemma. ◻ ** 57**. *Let $\mathcal{K}_{n}$ be a decreasing sequence of compact sets. Suppose that $\mathcal{K}:=\bigcap_{n=1}^{\infty}\mathcal{K}_{n}$ is contained in an open set $\mathcal{U}$. Then, there exists $N\in\mathbb{N}$ such that $\mathcal{K}_{N}\subset\mathcal{U}$.* *Proof.* Suppose, by contradiction, that no ${\mathcal K}_n$ is contained in ${\mathcal U}$. Then, for each $n\in\mathbb{N}$, there exists $\bm{x}_{n}\in\mathcal{K}_{n}\setminus\mathcal{U} \subset \mathcal{K}_{1}\setminus\mathcal{U}$. As ${\mathcal K}_{1}\setminus\mathcal{U}$ is compact, there is a subsequence $(\bm{x}_{n}')_{n\ge1}$ which converges to a point $\bm{x}_{0}\in\mathcal{K}_{1} \setminus\mathcal{U}$. Since $\bm{x}_{j}'\in\mathcal{K}_{m}\setminus\mathcal{U}$ for all $j\ge m$, and each set $\mathcal{K}_{m}\setminus\mathcal{U}$ is closed, $\bm{x}_{0}\in\mathcal{K}_{m}\setminus\mathcal{U}$ for every $m\ge 1$. Therefore, $$\bm{x}_{0}\in\bigcap_{n=1}^{\infty}(\mathcal{K}_{n} \setminus\mathcal{U})=\mathcal{K}\setminus\mathcal{U}=\varnothing\ ,$$ which is a contradiction. ◻ ** 58**. *Let $\mathcal{H}$ be a connected component of the set $\{\bm{x}\in\mathbb{R}^{d}:U(\bm{x})\le H\}$, and let ${\mathcal U}$ be an open set containing $\mathcal{H}$. Then, there is $N\in\mathbb{N}$ such that the connected component of $$\{\bm{x}\in\mathbb{R}^{d}:U(\bm{x})\le H+\frac{1}{N}\}$$ containing $\mathcal{H}$ is contained in ${\mathcal U}$.* *Proof.* Let $\mathcal{R}_{n}$ be the connected component of $$\{\bm{x}\in\mathbb{R}^{d}:U(\bm{x}) \le H+\frac{1}{n}\}$$ containing $\mathcal{H}$ and let $\mathcal{R}:=\bigcap_{n=1}^{\infty} {\mathcal R}_n$. Since $\mathcal{H}\subset\mathcal{R}_{n}$ for all $n\ge1$, $\mathcal{H}\subset\mathcal{R}$.
On the other hand, as $(\mathcal{R}_{n})_{n\ge1}$ is a decreasing sequence of compact connected sets, by Lemma [ 56](#l_comp_conn){reference-type="ref" reference="l_comp_conn"}, $\mathcal{R}$ is connected. Since $\mathcal{R}\cap\mathcal{H}\ne\varnothing$, $\mathcal{R}\subset\{\bm{x}\in\mathbb{R}^{d}:U(\bm{x})\le H\}$ and $\mathcal{H}$ is a connected component of $\{\bm{x}\in\mathbb{R}^{d}:U(\bm{x})\le H\}$, by Lemma [ 53](#l_level_connected){reference-type="ref" reference="l_level_connected"}, $\mathcal{R}\subset\mathcal{H}$. By the previous two inclusions, $\mathcal{H}=\mathcal{R}$. As $\mathcal{R}\subset{\mathcal U}$, by Lemma [ 57](#l_decrea_comp){reference-type="ref" reference="l_decrea_comp"} there exists $N\in\mathbb{N}$ such that $\mathcal{R}_{N}\subset{\mathcal U}$. ◻ ** 59**. *Fix $H\in\mathbb{R}$. Let $\mathcal{H}$ be a connected component of $\{\bm{x}\in\mathbb{R}^{d}:U(\bm{x})<H\}$. Then, $\overline{\mathcal{H}}$ is path connected.* *Proof.* As ${\mathcal H}$ is open and connected, it is path connected. It remains to show that each point of the boundary $\partial {\mathcal H}$ is path-connected to ${\mathcal H}$. Fix ${\boldsymbol x}_0\in \partial {\mathcal H}$. By Lemma [ 54](#l_level_boundary){reference-type="ref" reference="l_level_boundary"}, $U(\bm{x}_{0})=H$. Assume that $\bm{x}_{0}$ is not a critical point of $U$. By Lemma [ 50](#l_noncri_divide){reference-type="ref" reference="l_noncri_divide"}, there exists $r>0$ such that the manifold $\{\bm{x}\in\mathbb{R}^{d}:U(\bm{x})=U(\bm{x}_{0})\}$ divides $B(\bm{x}_{0},\,r)$ into two parts: $$\begin{aligned} & B(\bm{x}_{0},\,r)\cap\{\bm{x}\in\mathbb{R}^{d}:U(\bm{x})<U(\bm{x}_{0})\}\ ,\\ & B(\bm{x}_{0},\,r)\cap\{\bm{x}\in\mathbb{R}^{d}:U(\bm{x})>U(\bm{x}_{0})\}\ .\end{aligned}$$ Since $\bm{x}_{0}\in\partial\mathcal{H}$, $B(\bm{x}_{0},\,r)\cap\mathcal{H}\ne\varnothing$ so that $B(\bm{x}_{0},\,r)\cap\{\bm{x}\in\mathbb{R}^{d}:U(\bm{x}) <U(\bm{x}_{0})\}\ne\varnothing$.
By Lemma [ 53](#l_level_connected){reference-type="ref" reference="l_level_connected"}, $B(\bm{x}_{0},\,r)\cap\{\bm{x}\in\mathbb{R}^{d}: U(\bm{x})<U(\bm{x}_{0})\}\subset\mathcal{H}$. By Lemma [ 50](#l_noncri_divide){reference-type="ref" reference="l_noncri_divide"}, there is a path $\bm{z}\colon [0,1]\to B(\bm{x}_{0},\,r)$ such that $\bm{z}(0)=\bm{x}_{0}$ and $\bm{z}\left((0,1]\right)\subset B(\bm{x}_{0},\,r)\cap\{\bm{x}\in\mathbb{R}^{d}: U(\bm{x})<U(\bm{x}_{0})\}\subset\mathcal{H}$. Hence, $\bm{x}_{0}$ is path-connected to $\mathcal{H}$. Suppose that $\bm{x}_{0}$ is a critical point. By Lemma [ 54](#l_level_boundary){reference-type="ref" reference="l_level_boundary"}, $\bm{x}_{0}$ is not a local minimum. By the Hartman--Grobman Theorem, there is $T>0$ and a continuous path $\bm{z}\colon [0,T]\to\mathbb{R}^{d}$ in the unstable manifold of $\bm{x}_{0}$ such that $\bm{z}(0)=\bm{x}_{0}$ and $\bm{z}\left((0,T]\right)\subset\mathcal{H}$. This completes the proof of the lemma. ◻ ** 60**. *Let $\mathcal{H}$ be a connected component of $\{\bm{x}\in\mathbb{R}^{d}:U(\bm{x})\le H\}$, and $\mathcal{H}^{o}$ the interior of $\mathcal{H}$. Denote by ${\mathcal W}_i$, $i\ge 1$, the connected components of $\mathcal{H}^{o}$. Then, the number of connected components is finite. Moreover,* 1. *Let $\mathcal{W}_{j}'$ be a connected component of $\{\bm{x}\in\mathbb{R}^{d}:U(\bm{x})<H\}$ which intersects with $\mathcal{W}_{j}$. Then, $\overline{\mathcal{W}_{j}}=\overline{\mathcal{W}_{j}'}$.* 2. *$\mathcal{H}$ is path connected. In particular, if there are at least two components, then for each $i$ there is $j\neq i$ such that $\overline{\mathcal{W}_{i}}\cap\overline{\mathcal{W}_{j}}\ne\varnothing$.* *Proof.* Consider the open set ${\mathcal W}_1$. Since $U$ has only finitely many critical points, it is not possible to have $U(\bm{x})=H$ for all $\bm{x}\in\mathcal{W}_{1}$: otherwise every point of the open set $\mathcal{W}_{1}$ would be a critical point. Hence, $\mathcal{W}_{1}'$ is well defined and $\mathcal{W}_{1}\cap\mathcal{W}'_1 \neq \varnothing$. Let $\bm{x}_{0}\in\mathcal{W}_{1}\cap\mathcal{W}'_1$.
Claim 1: every ${\boldsymbol x}_1\in {\mathcal W}_1$ such that $U({\boldsymbol x}_1) = H$ is a local maximum of $U$. To prove the claim, fix ${\boldsymbol x}_1\in {\mathcal W}_1$ such that $U({\boldsymbol x}_1) = H$. Since $\mathcal{W}_{1}$ is open, there exists $r_{1}>0$ such that $B(\bm{x}_{1},\,r_{1})\subset\mathcal{W}_{1}$. Let $r_{1}$ be small enough so that there is no critical point in $B(\bm{x}_{1},\,r_{1})\setminus\{\bm{x}_{1}\}$. Let $\bm{y}\in B(\bm{x}_{1},\,r_{1})$ be such that $U(\bm{y})=H$. Since $U(\bm{x})\le H$ for all $\bm{x}\in B(\bm{x}_{1},\,r_{1})$, $\bm{y}$ is a local maximum, so that $\nabla U(\bm{y})=0$ and $\bm{y}$ is a critical point. Hence, $U(\bm{x})<H$ for all $\bm{x}\in B(\bm{x}_{1},\,r_{1})\setminus\{\bm{x}_{1}\}$. Therefore, $\bm{x}_{1}$ is a local maximum, as claimed. Let $$\widehat{\mathcal{W}}_{1}:=\{\bm{x}\in {\mathcal W}_1: \bm{x}\,\text{is not a local maximum}\} \,\subset\, {\mathcal W}_1\ .$$ Since there are finitely many local maxima in $\mathcal{W}_{1}$, $\widehat{\mathcal{W}}_{1}$ is open and connected. By Claim 1, $U(\bm{x})<H$ for all $\bm{x}\in\widehat{\mathcal{W}}_{1}$. By construction, $\bm{x}_{0}\in\widehat{\mathcal{W}}_{1}$. Claim 2: $\widehat{\mathcal{W}}_{1}\subset\mathcal{W}_{1}'\subset\mathcal{W}_{1}$. Since $\mathcal{W}_{1}'$ is a connected component of $\{\bm{x}\in\mathbb{R}^{d}:U(\bm{x})<H\}$ intersecting with $\widehat{\mathcal{W}}_{1}$ and since $U(\bm{x})<H$ for all $\bm{x}\in\widehat{\mathcal{W}}_{1}$, by Lemma [ 53](#l_level_connected){reference-type="ref" reference="l_level_connected"}, $\widehat{\mathcal{W}}_{1}\subset\mathcal{W}_{1}'$. To prove the second inclusion, let $\bm{x}_{2}\in\mathcal{W}_{1}'$. Since ${\boldsymbol x}_0 \in {\mathcal W}'_1$, there is a continuous path ${\boldsymbol z}\colon [0,1]\to {\mathcal W}'_1$ in $\mathcal{W}'_1$ from $\bm{x}_{0}$ to $\bm{x}_{2}$ such that $U({\boldsymbol z}(t)) <H$ for all $0\le t\le 1$.
Since $\mathcal{H}$ is a connected component of $\{\bm{x}\in\mathbb{R}^{d}:U(\bm{x})\le H\}$ containing $\bm{x}_{0}$, this path is contained in $\mathcal{H}$: ${\boldsymbol z}(t) \in\mathcal{H}$ for all $0\le t\le 1$. As $U({\boldsymbol z}(t))<H$, by Lemma [ 54](#l_level_boundary){reference-type="ref" reference="l_level_boundary"}, ${\boldsymbol z}(t) \in\mathcal{H}^{o}$ for all $0\le t\le 1$. As ${\mathcal W}_1$ is a connected component of ${\mathcal H}^o$ and ${\boldsymbol x}_0\in {\mathcal W}_1$, ${\boldsymbol x}_2 \in\mathcal{W}_{1}$, as claimed. By definition, the set ${\mathcal W}'_1$ contains a local minimum. By Claim 2, so does ${\mathcal W}_1$. Since the connected components are disjoint, each one contains at least one local minimum of $U$, and $U$ has only a finite number of critical points, the set ${\mathcal H}^o$ has a finite number of connected components. This is the first assertion of the lemma. By Claim 2, $\overline{\widehat{\mathcal{W}}_{1}} \subset\overline{\mathcal{W}'_1} \subset\overline{\mathcal{W}_{1}}$. Since the local maxima $\bm{y}\in\mathcal{W}_{1}$ are accumulation points of $\widehat{\mathcal{W}}_{1}$, $\overline{\widehat{\mathcal{W}}_{1}}=\overline{\mathcal{W}_{1}}$ so that $\overline{\mathcal{W}_{1}'}=\overline{\mathcal{W}_{1}}$. This proves the second assertion of the lemma. Denote by $n$ the number of connected components of ${\mathcal H}^o$, so that $\mathcal{H}=\overline{\mathcal{H}^{o}}=\overline{\bigcup_{i=1}^{n} \mathcal{W}_{i}}=\bigcup_{i=1}^{n}\overline{\mathcal{W}_{i}}$. By Lemma [ 59](#l_pathcon_closure){reference-type="ref" reference="l_pathcon_closure"}, $\overline{\mathcal{W}_{i}} = \overline{\mathcal{W}_{i}'}$ is path-connected for each $i$. Claim 3: for all $i \neq j\in \{1, \dots, n\}$, there exist indices $i= i_{0},\,\dots,\,i_{k} = j$ such that $$\label{ap01} \overline{\mathcal{W}_{i_{m}}}\cap\overline{\mathcal{W}_{i_{m+1}}} \ne \varnothing \;, \quad 0\le m < k\;.$$ Suppose this property does not hold.
Then, there exist $i\neq j\in \{1, \dots, n\}$ which are not connected in the sense [\[ap01\]](#ap01){reference-type="eqref" reference="ap01"}. Let $A$ be the set of indices in $\{1, \dots, n\}$ which are connected to $i$ in the sense [\[ap01\]](#ap01){reference-type="eqref" reference="ap01"}. The sets $\cup_{k\in A} \overline{\mathcal{W}_{k}}$, $\cup_{k\not\in A} \overline{\mathcal{W}_{k}}$ are compact, disjoint and non-empty. Thus, there exist disjoint open sets ${\mathcal U}$, ${\mathcal V}$ such that $\cup_{k\in A} \overline{\mathcal{W}_{k}} \subset {\mathcal U}$, $\cup_{k\not\in A} \overline{\mathcal{W}_{k}} \subset {\mathcal V}$. This contradicts the fact that ${\mathcal H} = \bigcup_{i=1}^{n}\overline{\mathcal{W}_{i}}$ is connected, and proves Claim 3. Since each set $\overline{\mathcal{W}_{i}}$ is path-connected, by property [\[ap01\]](#ap01){reference-type="eqref" reference="ap01"}, the set $\mathcal{H}$ is also path-connected. ◻ ** 61**. *The connected component of the set $\{\bm{x}\in\mathbb{R}^{d}:U(\bm{x})\le\Theta(\bm{m}',\,\bm{m}'')\}$ containing $\bm{m}'$ also contains $\bm{m}''$.* *Proof.* Let $\mathcal{H}$ be the connected component of the set $\{\bm{x}\in\mathbb{R}^{d}:U(\bm{x})\le\Theta(\bm{m}',\,\bm{m}'')\}$ containing $\bm{m}'$. Suppose, by contradiction, that $\bm{m}''\notin\mathcal{H}$. Since $\mathcal{H}$ and $\{\bm{m}''\}$ are compact sets, there is an open set ${\mathcal U}$ such that $\mathcal{H}\subset{\mathcal U}$ and $\bm{m}''\notin{\mathcal U}$. By Lemma [ 58](#l_level_cover){reference-type="ref" reference="l_level_cover"}, there is $n\in\mathbb{N}$ such that the connected component of $$\{\bm{x}\in\mathbb{R}^{d}:U(\bm{x})\le\Theta(\bm{m}',\,\bm{m}'') +\frac{1}{n}\}$$ containing $\mathcal{H}$ is contained in ${\mathcal U}$. This connected component does not contain $\bm{m}''$.
Thus, by Lemma [ 55](#lap01){reference-type="ref" reference="lap01"}, $\Theta(\bm{m}',\,\bm{m}'')\ge \Theta(\bm{m}',\,\bm{m}'')+\frac{1}{n}$, which is a contradiction. ◻ ** 62**. *Let $\bm{m},\,\bm{m}'\in\mathcal{M}_{0}$ be two different local minima. Then, $$U(\bm{m}), \, U(\bm m')<\Theta(\bm{m},\,\bm{m}')\ .$$* *Proof.* We only prove the inequality for $\bm m$, since $\Theta(\cdot, \, \cdot)$ is symmetric. Since $\bm{m}$ is a local minimum, there exists $\eta>0$ such that $\bm{m}$ is the unique local minimum of the connected component of $\{\bm{x}\in\mathbb{R}^{d}:U(\bm{x})<U(\bm{m})+2\eta\}$ containing $\bm{m}$. Therefore, by Lemma [ 55](#lap01){reference-type="ref" reference="lap01"}, for every continuous path $\bm{z}$ connecting $\bm{m}$ to any other local minimum, we have $$\max_{t\in[0,1]}U(\bm{z}(t))>U(\bm{m})+\eta\ ,$$ which implies $$\Theta(\bm{m},\,\bm{m}')\ge U(\bm{m})+\eta>U(\bm{m})\ .$$ ◻ *Proof of Lemma [ 46](#l_105a-1){reference-type="ref" reference="l_105a-1"}.* Fix ${\boldsymbol m}\in{\mathcal M}_0$. Let $\mathcal{H}$ be the connected component of the set $\{\bm{x}\in\mathbb{R}^{d}: U(\bm{x}) \le U (\bm{m}) + \Gamma ({\boldsymbol m})\}$ containing $\bm{m}$. By definition, there exists $\bm{m}' \in\mathcal{M}_{0}$ such that $$\Theta(\bm{m},\,\bm{m}')=\min_{\bm{m}''\in \mathcal{M}_{0}\setminus\{\bm{m}\}}\Theta(\bm{m},\,\bm{m}'') =U(\bm{m})+\Gamma(\bm{m})\;.$$ By Lemma [ 61](#l_105a-2){reference-type="ref" reference="l_105a-2"}, ${\boldsymbol m}'\in {\mathcal H}$. As in Lemma [ 60](#l_pathcon_level){reference-type="ref" reference="l_pathcon_level"}, denote by $\mathcal{H}^{o}$ the interior of ${\mathcal H}$. Let $\mathcal{W}_{i}$, $1\le i\le n$, be the open connected components of ${\mathcal H}^{o}$. Assume that $\bm{m}\in\mathcal{W}_{1}$. We assert that $\bm{m}$ is the unique local minimum in $\mathcal{W}_{1}$.
Indeed, as in Lemma [ 60](#l_pathcon_level){reference-type="ref" reference="l_pathcon_level"}, let $\mathcal{W}_{1}'$ be a connected component of $\{\bm{x}\in\mathbb{R}^{d}:U(\bm{x})< U(\bm{m})+\Gamma(\bm{m})\}$ which intersects with $\mathcal{W}_{1}$. By [\[a01\]](#a01){reference-type="eqref" reference="a01"}, ${\mathcal W}'_1$ contains one and only one local minimum of $U$. On the other hand, by Claim 2 in Lemma [ 60](#l_pathcon_level){reference-type="ref" reference="l_pathcon_level"}, ${\mathcal W}'_1 \subset {\mathcal W}_1$ and all elements of ${\mathcal W}_1\setminus {\mathcal W}'_1$ are local maxima. This proves the assertion. By Lemma [ 62](#l_118){reference-type="ref" reference="l_118"}, $U(\bm m')<\Theta(\bm m, \, \bm m')$. As ${\boldsymbol m}'\in{\mathcal H}$, by Lemma [ 54](#l_level_boundary){reference-type="ref" reference="l_level_boundary"}, $\bm m' \in {\mathcal H}^{o}$. Since $\bm{m}$ is the unique local minimum in $\mathcal{W}_{1}$, $\bm{m}'\notin\mathcal{W}_{1}$, so that $n\ge2$. By Lemma [ 60](#l_pathcon_level){reference-type="ref" reference="l_pathcon_level"}-(2), there is $1<k\le n$ such that $\overline{\mathcal{W}_{1}}\cap\overline{\mathcal{W}_{k}} \ne\varnothing$. Let ${\mathcal W}$, $\mathcal{W}'$ be connected components of $\{\bm{x}\in\mathbb{R}^{d}:U(\bm{x})<\Theta(\bm{m},\,\bm{m}')\}$ intersecting with ${\mathcal W}_1$, $\mathcal{W}_{k}$, respectively. By Lemma [ 60](#l_pathcon_level){reference-type="ref" reference="l_pathcon_level"}, $\overline{\mathcal{W}}=\overline{\mathcal{W}_{1}}$ and $\overline{\mathcal{W}'}=\overline{\mathcal{W}_{k}}$ so that $\overline{\mathcal{W}}\cap\overline{\mathcal{W}'}\ne\varnothing$. ◻ # Extension of the vector field $\mathbf b$ {#sec-ap4} Fix ${\boldsymbol m}\in {\mathcal M}_0$. In this section, we define a new vector field ${\boldsymbol b}_0 \colon {\mathbb R}^d \to {\mathbb R}^d$ which coincides with ${\boldsymbol b}$ in a neighborhood of ${\boldsymbol m}$ and satisfies the hypotheses of Section [3](#sec-ap3){reference-type="ref" reference="sec-ap3"}.
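Before the formal construction, the extension technique can be sketched numerically: outside a ball of radius $r_5$, the field is replaced by its first-order Taylor expansion around the radial projection onto the sphere $\partial B({\boldsymbol 0},\,r_5)$. The toy field, the radius, and the finite-difference Jacobian below are illustrative assumptions, not the objects of the paper.

```python
import numpy as np

# Radius of the ball on which the original field is kept (assumption).
r5 = 1.0
H = np.array([[2.0, 0.0], [0.0, 1.0]])  # toy Hessian at the minimum

def b(x):
    # Toy vector field with b(0) = 0 (an assumption, not the paper's b).
    return -H @ x + 0.1 * np.array([x[1] ** 3, -x[0] ** 3])

def Db(x, h=1e-6):
    # Finite-difference Jacobian of b at x.
    d = len(x)
    J = np.zeros((d, d))
    for k in range(d):
        e = np.zeros(d)
        e[k] = h
        J[:, k] = (b(x + e) - b(x - e)) / (2 * h)
    return J

def b0(x):
    # Extension: keep b inside the ball; outside, use the first-order
    # Taylor expansion of b around the radial projection r(x).
    nx = np.linalg.norm(x)
    if nx <= r5:
        return b(x)
    rx = (r5 / nx) * x              # projection onto the sphere
    return b(rx) + Db(rx) @ (x - rx)

# b0 coincides with b inside the ball ...
inside = np.array([0.6, 0.3])
print(np.allclose(b0(inside), b(inside)))            # True
# ... and is continuous across the sphere |x| = r5.
on_sphere = np.array([0.8, 0.6])                     # |x| = 1 = r5
just_out = 1.000001 * on_sphere
print(np.allclose(b0(just_out), b(on_sphere), atol=1e-4))  # True
```

The same two-sided comparison applied to the Jacobian would illustrate the $C^1$ regularity and the linear growth established below.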
Assume that ${\boldsymbol m} = {\boldsymbol 0}$, and let $$\mathbb{H}= (\nabla^{2}U) (\boldsymbol{0})\;,\;\;\; \mathbb{L}= (D\boldsymbol{\ell}) (\boldsymbol{0})\;.$$ By Taylor expansion, for $\boldsymbol{x}\simeq\boldsymbol{0}$, $$\begin{aligned} -\, {\boldsymbol b} (\boldsymbol{x})\cdot\mathbb{H}\boldsymbol{x} \,=\, \left[(\mathbb{H}+\mathbb{L})\boldsymbol{x} +O(|\boldsymbol{x}|^{2})\right]\cdot\mathbb{H} \boldsymbol{x}=|\mathbb{H}\boldsymbol{x}|^{2}+O(|\boldsymbol{x}|^{3})\;,\end{aligned}$$ where the second equality comes from the fact that $\mathbb{H}\mathbb{L}$ is skew-symmetric. Thus, there exists $r_5>0$ such that $$-\, {\boldsymbol b} (\boldsymbol{x})\cdot\mathbb{H}\boldsymbol{x} \ge\frac{1}{2}\, |\mathbb{H}\boldsymbol{x}|^{2} \;\;\;\;\text{for all\;}\boldsymbol{x}\in B({\boldsymbol 0},2r_5) \;. \label{eq:cond1}$$ If necessary, decrease $r_5>0$ so that $$| {\mathbb K}_{{\boldsymbol x}} {\boldsymbol y} | \,\le\, \frac{1}{2}\, | \mathbb{H} {\boldsymbol y} | \;\;\;\; \text{for all\;}\boldsymbol{x}\in B({\boldsymbol 0},2r_5) \;, \;\; {\boldsymbol y}\in{\mathbb R}^d\;, \label{eq:cond2}$$ where ${\mathbb K}_{{\boldsymbol x}} = (\nabla^{2}U+D\boldsymbol{\ell})(\boldsymbol{x}) -(\mathbb{H}+\mathbb{L})$. For $\boldsymbol{x}\not \in B({\boldsymbol 0}, r_5)$, let $$\boldsymbol{r}(\boldsymbol{x})= \frac{r_5}{|\boldsymbol{x}|}\boldsymbol{x} \in\partial B({\boldsymbol 0}, r_5) \;,$$ and let $\boldsymbol{b}_0:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}$ be given by $$\boldsymbol{b}_0 (\boldsymbol{x}) \;=\; \left\{ \begin{aligned} & \boldsymbol{b}(\boldsymbol{x}) \;, \;\; \boldsymbol{x}\in B({\boldsymbol 0}, r_5) \\ & \boldsymbol{b}(\boldsymbol{r}(\boldsymbol{x})) + (D\boldsymbol{b}) (\boldsymbol{r}(\boldsymbol{x}))(\boldsymbol{x} -\boldsymbol{r}(\boldsymbol{x})) \;, \;\; \boldsymbol{x}\in B({\boldsymbol 0}, r_5)^{c}\;. \end{aligned} \right. \label{eq:vecb}$$ The main result of this section reads as follows. ** 63**.
*The vector field $\boldsymbol{b}_0$ fulfils all conditions of Section [3](#sec-ap3){reference-type="ref" reference="sec-ap3"}. Condition (2) holds for $r_3=r_5$.* The proof relies on two lemmata. ** 64**. *The vector field $\boldsymbol{b}_0$ belongs to $C^{1}(\mathbb{R}^{d},\,\mathbb{R}^{d})$. Moreover, there exists a finite constant $C_{1}$ such that $$|\boldsymbol{b}_0(\boldsymbol{x})|\le C_{1}|\boldsymbol{x}| \quad\text{and}\quad \left\Vert D\boldsymbol{b}_0 (\boldsymbol{x})\right\Vert \le C_{1}|\boldsymbol{x}| \label{eq:cond_b}$$ for all $\boldsymbol{x}\in B({\boldsymbol 0}, r_5)^{c}$.* *Proof.* By a straightforward computation, for $|\boldsymbol{x}| > r_5$, $$(\partial_{x_k}\boldsymbol{b}_0) (\boldsymbol{x}) \,=\, \partial_{x_k} \big\{\, \left[D\boldsymbol{b}(\bm{r}(\bm{x}))\right] (\bm{x}-\bm{r}(\bm{x})) \,\big\} \,+\, \partial_{x_k} \{ \, \boldsymbol{b} (\bm{r}(\bm{x})) \,\} \;. \label{eq:part_b}$$ Since $$\partial_{x_k}\bm{r}(\bm{x}) \,=\, r_5\, \Big(\, \frac{\boldsymbol{e}_{k}}{|\boldsymbol{x}|} \,-\, \frac{x_{k}}{|\boldsymbol{x}|^{3}}\, \boldsymbol{x} \,\Big)\;,$$ the matrix $\partial_{x_k}\left[D\boldsymbol{b}(\bm{r}(\bm{x}))\right]$ is uniformly bounded on $B({\boldsymbol 0}, r_5)^{c}$. Since $\bm{r}(\bm{x})\rightarrow\boldsymbol{x}$ as $\boldsymbol{x}$ approaches $\partial B({\boldsymbol 0}, r_5)$, the boundedness of $\partial_{x_k}\left[D\boldsymbol{b}(\bm{r}(\bm{x}))\right]$ yields that $\partial_{x_k}\boldsymbol{b}_0(\boldsymbol{x})\rightarrow \partial_{x_k}\boldsymbol{b}(\boldsymbol{x})$ as $\boldsymbol{x}$ approaches $\partial B({\boldsymbol 0}, r_5)$. This proves that $\boldsymbol{b}_0\in C^{1}(\mathbb{R}^{d},\,\mathbb{R}^{d})$. The first assertion of ([\[eq:cond_b\]](#eq:cond_b){reference-type="ref" reference="eq:cond_b"}) follows from the definition of ${\boldsymbol b}_0$.
The second one follows from ([\[eq:part_b\]](#eq:part_b){reference-type="ref" reference="eq:part_b"}) and the boundedness of $\partial_{x_k}\left[D\boldsymbol{b}(\bm{r}(\bm{x}))\right]$ on $B({\boldsymbol 0},r_5)^{c}$. ◻ ** 65**. *For all $\boldsymbol{x}\in\mathbb{R}^{d}$, $$-\, \boldsymbol{b}_0 (\boldsymbol{x})\cdot\mathbb{H}\boldsymbol{x} \ge\frac{1}{2}|\mathbb{H}\boldsymbol{x}|^{2}\;.$$* *Proof.* By ([\[eq:cond1\]](#eq:cond1){reference-type="ref" reference="eq:cond1"}), the condition is satisfied for $\boldsymbol{x}\in B({\boldsymbol 0}, r_5)$, where $\boldsymbol{b}_0=\boldsymbol{b}$. Fix $\boldsymbol{x}\not \in B({\boldsymbol 0}, r_5)$, so that $$-\, \boldsymbol{b}_0 (\boldsymbol{x})\cdot\mathbb{H}\bm{x} \,=\, -\, \boldsymbol{b}(\bm{r}(\bm{x}))\cdot\mathbb{H}\bm{x} \,-\, (D\boldsymbol{b}) (\bm{r}(\bm{x}))(\bm{x} -\bm{r}(\bm{x}))\cdot\mathbb{H}\bm{x} \;. \label{eq:cs1}$$ Since $\boldsymbol{x}= (|\boldsymbol{x}|/r_5)\, \boldsymbol{r}(\boldsymbol{x})$ and since $\boldsymbol{r}(\boldsymbol{x})\in B({\boldsymbol 0}, 2r_5)$, by ([\[eq:cond1\]](#eq:cond1){reference-type="ref" reference="eq:cond1"}), the first term on the right-hand side can be estimated by $$-\, \frac{|\boldsymbol{x}|}{r_5} \, {\boldsymbol b} (\bm{r}(\bm{x}))\cdot \mathbb{H}\bm{r}(\bm{x}) \, \ge\, \frac{|\boldsymbol{x}|}{2r_5}\, |\, \mathbb{H}\bm{r}(\bm{x})\, |^{2} \,=\, \frac{r_5}{2|\bm{x}|}\, |\mathbb{H}\bm{x}|^{2}\;.$$ For the second term, write $$-\, (D\boldsymbol{b}) (\bm{r}(\bm{x})) \,=\, \mathbb{H} +\mathbb{L}+\mathbb{K}_{\bm{r}(\bm{x})} \;\;\; \text{and}\;\;\;\;\bm{x}-\bm{r}(\bm{x}) =\big(\, 1-\frac{r_5}{|\boldsymbol{x}|} \, \big)\, \boldsymbol{x}\;.$$ Since $\mathbb{HL}$ is skew-symmetric, the second term of ([\[eq:cs1\]](#eq:cs1){reference-type="ref" reference="eq:cs1"}) is equal to $$\begin{aligned} & \big(\, 1-\frac{r_5}{|\boldsymbol{x}|} \, \big)\, \left(\mathbb{H}+\mathbb{L}+\mathbb{K}_{\bm{r}(\bm{x})} \right) \boldsymbol{x}\cdot\mathbb{H}\bm{x} \\ & \quad =\; \big(\, 1-\frac{r_5}{|\boldsymbol{x}|} \, \big)\, \left(\left|\mathbb{H}\boldsymbol{x}\right|^{2} +\mathbb{K}_{\bm{r}(\bm{x})} \boldsymbol{x}\cdot\mathbb{H}\bm{x}\right) \ge\frac{1}{2}\, \big(\, 1-\frac{r_5}{|\boldsymbol{x}|} \, \big)\, \left|\mathbb{H}\boldsymbol{x}\right|^{2}\;.\end{aligned}$$ The last inequality comes from ([\[eq:cond2\]](#eq:cond2){reference-type="ref" reference="eq:cond2"}). Adding the previous estimates completes the proof of the lemma. ◻ *Proof of Proposition [ 63](#pap4){reference-type="ref" reference="pap4"}.* To check the first condition, suppose that $\boldsymbol{b}_0 (\boldsymbol{x})=0$ for some $\boldsymbol{x}\in\mathbb{R}^{d}$. Lemma [ 65](#prop:b2){reference-type="ref" reference="prop:b2"} implies that $\mathbb{H}\boldsymbol{x}=0$, so that $\boldsymbol{x}=\boldsymbol{0}$ because $\mathbb{H}$ is non-degenerate. Thus $\boldsymbol{0}$ is the only equilibrium of the dynamical system [\[eq:x_0\]](#eq:x_0){reference-type="eqref" reference="eq:x_0"}. Since the behavior of this ODE near $\boldsymbol{0}$ is identical to that of $\boldsymbol{x}(\cdot)$, the origin is a stable equilibrium. Condition (2) in Section [3](#sec-ap3){reference-type="ref" reference="sec-ap3"} for $r_3=r_5$ follows from the definition of ${\boldsymbol b}_0$. The third and fourth conditions have been derived in Lemmata [ 64](#prop:b1){reference-type="ref" reference="prop:b1"} and [ 65](#prop:b2){reference-type="ref" reference="prop:b2"}, respectively. ◻ # Potential Theory {#sec-ap2} For the sake of completeness, we introduce in this section the capacity between sets. Fix two disjoint non-empty bounded domains $\mathcal{A}$ and $\mathcal{B}$ of $\mathbb{R}^{d}$ with $C^{2,\,\alpha}$-boundaries for some $\alpha\in(0,\,1)$. Assume that the perimeters of ${\mathcal A}$, ${\mathcal B}$ are finite and that the distance between the sets is positive. Let $\Omega=(\overline{\mathcal{A}}\cup\overline{\mathcal{B}})^{c}$ so that $\partial\Omega=\partial\mathcal{A}\cup\partial\mathcal{B}$.
The equilibrium potential $h_{\mathcal{A},\mathcal{\,B}}^{\epsilon}$ between $\mathcal{A}$ and $\mathcal{B}$ with respect to the process $\boldsymbol{x}_{\epsilon}(\cdot)$ is given by $$h_{\mathcal{A},\mathcal{\,B}}^{\epsilon}\,(\boldsymbol{x}) \,=\,\mathbb{P}_{\boldsymbol{x}}^{\epsilon} \,[\,\tau_{\mathcal{A}}<\tau_{\mathcal{B}}\,]\;, \quad \boldsymbol{x}\in\mathbb{R}^{d}\;,$$ and the capacity by $$\textup{cap}_{\epsilon}(\mathcal{A},\,\mathcal{B})\,=\, \epsilon\int_{\Omega}|\nabla h_{\mathcal{A},\,\mathcal{B}}^{\epsilon}|^{2}\,d\mu_{\epsilon} \;.$$ We refer to [@LMS] for equivalent formulations and properties of the capacity. # Analysis of a linear ODE {#sec:ODE} In this section, we prove Lemma [ 20](#lem_esclin){reference-type="ref" reference="lem_esclin"}. To simplify notation, we fix $\boldsymbol{c}\in\mathcal{Y}_{0}$, assumed to be the origin, $\boldsymbol{c}=\boldsymbol{0}$, and write $\mathbb{A}=-(\mathbb{H}^{\bm{c}}+\mathbb{L}^{\bm{c}})$. By Lemma [ 18](#lem:hyper){reference-type="ref" reference="lem:hyper"}, the matrix $\mathbb{A}$ is invertible and does not have a pure imaginary eigenvalue. All the results given in this section hold for such a matrix $\mathbb{A}$. ## Real Jordan canonical form {#real-jordan-canonical-form .unnumbered} Suppose that a matrix $\mathbb{K}$ can be written as a block matrix of the form $$\mathbb{K}=\begin{bmatrix}\mathbb{K}_{1} & \mathbb{O} & \mathbb{O}\\ \mathbb{O} & \ddots & \mathbb{O}\\ \mathbb{O} & \mathbb{O} & \mathbb{K}_{n} \end{bmatrix} \;,$$ where $\mathbb{K}_{1},\dots,\,\mathbb{K}_{n}$ are matrices of possibly different sizes and $\mathbb{O}$ denotes the zero matrix of suitable size. We represent such a matrix as $\mathbb{K}=\text{diag}(\mathbb{K}_{1},\,\dots,\,\mathbb{K}_{n})$. We start with a review of the real Jordan canonical form of $\mathbb{A}$.
By [@ODE Theorem 2.5], there exists an invertible matrix $\mathbb{U}$ such that $$\label{62} \mathbb{A}=\mathbb{U}\mathbb{J}\mathbb{U}^{-1}$$ where $\mathbb{J}$ is of the form $$\mathbb{J}={\rm diag}(\mathbb{E}_{1}^{-},\,\dots,\,\mathbb{E}_{u_{1}}^{-},\, \mathbb{F}_{1}^{-},\,\dots,\,\mathbb{F}_{u_{2}}^{-},\, \mathbb{E}_{1}^{+},\,\dots,\,\mathbb{E}_{s_{1}}^{+},\, \mathbb{F}_{1}^{+},\,\dots,\,\mathbb{F}_{s_{2}}^{+})\;,$$ with $$\mathbb{E}_{k}^{\pm}=\left[\begin{array}{ccc} \lambda_{\,k}^{\pm} & 1 & 0\\ 0 & \ddots & 1\\ 0 & 0 & \lambda_{\,k}^{\pm} \end{array}\right]\ \;, \quad \mathbb{F}_{k}^{\pm}=\left[\begin{array}{ccc} \mathbb{B}_{k}^{\pm} & \mathbb{I}_{2} & 0\\ 0 & \ddots & \mathbb{I}_{2}\\ 0 & 0 & \mathbb{B}_{k}^{\pm} \end{array}\right]\;,$$ and $$\mathbb{I}_{2}=\left[\begin{array}{cc} 1 & 0\\ 0 & 1 \end{array}\right]\ \;, \quad \ \mathbb{B}_{k}^{\pm} =\left[\begin{array}{cc} \alpha_{k}^{\pm} & \beta_{k}^{\pm}\\ -\beta_{k}^{\pm} & \alpha_{k}^{\pm} \end{array}\right]\ .$$ In this formula, $\lambda_{k}^{+}$ and $\alpha_{k}^{+}$ are positive while $\lambda_{k}^{-}$ and $\alpha_{k}^{-}$ are negative real numbers. The eigenvalues of $\mathbb{A}$ are $\lambda_{k}^{\pm}$ and $\alpha_{k}^{\pm}+i\beta_{k}^{\pm}$. Thus, by Lemma [ 18](#lem:hyper){reference-type="ref" reference="lem:hyper"}, $\lambda_{k}^{\pm}$ and $\alpha_{k}^{\pm}$ cannot be $0$. (Note that the real numbers $\beta^\pm_k$ are also different from $0$ because the eigenvalues $\alpha_{k}^{\pm}+i\beta_{k}^{\pm}$ are not real, but this will not be used below).
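As a quick numerical sanity check of this block structure (the block sizes and the values $\lambda=-1$, $\alpha=1/2$, $\beta=2$ are arbitrary illustrative choices, not tied to any particular $\mathbb{A}$), one can assemble $\mathbb{J}$ from one block of each type, conjugate it, and recover the spectrum $\{\lambda,\,\alpha\pm i\beta\}$:

```python
import numpy as np

# One 2x2 Jordan block E with eigenvalue lambda = -1, and one 2x2
# rotation block B with alpha = 0.5, beta = 2 (illustrative values).
E = np.array([[-1.0, 1.0],
              [0.0, -1.0]])
B = np.array([[0.5, 2.0],
              [-2.0, 0.5]])
J = np.block([[E, np.zeros((2, 2))],
              [np.zeros((2, 2)), B]])

# Conjugate by a random orthogonal matrix (a particular invertible U).
rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.standard_normal((4, 4)))
A = U @ J @ U.T

# Eigenvalues of A: lambda = -1 (twice) and alpha +/- i beta = 0.5 +/- 2j,
# up to rounding errors coming from the defective Jordan block.
print(np.sort_complex(np.linalg.eigvals(A)))
```

The double eigenvalue $-1$ is defective (a genuine Jordan block), so the numerically computed pair agrees with $-1$ only up to roughly the square root of machine precision; the complex pair is recovered to full accuracy.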
By [\[62\]](#62){reference-type="eqref" reference="62"}, $$e^{t\mathbb{A}}=\mathbb{U}e^{t\mathbb{J}}\mathbb{U}^{-1}$$ where $$e^{t\mathbb{J}}={\rm diag}(e^{t\mathbb{E}_{1}^{-}},\,\dots,\,e^{t\mathbb{E}_{u_{1}}^{-}},\,e^{t\mathbb{F}_{1}^{-}},\,\dots,\,e^{t\mathbb{F}_{u_{2}}^{-}},\,e^{t\mathbb{E}_{1}^{+}},\,\dots,\,e^{t\mathbb{E}_{s_{1}}^{+}},\,e^{t\mathbb{F}_{1}^{+}},\,\dots,\,e^{t\mathbb{F}_{s_{2}}^{+}})\ .\label{e: Jordan_exp}$$ Suppose that $\mathbb{E}_{k}^{\pm}$ is a $(j+1)\times(j+1)$ matrix. An elementary computation yields that $$\label{e:jor1} e^{t\mathbb{E}_{k}^{\pm}}=e^{t\lambda_{k}^{\pm}}\left[\begin{array}{cccccc} 1 & t & \frac{t^{2}}{2} & \cdots & \frac{t^{j-1}}{(j-1)!} & \frac{t^{j}}{j!}\\ 0 & 1 & t & \cdots & \frac{t^{j-2}}{(j-2)!} & \frac{t^{j-1}}{(j-1)!}\\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\ 0 & 0 & 0 & \cdots & 1 & t\\ 0 & 0 & 0 & \cdots & 0 & 1 \end{array}\right]\;.$$ Similarly, if $\mathbb{F}_{k}^{\pm}$ is a $2(j+1)\times2(j+1)$ matrix, $$e^{t\mathbb{F}_{k}^{\pm}}=e^{t\alpha_{k}^{\pm}}\left[\begin{array}{cccccc} \mathbb{S} & t\mathbb{S} & \frac{t^{2}}{2}\mathbb{S} & \cdots & \frac{t^{j-1}}{(j-1)!}\mathbb{S} & \frac{t^{j}}{j!}\mathbb{S}\\ \mathbb{O} & \mathbb{S} & t\mathbb{S} & \cdots & \frac{t^{j-2}}{(j-2)!}\mathbb{S} & \frac{t^{j-1}}{(j-1)!}\mathbb{S}\\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\ \mathbb{O} & \mathbb{O} & \mathbb{O} & \cdots & \mathbb{S} & t\mathbb{S}\\ \mathbb{O} & \mathbb{O} & \mathbb{O} & \cdots & \mathbb{O} & \mathbb{S} \end{array}\right]\;, \label{jor2}$$ where $\mathbb{O}$ denotes the $2\times2$ zero matrix and $$\mathbb{S}=\left[\begin{array}{cc} \cos(t\beta_{k}^{\pm}) & \sin(t\beta_{k}^{\pm})\\ -\sin(t\beta_{k}^{\pm}) & \cos(t\beta_{k}^{\pm}) \end{array}\right]\;.$$ ## Stable and unstable manifolds {#stable-and-unstable-manifolds .unnumbered} Recall from [\[34\]](#34){reference-type="eqref" reference="34"} that we represent by $\upsilon_{L,{\boldsymbol x}} (t)$ the solution of the linear ODE
[\[34\]](#34){reference-type="eqref" reference="34"}. With the notation of this section, it can be written as $$\label{e: sol_ODElin} \frac{d}{dt} \upsilon_{L,{\boldsymbol x}} (t) \,=\, {\mathbb A}\, \upsilon_{L,{\boldsymbol x}} (t) \;, \quad \upsilon_{L,{\boldsymbol x}} (t) \;=\; e^{t\mathbb{A}}\boldsymbol{x} =\mathbb{U} e^{t\mathbb{J}}\mathbb{U}^{-1}\boldsymbol{x}\ .$$ Recall that $\mathcal{M}_{L,s}$, $\mathcal{M}_{L,u}$ represent the stable, unstable manifold of ${\boldsymbol c} = {\boldsymbol 0}$ for the linear ODE [\[34\]](#34){reference-type="eqref" reference="34"}. By [\[e: sol_ODElin\]](#e: sol_ODElin){reference-type="eqref" reference="e: sol_ODElin"}, $$\mathcal{M}_{L,u} :=\big\{ \bm{y}\in\mathbb{R}^{d}: \lim_{t\to-\infty}e^{t\mathbb{J}}\mathbb{U}^{-1}\bm{y}=\bm{0}\, \big\} \ , \quad \mathcal{M}_{L,s} :=\big\{ \, \bm{y}\in\mathbb{R}^{d}: \lim_{t\to+\infty}e^{t\mathbb{J}}\mathbb{U}^{-1}\bm{y}=\bm{0}\, \big\} \ .$$ Denote by $m\in\mathbb{N}$ the size of the matrix ${\rm diag}(e^{t\mathbb{E}_{1}^{-}},\,\dots,\,e^{t\mathbb{E}_{u_{1}}^{-}}, \,e^{t\mathbb{F}_{1}^{-}},\,\dots,\,e^{t\mathbb{F}_{u_{2}}^{-}})$, and by $\{\boldsymbol{u}_{1},\,\dots,\,\boldsymbol{u}_{d}\}$ the column vectors of $\mathbb{U}$, ($\boldsymbol{u}_{i}=\mathbb{U}\boldsymbol{e}_{i}$, where $\{\bm{e}_{1},\,\dots,\,\bm{e}_{d}\}$ stands for the canonical basis of $\mathbb{R}^{d}$). By [\[e: Jordan_exp\]](#e: Jordan_exp){reference-type="eqref" reference="e: Jordan_exp"}, [\[e:jor1\]](#e:jor1){reference-type="eqref" reference="e:jor1"}, and [\[jor2\]](#jor2){reference-type="eqref" reference="jor2"}, $$\mathcal{M}_{L,s} =\left\langle \bm{u}_{1},\,\dots,\, \bm{u}_{m}\right\rangle \;\;\;\text{and\;\;\;} \mathcal{M}_{L,u} =\left\langle \bm{u}_{m+1},\,\dots,\, \bm{u}_{d}\right\rangle \;,\label{eq:AuAs}$$ where $\left\langle S\right\rangle$ denotes the vector space spanned by $S$. The following lemma is a direct consequence of the discussion above. ** 66**.
*There exists $C_{0}>0$ such that for all ${\boldsymbol y}\in {\mathcal M}_{L,s}$ and $t\ge0$, $$\Vert\upsilon_{L,{\boldsymbol y}} (t)\Vert \le C_{0}\Vert\boldsymbol{y}\Vert\;.$$* *Proof.* Write $\gamma=\min\{|\lambda_{1}^{-}|,\,\dots, \,|\lambda_{u_{1}}^{-}|,\,|\alpha_{1}^{-}|,\,\dots,\,|\alpha_{u_{2}}^{-}|\}>0$. Then, by [\[e: Jordan_exp\]](#e: Jordan_exp){reference-type="eqref" reference="e: Jordan_exp"}, [\[e:jor1\]](#e:jor1){reference-type="eqref" reference="e:jor1"}, [\[jor2\]](#jor2){reference-type="eqref" reference="jor2"}, [\[e: sol_ODElin\]](#e: sol_ODElin){reference-type="eqref" reference="e: sol_ODElin"}, and [\[eq:AuAs\]](#eq:AuAs){reference-type="eqref" reference="eq:AuAs"}, it is clear that there exists a polynomial $P(t)$ depending only on $d$ such that $$\Vert\upsilon_{L,{\boldsymbol y}} (t)\Vert\le e^{-\gamma t}P(t)\Vert\boldsymbol{y}\Vert\;.$$ The conclusion of the lemma follows immediately. ◻ By [\[eq:AuAs\]](#eq:AuAs){reference-type="eqref" reference="eq:AuAs"}, $\mathbb{R}^{d}={\mathcal M}_{L,u}\oplus{\mathcal M}_{L,s}$. Hence, for each $\bm{y}\in\mathbb{R}^{d}$, there exists a unique decomposition $$\bm{y}=\boldsymbol{v}^{u}(\boldsymbol{y}) +\bm{v}^{s}(\boldsymbol{y})\ ,\label{eq:decus}$$ such that $\boldsymbol{v}^{u}(\boldsymbol{y})\in{\mathcal M}_{L,u}$ and $\bm{v}^{s}(\boldsymbol{y})\in{\mathcal M}_{L,s}$. The next lemma provides the basic property of this decomposition. **Lemma 67**. *There is $c_{0}<\infty$ such that $$\|\boldsymbol{v}^{s}(\bm{y})\|\le c_{0}\|\bm{y}\|\ \text{for all}\ \bm{y}\in\mathbb{R}^{d}\;.$$* The proof is based on the following elementary result. **Lemma 68**. *Let $V$ and $W$ be subspaces of $\mathbb{R}^{d}$ such that $V\cap W=\{\bm{0}\}$.
Then, there exists $\zeta=\zeta(V,\,W)>0$ such that $$\sup_{\bm{v}\in V\setminus\{\boldsymbol{0}\},\,\bm{w}\in W\setminus\{\boldsymbol{0}\}}\frac{|\langle\bm{v},\,\bm{w}\rangle|}{\|\bm{v}\|\|\bm{w}\|}=1-\zeta$$ where the supremum is taken over all non-zero vectors.* *Proof.* Let us define $F:(V\setminus\{\boldsymbol{0}\}) \times(W\setminus\{\boldsymbol{0}\})\rightarrow\mathbb{R}$ as $$F(\boldsymbol{v},\,\boldsymbol{w})=\frac{|\langle\bm{v},\,\bm{w}\rangle|}{\|\bm{v}\|\|\bm{w}\|}\;.$$ Since $F(c\boldsymbol{v},\,c'\boldsymbol{w})=F(\boldsymbol{v},\,\boldsymbol{w})$ for all $c,\,c'\neq0$, we have $$\sup_{\bm{v}\in V\setminus\{\boldsymbol{0}\},\,\bm{w}\in W\setminus\{\boldsymbol{0}\}}F(\boldsymbol{v},\,\boldsymbol{w})=\sup_{\bm{v}\in V,\,\bm{w}\in W:\,\Vert\boldsymbol{v}\Vert=\Vert\boldsymbol{w}\Vert=1}F(\boldsymbol{v},\,\boldsymbol{w})\;.$$ Since the set $S_{0}=\{(\boldsymbol{v},\,\boldsymbol{w}):\bm{v}\in V,\,\bm{w}\in W:\,\Vert\boldsymbol{v}\Vert=\Vert\boldsymbol{w}\Vert=1\}$ is compact and $F(\cdot,\,\cdot)$ is continuous, the function $F$ achieves its maximum at some $(\boldsymbol{v}^{*},\,\boldsymbol{w}^{*})\in S_{0}$. Then $$\sup_{\bm{v}\in V\setminus\{\boldsymbol{0}\},\,\bm{w}\in W\setminus\{\boldsymbol{0}\}}F(\boldsymbol{v},\,\boldsymbol{w})=F(\boldsymbol{v}^{*},\,\boldsymbol{w}^{*})\;.$$ Note that $F(\boldsymbol{v}^{*},\,\boldsymbol{w}^{*})<1$ by the Cauchy-Schwarz inequality (the equality cannot hold because of the assumption $V\cap W=\{\bm{0}\}$). This completes the proof.
◻ *Proof of Lemma [ 67](#l: v_s<x){reference-type="ref" reference="l: v_s<x"}.* Since ${\mathcal M}_{L,u}\cap{\mathcal M}_{L,s}=\{\boldsymbol{0}\}$, by Lemma [ 68](#lem:Cauchy){reference-type="ref" reference="lem:Cauchy"}, there exists a constant $c>1$ such that, for all $\boldsymbol{y}\in\mathbb{R}^{d}$, $$|\langle\bm{v}^{u}(\boldsymbol{y}),\,\boldsymbol{v}^{s}(\boldsymbol{y})\rangle| \;\le\; \sqrt{(c-1)/c}\|\bm{v}^{u}(\boldsymbol{y})\| \|\boldsymbol{v}^{s}(\boldsymbol{y})\|\\ \;\le\; \frac{1}{2}\|\bm{v}^{u}(\boldsymbol{y})\|^{2} +\frac{c-1}{2c}\|\boldsymbol{v}^{s}(\boldsymbol{y})\|^{2}\;,$$ where we applied Young's inequality in the last step. Therefore, $$-2c\langle\bm{v}^{u}(\boldsymbol{y}),\, \boldsymbol{v}^{s}(\boldsymbol{y})\rangle \le c\|\bm{v}^{u}(\boldsymbol{y})\|^{2}+(c-1)\| \boldsymbol{v}^{s}(\boldsymbol{y})\|^{2}\;.$$ Reorganizing, we obtain $$\|\boldsymbol{v}^{s}(\boldsymbol{y})\|^{2}\le c\,(\|\bm{v}^{u}(\boldsymbol{y})\|^{2} +2\langle\bm{v}^{u}(\boldsymbol{y}),\, \boldsymbol{v}^{s}(\boldsymbol{y})\rangle +\|\boldsymbol{v}^{s}(\boldsymbol{y})\|^{2})= c\, \Vert\boldsymbol{y}\Vert^{2}\ .$$ This completes the proof. ◻ *Proof of Lemma [ 20](#lem_esclin){reference-type="ref" reference="lem_esclin"}.* Recall the constants $C_{0}$ and $c_{0}$ from Lemmata [ 66](#lem_bdbarx){reference-type="ref" reference="lem_bdbarx"} and [ 67](#l: v_s<x){reference-type="ref" reference="l: v_s<x"}, respectively, and define $r>0$ as $$r=\frac{a}{3C_{0}c_{0}}\;\cdot \label{eq:defr}$$ Suppose that $\boldsymbol{y}\in\mathcal{B}(\boldsymbol{0},\,r)$.
As in [\[eq:decus\]](#eq:decus){reference-type="eqref" reference="eq:decus"}, decompose ${\boldsymbol y}$ into $${\boldsymbol y}=\boldsymbol{v}^{u}({\boldsymbol y})+\bm{v}^{s}({\boldsymbol y})\ ,$$ so that $$\upsilon_{L,{\boldsymbol y}}(t) = \upsilon_{L,{\boldsymbol v}^u({\boldsymbol y})}(t) +\upsilon_{L,{\boldsymbol v}^s ({\boldsymbol y})}(t) \;.$$ Note that $\upsilon_{L,{\boldsymbol v}^u({\boldsymbol y})}(t) \in{\mathcal M}_{L,u}$ and $\upsilon_{L,{\boldsymbol v}^s ({\boldsymbol y})}(t) \in{\mathcal M}_{L,s}$ for all $t\ge0$ since ${\mathcal M}_{L,u}$ and ${\mathcal M}_{L,s}$ are invariant under the dynamical system [\[34\]](#34){reference-type="eqref" reference="34"}. Recall the definition [\[eq:overlinet\]](#eq:overlinet){reference-type="eqref" reference="eq:overlinet"} of $t_L(\cdot)$ and write $$\bm{w}^{u}(\boldsymbol{y}) \;:=\; \upsilon_{L,{\boldsymbol v}^u({\boldsymbol y})}(t_L({\boldsymbol y})) \in {\mathcal M}_{L,u} \;\;\;\text{and\;\;\;} \bm{w}^{s}(\boldsymbol{y}) \;:=\; \upsilon_{L,{\boldsymbol v}^s({\boldsymbol y})}(t_L({\boldsymbol y})) \in {\mathcal M}_{L,s} \;,$$ so that $${\boldsymbol e}_L ({\boldsymbol y})=\bm{w}^{u}(\bm{y})+\bm{w}^{s}(\bm{y})\ .\label{eq:decpb}$$ By [\[eq:overlinet\]](#eq:overlinet){reference-type="eqref" reference="eq:overlinet"}, $\Vert {\boldsymbol e}_L({\boldsymbol y}) \Vert =r_{1}$.
Moreover, by Lemmata [ 66](#lem_bdbarx){reference-type="ref" reference="lem_bdbarx"}, [ 67](#l: v_s<x){reference-type="ref" reference="l: v_s<x"}, and by the definition [\[eq:defr\]](#eq:defr){reference-type="eqref" reference="eq:defr"} of $r$, $$\|\bm{w}^{s}(\bm{y})\|\le C_{0}\Vert\boldsymbol{v}^{s}(\boldsymbol{y})\Vert\le C_{0}c_{0}\Vert\boldsymbol{y}\Vert\le C_{0}c_{0}r=\frac{a}{3} \;\cdot \label{eq:wr1}$$ Therefore, by the triangle inequality, $$\begin{aligned} \Big\Vert \, {\boldsymbol e}_L({\boldsymbol y})-\frac{r_{1}}{\|\bm{w}^{u}(\bm{y})\|} \bm{w}^{u}(\bm{y}) \,\Big\Vert & \;\le\; \Big\Vert \, \bm{w}^{u}(\bm{y})-\frac{r_{1}}{\|\bm{w}^{u}(\bm{y})\|} \bm{w}^{u}(\bm{y})\,\Big \Vert \;+\; \|\bm{w}^{s}(\bm{y})\|\\ & =\; \Big|\, 1-\frac{r_{1}}{\|\bm{w}^{u}(\bm{y})\|}\, \Big| \, \left\Vert \bm{w}^{u}(\bm{y})\right\Vert \;+\; \|\bm{w}^{s}(\bm{y})\|\;.\end{aligned}$$ Since $\Vert{\boldsymbol e}_L({\boldsymbol y}) \Vert =r_{1}$, this expression is equal to $$\big|\, \|\bm{w}^{u}(\bm{y})\|-r_{1}\,\big| \;+\; \|\bm{w}^{s}(\bm{y})\| \;=\; \big|\, \|\bm{w}^{u}(\bm{y})\| \,-\, \Vert{\boldsymbol e}_L({\boldsymbol y})\Vert\,\big| \;+\; \|\bm{w}^{s}(\bm{y})\|\;.$$ By [\[eq:decpb\]](#eq:decpb){reference-type="eqref" reference="eq:decpb"} and [\[eq:wr1\]](#eq:wr1){reference-type="eqref" reference="eq:wr1"}, this expression is bounded by $2\, \|\bm{w}^{s}(\bm{y})\|< (2/3) \, a$. This completes the proof of the lemma since $$\frac{r_{1}}{\|\bm{w}^{u}(\bm{y})\|} \, \bm{w}^{u}(\bm{y})\in{\mathcal M}_{L,u}\cap\partial \mathcal{B}(\boldsymbol{0},\,r_{1})\ .$$ ◻ ### Acknowledgement {#acknowledgement .unnumbered} C. L. has been partially supported by FAPERJ CNE E-26/201.117/2021, and by CNPq Bolsa de Produtividade em Pesquisa PQ 305779/2022-2. JL was supported by the NRF grant funded by the Korea government (No. 2022R1F1A106366811). IS was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 
2022R1F1A106366811, 2022R1A5A600084012, 2023R1A2C100517311) and Samsung Science and Technology Foundation (Project Number SSTF-BA1901-03). In addition, IS and JL are also supported by Seoul National University Research Grant in 2022.
--- abstract: | In this paper we demonstrate the first example of a finite translation plane which does not contain a translation hyperoval, disproving a conjecture of Cherowitzo. The counterexample is a semifield plane, specifically a Generalised Twisted Field plane, of order $64$. We also relate this non-existence to the covering radius of two associated rank-metric codes, and the non-existence of scattered subspaces of maximum dimension with respect to the associated spread. author: - Kevin Allen and John Sheekey bibliography: - translationrefs.bib title: On Translation Hyperovals in Semifield Planes --- # Introduction Hyperovals are extremal combinatorial objects in projective planes; namely a hyperoval is a set of $q+2$ points in which no three lie on a common line. We refer to Section [2](#sec:def){reference-type="ref" reference="sec:def"} for formal definitions. Hyperovals have attracted much attention over the years, particularly in the case of Desarguesian planes. Papers regarding existence, construction, and classification abound. While full classifications appear out of reach, the addition of extra assumptions on the symmetry of the plane, the hyperoval, or both, leads to interesting questions and more potential for classification. In particular, the most well-studied non-Desarguesian planes are the *translation planes*, while the best understood hyperovals in Desarguesian planes are the *translation hyperovals*. Therefore it is natural to consider the question of existence and classification for translation hyperovals in translation planes. It is known that translation hyperovals exist in certain translation planes; for example Desarguesian planes [@Payne1971], André planes [@Denniston], Hall planes [@Korchmaros], and Knuth's binary semifield planes [@DuTrZh]. Cherowitzo [@Cherowitzo] computationally classified all hyperovals (translation and otherwise) in each of the nine translation planes of order $16$.
In particular he showed that every translation plane of order $16$ contains a translation hyperoval, which led him to make the following statement. **Conjecture 1**. *These results\... lead one to the natural conjecture that translation hyperovals exist in all translation planes of even order.* In this paper we will disprove this conjecture, by exhibiting a projective plane of order $64$ containing no translation hyperoval. Specifically, we show that the *twisted field plane* of order $64$, which is a *semifield plane*, contains no translation hyperovals. We will also relate this problem to the (non-)existence of so-called *scattered subspaces* with respect to a spread associated to the translation plane, as well as the covering radius of the rank-metric code (spread set) associated to the translation plane. # Definitions and Background {#sec:def} In this section we collect the necessary definitions and background for this article. We refer to [@Dembowski; @HughesPiper; @Cherowitzo; @Handbook; @LavrauwScat] for further details on these largely well-known topics. ## Projective Planes A *projective plane* $\pi$ is an incidence structure $(\mathcal P,\mathcal L,\mathcal I)$ consisting of a set of *points* $\mathcal P$, a set of *lines* $\mathcal L$, and an incidence relation $\mathcal I\subset \mathcal P\times \mathcal L$ such that every two distinct points are both incident with precisely one common line, and every pair of distinct lines is incident with precisely one common point. The *dual* of $\pi$, denoted $\pi^d$, is the incidence structure $(\mathcal L,\mathcal P,\mathcal I^d)$, where $\mathcal I^d$ is the reverse relation of $\mathcal I$. Since the terms points and lines are interchangeable in the definition of a projective plane, $\pi^d$ is again a projective plane.
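Both defining axioms are finite checks and can be verified mechanically on the smallest example. The following sketch (an illustration of the definition, not part of the paper's argument; the bitmask encoding of points is our own choice) builds the Fano plane $\mathrm{PG}(2,\mathbb{F}_2)$ from the nonzero vectors of $V(3,\mathbb{F}_2)$ and checks both axioms directly.

```python
from itertools import combinations

# Points of PG(2,2): the seven nonzero vectors of F_2^3,
# encoded as integers 1..7 (bit i = i-th coordinate).
points = list(range(1, 8))

# Lines: 2-dimensional subspaces. Over F_2 any two distinct nonzero
# vectors a, b are independent, and the subspace they span has
# nonzero vectors {a, b, a ^ b} (XOR is vector addition over F_2).
lines = sorted({frozenset({a, b, a ^ b}) for a, b in combinations(points, 2)},
               key=sorted)

assert len(points) == 7 and len(lines) == 7  # q^2 + q + 1 with q = 2

# Axiom 1: any two distinct points lie on exactly one common line.
for p, q in combinations(points, 2):
    assert sum(1 for l in lines if p in l and q in l) == 1

# Axiom 2: any two distinct lines meet in exactly one common point.
for l1, l2 in combinations(lines, 2):
    assert len(l1 & l2) == 1

print("PG(2,2) satisfies both projective plane axioms")
```

The same brute-force template works for any small order, though the number of points grows as $q^2+q+1$.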
It is well known that for any finite projective plane there exists a positive integer $q$, called the *order* of $\pi$, such that the plane contains $q^2+q+1$ points and $q^2+q+1$ lines, every line is incident with $q+1$ points, and every point is incident with $q+1$ lines. If $q$ is a prime power, there exists a projective plane of order $q$; the converse is a famous open problem. The classical examples of projective planes are the *Desarguesian planes* $\mathrm{PG}(2,F)$ where $F$ is some (skew) field, in which points are one-dimensional vector subspaces of $V(3,F)$, lines are two-dimensional vector subspaces of $V(3,F)$, and incidence is given naturally by inclusion. This plane can also be realised as the *completion* of the *affine plane* $\mathrm{AG}(2,F)$, in which points are vectors in $V(2,F)$, and lines are translations of one-dimensional subspaces; that is, cosets $u+\langle v\rangle$ for some $u,v\in V(2,F),v\ne 0$. Then the addition of a *line at infinity* consisting of the *directions* $\langle v\rangle$ returns a projective plane isomorphic to the previous description. ## Translation Planes {#ssec:tplane} Translation planes are projective planes with extra symmetry. They arise from affine planes sharing some natural properties of $\mathrm{AG}(2,F)$. In order to introduce them formally, we need to recall some technical terminology. An *isomorphism* from a plane $\pi_1 = (\mathcal P_1,\mathcal L_1,\mathcal I_1)$ to another $\pi_2 = (\mathcal P_2,\mathcal L_2,\mathcal I_2)$ is a bijection $\phi$ from $\mathcal P_1\cup\mathcal L_1$ to $\mathcal P_2\cup\mathcal L_2$ which preserves type and incidence; that is, $\phi(\mathcal P_1)=\mathcal P_2$, $\phi(\mathcal L_1)=\mathcal L_2$, and $p\in \ell\Leftrightarrow \phi(p)\in \phi(\ell)$. A *collineation* of a plane $\pi$ is an isomorphism from $\pi$ to itself; a *correlation* is an isomorphism from $\pi$ to its dual $\pi^d$. 
An *elation* of $\pi$ with centre $p$ and axis $\ell\ni p$ is a collineation of $\pi$ fixing every point on $\ell$ and every line containing $p$. A projective plane $\pi$ is said to be a *translation plane* if there exists a line $\ell$ such that the group of elations with axis $\ell$ acts transitively on the points of $\pi\backslash \ell$. If the plane is not Desarguesian, then the line $\ell$ is unique, and is called the *translation line* of $\pi$. We will usually denote the translation line by $\ell_\infty$. Any collineation of $\pi$ must then fix $\ell_\infty$. The name *translation plane* can be more easily understood in the affine setting; in the affine plane $\mathrm{AG}(2,F)$, a translation $\tau_u:v\mapsto v+u$ clearly maps lines to lines, preserves direction, and fixes all lines with direction $\langle u\rangle$. The natural extension of this map to $\mathrm{PG}(2,F)$ then satisfies the definition of an elation given above, with $\ell$ the line at infinity, and $p$ the point at infinity corresponding to $\langle u\rangle$. Furthermore, the group of translations clearly acts transitively on the points of $\mathrm{AG}(2,F)$. Thus the concept of an elation and a translation plane is a natural generalisation of this example. The dual of a translation plane is not necessarily a translation plane. If both a plane $\pi$ and its dual $\pi^d$ are translation planes, then we call it a *semifield plane*. If a semifield plane $\pi$ is not Desarguesian, then there is a unique point $p_\infty\in \pi$ such that the dual of $p_\infty$ is the translation line of $\pi^d$. We call this point the *shears point*. Any collineation of $\pi$ must then fix $p_\infty$. It is known that we must have $p_\infty\in \ell_\infty$, that is, the shears point must lie on the translation line.
Furthermore, the collineation group of a semifield plane has precisely three orbits on points: the shears point, the points of the translation line other than the shears point, and the remaining points of the plane. ## Hyperovals A *hyperoval* in a finite projective plane $\pi$ (of even order $q$) is a set $\mathcal H$ of $q+2$ points such that no three points of $\mathcal H$ are incident with a common line. Hyperovals can only exist in planes of even order; in planes of odd order, the maximum size of a set with this property is $q+1$, and a famous result of Segre tells us that all sets attaining this bound are equivalent to the set of points of a *conic*. The study of hyperovals in planes $\mathrm{PG}(2,{\mathbb{F}}_q)$, $q$ even, has a long history, with connections to important objects in coding theory, namely *MDS codes* with certain parameters and properties. We refer to [@Vandendriessche] for an up-to-date list of the known constructions for Desarguesian planes, as well as the largest plane with a complete computer classification. Hyperovals in general planes have also received much attention. It was conjectured that every projective plane of even order contained a hyperoval; this was disproved by the computer classification of [@Royle], where a projective plane of order $16$ containing no hyperovals was exhibited. We are now ready to formally introduce translation hyperovals. **Definition 1**. A *translation hyperoval* is a hyperoval $\mathcal H$ such that there exists a line $\ell$ for which the group of elations with axis $\ell$ acts transitively on $\mathcal H\backslash \ell$. Payne [@Payne1971] showed that every translation hyperoval in $\mathrm{PG}(2,2^n)$ is equivalent to one defined by the set of vectors $\{(0,1,0),(0,0,1),(1,x,x^{2^i}):x\in \mathbb{F}_{2^n}\}$ for some positive integer $i$ relatively prime to $n$. These translation hyperovals were first constructed by Segre [@Segre1957]. It is known (see e.g.
[@Cherowitzo]) that for a translation hyperoval in a (non-Desarguesian) translation plane, the line $\ell$ must be the translation line $\ell_\infty$, and $|\mathcal H\cap \ell_\infty|=2$. Since a semifield plane possesses a distinguished point at infinity, namely the shears point, it makes sense to consider whether or not a translation hyperoval contains the shears point. **Definition 2**. A translation hyperoval in a semifield plane is said to be of *shears type* if it contains the shears point, and of *non-shears type* if it does not. Translation hyperovals in translation planes were studied by Cherowitzo in [@Cherowitzo] where he computationally classified all hyperovals (translation and otherwise) in each of the nine translation planes of order $16$. In particular he showed that every translation plane of order $16$ contains a translation hyperoval, which led him to Conjecture [Conjecture 1](#con:cher){reference-type="ref" reference="con:cher"}. # Quasifields, Spreads, Spread Sets and Translation Planes In this section we outline various well-known correspondences between quasifields, spreads, spread sets and translation planes. We refer to [@Handbook] for details, proofs, and further references. ## Quasifields and Semifields A *quasifield* is an algebraic structure similar to a finite field, without the requirement that multiplication be associative, and assuming only one distributive law. In the finite case, a quasifield must have order $q^n$ for some prime power $q$ and some positive integer $n$. In this case we may take the additive structure to be $(\mathbb{F}_{q^n},+)$, with multiplication $\circ:\mathbb{F}_{q^n}\times \mathbb{F}_{q^n}\rightarrow \mathbb{F}_{q^n}$ satisfying - $(x+x')\circ y=x\circ y+x'\circ y$ for all $x,x',y\in \mathbb{F}_{q^n}$; - For every $a,b\in \mathbb{F}_{q^n}$, $a\ne 0$, there exist unique $x,y\in \mathbb{F}_{q^n}$ such that $x\circ a=a\circ y=b$.
A *semifield* is a quasifield in which the second distributive law also holds: - $x\circ (y+y')=x\circ y+x\circ y'$ for all $x,y,y'\in \mathbb{F}_{q^n}$. Note that in the literature quasifields and semifields are assumed to contain a multiplicative identity, with the terms *prequasifield* and *presemifield* used to describe the case where an identity is not assumed. Since this distinction does not have any relevance for this paper, we will abuse terminology and drop the prefix. There are many known constructions for quasifields and semifields. We refer to [@Handbook; @LaPo] for examples of constructions, and [@Rua2009] for the classification of semifields of order $64$. One of the most well-studied families, and the example most relevant to this paper, is the family of *generalised twisted fields* of Albert [@Albert1961]. These are semifields with multiplication $$x\circ y := xy-jx^{q^i}y^{q^k},$$ where $j$ is a fixed element of $\mathbb{F}_{q^n}$ satisfying $N_{\mathbb{F}_{q^n}:\mathbb{F}_{q^{(n,i,k)}}}(j)\ne 1$, with $(n,i,k)$ denoting the greatest common divisor of these three integers. For example, for $q=2,n=6,i=2,k=4$, the multiplication $$x\circ y := xy-jx^{2^2}y^{2^4},$$ defines a semifield if and only if $N_{\mathbb{F}_{2^6}:\mathbb{F}_{2^2}}(j)\ne 1$. Such elements certainly exist, for example any $j$ satisfying $j^6+j+1=0$. Two semifields with respective multiplications $\circ$ and $\star$ are *isotopic* if there exist invertible additive maps $A,B,C$ from $\mathbb{F}_{q^n}$ to itself such that $A(x\circ y)=B(x)\star C(y)$ for all $x,y\in \mathbb{F}_{q^n}$. Every presemifield is isotopic to a semifield via *Kaplansky's trick* [@LaPo]. ## Spreads and Spread Sets A *spread* (or *$n$-spread*) in $V=V(2n,F)$ is a set $\mathcal D$ of $n$-dimensional vector subspaces of $V$ such that every nonzero element of $V$ is contained in precisely one element of $\mathcal D$.
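The twisted field example above ($q=2$, $n=6$, $i=2$, $k=4$) is small enough to verify exhaustively. The sketch below is our own illustration, assuming the concrete representation $\mathbb{F}_{64}=\mathbb{F}_2[x]/(x^6+x+1)$, so that the class $j$ of $x$ satisfies $j^6+j+1=0$; the helper names are ours. It checks the norm condition $j^{21}\ne 1$ (the norm to $\mathbb{F}_{2^{(6,2,4)}}=\mathbb{F}_4$ is the power map $j\mapsto j^{1+4+16}$) and that $x\circ y = xy + jx^{4}y^{16}$ has no zero divisors; in characteristic $2$ the minus sign in the definition is a plus.

```python
MOD = 0b1000011  # x^6 + x + 1, irreducible (in fact primitive) over F_2

def gf_mul(a, b):
    """Multiply in F_64 = F_2[x]/(x^6+x+1); elements are 6-bit ints."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b1000000:  # degree reached 6: reduce modulo MOD
            a ^= MOD
    return r

def gf_pow(a, e):
    r = 1
    while e:
        if e & 1:
            r = gf_mul(r, a)
        a = gf_mul(a, a)
        e >>= 1
    return r

j = 0b10                          # the class of x, so j^6 + j + 1 = 0
assert gf_pow(j, 6) ^ j ^ 1 == 0

def twisted(x, y):
    # x o y = xy + j * x^{2^2} * y^{2^4}  (char 2, so "-" is "+")
    return gf_mul(x, y) ^ gf_mul(j, gf_mul(gf_pow(x, 4), gf_pow(y, 16)))

# Norm condition: N(j) = j^{(64-1)/(4-1)} = j^21 must not equal 1.
assert gf_pow(j, 21) != 1

# No zero divisors: equivalent (by additivity in each argument) to each
# right-multiplication map being invertible, i.e. to a spread set.
assert all(twisted(x, y) != 0 for x in range(1, 64) for y in range(1, 64))
print("x o y = xy + j x^4 y^16 over F_64 has no zero divisors")
```

Since the multiplication is additive in each argument, the exhaustive check of $63\times 63$ products is exactly the invertibility of the maps $R_y-R_{y'}$ appearing in the spread set construction of the next subsection.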
Let us identify the elements of $V(2n,q)$ with elements of $V(2,q^n)$, and let $S_\infty= \{(0,x):x\in \mathbb{F}_{q^n}\}$. The $n$-dimensional spaces meeting $S_\infty$ trivially are precisely those of the form $$S_f := \{(x,f(x)):x\in \mathbb{F}_{q^n}\},$$ where $f(x)\in \mathbb{F}_{q^n}[x]$ is a *linearised polynomial*, i.e. a polynomial of the form $f(x)=\sum_{i=0}^{n-1}f_ix^{q^i}$. These are the polynomials which define ${\mathbb{F}}_q$-linear maps from $\mathbb{F}_{q^n}$ into itself. We denote the set of linearised polynomials of degree at most $q^{n-1}$ as $\mathcal L$. To a spread $\mathcal D$ containing $S_\infty$ we can associate a unique set of linear maps (or linearised polynomials) $C(\mathcal D)$ by $$C(\mathcal D) = \{f:S_f\in \mathcal D\}.$$ These satisfy the property that $|C(\mathcal D)|=q^n$ and for any $f,g\in C(\mathcal D)$, $f\ne g$, we have $\mathrm{rank}(f-g)=n$, where $\mathrm{rank}$ denotes the usual linear algebra rank of an ${\mathbb{F}}_q$-linear map. This is called the *spread set* associated to $\mathcal D$. The definition of a spread set coincides with that of a (not necessarily linear) *maximum rank distance (MRD) code*. Conversely, any set $C$ satisfying this property defines a spread by $$\mathcal D(C) := \{S_\infty\}\cup \{S_f:f\in C\}.$$ For a linear map $f$, we can define $$\phi_f:(x,y)\mapsto (x,f(x)+y).$$ If $C$ is additively closed, then each of the maps in the set $\phi_C := \{\phi_f:f\in C\}$ fixes the spread $\mathcal D(C)$. Moreover, each $\phi_f$ fixes $S_\infty$ pointwise, and $\phi_C$ is an abelian group acting transitively on $\mathcal D(C)\backslash \{S_\infty\}$. The spread is then called a *semifield spread*, and $C$ a *semifield spread set*, for reasons that will shortly become apparent. We refer to the distinguished element $S_\infty$ as the *shears element* of $\mathcal D$. Given a quasifield $Q$, we can define a spread set and a spread as follows. Define $$R_y(x):= x\circ y.$$ Then each $R_y$ is an additive map.
Moreover $R_y-R_{y'}$ is invertible for all $y\ne y'$, since otherwise there would exist some nonzero $a$ such that $a\circ y=a\circ y'$, contradicting one of the axioms of a quasifield. Thus $$C(Q) := \{R_y:y\in \mathbb{F}_{q^n}\}$$ defines a spread set, and $\mathcal D(Q) := \mathcal D(C(Q))$ is a spread. If $Q$ is a semifield, then $R_{y+y'}=R_y+R_{y'}$, and so $C(Q)$ is additively closed. Conversely, from a spread or spread set we can define a quasifield; note however that the quasifield is not uniquely determined by the spread or spread set. However it is uniquely determined up to isotopy. ## Translation Planes from Spreads {#ssec:tplanespread} From a spread $\mathcal D$ we can define an affine plane as follows: - Points: elements of $V(2n,q)$; - Lines: translations of elements of $\mathcal D$, i.e. $u+S$ for $u\in V(2n,q),S\in \mathcal D$. We can complete this to a projective plane by adding a line at infinity $\ell_\infty$, whose points are the elements of $\mathcal D$; to a line $u+S$ we add the point at infinity $S$. We denote this plane as $\pi(\mathcal D)$. It is straightforward to check that this is indeed a translation plane. Moreover, André showed that every translation plane arises from a spread [@Andre]. By the discussion in the previous subsections, we can define a translation plane from a quasifield, and from a spread set. We may denote these naturally as $\pi(Q)$, $\pi(C)$ respectively. The dual of a translation plane defined by a semifield is again a translation plane, and so the plane is a semifield plane. The shears point corresponds to the shears element of the spread. As mentioned in Section [2.2](#ssec:tplane){reference-type="ref" reference="ssec:tplane"}, the collineation group of a semifield plane has precisely three orbits on points: the shears point, the points of the translation line other than the shears point, and the remaining points of the plane. 
The transitivity on the points of the translation line other than the shears point is demonstrated by the maps $\phi_f$ defined in the previous section. It is well known that two semifields define isomorphic planes if and only if they are isotopic. ## Scattered Subspaces It is known (see e.g. [@LavrauwScat; @DHVdV2020]) that translation hyperovals correspond to *scattered subspaces* with respect to spreads. We recall the relevant notions and demonstrate this fact here. A subspace $U$ is said to be *scattered* with respect to a spread $\mathcal D$ if $$\mathrm{dim}(U\cap S)\leq 1\quad\mathrm{for~all }~S\in \mathcal D.$$ The study of scattered subspaces with respect to a spread originates from [@BlLa2000]; we refer to [@LavrauwScat] for a recent survey of the various applications of scattered subspaces, including that of translation hyperovals. We note that the interest in this notion is not restricted to spreads in $V(2n,q)$; the more general case of spreads of $n$-dimensional subspaces of $V(kn,q)$ is also of interest. However, here we deal only with the case $k=2$. In this case it is straightforward to obtain an upper bound on the dimension of a scattered subspace. **Proposition 1**. *[@BlLa2000 Lemma 3.1 and Theorem 4.1] Let $\mathcal D$ be a spread in $V(2n,q)$. Then the dimension of a scattered subspace is at most $n$. Moreover, if $\mathcal D$ is Desarguesian, then there exists a scattered subspace of dimension $n$.* Suppose $U$ is a scattered subspace of dimension $n$ with respect to a spread in $V(2n,2)$. Then $U$ must intersect $2^n-1$ distinct elements of $\mathcal D$ nontrivially, and hence since $|\mathcal D|=2^n+1$, there exist precisely two elements of $\mathcal D$ which intersect $U$ trivially, say $S_1$ and $S_2$. We claim that the set of points of the translation plane $\pi(\mathcal D)$ defined by the affine points $U$ and the two points at infinity $S_1,S_2$ forms a translation hyperoval, which we will denote by $\mathcal H_U$.
Consider a line $v+S$ of $\pi(\mathcal D)$ with $S\notin\{S_1,S_2\}$. Then the points of intersection of $v+S$ and $\mathcal H_U$ are affine, and so the number of them is equal to $|(v+S)\cap U|$, which is either $0$ or $2$, since $|U\cap S|=2$. Next consider a line $v+S_i$, $i\in \{1,2\}$. Then this line meets $\mathcal H_U$ in the point at infinity corresponding to $S_i$, as well as a unique affine point $v+s_i=u_i$, where $v=s_i+u_i$ for unique $s_i\in S_i$ and $u_i\in U$. The uniqueness here follows from the fact that $V(2n,2) = S_i\oplus U$. Finally the line at infinity clearly meets $\mathcal H_U$ in the two points $\{S_1,S_2\}$, proving that $\mathcal H_U$ is a hyperoval. Then the group of translations $\{\tau_u:u\in U\}$ clearly acts transitively on $U=\mathcal H_U\backslash\ell_\infty$, showing that $\mathcal H_U$ is indeed a translation hyperoval. Conversely, it has been shown that (up to equivalence) every translation hyperoval arises in this way from a scattered subspace. **Proposition 2**. *[@LavrauwScat Theorem 1.7] Let $\pi$ be a translation plane of order $2^n$, and suppose $\mathcal D$ is a spread in $V(2n,2)$ such that $\pi$ is isomorphic to $\pi(\mathcal D)$. Then there exists a translation hyperoval in $\pi$ if and only if there exists an $n$-dimensional scattered subspace with respect to $\mathcal D$.* Hence the existence of translation hyperovals corresponds to the existence of scattered subspaces. Recently in [@GrRaShZu] the following lower bound on the largest dimension of a scattered subspace was shown. **Proposition 3**. *[@GrRaShZu Proposition 4.15] Let $\mathcal D$ be a spread in $V(2n,q)$, $q\geq 8$. Then there exists a scattered subspace of dimension at least $n/2 - 1$.* Clearly this does not guarantee the existence of translation hyperovals, though it does indicate the dimension at which scattered subspaces become hard to find, and impossible to guarantee by combinatorial methods. 
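As a toy illustration of this correspondence (our addition, not from the paper): for $n=2$ the graph of the Frobenius map $x\mapsto x^2$ is a $2$-dimensional scattered subspace with respect to the Desarguesian spread in $V(4,2)$, recovering the translation hyperoval in $\mathrm{PG}(2,4)$. A minimal brute-force sketch, assuming $\mathbb{F}_4$ is realised as $\mathbb{F}_2[x]/(x^2+x+1)$:

```python
# F4 = GF(2)[x]/(x^2+x+1); elements encoded as ints 0..3 (bit i = coeff of x^i).
def m4(a, b):
    """Multiply two elements of GF(4)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b100:          # reduce by x^2 + x + 1
            a ^= 0b111
    return r

F4 = range(4)

# Desarguesian spread of V(4,2) = F4 x F4: S_inf plus the graphs S_y = {(x, x*y)}.
S_inf = frozenset((0, v) for v in F4)
spread = [S_inf] + [frozenset((x, m4(x, y)) for x in F4) for y in F4]

# U = graph of Frobenius x -> x^2, a 2-dimensional F2-subspace.
U = frozenset((x, m4(x, x)) for x in F4)

# U is scattered: every spread element meets U in an F2-subspace of dimension
# <= 1, i.e. in 1 or 2 vectors (counting the zero vector).
assert all(len(U & S) in (1, 2) for S in spread)

# Exactly two spread elements meet U trivially (here S_0 and S_inf); these are
# the two points at infinity of the translation hyperoval H_U.
trivial = [S for S in spread if len(U & S) == 1]
print(len(trivial))  # -> 2
```

Here $2^n-1=3$ of the $2^n+1=5$ spread elements meet $U$ nontrivially, exactly as in the counting argument above.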
Various generalisations of the notion of scattered subspaces have been put forward in recent years, such as *evasive subspaces* [@CsEvasive]. One which is of particular relevance to this paper is that of $(\mathcal D,h)$-scattered subspaces: a subspace $U$ is said to be *$(\mathcal D,h)$-scattered* if $$\mathrm{dim}(U\cap S)\leq h\quad\mathrm{for~all }~S\in \mathcal D.$$ Clearly the property of a subspace being $(\mathcal D,1)$-scattered coincides with it being scattered with respect to $\mathcal D$. The following specialisation of [@GrRaShZu Theorem 6.7] guarantees the existence of $(\mathcal D,h)$-scattered subspaces in certain circumstances. **Proposition 4**. *Let $\mathcal D$ be a spread in $V(2n,q)$. Then there exists a $(\mathcal D,h)$-scattered subspace of dimension $n$ for any $h\geq \lceil \sqrt{n+1}-1\rceil$ if $q\geq 4$, and for any $h\geq \lceil \sqrt{n+2}-1\rceil$ if $q< 4$.* For the case $n=6$, which is the case for spreads arising from semifields of order $q^6$, we obtain that there exists a $(\mathcal D,2)$-scattered subspace of dimension $6$ with respect to any $6$-spread $\mathcal D$ in $V(12,q)$. In particular, we note that the existence of a $(\mathcal D,1)$-scattered subspace of dimension $n$, and hence a translation hyperoval, is not guaranteed. Indeed, we will demonstrate a counterexample. ## Covering Radius Let us assume that $\mathcal D=\mathcal D(\mathbb{S})$, where $\mathbb{S}$ is a semifield. Suppose $U$ is an $n$-dimensional scattered subspace with respect to $\mathcal D$. Then without loss of generality we may assume that one of the following two cases occurs. - (Shears Type)       $U\cap S_\infty = U\cap S_0=0$; - (Non-Shears Type) $U\cap S_\infty \ne 0, U\cap S_0=U\cap S_y=0$ for some $y\ne 0$. These two cases correspond to whether or not the translation hyperoval $\mathcal H_U$ contains the shears point $S_\infty$. 
The *covering radius* of a set or subspace $C$ of the space of linear maps $M=\mathrm{End}_{{\mathbb{F}}_q}(\mathbb{F}_{q^n})$ is a notion arising naturally from coding theory in the rank metric; we refer to [@ByrneRav] for background. We denote it by $\rho(C)$ and define it as $$\rho(C)= \min\{i:\forall g\in M,\exists f\in C \mathrm{ ~s.t.~}\mathrm{rank}(f-g) \leq i\}.$$ For a spread set $C\subset M$, which has cardinality $q^n$ and is such that every nonzero element of $C$ is invertible, we define $C^{-1}= \{f^{-1}:f\in C,f\ne 0\}\cup\{0\}$. If $C$ is a spread set, then $\rho(C)\leq n-1$, since otherwise there would exist $g\notin C$ such that $\mathrm{rank}(g-f)=n$ for all $f\in C$. But for any nonzero $a\in \mathbb{F}_{q^n}$ there exists a unique $y\in \mathbb{F}_{q^n}$ such that $a\circ y = g(a)$, and so there exists $R_y\in C$ such that $\mathrm{rank}(g-R_y)<n$. **Theorem 1**. *Let $\mathbb{S}$ be a semifield of order $2^n$, $\pi(\mathbb{S})$ the translation plane it defines, and $C=C(\mathbb{S})$ the spread set it defines in $V(2n,2)$. Then there exists a translation hyperoval of shears type in $\pi(\mathbb{S})$ if and only if $\rho(C)=n-1$, and there exists a translation hyperoval of non-shears type in $\pi(\mathbb{S})$ if and only if $\rho(C^{-1})=n-1$.* *Proof.* The plane $\pi(\mathbb{S})$ contains a translation hyperoval of shears type if and only if there exists an $n$-dimensional subspace $U$ which is scattered with respect to $\mathcal D(\mathbb{S})$ such that $U\cap S_\infty=0$. For any $n$-dimensional subspace such that $U\cap S_\infty=0$ there exists an ${\mathbb{F}}_q$-linear map $f$ from $\mathbb{F}_{q^n}$ to itself such that $$U=\{(x,f(x)):x\in \mathbb{F}_{q^n}\}.$$ Let $y\in \mathbb{F}_{q^n}$. Then $U\cap S_y = \{(x,f(x))|f(x)=R_y(x)\}$, and so $\mathrm{dim}(U\cap S_y) = n-\mathrm{rank}(f-R_y)$. 
Hence there exists an $n$-dimensional subspace which is scattered with respect to $\mathcal D(\mathbb{S})$ if and only if there exists $f$ such that $\mathrm{rank}(f-R_y)\geq n-1$ for all $y\in \mathbb{F}_{q^n}$, if and only if $\rho(C)\geq n-1$, if and only if $\rho(C)=n-1$, proving the first claim. Similarly, $\pi(\mathbb{S})$ contains a translation hyperoval of non-shears type if and only if there exists an $n$-dimensional subspace $U$ which is scattered with respect to $\mathcal D(\mathbb{S})$ such that $U\cap S_0=0$. For any such subspace there exists an ${\mathbb{F}}_q$-linear map $f$ from $\mathbb{F}_{q^n}$ to itself such that $U=\{(f(x),x):x\in \mathbb{F}_{q^n}\}$. Then for any nonzero $y\in \mathbb{F}_{q^n}$ we have $\mathrm{dim}(U\cap S_y)= n-\mathrm{rank}(f-R_y^{-1})$, while $\mathrm{dim}(U\cap S_\infty)=n-\mathrm{rank}(f)$, and so arguing as before we obtain the second claim. ◻ Note that we do not have such a result for general quasifields: since we have neither a distinguished element at infinity nor transitivity on the remaining points at infinity, there is no *canonical* choice for the spread set. Note also that the connection between the existence of scattered subspaces with respect to semifield spreads and the covering radius of $C$ and $C^{-1}$ is also valid for $q>2$; however in this case we do not obtain translation hyperovals. ## Linearised Polynomials and Dickson Matrices In order to explicitly determine whether or not translation hyperovals exist, we need a practical method for determining the existence of a linear map $f$ such that $\mathrm{rank}(f-R_y)\geq n-1$ for all $y\in \mathbb{F}_{q^n}$. We do this by utilising linearised polynomials and Dickson matrices; this is the approach used by Payne to classify translation hyperovals in Desarguesian planes, and also used productively in recent years in the construction of MRD codes. 
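Before proceeding, the covering radius bound $\rho(C)\leq n-1$ can be illustrated in the smallest case (a toy computation, our addition, not from the paper): for the field spread set $C=\{R_y:y\in\mathbb{F}_4\}\subset M_2(\mathbb{F}_2)$ given by field multiplication, a brute-force search confirms $\rho(C)=n-1=1$:

```python
from itertools import product

def rank2(m):
    """Rank of a 2x2 matrix m = (a, b, c, d) over GF(2)."""
    a, b, c, d = m
    if m == (0, 0, 0, 0):
        return 0
    return 1 if (a * d + b * c) % 2 == 0 else 2   # nonzero with det 0 => rank 1

def add(m1, m2):
    """Entrywise sum over GF(2)."""
    return tuple((x + y) % 2 for x, y in zip(m1, m2))

# Spread set of GF(4) inside M_2(F_2), written in the basis {1, w} with
# w^2 = w + 1: the matrices of multiplication by 0, 1, w, w^2.
C = [(0, 0, 0, 0), (1, 0, 0, 1), (0, 1, 1, 1), (1, 1, 1, 0)]

M = list(product((0, 1), repeat=4))   # all 16 matrices in M_2(F_2)

# rho(C) = max over g of the rank distance from g to the nearest element of C.
rho = max(min(rank2(add(g, f)) for f in C) for g in M)
assert rho == 1   # = n - 1, matching the bound for spread sets
print(rho)
```

Every nonzero element of this $C$ is invertible (the nonzero elements of $\mathbb{F}_4$), and the computation shows that every $2\times 2$ binary matrix lies within rank distance $1$ of $C$.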
To a linearised polynomial $f(x)= \sum_{i=0}^{n-1}f_ix^{q^i}$ we associate the *Dickson (or autocirculant) matrix* $D_f$ defined as follows: $$D_f := \begin{pmatrix} f_{0} & f_{1} & \cdots & f_{n-1}\\ f_{n-1}^q & f_{0}^q & \cdots & f_{n-2}^q\\ \vdots & \ddots& \ddots & \vdots\\ f_{1}^{q^{n-1}} & f_{2}^{q^{n-1}} & \cdots & f_{0}^{q^{n-1}} \end{pmatrix}$$ It is well known that the assignment $f\mapsto D_f$ is linear, and $\mathrm{rank}(f) = \mathrm{rank}(D_f)$ [@MenichettiAffine]. Hence we can translate Theorem [Theorem 1](#thm:covering){reference-type="ref" reference="thm:covering"} to this setting. **Lemma 1**. *The plane $\pi(\mathbb{S})$ contains a translation hyperoval of shears type if and only if there exists $f\in \mathcal L\backslash C(\mathbb{S})$ such that $$\mathrm{rank}(D_{R_y}-D_f)\geq n-1$$ for all $y\in \mathbb{F}_{q^n}$.* *The plane $\pi(\mathbb{S})$ contains a translation hyperoval of non-shears type if and only if there exists $g\in \mathcal L\backslash C(\mathbb{S})$ such that $$\mathrm{rank}(D_{R_y}^{-1}-D_g)\geq n-1$$ for all $y\in \mathbb{F}_{q^n}^\times$ and $\mathrm{rank}(D_g)=n-1$.* Now the entries of $D_{R_y}$ are polynomials in $y$, and since $\det(D_{R_y})=1$ for all non-zero $y$, we can regard $D_{R_y}^{-1}$ as a matrix whose entries are polynomials in $y$. Hence for $f,g\in \mathcal L$, the functions $d_f(y) := \det(D_{R_y}-D_f)$ and $d^{\mathrm{inv}}_g(y) := \det(D_{R_y}^{-1}-D_g)$ are both polynomials in $y$ (whose coefficients are expressions in the unknown coefficients of $f$ and $g$ respectively). Note that $d_g(0)=\det(D_g)$. If necessary we can replace $d_f(y)$ and $d^{\mathrm{inv}}_g(y)$ with their reduction modulo $y^{2^n}-y$. **Lemma 2**. 
*The plane $\pi(\mathbb{S})$ of order $2^n$ contains a translation hyperoval of shears type if and only if there exists $f\in \mathcal L\backslash C(\mathbb{S})$ such that $d_f(y) = y^{2^n-1}+1$, and contains a translation hyperoval of non-shears type if and only if there exists $g\in \mathcal L\backslash C(\mathbb{S})$ such that $\mathrm{rank}(g)=n-1$ and $d^{\mathrm{inv}}_g(y) = \frac{y^{2^n}+y}{y+a}$ for some $a\in \mathbb{F}_{q^n}^\times$.* *Proof.* Let $U=\{(x,f(x)):x\in \mathbb{F}_{q^n}\}$, and suppose $U$ defines a translation hyperoval of shears type. Then we may assume without loss of generality that $U\cap S_0=0$, and so $U\cap S_y\ne 0$ for all $y\ne 0$. Then $\mathrm{rank}(D_{R_y}-D_f)=n-1$ for all nonzero $y\in \mathbb{F}_{q^n}$, and so $d_f$ is zero at all nonzero elements of $\mathbb{F}_{q^n}$ and nonzero at $y=0$. Clearly this implies that $d_f(y)=y^{2^n-1}+1$. Now let $W=\{(g(x),x):x\in \mathbb{F}_{q^n}\}$, and suppose $W$ defines a translation hyperoval of non-shears type. Then we may assume without loss of generality that $W\cap S_0=0$, and there exists a unique $a\in \mathbb{F}_{q^n}^\times$ such that $W\cap S_a=0$ and $W\cap S_y\ne 0$ for all nonzero $y\ne a$. Furthermore $W\cap S_{\infty}\ne 0$, and so $d^{\mathrm{inv}}_g(0)=0$. Hence $d^{\mathrm{inv}}_g(y)\ne 0$ if and only if $y=a$, and so $d^{\mathrm{inv}}_g(y) = \frac{y^{2^n}+y}{y+a}$ as claimed. ◻ Note that this is the approach used by Payne in his classification of translation hyperovals in $\mathrm{PG}(2,q^n)$. He showed that for the case $R_y(x)= xy$, the requirement that $d_f(y)=y^{2^n-1}+1$ implies that $f(x)$ is a monomial, that is, $f(x)=f_ix^{2^i}$. The classification of translation hyperovals in $\mathrm{PG}(2,q^n)$ then follows easily. 
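In the smallest interesting Desarguesian instance this determinant condition can be checked directly (an illustrative computation, our addition, not from the paper). In $\mathrm{PG}(2,16)$, take $R_y(x)=xy$ and the monomial $f(x)=x^2$. Then $D_{R_y}=\mathrm{diag}(y,y^2,y^4,y^8)$ and $D_f$ is a cyclic shift matrix, so $d_f(y)=y^{15}+1$: zero at every nonzero $y$ and equal to $1$ at $y=0$. A sketch, assuming $\mathbb{F}_{16}=\mathbb{F}_2[x]/(x^4+x+1)$ (in characteristic $2$ the determinant carries no signs):

```python
from itertools import permutations

# F16 = GF(2)[x]/(x^4+x+1); elements as ints 0..15.
def m16(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b10000:     # reduce by x^4 + x + 1
            a ^= 0b10011
    return r

def p16(a, e):
    r = 1
    for _ in range(e):
        r = m16(r, a)
    return r

def det(M):
    """Determinant over a field of characteristic 2 (signs drop out)."""
    d = 0
    for perm in permutations(range(4)):
        t = 1
        for i, col in enumerate(perm):
            t = m16(t, M[i][col])
        d ^= t
    return d

def dickson_sum(y):
    # D_{R_y} + D_f for R_y(x) = xy and f(x) = x^2:
    # diag(y, y^2, y^4, y^8) plus the cyclic shift matrix of f_1 = 1.
    M = [[0] * 4 for _ in range(4)]
    for i in range(4):
        M[i][i] = p16(y, 2 ** i)
        M[i][(i + 1) % 4] ^= 1
    return M

# d_f(y) = y^15 + 1 for every y in F16.
assert all(det(dickson_sum(y)) == p16(y, 15) ^ 1 for y in range(16))
print("d_f(y) = y^15 + 1 verified on all of F16")
```

The only permutations contributing to the determinant are the identity (giving $y^{1+2+4+8}=y^{15}$) and the full cycle (giving $1$), which recovers $d_f(y)=y^{15}+1$ by hand as well.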
# Translation Hyperovals in the Generalised Twisted Field plane of order $64$ In this section we analyse the conditions from Lemma [Lemma 1](#lem:dickson){reference-type="ref" reference="lem:dickson"} and Lemma [Lemma 2](#lem:dicksondet){reference-type="ref" reference="lem:dicksondet"} for the case of the Generalised Twisted Field plane of order $64$. We choose this plane due to the fact that the Dickson matrices $D_{R_y}$ and $D_{R_y^{-1}}$ are sparse, making the equations manageable. Furthermore the known symmetries of these semifields allow us to further reduce the necessary computation. The multiplication in this presemifield, which we will denote by $\mathbb{T}$, is given by $$x\circ y = xy-jx^{2^2}y^{2^4}=:R_y(x),$$ where $j$ is a solution to $j^6+j+1=0$. We choose this representation to match that in [@Rua2009]. Any semifield isotopic to this presemifield has centre isomorphic to $\mathbb{F}_4$. In particular, each of the maps $R_y$ are $\mathbb{F}_4$-linear. Note furthermore that we have the following identity, which will prove useful in the subsequent calculations: $$\alpha R_y (\beta x) = R_{\alpha\beta y}(x)$$ for all $\alpha,\beta\in \mathbb{F}_{2^6}$ such that $\alpha\beta^{2^2}=\alpha^{2^4}\beta\ne 0$. Hence for any $\alpha,\beta$ satisfying this condition, the map $\phi_{\alpha,\beta}:(x,y)\mapsto (\beta^{-1}x,\alpha y)$ fixes $\mathcal D(\mathbb{T})$; in particular, it fixes $S_\infty$ and $S_0$, and maps $S_y$ to $S_{\alpha\beta y}$. Note that any such $\phi_{\alpha,\beta}$ fixes one further element of $\mathcal D(\mathbb{T})$ if and only if it fixes every element of $\mathcal D(\mathbb{T})$, and this occurs precisely if $\alpha^9=1$ and $\alpha\beta=1$. Furthermore, letting $U_f=\{(x,f(x)):x\in \mathbb{F}_{q^n}\}$ and $W_g=\{(g(x),x):x\in \mathbb{F}_{q^n}\}$, we get that $\phi_{\alpha,\beta}(U_f) = U_h$ where $h(x) = \alpha f(\beta x)$, and $\phi_{\alpha,\beta}(W_g) = W_k$ where $k(x) = \beta^{-1} g(\alpha^{-1} x)$. 
## Shears Type Suppose $\pi(\mathbb{T})$ contains a translation hyperoval of shears type. By Lemma [Lemma 2](#lem:dicksondet){reference-type="ref" reference="lem:dicksondet"}, we require the existence of some $f\in \mathcal L\backslash C(\mathbb{T})$ such that $d_f(y) := \det(D_{R_y}-D_f)=y^{2^6-1}+1$. This leads to a system of equations in six unknowns $f_0,\ldots,f_5$ over $\mathbb{F}_{2^6}$. Note furthermore that for any $\alpha,\beta\in \mathbb{F}_{2^6}$ such that $\alpha\beta^{2^2}=\alpha^{2^4}\beta\ne 0$, if $h(x)=\alpha f(\beta x)$, then $$d_f(y)=d_h(\alpha\beta y)= d_h(y),$$ and so $f$ defines a translation hyperoval if and only if $h$ defines a translation hyperoval. Note that $h_0=\alpha\beta f_0$, and the set $\{\alpha\beta:\alpha,\beta\in \mathbb{F}_{2^6},\alpha\beta^{2^2}=\alpha^{2^4}\beta\ne 0\}$ is precisely the set of solutions to $x^{21}=1$. Thus we may assume without loss of generality that $f_0\in \{0,1,j,j^2\}$. Furthermore if $\alpha\beta=1$ then $\beta^9=1$ and $h_1=\beta f_1$, and so we can assume without loss of generality that $f_1^8=f_1$, i.e. $f_1\in \mathbb{F}_8$. From the coefficients of $y^{62}$ and $y^{58}$ respectively, we get that $$\begin{aligned} 0&=j^{21}f_{0}+j^{38}f_{2}^{4},\\ 0&=j^{22}f_{0}^{16}f_{4}^{4} + j^{21}f_{0}^5 + j^{22}f_{2}^{20} + j^{21}f_{2}f_{4}^4.\end{aligned}$$ Thus we have that either $f_0=f_2=0$, or $f_{0}=j^{17}f_{2}^{4}$ and $f_{4}=j^{16}f_{2}^{52}$. We plug these expressions into the coefficients of $y^{57}$ and $y^{54}$ and set them equal to zero. 
It turns out that we get the same pair of equations regardless of whether or not $f_2=0$, and we also observe that $f_2$ does not appear in either of the resulting equations: $$\begin{aligned} 0&=j^{39}f_{1}^{16}f_{3}^{8} + f_{1}^{2}f_{5}^{4} + j^{5}f_{3}^{16}f_{5}^{2} + j^{34}f_{3}^{4}f_{5}^{8},\\ 0&=j^{10}f_{1}^{33} + j^{17}f_{1}^{12} + f_{3}^{9} + j^{27}f_{5}^{36}.\end{aligned}$$ Taking into account that $f_1\in \mathbb{F}_8$, and raising to an appropriate power of $2$, we get that $$\begin{aligned} \label{eqn:shearsystem1} 0&=f_1(j^{51}f_{3}^{4} + f_{5}^{2}) + j^{34}f_{3}^{8}f_{5} + j^{17}f_{3}^{2}f_{5}^{4},\\ 0&=j^{36}f_{1}^{5} + f_{3}^{9} + j^{27}f_{5}^{36}.\nonumber\end{aligned}$$ This leads to the following. **Theorem 2**. *The Generalised Twisted Field plane of order $64$ does not contain a translation hyperoval of shears type.* *Proof.* The following MAGMA code verifies that the system ([\[eqn:shearsystem1\]](#eqn:shearsystem1){reference-type="ref" reference="eqn:shearsystem1"}) has no nontrivial solutions. > q := 2; > F := GF(q); > P<x> := PolynomialRing(F); > L<j> := ext<F|x^6+x+1>; > F8 := {x:x in L|x^8 eq x}; > > S<f1,f3,f5> := PolynomialRing(L,3); > > g := f1*(j^51*f3^4 + f5^2) + j^34*f3^8*f5 + j^17*f3^2*f5^4; > h := j^36*f1^5 + f3^9 + j^27*f5^36; > > s1 := {[a,b,c]:a in F8,b,c in L|Evaluate(g,[a,b,c]) eq 0}; > s2 := {[a,b,c]:a in F8,b,c in L|Evaluate(h,[a,b,c]) eq 0}; > > s1 meet s2 eq {[L|0,0,0]}; Hence we have that $f_1=f_3=f_5=0$, implying $f(x)$ is in fact an $\mathbb{F}_4$-linear map. But since each $R_y$ is also $\mathbb{F}_4$-linear, then the rank of $f-R_y$ as an $\mathbb{F}_2$-linear map must be even; in particular it cannot be $n-1=5$, contradicting Theorem [Theorem 1](#thm:covering){reference-type="ref" reference="thm:covering"}. Hence this plane does not contain a translation hyperoval of shears type. ◻ The MAGMA code used in this proof runs in less than one second. 
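The shears-type system can also be re-checked independently of MAGMA. The following Python sketch (our addition, not the paper's verification) brute-forces the same pair of equations over $\mathbb{F}_{64}=\mathbb{F}_2[x]/(x^6+x+1)$, with $j$ the class of $x$ as in the MAGMA construction `L<j> := ext<F|x^6+x+1>`, and confirms that only the trivial solution exists:

```python
# F64 = GF(2)[x]/(x^6+x+1); elements as ints 0..63, j = class of x.
def m64(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b1000000:        # reduce by x^6 + x + 1
            a ^= 0b1000011
    return r

def p64(a, e):
    r = 1
    for _ in range(e):
        r = m64(r, a)
    return r

j = 0b10
F8 = [x for x in range(64) if p64(x, 8) == x]     # the subfield F_8
J51, J34, J17, J36, J27 = (p64(j, e) for e in (51, 34, 17, 36, 27))

def g(f1, f3, f5):
    # f1*(j^51 f3^4 + f5^2) + j^34 f3^8 f5 + j^17 f3^2 f5^4
    return (m64(f1, m64(J51, p64(f3, 4)) ^ p64(f5, 2))
            ^ m64(J34, m64(p64(f3, 8), f5))
            ^ m64(J17, m64(p64(f3, 2), p64(f5, 4))))

def h(f1, f3, f5):
    # j^36 f1^5 + f3^9 + j^27 f5^36
    return m64(J36, p64(f1, 5)) ^ p64(f3, 9) ^ m64(J27, p64(f5, 36))

sols = [(a, b, c) for a in F8 for b in range(64) for c in range(64)
        if g(a, b, c) == 0 and h(a, b, c) == 0]
assert set(sols) == {(0, 0, 0)}   # only the trivial solution, as in the proof
print(len(sols))
```

This is a direct port of the two polynomials `g` and `h` from the MAGMA code above; the search space is $8\cdot 64\cdot 64$ triples and runs in seconds.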
## Non-shears Type {#sec:nonshears} Suppose $\pi(\mathbb{T})$ contains a translation hyperoval of non-shears type. By Lemma [Lemma 1](#lem:dickson){reference-type="ref" reference="lem:dickson"}, we require the existence of some $g\in \mathcal L\backslash C(\mathbb{T})$ such that $\mathrm{rank}(D_{R_y}^{-1} - D_g)\geq n-1$ for all $y\in \mathbb{F}_{q^n}^\times$ and $\mathrm{rank}(D_g)=n-1$. Similar to the shears case, we may assume without loss of generality that $g_0\in \{0,1,j,j^2\}$ and $g_1\in \mathbb{F}_8$. Note if $R_y(x)=yx+jy^{2^4}x^{2^2}$ for $y\neq0$, then $R_y$ is $\mathbb{F}_4$-linear and so $R_{y}^{-1}$ must also be $\mathbb{F}_{4}$-linear. It is straightforward then to calculate $R_{y}^{-1}$, which we find to be $$R_{y}^{-1}(x)=y^{62}j^{21}x+y^{11}j^{22}x^4+y^{59}j^{26}x^{16}.$$ Due to the complexity of the coefficients of $d^{\mathrm{inv}}_g(y)$ and the unknown element $a$ such that $d^{\mathrm{inv}}_g(a)\ne 0$, there is little that can be done from a theoretical point of view utilising Lemma [Lemma 2](#lem:dicksondet){reference-type="ref" reference="lem:dicksondet"}, beyond the above restrictions on the coefficients of $g$. Hence we must rely on a long computation using Lemma [Lemma 1](#lem:dickson){reference-type="ref" reference="lem:dickson"}. **Theorem 3**. *The Generalised Twisted Field plane of order $64$ does not contain a translation hyperoval of non-shears type.* *Proof.* The following MAGMA code verifies that there are no tuples $(g_0,g_1,g_2,g_3,g_4,g_5)$ with $g_0\in \{0,1,j,j^2\}, g_1\in \mathbb{F}_8$, and $g_i\in \mathbb{F}_{64}$ for $i=2,3,4,5$ such that $\mathrm{rank}(D_{R_y}^{-1}-D_g)\geq n-1$ for all $y\in \mathbb{F}_{q^n}$ and $\mathrm{rank}(D_g)=n-1$. 
> q := 2; > n := 6; > > F := GF(q); > P<x> := PolynomialRing(F); > L<j> := ext<F|x^6+x+1>; > F8 := {x:x in L|x^8 eq x}; > > DicksonMatrix := function(v,n,q); > return Matrix([Rotate([a^(q^i):a in v],i):i in [0..n-1]]); > end function; > > Cinv := {DicksonMatrix([j^21*y^62,0,j^22*y^11,0,j^26*y^59,0],n,q):y in L}; > > time nonshears := {<g0,g1,g2,g3,g4,g5>:g0 in {0,1,j,j^2},g1 in F8,g2,g3,g4,g5 in L| > forall{z:z in Cinv|Rank(z-f) ge n-1} where f is DicksonMatrix([g0,g1,g2,g3,g4,g5],n,q)}; > #nonshears eq 0; Hence by Lemma [Lemma 1](#lem:dickson){reference-type="ref" reference="lem:dickson"}, there does not exist a translation hyperoval of non-shears type in this plane. ◻ The calculation used in this proof takes approximately 8.5 hours on a single CPU. We note that this computation could clearly be parallelised and optimised further, but we do not attempt any improvements beyond the above restrictions on $g_0$ and $g_1$. Without these restrictions, the computation would take approximately three weeks. # Conclusion and Remarks This culminates in the following theorem, disproving Cherowitzo's conjecture. **Theorem 4**. *There does not exist a translation hyperoval in the Twisted Field Plane of order $64$.* **Remark 1**. Due to the previously described equivalences, we have also demonstrated the existence of a $6$-spread in $V(12,2)$ not admitting a scattered subspace of dimension $6$, and an MRD code (semifield spread set) in $M_6(\mathbb{F}_2)$ with minimum distance $6$ and covering radius less than $5$. **Remark 2**. The situation for the remaining semifield planes of order $64$ is more difficult to analyse theoretically. Instead we would need to rely on exhaustive computer searches. For translation hyperovals of shears type, this can be done relatively efficiently by exploiting the additivity of $C(\mathbb{S})$; a naive implementation can perform an exhaustive search in about 8 hours (as opposed to less than a second for the generalised twisted field). 
In fact, it turns out that many semifield planes of order $64$ do not contain a translation hyperoval of shears type. However, for the non-shears case we do not have additivity, and for the majority of semifields we do not have enough symmetries to constrain the coefficients $g_i$ as in Section [4.2](#sec:nonshears){reference-type="ref" reference="sec:nonshears"}, and so exhaustive computation takes much longer. Hence further theoretical reductions, or a more significant parallelised computation, would be necessary in order to determine the existence or non-existence of translation hyperovals for these planes. **Remark 3**. Although hyperovals cannot exist in planes of odd order, scattered subspaces of maximum dimension with respect to spreads can still exist. The corresponding point set in the associated projective plane is a set of $q^n$ points not contained in the translation line, meeting each line in $0$, $1$ or $q$ points, upon which a group of translations acts transitively. We can repeat the arguments from this paper in part; however, since $\frac{q^n-1}{q-1}<q^n-1$ for $q>2$, we cannot conclude much about $d_f(y)$. It remains an open question whether or not spreads defined by generalised twisted fields possess a scattered subspace of dimension $n$ for general $q$.
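Finally, as an independent sanity check (our addition, not part of the paper): the closed form for $R_y^{-1}$ given in Section [4.2](#sec:nonshears){reference-type="ref" reference="sec:nonshears"} can be verified by direct computation in $\mathbb{F}_{64}=\mathbb{F}_2[x]/(x^6+x+1)$, with $j$ the class of $x$:

```python
# F64 = GF(2)[x]/(x^6+x+1); elements as ints 0..63, j = class of x = 0b10.
def m64(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b1000000:    # reduce by x^6 + x + 1
            a ^= 0b1000011
    return r

def p64(a, e):
    r = 1
    for _ in range(e):
        r = m64(r, a)
    return r

j = 0b10

def R(y, x):
    # R_y(x) = xy + j x^4 y^16  (characteristic 2, so - and + agree)
    return m64(x, y) ^ m64(m64(j, p64(x, 4)), p64(y, 16))

def Rinv(y, x):
    # stated inverse: y^62 j^21 x + y^11 j^22 x^4 + y^59 j^26 x^16
    return (m64(m64(p64(j, 21), p64(y, 62)), x)
            ^ m64(m64(p64(j, 22), p64(y, 11)), p64(x, 4))
            ^ m64(m64(p64(j, 26), p64(y, 59)), p64(x, 16)))

# R_y(R_y^{-1}(z)) = z for every nonzero y and every z.
assert all(R(y, Rinv(y, z)) == z
           for y in range(1, 64) for z in range(64))
print("inverse formula verified")
```

The verification exploits that $j^{21}$ is a primitive cube root of unity, so the cross terms in $R_y(R_y^{-1}(z))$ cancel and the linear coefficient collapses to $j^{21}+j^{42}=1$.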
{ "id": "2309.01451", "title": "On Translation Hyperovals in Semifield Planes", "authors": "Kevin Allen and John Sheekey", "categories": "math.CO", "license": "http://creativecommons.org/licenses/by/4.0/" }
--- abstract: | We introduce and begin the study of quasi-BPS categories for K3 surfaces, which are a categorical version of the BPS cohomologies for K3 surfaces. We construct semiorthogonal decompositions of derived categories of coherent sheaves on moduli stacks of semistable objects on K3 surfaces, where each summand is a categorical Hall product of quasi-BPS categories. We also prove the wall-crossing equivalence of quasi-BPS categories, which generalizes Halpern-Leistner's wall-crossing equivalence of moduli spaces of stable objects for primitive Mukai vectors on K3 surfaces. We also introduce and study a reduced quasi-BPS category. When the weight is coprime to the Mukai vector, the reduced quasi-BPS category is proper, smooth, and its Serre functor is trivial étale locally on the good moduli space. Moreover we prove that its topological K-theory recovers the BPS invariants of K3 surfaces, which are known to be equal to the Euler characteristics of Hilbert schemes of points on K3 surfaces. We regard reduced quasi-BPS categories as noncommutative hyperkähler varieties which are categorical versions of crepant resolutions of singular symplectic moduli spaces of semistable objects on K3 surfaces. author: - Tudor Pădurariu and Yukinobu Toda bibliography: - math.bib title: Quasi-BPS categories for K3 surfaces --- # Introduction Let $S$ be a K3 surface, $v$ a Mukai vector, and $w$ an integer. The purpose of this paper is to introduce and study a category $$\begin{aligned} \label{intro:T} \mathbb{T}=\mathbb{T}_S(v)_w^{\rm{red}}\end{aligned}$$ called *(reduced) quasi-BPS category*. When $v$ is primitive, [\[intro:T\]](#intro:T){reference-type="eqref" reference="intro:T"} is equivalent to the derived category of twisted sheaves over the moduli space $M$ of stable objects on $S$ with Mukai vector $v$, which is a holomorphic symplectic manifold. 
When $v$ is not necessarily primitive, but $w$ is coprime to $v$, we show that $\mathbb{T}$ is proper, smooth, and has trivial Serre functor étale locally on the good moduli space $M$ of semistable objects with Mukai vector $v$, which is a singular symplectic variety. So we obtain a category $\mathbb{T}$ which we regard as a (twisted) categorical (étale locally) crepant resolution of singularities of $M$. The construction of the category ([\[intro:T\]](#intro:T){reference-type="ref" reference="intro:T"}) is motivated by enumerative geometry: quasi-BPS categories are a categorical replacement of BPS cohomologies [@DM; @D; @KinjoKoseki; @DHSM; @DHSM2], constructed from semiorthogonal decompositions of derived categories of moduli stacks of semistable sheaves which approximate the PBW theorem in cohomological Donaldson-Thomas (DT) theory studied in loc. cit. Below, we first mention our main results, and then explain how the construction of the category ([\[intro:T\]](#intro:T){reference-type="ref" reference="intro:T"}) is motivated by DT theory and the study of singular symplectic varieties. ## Semiorthogonal decompositions into quasi-BPS categories For a K3 surface $S$, let $$\begin{aligned} \mathrm{Stab}(S)\end{aligned}$$ be the main connected component of the space of Bridgeland stability conditions [@Brs2] on $D^b(S)$. Let $\Gamma=\mathbb{Z} \oplus \mathrm{NS}(S) \oplus \mathbb{Z}$ be the Mukai lattice. For $\sigma \in \mathrm{Stab}(S)$ and $v \in \Gamma$, consider the moduli stacks $$\begin{aligned} \mathfrak{M}_S^{\sigma}(v) \hookleftarrow \mathcal{M}_S^{\sigma}(v) \to M_S^{\sigma}(v),\end{aligned}$$ where $\mathfrak{M}_S^{\sigma}(v)$ is the derived moduli stack of $\sigma$-semistable objects in $D^b(S)$ with Mukai vector $v$, $\mathcal{M}_S^{\sigma}(v)$ is its classical truncation, and $M_S^{\sigma}(v)$ is its good moduli space. Below we write $v=dv_0$ for $d \in \mathbb{Z}_{\geqslant 1}$ and $v_0$ a primitive Mukai vector with $\langle v_0, v_0\rangle=2g-2$. 
We say *$w\in\mathbb{Z}$ is coprime with $v$* if $\gcd(w,d)=1$. We use the following structures on the derived category of $\mathfrak{M}_S^{\sigma}(v)$: **(The weight decomposition)**: every point in $\mathfrak{M}_S^{\sigma}(v)$ admits scalar automorphisms $\mathbb{C}^{\ast}$, and thus there is an orthogonal decomposition of $D^b(\mathfrak{M}_S^{\sigma}(v))$ into $\mathbb{C}^{\ast}$-weight categories $$\begin{aligned} D^b(\mathfrak{M}_S^{\sigma}(v))=\bigoplus_{w\in \mathbb{Z}} D^b(\mathfrak{M}_S^{\sigma}(v))_w. \end{aligned}$$ **(The categorical Hall product)**: for a decomposition $d=d_1+\cdots+d_k$, the stack of filtrations of $\sigma$-semistable objects induces *the categorical Hall product* defined by Porta-Sala [@PoSa]: $$\begin{aligned} \label{intro:cathall} \boxtimes_{i=1}^k D^b(\mathfrak{M}_S^{\sigma}(d_i v_0)) \to D^b(\mathfrak{M}_S^{\sigma}(v)). \end{aligned}$$ Davison--Hennecart--Schlegel Mejia [@DHSM Theorem 1.5] proved that the Hall algebra of a K3 surface is generated by its BPS cohomology. The categorical analogue of their result is the following result, which can also be regarded as a partial categorification of a BBDG-type decomposition theorem, see Subsection [1.3](#subsec:intro:topK){reference-type="ref" reference="subsec:intro:topK"}: **Theorem 1**. **(Theorem [Theorem 43](#thm:sodK3){reference-type="ref" reference="thm:sodK3"})*[\[intro:thm1\]]{#intro:thm1 label="intro:thm1"} Let $\sigma \in \mathrm{Stab}(S)$ be a generic stability condition. Then there exists a subcategory (called quasi-BPS category) $$\begin{aligned} \label{intro:qBPS} \mathbb{T}_S^{\sigma}(v)_w \subset D^b\left(\mathfrak{M}_S^{\sigma}(v)\right)_w\end{aligned}$$ such that there is a semiorthogonal decomposition $$\begin{aligned} \label{intro:sod} D^b\left(\mathfrak{M}_S^{\sigma}(v)\right) =\left\langle \boxtimes_{i=1}^k \mathbb{T}_{S}^{\sigma}(d_i v_0)_{w_i+(g-1)d_i(\sum_{i>j}d_j-\sum_{i<j}d_j)} \right\rangle. 
\end{aligned}$$ The right-hand side runs over all partitions $(d_i)_{i=1}^k$ of $d$ and all weights $(w_i)_{i=1}^k\in\mathbb{Z}^k$ such that $$\frac{w_1}{d_1}<\cdots<\frac{w_k}{d_k},$$ and each fully-faithful functor in ([\[intro:sod\]](#intro:sod){reference-type="ref" reference="intro:sod"}) is given by the categorical Hall product ([\[intro:cathall\]](#intro:cathall){reference-type="ref" reference="intro:cathall"}).* The order of the summands in the semiorthogonal decomposition ([\[intro:sod\]](#intro:sod){reference-type="ref" reference="intro:sod"}) is not immediate to state and we do not make it explicit in this paper, see Remark [Remark 15](#rmk:order){reference-type="ref" reference="rmk:order"}. If $v$ is primitive, then $$\mathbb{T}_S^{\sigma}(v)_w= D^b\left(\mathfrak{M}_S^{\sigma}(v)\right)_w.$$ In general, the category $\mathbb{T}_S^{\sigma}(v)_w$ is uniquely determined by the semiorthogonal decomposition ([\[intro:sod\]](#intro:sod){reference-type="ref" reference="intro:sod"}). Locally on $M_S^{\sigma}(v)$, the category $\mathbb{T}_S^{\sigma}(v)_w$ is defined to be the subcategory of objects which are Koszul dual to matrix factorizations with some weight conditions for the maximal torus of the stabilizer groups. Such a subcategory was first considered by Špenko--Van den Bergh [@SVdB] to construct noncommutative crepant resolutions of quotients of quasi-symmetric representations by reductive groups. It was later used in [@hls] to prove the "magic window theorem" for GIT quotient stacks, and in [@P] to give PBW type decompositions for categorical (and K-theoretic) Hall algebras of symmetric quivers with potential. We regard the subcategory ([\[intro:qBPS\]](#intro:qBPS){reference-type="ref" reference="intro:qBPS"}) as a global version of these categories in the case of moduli stacks of semistable objects on K3 surfaces. 
The main tool in investigating the category [\[intro:qBPS\]](#intro:qBPS){reference-type="eqref" reference="intro:qBPS"} is its local description via categories of matrix factorizations on the moduli stacks of representations of Ext-quivers of $\sigma$-polystable objects. We study quasi-BPS categories in this local context in [@PTquiver]. ## Quasi-BPS categories for reduced stacks {#subsec12} The derived stack $\mathfrak{M}_S^{\sigma}(v)$ is never classical because of the existence of the trace map $\operatorname{Ext}^2(E, E) \twoheadrightarrow \mathbb{C}$ for any object $E \in D^b(S)$. Let $$\begin{aligned} \mathfrak{M}_S^{\sigma}(v)^{\rm{red}} \hookrightarrow \mathfrak{M}_S^{\sigma}(v)\end{aligned}$$ be the reduced derived stack, which roughly speaking is obtained by taking the traceless part of its obstruction theory. By [@KaLeSo], it is known that the reduced derived stack is classical when $g\geqslant 2$. We also have a reduced version of the quasi-BPS category $$\begin{aligned} \label{reduced:bps} \mathbb{T}_S^{\sigma}(v)^{\rm{red}}_w \subset D^b\left(\mathfrak{M}_S^{\sigma}(v)^{\rm{red}}\right)_w\end{aligned}$$ and a reduced version of the semiorthogonal decomposition ([\[intro:sod\]](#intro:sod){reference-type="ref" reference="intro:sod"}), see Theorem [Theorem 44](#thm:sodK32){reference-type="ref" reference="thm:sodK32"}. When $v$ is primitive, we have $$\begin{aligned} \label{T:twist} \mathbb{T}_S^{\sigma}(v)^{\rm{red}}_w=D^b\left(M_S^{\sigma}(v), \alpha^w\right),\end{aligned}$$ where $\alpha$ is the Brauer class which represents the obstruction to the existence of a universal object, and $M_S^{\sigma}(v)$ is a projective holomorphic symplectic manifold [@Mu2; @BaMa2]. 
From the above description, we have the following properties of the category ([\[reduced:bps\]](#reduced:bps){reference-type="ref" reference="reduced:bps"}) when $v$ is primitive: (i) the category $\mathbb{T}_S^{\sigma}(v)^{\rm{red}}_w$ is smooth and proper; (ii) the Serre functor $S_{\mathbb{T}}$ of $\mathbb{T}_S^{\sigma}(v)^{\rm{red}}_w$ is isomorphic to the shift functor $[\dim M_S^{\sigma}(v)]$; (iii) by Halpern-Leistner [@halpK32], the category $\mathbb{T}_S^{\sigma}(v)^{\rm{red}}_w$ is independent of $\sigma$ up to equivalence. The proof of the above properties relies on the description ([\[T:twist\]](#T:twist){reference-type="ref" reference="T:twist"}) for primitive $v$, and a priori there is no reason that these properties hold for non-primitive $v$. Nevertheless, we have the following: **Theorem 2**. **(Corollary [Corollary 58](#cor:smooth){reference-type="ref" reference="cor:smooth"}, Theorem [Theorem 66](#thm:Serre:etale){reference-type="ref" reference="thm:Serre:etale"}, Theorem [Theorem 29](#thm:walleq){reference-type="ref" reference="thm:walleq"})*[\[intro:thm4\]]{#intro:thm4 label="intro:thm4"} Suppose that $g\geqslant 2$, $\sigma, \sigma'\in\mathrm{Stab}(S)$ are generic stability conditions, and $w$ is coprime to $v$. Then:* ***(i) (smooth and properness):** the category $\mathbb{T}_S^{\sigma}(v)_w^{\rm{red}}$ is smooth and proper;* ***(ii) (étale locally trivial Serre functor):** the Serre functor $S_{\mathbb{T}}$ of $\mathbb{T}_S^{\sigma}(v)^{\rm{red}}_w$ is trivial étale locally on $M_S^{\sigma}(v)$;* ***(iii) (wall-crossing equivalence):** there is an equivalence $\mathbb{T}_S^{\sigma}(v)_w^{\rm{red}} \simeq \mathbb{T}_S^{\sigma'}(v)_w^{\rm{red}}$. 
Hence we may write the quasi-BPS category as $\mathbb{T}_S(v)_w^{\rm{red}}$.* The key point in the proof of (i) above is Lemma [Theorem 56](#prop:catsupp){reference-type="ref" reference="prop:catsupp"} (the categorical support lemma), which says that any object in $\mathbb{T}_S(v)_w^{\rm{red}}$ has nilpotent singular support if $w$ is coprime to $v$. Combining this with strong generation, we conclude that $\mathbb{T}_S(v)_w^{\rm{red}}$ is smooth and proper if $w$ is coprime to $v$. In particular, it admits a Serre functor $S_{\mathbb{T}}$. We expect that $S_{\mathbb{T}}$ is globally isomorphic to $[\dim M_S^{\sigma}(v)]$. However, there is currently a technical subtlety in proving this, and we only prove triviality étale locally in (ii). Globally, we prove an isomorphism $S_{\mathbb{T}}\cong [\dim M_S^{\sigma}(v)]$ on the level of cohomology, see Corollary [Corollary 74](#cor:isomcoh){reference-type="ref" reference="cor:isomcoh"}, and also for perfect complexes, see Corollary [Corollary 75](#cor:serreperf){reference-type="ref" reference="cor:serreperf"}. In view of parts (i) and (ii) of Theorem [\[intro:thm4\]](#intro:thm4){reference-type="ref" reference="intro:thm4"}, we view $\mathbb{T}_S(v)^{\mathrm{red}}_w$ as a categorical version of a crepant resolution of $M^\sigma_S(v).$ It is an interesting question to study the relation between (reduced) quasi-BPS categories and categorical crepant resolutions in the sense of Kuznetsov [@KuzICM] or noncommutative crepant resolutions in the sense of Van den Bergh [@VdB22]. We plan to investigate this relation in future work. The main tool in proving Theorem [\[intro:thm4\]](#intro:thm4){reference-type="ref" reference="intro:thm4"} is its local version for stacks of representations of preprojective algebras constructed from Ext-quivers of $\sigma$-polystable objects, see [@PTquiver]. 
Along the way, we obtain generation statements for singular support quotient categories of more general quasi-smooth stacks that may be of independent interest, see Theorem [Theorem 61](#thm:regular2){reference-type="ref" reference="thm:regular2"}. ## Topological K-theory of quasi-BPS categories {#subsec:intro:topK} We finally relate the topological K-theory of quasi-BPS categories to the cohomology of the BPS sheaf $\mathcal{BPS}_v$ on $M_S^{\sigma}(v)$ studied in [@DHSM] (i.e. to BPS cohomology). Note that $\mathcal{BPS}_v=\mathrm{IC}_{M_S^{\sigma}(v)}=\mathbb{Q}_{M_S^{\sigma}(v)}[\dim M_S^{\sigma}(v)]$ if $v$ is a primitive Mukai vector and $\sigma$ is generic, and in general it is a semisimple perverse sheaf which contains $\mathrm{IC}_{M_S^{\sigma}(v)}$ as a proper direct summand. For a dg-category $\mathcal{D}$ and $i\in \mathbb{Z}$, we denote by $K_i^{\rm{top}}(\mathcal{D})$ the topological K-theory of $\mathcal{D}$ defined by Blanc [@Blanc]. We prove the following: **Theorem 3**. **(Theorem [Theorem 76](#thmKtop){reference-type="ref" reference="thmKtop"})*[\[intro:thm:K\]]{#intro:thm:K label="intro:thm:K"} Suppose that $\sigma$ is a generic Gieseker stability condition, $g\geqslant 2$, and $w$ is coprime to $v$. For $i\in \mathbb{Z}$, we have the identity: $$\begin{aligned} &\dim K_{i}^{\rm{top}}(\mathbb{T}_S^{\sigma}(v)_w) = \sum_{j\in \mathbb{Z}}\dim H^{i+2j}(M_S^{\sigma}(v), \mathcal{BPS}_v). \end{aligned}$$* The above result is motivated by the categorification of BPS invariants in Donaldson-Thomas theory, which will be explained in the next subsection. We regard Theorem [\[intro:thm:K\]](#intro:thm:K){reference-type="ref" reference="intro:thm:K"} as a weight-independence phenomenon reminiscent of the (numerical and cohomological) $\chi$-independence phenomenon [@MT; @TodGV; @MaulikShen; @KinjoKoseki]. 
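As a consistency check (our own unwinding of the right hand side, not an additional statement): when $v$ is primitive and $\sigma$ is generic, $\mathcal{BPS}_v=\mathbb{Q}_{M_S^{\sigma}(v)}[\dim M_S^{\sigma}(v)]$ with $\dim M_S^{\sigma}(v)=\langle v, v\rangle+2$ even, so the identity reads $$\dim K_{0}^{\rm{top}}(\mathbb{T}_S^{\sigma}(v)_w)=\sum_{j\in \mathbb{Z}}\dim H^{2j}(M_S^{\sigma}(v), \mathbb{Q}), \qquad \dim K_{1}^{\rm{top}}(\mathbb{T}_S^{\sigma}(v)_w)=\sum_{j\in \mathbb{Z}}\dim H^{2j+1}(M_S^{\sigma}(v), \mathbb{Q}).$$ Since holomorphic symplectic manifolds of $K3^{[n]}$-type have no odd cohomology, the odd part vanishes in this case.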
It is an interesting problem to define a primitive part $\mathrm{P}K^{\mathrm{top}}_i(\mathbb{T}_S^{\sigma}(v)_w)\subset K^{\mathrm{top}}_i(\mathbb{T}_S^{\sigma}(v)_w)$ whose dimension is independent of *all* $w\in\mathbb{Z}$. Theorem [\[intro:thm:K\]](#intro:thm:K){reference-type="ref" reference="intro:thm:K"} can be seen as part of the more general problem of categorifying perverse sheaves of interest [@PTtop], [@P3]. Such a problem is the first step in categorifying instances of the BBDG decomposition theorem [@BBD]. In the context of good moduli space maps for objects in certain Calabi-Yau $2$-categories, a BBDG-type decomposition was proved by Davison [@DavPurity]. Theorem [\[intro:thm1\]](#intro:thm1){reference-type="ref" reference="intro:thm1"} can be seen as a partial categorification of the decomposition theorem for the morphism $\mathcal{M}^\sigma_S(v) \to M^\sigma_S(v)$. ## Motivation from Donaldson-Thomas theory We now explain how the study of quasi-BPS categories is motivated by DT theory. Let $X$ be a smooth Calabi-Yau 3-fold. For a given numerical class $v$ and a stability condition $\sigma$ on $D^b(X)$, the DT invariant is defined to be a rational number $$\begin{aligned} \label{intro:DT} \mathrm{DT}^{\sigma}(v) \in \mathbb{Q}\end{aligned}$$ which virtually counts $\sigma$-semistable (compactly supported) objects with numerical class $v$, see [@Thom; @JS; @PiYT]. It is defined via the moduli stack $\mathcal{M}_X^{\sigma}(v)$ of $\sigma$-semistable objects with numerical class $v$ or its good moduli space $$\mathcal{M}_X^{\sigma}(v) \to M_X^{\sigma}(v).$$ When $\sigma$-semistable objects coincide with $\sigma$-stable objects (e.g. $v$ is primitive and $\sigma$ is generic), then the DT invariant is an integer and can also be computed as $$\begin{aligned} \mathrm{DT}^{\sigma}(v)=\int_{[M_X^{\sigma}(v)]^{\rm{vir}}}1 = \int_{M_X^{\sigma}(v)} \chi_B \,de \in \mathbb{Z},\end{aligned}$$ where $\chi_B$ is the Behrend constructible function [@MR2600874]. 
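A useful special case (a standard property of the Behrend function, recalled here for orientation): if the moduli space $M_X^{\sigma}(v)$ is moreover smooth of dimension $n$, then $\chi_B\equiv (-1)^{n}$, and the above integral reduces to a signed topological Euler characteristic: $$\mathrm{DT}^{\sigma}(v)=(-1)^{n}\, e\left(M_X^{\sigma}(v)\right).$$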
Otherwise, ([\[intro:DT\]](#intro:DT){reference-type="ref" reference="intro:DT"}) is defined as the weighted Euler characteristic with respect to the Behrend function of the 'log' of $\mathcal{M}_X^{\sigma}(v)$ in the motivic Hall algebra, see [@JS]. For a generic $\sigma$, the BPS invariant $\Omega^{\sigma}(v)$ is inductively defined by the multiple cover formula $$\begin{aligned} \mathrm{DT}^{\sigma}(v)=\sum_{k\geqslant 1, k|v}\frac{1}{k^2} \Omega^{\sigma}(v/k). \end{aligned}$$ Although ([\[intro:DT\]](#intro:DT){reference-type="ref" reference="intro:DT"}) is a rational number in general, the BPS invariant $\Omega^{\sigma}(v)$ is an integer. The integrality of $\Omega^{\sigma}(v)$ is conjectured in [@K-S Conjecture 6], [@JS Conjecture 6.12] and proved in [@DM] combined with [@Todext]. We address the following problem of categorifying BPS invariants: **Problem 1**. Is there a dg-category $\mathbb{T}^{\sigma}(v)$ which recovers $\Omega^{\sigma}(v)$ by taking the Euler characteristic of an additive invariant, e.g. $$\chi(K^{\rm{top}}(\mathbb{T}^{\sigma}(v))):=\dim_\mathbb{Q} K_0^{\rm{top}}(\mathbb{T}^{\sigma}(v))_\mathbb{Q}-\dim_\mathbb{Q} K_1^{\rm{top}}(\mathbb{T}^{\sigma}(v))_\mathbb{Q}= -\Omega^{\sigma}(v)?$$ The above problem is open even if $v$ is primitive, and in this case it is related to the gluing problem of matrix factorizations, see [@T] for the case of local surfaces and [@RHH] for work in progress addressing the general case. Now, for a K3 surface $S$, we consider the local K3 surface $$\begin{aligned} \label{loc:K3} X=\mathrm{Tot}_S(K_S)=S \times \mathbb{A}^1_\mathbb{C}. \end{aligned}$$ The ($\mathbb{C}^{\ast}$-equivariant) DT category for the moduli stack $\mathcal{M}_X^{\sigma}(v)$ is defined in [@T] via categorical dimensional reduction $$\begin{aligned} \mathcal{DT}(\mathcal{M}_X^{\sigma}(v)):=D^b(\mathfrak{M}_S^{\sigma}(v)). 
\end{aligned}$$ We regard the subcategory $\mathbb{T}_S^{\sigma}(v)_w \subset \mathcal{DT}(\mathcal{M}_X^{\sigma}(v))$ as a categorification of the BPS invariant for the local K3 surface when $w$ is coprime to $v$. Indeed, Theorem [\[intro:thm:K\]](#intro:thm:K){reference-type="ref" reference="intro:thm:K"} implies that $$\begin{aligned} \label{isom:Ktop} \chi(K^{\rm{top}}(\mathbb{T}_S^{\sigma}(v)_w))=-\Omega^{\sigma}(v),\end{aligned}$$ where the right hand side is explicitly computed in terms of Hilbert schemes of points, see the next subsection. Thus the category $\mathbb{T}_S^{\sigma}(v)_w$ gives a solution to Problem [Problem 1](#prob1){reference-type="ref" reference="prob1"} for the local K3 surface [\[loc:K3\]](#loc:K3){reference-type="eqref" reference="loc:K3"}. ## Motivation from hyperkähler geometry Let $S$ be a K3 surface, and consider the local K3 surface ([\[loc:K3\]](#loc:K3){reference-type="ref" reference="loc:K3"}). The BPS invariant in this case is completely known: $$\begin{aligned} \label{BPS:S} \Omega^{\sigma}(v)=-\chi(S^{[\langle v, v \rangle/2+1]}), \end{aligned}$$ where, for a positive integer $n$, we denote by $S^{[n]}$ the Hilbert scheme of $n$ points on $S$. The above identity was conjectured by the second named author [@TodK3] and proved by Maulik--Thomas [@MTK3 Corollary 6.10]. The identity ([\[BPS:S\]](#BPS:S){reference-type="ref" reference="BPS:S"}) is an instance of *the $\chi$-independence phenomenon* (e.g. when $v=(0, \beta, \chi)$, the right hand side of ([\[BPS:S\]](#BPS:S){reference-type="ref" reference="BPS:S"}) is independent of $\chi$), see [@MR2892766 Conjecture 6.3], [@TodGV Conjecture 2.15] and [@MaulikShen; @KinjoKoseki] for recent developments on $\chi$-independence. When $v$ is primitive, the identity ([\[BPS:S\]](#BPS:S){reference-type="ref" reference="BPS:S"}) holds since $M_S^{\sigma}(v)$ is a holomorphic symplectic manifold [@Mu2] deformation equivalent to $S^{[\langle v, v\rangle/2+1]}$. 
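To make these identities concrete (our computation, using Göttsche's formula $\sum_{n\geqslant 0}\chi(S^{[n]})q^{n}=\prod_{m\geqslant 1}(1-q^{m})^{-24}$, which is not stated above): for a primitive $v_0$ with $\langle v_0, v_0\rangle=2$ we get $\Omega^{\sigma}(v_0)=-\chi(S^{[2]})=-324$, while for $v=2v_0$ only $k\in\{1,2\}$ contribute to the multiple cover formula, so $$\mathrm{DT}^{\sigma}(2v_0)=\Omega^{\sigma}(2v_0)+\frac{1}{4}\,\Omega^{\sigma}(v_0), \qquad \Omega^{\sigma}(2v_0)=-\chi\left(S^{[5]}\right),$$ since $\langle 2v_0, 2v_0\rangle/2+1=5$.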
However, it is much less obvious and mysterious when $v$ is not primitive. For non-primitive $v$, the good moduli space $M=M_S^{\sigma}(v)$ is a singular symplectic variety. O'Grady [@Ogra1] constructed a symplectic resolution of singularities $$\label{symplecticresolution} \widetilde{M} \to M$$ when $v=2v_0$ for a primitive $v_0$ with $\langle v_0, v_0 \rangle=2$. But this turned out to be the only exceptional case: Kaledin--Lehn--Sorger [@KaLeSo] proved that $M$ does not admit a symplectic resolution in all other cases with $\langle v, v\rangle \geqslant 2$. By [@Funil Proposition 1.1], the existence of a symplectic resolution [\[symplecticresolution\]](#symplecticresolution){reference-type="eqref" reference="symplecticresolution"} is equivalent to the existence of a crepant resolution of $M$, so $M$ does not admit a crepant resolution except in the example studied by O'Grady. Instead of a usual (geometric) crepant resolution, it is interesting to investigate whether $M$ admits a crepant resolution of singularities in a categorical sense: **Problem 2**. Is there a categorical version of a crepant resolution of $M_S^{\sigma}(v)$? Inspired by Theorem [\[intro:thm4\]](#intro:thm4){reference-type="ref" reference="intro:thm4"}, we regard the category $\mathbb{T}_S(v)_w^{\rm{red}}$ as a categorical version of a (twisted, étale local) crepant resolution of $M_S^{\sigma}(v)$. Note that, even in the situation of the O'Grady resolution [\[symplecticresolution\]](#symplecticresolution){reference-type="eqref" reference="symplecticresolution"} (that is, if $v=2v_0$ and $\langle v_0, v_0\rangle=2$), the category $\mathbb{T}_S(2v_0)_1^{\mathrm{red}}$ is different from $D^b(\widetilde{M})$ because its topological K-theory is a proper direct summand of the topological K-theory of $\widetilde{M}$, see Theorem [\[intro:thm:K\]](#intro:thm:K){reference-type="ref" reference="intro:thm:K"} and [@MR4338453]. 
In view of ([\[isom:Ktop\]](#isom:Ktop){reference-type="ref" reference="isom:Ktop"}) and ([\[BPS:S\]](#BPS:S){reference-type="ref" reference="BPS:S"}), we further expect $\mathbb{T}_S(v)_w^{\rm{red}}$ to be a "non-commutative hyperkähler variety" deformation equivalent to $S^{[\langle v, v\rangle/2+1]}$. In particular, it is natural to investigate how $\mathbb{T}_S(v)_w^{\rm{red}}$ is analogous to $D^b(M)$ for a smooth projective hyperkähler variety of $K3^{[\langle v, v\rangle/2+1]}$-type. More precisely, we may expect the following, which we regard as a categorical $\chi$-independence phenomenon: **Conjecture 4**. **(Conjecture [Conjecture 34](#conj:HK){reference-type="ref" reference="conj:HK"})*[\[conj:HK2\]]{#conj:HK2 label="conj:HK2"} For any $g\geqslant 0$ and any $w\in \mathbb{Z}$ coprime to $v$, the category $\mathbb{T}_S(v)_{w}^{\rm{red}}$ is deformation equivalent to $D^b\left(S^{[\langle v, v\rangle/2+1]}\right)$.* Recall that $\langle v_0, v_0\rangle=2g-2$. The above conjecture is easy to check for $g=0$, see Proposition [Proposition 38](#prop:g=0){reference-type="ref" reference="prop:g=0"}. For $g=1$, we conjecture that the category $\mathbb{T}_S(v)_w^{\rm{red}}$ is equivalent to the derived category of a K3 surface (possibly twisted and not necessarily isomorphic to $S$), and we show that this follows from an explicit computation of the quasi-BPS categories of $\mathbb{C}^3$ studied in [@PTzero; @PT1], see Conjectures [Conjecture 39](#conj:K3){reference-type="ref" reference="conj:K3"}, [\[conj:C2\]](#conj:C2){reference-type="ref" reference="conj:C2"} and Proposition [Proposition 41](#prop:conj){reference-type="ref" reference="prop:conj"}. In the forthcoming paper [@PaTobps], we prove Conjecture [\[conj:C2\]](#conj:C2){reference-type="ref" reference="conj:C2"} for $d=2$, which implies Conjecture [\[conj:HK2\]](#conj:HK2){reference-type="ref" reference="conj:HK2"} for $(d, w)=(2, 1)$. 
More precisely, there is an equivalence $D^b(S) \stackrel{\sim}{\to} \mathbb{T}_S(2v_0)_1^{\rm{red}}$ in this case. ## Acknowledgements We thank Tasuki Kinjo, Davesh Maulik, Yalong Cao, Junliang Shen, Georg Oberdieck, and Jørgen Rennemo for discussions related to this work. T. P. is grateful to Columbia University in New York and to the Max Planck Institute for Mathematics in Bonn for their hospitality and financial support during the writing of this paper. The project of this paper started when Y. T. was visiting Columbia University in April 2023. Y. T. thanks Columbia University for its hospitality. Y. T. is supported by the World Premier International Research Center Initiative (WPI initiative), MEXT, Japan, and a Grant-in-Aid for Scientific Research (No. 19H01779) from MEXT, Japan. # Preliminaries In this section, we introduce notation and review definitions related to stacks, matrix factorizations, and window categories. We also include a table with the most important notation we use later in the paper. ## Notations for (derived) stacks {#notation} All the spaces $\mathscr{X}$ considered are quasi-smooth (derived) stacks over $\mathbb{C}$, see [@T Subsection 3.1] for references. The classical truncation of $\mathscr{X}$ is denoted by $\mathscr{X}^{\rm{cl}}$. We denote by $\mathbb{L}_\mathscr{X}$ the cotangent complex of $\mathscr{X}$. For $G$ an algebraic group and $X$ a dg-scheme with an action of $G$, denote by $X/G$ the corresponding quotient stack. When $X$ is affine, we denote by $X/\!\!/G$ the quotient dg-scheme with dg-ring of regular functions $\mathcal{O}_X^G$. For a morphism $f \colon X\to Y$ and for a closed point $y \in Y$, we denote by $\widehat{X}_y$ the following base change $$\begin{aligned} \widehat{X}_y := X \times_{Y} \operatorname{Spec}\left(\widehat{\mathcal{O}}_{Y, y}\right).\end{aligned}$$ We call $\widehat{X}_y$ the *formal fiber*, though it is a scheme over a complete local ring rather than a formal scheme. 
When $X$ is a $G$-representation, $f\colon X\to Y:=X/\!\!/G$, and $y=0$, we omit the subscript $y$ from the above notation. We use the terminology of *good moduli spaces* of Alper, see [@MR3237451 Section 8] for examples of stacks with good moduli spaces. ## DG-categories For $\mathscr{X}$ a quasi-smooth stack, we denote by $D^b(\mathscr{X})$ the bounded derived category of coherent sheaves and by $\mathrm{Perf}(\mathscr{X})$ the subcategory of perfect complexes on $\mathscr{X}$, see Subsection [2.6](#subsec:qsmooth){reference-type="ref" reference="subsec:qsmooth"} for more details and for more categories of (quasi)coherent sheaves. ### Generation of dg-categories {#notation2} Any dg-category considered is a $\mathbb{C}$-linear pre-triangulated dg-category; in particular, its homotopy category is a triangulated category. For a pre-triangulated dg-category $\mathcal{D}$ and a full subcategory $\mathcal{C} \subset \mathcal{D}$, we say that $\mathcal{C}$ *classically generates* $\mathcal{D}$ if $\mathcal{D}$ coincides with the smallest thick pre-triangulated subcategory of $\mathcal{D}$ which contains $\mathcal{C}$. If $\mathcal{D}$ is furthermore cocomplete, then we say that $\mathcal{C}$ *generates* $\mathcal{D}$ if $\mathcal{D}$ coincides with the smallest thick pre-triangulated subcategory of $\mathcal{D}$ which contains $\mathcal{C}$ and is closed under taking colimits. We also recall some terminology related to strong generation. For a set of objects $\mathcal{S} \subset \mathcal{D}$, we denote by $\langle \mathcal{S} \rangle$ the smallest subcategory which contains $\mathcal{S}$ and is closed under shifts, finite direct sums, and direct summands. If $\mathcal{D}$ is cocomplete, we denote by $\langle \! \langle \mathcal{S} \rangle \! \rangle$ the smallest subcategory which contains $\mathcal{S}$ and is closed under shifts, arbitrary direct sums, and direct summands. 
For subcategories $\mathcal{C}_1, \mathcal{C}_2 \subset \mathcal{D}$, we denote by $\mathcal{C}_1 \star \mathcal{C}_2 \subset \mathcal{D}$ the smallest subcategory which contains all objects $E$ that fit into distinguished triangles $A_1 \to E \to A_2\to A_1[1]$ with $A_i \in \mathcal{C}_i$ for $i\in\{1,2\}$, and is closed under shifts, finite direct sums, and direct summands. We say that $\mathcal{D}$ is *strongly generated* by $C \in \mathcal{D}$ if $\mathcal{D}=\langle C \rangle^{\star n}$ for some $n\geqslant 1$. This is equivalent to $\operatorname{Ind}\mathcal{D}=\langle \! \langle C \rangle \! \rangle^{\star n}$ for some $n\geqslant 1$, see [@Neeman Proposition 1.9]. A dg-category $\mathcal{D}$ is called *regular* if it has a strong generator. A dg-category $\mathcal{D}$ is called *smooth* if the diagonal dg-module of $\mathcal{D}$ is perfect. It is proved in [@VL Lemma 3.5, 3.6] that if $\mathcal{D}$ is smooth, then $\mathcal{D}$ is regular. ### Semiorthogonal decompositions Let $R$ be a set. Consider a set $O\subset R\times R$ such that for any $i, j\in R$ we have $(i,j)\in O$, or $(j,i)\in O$, or both $(i,j)\in O$ and $(j,i)\in O$. Let $\mathbb{T}$ be a pre-triangulated dg-category. We will construct semiorthogonal decompositions $$\label{sodinitial} \mathbb{T}=\langle \mathbb{A}_i \mid i \in R \rangle$$ with summands pre-triangulated subcategories $\mathbb{A}_i$ indexed by $i\in R$ such that for any $i,j\in R$ with $(i, j)\in O$ and for any objects $\mathcal{A}_i\in\mathbb{A}_i$, $\mathcal{A}_j\in\mathbb{A}_j$, we have $\operatorname{Hom}_{\mathbb{T}}(\mathcal{A}_i,\mathcal{A}_j)=0$. Let $\pi\colon \mathscr{X}\to S$ be a morphism from a quasi-smooth stack to a scheme $S$ and assume $\mathbb{T}$ is a subcategory of $D^b(\mathscr{X})$. We say the decomposition [\[sodinitial\]](#sodinitial){reference-type="eqref" reference="sodinitial"} is *$S$-linear* if $\mathbb{A}_i\otimes \pi^*\mathrm{Perf}(S)\subset \mathbb{A}_i$. 
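A minimal example of such a decomposition (standard, and included only to fix ideas): take $R=\{1,2\}$, $O=\{(2,1)\}$, $\mathbb{T}=D^b(\mathbb{P}^1)$, and $$D^b(\mathbb{P}^1)=\langle \mathbb{A}_1, \mathbb{A}_2 \rangle, \qquad \mathbb{A}_1=\langle \mathcal{O}\rangle, \quad \mathbb{A}_2=\langle \mathcal{O}(1)\rangle.$$ The required vanishing $\operatorname{Hom}_{\mathbb{T}}(\mathcal{A}_2, \mathcal{A}_1)=0$ holds since $\operatorname{Hom}^{\bullet}(\mathcal{O}(1), \mathcal{O})=H^{\bullet}(\mathbb{P}^1, \mathcal{O}(-1))=0$.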
## Graded matrix factorizations {#subsec:graded} References for this subsection are [@T3 Section 2.2], [@T Section 2.2], [@MR3895631 Section 2.3], [@PoVa3 Section 1]. Let $G$ be an algebraic group and let $Y$ be a smooth affine scheme with an action of $G$. Let $\mathscr{Y}=Y/G$ be the corresponding quotient stack and let $f$ be a regular function $$f \colon \mathscr{Y}\to \mathbb{C}.$$ Assume that there exists an extra action of $\mathbb{C}^{\ast}$ on $Y$ which commutes with the action of $G$ on $Y$, such that the subgroup $\mathbb{Z}/2 \subset \mathbb{C}^{\ast}$ acts trivially on $Y$ and $f$ has weight two with respect to this $\mathbb{C}^{\ast}$-action. Consider the category of graded matrix factorizations $$\begin{aligned} \mathrm{MF}^{\mathrm{gr}}(\mathscr{Y}, f).\end{aligned}$$ Its objects are pairs $(P, d_P)$ with $P$ a $G\times\mathbb{C}^*$-equivariant coherent sheaf on $Y$ and $d_P \colon P\to P(1)$ a $G\times\mathbb{C}^*$-equivariant morphism satisfying $d_P^2=f$. Here $(1)$ is the twist by the character $\mathrm{pr}_2 \colon G \times \mathbb{C}^{\ast} \to \mathbb{C}^{\ast}$. Note that as the $\mathbb{C}^{\ast}$-action is trivial on $\mathbb{Z}/2$, we have the induced action of $\mathbb{C}^{\star}=\mathbb{C}^{\ast}/(\mathbb{Z}/2)$ on $Y$, and $f$ has weight one with respect to this $\mathbb{C}^{\star}$-action. The objects of $\mathrm{MF}^{\mathrm{gr}}(\mathscr{Y}, f)$ can be alternatively described as tuples $$\begin{aligned} \label{tuplet:graded} (E, F, \alpha \colon E\to F(1)', \beta \colon F\to E),\end{aligned}$$ where $E$ and $F$ are $G\times\mathbb{C}^{\star}$-equivariant coherent sheaves on $Y$, $(1)'$ is the twist by the character $G \times \mathbb{C}^{\star} \to \mathbb{C}^{\star}$, and $\alpha$ and $\beta$ are $\mathbb{C}^{\star}$-equivariant morphisms such that $\alpha\circ\beta$ and $\beta\circ\alpha$ are multiplication by $f$. 
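A toy example of the data ([\[tuplet:graded\]](#tuplet:graded){reference-type="ref" reference="tuplet:graded"}) (our illustration; the precise grading conventions are suppressed): let $G$ be trivial and $Y=\operatorname{Spec}\mathbb{C}[x,y]$, with $\mathbb{C}^{\ast}$ acting with weight two on $x$ and weight zero on $y$, so that $f=xy$ has weight two and $\mathbb{Z}/2\subset \mathbb{C}^{\ast}$ acts trivially. Then $$\left(E=\mathcal{O}_Y, \ F=\mathcal{O}_Y, \ \alpha=x, \ \beta=y\right)$$ is a graded matrix factorization of $f$, since $\alpha\circ\beta$ and $\beta\circ\alpha$ are both multiplication by $xy=f$.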
For a pre-triangulated subcategory $\mathbb{M}$ of $D^b(\mathscr{Y})$, define $\mathrm{MF}^{\rm{gr}}(\mathbb{M}, f)$ as the full subcategory of $\mathrm{MF}^{\rm{gr}}(\mathscr{Y}, f)$ with objects totalizations of pairs $(P, d_{P})$ with $P \in \mathbb{M}$ equipped with $\mathbb{C}^{\ast}$-equivariant structure, see [@PTzero Subsection 2.6.2]. If $\mathbb{M}$ is generated by a set of vector bundles $\{\mathscr{V}_i\}_{i\in I}$ on $\mathscr{Y}$, then $\mathrm{MF}^{\rm{gr}}(\mathbb{M}, f)$ is generated by matrix factorizations whose factors are direct sums of vector bundles from $\{\mathscr{V}_i\}_{i\in I}$, see [@PTzero Lemma 2.3]. Functoriality of categories of graded matrix factorizations for pullback and proper pushforward is discussed in [@PoVa3]. In Subsection [7.2](#subsec:trace){reference-type="ref" reference="subsec:trace"}, we will also consider the category $D^{\rm{gr}}(Y)$ for a possibly singular affine variety $Y$ with a $\mathbb{C}^{\ast}$-action as above. It consists of objects ([\[tuplet:graded\]](#tuplet:graded){reference-type="ref" reference="tuplet:graded"}) with $f=0$, so its definition is the same as $\mathrm{MF}^{\rm{gr}}(Y, 0)$, but when $Y$ is singular an object ([\[tuplet:graded\]](#tuplet:graded){reference-type="ref" reference="tuplet:graded"}) may not be isomorphic to one in which $E, F$ are locally free of finite rank. See [@EfPo] for factorization categories over possibly singular varieties. Note that if the $\mathbb{C}^{\ast}$-action on $Y$ is trivial, then $D^{\rm{gr}}(Y)=D^b(Y)$. ## The Koszul equivalence Let $Y$ be a smooth affine scheme with an action of an algebraic group $G$, let $\mathscr{Y}=Y/G$, and let $V$ be a $G$-equivariant vector bundle on $Y$. We always assume that $Y$ is either of finite type over $\mathbb{C}$, or is a formal fiber of a map $X \to X/\!\!/H$ for a finite type scheme $X$ and an algebraic group $H$ as in Subsection [2.1](#notation){reference-type="ref" reference="notation"}. 
Let $\mathbb{C}^*$ act on the fibers of $V$ with weight $2$ and consider a section $s\in \Gamma(Y, V)$. It induces a map $\partial \colon V^{\vee} \to \mathcal{O}_Y$. Let $s^{-1}(0)$ be the derived zero locus of $s$ with dg-algebra of regular functions $$\begin{aligned} \label{Kscheme} \mathcal{O}_{s^{-1}(0)}:=\mathcal{O}_Y\left[V^{\vee}[1];\partial\right].\end{aligned}$$ Consider the quotient (quasi-smooth) stack $$\label{koszuldef} \mathscr{P}:=s^{-1}(0)/G.$$ We call $\mathscr{P}$ the *Koszul stack* associated with $(Y, V, s, G)$. There is a natural inclusion $$j\colon \mathscr{P}\hookrightarrow\mathscr{Y}.$$ The section $s$ also induces the regular function $$\label{defreg} f\colon \mathscr{V}^{\vee}:=\text{Tot}_Y\left(V^{\vee}\right)/G\to\mathbb{C}$$ defined by $f(y,v)=\langle s(y), v \rangle$ for $y\in Y(\mathbb{C})$ and $v\in V^{\vee}|_y$. Consider the category of graded matrix factorizations $\text{MF}^{\text{gr}}\left(\mathscr{V}^{\vee}, f\right)$ with respect to the $\mathbb{C}^*$-action mentioned above. The Koszul equivalence, also called dimensional reduction in the literature, says the following: **Theorem 5**. **([@I; @Hirano; @T])*[\[thm:Kduality\]]{#thm:Kduality label="thm:Kduality"} There is an equivalence $$\begin{aligned} \label{equiv:Phi} \Theta \colon D^b(\mathscr{P}) \stackrel{\sim}{\to} \mathrm{MF}^{\mathrm{gr}}(\mathscr{V}^{\vee}, f) \end{aligned}$$ given by $\Theta(-)=\mathcal{K}\otimes_{\mathcal{O}_{\mathscr{P}}}(-)$, where $\mathcal{K}$ is the Koszul factorization, see [@T Theorem 2.3.3].* We will use the following lemma: **Lemma 6**. **([@PTquiver Lemma 2.6])*[\[lem:genJ\]]{#lem:genJ label="lem:genJ"} Let $\{V_a\}_{a\in A}$ be a set of $G$-representations and let $\mathbb{S} \subset \mathrm{MF}^{\rm{gr}}(\mathscr{V}^{\vee}, f)$ be the subcategory generated by matrix factorizations whose factors are direct sums of vector bundles $\mathcal{O}_{\mathscr{V}^{\vee}} \otimes V_a$. 
Then an object $\mathcal{E} \in D^b(\mathscr{P})$ satisfies $\Theta(\mathcal{E}) \in \mathbb{S}$ if and only if $j_{\ast}\mathcal{E} \in D^b(\mathscr{Y})$ is generated by $\mathcal{O}_{\mathscr{Y}} \otimes V_a$ for $a\in A$.* ## Window categories {#subsection:window} ### Attracting stacks {#attractingloci} Let $Y$ be an affine variety with an action of a reductive group $G$. Let $\lambda$ be a cocharacter of $G$. Let $G^\lambda$ and $G^{\lambda\geqslant 0}$ be the Levi and parabolic groups associated to $\lambda$. Let $Y^\lambda\subset Y$ be the closed subvariety of $\lambda$-fixed points. Consider the attracting variety $$Y^{\lambda\geqslant 0}:=\{y\in Y|\,\lim_{t\to 0}\lambda(t)\cdot y\in Y^\lambda\}\subset Y.$$ Consider the attracting and fixed stacks $$\label{attracting} \mathscr{Z}:=Y^\lambda/G^\lambda \xleftarrow{q}\mathscr{S}:=Y^{\lambda\geqslant 0}/G^{\lambda\geqslant 0}\xrightarrow{p}\mathscr{Y}.$$ The map $p$ is proper. Kempf-Ness strata are connected components of certain attracting stacks $\mathscr{S}$, and the map $p$ restricted to a Kempf-Ness stratum is a closed immersion, see [@halp Section 2.1]. The attracting stacks also appear in the definition of Hall algebras [@P0] (for $Y$ an affine space), where the Hall product is induced by the functor $$\label{Hallproductquiver} \ast:=p_*q^*\colon D^b(\mathscr{Z})\to D^b(\mathscr{Y}).$$ In this case, the map $p$ may not be a closed immersion. Let $T \subset G$ be a maximal torus and let $\lambda$ be a cocharacter $\lambda \colon \mathbb{C}^{\ast} \to T$. For a $G$-representation $Y$, the attracting variety $Y^{\lambda \geqslant 0} \subset Y$ coincides with the sub $T$-representation generated by weights which pair non-negatively with $\lambda$. We may abuse notation and denote by $\langle \lambda, Y^{\lambda \geqslant 0} \rangle:= \langle \lambda, \det Y^{\lambda \geqslant 0} \rangle$, where $\det Y^{\lambda \geqslant 0}$ is the sum of $T$-weights of $Y^{\lambda \geqslant 0}$. 
### The definition of window categories Let $Y$ be an affine variety with an action of a reductive group $G$ and a linearization $\ell$. Consider the stacks $$j\colon\mathscr{Y}^{\ell\text{-ss}}:=Y^{\ell\text{-ss}}/G\hookrightarrow\mathscr{Y}:=Y/G.$$ We review the construction of window categories of $D^b(\mathscr{Y})$ which are equivalent to $D^b(\mathscr{Y}^{\ell\text{-ss}})$ via the restriction map, due to Segal [@MR2795327], Halpern-Leistner [@halp], and Ballard--Favero--Katzarkov [@MR3895631]. We follow the presentation from [@halp]. After also fixing a Weyl-invariant norm on the cocharacter lattice, the unstable locus $\mathscr{Y}\setminus \mathscr{Y}^{\ell\text{-ss}}$ admits a stratification into Kempf-Ness strata $\mathscr{S}_i$ for $i\in I$ a finite ordered set: $$\mathscr{Y}\setminus \mathscr{Y}^{\ell\text{-ss}}=\bigsqcup_{i\in I}\mathscr{S}_i.$$ A Kempf-Ness stratum $\mathscr{S}_i$ is the attracting stack in $\mathscr{Y} \setminus \bigsqcup_{j<i}\mathscr{S}_j$ for a cocharacter $\lambda_i$, with the fixed stack $\mathscr{Z}_i:=\mathscr{S}_i^{\lambda_i}$. Let $N_{\mathscr{S}_i/\mathscr{Y}}$ be the normal bundle of $\mathscr{S}_i$ in $\mathscr{Y}$. Define the width of the window categories $$\eta_i:=\left\langle \lambda_i, \det\left(N^{\vee}_{\mathscr{S}_i/\mathscr{Y}}|_{\mathscr{Z}_i}\right) \right\rangle.$$ For a choice of real numbers $m_{\bullet}=(m_i)_{i\in I}\in \mathbb{R}^I$, define the category $$\label{def:window} \mathbb{G}^\ell_{m_{\bullet}}:=\{\mathcal{F}\in D^b(\mathscr{Y})\text{ such that } \mathrm{wt}_{\lambda_i}(\mathcal{F}|_{\mathscr{Z}_i})\subset [ m_i, m_i+\eta_i) \text{ for all }i\in I\}.$$ In the above, $\mathrm{wt}_{\lambda_i}(\mathcal{F}|_{\mathscr{Z}_i})$ is the set of $\lambda_i$-weights on $\mathcal{F}|_{\mathscr{Z}_i}$. 
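A toy case of the definition ([\[def:window\]](#def:window){reference-type="ref" reference="def:window"}) (standard; the signs depend on conventions for $\lambda_1$): let $Y=\mathbb{A}^1$ with $G=\mathbb{C}^{\ast}$ acting with weight one, and let $\ell$ be chosen so that $Y^{\ell\text{-ss}}=\mathbb{A}^1\setminus\{0\}$. There is a single Kempf-Ness stratum $\mathscr{S}_1=\mathscr{Z}_1=\{0\}/\mathbb{C}^{\ast}$, the normal bundle is one dimensional, and $\eta_1=1$. For an integer $m_1=m$, the window $$\mathbb{G}^\ell_{m}=\{\mathcal{F}\in D^b(\mathbb{A}^1/\mathbb{C}^{\ast})\text{ such that } \mathrm{wt}_{\lambda_1}(\mathcal{F}|_{\mathscr{Z}_1})\subset [m, m+1)\}$$ is generated by the single equivariant line bundle of $\lambda_1$-weight $m$ at the origin, and it restricts isomorphically to $D^b\big((\mathbb{A}^1\setminus\{0\})/\mathbb{C}^{\ast}\big)\simeq D^b(\mathrm{pt})$.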
Then [@halp Theorem 2.10] says that the restriction functor $j^*$ induces an equivalence of categories: $$\label{jequiv} j^*\colon \mathbb{G}^\ell_{m_{\bullet}}\xrightarrow{\sim} D^b\big(\mathscr{Y}^{\ell\text{-ss}}\big)$$ for any choice of real numbers $m_{\bullet}=(m_i)_{i\in I}\in \mathbb{R}^I$. ## Quasi-smooth derived stacks {#subsec:qsmooth} ### Derived categories of (quasi-)coherent sheaves Let $\mathfrak{M}$ be a derived stack over $\mathbb{C}$ and let $\mathcal{M}$ be its classical truncation. Let $\mathbb{L}_{\mathfrak{M}}$ be the cotangent complex of $\mathfrak{M}$. The stack $\mathfrak{M}$ is called *quasi-smooth* if for all closed points $x\to \mathcal{M}$, the restriction $\mathbb{L}_\mathfrak{M}|_x$ has cohomological amplitude in $[-1, 1]$. By [@BBBJ Theorem 2.8], a stack $\mathfrak{M}$ is quasi-smooth if and only if it is a $1$-stack and any point of $\mathfrak{M}$ lies in the image of a $0$-representable smooth morphism $$\label{alpha} \alpha \colon \mathscr{U}\to\mathfrak{M}$$ for a Koszul scheme $\mathscr{U}$ as in [\[Kscheme\]](#Kscheme){reference-type="eqref" reference="Kscheme"}. Let $D_{\rm{qc}}(\mathscr{U})$ be the derived category of dg-modules over $\mathcal{O}_{\mathscr{U}}$ and let $D^b(\mathscr{U}) \subset D_{\rm{qc}}(\mathscr{U})$ be the subcategory of objects with bounded coherent cohomologies. Further, let $\operatorname{Ind}D^b(\mathscr{U})$ be the ind-completion of $D^b(\mathscr{U})$ [@MR3136100]. 
For a quasi-smooth stack $\mathfrak{M}$, the dg-categories $D_{\rm{qc}}(\mathfrak{M})$, $D^b(\mathfrak{M})$, and $\operatorname{Ind}D^b(\mathfrak{M})$ are defined to be limits in the $\infty$-category of smooth morphisms [\[alpha\]](#alpha){reference-type="eqref" reference="alpha"}, see [@T Subsection 3.1.1], [@MR3136100]: $$\begin{aligned} D_{\rm{qc}}(\mathfrak{M})=\lim_{\mathscr{U} \to \mathfrak{M}} D_{\rm{qc}}(\mathscr{U}), \ D^b(\mathfrak{M})=\lim_{\mathscr{U} \to \mathfrak{M}} D^b(\mathscr{U}), \ \operatorname{Ind}D^b(\mathfrak{M})=\lim_{\mathscr{U} \to \mathfrak{M}} \operatorname{Ind}D^b(\mathscr{U}).\end{aligned}$$ The category $\operatorname{Ind}D^b(\mathfrak{M})$ is a module over $D_{\rm{qc}}(\mathfrak{M})$ via the tensor product. For $\mathcal{E}_1, \mathcal{E}_2 \in \operatorname{Ind}D^b(\mathfrak{M})$, there exists an internal Hom object, see [@MR3037900 Remark 3.4.5], $$\begin{aligned} \mathcal{H}om(\mathcal{E}_1, \mathcal{E}_2) \in D_{\rm{qc}}(\mathfrak{M}),\end{aligned}$$ such that for any $\mathcal{A} \in D_{\rm{qc}}(\mathfrak{M})$ we have $$\begin{aligned} \operatorname{Hom}_{D_{\rm{qc}}(\mathfrak{M})}(\mathcal{A}, \mathcal{H}om(\mathcal{E}_1, \mathcal{E}_2)) \cong \operatorname{Hom}_{\operatorname{Ind}D^b(\mathfrak{M})}(\mathcal{A} \otimes \mathcal{E}_1, \mathcal{E}_2). \end{aligned}$$ If $\mathfrak{M}$ is QCA (quasi-compact and with affine automorphism groups) [@MR3037900 Definition 1.1.8], then $\operatorname{Ind}D^b(\mathfrak{M})$ is compactly generated with compact objects $D^b(\mathfrak{M})$, see [@MR3037900 Theorem 3.3.5]. ### Étale and formal local structures along good moduli spaces {#subsection:etale} Let $\mathfrak{M}$ be a quasi-smooth stack over $\mathbb{C}$ and let $\mathcal{M}$ be its classical truncation. Suppose that $\mathcal{M}$ admits a good moduli space map $$\pi\colon \mathcal{M} \to M,$$ see [@MR3237451] for the notion of a good moduli space. In particular, $M$ is an algebraic space and $\pi$ is a quasi-compact morphism. 
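The basic example of a good moduli space to keep in mind (standard, see [@MR3237451]): for a reductive group $G$ acting on an affine scheme $X=\operatorname{Spec}A$, the map $$\pi \colon X/G \to X/\!\!/G=\operatorname{Spec}A^{G}$$ is a good moduli space map, and closed points of $X/\!\!/G$ parametrize closed $G$-orbits in $X$.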
For each point in $M$, there are an étale neighborhood $U \to M$ and Cartesian squares: $$\begin{aligned} \label{dia:fnbd} \xymatrix{ \mathfrak{M}_U \ar[d] \ar@{}[rd]|\square & \ar@<-0.3ex>@{_{(}->}[l] \mathcal{M}_U \ar[r] \ar[d] \ar@{}[rd]|\square & U \ar[d] \\ \mathfrak{M} & \ar@<-0.3ex>@{_{(}->}[l] \mathcal{M} \ar[r] & M, }\end{aligned}$$ where each vertical arrow is étale and $\mathfrak{M}_U$ is equivalent to a Koszul stack $\mathscr{P}=s^{-1}(0)/G$ as in ([\[koszuldef\]](#koszuldef){reference-type="ref" reference="koszuldef"}), see [@T Subsection 3.1.4], [@halpK32 Theorem 4.2.3], [@AHR]. Similarly, for each closed point $y \in M$, there exist Cartesian squares, see [@T Subsection 3.1.4]: $$\begin{aligned} \label{dia:fnbd2} \xymatrix{ \widehat{\mathfrak{M}}_y \ar[d] \ar@{}[rd]|\square & \ar@<-0.3ex>@{_{(}->}[l] \widehat{\mathcal{M}}_y \ar[r] \ar[d] \ar@{}[rd]|\square & \operatorname{Spec}\widehat{\mathcal{O}}_{M, y} \ar[d] \\ \mathfrak{M} & \ar@<-0.3ex>@{_{(}->}[l] \mathcal{M} \ar[r] & M. }\end{aligned}$$ ### $(-1)$-shifted cotangent stacks {#subsec:shifted} Let $\mathfrak{M}$ be a quasi-smooth stack. Let $\mathbb{T}_\mathfrak{M}$ be the tangent complex of $\mathfrak{M}$, which is the dual complex to the cotangent complex $\mathbb{L}_\mathfrak{M}$. We denote by $\Omega_{\mathfrak{M}}[-1]$ the *$(-1)$-shifted cotangent stack* of $\mathfrak{M}$: $$\begin{aligned} \Omega_\mathfrak{M}[-1]:=\mathrm{Spec}_\mathfrak{M}\left(\mathrm{Sym}(\mathbb{T}_\mathfrak{M}[1])\right).\end{aligned}$$ Consider the projection map $$\label{p0} p_0\colon \mathcal{N}:=\Omega_\mathfrak{M}[-1]^{\rm{cl}}\to \mathfrak{M}.$$ For a Koszul stack $\mathscr{P}$ as in [\[koszuldef\]](#koszuldef){reference-type="eqref" reference="koszuldef"}, recall the function $f$ from [\[defreg\]](#defreg){reference-type="eqref" reference="defreg"} and consider the critical locus $\mathrm{Crit}(f)\subset \mathscr{V}^{\vee}$. 
In this case, the map $p_0$ is the natural projection $$\label{p1} p_0\colon \mathrm{Crit}(f)=\Omega_\mathscr{P}[-1]^{\rm{cl}}\to \mathscr{P}.$$ For an object $\mathcal{F} \in D^b(\mathfrak{M})$, Arinkin--Gaitsgory [@AG] defined the notion of singular support denoted by $$\begin{aligned} \mathrm{Supp}^{\rm{sg}}(\mathcal{F}) \subset \mathcal{N}. \end{aligned}$$ The definition is compatible with maps $\alpha$ as in [\[alpha\]](#alpha){reference-type="eqref" reference="alpha"}, see [@AG Section 7]. Consider the group $\mathbb{C}^*$ scaling the fibers of the map $p_0$. A closed substack $\mathscr{Z}$ of $\mathcal{N}$ is called *conical* if it is closed under the action of $\mathbb{C}^*$. The singular support $\mathrm{Supp}^{\rm{sg}}(\mathcal{F})$ of $\mathcal{F}$ is a conical closed substack of $\mathcal{N}$. For a given conical closed substack $\mathscr{Z} \subset \mathcal{N}$, we denote by $\mathcal{C}_{\mathscr{Z}} \subset D^b(\mathfrak{M})$ the subcategory of objects whose singular supports are contained in $\mathscr{Z}$. Consider a Koszul stack $\mathscr{P}$ as in [\[koszuldef\]](#koszuldef){reference-type="eqref" reference="koszuldef"} and recall the Koszul equivalence $\Theta$ from [\[equiv:Phi\]](#equiv:Phi){reference-type="eqref" reference="equiv:Phi"}. Under $\Theta$, the singular support of $\mathcal{F}\in D^b(\mathscr{P})$ corresponds to the support $\mathscr{Z}$ of the matrix factorization $\Theta(\mathcal{F})$, namely the minimal closed substack $\mathscr{Z}\subset \mathrm{Crit}(f)$ such that $\Theta(\mathcal{F})|_{\mathscr{V}^{\vee}\setminus \mathscr{Z}}=0$ in $\mathrm{MF}^{\mathrm{gr}}(\mathscr{V}^{\vee}\setminus \mathscr{Z}, f)$, see [@T Subsection 2.3.9]. ## The window theorem for quasi-smooth stacks {#windowfirst} We review the theory of window categories for singular support quotients of quasi-smooth stacks [@T Chapter 5], which itself is inspired by Halpern-Leistner's theory of window categories for $0$-shifted symplectic derived stacks [@halpK32].
We continue with the notation from the previous subsection. Let $\mathfrak{M}$ be a quasi-smooth stack and assume throughout this subsection that its classical truncation $\mathcal{M}$ admits a good moduli space $\mathcal{M} \to M.$ Let $\ell$ be a line bundle on $\mathcal{M}$ and let $b \in H^4(\mathcal{M}, \mathbb{Q})$ be a positive definite class, see [@Halpinstab Definition 3.7.6]. We also use the same symbols $(\ell, b)$ for $p_0^{\ast}\ell \in \mathrm{Pic}(\mathcal{N})$ and $p_0^{\ast}b \in H^4(\mathcal{N}, \mathbb{Q})$. Then there is a $\Theta$-stratification with respect to $(\ell, b)$: $$\begin{aligned} \label{theta:N} \mathcal{N}=\mathscr{S}_1 \sqcup \cdots \sqcup \mathscr{S}_N\sqcup \mathcal{N}^{\ell\text{-ss}}\end{aligned}$$ with centers $\mathscr{Z}_i\subset\mathscr{S}_i$, see [@Halpinstab Theorem 5.2.3, Proposition 5.3.3]. In the above situation, an analogue of the window theorem is proved in [@Totheta Theorem 1.1], [@T Theorem 5.3.13] (which generalizes [@halpK32 Theorem 3.3.1] in the case that $\mathfrak{M}$ is $0$-shifted symplectic): **Theorem 7**. **([@T; @Totheta])*[\[thm:window:M\]]{#thm:window:M label="thm:window:M"} In addition to the above, suppose that $\mathcal{M} \to M$ satisfies the formal neighborhood theorem, see below. Then for each $m_{\bullet}=(m_i)_{i=1}^N\in \mathbb{R}^N$, there is a subcategory $\mathbb{W}(\mathfrak{M})^\ell_{m_{\bullet}} \subset D^b(\mathfrak{M})$ such that the composition $$\begin{aligned} \mathbb{W}(\mathfrak{M})^\ell_{m_{\bullet}} \subset D^b(\mathfrak{M}) \twoheadrightarrow D^b(\mathfrak{M})/\mathcal{C}_{\mathscr{Z}} \end{aligned}$$ is an equivalence, where $\mathscr{Z}:=\mathcal{N} \setminus \mathcal{N}^{\ell\text{-ss}}$.* **Remark 8**. When $\mathcal{N}$ is a (global) quotient stack $\mathcal{N}=Y/G$ for a reductive algebraic group $G$, a $\Theta$-stratification [\[theta:N\]](#theta:N){reference-type="eqref" reference="theta:N"} is the same as a Kempf-Ness stratification [@Halpinstab Example 0.0.5]. 
The class $b$ is then constructed as the pull-back of the class in $H^4(BG, \mathbb{Q})$ corresponding to the chosen positive definite form [@Halpinstab Example 5.3.4]. **Remark 9**. Suppose that $\mathbb{L}_{\mathfrak{M}}$ is self-dual, e.g. $\mathfrak{M}$ is 0-shifted symplectic. In this case, we have $\mathcal{N}^{\ell\text{-ss}}=\Omega_{\mathfrak{M}^{\ell\text{-ss}}}[-1]^{\rm{cl}}$, which easily follows from [@halpK32 Lemma 4.3.22]. Then we have the equivalence, see [@T Lemma 3.2.9]: $$\begin{aligned} D^b(\mathfrak{M})/\mathcal{C}_{\mathscr{Z}} \stackrel{\sim}{\to} D^b(\mathfrak{M}^{\ell\text{-ss}}). \end{aligned}$$ We now explain the meaning of "the formal neighborhood theorem" in the statement of Theorem [\[thm:window:M\]](#thm:window:M){reference-type="ref" reference="thm:window:M"}, see [@T Definition 5.2.3]. For a closed point $y \in M$, denote also by $y \in \mathcal{M}$ the unique closed point in the fiber of $\mathcal{M} \to M$ at $y$. Set $G_y:=\mathrm{Aut}(y)$, which is a reductive algebraic group. Let $\widehat{\mathcal{M}}_y$ be the formal fiber of $\mathcal{M} \to M$ at $y$. Let $\widehat{\mathcal{H}}^0(\mathbb{T}_{\mathcal{M}}|_{y})$ be the formal fiber at the origin of $\mathcal{H}^0(\mathbb{T}_{\mathcal{M}}|_{y})\to \mathcal{H}^0(\mathbb{T}_{\mathcal{M}}|_{y})\sslash G_y$, and define $\widehat{\mathcal{H}}^0(\mathbb{T}_{\mathfrak{M}}|_{y})$ similarly, see also the convention from Subsection [2.1](#notation){reference-type="ref" reference="notation"}. Then the formal neighborhood theorem says that there is a $G_y$-equivariant morphism $$\begin{aligned} \kappa_y \colon \widehat{\mathcal{H}}^0(\mathbb{T}_{\mathcal{M}}|_{y}) \to \mathcal{H}^1(\mathbb{T}_{\mathcal{M}}|_{y}) \end{aligned}$$ such that, by setting $\mathcal{U}_y$ to be the classical zero locus of $\kappa_y$, there is an isomorphism $\widehat{\mathcal{M}}_y \cong \mathcal{U}_y/G_y$. Let $\mathfrak{U}_y$ be the derived zero locus of $\kappa_y$.
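As a sanity check (our remark, an immediate special case): if $\mathcal{H}^1(\mathbb{T}_{\mathcal{M}}|_{y})=0$, then $\kappa_y=0$ and the formal neighborhood theorem reduces to a Luna-type étale slice statement $$\widehat{\mathcal{M}}_y \cong \widehat{\mathcal{H}}^0(\mathbb{T}_{\mathcal{M}}|_{y})/G_y,$$ i.e. formally locally along its good moduli space, the stack is the quotient of the formal tangent space at $y$ by the stabilizer $G_y$.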
Then, by replacing $\kappa_y$ if necessary, $\widehat{\mathfrak{M}}_y$ is equivalent to $\mathfrak{U}_y/G_y$, see [@T Lemma 5.2.5]. Below we give a formal local description of $\mathbb{W}(\mathfrak{M})_{m_{\bullet}}^\ell$. Consider the pair of a smooth stack and a regular function $(\mathscr{X}_y, f_y)$: $$\begin{aligned} \mathscr{X}_y:=\left(\widehat{\mathcal{H}}^0(\mathbb{T}_{\mathfrak{M}}|_{y}) \times \mathcal{H}^1(\mathbb{T}_{\mathfrak{M}}|_{y})^{\vee}\right)/G_y \stackrel{f_y}{\to} \mathbb{C},\end{aligned}$$ where $f_y(u, v)=\langle \kappa_y(u), v \rangle$. From ([\[p1\]](#p1){reference-type="ref" reference="p1"}), the critical locus of $f_y$ is isomorphic to the classical truncation of the $(-1)$-shifted cotangent stack over $\widehat{\mathfrak{M}}_y$, so it is isomorphic to the formal fiber $\widehat{\mathcal{N}}_y$ of $\mathcal{N} \to \mathcal{M} \to M$ at $y$. The pull-back of the $\Theta$-stratification ([\[theta:N\]](#theta:N){reference-type="ref" reference="theta:N"}) to $\widehat{\mathcal{N}}_y$ gives a Kempf-Ness stratification $$\begin{aligned} \widehat{\mathcal{N}}_y=\widehat{\mathscr{S}}_{1, y} \sqcup \cdots \sqcup \widehat{\mathscr{S}}_{N, y} \sqcup \widehat{\mathcal{N}}_y^{\ell\text{-ss}}\end{aligned}$$ with centers $\widehat{\mathscr{Z}}_{i, y}\subset \widehat{\mathscr{S}}_{i, y}$ and one-parameter subgroups $\lambda_i \colon \mathbb{C}^{\ast} \to G_y$. By the Koszul equivalence, see Theorem [\[thm:Kduality\]](#thm:Kduality){reference-type="ref" reference="thm:Kduality"}, there is an equivalence: $$\begin{aligned} \Theta_y \colon D^b(\widehat{\mathfrak{M}}_y) \stackrel{\sim}{\to} \mathrm{MF}^{\text{gr}}(\mathscr{X}_y, f_y).
\end{aligned}$$ Then the subcategory $\mathbb{W}(\mathfrak{M})_{m_{\bullet}}^\ell$ in Theorem [\[thm:window:M\]](#thm:window:M){reference-type="ref" reference="thm:window:M"} is characterized as follows: an object $\mathcal{E} \in D^b(\mathfrak{M})$ is an object of $\mathbb{W}(\mathfrak{M})^\ell_{m_{\bullet}}$ if and only if, for any closed point $y \in M$, we have $$\begin{aligned} \label{PhiEy} \Theta_{y}(\mathcal{E}|_{\widehat{\mathfrak{M}}_y}) \in \mathrm{MF}^{\text{gr}}(\mathbb{G}^\ell_{m_{\bullet}'}, f_y), \ m_i'=m_i-\left\langle \lambda_i, \det\left(\mathcal{H}^1(\mathbb{T}_{\mathfrak{M}}|_{y})^{\lambda_i>0}\right) \right\rangle. \end{aligned}$$ The category $\mathbb{G}^{\ell}_{m'_{\bullet}}$ is the window category [\[def:window\]](#def:window){reference-type="eqref" reference="def:window"} for the weights $(m'_i)_{i=1}^N$ and the line bundle $\ell$. The difference between $m_i$ and $m_i'$ is due to the discrepancy of categorical Hall products on $\mathfrak{M}_y$ and $\mathscr{X}_y$, see [@P2 Proposition 3.1]. ## Intrinsic window subcategory We continue to consider a quasi-smooth derived stack $\mathfrak{M}$ whose classical truncation $\mathcal{M}$ admits a good moduli space $\mathcal{M} \to M$. We say that $\mathfrak{M}$ is *symmetric* if for any closed point $y \in \mathfrak{M}$, the $G_y:=\mathrm{Aut}(y)$-representation $$\begin{aligned} \mathcal{H}^0(\mathbb{T}_{\mathfrak{M}}|_{y}) \oplus \mathcal{H}^1(\mathbb{T}_{\mathfrak{M}}|_{y})^{\vee}\end{aligned}$$ is self-dual. In this subsection, we assume that $\mathfrak{M}$ is symmetric. Let $\delta \in \mathrm{Pic}(\mathfrak{M})_{\mathbb{R}}$. We now define a different kind of window category, the *intrinsic window subcategory* $\mathbb{W}(\mathfrak{M})_{\delta}^{\rm{int}} \subset D^b(\mathfrak{M})$, see [@T Definition 5.2.12, 5.3.12]. These categories are the quasi-smooth version of the "magic window categories" from [@SVdB; @hls].
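The main examples of symmetric stacks below are derived moduli stacks of representations of preprojective algebras. As an illustration of the condition (a computation we sketch, using the $2$-Calabi--Yau pairing): at a closed point $y$ corresponding to a polystable representation $R$, one has $\mathcal{H}^1(\mathbb{T}_{\mathfrak{M}}|_{y}) \cong \operatorname{Ext}^2(R, R) \cong \operatorname{Hom}(R, R)^{\vee}$, hence $$\mathcal{H}^0(\mathbb{T}_{\mathfrak{M}}|_{y}) \oplus \mathcal{H}^1(\mathbb{T}_{\mathfrak{M}}|_{y})^{\vee} \cong \operatorname{Ext}^1(R, R) \oplus \mathfrak{g}_y, \quad \mathfrak{g}_y:=\operatorname{Lie}\mathrm{Aut}(R),$$ which is self-dual: the first summand via the symplectic form on $\operatorname{Ext}^1$, the second via the trace form on the reductive Lie algebra $\mathfrak{g}_y$.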
First, assume that $\mathfrak{M}$ is a Koszul stack associated with $(Y, V, s, G)$ as in ([\[koszuldef\]](#koszuldef){reference-type="ref" reference="koszuldef"}) $$\begin{aligned} \label{present:M} \mathfrak{M}=s^{-1}(0)/G.\end{aligned}$$ Consider the quotient stack $\mathscr{Y}=Y/G$, the closed immersion $j \colon \mathfrak{M} \hookrightarrow \mathscr{Y}$, and let $\mathscr{V} \to \mathscr{Y}$ be the total space of $V/G \to Y/G$. In this case, we define $\mathbb{W}(\mathfrak{M})_{\delta}^{\rm{int}} \subset D^b(\mathfrak{M})$ to consist of objects $\mathcal{E} \in D^b(\mathfrak{M})$ such that for any map $\nu \colon B\mathbb{C}^{\ast} \to \mathfrak{M}$ we have $$\begin{aligned} \mathrm{wt}(\nu^{\ast}j^{\ast}j_{\ast}\mathcal{E}) \subset \left[\frac{1}{2}\mathrm{wt}\left(\det \nu^{\ast}(\mathbb{L}_{\mathscr{V}}|_{\mathscr{Y}})^{\nu<0} \right), \frac{1}{2}\mathrm{wt}\left(\det \nu^{\ast}(\mathbb{L}_{\mathscr{V}}|_{\mathscr{Y}})^{\nu>0}\right) \right] +\mathrm{wt}(\nu^{\ast}\delta). \end{aligned}$$ The above subcategory $\mathbb{W}(\mathfrak{M})_{\delta}^{\rm{int}}$ is intrinsic to $\mathfrak{M}$, that is, independent of the choice of a presentation of $\mathfrak{M}$ as in ([\[present:M\]](#present:M){reference-type="ref" reference="present:M"}) for $(Y, V, s, G)$, see [@T Lemma 5.3.14]. In general, the intrinsic window subcategory is defined as follows (which generalizes the magic window category in [@halpK32 Definition 4.3.5] considered when $\mathbb{L}_{\mathfrak{M}}$ is self-dual): **Definition 10**.
([@T Definition 5.3.12])[\[def:intwind\]]{#def:intwind label="def:intwind"} We define the subcategory $$\begin{aligned} \mathbb{W}(\mathfrak{M})_{\delta}^{\rm{int}} \subset D^b(\mathfrak{M}) \end{aligned}$$ to consist of objects $\mathcal{E}$ such that, for any étale morphism $\iota_U\colon U \to M$ such that $\mathfrak{M}_U$ is of the form $s^{-1}(0)/G$ as in ([\[present:M\]](#present:M){reference-type="ref" reference="present:M"}) and $\iota_U$ induces an étale morphism $\iota_U \colon \mathfrak{M}_U \to \mathfrak{M}$, we have $\iota_U^{\ast}\mathcal{E} \in \mathbb{W}(\mathfrak{M}_U)_{\iota_U^{\ast}\delta}^{\rm{int}} \subset D^b(\mathfrak{M}_U)$. # Quasi-BPS categories for doubled quivers {#sec:qbps:double} In this section, we review the results in [@PTquiver] about quasi-BPS categories of doubled quivers, focusing on the example of doubled quivers of $g$-loop quivers for $g\geqslant 1$. These results are the local analogues of Theorems [\[intro:thm1\]](#intro:thm1){reference-type="ref" reference="intro:thm1"} and [\[intro:thm4\]](#intro:thm4){reference-type="ref" reference="intro:thm4"}. We also discuss similar results for formal fibers along good moduli space morphisms. ## Moduli stacks of representations of quivers ### Moduli stacks {#subsec311} Let $Q=(I, E)$ be a quiver. For a dimension vector $\bm{d}=(d^{(a)})_{a \in I} \in \mathbb{N}^{I}\subset \mathbb{Z}^I$, we denote by $$\begin{aligned} \label{stack:X} \mathscr{X}(\bm{d})=R_Q(\bm{d})/G(\bm{d})\end{aligned}$$ the moduli stack of $Q$-representations of dimension $\bm{d}$. Here, the affine space $R_Q(\bm{d})$ and the reductive group $G(\bm{d})$ are defined by $$\begin{aligned} R_Q(\bm{d})=\bigoplus_{(a \to b) \in E} \operatorname{Hom}(V^{(a)}, V^{(b)}), \ G(\bm{d})=\prod_{a \in I}GL(V^{(a)}),\end{aligned}$$ where $V^{(a)}$ is a $\mathbb{C}$-vector space of dimension $d^{(a)}$. We denote by $\mathfrak{g}(\bm{d})$ the Lie algebra of $G(\bm{d})$.
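For example, for the $g$-loop quiver $Q_g$ with one vertex and loops $x_1, \ldots, x_g$ (the case studied in detail later in this section), each loop contributes a copy of $\operatorname{Hom}(V, V)$, so $$\mathscr{X}(d)=R_{Q_g}(d)/G(d)=\mathfrak{gl}(d)^{\oplus g}/GL(d).$$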
### Doubled quivers {#subsec312} Let $Q^{\circ}=(I, E^{\circ})$ be a quiver. Let $E^{\circ \ast}$ be the set of edges $e^{\ast}=(b \to a)$ for each $e=(a \to b)$ in $E^{\circ}$. Consider the *doubled quiver* of $Q^\circ$: $$\begin{aligned} Q^{\circ, d}=(I, E^{\circ, d}), \ E^{\circ, d}=E^{\circ} \sqcup E^{\circ \ast}. \end{aligned}$$ Let $\mathscr{I}$ be the quadratic relation $\sum_{e \in E^{\circ}} [e, e^{\ast}]\in \mathbb{C}[Q^{\circ, d}]$. For a dimension vector $\bm{d}=(d^{(a)})_{a \in I}$, the relation $\mathscr{I}$ induces a moment map: $$\begin{aligned} \label{moment} \mu \colon R_{Q^{\circ, d}}(\bm{d})=T^*R_{Q^\circ}(\bm{d}) \to \mathfrak{g}(\bm{d}). \end{aligned}$$ The derived zero locus $$\begin{aligned} \label{P:dzero} \mathscr{P}(\bm{d}):=\mu^{-1}(0)/G(\bm{d}) \stackrel{j}{\hookrightarrow} \mathscr{Y}(\bm{d}):=R_{Q^{\circ, d}}(\bm{d})/G(\bm{d})\end{aligned}$$ is the derived moduli stack of $(Q^{\circ, d}, \mathscr{I})$-representations of dimension vector $\bm{d}$. Note that a $(Q^{\circ, d}, \mathscr{I})$-representation is the same as a representation of the preprojective algebra $\pi_{Q^\circ}:=\mathbb{C}[Q^{\circ, d}]/(\mathscr{I})$ of $Q^\circ$, and we will use these two names interchangeably. ### Tripled quivers {#subsec:triple} Consider a quiver $Q^{\circ}=(I, E^{\circ})$. For $a\in I$, let $\omega_a$ be a loop at $a$. *The tripled quiver* of $Q^\circ$ is: $$\begin{aligned} Q=(I, E), \ E=E^{\circ, d}\sqcup \{\omega_a\}_{a \in I}. \end{aligned}$$ The tripled potential $W$ of $Q$ is: $$\begin{aligned} W=\left(\sum_{a \in I}\omega_a \right) \left( \sum_{e \in E^{\circ}}[e, e^{\ast}] \right). 
\end{aligned}$$ Consider the stack ([\[stack:X\]](#stack:X){reference-type="ref" reference="stack:X"}) of representations of dimension $\bm{d}$ for the tripled quiver $Q$: $$\mathscr{X}(\bm{d}):=R_{Q}(\bm{d})/G(\bm{d})=\left(R_{Q^{\circ, d}}(\bm{d}) \oplus \mathfrak{g}(\bm{d})\right)/G(\bm{d}).$$ The potential $W$ induces the regular function: $$\begin{aligned} \label{moduli:triple} \mathop{\rm Tr}W\colon \mathscr{X}(\bm{d})=R_{Q}(\bm{d})/G(\bm{d}) \to \mathbb{C}.\end{aligned}$$ We have the Koszul duality equivalence, see Theorem [\[thm:Kduality\]](#thm:Kduality){reference-type="ref" reference="thm:Kduality"}: $$\begin{aligned} \label{Koszul:theta} \Theta \colon D^b(\mathscr{P}(\bm{d})) \stackrel{\sim}{\to} \mathrm{MF}^{\rm{gr}}(\mathscr{X}(\bm{d}), \mathop{\rm Tr}W). \end{aligned}$$ ## The weight lattice {#subsec:qBPS} Let $Q=(I, E)$ be a quiver. For a dimension vector $\bm{d} \in \mathbb{N}^I$, let $T(\bm{d}) \subset G(\bm{d})$ be the maximal torus and let $M(\bm{d})$ be the character lattice for $T(\bm{d})$: $$\begin{aligned} M(\bm{d})=\bigoplus_{a\in I} \bigoplus_{1\leqslant i\leqslant d^{(a)}} \mathbb{Z} \beta_i^{(a)}. \end{aligned}$$ Here $\beta_1^{(a)}, \ldots, \beta_{d^{(a)}}^{(a)}$ are the weights of the standard representation of $GL(V^{(a)})$ for $a\in I$. In the case that $I$ consists of one element, we omit the superscript $(a)$. We denote by $\rho \in M(\bm{d})_{\mathbb{Q}}$ half of the sum of the positive roots of $\mathfrak{g}(\bm{d})$. Let $W$ be the Weyl group of $G(\bm{d})$ and let $M(\bm{d})_{\mathbb{R}}^W \subset M(\bm{d})_{\mathbb{R}}$ be the Weyl invariant subspace. There is a decomposition: $$M(\bm{d})_{\mathbb{R}}^W=\bigoplus_{a \in I} \mathbb{R}\sigma^{(a)},$$ where $\sigma^{(a)}:=\sum_{i=1}^{d^{(a)}}\beta_i^{(a)}$. There is a natural pairing: $$\begin{aligned} \langle -, -\rangle \colon M(\bm{d})_{\mathbb{R}}^W \times \mathbb{R}^I \to \mathbb{R}, \ \langle \sigma^{(a)}, e^{(b)} \rangle=\delta^{ab}.
\end{aligned}$$ We denote by $\iota \colon M(\bm{d})_{\mathbb{R}} \to \mathbb{R}$ the linear map sending $\beta_i^{(a)}$ to $1$, and its kernel by $M(\bm{d})_{0, \mathbb{R}}$. An element $\ell \in M(\bm{d})_{0, \mathbb{R}}^W$ is written as $$\begin{aligned} \ell=\sum_{a\in I}\ell^{(a)}\sigma^{(a)}, \ \langle \ell, \bm{d} \rangle= \sum_{a}\ell^{(a)}d^{(a)}=0,\end{aligned}$$ i.e. $\ell$ is an $\mathbb{R}$-character of $G(\bm{d})$ which is trivial on the diagonal torus $\mathbb{C}^{\ast} \subset G(\bm{d})$. Denote by $\underline{\bm{d}}:=\sum_{a\in I}d^{(a)}$ the total dimension. Define the following Weyl invariant weight: $$\begin{aligned} \tau_{\bm{d}} :=\frac{1}{\underline{\bm{d}}} \cdot \sum_{a\in I, 1\leqslant i\leqslant d^{(a)}}\beta_i^{(a)}. \end{aligned}$$ Define the polytope: $$\label{def:polytope} \textbf{W}(\bm{d}):=\frac{1}{2}\sum_{\beta}[0, \beta]\subset M(\bm{d})_\mathbb{R},$$ where the sum denotes the Minkowski sum over all $T(\bm{d})$-weights $\beta$ of $R_Q(\bm{d})$. **Definition 11**. A weight $\ell \in M(\bm{d})_{0, \mathbb{R}}^W$ is *generic* if the following conditions hold: - if $H \subset M(\bm{d})_{0, \mathbb{R}}$ is a hyperplane parallel to a face in $\mathbf{W}(\bm{d})$ which contains $\ell$, then $M(\bm{d})_{0, \mathbb{R}}^W \subset H$, - for any decomposition $\bm{d}=\bm{d}_1+\bm{d}_2$ such that $\bm{d}_1, \bm{d}_2 \in \mathbb{N}^I$ are not proportional to $\bm{d}$, we have that $\langle \ell, \bm{d}_i\rangle \neq 0$ for $i\in\{1,2\}$. Note that the set of generic weights is a dense open subset in $M(\textbf{d})^W_{0,\mathbb{R}}$. ## Quasi-BPS categories for stacks of representations of preprojective algebras {#subsec:double} Let $Q^{\circ}=(I, E^{\circ})$ be a quiver and let $\mathscr{P}(\bm{d})$ be the derived moduli stack of representations of its preprojective algebra ([\[P:dzero\]](#P:dzero){reference-type="ref" reference="P:dzero"}).
For $\delta \in M(\bm{d})_{\mathbb{R}}^W$, define the quasi-BPS category to be the intrinsic window subcategory in Definition [\[def:intwind\]](#def:intwind){reference-type="ref" reference="def:intwind"}: $$\begin{aligned} \label{def:Tdelta} \mathbb{T}(\bm{d})_\delta:=\mathbb{W}(\mathscr{P}(\bm{d}))_{\delta}^{\rm{int}} \subset D^b(\mathscr{P}(\bm{d})), \ \mathbb{T}(\bm{d})_w :=\mathbb{T}(\bm{d})_{w \tau_{\bm{d}}}. \end{aligned}$$ An alternative description is as follows, where we recall the map $j\colon \mathscr{P}(\bm{d})\hookrightarrow\mathscr{Y}(\bm{d})$ and choose a dominant chamber $M(\bm{d})^+\subset M(\bm{d})$, for example the one in [@PTquiver Subsection 2.2.2]: **Lemma 12**. **([@PTquiver Corollary 3.20])*[\[lem:compareT\]]{#lem:compareT label="lem:compareT"} The subcategory ([\[def:Tdelta\]](#def:Tdelta){reference-type="ref" reference="def:Tdelta"}) consists of objects $\mathcal{E} \in D^b(\mathscr{P}(\bm{d}))$ such that $j_{\ast}\mathcal{E}$ is classically generated by the vector bundles $\mathcal{O}_{\mathscr{Y}(\bm{d})} \otimes \Gamma_{G(\bm{d})}(\chi)$ for dominant weights $\chi$ such that $$\begin{aligned} \label{chi:rho} \chi+\rho-\delta \in \mathbf{W}(\bm{d}). \end{aligned}$$ Here, $\Gamma_{G(\bm{d})}(\chi)$ is the irreducible representation of $G(\bm{d})$ with highest weight $\chi$, and $\mathbf{W}(\bm{d})$ is the polytope [\[def:polytope\]](#def:polytope){reference-type="eqref" reference="def:polytope"} for the tripled quiver $Q$ of $Q^\circ$.* For $\ell \in M(\bm{d})_{0, \mathbb{R}}^W$, let $\mathscr{P}(\bm{d})^{\ell\text{-ss}} \subset \mathscr{P}(\bm{d})$ be the open substack of $\ell$-semistable points. The quasi-BPS category for the $\ell$-semistable locus is defined to be $$\begin{aligned} \mathbb{T}^{\ell}(\bm{d})_\delta :=\mathbb{W}(\mathscr{P}(\bm{d})^{\ell\text{-ss}})_{\delta}^{\rm{int}} \subset D^b(\mathscr{P}(\bm{d})^{\ell\text{-ss}}), \ \mathbb{T}^\ell(\bm{d})_{w} :=\mathbb{T}^\ell(\bm{d})_{w\tau_{\bm d}}.
\end{aligned}$$ Consider the restriction functor $$\begin{aligned} \label{rest:P} \mathrm{res} \colon D^b(\mathscr{P}(\bm{d})) \twoheadrightarrow D^b(\mathscr{P}(\bm{d})^{\ell\text{-ss}}). \end{aligned}$$ We recall a wall-crossing equivalence proved in [@PTquiver]: **Theorem 13**. **([@PTquiver Corollary 3.19, Remark 3.12])*[\[prop:eqS\]]{#prop:eqS label="prop:eqS"} For generic $\ell_+, \ell_- \in M(\bm{d})_{0, \mathbb{R}}^W$, let $\delta'=\varepsilon_{+} \cdot \ell_{+} +\varepsilon_{-} \cdot \ell_{-}$ for general $0<\varepsilon_{\pm} \ll 1$. Let $\delta\in M(\bm{d})^W_\mathbb{R}$ and let $\delta''=\delta+\delta'$. Then the restriction functor ([\[rest:P\]](#rest:P){reference-type="ref" reference="rest:P"}) induces equivalences: $$\begin{aligned} \mathbb{T}(\bm{d})_{\delta''} \stackrel{\sim}{\to} \mathbb{T}^{\ell_{\pm}}(\bm{d})_{ \delta''}. \end{aligned}$$ In particular, there is an equivalence $\mathbb{T}^{\ell_{+}}(\bm{d})_{\delta''} \simeq \mathbb{T}^{\ell_{-}}(\bm{d})_{ \delta''}$.* ## Semiorthogonal decompositions for preprojective algebras of quivers with one vertex {#subsec:onevert} In the remainder of this section, we focus on the case of the $g$-loop quiver $Q^{\circ}=Q_g$ with loops $x_1, \ldots, x_g$. In this case, we write the dimension vector as $\bm{d}=d \in \mathbb{N}$. The doubled quiver is $Q^{\circ, d}=Q_{2g}$ with loops $x_1, \ldots, x_g, y_1, \ldots, y_g$ and the relation $\mathscr{I}$ is given by $\sum_{i=1}^g[x_i, y_i]\in \mathbb{C}[Q_{2g}]$. The map ([\[moment\]](#moment){reference-type="ref" reference="moment"}) in this case is $$\begin{aligned} \mu \colon \mathfrak{gl}(d)^{\oplus 2g} \to \mathfrak{gl}(d), \ (x_1, \ldots, x_g, y_1, \ldots, y_g) \mapsto \sum_{i=1}^g [x_i, y_i].
\end{aligned}$$ Then the derived stack in ([\[P:dzero\]](#P:dzero){reference-type="ref" reference="P:dzero"}) is $$\begin{aligned} \mathscr{P}(d)=\mu^{-1}(0)/GL(d) \hookrightarrow \mathscr{Y}(d)=\mathfrak{gl}(d)^{\oplus 2g}/GL(d).\end{aligned}$$ For a partition $d=d_1+\cdots+d_k$, let $\mathscr{P}(d_1, \ldots, d_k)$ be the derived moduli stack of filtrations $$\begin{aligned} 0=R_0\subset R_1 \subset \cdots \subset R_k \end{aligned}$$ of $(Q^{\circ, d}, \mathscr{I})$-representations such that $R_i/R_{i-1}$ has dimension $d_i$. Explicitly, let $\lambda \colon \mathbb{C}^{\ast} \to T(d)$ be an antidominant cocharacter corresponding to the decomposition $d=d_1+\cdots+d_k$ and set $$\begin{aligned} \mu^{\geqslant 0} \colon \left(\mathfrak{gl}(d)^{\oplus 2g}\right)^{\lambda \geqslant 0} \to \mathfrak{gl}(d)^{\lambda \geqslant 0}\end{aligned}$$ to be the restriction of $\mu$. Then $$\begin{aligned} \mathscr{P}(d_1, \ldots, d_k)=\left(\mu^{\geqslant 0}\right)^{-1}(0)/GL(d)^{\lambda \geqslant 0}. \end{aligned}$$ Consider the evaluation morphisms $$\begin{aligned} \times_{i=1}^k \mathscr{P}(d_i) \stackrel{q}{\leftarrow} \mathscr{P}(d_1, \ldots, d_k) \stackrel{p}{\to} \mathscr{P}(d). \end{aligned}$$ The map $q$ is quasi-smooth and the map $p$ is proper. Consider the categorical Hall product for the preprojective algebra of $Q^\circ$ [@PoSa; @VaVa]: $$\begin{aligned} \label{chall:P} p_{\ast}q^{\ast} \colon \boxtimes_{i=1}^k D^b(\mathscr{P}(d_i)) \to D^b(\mathscr{P}(d)). \end{aligned}$$ We recall a result from [@PTquiver]: **Theorem 14**. 
**([@PTquiver Theorem 4.20, Example 4.21])*[\[cor:sodT\]]{#cor:sodT label="cor:sodT"} There is a semiorthogonal decomposition $$\begin{aligned} \label{sod:triple} D^b(\mathscr{P}(d))= \left\langle \boxtimes_{i=1}^k \mathbb{T}(d_i)_{w_i+(g-1)d_i(\sum_{i>j}d_j-\sum_{i<j}d_j)}\right\rangle\end{aligned}$$ where the right-hand side runs over all partitions $(d_i)_{i=1}^k$ of $d$ and all weights $(w_i)_{i=1}^k \in \mathbb{Z}^k$ such that $$\label{ineq:slopes} \frac{w_1}{d_1}<\cdots<\frac{w_k}{d_k}.$$ The fully-faithful functor $$\begin{aligned} \boxtimes_{i=1}^k \mathbb{T}(d_i)_{w_i+(g-1)d_i(\sum_{i>j}d_j-\sum_{i<j}d_j)} \to D^b(\mathscr{P}(d))\end{aligned}$$ is given by the categorical Hall product ([\[chall:P\]](#chall:P){reference-type="ref" reference="chall:P"}). The order is as in [@PTzero Subsection 3.4], [@PTquiver Subsection 4.6].* **Remark 15**. The semiorthogonal decomposition ([\[sod:triple\]](#sod:triple){reference-type="ref" reference="sod:triple"}) is obtained from that of the $(2g+1)$-loop quiver by applying the Koszul equivalence. The order of the summands in ([\[sod:triple\]](#sod:triple){reference-type="ref" reference="sod:triple"}) is not immediate to state, and is the same as the one for the $(2g+1)$-loop quiver explained in [@PTquiver Subsection 4.6], see also [@PTzero Subsection 3.4] for the case $g=1$. ## Semiorthogonal decompositions on formal fibers {#subsub:formal} We have the following diagram: $$\begin{aligned} \label{moduli:M} \xymatrix{ \mathscr{P}(d)^{\rm{cl}} \ar@<-0.3ex>@{^{(}->}[r]\ar[d]^{\pi_{P,d}} & \mathscr{P}(d) \ar@<-0.3ex>@{^{(}->}[r]^{j} & \mathscr{Y}(d) \ar[d]^{\pi_{Y,d}} \\ P(d) \ar@<-0.3ex>@{^{(}->}[rr] & & Y(d). }\end{aligned}$$ Here, the vertical arrows are good moduli space morphisms and the horizontal arrows are closed immersions.
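To unpack the semiorthogonal decomposition ([\[sod:triple\]](#sod:triple){reference-type="ref" reference="sod:triple"}) in the smallest nontrivial case (our computation from the displayed twist): for $d=2$, the partition $2=2$ contributes the summands $\mathbb{T}(2)_w$ for all $w \in \mathbb{Z}$, while the partition $2=1+1$ contributes $$\mathbb{T}(1)_{w_1-(g-1)} \boxtimes \mathbb{T}(1)_{w_2+(g-1)}, \quad w_1<w_2,$$ since for $(d_1, d_2)=(1,1)$ the twist $(g-1)d_i(\sum_{i>j}d_j-\sum_{i<j}d_j)$ equals $-(g-1)$ for $i=1$ and $g-1$ for $i=2$.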
Consider a closed point $p\in P(d)$ which corresponds to a semisimple $(Q^{\circ, d}, \mathscr{I})$-representation $$\begin{aligned} \label{Rp} R_p=\bigoplus_{i=1}^m W^{(i)} \otimes R^{(i)}, \end{aligned}$$ where $R^{(i)}$ is a simple $(Q^{\circ, d}, \mathscr{I})$-representation of dimension $r^{(i)}$ and $W^{(i)}$ is a finite-dimensional $\mathbb{C}$-vector space. We denote by $\widehat{\mathscr{Y}}(d)_p$ the formal fiber of the right vertical arrow in ([\[moduli:M\]](#moduli:M){reference-type="ref" reference="moduli:M"}) at $p$. By the étale slice theorem, we have $$\begin{aligned} \widehat{\mathscr{Y}}(d)_p=\widehat{\operatorname{Ext}}_{Q^{\circ, d}}^1(R_p, R_p)/G_p,\end{aligned}$$ where $G_p=\mathrm{Aut}(R_p)=\prod_{i=1}^m GL(W^{(i)})$, and see Subsection [2.1](#notation){reference-type="ref" reference="notation"} for the notation. We denote by $$\begin{aligned} \label{mapjp} j_p \colon \widehat{\mathscr{P}}(d)_p \hookrightarrow \widehat{\mathscr{Y}}(d)_p\end{aligned}$$ the natural inclusion of the derived zero locus of $\mu$ restricted to $\widehat{\mathscr{Y}}(d)_p$. **Remark 16**. Let $\kappa$ be the morphism $$\begin{aligned} \kappa \colon \operatorname{Ext}^1_{(Q^{\circ, d}, \mathscr{I})}(R_p, R_p) \to \operatorname{Ext}^2_{(Q^{\circ, d}, \mathscr{I})}(R_p, R_p) \end{aligned}$$ given by $x \mapsto [x, x]$. By the formality of polystable objects in a CY2 category, see [@DavPurity Corollary 4.9], the derived stack $\widehat{\mathscr{P}}(d)_p$ is equivalent to the formal fiber of $\kappa^{-1}(0)/G_p$ at $0 \in \kappa^{-1}(0)^{\mathrm{cl}}/\!\!/G_p$. We define $$\begin{aligned} \label{qbps:that} \mathbb{T}_p(d)_w :=\mathbb{W}(\widehat{\mathscr{P}}(d)_p)_{w\tau_d}^{\rm{int}} \subset D^b(\widehat{\mathscr{P}}(d)_p). \end{aligned}$$ There is a description of $\mathbb{T}_p(d)_w$ similar to Lemma [\[lem:compareT\]](#lem:compareT){reference-type="ref" reference="lem:compareT"}, see Subsection [5.4](#subsec:prop){reference-type="ref" reference="subsec:prop"}.
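A concrete case (our illustration): let $p=0 \in P(d)$ be the point corresponding to $R_0=W \otimes R^{(1)}$, where $W=\mathbb{C}^d$ and $R^{(1)}$ is the one-dimensional representation on which all arrows act by zero. Then $G_p=GL(d)$ and, since each of the $2g$ loops of $Q^{\circ, d}=Q_{2g}$ contributes a copy of $\operatorname{Hom}(W, W)$ to $\operatorname{Ext}^1$, $$\operatorname{Ext}^1_{Q^{\circ, d}}(R_0, R_0) \cong \mathfrak{gl}(d)^{\oplus 2g},$$ so that $\widehat{\mathscr{Y}}(d)_0$ is the formal fiber of $\mathfrak{gl}(d)^{\oplus 2g}/GL(d)$ at the origin, consistent with the global description of $\mathscr{Y}(d)$.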
Consider a partition $d=d_1+\cdots+d_k$. We have the commutative diagram: $$\label{com:hall2} \begin{tikzcd} \times_{i=1}^k \mathscr{P}(d_i)& \mathscr{P}(d_1, \ldots, d_k)\arrow[l, "q"']\arrow[r, "p"]& \mathscr{P}(d)\\ \times_{i=1}^k \mathscr{P}(d_i)^{\mathrm{cl}}\arrow[u, hook]\arrow[d, "\times_{i=1}^k \pi_{P,d_i}"]&\mathscr{P}(d_1, \ldots, d_k)^{\mathrm{cl}}\arrow[l, "q"']\arrow[r, "p"] \arrow[u, hook] & \mathscr{P}(d)^{\mathrm{cl}}\arrow[u, hook]\arrow[d, "\pi_{P,d}"]\\ \times_{i=1}^k P(d_i)\arrow[rr, "\oplus"]& & P(d), \end{tikzcd}$$ where $\times_{i=1}^k \pi_{P,d_i}$ and $\pi_{P,d}$ are good moduli space maps. The base change of the categorical Hall product gives the functor $$\begin{aligned} \label{bchange} \bigoplus_{p_1+\cdots+p_k=p} \boxtimes_{i=1}^k D^b(\widehat{\mathscr{P}}(d_i)_{p_i}) \to D^b(\widehat{\mathscr{P}}(d)_{p}),\end{aligned}$$ where the sum on the left-hand side is over the fiber at $p$ of the bottom horizontal arrow $\oplus$ in ([\[com:hall2\]](#com:hall2){reference-type="ref" reference="com:hall2"}), which is a finite map. See also [@Totheta (6.12), Lemma 6.4] for the existence of the base change diagram of ([\[com:hall2\]](#com:hall2){reference-type="ref" reference="com:hall2"}) extended to derived stacks. The following proposition is a formal fiber version of Theorem [\[cor:sodT\]](#cor:sodT){reference-type="ref" reference="cor:sodT"}. The proof is technical and will be postponed to Subsection [5.4](#subsec:prop){reference-type="ref" reference="subsec:prop"}. **Proposition 17**. *There is a semiorthogonal decomposition $$\begin{aligned} D^b(\widehat{\mathscr{P}}(d)_p) =\left\langle \bigoplus_{p_1+\cdots+p_k=p} \boxtimes_{i=1}^k \mathbb{T}_{p_i}(d_i)_{w_i+(g-1)d_i(\sum_{i>j}d_j-\sum_{i<j}d_j)} \right\rangle.
\end{aligned}$$ The right-hand side runs over all partitions $(d_i)_{i=1}^k$ of $d$, all points $(p_1,\ldots, p_k)$ in the fiber over $p$ of the addition map $\oplus\colon \times_{i=1}^k P(d_i)\to P(d)$, and all weights $(w_i)_{i=1}^k\in \mathbb{Z}^k$ such that $$\frac{w_1}{d_1}<\cdots<\frac{w_k}{d_k}.$$ The order of the semiorthogonal decomposition is the same as the order of ([\[sod:triple\]](#sod:triple){reference-type="ref" reference="sod:triple"}). The fully-faithful functor $$\begin{aligned} \bigoplus_{p_1+\cdots+p_k=p} \boxtimes_{i=1}^k \mathbb{T}_{p_i}(d_i)_{w_i+(g-1)d_i(\sum_{i>j}d_j-\sum_{i<j}d_j)} \to D^b(\widehat{\mathscr{P}}(d)_p)\end{aligned}$$ is given by the base change of the categorical Hall product ([\[bchange\]](#bchange){reference-type="ref" reference="bchange"}).* During the proof of Proposition [Proposition 17](#prop:sod2){reference-type="ref" reference="prop:sod2"}, we will also obtain the following: **Corollary 18**. *The map $\iota_p \colon \widehat{\mathscr{P}}(d)_p \to \mathscr{P}(d)$ induces the functor $$\begin{aligned} \iota_p^{\ast} \colon \mathbb{T}(d)_w \to \mathbb{T}_p(d)_w\end{aligned}$$ and its image classically generates $\mathbb{T}_p(d)_w$.* ## Reduced quasi-BPS categories We continue the discussion from the previous subsection. Let $\mathfrak{gl}(d)_0 \subset \mathfrak{gl}(d)$ be the traceless Lie subalgebra, and let $\mu_0$ be the map $$\begin{aligned} \label{mu0:trace} \mu_0 \colon \mathfrak{gl}(d)^{\oplus 2g} \to \mathfrak{gl}(d)_0, \ (x_1, \ldots, x_g, y_1, \ldots, y_g) \mapsto \sum_{i=1}^g [x_i, y_i]. \end{aligned}$$ Define the reduced stack: $$\begin{aligned} \mathscr{P}(d)^{\rm{red}} :=\mu_0^{-1}(0)/GL(d). \end{aligned}$$ We define the reduced quasi-BPS category to be $$\begin{aligned} \label{def:Tdwred} \mathbb{T}(d)_w^{\rm{red}}:=\mathbb{W}(\mathscr{P}(d)^{\rm{red}})^{\rm{int}}_{w\tau_d} \subset D^b(\mathscr{P}(d)^{\rm{red}}).
\end{aligned}$$ There is a description similar to Lemma [\[lem:compareT\]](#lem:compareT){reference-type="ref" reference="lem:compareT"} using the embedding $\mathscr{P}(d)^{\rm{red}} \hookrightarrow \mathscr{Y}(d)$. Denote by $\mathfrak{gl}(d)_{\rm{nil}} \subset \mathfrak{gl}(d)_0$ the subset of nilpotent elements. The categorical support lemma in [@PTquiver] is the following: **Lemma 19**. **([@PTquiver Corollary 5.5])*[\[cor:support\]]{#cor:support label="cor:support"} For coprime $(d, w)\in\mathbb{N}\times\mathbb{Z}$, any object $\mathcal{E} \in \mathbb{T}(d)_w^{\rm{red}}$ satisfies: $$\begin{aligned} \mathrm{Supp}^{\rm{sg}}(\mathcal{E}) \subset (\mathfrak{gl}(d)^{\oplus 2g} \oplus \mathfrak{gl}(d)_{\rm{nil}})/GL(d). \end{aligned}$$* For $g\geqslant 2$, the derived stack $\mathscr{P}(d)^{\rm{red}}$ is classical by [@KaLeSo Proposition 3.6], in particular there is a good moduli space morphism $$\begin{aligned} \pi_P \colon \mathscr{P}(d)^{\rm{red}}\to P(d)=\mu^{-1}(0)^{\mathrm{red}}/\!\!/G(d).\end{aligned}$$ It follows that the Hom-space between any two objects in $D^b(\mathscr{P}(d)^{\rm{red}})$ is a module over $\mathcal{O}_{P(d)}$. The categorical support lemma is the main ingredient in the proof of the following: **Proposition 20**. **([@PTquiver Proposition 5.9])* For coprime $(d, w)\in\mathbb{N}\times\mathbb{Z}$ and objects $\mathcal{E}_i \in \mathbb{T}(d)_w^{\rm{red}}$ for $i=1, 2$, the $\mathcal{O}_{P(d)}$-module $$\bigoplus_{i\in \mathbb{Z}}\operatorname{Hom}^i_{\mathscr{P}(d)^{\mathrm{red}}}(\mathcal{E}_1, \mathcal{E}_2)$$ is finitely generated. In particular, we have $\operatorname{Hom}^i_{\mathscr{P}(d)^{\mathrm{red}}}(\mathcal{E}_1, \mathcal{E}_2)=0$ for $\lvert i \rvert \gg 0$.* ## Relative Serre functor on reduced quasi-BPS categories We continue the discussion from the previous subsection. 
We have that $\mathbb{T}:=\mathbb{T}(d)^{\rm{red}}_w$ is a subcategory of $D^b(\mathscr{P}(d)^{\mathrm{red}})$, which is a module over $\rm{Perf}(\mathscr{P}(d)^{\rm{red}})$. Thus there is an associated internal Hom, see Subsection [2.6](#subsec:qsmooth){reference-type="ref" reference="subsec:qsmooth"}: $$\begin{aligned} \mathcal{H}om_{\mathbb{T}}(\mathcal{E}_1, \mathcal{E}_2) \in D_{\rm{qc}}(\mathscr{P}(d)^{\rm{red}})\end{aligned}$$ for $\mathcal{E}_1, \mathcal{E}_2 \in \mathbb{T}$. Proposition [Proposition 20](#lem:bound){reference-type="ref" reference="lem:bound"} implies that $\pi_{\ast} \mathcal{H}om_{\mathbb{T}}(\mathcal{E}_1, \mathcal{E}_2)$ is an object of $D^b(P(d))$. **Theorem 21**. **([@PTquiver Theorem 5.10])*[\[thm:Serre\]]{#thm:Serre label="thm:Serre"} For coprime $(d, w)\in\mathbb{N}\times\mathbb{Z}$ and $\mathcal{E}_1, \mathcal{E}_2 \in \mathbb{T}$, there is an isomorphism: $$\begin{aligned} \label{isom:Serre} \operatorname{Hom}_{P(d)}(\pi_{\ast}\mathcal{H}om_{\mathbb{T}}(\mathcal{E}_1, \mathcal{E}_2), \mathcal{O}_{P(d)}) \cong \operatorname{Hom}_{\mathbb{T}}(\mathcal{E}_2, \mathcal{E}_1). \end{aligned}$$* For $\mathcal{E}_1=\mathcal{E}_2=\mathcal{E}$, the identity $\operatorname{id}\colon \mathcal{E} \to \mathcal{E}$ corresponds, under [\[isom:Serre\]](#isom:Serre){reference-type="eqref" reference="isom:Serre"}, to the morphism $$\begin{aligned} \mathrm{tr}_{\mathcal{E}} \colon \pi_{\ast}\mathcal{H}om(\mathcal{E}, \mathcal{E}) \to \mathcal{O}_{P(d)}.\end{aligned}$$ From the construction in [@PTquiver], the above morphism coincides with the trace map determined by $(GL(d), \mathfrak{gl}(d)^{\oplus 2g}, \mathfrak{gl}(d)_0, \mu_0)$, see Subsection [7.2](#subsec:trace){reference-type="ref" reference="subsec:trace"} for the construction of the trace map, especially ([\[const:tre\]](#const:tre){reference-type="ref" reference="const:tre"}).
# Quasi-BPS categories for K3 surfaces In this section, we introduce (non-reduced and reduced) quasi-BPS categories for K3 surfaces. In Theorem [Theorem 29](#thm:walleq){reference-type="ref" reference="thm:walleq"}, we prove the wall-crossing equivalence for quasi-BPS categories. We state a categorical version of the $\chi$-independence phenomenon, see Conjecture [Conjecture 34](#conj:HK){reference-type="ref" reference="conj:HK"}, which we prove for $g=0$ and for $g=1$ and $(d,w)=(2,1)$. ## Generalities on K3 surfaces {#subsec41} Let $S$ be a smooth projective K3 surface, i.e. $K_S$ is trivial and $H^1(\mathcal{O}_S)=0$. Let $K(S)$ be the Grothendieck group of $S$. Denote by $\chi(-, -)$ the Euler pairing $$\begin{aligned} \chi(E, F)=\sum_{j}(-1)^j \mathrm{ext}^j(E, F).\end{aligned}$$ Let $N(S)$ be the numerical Grothendieck group: $$\begin{aligned} N(S):=K(S)/\equiv,\end{aligned}$$ where $E_1 \equiv E_2$ in $K(S)$ if $\chi(E_1, F)=\chi(E_2, F)$ for any $F \in K(S)$. Taking the Mukai vector gives an isomorphism: $$\begin{aligned} \label{isom:Mvector} v(-)=\operatorname{ch}(-)\sqrt{\mathrm{td}}_S \colon N(S) \stackrel{\cong}{\to} \mathbb{Z} \oplus \mathrm{NS}(S) \oplus \mathbb{Z}. \end{aligned}$$ Write a vector $v\in N(S)$ as $v=(r, \beta, \chi)\in \mathbb{Z}\oplus \mathrm{NS}(S)\oplus \mathbb{Z}$ via the above isomorphism. There is a symmetric bilinear pairing on $N(S)$ defined by $\langle E_1, E_2 \rangle=-\chi(E_1, E_2)$. Under the isomorphism ([\[isom:Mvector\]](#isom:Mvector){reference-type="ref" reference="isom:Mvector"}), we have $$\begin{aligned} \langle E_1, E_2 \rangle=\beta_1 \beta_2-r_1 \chi_2-r_2 \chi_1,\end{aligned}$$ where $v(E_i)=(r_i, \beta_i, \chi_i)$. We say $v\in N(S)$ is *primitive* if it cannot be written as $v=dv_0$ for an integer $d\geqslant 2$ and $v_0\in N(S)$. Let $v\in N(S)$ and $w\in \mathbb{Z}$. Write $v=dv_0$ for an integer $d\geqslant 1$ and $v_0$ primitive. We define $\gcd(v, w):=\gcd(d,w)$.
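As an illustration of the pairing (a standard check, not taken from the original text): for the ideal sheaf $I_Z$ of a length-$n$ subscheme $Z \subset S$ we have $\operatorname{ch}(I_Z)=(1, 0, -n)$ and $\sqrt{\mathrm{td}}_S=(1, 0, 1)$, hence $v(I_Z)=(1, 0, 1-n)$, and the formula above gives $$\langle v(I_Z), v(I_Z) \rangle = 0 - 1\cdot (1-n) - 1\cdot (1-n) = 2n-2,$$ consistent with the relation $n=\langle v, v\rangle/2+1$ for the Hilbert scheme $S^{[n]}$ appearing later in this section.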
Below we identify $N(S)$ with $\mathbb{Z} \oplus \mathrm{NS}(S) \oplus \mathbb{Z}$ via the isomorphism ([\[isom:Mvector\]](#isom:Mvector){reference-type="ref" reference="isom:Mvector"}), and write an element $v \in N(S)$ as $v=(r, \beta, \chi)$. ## Bridgeland stability conditions on K3 surfaces For a K3 surface $S$, we denote by $$\begin{aligned} \mathrm{Stab}(S)\end{aligned}$$ the (main connected component of the) space of Bridgeland stability conditions [@Brs1; @Brs2] on $D^b(S)$. A point $\sigma \in \mathrm{Stab}(S)$ consists of a pair $$\begin{aligned} \sigma=(Z, \mathcal{A}), \ Z \colon N(S) \to \mathbb{C}, \ \mathcal{A} \subset D^b(S),\end{aligned}$$ where $Z$ is a group homomorphism (called *central charge*) and $\mathcal{A}$ is the heart of a bounded t-structure satisfying some axioms, see [@Brs1]. One of the axioms is the following positivity property $$\begin{aligned} Z(E) \in \{ z \in \mathbb{C} : \mathrm{Im}(z)>0 \mbox{ or } z \in \mathbb{R}_{<0}\}\end{aligned}$$ for any $0\neq E \in \mathcal{A}$. An object $E \in \mathcal{A}$ is called *$Z$-(semi)stable* if for any subobject $0\neq F \subsetneq E$ we have $\arg Z(F)<(\leqslant) \arg Z(E)$ in $(0, \pi]$. An object $E \in D^b(S)$ is called *$\sigma$-(semi)stable* if $E[a] \in \mathcal{A}$ is $Z$-(semi)stable for some $a \in \mathbb{Z}$. For each $B+iH \in \mathrm{NS}(S)_{\mathbb{C}}$ such that $H$ is ample with $H^2>2$, there is an associated stability condition $$\begin{aligned} \sigma_{B, H}=(Z_{B, H}, \mathcal{A}_{B, H}) \in \mathrm{Stab}(S), \end{aligned}$$ where $\mathcal{A}_{B, H} \subset D^b(S)$ is the heart of a bounded t-structure obtained by a tilting of $\mathrm{Coh}(S)$ and $Z_{B, H}$ is given by $$\begin{aligned} Z_{B, H}(E)=-\int_S e^{-B-iH} v(E) \in \mathbb{C}. \end{aligned}$$ We refer to [@Brs2 Section 6] for the construction of the above stability conditions. A stability condition $\sigma_{B, mH}$ for $m\gg 0$ is said to be in a *neighborhood of the large volume limit*.
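To make the central charge explicit (a direct expansion under the sign conventions above; this computation is ours, not from the original text): take $B=0$ and $v(E)=(r, \beta, \chi)$. Since $e^{-iH}=(1, -iH, -H^2/2)$, the degree four component of $e^{-iH}v(E)$ equals $\chi - i(H\cdot \beta)-(H^2/2)r$, so $$Z_{0, H}(E)=\frac{H^2}{2}r-\chi+i(H\cdot \beta).$$ Replacing $H$ by $mH$, the imaginary part is $m(H\cdot \beta)$, while for $r=H\cdot \beta=0$ and $\chi>0$ we get $Z_{0, mH}(E)=-\chi \in \mathbb{R}_{<0}$; this is compatible with the positivity axiom and with the hypotheses of the proposition recalled next.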
Recall the following proposition about semistable objects at the large volume limit: **Proposition 22**. **([@Brs3 Proposition 14.2], [@Tst3 Proposition 6.4, Lemma 6.5])* Suppose that $v=(r, \beta, \chi)$ satisfies either $r\geqslant 0$ and $H \cdot \beta>0$, or $r=H \cdot \beta=0$ and $\chi>0$. Then an object $E\in D^b(S)$ of Mukai vector $v$ is $\sigma_{0, mH}$-semistable for $m\gg 0$ if and only if $E[2a]$ is an $H$-Gieseker semistable sheaf for some $a\in\mathbb{Z}$.* ## Moduli stacks of semistable objects on K3 surfaces {#subsection:moduliK3} For each $\sigma \in \mathrm{Stab}(S)$ and $v \in N(S)$, we denote by $$\begin{aligned} \mathfrak{M}_S^{\sigma}(v) \end{aligned}$$ the derived moduli stack of $\sigma$-semistable objects $E \in \mathcal{A} \cup \mathcal{A}[1]$ with numerical class $v$. We denote by $\mathbb{F}$ the universal object $$\begin{aligned} \mathbb{F} \in D^b(S \times \mathfrak{M}_S^{\sigma}(v)). \end{aligned}$$ We also consider the reduced version of the stack $\mathfrak{M}_S^{\sigma}(v)$. Let $v=(r, \beta, \chi)$. Let $\mathcal{P}ic^{\beta}(S)$ be the derived moduli stack of line bundles on $S$ with first Chern class $\beta$. Then $\mathcal{P}ic^{\beta}(S)=\operatorname{Spec}\mathbb{C}[\varepsilon]/\mathbb{C}^{\ast}$, where $\varepsilon$ is of degree $-1$. We consider the determinant morphism $$\begin{aligned} \det \colon \mathfrak{M}_S^{\sigma}(v) \to \mathcal{P}ic^{\beta}(S)=\operatorname{Spec}\mathbb{C}[\varepsilon]/\mathbb{C}^{\ast}. \end{aligned}$$ Define the reduced stack: $$\begin{aligned} \label{defn:reduced} \mathfrak{M}_S^{\sigma}(v)^{\rm{red}}:= \mathfrak{M}_S^{\sigma}(v) \times_{\mathcal{P}ic^{\beta}(S)} B\mathbb{C}^{\ast}. \end{aligned}$$ The obstruction space of the reduced stack $\mathfrak{M}_S^{\sigma}(v)^{\rm{red}}$ at $F$ is the kernel of the trace map: $$\begin{aligned} \operatorname{Ext}_S^2(F, F)_0:=\operatorname{Ker}\left(\operatorname{Ext}_S^2(F, F) \stackrel{\mathrm{tr}}{\twoheadrightarrow} H^2(\mathcal{O}_S)=\mathbb{C} \right).
\end{aligned}$$ Note that $\mathfrak{M}^\sigma_S(v)^{\mathrm{red}}$ may still not be a classical stack. There are decompositions: $$\begin{aligned} D^b(\mathfrak{M}_S^{\sigma}(v))=\bigoplus_{w \in \mathbb{Z}}D^b(\mathfrak{M}_S^{\sigma}(v))_w, \ D^b(\mathfrak{M}_S^{\sigma}(v)^{\rm{red}})=\bigoplus_{w \in \mathbb{Z}}D^b(\mathfrak{M}_S^{\sigma}(v)^{\rm{red}})_w,\end{aligned}$$ where each summand contains complexes $F$ of weight $w$ with respect to the scaling automorphisms $\mathbb{C}^{\ast} \subset \mathrm{Aut}(F)$, see [@T Subsection 3.2.4]. We denote by $\mathcal{M}_S^{\sigma}(v)$ the classical truncation of $\mathfrak{M}_S^{\sigma}(v)$. It admits a good moduli space (cf. [@MR3237451], [@AHLH Example 7.26]): $$\begin{aligned} \label{gmoduli} \pi \colon \mathcal{M}_S^{\sigma}(v) \to M_S^{\sigma}(v),\end{aligned}$$ where $M_S^{\sigma}(v)$ is a proper algebraic space. A closed point $y \in M_S^{\sigma}(v)$ corresponds to a $\sigma$-polystable object $$\begin{aligned} \label{pstable:S} F=\bigoplus_{i=1}^m V^{(i)} \otimes F^{(i)},\end{aligned}$$ where $F^{(1)}, \ldots, F^{(m)}$ are mutually non-isomorphic $\sigma$-stable objects such that $\arg Z(F^{(i)})=\arg Z(F)$, and $V^{(i)}$ is a finite dimensional vector space with dimension $d^{(i)}$ for $1\leqslant i\leqslant m$. Let $G_y:=\mathrm{Aut}(F)=\prod_{i=1}^m GL(V^{(i)})$ and let $\widehat{\operatorname{Ext}}_S^1(F, F)$ be the formal fiber at the origin of the morphism $$\begin{aligned} \operatorname{Ext}_S^1(F, F) \to \operatorname{Ext}_S^1(F, F)/\!\!/G_y. 
\end{aligned}$$ By the formality of the dg-algebra $\mathrm{RHom}(F, F)$, see [@DavPurity Corollary 4.9], there are equivalences $$\begin{aligned} \label{formal:equiv} \widehat{\mathfrak{M}}_S^{\sigma}(v)_y \simeq \widehat{\kappa}^{-1}(0)/G_y, \ \widehat{\mathfrak{M}}_S^{\sigma}(v)_y^{\rm{red}} \simeq \widehat{\kappa}_0^{-1}(0)/G_y,\end{aligned}$$ where $\kappa$, $\kappa_0$ are the maps $$\begin{aligned} \kappa\colon \operatorname{Ext}_S^1(F, F) \to \operatorname{Ext}_S^2(F, F), \ \kappa_0 \colon \operatorname{Ext}_S^1(F, F) \to \operatorname{Ext}_S^2(F, F)_0\end{aligned}$$ given by $x \mapsto [x, x]$, and $\widehat{\kappa}$, $\widehat{\kappa}_0$ are their restrictions to $\widehat{\operatorname{Ext}}_S^1(F, F)$. **Remark 23**. The stack $\kappa^{-1}(0)/G_y$ is described in terms of the Ext-quiver of $F$ as follows. Let $Q^{\circ, d}_y$ be the quiver with vertices $\{1, \ldots, m\}$ and the number of edges from $i$ to $j$ is $\dim \operatorname{Ext}_{S}^1(F^{(i)}, F^{(j)})$ for any $1\leqslant i, j\leqslant m$. By Serre duality, $Q^{\circ, d}_y$ is symmetric. Moreover the number of loops at each vertex is even, so $Q^{\circ, d}_y$ is the doubled quiver of some quiver $Q_y^{\circ}$. The derived stack $\kappa^{-1}(0)/G_y$ is identified with the derived moduli stack of representations of the preprojective algebra of $Q^\circ_y$ (alternatively, of $Q^{\circ, d}_y$-representations with quadratic relation $\mathscr{I}_y$) as in Subsection [3.3](#subsec:double){reference-type="ref" reference="subsec:double"}, and dimension vector $(d^{(i)})_{i=1}^m$ where $d^{(i)}=\dim V^{(i)}$. There is a wall-chamber structure on $\mathrm{Stab}(S)$ such that $\mathcal{M}_S^{\sigma}(v)$ is constant if $\sigma$ lies in a chamber, but may change when $\sigma$ crosses a wall. Locally, a wall is defined by the equation $$\begin{aligned} \frac{Z(v_1)}{Z(v_2)} \in \mathbb{R}_{>0}, \ v=v_1+v_2\end{aligned}$$ such that $v_1$ and $v_2$ are not proportional, see [@Brs2 Proposition 9.3]. 
A stability condition $\sigma\in \mathrm{Stab}(S)$ is *generic* if $\sigma$ is not on a wall. If $\sigma$ is generic, then for a polystable object ([\[pstable:S\]](#pstable:S){reference-type="ref" reference="pstable:S"}) the numerical class of each $F^{(i)}$ is proportional to $v$. Let $v=dv_0$ for a primitive $v_0$. Then we have $$\begin{aligned} [F^{(i)}]=r^{(i)}v_0, \ d^{(1)}r^{(1)}+\cdots+d^{(m)}r^{(m)}=d.\end{aligned}$$ The good moduli space $M_S^{\sigma}(v)$ has a stratification indexed by data $(d^{(i)}, r^{(i)})_{i=1}^m$, and the deepest stratum corresponds to $m=1$, $\dim V^{(1)}=d$ and $r^{(1)}=1$. Let $v=dv_0$ for a primitive $v_0$ with $\langle v_0, v_0\rangle=2g-2$, and let $Q^{\circ, d}=Q_{2g}$ be the quiver with one vertex and $2g$ loops with relation $\mathscr{I}$ as in Subsection [3.3](#subsec:double){reference-type="ref" reference="subsec:double"}. Recall the stacks: $$\begin{aligned} \mathscr{P}(d)=\mu^{-1}(0)/GL(d), \ \mathscr{P}(d)^{\rm{red}}=\mu_0^{-1}(0)/GL(d)\end{aligned}$$ where $\mu \colon \mathfrak{gl}(d)^{\oplus 2g} \to \mathfrak{gl}(d)$ and $\mu_0 \colon \mathfrak{gl}(d)^{\oplus 2g} \to \mathfrak{gl}(d)_0$ are moment maps. **Lemma 24**. *For any closed point $y \in M_S^{\sigma}(v)$, there is a point $p \in P(d)$ which is sufficiently close to $0\in P(d)$ such that we have equivalences $$\begin{aligned} \label{formal:equiv3} \widehat{\mathfrak{M}}_S^{\sigma}(v)_y \simeq \widehat{\mathscr{P}}(d)_p, \ \widehat{\mathfrak{M}}_S^{\sigma}(v)_y^{\rm{red}} \simeq \widehat{\mathscr{P}}(d)_p^{\rm{red}}.\end{aligned}$$ If $y$ lies in the deepest stratum, we can take $p=0$.* *Proof.* Let $y$ correspond to a direct sum ([\[pstable:S\]](#pstable:S){reference-type="ref" reference="pstable:S"}) such that $[F^{(i)}]=r^{(i)} v_0$, and let $R^{(i)}$ be a simple $(Q^{\circ, d}, \mathscr{I})$-representation with dimension $r^{(i)}$. Such $R^{(i)}$ exists by a straightforward dimension count argument, for example see the proof of [@PTquiver Lemma 5.7 (i)].
Let $$\begin{aligned} R=\bigoplus_{i=1}^m V^{(i)} \otimes R^{(i)}\end{aligned}$$ be a semisimple $(Q^{\circ, d}, \mathscr{I})$-representation and let $p\in P(d)$ be the corresponding point. Note that $$\begin{aligned} \chi(R^{(i)}, R^{(j)})=\chi(F^{(i)}, F^{(j)})=r^{(i)}r^{(j)}(2-2g). \end{aligned}$$ By the CY2 property of $(Q^{\circ, d}, \mathscr{I})$-representations (cf. [@KellerVandenBergh], [@DavPurity Proposition 7.1]), and the fact that $\hom(R^{(i)}, R^{(j)})=\hom(F^{(i)}, F^{(j)})=\delta_{ij}$, we have an isomorphism $$\begin{aligned} \operatorname{Ext}^{\ast}(R^{(i)}, R^{(j)})\cong \operatorname{Ext}^{\ast}(F^{(i)}, F^{(j)}). \end{aligned}$$ Then by the formality of polystable objects in CY2 categories [@DavPurity Corollary 4.9], there is an isomorphism of dg-algebras $\mathrm{RHom}(R, R) \cong \mathrm{RHom}(F, F)$. Therefore we have equivalences ([\[formal:equiv3\]](#formal:equiv3){reference-type="ref" reference="formal:equiv3"}), see Remark [Remark 16](#rmk:ext-quiver){reference-type="ref" reference="rmk:ext-quiver"}. There is an action of $\mathbb{C}^{\ast}$ on the moduli of $Q^{\circ, d}$-representations which scales the linear maps corresponding to each edge of $Q^{\circ, d}$, and this induces an action on $P(d)$. The above $\mathbb{C}^{\ast}$-action preserves the type of the semi-simplification, and any point $p \in P(d)$ satisfies $\lim_{t\to 0} (t \cdot p)=0$. Therefore we can take $p$ to be sufficiently close to $0$. By the above construction, we can take $p=0$ if $y$ lies in the deepest stratum. ◻ Combining Lemma [Lemma 24](#lem:py){reference-type="ref" reference="lem:py"} with [@KaLeSo], we have the following: **Lemma 25**. *Suppose that $g\geqslant 2$. Then for a generic $\sigma$, the derived stack $\mathfrak{M}_S^{\sigma}(v)^{\rm{red}}$ is classical, i.e.
the natural morphism $\mathcal{M}_S^{\sigma}(v) \to \mathfrak{M}_S^{\sigma}(v)^{\rm{red}}$ is an equivalence.* *Proof.* For $g\geqslant 2$, the derived stack $\mathscr{P}(d)^{\rm{red}}$ is classical by [@KaLeSo Proposition 3.6]. Therefore the conclusion holds by Lemma [Lemma 24](#lem:py){reference-type="ref" reference="lem:py"}. ◻ ## Quasi-BPS categories for K3 surfaces Let $v\in N(S)$ and $w\in\mathbb{Z}$. Take $a \in K(S)_{\mathbb{R}}$ such that $\chi(a\otimes v)=w \in \mathbb{Z}$. We define the $\mathbb{R}$-line bundle $\delta$ on $\mathfrak{M}_S^{\sigma}(v)$ to be $$\begin{aligned} \label{def:delta} \delta=\det p_{\mathfrak{M}_{\ast}}(a \boxtimes \mathbb{F}), \end{aligned}$$ where $p_{\mathfrak{M}} \colon S \times \mathfrak{M}_S^{\sigma}(v) \to \mathfrak{M}_S^{\sigma}(v)$ is the projection. Note that the object $p_{\mathfrak{M}\ast}(A \boxtimes \mathbb{F})$ is a perfect complex on $\mathfrak{M}_S^{\sigma}(v)$ for any $A \in D^b(S)$, so the $\mathbb{R}$-line bundle ([\[def:delta\]](#def:delta){reference-type="ref" reference="def:delta"}) is well-defined. The pull-back of $\delta$ to $\mathfrak{M}_S^{\sigma}(v)^{\rm{red}}$ is also denoted by $\delta$. We define the (non-reduced or reduced) quasi-BPS categories to be the following intrinsic window categories from Definition [\[def:intwind\]](#def:intwind){reference-type="ref" reference="def:intwind"}: $$\begin{aligned} \label{qbps:T} &\mathbb{T}_S^{\sigma}(v)_ \delta:=\mathbb{W}(\mathfrak{M}_S^{\sigma}(v))^{\rm{int}}_{\delta}\subset D^b(\mathfrak{M}_S^{\sigma}(v))_w, \\ \notag &\mathbb{T}_S^{\sigma}(v)_ \delta^{\rm{red}}:=\mathbb{W}(\mathfrak{M}_S^{\sigma}(v)^{\rm{red}})^{\rm{int}}_{\delta}\subset D^b(\mathfrak{M}_S^{\sigma}(v)^{\rm{red}})_w. \end{aligned}$$ **Remark 26**. For each $y\in M_S^{\sigma}(v)$, let $\widehat{\mathfrak{M}}_S^{\sigma}(v)_y$ be the formal fiber at $y$ and $\delta_y$ the pull-back of $\delta$ to it. 
The quasi-BPS category for the formal fiber is defined in a similar way: $$\begin{aligned} \mathbb{T}^{\sigma}_{S, y}(v)_{\delta_y} :=\mathbb{W}(\widehat{\mathfrak{M}}_S^{\sigma}(v)_y)_{\delta_y}^{\rm{int}} \subset D^b(\widehat{\mathfrak{M}}_S^{\sigma}(v)_y). \end{aligned}$$ By the definition of $\mathbb{T}_S^{\sigma}(v)_{\delta}$, an object $\mathcal{E} \in D^b(\mathfrak{M}_S^{\sigma}(v))$ is an object in $\mathbb{T}_S^{\sigma}(v)_{\delta}$ if and only if its restriction to any formal fiber is an object in $\mathbb{T}_{S, y}^{\sigma}(v)_{\delta_y}$. There is an analogous statement for the reduced version. **Lemma 27**. *If $\sigma\in \mathrm{Stab}(S)$ is generic, then $\mathbb{T}_S^{\sigma}(v)_{\delta}$ and $\mathbb{T}_S^{\sigma}(v)_{\delta}^{\rm{red}}$ are independent of $a\in K(S)_{\mathbb{R}}$ satisfying $\chi(a \otimes v)=w$.* *Proof.* Let $b \in K(S)_{\mathbb{R}}$ such that $\chi(b \otimes v)=0$ and set $a'=a+b$. Let $\delta'$ be the $\mathbb{R}$-line bundle defined as in ([\[def:delta\]](#def:delta){reference-type="ref" reference="def:delta"}) for $a'$. By Remark [Remark 26](#rmk:qbps){reference-type="ref" reference="rmk:qbps"}, it is enough to show that $\delta_y=\delta'_y$ for any closed point $y \in M_S^{\sigma}(v)$. Let $y$ be a point which corresponds to the polystable object $F$ as in ([\[pstable:S\]](#pstable:S){reference-type="ref" reference="pstable:S"}). By the decomposition ([\[pstable:S\]](#pstable:S){reference-type="ref" reference="pstable:S"}), we have $\mathrm{Aut}(F)=\prod_{i=1}^m GL(V^{(i)})$ and $\delta_y$ is the character of $\mathrm{Aut}(F)$ given by $$\begin{aligned} \label{deltay} \delta_y= \det\left(\sum_{i=1}^m V^{(i)} \otimes \chi(a \otimes F^{(i)})\right) =\bigotimes_{i=1}^m (\det V^{(i)})^{\chi(a\otimes F^{(i)})}. \end{aligned}$$ As $\sigma$ is generic, the numerical class of $F^{(i)}$ is proportional to $v$. 
Therefore $\chi(b \otimes v)=0$ implies $\chi(b \otimes F^{(i)})=0$ for $1\leqslant i\leqslant m$, hence $\delta_y=\delta'_y$. ◻ By the above lemma, the following definition makes sense. **Definition 28**. For $v \in N(S)$, let $\sigma \in \mathrm{Stab}(S)$ be generic. For $w \in \mathbb{Z}$, define $$\begin{aligned} \mathbb{T}_S^{\sigma}(v)_{w} := \mathbb{T}_S^{\sigma}(v)_\delta, \ \mathbb{T}_S^{\sigma}(v)_{w}^{\rm{red}} := \mathbb{T}_S^{\sigma}(v)_\delta^{\rm{red}}. \end{aligned}$$ Here $\delta$ is defined as in [\[def:delta\]](#def:delta){reference-type="eqref" reference="def:delta"} for any $a \in K(S)_{\mathbb{R}}$ such that $\chi(a \otimes v)=w$. The first main result of this section is the following wall-crossing equivalence of quasi-BPS categories. **Theorem 29**. *Let $\sigma_1, \sigma_2 \in \mathrm{Stab}(S)$ be generic stability conditions. Then there exist equivalences $$\begin{aligned} \label{equiv:T} \mathbb{T}_S^{\sigma_1}(v)_{w} \stackrel{\sim}{\to} \mathbb{T}_S^{\sigma_2}(v)_{w}, \ \mathbb{T}_S^{\sigma_1}(v)_{w}^{\rm{red}}\stackrel{\sim}{\to} \mathbb{T}_S^{\sigma_2}(v)_{w}^{\rm{red}}. \end{aligned}$$* *Proof.* We only prove the first equivalence; the second one follows by the same argument. We reduce the proof of the equivalence to a local statement as in Theorem [\[prop:eqS\]](#prop:eqS){reference-type="ref" reference="prop:eqS"}. Consider a stability condition $\sigma=(Z, \mathcal{A}) \in \mathrm{Stab}(S)$ lying on a wall and consider stability conditions $\sigma_{\pm}=(Z_{\pm}, \mathcal{A}_{\pm}) \in \mathrm{Stab}(S)$ lying in adjacent chambers. Let $b \in K(S)_{\mathbb{R}}$ be an element satisfying $\chi(b \otimes v)=0$ and let $\delta' \in \mathrm{Pic}(\mathfrak{M}_S^{\sigma}(v))_{\mathbb{R}}$ be defined as in ([\[def:delta\]](#def:delta){reference-type="ref" reference="def:delta"}) using $b$.
Let $\delta \in \mathrm{Pic}(\mathfrak{M}_S^{\sigma}(v))_{\mathbb{R}}$ be as in Definition [Definition 28](#def:qbps0){reference-type="ref" reference="def:qbps0"}, and set $\delta''=\delta+\delta'$. It is enough to show that there exists $b$ as above such that the restriction functors for the open immersions $\mathfrak{M}^{\sigma_{\pm}}_S(v) \subset \mathfrak{M}^{\sigma}_S(v)$ restrict to the equivalences $$\begin{aligned} \label{induce:T} \mathbb{T}_S^{\sigma}(v)_{\delta''} \stackrel{\sim}{\to} \mathbb{T}_S^{\sigma_{\pm}}(v)_{\delta''}. \end{aligned}$$ The open substacks $\mathfrak{M}_S^{\sigma_{\pm}}(v) \subset \mathfrak{M}_S^{\sigma}(v)$ are semistable loci with respect to the line bundles $\ell_{\pm}$ on $\mathfrak{M}_S^{\sigma}(v)$ and they are parts of $\Theta$-stratifications, see [@halpK32 Proposition 4.4.5]. The line bundles $\ell_{\pm}$ are constructed as follows. We may assume that $Z(v)=Z_{\pm}(v)=\sqrt{-1}$, and write $Z_{\pm}(-)=\chi(\omega_{\pm}\otimes -)$ for $\omega_{\pm} \in K(S)_{\mathbb{C}}$. Then set $b_{\pm} \in K(S)_{\mathbb{R}}$ to be the real parts of $\omega_{\pm}$, which satisfy $\chi(b_{\pm} \otimes v)=0$. The line bundles $\ell_{\pm}$ are defined by $\ell_{\pm}=\det p_{\mathfrak{M}\ast}(b_{\pm} \boxtimes \mathbb{F})$, see [@Halpinstab Theorem 6.4.11]. Then we set $b=\varepsilon_{+} b_{+} +\varepsilon_{-} b_{-}$ for general elements $0< \varepsilon_{\pm} \ll 1$.
Since $\mathfrak{M}_S^{\sigma}(v)$ is 0-shifted symplectic, from Theorem [\[thm:window:M\]](#thm:window:M){reference-type="ref" reference="thm:window:M"} and Remark [Remark 9](#rmk:0shift){reference-type="ref" reference="rmk:0shift"} (see also [@halpK32 Theorem 3.3.1]), there exist subcategories $\mathbb{W}(\mathfrak{M}_S^{\sigma}(v))^{\ell_{\pm}}_{m_{\bullet\pm}} \subset D^b(\mathfrak{M}_S^{\sigma}(v))$ which induce equivalences: $$\begin{aligned} \label{window:k3} \mathbb{W}(\mathfrak{M}_S^{\sigma}(v))^{\ell_{\pm}}_{m_{\bullet\pm}} \stackrel{\sim}{\to} D^b(\mathfrak{M}_S^{\sigma_{\pm}}(v)).\end{aligned}$$ Moreover, there exist choices of $m_{\bullet\pm}$ such that $\mathbb{T}_S^{\sigma}(v)_{\delta''} \subset \mathbb{W}(\mathfrak{M}_S^{\sigma}(v))^{\ell_{\pm}}_{m_{\bullet\pm}}$, see [@halpK32 Lemma 4.3.10] or [@Totheta Proposition 6.15] for a choice of $m_{\bullet}$. Therefore, by Remark [Remark 26](#rmk:qbps){reference-type="ref" reference="rmk:qbps"}, it is enough to show that, for each closed point $y\in M_S^{\sigma}(v)$, we have the equivalences $$\begin{aligned} \label{equiv:local} \mathbb{T}_{S, y}^{\sigma}(v)_{\delta_y''} \stackrel{\sim}{\to} \mathbb{T}_{S, y}^{\sigma_{\pm}}(v)_{\delta_y''}.\end{aligned}$$ Here, on the right hand side we consider the intrinsic window subcategories for the formal fibers of the morphisms $\mathcal{M}_S^{\sigma_{\pm}}(v) \subset \mathcal{M}_S^{\sigma}(v) \to M_S^{\sigma}(v)$ at $y$. Let $y$ correspond to the polystable object ([\[pstable:S\]](#pstable:S){reference-type="ref" reference="pstable:S"}) and set $\bm{d}=(\dim V^{(i)})_{i=1}^m$. Let $(Q^{\circ, d}_y, \mathscr{I}_y)$ be the Ext-quiver at $y$ with relation $\mathscr{I}_y$, see Remark [Remark 23](#rmk:Equiver){reference-type="ref" reference="rmk:Equiver"}. The quiver with relation $(Q^{\circ, d}_y, \mathscr{I}_y)$ is the double of some quiver $Q_y^{\circ}$.
Let $\mathscr{P}_y(\bm{d})$ be the derived stack of $(Q^{\circ, d}_y, \mathscr{I}_y)$-representations with dimension vector $\bm{d}$, see ([\[P:dzero\]](#P:dzero){reference-type="ref" reference="P:dzero"}), and $P_y(\bm{d})$ the good moduli space of its classical truncation. By the equivalence ([\[formal:equiv\]](#formal:equiv){reference-type="ref" reference="formal:equiv"}), there is an equivalence $$\begin{aligned} \label{equiv:formal} \widehat{\mathfrak{M}}_S^{\sigma}(v)_y \simeq \widehat{\mathscr{P}}_y(\bm{d}).\end{aligned}$$ Here the right hand side is the formal fiber of $\mathscr{P}_y(\bm{d})$ at $0 \in P_y(\bm{d})$. The line bundles $\ell_{\pm}$ restricted to $\widehat{\mathfrak{M}}_S^{\sigma}(v)_y$ correspond to generic elements $\ell_{\pm} \in M(\bm{d})_{\mathbb{R}}^W$, where $M(\bm{d})_{\mathbb{R}}$ is the character lattice of the maximal torus of $G_y:=\mathrm{Aut}(F)=\prod_{i=1}^m GL(V^{(i)})$. Moreover the $\sigma_{\pm}$-semistable loci in the left hand side of ([\[equiv:formal\]](#equiv:formal){reference-type="ref" reference="equiv:formal"}) correspond to $\ell_{\pm}$-semistable $(Q^{\circ, d}_y, \mathscr{I}_y)$-representations. Therefore the equivalences ([\[equiv:local\]](#equiv:local){reference-type="ref" reference="equiv:local"}) follow from the formal fiber version of the equivalences $$\mathbb{T}(\bm{d})_{\delta_y''} \stackrel{\sim}{\to} \mathbb{T}^{\ell_{\pm}}(\bm{d})_{\delta_y''}$$ in Theorem [\[prop:eqS\]](#prop:eqS){reference-type="ref" reference="prop:eqS"}, whose proof is identical to loc. cit. ◻ By Lemma [Lemma 27](#lem:sigmagen){reference-type="ref" reference="lem:sigmagen"} and Theorem [Theorem 29](#thm:walleq){reference-type="ref" reference="thm:walleq"}, the following definition makes sense: **Definition 30**.
For $v \in N(S)$ and $w \in \mathbb{Z}$, define $$\begin{aligned} \mathbb{T}_S(v)_{w}:= \mathbb{T}_S^{\sigma}(v)_{w}, \ \mathbb{T}_S(v)_{w}^{\rm{red}}:= \mathbb{T}_S^{\sigma}(v)_{w}^{\rm{red}},\end{aligned}$$ where $\sigma \in \mathrm{Stab}(S)$ is a generic stability condition. **Remark 31**. The category $\mathbb{T}_S(v)_w$ is defined as an abstract pre-triangulated dg-category. If we take a generic $\sigma \in \mathrm{Stab}(S)$, it is realized as a subcategory of $D^b(\mathfrak{M}_S^{\sigma}(v))$ by the identification $\mathbb{T}_S(v)_w=\mathbb{T}_S^{\sigma}(v)_w \subset D^b(\mathfrak{M}_S^{\sigma}(v))$. **Remark 32**. Suppose that $g\geqslant 2$ and take a generic $\sigma \in \mathrm{Stab}(S)$. Then we have $\mathfrak{M}_S^{\sigma}(v)^{\rm{red}}=\mathcal{M}_S^{\sigma}(v)$. Let $\mathcal{M}_S^{\sigma\text{-st}}(v) \subset \mathcal{M}_S^{\sigma}(v)$ be the open substack of $\sigma$-stable objects. Then the good moduli space morphism $\mathcal{M}_S^{\sigma\text{-st}}(v) \to M_S^{\sigma\text{-st}}(v)$ is a $\mathbb{C}^{\ast}$-gerbe classified by $\alpha \in \mathrm{Br}(M_S^{\sigma\text{-st}}(v))$ which gives the obstruction to the existence of a universal object in $S \times M_S^{\sigma\text{-st}}(v)$. We then have that $$\begin{aligned} \label{decom:prim0} \mathbb{T}_S(v)_w^{\rm{red}}|_{M_S^{\sigma\text{-st}}(v)}=D^b(M_S^{\sigma\text{-st}}(v), \alpha^w),\end{aligned}$$ where the right hand side is the derived category of $\alpha^w$-twisted coherent sheaves on $M_S^{\sigma\text{-st}}(v)$, see [@MR2700538; @MR2309155], and the left hand side is the subcategory of $D^b(\mathcal{M}^{\sigma\text{-st}}(v))$ classically generated by the restriction of objects in $\mathbb{T}_S(v)_w^{\rm{red}}$. If $v$ is primitive, then $M_S^{\sigma\text{-st}}(v)=M_S^{\sigma}(v)$ and it is a non-singular holomorphic symplectic variety deformation equivalent to the Hilbert scheme of points $S^{[n]}$, where $n=\langle v, v\rangle/2+1$.
By ([\[decom:prim0\]](#decom:prim0){reference-type="ref" reference="decom:prim0"}), we have $$\begin{aligned} \label{decom:prim} \mathbb{T}_S(v)_w^{\rm{red}}=D^b(M_S^{\sigma}(v), \alpha^w).\end{aligned}$$ **Remark 33**. We can also define quasi-BPS categories for other Calabi-Yau surfaces, i.e. abelian surfaces, similarly to Definition [Definition 30](#defn:BPS){reference-type="ref" reference="defn:BPS"}. When $S$ is an abelian surface, the derived Picard stack is $\mathcal{P}ic^{\beta}(S)=\widehat{S} \times \operatorname{Spec}\mathbb{C}[\varepsilon]/\mathbb{C}^{\ast}$, where $\widehat{S}$ is the dual abelian surface, and we define the reduced stack ([\[defn:reduced\]](#defn:reduced){reference-type="ref" reference="defn:reduced"}) by $\mathfrak{M}_S^{\sigma}(v) \times_{\mathcal{P}ic^{\beta}(S)}\mathcal{P}ic^{\beta}(S)^{\rm{cl}}$. The results in this paper also hold for abelian surfaces. Let $v=dv_0$ for a primitive $v_0$. We expect that, if $\gcd(d, w)=1$, the category $\mathbb{T}_S(v)_w^{\rm{red}}$ is a "non-commutative hyperkähler manifold", so that it shares several properties with $D^b(M)$ for a smooth projective hyperkähler variety of $K3^{[n]}$-type for $n=\langle v, v \rangle/2+1$. More precisely, we may expect the following, which we view as a categorical $\chi$-independence phenomenon. **Conjecture 34**. *Let $v=dv_0$ for $d\geqslant 1$ and $v_0$ a primitive vector with $\langle v_0, v_0\rangle=2g-2$. Suppose that $g\geqslant 0$. For $\gcd(d, w)=1$, the category $\mathbb{T}_S(v)_{w}^{\rm{red}}$ is deformation equivalent to $D^b(S^{[n]})$ for $n=\langle v, v\rangle/2+1$.* ## Quasi-BPS categories for Gieseker semistable sheaves In Definition [Definition 30](#defn:BPS){reference-type="ref" reference="defn:BPS"}, we defined quasi-BPS categories for Bridgeland semistable objects. 
By applying the categorical wall-crossing equivalence in Theorem [Theorem 29](#thm:walleq){reference-type="ref" reference="thm:walleq"}, we can relate the categories in Definition [Definition 30](#defn:BPS){reference-type="ref" reference="defn:BPS"} with those under Hodge isometries, and with those for moduli stacks of Gieseker semistable sheaves. Let $G$ be the group of Hodge isometries of the Mukai lattice $H^{\ast}(S, \mathbb{Z})$, preserving the orientation of the positive definite four dimensional plane of $H^{\ast}(S, \mathbb{R})$. Note that it acts on the algebraic part $\mathbb{Z} \oplus \mathrm{NS}(S) \oplus \mathbb{Z}$. The following is a categorical analogue of the derived invariance property of counting invariants for K3 surfaces [@Tst3; @TodK3]. **Corollary 35**. *For any $\gamma \in G$, there is an equivalence $$\begin{aligned} \mathbb{T}_S(v)_w \simeq \mathbb{T}_S(\gamma v)_w. \end{aligned}$$* *Proof.* Let $\mathrm{Aut}_{\circ}(D^b(S))$ be the group of autoequivalences $\Phi$ of $D^b(S)$ whose action $\Phi_{\ast}$ on the space of stability conditions preserves the component $\mathrm{Stab}(S)$. It also acts on $H^{\ast}(S, \mathbb{Z})$, and denote the action by $\Phi_{\ast} \colon H^{\ast}(S, \mathbb{Z}) \to H^{\ast}(S, \mathbb{Z})$. Then we have the surjective group homomorphism, see [@HH Proposition 7.9], [@HMS2 Corollary 4.10]: $$\begin{aligned} \label{aut:surj} \mathrm{Aut}_{\circ}(D^b(S)) \to G, \ \Phi \mapsto \Phi_{\ast}.\end{aligned}$$ For $\Phi \in \mathrm{Aut}_{\circ}(D^b(S))$, there is an equivalence of derived stacks $\phi \colon \mathfrak{M}_S^{\sigma}(v) \simeq \mathfrak{M}_S^{\Phi_{\ast}\sigma}(\Phi_{\ast}v)$ given by $F \mapsto \Phi(F)$.
The above equivalence induces an equivalence $$\begin{aligned} \phi_{\ast} \colon \mathbb{T}_S^{\sigma}(v)_{\delta} \simeq \mathbb{T}_S^{\Phi_{\ast}\sigma}(\Phi_{\ast}v)_{\delta'}\end{aligned}$$ where $\delta'$ is determined by $a'=\Phi_{\ast}^{-1}a \in K(S)_{\mathbb{R}}$ which satisfies $\chi(a'\otimes v')=w$ for $v'=\Phi_{\ast}v$. By Theorem [Theorem 29](#thm:walleq){reference-type="ref" reference="thm:walleq"} and the surjectivity of ([\[aut:surj\]](#aut:surj){reference-type="ref" reference="aut:surj"}), we obtain the corollary. ◻ Let $H$ be an ample divisor on $S$. We denote by $\mathfrak{M}_S^H(v)$ the derived moduli stack of $H$-Gieseker semistable sheaves on $S$ with Mukai vector $v$, and by $\mathfrak{M}_S^H(v)^{\rm{red}}$ its reduced stack. For $a \in K(S)_{\mathbb{R}}$, we define the $\mathbb{R}$-line bundle $\delta$ on $\mathfrak{M}_S^H(v)$, $\mathfrak{M}_S^H(v)^{\rm{red}}$ similarly to ([\[def:delta\]](#def:delta){reference-type="ref" reference="def:delta"}). Then we define $$\begin{aligned} &\mathbb{T}_S^H(v)_{\delta}:=\mathbb{W}(\mathfrak{M}_S^H(v))_{\delta}^{\rm{int}} \subset D^b(\mathfrak{M}_S^H(v)), \\ &\mathbb{T}_S^H(v)_{\delta}^{\rm{red}}:=\mathbb{W}(\mathfrak{M}_S^H(v)^{\rm{red}})_{\delta}^{\rm{int}} \subset D^b(\mathfrak{M}_S^H(v)^{\rm{red}}). \end{aligned}$$ Below we consider $H$ generic with respect to $v$, so that all Jordan-Hölder factors of objects in $\mathfrak{M}_S^H(v)$ have numerical class proportional to $v$. The following is a corollary of the wall-crossing equivalence in Theorem [Theorem 29](#thm:walleq){reference-type="ref" reference="thm:walleq"}. **Corollary 36**. *For $v \in N(S)_{\mathbb{R}}$ and generic $\sigma \in \mathrm{Stab}(S)$, there is $\varepsilon \in \{0, 1\}$ and $m\gg 0$ such that, by setting $v'=(-1)^{\varepsilon}v(mH)$ we have equivalences $$\begin{aligned} \mathbb{T}_S(v)_{\delta} \simeq \mathbb{T}_S^H(v')_{\delta'}, \ \mathbb{T}_S(v)_{\delta}^{\rm{red}} \simeq \mathbb{T}_S^H(v')_{\delta'}^{\rm{red}}. 
\end{aligned}$$ Here, $\delta'$ is a line bundle on $\mathfrak{M}_S^H(v')$ determined by $a'=(-1)^{\varepsilon}a(-mH) \in K(S)_{\mathbb{R}}$ with $\chi(a \otimes v)=\chi(a' \otimes v')=w$. Then: $$\begin{aligned} \mathbb{T}_S(v)_{w} \simeq \mathbb{T}_S^H(v')_{w}, \ \mathbb{T}_S(v)_{w}^{\rm{red}} \simeq \mathbb{T}_S^H(v')_{w}^{\rm{red}}. \end{aligned}$$* *Proof.* We take the autoequivalence $\Phi$ of $D^b(S)$ to be either $\Phi=\otimes \mathcal{O}(mH)$ or $\otimes \mathcal{O}(mH)[1]$ for $m\gg 0$, such that the vector $\Phi_{\ast}v=(r, \beta, \chi)$ either has $r \geqslant 0$ and $H \cdot \beta>0$, or $r=H \cdot \beta=0$, $\chi>0$. Then applying Corollary [Corollary 35](#cor:equiv){reference-type="ref" reference="cor:equiv"}, Theorem [Theorem 29](#thm:walleq){reference-type="ref" reference="thm:walleq"}, and Proposition [Proposition 22](#prop:LV){reference-type="ref" reference="prop:LV"}, we obtain the conclusion. ◻ We next mention the natural periodicity and symmetry of quasi-BPS categories: **Lemma 37**. *Let $m:=\gcd\{\chi(a \otimes v) : a \in K(S)\}$. We have equivalences $$\begin{aligned} \mathbb{T}_S(v)_w \simeq \mathbb{T}_S(v)_{w+m}, \ \mathbb{T}_S(v)_w \simeq \mathbb{T}_S(v)_{-w}^{\rm{op}}. \end{aligned}$$ Similar equivalences also hold for $\mathbb{T}_S(v)_w^{\rm{red}}$.* *Proof.* For $a \in K(S)$, let $\delta$ be the line bundle ([\[def:delta\]](#def:delta){reference-type="ref" reference="def:delta"}). Then tensoring by $\delta$ induces equivalences: $$\begin{aligned} \mathbb{T}_S^{\sigma}(v)_w \simeq \mathbb{T}^{\sigma}_S(v)_{w+\chi(a\otimes v)}, \ \mathbb{T}^{\sigma}_S(v)_w^{\rm{red}} \simeq \mathbb{T}^{\sigma}_S(v)_{w+\chi(a\otimes v)}^{\rm{red}}. \end{aligned}$$ Therefore we obtain the first equivalence. The second equivalence is given by the restriction of $\mathcal{H}om(-, \mathcal{O}_{\mathfrak{M}_S^{\sigma}(v)})$ to $\mathbb{T}_S^{\sigma}(v)$. 
◻ ## Conjecture [Conjecture 34](#conj:HK){reference-type="ref" reference="conj:HK"} for $g=0,1$ {#conjecture-conjhk-for-g01} Write the Mukai vector as $v=dv_0$, where $d\in\mathbb{Z}_{\geqslant 1}$ and $v_0$ is a primitive Mukai vector with $\langle v_0, v_0 \rangle=2g-2$ and $g\geqslant 0$. The following proposition proves Conjecture [Conjecture 34](#conj:HK){reference-type="ref" reference="conj:HK"} when $g=0$. **Proposition 38**. *Suppose that $g=0$ and $\gcd(d, w)=1$. Then we have $$\begin{aligned} \mathbb{T}_S(dv_0)_w^{\rm{red}}=\begin{cases} D^b(\operatorname{Spec}\mathbb{C}), & d=1, \\ 0, & d>1. \end{cases} \end{aligned}$$* *Proof.* By Corollary [Corollary 36](#cor:lv){reference-type="ref" reference="cor:lv"}, we can assume that $\mathbb{T}_S(v)_w=\mathbb{T}_S^H(v)_{\delta}$ in $D^b(\mathfrak{M}_S^H(v))$ for $H$ an ample divisor on $S$. It is well-known that $\mathfrak{M}_S^H(v)$ consists of a single point, corresponding to $F^{\oplus d}$ for a spherical stable sheaf $F$, so we have $$\begin{aligned} \mathfrak{M}_S^H(v)^{\rm{red}}= \operatorname{Spec}\mathbb{C}[\mathfrak{gl}(d)_0^{\vee}[1]]/GL(d). \end{aligned}$$ By the definition of $\mathbb{T}_S(v)_w^{\rm{red}}$, it consists of objects $\mathcal{E}$ such that, for the inclusion $$j \colon \mathfrak{M}_S^H(v)^{\rm{red}} \hookrightarrow BGL(d),$$ the object $j_{\ast}\mathcal{E}$ is generated by $\Gamma_{GL(d)}(\chi)$ for a dominant weight $\chi$ such that $$\begin{aligned} \chi+\rho \in \frac{1}{2} \sum_{i, j}[0, \beta_i-\beta_j]+\frac{w}{d}\sum_{i=1}^d \beta_i, \end{aligned}$$ where the first sum is the Minkowski sum over all $1\leqslant i, j\leqslant d$. By [@Toquot2 Lemma 3.2], such a weight exists if and only if $d|w$, and thus only if $d=1$ because $\gcd(d,w)=1$. Therefore, together with ([\[decom:prim\]](#decom:prim){reference-type="ref" reference="decom:prim"}) in the primitive case, the proposition follows. ◻ We next discuss the case of $g=1$. Then $v=dv_0$, where $v_0$ is primitive with $\langle v_0, v_0 \rangle=0$. 
For a generic $\sigma$, set $$\begin{aligned} S':=M_S^{\sigma}(v_0)\end{aligned}$$ which is well-known to be a K3 surface [@Mu2; @BaMa2]. We have the good moduli space morphism $\mathcal{M}_S^{\sigma}(v_0) \to S'$ which is a $\mathbb{C}^{\ast}$-gerbe classified by some $\alpha \in \mathrm{Br}(S')$. There is an equivalence $$\begin{aligned} \label{equiv:twist} D^b(S', \alpha) \stackrel{\sim}{\to} D^b(S)\end{aligned}$$ given by the Fourier-Mukai transform with kernel the universal $(1\boxtimes \alpha)$-twisted sheaf on $S \times S'$, see [@MR1902629]. There is also an isomorphism given by the direct sum map $$\label{directsum} \mathrm{Sym}^d(S') \stackrel{\cong}{\to} M_S^{\sigma}(dv_0).$$ Let $\mathcal{M}_S^{\sigma}(v_0, \ldots, v_0)$ be the classical moduli stack of filtrations of semistable objects on $S$: $$\begin{aligned} 0=F_0 \subset F_1 \subset \cdots \subset F_d\end{aligned}$$ such that $F_i/F_{i-1}$ is a $\sigma$-semistable object with numerical class $v_0$. We define $\mathscr{Z}_S$ and $\widetilde{\mathcal{M}}_S^{\sigma}(v_0)$ by the following diagram, where the two squares are Cartesian in the classical sense: $$\begin{aligned} \label{dia:ZS} \xymatrix{ \mathscr{Z}_S \ar@/^18pt/[rr]^-{p_S} \ar@<-0.3ex>@{^{(}->}[r]\ar[d]_-{q_S} \ar@{}[rd]|\square & \mathcal{M}_S^{\sigma}(v_0, \ldots, v_0) \ar[r] \ar[d] & \mathfrak{M}_S^{\sigma}(dv_0)^{\rm{red}} \\ \widetilde{\mathcal{M}}_S^{\sigma}(v_0) \ar@{}[rd]|\square\ar[d] \ar@<-0.3ex>@{^{(}->}[r] & \mathcal{M}_S^{\sigma}(v_0)^{\times d} \ar[d] & \\ S' \ar@<-0.3ex>@{^{(}->}[r]_-{\Delta} & (S')^{\times d}. & }\end{aligned}$$ Let $T(d)=(\mathbb{C}^{\ast})^{\times d}$. 
The map $\widetilde{\mathcal{M}}_S^{\sigma}(v_0) \to S'$ is a $T(d)$-gerbe, so we have the decomposition into $T(d)$-weights $$\begin{aligned} \label{decom:Td} D^b(\widetilde{\mathcal{M}}_S^{\sigma}(v_0)) =\bigoplus_{(w_1, \ldots, w_d)\in \mathbb{Z}^d} D^b(\widetilde{\mathcal{M}}_S^{\sigma}(v_0))_{(w_1, \ldots, w_d)}\end{aligned}$$ where the summand corresponding to $(w_1, \ldots, w_d)$ is equivalent to $D^b(S', \alpha^{w_1+\cdots+w_d})$. For $1\leqslant i\leqslant d$, define $m_i$ by the formula $$\begin{aligned} \label{def:mi} m_i :=\left\lceil \frac{wi}{d} \right\rceil -\left\lceil \frac{w(i-1)}{d} \right\rceil +\delta_{id} -\delta_{i1} \in \mathbb{Z}. \end{aligned}$$ Define the functor $$\begin{aligned} \Phi_{d,w}\colon D^b(S', \alpha^w)\to D^b(\mathfrak{M}_{S}^{\sigma}(v)^{\rm{red}})_{w},\ \mathcal{F}\mapsto p_{S*}\left(q_S^{\ast}\circ i_{(m_1, \ldots, m_d)}\mathcal{F}\right),\end{aligned}$$ where $i_{(m_1, \ldots, m_d)}$ is the inclusion of $D^b(S', \alpha^w)$ into the weight $(m_1, \ldots, m_d)$-part of ([\[decom:Td\]](#decom:Td){reference-type="ref" reference="decom:Td"}). When $v_0=[\mathcal{O}_x]$ for a point $x \in S$, it is proved in [@PT2 Proposition 4.7] that the image of the functor $\Phi_{d, w}$ lies in $\mathbb{T}_S^{\sigma}(v)_w$. We now state a stronger form of Conjecture [Conjecture 34](#conj:HK){reference-type="ref" reference="conj:HK"} for $g=1$. **Conjecture 39**. *Let $v=d v_0$ such that $d \in \mathbb{Z}_{\geqslant 1}$ and $v_0$ is primitive with $\langle v_0, v_0 \rangle=0$. If $\gcd(d, w)=1$, then the functor $\Phi_{d, w}$ restricts to the equivalence $$\begin{aligned} \label{def:Phi} \Phi_{d, w} \colon D^b(S', \alpha^w) \stackrel{\sim}{\to} \mathbb{T}_S^{\sigma}(dv_0)_{w}^{\rm{red}}. \end{aligned}$$ In particular for $w=1$, the category $\mathbb{T}_S^{\sigma}(dv_0)_1$ is equivalent to $D^b(S)$.* In [@PTzero; @PT1], we addressed a similar conjecture for $\mathbb{C}^2$ which we recall here. 
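The weights $m_i$ in ([\[def:mi\]](#def:mi){reference-type="ref" reference="def:mi"}) telescope: the ceiling differences sum to $w$ and the two Kronecker deltas cancel, so $m_1+\cdots+m_d=w$, consistent with $\Phi_{d,w}$ landing in the summand of ([\[decom:Td\]](#decom:Td){reference-type="ref" reference="decom:Td"}) equivalent to $D^b(S', \alpha^w)$. A minimal numerical check (the helper name `bps_weights` is ours, purely for illustration):

```python
from math import ceil

def bps_weights(d, w):
    # m_i = ceil(w*i/d) - ceil(w*(i-1)/d) + delta_{i,d} - delta_{i,1}
    return [ceil(w * i / d) - ceil(w * (i - 1) / d)
            + (i == d) - (i == 1)
            for i in range(1, d + 1)]

# The total T(d)-weight always equals w, for any d >= 1.
for d in range(1, 9):
    for w in range(-12, 13):
        assert sum(bps_weights(d, w)) == w

print(bps_weights(3, 2))  # → [0, 1, 1]
```

The ceiling pattern distributes $w$ as evenly as possible across the $d$ tensor factors, up to the boundary corrections $\delta_{id}-\delta_{i1}$.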
Let $\mathscr{C}(d)^{\rm{red}}$ be the reduced derived moduli stack of zero-dimensional sheaves on $\mathbb{C}^2$ with length $d$. It is the quotient stack $$\begin{aligned} \mathscr{C}(d)^{\rm{red}}=\mu_0^{-1}(0)/GL(d),\end{aligned}$$ where $\mu_0 \colon \mathfrak{gl}(d)^{\oplus 2} \to \mathfrak{gl}(d)_0$ is the commuting map, so the map ([\[mu0:trace\]](#mu0:trace){reference-type="ref" reference="mu0:trace"}) for $g=1$. Let $\mathscr{C}(1, \ldots, 1)$ be the classical moduli stack of filtrations of zero-dimensional sheaves on $\mathbb{C}^2$: $$\begin{aligned} 0=Q_0 \subset Q_1 \subset \cdots \subset Q_d\end{aligned}$$ such that $Q_i/Q_{i-1}$ is isomorphic to $\mathcal{O}_{x_i}$ for some $x_i \in \mathbb{C}^2$. Similarly to ([\[dia:ZS\]](#dia:ZS){reference-type="ref" reference="dia:ZS"}), we have the following diagram $$\begin{aligned} \label{dia:ZS2} \xymatrix{ \mathscr{Z}_{\mathbb{C}^2} \ar@/^18pt/[rr]^-{p_{\mathbb{C}^2}} \ar@<-0.3ex>@{^{(}->}[r]\ar[d]_-{q_{\mathbb{C}^2}} \ar@{}[rd]|\square & \mathscr{C}(1, \ldots, 1) \ar[r] \ar[d] & \mathscr{C}(d)^{\rm{red}} \\ \mathbb{C}^2/T(d) \ar@<-0.3ex>@{^{(}->}[r]_-{\Delta} & (\mathbb{C}^2)^{\times d}/T(d). & }\end{aligned}$$ The functor $$\begin{aligned} \label{funct:C2} \Phi_{d, w}^{\mathbb{C}^2} \colon D^b(\mathbb{C}^2) \to D^b(\mathscr{C}(d)^{\rm{red}})\end{aligned}$$ is defined similarly to ([\[def:Phi\]](#def:Phi){reference-type="ref" reference="def:Phi"}). **Conjecture 40**. **([@PTzero; @PT1; @PaTobps])*[\[conj:C2\]]{#conj:C2 label="conj:C2"} If $\gcd(d, w)=1$, the functor ([\[funct:C2\]](#funct:C2){reference-type="ref" reference="funct:C2"}) restricts to the equivalence $$\begin{aligned} \label{equiv:conjC2} \Phi_{d, w}^{\mathbb{C}^2} \colon D^b(\mathbb{C}^2) \stackrel{\sim}{\to} \mathbb{T}(d)_w^{\rm{red}}. \end{aligned}$$ Here the right hand side is defined in ([\[def:Tdwred\]](#def:Tdwred){reference-type="ref" reference="def:Tdwred"}) for $g=1$.* We have the following proposition: **Proposition 41**. 
*Conjecture [\[conj:C2\]](#conj:C2){reference-type="ref" reference="conj:C2"} implies Conjecture [Conjecture 39](#conj:K3){reference-type="ref" reference="conj:K3"}.* *Proof.* Consider the composition $$\begin{aligned} \label{compose} D^b(\mathfrak{M}_S^{\sigma}(dv_0)^{\rm{red}}) \stackrel{p_S^{!}}{\to}\operatorname{Ind}D^b(\mathscr{Z}_S) \stackrel{q_{S\ast}}{\to}\operatorname{Ind}D^b(\widetilde{\mathcal{M}}_S^{\sigma}(v_0)) \stackrel{\mathrm{pr}}{\to}\operatorname{Ind}D^b(S', \alpha^w), \end{aligned}$$ where $\mathrm{pr}$ is the projection onto the weight $(m_1, \ldots, m_d)$-component. We claim that, assuming Conjecture [\[conj:C2\]](#conj:C2){reference-type="ref" reference="conj:C2"}, the above functor restricts to the functor $$\begin{aligned} \label{Phiright} \Phi_{d, w}^{R} \colon \mathbb{T}^\sigma_S(dv_0)_w^{\rm{red}} \to D^b(S', \alpha^w), \end{aligned}$$ which is a right adjoint of $\Phi_{d, w}$. Let $$\begin{aligned} \mathcal{M}_S^{\sigma}(dv_0) \to M_S^{\sigma}(dv_0) \stackrel{\cong}{\leftarrow} \mathrm{Sym}^d(S') \end{aligned}$$ be the good moduli space morphism, see [\[directsum\]](#directsum){reference-type="eqref" reference="directsum"}. For a point $p \in S'$, the diagram ([\[dia:ZS\]](#dia:ZS){reference-type="ref" reference="dia:ZS"}) pulled back over the formal completion $\operatorname{Spec}\widehat{\mathcal{O}}_{\mathrm{Sym}^d(S'), d[p]} \to \mathrm{Sym}^d(S')$ is isomorphic to the diagram ([\[dia:ZS2\]](#dia:ZS2){reference-type="ref" reference="dia:ZS2"}) pulled back via $\operatorname{Spec}\widehat{\mathcal{O}}_{\mathrm{Sym}^d(\mathbb{C}^2), d[0]} \to \mathrm{Sym}^d(\mathbb{C}^2)$. The ind-completion of the equivalence ([\[equiv:conjC2\]](#equiv:conjC2){reference-type="ref" reference="equiv:conjC2"}) gives an equivalence $$\operatorname{Ind}D^b(\mathbb{C}^2) \stackrel{\sim}{\to} \operatorname{Ind}\mathbb{T}(d)_w^{\rm{red}},$$ whose inverse is $$\begin{aligned} \label{functorprop418} \mathrm{pr} \circ q_{\mathbb{C}^2 \ast} p_{\mathbb{C}^2}^! 
\colon \operatorname{Ind}\mathbb{T}(d)_w^{\rm{red}} \to \operatorname{Ind}D^b(\mathbb{C}^2). \end{aligned}$$ In the above, $\mathrm{pr}$ is again the projection functor onto the weight $(m_1,\ldots, m_d)$-component. By the equivalence ([\[equiv:conjC2\]](#equiv:conjC2){reference-type="ref" reference="equiv:conjC2"}), the functor [\[functorprop418\]](#functorprop418){reference-type="eqref" reference="functorprop418"} restricts to the functor $\mathbb{T}(d)_w^{\rm{red}}\to D^b(\mathbb{C}^2)$. Therefore the functor ([\[compose\]](#compose){reference-type="ref" reference="compose"}) restricts to the functor ([\[Phiright\]](#Phiright){reference-type="ref" reference="Phiright"}), giving a right adjoint of $\Phi_{d, w}$. We have the natural transformations $\operatorname{id}\to \Phi_{d, w}^R \circ \Phi_{d, w}$, $\Phi_{d, w}\circ \Phi_{d, w}^R \to \operatorname{id}$ by adjunction, which are isomorphisms formally locally on $\mathrm{Sym}^d(S')$. Hence they are isomorphisms and thus $\Phi_{d, w}$ is an equivalence. ◻ **Remark 42**. In [@PaTobps], we prove Conjecture [\[conj:C2\]](#conj:C2){reference-type="ref" reference="conj:C2"} for $(d, w)=(2, 1)$. By Proposition [Proposition 41](#prop:conj){reference-type="ref" reference="prop:conj"}, it implies that Conjecture [Conjecture 39](#conj:K3){reference-type="ref" reference="conj:K3"} is true for $(d, w)=(2, 1)$. # Semiorthogonal decompositions into quasi-BPS categories In this section, we prove a categorical version of the PBW theorem for cohomological Hall algebras of K3 surfaces [@DHSM], see Theorem [Theorem 43](#thm:sodK3){reference-type="ref" reference="thm:sodK3"}. We first prove Theorem [Theorem 43](#thm:sodK3){reference-type="ref" reference="thm:sodK3"} assuming Proposition [Proposition 17](#prop:sod2){reference-type="ref" reference="prop:sod2"}, which states that there is a semiorthogonal decomposition formally locally on the good moduli space. 
We then prove Proposition [Proposition 17](#prop:sod2){reference-type="ref" reference="prop:sod2"}. ## Semiorthogonal decomposition Let $S$ be a K3 surface. We take $v \in N(S)$ and write $v=dv_0$ for $d \in \mathbb{Z}_{\geqslant 1}$ and primitive $v_0$. For a partition $d=d_1+\cdots+d_k$, let $\mathfrak{M}_S^{\sigma}(d_1 v_0, \ldots, d_k v_0)$ be the derived moduli stack of filtrations $$\begin{aligned} 0=F_0 \subset F_1 \subset \cdots \subset F_k\end{aligned}$$ such that $F_i/F_{i-1}$ is $\sigma$-semistable with numerical class $d_i v_0$. Consider the natural morphisms $$\begin{aligned} \label{PortaSala} \times_{i=1}^k \mathfrak{M}_S^{\sigma}(d_i v_0) \stackrel{q}{\leftarrow} \mathfrak{M}_S^{\sigma}(d_1 v_0, \ldots, d_k v_0) \stackrel{p}{\to} \mathfrak{M}_S^{\sigma}(d v_0),\end{aligned}$$ where $q$ is quasi-smooth and $p$ is proper. The above morphisms induce the categorical Hall product, see [@PoSa]: $$\begin{aligned} \label{hall:k3} p_{\ast}q^{\ast} \colon \boxtimes_{i=1}^k D^b(\mathfrak{M}_S^{\sigma}(d_i v_0)) \to D^b(\mathfrak{M}_S^{\sigma}(d v_0)). \end{aligned}$$ We next discuss a semiorthogonal decomposition of $D^b(\mathfrak{M}_S^{\sigma}(v))$ using categorical Hall products of quasi-BPS categories, which we view as a categorical version of the PBW theorem for cohomological Hall algebras [@DM Theorem C], [@DHSM Corollary 1.6]. When $v_0$ is the class of a point, the statement was proved in [@P2]. **Theorem 43**. *Assume $v=dv_0$ for $d\in\mathbb{Z}_{\geqslant 1}$ and for a primitive Mukai vector $v_0$. For a generic stability condition $\sigma$, there is a semiorthogonal decomposition $$\begin{aligned} \label{sod:main} D^b(\mathfrak{M}_S^{\sigma}(v)) =\left\langle \boxtimes_{i=1}^k \mathbb{T}_{S}^{\sigma}(d_i v_0)_{w_i+(g-1)d_i(\sum_{i>j}d_j-\sum_{i<j}d_j)} \right\rangle. 
\end{aligned}$$ The right-hand side runs over all partitions $(d_i)_{i=1}^k$ of $d$ and all weights $(w_i)_{i=1}^k\in\mathbb{Z}^k$ such that $$\frac{w_1}{d_1}<\cdots<\frac{w_k}{d_k}.$$ Each semiorthogonal summand is given by the restriction of the categorical Hall product ([\[hall:k3\]](#hall:k3){reference-type="ref" reference="hall:k3"}), and the order of the semiorthogonal decomposition is the same as that of ([\[sod:triple\]](#sod:triple){reference-type="ref" reference="sod:triple"}).* *Proof.* We first explain that the semiorthogonal decomposition [\[sod:main\]](#sod:main){reference-type="eqref" reference="sod:main"} holds formally locally over the good moduli space $M^\sigma_S(v)$. For each $y \in M_S^{\sigma}(v)$, recall the equivalence $$\label{msigmap} \widehat{\mathfrak{M}}_S^{\sigma}(v)_y \simeq \widehat{\mathscr{P}}(d)_p$$ from Lemma [Lemma 24](#lem:py){reference-type="ref" reference="lem:py"}. For an $\mathbb{R}$-line bundle $\delta$ on $\mathfrak{M}_S^{\sigma}(v)$ as in ([\[def:delta\]](#def:delta){reference-type="ref" reference="def:delta"}), its restriction $\delta_y$ to $\widehat{\mathfrak{M}}_S^{\sigma}(v)_y$ corresponds to $w\tau_d$ under the above equivalence by the computation ([\[deltay\]](#deltay){reference-type="ref" reference="deltay"}). Therefore, the category $\mathbb{T}_{S, y}^{\sigma}(v)_{\delta_y}$ from Remark [Remark 26](#rmk:qbps){reference-type="ref" reference="rmk:qbps"} is equivalent to the category $\mathbb{T}_p(d)_w$ from ([\[qbps:that\]](#qbps:that){reference-type="ref" reference="qbps:that"}) under the equivalence [\[msigmap\]](#msigmap){reference-type="eqref" reference="msigmap"}, as both of them are intrinsic window subcategories of equivalent derived stacks. Hence the statement holds formally locally at any point $y \in M_S^{\sigma}(v)$ by Proposition [Proposition 17](#prop:sod2){reference-type="ref" reference="prop:sod2"}. 
We set $$\begin{aligned} \label{part:A} A=(d_i, w_i')_{i=1}^k, \ w_i':=w_i+(g-1)d_i\left(\sum_{i>j}d_j-\sum_{i<j}d_j\right). \end{aligned}$$ Each functor $$\begin{aligned} \label{upA} \Upsilon_{A} \colon \boxtimes_{i=1}^k \mathbb{T}_{S}^{\sigma}(d_i v_0)_{w_i'} \to D^b(\mathfrak{M}_S^{\sigma}(v))\end{aligned}$$ is globally defined via the categorical Hall product, hence a standard argument reduces the existence of the desired SOD to the formal local statement as in [@P2 Section 4.2]; see also [@PT2; @T; @Totheta; @T3] for similar arguments on reduction to formal fibers. We give more details on the proof. We prove the semiorthogonal decomposition ([\[sod:main\]](#sod:main){reference-type="ref" reference="sod:main"}) by induction on $d$. The case of $d=1$ is obvious, so we assume that $d\geqslant 2$. We first show that, for $w_1/d_1<\cdots<w_k/d_k$, the functor ([\[upA\]](#upA){reference-type="ref" reference="upA"}) is fully faithful. By the induction hypothesis, the inclusion $$\begin{aligned} \boxtimes_{i=1}^k \mathbb{T}_S^{\sigma}(d_i v_0)_{w_i'} \hookrightarrow \boxtimes_{i=1}^k D^b(\mathfrak{M}_S^{\sigma}(d_i v_0))_{w_i'} \end{aligned}$$ admits a right adjoint. The categorical Hall product restricted to the fixed $(\mathbb{C}^{\ast})^k$-weights $(w_i')_{i=1}^k$: $$\begin{aligned} \boxtimes_{i=1}^k D^b(\mathfrak{M}_S^{\sigma}(d_i v_0))_{w_i'} \to D^b(\mathfrak{M}_S^{\sigma}(v)) \end{aligned}$$ also admits a right adjoint, see the proof of [@Totheta Lemma 6.7] or [@P2 Theorem 1.1]. Therefore the functor ([\[upA\]](#upA){reference-type="ref" reference="upA"}) admits a right adjoint $\Upsilon_A^R$. To show that [\[upA\]](#upA){reference-type="eqref" reference="upA"} is fully faithful, it is enough to show that the natural transformation $$\begin{aligned} \label{isom:upA} \operatorname{id}\to \Upsilon_A^R \circ \Upsilon_A \end{aligned}$$ is an isomorphism. This is a local question for $M_S^{\sigma}(v)$, i.e. 
it is enough to show that ([\[isom:upA\]](#isom:upA){reference-type="ref" reference="isom:upA"}) is an isomorphism after restricting to $\widehat{\mathfrak{M}}_S^{\sigma}(v)_y$ for any $y\in M_S^{\sigma}(v)$. Since $\Upsilon_A$ and $\Upsilon_A^R$ are compatible with pull-backs to $\widehat{\mathfrak{M}}_S^{\sigma}(v)_y$, the isomorphism ([\[isom:upA\]](#isom:upA){reference-type="ref" reference="isom:upA"}) on $\widehat{\mathfrak{M}}_S^{\sigma}(v)_y$ follows from Lemma [Lemma 24](#lem:py){reference-type="ref" reference="lem:py"} and Proposition [Proposition 17](#prop:sod2){reference-type="ref" reference="prop:sod2"}. We next show that there is a semiorthogonal decomposition of the form $$\begin{aligned} \label{sod:upw} D^b(\mathfrak{M}_S^{\sigma}(v))_w=\langle \{\mathrm{Im}\Upsilon_A\}_{A \in \Gamma}, \mathbb{W} \rangle,\end{aligned}$$ where $\Gamma$ is the set of partitions $A=(d_i, w'_i)_{i=1}^k$ of $(d, w)$ as in ([\[part:A\]](#part:A){reference-type="ref" reference="part:A"}) such that $k\geqslant 2$ and $w_1/d_1<\cdots<w_k/d_k$. For $A>B$, we have $\operatorname{Hom}(\mathrm{Im}\Upsilon_A, \mathrm{Im}\Upsilon_B)=0$. Indeed it is enough to show that $\Upsilon_A^R \circ \Upsilon_B=0$, which is a property local on $M_S^{\sigma}(v)$. Hence similarly to showing [\[isom:upA\]](#isom:upA){reference-type="eqref" reference="isom:upA"} is an isomorphism, the desired vanishing follows from Proposition [Proposition 17](#prop:sod2){reference-type="ref" reference="prop:sod2"}. We next show that the functor ([\[upA\]](#upA){reference-type="ref" reference="upA"}) admits a left adjoint $\Upsilon_A^L$. Let $\mathbb{D}_{\mathfrak{M}}$ be the dualizing functor $$\begin{aligned} \mathbb{D}_{\mathfrak{M}} \colon D^b(\mathfrak{M}_S^{\sigma}(v)) \stackrel{\sim}{\to} D^b(\mathfrak{M}_S^{\sigma}(v))^{\rm{op}}. 
\end{aligned}$$ The above functor restricts to the equivalence $\mathbb{D}_{\mathbb{T}(d)} \colon \mathbb{T}_S^{\sigma}(v)_{\delta} \stackrel{\sim}{\to} \mathbb{T}_S^{\sigma}(v)_{-\delta}^{\rm{op}}$. For a partition $A$ in ([\[part:A\]](#part:A){reference-type="ref" reference="part:A"}), we set $A^{\vee}=(d_i, -w_i')_{i=1}^k$. Then the functor $$\begin{aligned} \Upsilon_A^L:= \left(\boxtimes_{i=1}^k \mathbb{D}_{\mathbb{T}(d_i)}\right) \circ (\Upsilon_{A^{\vee}}^R)^{\rm{op}} \circ \mathbb{D}_{\mathfrak{M}} \colon D^b(\mathfrak{M}_S^{\sigma}(v)) \to \boxtimes_{i=1}^k \mathbb{T}_S^{\sigma}(d_i v_0)_{w_i'} \end{aligned}$$ gives a left adjoint of $\Upsilon_A$. Therefore we obtain the semiorthogonal decomposition of the form ([\[sod:upw\]](#sod:upw){reference-type="ref" reference="sod:upw"}). It is enough to show that $\mathbb{W}=\mathbb{T}_S^{\sigma}(v)_w$ in the semiorthogonal decomposition ([\[sod:upw\]](#sod:upw){reference-type="ref" reference="sod:upw"}). The inclusion $\mathbb{T}_S^{\sigma}(v)_w \subset \mathbb{W}$ follows from a formal local argument as above. It thus suffices to show that $\mathbb{W} \subset \mathbb{T}_S^{\sigma}(v)_w$. The subcategory $\mathbb{W}$ consists of $\mathcal{E} \in D^b(\mathfrak{M}_S^{\sigma}(v))_w$ such that $\Upsilon_A^L(\mathcal{E})=0$ for all $A \in \Gamma$. This is a local property on $M_S^{\sigma}(v)$. The functor $\Upsilon_A^L$ is compatible with pull-back to $\widehat{\mathfrak{M}}_S^{\sigma}(v)_y$. Thus, for any $\mathcal{E} \in \mathbb{W}$, we have $\mathcal{E}|_{\widehat{\mathfrak{M}}^{\sigma}_S(v)_y} \in \mathbb{T}_{S, y}^{\sigma}(v)_{\delta_y}$ by Lemma [Lemma 24](#lem:py){reference-type="ref" reference="lem:py"} and Proposition [Proposition 17](#prop:sod2){reference-type="ref" reference="prop:sod2"}. Therefore, from Remark [Remark 26](#rmk:qbps){reference-type="ref" reference="rmk:qbps"}, we conclude that $\mathcal{E} \in \mathbb{T}_S^{\sigma}(v)_w$. 
◻ The reduced version of the semiorthogonal decomposition is as follows: **Theorem 44**. *Assume $v=dv_0$ for $d\in\mathbb{Z}_{\geqslant 1}$ and for a primitive Mukai vector $v_0$. For a generic stability condition $\sigma$, there is a semiorthogonal decomposition $$\begin{aligned} &D^b(\mathfrak{M}_S^{\sigma}(v)^{\rm{red}}) =\\ &\left\langle \boxtimes_{i=1}^{k-1} \mathbb{T}_{S}^{\sigma}(d_i v_0)_{w_i+(g-1)d_i(\sum_{i>j}d_j-\sum_{i<j}d_j)} \boxtimes \mathbb{T}_S^{\sigma}(d_k v_0)_{w_k+(g-1)d_k(\sum_{k>j}d_j)}^{\rm{red}} \right\rangle. \end{aligned}$$ The right-hand side runs over all partitions $d_1+\cdots+d_k=d$ and weights $(w_i)_{i=1}^k\in\mathbb{Z}^k$ such that $w_1/d_1<\cdots<w_k/d_k$.* *Proof.* Let $v_0=(r, \beta, \chi)$. We have the commutative diagram $$\begin{aligned} \xymatrix{ \times_{i=1}^k \mathfrak{M}_S^{\sigma}(d_i v_0) \ar[d]_-{\times_{i=1}^k \det} & \ar[l]_-{q} \mathfrak{M}_S^{\sigma}(d_1 v_0, \ldots, d_k v_0) \ar[r]^-{p} & \mathfrak{M}_S^{\sigma}(dv_0) \ar[d]_-{\det} \\ \times_{i=1}^k \mathcal{P}ic^{d_i \beta}(S) \ar[rr] \ar@{}[rrd]|\square & & \mathcal{P}ic^{d\beta}(S) \\ \times_{i=1}^{k-1}\mathcal{P}ic^{d_i \beta}(S) \times B\mathbb{C}^{\ast} \ar[rr] \ar[u] & & B\mathbb{C}^{\ast}, \ar[u] } \end{aligned}$$ where the middle horizontal arrow is $(L_1, \ldots, L_k) \mapsto L_1 \otimes \cdots \otimes L_k$. By base change, the categorical Hall product induces the functor $$\begin{aligned} \boxtimes_{i=1}^{k-1}D^b(\mathfrak{M}_S^{\sigma}(d_i v_0)) \boxtimes D^b(\mathfrak{M}_S^{\sigma}(d_k v_0)^{\rm{red}}) \to D^b(\mathfrak{M}_S^{\sigma}(dv_0)^{\rm{red}}). \end{aligned}$$ The rest of the argument is the same as in Theorem [Theorem 43](#thm:sodK3){reference-type="ref" reference="thm:sodK3"}. ◻ ## Generation from ambient spaces The rest of this section is devoted to the proof of Proposition [Proposition 17](#prop:sod2){reference-type="ref" reference="prop:sod2"}. 
In this subsection, we prove technical preliminary results about generation of dg-categories from ambient spaces and the restriction of semiorthogonal decompositions to formal fibers. Let $\mathcal{U}$ be a reduced $\mathbb{C}$-scheme of finite type with an action of a reductive algebraic group $G$. Let $\mathcal{U}/G \to T$ be a morphism to an affine scheme $T$ of finite type. For a closed point $y \in T$, we denote by $\widehat{\mathcal{U}}_y/G$ the formal fiber at $y$. We denote by $\iota_y$ the induced map $\iota_y \colon \widehat{\mathcal{U}}_y/G \to \mathcal{U}/G$. Recall the definition of classical generation from Subsection [2.2.1](#notation2){reference-type="ref" reference="notation2"}. **Lemma 45**. *The image of the pull-back functor $$\begin{aligned} \label{iotay} \iota_y^{\ast} \colon D^b(\mathcal{U}/G) \to D^b(\widehat{\mathcal{U}}_y/G) \end{aligned}$$ classically generates $D^b(\widehat{\mathcal{U}}_y/G)$.* *Proof.* It is enough to show that $\operatorname{Ind}D^b(\widehat{\mathcal{U}}_y/G)$ is generated by the image of $$\begin{aligned} \label{iota:y} \iota_y^{\ast} \colon \operatorname{Ind}D^b(\mathcal{U}/G) \to \operatorname{Ind}D^b(\widehat{\mathcal{U}}_y/G). \end{aligned}$$ Indeed, suppose that $\operatorname{Ind}D^b(\widehat{\mathcal{U}}_y/G)$ is generated by the image of ([\[iota:y\]](#iota:y){reference-type="ref" reference="iota:y"}). Let $\mathcal{C}_y \subset D^b(\widehat{\mathcal{U}}_y/G)$ be the subcategory classically generated by the image of ([\[iotay\]](#iotay){reference-type="ref" reference="iotay"}). Then we have $\operatorname{Ind}\mathcal{C}_y \stackrel{\sim}{\to} \operatorname{Ind}D^b(\widehat{\mathcal{U}}_y/G)$, hence $\mathcal{C}_y=D^b(\widehat{\mathcal{U}}_y/G)$ as both of them are the subcategories of compact objects in $\operatorname{Ind}\mathcal{C}_y$ and $\operatorname{Ind}D^b(\widehat{\mathcal{U}}_y/G)$, respectively. 
Let $\mathscr{Z} \subset \mathcal{U}$ be a $G$-invariant closed subset, and define $\mathcal{U}^{\circ}=\mathcal{U} \setminus \mathscr{Z}$. Let $i \colon \mathscr{Z} \hookrightarrow \mathcal{U}$ be the closed immersion and $j \colon \mathcal{U}^{\circ} \hookrightarrow \mathcal{U}$ be the open immersion. For any $\mathcal{E} \in \operatorname{Ind}D^b(\mathcal{U}/G)$, we have the distinguished triangle $$\begin{aligned} R\Gamma_{\mathscr{Z}}(\mathcal{E}) \to \mathcal{E} \to j_{\ast}j^{\ast}\mathcal{E}\to R\Gamma_{\mathscr{Z}}(\mathcal{E})[1], \end{aligned}$$ where $R\Gamma_{\mathscr{Z}}(\mathcal{E})$ is an object in $$\begin{aligned} \operatorname{Ind}D^b_{\mathscr{Z}}(\mathcal{U}/G)= \mathrm{Ker}\left(j^{\ast} \colon \operatorname{Ind}D^b(\mathcal{U}/G) \to \operatorname{Ind}D^b(\mathcal{U}^{\circ}/G) \right) \end{aligned}$$ and $j_{\ast}j^{\ast}\mathcal{E}$ is an object in $j_{\ast}\operatorname{Ind}D^b(\mathcal{U}^{\circ}/G)$. Note that by [@MR3701352 Proposition 6.1.3], the category $\operatorname{Ind}D_{\mathscr{Z}}^b(\mathcal{U}/G)$ is generated by the image of $$\begin{aligned} i_{\ast} \colon \operatorname{Ind}D^b(\mathscr{Z}/G) \to \operatorname{Ind}D^b_{\mathscr{Z}}(\mathcal{U}/G). \end{aligned}$$ We have the Cartesian diagrams $$\begin{aligned} \xymatrix{ \widehat{\mathcal{U}}_y^{\circ} \ar@<-0.3ex>@{^{(}->}[r]^-{\widehat{j}} \ar[d]_{\iota_y^{\circ}} & \widehat{\mathcal{U}}_y \ar[d]_-{\iota_y} & \widehat{\mathscr{Z}}_y \ar[l]_-{\widehat{i}} \ar[d]_{\overline{\iota}_y} \\ \mathcal{U}^{\circ} \ar@<-0.3ex>@{^{(}->}[r]^-{j} & \mathcal{U} & \mathscr{Z} \ar[l]_-{i}. 
} \end{aligned}$$ There are base change isomorphisms, see [@MR3037900 Corollary 3.7.14]: $$\begin{aligned} \iota_y^{\ast}j_{\ast} \cong \widehat{j}_{\ast}\iota_y^{\circ \ast}&\colon \operatorname{Ind}D^b(\mathcal{U}^{\circ}/G) \to \operatorname{Ind}D^b(\widehat{\mathcal{U}}_y/G), \\ \iota_y^{\ast}i_{\ast} \cong \widehat{i}_{\ast}\overline{\iota}_y^{\ast}&\colon \operatorname{Ind}D^b(\mathscr{Z}/G) \to \operatorname{Ind}D^b(\widehat{\mathcal{U}}_y/G).\end{aligned}$$ Hence we can replace $\mathcal{U}$ with $\mathcal{U}^{\circ} \sqcup \mathscr{Z}$. Then, by taking a stratification of $\mathcal{U}$ and repeating the above argument, we can assume that $\mathcal{U}$ is smooth. Then $$\begin{aligned} \operatorname{Ind}D^b(\mathcal{U}/G)=D_{\rm{qc}}(\mathcal{U}/G)=\operatorname{Ind}\rm{Perf}(\mathcal{U}/G) \end{aligned}$$ and it is a standard fact that the image of $\rm{Perf}(\mathcal{U}/G) \to \rm{Perf}(\widehat{\mathcal{U}}_y/G)$ classically generates $\rm{Perf}(\widehat{\mathcal{U}}_y/G)$ (see the argument of [@MR2801403 Lemma 5.2]). ◻ Let $Y$ be a smooth affine variety with an action of a reductive algebraic group $G$. Let $V \to Y$ be a $G$-equivariant vector bundle with a $G$-invariant section $s$. We set $\mathfrak{U}$ to be the derived zero locus of $s$, and $\mathcal{U} \hookrightarrow \mathfrak{U}$ its classical truncation. We have the following diagram: $$\begin{aligned} \xymatrix{ \mathcal{U}/G \ar@<-0.3ex>@{^{(}->}[r]\ar[d] & \mathfrak{U}/G \ar@<-0.3ex>@{^{(}->}[r]\ar[rd]_-{f} & Y/G \ar[d] \\ \mathcal{U}/\!\!/G \ar@<-0.3ex>@{^{(}->}[rr] & & Y/\!\!/G. } \end{aligned}$$ For $y \in \mathcal{U}/\!\!/G$, we denote by $\widehat{Y}_y$ the formal fiber of $Y \to Y/\!\!/G$ at $y$, and by $\widehat{\mathfrak{U}}_y \hookrightarrow \widehat{Y}_y$ the derived zero locus of $s$ restricted to $\widehat{Y}_y$. Let $\iota_y \colon \widehat{\mathfrak{U}}_y/G \to \mathfrak{U}/G$ be the induced map. **Lemma 46**. 
*The image of the pull-back functor $$\begin{aligned} \iota_y^{\ast} \colon D^b(\mathfrak{U}/G) \to D^b(\widehat{\mathfrak{U}}_y/G) \end{aligned}$$ classically generates $D^b(\widehat{\mathfrak{U}}_y/G)$.* *Proof.* Since $D^b(\widehat{\mathfrak{U}}_y/G)$ is classically generated by the image of $$\begin{aligned} D^b(\widehat{\mathcal{U}}_y/G) \to D^b(\widehat{\mathfrak{U}}_y/G) \end{aligned}$$ the claim follows from Lemma [Lemma 45](#lem:gen1){reference-type="ref" reference="lem:gen1"}. ◻ **Lemma 47**. *Let $D^b(\mathfrak{U}/G)=\langle \mathbb{T}_i \mid i \in I \rangle$ be a $(Y/\!\!/G)$-linear semiorthogonal decomposition. Let $\widehat{\mathbb{T}}_{i, y} \subset D^b(\widehat{\mathfrak{U}}_y/G)$ be the subcategory classically generated by the image of $\iota_y^{\ast} \colon \mathbb{T}_i \to D^b(\widehat{\mathfrak{U}}_y/G)$. Then there is a semiorthogonal decomposition $$D^b(\widehat{\mathfrak{U}}_y/G)=\langle \widehat{\mathbb{T}}_{i, y} \mid i \in I\rangle.$$* *Proof.* The subcategories $\widehat{\mathbb{T}}_{i, y}$ classically generate $D^b(\widehat{\mathfrak{U}}_y/G)$ by Lemma [Lemma 46](#lem:gen2){reference-type="ref" reference="lem:gen2"}. As for the semiorthogonality, take $i, j \in I$ such that $\operatorname{Hom}(\mathbb{T}_i, \mathbb{T}_j)=0$. Then for $A \in \mathbb{T}_i$ and $B \in \mathbb{T}_j$, we have $$\begin{aligned} \label{HomAB} \operatorname{Hom}(\iota_y^{\ast}A, \iota_y^{\ast}B) =\operatorname{Hom}(A, B \otimes \iota_{y\ast}\mathcal{O}_{\widehat{\mathfrak{U}}_y/G}) =\operatorname{Hom}(A, B \otimes f^{\ast}\widehat{\mathcal{O}}_{Y/\!\!/G, y}). \end{aligned}$$ The sheaf $\widehat{\mathcal{O}}_{Y/\!\!/G, y}$ is an object of $D_{\rm{qc}}(Y/\!\!/G)=\operatorname{Ind}\mathrm{Perf}(Y/\!\!/G)$, hence $f^{\ast}\widehat{\mathcal{O}}_{Y/\!\!/G, y} \in D_{\rm{qc}}(\mathfrak{U}/G)$, and $\otimes$ is the action of $D_{\rm{qc}}(\mathfrak{U}/G)$ on $\operatorname{Ind}D^b(\mathfrak{U}/G)$, which, recall, is continuous (i.e. it preserves small coproducts).
Then $B \otimes f^{\ast}\widehat{\mathcal{O}}_{Y/\!\!/G, y}$ is an object of $\operatorname{Ind}\mathbb{T}_{j}$, and by writing it as $\mathrm{colim}_{k \in K} B_k$ for $B_k \in \mathbb{T}_j$, we have $$\begin{aligned} \operatorname{Hom}(A, \mathrm{colim}_{k\in K}B_k)= \mathrm{colim}_{k\in K}\operatorname{Hom}(A, B_k)=0, \end{aligned}$$ where the first identity follows as $A$ is compact, and the second vanishing holds because $\operatorname{Hom}(\mathbb{T}_{i}, \mathbb{T}_j)=0$. ◻ ## Descriptions of quasi-BPS categories for doubled quivers In this subsection, we give an alternative description of quasi-BPS categories for doubled quivers, which will be used in the proof of Proposition [Proposition 17](#prop:sod2){reference-type="ref" reference="prop:sod2"}. Below we keep the notation of Subsection [3.5](#subsub:formal){reference-type="ref" reference="subsub:formal"}. Let $Q^{\circ}$ be a $g$-loop quiver. For $d \in \mathbb{N}$, let $\mathscr{X}(d)$ be the moduli stack of $d$-dimensional representations of the tripled quiver of $Q^{\circ}$: $$\mathscr{X}(d)=\mathfrak{gl}(d)^{\oplus 2g+1}/GL(d).$$ Consider the regular function induced by the tripled potential: $$\begin{aligned} \label{TrW:X} \mathop{\rm Tr}W(x_1, \ldots, x_g, y_1, \ldots, y_g, z) = \mathop{\rm Tr}\sum_{i=1}^g z[x_i, y_i] \colon \mathscr{X}(d) \to \mathbb{C}. \end{aligned}$$ Let $\mathbb{C}^{\ast}$ act on $z$ with weight two.
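For orientation, we record the standard computation of the critical locus of ([\[TrW:X\]](#TrW:X){reference-type="ref" reference="TrW:X"}); it is a routine check and is not used below. Taking the partial derivatives of $\mathop{\rm Tr}W$ with respect to $z$, $x_i$, $y_i$ gives the equations $$\begin{aligned} \sum_{i=1}^g [x_i, y_i]=0, \qquad [z, x_i]=0, \qquad [z, y_i]=0 \quad (1\leqslant i\leqslant g), \end{aligned}$$ so $\operatorname{Crit}(\mathop{\rm Tr}W) \subset \mathscr{X}(d)$ parametrizes a $d$-dimensional representation $(x_i, y_i)_{i=1}^g$ of the preprojective algebra of $Q^{\circ}$ together with an endomorphism $z$ commuting with all the $x_i$ and $y_i$.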
We define the subcategory $$\begin{aligned} \label{sgr:w} \mathbb{S}^{\rm{gr}}(d)_w \subset \mathrm{MF}^{\rm{gr}}(\mathscr{X}(d), \mathop{\rm Tr}W) \end{aligned}$$ to be classically generated by matrix factorizations whose factors are direct sums of $\mathcal{O}_{\mathscr{X}(d)} \otimes \Gamma_{GL(d)}(\chi)$ such that $$\begin{aligned} \chi+\rho \in \mathbf{W}(d)_w=\frac{1}{2} \mathrm{sum}[0, \beta]+w\tau_d=\left(\frac{2g+1}{2}\mathrm{sum}_{1\leqslant i,j\leqslant d}[0, \beta_i-\beta_j]\right)+w\tau_d \end{aligned}$$ where the Minkowski sums above are taken over all the $T(d)$-weights $\beta$ of $R_{Q}(d)=\mathfrak{gl}(d)^{\oplus 2g+1}$. Alternatively, by [@hls Lemma 2.9], the subcategory ([\[sgr:w\]](#sgr:w){reference-type="ref" reference="sgr:w"}) consists of matrix factorizations with factors $\mathcal{O}_{\mathscr{X}(d)} \otimes \Gamma$ for a $GL(d)$-representation $\Gamma$ whose $T(d)$-weights are contained in $$\begin{aligned} \nabla(d)_w =\left\{\chi \in M(d)_{\mathbb{R}} : -\frac{1}{2}n_{\lambda} \leqslant\langle \lambda, \chi \rangle \leqslant\frac{1}{2}n_{\lambda} \mbox{ for all } \lambda \colon \mathbb{C}^{\ast} \to T(d)\right\}+w\tau_d. \end{aligned}$$ Here, the width $n_{\lambda}$ is defined by $$\begin{aligned} n_{\lambda}:=\left\langle \lambda, \det \left((R_Q(d)^{\vee})^{\lambda>0}\right)-\det \left((\mathfrak{gl}(d)^{\vee})^{\lambda>0}\right) \right\rangle=2g \left\langle \lambda, \det \left(\mathfrak{gl}(d)^{\lambda>0}\right)\right\rangle. \end{aligned}$$ The equivalence ([\[Koszul:theta\]](#Koszul:theta){reference-type="ref" reference="Koszul:theta"}) restricts to an equivalence, see Lemma [\[lem:genJ\]](#lem:genJ){reference-type="ref" reference="lem:genJ"}: $$\begin{aligned} \label{theta:rest} \Theta \colon \mathbb{T}(d)_w \stackrel{\sim}{\to} \mathbb{S}^{\rm{gr}}(d)_w.
\end{aligned}$$ We next give another description of the subcategory ([\[qbps:that\]](#qbps:that){reference-type="ref" reference="qbps:that"}) based on Lemma [\[lem:compareT\]](#lem:compareT){reference-type="ref" reference="lem:compareT"}. As in Subsections [3.4](#subsec:onevert){reference-type="ref" reference="subsec:onevert"}, [3.5](#subsub:formal){reference-type="ref" reference="subsub:formal"}, let $\mathscr{P}(d)$ be the derived moduli stack of $d$-dimensional representations of the quiver $Q^{\circ,d}$ with relation $\mathscr{I}$. There is a good moduli space map $$\pi_{P,d}\colon \mathscr{P}(d)^{\rm{cl}} \to P(d).$$ Let $p \in P(d)$ be a closed point corresponding to the semisimple $(Q^{\circ,d}, \mathscr{I})$-representation $R_p$ as in ([\[Rp\]](#Rp){reference-type="ref" reference="Rp"}): $$\label{Rp2} R_p=\bigoplus_{i=1}^m W^{(i)} \otimes R^{(i)},$$ where $R^{(i)}$ is a simple representation of dimension $r^{(i)}$ and $W^{(i)}$ is a finite dimensional $\mathbb{C}$-vector space. Recall that $G_p=\prod_{i=1}^m GL(W^{(i)})$ and let $T_p \subset G_p$ be a maximal torus. Note that we have an isomorphism of $G_p$-representations: $$\begin{aligned} \label{ext1} \operatorname{Ext}_{Q^{\circ, d}}^1(R_p, R_p) \oplus \mathfrak{gl}(d)^{\vee} =\bigoplus_{i, j}\operatorname{Hom}(W^{(i)}, W^{(j)})^{\oplus (\delta_{ij}+2g r^{(i)} r^{(j)})}. \end{aligned}$$ Let $M_p$ be the character lattice of $T_p$ and let $\tau_{d, p} \in (M_p)_{\mathbb{R}}$ be the restriction of $\tau_d$ to $G_p \subset GL(d)$. For $w \in \mathbb{Z}$, we set $$\begin{aligned} \label{Minksum} \mathbf{W}_p(d)_{w}=\frac{1}{2}\mathrm{sum}[0, \beta] +w \tau_{d, p} \subset (M_p)_{\mathbb{R}}, \end{aligned}$$ where the Minkowski sum is taken over all $T_p$-weights $\beta$ in the representation ([\[ext1\]](#ext1){reference-type="ref" reference="ext1"}). Let $\beta_i^{(j)}$ for $1\leqslant i \leqslant\dim W^{(j)}$ be the weights of the standard representation of $GL(W^{(j)})$.
Then a weight $\chi$ in $\textbf{W}_p(d)_w$ is written as $$\begin{aligned} \label{wt:wpd} \chi=\sum_{i, j, a, b}c_{ij}^{(ab)}(\beta_i^{(a)}-\beta_j^{(b)}) +\frac{w}{d}\sum_{i, a}r^{(a)}\beta_i^{(a)},\end{aligned}$$ where the sum above runs over all $1\leqslant a, b\leqslant m$, $1\leqslant i\leqslant\dim W^{(a)}$, $1\leqslant j\leqslant\dim W^{(b)}$, and where $\lvert c_{ij}^{(ab)} \rvert \leqslant\delta_{ab}/2+gr^{(a)}r^{(b)}$ for all such $a,b,i,j$. **Lemma 48**. *Recall the map $j_p \colon \widehat{\mathscr{P}}(d)_p \hookrightarrow \widehat{\mathscr{Y}}(d)_p$ from [\[mapjp\]](#mapjp){reference-type="eqref" reference="mapjp"}. The subcategory introduced in ([\[qbps:that\]](#qbps:that){reference-type="ref" reference="qbps:that"}): $$\begin{aligned} \label{def:Tformal} \mathbb{T}_p(d)_{w} \subset D^b(\widehat{\mathscr{P}}(d)_p)\end{aligned}$$ coincides with the subcategory of objects $\mathcal{E}$ such that $j_{p\ast}\mathcal{E}$ is generated by the vector bundles $\Gamma_{G_p}(\chi) \otimes \mathcal{O}_{\widehat{\mathscr{Y}}(d)_p}$, where $\chi$ is a dominant $T_p$-weight satisfying $$\begin{aligned} \chi+\rho_p \in \mathbf{W}_p(d)_{w},\end{aligned}$$ where $\rho_p$ is half the sum of positive roots of $G_p$.* *Proof.* The lemma follows similarly to Lemma [\[lem:compareT\]](#lem:compareT){reference-type="ref" reference="lem:compareT"}, using the Koszul equivalence and [@PTquiver Corollary 3.14]. ◻ **Remark 49**.
Alternatively, by Lemma [Lemma 48](#lem:anotherT){reference-type="ref" reference="lem:anotherT"} and [@hls Lemma 2.9], the subcategory ([\[def:Tformal\]](#def:Tformal){reference-type="ref" reference="def:Tformal"}) consists of objects $\mathcal{E}$ such that $j_{p\ast}\mathcal{E}$ is generated by vector bundles $W \otimes \mathcal{O}_{\widehat{\mathscr{Y}}(d)_p}$ for $W$ a $G_p$-representation whose $T_p$-weights are contained in the set $$\begin{aligned} \label{nablap} \left\{\chi \in (M_{p})_{\mathbb{R}} : -\frac{1}{2}n_{\lambda, p} \leqslant \langle \lambda, \chi \rangle \leqslant\frac{1}{2} n_{\lambda, p}\text{ for all }\lambda \colon \mathbb{C}^{\ast} \to T_p \right\}+w\tau_{d, p}.\end{aligned}$$ Here, the width $n_{\lambda, p}$ is defined by $$\begin{aligned} n_{\lambda, p}=\Big\langle \lambda, \det\left(\operatorname{Ext}_{Q^{\circ, d}}^1(R_p, R_p)^{\vee}\oplus \mathfrak{gl}(d)\right)^{\lambda>0}\Big\rangle-\Big\langle \lambda, \det\left((\mathfrak{g}_p^{\vee})^{\lambda>0}\right) \Big\rangle,\end{aligned}$$ where $\mathfrak{g}_p$ is the Lie algebra of $G_p$. From ([\[ext1\]](#ext1){reference-type="ref" reference="ext1"}), one can easily check that $$\begin{aligned} \label{eta:equal} n_{\lambda, p}=2g \Big\langle \lambda, \det\left(\mathfrak{gl}(d)^{\lambda>0}\right) \Big\rangle =n_{\lambda}\end{aligned}$$ for any cocharacter $\lambda \colon \mathbb{C}^{\ast} \to T_p \subset T(d)$. ## Proof of Proposition [Proposition 17](#prop:sod2){reference-type="ref" reference="prop:sod2"} {#subsec:prop} In this subsection, we prove Proposition [Proposition 17](#prop:sod2){reference-type="ref" reference="prop:sod2"} and Corollary [Corollary 18](#cor:gen){reference-type="ref" reference="cor:gen"}, and thus finish the proof of Theorem [Theorem 43](#thm:sodK3){reference-type="ref" reference="thm:sodK3"}. 
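Before giving the proof, we spell out the routine check behind the equality ([\[eta:equal\]](#eta:equal){reference-type="ref" reference="eta:equal"}) from Remark [Remark 49](#rmk:anotherT){reference-type="ref" reference="rmk:anotherT"}, as it is used implicitly below. Dualizing ([\[ext1\]](#ext1){reference-type="ref" reference="ext1"}) and using $\operatorname{Hom}(W^{(i)}, W^{(j)})^{\vee} \cong \operatorname{Hom}(W^{(j)}, W^{(i)})$ together with the symmetry of the multiplicities $\delta_{ij}+2g r^{(i)} r^{(j)}$ in $i, j$, the $G_p$-representation $\operatorname{Ext}^1_{Q^{\circ, d}}(R_p, R_p)^{\vee} \oplus \mathfrak{gl}(d)$ is isomorphic to the right-hand side of ([\[ext1\]](#ext1){reference-type="ref" reference="ext1"}). Since $\mathfrak{g}_p=\bigoplus_{i} \operatorname{End}(W^{(i)})$ is exactly the $\delta_{ij}$-part of that sum, and $\mathfrak{gl}(d) \cong \bigoplus_{i, j}\operatorname{Hom}(W^{(i)}, W^{(j)})^{\oplus r^{(i)} r^{(j)}}$ as $T_p$-representations, for any cocharacter $\lambda \colon \mathbb{C}^{\ast} \to T_p$ we obtain $$\begin{aligned} n_{\lambda, p} =2g \sum_{i, j} r^{(i)} r^{(j)} \Big\langle \lambda, \det\left(\operatorname{Hom}(W^{(i)}, W^{(j)})^{\lambda>0}\right)\Big\rangle =2g \Big\langle \lambda, \det\left(\mathfrak{gl}(d)^{\lambda>0}\right)\Big\rangle=n_{\lambda}. \end{aligned}$$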
*Proof of Proposition [Proposition 17](#prop:sod2){reference-type="ref" reference="prop:sod2"} and Corollary [Corollary 18](#cor:gen){reference-type="ref" reference="cor:gen"}.* Let $\iota_p \colon \widehat{\mathscr{P}}(d)_p \to \mathscr{P}(d)$ be the natural induced map and define $\widehat{\mathbb{T}}_p(d)_w$ to be the subcategory of $D^b(\widehat{\mathscr{P}}(d)_p)$ classically generated by the image of $$\begin{aligned} \iota_p^{\ast} \colon \mathbb{T}(d)_w \to D^b(\widehat{\mathscr{P}}(d)_p). \end{aligned}$$ By Theorem [\[cor:sodT\]](#cor:sodT){reference-type="ref" reference="cor:sodT"} and Lemma [Lemma 47](#lem:basechange){reference-type="ref" reference="lem:basechange"}, we have the semiorthogonal decomposition $$\begin{aligned} \label{sod:hat} D^b(\widehat{\mathscr{P}}(d)_p) =\left\langle \bigoplus_{p_1+\cdots+p_k=p} \boxtimes_{i=1}^k \widehat{\mathbb{T}}_{p_i}(d_i)_{w_i+(g-1)d_i(\sum_{i>j}d_j-\sum_{i<j}d_j)} \right\rangle. \end{aligned}$$ Therefore it is enough to show that $$\label{tpdw} \widehat{\mathbb{T}}_p(d)_w=\mathbb{T}_p(d)_w,$$ which is the claim of Corollary [Corollary 18](#cor:gen){reference-type="ref" reference="cor:gen"}. Let $\widehat{\mathscr{X}}(d)_p$ be the formal fiber at $p$ of the composition $$\begin{aligned} \mathscr{X}(d) \to \mathscr{Y}(d) \to Y(d)=\mathfrak{gl}(d)^{\oplus 2g}/\!\!/GL(d),\end{aligned}$$ where the first morphism is the natural projection. It is given by $$\begin{aligned} \widehat{\mathscr{X}}(d)_p=(\widehat{\operatorname{Ext}}^1_{Q^{\circ, d}}(R_p, R_p)\times \mathfrak{gl}(d)^{\vee})/G_p. \end{aligned}$$ We have the Koszul duality equivalence, see Theorem [\[thm:Kduality\]](#thm:Kduality){reference-type="ref" reference="thm:Kduality"} $$\begin{aligned} \label{Kdualp} \Theta_p \colon D^b(\widehat{\mathscr{P}}(d)_p) \stackrel{\sim}{\to} \mathrm{MF}^{\rm{gr}}(\widehat{\mathscr{X}}(d)_p, \mathop{\rm Tr}W). 
\end{aligned}$$ We next define categories Koszul equivalent to the two categories in [\[tpdw\]](#tpdw){reference-type="eqref" reference="tpdw"}: $$\begin{aligned} \widehat{\mathbb{S}}^{\rm{gr}}_p(d)_w \subset \mathrm{MF}^{\rm{gr}}(\widehat{\mathscr{X}}(d)_p, \mathop{\rm Tr}W), \ \mathbb{S}^{\rm{gr}}_p(d)_w \subset \mathrm{MF}^{\rm{gr}}(\widehat{\mathscr{X}}(d)_p, \mathop{\rm Tr}W).\end{aligned}$$ We define the subcategory $\widehat{\mathbb{S}}^{\rm{gr}}_p(d)_w$ to be classically generated by the image of $$\begin{aligned} \notag \mathbb{S}^{\rm{gr}}(d)_w \subset \mathrm{MF}^{\rm{gr}}(\mathscr{X}(d), \mathop{\rm Tr}W) \to \mathrm{MF}^{\rm{gr}}(\widehat{\mathscr{X}}(d)_p, \mathop{\rm Tr}W). \end{aligned}$$ We define the subcategory $\mathbb{S}^{\rm{gr}}_p(d)_w$ to consist of matrix factorizations whose factors are of the form $W \otimes \mathcal{O}_{\widehat{\mathscr{X}}(d)_p}$, where $W$ is a $G_p$-representation whose $T_p$-weights are contained in ([\[nablap\]](#nablap){reference-type="ref" reference="nablap"}). By the equivalence ([\[theta:rest\]](#theta:rest){reference-type="ref" reference="theta:rest"}) and using Lemma [\[lem:genJ\]](#lem:genJ){reference-type="ref" reference="lem:genJ"} and Remark [Remark 49](#rmk:anotherT){reference-type="ref" reference="rmk:anotherT"}, the equivalence $\Theta_p$ restricts to equivalences $$\begin{aligned} \Theta_p \colon \widehat{\mathbb{T}}_p(d)_w \stackrel{\sim}{\to} \widehat{\mathbb{S}}^{\rm{gr}}_p(d)_w, \ \mathbb{T}_p(d)_w \stackrel{\sim}{\to} \mathbb{S}^{\rm{gr}}_p(d)_w. \end{aligned}$$ It is enough to show that $\widehat{\mathbb{S}}^{\rm{gr}}_p(d)_w=\mathbb{S}^{\rm{gr}}_p(d)_w$. By Remark [Remark 49](#rmk:anotherT){reference-type="ref" reference="rmk:anotherT"}, it is obvious that $\widehat{\mathbb{T}}_p(d)_w \subset \mathbb{T}_p(d)_w$, hence $\widehat{\mathbb{S}}^{\rm{gr}}_p(d)_w \subset \mathbb{S}^{\rm{gr}}_p(d)_w$.
By the semiorthogonal decomposition ([\[sod:hat\]](#sod:hat){reference-type="ref" reference="sod:hat"}) together with the equivalence ([\[Kdualp\]](#Kdualp){reference-type="ref" reference="Kdualp"}), we have the semiorthogonal decomposition $$\begin{aligned} \label{sod:hat2} \mathrm{MF}^{\rm{gr}}(\widehat{\mathscr{X}}(d)_p, \mathop{\rm Tr}W) =\left\langle \bigoplus_{p_1+\cdots+p_k=p} \boxtimes_{i=1}^k \widehat{\mathbb{S}}^{\rm{gr}}_{p_i}(d_i)_{w_i+g d_i(\sum_{i>j}d_j-\sum_{i<j}d_j)} \right\rangle \end{aligned}$$ for $w_1/d_1<\cdots<w_k/d_k$, and each summand is given by the categorical Hall product; see [@P2 Proposition 3.1] or [@T Lemma 2.4.4, 2.4.7] for the compatibility of the categorical Hall products under Koszul duality. In Lemma [Lemma 50](#lem:orthoS){reference-type="ref" reference="lem:orthoS"} below, we show that the semiorthogonal summands in ([\[sod:hat2\]](#sod:hat2){reference-type="ref" reference="sod:hat2"}) except $\widehat{\mathbb{S}}^{\rm{gr}}_p(d)_w$ are right orthogonal to $\mathbb{S}^{\rm{gr}}_p(d)_w$. Then by ([\[sod:hat2\]](#sod:hat2){reference-type="ref" reference="sod:hat2"}) we have $\mathbb{S}^{\rm{gr}}_p(d)_w \subset \widehat{\mathbb{S}}^{\rm{gr}}_p(d)_w$, hence $\widehat{\mathbb{S}}^{\rm{gr}}_p(d)_w=\mathbb{S}^{\rm{gr}}_p(d)_w$. ◻ **Lemma 50**. *The semiorthogonal summands in ([\[sod:hat2\]](#sod:hat2){reference-type="ref" reference="sod:hat2"}) with $k\geqslant 2$ are right orthogonal to $\mathbb{S}^{\rm{gr}}_p(d)_w$.* *Proof.* The proof is analogous to that of [@PT1 Lemma 3.6]. The inclusion $T_p \subset T(d)$ induces a surjection $M(d) \twoheadrightarrow M_p$. We will regard $T(d)$-weights as $T_p$-weights via this surjection. Let $\widehat{\textbf{W}}_p(d)_w$ be the image of $\textbf{W}(d)_w \subset M(d)_{\mathbb{R}}$ under the surjection $M(d)_{\mathbb{R}} \twoheadrightarrow (M_{p})_{\mathbb{R}}$.
Recall the decomposition [\[Rp2\]](#Rp2){reference-type="eqref" reference="Rp2"} and the weights $\beta^{(a)}_i$ for $1\leqslant a\leqslant m$ and $1\leqslant i\leqslant\dim W^{(a)}$. Then a weight $\chi$ in $\widehat{\textbf{W}}_p(d)_w$ is written as $$\begin{aligned} \label{write:chi} \chi=\sum_{i, j, a, b}\alpha_{ij}^{(ab)}(\beta_i^{(a)}-\beta_j^{(b)}) +\frac{w}{d}\sum_{i, a}r^{(a)}\beta_i^{(a)},\end{aligned}$$ where the sum above runs over all $1\leqslant a, b\leqslant m$, $1\leqslant i\leqslant\dim W^{(a)}$, $1\leqslant j\leqslant\dim W^{(b)}$, and $\lvert \alpha_{ij}^{(ab)} \rvert \leqslant r^{(a)}r^{(b)}(g+1/2)$. We also note that a choice of $(p_1, \ldots, p_k)$ corresponds to decompositions for all $1\leqslant j\leqslant m$: $$\begin{aligned} W^{(j)}=W_1^{(j)} \oplus \cdots \oplus W_k^{(j)}\end{aligned}$$ such that $d_i^{(j)}=\dim W_i^{(j)}$ satisfies $d_i=d_i^{(1)}+\cdots+d_i^{(m)}$. Let $\lambda$ be the antidominant cocharacter of $T_p$ which acts on the space $W_i^{(j)}$ by weight $(k+1-i)$ for $1\leqslant j\leqslant m$ and $1\leqslant i\leqslant k$, and write it as $\lambda=(\lambda^{(j)})_{1\leqslant j\leqslant m}$, where $\lambda^{(j)}$ is a cocharacter of the maximal torus of $GL(W^{(j)})$. We set $\mathfrak{g}^{(j)}=\mathrm{End}(W^{(j)})$. Consider the diagram of attracting loci $$\begin{aligned} \widehat{\mathscr{X}}(d)_p^{\lambda}=\times_{i=1}^k \widehat{\mathscr{X}}(d_i)_{p_i} \stackrel{q}{\leftarrow} \widehat{\mathscr{X}}(d)_p^{\lambda \geqslant 0} \stackrel{p}{\to} \widehat{\mathscr{X}}(d)_p.
\end{aligned}$$ Let $A=\Gamma_{GL(d)}(\chi) \otimes \mathcal{O}_{\widehat{\mathscr{X}}(d)_p}$ and $B=\Gamma_{GL(d)^{\lambda}}(\chi') \otimes \mathcal{O}_{\widehat{\mathscr{X}}(d)_p^{\lambda}}$ such that $$\begin{aligned} \label{condition:chiW} \chi+\rho_p \in \textbf{W}_p(d)_w, \ \chi'+\sum_{i=1}^k \rho_{p_i} \in \bigoplus_{i=1}^k \widehat{\textbf{W}}_{p_i}(d_i)_{w_i'}\subset \bigoplus_{i=1}^k M(d_i)_\mathbb{R}=M(d)_\mathbb{R},\end{aligned}$$ where $w=w_1+\cdots+w_k$, $w_1/d_1<\cdots<w_k/d_k$ and $w_i'=w_i+gd_i(\sum_{i>j}d_j-\sum_{j>i}d_j)$. We write $$\label{psidecomp} \chi'=\sum_{i=1}^k (\psi_i+w'_i\tau_{d_i}),\, \psi_i\in \widehat{\textbf{W}}_{p_i}(d_i)_0.$$ By adjunction, we have $$\begin{aligned} \label{hom:AB} \operatorname{Hom}(A, p_{\ast}q^{\ast}B)=\operatorname{Hom}(p^{\ast}A, q^{\ast}B). \end{aligned}$$ Let $\chi''$ be a weight of $\Gamma_{GL(d)}(\chi)$. Below we show that $$\label{inequality} \langle \lambda, \chi'' \rangle > \langle \lambda, \chi'\rangle.$$ Then ([\[hom:AB\]](#hom:AB){reference-type="ref" reference="hom:AB"}) vanishes by [@P Proposition 4.2] and thus the lemma holds. Let $\mu$ be the weight: $$\begin{aligned} \label{wt:mu} \mu=-\frac{1}{2} \mathfrak{gl}(d)^{\lambda>0}+\frac{1}{2}\sum_{a=1}^m(\mathfrak{g}^{(a)})^{\lambda^{(a)}>0}= \sum_{i, j, a<b}\gamma_{ij}^{(ab)}(\beta_i^{(a)}-\beta_j^{(b)})\end{aligned}$$ where $1\leqslant a<b\leqslant m$, $1\leqslant i\leqslant\dim W^{(a)}$, $1\leqslant j\leqslant\dim W^{(b)}$, and such that $\lvert \gamma_{ij}^{(ab)}\rvert=r^{(a)}r^{(b)}/2$. To show [\[inequality\]](#inequality){reference-type="eqref" reference="inequality"}, it is enough to show that $$\begin{aligned} \label{ineq:chirho} \langle \lambda, \chi''+\rho_p+\mu-w\tau_d\rangle >\langle \lambda, \chi'+\rho_p+\mu-w\tau_d \rangle.
\end{aligned}$$ By [\[psidecomp\]](#psidecomp){reference-type="eqref" reference="psidecomp"}, we write $$\begin{aligned} \chi'+\rho_p+\mu-w\tau_d =\sum_{i=1}^k \psi_i+\sum_{i=1}^k w_i \tau_{d_i} -\frac{2g+1}{2} \mathfrak{gl}(d)^{\lambda>0}-w\tau_d, \end{aligned}$$ where $\psi_i \in \widehat{\textbf{W}}_{p_i}(d_i)_{0}$ for $1\leqslant i\leqslant k$. In what follows, we write $\mathfrak{gl}(d)^{\lambda>0}$ instead of $\det\left(\mathfrak{gl}(d)^{\lambda>0}\right)$ to simplify notation. We compute $$\begin{aligned} \left\langle \lambda, \chi'+\rho_p+\mu-w\tau_d \right\rangle &=\left\langle \lambda, \sum_{i=1}^k \psi_i+\sum_{i=1}^k w_i \tau_{d_i}-\frac{2g+1}{2} \mathfrak{gl}(d)^{\lambda>0} -w \tau_d \right\rangle \\ &=\sum_{i=1}^k (k+1-i)d_i\left(\frac{w_i}{d_i}-\frac{w}{d}\right) -\left\langle \lambda, \frac{2g+1}{2}\mathfrak{gl}(d)^{\lambda>0} \right\rangle. \end{aligned}$$ For $1\leqslant i\leqslant k$, define $$\tilde{w}_i:=d_i\left(\frac{w_i}{d_i}-\frac{w}{d}\right).$$ Then $\tilde{w}_1+\cdots+\tilde{w}_k=0$, and $\tilde{w}_1+\cdots+\tilde{w}_l<0$ for $1\leqslant l<k$, since the inequalities $w_1/d_1<\cdots<w_k/d_k$ force the average slope of the first $l$ summands to be strictly smaller than the total average $w/d$. Therefore $$\begin{aligned} \sum_{i=1}^k (k+1-i)d_i\left(\frac{w_i}{d_i}-\frac{w}{d}\right) =\sum_{l=1}^k \left(\sum_{i=1}^l \tilde{w}_i \right)<0. \end{aligned}$$ It follows that $$\begin{aligned} \label{lambda1} \left\langle \lambda, \chi'+\rho_p+\mu-w\tau_d \right\rangle <-\left\langle \lambda, \frac{2g+1}{2}\mathfrak{gl}(d)^{\lambda>0} \right\rangle. \end{aligned}$$ On the other hand, by ([\[condition:chiW\]](#condition:chiW){reference-type="ref" reference="condition:chiW"}) and [@hls Lemma 2.9], we have $$\begin{aligned} \langle \lambda, \chi'' -w\tau_d \rangle \geqslant-\frac{1}{2}n_{\lambda, p}=-g \langle \lambda, \mathfrak{gl}(d)^{\lambda>0} \rangle.
\end{aligned}$$ Then $$\begin{aligned} \langle \lambda, \chi''+\rho_p+\mu-w\tau_d \rangle \geqslant -g \langle \lambda, \mathfrak{gl}(d)^{\lambda>0}\rangle+\langle \lambda, \rho_p+\mu \rangle=-\left\langle \lambda, \frac{2g+1}{2}\mathfrak{gl}(d)^{\lambda>0} \right\rangle.\end{aligned}$$ Therefore we have the inequality ([\[ineq:chirho\]](#ineq:chirho){reference-type="ref" reference="ineq:chirho"}). ◻ # Smoothness and properness of reduced quasi-BPS categories In this section, we show that the reduced version of the quasi-BPS category is smooth and proper, which gives evidence towards Conjecture [Conjecture 34](#conj:HK){reference-type="ref" reference="conj:HK"}. We first prove the strong generation of quasi-BPS categories. It relies on the strong generation of singular support quotients, which is itself of independent interest and is proved in Subsection [6.3](#subsec:ssuport:gen){reference-type="ref" reference="subsec:ssuport:gen"}. ## Strong generation of quasi-BPS categories In this subsection, we prove the strong generation of the quasi-BPS category $\mathbb{T}_S(v)_{w}$; see Subsection [2.1](#notation){reference-type="ref" reference="notation"} for the terminology of strong generation. The strategy is to show that $\mathbb{T}_S(v)_{w}$ is admissible in a singular support quotient category constructed from Joyce-Song pairs on the local Calabi-Yau threefold $X:=S\times \mathbb{C}$, which has a strong generator by Theorem [Theorem 61](#thm:regular2){reference-type="ref" reference="thm:regular2"}. Let $S$ be a smooth projective K3 surface, let $H$ be an ample divisor on $S$, and set $\mathcal{O}(n)=\mathcal{O}_S(nH)$. For $v \in N(S)$, let $\mathfrak{M}=\mathfrak{M}_S^H(v)$ be the derived moduli stack of $H$-Gieseker semistable sheaves $F$ on $S$ with numerical class $v$. We take $H$ generic with respect to $v$. Let $n \gg 0$ be such that $H^i(F(n))=0$ for all $i>0$ and all $H$-Gieseker semistable sheaves $F$ with numerical class $v$.
Let $\mathbb{F} \in D^b(S \times \mathfrak{M})$ be the universal sheaf, and consider the following derived stack $$\begin{aligned} \mathfrak{M}^{\dag}:= \operatorname{Spec}_{\mathfrak{M}} \mathrm{Sym} (p_{\mathfrak{M}\ast}(\mathbb{F}\boxtimes \mathcal{O}(n))^{\vee}), \end{aligned}$$ where $p_{\mathfrak{M}} \colon S \times \mathfrak{M} \to \mathfrak{M}$ is the projection. The stack $\mathfrak{M}^{\dag}$ is the derived moduli stack of pairs $(F, s)$, where $F$ is an $H$-Gieseker semistable sheaf on $S$ with numerical class $v$ and $s \in H^0(F(n))$. We consider its $(-1)$-shifted cotangent space $$\begin{aligned} \Omega_{\mathfrak{M}^{\dag}}[-1]=\operatorname{Spec}_{\mathfrak{M}^{\dag}} \mathrm{Sym}(\mathbb{T}_{\mathfrak{M}^{\dag}}[1]). \end{aligned}$$ Since the projection $\mathfrak{M}^{\dag} \to \mathfrak{M}$ is smooth, we have the isomorphism, see [@T Lemma 3.1.2]: $$\begin{aligned} (\Omega_{\mathfrak{M}}[-1]\times_{\mathfrak{M}} \mathfrak{M}^{\dag})^{\rm{cl}} \stackrel{\cong}{\to} \Omega_{\mathfrak{M}^{\dag}}[-1]^{\rm{cl}}. \end{aligned}$$ Therefore, $\Omega_{\mathfrak{M}^{\dag}}[-1]$ is the derived moduli stack of pairs $(E, s)$, where $E$ is a compactly supported coherent sheaf on the local K3 surface $$\begin{aligned} X:=\mathrm{Tot}(\omega_S)=S \times \mathbb{C} \stackrel{r}{\to} S\end{aligned}$$ such that $r_{\ast}E$ has numerical class $v$, and $s \in H^0(E(n))$. Here the pull-back of $\mathcal{O}(n)$ on $S$ to $X$ is also denoted by $\mathcal{O}(n)$. We recall the definition of Joyce-Song (JS) stable pairs on $X$: **Definition 51**. 
([@JS Definition 5.20]) A pair $(E, s)$ on $X=S \times \mathbb{C}$ is JS-stable if $E$ is a compactly supported $H$-Gieseker semistable sheaf on $X$ and $s \in H^0(E(n))$ is a section such that there is no non-trivial exact sequence of framed sheaves $$\begin{aligned} \label{ex:framed} 0 \to (\mathcal{O}_X \to E'(n)) \to (\mathcal{O}_X \stackrel{s}{\to} E(n)) \to (0 \to E''(n)) \to 0,\end{aligned}$$ where $E'$, $E''$ are $H$-Gieseker semistable sheaves with the same reduced Hilbert polynomials. We denote by $$\begin{aligned} \Omega_{\mathfrak{M}^{\dag}}^{\rm{JS}}[-1] \subset \Omega_{\mathfrak{M}^{\dag}}[-1] \end{aligned}$$ the open substack consisting of JS-stable pairs, and we denote by $\mathscr{Z}^{\rm{JS}}$ its complement. It is well-known that $\Omega_{\mathfrak{M}^{\dag}}^{\rm{JS}}[-1]^{\rm{cl}}$ is a quasi-projective scheme, which easily follows from [@JS Theorem 5.22] by taking a compactification of $X$. We set $$\begin{aligned} \ell :=\det p_{\mathfrak{M}\ast}(\mathbb{F}\boxtimes \mathcal{O}(n)) \in \mathrm{Pic}(\mathfrak{M}).\end{aligned}$$ Its pull-back to $\Omega_{\mathfrak{M}^{\dag}}[-1]$ is also denoted by $\ell$. We denote by $\Omega^{\ell\text{-ss}}_{\mathfrak{M}^{\dag}}[-1]$ the stack of $\ell$-semistable points in $\Omega_{\mathfrak{M}^{\dag}}[-1]^{\mathrm{cl}}$. **Lemma 52**. *We have $\Omega^{\rm{JS}}_{\mathfrak{M}^{\dag}}[-1]= \Omega^{\ell\mathrm{-ss}}_{\mathfrak{M}^{\dag}}[-1]$.* *Proof.* Let $\mathfrak{M}^{\rm{cl}} \to M$ be a good moduli space. It is enough to prove the identity on each fiber at a closed point $y \in M$ for the composition of the projections $$\begin{aligned} \label{compose2} \gamma \colon \Omega_{\mathfrak{M}^{\dag}}[-1]^{\rm{cl}} \to \mathfrak{M}^{\dag, \rm{cl}} \to \mathfrak{M}^{\rm{cl}} \to M. \end{aligned}$$ A point $y$ corresponds to a polystable sheaf $\bigoplus_{i=1}^m V^{(i)} \otimes F^{(i)}$. Let $(Q^{\circ, d}_y, \mathscr{I}_y)$ be the Ext-quiver of $(F^{(1)}, \ldots, F^{(m)})$ with relation $\mathscr{I}_y$. 
The quiver $Q^{\circ, d}_y$ is the double of some quiver $Q_y^{\circ}$, see Remark [Remark 23](#rmk:Equiver){reference-type="ref" reference="rmk:Equiver"}. Let $(Q_y, W)$ be the tripled quiver with potential of $Q_y^{\circ}$, see Subsection [3.1.3](#subsec:triple){reference-type="ref" reference="subsec:triple"}. Let $c^{(i)}:=h^0(F^{(i)}(n))>0$ and let $Q_y^{\dag}$ be the quiver obtained by adding a vertex $\{0\}$ to $Q_y$ and $c^{(i)}$ arrows from $0$ to $i$ for $1\leqslant i\leqslant m$. Then a fiber of ([\[compose2\]](#compose2){reference-type="ref" reference="compose2"}) at $y$ corresponds to nilpotent $Q_y^{\dag}$-representations with dimension vector $(1, \bm{d})$ where $\bm{d}=(\dim V^{(i)})_{i=1}^m$ and $1$ is the dimension at the vertex $\{0\}$: $$\begin{aligned} \label{isom:gamma} \gamma^{-1}(y) \cong R^{\rm{nil}}_{Q_y^{\dag}}(1, \bm{d})/G(\bm{d}). \end{aligned}$$ Also the line bundle $\ell$ restricted to $\gamma^{-1}(y)$ corresponds to the character $$\begin{aligned} \ell_y \colon G(\bm{d})=\prod_{i=1}^m GL(V^{(i)}) \to \mathbb{C}^{\ast}, \ (g_i)_{i=1}^m \mapsto \prod_{i=1}^m (\det g_i)^{c^{(i)}}. \end{aligned}$$ By [@T Lemma 5.1.9, 5.1.19], the $\ell_y$-semistable $Q_y^{\dag}$-representations are those generated by the images of the arrows $0 \to i$ with $1\leqslant i \leqslant m$. The $\ell_y$-semistable locus on the right-hand side of ([\[isom:gamma\]](#isom:gamma){reference-type="ref" reference="isom:gamma"}) corresponds to pairs $(E, s)$ on $X$ in $\gamma^{-1}(y)$ such that $r_{\ast}E$ is S-equivalent to $\bigoplus_{i=1}^m V^{(i)} \otimes F^{(i)}$ and there is no exact sequence of the form ([\[ex:framed\]](#ex:framed){reference-type="ref" reference="ex:framed"}), i.e. it is a JS pair. Therefore we obtain the desired identity on $\gamma^{-1}(y)$. ◻ We set $$\begin{aligned} b:=\operatorname{ch}_2(p_{\mathfrak{M}\ast}(\mathbb{F} \boxtimes \mathcal{O}(n))) \in H^4(\mathfrak{M}, \mathbb{Q}).
\end{aligned}$$ Its pull-back to $\Omega_{\mathfrak{M}^{\dag}}[-1]$ is also denoted by $b$. Consider the $\Theta$-stratification with respect to $(\ell, b)$, see [@halpK32 Theorem 4.1.3]: $$\begin{aligned} \Omega_{\mathfrak{M}^{\dag}}[-1]= \mathscr{S}_1 \sqcup \cdots \sqcup \mathscr{S}_N \sqcup \Omega^{\ell\text{-ss}}_{\mathfrak{M}^{\dag}}[-1].\end{aligned}$$ By Theorem [\[thm:window:M\]](#thm:window:M){reference-type="ref" reference="thm:window:M"}, for each choice of $m_\bullet=(m_i)_{i=1}^N \in \mathbb{R}^N$, there is a subcategory $\mathbb{W}(\mathfrak{M}^{\dag})_{m_{\bullet}}^\ell \subset D^b(\mathfrak{M}^{\dag})$ such that the composition $$\begin{aligned} \label{equiv:WDT} \Phi \colon \mathbb{W}(\mathfrak{M}^{\dag})_{m_{\bullet}}^\ell \subset D^b(\mathfrak{M}^{\dag}) \twoheadrightarrow D^b(\mathfrak{M}^{\dag})/\mathcal{C}_{\mathscr{Z}^{\rm{JS}}}\end{aligned}$$ is an equivalence. Let $\eta \colon \mathfrak{M}^{\dag} \to \mathfrak{M}$ be the projection. We have the following lemma: **Lemma 53**. *Let $\delta\in\mathrm{Pic}(\mathfrak{M}^H_S(v))_\mathbb{R}$. There exists a choice $m_\bullet$ such that the functor $\eta^{\ast} \colon D^b(\mathfrak{M}) \to D^b(\mathfrak{M}^{\dag})$ restricts to a functor $\eta^{\ast} \colon \mathbb{T}_S^H(v)_\delta \to \mathbb{W}(\mathfrak{M}^{\dag})_{m_{\bullet}}^\ell$.* *Proof.* We use the notation in the proof of Lemma [Lemma 52](#lem:JS){reference-type="ref" reference="lem:JS"}. For $y\in M$, let $\mathscr{X}_y(\bm{d})$ be the moduli stack of $Q_y$-representations with dimension vector $\bm{d}$ and let $\mathscr{X}_y^{\dag}(\bm{d})$ be the moduli stack of $Q_y^{\dag}$-representations with dimension vector $(1, \bm{d})$. Let $\widehat{\mathscr{X}}_y^{\dag}(\bm{d})$ be the formal fiber of the composition $$\begin{aligned} \mathscr{X}^{\dag}_y(\bm{d}) \to \mathscr{X}_y(\bm{d}) \to X_y(\bm{d})\end{aligned}$$ at the origin, where the last map is the good moduli space morphism. 
Let $$\begin{aligned} \widehat{\mathscr{X}}^{\dag}_y(\bm{d})= \widehat{\mathscr{S}}_1 \sqcup \cdots \sqcup \widehat{\mathscr{S}}_N \sqcup \widehat{\mathscr{X}}^{\dag}_y(\bm{d})^{\ell_y\text{-ss}} \end{aligned}$$ be the Kempf-Ness stratification with respect to $(\ell_y, b_y)$. For $1\leqslant i\leqslant N$, consider the center $\widehat{\mathscr{Z}}_i$ of $\widehat{\mathscr{S}}_i$ and its corresponding one parameter subgroup $\lambda_i$ for the maximal torus of $G(\bm{d})$. Let $\widehat{\mathfrak{M}}_y^{\dag}$, $\widehat{\mathfrak{M}}_y$ be the formal fibers along $\mathfrak{M}^{\dag} \to M$, $\mathfrak{M}\to M$ at $y$ respectively. We have the commutative diagram $$\begin{aligned} \label{com:koszul} \xymatrix{ D^b(\widehat{\mathfrak{M}}_y) \ar[r]^-{\Theta_y}_-{\sim} \ar[d]_-{\eta^{\ast}} & \mathrm{MF}^{\rm{gr}}(\widehat{\mathscr{X}}_y(\bm{d}), \mathop{\rm Tr}W) \ar[d]^-{\eta'^{\ast}} \\ D^b(\widehat{\mathfrak{M}}_y^{\dag}) \ar[r]^-{\Theta_y^{\dag}}_-{\sim} & \mathrm{MF}^{\rm{gr}}(\widehat{\mathscr{X}}_y^{\dag}(\bm{d}), \mathop{\rm Tr}W). } \end{aligned}$$ Here the horizontal arrows are Koszul duality equivalences in Theorem [\[thm:Kduality\]](#thm:Kduality){reference-type="ref" reference="thm:Kduality"}, and the vertical arrows are pull-backs along the natural projections. By [@Totheta Proposition 6.1], there exists a choice $m_\bullet=(m_i)_{i=1}^N \in \mathbb{R}^N$ such that an object $\mathcal{E} \in D^b(\mathfrak{M}^{\dag})$ lies in $\mathbb{W}(\mathfrak{M}^{\dag})_{m_{\bullet}}^\ell$ if and only if, for any $y$ as above, we have $$\begin{aligned} \mathrm{wt}_{\lambda_i} \Theta_y^{\dag}(\mathcal{E}|_{\widehat{\mathfrak{M}}_y^{\dag}})|_{\widehat{\mathscr{Z}}_i} \subset \left[ -\frac{1}{2}n_i^{\dag}, \frac{1}{2}n_i^{\dag}\right)+\langle \lambda_i, \delta_y\rangle. 
\end{aligned}$$ Here, the width $n_i^{\dag}$ is defined by $$\begin{aligned} n_i^{\dag}:=\left\langle \lambda_i, \det\Big(\mathbb{L}^{\lambda_i>0}_{\mathscr{X}_y^{\dag}(\bm{d})}\Big|_{0}\Big)\right\rangle =n_i+\sum_{j=1}^m c^{(j)} \left\langle \lambda_i, \det\big((V_j^{\vee})^{\lambda_i>0}\big) \right\rangle \end{aligned}$$ and $n_i:=\big\langle \lambda_i, \det\big(\mathbb{L}^{\lambda_i>0}_{\mathscr{X}_y(\bm{d})}|_{0}\big)\big\rangle$. On the other hand, by the definition of $\mathbb{T}_S^H(v)_\delta$, for an object $A \in \mathbb{T}_S^H(v)_\delta$, the $\lambda_i$-weights of $\Theta_y(A|_{\widehat{\mathfrak{M}}_y})|_{\widehat{\mathscr{X}}_y(\bm{d})^{\lambda_i}}$ lie in $[-n_i/2, n_i/2]+\langle \lambda_i, \delta_y\rangle$ for all $1\leqslant i\leqslant N$. As in [@T Lemma 5.1.9], each $\lambda_i$ has only non-positive weights in each $V^{(j)}$ for $1\leqslant j\leqslant m$, hence we have $n_i^{\dag}>n_i$. From the diagram ([\[com:koszul\]](#com:koszul){reference-type="ref" reference="com:koszul"}), we have $$\Theta_y^{\dag}((\eta^{\ast}A)|_{\widehat{\mathfrak{M}}_y^{\dag}}) \cong \eta'^{\ast}\Theta_y(A|_{\widehat{\mathfrak{M}}_y}),$$ hence its restriction to $\widehat{\mathscr{Z}}_i$ has $\lambda_i$-weights in $[-n_i^{\dag}/2, n_i^{\dag}/2)+\langle \lambda_i, \delta_y\rangle$. Therefore we have $\eta^{\ast}A \in \mathbb{W}(\mathfrak{M}^{\dag})_{m_{\bullet}}^\ell$. ◻
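Note that the last step of the above proof uses the elementary inclusion of weight intervals $$[-n_i/2,\ n_i/2]+\langle \lambda_i, \delta_y\rangle \subset [-n_i^{\dag}/2,\ n_i^{\dag}/2)+\langle \lambda_i, \delta_y\rangle,$$ which holds precisely because the inequality $n_i^{\dag}>n_i$ is strict: if $n_i^{\dag}$ were equal to $n_i$, the closed right endpoint $n_i/2$ would fail to lie in the half-open interval.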
We consider the following composition $$\begin{aligned} F \colon \mathbb{T}_S^H(v)_w \stackrel{i}{\hookrightarrow} D^b(\mathfrak{M})_w \stackrel{\eta^{\ast}}{\to} D^b(\mathfrak{M}^{\dag}) \stackrel{p}{\twoheadrightarrow} D^b(\mathfrak{M}^{\dag}) /\mathcal{C}_{\mathscr{Z}^{\rm{JS}}}.\end{aligned}$$ Let $\Phi$ be the window equivalence [\[equiv:WDT\]](#equiv:WDT){reference-type="eqref" reference="equiv:WDT"} as in Lemma [Lemma 53](#lem:eta){reference-type="ref" reference="lem:eta"}, and let $\Phi^{-1}$ be its inverse. Let $\Psi \colon D^b(\mathfrak{M})_w \twoheadrightarrow \mathbb{T}_S^H(v)_w$ be the projection with respect to the semiorthogonal decomposition in Theorem [Theorem 43](#thm:sodK3){reference-type="ref" reference="thm:sodK3"}. We also define the following functor $$\begin{aligned} G \colon D^b(\mathfrak{M}^{\dag})/\mathcal{C}_{\mathscr{Z}^{\rm{JS}}} \stackrel{\Phi^{-1}}{\to} \mathbb{W}(\mathfrak{M}^{\dag})_{m_{\bullet}}^\ell \stackrel{j}{\hookrightarrow} D^b(\mathfrak{M}^{\dag}) \stackrel{(\eta_{\ast})_w}{\twoheadrightarrow} D^b(\mathfrak{M})_w \stackrel{\Psi}{\twoheadrightarrow} \mathbb{T}_S^H(v)_{w}. \end{aligned}$$ Here, $(\eta_{\ast})_w(-)$ is the weight $w$-part of $\eta_{\ast}(-)$, which is the projection onto $D^b(\mathfrak{M})_w$ with respect to the semiorthogonal decomposition $$\begin{aligned} \label{sod:wt} D^b(\mathfrak{M}^{\dag}) =\langle \ldots, D^b(\mathfrak{M})_{-1}, D^b(\mathfrak{M})_0, D^b(\mathfrak{M})_1, \ldots \rangle. \end{aligned}$$ Every fully-faithful functor in ([\[sod:wt\]](#sod:wt){reference-type="ref" reference="sod:wt"}) is given by the restriction of $\eta^{\ast}$ to $D^b(\mathfrak{M})_w$. The above semiorthogonal decomposition exists since $\eta \colon \mathfrak{M}^{\dag} \to \mathfrak{M}$ is an affine space bundle such that the cone of $\mathcal{O}_{\mathfrak{M}} \to \eta_{\ast}\mathcal{O}_{\mathfrak{M}^{\dag}}$ has strictly negative $\mathbb{C}^{\ast}$-weights, see [@halp Amplification 3.18].
Then $G \circ F \cong \operatorname{id}$. Indeed, we have $$\begin{aligned} \Psi \circ (\eta_{\ast})_w \circ \Phi^{-1} \circ p \circ\eta^{\ast} \circ i \cong \Psi \circ (\eta_{\ast})_w \circ \eta^{\ast} \circ i \cong \Psi \circ i \cong \operatorname{id}. \end{aligned}$$ For the first isomorphism, the image of $\eta^{\ast} \circ i$ lies in $\mathbb{W}(\mathfrak{M}^{\dag})_{m_{\bullet}}^\ell$ by Lemma [Lemma 53](#lem:eta){reference-type="ref" reference="lem:eta"} and then $\Phi^{-1}\circ p$ is the identity on $\mathbb{W}(\mathfrak{M}^{\dag})_{m_{\bullet}}^\ell$ by the definition of $\Phi$. The second isomorphism follows since $(\eta_{\ast})_w \circ \eta^{\ast}\cong \operatorname{id}$. The last isomorphism also holds by the definition of $\Psi$. By Theorem [Theorem 61](#thm:regular2){reference-type="ref" reference="thm:regular2"} together with the fact that $\Omega^{\rm{JS}}_{\mathfrak{M}^{\dag}}[-1]$ is a quasi-projective scheme, the category $D^b(\mathfrak{M}^{\dag})/\mathcal{C}_{\mathscr{Z}^{\rm{JS}}}$ is regular, so it is $\langle \mathcal{E} \rangle^{\star n}$ for some $\mathcal{E} \in D^b(\mathfrak{M}^{\dag})/\mathcal{C}_{\mathscr{Z}^{\rm{JS}}}$ and $n\geqslant 1$. Then as $\mathrm{Im}(F) \subset \langle \mathcal{E} \rangle^{\star n}$ and $G \circ F \cong \operatorname{id}$, we conclude that $\mathbb{T}_S^H(v)_{w}=\langle G(\mathcal{E}) \rangle^{\star n}$, hence $\mathbb{T}_S^H(v)_{w}$ is regular. ◻ By an analogous argument using window categories of the reduced stack $\mathfrak{M}^{\dag, \mathrm{red}}$, we obtain: **Theorem 55**. *The reduced quasi-BPS category $\mathbb{T}_S(v)_w^{\mathrm{red}}$ is regular.* ## Properness of reduced quasi-BPS categories {#subsec62} Recall that we write $v=dv_0$ for $d\in\mathbb{Z}_{\geqslant 1}$ and for $v_0$ primitive with $\langle v_0, v_0 \rangle=2g-2$. Let $\mathfrak{M}^{\rm{red}}=\mathfrak{M}_S^{\sigma}(v)^{\rm{red}}$ for a generic $\sigma \in \mathrm{Stab}(S)$.
We consider its $(-1)$-shifted cotangent space: $$\begin{aligned} \Omega_{\mathfrak{M}^{\rm{red}}}[-1] \to \mathfrak{M}^{\rm{red}}.\end{aligned}$$ Its classical truncation is identified with the moduli stack of pairs $$\begin{aligned} \label{F:pairs} (F, \theta), \ F \in \mathcal{M}_S^{\sigma}(v), \ \theta \colon F \to F\end{aligned}$$ such that $\mathrm{tr}(\theta)=0$; see [@T Lemma 3.4.1] for the non-reduced case, and the proof in the reduced case is similar. Let $$\begin{aligned} \mathcal{N}_{\rm{nil}} \subset \Omega_{\mathfrak{M}^{\rm{red}}}[-1]\end{aligned}$$ be the closed substack consisting of pairs ([\[F:pairs\]](#F:pairs){reference-type="ref" reference="F:pairs"}) such that $\theta$ is nilpotent. The following is the global version of the categorical support lemma. **Theorem 56**. *Let $w\in \mathbb{Z}$ be coprime with $d$ and let $\mathcal{E} \in \mathbb{T}_S^{\sigma}(v)_w^{\rm{red}} \subset D^b(\mathfrak{M}_S^{\sigma}(v)^{\rm{red}})$. Then $\mathrm{Supp}^{\rm{sg}}(\mathcal{E}) \subset \mathcal{N}_{\rm{nil}}$.* *Proof.* It is enough to prove the inclusion $\mathrm{Supp}^{\rm{sg}}(\mathcal{E}) \subset \mathcal{N}_{\rm{nil}}$ over any point $y \in M_S^{\sigma}(v)$. For simplicity, we write $\widehat{\mathfrak{M}}^{\rm{red}}_y=\widehat{\mathfrak{M}}_S^{\sigma}(v)_y^{\rm{red}}$.
The equivalence in Lemma [Lemma 24](#lem:py){reference-type="ref" reference="lem:py"} induces the isomorphism of classical truncations of $(-1)$-shifted cotangents, $$\begin{aligned} \label{isom:hat} \Omega_{\widehat{\mathfrak{M}}^{\rm{red}}_y}[-1]^{\rm{cl}} \stackrel{\cong}{\to} \Omega_{\widehat{\mathscr{P}}(d)_p^{\rm{red}}}[-1]^{\rm{cl}}.\end{aligned}$$ The right hand side is the critical locus of the function $$\begin{aligned} \mathop{\rm Tr}W \colon \widehat{\mathfrak{gl}}(d)^{\oplus 2g}_p \times \mathfrak{gl}(d)_0 \to \mathbb{C},\end{aligned}$$ where $\mathop{\rm Tr}W$ is the function ([\[TrW:X\]](#TrW:X){reference-type="ref" reference="TrW:X"}) associated with the tripled quiver of the $g$-loop quiver, see Subsection [2.6.3](#subsec:shifted){reference-type="ref" reference="subsec:shifted"}. Then the isomorphism ([\[isom:hat\]](#isom:hat){reference-type="ref" reference="isom:hat"}) restricts to the isomorphism $$\begin{aligned} \mathcal{N}_{\rm{nil}} \times_{\mathfrak{M}^{\rm{red}}} \widehat{\mathfrak{M}}^{\rm{red}}_y \stackrel{\cong}{\to} \mathrm{Crit}(\mathop{\rm Tr}W) \cap (\widehat{\mathfrak{gl}}(d)^{\oplus 2g}_p \times \mathfrak{gl}(d)_{\rm{nil}}).\end{aligned}$$ Therefore the theorem follows from Lemma [\[cor:support\]](#cor:support){reference-type="ref" reference="cor:support"}. ◻ Recall that a pre-triangulated category $\mathcal{D}$ over $\mathbb{C}$ is called *proper* if for any $\mathcal{E}_1, \mathcal{E}_2 \in \mathcal{D}$, the vector space $\bigoplus_{i\in \mathbb{Z}} \operatorname{Hom}(\mathcal{E}_1, \mathcal{E}_2[i])$ is finite dimensional. We also have the following global analogue of Proposition [Proposition 20](#lem:bound){reference-type="ref" reference="lem:bound"}: **Theorem 57**.
*If $(d, w)\in\mathbb{N}\times\mathbb{Z}$ are coprime and $g\geqslant 2$, the category $\mathbb{T}_S(v)_w^{\rm{red}}$ is proper.* *Proof.* We regard $\mathbb{T}_S(v)_w^{\rm{red}}$ as a subcategory of $D^b(\mathfrak{M}_S^{\sigma}(v)^{\rm{red}})$ for a generic $\sigma\in\mathrm{Stab}(S)$ via $\mathbb{T}_S(v)_w^{\rm{red}}=\mathbb{T}_S^{\sigma}(v)_{w}^{\rm{red}}$. For $\mathcal{E}_1, \mathcal{E}_2 \in \mathbb{T}_S^{\sigma}(v)_w^{\rm{red}}$, let $$\begin{aligned} \mathcal{H} om(\mathcal{E}_1, \mathcal{E}_2) \in D_{\rm{qc}}(\mathfrak{M}_S^{\sigma}(v)^{\rm{red}}) \end{aligned}$$ be the internal homomorphism, see Subsection [2.6](#subsec:qsmooth){reference-type="ref" reference="subsec:qsmooth"}. Recall that $\mathfrak{M}_S^{\sigma}(v)^{\rm{red}}=\mathcal{M}_S^{\sigma}(v)$ by Lemma [Lemma 25](#lem:classical){reference-type="ref" reference="lem:classical"}. Let $\pi$ be the good moduli space morphism from ([\[gmoduli\]](#gmoduli){reference-type="ref" reference="gmoduli"}). Then we have $$\begin{aligned} \label{pi:coh} \pi_{\ast}\mathcal{H}om(\mathcal{E}_1, \mathcal{E}_2) \in D^b(M_S^{\sigma}(v)). \end{aligned}$$ Indeed, the statement [\[pi:coh\]](#pi:coh){reference-type="eqref" reference="pi:coh"} is local on $M_S^{\sigma}(v)$, hence it follows from Proposition [Proposition 20](#lem:bound){reference-type="ref" reference="lem:bound"} and Lemma [Lemma 24](#lem:py){reference-type="ref" reference="lem:py"}. Then the theorem holds as $$\begin{aligned} \mathrm{Hom}^{\ast}(\mathcal{E}_1, \mathcal{E}_2) =R^{\ast}\Gamma(\pi_{\ast}\mathcal{H}om(\mathcal{E}_1, \mathcal{E}_2)) \end{aligned}$$ and $M_S^{\sigma}(v)$ is a proper algebraic space. ◻ **Corollary 58**. 
*If $(d, w)\in\mathbb{N}\times\mathbb{Z}$ are coprime and $g\geqslant 2$, then $\mathbb{T}_S(v)_w^{\rm{red}}$ is proper and smooth.* *Proof.* By Theorem [Theorem 57](#thm:proper){reference-type="ref" reference="thm:proper"} and Theorem [Theorem 55](#thm:regularred){reference-type="ref" reference="thm:regularred"}, the category $\mathbb{T}_S(v)_w^{\rm{red}}$ is proper and regular if $\gcd(d, w)=1$. Then it is also proper and smooth by [@Orsmooth Theorem 3.18]. ◻ ## Strong generation of singular support quotients {#subsec:ssuport:gen} In this subsection, we prove Theorem [Theorem 61](#thm:regular2){reference-type="ref" reference="thm:regular2"} on strong generation of singular support quotients, which was used in the proof of Theorem [Theorem 54](#thm:regular){reference-type="ref" reference="thm:regular"}. Let $\mathfrak{M}$ be a quasi-smooth derived stack of finite type over $\mathbb{C}$ such that its classical truncation $\mathcal{M}=\mathfrak{M}^{\rm{cl}}$ admits a good moduli space $\mathcal{M} \to M$ which is quasi-separated. Note that $M$ is quasi-compact by the assumption on $\mathfrak{M}$. We denote by $\mathrm{Et}/M$ the category whose objects are pairs $(U, \rho)$, where $U$ is a $\mathbb{C}$-scheme and $\rho \colon U \to M$ is an étale morphism. The set of morphisms $(U', \rho') \to (U, \rho)$ consists of étale morphisms $U' \to U$ commuting with $\rho$ and $\rho'$. For a closed subscheme $Z \subset U$, an étale morphism $f \colon U' \to U$ is called an *étale neighborhood* of $Z$ if $f^{-1}(Z) \to Z$ is an isomorphism. We will use the following result in the proof of Theorem [Theorem 61](#thm:regular2){reference-type="ref" reference="thm:regular2"}: **Theorem 59**.
**([@Rydetale Theorem D])*[\[thm:ryd\]]{#thm:ryd label="thm:ryd"} Let $\mathbf{D} \subset \mathrm{Et}/M$ be a subcategory satisfying the following conditions:* (i) *If $(U\to M) \in \mathbf{D}$ and $(U' \to U)$ is a morphism in $\mathrm{Et}/M$, then $(U' \to M) \in \mathbf{D}$.* (ii) *If $(U' \to M) \in \mathbf{D}$ and $(U' \to U)$ is a morphism in $\mathrm{Et}/M$ which is finite and surjective, then $(U \to M) \in \mathbf{D}$.* (iii) *If $(j \colon U^{\circ} \to U)$ and $(f \colon W \to U)$ are morphisms in $\mathrm{Et}/M$ such that $j$ is an open immersion and $f$ is an étale neighborhood of $U\setminus U^{\circ}$, and $(U^{\circ} \to M) \in \mathbf{D}$ and $(W \to M) \in \mathbf{D}$, then $(U \to M) \in \mathbf{D}$.* *If there is $(g \colon M' \to M) \in \mathbf{D}$ such that $g$ is surjective, then $(\operatorname{id}\colon M \to M) \in \mathbf{D}$.* For each object $(U \to M) \in \mathrm{Et}/M$, let $\mathcal{M}_U \to U$ be the pull-back of $\mathcal{M} \to M$ by $U \to M$. There is a derived stack $\mathfrak{M}_U$, unique up to equivalence, such that for each morphism $\rho \colon U' \to U$ in $\mathrm{Et}/M$ there is an induced diagram, see Subsection [2.6](#subsec:qsmooth){reference-type="ref" reference="subsec:qsmooth"} $$\begin{aligned} \label{dia:induced} \xymatrix{ U' \ar[d]_-{\rho} & \ar[l] \mathcal{M}_{U'} \ar[r] \ar[d] & \mathfrak{M}_{U'} \ar[d]^-{\rho} \\ U & \ar[l] \mathcal{M}_U \ar[r] & \mathfrak{M}_U.
}\end{aligned}$$ For each $y\in M$, there is $\rho \colon U \to M$ in $\mathrm{Et}/M$ whose image contains $y$ such that $\mathfrak{M}_U$ is equivalent to a Koszul stack $$\begin{aligned} \label{eq:Kstack} \mathfrak{M}_U \simeq s^{-1}(0)/G\end{aligned}$$ for some $(Y, V, s, G)$, where $Y$ is a smooth scheme with an action of a reductive algebraic group $G$, $V\to Y$ is a $G$-equivariant vector bundle with a $G$-invariant section $s$ and $s^{-1}(0)$ is the derived zero locus of $s$, see Subsection [2.6](#subsec:qsmooth){reference-type="ref" reference="subsec:qsmooth"}. For $\ell \in \mathrm{Pic}(\mathfrak{M})_{\mathbb{R}}$ and $(U\to M)\in\mathrm{Et}/M$, consider the $\ell$-semistable locus $$\begin{aligned} \label{l-stable} \Omega_{\mathfrak{M}_U}^{\ell\text{-ss}}[-1]^{\rm{cl}} \subset \Omega_{\mathfrak{M}_U}[-1]^{\rm{cl}}.\end{aligned}$$ We denote by $\mathscr{Z}_U$ the complement of the open immersion ([\[l-stable\]](#l-stable){reference-type="ref" reference="l-stable"}), which is a conical closed substack. Let $\mathcal{C}_{\mathscr{Z}_U} \subset D^b(\mathfrak{M}_U)$ be the subcategory of objects with singular supports contained in $\mathscr{Z}_U$. **Lemma 60**. *Suppose that the open substack ([\[l-stable\]](#l-stable){reference-type="ref" reference="l-stable"}) is an algebraic space. Then for a Koszul stack as in ([\[eq:Kstack\]](#eq:Kstack){reference-type="ref" reference="eq:Kstack"}), the category $D^b(\mathfrak{M}_U)/\mathcal{C}_{\mathscr{Z}_{U}}$ is regular. In particular, there is a compact generator $\mathcal{E}_U \in D^b(\mathfrak{M}_U)/\mathcal{C}_{\mathscr{Z}_U}$.* *Proof.* By the Koszul duality equivalence in Theorem [\[thm:Kduality\]](#thm:Kduality){reference-type="ref" reference="thm:Kduality"}, we have the equivalence $$\begin{aligned} \label{equiv:KZ} D^b(\mathfrak{M}_U) \stackrel{\sim}{\to} \mathrm{MF}^{\rm{gr}}(V^{\vee}/G, f). 
\end{aligned}$$ The above equivalence descends to an equivalence, see [@T Proposition 2.3.9]: $$\begin{aligned} D^b(\mathfrak{M}_U)/\mathcal{C}_{\mathscr{Z}_U} \stackrel{\sim}{\to} \mathrm{MF}^{\rm{gr}}((V^{\vee}/G) \setminus \mathscr{Z}_U, f). \end{aligned}$$ Note that we have $$\begin{aligned} \label{crit:Uw} (\Omega_{\mathfrak{M}}[-1]^{\rm{cl}}\setminus \mathscr{Z})\times_{\mathcal{M}}U =(\mathrm{Crit}(f)/G) \setminus \mathscr{Z}_U, \end{aligned}$$ hence the right hand side is an algebraic space by the assumption. Let $(V^{\vee})^{\rm{free}} \subset (V^{\vee})^{\ell\text{-ss}}$ be the $(G\times \mathbb{C}^{\ast})$-invariant open subspace of $\ell$-semistable points with free closed $G$-orbits. Then ([\[crit:Uw\]](#crit:Uw){reference-type="ref" reference="crit:Uw"}) is a closed substack of $Y:=(V^{\vee})^{\rm{free}}/G$. Since the category of matrix factorizations depends only on an open neighborhood of the critical locus, there is an equivalence $$\begin{aligned} \mathrm{MF}^{\rm{gr}}((V^{\vee}/G) \setminus \mathscr{Z}_U, f) \stackrel{\sim}{\to} \mathrm{MF}^{\rm{gr}}(Y, f). \end{aligned}$$ Note that $Y$ is quasi-projective since it is an open subset of the quasi-projective good moduli space $(V^{\vee})^{\ell\text{-ss}}/\!\!/G$. The category $\mathrm{MF}^{\rm{gr}}(Y, f)$ is proven to be smooth in [@FavTy Lemma 2.11, Remark 2.12], hence it is regular. ◻ The main result of this subsection is the following strong generation result for singular support quotients: **Theorem 61**. *Let $\mathfrak{M}$ be a quasi-smooth derived stack of finite type over $\mathbb{C}$ with a good moduli space $$\mathfrak{M}^{\rm{cl}} \to M,$$ where $M$ is a quasi-separated algebraic space. For $\ell \in \mathrm{Pic}(\mathfrak{M})_{\mathbb{R}}$, suppose that $\Omega_{\mathfrak{M}}^{\ell\text{-ss}}[-1]^{\rm{cl}}$ is an algebraic space. Let $\mathscr{Z} = \Omega_{\mathfrak{M}}[-1]^{\rm{cl}} \setminus \Omega_{\mathfrak{M}}^{\ell\text{-ss}}[-1]^{\rm{cl}}$.
Then the quotient category $D^b(\mathfrak{M})/\mathcal{C}_{\mathscr{Z}}$ is regular.* *Proof.* For $(U \to M) \in \mathrm{Et}/M$, we define $$\begin{aligned} \label{def:mathcaltu} \mathcal{T}_U=D^b(\mathfrak{M}_U)/\mathcal{C}_{\mathscr{Z}_U}, \ \operatorname{Ind}\mathcal{T}_U=\operatorname{Ind}D^b(\mathfrak{M}_U)/ \operatorname{Ind}\mathcal{C}_{\mathscr{Z}_U}. \end{aligned}$$ By the diagram ([\[dia:induced\]](#dia:induced){reference-type="ref" reference="dia:induced"}), there is an adjoint pair: $$\begin{aligned} \xymatrix{ \operatorname{Ind}\mathcal{T}_{U} \ar@<0.5ex>[r]^{\rho^{\ast}} & \operatorname{Ind}\mathcal{T}_{U'} \ar@<0.5ex>[l]^{\rho_{\ast}} }, \rho^{\ast} \dashv \rho_{\ast}. \end{aligned}$$ Then $U \mapsto \operatorname{Ind}\mathcal{T}_{U}$ is an $\mathrm{Et}/M$-presheaf of pre-triangulated categories with adjoints, see [@Hperf Section 5]. Let $\mathbf{D}^{\rm{st}} \subset \mathrm{Et}/M$ be the full subcategory of $(U \to M)$ such that $\mathcal{T}_U$ is regular. The condition $(U \to M) \in \mathbf{D}^{\rm{st}}$ is equivalent to $\mathcal{T}_U=\langle \mathcal{E}_U \rangle^{\star n}$ for some $\mathcal{E}_U \in \mathcal{T}_U$ and $n\geqslant 1$. On the other hand, it is proved in [@T Proposition 3.2.7, Section 7.2] that $\operatorname{Ind}\mathcal{T}_U=\operatorname{Ind}(\mathcal{T}_U)$ with compact objects the idempotent completion of $\mathcal{T}_U$. Therefore by [@Neeman Proposition 1.9], the condition $\mathcal{T}_U=\langle \mathcal{E}_U \rangle^{\star n}$ is equivalent to $\operatorname{Ind}\mathcal{T}_U= \langle \! \langle\mathcal{E}_U \rangle \! \rangle^{\star n}$ for some $n\geqslant 1$. By Lemma [Lemma 60](#lem:compact){reference-type="ref" reference="lem:compact"}, there exists $(M' \to M) \in \mathbf{D}^{\rm{st}}$ which is surjective. By Theorem [\[thm:ryd\]](#thm:ryd){reference-type="ref" reference="thm:ryd"}, it is enough to check the conditions (i), (ii) and (iii) for the subcategory $\mathbf{D}^{\rm{st}} \subset \mathrm{Et}/M$.
To show condition (i), consider a morphism $(\rho \colon U' \to U)$ in $\mathrm{Et}/M$. Suppose that $\operatorname{Ind}\mathcal{T}_U=\langle \! \langle\mathcal{E}_U \rangle \! \rangle^{\star n}$. For each $A \in \operatorname{Ind}\mathcal{T}_{U'}$, there is a natural morphism $\rho^{\ast}\rho_{\ast}A \to A$ and $\rho_{\ast}A \in \langle \! \langle\mathcal{E}_U \rangle \! \rangle^{\star n}$ by the assumption. Since $U' \times_U U' \to U'$ admits a section given by the diagonal, we have a decomposition into open and closed subsets $$U' \times_U U'=U' \sqcup U''.$$ Then, by base change for $U' \to U \leftarrow U'$, the morphism $\rho^{\ast}\rho_{\ast}A \to A$ splits, hence $A \in \langle \! \langle\rho^{\ast}\mathcal{E}_U \rangle \! \rangle^{\star n}$. Therefore $\operatorname{Ind}\mathcal{T}_{U'}=\langle \! \langle\mathcal{E}_{U'}\rangle \! \rangle^{\star n}$ for $\mathcal{E}_{U'}=\rho^{\ast}\mathcal{E}_U$ and $(U' \to M) \in \mathbf{D}^{\rm{st}}$ holds. To show condition (ii), let $(\rho \colon U' \to U)$ be a morphism in $\mathrm{Et}/M$ such that $\rho$ is finite and surjective. Assume that $(U' \to M) \in \mathbf{D}^{\rm{st}}$, so $\operatorname{Ind}\mathcal{T}_{U'}=\langle \! \langle\mathcal{E}_{U'} \rangle \! \rangle^{\star n}$ for some $\mathcal{E}_{U'} \in \mathcal{T}_{U'}$ and $n\geqslant 1$. For $A \in \operatorname{Ind}\mathcal{T}_U$, let $A \to \rho_{\ast}\rho^{\ast}A=A \otimes \rho_{\ast}\mathcal{O}_{\mathfrak{M}_{U'}}$ be the natural morphism. The induced map $\rho \colon \mathfrak{M}_{U'} \to \mathfrak{M}_U$ is also finite and surjective, and $\mathcal{O}_{\mathfrak{M}_U} \to \rho_{\ast}\mathcal{O}_{\mathfrak{M}_{U'}}$ splits. In fact, we have $\rho_{\ast}=\rho_{!}$ as $\rho$ is finite étale, and the natural map $\rho_{!}\mathcal{O}_{\mathfrak{M}_{U'}} \to \mathcal{O}_{\mathfrak{M}_U}$ gives a splitting. Therefore $A$ is a direct summand of $\rho_{\ast}\rho^{\ast}A$. As $\rho^{\ast}A \in \langle \! \langle\mathcal{E}_{U'} \rangle \!
\rangle^{\star n}$, we have $A \in \langle \! \langle\rho_{\ast}\mathcal{E}_{U'} \rangle \! \rangle^{\star n}$. Since $\rho$ is finite, we have $\rho_{\ast}\mathcal{E}_{U'} \in \mathcal{T}_U$. Then by setting $\mathcal{E}_U=\rho_{\ast}\mathcal{E}_{U'}$, we have $A \in \langle \! \langle\mathcal{E}_{U} \rangle \! \rangle^{\star n}$, hence $\operatorname{Ind}\mathcal{T}_U= \langle \! \langle\mathcal{E}_{U} \rangle \! \rangle^{\star n}$ and $(U \to M) \in \mathbf{D}^{\rm{st}}$ holds. To show condition (iii), let $(j \colon U_{\circ} \to U)$ and $(f \colon W \to U)$ be morphisms in $\mathrm{Et}/M$ such that $j$ is an open immersion and $f$ is an étale neighborhood of $U \setminus U_{\circ}$. Suppose that $\operatorname{Ind}\mathcal{T}_{U_{\circ}}=\langle \! \langle\mathcal{E}_{U_{\circ}} \rangle \! \rangle^{\star n}$ and $\operatorname{Ind}\mathcal{T}_{W}=\langle \! \langle\mathcal{E}_W \rangle \! \rangle^{\star n}$ for some $n \geqslant 1$ and $\mathcal{E}_W \in \mathcal{T}_W$, $\mathcal{E}_{U_{\circ}} \in \mathcal{T}_{U_{\circ}}$. For an object $A \in \operatorname{Ind}\mathcal{T}_U$, there is a distinguished triangle in $\operatorname{Ind}\mathcal{T}_U$, see [@Hperf Lemma 5.9]: $$\begin{aligned} \label{tr:A} A \to j_{\ast}j^{\ast}A \oplus f_{\ast}f^{\ast}A \to f_{\ast}f^{\ast}j_{\ast}j^{\ast}A \to A[1].\end{aligned}$$ We have $j_{\ast}j^{\ast}A \in \langle \! \langle j_{\ast}\mathcal{E}_{U_{\circ}}\rangle \! \rangle^{\star n}$, $f_{\ast}f^{\ast}A \in \langle \! \langle f_{\ast}\mathcal{E}_W \rangle \! \rangle^{\star n}$ and $f_{\ast}f^{\ast}j_{\ast}j^{\ast}A \in \langle \! \langle f_{\ast}\mathcal{E}_W\rangle \! \rangle^{\star n}$. By Lemma [Lemma 62](#lem:cohbou){reference-type="ref" reference="lem:cohbou"}, there exists $\mathcal{E}_U \in \mathcal{T}_U$ such that $j_{\ast}\mathcal{E}_{U_{\circ}}$ and $f_{\ast}\mathcal{E}_W$ are objects in $\langle \! \langle\mathcal{E}_U \rangle \! \rangle^{\star m}$ for some $m\geqslant 1$.
Then we have $j_{\ast}j^{\ast}A \in \langle \! \langle\mathcal{E}_U \rangle \! \rangle^{\star nm}$, $f_{\ast}f^{\ast}A \in \langle \! \langle\mathcal{E}_U \rangle \! \rangle^{\star nm}$ and $f_{\ast}f^{\ast}j_{\ast}j^{\ast}A \in \langle \! \langle\mathcal{E}_U\rangle \! \rangle^{\star nm}$. From the triangle ([\[tr:A\]](#tr:A){reference-type="ref" reference="tr:A"}), we conclude that $\operatorname{Ind}\mathcal{T}_U=\langle \! \langle \mathcal{E}_U \rangle \! \rangle^{\star nm+1}$, therefore $(U \to M) \in \mathbf{D}^{\rm{st}}$. ◻ We have used the following lemma: **Lemma 62**. *Let $f \colon U' \to U$ be a morphism in $\mathrm{Et}/M$. Then for any object $P \in D^b(\mathfrak{M}_{U'})$, there is $Q \in D^b(\mathfrak{M}_{U})$ and $m \geqslant 1$ such that $f_{\ast} P \in \langle \! \langle Q \rangle \! \rangle^{\star m}$ in $\operatorname{Ind}D^b(\mathfrak{M}_U)$.* *Proof.* Since $P$ is a finite extension of objects from the image of the pushforward functor $D^b(\mathcal{M}_{U'}) \to D^b(\mathfrak{M}_{U'})$, we may assume that $P\in D^b(\mathcal{M}_{U'})$. It suffices to find $Q\in D^b(\mathcal{M}_U)$ and $m\geqslant 1$ such that $f_*P\in \langle \! \langle Q \rangle \! \rangle^{\star m}$ in $\operatorname{Ind}D^b(\mathcal{M}_U)$. By Nagata compactification, there is a factorization $$f \colon U' \stackrel{j}{\hookrightarrow} \overline{U} \stackrel{g}{\to} U,$$ where $j$ is an open immersion and $g$ is proper. There is an object $\overline{P} \in D^b(\mathcal{M}_{\overline{U}})$ such that $j^{\ast}\overline{P}\cong P$. Then $j_{\ast}P \cong \overline{P} \otimes_{\mathcal{O}_{\overline{U}}} j_{\ast}\mathcal{O}_{U'}$, where $j_{\ast}\mathcal{O}_{U'} \in D_{\rm{qc}}(\overline{U})$ and the tensor product is given by the action of $D_{\rm{qc}}(\overline{U})$ on $\operatorname{Ind}D^b(\mathcal{M}_{\overline{U}})$. By [@Neeman Theorem 6.2], there is $B \in \mathrm{Perf}(\overline{U})$ such that $j_{\ast}\mathcal{O}_{U'} \in \langle \! \langle B \rangle \! 
\rangle^{\star m}$ for some $m\geqslant 1$ in $D_{\rm{qc}}(\overline{U})$. Then $j_{\ast}P \in \langle \! \langle\overline{P} \otimes_{\mathcal{O}_{\overline{U}}}B\rangle \! \rangle^{\star m}$, hence $f_{\ast}P \in \langle \! \langle Q \rangle \! \rangle^{\star m}$ for $Q=g_{\ast}(\overline{P}\otimes_{\mathcal{O}_{\overline{U}}}B) \in D^b(\mathcal{M}_U)$. ◻ # Serre functor for reduced quasi-BPS categories In this section, we show that the reduced quasi-BPS categories admit an étale locally trivial Serre functor, which gives further evidence towards Conjecture [Conjecture 34](#conj:HK){reference-type="ref" reference="conj:HK"}. ## Serre functor {#subsec71} Recall that we write $v=dv_0$ for $d\in\mathbb{Z}_{\geqslant 1}$ and a primitive Mukai vector $v_0$ with $\langle v_0, v_0 \rangle=2g-2$. We assume $g\geqslant 2$. Consider a generic stability $\sigma \in \mathrm{Stab}(S)$. Recall that the derived stack $\mathfrak{M}_S^{\sigma}(v)^{\rm{red}}$ is equivalent to its classical truncation $\mathcal{M}=\mathcal{M}^\sigma_S(v)$ by Lemma [Lemma 25](#lem:classical){reference-type="ref" reference="lem:classical"}. Let $w \in \mathbb{Z}$ be such that $\gcd(d, w)=1$, and consider the quasi-BPS category $$\mathbb{T}=\mathbb{T}_S^{\sigma}(v)_w^{\rm{red}}\subset D^b(\mathcal{M}).$$ We recall some terminology from [@MR1996800]. Let $\mathcal{T}$ be a $\mathbb{C}$-linear pre-triangulated category. A contravariant functor $F\colon \mathcal{T}\to \mathrm{Vect}(\mathbb{C})$ is called *of finite type* if $\bigoplus_{i\in\mathbb{Z}}F(A[i])$ is finite dimensional for all objects $A$ of $\mathcal{T}$. The category $\mathcal{T}$ is called *saturated* if every contravariant functor $H\colon\mathcal{T}\to \mathrm{Vect}(\mathbb{C})$ of finite type is representable.
By Corollary [Corollary 58](#cor:smooth){reference-type="ref" reference="cor:smooth"} and [@MR1996800 Theorem 1.3], the category $\mathbb{T}$ is saturated, and thus it admits a Serre functor $$\begin{aligned} S_{\mathbb{T}} \colon \mathbb{T} \to \mathbb{T},\end{aligned}$$ i.e. a functor such that there are functorial isomorphisms for $\mathcal{E}_1, \mathcal{E}_2 \in \mathbb{T}$: $$\begin{aligned} \operatorname{Hom}(\mathcal{E}_1, \mathcal{E}_2) \cong \operatorname{Hom}(\mathcal{E}_2, S_{\mathbb{T}}(\mathcal{E}_1))^{\vee}. \end{aligned}$$ There is also a version of the Serre functor relative to the good moduli space $\pi \colon \mathcal{M} \to M$. For $\mathcal{E}_1, \mathcal{E}_2 \in \mathbb{T}$, let $\mathcal{H}om_{\mathbb{T}}(\mathcal{E}_1, \mathcal{E}_2) \in D_{\rm{qc}}(\mathcal{M})$ be the internal homomorphism. Then a functor $S_{\mathbb{T}/M}\colon \mathbb{T} \to \mathbb{T}$ is called a *relative Serre functor* if there are functorial isomorphisms in $D^b(M)$: $$\begin{aligned} \label{rel:S} \mathcal{H}om_M(\pi_{\ast}\mathcal{H}om_{\mathbb{T}}(\mathcal{E}_1, \mathcal{E}_2), \mathcal{O}_M) \cong \pi_{\ast}\mathcal{H}om_{\mathbb{T}}(\mathcal{E}_2, S_{\mathbb{T}/M}(\mathcal{E}_1)).\end{aligned}$$ **Remark 63**. We note that $M$ has at worst Gorenstein singularities. The result is most probably well-known, but we did not find a reference. The statement follows from Lemma [Lemma 24](#lem:py){reference-type="ref" reference="lem:py"} and [@PTquiver Lemma 5.7]. Thus $\mathcal{H}om(-, \mathcal{O}_M)$ is an equivalence $$\begin{aligned} \mathcal{H}om(-, \mathcal{O}_M) \colon D^b(M) \stackrel{\sim}{\to} D^b(M)^{\rm{op}}. \end{aligned}$$ Moreover, the dualizing complex is $\omega_M=\mathcal{O}_M[\dim M]$, since the singular locus of $M$ has codimension at least two and there is a holomorphic symplectic form on the smooth part. **Remark 64**. The category $\mathbb{T}$ is proper over $M$, i.e.
$\pi_{\ast}\mathcal{H}om_{\mathbb{T}}(\mathcal{E}_1, \mathcal{E}_2) \in D^b(M)$, and it is strongly generated. Thus the relative Serre functor also exists, and is constructed as follows. Let $\mathcal{E} \in \mathbb{T}$ be a strong generator and consider the sheaf of dg-algebras on $M$: $$\mathcal{A}=\pi_{\ast}\mathcal{H}om_{\mathbb{T}}(\mathcal{E}, \mathcal{E}).$$ Then $\mathbb{T}$ is equivalent to the derived category of coherent right dg-$\mathcal{A}$-modules. Under the above equivalence, the relative Serre functor is given by the $\mathcal{A}^{\rm{op}}\otimes_{\mathcal{O}_M} \mathcal{A}$-module $\mathcal{H}om_{M}(\mathcal{A}, \mathcal{O}_M)$. The absolute and the relative Serre functors are related as follows: **Lemma 65**. *We have $S_{\mathbb{T}}=S_{\mathbb{T}/M}[\dim M]$.* *Proof.* By taking the global sections of ([\[rel:S\]](#rel:S){reference-type="ref" reference="rel:S"}), we obtain $$\begin{aligned} \operatorname{Hom}_M(\pi_{\ast}\mathcal{H}om_{\mathbb{T}}(\mathcal{E}_1, \mathcal{E}_2), \mathcal{O}_M) \cong \operatorname{Hom}(\mathcal{E}_2, S_{\mathbb{T}/M}(\mathcal{E}_1)). \end{aligned}$$ By the Serre duality for $M$ and using that $\omega_M=\mathcal{O}_M[\dim M]$ from Remark [Remark 63](#rmk:Gorenstein){reference-type="ref" reference="rmk:Gorenstein"}, the left hand side is isomorphic to $$\begin{aligned} \operatorname{Hom}_M(\mathcal{O}_M, \pi_{\ast}\mathcal{H}om_{\mathbb{T}}(\mathcal{E}_1, \mathcal{E}_2)[\dim M])^{\vee} =\operatorname{Hom}(\mathcal{E}_1, \mathcal{E}_2[\dim M])^{\vee}. \end{aligned}$$ Then the lemma holds by the uniqueness of $S_{\mathbb{T}}$. ◻ We believe that $S_\mathbb{T}$ is isomorphic to the shift functor $[\dim M]$, see the discussion in Subsection [1.2](#subsec12){reference-type="ref" reference="subsec12"}, which reinforces the analogy between reduced quasi-BPS categories and hyperkähler varieties, see Conjecture [Conjecture 34](#conj:HK){reference-type="ref" reference="conj:HK"}. 
The main result in this section is the following weaker form of this expectation, which we prove in Subsection [7.4](#subsec:proof2){reference-type="ref" reference="subsec:proof2"}: **Theorem 66**. *The Serre functor $S_{\mathbb{T}}$ is isomorphic to the shift functor $[\dim M]$ étale locally on $M$, i.e. there is an étale cover $U \to M$ such that for each $\mathcal{E} \in \mathbb{T}$ we have $S_{\mathbb{T}}(\mathcal{E})|_U \cong \mathcal{E}|_U[\dim M]$.* ## Construction of the trace map {#subsec:trace} In this subsection, we construct a trace map for objects with nilpotent singular supports in a general setting. The construction here is used in the proof of Theorem [Theorem 66](#thm:Serre:etale){reference-type="ref" reference="thm:Serre:etale"}. Let $G$ be a reductive algebraic group which acts on a smooth affine variety $Y$. We assume that there is a one-dimensional subtorus $\mathbb{C}^{\ast} \subset G$ which acts on $Y$ trivially, so the $G$-action on $Y$ factors through the action of $\mathbb{P}(G):=G/\mathbb{C}^{\ast}$. We say that $Y$ is *unimodular* if $\det \Omega_Y$ is trivial as a $G$-equivariant line bundle. We also say that the action of $\mathbb{P}(G)$ on $Y$ is *generic* if the subset $Y^s \subset Y$ of points with closed $\mathbb{P}(G)$-orbit and trivial stabilizer is non-empty and $\mathrm{codim}(Y\setminus Y^s) \geqslant 2$. **Lemma 67**. **([@Knop2 Korollary 2])*[\[lem:Gorenstein\]]{#lem:Gorenstein label="lem:Gorenstein"} If $Y$ is unimodular and generic, then $Y/\!\!/G$ has only Gorenstein singularities and its canonical module is trivial.* Let $Y$ be unimodular and generic. 
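For orientation, we record a basic example of these conditions (our addition, modeled on the local description appearing in the proof of Theorem [Theorem 56](#thm:nilpsupp){reference-type="ref" reference="thm:nilpsupp"}): let $G=\mathrm{GL}(d)$ act on $Y=\mathfrak{gl}(d)^{\oplus 2g}$ by simultaneous conjugation. The central torus $\mathbb{C}^{\ast}\subset \mathrm{GL}(d)$ of scalar matrices acts trivially, and since the adjoint representation is self-dual we have $$\det \Omega_Y \cong \det\big(\mathfrak{gl}(d)^{\vee}\big)^{\otimes 2g}\cong \mathcal{O}_Y$$ as a $G$-equivariant line bundle, so $Y$ is unimodular. Moreover $Y^s$ is the locus of tuples $(x_1, \ldots, x_{2g})$ generating the matrix algebra $M_d(\mathbb{C})$, and for $g\geqslant 2$ one can check that this locus is non-empty with complement of codimension at least two, so the $\mathbb{P}(G)=\mathrm{PGL}(d)$-action is generic.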
By Lemma [\[lem:Gorenstein\]](#lem:Gorenstein){reference-type="ref" reference="lem:Gorenstein"}, the quotient $Y/\!\!/G$ is Gorenstein and its dualizing complex is $$\begin{aligned} \label{dualcomp}\omega_{Y/\!\!/G}=\mathcal{O}_{Y/\!\!/G}[\dim Y/\!\!/G].\end{aligned}$$ Let $V \to Y$ be a $G$-equivariant vector bundle with a $G$-invariant regular section $s$ such that $V$ is also unimodular and generic. We refer to such choices of $G$, $Y$, $V$, and $s$ as *a good data* $(G, Y, V, s)$. Let $\mathcal{U}:=s^{-1}(0)$ be the zero locus of $s$, which is equivalent to the derived zero locus as we assumed that $s$ is regular. We have the following diagram $$\begin{aligned} \label{dia:cartesian} \xymatrix{ \mathcal{U}/G \ar@<-0.3ex>@{^{(}->}[r]^-{j} \ar[d]_-{\pi_{\mathcal{U}}} & Y/G \ar[d]_-{\pi_Y} \ar@<-0.5ex>[r]_-{0} & V^{\vee}/G \ar@<-0.5ex>[l]_-{\eta} \ar[d]_-{\pi_{V^{\vee}}} \\ \mathcal{U}/\!\!/G \ar@<-0.3ex>@{^{(}->}[r]^-{\overline{j}} & Y/\!\!/G \ar@<-0.5ex>[r]_-{\overline{0}} &\ar@<-0.5ex>[l]_-{\overline{\eta}} V^{\vee}/\!\!/G. }\end{aligned}$$ Here $0 \colon Y/G \to V^{\vee}/G$ is the zero section, $\eta$ is the projection, and the bottom horizontal arrows are induced maps on good moduli spaces. Recall the Koszul duality equivalence in Theorem [\[thm:Kduality\]](#thm:Kduality){reference-type="ref" reference="thm:Kduality"} $$\begin{aligned} \Theta \colon D^b(\mathcal{U}/G) \stackrel{\sim}{\to} \mathrm{MF}^{\rm{gr}}(V^{\vee}/G, f). \end{aligned}$$ For $\mathcal{E} \in D^b(\mathcal{U}/G)$, let $\mathcal{P}=\Theta(\mathcal{E})$. Then we have the following isomorphism in $D_{\rm{qc}}(Y/G)$, see [@PTquiver Lemma 2.7]: $$\begin{aligned} j_{\ast}\mathcal{H}om_{\mathcal{U}/G}(\mathcal{E}, \mathcal{E}) \stackrel{\cong}{\to} \eta_{\ast} \mathcal{H} om_{V^{\vee}/G}(\mathcal{P}, \mathcal{P}). 
\end{aligned}$$ Here $\mathcal{H}om_{V^{\vee}/G}(\mathcal{P}, \mathcal{P})$ is the internal Hom of matrix factorizations, which is an object in $\mathrm{MF}^{\rm{gr}}(V^{\vee}/G, 0)$. As $V^{\vee}/G$ is smooth, by taking a resolution of $\mathcal{P}$ by vector bundles, we obtain the natural trace map in $\mathrm{MF}^{\rm{gr}}(V^{\vee}/G, 0)$: $$\begin{aligned} \mathrm{tr} \colon \mathcal{H}om_{V^{\vee}/G}(\mathcal{P}, \mathcal{P}) \to \mathcal{O}_{V^{\vee}/G}. \end{aligned}$$ By taking $\pi_{V^{\vee}\ast}$, we obtain the morphism in $D^{\rm{gr}}(V^{\vee}/\!\!/G)$: $$\begin{aligned} \label{tr:push} \pi_{V^{\vee}\ast}\mathrm{tr} \colon \pi_{V^{\vee}\ast}\mathcal{H}om_{V^{\vee}/G}(\mathcal{P}, \mathcal{P}) \to \mathcal{O}_{V^{\vee}/\!\!/G}.\end{aligned}$$ Here the grading on $V^{\vee}/\!\!/G$ is induced by the fiberwise weight two $\mathbb{C}^{\ast}$-action on $V^{\vee}/G \to Y/G$, see Subsection [2.3](#subsec:graded){reference-type="ref" reference="subsec:graded"} for the graded category $D^{\rm{gr}}(V^{\vee}/\!\!/G)$. We say that $\mathcal{P}$ has *nilpotent support* if: $$\begin{aligned} \mathrm{Supp}(\mathcal{P}) \subset \pi_{V^{\vee}}^{-1}(\mathrm{Im}(\overline{0})).\end{aligned}$$ We say $\mathcal{E}$ has *nilpotent singular support* with respect to $(G, Y, V, s)$ if $\mathcal{P}$ has nilpotent support. Assume that $\mathcal{P}$ has nilpotent support. Then the object $\pi_{V^{\vee}\ast}\mathcal{H}om_{V^{\vee}/G}(\mathcal{P}, \mathcal{P})$ in $D^{\rm{gr}}(V^{\vee}/\!\!/G)$ has proper support over $Y/\!\!/G$. Moreover, we have $$\begin{aligned} \label{equalityomegao} \omega_{V^{\vee}/\!\!/G}= \mathcal{O}_{V^{\vee}/\!\!/G}[\dim V^{\vee}/\!\!/G](-2 \operatorname{rank}V)= \mathcal{O}_{V^{\vee}/\!\!/G}[\dim Y/\!\!/G-\operatorname{rank}V]\end{aligned}$$ in $D^{\rm{gr}}(V^{\vee}/\!\!/G)$, where $(1)$ is the grade shift functor of $D^{\rm{gr}}(V^{\vee}/\!\!/G)$, which is isomorphic to the cohomological shift functor $[1]$. 
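The shift count in ([\[equalityomegao\]](#equalityomegao){reference-type="ref" reference="equalityomegao"}) can be spelled out as follows (a sketch, using the identification $(1)=[1]$ and the dimension count $\dim V^{\vee}/\!\!/G=\dim Y/\!\!/G+\operatorname{rank}V$, which we assume holds by genericity): $$\begin{aligned} \omega_{V^{\vee}/\!\!/G}&= \mathcal{O}_{V^{\vee}/\!\!/G}[\dim V^{\vee}/\!\!/G](-2 \operatorname{rank}V)\\ &=\mathcal{O}_{V^{\vee}/\!\!/G}[\dim Y/\!\!/G+\operatorname{rank}V][-2 \operatorname{rank}V]= \mathcal{O}_{V^{\vee}/\!\!/G}[\dim Y/\!\!/G-\operatorname{rank}V],\end{aligned}$$ where the graded twist $(-2\operatorname{rank}V)$ comes from the fiberwise weight two $\mathbb{C}^{\ast}$-action. 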
Then by Lemma [Lemma 68](#lem:psupport){reference-type="ref" reference="lem:psupport"} below, the morphism ([\[tr:push\]](#tr:push){reference-type="ref" reference="tr:push"}) induces the morphism in $D^b(Y/\!\!/G)$: $$\begin{aligned} \label{induce:ap} a_{\mathcal{P}} \colon \overline{\eta}_{\ast}\pi_{V^{\vee}\ast}\mathcal{H}om_{V^{\vee}/G}(\mathcal{P}, \mathcal{P}) \to \mathcal{O}_{Y/\!\!/G} [\operatorname{rank}V]. \end{aligned}$$ Suppose that $\mathcal{U}/\!\!/G$ is Gorenstein with trivial canonical module and has dimension $\dim Y/\!\!/G -\operatorname{rank}V$. Then $\overline{j}^! \mathcal{O}_{Y/\!\!/G}= \mathcal{O}_{\mathcal{U}/\!\!/G}[-\operatorname{rank}V]$. Since there are isomorphisms: $$\begin{aligned} \overline{\eta}_{\ast}\pi_{V^{\vee}\ast}\mathcal{H}om_{V^{\vee}/G}(\mathcal{P}, \mathcal{P}) &=\pi_{Y\ast}\eta_{\ast}\mathcal{H}om_{V^{\vee}/G}(\mathcal{P}, \mathcal{P}) \\ &\stackrel{\cong}{\to} \pi_{Y\ast}j_{\ast}\mathcal{H}om_{\mathcal{U}/G}(\mathcal{E}, \mathcal{E}) \\ &=\overline{j}_{\ast}\pi_{\mathcal{U}\ast}\mathcal{H}om_{\mathcal{U}/G}(\mathcal{E}, \mathcal{E}),\end{aligned}$$ the morphism ([\[induce:ap\]](#induce:ap){reference-type="ref" reference="induce:ap"}) induces the trace morphism in $D^b(\mathcal{U}/\!\!/G)$: $$\begin{aligned} \label{const:tre} \mathrm{tr}_{\mathcal{E}} \colon \pi_{\mathcal{U}\ast}\mathcal{H}om_{\mathcal{U}/G}(\mathcal{E}, \mathcal{E}) \to \overline{j}^{!}\mathcal{O}_{Y/\!\!/G}[\operatorname{rank}V] =\mathcal{O}_{\mathcal{U}/\!\!/G}. \end{aligned}$$ We have used the following lemma in the above construction: **Lemma 68**. *Let $X, Y$ be Noetherian $\mathbb{C}$-schemes with $\mathbb{C}^{\ast}$-actions, and let $f \colon X \to Y$ be a $\mathbb{C}^{\ast}$-equivariant morphism. Let $\omega_X$ be a dualizing complex for $X$. 
If $\mathcal{E} \in D^{\rm{gr}}(X)$ has proper support over $Y$, then there is a natural isomorphism $$\phi_f \colon \operatorname{Hom}_X(\mathcal{E}, \omega_X) \stackrel{\cong}{\to} \operatorname{Hom}_Y(f_{\ast}\mathcal{E}, \omega_Y).$$ Moreover, let $g \colon Y \to Z$ be another $\mathbb{C}^{\ast}$-equivariant morphism and assume the support of $\mathcal{E}$ is proper over $Z$. Let $h=g \circ f \colon X \to Z$. Then we have $$\begin{aligned} \phi_h=\phi_g \circ \phi_f \colon \operatorname{Hom}_X(\mathcal{E}, \omega_X) \stackrel{\phi_f}{\to} \operatorname{Hom}_Y(f_{\ast}\mathcal{E}, \omega_Y) \stackrel{\phi_g}{\to} \operatorname{Hom}_Z(h_{\ast}\mathcal{E}, \omega_Z). \end{aligned}$$* *Proof.* The lemma is obvious if $f$ and $g$ are proper since $\omega_X=f^{!}\omega_Y$, $\omega_Y=g^{!}\omega_Z$, and $f^!$ and $g^!$ are right adjoints to $f_{\ast}$, $g_{\ast}$. In general, let $i \colon T \hookrightarrow X$ be a closed subscheme such that $f|_{T}$, $g|_{f(T)}$ are proper. By a standard dévissage argument, it suffices to check the statement for $\mathcal{E}=i_{\ast}\mathcal{F}$ for some $\mathcal{F} \in D^b(T)$. Then $\operatorname{Hom}_X(\mathcal{E}, \omega_X)=\operatorname{Hom}_T(\mathcal{F}, \omega_T)$ as $\omega_T=i^{!}\omega_X$. Then the lemma follows from the case of proper $f$, $g$. ◻ **Definition 69**. Let $(G, Y, V, s)$ be a good data. Suppose that $\mathcal{U}/\!\!/G$ is Gorenstein with trivial canonical module and of dimension $\dim Y/\!\!/G-\mathrm{rank}\,V$. For $\mathcal{E} \in D^b(\mathcal{U}/G)$ with nilpotent singular support with respect to this data, the morphism $$\begin{aligned} \mathrm{tr}_{\mathcal{E}} \colon \pi_{\mathcal{U}\ast}\mathcal{H}om_{\mathcal{U}/G}(\mathcal{E}, \mathcal{E}) \to \mathcal{O}_{\mathcal{U}/\!\!/G} \end{aligned}$$ constructed in ([\[const:tre\]](#const:tre){reference-type="ref" reference="const:tre"}) is called the *trace map determined by* $(G, Y, V, s)$. 
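For the reader's convenience, the identification $\overline{j}^{!}\mathcal{O}_{Y/\!\!/G}[\operatorname{rank}V]=\mathcal{O}_{\mathcal{U}/\!\!/G}$ used in ([\[const:tre\]](#const:tre){reference-type="ref" reference="const:tre"}) can be sketched as follows, under the assumptions of Definition [Definition 69](#defn:trace){reference-type="ref" reference="defn:trace"}: $$\begin{aligned} \overline{j}^{!}\mathcal{O}_{Y/\!\!/G}=\overline{j}^{!}\omega_{Y/\!\!/G}[-\dim Y/\!\!/G]=\omega_{\mathcal{U}/\!\!/G}[-\dim Y/\!\!/G]=\mathcal{O}_{\mathcal{U}/\!\!/G}[\dim \mathcal{U}/\!\!/G-\dim Y/\!\!/G]=\mathcal{O}_{\mathcal{U}/\!\!/G}[-\operatorname{rank}V], \end{aligned}$$ using ([\[dualcomp\]](#dualcomp){reference-type="ref" reference="dualcomp"}), the compatibility $\overline{j}^{!}\omega_{Y/\!\!/G}=\omega_{\mathcal{U}/\!\!/G}$ of upper shriek with dualizing complexes, and the triviality of the canonical module of $\mathcal{U}/\!\!/G$. 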
The following lemma is immediate from the construction of the trace map: **Lemma 70**. *For another good data $(G', Y', V', s')$, suppose that there is a commutative diagram of stacks $$\begin{aligned} \xymatrix{ V/G \ar[r]^{\cong} \ar[d] & V'/G' \ar[d] \\ Y/G \ar[r]^{\cong} \ar@/^18pt/[u]_-{s} & Y'/G', \ar@/_18pt/[u]^-{s'} } \end{aligned}$$ where the horizontal arrows are isomorphisms. Let $\mathcal{U}'=(s')^{-1}(0)$ and consider the induced equivalence $\phi \colon \mathcal{U}/G \stackrel{\cong}{\to} \mathcal{U}'/G'$. For $\mathcal{E} \in D^b(\mathcal{U}/G)$ with nilpotent singular support with respect to $(G, Y, V, s)$, the object $\phi_{\ast}\mathcal{E}$ has nilpotent singular support with respect to $(G', Y', V', s')$. Further, the trace map $\mathrm{tr}_{\mathcal{E}}$ determined by $(G, Y, V, s)$ is identified with the trace map $\mathrm{tr}_{\phi_{\ast}\mathcal{E}}$ determined by $(G', Y', V', s')$, i.e. the following diagram commutes $$\begin{aligned} \xymatrix{ \pi_{\mathcal{U}'\ast}\mathcal{H}om_{\mathcal{U}'/G'}(\phi_{\ast}\mathcal{E}, \phi_{\ast}\mathcal{E}) \ar[r]^-{\mathrm{tr}_{\phi_{\ast}\mathcal{E}}}\ar[d]_-{\cong} & \mathcal{O}_{\mathcal{U}'/\!\!/G'} \ar[d]_-{\cong} \\ \phi_{\ast}\pi_{\mathcal{U}\ast}\mathcal{H}om_{\mathcal{U}/G}(\mathcal{E}, \mathcal{E}) \ar[r]^-{\phi_{\ast}\mathrm{tr}_{\mathcal{E}}} & \phi_{\ast}\mathcal{O}_{\mathcal{U}/\!\!/G}, } \end{aligned}$$ where the vertical arrows are natural isomorphisms induced by $\phi$.* Suppose that $\mathcal{E} \in D^b(\mathcal{U}/G)$ is a perfect complex. In this case, there is a canonical trace map $\mathcal{H}om_{\mathcal{U}/G}(\mathcal{E}, \mathcal{E}) \to \mathcal{O}_{\mathcal{U}/G}$. By taking the push-forward to $\mathcal{U}/\!\!/G$, we obtain the map $$\begin{aligned} \label{trperf} \pi_{\mathcal{U}\ast}\mathcal{H}om_{\mathcal{U}/G}(\mathcal{E}, \mathcal{E}) \to \mathcal{O}_{\mathcal{U}/\!\!/G}. \end{aligned}$$ Note that the above construction is independent of the choice of $(G, Y, V, s)$. 
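Concretely, for a perfect complex $\mathcal{E}$ the canonical trace map above is the composition (a standard sketch) $$\begin{aligned} \mathcal{H}om_{\mathcal{U}/G}(\mathcal{E}, \mathcal{E}) \cong \mathcal{E}^{\vee} \otimes_{\mathcal{O}_{\mathcal{U}/G}} \mathcal{E} \stackrel{\mathrm{ev}}{\to} \mathcal{O}_{\mathcal{U}/G}, \end{aligned}$$ where the first identification uses perfectness of $\mathcal{E}$ and $\mathrm{ev}$ is the evaluation map; in particular, no choice of presentation enters. 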
The following lemma is straightforward to check, and we omit the details. **Lemma 71**. *If $\mathcal{E}$ is a perfect complex, then $\mathrm{tr}_{\mathcal{E}}$ is the same as the map ([\[trperf\]](#trperf){reference-type="ref" reference="trperf"}).* ## Comparison of the trace maps In this subsection, we compare the trace maps constructed in the previous subsection under a change of presentation of the quasi-smooth affine derived scheme. Suppose that $(G, Y, V, s)$ is a good data and let $W$ be another $G$-representation such that $\det W$ is a trivial $G$-character. Let $i \colon Y/G \hookrightarrow (Y\oplus W)/G$ be the embedding given by $y\mapsto (y, 0)$. We have the section $s'$ of the vector bundle $V \oplus W \oplus W \to Y \oplus W$ given by $(y, w) \mapsto (s(y), w, w)$, whose zero locus is $\mathcal{U} \subset Y$. Then $(G, Y\oplus W, V\oplus W\oplus W, s')$ is a good data. Let $\mathcal{E}\in D^b(\mathcal{U}/G)$ be a complex with nilpotent singular support with respect to $(G, Y, V, s)$. By the lemma below, $\mathcal{E}$ also has nilpotent singular support with respect to $(G, Y\oplus W, V\oplus W\oplus W, s')$, so we can consider the trace map determined by this good data: $$\begin{aligned} \mathrm{tr}_{\mathcal{E}}' \colon \pi_{\mathcal{U}\ast}\mathcal{H}om_{\mathcal{U}/G}(\mathcal{E}, \mathcal{E}) \to \mathcal{O}_{\mathcal{U}/\!\!/G}.\end{aligned}$$ **Lemma 72**. *Let $\mathcal{E}\in D^b(\mathcal{U}/G)$ have nilpotent singular support with respect to the good data $(G, Y, V, s)$. Then $\mathcal{E}$ also has nilpotent singular support with respect to the good data $(G, Y\oplus W, V\oplus W\oplus W, s')$. 
Further, we have that $\mathrm{tr}_{\mathcal{E}}=\mathrm{tr}_{\mathcal{E}}'$.* *Proof.* We have the following diagram $$\begin{aligned} \label{diagXc2} \xymatrix{ & & & (Y\oplus W)/G \ar[d]^(.3){\pi_{Y\oplus W}}|\hole & (V^{\vee} \oplus W \oplus W^{\vee})/G\ar[d]^-{\pi_{V^{\vee} \oplus W \oplus W^{\vee}}} \ar[l]_-{p}\\ \mathcal{U}/G \ar@<-0.3ex>@{^{(}->}[r]^-{j} \ar[d]_-{\pi_{\mathcal{U}}} & Y/G \ar[rru]^-{i_Y} \ar[d]_-{\pi_Y} & V^{\vee}/G \ar[l]^-{\eta} \ar[d]_(.3){\pi_{V^{\vee}}} \ar[rru]^(.3){i_{V^{\vee}}} & (Y\oplus W)/\!\!/G & (V^{\vee}\oplus W \oplus W^{\vee})/\!\!/G \ar[l]_-{\overline{p}}\\ \mathcal{U}/\!\!/G \ar@<-0.3ex>@{^{(}->}[r]^-{\overline{j}} & Y/\!\!/G \ar[rru]^(.3){\overline{i}_{Y}}|\hole & V^{\vee}/\!\!/G \ar[rru]_-{\overline{i}_{V^{\vee}}} \ar[l]^-{\overline{\eta}} & & & }\end{aligned}$$ Let $q \colon W \oplus W^{\vee} \to \mathbb{C}$ be the natural non-degenerate pairing. From the construction of the Koszul equivalences, there is a commutative diagram: $$\begin{aligned} \xymatrix{ D^b(\mathcal{U}/G) \ar[r]_-{\sim}^-{\Theta} \ar@{=}[d] & \mathrm{MF}^{\rm{gr}}(V^{\vee}/G, f) \ar[d]^-{\Phi}_-{\sim} \\ D^b(\mathcal{U}/G) \ar[r]_-{\sim}^-{\Theta'} & \mathrm{MF}^{\rm{gr}}((V^{\vee}\oplus W \oplus W^{\vee})/G, f+q). }\end{aligned}$$ Here, the horizontal arrows are the Koszul equivalences from Theorem [\[thm:Kduality\]](#thm:Kduality){reference-type="ref" reference="thm:Kduality"}, and $\Phi$ is the Knörrer periodicity equivalence, given by $\Phi(-)=(-)\otimes_{\mathbb{C}} \mathcal{K}$. The Koszul factorization $\mathcal{K}$ of $q$ has the form $$\begin{aligned} \mathcal{K}=\left(\bigwedge^{\rm{even}} W \otimes {\mathcal{O}_{W \oplus W^{\vee}}} \rightleftarrows \bigwedge^{\rm{odd}}W \otimes \mathcal{O}_{W \oplus W^{\vee}} \right) \in \mathrm{MF}^{\rm{gr}}((W \oplus W^{\vee})/G, q)\end{aligned}$$ and is isomorphic to $\mathcal{O}_{(W \oplus \{0\})/G}$, see [@MR3895631 Proposition 3.20]. 
In the above, the grading is given by the $\mathbb{C}^{\ast}$-action on $W \oplus W^{\vee}$ of weight $(0, 2)$. By a diagram chase, we see that $$\mathcal{Q}:=\Theta'(\mathcal{E})=\Phi(\mathcal{P})$$ has support in $\pi_{V^{\vee}\oplus W \oplus W^{\vee}}^{-1}(\mathrm{Im}(\overline{0}))$, where $\overline{0}\colon (Y\oplus W)/\!\!/G\to (V^{\vee}\oplus W\oplus W^{\vee})/\!\!/G$ is induced by the zero section. Then $\mathcal{E}$ has nilpotent singular support with respect to $(G, Y\oplus W, V\oplus W\oplus W, s')$. Let $i_0 \colon BG \hookrightarrow (W \oplus W^{\vee})/G$ be the inclusion of the origin. We have the Koszul equivalence $$\begin{aligned} D^b(BG) \stackrel{\sim}{\to} \mathrm{MF}^{\rm{gr}}((W\oplus W^{\vee})/G, q)\end{aligned}$$ which sends $\mathcal{O}_{BG}$ to $\mathcal{K}$. Then $\mathcal{H}om(\mathcal{K}, \mathcal{K})=i_{0\ast}\mathcal{O}_{BG}$, hence we have the isomorphism $i_{V^{\vee}\ast}\mathcal{H}om(\mathcal{P}, \mathcal{P}) \stackrel{\cong}{\to} \mathcal{H}om(\mathcal{Q}, \mathcal{Q})$. We have the commutative diagram: $$\begin{aligned} \label{diagXc22} \xymatrix{ i_{V^{\vee}\ast}\mathcal{H}om(\mathcal{P}, \mathcal{P}) \ar[r]^-{\cong}\ar[d]_-{i_{V^{\vee}\ast} \mathrm{tr}_{\mathcal{P}}} & \mathcal{H}om(\mathcal{Q}, \mathcal{Q}) \ar[d]^-{\mathrm{tr}_{\mathcal{Q}}} \\ i_{V^{\vee}\ast} \mathcal{O}_{V^{\vee}/G} \ar[r] & \mathcal{O}_{(V^{\vee} \oplus W \oplus W^{\vee})/G}, }\end{aligned}$$ where the bottom arrow is the morphism obtained by adjunction and using the isomorphism in $D^b(V^{\vee}/G)$: $$\begin{aligned} i^{!}_{V^{\vee}}\mathcal{O}_{(V^{\vee} \oplus W \oplus W^{\vee})/G} \cong \det W \otimes \det (W^{\vee}(2))[-2\dim W] =\mathcal{O}_{V^{\vee}/G}. 
\end{aligned}$$ Applying $\pi_{V^{\vee} \oplus W \oplus W^{\vee}\ast}$ to the sheaves in the diagram ([\[diagXc22\]](#diagXc22){reference-type="ref" reference="diagXc22"}), we obtain the commutative diagram: $$\begin{aligned} \xymatrix{ \overline{i}_{V^{\vee}\ast}\pi_{V^{\vee}\ast}\mathcal{H}om(\mathcal{P}, \mathcal{P}) \ar[r]^-{\cong}\ar[d]_-{\overline{i}_{V^{\vee}\ast}\pi_{V^{\vee}\ast} \mathrm{tr}_{\mathcal{P}}} & \pi_{V^{\vee} \oplus W \oplus W^{\vee}\ast}\mathcal{H}om(\mathcal{Q}, \mathcal{Q}) \ar[d]^-{\pi_{V^{\vee}\oplus W \oplus W^{\vee}\ast}\mathrm{tr}_{\mathcal{Q}}} \\ \overline{i}_{V^{\vee}\ast} \mathcal{O}_{V^{\vee}/\!\!/G} \ar[r] & \mathcal{O}_{(V^{\vee} \oplus W \oplus W^{\vee}) /\!\!/G}. }\end{aligned}$$ Then by Lemma [Lemma 68](#lem:psupport){reference-type="ref" reference="lem:psupport"} applied for the map $p$ together with the commutative diagram ([\[diagXc2\]](#diagXc2){reference-type="ref" reference="diagXc2"}), we have the commutative diagram in $D^b((Y\oplus W)/\!\!/G)$ $$\begin{aligned} \xymatrix{ \overline{i}_{Y\ast}\pi_{Y\ast}\eta_{\ast}\mathcal{H}om(\mathcal{P}, \mathcal{P}) \ar[r]^-{\cong}\ar[d]_-{\overline{i}_{Y\ast}a_{\mathcal{P}}} & \pi_{Y\oplus W\ast}p_{\ast}\mathcal{H}om(\mathcal{Q}, \mathcal{Q}) \ar[d]^-{a_{\mathcal{Q}}} \\ \overline{i}_{Y\ast} \mathcal{O}_{Y/\!\!/G}[\operatorname{rank}V] \ar[r] & \mathcal{O}_{(Y \oplus W) /\!\!/G}[\operatorname{rank}V+\dim W]. }\end{aligned}$$ The bottom arrow is the natural morphism by $\overline{i}_{Y}^{!}\mathcal{O}_{(Y\oplus W)/\!\!/G}[\dim W]=\mathcal{O}_{Y/\!\!/G}$, see ([\[dualcomp\]](#dualcomp){reference-type="ref" reference="dualcomp"}). The lemma follows from the above commutative diagram together with the constructions of $\mathrm{tr}_{\mathcal{E}}$ and $\mathrm{tr}_{\mathcal{E}}'$. 
◻ ## Local triviality of the Serre functor {#subsec:proof2} In this section, we prove Theorem [Theorem 66](#thm:Serre:etale){reference-type="ref" reference="thm:Serre:etale"} using the trace map to reduce to the local case discussed in Theorem [\[thm:Serre\]](#thm:Serre){reference-type="ref" reference="thm:Serre"}. We first explain that objects of $\mathbb{T}$ have nilpotent singular support in the sense of Subsection [7.2](#subsec:trace){reference-type="ref" reference="subsec:trace"}. This result is a version of Lemma [\[cor:support\]](#cor:support){reference-type="ref" reference="cor:support"} and Theorem [Theorem 56](#prop:catsupp){reference-type="ref" reference="prop:catsupp"}. To show it follows from Lemma [\[cor:support\]](#cor:support){reference-type="ref" reference="cor:support"}, we need to mention a stronger form of the étale local description of $M$ from Subsection [2.6](#subsec:qsmooth){reference-type="ref" reference="subsec:qsmooth"}. For each $y\in M$, recall from Remark [Remark 23](#rmk:Equiver){reference-type="ref" reference="rmk:Equiver"} the polystable sheaf $F$, the corresponding doubled quiver $Q^{\circ, d}_y$, dimension vector $\bm{d}$, and good moduli spaces of the reduced stacks of representations of the doubled quiver and of the preprojective algebra of $Q^\circ_y$, respectively: $$\pi_Y\colon \mathscr{Y}(\bm{d})\to Y(\bm{d}),\, \pi_P\colon \mathcal{P}(\bm{d})\to P(\bm{d}).$$ Then there exists a smooth affine scheme $A$ with an action of the reductive group $G:=G(\bm{d})$, a section $s$ of the vector bundle $V:=\mathcal{O}_A\otimes\mathfrak{g}(\bm{d})^\vee$ with zero locus $$\mathcal{Z}:=s^{-1}(0)/G\subset \mathscr{A}:=A/G,$$ and étale maps $e''\colon A/\!\!/G\to Y(\bm{d})$ and $M\xleftarrow{e} Z:=s^{-1}(0)/\!\!/G\xrightarrow{e'}P(\bm{d})$ such that the following diagram is Cartesian, the horizontal maps are étale, and the vertical maps are good moduli space maps: $$\label{diaggg1} \begin{tikzcd} \mathscr{A}\arrow[r, "e''"]\arrow[d, 
"\pi"]&\mathscr{Y}(\bm{d})\arrow[d, "\pi_Y"]\\ A/\!\!/G\arrow[r, "e''"]& Y(\bm{d}), \end{tikzcd}$$ and such that both squares in the following diagram are Cartesian, the horizontal maps are étale, and the vertical maps are good moduli space maps: $$\label{diaggg2} \begin{tikzcd} \mathcal{M}\arrow[d, "\pi_M"]& \mathcal{Z}\arrow[d, "\pi"]\arrow[l, "e"']\arrow[r, "e'"]& \mathcal{P}(\bm{d})\arrow[d, "\pi_P"]\\ M&Z\arrow[l, "e"']\arrow[r, "e'"]& P(\bm{d}). \end{tikzcd}$$ See [@DavPurity Theorem 5.11] for a proof of the second diagram. To also obtain the first diagram, one can prove a stronger statement accounting for the derived structure of $\mathfrak{M}$ and $\mathscr{P}(\bm{d})$ as in [@halpK32 Theorem 4.2.3], because $A$ ($R$ in loc. cit.) can be chosen étale over $\mathrm{Ext}^1_S(F, F)=R_{Q^{\circ,d}}(\bm{d})$, see the proof of loc. cit. Then [\[diaggg1\]](#diaggg1){reference-type="eqref" reference="diaggg1"} and the right square of [\[diaggg2\]](#diaggg2){reference-type="eqref" reference="diaggg2"} commute, and the left square of [\[diaggg2\]](#diaggg2){reference-type="eqref" reference="diaggg2"} commutes by [@halpK32 Theorem 4.2.3]. For such $e\colon Z\to M$ and for $\mathcal{E}\in D^b(\mathcal{M})$, we denote by $\mathcal{E}|_Z=e^*(\mathcal{E})\in D^b(\mathcal{Z})$ the pullback along the induced map $\mathcal{Z}\to\mathcal{M}$. The upshot of the discussion above is that $y\in M$ is in the image of $e\colon Z\to M$ for a good data $(G, A, V, s)$. **Proposition 73**. *Let $\mathcal{E} \in \mathbb{T}$. Then $\mathcal{E}|_Z\in D^b(\mathcal{Z})$ has nilpotent singular support with respect to $(G, A, V, s)$.* *Proof.* The object $\mathcal{E}|_Z$ is in the subcategory of $D^b(\mathcal{Z})$ classically generated by the image of $e'^{\ast}\colon D^b(\mathcal{P}(\bm{d}))\to D^b(\mathcal{Z})$, see [@PTtop Subsection 2.11, Subsection 9.2]. Then the claim follows from [@PTquiver Lemma 5.4, Corollary 5.5]. 
◻ *Proof of Theorem [Theorem 66](#thm:Serre:etale){reference-type="ref" reference="thm:Serre:etale"}.* By Proposition [Proposition 73](#prop710){reference-type="ref" reference="prop710"}, the object $\mathcal{E}|_Z \in D^b(\mathcal{Z})$ admits a trace map determined by $(G, A, V, s)$, see the construction of Subsection [7.2](#subsec:trace){reference-type="ref" reference="subsec:trace"} and Definition [Definition 69](#defn:trace){reference-type="ref" reference="defn:trace"}: $$\begin{aligned} \label{tr:E} \mathrm{tr}_Z \colon \pi_{\ast}\mathcal{H}om(\mathcal{E}|_{Z}, \mathcal{E}|_{Z}) \to \mathcal{O}_Z. \end{aligned}$$ By the definition of the relative Serre functor, it corresponds to a morphism $$\begin{aligned} \label{mor:Serre} \phi_Z \colon \mathcal{E}|_{Z} \to S_{\mathbb{T}/M}(\mathcal{E})|_{Z}. \end{aligned}$$ By Lemma [Lemma 65](#lem:Serre){reference-type="ref" reference="lem:Serre"}, it is enough to show that the above morphism is an isomorphism. Set $\mathscr{A}=A/G$ and $\mathcal{V}=V/G$. For each point $u \in Z \hookrightarrow A/\!\!/G$, let $\widehat{\mathscr{A}}_u$ be the formal fiber of $\mathscr{A} \to A/\!\!/G$ at $u$, and (by abuse of notation) denote by $u \in \mathscr{A}$ the unique closed point in the fiber of $\mathscr{A} \to A/\!\!/G$ at $u$. Let $G_u=\mathrm{Aut}(u) \subset G$. By the étale slice theorem, there is an isomorphism $$\begin{aligned} \label{isom:formal} \widehat{\mathscr{A}}_u \cong \widehat{\mathcal{H}}^{0}(\mathbb{T}_{\mathscr{A}}|_{u})/G_u. \end{aligned}$$ From the triangle $\mathbb{T}_{\mathcal{Z}} \to \mathbb{T}_{\mathscr{A}}|_{\mathcal{Z}} \to \mathcal{V}|_{\mathcal{Z}}\to \mathbb{T}_{\mathcal{Z}}[1]$, there is an exact sequence of $G_u$-representations $$\begin{aligned} 0 \to \mathcal{H}^0(\mathbb{T}_{\mathcal{Z}}|_{u}) \to \mathcal{H}^0(\mathbb{T}_{\mathscr{A}}|_{u}) \stackrel{ds|_{u}}{\to} \mathcal{V}|_{u} \to \mathcal{H}^1(\mathbb{T}_{\mathcal{Z}}|_{u}) \to 0. 
\end{aligned}$$ Hence there exist isomorphisms of $G_u$-representations $$\begin{aligned} \label{decom:TW} \mathcal{H}^0(\mathbb{T}_{\mathscr{A}}|_{u}) \cong \mathcal{H}^0(\mathbb{T}_{\mathcal{Z}}|_{u}) \oplus W, \ \mathcal{V}|_u \cong \mathcal{H}^1(\mathbb{T}_{\mathcal{Z}}|_{u}) \oplus W\end{aligned}$$ for some $G_u$-representation $W$ such that $ds|_{u}=(0, \operatorname{id}_W)$. First assume that $u$ corresponds to a point in the deepest stratum, so that $$\begin{aligned} \label{isom:Tgl}\mathcal{H}^0(\mathbb{T}_{\mathcal{Z}}|_{u})=\mathfrak{gl}(d)^{\oplus 2g}, \ \mathcal{H}^1(\mathbb{T}_{\mathcal{Z}}|_{u}) =\mathfrak{gl}(d)_0, \text{ and } G_u=GL(d).\end{aligned}$$ Let $\mu_0 \colon \mathfrak{gl}(d)^{\oplus 2g} \to \mathfrak{gl}(d)_0$ be the moment map ([\[mu0:trace\]](#mu0:trace){reference-type="ref" reference="mu0:trace"}). Note that the zero locus of $s|_{\widehat{\mathscr{A}}_u}$ is isomorphic to the formal fiber of $\mu_0^{-1}(0)/GL(d) \to \mu_0^{-1}(0)/\!\!/GL(d)$ at the origin, see Lemma [Lemma 24](#lem:py){reference-type="ref" reference="lem:py"}. As both of $s$ and $\mu_0$ are regular sections, by a formal coordinate change we may replace the isomorphism ([\[isom:formal\]](#isom:formal){reference-type="ref" reference="isom:formal"}) and assume that $s|_{\widehat{\mathscr{A}}_u}$ corresponds to the map $$\begin{aligned} (\mu_0, \operatorname{id}_W) \colon \mathfrak{gl}(d)^{\oplus 2g} \oplus W \to \mathfrak{gl}(d)_0 \oplus W\end{aligned}$$ under the decompositions ([\[decom:TW\]](#decom:TW){reference-type="ref" reference="decom:TW"}) and isomorphisms ([\[isom:Tgl\]](#isom:Tgl){reference-type="ref" reference="isom:Tgl"}). 
By Lemmas [Lemma 70](#compare:trace0){reference-type="ref" reference="compare:trace0"} and [Lemma 72](#lem:trace=){reference-type="ref" reference="lem:trace="}, the trace map ([\[tr:E\]](#tr:E){reference-type="ref" reference="tr:E"}) pulled back via $\widehat{Z}_u:=\operatorname{Spec}\widehat{\mathcal{O}}_{Z, u} \to Z$ coincides with the trace map determined by the good data $(GL(d), \mathfrak{gl}(d)^{\oplus 2g}, \mathfrak{gl}(d)_0, \mu_0)$. Then from Theorem [\[thm:Serre\]](#thm:Serre){reference-type="ref" reference="thm:Serre"}, the map ([\[mor:Serre\]](#mor:Serre){reference-type="ref" reference="mor:Serre"}) is an isomorphism at $\widehat{Z}_u$. In general, let $p \in \mathfrak{gl}(d)^{\oplus 2g}/\!\!/GL(d)$ be a point corresponding to $u$ as in Lemma [Lemma 24](#lem:py){reference-type="ref" reference="lem:py"}, i.e. there is an equivalence $$\begin{aligned} \label{equiv:MPp} \widehat{\mathcal{Z}}_{u} \simeq \widehat{\mathscr{P}}(d)_p\end{aligned}$$ for the $g$-loop quiver $Q^{\circ}$. Let $\mathscr{Y}(d)=\mathfrak{gl}(d)^{\oplus 2g}/GL(d)$ be the moduli stack of representations of the doubled quiver of $Q^\circ$. We also denote by $p \in \mathscr{Y}(d)$ the unique closed point in the fiber of $\mathscr{Y}(d) \to \mathfrak{gl}(d)^{\oplus 2g}/\!\!/GL(d)$ at $p$. Then we have decompositions $$\begin{aligned} \label{decom:W2} \mathcal{H}^0(\mathbb{T}_{\mathscr{Y}(d)}|_{p})=\mathcal{H}^0(\mathbb{T}_{\mathcal{Z}}|_{u}) \oplus W', \ \mathfrak{gl}(d)_0=\mathcal{H}^1(\mathbb{T}_{\mathcal{Z}}|_{u}) \oplus W'\end{aligned}$$ for some $G_u$-representation $W'$. By Lemma [Lemma 70](#compare:trace0){reference-type="ref" reference="compare:trace0"} and the isomorphism ([\[isom:formal\]](#isom:formal){reference-type="ref" reference="isom:formal"}), the trace map ([\[tr:E\]](#tr:E){reference-type="ref" reference="tr:E"}) at $\widehat{Z}_u$ equals the trace map determined by $(G_u, \mathcal{H}^0(\mathbb{T}_{\mathscr{A}}|_{u}), \mathscr{V}|_{u}, s|_{\widehat{\mathscr{A}}_u})$. 
Then by the decomposition ([\[decom:TW\]](#decom:TW){reference-type="ref" reference="decom:TW"}) and Lemma [Lemma 72](#lem:trace=){reference-type="ref" reference="lem:trace="}, it also equals the trace map determined by the good data $(G_u, \mathcal{H}^0(\mathbb{T}_{\mathcal{Z}}|_{u}), \mathcal{H}^1(\mathbb{T}_{\mathcal{Z}}|_{u}), \kappa)$. Then by ([\[decom:W2\]](#decom:W2){reference-type="ref" reference="decom:W2"}) and Lemma [Lemma 72](#lem:trace=){reference-type="ref" reference="lem:trace="}, under the equivalence ([\[equiv:MPp\]](#equiv:MPp){reference-type="ref" reference="equiv:MPp"}) the trace map ([\[tr:E\]](#tr:E){reference-type="ref" reference="tr:E"}) at $\widehat{Z}_u$ also equals the trace map determined by $(G_p, \mathcal{H}^0(\mathbb{T}_{\mathscr{Y}(d)}|_{p}), \mathfrak{gl}(d)_0, \mu_0)$, which in turn equals the trace map determined by $(GL(d), \mathfrak{gl}(d)^{\oplus 2g}, \mathfrak{gl}(d)_0, \mu_0)$ at $p$ by Lemma [Lemma 70](#compare:trace0){reference-type="ref" reference="compare:trace0"}. Again by Theorem [\[thm:Serre\]](#thm:Serre){reference-type="ref" reference="thm:Serre"}, the map ([\[mor:Serre\]](#mor:Serre){reference-type="ref" reference="mor:Serre"}) is an isomorphism on $\widehat{Z}_u$. Therefore ([\[mor:Serre\]](#mor:Serre){reference-type="ref" reference="mor:Serre"}) is an isomorphism at any point $u\in Z$, hence it is an isomorphism. ◻ The proof of Theorem [Theorem 66](#thm:Serre:etale){reference-type="ref" reference="thm:Serre:etale"} also implies the following: **Corollary 74**. *In the situation of Theorem [Theorem 66](#thm:Serre:etale){reference-type="ref" reference="thm:Serre:etale"}, for each $\mathcal{E} \in \mathbb{T}$ there are isomorphisms: $$\mathcal{H}^i(S_{\mathbb{T}}(\mathcal{E})) \cong \mathcal{H}^i(\mathcal{E}[\dim M])$$ for all $i\in \mathbb{Z}$. 
In particular, if there exists $k\in \mathbb{Z}$ such that $\mathcal{E}$ is an object in $\mathbb{T} \cap \operatorname{Coh}(\mathcal{M})[k]$, then $S_{\mathbb{T}}(\mathcal{E}) \cong \mathcal{E}[\dim M]$.* *Proof.* Let $\mathrm{tr}_Z$ be the morphism in ([\[tr:E\]](#tr:E){reference-type="ref" reference="tr:E"}). For another étale morphism $Z' \to M$, the proof of Theorem [\[thm:Serre\]](#thm:Serre){reference-type="ref" reference="thm:Serre"} shows that the morphism $$\begin{aligned} \mathrm{tr}_{Z}|_{Z \times_M Z'}- \mathrm{tr}_{Z'}|_{Z\times_M Z'} \colon \pi_{\ast}\mathcal{H}om(\mathcal{E}|_{Z\times_M Z'}, \mathcal{E}|_{Z\times_M Z'}) \to \mathcal{O}_{Z \times_M Z'} \end{aligned}$$ is a zero map formally locally at any point in $Z\times_M Z'$. Thus for the morphism $\phi_Z$ in ([\[mor:Serre\]](#mor:Serre){reference-type="ref" reference="mor:Serre"}), the morphism $$\begin{aligned} \phi_Z|_{Z\times_M Z'}-\phi_{Z'}|_{Z\times_M Z'} \colon \mathcal{E}|_{Z\times_M Z'} \to S_{\mathbb{T}/M}(\mathcal{E})|_{Z\times_M Z'} \end{aligned}$$ is a zero map formally locally at each point in $Z\times_M Z'$. Therefore for each $i \in \mathbb{Z}$, the isomorphism $$\begin{aligned} \mathcal{H}^i(\phi_Z) \colon \mathcal{H}^i(\mathcal{E}|_{Z}) \stackrel{\cong}{\to} \mathcal{H}^i(S_{\mathbb{T}/M}(\mathcal{E})|_{Z}) \end{aligned}$$ glues to give an isomorphism $\mathcal{H}^i(\mathcal{E})\cong \mathcal{H}^i(S_{\mathbb{T}/M}(\mathcal{E}))$. Then the corollary follows from Lemma [Lemma 65](#lem:Serre){reference-type="ref" reference="lem:Serre"}. ◻ We also have the following: **Corollary 75**. 
*In the situation of Theorem [Theorem 66](#thm:Serre:etale){reference-type="ref" reference="thm:Serre:etale"}, for $\mathcal{E} \in \mathbb{T} \cap \mathrm{Perf}(\mathcal{M})$, we have $S_{\mathbb{T}}(\mathcal{E}) \cong \mathcal{E}[\dim M]$.* *Proof.* As $\mathcal{E}$ is perfect, there is a trace map $\mathcal{H}om_{\mathcal{M}}(\mathcal{E}, \mathcal{E}) \to \mathcal{O}_{\mathcal{M}}$, thus its push-forward $\pi_{\ast}$ gives a morphism $$\begin{aligned} \pi_{\ast}\mathcal{H}om_{\mathcal{M}}(\mathcal{E}, \mathcal{E}) \to \mathcal{O}_M. \end{aligned}$$ The above morphism corresponds to $\phi \colon \mathcal{E} \to S_{\mathbb{T}/M}(\mathcal{E})$. By Lemma [Lemma 71](#lem:trperf){reference-type="ref" reference="lem:trperf"}, the above morphism coincides with ([\[mor:Serre\]](#mor:Serre){reference-type="ref" reference="mor:Serre"}) on each étale map $Z \to M$, thus $\phi$ is an isomorphism. Then the corollary follows from Lemma [Lemma 65](#lem:Serre){reference-type="ref" reference="lem:Serre"}. ◻ # Topological K-theory of quasi-BPS categories for K3 surfaces {#sec:topK} ## Statement of the main result {#subsec81} In this section, we prove Theorem [\[intro:thm:K\]](#intro:thm:K){reference-type="ref" reference="intro:thm:K"} using the computation of topological K-theory of quasi-BPS categories of preprojective algebras from [@PTtop]. We actually compute the topological K-theory of quasi-BPS categories for all weights $w\in\mathbb{Z}$, not only in the case of $w$ coprime with $v=dv_0$, see Theorem [Theorem 76](#thmKtop){reference-type="ref" reference="thmKtop"}. For a stack $\mathscr{X}$, we denote by $D_{\rm{con}}(\mathscr{X})$ the bounded derived category of constructible sheaves on $\mathscr{X}$ and $\mathrm{Perv}(\mathscr{X}) \subset D_{\rm{con}}(\mathscr{X})$ the subcategory of perverse sheaves [@MR2480756]. 
We denote by $D^+_{\mathrm{con}}(\mathscr{X})$ the category of locally bounded below complexes of constructible sheaves on $\mathscr{X}$: if $\mathscr{X}$ is connected, then $D^+_{\mathrm{con}}(\mathscr{X})$ is the limit of the diagram of categories $D_n:=D^b_{\mathrm{con}}(\mathscr{X})$ for all $n\in\mathbb{N}$ and for the functors ${}^p\tau^{\leqslant n'}\colon D_n\to D_{n'}$; for general $\mathscr{X}$, we have $D^+_{\mathrm{con}}(\mathscr{X})=\prod_{\mathscr{X}'\in \pi_0(\mathscr{X})} D^+_{\mathrm{con}}(\mathscr{X}')$. In this section, we assume that $d\geqslant 2$, $g\geqslant 2$, and that $\sigma \in \mathrm{Stab}(S)$ corresponds to a Gieseker stability condition for an ample divisor $H$, see Proposition [Proposition 22](#prop:LV){reference-type="ref" reference="prop:LV"} and Corollary [Corollary 36](#cor:lv){reference-type="ref" reference="cor:lv"}. The reason we restrict to Gieseker stability conditions is that, in this case, $\mathcal{M}$ is a global quotient stack and one can construct a cycle map as in [@PTtop Section 3]. We fix $v=dv_0$ and $w\in\mathbb{Z}$. In Subsection [8.2.1](#subsub:bpsk3){reference-type="ref" reference="subsub:bpsk3"}, we recall the definition of the BPS sheaf $$\mathcal{BPS}_v=\mathcal{BPS}^{\sigma}_S(v)\in \mathrm{Perv}\left(M^{\sigma}_S(v)\right).$$ For a partition $A=(d_i)_{i=1}^k$ of $d$, define the perverse sheaf $\mathcal{BPS}_A$ on $M_S^{\sigma}(v)$ to be $$\begin{aligned} \mathcal{BPS}_A :=\left(\oplus_{\ast}\boxtimes_{i=1}^k \mathcal{BPS}_{d_i v_0}\right)^{\mathfrak{S}_A},\end{aligned}$$ where $\mathfrak{S}_A \subset \mathfrak{S}_k$ is the subgroup of permutations $\sigma \in \mathfrak{S}_k$ such that $d_i=d_{\sigma(i)}$, and $\oplus$ is the addition map $$\begin{aligned} \oplus \colon M_S^{\sigma}(d_1 v_0) \times \cdots \times M_S^{\sigma}(d_k v_0) \to M_S^{\sigma}(d v_0). \end{aligned}$$ For $w\in\mathbb{Z}$, let $S^d_w$ be the set of partitions of $d$ from [@PTtop Subsection 6.1.2]. 
From [@PTtop Proposition 8.8], it consists of partitions $A=(d_i)_{i=1}^k$ such that, for all $1\leqslant i\leqslant k$, $w_i:= d_i w/d$ is an integer, thus $S^d_w$ is in bijection with the set of partitions of $\gcd(d,w)$. We set $$\mathcal{BPS}_{v,w}:=\bigoplus_{A\in S^d_w}\mathcal{BPS}_A.$$ For a dg-category $\mathscr{D}$, we denote by $K^{\mathrm{top}}(\mathscr{D})$ the topological K-theory spectrum as defined by Blanc [@Blanc]. We consider its (rational) homotopy groups: $$K^{\mathrm{top}}_i(\mathscr{D}):=\left(\pi_i K^{\mathrm{top}}(\mathscr{D})\right)\otimes_\mathbb{Z}\mathbb{Q}.$$ For a review of (and references on) topological K-theory, see [@PTtop Subsection 2.4]. If $\mathcal{M}$ is a quotient stack, we denote by $G^{\mathrm{top}}(\mathcal{M})$ the (rational) K-homology of $\mathcal{M}$. Then, by [@HLP], we have that $G^{\mathrm{top}}(\mathcal{M})=K^{\mathrm{top}}(D^b(\mathcal{M}))$. For a $\mathbb{Z}$-graded vector space $H^{\ast}$ and $i\in\mathbb{Z}$, let $\widetilde{H}^{i}:=\prod_{j \in \mathbb{Z}}H^{i+2j}$. In this section, we prove the following result, which implies Theorem [\[intro:thm:K\]](#intro:thm:K){reference-type="ref" reference="intro:thm:K"} as a special case. Note that the second isomorphism is not canonical, see Theorem [Theorem 84](#thmKtopgraded){reference-type="ref" reference="thmKtopgraded"} for a statement involving canonical isomorphisms: **Theorem 76**. *For $i\in \mathbb{Z}$, there exist isomorphisms of $\mathbb{Q}$-vector spaces $$\begin{aligned} \label{isom:thmKtop} K_{i}^{\rm{top}}(\mathbb{T}_S^{\sigma}(v)_w^{\rm{red}}) \stackrel{\cong}{\to} K_{i}^{\rm{top}}(\mathbb{T}_S^{\sigma}(v)_w) \cong \widetilde{H}^{i}\left(M^{\sigma}_S(v), \mathcal{BPS}_{v,w}\right). 
\end{aligned}$$* ## BPS sheaves for K3 surfaces As in the case of symmetric quivers with potential or preprojective algebras, the BPS cohomology for K3 surfaces is the "primitive" part of the Hall algebra of $S$ for the chosen stability condition, and is computed as the cohomology of the BPS sheaf. In this section, we recall the definition of BPS sheaves for K3 surfaces due to Davison--Hennecart--Schlegel Mejia [@DHSM] and we compare these sheaves with BPS sheaves for preprojective algebras. ### BPS sheaves via intersection complexes {#subsub:bpsk3} Let $\mathbb{D}$ be the Verdier duality functor on $D^b_{\mathrm{con}}(\mathcal{M}^{\sigma}_S(v))$ and let $D_d:=\mathbb{D}\mathbb{Q}\in D^b_{\mathrm{con}}(\mathcal{M}^{\sigma}_S(v))$ be the dualizing complex on $\mathcal{M}^{\sigma}_S(v)=\mathcal{M}^{\sigma}_S(dv_0)$. Recall the good moduli space map $$\pi_d:=\pi\colon\mathcal{M}\to M.$$ The BBDG decomposition theorem holds for $\pi_{d*} D_d\in D^+_{\mathrm{con}}(M^{\sigma}_S(v))$, see [@Dav Theorem C]. The BPS sheaf of $M^{\sigma}_S(v)$ is a certain direct summand of the zeroth perverse truncation (which itself is a perverse sheaf on $M^{\sigma}_S(v)$, see loc. cit.): $$\label{eq:BPS0} {}^p\tau^{\leqslant 0}\pi_{d*}D_d\in \mathrm{Perv}(M_S^{\sigma}(v)).$$ We now explain the definition of the BPS sheaf. 
The cohomological Hall product $m=p_*q^*$ for the maps $p,q$ in [\[PortaSala\]](#PortaSala){reference-type="eqref" reference="PortaSala"} induces an algebra structure on the $\mathbb{N}$-graded complex $$\mathscr{A}:=\bigoplus_{d\in\mathbb{N}} {}^p\tau^{\leqslant 0}\pi_{d*}D_d \in \bigoplus_{d\in \mathbb{N}} D^+_{\mathrm{con}}(M_S^{\sigma}(dv_0)).$$ There is a natural map $$\mathrm{IC}_{M^{\sigma}_S(v)}\to {}^p\tau^{\leqslant 0}\pi_{d*}D_d.$$ The main theorem of Davison--Hennecart--Schlegel Mejia [@DHSM Theorem C] says that the induced map from the free algebra generated by the intersection complexes to $\mathscr{A}$ is an isomorphism: $$\mathrm{Free}\left(\bigoplus_{d\in\mathbb{N}}\mathrm{IC}_{M^{\sigma}_S(dv_0)}\right)\xrightarrow{\sim} \mathscr{A}.$$ The BPS sheaves $$\mathcal{BPS}^{\sigma}_S(v)=\mathcal{BPS}^{\sigma}_S(dv_0)\in \mathrm{Perv}\left(M^{\sigma}_S(v)\right)$$ are defined via the free Lie algebra on the intersection complexes $$\label{defBPS} \mathrm{Free}_{\mathrm{Lie}}\left(\bigoplus_{d\in\mathbb{N}}\mathrm{IC}_{M^{\sigma}_S(dv_0)}\right)=:\bigoplus_{d\in\mathbb{N}}\mathcal{BPS}^{\sigma}_S(dv_0).$$ We obtain that: $$\label{BPSZ} \mathrm{Sym}\left(\bigoplus_{d\in\mathbb{N}}\mathcal{BPS}^{\sigma}_S(dv_0)\right)\xrightarrow{\sim}\mathscr{A}.$$ A precise formulation for the heuristics that the BPS cohomology is the "primitive" part of the Hall algebra is the following: the relative Hall algebra of $S$ for the multiples of the Mukai vector $v_0$ and stability condition $\sigma$ has a PBW decomposition in terms of BPS sheaves: $$\label{Hall} \mathrm{Sym}\left(\bigoplus_{d\in\mathbb{N}}\mathcal{BPS}^{\sigma}_S(dv_0)\otimes H^\cdot(B\mathbb{C}^*)\right)\xrightarrow{\sim}\mathscr{H}:=\bigoplus_{d\in\mathbb{N}}\pi_{d*}D_{d},$$ see [@DHSM Theorem 1.5], and note that the above isomorphism is of constructible sheaves, not of relative algebras. There is also a version for the absolute Hall algebra [@DHSM Corollary 1.6]. 
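The arithmetic behind the set $S^d_w$ recalled above — a partition $A=(d_i)$ of $d$ belongs to $S^d_w$ exactly when every $d_iw/d$ is an integer, and the set is then in bijection with the partitions of $\gcd(d,w)$ — can be checked by brute force in small cases. The following Python sketch (our own illustration; the function names are ad hoc) enumerates both sides:

```python
from math import gcd

def partitions(n, max_part=None):
    """Yield the partitions of n as non-increasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def S_d_w(d, w):
    """Partitions A = (d_i) of d such that every d_i * w / d is an integer."""
    return [A for A in partitions(d) if all(di * w % d == 0 for di in A)]

# e.g. d = 4, w = 2: only the partitions with even parts survive
assert S_d_w(4, 2) == [(4,), (2, 2)]

# |S^d_w| equals the number of partitions of gcd(d, w), over a range of cases
for d in range(1, 9):
    for w in range(1, 9):
        assert len(S_d_w(d, w)) == sum(1 for _ in partitions(gcd(d, w)))
```

The bijection itself sends $(d_i)$ to $(d_i\,g/d)$ where $g=\gcd(d,w)$, since the integrality condition forces $d/g$ to divide each part $d_i$.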
The above PBW theorem is the analogue for K3 surfaces of the Davison--Meinhardt PBW theorem for cohomological Hall algebras of quivers with potential [@DM]. The results in [@DHSM] cited above hold by a computation of all the simple summands of $\pi_{d*}D_d$, which satisfies a version of the BBDG/ Saito decomposition theorem due to Davison [@DavPurity]. ### The moduli stack of semistable sheaves on a K3 surface via preprojective algebras {#localdescription} One can describe the map $\pi_d \colon \mathcal{M}_S^{\sigma}(v) \to M_S^{\sigma}(v)$ étale, formally, or analytically locally on the target via preprojective algebras [@Sacca0], [@DavPurity Sections 4 and 5], [@halpK32 Theorem 4.3.2], [@Todstack], Subsections [2.6.2](#subsection:etale){reference-type="ref" reference="subsection:etale"} and [7.4](#subsec:proof2){reference-type="ref" reference="subsec:proof2"}. We will use the setting of Subsection [7.4](#subsec:proof2){reference-type="ref" reference="subsec:proof2"}, see diagrams [\[diaggg1\]](#diaggg1){reference-type="eqref" reference="diaggg1"} and [\[diaggg2\]](#diaggg2){reference-type="eqref" reference="diaggg2"}. We will continue with the notation from Subsection [7.4](#subsec:proof2){reference-type="ref" reference="subsec:proof2"}. The quiver $Q^\circ_y$ is totally negative in the sense of [@DHSM Section 1.2.3], see [@KaLeSo]. Thus the results in [@DHSM] about construction of BPS sheaves via intersection complexes apply, so the BPS sheaves $\mathcal{BPS}^p(\bm{d})$ of the preprojective algebras of the quiver $Q^\circ_y$ have a similar description via intersection complexes [\[defBPS\]](#defBPS){reference-type="eqref" reference="defBPS"}. Then the maps $e$ and $e'$ induce isomorphisms: $$\label{localBPS} e^*\left(\mathcal{BPS}^{\sigma}_S(v)\right)=e'^*\left(\mathcal{BPS}^p(\bm{d})\right).$$ **Remark 77**. 
If we are interested in a local analytic description of $\mathcal{M}^{\sigma}_S(v)$, then it is possible to choose $Y$ an analytic open subset of $P(\bm{d})$ and $M^{\sigma}_S(v)$, that is, we may assume that $e$ and $e'$ are open inclusions of analytic sets. Thus, locally analytically near $p$, the preimage of the map $\pi_d$ is isomorphic to the preimage of the map $\pi_P$. ## Topological K-theory and étale covers We use the shorthand notations $M=M^{\sigma}_S(v)$, $\mathcal{M}:=\mathcal{M}^{\sigma}_S(v)$, $\mathfrak{M}=\mathfrak{M}^{\sigma}_S(v)$ and $$\mathbb{T}(M)^{\mathrm{red}}:=\mathbb{T}^{\sigma}_S(v)^{\mathrm{red}}_w, \ \mathbb{T}(M):=\mathbb{T}^{\sigma}_S(v)_w.$$ We write the semiorthogonal decomposition for $\mathcal{M}$ as: $$\label{SODredAT} D^b(\mathcal{M})=\langle \mathbb{A}(M)^{\mathrm{red}}, \mathbb{T}(M)^{\mathrm{red}}\rangle.$$ By the following lemma, it suffices to prove Theorem [Theorem 76](#thmKtop){reference-type="ref" reference="thmKtop"} for $\mathbb{T}(M)^{\mathrm{red}}$. The argument for $\mathbb{T}(M)$ is the same, but we prefer working with the stack $\mathcal{M}$ because the good moduli space map is defined from $\mathcal{M}$. **Lemma 78**. *The closed immersion $\iota \colon \mathcal{M} \hookrightarrow \mathfrak{M}$ induces the isomorphism $$\label{isoKtopTTred} \iota_{\ast}\colon G^{\mathrm{top}}_\bullet(\mathbb{T}(M)^{\mathrm{red}})\xrightarrow{\sim} G^{\mathrm{top}}_\bullet(\mathbb{T}(M)).$$* *Proof.* We have the isomorphism $$\begin{aligned} \iota_{\ast}\colon G^{\mathrm{top}}_\bullet(\mathcal{M})\xrightarrow{\sim} G^{\mathrm{top}}_\bullet(\mathfrak{M})\end{aligned}$$ since both spaces have the same underlying topological space. Then the lemma holds since $\iota_{\ast}$ sends $\mathbb{T}(M)^{\mathrm{red}}$ to $\mathbb{T}(M)$. 
◻ The semiorthogonal decomposition in Theorem [Theorem 44](#thm:sodK32){reference-type="ref" reference="thm:sodK32"} holds étale locally over $M$ by [@PTtop Section 9] and the diagram [\[diaggg2\]](#diaggg2){reference-type="eqref" reference="diaggg2"}. Indeed, let $R\to M$ be an étale map which factors through $R\xrightarrow{h} Z\to M$ as in [\[diaggg2\]](#diaggg2){reference-type="eqref" reference="diaggg2"}. Let $\mathcal{R}:=\mathcal{M}^\sigma_S(v)\times_{M^\sigma_S(v)} R$. By [@PTtop Section 9], there is a semiorthogonal decomposition: $$\label{SODZ} D^b(\mathcal{R})=\langle \mathbb{A}(R)^{\mathrm{red}}, \mathbb{T}(R)^{\mathrm{red}}\rangle$$ such that for an étale map $b\colon R'\to R$, the pull-back $b^{\ast}$ induces functors $$\begin{aligned} b^{\ast} \colon \mathbb{A}(R)^{\mathrm{red}} \to \mathbb{A}(R')^{\mathrm{red}}, \ b^{\ast} \colon \mathbb{T}(R)^{\mathrm{red}} \to \mathbb{T}(R')^{\mathrm{red}}. \end{aligned}$$ Consider étale covers $$U=(Z\xrightarrow{e} M),\, \mathcal{U}=(\mathcal{Z}\xrightarrow{e}\mathcal{M})$$ generated by the étale covers $Z\to M$ as in [\[diaggg2\]](#diaggg2){reference-type="eqref" reference="diaggg2"}. Consider the presheaves of spectra $\mathcal{K}$, $\mathcal{A}$ and $\mathcal{T}$ on $U$ defined as follows: for $(R\xrightarrow{e}M)\in U$ (and dropping $e$ from the notation), we have: $$\mathcal{K}(R)=G^{\mathrm{top}}(\mathcal{R}),\, \mathcal{A}(R)=K^{\mathrm{top}}(\mathbb{A}(R)^{\mathrm{red}}),\, \mathcal{T}(R)=K^{\mathrm{top}}(\mathbb{T}(R)^{\mathrm{red}}).$$ By [@PTtop Theorem 9.2], there is a direct sum decomposition of presheaves of spectra on $U$: $$\label{sum} \mathcal{K}=\mathcal{A}\oplus\mathcal{T}.$$ Let $\mathcal{F}$ be a presheaf of spectra and consider a cover $(Z_i\xrightarrow{e} M)_{i\in I}$ as in diagram [\[diaggg2\]](#diaggg2){reference-type="eqref" reference="diaggg2"} for a set $I$. 
Consider the cosimplicial diagram of spectra: $$\xymatrix{ \prod_{i\in I}\mathcal{F}(Z_i) \ar@<1ex>[r] \ar@<-1ex>[r] & \prod_{i,j\in I}\mathcal{F}(Z_i\times_M Z_j) \ar@<0ex>[l] \ar@<2ex>[r] \ar@<0ex>[r] \ar@<-2ex>[r] & \cdots \ar@<1ex>[l] \ar@<-1ex>[l] },$$ which can be used to compute the cohomology of the sheafification of $\mathcal{F}$, and which can also be related to Čech cohomology $\Check{\mathrm{H}}(U, \mathcal{F})$, see [@Thomason3 Definition 1.33, Remark 1.38]. There is a natural map $$\label{mapzero} \eta_\mathcal{F}\colon\mathcal{F}(M)\to \Check{\mathrm{H}}(U, \mathcal{F}).$$ For a presheaf of spectra $\mathcal{F}$ and for $i\in\mathbb{Z}$, denote by $\mathcal{F}_i=\pi_i\mathcal{F}$ the corresponding presheaf of abelian groups and by $\mathcal{F}_i^s$ the sheafification of $\mathcal{F}_i$. **Proposition 79**. *The map [\[mapzero\]](#mapzero){reference-type="eqref" reference="mapzero"} induces a weak equivalence of spectra: $$G^{\mathrm{top}}(\mathcal{M})=\mathcal{K}(M)\xrightarrow{\sim} \Check{\mathrm{H}}(U, \mathcal{K}).$$ Thus there is a spectral sequence $$\label{ss1} E_{p,q}:=\Check{\mathrm{H}}^p(U, \mathcal{K}^s_q)\implies G^{\mathrm{top}}_{q-p}(\mathcal{M}).$$* *Proof.* The above statement is proved for (rational) algebraic K-theory by Thomason in [@Thomason3 Theorem 2.15, Corollary 2.16, Corollary 2.17]. The proof in loc. cit. also applies to the easier case of (rational) topological K-theory. Indeed, pushforward maps along étale maps exist on topological K-theory, so topological K-theory satisfies the weak transfer property [@Thomason3 Definition 2.12], thus topological K-theory has étale cohomological descent [@Thomason3 Proposition 2.14], and then the statement of [@Thomason3 Theorem 2.15] also holds for topological K-theory. 
Alternatively, the analogous statement holds for singular cohomology [@MR0559531 Chapter III, Theorem 2.17], then by a standard dévissage argument also for Borel-Moore homology, and then the statement for topological K-theory can be obtained using [@PTtop Proposition 3.1]. ◻ **Remark 80**. Even more, the presheaf $\mathcal{K}$ is a sheaf of spectra. Indeed, let $\mathcal{K}^s$ be the sheafification of $\mathcal{K}$. For any $(E\to M)\in \mathrm{Et}(M)$, we can compute the sections $\mathcal{K}^s(E)$ using Cech cohomology for a cover $U_E$ of $E$: $$\mathcal{K}^s(E)\xrightarrow{\sim} \Check{\mathrm{H}}(U_E, \mathcal{K}).$$ By the same argument as in Proposition [Proposition 79](#etalecomputationtopK){reference-type="ref" reference="etalecomputationtopK"}, we also have that $\mathcal{K}(E)\xrightarrow{\sim} \Check{\mathrm{H}}(U_E, \mathcal{K})$, thus $\mathcal{K}$ is indeed a sheaf. **Corollary 81**. *The map [\[mapzero\]](#mapzero){reference-type="eqref" reference="mapzero"} induces a weak equivalence: $$K^{\mathrm{top}}(\mathbb{T}(M)^{\mathrm{red}})\xrightarrow{\sim} \Check{\mathrm{H}}(U, \mathcal{T}).$$ Thus there is a spectral sequence $$\Check{\mathrm{H}}^p(U, \mathcal{T}^s_q)\implies K^{\mathrm{top}}_{q-p}(\mathbb{T}(M)^{\mathrm{red}}).$$* *Proof.* The map $\eta_\mathcal{K}=\eta_\mathcal{A}\oplus\eta_\mathcal{T}$ is an isomorphism by Proposition [Proposition 79](#etalecomputationtopK){reference-type="ref" reference="etalecomputationtopK"}, so $\eta_\mathcal{T}$ is also an isomorphism. ◻ Let $\mathcal{H}_q$ be the presheaf of $\mathbb{Q}$-vector spaces such that, for $(Z\xrightarrow{e}M)\in U$, we have $$\mathcal{H}_q(Z)=H^{\mathrm{BM}}_q(\mathcal{Z}).$$ Then $\mathcal{H}_q=\pi_q\mathcal{H}$, where $\mathcal{H}$ is the presheaf of Eilenberg-MacLane spectra. As for $\mathcal{K}$, the presheaf $\mathcal{H}$ is actually a sheaf. 
There is a spectral sequence analogous to [\[ss1\]](#ss1){reference-type="eqref" reference="ss1"}: $$\label{ss2} E'_{p,q}:=\Check{\mathrm{H}}^p(U, \mathcal{H}^s_q)\implies H^{\mathrm{BM}}_{q-p}(\mathfrak{M})=H^{\mathrm{BM}}_{q-p}(\mathcal{M}),$$ see the proof of Proposition [Proposition 79](#etalecomputationtopK){reference-type="ref" reference="etalecomputationtopK"}. **Proposition 82**. *We have $\mathcal{K}_1=\widetilde{\mathcal{H}}^s_1=0$. Thus the terms $E_{p,q}$ from [\[ss1\]](#ss1){reference-type="eqref" reference="ss1"} and $E'_{p,q}$ from [\[ss2\]](#ss2){reference-type="eqref" reference="ss2"} vanish for $q$ odd.* *Proof.* By [@PTtop Proposition 3.1], it suffices to check that $\widetilde{\mathcal{H}}^s_{1}=0$. It suffices to check that the stalks of $\widetilde{\mathcal{H}}^s_{1}$ over $y\in M$ are zero. We can define an analogous presheaf of spectra $\mathcal{H}^{\mathrm{an}}$ in the analytic topology, and $\mathcal{H}^{\mathrm{an}}_y\cong \mathcal{H}_y$ for any $y\in M$, which follows as in [@MR0559531 Chapter III, Theorem 3.12]. It thus suffices to check that $H^{\mathrm{BM}}_{\mathrm{odd}}(V)=0$ for a system of open sets $V\subset M$. By the local description from Subsection [8.2.2](#localdescription){reference-type="ref" reference="localdescription"}, we may assume that $V\subset P(\bm{d})$ is an open neighborhood of the origin, where $P(\bm{d})$ is the coarse space of $\bm{d}$-dimensional representations of the preprojective algebra of the Ext-quiver $Q^\circ_y$. Consider the action of $\mathbb{C}^*$ on spaces of representations of the double quiver of $Q^\circ_y$, which acts with weight one. It induces a scaling action on $P(\bm{d})$ which contracts it onto $0$. We can choose a system of open sets $0\in V\subset P(\bm{d})$ such that $V$ is homeomorphic to $P(\bm{d})$ and $\pi_{P}^{-1}(V)$ is homeomorphic to $\mathcal{P}(\bm{d})$. It then suffices to check that $H^{\mathrm{BM}}_{\mathrm{odd}}(\mathcal{P}(\bm{d}))=0$, which was proved by Davison in [@Dav Theorem A]. 
◻ Let $i\in\mathbb{Z}$. Consider the Chern character for the quotient stack $\mathcal{M}$: $$\mathrm{ch}\colon G^{\mathrm{top}}_i(\mathcal{M})\to \widetilde{H}^{\mathrm{BM}}_i(\mathcal{M}),$$ see [@PTtop Subsection 3.1]. There are analogous Chern characters for $\mathcal{Z}$ with $(e\colon \mathcal{Z}\to \mathcal{M})\in\mathcal{U}$. By Proposition [Proposition 82](#prop78){reference-type="ref" reference="prop78"}, there are compatible spectral sequences with the following terms: $$\label{diagsect8} \begin{tikzcd}\Check{\mathrm{H}}^{2q-i}(U, \mathcal{T}^s_{2q})\arrow[r, Rightarrow]\arrow[d, hook]& K_{i}^{\mathrm{top}}(\mathbb{T}(M))\arrow[d, hook]\\ \Check{\mathrm{H}}^{2q-i}(U, \mathcal{K}^s_{2q})\arrow[r, Rightarrow]\arrow[d, "\mathrm{ch}"]& G_{i}^{\mathrm{top}}(\mathcal{M})\arrow[d, "\mathrm{ch}"]\\ \Check{\mathrm{H}}^{2q-i}(U, \widetilde{\mathcal{H}}^s_{2q})\arrow[r, Rightarrow]& \widetilde{H}^{\mathrm{BM}}_{i}(\mathcal{M}). \end{tikzcd}$$ Let $F_{\bullet} \mathcal{K}_{2q}^s\subset \mathcal{K}_{2q}^s$ and $F_{\bullet} \mathcal{T}_{2q}^s\subset \mathcal{T}_{2q}^s$ be the increasing filtrations defined by $$\begin{aligned} F_j \mathcal{K}_{2q}^s =\operatorname{ch}^{-1}(\mathcal{H}_{\leqslant 2q+2j}^s), \ F_j \mathcal{T}_{2q}^s=\mathcal{T}_{2q}^s \cap F_j \mathcal{K}_{2q}^s.\end{aligned}$$ We denote by $\mathrm{gr}_{\bullet}$ the associated graded with respect to the above filtrations. 
We obtain compatible spectral sequences: $$\label{diagss3} \begin{tikzcd}\Check{\mathrm{H}}^{2q-i}(U, \mathrm{gr}_{j}\mathcal{T}^s_{2q})\arrow[r, Rightarrow]\arrow[d, hook, "\alpha"]& \mathrm{gr}_{j} K_{i}^{\mathrm{top}}(\mathbb{T}(M))\arrow[d, hook]\\ \Check{\mathrm{H}}^{2q-i}(U, \mathrm{gr}_{j}\mathcal{K}^s_{2q})\arrow[r, Rightarrow]\arrow[d, "\mathrm{c}"]& \mathrm{gr}_{j}G_{i}^{\mathrm{top}}(\mathcal{M})\arrow[d, "\mathrm{c}"]\\ \Check{\mathrm{H}}^{2q-i}(U, \mathcal{H}^s_{2q+2j})\arrow[r, Rightarrow, "d"]& H^{\mathrm{BM}}_{i+2j}(\mathcal{M}), \end{tikzcd}$$ where the cycle maps $\mathrm{c}$ are isomorphisms by [@PTtop Proposition 3.1]. **Proposition 83**. *The image of the map $d\mathrm{c}\alpha$ is $H^{-i-2j}(M, \mathcal{BPS}_{v,w})$.* *Proof.* By [@PTtop Theorem 9.2], the image of $\mathrm{c}\alpha$ is the bi-graded complex with terms $E^\circ_{p,q}:=\Check{\mathrm{H}}^{2q-i}(U, \mathcal{H}^{-2q-2j}(\mathcal{BPS}_{v,w}))$. The restriction of $d$ to $E^\circ_{p,q}$ corresponds to the Čech spectral sequence for $\mathcal{BPS}_{v,w}$: $$d\colon E^{\circ}_{p,q}\implies H^{-i-2j}(M, \mathcal{BPS}_{v,w}).$$ The conclusion then follows. ◻ We obtain the following: **Theorem 84**. *For any $i\in\mathbb{Z}$, there is an isomorphism $$\mathrm{c}\colon \mathrm{gr}_jK^{\mathrm{top}}_i(\mathbb{T}^{\sigma}_S(v)^{\mathrm{red}}_w)\xrightarrow{\sim} H^{-i-2j}\left(M^{\sigma}_S(v), \mathcal{BPS}_{v,w}\right).$$* *Proof.* The conclusion follows from the diagram [\[diagss3\]](#diagss3){reference-type="eqref" reference="diagss3"} and Proposition [Proposition 83](#prop79){reference-type="ref" reference="prop79"}. 
◻ *Proof of Theorem [Theorem 76](#thmKtop){reference-type="ref" reference="thmKtop"}.* By Theorem [Theorem 84](#thmKtopgraded){reference-type="ref" reference="thmKtopgraded"} and Lemma [Lemma 78](#lem:topisom){reference-type="ref" reference="lem:topisom"}, it suffices to check that there is a non-canonical isomorphism $K^{\mathrm{top}}_i(\mathbb{T}^{\sigma}_S(v)^{\mathrm{red}}_w)\cong \bigoplus_{j\in\mathbb{Z}}\mathrm{gr}_j K^{\mathrm{top}}_i(\mathbb{T}^{\sigma}_S(v)^{\mathrm{red}}_w)$. It suffices to check that the Chern character $$\mathrm{ch}\colon K^{\mathrm{top}}_i(\mathbb{T}^{\sigma}_S(v)^{\mathrm{red}}_w)\hookrightarrow G^{\mathrm{top}}_i(\mathcal{M}^\sigma_S(v))\to \widetilde{H}^{\mathrm{BM}}_i(\mathcal{M}^\sigma_S(v))$$ is injective. By the diagram [\[diagsect8\]](#diagsect8){reference-type="eqref" reference="diagsect8"}, it suffices to check that the following Chern character is injective $$\mathrm{ch}\colon K^{\mathrm{top}}_i(\mathbb{T}(R)^{\mathrm{red}})\hookrightarrow G^{\mathrm{top}}_i(\mathcal{R})\to \widetilde{H}^{\mathrm{BM}}_i(\mathcal{R}),$$ where $(R\xrightarrow{e} M)\in U$. This was proved in [@PTtop Proposition 9.9]. ◻ [Tudor Pădurariu: Max Planck Institute for Mathematics, Vivatsgasse 7 Bonn 53111, Germany.]{.smallcaps}\ *E-mail address:* `tpadurariu@mpim-bonn.mpg.de`\ [Yukinobu Toda: Kavli Institute for the Physics and Mathematics of the Universe (WPI), University of Tokyo, 5-1-5 Kashiwanoha, Kashiwa, 277-8583, Japan.]{.smallcaps}\ *E-mail address:* `yukinobu.toda@ipmu.jp`\
--- abstract: | For a $\mathbb{Z}^{d}$-topological dynamical system $(X, T, \mathbb{Z}^d)$, an *isomorphism* is a self-homeomorphism $\phi : X\to X$ such that for some matrix $M\in GL(d,\mathbb{Z})$ and any ${\bm n}\in \mathbb{Z}^{d}$, $\phi\circ T^{{\bm n}}=T^{M{\bm n}}\circ \phi$, where $T^{\bm n}$ denotes the self-homeomorphism of $X$ given by the action of ${\bm n}\in {\mathbb Z}^d$. The collection of all the isomorphisms forms a group that is the normalizer of the set of transformations $T^{\bm n}$. In the one-dimensional case, isomorphisms correspond to the notion of *flip conjugacy* of dynamical systems and for this reason are also called *reversing symmetries*. These isomorphisms are not well understood even for classical systems. We present a description of them for odometers and more precisely for constant-base $\mathbb{Z}^{2}$-odometers, which is surprisingly not simple. We deduce a complete description of the isomorphisms of some minimal $\mathbb{Z}^{d}$-substitutive subshifts. This enables us to provide the first known example of a minimal zero-entropy subshift with the largest possible normalizer group. address: - Department of Mathematics, Université de Liège, Allée de la découverte 12 (B37), 4000 Liège, Belgium - LAMFA, CNRS, UMR 7352, Université de Picardie Jules Verne, 80 000 Amiens, France author: - Christopher Cabezas - Samuel Petite bibliography: - sample.bib title: "**Large normalizers of $\mathbb{Z}^{d}$-odometers systems and realization on substitutive subshifts**" --- # Introduction This article concerns $\mathbb{Z}^d$-odometers and their symbolic extensions. More specifically, for such a dynamical system $(X, T, \mathbb{Z}^d)$, we study their *isomorphisms*, which are self-homeomorphisms $\phi : X\to X$ such that for some linear transformation $M\in GL(d,\mathbb{Z})$ and any ${\bm n}\in \mathbb{Z}^{d}$, $\phi\circ T^{{\bm n}}=T^{M{\bm n}}\circ \phi$, where $T^{\bm n}$ denotes the self-homeomorphism of $X$ given by the $\mathbb{Z}^d$ action $T$. 
Isomorphisms associated with the linear transformation $M= \textrm{Id}$ are, in essence, nothing more than self-conjugacies of the system. They are commonly referred to as *automorphisms* of dynamical systems. Therefore, an isomorphism can be thought of as a self-conjugacy up to a $\textrm{GL}(d, \mathbb{Z})$-transformation. In the one-dimensional case ($d=1$), isomorphisms correspond to the notion of *flip conjugacy* in dynamical systems [@bezuglyi2008fullgroups]. Because of this, they are also known as *reversing symmetries* (see [@goodson1999inverse; @baake2006structure]). The automorphisms can be algebraically defined as elements of the centralizer of the group $\left\langle T \right \rangle$, considered as a subgroup of the group $\mathop{\mathrm{Homeo}}(X)$ of all self-homeomorphisms of $X$. From this algebraic perspective, isomorphisms can be seen as elements of the normalizer group of $\left\langle T \right \rangle$ within the group $\mathop{\mathrm{Homeo}}(X)$. Consequently, the automorphism group is a normal subgroup of the normalizer. The quotient is isomorphic to a linear subgroup of $\textrm{GL}(d, \mathbb{Z})$, called the *linear representation group*, obtained by identifying an isomorphism $\phi$ with its matrix $M$. The automorphism group is never trivial (it always contains the transformations $T^{\bm n}$, $\bm n \in \mathbb{Z}^d$), but generally, the existence of an isomorphism for a given $M\in GL(d,\mathbb{Z})$ remains an open problem, which is significant in the context of higher-rank actions. There are few general results on these groups, so natural questions about the algebraic properties of the group of isomorphisms (is it finite? is it amenable? what are its subgroups? etc.), about its relations with the dynamics, or about a description of its actions remain wide open. In this article, we explore these issues using classic and widely studied examples of odometers and substitutive Toeplitz subshifts. 
Even though the dynamics of odometers is among the best understood, a partial description of their isomorphisms has only been initiated in [@giordano2019zdodometer] for $d=2$ and pursued for higher rank in [@merenkov2022odometers; @sabitova2022; @sabitova2022number]. These works were essentially motivated by the fact that isomorphisms define specific transformations of orbit equivalence. We complete these studies, but mainly for dimension $d=2$, by describing explicitly the group structure of their normalizers. This leads to a complete classification of them (Theorem [Theorem 19](#GeneralTheoremDifferentCasesNormalizer){reference-type="ref" reference="GeneralTheoremDifferentCasesNormalizer"}) for constant-base odometer systems, i.e., those whose base is given by the iteration of the same expansive matrix. The classification depends on arithmetical properties of the expansive matrix. In addition, we provide computable arithmetic conditions for deciding whether a given matrix arises as an isomorphism of a constant-base odometer. Our techniques use elementary arithmetic, but extending our results to higher ranks would require sophisticated notions of number theory. Interest in the study of the normalizer group for multidimensional subshifts has increased in recent years. These objects represent geometrical and combinatorial symmetries, such as rotations and reflections, that a particular subshift possesses. Similar studies were initiated in the framework of tilings and Delone sets (see e.g., [@Robinson1996]), where the existence of some point-set symmetries is at the heart of the discovery of aperiodicity of quasicrystals [@shechtman1984metallic]. For instance, the Penrose tiling serves as a model with ten-fold symmetry [@Penrose1979]. However, few results exist, and there is no systematic description of the analogous normalizer group for $\mathbb{R}^d$-flows. 
For a self-similar tiling, the linear representation group is known to be countable and isomorphic to its mapping class group [@Kwapisz2011]. Recently, classical examples of multidimensional subshifts have been considered to study their normalizers [@baake2019number; @baake2018reversing; @bustos2021admissible; @cabezas2021homomorphisms]. These works have yielded sufficient conditions (of a combinatorial nature in [@bustos2021admissible] and a geometrical nature in [@cabezas2021homomorphisms]) on a substitutive subshift so that the quotient of the normalizer by the automorphism group is finite, and each group is virtually $\mathbb{Z}^d$. The article [@baake2019number] concerns different classes of non-minimal subshifts with positive entropy having a normalizer not virtually isomorphic to its automorphism group. However, the question remains open whether an infinite linear representation group is also possible for minimal, deterministic (zero-entropy) subshifts. In this article, we provide a positive answer to this problem by completely describing the normalizer group of a family of minimal subshifts (Theorem [Theorem 25](#prop:MainresultSection4){reference-type="ref" reference="prop:MainresultSection4"}). The examples are substitutive Toeplitz subshifts that can be presented in an effective way, in the algorithmic sense. These subshifts are highly deterministic, meaning they have zero entropy. More precisely, the rate of growth of their complexity is polynomial of degree equal to the dimension $d$. In particular, we exhibit examples where the linear representation group is the largest one, i.e., equal to $\textrm{GL}(d, \mathbb{Z})$. Together with the results of [@baake2019number], this shows that the normalizer is not restricted by the complexity. More surprisingly, low complexity is not enough to ensure the amenability of the normalizer group in dimension $d>1$. 
This phenomenon differs from what happens in dimension one [@CovenQuasYassawi; @CyrKra2015; @CyrKra2016; @CyrKra2020; @donoso2016lowcomplexity; @DonosoToeplitz2017], where the amenability of the automorphism group (which has finite index in the normalizer group) is shown for a large class of zero-entropy subshifts and any Toeplitz subshift. This article is organized as follows: Section [2](#sec:Basics){reference-type="ref" reference="sec:Basics"} is devoted to the background on topological and symbolic dynamics. In particular, relations between the normalizer of a system and that of its maximal equicontinuous factor are exhibited. The next section concerns the study of the normalizers of odometers in order to describe them (Theorem [Theorem 19](#GeneralTheoremDifferentCasesNormalizer){reference-type="ref" reference="GeneralTheoremDifferentCasesNormalizer"}). Finally, we construct examples of multidimensional subshifts with an odometer as their maximal equicontinuous factor in Section [5](#sec:NormalizerSubshiftEx){reference-type="ref" reference="sec:NormalizerSubshiftEx"}. We characterize their infinite linear representation groups (Theorem [Theorem 25](#prop:MainresultSection4){reference-type="ref" reference="prop:MainresultSection4"}) by using our study of normalizers of odometers and the relations with their extensions. # Definitions and basic properties {#sec:Basics} ## Topological dynamical systems {#SectionTopologicalDynamicalSystem} We recall that a *topological dynamical system* $(X, T, \mathbb{Z}^d)$ is given by a continuous left-action $T \colon \mathbb{Z}^d \times X \to X$ on a compact metric space $X$. This provides a family of self-homeomorphisms of $X$: $\{ T^{\bm n}\colon {\bm n} \in \mathbb{Z}^d\}$, also denoted by $\langle T \rangle$, such that $T^{\bm m}\circ T^{\bm n} = T^{\bm m+\bm n}$ for any ${\bm m,\bm n} \in \mathbb{Z}^d$. In particular, the homeomorphisms $T^{\bm n}$ commute with each other. 
The *orbit* of a point $x \in X$ is the set $\mathcal{O}(x,T)=\{T^{\bm n}(x)\colon {\bm n}\in \mathbb{Z}^d\}$. We will be mainly concerned with topological dynamical systems that are *minimal*, i.e., where any orbit is dense. In this case, there is no topological way to classify the orbits. An important class of topological dynamical systems is that of equicontinuous systems. A topological dynamical system $(X,T,\mathbb{Z}^{d})$ is said to be *equicontinuous* if the set of maps $\{T^{{\bm n}}\colon {\bm n}\in \mathbb{Z}^{d}\}$ forms an equicontinuous family of homeomorphisms. The equicontinuous systems are, in some sense, the simplest dynamical systems. In the following, we define the endomorphisms and isomorphisms of a topological dynamical system, which are the central objects of study of this article. Isomorphisms represent internal symmetries of a given system that do not commute with the action, such as rotations and reflections. **Definition 1**. Let $(X,T,\mathbb{Z}^{d})$, $(Y,S,\mathbb{Z}^{d})$ be two topological dynamical systems and ${M\in \textrm{GL}(d, \mathbb{Z})}$. An $M$-*epimorphism* is a continuous surjective map ${\phi: X\to Y}$ such that, for any ${\bm n}\in \mathbb{Z}^{d}$, $\phi\circ T^{{\bm n}}= S^{M{\bm n}}\circ \phi$. When $(X,T,\mathbb{Z}^{d})=(Y,S,\mathbb{Z}^{d})$, it is called an $M$-*endomorphism*. Moreover, if $\phi$ is invertible, it is called an $M$-*isomorphism*. We simply call a $\textrm{GL}(d, \mathbb{Z})$-*endomorphism* (or $\textrm{GL}(d, \mathbb{Z})$-*isomorphism*) any $M$-endomorphism (resp. isomorphism) for some $M \in \textrm{GL}(d, \mathbb{Z})$. In other terms, the $\textrm{GL}(d, \mathbb{Z})$-isomorphisms are conjugacies of $\mathbb{Z}^{d}$-actions, up to a $\textrm{GL}(d, \mathbb{Z})$-transformation, i.e., an orbit equivalence with a constant orbit cocycle. 
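As a toy illustration of Definition 1 (our own example, not from the article; the system below is finite, hence far from the minimal aperiodic setting studied later), take $X=(\mathbb{Z}/N\mathbb{Z})^2$ with the translation action $T^{\bm n}(x)=x+{\bm n} \bmod N$. Then $\phi(x)=Mx \bmod N$ satisfies $\phi\circ T^{\bm n}=T^{M\bm n}\circ\phi$ for any $M\in\textrm{GL}(2,\mathbb{Z})$, and a short script can verify the commutation relation pointwise:

```python
import itertools

N = 5                      # side of the finite torus X = (Z/NZ)^2
X = list(itertools.product(range(N), repeat=2))

def T(n, x):
    """Translation action of Z^2: T^n(x) = x + n (mod N)."""
    return ((x[0] + n[0]) % N, (x[1] + n[1]) % N)

def apply(M, v):
    """Matrix-vector product over Z (M given as a pair of rows)."""
    return (M[0][0] * v[0] + M[0][1] * v[1], M[1][0] * v[0] + M[1][1] * v[1])

def phi(M, x):
    """Candidate M-isomorphism: x -> Mx (mod N)."""
    y = apply(M, x)
    return (y[0] % N, y[1] % N)

M = ((0, -1), (1, 0))      # rotation by 90 degrees, an element of GL(2, Z)

# phi o T^n = T^{Mn} o phi on all of X, for a range of n in Z^2
for n in itertools.product(range(-3, 4), repeat=2):
    assert all(phi(M, T(n, x)) == T(apply(M, n), phi(M, x)) for x in X)

# phi is a bijection (det M = 1 is invertible mod N), so phi is an M-isomorphism
assert len({phi(M, x) for x in X}) == N * N
```

Since $\det M=\pm 1$ is invertible modulo $N$, the map $\phi$ is always bijective here; on genuine (infinite, minimal) odometers, deciding which $M$ arise is exactly the arithmetic problem studied in this article.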
In the special case where $M$ is the identity matrix ${\rm Id}$, the ${\rm Id}$-endomorphisms and ${\rm Id}$-isomorphisms are called *endomorphisms* (or *self-factor maps*) and *automorphisms* (or *self-conjugacies*), respectively. When the action is *aperiodic*, i.e., the stabilizer of any point is trivial ($T^{\bm n} x= x$ only for ${\bm n} ={\bm 0}$), the matrix $M$ associated with a $\textrm{GL}(d, \mathbb{Z})$-endomorphism $\Phi$ is unique: if $\Phi$ is both an $M_1$- and an $M_2$-endomorphism, then $M_1=M_2$. In the following, we fix some notation that we will use throughout this article: - $N_{M}(X,T)$, with $M \in \textrm{GL}(d, \mathbb{Z})$, denotes the set of all $M$-endomorphisms of $(X,T)$; - $N(X,T)$ denotes the set of all the $\textrm{GL}(d, \mathbb{Z})$-endomorphisms of the dynamical system $(X,T, \mathbb{Z}^d)$, i.e. $$N(X,T)=\bigcup\limits_{M\in \textrm{GL}(d, \mathbb{Z})}N_M(X,T).$$ The set of $\textrm{GL}(d, \mathbb{Z})$-isomorphisms is denoted by $N^*(X,T)$. In algebraic terms, this set is a group and, when the action is aperiodic, corresponds to the normalizer of the group action $\left\langle T\right\rangle$ in the group of self-homeomorphisms $\mathop{\mathrm{Homeo}}(X)$ of $X$, that is, the set of homeomorphisms $\phi \colon X \to X$ such that $\phi \circ T^{\bm n} \circ \phi^{-1} \in \{T^{\bm m}: {\bm m} \in \mathbb{Z}^d\}$ for any ${\bm n}\in \mathbb{Z}^d$; - $\mathop{\mathrm{End}}(X,T)$ and $\mathop{\mathrm{Aut}}(X,T)$ denote, respectively, the set of all endomorphisms and automorphisms of $(X, T, \mathbb{Z}^d)$. - We define the *linear representation semigroup* $\vec{N}(X,T)$ as the semigroup of all matrices $M\in \textrm{GL}(d, \mathbb{Z})$ with ${N_{M}(X,T)\neq \emptyset}$. Notice that for $\phi\in N_{M_{1}}(X,T )$ and $\psi\in N_{M_{2}}(X,T)$, the composition $\phi\circ \psi$ belongs to $N_{M_{1}M_{2}}(X,T)$.
Moreover, if $\phi$ is an $M$-isomorphism, then its inverse is an $M^{-1}$-isomorphism; in particular, the sets $N_{M}(X,T)$ are not semigroups (except when $M$ is the identity matrix), since the composition of two $M$-endomorphisms is an $M^{2}$-endomorphism. Concerning the linear representation semigroup $\vec{N}(X,T)$, it is straightforward to check that the isomorphism class of $\vec{N}(X,T)$ is invariant under conjugacy. However, it is not necessarily a group, since the existence of an $M$-endomorphism associated with a matrix $M\in \textrm{GL}(d, \mathbb{Z})$ does not necessarily imply the existence of an $M^{-1}$-endomorphism. Nevertheless, we have the following result when a dynamical system is coalescent. Recall that a topological dynamical system $(X,T,\mathbb{Z}^{d})$ is *coalescent* when any endomorphism of $(X,T,\mathbb{Z}^{d})$ is invertible. **Proposition 2**. *Let $(X,T,\mathbb{Z}^{d})$ be a coalescent system. If the linear representation semigroup $\vec{N}(X,T)$ is a group, then any $\textrm{GL}(d, \mathbb{Z})$-endomorphism in $N(X,T)$ is invertible, i.e., $N(X,T) = N^{*}(X,T)$.* Equicontinuous systems are examples of coalescent systems [@auslander1988minimal]. *Proof.* Let $\phi,\psi$ be respectively an $M$- and an $M^{-1}$-endomorphism of $(X,T,\mathbb{Z}^{d})$. Then $\phi\circ\psi$ is an endomorphism of $(X,T,\mathbb{Z}^{d})$. Since $(X,T,\mathbb{Z}^{d})$ is coalescent, $\phi\circ \psi$ is invertible. It follows that $\phi$ and $\psi$ are invertible maps. ◻ A particular case is when the linear representation semigroup of a coalescent system is finite. In this case, it is always a group, so any $\textrm{GL}(d, \mathbb{Z})$-endomorphism is invertible. The groups $\left\langle T\right\rangle$ and $\mathop{\mathrm{Aut}}(X,T)$ are normal subgroups of $N^{*}(X,T)$ (the group of $\textrm{GL}(d, \mathbb{Z})$-isomorphisms).
More precisely, for aperiodic systems the following exact sequence holds: $$\begin{aligned} \label{ExactSequenceForNormalizer} 1 & \to & \mathop{\mathrm{Aut}}(X,T) \quad & \to \quad N^{*}(X,T) & \xrightarrow{j} \quad \vec{N^{*}}(X,T)\quad \to & \quad 1, \end{aligned}$$ where the first map is the canonical injection and $j(\phi)= M$ whenever $\phi \in N_M(X,T)$. A *factor map* $\pi: (X,T,\mathbb{Z}^{d})\to (Y,S,\mathbb{Z}^{d})$ between two topological dynamical systems $(X, T, \mathbb{Z}^d)$ and $(Y, S,\mathbb{Z}^{d})$ is a continuous onto map commuting with the action, i.e., $\pi \circ T^{\bm n}=S^{\bm n} \circ \pi$ for any ${\bm n} \in \mathbb{Z}^d$. The system $(Y, S,\mathbb{Z}^{d})$ is said to be a *factor* of $(X, T, \mathbb{Z}^d)$, and $(X, T, \mathbb{Z}^d)$ is an *extension* of $(Y, S,\mathbb{Z}^{d})$. If $|\pi^{-1}(\{y\})|\leq K<\infty$ for all $y\in Y$, we say that $\pi$ is *finite-to-1*. Sometimes there exists a $G_{\delta}$-dense subset $Y_{0}\subseteq Y$ such that every $y\in Y_{0}$ satisfies $|\pi^{-1}(\{y\})|=1$. In this case, the factor map $\pi$ is said to be *almost* *1-to-1*. We recall that when the system $(Y,S, \mathbb{Z}^d)$ is minimal, the existence of one point with only one preimage is enough to ensure that the map is almost 1-to-1. Every topological dynamical system has at least one equicontinuous factor, namely the trivial system reduced to one point.
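To fix ideas, here is a classical concrete example (not specific to this article): on the full shift $\{0,1\}^{\mathbb{Z}}$, the map $\pi(x)_{n}=x_{n}+x_{n+1} \pmod 2$ is a factor map onto $\{0,1\}^{\mathbb{Z}}$ which is finite-to-1 (exactly 2-to-1: every fiber has two points) but not almost 1-to-1. A Python sketch:

```python
# A classical example (not from this article): on the full shift {0,1}^Z,
# pi(x)_n = x_n + x_{n+1} (mod 2) commutes with the shift and is onto, and
# every point has exactly 2 preimages, so pi is finite-to-1 but not
# almost 1-to-1.

def pi(x):
    """The induced map on configurations (given as functions Z -> {0,1})."""
    return lambda n: (x(n) + x(n + 1)) % 2

def shift(x, m):
    """(S^m x)_n = x_{m + n}."""
    return lambda n: x(m + n)

x = lambda n: (n * n) % 2          # an arbitrary test configuration
window = range(-10, 10)

# pi commutes with the shift: pi(S^m x) = S^m(pi(x)).
assert all(pi(shift(x, 3))(n) == shift(pi(x), 3)(n) for n in window)

def preimages(word):
    """The two preimages of a finite word y: pick x_0 freely, then
    x_{n+1} = y_n + x_n (mod 2) is forced."""
    out = []
    for x0 in (0, 1):
        xs = [x0]
        for y in word:
            xs.append((xs[-1] + y) % 2)
        out.append(tuple(xs))
    return out

pre = preimages((1, 0, 1))
assert len(set(pre)) == 2
# Each candidate preimage indeed maps back onto the word.
for xs in pre:
    assert tuple((xs[i] + xs[i + 1]) % 2 for i in range(3)) == (1, 0, 1)
```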
Furthermore, any topological dynamical system admits a *maximal equicontinuous factor*, i.e., a factor $\pi_{eq}:(X,T,\mathbb{Z}^{d})\to(X_{eq},T_{eq},\mathbb{Z}^{d})$ where $(X_{eq},T_{eq},\mathbb{Z}^{d})$ is an equicontinuous system, satisfying the following universal property: for any other equicontinuous factor $\pi:(X,T,\mathbb{Z}^{d})\to (Y,S,\mathbb{Z}^{d})$ there exists a factor map $\phi:(X_{eq},T_{eq},\mathbb{Z}^{d})\to (Y,S,\mathbb{Z}^{d})$ such that $\pi=\phi\circ \pi_{eq}$ [@auslander1988minimal]. Also, in the particular case where $\pi:(X,T,\mathbb{Z}^{d})\to (Y,S,\mathbb{Z}^{d})$ is an almost 1-to-1 factor onto an equicontinuous system $(Y,S,\mathbb{Z}^{d})$, this factor is the maximal equicontinuous factor of $(X,T,\mathbb{Z}^{d})$. Typical examples of this situation are the odometer systems (see the next section), which are almost 1-to-1 factors of particular symbolic systems [@cortez2006toeplitz; @cortez2008godometers; @downarowicz2005survey]. A factor map $\pi:(X,T,\mathbb{Z}^{d})\to(Y,S,\mathbb{Z}^{d})$ is *compatible* if any endomorphism $\phi\in \mathop{\mathrm{End}}(X,T)$ preserves the $\pi$-fibers, i.e., $\pi(\phi(x))= \pi(\phi(y))$ for any $x,y\in X$ such that $\pi(x)= \pi(y)$. In the same spirit, we say that a factor $\pi$ is *compatible with $\textrm{GL}(d, \mathbb{Z})$-endomorphisms* if for any $\textrm{GL}(d, \mathbb{Z})$-endomorphism $\phi\in N(X,T)$, $\pi(\phi(x))= \pi(\phi(y))$ for any $x,y\in X$ such that $\pi(x)= \pi(y)$. The compatibility property allows us to relate the $\textrm{GL}(d, \mathbb{Z})$-endomorphisms of a system with those of its factors. The next lemma follows the ideas of [@CovenQuasYassawi Theorem 3.3] but in a broader context. **Lemma 3**. *Let $(X,T,\mathbb{Z}^{d})$, $(Y,S,\mathbb{Z}^{d})$ be two minimal systems, such that $\pi:(X,T,\mathbb{Z}^{d})\to (Y,S,\mathbb{Z}^{d})$ is a compatible factor. Then, there is a semigroup homomorphism $\hat{\pi}:\mathop{\mathrm{End}}(X,T)\to\mathop{\mathrm{End}}(Y,S)$ such that* 1.
*[\[CompatibilityPropertyEnumerate1\]]{#CompatibilityPropertyEnumerate1 label="CompatibilityPropertyEnumerate1"} $\hat{\pi}(\phi)(\pi(x))=\pi(\phi(x))$ for all $\phi\in \mathop{\mathrm{End}}(X,T)$ and $x\in X$.* 2. *[\[CompatibilityPropertyEnumerate2\]]{#CompatibilityPropertyEnumerate2 label="CompatibilityPropertyEnumerate2"} $\hat{\pi}(\mathop{\mathrm{Aut}}(X,T))\leqslant \mathop{\mathrm{Aut}}(Y,S)$.* 3. *[\[CompatibilityPropertyEnumerate3\]]{#CompatibilityPropertyEnumerate3 label="CompatibilityPropertyEnumerate3"} For all $\psi\in \mathop{\mathrm{End}}(Y,S)$, $|\hat{\pi}^{-1}(\{\psi\})|\leq \min\limits_{y\in Y}|\pi^{-1}(\{y\})|$.* *Moreover, if $\pi$ is compatible with $\textrm{GL}(d, \mathbb{Z})$-endomorphisms, there is an extension of $\hat{\pi}:N(X,T)\to N(Y,S)$ defined as in [\[CompatibilityPropertyEnumerate1\]](#CompatibilityPropertyEnumerate1){reference-type="ref" reference="CompatibilityPropertyEnumerate1"} for all $\phi\in N(X,T)$, satisfying the following properties* 1. *[\[CompatibilityPropertyEnumeratei\]]{#CompatibilityPropertyEnumeratei label="CompatibilityPropertyEnumeratei"} $\hat{\pi}(N_{M}(X,T))\subseteq N_{M}(Y,S)$, for any $M\in GL(d,\mathbb{Z})$.* 2. *[\[CompatibilityPropertyEnumerateii\]]{#CompatibilityPropertyEnumerateii label="CompatibilityPropertyEnumerateii"} For any $M \in \textrm{GL}(d, \mathbb{Z})$, the map $\hat{\pi}:N_{M}(X,T)\to N_{M}(Y,S)$ is at most $\min\limits_{y\in Y}|\pi^{-1}(\{y\})|$-to-1.* 3. *[\[CompatibilityPropertyEnumerateiii\]]{#CompatibilityPropertyEnumerateiii label="CompatibilityPropertyEnumerateiii"} For any $\phi\in \hat{\pi}^{-1} (N^*(Y,S))$, the cardinality of the $\pi$-fibers is nondecreasing under the $\textrm{GL}(d, \mathbb{Z})$-isomorphism $\hat{\pi}(\phi)^{-1}$. In other words, for any integer $n \geq 1$, the map $\hat{\pi}(\phi)$ satisfies $$\{y\in Y\colon |\pi^{-1}(\{y\})|\ge n\}\subset \hat{\pi}(\phi)\left(\{y\in Y\colon |\pi^{-1}(\{y\})|\ge n\}\right).$$* *Proof.* Let $\phi\in \mathop{\mathrm{End}}(X,T)$.
By compatibility, the map $\hat{\pi}(\phi):Y\to Y$ given by $\hat{\pi}(\phi)(\pi(x))=\pi(\phi(x))$ is well defined, and it is surjective by minimality of $(Y,S,\mathbb{Z}^{d})$. So $\hat{\pi}(\phi)$ is an endomorphism of $(Y,S,\mathbb{Z}^{d})$. Moreover, if $\phi$ is an automorphism of $(X,T,\mathbb{Z}^{d})$, then $\hat{\pi}(\phi)$ is invertible. Indeed, $\hat{\pi}(\phi)\circ \hat{\pi}(\phi^{-1})\circ \pi=\pi\circ\phi\circ \phi^{-1}=\pi$, so we conclude that $\hat{\pi}(\phi)\circ \hat{\pi}(\phi^{-1})=\textrm{id}_{Y}$. To prove [\[CompatibilityPropertyEnumerate3\]](#CompatibilityPropertyEnumerate3){reference-type="ref" reference="CompatibilityPropertyEnumerate3"}, fix any $\psi\in \mathop{\mathrm{End}}(Y,S)$ and suppose that $\min\limits_{y\in Y}|\pi^{-1}(\{y\})|=c<\infty$ (if not, then there is nothing to prove). Let $x_{0}\in X$ and $y_{0}\in Y$ be such that $|\pi^{-1}(\{y_{0}\})|=c$, and $y_{0}=\psi(\pi(x_{0}))$. Assume there exist $c+1$ pairwise distinct endomorphisms $\phi_{0},\ldots,\phi_{c}$ of $(X,T,\mathbb{Z}^{d})$ in $\hat{\pi}^{-1}(\{\psi\})$. The compatibility then implies that $y_{0}=\psi(\pi(x_{0}))=\pi(\phi_{0}(x_{0}))=\cdots=\pi(\phi_{c}(x_{0}))$. Since $|\pi^{-1}(\{y_{0}\})|=c$, by the pigeonhole principle there must exist two different indices $0\leq i<j\leq c$ such that $\phi_{i}(x_{0})=\phi_{j}(x_{0})$. The minimality of $(X,T,\mathbb{Z}^{d})$ then gives that $\phi_{i}=\phi_{j}$, a contradiction. The proofs concerning the items [\[CompatibilityPropertyEnumeratei\]](#CompatibilityPropertyEnumeratei){reference-type="ref" reference="CompatibilityPropertyEnumeratei"} and [\[CompatibilityPropertyEnumerateii\]](#CompatibilityPropertyEnumerateii){reference-type="ref" reference="CompatibilityPropertyEnumerateii"} on $\textrm{GL}(d, \mathbb{Z})$-endomorphisms use similar arguments and are left to the reader.
To prove [\[CompatibilityPropertyEnumerateiii\]](#CompatibilityPropertyEnumerateiii){reference-type="ref" reference="CompatibilityPropertyEnumerateiii"}, we consider any $y \in Y$ with at least $n$ pairwise distinct preimages $x_1, \ldots, x_n \in X$. Since $\phi$ is onto, there are $x'_1, \ldots, x'_n \in X$ (pairwise distinct, since the $x_i$ are) such that $\phi(x'_i)= x_i$. Notice that $\hat{\pi}(\phi) (\pi(x'_i)) = \pi(\phi(x'_i))= \pi(x_i) =y$. It follows that $\hat{\pi}(\phi) (\pi(x'_i)) = \hat{\pi}(\phi) (\pi(x'_j))$ for any indices $i,j = 1, \ldots, n$. Since $\hat{\pi}(\phi)$ is invertible, all the elements $x'_i$ belong to the same $\pi$-fiber $\pi^{-1}(\{z\})$ for some $z \in Y$. Thus $z$ admits at least $n$ preimages under $\pi$ and satisfies $\hat{\pi}(\phi)(z) = y$. The claim follows. ◻ It is known that factor maps between equicontinuous systems are compatible [@auslander1988minimal], but as we will see in the next section, they are not necessarily compatible with $\textrm{GL}(d, \mathbb{Z})$-endomorphisms (see [Remark 21](#rem:compatibilityCounterExample){reference-type="ref" reference="rem:compatibilityCounterExample"}). Nevertheless, the maximal equicontinuous factor is an example of a factor compatible with $\textrm{GL}(d, \mathbb{Z})$-endomorphisms, as proved in [@baake2018reversing Theorem 5 and Corollary 3]. This can also be seen by using the universal property of the maximal equicontinuous factor. **Lemma 4**. *[@baake2018reversing Corollary 3][\[MaximalEquicontinuousFactorCompatibleWithHomomorphisms\]]{#MaximalEquicontinuousFactorCompatibleWithHomomorphisms label="MaximalEquicontinuousFactorCompatibleWithHomomorphisms"} Let $(X,T,\mathbb{Z}^{d})$ be a minimal topological dynamical system and let ${\pi_{eq}:(X,T,\mathbb{Z}^{d})\to (X_{eq},T_{eq},\mathbb{Z}^{d})}$ denote its maximal equicontinuous factor.
Then $\pi_{eq}$ is compatible with $\textrm{GL}(d, \mathbb{Z})$-endomorphisms.* [Lemma 3](#CompatibilityPropertyFactors){reference-type="ref" reference="CompatibilityPropertyFactors"} and [\[MaximalEquicontinuousFactorCompatibleWithHomomorphisms\]](#MaximalEquicontinuousFactorCompatibleWithHomomorphisms){reference-type="ref" reference="MaximalEquicontinuousFactorCompatibleWithHomomorphisms"} illustrate that, to study $\textrm{GL}(d, \mathbb{Z})$-endomorphisms, a first step is to understand the equicontinuous systems. This will be done in [2.2](#SectionOdoemterSystems){reference-type="ref" reference="SectionOdoemterSystems"} for the class of minimal equicontinuous Cantor systems: the odometer systems. ## $\mathbb{Z}^{d}$-Odometer systems {#SectionOdoemterSystems} Odometer systems are the equicontinuous minimal Cantor systems. Hence, they are the maximal equicontinuous factors of a large family of symbolic systems, such as Toeplitz flows and some substitutive subshifts. We refer to [@cortez2006toeplitz] for the study of odometer systems and $\mathbb{Z}^d$-Toeplitz sequences, whose notation we follow. In this section, we briefly recall the basic definitions of odometers. Subsequently, we describe the linear representation semigroup of odometer systems ([Lemma 6](#LemmaNessesaryConditionNormalizerOdometer){reference-type="ref" reference="LemmaNessesaryConditionNormalizerOdometer"}), which we then use to completely characterize it for constant-base $\mathbb{Z}^{2}$-odometer systems ([Theorem 19](#GeneralTheoremDifferentCasesNormalizer){reference-type="ref" reference="GeneralTheoremDifferentCasesNormalizer"}). It is worth noting that the exploration of $\textrm{GL}(2, \mathbb{Z})$-endomorphisms between $\mathbb{Z}^{2}$-odometer systems was initiated in [@giordano2019zdodometer] and later extended to higher dimensions in [@merenkov2022odometers].
### Basic definitions Let $Z_{0}\geqslant Z_{1}\geqslant \ldots \geqslant Z_{n}\geqslant Z_{n+1}\geqslant\ldots$ be a nested sequence of finite-index subgroups of $\mathbb{Z}^{d}$ such that $\bigcap\limits_{n\geq 0}Z_{n}=\{{\bm 0}\}$ and let $\alpha_{n}:\mathbb{Z}^{d}/Z_{n+1}\to \mathbb{Z}^{d}/Z_{n}$ be the function induced by the inclusion map. We consider the inverse limit of these groups $$\overleftarrow{\mathbb{Z}^{d}}_{(Z_{n})}=\lim\limits_{\leftarrow n} (\mathbb{Z}^{d}/Z_{n},\alpha_{n}),$$ i.e., $\overleftarrow{\mathbb{Z}^{d}}_{(Z_{n})}$ is the subset of the product $\prod\limits_{n\geq 0} \mathbb{Z}^{d}/Z_{n}$ consisting of the elements ${\overleftarrow{g}=({g}_{n})_{n\geq 0}}$ such that $\alpha_{n}({g}_{n+1})={g}_{n}\ (\text{mod}\ Z_{n})$ for all $n\geq 0$. The odometer $\overleftarrow{\mathbb{Z}^{d}}_{(Z_{n})}$ is a compact 0-dimensional topological group, whose topology is generated by the cylinder sets $$[{\bm a}]_{n}=\left\{\overleftarrow{g}\in \overleftarrow{\mathbb{Z}^{d}}_{(Z_{n})}: {{\bm g}}_{n}={{\bm a}}\right\},$$ with ${{\bm a}}\in \mathbb{Z}^{d}/Z_{n}$, and $n\geq 0$. Now, consider the group homomorphism $\kappa_{(Z_{n})}:\mathbb{Z}^{d}\to \prod\limits_{n\geq 0} \mathbb{Z}^{d}/Z_{n}$ defined for ${\bm m}\in \mathbb{Z}^{d}$ as $$\kappa_{(Z_{n})}({\bm m})=({\bm m}\ \text{mod}\ Z_{n})_{n\geq 0}.$$ The image of $\mathbb{Z}^{d}$ under $\kappa_{(Z_{n})}$ is dense in $\overleftarrow{\mathbb{Z}^{d}}_{(Z_{n})}$, allowing us to define the $\mathbb{Z}^{d}$-action ${\bm n}{\bm +}\overleftarrow{g}=\kappa_{(Z_{n})}({\bm n})+\overleftarrow{g}$, where ${\bm n}\in \mathbb{Z}^{d}$ and $\overleftarrow{g}\in \overleftarrow{\mathbb{Z}^{d}}_{(Z_{n})}$. This action is well-defined and continuous.
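The construction can be simulated directly. Below is a toy Python model (our own illustration, with the particular choice $Z_{n}=2^{n}\mathbb{Z}^{2}$ and truncation to finitely many levels, neither of which comes from the text):

```python
# A toy model of a Z^2-odometer with the choice Z_n = 2^n Z^2, truncated to
# LEVELS coordinates.  A point is the list of its residues g_n in
# Z^2 / 2^n Z^2, compatible under the projections alpha_n; the action
# n + g = kappa(n) + g is computed level by level.

LEVELS = 6

def kappa(n):
    """The canonical embedding kappa(n) of n in Z^2 (dense image)."""
    return [tuple(c % 2 ** k for c in n) for k in range(LEVELS)]

def act(n, g):
    """The action n + g, coordinate-wise at every level."""
    return [tuple((c + h) % 2 ** k for c, h in zip(n, g_k))
            for k, g_k in enumerate(g)]

def compatible(g):
    """Check alpha_k(g_{k+1}) = g_k, i.e. g_{k+1} = g_k (mod 2^k Z^2)."""
    return all(tuple(c % 2 ** k for c in g[k + 1]) == g[k]
               for k in range(LEVELS - 1))

assert compatible(kappa((3, 5)))
assert act((1, 2), kappa((3, 5))) == kappa((4, 7))
# kappa also handles negative entries through their residues:
assert act((1, 0), kappa((-1, 0))) == kappa((0, 0))

# A compatible point NOT in the image of kappa: the 2-adic expansion
# ...010101 in the first coordinate, which is no integer.
third = [(sum(4 ** i for i in range((k + 1) // 2)) % 2 ** k, 0)
         for k in range(LEVELS)]
assert compatible(third)
```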
The resulting topological dynamical system $(\overleftarrow{\mathbb{Z}^{d}}_{(Z_{n})}, {\bm +},\mathbb{Z}^{d})$ is called a $\mathbb{Z}^{d}$-*odometer system*; for the rest of this article, we will denote it simply by its phase space $\overleftarrow{\mathbb{Z}^{d}}_{(Z_{n})}$. Similarly, its set of automorphisms, endomorphisms, and linear representation semigroup will be denoted as $\mathop{\mathrm{Aut}}(\overleftarrow{\mathbb{Z}^{d}}_{(Z_{n})})$, $N(\overleftarrow{\mathbb{Z}^{d}}_{(Z_{n})})$, and $\vec{N}(\overleftarrow{\mathbb{Z}^{d}}_{(Z_{n})})$, respectively. Notice that the set of "return times" of the action to a cylinder set $[{\bm a}]_n$ is a finite-index subgroup of $\mathbb{Z}^d$; more precisely: $$\begin{aligned} \label{eq:ReturnTime} \forall \overleftarrow{g} \in [{\bm a}]_n, \quad \{{\bm n} \in \mathbb{Z}^d: {\bm n}{\bm +} \overleftarrow{g} \in [{\bm a}]_n\} = Z_n.\end{aligned}$$ This observation is the key to showing that an odometer system is a minimal equicontinuous system. It also shows that the action is aperiodic, since the intersection of all the $Z_n$ is trivial. We will be particularly concerned with a special case of odometers: namely, the *constant-base* ones. In these systems, $Z_{n}=L^{n}(\mathbb{Z}^{d})$ for each $n \ge 0$, where $L\in \mathcal{M}(d,\mathbb{Z})$ is an expansion matrix. Recall that an integer matrix $L\in \mathcal{M}(d,\mathbb{Z})$ is an *expansion* if the modulus of each of its eigenvalues is greater than 1. To simplify notation, we denote the constant-base odometer $\overleftarrow{\mathbb{Z}^{d}}_{(L^{n} (\mathbb{Z}^d))}$ as $\overleftarrow{\mathbb{Z}^{d}}_{(L^{n})}$. The next result characterizes the factor odometer systems of a fixed odometer system. **Lemma 5**. *[@cortez2006toeplitz Lemma 1][\[CharacterizationFactorOdometer\]]{#CharacterizationFactorOdometer label="CharacterizationFactorOdometer"} Let $\overleftarrow{\mathbb{Z}^{d}}_{(Z_{n}^{j})}$ be two odometer systems $(j=1,2)$.
There exists a factor map $\pi:\overleftarrow{\mathbb{Z}^{d}}_{(Z_{n}^{1})}\to \overleftarrow{\mathbb{Z}^{d}}_{(Z_{n}^{2})}$ if and only if for every $Z_{n}^{2}$ there exists some $Z_{m}^{1}$ such that $Z_{m}^{1}\leqslant Z_{n}^{2}$.* ### Normalizer condition The proof of [\[CharacterizationFactorOdometer\]](#CharacterizationFactorOdometer){reference-type="ref" reference="CharacterizationFactorOdometer"} can be modified to provide a characterization of the matrices $M\in GL(d,\mathbb{Z})$ defining a $\textrm{GL}(d, \mathbb{Z})$-*epimorphism* between two odometer systems $\phi:\overleftarrow{\mathbb{Z}^{d}}_{(Z_{n}^{1})}\to \overleftarrow{\mathbb{Z}^{d}}_{(Z_{n}^{2})}$. **Lemma 6**. *Let $M\in GL(d,\mathbb{Z})$. There exists a continuous map $\phi:\overleftarrow{\mathbb{Z}^{d}}_{(Z_{n}^{1})}\to \overleftarrow{\mathbb{Z}^{d}}_{(Z_{n}^{2})}$, such that $$\forall {\bm n} \in \mathbb{Z}^d,\overleftarrow {g} \in \overleftarrow{\mathbb{Z}^{d}}_{(Z_{n}^{1})}, \quad \phi ({\bm n}{\bm +} (\overleftarrow {g} )) = M{\bm n }{\bm +} \phi(\overleftarrow {g} ),$$ if and only if $$\label{normalizercondition}\tag{Epimorphism Condition} \forall n \in \mathbb{N}, \exists m(n)\in\mathbb{N}\textrm{ s.t. } MZ_{m(n)}^{1}\leqslant Z_{n}^{2}.$$* It follows from [Lemma 6](#LemmaNessesaryConditionNormalizerOdometer){reference-type="ref" reference="LemmaNessesaryConditionNormalizerOdometer"} that a matrix $M\in GL(d,\mathbb{Z})$ belongs to the linear representation semigroup $\vec{N}(\overleftarrow{\mathbb{Z}^{d}}_{(Z_{n})})$ of an odometer system if and only if it satisfies the following condition, which we call the *normalizer condition*: $$\label{normalizercondition1}\tag{NC 1} \forall n \in \mathbb{N}, \exists m(n)\in\mathbb{N}\textrm{ s.t. } MZ_{m(n)}\leqslant Z_{n}.$$ *Proof.* For the sufficiency, assume that $M\in GL(d,\mathbb{Z})$ satisfies [\[normalizercondition\]](#normalizercondition){reference-type="eqref" reference="normalizercondition"}.
Since the sequences $\{Z_{n}^{i}\}_{n>0}$, $i=1,2$, are decreasing, we may assume that $m(n)\leq m(n+1)$ for all $n\in\mathbb{N}$. Thus, we have homomorphisms $\phi_{m(n)}^{M}:\mathbb{Z}^{d}/Z_{m(n)}^{1}\to \mathbb{Z}^{d}/Z_{n}^{2}$, given by $\phi_{m(n)}^{M}({\bm m}\ (\text{mod}\ Z_{m(n)}^1) )=M{\bm m}\ (\text{mod}\ Z^2_n)$. To finish the proof, we only have to remark that ${\phi^{M}:\overleftarrow{\mathbb{Z}^{d}}_{(Z_{n}^{1})}\to\overleftarrow{\mathbb{Z}^{d}}_{(Z_{n}^{2})}}$ defined as ${\phi^{M}(({\bm g}_{n})_{n\in\mathbb{N}})}=(\phi_{m(n)}^{M}({\bm g}_{m(n)}))_{n\in \mathbb{N}}$ is an $M$-epimorphism. We now prove the necessity. Let $\phi:\overleftarrow{\mathbb{Z}^{d}}_{(Z_{n}^{1})}\to \overleftarrow{\mathbb{Z}^{d}}_{(Z_{n}^{2})}$ be an $M$-epimorphism. By continuity, for any $n\in \mathbb{N}$ and ${\bm g}\in \mathbb{Z}^{d}/Z_{n}^{2}$, there exist $m\in \mathbb{N}$ and ${\bm f}\in \mathbb{Z}^{d}/Z_{m}^{1}$ such that ${[{\bm f}]_{m}\subseteq \phi^{-1}([{\bm g}]_{n})}$. Let ${\bm h}\in Z_{m}^{1}$. For all $\overleftarrow{f}\in [{\bm f}]_{m}$, we have by [\[eq:ReturnTime\]](#eq:ReturnTime){reference-type="eqref" reference="eq:ReturnTime"} that ${\bm h}{\bm +}\overleftarrow{f}\in [{\bm f}]_{m}$, which implies that $\phi({\bm h}{\bm +} \overleftarrow{f})=M{\bm h} {\bm +}\phi(\overleftarrow{f})\in [{\bm g}]_{n}$. Since $\phi(\overleftarrow{f})$ is in $[{\bm g}]_{n}$, the set $\left\{{\bm m}\in \mathbb{Z}^{d}\colon {\bm m}{\bm +}\phi(\overleftarrow{f})\in [{\bm g}]_{n}\right\}$ is equal to $Z_{n}^{2}$ (by [\[eq:ReturnTime\]](#eq:ReturnTime){reference-type="eqref" reference="eq:ReturnTime"}), which implies that $M{\bm h}\in Z_{n}^{2}$. ◻ Notice that since the sequences $\{Z_n^i\}$, $i=1,2$, are decreasing, if the inclusion in [\[normalizercondition\]](#normalizercondition){reference-type="eqref" reference="normalizercondition"} holds for some $m(n)$, then it also holds for every larger value of $m(n)$.
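For constant-base odometers $Z_{n}=L^{n}(\mathbb{Z}^{d})$, the inclusion $MZ_{m}\leqslant Z_{n}$ amounts to $L^{-n}ML^{m}$ having integer entries, which is easy to test. A small numerical sketch (our own illustration with two sample bases, not code from the text; the finite search bounds make a negative answer only heuristic in general):

```python
# A numerical sketch of NC 1 for constant-base odometers Z_n = L^n(Z^2):
# M L^m (Z^2) <= L^n (Z^2) holds iff L^{-n} M L^m has integer entries.
from fractions import Fraction

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matpow(A, p):
    R = [[1, 0], [0, 1]]
    for _ in range(p):
        R = matmul(R, A)
    return R

def inv(A):
    det = Fraction(A[0][0] * A[1][1] - A[0][1] * A[1][0])
    return [[A[1][1] / det, -A[0][1] / det],
            [-A[1][0] / det, A[0][0] / det]]

def integral(A):
    return all(Fraction(x).denominator == 1 for row in A for x in row)

def satisfies_nc(M, L, max_n=4, max_m=12):
    """For each n <= max_n, search for m <= max_m with M L^m Z^2 <= L^n Z^2."""
    return all(
        any(integral(matmul(matmul(inv(matpow(L, n)), M), matpow(L, m)))
            for m in range(max_m + 1))
        for n in range(1, max_n + 1))

swap = [[0, 1], [1, 0]]
assert satisfies_nc(swap, [[2, 0], [0, 2]])      # L = 2*Id: every M works
assert not satisfies_nc(swap, [[2, 0], [0, 3]])  # L = diag(2,3): swap fails
```

For $L=2\,\textrm{Id}$ one may always take $m=n$, since $L^{-n}ML^{n}=M$; for $L=\textrm{diag}(2,3)$ the failure of the swap matrix is genuine, since $3^{m}/2^{n}$ is never an integer for $n\geq 1$.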
This remark implies that the set of matrices $M$ (not necessarily invertible) satisfying the condition [\[normalizercondition1\]](#normalizercondition1){reference-type="eqref" reference="normalizercondition1"} is stable under product and sum, so it is a ring. By applying this remark we get the following result. **Corollary 7**. *The semigroup ${N}(\overleftarrow{\mathbb{Z}^{d}}_{(Z_{n})})$ of $\textrm{GL}(d, \mathbb{Z})$-endomorphisms of an odometer $\overleftarrow{\mathbb{Z}^{d}}_{(Z_{n})}$ is a group.* *In particular, any $\textrm{GL}(d, \mathbb{Z})$-endomorphism is invertible and the linear representation semigroup $\vec{N}(\overleftarrow{\mathbb{Z}^{d}}_{(Z_{n})})$ is a group.* *Proof.* Recall that odometer systems are equicontinuous and, hence, are coalescent [@auslander1988minimal]. Thus, from [Corollary 18](#CorollariesNormalizerConditionOdometer){reference-type="ref" reference="CorollariesNormalizerConditionOdometer"}, to show that any $\textrm{GL}(d, \mathbb{Z})$-endomorphism of an odometer system is invertible, it is enough to show that $\vec{N}(\overleftarrow{\mathbb{Z}^{d}}_{(Z_{n})})$ is a group. Since $\vec{N}(\overleftarrow{\mathbb{Z}^{d}}_{(Z_{n})})$ is a semigroup, we only have to prove that any element $M\in \vec{N}(\overleftarrow{\mathbb{Z}^{d}}_{(Z_{n})})$ admits an inverse inside it. Since the set of matrices satisfying [\[normalizercondition\]](#normalizercondition){reference-type="eqref" reference="normalizercondition"} is a ring, any integer polynomial in $M$ also satisfies [\[normalizercondition\]](#normalizercondition){reference-type="eqref" reference="normalizercondition"}. Now, the Cayley-Hamilton theorem implies that $M^{d}=\sum\limits_{k=0}^{d-1}b_{k}M^{k}$, where the $b_{k}\in \mathbb{Z}$ are, up to sign, the coefficients of the characteristic polynomial of $M$. Notice that $b_{0}=\pm\det(M)$, so that $b_{0}\in \{-1,1\}$. Multiplying by $M^{-1}$, we conclude that $M^{-1}$ can be written as an integer polynomial in $M$.
Hence, $M^{-1}$ satisfies [\[normalizercondition\]](#normalizercondition){reference-type="eqref" reference="normalizercondition"}, and by [Lemma 6](#LemmaNessesaryConditionNormalizerOdometer){reference-type="ref" reference="LemmaNessesaryConditionNormalizerOdometer"}, we conclude that $M^{-1}\in \vec{N}(\overleftarrow{\mathbb{Z}^{d}}_{(Z_{n})})$. ◻ Recall that the automorphisms of the odometer system are well known: they are the translations on it [@auslander1988minimal]. **Lemma 8**. *For any odometer system we have that $$\mathop{\mathrm{Aut}}(\overleftarrow{\mathbb{Z}^{d}}_{(Z_{n})}) = \{\overleftarrow {g} \in \overleftarrow{\mathbb{Z}^{d}}_{(Z_{n})} \mapsto \overleftarrow {g} +\overleftarrow {h} \in \overleftarrow{\mathbb{Z}^{d}}_{(Z_{n})}: {\overleftarrow {h} \in \overleftarrow{\mathbb{Z}^{d}}_{(Z_{n})}}\}.$$ In particular $\mathop{\mathrm{Aut}}(\overleftarrow{\mathbb{Z}^{d}}_{(Z_{n})})$ is an abelian group isomorphic to $\overleftarrow{\mathbb{Z}^{d}}_{(Z_{n})}$.* As a direct consequence of [Corollary 7](#cor:CorollariesNormalizerConditionOdometer1){reference-type="ref" reference="cor:CorollariesNormalizerConditionOdometer1"} we get the following algebraic structure of the normalizer group of odometer systems. **Corollary 9**. 
*The normalizer group $N(\overleftarrow{\mathbb{Z}^{d}}_{(Z_{n})})$ of an odometer system is isomorphic to a semidirect product between the odometer group $\overleftarrow{\mathbb{Z}^{d}}_{(Z_{n})}$ and the linear representation group $\vec{N}(\overleftarrow{\mathbb{Z}^{d}}_{(Z_{n})})$.* *Proof.* Recall from [Lemma 6](#LemmaNessesaryConditionNormalizerOdometer){reference-type="ref" reference="LemmaNessesaryConditionNormalizerOdometer"} that for each $M\in \vec{N}(\overleftarrow{\mathbb{Z}^{d}}_{(Z_{n})})$, one can associate an $M$-isomorphism $\phi^M$ of $\overleftarrow{\mathbb{Z}^{d}}_{(Z_{n})}$, defined for any $({\bm g}_{n})_{n\in \mathbb{N}} \in \overleftarrow{\mathbb{Z}^{d}}_{(Z_{n})}$ by $${\phi^M (({\bm g}_{n})_{n\in \mathbb{N}})}=(M{\bm g}_{m(n)}\ (\text{mod}\ Z_{n}))_{n\in \mathbb{N}}.$$ Notice that the set $\{\phi^{M}\colon M\in \vec{N}(\overleftarrow{\mathbb{Z}^{d}}_{(Z_{n})})\}$ is a group and defines a group homomorphism $h: \vec{N}(\overleftarrow{\mathbb{Z}^{d}}_{(Z_{n})})\to N(\overleftarrow{\mathbb{Z}^{d}}_{(Z_{n})})$ such that $j\circ h$ is the identity on $\vec{N}(\overleftarrow{\mathbb{Z}^{d}}_{(Z_{n})})$, so the exact sequence [\[ExactSequenceForNormalizer\]](#ExactSequenceForNormalizer){reference-type="eqref" reference="ExactSequenceForNormalizer"} is split exact. ◻ So, to study the normalizer group of an odometer system, we just have to determine its linear representation group. Actually, all these results lead to the following question: **Question 10**.
*Give a characterization of the groups of the form $\vec{N}(\overleftarrow{\mathbb{Z}^{d}}_{(Z_{n})})$ for any odometer $\overleftarrow{\mathbb{Z}^{d}}_{(Z_{n})}$.* We do not answer this question, but we provide necessary conditions for specific odometers: the universal and the constant-base ones, in Sections [4.1](#sec:UnivOdometer){reference-type="ref" reference="sec:UnivOdometer"} and [4.2](#sec:ConstantBaseOdometer){reference-type="ref" reference="sec:ConstantBaseOdometer"} respectively. ## Symbolic dynamics We recall here classical definitions and fix the notation for multidimensional subshifts. ### Basic definitions Let $\mathcal{A}$ be a finite alphabet and $d\geq 1$ be an integer. We define a topology on $\mathcal{A}^{\mathbb{Z}^{d}}$ by endowing $\mathcal{A}$ with the discrete topology and considering in $\mathcal{A}^{\mathbb{Z}^{d}}$ the product topology, which is generated by cylinders. Since $\mathcal{A}$ is finite, $\mathcal{A}^{\mathbb{Z}^{d}}$ is a metrizable compact space. In this space, the group $\mathbb{Z}^{d}$ acts by translations (or shifts), defined for every ${\bm n}\in \mathbb{Z}^{d}$ as $$S^{{\bm n}}(x)_{{\bm k}}=x_{{\bm n}+{\bm k}},\ x= (x_{\bm k})_{\bm k}\in \mathcal{A}^{\mathbb{Z}^{d}},\ {\bm n, \bm k}\in \mathbb{Z}^{d}.$$ Let $P\subseteq \mathbb{Z}^{d}$ be a finite subset. A *pattern* is an element $\texttt{p}\in \mathcal{A}^{P}$. We say that $P$ is the *support* of $\texttt{p}$, and we denote $P=\mathop{\mathrm{supp}}(\texttt{p})$. A pattern *occurs in* $x\in \mathcal{A}^{\mathbb{Z}^{d}}$ if there exists ${\bm n}\in \mathbb{Z}^{d}$ such that $\texttt{p}=x|_{{\bm n}+P}$ (identifying ${\bm n}+P$ with $P$ by translation). In this case, we write $\texttt{p}\sqsubseteq x$ and call ${\bm n}$ an *occurrence* of $\texttt{p}$ *in* $x$. A *subshift* $(X,S,\mathbb{Z}^{d})$ is given by a closed subset $X\subseteq \mathcal{A}^{\mathbb{Z}^{d}}$ which is invariant under the $\mathbb{Z}^{d}$-action. A subshift defines a *language*.
For a finite subset $P\Subset \mathbb{Z}^{d}$ we define $$\mathcal{L}_{P}(X)=\{\texttt{p}\in \mathcal{A}^{P}: \exists x \in X,\ \texttt{p}\sqsubseteq x\}.$$ The *language* of a subshift $X$ is defined as $$\mathcal{L}(X)=\bigcup\limits_{P\Subset \mathbb{Z}^{d}}\mathcal{L}_{P}(X).$$ Let $(X,S,\mathbb{Z}^{d})$ be a subshift and $x\in X$. We say that ${\bm p}\in \mathbb{Z}^{d}$ is a *period* of $x$ if for all ${\bm n}\in \mathbb{Z}^{d}$, $x_{{\bm n}+ {\bm p}}=x_{{\bm n}}$. The subshift $(X,S,\mathbb{Z}^{d})$ is said to be *aperiodic* if no point of $X$ has a nontrivial period. Let $\mathcal{B}$ be another finite alphabet and $Y\subseteq \mathcal{B}^{\mathbb{Z}^{d}}$ be a subshift. For $P\Subset \mathbb{Z}^{d}$, we define a $P$-*block map* as a map of the form $\Phi: \mathcal{L}_{P}(X)\to \mathcal{B}$. This induces a factor map $\phi:X\to Y$ given by $$\phi(x)_{{\bm n}}= \Phi(x|_{{\bm n}+ P}).$$ The map $\phi$ is called the *sliding block code* induced by $\Phi$, and $P$ is the support of the map $\phi$. In most cases we may assume that the support of a sliding block code is a ball of the form $B({\bm 0},r)$, where $r$ is a positive integer. We define the *radius* of $\phi$, denoted $r(\phi)$, as the smallest integer $r\geq 0$ for which a $B({\bm 0},r)$-block map inducing $\phi$ can be defined. The next theorem characterizes the factor maps between two subshifts. **Theorem 11** (Curtis-Hedlund-Lyndon). *Let $(X,S,\mathbb{Z}^{d})$ and $(Y,S,\mathbb{Z}^{d})$ be two subshifts. A map $\phi:(X,S,\mathbb{Z}^{d})\to (Y,S,\mathbb{Z}^{d})$ is a factor map if and only if there exists a $B({\bm 0},r)$-block map $\Phi:\mathcal{L}_{B({\bm 0},r)}(X)\to \mathcal{L}_{1}(Y)$, such that $\phi(x)_{{\bm n}}=\Phi(x|_{{\bm n}+B({\bm 0},r)})$, for all ${\bm n}\in \mathbb{Z}^{d}$ and $x\in X$.* For $\textrm{GL}(d, \mathbb{Z})$-epimorphisms there is a similar characterization. See [@cabezas2021homomorphisms Theorem 2.7] for a proof.
**Theorem 12** (Curtis-Hedlund-Lyndon theorem for $\textrm{GL}(d, \mathbb{Z})$-epimorphisms). *Let $(X,S,\mathbb{Z}^{d})$ and $(Y,S,\mathbb{Z}^{d})$ be two subshifts and ${M\in \textrm{GL}(d, \mathbb{Z})}$. A map $\phi:(X,S,\mathbb{Z}^{d})\to (Y,S,\mathbb{Z}^{d})$ is an $M$-epimorphism if and only if there exists a $B({\bm 0},r)$-block map $\Phi:\mathcal{L}_{B({\bm 0},r)}(X)\to \mathcal{L}_{1}(Y)$, such that $\phi(x)_{{\bm n}}=\Phi(x|_{M^{-1}{\bm n}+B({\bm 0},r)})$, for all ${\bm n}\in \mathbb{Z}^{d}$ and $x\in X$.* This means that for any $\textrm{GL}(d, \mathbb{Z})$-epimorphism $\phi$ we can define a *radius* (also denoted by $r(\phi)$) as the smallest $r\in\mathbb{N}$ such that a $B({\bm 0},r)$-block map inducing $\phi$ can be defined. In the case $r(\phi)=0$, we say that $\phi$ is induced by a *letter-to-letter map*. ### Substitutive subshifts {#sec:SubstSubshift} We provide a brief overview of multidimensional substitutive subshifts of constant-shape that will be used throughout this article. We refer to [@cabezas2021homomorphisms] for basic properties on this topic, whose notation we follow. Let $L\in \mathcal{M}_{d}(\mathbb{Z})$ be an integer expansion matrix, $F\subseteq \mathbb{Z}^{d}$ be a fundamental domain of $L(\mathbb{Z}^{d})$ in $\mathbb{Z}^{d}$, i.e., a set of representatives of the classes of $\mathbb{Z}^{d}/L(\mathbb{Z}^{d})$ (with ${\bm 0}\in F$), and $\mathcal{A}$ be a finite alphabet. A *constant-shape substitution* is a map $\zeta:\mathcal{A}\to\mathcal{A}^{F}$. We say that $F$ is the *support* of the substitution.
Since any element $\bm n \in \mathbb{Z}^d$ can be expressed uniquely as $\bm n = L(\bm j) + \bm f$, with $\bm j \in \mathbb{Z}^d$ and $\bm f \in F$, the substitution extends to $\mathcal{A}^{\mathbb{Z}^d}$ as $$\zeta(x)_{L(\bm j) + \bm f} = \zeta(x_{\bm j})_{\bm f}.$$ For any $n>0$, we define the $n$-th iteration of the substitution $\zeta^{n}:\mathcal{A}\to \mathcal{A}^{F_{n}}$ by induction $\zeta^{n+1}=\zeta\circ \zeta^{n}$, where the supports of these substitutions satisfy the recurrence $F_{n+1}=L(F_{n})+F_{1}$ for all $n\geq 1$. We will always assume that the sequence of supports $(F_{n})_{n>0}$ is *Følner*, i.e., for all ${\bm n}\in \mathbb{Z}^{d}$ we have that $$\lim\limits_{n\to \infty} \dfrac{|F_{n} \triangle (F_{n} +{\bm n})|}{|F_{n} |}=0.$$ The supports do not need to cover all the space. Nevertheless, up to adding a finite set and taking its images under powers of the expansion matrix $L$, they cover the space. This property is explained in the following proposition. It is similar to the notion of remainder in numeration theory and will be technically useful. **Proposition 13**. *[@cabezas2021homomorphisms Proposition 2.10][\[FiniteSubsetFillsZd\]]{#FiniteSubsetFillsZd label="FiniteSubsetFillsZd"} Let $\zeta$ be a constant-shape substitution. Then, the set $K_{\zeta}=\bigcup\limits_{m>0}((\textrm{id}-L^{m})^{-1}(F_{m})\cap \mathbb{Z}^{d})$ is finite and satisfies $$\bigcup\limits_{n\geq 0} L^{n}(K_{\zeta})+F_{n}=\mathbb{Z}^{d},$$* *using the notation $F_{0}=\{{\bm 0}\}$.* The *language* of a substitution is the set of all patterns that occur in $\zeta^{n}(a)$, for some $n>0$, $a\in \mathcal{A}$, i.e., $$\mathcal{L}_{\zeta}=\{\texttt{p}\colon \texttt{p}\sqsubseteq \zeta^{n}(a),\ \text{for some }n>0,\ a\in \mathcal{A}\}.$$ A substitution $\zeta$ is called *primitive* if there exists a positive integer $n>0$, such that for every $a,b\in \mathcal{A}$, $b$ occurs in $\zeta^{n}(a)$. 
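The recursion $F_{n+1}=L(F_{n})+F_{1}$ is easy to implement. The following is a minimal sketch, assuming the expansion matrix $L=2\,\textrm{id}$, the fundamental domain $F=\{0,1\}^{2}$ and a two-letter rule `zeta`; all of this data is purely illustrative and not taken from the text.

```python
# Illustrative constant-shape substitution on Z^2 with L = 2*id and
# F = {0,1}^2; the two-letter rule below is an arbitrary example.

L = ((2, 0), (0, 2))                    # expansion matrix
F1 = [(0, 0), (1, 0), (0, 1), (1, 1)]   # fundamental domain of L(Z^2)

# zeta[a][f] is the letter placed at position f of F1
zeta = {
    'a': {(0, 0): 'a', (1, 0): 'b', (0, 1): 'b', (1, 1): 'a'},
    'b': {(0, 0): 'b', (1, 0): 'a', (0, 1): 'a', (1, 1): 'b'},
}

def apply_L(v):
    return (L[0][0] * v[0] + L[0][1] * v[1],
            L[1][0] * v[0] + L[1][1] * v[1])

def substitute(pattern):
    """One step: a pattern on F_n becomes a pattern on F_{n+1} = L(F_n) + F_1,
    via zeta^{n+1}(a)_{L(j)+f} = zeta(zeta^n(a)_j)_f."""
    out = {}
    for j, letter in pattern.items():
        Lj = apply_L(j)
        for f, b in zeta[letter].items():
            out[(Lj[0] + f[0], Lj[1] + f[1])] = b
    return out

p = {(0, 0): 'a'}          # zeta^0(a)
for _ in range(3):         # compute zeta^3(a), supported on F_3
    p = substitute(p)
print(len(p))              # |F_3| = |det L|^3 = 64
```

With $L=2\,\textrm{id}$ one gets $F_{n}=\{0,\ldots,2^{n}-1\}^{2}$, so the Følner property of $(F_{n})_{n>0}$ is immediate in this example.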
If $\zeta$ is a primitive constant-shape substitution, the existence of *periodic points* is well-known, i.e., there exists at least one point $x_{0}\in X_{\zeta}$ such that $\zeta^{p}(x_{0})=x_{0}$ for some $p>0$. In the primitive case, the subshift is preserved by replacing the substitution with a power of it; that is, $X_{\zeta^{n}}$ is equal to $X_{\zeta}$ for all $n>0$. Consequently, we may assume the existence of at least one fixed point. In other words, there exists a point $x \in X_{\zeta}$ such that $x = \zeta(x)$. As in the one-dimensional case, it is important to note that the number of periodic points (or, after passing to an appropriate power, the number of fixed points) is finite. The substitutive subshift $(X_{\zeta},S,\mathbb{Z}^{d})$ is the topological dynamical system, where $X_{\zeta}$ is the set of all sequences $x\in \mathcal{A}^{\mathbb{Z}^{d}}$ such that every pattern occurring in $x$ is in $\mathcal{L}_{\zeta}$. When the substitutive subshift $(X_{\zeta},S,\mathbb{Z}^{d})$ is aperiodic, the substitution satisfies a combinatorial property called *recognizability* [@cabezas2021homomorphisms; @solomyakrecognizability]. **Definition 14**. Let $\zeta$ be a primitive substitution and $x\in X_{\zeta}$ be a fixed point. We say that $\zeta$ is *recognizable on $x$* if there exists some constant $R>0$ such that for all ${\bm i}, {\bm j}\in \mathbb{Z}^{d}$, $$x|_{B(L_{\zeta}({\bm i}),R)\cap \mathbb{Z}^{d}}=x|_{B({\bm j},R)\cap \mathbb{Z}^{d}} \implies (\exists {\bm k}\in \mathbb{Z}^{d}) (({\bm j}=L_{\zeta}({\bm k}))\wedge (x_{{\bm i}}=x_{{\bm k}})).$$ The recognizability property implies some topological and combinatorial properties of the substitutive subshift that we summarize in the following: - The substitutive subshift $(X_{\zeta},S,\mathbb{Z}^{d})$ is aperiodic. - For any $n>0$, the map $\zeta^{n}:X_{\zeta}\to \zeta^{n}(X_{\zeta})$ is a homeomorphism. 
- For any $n>0$, every $x\in X_{\zeta}$ can be written in a unique way $x=S^{{\bm f}}\zeta^{n}(x_{1})$ with ${\bm f}\in F_{n}$ and $x_{1}\in X_{\zeta}$. It follows that the map $\pi_n \colon X_\zeta \to F_n$ defined by $\pi_n(x) = {\bm f}$ when $x \in S^{\bm f} \zeta^n(X_\zeta)$ is well-defined, continuous and can be extended to a factor map $\pi \colon (X_\zeta,S,\mathbb{Z}^{d}) \to (\overleftarrow{\mathbb{Z}^{d}}_{(L^{n})},{\bm +},\mathbb{Z}^{d})$ defined as $\pi (x) = (\pi_n(x))_n$ [@cabezas2021homomorphisms]. # Calculability of symmetries of constant-base odometers {#sec:CalcSymmetries} ## Decidability of symmetries of constant-base odometers {#sec:Decidability} The main goal of this section is to give a computable condition to check if a transformation $M \in GL(d, \mathbb{Z})$ is a symmetry of an odometer system with a constant base given by an expansion matrix $L$. To do this, we use the characterization of the symmetries by [\[normalizercondition\]](#normalizercondition){reference-type="ref" reference="normalizercondition"}. It involves congruence equations on the coefficients of the symmetry matrix. We will show that the symmetry matrices of a constant-base odometer are characterized by explicit linear arithmetical relations of their coefficients. More precisely, the set of symmetry matrices is a $\mathbb{Z}$-module intersected with the algebraic set $\rm GL (d, \mathbb{Z})$. We start by reading [\[normalizercondition\]](#normalizercondition){reference-type="ref" reference="normalizercondition"} in the constant-base context. A matrix $M \in GL(d, \mathbb{Z})$ is in the symmetry group $\vec{N}(\overleftarrow{\mathbb{Z}^{d}}_{(L^{n})})$ if and only if $$\label{normalizercondition3}\tag{NC 3} \forall n\in \mathbb{N}, \exists m\in\mathbb{N}, L^{-n}ML^m \in \mathcal{M}(d, \mathbb{Z}).$$ **Proposition 15**. *Let $L \in \mathcal{M}(d, \mathbb{Z})$ be an expansion matrix. 
For $i=1, \ldots, d$, set $\alpha_i =d_i(L)/d_{i-1}(L)$, where $d_0(L) =1$ and $d_i(L)$ equals the greatest common divisor of all $i\times i$ minors of the matrix $L$. Then the set of matrices $M\in \mathcal{M}(d, \mathbb{Z})$ satisfying [\[normalizercondition3\]](#normalizercondition3){reference-type="ref" reference="normalizercondition3"} is a free $\mathbb{Z}$-module of rank $$d^2- |\{(i,j): 1 \le i <j \le d ,\quad \mathop{\mathrm{rad}}\left(\frac{\alpha_j}{\alpha_i}\right) \textrm{ does not divide } \alpha_i\ \}|.$$ Moreover, it admits an explicit basis. In particular, there is an algorithm to decide if a matrix $M$ satisfies [\[normalizercondition3\]](#normalizercondition3){reference-type="ref" reference="normalizercondition3"}.* It follows that given an expansion matrix $L$, there is a (theoretical) algorithm to decide if a matrix $M$ belongs to $\vec{N}(\overleftarrow{\mathbb{Z}^{d}}_{(L^n)})$, since this latter set is the intersection of $GL(d,\mathbb{Z})$ with the set of matrices satisfying [\[normalizercondition3\]](#normalizercondition3){reference-type="ref" reference="normalizercondition3"}. However, this says little about its group structure. To be more explicit on the description of the symmetries, we treat the case $d=2$ in [4](#sec:HomoZ2Odometers){reference-type="ref" reference="sec:HomoZ2Odometers"}. In addition, we specify the group of symmetries $\vec{N}(\overleftarrow{\mathbb{Z}^{2}}_{(L^n)})$ with respect to the arithmetical properties of $L$. Before giving the proof of [Proposition 15](#theo:charSymmetries){reference-type="ref" reference="theo:charSymmetries"}, recall from [Lemma 6](#LemmaNessesaryConditionNormalizerOdometer){reference-type="ref" reference="LemmaNessesaryConditionNormalizerOdometer"} that a matrix $M$ is in $\vec{N}(\overleftarrow{\mathbb{Z}^{d}}_{(L^n)})$ if and only if it satisfies the normalizer condition [\[normalizercondition2\]](#normalizercondition2){reference-type="ref" reference="normalizercondition2"}. 
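For $d=2$ the quantities above are directly computable: $d_{1}(L)$ is the gcd of the entries of $L$ and $d_{2}(L)=|\det(L)|$, and the only pair to test is $(i,j)=(1,2)$. The following sketch evaluates the rank formula for $2\times 2$ integer matrices; the helper names `rad` and `module_rank_2d` are ours, not from the text.

```python
from math import gcd

def rad(n):
    """Radical of n: product of the distinct primes dividing |n| (rad(1) = 1)."""
    n, r, p = abs(n), 1, 2
    while p * p <= n:
        if n % p == 0:
            r *= p
            while n % p == 0:
                n //= p
        p += 1
    return r * (n if n > 1 else 1)

def module_rank_2d(L):
    """Rank formula of the proposition for d = 2: d_1(L) = gcd of the
    entries, d_2(L) = |det L|, alpha_i = d_i(L)/d_{i-1}(L)."""
    (a, b), (c, d) = L
    d1 = gcd(gcd(a, b), gcd(c, d))
    d2 = abs(a * d - b * c)
    alpha1, alpha2 = d1, d2 // d1
    bad_pair = alpha1 % rad(alpha2 // alpha1) != 0   # rad(alpha2/alpha1) | alpha1 ?
    return 4 - int(bad_pair)

print(module_rank_2d(((2, 0), (0, 2))))   # 4: the module is all of M_2(Z)
print(module_rank_2d(((2, 1), (0, 3))))   # 3
```

For $L=2\,\textrm{id}$ the formula gives full rank $4$, in accordance with the fact that every matrix satisfies the normalizer condition in that case; for $L=\begin{pmatrix}2&1\\0&3\end{pmatrix}$ one relation is imposed and the rank drops to $3$.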
We start with a simplification of this condition, showing that the second term $m$ can be chosen uniformly in $n$. **Lemma 16**. *A matrix $M \in \mathcal{M}_d(\mathbb{Z})$ satisfies the normalizer condition [\[normalizercondition2\]](#normalizercondition2){reference-type="ref" reference="normalizercondition2"} if and only if $$\forall n\in \mathbb{N}, \quad \mathop{\mathrm{adj}}(L^n)ML^{d} \equiv 0\ (\text{mod}\ \det(L)^{n}).$$* *Proof.* Recall that [\[normalizercondition2\]](#normalizercondition2){reference-type="ref" reference="normalizercondition2"} states $$\forall n\in \mathbb{N}, \exists m \in \mathbb{N},\ \mathop{\mathrm{adj}}(L^n)ML^{m} \equiv 0 \mod\det L^n.$$ By linearity, this is equivalent to $$\forall n\in \mathbb{N}, \forall m \in \mathbb{N}\textrm{ large enough}, \mathop{\mathrm{adj}}(L^n)ML^{m} \equiv 0 \mod \det L^n.$$ The Cayley-Hamilton theorem ensures that for each $m\ge d$ there are integers $a_0(m), \ldots, a_{d-1}(m)$ such that $L^{m} = \sum_{k=0}^{d-1} a_k(m) L^k$. Moreover, the vector $a(m) =(a_0(m), \ldots, a_{d-1}(m))$ satisfies the recurrence relation $a(m+1) = P a(m)$ for the companion matrix $P$ of the characteristic polynomial of $L$. Since the set $\mathbb{Z}/(\det L)^n \mathbb{Z}$ is finite, there is an integer $N$ such that $P^N = {\rm Id} \mod \det L^n$, so that $a(d+kN) = a(d) \mod \det L^n$ for any $k \in \mathbb{N}$. Hence $L^{d+kN} = L^d \mod \det L^n$ and the claim follows. ◻ *Proof of [Proposition 15](#theo:charSymmetries){reference-type="ref" reference="theo:charSymmetries"}.* Consider the set $$E=\{ M \in \mathcal{M}_d(\mathbb{Q}): \forall n \ge 0, L^{-n}M \in \mathcal{M}_d(\mathbb{Z})\}.$$ Notice that this set is a $\mathbb{Q}$-vector space invariant under the linear transformation $\tilde{L} \colon M \mapsto LM$. It follows from basic linear algebra that it is a direct sum of kernels of polynomials $P_i(\tilde{L})$ in $\tilde{L}$, where each $P_i$ is an integer irreducible polynomial. 
Hence there exists an integer polynomial $\tilde{P}$ such that $$M \in E \Leftrightarrow \tilde{P}(\tilde{L})M = 0.$$ Since the condition $\mathop{\mathrm{adj}}(L^n)ML^{d} \equiv 0 \mod \det(L)^n$ is equivalent to $L^{-n}ML^d \in \mathcal{M}_d(\mathbb{Z})$, we get, with the former remark and [Lemma 16](#lem:NCUniform){reference-type="ref" reference="lem:NCUniform"}, that a matrix $M \in \mathcal{M}_d(\mathbb{Z})$ satisfies [\[normalizercondition2\]](#normalizercondition2){reference-type="ref" reference="normalizercondition2"} if and only if $\tilde{P}(\tilde{L})M L^d =0$. Recall that a matrix $M \in \mathcal{M}_d(\mathbb{Z})$ belongs to $\rm GL (d, \mathbb{Z})$ if and only if $\det (M) = \pm 1$. This finishes the proof. ◻ # Description of the linear representation group of odometer systems {#sec:HomoZ2Odometers} In this section, we describe the linear representation group and its elements for several odometers, specifically the universal and constant-base $\mathbb{Z}^2$-odometer systems. It is worth noting that we did not find a similar result in the existing literature. One of our main tools involves the characterization [\[normalizercondition1\]](#normalizercondition1){reference-type="eqref" reference="normalizercondition1"} for matrices in the linear representation group of an odometer system $\overleftarrow{\mathbb{Z}^{d}}_{(Z_{n})}$. We begin by presenting an equivalent formulation of [\[normalizercondition1\]](#normalizercondition1){reference-type="eqref" reference="normalizercondition1"} in terms of arithmetical equations. For any given $n\in \mathbb{N}$, let $L_{n}\in \mathcal{M}(d,\mathbb{Z})$ be a matrix such that $L_{n}(\mathbb{Z}^{d})=Z_{n}$. It is important to note that this matrix is unique up to composition with a matrix in $\textrm{GL}(d, \mathbb{Z})$. 
Then, the condition [\[normalizercondition1\]](#normalizercondition1){reference-type="eqref" reference="normalizercondition1"} is equivalent to: for all $n\in \mathbb{N}$, there exists $m_{M}(n)\in \mathbb{N}$ such that $L_{n}^{-1}ML_{m_{M}(n)}$ is an endomorphism of $\mathbb{Z}^{d}$. Since $\det(L)L^{-1}=\mathop{\mathrm{adj}}(L)$, where $\mathop{\mathrm{adj}}(L)$ is the *adjugate matrix* of $L$, we can express [\[normalizercondition1\]](#normalizercondition1){reference-type="eqref" reference="normalizercondition1"} equivalently as: $$\label{normalizercondition2}\tag{NC 2} \forall n\in \mathbb{N}, \exists m_{M}(n)\in\mathbb{N},\ \mathop{\mathrm{adj}}(L_{n})ML_{m_{M}(n)} \equiv 0\ (\text{mod}\ \det(L_{n})).$$ ## The universal $\mathbb{Z}^{d}$-odometer case {#sec:UnivOdometer} Let $(\Gamma_{n})_{n\in \mathbb{N}}$ be an enumeration of all finite-index subgroups of $\mathbb{Z}^{d}$. We define the *universal* $d$-*dimensional odometer system* as follows: Start with $\Lambda_{0}=\Gamma_{0}$, and for any $n\geq1$ set $\Lambda_{n}=\Lambda_{n-1}\cap \Gamma_{n}$. Since the intersection of finite-index subgroups remains a finite-index subgroup, we can define the universal $d$-dimensional odometer as $\overleftarrow{\mathbb{Z}^{d}}_{(\Lambda_{n})}$. This odometer is universal in the sense that, by [\[CharacterizationFactorOdometer\]](#CharacterizationFactorOdometer){reference-type="ref" reference="CharacterizationFactorOdometer"}, any odometer system is a topological factor of the universal odometer. For example, the universal 1-dimensional odometer is equal to $\overleftarrow{\mathbb{Z}}_{(n!\mathbb{Z})}$. With respect to its linear representation group, [\[normalizercondition2\]](#normalizercondition2){reference-type="eqref" reference="normalizercondition2"} leads to the following result. **Proposition 17**. 
*The linear representation group of the $d$-dimensional universal odometer is equal to $\textrm{GL}(d, \mathbb{Z})$.* *Proof.* Consider $L_{n}\in \mathcal{M}(d,\mathbb{Z})$ such that $L_{n}(\mathbb{Z}^{d})=\Lambda_{n}$. A matrix $M\in \textrm{GL}(d, \mathbb{Z})$ is in $\vec{N}(\overleftarrow{\mathbb{Z}^{d}}_{(\Lambda_{n})})$ if and only if $M$ satisfies [\[normalizercondition2\]](#normalizercondition2){reference-type="eqref" reference="normalizercondition2"}. Now, for any $n\in \mathbb{N}$, we can choose $m(n)\in\mathbb{N}$ large enough such that $\Lambda_{m(n)}\leqslant \det(L_{n})\mathbb{Z}^{d}$. This implies that $\mathop{\mathrm{adj}}(L_{n})ML_{m(n)}\equiv 0\ (\text{mod}\ \det(L_{n}))$ for any matrix $M\in \textrm{GL}(d, \mathbb{Z})$. We then conclude that $\vec{N}(\overleftarrow{\mathbb{Z}^{d}}_{(\Lambda_{n})})=\textrm{GL}(d, \mathbb{Z})$. ◻ ## The constant-base $\mathbb{Z}^{2}$-odometer case {#sec:ConstantBaseOdometer} We will be mainly interested in $\textrm{GL}(d, \mathbb{Z})$-endomorphisms of constant-base odometers, i.e., when $Z_n= L^n (\mathbb{Z}^d)$ for each $n\in \mathbb{N}$ and for some expansion matrix $L$. In this case we get the following direct corollary of [Lemma 6](#LemmaNessesaryConditionNormalizerOdometer){reference-type="ref" reference="LemmaNessesaryConditionNormalizerOdometer"} and the condition [\[normalizercondition2\]](#normalizercondition2){reference-type="eqref" reference="normalizercondition2"}. **Corollary 18**. *Let $L \in \mathcal{M}(d,\mathbb{Z})$ be an expansion matrix.* 1. *A matrix $M\in \textrm{GL}(d, \mathbb{Z})$ is in $\vec{N}(\overleftarrow{\mathbb{Z}^{d}}_{(L^{n})})$ if and only if $$\label{normalizercondition3}\tag{NC 3} \forall n\in \mathbb{N}, \exists m(n)\in\mathbb{N},\ \mathop{\mathrm{adj}}(L^n)ML^{m(n)} \equiv 0\ (\text{mod}\ \det(L^{n})).$$* 2. 
*If $M\in \textrm{GL}(d, \mathbb{Z})$ commutes with some power of the expansion matrix $L$, then $M$ is in the linear representation semigroup $\vec{N}(\overleftarrow{\mathbb{Z}^{d}}_{(L^{n})})$.* 3. *For any $M\in \textrm{GL}(d, \mathbb{Z})$ we have that $\vec{N}(\overleftarrow{\mathbb{Z}^{d}}_{(ML^{n}M^{-1})})=M\vec{N}(\overleftarrow{\mathbb{Z}^{d}}_{(L^{n})})M^{-1}$.* In the next theorem, we present the structure of the linear representation group of constant-base $\mathbb{Z}^{2}$-odometer systems based on computable arithmetical conditions of the expansion matrix $L$. Within this family, we obtain a bifurcation phenomenon at the level of the linear representation group, depending on arithmetic relations of the coefficients of the matrix $L$. To describe the different cases, we introduce some additional notations. For any positive integer $n>1$, the *radical* $\mathop{\mathrm{rad}}(n)$ *of* $n$ is defined as the product of the distinct prime numbers that divide $n$. If $n<-1$, we define $\mathop{\mathrm{rad}}(n)$ just as $\mathop{\mathrm{rad}}(-n)$. The *centralizer* $\mathop{\mathrm{Cent}}_{\textrm{GL}(2, \mathbb{Z})}(L)$ *of a matrix* $L$ *in* $\textrm{GL}(2, \mathbb{Z})$ is defined as the subgroup consisting of all matrices in $\textrm{GL}(2, \mathbb{Z})$ commuting with $L$. Recall that, as established in [Corollary 18](#CorollariesNormalizerConditionOdometer){reference-type="ref" reference="CorollariesNormalizerConditionOdometer"}, the centralizer $\mathop{\mathrm{Cent}}_{\textrm{GL}(2, \mathbb{Z})}(L)$ is always a subgroup of $\vec{N}(\overleftarrow{\mathbb{Z}^{2}}_{(L^{n})})$. **Theorem 19**. *Let $L \in \mathcal{M}(2,\mathbb{Z})$ be an integer expansion matrix.* 1. 
*[\[RadDividesEverythingCase\]]{#RadDividesEverythingCase label="RadDividesEverythingCase"} If $\mathop{\mathrm{rad}}(\det(L))$ divides $\mathop{\mathrm{trace}}(L)$, then the linear representation group $\vec{N}(\overleftarrow{\mathbb{Z}^{2}}_{(L^n)})$ is equal to $\textrm{GL}(2, \mathbb{Z})$.* 2. *Otherwise* 1. *[\[it:case2a\]]{#it:case2a label="it:case2a"} If the spectrum of the matrix $L$ is disjoint from the integers, then the linear representation group $\vec{N}(\overleftarrow{\mathbb{Z}^{2}}_{(L^n)})$ is the centralizer $\mathop{\mathrm{Cent}}_{GL(2,\mathbb{Z})}(L)$.* *Moreover, if the spectrum of $L$ is disjoint from the real line, then $\vec{N}(\overleftarrow{\mathbb{Z}^{2}}_{(L^n)})$ is a finite group.* 2. *[\[it:case2b\]]{#it:case2b label="it:case2b"} When the spectrum of $L$ contains an integer value, the linear representation group $\vec{N}(\overleftarrow{\mathbb{Z}^{2}}_{(L^n)})$ is finite or virtually $\mathbb{Z}$.* *More precisely, under explicit arithmetical properties of $L$, $\vec{N}(\overleftarrow{\mathbb{Z}^{2}}_{(L^n)})$ is isomorphic to $\mathbb{Z}/ 2\mathbb{Z}$ or $\mathbb{Z}^{2}/(2\mathbb{Z}\times2\mathbb{Z})$, or its abelianization is finite, and its commutator subgroup is cyclic.* Along the proof, the group structure of $\vec{N}(\overleftarrow{\mathbb{Z}^{2}}_{(L^n)})$ is specified in terms of the arithmetical properties of the coefficients of $L$. The following examples illustrate the different cases of [Theorem 19](#GeneralTheoremDifferentCasesNormalizer){reference-type="ref" reference="GeneralTheoremDifferentCasesNormalizer"} according to the expansion matrix $L$. **Example 20** (Different results for [Theorem 19](#GeneralTheoremDifferentCasesNormalizer){reference-type="ref" reference="GeneralTheoremDifferentCasesNormalizer"}). 1. 
As we will see in the proof of [Theorem 19](#GeneralTheoremDifferentCasesNormalizer){reference-type="ref" reference="GeneralTheoremDifferentCasesNormalizer"}, the case [\[RadDividesEverythingCase\]](#RadDividesEverythingCase){reference-type="ref" reference="RadDividesEverythingCase"} can be easily generalized for higher dimensions in the following way: If $\mathop{\mathrm{rad}}(\det(L))$ divides every non-leading coefficient of the characteristic polynomial of $L$, then the linear representation semigroup $\vec{N}(\overleftarrow{\mathbb{Z}^{d}}_{(L^{n})})$ is equal to $\textrm{GL}(d, \mathbb{Z})$. In particular, if $L=pM$, with $p\in \mathbb{Z}$ and $M\in \textrm{GL}(d, \mathbb{Z})$, then the linear representation group $\vec{N}(\overleftarrow{\mathbb{Z}^{d}}_{(L^{n})})$ is $\textrm{GL}(d, \mathbb{Z})$. 2. The matrix $L_{1}=\begin{pmatrix} 2 & -1 \\ 1 & 5 \end{pmatrix}$ illustrates the case (2)[\[it:case2a\]](#it:case2a){reference-type="ref" reference="it:case2a"}: $\mathop{\mathrm{trace}}(L_{1})=7$, $\det(L_{1})=11$, and $L_{1}$ has real eigenvalues (which are equal to $7/2\pm\sqrt{5}/2$). The matrices in $\vec{N}(\overleftarrow{\mathbb{Z}^{2}}_{(L_{1}^{n})})$ are the ones commuting with $L_{1}$, which is an infinite group containing $\begin{pmatrix} 2&1\\-1&-1 \end{pmatrix}$. 3. [\[ComplexEigenvalues\]]{#ComplexEigenvalues label="ComplexEigenvalues"}The matrix $L_{2}=\begin{pmatrix} 2 & -1 \\ 1 & 3 \end{pmatrix}$ also illustrates the case (2)[\[it:case2a\]](#it:case2a){reference-type="ref" reference="it:case2a"} but with a spectrum disjoint from the real line: $\mathop{\mathrm{trace}}(L_{2})=5$ and $\det(L_{2})=7$, and $L_{2}$ has complex eigenvalues $5/2\pm i\sqrt{3}/2$. 
The linear representation semigroup $\vec{N}(\overleftarrow{\mathbb{Z}^{2}}_{(L_{2}^{n})})$ is equal to $\mathop{\mathrm{Cent}}_{GL(2,\mathbb{Z})}(L_{2})$, which corresponds to the set $$\left\{\begin{array}{ccc} \begin{pmatrix} 1 & 1 \\ -1 & 0 \end{pmatrix},&\begin{pmatrix} -1 & -1 \\ 1 & 0 \end{pmatrix},&\begin{pmatrix} 0 & -1 \\ 1 & 1 \end{pmatrix}\\ \begin{pmatrix} 0 & -1 \\ 1 & -1 \end{pmatrix},&\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix},&\begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix}\\ \end{array}\right\}.$$ 4. The matrix $L_{3}=\begin{pmatrix} 6& 1\\0& 2 \end{pmatrix}$ illustrates the case (2)[\[it:case2b\]](#it:case2b){reference-type="ref" reference="it:case2b"}. This is an upper triangular matrix which is not diagonalizable by $\textrm{GL}(2, \mathbb{Z})$. It will be proved that $\vec{N}(\overleftarrow{\mathbb{Z}^{2}}_{(L_{3}^{n})})$ is conjugate to $\left\{\begin{pmatrix}m_{11} & m_{12}\\ 0 & m_{22}\end{pmatrix}\colon |m_{11}m_{22}|=1, m_{12}\in \mathbb{Z}\right\}$ via the matrix $\begin{pmatrix} 1 & 0\\4 & 1 \end{pmatrix}$, so it is virtually $\mathbb{Z}$. It can be directly checked that the linear representation group $\vec{N}(\overleftarrow{\mathbb{Z}^{2}}_{(L^{n})})$ associated with a matrix $L$ diagonalizable by $\textrm{GL}(2, \mathbb{Z})$ also has the same group structure, being isomorphic to a group of invertible upper triangular matrices. 5. The matrix $L_{4}=\begin{pmatrix} 3& 1\\0& 5 \end{pmatrix}$ also concerns the case (2)[\[it:case2b\]](#it:case2b){reference-type="ref" reference="it:case2b"}. This matrix has eigenvalues $3$ and $5$. 
Along the proof of [Theorem 19](#GeneralTheoremDifferentCasesNormalizer){reference-type="ref" reference="GeneralTheoremDifferentCasesNormalizer"} it will be shown that a matrix $M$ is in $\vec{N}(\overleftarrow{\mathbb{Z}^{2}}_{(L_{4}^{n})})$ if and only if $M$ commutes with $L_{4}$, so $\vec{N}(\overleftarrow{\mathbb{Z}^{2}}_{(L_{4}^{n})})$ is isomorphic to $\mathbb{Z}/2\mathbb{Z}$. 6. The last example illustrating the case (2)[\[it:case2b\]](#it:case2b){reference-type="ref" reference="it:case2b"} is the matrix $L_{5}=\begin{pmatrix} 2& 1\\0& 3 \end{pmatrix}$ with eigenvalues $2$ and $3$. It will also be shown that $\vec{N}(\overleftarrow{\mathbb{Z}^{2}}_{(L_{5}^{n})})=\mathop{\mathrm{Cent}}_{\textrm{GL}(2, \mathbb{Z})}(L_{5})$, which is isomorphic to $(\mathbb{Z}/2\mathbb{Z})^2$. **Remark 21**. Note that [Theorem 19](#GeneralTheoremDifferentCasesNormalizer){reference-type="ref" reference="GeneralTheoremDifferentCasesNormalizer"} implies that factor maps between equicontinuous systems are not necessarily compatible with $\textrm{GL}(d, \mathbb{Z})$-endomorphisms. Consider $X$ as the universal $\mathbb{Z}^{2}$-odometer, and set $Y=\overleftarrow{\mathbb{Z}^{2}}_{(L_{1}^{n})}$. Hence $(Y,{\bm +},\mathbb{Z}^{2})$ is an equicontinuous factor of $(X,{\bm +},\mathbb{Z}^{2})$. Now, by [Proposition 17](#SymmetrySemigroupUniversalOdometer){reference-type="ref" reference="SymmetrySemigroupUniversalOdometer"}, we can define an isomorphism associated with the matrix $\begin{pmatrix} 2 & 1\\1 & 1 \end{pmatrix}$ in $X$. However, in contrast, [Theorem 19](#GeneralTheoremDifferentCasesNormalizer){reference-type="ref" reference="GeneralTheoremDifferentCasesNormalizer"} and [Lemma 3](#CompatibilityPropertyFactors){reference-type="ref" reference="CompatibilityPropertyFactors"} establish that such an isomorphism is not possible in $Y$. 
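The examples above can be probed numerically. The sketch below tests condition (NC 3) for finitely many $n$, searching for a witness $m$ up to a bound; the bounds `n_max`, `m_max` and the function names are illustrative choices, and such a finite search gives evidence, not a proof (a `False` only means no witness was found below the bound).

```python
# Finite sanity check (not a proof) of condition (NC 3) for 2x2 integer
# matrices; all names and bounds here are illustrative.

def mat_mul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def mat_pow(A, n):
    R = ((1, 0), (0, 1))
    for _ in range(n):
        R = mat_mul(R, A)
    return R

def det(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

def adj(A):   # adjugate of a 2x2 matrix
    return ((A[1][1], -A[0][1]), (-A[1][0], A[0][0]))

def satisfies_nc3(L, M, n_max=4, m_max=40):
    """For each n <= n_max, search for m <= m_max with
    adj(L^n) M L^m == 0 mod det(L)^n."""
    for n in range(1, n_max + 1):
        mod = abs(det(L)) ** n
        A = mat_mul(adj(mat_pow(L, n)), M)
        if not any(all(x % mod == 0
                       for row in mat_mul(A, mat_pow(L, m)) for x in row)
                   for m in range(m_max + 1)):
            return False
    return True

L1 = ((2, -1), (1, 5))                         # the first matrix of Example 20
print(satisfies_nc3(L1, ((2, 1), (-1, -1))))   # commutes with L1: True
print(satisfies_nc3(L1, ((2, 1), (1, 1))))     # not in the centralizer: False
```

The first matrix commutes with $L_{1}$, so membership is guaranteed by the corollary above (one may take $m=n$); the second does not, and the search already fails at $n=1$, consistent with Remark 21.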
## Proof of [Theorem 19](#GeneralTheoremDifferentCasesNormalizer){reference-type="ref" reference="GeneralTheoremDifferentCasesNormalizer"} {#sec:ProofTheo3.3} In this subsection, we will prove [Theorem 19](#GeneralTheoremDifferentCasesNormalizer){reference-type="ref" reference="GeneralTheoremDifferentCasesNormalizer"}. We decompose the proof according to the spectral properties of $L$. We will get more precise results than the ones stated in [Theorem 19](#GeneralTheoremDifferentCasesNormalizer){reference-type="ref" reference="GeneralTheoremDifferentCasesNormalizer"}. We start with the case where the expansion matrix has integer eigenvalues. From now on, an integer expansion matrix $L$ will be denoted as $L=\begin{pmatrix} p & q\\ r & s \end{pmatrix}$, its powers as $L^{n}=\begin{pmatrix} p(n) & q(n)\\ r(n) & s(n) \end{pmatrix}$ and a matrix $M$ in $\textrm{GL}(2, \mathbb{Z})$ as $M=\begin{pmatrix}m_{11} & m_{12}\\ m_{21} & m_{22} \end{pmatrix}$. ### The triangular case We now consider the case where $L$ is a triangular matrix. We focus only on the upper triangular case, i.e., $q\neq0$ and $r=0$. The lower triangular case can be deduced from this, thanks to [Corollary 18](#CorollariesNormalizerConditionOdometer){reference-type="ref" reference="CorollariesNormalizerConditionOdometer"} via conjugation with the matrix $\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$. For all $n\geq 1$ we have $L^{n}=\begin{pmatrix}p^{n} & q(n)\\ 0 & s^{n}\end{pmatrix},$ where $q(n)=q(p^{n}-s^{n})/(p-s)=q\sum\limits_{i=0}^{n-1}p^{i}s^{n-1-i}$. Since $\det(L)=ps$ and $\mathop{\mathrm{trace}}(L)=p+s$, $\mathop{\mathrm{rad}}(\det(L))$ divides $\mathop{\mathrm{trace}}(L)$ if and only if $\mathop{\mathrm{rad}}(p)=\mathop{\mathrm{rad}}(s)$. In this case we get a more precise result about the linear representation group than the one mentioned in [Theorem 19](#GeneralTheoremDifferentCasesNormalizer){reference-type="ref" reference="GeneralTheoremDifferentCasesNormalizer"}. **Proposition 22**. 
*Let $L\in \mathcal{M}(2,\mathbb{Z})$ be an upper triangular expansion matrix such that $\mathop{\mathrm{rad}}(\det(L))$ does not divide $\mathop{\mathrm{trace}}(L)$. Then, we have one of the following:* 1. *[\[ConflictivecaseUppertriangularcase\]]{#ConflictivecaseUppertriangularcase label="ConflictivecaseUppertriangularcase"} If $\mathop{\mathrm{rad}}(p)$ does not divide $s$ and $\mathop{\mathrm{rad}}(s)$ divides $p$, then a matrix $M\in GL(2,\mathbb{Z})$ is in $\vec{N}(\overleftarrow{\mathbb{Z}^{2}}_{(L^{n})})$ if and only if $(p-s)^{2}m_{12}=m_{21}q^{2}+(p-s)(m_{11}-m_{22})q$. Moreover, $\vec{N}(\overleftarrow{\mathbb{Z}^{2}}_{(L^{n})})$ is virtually $\mathbb{Z}$.* 2. *Assume that $\mathop{\mathrm{rad}}(p)$ divides $s$ and $\mathop{\mathrm{rad}}(s)$ does not divide $p$. Then $\vec{N}(\overleftarrow{\mathbb{Z}^{2}}_{(L^{n})})$ is virtually $\mathbb{Z}$.* 3. *If $\mathop{\mathrm{rad}}(p)$ does not divide $s$ and $\mathop{\mathrm{rad}}(s)$ does not divide $p$, we have two cases:* - *If $2q\in (p-s)\mathbb{Z}$, then $\vec{N}(\overleftarrow{\mathbb{Z}^{2}}_{(L^{n})})$ is isomorphic to $\mathbb{Z}/2\mathbb{Z}\times \mathbb{Z}/2\mathbb{Z}$.* - *Otherwise, $\vec{N}(\overleftarrow{\mathbb{Z}^{2}}_{(L^{n})})$ is isomorphic to $\mathbb{Z}/2\mathbb{Z}$.* *Proof.* Let $M$ be in $\vec{N}(\overleftarrow{\mathbb{Z}^{2}}_{(L^{n})})$. Define the matrix $\overline{M}=(p-s)M-(m_{11}-m_{22})L-(p\cdot m_{22}-m_{11}\cdot s)\textrm{id}_{\mathbb{R}^{2}}$. Then, $\overline{M}$ satisfies [\[normalizercondition3\]](#normalizercondition3){reference-type="eqref" reference="normalizercondition3"}. Moreover, note that $\overline{M}$ has the form $\overline{M}=\begin{pmatrix} 0 & \overline{m_{12}}\\ \overline{m_{21}} & 0 \end{pmatrix}$, where $\overline{m_{12}}=(p-s)m_{12}-(m_{11}-m_{22})q$ and $\overline{m_{21}}=(p-s)m_{21}$, with $\overline{m_{12}},\overline{m_{21}}\in\mathbb{Z}$. 
Now, [\[normalizercondition3\]](#normalizercondition3){reference-type="eqref" reference="normalizercondition3"} implies that for all $n>0$ and all sufficiently large $m>0$, $$\begin{aligned} \begin{pmatrix} -\overline{m_{21}}p^{m}q(n) & \overline{m_{12}}s^{n+m}-\overline{m_{21}} q(m)q(n)\\ \overline{m_{21}}p^{n+m} & \overline{m_{21}}p^{n}q(m) \end{pmatrix} \equiv \begin{pmatrix} 0 & 0\\ 0 & 0 \end{pmatrix}\ (\text{mod}\ p^{n}s^{n}).\label{eqsuppertriangularcase} \end{aligned}$$ Suppose that $\mathop{\mathrm{rad}}(s)$ does not divide $p$. Then, there exists a prime number $t$ dividing $s$ such that for all $n>0$ and $m>0$, $p^{m}$ is an invertible element in $\mathbb{Z}/t^{n}\mathbb{Z}$. Hence, $\overline{m}_{21}\equiv 0\ (\text{mod}\ t^{n})$ for any $n>0$, which implies that $\overline{m}_{21}=0$, so $m_{21}=0$. Now, by [\[eqsuppertriangularcase\]](#eqsuppertriangularcase){reference-type="eqref" reference="eqsuppertriangularcase"}, we get that $$\begin{aligned} \forall n\ge 0, \exists m\ge 0, \quad \overline{m}_{12}s^{m} & \equiv 0 \ (\text{mod}\ p^{n}). \label{eq7uppertriangularcase} \end{aligned}$$ There are two cases: - If $\mathop{\mathrm{rad}}(p)$ does not divide $s$, then [\[eq7uppertriangularcase\]](#eq7uppertriangularcase){reference-type="eqref" reference="eq7uppertriangularcase"} implies that $\overline{m}_{12}=0$. We conclude that $\overline{M}=\begin{pmatrix} 0 & 0\\ 0 & 0 \end{pmatrix}$, i.e., $(p-s)M=(m_{11}-m_{22})L+(p\cdot m_{22}-m_{11}\cdot s)\textrm{id}_{\mathbb{R}^{2}}$. Since $m_{21}=0$, then $M$ has the form $$M=\begin{pmatrix} m_{11} & m_{12}(m_{11},m_{22})\\ 0 & m_{22} \end{pmatrix},$$ where $m_{12}(m_{11},m_{22})$ satisfies $(p-s)m_{12}(m_{11},m_{22})=(m_{11}-m_{22})q$. - Note that $m_{11}=m_{22}$ if and only if $m_{12}=0$. - If $m_{11}\neq m_{22}$, then $m_{11}-m_{22}\in \{-2,2\}$, so $(p-s)m_{12}=\pm2q$. Since $M$ has integer coefficients, this necessarily implies that $2q\in (p-s)\mathbb{Z}$. 
If this condition is satisfied, then $M$ has the form $$M=\begin{pmatrix} m_{11} & \frac{(m_{11}-m_{22})q}{p-s}\\ 0 & m_{22} \end{pmatrix}.$$ It is not difficult to see that $M^{2}$ is the identity matrix. We conclude that $\vec{N}(\overleftarrow{\mathbb{Z}^{2}}_{(L^{n})})$ is isomorphic to $\mathbb{Z}/2\mathbb{Z}\times \mathbb{Z}/2\mathbb{Z}$. If $2q\notin (p-s)\mathbb{Z}$, then $\vec{N}(\overleftarrow{\mathbb{Z}^{2}}_{(L^{n})})$ is isomorphic to $\mathbb{Z}/2\mathbb{Z}$. - If $\mathop{\mathrm{rad}}(p)$ divides $s$, then any $\overline{m}_{12}\in \mathbb{Z}$ satisfies [\[eq7uppertriangularcase\]](#eq7uppertriangularcase){reference-type="eqref" reference="eq7uppertriangularcase"}. Thus, any matrix $M=\begin{pmatrix} m_{11} & m_{12}\\ 0 & m_{22} \end{pmatrix}$ with $|m_{11}m_{22}|=1$ satisfies [\[normalizercondition3\]](#normalizercondition3){reference-type="eqref" reference="normalizercondition3"}. Finally, if $\mathop{\mathrm{rad}}(s)$ divides $p$, then for any $n>0$ and any $m$ large enough $s^{n}$ divides $p^{m}$ and $q(m)$. Let $t$ be a prime number dividing $p$ that does not divide $s$. Then, by [\[eqsuppertriangularcase\]](#eqsuppertriangularcase){reference-type="eqref" reference="eqsuppertriangularcase"} we obtain that $$\begin{aligned} (p-s)^{2}\overline{m}_{12}s^{n+m}\equiv \overline{m}_{21}q^{2}s^{n+m} \ (\text{mod}\ t^{n}). \label{eq8uppertriangularcase} \end{aligned}$$ Since $t$ does not divide $s$, for any $n,m>0$, $s^{n+m}$ is an invertible element in $\mathbb{Z}/t^{n}\mathbb{Z}$. So [\[eq8uppertriangularcase\]](#eq8uppertriangularcase){reference-type="eqref" reference="eq8uppertriangularcase"} is reduced to $$\begin{aligned} \forall n,\ (p-s)^{2}\overline{m}_{12} & \equiv \overline{m}_{21}q^{2} \ (\text{mod}\ t^{n}). \label{eq9uppertriangularcase} \end{aligned}$$ This implies that $(p-s)^{2}\overline{m}_{12}=\overline{m}_{21}q^{2}$. 
Thus, we get that $$\begin{aligned} (p-s)^{2}m_{12}=m_{21}q^{2}+(p-s)(m_{11}-m_{22})q.\label{eq10uppertriangularcase} \end{aligned}$$ This implies that if $M\in \vec{N}(\overleftarrow{\mathbb{Z}^{2}}_{(L^{n})})$, then $M$ is in $\mathop{\mathrm{span}}_{\mathbb{Q}}\left\{ L,\textrm{id}, \begin{pmatrix} 0 & 1\\ (p-s)^2/q^2 & 0 \end{pmatrix}\right\}$. We distinguish two cases: - If $(p-s)$ divides $q$, we write $q=k(p-s)$ for some $k\in \mathbb{Z}$. By [\[eq10uppertriangularcase\]](#eq10uppertriangularcase){reference-type="eqref" reference="eq10uppertriangularcase"}, we have that $$m_{12}=m_{21}k^{2}+k(m_{11}-m_{22}).$$ Since $|\det(M)|=1$ and $\det(M)=(m_{11}+m_{21}\cdot k)(m_{22}-m_{21}\cdot k)$, we get that $|m_{11}+m_{21}\cdot k|=1$ and $|m_{22}-m_{21}\cdot k|=1$. We can parameterize the matrices in $\vec{N}(\overleftarrow{\mathbb{Z}^{2}}_{(L^{n})})$ as follows: $$\left\{\begin{array}{cl} \begin{pmatrix} 1-m\cdot k & -mk^{2}\\ m & 1+m\cdot k \end{pmatrix},\begin{pmatrix} 1-m\cdot k & 2k-mk^{2}\\ m & m\cdot k-1 \end{pmatrix}\\ \begin{pmatrix} -1-m\cdot k & -2k-mk^{2}\\ m & 1+m\cdot k \end{pmatrix}, \begin{pmatrix} -1-m\cdot k & -mk^{2}\\ m & -1+m\cdot k \end{pmatrix} & \colon m\in \mathbb{Z} \end{array}\right\}.$$ Note that this group is virtually $\mathbb{Z}$, since the quotient by $\left\langle \begin{pmatrix} 1-k&-k^2\\1&1+k \end{pmatrix}\right\rangle$ is finite. - If $(p-s)$ does not divide $q$, we will find a matrix $P\in \textrm{GL}(2, \mathbb{Z})$ such that $\vec{N}(\overleftarrow{\mathbb{Z}^{2}}_{(L^{n})})$ is conjugate to the group of matrices $\left\{\begin{pmatrix} m_{11} & m_{12}\\ 0 & m_{22}\end{pmatrix}\colon m_{11},m_{12},m_{22}\in \mathbb{Z}, |m_{11}m_{22}|=1\right\}$ and we conclude that $\vec{N}(\overleftarrow{\mathbb{Z}^{2}}_{(L^{n})})$ is virtually $\mathbb{Z}$. Indeed, set $c=\gcd(p-s,q)$ and $g=(p-s)/c$, $h=q/c$. Since $\gcd(g,h)=1$, Bézout's lemma implies the existence of two numbers $e,f\in \mathbb{Z}$ such that $eh-gf=1$. 
A standard computation shows that $P=\begin{pmatrix} e & f\\ g & h\end{pmatrix}$ is such a matrix.  ◻ ### The general case {#SectionProofOfTheoremGeneralBifurcationNormalizer} We are ready to prove [Theorem 19](#GeneralTheoremDifferentCasesNormalizer){reference-type="ref" reference="GeneralTheoremDifferentCasesNormalizer"}. *Proof of [Theorem 19](#GeneralTheoremDifferentCasesNormalizer){reference-type="ref" reference="GeneralTheoremDifferentCasesNormalizer"}.* We continue to use the notations introduced in [4.3](#sec:ProofTheo3.3){reference-type="ref" reference="sec:ProofTheo3.3"}. Since we already proved the triangular case, we assume that the coefficients of the expansion matrix $L$ satisfy $q\cdot r\neq 0$. It will be useful to note that the Cayley-Hamilton theorem implies that $$\begin{aligned} \label{eq:CayleyHamilton} L^{2}=\mathop{\mathrm{trace}}(L)L-\det(L)\textrm{id}_{\mathbb{R}^2}. \end{aligned}$$ First assume that $\mathop{\mathrm{rad}}(\det(L))$ divides $\mathop{\mathrm{trace}}(L)$. By [\[eq:CayleyHamilton\]](#eq:CayleyHamilton){reference-type="eqref" reference="eq:CayleyHamilton"}, we can conclude that ${L^{2}\equiv 0\ (\text{mod}\ \mathop{\mathrm{rad}}(\det(L)))}$. Hence, for all $n\in \mathbb{N}$ there exists $m(n)\in\mathbb{N}$ large enough such that $L^{m(n)}\equiv 0\ (\text{mod}\ \det(L)^{n})$. Therefore, any matrix in $\textrm{GL}(2, \mathbb{Z})$ satisfies [\[normalizercondition3\]](#normalizercondition3){reference-type="eqref" reference="normalizercondition3"} and we can deduce that $\vec{N}(\overleftarrow{\mathbb{Z}^{2}}_{(L^{n})})=GL(2,\mathbb{Z})$. Now we deal with the case when $\mathop{\mathrm{rad}}(\det(L))$ does not divide $\mathop{\mathrm{trace}}(L)$. In dimension $2$, this implies that $L$ is diagonalizable. 
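The Cayley–Hamilton identity [\[eq:CayleyHamilton\]](#eq:CayleyHamilton){reference-type="eqref" reference="eq:CayleyHamilton"} is easy to sanity-check numerically. A minimal sketch in Python (the matrix $L$ below is our own sample, not one from the text):

```python
# Check L^2 = trace(L)*L - det(L)*id for a sample 2x2 integer matrix.

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def mat_add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def scal(c, A):
    return [[c * A[i][j] for j in range(2)] for i in range(2)]

L = [[1, 1], [-2, 4]]                        # sample expansion matrix (eigenvalues 2 and 3)
tr = L[0][0] + L[1][1]                       # trace(L)
det = L[0][0] * L[1][1] - L[0][1] * L[1][0]  # det(L)
I = [[1, 0], [0, 1]]

assert mat_mul(L, L) == mat_add(scal(tr, L), scal(-det, I))
```

The same identity drives the induction $L^{n}\equiv \mathop{\mathrm{trace}}(L)^{n-1}L\ (\text{mod}\ \det(L))$ used later in the proof.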
Let $M=\begin{pmatrix}m_{11} & m_{12}\\ m_{21} & m_{22}\end{pmatrix}$ be in $\vec{N}(\overleftarrow{\mathbb{Z}^{2}}_{(L^{n})})$, so that it satisfies [\[normalizercondition3\]](#normalizercondition3){reference-type="eqref" reference="normalizercondition3"}. Define the matrix $\overline{M}=rM-m_{21}L-(r\cdot m_{11}-p\cdot m_{21})\textrm{id}_{\mathbb{R}^{2}}$. The matrix $\overline{M}$ also satisfies [\[normalizercondition3\]](#normalizercondition3){reference-type="eqref" reference="normalizercondition3"} and has the form $\overline{M}=\begin{pmatrix} 0 & \overline{m_{12}}\\ 0 & \overline{m_{22}} \end{pmatrix}$, with $\overline{m}_{12},\overline{m}_{22}\in\mathbb{Z}$. Suppose first that $L$ has integer eigenvalues $t_1, t_2\in \mathbb{Z}$, i.e., we can write $$(eh-fg)L= P\begin{pmatrix} t_{1} & 0\\ 0 & t_{2} \end{pmatrix}\mathop{\mathrm{adj}}(P), \quad\text{ for some integer matrix } P= \begin{pmatrix} e & f\\ g & h \end{pmatrix}.$$ If ${|eh-fg|=1}$, then we can use [Proposition 22](#uppertriangularcasenormalizer){reference-type="ref" reference="uppertriangularcasenormalizer"} with [Corollary 18](#CorollariesNormalizerConditionOdometer){reference-type="ref" reference="CorollariesNormalizerConditionOdometer"} to conclude that $\vec{N}(\overleftarrow{\mathbb{Z}^{2}}_{(L^{n})})$ is conjugate (via $P$ in $\textrm{GL}(2, \mathbb{Z})$) to the linear representation group $\vec{N}(\overleftarrow{\mathbb{Z}^{2}}_{(t_{1}^{n}\mathbb{Z}\times t_{2}^{n}\mathbb{Z})})$. The same conclusion holds when $L$ is conjugate (via a $\textrm{GL}(2, \mathbb{Z})$-matrix) to a triangular matrix. We then assume that $|eh-fg|>1$, $e,f,g,h\in \mathbb{Z}$, and $\gcd(e,g)=\gcd(f,h)=1$. For any $n>0$, the coefficients of $L^n$ are given by: $$\begin{array}{ll} p(n)=\dfrac{eht_{1}^{n}-fgt_{2}^{n}}{eh-fg} & q(n)=\dfrac{ef(t_{2}^{n}-t_{1}^{n})}{eh-fg}\\ &\\ r(n)=\dfrac{gh(t_{1}^{n}-t_{2}^{n})}{eh-fg} & s(n)=\dfrac{eht_{2}^{n}-fgt_{1}^{n}}{eh-fg}. 
\end{array}$$ So, [\[normalizercondition3\]](#normalizercondition3){reference-type="eqref" reference="normalizercondition3"} can be rewritten as: $$\begin{aligned} \label{eqsintegereigenvaluecases} gh(t_{1}^{m}-t_{2}^{m})[\overline{m}_{12}(eht_{2}^{n}-fgt_{1}^{n})-\overline{m}_{22}ef(t_{2}^{n}-t_{1}^{n})] \equiv 0\ (\text{mod}\ t_{1}^{n}t_{2}^{n})\\ gh(t_{1}^{m}-t_{2}^{m})[\overline{m}_{22}(eht_{1}^{n}-fgt_{2}^{n})-\overline{m}_{12}gh(t_{1}^{n}-t_{2}^{n})] \equiv 0\ (\text{mod}\ t_{1}^{n}t_{2}^{n})\notag\\ (eht_{2}^{m}-fgt_{1}^{m})[\overline{m}_{12}(eht_{2}^{n}-fgt_{1}^{n})-\overline{m}_{22}ef(t_{2}^{n}-t_{1}^{n})] \equiv 0\ (\text{mod}\ t_{1}^{n}t_{2}^{n}) \notag\\ (eht_{2}^{m}-fgt_{1}^{m})[\overline{m}_{22}(eht_{1}^{n}-fgt_{2}^{n})-\overline{m}_{12}gh(t_{1}^{n}-t_{2}^{n})] \equiv 0\ (\text{mod}\ t_{1}^{n}t_{2}^{n}).\notag \end{aligned}$$ Since $\mathop{\mathrm{rad}}(\det(L))=\mathop{\mathrm{rad}}(t_{1}t_{2})$ does not divide $\mathop{\mathrm{trace}}(L)=t_{1}+t_{2}$, one of the following three cases holds: 1. Suppose that $\mathop{\mathrm{rad}}(t_{1})$ divides $t_{2}$, but there exists a prime number $t$ dividing $t_{2}$ that does not divide $t_{1}$. Then [\[eqsintegereigenvaluecases\]](#eqsintegereigenvaluecases){reference-type="eqref" reference="eqsintegereigenvaluecases"} can be reduced to $$\begin{aligned} \label{eqsintegereigenvaluecasesCASE2} fgt_{1}^{m+n}[\overline{m}_{22}\cdot eh-\overline{m}_{12}\cdot gh] \equiv 0\ (\text{mod}\ t^{n}). \end{aligned}$$ Since $t$ does not divide $t_{1}$, for any $n,m\geq 0$, $t_{1}^{n+m}$ is an invertible element in $\mathbb{Z}/t^{n}\mathbb{Z}$. We can also choose $n$ large enough so that $t^{n}$ does not divide any of the coefficients $e,f,g,h\in \mathbb{Z}$. We then conclude that $\overline{m}_{12}g=\overline{m}_{22}e$. 
This implies that $$\vec{N}(\overleftarrow{\mathbb{Z}^{2}}_{(L^{n})})\subseteq \left\{aL+b\textrm{id}+c\begin{pmatrix} 0 & e\\ 0 & g \end{pmatrix}, a,b,c \in \frac 1r\mathbb{Z}\right\}\cap \textrm{GL}(2, \mathbb{Z}).$$ Since $P^{-1}\begin{pmatrix} 0 & e \\ 0 & g \end{pmatrix} P = \begin{pmatrix} g & h \\ 0 & 0 \end{pmatrix}$, the set $P^{-1} \vec{N}(\overleftarrow{\mathbb{Z}^{2}}_{(L^{n})}) P$ is a subgroup of unimodular upper triangular matrices in $G$, where $$G = \left\{\begin{pmatrix} a & b\\ 0 & c \end{pmatrix}, \quad a,b,c \in \frac 1r\mathbb{Z}, |ac| =1\right\}.$$ Notice that every commutator of $G$ is of the form $\begin{pmatrix} 1 & b \\ 0 & 1 \end{pmatrix}$. So, the derived subgroup $G'$ (generated by the commutators) is isomorphic to $\mathbb{Z}$. Moreover, the abelianization $G/G'$ of $G$ is finite. Therefore, the abelianization of $\vec{N}(\overleftarrow{\mathbb{Z}^{2}}_{(L^{n})})$ is finite, and its derived subgroup is isomorphic to a subgroup (possibly trivial) of $\mathbb{Z}$. Conversely, a direct computation shows that the matrix $P^{-1}\begin{pmatrix}1& \det(P) \\ 0 &1 \end{pmatrix}P$ satisfies [\[normalizercondition3\]](#normalizercondition3){reference-type="eqref" reference="normalizercondition3"}, proving that the derived subgroup of $\vec{N}(\overleftarrow{\mathbb{Z}^{2}}_{(L^{n})})$ is nontrivial. 2. The case where $\mathop{\mathrm{rad}}(t_{2})$ divides $t_{1}$ but $\mathop{\mathrm{rad}}(t_1)$ does not divide $t_2$ is symmetric to the former one. 3. Neither $\mathop{\mathrm{rad}}(t_{1})$ divides $t_{2}$ nor $\mathop{\mathrm{rad}}(t_{2})$ divides $t_{1}$. The computations in the two former cases show that $$\begin{aligned} \overline{m}_{12}g = \overline{m}_{22}e, \quad \text{and} \quad \overline{m}_{12}h = \overline{m}_{22}f. \end{aligned}$$ Since $eh-fg\neq 0$, this implies that $\overline{m}_{12}=0$ and $\overline{m}_{22}=0$, so $\overline{M}=0$. 
We conclude that $M$ commutes with $L$, i.e., the linear representation group $\vec{N}(\overleftarrow{\mathbb{Z}^{2}}_{(L^{n})})$ is equal to $\mathop{\mathrm{Cent}}_{\textrm{GL}(2, \mathbb{Z})}(L)$, which can be isomorphic to $\mathbb{Z}/2\mathbb{Z}$ or $(\mathbb{Z}/2\mathbb{Z})^{2}$. Now we suppose that $L$ does not have integer eigenvalues. A direct induction on [\[eq:CayleyHamilton\]](#eq:CayleyHamilton){reference-type="eqref" reference="eq:CayleyHamilton"} gives, for any $n>0$, that $L^{n}\equiv \mathop{\mathrm{trace}}(L)^{n-1}L\ (\text{mod}\ \det(L))$. Since $\mathop{\mathrm{rad}}(\det(L))$ does not divide $\mathop{\mathrm{trace}}(L)$, there exists a prime number $t$ dividing $\det(L)$ that does not divide $p$ or $s$. Without loss of generality (up to a conjugation with $\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$) we may assume that $t$ does not divide $s$. As $s(n)\equiv \mathop{\mathrm{trace}}(L)^{n-1}s\ (\text{mod}\ \det(L))$, then, for all $n>0$ and $m>0$, $s(m)$ is an invertible element in $\mathbb{Z}/t^{n}\mathbb{Z}$. Hence, [\[normalizercondition3\]](#normalizercondition3){reference-type="eqref" reference="normalizercondition3"} implies that $$\begin{aligned} \overline{m_{12}}s(n)-\overline{m_{22}}q(n) & \equiv 0 \ (\text{mod}\ t^{n})\label{eq5generalcase}\\ -\overline{m_{12}}r(n)+\overline{m_{22}}p(n) & \equiv 0 \ (\text{mod}\ t^{n}),\label{eq6generalcase} \end{aligned}$$ which is equivalent to $$\begin{aligned} \mathop{\mathrm{adj}}(L^{n})\dbinom{\overline{m_{12}}}{\overline{m_{22}}} \equiv \dbinom{0}{0}\ (\text{mod}\ t^{n}).\label{eq7generalcase} \end{aligned}$$ Consider the set $E=\left\{\dbinom{\overline{m_{12}}}{\overline{m_{22}}}\in \mathbb{Z}^{2}\colon\ \text{satisfying}\ \eqref{eq7generalcase}\ \text{for all}\ n>0\right\}$. This set is $\mathop{\mathrm{adj}}(L)$-invariant, and if $\overline{m_{22}}=0$, then [\[eq5generalcase\]](#eq5generalcase){reference-type="eqref" reference="eq5generalcase"} implies that $\overline{m_{12}}=0$. 
Now, take $\dbinom{\overline{m_{12}}^{(1)}}{\overline{m_{22}}^{(1)}},\dbinom{\overline{m_{12}}^{(2)}}{\overline{m_{22}}^{(2)}}\in E$. Note that $$\overline{m_{22}}^{(2)}\dbinom{\overline{m_{12}}^{(1)}}{\overline{m_{22}}^{(1)}}-\overline{m_{22}}^{(1)}\dbinom{\overline{m_{12}}^{(2)}}{\overline{m_{22}}^{(2)}}=\dbinom{\overline{m_{22}}^{(2)}\overline{m_{12}}^{(1)}-\overline{m_{22}}^{(1)}\overline{m_{12}}^{(2)}}{0},$$ hence, by the former remark, $\overline{m_{22}}^{(2)}\dbinom{\overline{m_{12}}^{(1)}}{\overline{m_{22}}^{(1)}}=\overline{m_{22}}^{(1)}\dbinom{\overline{m_{12}}^{(2)}}{\overline{m_{22}}^{(2)}}$. So, $E$ is an $\mathop{\mathrm{adj}}(L)$-invariant $\mathbb{Z}$-module of rank at most 1. Since $L$ does not have integer eigenvalues, $E$ must have rank 0 (a rank-one invariant module would give $\mathop{\mathrm{adj}}(L)$, and hence $L$, an integer eigenvalue). This implies that $\overline{m_{12}}=\overline{m_{22}}=0$. Hence $M$ commutes with $L$. We conclude that $\vec{N}(\overleftarrow{\mathbb{Z}^{2}}_{(L^{n})})$ is equal to $\mathop{\mathrm{Cent}}_{\textrm{GL}(2, \mathbb{Z})}(L)$. We claim that the centralizer $\mathop{\mathrm{Cent}}_{\textrm{GL}(2, \mathbb{Z})}(L)$ is finite when $L$ has no real eigenvalues. Indeed, a matrix $M$ commuting with $L$ has to satisfy $$\label{eq8generalcase} \begin{array}{cc} r\cdot m_{12}-q\cdot m_{21} & =0\\ r\cdot m_{22}-s\cdot m_{21}-r\cdot m_{11}+p\cdot m_{21} & = 0. \end{array}$$ - Suppose $p=s$. In this case, [\[eq8generalcase\]](#eq8generalcase){reference-type="eqref" reference="eq8generalcase"} implies that $m_{11}=m_{22}$ and $m_{21}=m_{12}\cdot r/q$. Note that $L$ has complex eigenvalues if and only if $qr<0$, as determined by the condition $(2p)^{2}-4(p^{2}-qr)<0$. Since $|\det(M)|=1$, we get $|m_{11}^{2}-m_{12}^{2}\cdot r/q|=1$. Therefore, when $qr<0$, there are only finitely many points $(m_{11},m_{12})\in \mathbb{Z}^{2}$ satisfying [\[eq8generalcase\]](#eq8generalcase){reference-type="eqref" reference="eq8generalcase"}. 
- If $p\neq s$, then [\[eq8generalcase\]](#eq8generalcase){reference-type="eqref" reference="eq8generalcase"} implies that $m_{12}=q(m_{11}-m_{22})/(p-s)$ and $m_{21}=r(m_{11}-m_{22})/(p-s)$. Since $M \in GL(2,\mathbb{Z})$, we get that $$\label{eqgeneralcasefordiagonalelements}m_{11}m_{22}-(m_{11}-m_{22})^{2}\dfrac{qr}{(p-s)^{2}}=\pm1.$$ In this case, there are only finitely many solutions if $\mathop{\mathrm{trace}}(L)^{2}-4\det(L)<0$, which is equivalent to $L$ having no real eigenvalues.  ◻ **Remark 23**. In the particular case when $\gcd(\mathop{\mathrm{trace}}(L),\det(L))=1$, we can simplify the proof by noting that [\[eq5generalcase\]](#eq5generalcase){reference-type="eqref" reference="eq5generalcase"}, [\[eq6generalcase\]](#eq6generalcase){reference-type="eqref" reference="eq6generalcase"} imply the existence of two sequences $(k_{n}^{1})_{n>0}, (k_{n}^{2})_{n>0}\subseteq \mathbb{Z}$ such that $\det(L)^{n}k_{n}^{1}=\overline{m_{12}}s(n)-\overline{m_{22}}q(n)$ and $\det(L)^{n}k_{n}^{2}=-\overline{m_{12}}r(n)+\overline{m_{22}}p(n)$, i.e., $$\begin{array}{cl} k_{n}^{1} & =\overline{m_{12}}\dfrac{s(n)}{\det(L)^{n}}-\overline{m_{22}}\dfrac{q(n)}{\det(L)^{n}}\\ &\\ k_{n}^{2} & =-\overline{m_{12}}\dfrac{r(n)}{\det(L)^{n}}+\overline{m_{22}}\dfrac{p(n)}{\det(L)^{n}}. \end{array}$$ Since $L$ is an expansion matrix, $L^{-1}$ is a contraction, so we have that $$\lim\limits_{n\to \infty}\dfrac{p(n)}{\det(L)^{n}}=\lim\limits_{n\to \infty}\dfrac{q(n)}{\det(L)^{n}}=\lim\limits_{n\to \infty}\dfrac{r(n)}{\det(L)^{n}}=\lim\limits_{n\to \infty}\dfrac{s(n)}{\det(L)^{n}}=0.$$ This implies that for all $n\in\mathbb{N}$ large enough, $k_{n}^{1}=k_{n}^{2}=0$, and we conclude that $\overline{m_{12}}=\overline{m_{22}}=0$. [Theorem 19](#GeneralTheoremDifferentCasesNormalizer){reference-type="ref" reference="GeneralTheoremDifferentCasesNormalizer"} implies that the linear representation group of constant-base $\mathbb{Z}^{2}$-odometer systems is computable. 
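As a concrete illustration of the finiteness of $\mathop{\mathrm{Cent}}_{\textrm{GL}(2, \mathbb{Z})}(L)$ established in the proof, one can brute-force the commuting unimodular matrices over a box of entries; a sketch in Python (the matrix $L$ is our own example with $\mathop{\mathrm{trace}}(L)^{2}-4\det(L)<0$, and the bounded search is evidence, not a proof):

```python
from itertools import product

def mul(A, B):
    # 2x2 matrix product over tuples of tuples
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)) for i in range(2))

L = ((1, -2), (1, 1))   # trace^2 - 4*det = 4 - 12 < 0, so no real eigenvalues

# Collect all M in GL(2, Z) with entries in [-5, 5] commuting with L.
cent = []
for a, b, c, d in product(range(-5, 6), repeat=4):
    M = ((a, b), (c, d))
    if abs(a * d - b * c) == 1 and mul(M, L) == mul(L, M):
        cent.append(M)

# Only the matrices +-identity commute: here Cent is isomorphic to Z/2Z.
assert sorted(cent) == [((-1, 0), (0, -1)), ((1, 0), (0, 1))]
```

For this $L$ the equations [\[eq8generalcase\]](#eq8generalcase){reference-type="eqref" reference="eq8generalcase"} force $m_{11}=m_{22}$ and $m_{12}=-2m_{21}$, so the determinant condition becomes $m_{11}^{2}+2m_{21}^{2}=1$, which the search confirms.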
But the techniques developed in this article may not be directly applicable to higher dimensions. This raises the following question: **Question 24**. *Regarding the linear representation group of higher dimensional constant-base odometer systems, are its elements computable? Is its group structure computable?* By "computable elements\", we mean that there exists an algorithm to decide whether a matrix $M$ belongs to the linear representation group or not. The second question involves finding an algorithm to determine the linear representation group, up to isomorphism, as a function of the base matrix $L$. # Minimal subshifts with infinite linear representation group {#sec:NormalizerSubshiftEx} In this section, we present minimal substitutive subshifts with infinite linear representation groups, thereby providing a positive response to a question posed in [@baake2019number]. Their normalizer groups are fully described. We prove the following result. **Theorem 25**. *For any expansion matrix $L \in {\mathcal M} (d, \mathbb{Z})$ with $|\det L|\geq 3$, there exists an aperiodic minimal substitutive $\mathbb{Z}^d$-subshift $X$ with expansion matrix $L$ such that* - *It is coalescent.* - *It is an almost 1-to-1 extension of $\overleftarrow{\mathbb{Z}^{d}}_{(L^{n})}$.* - *Its automorphisms are reduced to the shift transformations.* - *Its linear representation semigroup $\vec{N}(X, S)$ is equal to $$\{M \in \bigcup_{k\ge 0}\bigcap_{n\ge k} L^{n} \textrm{GL}(d, \mathbb{Z})L^{-n}: \exists n_0, L^{-n}ML^n = L^{-p}ML^p \ (\text{mod } L(\mathbb{Z}^d)), \forall n,p\ge n_0\}.$$* - *Its normalizer group is a semidirect product of $\mathbb{Z}^d$ with $\vec{N}(X, S)$.* *The isomorphisms are explicit.* In particular, when $L$ is proportional to the identity, the former result provides an example of a minimal subshift with a linear representation semigroup equal to $\textrm{GL}(d, \mathbb{Z})$. 
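The first defining condition of the semigroup $\vec{N}(X, S)$ in the theorem, $M \in \bigcup_{k\ge 0}\bigcap_{n\ge k} L^{n} \textrm{GL}(d, \mathbb{Z})L^{-n}$, can be probed numerically by testing whether $L^{-n}ML^{n}$ stays integral over a range of $n$; a hedged sketch in Python (a finite-$n$ necessary condition only, with matrices of our own choosing):

```python
from fractions import Fraction

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def inv2(L):
    # exact inverse of a 2x2 integer matrix
    a, b = L[0]; c, d = L[1]
    det = Fraction(a * d - b * c)
    return [[d / det, -b / det], [-c / det, a / det]]

def integral_conjugates(M, L, nmax=8):
    """Check the necessary condition M in L^n GL(2,Z) L^{-n} for n = 1..nmax,
    i.e. that L^{-n} M L^{n} has integer entries."""
    Li = inv2(L)
    C = [[Fraction(x) for x in row] for row in M]
    for _ in range(nmax):
        C = mul(Li, mul(C, L))
        if not all(x.denominator == 1 for row in C for x in row):
            return False
    return True

L = [[2, 0], [0, 3]]
assert integral_conjugates([[1, 0], [0, -1]], L)      # diagonal matrices survive
assert not integral_conjugates([[1, 1], [0, 1]], L)   # the shear picks up a factor 3^n / 2^n

L2 = [[2, 0], [0, 2]]                                 # L proportional to the identity
assert integral_conjugates([[1, 1], [0, 1]], L2)      # every GL(2, Z) matrix passes
```

When $L$ is a scalar matrix, $L^{-n}ML^{n}=M$ for every $M$, which is the mechanism behind the closing remark that the semigroup is all of $\textrm{GL}(d, \mathbb{Z})$ in that case.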
To describe these explicit examples, we will briefly introduce some notions coming from (aperiodic) tiling theory. Most of the references come from [@baake2013aperiodic]. From a tiling perspective, the *half-hex inflation* is a well-known inflation rule analogous to a symbolic substitution (for more properties about this tiling substitution, see [@baake2013aperiodic Section 6.4]). The tiles consist of 6 regular half-hexagons, each one the image of a single one under a rotation of order 6. The inflation rule, up to rotation, is described in [\[halfhextiling\]](#halfhextiling){reference-type="ref" reference="halfhextiling"}. In tiling terminology, it is an *edge-to-edge inflation*, which means that each inflated tile is precisely dissected into copies of the tiles, and the vertices of any tile only intersect with the vertices of the adjacent tiles. This inflation defines an aperiodic tiling of the plane (see [@baake2013aperiodic Example 6.4]). Since the largest edge of any half-hex can only meet the largest edge of the adjacent half-hexes, two half-hexes always join to form a regular hexagon via their largest edges. By applying this procedure, the half-hex tiling can be decomposed into three hexagons, each distinguished by a single diagonal line, as shown in [\[hexagonfigures\]](#hexagonfigures){reference-type="ref" reference="hexagonfigures"} (see [@baake2013aperiodic]). Using these full hexagons, we can define a *pseudo inflation* (using the vocabulary of [@baake2013aperiodic]), which is conjugate to the half-hex tiling as in [\[PseudoInflationHalfHex\]](#PseudoInflationHalfHex){reference-type="ref" reference="PseudoInflationHalfHex"}. From this pseudo inflation, we construct a tiling substitution with only the four shaded hexagons in [\[PseudoInflationHalfHex\]](#PseudoInflationHalfHex){reference-type="ref" reference="PseudoInflationHalfHex"}. 
In this tiling substitution, there is an invariant discrete lattice $\Lambda\subseteq \mathbb{R}^{2}$ generated by the centers of these hexagons, using the vectors ${\bm u}$ and ${\bm v}$ as depicted in [\[PseudoInflationHalfHex\]](#PseudoInflationHalfHex){reference-type="ref" reference="PseudoInflationHalfHex"}. The discrete translation $\Lambda$-subaction is conjugate to the substitutive subshift associated with the following constant-shape substitution, called *half-hex substitution*, $\zeta_{hh}$ with an expansion matrix $L_{hh}=2\cdot \textrm{id}_{\mathbb{Z}^{2}}$ and support $F_{1}^{hh}=\{(0,0),(1,0),(0,1),(1,-1)\}$ $$\begin{array}{llllllllllllll} & & 0 & & & & & 0 & & & & & 0 & \\ 0 & \mapsto & 0 & 2 & & 1 & \mapsto & 1 & 2 & & 2 & \mapsto & 2 & 2 \\ & & & 1 & & & & & 1 & & & & & 1. \\ \end{array}$$ We recall that the notations and the notions we will use are summarized in Section [2.3.2](#sec:SubstSubshift){reference-type="ref" reference="sec:SubstSubshift"}. A straightforward computation, based on [@kirat2010remarksselfaffine Theorem 4.8], reveals that the extreme points of the convex hull of $F_{n}^{hh}$ are $\{(0,0),(0,2^{n}-1),(2^{n}-1,0),(2^{n}-1,1-2^{n})\}$. Since $F_n^{hh}$ is a fundamental domain of $2^n\mathbb{Z}^2$, it has cardinality $4^{n}$. Furthermore, $F_{n}^{hh} \subset \mathop{\mathrm{conv}}(F_{n}^{hh})\cap \mathbb{Z}^{2}$, where $\mathop{\mathrm{conv}}(F_{n}^{hh})$ denotes the convex hull of $F_n^{hh}$. Actually, Pick's formula shows that the cardinalities of $\mathop{\mathrm{conv}}(F_{n}^{hh})\cap \mathbb{Z}^{2}$ and $F_n^{hh}$ coincide, so $F_{n}^{hh}= \mathop{\mathrm{conv}}(F_{n}^{hh})\cap \mathbb{Z}^{2}$. It then follows that $(F_{n}^{hh})_{n\geq 0}$ is a Følner sequence. 
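The supports $F_{n}^{hh}$ can be generated directly from the recursion $F_{n+1}^{hh}=F_{n}^{hh}+L_{hh}^{n}(F_{1}^{hh})$, which lets one confirm the cardinality $4^{n}$ and the fundamental-domain property by machine; a short sketch in Python:

```python
F1 = {(0, 0), (1, 0), (0, 1), (1, -1)}   # support of the half-hex substitution

def support(n):
    """F_n = F_1 + 2*F_1 + ... + 2^(n-1)*F_1, using L_hh = 2*id."""
    F = {(0, 0)}
    for k in range(n):
        F = {(x + 2**k * u, y + 2**k * v) for (x, y) in F for (u, v) in F1}
    return F

for n in range(1, 5):
    F = support(n)
    assert len(F) == 4**n                           # |F_n| = |det L_hh|^n
    residues = {(x % 2**n, y % 2**n) for (x, y) in F}
    assert len(residues) == 4**n                    # F_n is a fundamental domain of 2^n Z^2
```

The fundamental-domain property follows by induction: the $F_{n}$-part of a point determines its residue modulo $2^{n}\mathbb{Z}^{2}$, and the new summand $2^{n}F_{1}$ contributes the four residues modulo $2^{n+1}\mathbb{Z}^{2}$.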
Inspired by the half-hex substitution, we consider an integer expansion matrix $L\in \mathcal{M}(d,\mathbb{Z})$ with $|\det(L)|\geq 3$, a fundamental domain $F_{1}$ of $L(\mathbb{Z}^{d})$ in $\mathbb{Z}^{d}$, and set the finite alphabet $\mathcal{A}= F_1\setminus\{\bm 0\}$. We define the substitution $\sigma_{L} \colon \mathcal{A}\to \mathcal{A}^{F_1}$ as follows: $$\begin{aligned} \label{SubStitutionToeplitzInfiniteSymmetries} \forall a \in \mathcal{A},\quad \sigma_L(a)_{{\bm f}}=\left\{\begin{array}{cl} a & \text{ when } {\bm f}={\bm 0},\\ {\bm f} & \text{ when } {\bm f}\neq {\bm 0}. \end{array}\right.\end{aligned}$$ Under the hypothesis that the sequence of supports $(F_{n})_{n>0}$ is a Følner sequence, we get the substitutive subshift $(X_{\sigma_{L}},S,\mathbb{Z}^{d})$. It is important to notice that all the patterns $\sigma_L(a)$ coincide except at the origin, where the letter is uniquely determined. For computational purposes, we introduce the map $$\begin{aligned} \tau \colon \bm n \in \mathbb{Z}^d \setminus \{ {\bm 0}\} \mapsto {\bm f} \in F_1\setminus\{\bm 0\},\end{aligned}$$ where ${\bm n}= L^{p+1}({\bm z}) + L^p({\bm f})$ with ${\bm z} \in \mathbb{Z}^d$, ${\bm f} \in F_1\setminus\{\bm 0\}$ and $p$ is the smallest integer such that $\bm n \not\in L^{p+1}(\mathbb{Z}^d)$. The value $p$ serves as a multidimensional $L$-adic valuation of $\bm n$. A motivation for introducing this map is the next formula, which enables one to compute the value of a $\sigma_L$-fixed point $\bar{x}$ at any position from the position alone. More precisely, it is straightforward to check that $$\begin{aligned} \label{eq:Fixedpoint} \forall {\bm n} \neq {\bm 0} \in \mathbb{Z}^d, \quad \bar{x}_{\bm n} = \tau (\bm n).\end{aligned}$$ This property is typical of automatic sequences. As a consequence, $\sigma_{L}$ has exactly $|\mathcal{A}|= |\det L| - 1$ fixed points in $X_{\sigma_{L}}$, and they all coincide except at the origin. 
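For the half-hex data ($L=2\cdot \textrm{id}$, $F_{1}^{hh}$ as above) the map $\tau$ is easy to implement, and the fixed-point relations $\bar{x}_{L({\bm z})+{\bm f}}={\bm f}$ (for ${\bm f}\neq {\bm 0}$) and $\bar{x}_{L({\bm z})}=\bar{x}_{\bm z}$ can be checked on a patch; a sketch in Python (our own specialization, not code from the text):

```python
def tau(x, y):
    """tau for L = 2*id and F_1 minus the origin = {(1,0), (0,1), (1,-1)}:
    strip common factors of 2, then read off the residue mod 2Z^2."""
    assert (x, y) != (0, 0)
    while x % 2 == 0 and y % 2 == 0:
        x //= 2
        y //= 2
    # unique letter of F_1 minus the origin congruent to (x, y) mod 2Z^2
    return {(1, 0): (1, 0), (0, 1): (0, 1), (1, 1): (1, -1)}[(x % 2, y % 2)]

# The fixed-point relations on a patch: bar{x}_n = tau(n).
letters = [(1, 0), (0, 1), (1, -1)]
for zx in range(-8, 9):
    for zy in range(-8, 9):
        for f in letters:
            assert tau(2 * zx + f[0], 2 * zy + f[1]) == f   # bar{x}_{2z+f} = f
        if (zx, zy) != (0, 0):
            assert tau(2 * zx, 2 * zy) == tau(zx, zy)       # bar{x}_{2z} = bar{x}_z
```

Only the value at the origin is left undetermined, which matches the count of $|\det L|-1=3$ fixed points coinciding away from ${\bm 0}$.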
Moreover, we have the following standard recognizability property. **Lemma 26**. *Let $\overline{x}$ be a fixed point of $\sigma_{L}$. Then for any integer $n>0$ and any ${\bm a},{\bm b}\in \mathbb{Z}^{d}\setminus\{0\}$, $$\overline{x}_{{\bm a}+F_{n}}=\overline{x}_{{\bm b}+F_{n}} \implies {\bm a}\equiv {\bm b}\ (\text{mod}\ L^{n}(\mathbb{Z}^{d})).$$ In particular, if the sequence of supports of the iterations $\sigma_{L}^{n}$ is Følner, then the substitution $\sigma_{L}$ is recognizable on any fixed point $\overline{x}$ of ${\sigma_L}$, so $\sigma_{L}$ is aperiodic.* *Proof.* We prove the claim by induction on $n>0$. We start with the base case $n=1$; the case ${\bm a},{\bm b}\in L(\mathbb{Z}^{d})$ being immediate, suppose ${\bm a}\notin L(\mathbb{Z}^{d})$, i.e., ${\bm a}=L({\bm c})+{\bm g}$ with ${\bm g}\in F_{1}\setminus\{{\bm 0}\}$. If ${\bm b}\notin L(\mathbb{Z}^{d})$, then ${\bm b}=L({\bm d})+{\bm h}$ with ${\bm h}\in F_{1}\setminus \{{\bm 0}\}$. Since $\overline{x}$ is a fixed point of $\sigma_{L}$, we have that $\overline{x}_{{\bm a}}=\sigma_{L}(\overline{x}_{{\bm c}})_{{\bm g}}={\bm g}$ and $\overline{x}_{{\bm b}}=\sigma_{L}(\overline{x}_{{\bm d}})_{{\bm h}}={\bm h}$, so ${\bm g}={\bm h}$, which implies that ${\bm a}\equiv {\bm b}\ (\text{mod}\ L(\mathbb{Z}^{d}))$. If ${\bm b}\in L(\mathbb{Z}^{d})$, then for any ${\bm f}\in F_{1}\setminus\{{\bm 0}\}$ we have that $\overline{x}_{{\bm b}+{\bm f}}={\bm f}=\overline{x}_{{\bm a}+{\bm f}}$. We consider ${\bm f}\in F_{1}\setminus \{{\bm 0}\}$ such that ${\bm a}+{\bm f}\notin L(\mathbb{Z}^{d})$, i.e., ${\bm a}+{\bm f}=L({\bm e})+{\bm h}$, so $\overline{x}_{{\bm a}+{\bm f}}={\bm h}$, and ${\bm h}={\bm f}$, i.e., ${\bm a}\in L(\mathbb{Z}^{d})$, which is a contradiction. Now, suppose there exists some $n>0$ such that $\overline{x}_{{\bm a}+F_{n}}=\overline{x}_{{\bm b}+F_{n}} \implies {\bm a}\equiv {\bm b}\ (\text{mod}\ L^{n}(\mathbb{Z}^{d}))$. 
Let ${\bm a},{\bm b}\in \mathbb{Z}^{d}$ be such that $$\overline{x}_{{\bm a}+F_{n+1}}=\overline{x}_{{\bm b}+F_{n+1}}.$$ Since $F_{n}\subseteq F_{n+1}$, by the induction hypothesis we have that ${\bm a}\equiv {\bm b}\ (\text{mod}\ L^{n}(\mathbb{Z}^{d}))$. We recall that $F_{n+1}=F_{n}+L^{n}(F_{1})$, so we write $${\bm a} = L^{n+1}({\bm c}) + {\bm f}+L^{n}({\bm g}), \quad {\bm b} = L^{n+1}({\bm d}) + {\bm f}+L^{n}({\bm h})$$ for some ${\bm f}\in F_{n}$, ${\bm g},{\bm h}\in F_{1}$ and ${\bm c},{\bm d}\in \mathbb{Z}^{d}$. We prove that ${\bm g}={\bm h}$. If ${\bm f}={\bm 0}$ we can use an argument similar to the one for the case $n=1$ to conclude that ${\bm g}={\bm h}$. Suppose then ${\bm f}\neq {\bm 0}$. We consider ${\bm j}\in F_{n+1}$ such that ${\bm f}\equiv -{\bm j}\ (\text{mod}\ L^{n}(\mathbb{Z}^{d}))$, so $${\bm a}+{\bm j} = L^{n+1}({\bm c}_{1})+L^{n}({\bm g})\quad {\bm b}+{\bm j} = L^{n+1}({\bm d}_{1})+L^{n}({\bm h}),$$ for some ${\bm c}_{1},{\bm d}_{1}\in \mathbb{Z}^{d}$. Since $\overline{x}$ is a fixed point of $\sigma_{L}$, we get that $$\begin{aligned} \overline{x}_{{\bm a}+{\bm j}} & = \sigma_{L}^{n}(\sigma_{L}(\overline{x}_{{\bm c}_{1}})_{{\bm g}})_{{\bm 0}} = {\bm g}\\ \overline{x}_{{\bm b}+{\bm j}} & = \sigma_{L}^{n}(\sigma_{L}(\overline{x}_{{\bm d}_{1}})_{{\bm h}})_{{\bm 0}} = {\bm h}.\end{aligned}$$ Recall that $\overline{x}_{{\bm a}+{\bm j}}=\overline{x}_{{\bm b}+{\bm j}}$, hence ${\bm g}={\bm h}$, which implies that ${\bm a}\equiv {\bm b}\ (\text{mod}\ L^{n+1}(\mathbb{Z}^{d}))$. ◻ Recall that the map $\pi:(X_{\sigma_{L}},S,\mathbb{Z}^{d})\to (\overleftarrow{\mathbb{Z}^{d}}_{(L^{n})},{\bm +}, \mathbb{Z}^{d})$ is defined in Section [2.3.2](#sec:SubstSubshift){reference-type="ref" reference="sec:SubstSubshift"}. **Proposition 27**. 
*If the sequence of supports of the iterations $\sigma_{L}^{n}$ is Følner, then $\sigma_{L}$ is an aperiodic, primitive constant-shape substitution and the factor map $\pi:(X_{\sigma_{L}},S,\mathbb{Z}^{d})\to (\overleftarrow{\mathbb{Z}^{d}}_{(L^{n})},{\bm +},\mathbb{Z}^{d})$ is almost 1-to-1.* *More precisely, we have $$|\pi^{-1}(\{\overleftarrow{g}\})|=\left\{\begin{array}{cl} |\mathcal{A}| & \text{ if } \overleftarrow{g}\in \mathcal{O}(\overleftarrow{0},{\bm +}),\\ 1 & \text{ otherwise.} \end{array}\right.$$* In particular, the subshift $X_{\sigma_L}$ is a substitutive Toeplitz subshift and its maximal equicontinuous factor is $\overleftarrow{\mathbb{Z}^{d}}_{(L^{n})}$. As an explicit example, the substitutive subshift $X^{hh}$ associated with the half-hex substitution $\zeta_{hh}$ is an almost 1-to-1 extension of the constant-base odometer $\overleftarrow{\mathbb{Z}^{2}}_{(2^{n} \mathbb{Z}^{2})}$. *Proof.* Since every letter of $\mathcal{A}$ occurs in $\sigma_{L}(a)$ for each $a\in \mathcal{A}$, the substitution $\sigma_{L}$ is primitive. The aperiodicity follows from the recognizability given by [Lemma 26](#lem:reconagizability){reference-type="ref" reference="lem:reconagizability"}. Now we study the fibers $\pi^{-1}(\{\overleftarrow{g}\})$ for $\overleftarrow{g}= (g_n)_n\in \overleftarrow{\mathbb{Z}^{d}}_{(L^{n})}$. Suppose $|\pi^{-1}(\{\overleftarrow{g}\})|\geq 2$ and take two distinct points $x_{1},x_{2}\in \pi^{-1}(\{\overleftarrow{g}\})$, i.e., for any $n>0$ there exist $y_{1}^{(n)},y_{2}^{(n)}\in X_{\sigma_{L}}$ such that $x_{i}=S^{{\bm g}_{n}}\sigma_{L}^{n}(y_{i}^{(n)})$, for $i\in \{1,2\}$. Let ${\bm a}\in \mathbb{Z}^{d}$ be such that $x_{1,{\bm a}}\neq x_{2,{\bm a}}$. This implies that $\sigma_{L}^{n}(y_{1}^{(n)})_{{\bm a}+{\bm g}_{n}}\neq \sigma_{L}^{n}(y_{2}^{(n)})_{{\bm a}+{\bm g}_{n}}$. For every $n>0$, we write ${\bm a}+{\bm g}_{n}=L^{n}({\bm b}_{n})+{\bm f}_{n}$, with ${\bm b}_{n}\in \mathbb{Z}^{d}$ and ${\bm f}_{n}\in F_{n}$. 
Since for $i\in \{1,2\}$ we have that $\sigma_{L}^{n}(y_{i}^{(n)})_{{\bm a}+{\bm g}_{n}}=\sigma_{L}^{n}\big((y_{i}^{(n)})_{{\bm b}_{n}}\big)_{{\bm f}_{n}}$ and these letters are different, ${\bm f}_{n}$ must be ${\bm 0}\in F_{n}$ for every $n>0$. This implies that for every $n>0$ $$\begin{aligned} {\bm g}_{n}\equiv -{\bm a}\ (\text{mod}\ L^{n}(\mathbb{Z}^{d})). \end{aligned}$$ Hence $\overleftarrow{g}=\kappa_{(L^{n})}(-{\bm a})$, i.e., $\overleftarrow{g}\in \mathcal{O}(\overleftarrow{0},{\bm +})$. It follows that $x_1$ and $x_2$ are in the orbit of two fixed points of $\sigma_L$. In this case, $\pi^{-1}(\{\overleftarrow{g}\})$ has cardinality $|\mathcal{A}|$, and its elements only differ at the coordinate $-\kappa_{(L^{n})}^{-1}(\overleftarrow{g})$. If $\overleftarrow{g}$ is not in $\mathcal{O}(\overleftarrow{0},{\bm +})$, then $\pi^{-1}(\overleftarrow{g})$ has cardinality 1. We conclude that the factor map $\pi:(X_{\sigma_{L}},S,\mathbb{Z}^{d})\to (\overleftarrow{\mathbb{Z}^{d}}_{(L^{n})},{\bm +},\mathbb{Z}^{d})$ is almost 1-to-1. ◻ As a consequence of [Proposition 27](#ParticularCaseForFibersOdometer){reference-type="ref" reference="ParticularCaseForFibersOdometer"}, we get the following property on the $\textrm{GL}(d, \mathbb{Z})$-endomorphisms of $X_{\sigma_{L}}$. **Corollary 28**. *Assume the sequence of supports of the iterations $\sigma_{L}^{n}$ is Følner. Then any $\textrm{GL}(d, \mathbb{Z})$-endomorphism $\phi \in N(X_{\sigma_{L}},S)$ maps a $\sigma_{L}$-fixed point onto a translate of a $\sigma_{L}$-fixed point.* *Proof.* Since the factor map $\pi:X_{\sigma_{L}}\to \overleftarrow{\mathbb{Z}^{d}}_{(L^{n})}$ is almost 1-to-1 ([Proposition 27](#ParticularCaseForFibersOdometer){reference-type="ref" reference="ParticularCaseForFibersOdometer"}), the odometer system $\overleftarrow{\mathbb{Z}^{d}}_{(L^{n})}$ is the maximal equicontinuous factor of the substitutive subshift $(X_{\sigma_{L}},S,\mathbb{Z}^{d})$. 
So there exists a semigroup homomorphism $\hat{\pi}:N(X_{\sigma_{L}},S)\to N(\overleftarrow{\mathbb{Z}^{d}}_{(L^{n})})$ which is injective ([Lemma 3](#CompatibilityPropertyFactors){reference-type="ref" reference="CompatibilityPropertyFactors"}). Recall that any equicontinuous system is coalescent, so any endomorphism is invertible, and by [\[CompatibilityPropertyEnumerateiii\]](#CompatibilityPropertyEnumerateiii){reference-type="ref" reference="CompatibilityPropertyEnumerateiii"} of [Lemma 3](#CompatibilityPropertyFactors){reference-type="ref" reference="CompatibilityPropertyFactors"} any endomorphism $\phi$ satisfies $$\begin{aligned} \left\{\overleftarrow{g}\in \overleftarrow{\mathbb{Z}^{d}}_{(L^{n})}\colon |\pi^{-1}(\{\overleftarrow{g}\})|=|\mathcal{A}|\right\}\subseteq \hat{\pi}(\phi)\left(\left\{\overleftarrow{g}\in \overleftarrow{\mathbb{Z}^{d}}_{(L^{n})}\colon |\pi^{-1}(\{\overleftarrow{g}\})|=|\mathcal{A}|\right\}\right).\end{aligned}$$ Since $\left\{\overleftarrow{g}\in \overleftarrow{\mathbb{Z}^{d}}_{(L^{n})}\colon |\pi^{-1}(\{\overleftarrow{g}\})|=|\mathcal{A}|\right\}$ is the orbit $\mathcal{O}(\overleftarrow{0},{\bm +}) =\kappa_{(L^{n})}(\mathbb{Z}^{d})$, this implies that $\phi$ maps the $\pi$-fiber of the orbit $\mathcal{O}(\overleftarrow{0},{\bm +})$ onto itself. This $\pi$-fiber consists of the orbits of $\sigma_L$-fixed points. It follows that, up to composing $\phi$ with a shift map, the image of a $\sigma_L$-fixed point $\bar{x}$ by $\phi$ is also a $\sigma_L$-fixed point. ◻ We also characterize the $\textrm{GL}(d, \mathbb{Z})$-endomorphisms of the substitutive subshift $(X_{\sigma_{L}},S,\mathbb{Z}^{d})$ by the following results. The first one concerns endomorphisms and automorphisms. **Lemma 29**. 
*Let $\sigma_{L}$ be defined as in [\[SubStitutionToeplitzInfiniteSymmetries\]](#SubStitutionToeplitzInfiniteSymmetries){reference-type="eqref" reference="SubStitutionToeplitzInfiniteSymmetries"} and assume the sequence of supports of the iterations $\sigma_{L}^{n}$ is Følner. Then the subshift $(X_{\sigma_{L}},S,\mathbb{Z}^{d})$ satisfies* - *it is coalescent,* - *the automorphism group $\mathop{\mathrm{Aut}}(X_{\sigma_{L}},S)$ is trivial, i.e., it consists only of the shift transformations $S^{\bm n}$, ${\bm n} \in \mathbb{Z}^d$.* *Proof.* First we prove that $\mathop{\mathrm{End}}(X_{\sigma_{L}},S)=\left\langle S\right\rangle$. We keep the notations of [Lemma 3](#CompatibilityPropertyFactors){reference-type="ref" reference="CompatibilityPropertyFactors"}. Set $\phi \in \mathop{\mathrm{End}}(X_{\sigma_{L}},S)$. According to [Corollary 28](#cor:FixedPoint){reference-type="ref" reference="cor:FixedPoint"}, $\hat{\pi}(\phi)$ is an endomorphism of the odometer, which means it is a translation, as proven in [Lemma 8](#lem:DescriptAutEquicont){reference-type="ref" reference="lem:DescriptAutEquicont"}. Moreover, since it preserves the $\overleftarrow{0}$-orbit, $\hat{\pi}(\phi)$ is a translation by some element $\kappa_{(L^{n})}({\bm n})$ with ${\bm n}\in \mathbb{Z}^d$. By definition of $\hat{\pi}$, this translation is $\hat{\pi} (S^{\bm n})$, so $\hat{\pi}(\phi)=\hat{\pi}(S^{\bm n})$. We conclude, by the injectivity of $\hat{\pi}$, that $\mathop{\mathrm{End}}(X_{\sigma_{L}},S)=\left\langle S\right\rangle$. As a consequence, $(X_{\sigma_{L}},S,\mathbb{Z}^{d})$ is a coalescent system. ◻ Now, we characterize the elements of their linear representation semigroups. For this, we introduce the following notations. 
Let $N_{L}$ be the group $$\{M \in \bigcup_{k\ge 0}\bigcap_{n\ge k} L^{n} \textrm{GL}(d, \mathbb{Z})L^{-n}: \exists n_0, L^{-n}ML^n = L^{-p}ML^p \ (\text{mod } L(\mathbb{Z}^d)), \forall n,p\ge n_0\}.$$ The crucial properties of an element $M$ of $N_{L}$ are the following: for each integer $n>0$, there is a $\mathbb{Z}^d$-automorphism $M_n$ such that $L^n M_n = M L^n$. Moreover, each automorphism $M_n$ permutes the $L(\mathbb{Z}^{d})$-cosets. With an abuse of notation, we will denote these permutations on $F_1 \setminus\{ \bm 0\}$ by $M_n \ (\text{mod } L(\mathbb{Z}^d))$. These permutations are ultimately all the same for large enough $n$. Linked with the computation of the digits of fixed points, we have the following relation $$\begin{aligned} \label{eq:CommutationTauM} \tau \circ M (\bm n) = M_{p} \circ \tau (\bm n) \ (\text{mod } L(\mathbb{Z}^d)), \end{aligned}$$ where $p$ is the smallest integer such that $\bm n \not\in L^{p+1}(\mathbb{Z}^d)$. **Lemma 30**. *Assume that the sequence of supports of the iterations $\sigma_{L}^{n}$ is Følner. Then the linear representation semigroup $\vec{N}(X_{\sigma_{L}},S)$ is the linear group $N_{L}$.* *Proof.* We first show that $\vec{N}(X_{\sigma_{L}},S) \leqslant N_{L}$. Set $M\in \vec{N}(X_{\sigma_{L}},S)$, and let $\phi\in N(X_{\sigma_{L}}, S)$ be an $M$-endomorphism with radius $r(\phi)$. Up to composing $\phi$ with a shift, we can assume that it preserves the set of $\sigma_{L}$-fixed points ([Corollary 28](#cor:FixedPoint){reference-type="ref" reference="cor:FixedPoint"}). 
Since $\pi$ is compatible with $\textrm{GL}(d, \mathbb{Z})$-endomorphisms ([\[MaximalEquicontinuousFactorCompatibleWithHomomorphisms\]](#MaximalEquicontinuousFactorCompatibleWithHomomorphisms){reference-type="ref" reference="MaximalEquicontinuousFactorCompatibleWithHomomorphisms"}), [Lemma 3](#CompatibilityPropertyFactors){reference-type="ref" reference="CompatibilityPropertyFactors"} provides that $M\in \vec{N}(\overleftarrow{\mathbb{Z}^{d}}_{(L^{n})})$. This set is a group ([Corollary 7](#cor:CorollariesNormalizerConditionOdometer1){reference-type="ref" reference="cor:CorollariesNormalizerConditionOdometer1"}), so $M^{-1}$ also belongs to $\vec{N}(\overleftarrow{\mathbb{Z}^{d}}_{(L^{n})})$, i.e., for any $n>0$, there exists $m>0$ such that $L^{-n}M^{-1}L^{m}$ is an endomorphism of $\mathbb{Z}^{d}$ (see [4](#sec:HomoZ2Odometers){reference-type="ref" reference="sec:HomoZ2Odometers"}). We define $m(n)=\min\limits\{m>0: L^{-n}M^{-1}L^{m} \text{\ is an endomorphism of}\ \mathbb{Z}^{d}\}$. Since the determinant of a $\mathbb{Z}^d$-endomorphism is an integer, we have that $m(n)\geq n$. We will show that actually $m(n)=n$ for any large enough $n$, so that $M^{-1}$ belongs to $\bigcup_{k> 0}\bigcap_{n \geq k} L^n \textrm{GL}(d, \mathbb{Z})L^{-n}$. Since this set is stable under taking inverses, the same is true for $M$. We prove the claim by contradiction, i.e., we assume there is an infinite set $J$ of integers $j$ such that $m(j)>j$. Choose an integer $n$ large enough so that the ball of radius $r(\phi)$ centered at the origin is included in $L^{n}(K_{\sigma_{L}}) + F_n$, see [\[FiniteSubsetFillsZd\]](#FiniteSubsetFillsZd){reference-type="ref" reference="FiniteSubsetFillsZd"}. For such $n$, $L^j(\mathbb{Z}^d) \cap (L^{n+1}(K_{\sigma_{L}}) + F_n)= \{\bm 0\}$, for any $j$ large enough. Since the sequence $(m(j))_{j\in J}$ goes to infinity, one can moreover assume that $m(j+1)>m(j)$. 
With this convention, the group $L^{-j}M^{-1}L^{m(j)}(\mathbb{Z}^d)$ cannot be a subset of $L(\mathbb{Z}^d)$, since otherwise this would imply $m(j) \ge m(j+1)$. So there is some $\bar{\bm g} \in \mathbb{Z}^d$ such that $L^{-j}M^{-1}L^{m(j)}(\bar{\bm g} ) \neq 0 \ (\text{mod } L(\mathbb{Z}^d))$. Moreover, the set $L^{-m(j)-1}ML^{j+1}(\mathbb{Z}^d) \setminus \mathbb{Z}^d$ is not empty, since otherwise the matrix $L^{-m(j)-1}ML^{j+1}$ would have integer coefficients, which is impossible since its determinant $\det L^{j-m(j)}$ is not an integer. This provides an element $\bm h_0\in L^{-m(j)-1}ML^{j+1}(\mathbb{Z}^d) \setminus \mathbb{Z}^d$. Set $\bm h_1= L(\bm h_0)$; then $\bm h_1 \not\in L(\mathbb{Z}^d)$ and $L^{m(j)} (\bm h_1) = M L^{j+1}(\bm h_2)$ for some element $\bm h_2 \in \mathbb{Z}^d$. These elements will enable us to derive a contradiction. Set $\bm g_1 = L^{m(j)}(\bar{\bm g} )$ and $\bm g_2 = \bm g_1 + ML^{j+1}\bm h_2$. By construction, such elements satisfy $$\begin{aligned} M^{-1} \bm g_1= L^j (L^{-j}M^{-1} L^{m(j)}(\bar{\bm g} )) = M^{-1} \bm g_2 \ (\text{mod } L^{j}(\mathbb{Z}^d)),\end{aligned}$$ so that $$\begin{aligned} \tau(M^{-1} \bm g_1) = \tau(M^{-1} \bm g_2) = L^{-j}M^{-1} L^{m(j)} \bar{\bm g} \ (\text{mod } L(\mathbb{Z}^d)).\end{aligned}$$ Moreover, the same relation and the very choice of $j$ give for any $\bm k \in K_{\sigma_L}$, $\bm f \in F_n$ $$\begin{aligned} \label{eq:g1g2} \tau(M^{-1} \bm g_1 + L^{n+1}(\bm k) + \bm f) = \tau(M^{-1} \bm g_2 + L^{n+1}(\bm k) + \bm f).\end{aligned}$$ On the other hand, the very choice of $\bm h_2$ implies that $$\tau(\bm g_2) = \tau (\bar{\bm g} + \bm h_1) \qquad \text{ whereas } \tau(\bm g_1) = \tau (\bar{\bm g}).$$ Since $\bm h_1 \not\in L(\mathbb{Z}^d)$, we get $$\begin{aligned} \label{eq:taug1g2} \tau(\bm g_1) \neq \tau (\bm g_2). \end{aligned}$$ Consider $\overline{x}\in X_{\sigma_{L}}$ a fixed point of $\sigma_{L}$. 
The relation [\[eq:g1g2\]](#eq:g1g2){reference-type="eqref" reference="eq:g1g2"} implies that $$\bar{x}_{|M^{-1}\bm g_1 + L^{n+1}(K_{\sigma_{L}}) + F_n } = \bar{x}_{|M^{-1}\bm g_2 + L^{n+1}(K_{\sigma_{L}}) + F_n }.$$ By the choice of $n\in\mathbb{N}$ and the Curtis-Hedlund-Lyndon theorem ([Theorem 12](#thm:CHLEpimorphism){reference-type="ref" reference="thm:CHLEpimorphism"}), we also have that $\phi(\bar{x})_{\bm g_1} = \phi(\bar{x})_{\bm g_2}$. Since $\phi$ preserves the set of $\sigma_L$-fixed points, we get by [\[eq:Fixedpoint\]](#eq:Fixedpoint){reference-type="eqref" reference="eq:Fixedpoint"} that $\phi(\bar{x})_{\bm g_1}= \tau (\bm g_1)$ and $\phi(\bar{x})_{\bm g_2}= \tau (\bm g_2)$, contradicting [\[eq:taug1g2\]](#eq:taug1g2){reference-type="eqref" reference="eq:taug1g2"}. So $m(n)= n$ for any large enough $n$. We still have to show that $L^{-p}ML^{p} \ (\text{mod } L(\mathbb{Z}^{d}))$ is independent of $p$ for any large enough integer $p$. Let $K_{\sigma_{L}}$ be the finite set provided by [\[FiniteSubsetFillsZd\]](#FiniteSubsetFillsZd){reference-type="ref" reference="FiniteSubsetFillsZd"} and let $n$ be large enough so that $L^n(K_{\sigma_{L}}) + F_n$ contains the ball $B_{r(\phi)}({\bm 0})$. Consider $\bm f \in F_1\setminus\{0\}$ and integers $p, q > n$. We claim that $$\begin{aligned} \label{eq:repetition} \bar{x}_{|L^p({\bm f}) + L^n(K_{\sigma_{L}}) + F_n } =\bar{x}_{|L^q({\bm f}) + L^n(K_{\sigma_{L}}) + F_n }.\end{aligned}$$ Indeed, by the equality [\[eq:Fixedpoint\]](#eq:Fixedpoint){reference-type="eqref" reference="eq:Fixedpoint"} and since $p>n$, we get the following for any $\bm k \in K_{\sigma_{L}}$, $\bm f_n \in F_n$: $$\begin{aligned} \bar{x}_{L^p({\bm f}) + L^n(\bm k) + \bm f_n} = \begin{cases} \tau (\bm f_n) & \text{ if } \bm f_n \in F_n\setminus\{0\} \\ \tau(\bm k) &\text{ if } \bm k \neq \bm 0 \wedge {\bm f}_{n}={\bm 0}\\ \tau ({\bm f}) &\text{ otherwise}. 
\end{cases} \end{aligned}$$ In particular, notice that $\bar{x}_{|L^p({\bm f}) + L^n(K_{\sigma_{L}}) + F_n }$ is independent of $p$ and so the equality [\[eq:repetition\]](#eq:repetition){reference-type="eqref" reference="eq:repetition"} follows. Moreover, the equality [\[eq:repetition\]](#eq:repetition){reference-type="eqref" reference="eq:repetition"} implies, by the Curtis-Hedlund-Lyndon theorem ([Theorem 12](#thm:CHLEpimorphism){reference-type="ref" reference="thm:CHLEpimorphism"}), that $\phi(\bar{x})_{ML^p{\bm f}} = \phi(\bar{x})_{ML^q{\bm f}}$. Recall that $M_{k}$ denotes $L^{-k} M L^{k}$ for any integer $k \ge 0$. Since $\phi(\bar{x})$ is also fixed by $\sigma_L$, the same computation as before provides $\phi(\bar{x})_{ML^p{\bm f}} = \tau (ML^p {{\bm f}}) = M_p \bm f \ (\text{mod } L(\mathbb{Z}^d))$, by [\[eq:CommutationTauM\]](#eq:CommutationTauM){reference-type="eqref" reference="eq:CommutationTauM"}. Similarly, we also have that $\phi(\bar{x})_{ML^q{\bm f}} = M_q \bm f \ (\text{mod } L(\mathbb{Z}^d))$. Hence $M_p \bm f = M_q \bm f \ (\text{mod } L(\mathbb{Z}^d))$ for any $p,q>n$ and $\bm f \in F_1\setminus\{0\}$. It follows that $M$ is in $N_{L}$. We now show the converse inclusion, that is, $N_{L} \leqslant \vec{N}(X_{\sigma_{L}},S)$, so that the two sets are actually equal. Recall that for a matrix $M \in N_{L}$, all the matrices $M_p = L^{-p}ML^p$ permute $L(\mathbb{Z}^d)$-cosets, so they define a permutation of $F_1\setminus \{ \bm 0\}$. Moreover, this permutation is independent of $p$ for any $p$ greater than some $n_0\in\mathbb{N}$. The recognizability property of $\sigma_L$ enables us to define the truncation of an "$L$-adic" valuation as a local map. 
More precisely, from [Lemma 26](#lem:reconagizability){reference-type="ref" reference="lem:reconagizability"}, we can define the local map $v \colon {\mathcal L}_{F_{n_0}}(X_{\sigma_L}) \to \{0, 1, \ldots, n_0\}$ by $$\begin{aligned} v(\bar{x}_{|\bm n + F_{n_0} }) = \min\{n_0, q \text{ where } \bm n \in L^{q}(\mathbb{Z}^d) \setminus L^{q+1}(\mathbb{Z}^d)\}.\end{aligned}$$ We then set the map $\phi_{M}:X_{\sigma_{L}}\to\phi_{M}(X_{\sigma_{L}})$ induced by the local map $$\phi_{M}(x)_{{\bm n}}= M_{v(x_{|M^{-1}{\bm n} + F_{n_0} })} x_{M^{-1}{\bm n}} \ (\text{mod } L(\mathbb{Z}^d)),\quad \text{for any } x\in X_{\sigma_{L}}, {\bm n}\in \mathbb{Z}^{d}.$$ Notice that $\phi_M$ is an $M$-epimorphism onto the subshift $\phi_M(X_{\sigma_L})$ by the Curtis-Hedlund-Lyndon theorem ([Theorem 12](#thm:CHLEpimorphism){reference-type="ref" reference="thm:CHLEpimorphism"}). Actually, the two subshifts are the same, i.e., $\phi_M(X_{\sigma_L}) =X_{\sigma_{L}}$, so that $\phi_M$ is an $M$-endomorphism. To prove this, it is enough to show that $\phi_M$ maps a $\sigma_{L}$-fixed point $\overline{x}\in X_{\sigma_{L}}$ to another fixed point of $\sigma_L$ within $X_{\sigma_L}$, so that $\phi_M(X_{\sigma_L}) \cap X_{\sigma_{L}} \neq \emptyset$. The minimality of the subshift $X_{\sigma_{L}}$ then enables us to conclude that $\phi_M$ is an $M$-endomorphism. Indeed, Equation [\[eq:Fixedpoint\]](#eq:Fixedpoint){reference-type="eqref" reference="eq:Fixedpoint"} provides for any ${\bm n} \in \mathbb{Z}^d\setminus\{{\bm 0}\}$ that $$\begin{aligned} \phi_M(\bar x)_{M\bm n} &= M_{v(\bar x_{|\bm n + F_{n_0} })} \bar x_{{\bm n}} \ (\text{mod } L(\mathbb{Z}^d)) \\ &= M_{v(\bar x_{|\bm n + F_{n_0} })} \tau(\bm n) \ (\text{mod } L(\mathbb{Z}^d)) \\ &= \tau (M \bm n) \qquad \text{by relation } \eqref{eq:CommutationTauM}. \\\end{aligned}$$ So $\phi_M(\bar x)$ is fixed by $\sigma_L$, and the claim follows, i.e., $\phi_M$ is an $M$-endomorphism of $X_{\sigma_L}$. Hence $\vec{N}(X_{\sigma_{L}},S)=N_{L}$, and it is a group. 
[Proposition 2](#CoalescenceOfHommorphismsForCoalescentSystems){reference-type="ref" reference="CoalescenceOfHommorphismsForCoalescentSystems"} then ensures that $N(X_{\sigma_L}, S)$ is a group. ◻ **Lemma 31**. *Assume the sequence of supports of the iterations $\sigma_{L}^{n}$ is Følner. Then the normalizer semigroup $N(X_{\sigma_{L}},S)$ is isomorphic to a semidirect product between $\mathbb{Z}^{d}$ and the linear group $N_{L}$.* *Proof.* From the preceding [Lemma 30](#Lem:LinearRepresentation){reference-type="ref" reference="Lem:LinearRepresentation"} and [Lemma 29](#LemmaNormalizerToeplitzSubstitution){reference-type="ref" reference="LemmaNormalizerToeplitzSubstitution"}, [Proposition 2](#CoalescenceOfHommorphismsForCoalescentSystems){reference-type="ref" reference="CoalescenceOfHommorphismsForCoalescentSystems"} ensures that $N(X_{\sigma_L}, S, \mathbb{Z}^{d})$ is a group. We have to prove that the map $$M\in N_{L} \mapsto \phi_M \in \vec{N}(X_{\sigma_L}, S)$$ is a group embedding. This will show that the exact sequence [\[ExactSequenceForNormalizer\]](#ExactSequenceForNormalizer){reference-type="eqref" reference="ExactSequenceForNormalizer"} splits and ${N}(X_{\sigma_L}, S, \mathbb{Z}^{d})$ is a semidirect product between $\mathbb{Z}^d$ and the linear group $N_{L}$. To prove it is a group morphism, the only nontrivial point to check is the composition relation $\phi_{MM'} = \phi_{M} \circ \phi_{M'}$ for any $M, M' \in N_{L}$. Since the maps $\phi_{MM'}$ and $\phi_{M} \circ \phi_{M'}$ have the same linear part, the closed set $\{x \in X_{\sigma_L} : \phi_{MM'}(x) = \phi_{M} \circ \phi_{M'}(x) \}$ is $S$-invariant. By minimality, we only have to prove it is nonempty. We will show it contains any $\sigma_L$-fixed point $\bar{x}$. Since we have shown in the previous part that the fixed points are preserved under the maps $\phi_M$, we only have to check that the images under the two maps have the same $\bm 0$ coordinate. 
Let $n_0$ be the integer associated with $M$ such that the permutations $M_p \ (\text{mod } L(\mathbb{Z}^d))$ coincide for $p\ge n_0$. Define $n'_0$ similarly for $M'$. It is straightforward to check that the integer $\max(n_0, n'_0)$ plays a similar role for $MM'$. By definition we have $\phi_{M'}(\bar x)_{\bm 0} = M'_{n'_0} \bar{x}_{\bm 0} \ (\text{mod } L(\mathbb{Z}^d))$ and $\phi_{M} \circ \phi_{M'}(\bar x)_{\bm 0} = M_{n_0}M'_{n'_0} \bar{x}_{\bm 0} \ (\text{mod } L(\mathbb{Z}^d))$. Now, a direct computation gives that $(MM')_{\max(n_0, n'_0)} = M_{n_0}M'_{n'_0} \ (\text{mod } L(\mathbb{Z}^d))$ and this shows the claim, i.e., the two images have the same ${\bm 0}$ coordinate. To prove that the morphism is injective, consider a matrix $M$ in its kernel, i.e., such that $\phi_M = \textrm{Id}$. Composing this relation with the shift map $S^{\bm z}$, with $\bm z \in \mathbb{Z}^d$, and since $\phi_M$ is an $M$-endomorphism, we get that $S^{M\bm z} = S^{\bm z}$ for any $\bm z \in \mathbb{Z}^d$. By aperiodicity of the subshift, $M$ has to be the identity matrix. ◻ [Theorem 25](#prop:MainresultSection4){reference-type="ref" reference="prop:MainresultSection4"} summarizes all the previous results. In particular, if the expansion matrix $L$ is a multiple of the identity, then $\vec{N}(X_{\sigma_{L}},S)=\mathop{\mathrm{Cent}}_{\textrm{GL}(d, \mathbb{Z})}(L)=\textrm{GL}(d, \mathbb{Z})$. As a consequence, we get the following direct corollary: **Corollary 32**. *The normalizer semigroup of the half-hex substitution $N(X_{hh},S)$ is a group and it is isomorphic to a semidirect product between $\mathbb{Z}^{2}$ and $\textrm{GL}(2, \mathbb{Z})$. Moreover, its automorphism group $\mathop{\mathrm{Aut}}(X_{hh},S)$ is trivial.* This implies that the half-hex substitutive subshift is a minimal subshift with an infinite linear representation group. In fact, since $\vec{N}(X_{hh},S)=\textrm{GL}(2, \mathbb{Z})$, its linear representation group is the largest possible. 
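The scalar case just mentioned can be sanity-checked directly: when $L$ is a multiple of the identity, $L^{-n}ML^{n}=M$ for every $n$, so every unimodular integer matrix trivially satisfies the conjugation condition defining $N_{L}$. The following minimal sketch (sample matrices and the $2\times 2$ restriction are illustrative choices, not part of the paper) verifies this with exact rational arithmetic:

```python
from fractions import Fraction

def mat_mul(A, B):
    # multiply two 2x2 matrices with exact rational arithmetic
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def conjugate(Linv, M, L, n):
    # compute L^{-n} M L^{n} by repeatedly applying X -> Linv * X * L
    X = M
    for _ in range(n):
        X = mat_mul(mat_mul(Linv, X), L)
    return X

L = [[Fraction(2), Fraction(0)], [Fraction(0), Fraction(2)]]        # L = 2*Id
Linv = [[Fraction(1, 2), Fraction(0)], [Fraction(0), Fraction(1, 2)]]

# a few unimodular integer matrices (det = +-1), including generators of GL(2, Z)
samples = [[[1, 1], [0, 1]], [[0, 1], [1, 0]], [[2, 1], [1, 1]]]
for M in samples:
    M = [[Fraction(x) for x in row] for row in M]
    for n in range(1, 6):
        # conjugation by a scalar matrix is trivial, so M stays in GL(2, Z)
        assert conjugate(Linv, M, L, n) == M
```

Since the conjugates never change, the mod-$L(\mathbb{Z}^{d})$ stabilization condition in the definition of $N_{L}$ is also automatic in this case.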
As another example, consider the matrix $L_{1}=\begin{pmatrix}2 & 0\\ 0 & 4\end{pmatrix}$. By [Theorem 19](#GeneralTheoremDifferentCasesNormalizer){reference-type="ref" reference="GeneralTheoremDifferentCasesNormalizer"}, we have that $\vec{N}(\overleftarrow{\mathbb{Z}^{2}}_{(L_{1}^{n})})=\textrm{GL}(2, \mathbb{Z})$, but [Theorem 25](#prop:MainresultSection4){reference-type="ref" reference="prop:MainresultSection4"} and a standard analysis provide that $\vec{N}(X_{\sigma_{L_{1}}},S)$ is the set of matrices $\left\{\begin{pmatrix}a & 2b\\ 0 & d\end{pmatrix}\colon a,d\in \{-1,1\}, b\in \mathbb{Z}\right\}$. In particular, $\vec{N}(X_{\sigma_{L_{1}}},S)$ is virtually $\mathbb{Z}$: its quotient by the group generated by the matrix $\begin{pmatrix}1& 2\\ 0 & 1\end{pmatrix}$ is finite. It is then natural to wonder which groups $\vec{N}(X_{\sigma_{L}},S)$ arise as $L$ ranges over all expansion matrices, and in particular whether every subgroup of $\textrm{GL}(2, \mathbb{Z})$ can be realized in this way. This question may be very difficult, since it requires precise control of the combinatorics of the substitution. A more tractable approach could be to realize the linear representation groups of specific odometers (see Question [Question 10](#ques:realizationOdometer){reference-type="ref" reference="ques:realizationOdometer"}). With this, we may expect to get an answer to the following: **Question 33**. *Does there exist, for any odometer system $(\overleftarrow{\mathbb{Z}^{d}}_{(Z_{n})})$, an almost 1-to-1 Toeplitz extension $(X,S,\mathbb{Z}^{d})$ such that $\vec{N}(X,S)= \vec{N}(\overleftarrow{\mathbb{Z}^{d}}_{(Z_{n})})$?* Together with Question [Question 10](#ques:realizationOdometer){reference-type="ref" reference="ques:realizationOdometer"}, this would enable us to enrich the family of examples with a large linear representation group. 
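The conjugation-integrality part of the matrix example above, with $L=\operatorname{diag}(2,4)$, can be verified with exact arithmetic: every matrix of the stated family stays integral under conjugation by all powers of $L$, while the coordinate swap leaves $\textrm{GL}(2,\mathbb{Z})$ already at the first step. This is only a sketch of the necessary conjugation condition; the full description of the group relies on the combinatorial analysis mentioned above.

```python
from fractions import Fraction

def conj(M, n):
    # diag(2,4)^{-n} * M * diag(2,4)^{n}: entry (i, j) is scaled by (d_j / d_i)^n
    d = (Fraction(2), Fraction(4))
    return [[M[i][j] * (d[j] / d[i]) ** n for j in range(2)] for i in range(2)]

def integral(M):
    # True when every entry is an integer
    return all(x.denominator == 1 for row in M for x in row)

# matrices of the family [[a, 2b], [0, d]] with a, d = +-1: conjugates stay integral
for a in (-1, 1):
    for d in (-1, 1):
        for b in (-3, 0, 5):
            M = [[Fraction(a), Fraction(2 * b)], [Fraction(0), Fraction(d)]]
            assert all(integral(conj(M, n)) for n in range(6))

# the swap matrix fails already at n = 1: its lower-left entry becomes 1/2
swap = [[Fraction(0), Fraction(1)], [Fraction(1), Fraction(0)]]
assert not integral(conj(swap, 1))
```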
In the more restrictive class of subshifts given by the finite data of a constant-shape substitution, it is natural to ask whether the elements of the normalizer are computable. There is a body of evidence indicating that their automorphisms can be described by an algorithm. But, as illustrated by the characterization in Theorem [Theorem 25](#prop:MainresultSection4){reference-type="ref" reference="prop:MainresultSection4"}, nothing is clear concerning the elements of the linear representation group. Related to Question [Question 24](#ques:LinearRepOdo){reference-type="ref" reference="ques:LinearRepOdo"}, we ask the following: **Question 34**. *Regarding the linear representation group for substitutive constant-shape subshifts, are its elements computable? Is its group structure computable?* Here we mean "computable elements" in the sense that there is an algorithm deciding whether or not a matrix $M$ belongs to the linear representation group. The second question asks for an algorithm determining the linear representation group, up to isomorphism, as a function of the substitution.
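A natural first step toward Question 34 is to test the two necessary conditions in the definition of $N_{L}$ up to a finite cutoff: integrality of $L^{-n}ML^{n}$ and stabilization of its action modulo $L(\mathbb{Z}^{d})$. The sketch below is only such a finite test for diagonal $L$ (the cutoff `N` and the function name are illustrative), not a decision procedure for membership in the linear representation group:

```python
from fractions import Fraction

def passes_up_to(L_diag, M, N):
    """Necessary-condition test for membership in N_L (diagonal L only):
    L^{-n} M L^n must be integral for n = 1, ..., N, and its reduction
    modulo the diagonal entries must be constant on the tail of that range."""
    d = [Fraction(x) for x in L_diag]
    k = len(d)
    reductions = []
    for n in range(1, N + 1):
        # entry (i, j) of L^{-n} M L^n is M[i][j] * (d_j / d_i)^n
        Mn = [[M[i][j] * (d[j] / d[i]) ** n for j in range(k)] for i in range(k)]
        if any(x.denominator != 1 for row in Mn for x in row):
            return False
        # for diagonal L, Z^k / L(Z^k) is the product of the groups Z/d_i,
        # so the induced action is determined by the rows reduced mod d_i
        reductions.append(tuple(int(Mn[i][j]) % int(d[i])
                                for i in range(k) for j in range(k)))
    return len(set(reductions[N // 2:])) == 1

assert passes_up_to((2, 4), [[1, 2], [0, 1]], 8)      # in the family found above
assert not passes_up_to((2, 4), [[0, 1], [1, 0]], 8)  # conjugates leave GL(2, Z)
```

Passing this test for every cutoff is necessary but, as written, not known to be sufficient; making it a genuine algorithm is exactly the content of the question.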
{ "id": "2309.10156", "title": "Large normalizers of ${\\mathbb Z}^{d}$-odometers systems and realization\n on substitutive subshifts", "authors": "Christopher Cabezas and Samuel Petite", "categories": "math.DS", "license": "http://creativecommons.org/licenses/by/4.0/" }
--- abstract: | In this short note, we establish a sharp Morrey regularity theory for an even order elliptic system of Rivière type: $$\Delta^{m}u=\sum_{l=0}^{m-1}\Delta^{l}\left\langle V_{l},du\right\rangle +\sum_{l=0}^{m-2}\Delta^{l}\delta\left(w_{l}du\right)+f\qquad \text{ in } B^{2m}\label{eq: Longue-Gastel system}$$ under minimal regularity assumptions on the coefficient functions $V_l, w_l$ and the assumption that $f$ belongs to a certain Morrey space. This can be regarded as a further extension of the recent $L^p$-regularity theory obtained by Guo-Xiang-Zheng [@GXZ2022], and generalizes [@Du-Kang-Wang-2022; @XZ2023] for second and fourth order elliptic systems. address: - Research Center for Mathematics and Interdisciplinary Sciences, Shandong University 266237, Qingdao, P. R. China and Frontiers Science Center for Nonlinear Expectations, Ministry of Education, P. R. China - Research Center for Mathematics and Interdisciplinary Sciences, Shandong University 266237, Qingdao, P. R. China and Frontiers Science Center for Nonlinear Expectations, Ministry of Education, P. R. China author: - Chang-Yu Guo and Wen-Juan Qi title: Sharp Morrey regularity for an even order elliptic system --- [^1] [^2] # Introduction and main results In this paper, we consider the regularity of weak solutions $u\in W^{m,2}(B^{2m},{\mathbb R}^n)$ to the following even order linear elliptic system of Rivière type $$\label{eq:even order linear elliptic system} \Delta^mu=\sum_{l=0}^{m-1}\Delta^l\langle V_l,du\rangle+\sum_{l=0}^{m-2}\Delta^l\delta(w_ldu)+f\qquad\text{in\,\,}B^{2m},$$ where $B^{2m}=B^{2m}_1(0)\subset{\mathbb R}^{2m}$ is the unit ball, $f\in M^{p,p\lambda}(B^{2m},{\mathbb R}^n)$ with $p\ge 1$ and the coefficient functions satisfy $$\begin{split} &w_l\in W^{2l+2-m,2}(B^{2m},{\mathbb R}^{n\times n})\qquad\text{for\,\,}l\in\{0,...,m-2\}\\ &V_l\in W^{2l+1-m,2}(B^{2m},{\mathbb R}^{n\times n}\otimes\wedge^1{\mathbb R}^{2m})\qquad\text{for\,\,}l\in\{0,...,m-1\}. 
\end{split}$$ Moreover, the first order potential $V_0$ has the decomposition $V_0=d\eta+F$ with $$\eta\in W^{2-m,2}(B^{2m},so(n)),\quad F\in W^{2-m,\frac{2m}{m+1},1}(B^{2m},{\mathbb R}^{n\times n}\otimes\wedge^1{\mathbb R}^{2m}).$$ Here $so(n)$ represents the space of $n\times n$ antisymmetric matrices. The homogeneous version of [\[eq:even order linear elliptic system\]](#eq:even order linear elliptic system){reference-type="eqref" reference="eq:even order linear elliptic system"}, that is when $f\equiv0$, was first introduced by Rivière [@R2007] for the case $m=1$, later by Lamm-Rivière [@Lamm-Riviere-2008] for the case $m=2$, and finally by de Longueville and Gastel [@DG2021] for the general case. It includes many interesting geometric models, such as the harmonic mapping equations ($m=1$), the prescribed mean curvature equations ($m=1$), the biharmonic mapping equations ($m=2$), the $m$-polyharmonic mapping equations and so on; we refer the interested readers to [@R2007; @Riviere-Struve-2008; @Chang-W-Y-1999; @Struwe-2008; @Wang-2004-CPAM; @Goldstein-Strzelecki-Zatorska-2009; @GS2009; @Lamm-Wang-2009; @Moser-2015-TAMS; @AY17; @DG2021; @Horter-Lamm-2021; @Guo-Xiang-2021; @GXZ2021; @GXZ2022; @Chen-Zhu-2023; @HJL2023] and the references therein for various aspects regarding this system or polyharmonic mappings. To explain the difficulty toward regularity issues, let us look at the simplest case when $m=1$. In this case, system [\[eq:even order linear elliptic system\]](#eq:even order linear elliptic system){reference-type="eqref" reference="eq:even order linear elliptic system"} with $f\equiv 0$ reduces to a second order elliptic PDE $$\label{eq:2 dim riviere system} -\Delta u=\Omega\cdot\nabla u\qquad\text{in\,\,}B^2,$$ which was initially introduced by Rivière in his celebrated work [@R2007]. 
The right-hand side of [\[eq:2 dim riviere system\]](#eq:2 dim riviere system){reference-type="eqref" reference="eq:2 dim riviere system"} lies merely in $L^1$ and thus prevents a direct application of the standard iteration techniques from the regularity theory of elliptic equations. A fundamental observation, due to Rivière [@R2007], is that the algebraic anti-symmetry of $\Omega$ allows one to find a conservation law for [\[eq:2 dim riviere system\]](#eq:2 dim riviere system){reference-type="eqref" reference="eq:2 dim riviere system"}, turning [\[eq:2 dim riviere system\]](#eq:2 dim riviere system){reference-type="eqref" reference="eq:2 dim riviere system"} into divergence form. Starting from the conservation law, continuity and compactness follow routinely via standard analytic tools. Rivière's conservation law approach was soon extended to higher order systems in [@Lamm-Riviere-2008] ($m=2$) and [@DG2021] ($m\geq 3$). It should be noticed, however, that a direct extension of this approach only gives continuity of weak solutions. A refined approach for Hölder regularity of [\[eq:even order linear elliptic system\]](#eq:even order linear elliptic system){reference-type="eqref" reference="eq:even order linear elliptic system"} (for $f\equiv 0$) can be found in [@Guo-Xiang-2019-Boundary; @Guo-Xiang-2021]. Geometric applications, such as the heat flow or bubbling analysis of polyharmonic mappings, motivate the establishment of a deeper $L^p$-regularity theory for weak solutions to [\[eq:even order linear elliptic system\]](#eq:even order linear elliptic system){reference-type="eqref" reference="eq:even order linear elliptic system"}; see [@ST2013; @Moser-2015-TAMS; @AY17; @Chen-Zhu-2023; @GXZ2022] for more on the motivation/applications. 
The case $m=1$ was studied by Sharp-Topping [@ST2013], the case $m=2$ was considered by Guo-Xiang-Zheng [@GXZ2021] (see also [@Guo-Wang-Xiang-2022-CV] for the supercritical case), and the general case $m\geq 3$ was established very recently by Guo-Xiang-Zheng [@GXZ2022]. The starting point of [@GXZ2022] is the following conservation law of de Longueville and Gastel [@DG2021] (see also [@Horter-Lamm-2021] for another version of conservation law). To describe their conservation law, for $D\subset {\mathbb R}^{2m}$, we set $$\label{eq:theta for small coefficient} \begin{aligned} \theta_{D}:=\sum_{k=0}^{m-2}&\|w_k\|_{W^{2k+2-m,2}(D)}+\sum_{k=1}^{m-1}\|V_k\|_{W^{2k+1-m,2}(D)}\\ &+\|\eta\|_{W^{2-m,2}(D)}+\|F\|_{W^{2-m,\frac{2m}{m+1},1}(D)} \end{aligned}$$ Under a smallness assumption $$\label{eq:smallness assumption} \theta_{B^{2m}_1}<\epsilon_m,$$ they successfully found $$A\in W^{m,2}\cap L^\infty(B^{2m},Gl(n)) \text{ and } B\in W^{2-m,2}(B^{2m},{\mathbb R}^{n\times n}\otimes \wedge^2{\mathbb R}^{2m})$$ which satisfy $$\Delta^{m-1}dA+\sum_{k=0}^{m-1}(\Delta^k A)V_k-\sum_{k=0}^{m-2}(\Delta^k dA)w_k=\delta B,$$ such that $u\in W^{m,2}(B^{2m},{\mathbb R}^n)$ solves [\[eq:even order linear elliptic system\]](#eq:even order linear elliptic system){reference-type="eqref" reference="eq:even order linear elliptic system"} in $B^{2m}$[^3] if and only if it satisfies the following conservation law (namely, in divergence form): $$\label{eq:conservation law 1} \begin{aligned} 0&=\delta\Big[\sum_{l=0}^{m-1}\left(\Delta^{l} A\right) \Delta^{m-l-1} d u-\sum_{l=0}^{m-2}\left(d \Delta^{l} A\right) \Delta^{m-l-1} u \\ &\qquad -\sum_{k=0}^{m-1} \sum_{l=0}^{k-1}\left(\Delta^{l} A\right) \Delta^{k-l-1} d\left\langle V_{k}, d u\right\rangle+\sum_{k=0}^{m-1} \sum_{l=0}^{k-1}\left(d \Delta^{l} A\right) \Delta^{k-l-1}\left\langle V_{k}, d u\right\rangle \\ &\qquad -\sum_{k=0}^{m-2} \sum_{l=0}^{k-2}\left(\Delta^{l} A\right) d \Delta^{k-l-1} \delta\left(w_{k} d u\right)+\sum_{k=0}^{m-2} 
\sum_{l=0}^{k-2}\left(d \Delta^{l} A\right) \Delta^{k-l-1} \delta\left(w_{k} d u\right) \\ &\qquad -\langle B, d u\rangle\Big]+Af, \end{aligned}$$ where $d \Delta^{-1} \delta$ denotes the identity map. The general idea of [@GXZ2022] is similar to Sharp-Topping [@ST2013] but with essential technical differences. One crucial step is to establish a suitable Morrey decay for all the gradients of weak solutions when $f\in L^p$ for $p>1$. Since $L^p\subset M^{1,\lambda}$ for some $\lambda\in (0,1)$, it is natural to weaken the assumption $f\in L^p$ to $f\in M^{1,\lambda}$. Our first main result gives the optimal Morrey decay when $f\in M^{1,\lambda}$. **Theorem 1**. *Let $u\in W^{m,2}(B^{2m},{\mathbb R}^n)$ be a weak solution of [\[eq:even order linear elliptic system\]](#eq:even order linear elliptic system){reference-type="eqref" reference="eq:even order linear elliptic system"} with $f\in M^{1,\lambda}(B^{2m},{\mathbb R}^n)$ for some $0<\lambda<1$. Then $$\nabla^iu\in M^{\frac{2m}{i},\frac{2m\lambda}{i}}_{\text{loc}}(B^{2m})\qquad 1\leq i\leq m,$$ with $$\label{eq:decay estimate} \sum_{i=1}^{m}\|\nabla^iu\|_{M^{2m/i,2m\lambda/i}(B^{2m}_{1/2})}\leq C\left(\|u\|_{W^{m,2}(B^{2m}_1)}+\|f\|_{M^{1,\lambda}(B^{2m}_1)}\right).$$ As a result, we have $u\in C^{0,\lambda}_{\text{loc}}(B^{2m})$ with $$\label{eq:holder estimate 1} \|u\|_{C^{0,\lambda}(B^{2m}_{1/2})}\leq C \left(\|u\|_{W^{m,2}(B^{2m}_1)}+\|f\|_{M^{1,\lambda}(B^{2m}_1)}\right),$$ where $C>0$ is a constant depending only on $m,n,\lambda$ and the coefficient functions $V_l,w_l$.* Theorem [Theorem 1](#thm:holder continuity){reference-type="ref" reference="thm:holder continuity"} reduces to the main result of Du-Kang-Wang [@Du-Kang-Wang-2022] when $m=1$, and to the main result of Xiang-Zheng [@XZ2023] when $m=2$. The sharpness of Theorem [Theorem 1](#thm:holder continuity){reference-type="ref" reference="thm:holder continuity"} can be seen in the following way. 
Consider the model case $\Delta^m u=f\in L^p(B^{2m})$ with $1<p<\frac{2m}{2m-1}$. Then $f\in L^p\subset M^{1,\lambda}$ with $\lambda=2m(1-1/p)\in (0,1)$. On the other hand, $u\in W^{2m,p}_{{\mathop\mathrm{\,loc\,}}}\subset C^{0,\lambda}_{{\mathop\mathrm{\,loc\,}}}$, which shows that the Morrey regularity of $\nabla u$ is optimal. Notice also that Theorem [Theorem 1](#thm:holder continuity){reference-type="ref" reference="thm:holder continuity"} fails for the case $\lambda=1$; see [@GXZ2022] for a non-Lipschitz continuous solution. Our second main result shows that weak solutions of [\[eq:even order linear elliptic system\]](#eq:even order linear elliptic system){reference-type="eqref" reference="eq:even order linear elliptic system"} enjoy higher regularity if $f$ has higher Morrey regularity. **Theorem 2**. *Let $u\in W^{m,2}(B^{2m},{\mathbb R}^n)$ be a weak solution of [\[eq:even order linear elliptic system\]](#eq:even order linear elliptic system){reference-type="eqref" reference="eq:even order linear elliptic system"} with $f\in M^{p,p\lambda}(B^{2m})$ for some $1<p<\frac{2m}{2m-1}$ and $0\leq\lambda<\frac{2m-(2m-1)p}{p}$. Then $$\nabla^iu\in M^{p_i,p_i\lambda}_{\text{loc}}(B^{2m})\qquad 1\leq i\leq m+1,$$ where $p_i=\frac{2mp}{2m-(2m-i)p}$ and $$\label{eq:morrey estimate} \sum_{i=1}^{m+1}\|\nabla^iu\|_{M^{p_i,p_i\lambda}(B^{2m}_{1/2})}\leq C\left(\|u\|_{W^{m,2}(B^{2m}_1)}+\|f\|_{M^{p,p\lambda}(B^{2m}_1)}\right)$$ holds for some constant $C>0$ depending only on $m,n,p,\lambda$ and the coefficient functions $V_l,w_l$.* If $\lambda=0$, then Theorem [Theorem 2](#thm:morrey estimate){reference-type="ref" reference="thm:morrey estimate"} reduces to the main $L^p$-regularity theorem of Guo-Xiang-Zheng [@GXZ2022]. As was observed in [@GXZ2022], one cannot expect higher (Morrey-)Sobolev regularity for $\nabla^iu$ with $i>m+1$ and thus Theorem [Theorem 2](#thm:morrey estimate){reference-type="ref" reference="thm:morrey estimate"} is optimal in this sense. 
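The exponents appearing in the two theorems follow Sobolev numerology: $1/p_i = 1/p - (2m-i)/(2m)$, i.e., $\nabla^i u$ has the integrability of a function with $2m-i$ fewer derivatives than $f$, and the sharpness discussion uses $\lambda=2m(1-1/p)$. A quick exact-arithmetic check (the sample values of $m$ and $p$ are illustrative only):

```python
from fractions import Fraction

m = 3                    # the domain is then B^{2m} = B^6
p = Fraction(11, 10)     # some admissible 1 < p < 2m/(2m-1) = 6/5

for i in range(1, m + 2):
    # p_i = 2mp / (2m - (2m - i) p), as in Theorem 2
    p_i = (2 * m * p) / (2 * m - (2 * m - i) * p)
    # Sobolev relation: 1/p_i = 1/p - (2m - i)/(2m)
    assert 1 / p_i == 1 / p - Fraction(2 * m - i, 2 * m)

# formally taking i = 2m in the same formula recovers p itself
assert (2 * m * p) / (2 * m - (2 * m - 2 * m) * p) == p

# Hoelder/Morrey exponent in the sharpness example: lambda = 2m(1 - 1/p) in (0, 1)
lam = 2 * m * (1 - 1 / p)
assert 0 < lam < 1
```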
Since the general idea for the proofs of Theorem [Theorem 1](#thm:holder continuity){reference-type="ref" reference="thm:holder continuity"} and Theorem [Theorem 2](#thm:morrey estimate){reference-type="ref" reference="thm:morrey estimate"} is very similar to that used in Guo-Xiang-Zheng [@GXZ2022], we will follow closely the presentation in [@GXZ2022] and indicate the necessary changes where needed. # Morrey spaces and fractional Riesz operators Let $\Omega\subset{\mathbb R}^n$ be a bounded smooth domain, $1\leq p<\infty$ and $0\leq\lambda<n$. The Morrey space $M^{p,\lambda}(\Omega)$ consists of functions $f\in L^p(\Omega)$ such that $$\|f\|_{M^{p,\lambda}(\Omega)}\equiv\left(\sup_{x\in\Omega,r>0}r^{-\lambda}\int_{B_r(x)\cap\Omega}|f|^p\right)^{1/p}<\infty.$$ Denote by $L^p_*$ the weak $L^p$ space and define the weak Morrey space $M_*^{p,\lambda}(\Omega)$ as the space of functions $f\in L^p_*(\Omega)$ such that $$\|f\|_{M_*^{p,\lambda}(\Omega)}\equiv\left(\sup_{x\in\Omega,r>0}r^{-\lambda}\|f\|^p_{L^p_*(B_r(x)\cap\Omega)}\right)^{1/p}<\infty,$$ where $$\|f\|^p_{L^p_*(B_r(x)\cap\Omega)}\equiv\sup_{t>0}t^p\left|\{y\in B_r(x)\cap\Omega:|f(y)|>t\}\right|.$$ Let $0<\alpha<n$. Then the Riesz potential $I_\alpha(f)$ of a locally integrable function $f$ on ${\mathbb R}^n$ is the function defined by $$I_\alpha(f)(x):=\frac{1}{c_{\alpha,n}}\int_{{\mathbb R}^n}|x-y|^{\alpha-n}f(y)\,dy,$$ where the constant $c_{\alpha,n}$ is given by $$c_{\alpha,n}=\pi^{n/2}2^\alpha\frac{\Gamma(\alpha/2)}{\Gamma((n-\alpha)/2)}.$$ The following well-known estimates on fractional Riesz operators between Morrey spaces were proved by Adams [@A1975]. **Proposition 3**. *Let $0< \alpha<n$, $0\le \lambda< n$ and $1\le p<\frac{n-\lambda}{\alpha}$. 
There exists a constant $C>0$ depending only on $n,\alpha,\lambda$ and $p$ such that, for all $f\in M^{p,\lambda}({\mathbb R}^n)$, there holds* *(i) If $p>1$, then $$\label{eq:Riesz Adams 1} \|I_\alpha(f)\|_{M^{\frac{(n-\lambda)p}{n-\lambda-\alpha p},\lambda}({\mathbb R}^n)}\leq C\|f\|_{M^{p,\lambda}({\mathbb R}^n)}.$$* *(ii) If $p=1$, then $$\label{eq:Riesz Adams 2} \|I_\alpha(f)\|_{M_{*}^{\frac{n-\lambda}{n-\lambda-\alpha},\lambda}({\mathbb R}^n)}\leq C\|f\|_{M^{1,\lambda}({\mathbb R}^n)}.$$* In particular, when $\lambda=0$, it reduces to the boundedness theory of the Riesz operators between $L^p$ spaces. # Hölder continuity via decay estimates This section is devoted to the proof of Theorem [Theorem 1](#thm:holder continuity){reference-type="ref" reference="thm:holder continuity"}. Following [@GXZ2022 Proof of Lemma 3.2], we divide the proof into four steps. *Proof of Theorem [Theorem 1](#thm:holder continuity){reference-type="ref" reference="thm:holder continuity"}.* First of all, since the result is local and scaling invariant, we may assume that for a sufficiently small $\varepsilon$, the conservation law [\[eq:conservation law 1\]](#eq:conservation law 1){reference-type="eqref" reference="eq:conservation law 1"} holds for some $A,B$ in $B^{2m}_{1/2}$; for details, see [@GXZ2022]. **Step 1.** Rewrite the equation. According to [@GXZ2022 Proposition 3.3], $Adu$ satisfies the equation $$\delta\Delta^{m-1}(Adu)=\sum_{i=1}^{m-1}\delta^i\left(\sum_{j=m-i}^{m}\nabla^jA\nabla^{2m-i-j}u\right)+\delta K+Af,$$ where $\delta$ denotes the divergence operator, $\delta^i$ means taking the divergence $i$ times and $K$ is the last five terms of the conservation law [\[eq:conservation law 1\]](#eq:conservation law 1){reference-type="eqref" reference="eq:conservation law 1"}. **Step 2.** Decompose $Adu$. Extend all the functions from $B_{1/2}$ into the whole space ${\mathbb R}^{2m}$ in a bounded way and for simplicity use the same notations to denote the extended functions. 
For $f$, simply extend it by zero outside $B_{1/2}$. As in [@GXZ2022 Page 299], let $c\log|\cdot|$ be the fundamental solution of $\Delta^m$ in ${\mathbb R}^{2m}$ and define $$u_{11}=c\log*\left(\sum_{i=1}^{m-1}\delta^i\left(\sum_{j=m-i}^{m}\nabla^jA\nabla^{2m-i-j}u\right)+\delta K\right),\quad u_{12}=c\log*(Af)$$ and $$u_2=c\log*\Delta^{m-1}(dA\wedge du).$$ Then we have $\Delta^mu_{11}=\sum_{i=1}^{m-1}\delta^i\left(\sum_{j=m-i}^{m}\nabla^jA\nabla^{2m-i-j}u\right)+\delta K$, $\Delta^mu_{12}=Af$ and $\Delta^mu_2=\Delta^{m-1}(dA\wedge du)$. Thus, we obtain the decomposition $$Adu=du_{11}+du_{12}+d^*u_2+h$$ for some $m$-polyharmonic $1$-form $h$ in $B^{2m}_{1/2}$. **Step 3.** Estimates of $u_{11},u_{12},u_2$ and $h$. For the terms $u_{11}$ and $u_2$, we may repeat the proof in [@GXZ2022 Step 3 in the proof of Lemma 3.2] (even simpler, using the boundedness of the Riesz operators on $L^p$ spaces instead of Lorentz spaces) to obtain $$\sum_{j=1}^{m}\|\nabla^ju_{11}\|_{L^{2m/j}({\mathbb R}^{2m})}\lesssim\varepsilon\sum_{i=1}^{m}\|\nabla^iu\|_{L^{2m/i}(B^{2m}_{1/2})}$$ and $$\sum_{j=1}^{m}\|\nabla^ju_2\|_{L^{2m/j}({\mathbb R}^{2m})}\lesssim\varepsilon\sum_{i=1}^{m}\|\nabla^iu\|_{L^{2m/i}(B^{2m}_{1/2})}.$$ For $u_{12}$, we use the estimate $$|\nabla^ju_{12}|\lesssim I_{2m-j}(|Af|)\qquad\text{for all\,\,}1\leq j\leq 2m$$ and the boundedness of the operator $$I_{2m-j}\colon M^{1,\lambda}({\mathbb R}^{2m})\to M_*^{\frac{2m-\lambda}{j-\lambda},\lambda}({\mathbb R}^{2m})$$ and find that $\nabla^ju_{12}\in M_*^{\frac{2m-\lambda}{j-\lambda},\lambda}({\mathbb R}^{2m})$ with estimate $$\|\nabla^ju_{12}\|_{M_*^{\frac{2m-\lambda}{j-\lambda},\lambda}({\mathbb R}^{2m})}\lesssim\|f\|_{M^{1,\lambda}({\mathbb R}^{2m})}\lesssim\|f\|_{M^{1,\lambda}(B^{2m}_{1/2})}.$$ For any $r>0$, it follows from Hölder's inequality that $$\|\nabla^ju_{12}\|_{L^{2m/j}(B^{2m}_r)} \leq\|\nabla^ju_{12}\|_{L_*^{\frac{2m-\lambda}{j-\lambda}}(B^{2m}_r)}r^{2m\left(\frac{j}{2m}-\frac{j-\lambda}{2m-\lambda}\right)} \leq 
C\|f\|_{M^{1,\lambda}(B^{2m}_{1/2})}r^\lambda.$$ Summing over $j$, we infer that $$\sum_{j=1}^{m}\|\nabla^ju_{12}\|_{L^{2m/j}(B^{2m}_r)}\lesssim \|f\|_{M^{1,\lambda}(B^{2m}_{1/2})}r^\lambda.$$ For the polyharmonic function $h$, it follows from [@GS2009 Lemma 6.2] that, for any $0<r<\frac{1}{4}$, $$\sum_{j=1}^{m}\|\nabla^jh\|_{L^{2m/j}(B^{2m}_r)}\lesssim r\sum_{i=1}^{m}\|\nabla^ih\|_{L^{2m/i}(B^{2m}_{1/2})}.$$ **Step 4.** Conclusion. For any $0<r<\frac{1}{4}$ and $1\leq j\leq m$, the triangle inequality implies that $$\begin{split} \|\nabla^ju\|_{L^{2m/j}(B^{2m}_r)} &\leq\|\nabla^{j-1}(A^{-1}h)\|_{L^{2m/j}(B^{2m}_r)}+\|\nabla^{j-1}(A^{-1}du_{11})\|_{L^{2m/j}(B^{2m}_r)}\\ &\quad+\|\nabla^{j-1}(A^{-1}du_{12})\|_{L^{2m/j}(B^{2m}_r)}+\|\nabla^{j-1}(A^{-1}*du_2)\|_{L^{2m/j}(B^{2m}_r)}\\ &\lesssim r\sum_{i=1}^{m}\|\nabla^ih\|_{L^{2m/i}(B^{2m}_{1/2})}+\varepsilon\sum_{i=1}^{m}\|\nabla^iu\|_{L^{2m/i}(B^{2m}_{1/2})}+\|f\|_{M^{1,\lambda}(B^{2m}_{1/2})}r^\lambda\\ &\leq C(r+\varepsilon)\sum_{i=1}^{m}\|\nabla^iu\|_{L^{2m/i}(B^{2m}_{1/2})}+C\|f\|_{M^{1,\lambda}(B^{2m}_{1/2})}r^\lambda. \end{split}$$ Summing over $j$, we obtain $$\sum_{i=1}^{m}\|\nabla^iu\|_{L^{2m/i}(B^{2m}_r)}\leq C(r+\varepsilon)\sum_{i=1}^{m}\|\nabla^iu\|_{L^{2m/i}(B^{2m}_{1/2})}+C\|f\|_{M^{1,\lambda}(B^{2m}_{1/2})}r^\lambda,$$ where $C$ is a constant depending only on $m,n$ and $\lambda$. Write $$\Theta(r)=\sum_{i=1}^{m}\|\nabla^iu\|_{L^{2m/i}(B^{2m}_r)}.$$ Then for any $0<r<\frac{1}{4}$, $$\Theta(r)\leq C(r+\varepsilon)\Theta(\frac{1}{2})+C\|f\|_{M^{1,\lambda}(B^{2m}_{1/2})}r^\lambda$$ for some $C=C(m,n,\lambda)>0$. 
Now choose $r=\tau$ small such that $2C\tau\leq\tau^{(\lambda+1)/2}$; then choosing $\varepsilon\leq \tau$, we obtain the decay estimate $$\Theta(\tau)\leq\tau^{\frac{\lambda+1}{2}}\Theta(\frac{1}{2})+C\|f\|_{M^{1,\lambda}(B^{2m}_{1/2})}\tau^\lambda.$$ Now using a standard scaling [@GXZ2022 Section 2.3] and iteration argument (see [@GXZ2021 Proof of Theorem 3.1]), we conclude that for any $k\geq 1$, $$\Theta(\tau^k)\leq\tau^{\frac{\lambda+1}{2}}\Theta(\tau^{k-1})+C\|f\|_{M^{1,\lambda}(B^{2m}_{1/2})}\tau^{k\lambda},$$ which implies that $$\Theta(r)\leq Cr^\lambda\left(\Theta(1)+\|f\|_{M^{1,\lambda}(B^{2m}_{1})}\right)$$ for all $0<r<\frac{1}{4}$. This gives $\nabla^iu\in M^{\frac{2m}{i},\frac{2m\lambda}{i}}_{\text{loc}}(B^{2m})$ for all $1\leq i\leq m$ together with the desired estimate [\[eq:decay estimate\]](#eq:decay estimate){reference-type="eqref" reference="eq:decay estimate"}. Finally, Morrey's Dirichlet growth theorem (see e.g. [@G1983]) implies that $u\in C^{0,\lambda}_{\text{loc}}(B^{2m})$ and the Hölder continuity estimate [\[eq:holder estimate 1\]](#eq:holder estimate 1){reference-type="eqref" reference="eq:holder estimate 1"}. This completes the proof of Theorem [Theorem 1](#thm:holder continuity){reference-type="ref" reference="thm:holder continuity"}. ◻ # Optimal local estimates In this section, we shall prove Theorem [Theorem 2](#thm:morrey estimate){reference-type="ref" reference="thm:morrey estimate"}. The general strategy is very similar to [@GXZ2022 Proof of Theorem 1.2]. Before the proof proper, we shall point out three easy consequences of Theorem [Theorem 1](#thm:holder continuity){reference-type="ref" reference="thm:holder continuity"}. 
1\) By Hölder's inequality, we have $$M^{p,p\lambda}(B^{2m}_1)\subset M^{1,\lambda_0}(B^{2m}_1),$$ where $$\label{eq:lambda_0} \lambda_0=\lambda+2m(1-\frac{1}{p}).$$ Thus Theorem [Theorem 1](#thm:holder continuity){reference-type="ref" reference="thm:holder continuity"} implies that $$\sum_{i=1}^{m}\|\nabla^iu\|_{M^{2m/i,2m\lambda_0/i}(B^{2m}_{1/2})}\leq C\left(\|u\|_{W^{m,2}(B^{2m}_1)}+\|f\|_{M^{1,\lambda_0}(B^{2m}_1)}\right)$$ and that $u\in C^{0,\lambda_0}(B^{2m}_{1/2})$ with $$\label{eq:holder estimate} \|u\|_{C^{0,\lambda_0}(B^{2m}_{1/2})}\leq C \left(\|u\|_{W^{m,2}(B^{2m}_1)}+\|f\|_{M^{1,\lambda_0}(B^{2m}_1)}\right).$$ 2\) Since $M^{p,p\lambda}(B^{2m}_1)\subset L^p(B^{2m}_1)$ with $1<p<\frac{2m}{2m-1}$, [@GXZ2022 Theorem 1.2] implies that $u\in W^{m+1,p_{m+1}}(B^{2m}_{1/2})$ together with the estimate $$\|u\|_{W^{m+1,p_{m+1}}(B^{2m}_{1/2})}\leq C\left(\|u\|_{W^{m,2}(B^{2m}_1)}+\|f\|_{L^p(B^{2m}_1)}\right),$$ where $p_{m+1}=\frac{2mp}{2m-(m-1)p}.$ 3\) Thanks to the above $W^{m+1,p_{m+1}}$-regularity, we can then repeat the proof of Theorem [Theorem 1](#thm:holder continuity){reference-type="ref" reference="thm:holder continuity"} to deduce that $$\|\nabla^{m+1}u\|_{M^{\frac{2m}{m+1},\frac{2m\lambda_0}{m+1}}(B^{2m}_{1/2})}\leq C \left(\|u\|_{W^{m,2}(B^{2m}_1)}+\|f\|_{M^{1,\lambda_0}(B^{2m}_1)}\right).$$ Therefore, we summarize the above results to conclude that $$\sum_{i=1}^{m+1}\|\nabla^iu\|_{M^{2m/i,2m\lambda_0/i}(B^{2m}_{1/2})}\leq CM,$$ where $$\label{eq:M} M=\left(\|u\|_{W^{m,2}(B^{2m}_1)}+\|f\|_{M^{p,p\lambda}(B^{2m}_1)}\right).$$ With all the previous ingredients at hand, we are now ready to prove Theorem [Theorem 2](#thm:morrey estimate){reference-type="ref" reference="thm:morrey estimate"}. *Proof of Theorem [Theorem 2](#thm:morrey estimate){reference-type="ref" reference="thm:morrey estimate"}.* Modulo some technical arguments, the proof presented here is very similar to [@GXZ2022 Section 5]. 
By the discussion above, we know that $u\in W^{m+1,p_{m+1}}(B^{2m}_{1/2})$. As in [@GXZ2022 Proof of Theorem 1.2], we consider separately two cases. **Case I**: $m$ is an even integer. According to [@GXZ2022 Corollary 3.4], $A\Delta u$ satisfies the equation $$\begin{split} \Delta^{m-1}(A\Delta u)&=\sum_{i=1}^{m-1}\delta^i\left(\sum_{j=m-i}^{m}\nabla^jA\nabla^{2m-i-j}u\right)-\Delta^{m-1}(dAdu)+\delta K+Af\\ &=\sum_{i=1}^{m-1}\delta^i\left(\sum_{j=m-i}^{m}\nabla^jA\nabla^{2m-i-j}u\right)+\delta K+Af, \end{split}$$ where $\delta$ denotes the divergence operator, $\delta^i$ means taking divergence for $i$ times and $K$ is the last five terms of the conservation law [\[eq:conservation law 1\]](#eq:conservation law 1){reference-type="eqref" reference="eq:conservation law 1"}. Here, we use $\sum_{i}a_i$ to denote a linear combination of $a_i$'s, i.e., $\sum_ia_i=\sum_ic_ia_i$ for some harmless absolute constant $c_i$. **Step 1.** Split $A\Delta u$. Fix $x_0\in B^{2m}_{1/4}$ and $0<r<\frac{1}{4}$. Split $A\Delta u=v+h$ in $B_r(x_0)$ with $v$ and $h$ satisfying $$\begin{cases} \Delta^{m-1}h=0\qquad\quad\,\text{in\,\,}B_r(x_0),\\ \Delta^ih=\Delta^i(A\Delta u)\quad\text{on\,\,}\partial B_r(x_0),\quad 0\leq i\leq m-2 \end{cases}$$ and $$\begin{cases} \Delta^{m-1}v=\Delta^{m-1}(A\Delta u)\qquad\qquad\,\text{in\,\,}B_r(x_0),\\ v=\Delta^{\frac{m}{2}}v=\cdots=\Delta^{m-2}v=0\quad\,\text{on\,\,}\partial B_r(x_0). \end{cases}$$ Then $v$ satisfies $$\Delta^{m-1}v=\sum_{i=1}^{m-1}\delta^i\left(\sum_{j=m-i}^{m}\nabla^jA\nabla^{2m-i-j}u\right)+\delta K+Af.$$ **Step 2.** A duality argument. In this step, we will divide it into two parts. **Part 1.** Note that $p_m=\frac{2p}{2-p}$. 
By the duality of $L^p$-norm, we have $$\|\Delta^{\frac{m-2}{2}}v\|_{L^{p_m}(B_r(x_0))}=\sup_{\varphi\in\mathcal{A}_1}\int_{B_r(x_0)}(\Delta^{\frac{m-2}{2}}v)\varphi,$$ where $$\mathcal{A}_1=\{\varphi\in L^{p_m'}(B_r(x_0),{\mathbb R}^m)\colon\|\varphi\|_{L^{p_m'}(B_r(x_0))}\leq 1\}$$ and $p_m'=\frac{p_m}{p_m-1}$ is the Hölder conjugate exponent of $p_m$. For any $\varphi\in\mathcal{A}_1$, let $\Phi\in W^{\frac{m}{2},p_m'}(B_r(x_0))$ satisfy $$\begin{cases} \Delta^{\frac{m}{2}}\Phi=\varphi\qquad\qquad\qquad\qquad\quad\,\,\,\text{in\,\,}B_r(x_0),\\ \Phi=\Delta\Phi=\cdots=\Delta^{\frac{m}{2}-1}\Phi=0\quad\,\text{on\,\,}\partial B_r(x_0). \end{cases}$$ By the standard elliptic regularity theory, there exists a constant $C_p>0$ such that $$\label{eq:Phi estimate} \|\Phi\|_{W^{m,p_m'}(B_r(x_0))}\leq C_p\|\varphi\|_{L^{p_m'}(B_r(x_0))}\leq C_p.$$ Note that integration by parts gives $$\int_{B_r(x_0)}(\Delta^{\frac{m-2}{2}}v)\varphi=\int_{B_r(x_0)}(\Delta^{\frac{m-2}{2}}v)(\Delta^{\frac{m}{2}}\Phi)=\int_{B_r(x_0)}(\Delta^{m-1}v)\Phi.$$ For the details, see [@GXZ2022 Page 315 - 316]. 
Thus, we have $$\sup_{\varphi\in\mathcal{A}_1}\int_{B_r(x_0)}(\Delta^{\frac{m-2}{2}}v)\varphi \leq C\sup_{\varphi\in\mathcal{A}_2}\int_{B_r(x_0)}(\Delta^{\frac{m-2}{2}}v)(\Delta^{\frac{m}{2}}\Phi) =C\sup_{\varphi\in\mathcal{A}_2}\int_{B_r(x_0)}(\Delta^{m-1}v)\Phi,$$ where $$\mathcal{A}_2=\left\{\Phi\in W^{m,p_m'}(B_r(x_0))\colon\|\Phi\|_{W^{m,p_m'}}\leq 1,\Phi=\Delta\Phi=\cdots=\Delta^{\frac{m}{2}-1}\Phi=0\text{\,\,on\,\,}\partial B_r(x_0)\right\}.$$ Recall that $$\int_{B_r(x_0)}(\Delta^{m-1}v)\Phi=\int_{B_r(x_0)}\left\{\sum_{i=1}^{m-1}\delta^i\left(\sum_{j=m-i}^{m}\nabla^jA\nabla^{2m-i-j}u\right)+\delta K+Af\right\}\Phi.$$ Applying Hölder's inequality and the Sobolev embedding $W^{m,p_m'}_0(B_r(x_0))\subset L^{p'}(B_r(x_0))$, we obtain $$\begin{split} \int_{B_r(x_0)}(Af)\Phi &\leq\|Af\|_{L^p(B_r(x_0))}\|\Phi\|_{L^{p'}(B_r(x_0))} \leq C\|f\|_{L^p(B_r(x_0))}\|\Phi\|_{W_0^{m,p_m'}(B_r(x_0))}\\ &\leq C_p\|f\|_{L^p(B_r(x_0))} \leq C_p\|f\|_{M^{p,p\lambda}(B^{2m})}r^\lambda. \end{split}$$ For the remaining terms, we can follow [@GXZ2022 Step 2 on Page 316] to obtain $$\int_{B_r(x_0)}\left\{\sum_{i=1}^{m-1}\delta^i\left(\sum_{j=m-i}^{m}\nabla^jA\nabla^{2m-i-j}u\right)+\delta K\right\}\Phi\lesssim\varepsilon\sum_{i=1}^{m}\|\nabla^iu\|_{L^{p_i}(B_r(x_0))}.$$ Combining the above two estimates gives $$\|\Delta^{\frac{m-2}{2}}v\|_{L^{p_m}(B_r(x_0))} \leq C\varepsilon\sum_{i=1}^{m}\|\nabla^iu\|_{L^{p_i}(B_r(x_0))}+C\|f\|_{M^{p,p\lambda}(B^{2m})}r^\lambda.$$ **Part 2.** Estimate of $\|\nabla^{m-2}h\|_{L^{p_m}(B_{r/2}(x_0))}$. Since $A\Delta u\in W^{m-1,\frac{2m}{m+1}}(B_r(x_0))$, we can apply [@GXZ2022 Lemma 5.1] to obtain that $h\in W^{m-1,\frac{2m}{m+1}}(B_r(x_0))$ with $$\|h\|_{W^{m-1,\frac{2m}{m+1}}(B_r(x_0))}\leq C\|A\Delta u\|_{W^{m-1,\frac{2m}{m+1}}(B_r(x_0))}\leq C\Gamma(r)r^{\lambda_0},$$ where $\lambda_0$ is defined as in [\[eq:lambda_0\]](#eq:lambda_0){reference-type="eqref" reference="eq:lambda_0"} and $$\Gamma(r)=\sum_{i=1}^{m+1}\|\nabla^iu\|_{L^{p_i}(B_r(x_0))}.$$ 
In particular, this implies that $$\|h\|_{W^{m-1,\frac{2m}{m+1}}(B_r(x_0))}\leq CMr^{\lambda_0},$$ where $M$ is defined as in [\[eq:M\]](#eq:M){reference-type="eqref" reference="eq:M"}. It follows from the poly-harmonicity of $h$ and the Sobolev embedding $W^{1,\frac{2m}{m+1}}(B_r(x_0))\subset L^2(B_r(x_0))$ that $$\begin{split} \|\nabla^{m-2}h\|_{L^{p_m}(B_{r/2}(x_0))} &\leq Cr^{\frac{m(2-p)}{p}}\|\nabla^{m-2}h\|_{L^\infty(B_{r/2}(x_0))} \leq Cr^{\frac{m(2-p)}{p}}\left(\frac{1}{|B_r(x_0)|}\int_{B_r(x_0)}|\nabla^{m-2}h|^2\right)^\frac{1}{2}\\ &\leq Cr^{\frac{m(2-p)}{p}}r^{-m}\|\nabla^{m-2}h\|_{W^{1,\frac{2m}{m+1}}(B_r(x_0))} \leq CMr^{\lambda}. \end{split}$$ **Step 3.** Conclusion. For the upper bound on $\|\nabla u\|_{L^{p_1}}$, using a standard interpolation argument (see e.g. [@Adams-book Section 5.1]) and the Hölder estimate [\[eq:holder estimate\]](#eq:holder estimate){reference-type="eqref" reference="eq:holder estimate"}, we obtain $$\begin{split} \|\nabla u\|_{L^{p_1}(B_{r/4}(x_0))} &\leq C\|\nabla^mu\|_{L^{p_m}(B_{r/4}(x_0))}+Cr^{-m}\|u-u_{B_{r/4}(x_0)}\|_{L^{p_m}(B_{r/4}(x_0))}\\ &\leq C\|\Delta^{\frac{m}{2}}u\|_{L^{p_m}(B_{r/2}(x_0))}+Cr^{-2m(1-1/p)}\|u-u_{B_{r/2}(x_0)}\|_{L^{\infty}(B_{r/2}(x_0))}\\ &\leq C\left(\|\Delta^{\frac{m-2}{2}}v\|_{L^{p_m}(B_{r/2}(x_0))}+\|\Delta^{\frac{m-2}{2}}h\|_{L^{p_m}(B_{r/2}(x_0))}\right)+CMr^{\lambda}\\ &\leq C\varepsilon\sum_{i=1}^{m}\|\nabla^iu\|_{L^{p_i}(B_r(x_0))}+CMr^{\lambda}. 
\end{split}$$ The estimates for the terms $\nabla^2u,\nabla^3u,...,\nabla^mu$ are similar and thus it remains to estimate $\|\nabla^{m+1}u\|_{L^{p_{m+1}}}$. By Step 3 in the proof of Theorem 1.2 in [@GXZ2022] (more precisely, the second-to-last estimate before equation (5.11) there), we have $$\begin{split} \|\nabla^{m+1}u\|_{L^{p_{m+1}}(B_{r/4}(x_0))} &\lesssim \sum_{i=1}^{m}\|\nabla^iu\|_{L^{p_i}(B_{r/2}(x_0))}+r^{-m}\|u-u_{B_{r/2}(x_0)}\|_{L^{p_m}(B_{r/2}(x_0))}\\ &\leq C\varepsilon\sum_{i=1}^{m}\|\nabla^iu\|_{L^{p_i}(B_r(x_0))}+CMr^{\lambda}. \end{split}$$ Combining the above two estimates, we conclude that $$\Gamma(r/4)\leq C\varepsilon\Gamma(r)+CMr^{\lambda}.$$ Applying Simon's iteration lemma (see [@ST2013 Lemma A7]), we infer that there are $\varepsilon_0$ and $r_0(\lambda,p)>0$ sufficiently small such that for all $r\leq r_0$, we have $$\Gamma(r)\leq CMr^{\lambda},$$ which gives [\[eq:morrey estimate\]](#eq:morrey estimate){reference-type="eqref" reference="eq:morrey estimate"}. **Case II**: $m$ is an odd integer. The proof for this case is completely similar to [@GXZ2022 Section 5.2.2] and thus we only outline the main differences. In this case, $u\in W_{{\mathop\mathrm{\,loc\,}}}^{m-1,\eta}$ for any $\eta<p_{m-1}=\frac{2mp}{2m-(m+1)p}$ and $m-1$ is an even integer. Furthermore, we have $$\|\nabla^{m-1}u\|_{L^q\big(B_{\frac{r}{2}}(x_0)\big)}\approx\|\Delta^{\frac{m-3}{2}}(A\Delta u)\|_{L^q\big(B_{\frac{r}{2}}(x_0)\big)}.$$ In this case, we may repeat exactly what we have done in **Case I**. 
The only difference is that instead of first showing that $u\in W^{m,p_m}_{{\mathop\mathrm{\,loc\,}}}$, we first show that $u\in W^{m-1,p_{m-1}}_{{\mathop\mathrm{\,loc\,}}}$ and $$\|u\|_{W^{m-1,p_{m-1}}(B_{\frac{r}{2}}(x_0))}\le Cr^{\lambda}\left(\|f\|_{M^{p,p\lambda}(B_{r}(x_0))}+\|u\|_{W^{m-1,2}(B_{r}(x_0))}\right).$$ With a similar interpolation argument as in the previous case, we may conclude that $$\|u\|_{W^{m,p_m}(B_{\frac{r}{2}}(x_0))}\le Cr^{\lambda}\left(\|f\|_{M^{p,p\lambda}(B_{r}(x_0))}+\|u\|_{W^{m,2}(B_{r}(x_0))}\right).$$ Then once again an interpolation argument leads to $$\|\nabla^{m+1}u\|_{L^{p_{m+1}}\big(B_{\frac{r}{2}}(x_0)\big)}\lesssim \varepsilon\sum_{i=1}^{m}\|\nabla^iu\|_{L^{p_i}(B_r(x_0))}+ r^{\lambda}\left(\|f\|_{M^{p,p\lambda}(B_{r}(x_0))}+\|u\|_{W^{m,2}(B_{r}(x_0))}\right).$$ Finally, as in the even case dealt with above, combining the previous estimate with Simon's iteration lemma gives the desired estimate [\[eq:morrey estimate\]](#eq:morrey estimate){reference-type="eqref" reference="eq:morrey estimate"}. The proof of Theorem [Theorem 2](#thm:morrey estimate){reference-type="ref" reference="thm:morrey estimate"} is complete. ◻ [D. R. Adams]{.smallcaps}, *A note on Riesz potentials*, Duke Math. J. 42(4) (1975), 765-778. [R.A. Adams and J.J.F. Fournier]{.smallcaps}, *Sobolev spaces.* Second edition. Pure and Applied Mathematics (Amsterdam), 140. Elsevier/Academic Press, Amsterdam, 2003. [W. Ai and H. Yin]{.smallcaps}, *Neck analysis of extrinsic polyharmonic maps*, Ann. Global Anal. Geom. 52 (2017), no. 2, 129-156. [S.-Y. A. Chang, L. Wang and P.C. Yang,]{.smallcaps} *A regularity theory of biharmonic maps*, Commun. Pure Appl. Math. 52 (9) (1999), 1113-1137. [Y. Chen, M. Zhu]{.smallcaps}, *Bubbling analysis for extrinsic biharmonic maps from general Riemannian 4-manifolds*, Sci. China Math. 66 (2023), no. 3, 581-600. [F.L. de Longueville and A. 
Gastel]{.smallcaps}, *Conservation laws for even order systems of polyharmonic map type*, Calc. Var. Partial Differ. Equ. 60 (2021), no. 4, 138. [H. Du, Y. Kang and J. Wang]{.smallcaps}, *Morrey regularity theory of Rivière's equation*, Proc. Amer. Math. Soc., to appear 2023. [A. Gastel and C. Scheven]{.smallcaps}, *Regularity of polyharmonic maps in the critical dimension*, Comm. Anal. Geom. 17 (2009), no. 2, 185-226. [M. Giaquinta]{.smallcaps}, *Multiple Integrals in the Calculus of Variations and Nonlinear Elliptic Systems*, Annals of Mathematics Studies, vol. 105, Princeton University Press, Princeton, NJ, 1983. [P. Goldstein, P. Strzelecki and A. Zatorska-Goldstein]{.smallcaps}, *On polyharmonic maps into spheres in the critical dimension*. Ann. Inst. H. Poincaré Anal. Non Linéaire 26 (2009), 1387-1405. [C.-Y. Guo, C. Wang and C.-L. Xiang]{.smallcaps}, *$L^p$-regularity for fourth order elliptic systems with antisymmetric potentials in higher dimensions*, Calc. Var. Partial Differential Equations 62 (2023), no. 1, Paper No. 31. [C.-Y. Guo and C.-L. Xiang]{.smallcaps}, *Regularity of solutions for a fourth order linear system via conservation law*. J. Lond. Math. Soc. (2) 101 (2020), no. 3, 907-922. [C.-Y. Guo and C.-L. Xiang]{.smallcaps}, *Regularity of weak solutions to higher order elliptic systems in critical dimensions*, Trans. Amer. Math. Soc. 374 (2021), no. 5, 3579-3602. [C.-Y. Guo, C.-L. Xiang and G.-F. Zheng]{.smallcaps}, *The Lamm-Rivière system I: $L^p$ regularity theory*, Calc. Var. Partial Differ. Equ. 60 (2021), no. 6, 213. [C.-Y. Guo, C.-L. Xiang and G.-F. Zheng]{.smallcaps}, *$L^p$ regularity theory for even order elliptic systems with antisymmetric first order potentials*, J. Math. Pures Appl. (9) 165 (2022), 286-324. [C.-Y. Guo, C.-L. Xiang and G.-F. Zheng]{.smallcaps}, *Refined conservation law for an even order system with antisymmetric potential*, arXiv-preprint 2022. [W. He, R. Jiang and L. 
Lin]{.smallcaps}, *Existence of extrinsic polyharmonic maps in critical dimensions*. J. Funct. Anal. 285 (2023), no. 5, Paper No. 110020. [J. Hörter and T. Lamm]{.smallcaps}, *Conservation laws for even order elliptic systems in the critical dimensions - a new approach*, Calc. Var. Partial Differential Equations 60, 125 (2021). [T. Lamm and T. Rivière,]{.smallcaps} *Conservation laws for fourth order systems in four dimensions*, Comm. Partial Differential Equations 33 (2008), 245-262. [T. Lamm and C. Wang,]{.smallcaps} *Boundary regularity for polyharmonic maps in the critical dimension.* Adv. Calc. Var. 2 (2009), 1-16. [R. Moser]{.smallcaps}, *An $L^p$ regularity theory for harmonic maps*. Trans. Amer. Math. Soc. 367 (2015), no. 1, 1-30. [T. Rivière,]{.smallcaps} *Conservation laws for conformally invariant variational problems*, Invent. Math. 168 (2007), 1-22. [T. Rivière and M. Struwe,]{.smallcaps} *Partial regularity for harmonic maps and related problems.* Comm. Pure Appl. Math. 61 (2008), 451-463. [B. Sharp and P. Topping]{.smallcaps}, *Decay estimates for Rivière's equation, with applications to regularity and compactness*, Trans. Amer. Math. Soc. 365 (2013), no. 5, 2317-2339. [M. Struwe,]{.smallcaps} *Partial regularity for biharmonic maps, revisited.* Calc. Var. Partial Differential Equations 33 (2008), 249-262. [C.Y. Wang,]{.smallcaps} *Stationary biharmonic maps from $R^m$ into a Riemannian manifold.* Comm. Pure Appl. Math. 57 (2004), 419-444. [C.-L. Xiang and G.-F. Zheng]{.smallcaps}, *Sharp Morrey regularity theory for a fourth order geometrical equation*, arXiv-preprint, 2023. [^1]: Corresponding author: Wen-Juan Qi. [^2]: Both authors are supported by the Young Scientist Program of the Ministry of Science and Technology of China (No. 2021YFA1002200), the National Natural Science Foundation of China (No. 12101362) and the Taishan Scholar Program and the Natural Science Foundation of Shandong Province (No. ZR2022YQ01, ZR2021QA003). 
[^3]: In the original paper [@DG2021], the authors only obtained this on $B^{2m}_{1/2}$, but a minor change of arguments leads to the current form; for details, see [@GXZ2023]
--- abstract: | We study two classes of permutations intimately related to the visual proof of Spitzer's lemma and Huq's generalization of the Chung--Feller theorem. Both classes of permutations are counted by the Fuss--Catalan numbers. The study of one class leads to a generalization of results of Flajolet from continued fractions to continuants. The study of the other class leads to the discovery of a restricted variant of the Foata--Strehl group action. address: - "Department of Mathematics, University of Kentucky, Lexington, KY 40506-0027.`http://www.math.uky.edu/~jrge/, richard.ehrenborg@uky.edu.`" - "Department of Mathematics and Statistics, UNC Charlotte, Charlotte NC 28223-0001. `http://webpages.uncc.edu/ghetyei/, ghetyei@uncc.edu.`" - "Department of Mathematics, University of Kentucky, Lexington, KY 40506-0027. `http://www.math.uky.edu/~readdy/, margaret.readdy@uky.edu.`" author: - Richard EHRENBORG, Gábor HETYEI and Margaret READDY title: Catalan--Spitzer permutations --- # Introduction {#introduction .unnumbered} A classical result of lattice path enumeration arising from tossing $n$ fair coins is the Chung--Feller theorem [@Chung_Feller]. It states that the Catalan number $C_n$ counts not only the lattice paths consisting of unit northeast and southeast steps from $(0,0)$ to $(2n,0)$ that stay above the horizontal axis, but also the paths with any prescribed number $r$ of northeast steps above the horizontal axis: for each $r\in \{0,1,\ldots,n\}$ we have the same Catalan number of lattice paths. Generalizations of this result are due to Spitzer [@Spitzer Theorem 2.1] as well as Huq in his dissertation [@Huq Theorem 2.1.1]. 
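The Chung--Feller statement is easy to confirm by brute force for small $n$. The following Python sketch (the helper names are ours) counts a step as lying above the horizontal axis when the whole edge stays weakly above it.

```python
from itertools import product
from math import comb

def catalan(n):
    """The Catalan number C_n = binom(2n, n) / (n + 1)."""
    return comb(2 * n, n) // (n + 1)

def tally_by_r(n):
    """Tally all paths from (0,0) to (2n,0) with steps (1,1) and (1,-1)
    by r, where 2r is the number of steps lying above the x axis."""
    tally = {r: 0 for r in range(n + 1)}
    for steps in product((1, -1), repeat=2 * n):
        if sum(steps) != 0:      # keep only paths ending on the axis
            continue
        height, above = 0, 0
        for dy in steps:
            if min(height, height + dy) >= 0:  # the edge stays weakly above
                above += 1
            height += dy
        tally[above // 2] += 1   # the number of steps above is always even
    return tally

# Chung--Feller: each r in {0,...,n} is attained by exactly C_n paths.
for n in (1, 2, 3):
    assert tally_by_r(n) == {r: catalan(n) for r in range(n + 1)}
```

This also recovers the consistency check $\binom{2n}{n} = (n+1)\,C_n$: the $n+1$ equal classes partition all bridges.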
All these results may be shown using the following simple visual idea: if we slightly "tilt" the diagram of a lattice path (see Figure [\[figure_infinite_lattice_path\]](#figure_infinite_lattice_path){reference-type="ref" reference="figure_infinite_lattice_path"}), all steps occur at different heights, and the relative order of these heights may be rotated cyclically by changing the designation of the first step in the lattice path. This simple idea was perhaps first used by Raney [@Raney Theorem 2.1], who observed that there is exactly one rotational equivalent of a sequence of $n+1$ positive units and $n$ negative units in which the partial sums are all positive. A question naturally arises: which permutations are relative orders of steps in such tilted pictures of lattice paths? In this paper we partially answer this general question in two specific settings. Both are related to *$k$-Catalan paths*, defined as lattice paths consisting of unit up steps $(1,1)$ and down steps $(1,-k+1)$ which start and end on the horizontal axis but never go below it. The study of the relative order of all steps leads us to a generalization of some results of Flajolet from continued fractions to continuants. The study of the relative orders of the up steps leads us to the discovery of a restricted variant of the Foata--Strehl group action [@Foata-Strehl1; @Foata-Strehl2]. Our paper is structured as follows. In the Preliminaries we review the Chung-Feller theorem [@Chung_Feller], its generalizations by Spitzer [@Spitzer Theorem 2.1] and Huq [@Huq Theorem 2.1.1], and we point out a few connections between the two generalizations. In Section [2](#section_Huq_visual){reference-type="ref" reference="section_Huq_visual"} we outline a visual proof of Huq's results which inspires the definition of the permutations we intend to study. 
We introduce *Catalan--Spitzer permutations* (and their $k$-generalizations) in Section [3](#section_Catalan_Spitzer){reference-type="ref" reference="section_Catalan_Spitzer"} as the relative orders of all steps in a Catalan path. Equivalently, these are obtained by labeling the steps in reverse lexicographic order and listing them in the order they occur along the path. Due to this labeling, a refined count of Catalan--Spitzer permutations amounts to enumerating all Catalan paths that have a given number of steps at a certain level. For the Catalan paths our formulas may be obtained using Flajolet's result [@Flajolet Theorem 1] which provides a generalized continued fraction formula. We generalize these formulas to $k$-Catalan paths by using continuants instead of continued fractions. In Section [4](#section_short_catalan_spitzer){reference-type="ref" reference="section_short_catalan_spitzer"} we observe that the relative order of the up steps alone uniquely determines the Catalan paths. The resulting *short Catalan--Spitzer permutations* may be characterized in terms of the associated *Foata--Strehl trees*, first studied by Foata and Strehl [@Foata-Strehl1; @Foata-Strehl2] who introduced a group action on the set of all permutations using these ordered $0-1-2$ trees. Finally, in Section [5](#section_restricted_Foata_Strehl){reference-type="ref" reference="section_restricted_Foata_Strehl"} we study a restricted variant of the Foata--Strehl group action which takes each short Catalan--Spitzer permutation into another short Catalan--Spitzer permutation. The number of orbits on the set of $C_{n}$ permutations is the Catalan number $C_{n-1}$. This is a consequence of a generating function formula that is applicable to any class of permutations that is closed under the restricted Foata--Strehl group action. In particular, for the set of all permutations the number of orbits is the same as the number of indecomposable permutations. 
Our results inspire revisiting three classical topics: generalizations of the Chung--Feller theorem, Flajolet's continued fraction approach to lattice path enumeration and the Foata--Strehl group actions. They are likely the first to connect these three areas. # Preliminaries This paper focuses on permutations that are associated with the *Chung--Feller theorem* [@Chung_Feller] and some of its generalizations. **Theorem 1** (Chung--Feller). *Among the lattice paths from $(0,0)$ to $(2n,0)$ consisting of $n$ up steps $(1,1)$ and $n$ down steps $(1,-1)$, the number of paths having $2r$ steps above the $x$ axis is the Catalan number $C_{n}=\frac{1}{n+1}\binom{2n}{n}$, independently of $r$, for each $r\in \{0,1,\ldots,n\}$.* In the special case when $r=n$ the Chung--Feller theorem implies that the number of *Dyck paths*, that is, lattice paths of the above type that never go below the $x$ axis from $(0,0)$ to $(2n,0)$, is the Catalan number $C_n$. This well-known special case has been generalized to *$k$-Dyck paths* (whose definition may be found in Lemma [Lemma 2](#lemma_Raney){reference-type="ref" reference="lemma_Raney"} below) by Raney; see [@Graham_Knuth_Patashnik p. 361]. **Lemma 2** (Raney). *The number of lattice paths from $(0,0)$ to $(kn,0)$ consisting of $(k-1)n$ up steps $(1,1)$ and $n$ down steps $(1,1-k)$ that never go below the $x$-axis is the *Fuss--Catalan number* $$\begin{aligned} C_{n,k} & = \frac{1}{kn+1}\binom{kn+1}{n} = \frac{1}{(k-1)n+1}\binom{kn}{n}. \end{aligned}$$* Huq has generalized Theorem [Theorem 1](#theorem_Chung_Feller){reference-type="ref" reference="theorem_Chung_Feller"} to the lattice paths appearing in Lemma [Lemma 2](#lemma_Raney){reference-type="ref" reference="lemma_Raney"} by proving the following result [@Huq Theorem 2.1.1]. **Theorem 3** (Huq). 
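Lemma 2 can be confirmed by exhaustive search for small parameters. The sketch below (our own helper names) also checks that the two closed forms for $C_{n,k}$ agree.

```python
from itertools import combinations
from math import comb

def fuss_catalan(n, k):
    """Fuss--Catalan number C_{n,k}; the two closed forms in Lemma 2 agree."""
    a = comb(k * n + 1, n) // (k * n + 1)
    assert a == comb(k * n, n) // ((k - 1) * n + 1)
    return a

def count_raney_paths(n, k):
    """Count paths of (k-1)n up steps (1,1) and n down steps (1,1-k)
    from (0,0) to (kn,0) that never go below the x axis."""
    total = 0
    for down_positions in combinations(range(k * n), n):
        height, ok = 0, True
        for i in range(k * n):
            height += (1 - k) if i in down_positions else 1
            if height < 0:
                ok = False
                break
        total += ok
    return total

for n in (1, 2, 3):
    for k in (2, 3):
        assert count_raney_paths(n, k) == fuss_catalan(n, k)
```

For $k=2$ the count reduces to ordinary Dyck paths and $C_{n,2}=C_n$.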
*Let $(y_{1},\ldots,y_{m})$ be any sequence of integers whose sum is $1$. Then for each $r\in\{0,1,\ldots, m-1\}$ exactly one of the cyclic shifts $$\begin{aligned} (y_{\sigma(1)},y_{\sigma(2)},\ldots,y_{\sigma(m)}) & \in \{(y_{1}, y_{2}, \ldots, y_{m}), (y_{2}, \ldots, y_{m}, y_{1}), \ldots, (y_{m}, y_{1}, \ldots, y_{m-1})\} \end{aligned}$$ has the property that exactly $r$ of the partial sums $y_{\sigma(1)}+y_{\sigma(2)}+\cdots+y_{\sigma(k)}$ for $1 \leq k \leq m$ are positive.* Huq's proof of Theorem [Theorem 3](#theorem_Huq){reference-type="ref" reference="theorem_Huq"} is a consequence of the following theorem of Spitzer [@Spitzer Theorem 2.1]. **Theorem 4** (Spitzer). *Let $(x_{1},x_{2},\ldots,x_{m})\in {\mathbb R}^m$ be any vector with real coordinates such that $$\begin{aligned} x_{1}+x_{2}+\cdots+x_{m} &= 0\end{aligned}$$ but no shorter cyclic partial sum $x_{i+1}+x_{i+2}+\cdots+x_{j}$ of the coordinates vanishes. Then for each $r\in\{0,1,\ldots,m-1\}$ exactly one of the cyclic shifts $$\begin{aligned} (x_{\sigma(1)},x_{\sigma(2)},\ldots,x_{\sigma(m)}) & \in \{(x_{1}, x_{2}, \ldots, x_{m}), (x_{2}, \ldots, x_{m}, x_{1}), \ldots, (x_{m}, x_{1}, \ldots, x_{m-1})\} \end{aligned}$$ has the property that exactly $r$ of the partial sums $x_{\sigma(1)}+x_{\sigma(2)}+\cdots+x_{\sigma(k)}$ for $1 \leq k \leq m$ are positive. [\[theorem_Spitzer\]]{#theorem_Spitzer label="theorem_Spitzer"}* Indeed, introducing $x_i=y_i-1/m$ for $i=1,2,\ldots,m$, the resulting vector $(x_{1},x_{2},\ldots,x_{m})\in {\mathbb R}^m$ satisfies the conditions of Theorem [\[theorem_Spitzer\]](#theorem_Spitzer){reference-type="ref" reference="theorem_Spitzer"} as the sum $x_1+x_2+\cdots+x_m$ is zero but no shorter partial sum $x_{i+1}+x_{i+2}+\cdots+x_{j}$ of the coordinates, read cyclically, is an integer. 
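Huq's cyclic-shift phenomenon can be checked directly on integer sequences. In the sketch below (our own naming) we count strictly positive prefix sums over all $m$ prefixes; since the total equals $1$, the prefix of length $m$ is always positive, so the $m$ shifts realize each count in $\{1,\ldots,m\}$ exactly once — equivalently, exactly one shift has exactly $r$ positive proper partial sums for each $r\in\{0,\ldots,m-1\}$.

```python
def positive_prefix_sums(seq):
    """Number of strictly positive prefix sums of seq."""
    s, count = 0, 0
    for y in seq:
        s += y
        count += s > 0
    return count

def cyclic_shift_counts(y):
    """Sorted list of positive-prefix-sum counts over all cyclic shifts
    of an integer sequence y summing to 1."""
    assert sum(y) == 1
    m = len(y)
    return sorted(positive_prefix_sums(y[i:] + y[:i]) for i in range(m))

# the m shifts hit each possible count exactly once
assert cyclic_shift_counts([2, -1, 3, -2, -1]) == [1, 2, 3, 4, 5]
assert cyclic_shift_counts([1, 1, -1, 1, -1]) == [1, 2, 3, 4, 5]
```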
Furthermore, any shorter sum $x_{i+1}+\cdots+x_{j}$ is positive if and only if $y_{i+1}+\cdots+y_{j}-1\geq 0$ holds, since $y_{i+1}+\cdots+y_{j}$ is an integer and we have $y_{i+1}+\cdots+y_{j}-1< x_{i+1}+\cdots+x_{j}< y_{i+1}+\cdots+y_{j}$. *Proof of Theorem [\[theorem_Spitzer\]](#theorem_Spitzer){reference-type="ref" reference="theorem_Spitzer"}.* Introducing $$\begin{aligned} z_{i}&= x_{1}+x_{2}+\cdots+ x_{i}, \label{equation_partial_sums} \end{aligned}$$ all cyclically consecutive sums may be expressed as $x_{i+1}+x_{i+2}+\cdots+x_{j}=z_{j}-z_{i}$. This is clear when $i\leq j$, and it is easy to prove when $i>j$ using $z_{m}=0$. Putting the numbers $z_{1}, z_{2},\ldots,z_{m}$ in increasing order, for each $0 \leq r \leq m-1$ there is exactly one $z_{i}$, the $(r+1)$st largest number, for which exactly $r$ of the differences $z_{j}-z_{i}$ are positive. ◻ As observed by Huq [@Huq Corollary 5.1.2], Theorem [Theorem 3](#theorem_Huq){reference-type="ref" reference="theorem_Huq"} has the following consequence. **Corollary 5** (Huq). *The number of lattice paths from $(0,0)$ to $(kn,0)$ consisting of $(k-1)n$ up steps $(1,1)$ and $n$ down steps $(1,1-k)$ with exactly $r$ up steps below the $x$-axis is independent of $r$ for $r\in \{0,1,\ldots,(k-1)n\}$ and is given by the Fuss--Catalan number $C_{n,k}$.* **Remark 6**. *The special instance of Spitzer's theorem when $r=m-1$ is often called Spitzer's lemma; see [@Krattenthaler Lemma 10.4.3]. The special instance of Corollary [Corollary 5](#corollary_Huq){reference-type="ref" reference="corollary_Huq"} when $r=0$ is also a special case of Raney's theorem [@Raney Theorem 2.1]; see [@Graham_Knuth_Patashnik p. 359].* As noted, Theorem [Theorem 3](#theorem_Huq){reference-type="ref" reference="theorem_Huq"} above is a consequence of Spitzer's theorem [\[theorem_Spitzer\]](#theorem_Spitzer){reference-type="ref" reference="theorem_Spitzer"}, but the converse is also true. **Proposition 7**. 
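The ordering argument in the proof can be seen at work on a concrete zero-sum vector; the sketch below (exact rational arithmetic, our own naming) checks that the counts $0,1,\ldots,m-1$ each occur for exactly one cyclic shift.

```python
from fractions import Fraction as F

def positive_partial_sum_counts(x):
    """For each cyclic shift of x, count the strictly positive partial
    sums x_{sigma(1)} + ... + x_{sigma(k)}, 1 <= k <= m; return sorted."""
    m = len(x)
    assert sum(x) == 0
    counts = []
    for i in range(m):
        s, c = F(0), 0
        for v in x[i:] + x[:i]:
            s += v
            c += s > 0
        counts.append(c)
    return sorted(counts)

# zero-sum vector with no vanishing shorter cyclic partial sum
x = [F(7, 10), F(-13, 10), F(21, 10), F(-6, 10), F(-9, 10)]
assert positive_partial_sum_counts(x) == [0, 1, 2, 3, 4]
```

Exact `Fraction` arithmetic avoids any floating-point ambiguity in deciding which partial sums are positive.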
*Spitzer's theorem [\[theorem_Spitzer\]](#theorem_Spitzer){reference-type="ref" reference="theorem_Spitzer"} is a consequence of Huq's theorem [Theorem 3](#theorem_Huq){reference-type="ref" reference="theorem_Huq"}.* *Proof.* Assume that the sum of all coordinates $(x_1,\ldots,x_m)\in {\mathbb R}^m$ is zero, and each of the shorter cyclically consecutive sums of terms $x_{i+1}+\cdots+x_{j}$ is nonzero and has sign $\varepsilon_{i,j}$. We may assume that there is a fixed positive integer $k>0$ such that all numbers $x_i$ are rational of the special form $$\begin{aligned} x_i=\frac{m\cdot y_i-1}{m\cdot k} \text{ for some } y_i\in {\mathbb Z}. \label{equation_special_form}\end{aligned}$$ Indeed, we may perturb the coordinates of $(x_1,\ldots,x_m)$ as long as they satisfy all $m(m-1)-2$ inequalities of the form $$\varepsilon_{i,j}\cdot (x_{i+1}+\cdots+x_j)>0, \label{equation_polytope}$$ together with the equation $$x_1+\cdots+x_m=0 \label{equation_hyperplane}$$ in ${\mathbb R}^m$. The inequalities [\[equation_polytope\]](#equation_polytope){reference-type="eqref" reference="equation_polytope"} define an open subset of the hyperplane defined by [\[equation_hyperplane\]](#equation_hyperplane){reference-type="eqref" reference="equation_hyperplane"}. This subset is not empty as it contains the vector we began with. Points whose coordinates are of the form given in [\[equation_special_form\]](#equation_special_form){reference-type="eqref" reference="equation_special_form"} form a dense subset in the hyperplane defined by [\[equation_hyperplane\]](#equation_hyperplane){reference-type="eqref" reference="equation_hyperplane"}, hence we may replace $(x_1,\ldots,x_m)$ with a vector whose coordinates are of the form given in [\[equation_special_form\]](#equation_special_form){reference-type="eqref" reference="equation_special_form"} and that satisfies the same inequalities. 
Similarly to the other implication, Theorem [\[theorem_Spitzer\]](#theorem_Spitzer){reference-type="ref" reference="theorem_Spitzer"} now follows from Theorem [Theorem 3](#theorem_Huq){reference-type="ref" reference="theorem_Huq"} after observing that each shorter sum $x_{i+1}+\cdots+x_{j}$ satisfies the inequality $y_{i+1}+\cdots+y_{j}-1\leq k\cdot (x_{i+1}+\cdots+x_{j})< y_{i+1}+\cdots+y_{j}$. ◻ # A lattice path visualization of Huq's result {#section_Huq_visual} In the spirit of Krattenthaler [@Krattenthaler Remark 10.4.4] and also of Graham, Knuth and Patashnik [@Graham_Knuth_Patashnik p. 360], we may visualize a self-contained proof of Theorem [Theorem 3](#theorem_Huq){reference-type="ref" reference="theorem_Huq"}, using lattice paths, as follows. This visualization makes the result and its proof a generalization of Raney's lemma [Lemma 2](#lemma_Raney){reference-type="ref" reference="lemma_Raney"} and its geometric proof given in [@Graham_Knuth_Patashnik p. 359--360]. If we generalize the notion of lattice paths to connect vertices with non-integer second coordinates, our visualization also includes the proof of Theorem [\[theorem_Spitzer\]](#theorem_Spitzer){reference-type="ref" reference="theorem_Spitzer"}. Let us extend the vector $\mathbf{y}$ to an infinite vector $(\ldots, y_{-1}, y_{0}, y_{1}, y_{2}, \ldots)$ by setting $y_{i} = y_{j}$ for $i \equiv j \bmod m$, and consider the associated infinite lattice path with steps $\ldots, (1,y_{-1}), (1,y_{0}), (1,y_{1}), (1,y_{2}),\ldots$, containing the lattice point $(0,v_{0}) = (0,0)$ and satisfying $(i+1,v_{i+1}) = (i,v_{i}) + (1,y_{i+1})$ for all integers $i$; see Figure [\[figure_infinite_lattice_path\]](#figure_infinite_lattice_path){reference-type="ref" reference="figure_infinite_lattice_path"}.
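Before looking at this picture in more detail, the equidistribution claim of Corollary 5 is easy to confirm by brute force for small parameters. The sketch below is a plain Python illustration, not part of the proofs; it reads "up step below the $x$-axis" as an up step starting at negative height, an assumption that reproduces the classical Chung--Feller count in the case $k=2$.

```python
from itertools import permutations
from math import comb

def fuss_catalan(n, k):
    """Fuss-Catalan number C_{n,k} = binom(kn, n) / ((k-1)n + 1)."""
    return comb(k * n, n) // ((k - 1) * n + 1)

def counts_by_r(n, k):
    """Tally all paths with (k-1)n up steps (1,1) and n down steps (1,1-k)
    by the number r of up steps starting at negative height."""
    steps = (1,) * ((k - 1) * n) + (1 - k,) * n
    counts = {}
    for path in set(permutations(steps)):
        height, r = 0, 0
        for s in path:
            if s == 1 and height < 0:
                r += 1
            height += s
        counts[r] = counts.get(r, 0) + 1
    return counts

# For n = 2, k = 3 every r in {0,...,4} occurs C_{2,3} = 3 times.
assert counts_by_r(2, 3) == {r: fuss_catalan(2, 3) for r in range(5)}
```

The enumeration is exponential in $kn$ and is only meant for spot-checking tiny cases.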
The finite path $p_{i}$ occurs in the infinite path as a subpath starting at $(i, v_{i})$ and ending at $(i+m, v_{i}+1)$. Introducing $x_{i}=y_{i}-1/m$ and ordering the $z_{i}$s defined in [\[equation_partial_sums\]](#equation_partial_sums){reference-type="eqref" reference="equation_partial_sums"} amounts to the following. Consider the linear functional $F(u,v) = v - 1/m\cdot u$ defined on the plane, and consider its level curves, which are lines with slope $1/m$. For any $i\in \{1,2,\ldots, m\}$ we have $z_{i}=F(i,v_{i})$ and we may extend this observation to all $i\in {\mathbb Z}$ keeping in mind that $z_{m}=0$. Thus we may set $z_{i}=z_{j}$ if $i \equiv j \bmod m$. Ordering $z_{1}, z_{2},\ldots, z_{m}=z_{0}$ in increasing order amounts to ordering the $m$ lattice points $(0,v_{0})$ through $(m-1,v_{m-1})$ according to the linear functional $F$. If $(i,v_{i})$ is the $(r+1)$st largest lattice point in this order, then there are exactly $r$ lattice points $(j,v_{j})$ above this level. # Catalan--Spitzer permutations {#section_Catalan_Spitzer} In this section we investigate the restriction of Theorem [Theorem 3](#theorem_Huq){reference-type="ref" reference="theorem_Huq"} and its proof to $k$-Catalan paths. In particular, we describe the permutations of partial sums that appear in the proof of Theorem [Theorem 3](#theorem_Huq){reference-type="ref" reference="theorem_Huq"}, when we prove it by reducing it to Spitzer's theorem [\[theorem_Spitzer\]](#theorem_Spitzer){reference-type="ref" reference="theorem_Spitzer"}. We define an *augmented* $k$-Catalan path of order $n$ as a lattice path consisting of $(k-1)n+1$ up steps $(1,1)$ and $n$ down steps $(1,-k+1)$ that begins with an up step and never goes below the line $y=1$ after the initial up step.
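Augmented $k$-Catalan paths are straightforward to enumerate for small orders. The following Python sketch, an illustration only, generates them and confirms that their number equals the Fuss--Catalan number $C_{n,k}$, in line with the bijection with $k$-Catalan--Spitzer paths described below.

```python
from itertools import permutations
from math import comb

def augmented_paths(n, k):
    """Augmented k-Catalan paths of order n: an initial up step followed by
    (k-1)n up steps (+1) and n down steps (1-k) that keep the height >= 1."""
    tail_steps = (1,) * ((k - 1) * n) + (1 - k,) * n
    paths = []
    for tail in set(permutations(tail_steps)):
        height, valid = 1, True
        for s in tail:
            height += s
            if height < 1:
                valid = False
                break
        if valid:
            paths.append((1,) + tail)
    return paths

# The number of augmented paths is the Fuss-Catalan number C_{n,k}.
for n, k in [(2, 3), (3, 2)]:
    assert len(augmented_paths(n, k)) == comb(k * n, n) // ((k - 1) * n + 1)
```

Each generated path starts with an up step and its steps sum to $1$, matching the fact that an augmented path ends one level above its starting point.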
The sum of the second coordinates of these steps is $1$, hence Theorem [Theorem 3](#theorem_Huq){reference-type="ref" reference="theorem_Huq"} is applicable. In this special case the proof of this theorem calls for replacing each step $(1,y)$ by $(1,y-1/(kn+1))$. Hence up steps become $(1,kn/(kn+1))$ and down steps become $(1,-k((k-1)n + 1)/(kn + 1))$ and the transformed path goes from $(0,0)$ to $(kn+1,0)$. Between its endpoints it remains strictly above the line $y=0$. The transformed path is not a lattice path, but we can easily transform it to one by multiplying all $y$-coordinates by the factor $(kn+1)/k$. This vertical stretch does not change the relative vertical order of the $y$-coordinates of the endpoints of the steps. **Definition 8**. *A *$k$-Catalan--Spitzer path of order $n$* is a lattice path consisting of $(k-1)n+1$ up steps $(1,n)$ and $n$ down steps $(1,-((k-1)n + 1))$ from $(0,0)$ to $(kn+1,0)$ that remains strictly above the line $y=0$ between its endpoints.* There is a natural bijection between augmented $k$-Catalan paths and $k$-Catalan--Spitzer paths of order $n$: we associate to each augmented $k$-Catalan path $(0,0),(1,z_{1}),\ldots, (kn, z_{kn}), (kn+1,1)$ the $k$-Catalan--Spitzer path $(0,0),(1,z_{1}'),\ldots, (kn, z_{kn}'), (kn+1,0)$ in which up steps and down steps follow in the same order. Hence the number of $k$-Catalan--Spitzer paths of order $n$ is also the Fuss-Catalan number $C_{n,k}$. The second coordinates $z_i'$ of the lattice points in a $k$-Catalan--Spitzer path pairwise differ and may be easily computed from the second coordinates $z_i$ of the corresponding augmented $k$-Catalan path as follows. Since a $k$-Catalan--Spitzer path is obtained from the corresponding augmented $k$-Catalan path by first decreasing the second coordinate of each step by $1/(kn+1)$ and then performing a vertical stretch by a factor of $(kn+1)/k$, we obtain that $$\begin{aligned} z_{i}'=\frac{(kn+1)\cdot z_{i}- i}{k} \text{ holds for } i=1,2,\ldots, kn.
\label{equation_ztransform}\end{aligned}$$ The next proposition describes the relative position of these lattice points in a $k$-Catalan--Spitzer path in terms of the positions of lattice points in the corresponding augmented $k$-Catalan path. **Proposition 9**. *Consider an augmented $k$-Catalan path $(0,0),(1,z_{1}),\ldots, (kn, z_{kn}), (kn+1,1)$ of order $n$, and let $(0,0),(1,z_{1}'),\ldots, (kn, z_{kn}'), (kn+1,0)$ be the corresponding $k$-Catalan--Spitzer path. Then for any $i\neq j$ the inequality $z_{i}'<z_{j}'$ holds if and only if $(-i,z_{i}) < (-j, z_{j})$ in the reverse lexicographic order, where coordinates are compared right to left. [\[proposition_revlexorder\]]{#proposition_revlexorder label="proposition_revlexorder"}* *Proof.* By [\[equation_ztransform\]](#equation_ztransform){reference-type="eqref" reference="equation_ztransform"} we have $$\begin{aligned} z_{j}'-z_{i}' &=\frac{(kn+1)\cdot (z_{j}-z_{i})+(i-j)}{k}. \label{equation_zij}\end{aligned}$$ Observe first that in the case when $z_{i}\neq z_{j}$, the sign of $z_{j}'-z_{i}'$ is the same as the sign of $z_{j}-z_{i}$. Indeed, $(kn+1)\cdot (z_{j}-z_{i})$ in [\[equation_zij\]](#equation_zij){reference-type="eqref" reference="equation_zij"} above is a nonzero integer multiple of $(kn+1)$, whereas $|i-j|<kn$. On the other hand, in the case when $z_{i}= z_{j}$, by [\[equation_zij\]](#equation_zij){reference-type="eqref" reference="equation_zij"} we have $z_{i}'<z_{j}'$ if and only if $-i< -j$ holds. ◻ Proposition [\[proposition_revlexorder\]](#proposition_revlexorder){reference-type="ref" reference="proposition_revlexorder"} inspires the following definition. **Definition 10**. *A *$k$-Catalan--Spitzer permutation of order $n$* is the relative order of the numbers $z_{1}',\ldots,z_{kn}'$ in a $k$-Catalan--Spitzer path $(0,0),(1,z_{1}'),\ldots, (kn, z_{kn}'), (kn+1,0)$ of order $n$.
Equivalently it is the relative order of the lattice points $(1,z_{1}),\ldots, (kn, z_{kn})$ in the corresponding augmented $k$-Catalan path of order $n$, where we order the lattice points first by the second coordinate in increasing order and then by the first coordinate in decreasing order. In the case when $k=2$ we will use the term *Catalan--Spitzer permutation*.* **Example 11**. **An example of an augmented $4$-Catalan path and its labeling giving rise to the associated $4$-Catalan--Spitzer permutation is shown in Figure [\[figure_4\_Catalan_without_labelings_and_tree\]](#figure_4_Catalan_without_labelings_and_tree){reference-type="ref" reference="figure_4_Catalan_without_labelings_and_tree"}. There are $3$ lattice points at level one, numbered right to left, followed by $2$ lattice points at the next level, and so on. Modifying an idea presented in [@Stanley_EC2], we may visualize an augmented $k$-Catalan path as a description of the movement of a worm crawling around a rooted plane tree in counterclockwise order, shown on the right of Figure [\[figure_4\_Catalan_without_labelings_and_tree\]](#figure_4_Catalan_without_labelings_and_tree){reference-type="ref" reference="figure_4_Catalan_without_labelings_and_tree"}. The plane tree is rooted with a root edge at level $0$. Each up step in the lattice path corresponds to the worm moving up one level and each down step corresponds to the worm moving down $(k-1)$ levels. We can think of the worm moving down $(k-1)$ times faster than up. The set of all rooted plane trees with $n+1$ vertices is in bijection with the set of all augmented Catalan paths with $(n+1)$ up steps and $n$ down steps. Only a subset of the set of rooted plane trees with $(k-1)\cdot n+2$ vertices corresponds bijectively to the set of augmented $k$-Catalan paths with $(k-1)\cdot n+1$ up steps and $n$ down steps.
The numbering of the lattice points corresponds to the labeling of the points where the worm begins or ends a move.* [\[example_k\_Catalan_Spitzer\]]{#example_k_Catalan_Spitzer label="example_k_Catalan_Spitzer"}* In order to describe the finer structure of $k$-Catalan--Spitzer permutations, we make the following definition. **Definition 12**. *Let $(i_1,\ldots,i_r)$ be a vector with nonnegative integer coordinates. We say that a $k$-Catalan--Spitzer permutation and the corresponding augmented $k$-Catalan path have type $(i_1,i_2,\ldots,i_r)$ if the augmented $k$-Catalan path has $i_j$ lattice points at level $j$. We denote the number of $k$-Catalan--Spitzer permutations having type $\overline{\imath}= (i_1,i_2,\ldots,i_r)$ by $t_{k}(\overline{\imath})$.* Note that this definition of type is not unique in the sense that an augmented $k$-Catalan path has type $(i_1,i_2,\ldots,i_r)$ if and only if it has type $(i_1,i_2,\ldots,i_r,0)$. In other words, we may add as many zero coordinates to the type of an augmented $k$-Catalan path as we wish. We exclude the empty lattice path from consideration as we consider it non-augmented. Hence $i_1$ must be positive. Let $e_{i}$ denote the $i$th unit vector. For $S$ a finite subset of positive integers, let $e_{S}$ denote the sum $e_{S}=\sum_{i \in S} e_{i}$ and $x_{S}$ denote the product $x_{S}=\prod_{i \in S} x_{i}$. Furthermore, we also use the notation $x^{\overline{\imath}} = x_1^{i_1}x_2^{i_2}\cdots x_r^{i_r}$. We write $[n] = \{1,2, \ldots, n\}$ and $[i,j] = \{i, i+1, \ldots, j\}$. **Lemma 13**. *The numbers $t_{k}(\overline{\imath})$, where $\overline{\imath}= (i_{1}, \ldots, i_{r-1},i_{r})$, are determined by the initial condition $t_{k}(i_1) = \delta_{i_1,1}$, where $\delta_{i_1,1}$ is the Kronecker delta, and if there is an index $j \in [r-k+1,r-1]$ such that $i_{j} < i_{r}$ then $t_{k}(\overline{\imath}) = 0$.
When $r \geq k$ the following recurrence holds: $$t_{k}(\overline{\imath}) = \binom{i_{r-k+1}-1}{i_r} \cdot t_{k}\left(\overline{\imath}- i_{r} \cdot e_{[r-k+1,r]}\right) .$$* *Proof.* In an augmented $k$-Catalan path exactly one lattice point must be at level $1$ if the lattice path never hits level $2$. This yields the initial condition. Notice that any lattice point at level $r$ in an augmented $k$-Catalan path of type $\overline{\imath}$ is a peak immediately preceded by a run of $k-1$ up steps $(1,1)$ and immediately followed by a down step $(1,1-k)$. By removing these steps, each augmented $k$-Catalan path of type $\overline{\imath}$ may be uniquely reduced to an augmented $k$-Catalan path of type $$(i_1,\ldots,i_{r-k}, i_{r-k+1}-i_{r},\ldots, i_{r-1}-i_{r}, i_{r}-i_r) = \overline{\imath}- i_{r} \cdot e_{[r-k+1,r]} .$$ Conversely, given an augmented $k$-Catalan path of type $\overline{\imath}- i_{r} \cdot e_{[r-k+1,r]}$, there are $$\multichoose{i_{r-k+1}-i_r}{i_r} = \binom{i_{r-k+1}-1}{i_r}$$ ways to select the place to reinsert $i_r$ runs of $k-1$ up steps $(1,1)$ immediately followed by a down step $(1,1-k)$, after one of the $i_{r-k+1}-i_r$ lattice points at level $r-k+1$ of the reduced lattice path, where $\multichoose{n}{j} = \binom{n+j-1}{j}$ denotes the number of $j$-element multisubsets of an $n$-set. ◻ Using Lemma [Lemma 13](#lemma_counting_types){reference-type="ref" reference="lemma_counting_types"} we obtain the following recurrence for the associated generating functions. **Lemma 14**.
*The generating functions for the $k$-Catalan--Spitzer permutations of type $\overline{\imath}$, that is, $$T_k(x_1,\ldots,x_r) = \sum_{\overline{\imath}\in {\mathbb P}\times {\mathbb N}^{r-1}} t_{k}(\overline{\imath}) \cdot x^{\overline{\imath}}$$ are given by the initial conditions $T_k(x_1)=T_k(x_1,x_2)=\cdots=T_k(x_1,\ldots,x_{k-1})=x_1$ and for $r\geq k$ by the recurrence $$\label{equation_trecurrence} T_k(x_1,x_2,\ldots,x_r) =T_k\left(x_1,x_2,\ldots,x_{r-k},\frac{x_{r-k+1}}{1-x_{[r-k+1,r]}},x_{r-k+2},\ldots,x_{r-1} \right).$$* Observe that the function on the left-hand side of ([\[equation_trecurrence\]](#equation_trecurrence){reference-type="ref" reference="equation_trecurrence"}) is $r$-ary, whereas the function on the right-hand side is $(r-1)$-ary. *Proof of Lemma [Lemma 14](#lemma_types_genf){reference-type="ref" reference="lemma_types_genf"} ..* The initial conditions are straightforward to verify. Using the recurrence stated in Lemma [Lemma 13](#lemma_counting_types){reference-type="ref" reference="lemma_counting_types"}, we obtain $$\begin{aligned} T_k(x_1,\ldots,x_r) %%%% & = %%%% \sum_{\ii \in \Ppp \times \Nnn^{r-1}} t_{k}(\ii) x^{\ii} \\ & = \nonumber \sum_{\substack{\overline{\imath}\in {\mathbb P}\times {\mathbb N}^{r-1} \\ i_{r}\leq i_{r-k+1},i_{r-k+2},\ldots,i_{r-1}}} t_{k}\left(\overline{\imath}- i_{r} \cdot e_{[r-k+1,r]}\right) \cdot \binom{i_{r-k+1}-1}{i_r} \cdot x^{\overline{\imath}} \\ & = \label{equation_t} \sum_{\substack{\overline{\imath}\in {\mathbb P}\times {\mathbb N}^{r-1} \\ i_{r}\leq i_{r-k+1},i_{r-k+2},\ldots,i_{r-1}}} t_{k}\left(\overline{\imath}- i_{r} \cdot e_{[r-k+1,r]}\right) \cdot x^{\overline{\imath}- i_{r} \cdot e_{[r-k+1,r]}} \cdot \binom{i_{r-k+1}-1}{i_r} \cdot x_{[r-k+1,r]}^{i_r} .\end{aligned}$$ Introduce $\overline{\ell}$ to be the index vector $\overline{\imath}- i_{r} \cdot e_{[r-k+1,r]}$ with the last zero removed, that is, we set $\ell_{j} = i_{j}$ for $1 \leq j \leq r-k$ and $\ell_{j} = i_{j}-i_r$ for $r-k+1 
\leq j \leq r-1$. The sum ([\[equation_t\]](#equation_t){reference-type="ref" reference="equation_t"}) is now $$\begin{aligned} T_k(x_1,\ldots,x_r) & = \sum_{\overline{\ell}\in {\mathbb P}\times {\mathbb N}^{r-2}} t_{k}(\overline{\ell}) \cdot x^{\overline{\ell}} \cdot \sum_{0 \leq i_r} \binom{i_{r}+\ell_{r-k+1}-1}{i_r} x_{[r-k+1,r]}^{i_{r}} \\ & = \sum_{\overline{\ell}\in {\mathbb P}\times {\mathbb N}^{r-2}} t_{k}(\overline{\ell}) \cdot x^{\overline{\ell} } \cdot \frac{1}{(1- x_{[r-k+1,r]})^{\ell_{r-k+1}}} . \qedhere\end{aligned}$$ ◻ In order to give an explicit rational expression for these generating functions, we define the denominator polynomial as follows. **Definition 15**. *Given any positive integer $k\geq 2$ and any interval $[r,s]$ of consecutive positive integers, we define the *$k$-Catalan denominator polynomial $Q_k(x_r,x_{r+1},\ldots, x_s)$* as the signed sum $$Q_k(x_r,x_{r+1},\ldots, x_s) = \sum_S (-1)^{{|S|}/{k}} \cdot x_{S},$$ where $S$ ranges over all subsets of $[r,s]$ that arise as a disjoint union of sets consisting of $k$ consecutive integers. The empty set is included in the sum and contributes the term $1$.* **Theorem 16**. *For $k\geq 2$ we have $$T_k(x_1,x_2,\ldots,x_r)=\frac{x_1\cdot Q_k(x_2,\ldots,x_r)}{Q_k(x_1,\ldots,x_r)}.$$* *Proof.* For $r<k$ the sets $[r]$ and $[2,r]$ do not contain any subset of $k$ consecutive integers, hence we have $Q_k(x_1,\ldots,x_r)=Q_k(x_2,\ldots,x_r)=1$ and the identity holds. We proceed by induction for $r\geq k$. By Lemma [Lemma 14](#lemma_types_genf){reference-type="ref" reference="lemma_types_genf"} the generating function $T_k(x_1,x_2,\ldots,x_r)$ is obtained from $T_k(x_1,x_2,\ldots,x_{r-1})$ by substituting $x_{r-k+1}/(1-x_{[r-k+1,r]})$ into $x_{r-k+1}$. This substitution turns the stated formula for $T_k(x_1,x_2,\ldots,x_{r-1})$ into a four-level fraction which may be transformed into a quotient of two polynomials by multiplying the numerator and the denominator by $1-x_{[r-k+1,r]}$. 
This operation leaves all monomials $x_{S}$ where $S\subseteq [r-1]$ containing a factor of $x_{r-k+1}$ unchanged, as replacing $x_{r-k+1}$ with $x_{r-k+1}/(1-x_{[r-k+1,r]})$ and then multiplying by $1-x_{[r-k+1,r]}$ amounts to no change at all. On the other hand, each monomial $x_S$ satisfying $r-k+1\not\in S$ is replaced with $x_S-x_S\cdot x_{[r-k+1,r]}$. Assuming the induction hypothesis, the terms $x_S$ appearing in the denominator of $T_k(x_1,\ldots,x_{r-1})$ are indexed exactly by those subsets $S\subseteq [r-1]$ which arise as a disjoint union of sets consisting of $k$ consecutive integers. Each such set is also a subset of $[r]$, and by our recurrence the corresponding term $x_S$ remains in the denominator with the same coefficient. The additional new terms in the denominator of $T_k(x_1,x_2,\ldots,x_{r})$ are exactly the terms $x_S \cdot x_{[r-k+1,r]}$ where $S$ ranges through all terms of $Q_k(x_1,\ldots,x_{r-1})$ that do not contain $x_{r-k+1}$ as a factor. Note that $r-k+1\not \in S\subseteq [r-1]$ implies $S\subseteq [1,r-k]$ as the interval $[r-k+2,r-1]$ contains fewer than $k$ consecutive integers. The converse is also true. Hence the monomial $x_S \cdot x_{[r-k+1,r]}$ is square-free; its underlying set is obtained by adding the disjoint set $[r-k+1,r]$ consisting of $k$ consecutive integers to $S$. Hence we are adding exactly those terms of $Q_k(x_1,\ldots,x_{r})$ to the denominator which do not appear in $Q_k(x_1,\ldots,x_{r-1})$, and the sign of $x_S \cdot x_{[r-k+1,r]}$ is the opposite of $x_S$, consistent with the definition of $Q_k(x_1,\ldots,x_{r})$. Similar reasoning may be used to show that the numerator of $T_k(x_1,x_2,\ldots,x_{r})$ is $x_1 \cdot Q_k(x_2,\ldots,x_{r})$. ◻ **Example 17**.
* *As an example of Theorem [Theorem 16](#theorem_types_genf){reference-type="ref" reference="theorem_types_genf"}, we obtain for $k=3$ and $r=6$ $$\begin{aligned} T_3(x_1,x_2, \ldots, x_6) & = \frac{x_1\cdot (1 - x_{[2,4]} - x_{[3,5]} - x_{[4,6]})} {1 - x_{[1,3]} - x_{[2,4]} - x_{[3,5]} - x_{[4,6]} + x_{[1,6]}} . %%\frac{x_1(1 - x_2x_3x_4 - x_3x_4x_5 - x_4x_5x_6)} %%{1 - x_1x_2x_3 - x_2x_3x_4 - x_3x_4x_5 - x_4x_5x_6 + x_1x_2x_3x_4x_5x_6} .\end{aligned}$$** Theorem [Theorem 16](#theorem_types_genf){reference-type="ref" reference="theorem_types_genf"} may be restated in a more compact form in terms of the following generalization of continuants. **Definition 18**. *The *$n$th $k$-continuant $K_{k,n}(x_1,x_2,\ldots, x_n)$* is defined recursively by the initial condition $K_{k,n} = x_{1} x_{2} \cdots x_{n}$ for $n < k$ and for $n \geq k$ by the recurrence $$K_{k,n}(x_1,x_2,\ldots,x_n) = K_{k,n-k}(x_1,x_2,\ldots,x_{n-k})+K_{k,n-1}(x_1,x_2,\ldots,x_{n-1})\cdot x_n .$$* Note that for $k=2$ Definition [Definition 18](#definition_kcontinuant){reference-type="ref" reference="definition_kcontinuant"} is the classical definition of the continuants. The verification of the following facts for $k$-continuants are completely analogous to the proof in the $k=2$ case, and are left to the reader. **Proposition 19**. *The $k$-continuant $K_{k,n}(x_1,x_2,\ldots,x_n)$ can be computed by taking the sum of all possible products of $x_1,x_2,\ldots,x_n$ in which any number of disjoint sets of $k$ consecutive variables are deleted from the product $x_1x_2\cdots x_n$.* **Proposition 20**. 
*Introducing the $k\times k$ matrix $$M_k(x)=\begin{bmatrix} x & 0 & \ldots & 0 & 1\\ 1 & 0 & \ldots & 0 & 0\\ 0 & 1 & \ldots & 0 & 0\\ \vdots & & \ddots & & \\ 0 & 0 & \ldots & 1 & 0\\ \end{bmatrix},$$ we may write $$\begin{bmatrix} K_{k,n}(x_1,x_2,\ldots,x_n)\\ K_{k,n-1}(x_1,x_2,\ldots,x_{n-1})\\ \vdots\\ K_{k,n-k+1}(x_1,x_2,\ldots,x_{n-k+1})\\ \end{bmatrix} = M_k(x_n)M_k(x_{n-1})\cdots M_k(x_1) \begin{bmatrix} 1\\ 0\\ \vdots\\ 0\\ \end{bmatrix}.$$* **Proposition 21**. *The number of terms in $K_{k,n}(x_1,\ldots,x_n)$, that is, $K_{k,n}(1,\ldots,1)$, is recursively obtained by $$\label{equation_kcrecursion} K_{k,n}(1,\ldots,1) = \begin{cases} 1 & \text{ for $0\leq n\leq k-1$,} \\ K_{k,n-k}(1,\ldots,1)+K_{k,n-1}(1,\ldots,1) & \text{ for } n\geq k. \end{cases}$$* For $k=2$ the sequence $K_{2,n}(1,\ldots,1)$ is the Fibonacci number $F_{n}$. For $k=3$ the sequence $\{K_{3,n}(1,\ldots,1)\}_{n\geq 0}$ is sequence A000930 in [@OEIS], also known as Narayana's cow sequence. The same page also contains information regarding the general sequence $\{K_{k,n}(1,\ldots,1)\}_{n\geq 0}$. As a direct consequence of Definition [Definition 15](#definition_kcdenominator){reference-type="ref" reference="definition_kcdenominator"} and Proposition [Proposition 19](#proposition_Kconsecutive){reference-type="ref" reference="proposition_Kconsecutive"}, we have the following corollary. **Corollary 22**. *Given any positive integer $k\geq 2$, let $\zeta$ denote a primitive $(2k)$th root of unity.
Then for any interval $[r,s]$ of consecutive positive integers, the $k$-Catalan denominator polynomial $Q_k(x_r,x_{r+1},\ldots, x_s)$ is given by $$Q_k(x_r,x_{r+1},\ldots, x_s) = \zeta^{s-r+1} \cdot x_r x _{r+1} \cdots x_s \cdot K_{k,s-r+1}\left(\frac{1}{\zeta x_r},\frac{1}{\zeta x_{r+1}},\ldots,\frac{1}{\zeta x_s}\right).$$* Substituting Corollary [Corollary 22](#corollary_q_kc){reference-type="ref" reference="corollary_q_kc"} into Theorem [Theorem 16](#theorem_types_genf){reference-type="ref" reference="theorem_types_genf"}, after simplifying by $x_1x_2\cdots x_r$, we obtain the formula $$\begin{aligned} \label{equation_types_continuants} T_k(x_1,x_2,\ldots,x_r) & = \frac{1}{\zeta} \cdot \frac{K_{k,r-1}\left(\frac{1}{\zeta x_2},\ldots,\frac{1}{\zeta x_r}\right)} {K_{k,r}\left(\frac{1}{\zeta x_1},\ldots,\frac{1}{\zeta x_r}\right)}. \end{aligned}$$ We conclude this section by having a closer look at the Catalan case. Using the well-known relation between continuants and continued fractions, equation [\[equation_types_continuants\]](#equation_types_continuants){reference-type="eqref" reference="equation_types_continuants"} may be rewritten as $$\begin{aligned} \label{equation_types} T_2(x_1,x_2,\ldots,x_r) & = \frac{1}{\mathbf{i}} \cdot \cfrac{1}{\frac{1}{\mathbf{i}\cdot x_1}+\cfrac{1}{\frac{1}{\mathbf{i}\cdot x_2}+\ddots \cfrac{1}{\frac{1}{\mathbf{i}\cdot x_r}}}} = \cfrac{1}{\frac{1}{x_1}-\cfrac{1}{\frac{1}{x_2}-\ddots \cfrac{1}{\frac{1}{x_r}}}} ,\end{aligned}$$ where $\mathbf{i}$ is the square root of $-1$. **Remark 23**. **Equation [\[equation_types\]](#equation_types){reference-type="eqref" reference="equation_types"} is also a consequence of Flajolet's result [@Flajolet Theorem 1] which provides a generating function formula for *Motzkin paths*, starting at $(0,0)$ and ending on the $x$ axis consisting of up steps $(1,1)$, down steps $(1,-1)$ and horizontal steps $(1,0)$ that never go below the $x$-axis. 
To obtain Equation [\[equation_types\]](#equation_types){reference-type="eqref" reference="equation_types"}, we must set $c_i=0$ and $a_{i} = b_{i} = x_{i+1}$ for all $i\geq 0$ in Flajolet's formula and multiply the resulting generating function by $x_1$. The additional factor of $x_1$ is induced by the fact that we consider augmented Catalan paths. Thus we obtain the generating function $$\cfrac{x_1}{1-\cfrac{x_1x_2}{1-\cfrac{x_2x_3}{1-\cfrac{x_3x_4}{\ddots}}}}.$$ It is straightforward to see that we obtain the generating function that is the limit, as $r$ goes to infinity, of the function given in [\[equation_types\]](#equation_types){reference-type="eqref" reference="equation_types"}.** # Short $k$-Catalan--Spitzer permutations {#section_short_catalan_spitzer} The $k$-Catalan--Spitzer permutations defined in the previous section contain redundant information. In this section we show that we may restrict our attention to the lattice points which are the lower ends of the up steps. The resulting permutations have a particularly nice representation when we consider the Foata--Strehl group action [@Foata-Strehl1; @Foata-Strehl2]. **Definition 24**. *A *short $k$-Catalan--Spitzer permutation of order $n$* is the relative order of the numbers $z_{i_1}',\ldots,z_{i_{(k-1)n}}'$ in a $k$-Catalan--Spitzer path $(0,0),(1,z_{1}'),\ldots, (kn, z_{kn}'), (kn+1,0)$ of order $n$, where $\{i_{1},i_{2},\ldots,i_{(k-1)n}\}$ is the set of indices $i_j\geq 1$ satisfying $z_{i_j}'<z_{i_j+1}'$. In the case when $k=2$ we use the term *short Catalan--Spitzer permutation*.* In analogy to Definition [Definition 10](#definition_Catalan_Spitzer_permutation){reference-type="ref" reference="definition_Catalan_Spitzer_permutation"}, a short Catalan--Spitzer permutation may be also defined in terms of $k$-Catalan paths as follows. **Proposition 25**. *The set of short $k$-Catalan--Spitzer permutations of order $n$ is the set of all permutations arising by the following procedure. 
Take a $k$-Catalan path of order $n$ and number its up steps so that the values increase right to left at the same level and upward between different levels. Record the numbers along the lattice path.* We will say that a short $k$-Catalan--Spitzer permutation $\sigma$ is *induced* by a $k$-Catalan path $P$ if the procedure described in Proposition [Proposition 25](#proposition_k_Catalan-short){reference-type="ref" reference="proposition_k_Catalan-short"} applied to $P$ yields the permutation $\sigma$. A short $k$-Catalan--Spitzer permutation associated to a $k$-Catalan path may be computed directly from the corresponding (full) $k$-Catalan--Spitzer permutation using the notions of *ascents* and *patterns*. Recall that the index $i\in \{1,\ldots, n-1\}$ is an *ascent* of a permutation $\pi(1)\pi(2)\cdots\pi(n)$ if $\pi(i)<\pi(i+1)$ holds. Furthermore, given an ordered alphabet $X$ with $n$ letters, a *permutation* of $X$ is a word $w_1w_2\cdots w_n$ containing each letter of $X$ exactly once. The *pattern* of the permutation $w$ is the permutation $\pi(1)\pi(2)\cdots \pi(n)$ of the set $\{1,2,\ldots,n\}$ satisfying $\pi(i)<\pi(j)$ if and only if $w_i<w_j$ for each $1\leq i<j\leq n$. The following lemma follows directly from the definitions. **Lemma 26**. *Let $\pi(1)\pi(2)\cdots\pi(kn)$ and $\sigma(1)\sigma(2)\cdots \sigma((k-1)n)$ be a $k$-Catalan--Spitzer, respectively a short $k$-Catalan--Spitzer, permutation of order $n$ associated to the same $k$-Catalan--Spitzer path. Then $\sigma(1)\sigma(2)\cdots \sigma((k-1)n)$ is the pattern of $\pi(i_{1})\pi(i_{2})\cdots\pi(i_{(k-1)n})$ where $\{i_{1},i_{2},\ldots,i_{(k-1)n}\}$ is the set of ascents of $\pi(1)\pi(2)\cdots\pi(kn)$.* **Example 27**. **Consider the $4$-Catalan--Spitzer permutation $3,5,8,11,2,4,7,10,12,13,6,9,1$ of order $3$ discussed in Example [\[example_k\_Catalan_Spitzer\]](#example_k_Catalan_Spitzer){reference-type="ref" reference="example_k_Catalan_Spitzer"}.
Its ascent set is $\{1,2,3,5,6,7,8,9,11\}$. The pattern of the word $3,5,8,2,4,7,10,12,6$ is $2,4,7,1,3,6,8,9,5$.** As shown in Proposition [\[proposition_csp_injection\]](#proposition_csp_injection){reference-type="ref" reference="proposition_csp_injection"} below, the operation assigning to each $k$-Catalan--Spitzer permutation (equivalently, each $k$-Catalan path) the corresponding short $k$-Catalan--Spitzer permutation is injective. We will prove this by considering the *Foata--Strehl trees* of short $k$-Catalan--Spitzer permutations. Recall that a *plane $0-1-2$ tree* is a rooted plane tree in which each vertex has at most $2$ children. (It is not unusual to call plane $0-1-2$ trees plane binary trees. However, a plane binary tree in the strict sense cannot contain a vertex with a single child.) **Definition 28**. *Let $w_{1}w_{2}\cdots w_{n}$ be a word with letters from an ordered alphabet containing no repeated letters. The *Foata--Strehl tree ${\mathcal FS}(w_{1}w_{2}\cdots w_{n})$* of this word is the plane $0-1-2$ tree defined recursively as follows. The root of the tree is $w_i=\min(w_{1},w_{2}, \ldots, w_{n})$ whose left child is $\min(w_{1},w_{2}, \ldots, w_{i-1})$ and whose right child is $\min(w_{i+1},w_{i+2}, \ldots, w_{n})$. There is no left child if $i=1$ and no right child if $i=n$. The subtree of the left child is ${\mathcal FS}(w_{1}w_{2}\cdots w_{i-1})$, and the subtree of the right child is ${\mathcal FS}(w_{i+1}w_{i+2}\cdots w_{n})$.* Clearly, the correspondence between permutations of $\{1,2,\ldots,n\}$ and Foata--Strehl trees with $n$ vertices is a bijection. **Lemma 29**. *Let $\sigma=\sigma(1)\sigma(2)\cdots \sigma((k-1)n)$ be a short $k$-Catalan--Spitzer permutation induced by a $k$-Catalan path $P$ of order $n$. If $\sigma(i)$ is the label of an up step that is immediately followed by a down step in $P$ then $\sigma(i)$ has no right child.
If $\sigma(i)$ is the label of an up step that is immediately followed by an up step in $P$ then $\sigma(i)$ has a right child $\sigma(j)$ and the level of the up step labeled by $\sigma(j)$ is one more than the level of the up step labeled by $\sigma(i)$.* *Proof.* If the up step labeled $\sigma(i)$ is immediately followed by a down step, then the level of the next up step is not greater than that of the up step labeled $\sigma(i)$. In this case $\sigma(i+1)<\sigma(i)$ holds and the right subtree of $\sigma(i)$ in ${\mathcal FS}(\sigma)$ is empty. Assume from now on that the up step labeled $\sigma(i)$ is immediately followed by an up step. As we follow the up steps along $P$ after the up step labeled $\sigma(i)$, all have a larger label than $\sigma(i)$ until $P$ returns to the level of $\sigma(i)$. The label of the next up step is less than the label of $\sigma(i)$. Hence the labels belonging to the right subtree of $\sigma(i)$ in ${\mathcal FS}(\sigma)$ are exactly the up steps belonging to the part $P'$ of $P$ that begins with the up step labeled $\sigma(i)$ and ends with the first return of $P$ to the same level. The labels in this subtree belong to up steps whose level is greater than the level of the up step labeled $\sigma(i)$. The level of the last up step in $P'$ is one more than that of the up step labeled $\sigma(i)$ and it is the rightmost among all up steps of $P'$ at this level. Hence its label is the least element of the right subtree of $\sigma(i)$. ◻ Inspired by Lemma [Lemma 29](#lemma_levels){reference-type="ref" reference="lemma_levels"}, we define the *level* of each letter in a permutation as follows. **Definition 30**. *Let $T$ be a plane $0-1-2$ tree. We define the *level* of each vertex of $T$ as follows.* 1. *The level of the root is zero.* 2. 
*For any other vertex $v$, the level of $v$ is the number of right steps in the unique path in $T$ from the root to $v$.* *Given any permutation $\sigma=\sigma(1)\sigma(2)\cdots\sigma(n)$ we define the level of $\sigma(i)$ as the level of the vertex labeled $\sigma(i)$ in the Foata--Strehl tree ${\mathcal FS}(\sigma)$.* Using Definition [Definition 30](#definition_levels){reference-type="ref" reference="definition_levels"} we may rephrase Lemma [Lemma 29](#lemma_levels){reference-type="ref" reference="lemma_levels"} as follows. **Corollary 31**. *Let $\sigma=\sigma(1)\sigma(2)\cdots \sigma((k-1)n)$ be a short $k$-Catalan--Spitzer permutation induced by a $k$-Catalan path $P$ of order $n$. Then for each $i$ the level of the up step labeled $\sigma(i)$ is the same as the level of $\sigma(i)$. [\[corollary_levels\]]{#corollary_levels label="corollary_levels"}* An important consequence of Corollary [\[corollary_levels\]](#corollary_levels){reference-type="ref" reference="corollary_levels"} is the following. **Proposition 32**. *The operation associating to each $k$-Catalan path $P$ its induced short $k$-Catalan--Spitzer permutation is injective. [\[proposition_csp_injection\]]{#proposition_csp_injection label="proposition_csp_injection"}* *Proof.* Assume the short $k$-Catalan--Spitzer permutation $\sigma$ is induced by the $k$-Catalan path $P$. It suffices to show that $P$ may be uniquely reconstructed from the Foata--Strehl tree ${\mathcal FS}(\sigma)$ of $\sigma$. By Corollary [\[corollary_levels\]](#corollary_levels){reference-type="ref" reference="corollary_levels"} the level of each up step may be read from ${\mathcal FS}(\sigma)$. Note finally that the number of down steps between the up step labeled $\sigma(i)$ and the next up step labeled $\sigma(i+1)$ is the difference between the level of $\sigma(i)$ and the level of $\sigma(i+1)$.
◻ Definition [Definition 30](#definition_levels){reference-type="ref" reference="definition_levels"} allows us to define the level of each letter in any permutation. The next definition allows us to identify the short $k$-Catalan--Spitzer permutations by looking at their Foata--Strehl trees. **Definition 33**. *Let $T$ be a plane $0-1-2$ tree with $n$ vertices numbered from $1$ to $n$. We say that $T$ is *levelwise numbered* if the labeling of its vertices satisfies the following criteria:* 1. *If the level of the vertex labeled $i$ is less than the level of the vertex labeled $j$ then $i<j$.* 2. *If the vertex labeled $j$ is in the left subtree of the vertex labeled $i$ then $i<j$.* 3. *If the vertex labeled $i$ and the vertex labeled $j$ have the same level, but there is a $k<i,j$ such that the vertex labeled $i$ (respectively $j$) is in the right (respectively left) subtree of the vertex labeled $k$ then $i<j$.* **Proposition 34**. *Each plane $0-1-2$ tree has a unique levelwise numbering. [\[proposition_unique\]]{#proposition_unique label="proposition_unique"}* *Proof.* By Definition [Definition 33](#definition_levelwise){reference-type="ref" reference="definition_levelwise"} vertices at the same level must be numbered consecutively. We only need to show that the second and third rules of the definition uniquely determine the labeling of the vertices at the same level. We will show this by considering for each vertex $v$ the unique path from the root to $v$. This path may be encoded by an *$r\ell$-word $RL(v)$* defined as follows. As we move along the path from the root to a vertex $v$, we record a letter $r$ each time we move to the right child and a letter $\ell$ each time we move to a left child. For example, $r\ell\ell$ represents the left child of the left child of the right child of the root. Clearly the level of $v$ is the number of letters $r$ in its $r\ell$-word.
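The $r\ell$-word encoding just described is easy to experiment with. The following Python sketch (ours, purely illustrative; the function names are not from the paper) computes the level of a vertex from its $r\ell$-word and sorts the vertices of a common level, using a key under which the letter $r$ precedes $\ell$ and every word precedes its extensions by trailing letters $\ell$:

```python
def level(rl_word):
    """Level of a vertex = the number of letters 'r' in its rl-word."""
    return rl_word.count('r')

def rl_key(rl_word):
    """Sort key: 'r' precedes 'l', and Python's tuple comparison makes
    every word precede its proper extensions, in particular the words
    obtained by appending letters 'l' on the right."""
    return tuple(0 if letter == 'r' else 1 for letter in rl_word)

# Four vertices at level 1 of some plane 0-1-2 tree, given by rl-words.
same_level = ['lr', 'rll', 'r', 'rl']
print(sorted(same_level, key=rl_key))  # ['r', 'rl', 'rll', 'lr']
```

In this example $r\ell$ and $r\ell\ell$ follow $r$ by the second rule, while $\ell r$ comes last by the third rule, since it lies in the left subtree of the root and the other three vertices lie in its right subtree.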
It suffices to show that for vertices at the same level, the second and third rules amount to ordering their $r\ell$-words in the left-to-right lexicographic order as follows: the letter $r$ precedes the letter $\ell$ and each word is succeeded by all words obtained by appending any number of letters $\ell$ at their right end. Consider the $r\ell$-words $RL(u)$ and $RL(v)$ encoding the vertices $u$ and $v$ at the same level. Let us compare their letters left to right and stop where we see the first difference. Two possibilities arise: 1. One of the two words, say $RL(u)$, is an initial segment of the other. Since $u$ and $v$ are at the same level, in this case we must have $RL(v)=RL(u)\ell^k$ for some $k$. In this case $v$ is in the left subtree of $u$ and by the second rule $v$ must have a higher number than $u$. 2. Neither of the two words is an initial segment of the other one, and (without loss of generality) the leftmost letter that is different is $r$ in $RL(u)$ and $\ell$ in $RL(v)$. In other words, we have $RL(u)=RL(w)r x$ and $RL(v)=RL(w)\ell y$ for some $r\ell$-words $x$ and $y$ and a vertex $w$ whose $r\ell$-word is the longest common initial segment of $RL(u)$ and $RL(v)$. In this case $u$ is in the right subtree of $w$, $v$ is in the left subtree of $w$ and the third rule is applicable. Note that the above two cases, as well as the premises of the second and third rules in Definition [Definition 33](#definition_levelwise){reference-type="ref" reference="definition_levelwise"}, are mutually exclusive and represent a complete enumeration of all possibilities. We have shown that the rules in Definition [Definition 33](#definition_levelwise){reference-type="ref" reference="definition_levelwise"} are logically equivalent to defining the above lexicographic order on the $r\ell$-words of the vertices at the same level. ◻ **Proposition 35**. *The Foata-Strehl tree of a short $k$-Catalan--Spitzer permutation of order $n$ is levelwise numbered. 
[\[proposition_fs_levelwise\]]{#proposition_fs_levelwise label="proposition_fs_levelwise"}* *Proof.* By Lemma [Lemma 29](#lemma_levels){reference-type="ref" reference="lemma_levels"} the Foata--Strehl tree of a short $k$-Catalan--Spitzer permutation $\sigma$ satisfies the first condition of Definition [Definition 33](#definition_levelwise){reference-type="ref" reference="definition_levelwise"}. Consider two letters $\sigma(i)$ and $\sigma(j)$ at the same level. By Lemma [Lemma 29](#lemma_levels){reference-type="ref" reference="lemma_levels"} there is a $k$-Catalan path $P$ inducing $\sigma$ in which $\sigma(i)$ and $\sigma(j)$ label up steps at the same level. If the vertex labeled $\sigma(j)$ is in the left subtree of the vertex labeled $\sigma(i)$ then $\sigma(j)$ precedes $\sigma(i)$ in $\sigma$, that is, we have $j<i$ and $\sigma(i)<\sigma(j)$ must hold as up steps at the same level are numbered in the right to left order. Similarly, if there is a vertex labeled $\sigma(k)$ such that the vertex labeled $\sigma(i)$, respectively $\sigma(j)$, is in its right, respectively left subtree, then we must have $j<k<i$ and $\sigma(i)<\sigma(j)$ must hold. ◻ We conclude this section with a theorem completely describing short $k$-Catalan--Spitzer permutations in terms of their Foata--Strehl trees. **Theorem 36**. *A permutation $\sigma=\sigma(1)\sigma(2)\cdots \sigma(n)$ is a short Catalan--Spitzer permutation if and only if its Foata--Strehl tree ${\mathcal FS}(\sigma)$ is levelwise numbered. It is also a short $k$-Catalan--Spitzer permutation if and only if its Foata--Strehl tree ${\mathcal FS}(\sigma)$ also has the following additional property: in each longest path containing only edges between parent and right child, the number of vertices is a multiple of $k-1$. 
[\[theorem_FS_characterization\]]{#theorem_FS_characterization label="theorem_FS_characterization"}* *Proof.* By Proposition [\[proposition_csp_injection\]](#proposition_csp_injection){reference-type="ref" reference="proposition_csp_injection"} the number of short Catalan--Spitzer permutations of order $n$ is the Catalan number $C_n$. By Proposition [\[proposition_fs_levelwise\]](#proposition_fs_levelwise){reference-type="ref" reference="proposition_fs_levelwise"} the Foata--Strehl tree ${\mathcal FS}(\sigma)$ of each short Catalan--Spitzer permutation must be levelwise numbered. By Proposition [\[proposition_unique\]](#proposition_unique){reference-type="ref" reference="proposition_unique"} each plane $0-1-2$ tree has exactly one levelwise numbering, and the number of plane $0-1-2$ trees on $n$ vertices is also the Catalan number $C_n$. Therefore the set of Foata--Strehl trees of all short Catalan--Spitzer permutations of order $n$ must equal the set of all levelwise numbered plane $0-1-2$ trees on $n$ vertices. To prove the second statement, observe that each $k$-Catalan path of order $n$ may be transformed into a Catalan path of order $(k-1)n$ by replacing each down step $(1,1-k)$ with a run of $(k-1)$ consecutive down steps $(1,-1)$. Under this correspondence $k$-Catalan paths of order $n$ bijectively correspond to those Catalan paths of order $(k-1)n$ in which the length of each longest run of consecutive down steps is a multiple of $(k-1)$. Using this correspondence it is easy to see that a short Catalan--Spitzer permutation of order $n$ is also a short $k$-Catalan--Spitzer permutation of order $n$ if and only if for each $i\in \{1,2,\ldots,n-1\}$ the difference between the level of $\sigma(i)$ and the level of $\sigma(i+1)$ is a multiple of $k-1$. This condition is equivalent to the condition stated in the theorem. The details are left to the reader. 
◻ # A restricted Foata--Strehl group action and its enumerative consequences {#section_restricted_Foata_Strehl} Foata--Strehl trees were first defined [@Foata-Strehl1; @Foata-Strehl2] to introduce the *Foata--Strehl group action* on permutations of order $n$. This ${\mathbb Z}_2^{n-1}$ action is generated by $n-1$ commuting involutions $\phi_1,\phi_2,\ldots,\phi_{n-1}$, where $\phi_i$ swaps the left and right subtrees of the vertex labeled $i$ in the Foata--Strehl tree of each permutation. (Both subtrees may be empty.) Theorem [\[theorem_FS_characterization\]](#theorem_FS_characterization){reference-type="ref" reference="theorem_FS_characterization"} provides a characterization of short $k$-Catalan--Spitzer permutations in terms of their Foata--Strehl trees. Unfortunately in most cases the Foata--Strehl action destroys the levelwise numbering, except for some special situations. In this section we focus on such a special situation, introduce a subgroup of the Foata--Strehl group action, and observe how it may be used in proving identities in enumerative combinatorics beyond the world of Catalan objects. **Definition 37**. *Let $X$ be an ordered alphabet, $x\in X$ and $w$ a permutation of $X$. We say that $w$ is $x$-decomposable if it may be written in the form $w=w_1w_2w_3$ such that the following hold:* 1. *All letters of $w_1$ and $w_3$ are less than $x$;* 2. *all letters of $w_2$ are greater than or equal to $x$;* 3. *the letter $x$ is either the first or the last letter of $w_2$.* *Under the above circumstances we call the decomposition $w=w_1w_2w_3$ the *$x$-decomposition* of $w$. An *$x$-flip* consists of moving the letter $x$ from one end of the subword $w_2$ to its other end.* When $x=\max(X)$ is the largest element of the alphabet $X$, any permutation is *trivially $x$-decomposable* since $w_2$ must consist of the single letter $x$. For this $x$, the $x$-flip is the identity map. 
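Definition 37 is straightforward to implement. The following Python sketch (ours, for illustration only; the helper names are not from the paper) computes the $x$-decomposition of a permutation, given as a list of distinct integers, and performs the $x$-flip:

```python
def x_decomposition(w, x):
    """Return (w1, w2, w3) if w is x-decomposable, else None."""
    positions = [i for i, letter in enumerate(w) if letter >= x]
    lo, hi = min(positions), max(positions)
    w2 = w[lo:hi + 1]
    if any(letter < x for letter in w2):   # the letters >= x must be contiguous
        return None
    if w2[0] != x and w2[-1] != x:         # x must sit at an end of w2
        return None
    return w[:lo], w2, w[hi + 1:]

def x_flip(w, x):
    """Move x to the other end of w2; the identity map when w is not
    x-decomposable, and also when x = max(w), since then w2 = [x]."""
    d = x_decomposition(w, x)
    if d is None:
        return list(w)
    w1, w2, w3 = d
    inner = w2[1:] if w2[0] == x else w2[:-1]
    w2 = inner + [x] if w2[0] == x else [x] + inner
    return w1 + w2 + w3

print(x_flip([2, 4, 5, 3, 1], 3))  # [2, 3, 4, 5, 1]
```

Each $x$-flip is an involution: applying it twice returns the original word.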
For all other $x\in X$ an $x$-decomposition must be *nontrivial* as $w_2$ must contain all letters larger than $x$. The application of an $x$-flip results in a different word. The verification of the following equivalent description is left to the reader. **Proposition 38**. *For $x<\max(X)$ the permutation $w$ is $x$-decomposable if and only if its Foata--Strehl tree ${\mathcal FS}(w)$ satisfies the following:* 1. *exactly one of the right and left subtrees of $x$ is empty;* 2. *the set of descendants of $x$ contains all letters larger than $x$.* *If $w$ has an $x$-decomposition, an $x$-flip is precisely the application of the operation $\phi_x$ of the Foata--Strehl group action.* An important consequence of Proposition [Proposition 38](#proposition_decomposition_FS){reference-type="ref" reference="proposition_decomposition_FS"} is that $x$-flips and $y$-flips commute the same way the generators of the Foata--Strehl group action commute. **Corollary 39**. *If a word $w$ is simultaneously $x$-decomposable and $y$-decomposable then the same holds for $\phi_x(w)$, $\phi_y(w)$ and $\phi_x\phi_y(w)$. Furthermore, $\phi_x\phi_y(w)=\phi_y\phi_x(w)$.* **Definition 40**. *Given an ordered alphabet $X$, we extend the $x$-flip operations $\phi_x$ to all permutations $w$ of $X$ by setting $\phi_x(w)=w$ whenever $w$ is not $x$-decomposable. We call the group action generated by the operations $\{\phi_x : x<\max(X)\}$ the *restricted Foata--Strehl group action* on the permutations of $X$.* Our interest in $x$-decompositions and $x$-flips is due to the following result. **Theorem 41**. *If a short Catalan--Spitzer permutation $\sigma$ of order $n$ is $i$-decomposable for some $i<n$ then its $i$-flip $\phi_i(\sigma)$ is also a short Catalan--Spitzer permutation.
[\[theorem_csp_orbits\]]{#theorem_csp_orbits label="theorem_csp_orbits"}* *Proof.* By Proposition [Proposition 38](#proposition_decomposition_FS){reference-type="ref" reference="proposition_decomposition_FS"} the set of descendants of $i$ is the set $\{i+1,i+2,\ldots,n\}$ and all of them are in the subtree of $i+1$, which is the only child of $i$. If $i+1$ is the right child of $i$ then the level of $i+1$ is one more than the level of $i$, $i$ is the largest letter at its level and $i+1$ is the smallest letter at its level. Moving $i+1$ to the left of $i$ results in merging the levels of $i$ and $i+1$ into a single level: the levels of the labels larger than $i+1$ uniformly decrease by one. For all other labels, the sets of labels at the same level remain unchanged. Consider two vertices $u$ and $v$ whose labels belong to the merged level. If the labels of $u$ and $v$ are both greater than $i$, then the shortest path leading to both contains the vertex $i+1$: moving $i+1$ to the left induces changing a letter $r$ into a letter $\ell$ in the same position in both $RL(u)$ and $RL(v)$. Such a change does not affect the relative order of the two $r\ell$-words in the lexicographic order. If the labels of $u$ and $v$ are both at most $i$, then $RL(u)$ and $RL(v)$ remain unchanged after moving $i+1$ to the left of $i$. Consider finally the case when the label of $u$ is at most $i$ and the label of $v$ is greater than $i$. When the vertex labeled $i+1$ is the right child of the vertex labeled $i$ then the label of $v$ is more than the label of $u$ because they are at different levels. When we move the vertex labeled $i+1$ to the left, the vertex $v$ becomes a vertex in the left subtree of the vertex labeled $i$, hence its label must still be greater than $i$ and also greater than the label of the vertex $u$ (whose $r\ell$-word remains unchanged).
We have shown that applying an $i$-flip to an $i$-decomposable Catalan--Spitzer permutation that moves $i+1$ from the right to the left results in a Catalan--Spitzer permutation. The proof of the converse is analogous and left to the reader. ◻ As a consequence of Theorem [\[theorem_csp_orbits\]](#theorem_csp_orbits){reference-type="ref" reference="theorem_csp_orbits"} the set of all Catalan--Spitzer permutations of order $n$ may be partitioned into orbits of the restricted Foata--Strehl group action. A natural question arises: what is the number of such orbits? We answer this in the greatest generality. A *class of permutations* $\mathcal P$ is a rule assigning to each finite ordered set $X$ a subset ${\mathcal P}_{|X|}(X)$ of its permutations in such a way that membership of $w$ in ${\mathcal P}_{|X|}(X)$ depends only on the pattern of $w$. For brevity, we will say that a permutation $w$ *belongs to the class ${\mathcal P}$* if $w$ is an element of ${\mathcal P}_{|X|}(X)$ for some finite set $X$. **Definition 42**. *A class of permutations is *compatible with the restricted Foata--Strehl group action* if it satisfies the following: for each $x\in X$ and for each $x$-decomposable $w\in {\mathcal P}_{|X|}(X)$ the following holds:* 1. *The permutation $\phi_x(w)$ belongs to the class ${\mathcal P}$.* 2. *If $w_1w_2w_3$ is the $x$-decomposition of $w$ then $w_2$ and $w_1w_3$ also belong to the class ${\mathcal P}$.* For a class of permutations ${\mathcal P}$ that is compatible with the restricted Foata--Strehl group action, let $P_n$, respectively $O_n$, be the number of permutations in ${\mathcal P}(\{1,2,\ldots,n\})$, respectively the number of orbits of the restricted Foata--Strehl group action on the set ${\mathcal P}(\{1,2,\ldots,n\})$. We introduce the ordinary generating functions $$P(x)=\sum_{n\geq 1} P_n\cdot x^n \quad \mbox{and}\quad O(x)=\sum_{n\geq 1} O_n\cdot x^n.
\label{equations_gfns}$$ We consider these generating functions as formal power series from ${\mathbb Q}[[x]]$. Our first general result is the following. **Theorem 43**. *The generating functions $P(x)$ and $O(x)$ satisfy $$\label{equation_po} P(x)=\frac{O(x)}{1-O(x)}$$ or, equivalently, $$\label{equation_op} O(x)=\frac{P(x)}{1+P(x)}$$ [\[theorem_op\]]{#theorem_op label="theorem_op"}* *Proof.* We only need to show ([\[equation_po\]](#equation_po){reference-type="ref" reference="equation_po"}) as equation ([\[equation_op\]](#equation_op){reference-type="ref" reference="equation_op"}) is algebraically equivalent. For each $\sigma\in {\mathcal P}(\{1,2,\ldots,n\})$ denote the orbit of $\sigma$ under the restricted Foata--Strehl group action by $[\sigma]$. To each orbit $[\sigma]$ we may associate a set $I([\sigma])\subseteq \{1,2,\ldots,n-1\}$ such that each permutation in the orbit is $i$-decomposable if and only if $i$ is an element of $I([\sigma])$. The size of the orbit will be $2^{|I([\sigma])|}$. We say that $\sigma$ is a *distinguished orbit representative* if for each $i\in I([\sigma])$, the letter $i+1$ is to the right of $i$ in $\sigma$. Equivalently, in the Foata--Strehl tree ${\mathcal FS}(\sigma)$ of $\sigma$, each $i\in I([\sigma])$ has a right child and not a left child. Clearly there is exactly one distinguished orbit representative in each orbit. For all permutations $\sigma\in {\mathcal P}(\{1,2,\ldots,n\})$ that are not distinguished orbit representatives, there is a unique smallest $k\in I([\sigma])$ such that $k+1$ is the left child of $k$ and $i+1$ is a right child for all $i\in I([\sigma])$ satisfying $i<k$. The removal of the left subtree of $k$ results in the Foata--Strehl tree of a distinguished orbit representative in ${\mathcal P}(\{1,2,\ldots,k\})$, whereas the left subtree of $k$ is the Foata--Strehl tree of a permutation in ${\mathcal P}(\{k+1,k+2,\ldots,n\})$. The two permutations may be selected independently and determine $\sigma$ uniquely.
This observation justifies the formula $$P_n=O_n+\sum_{k=1}^{n-1} O_{k}\cdot P_{n-k}.$$ The stated formula for the generating functions follows immediately. ◻ **Example 44**. **If ${\mathcal P}$ is the class of short Catalan-Spitzer permutations then $P(x)=C(x)-1$ where $$C(x)= \sum_{n \geq 0} C_{n} \cdot x^{n} = \frac{1-\sqrt{1-4x}}{2x}$$ is the generating function of the Catalan numbers. Equation ([\[equation_op\]](#equation_op){reference-type="ref" reference="equation_op"}) gives $O(x)=\frac{C(x)-1}{C(x)}=x\cdot C(x)$. Hence $O_n=C_{n-1}$.* * **Example 45**. **If ${\mathcal P}$ is the class of all permutations then $P(x)=\sum_{n\geq 1} n!x^n$. Equation ([\[equation_op\]](#equation_op){reference-type="ref" reference="equation_op"}) gives the ordinary generating function of the indecomposable permutations. The numbers of these are listed as sequence A003319 in [@OEIS].* * To refine Theorem [\[theorem_op\]](#theorem_op){reference-type="ref" reference="theorem_op"} let $P_{n,k}$ denote the number of permutations belonging to an orbit of size $2^k$ of the restricted Foata--Strehl group action on ${\mathcal P}(\{1,2,\ldots,n\})$ and let $O_{n,k}$ be the number of orbits of size $2^k$. Clearly we have $$\label{equation_orbits} O_{n,k}=\frac{P_{n,k}}{2^k}.$$ We introduce the generating functions $$P(x,y)=\sum_{n\geq 1, k\geq 0} P_{n,k} x^ny^k\quad\mbox{and}\quad O(x,y)=\sum_{n\geq 1, k\geq 0} O_{n,k} x^ny^k.$$ **Theorem 46**. *The generating functions $P(x,y)$ and $O(x,y)$ are given by $$\begin{aligned} \label{equation_pxy} P(x,y) & = \frac{P(x)}{1-2(y-1)P(x)} , \\ \label{equation_oxy} O(x,y) & = \frac{P(x)}{1-(y-2)P(x)} .\end{aligned}$$ [\[theorem_op_refined\]]{#theorem_op_refined label="theorem_op_refined"}* *Proof.* We use the notation $I([\sigma])$ introduced in the proof of Theorem [\[theorem_op\]](#theorem_op){reference-type="ref" reference="theorem_op"}. 
Given any $\sigma\in {\mathcal P}(\{1,2,\ldots,n\})$ and any element $i_1$ of $I([\sigma])$, the permutation $\sigma$ may be written as a concatenation of words $$\sigma=\sigma_0\sigma_1\sigma_0', \label{equation_sigma_decomposition}$$ where $\sigma_0 i_1 \sigma_0'$ is an element of ${\mathcal P}(\{1,2,\ldots,i_1\})$ and $\sigma_1$ is an element of ${\mathcal P}(\{i_1,i_1+1,\ldots,n\})$ containing $i_1$ as the first or the last letter. At the level of Foata--Strehl trees, ${\mathcal FS}(\sigma)$ may be obtained by selecting a levelwise labeled plane $0-1-2$ tree with $i_1$ vertices and then adding any Foata--Strehl tree with $n-i_1$ vertices as the right or left subtree of $i_1$. Iterating the procedure, for any subset $\{i_1,i_2,\ldots,i_k\}$ of $I([\sigma])$ satisfying $i_1<i_2<\cdots<i_k$ we may decompose ${\mathcal FS}(\sigma)$ into a sequence of Foata--Strehl trees $(T_0,T_1,\ldots,T_{k})$ such that 1. $T_1={\mathcal FS}(\sigma_0i_1\sigma_0')$ for some $\sigma_0i_1\sigma_0'\in {\mathcal P}(\{1,2,\ldots,i_1\})$; 2. for $j=2,3,\ldots,k-1$ the labeled tree $T_j={\mathcal FS}(\sigma_{j-1}i_j\sigma_{j-1}')$ for some $\sigma_{j-1}i_j\sigma_{j-1}'\in \widehat{{\mathcal P}}(\{i_{j-1}+1,i_{j-1}+2,\ldots,i_j\})$; 3. $T_k={\mathcal FS}(\sigma_{k-1}i_k\sigma_{k-1}')$ for some $\sigma_{k-1}i_k\sigma_{k-1}'\in {\mathcal P}(\{i_{k-1}+1,i_{k-1}+2,\ldots,i_k\})$, *or it may be empty*; 4. for $j=2,\ldots,k$ the labeled tree $T_j$ is the right or left subtree of $i_j$. Conversely, if ${\mathcal FS}(\sigma)$ is decomposed into a sequence of Foata--Strehl trees $(T_0,T_1,\ldots,T_{k})$ satisfying the above criteria then $\{i_1,i_2,\ldots,i_k\}$ must be a subset of $I([\sigma])$.
Introducing the variable $y$ to mark the selected elements of $I([\sigma])$, we obtain the formula $$P(x,1+y)=\sum_{n\geq 1,j\geq 0} P_{n,j} x^n(1+y)^j= (1+P(x))\cdot \sum_{k\geq 0} (2y)^k P(x)^k= \frac{1+P(x)}{1-2yP(x)}.$$ Equation ([\[equation_pxy\]](#equation_pxy){reference-type="ref" reference="equation_pxy"}) follows by replacing $y$ with $y-1$ in the last equation. To obtain ([\[equation_oxy\]](#equation_oxy){reference-type="ref" reference="equation_oxy"}), by equation ([\[equation_orbits\]](#equation_orbits){reference-type="ref" reference="equation_orbits"}), we only need to substitute $y/2$ into $y$ in ([\[equation_pxy\]](#equation_pxy){reference-type="ref" reference="equation_pxy"}). ◻ Note that Theorem [\[theorem_op\]](#theorem_op){reference-type="ref" reference="theorem_op"} may be obtained by substituting $y=1$ in ([\[equation_oxy\]](#equation_oxy){reference-type="ref" reference="equation_oxy"}). **Example 47**. **For Catalan--Spitzer permutations we have $P(x)=C(x)-1$ and $$\begin{aligned} O(x,y)&=&\frac{C(x)-1}{1-(y-2)(C(x)-1)}\\ &=& x + yx^2 + (y^2 + 1)x^3 + (y^3 + 2y + 2)x^4 + (y^4 + 3y^2 + 4y + 6)x^5\\ && + (y^5 + 4y^3 + 6y^2 + 13y + 18)x^6+\cdots \\ \end{aligned}$$ Substituting $y=1$ respectively $y=2$ gives $O(x)=xC(x)$, respectively $P(x)=C(x)-1$. The substitutions $y=3,4,5,6$ are listed as sequences A001700, A049027, A076025, A076026 in [@OEIS]. The generating functions listed for these sequences are all substitutions into $y$ in $\frac{1 - (y - 2)xC(x)}{1 - (y - 1)xC(x)}$ which is easily seen to be equal to $1+O(x,y)$.** **Example 48**. **For all permutations we have $P(x)=\sum_{n\geq 1} n! x^n$ and $$\begin{aligned} O(x,y)&=&\frac{P(x)}{1-(y-2)P(x)}\\ &=& x + yx^2 + (y^2 + 2)x^3 + (y^3 + 4y + 8)x^4 + (y^4 + 6y^2 + 16y + 48)x^5\\ && + (y^5 + 8y^3 + 24y^2 + 100y + 328)x^6+\cdots \end{aligned}$$ Substituting $y=1$ respectively $y=2$ gives $O(x)$, respectively $P(x)$. The substitution $y=3$ is listed as sequence A051296 in [@OEIS]. 
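Expansions like the one in Example 48 are easy to check by machine. The following Python sketch (ours, for verification only) expands $O(x,3)=P(x)/(1-P(x))$ for the class of all permutations, with $P_n=n!$, as a truncated power series; its coefficients $1,3,11,47,231,\ldots$ agree with the $y=3$ specialization of the polynomial displayed above, that is, with sequence A051296:

```python
from math import factorial

N = 6  # truncation order

def mul(a, b):
    """Product of two power series given by coefficient lists, modulo x^(N+1)."""
    c = [0] * (N + 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j <= N:
                c[i + j] += ai * bj
    return c

P = [0] + [factorial(n) for n in range(1, N + 1)]  # P(x) = sum_{n>=1} n! x^n

# O(x,3) = P(x)/(1 - P(x)) = sum_{k>=1} P(x)^k, truncated at order N
O3 = [0] * (N + 1)
power = [1] + [0] * N                              # P(x)^0
for _ in range(N):
    power = mul(power, P)
    O3 = [s + t for s, t in zip(O3, power)]

print(O3)  # [0, 1, 3, 11, 47, 231, 1303]
```

The coefficient of $x^n$ here is the sum of $n_1!\cdots n_k!$ over all compositions $n_1+\cdots+n_k=n$, matching the "ordered collections of permutations" interpretation discussed next.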
The presence of this substitution is not surprising: $O(x,3)=P(x)/(1-P(x))$ is always the generating function for the ordered collections of permutations in the class ${\mathcal P}$.* * # Concluding remarks Are there any other results on the distribution of the quantity $\alpha$ for more general paths from the origin $(0,0)$ to $(s,0)$? For instance, what can be said if we have one type of up step, but two types of down steps? Are there other Fuss--Catalan structures that belong to a larger set of structures of cardinality $\binom{kn+1}{n}$ with a uniformly distributed statistic on the set $\{0,1, \ldots, kn\}$ such that the Fuss--Catalan structure is the fiber of one particular value of this statistic? # Acknowledgments {#acknowledgments .unnumbered} This work was partially supported by grants from the Simons Foundation (\#429370 to Richard Ehrenborg, \#245153 and \#514648 to Gábor Hetyei, \#422467 to Margaret Readdy). Margaret Readdy was also supported by NSF grant DMS-2247382. K. L. Chung and W. Feller, On fluctuations in coin tossing, *Proc. Natl. Acad. Sci. USA* **35** (1949), 605--608. P. Flajolet, Combinatorial aspects of continued fractions, *Discrete Math.* **32** (1980), 125--161. D. Foata and V. Strehl, Rearrangements of the symmetric group and enumerative properties of the tangent and secant numbers, *Math. Z.* **137** (1974), 257--264. D. Foata and V. Strehl, Euler numbers and variations of permutations, in: Colloquio Internazionale sulle Teorie Combinatoire 1973, Tome I (Atti Dei Convegni Lincei **17**, 119--131), Accademia Nazionale dei Lincei 1976. R. Graham, D. E. Knuth and O. Patashnik, "Concrete mathematics. A foundation for computer science. Second Edition," Addison--Wesley Publishing Company, Reading, MA, 1994. A. Huq, Generalized Chung-Feller theorems for lattice paths, Thesis (Ph.D.)-Brandeis University. 2009. 87 pp. ISBN: 978-1109-31080-1 C.
Krattenthaler, Lattice path enumeration, in: Handbook of enumerative combinatorics, Edited by Miklós Bóna, 589--678, Discrete Math. Appl. (Boca Raton), CRC Press, Boca Raton, FL, 2015. OEIS Foundation Inc., *The On-Line Encyclopedia of Integer Sequences*, `http://oeis.org`. G. N. Raney, Functional composition patterns and power series reversion, *Trans. Amer. Math. Soc.* **94** (1960), 441--451. F. Spitzer, A combinatorial lemma and its application to probability theory, *Trans. Amer. Math. Soc.* **82** (1956), 323--339. R. P. Stanley, "Enumerative Combinatorics, Vol. II," Cambridge University Press, 1999.
{ "id": "2310.06288", "title": "Catalan-Spitzer permutations", "authors": "Richard Ehrenborg, G\\'abor Hetyei and Margaret Readdy", "categories": "math.CO", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | In this paper we prove the local and global well-posedness of the time fractional abstract Schrödinger type evolution equation ($iD_t^\alpha u+Au+F(u)=0$) on the Hilbert space and, as an application, we prove the local and global well-posedness of the fractional dispersive equation with static potential ($D_t^\alpha u-iP(D)u-iqu-iVu+F(u)=0$) under the only assumption that the symbol $P(\xi)$ of $P(D)$ behaves like $\left\lvert\xi\right\rvert^m$ for $\lvert\xi\rvert\to\infty$. In the appendix, we also give the Hölder regularities and the asymptotic behaviors of the mild solution to the linear time fractional abstract Schrödinger type equation ($iD_t^\alpha u+Au+F(t)=0$). Because of the lack of the semigroup properties of the solution operators, we employ a strategy of proof based on the spectral theorem for selfadjoint operators and the asymptotic behaviors of the Mittag-Leffler functions. address: School of Mathematics and Computational Science, Xiangtan University author: - Mingxuan He - Na Deng bibliography: - Reference.bib title: On the fractional abstract Schrödinger type evolution equations on the Hilbert space and its applications to the fractional dispersive equations --- Local and global well-posedness, Time fractional abstract Schrödinger type equation, Fractional dispersive equation, The spectral theorem, The perturbation of the selfadjoint operator, Hölder regularity, Asymptotic behavior # Introduction ## Background and main results In the last decades, fractional calculus has attracted great interest from mathematicians and has been proved useful in physics, engineering and economics.
For more details about the fractional derivatives, we refer readers to [@Theory-and-Applications-of-Fractional-Differential-Equations-Chapter2-Fractional-integrals-and-fractional-derivatives; @Fractional-Calculus-and-Waves-in-Linear-Viscoelasticity; @Fractional-differential-equations-an-introduction-to-fractional-derivatives-fractional-differential-equations-to-methods-of-their-solution-and-some-of-their-applications] and we will give a brief introduction of them in $\ref{9.1Fractionl integral and derivative definition appendix}$. The purpose of this paper is to consider the well-posedness of the fractional abstract Schrödinger type evolution equation $$\begin{cases} iD_t^\alpha u+Au+F(u)=0,\quad t>0\\ u(0)=x, \end{cases} \label{nonlinear Schrodinger equation Hilbert}$$ on a separable Hilbert space $H$ with some suitable regularity hypotheses on $F$ and $x$. $A$ is a selfadjoint operator in $H$. To state the hypotheses on $F$, we first introduce a function space $C_q[0,\infty)$: we say a continuous, nondecreasing and nonnegative function $w$ is in $C_q[0,\infty)$ if $w:[0,\infty)\to[0,\infty)$ satisfies $w(0)=0$ and $w(\sigma)\neq0$ when $\sigma\neq0$ and there exists an $\varepsilon>0$ such that $$\int_1^\infty\frac{\sigma^{\frac{1}{\alpha}+\varepsilon-1}}{w(\sigma)^{\frac{1}{\alpha}+\varepsilon}}d\sigma=q.$$ Hence we can state the hypotheses on $F$ as: **Assumption 1**. *$F(0)=0$. If $\left\lVert u(t)\right\rVert_{D(A)}$ and $\left\lVert v(t)\right\rVert_{D(A)}$ are bounded on $I\subset[0,\infty)$ a.e., then $\left\lVert F(u)-F(v)\right\rVert_{D(A)}\leq C\left\lVert u(t)-v(t)\right\rVert_{D(A)}$ a.e. on $I$ where $C$ is dependent on the initial data $u(0),v(0)$ with the norm $\rho(\cdot)$ and the essential upper bound of $\left\lVert u(t)\right\rVert_{D(A)}$ and $\left\lVert v(t)\right\rVert_{D(A)}$ on $I$. [\[AssumptionA\'\]]{#AssumptionA' label="AssumptionA'"}*
*There exists a $w\in C_\infty[0,\infty)$ and a positive constant $C$ which depends on the initial data $u(0)$ with the norm $\varrho(\cdot)$ such that $\left\lVert F(u)\right\rVert_{D(A)}\leq Cw\left(\left\lVert u(t)\right\rVert_{D(A)}\right)$ pointwisely in $t$. [\[AssumptionB\'\]]{#AssumptionB' label="AssumptionB'"}* In Section $\ref{9.17Proof of Theorem unique mild solution X}$, $\ref{Proof of Theorem continuation and blow up alternative}$ and $\ref{Proof of Theorem unique global solution X}$, we will prove the following results. **Theorem 1** (local well-posedness). *Let Assumption $\ref{AssumptionA'}$ hold and $x\in D(A)$ such that $\left\lvert x\right\rvert_1:=\max\left\{\left\lVert x\right\rVert_{D(A)},\rho(x)\right\}<\infty$. There exists a positive number $T$ which depends only on $\left\lVert x\right\rVert_{D(A)}$ and $\rho(x)$ such that $(\ref{nonlinear Schrodinger equation Hilbert})$ admits a unique strict solution $u(t)$ on $[0,T]$ in the class $$u\in C\left([0,T];D(A)\right),\mathbf{D}_t^\alpha(u-x)\in C\left([0,T];H\right).$$ Moreover, if $u(t), v(t)$ are the strict solutions of $(\ref{nonlinear Schrodinger equation Hilbert})$ with the initial data $x, y$ respectively, then there exists a positive constant $C$ which depends on $\rho(x),\rho(y)$ and $\left\lVert u\right\rVert_{L^\infty\left((0,T);D(A)\right)},\left\lVert v\right\rVert_{L^\infty\left((0,T);D(A)\right)}$ such that $$\left\lVert u(t)-v(t)\right\rVert_{D(A)}\leq CE_{\alpha,1}\left(\Gamma(\alpha)t^\alpha\right)\left\lVert x-y\right\rVert_{D(A)}. \label{unique mild solution X theorem equation1}$$ [\[unique mild solution X\]]{#unique mild solution X label="unique mild solution X"}* **Theorem 2** (continuation and blow-up alternative). *Let the assumptions in Theorem $\ref{unique mild solution X}$ hold and $u$ be the strict solution of $(\ref{nonlinear Schrodinger equation Hilbert})$ on $[0,T]$. 
Then $u$ can be extended to a maximal interval $[0,T_{\max})$ uniquely such that $$u\in C\left([0,T_{\max});D(A)\right),\quad\mathbf{D}_t^\alpha(u-x)\in C\left([0,T_{\max});H\right)$$ and $T_{\max}<\infty$ implies $\lim\limits_{t\uparrow T_{\max}}\left\lVert u(t)\right\rVert_{D(A)}=\infty$. [\[continuation and blow up alternative\]]{#continuation and blow up alternative label="continuation and blow up alternative"}* **Theorem 3** (global well-posedness). *Let the assumptions in Theorem $\ref{unique mild solution X}$ and Assumption $\ref{AssumptionB'}$ hold. If $x\in D(A)$ satisfies $\left\lvert x\right\rvert_2:=\max\left\{\left\lVert x\right\rVert_{D(A)},\rho(x),\varrho(x)\right\}<\infty$, then $(\ref{nonlinear Schrodinger equation Hilbert})$ admits a unique strict solution $u(t)$ on $[0,\infty)$ in the class $$u\in C\left([0,\infty);D(A)\right),\quad\mathbf{D}_t^\alpha(u-x)\in C\left([0,\infty);H\right).$$ That is, the strict solution in Theorem $\ref{unique mild solution X}$ is global. [\[unique global solution X\]]{#unique global solution X label="unique global solution X"}* **Remark 1**. *You can find the notion of the solution of $(\ref{nonlinear Schrodinger equation Hilbert})$ in Definition $\ref{linear mild solution H}$, Definition $\ref{linear classical solution H}$ and Definition $\ref{linear strict solution H}$.* In Section $\ref{9.16Application. The well-posedness of the fractional dispersive equation}$, we shall show that these theorems are applicable to the very general fractional dispersive equation $$\begin{cases} D_t^\alpha u-iP(D)u-iqu-iVu+F(u)=0,\quad&x\in\mathbb{R}^n,\;t>0\\ u(0,x)=u_0(x),\quad&x\in\mathbb{R}^n \end{cases}. \label{9.1 very general dispersive equation motivated}$$ Here $q\in L^2(\mathbb{R}^n)$ and $V\in L^\infty(\mathbb{R}^n)$ are both real-valued functions.
$P(D)$ is defined via its real symbol, that is, $P(D)u=\mathscr{F}^{-1}\left(P(\xi)\mathscr{F}u\right)$, and $P(\xi)\in C\left(\mathbb{R}^n;\mathbb{R}\right)$ behaves like $\left\lvert\xi\right\rvert^m (m>\frac{n}{2})$ as $\lvert\xi\rvert\to\infty$. Here $\mathscr{F}$ denotes the Fourier transform and $\mathscr{F}^{-1}$ the inverse Fourier transform. Note that no assumption is made on the behaviour of $P(\xi)$ for small $\xi$ except continuity. For results in the integer-order case ($\alpha=1$), see Constantin and Saut[@Local-smoothing-properties-of-dispersive-equations] and Kenig, Ponce and Vega[@Oscillatory-Integrals-and-Regularity-of-Dispersive-Equations]. **Remark 2**. *As is easily seen, $(\ref{9.1 very general dispersive equation motivated})$ generalises several well-known fractional dispersive equations, such as $$\begin{aligned} &iD_t^\alpha u+\left(-\Delta\right)^\beta u+q(x)u+V(x)u+\lambda\left\lvert u\right\rvert^{p-1}u=0,\;x\in\mathbb{R}^n,\;t>0,\label{space time fractional Schrodinger equation motivated}\\ &D_t^\alpha u+\partial_x^3u+u^m\partial_xu=0,\quad x\in\mathbb{R},\;t>0,\label{time fractional KdV equation motivated}\\ &D_t^\alpha u+H\partial_x^2u+u^m\partial_xu=0,\quad x\in\mathbb{R},\;t>0.\label{time fractional BO equation motivatied}\end{aligned}$$ In $(\ref{space time fractional Schrodinger equation motivated})$, $\left(-\Delta\right)^\beta$ denotes the fractional Laplacian, defined by $\left(-\Delta\right)^\beta u=\mathscr{F}^{-1}\left(\left\lvert\xi\right\rvert^{2\beta}\mathscr{F}u\right)$. In $(\ref{time fractional BO equation motivatied})$, $H$ denotes the Hilbert transform, defined by $Hu=\mathscr{F}^{-1}\left(i\mathop{\mathrm{sgn}}(\xi)\mathscr{F}u\right)$. These equations have been studied by many authors, but not in a more general and abstract setting.
$(\ref{time fractional KdV equation motivated})$ is called the time fractional m-gKdV equation and $(\ref{time fractional BO equation motivatied})$ is called the time fractional modified Benjamin-Ono equation (mBO equation). Research on $(\ref{time fractional KdV equation motivated})$ and $(\ref{time fractional BO equation motivatied})$ has mainly focused on solving them by the variational iteration method[@Variational-iteration-method-for-solving-the-space-and-time-fractional-KdV-equation], the Adomian decomposition method[@Application-of-homotopy-perturbation-method-to-fractional-IVPs], symmetry analysis[@Lie-symmetry-analysis-of-the-time-fractional-KdV-type-equation] and so on. Several works have been devoted to the well-posedness problem for $(\ref{time fractional KdV equation motivated})$ and $(\ref{time fractional BO equation motivatied})$ in the integer-order case ($\alpha=1$); see [@On-the-Korteweg-de-Vries-equation; @Well-Posedness-of-the-Initial-Value-Problem-for-the-Korteweg-de-Vries-Equation; @On-the-local-well-posedness-of-the-Benjamin-Ono-and-modified-Benjamin-Ono-equations; @On-the-global-well-posedness-of-the-Benjamin-Ono-equation]. There are many more studies of $(\ref{space time fractional Schrodinger equation motivated})$, the space-time fractional nonlinear Schrödinger equation with static potential, introduced by Achar, Yale and Hanneken[@Time-Fractional-Schrodinger-Equation-Revisited] in the cases $q,V=0$ or $q=0$. Su, Zhao and Li[@Local-well-posedness-of-semilinear-space-time-fractional-Schrodinger-equation] studied its local well-posedness by estimating the fundamental solution using properties of $H$-functions. If $\beta=1$, it reduces to the time fractional nonlinear Schrödinger equation. Peng, Zhou and Ahmad[@The-well-posedness-for-fractional-nonlinear-Schrodinger-equations] studied its global well-posedness via decay estimates of the solution.
Wang, Zhou and Wei[@Fractional-Schrodinger-equations-with-potential-and-optimal-controls] studied its global well-posedness and some dynamical properties in a bounded domain. In particular, the integer-order case ($\alpha=1$, $\beta=1$) has been studied extensively by mathematicians such as Kato[@On-nonlinear-Schrodinger-equations; @On-nonlinear-Schrodinger-equations-II-HS-solutions-and-unconditional-well-posedness], Cazenave[@Semilinear-Schrodinger-equations], Ginibre and Velo[@On-a-class-of-nonlinear-Schrodinger-equations-I-The-Cauchy-problem-general-case; @The-global-Cauchy-problem-for-the-nonlinear-Schrodinger-equation-revisited] and Bourgain[@Hyperbolic-Equations-and-Frequency-Interactions]. For the well-posedness of the space fractional case ($\alpha=1$), introduced by Laskin[@Fractional-Schrodinger-equation; @Fractional-quantum-mechanics; @Fractional-quantum-mechanics-and-Levy-path-integrals; @Fractals-and-quantum-mechanics], see Guo, Han and Xin[@Existence-of-the-global-smooth-solution-to-the-period-boundary-value-problem-of-fractional-nonlinear-Schrodinger-equation], Guo and Huo[@Global-Well-Posedness-for-the-Fractional-Nonlinear-Schrodinger-Equation] and Hong and Sire[@On-Fractional-Schrodinger-Equations-in-sobolev-spaces]. In addition, there is some disagreement about the time fractionalisation of the Schrödinger equation, namely whether the constant $i$ should also be fractionalised. Naber[@Time-fractional-Schrodinger-equation] used a Wick rotation to raise $i$ to a fractional power of order $\alpha$, which turns the equation into a classical Schrödinger equation with a time-dependent Hamiltonian, and Grande[@Space-Time-Fractional-Nonlinear-Schrodinger-Equation] studied the local well-posedness and local smoothing properties of this formulation.* In Section $\ref{9.17Linear estimate. The well-posedness of the linear equation}$ we will give some required estimates for the linear solution operators.
In $\ref{9.1Fractionl integral and derivative definition appendix}$ and $\ref{9.17On the Mittag-Leffler functions}$ we will give a brief introduction to fractional integrals, fractional derivatives and the Mittag-Leffler function. In $\ref{9.17The perturbation and the spectral theorem of the selfadjoint operators}$ the perturbation of selfadjoint operators will be stated, and we will prove a general spectral theorem for selfadjoint operators for the purpose of estimating the linear solution operators and proving Theorems $\ref{unique mild solution X}$ to $\ref{unique global solution X}$. In $\ref{Some further results of the linear}$ some further results on the linear form of $(\ref{nonlinear Schrodinger equation Hilbert})$ will be given, such as Hölder regularity and asymptotic behaviour. ## Notations The following notations are used without further comment. $$\begin{aligned} &L_T^\infty H=L^\infty\left((0,T);H\right),\quad L_T^\infty D(A)=L^\infty\left((0,T);D(A)\right),\\ &C_T^\alpha H=C^\alpha\left([0,T];H\right),\quad C_{[\delta,T]}^\alpha H=C^\alpha\left([\delta,T];H\right),\\ &L_t^\infty H=L^\infty\left((0,\infty);H\right),\quad L_t^\infty D(A)=L^\infty\left((0,\infty);D(A)\right),\\ &L_{(T_1,T_2)}^\infty H=L^\infty\left((T_1,T_2);H\right),\quad L_{(T_1,T_2)}^\infty D(A)=L^\infty\left((T_1,T_2);D(A)\right).\end{aligned}$$ We write $a\lesssim b$ if there exists a positive number $C$, independent of $\varepsilon$ (see the definition of $C_q[0,\infty)$), of $T$ (local in time), of the norms of the initial data ($\rho(\cdot)$,$\varrho(\cdot)$) and of the essential upper bounds of $\left\lVert u(t)\right\rVert_{D(A)}$, $\left\lVert v(t)\right\rVert_{D(A)}$ (see Assumption $\ref{AssumptionA'}$), such that $a\leq Cb$. We write $a\sim b$ if $b\lesssim a\lesssim b$. We say $u$ lies in the ball of radius $R$ in $Z$ if $u\in Z$ satisfies $\left\lVert u\right\rVert_Z\leq R$.
We denote by $*$ the convolution in time, that is, $$u(t)*v(t)=\int_0^tu(t-\tau)v(\tau)d\tau.$$ We denote by $\vee$ the maximum and $\wedge$ the minimum. [\[9.1 subsection Notations\]]{#9.1 subsection Notations label="9.1 subsection Notations"} # Linear estimate. The well-posedness of the linear equation {#9.17Linear estimate. The well-posedness of the linear equation} We call $(\ref{nonlinear Schrodinger equation Hilbert})$ the linear $(\ref{nonlinear Schrodinger equation Hilbert})$ if $F(u)=F(t)$, that is, if the nonlinearity is replaced by a given function of $t$. In this section, we shall give some estimates of the solution operator to the linear $(\ref{nonlinear Schrodinger equation Hilbert})$ and consider its well-posedness. More results on the linear $(\ref{nonlinear Schrodinger equation Hilbert})$ will be given in Appendix $\ref{Some further results of the linear}$. By the work of Zhou, Peng and Huang[@Duhamels-formula-for-time-fractional-Schrodinger-equations], the solution of the linear $(\ref{nonlinear Schrodinger equation Hilbert})$ is given by $$u(t)=S_tx+iGF(t) \label{mild solution1}$$ where $$Gv(t)=\int_0^tP_{t-\tau}v(\tau)d\tau$$ and $$\begin{aligned} &S_t\phi=U\left(a(t,\xi)U^{-1}\phi\right),\quad a(t,\xi)=E_{\alpha,1}\left(ia(\xi)t^\alpha\right),\\ &P_t\phi=U\left(b(t,\xi)U^{-1}\phi\right),\quad b(t,\xi)=t^{\alpha-1}E_{\alpha,\alpha}\left(ia(\xi)t^\alpha\right).\end{aligned}$$ Note that $\frac{d}{dt}a(t,\xi)=ia(\xi)b(t,\xi)$ and $\frac{d}{dt}b(t,\xi)=t^{\alpha-2}E_{\alpha,\alpha-1}\left(ia(\xi)t^\alpha\right)$ (Theorem $\ref{Derivative Mittag Leffler}$). Their method is based on the spectral theorem for selfadjoint operators (see Appendix $\ref{9.5appendix The spectral theorem of the selfadjoint operator}$), and in this way we can define the mild solution, the classical solution and the strict solution as follows: **Definition 1**. *For $T>0$, let $x\in H$.
The function $u\in C\left([0,T];H\right)$ given by $(\ref{mild solution1})$ is the mild solution of the linear $(\ref{nonlinear Schrodinger equation Hilbert})$ on $[0,T]$. [\[linear mild solution H\]]{#linear mild solution H label="linear mild solution H"}* **Definition 2**. *For $T>0$, a function $u:[0,T]\to H$ is a classical solution of the linear $(\ref{nonlinear Schrodinger equation Hilbert})$ on $[0,T]$ if $u$ belongs to the class $$u\in C\left((0,T];D(A)\right)\cap C\left([0,T];H\right),\mathbf{D}_t^\alpha(u-u(0))\in C\left((0,T];H\right)$$ and satisfies the linear $(\ref{nonlinear Schrodinger equation Hilbert})$. [\[linear classical solution H\]]{#linear classical solution H label="linear classical solution H"}* **Definition 3**. *For $T>0$, a function $u:[0,T]\to H$ is a strict solution of the linear $(\ref{nonlinear Schrodinger equation Hilbert})$ on $[0,T]$ if $u$ belongs to the class $$u\in C\left([0,T];D(A)\right),\mathbf{D}_t^\alpha(u-u(0))\in C\left([0,T];H\right)$$ and satisfies the linear $(\ref{nonlinear Schrodinger equation Hilbert})$. [\[linear strict solution H\]]{#linear strict solution H label="linear strict solution H"}* Let $\chi_t=\chi_t(\xi)=\chi_{t^\alpha\lvert a(\xi)\rvert\leq M}$ and $\chi_t^c=\chi_t^c(\xi)=1-\chi_t$ where $M$ is large enough. Here $\chi_t$ denotes a smooth function supported on the set $\left\{(t,\xi):t^\alpha\lvert a(\xi)\rvert\leq2M\right\}$ satisfying $\chi_t=1$ if $t^\alpha\lvert a(\xi)\rvert\leq M$, and hence $\chi_t^c=\chi_{t^\alpha\lvert a(\xi)\rvert>2M}$ is a smooth function supported on the set $\left\{(t,\xi):t^\alpha\lvert a(\xi)\rvert>M\right\}$ satisfying $\chi_t^c=1$ if $t^\alpha\lvert a(\xi)\rvert>2M$.
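For orientation, we record the two standard facts about the Mittag-Leffler function that underlie the high-frequency expansions below (a sketch, under the assumption $0<\alpha<1$, so that the purely imaginary argument $z=ia(\xi)t^\alpha$ lies outside the sector of exponential growth; see Theorem $\ref{Mittag-Leffler function asymptotic expansion}$): $$E_{\alpha,\beta}(z)=\sum_{k=0}^\infty\frac{z^k}{\Gamma(\alpha k+\beta)},\qquad E_{\alpha,\beta}(z)=-\sum_{k=1}^N\frac{z^{-k}}{\Gamma(\beta-\alpha k)}+O\left(\left\lvert z\right\rvert^{-N-1}\right)\quad\text{as }\lvert z\rvert\to\infty.$$ For example, taking $z=ia(\xi)t^\alpha$ and $N=1$, the term $-z^{-1}/\Gamma(1-\alpha)=\frac{i}{\Gamma(1-\alpha)}a(\xi)^{-1}t^{-\alpha}$ gives the leading behaviour of $a(t,\xi)=E_{\alpha,1}(z)$, while for $b(t,\xi)=t^{\alpha-1}E_{\alpha,\alpha}(z)$ the $k=1$ term vanishes because $1/\Gamma(0)=0$, and the $k=2$ term $-z^{-2}/\Gamma(-\alpha)=\frac{1}{\Gamma(-\alpha)}a(\xi)^{-2}t^{-2\alpha}$ yields, after multiplication by $t^{\alpha-1}$, the leading term $\frac{1}{\Gamma(-\alpha)}a(\xi)^{-2}t^{-\alpha-1}$.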
We can now define the following operators: $$\begin{aligned} &S_t^l\phi:=U\left(\chi_ta(t,\xi)U^{-1}\phi\right),\quad S_t^h\phi:=U\left(\chi_t^ca(t,\xi)U^{-1}\phi\right),\\ &P_t^l\phi:=U\left(\chi_tb(t,\xi)U^{-1}\phi\right),\quad P_t^h\phi:=U\left(\chi_t^cb(t,\xi)U^{-1}\phi\right),\\ &G^lv(t):=\int_0^tP_{t-\tau}^lv(\tau)d\tau,\quad G^hv(t):=\int_0^tP_{t-\tau}^hv(\tau)d\tau.\end{aligned}$$ According to Theorem $\ref{Mittag-Leffler function asymptotic expansion}$, it follows that $$\begin{aligned} &\chi_t^ca(t,\xi)=\frac{i}{\Gamma(1-\alpha)}\chi_t^ca(\xi)^{-1}t^{-\alpha}+\chi_t^cO\left(\left\lvert a(\xi)\right\rvert^{-2}t^{-2\alpha}\right),\\ &\chi_t^cb(t,\xi)=\frac{1}{\Gamma(-\alpha)}\chi_t^ca(\xi)^{-2}t^{-\alpha-1}+\chi_t^cO\left(\left\lvert a(\xi)\right\rvert^{-3}t^{-2\alpha-1}\right),\end{aligned}$$ which then implies that $$\begin{aligned} &\begin{aligned} S_t^h\phi&=\frac{i}{\Gamma(1-\alpha)}t^{-\alpha}U\left(a(\xi)^{-1}\chi_t^cU^{-1}\phi\right)+U\left(O\left(\left\lvert a(\xi)\right\rvert^{-2}t^{-2\alpha}\right)\chi_t^cU^{-1}\phi\right)\\ &=:\frac{i}{\Gamma(1-\alpha)}t^{-\alpha}\mathbf{A}_t^{-1}\phi+R_t^S\phi, \end{aligned}\\ &\begin{aligned} P_t^h\phi&=\frac{1}{\Gamma(-\alpha)}t^{-\alpha-1}U\left(a(\xi)^{-2}\chi_t^cU^{-1}\phi\right)+U\left(O\left(\left\lvert a(\xi)\right\rvert^{-3}t^{-2\alpha-1}\right)\chi_t^cU^{-1}\phi\right)\\ &=:\frac{1}{\Gamma(-\alpha)}t^{-\alpha-1}\mathbf{A}_t^{-2}\phi+R_t^P\phi, \end{aligned}\\ &G^hv(t)=\frac{1}{\Gamma(-\alpha)}\int_0^t\left(t-\tau\right)^{-\alpha-1}\mathbf{A}_{t-\tau}^{-2}v(\tau)d\tau+\int_0^tR_{t-\tau}^Pv(\tau)d\tau.\end{aligned}$$ Note that the following relations hold: $$S_t\phi=S_t^l\phi+S_t^h\phi,\quad P_t\phi=P_t^l\phi+P_t^h\phi,\quad Gv(t)=G^lv(t)+G^hv(t).$$ Here we first give some estimates of the operator $\mathbf{A}_t^{-1}$, $\mathbf{A}_t^{-2}$, $R_t^S$ and $R_t^P$. **Lemma 1**. 
*$\mathbf{A}_t^{-1}$ maps $H$ into $H$ boundedly for every $t\geq0$ with the estimate $$\left\lVert\mathbf{A}_t^{-1}\phi\right\rVert_H\lesssim t^\alpha\left\lVert\phi\right\rVert_H, \label{lemma linear new approach1 equation1}$$ and $A\mathbf{A}_t^{-1}$ maps $H$ into $H$ boundedly for every $t\geq0$ with the estimate $$\left\lVert A\mathbf{A}_t^{-1}\phi\right\rVert_H\leq\left\lVert \phi\right\rVert_H. \label{lemma linear new approach1 equation2}$$ [\[lemma linear new approach1\]]{#lemma linear new approach1 label="lemma linear new approach1"}* *Proof.* $(\ref{lemma linear new approach1 equation1})$ can be easily proved using the fact $\left\lvert\chi_t^ca(\xi)^{-1}\right\rvert\lesssim t^\alpha$, and $(\ref{lemma linear new approach1 equation2})$ can be easily proved using the fact $\left\lvert\chi_t^c\right\rvert\leq1$. ◻ In the same way, we can easily prove the following lemmas. **Lemma 2**. *$\mathbf{A}_t^{-2}$ maps $H$ into $H$ boundedly for every $t\geq0$ with the estimate $$\left\lVert\mathbf{A}_t^{-2}\phi\right\rVert_H\lesssim t^{2\alpha}\left\lVert\phi\right\rVert_H, \label{lemma linear new approach3 equation1}$$ and $A\mathbf{A}_t^{-2}$ maps $H$ into $H$ boundedly for every $t\geq0$ with the estimate $$\left\lVert A\mathbf{A}_t^{-2}\phi\right\rVert_H\lesssim t^\alpha\left\lVert\phi\right\rVert_H. \label{lemma linear new approach3 equation2}$$ [\[lemma linear new approach3\]]{#lemma linear new approach3 label="lemma linear new approach3"}* **Lemma 3**. *$R_t^S$ maps $H$ into $H$ boundedly for every $t\geq0$ with the estimate $$\left\lVert R_t^S\phi\right\rVert_H\lesssim\left\lVert\phi\right\rVert_H, \label{lemma linear new approach2 equation1}$$ and $AR_t^S$ maps $H$ into $H$ boundedly for every $t>0$ with the estimate $$\left\lVert AR_t^S\phi\right\rVert_H\lesssim t^{-\alpha}\left\lVert\phi\right\rVert_H.
\label{lemma linear new approach2 equation2}$$ [\[lemma linear new approach2\]]{#lemma linear new approach2 label="lemma linear new approach2"}* **Lemma 4**. *$R_t^P$ maps $H$ into $H$ boundedly for every $t>0$ with the estimate $$\left\lVert R_t^P\phi\right\rVert_H\lesssim t^{\alpha-1}\left\lVert\phi\right\rVert_H. \label{lemma linear new approach4 equation1}$$ [\[lemma linear new approach4\]]{#lemma linear new approach4 label="lemma linear new approach4"}* **Lemma 5**. *Let $\phi\in H$. For any $t,s>0$, we have $$\left\lVert t^{-\alpha}\mathbf{A}_t^{-1}\phi-s^{-\alpha}\mathbf{A}_s^{-1}\phi\right\rVert_H\lesssim\left(1+\left(t\wedge s\right)^{-1}\right)\lvert t-s\rvert\left\lVert\phi\right\rVert_H, \label{lemma linear new approach bilinear1 equation1}$$ and $$\left\lVert t^{-\alpha}A\mathbf{A}_t^{-1}\phi-s^{-\alpha}A\mathbf{A}_s^{-1}\phi\right\rVert_H\lesssim\left(\left\lvert t^{-\alpha}-s^{-\alpha}\right\rvert+\left\lvert t^{1-\alpha}-s^{1-\alpha}\right\rvert\right)\left\lVert\phi\right\rVert_H. \label{lemma linear new approach bilinear1 equation2}$$ If moreover $\phi\in D(A)$, we have $$\left\lVert t^{-\alpha}\mathbf{A}_t^{-1}\phi-s^{-\alpha}\mathbf{A}_s^{-1}\phi\right\rVert_H\lesssim\left(\left\lvert t^\alpha-s^\alpha\right\rvert+\left\lvert t^{\alpha+1}-s^{\alpha+1}\right\rvert\right)\left\lVert\phi\right\rVert_{D(A)}. \label{lemma linear new approach bilinear1 equation3}$$ [\[lemma linear new approach bilinear1\]]{#lemma linear new approach bilinear1 label="lemma linear new approach bilinear1"}* *Proof.* Note that $\left\lvert t^{-\alpha-1}\chi_t^c\right\rvert\lesssim t^{-1}\left\lvert a(\xi)\right\rvert$, $\left\lvert t^{-\alpha-1}\chi_t^c\right\rvert\lesssim t^{\alpha-1}\left\lvert a(\xi)\right\rvert^2$, $\left\lvert t^{-\alpha}\frac{d}{dt}\chi_t^c\right\rvert\lesssim\lvert a(\xi)\rvert$ and $\left\lvert t^{-\alpha}\frac{d}{dt}\chi_t^c\right\rvert\lesssim t^\alpha\left\lvert a(\xi)\right\rvert^2$. 
It follows that $$\begin{aligned} \left\lvert t^{-\alpha}\chi_t^c-s^{-\alpha}\chi_s^c\right\rvert&=\left\lvert\int_s^t\frac{d}{d\tau}\left(\tau^{-\alpha}\chi_\tau^c\right)d\tau\right\rvert\\ &=\left\lvert\int_s^t-\alpha\tau^{-\alpha-1}\chi_\tau^c+\tau^{-\alpha}\frac{d}{d\tau}\chi_\tau^cd\tau\right\rvert\\ &\lesssim\left\lvert\int_s^t\lvert a(\xi)\rvert\tau^{-1}+\lvert a(\xi)\rvert d\tau\right\rvert\\ &\leq\left(1+\left(t\wedge s\right)^{-1}\right)\lvert a(\xi)\rvert\lvert t-s\rvert\end{aligned}$$ and $$\begin{aligned} \left\lvert t^{-\alpha}\chi_t^c-s^{-\alpha}\chi_s^c\right\rvert&=\left\lvert\int_s^t-\alpha\tau^{-\alpha-1}\chi_\tau^c+\tau^{-\alpha}\frac{d}{d\tau}\chi_\tau^cd\tau\right\rvert\\ &\lesssim\left\lvert\int_s^t\tau^{\alpha-1}\left\lvert a(\xi)\right\rvert^2+\tau^\alpha\left\lvert a(\xi)\right\rvert^2d\tau\right\rvert\\ &\lesssim\left(\left\lvert t^\alpha-s^\alpha\right\rvert+\left\lvert t^{\alpha+1}-s^{\alpha+1}\right\rvert\right)\left\lvert a(\xi)\right\rvert^2.\end{aligned}$$ Then we have $$\begin{aligned} \left\lVert t^{-\alpha}\mathbf{A}_t^{-1}\phi-s^{-\alpha}\mathbf{A}_s^{-1}\phi\right\rVert_H&=\left\lVert t^{-\alpha}a(\xi)^{-1}\chi_t^cU^{-1}\phi-s^{-\alpha}a(\xi)^{-1}\chi_s^cU^{-1}\phi\right\rVert_{L^2(\Omega)}\\ &\lesssim\left(1+\left(t\wedge s\right)^{-1}\right)\lvert t-s\rvert\left\lVert\phi\right\rVert_H,\end{aligned}$$ and $$\begin{aligned} \left\lVert t^{-\alpha}\mathbf{A}_t^{-1}\phi-s^{-\alpha}\mathbf{A}_s^{-1}\phi\right\rVert_H&=\left\lVert t^{-\alpha}a(\xi)^{-1}\chi_t^cU^{-1}\phi-s^{-\alpha}a(\xi)^{-1}\chi_s^cU^{-1}\phi\right\rVert_{L^2(\Omega)}\\ &\lesssim\left(\left\lvert t^\alpha-s^\alpha\right\rvert+\left\lvert t^{\alpha+1}-s^{\alpha+1}\right\rvert\right)\left\lVert\phi\right\rVert_{D(A)}.\end{aligned}$$ Moreover, by the estimate $$\begin{aligned} \left\lvert t^{-\alpha}\chi_t^c-s^{-\alpha}\chi_s^c\right\rvert&=\left\lvert\int_s^t-\alpha\tau^{-\alpha-1}\chi_\tau^c+\tau^{-\alpha}\frac{d}{d\tau}\chi_\tau^cd\tau\right\rvert\\ 
&\lesssim\left\lvert\int_s^t\tau^{-\alpha-1}+\tau^{-\alpha}d\tau\right\rvert\\ &\lesssim\left\lvert t^{-\alpha}-s^{-\alpha}\right\rvert+\left\lvert t^{1-\alpha}-s^{1-\alpha}\right\rvert\end{aligned}$$ we obtain $$\begin{aligned} \left\lVert t^{-\alpha}A\mathbf{A}_t^{-1}\phi-s^{-\alpha}A\mathbf{A}_s^{-1}\phi\right\rVert_H&=\left\lVert t^{-\alpha}\chi_t^cU^{-1}\phi-s^{-\alpha}\chi_s^cU^{-1}\phi\right\rVert_{L^2(\Omega)}\\ &\lesssim\left(\left\lvert t^{-\alpha}-s^{-\alpha}\right\rvert+\left\lvert t^{1-\alpha}-s^{1-\alpha}\right\rvert\right)\left\lVert\phi\right\rVert_H.\end{aligned}$$ ◻ **Lemma 6**. *Let $\phi\in H$. For any $t,s>0$, we have $$\left\lVert t^{-\alpha-1}\mathbf{A}_t^{-2}\phi-s^{-\alpha-1}\mathbf{A}_s^{-2}\phi\right\rVert_H\lesssim\left(\left\lvert t^{\alpha-1}-s^{\alpha-1}\right\rvert+\left\lvert t^\alpha-s^\alpha\right\rvert\right)\left\lVert\phi\right\rVert_H. \label{8.16lemma linear new approach bilinear3 equation1}$$ [\[8.16lemma linear new approach bilinear3\]]{#8.16lemma linear new approach bilinear3 label="8.16lemma linear new approach bilinear3"}* *Proof.* Note that $\left\lvert t^{-\alpha-2}\chi_t^c\right\rvert\lesssim t^{\alpha-2}\left\lvert a(\xi)\right\rvert^2$ and $\left\lvert t^{-\alpha-1}\frac{d}{dt}\chi_t^c\right\rvert\lesssim t^{\alpha-1}\left\lvert a(\xi)\right\rvert^2$. 
It follows that $$\begin{aligned} \left\lvert t^{-\alpha-1}\chi_t^c-s^{-\alpha-1}\chi_s^c\right\rvert&=\left\lvert\int_s^t\frac{d}{d\tau}\left(\tau^{-\alpha-1}\chi_\tau^c\right)d\tau\right\rvert\\ &=\left\lvert\int_s^t(-\alpha-1)\tau^{-\alpha-2}\chi_\tau^c+\tau^{-\alpha-1}\frac{d}{d\tau}\chi_\tau^cd\tau\right\rvert\\ &\lesssim\left\lvert\int_s^t\tau^{\alpha-2}\left\lvert a(\xi)\right\rvert^2+\tau^{\alpha-1}\left\lvert a(\xi)\right\rvert^2d\tau\right\rvert\\ &\lesssim\left(\left\lvert t^{\alpha-1}-s^{\alpha-1}\right\rvert+\left\lvert t^\alpha-s^\alpha\right\rvert\right)\left\lvert a(\xi)\right\rvert^2.\end{aligned}$$ Hence we can obtain $$\begin{aligned} \left\lVert t^{-\alpha-1}\mathbf{A}_t^{-2}\phi-s^{-\alpha-1}\mathbf{A}_s^{-2}\phi\right\rVert_H&=\left\lVert t^{-\alpha-1}a(\xi)^{-2}\chi_t^cU^{-1}\phi-s^{-\alpha-1}a(\xi)^{-2}\chi_s^cU^{-1}\phi\right\rVert_{L^2(\Omega)}\\ &\lesssim\left(\left\lvert t^{\alpha-1}-s^{\alpha-1}\right\rvert+\left\lvert t^\alpha-s^\alpha\right\rvert\right)\left\lVert\phi\right\rVert_H.\end{aligned}$$ ◻ **Lemma 7**. *Let $\phi\in H$. For any $t,s>0$, we have $$\left\lVert R_t^S\phi-R_s^S\phi\right\rVert_H\lesssim\left(1+\left(t\wedge s\right)^{-1}\right)\lvert t-s\rvert\left\lVert\phi\right\rVert_H, \label{lemma linear new approach bilinear2 equation2}$$ and $$\left\lVert AR_t^S\phi-AR_s^S\phi\right\rVert_H\lesssim\left(\left\lvert t^{-\alpha}-s^{-\alpha}\right\rvert+\left\lvert t^{1-\alpha}-s^{1-\alpha}\right\rvert\right)\left\lVert\phi\right\rVert_H. \label{lemma linear new approach bilinear2 equation3}$$ If moreover $\phi\in D(A)$, we have $$\left\lVert R_t^S\phi-R_s^S\phi\right\rVert_H\lesssim\left(\left\lvert t^\alpha-s^\alpha\right\rvert+\left\lvert t^{\alpha+1}-s^{\alpha+1}\right\rvert\right)\left\lVert\phi\right\rVert_{D(A)}. 
\label{lemma linear new approach bilinear2 equation4}$$ [\[lemma linear new approach bilinear2\]]{#lemma linear new approach bilinear2 label="lemma linear new approach bilinear2"}* *Proof.* Using the fact $\left\lvert t^{-2\alpha-1}\chi_t^c\right\rvert\lesssim t^{-1}\left\lvert a(\xi)\right\rvert^2$, $\left\lvert t^{-2\alpha-1}\chi_t^c\right\rvert\lesssim t^{\alpha-1}\left\lvert a(\xi)\right\rvert^3$, $\left\lvert t^{-2\alpha}\frac{d}{dt}\chi_t^c\right\rvert\lesssim\left\lvert a(\xi)\right\rvert^2$ and $\left\lvert t^{-2\alpha}\frac{d}{dt}\chi_t^c\right\rvert\lesssim t^\alpha\left\lvert a(\xi)\right\rvert^3$ we obtain $$\begin{aligned} \left\lvert t^{-2\alpha}\chi_t^c-s^{-2\alpha}\chi_s^c\right\rvert&=\left\lvert\int_s^t\frac{d}{d\tau}\left(\tau^{-2\alpha}\chi_\tau^c\right)d\tau\right\rvert\\ &=\left\lvert\int_s^t-2\alpha\tau^{-2\alpha-1}\chi_\tau^c+\tau^{-2\alpha}\frac{d}{d\tau}\chi_\tau^cd\tau\right\rvert\\ &\lesssim\left\lvert\int_s^t\tau^{-1}\left\lvert a(\xi)\right\rvert^2+\left\lvert a(\xi)\right\rvert^2d\tau\right\rvert\\ &\leq\left(1+\left(t\wedge s\right)^{-1}\right)\left\lvert a(\xi)\right\rvert^2\lvert t-s\rvert,\end{aligned}$$ and $$\begin{aligned} \left\lvert t^{-2\alpha}\chi_t^c-s^{-2\alpha}\chi_s^c\right\rvert&=\left\lvert\int_s^t-2\alpha\tau^{-2\alpha-1}\chi_\tau^c+\tau^{-2\alpha}\frac{d}{d\tau}\chi_\tau^cd\tau\right\rvert\\ &\lesssim\left\lvert\int_s^t\tau^{\alpha-1}\left\lvert a(\xi)\right\rvert^3+\tau^\alpha\left\lvert a(\xi)\right\rvert^3d\tau\right\rvert\\ &\lesssim\left(\left\lvert t^\alpha-s^\alpha\right\rvert+\left\lvert t^{\alpha+1}-s^{\alpha+1}\right\rvert\right)\left\lvert a(\xi)\right\rvert^3,\end{aligned}$$ which then implies that $$\begin{aligned} \left\lVert R_t^S\phi-R_s^S\phi\right\rVert_H&=\left\lVert O\left(\left\lvert a(\xi)\right\rvert^{-2}\right)t^{-2\alpha}\chi_t^cU^{-1}\phi-O\left(\left\lvert a(\xi)\right\rvert^{-2}\right)s^{-2\alpha}\chi_s^cU^{-1}\phi\right\rVert_{L^2(\Omega)}\\ &\lesssim\left(1+\left(t\wedge 
s\right)^{-1}\right)\lvert t-s\rvert\left\lVert\phi\right\rVert_H\end{aligned}$$ and $$\begin{aligned} \left\lVert R_t^S\phi-R_s^S\phi\right\rVert_H&=\left\lVert O\left(\left\lvert a(\xi)\right\rvert^{-2}\right)t^{-2\alpha}\chi_t^cU^{-1}\phi-O\left(\left\lvert a(\xi)\right\rvert^{-2}\right)s^{-2\alpha}\chi_s^cU^{-1}\phi\right\rVert_{L^2(\Omega)}\\ &\lesssim\left(\left\lvert t^\alpha-s^\alpha\right\rvert+\left\lvert t^{\alpha+1}-s^{\alpha+1}\right\rvert\right)\left\lVert\phi\right\rVert_{D(A)}.\end{aligned}$$ Note that $\left\lvert t^{-2\alpha-1}\chi_t^c\right\rvert\lesssim t^{-\alpha-1}\left\lvert a(\xi)\right\rvert$ and $\left\lvert t^{-2\alpha}\frac{d}{dt}\chi_t^c\right\rvert\lesssim t^{-\alpha}\left\lvert a(\xi)\right\rvert$. It follows that $$\begin{aligned} \left\lvert t^{-2\alpha}\chi_t^c-s^{-2\alpha}\chi_s^c\right\rvert&=\left\lvert\int_s^t-2\alpha\tau^{-2\alpha-1}\chi_\tau^c+\tau^{-2\alpha}\frac{d}{d\tau}\chi_\tau^cd\tau\right\rvert\\ &\lesssim\left\lvert\int_s^t\tau^{-\alpha-1}\left\lvert a(\xi)\right\rvert+\tau^{-\alpha}\left\lvert a(\xi)\right\rvert d\tau\right\rvert\\ &\lesssim\left(\left\lvert t^{-\alpha}-s^{-\alpha}\right\rvert+\left\lvert t^{1-\alpha}-s^{1-\alpha}\right\rvert\right)\left\lvert a(\xi)\right\rvert.\end{aligned}$$ Then we have $$\begin{aligned} \left\lVert AR_t^S\phi-AR_s^S\phi\right\rVert_H&=\left\lVert a(\xi)O\left(\left\lvert a(\xi)\right\rvert^{-2}\right)t^{-2\alpha}\chi_t^cU^{-1}\phi-a(\xi)O\left(\left\lvert a(\xi)\right\rvert^{-2}\right)s^{-2\alpha}\chi_s^cU^{-1}\phi\right\rVert_{L^2(\Omega)}\\ &\lesssim\left(\left\lvert t^{-\alpha}-s^{-\alpha}\right\rvert+\left\lvert t^{1-\alpha}-s^{1-\alpha}\right\rvert\right)\left\lVert\phi\right\rVert_H.\end{aligned}$$ ◻ **Lemma 8**. *Let $\phi\in H$. For any $t,s>0$, we have $$\left\lVert R_t^P\phi-R_s^P\phi\right\rVert_H\lesssim\left(\left\lvert t^{\alpha-1}-s^{\alpha-1}\right\rvert+\left\lvert t^\alpha-s^\alpha\right\rvert\right)\left\lVert\phi\right\rVert_H. 
\label{8.16lemma linear new approach bilinear4 equation1}$$ [\[8.16lemma linear new approach bilinear4\]]{#8.16lemma linear new approach bilinear4 label="8.16lemma linear new approach bilinear4"}* *Proof.* Using the fact $\left\lvert t^{-2\alpha-2}\chi_t^c\right\rvert\lesssim t^{\alpha-2}\left\lvert a(\xi)\right\rvert^3$ and $\left\lvert t^{-2\alpha-1}\frac{d}{dt}\chi_t^c\right\rvert\lesssim t^{\alpha-1}\left\lvert a(\xi)\right\rvert^3$ we obtain $$\begin{aligned} \left\lvert t^{-2\alpha-1}\chi_t^c-s^{-2\alpha-1}\chi_s^c\right\rvert&=\left\lvert\int_s^t\frac{d}{d\tau}\left(\tau^{-2\alpha-1}\chi_\tau^c\right)d\tau\right\rvert\\ &=\left\lvert\int_s^t(-2\alpha-1)\tau^{-2\alpha-2}\chi_\tau^c+\tau^{-2\alpha-1}\frac{d}{d\tau}\chi_\tau^cd\tau\right\rvert\\ &\lesssim\left\lvert\int_s^t\left(\tau^{\alpha-2}+\tau^{\alpha-1}\right)\left\lvert a(\xi)\right\rvert^3d\tau\right\rvert\\ &\lesssim\left(\left\lvert t^{\alpha-1}-s^{\alpha-1}\right\rvert+\left\lvert t^\alpha-s^\alpha\right\rvert\right)\left\lvert a(\xi)\right\rvert^3.\end{aligned}$$ Hence there holds $$\begin{aligned} \left\lVert R_t^P\phi-R_s^P\phi\right\rVert_H&=\left\lVert O\left(\left\lvert a(\xi)\right\rvert^{-3}\right)t^{-2\alpha-1}\chi_t^cU^{-1}\phi-O\left(\left\lvert a(\xi)\right\rvert^{-3}\right)s^{-2\alpha-1}\chi_s^cU^{-1}\phi\right\rVert_{L^2(\Omega)}\\ &\lesssim\left(\left\lvert t^{\alpha-1}-s^{\alpha-1}\right\rvert+\left\lvert t^\alpha-s^\alpha\right\rvert\right)\left\lVert\phi\right\rVert_H.\end{aligned}$$ ◻ **Proposition 1**. *For $T>0$, $S_t$ maps $H$ into $C\left((0,T];D(A)\right)$ with the estimate $$\left\lVert S_t\phi\right\rVert_{D(A)}\lesssim\left(1+t^{-\alpha}\right)\left\lVert\phi\right\rVert_H,\quad t>0, \label{lemmalemma linearlinear new2 equation1}$$ and into $C\left([0,T];H\right)$ with the estimate $$\left\lVert S_t\phi\right\rVert_H\lesssim\left\lVert\phi\right\rVert_H,\quad t\geq0. 
\label{lemmalemma linearlinear new2 equation2}$$ [\[lemma linear new2\]]{#lemma linear new2 label="lemma linear new2"}* *Proof.* The proof that $S_t$ maps $H$ into $C\left((0,T];H\right)$ and $C\left((0,T];D(A)\right)$ is deferred to Proposition $\ref{proposition linear new approach bilinear1}$, and the claim that $S_t$ is continuous at $t=0$ in the norm of $H$ can be proved by Lebesgue's dominated convergence theorem. It suffices to prove $(\ref{lemmalemma linearlinear new2 equation1})$ and $(\ref{lemmalemma linearlinear new2 equation2})$. On the one hand, $$\begin{aligned} \left\lVert S_t^l\phi\right\rVert_{D(A)}&=\left\lVert S_t^l\phi\right\rVert_H+\left\lVert AS_t^l\phi\right\rVert_H\\ &=\left\lVert\chi_ta(t,\xi)U^{-1}\phi\right\rVert_{L^2(\Omega)}+\left\lVert a(\xi)\chi_ta(t,\xi)U^{-1}\phi\right\rVert_{L^2(\Omega)}\\ &\lesssim\left\lVert U^{-1}\phi\right\rVert_{L^2(\Omega)}+\left\lVert a(\xi)\chi_tU^{-1}\phi\right\rVert_{L^2(\Omega)}\\ &\lesssim\left\lVert U^{-1}\phi\right\rVert_{L^2(\Omega)}+t^{-\alpha}\left\lVert U^{-1}\phi\right\rVert_{L^2(\Omega)}\\ &=\left(1+t^{-\alpha}\right)\left\lVert\phi\right\rVert_H.\end{aligned}$$ On the other hand, it follows from Lemma $\ref{lemma linear new approach1}$ and Lemma $\ref{lemma linear new approach2}$ that $$\left\lVert S_t^h\phi\right\rVert_H\lesssim t^{-\alpha}\left\lVert\mathbf{A}_t^{-1}\phi\right\rVert_H+\left\lVert R_t^S\phi\right\rVert_H\lesssim\left\lVert\phi\right\rVert_H,$$ and $$\left\lVert AS_t^h\phi\right\rVert_H\lesssim t^{-\alpha}\left\lVert A\mathbf{A}_t^{-1}\phi\right\rVert_H+\left\lVert AR_t^S\phi\right\rVert_H\lesssim t^{-\alpha}\left\lVert\phi\right\rVert_H,$$ which implies that $$\left\lVert S_t^h\phi\right\rVert_{D(A)}\lesssim\left(1+t^{-\alpha}\right)\left\lVert\phi\right\rVert_H.$$ Combining the above, we can prove $(\ref{lemmalemma linearlinear new2 equation1})$ and $(\ref{lemmalemma linearlinear new2 equation2})$. ◻ **Proposition 2**. *Let $\phi\in H$.
For any $t,s>0$, we have $$\left\lVert S_t\phi-S_s\phi\right\rVert_H\lesssim\left(1+\left(t\wedge s\right)^{-1}\right)\lvert t-s\rvert\left\lVert\phi\right\rVert_H \label{proposition linear new approach bilinear1 equation1}$$ and $$\left\lVert AS_t\phi-AS_s\phi\right\rVert_H\lesssim\left(\left\lvert t^{-\alpha}-s^{-\alpha}\right\rvert+\left\lvert t^{1-\alpha}-s^{1-\alpha}\right\rvert\right)\left\lVert\phi\right\rVert_H. \label{proposition linear new approach bilinear1 equation2}$$ If moreover $\phi\in D(A)$, we have $$\left\lVert S_t\phi-S_s\phi\right\rVert_H\lesssim\left(\left\lvert t^\alpha-s^\alpha\right\rvert+\left\lvert t^{\alpha+1}-s^{\alpha+1}\right\rvert\right)\left\lVert\phi\right\rVert_{D(A)}. \label{proposition linear new approach bilinear1 equation3}$$ [\[proposition linear new approach bilinear1\]]{#proposition linear new approach bilinear1 label="proposition linear new approach bilinear1"}* *Proof.* According to Lemma $\ref{lemma linear new approach bilinear1}$ and Lemma $\ref{lemma linear new approach bilinear2}$, it's sufficient to prove $$\begin{gathered} \left\lVert S_t^l\phi-S_s^l\phi\right\rVert_H\lesssim\left(1+\left(t\wedge s\right)^{-1}\right)\lvert t-s\rvert\left\lVert\phi\right\rVert_H,\label{proposition linear new approach bilinear1 proof equation1}\\ \left\lVert S_t^l\phi-S_s^l\phi\right\rVert_H\lesssim\left(\left\lvert t^\alpha-s^\alpha\right\rvert+\left\lvert t^{\alpha+1}-s^{\alpha+1}\right\rvert\right)\left\lVert\phi\right\rVert_{D(A)},\label{proposition linear new approach bilinear1 proof equation3}\end{gathered}$$ and $$\left\lVert AS_t^l\phi-AS_s^l\phi\right\rVert_H\lesssim\left(\left\lvert t^{-\alpha}-s^{-\alpha}\right\rvert+\left\lvert t^{1-\alpha}-s^{1-\alpha}\right\rvert\right)\left\lVert\phi\right\rVert_H. 
\label{proposition linear new approach bilinear1 proof equation2}$$ Since $\left\lvert a(\xi)b(t,\xi)\chi_t\right\rvert\lesssim t^{-1}$, $\left\lvert a(\xi)b(t,\xi)\chi_t\right\rvert\lesssim t^{\alpha-1}\left\lvert a(\xi)\right\rvert$, $\left\lvert a(t,\xi)\frac{d}{dt}\chi_t\right\rvert\lesssim1$ and $\left\lvert a(t,\xi)\frac{d}{dt}\chi_t\right\rvert\lesssim t^\alpha\left\lvert a(\xi)\right\rvert$, we have $$\begin{aligned} \left\lvert\chi_ta(t,\xi)-\chi_sa(s,\xi)\right\rvert&=\left\lvert\int_s^t\frac{d}{d\tau}\left(\chi_\tau a(\tau,\xi)\right)d\tau\right\rvert\\ &=\left\lvert\int_s^tia(\xi)b(\tau,\xi)\chi_\tau+a(\tau,\xi)\frac{d}{d\tau}\chi_\tau d\tau\right\rvert\\ &\lesssim\left\lvert\int_s^t1+\tau^{-1}d\tau\right\rvert\\ &\leq\left(1+\left(t\wedge s\right)^{-1}\right)\lvert t-s\rvert,\end{aligned}$$ and $$\begin{aligned} \left\lvert\chi_ta(t,\xi)-\chi_sa(s,\xi)\right\rvert&=\left\lvert\int_s^tia(\xi)b(\tau,\xi)\chi_\tau+a(\tau,\xi)\frac{d}{d\tau}\chi_\tau d\tau\right\rvert\\ &\lesssim\left\lvert\int_s^t\tau^{\alpha-1}\left\lvert a(\xi)\right\rvert+\tau^\alpha\left\lvert a(\xi)\right\rvert d\tau\right\rvert\\ &\lesssim\left(\left\lvert t^\alpha-s^\alpha\right\rvert+\left\lvert t^{\alpha+1}-s^{\alpha+1}\right\rvert\right)\left\lvert a(\xi)\right\rvert\end{aligned}$$ Using the fact that $\left\lvert a(t,\xi)\chi_t\right\rvert\lesssim t^{-\alpha}\left\lvert a(\xi)\right\rvert^{-1}$ and $\left\lvert a(\xi)b(t,\xi)\chi_t\right\rvert\lesssim t^{-\alpha-1}\left\lvert a(\xi)\right\rvert^{-1}$ we can obtain $$\begin{aligned} \left\lvert\chi_ta(t,\xi)-\chi_sa(s,\xi)\right\rvert&=\left\lvert\int_s^tia(\xi)b(\tau,\xi)\chi_\tau+a(\tau,\xi)\frac{d}{d\tau}\chi_\tau d\tau\right\rvert\\ &\lesssim\left\lvert\int_s^t\tau^{-\alpha-1}\left\lvert a(\xi)\right\rvert^{-1}+\tau^{-\alpha}\left\lvert a(\xi)\right\rvert^{-1}d\tau\right\rvert\\ &\lesssim\left(\left\lvert t^{-\alpha}-s^{-\alpha}\right\rvert+\left\lvert t^{1-\alpha}-s^{1-\alpha}\right\rvert\right)\left\lvert 
a(\xi)\right\rvert^{-1}.\end{aligned}$$ Then $(\ref{proposition linear new approach bilinear1 proof equation1})$ follows from $$\begin{aligned} \left\lVert S_t^l\phi-S_s^l\phi\right\rVert_H&=\left\lVert\chi_ta(t,\xi)U^{-1}\phi-\chi_sa(s,\xi)U^{-1}\phi\right\rVert_{L^2(\Omega)}\\ &\lesssim\left(1+\left(t\wedge s\right)^{-1}\right)\lvert t-s\rvert\left\lVert\phi\right\rVert_H,\end{aligned}$$ $(\ref{proposition linear new approach bilinear1 proof equation3})$ follows from $$\begin{aligned} \left\lVert S_t^l\phi-S_s^l\phi\right\rVert_H&=\left\lVert\chi_ta(t,\xi)U^{-1}\phi-\chi_sa(s,\xi)U^{-1}\phi\right\rVert_{L^2(\Omega)}\\ &\lesssim\left(\left\lvert t^\alpha-s^\alpha\right\rvert+\left\lvert t^{\alpha+1}-s^{\alpha+1}\right\rvert\right)\left\lVert\phi\right\rVert_{D(A)},\end{aligned}$$ and $(\ref{proposition linear new approach bilinear1 proof equation2})$ follows from $$\begin{aligned} \left\lVert AS_t^l\phi-AS_s^l\phi\right\rVert_H&=\left\lVert a(\xi)\chi_ta(t,\xi)U^{-1}\phi-a(\xi)\chi_sa(s,\xi)U^{-1}\phi\right\rVert_{L^2(\Omega)}\\ &\lesssim\left(\left\lvert t^{-\alpha}-s^{-\alpha}\right\rvert+\left\lvert t^{1-\alpha}-s^{1-\alpha}\right\rvert\right)\left\lVert\phi\right\rVert_H.\end{aligned}$$ ◻ **Proposition 3**. *For every $t>0$, if $\phi\in H$, then $g_{1-\alpha}(t)*(S_t\phi-\phi)$ is differentiable and $$\mathbf{D}_t^\alpha(S_t\phi-\phi)=iAS_t\phi. \label{lemma linear new4 equation1}$$ [\[lemma linear new4\]]{#lemma linear new4 label="lemma linear new4"}* *Proof.* Let $\psi(t)=g_{1-\alpha}(t)*(S_t\phi-\phi)$.
Since $$\begin{aligned} \psi(t)&=\frac{1}{\Gamma(1-\alpha)}\int_0^t\left(t-\tau\right)^{-\alpha}\left(S_\tau\phi-\phi\right)d\tau\\ &=U\left(\frac{1}{\Gamma(1-\alpha)}\int_0^t\left(t-\tau\right)^{-\alpha}\left(a(\tau,\xi)U^{-1}\phi-U^{-1}\phi\right)d\tau\right)\\ &=U\left(\sum\limits_{k=1}^\infty\frac{i^ka(\xi)^kt^{\alpha(k-1)+1}}{\Gamma(\alpha(k-1)+2)}U^{-1}\phi\right),\end{aligned}$$ we have $$\begin{aligned} \lim\limits_{h\to0}\frac{\psi(t+h)-\psi(t)}{h}&=\lim\limits_{h\to0}U\left(\sum\limits_{k=1}^\infty\frac{i^ka(\xi)^k}{\Gamma(\alpha(k-1)+2)}\frac{\left(t+h\right)^{\alpha(k-1)+1}-t^{\alpha(k-1)+1}}{h}U^{-1}\phi\right)\\ &=U\left(\sum\limits_{k=1}^\infty\frac{i^ka(\xi)^k}{\Gamma(\alpha(k-1)+2)}\lim\limits_{h\to0}\frac{\left(t+h\right)^{\alpha(k-1)+1}-t^{\alpha(k-1)+1}}{h}U^{-1}\phi\right)\\ &=iU\left(a(\xi)a(t,\xi)U^{-1}\phi\right).\end{aligned}$$ Proposition $\ref{lemma linear new2}$ shows that $S_t\phi\in D(A)$ for every $t>0$, which implies that $g_{1-\alpha}(t)*(S_t\phi-\phi)$ is differentiable and $$\frac{d}{dt}\left(g_{1-\alpha}(t)*(S_t\phi-\phi)\right)=\mathbf{D}_t^\alpha\left(S_t\phi-\phi\right)=iAS_t\phi.$$ ◻ **Proposition 4**. *For $T>0$, $G$ maps $L^\infty\left((0,T);H\right)$ into $C\left([0,T];H\right)$ with the estimate $$\left\lVert Gv\right\rVert_{L_T^\infty H}\lesssim T^\alpha\left\lVert v\right\rVert_{L_T^\infty H}. \label{lemma linear new3 equation1}$$ [\[lemma linear new3\]]{#lemma linear new3 label="lemma linear new3"}* *Proof.* The proof of continuity for $t>0$ is left to Proposition $\ref{8.16proposition linear new approach bilinear2}$ and the continuity at $t=0$ can be proved by $(\ref{lemma linear new3 proof equation3})$. We only prove $(\ref{lemma linear new3 equation1})$ here.
On the one hand, since $$\left\lVert G^lv(t)\right\rVert_H\lesssim\int_0^t\left(t-\tau\right)^{\alpha-1}\left\lVert v(\tau)\right\rVert_Hd\tau, \label{lemma linear new3 proof equation1}$$ it follows that $$\left\lVert G^lv\right\rVert_{L_T^\infty H}\lesssim T^\alpha\left\lVert v\right\rVert_{L_T^\infty H}.$$ On the other hand, according to Lemma $\ref{lemma linear new approach3}$ and Lemma $\ref{lemma linear new approach4}$, we obtain $$\left\lVert G^hv(t)\right\rVert_H\lesssim\int_0^t\left(t-\tau\right)^{\alpha-1}\left\lVert v(\tau)\right\rVert_Hd\tau, \label{lemma linear new3 proof equation2}$$ which implies that $$\left\lVert G^hv\right\rVert_{L_T^\infty H}\lesssim T^\alpha\left\lVert v\right\rVert_{L_T^\infty H}.$$ This proves $(\ref{lemma linear new3 equation1})$. ◻ **Remark 3**. *From $(\ref{lemma linear new3 proof equation1})$ and $(\ref{lemma linear new3 proof equation2})$ we also have $$\left\lVert Gv(t)\right\rVert_H\lesssim\int_0^t\left(t-\tau\right)^{\alpha-1}\left\lVert v(\tau)\right\rVert_Hd\tau. \label{lemma linear new3 proof equation3}$$ In particular, for any $0<T_1<T_2$, if $v\equiv0$ on $[0,T_1]$ and $v\in L^\infty\left((0,T_2);H\right)$, then by $(\ref{lemma linear new3 proof equation3})$ we have $$\left\lVert Gv(t)\right\rVert_H\lesssim\int_{T_1}^t\left(t-\tau\right)^{\alpha-1}\left\lVert v(\tau)\right\rVert_Hd\tau$$ and hence $$\left\lVert Gv\right\rVert_{L_{(T_1,T_2)}^\infty H}\lesssim\left(T_2-T_1\right)^\alpha\left\lVert v\right\rVert_{L_{T_2}^\infty H}. \label{lemma linear new3 proof equation4}$$* **Proposition 5**. *For $T>0$ and $0<t,s<T$, let $v\in L^\infty\left((0,T);H\right)$. We have $$\left\lVert Gv(t)-Gv(s)\right\rVert_H\lesssim\left(\left\lvert t-s\right\rvert^\alpha+\left\lvert t^{\alpha+1}-s^{\alpha+1}\right\rvert\right)\left\lVert v\right\rVert_{L_T^\infty H}.
\label{8.16proposition linear new approach bilinear2 equation1}$$ [\[8.16proposition linear new approach bilinear2\]]{#8.16proposition linear new approach bilinear2 label="8.16proposition linear new approach bilinear2"}* *Proof.* Without loss of generality we assume $t>s$. Let $$\begin{aligned} &I_1=\frac{1}{\Gamma(-\alpha)}\int_0^s\left(\left(t-\tau\right)^{-\alpha-1}\mathbf{A}_{t-\tau}^{-2}v(\tau)-\left(s-\tau\right)^{-\alpha-1}\mathbf{A}_{s-\tau}^{-2}v(\tau)\right)d\tau,\\ &I_2=\frac{1}{\Gamma(-\alpha)}\int_s^t\left(t-\tau\right)^{-\alpha-1}\mathbf{A}_{t-\tau}^{-2}v(\tau)d\tau,\\ &I_3=\int_0^s\left(R_{t-\tau}^Pv(\tau)-R_{s-\tau}^Pv(\tau)\right)d\tau,\\ &I_4=\int_s^tR_{t-\tau}^Pv(\tau)d\tau,\end{aligned}$$ then $G^hv(t)-G^hv(s)=I_1+I_2+I_3+I_4$. According to Lemma $\ref{8.16lemma linear new approach bilinear3}$ and Lemma $\ref{8.16lemma linear new approach bilinear4}$, it follows that $$\begin{aligned} \left\lVert I_1\right\rVert_H&\lesssim\int_0^s\left(\left(s-\tau\right)^{\alpha-1}-\left(t-\tau\right)^{\alpha-1}+\left(t-\tau\right)^\alpha-\left(s-\tau\right)^\alpha\right)\left\lVert v(\tau)\right\rVert_Hd\tau\\ &\lesssim\left(\left(t-s\right)^\alpha+t^{\alpha+1}-s^{\alpha+1}\right)\left\lVert v\right\rVert_{L_T^\infty H}\end{aligned}$$ and also $$\left\lVert I_3\right\rVert_H\lesssim\left(\left(t-s\right)^\alpha+t^{\alpha+1}-s^{\alpha+1}\right)\left\lVert v\right\rVert_{L_T^\infty H}.$$ Similarly it follows from Lemma $\ref{lemma linear new approach3}$ and Lemma $\ref{lemma linear new approach4}$ that $$\left\lVert I_2\right\rVert_H\lesssim\int_s^t\left(t-\tau\right)^{\alpha-1}\left\lVert v(\tau)\right\rVert_Hd\tau\lesssim\left(t-s\right)^\alpha\left\lVert v\right\rVert_{L_T^\infty H}$$ and also $$\left\lVert I_4\right\rVert_H\lesssim\left(t-s\right)^\alpha\left\lVert v\right\rVert_{L_T^\infty H}.$$ Then there holds $$\left\lVert G^hv(t)-G^hv(s)\right\rVert_H\lesssim\left(\left(t-s\right)^\alpha+t^{\alpha+1}-s^{\alpha+1}\right)\left\lVert 
v\right\rVert_{L_T^\infty H}. \label{8.16proposition linear new approach bilinear2 proof equation1}$$ On the other hand, since $$\begin{aligned} \left\lvert\chi_tb(t,\xi)-\chi_sb(s,\xi)\right\rvert&=\left\lvert\int_s^t\frac{d}{d\tau}\left(\chi_\tau b(\tau,\xi)\right)d\tau\right\rvert\\ &=\left\lvert\int_s^tb(\tau,\xi)\frac{d}{d\tau}\chi_\tau+\chi_\tau\tau^{\alpha-2}E_{\alpha,\alpha-1}\left(ia(\xi)\tau^\alpha\right)d\tau\right\rvert\\ &\lesssim\int_s^t\tau^{\alpha-1}+\tau^{\alpha-2}d\tau\\ &\lesssim t^\alpha-s^\alpha+s^{\alpha-1}-t^{\alpha-1},\end{aligned}$$ it follows that $$\begin{aligned} \left\lVert P_t^l\phi-P_s^l\phi\right\rVert_H&=\left\lVert\chi_tb(t,\xi)U^{-1}\phi-\chi_sb(s,\xi)U^{-1}\phi\right\rVert_{L^2(\Omega)}\\ &\lesssim\left(t^\alpha-s^\alpha+s^{\alpha-1}-t^{\alpha-1}\right)\left\lVert\phi\right\rVert_H.\end{aligned}$$ Setting $$\begin{aligned} &I_5=\int_0^s\left(P_{t-\tau}^lv(\tau)-P_{s-\tau}^lv(\tau)\right)d\tau,\\ &I_6=\int_s^tP_{t-\tau}^lv(\tau)d\tau,\end{aligned}$$ so that $G^lv(t)-G^lv(s)=I_5+I_6$, this implies that $$\left\lVert I_5\right\rVert_H\lesssim\left(\left(t-s\right)^\alpha+t^{\alpha+1}-s^{\alpha+1}\right)\left\lVert v\right\rVert_{L_T^\infty H}.$$ It is easy to verify that $$\left\lVert I_6\right\rVert_H\lesssim\left(t-s\right)^\alpha\left\lVert v\right\rVert_{L_T^\infty H}.$$ Then we have $$\left\lVert G^lv(t)-G^lv(s)\right\rVert_H\lesssim\left(\left(t-s\right)^\alpha+t^{\alpha+1}-s^{\alpha+1}\right)\left\lVert v\right\rVert_{L_T^\infty H}. \label{8.16proposition linear new approach bilinear2 proof equation2}$$ Combining $(\ref{8.16proposition linear new approach bilinear2 proof equation1})$ and $(\ref{8.16proposition linear new approach bilinear2 proof equation2})$ we obtain the result. ◻ **Proposition 6**. *If $Gv(t)\in D(A)$ for $t>0$, then $g_{1-\alpha}*Gv$ is differentiable for $t>0$ and $$\mathbf{D}_t^\alpha Gv(t)=iAGv(t)+v(t).
\label{lemma linear new6 equation1}$$ [\[lemma linear new6\]]{#lemma linear new6 label="lemma linear new6"}* *Proof.* Let $\Phi(t)=g_{1-\alpha}*Gv$. Since $$\begin{aligned} \Phi(t)&=\frac{1}{\Gamma(1-\alpha)}\int_0^t\int_0^\tau\left(t-\tau\right)^{-\alpha}P_{\tau-s}v(s)dsd\tau\\ &=U\left(\frac{1}{\Gamma(1-\alpha)}\int_0^t\int_0^\tau\left(t-\tau\right)^{-\alpha}b(\tau-s,\xi)U^{-1}v(s)dsd\tau\right)\\ &=U\left(\frac{1}{\Gamma(1-\alpha)}\int_0^t\int_s^t\left(t-\tau\right)^{-\alpha}b(\tau-s,\xi)d\tau\,U^{-1}v(s)ds\right)\\ &=U\left(\int_0^ta(t-s,\xi)U^{-1}v(s)ds\right),\end{aligned}$$ we obtain $$\begin{aligned} &\lim\limits_{h\to0}\frac{\Phi(t+h)-\Phi(t)}{h}\\ &=U\left(\int_0^t\lim\limits_{h\to0}\frac{a(t+h-s,\xi)U^{-1}v(s)-a(t-s,\xi)U^{-1}v(s)}{h}ds\right)\\ &+U\left(\lim\limits_{h\to0}\frac{1}{h}\int_t^{t+h}a(t+h-s,\xi)U^{-1}v(s)ds\right)\\ &=iU\left(a(\xi)\int_0^tb(t-s,\xi)U^{-1}v(s)ds\right)+v(t).\end{aligned}$$ Since $Gv(t)\in D(A)$, we deduce that $\Phi(t)$ is differentiable and $$\mathbf{D}_t^\alpha Gv(t)=iAGv(t)+v(t).$$ ◻ **Theorem 4**. *For $T>0$, let $x\in H$ and $F\in L^\infty\left((0,T);H\right)$. The linear equation $(\ref{nonlinear Schrodinger equation Hilbert})$ has a unique mild solution $u$ on $[0,T]$ with the estimate $$\left\lVert u(t)\right\rVert_H\lesssim\left\lVert x\right\rVert_H+T^\alpha\left\lVert F\right\rVert_{L_T^\infty H}. \label{mild solution inhomogeneous H equation1}$$ [\[mild solution inhomogeneous H\]]{#mild solution inhomogeneous H label="mild solution inhomogeneous H"}* *Proof.* This is a direct consequence of Proposition $\ref{lemma linear new2}$ and Proposition $\ref{lemma linear new3}$. ◻ **Theorem 5**. *For $T>0$, let $F\in L^\infty\left((0,T);H\right)\cap C\left((0,T];H\right)$. The linear equation $(\ref{nonlinear Schrodinger equation Hilbert})$ has a unique classical solution $u$ on $[0,T]$ for every $x\in H$ if and only if $GF\in C\left((0,T];D(A)\right)$.
[\[linear classical solution theorem\]]{#linear classical solution theorem label="linear classical solution theorem"}* *Proof.* If $u$ is a classical solution of the linear equation $(\ref{nonlinear Schrodinger equation Hilbert})$, then $GF=u-S_tx\in C\left((0,T];D(A)\right)$ by applying Proposition $\ref{lemma linear new2}$. If $GF\in C\left((0,T];D(A)\right)$, we can complete the proof by applying Proposition $\ref{lemma linear new6}$, using the assumption that $F\in C\left((0,T];H\right)$. ◻ **Theorem 6**. *For $T>0$, let $F\in C\left([0,T];D(A)\right)$. The linear equation $(\ref{nonlinear Schrodinger equation Hilbert})$ has a unique strict solution $u$ on $[0,T]$ for every $x\in D(A)$. [\[linear strict solution theorem\]]{#linear strict solution theorem label="linear strict solution theorem"}* *Proof.* The proof is immediate, noting that $G$ maps $C\left([0,T];D(A)\right)$ into $C\left([0,T];D(A)\right)$ by Proposition $\ref{lemma linear new3}$. ◻ # Proof of Theorem $\ref{unique mild solution X}$ {#9.17Proof of Theorem unique mild solution X} We start with $(\ref{unique mild solution X theorem equation1})$, which also implies that the strict solution of $(\ref{nonlinear Schrodinger equation Hilbert})$ is unique. To this end, we state the following Gronwall-type inequality, which can be found in Henry [@Geometric-Theory-of-Semilinear-Parabolic-Equations] and Yagi [@Abstract-Parabolic-Evolution-Equations-and-their-Applications]. **Lemma 9**. *Let $0\leq a\in C\left([0,T];\mathbb{R}\right)$ be an increasing function, let $b>0$ be a constant and $\alpha>0$ be an exponent. If $u\in C\left([0,T];\mathbb{R}\right)$ satisfies the integral inequality $$u(t)\leq a(t)+b\int_0^t\left(t-s\right)^{\alpha-1}u(s)ds,\quad 0\leq t\leq T,$$ on this interval, then $$u(t)\leq a(t)E_{\alpha,1}\left(b\Gamma(\alpha)t^\alpha\right),\quad 0\leq t\leq T.$$ [\[Yagislemma\]]{#Yagislemma label="Yagislemma"}* **Theorem 7**.
*Let the assumptions in Theorem $\ref{unique mild solution X}$ hold and let $u(t),v(t)$ be the strict solutions of $(\ref{nonlinear Schrodinger equation Hilbert})$ with the initial data $x,y$ respectively; then $(\ref{unique mild solution X theorem equation1})$ holds. [\[uniqueness\]]{#uniqueness label="uniqueness"}* *Proof.* Let $R=\left\lVert u\right\rVert_{L_T^\infty D(A)}\vee\left\lVert v\right\rVert_{L_T^\infty D(A)}$. By the representation of the strict solutions, $u(t)-v(t)=S_t(x-y)+iG(F(u)-F(v))$, so Proposition $\ref{lemma linear new2}$ and $(\ref{lemma linear new3 proof equation3})$ yield $$\left\lVert u(t)-v(t)\right\rVert_{D(A)}\lesssim_{\rho(x),\rho(y),R}\left\lVert x-y\right\rVert_{D(A)}+\int_0^t\left(t-\tau\right)^{\alpha-1}\left\lVert u(\tau)-v(\tau)\right\rVert_{D(A)}d\tau.$$ Then $(\ref{unique mild solution X theorem equation1})$ follows from Lemma $\ref{Yagislemma}$. ◻ **Lemma 10**. *For $T>0$, $GF$ maps a ball with radius $R$ in $L^\infty\left((0,T);D(A)\right)$ into $C\left([0,T];D(A)\right)$ boundedly with the estimate $$\left\lVert GF(u)-GF(v)\right\rVert_{L_T^\infty D(A)}\lesssim_{\rho(u(0)),\rho(v(0)),R}T^\alpha\left\lVert u-v\right\rVert_{L_T^\infty D(A)}.$$ [\[unique mild solution X proof lemma1\]]{#unique mild solution X proof lemma1 label="unique mild solution X proof lemma1"}* *Proof.* By the hypotheses on $F$, $F$ maps a ball with radius $R$ in $L^\infty\left((0,T);D(A)\right)$ into $L^\infty\left((0,T);D(A)\right)$ with the estimate $$\left\lVert F(u)-F(v)\right\rVert_{L_T^\infty D(A)}\lesssim_{\rho(u(0)),\rho(v(0)),R}\left\lVert u-v\right\rVert_{L_T^\infty D(A)}.$$ The result then follows from Proposition $\ref{lemma linear new3}$. ◻ **Lemma 11**. *For $T>0$, let $x\in D(A)$ such that $\left\lvert x\right\rvert_1<\infty$. If $u\in C\left([0,T];D(A)\right)$ satisfies $u(t)=S_tx+iGF(u)$, then $u$ is a strict solution of $(\ref{nonlinear Schrodinger equation Hilbert})$ on $[0,T]$.
[\[inhomogeneous mild solution strict\]]{#inhomogeneous mild solution strict label="inhomogeneous mild solution strict"}* *Proof.* Let $H(t)=F(u)$ and $R=\left\lVert u\right\rVert_{L_T^\infty D(A)}$. It is easy to verify that $H(t)\in D(A)$ and for any $t_0\in[0,T]$, $$\left\lVert H(t)-H(t_0)\right\rVert_{D(A)}\lesssim_{\rho(x),R}\left\lVert u(t)-u(t_0)\right\rVert_{D(A)}\to0,\quad t\to t_0.$$ This shows that $H\in C\left([0,T];D(A)\right)$. Applying Theorem $\ref{linear strict solution theorem}$, there exists a unique strict solution $v$ on $[0,T]$ to the following equation $$iD_t^\alpha v(t)+Av(t)+H(t)=0,\quad v(0)=x.$$ Then it follows that $v(t)=S_tx+iGH(t)=S_tx+iGF(u)=u(t)$, which completes the proof. ◻ ***Proof of Theorem** $\mathbf{\ref{unique mild solution X}}$.* Let $R=\left\lVert x\right\rVert_{D(A)}$ and set $$X_R=\left\{u\in L^\infty\left((0,T);D(A)\right):u(0)=x,\quad\left\lVert u\right\rVert_{L_T^\infty D(A)}\lesssim R\right\}$$ with metric $$d(u,v)=\left\lVert u-v\right\rVert_{L_T^\infty D(A)}.$$ Define $Ku=S_tx+iGF(u)$. It follows from Proposition $\ref{lemma linear new2}$ and Lemma $\ref{unique mild solution X proof lemma1}$ that for any $u\in X_R$, $$\left\lVert Ku\right\rVert_{L_T^\infty D(A)}\lesssim\left\lVert x\right\rVert_{D(A)}+C_{\rho(x),R}T^\alpha\left\lVert u\right\rVert_{L_T^\infty D(A)}\lesssim R+C_{\rho(x),R}T^\alpha.$$ Then we can choose $T$ small enough such that $K$ maps $X_R$ into $X_R$. For any $u,v\in X_R$, we have, by Lemma $\ref{unique mild solution X proof lemma1}$, $$d\left(Ku,Kv\right)=\left\lVert GF(u)-GF(v)\right\rVert_{L_T^\infty D(A)}\lesssim_{\rho(x),R}T^\alpha\left\lVert u-v\right\rVert_{L_T^\infty D(A)}=T^\alpha d(u,v).$$ We can choose $T$ small enough such that $K$ is a contraction on $X_R$, which implies that $K$ has a unique fixed point $u\in X_R$ and hence $u=Ku\in C\left([0,T];D(A)\right)$ by Lemma $\ref{unique mild solution X proof lemma1}$.
Combining with Lemma $\ref{inhomogeneous mild solution strict}$ we obtain the local existence and uniqueness of the strict solution. ◻ # Proof of Theorem $\ref{continuation and blow up alternative}$ {#Proof of Theorem continuation and blow up alternative} **Lemma 12** (continuation). *Let $u$ be the strict solution of $(\ref{nonlinear Schrodinger equation Hilbert})$ on $[0,T]$ under the assumptions in Theorem [\[unique mild solution X\]](#unique mild solution X){reference-type="ref" reference="unique mild solution X"}. Then $u$ can be extended to the interval $[0,T^*]$ for some $T^*>T$ uniquely and the extended function is the strict solution of $(\ref{nonlinear Schrodinger equation Hilbert})$ on $[0,T^*]$. [\[extended function strict solution\]]{#extended function strict solution label="extended function strict solution"}* *Proof.* Due to Lemma $\ref{inhomogeneous mild solution strict}$, we only need to prove that $u$ can be extended to $v\in C\left([0,T^*];D(A)\right)$ satisfying $v(t)=S_tx+iGF(v)$ on $[0,T^*]$. Let $Kv=S_tx+iGF(v)$, $R=\left\lVert u\right\rVert_{L_T^\infty D(A)}$ and set $$E_R=\left\{v\in L^\infty\left((0,T^*);D(A)\right):\begin{array}{cc} v\equiv u\;\text{on}\;[0,T]\\ \left\lVert v-u(T)\right\rVert_{L_{(T,T^*)}^\infty D(A)}\lesssim R\;\text{on}\;[T,T^*] \end{array}\right\}$$ with the metric $$d(v,w)=\left\lVert v-w\right\rVert_{L_{T^*}^\infty D(A)}.$$ We first claim that $K$ maps $E_R$ into itself. Indeed, for any $v\in E_R$, we clearly have, by Proposition $\ref{lemma linear new2}$ and Lemma $\ref{unique mild solution X proof lemma1}$, that $Kv\in L^\infty\left((0,T^*);D(A)\right)$. And $Kv\equiv Ku=u$ on $[0,T]$ follows from the fact that $u$ is a strict solution on $[0,T]$.
Now on $[T,T^*]$, according to Proposition $\ref{proposition linear new approach bilinear1}$ and Proposition $\ref{8.16proposition linear new approach bilinear2}$, we can choose $T^*$ and $T$ to be close enough such that $$\begin{aligned} &\left\lVert Kv-u(T)\right\rVert_{L_{(T,T^*)}^\infty D(A)}\\ &\lesssim_{\rho(x),R}\left(1+T^{-1}\right)\left(T^*-T\right)\left\lVert x\right\rVert_{D(A)}+\left(\left(T^*-T\right)^\alpha+\left(T^{*\alpha+1}-T^{\alpha+1}\right)\right)\left\lVert v\right\rVert_{L_{T^*}^\infty D(A)}\\ &\lesssim_{\rho(x),R}\left(1+T^{-1}\right)\left(T^*-T\right)R+\left(\left(T^*-T\right)^{\alpha+1}+T\left(T^*-T\right)^\alpha+T^\alpha\left(T^*-T\right)\right)R\\ &\lesssim R.\end{aligned}$$ Then it follows that $K$ maps $E_R$ into itself. Similarly, for any $v,w\in E_R$, it follows from $(\ref{lemma linear new3 proof equation4})$ that $$\begin{aligned} d\left(Kv,Kw\right)&=\left\lVert Kv-Kw\right\rVert_{L_{(T,T^*)}^\infty D(A)}=\left\lVert GF(v)-GF(w)\right\rVert_{L_{(T,T^*)}^\infty D(A)}\\ &\lesssim\left(T^*-T\right)^\alpha\left\lVert F(v)-F(w)\right\rVert_{L_{T^*}^\infty D(A)}\lesssim_{\rho(x),R}\left(T^*-T\right)^\alpha d(v,w).\end{aligned}$$ Then we can choose $T^*$ and $T$ to be close enough such that $K$ is a contraction on $E_R$, which completes the proof. ◻ ***Proof of Theorem** $\mathbf{\ref{continuation and blow up alternative}}$.* Let $$T_{\max}=\sup\left\{T\in(0,\infty):\exists!\;\text{strict solution to}\;(\ref{nonlinear Schrodinger equation Hilbert})\;\text{on}\;[0,T]\right\}.$$ Lemma $\ref{extended function strict solution}$ shows the existence of $T_{\max}$ and that $u\in C\left([0,T_{\max});D(A)\right)$. It is clear that $0<T_{\max}\leq\infty$. Suppose that $T_{\max}<\infty$ but $\left\lVert u(t)\right\rVert_{D(A)}\lesssim R$ on $[0,T_{\max})$.
When $t\to T_{\max}$, assuming $t\in\left[T_{\max}-\delta,T_{\max}\right]$, by Proposition $\ref{proposition linear new approach bilinear1}$ and Proposition $\ref{8.16proposition linear new approach bilinear2}$, we have $$\begin{aligned} &\left\lVert u(t)-u(T_{\max})\right\rVert_{D(A)}\\ &\lesssim_{\rho(x),R}\left(1+\left(T_{\max}-\delta\right)^{-1}\right)\left(T_{\max}-t\right)\left\lVert x\right\rVert_{D(A)}+\left(\left(T_{\max}-t\right)^\alpha+\left(T_{\max}^{\alpha+1}-t^{\alpha+1}\right)\right)\left\lVert u\right\rVert_{L_{T_{\max}}^\infty D(A)}\\ &\lesssim_{\rho(x),R}\left(1+\left(T_{\max}-\delta\right)^{-1}\right)\left(T_{\max}-t\right)+\left(\left(T_{\max}-t\right)^\alpha+\left(T_{\max}^{\alpha+1}-t^{\alpha+1}\right)\right)\\ &\to0,\quad t\to T_{\max}.\end{aligned}$$ It follows that $u\in C\left([0,T_{\max}];D(A)\right)$. But by Lemma $\ref{extended function strict solution}$, $u$ can be extended to the interval $[0,T^*]$ for some $T^*>T_{\max}$ which contradicts the definition of $T_{\max}$. Then $T_{\max}<\infty$ implies that $\lim\limits_{t\to T_{\max}}\left\lVert u(t)\right\rVert_{D(A)}=\infty$. ◻ # Proof of Theorem $\ref{unique global solution X}$ {#Proof of Theorem unique global solution X} **Lemma 13**. *Let $w:[0,\infty)\to[0,\infty)$ be a continuous, nondecreasing, nonnegative function which is not always $0$ and $u(t)$ be a continuous, nonnegative function on $[0,T]$ satisfying $$u(t)\lesssim1+\int_0^t\left(t-\tau\right)^{\alpha-1}w\left(u(\tau)\right)d\tau$$ where $\alpha\in(0,1)$. 
Then $$\int_1^{u(t)}\frac{\sigma^{\frac{1}{\alpha}+\varepsilon-1}}{w(\sigma)^{\frac{1}{\alpha}+\varepsilon}}d\sigma\lesssim_\varepsilon e^{\left(\frac{1}{\alpha}+\varepsilon\right)T}\left(1-e^{-\left(\frac{1}{\alpha}+\varepsilon\right)t}\right).$$ [\[9.9proof of Theorem unique global solution X lemma\]]{#9.9proof of Theorem unique global solution X lemma label="9.9proof of Theorem unique global solution X lemma"}* *Proof.* The proof is similar to that of Theorem 2 in [@Integral-Inequalities-and-Global-Solutions-of-Semilinear-Evolution-Equations] and we omit it. ◻ ***Proof of Theorem** $\mathbf{\ref{unique global solution X}}$.* It suffices to prove that $\left\lVert u(t)\right\rVert_{D(A)}$ is bounded on every interval $[0,T]$. Assume for contradiction that there exists $T<\infty$ such that $\lim\limits_{t\uparrow T}\left\lVert u(t)\right\rVert_{D(A)}=\infty$. Then for such $T$, by Proposition $\ref{lemma linear new2}$, Assumption $\ref{AssumptionB'}$ and $(\ref{lemma linear new3 proof equation3})$, we obtain $$\left\lVert u(t)\right\rVert_{D(A)}\lesssim_{\left\lVert x\right\rVert_{D(A)},\varrho(x)}1+\int_0^t\left(t-\tau\right)^{\alpha-1}w\left(\left\lVert u(\tau)\right\rVert_{D(A)}\right)d\tau.$$ Then Lemma $\ref{9.9proof of Theorem unique global solution X lemma}$ shows that $$\int_1^{\left\lVert u(t)\right\rVert_{D(A)}}\frac{\sigma^{\frac{1}{\alpha}+\varepsilon-1}}{w(\sigma)^{\frac{1}{\alpha}+\varepsilon}}d\sigma\lesssim_{\varepsilon,\left\lVert x\right\rVert_{D(A)},\varrho(x)}e^{\left(\frac{1}{\alpha}+\varepsilon\right)T}.$$ Letting $t\uparrow T$ on both sides of the inequality we have $$\int_1^\infty\frac{\sigma^{\frac{1}{\alpha}+\varepsilon-1}}{w(\sigma)^{\frac{1}{\alpha}+\varepsilon}}d\sigma\lesssim_{\varepsilon,\left\lVert x\right\rVert_{D(A)},\varrho(x)}e^{\left(\frac{1}{\alpha}+\varepsilon\right)T}.$$ This leads to a contradiction since $w\in C_\infty[0,\infty)$. Then the proof is complete. ◻ # Application.
The well-posedness of the fractional dispersive equation {#9.16Application. The well-posedness of the fractional dispersive equation} In this section, we will consider the well-posedness of $(\ref{9.1 very general dispersive equation motivated})$. The hypotheses on $F$ are: **Assumption 3**. *$F(0)=0$ and $\left\lVert F(u)-F(v)\right\rVert_{H^s(\mathbb{R}^n)}\lesssim_{\rho(u(0)),\rho(v(0)),R}\left\lVert u-v\right\rVert_{H^s(\mathbb{R}^n)}$ a.e. on $I\subset [0,\infty)$, where $R$ is the essential upper bound of $\left\lVert u(t)\right\rVert_{H^s(\mathbb{R}^n)}$ and $\left\lVert v(t)\right\rVert_{H^s(\mathbb{R}^n)}$ on $I$. [\[AssumptionA\]]{#AssumptionA label="AssumptionA"}* **Assumption 4**. *There exists a $w\in C_\infty[0,\infty)$ such that $\left\lVert F(u)\right\rVert_{H^s(\mathbb{R}^n)}\lesssim_{\varrho(u(0))}w\left(\left\lVert u(t)\right\rVert_{H^s(\mathbb{R}^n)}\right)$. [\[AssumptionB\]]{#AssumptionB label="AssumptionB"}* Here $\rho(\cdot)$ and $\varrho(\cdot)$ are norms. We will prove the following results. **Theorem 8** (local well-posedness). *Let $\alpha\in(0,1)$, $m\geq\frac{n}{2}$, $s\geq m$, $q\in L^2(\mathbb{R}^n)$ and $V\in L^\infty(\mathbb{R}^n)$. The operator $P(D)$ is defined by $P(D)u=\mathscr{F}^{-1}\left(P(\xi)\mathscr{F}u\right)$, where $P(\xi)\in C\left(\mathbb{R}^n;\mathbb{R}\right)$ and $\left\lvert P(\xi)\right\rvert\sim\left\lvert\xi\right\rvert^m$ as $\lvert\xi\rvert\to\infty$. Suppose in addition that Assumption $\ref{AssumptionA}$ holds.
If $u_0\in H^s(\mathbb{R}^n)$ satisfies $\left\lVert u_0\right\rVert_{H^s(\mathbb{R}^n)}\vee\rho(u_0)<\infty$, then there exists a positive number $T$ which depends only on $\left\lVert u_0\right\rVert_{H^s(\mathbb{R}^n)}$ and $\rho(u_0)$ such that $(\ref{9.1 very general dispersive equation motivated equivalent})$ admits a unique solution on $[0,T]$ in the class $$u\in C\left([0,T];H^s(\mathbb{R}^n)\right),\quad\mathbf{D}_t^\alpha(u-u_0)\in C\left([0,T];L^2(\mathbb{R}^n)\right).$$ In addition, $u$ can be extended to the maximal interval $[0,T_{\max})$ such that $$u\in C\left([0,T_{\max});H^s(\mathbb{R}^n)\right),\quad\mathbf{D}_t^\alpha(u-u_0)\in C\left([0,T_{\max});L^2(\mathbb{R}^n)\right)$$ and $T_{\max}<\infty$ implies $\lim\limits_{t\uparrow T_{\max}}\left\lVert u(t)\right\rVert_{H^s(\mathbb{R}^n)}=\infty$. [\[9.1 very general dispersive equation motivated equivalent result\]]{#9.1 very general dispersive equation motivated equivalent result label="9.1 very general dispersive equation motivated equivalent result"}* **Theorem 9** (global well-posedness). *Let the assumptions in Theorem $\ref{9.1 very general dispersive equation motivated equivalent result}$ and Assumption $\ref{AssumptionB}$ hold.
If $u_0\in H^s(\mathbb{R}^n)$ satisfies $\left\lVert u_0\right\rVert_{H^s(\mathbb{R}^n)}\vee\rho(u_0)\vee\varrho(u_0)<\infty$, then $(\ref{9.1 very general dispersive equation motivated equivalent})$ admits a unique solution on $[0,\infty)$ in the class $$u\in C\left([0,\infty);H^s(\mathbb{R}^n)\right),\quad\mathbf{D}_t^\alpha(u-u_0)\in C\left([0,\infty);L^2(\mathbb{R}^n)\right).$$ [\[9.1 very general dispersive equation motivated equivalent result global\]]{#9.1 very general dispersive equation motivated equivalent result global label="9.1 very general dispersive equation motivated equivalent result global"} That is, the solution in Theorem $\ref{9.1 very general dispersive equation motivated equivalent result}$ is global.* To deal with these theorems, it suffices to consider the following equivalent equation $$\begin{cases} iD_t^\alpha u+P(D)u+qu+Vu+F(u)=0,\quad&x\in\mathbb{R}^n,\;t>0\\ u(0,x)=u_0(x),\quad&x\in\mathbb{R}^n \end{cases} \label{9.1 very general dispersive equation motivated equivalent}$$ where the assumptions on $P(D), q, V, F$ are the same as in Theorem $\ref{9.1 very general dispersive equation motivated equivalent result}$ and Theorem $\ref{9.1 very general dispersive equation motivated equivalent result global}$. Define the following operators: $$\begin{aligned} &Hu=\mathscr{F}^{-1}\left(P(\xi)\mathscr{F}u\right),\quad D(H)=H^s(\mathbb{R}^n),\;s\geq m,\\ &Q_1u=qu,\quad D(Q_1)=\left\{u\in L^2(\mathbb{R}^n):qu\in L^2(\mathbb{R}^n)\right\},\\ &Q_2u=Vu,\quad D(Q_2)=\left\{u\in L^2(\mathbb{R}^n):Vu\in L^2(\mathbb{R}^n)\right\},\end{aligned}$$ and $Tu=Q_1u+Q_2u$, $Au=Hu+Tu$. $Q_1$ and $Q_2$ are called the maximal multiplication operators by $q$ and $V$ respectively, and hence they are selfadjoint operators in $L^2(\mathbb{R}^n)$ (see [@Perturbation-Theory-for-Linear-Operators]). We claim that $D(H)\subset\left\{u\in L^2(\mathbb{R}^n):P(D)u\in L^2(\mathbb{R}^n)\right\}$.
Indeed, by the assumption $s\geq m$, we obtain $\frac{P(\xi)}{\left(1+\left\lvert\xi\right\rvert^2\right)^{\frac{s}{2}}}\in L^\infty(\mathbb{R}^n)$ and hence $$\left\lVert P(D)u\right\rVert_{L^2(\mathbb{R}^n)}\sim\left\lVert\frac{P(\xi)}{\left(1+\left\lvert\xi\right\rvert^2\right)^{\frac{s}{2}}}\left(1+\left\lvert\xi\right\rvert^2\right)^{\frac{s}{2}}\mathscr{F}u\right\rVert_{L^2(\mathbb{R}^n)}<\infty.$$ Also it's easy to see that $H$ is a selfadjoint operator in $L^2(\mathbb{R}^n)$. **Proposition 7**. *$A$ is a selfadjoint operator in $L^2(\mathbb{R}^n)$ with $D(A)=D(H)$. [\[9.1 introductioin selfadjoint motivated\]]{#9.1 introductioin selfadjoint motivated label="9.1 introductioin selfadjoint motivated"}* *Proof.* We choose $\gamma$ large enough and then the asymptotic behavior of $P(\xi)$ shows that $$\begin{aligned} \int_{\mathbb{R}^n}\frac{1}{P(\xi)^2+\gamma^2}d\xi&=\int_{\lvert\xi\rvert\leq\gamma^{\frac{1}{m}}}\frac{1}{P(\xi)^2+\gamma^2}d\xi+\int_{\lvert\xi\rvert>\gamma^{\frac{1}{m}}}\frac{1}{P(\xi)^2+\gamma^2}d\xi\\ &\lesssim\gamma^{\frac{n}{m}-2}+\int_{\lvert\xi\rvert>\gamma^{\frac{1}{m}}}\frac{1}{\left\lvert\xi\right\rvert^{2m}+\gamma^2}d\xi\\ &\sim\gamma^{\frac{n}{m}-2}.\end{aligned}$$ It follows that $$\begin{aligned} \left(\int_{\mathbb{R}^n}\left\lvert\mathscr{F}u\right\rvert d\xi\right)^2&=\left(\int_{\mathbb{R}^n}\frac{1}{P(\xi)+\gamma}(P(\xi)+\gamma)\left\lvert\mathscr{F}u\right\rvert d\xi\right)^2\\ &\leq\int_{\mathbb{R}^n}\frac{1}{\left(P(\xi)+\gamma\right)^2}d\xi\int_{\mathbb{R}^n}\left(P(\xi)+\gamma\right)^2\left\lvert\mathscr{F}u\right\rvert^2d\xi\\ &\lesssim\int_{\mathbb{R}^n}\frac{1}{P(\xi)^2+\gamma^2}d\xi\left(\left\lVert P(D)u\right\rVert_{L^2(\mathbb{R}^n)}^2+\gamma^2\left\lVert u\right\rVert_{L^2(\mathbb{R}^n)}^2\right)\\ &\lesssim\gamma^{\frac{n}{m}-2}\left\lVert P(D)u\right\rVert_{L^2(\mathbb{R}^n)}^2+\gamma^{\frac{n}{m}}\left\lVert u\right\rVert_{L^2(\mathbb{R}^n)}^2\\ &=\gamma^{\frac{n}{m}-2}\left\lVert 
Hu\right\rVert_{L^2(\mathbb{R}^n)}^2+\gamma^{\frac{n}{m}}\left\lVert u\right\rVert_{L^2(\mathbb{R}^n)}^2\end{aligned}$$ and hence $$\left\lVert u\right\rVert_{L^\infty(\mathbb{R}^n)}\lesssim\left\lVert\mathscr{F}u\right\rVert_{L^1(\mathbb{R}^n)}\lesssim\gamma^{\frac{n}{2m}-1}\left\lVert Hu\right\rVert_{L^2(\mathbb{R}^n)}+\gamma^{\frac{n}{2m}}\left\lVert u\right\rVert_{L^2(\mathbb{R}^n)}.$$ Since $$\begin{aligned} &\begin{aligned} \left\lVert Q_1u\right\rVert_{L^2(\mathbb{R}^n)}&=\left\lVert qu\right\rVert_{L^2(\mathbb{R}^n)}\leq\left\lVert q\right\rVert_{L^2(\mathbb{R}^n)}\left\lVert u\right\rVert_{L^\infty(\mathbb{R}^n)}\\ &\lesssim\gamma^{\frac{n}{2m}-1}\left\lVert q\right\rVert_{L^2(\mathbb{R}^n)}\left\lVert Hu\right\rVert_{L^2(\mathbb{R}^n)}+\gamma^{\frac{n}{2m}}\left\lVert q\right\rVert_{L^2(\mathbb{R}^n)}\left\lVert u\right\rVert_{L^2(\mathbb{R}^n)}, \end{aligned}\\ &\left\lVert Q_2u\right\rVert_{L^2(\mathbb{R}^n)}=\left\lVert Vu\right\rVert_{L^2(\mathbb{R}^n)}\leq\left\lVert V\right\rVert_{L^\infty(\mathbb{R}^n)}\left\lVert u\right\rVert_{L^2(\mathbb{R}^n)},\end{aligned}$$ we have $$\left\lVert Tu\right\rVert_{L^2(\mathbb{R}^n)}\lesssim\left(\gamma^{\frac{n}{2m}}\left\lVert q\right\rVert_{L^2(\mathbb{R}^n)}+\left\lVert V\right\rVert_{L^\infty(\mathbb{R}^n)}\right)\left\lVert u\right\rVert_{L^2(\mathbb{R}^n)}+\gamma^{\frac{n}{2m}-1}\left\lVert q\right\rVert_{L^2(\mathbb{R}^n)}\left\lVert Hu\right\rVert_{L^2(\mathbb{R}^n)}. \label{9.1 introductioin selfadjoint motivated proof equation1}$$ Thus we can choose $\gamma$ large enough such that $T$ is $H$-bounded with $H$-bound smaller than $1$ and then $A$ is a selfadjoint operator in $L^2(\mathbb{R}^n)$ by Theorem $\ref{9.1 perturbation selfadjoint operator}$. Now it remains to prove $D(A)=D(H)$. 
$(\ref{9.1 introductioin selfadjoint motivated proof equation1})$ shows $$\begin{aligned} \left\lVert Au\right\rVert_{L^2(\mathbb{R}^n)}&\leq\left\lVert Hu\right\rVert_{L^2(\mathbb{R}^n)}+\left\lVert Tu\right\rVert_{L^2(\mathbb{R}^n)}\\ &\lesssim\left(\gamma^{\frac{n}{2m}}\left\lVert q\right\rVert_{L^2(\mathbb{R}^n)}+\left\lVert V\right\rVert_{L^\infty(\mathbb{R}^n)}\right)\left\lVert u\right\rVert_{L^2(\mathbb{R}^n)}+\left(1+\gamma^{\frac{n}{2m}-1}\left\lVert q\right\rVert_{L^2(\mathbb{R}^n)}\right)\left\lVert Hu\right\rVert_{L^2(\mathbb{R}^n)}\end{aligned}$$ and hence $D(H)\subset D(A)$. On the other hand, since $$\begin{aligned} &\left\lVert Hu\right\rVert_{L^2(\mathbb{R}^n)}\\ &\leq\left\lVert Au\right\rVert_{L^2(\mathbb{R}^n)}+\left\lVert Tu\right\rVert_{L^2(\mathbb{R}^n)}\\ &\leq\left\lVert Au\right\rVert_{L^2(\mathbb{R}^n)}+C\left(\gamma^{\frac{n}{2m}}\left\lVert q\right\rVert_{L^2(\mathbb{R}^n)}+\left\lVert V\right\rVert_{L^\infty(\mathbb{R}^n)}\right)\left\lVert u\right\rVert_{L^2(\mathbb{R}^n)}+C\gamma^{\frac{n}{2m}-1}\left\lVert q\right\rVert_{L^2(\mathbb{R}^n)}\left\lVert Hu\right\rVert_{L^2(\mathbb{R}^n)},\end{aligned}$$ we can choose $\gamma$ large enough such that $C\gamma^{\frac{n}{2m}-1}\left\lVert q\right\rVert_{L^2(\mathbb{R}^n)}<1$ and then $$\begin{aligned} &\left\lVert Hu\right\rVert_{L^2(\mathbb{R}^n)}\\ &\leq\left(1-C\gamma^{\frac{n}{2m}-1}\left\lVert q\right\rVert_{L^2(\mathbb{R}^n)}\right)^{-1}\left\lVert Au\right\rVert_{L^2(\mathbb{R}^n)}\\ &+C\left(\gamma^{\frac{n}{2m}}\left\lVert q\right\rVert_{L^2(\mathbb{R}^n)}+\left\lVert V\right\rVert_{L^\infty(\mathbb{R}^n)}\right)\left(1-C\gamma^{\frac{n}{2m}-1}\left\lVert q\right\rVert_{L^2(\mathbb{R}^n)}\right)^{-1}\left\lVert u\right\rVert_{L^2(\mathbb{R}^n)}\end{aligned}$$ which implies that $D(A)\subset D(H)$. Then the proof is complete. 
◻ By the above arguments and applying Theorem $\ref{unique mild solution X}$, Theorem $\ref{continuation and blow up alternative}$ and Theorem $\ref{unique global solution X}$ we can obtain Theorem $\ref{9.1 very general dispersive equation motivated equivalent result}$ and Theorem $\ref{9.1 very general dispersive equation motivated equivalent result global}$. # On the fractional integral and fractional derivatives {#9.1Fractionl integral and derivative definition appendix} Here we give a brief introduction to the fractional integral and fractional derivatives. Throughout, we consider only $0<\alpha<1$ unless stated otherwise. Let $$g_\alpha(t):=\begin{cases} \frac{t^{\alpha-1}}{\Gamma(\alpha)},\quad&t>0\\ 0,\quad&t\leq0 \end{cases}.$$ We can now define the Riemann-Liouville fractional integral ($I_t^\alpha$), the Riemann-Liouville fractional derivative ($\mathbf{D}_t^\alpha$) and the Caputo derivative ($D_t^\alpha$) by $$I_t^\alpha u(t)=\left(g_\alpha*u\right)(t),\quad\mathbf{D}_t^\alpha u(t)=\frac{d}{dt}\left(g_{1-\alpha}*u\right)(t),\quad D_t^\alpha u(t)=\left(g_{1-\alpha}*u'\right)(t),$$ and the relationship between the Riemann-Liouville derivative and the Caputo derivative is given by $$D_t^\alpha u(t)=\mathbf{D}_t^\alpha\left(u(t)-u(0)\right).$$ # On the Mittag-Leffler functions {#9.17On the Mittag-Leffler functions} Here we give a brief introduction to the Mittag-Leffler functions, which are the fundamental functions in fractional differential equations. **Definition 4** ([@Theory-and-Applications-of-Fractional-Differential-Equations-Chapter1-Preliminaries]). *Let $\alpha,\beta,z\in\mathbb{C}$ with $\mathop{\mathrm{Re}}\alpha>0$. We define the Mittag-Leffler function by $$E_{\alpha,\beta}(z):=\sum\limits_{k=0}^\infty\frac{z^k}{\Gamma(\alpha k+\beta)}.$$* We state the asymptotic expansion and the derivative of the Mittag-Leffler function as follows.
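Before turning to these statements, the series in Definition 4 can be checked numerically by direct truncation. The sketch below (the helper name `mittag_leffler` is illustrative, not from the paper) verifies two classical special cases, $E_{1,1}(z)=e^z$ and $E_{2,1}(z^2)=\cosh z$, and runs a finite-difference check of the first derivative identity stated below.

```python
from math import gamma, exp, cosh

def mittag_leffler(alpha, beta, z, terms=80):
    # Truncated series E_{alpha,beta}(z) = sum_k z^k / Gamma(alpha*k + beta);
    # 80 terms keep Gamma's argument within double-precision range for alpha <= 2.
    return sum(z ** k / gamma(alpha * k + beta) for k in range(terms))

# Classical special cases of the series.
assert abs(mittag_leffler(1, 1, 2.0) - exp(2.0)) < 1e-9
assert abs(mittag_leffler(2, 1, 4.0) - cosh(2.0)) < 1e-9

# Derivative identity (d/dt) E_{a,1}(l t^a) = l t^(a-1) E_{a,a}(l t^a),
# checked by a central finite difference at t = 1 for a = 1/2, l = -1.
a, l, t, h = 0.5, -1.0, 1.0, 1e-5
lhs = (mittag_leffler(a, 1, l * (t + h) ** a)
       - mittag_leffler(a, 1, l * (t - h) ** a)) / (2 * h)
rhs = l * t ** (a - 1) * mittag_leffler(a, a, l * t ** a)
assert abs(lhs - rhs) < 1e-6
```

For large $|z|$ the truncated series is unusable in floating point; that regime is exactly where the asymptotic expansion below takes over.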
**Theorem 10** ([@Fractional-differential-equations-an-introduction-to-fractional-derivatives-fractional-differential-equations-to-methods-of-their-solution-and-some-of-their-applications]). *If $0<\alpha<2$, $\beta$ is an arbitrary complex number and $\mu$ is an arbitrary real number such that $$\frac{\pi\alpha}{2}<\mu<\pi\wedge\pi\alpha,$$ then for an arbitrary integer $p\geq1$ the following expansion holds $$E_{\alpha,\beta}(z)=-\sum\limits_{k=1}^p\frac{z^{-k}}{\Gamma(\beta-\alpha k)}+O\left(\left\lvert z\right\rvert^{-1-p}\right),\quad\lvert z\rvert\to\infty,\;\mu\leq\lvert\arg z\rvert\leq\pi.$$ [\[Mittag-Leffler function asymptotic expansion\]]{#Mittag-Leffler function asymptotic expansion label="Mittag-Leffler function asymptotic expansion"}* **Theorem 11**. *Let $0<\alpha<1$ and $\lambda\in\mathbb{C}$. Then $$\begin{gathered} \frac{d}{dt}E_{\alpha,1}\left(\lambda t^\alpha\right)=\lambda t^{\alpha-1}E_{\alpha,\alpha}\left(\lambda t^\alpha\right),\\ \frac{d}{dt}\left(t^{\alpha-1}E_{\alpha,\alpha}\left(\lambda t^\alpha\right)\right)=t^{\alpha-2}E_{\alpha,\alpha-1}\left(\lambda t^\alpha\right).\end{gathered}$$ [\[Derivative Mittag Leffler\]]{#Derivative Mittag Leffler label="Derivative Mittag Leffler"}* # The perturbation and the spectral theorem of the selfadjoint operators {#9.17The perturbation and the spectral theorem of the selfadjoint operators} ## The perturbation of the selfadjoint operators Recall that an operator $A\in\mathscr{C}(X,Y)$ is relatively bounded with respect to $T\in\mathscr{C}(X,Y)$ (or $T$-bounded) if $D(A)\supset D(T)$ and $$\left\lVert Au\right\rVert_Y\leq a\left\lVert u\right\rVert_X+b\left\lVert Tu\right\rVert_Y,$$ where $X,Y$ are Banach spaces; the infimum of all admissible constants $b$ is called the $T$-bound of $A$. **Theorem 12** ([@Perturbation-Theory-for-Linear-Operators]). *Let $H$ be selfadjoint. If $T$ is symmetric and $H$-bounded with $H$-bound smaller than $1$, then $H+T$ is selfadjoint.
[\[9.1 perturbation selfadjoint operator\]]{#9.1 perturbation selfadjoint operator label="9.1 perturbation selfadjoint operator"}* ## The spectral theorem of the selfadjoint operators {#9.5appendix The spectral theorem of the selfadjoint operator} Let $H$ be a separable Hilbert space and $A$ be a selfadjoint operator. We write $D(A^0):=H$ and endow $D(A)$ with the graph norm $\left\lVert x\right\rVert_{D(A)}:=\left\lVert x\right\rVert_H+\left\lVert Ax\right\rVert_H$. Define $D\left(A^n\right),n\geq2$ by $$D\left(A^n\right):=\left\{x\in D\left(A^{n-1}\right):A^{n-1}x\in D(A)\right\},$$ with the graph norm $$\left\lVert x\right\rVert_{D\left(A^n\right)}:=\left\lVert x\right\rVert_H+\left\lVert A^nx\right\rVert_H.$$ Note that $A^n$ is selfadjoint and $D(A^n)$ is a Banach space and also a Hilbert space. By induction, it is easy to check the following equivalent description: $$D(A^n)=\begin{cases} H,\quad &n=0\\ D(A),\quad &n=1\\ \left\{x\in D(A):Ax\in D(A),\cdots,A^{n-1}x\in D(A)\right\},\quad &n\geq2 \end{cases}.$$ **Lemma 14**. *Let $H$ be a separable Hilbert space, $A$ be a selfadjoint operator on $H$ with domain $D(A)$. Then there exists a measure space $\left(\Omega,\mu\right)$ with $\mu$ a finite measure, a unitary operator $U:L^2(\Omega)\to H$ and a real-valued function $a(\xi)$ on $\Omega$ which is finite a.e. such that:* 1. *$\psi\in D\left(A^n\right)\Longleftrightarrow\bigcup\limits_{k=0}^n\left\{a(\xi)^kU^{-1}\psi\right\}\subset L^2(\Omega)$,$n\geq0$,* 2. *$U\varphi\in D(A^n)\Longrightarrow A^nU\varphi=U\left(a(\xi)^n\varphi\right)$,$n\geq0$.* *Moreover, if $A$ is injective, we have, with $D\left(A^{-n}\right)=R\left(A^n\right)$,* 1. *$\psi\in D\left(A^{-n}\right)\Longleftrightarrow\bigcup\limits_{k=0}^n\left\{a(\xi)^{-k}U^{-1}\psi\right\}\subset L^2(\Omega)$,$n\geq0$,* 2.
*$U\varphi\in D(A^{-n})\Longrightarrow A^{-n}U\varphi=U\left(a(\xi)^{-n}\varphi\right)$,$n\geq0$.* *In addition, the measure space $(\Omega,\mu)$ and the function $a(\xi)$ can be chosen such that $a\in L^p(\Omega)$ for all $p$ with $1\leq p<\infty.$ [\[Local solution lemma\]]{#Local solution lemma label="Local solution lemma"}* *Proof.* **Proof of** $\mathbf{(i)}$ **and** $\mathbf{(ii)}$. The case $n=0$ is trivial. For the proof of the case $n=1$, see [@Methods-of-Modern-Mathematical-Physics-VIII-Unbounded-Operators]. Assume that $(ii)$ is valid in the case $U\varphi\in D(A^{n-1})$. When $U\varphi\in D(A^n)$, $$A^nU\varphi=AU\left(a(\xi)^{n-1}\varphi\right)=U\left(a(\xi)^n\varphi\right),$$ which completes the proof of $(ii)$. Similarly, assume that $(i)$ is valid for the case $n-1$. For the case $n$, by the definition of $D(A^n)$, we have $$\psi\in D(A^n)\Longleftrightarrow\psi\in D(A^{n-1})\;\text{and}\;A^{n-1}\psi\in D(A).$$ On the one hand, by assumption, $$\psi\in D(A^{n-1})\Longleftrightarrow\bigcup\limits_{k=0}^{n-1}\left\{a(\xi)^kU^{-1}\psi\right\}\subset L^2(\Omega),$$ on the other hand, $$A^{n-1}\psi\in D(A)\Longleftrightarrow a(\xi)U^{-1}\left(A^{n-1}\psi\right)\in L^2(\Omega)\Longleftrightarrow a(\xi)^nU^{-1}\psi\in L^2(\Omega),$$ then we obtain $$\psi\in D\left(A^n\right)\Longleftrightarrow\bigcup\limits_{k=0}^n\left\{a(\xi)^kU^{-1}\psi\right\}\subset L^2(\Omega).$$ Then the proof of $(i)$ is complete. **Proof of** $\mathbf{(a)}$ **and** $\mathbf{(b)}$. We just need to prove the case $n\geq1$.
Recall that $$\psi\in D(A^{-n})\Longleftrightarrow\exists\varphi\in D(A^n)\;\text{s.t.}\;\psi=A^n\varphi.$$ By $(ii)$, $$U^{-1}\psi=U^{-1}\left(A^n\varphi\right)=a(\xi)^nU^{-1}\varphi,$$ that is $$a(\xi)^{-n}U^{-1}\psi=U^{-1}\varphi,$$ it follows that $$\psi\in D(A^{-n})\Longleftrightarrow\exists\varphi\in D(A^n)\;\text{s.t.}\;a(\xi)^{-n}U^{-1}\psi=U^{-1}\varphi.$$ But by $(i)$, $$\varphi\in D(A^n)\Longleftrightarrow\bigcup\limits_{k=0}^n\left\{a(\xi)^kU^{-1}\varphi\right\}\subset L^2(\Omega),$$ we can deduce that $$\begin{aligned} \psi\in D(A^{-n})&\Longleftrightarrow\bigcup\limits_{k=0}^n\left\{a(\xi)^{-(n-k)}U^{-1}\psi\right\}\subset L^2(\Omega)\\ &\Longleftrightarrow\bigcup\limits_{k=0}^n\left\{a(\xi)^{-k}U^{-1}\psi\right\}\subset L^2(\Omega).\end{aligned}$$ Then we complete the proof of $(a)$ and $(b)$. ◻ Note that $a(\xi)\neq0$ a.e. on $\Omega$; see the proof of Theorem VIII.4 in [@Methods-of-Modern-Mathematical-Physics-VIII-Unbounded-Operators]. # Some further results of the linear $(\ref{nonlinear Schrodinger equation Hilbert})$ {#Some further results of the linear} ## Hölder regularities **Proposition 8**. *For $T>0$ and $0<t,s<T$, let $\frac{1}{2}<\alpha<1$ and $v\in L^q\left((0,T);H\right)$ where $\frac{1}{2\alpha-1}<q<\infty$. We have $$\left\lVert Gv(t)-Gv(s)\right\rVert_H\lesssim\left(T^{\frac{1}{q'}+4\alpha-3}+T^{\frac{1}{q'}+2\alpha-1}+T^{\frac{1}{q'}-2(1-\alpha)}\right)\left\lvert t-s\right\rvert^{1-\alpha}\left\lVert v\right\rVert_{L_T^qH}.
\label{8.28 linear Holder regularity proposition1 equation1}$$ [\[8.28 linear Holder regularity proposition1\]]{#8.28 linear Holder regularity proposition1 label="8.28 linear Holder regularity proposition1"}* *Proof.* Using the notations in Proposition $\ref{8.16proposition linear new approach bilinear2}$, we first have $$\begin{aligned} \left\lVert I_1\right\rVert_H&\lesssim\int_0^s\left(\left(s-\tau\right)^{\alpha-1}-\left(t-\tau\right)^{\alpha-1}+\left(t-\tau\right)^\alpha-\left(s-\tau\right)^\alpha\right)\left\lVert v(\tau)\right\rVert_Hd\tau\\ &\leq\left(\int_0^s\left(\left(s-\tau\right)^{\alpha-1}-\left(t-\tau\right)^{\alpha-1}+\left(t-\tau\right)^\alpha-\left(s-\tau\right)^\alpha\right)^{q'}d\tau\right)^{\frac{1}{q'}}\left\lVert v\right\rVert_{L_T^qH}\\ &\lesssim\left(\left(t-s\right)^{1-\alpha}\left(\int_0^s\left(s-\tau\right)^{(\alpha-1)q'}\left(t-\tau\right)^{(\alpha-1)q'}d\tau\right)^{\frac{1}{q'}}+s^{\frac{1}{q'}}\left(t-s\right)^\alpha\right)\left\lVert v\right\rVert_{L_T^qH}\\ &\leq\left(\left(t-s\right)^{1-\alpha}\left(\int_0^s\left(s-\tau\right)^{2(\alpha-1)q'}d\tau\right)^{\frac{1}{q'}}+s^{\frac{1}{q'}}\left(t-s\right)^\alpha\right)\left\lVert v\right\rVert_{L_T^qH}\\ &\lesssim\left(s^{\frac{1}{q'}-2(1-\alpha)}\left(t-s\right)^{1-\alpha}+s^{\frac{1}{q'}}\left(t-s\right)^\alpha\right)\left\lVert v\right\rVert_{L_T^qH}\\ &\leq\left(T^{\frac{1}{q'}+4\alpha-3}+T^{\frac{1}{q'}+2\alpha-1}\right)\left(t-s\right)^{1-\alpha}\left\lVert v\right\rVert_{L_T^qH}\end{aligned}$$ and $$\left\lVert I_3\right\rVert_H\lesssim\left(T^{\frac{1}{q'}+4\alpha-3}+T^{\frac{1}{q'}+2\alpha-1}\right)\left(t-s\right)^{1-\alpha}\left\lVert v\right\rVert_{L_T^qH}.$$ Also we have $$\begin{aligned} \left\lVert I_2\right\rVert_H&\lesssim\int_s^t\left(t-\tau\right)^{\alpha-1}\left\lVert v(\tau)\right\rVert_Hd\tau\\ &\leq\left(\int_s^t\left(t-\tau\right)^{(\alpha-1)q'}d\tau\right)^{\frac{1}{q'}}\left\lVert v\right\rVert_{L_T^qH}\\ 
&\lesssim\left(t-s\right)^{\frac{1}{q'}-(1-\alpha)}\left\lVert v\right\rVert_{L_T^qH}\\ &\leq T^{\frac{1}{q'}-2(1-\alpha)}\left(t-s\right)^{1-\alpha}\left\lVert v\right\rVert_{L_T^qH}\end{aligned}$$ and $$\left\lVert I_4\right\rVert_H\lesssim T^{\frac{1}{q'}-2(1-\alpha)}\left(t-s\right)^{1-\alpha}\left\lVert v\right\rVert_{L_T^qH}.$$ Then there holds $$\left\lVert G^hv(t)-G^hv(s)\right\rVert_H\lesssim\left(T^{\frac{1}{q'}+4\alpha-3}+T^{\frac{1}{q'}+2\alpha-1}+T^{\frac{1}{q'}-2(1-\alpha)}\right)\left(t-s\right)^{1-\alpha}\left\lVert v\right\rVert_{L_T^qH}.$$ On the other hand, since $$\begin{aligned} \left\lVert I_5\right\rVert_H&\lesssim\int_0^s\left(\left(s-\tau\right)^{\alpha-1}-\left(t-\tau\right)^{\alpha-1}+\left(t-\tau\right)^\alpha-\left(s-\tau\right)^\alpha\right)\left\lVert v(\tau)\right\rVert_Hd\tau\\ &\lesssim\left(T^{\frac{1}{q'}+4\alpha-3}+T^{\frac{1}{q'}+2\alpha-1}\right)\left(t-s\right)^{1-\alpha}\left\lVert v\right\rVert_{L_T^qH}\end{aligned}$$ and $$\begin{aligned} \left\lVert I_6\right\rVert_H&\lesssim\int_s^t\left(t-\tau\right)^{\alpha-1}\left\lVert v(\tau)\right\rVert_Hd\tau\\ &\lesssim T^{\frac{1}{q'}-2(1-\alpha)}\left(t-s\right)^{1-\alpha}\left\lVert v\right\rVert_{L_T^qH},\end{aligned}$$ it follows that $$\left\lVert G^lv(t)-G^lv(s)\right\rVert_H\lesssim\left(T^{\frac{1}{q'}+4\alpha-3}+T^{\frac{1}{q'}+2\alpha-1}+T^{\frac{1}{q'}-2(1-\alpha)}\right)\left(t-s\right)^{1-\alpha}\left\lVert v\right\rVert_{L_T^qH}.$$ Then $(\ref{8.28 linear Holder regularity proposition1 equation1})$ thus holds. ◻ **Theorem 13**. *For $T>0$, let $x\in H$, $F\in L^\infty\left((0,T);H\right)$ and $u$ be the mild solution of the linear $(\ref{nonlinear Schrodinger equation Hilbert})$ on $[0,T]$, then $u\in C^\alpha\left([\delta,T];H\right)$ for every $0<\delta<T$ with the estimate $$\left[u\right]_{C_{[\delta,T]}^\alpha H}\lesssim T^{1-\alpha}\left(1+\delta^{-1}\right)\left\lVert x\right\rVert_H+T\left\lVert F\right\rVert_{L_T^\infty H}. 
\label{8.28 linear Holder regularity theorem1 equation1}$$ If moreover $x\in D(A)$, $u\in C^\alpha\left([0,T];H\right)$ with the estimate $$\left[u\right]_{C_T^\alpha H}\lesssim(1+T)\left(\left\lVert x\right\rVert_{D(A)}+\left\lVert F\right\rVert_{L_T^\infty H}\right). \label{8.28 linear Holder regularity theorem1 equation2}$$ [\[8.28 linear Holder regularity theorem1\]]{#8.28 linear Holder regularity theorem1 label="8.28 linear Holder regularity theorem1"}* *Proof.* By the representation of the mild solution, $u(t)=S_tx+iGF(t)$, and Proposition $\ref{proposition linear new approach bilinear1}$, Proposition $\ref{8.16proposition linear new approach bilinear2}$ we can obtain $$\begin{aligned} \left\lVert u(t)-u(s)\right\rVert_H&\lesssim\left(1+\left(t\wedge s\right)^{-1}\right)\left\lvert t-s\right\rvert\left\lVert x\right\rVert_H+\left(\left\lvert t-s\right\rvert^\alpha+\left\lvert t^{\alpha+1}-s^{\alpha+1}\right\rvert\right)\left\lVert F\right\rVert_{L_T^\infty H}\\ &\lesssim T^{1-\alpha}\left(1+\delta^{-1}\right)\left\lvert t-s\right\rvert^\alpha\left\lVert x\right\rVert_H+T\left\lvert t-s\right\rvert^\alpha\left\lVert F\right\rVert_{L_T^\infty H}\end{aligned}$$ which implies $(\ref{8.28 linear Holder regularity theorem1 equation1})$. And for $x\in D(A)$, we have $$\begin{aligned} \left\lVert u(t)-u(s)\right\rVert_H&\lesssim\left(\left\lvert t-s\right\rvert^\alpha+\left\lvert t^{\alpha+1}-s^{\alpha+1}\right\rvert\right)\left(\left\lVert x\right\rVert_{D(A)}+\left\lVert F\right\rVert_{L_T^\infty H}\right)\\ &\lesssim(1+T)\left\lvert t-s\right\rvert^\alpha\left(\left\lVert x\right\rVert_{D(A)}+\left\lVert F\right\rVert_{L_T^\infty H}\right)\end{aligned}$$ which implies $(\ref{8.28 linear Holder regularity theorem1 equation2})$. ◻ **Theorem 14**.
*For $T>0$, let $x\in H$, $F\in L^q\left((0,T);H\right)$ where $\frac{1}{2\alpha-1}<q<\infty$ and $u$ be the mild solution of the linear $(\ref{nonlinear Schrodinger equation Hilbert})$ on $[0,T]$, then $u\in C^{1-\alpha}\left([\delta,T];H\right)$ for any $0<\delta<T$ with the estimate $$\left[u\right]_{C_{[\delta,T]}^{1-\alpha}H}\lesssim\left(1+\delta^{-1}\right)T^\alpha\left\lVert x\right\rVert_H+\left(T^{\frac{1}{q'}+4\alpha-3}+T^{\frac{1}{q'}+2\alpha-1}+T^{\frac{1}{q'}-2(1-\alpha)}\right)\left\lVert F\right\rVert_{L_T^qH}. \label{8.29 linear Holder regularity theorem2 equation1}$$ If moreover $x\in D(A)$, $u\in C^{1-\alpha}\left([0,T];H\right)$ with the estimate $$\left[u\right]_{C_T^{1-\alpha}H}\lesssim\left(T^{2\alpha-1}+T^{2\alpha}\right)\left\lVert x\right\rVert_{D(A)}+\left(T^{\frac{1}{q'}+4\alpha-3}+T^{\frac{1}{q'}+2\alpha-1}+T^{\frac{1}{q'}-2(1-\alpha)}\right)\left\lVert F\right\rVert_{L_T^qH}. \label{8.29 linear Holder regularity theorem2 equation2}$$ [\[8.29 linear Holder regularity theorem2\]]{#8.29 linear Holder regularity theorem2 label="8.29 linear Holder regularity theorem2"}* *Proof.* To simplify, let $C=T^{\frac{1}{q'}+4\alpha-3}+T^{\frac{1}{q'}+2\alpha-1}+T^{\frac{1}{q'}-2(1-\alpha)}$. With the help of Proposition $\ref{proposition linear new approach bilinear1}$ and Proposition $\ref{8.28 linear Holder regularity proposition1}$ it follows that $$\begin{aligned} \left\lVert u(t)-u(s)\right\rVert_H&\lesssim\left(1+\left(t\wedge s\right)^{-1}\right)\left\lvert t-s\right\rvert\left\lVert x\right\rVert_H+C\left\lvert t-s\right\rvert^{1-\alpha}\left\lVert F\right\rVert_{L_T^qH}\\ &\leq\left(1+\delta^{-1}\right)T^\alpha\left\lvert t-s\right\rvert^{1-\alpha}\left\lVert x\right\rVert_H+C\left\lvert t-s\right\rvert^{1-\alpha}\left\lVert F\right\rVert_{L_T^qH}\end{aligned}$$ which implies $(\ref{8.29 linear Holder regularity theorem2 equation1})$.
And for $x\in D(A)$, we have $$\begin{aligned} \left\lVert u(t)-u(s)\right\rVert_H&\lesssim\left(\left\lvert t-s\right\rvert^\alpha+\left\lvert t^{\alpha+1}-s^{\alpha+1}\right\rvert\right)\left\lVert x\right\rVert_{D(A)}+C\left\lvert t-s\right\rvert^{1-\alpha}\left\lVert F\right\rVert_{L_T^qH}\\ &\lesssim\left(T^{2\alpha-1}+T^{2\alpha}\right)\left\lvert t-s\right\rvert^{1-\alpha}\left\lVert x\right\rVert_{D(A)}+C\left\lvert t-s\right\rvert^{1-\alpha}\left\lVert F\right\rVert_{L_T^qH}\end{aligned}$$ which implies $(\ref{8.29 linear Holder regularity theorem2 equation2})$. ◻ ## Asymptotic behaviors It's easy to show that if $x\in H$, $F\in L^\infty\left((0,\infty);H\right)$, then there is a mild solution $u\in C\left([0,\infty);H\right)$ of the linear $(\ref{nonlinear Schrodinger equation Hilbert})$ on $[0,\infty)$ satisfying $u(t)=S_tx+iGF(t)$. **Theorem 15**. *If $A$ is injective, let $x\in D(A^{-1})$, $F\in L^\infty\left((0,\infty);H\right)$ and $u$ be the mild solution of the linear $(\ref{nonlinear Schrodinger equation Hilbert})$ on $[0,\infty)$. If there exists $F_0\in D(A^{-1})$ such that $$\lim\limits_{t\to\infty}\int_0^t\left(t-\tau\right)^{\alpha-1}\left\lVert F(\tau)-F_0\right\rVert_Hd\tau=0,$$ then $u$ satisfies $$\lim\limits_{t\to\infty}u(t)=-A^{-1}F_0.$$ [\[8.29 Asymptotic behaviours theorem1\]]{#8.29 Asymptotic behaviours theorem1 label="8.29 Asymptotic behaviours theorem1"}* *Proof.* Thanks to Lemma $\ref{lemma linear new approach1}$, Lemma $\ref{lemma linear new approach2}$ and Proposition $\ref{lemma linear new2}$, we have $$\left\lVert S_tx\right\rVert_H\lesssim t^{-\alpha}\left\lVert x\right\rVert_{D(A^{-1})}$$ which implies that $\lim\limits_{t\to\infty}S_tx=0$. 
On the other hand, we can divide $GF(t)$ into two parts such that $$GF(t)=\int_0^tP_{t-\tau}\left(F(\tau)-F_0\right)d\tau+\int_0^tP_{t-\tau}F_0d\tau=:v_1(t)+v_2(t).$$ A straightforward computation leads to $$\begin{aligned} v_2(t)&=\int_0^tP_{t-\tau}F_0d\tau=U\left(\int_0^tb(t-\tau,\xi)d\tau U^{-1}F_0\right)\\ &=U\left(\int_0^tia(\xi)^{-1}\frac{d}{d\tau}a(t-\tau,\xi)d\tau U^{-1}F_0\right)\\ &=iA^{-1}F_0-iA^{-1}S_tF_0.\end{aligned}$$ By Lebesgue's dominated convergence theorem, there holds $$\begin{aligned} \left\lVert iA^{-1}S_tF_0\right\rVert_H&\leq\left\lVert A^{-1}S_t^lF_0\right\rVert_H+\left\lVert A^{-1}S_t^hF_0\right\rVert_H\\ &\lesssim\left\lVert A^{-1}S_t^lF_0\right\rVert_H+\left\lVert t^{-\alpha}A^{-1}\mathbf{A}_t^{-1}F_0\right\rVert_H+\left\lVert A^{-1}R_t^SF_0\right\rVert_H\\ &=\left\lVert a(\xi)^{-1}\chi_ta(t,\xi)U^{-1}F_0\right\rVert_{L^2(\Omega)}+\left\lVert t^{-\alpha}a(\xi)^{-2}\chi_t^cU^{-1}F_0\right\rVert_{L^2(\Omega)}\\ &+\left\lVert t^{-2\alpha}a(\xi)^{-1}O\left(\left\lvert a(\xi)\right\rvert^{-2}\right)\chi_t^cU^{-1}F_0\right\rVert_{L^2(\Omega)}\\ &\to0,\quad t\to\infty.\end{aligned}$$ This implies that $\lim\limits_{t\to\infty}v_2(t)=iA^{-1}F_0$. By assumption and $(\ref{lemma linear new3 proof equation3})$ we obtain $$\left\lVert v_1(t)\right\rVert_H\lesssim\int_0^t\left(t-\tau\right)^{\alpha-1}\left\lVert F(\tau)-F_0\right\rVert_Hd\tau\to0,\quad t\to\infty.$$ It follows that $\lim\limits_{t\to\infty}GF(t)=iA^{-1}F_0$ and hence the result holds. ◻ **Theorem 16**. *Let $F\in L^\infty\left((0,\infty);H\right)$ and $x\in H$. If $u_\varepsilon(t)$ is the mild solution of $$iD_t^\alpha u_\varepsilon(t)+\varepsilon Au_\varepsilon(t)+F(t)=0,\quad u_\varepsilon(0)=x, \label{8.29 Asymptotic behaviours theorem2 equation1}$$ on $[0,\infty)$, then $$\lim\limits_{\varepsilon\to0}u_\varepsilon(t)=x+iI_t^\alpha F(t) \label{8.29 Asymptotic behaviours theorem2 equation2}$$ on $[0,\infty)$ pointwise.
[\[8.29 Asymptotic behaviours theorem2\]]{#8.29 Asymptotic behaviours theorem2 label="8.29 Asymptotic behaviours theorem2"}* *Proof.* Clearly $u_\varepsilon(t)$ exists and satisfies $$u_\varepsilon(t)=S_{\varepsilon^{\frac{1}{\alpha}}t}x+i\varepsilon^{\frac{1}{\alpha}-1}\int_0^tP_{\varepsilon^{\frac{1}{\alpha}}(t-\tau)}F(\tau)d\tau.$$ Thanks to Lebesgue's dominated convergence theorem, it follows that $$\begin{aligned} \left\lVert S_{\varepsilon^{\frac{1}{\alpha}}t}x-x\right\rVert_H&\leq\left\lVert S_{\varepsilon^{\frac{1}{\alpha}}t}^lx-x\right\rVert_H+\left\lVert S_{\varepsilon^{\frac{1}{\alpha}}t}^hx\right\rVert_H\\ &\lesssim\left\lVert\chi_{\varepsilon^{\frac{1}{\alpha}}t}a\left(\varepsilon^{\frac{1}{\alpha}}t,\xi\right)U^{-1}x-U^{-1}x\right\rVert_{L^2(\Omega)}+\left\lVert\varepsilon^{-1}t^{-\alpha}a(\xi)^{-1}\chi_{\varepsilon^{\frac{1}{\alpha}}t}^cU^{-1}x\right\rVert_{L^2(\Omega)}\\ &+\left\lVert\varepsilon^{-2}t^{-2\alpha}O\left(\left\lvert a(\xi)\right\rvert^{-2}\right)\chi_{\varepsilon^{\frac{1}{\alpha}}t}^cU^{-1}x\right\rVert_{L^2(\Omega)}\\ &\to0,\quad \varepsilon\to0\end{aligned}$$ and hence $\lim\limits_{\varepsilon\to0}S_{\varepsilon^{\frac{1}{\alpha}}t}x=x$.
On the other hand, it also follows from Lebesgue's dominated convergence theorem that $$\begin{aligned} &\left\lVert\varepsilon^{\frac{1}{\alpha}-1}\int_0^tP_{\varepsilon^{\frac{1}{\alpha}}(t-\tau)}F(\tau)d\tau-\frac{1}{\Gamma(\alpha)}\int_0^t\left(t-\tau\right)^{\alpha-1}F(\tau)d\tau\right\rVert_H\\ &\leq\left\lVert\varepsilon^{\frac{1}{\alpha}-1}\int_0^tP_{\varepsilon^{\frac{1}{\alpha}}(t-\tau)}^lF(\tau)d\tau-\frac{1}{\Gamma(\alpha)}\int_0^t\left(t-\tau\right)^{\alpha-1}F(\tau)d\tau\right\rVert_H+\left\lVert\varepsilon^{\frac{1}{\alpha}-1}\int_0^tP_{\varepsilon^{\frac{1}{\alpha}}(t-\tau)}^hF(\tau)d\tau\right\rVert_H\\ &\lesssim\left\lVert\varepsilon^{\frac{1}{\alpha}-1}\int_0^t\chi_{\varepsilon^{\frac{1}{\alpha}}(t-\tau)}b\left(\varepsilon^{\frac{1}{\alpha}}(t-\tau),\xi\right)U^{-1}F(\tau)d\tau-\frac{1}{\Gamma(\alpha)}\int_0^t\left(t-\tau\right)^{\alpha-1}U^{-1}F(\tau)d\tau\right\rVert_{L^2(\Omega)}\\ &+\left\Vert\varepsilon^{-2}\int_0^t\left(t-\tau\right)^{-\alpha-1}a(\xi)^{-2}\chi_{\varepsilon^{\frac{1}{\alpha}}(t-\tau)}^cU^{-1}F(\tau)d\tau\right\rVert_{L^2(\Omega)}\\ &+\left\Vert\varepsilon^{-3}\int_0^t\left(t-\tau\right)^{-2\alpha-1}\left\lvert a(\xi)\right\rvert^{-3}\chi_{\varepsilon^{\frac{1}{\alpha}}(t-\tau)}^cU^{-1}F(\tau)d\tau\right\rVert_{L^2(\Omega)}\\ &\to0,\quad\varepsilon\to0\end{aligned}$$ and hence $(\ref{8.29 Asymptotic behaviours theorem2 equation2})$ holds. ◻ **Theorem 17**. *If $A$ is injective, let $0<\alpha<1$, $F\in L^\infty\left((0,\infty);D(A^{-1})\right)$ be continuous and bounded on $(0,\infty)$ and $x\in H$. If $u_\varepsilon$ is the mild solution of $$i\varepsilon D_t^\alpha u_\varepsilon(t)+Au_\varepsilon(t)+F(t)=0,\quad u_\varepsilon(0)=x, \label{8.30 Asymptotic behaviours theorem3 equation1}$$ on $[0,\infty)$, then $$\lim\limits_{\varepsilon\to0}u_\varepsilon(t)=-A^{-1}F(t) \label{8.30 Asymptotic behaviours theorem3 equation2}$$ uniformly on $[\delta,T]$ for any $0<\delta<T$.
[\[8.30 Asymptotic behaviours theorem3\]]{#8.30 Asymptotic behaviours theorem3 label="8.30 Asymptotic behaviours theorem3"}* *Proof.* Clearly $u_\varepsilon(t)$ exists and satisfies $$u_\varepsilon(t)=S_{\varepsilon^{-\frac{1}{\alpha}}t}x+i\varepsilon^{-\frac{1}{\alpha}}\int_0^tP_{\varepsilon^{-\frac{1}{\alpha}}(t-\tau)}F(\tau)d\tau.$$ By Lebesgue's dominated convergence theorem we can obtain $$\begin{aligned} \left\lVert S_{\varepsilon^{-\frac{1}{\alpha}}t}x\right\rVert_H&\leq\left\lVert S_{\varepsilon^{-\frac{1}{\alpha}}t}^lx\right\rVert_H+\left\lVert S_{\varepsilon^{-\frac{1}{\alpha}}t}^hx\right\rVert_H\\ &\lesssim\left\lVert\chi_{\varepsilon^{-\frac{1}{\alpha}}t}a\left(\varepsilon^{-\frac{1}{\alpha}}t,\xi\right)U^{-1}x\right\rVert_{L^2(\Omega)}+\left\lVert\varepsilon t^{-\alpha}a(\xi)^{-1}\chi_{\varepsilon^{-\frac{1}{\alpha}}t}^cU^{-1}x\right\rVert_{L^2(\Omega)}\\ &+\left\lVert\varepsilon^2t^{-2\alpha}\left\lvert a(\xi)\right\rvert^{-2}\chi_{\varepsilon^{-\frac{1}{\alpha}}t}^cU^{-1}x\right\rVert_{L^2(\Omega)}\\ &\to0,\quad\varepsilon\to0 \end{aligned} \label{8.30 Asymptotic behaviours theorem3 proof equation1}$$ and the limit is uniform on $[\delta,T]$.
We divide the second term into two parts such that $$\varepsilon^{-\frac{1}{\alpha}}\int_0^tP_{\varepsilon^{-\frac{1}{\alpha}}(t-\tau)}F(\tau)d\tau=v_{1\varepsilon}(t)+v_{2\varepsilon}(t),$$ where $$\begin{aligned} &v_{1\varepsilon}(t)=\varepsilon^{-\frac{1}{\alpha}}\int_0^tP_{\varepsilon^{-\frac{1}{\alpha}}(t-\tau)}\left(F(\tau)-F(t)\right)d\tau,\\ &v_{2\varepsilon}(t)=\varepsilon^{-\frac{1}{\alpha}}\int_0^tP_{\varepsilon^{-\frac{1}{\alpha}}(t-\tau)}F(t)d\tau.\end{aligned}$$ A straightforward computation leads to $$v_{2\varepsilon}(t)=iA^{-1}F(t)-iA^{-1}S_{\varepsilon^{-\frac{1}{\alpha}}t}F(t).$$ In a similar way to $(\ref{8.30 Asymptotic behaviours theorem3 proof equation1})$, we can prove $$\left\lVert A^{-1}S_{\varepsilon^{-\frac{1}{\alpha}}t}F(t)\right\rVert_H\to0,\quad\varepsilon\to0$$ uniformly on $[\delta,T]$ by the boundedness of $F(t)$ and hence $\lim\limits_{\varepsilon\to0}v_{2\varepsilon}(t)=iA^{-1}F(t)$ uniformly on $[\delta,T]$. On the other hand, we can choose $r$ large enough and $\varepsilon$ small enough such that $$\begin{aligned} &v_{1\varepsilon}(t)\\ &=\varepsilon^{-\frac{1}{\alpha}}\int_0^tP_{\varepsilon^{-\frac{1}{\alpha}}\tau}\left(F(t-\tau)-F(t)\right)d\tau\\ &=\varepsilon^{-\frac{1}{\alpha}}\int_0^{r\varepsilon^{-\frac{1}{\alpha}}}P_{\varepsilon^{-\frac{1}{\alpha}}\tau}\left(F(t-\tau)-F(t)\right)d\tau+\varepsilon^{-\frac{1}{\alpha}}\int_{r\varepsilon^{-\frac{1}{\alpha}}}^tP_{\varepsilon^{-\frac{1}{\alpha}}\tau}\left(F(t-\tau)-F(t)\right)d\tau\\ &=\int_0^rP_\tau\left(F\left(t-\varepsilon^{\frac{1}{\alpha}}\right)-F(t)\right)d\tau+\varepsilon^{-\frac{1}{\alpha}}\int_{r\varepsilon^{-\frac{1}{\alpha}}}^tP_{\varepsilon^{-\frac{1}{\alpha}}\tau}\left(F(t-\tau)-F(t)\right)d\tau\\ &=:v_{1\varepsilon}^{(1)}(t)+v_{1\varepsilon}^{(2)}(t).\end{aligned}$$ By the continuity of $F(t)$, for any given $\rho>0$, we can choose $\varepsilon$ small enough such that $$\left\lVert
F\left(t-\varepsilon^{\frac{1}{\alpha}}\right)-F(t)\right\rVert\lesssim\frac{\rho}{r^\alpha},$$ then from $(\ref{lemma linear new3 proof equation3})$ it follows that $$\left\lVert v_{1\varepsilon}^{(1)}(t)\right\rVert_H\lesssim\int_0^r\tau^{\alpha-1}\left\lVert F\left(t-\varepsilon^{\frac{1}{\alpha}}\right)-F(t)\right\rVert d\tau\lesssim\rho. \label{8.30 Asymptotic behaviours theorem3 proof equation1}$$ For $v_{1\varepsilon}^{(2)}(t)$, we have, by a slightly careful calculation, that $$\begin{aligned} &\left\lVert v_{1\varepsilon}^{(2)}(t)\right\rVert_H\\ &\leq\left\lVert\varepsilon^{-\frac{1}{\alpha}}\int_{r\varepsilon^{-\frac{1}{\alpha}}}^tP_{\varepsilon^{-\frac{1}{\alpha}}\tau}^l\left(F(t-\tau)-F(t)\right)d\tau\right\rVert_H+\left\lVert\varepsilon^{-\frac{1}{\alpha}}\int_{r\varepsilon^{-\frac{1}{\alpha}}}^tP_{\varepsilon^{-\frac{1}{\alpha}}\tau}^h\left(F(t-\tau)-F(t)\right)d\tau\right\rVert_H\\ &\lesssim\left\lVert\varepsilon^{-\frac{1}{\alpha}}\int_{r\varepsilon^{-\frac{1}{\alpha}}}^tP_{\varepsilon^{-\frac{1}{\alpha}}\tau}^l\left(F(t-\tau)-F(t)\right)d\tau\right\rVert_H+\left\lVert\varepsilon\int_{r\varepsilon^{-\frac{1}{\alpha}}}^t\tau^{-\alpha-1}\mathbf{A}_{\varepsilon^{-\frac{1}{\alpha}}\tau}^{-2}\left(F(t-\tau)-F(t)\right)d\tau\right\rVert_H\\ &+\left\lVert\varepsilon^{-\frac{1}{\alpha}}\int_{r\varepsilon^{-\frac{1}{\alpha}}}^tR_{\varepsilon^{-\frac{1}{\alpha}}\tau}^P\left(F(t-\tau)-F(t)\right)d\tau\right\rVert_H\\ &\lesssim\left\lVert\varepsilon^{-\frac{1}{\alpha}}\int_{r\varepsilon^{-\frac{1}{\alpha}}}^t\chi_{\varepsilon^{-\frac{1}{\alpha}}\tau}b\left(\varepsilon^{-\frac{1}{\alpha}}\tau,\xi\right)U^{-1}\left(F(t-\tau)-F(t)\right)d\tau\right\rVert_{L^2(\Omega)}\\ &+\left\lVert\varepsilon\int_{r\varepsilon^{-\frac{1}{\alpha}}}^t\tau^{-\alpha-1}a(\xi)^{-2}\chi_{\varepsilon^{-\frac{1}{\alpha}}\tau}^cU^{-1}\left(F(t-\tau)-F(t)\right)d\tau\right\rVert_{L^2(\Omega)}\\ &+\left\lVert\varepsilon^2\int_{r\varepsilon^{-\frac{1}{\alpha}}}^t\tau^{-2\alpha-1}\left\lvert 
a(\xi)\right\rvert^{-3}\chi_{\varepsilon^{-\frac{1}{\alpha}}\tau}^cU^{-1}\left(F(t-\tau)-F(t)\right)d\tau\right\rVert_{L^2(\Omega)}\\ &\lesssim\int_{r\varepsilon^{-\frac{1}{\alpha}}}^t\tau^{-1}\left\lVert\left\lvert a(\xi)\right\rvert^{-1}U^{-1}\left(F(t-\tau)-F(t)\right)\right\rVert_{L^2(\Omega)}d\tau\lesssim r^{-1}\varepsilon^{\frac{1}{\alpha}}\left\lVert F\right\rVert_{L_t^\infty D(A^{-1})}.\end{aligned}$$ We can choose $r$ large enough and $\varepsilon$ small enough such that $$\left\lVert v_{1\varepsilon}^{(2)}(t)\right\rVert_H\lesssim r^{-1}\varepsilon^{\frac{1}{\alpha}}\left\lVert F\right\rVert_{L_t^\infty D(A^{-1})}\lesssim\rho. \label{8.30 Asymptotic behaviours theorem3 proof equation2}$$ Combining $(\ref{8.30 Asymptotic behaviours theorem3 proof equation1})$ and $(\ref{8.30 Asymptotic behaviours theorem3 proof equation2})$ we obtain that for any given $\rho$ we can choose $r$ large enough and $\varepsilon$ small enough such that $\left\lVert v_{1\varepsilon}(t)\right\rVert_H\lesssim\rho$. Thus $v_{1\varepsilon}(t)\to0$ as $\varepsilon\to0$ uniformly on $[\delta,T]$ and then $(\ref{8.30 Asymptotic behaviours theorem3 equation2})$ holds. ◻
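As a closing numerical sanity check of the fractional-calculus definitions from the appendix above: for a monomial, the Caputo derivative has the closed form $D_t^\alpha t^p=\frac{\Gamma(p+1)}{\Gamma(p+1-\alpha)}t^{p-\alpha}$ (a classical identity, not stated in the paper). The sketch below (the helper name is illustrative) verifies this for $\alpha=\frac{1}{2}$ and $u(t)=t^2$, removing the weak singularity of the kernel by the substitution $s=t-v^2$.

```python
from math import gamma, sqrt

def caputo_half(du, t, n=4000):
    # Caputo derivative of order 1/2:
    #   (1/Gamma(1/2)) * int_0^t (t-s)^(-1/2) u'(s) ds.
    # The substitution s = t - v^2 (ds = -2v dv) removes the singularity,
    # leaving the smooth integral (2/Gamma(1/2)) * int_0^{sqrt(t)} u'(t - v^2) dv,
    # which we approximate by the midpoint rule.
    h = sqrt(t) / n
    total = sum(du(t - ((i + 0.5) * h) ** 2) for i in range(n))
    return 2 * h * total / gamma(0.5)

# Closed form for u(t) = t^2: D^{1/2} t^2 = Gamma(3)/Gamma(5/2) * t^{3/2}.
t = 1.3
exact = gamma(3) / gamma(2.5) * t ** 1.5
assert abs(caputo_half(lambda s: 2 * s, t) - exact) < 1e-6
```

The same substitution trick applies to any $\alpha$ of the form $1-\frac{1}{m}$; for general $\alpha$ a graded mesh or product-integration rule is the usual choice.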
arXiv:2309.08278, *On the fractional abstract Schrodinger type evolution equations on the Hilbert space and its applications to the fractional dispersive equations*, by Mingxuan He and Na Deng (math.AP).
--- abstract: | An important result in the theory of harmonic maps is due to Benoist--Hulin: given a quasi-isometry $f:X\to Y$ between pinched Hadamard manifolds, there exists a unique harmonic map at a finite distance from $f$. Here we show existence of harmonic maps under a weaker condition on $f$, which we call non-collapsing -- we require that the following two conditions hold uniformly in $x\in X$: (1) the average distance from $f(x)$ to $f(y)$ for $y$ on the sphere of radius $R$ centered at $x$ grows linearly with $R$; (2) the pre-images under $f$ of small cones with apex $f(x)$ have low harmonic measure on spheres centered at $x$. Using these ideas, we also continue the previous work of the author on existence of harmonic maps that are at a finite distance from projections to certain convex sets. We show this existence in a pinched negative curvature setting, when the convex set is large enough. For hyperbolic spaces, this includes the convex hulls of open sets in the sphere at infinity with sufficiently regular boundary. address: | Mathematical Institute\ University of Oxford\ United Kingdom author: - Ognjen Tošić bibliography: - main.bib title: "Harmonic projections in negative curvature II: large convex sets" --- # Introduction A classical conjecture in the theory of harmonic maps is the Schoen conjecture, stating that for any quasi-isometry $f:\mathbb{H}^2\to\mathbb{H}^2$ of the hyperbolic plane $\mathbb{H}^2$, there exists a harmonic self-map of $\mathbb{H}^2$ at a bounded distance from $f$. This was shown by Marković [@markovic-schoen], and there have since been numerous generalizations to spaces other than $\mathbb{H}^2$.
Most notable results were obtained by Marković [@markovic-h3] (for 3-dimensional hyperbolic space $\mathbb{H}^3$), Lemm--Marković [@lemm-markovic] (for higher-dimensional hyperbolic spaces $\mathbb{H}^n$ for $n\geq 3$), Benoist--Hulin [@bh-rank-one] (for rank one symmetric spaces), and Benoist--Hulin [@Benoist2017HarmonicQM] (for pinched Hadamard manifolds). Here we generalize the results of [@Benoist2017HarmonicQM] on pinched Hadamard manifolds, meaning simply connected complete Riemannian manifolds with sectional curvatures bounded between two negative constants, by weakening the quasi-isometry requirement on the map $f$. For a pinched Hadamard manifold $X$, we use $\mathrm{dist}(\cdot, \cdot)$ to refer to the path metric on $X$ induced by the Riemannian metric. We will denote the visual boundary at infinity of $X$ with $\partial_\infty X$. Let $B_R(x)$ be the ball of radius $R$ centered at $x$, and let $\sigma_{x,R}$ be the harmonic measure on $\partial B_R(x)$ as seen from $x$. **Definition 1**. A Lipschitz map $f:X\to Y$ between pinched Hadamard manifolds is non-collapsing if the following two conditions hold 1. there exist constants $c, R_0>0$, such that for any $x\in X, R>R_0$, we have $$\begin{aligned} \int_{\partial B_R(x)} \mathrm{dist}(f(x), f(y))d\sigma_{x,R}(y)\geq cR, \end{aligned}$$ and 2. for any $\varepsilon>0$, there exist $\theta, R_0>0$ such that for any $x\in X, R>R_0$ and $\xi\in\partial_\infty Y$, we have $$\begin{aligned} \sigma_{x, R}\left(\{y\in \partial B_R(x):\measuredangle_{f(x)}(\xi, f(y))<\theta\}\right)<\varepsilon, \end{aligned}$$ where $\measuredangle_a(b, c)$ denotes the angle at $a$ between the geodesics $[a, b]$, joining $a$ and $b$, and $[a, c]$, joining $a$ and $c$. **Theorem 1**. 
*For any non-collapsing Lipschitz map $f:X\to Y$ between pinched Hadamard manifolds, there exists a harmonic map $h:X\to Y$ such that $\sup\mathrm{dist}(h, f)<\infty$.* The main novelty of Theorem [Theorem 1](#thm:main-qia){reference-type="ref" reference="thm:main-qia"} relative to [@Benoist2017HarmonicQM Theorem 1.1] is our generalization of the "interior estimate" [@Benoist2017HarmonicQM §4]. As another application of our generalized interior estimate, we study harmonic maps that are at a finite distance from a nearest-point projection to a convex set in a pinched Hadamard manifold. The study of such maps was initiated by the author in [@tosic], where the main result states that, given a pinched Hadamard manifold $X$, and a set $S$ in the boundary at infinity $\partial_\infty X$ of $X$, such that $S$ has sufficiently low dimension, there exists a harmonic self-map of $X$ that is at a finite distance from the nearest-point projection to the convex hull of $S$. Here we generalize this to convex sets that are sufficiently large. **Definition 2**. A closed convex subset $C$ of a pinched Hadamard manifold $X$ is called admissible if, for each $D>0$, there exists an angle $\theta=\theta(D)$ and a distance $R_0=R_0(D)$ with the following property. For any $x\in N_D(C), R>R_0$, there exists a point $\xi\in\partial_\infty X$ such that $$\partial B_R(x)\cap \mathrm{Cone}(x\xi, \theta)\subseteq\partial B_R(x)\cap C,$$ where $\mathrm{Cone}(x\xi, \theta)=\{y\in X: \measuredangle_{x}(y, \xi)<\theta\}$. **Theorem 2**. *Let $C$ be an admissible closed convex subset of a pinched Hadamard manifold $X$. 
There exists a harmonic map $h:X\to X$ that is a finite distance away from the nearest-point retraction $r:X\to C$.* Note that nearest-point projections are in general not non-collapsing, so Theorem [Theorem 2](#thm:main-general){reference-type="ref" reference="thm:main-general"} cannot be derived directly from Theorem [Theorem 1](#thm:main-qia){reference-type="ref" reference="thm:main-qia"}. As mentioned above, the key common ingredient in both Theorem [Theorem 2](#thm:main-general){reference-type="ref" reference="thm:main-general"} and Theorem [Theorem 1](#thm:main-qia){reference-type="ref" reference="thm:main-qia"} is the generalized interior estimate. A rich class of admissible convex sets in hyperbolic spaces $\mathbb{H}^n$ is provided by convex hulls of open sets in $\partial_\infty\mathbb{H}^n\cong \mathbb{S}^{n-1}$ with sufficiently regular boundary. **Theorem 3**. *Let $U\subseteq\partial_\infty \mathbb{H}^n=\mathbb{S}^{n-1}$ be an open set with quasiconformal boundary. Then the convex hull of $U$ is admissible.* Here by quasiconformal boundary we mean that near any point $x\in\partial U$, there exists a local quasiconformal map that sends $U$ to $\mathbb{R}_{+}\times\mathbb{R}^{n-2}$ and $x$ to the origin. ## More precise results We will in fact prove a slightly stronger version of Theorem [Theorem 1](#thm:main-qia){reference-type="ref" reference="thm:main-qia"}. **Definition 3**. Let $\omega:\mathbb{R}_+\to\mathbb{R}_+$ be a function such that $\omega(x)\to\infty$ and $\frac{\omega(x)}{x}\to 0$ as $x\to\infty$. Then a Lipschitz map $f:X\to Y$ is called $\omega$-weakly non-collapsing (weakly non-collapsing map with size function $\omega$) if the following two conditions hold 1. there exist constants $c, R_0>0$, such that for any $x\in X, R>R_0$, we have $$\begin{aligned} \int_{\partial B_R(x)} \mathrm{dist}(f(x), f(y))d\sigma_{x,R}(y)\geq cR, \end{aligned}$$ and 2.
for any $\varepsilon>0$, there exist $\theta, R_0>0$ such that for any $x\in X, R>R_0$ and $\xi\in\partial_\infty Y$, we have $$\begin{aligned} \sigma_{x, R}\left(\{y\in\partial B_R(x):\measuredangle_{f(x)}(\xi, f(y))<\theta\text{ and }\mathrm{dist}(f(x), f(y))\geq \omega(R)\}\right)<\varepsilon. \end{aligned}$$ We call an $\omega:\mathbb{R}_+\to\mathbb{R}_+$ with $\omega(x)\to \infty$ and $\frac{\omega(x)}{x}\to 0$ as $x\to\infty$ a sublinear size function. A Lipschitz map is weakly non-collapsing if it is $\omega$-weakly non-collapsing for some sublinear size function $\omega$. **Theorem 4**. *For any weakly non-collapsing Lipschitz map $f:X\to Y$, there exists a harmonic map $h:X\to Y$ such that $\sup \mathrm{dist}(h, f)<\infty$.* **Remark 4**. 1. Note that a non-collapsing map as in Definition [Definition 1](#dfn:qia){reference-type="ref" reference="dfn:qia"} is a weakly non-collapsing map with size function $0$, so Theorem [Theorem 1](#thm:main-qia){reference-type="ref" reference="thm:main-qia"} follows immediately from Theorem [Theorem 4](#thm:main-qia-omega){reference-type="ref" reference="thm:main-qia-omega"}. 2. We will show below that, if $f$ is a weakly non-collapsing map, and $\tilde{f}$ is a Lipschitz map such that $\sup\mathrm{dist}(f, \tilde{f})<\infty$, then $\tilde{f}$ is also weakly non-collapsing (albeit with a different size function). In particular, the harmonic map obtained either from Theorem [Theorem 1](#thm:main-qia){reference-type="ref" reference="thm:main-qia"} or Theorem [Theorem 4](#thm:main-qia-omega){reference-type="ref" reference="thm:main-qia-omega"} is weakly non-collapsing, but not necessarily with size function $0$. 3. If $f$ is an $\omega$-weakly non-collapsing map, and $\tilde{\omega}\geq\omega$ is a sublinear size function, then $f$ is also $\tilde{\omega}$-weakly non-collapsing.
Thus the condition $\omega(x)\to\infty$ as $x\to\infty$ in Definition [Definition 3](#dfn:qia-omega){reference-type="ref" reference="dfn:qia-omega"} is superfluous, and is there merely for convenience. Both Theorem [Theorem 2](#thm:main-general){reference-type="ref" reference="thm:main-general"} and [Theorem 4](#thm:main-qia-omega){reference-type="ref" reference="thm:main-qia-omega"} follow from our generalized interior estimate, stated below. **Definition 5**. Let $\mathcal{F}$ be a family of smooth maps between pointed pinched Hadamard manifolds. Then $\mathcal{F}$ is uniformly non-collapsing if it is uniformly Lipschitz, if the domain and range of any function in $\mathcal{F}$ have uniformly bounded pinching constants, and if the following two conditions hold 1. There exist constants $c, R_0>0$, such that for any $f:(X,x)\to (Y,y)$ in $\mathcal{F}$ and any $R>R_0$, we have $$\begin{aligned} \int_{\partial B_R(x)} \mathrm{dist}(f(x), f(y))d\sigma_{x,R}(y)\geq cR, \end{aligned}$$ and 2. There exists a sublinear size function $\omega:\mathbb{R}_+\to\mathbb{R}_+$ such that for any $\varepsilon>0$, there exist $\theta>0, R_0>0$ such that, for any $f:(X,x)\to(Y,y)$ in $\mathcal{F}$ and $R>R_0$, and any $\xi\in\partial_\infty Y$, we have $$\begin{aligned} \sigma_{x, R}\left(\{y\in\partial B_R(x):\measuredangle_{f(x)}(\xi, f(y))<\theta\text{ and }\mathrm{dist}(f(x), f(y))\geq \omega(R)\}\right)<\varepsilon. \end{aligned}$$ **Theorem 5** (Generalized interior estimate). *Let $\mathcal{F}=\{f_n:(X_n, x_n)\to (Y_n, y_n):n=1,2,...\}$ be a uniformly non-collapsing family. Suppose $R_n$ is a sequence of positive real numbers with $R_n\to\infty$, and let $h_n:B_{R_n}(x_n)\to Y_n$ be a sequence of harmonic maps, such that the maximum of $\mathrm{dist}(h_n, f_n)$ is achieved at $x_n\in X_n$. Then $\sup_n \sup\mathrm{dist}(f_n, h_n)<\infty$.* ## Organization and a brief outline Here we briefly describe the contents of each section in the paper. 
In §[2](#sec:deforming){reference-type="ref" reference="sec:deforming"}, we show that any weakly non-collapsing Lipschitz map can be deformed to a smooth weakly non-collapsing map with bounds on the first two derivatives. This is achieved by using the same argument as in [@tosic §3], that is in turn a slight generalization of the argument of Benoist--Hulin [@Benoist2017HarmonicQM §2]. In particular, here we merely verify that the property of being weakly non-collapsing is preserved under finite distance deformations (although the size function is not preserved). This is an important step, as the proofs of both Theorem [Theorem 4](#thm:main-qia-omega){reference-type="ref" reference="thm:main-qia-omega"} and Theorem [Theorem 2](#thm:main-general){reference-type="ref" reference="thm:main-general"} depend on computations of the Laplacian of the distance function, using the classical computation of Schoen--Yau [@SCHOEN1979361]. For this we need the underlying maps to be at least $C^2$, and moreover we need control on the tension field of the map that we are trying to deform to a harmonic map. In §[3](#sec:generalized-interior-estimate){reference-type="ref" reference="sec:generalized-interior-estimate"} we prove Theorem [Theorem 5](#thm:generalized-interior-estimate){reference-type="ref" reference="thm:generalized-interior-estimate"}. The main technical result in this section is Lemma [Lemma 9](#lm:fundamental-inequality){reference-type="ref" reference="lm:fundamental-inequality"}, that easily implies Theorem [Theorem 5](#thm:generalized-interior-estimate){reference-type="ref" reference="thm:generalized-interior-estimate"}, and that we believe is of independent interest. Lemma [Lemma 9](#lm:fundamental-inequality){reference-type="ref" reference="lm:fundamental-inequality"} is a more precise quantitative version of the "interior estimate" of [@Benoist2017HarmonicQM §4]. 
The proof of Theorem [Theorem 5](#thm:generalized-interior-estimate){reference-type="ref" reference="thm:generalized-interior-estimate"} boils down to the observation that since $\mathrm{dist}(f_n(x_n), h_n(\cdot))$ is a subharmonic function, we have $$\begin{aligned} \int_{\partial B_{R_n}(x_n)} \left(\mathrm{dist}(f_n(x_n), h_n(y)) - \mathrm{dist}(f_n(x_n), h_n(x_n))\right) d\sigma_{x_n, R_n}(y)\geq 0,\end{aligned}$$ followed by an estimate of the integrand on the left-hand side in the regime where $\mathrm{dist}(f_n(x_n), h_n(x_n))\to\infty$ as $n\to\infty$, which leads to a contradiction along the lines of [@Benoist2017HarmonicQM §4]. This section is the heart of the paper, and a more detailed outline can be found at the start of §[3](#sec:generalized-interior-estimate){reference-type="ref" reference="sec:generalized-interior-estimate"}.
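We record, for the reader's convenience, a sketch of the standard computation behind this subharmonicity (compare the Schoen--Yau computation [@SCHOEN1979361] mentioned above); the details are classical and not spelled out here. For a smooth map $h:X\to Y$ and a fixed point $p\in Y$, writing $d_p=\mathrm{dist}(p,\cdot)$, the composition formula for the Laplacian gives $$\begin{aligned} \Delta\left(d_p\circ h\right)=\sum_{i}\mathrm{Hess}\, d_p\left(dh(e_i), dh(e_i)\right)+\left\langle \nabla d_p, \tau(h)\right\rangle, \end{aligned}$$ where $(e_i)$ is a local orthonormal frame on $X$ and $\tau(h)$ is the tension field of $h$. When $h$ is harmonic, $\tau(h)=0$, and on a Hadamard manifold $d_p$ is convex, so $\mathrm{Hess}\, d_p\geq 0$ away from $h^{-1}(p)$; hence $d_p\circ h$ is subharmonic (in the distributional sense across $h^{-1}(p)$).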
Given this bound, the Arzelà--Ascoli theorem combined with some classical results on harmonic maps (namely Schauder estimates [@petersen] and Cheng's lemma [@cheng]) allows us to extract a limit of $h_n$, which gives the desired harmonic map at a finite distance from $f$. In §[5](#sec:nearest-point){reference-type="ref" reference="sec:nearest-point"} we show Theorem [Theorem 2](#thm:main-general){reference-type="ref" reference="thm:main-general"}. The overall strategy is similar to the proof of Theorem [Theorem 4](#thm:main-qia-omega){reference-type="ref" reference="thm:main-qia-omega"}. We still have an exhaustive sequence of nested balls $B_n$, with harmonic maps $h_n:B_n\to X$, and wish to prove $\sup_n\sup_{B_n}\mathrm{dist}(r, h_n)<\infty$. The proof of this bound is again naturally divided into two pieces: one follows from Theorem [Theorem 5](#thm:generalized-interior-estimate){reference-type="ref" reference="thm:generalized-interior-estimate"} and the fact that $r$ is uniformly non-collapsing in a neighbourhood of the convex set $C$ (which follows from admissibility), and the other follows from the arguments in the previous paper of the author [@tosic §4]. Finally in §[6](#sec:admissible){reference-type="ref" reference="sec:admissible"} we show Theorem [Theorem 3](#thm:main-admissible){reference-type="ref" reference="thm:main-admissible"}. We give here a brief outline of the proof. Firstly, it is easy to see that the only way admissibility can fail is along a sequence of points converging to the boundary at infinity $\partial_\infty\mathbb{H}^n$. If this sequence converges to a point in $U$, admissibility holds. Assume therefore that the sequence converges to a point $\xi$ in $\partial U$. We then use quasiconformal maps to straighten out $\partial U$ near $\xi$.
Note that in the model case where $U=\mathbb{R}_+\times\mathbb{R}^{n-2}\subseteq\mathbb{R}^{n-1}\cup\{\infty\}=\mathbb{S}^{n-1}$, it is easy to show Theorem [Theorem 3](#thm:main-admissible){reference-type="ref" reference="thm:main-admissible"} by hand. Therefore it suffices to show that applying this quasiconformal map preserves the condition in Definition [Definition 2](#dfn:admissible){reference-type="ref" reference="dfn:admissible"}. This follows from the fact, due to Tukia--Väisälä [@Tukia19829uasiconformalEF], that local quasiconformal maps in $\mathbb{S}^{n-1}$ can be extended to local bi-Lipschitz maps in $\mathbb{H}^n$. ## Notation {#notation .unnumbered} We write $A\lesssim B$ when there exists a constant $C>0$ that depends only on the pinching constants and dimension of the relevant pinched Hadamard manifolds, such that $A\leq CB$. We similarly write $A\gtrsim B$ when $B\lesssim A$, and $A\approx B$ when $A\lesssim B\lesssim A$. We collect below some pieces of notation that appear throughout the paper for the reader's convenience, - given a Riemannian manifold $M$, the distance function $\mathrm{dist}:M\times M\to\mathbb{R}_+=\{x\in\mathbb{R}: x\geq 0\}$ always refers to the path metric induced by the Riemannian metric on $M$, - we denote by $B_R(x)$ the ball of radius $R$ centered at $x$, under the metric given by $\mathrm{dist}$, - we denote by $\sigma_{x,R}$ the harmonic measure on the sphere $\partial B_R(x)$, as seen from $x$, i.e. 
the measure defined by the equality $$\begin{aligned} h(x)=\int_{\partial B_R(x)} h(y)d\sigma_{x,R}(y) \end{aligned}$$ for all bounded harmonic functions $h:B_R(x)\to \mathbb{R}$, - when $X$ is a pinched Hadamard manifold, we denote by $\partial_\infty X$ the visual boundary at infinity of $X$, - for $x, y\in X\cup\partial_\infty X$, we denote by $[x,y]$ the geodesic joining $x$ and $y$, - for $a\in X, b,c\in X\cup\partial_\infty X\setminus\{a\}$, we denote by $\measuredangle_a(b,c)$ the angle at $a$ between the geodesics $[a,b]$ and $[a, c]$, - for $x\in X, \xi\in X\cup\partial_\infty X\setminus\{x\}$ and $\theta>0$, we denote by $\mathrm{Cone}(x\xi, \theta)$ the set of points $y\in X\cup\partial_\infty X$ such that $\measuredangle_x(\xi, y)<\theta$, - we denote by $\mathbb{H}^n$ the $n$-dimensional hyperbolic space, and by $\partial_\infty\mathbb{H}^n=\mathbb{S}^{n-1}$ the $(n-1)$-dimensional sphere at infinity, - we denote by $\norm{f}_\infty$ the supremum of some function $f$ (if $f$ is a section of some vector bundle equipped with a natural metric, we still denote by $\norm{f}_\infty$ the supremum of the norm of $f$). # Deforming to smooth maps {#sec:deforming} Our aim here is to show that any weakly non-collapsing map can be deformed to a smooth weakly non-collapsing map, with control on the first two derivatives. Note that from [@tosic Lemma 3.1], any Lipschitz map can be deformed to a smooth map with first two derivatives bounded. The following proposition is thus the aim of this section. **Proposition 6**. *Let $\omega:\mathbb{R}_+\to\mathbb{R}_+$ be a sublinear size function, and let $f:X\to Y$ be an $\omega$-weakly non-collapsing map between pinched Hadamard manifolds. Suppose $\tilde{f}:X\to Y$ is a Lipschitz map such that $D=\sup\mathrm{dist}(f, \tilde{f})<\infty$. Then $\tilde{f}$ is a weakly non-collapsing map with size function $\omega(x)+2D$.* *Proof.* Recall that $D=\sup_X\mathrm{dist}(f, \tilde{f})$.
To check Definition [Definition 3](#dfn:qia-omega){reference-type="ref" reference="dfn:qia-omega"}(1), we write $$\begin{aligned} \int_{\partial B_R(x)}\mathrm{dist}(\tilde{f}(x), \tilde{f}(y))d\sigma_{x,R}(y)&\geq \int_{\partial B_R(x)}\left(\mathrm{dist}(f(x), f(y))-2D\right)d\sigma_{x, R}(y)\\ &\geq cR-2D\geq \frac{c}{2}R \end{aligned}$$ for $R>\max(R_0, 4c^{-1}D)$, where $R_0, c$ are constants from Definition [Definition 3](#dfn:qia-omega){reference-type="ref" reference="dfn:qia-omega"}(1) for $f$. It remains to show Definition [Definition 3](#dfn:qia-omega){reference-type="ref" reference="dfn:qia-omega"}(2). We first observe that for any set $S\subseteq Y$, we have $$\begin{aligned} \label{eq:containment-1} \tilde{f}^{-1}(S)\subseteq f^{-1}\left(N_D(S)\right). \end{aligned}$$ The proof relies on the following proposition, that we show in the next subsection. **Proposition 7**. *For any $D, \theta>0$, there exist $\hat{D}, \hat{\theta}>0$ such that for any two points $x, y\in X$ at a distance at most $D$ and any $\xi\in\partial_\infty X$, we have $$\begin{aligned} N_D\left(\mathrm{Cone}(x\xi, \hat{\theta})\setminus B_{\hat{D}}(x)\right)\subseteq \mathrm{Cone}(y\xi, \theta). \end{aligned}$$* Now let $\varepsilon>0$ be arbitrary. Let $\theta_0, R_0$ be such that $$\begin{aligned} \label{eq:harmonic-measure-bound} \sigma_{x, R}\left(f^{-1}\left(\mathrm{Cone}(f(x)\xi,\theta_0)\setminus B_{\omega(R)}(f(x))\right)\cap \partial B_R(x)\right)<{\varepsilon}. \end{aligned}$$ for any $x\in X, \xi\in\partial_\infty Y, R>R_0$. Then choose $\theta, \hat{D}>0$ as in Proposition [Proposition 7](#proposition:moving-cone){reference-type="ref" reference="proposition:moving-cone"} such that $$\begin{aligned} N_D\left(\mathrm{Cone}(\tilde{f}(x)\xi, \theta)\setminus B_{\hat{D}}(\tilde{f}(x))\right)\subseteq\mathrm{Cone}(f(x)\xi, \theta_0). 
\end{aligned}$$ In particular, for $R$ large enough, we have $\omega(R)>\hat{D}-2D$, and then we have $$\begin{aligned} N_D\left(\mathrm{Cone}(\tilde{f}(x)\xi, \theta)\setminus B_{\omega(R)+2D}(\tilde{f}(x))\right)&\subseteq N_D\left(\mathrm{Cone}(\tilde{f}(x)\xi, \theta)\setminus B_{\hat{D}}(\tilde{f}(x))\right) \\ &\subseteq \mathrm{Cone}(f(x)\xi, \theta_0), \end{aligned}$$ and hence $$\begin{aligned} N_D\left(\mathrm{Cone}(\tilde{f}(x)\xi, \theta)\setminus B_{\omega(R)+2D}(\tilde{f}(x))\right)&\subseteq \mathrm{Cone}(f(x)\xi, \theta_0)\setminus B_{\omega(R)+D}(\tilde{f}(x)) \nonumber \\ &\subseteq \mathrm{Cone}(f(x)\xi, \theta_0)\setminus B_{\omega(R)}(f(x)). \label{eq:containment-2} \end{aligned}$$ Combining ([\[eq:containment-1\]](#eq:containment-1){reference-type="ref" reference="eq:containment-1"}), ([\[eq:containment-2\]](#eq:containment-2){reference-type="ref" reference="eq:containment-2"}) and ([\[eq:harmonic-measure-bound\]](#eq:harmonic-measure-bound){reference-type="ref" reference="eq:harmonic-measure-bound"}), we see that for $R$ large enough, we have $$\begin{aligned} \sigma_{x, R}\left(\tilde{f}^{-1}\left(\mathrm{Cone}(\tilde{f}(x)\xi, \theta)\setminus B_{\omega(R)+2D}(\tilde{f}(x))\right)\cap \partial B_R(x)\right)<\varepsilon, \end{aligned}$$ for all $x\in X, \xi\in\partial_\infty Y$. Thus $\tilde{f}$ satisfies Definition [Definition 3](#dfn:qia-omega){reference-type="ref" reference="dfn:qia-omega"}(2) with size function $\tilde{\omega}(x)=\omega(x)+2D$. ◻ ## Moving the apex of a cone Here we show Proposition [Proposition 7](#proposition:moving-cone){reference-type="ref" reference="proposition:moving-cone"}. We fix $D, \theta>0$. Let $\hat{D}$ (resp. $\hat{\theta}$) be an arbitrary positive constant, which we will freely increase (resp. decrease) over the course of the proof.
By [@tosic Proposition 5.4], it suffices to show $$\begin{aligned} \label{eq:prop-moving-cone-main-1} \mathrm{Cone}(x\xi, \hat{\theta})\setminus B_{\hat{D}}(x)\subseteq\mathrm{Cone}(y\xi, \theta). \end{aligned}$$ **Remark 8**. Note that in [@tosic], the author works with the visual metric on $\partial_\infty X$, whereas here we are interested in the angle metric. It is classical that the two are Hölder equivalent, and the direction we need follows readily from Claim [Claim 11](#claim:def-angle){reference-type="ref" reference="claim:def-angle"} and [@bourdon1993actions §2.5]. Let $z\in \mathrm{Cone}(x\xi,\hat{\theta})\setminus B_{\hat{D}}(x)$ and let $w$ be the point on $x\xi$ closest to $z$. Our first assertion is that $$\begin{aligned} \label{eq:footpoint-far} \mathrm{dist}(x, w)\geq \min\left(\hat{D}, a^{-1}\log\frac{1}{\hat{\theta}}\right)+O(1). \end{aligned}$$ By comparison with the hyperbolic plane for the triangle $xzw$, we see that $$\begin{aligned} \label{eq:cmp-1} \mathrm{sinh}\left(a\mathrm{dist}(z, w)\right)\leq \sin\measuredangle_{x}(z, w)\sinh\left(a\mathrm{dist}(x, z)\right). \end{aligned}$$ This in particular shows that $$\begin{aligned} \label{eq:dist-estimate-sinh} \mathrm{dist}(z, w)\leq \max\left(0, \mathrm{dist}(x, z)+a^{-1}\log \measuredangle_x(z, w)\right) + O(1). \end{aligned}$$ Therefore by the triangle inequality $$\begin{aligned} \mathrm{dist}(x, w)&\geq \mathrm{dist}(x, z)-\mathrm{dist}(z, w) \\ &\geq\min\left(\mathrm{dist}(x, z), a^{-1}\log \frac{1}{\measuredangle_x(z,w)}\right)+O(1) \\ &\geq \min\left(\hat{D}, a^{-1}\log\frac{1}{\hat{\theta}}\right)+O(1), \end{aligned}$$ thus showing ([\[eq:footpoint-far\]](#eq:footpoint-far){reference-type="ref" reference="eq:footpoint-far"}). Let $\delta$ be the Gromov constant of $X$ as a hyperbolic metric space. 
By ([\[eq:footpoint-far\]](#eq:footpoint-far){reference-type="ref" reference="eq:footpoint-far"}), since $\mathrm{dist}(x, y)\leq D$, by choosing $\hat{D}$ large enough and $\hat{\theta}$ small enough, we can arrange it so that $\mathrm{dist}(w, xy)>10\delta$. Thus, by considering the ideal triangle $x\xi y$, we see that $\mathrm{dist}(w, y\xi)\leq\delta$. Therefore $$\begin{aligned} \label{eq:diff-dist-bounded}\mathrm{dist}(z, y\xi)\leq \mathrm{dist}(z, x\xi)+\delta. \end{aligned}$$ Similarly to ([\[eq:cmp-1\]](#eq:cmp-1){reference-type="ref" reference="eq:cmp-1"}), by comparison to the hyperbolic plane, we see that $$\begin{aligned} \sinh(b\mathrm{dist}(z ,y\xi))\geq \sinh(b\mathrm{dist}(z, y))\sin\measuredangle_y(z,\xi)\gtrsim e^{b(\mathrm{dist}(x, z)-D)}\measuredangle_y(z, \xi). \end{aligned}$$ It follows from ([\[eq:diff-dist-bounded\]](#eq:diff-dist-bounded){reference-type="ref" reference="eq:diff-dist-bounded"}) that $$\begin{aligned} \measuredangle_y(z,\xi)\lesssim e^{b(\mathrm{dist}(z, x\xi)-\mathrm{dist}(x, z))}, \end{aligned}$$ where we absorbed $e^{b(D+\delta)}$ into the implicit constant. Applying ([\[eq:dist-estimate-sinh\]](#eq:dist-estimate-sinh){reference-type="ref" reference="eq:dist-estimate-sinh"}), we get $$\begin{aligned} \measuredangle_y(z, \xi)&\lesssim \exp\left(\max\left(-b\mathrm{dist}(x, z), ba^{-1}\log\measuredangle_x(z, w)\right)\right)\\ &\lesssim\exp\left(-\min\left(b\hat{D}, ba^{-1}\log\frac{1}{\hat{\theta}}\right)\right). \end{aligned}$$ By increasing $\hat{D}$ and decreasing $\hat{\theta}$ further, we can ensure that $\measuredangle_y(z, \xi)<\theta$. Since $z$ was arbitrary, and none of our constants or choices of $\hat{D}, \hat{\theta}$ depended on $z$, this concludes the proof of ([\[eq:prop-moving-cone-main-1\]](#eq:prop-moving-cone-main-1){reference-type="ref" reference="eq:prop-moving-cone-main-1"}). 
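For the reader's convenience, we record a sketch of the elementary step from ([\[eq:cmp-1\]](#eq:cmp-1){reference-type="ref" reference="eq:cmp-1"}) to ([\[eq:dist-estimate-sinh\]](#eq:dist-estimate-sinh){reference-type="ref" reference="eq:dist-estimate-sinh"}), which was left implicit above. Writing $\theta=\measuredangle_x(z,w)$ and using $\sin\theta\leq\theta$ together with $\sinh u\leq \frac{1}{2}e^u$, inequality ([\[eq:cmp-1\]](#eq:cmp-1){reference-type="ref" reference="eq:cmp-1"}) gives $$\begin{aligned} \sinh\left(a\,\mathrm{dist}(z,w)\right)\leq \frac{1}{2}\,\theta\, e^{a\,\mathrm{dist}(x,z)}=:v. \end{aligned}$$ If $v\leq 1$, then $a\,\mathrm{dist}(z,w)\leq \sinh\left(a\,\mathrm{dist}(z,w)\right)\leq 1$. If $v\geq 1$, then from $\mathrm{arcsinh}\, v=\log\left(v+\sqrt{v^2+1}\right)\leq\log(3v)$ we get $$\begin{aligned} a\,\mathrm{dist}(z,w)\leq a\,\mathrm{dist}(x,z)+\log\theta+O(1). \end{aligned}$$ Combining the two cases yields ([\[eq:dist-estimate-sinh\]](#eq:dist-estimate-sinh){reference-type="ref" reference="eq:dist-estimate-sinh"}).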
# Generalized interior estimate {#sec:generalized-interior-estimate} This section is devoted to proving Theorem [Theorem 5](#thm:generalized-interior-estimate){reference-type="ref" reference="thm:generalized-interior-estimate"}, which follows from the technical Lemma [Lemma 9](#lm:fundamental-inequality){reference-type="ref" reference="lm:fundamental-inequality"} below. **Lemma 9**. *Suppose $X, Y$ are pinched Hadamard manifolds of dimension at most $n$ with pinching constants $-b^2\leq -a^2<0$, and let $x\in X$. Let $f:B_r(x)\to Y$ and $h:B_r(x)\to Y$ be a smooth and harmonic map, respectively. Suppose that $\mathrm{dist}(h, f)$ achieves its maximum at $x$. Then there exist positive constants $M=M(r, a, b, n), N=N(r, a, b, n)$, such that either $$\begin{aligned} \label{eq:fundamental-inequality-1} \sup\mathrm{dist}(h, f)\leq M \mathrm{diam}(f(B_r))+a^{-1 }N, \end{aligned}$$ or $$\begin{aligned} \int_{\partial B_r} \min\left(a\rho(y), \log\frac{\pi}{\theta(y)}\right)d\sigma_{x,r}(y)\geq \frac{1}{2}\int_{\partial B_r} a\rho(y)d\sigma_{x,r}(y)-2, \end{aligned}$$ where $$\begin{aligned} \rho(y)=\mathrm{dist}(f(x), f(y))\text{ and }\theta(y)=\measuredangle_{f(x)}(h(x), f(y)). \end{aligned}$$* The proof of Lemma [Lemma 9](#lm:fundamental-inequality){reference-type="ref" reference="lm:fundamental-inequality"} is a quantitative version of the proof of the "interior estimate" [@Benoist2017HarmonicQM §4]. We first outline the proof of Lemma [Lemma 9](#lm:fundamental-inequality){reference-type="ref" reference="lm:fundamental-inequality"} briefly. We divide the outline into three steps. 1. We first observe that $\mathrm{dist}(f(x), h(\cdot))$ is a subharmonic function, so in particular $$\begin{aligned} \label{eq:outline-subh-int} \int_{\partial B_r(x)} \left(\mathrm{dist}(f(x), h(y))-\mathrm{dist}(f(x), h(x))\right)d\sigma_{x, r}(y)\geq 0. 
\end{aligned}$$ The entirety of the proof of Lemma [Lemma 9](#lm:fundamental-inequality){reference-type="ref" reference="lm:fundamental-inequality"} is estimating the integrand on the left-hand side under the assumption that $\mathrm{dist}(h(x), f(x))$ is very large. 2. If $\mathrm{dist}(h(x), f(x))=:D$ is large enough, we have $$\begin{gathered} \inf_{y\in B_r(x)}\mathrm{dist}(f(x), h(y))\geq \frac{D}{2},\label{eq:outline-inf-dist} \\ \sup_{y\in B_r(x)}\measuredangle_{f(x)}(h(x), h(y))\leq C\exp\left({-\frac{a}{2}D}\right).\label{eq:outline-sup-angle} \end{gathered}$$ Inequality ([\[eq:outline-inf-dist\]](#eq:outline-inf-dist){reference-type="ref" reference="eq:outline-inf-dist"}) follows from the fact that $\mathrm{dist}(f(x), h(\cdot))$ is a positive subharmonic function defined on $B_r(x)$ that takes the value $D$ at the center $x$, and is bounded above by $D+2\mathrm{diam}(f(B_r(x)))$. For $D$ large enough, $D^{-1}\mathrm{diam}(f(B_r(x)))$ is very small, which forces $\inf_{y\in B_r(x)}\mathrm{dist}(f(x), h(y))$ to be comparable to $D$. Inequality ([\[eq:outline-sup-angle\]](#eq:outline-sup-angle){reference-type="ref" reference="eq:outline-sup-angle"}) then follows from ([\[eq:outline-inf-dist\]](#eq:outline-inf-dist){reference-type="ref" reference="eq:outline-inf-dist"}) and Cheng's lemma. 3. We then have the chain of inequalities $$\begin{gathered} \mathrm{dist}(f(x), h(y))-\mathrm{dist}(f(x), h(x))\leq \mathrm{dist}(f(x), h(y)) - \mathrm{dist}(f(y), h(y))\\ \leq 2a^{-1}\log\frac{1}{\measuredangle_{f(x)}(f(y), h(y))} - \mathrm{dist}(f(x), f(y)) + O(1). \end{gathered}$$ The inequality in the first line follows from the fact that $$\mathrm{dist}(f(x), h(x))=\sup_{B_r(x)}\mathrm{dist}(h, f),$$ and the inequality in the second line follows from the comparison of the triangle with vertices $f(x), f(y), h(y)$ with the hyperbolic plane. 
Plugging the final inequality into ([\[eq:outline-subh-int\]](#eq:outline-subh-int){reference-type="ref" reference="eq:outline-subh-int"}) along with the bound ([\[eq:outline-sup-angle\]](#eq:outline-sup-angle){reference-type="ref" reference="eq:outline-sup-angle"}) yields Lemma [Lemma 9](#lm:fundamental-inequality){reference-type="ref" reference="lm:fundamental-inequality"}. We first show Theorem [Theorem 5](#thm:generalized-interior-estimate){reference-type="ref" reference="thm:generalized-interior-estimate"} assuming Lemma [Lemma 9](#lm:fundamental-inequality){reference-type="ref" reference="lm:fundamental-inequality"} below, and then we show Lemma [Lemma 9](#lm:fundamental-inequality){reference-type="ref" reference="lm:fundamental-inequality"} in §[3.1](#subsec:proof-of-lm-fundamental-inequality){reference-type="ref" reference="subsec:proof-of-lm-fundamental-inequality"}. *Proof of Theorem [Theorem 5](#thm:generalized-interior-estimate){reference-type="ref" reference="thm:generalized-interior-estimate"}.* We assume that $\sup_{B_{R_n}(x_n)}\mathrm{dist}(f_n, h_n)\to\infty$, possibly after passing to a subsequence. Fix a large constant $R\geq 1$, that we will choose later, and pass to a subsequence such that $R_n>R$ for all $n$. Our proof strategy is to apply Lemma [Lemma 9](#lm:fundamental-inequality){reference-type="ref" reference="lm:fundamental-inequality"} to $B_R(x_n)$. Since $\sup_{B_R(x_n)}\mathrm{dist}(f_n, h_n)\to\infty$, we eventually have violation of ([\[eq:fundamental-inequality-1\]](#eq:fundamental-inequality-1){reference-type="ref" reference="eq:fundamental-inequality-1"}). Thus for large $n$, we have $$\begin{aligned} \int_{\partial B_R(x_n)} \min\left(a\rho_n(y), \log\frac{\pi}{\theta_n(y)}\right)d\sigma_{x_n, R}(y)\gtrsim R, \end{aligned}$$ where $$\begin{aligned} \rho_n(y)=\mathrm{dist}(f_n(x_n), f_n(y))\text{ and }\theta_n(y)=\measuredangle_{f_n(x_n)}(h_n(x_n), f_n(y)). 
\end{aligned}$$ Note that in this proof, we suppress the dependence of implicit constants on the constants of $\mathcal{F}$ coming from Definition [Definition 5](#dfn:uniformly-inner){reference-type="ref" reference="dfn:uniformly-inner"}. We observe that since $f_n$ are uniformly Lipschitz, we have $\rho_n(y)\lesssim R$. Let $\omega$ be the size function of $\mathcal{F}$. We now let $$S_n=\{y\in\partial B_R(x_n):\theta_n(y)< \pi e^{-a\omega(R)}\text{ and }\rho_n(y)\geq \omega(R)\}.$$ Then $$\begin{aligned} R&\lesssim \int_{\partial B_R(x_n)} \min\left(a\rho_n(y), \log\frac{\pi}{\theta_n(y)}\right)d\sigma_{x_n, R}(y)\\ &\leq \int_{S_n} Rd\sigma_{x_n, R}+\int_{\partial B_R(x_n)\setminus S_n} a\omega(R) d\sigma_{x_n, R}\leq R\sigma_{x_n, R}(S_n)+a\omega(R). \end{aligned}$$ By sublinearity of $\omega$, we have $\sigma_{x_n, R}(S_n)\gtrsim 1$. However, observe that $$\begin{aligned} S_n=f_n^{-1}\left(\mathrm{Cone}\left(f_n(x_n)h_n(x_n), \pi e^{-a\omega(R)}\right)\setminus B_{\omega(R)}(f_n(x_n))\right)\cap\partial B_R(x_n). \end{aligned}$$ Thus for $R$ large enough, depending on the constants of $\mathcal{F}$, we reach a contradiction with Definition [Definition 5](#dfn:uniformly-inner){reference-type="ref" reference="dfn:uniformly-inner"}(2) for $\mathcal{F}$. ◻ ## Proof of Lemma [Lemma 9](#lm:fundamental-inequality){reference-type="ref" reference="lm:fundamental-inequality"} {#subsec:proof-of-lm-fundamental-inequality} For clarity, we introduce the notation $$\begin{gathered} \rho_f(y)=\mathrm{dist}(f(x), f(y)),\\ \rho_h(y)=\mathrm{dist}(f(x), h(y)). \end{gathered}$$ The proof will follow from the following two inequalities. The first is $$\begin{gathered} \label{eq:visual-angle-bound}\measuredangle_{f(x)}(h(x), h(y))\leq C(x, r)a\frac{\rho_h(x)+\norm{\rho_f}_\infty}{\sinh\left(a\rho_h(x)-C(x, r)a\norm{\rho_f}_\infty\right)}, \end{gathered}$$ where $C(x, r)>0$ is a constant depending only on $x, r$.
The second is $$\begin{gathered} \label{eq:int-deficiency}\int_{\partial B_r} \min\left(a\rho_f(y), \log\frac{\pi}{\measuredangle_{f(x)}(h(y), f(y))}\right)\geq \frac{1}{2}\int_{\partial B_r} (a\rho_f-2), \end{gathered}$$ provided $\rho_h(x)\geq (C(x, r)+2)\norm{\rho_f}_\infty$. Here we are integrating against $\sigma_{x,r}$, but we drop the $d\sigma_{x,r}(y)$ in formulas for brevity. We first prove Lemma [Lemma 9](#lm:fundamental-inequality){reference-type="ref" reference="lm:fundamental-inequality"} assuming inequalities ([\[eq:visual-angle-bound\]](#eq:visual-angle-bound){reference-type="ref" reference="eq:visual-angle-bound"}) and ([\[eq:int-deficiency\]](#eq:int-deficiency){reference-type="ref" reference="eq:int-deficiency"}), which we prove in the next two subsections. Let $\varepsilon=\frac{\pi}{4} \exp\left(-a\norm{\rho_f}_\infty\right)$, and $$\begin{aligned} \mathcal{C}_\varepsilon=\{y\in \partial B_r: \measuredangle_{f(x)}(h(x), f(y))\leq\varepsilon\}. \end{aligned}$$ By ([\[eq:visual-angle-bound\]](#eq:visual-angle-bound){reference-type="ref" reference="eq:visual-angle-bound"}), if $\rho_h(x)\geq M(r, x)\norm{\rho_f}_\infty+a^{-1}N(x, r)$ for some suitable functions $M, N$, we have $$\begin{aligned} \sup_y \measuredangle_{f(x)}(h(x), h(y))\leq \frac{1}{2}\varepsilon. \end{aligned}$$ We observe that for $y\in\partial B_r\setminus\mathcal{C}_\varepsilon$, we have $$\begin{aligned} \measuredangle_{f(x)}(h(y), f(y)) \geq \measuredangle_{f(x)}(h(x), f(y))- \measuredangle_{f(x)}(h(x), h(y))\geq\frac{1}{2}\measuredangle_{f(x)}(h(x), f(y)). \end{aligned}$$ Thus for $y\in\partial B_r\setminus\mathcal{C}_\varepsilon$, we have $$\begin{aligned} \log \frac{\pi}{\measuredangle_{f(x)}(h(y), f(y))}\leq 1+\log\frac{\pi}{\measuredangle_{f(x)}(h(x), f(y))}. 
\end{aligned}$$ Therefore $$\begin{aligned} \label{eq:a+b-estimate} \int_{\partial B_r} \min\left(a\rho_f(y), \log\frac{\pi}{\measuredangle_{f(x)}(h(y),f(y))}\right)\leq A+B, \end{aligned}$$ where $$\begin{gathered} A=\int_{\partial B_r\setminus\mathcal{C}_\varepsilon} \min\left(a\rho_f(y), 1+\log\frac{\pi}{\measuredangle_{f(x)}(h(x),f(y))}\right),\\ B=\int_{\mathcal{C}_\varepsilon} a\rho_f. \end{gathered}$$ We note that by choice of $\varepsilon$, for $y\in\mathcal{C}_\varepsilon$, $$\begin{aligned} 1+\log \frac{\pi}{\measuredangle_{f(x)}(h(x), f(y))}\geq a\norm{\rho_f}_\infty \geq a\rho_f(y). \end{aligned}$$ By ([\[eq:a+b-estimate\]](#eq:a+b-estimate){reference-type="ref" reference="eq:a+b-estimate"}) and ([\[eq:int-deficiency\]](#eq:int-deficiency){reference-type="ref" reference="eq:int-deficiency"}), we have $$\begin{aligned} \frac{1}{2}\int_{\partial B_r} (a\rho_f-2)\leq \int_{\partial B_r} \min\left(a\rho_f(y), 1+\log\frac{\pi}{\measuredangle_{f(x)}(h(x), f(y))}\right). \end{aligned}$$ ### Proof of ([\[eq:visual-angle-bound\]](#eq:visual-angle-bound){reference-type="ref" reference="eq:visual-angle-bound"}). The proof of ([\[eq:visual-angle-bound\]](#eq:visual-angle-bound){reference-type="ref" reference="eq:visual-angle-bound"}) depends on the following estimate. **Claim 10**. Let $f:B_r(x)\to\mathbb{R}$ be a subharmonic function. Then there exists $\lambda=\lambda(r, a, b, n)>0$, such that $$\begin{aligned} f(x)\leq \lambda \inf_{B_r(x)}f+(1-\lambda)\sup_{B_r(x)}f. \end{aligned}$$ *Proof.* If $f$ is constant, there is nothing to prove. Otherwise, we may post-compose $f$ with an increasing affine function and assume that $\inf_{B_r(x)}f=0$ and $\sup_{B_r(x)}f=1$. Let $y\in B_r(x)$ be such that $f(y)=0$, and let $\rho=\mathrm{dist}(x, y)$. If $\rho=0$, then $x=y$ and there is nothing to prove. Otherwise, note that $$\begin{aligned} \label{eq:supinf-convexity-int-subh} f(x)\leq\int_{\partial B_\rho(x)}f(z)d\sigma_{x, \rho}(z). 
\end{aligned}$$ Note that by Cheng's lemma, $\sup_{B_r(x)}\norm{Df}\leq C=C(r)$. For some $\theta=\theta(r)$, the following holds by comparison to the hyperbolic plane: given $p, q\in \partial B_\rho(x)$ for $\rho\leq r$ with $\measuredangle_x(p, q)<\theta$, we have $\mathrm{dist}(p, q)<\frac{1}{2C}$. From ([\[eq:supinf-convexity-int-subh\]](#eq:supinf-convexity-int-subh){reference-type="ref" reference="eq:supinf-convexity-int-subh"}), together with the resulting bound $f\leq \norm{Df}_\infty\,\mathrm{dist}(\cdot, y)\leq\frac{1}{2}$ on $\partial B_\rho(x)\cap\mathrm{Cone}(xy, \theta)$, we get $$\begin{aligned} f(x)&\leq \frac{1}{2}\sigma_{x,\rho}\left({\partial B_\rho(x)\cap\mathrm{Cone}(xy, \theta)} \right)+1-\sigma_{x,\rho}\left({\partial B_\rho(x)\cap\mathrm{Cone}(xy, \theta)} \right)\\ &= 1-\frac{1}{2}\sigma_{x,\rho}\left({\partial B_\rho(x)\cap\mathrm{Cone}(xy, \theta)} \right). \end{aligned}$$ By [@Benoist2020HarmonicMO], there is some constant $\mu=\mu(r, n)>0$ such that $$\sigma_{x,\rho}\left({\partial B_\rho(x)\cap\mathrm{Cone}(xy, \theta)} \right)\geq\mu.$$ Therefore $f(x)\leq 1-\frac{1}{2}\mu$, and the claim is shown with $\lambda=\frac{1}{2}\mu$. ◻ Since $\mathrm{dist}(h(y), f(y))\leq \rho_h(x)$, we have $$\begin{aligned} \rho_h(y)\leq \rho_f(y)+\mathrm{dist}(h(y), f(y))\leq\rho_f(y)+\rho_h(x), \end{aligned}$$ and hence $\norm{\rho_h}_\infty\leq \rho_h(x)+\norm{\rho_f}_\infty$. From Claim [Claim 10](#claim:supinf-convexity){reference-type="ref" reference="claim:supinf-convexity"} and the fact that $\rho_h$ is subharmonic, it follows that $$\begin{aligned} \rho_h(x)\leq \lambda \inf_{B_r} \rho_h+(1-\lambda) \left(\norm{\rho_f}_\infty+\rho_h(x)\right). \end{aligned}$$ In particular, we have $$\begin{aligned} \label{eq:inf-rhoh-estimate} \inf_{B_r}\rho_h\geq \rho_h(x)-\frac{1-\lambda}{\lambda}\norm{\rho_f}_\infty\geq \rho_h(x)-C\norm{\rho_f}_\infty. \end{aligned}$$ By comparison to the hyperbolic plane, we have $$\begin{aligned} a\mathrm{length}(h([x, y]))\geq \sinh(a\inf_{B_r}\rho_h) \measuredangle_{f(x)}(h(x), h(y)). 
\end{aligned}$$ By Cheng's lemma, $$\mathrm{length}(h([x, y]))\leq r\norm{D h}_\infty\leq C\norm{\rho_h}_\infty\leq C\left(\rho_h(x)+\norm{\rho_f}_\infty\right).$$ Therefore, $$\begin{aligned} \measuredangle_{f(x)}(h(x), h(y))\leq Ca\frac{\rho_h(x)+\norm{\rho_f}_\infty}{\sinh\left(a\rho_h(x)-Ca\norm{\rho_f}_\infty\right)}. \end{aligned}$$ ### Proof of ([\[eq:int-deficiency\]](#eq:int-deficiency){reference-type="ref" reference="eq:int-deficiency"}). We first relate the deficiency (i.e. the slack in the triangle inequality) of the triangle with vertices $f(x), f(y), h(y)$ and the angle $\measuredangle_{f(x)}(h(y),f(y))$, which we denote by $\theta(y)$ for this subsection only, by abuse of notation. **Claim 11**. Let $D(y)=\rho_f(y)+\rho_h(y)-\mathrm{dist}(f(y), h(y))$. Assuming $\rho_h\geq 2\rho_f$ and $\rho_f\geq a^{-1}$, we have $$\begin{aligned} \log\frac{\pi}{\theta(y)}\geq \frac{a}{2}D(y)-1. \end{aligned}$$ *Proof.* This follows from comparison with the hyperbolic plane. By the hyperbolic law of cosines, we have $$\begin{aligned} \cosh(a\mathrm{dist}(f(y), h(y)))&\geq \cosh(a\rho_f(y))\cosh(a\rho_h(y))-\sinh(a\rho_f(y))\sinh(a\rho_h(y))\cos\theta(y)\\ &=\cosh(a(\rho_f(y)-\rho_h(y)))+2\sin^2\frac{\theta(y)}{2}\sinh(a\rho_f(y))\sinh(a\rho_h(y)). \end{aligned}$$ Therefore $$\begin{aligned} \sin^2\frac{\theta}{2}\leq \frac{\sinh\left(\frac{a}{2}(\mathrm{dist}(f,h)+\rho_f-\rho_h)\right)\sinh\left(\frac{a}{2}(\mathrm{dist}(f,h)+\rho_h-\rho_f)\right)}{\sinh(a\rho_f)\sinh(a\rho_h)}. \end{aligned}$$ Since $a\rho_h\geq 2a\rho_f\geq 2$, we have $\min\left(a\rho_f,a\rho_h\right)\geq 1$, so $$\begin{aligned} \sin^2\frac{\theta}{2}\leq\frac{e^{-a\left(\rho_f+\rho_h-\mathrm{dist}(f, h)\right)}}{(1-e^{-2a\rho_f})(1-e^{-2a\rho_h})}\leq e^{-aD(y)} (1-e^{-2})^{-2}. 
\end{aligned}$$ Since $\sin\frac{\theta}{2}\geq \frac{\theta}{\pi}$, we see that $$\begin{aligned} \frac{\pi}{\theta}\geq (1-e^{-2}) e^{\frac{a}{2}D(y)}, \end{aligned}$$ so $\log\frac{\pi}{\theta}\geq\frac{a}{2}D(y)-\log\frac{1}{1-e^{-2}}\geq \frac{a}{2}D(y)-1$. ◻ By ([\[eq:inf-rhoh-estimate\]](#eq:inf-rhoh-estimate){reference-type="ref" reference="eq:inf-rhoh-estimate"}) and the assumption that $\rho_h(x)\geq(C+2)\norm{\rho_f}_\infty$, we have $\inf_{B_r}\rho_h\geq 2\norm{\rho_f}_\infty$. We let $G=\{y\in\partial B_r: a\rho_f(y)\geq 1\}$. We observe that for $y\in G$, we have by Claim [Claim 11](#claim:def-angle){reference-type="ref" reference="claim:def-angle"} $$\begin{aligned} \log \frac{\pi}{\theta(y)}\geq \frac{a}{2}D(y)-1. \end{aligned}$$ Note that $D(y)=\rho_f(y)+\rho_h(y)-\mathrm{dist}(f(y),h(y))\leq 2\rho_f(y)$ by the triangle inequality, and hence $\frac{a}{2}D(y)-1\leq a\rho_f(y)$. Therefore for $y\in G$, $$\begin{aligned} \label{eq:min-g-case} \min\left(a\rho_f(y), \log\frac{\pi}{\theta(y)}\right)\geq \frac{a}{2}D(y)-1. \end{aligned}$$ On the other hand, for $y\not\in G$, we have $$\begin{aligned} \min\left(a\rho_f(y), \log\frac{\pi}{\theta(y)}\right)\geq 0\geq a\rho_f(y)-1\geq \frac{a}{2}D(y)-1. \end{aligned}$$ Hence ([\[eq:min-g-case\]](#eq:min-g-case){reference-type="ref" reference="eq:min-g-case"}) holds for all $y\in\partial B_r$. Integrating ([\[eq:min-g-case\]](#eq:min-g-case){reference-type="ref" reference="eq:min-g-case"}) over $\partial B_r$, we get $$\begin{aligned} \int_{\partial B_r} \min\left(a\rho_f(y), \log\frac{\pi}{\theta(y)}\right)&\geq \frac{1}{2}\int_{\partial B_r} \left(aD(y)-2\right)\\ &\geq\frac{1}{2}\int_{\partial B_r} \left(a\rho_h(x)+a\rho_f(y)-a\mathrm{dist}(f(y), h(y))-2\right)\\ &\geq \frac{1}{2}\int_{\partial B_r} (a\rho_f-2), \end{aligned}$$ where we used subharmonicity of $\rho_h$ in going from the first to the second line, and the bound $\mathrm{dist}(f(y), h(y))\leq\rho_h(x)$ in going from the second to the third. 
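The passage in the proof of Claim 11 from the law-of-cosines expansion to the bound on $\sin^2\frac{\theta}{2}$ is a one-line computation; for the reader's convenience, we record it here as a LaTeX fragment. It uses only the identity $\cosh X-\cosh Y=2\sinh\frac{X+Y}{2}\sinh\frac{X-Y}{2}$, applied with $X=a\,\mathrm{dist}(f(y), h(y))$ and $Y=a(\rho_f(y)-\rho_h(y))$.

```latex
% Intermediate step in the proof of Claim 11: rearranging the bound
%   cosh(a dist(f(y),h(y))) >= cosh(a(rho_f - rho_h))
%                              + 2 sin^2(theta/2) sinh(a rho_f) sinh(a rho_h)
% and applying the difference formula for hyperbolic cosines.
\begin{aligned}
2\sin^2\frac{\theta(y)}{2}\,\sinh(a\rho_f(y))\sinh(a\rho_h(y))
 &\leq \cosh\bigl(a\,\mathrm{dist}(f(y), h(y))\bigr)
       -\cosh\bigl(a(\rho_f(y)-\rho_h(y))\bigr)\\
 &= 2\sinh\Bigl(\tfrac{a}{2}\bigl(\mathrm{dist}(f(y), h(y))+\rho_f(y)-\rho_h(y)\bigr)\Bigr)\\
 &\qquad\cdot\sinh\Bigl(\tfrac{a}{2}\bigl(\mathrm{dist}(f(y), h(y))+\rho_h(y)-\rho_f(y)\bigr)\Bigr).
\end{aligned}
```

Dividing both sides by $2\sinh(a\rho_f(y))\sinh(a\rho_h(y))$ yields exactly the bound on $\sin^2\frac{\theta}{2}$ displayed in the proof.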
# Weakly non-collapsing maps {#sec:weakly-non-collapsing} In this section we prove Theorem [Theorem 4](#thm:main-qia-omega){reference-type="ref" reference="thm:main-qia-omega"}, which in turn immediately implies Theorem [Theorem 1](#thm:main-qia){reference-type="ref" reference="thm:main-qia"}, as explained in Remark [Remark 4](#remark:non-collapsing){reference-type="ref" reference="remark:non-collapsing"}. The proof of Theorem [Theorem 4](#thm:main-qia-omega){reference-type="ref" reference="thm:main-qia-omega"} has four steps. 1. By combining [@tosic Lemma 3.1] and Proposition [Proposition 6](#prop:qia-deform){reference-type="ref" reference="prop:qia-deform"}, it follows immediately that the map $f$ is at a finite distance from a map that is weakly non-collapsing with first two derivatives bounded. 2. We then construct harmonic maps $h_n$ on larger and larger balls $B_n$ that agree with $f$ on $\partial B_n$. 3. The boundary estimate of Benoist--Hulin [@Benoist2017HarmonicQM Proposition 3.7], stated below as Proposition [Proposition 12](#prop:boundary-estimate){reference-type="ref" reference="prop:boundary-estimate"}, then shows that in any finite distance neighbourhood of the boundary $\partial B_n$, the distance between $f$ and $h_n$ remains bounded. The generalized interior estimate Theorem [Theorem 5](#thm:generalized-interior-estimate){reference-type="ref" reference="thm:generalized-interior-estimate"} shows that the distance between $f$ and $h_n$ remains bounded far from the boundary of $B_n$. 4. The limiting argument following Benoist--Hulin [@Benoist2017HarmonicQM §3.3], which we state and prove below as Proposition [Proposition 13](#prop:limiting){reference-type="ref" reference="prop:limiting"}, then shows that a limit of $h_n$ can be extracted to get a harmonic map at a finite distance from $f$. This argument follows from the Arzela--Ascoli theorem, combined with some classical results on harmonic maps: Schauder elliptic estimates and Cheng's lemma. 
**Proposition 12**. *Let $f:X\to Y$ be a smooth map between two pinched Hadamard manifolds with first two derivatives bounded. Let $x_0\in X, R>0$, and let $h:B_R(x_0)\to Y$ be a harmonic map that agrees with $f$ on $\partial B_R(x_0)$. Then there exists a constant $C$ that depends only on $\norm{Df}_\infty, \norm{D^2f}_\infty$, and the pinching constants of $X, Y$, such that $$\begin{aligned} \mathrm{dist}(h(x), f(x))\leq C\mathrm{dist}(x, \partial B_R(x_0)). \end{aligned}$$* **Proposition 13**. *Let $X, Y$ be pinched Hadamard manifolds, and let $f:X\to Y$ be a smooth Lipschitz map with bounded second derivative. We fix a point $x\in X$, and let $h_n:B_n(x)\to Y$ be harmonic maps such that $\sup_n\sup_{B_n(x)} \mathrm{dist}(f, h_n)<\infty$. Then there exists a harmonic map $h:X\to Y$ such that $\sup\mathrm{dist}(h, f)<\infty$.* Proposition [Proposition 12](#prop:boundary-estimate){reference-type="ref" reference="prop:boundary-estimate"} is stated verbatim in [@Benoist2017HarmonicQM], so we omit the proof. Proposition [Proposition 13](#prop:limiting){reference-type="ref" reference="prop:limiting"} appears in [@Benoist2017HarmonicQM] as well; however, it is never explicitly stated there, so for the reader's convenience we include the proof below in §[4.1](#subsec:pf-prop-limiting){reference-type="ref" reference="subsec:pf-prop-limiting"}. We then prove Theorem [Theorem 4](#thm:main-qia-omega){reference-type="ref" reference="thm:main-qia-omega"} in §[4.2](#subsec:pf-thm-main-qia-omega){reference-type="ref" reference="subsec:pf-thm-main-qia-omega"}. ## Proof of Proposition [Proposition 13](#prop:limiting){reference-type="ref" reference="prop:limiting"} {#subsec:pf-prop-limiting} For any fixed compact set $K\subset X$, we have $$\begin{aligned} \mathrm{diam}(h_n(K))\leq 2\sup_{B_n(x)}\mathrm{dist}(h_n, f)+\mathrm{diam}(f(K)). 
\end{aligned}$$ Hence $\mathrm{diam}(h_n(K))$ is bounded, so by Cheng's lemma [@cheng] $$\sup_n\norm{D h_n}_{L^\infty(K)}<\infty.$$ By the Arzela--Ascoli theorem, we may pass to a subsequence and extract a limit $h_n\to h$, with the convergence uniform on compact subsets of $X$. From the fact that $\sup_n\norm{D h_n}_{L^\infty(K)}<\infty$ for any compact set $K\subset X$, we see that for $\alpha\in(0,1)$, we have $$\begin{aligned} \sup_n\norm{h_n}_{C^\alpha(K)}<\infty. \end{aligned}$$ From Schauder elliptic estimates [@petersen Theorem 70, p. 303] and the fact that $h_n$ is harmonic, we see that $\sup_n \norm{h_n}_{C^{2,\alpha}(K)}<\infty$ for any compact $K\subset X$. Applying Arzela--Ascoli again, we may extract a further subsequence such that $D^2 h_n\to H$. It is easy to see that $H=D^2 h$, so in particular $h$ is harmonic. Finally, we have $$\begin{aligned} \sup \mathrm{dist}(h, f)\leq\sup_n\sup_{B_n(x)}\mathrm{dist}(h_n, f)<\infty, \end{aligned}$$ which concludes the proof of Proposition [Proposition 13](#prop:limiting){reference-type="ref" reference="prop:limiting"}. ## Proof of Theorem [Theorem 4](#thm:main-qia-omega){reference-type="ref" reference="thm:main-qia-omega"} {#subsec:pf-thm-main-qia-omega} Let $f:X\to Y$ be an $\omega$-weakly non-collapsing map between pinched Hadamard manifolds. By [@tosic Lemma 3.1] there exists a smooth $\tilde{f}:X\to Y$ such that $D\tilde{f}, D^2\tilde{f}$ are bounded and $\sup\mathrm{dist}(f, \tilde{f})<\infty$. Proposition [Proposition 6](#prop:qia-deform){reference-type="ref" reference="prop:qia-deform"} then guarantees that $\tilde{f}$ is a weakly non-collapsing map, possibly with a different size function. Fix an arbitrary point $x\in X$. Then let $h_n:B_n(x)\to Y$ be the harmonic map that agrees with $\tilde{f}$ on $\partial B_n(x)$. If $\sup \mathrm{dist}(\tilde{f}, h_n)$ is a bounded sequence, by Proposition [Proposition 13](#prop:limiting){reference-type="ref" reference="prop:limiting"}, we are done. 
Assume therefore, after passing to a subsequence, that $\sup_{B_n(x)}\mathrm{dist}(\tilde{f}, h_n)\to\infty$. Let $x_n\in B_n(x)$ be a sequence of points at which the maximum of $\mathrm{dist}(\tilde{f}, h_n)$ is achieved. By Proposition [Proposition 12](#prop:boundary-estimate){reference-type="ref" reference="prop:boundary-estimate"}, we have $$R_n=\mathrm{dist}(x_n, \partial B_n(x))\to\infty.$$ We observe that the family of maps $\{\tilde{f}:(X, x_n)\to (Y, \tilde{f}(x_n))\text{ for }n=1,2,...\}$ is uniformly non-collapsing by definition. Applying Theorem [Theorem 5](#thm:generalized-interior-estimate){reference-type="ref" reference="thm:generalized-interior-estimate"} to the harmonic maps $h_n: B_{R_n}(x_n)\to Y$, we get that $$\begin{aligned} \sup_n\sup_{B_n(x)}\mathrm{dist}(\tilde{f}, h_n)=\sup_n\mathrm{dist}(\tilde{f}(x_n), h_n(x_n))<\infty, \end{aligned}$$ which is a contradiction. # Nearest-point projections to admissible convex sets {#sec:nearest-point} This section is devoted to showing Theorem [Theorem 2](#thm:main-general){reference-type="ref" reference="thm:main-general"}. We first give a rough outline of the proof. As in the proof of Theorem [Theorem 4](#thm:main-qia-omega){reference-type="ref" reference="thm:main-qia-omega"}, we construct harmonic maps $h_n$ defined on larger and larger balls $B_n(o)$ for some fixed $o\in X$, agreeing with $r$ on the boundaries $\partial B_n(o)$. The goal is to use the limiting argument in Proposition [Proposition 13](#prop:limiting){reference-type="ref" reference="prop:limiting"} to get a harmonic map defined on all of $X$. It therefore suffices to show that $\sup_X\mathrm{dist}(h_n, r)$ is a bounded sequence. We do this in two steps. 1. 
We first show that for some fixed $D>0$, we have $$\sup_{X\setminus N_D(C)}\mathrm{dist}(h_n, r)\leq\sup_{N_D(C)}\mathrm{dist}(h_n, r)+O(1).$$ This inequality is derived analogously to [@tosic §4], and it follows from the existence of a bounded subharmonic function $\Phi$ such that $\Delta\Phi\gtrsim e^{-a\mathrm{dist}(\cdot, C)}$ on $X\setminus N_D(C)$, for some fixed $D>0$, and from the classical inequality of Schoen--Yau [@SCHOEN1979361] on the Laplacian of the distance between two maps. 2. It therefore remains to show that $\sup_{N_D(C)}\mathrm{dist}(h_n, r)$ is bounded. This follows from our generalized interior estimate Theorem [Theorem 5](#thm:generalized-interior-estimate){reference-type="ref" reference="thm:generalized-interior-estimate"}, since the map $r$ is non-collapsing near the convex set $C$. This bound is contained in Proposition [Proposition 14](#prop:interior-estimate){reference-type="ref" reference="prop:interior-estimate"} below. To state Proposition [Proposition 14](#prop:interior-estimate){reference-type="ref" reference="prop:interior-estimate"}, we first note that, given any admissible convex set $C$ and a nearest-point projection map $r:X\to C$, by [@tosic Corollary 3.7], there exists a smooth map $\tilde{r}:X\to X$ such that $\mathcal{D}:=\sup_X\mathrm{dist}(r, \tilde{r})<\infty$, and $$\begin{gathered} \norm{D\tilde{r}}\lesssim e^{-a\mathrm{dist}(\cdot, C)},\\ \norm{\tau(\tilde{r})}\lesssim e^{-a\mathrm{dist}(\cdot, C)}. \end{gathered}$$ **Proposition 14**. *Let $D>0$. There exist constants $R_0=R_0(D)>0$ and $M=M(D)>0$, such that, for any $x\in N_D(C)$ and $R>R_0$, we have the following property. 
Given a harmonic map $h:B_R(x)\to X$ such that $\mathrm{dist}(h, \tilde{r})$ achieves its maximum at $x$, we have $\mathrm{dist}(h, \tilde{r})<M$.* We first show Theorem [Theorem 2](#thm:main-general){reference-type="ref" reference="thm:main-general"} assuming Proposition [Proposition 14](#prop:interior-estimate){reference-type="ref" reference="prop:interior-estimate"} in §[5.1](#subsec:pf-thm-main-general){reference-type="ref" reference="subsec:pf-thm-main-general"}. We then show Proposition [Proposition 14](#prop:interior-estimate){reference-type="ref" reference="prop:interior-estimate"} in §[5.2](#subsec:prop-interior-estimate){reference-type="ref" reference="subsec:prop-interior-estimate"}. ## Proof of Theorem [Theorem 2](#thm:main-general){reference-type="ref" reference="thm:main-general"} {#subsec:pf-thm-main-general} From [@tosic Proposition 4.4], for some $D>0$ large enough, there exist subharmonic functions $\phi_n:X\to\mathbb{R}$ for $n>D-1$, such that $$\begin{aligned} \Delta\phi_n\geq 1\text{ on }N_{n+1}(C)\setminus N_n(C), \end{aligned}$$ and $\sup_n\norm{\phi_n}_\infty<\infty$. We now define the function $$\begin{aligned} \Phi=\sum_{n=\lfloor D\rfloor}^\infty e^{-an}\phi_n. \end{aligned}$$ Then $\Phi$ is a bounded subharmonic function with the property that $\Delta\Phi\gtrsim e^{-a\mathrm{dist}(\cdot, C)}$ on $X\setminus N_D(C)$. We now fix an arbitrary point $o\in X$, and let $h_N:B_N(o)\to X$ be the harmonic map that agrees with $\tilde{r}$ on $\partial B_N(o)$. By Proposition [Proposition 13](#prop:limiting){reference-type="ref" reference="prop:limiting"}, it suffices to show the following claim. **Claim 15**. The sequence $\sup_{B_N(o)}\mathrm{dist}(h_N, \tilde{r})$ is bounded. 
*Proof.* Assume that, possibly after passing to a subsequence, we have $$\sup_{B_N(o)}\mathrm{dist}(h_N, \tilde{r})\to\infty.$$ Note that from [@SCHOEN1979361], we have $$\begin{aligned} \Delta\mathrm{dist}(h_N, \tilde{r})\gtrsim -\norm{\tau(\tilde{r})}\gtrsim -e^{-a\mathrm{dist}(\cdot, C)}. \end{aligned}$$ Therefore, for a suitably chosen constant $c$, the function $$\begin{aligned} \mathrm{dist}(h_N, \tilde{r})+c\Phi \end{aligned}$$ is subharmonic on $X\setminus N_D(C)$. Let $x_N\in B_N(o)$ be the point where the maximum of $\mathrm{dist}(h_N, \tilde{r})$ is achieved. If $x_N\in X\setminus N_D(C)$ for infinitely many $N$, then $$\begin{aligned} \mathrm{dist}(h_N(x_N), \tilde{r}(x_N))+c\Phi(x_N)\leq \sup_{\partial B_N(o)}(\mathrm{dist}(h_N, \tilde{r})+c\Phi)\lesssim \norm{\Phi}_\infty\lesssim 1, \end{aligned}$$ which is a contradiction, since $\Phi$ is bounded while $\mathrm{dist}(h_N(x_N), \tilde{r}(x_N))\to\infty$. Thus, for infinitely many $N$, we have $x_N\in N_D(C)$. Proposition [Proposition 12](#prop:boundary-estimate){reference-type="ref" reference="prop:boundary-estimate"} shows that $\mathrm{dist}(x_N, \partial B_N(o))\to\infty$ as $N\to\infty$. In particular, for $N$ large enough, we may apply Proposition [Proposition 14](#prop:interior-estimate){reference-type="ref" reference="prop:interior-estimate"} to conclude that $\sup_N\sup_{B_N(o)}\mathrm{dist}(h_N, \tilde{r})<\infty$. This is a contradiction. ◻ ## Proof of Proposition [Proposition 14](#prop:interior-estimate){reference-type="ref" reference="prop:interior-estimate"} {#subsec:prop-interior-estimate} This follows immediately from Theorem [Theorem 5](#thm:generalized-interior-estimate){reference-type="ref" reference="thm:generalized-interior-estimate"}, once we show that the family $$\{\tilde{r}:(X,x)\to (X, \tilde{r}(x))\text{ for }x\in N_D(C)\}$$ is uniformly non-collapsing. Note that a proof identical to that of Proposition [Proposition 6](#prop:qia-deform){reference-type="ref" reference="prop:qia-deform"} shows the following. **Proposition 16**. 
*Let $\mathcal{F}$ be a uniformly non-collapsing family, let $D>0$, and let $\tilde{\mathcal{F}}$ be a uniformly Lipschitz family of maps between pointed pinched Hadamard manifolds. Assume that for any $\tilde{f}:(X,x)\to (Y,y)$ in $\tilde{\mathcal{F}}$, there exists a map $f:(X,x)\to(Y,y)$ in $\mathcal{F}$, such that $\sup_X \mathrm{dist}(f, \tilde{f})<D$. Then $\tilde{\mathcal{F}}$ is uniformly non-collapsing.* In particular, it suffices to show that the family $$\begin{aligned} \{r:(X, x)\to (X, r(x))\text{ for }x\in N_D(C)\} \end{aligned}$$ is uniformly non-collapsing. The rest of this subsection is devoted to showing this. We first check Definition [Definition 5](#dfn:uniformly-inner){reference-type="ref" reference="dfn:uniformly-inner"}(1). Fix some $x\in N_D(C)$, and set $\rho(y)=\mathrm{dist}({r}(x), {r}(y))$. From Definition [Definition 2](#dfn:admissible){reference-type="ref" reference="dfn:admissible"}, we see that there exists some $\xi\in\partial_\infty X$ such that $\partial B_R(x)\cap\mathrm{Cone}(x\xi, \theta)\subseteq \partial B_R(x)\cap C$. Here $\theta=\theta(D)$ is a constant depending only on $D$, which we now fix. Then we have $$\begin{aligned} \int_{\partial B_R(x)} \rho(y)&\geq \int_{\partial B_R(x)\cap C} \rho(y)=\int_{\partial B_R(x)\cap C}\mathrm{dist}({r}(x), y)\\ &\geq \sigma_{x,R}(\mathrm{Cone}(x\xi, \theta)\cap \partial B_R(x)) (R-\mathrm{dist}(x, r(x)))\\ &\gtrsim R-D\approx R, \end{aligned}$$ where we used the fundamental estimate of Benoist--Hulin [@Benoist2020HarmonicMO] that $\sigma_{x, R}(\mathrm{Cone}(x\xi, \theta)\cap\partial B_R(x))\gtrsim 1$, and where we assumed $R>2D$. We now turn to Definition [Definition 5](#dfn:uniformly-inner){reference-type="ref" reference="dfn:uniformly-inner"}(2). **Claim 17**. Let $x\in X$ be such that $\mathrm{dist}(x, C)\leq D$. Let $y\in C$ be such that $\mathrm{dist}(r(x), y)=R$. Then for any $z\in r^{-1}(y)$, we have $\measuredangle_{r(x)}(y, z)\leq\pi e^{-aR}$. *Proof.* We let $w=r(x)$. 
Since $w\in C$ and $y=r(z)$ is the nearest point of $C$ to $z$, we have $\measuredangle_y(z, w)\geq\frac{\pi}{2}$. Moreover, $\mathrm{dist}(w, y)=R\geq a^{-1}$. Let $\bar{w}\bar{y}\bar{z}$ be a comparison triangle for $wyz$ in the hyperbolic plane of curvature $-a^2$. Then $$\begin{aligned} \measuredangle_w(y, z)\leq \measuredangle_{\bar{w}}(\bar{y}, \bar{z}), \end{aligned}$$ so it suffices to estimate $\measuredangle_{\bar{w}}(\bar{y}, \bar{z})$. Note that $\measuredangle_{\bar{y}}(\bar{w}, \bar{z})\geq \measuredangle_{y}(w, z)\geq \frac{\pi}{2}$, and we can assume without loss of generality that $\measuredangle_{\bar{y}}(\bar{w}, \bar{z})=\frac{\pi}{2}$. Then the dual hyperbolic law of cosines shows $$\begin{aligned} \cos\measuredangle_{\bar{z}}(\bar{w}, \bar{y})=\sin\measuredangle_{\bar{w}}(\bar{y}, \bar{z}) \cosh(aR)\geq\frac{2\measuredangle_{\bar{w}}(\bar{y}, \bar{z})}{\pi} \frac{e^{aR}}{2}, \end{aligned}$$ and hence $\measuredangle_{\bar{w}}(\bar{y}, \bar{z})\leq \pi e^{-aR}$, which concludes the proof. ◻ We now have for $x\in N_D(C)$, $$\begin{aligned} r^{-1}\left(\mathrm{Cone}(r(x)\xi, \theta)\setminus B_{M}(r(x))\right)\subseteq \mathrm{Cone}(r(x)\xi, \theta+\pi e^{-aM}), \end{aligned}$$ and hence in particular $$\begin{aligned} \partial B_R(x)\cap r^{-1}\left(\mathrm{Cone}(r(x)\xi,\theta)\setminus B_{\sqrt{R}}(r(x))\right)&\subseteq \partial B_R(x)\cap \mathrm{Cone}(r(x)\xi, \theta+\pi e^{-a\sqrt{R}})\\ &\subseteq \partial B_R(x)\cap \mathrm{Cone}(x\xi,\tilde{\theta}(R, \theta)), \end{aligned}$$ where $\tilde{\theta}(R, \theta)\to 0$ as $R\to\infty, \theta\to 0$. Here in going from the first to the second line, we used Proposition [Proposition 7](#proposition:moving-cone){reference-type="ref" reference="proposition:moving-cone"}. From the work of Benoist--Hulin [@Benoist2020HarmonicMO], we see that $\sigma_{x,R}(\partial B_R(x)\cap \mathrm{Cone}(x\xi, \tilde{\theta}))\to 0$ as $\theta\to 0, R\to\infty$. 
Therefore Definition [Definition 5](#dfn:uniformly-inner){reference-type="ref" reference="dfn:uniformly-inner"}(2) holds, and Proposition [Proposition 14](#prop:interior-estimate){reference-type="ref" reference="prop:interior-estimate"} is shown. # Admissible convex sets in hyperbolic spaces {#sec:admissible} In this section we prove Theorem [Theorem 3](#thm:main-admissible){reference-type="ref" reference="thm:main-admissible"}, which readily follows from the lemma below. For a set $S\subseteq\partial_\infty\mathbb{H}^n$, denote by $\mathrm{CH}(S)$ the closed convex hull of $S$. **Lemma 18**. *Let $S\subseteq\mathbb{S}^{n-1}$ be an open set with quasiconformal boundary. Then for any $D>0$, there exists an angle $\theta=\theta(D)>0$, such that for any $x\in N_D(\mathrm{CH}(S))$, there exists $\xi\in S$ such that $\mathrm{Cone}(x\xi, \theta)\cap\mathbb{S}^{n-1}\subseteq S$.* We prove Lemma [Lemma 18](#lm:boundary-analysis){reference-type="ref" reference="lm:boundary-analysis"} by contradiction. Assuming there is a sequence $x_i\in N_D(\mathrm{CH}(S))$ that provides a contradiction to the claim in Lemma [Lemma 18](#lm:boundary-analysis){reference-type="ref" reference="lm:boundary-analysis"}, it is easy to see that, possibly after passing to a subsequence, it has to converge to some point in the boundary at infinity $\xi \in \bar{S}\subseteq\partial_\infty\mathbb{H}^n=\mathbb{S}^{n-1}$. Then by assumption we can map $S$ in a neighbourhood of $\xi$ to some standard model using a quasiconformal map. By a classical result of Tukia--Väisälä [@Tukia19829uasiconformalEF], this quasiconformal map can be extended to a bi-Lipschitz self-map $F$ of $\mathbb{H}^n$. We then prove the claim of Lemma [Lemma 18](#lm:boundary-analysis){reference-type="ref" reference="lm:boundary-analysis"} for this standard model, and transport it back to $S$ using the map $F$. All of this is done in §[6.1](#subsec:bdry-analysis){reference-type="ref" reference="subsec:bdry-analysis"}. 
Theorem [Theorem 3](#thm:main-admissible){reference-type="ref" reference="thm:main-admissible"} then follows by simple hyperbolic geometry, which we explain in §[6.2](#subsec:pf-thm-main-admissible){reference-type="ref" reference="subsec:pf-thm-main-admissible"}. ## Boundary analysis: Proof of Lemma [Lemma 18](#lm:boundary-analysis){reference-type="ref" reference="lm:boundary-analysis"} {#subsec:bdry-analysis} Suppose that the conclusion of Lemma [Lemma 18](#lm:boundary-analysis){reference-type="ref" reference="lm:boundary-analysis"} fails. Then there exists a sequence $x_i\in\mathbb{H}^n$ such that $\sup_i \mathrm{dist}(x_i, \mathrm{CH}(S))<\infty$, and such that $$\begin{aligned} \label{eq:contrary-assumption} \sup\{\theta:\mathrm{Cone}(x_i\xi, \theta)\cap\mathbb{S}^{n-1}\subseteq S\text{ for some }\xi\in S\}\to 0 \end{aligned}$$ as $i\to\infty$. Note that if $x_i$ remain in some compact set, after passing to a subsequence, we may assume that $x_i\to x_\infty\in \mathbb{H}^n$. Since $S$ is an open set, this contradicts ([\[eq:contrary-assumption\]](#eq:contrary-assumption){reference-type="ref" reference="eq:contrary-assumption"}). Assume therefore that $x_i\to s\in\partial_\infty \mathbb{H}^n$, possibly after passing to a subsequence. Since $\sup_i \mathrm{dist}(x_i, \mathrm{CH}(S))<\infty$, we have $s\in \bar{S}$. Before continuing with the proof, we define a different version of the cone that will be more convenient for us to work with. We define for $x\in\mathbb{H}^n, \xi\in\mathbb{S}^{n-1}$, and $D>0$, the set $$\begin{aligned} C^D_{x\xi}=\{\eta\in\mathbb{S}^{n-1}:\mathrm{dist}(x, [\xi,\eta])\geq D\}. \end{aligned}$$ It is classical that for some absolute constant $C>1$, we have $$\begin{aligned} \mathrm{Cone}(x\xi, C^{-1}e^{-D})\cap\partial_\infty\mathbb{H}^n\subseteq C^{D}_{x\xi}\subseteq \mathrm{Cone}(x\xi, Ce^{-{D}})\cap\partial_\infty\mathbb{H}^n. 
\end{aligned}$$ It therefore suffices to show that there exist $D>0$ and $\xi_i\in S$ such that $C_{x_i\xi_i}^{D}\subseteq S$ for all $i$ large enough. The rest of the proof is devoted to showing this. If $s\in S\setminus\partial S$, we may set $\xi_i=s$ and pick an arbitrary $D>0$. Therefore assume $s\in\partial S$. Let $U$ be an open set containing $s$, and $f:U\to V\subseteq\mathbb{R}^{n-1}$ be a quasiconformal homeomorphism, such that $f(s)=0$ and $$\begin{aligned} f(S\cap U)=V\cap(\mathbb{R}_{+}\times\mathbb{R}^{n-2}). \end{aligned}$$ By [@Tukia19829uasiconformalEF Theorem 3.2], we can extend $f$ to a map $$\begin{aligned} F:\mathcal{U}\to\mathcal{V}, \end{aligned}$$ where $\mathcal{U}$ and $\mathcal{V}$ are neighbourhoods of $U, V$ in $\mathbb{H}^n$, respectively, such that $F$ is $L$-bi-Lipschitz for the hyperbolic metric, meaning $$\begin{aligned} L^{-1}\mathrm{dist}(a,b)\leq\mathrm{dist}(F(a), F(b))\leq L\mathrm{dist}(a, b). \end{aligned}$$ We let $y_i=F(x_i)$. Note that since $F$ is bi-Lipschitz, by the Morse lemma we have $$\begin{aligned} \sup_i \mathrm{dist}(y_i, \mathrm{CH}(f(S\cap U)))<\infty. \end{aligned}$$ **Claim 19**. There exist a sequence $\eta_i\in V\cap (\mathbb{R}_{+}\times\mathbb{R}^{n-2})$ and $D<\infty$ such that $C_{y_i\eta_i}^D\subseteq V\cap(\mathbb{R}_{+}\times\mathbb{R}^{n-2})$. We first show how to complete the proof assuming Claim [Claim 19](#claim:hyp-space-straight-submfd){reference-type="ref" reference="claim:hyp-space-straight-submfd"}. Suppose therefore that $\eta_i, D$ are as in Claim [Claim 19](#claim:hyp-space-straight-submfd){reference-type="ref" reference="claim:hyp-space-straight-submfd"}. We set $\xi_i=F^{-1}(\eta_i)$. Since $F$ is $L$-bi-Lipschitz, $$\begin{aligned} f\left(C_{x_i\xi_i}^{L(D+M)}\right)\subseteq C_{y_i\eta_i}^D. \end{aligned}$$ Here $M=M(L)$ is a constant with the property that $$\begin{aligned} \mathrm{dist}(F([a, b]), [F(a), F(b)])\leq M. 
\end{aligned}$$ The existence of such a constant is the content of the well-known Morse lemma. From the conclusion of Claim [Claim 19](#claim:hyp-space-straight-submfd){reference-type="ref" reference="claim:hyp-space-straight-submfd"}, we now have $$\begin{aligned} C_{x_i\xi_i}^{L(D+M)}\subseteq f^{-1}\left(V\cap(\mathbb{R}_{+}\times\mathbb{R}^{n-2})\right)\subseteq S, \end{aligned}$$ as desired. *Proof of Claim [Claim 19](#claim:hyp-space-straight-submfd){reference-type="ref" reference="claim:hyp-space-straight-submfd"}.* Suppose the conclusion of the claim does not hold, and pass to a subsequence such that $\sup\{D:C_{y_i\eta_i}^D\subseteq V\}\to\infty$ as $i\to\infty$. Let $A_i$ be an isometry of $\mathbb{H}^n$ such that $A_i(y_i)=y_0$, and $A_i(0)=0$, where $0\in \mathbb{R}^{n-1}=\partial_\infty\mathbb{H}^n$. Then $A_i(\mathbb{R}_+\times\mathbb{R}^{n-2})$ is the boundary at infinity of a halfspace in a totally geodesic copy $G_i$ of $\mathbb{H}^{n-1}$ in $\mathbb{H}^n$. But $\sup_i\mathrm{dist}(y_0, G_i)<\infty$ and $0\in G_i\cap\partial_\infty\mathbb{H}^n$, so we may pass to a subsequence along which $G_i\to G$, and thus $A_i(\mathbb{R}_+\times\mathbb{R}^{n-2})\to G\cap\partial_\infty\mathbb{H}^n$, where $G$ is some halfspace in a totally geodesic copy of $\mathbb{H}^{n-1}$ lying in $\mathbb{H}^n$. Then there exist some $\eta\in G\cap\partial_\infty\mathbb{H}^n, D<\infty$ such that $$\begin{aligned} C_{y_0\eta}^D\cap \partial G\cap\partial_\infty\mathbb{H}^n=\emptyset. \end{aligned}$$ Taking $\eta_i=A_i^{-1}(\eta)$, we see that for large enough $i$, $$\begin{aligned} C_{y_i\eta_i}^{2D}\subseteq V\cap(\mathbb{R}_+\times\mathbb{R}^{n-2}), \end{aligned}$$ which is a contradiction. 
◻ ## Proof of Theorem [Theorem 3](#thm:main-admissible){reference-type="ref" reference="thm:main-admissible"} {#subsec:pf-thm-main-admissible} For any $x\in N_D(C)$, there exists by Lemma [Lemma 18](#lm:boundary-analysis){reference-type="ref" reference="lm:boundary-analysis"} an angle $\theta=\theta(D)>0$ and $\xi\in \partial_\infty \mathbb{H}^n$ such that $$\begin{aligned} \mathrm{Cone}(x\xi, \theta)\cap\partial_\infty \mathbb{H}^n\subseteq U. \end{aligned}$$ We claim that for all $R>R_0=R_0(D)$, $$\begin{aligned} \label{eq:cone-containment} \mathrm{Cone}\left(x\xi, \frac{\theta}{12}\right)\cap\partial B_R(x)\subseteq\mathrm{CH}\left(\mathrm{Cone}(x\xi, \theta)\cap \partial_\infty X\right). \end{aligned}$$ Note that ([\[eq:cone-containment\]](#eq:cone-containment){reference-type="ref" reference="eq:cone-containment"}) immediately shows admissibility of $\mathrm{CH}(U)$, so the rest of this subsection is devoted to showing ([\[eq:cone-containment\]](#eq:cone-containment){reference-type="ref" reference="eq:cone-containment"}). Let $y\in\mathrm{Cone}\left(x\xi, \frac{\theta}{12}\right)\cap\partial B_R(x)$ be arbitrary. Pick any point $\eta_1\in\partial_\infty \mathbb{H}^n$ such that $\frac{\theta}{6}<\measuredangle_x(\eta_1, \xi)<\frac{\theta}{3}$, and let $\eta_2\in\partial_\infty \mathbb{H}^n$ be such that $y\in[\eta_1, \eta_2]$. Then in particular we have $$\begin{aligned} \frac{\theta}{12}<\measuredangle_x(\eta_1, y)<\frac{5}{12}\theta. \end{aligned}$$ Claim [Claim 20](#claim:tiny-angle-side){reference-type="ref" reference="claim:tiny-angle-side"} below shows that, for $R$ large enough depending on $\theta$, we have $$\begin{aligned} \measuredangle_x(y, \eta_2)<\frac{\theta}{12}. \end{aligned}$$ Thus $\measuredangle_{x}(\eta_1, \eta_2)<\frac{\theta}{2}$, and hence $\eta_2\in\mathrm{Cone}(x\xi, \theta)\cap\partial_\infty \mathbb{H}^n$. 
Then the set $\mathrm{CH}\left(\mathrm{Cone}(x\xi,\theta)\cap\partial_\infty \mathbb{H}^n\right)$ contains the entire geodesic $[\eta_1, \eta_2]$, and hence also contains $y$. **Claim 20**. Let $x, y\in \mathbb{H}^n$ and $\xi, \eta\in\partial_\infty \mathbb{H}^n$ be such that $y\in [\xi, \eta]$. If $\measuredangle_x(\xi, y)=\alpha$ and $\mathrm{dist}(x, y)=R$, we have $$\begin{aligned} \measuredangle_x(y, \eta)\lesssim e^{-2R}, \end{aligned}$$ where the implicit constant depends on $\alpha$. *Proof.* Write $\beta\coloneqq\measuredangle_x(y, \eta)$. By the dual hyperbolic law of cosines applied to the triangles $xy\xi$ and $xy\eta$, we see that $$\begin{gathered} 1=-\cos\alpha \cos\measuredangle_{{y}}({\xi},{x})+\sin\alpha\sin\measuredangle_{{y}}({\xi}, {x})\cosh(R),\label{eq:cosines-1} \\ 1=\cos\beta\cos\measuredangle_{{y}}({\xi}, {x}) + \sin\beta\sin\measuredangle_{{y}}({\xi}, {x})\cosh(R).\label{eq:cosines-2} \end{gathered}$$ It follows from ([\[eq:cosines-1\]](#eq:cosines-1){reference-type="ref" reference="eq:cosines-1"}) that for large $R$, we have $\measuredangle_{{y}}({x}, {\xi})\lesssim e^{-R}$. Straightforward analysis of ([\[eq:cosines-2\]](#eq:cosines-2){reference-type="ref" reference="eq:cosines-2"}) then implies $\beta\lesssim \measuredangle_{{y}}({\xi},{x})^2\lesssim e^{-2R}$. ◻
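As a quick numerical sanity check of the estimate in Claim 20 (not part of the proof), one can solve the two law-of-cosines identities in its proof for the unknown angles by bisection and watch the $e^{-2R}$ decay of $\beta$; the sketch below is purely illustrative.

```python
import math

def angle_at_y(alpha, R):
    """Solve 1 = -cos(alpha)cos(g) + sin(alpha)sin(g)cosh(R) for g by bisection."""
    f = lambda g: -math.cos(alpha) * math.cos(g) \
        + math.sin(alpha) * math.sin(g) * math.cosh(R) - 1.0
    lo, hi = 1e-12, math.pi / 2  # f(lo) < 0 < f(hi) for moderate R
    for _ in range(200):
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

def beta(alpha, R):
    """Solve 1 = cos(b)cos(g) + sin(b)sin(g)cosh(R) for b, the angle at x between y and eta."""
    g = angle_at_y(alpha, R)
    h = lambda b: math.cos(b) * math.cos(g) \
        + math.sin(b) * math.sin(g) * math.cosh(R) - 1.0
    lo, hi = 1e-15, math.pi / 2
    for _ in range(200):
        mid = (lo + hi) / 2
        if h(lo) * h(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

alpha = math.pi / 4
# beta * e^{2R} stays bounded as R grows, matching beta ≲ e^{-2R}
ratios = [beta(alpha, R) * math.exp(2 * R) for R in (5.0, 8.0, 11.0)]
print(ratios)
```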
{ "id": "2310.05796", "title": "Harmonic projections in negative curvature II: large convex sets", "authors": "Ognjen To\\v{s}i\\'c", "categories": "math.DG", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- author: - "Xin Guo[^1]" - "Yufei Zhang[^2]" bibliography: - potential_game.bib title: Towards An Analytical Framework for Potential Games --- **Abstract.** Potential game is an emerging notion and framework for studying multi-agent games, especially with heterogeneous agents. To date, potential games have been studied mostly from the algorithmic aspect, in approximating and computing Nash equilibria without verifying whether the game is in fact a potential game, due to the lack of analytical structure. In this paper, we aim to build an analytical framework for dynamic potential games. We prove that a game is a potential game if and only if each agent's value function can be decomposed into a potential function and a residual term that depends solely on other agents' actions. This decomposition enables us to identify and analyze a new and important class of potential games called the distributed game. Moreover, by an appropriate notion of functional derivatives, we prove that a game is a potential game if the value function has a symmetric Jacobian. Consequently, for a general class of continuous-time stochastic games, their potential functions can be further characterised from both the probabilistic and the PDE approaches. The consistency of these two characterisations is shown for a class of linear-quadratic games. **Key words.** Potential game, potential function, closed-loop Nash equilibrium, linear derivatives, distributed game, stochastic differential game **AMS subject classifications.** 91A14, 91A06, 91A15 # Introduction Multi-agent games are of paramount importance in various fields, spanning economics, engineering, biology, and ecology (see e.g., [@fudenberg1991game; @bacsar1998dynamic]). These games model strategic interactions among multiple agents, where each agent aims to optimise her individual objective based on the observed system state and actions of other agents. Multi-agent games are notoriously difficult to analyze. 
One well-established paradigm for analysing multi-agent games is the mean field framework [@caines2006large; @lasry2007mean]. Its ingenious yet simple aggregation ideas enable efficient approximation of the Nash equilibrium of games with a large number of agents. However, such an approximation typically requires that agents be (approximately) homogeneous and interact weakly and symmetrically. As a result, the mean field approach is not appropriate for general games with heterogeneous agents and/or with general forms of interactions. Another alternative and emerging framework for studying multi-agent games, especially with heterogeneous agents, is the potential game introduced by [@monderer1996potential]. The analysis of such a game is through the notion of a potential function: the change in the value function of any agent who unilaterally deviates from her policy is given by the change in the value of the potential function. Once the potential function is identified, finding a Nash equilibrium of the game can be reduced to solving for the global optimum of the potential function. For a particular class of potential games, called the Markov potential game, whose potential function is a function of policies and states, provably convergent multi-agent reinforcement learning algorithms have been developed, along with an extensive body of research on the approximation and computation of Nash equilibria [@maheshwari2022independent; @zhang2021gradient; @song2021can; @mao2021decentralized; @ding2022independent; @fox2022independent; @zhangglobal; @marden2012state; @macua2018learning; @narasimha2022multi]. These algorithmic studies are, however, built *without* verifying *a priori* whether the game under consideration is actually a potential game. Despite the promise of potential games, there are limited studies on the analytical structures of their value functions and potential functions. 
Consequently, it is hard to identify a potential game, by either constructing the potential function explicitly or simply verifying its existence. This challenge is particularly pronounced for dynamic games that do not satisfy certain restrictive assumptions, such as the state dynamics being independent of players' actions or all players' payoff functions being identical. To date, the only known analytical structure of dynamic potential games is established for a discrete setting, where separability of value functions has been identified in [@leonardos2021global]. With this separability property, one class of dynamic games known as team Markov games has been identified and remains the primary example in potential game studies [@littman2001value; @wang2002reinforcement]. Consequently, the vast majority of existing literature on potential games focuses on algorithms for computing Nash equilibria of potential games [@marden2009joint; @macua2018learning; @marden2012state; @leonardos2021global; @ding2022independent]. #### Our work. In this paper, we take one step towards building an analytical framework to characterise dynamic potential games. - We prove that a game is a potential game if and only if each agent's value function can be decomposed into two components: a potential function and a residual term solely dependent on other agents' actions (Theorem [Theorem 2](#thm:separation_value){reference-type="ref" reference="thm:separation_value"}). - We apply this decomposition characterisation to identify a new and important class of potential games called the distributed game (Section [3](#sec:distributed_game){reference-type="ref" reference="sec:distributed_game"}). For this class of games, potential functions are constructed explicitly using games' cost functions. To the best of our knowledge, this is the first systematic characterisation of potential functions for distributed games. 
It offers a recipe for constructing distributed closed-loop Nash equilibria among heterogeneous agents, for this (previously intractable) large-scale game. - We introduce a notion of linear derivative to characterise the sensitivity of the value function with respect to unilateral deviations of policies (Definition [Definition 4](#def:linear_deri){reference-type="ref" reference="def:linear_deri"}). Leveraging this concept, we prove that a game is a potential game if the value functions have a symmetric Jacobian formed by their second-order derivatives (Theorem [Theorem 8](#thm:symmetry_value_sufficient){reference-type="ref" reference="thm:symmetry_value_sufficient"}). Moreover, we construct the potential function using first-order derivatives of value functions. Notably, this characterisation applies to dynamic games with continuous state and action spaces, with general infinite-dimensional policy classes. - For continuous-time stochastic games, where the state dynamics involves controlled drift and diffusion coefficients, we characterise the potential function via two approaches: one probabilistic approach, where the potential function is represented via sensitivity processes of the state and control processes with respect to policies, and another where the potential function is expressed via a system of linear PDEs. We show that these two characterisations are consistent for a class of linear-quadratic games, which can be characterised more simply through a system of ordinary differential equations. One of the key challenges in developing the general analytical framework for the dynamic potential game is to develop an appropriate notion of the functional derivative for possibly continuous time/state spaces and general policy classes. 
For instance, in the discrete-time setting of [@monderer1996potential; @leonardos2021global; @hosseinirad2023general], by finite-dimensional parameterizations for policies, they adopt the (Fréchet) derivatives and show that the game is a potential game if the second-order derivatives of the value functions are symmetric. Neither Fréchet derivative nor parametrization is appropriate for the general framework: the set of admissible policies becomes infinite-dimensional and may not be a normed space; for games with stochastic policies, the admissible policies take values in the space of probability measures and hence do not form a vector space. To overcome the above difficulties, we define the derivative of the value function along a convex combination of two policies. Compared with the Fréchet derivative, such a directional derivative is easier to compute and well-defined under weaker conditions. It also eliminates the need to select a norm over the possibly infinite-dimensional policy class, providing more flexibility in the analysis (see Theorem [Theorem 8](#thm:symmetry_value_sufficient){reference-type="ref" reference="thm:symmetry_value_sufficient"}). In contrast to the mean field approach and its graphon extensions, which either require homogeneity in agents' state dynamics and symmetric interaction among agents and policy classes, or at least some specific *asymptotic* structures of the cost functions ([@gao2020linear; @aurell2022stochastic; @lacker2022label]), the potential game framework offers a new perspective on constructing (closed-loop) Nash equilibria that is non-asymptotic in nature. 
It also accommodates heterogeneity among agents, allowing for different action sets and state dynamics, different dependencies on individual behavior, and different interaction strengths between pairs of agents (see Theorem [Theorem 4](#thm:distributed_game_differentiable){reference-type="ref" reference="thm:distributed_game_differentiable"} and Proposition [Proposition 5](#prop:distribted_qudratic_MF){reference-type="ref" reference="prop:distribted_qudratic_MF"}). #### Notation. For each $n\in {\mathbb{N}}$, we denote by ${\mathbb{S}}^n$ the space of $n\times n$ symmetric matrices. For each $T>0$, $n\in {\mathbb{N}}$, $\alpha\in (0,1]$, probability space $(\Omega, \mathcal{F}, \mathbb{P})$ and Euclidean space $(E, |\cdot|)$, we introduce the following spaces: - $\mathcal{S}^\infty([t,T];E)$, $t\in [0,T]$, is the space of measurable processes $X:\Omega\times [t,T]\rightarrow E$ such that ${\mathbb{E}}[\sup_{s\in [t,T]} |X_s|^q]<\infty$ for all $q\ge 1$; - $\mathcal{H}^\infty([t,T];E)$, $t\in [0,T]$, is the space of measurable processes $X:\Omega\times [t,T]\rightarrow E$ such that ${\mathbb{E}}[\int_t^T |X_s|^q {\mathrm{d}}s]<\infty$ for all $q\ge 1$; - $C^{i+\alpha/2, j+\alpha}([0,T]\times {\mathbb R}^{n}; E)$, $i,j\in {\mathbb{N}}\cup\{0\}$, is the space of functions $u:[0,T]\times {\mathbb R}^{n}\rightarrow E$ such that $\|u\|_{{i+\alpha/2,j+\alpha }} \coloneqq \sum_{\ell= 0}^{i-1} \|\partial^{\ell}_t u\|_{0 } + \sum_{\ell= 0}^{j-1} \|\partial^\ell_x u\|_{0} + [\partial^{i}_t u]_{\alpha/2 } +[\partial^{j}_x u]_{\alpha } <\infty$, where $\|u\|_{0}\coloneqq \sup_{(t,x) \in [0,T]\times {\mathbb R}^n} |u(t,x)|$ and $[u]_{\alpha} \coloneqq \sup_{(t,x),(t',x')\in [0,T]\times {\mathbb R}^n} \frac{|u(t,x)-u(t',x')|}{(|t-t'|^{1/2}+|x-x'|)^\alpha }$. Similar definitions extend to $C^{i+\alpha/2, j+\alpha,k+\alpha}([0,T]\times {\mathbb R}^{n}\times {\mathbb R}^{m}; E)$ for functions with two space variables. 
Throughout the paper, unless otherwise stated, proofs of theorems and propositions are deferred to Section [5](#sec:main_proof){reference-type="ref" reference="sec:main_proof"}. # Potential Game and its Nash Equilibrium To introduce the mathematical framework for potential games, let us start with some basic notions for the game and associated policies. Consider a game $\mathcal{G}=(I_N, \mathcal{S}, (A_i)_{i\in I_N}, \pi^{(N)}, (V_i)_{i\in I_N})$, where $I_N= \{1,\ldots, N\}$, $N\in \mathbb{N}$, is a finite set of agents; $\mathcal{S}$ is a topological space representing the state space of the underlying dynamics; $A_i$ is a Borel subset of a topological vector space representing the action set of agent $i$; $\pi^{(N)}= \prod_{i\in I_N} \pi_i$ is the set of joint policy profiles of all players, where $\pi_i$ is a set of Borel measurable functions $\phi_i:\mathcal{S}\rightarrow A_i$ representing the admissible policies of agent $i$; and $V_i: \mathcal{S}\times \pi^{(N)} \rightarrow{\mathbb R}$ is the value function of agent $i$, where $V_i^{s}(\phi)\coloneqq V_i(s,\phi)$ is agent $i$'s expected cost if the state dynamics starts at the state $s\in \mathcal{S}$ and all agents take the policy profile $\phi \in \pi^{(N)}$. For each $i\in I_N$, we denote by $\pi^{(N)}_{-i}= \prod_{j\in I_N\setminus \{i\}} \pi_j$ the set of policy profiles of all players except agent $i$. The elements of $\pi^{(N)}$ and $\pi^{(N)}_{-i}$ are denoted by ${\phi} =(\phi_i)_{i\in I_N}$ and $\phi_{-i}=(\phi_j)_{j\in I_N\setminus \{i\}}$, respectively. Note that this game includes static games as well as discrete-time and continuous-time dynamic games, and its value function $V_i^{s}(\phi)$ may depend on the underlying state system, whose time and space variables are referred to collectively as the state variable. There are two related yet distinct notions of Nash equilibrium associated with this game $\mathcal{G}$: closed-loop and Markovian types. **Definition 1** (Nash equilibrium). 
A policy profile $\phi \in \pi^{(N)}$ is a closed-loop Nash equilibrium for $\mathcal{G}$ with initial state $s_0\in \mathcal{S}$ if $$V^{s_0}_i((\phi_i,\phi_{-i}))\le V^{s_0}_i((\phi'_i,\phi_{-i})) \quad \forall i\in I_N, \phi'_{i}\in \pi_{i}.$$ A policy profile $\phi \in \pi^{(N)}$ is a Markov Nash equilibrium for $\mathcal{G}$ if $$V^{s}_i((\phi_i,\phi_{-i}))\le V^{s}_i((\phi'_i,\phi_{-i})) \quad \forall s\in \mathcal{S}, i\in I_N, \phi'_{i}\in \pi_{i}.$$ Definition [Definition 1](#def:NE){reference-type="ref" reference="def:NE"} is consistent with the concept of closed-loop and Markov equilibrium introduced in stochastic differential games, as described in [@carmona2018probabilistic]. Note that a closed-loop equilibrium $\phi$ can depend on the initial state $s_0$ of the system dynamics, while Markov equilibrium requires that the policy profile $\phi$ be a closed-loop Nash equilibrium for *all* initial states. The focus of this paper is on a class of games $\mathcal{G}$, called potential games. **Definition 2** (Potential game). A game $\mathcal{G}$ with initial state $s_0\in \mathcal{S}$ is a closed-loop potential game (CLPG), if there exists $\Phi^{s_0}: \pi^{(N)}\rightarrow{\mathbb R}$, called a potential function, such that for all $i\in I_N$, $\phi_i,\phi_i'\in \pi_i$ and $\phi_{-i}\in \pi^{(N)}_{-i}$, $$\label{eq:MPG} \Phi^{s_0}((\phi'_i,\phi_{-i})) -\Phi^{s_0}((\phi_i,\phi_{-i})) = V^{s_0}_i((\phi'_i,\phi_{-i})) -V^{s_0 }_i((\phi_i,\phi_{-i})).$$ A game $\mathcal{G}$ is a Markov potential game (MPG), if there exists a potential function $\Phi: \mathcal{S}\times \pi^{(N)}\rightarrow{\mathbb R}$ such that for all $s_0\in \mathcal{S}$, $\Phi(s_0,\cdot)$ is a potential function of $\mathcal{G}$ with initial state $s_0\in \mathcal{S}$. 
Intuitively, a game $\mathcal{G}$ is a potential game if there exists a potential function such that whenever one agent unilaterally deviates from her policy, the change of the potential function equals the change of that agent's value function. The existence of a potential function depends crucially on the set $\pi^{(N)}$ of admissible policy profiles. The broader the class of admissible policy profiles $\pi^{(N)}$, the stricter the requirements on the system dynamics and payoff functions to ensure the existence of a potential function. By Definition [Definition 2](#def:MPG){reference-type="ref" reference="def:MPG"}, a Nash equilibrium of a potential game can be obtained by optimising its potential function, as shown in the following proposition. **Proposition 1**. *If there exists $s_0\in \mathcal{S}$ such that $\mathcal{G}$ with initial state $s_0$ is a CLPG with potential function $\Phi^{s_0}$, then any $\phi^*\in \mathop{\mathrm{arg\,min}}_{\phi\in \pi^{(N)} }\Phi^{s_0}(\phi)$ is a closed-loop Nash equilibrium of $\mathcal{G}$ with initial state $s_0$. Similarly, if $\mathcal{G}$ is an MPG with potential function $\Phi$ and $\phi^*\in \pi^{(N)}$ satisfies $\phi^*\in \mathop{\mathrm{arg\,min}}_{\phi\in \pi^{(N)} }\Phi(s, \phi)$ for all $s\in \mathcal{S}$, then $\phi^*$ is a Markov Nash equilibrium of $\mathcal{G}$.* Proposition [Proposition 1](#prop:ne_mpg){reference-type="ref" reference="prop:ne_mpg"} shows the importance of an explicit criterion for the existence or the construction of a potential function for a given potential game. The potential function of a dynamic game, if it exists, is however generally difficult to identify. Alternatively, one may analyze the game via the separability of its value functions. **Theorem 2**. 
*A game $\mathcal{G}=(I_N, \mathcal{S}, (A_i)_{i\in I_N}, \pi^{(N)}, (V_i)_{i\in I_N})$ with initial state $s_0\in \mathcal{S}$ is a CLPG with potential $\Phi^{s_0}$ if and only if for all $i\in I_N$, there exists $U^{s_0}_i: \pi^{(N)}_{-i}\rightarrow{\mathbb R}$ such that $$\label{eq:separation_value} V^{s_0}_i((\phi_i, \phi_{-i}))= \Phi^{s_0} ((\phi_i, \phi_{-i})) + U^{s_0}_i(\phi_{-i}), \quad \forall\phi_i\in \pi_i, \phi_{-i}\in \pi^{(N)}_{-i}.$$ Furthermore, a game $\mathcal{G}$ is an MPG if and only if [\[eq:separation_value\]](#eq:separation_value){reference-type="eqref" reference="eq:separation_value"} holds for all $s_0\in \mathcal{S}$ and $i\in I_N$.* Theorem [Theorem 2](#thm:separation_value){reference-type="ref" reference="thm:separation_value"} generalizes the results in [@leonardos2021global], which focuses on discrete MPGs with stationary policies, to general games: a game is a potential game if and only if the value function of each agent can be decomposed into two terms: one that is common for all players, namely the potential function, and another that may differ across agents but depends only on the actions of other agents. The best-known example of an MPG is the team Markov game, as shown by Theorem [Theorem 2](#thm:separation_value){reference-type="ref" reference="thm:separation_value"}. **Example 1** (Team Markov Game). This is a game where all agents have a common interest. That is, in this game $\mathcal{G}=( I_N, \mathcal{S}, (A_i)_{i\in I_N}, \pi^{(N)}, (V_i)_{i\in I_N})$, $V_i=V_j$ for all $i,j \in I_N$; see [@littman2001value; @wang2002reinforcement]. In this setting, choosing $U_i\equiv 0$ in Theorem [Theorem 2](#thm:separation_value){reference-type="ref" reference="thm:separation_value"} shows that $\mathcal{G}$ is an MPG with potential function $\Phi\equiv V_1$. 
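For intuition, the decomposition [\[eq:separation_value\]](#eq:separation_value){reference-type="eqref" reference="eq:separation_value"} and Proposition 1 can be checked directly on a toy static game with finitely many policies. The sketch below uses entirely made-up costs; it verifies the potential property under all unilateral deviations and that a minimiser of $\Phi$ is a Nash equilibrium.

```python
import itertools

# Toy static 2-player game with finite policy sets: V_i = Phi + U_i(phi_{-i}).
policies = [0, 1, 2]  # each agent picks a policy from this set

def Phi(p1, p2):          # common potential term (illustrative)
    return (p1 - 1) ** 2 + (p2 - 2) ** 2 + p1 * p2

def V1(p1, p2):           # agent 1's cost: potential plus a term in p2 only
    return Phi(p1, p2) + 3 * p2

def V2(p1, p2):           # agent 2's cost: potential plus a term in p1 only
    return Phi(p1, p2) - p1

# Potential property: unilateral deviations change V_i and Phi by the same amount.
for p1, p1n, p2 in itertools.product(policies, repeat=3):
    assert V1(p1n, p2) - V1(p1, p2) == Phi(p1n, p2) - Phi(p1, p2)
for p1, p2, p2n in itertools.product(policies, repeat=3):
    assert V2(p1, p2n) - V2(p1, p2) == Phi(p1, p2n) - Phi(p1, p2)

# Proposition 1: a global minimiser of Phi is a Nash equilibrium.
e1, e2 = min(itertools.product(policies, repeat=2), key=lambda p: Phi(*p))
assert all(V1(e1, e2) <= V1(q, e2) for q in policies)
assert all(V2(e1, e2) <= V2(e1, q) for q in policies)
print("equilibrium:", (e1, e2))
```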
There are few known examples of MPGs beyond the team Markov game, partly due to the difficulty in deriving and decomposing the value functions in the form of Theorem [Theorem 2](#thm:separation_value){reference-type="ref" reference="thm:separation_value"}. In the next section, we will identify an important new class of potential games called the distributed game. This analysis is based on a simple yet crucial observation on the cost structure of the game, as stated in Theorem [Theorem 3](#thm:distributed_game){reference-type="ref" reference="thm:distributed_game"}. # Distributed Game as Markov Potential Game {#sec:distributed_game} In a distributed game, agent $i$'s control depends only on the state of agent $i$, not on the states of the other agents. However, each agent's value function may depend on the joint state and action profiles of all agents. Distributed policies of this nature are pivotal in addressing the complexities of large-scale multi-agent systems. Such systems appear in multi-vehicle coordination [@aghajani2015formation], production management [@paccagnan2016aggregative], and interacting particle models in biology [@nourian2011mean] and physics [@carmona2022synchronization]. **Definition 3** (Distributed game $\mathcal{G}_{\rm dist}$). Let $T>0$, $N\in {\mathbb{N}}$, $(n_i)_{i=1}^N,(k_i)_{i=1}^N\subset {\mathbb{N}}$. Consider the dynamic game $\mathcal{G}_{\rm dist}=(I_N, \mathcal{S}, (A_i)_{i\in I_N}, {\pi}^{(N)}_{\rm dist}, (V_i)_{i\in I_N})$ with $I_N=\{1,\ldots, N\}$ and $\mathcal{S}=[0,T]\times\prod_{i\in I_N} {\mathbb R}^{n_i}$, where $A_i$ is a Borel set in ${\mathbb R}^{k_i}$. 
Here the set ${\pi}^{(N)}_{\rm dist}=\prod_{i\in I_N}{\pi}^{\rm dist}_i$ contains distributed policy profiles, where ${\pi}^{\rm dist}_i$ contains measurable functions $\phi_i:[0,T]\times {\mathbb R}^{n_i}\rightarrow A_i$ such that $\sup_{(t,x_i)\in [0,T]\times {\mathbb R}^{n_i}}\frac{|\phi_i(t,x_i)|}{ 1+|x_i|}<\infty$, and for all $(t, x_i )\in [0,T]\times {\mathbb R}^{n_i}$, agent $i$'s state dynamics $$\label{eq:state_policy_distributed} {\mathrm{d}}X^i_s = b_i(s,X^i_s,\phi_i(s,X^i_s)) {\mathrm{d}}s + \sigma_i(s,X^i_s,\phi_i(s,X^i_s)) {\mathrm{d}}W_s, \quad s\in [t,T]; \quad X^i_t=x_i,$$ admits a unique square-integrable weak solution $X^{t,x_i,\phi_i}$ on a probability space $(\Omega,\mathcal{F},\mathbb{P})$. Here $(b_i,\sigma_i):[0,T]\times {\mathbb R}^{n_i}\times {\mathbb R}^{k_i}\rightarrow{\mathbb R}^{n_i}\times {\mathbb R}^{n_i\times n_w}$ are given measurable functions, and $W$ is an $n_w$-dimensional Brownian motion on $(\Omega,\mathcal{F},\mathbb{P})$. Let $n_x=\sum_{i=1}^N n_i$ and $n_a=\sum_{i=1}^N k_i$, and for each $t\in [0,T]$, $x=(x_i)_{i\in I_N}\in \prod_{i\in I_N} {\mathbb R}^{n_i}$ and $\phi = (\phi_i)_{i\in I_N}\in {\pi}^{(N)}_{\rm dist}$, let $X^{t,x,\phi}= (X^{t,x_i,\phi_i})_{i\in I_N}$ be the joint state process, and let $\alpha^{t,x,\phi}= (\phi_i(\cdot, X^{t,x_i,\phi_i}))_{i\in I_N}$ be the joint control process. Then the agent $i$'s value function $V_i: [0,T]\times {\mathbb R}^{n_x}\times {\pi}^{(N)}_{\rm dist}\rightarrow{\mathbb R}$ is given by $$\label{eq:cost_dist_i} V^{t,x}_i(\phi) = {\mathbb{E}}^{\mathbb{P}} \left[\int_t^T f_i(s,X^{t,x,\phi}_s,\alpha^{t,x,\phi}_s){\mathrm{d}}s + g_i(X^{t,x,\phi}_T)\right],$$ where $f_i : [0,T]\times {\mathbb R}^{n_x}\times {\mathbb R}^{n_a}\rightarrow{\mathbb R}$ and $g_i : {\mathbb R}^{n_x} \rightarrow{\mathbb R}$ are given measurable functions such that $\sup_{(t,x,a)\in [0,T]\times {\mathbb R}^{n_x}\times {\mathbb R}^{n_a}}\frac{|f_i(t,x,a)|+|g_i(x)|}{1+|x|^2+|a|^2}<\infty$. 
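The value [\[eq:cost_dist_i\]](#eq:cost_dist_i){reference-type="eqref" reference="eq:cost_dist_i"} can be approximated by Euler--Maruyama simulation of the decoupled state equations. The minimal Monte Carlo sketch below uses an illustrative scalar example with two agents; the coefficients $b_i(t,x_i,a_i)=a_i$, $\sigma_i\equiv 0.2$, the quadratic costs, and the linear policies are all made up for the illustration, not taken from the paper.

```python
import math
import random

random.seed(0)

# Euler--Maruyama estimate of agent 1's value V_1^{0,x}(phi) in a distributed game.
# Illustrative data: f_1(t,x,a) = x_1^2 + a_1^2, g_1(x) = (x_1 - x_2)^2.
T, n_steps, n_paths = 1.0, 100, 2000
dt = T / n_steps

def phi1(t, x1): return -x1        # distributed policies: each depends
def phi2(t, x2): return -0.5 * x2  # only on the agent's own state

def estimate_V1(x0=(1.0, -1.0)):
    total = 0.0
    for _ in range(n_paths):
        x1, x2, cost = x0[0], x0[1], 0.0
        for k in range(n_steps):
            t = k * dt
            a1, a2 = phi1(t, x1), phi2(t, x2)
            cost += (x1 ** 2 + a1 ** 2) * dt   # running cost f_1
            # decoupled Euler--Maruyama steps for the two state equations
            x1 += a1 * dt + 0.2 * math.sqrt(dt) * random.gauss(0, 1)
            x2 += a2 * dt + 0.2 * math.sqrt(dt) * random.gauss(0, 1)
        cost += (x1 - x2) ** 2                  # terminal cost g_1
        total += cost
    return total / n_paths

v1 = estimate_V1()
print("V_1 estimate:", v1)
```

Because the policies are distributed, each state equation can be simulated on its own; only the cost couples the agents, which is exactly the structure exploited in the next theorem.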
A simple yet key observation for the distributed game $\mathcal{G}_{\rm dist}$ is: if its cost functions admit a potential structure, then the game with asymmetric agents is an MPG. **Theorem 3**. *Take the game $\mathcal{G}_{\rm dist}$ in Definition [Definition 3](#example:distributed_game){reference-type="ref" reference="example:distributed_game"}. Suppose that there exist measurable functions $F:[0,T]\times{\mathbb R}^{n_x}\times {\mathbb R}^{n_a}\rightarrow{\mathbb R}$, $G: {\mathbb R}^{n_x}\rightarrow{\mathbb R}$, $U_{f_i}:[0,T]\times \prod_{j\not=i}{\mathbb R}^{n_j}\times \prod_{j\not=i}{\mathbb R}^{k_j}\rightarrow{\mathbb R}$ and $U_{g_i}: \prod_{j\not=i}{\mathbb R}^{n_j} \rightarrow{\mathbb R}$, $i\in I_N$, such that for all $i\in I_N$ and $(t,x,a)\in [0,T]\times {\mathbb R}^{n_x}\times {\mathbb R}^{n_a}$, $$\label{eq:distributed_cost_decomp} f_i(t,x,a)= F(t,x,a) +U_{f_i}(t,x_{-i},a_{-i}), \quad g_i(x)= G(x) +U_{g_i}(x_{-i})$$ with $x_{-i}=(x_j)_{j\not =i}$ and $a_{-i}=(a_j)_{j\not =i}$. Then the distributed game $\mathcal{G}_{\rm dist}$ is an MPG with a potential function $\Phi:[0,T]\times {\mathbb R}^{n_x}\times {\pi}^{(N)}_{\rm dist}\rightarrow{\mathbb R}$ defined by $$\begin{aligned} \label{eq:distributed_potential} \Phi^{t,x}(\phi) = {\mathbb{E}}^{\mathbb{P}} \left[\int_t^T F(s,X^{t,x,\phi}_s,\alpha^{t,x,\phi}_s){\mathrm{d}}s + G(X^{t,x,\phi}_T)\right],\end{aligned}$$ where $X^{t,x,\phi}$ and $\alpha^{t,x,\phi}$ are defined as in [\[eq:cost_dist_i\]](#eq:cost_dist_i){reference-type="eqref" reference="eq:cost_dist_i"}.* Indeed, observe that for all $i\in I_N$ and $(t,x)\in [0,T]\times{\mathbb R}^{n_x}$, by [\[eq:cost_dist_i\]](#eq:cost_dist_i){reference-type="eqref" reference="eq:cost_dist_i"} and [\[eq:distributed_cost_decomp\]](#eq:distributed_cost_decomp){reference-type="eqref" reference="eq:distributed_cost_decomp"}, $V^{t,x}_i(\phi) = \Phi^{t,x}(\phi)+U^{t,x}_i(\phi_{-i}),$ where $$\begin{aligned} U^{t,x}_i(\phi_{-i}) &= {\mathbb{E}}^{\mathbb{P}} \left[\int_t^T 
U_{f_i}(s,X^{t,x,\phi,-i}_s,\alpha^{t,x,\phi,-i}_s){\mathrm{d}}s + U_{g_i}(X^{t,x,\phi,-i}_T)\right], \end{aligned}$$ with $X^{t,x,\phi,-i}= (X^{t,x_j,\phi_j})_{j\not =i}$ and $\alpha^{t,x,\phi,-i}= (\phi_j(\cdot,X^{t,x_j,\phi_j}))_{j\not =i}$ being the state and control processes of other agents. By [\[eq:state_policy_distributed\]](#eq:state_policy_distributed){reference-type="eqref" reference="eq:state_policy_distributed"} and the definition of distributed policies, the states $(X^{t,x,\phi,\ell})_{\ell\in I_N}$ are decoupled and hence the term $U^{t,x}_i(\phi_{-i})$ depends only on $\phi_{-i}$ and is independent of $\phi_i$. This along with Theorem [Theorem 2](#thm:separation_value){reference-type="ref" reference="thm:separation_value"} implies that $\Phi$ is a potential function for $\mathcal{G}_{\rm dist}$. Next, we analyze in detail how to construct distributed closed-loop Nash equilibria among heterogeneous agents with varying state dynamics, policies, and objectives. Based on Proposition [Proposition 1](#prop:ne_mpg){reference-type="ref" reference="prop:ne_mpg"} and Theorem [Theorem 3](#thm:distributed_game){reference-type="ref" reference="thm:distributed_game"}, a recipe for this construction process consists of two key tasks: (i) creating potential functions $F$ and $G$ for the cost functions, and (ii) constructing an optimal distributed policy for the control problem [\[eq:distributed_potential\]](#eq:distributed_potential){reference-type="eqref" reference="eq:distributed_potential"}. Task (ii) has been covered in [@jackson2023approximately], particularly for the special case where the state dynamics [\[eq:state_policy_distributed\]](#eq:state_policy_distributed){reference-type="eqref" reference="eq:state_policy_distributed"} has coefficients $b_i(t,x_i,a_i)=a_i$ and $\sigma_i(t,x_i,a_i)=\sigma$. 
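The hypothesis [\[eq:distributed_cost_decomp\]](#eq:distributed_cost_decomp){reference-type="eqref" reference="eq:distributed_cost_decomp"} of Theorem 3 is easy to verify numerically for concrete costs: $f_i - F$ is independent of $(x_i,a_i)$ exactly when the partial derivatives of $f_i$ and $F$ in those coordinates coincide. A finite-difference sketch for two agents with scalar states and actions; all cost functions below are made-up illustrations.

```python
# Finite-difference check that f_i = F + U_{f_i} with U_{f_i} free of (x_i, a_i):
# equivalently, d f_i / d(x_i, a_i) must equal d F / d(x_i, a_i) everywhere.

def F(x1, x2, a1, a2):       # candidate potential part of the running cost
    return x1 ** 2 + x2 ** 2 + a1 ** 2 + a2 ** 2 + x1 * x2 + a1 * a2

def f1(x1, x2, a1, a2):      # agent 1's cost: F plus a term in (x2, a2) only
    return F(x1, x2, a1, a2) + 5 * x2 - 2 * a2

def f2(x1, x2, a1, a2):      # agent 2's cost: F plus a term in (x1, a1) only
    return F(x1, x2, a1, a2) - 3 * x1 + a1

def d(fun, z, i, h=1e-6):    # central finite difference in coordinate i
    zp, zm = list(z), list(z)
    zp[i] += h
    zm[i] -= h
    return (fun(*zp) - fun(*zm)) / (2 * h)

pts = [(0.3, -1.2, 0.7, 2.0), (1.0, 0.0, -0.5, 0.25)]
for z in pts:
    for i in (0, 2):   # agent 1 owns coordinates (x1, a1) = indices 0 and 2
        assert abs(d(f1, z, i) - d(F, z, i)) < 1e-5
    for i in (1, 3):   # agent 2 owns coordinates (x2, a2) = indices 1 and 3
        assert abs(d(f2, z, i) - d(F, z, i)) < 1e-5
print("decomposition verified at sample points")
```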
The following theorem explicitly constructs the functions $F$ and $G$ in [\[eq:distributed_cost_decomp\]](#eq:distributed_cost_decomp){reference-type="eqref" reference="eq:distributed_cost_decomp"}, and hence the potential function $\Phi$ in [\[eq:distributed_potential\]](#eq:distributed_potential){reference-type="eqref" reference="eq:distributed_potential"}, based on cost functions $(f_i, g_i)_{i\in I_N}$ that are twice differentiable. **Theorem 4**. *Take the game $\mathcal{G}_{\rm dist}$ in Definition [Definition 3](#example:distributed_game){reference-type="ref" reference="example:distributed_game"}. Suppose that for all $i\in I_N$ and $t\in [0,T]$, ${\mathbb R}^{n_x} \times {\mathbb R}^{n_a}\ni (x,a)\mapsto ( f_i(t,x,a),g_i(x))\in {\mathbb R}^2$ is twice continuously differentiable, for all $i\in I_N$, $\sup_{(t,x,a)\in [0,T]\times {\mathbb R}^{n_x}\times {\mathbb R}^{n_a}} \frac{|(\partial_{(x_i,a_i)}f_i)(t,x,a)|+|(\partial_{x_i}g_i)(x)|}{1+|x|+|a|}<\infty$ and for all $i,j\in I_N$, $$\label{eq:symmetric_hessian_distributed} \begin{pmatrix} \partial_{x_ix_j}f_i & \partial_{a_ix_j}f_i \\ \partial_{x_ia_j}f_i & \partial_{a_ia_j}f_i \end{pmatrix} = \begin{pmatrix} \partial_{x_ix_j}f_j & \partial_{a_ix_j }f_j \\ \partial_{x_ia_j} f_j & \partial_{a_ia_j}f_j \end{pmatrix}, \quad \partial_{x_ix_j}g_i =\partial_{x_ix_j}g_j.$$ Then for any $(\hat{x},\hat{a}) \in {\mathbb R}^{n_x}\times {\mathbb R}^{n_a}$, the functions $F$ and $G$ in [\[eq:distributed_cost_decomp\]](#eq:distributed_cost_decomp){reference-type="eqref" reference="eq:distributed_cost_decomp"} can be chosen as $$\begin{aligned} \label{eq:F_G_distributed} \begin{split} F(t,x,a)& = \sum_{i=1}^N \int_0^1 \left( \begin{pmatrix} \partial_{x_i} f_i \\ \partial_{a_i} f_i \end{pmatrix} \big(t, \hat{x}+r (x-\hat{x}),\hat{a} +r ( a- \hat{a}) \big) \right)^\top \begin{pmatrix} x_i-\hat{x}_{i} \\ a_i-\hat{a}_i \end{pmatrix} {\mathrm{d}}r, \\ G(x) & = \sum_{i=1}^N \int_0^1 (\partial_{x_i} g_i) ( \hat{x} +r (x-\hat{x}) 
)^\top (x_i -\hat{x}_i) {\mathrm{d}}r. \end{split}\end{aligned}$$* Indeed, by [@facchinei2003finite Theorem 1.3.1], under the symmetry condition [\[eq:symmetric_hessian_distributed\]](#eq:symmetric_hessian_distributed){reference-type="eqref" reference="eq:symmetric_hessian_distributed"}, for all $i\in I_N$, $\partial_{(x_i,a_i)}F = \partial_{(x_i,a_i)}f_i$ and $\partial_{x_i}G = \partial_{x_i}g_i$, and hence $U_{f_i}\coloneqq f_i- F$ and $U_{g_i}\coloneqq g_i- G$ are independent of $(x_i,a_i)$. Together with the growth condition on $(f_i,g_i)_{i\in I_N}$, one can see that $F$ and $G$ satisfy the requirements in Theorem [Theorem 3](#thm:distributed_game){reference-type="ref" reference="thm:distributed_game"}. It is worth noting that Theorem [Theorem 4](#thm:distributed_game_differentiable){reference-type="ref" reference="thm:distributed_game_differentiable"} allows agents to have distinct policy classes and state coefficients, and only requires that the Hessian of the cost functions be symmetric *between any two agents*. In particular, it allows $\begin{psmallmatrix} \partial_{x_ix_j}f_i & \partial_{a_ix_j}f_i \\ \partial_{x_ia_j}f_i & \partial_{a_ia_j}f_i \end{psmallmatrix} \not= \begin{psmallmatrix} \partial_{x_ix_j}f_k & \partial_{a_ix_j }f_k \\ \partial_{x_ia_j} f_k & \partial_{a_ia_j}f_k \end{psmallmatrix}$ or $\partial_{x_ix_j}g_i \not =\partial_{x_ix_j}g_k$ for different $i,j,k\in I_N$. Moreover, according to Theorem [Theorem 4](#thm:distributed_game_differentiable){reference-type="ref" reference="thm:distributed_game_differentiable"}, if the costs of a distributed game are quadratic and depend on other agents only through their average behavior, then a straightforward computation shows that the game is an MPG with a quadratic potential function. **Proposition 5**. *Take the game $\mathcal{G}_{\rm dist}$ in Definition [Definition 3](#example:distributed_game){reference-type="ref" reference="example:distributed_game"}.
Suppose that $n_i=n$ and $k_i=k$ for all $i\in I_N$, and there exists $\bar{Q}, Q_i: [0,T]\rightarrow{\mathbb{S}}^{n}$, $\bar{R}, R_i: [0,T]\rightarrow{\mathbb{S}}^{k}$, $\bar{G}, G_i \in {\mathbb{S}}^{n}$, $i\in I_N$, $\gamma, \kappa: [0,T]\rightarrow{\mathbb R}$ and $\eta \in {\mathbb R}$ such that for all $i\in I_N$ and $(t,x,a)\in [0,T]\times {\mathbb R}^{Nn}\times {\mathbb R}^{Nk}$, $$\begin{aligned} \label{eq:distributed_quadratic_MF} \begin{split} f_i(t,x,a) &= x_i^\top Q_i(t) x_i + \left(x_i-\gamma(t) \overline{x}^{(N-1)}_{ -i} \right)^\top \bar{Q}(t) \left( x_i-\gamma(t) \overline{x}^{(N-1)}_{ -i} \right) \\ &\quad + a_i^\top R_i(t) a_i + \left(a_i-\kappa (t)\overline{a}^{(N-1)}_{ -i} \right)^\top \bar{R}(t) \left(a_i-\kappa (t) \overline{a}^{(N-1)}_{ -i} \right), \\ g_i(x) &= x_i^\top G_i x_i + \left(x_i - \eta \overline{x}^{(N-1)}_{ -i} \right)^\top \bar{G} \left( x_i- \eta \overline{x}^{(N-1)}_{ -i} \right), \end{split}\end{aligned}$$ where $x_i\in {\mathbb R}^n$ and $a_i\in {\mathbb R}^k$ are agent $i$'s state and action, respectively, and $\overline{x}^{(N-1)}_{ -i}\in {\mathbb R}^n$ and $\overline{a}^{(N-1)}_{ -i}\in {\mathbb R}^k$ are the average state and action of other agents defined by: $$\overline{x}^{(N-1)}_{ -i} = \frac{1}{N-1} \sum_{j=1,j\not=i}^N x_j, \quad \overline{a}^{(N-1)}_{ -i}= \frac{1}{N-1} \sum_{j=1,j\not=i}^N a_j.$$ Then $\mathcal{G}_{\rm dist}$ is an MPG and a potential function $\Phi$ is defined by [\[eq:distributed_potential\]](#eq:distributed_potential){reference-type="eqref" reference="eq:distributed_potential"} with $$\begin{aligned} \label{eq:F_G_distributed_MF} F(t,x,a)= x^\top \tilde{Q}(t) x+a^\top \tilde{R}(t) a, \quad G(x)= x^\top \tilde{G} x, \end{aligned}$$ where $$\begin{aligned} \tilde{Q}&=\begin{pmatrix} Q_{1}+\bar{Q} & -\frac{\gamma\bar{Q}}{N-1} & \dots & -\frac{\gamma\bar{Q}}{N-1} \\ -\frac{\gamma\bar{Q}}{N-1} & Q_{2}+\bar{Q} &\ddots & \vdots \\ \vdots &\ddots &\ddots & -\frac{\gamma\bar{Q}}{N-1} \\ 
-\frac{\gamma\bar{Q}}{N-1} & \dots & -\frac{\gamma\bar{Q}}{N-1} & Q_{N}+\bar{Q} \end{pmatrix}, \quad \tilde{R}=\begin{pmatrix} R_{1}+\bar{R} & -\frac{\kappa\bar{R}}{N-1} & \dots & -\frac{\kappa\bar{R}}{N-1} \\ -\frac{\kappa\bar{R}}{N-1} & R_{2}+\bar{R} &\ddots & \vdots \\ \vdots &\ddots &\ddots & -\frac{\kappa\bar{R}}{N-1} \\ -\frac{\kappa\bar{R}}{N-1} & \dots & -\frac{\kappa\bar{R}}{N-1} & R_{N}+\bar{R} \end{pmatrix}, \\ \tilde{G}&=\begin{pmatrix} G_{1}+\bar{G} & -\frac{\eta \bar{ G}}{N-1} & \dots & -\frac{\eta \bar{ G}}{N-1} \\ -\frac{\eta \bar{ G}}{N-1} & G_{2}+\bar{G} &\ddots & \vdots \\ \vdots &\ddots &\ddots & -\frac{\eta \bar{ G}}{N-1} \\ - \frac{\eta \bar{ G} }{N-1} & \dots & -\frac{\eta \bar{ G}}{N-1} & G_{N}+\bar{G} \end{pmatrix}.\end{aligned}$$* We reiterate that Proposition [Proposition 5](#prop:distribted_qudratic_MF){reference-type="ref" reference="prop:distribted_qudratic_MF"} allows for heterogeneity among agents. In particular, agents can have different action sets, different state dynamics (i.e., different $b_i$ and $\sigma_i$), and different dependence on their own behavior (i.e., different $Q_i, R_i$ and $G_i$). This is in contrast with classical $N$-agent mean field games (see e.g., [@bensoussan2016linear; @carmona2018probabilistic]), which require all agents to have homogeneous state dynamics and cost functions. # Characterisation of Differentiable Potential Game {#sec:characterisation_differentiable_MPG} For a general game $\mathcal{G}$, due to the interconnection among all agents, it is not easy to choose $(U_i)_{i\in I_N}$ or a potential function $\Phi$ based on Theorem [Theorem 2](#thm:separation_value){reference-type="ref" reference="thm:separation_value"}. We propose in this section to instead construct potential functions based on derivatives of associated value functions, assuming suitable regularity of these functions with respect to policies. We shall introduce a notion of directional derivative with respect to policies.
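As an aside, the block structure of $\tilde{Q}$ in Proposition [Proposition 5](#prop:distribted_qudratic_MF){reference-type="ref" reference="prop:distribted_qudratic_MF"} admits a quick numerical sanity check. The sketch below (an illustration only, with scalar states $n=1$ and arbitrary test values that are not from the paper) verifies the defining property of the decomposition, namely that $\partial_{x_i}\big(x^\top \tilde{Q}x\big)=\partial_{x_i}f_i$ for every agent:

```python
import numpy as np

# Sanity check of Proposition 5 for scalar states (n = 1); the parameter
# values below are arbitrary test values, not taken from the paper.
rng = np.random.default_rng(0)
N, gamma, Qbar = 4, 0.7, 1.3
Q = rng.standard_normal(N)            # Q_i, one scalar per agent
x = rng.standard_normal(N)

# \tilde{Q}: Q_i + Qbar on the diagonal, -gamma*Qbar/(N-1) off the diagonal
Qt = np.full((N, N), -gamma * Qbar / (N - 1))
np.fill_diagonal(Qt, Q + Qbar)
grad_F = 2 * Qt @ x                   # gradient of x^T Qt x (Qt is symmetric)

for i in range(N):
    xbar = (x.sum() - x[i]) / (N - 1)                 # \bar{x}^{(N-1)}_{-i}
    grad_fi = 2 * Q[i] * x[i] + 2 * Qbar * (x[i] - gamma * xbar)
    assert np.isclose(grad_F[i], grad_fi)
```

The analogous identities for $\tilde{R}$ and $\tilde{G}$ follow from the same computation with $(\kappa,\bar{R})$ and $(\eta,\bar{G})$ in place of $(\gamma,\bar{Q})$.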
Compared with the Fréchet derivative used in earlier works such as [@monderer1996potential; @leonardos2021global; @hosseinirad2023general] for MPGs with specific finite-dimensional parameterised policies, the directional derivative is easier to compute and is well defined under weaker conditions. It also eliminates the need to select a norm over the possibly infinite-dimensional policy class $\pi^{(N)}$, providing more flexibility in the analysis. This is particularly important for dynamic games with stochastic policies, whose admissible policies take values in the space of probability measures and hence do not form a vector space. Let us start with a precise definition of the differentiability of a (scalar-valued) function with respect to unilateral deviations of policies. Recall that for each $i\in I_N$, $\pi_i$ is a set of measurable functions from $\mathcal{S}$ to $A_i$. We denote by $\mathop{\mathrm{span}}{(\pi_i)}$ the vector space of all linear combinations of policies in $\pi_i$: $$\mathop{\mathrm{span}}{(\pi_i)}=\left\{ \sum_{\ell=1}^m \alpha_\ell {\phi}^{(\ell)} \bigg\vert m\in {\mathbb{N}}, (\alpha_\ell)_{\ell=1}^m \subset {\mathbb R}, (\phi^{(\ell)})_{\ell=1}^m\subset \pi_i \right\}.$$ **Definition 4**. Let $\pi^{(N)}=\prod_{i\in I_N}\pi_i$ be a convex set and $f: \pi^{(N)}\rightarrow{\mathbb R}$.
For each $i\in I_N$, we say $f$ has a linear derivative with respect to $\pi_i$, if there exists $\frac{\delta f}{\delta \phi_i}:\pi^{(N)} \times \mathop{\mathrm{span}}(\pi_i) \rightarrow{\mathbb R}$, such that for all $\phi=(\phi_i,\phi_{-i})\in \pi^{(N)}$, $\frac{\delta f}{\delta\phi_i} (\phi;\cdot)$ is linear and $$\label{eq:first_der_def} \lim_{\varepsilon\downarrow 0 }\frac{ f\big((\phi_i+\varepsilon(\phi'_i-\phi_i),\phi_{-i})\big) -f (\phi) }{ \varepsilon} = \frac{\delta f}{\delta \phi_i} (\phi ; \phi'_i-\phi_i), \quad \forall\phi'_i\in \pi_i.$$ For each $i,j\in I_N$, we say $f$ has second-order linear derivatives with respect to $\pi_i\times \pi_j$, if (i) for all $k\in \{i,j\}$, $f$ has a linear derivative $\frac{\delta f}{\delta \phi_k}$ with respect to $\pi_k$, and (ii) for all $(k,\ell)\in \{(i,j),(j,i) \}$, there exists $\frac{\delta^2 f}{\delta \phi_k\delta\phi_\ell} :\pi^{(N)} \times \mathop{\mathrm{span}}(\pi_k)\times \mathop{\mathrm{span}}(\pi_\ell) \rightarrow{\mathbb R}$ such that for all $\phi \in \pi^{(N)}$, $\frac{\delta^2 f}{\delta \phi_k\delta\phi_\ell}(\phi,\cdot,\cdot)$ is bilinear and for all $\phi'_k\in \mathop{\mathrm{span}}(\pi_k)$, $\frac{\delta^2 f}{\delta \phi_k\delta\phi_\ell}(\cdot; \phi'_k,\cdot)$ is a linear derivative of $\frac{\delta f}{\delta \phi_k}(\cdot; \phi'_k)$ with respect to $\pi_\ell$. We refer to $\frac{\delta^2 f}{\delta \phi_i\delta\phi_j}$ and $\frac{\delta^2 f}{\delta \phi_j\delta\phi_i}$ as second-order linear derivatives of $f$ with respect to $\pi_i\times \pi_j$. *Remark 1*. Definition [Definition 4](#def:linear_deri){reference-type="ref" reference="def:linear_deri"} allows $(\pi_i)_{i\in I_N}$ to be generic convex sets of measurable functions, which may not be vector spaces. This is important for games with stochastic/mixed policies, whose policy class $\pi_i$ consists of functions mapping the system state to a probability measure over the action set. 
In general, $f:\pi^{(N)}\rightarrow{\mathbb R}$ may have multiple linear derivatives. This non-uniqueness of linear derivatives will not affect our subsequent analysis, as our results depend only on the properties of the linear derivative outlined in Definition [Definition 4](#def:linear_deri){reference-type="ref" reference="def:linear_deri"}, rather than a specific choice of the derivative. The same remark also applies to the second-order derivatives of $f$. We then present two lemmas regarding the linear derivative of $f:\pi^{(N)}\rightarrow{\mathbb R}$, which are crucial for proving Theorem [Theorem 8](#thm:symmetry_value_sufficient){reference-type="ref" reference="thm:symmetry_value_sufficient"}. Lemma [Lemma 6](#lemma:derivative_line){reference-type="ref" reference="lemma:derivative_line"} shows that if $f$ has a linear derivative in $\pi_i$, then $f$ is differentiable along any line segment within $\pi_i$. Additionally, Lemma [Lemma 7](#lemma:multi-dimension_derivative){reference-type="ref" reference="lemma:multi-dimension_derivative"} shows that if $f$ is differentiable with respect to all unilateral deviations, then $f$ is differentiable with respect to simultaneous perturbations of all agents' policies, and the derivative can be computed via a chain rule. **Lemma 6**. *Suppose $\pi^{(N)}$ is convex, $i\in I_N$, and $f: \pi^{(N)}\rightarrow{\mathbb R}$ has a linear derivative $\frac{\delta f}{\delta \phi_i}$ with respect to $\pi_i$. Let $\phi\in \pi^{(N)}$, $\phi'_i\in \pi_i$, and for each $\varepsilon\in [0,1]$, let $\phi^\varepsilon=( \phi_i+\varepsilon(\phi'_i-\phi_i),\phi_{-i})$. Then the map $[0,1]\ni \varepsilon\mapsto f(\phi^\varepsilon)\in {\mathbb R}$ is differentiable and $\frac{{\mathrm{d}}}{{\mathrm{d}}\varepsilon}f(\phi^\varepsilon)= \frac{\delta f}{\delta \phi_i} (\phi^\varepsilon; \phi'_i-\phi_i)$ for all $\varepsilon\in [0,1]$.* **Lemma 7**.
*Suppose $\pi^{(N)}$ is convex and for all $i\in I_N$, $f: \pi^{(N)}\rightarrow{\mathbb R}$ has a linear derivative $\frac{\delta f}{\delta \phi_i}$ with respect to $\pi_i$ such that for all $z,\phi\in \pi^{(N)}$ and $\phi'_i\in \pi_i$ , $[0,1]^N\ni {\varepsilon}\mapsto \frac{\delta f}{\delta \phi_i}(z+\varepsilon\cdot (\phi-z) ;\phi'_i)$ is continuous at $0$, where $z+\varepsilon\cdot (\phi-z)\coloneqq (z_i+\varepsilon_i({\phi}_{i}-z_i))_{i\in I_N}$. Then for all $z,\phi\in \pi^{(N)}$, the map $[0,1]\ni r\mapsto f(z+r( \phi -z))\in {\mathbb R}$ is differentiable and $\frac{{\mathrm{d}}}{{\mathrm{d}}r} f(z+r( \phi -z)) =\sum_{j=1}^N \frac{\delta f}{ \delta\phi_j} (z+r( \phi-z); \phi_j-z_j)$.* We now show that if the value functions of a game are sufficiently regular and have a symmetric Jacobian, then this game is a potential game, and its potential function can be constructed via the linear derivative of its value function. **Theorem 8**. *Let $\mathcal{G}=(I_N, \mathcal{S}, (A_i)_{i\in I_N}, \pi^{(N)}, (V_i)_{i\in I_N})$ be a game whose set of policy profiles $\pi^{(N)}$ is convex. Suppose that for some $s_0\in \mathcal{S}$ and for all $i,j\in I_N$, the value function $V^{s_0}_i$ has second-order linear derivatives with respect to $\pi_i\times \pi_j$ such that for all $z=(z_j)_{j\in I_N}, \phi=(\phi_j)_{j\in I_N}\in \pi^{(N)}$, $\phi'_i,\tilde{\phi}'_i\in \pi_i$ and ${\phi}''_j \in \pi_j$,* (1) *[\[item:integrable_p-Vi\]]{#item:integrable_p-Vi label="item:integrable_p-Vi"} (Boundedness.) $\sup_{r,\varepsilon\in [0,1]}\left| \frac{\delta^2 V_i^{s_0}}{\delta\phi_i\delta\phi_j}\big(z+r( \phi^\varepsilon-z); {\phi}'_i,{\phi}''_j\big) \right|<\infty$, where $\phi^\varepsilon\coloneqq (\phi_i+\varepsilon(\tilde{\phi}'_{i}-\phi_i),\phi_{-i})$;* (2) *[\[item:joint_continuity\]]{#item:joint_continuity label="item:joint_continuity"} (Continuity.) 
$[0,1]^N\ni {\varepsilon}\mapsto \frac{\delta^2 V_i^{s_0}}{\delta \phi_i\delta\phi_j}(z+\varepsilon\cdot (\phi-z) ;\phi'_i, \phi''_j)$ is continuous at $0$, where $z+\varepsilon\cdot (\phi-z)\coloneqq (z_i+\varepsilon_i({\phi}_{i}-z_i))_{i\in I_N}$;* (3) *[\[item:symmetry-Vi\]]{#item:symmetry-Vi label="item:symmetry-Vi"} (Symmetric Jacobian.) $\frac{\delta^2 V_i^{s_0}}{\delta\phi_i\delta\phi_j} ( \phi; \phi'_i, \phi''_j)=\frac{\delta^2 V_j^{s_0}}{\delta\phi_j\delta\phi_i}( \phi ;\phi''_j, \phi'_i).$* *Then $\mathcal{G}$ with initial state $s_0$ is a CLPG and for any $z\in \pi^{(N)}$, $\Phi^{s_0}:\pi^{(N)}\rightarrow{\mathbb R}$ defined by $$\label{eq:potential_variation} \Phi^{s_0}(\phi)=\int_0^1\sum_{j=1}^N \frac{\delta V_j^{s_0}}{\delta\phi_j} (z+r(\phi-z);\phi_j-z_j) {\mathrm{d}}r$$ is a potential function. If the above conditions hold for all $s_0\in \mathcal{S}$, then $\mathcal{G}$ is an MPG with a potential function $(s,\phi)\mapsto \Phi^{s}(\phi)$ defined as in [\[eq:potential_variation\]](#eq:potential_variation){reference-type="eqref" reference="eq:potential_variation"}.* ## Probabilistic Characterisation of Continuous-time Potential Game {#sec:sde_probabilistic} One can extend the criteria established in Theorem [Theorem 8](#thm:symmetry_value_sufficient){reference-type="ref" reference="thm:symmetry_value_sufficient"} to continuous-time stochastic games whose state dynamics is a controlled diffusion. Under different technical conditions, potential functions may be further characterised either probabilistically or analytically.
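Before specialising to controlled diffusions, the construction [\[eq:potential_variation\]](#eq:potential_variation){reference-type="eqref" reference="eq:potential_variation"} can be illustrated on a deliberately degenerate toy example (not part of the formal development) in which each "policy" is a single real number: with $V_1(a)=a_1^2+a_1a_2$ and $V_2(a)=a_2^2+a_1a_2$, the cross derivatives $\frac{\partial^2 V_1}{\partial a_1\partial a_2}=1=\frac{\partial^2 V_2}{\partial a_2\partial a_1}$ are symmetric, and discretising the line integral yields an exact potential:

```python
import numpy as np

# Toy static two-agent game (illustration only): policies are real numbers.
# The symmetric-Jacobian condition of Theorem 8 holds, so the line integral
# should produce an exact potential function.
V1 = lambda a1, a2: a1**2 + a1 * a2
V2 = lambda a1, a2: a2**2 + a1 * a2
dV1 = lambda a1, a2: 2 * a1 + a2          # dV1/da1
dV2 = lambda a1, a2: 2 * a2 + a1          # dV2/da2

def Phi(a1, a2, z=(0.0, 0.0), m=20_000):
    # midpoint-rule discretisation of the line integral from z to (a1, a2)
    r = (np.arange(m) + 0.5) / m
    p1 = z[0] + r * (a1 - z[0])
    p2 = z[1] + r * (a2 - z[1])
    return np.mean(dV1(p1, p2) * (a1 - z[0]) + dV2(p1, p2) * (a2 - z[1]))

# Unilateral deviation of agent 1: the change in V1 equals the change in Phi.
a1, a2, a1_dev = 1.0, -2.0, 0.5
lhs = V1(a1_dev, a2) - V1(a1, a2)
rhs = Phi(a1_dev, a2) - Phi(a1, a2)
assert abs(lhs - rhs) < 1e-8
```

Here the numerically computed $\Phi$ agrees with the closed form $a_1^2+a_2^2+a_1a_2$, and a unilateral deviation changes the deviating agent's value and the potential by the same amount, which is precisely the potential property.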
Consider the game $\mathcal{G}_{\rm prob}=(I_N, \mathcal{S}, (A_i)_{i\in I_N}, \pi^{(N)}, (V_i)_{i\in I_N})$, where $I_N=\{1,\ldots, N\}$, $\mathcal{S}= [0,T]\times {\mathbb R}^{n_x}$ with $T>0$ and $n_x\in {\mathbb{N}}$; $A_i\subset {\mathbb R}^{k_i}$ with $k_i\in {\mathbb{N}}$ is agent $i$'s action set; $\pi^{(N)}= \prod_{i\in I_N} \pi_i$ is the set of admissible policy profiles, where agent $i$'s policy class $\pi_i$ is defined by: $$\label{eq:policy_probabilistic} \pi_i \coloneqq \left\{ \phi: [0,T]\times {\mathbb R}^{n_x}\rightarrow A_i \,\middle\vert\, \begin{aligned} &\textnormal{$\phi$ is measurable and for all $t\in [0,T]$, $ x\mapsto \phi(t,x)$ is} \\ &\textnormal{twice continuously differentiable and } \\ & \textnormal{$\|\phi(\cdot,0)\|_0 +\| \partial_x \phi\|_0+\| \partial_{xx} \phi\|_0<\infty $} \end{aligned} \right\}.$$ Let $n_a=\sum_{i\in I_N}k_i$ and $A^{(N)}=\prod_{i\in I_N}A_i\subset {\mathbb R}^{n_a}$ be the set of action profiles of all agents. Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a probability space supporting an $n_w$-dimensional Brownian motion $W$.
For each $i\in I_N$, agent $i$'s value function $V_i : [0,T]\times {\mathbb R}^{n_x}\times \pi^{(N)}\rightarrow{\mathbb R}$ is given by: for all $(t,x)\in [0,T]\times {\mathbb R}^{n_x}$ and $\phi\in \pi^{(N)}$, $$\label{eq:cost_policy} V_i^{t,x}(\phi)\coloneqq {\mathbb{E}}\left[\int_t^T f_i(s,X^{t,x,\phi}_s, \phi(s,X^{t,x,\phi}_s)){\mathrm{d}}s + g_i(X^{t,x,\phi}_T)\right],$$ where $f_i : [0,T]\times {\mathbb R}^{n_x}\times {\mathbb R}^{n_a}\rightarrow{\mathbb R}$ and $g_i : {\mathbb R}^{n_x} \rightarrow{\mathbb R}$ are given cost functions, $X^{t,x,\phi}$ is the state process governed by the following dynamics: $$\label{eq:state_policy} {\mathrm{d}}X_s =B(s,X_s, \phi (s,X_s)) {\mathrm{d}}s +\Sigma(s,X_s, \phi (s,X_s)){\mathrm{d}}W_s, \quad s\in [t,T]; \quad X_t=x,$$ and $( B,\Sigma): [0,T]\times {\mathbb R}^{n_x}\times {\mathbb R}^{n_a}\rightarrow{\mathbb R}^{n_x}\times {\mathbb R}^{n_x\times n_w}$ are given coefficients. Note that in [\[eq:cost_policy\]](#eq:cost_policy){reference-type="eqref" reference="eq:cost_policy"}, we write $V_i^{t,x}(\phi)\coloneqq V_i(t,x, \phi)$, as we will analyse the derivatives of value functions for each fixed initial condition $(t,x)$ (see Theorem [Theorem 9](#thm:differential_MPG_proba){reference-type="ref" reference="thm:differential_MPG_proba"}). Throughout this section, the following regularity assumptions are imposed on the action sets $(A_i)_{i\in I_N}$ and coefficients $(B,\Sigma,(f_i)_{i\in I_N},(g_i)_{i\in I_N})$. **H. 1**. *$A_i\subset {\mathbb R}^{k_i}$, $i\in I_N$, is nonempty and convex.
The functions $(B, \Sigma): [0,T]\times {\mathbb R}^{n_x}\times {\mathbb R}^{n_a}\rightarrow{\mathbb R}^{n_x}\times {\mathbb R}^{n_x\times n_w}$, $f_i: [0,T]\times {\mathbb R}^{n_x}\times {\mathbb R}^{n_a}\rightarrow{\mathbb R}$ and $g_i : {\mathbb R}^{n_x} \rightarrow{\mathbb R}$, $i\in I_N$, are measurable and satisfy the following properties: for all $i\in I_N$ and $t\in [0,T]$,* (1) *$(x,a)\mapsto \big(B(t,x,a),\Sigma(t,x,a),f_i(t,x,a),g_i(x) \big)$ are twice continuously differentiable.* (2) *$(x,a)\mapsto (B(t,x,a),\Sigma(t,x,a))$ is of linear growth and their first and second derivatives are bounded (uniformly in $t$).* (3) *$(x,a)\mapsto (f_i(t, x,a),g_i(x))$ and their first and second derivatives are of polynomial growth (uniformly in $t$).* Under Condition (H.[H. 1](#assum:regularity_N){reference-type="ref" reference="assum:regularity_N"}), for all $(t,x)\in [0,T]\times {\mathbb R}^{n_x}$ and $\phi\in \mathop{\mathrm{span}}( \pi^{(N)})$, [\[eq:state_policy\]](#eq:state_policy){reference-type="eqref" reference="eq:state_policy"} admits a unique strong solution $X^{t,x,\phi}\in \mathcal{S}^\infty([t,T];{\mathbb R}^{n_x})$ and the value function $V^{t,x}_i(\phi)$ in [\[eq:cost_policy\]](#eq:cost_policy){reference-type="eqref" reference="eq:cost_policy"} is well-defined (see e.g., [@zhang2017backward Theorem 3.3.1]). We then introduce several stochastic processes, which will be used to describe the conditions under which $\mathcal{G}_{\rm prob}$ is a potential game. Fix $(t,x)\in [0,T]\times {\mathbb R}^{n_x}$ and $\phi\in \pi^{(N)}$. 
For each $i\in I_N$ and $\phi'_i\in \mathop{\mathrm{span}}(\pi_i)$, let $\frac{\delta X^{t,x}}{\delta \phi_i}(\phi;\phi'_i)\in \mathcal{S}^\infty([t,T];{\mathbb R}^{n_x})$ be the solution to the following equation: $$\begin{aligned} \label{eq:X_sensitivity_first} \begin{split} {\mathrm{d}}Y_s & = (\partial_x B^\phi) (s, X^{t,x, \phi}_s ) Y_s {\mathrm{d}}s + \big((\partial_x \Sigma^\phi) (s, X^{t,x, \phi}_s ) Y_s \big) {\mathrm{d}}W_s \\ &\quad + (\partial_{a_i} B^\phi)[\phi'_i](s, X^{t,x, \phi}_s ) {\mathrm{d}}s + (\partial_{a_i} \Sigma^\phi)[\phi'_i](s, X^{t,x, \phi}_s ) {\mathrm{d}}W_s \quad \forall s\in [t,T]; \quad Y_t =0, \end{split} \end{aligned}$$ where $B^\phi(t,x)\coloneqq B (t, x, \phi(t,x))$, $(\partial_{a_i} B^\phi)[\phi'_i](t,x)\coloneqq(\partial_{a_i}B) (t, x, \phi(t,x) ) \phi'_i(t,x)$, and $\Sigma^\phi(t,x)$ and $(\partial_{a_i} \Sigma^\phi)[\phi'_i](t,x)$ are defined similarly. Here the differentiations are taken componentwise, i.e., $((\partial_x B^\phi) (s, X^{t,x, \phi}_s ) Y_s)_{\ell} =\sum_{j=1}^{n_x}\partial_{x_j} B^\phi_\ell (s, X^{t,x, \phi}_s ) (Y_s)_{j}$ for all $\ell=1,\ldots, n_x$. Equation [\[eq:X_sensitivity_first\]](#eq:X_sensitivity_first){reference-type="eqref" reference="eq:X_sensitivity_first"} is the sensitivity equation of [\[eq:state_policy\]](#eq:state_policy){reference-type="eqref" reference="eq:state_policy"} with respect to $\phi_i$ (see Lemma [Lemma 12](#lemma:1st_derivative_state){reference-type="ref" reference="lemma:1st_derivative_state"}).
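To see the sensitivity equation in action, the following sketch treats a deliberately simple deterministic special case ($\Sigma\equiv 0$, so the ${\mathrm{d}}W$ terms vanish; scalar state, drift $B(t,x,a)=-x+a$, linear policy $\phi(x)=kx$, constant perturbation direction $\phi'_i\equiv c$; all parameter values are hypothetical test values). Note that $\partial_x B^\phi=k-1$ here, accounting for the feedback through the policy. An Euler discretisation of the sensitivity ODE then matches the finite-difference quotient of the perturbed state:

```python
# Deterministic toy check of the sensitivity equation (Sigma = 0, scalar
# state); all parameter values are arbitrary test values.
# Dynamics: dX = (-X + phi(X)) ds with phi(x) = k*x, perturbed by eps*c.
# Sensitivity equation: dY = ((k - 1) Y + c) ds with Y(0) = 0.
k, c, x0, T, m = 0.3, 1.0, 2.0, 1.0, 20_000
h = T / m

def terminal_state(eps):
    x = x0
    for _ in range(m):
        x += h * (-x + k * x + eps * c)   # Euler step, policy k*x + eps*c
    return x

y = 0.0
for _ in range(m):
    y += h * ((k - 1.0) * y + c)          # Euler step of the sensitivity ODE

eps = 1e-6
fd = (terminal_state(eps) - terminal_state(0.0)) / eps
assert abs(fd - y) < 1e-4                 # finite difference matches Y_T
```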
In addition, for each $\phi'_i\in \mathop{\mathrm{span}}(\pi_i)$ and $\phi''_j\in \mathop{\mathrm{span}}(\pi_j)$, let $\frac{\delta^2 X^{t,x}}{\delta \phi_i\delta \phi_j}(\phi;\phi'_i,\phi''_j) \in \mathcal{S}^\infty([t,T];{\mathbb R}^{n_x})$ be the solution to the following equation: $$\begin{aligned} \label{eq:X_sensitivity_second} \begin{split} {\mathrm{d}}Z_s & = (\partial_x B^\phi) (s, X^{t,x, \phi}_s ) Z_s {\mathrm{d}}s + \big((\partial_x \Sigma^\phi) (s, X^{t,x, \phi}_s ) Z_s \big) {\mathrm{d}}W_s \\ &\quad + F_B\left(s,X^{t,x, \phi}_s, \frac{\delta X^{t,x}}{\delta \phi_i}(\phi;\phi'_i), \frac{\delta X^{t,x}}{\delta \phi_j}(\phi;\phi''_j), \phi'_i, \phi''_j\right){\mathrm{d}}s \\ &\quad +F_\Sigma\left(s,X^{t,x, \phi}_s, \frac{\delta X^{t,x}}{\delta \phi_i}(\phi;\phi'_i), \frac{\delta X^{t,x}}{\delta \phi_j}(\phi;\phi''_j), \phi'_i, \phi''_j\right) {\mathrm{d}}W_s \quad\forall s\in [t,T]; \quad Z_t=0, \end{split} \end{aligned}$$ where $$\begin{aligned} \label{eq:F_B} \begin{split} & F_B(\cdot, y_1,y_2, \phi'_i, \phi''_j) \\ & =y_2^\top (\partial_{xx} B^\phi) (\cdot ) y_1 +\phi''_j(\cdot)^\top (\partial_{a_ia_j}B)(\cdot,\phi(\cdot))\phi'_i(\cdot) \\ &\quad+ \left[\phi''_j(\cdot)^\top (\partial_{xa_j}B)(\cdot,\phi(\cdot)) +\phi''_j(\cdot)^\top (\partial_{aa_j}B)(\cdot,\phi(\cdot))\partial_x \phi(\cdot) \right]y_1 +(\partial_{a_j}B)(\cdot,\phi(\cdot))\partial_x \phi''_j(\cdot)y_1 \\ & \quad+ y_2^\top \left[(\partial_{a_ix}B)(\cdot,\phi(\cdot)) \phi'_i(\cdot) +\partial_x \phi(\cdot) ^\top (\partial_{a_ia}B)(\cdot,\phi(\cdot))\phi'_i(\cdot)\right]+(\partial_{a_i}B)(\cdot,\phi(\cdot))\partial_x \phi'_i(\cdot) y_2, \end{split} \end{aligned}$$ and $F_\Sigma(\cdot, y_1,y_2, \phi'_i, \phi''_j)$ is defined similarly.
Equation [\[eq:X_sensitivity_second\]](#eq:X_sensitivity_second){reference-type="eqref" reference="eq:X_sensitivity_second"} is the sensitivity equation of [\[eq:state_policy\]](#eq:state_policy){reference-type="eqref" reference="eq:state_policy"} with respect to $\phi_i$ and $\phi_j$ (see Lemma [Lemma 13](#lemma:2nd_derivative_state){reference-type="ref" reference="lemma:2nd_derivative_state"}). Finally, let $\alpha^{t,x,\phi}_s=\phi(s,X^{t,x,\phi}_s)$ for all $s\in [t,T]$, and for each $i,j\in I_N$, define $\frac{\delta \alpha^{t,x}}{\delta \phi_i}(\phi;\cdot):\mathop{\mathrm{span}}(\pi_i)\rightarrow\mathcal{H}^\infty([t,T];{\mathbb R}^{n_a})$ by $$\begin{aligned} \label{eq:control_1st} \frac{\delta \alpha^{t,x}}{\delta \phi_i}(\phi;\phi'_i)_s & =(\partial_x \phi)(s,X^{t,x,\phi}_s) \frac{\delta X^{t,x}}{\delta \phi_i}(\phi;\phi'_i)_s+E_i\phi'_i(s, X^{t,x,\phi}_s), \end{aligned}$$ and define $\frac{\delta^2 \alpha^{t,x}}{\delta \phi_i\delta\phi_j}(\phi;\cdot,\cdot):\mathop{\mathrm{span}}(\pi_i)\times\mathop{\mathrm{span}}(\pi_j) \rightarrow\mathcal{H}^\infty([t,T];{\mathbb R}^{n_a})$ by $$\begin{aligned} \label{eq:control_2nd} \begin{split} & \frac{\delta^2 \alpha^{t,x}}{\delta \phi_i\delta\phi_j}(\phi;\phi'_i,\phi''_j)_s \\ & =\left(\frac{\delta X^{t,x}}{\delta \phi_j}(\phi;\phi''_j)_s\right)^\top (\partial_{xx} \phi)(s,X^{t,x,\phi}_s) \frac{\delta X^{t,x}}{\delta \phi_i}(\phi;\phi'_i)_s+(\partial_x\phi)(s, X^{t,x,\phi}_s) \frac{\delta^2 X^{t,x}}{\delta \phi_i\delta\phi_j}(\phi;\phi'_i,\phi''_j)_s \\ &\quad+E_j (\partial_x \phi''_j)(s, X^{t,x,\phi}_s) \frac{\delta X^{t,x}}{\delta \phi_i}(\phi;\phi'_i)_s +E_i (\partial_x \phi'_i)(s, X^{t,x,\phi}_s) \frac{\delta X^{t,x}}{\delta \phi_j}(\phi;\phi''_j)_s, \end{split}\end{aligned}$$ where $E_\ell\in {\mathbb R}^{n_a\times k_\ell}$, $\ell\in \{i,j\}$, is a block row matrix whose $\ell$-th row is a $k_\ell$-by-$k_\ell$ identity matrix (agent $\ell$'s action) and other rows are zero (other agents' actions). 
The following theorem characterises the linear derivatives of value functions in [\[eq:cost_policy\]](#eq:cost_policy){reference-type="eqref" reference="eq:cost_policy"} using the sensitivity processes of the states and controls, and further constructs potential functions for the game $\mathcal{G}_{\rm prob}$ defined by [\[eq:policy_probabilistic\]](#eq:policy_probabilistic){reference-type="eqref" reference="eq:policy_probabilistic"} and [\[eq:cost_policy\]](#eq:cost_policy){reference-type="eqref" reference="eq:cost_policy"}. **Theorem 9**. *Assume (H.[H. 1](#assum:regularity_N){reference-type="ref" reference="assum:regularity_N"}). For each $i\in I_N$, let $\pi_i$ and $V_i$ be given by [\[eq:policy_probabilistic\]](#eq:policy_probabilistic){reference-type="eqref" reference="eq:policy_probabilistic"} and [\[eq:cost_policy\]](#eq:cost_policy){reference-type="eqref" reference="eq:cost_policy"}, respectively.* (1) *[\[item:prob_first\]]{#item:prob_first label="item:prob_first"} For each $(t,x)\in [0,T]\times{\mathbb R}^{n_x}$ and $i,j\in I_N$, define $\frac{\delta V^{t,x}_i}{\delta \phi_j}:\pi^{(N)} \times \mathop{\mathrm{span}}(\pi_j) \rightarrow{\mathbb R}$ such that for all $\phi\in \pi^{(N)}$ and $\phi'_j\in \mathop{\mathrm{span}}(\pi_j)$, $$\begin{aligned} \label{eq:value_derivative_1st} \begin{split} \frac{\delta V^{t,x}_i}{\delta \phi_j} (\phi;\phi'_j) & = {\mathbb{E}}\bigg[\int_t^T \begin{pmatrix} \partial_x f_i & \partial_a f_i \end{pmatrix} (s,X^{t,x,\phi}_s, \alpha^{t,x,\phi}_s) \begin{pmatrix} \frac{\delta X^{t,x}}{\delta \phi_j} (\phi;\phi'_j)_s \\ \frac{\delta \alpha^{t,x}}{\delta \phi_j} (\phi;\phi'_j)_s \end{pmatrix} {\mathrm{d}}s \bigg] \\ &\quad+ {\mathbb{E}}\bigg[(\partial_xg_i)(X^{t,x,\phi}_T) \frac{\delta X^{t,x}}{\delta \phi_j} (\phi;\phi'_j)_T\bigg].
\end{split}\end{aligned}$$ Then $\frac{\delta V^{t,x}_i}{\delta \phi_j}$ is a linear derivative of $\phi\mapsto V^{t,x}_i(\phi)$ with respect to $\pi_j$.* (2) *[\[item:prob_second\]]{#item:prob_second label="item:prob_second"} For each $(t,x)\in [0,T]\times{\mathbb R}^{n_x}$ and $i, k,\ell\in I_N$, define $\frac{\delta^2 V^{t,x}_i}{\delta \phi_k\delta \phi_\ell}:\pi^{(N)} \times \mathop{\mathrm{span}}(\pi_k)\times \mathop{\mathrm{span}}(\pi_\ell) \rightarrow{\mathbb R}$ such that for all $\phi\in \pi^{(N)}$, $\phi'_k\in \mathop{\mathrm{span}}(\pi_k)$ and $\phi''_\ell\in \mathop{\mathrm{span}}(\pi_\ell)$, $$\begin{aligned} \label{eq:value_derivative_2nd} & \frac{\delta^2 V^{t,x}_i}{\delta \phi_k \delta \phi_\ell} (\phi;\phi'_k, \phi''_\ell) \nonumber \\ &\quad= {\mathbb{E}}\left[\int_t^T \begin{pmatrix} \frac{\delta X^{t,x}}{\delta \phi_\ell} (\phi;\phi''_\ell)_s \nonumber \\ \frac{\delta \alpha^{t,x}}{\delta \phi_\ell} (\phi;\phi''_\ell)_s \end{pmatrix}^\top \begin{pmatrix} \partial_{xx} f_i & \partial_{ax} f_i \\ \partial_{x a}f_i & \partial_{a a} f_i \end{pmatrix}(s,X^{t,x,\phi}_s, \alpha^{t,x,\phi}_s) \begin{pmatrix} \frac{\delta X^{t,x}}{\delta \phi_k} (\phi;\phi'_k)_s \\ \frac{\delta \alpha^{t,x}}{\delta \phi_k} (\phi;\phi'_k)_s \end{pmatrix} {\mathrm{d}}s\right] \nonumber\\ &\qquad+ {\mathbb{E}}\bigg[\int_t^T \begin{pmatrix} \partial_x f_i & \partial_a f_i \end{pmatrix} (s,X^{t,x,\phi}_s, \alpha^{t,x,\phi}_s) \begin{pmatrix} \frac{\delta^2 X^{t,x}}{\delta \phi_k\delta\phi_\ell} (\phi;\phi'_k,\phi''_\ell)_s \nonumber\\ \frac{\delta^2 \alpha^{t,x}}{\delta \phi_k\delta\phi_\ell} (\phi;\phi'_k,\phi''_\ell)_s \end{pmatrix} {\mathrm{d}}s \bigg] \nonumber\\ &\qquad +{\mathbb{E}}\left[\left(\frac{\delta X^{t,x}}{\delta \phi_\ell} (\phi;\phi''_\ell)_T\right)^\top (\partial_{xx}g_i)(X^{t,x,\phi}_T) \frac{\delta X^{t,x}}{\delta \phi_k} (\phi;\phi'_k)_T\right] \nonumber\\ &\qquad+ {\mathbb{E}}\left[ (\partial_xg_i)(X^{t,x,\phi}_T) \frac{\delta^2 X^{t,x}}{\delta
\phi_k\delta\phi_\ell} (\phi;\phi'_k,\phi''_\ell)_T\right]. \end{aligned}$$ Then for each $i,k,\ell\in I_N$, $\frac{\delta^2 V^{t,x}_i}{\delta \phi_k\delta \phi_\ell}$ and $\frac{\delta^2 V^{t,x}_i}{\delta \phi_\ell\delta \phi_k}$ are second-order linear derivatives of $\phi\mapsto V^{t,x}_i(\phi)$ with respect to $\pi_k\times \pi_\ell$.* (3) *[\[item:prob_potential\]]{#item:prob_potential label="item:prob_potential"} If there exists $(t,x)\in [0,T]\times{\mathbb R}^{n_x}$ such that for all $\phi\in \pi^{(N)}$, $i,j\in I_N$, $\phi'_i\in \pi_i$ and ${\phi}''_j \in \pi_j$, $$\label{eq:prob_symmetry} \frac{\delta^2 V^{t,x}_i}{\delta \phi_i \delta \phi_j} (\phi;\phi'_i, \phi''_j) =\frac{\delta^2 V^{t,x}_j}{\delta \phi_j \delta \phi_i} (\phi;\phi''_j, \phi'_i)$$ with $\frac{\delta^2 V^{t,x}_i}{\delta \phi_i \delta \phi_j}$ and $\frac{\delta^2 V^{t,x}_j}{\delta \phi_j \delta \phi_i}$ defined in Item [\[item:prob_second\]](#item:prob_second){reference-type="ref" reference="item:prob_second"}, then $\mathcal{G}_{\rm prob}$ with initial condition $(t,x)$ is a CLPG and a potential function is given by [\[eq:potential_variation\]](#eq:potential_variation){reference-type="eqref" reference="eq:potential_variation"} with $( \frac{\delta V^{t,x}_i}{\delta \phi_i})_{i\in I_N}$ defined in Item [\[item:prob_first\]](#item:prob_first){reference-type="ref" reference="item:prob_first"}.* *Moreover, if [\[eq:prob_symmetry\]](#eq:prob_symmetry){reference-type="eqref" reference="eq:prob_symmetry"} holds for all $(t,x)\in [0,T]\times{\mathbb R}^{n_x}$, then $\mathcal{G}_{\rm prob}$ is an MPG.* Theorem [Theorem 9](#thm:differential_MPG_proba){reference-type="ref" reference="thm:differential_MPG_proba"} generalises Theorem [Theorem 3](#thm:distributed_game){reference-type="ref" reference="thm:distributed_game"} by accommodating interconnected states among agents.
Indeed, for the distributed game $\mathcal{G}_{\rm dist}$ in Definition [Definition 3](#example:distributed_game){reference-type="ref" reference="example:distributed_game"}, agent $i$'s state and control processes depend solely on her own policy $\phi_i$, resulting in zero derivatives with respect to agent $j$'s policy $\phi_j$ for all $j \neq i$. In this distributed game, [\[eq:value_derivative_2nd\]](#eq:value_derivative_2nd){reference-type="eqref" reference="eq:value_derivative_2nd"} can be simplified and [\[eq:prob_symmetry\]](#eq:prob_symmetry){reference-type="eqref" reference="eq:prob_symmetry"} follows from condition [\[eq:symmetric_hessian_distributed\]](#eq:symmetric_hessian_distributed){reference-type="eqref" reference="eq:symmetric_hessian_distributed"}. ## PDE Characterisation of Continuous-time Potential Game {#sec:sde_analytic} This section studies the continuous-time games in Section [4.1](#sec:sde_probabilistic){reference-type="ref" reference="sec:sde_probabilistic"} via a PDE approach. For general games with nondegenerate diffusion coefficients, it characterises their potential functions and the linear derivatives of their value functions using the classical solution to a system of linear PDEs. Consider a game $\mathcal{G}_{\rm analyt}=(I_N, \mathcal{S}, (A_i)_{i\in I_N}, \pi^{(N)}, (V_i)_{i\in I_N})$, with $I_N=\{1,\ldots, N\}$, $\mathcal{S}= [0,T]\times {\mathbb R}^{n_x}$, $A_i\subset {\mathbb R}^{k_i}$ and $n_a=\sum_{i\in I_N}k_i$ as in Section [4.1](#sec:sde_probabilistic){reference-type="ref" reference="sec:sde_probabilistic"}. The set $\pi^{(N)}=\prod_{i\in I_N}\pi_i$ consists of all policy profiles, where agent $i$'s policy class $\pi_i$ is defined by (cf.
[\[eq:policy_probabilistic\]](#eq:policy_probabilistic){reference-type="eqref" reference="eq:policy_probabilistic"}): $$\label{eq:policy_pde} \pi_i=\{\phi:[0,T]\times{\mathbb R}^{n_x}\rightarrow A_i\mid \phi\in C^{\gamma/2,\gamma}([0,T]\times {\mathbb R}^{n_x};{\mathbb R}^{k_i})\} \quad \textnormal{for some $\gamma\in (0,1]$.}$$ The value functions $(V_i)_{i\in I_N}$ are defined in a manner analogous to [\[eq:cost_policy\]](#eq:cost_policy){reference-type="eqref" reference="eq:cost_policy"}. However, we impose a different set of regularity conditions compared to (H.[H. 1](#assum:regularity_N){reference-type="ref" reference="assum:regularity_N"}) to facilitate the subsequent PDE analysis. **H. 2**. *$A_i\subset {\mathbb R}^{k_i}$, $i\in I_N$, is nonempty and convex. There exists $\beta\in (0,1]$ and $\kappa>0$ such that* (1) *[\[item:holder\]]{#item:holder label="item:holder"} $(B, \Sigma)\in C^{ {\beta}/{2},\beta,\beta}( [0,T]\times {\mathbb R}^{n_x}\times {\mathbb R}^{n_a} ; {\mathbb R}^{n_x}\times {\mathbb R}^{n_x\times n_x})$ and for all $i\in I_N$, $f_i\in C^{ {\beta}/{2},\beta,2+\beta}( [0,T]\times {\mathbb R}^{n_x}\times {\mathbb R}^{n_a}; {\mathbb R})$ and $g_i \in C^{2+\beta}( {\mathbb R}^{n_x} ;{\mathbb R})$;* (2) *[\[item:non-degeneracy\]]{#item:non-degeneracy label="item:non-degeneracy"} For all $t\in [0,T]$ and $(x,a)\in {\mathbb R}^{n_x}\times {\mathbb R}^{n_a}$, $\Sigma(t,x,a)$ is symmetric and satisfies $\xi^\top \Sigma(t,x,a)\xi \ge \kappa |\xi|^2$ for all $\xi\in {\mathbb R}^{n_x}$.* Under (H.[H. 
2](#assum:Holder_regularity){reference-type="ref" reference="assum:Holder_regularity"}), by [@mishura2020existence Theorem 2], for each $(t,x)\in [0,T]\times {\mathbb R}^{n_x}$ and $\phi\in \pi^{(N)}$, the state dynamics [\[eq:state_policy\]](#eq:state_policy){reference-type="eqref" reference="eq:state_policy"} admits a unique weak solution $X^{t,x,\phi}$ on a probability space $(\Omega, \mathcal{F}, \bar{\mathbb{P}})$ and the value functions $$\label{eq:cost_pde} V^\phi_i(t,x) = {\mathbb{E}}^{\bar{\mathbb{P}}} \left[\int_t^T f_i(s,X^{t,x,\phi}_s, \phi(s,X^{t,x,\phi}_s)){\mathrm{d}}s + g_i(X^{t,x,\phi}_T)\right], \quad i\in I_N$$ are well-defined. In contrast to [\[eq:cost_policy\]](#eq:cost_policy){reference-type="eqref" reference="eq:cost_policy"}, in [\[eq:cost_pde\]](#eq:cost_pde){reference-type="eqref" reference="eq:cost_pde"}, we use the notation $V^\phi_i(t,x)\coloneqq V_i(t,x, \phi)$ to analyse the value functions for all initial conditions $(t, x)$ simultaneously. By standard regularity results of linear PDEs (see e.g., [@ladyzhenskaia1988linear Theorem 5.1, p. 320]) and Itô's formula, $(t,x)\mapsto V^{\phi}_i(t,x)$ is the unique classical solution to the following PDE: for all $(t,x)\in [0,T]\times {\mathbb R}^{n_x}$, $$\begin{aligned} \label{eq:pde_value} \begin{split} &\partial_t V(t,x) + \mathcal{L}^\phi V(t,x)+f_i(t,x,\phi(t,x))=0; \quad V(T,x)= g_i(x), \end{split}\end{aligned}$$ where $\mathcal{L}^\phi$ is the generator of [\[eq:state_policy\]](#eq:state_policy){reference-type="eqref" reference="eq:state_policy"} defined by $$\mathcal{L}^\phi u(t,x) = \frac{1}{2}\operatorname{tr} \big(\Sigma\Sigma^\top(t,x,\phi(t,x))(\partial_{xx} u)(t,x)\big)+ B(t,x,\phi(t,x))^\top (\partial_x u)(t,x).$$ We now introduce several equations associated with [\[eq:pde_value\]](#eq:pde_value){reference-type="eqref" reference="eq:pde_value"}, whose solutions characterise the linear derivatives of $(V_i)_{i\in I_N}$. Fix $i\in I_N$. 
For each $k\in I_N$, consider the map $\frac{\delta V_i}{\delta \phi_k}: [0,T]\times {\mathbb R}^{n_x}\times \pi^{(N)}\times \mathop{\mathrm{span}}(\pi_k) \rightarrow{\mathbb R}$: $$\label{eq:value_1st_pde} (t,x,\phi, \phi'_k)\mapsto \frac{\delta V_i}{\delta \phi_k}(t,x,\phi, \phi'_k) \coloneqq \frac{\delta V^\phi_i}{\delta \phi_k}(t,x; \phi'_k),$$ where $(t,x)\mapsto \frac{\delta V^\phi_i}{\delta \phi_k}(t,x; \phi'_k)$ solves the following equation: for all $(t,x)\in [0,T]\times {\mathbb R}^{n_x}$, $$\label{eq:pde_value_1st} \left\{ \begin{aligned} &\partial_t U (t,x) +\mathcal{L}^{\phi} U (t,x) + (\partial_{a_k} H_i)\big(t,x,(\partial_x V^\phi_i)(t,x),(\partial_{xx} V^\phi_i)(t,x),\phi(t,x)\big)^\top \phi'_k(t,x)=0, \\ & U(T,x)= 0, \end{aligned}\right.$$ with the function $H_i:[0,T]\times {\mathbb R}^{n_x}\times {\mathbb R}^{n_x}\times {\mathbb R}^{n_x\times n_x}\times {\mathbb R}^{n_a}\rightarrow{\mathbb R}$ defined as follows: $$\label{eq:Hamiltonian_H} H_i(t,x,y,z,a)= \frac{1}{2}\operatorname{tr} \big( (\Sigma\Sigma^\top ) (t,x,a ) z \big) + B(t,x, a)^\top y+ f_i(t,x, a).$$ Equation [\[eq:pde_value_1st\]](#eq:pde_value_1st){reference-type="eqref" reference="eq:pde_value_1st"} describes the sensitivity of [\[eq:pde_value\]](#eq:pde_value){reference-type="eqref" reference="eq:pde_value"} with respect to $\phi_k$. It is derived by considering the value function $V^{\phi^\varepsilon}$ with $\phi^\varepsilon=(\phi_k+\varepsilon(\phi'_k-\phi_k),\phi_{-k})$ for $\varepsilon\in [0,1)$, and then differentiating the corresponding PDE [\[eq:pde_value\]](#eq:pde_value){reference-type="eqref" reference="eq:pde_value"} with respect to $\varepsilon$. 
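For intuition, the representation behind [\[eq:pde_value_1st\]](#eq:pde_value_1st){reference-type="eqref" reference="eq:pde_value_1st"} can be checked numerically in a deliberately degenerate toy example (scalar state, no noise, so the nondegeneracy in (H.2) is dropped and the PDE reduces to a transport equation solved along characteristics): integrating the source term $(\partial_a H)\,\phi'$ along the state trajectory reproduces the $\varepsilon$-derivative of the value function. All constants and names below (`q`, `r`, `kfb`, ...) are illustrative choices, not notation from the text.

```python
import math

# Toy instance (degenerate noise, scalar): dynamics dx/ds = phi(x) with linear
# feedback phi(x) = kfb*x, running cost f(x, a) = (q*x^2 + r*a^2)/2, terminal
# cost g = 0.  Then H(t, x, y, z, a) = a*y + (q*x^2 + r*a^2)/2, so dH/da = y + r*a,
# and the value function is V(s, x) = P(s)*x^2/2 with
# P' + 2*kfb*P + q + r*kfb^2 = 0, P(T) = 0 (closed form used below).
q, r, kfb, T, x0 = 1.0, 0.5, -0.3, 1.0, 1.2

def value(k):
    # V^k(0, x0) = (q + r*k^2)/2 * int_0^T x_s^2 ds  with  x_s = x0 * exp(k*s)
    return 0.5 * (q + r * k * k) * x0 * x0 * (math.exp(2 * k * T) - 1) / (2 * k)

def P(s):
    return (q + r * kfb * kfb) / (2 * kfb) * (math.exp(2 * kfb * (T - s)) - 1)

# Direction phi'(x) = x (perturbing the feedback gain): integrate the source
# term (dH/da)(s, x_s) * phi'(x_s) = (P(s) + r*kfb) * x_s^2 along the flow.
n = 100_000
ds = T / n
U = sum((P((i + 0.5) * ds) + r * kfb) * (x0 * math.exp(kfb * (i + 0.5) * ds)) ** 2
        for i in range(n)) * ds

# Central finite difference of the gain -> value map for comparison.
eps = 1e-5
fd = (value(kfb + eps) - value(kfb - eps)) / (2 * eps)
print(U, fd)
```

The two printed numbers agree up to discretisation error, matching the interpretation of the solution of the first-order sensitivity equation as the derivative of the value function in the direction $\phi'_k$.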
In addition, for each $k, \ell\in I_N$, consider the map $\frac{\delta^2 V_i}{\delta \phi_k\delta \phi_\ell}: [0,T]\times {\mathbb R}^{n_x}\times \pi^{(N)}\times \mathop{\mathrm{span}}(\pi_k)\times \mathop{\mathrm{span}}(\pi_\ell)\rightarrow{\mathbb R}$: $$\label{eq:value_2nd_pde} (t,x,\phi, \phi'_k,\phi''_\ell)\mapsto \frac{\delta^2 V_i}{\delta \phi_k\delta \phi_\ell}(t,x, \phi, \phi'_k,\phi''_\ell) \coloneqq \frac{\delta^2 V^\phi_i}{\delta \phi_k\delta \phi_\ell}(t,x; \phi'_k,\phi''_\ell),$$ where $(t,x)\mapsto \frac{\delta^2 V^\phi_i}{\delta \phi_k\delta \phi_\ell}(t,x; \phi'_k,\phi''_\ell)$ solves the following equation: for all $(t,x)\in [0,T]\times {\mathbb R}^{n_x}$, $$\label{eq:pde_value_2nd} \left\{ \begin{aligned} &\partial_t W (t,x) +\mathcal{L}^{\phi} W (t,x) \\ & \qquad + (\partial_{a_\ell} L)\left(t,x, \left(\partial_x \frac{\delta V^\phi_i}{\delta \phi_k}\right)(t,x; \phi'_k) , \left(\partial_{xx} \frac{\delta V^\phi_i}{\delta \phi_k}\right)(t,x; \phi'_k) ,\phi(t,x)\right) ^\top \phi''_\ell(t,x) \\ & \qquad + (\partial_{a_k} L)\left(t,x, \left(\partial_x \frac{\delta V^\phi_i}{\delta \phi_\ell}\right)(t,x; \phi''_\ell) , \left(\partial_{xx} \frac{\delta V^\phi_i}{\delta \phi_\ell}\right)(t,x; \phi''_\ell), \phi(t,x)\right) ^\top \phi'_k(t,x) \\ & \qquad + \phi''_\ell(t,x)^\top (\partial_{a_ka_\ell} H_i)\big(t,x,(\partial_x V^{\phi}_i)(t,x),(\partial_{xx} V^{\phi}_i)(t,x),\phi(t,x)\big) \phi'_k(t,x) =0, \\ & W(T,x)=0, \end{aligned}\right.$$ with the function $L:[0,T]\times {\mathbb R}^{n_x}\times {\mathbb R}^{n_x}\times {\mathbb R}^{n_x\times n_x}\times {\mathbb R}^{n_a}\rightarrow{\mathbb R}$ defined by $$\label{eq:L} L(t,x,y,z,a)= \frac{1}{2}\operatorname{tr} \big( (\Sigma\Sigma^\top ) (t,x,a ) z \big) + B(t,x, a)^\top y.$$ Equation [\[eq:pde_value_2nd\]](#eq:pde_value_2nd){reference-type="eqref" reference="eq:pde_value_2nd"} describes the sensitivity of
[\[eq:pde_value\]](#eq:pde_value){reference-type="eqref" reference="eq:pde_value"} with respect to $\phi_k$ and $\phi_\ell$. Now, via Schauder's estimate [@ladyzhenskaia1988linear Theorem 5.1, p. 320] for [\[eq:pde_value\]](#eq:pde_value){reference-type="eqref" reference="eq:pde_value"}, [\[eq:pde_value_1st\]](#eq:pde_value_1st){reference-type="eqref" reference="eq:pde_value_1st"} and [\[eq:pde_value_2nd\]](#eq:pde_value_2nd){reference-type="eqref" reference="eq:pde_value_2nd"} in suitable Hölder spaces, one can show that the maps [\[eq:value_1st_pde\]](#eq:value_1st_pde){reference-type="eqref" reference="eq:value_1st_pde"} and [\[eq:value_2nd_pde\]](#eq:value_2nd_pde){reference-type="eqref" reference="eq:value_2nd_pde"} are well-defined, and are the linear derivatives of the value function $V_i$. **Theorem 10**. *Suppose (H.[H. 2](#assum:Holder_regularity){reference-type="ref" reference="assum:Holder_regularity"}) holds. For each $i\in I_N$, let $\pi_i$ and $V_i$ be defined by [\[eq:policy_pde\]](#eq:policy_pde){reference-type="eqref" reference="eq:policy_pde"} and [\[eq:cost_pde\]](#eq:cost_pde){reference-type="eqref" reference="eq:cost_pde"}, respectively.* (1) *[\[item:pde_first\]]{#item:pde_first label="item:pde_first"} For all $i,k\in I_N$, the map $\frac{\delta V_i}{\delta \phi_k}$ in [\[eq:value_1st_pde\]](#eq:value_1st_pde){reference-type="eqref" reference="eq:value_1st_pde"} is well-defined, and for all $(t,x)\in [0,T]\times {\mathbb R}^{n_x}$, $(\phi,\phi'_k)\mapsto \frac{\delta V^{\phi}_i}{\delta \phi_k}(t,x; \phi'_k)$ is a linear derivative of $\phi\mapsto V^{\phi}_i(t,x)$ with respect to $\pi_k$.* (2) *[\[item:pde_second\]]{#item:pde_second label="item:pde_second"} For all $i,k,\ell\in I_N$, the map $\frac{\delta^2 V_i}{\delta \phi_k\delta \phi_\ell}$ in [\[eq:value_2nd_pde\]](#eq:value_2nd_pde){reference-type="eqref" reference="eq:value_2nd_pde"} is well-defined, and for all $(t,x)\in [0,T]\times {\mathbb R}^{n_x}$, $(\phi,\phi'_k,\phi''_\ell)\mapsto
\frac{\delta^2 V^\phi_i}{\delta \phi_k\delta \phi_\ell}(t,x; \phi'_k, \phi''_\ell)$ and $(\phi,\phi''_\ell,\phi'_k)\mapsto \frac{\delta^2 V^\phi_i}{\delta \phi_\ell \delta \phi_k}(t,x; \phi''_\ell, \phi'_k)$ are second-order linear derivatives of $\phi\mapsto V^{\phi}_i(t,x)$ with respect to $\pi_k\times \pi_\ell$.* (3) *[\[item:pde_potential\]]{#item:pde_potential label="item:pde_potential"} If there exists $(t,x)\in [0,T]\times{\mathbb R}^{n_x}$ such that for all $\phi\in \pi^{(N)}$, $i,j\in I_N$, $\phi'_i\in \pi_i$ and ${\phi}''_j \in \pi_j$, $$\label{eq:pde_symmetry} \frac{\delta^2 V^{\phi}_i}{\delta \phi_i \delta \phi_j} (t,x;\phi'_i, \phi''_j) =\frac{\delta^2 V^{\phi}_j}{\delta \phi_j \delta \phi_i} (t,x;\phi''_j, \phi'_i),$$ with $\frac{\delta^2 V^{\phi}_i}{\delta \phi_i \delta \phi_j}$ and $\frac{\delta^2 V^{\phi}_j}{\delta \phi_j \delta \phi_i}$ defined in Item [\[item:pde_second\]](#item:pde_second){reference-type="ref" reference="item:pde_second"}, then $\mathcal{G}_{\rm analyt}$ with initial condition $(t,x)$ is a CLPG and a potential function is given by [\[eq:potential_variation\]](#eq:potential_variation){reference-type="eqref" reference="eq:potential_variation"} with $( \frac{\delta V_i}{\delta \phi_i})_{i\in I_N}$ defined in Item [\[item:pde_first\]](#item:pde_first){reference-type="ref" reference="item:pde_first"}. Moreover, if [\[eq:pde_symmetry\]](#eq:pde_symmetry){reference-type="eqref" reference="eq:pde_symmetry"} holds for all $(t,x)\in [0,T]\times{\mathbb R}^{n_x}$, then $\mathcal{G}_{\rm analyt}$ is an MPG.* ## Linear Quadratic Differentiable Potential Game Theorems [Theorem 9](#thm:differential_MPG_proba){reference-type="ref" reference="thm:differential_MPG_proba"} and [Theorem 10](#thm:differential_MPG_pde){reference-type="ref" reference="thm:differential_MPG_pde"} characterise continuous-time MPGs differently depending on the regularity of model coefficients and agents' admissible policies. 
This section shows that these two characterisations coincide for a class of linear-quadratic (LQ) games. Moreover, by leveraging the LQ structure of the problem, simpler characterisations of MPGs can be obtained through a system of ODEs. In this section, the time variable of all coefficients is dropped when there is no risk of confusion. Consider the game $\mathcal{G}_{\rm LQ}=(I_N, \mathcal{S}, (A_i)_{i\in I_N}, \pi^{(N)}, (V_i)_{i\in I_N})$, where $I_N=\{1,\ldots, N\}$, $\mathcal{S}= [0,T]\times {\mathbb R}^{n_x}$, $A_i={\mathbb R}^{k_i}$, and $\pi^{(N)}=\prod_{i\in I_N}\pi_i$ is the set of policy profiles, where agent $i$'s policy class $\pi_i$ contains linear policies defined by $$\label{eq:policy_lq} \pi_i=\{\phi:[0,T]\times{\mathbb R}^{n_x}\rightarrow{\mathbb R}^{k_i}\mid \phi(t,x)=K_i(t)x, \; K_i \in C([0,T]; {\mathbb R}^{k_i\times n_x})\}.$$ With an abuse of notation, we identify a policy $\phi_i\in \pi_i$ with the feedback map $K_i$, and identify a policy profile $\phi\in \pi^{(N)}$ with feedback maps $K=(K_i)_{i\in I_N}$. For each $i\in I_N$, define agent $i$'s value function $V_i:[0,T]\times {\mathbb R}^{n_x} \times \pi^{(N)}\rightarrow{\mathbb R}$ by $$\begin{aligned} \label{eq:value_lq} \begin{split} V^\phi_i(t,x) & = \frac{1}{2} {\mathbb{E}}\bigg[\int_t^T \left(\big(X^{t,x,\phi}_s\big)^\top Q_i(s) X^{t,x,\phi}_s + \big(K(s)X^{t,x,\phi}_s\big)^\top R_i(s) K(s) X^{t,x,\phi}_s \right) {\mathrm{d}}s \\ &\qquad + (X^{t,x,\phi}_T)^\top G_i X^{t,x,\phi}_T \bigg], \end{split}\end{aligned}$$ where $X^{t,x,\phi}$ satisfies the following dynamics: $$\label{eq:state_lq} {\mathrm{d}}X_s =\left(A(s)X_s+B(s)K(s) X_s \right){\mathrm{d}}s +\sigma {\mathrm{d}}W_s, \quad s\in [t,T]; \quad X_t=x,$$ and $W$ is a given $n_w$-dimensional standard Brownian motion.
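The LQ setup [\[eq:value_lq\]](#eq:value_lq){reference-type="eqref" reference="eq:value_lq"}--[\[eq:state_lq\]](#eq:state_lq){reference-type="eqref" reference="eq:state_lq"} can be made concrete with a minimal Euler--Maruyama sketch (scalar state and control, constant coefficients; all parameter names are illustrative assumptions, not notation from the text). With $\sigma=0$ the scheme can be checked against the closed-form cost of the deterministic flow.

```python
import math, random

# Scalar instance of the LQ dynamics and cost: constant feedback K(t) = k_c, so
#   dX = (a_c + b_c*k_c) X ds + sig dW,
#   V  = E[ (1/2) int_0^T (q X^2 + r (k_c X)^2) ds + (1/2) g_T X_T^2 ].
a_c, b_c, k_c = 0.2, 1.0, -0.8
q, r, g_T = 1.0, 0.5, 0.3
sig, T, x0 = 0.0, 1.0, 1.5          # sig = 0: deterministic sanity check
n_steps, n_paths = 4000, 1
dt = T / n_steps

def simulate_value(seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        x, cost = x0, 0.0
        for _ in range(n_steps):
            a = k_c * x                              # control from K(t) x
            cost += 0.5 * (q * x * x + r * a * a) * dt
            x += (a_c * x + b_c * a) * dt + sig * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        total += cost + 0.5 * g_T * x * x
    return total / n_paths

# For sig = 0: x_s = x0*exp(m*s) with m = a_c + b_c*k_c, giving a closed form.
m = a_c + b_c * k_c
exact = (0.5 * (q + r * k_c ** 2) * x0 ** 2 * (math.exp(2 * m * T) - 1) / (2 * m)
         + 0.5 * g_T * (x0 * math.exp(m * T)) ** 2)
approx = simulate_value()
print(approx, exact)
```

Setting `sig > 0` and `n_paths` large turns the same sketch into a Monte Carlo estimator of the stochastic value.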
We assume the following standard conditions for the coefficients of [\[eq:value_lq\]](#eq:value_lq){reference-type="eqref" reference="eq:value_lq"} and [\[eq:state_lq\]](#eq:state_lq){reference-type="eqref" reference="eq:state_lq"}: $A\in C([0,T];{\mathbb R}^{n_x\times n_x})$, $B\in C([0,T];{\mathbb R}^{n_x\times n_a})$, $\sigma\in {\mathbb R}^{n_x\times n_w}$, and for all $i\in I_N$, $Q_i\in C([0,T];{\mathbb{S}}^{n_x})$, $R_i\in C([0,T];{\mathbb{S}}^{n_a})$ and $G_i\in {\mathbb{S}}^{n_x}$. Here, for ease of exposition and simpler characterisations, we focus on LQ games with (possibly degenerate) uncontrolled additive noises. As will be clear from the analysis, similar characterisations can be established for LQ games with controlled multiplicative noises. #### Probabilistic characterisation in LQ games. To characterise the LQ games, write $B$ and $R_i$ in the following block form: $$\label{eq:BR_block} B= (B_1, \ldots, B_N), \quad R_i=((R_i)_1, \ldots, (R_i)_N) =((R_i)_{h\ell})_{h,\ell\in I_N}$$ with $B_\ell \in C([0,T]; {\mathbb R}^{n_x\times k_\ell})$, $(R_i)_{ \ell} \in C([0,T]; {\mathbb R}^{n_a\times k_\ell})$ and $(R_i)_{ h\ell} \in C([0,T]; {\mathbb R}^{k_h\times k_\ell})$ for all $h,\ell\in I_N$. Fix $i,h, \ell\in I_N$ and $(t,x)\in [0,T]\times {\mathbb R}^{n_x}$.
The probabilistic characterisation [\[eq:value_derivative_1st\]](#eq:value_derivative_1st){reference-type="eqref" reference="eq:value_derivative_1st"} of $\frac{\delta V^{t,x}_i}{\delta \phi_h} (\phi; \phi'_h)$ simplifies into $$\begin{aligned} \label{eq:value_derivative_1st_lq} \begin{split} \frac{\delta V^{t,x}_i}{\delta \phi_h} (\phi;\phi'_h) & = {\mathbb{E}}\bigg[\int_t^T \bigg( X_s^\top Q_i Y^{h}_s +(K X_s)^\top \Big(R_i K Y^{h}_s +(R_i)_h K'_h X_s \Big) \bigg) {\mathrm{d}}s + X_T^\top G_i Y^{h}_T\bigg], \end{split}\end{aligned}$$ where $(R_i)_{ h}$ is defined in [\[eq:BR_block\]](#eq:BR_block){reference-type="eqref" reference="eq:BR_block"}, $X=X^{t,x,\phi}$ is the state process satisfying [\[eq:state_lq\]](#eq:state_lq){reference-type="eqref" reference="eq:state_lq"}, $Y^{h}$ is the sensitivity process of $X$ with respect to $K'_h$ satisfying (cf. [\[eq:X_sensitivity_first\]](#eq:X_sensitivity_first){reference-type="eqref" reference="eq:X_sensitivity_first"}): $$\begin{aligned} \label{eq:X_sensitivity_first_lq} \begin{split} {\mathrm{d}}Y^{h}_s & = \big( (A +B K ) Y^{h}_s +B_h K'_h X_s \big) {\mathrm{d}}s \quad \forall s\in [t,T]; \quad Y^{h}_t=0.
\end{split} \end{aligned}$$ Moreover, the probabilistic characterisation [\[eq:value_derivative_2nd\]](#eq:value_derivative_2nd){reference-type="eqref" reference="eq:value_derivative_2nd"} of $\frac{\delta^2 V^{t,x}_i}{\delta \phi_h\delta \phi_\ell}(\phi; \phi'_h, \phi''_\ell)$ simplifies into $$\begin{aligned} \label{eq:value_derivative_2nd_lq} \begin{split} \frac{\delta^2 V^{t,x}_{i}}{\delta \phi_h \delta \phi_\ell} (\phi;\phi'_h, \phi''_\ell) & = {\mathbb{E}}\left[\int_t^T \left( (Y^\ell_s)^\top Q_i Y^h_s +\big(K Y^\ell_s +E_\ell K''_\ell X_s\big)^\top R_i \big(K Y^h_s +E_h K'_h X_s\big) \right) {\mathrm{d}}s\right] \\ &\quad+ {\mathbb{E}}\bigg[\int_t^T \left( X^\top_s Q_i Z_s +(K X_s)^\top R_i \big( K Z_s +E_\ell K''_\ell Y^h_s+E_hK'_h Y^\ell_s\big) \right) {\mathrm{d}}s \bigg] \\ &\quad +{\mathbb{E}}\left[ (Y^\ell_T )^\top G_i Y^h_T +X^\top_T G_i Z_T\right], \end{split}\end{aligned}$$ where $E_j\in {\mathbb R}^{n_a\times k_j}$, $j\in \{h,\ell\}$, is a block row matrix defined as in [\[eq:control_2nd\]](#eq:control_2nd){reference-type="eqref" reference="eq:control_2nd"}, $Z$ is the sensitivity process of $X$ with respect to $K'_h$ and $K''_\ell$ satisfying (cf. [\[eq:X_sensitivity_second\]](#eq:X_sensitivity_second){reference-type="eqref" reference="eq:X_sensitivity_second"}): $$\begin{aligned} \label{eq:X_sensitivity_second_lq} \begin{split} {\mathrm{d}}Z_s & = \left( (A+BK) Z_s + B_\ell K''_\ell Y^h_s +B_h K'_h Y^\ell_s \right){\mathrm{d}}s \quad\forall s\in [t,T]; \quad Z_t=0. \end{split} \end{aligned}$$ #### PDE characterisation in LQ games.
For the PDE characterisation, define for all $\phi\in \pi^{(N)}$, $\phi'_h=K'_h\in \pi_h$ and $\phi''_\ell=K''_\ell\in \pi_\ell$, [\[eq:quadratic_ansatz\]]{#eq:quadratic_ansatz label="eq:quadratic_ansatz"} $$\begin{aligned} {\overline{V}^\phi_i}(t,x)&\coloneqq \frac{1}{2}x^\top \Psi_i(t)x+\psi_i(t) \quad\forall(t,x)\in [0,T]\times {\mathbb R}^{n_x}, \label{eq:quadratic_value} \\ {\frac{\delta \overline{V}^\phi_i}{\delta \phi_h}}(t,x; \phi'_h)&\coloneqq \frac{1}{2}x^\top \Theta^{h}_i(t)x+\theta^{h}_i(t) \quad\forall(t,x)\in [0,T]\times {\mathbb R}^{n_x}, \label{eq:quadratic_1st} \\ {\frac{\delta^2 \overline{V}^\phi_i}{\delta \phi_h\delta \phi_\ell}}(t,x; \phi'_h, \phi''_\ell)&\coloneqq \frac{1}{2}x^\top \Lambda^{h,\ell}_i(t)x+\lambda^{h, \ell}_i(t)\quad\forall(t,x)\in [0,T]\times {\mathbb R}^{n_x}, \label{eq:quadratic_2nd}\end{aligned}$$ where $\Psi_i \in C([0,T];{\mathbb{S}}^{n_x})$ satisfies $$\label{eq:ode_psi} \dot \Psi_i+ \left(A+BK\right)^\top \Psi_i+ \Psi_i \left(A+BK\right) +Q_i+K^\top R_i K=0 \quad\forall t\in [0,T]; \quad\Psi_i(T)=G_i,$$ $\Theta^h_i \in C([0,T];{\mathbb{S}}^{n_x} )$ satisfies $$\label{eq:ode_theta} \begin{aligned} \dot \Theta^{h}_i &+ \left(A+BK \right)^\top \Theta^{h}_i+ \Theta^{h}_i \left(A+BK \right) + \Psi_i B_h K'_h+(B_h K'_h)^\top \Psi_i \\ & + K^\top (R_i)_{ h}K'_h + (K'_h)^\top \big((R_i)_{ h}\big)^\top K =0 \quad\forall t\in [0,T]; \quad\Theta^{h}_i(T)=0, \end{aligned}$$ $\Lambda^{h,\ell}_i \in C([0,T];{\mathbb{S}}^{n_x} )$ satisfies $$\label{eq:ode_lambda} \begin{aligned} \dot \Lambda^{h,\ell}_i &+ \left(A+BK \right)^\top \Lambda^{h,\ell}_i+ \Lambda^{h,\ell}_i \left(A+BK \right) \\ & + \left( \Theta^{h}_iB_\ell K''_\ell +(B_\ell K''_\ell)^\top \Theta^{h}_i \right) +\left(\Theta^{\ell}_i B_h K'_h +(B_h K'_h )^\top \Theta^{\ell}_i \right) \\ & +(K''_\ell)^\top (R_i)_{\ell h} K'_h +(K'_h)^\top \big((R_i)_{\ell h}\big)^\top K''_\ell =0 \quad\forall t\in [0,T]; \quad\Lambda^{h,\ell}_i(T)=0, \end{aligned}$$ and $\psi_i, \theta^{h}_i,
\lambda^{h,\ell}_i\in C([0,T];{\mathbb R})$ satisfy $$\label{eq:zero_order_term} \left\{ \begin{aligned} & \dot \psi_i+ \frac{1}{2}\textnormal{tr}\left(\sigma\sigma^\top \Psi_i \right)=0\quad\forall t\in [0,T]; \quad \psi_i(T)=0, \\ &\dot \theta^{h}_i + \frac{1}{2}\textnormal{tr}\left(\sigma\sigma^\top \Theta^{h}_i \right)=0\quad\forall t\in [0,T]; \quad \theta^{h}_i(T)=0, \\ & \dot \lambda^{h,\ell}_i + \frac{1}{2}\textnormal{tr}\left(\sigma\sigma^\top \Lambda^{h,\ell}_i \right)=0\quad\forall t\in [0,T]; \quad \lambda^{h,\ell}_i(T)=0. \end{aligned} \right.$$ Note that in [\[eq:ode_psi\]](#eq:ode_psi){reference-type="eqref" reference="eq:ode_psi"}, [\[eq:ode_theta\]](#eq:ode_theta){reference-type="eqref" reference="eq:ode_theta"}, [\[eq:ode_lambda\]](#eq:ode_lambda){reference-type="eqref" reference="eq:ode_lambda"} and [\[eq:zero_order_term\]](#eq:zero_order_term){reference-type="eqref" reference="eq:zero_order_term"}, we use a dot to represent the derivative with respect to time. Additionally, for the sake of notational simplicity, we omit the dependence of $\Theta^{h}_i, \theta^{h}_i$ on $K'_h$, and the dependence of $\Lambda^{h,\ell}_i, \lambda^{h,\ell}_i$ on $K'_h$ and $K''_\ell$. One can easily verify that [\[eq:quadratic_value\]](#eq:quadratic_value){reference-type="eqref" reference="eq:quadratic_value"}, [\[eq:quadratic_1st\]](#eq:quadratic_1st){reference-type="eqref" reference="eq:quadratic_1st"} and [\[eq:quadratic_2nd\]](#eq:quadratic_2nd){reference-type="eqref" reference="eq:quadratic_2nd"} are the unique solutions to the PDEs [\[eq:pde_value\]](#eq:pde_value){reference-type="eqref" reference="eq:pde_value"}, [\[eq:pde_value_1st\]](#eq:pde_value_1st){reference-type="eqref" reference="eq:pde_value_1st"} and [\[eq:pde_value_2nd\]](#eq:pde_value_2nd){reference-type="eqref" reference="eq:pde_value_2nd"}, respectively.
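As a sanity check on the quadratic ansatz, the Lyapunov-type equation [\[eq:ode_psi\]](#eq:ode_psi){reference-type="eqref" reference="eq:ode_psi"} and the zero-order equation [\[eq:zero_order_term\]](#eq:zero_order_term){reference-type="eqref" reference="eq:zero_order_term"} can be integrated backward in time by explicit Euler; in the scalar constant-coefficient case $\Psi_i$ has a closed form. The sketch below is illustrative only (all constants are assumed, not taken from the text).

```python
import math

# Scalar constant-coefficient instance of the backward ODEs: with M = A + B*K
# and c = q + r*K^2,
#   Psi' = -(2*M*Psi + c),        Psi(T) = G,
#   psi' = -(sig^2 / 2) * Psi,    psi(T) = 0,
# so that the candidate value is V(t, x) = Psi(t)*x^2/2 + psi(t).
A, B, K, q, r, G, sig, T = 0.2, 1.0, -0.8, 1.0, 0.5, 0.0, 0.4, 1.0
M, c = A + B * K, q + r * K * K
n = 100_000
dt = T / n

Psi, psi = G, 0.0
for _ in range(n):               # step backwards from T to 0
    psi += 0.5 * sig * sig * Psi * dt
    Psi += (2 * M * Psi + c) * dt

# Closed form for G = 0: Psi(0) = c/(2M) * (exp(2*M*T) - 1).
Psi_exact = c / (2 * M) * (math.exp(2 * M * T) - 1)
print(Psi, Psi_exact, psi)
```

The same backward sweep, applied to $\Theta^h_i$ and $\Lambda^{h,\ell}_i$ with their respective source terms, yields the coefficients of the first- and second-order linear derivatives.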
The following theorem proves the equivalence of Theorems [Theorem 9](#thm:differential_MPG_proba){reference-type="ref" reference="thm:differential_MPG_proba"} and [Theorem 10](#thm:differential_MPG_pde){reference-type="ref" reference="thm:differential_MPG_pde"} for LQ games. **Theorem 11**. *For all $i,h,\ell \in I_N$, $\phi\in \pi^{(N)}$, $\phi'_h\in \pi_h$, $\phi''_\ell\in \pi_\ell$ and $(t,x)\in [0,T]\times{\mathbb R}^{n_x}$, $$\label{eq:equivalence} \frac{\delta V^{t,x}_i}{\delta \phi_h} (\phi;\phi'_h) ={\frac{\delta \overline{V}^\phi_i}{\delta \phi_h}}(t,x; \phi'_h), \quad \frac{\delta^2 V^{t,x}_i}{\delta \phi_h \delta \phi_\ell} (\phi;\phi'_h, \phi''_\ell) ={\frac{\delta^2 \overline{V}^\phi_i}{\delta \phi_h\delta \phi_\ell}}(t,x; \phi'_h, \phi''_\ell).$$ That is, Condition [\[eq:prob_symmetry\]](#eq:prob_symmetry){reference-type="eqref" reference="eq:prob_symmetry"} in Theorem [Theorem 9](#thm:differential_MPG_proba){reference-type="ref" reference="thm:differential_MPG_proba"} is equivalent to Condition [\[eq:pde_symmetry\]](#eq:pde_symmetry){reference-type="eqref" reference="eq:pde_symmetry"} in Theorem [Theorem 10](#thm:differential_MPG_pde){reference-type="ref" reference="thm:differential_MPG_pde"}.* *Consequently, if there exists $(t,x)\in [0,T]\times {\mathbb R}^{n_x}$ such that for all $i,j \in I_N$ with $i\not =j$, $\phi\in \pi^{(N)}$, $\phi'_i\in \pi_i$ and $\phi''_j\in \pi_j$, $$\label{eq:lq_symmetry} \frac{1}{2}x^\top \Lambda^{i,j}_i(t)x+ \lambda^{i, j}_i(t) = \frac{1}{2}x^\top \Lambda^{j,i}_j(t)x +\lambda^{j, i}_j(t),$$ then $\mathcal{G}_{\rm LQ}$ with initial condition $(t,x)$ is a CLPG.
Moreover, if for all $i,j \in I_N$ with $i\not =j$, $\phi\in \pi^{(N)}$, $\phi'_i\in \pi_i$ and $\phi''_j\in \pi_j$, $$\Lambda^{i,j}_i(t) = \Lambda^{j,i}_j(t), \quad\forall t\in [0,T],$$ then $\mathcal{G}_{\rm LQ}$ is an MPG.* # Proof of Main Results {#sec:main_proof} ## Proof of Theorem [Theorem 2](#thm:separation_value){reference-type="ref" reference="thm:separation_value"} {#sec:proof_separation} *Proof.* We only prove the characterisation of CLPGs with a fixed initial state $s_0\in \mathcal{S}$. The characterisation of MPGs holds by repeating the argument for all initial states. It is clear that if for all $i\in I_N$, the value function $V^{s_0}_i$ admits the decomposition [\[eq:separation_value\]](#eq:separation_value){reference-type="eqref" reference="eq:separation_value"}, then $\mathcal{G}$ with initial state $s_0$ is a CLPG with potential $\Phi^{s_0}$. Hence it remains to prove that [\[eq:separation_value\]](#eq:separation_value){reference-type="eqref" reference="eq:separation_value"} is a necessary condition for $\mathcal{G}$ being a CLPG with potential $\Phi^{s_0}$. Let $i\in I_N$, $\phi_{-i}\in \pi^{(N)}_{-i}$ and $\phi_i, \phi_i',\phi_i''\in \pi_i$ be arbitrary policies.
Then by Definition [Definition 2](#def:MPG){reference-type="ref" reference="def:MPG"}, $$\begin{aligned} V^{s_0}_i((\phi_i, \phi_{-i})) &= \Phi^{s_0}((\phi_i, \phi_{-i})) -\Phi^{s_0}((\phi_i',\phi_{-i}))+V^{s_0}_i((\phi_i',\phi_{-i})), \\ V^{s_0}_i((\phi_i, \phi_{-i})) &= \Phi^{s_0}((\phi_i, \phi_{-i})) -\Phi^{s_0}((\phi_i'',\phi_{-i}))+V^{s_0}_i((\phi_i'',\phi_{-i})).\end{aligned}$$ This shows that for any pair of policies $(\phi'_i,\phi''_i)\in \pi_i\times \pi_i$, $$-\Phi^{s_0}((\phi_i',\phi_{-i}))+V^{s_0}_i((\phi_i',\phi_{-i}))= -\Phi^{s_0}((\phi_i'',\phi_{-i}))+V^{s_0}_i((\phi_i'',\phi_{-i})).$$ Consequently, $U^{s_0}_i(\phi_{-i})\coloneqq -\Phi^{s_0}((\phi_i',\phi_{-i}))+V^{s_0}_i((\phi_i',\phi_{-i}))$ is well-defined, as it only depends on $\phi_{-i}$ but is independent of $\phi'_i\in \pi_i$. ◻ ## Proofs of Lemmas [Lemma 6](#lemma:derivative_line){reference-type="ref" reference="lemma:derivative_line"} and [Lemma 7](#lemma:multi-dimension_derivative){reference-type="ref" reference="lemma:multi-dimension_derivative"} and Theorem [Theorem 8](#thm:symmetry_value_sufficient){reference-type="ref" reference="thm:symmetry_value_sufficient"} {#sec:proof_abstract_MPG} *Proof of Lemma [Lemma 6](#lemma:derivative_line){reference-type="ref" reference="lemma:derivative_line"}.* By Definition [\[eq:first_der_def\]](#eq:first_der_def){reference-type="ref" reference="eq:first_der_def"} and the linear differentiability of $f$, $[0,1]\ni \varepsilon\mapsto f(\phi^\varepsilon)\in {\mathbb R}$ admits a right-hand derivative $\frac{\delta f}{\delta \phi_i} (\phi ; \phi'_i-\phi_i)$ at $\varepsilon=0$. We now prove the differentiability of $\varepsilon\mapsto f(\phi^\varepsilon)$ at $\varepsilon_0\in (0,1)$.
For all $\varepsilon>0$ such that $\varepsilon_0+\varepsilon<1$, $$\begin{aligned} \phi^{\varepsilon_0+\varepsilon} = ( \phi^{\varepsilon_0}_i+\varepsilon(\phi'_i-\phi_i),\phi_{-i})=\left( \phi^{\varepsilon_0}_i+\frac{\varepsilon}{1-\varepsilon_0}(\phi'_i-\phi^{\varepsilon_0}_i),\phi_{-i}\right).\end{aligned}$$ Thus by the linear differentiability of $f$, $$\begin{aligned} \lim_{\varepsilon\downarrow 0 }\frac{ f(\phi^{\varepsilon_0+\varepsilon} ) -f (\phi^{\varepsilon_0} ) }{ \varepsilon} &= \frac{1}{ 1-\varepsilon_0} \lim_{\varepsilon\downarrow 0 }\frac{ f\left(\left( \phi^{\varepsilon_0}_i+\frac{\varepsilon}{1-\varepsilon_0}(\phi'_i-\phi^{\varepsilon_0}_i),\phi_{-i}\right) \right) -f (\phi^{\varepsilon_0} ) }{ \varepsilon/(1-\varepsilon_0)} \\ &= \frac{1}{ 1-\varepsilon_0} \frac{\delta f}{\delta \phi_i} (\phi^{\varepsilon_0} ; \phi'_i-\phi^{\varepsilon_0}_i)= \frac{\delta f}{\delta \phi_i} (\phi^{\varepsilon_0} ; \phi'_i-\phi_i),\end{aligned}$$ where the last identity used the linearity of $\frac{\delta f}{\delta \phi_i} (\phi^{\varepsilon_0} ; \cdot): \mathop{\mathrm{span}}(\pi_i) \rightarrow{\mathbb R}$. 
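The rescaling identity used in the last display has an elementary finite-dimensional analogue, in which the linear derivative is an ordinary partial derivative paired with the direction. The following sketch (the function $f$ and all points are illustrative choices) checks that the three expressions for the derivative of $\varepsilon\mapsto f(\phi^\varepsilon)$ at an interior $\varepsilon_0$ agree.

```python
import math

# Finite-dimensional analogue: f(u, v) smooth, only the first slot moves,
# phi^eps = (u + eps*(u2 - u), v), and the linear derivative in that slot is
# (df/du)(u, v) * direction.
def f(u, v):
    return math.sin(u * v) + u ** 3

def df_du(u, v):
    return v * math.cos(u * v) + 3 * u * u

u, u2, v, eps0 = 0.3, 1.1, 0.7, 0.4
u_eps0 = u + eps0 * (u2 - u)          # first slot of phi^{eps0}

# d/d(eps) f(phi^eps) at eps0, computed three ways:
h = 1e-6
central = (f(u + (eps0 + h) * (u2 - u), v)
           - f(u + (eps0 - h) * (u2 - u), v)) / (2 * h)
rescaled = df_du(u_eps0, v) * (u2 - u_eps0) / (1 - eps0)   # the rescaling step
direct = df_du(u_eps0, v) * (u2 - u)                        # plain chain rule
print(central, rescaled, direct)
```

Since $u' - u^{\varepsilon_0} = (1-\varepsilon_0)(u'-u)$, the rescaled expression collapses to the chain-rule value, mirroring how the proof passes from the right-hand derivative at $0$ to differentiability at interior points.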
On the other hand, for all $\varepsilon>0$ such that $\varepsilon_0-\varepsilon>0$, $$\begin{aligned} \phi^{\varepsilon_0-\varepsilon} = ( \phi^{\varepsilon_0}_i-\varepsilon(\phi'_i-\phi_i),\phi_{-i})=\left( \phi^{\varepsilon_0}_i+\frac{\varepsilon}{\varepsilon_0}(\phi_i-\phi^{\varepsilon_0}_i),\phi_{-i}\right).\end{aligned}$$ Hence by the linear differentiability of $f$, $$\begin{aligned} \lim_{\varepsilon\downarrow 0 }\frac{ f(\phi^{\varepsilon_0-\varepsilon} ) -f (\phi^{\varepsilon_0} ) }{ - \varepsilon} &= -\frac{1}{\varepsilon_0} \lim_{\varepsilon\downarrow 0 }\frac{ f\left(\left( \phi^{\varepsilon_0}_i+\frac{\varepsilon}{\varepsilon_0}(\phi_i-\phi^{\varepsilon_0}_i),\phi_{-i}\right) \right) -f (\phi^{\varepsilon_0} ) }{ \varepsilon/\varepsilon_0 } \\ &=- \frac{1}{\varepsilon_0} \frac{\delta f}{\delta \phi_i} (\phi^{\varepsilon_0} ; \phi_i-\phi^{\varepsilon_0}_i)= \frac{\delta f}{\delta \phi_i} (\phi^{\varepsilon_0} ; \phi'_i-\phi_i).\end{aligned}$$ This proves the differentiability of $\varepsilon\mapsto f(\phi^\varepsilon)$ at $\varepsilon_0\in (0,1)$. A similar argument shows that $\varepsilon\mapsto f(\phi^\varepsilon)$ admits a left-hand derivative at $1$. ◻ *Proof of Lemma [Lemma 7](#lemma:multi-dimension_derivative){reference-type="ref" reference="lemma:multi-dimension_derivative"}.* Let $z,\phi\in \pi^{(N)}$ be fixed. For simplicity, we assume $N=2$ and $I_N= \{i,j\}$, as the same argument can be easily extended to $N\ge 3$. We first prove that $r \mapsto f(z+r( \phi -z))$ admits a right-hand derivative at $0$. For each $r\in [0,1]$, let $\phi^r= z+r( \phi -z)$.
Recall that by Lemma [Lemma 6](#lemma:derivative_line){reference-type="ref" reference="lemma:derivative_line"}, for all $r\in [0,1]$, $\varepsilon\mapsto f ((\phi^{r}_{-j}, z_j+ \varepsilon(\phi_j-z_j)))$ is differentiable on $[0,1]$ and $$\frac{{\mathrm{d}}}{{\mathrm{d}}\varepsilon}f((\phi^{r}_{-j}, z_j+ \varepsilon(\phi_j-z_j))) = \frac{\delta f}{ \delta\phi_j}((\phi^{r}_{-j}, z_j+ \varepsilon(\phi_j-z_j)) ; \phi_j-z_j) \quad \forall\varepsilon\in (0,1).$$ Then for all $r \in (0,1)$, using the fact that $\phi^r_{-i}=\phi^r_j$ (note that $I_N= \{i,j\}$), $$\begin{aligned} f(\phi^{r} )-f(z ) & = f(\phi^{r} ) - f(( \phi^r_i, z_{-i})) + f(( \phi^r_i, z_{-i}) ) -f(z ) \\ & = \frac{\delta f}{ \delta\phi_j}( (z_i+r(\phi_i-z_i),z_j+r\varepsilon_r (\phi_j-z_j) ) ; \phi_j-z_j)r + f(( \phi^r_i, z_{-i}) ) -f(z ), \end{aligned}$$ for some $\varepsilon_r \in (0,1)$, where the last identity used the mean value theorem. Dividing both sides of the above identity by $r$ and letting $r\rightarrow 0$ yield $$\begin{aligned} \lim_{r\downarrow 0 }\frac{f(\phi^{r} )-f(z )}{r} & =\lim_{r\downarrow 0} \frac{\delta f}{ \delta\phi_j}( (z_i+r(\phi_i-z_i),z_j+r\varepsilon_r (\phi_j-z_j) ) ; \phi_j-z_j) + \frac{\delta f}{\delta\phi_i }(z ; \phi_i-z_i) \\ & = \frac{\delta f}{ \delta\phi_j}( z ; \phi_j-z_j) + \frac{\delta f}{\delta\phi_i }(z ; \phi_i-z_i),\end{aligned}$$ where the last identity used the continuity of ${\varepsilon}\mapsto \frac{\delta f}{\delta \phi_j}(z+\varepsilon\cdot (\phi-z) ; \phi_j-z_j)$ at $0$, which holds due to the continuity assumption of $\frac{\delta f}{\delta \phi_j}$ and the linearity of $\frac{\delta f}{\delta \phi_j}$ with respect to the last argument. This proves the desired differentiability of $r\mapsto f (\phi^{r} )$ at $0$. 
Finally, observe that by arguments similar to those for Lemma [Lemma 6](#lemma:derivative_line){reference-type="ref" reference="lemma:derivative_line"}, $[0,1]^N\ni {\varepsilon}\mapsto \frac{\delta f}{\delta \phi_i}(z+\varepsilon\cdot (\phi-z) ;\phi'_i)$ is in fact continuous on $[0,1]^N$. Hence repeating the above arguments yields the differentiability of $r\mapsto f (\phi^{r} )$ at $r\in (0,1]$. ◻ *Proof of Theorem [Theorem 8](#thm:symmetry_value_sufficient){reference-type="ref" reference="thm:symmetry_value_sufficient"}.* We only prove the characterisation of CLPGs, as the characterisation of MPGs follows by repeating the argument for all initial states. Throughout this proof, we denote by $s$ the fixed initial state $s_0$ for simplicity. For each $i\in I_N$ and $\phi'_i\in \pi_i$, using Condition [\[item:joint_continuity\]](#item:joint_continuity){reference-type="ref" reference="item:joint_continuity"} and applying Lemma [Lemma 7](#lemma:multi-dimension_derivative){reference-type="ref" reference="lemma:multi-dimension_derivative"} to $\frac{\delta V_i^{s}}{\delta\phi_i}(\cdot; {\phi}'_i)$ yield that for all $\phi ,z \in \pi^{(N)}$, $[0,1]\ni r\mapsto \frac{\delta V_i^{s}}{\delta\phi_i}(z+r( \phi -z); {\phi}'_i)\in {\mathbb R}$ is differentiable and $$\label{eq:chainrule_p-Vi} \frac{{\mathrm{d}}}{{\mathrm{d}}r} \frac{\delta V_i^{s}}{\delta\phi_i}(z+r( \phi -z); {\phi}'_i) =\sum_{j=1}^N \frac{\delta^2 V_i^{s}}{\delta\phi_i\delta\phi_j} (z+r( \phi-z); {\phi}'_i, \phi_j-z_j).$$ Moreover, as the second-order derivatives are bilinear with respect to the last two arguments, one can assume without loss of generality that Conditions [\[item:integrable_p-Vi\]](#item:integrable_p-Vi){reference-type="ref" reference="item:integrable_p-Vi"} and [\[item:symmetry-Vi\]](#item:symmetry-Vi){reference-type="ref" reference="item:symmetry-Vi"} and [\[eq:chainrule_p-Vi\]](#eq:chainrule_p-Vi){reference-type="eqref" reference="eq:chainrule_p-Vi"} hold for all ${\phi}'_i\in
\mathop{\mathrm{span}}( \pi_i)$ and ${\phi}''_j\in \mathop{\mathrm{span}}( \pi_j)$. Note that $\Phi^s: \pi^{(N)} \rightarrow{\mathbb R}$ in [\[eq:potential_variation\]](#eq:potential_variation){reference-type="eqref" reference="eq:potential_variation"} is well-defined, as $r\mapsto \frac{\delta V_j^{s}}{\delta\phi_j} (z+r( \phi-z) ; \phi_j-z_j)$ is continuous on $[0,1]$ for all $j\in I_N$. We first prove that for all $\phi=(\phi_i)_{i\in I_N}\in \pi^{(N)}$, $i\in I_N$ and $\phi'_i\in \pi_i$, $$\frac{{\mathrm{d}}}{{\mathrm{d}}\varepsilon}\Phi^{s}(\phi^\varepsilon)\bigg\vert_{\varepsilon=0} =\frac{\delta V^{s}_i}{\delta \phi_i} (\phi ; \phi'_i-\phi_i) \quad\textnormal{with $\phi^\varepsilon\coloneqq (\phi_i+\varepsilon(\phi'_i-\phi_i),\phi_{-i})$.}$$ Observe that by [\[eq:potential_variation\]](#eq:potential_variation){reference-type="eqref" reference="eq:potential_variation"}, for all $\varepsilon\in (0,1]$, $$\begin{aligned} \Phi^{s} (\phi^{\varepsilon} ) - \Phi^{s}(\phi) &= \int_0^1\sum_{j=1}^N \frac{\delta V_j^{s}}{\delta\phi_j} (z+r( \phi^{\varepsilon}-z); \phi_j+\varepsilon\delta_{ji}(\phi'_i-\phi_i)-z_j) {\mathrm{d}}r \\ &\quad - \int_0^1\sum_{j=1}^N \frac{\delta V_j^{s}}{\delta\phi_j} (z+r(\phi-z);\phi_j-z_j) {\mathrm{d}}r, \end{aligned}$$ where $\delta_{ji}=0$ if $j\not =i$ and $\delta_{ii}=1$.
Then, as $z+r( \phi^{\varepsilon}-z)\in \pi^{(N)}$, for all $\varepsilon\in (0,1]$, $$\begin{aligned} \label{eq:dphi_i} \begin{split} & \frac{ \Phi^{s} (\phi^{ \varepsilon} ) - \Phi^{s}(\phi) }{\varepsilon} \\ &\quad = \frac{1}{\varepsilon} \int_0^1\sum_{j=1}^N \left( \frac{\delta V_j^{s}}{\delta\phi_j}(z+r( \phi^{ \varepsilon}-z);\phi_j-z_j) - \frac{\delta V_j^{s}}{\delta\phi_j}(z+r( \phi-z);\phi_j-z_j) \right) {\mathrm{d}}r \\ &\qquad+ \frac{1}{\varepsilon} \int_0^1\sum_{j=1}^N \varepsilon\delta_{ji} \frac{\delta V_j^{s}}{\delta\phi_j}(z+r( \phi^{\varepsilon}-z) ; \phi'_i-\phi_i) {\mathrm{d}}r \\ &\quad= \int_0^1 \sum_{j=1}^N \frac{1}{ \varepsilon} \left( \frac{\delta V_j^{s}}{\delta\phi_j} (z+r( \phi^{ \varepsilon}-z);\phi_j-z_j) - \frac{\delta V_j^{s}}{\delta\phi_j} (z+r( \phi-z);\phi_j-z_j)\right) {\mathrm{d}}r \\ &\qquad+ \int_0^1 \frac{\delta V_i^{s}}{\delta\phi_i}(z+r( \phi^{\varepsilon}-z) ; \phi'_i-\phi_i) {\mathrm{d}}r. \end{split}\end{aligned}$$ To send $\varepsilon\rightarrow 0$ in [\[eq:dphi_i\]](#eq:dphi_i){reference-type="eqref" reference="eq:dphi_i"}, note that for all $\varepsilon\in [0,1]$, $(z+r( \phi^{\varepsilon}-z))_{-i}=z_{-i}+r(\phi_{-i}-z_{-i})$ and $$\begin{aligned} \label{eq:convex_combination_r_eps} \begin{split} (z+r( \phi^{\varepsilon}-z))_i &=z_i + r (\phi_i+\varepsilon(\phi'_i-\phi_i)-z_i) \\ &=z_i+r(\phi_i-z_i)+\varepsilon\big((z_i+r(\phi'_i-z_i))-(z_i+r(\phi_i-z_i))\big) \end{split}\end{aligned}$$ with $z_i+r(\phi_i-z_i),z_i+r(\phi'_i-z_i)\in \pi_i$.
Thus for all $j\in I_N$, as $\phi_j-z_j \in \mathop{\mathrm{span}}(\pi_j)$, the twice differentiability of $V^{s}_j$ and Lemma [Lemma 6](#lemma:derivative_line){reference-type="ref" reference="lemma:derivative_line"} imply that $\varepsilon\mapsto \frac{\delta V_j^{s}}{\delta\phi_j} (z+r( \phi^{\varepsilon}-z); \phi_j-z_j )$ is differentiable on $[0,1]$ and $$\begin{aligned} \frac{{\mathrm{d}}}{{\mathrm{d}}\varepsilon}\frac{\delta V_j^{s}}{\delta\phi_j} (z+r( \phi^{\varepsilon}-z); \phi_j-z_j ) &= \frac{\delta^2 V_j^{s}}{\delta\phi_j\delta\phi_i} (z+r( \phi^{\varepsilon}-z); \phi_j-z_j , r(\phi'_i-\phi_i)) \\ &= \frac{\delta^2 V_j^{s}}{\delta\phi_j\delta\phi_i} (z+r( \phi^{\varepsilon}-z); \phi_j-z_j , \phi'_i-\phi_i)r, \end{aligned}$$ where the last identity used the linearity of $\frac{\delta^2 V_j^{s}}{\delta\phi_j\delta\phi_i}$ in its last component. Hence, by the mean value theorem, for all $\ \varepsilon\in (0,1]$, $$\begin{aligned} \label{eq:MVT_eps} \begin{split} &\left|\frac{1}{ \varepsilon} \left( \frac{\delta V_j^{s}}{\delta\phi_j} (z+r( \phi^{\varepsilon}-z);\phi_j-z_j)- \frac{\delta V_j^{s}}{\delta\phi_j} (z+r( \phi-z);\phi_j-z_j)\right) \right| \\ &\quad\le \sup_{ r,\varepsilon\in [0,1]}\left| \frac{\delta^2 V_j^{s}}{\delta\phi_j\delta\phi_i}(z+r( \phi^{ \varepsilon}-z);\phi_j-z_j, \phi'_i-\phi_i)r \right| \\ & \quad= \sup_{ r, \varepsilon\in [0,1]}\left| \frac{\delta^2 V_i^{s}}{\delta\phi_i\delta\phi_j}(z+r( \phi^{ \varepsilon}-z);\phi'_i-\phi_i,\phi_j-z_j)r \right| <\infty, \end{split}\end{aligned}$$ where the last inequality follows from Conditions [\[item:integrable_p-Vi\]](#item:integrable_p-Vi){reference-type="ref" reference="item:integrable_p-Vi"} and [\[item:symmetry-Vi\]](#item:symmetry-Vi){reference-type="ref" reference="item:symmetry-Vi"}. 
Similarly, as $\phi'_i-\phi_i\in \mathop{\mathrm{span}}(\pi_i)$, by the twice differentiability of $V^{s}_i$ (see [\[eq:convex_combination_r\_eps\]](#eq:convex_combination_r_eps){reference-type="eqref" reference="eq:convex_combination_r_eps"}), for all $r\in (0,1)$, $$\begin{aligned} \lim_{\varepsilon\downarrow 0} \frac{\delta V_i^{s}}{\delta\phi_i}(z+r( \phi^{\varepsilon}-z) ; \phi'_i-\phi_i) = \frac{\delta V_i^{s}}{\delta\phi_i}(z+r( \phi-z) ; \phi'_i-\phi_i), \end{aligned}$$ and for all $r, \varepsilon\in [0,1]$, by the mean value theorem, $$\begin{aligned} & \left| \frac{\delta V_i^{s}}{\delta\phi_i}(z+r( \phi^{ \varepsilon}-z) ; \phi'_i-\phi_i)\right| \\ &\quad\le \left| \frac{\delta V_i^{s}}{\delta\phi_i}(z+r( \phi-z) ; \phi'_i-\phi_i)\right| +\left| \frac{\delta^2 V_i^{s}}{\delta\phi_i\delta\phi_i}(z+r( \phi^\varepsilon-z) ; \phi'_i-\phi_i,\phi'_i-\phi_i)r\right|, \end{aligned}$$ which is uniformly bounded with respect to $(r,\varepsilon)\in [0,1]^2$ due to [\[eq:chainrule_p-Vi\]](#eq:chainrule_p-Vi){reference-type="eqref" reference="eq:chainrule_p-Vi"} and Condition [\[item:integrable_p-Vi\]](#item:integrable_p-Vi){reference-type="ref" reference="item:integrable_p-Vi"}. 
Hence, letting $\varepsilon\rightarrow 0$ in [\[eq:dphi_i\]](#eq:dphi_i){reference-type="eqref" reference="eq:dphi_i"} and using Lebesgue's dominated convergence theorem give $$\begin{aligned} \begin{split} & \frac{{\mathrm{d}}}{{\mathrm{d}}\varepsilon} \Phi^{s} (\phi^{\varepsilon} ) \bigg\vert_{\varepsilon=0} %\p \Phi^{t,x}(\phi;\phi'_i) \\ & = \int_0^1 \sum_{j=1}^N \frac{\delta^2 V_j^{s}}{\delta\phi_j\delta\phi_i}(z+r( \phi-z);\phi_j-z_j, \phi'_i-\phi_i) r {\mathrm{d}}r + \int_0^1 \frac{\delta V_i^{s}}{\delta\phi_i}(z+r( \phi -z) ; \phi'_i-\phi_i) {\mathrm{d}}r \\ & = \int_0^1 \sum_{j=1}^N \frac{\delta^2 V_i^{s}}{\delta\phi_i\delta\phi_j} (z+r( \phi-z); \phi'_i-\phi_i, \phi_j-z_j) r {\mathrm{d}}r + \int_0^1 \frac{\delta V_i^{s}}{\delta\phi_i}(z+r( \phi-z) ; \phi'_i-\phi_i) {\mathrm{d}}r \\ & = \int_0^1 r\frac{{\mathrm{d}}}{{\mathrm{d}}r} \left( \frac{\delta V_i^{s}}{\delta\phi_i}(z+r( \phi-z); \phi'_i-\phi_i) \right) {\mathrm{d}}r + \int_0^1 \frac{\delta V_i^{s}}{\delta\phi_i}(z+r( \phi-z) ; \phi'_i-\phi_i) {\mathrm{d}}r, \end{split}\end{aligned}$$ where the second to last identity used Condition [\[item:symmetry-Vi\]](#item:symmetry-Vi){reference-type="ref" reference="item:symmetry-Vi"} and the last identity used [\[eq:chainrule_p-Vi\]](#eq:chainrule_p-Vi){reference-type="eqref" reference="eq:chainrule_p-Vi"}. The integration by parts formula then yields $$\begin{aligned} \label{eq:dPhi=dV} \begin{split} & \frac{{\mathrm{d}}}{{\mathrm{d}}\varepsilon} \Phi^{s} (\phi^{\varepsilon} ) \bigg\vert_{\varepsilon=0} =\frac{\delta V_i^{s}}{\delta\phi_i}(\phi ; \phi'_{i}-\phi_i). \end{split}\end{aligned}$$ Now let $i\in I_N$, $\phi'_i\in \pi_i$ and $\phi\in \pi^{(N)}$. For each $\varepsilon\in [0,1]$, let $\phi^\varepsilon= (\phi_i+\varepsilon(\phi'_i-\phi_i),\phi_{-i})\in \pi^{(N)}$. 
By the differentiability of $V_i^{s}$ and Lemma [Lemma 6](#lemma:derivative_line){reference-type="ref" reference="lemma:derivative_line"}, $\frac{{\mathrm{d}}}{{\mathrm{d}}\varepsilon} V_i^{s}(\phi^\varepsilon)=\frac{\delta V_i^{s}}{\delta\phi_i}(\phi^\varepsilon; \phi'_i-\phi_i)$ for all $\varepsilon\in [0,1]$, and $\varepsilon\mapsto \frac{\delta V_i^{s}}{\delta\phi_i}(\phi^\varepsilon; \phi'_i-\phi_i)$ is differentiable on $[0,1]$. This implies that $\varepsilon\mapsto V_i^{s}(\phi^\varepsilon)$ is continuously differentiable on $[0,1]$. Then by Lemma [Lemma 6](#lemma:derivative_line){reference-type="ref" reference="lemma:derivative_line"} and [\[eq:dPhi=dV\]](#eq:dPhi=dV){reference-type="eqref" reference="eq:dPhi=dV"}, $[0,1]\ni \varepsilon\mapsto \Phi^{s}(\phi^\varepsilon)\in {\mathbb R}$ is also continuously differentiable with $\frac{{\mathrm{d}}}{{\mathrm{d}}\varepsilon}\Phi^{s}(\phi^\varepsilon) =\frac{{\mathrm{d}}}{{\mathrm{d}}\varepsilon} V_i^{s}(\phi^\varepsilon)$ for all $\varepsilon\in [0,1]$. Hence by the fundamental theorem of calculus, $$\begin{aligned} V_i^{s}((\phi'_i,\phi_{-i}))- V_i^{s}((\phi_i,\phi_{-i})) & =\int_0^1 \frac{\delta V_i^{s}}{\delta\phi_i}(\phi^\varepsilon; \phi'_i-\phi_i) {\mathrm{d}}\varepsilon =\int_0^1 \frac{\delta \Phi^{s}}{\delta\phi_i}(\phi^\varepsilon; \phi'_i-\phi_i) {\mathrm{d}}\varepsilon \\ & =\Phi^{s}((\phi'_i,\phi_{-i}))- \Phi^{s}((\phi_i,\phi_{-i})). \end{aligned}$$ This proves that $\Phi^s$ defined in [\[eq:potential_variation\]](#eq:potential_variation){reference-type="eqref" reference="eq:potential_variation"} is a potential function of the game $\mathcal{G}$ with initial state $s$.
◻ ## Proof of Theorem [Theorem 9](#thm:differential_MPG_proba){reference-type="ref" reference="thm:differential_MPG_proba"} {#sec:proof_sde_probabilistic} The proof consists of first showing that the solutions to [\[eq:X_sensitivity_first\]](#eq:X_sensitivity_first){reference-type="eqref" reference="eq:X_sensitivity_first"} and [\[eq:X_sensitivity_second\]](#eq:X_sensitivity_second){reference-type="eqref" reference="eq:X_sensitivity_second"} characterise the derivatives of $X^{t,x,\phi}$ with respect to policies (see Lemmas [Lemma 12](#lemma:1st_derivative_state){reference-type="ref" reference="lemma:1st_derivative_state"} and [Lemma 13](#lemma:2nd_derivative_state){reference-type="ref" reference="lemma:2nd_derivative_state"}), and then proving that [\[eq:control_1st\]](#eq:control_1st){reference-type="eqref" reference="eq:control_1st"} and [\[eq:control_2nd\]](#eq:control_2nd){reference-type="eqref" reference="eq:control_2nd"} characterise the derivatives of $\alpha^{t,x,\phi}$ with respect to policies (see Lemma [Lemma 14](#lemma:control_derivative){reference-type="ref" reference="lemma:control_derivative"}). The following notation will be used in the subsequent analysis: for each $t\in [0,T]$, $n\in {\mathbb N}$ and $(X^k)_{k=0}^\infty\subset \mathcal{S}^\infty([t,T];{\mathbb R}^n)$, we write $\mathcal{S}^\infty\text-\lim_{k\rightarrow\infty}X^k=X^0$ if for all $q\ge 1$, $\lim_{k\rightarrow\infty}{\mathbb{E}}[\sup_{s\in [t,T]} |X^k_s-X^0_s|^q]=0$. For a family $X^\varepsilon\in \mathcal{S}^\infty([t,T];{\mathbb R}^n)$, $\varepsilon\in [0,1)$, we write $Y=\mathcal{S}^\infty \text- \partial_\varepsilon X^\varepsilon\vert_{\varepsilon=0}$ if $\mathcal{S}^\infty \text-\lim_{r\downarrow 0}\frac{1}{r}(X^{r}-X^{0})=Y$. Higher-order derivatives of a process can be defined similarly. The continuity and differentiability for processes in $\mathcal{H}^\infty([t,T];{\mathbb R}^n)$ are defined similarly. **Lemma 12**. *Assume (H.[H.
1](#assum:regularity_N){reference-type="ref" reference="assum:regularity_N"}). Let $(t,x)\in [0,T]\times {\mathbb R}^{n_x}$, $\phi\in \pi^{(N)}$ and $i\in I_N$. For all $\phi'_i\in \mathop{\mathrm{span}}(\pi_i)$, [\[eq:X_sensitivity_first\]](#eq:X_sensitivity_first){reference-type="eqref" reference="eq:X_sensitivity_first"} admits a unique solution $\frac{\delta X^{t,x}}{\delta \phi_i}(\phi;\phi'_i)\in \mathcal{S}^\infty([t,T];{\mathbb R}^{n_x})$ satisfying the following properties:* (1) *[\[item:linear_dX_dphi\]]{#item:linear_dX_dphi label="item:linear_dX_dphi"} The map $\frac{\delta X^{t,x}}{\delta \phi_i}(\phi;\cdot):\mathop{\mathrm{span}}(\pi_i)\rightarrow\mathcal{S}^\infty([t,T];{\mathbb R}^{n_x})$ is linear;* (2) *[\[item:Y=dX_dphi\]]{#item:Y=dX_dphi label="item:Y=dX_dphi"} For all $\phi'_i\in \pi_i$, $\frac{\delta X^{t,x}}{\delta \phi_i}(\phi;\phi'_i-\phi_i) =\mathcal{S}^\infty \text-\partial_\varepsilon X^{t,x, (\phi_i+\varepsilon(\phi_i'-\phi_i),\phi_{-i})}\big\vert_{\varepsilon=0}$.* *Proof.* Since $B$, $\Sigma$ and $\phi$ have bounded first-order derivatives and $\phi'_i\in \mathop{\mathrm{span}}(\pi_i)$ is of linear growth in $x$, standard a-priori estimates of SDEs (see e.g., [@zhang2017backward Theorem 3.4.3]) show that [\[eq:X_sensitivity_first\]](#eq:X_sensitivity_first){reference-type="eqref" reference="eq:X_sensitivity_first"} admits a unique solution $\frac{\delta X^{t,x}}{\delta \phi_i}(\phi;\phi'_i)\in \mathcal{S}^\infty([t,T];{\mathbb R}^{n_x})$. 
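The first-variation characterisation in Item [\[item:Y=dX_dphi\]](#item:Y=dX_dphi){reference-type="ref" reference="item:Y=dX_dphi"} can be illustrated with a short simulation. This is a hedged sketch with toy dynamics of our own choosing (not the paper's setting): scalar state, $B(x,a)=a$, constant $\Sigma$, policy $\phi(x)=-x$ and direction $\phi'-\phi\equiv 1$, so the first-variation equation reduces to ${\mathrm{d}}Y_s=(-Y_s+1)\,{\mathrm{d}}s$, $Y_0=0$. Along a common Brownian path the finite-difference quotient $(X^{\varepsilon}-X^{0})/\varepsilon$ coincides with $Y$ (here exactly, since the dynamics are linear in $\varepsilon$):

```python
import numpy as np

rng = np.random.default_rng(0)
T, n = 1.0, 1000
dt = T / n
dW = rng.normal(0.0, np.sqrt(dt), n)   # one Brownian path, shared by X^0 and X^eps

def euler_state(eps, x0=1.0, sigma=0.2):
    """Euler scheme for dX = (phi(X) + eps*(phi' - phi)(X)) ds + sigma dW
    with the toy choices phi(x) = -x and (phi' - phi)(x) = 1."""
    x = np.empty(n + 1); x[0] = x0
    for k in range(n):
        x[k+1] = x[k] + (-x[k] + eps)*dt + sigma*dW[k]
    return x

# First-variation process: dY = d_x B^phi(X) Y ds + (phi' - phi)(X) ds = (-Y + 1) ds
y = np.empty(n + 1); y[0] = 0.0
for k in range(n):
    y[k+1] = y[k] + (-y[k] + 1.0)*dt

eps = 1e-3
diff_quot = (euler_state(eps) - euler_state(0.0)) / eps
assert np.max(np.abs(diff_quot - y)) < 1e-6   # linear toy dynamics: exact match
```

For nonlinear coefficients the quotient would only converge to $Y$ as $\varepsilon\downarrow 0$, which is the content of the $\mathcal{S}^\infty$-derivative statement.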
As the terms $(\partial_{a_i} B^\phi)[\phi'_i](s, X^{t,x, \phi}_s )$ and $(\partial_{a_i} \Sigma^\phi)[\phi'_i](s, X^{t,x, \phi}_s )$ are linear with respect to $\phi'_i$, one can easily verify by the uniqueness of solutions that for all $\alpha, \beta\in {\mathbb R}$ and $\phi'_i,\phi_i''\in \mathop{\mathrm{span}}(\pi_i)$, $\frac{\delta X^{t,x}}{\delta \phi_i}(\phi;\alpha{\phi_i'}+\beta \phi_i'' ) =\alpha\frac{\delta X^{t,x}}{\delta \phi_i}(\phi;{\phi_i'} ) +\beta\frac{\delta X^{t,x}}{\delta \phi_i}(\phi; \phi_i'' ).$ This proves the linearity of $\mathop{\mathrm{span}}(\pi_i)\ni \phi'_i\mapsto \frac{\delta X^{t,x}}{\delta \phi_i}(\phi;\phi'_i)\in \mathcal{S}^\infty([t,T];{\mathbb R}^{n_x})$ in Item [\[item:linear_dX_dphi\]](#item:linear_dX_dphi){reference-type="ref" reference="item:linear_dX_dphi"}. Finally, observe that for all $\varepsilon\in [0,1)$, $X^{t,x, (\phi_i+\varepsilon(\phi_i'-\phi_i),\phi_{-i})}$ satisfies $$\begin{aligned} {\mathrm{d}}X_s & = B(s, X_s, (\phi_i+\varepsilon(\phi'_i-\phi_i),\phi_{-i})(s,X_s) ){\mathrm{d}}s \\ &\quad + \Sigma(s, X_s , (\phi_i+\varepsilon(\phi'_i-\phi_i),\phi_{-i})(s,X_s)){\mathrm{d}}W_s, \quad s\in [t,T]; \quad X_t =x. \end{aligned}$$ As $B,\Sigma$, $\phi$ and $\phi'_i$ are continuously differentiable in $(x,a)$, $\mathcal{S}^\infty \text-\partial_\varepsilon X^{t,x, (\phi_i+\varepsilon(\phi_i'-\phi_i),\phi_{-i})}\big\vert_{\varepsilon=0}$ exists due to [@krylov2008controlled Theorem 4, p. 105] and satisfies [\[eq:X_sensitivity_first\]](#eq:X_sensitivity_first){reference-type="eqref" reference="eq:X_sensitivity_first"} due to [@krylov2008controlled Remark 5, p. 108]. The uniqueness of solutions to [\[eq:X_sensitivity_first\]](#eq:X_sensitivity_first){reference-type="eqref" reference="eq:X_sensitivity_first"} then yields Item [\[item:Y=dX_dphi\]](#item:Y=dX_dphi){reference-type="ref" reference="item:Y=dX_dphi"}. ◻ **Lemma 13**. *Suppose (H.[H.
1](#assum:regularity_N){reference-type="ref" reference="assum:regularity_N"}) holds. Let $(t,x)\in [0,T]\times {\mathbb R}^{n_x}$, $\phi\in \pi^{(N)}$ and $i,j\in I_N$. For all $\phi'_i\in \mathop{\mathrm{span}}(\pi_i)$ and $\phi''_j\in \mathop{\mathrm{span}}(\pi_j)$, [\[eq:X_sensitivity_second\]](#eq:X_sensitivity_second){reference-type="eqref" reference="eq:X_sensitivity_second"} admits a unique solution $\frac{\delta^2 X^{t,x}}{\delta \phi_i\delta \phi_j}(\phi;\phi'_i,\phi''_j)\in \mathcal{S}^\infty([t,T];{\mathbb R}^{n_x})$ satisfying the following properties:* (1) *[\[item:bilinear_d2X_d2phi\]]{#item:bilinear_d2X_d2phi label="item:bilinear_d2X_d2phi"} The map $\frac{\delta^2 X^{t,x}}{\delta \phi_i\delta \phi_j}(\phi;\cdot,\cdot):\mathop{\mathrm{span}}(\pi_i)\times \mathop{\mathrm{span}}(\pi_j)\rightarrow\mathcal{S}^\infty([t,T];{\mathbb R}^{n_x})$ is bilinear;* (2) *[\[item:symmetry_d2X_d2phi\]]{#item:symmetry_d2X_d2phi label="item:symmetry_d2X_d2phi"} For all $\phi'_i\in \pi_i$ and $\phi''_j\in \pi_j$, $\frac{\delta^2 X^{t,x}}{\delta \phi_i\delta \phi_j}(\phi;\phi'_i,\phi''_j) =\frac{\delta^2 X^{t,x}}{\delta \phi_j\delta \phi_i}(\phi;\phi''_j,\phi'_i)$;* (3) *[\[item:Z=d\^2X_d\^2phi\]]{#item:Z=d^2X_d^2phi label="item:Z=d^2X_d^2phi"} For all $\phi'_i\in \mathop{\mathrm{span}}(\pi_i)$ and $\phi''_j\in \pi_j$, $\frac{\delta^2 X^{t,x}}{\delta \phi_i\delta \phi_j}(\phi;\phi'_i,\phi''_j-\phi_j) =\mathcal{S}^\infty \text-\partial_\varepsilon \frac{\delta X^{t,x}}{\delta \phi_i}((\phi_j+\varepsilon(\phi''_j-\phi_j),\phi_{-j});\phi'_i) \big\vert_{\varepsilon=0}.$* *Proof.* Standard well-posedness results of SDEs and the regularity conditions of $B$, $\Sigma$, $\phi$, $\phi'_i$ and $\phi''_j$ show that [\[eq:X_sensitivity_second\]](#eq:X_sensitivity_second){reference-type="eqref" reference="eq:X_sensitivity_second"} admits a unique solution $\frac{\delta^2 X^{t,x}}{\delta \phi_i\delta \phi_j}(\phi;\phi'_i,\phi''_j)$. 
For Item [\[item:bilinear_d2X_d2phi\]](#item:bilinear_d2X_d2phi){reference-type="ref" reference="item:bilinear_d2X_d2phi"}, observe that by Lemma [Lemma 12](#lemma:1st_derivative_state){reference-type="ref" reference="lemma:1st_derivative_state"} and [\[eq:F_B\]](#eq:F_B){reference-type="eqref" reference="eq:F_B"}, for any given $\phi''_j\in \mathop{\mathrm{span}}(\pi_j)$, the map $\phi'_i\mapsto \big(F_B, F_\Sigma\big)\left(s,X^{t,x, \phi}_s, \frac{\delta X^{t,x}}{\delta \phi_i}(\phi;\phi'_i), \frac{\delta X^{t,x}}{\delta \phi_j}(\phi;\phi''_j), \phi'_i, \phi''_j\right)$ is linear. This along with the linearity of the SDE [\[eq:X_sensitivity_second\]](#eq:X_sensitivity_second){reference-type="eqref" reference="eq:X_sensitivity_second"} shows that $\phi'_i\mapsto \frac{\delta^2 X^{t,x}}{\delta \phi_i\delta \phi_j}(\phi;\phi'_i,\phi''_j)$ is linear. Similar arguments yield the linearity of $\phi''_j\mapsto \frac{\delta^2 X^{t,x}}{\delta \phi_i\delta \phi_j}(\phi;\phi'_i,\phi''_j)$. For Item [\[item:symmetry_d2X_d2phi\]](#item:symmetry_d2X_d2phi){reference-type="ref" reference="item:symmetry_d2X_d2phi"}, observe that $F_B(t,x, y_1, y_2, \phi'_i, \phi''_j ) = F_B (t,x, y_2, y_1, \phi''_j,\phi'_i )$, as $B$ is twice continuously differentiable with respect to $(x,a)$ and hence has a symmetric Jacobian. Similarly, we have $F_\Sigma \left(t,x, y_1, y_2, \phi'_i, \phi''_j\right) = F_\Sigma\left(t,x, y_2, y_1, \phi''_j,\phi'_i\right)$. 
This proves that both $\frac{\delta^2 X^{t,x}}{\delta \phi_i\delta \phi_j}(\phi;\phi'_i,\phi''_j)$ and $\frac{\delta^2 X^{t,x}}{\delta \phi_j\delta \phi_i}(\phi;\phi''_j,\phi'_i)$ satisfy [\[eq:X_sensitivity_second\]](#eq:X_sensitivity_second){reference-type="eqref" reference="eq:X_sensitivity_second"}, which along with the uniqueness of solutions to [\[eq:X_sensitivity_second\]](#eq:X_sensitivity_second){reference-type="eqref" reference="eq:X_sensitivity_second"} yields Item [\[item:symmetry_d2X_d2phi\]](#item:symmetry_d2X_d2phi){reference-type="ref" reference="item:symmetry_d2X_d2phi"}. Finally, for all $\varepsilon\in [0,1)$, let $\phi^\varepsilon=(\phi_j+\varepsilon(\phi''_j-\phi_j),\phi_{-j})$. By Lemma [Lemma 12](#lemma:1st_derivative_state){reference-type="ref" reference="lemma:1st_derivative_state"}, $\frac{\delta X^{t,x}}{\delta \phi_i}(\phi^\varepsilon;\phi'_i)$ satisfies $$\begin{aligned} \begin{split} {\mathrm{d}}Y_s & = (\partial_x B^{\phi^\varepsilon}) (s, X^{t,x, \phi^\varepsilon}_s ) Y_s {\mathrm{d}}s + \big((\partial_x \Sigma^{\phi^\varepsilon}) (s, X^{t,x, {\phi^\varepsilon}}_s ) Y_s \big) {\mathrm{d}}W_s \\ &\quad + (\partial_{a_i} B^{\phi^\varepsilon})[\phi'_i](s, X^{t,x, {\phi^\varepsilon}}_s ) {\mathrm{d}}s %\\ %&\quad + (\partial_{a_i} \Sigma^{\phi^\varepsilon})[\phi'_i](s, X^{t,x, {\phi^\varepsilon}}_s ) {\mathrm{d}}W_s \quad \forall s\in [t,T]. \end{split} \end{aligned}$$ By [@krylov2008controlled Theorem 4, p. 
105], $\mathcal{S}^\infty \text-\partial_\varepsilon \frac{\delta X^{t,x}}{\delta \phi_i}(\phi^\varepsilon;\phi'_i) \big\vert_{\varepsilon=0}$ exists and satisfies [\[eq:X_sensitivity_second\]](#eq:X_sensitivity_second){reference-type="eqref" reference="eq:X_sensitivity_second"}, which along with the uniqueness of solutions to [\[eq:X_sensitivity_second\]](#eq:X_sensitivity_second){reference-type="eqref" reference="eq:X_sensitivity_second"} yields Item [\[item:Z=d\^2X_d\^2phi\]](#item:Z=d^2X_d^2phi){reference-type="ref" reference="item:Z=d^2X_d^2phi"}. This finishes the proof. ◻ **Lemma 14**. *Suppose (H.[H. 1](#assum:regularity_N){reference-type="ref" reference="assum:regularity_N"}) holds. Let $(t,x)\in [0,T]\times{\mathbb R}^{n_x}$ and $\phi\in \pi^{(N)}$. For each $s\in [t,T]$, let $\alpha^{t,x,\phi}_s=\phi(s,X^{t,x,\phi}_s)$. Then for all $i,j\in I_N$,* (1) *[\[item:control_1st\]]{#item:control_1st label="item:control_1st"}* *$\frac{\delta \alpha^{t,x}}{\delta \phi_i}(\phi;\cdot):\mathop{\mathrm{span}}(\pi_i)\rightarrow\mathcal{H}^\infty([t,T];{\mathbb R}^{n_a})$ is linear, and for all $\phi'_i\in \pi_i$, $\frac{\delta \alpha^{t,x}}{\delta \phi_i}(\phi;\phi'_i-\phi_i) =\mathcal{H}^\infty \text-\partial_\varepsilon\alpha^{t,x, (\phi_i+\varepsilon(\phi_i'-\phi_i),\phi_{-i})}\big\vert_{\varepsilon=0}$;* (2) *[\[item:control_2nd\]]{#item:control_2nd label="item:control_2nd"} $\frac{\delta^2 \alpha^{t,x}}{\delta \phi_i\delta\phi_j}(\phi;\cdot,\cdot):\mathop{\mathrm{span}}(\pi_i)\times\mathop{\mathrm{span}}(\pi_j) \rightarrow\mathcal{H}^\infty([t,T];{\mathbb R}^{n_a})$ is bilinear; for all $\phi'_i\in \pi_i$ and $\phi''_j\in \pi_j$, $\frac{\delta^2 \alpha^{t,x}}{\delta \phi_i\delta \phi_j}(\phi;\phi'_i,\phi''_j) =\frac{\delta^2 \alpha^{t,x}}{\delta \phi_j\delta \phi_i}(\phi;\phi''_j,\phi'_i)$; and for all $\phi'_i\in \mathop{\mathrm{span}}(\pi_i)$ and $\phi''_j\in \pi_j$, $\frac{\delta^2 \alpha^{t,x}}{\delta \phi_i\delta \phi_j}(\phi;\phi'_i,\phi''_j-\phi_j)
=\mathcal{H}^\infty \text-\partial_\varepsilon \frac{\delta \alpha^{t,x}}{\delta \phi_i}((\phi_j+\varepsilon(\phi''_j-\phi_j),\phi_{-j});\phi'_i) \big\vert_{\varepsilon=0}$.* *Proof.* For Item [\[item:control_1st\]](#item:control_1st){reference-type="ref" reference="item:control_1st"}, the linearity of $\frac{\delta \alpha^{t,x}}{\delta \phi_i}(\phi;\cdot)$ follows from [\[eq:control_1st\]](#eq:control_1st){reference-type="eqref" reference="eq:control_1st"} and the linearity of $\frac{\delta X^{t,x}}{\delta \phi_i}(\phi;\cdot)$ (see Lemma [Lemma 12](#lemma:1st_derivative_state){reference-type="ref" reference="lemma:1st_derivative_state"}). For each $\varepsilon\in [0,1)$, let $\phi^\varepsilon=(\phi_{i}+\varepsilon(\phi'_i-\phi_i),\phi_{-i})$ and observe that $$\alpha^{t,x,\phi^\varepsilon}_s=\phi^\varepsilon(s, X^{t,x,\phi^\varepsilon}_s)= (\phi_{i}+\varepsilon(\phi'_i-\phi_i),\phi_{-i})(s, X^{t,x,\phi^\varepsilon}_s).$$ Hence [@krylov2008controlled Theorem 9, p. 97], the differentiability of $\phi$ and $\phi'_i$ and the $\mathcal{S}^\infty$-differentiability of $(X^{t,x,\phi^\varepsilon})_{\varepsilon\in [0,1)}$ imply the existence and the desired formula of $\mathcal{H}^\infty \text-\partial_\varepsilon\alpha^{t,x, (\phi_i+\varepsilon(\phi_i'-\phi_i),\phi_{-i})}\big\vert_{\varepsilon=0}$. For Item [\[item:control_2nd\]](#item:control_2nd){reference-type="ref" reference="item:control_2nd"}, the bilinearity and symmetry of $\frac{\delta^2 \alpha^{t,x}}{\delta \phi_i\delta\phi_j}(\phi;\cdot,\cdot)$ follow directly from [\[eq:control_2nd\]](#eq:control_2nd){reference-type="eqref" reference="eq:control_2nd"} and Lemmas [Lemma 12](#lemma:1st_derivative_state){reference-type="ref" reference="lemma:1st_derivative_state"} and [Lemma 13](#lemma:2nd_derivative_state){reference-type="ref" reference="lemma:2nd_derivative_state"}.
For each $\varepsilon\in [0,1)$, let $\phi^\varepsilon=(\phi_{j}+\varepsilon(\phi''_j-\phi_j),\phi_{-j})$ and observe that $$\begin{aligned} \frac{\delta \alpha^{t,x}}{\delta \phi_i}(\phi^\varepsilon;\phi'_i)_s & =(\partial_x \phi^\varepsilon)(s,X^{t,x,\phi^\varepsilon}_s) \frac{\delta X^{t,x}}{\delta \phi_i}(\phi^\varepsilon;\phi'_i)_s+E_i\phi'_i(s, X^{t,x,\phi^\varepsilon}_s). \end{aligned}$$ Applying [@krylov2008controlled Theorem 9, p. 97] again yields the desired expression of $\mathcal{H}^\infty \text-\partial_\varepsilon \frac{\delta \alpha^{t,x}}{\delta \phi_i}((\phi_j+\varepsilon(\phi''_j-\phi_j),\phi_{-j});\phi'_i) \big\vert_{\varepsilon=0}$. ◻ *Proofs of Theorem [Theorem 9](#thm:differential_MPG_proba){reference-type="ref" reference="thm:differential_MPG_proba"} Items [\[item:prob_first\]](#item:prob_first){reference-type="ref" reference="item:prob_first"} and [\[item:prob_second\]](#item:prob_second){reference-type="ref" reference="item:prob_second"}.* Fix $\phi\in \pi^{(N)}$. The linearity of $\frac{\delta V^{t,x}_i}{\delta \phi_j}(\phi;\cdot): \mathop{\mathrm{span}}(\pi_j) \rightarrow{\mathbb R}$ follows from the linearity of $\frac{\delta X^{t,x}}{\delta \phi_j} (\phi;\cdot)$ and $\frac{\delta \alpha^{t,x}}{\delta \phi_j} (\phi;\cdot)$ shown in Lemmas [Lemma 12](#lemma:1st_derivative_state){reference-type="ref" reference="lemma:1st_derivative_state"} and [Lemma 14](#lemma:control_derivative){reference-type="ref" reference="lemma:control_derivative"}. Let $\phi'_j\in \pi_j$ and for each $\varepsilon\in [0,1)$, let $\phi^\varepsilon=(\phi_j+\varepsilon(\phi'_j-\phi_j), \phi_{-j})$. Recall that $$\begin{aligned} V^{t,x}_i(\phi^\varepsilon) = {\mathbb{E}}\left[\int_t^T f_i(s,X^{t,x,\phi^\varepsilon}_s, \alpha^{t,x,\phi^\varepsilon}_s){\mathrm{d}}s + g_i(X^{t,x,\phi^\varepsilon}_T)\right]. \end{aligned}$$ Then [@krylov2008controlled Theorem 9, p.
97], the $\mathcal{S}^\infty$-differentiability of $X^{t,x,\phi^\varepsilon}$ and the $\mathcal{H}^\infty$-differentiability of $\alpha^{t,x,\phi^\varepsilon}$ (see Lemmas [Lemma 12](#lemma:1st_derivative_state){reference-type="ref" reference="lemma:1st_derivative_state"}, [Lemma 13](#lemma:2nd_derivative_state){reference-type="ref" reference="lemma:2nd_derivative_state"} and [Lemma 14](#lemma:control_derivative){reference-type="ref" reference="lemma:control_derivative"}) imply that $\frac{{\mathrm{d}}}{{\mathrm{d}}\varepsilon}V^{t,x}_i(\phi^\varepsilon)\vert_{\varepsilon=0}= \frac{\delta V^{t,x}_i}{\delta \phi_j} (\phi;\phi'_j-\phi_j)$, which proves Theorem [Theorem 9](#thm:differential_MPG_proba){reference-type="ref" reference="thm:differential_MPG_proba"} Item [\[item:prob_first\]](#item:prob_first){reference-type="ref" reference="item:prob_first"}. To prove Theorem [Theorem 9](#thm:differential_MPG_proba){reference-type="ref" reference="thm:differential_MPG_proba"} Item [\[item:prob_second\]](#item:prob_second){reference-type="ref" reference="item:prob_second"}, for all $(k,\ell)\in \{(i,j),(j,i)\}$, observe that the bilinearity of $\frac{\delta^2 V^{t,x}_i}{\delta \phi_k\delta \phi_\ell}:\pi^{(N)} \times \mathop{\mathrm{span}}(\pi_k)\times \mathop{\mathrm{span}}(\pi_\ell) \rightarrow{\mathbb R}$ follows from [\[eq:value_derivative_2nd\]](#eq:value_derivative_2nd){reference-type="eqref" reference="eq:value_derivative_2nd"}, the linearity of $\frac{\delta X^{t,x}}{\delta \phi_k} (\phi;\cdot)$ and $\frac{\delta \alpha^{t,x}}{\delta \phi_\ell} (\phi;\cdot)$, and the bilinearity of $\frac{\delta^2 X^{t,x}}{\delta \phi_k\phi_\ell} (\phi;\cdot,\cdot)$ and $\frac{\delta^2 \alpha^{t,x}}{\delta \phi_k\phi_\ell} (\phi;\cdot,\cdot)$. 
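The identity $\frac{{\mathrm{d}}}{{\mathrm{d}}\varepsilon}V^{t,x}_i(\phi^\varepsilon)\vert_{\varepsilon=0}= \frac{\delta V^{t,x}_i}{\delta \phi_j} (\phi;\phi'_j-\phi_j)$ admits a quick numerical illustration in a deliberately simplified deterministic setting (our own toy, not the paper's: $\Sigma=0$, scalar state $x'=-x+\varepsilon$, $x(0)=1$, $f_i(x)=x^2$, $g_i=0$). The Gateaux derivative computed from the first-variation process agrees with the finite-difference quotient of the value:

```python
import numpy as np

# Deterministic toy: X^eps solves x' = -x + eps, x(0) = 1 on [0, 1],
# running cost f(x) = x^2, so V(eps) = int_0^1 X^eps(s)^2 ds.
n = 200_000
s = np.linspace(0.0, 1.0, n + 1)
ds = 1.0 / n

def trapz(f):                      # trapezoidal rule on the fixed grid
    return ds * (f.sum() - 0.5 * (f[0] + f[-1]))

def X(eps):                        # closed-form solution of the toy ODE
    return np.exp(-s) + eps * (1.0 - np.exp(-s))

def V(eps):
    return trapz(X(eps) ** 2)

# Gateaux derivative via the first-variation process Y(s) = 1 - exp(-s):
Y = 1.0 - np.exp(-s)
dV = trapz(2.0 * X(0.0) * Y)

eps = 1e-5
fd = (V(eps) - V(0.0)) / eps       # finite-difference quotient of the value
assert abs(fd - dV) < 1e-4
```

In the stochastic case the same comparison would require Monte Carlo averaging over a common noise; the deterministic reduction keeps the check exact up to quadrature error.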
The facts that $\frac{\delta^2 V^{t,x}_i}{\delta \phi_i\delta \phi_j}$ is a linear derivative of $\frac{\delta V^{t,x}_i}{\delta \phi_i}$ in $\pi_j$ and that $\frac{\delta^2 V^{t,x}_i}{\delta \phi_j\delta \phi_i}$ is a linear derivative of $\frac{\delta V^{t,x}_i}{\delta \phi_j}$ in $\pi_i$ can be proved by similar arguments as those employed for the first-order derivatives. This finishes the proof. ◻ *Proof of Theorem [Theorem 9](#thm:differential_MPG_proba){reference-type="ref" reference="thm:differential_MPG_proba"} Item [\[item:prob_potential\]](#item:prob_potential){reference-type="ref" reference="item:prob_potential"}.* We only prove the characterisation of CLPGs, which follows by applying Theorem [Theorem 8](#thm:symmetry_value_sufficient){reference-type="ref" reference="thm:symmetry_value_sufficient"} to the game $\mathcal{G}_{\rm prob}$ with fixed $(t,x)\in [0,T]\times {\mathbb R}^{n_x}$. The convexity of $\pi_i$ follows from the convexity of $A_i$ in (H.[H. 1](#assum:regularity_N){reference-type="ref" reference="assum:regularity_N"}), the linear differentiability of $(V_i)_{i\in I_N}$ has been proved in Theorem [Theorem 9](#thm:differential_MPG_proba){reference-type="ref" reference="thm:differential_MPG_proba"} Items [\[item:prob_first\]](#item:prob_first){reference-type="ref" reference="item:prob_first"} and [\[item:prob_second\]](#item:prob_second){reference-type="ref" reference="item:prob_second"}, and Condition [\[item:symmetry-Vi\]](#item:symmetry-Vi){reference-type="ref" reference="item:symmetry-Vi"} in Theorem [Theorem 8](#thm:symmetry_value_sufficient){reference-type="ref" reference="thm:symmetry_value_sufficient"} has been imposed in Condition [\[eq:prob_symmetry\]](#eq:prob_symmetry){reference-type="eqref" reference="eq:prob_symmetry"}. 
Hence it remains to verify Conditions [\[item:integrable_p-Vi\]](#item:integrable_p-Vi){reference-type="ref" reference="item:integrable_p-Vi"} and [\[item:joint_continuity\]](#item:joint_continuity){reference-type="ref" reference="item:joint_continuity"} in Theorem [Theorem 8](#thm:symmetry_value_sufficient){reference-type="ref" reference="thm:symmetry_value_sufficient"} for given $i,j\in I_N$, $z, \phi \in \pi^{(N)}$, $\phi'_i,\tilde{\phi}'_i\in \pi_i$ and ${\phi}''_j \in \pi_j$. To verify Condition [\[item:integrable_p-Vi\]](#item:integrable_p-Vi){reference-type="ref" reference="item:integrable_p-Vi"}, for all $r, \varepsilon\in [0,1]$, let $\phi^{r,\varepsilon}= z+r( (\phi_i+\varepsilon(\tilde{\phi}'_{i}-\phi_i),\phi_{-i})-z)$. To prove $\sup_{r,\varepsilon\in [0,1]}\left| \frac{\delta^2 V_i^{t,x}}{\delta\phi_i\delta\phi_j}\big(\phi^{r,\varepsilon}; {\phi}'_i,{\phi}''_j\big) \right|<\infty$, by [\[eq:value_derivative_2nd\]](#eq:value_derivative_2nd){reference-type="eqref" reference="eq:value_derivative_2nd"} and the polynomial growth of $f_i$ and $g_i$ and their derivatives (see (H.[H. 
1](#assum:regularity_N){reference-type="ref" reference="assum:regularity_N"})), it suffices to show for all $q\ge 1$, there exists $C_q$ such that for all $r,\varepsilon\in [0,1]$, $$\label{eq:uniform_moment_bound_X} \|X^{t,x,\phi^{r,\varepsilon}}\|_{\mathcal{S}^q}, \left\| \frac{\delta X^{t,x}}{\delta \phi_i} (\phi^{r,\varepsilon};\phi'_i)\right\|_{\mathcal{S}^q}, \left\| \frac{\delta X^{t,x}}{\delta \phi_j} (\phi^{r,\varepsilon};\phi''_j)\right\|_{\mathcal{S}^q}, \left\| \frac{\delta^2 X^{t,x}}{\delta \phi_i\phi_j} (\phi^{r,\varepsilon};\phi'_i,\phi''_j)\right\|_{\mathcal{S}^q} \le C_q,$$ and $$\label{eq:uniform_moment_bound_alpha} \|\alpha^{t,x,\phi^{r,\varepsilon}}\|_{\mathcal{H}^q}, \left\| \frac{\delta \alpha^{t,x}}{\delta \phi_i} (\phi^{r,\varepsilon};\phi'_i)\right\|_{\mathcal{H}^q}, \left\| \frac{\delta \alpha^{t,x}}{\delta \phi_j} (\phi^{r,\varepsilon};\phi''_j)\right\|_{\mathcal{H}^q}, \left\| \frac{\delta^2 \alpha^{t,x}}{\delta \phi_i\phi_j} (\phi^{r,\varepsilon};\phi'_i,\phi''_j)\right\|_{\mathcal{H}^q} \le C_q.$$ Since $\phi^{r,\varepsilon}$ is a convex combination of policies $\phi$, $z$ and $(\tilde{\phi}'_i,\phi_{-i})$, there exists $L\ge 0$ such that for all $r,\varepsilon\in [0,1]$, $$\sup_{(t,x)\in [0,T]\times {\mathbb R}^{n_x}}({|\phi^{r,\varepsilon}(t,0)|}+|(\partial_x \phi^{r,\varepsilon})(t,x)|+|(\partial_{xx} \phi^{r,\varepsilon})(t,x)|)\le L.$$ Hence, the uniform moment estimate [\[eq:uniform_moment_bound_X\]](#eq:uniform_moment_bound_X){reference-type="eqref" reference="eq:uniform_moment_bound_X"} follows by applying standard moment estimates of SDEs (see [@zhang2017backward Theorem 3.4.3]) to [\[eq:state_policy\]](#eq:state_policy){reference-type="eqref" reference="eq:state_policy"}, [\[eq:X_sensitivity_first\]](#eq:X_sensitivity_first){reference-type="eqref" reference="eq:X_sensitivity_first"} and [\[eq:X_sensitivity_second\]](#eq:X_sensitivity_second){reference-type="eqref" reference="eq:X_sensitivity_second"}. 
The estimate [\[eq:uniform_moment_bound_alpha\]](#eq:uniform_moment_bound_alpha){reference-type="eqref" reference="eq:uniform_moment_bound_alpha"} then follows from [\[eq:control_1st\]](#eq:control_1st){reference-type="eqref" reference="eq:control_1st"}, [\[eq:control_2nd\]](#eq:control_2nd){reference-type="eqref" reference="eq:control_2nd"} and the Cauchy-Schwarz inequality. This verifies Condition [\[item:integrable_p-Vi\]](#item:integrable_p-Vi){reference-type="ref" reference="item:integrable_p-Vi"}. To verify Condition [\[item:joint_continuity\]](#item:joint_continuity){reference-type="ref" reference="item:joint_continuity"}, for all $\varepsilon=(\varepsilon_i)_{i=1}^N\in [0,1]^N$, let $\phi^\varepsilon= (z_i+\varepsilon_i({\phi}_{i}-z_i))_{i\in I_N}$. As $\phi^\varepsilon_i$ is a convex combination of $\phi_i$ and $z_i$, there exists $L\ge 0$ such that for all $\varepsilon\in [0,1]^N$, $$\sup_{(t,x)\in [0,T]\times {\mathbb R}^{n_x}}({|\phi^{\varepsilon}(t,0)|}+|(\partial_x \phi^{\varepsilon})(t,x)|+|(\partial_{xx} \phi^{\varepsilon})(t,x)|)\le L,$$ from which one can deduce uniform moment estimates of the state and control processes as in [\[eq:uniform_moment_bound_X\]](#eq:uniform_moment_bound_X){reference-type="eqref" reference="eq:uniform_moment_bound_X"} and [\[eq:uniform_moment_bound_alpha\]](#eq:uniform_moment_bound_alpha){reference-type="eqref" reference="eq:uniform_moment_bound_alpha"}.
As $\lim_{\varepsilon_i\downarrow 0}\phi^\varepsilon_i =\phi^0_i$ a.e., by stability results of SDEs (e.g., [@zhang2017backward Theorem 3.4.2]), $$\begin{aligned} &\lim_{\varepsilon\rightarrow 0} \left(X^{t,x,\phi^{\varepsilon}} , \frac{\delta X^{t,x}}{\delta \phi_i} (\phi^{\varepsilon};\phi'_i), \frac{\delta X^{t,x}}{\delta \phi_j} (\phi^{\varepsilon};\phi''_j), \frac{\delta^2 X^{t,x}}{\delta \phi_i\phi_j} (\phi^{\varepsilon};\phi'_i,\phi''_j)\right) \\ &\quad= \left( X^{t,x,\phi^{0}} , \frac{\delta X^{t,x}}{\delta \phi_i} (\phi^{0};\phi'_i), \frac{\delta X^{t,x}}{\delta \phi_j} (\phi^{0};\phi''_j), \frac{\delta^2 X^{t,x}}{\delta \phi_i\phi_j} (\phi^{0};\phi'_i,\phi''_j) \right) \quad \textnormal{a.e.~$[t,T]\times \Omega$.}\end{aligned}$$ By [\[eq:control_1st\]](#eq:control_1st){reference-type="eqref" reference="eq:control_1st"}, [\[eq:control_2nd\]](#eq:control_2nd){reference-type="eqref" reference="eq:control_2nd"} and the convergence of $\phi^\varepsilon$, similar convergence results also hold for the control processes. Hence by Lebesgue's dominated convergence theorem, $\lim_{\varepsilon\rightarrow 0} \frac{\delta^2 V_i^{t,x }}{\delta \phi_i\phi_j}(\phi^\varepsilon;\phi'_i, \phi''_j) = \frac{\delta^2 V_i^{t,x }}{\delta \phi_i\phi_j}(\phi^0 ;\phi'_i, \phi''_j)$. This verifies Condition [\[item:joint_continuity\]](#item:joint_continuity){reference-type="ref" reference="item:joint_continuity"}. ◻ ## Proof of Theorem [Theorem 10](#thm:differential_MPG_pde){reference-type="ref" reference="thm:differential_MPG_pde"} {#sec:proof_sde_analytic} *Proof of Theorem [Theorem 10](#thm:differential_MPG_pde){reference-type="ref" reference="thm:differential_MPG_pde"} Item [\[item:pde_first\]](#item:pde_first){reference-type="ref" reference="item:pde_first"}.* Fix $i,k\in I_N$, $\phi\in \pi^{(N)}$ and $\phi'_k\in \mathop{\mathrm{span}}(\pi_k)$. Let $\eta=\beta\gamma$. By (H.[H. 
2](#assum:Holder_regularity){reference-type="ref" reference="assum:Holder_regularity"}), the Hölder continuity of $\phi, \phi'_k$ and $V^\phi_i$, it is clear that all coefficients of [\[eq:pde_value_1st\]](#eq:pde_value_1st){reference-type="eqref" reference="eq:pde_value_1st"} are in $C^{ {\eta}/{2}, \eta}([0,T]\times {\mathbb R}^{n_x};{\mathbb R})$. Thus by [@ladyzhenskaia1988linear Theorem 5.1, p. 320], [\[eq:pde_value_1st\]](#eq:pde_value_1st){reference-type="eqref" reference="eq:pde_value_1st"} admits a unique classical solution in $C^{1+ {\eta}/{2},2+\eta}([0,T]\times {\mathbb R}^{n_x}; {\mathbb R})$. This shows that the map $\frac{\delta V_i}{\delta \phi_k}$ in [\[eq:value_1st_pde\]](#eq:value_1st_pde){reference-type="eqref" reference="eq:value_1st_pde"} is well-defined. Moreover, as [\[eq:pde_value_1st\]](#eq:pde_value_1st){reference-type="eqref" reference="eq:pde_value_1st"} is linear in terms of $\phi'_k$, $\mathop{\mathrm{span}}(\pi_k)\ni\phi'_k\mapsto \frac{\delta V^\phi_i}{\delta \phi_k}(t,x; \phi'_k)\in {\mathbb R}$ is linear. Now fix $\phi'_k\in \pi_k$. For each $\varepsilon\in [0,1]$, let $\phi^\varepsilon=(\phi_k+\varepsilon(\phi'_k-\phi_k),\phi_{-k})$. We claim that $$\label{eq:derivative_1st_holder} \lim_{\varepsilon\downarrow 0 }\bigg\|\frac{V^{\phi^\varepsilon}_i(\cdot) -V^{\phi}_i(\cdot)}{\varepsilon}-\frac{\delta V^\phi_i}{\delta \phi_k}(\cdot; \phi'_k-\phi_k)\bigg\|_{1+\beta\gamma/2,2+\beta\gamma}=0.$$ Recall that $V^{\phi^\varepsilon}_i$, $\varepsilon\in (0,1]$, satisfies the following PDE: for all $(t,x)\in [0,T]\times {\mathbb R}^{n_x}$, $$\begin{aligned} \label{eq:pde_value_eps} \begin{split} &\partial_t V^{\phi^\varepsilon}_i (t,x) + \mathcal{L}^{\phi^\varepsilon} V^{\phi^\varepsilon}_i (t,x)+f_i(t,x,\phi^\varepsilon(t,x))=0; \quad V^{\phi^\varepsilon}_i (T,x)= g_i(x). 
\end{split}\end{aligned}$$ Then by [\[eq:pde_value_1st\]](#eq:pde_value_1st){reference-type="eqref" reference="eq:pde_value_1st"}, $U^\varepsilon(\cdot)=\frac{V^{\phi^\varepsilon}_i(\cdot) -V^{\phi}_i(\cdot)}{\varepsilon}-\frac{\delta V^\phi_i}{\delta \phi_k}(\cdot; \phi'_k-\phi_k)$ satisfies for all $(t,x)\in [0,T]\times {\mathbb R}^{n_x}$, $$\begin{aligned} \label{eq:pde_U_eps} \begin{split} &\partial_t U^{\varepsilon} (t,x) +\mathcal{L}^{\phi^\varepsilon} U^{\varepsilon} (t,x) + F^\varepsilon(t,x )=0; \quad U^{\varepsilon} (T,x)= 0, \end{split}\end{aligned}$$ where $$\begin{aligned} \label{eq:F_eps} \begin{split} F^\varepsilon(t,x ) &= \frac{(\mathcal{L}^{\phi^\varepsilon}-\mathcal{L}^{\phi})V(t,x)+f_i(t,x,\phi^\varepsilon(t,x))-f_i(t,x,\phi(t,x))}{\varepsilon} \\ &\quad - (\partial_{a_k} H_i)\big(t,x,(\partial_x V^\phi_i)(t,x),(\partial_{xx} V^\phi_i)(t,x),\phi(t,x)\big)^\top (\phi'_k-\phi_k)(t,x) \\ & \quad+(\mathcal{L}^{\phi^\varepsilon}-\mathcal{L}^{\phi})\frac{\delta V^\phi_i}{\delta \phi_k}(t,x; \phi'_k-\phi_k). \end{split}\end{aligned}$$ By Taylor's theorem, for all $(t,x)\in [0,T]\times {\mathbb R}^{n_x}$, $$\begin{aligned} & \frac{f_i(t,x,\phi^\varepsilon(t,x))-f_i(t,x,\phi(t,x))}{\varepsilon}-(\partial_{a_k} f_i)\big(t,x, \phi(t,x)\big)^\top (\phi'_k-\phi_k)(t,x) \\ &= \varepsilon\int_0^1 (\phi'_k-\phi_k)(t,x)^\top(\partial_{a_ka_k} f_i)(t,x,\phi^{r\varepsilon}(t,x)) (\phi'_k-\phi_k)(t,x) {\mathrm{d}}r,\end{aligned}$$ which along with the Hölder continuity of $\partial_{a_ka_k} f_i$, $\phi_k$ and $\phi'_k$ shows that there exists $C\ge 0$ such that $\|\frac{f_i(\cdot,\phi^\varepsilon(\cdot))-f_i(\cdot,\phi(\cdot))}{\varepsilon}-(\partial_{a_k} f_i)\big(\cdot, \phi(\cdot)\big)^\top (\phi'_k-\phi_k)(\cdot)\|_{\eta/2,\eta}\le C\varepsilon$ for all $\varepsilon\in (0,1]$. 
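As a numerical aside (not part of the proof), the $O(\varepsilon)$ Taylor-remainder rate used above is easy to observe. The sketch below uses a hypothetical smooth function $f$ standing in for the action dependence $a \mapsto f_i(t,x,a)$ and checks that the first-order difference quotient deviates from the derivative term at rate $O(\varepsilon)$, i.e. the deviation roughly halves when $\varepsilon$ is halved.

```python
import numpy as np

# Hypothetical smooth stand-in for a -> f_i(t, x, a); only smoothness
# (bounded second derivatives) matters for the rate checked here.
def f(a):
    return np.sin(a[0]) + a[0] * a[1] + np.cos(a[1]) ** 2

def grad_f(a):
    return np.array([np.cos(a[0]) + a[1],
                     a[0] - 2.0 * np.cos(a[1]) * np.sin(a[1])])

a = np.array([0.3, -0.7])   # base action, playing the role of phi(t, x)
h = np.array([1.0, 0.5])    # perturbation direction, like (phi'_k - phi_k)(t, x)

def remainder(eps):
    # |(f(a + eps*h) - f(a)) / eps - grad f(a) . h|, the first-order Taylor remainder
    return abs((f(a + eps * h) - f(a)) / eps - grad_f(a) @ h)

r1, r2 = remainder(1e-2), remainder(5e-3)
print(r1 / r2)  # ≈ 2: halving eps halves the remainder, an O(eps) bound
```

The same first-order rate is what the Hölder continuity of $\partial_{a_ka_k} f_i$ upgrades to a bound in the $\|\cdot\|_{\eta/2,\eta}$-norm in the argument above.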
Similarly, by using $V, \frac{\delta V^\phi_i}{\delta \phi_k}(\cdot; \phi'_k-\phi_k)\in C^{1+ {\eta}/{2},2+\eta}([0,T]\times {\mathbb R}^{n_x};{\mathbb R})$, one can prove that the $\|\cdot\|_{\eta/2,\eta}$-norm of the remaining terms in [\[eq:F_eps\]](#eq:F_eps){reference-type="eqref" reference="eq:F_eps"} converges to zero as $\varepsilon\rightarrow 0$. As $\phi^\varepsilon$ is a convex combination of $\phi$ and $(\phi'_k,\phi_{-k})$ (see [\[eq:policy_pde\]](#eq:policy_pde){reference-type="eqref" reference="eq:policy_pde"}), $\sup_{\varepsilon\in [0,1]}\|\phi^\varepsilon\|_{\gamma/2,\gamma}<\infty$. Hence applying the Schauder a-priori estimate [@ladyzhenskaia1988linear Theorem 5.1, p. 320] to [\[eq:pde_U\_eps\]](#eq:pde_U_eps){reference-type="eqref" reference="eq:pde_U_eps"} implies that there exists $C\ge 0$ such that for all $\varepsilon\in (0,1]$, $\|U^\varepsilon\|_{1+\eta/2,2+\eta}\le C\|F^\varepsilon\|_{\eta/2,\eta}\le C\varepsilon$. This proves [\[eq:derivative_1st_holder\]](#eq:derivative_1st_holder){reference-type="eqref" reference="eq:derivative_1st_holder"}, which implies that $\frac{{\mathrm{d}}}{{\mathrm{d}}\varepsilon} { V^{\phi^\varepsilon}_i} (t,x ) \big\vert_{\varepsilon=0} = \frac{\delta V^{\phi}_i}{\delta \phi_k }(t,x; \phi'_k-\phi_k )$. ◻ *Proof of Theorem [Theorem 10](#thm:differential_MPG_pde){reference-type="ref" reference="thm:differential_MPG_pde"} Item [\[item:pde_second\]](#item:pde_second){reference-type="ref" reference="item:pde_second"}.* Fix $i,k,\ell \in I_N$, $\phi\in \pi^{(N)}$, $\phi'_k\in \mathop{\mathrm{span}}(\pi_k)$ and $\phi''_\ell\in \mathop{\mathrm{span}}(\pi_\ell)$. Let $\eta=\beta\gamma$. 
Recall that $V^{\phi}_i(\cdot)$, $\frac{\delta V^\phi_i}{\delta \phi_k}(\cdot; \phi'_k)$ and $\frac{\delta V^\phi_i}{\delta \phi_\ell}(\cdot; \phi''_\ell)$ are in $C^{1+ {\eta}/{2},2+\eta}([0,T]\times {\mathbb R}^{n_x}; {\mathbb R})$, as shown in the proof of Theorem [Theorem 10](#thm:differential_MPG_pde){reference-type="ref" reference="thm:differential_MPG_pde"} Item [\[item:pde_first\]](#item:pde_first){reference-type="ref" reference="item:pde_first"}. Hence the well-posedness of [\[eq:pde_value_2nd\]](#eq:pde_value_2nd){reference-type="eqref" reference="eq:pde_value_2nd"} in $C^{1+ {\eta}/{2},2+\eta}([0,T]\times {\mathbb R}^{n_x}; {\mathbb R})$ follows from a similar argument as that for [\[eq:pde_value_1st\]](#eq:pde_value_1st){reference-type="eqref" reference="eq:pde_value_1st"}, which subsequently implies that the map $\frac{\delta^2 V_i}{\delta \phi_k\delta \phi_\ell}$ in [\[eq:value_2nd_pde\]](#eq:value_2nd_pde){reference-type="eqref" reference="eq:value_2nd_pde"} is well-defined. Moreover, by [\[eq:pde_value_1st\]](#eq:pde_value_1st){reference-type="eqref" reference="eq:pde_value_1st"}, for all $k\in I_N$, $\phi'_k\mapsto \big(\partial_x \frac{\delta V^\phi_i}{\delta \phi_k}\big)(t,x; \phi'_k)$ and $\phi'_k\mapsto \big(\partial_{xx} \frac{\delta V^\phi_i}{\delta \phi_k}\big)(t,x; \phi'_k)$ are linear. This shows that [\[eq:pde_value_2nd\]](#eq:pde_value_2nd){reference-type="eqref" reference="eq:pde_value_2nd"} is bilinear in terms of $\phi'_k$ and $\phi''_\ell$, and hence the bilinearity of $(\phi'_k, \phi''_\ell)\mapsto \frac{\delta^2 V^\phi_i}{\delta \phi_k\phi_\ell}(t,x; \phi'_k,\phi''_\ell)$. Now fix $\phi'_k\in \mathop{\mathrm{span}}(\pi_k)$ and $\phi''_\ell\in \pi_\ell$. For all $\varepsilon\in [0,1]$, let $\phi^\varepsilon=(\phi_\ell+\varepsilon(\phi''_\ell-\phi_\ell),\phi_{-\ell})$. 
We aim to prove that $$\label{eq:derivative_2nd_sup} \lim_{\varepsilon\downarrow 0 }\bigg\| \frac{ 1}{\varepsilon}\bigg(\frac{\delta V^{\phi^\varepsilon}_i}{\delta \phi_k}(\cdot; \phi'_k ) - \frac{\delta V^{\phi}_i}{\delta \phi_k}(\cdot; \phi'_k )\bigg)- \frac{\delta^2 V^{\phi}_i}{\delta \phi_k\delta \phi_\ell}(\cdot; \phi'_k,\phi''_\ell-\phi_\ell ) \bigg\|_{0}=0.$$ By [\[eq:pde_value_1st\]](#eq:pde_value_1st){reference-type="eqref" reference="eq:pde_value_1st"} and [\[eq:pde_value_2nd\]](#eq:pde_value_2nd){reference-type="eqref" reference="eq:pde_value_2nd"}, for all $\varepsilon\in (0,1]$, $$W^\varepsilon(\cdot)=\frac{ 1}{\varepsilon}\left(\frac{\delta V^{\phi^\varepsilon}_i}{\delta \phi_k}(\cdot; \phi'_k ) - \frac{\delta V^{\phi}_i}{\delta \phi_k}(\cdot; \phi'_k )\right)- \frac{\delta^2 V^{\phi}_i}{\delta \phi_k\delta \phi_\ell}(\cdot; \phi'_k,\phi''_\ell-\phi_\ell )$$ satisfies for all $(t,x)\in [0,T]\times {\mathbb R}^{n_x}$, $$\label{eq:PDE_W_eps} \partial_t W^{\varepsilon} (t,x) +\mathcal{L}^{\phi^\varepsilon} W^{\varepsilon} (t,x) + G^\varepsilon(t,x )=0; \ \ \ \ W^{\varepsilon} (T,x)= 0,$$ where $G^\varepsilon(t,x )= G^\varepsilon_1(t,x )+G^\varepsilon_2(t,x )+G^\varepsilon_3(t,x )$ satisfies for all $\bar{x}= (t,x)\in [0,T]\times{\mathbb R}^{n_x}$, $$\begin{aligned} G^\varepsilon_{1}(\bar{x}) &= \frac{1}{\varepsilon} (\mathcal{L}^{\phi^\varepsilon}-\mathcal{L}^{\phi}) \frac{\delta V^{\phi}_i}{\delta \phi_k}(\bar{x}; \phi'_k ) \label{eq:G1_eps} \\ &\quad - (\partial_{a_\ell} L)\left(\bar{x}, \left(\partial_x \frac{\delta V^\phi_i}{\delta \phi_k}\right)(\bar{x}; \phi'_k) , \left(\partial_{xx} \frac{\delta V^\phi_i}{\delta \phi_k}\right)(\bar{x}; \phi'_k) ,\phi(\bar{x})\right) ^\top (\phi''_\ell-\phi_\ell)(\bar{x}), \nonumber\end{aligned}$$ $$\begin{aligned} G^\varepsilon_2(\bar{x} ) &= \frac{1}{\varepsilon} \bigg[ (\partial_{a_k} H_i)\big(\bar{x},(\partial_x V^{\phi^\varepsilon}_i)(\bar{x}),(\partial_{xx} 
V^{\phi^\varepsilon}_i)(\bar{x}),{\phi^\varepsilon}(\bar{x})\big)^\top \phi'_k(\bar{x}) \label{eq:G2_eps} \\ &\quad -(\partial_{a_k} H_i)\big(\bar{x},(\partial_x V^\phi_i)(\bar{x}),(\partial_{xx} V^\phi_i)(\bar{x}),\phi(\bar{x})\big)^\top \phi'_k(\bar{x}) \bigg] \nonumber \\ &\quad - \bigg[ (\partial_{a_k} L)\left(\bar{x}, \left(\partial_x \frac{\delta V^\phi_i}{\delta \phi_\ell}\right)(\bar{x}; \phi''_\ell-\phi_\ell) , \left(\partial_{xx} \frac{\delta V^\phi_i}{\delta \phi_\ell}\right)(\bar{x}; \phi''_\ell-\phi_\ell), \phi(\bar{x})\right) ^\top \phi'_k(\bar{x}) \nonumber \\ & \quad + (\phi''_\ell-\phi_\ell)(\bar{x})^\top (\partial_{a_ka_\ell} H_i)\big(\bar{x},(\partial_x V^{\phi}_i)(\bar{x}),(\partial_{xx} V^{\phi}_i)(\bar{x}),\phi(\bar{x})\big) \phi'_k(\bar{x}) \bigg], \nonumber \\ G^\varepsilon_{3}(\bar{x} ) &= (\mathcal{L}^{\phi^\varepsilon}-\mathcal{L}^{\phi})\frac{\delta^2 V^{\phi}_i}{\delta \phi_k\delta \phi_\ell}(\bar{x}; \phi'_k,\phi''_\ell-\phi_\ell ). \label{eq:G_3_eps}\end{aligned}$$ We claim that $\lim_{\varepsilon\downarrow 0}\|G^\varepsilon\|_0=0$. The convergence of $G^\varepsilon_1$ and $G^\varepsilon_3$ to zero follows directly from the Hölder continuity of $B$ and $\Sigma$. 
For $G^\varepsilon_2$, using $H_i(t,x,y,z,a)-H_i(t,x,y',z',a)=L(t,x,y,z,a)-L(t,x,y',z',a)$ (see [\[eq:Hamiltonian_H\]](#eq:Hamiltonian_H){reference-type="eqref" reference="eq:Hamiltonian_H"} and [\[eq:L\]](#eq:L){reference-type="eqref" reference="eq:L"}), $$\begin{aligned} \|G^\varepsilon_2\|_0 &\le \bigg\| \frac{1}{\varepsilon} \bigg[ (\partial_{a_k} L)\big(\cdot, \partial_x V^{\phi^\varepsilon}_i, \partial_{xx} V^{\phi^\varepsilon}_i, {\phi^\varepsilon} \big)^\top \phi'_k -(\partial_{a_k} L)\big(\cdot, \partial_x V^{\phi}_i, \partial_{xx} V^{\phi}_i, {\phi^\varepsilon} \big)^\top \phi'_k \bigg] \\ &\quad - (\partial_{a_k} L)\left(\cdot, \partial_x \frac{\delta V^\phi_i}{\delta \phi_\ell} , \partial_{xx} \frac{\delta V^\phi_i}{\delta \phi_\ell} , \phi^\varepsilon\right) ^\top \phi'_k \bigg\|_0 \\ &\quad + \bigg\| \frac{1}{\varepsilon} \left( (\partial_{a_k} H_i)\big(\cdot, \partial_x V^{\phi}_i, \partial_{xx} V^{\phi}_i, {\phi^\varepsilon} \big) -(\partial_{a_k} H_i)\big(\cdot ,(\partial_x V^\phi_i),(\partial_{xx} V^\phi_i),\phi\big) \right)^\top \phi'_k \\ & \quad - (\phi''_\ell-\phi_\ell)^\top (\partial_{a_ka_\ell} H_i)\big(\cdot,(\partial_x V^{\phi}_i),(\partial_{xx} V^{\phi}_i),\phi\big) \phi'_k \bigg\|_0 \\ &\quad +\left\| (\partial_{a_k} L)\left(\cdot, \partial_x \frac{\delta V^\phi_i}{\delta \phi_\ell} , \partial_{xx} \frac{\delta V^\phi_i}{\delta \phi_\ell} , \phi^\varepsilon\right) - (\partial_{a_k} L)\left(\cdot, \partial_x \frac{\delta V^\phi_i}{\delta \phi_\ell} , \partial_{xx} \frac{\delta V^\phi_i}{\delta \phi_\ell}, \phi\right) \right\|_0 \| \phi'_k\|_0 \\ &\coloneqq G^\varepsilon_{2,1} +G^\varepsilon_{2,2} + G^\varepsilon_{2,3},\end{aligned}$$ where we wrote $\frac{\delta V^\phi_i}{\delta \phi_\ell}$ for $\frac{\delta V^\phi_i}{\delta \phi_\ell}(\cdot; \phi''_\ell-\phi_\ell)$ to simplify the notation. 
By [\[eq:derivative_1st_holder\]](#eq:derivative_1st_holder){reference-type="eqref" reference="eq:derivative_1st_holder"}, $\lim_{\varepsilon\downarrow 0 }\Big\|\frac{V^{\phi^\varepsilon}_i(\cdot) -V^{\phi}_i(\cdot)}{\varepsilon}-\frac{\delta V^\phi_i}{\delta \phi_\ell}(\cdot; \phi''_\ell-\phi_\ell)\Big\|_{1+\eta/2,2+\eta}=0$, which along with the linearity of $(y,z)\mapsto L(t,x,y,z,a)$ and (H.[H. 2](#assum:Holder_regularity){reference-type="ref" reference="assum:Holder_regularity"}) shows that $\lim_{\varepsilon\downarrow 0 } \|G^\varepsilon_{2,1}\|_0=0$. For the term $G^\varepsilon_{2,2}$, for all $\bar{x} \in [0,T]\times{\mathbb R}^{n_x}$, writing $\partial_x V^{\phi}_i=(\partial_x V^{\phi}_i)(\bar{x})$ and $\partial_{xx} V^{\phi}_i=(\partial_{xx} V^{\phi}_i)(\bar{x})$ and applying the fundamental theorem of calculus yield $$\begin{aligned} & \frac{1}{\varepsilon} \left( (\partial_{a_k} H_i)\big(\bar{x}, \partial_x V^{\phi}_i , \partial_{xx} V^{\phi}_i, {\phi^\varepsilon}(\bar{x}) \big) -(\partial_{a_k} H_i)\big(\bar{x} , \partial_x V^\phi_i, \partial_{xx} V^\phi_i,\phi(\bar{x})\big) \right) \\ & \quad - (\partial_{a_ka_\ell} H_i)\big(\bar{x}, \partial_x V^{\phi}_i , \partial_{xx} V^{\phi}_i,\phi(\bar{x})\big)^\top (\phi''_\ell-\phi_\ell)(\bar{x}) \\ &= \int_0^1 \left( (\partial_{a_ka_\ell} H_i)\big(\bar{x},\partial_x V^{\phi}_i,\partial_{xx} V^{\phi}_i,\phi^{r\varepsilon}(\bar{x})\big) - (\partial_{a_ka_\ell} H_i)\big(\bar{x},\partial_x V^{\phi}_i ,\partial_{xx} V^{\phi}_i,\phi(\bar{x})\big)\right)^\top (\phi''_\ell-\phi_\ell)(\bar{x}) {\mathrm{d}}r \\ & \le C\varepsilon^{\beta } \|\phi''_\ell-\phi_\ell\|_0^2,\end{aligned}$$ where the last inequality used the $\beta$-Hölder continuity of $a\mapsto (\partial_{a_ka_\ell} H_i)\big(\bar{x},\partial_x V^{\phi}_i,\partial_{xx} V^{\phi}_i,a\big)$. This proves $\lim_{\varepsilon\downarrow 0 } \|G^\varepsilon_{2,2}\|_0=0$. 
The Hölder continuity of coefficients gives $\lim_{\varepsilon\downarrow 0 } \|G^\varepsilon_{2,3}\|_0=0$ and $\lim_{\varepsilon\downarrow 0 } \|G^\varepsilon_{3}\|_0=0$. Applying the maximum principle [@ladyzhenskaia1988linear Theorem 2.5, p. 18] to [\[eq:PDE_W\_eps\]](#eq:PDE_W_eps){reference-type="eqref" reference="eq:PDE_W_eps"} yields that there exists $C\ge 0$ such that $\|W^\varepsilon\|_0\le C\|G^\varepsilon\|_0$ for all $\varepsilon\in (0,1]$. This along with the convergence of $G^\varepsilon$ implies $\lim_{\varepsilon\downarrow 0 } \|W^\varepsilon\|_0=0$ and proves [\[eq:derivative_2nd_sup\]](#eq:derivative_2nd_sup){reference-type="eqref" reference="eq:derivative_2nd_sup"}. Consequently, for all $(t,x)\in [0,T]\times {\mathbb R}^{n_x}$, $\frac{{\mathrm{d}}}{{\mathrm{d}}\varepsilon} \frac{\delta V^{\phi^\varepsilon}_i}{\delta \phi_k}(t,x; \phi'_k ) \big\vert_{\varepsilon=0} = \frac{\delta^2 V^{\phi}_i}{\delta \phi_k\delta \phi_\ell}(t,x; \phi'_k,\phi''_\ell-\phi_\ell )$. Interchanging the roles of $k$ and $\ell$ in the above analysis proves the desired statement. 
◻ *Proof of Theorem [Theorem 10](#thm:differential_MPG_pde){reference-type="ref" reference="thm:differential_MPG_pde"} Item [\[item:pde_potential\]](#item:pde_potential){reference-type="ref" reference="item:pde_potential"}.* By Theorem [Theorem 10](#thm:differential_MPG_pde){reference-type="ref" reference="thm:differential_MPG_pde"} Items [\[item:pde_first\]](#item:pde_first){reference-type="ref" reference="item:pde_first"} and [\[item:pde_second\]](#item:pde_second){reference-type="ref" reference="item:pde_second"}, it remains to verify Theorem [Theorem 8](#thm:symmetry_value_sufficient){reference-type="ref" reference="thm:symmetry_value_sufficient"} Conditions [\[item:integrable_p-Vi\]](#item:integrable_p-Vi){reference-type="ref" reference="item:integrable_p-Vi"} and [\[item:joint_continuity\]](#item:joint_continuity){reference-type="ref" reference="item:joint_continuity"} for fixed $i,j\in I_N$, $z, \phi \in \pi^{(N)}$, $\phi'_i,\tilde{\phi}'_i\in \pi_i$ and ${\phi}''_j \in \pi_j$. To verify Condition [\[item:integrable_p-Vi\]](#item:integrable_p-Vi){reference-type="ref" reference="item:integrable_p-Vi"}, for all $r, \varepsilon\in [0,1]$, let $\phi^{r,\varepsilon}= z+r( (\phi_i+\varepsilon(\tilde{\phi}'_{i}-\phi_i),\phi_{-i})-z)$. As $\sup_{r, \varepsilon\in [0,1]}\|\phi^{r,\varepsilon}\|_{\gamma/2,\gamma}<\infty$, by applying the Schauder a-priori estimate [@ladyzhenskaia1988linear Theorem 5.1, p. 320] to [\[eq:pde_value\]](#eq:pde_value){reference-type="eqref" reference="eq:pde_value"}, [\[eq:pde_value_1st\]](#eq:pde_value_1st){reference-type="eqref" reference="eq:pde_value_1st"} and [\[eq:pde_value_2nd\]](#eq:pde_value_2nd){reference-type="eqref" reference="eq:pde_value_2nd"}, there exists $\eta\in (0,1]$ such that $\sup_{r, \varepsilon\in [0,1]}\Big\| \frac{\delta^2 V^{\phi^{r,\varepsilon}}_i}{\delta \phi_i\delta \phi_j} (\cdot; \phi'_i,\phi''_j) \Big \|_{\gamma/2,\gamma}<\infty$. 
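The sup-norm a-priori bounds invoked throughout these proofs (via the maximum principle, $\|W^\varepsilon\|_0\le C\|G^\varepsilon\|_0$) have a transparent discrete analogue. The sketch below is an illustration under simplified assumptions (one space dimension, constant coefficients, periodic boundary, a hypothetical source term): a terminal-value heat equation is marched backwards with a monotone explicit scheme, for which the discrete maximum principle yields $\sup|u|\le T\sup|F|$.

```python
import numpy as np

# Terminal-value problem  du/dt + d^2u/dx^2 + F(x) = 0,  u(T, x) = 0,
# on [0, 2*pi) with periodic boundary, marched backwards from t = T to t = 0.
J, T, steps = 128, 1.0, 20000
x = np.linspace(0.0, 2.0 * np.pi, J, endpoint=False)
dx, dt = x[1] - x[0], T / steps
assert dt <= dx**2 / 2.0         # monotonicity (CFL) condition for the scheme

F = np.sin(3.0 * x) * np.cos(x)  # hypothetical source, standing in for G^eps
u = np.zeros_like(x)             # terminal condition u(T, .) = 0
for _ in range(steps):
    lap = (np.roll(u, 1) - 2.0 * u + np.roll(u, -1)) / dx**2
    u = u + dt * (lap + F)       # one backward step of size dt

# discrete maximum principle: each step adds at most dt * sup|F| to sup|u|
print(np.abs(u).max() <= T * np.abs(F).max())  # prints True
```

The monotonicity condition is essential here, exactly as parabolicity is for the continuous maximum principle.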
To verify Condition [\[item:joint_continuity\]](#item:joint_continuity){reference-type="ref" reference="item:joint_continuity"}, for all $\varepsilon=(\varepsilon_i)_{i=1}^N\in [0,1]^N$, let $\phi^\varepsilon= (z_i+\varepsilon_i({\phi}_{i}-z_i))_{i\in I_N}$ and observe that $U^\varepsilon= V^{\phi^{\varepsilon}}-V^{\phi^{0}}$ satisfies for all $(t,x)\in [0,T]\times {\mathbb R}^{n_x}$, $$\left\{ \begin{aligned} &\partial_t U^\varepsilon(t,x) + \mathcal{L}^{\phi^\varepsilon} U^\varepsilon(t,x)+( \mathcal{L}^{\phi^\varepsilon}- \mathcal{L}^{z}) V^{\phi^{0}}(t,x) + f_i(t,x,\phi^\varepsilon(t,x)) - f_i(t,x,{\phi^0}(t,x))=0, \\ & U^\varepsilon(T,x)= 0. \end{aligned} \right.$$ As $\sup_{ \varepsilon\in [0,1]^N}\|\phi^{\varepsilon}\|_{\gamma/2,\gamma}<\infty$, by (H.[H. 2](#assum:Holder_regularity){reference-type="ref" reference="assum:Holder_regularity"}), the Schauder a-priori estimate and the fundamental theorem of calculus, there exists $C\ge 0$ such that for all $\varepsilon\in [0,1]^N$, it holds with $\eta=\beta \gamma$ that $$\begin{aligned} \|U^\varepsilon\|_{1+\eta/2,2+ \eta} &\le C\|( \mathcal{L}^{\phi^\varepsilon}- \mathcal{L}^{\phi^{0}}) V^{\phi^{0}}+ f_i(\cdot,\phi^\varepsilon(\cdot)) - f_i(\cdot,{\phi^0}(\cdot))\|_{ \eta/2, \eta} \\ &\le C\left\|\int_0^1 \sum_{j=1}^N (\partial_{a_j} H_i)\big(\cdot, (\partial_x V^{\phi^{0}}_i)(\cdot), (\partial_{xx} V^{\phi^{0}}_i)(\cdot),\phi^{r\varepsilon}(\cdot)\big)^\top \varepsilon_j (\phi_j-z_j)(\cdot){\mathrm{d}}r \right\|_{ \eta/2, \eta},\end{aligned}$$ where $\phi^{r\varepsilon}= (z_i+r\varepsilon_i({\phi}_{i}-z_i))_{i\in I_N}$. This along with the Hölder continuity of $\partial_{a} H_i$ implies $\lim_{\varepsilon\downarrow 0}\|V^{\phi^{\varepsilon}}-V^{\phi^{0}}\|_{1+ \beta \gamma/2,2+ \beta \gamma}=0$. 
Applying similar arguments to [\[eq:pde_value_1st\]](#eq:pde_value_1st){reference-type="eqref" reference="eq:pde_value_1st"} yields $\lim_{\varepsilon\downarrow 0} \frac{\delta V^{\phi^\varepsilon}_i}{\delta \phi_i}(\cdot; \phi'_i) = \frac{\delta V^{\phi^{0}}_i}{\delta \phi_i}(\cdot; \phi'_i)$ and $\lim_{\varepsilon\downarrow 0}\frac{\delta V^{\phi^\varepsilon}_i}{\delta \phi_j}(\cdot; \phi''_j) = \frac{\delta V^{\phi^{0}}_i}{\delta \phi_j}(\cdot; \phi''_j)$ with respect to the $\|\cdot \|_{1+ \beta \gamma/2,2+ \beta \gamma}$-norm. Finally, by [\[eq:pde_value_2nd\]](#eq:pde_value_2nd){reference-type="eqref" reference="eq:pde_value_2nd"} and the maximum principle, $\lim_{\varepsilon\downarrow 0} \big\|\frac{\delta^2 V^{\phi^\varepsilon}_i}{\delta \phi_i\delta \phi_j}(\cdot; \phi'_i,\phi''_j)- \frac{\delta^2 V^{\phi^0}_i}{\delta \phi_i\delta \phi_j}(\cdot; \phi'_i,\phi''_j)\big \|_0=0$. This verifies Condition [\[item:joint_continuity\]](#item:joint_continuity){reference-type="ref" reference="item:joint_continuity"}. ◻ ## Proof of Theorem [Theorem 11](#thm:lq_characterisation){reference-type="ref" reference="thm:lq_characterisation"} {#sec:proof_example} *Proof of Theorem [Theorem 11](#thm:lq_characterisation){reference-type="ref" reference="thm:lq_characterisation"}.* Fix $i,h,\ell \in I_N$, $\phi\in \pi^{(N)}$, $\phi'_h\in \pi_h$, $\phi''_\ell\in \pi_\ell$ and $(t,x)\in [0,T]\times{\mathbb R}^{n_x}$. Denote by $X$ the state process $X^{t,x,\phi}$. We first prove the equivalence of first-order linear derivatives. 
Applying Itô's formula to $s\mapsto \frac{1}{2} X_s ^\top \Theta^h_i(s) X_s+ \theta^h_i(s)$ gives that $$\begin{aligned} &{\mathbb{E}}\left[\frac{1}{2} X_T ^\top \Theta^h_i(T) X_T+ \theta^h_i(T)\right] -\left(\frac{1}{2}x^\top \Theta^h_i(t) x+ \theta^h_i(t)\right) \\ &\quad = {\mathbb{E}}\left[\int_t^T\left( \frac{1}{2} X_s^\top \dot\Theta^h_i X_s + \dot\theta^h_i + \frac{1}{2} \langle(\Theta^h_i+ (\Theta^h_i)^\top) X_s, (A+BK)X_s\rangle+ \frac{1}{2}\textnormal{tr}\left(\sigma\sigma^\top \Theta^{h}_i \right) \right){\mathrm{d}}s \right],\end{aligned}$$ which along with [\[eq:quadratic_1st\]](#eq:quadratic_1st){reference-type="eqref" reference="eq:quadratic_1st"} and [\[eq:ode_theta\]](#eq:ode_theta){reference-type="eqref" reference="eq:ode_theta"} shows that $$\begin{aligned} &{\frac{\delta \overline{ V}^\phi_i}{\delta \phi_h}}(t,x; \phi'_h) \\ & \quad= \frac{1}{2} {\mathbb{E}}\left[\int_t^T X_s ^\top \left( \Psi_i B_h K'_h+(B_h K'_h)^\top \Psi_i +K^\top (R_i)_{ h}K'_h + (K'_h)^\top \big((R_i)_{ h}\big)^\top K \right)X_s{\mathrm{d}}s \right] \\ & \quad= {\mathbb{E}}\left[\int_t^T \left( X_s^\top \Psi_i B_h K'_hX_s +(KX_s)^\top (R_i)_{ h}K'_h X_s\right){\mathrm{d}}s \right],\end{aligned}$$ where we used the symmetry of $\Psi_i(\cdot)$. Then by [\[eq:value_derivative_1st_lq\]](#eq:value_derivative_1st_lq){reference-type="eqref" reference="eq:value_derivative_1st_lq"}, $$\begin{aligned} \label{eq:difference_1st_derivative_term1} \begin{split} & \frac{\delta V^{t,x}_i}{\delta \phi_h} (\phi;\phi'_h) - {\frac{\delta \overline{ V}^\phi_i}{\delta \phi_h}}(t,x; \phi'_h) \\ &\quad= {\mathbb{E}}\bigg[\int_t^T \bigg( X_s^\top Q_i Y^{h}_s +(K X_s)^\top R_i K Y^{h}_s - X_s^\top \Psi_i B_h K'_hX_s \bigg) {\mathrm{d}}s + X_T^\top G_i Y^{h}_T\bigg]. 
\end{split}\end{aligned}$$ Applying Itô's formula to $s\mapsto X_s^\top \Psi_i (s) Y^h_s$ and using [\[eq:X_sensitivity_first_lq\]](#eq:X_sensitivity_first_lq){reference-type="eqref" reference="eq:X_sensitivity_first_lq"} and [\[eq:ode_psi\]](#eq:ode_psi){reference-type="eqref" reference="eq:ode_psi"} imply that $$\begin{aligned} \begin{split} &{\mathbb{E}}[X_T^\top G_i Y^{h}_T] \\ &\quad= {\mathbb{E}}\bigg[\int_t^T \bigg( X_s^\top \dot \Psi_i Y^{h}_s +\langle\Psi_i X_s, (A+BK) Y^h_s+B_hK'_h X_s \rangle + \langle\Psi_i Y^h_s, (A+BK) X_s \rangle \bigg) {\mathrm{d}}s\bigg] \\ &\quad= - {\mathbb{E}}\bigg[\int_t^T \bigg( X_s^\top (Q_i+K^\top R_i K) Y^{h}_s - X_s ^\top \Psi_i B_hK'_h X_s \bigg) {\mathrm{d}}s\bigg]. \end{split}\end{aligned}$$ This along with [\[eq:difference_1st_derivative_term1\]](#eq:difference_1st_derivative_term1){reference-type="eqref" reference="eq:difference_1st_derivative_term1"} proves $\frac{\delta V^{t,x}_i}{\delta \phi_h} (\phi;\phi'_h) - {\frac{\delta \overline{ V}^\phi_i}{\delta \phi_h}}(t,x; \phi'_h)=0$. We then prove the equivalence of second-order linear derivatives. By [\[eq:value_derivative_2nd_lq\]](#eq:value_derivative_2nd_lq){reference-type="eqref" reference="eq:value_derivative_2nd_lq"} and $R_iE_h=(R_i)_h$, $$\begin{aligned} \label{eq:prob_term1} \begin{split} \frac{\delta^2 V^{t,x}_{i}}{\delta \phi_h \delta \phi_\ell} (\phi;\phi'_h, \phi''_\ell) & = {\mathbb{E}}\bigg[\int_t^T \bigg( (Y^\ell_s)^\top Q_i Y^h_s + (K Y^\ell_s)^\top R_i K Y^h_s \\ &\quad +(K Y^h_s)^\top (R_i)_\ell K''_\ell X_s +(K Y^\ell_s)^\top (R_i)_h K'_h X_s +(K''_\ell X_s)^\top (R_i)_{\ell h} K'_h X_s \\ &\quad+ X^\top_s Q_i Z_s +(K X_s)^\top \Big( R_i K Z_s +(R_i )_\ell K''_\ell Y^h_s+(R_i )_hK'_h Y^\ell_s\Big) \bigg) {\mathrm{d}}s\bigg] \\ &\quad +{\mathbb{E}}\left[ (Y^\ell_T )^\top G_i Y^h_T +X^\top_T G_i Z_T\right]. 
\end{split}\end{aligned}$$ Applying Itô's formula to $s\mapsto (Y^\ell_s )^\top \Psi_i(s) Y^h_s +X^\top_s \Psi_i(s) Z_s$ and using [\[eq:X_sensitivity_first_lq\]](#eq:X_sensitivity_first_lq){reference-type="eqref" reference="eq:X_sensitivity_first_lq"}, [\[eq:X_sensitivity_second_lq\]](#eq:X_sensitivity_second_lq){reference-type="eqref" reference="eq:X_sensitivity_second_lq"} and [\[eq:ode_psi\]](#eq:ode_psi){reference-type="eqref" reference="eq:ode_psi"}, $$\begin{aligned} &{\mathbb{E}}\left[ (Y^\ell_T )^\top G_i Y^h_T +X^\top_T G_i Z_T\right] \\ &\quad= {\mathbb{E}}\left[ \int_t^T \left( \langle\Psi_i Y^\ell_s , {\mathrm{d}}Y^h_s\rangle+ \langle{\mathrm{d}}Y^\ell_s, \Psi_i Y^h_s\rangle +\langle\Psi_i X_s , {\mathrm{d}}Z_s\rangle+ \langle{\mathrm{d}}X_s , \Psi_i Z_s\rangle\right) \right] \\ &\qquad+ {\mathbb{E}}\left[ \int_t^T \left((Y^\ell_s)^\top \dot \Psi_i Y^h_s +X_s^\top \dot \Psi_i Z_s\right) {\mathrm{d}}s\right] \\ &\quad= {\mathbb{E}}\left[ \int_t^T \left((Y^\ell_s)^\top \Psi_i B_h K'_h X_s +(Y^h_s)^\top \Psi_i B_\ell K''_\ell X_s +X_s^\top \Psi_i (B_\ell K''_\ell Y^h_s +B_h K'_h Y^\ell_s) \right) {\mathrm{d}}s\right] \\ &\qquad- {\mathbb{E}}\left[ \int_t^T \left((Y^\ell_s)^\top (Q_i+K^\top R_i K) Y^h_s +X_s^\top (Q_i+K^\top R_i K) Z_s\right) {\mathrm{d}}s\right].\end{aligned}$$ Substituting this into [\[eq:prob_term1\]](#eq:prob_term1){reference-type="eqref" reference="eq:prob_term1"} implies that $$\begin{aligned} \label{eq:prob_term2} \begin{split} &\frac{\delta^2 V^{t,x}_{i}}{\delta \phi_h \delta \phi_\ell} (\phi;\phi'_h, \phi''_\ell) \\ & \quad= {\mathbb{E}}\bigg[\int_t^T \bigg( (Y^\ell_s)^\top \left( \Psi_i B_h K'_h +(B_h K'_h )^\top \Psi_i +K^\top (R_i)_h K'_h + (K'_h)^\top((R_i )_h)^\top K \right) X_s \\ &\qquad +(Y^h_s)^\top \left( \Psi_i B_\ell K''_\ell + (B_\ell K''_\ell )^\top \Psi_i +K^\top (R_i)_\ell K''_\ell + (K''_\ell)^\top((R_i )_\ell)^\top K \right) X_s \\ &\qquad +(K''_\ell X_s)^\top (R_i)_{\ell h} K'_h X_s \bigg) 
{\mathrm{d}}s\bigg]. \end{split}\end{aligned}$$ On the other hand, applying Itô's formula to $s\mapsto \frac{1}{2} X_s ^\top \Lambda^{h,\ell}_i(s) X_s+ \lambda^{h,\ell}_i (s)$ and using [\[eq:quadratic_2nd\]](#eq:quadratic_2nd){reference-type="eqref" reference="eq:quadratic_2nd"} and [\[eq:ode_lambda\]](#eq:ode_lambda){reference-type="eqref" reference="eq:ode_lambda"} give $$\begin{aligned} \label{eq:analytic_term1} \begin{split} &{\frac{\delta^2 \overline{V}^\phi_i}{\delta \phi_h\delta \phi_\ell}}(t,x; \phi'_h, \phi''_\ell) \\ & \quad= {\mathbb{E}}\left[\int_t^T \left( X_s^\top \Theta^{h}_iB_\ell K''_\ell X_s + X_s^\top \Theta^{\ell}_i B_h K'_h X_s +(K''_\ell X_s)^\top (R_i)_{ \ell h}K'_h X_s\right){\mathrm{d}}s \right]. 
\end{split} \end{aligned}$$ Finally, applying Itô's formula to $s\mapsto (Y^h_s )^\top \Theta^\ell_i(s) X_s + (Y^\ell_s )^\top \Theta^h_i(s) X_s$ and using [\[eq:X_sensitivity_first_lq\]](#eq:X_sensitivity_first_lq){reference-type="eqref" reference="eq:X_sensitivity_first_lq"} yield $$\begin{aligned} 0&= {\mathbb{E}}\bigg[ \int_t^T \bigg( (Y^h_s )^\top \dot \Theta^\ell_i X_s + (Y^\ell_s )^\top \dot \Theta^h_i X_s + \langle\Theta^\ell_i X_s , (A +B K ) Y^{h}_s +B_h K'_h X_s \rangle \\ &\quad+ \langle\Theta^\ell_i Y^h_s , (A +B K ) X_s \rangle+ \langle\Theta^h_i X_s , (A +B K ) Y^{\ell}_s +B_\ell K''_\ell X_s \rangle + \langle\Theta^h_i Y^\ell_s , (A +B K ) X_s \rangle \bigg) {\mathrm{d}}s \bigg] \\ &= {\mathbb{E}}\bigg[ \int_t^T \bigg( (Y^h_s )^\top \left( \dot \Theta^\ell_i +(A +B K )^\top \Theta^\ell_i + \Theta^\ell_i (A+BK) \right) X_s \\ &\quad + (Y^\ell_s )^\top \left( \dot \Theta^h_i +(A +B K )^\top \Theta^h_i + \Theta^h_i (A +B K ) \right) X_s + X^\top _s \Theta^\ell_i B_h K'_h X_s + X^\top_s \Theta^h_i B_\ell K''_\ell X_s \bigg){\mathrm{d}}s \bigg], \end{aligned}$$ which along with [\[eq:ode_theta\]](#eq:ode_theta){reference-type="eqref" reference="eq:ode_theta"}, [\[eq:prob_term2\]](#eq:prob_term2){reference-type="eqref" reference="eq:prob_term2"} and [\[eq:analytic_term1\]](#eq:analytic_term1){reference-type="eqref" reference="eq:analytic_term1"} implies that $\frac{\delta^2 V^{t,x}_{i}}{\delta \phi_h \delta \phi_\ell} (\phi;\phi'_h, \phi''_\ell) = {\frac{\delta^2 \overline{V}^\phi_i}{\delta \phi_h\delta \phi_\ell}}(t,x; \phi'_h, \phi''_\ell)$. Finally, we prove the equivalence of Conditions [\[eq:prob_symmetry\]](#eq:prob_symmetry){reference-type="eqref" reference="eq:prob_symmetry"} and [\[eq:lq_symmetry\]](#eq:lq_symmetry){reference-type="eqref" reference="eq:lq_symmetry"}. 
By [\[eq:equivalence\]](#eq:equivalence){reference-type="eqref" reference="eq:equivalence"}, it is clear that [\[eq:lq_symmetry\]](#eq:lq_symmetry){reference-type="eqref" reference="eq:lq_symmetry"} implies [\[eq:prob_symmetry\]](#eq:prob_symmetry){reference-type="eqref" reference="eq:prob_symmetry"}. On the other hand, if [\[eq:prob_symmetry\]](#eq:prob_symmetry){reference-type="eqref" reference="eq:prob_symmetry"} holds, by [\[eq:zero_order_term\]](#eq:zero_order_term){reference-type="eqref" reference="eq:zero_order_term"} and [\[eq:equivalence\]](#eq:equivalence){reference-type="eqref" reference="eq:equivalence"}, $$x^\top( \Lambda^{i,j}_i (t) - \Lambda^{j,i}_j(t))x+\int_t^T \textnormal{tr}\left(\sigma\sigma^\top (\Lambda^{i,j}_i(s)-\Lambda^{j,i}_j (s) )\right){\mathrm{d}}s =0 \quad\forall(t,x)\in [0,T]\times {\mathbb R}^{n_x},$$ which along with $\mathop{\mathrm{span}}\{xx^\top |x\in{\mathbb R}^{n_x}\}={\mathbb{S}}^{n_x}$ implies that $\Lambda^{i,j}_i (T)- \Lambda ^{j,i}_j(T)=0$ and $\dot{ \Lambda}^{i,j}_i -\dot{ \Lambda}^{j,i}_j+\Lambda^{i,j}_i-\Lambda^{j,i}_j =0$. This shows that [\[eq:lq_symmetry\]](#eq:lq_symmetry){reference-type="eqref" reference="eq:lq_symmetry"} holds. ◻ [^1]: Department of Industrial Engineering and Operations Research, University of California, Berkeley, USA ( `xinguo@berkeley.edu`). [^2]: Department of Mathematics, Imperial College London, London, UK (`yufei.zhang@imperial.ac.uk`).
--- abstract: | We prove the Eigenstate Thermalisation Hypothesis for Wigner matrices uniformly in the entire spectrum, in particular near the spectral edges, with a bound on the fluctuation that is optimal for any observable. This complements earlier works of Cipolloni et al. [@ETHpaper; @A2] and Benigni et al. [@BenigniLopatto2103.12013; @2303.11142] that were restricted either to the bulk of the spectrum or to special observables. As a main ingredient, we prove a new multi-resolvent local law that optimally accounts for the edge scaling. address: - G.C., Princeton Center for Theoretical Science and Department of Mathematics, Princeton University, Princeton, NJ 08544, USA - L.E. and J.H., IST Austria, Am Campus 1, 3400 Klosterneuburg, Austria author: - Giorgio Cipolloni László Erdős$^{*}$ Joscha Henheik$^{*}$ title: Eigenstate Thermalisation at the edge for Wigner Matrices --- # Introduction In the physics literature, the *Eigenstate Thermalisation Hypothesis (ETH)* asserts that each eigenfunction of a sufficiently chaotic quantum system is uniformly distributed in the phase space. This concept was coined by Srednicki [@Srednicki] after similar ideas appeared in the seminal paper of Deutsch [@deutsch1991]. While the original physics setup concerns genuine many-body systems, especially a small system in a heat bath described by standard statistical mechanics, Deutsch also formulated a phenomenological version of ETH for the simplest chaotic quantum system, the Wigner ensemble. 
In this form, ETH asserts that for any deterministic observable (matrix) $A$ and for any normalised eigenvector $\bm{u}$ of a large $N\times N$ Wigner matrix, the quadratic form $\langle \bm{u}, A\bm{u}\rangle$ is very close to its statistical average, which, in the Wigner case, is the normalised trace $\langle A\rangle :=\frac{1}{N} \mathrm{Tr}A$: $$\label{eth} |\langle \bm{u}, A\bm{u}\rangle - \langle A\rangle|\lesssim \frac{\| A\|}{\sqrt{N}}.$$ The $1/\sqrt{N}$ speed of convergence is optimal and it is in agreement with the earlier predictions of Feingold and Peres [@FeinPeres], see also [@EckFisch]. For more physics background and references, see the introduction of [@ETHpaper]. In the mathematics literature the same phenomenon is known as *Quantum Unique Ergodicity (QUE)*. In precise mathematical terms it was formulated by Rudnick and Sarnak [@RudnickSarnak1994] for standard quantisations of ergodic classical dynamical systems and proved only in some special cases [@Lindenstrauss; @Soundararajan; @HoloSound; @BrooksLindenstrauss], often as a purely limiting statement without optimising the speed of convergence. The key point is to control the behaviour of *all* eigenfunctions; a similar result for *most* eigenfunctions (called *Quantum Ergodicity*) is much easier and was discovered earlier by Šnirel'man [@snirelman1974], see also [@ColinDeVerdiere1985; @Zelditch1987]. Motivated by the paper of Deutsch [@deutsch1991] and the novel technical developments in random matrix theory, the ETH for Wigner matrices in the form [\[eth\]](#eth){reference-type="eqref" reference="eth"} has been the object of intense study in recent years. An important question is the precise dependence of the right-hand side error term on $A$. 
The first proof of [\[eth\]](#eth){reference-type="eqref" reference="eth"} given in [@ETHpaper] involved the operator norm $\lVert \mathring{A}\rVert$ of the traceless part $\mathring{A} :=A-\langle A\rangle$ of $A$, but this estimate is far from optimal for low rank observables. For example, if $A= |\bm{q}\rangle\langle \bm{q}|$ is a rank--one projection onto a normalised vector $\bm{q}\in{\mathbb C}^N$, then $\langle \bm{u}, A\bm{u}\rangle = | \langle \bm{u}, \bm{q}\rangle|^2$ which is known to be essentially of order $1/N$ by the *complete delocalisation of eigenvectors*, see [@ESY2009; @EYY2012; @KnowYin; @BEKYY; @BenLopDeloc]. However the result in [@ETHpaper] gives only the suboptimal estimate $| \langle \bm{u}, \bm{q}\rangle|^2 \lesssim 1/\sqrt{N}$ for this special observable. In the Gaussian (GUE/GOE) case, the eigenvectors are uniformly Haar distributed, hence explicit moment calculations for $\langle \bm{u}, A\bm{u}\rangle$ are possible by Weingarten calculus. The result indicates the following optimal form of [\[eth\]](#eth){reference-type="eqref" reference="eth"}: $$\label{eth1} \left| \langle \bm{u}_i, A \bm{u}_j\rangle - \delta_{ij} \langle A \rangle \right| \lesssim \frac{\langle |\mathring{A}|^2 \rangle^{1/2}}{\sqrt{N}}.$$ Note that this estimate involves the (normalised) Hilbert-Schmidt norm $\langle |\mathring{A}|^2 \rangle^{1/2}$ instead of the operator norm[^1], and it can also be extended to different eigenvectors $\bm{u}_i, \bm{u}_j$. In particular, [\[eth1\]](#eth1){reference-type="eqref" reference="eth1"} recovers the optimal delocalisation bound for eigenvectors as a special case. The optimal form of ETH [\[eth1\]](#eth1){reference-type="eqref" reference="eth1"} for any Wigner matrix was proved for the special case when $A$ is a projection in [@BenigniLopatto2103.12013; @2303.11142], and for arbitrary $A$ but only in the bulk of the spectrum[^2] in [@A2]. 
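The rank-one case above can also be checked numerically. The sketch below (an illustration with arbitrary choices of $N$, seed and test vector, not part of any proof) verifies that for $A=|\bm{q}\rangle\langle \bm{q}|$ the overlaps $|\langle \bm{u}_i, \bm{q}\rangle|^2$ are of size $1/N$ up to logarithmic factors, consistent with [\[eth1\]](#eth1){reference-type="eqref" reference="eth1"} since here $\langle |\mathring{A}|^2\rangle^{1/2}=\sqrt{(1-1/N)/N}\sim N^{-1/2}$:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 500
X = rng.standard_normal((N, N))
W = (X + X.T) / np.sqrt(2 * N)      # GOE Wigner matrix
_, U = np.linalg.eigh(W)

q = np.zeros(N)
q[0] = 1.0                          # A = |q><q|, a rank-one projection
overlaps = (U.T @ q) ** 2           # <u_i, A u_i> = |<u_i, q>|^2

# The Hilbert-Schmidt bound predicts overlaps of size ~ 1/N, well below
# the operator-norm estimate N^{-1/2} discussed in the text.
print(overlaps.max(), 1.0 / np.sqrt(N), 1.0 / N)
```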
In fact, $\sqrt{N} [\langle \bm{u}_i, A \bm{u}_j\rangle - \delta_{ij} \langle A\rangle ]$ is asymptotically normal with variance proportional to $\langle |\mathring{A}|^2 \rangle$ (see [@normalfluc; @A2]) in the bulk, showing that the Hilbert-Schmidt norm $\langle |\mathring{A}|^2 \rangle^{1/2}$ is indeed the optimal one. In the main theorem of the current paper (see Theorem [Theorem 2](#thm:ETH){reference-type="ref" reference="thm:ETH"} below) we prove [\[eth1\]](#eth1){reference-type="eqref" reference="eth1"} for all observables and all eigenfunctions, giving the optimal ETH for Wigner matrices in all spectral regimes. We remark that ETH is expected to hold for much more general random matrix ensembles. For example, the approach in [@ETHpaper] was directly generalized to a special class of generalized Wigner matrices in [@GenWigETH]. Furthermore, ETH in the bulk has recently been extended to deformed random matrix ensembles [@iidpaper; @equipart], where both the leading term $\delta_{ij} \langle A \rangle$ and the correct replacement for the traceless part of $A$ in the right hand side of [\[eth1\]](#eth1){reference-type="eqref" reference="eth1"} became considerably more involved; in particular, they are energy dependent. The edge regime and the optimal norm of $A$ in the error term are still open questions for these ensembles; we leave them to future work and for simplicity focus on the Wigner case here. The key tool to achieve our ETH is a new *multi--resolvent local law* with traceless observables that is optimal at the spectral edges. Multi--resolvent local laws in general refer to concentration results for alternating products of resolvents of a random matrix and deterministic matrices (observables).
Their proofs are typically more difficult at the spectral edges since, besides correctly accounting for the traceless observables, their optimal form also requires exploiting a delicate cancellation mechanism; the smallness of the local density of eigenvalues must accurately compensate for the linear instability of a nonlinear equation that governs the fluctuation. In contrast to the previous proofs of local laws behind ETH results, here we apply a dynamical approach, the *method of characteristic flow*, complemented with a Green function comparison argument. While this method has already been extensively tested for single resolvent local laws [@Bourgade2021; @HuangLandon; @AdhiHuang; @LLS; @AdhiLandon; @LandSos], only two papers concern the multi-resolvent situation [@2210.12060; @bourfalc], and neither of them focuses on the critical edge behaviour. Besides the edge behaviour, we will need to track another key aspect of the local law; in order to obtain the Hilbert-Schmidt norm in [\[eth1\]](#eth1){reference-type="eqref" reference="eth1"}, the same norm must appear in the local law as well. Typically, errors in the local laws involve the operator norm of the deterministic matrices between the resolvents; control in the much harder Hilbert-Schmidt sense was considered only very recently in [@A2]. However, this work did not track the optimal edge behaviour. Our new local law is simultaneously optimal in both aspects. We will explain the strength of this new local law in the context of previous works in Section [2.1](#sec:loclaw){reference-type="ref" reference="sec:loclaw"}. ## Notations {#notations .unnumbered} By $\lceil x \rceil := \min\{ m \in {\mathbb Z}\colon m \ge x \}$ and $\lfloor x \rfloor := \max\{ m \in {\mathbb Z}\colon m \le x \}$ we denote the upper and lower integer part of a real number $x \in {\mathbb R }$. We set $[k] := \{1, ... 
, k\}$ for $k \in {\mathbb N}$ and $\langle A \rangle := d^{-1} \mathrm{Tr}(A)$, $d \in {\mathbb N}$, for the normalised trace of a $d \times d$-matrix $A$. For positive quantities $A, B$ we write $A \lesssim B$ resp. $A \gtrsim B$ and mean that $A \le C B$ resp. $A \ge c B$ for some $N$-independent constants $c, C > 0$ that depend only on the basic control parameters of the model in Assumption [Assumption 1](#ass:entries){reference-type="ref" reference="ass:entries"} below. Moreover, for $N$-dependent positive quantities $A, B$, we write $A \ll B$ whenever $A/B \to 0$ as $N \to \infty$. We denote vectors by bold-faced lower case Roman letters $\boldsymbol{x}, \boldsymbol{y} \in {\mathbb C}^{N}$, for some $N \in {\mathbb N}$, and define $$\langle \boldsymbol{x}, \boldsymbol{y} \rangle := \sum_i \bar{x}_i y_i\,, \qquad A_{\boldsymbol{x} \boldsymbol{y}} := \langle \boldsymbol{x}, A \boldsymbol{y} \rangle\,.$$ Matrix entries are indexed by lower case Roman letters $a, b, c , ... ,i,j,k,...$ from the beginning or the middle of the alphabet and unrestricted sums over those are always understood to be over $\{ 1 , ... , N\}$. Finally, we will use the concept *'with very high probability'*, meaning that for any fixed $D > 0$, the probability of an $N$-dependent event is bigger than $1 - N^{-D}$ for all $N \ge N_0(D)$. Also, we will use the convention that $\xi > 0$ denotes an arbitrarily small positive exponent, independent of $N$. 
Moreover, we introduce the common notion of *stochastic domination* (see, e.g., [@semicirclegeneral]): For two families $$X = \left(X^{(N)}(u) \mid N \in {\mathbb N}, u \in U^{(N)}\right) \quad \text{and} \quad Y = \left(Y^{(N)}(u) \mid N \in {\mathbb N}, u \in U^{(N)}\right)$$ of non-negative random variables indexed by $N$, and possibly a parameter $u$, we say that $X$ is stochastically dominated by $Y$, if for all $\epsilon, D >0$ we have $$\sup_{u \in U^{(N)}} \boldsymbol{P} \left[X^{(N)}(u) > N^\epsilon Y^{(N)}(u)\right] \le N^{-D}$$ for large enough $N \ge N_0(\epsilon, D)$. In this case we write $X \prec Y$. If for some complex family of random variables we have $\vert X \vert \prec Y$, we also write $X = O_\prec(Y)$. ## Acknowledgment. {#acknowledgment. .unnumbered} We thank Volodymyr Riabov for his help with creating Figure [1](#fig:flow){reference-type="ref" reference="fig:flow"}. # Main results {#sec:mainres} We consider $N\times N$ Wigner matrices $W$, i.e. $W$ is a random real symmetric or complex Hermitian matrix $W=W^*$ with independent entries (up to the Hermitian symmetry) and with single entry distributions $w_{aa}\stackrel{\mathrm{d}}{=}N^{-1/2}\chi_{\mathrm{d}}$, and $w_{ab}\stackrel{\mathrm{d}}{=}N^{-1/2}\chi_{\mathrm{od}}$, for $a>b$. The random variables $\chi_{\mathrm{d}},\chi_{\mathrm{od}}$ satisfy the following assumptions.[^3] **Assumption 1**. *The off-diagonal distribution $\chi_{\mathrm{od}}$ is a real or complex centered random variable, ${\mathbb E }\chi_{\mathrm{od}}=0$, satisfying ${\mathbb E }|\chi_{\mathrm{od}}|^2 = 1$. The diagonal distribution is a real centered random variable, ${\mathbb E }\chi_{\mathrm{d}} =0$. Furthermore, we assume the existence of high moments, i.e. 
for any $p\in {\mathbb N}$ there exists $C_p > 0$ such that $${\mathbb E }\big[|\chi_{\mathrm{d}}|^p+|\chi_{\mathrm{od}}|^p\big]\le C_p\,.$$* Our main result is the optimal form of the eigenstate thermalization hypothesis (ETH) for Wigner matrices uniformly in the spectrum, in particular, including the spectral edges. Its proof is given in Section [2.2](#subsec:proofETH){reference-type="ref" reference="subsec:proofETH"} and it is based on a new *multi-resolvent local law*, Theorem [Theorem 4](#thm:main){reference-type="ref" reference="thm:main"} below. **Theorem 2** (Eigenstate Thermalization Hypothesis). *Let $W$ be a Wigner matrix satisfying Assumption [Assumption 1](#ass:entries){reference-type="ref" reference="ass:entries"} with orthonormalized eigenvectors $\bm{u}_1, ... , \bm{u}_N$ and let $A \in {\mathbb C}^{N \times N}$ be deterministic. Then $$\label{eq:eth} \max_{i,j \in [N]} \left| \langle \bm{u}_i, A \bm{u}_j\rangle - \delta_{ij} \langle A \rangle \right| \prec \frac{\langle |\mathring{A}|^2 \rangle^{1/2}}{\sqrt{N}}$$ where $\mathring{A} := A - \langle A \rangle$ denotes the traceless part of $A$.* ## Multi-resolvent local laws {#sec:loclaw} Consider the resolvent $G(z):=(W-z)^{-1}$, with $z\in{\mathbb C}\setminus{\mathbb R }$. 
It is well known that in the limit $N\to \infty$ the resolvent becomes deterministic, with its deterministic approximation $m_{\mathrm{sc}}(z)\cdot I$, where $m_{\mathrm{sc}}$ is the Stieltjes transform of the semicircular law: $$\label{eq:semicirc} m(z):=m_{\mathrm{sc}}(z)=\int_{\mathbb R }\frac{1}{x-z}\rho_{\mathrm{sc}}(x)\,\mathrm{d}x, \qquad\quad \rho_{\mathrm{sc}}(x):=\frac{1}{2\pi}\sqrt{[4-x^2]_+}.$$ This holds even in the local regime as long as $|\mathrm{Im}\,z|\gg N^{-1}$; such concentration results are commonly called *local laws.* The single resolvent local law, in its simplest form[^4], asserts that $$\label{eq:singleG} \big|\langle (G(z)- m(z))A\rangle\big|\prec \frac{\| A \|}{N\eta}, \qquad \eta:=| \mathrm{Im}\,z|,$$ holds for any deterministic matrix (*observable*) $A$. The $1/N\eta$ error is optimal for $A=I$ in the relevant $\eta\lesssim1$ regime and $N\eta \langle G(z)-m(z)\rangle$ is approximately Gaussian with variance of order one [@HeKnowles]. However, for traceless observables, i.e. $\langle A\rangle =0$, hence $A=\mathring{A}$, the bound in [\[eq:singleG\]](#eq:singleG){reference-type="eqref" reference="eq:singleG"} improves to the optimal form, $$\big|\langle (G(z)- m(z))A\rangle\big| = \big|\langle G(z)A\rangle\big| \prec \frac{ \sqrt{\rho(z)}}{N\sqrt{\eta}} \langle |A|^2\rangle^{1/2}, \qquad \rho(z):= \frac{1}{\pi}| \mathrm{Im}\,m( z)|.$$ The improvement in the $\eta$-power, together with the additional density factor $\rho(z)$ relevant near the spectral edges, was first observed in [@ETHpaper], while the optimal dependence on the Hilbert-Schmidt norm of $A$ was proved in [@A2]. Single resolvent local laws, however, are not sufficient to control the eigenfunction overlaps as in [\[eq:eth\]](#eq:eth){reference-type="eqref" reference="eq:eth"}.
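The Stieltjes transform in [\[eq:semicirc\]](#eq:semicirc){reference-type="eqref" reference="eq:semicirc"} has the well-known closed form $m_{\mathrm{sc}}(z)=\tfrac12\big(-z+\sqrt{z^2-4}\big)$, with the square-root branch fixed by $\mathrm{Im}\,m_{\mathrm{sc}}(z)\,\mathrm{Im}\,z>0$; equivalently, $m_{\mathrm{sc}}$ solves $m^2+zm+1=0$. A short illustrative sketch (the spectral parameter below is an arbitrary choice) checks the closed form against the defining integral:

```python
import numpy as np

def m_sc(z):
    # Closed form: m_sc solves m^2 + z m + 1 = 0, branch with Im m * Im z > 0.
    s = np.sqrt(z * z - 4 + 0j)
    if s.imag * z.imag < 0:
        s = -s
    return (-z + s) / 2

z = 0.3 + 0.5j
m = m_sc(z)

# Compare with the defining integral against rho_sc (midpoint rule,
# which avoids the square-root singularities at x = -2, 2).
x = np.linspace(-2, 2, 400001)
xm, dx = (x[:-1] + x[1:]) / 2, np.diff(x)
rho = np.sqrt(np.clip(4 - xm**2, 0, None)) / (2 * np.pi)
integral = (rho / (xm - z) * dx).sum()

print(m, integral)
```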
While the local law, via the spectral decomposition of $\mathrm{Im}\,G = \frac{1}{2\mathrm{i}}(G-G^*)$, $$\label{eq:res} \langle \mathrm{Im}\,G(z)A\rangle = \frac{1}{N} \sum_i \frac{\eta}{(\lambda_i-E)^2 +\eta^2} \langle \bm{u}_i, A \bm{u}_i\rangle, \qquad z=E+\mathrm{i}\eta,$$ gives an effectively local average of approximately $N\eta$ diagonal overlaps $\langle \bm{u}_i, A \bm{u}_i\rangle$, inferring the size of a single overlap is not possible just from this average since $\langle \bm{u}_i, A \bm{u}_i\rangle$ may change sign as $i$ varies. Two-resolvent local laws are much more powerful. In particular, using $$\label{specdec} \langle \mathrm{Im}\,G(z_1)A\mathrm{Im}\,G(z_2)A^*\rangle = \frac{1}{N} \sum_{i,j} \frac{\eta}{(\lambda_i-E_1)^2 +\eta^2} \frac{\eta}{(\lambda_j-E_2)^2 +\eta^2}|\langle \bm{u}_i, A \bm{u}_j\rangle|^2, \quad z_l=E_l+\mathrm{i}\eta, \;\; l=1,2,$$ we see that for a traceless observable, $\langle A\rangle=0$, a bound of the form $$\label{eq:2G} \langle \mathrm{Im}\,G(z_1)A\mathrm{Im}\,G(z_2)A^*\rangle \prec \| A\|^2$$ at $\eta\sim N^{-1+\xi}$, $\xi>0$, would imply that a local average (in both indices) of $|\langle \bm{u}_i, A \bm{u}_j\rangle|^2$ is bounded by $N^{-1+2\xi}\|A\|^2$. Since $|\langle \bm{u}_i, A \bm{u}_j\rangle|^2$ is positive (unlike $\langle \bm{u}_i, A \bm{u}_i\rangle$ in [\[eq:res\]](#eq:res){reference-type="eqref" reference="eq:res"}), we can deduce the optimal bound $|\langle \bm{u}_i, A \bm{u}_j\rangle|^2\prec \frac{1}{N} \|A\|^2$ for each overlap. This argument in this form is valid only in the bulk; near the spectral edges the right hand side of [\[eq:2G\]](#eq:2G){reference-type="eqref" reference="eq:2G"} needs to be improved to $\rho(z_1)\rho(z_2)\|A\|^2$; this was already achieved in [@ETHpaper]. 
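The spectral decomposition [\[specdec\]](#specdec){reference-type="eqref" reference="specdec"} is elementary and can be verified directly; the sketch below (a purely illustrative computation at arbitrary small size and spectral parameters) compares the resolvent side with the eigenpair double sum:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 120
X = rng.standard_normal((N, N))
W = (X + X.T) / np.sqrt(2 * N)          # GOE Wigner matrix
B = rng.standard_normal((N, N))
A = B - np.trace(B) / N * np.eye(N)     # traceless observable

lam, U = np.linalg.eigh(W)
eta = 0.05
z1, z2 = 0.4 + 1j * eta, -0.3 + 1j * eta

def im_g(z):
    # Im G(z) = (G(z) - G(z)^*) / (2i), Hermitian and positive for Im z > 0.
    G = np.linalg.inv(W - z * np.eye(N))
    return (G - G.conj().T) / 2j

lhs = np.trace(im_g(z1) @ A @ im_g(z2) @ A.conj().T).real / N

# Eigenpair double sum: Poisson-kernel weights times squared overlaps.
k1 = eta / ((lam - z1.real) ** 2 + eta**2)
k2 = eta / ((lam - z2.real) ** 2 + eta**2)
O = U.T @ A @ U                          # overlap matrix <u_i, A u_j>
rhs = (k1[:, None] * k2[None, :] * np.abs(O) ** 2).sum() / N

print(lhs, rhs)
```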
However, to obtain the optimal Hilbert-Schmidt norm of the observable in [\[eq:eth\]](#eq:eth){reference-type="eqref" reference="eq:eth"} a second improvement to the form $$\label{eq:2Gimproved} \langle \mathrm{Im}\,G(z_1)A\mathrm{Im}\,G(z_2)A^*\rangle \prec \rho(z_1)\rho(z_2) \langle |A|^2\rangle, \qquad \langle A\rangle=0,$$ is necessary. The main achievement of the current paper is to extract both types of improvement *simultaneously.* While Theorem [Theorem 2](#thm:ETH){reference-type="ref" reference="thm:ETH"} requires only the upper bound [\[eq:2Gimproved\]](#eq:2Gimproved){reference-type="eqref" reference="eq:2Gimproved"} for $\mathrm{Im}\,G A \mathrm{Im}\,G A$, in the course of its proof other alternating products of resolvents (with or without $\mathrm{Im}\,$) and deterministic matrices emerge. More precisely, setting $G_i:=G(z_i)$ and considering deterministic matrices $B_i$, the main object of interest is $$\label{eq:mainobj} G_1B_1G_2B_2G_3\dots B_{k-1} G_k$$ for some fixed $k$. We will call expressions of the form [\[eq:mainobj\]](#eq:mainobj){reference-type="eqref" reference="eq:mainobj"} *(resolvent) chains*. We will show a *multi-resolvent local law*, i.e. that any chain [\[eq:mainobj\]](#eq:mainobj){reference-type="eqref" reference="eq:mainobj"} concentrates around a deterministic object and give an upper bound on the fluctuation. The interesting regime is the local one, i.e. when $|\mathrm{Im}\,z_i|\ll 1$. We will also consider the case when some of the $G_i$'s are replaced by their imaginary part $\mathrm{Im}\,G_i$, and we will show that in this case the fluctuations are reduced close to the edge of the spectrum by some factor of $|\mathrm{Im}\,m(z_i)|$ which is essentially the density $\rho_{\mathrm{sc}}$ at $\mathrm{Re}\,z_i$.
It turns out [@ETHpaper] that the sizes of both the deterministic limit of [\[eq:mainobj\]](#eq:mainobj){reference-type="eqref" reference="eq:mainobj"} and its fluctuation are substantially reduced if some of the matrices $B_i$ are traceless. Therefore, in the main part of the paper we study [\[eq:mainobj\]](#eq:mainobj){reference-type="eqref" reference="eq:mainobj"} when all the matrices $B_i$ are traceless, $\langle B_i\rangle =0$; this also implies a local law for [\[eq:mainobj\]](#eq:mainobj){reference-type="eqref" reference="eq:mainobj"} for generic $B_i$'s, using that any matrix $B$ can be decomposed into a constant and a traceless part as $B=\langle B\rangle\cdot I+\mathring{B}$. We will prove local laws that are optimal simultaneously in the two different aspects mentioned above, in addition to accounting for the improvement owing to the traceless observables. The first aspect is to trace the improvement near the spectral edges in terms of additional $\rho$-powers; in general the presence of each $\mathrm{Im}\,G$ provides an additional $\rho$ factor. Second, instead of the technically much easier Euclidean matrix norm (operator norm) of the $B_i$'s, we need to use the more sophisticated Hilbert-Schmidt norm. One additional advantage of using the Hilbert-Schmidt norm is that it enables us to test the chain in [\[eq:mainobj\]](#eq:mainobj){reference-type="eqref" reference="eq:mainobj"} against rank one matrices and still get optimal bounds. In particular, testing it against the projection $|\bm{x}\rangle\langle \bm{y}|$ immediately gives the so-called *isotropic local laws*, i.e. concentration for the individual matrix elements $\langle \bm{x}, G_1B_1\dots B_{k-1} G_k\bm {y}\rangle$, for any deterministic vectors $\bm{x},\bm {y}$. Our results also hold for the case when the spectral parameters $z_i$'s are different, but we will not explore the additional potential improvements from this fact since it is not needed for ETH.
While in some part of the argument we track the different values of $|\mathrm{Im}\,z_i|$ precisely (instead of overestimating them by the worst one), we will not exploit the additional gain from possibly different real parts $\mathrm{Re}\,z_i$; this study is left for future investigations. Multi-resolvent local laws for chains [\[eq:mainobj\]](#eq:mainobj){reference-type="eqref" reference="eq:mainobj"} with traceless deterministic matrices have been the object of interest in several recent papers; however, in each of these works only one aspect of the fluctuations of [\[eq:mainobj\]](#eq:mainobj){reference-type="eqref" reference="eq:mainobj"} was taken into consideration: either the estimate was optimal only in the bulk of the spectrum [@A2], hence missing $\rho$ factors were ignored, or the error term was estimated using the crude operator norm of the $B_i$ [@ETHpaper; @multiG], or only chains of length one ($k=1$) had an optimal error term in both aspects [@functionalCLT]. Our new result (Theorem [Theorem 4](#thm:main){reference-type="ref" reference="thm:main"} below) does not have any of these restrictions: we give a bound on the fluctuation of [\[eq:mainobj\]](#eq:mainobj){reference-type="eqref" reference="eq:mainobj"} uniformly in the spectrum with optimal $N$- and $\rho$-powers and with the Hilbert-Schmidt norm on the traceless $B_i$'s. ### Preliminaries on the deterministic approximation {#sec:prelim} Before stating our main technical result we introduce some additional notation. Given a non-crossing partition $\pi$ of the set $[k]:=\set{1,\ldots,k}$ arranged in cyclic order, the partial trace $\mathrm{pTr}_{\pi}$ of an ordered set of matrices $B_1, \ldots, B_{k-1}$ is defined as $$\label{eq:partrdef} \mathrm{pTr}_\pi(B_1,\ldots,B_{k-1}): = \prod_{S\in\pi\setminus \mathfrak{B}(k)}\left\langle\prod_{j\in S}B_j\right\rangle\prod_{j\in \mathfrak{B}(k)\setminus\set{k}} B_j,$$ with $\mathfrak{B}(k)\in\pi$ denoting the unique block containing $k$.
Then, for generic $B_i$'s, the deterministic approximation of [\[eq:mainobj\]](#eq:mainobj){reference-type="eqref" reference="eq:mainobj"} is given by [@thermalisation Theorem 3.4]: $$\label{eq:Mdef} M_{[1,k]} = M(z_1,B_1,\ldots,B_{k-1},z_k) := \sum_{\pi\in\mathrm{NC}([k])}\mathrm{pTr}_{K(\pi)}(B_1,\ldots,B_{k-1}) \prod_{S\in\pi} m_\circ[S ],$$ where $\mathrm{NC}([k])$ denotes the non-crossing partitions of the set $[k]$, and $K(\pi)$ denotes the Kreweras complement of $\pi$ (see [@thermalisation Definition 2.4] and [@Kreweras]). Furthermore, for any subset $S\subset [k]$ we define $m[S]:=m_\mathrm{sc}[\bm{z}_S]$ as the iterated divided difference of $m_\mathrm{sc}$ evaluated in $\bm{z}_S:=\{z_i: i\in S\}$ which can also be written as $$\label{msc dd} m[S]=m_\mathrm{sc}[\bm{z}_S] =m_\mathrm{sc}[\set{z_i: i \in S}] = \int_{-2}^2\rho_\mathrm{sc}(x)\prod_{i\in S}\frac{1}{x-z_i}\mathrm{d}x.$$ We denote by $m_\circ[\cdot ]$ the free-cumulant transform of $m[\cdot]$ which is uniquely defined implicitly by the relation $$\label{eq:freecumulant} m[S] = \sum_{\pi\in\mathrm{NC}(S)} \prod_{S'\in\pi} m_\circ[S'], \qquad \forall S\subset [k],$$ e.g. $m_\circ[i,j]=m[\set{i,j}]-m[\set{i}]m[\set{j}]$. For example, for $k=2$ we have $$\begin{split} M(z_1,B_1,z_2) &= \langle B_1\rangle(m_\mathrm{sc}[z_1,z_2]-m_\mathrm{sc}(z_1)m_\mathrm{sc}(z_2)) + B_1 m_\mathrm{sc}(z_1)m_\mathrm{sc}(z_2) \\ &= \frac{\langle B_1\rangle}{2\pi}\int_{-2}^2\frac{\sqrt{4-x^2}}{(x-z_1)(x-z_2)}\mathrm{d}x + (B_1-\braket{B_1}) m_\mathrm{sc}(z_1)m_\mathrm{sc}(z_2). \end{split}$$ The main objects of interest within this section are general resolvent chains $$\mathcal{G}_1 B_1\mathcal{G}_2 B_2\dots B_{k-1}\mathcal{G}_k$$ where $\mathcal{G}_i \in \{G_i, \mathrm{Im}\,G_i\}$, and we denote by $\mathfrak{I}_k\subset [k]$ the set of the indices for which $\mathcal{G}_i=\mathrm{Im}\,G_i$. Note that some resolvents may be replaced with their imaginary parts. 
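For a two-element set $S$, the iterated divided difference reduces to $m[\{z_1,z_2\}]=(m_{\mathrm{sc}}(z_1)-m_{\mathrm{sc}}(z_2))/(z_1-z_2)$, which can be checked against the defining integral. The following sketch (with arbitrarily chosen spectral parameters, purely as an illustration) performs this check:

```python
import numpy as np

def m_sc(z):
    # Solution of m^2 + z m + 1 = 0 with the branch Im m(z) * Im z > 0.
    s = np.sqrt(z * z - 4 + 0j)
    if s.imag * z.imag < 0:
        s = -s
    return (-z + s) / 2

z1, z2 = 0.5 + 0.4j, -0.2 + 0.3j
dd = (m_sc(z1) - m_sc(z2)) / (z1 - z2)   # m[{z1, z2}] as a divided difference

# Same quantity from the two-point integral against the semicircle density.
x = np.linspace(-2, 2, 400001)
xm, dx = (x[:-1] + x[1:]) / 2, np.diff(x)
rho = np.sqrt(np.clip(4 - xm**2, 0, None)) / (2 * np.pi)
integral = (rho / ((xm - z1) * (xm - z2)) * dx).sum()

print(dd, integral)
```

The agreement follows from the partial fraction decomposition $\frac{1}{(x-z_1)(x-z_2)}=\frac{1}{z_1-z_2}\big(\frac{1}{x-z_1}-\frac{1}{x-z_2}\big)$.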
In order to generalize [\[eq:Mdef\]](#eq:Mdef){reference-type="eqref" reference="eq:Mdef"}, for any subset $\mathfrak{I}_k\subset [k]$ we define[^5] $$\label{eq:Mdefim} \mathcal{M}_{[1,k]}= \mathcal{M}(z_1,B_1,\dots,B_{k-1},z_k;\mathfrak{I}_k):=\sum_{\pi\in\mathrm{NC}([k])}\mathrm{pTr}_{K(\pi)}(B_1,\ldots,B_{k-1}) \prod_{S\in\pi} m_\circ^{(\mathfrak{I}_k)}[S],$$ with $m_\circ^{(\mathfrak{I}_k)}[S]$ implicitly defined as in [\[eq:freecumulant\]](#eq:freecumulant){reference-type="eqref" reference="eq:freecumulant"} with $m[S]$ replaced with $m^{(\mathfrak{I}_k)}[S]$, where $$\label{eq:Mdivdiff} m^{(\mathfrak{I}_k)}[S]= m^{(\mathfrak{I}_k)}[\set{z_i: i \in S}]: = \int_{-2}^2\rho_\mathrm{sc}(x)\left(\prod_{i\in \mathfrak{I}_k \cap S} \mathrm{Im}\,\frac{1}{x-z_i}\right)\left(\prod_{i\in S\setminus \mathfrak{I}_k}\frac{1}{x-z_i}\right)\mathrm{d}x.$$ We now give some bounds on the deterministic approximations in the case where all matrices in [\[eq:Mdefim\]](#eq:Mdefim){reference-type="eqref" reference="eq:Mdefim"} are traceless, $\langle B_i \rangle = 0$.[^6] The proof of the following lemma is presented in Appendix [6](#sec:addtech){reference-type="ref" reference="sec:addtech"}. **Lemma 3** ($M$-bounds). *Fix $k \ge 1$. Consider spectral parameters $z_1, ... , z_{k+1} \in {\mathbb C}\setminus {\mathbb R }$ and traceless matrices $A_1, ... , A_k \in {\mathbb C}^{N \times N}$. Moreover, let $${\eta}_j := |\mathrm{Im}\,z_j|\,, \qquad m_j := m_{\rm sc}(z_j)\,, \qquad \rho_j :=\frac{1}{\pi} |\mathrm{Im}\,m_j|\,.$$* - *Denoting $\ell := \min_{j \in [k]}\big[ \eta_j (\rho_j + \boldsymbol{1}(j \notin \mathfrak{I}_k))\big]$ and assuming $N \ell \ge 1$, we have the average bound $$\label{eq:Mbound} \left| \langle \mathcal{M}(z_1, A_1, ... 
, A_{k-1}, z_k; \mathfrak{I}_k) A_k \rangle \right| \lesssim \left(\prod_{i \in \mathfrak{I}_k} \rho_i\right) N^{k/2 - 1}\prod_{j \in [k]} \langle |A_j|^2 \rangle^{1/2}\,.$$* - *Denoting $\ell := \min_{j \in [k+1]}\big[ \eta_j (\rho_j + \boldsymbol{1}(j \notin \mathfrak{I}_{k+1}))\big]$ and assuming $N \ell \ge 1$, we have the isotropic bound[^7] $$\label{eq:MboundISO} \left| \langle \bm x, \mathcal{M}(z_1, A_1, ... , A_{k}, z_{k+1}; \mathfrak{I}_{k+1}) \bm y \rangle \right| \lesssim \left(\prod_{i \in \mathfrak{I}_{k+1}} \rho_i \right) N^{k/2}\prod_{j \in [k]} \langle |A_j|^2 \rangle^{1/2}\,.$$ for arbitrary bounded deterministic vectors $\Vert \bm x \Vert \,, \Vert \bm y \Vert \lesssim 1$.* Note that [\[eq:Mbound\]](#eq:Mbound){reference-type="eqref" reference="eq:Mbound"} already reflects the different aspects of our local law: it correctly accounts for the $\rho$-powers for each $\mathrm{Im}\,G$, it involves the Hilbert-Schmidt norm of the observables and it is not hard to see that the $N$-power is also optimal. Note that the isotropic bound [\[eq:MboundISO\]](#eq:MboundISO){reference-type="eqref" reference="eq:MboundISO"} is stated separately for convenience, although it will be a straightforward consequence of the average bound [\[eq:Mbound\]](#eq:Mbound){reference-type="eqref" reference="eq:Mbound"}. ### Multi-resolvent local law {#sec:main} As our main input for Theorem [Theorem 2](#thm:ETH){reference-type="ref" reference="thm:ETH"}, we will prove the following multi-resolvent local law, optimally accounting for the decay of the density at the edge. **Theorem 4** (Multi-resolvent local law with optimal edge dependence). *Let $W$ be a Wigner matrix satisfying Assumption [Assumption 1](#ass:entries){reference-type="ref" reference="ass:entries"}, and fix $k \in {\mathbb N}$. 
Consider spectral parameters $z_1, \ldots , z_{k+1} \in {\mathbb C}\setminus {\mathbb R }$, the associated resolvents $G_j = G(z_j) := (W-z_j)^{-1}$ with $\mathcal{G}_j \in \{ G_j, \mathrm{Im}\,G_j\}$, and traceless matrices $A_1, \ldots , A_k \in {\mathbb C}^{N \times N}$. Finally, let $$\label{eq:defpar} {\eta}_j := |\mathrm{Im}\,z_j|\,, \qquad m_j := m_{\rm sc}(z_j)\,, \qquad \rho_j :=\frac{1}{\pi} |\mathrm{Im}\,m_j|\,, \qquad j\in[ k+1].$$* - *Denote by $\mathfrak{I}_k$ the set of indices $j \in [k]$ where $\mathcal{G}_j = \mathrm{Im}\,G_j$. Then, setting $$\ell := \min_{j \in [k]}\big[ \eta_j (\rho_j + \boldsymbol{1}(j \notin \mathfrak{I}_k))\big],$$ we have the *average law* $$\label{eq:mainAV} \left| \langle \mathcal{G}_1 A_1 \mathcal{G}_2\ldots \mathcal{G}_k A_k \rangle - \langle \mathcal{M}_{[1,k]}A_k \rangle \right| \prec \left[\left(\prod_{i \in \mathfrak{I}_k} \rho_i \right) \wedge \max_{i \in [k]} \sqrt{\rho_i}\right] \, \frac{N^{k/2 - 1}}{\sqrt{N \ell}} \, \prod_{j \in [k]} \langle |A_j|^2 \rangle^{1/2} \,,$$ uniformly in spectral parameters satisfying $\min_j N \eta_j \rho_j \ge N^{\epsilon}$ and $\max_j |z_j| \le N^{1/\epsilon}$ for some $\epsilon > 0$.* - *Denote by $\mathfrak{I}_{k+1}$ the set of indices $j \in [k+1]$ where $\mathcal{G}_j = \mathrm{Im}\,G_j$. 
Then, setting $$\ell := \min_{j \in [k+1]}\big[ \eta_j (\rho_j + \boldsymbol{1}(j \notin \mathfrak{I}_{k+1}))\big],$$ we have the *isotropic law* $$\label{eq:mainISO} \left| \langle \bm x, \mathcal{G}_1 A_1\mathcal{G}_2 \ldots A_k \mathcal{G}_{k+1} \bm y \rangle - \langle \bm x, \mathcal{M}_{[1,k+1]}\bm y \rangle \right| \prec \left[\left(\prod_{i \in \mathfrak{I}_{k+1}} \rho_i \right) \wedge \max_{i\in [k+1]} \sqrt{\rho_i}\right] \, \frac{N^{k/2}}{\sqrt{N \ell}} \, \prod_{j \in [k]} \langle |A_j|^2 \rangle^{1/2} \,,$$ uniformly in bounded deterministic vectors $\Vert \bm x \Vert \,, \Vert \bm y \Vert \lesssim 1$ and spectral parameters satisfying $\min_j N \eta_j \rho_j \ge N^{\epsilon}$ and $\max_j |z_j| \le N^{1/\epsilon}$ for some $\epsilon > 0$.* Observe that, in the regime $N \ell \gg 1$, the error terms in [\[eq:mainAV\]](#eq:mainAV){reference-type="eqref" reference="eq:mainAV"} and [\[eq:mainISO\]](#eq:mainISO){reference-type="eqref" reference="eq:mainISO"} are smaller by an additional small $(N \ell)^{-1/2}$-factor compared to the size of the leading terms in [\[eq:Mbound\]](#eq:Mbound){reference-type="eqref" reference="eq:Mbound"} and [\[eq:MboundISO\]](#eq:MboundISO){reference-type="eqref" reference="eq:MboundISO"}, respectively. **Remark 5** (Optimality). *The bounds [\[eq:mainAV\]](#eq:mainAV){reference-type="eqref" reference="eq:mainAV"} and [\[eq:mainISO\]](#eq:mainISO){reference-type="eqref" reference="eq:mainISO"} are optimal (up to the $N^\xi$ factor hidden in the $\prec$-relation) in the class of bounds that involve only the parameters $N$, $\eta_i$, $\rho_i$ and the Hilbert-Schmidt norm of $A_i$'s. This fact can be seen by computing the variance of the left hand sides in the case when $W$ is a GUE matrix. 
The resolvents can be written out by spectral theorem, similarly to [\[specdec\]](#specdec){reference-type="eqref" reference="specdec"}, and the variance with respect to the eigenvectors can be explicitly computed by Weingarten calculus, while the variance with respect to the eigenvalues (that are independent of the eigenvectors) can be identified from well-known central limit theorems for linear statistics of eigenvalues. For example, for $k=2$, $A_1=A_2=A$, $z_1= z_2 =z$ and $\mathfrak{I}_k=\emptyset$, in this way we obtain $$\label{var} \sqrt{ {\mathbb E }\big|\langle GAGA\rangle - m^2 \langle A^2\rangle \big|^2} \sim \frac{1}{N\eta}\langle A^2\rangle + \frac{\sqrt{\rho}}{N\sqrt{\eta}} \langle A^4\rangle^{1/2}.$$ After estimating $\langle A^4\rangle \le N \langle A^2\rangle^2$, which may saturate for certain $A$, we see the optimality of [\[eq:mainAV\]](#eq:mainAV){reference-type="eqref" reference="eq:mainAV"} for this case. The general case is a similar, albeit somewhat tedious calculation.* **Remark 6** (Interpretations). *We have two further comments on Theorem [Theorem 4](#thm:main){reference-type="ref" reference="thm:main"}.* - *For $\mathfrak{I}_k = \emptyset$ and $\mathfrak{I}_{k+1} = \emptyset$ both bounds, [\[eq:mainAV\]](#eq:mainAV){reference-type="eqref" reference="eq:mainAV"} and [\[eq:mainISO\]](#eq:mainISO){reference-type="eqref" reference="eq:mainISO"}, have already been proven in [@A2 Theorem 2.2 and Corollary 2.4]. In the complementary cases $\mathfrak{I}_k \neq \emptyset$ and $\mathfrak{I}_{k+1} \neq \emptyset$, we point out that the minimum $\big[ ... \wedge ... \big]$ in [\[eq:mainAV\]](#eq:mainAV){reference-type="eqref" reference="eq:mainAV"} and [\[eq:mainISO\]](#eq:mainISO){reference-type="eqref" reference="eq:mainISO"} is realized by the product $\prod_{i\in \mathfrak{I}}\rho_i$ since $\rho_i \lesssim 1$. 
In particular, as a rule of thumb, every index $j$ for which $\mathcal{G}_j = \mathrm{Im}\,G_j$, decreases both the size of the deterministic approximation [\[eq:Mbound\]](#eq:Mbound){reference-type="eqref" reference="eq:Mbound"}--[\[eq:MboundISO\]](#eq:MboundISO){reference-type="eqref" reference="eq:MboundISO"} and the size of the error [\[eq:mainAV\]](#eq:mainAV){reference-type="eqref" reference="eq:mainAV"}--[\[eq:mainISO\]](#eq:mainISO){reference-type="eqref" reference="eq:mainISO"} by a factor $\rho_j$, with $\rho_j \le 1$, compared to the case when $\mathcal{G}_j = G_j$. An exception to this rule is [\[eq:mainAV\]](#eq:mainAV){reference-type="eqref" reference="eq:mainAV"} for $k=1$; here the bounds for $\langle G A \rangle$ and $\langle \mathrm{Im}\,G A \rangle$ are identical.* - *The estimates in Theorem [Theorem 4](#thm:main){reference-type="ref" reference="thm:main"} remain valid if we replace $$\label{eq:Mreplace} \begin{split} \langle \mathcal{M}_{[1,k]}A_k \rangle &\longrightarrow \left(\prod_{i\in \mathfrak{I}_k } \mathrm{Im}\,m_i \right)\left(\prod_{i\notin \mathfrak{I}_k}m_i\right) \langle A_1... A_k \rangle \\ \langle \bm x, \mathcal{M}_{[1,k+1]}\bm y \rangle &\longrightarrow \left(\prod_{i\in \mathfrak{I}_{k+1} } \mathrm{Im}\,m_i \right)\left(\prod_{i\notin \mathfrak{I}_{k+1}}m_i\right) \langle \bm x, A_1... A_k \bm y \rangle \end{split}$$ in [\[eq:mainAV\]](#eq:mainAV){reference-type="eqref" reference="eq:mainAV"} and [\[eq:mainISO\]](#eq:mainISO){reference-type="eqref" reference="eq:mainISO"}, respectively, i.e., if we consider only the trivial partition into singletons $\pi$ in the definition [\[eq:Mdefim\]](#eq:Mdefim){reference-type="eqref" reference="eq:Mdefim"} of $\mathcal{M}_{[1,k]}$. 
This is simply due to the fact that all other summands in [\[eq:Mdefim\]](#eq:Mdefim){reference-type="eqref" reference="eq:Mdefim"} are explicitly smaller than the error terms in [\[eq:mainAV\]](#eq:mainAV){reference-type="eqref" reference="eq:mainAV"}--[\[eq:mainISO\]](#eq:mainISO){reference-type="eqref" reference="eq:mainISO"}. A proof of this fact is given in Appendix [6](#sec:addtech){reference-type="ref" reference="sec:addtech"}.* **Remark 7** (Generalisations). *We mention a few direct generalisations of Theorem [Theorem 4](#thm:main){reference-type="ref" reference="thm:main"} whose proofs are omitted as they are straightforward.* - *In Theorem [Theorem 4](#thm:main){reference-type="ref" reference="thm:main"} each $\mathcal{G}$ can be replaced by a product of $\mathcal{G}$'s and an individual $\mathcal{G}$ may also stand for $|G|$, not only for $G$ or $\mathrm{Im}\,G$ (see [@multiG Lemma 3.2], [@A2 Lemma 3.1], and also Lemma [Lemma 17](#lem:G^2lemma){reference-type="ref" reference="lem:G^2lemma"} below). We refrain from stating these results explicitly as they are easily obtained using appropriate integral representations of general products of such $\mathcal{G}$'s in terms of a single $\mathrm{Im}\,G$.* - *We stated the multi--resolvent local laws in Theorem [Theorem 4](#thm:main){reference-type="ref" reference="thm:main"} only for $\mathcal{G}_j\in \{G_j,\mathrm{Im}\,G_j\}$, however, inspecting the proof, one can easily see that it also leads to a local law for $\mathcal{G}_j\in\{G_j, \mathrm{Im}\,G_j, G^\mathfrak{t}_j,\mathrm{Im}\,G^\mathfrak{t}_j\}$, where $G^\mathfrak{t}$ stands for the transpose of $G$. 
In particular, this implies that the ETH in Theorem [Theorem 2](#thm:ETH){reference-type="ref" reference="thm:ETH"} can also be extended to $$\label{eq:ethbar} \max_{i,j \in [N]} \left| \langle \overline{\bm{u}_i}, A \bm{u}_j\rangle - \langle A \rangle \langle \overline{\bm{u}_i}, \bm{u}_j\rangle \right| \prec \frac{\langle |\mathring{A}|^2 \rangle^{1/2}}{\sqrt{N}}.$$ Furthermore, setting $\sigma:= {\mathbb E }\chi_{\mathrm{od}}^2$, for $|\sigma|<1$ we have (see [@ETHpaper Theorem 2.3]) $$\big|\langle \overline{\bm{u}_i}, \bm{u}_j\rangle\big|\prec \frac{C_\sigma}{\sqrt{N}}.$$ In the two extreme cases $\sigma=\pm 1$, we have $|\langle \overline{\bm{u}_i}, \bm{u}_j\rangle|=\delta_{i,j}$ if $\sigma=1$ and $|\langle \overline{\bm{u}_i}, \bm{u}_j\rangle|=\delta_{i,N-j+1}$ if $\sigma=-1$ and ${\mathbb E }(W_{aa}^2)=0$ (see [@ETHpaper Remark 2.4]). We remark that here $\bm{u}_i$ denotes the eigenvector corresponding to the eigenvalue $\lambda_i$, with the $\lambda_i$'s labeled in increasing order.* ## Proof of Theorem [Theorem 2](#thm:ETH){reference-type="ref" reference="thm:ETH"} {#subsec:proofETH} Fix $\epsilon>0$, pick $E\in [-2,2]$ and define $\eta(E)$ implicitly by $$N\eta(E)\rho(E+\mathrm{i}\eta(E))= N^\epsilon.$$ Let $A$ be a traceless matrix, $\langle A\rangle=0$. Then, by spectral decomposition [\[specdec\]](#specdec){reference-type="eqref" reference="specdec"} and the well-known eigenvalue rigidity[^8] (see, e.g., [@EYY2012]) it is easy to see that (see [@ETHpaper Lemma 1] for more details) $$\max_{i,j \in [N]} N\left| \langle \bm{u}_i, A \bm{u}_j\rangle \right|^2 \prec N^{2\epsilon}\sup_{E_1,E_2\in [-2,2]} \frac{\big|\langle \mathrm{Im}\,G(E_1+\mathrm{i}\eta(E_1))A\mathrm{Im}\,G(E_2+\mathrm{i}\eta(E_2)) A^*\rangle\big|}{\rho(E_1+\mathrm{i}\eta(E_1))\rho(E_2+\mathrm{i}\eta(E_2))} \prec N^{2\epsilon}\langle |A|^2\rangle\,.$$ We point out that in the last inequality we used [\[eq:mainAV\]](#eq:mainAV){reference-type="eqref" reference="eq:mainAV"} for $k=2$ and
$\mathfrak{I}_2=[2]$: $$\big|\langle \mathrm{Im}\,G_1 A \mathrm{Im}\,G_2 A^*\rangle- \mathrm{Im}\,m_1\mathrm{Im}\,m_2\langle |A|^2\rangle\big|\prec \frac{\rho_1\rho_2}{\sqrt{N\ell}}\langle |A|^2\rangle\,.$$ The fact that this bound holds simultaneously for all $E_1=\mathrm{Re}\,z_1\in [-2,2]$ and $E_2=\mathrm{Re}\,z_2\in [-2,2]$ follows by a simple grid argument together with the Lipschitz regularity of the resolvent (with Lipschitz constant of order $N$ at spectral parameters with imaginary part bigger than $1/N$). This completes the proof of Theorem [Theorem 2](#thm:ETH){reference-type="ref" reference="thm:ETH"}. 0◻ The rest of the paper is devoted to the proof of the multi-resolvent local law, Theorem [Theorem 4](#thm:main){reference-type="ref" reference="thm:main"}. # Multi--resolvent local law: Proof of Theorem [Theorem 4](#thm:main){reference-type="ref" reference="thm:main"} {#sec:proof} In this section we prove the *multi-resolvent local laws* in Theorem [Theorem 4](#thm:main){reference-type="ref" reference="thm:main"} via the following three steps: - **Global law.** Prove a multi-resolvent *global law*, i.e. for spectral parameters "far away" from the spectrum, $\min_j \mathrm{dist}(z_j, [-2,2]) \ge \delta$ for some small $\delta > 0$ (see Proposition [Proposition 8](#prop:initial){reference-type="ref" reference="prop:initial"}). - **Characteristic flow.** Propagate the global law to a *local law* by considering the evolution of the Wigner matrix $W$ along the Ornstein-Uhlenbeck flow, thereby introducing an almost order one Gaussian component (see Proposition [Proposition 10](#prop:zig){reference-type="ref" reference="prop:zig"}). The spectral parameters evolve from the global regime to the local regime according to the *characteristic (semicircular) flow*. The simultaneous effect of these two evolutions is a key cancellation of two large terms.
- **Green function comparison.** Remove the Gaussian component by a Green function comparison (GFT) argument (see Proposition [Proposition 11](#prop:zag){reference-type="ref" reference="prop:zag"}). As the first step, we have the following global law, the proof of which is completely analogous to the proofs presented in [@multiG Appendix B] and [@A2 Appendix A] and is therefore omitted. We point out that these proofs do not use the system of master inequalities and the bootstrapped error analysis that form the technical backbone of [@multiG; @A2]. In the regime of large $d: = \min_j \mathrm{dist}(z_j, [-2,2])$ a simple cumulant expansion is used with the trivial norm bound $\| \mathcal{G}_j\| \le 1/d$. In particular, Proposition [Proposition 8](#prop:initial){reference-type="ref" reference="prop:initial"} holds for general deterministic matrices since the traceless condition plays no role in this case. **Proposition 8** (Step 1: Global law). *Let $W$ be a Wigner matrix satisfying Assumption [Assumption 1](#ass:entries){reference-type="ref" reference="ass:entries"}, and fix any $k \in {\mathbb N}$ and $\delta>0$. Consider spectral parameters $z_1, ... , z_{k+1} \in {\mathbb C}\setminus {\mathbb R }$, the associated resolvents $G_j = G(z_j) := (W-z_j)^{-1}$, with $\mathcal{G}_j \in \{ G_j , \mathrm{Im}\,G_j \}$, and deterministic matrices $B_1, ... , B_k \in {\mathbb C}^{N \times N}$. Then, uniformly in deterministic matrices $B_i$ and in spectral parameters satisfying $\mathrm{dist}(z_j, [-2, 2])\ge \delta$, for some fixed constant $C>0$, the following holds.* - *We have the averaged bound $$\label{eq:maininAV} \left| \langle \mathcal{G}_1 B_1 ... \mathcal{G}_k B_k \rangle - \langle \mathcal{M}_{[1,k]}B_k \rangle \right| \prec \frac{N^{k/2-1}}{\sqrt{N}} \prod_{j \in [k]} \langle |B_j|^{2} \rangle^{\frac{1}{2}} \,.$$* - *For deterministic unit vectors ${\bm x}, {\bm y}$, we have the isotropic bound $$\label{eq:maininISO} \left| (\mathcal{G}_1 B_1 ...
B_k \mathcal{G}_{k+1})_{{\bm x}{\bm y}} - (\mathcal{M}_{[1,k+1]})_{{\bm x}{\bm y}} \right| \prec \frac{N^{k/2}}{\sqrt{N}} \prod_{j \in [k]} \langle |B_j|^{2} \rangle^{\frac{1}{2}} \,.$$* In the next Proposition [Proposition 10](#prop:zig){reference-type="ref" reference="prop:zig"}, using Proposition [Proposition 8](#prop:initial){reference-type="ref" reference="prop:initial"} as an input, we derive Theorem [Theorem 4](#thm:main){reference-type="ref" reference="thm:main"} for Wigner matrices which have an order one Gaussian component. For this purpose we consider the evolution of the Wigner matrix $W$ along the Ornstein-Uhlenbeck flow $$\label{eq:OUOUOU} \mathrm{d}W_t=-\frac{1}{2}W_t\mathrm{d}t+\frac{\mathrm{d}B_t}{\sqrt{N}}, \qquad W_0=W,$$ with $B_t$ being a real symmetric or complex Hermitian Brownian motion[^9] whose entries have $t$ times the same first two moments as $W$, and define its resolvent $G_t(z):=(W_t-z)^{-1}$ with $z\in{\mathbb C}\setminus{\mathbb R }$. Even if not stated explicitly, we will always consider this flow only for short times, i.e. for $0\le t\le T$, where the maximal time $T$ is smaller than $\gamma$, for some small constant $\gamma>0$. Note that along the flow [\[eq:OUOUOU\]](#eq:OUOUOU){reference-type="eqref" reference="eq:OUOUOU"} the first two moments of $W_t$ are preserved, and so the self-consistent density of states of $W_t$ is unchanged; it remains the standard semicircle law. We now want to compute the deterministic approximation of products of resolvents and deterministic matrices with trace zero, $$\label{eq:quantnochar} \mathcal{G}_t(z_1)A_1\mathcal{G}_t(z_2) A_2\mathcal{G}_t(z_3)A_3\dots, \qquad\quad \langle A_i\rangle =0,$$ and have a very precise estimate of the error term.
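A quick sanity check on the moment preservation along [\[eq:OUOUOU\]](#eq:OUOUOU){reference-type="eqref" reference="eq:OUOUOU"} noted above (a numerical sketch, not part of the argument): for the scalar analogue $\mathrm{d}x_t=-\tfrac{1}{2}x_t\,\mathrm{d}t+\mathrm{d}b_t$ with unit-variance noise per unit time, the variance $v_t={\mathbb E }\,x_t^2$ satisfies $\dot v_t=-v_t+1$, so $v_t\equiv 1$ is stationary. The step size and horizon below are arbitrary choices.

```python
# Scalar analogue of the OU flow: dx_t = -(1/2) x_t dt + db_t.
# The Euler recursion for the variance v_n = Var(x_n) with unit-variance
# noise per unit time reads v_{n+1} = (1 - dt/2)^2 v_n + dt.
dt = 1e-3      # hypothetical step size
steps = 5000   # integrate up to t = 5
v = 1.0        # initial variance, matching the stationary value
for _ in range(steps):
    v = (1.0 - dt / 2.0) ** 2 * v + dt

# The exact flow keeps v_t = 1 for all t; the discretisation drifts only at O(dt).
assert abs(v - 1.0) < 1e-3
```

The same computation applies entrywise to $W_t$ (with variance $1/N$ per entry), which is why the semicircular self-consistent density is unchanged along the flow.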
In fact, we also let the spectral parameters evolve in time according to a carefully chosen equation that conveniently cancels some leading error terms in the time evolution of [\[eq:quantnochar\]](#eq:quantnochar){reference-type="eqref" reference="eq:quantnochar"}. ![Several trajectories for solutions of [\[eq:chardef\]](#eq:chardef){reference-type="eqref" reference="eq:chardef"} are depicted. We chose ten reference times, indicated by dots, showing that the rate of change along the flow strongly depends on $\rho$. The solid black line is the graph of $E \mapsto \eta(E)$ with $\eta(E)$ implicitly defined via $\eta(E) \rho(E + \mathrm{i}\eta(E)) = \mathrm{const}.$ for a small positive constant. A similar picture also appeared in [@Bourgade2021 Figure 1].](plot.pdf){#fig:flow} The corresponding equation is the characteristic equation for the semicircular flow, i.e. given by the first order ODE (see Figure [1](#fig:flow){reference-type="ref" reference="fig:flow"}): $$\label{eq:chardef} \partial_t z_{i,t}=-m(z_{i,t})-\frac{z_{i,t}}{2}\,.$$ Define $\eta_{i,t}:=|\mathrm{Im}\,z_{i,t}|$ and $\rho_{i,t}:=\pi^{-1}|\mathrm{Im}\,m(z_{i,t})|$. Note that along the characteristics we have $$\label{eq:characteristics} \partial_t m(z_{i,t})=-\partial_z m(z_{i,t}) \left(m(z_{i,t})+\frac{z_{i,t}}{2}\right)=-\partial_z m(z_{i,t})\left(-\frac{1}{2m(z_{i,t})}+\frac{m(z_{i,t})}{2}\right)=\frac{m(z_{i,t})}{2},$$ where in the last two equalities we used the defining equation $m(z)^2 + zm(z)+1=0$ of the Stieltjes transform of the semicircular law. In particular, taking the imaginary part of [\[eq:characteristics\]](#eq:characteristics){reference-type="eqref" reference="eq:characteristics"} we get $\rho_{i,s}\sim \rho_{i,t}$ for any $0\le s\le t$, while the behavior of the $\eta_{i,t}$ depends on the regime: in the bulk $\eta_{i,t}$ decreases linearly in time with a speed of order one, close to the edge $\eta_{i,t}$ still decreases linearly, but with a speed depending on $\rho$, i.e.
it is slower near the edges. By standard ODE theory we obtain the following lemma: **Lemma 9**. *Fix an $N$--independent $\gamma>0$, fix $0<T<\gamma$, and pick $z\in {\mathbb C}\setminus {\mathbb R }$. Then there exists an initial condition $z_0$ such that the solution $z_t$ of [\[eq:chardef\]](#eq:chardef){reference-type="eqref" reference="eq:chardef"} with this initial condition $z_0$ satisfies $z_T=z$. Furthermore, there exists a constant $C>0$ such that $\mathrm{dist}(z_0,[-2,2]) \ge CT$.* The spectral parameters evolving by [\[eq:chardef\]](#eq:chardef){reference-type="eqref" reference="eq:chardef"} will have the property that $$\mathcal{G}_t(z_{1,t})A_1\dots A_{k-1}\mathcal{G}_t(z_{k,t})-\mathcal{M}_{[1,k],t}\approx \mathcal{G}_0(z_{1,0})A_1\dots A_{k-1}\mathcal{G}_0(z_{k,0})-\mathcal{M}_{[1,k],0},$$ with $\mathcal{M}_{[1,k],t}:=\mathcal{M}(z_{1,t},A_1,\dots, A_{k-1},z_{k,t})$, for any $0\le t \le T$. Note that the deterministic approximation $\mathcal{M}_{[1,k],t}$ depends on time only through the time dependence of the spectral parameters. The deterministic approximation of [\[eq:quantnochar\]](#eq:quantnochar){reference-type="eqref" reference="eq:quantnochar"} with fixed spectral parameters is unchanged along the whole flow [\[eq:OUOUOU\]](#eq:OUOUOU){reference-type="eqref" reference="eq:OUOUOU"} since the Wigner semicircular density is preserved under the OU flow [\[eq:OUOUOU\]](#eq:OUOUOU){reference-type="eqref" reference="eq:OUOUOU"}. **Proposition 10** (Step 2: Characteristic flow). *Fix $\epsilon,\gamma>0$, $0\le T\le \gamma$, $K\in{\mathbb N}$. Consider $z_{1,0},\dots,z_{K+1,0}\in{\mathbb C}\setminus {\mathbb R }$ as initial conditions of the solution $z_{j,t}$ of [\[eq:chardef\]](#eq:chardef){reference-type="eqref" reference="eq:chardef"} for $0\le t\le T$, define $G_{j,t}:=G_t(z_{j,t})$ and let $\mathcal{G}_{j,t}\in \{G_{j,t}, \mathrm{Im}\,G_{j,t}\}$.
Let $\Vert \bm x \Vert \,, \Vert \bm y \Vert \lesssim1$ be bounded deterministic vectors.* - *For any $k\le K$ let $\mathfrak{I}_k$ be the set of indices $j \in [k]$ where $\mathcal{G}_{j,t}=\mathrm{Im}\,G_{j,t}$, and define ${\ell}_{t} := \min_{j \in [k]}\big[\eta_{j,t}(\rho_{j,t}+\bm1(j\notin\mathfrak{I}_{k}))\big]$, the time dependent analogue[^10] of $\ell$. Then, assuming that $$\label{eq:inass0} \left| \langle \mathcal{G}_{1,0} A_1 ... \mathcal{G}_{k,0} A_k \rangle - \langle \mathcal{M}_{[1,k],0}A_k \rangle \right|\prec \left[\left(\prod_{i \in \mathfrak{I}_k} \rho_{i,0} \right) \wedge \max_{i \in [k]} \sqrt{\rho_{i,0}} \right] \frac{N^{k/2 - 1}}{\sqrt{N \ell_{0}}} \prod_{j \in [k]} \langle |A_j|^2 \rangle^{1/2} \,$$ holds uniformly for any $k\le K$, any choice of $A_1,\dots,A_k$ traceless deterministic matrices and any choice of $z_{i,0}$'s such that $N\eta_{i,0}\rho_{i,0}\ge N^\epsilon$ and $|z_{i,0}| \le N^{1/\epsilon}$, we have $$\label{eq:flowgimg} \left| \langle \mathcal{G}_{1,T} A_1 ... \mathcal{G}_{k,T} A_k \rangle - \langle \mathcal{M}_{[1,k],T}A_k \rangle \right|\prec \left[\left(\prod_{i \in \mathfrak{I}_k} \rho_{i,T} \right) \wedge \max_{i \in [k]} \sqrt{\rho_{i,T}} \right] \frac{N^{k/2 - 1}}{\sqrt{N \ell_{T}}} \prod_{j \in [k]} \langle |A_j|^2 \rangle^{1/2}\,,$$ for any $k\le K$, again uniformly in traceless matrices $A_i$ and in spectral parameters satisfying $N \eta_{i, T} \rho_{i, T} \ge N^{\epsilon}$, $|z_{i,T}|\le N^{1/\epsilon}$.* - *Let $\mathfrak{I}_{k+1}$ be the set of indices $j \in [k+1]$ where $\mathcal{G}_{j,t}=\mathrm{Im}\,G_{j,t}$, and define ${\ell}_{t} := \min_{j \in [k+1]} \big[\eta_{j,t}(\rho_{j,t}+\bm1(j\notin\mathfrak{I}_{k+1}))\big]$. Then, assuming that $$\label{eq:inass0ISO} \left| \langle \bm x, \mathcal{G}_{1,0} A_1 ...
A_k \mathcal{G}_{k+1, 0}\bm y\rangle - \langle \bm x, \mathcal{M}_{[1,k+1],0} \bm y \rangle \right|\prec \left[\left(\prod_{i \in \mathfrak{I}_{k+1}} \rho_{i,0} \right) \wedge \max_{i \in [k+1]} \sqrt{\rho_{i,0}} \right] \frac{N^{k/2 }}{\sqrt{N \ell_{0}}} \prod_{j \in [k]} \langle |A_j|^2 \rangle^{1/2} \,$$ holds for any $k\le K$, uniformly in $A$'s and in the spectral parameters as in part (a), and in deterministic vectors, we have $$\label{eq:flowgimgISO} \left| \langle \bm x, \mathcal{G}_{1,T} A_1 ... A_k \mathcal{G}_{k+1, T}\bm y\rangle - \langle \bm x, \mathcal{M}_{[1,k+1],T} \bm y \rangle \right|\prec \left[\left(\prod_{i \in \mathfrak{I}_{k+1}} \rho_{i,T} \right) \wedge \max_{i \in [k+1]} \sqrt{\rho_{i,T}} \right] \frac{N^{k/2 }}{\sqrt{N \ell_{T}}} \prod_{j \in [k]} \langle |A_j|^2 \rangle^{1/2} \,,$$ for any $k\le K$, again uniformly in $A$'s and in spectral parameters as in part (a), and in deterministic vectors ${\bm x}, {\bm y}$.* Proposition [Proposition 10](#prop:zig){reference-type="ref" reference="prop:zig"} is proven in Section [4](#sec:opAedge){reference-type="ref" reference="sec:opAedge"}. As the third and final step, we show that the additional Gaussian component introduced in Proposition [Proposition 10](#prop:zig){reference-type="ref" reference="prop:zig"} can be removed using a Green function comparison (GFT) argument. The proof of this proposition is presented in Section [5](#sec:GFT){reference-type="ref" reference="sec:GFT"}. **Proposition 11** (Step 3: Green function comparison). *Let $H^{(\bm v)}$ and $H^{(\bm w)}$ be two $N \times N$ Wigner matrices with matrix elements given by the random variables $v_{ab}$ and $w_{ab}$, respectively, both satisfying Assumption [Assumption 1](#ass:entries){reference-type="ref" reference="ass:entries"} and having matching moments up to third order,[^11] i.e.
$$\label{eq:momentmatch} {\mathbb E }\bar{v}_{ab}^u v_{ab}^{s-u} = {\mathbb E }\bar{w}_{ab}^u w_{ab}^{s-u}\,, \quad s \in \{0,1,2,3\}\,, \quad u \in \{0,...,s\}\,.$$ Fix $K \in {\mathbb N}$ and consider spectral parameters $z_1, ... , z_{K+1} \in {\mathbb C}\setminus {\mathbb R }$ satisfying $\min_j N \eta_j \rho_j \ge N^{\epsilon}$ and $\max_j |z_j| \le N^{1/\epsilon}$ for some $\epsilon > 0$ and the associated resolvents $G_j^{(\#)} = G^{(\#)}(z_j) := (H^{(\#)}-z_j)^{-1}$ with $\mathcal{G}^{(\#)}_j \in \{ G^{(\#)}_j, \mathrm{Im}\,G^{(\#)}_j\}$ and $\# = \bm v, \bm w$. Pick traceless matrices $A_1, ... , A_K \in {\mathbb C}^{N \times N}$.* *Assume that, for $H^{(\bm v)}$, we have the following bounds (writing $\mathcal{G}_j \equiv \mathcal{G}_j^{(\bm v)}$ for brevity).* - *For any $k \le K$, consider any subset of cardinality $k$ of the $K+1$ spectral parameters and, similarly, consider any subset of cardinality $k$ of the $K$ traceless matrices. Relabel both of them by $[k]$, and denote the set of indices $j \in [k]$ by $\mathfrak{I}_k$ where $\mathcal{G}_j = \mathrm{Im}\,G_j$. Setting $\ell := \min_{j \in [k]}\big[ \eta_j (\rho_j + \boldsymbol{1}(j \notin \mathfrak{I}_k))\big]$ we have that $$\label{eq:zagmultiG} \left| \langle \mathcal{G}_1 A_1 ... \mathcal{G}_k A_k \rangle - \langle \mathcal{M}_{[1,k]}A_k \rangle \right| \prec \left[\left(\prod_{i \in \mathfrak{I}_k} \rho_i \right) \wedge \max_{i \in [k]} \sqrt{\rho_i}\right] \, \frac{N^{k/2 - 1}}{\sqrt{N \ell}} \, \prod_{j \in [k]} \langle |A_j|^2 \rangle^{1/2}\,,$$ uniformly in all choices of subsets of $z$'s and $A$'s.* - *For any $k \le K$, consider any subset of cardinality $k+1$ of the $K+1$ spectral parameters and, similarly, consider any subset of cardinality $k$ of the $K$ traceless matrices. Relabel them by $[k+1]$ and $[k]$, respectively, and denote the set of indices $j \in [k+1]$ by $\mathfrak{I}_{k+1}$ where $\mathcal{G}_j = \mathrm{Im}\,G_j$. 
Setting $\ell := \min_{j \in [k+1]}\big[ \eta_j (\rho_j + \boldsymbol{1}(j \notin \mathfrak{I}_{k+1}))\big]$ we have that $$\label{eq:zagmultiGISO} \begin{split} \left| \langle \bm x, \mathcal{G}_1 A_1 ... A_k \mathcal{G}_{k+1} \bm y \rangle - \langle \bm x, \mathcal{M}_{[1,k+1]}\bm y \rangle \right| \prec \left[\left(\prod_{i \in \mathfrak{I}_{k+1}} \rho_i \right) \wedge \max_{i\in [k+1]} \sqrt{\rho_i}\right] \, \frac{N^{k/2}}{\sqrt{N \ell}} \, \prod_{j \in [k]} \langle |A_j|^2 \rangle^{1/2} \,, \end{split}$$ uniformly in all choices of subsets of $z$'s and $A$'s as in part (a) and in bounded deterministic vectors $\Vert \bm x \Vert \,, \Vert \bm y \Vert \lesssim 1$.* *Then, [\[eq:zagmultiG\]](#eq:zagmultiG){reference-type="eqref" reference="eq:zagmultiG"}--[\[eq:zagmultiGISO\]](#eq:zagmultiGISO){reference-type="eqref" reference="eq:zagmultiGISO"} also hold for the ensemble $H^{(\bm{w})}$, uniformly in all choices of subsets of $z$'s and $A$'s and in bounded deterministic vectors.* We are now ready to conclude the proof of Theorem [Theorem 4](#thm:main){reference-type="ref" reference="thm:main"}. Fix $T>0$, and fix $z_1,\dots,z_{k+1}\in{\mathbb C}\setminus {\mathbb R }$ such that $\min_i N\eta_i \rho_i \ge N^\epsilon$, and let $z_{i,0}$ be the initial conditions of the characteristics [\[eq:chardef\]](#eq:chardef){reference-type="eqref" reference="eq:chardef"} chosen so that $z_{i,T}=z_i$ (this is possible thanks to Lemma [Lemma 9](#lem:propchar){reference-type="ref" reference="lem:propchar"}). Then, the assumption [\[eq:inass0\]](#eq:inass0){reference-type="eqref" reference="eq:inass0"} of Proposition [Proposition 10](#prop:zig){reference-type="ref" reference="prop:zig"} is satisfied for those $z_{i,0}$ by Proposition [Proposition 8](#prop:initial){reference-type="ref" reference="prop:initial"} with $\delta=CT$, where $C>0$ is the constant from Lemma [Lemma 9](#lem:propchar){reference-type="ref" reference="lem:propchar"}.
We can thus use Proposition [Proposition 10](#prop:zig){reference-type="ref" reference="prop:zig"} to show that [\[eq:flowgimg\]](#eq:flowgimg){reference-type="eqref" reference="eq:flowgimg"} and [\[eq:flowgimgISO\]](#eq:flowgimgISO){reference-type="eqref" reference="eq:flowgimgISO"} hold. Finally, the Gaussian component added in Proposition [Proposition 10](#prop:zig){reference-type="ref" reference="prop:zig"} is removed using Proposition [Proposition 11](#prop:zag){reference-type="ref" reference="prop:zag"} with the aid of a complex version of the standard moment-matching lemma [@EYbook Lemma 16.2], see Lemma [Lemma 34](#lem:momentmatch){reference-type="ref" reference="lem:momentmatch"} in Appendix [6.2](#app:moma){reference-type="ref" reference="app:moma"} for more details. 0◻ # Characteristic flow: Proof of Proposition [Proposition 10](#prop:zig){reference-type="ref" reference="prop:zig"} {#sec:opAedge} In this section we present the proof of Proposition [Proposition 10](#prop:zig){reference-type="ref" reference="prop:zig"}. The argument is structured as follows: - In Section [4.1](#subsec:pureIM){reference-type="ref" reference="subsec:pureIM"} we begin by proving the average part, Proposition [Proposition 10](#prop:zig){reference-type="ref" reference="prop:zig"} (a), in the case when $\mathcal{G}_{j,t}=\mathrm{Im}\,G_{j,t}$ for each $j\in [k]$, i.e., we prove [\[eq:flowgimg\]](#eq:flowgimg){reference-type="eqref" reference="eq:flowgimg"} for chains containing only $\mathrm{Im}\,G$'s. Along the flow [\[eq:OUOUOU\]](#eq:OUOUOU){reference-type="eqref" reference="eq:OUOUOU"} new resolvents without imaginary part arise, so the pure $\mathrm{Im}\,G$ structure cannot be directly maintained. However, we can use the integral representation (see, e.g. [@multiG Eq. 
(3.14)]), $$\label{eq:intrepIM} \prod_{j=1}^mG(z_j ) = \frac{1}{\pi} \int_{\mathbb R }\mathrm{Im}\,G (x + \mathrm{i}\eta) \prod_{j=1}^m \frac{1}{x - z_j + \mathrm{sgn}(\mathrm{Im}\,z_j) \mathrm{i}\eta} \mathrm{d}x,$$ (that is valid for any $0 < \eta < \min_j \mathrm{Im}\,z_j$ or $\max_j \mathrm{Im}\,z_j < - \eta < 0$) to express each $G$ in terms of $\mathrm{Im}\,G$, thus the flow for purely $\mathrm{Im}\,G$ chains will be self-contained. - Afterwards, in the very short Section [4.2](#subsec:pureIMISO){reference-type="ref" reference="subsec:pureIMISO"}, we prove the isotropic part, Proposition [Proposition 10](#prop:zig){reference-type="ref" reference="prop:zig"} (b) again first in the case when $\mathcal{G}_{j,t}=\mathrm{Im}\,G_{j,t}$ for each $j\in [k+1]$. Due to the Hilbert-Schmidt error terms, the isotropic bound [\[eq:flowgimgISO\]](#eq:flowgimgISO){reference-type="eqref" reference="eq:flowgimgISO"} will directly follow from [\[eq:flowgimg\]](#eq:flowgimg){reference-type="eqref" reference="eq:flowgimg"} proven in Section [4.1](#subsec:pureIM){reference-type="ref" reference="subsec:pureIM"}. - Finally, using the integral representation [\[eq:intrepIM\]](#eq:intrepIM){reference-type="eqref" reference="eq:intrepIM"} in the special case $m=1$, we derive the general case of mixed chains from the purely $\mathrm{Im}\,G$'s case in Section [4.3](#subsec:mixed){reference-type="ref" reference="subsec:mixed"}. Without loss of generality, to keep the presentation simpler, throughout this section we will assume that $\sigma : = {\mathbb E }\chi^2_{\mathrm{od}}$ is real and ${\mathbb E }\chi^2_{\mathrm{d}}=1+\sigma$ (recall that $\chi_{\mathrm{d}}, \chi_{\mathrm{od}}$ are the distribution of the diagonal and off-diagonal matrix elements of $W$, respectively). At the end, in Section [4.4](#sec:general){reference-type="ref" reference="sec:general"}, we will explain how to lift these two restrictions. 
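As a concrete illustration of the integral representation [\[eq:intrepIM\]](#eq:intrepIM){reference-type="eqref" reference="eq:intrepIM"} in the case $m=1$, i.e. $G(z)=\pi^{-1}\int_{\mathbb R }\mathrm{Im}\,G(x+\mathrm{i}\eta)\,(x-z+\mathrm{i}\eta)^{-1}\,\mathrm{d}x$ for $0<\eta<\mathrm{Im}\,z$, one can verify the identity numerically. The sketch below is not part of the proof; the $2\times 2$ test matrix, the spectral parameter and the integration grid are arbitrary choices.

```python
# Numerical check of the m = 1 case of the integral representation:
#   G(z) = (1/pi) \int_R Im G(x + i*eta) / (x - z + i*eta) dx,  0 < eta < Im z.
# H, z, eta and the grid below are hypothetical test values.
import numpy as np

H = np.array([[0.3, 0.7], [0.7, -0.5]])  # small real symmetric test matrix
z, eta = 0.4 + 0.5j, 0.2                 # note 0 < eta < Im z

lam, V = np.linalg.eigh(H)

# Left-hand side via spectral decomposition: G(z) = sum_k (lam_k - z)^{-1} v_k v_k^T.
lhs = (V * (1.0 / (lam - z))) @ V.T

# Right-hand side: Riemann sum on a wide grid; the integrand decays like |x|^{-3}.
xs = np.linspace(-200.0, 200.0, 200001)
dx = xs[1] - xs[0]
# Im G(x + i*eta) has eigenvalues eta / ((x - lam_k)^2 + eta^2).
im_eigs = eta / ((xs[:, None] - lam[None, :]) ** 2 + eta ** 2)
w = 1.0 / (xs - z + 1j * eta)            # sgn(Im z) = +1 here
rhs_eigs = (im_eigs * w[:, None]).sum(axis=0) * dx / np.pi
rhs = (V * rhs_eigs) @ V.T

assert np.abs(lhs - rhs).max() < 1e-3
```

The same check can be run for products of several resolvents; the constraint to respect is $0<\eta<\min_j \mathrm{Im}\,z_j$ (or the mirror condition in the lower half-plane).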
We recall our choice of the characteristics $$\label{eq:characrecall} \partial_t z_{i,t}=-m(z_{i,t})-\frac{z_{i,t}}{2}.$$ Additionally, we record the following trivially checkable integration rules for any $\alpha \ge 1$: $$\label{eq:newintrule} \int_0^t\frac{1}{\eta_{i,s}^\alpha}\,\mathrm{d}s\lesssim \frac{\log N}{\eta_{i,t}^{\alpha-1}\rho_{i,t}} \qquad \text{and} \qquad \int_0^t\frac{1}{\eta_{s}^\alpha}\,\mathrm{d}s\lesssim \frac{\log N}{\eta_{t}^{\alpha-2}\hat{\ell}_{t}} \quad \text{with} \quad \eta_t:=\min_i\eta_{i,t}\,, \quad \hat{\ell}_t := \min_i \eta_{i,t} \rho_{i,t}\,.$$ Note that, in general, $\hat{\ell}$ differs from $\ell$, introduced in Theorem [Theorem 4](#thm:main){reference-type="ref" reference="thm:main"}. However, in case that every resolvent $\mathcal{G}$ in a given chain is an $\mathrm{Im}\,G$, i.e. $\mathfrak{I}$ is the full set of indices, then it holds that $\hat{\ell} = \ell$. The notation 'hat' will be consistently used to indicate that a chain contains only $\mathrm{Im}\,G$'s (see [\[eq:deflongchaingsim\]](#eq:deflongchaingsim){reference-type="eqref" reference="eq:deflongchaingsim"}--[\[eq:deflongchaingsimbar\]](#eq:deflongchaingsimbar){reference-type="eqref" reference="eq:deflongchaingsimbar"} below). Using the short--hand notation $G_{i,t}:=(W_t-z_{i,t})^{-1}$ with $W_t$ being the solution of [\[eq:OUOUOU\]](#eq:OUOUOU){reference-type="eqref" reference="eq:OUOUOU"}, we now compute the derivative (recall [\[eq:Mdefim\]](#eq:Mdefim){reference-type="eqref" reference="eq:Mdefim"}) $$\label{eq:timederiv} \mathrm{d}\langle (\mathrm{Im}\,G_{1,t} A_1 ... \mathrm{Im}\,G_{k,t} -\mathcal{M}(z_{1,t}, A_1, ... , z_{k,t}; [k]) )A_k \rangle = ...$$ along the characteristics with the aid of Itô's formula. We point out that the following derivation of the flow holds for any deterministic matrices $A_i$, i.e. in this derivation we do not assume that $\langle A_i\rangle=0$. 
We will again assume that the $A_i$ are traceless only later, starting from the beginning of Section [4.1](#subsec:pureIM){reference-type="ref" reference="subsec:pureIM"}. The evolution for [\[eq:timederiv\]](#eq:timederiv){reference-type="eqref" reference="eq:timederiv"} (see [\[eq:flowkaim\]](#eq:flowkaim){reference-type="eqref" reference="eq:flowkaim"} below) is obtained by multilinearity from the analogous formula for the time derivative of a resolvent chain without any imaginary parts. So first we compute $$\begin{split} \label{eq:flowka} \mathrm{d}\langle (G_{[1,k],t}-M_{{[1,k]},t})A_k\rangle&=\frac{1}{\sqrt{N}}\sum_{a,b=1}^N \partial_{ab} \langle {G}_{[1,k],t}A_k\rangle\mathrm{d}B_{ab,t}+\frac{k}{2}\langle {G}_{[1,k],t}A_k\rangle\mathrm{d}t +\sum_{i,j=1\atop i< j}^k\langle {G}_{[i,j],t}\rangle\langle {G}_{[j,i],t}\rangle\mathrm{d}t \\ &\quad+\sum_{i=1}^k \langle G_{i,t}-m_{i,t}\rangle \langle {G}^{(i)}_{[1,k],t}A_k\rangle\mathrm{d}t -\partial_t \langle M_{[1,k],t}A_k\rangle \mathrm{d}t +\frac{\sigma}{N}\sum_{i,j=1\atop i\le j}^k\langle G_{[i,j],t}G_{[j,i],t}^\mathfrak{t}\rangle\mathrm{d}t \,, \end{split}$$ where $\partial_{ab}:=\partial_{w_{ab}}$ denotes the directional derivative in the direction $w_{ab}=w_{ab}(t):=(W_t)_{ab}$. Here we introduced the notation $${{G}}_{[{i}, {j}],t}:=\begin{cases} G_{i,t}A_i\dots A_{j-1}G_{j,t} & \mathrm{if}\quad i<j \\ G_{i,t} & \mathrm{if}\quad i=j \\ G_{i,t}A_{i}\dots G_{k,t}A_k G_{1,t}A_1\dots A_{j-1} G_{j,t} &\mathrm{if}\quad i>j\,, \end{cases}$$ and analogously for the deterministic approximation $M_{[i,j],t}$ (cf. [\[eq:Mdef\]](#eq:Mdef){reference-type="eqref" reference="eq:Mdef"}). Furthermore, we define ${{G}}_{[{i}, {j}],t}^{(l)}$ exactly as ${{G}}_{[{i},{j}],t}$ but with the $l$--th factor $G_{l,t}$ substituted by $G_{l,t}^2$.
For the last term in [\[eq:flowka\]](#eq:flowka){reference-type="eqref" reference="eq:flowka"} we used the convention that $\langle G_{[i,j],t}^\mathfrak{t}G_{[j,i],t}\rangle=\langle G_{i,t}^\mathfrak{t}G_{i,t}A_{i+1}G_{[i+1,i],t}\rangle$ for $j=i$. In order to write the derivative [\[eq:timederiv\]](#eq:timederiv){reference-type="eqref" reference="eq:timederiv"} in a manageable form, we need to introduce some further short--hand notations. Set $$\label{eq:deflongchaingsim} \widehat{{G}}_{[\hat{i}, \hat{j}],t}:=\begin{cases} \mathrm{Im}\,G_{i,t}A_i\dots A_{j-1}\mathrm{Im}\,G_{j,t} & \mathrm{if}\quad i<j \\ \mathrm{Im}\,G_{i,t} & \mathrm{if}\quad i=j \\ \mathrm{Im}\,G_{i,t}A_{i}\dots \mathrm{Im}\,G_{k,t}A_k\mathrm{Im}\,G_{1,t}A_1\dots A_{j-1}\mathrm{Im}\,G_{j,t} &\mathrm{if}\quad i>j, \end{cases}$$ and define $\widehat{{G}}_{[\hat{i}, \hat{j}],t}^{(l)}$ exactly as $\widehat{{G}}_{[\hat{i},\hat{j}],t}$ except that the $l$--th factor $\mathrm{Im}\,G_{l,t}$ is substituted with $G_{l,t}\mathrm{Im}\,G_{l,t}$. Similarly, $\widehat{{G}}_{[\hat{i}, \hat{j}],t}^{(l^*)}$ is defined as $\widehat{{G}}_{[\hat{i}, \hat{j}],t}$ but with the $l$--th $\mathrm{Im}\,G_{l,t}$ substituted by $G^*_{l,t}\mathrm{Im}\,G_{l,t}$. Furthermore, we also define $$\label{eq:deflongchaingsimbar} \widehat{{G}}_{[\hat{i}, j],t}:=\begin{cases} \mathrm{Im}\,G_{i,t}A_i\dots A_{j-1} G_{j,t} & \mathrm{if}\quad i<j \\ G_{i,t} & \mathrm{if}\quad i=j \\ \mathrm{Im}\,G_{i,t}A_{i}\dots \mathrm{Im}\,G_{k,t}A_k\mathrm{Im}\,G_{1,t}A_1\dots A_{j-1} G_{j,t} &\mathrm{if}\quad i>j; \end{cases}$$ note that the absent hat on the $j$ index indicates that the last resolvent $G_{j,t}$ is without imaginary part.
We also define $\widehat{G}_{[i^*, \hat{j}],t}$ by replacing $\mathrm{Im}\,G_{i,t}$ with $G_{i,t}^*$ in [\[eq:deflongchaingsim\]](#eq:deflongchaingsim){reference-type="eqref" reference="eq:deflongchaingsim"} and similarly $\widehat{G}_{[i^*,j],t}$ is defined by replacing $\mathrm{Im}\,G_{i,t}$ with $G_{i,t}^*$ and $\mathrm{Im}\,G_{j,t}$ with $G_{j,t}$ in [\[eq:deflongchaingsim\]](#eq:deflongchaingsim){reference-type="eqref" reference="eq:deflongchaingsim"}. In particular, the 'decorations' of $i$ and $j$ indicate whether $G_{i,t}$ and $G_{j,t}$ are really taken as plain resolvents (no decoration) or as adjoints (star) or with imaginary part (hat). We point out that throughout this entire section 'hat' on $G$ indicates that the chain contains only $\mathrm{Im}\,G_i$ unless specified as in [\[eq:deflongchaingsimbar\]](#eq:deflongchaingsimbar){reference-type="eqref" reference="eq:deflongchaingsimbar"}. Finally, we use similar notations for the corresponding deterministic approximation $\widehat{M}_{[i^\#,j^\#],t}$ whose 'undecorated' version was defined in [\[eq:Mdef\]](#eq:Mdef){reference-type="eqref" reference="eq:Mdef"}. Here $\#$ indicates one of the possible 'decorations', i.e. star, hat or no decoration, and the corresponding change entails modifying the factor $(x-z_i)^{-1}$ in [\[msc dd\]](#msc dd){reference-type="eqref" reference="msc dd"} to $(x-\bar z_i)^{-1}$ in case of star, and to $\mathrm{Im}\,(x-\bar z_i)^{-1}$ in case of hat (as in [\[eq:Mdefim\]](#eq:Mdefim){reference-type="eqref" reference="eq:Mdefim"}--[\[eq:Mdivdiff\]](#eq:Mdivdiff){reference-type="eqref" reference="eq:Mdivdiff"}). The time derivative of the deterministic term in [\[eq:timederiv\]](#eq:timederiv){reference-type="eqref" reference="eq:timederiv"} is obtained by the following lemma, the proof of which is given in Appendix [6](#sec:addtech){reference-type="ref" reference="sec:addtech"}. **Lemma 12**.
*For any $k\ge 1$ we have $$\begin{aligned} \label{partialt} \partial_t \langle \widehat{M}_{[\hat{1},\hat{k}],t}A_{k}\rangle=\frac{k}{2}\langle \widehat{M}_{[\hat{1},\hat{k}],t}A_{k}\rangle+\sum_{i,j=1\atop i< j}^k\langle \widehat{M}_{[\hat{i}, j],t}\rangle\langle \widehat{{M}}_{[\hat{j}, i],t}\rangle+\sum_{i,j=1\atop i< j}^k\langle \widehat{M}_{[i^*, \hat{j}],t}\rangle\langle \widehat{{M}}_{[j^*, \hat{i}],t}\rangle \\ +\sum_{i,j=1\atop i< j}^k\langle \widehat{M}_{[\hat{i}, \hat{j}],t}\rangle\langle \widehat{{M}}_{[j^*, i],t}\rangle+\sum_{i,j=1\atop i< j}^k\langle \widehat{M}_{[i^*,j],t}\rangle\langle \widehat{{M}}_{[\hat{j}, \hat{i}],t}\rangle. \nonumber \end{aligned}$$* Hence, by Itô's formula, for any $k\ge 1$, the evolution of $\widehat{G}_{[\hat{1},\hat{k}],t}$ is given by (for brevity we omit the $\mathrm{d}t$ differentials) $$\begin{split} \label{eq:flowkaim} &\mathrm{d}\langle (\widehat{{G}}_{[\hat{1}, \hat{k}],t}-\widehat{M}_{[\hat{1},\hat{k}],t})A_k\rangle \\ &=\frac{1}{\sqrt{N}}\sum_{a,b=1}^N \partial_{ab} \langle \widehat{{G}}_{[\hat{1}, \hat{k}],t}A_k\rangle\mathrm{d}B_{ab}+\frac{k}{2}\langle (\widehat{{G}}_{[\hat{1}, \hat{k}],t}-\widehat{M}_{[\hat{1}, \hat{k}],t})A_k\rangle +\Omega_1 + \Omega_2 + \Omega_3 +\Omega_4+ \Omega_\sigma \\ &\quad+\sum_{i=1}^k \langle G_{i,t}-m_{i,t}\rangle \langle \widehat{{G}}^{(i)}_{[\hat{1},\hat{k}],t}A_k\rangle+\sum_{i=1}^k \langle G_{i,t}^*-\overline{m_{i,t}}\rangle \langle \widehat{{G}}_{[\hat{1}, \hat{k}],t}^{(i^*)}A_k\rangle+ \langle \widehat{{G}}_{[\hat{1}, \hat{k}],t}A_k\rangle\sum_{i=1}^k \frac{\langle \mathrm{Im}\,G_{i,t}-\mathrm{Im}\,m_{i,t}\rangle}{\mathrm{Im}\,z_{i,t}}\, , \end{split}$$ where $$\small \begin{split} \Omega_1: & = \sum_{i,j=1\atop i< j}^k\left[\langle \widehat{{G}}_{[\hat{i}, j],t}-\widehat{M}_{[\hat{i},j],t}\rangle\langle \widehat{{M}}_{[\hat{j},i],t}\rangle+\langle \widehat{{M}}_{[\hat{i}, j],t}\rangle\langle \widehat{{G}}_{[\hat{j},i],t}-\widehat{M}_{[\hat{j}, i],t}\rangle+ \langle 
\widehat{{G}}_{[\hat{i}, j],t}-\widehat{M}_{[\hat{i},j],t}\rangle\langle \widehat{{G}}_{[\hat{j},i],t}-\widehat{M}_{[\hat{j}, i],t}\rangle \right],\\ \Omega_2: &= \sum_{i,j=1\atop i< j}^k\left[\langle \widehat{{G}}_{[i^*, \hat{j}],t}-\widehat{M}_{[i^*, \hat{j}],t}\rangle\langle \widehat{{M}}_{[j^*,\hat{i}],t}\rangle+\langle \widehat{{M}}_{[i^*, \hat{j}],t}\rangle\langle \widehat{{G}}_{[j^*,\hat{i}],t}-\widehat{M}_{[j^*,\hat{i}],t}\rangle+ \langle \widehat{{G}}_{[i^*, \hat{j}],t}-\widehat{M}_{[i^*, \hat{j}],t}\rangle\langle \widehat{{G}}_{[j^*,\hat{i}],t}-\widehat{M}_{[j^*,\hat{i}],t}\rangle\right], \\ \Omega_3:&= \sum_{i,j=1\atop i< j}^k\left[\langle \widehat{{G}}_{[\hat{i}, \hat{j}],t}-\widehat{M}_{[\hat{i},\hat{j}],t}\rangle\langle \widehat{{M}}_{[j^*,i],t}\rangle+\langle \widehat{{M}}_{[\hat{i}, \hat{j}],t}\rangle\langle \widehat{{G}}_{[j^*,i],t}-\widehat{M}_{[j^*, i],t}\rangle+ \langle \widehat{{G}}_{[\hat{i}, \hat{j}],t}-\widehat{M}_{[\hat{i},\hat{j}],t}\rangle\langle \widehat{{G}}_{[j^*,i],t}-\widehat{M}_{[j^*, i],t}\rangle\right],\\ \Omega_4:&=\sum_{i,j=1\atop i< j}^k\left[\langle \widehat{{G}}_{[i^*, j],t}-\widehat{M}_{[i^*, j],t}\rangle\langle \widehat{{M}}_{[\hat{j},\hat{i}],t}\rangle+\langle \widehat{{M}}_{[i^*, j],t}\rangle\langle \widehat{{G}}_{[\hat{j},\hat{i}],t}-\widehat{M}_{[\hat{j},\hat{i}],t}\rangle+ \langle \widehat{{G}}_{[i^*, j],t}-\widehat{M}_{[i^*, j],t}\rangle\langle \widehat{{G}}_{[\hat{j},\hat{i}],t}-\widehat{M}_{[\hat{j},\hat{i}],t}\rangle\right], \\ \Omega_\sigma:&=\frac{\sigma}{N}\sum_{i,j=1\atop i\le j}^k\left[\langle G_{[\widehat{i},j],t}G_{[\widehat{j},i],t}^\mathfrak{t}\rangle+\langle G_{[i^*,\widehat{j}],t}G_{[j^*,\widehat{i}],t}^\mathfrak{t}\rangle+\langle G_{[\widehat{i},\widehat{j}],t}G_{[j^*,i],t}^\mathfrak{t}\rangle+\langle G_{[i^*,j],t}G_{[\widehat{j},\widehat{i}],t}^\mathfrak{t}\rangle\right]\,.
\end{split}$$ Observe that the flow [\[eq:flowkaim\]](#eq:flowkaim){reference-type="eqref" reference="eq:flowkaim"} for imaginary parts $\mathrm{Im}\,G$ contains many more terms than the corresponding flow for plain resolvents $G$ (see [\[eq:flowka\]](#eq:flowka){reference-type="eqref" reference="eq:flowka"}). This is a simple consequence of the fact that each time an $\mathrm{Im}\,G$ is differentiated, it creates two terms, i.e. $\partial_{ab} \mathrm{Im}\,G=G\Delta^{ab}\mathrm{Im}\,G+\mathrm{Im}\,G\Delta^{ab}G^*$, with $\Delta^{ab}$ being a matrix consisting of all zeroes except for the $(a,b)$--entry, which is equal to one. Furthermore, the novel last term in [\[eq:flowkaim\]](#eq:flowkaim){reference-type="eqref" reference="eq:flowkaim"} comes from applying a Ward identity, $GG^*= \mathrm{Im}\,G/\mathrm{Im}\,z$. We now write out the random part $\mathrm{d}\langle \widehat{{G}}_{[\hat{1}, \hat{k}],t}A_k\rangle$ of the flow [\[eq:flowkaim\]](#eq:flowkaim){reference-type="eqref" reference="eq:flowkaim"} for the simpler cases $k=1$ and $k=2$ to show its main structure. Here we use that $\widehat{M}_{\hat{1},t} = \mathrm{Im}\,m_{1,t}$ with $m_{i,t}:=m(z_{i,t})$. **Example 13**.
For $k=1$ we have the evolution $$\begin{split} \label{eq:eqk1im} \mathrm{d}\langle \mathrm{Im}\,G A\rangle&=\sum_{a,b=1}^N\partial_{ab} \langle \mathrm{Im}\,G A\rangle\frac{\mathrm{d}B_{ab}}{\sqrt{N}}+\left(\frac{1}{2}+\frac{\langle \mathrm{Im}\,G-\mathrm{Im}\,m\rangle}{\mathrm{Im}\,z_t}\right)\langle \mathrm{Im}\,G A\rangle+\langle G-m\rangle \langle \mathrm{Im}\,G A G\rangle \\ &\quad+\overline{\langle G-m\rangle} \langle \mathrm{Im}\,G A G^*\rangle+\frac{\sigma}{N}\langle \mathrm{Im}\,G A GG^\mathfrak{t}\rangle+\frac{\sigma}{N}\langle (G^*)^\mathfrak{t}G^*A \mathrm{Im}\,G A \rangle+\frac{\sigma}{N}\langle \mathrm{Im}\,G^\mathfrak{t} G^*A G\rangle\,, \end{split}$$ and for $k=2$ we get (for keeping the formula somewhat short, we assume $\sigma=0$) $$\label{eq:imgflowsdetsub} \begin{split} \mathrm{d}\langle \mathrm{Im}\,G_1 A_1 \mathrm{Im}\,G_2 A_2\rangle &=\sum_{a,b=1}^N\partial_{ab} \langle \mathrm{Im}\,G_1A_1 \mathrm{Im}\,G_2 A_2\rangle\frac{\mathrm{d}B_{ab}}{\sqrt{N}}+\langle \mathrm{Im}\,G_1 A_1\mathrm{Im}\,G_2 A_2\rangle \\ &\quad+\left(\frac{\langle \mathrm{Im}\,G_1 -\mathrm{Im}\,m_1\rangle}{\mathrm{Im}\,z_{1,t}}+\frac{\langle \mathrm{Im}\,G_2 -\mathrm{Im}\,m_2\rangle}{\mathrm{Im}\,z_{2,t}}\right)\langle \mathrm{Im}\,G_1 A_1\mathrm{Im}\,G_2 A_2\rangle+\langle G_2^*A_2G_1\rangle \langle \mathrm{Im}\,G _1A_1\mathrm{Im}\,G_2\rangle \\ &\quad+\langle G_1^* A_1 G_2 \rangle\langle \mathrm{Im}\,G_2 A_2 \mathrm{Im}\,G_1\rangle+\langle \mathrm{Im}\,G_1 A_1 G_2\rangle\langle\mathrm{Im}\,G_2A_2G_1\rangle +\langle G_2^*A_2\mathrm{Im}\,G_1\rangle\langle G_1^*A_1\mathrm{Im}\,G_2\rangle \\ &\quad+\langle G_1-m_1\rangle \langle \mathrm{Im}\,G_1 A_1\mathrm{Im}\,G_2 A_2 G_1\rangle +\langle G_2-m_2\rangle \langle \mathrm{Im}\,G_2 A_2\mathrm{Im}\,G_1 A_1 G_2\rangle \\ &\quad+\langle G_1^*-\overline{m_1}\rangle \langle \mathrm{Im}\,G_1 A_1\mathrm{Im}\,G_2 A_2 G_1^*\rangle+\langle G_2^*-\overline{m_2}\rangle \langle \mathrm{Im}\,G_2 A_2\mathrm{Im}\,G_1 A_1 G_2^*\rangle. 
\end{split}$$ Note that [\[eq:eqk1im\]](#eq:eqk1im){reference-type="eqref" reference="eq:eqk1im"}--[\[eq:imgflowsdetsub\]](#eq:imgflowsdetsub){reference-type="eqref" reference="eq:imgflowsdetsub"} combined with [\[partialt\]](#partialt){reference-type="eqref" reference="partialt"} give [\[eq:flowkaim\]](#eq:flowkaim){reference-type="eqref" reference="eq:flowkaim"} for the special cases $k=1,2$. ## Proof of Proposition [Proposition 10](#prop:zig){reference-type="ref" reference="prop:zig"} (a) for pure $\mathrm{Im}\,G$-chains {#subsec:pureIM} The goal of this section is to prove $$\label{eq:goaloptaIms} \langle \widehat{G}_{[\widehat{1},\widehat{k}],T}A_k \rangle - \langle \widehat{M}_{[\widehat{1},\widehat{k}],T}A_k \rangle = \langle \widehat{G}_{[\widehat{1},\widehat{k}],0}A_k \rangle - \langle \widehat{M}_{[\widehat{1},\widehat{k}],0}A_k \rangle+ \mathcal{O}_\prec \left(\Big(\prod_{i\in [k]} \rho_{i,T}\Big)\frac{N^{k/2 - 1}}{\sqrt{N \ell_{T}}} \prod_{j \in [k]} \langle |A_j|^2 \rangle^{1/2}\right),$$ uniformly in the spectrum and in the choice of traceless matrices $A_i$. We may assume that all the $A_i$'s are Hermitian; the general case follows by multilinearity. 
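The reduction to Hermitian $A_i$'s rests on the elementary decomposition of an arbitrary (traceless) matrix into a complex-linear combination of two Hermitian traceless matrices. Purely as an illustration, here is a minimal NumPy sketch of this decomposition (all variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 6

# A generic (non-Hermitian) traceless test matrix playing the role of A_j.
A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
A -= (np.trace(A) / N) * np.eye(N)

# Decompose A = H1 + i*H2 with H1, H2 Hermitian; both inherit tracelessness.
H1 = (A + A.conj().T) / 2
H2 = (A - A.conj().T) / 2j

assert np.allclose(H1, H1.conj().T) and np.allclose(H2, H2.conj().T)
assert abs(np.trace(H1)) < 1e-12 and abs(np.trace(H2)) < 1e-12
assert np.allclose(A, H1 + 1j * H2)
```

Since every chain is multilinear in the $A_j$'s, a bound for Hermitian traceless observables thus extends to all traceless observables at the expense of a constant factor.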
### Master inequalities {#subsec:master} For the purpose of proving [\[eq:goaloptaIms\]](#eq:goaloptaIms){reference-type="eqref" reference="eq:goaloptaIms"}, recall the notation $\hat{\ell}_t = \min_i \eta_{i,t} \rho_{i,t}$ from [\[eq:newintrule\]](#eq:newintrule){reference-type="eqref" reference="eq:newintrule"} and define [\[eq:defphi\]]{#eq:defphi label="eq:defphi"} $$\Phi_1(t) :=\frac{N \sqrt{\hat{\ell}_t}}{\rho_t \langle |A|^2 \rangle^{1/2}}\big|\langle G_tA\rangle\big|;$$ and for $k\ge 2$ $$\Phi_k(t):= \frac{\sqrt{N \hat{\ell}_{t}}}{N^{k/2 - 1} \, \Big(\prod_{i\in [k]} \rho_{i,t}\Big) \prod_{j \in [k]} \langle |A_j|^2 \rangle^{1/2}} \big| \langle (\widehat{G}_{[\widehat{1},\widehat{k}],t}- \widehat{M}_{[\widehat{1},\widehat{k}],t})A_k \rangle\big|\,.$$ Note that we defined $\Phi_1(t)$ in a slightly different way than $\Phi_k(t)$ for $k\ge 2$; this is a consequence of the fact that for $k=1$ we have $| \langle GA\rangle |\sim |\langle \mathrm{Im}\,GA\rangle|$, i.e. for this special case the imaginary part does not reduce the fluctuation, unlike for longer chains (see also Remark [Remark 6](#rmk:MHS){reference-type="ref" reference="rmk:MHS"} (ii)). The prefactors in [\[eq:defphi\]](#eq:defphi){reference-type="eqref" reference="eq:defphi"} are chosen such that we expect $\Phi_k(t)$ to be an essentially order one quantity; see [\[eq:goaloptaIms\]](#eq:goaloptaIms){reference-type="eqref" reference="eq:goaloptaIms"}. The goal is to show exactly this, i.e. that $\Phi_k(t)\prec 1$, uniformly in time $t\le T$ for any $k\ge 1$. Note that by [\[eq:inass0\]](#eq:inass0){reference-type="eqref" reference="eq:inass0"} it follows that $$\label{eq:initialphi} \Phi_k(0)\prec 1,$$ for any $k\ge 1$. To prove $\Phi_k(t)\prec 1$, we will derive a series of *master inequalities* for these quantities with the following structure.
We assume that $$\label{phiphi} \Phi_k(t)\prec\phi_k$$ holds for some deterministic control parameter $\phi_k$, *uniformly* in $0\le t\le T$, in spectral parameters satisfying $N\hat{\ell}_{t}\ge N^\epsilon$ and in traceless deterministic matrices $A_j$ (we stress that $\phi_k$'s depend neither on time, nor on the spectral parameters $z_{i,t}$, nor on the matrices $A_j$). Given this input, we will then show that $\Phi_k(t)$'s also satisfy a better upper bound in terms of $\phi$'s. Iterating this procedure we will arrive at the final bound $\Phi_k(t) \prec 1$. **Proposition 14** (Master inequalities). *Fix $k\in {\mathbb N}$ and $t\in [0,T]$. Assume that $\Phi_l(s)\prec \phi_l$ for any $1\le l\le 2k$ uniformly in $s\in [0,t]$, in the spectral parameters with $N\hat{\ell}_{s}\ge N^\epsilon$ and in the traceless deterministic matrices $A_j$. Set $\phi_0 :=1$. Then we have the *master inequalities* $$\begin{split} \label{eq:AVmasterITERATE} \Phi_k(t)\prec 1+\frac{\sqrt{\phi_{2k}}}{(N \hat{\ell}_{t})^{1/4}}+ \frac{1}{N \hat{\ell}_t}\sum_{l=1}^{k} \tilde{\phi}_l +\frac{1}{(N\hat{\ell}_{t})^{3/2}}\sum_{l=1}^{k-1} \tilde{\phi}_l \tilde{\phi}_{k-l}+\frac{|\sigma|}{(N\hat{\ell}_t)^{1/4}}\sum_{l=1}^k\sqrt{\phi_{2l}}+\frac{|\sigma|}{N\hat{\ell}_t}\sum_{l=0}^k \sqrt{\phi_{2l}\phi_{2(k-l)}} \,, \end{split}$$ where we introduced the shorthand notation $$\label{tildephi} \tilde{\phi}_l := \phi_l + \boldsymbol{1}(l \ \mathrm{is \;\; odd}) \sqrt{\phi_{l+1} \phi_{l-1}}, \quad \text{for} \quad l \in [k-1]\,.$$* Using the master inequalities, we conclude this section with the proof of [\[eq:flowgimg\]](#eq:flowgimg){reference-type="eqref" reference="eq:flowgimg"} for pure $\mathrm{Im}\,G$ chains. 
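To get a feeling for the structure of [\[eq:AVmasterITERATE\]](#eq:AVmasterITERATE){reference-type="eqref" reference="eq:AVmasterITERATE"}, one may evaluate its right-hand side numerically for constant control parameters. The following schematic Python sketch (all function and parameter names are ours; we take $\sigma=1$ and set $\phi_0=1$ as in the proposition) illustrates that $\phi_l\equiv 1$ is self-consistent once $N\hat{\ell}_t\gg 1$:

```python
import math

def tilde(phi, l):
    # tilde(phi)_l = phi_l + 1(l odd) * sqrt(phi_{l+1} * phi_{l-1}), with phi_0 = 1.
    return phi[l] + (math.sqrt(phi[l + 1] * phi[l - 1]) if l % 2 else 0.0)

def master_rhs(phi, k, Nl, sigma=1.0):
    # Schematic right-hand side of the master inequality for Phi_k,
    # with Nl playing the role of N * l_hat_t.
    return (1.0
            + math.sqrt(phi[2 * k]) / Nl ** 0.25
            + sum(tilde(phi, l) for l in range(1, k + 1)) / Nl
            + sum(tilde(phi, l) * tilde(phi, k - l) for l in range(1, k)) / Nl ** 1.5
            + abs(sigma) * sum(math.sqrt(phi[2 * l]) for l in range(1, k + 1)) / Nl ** 0.25
            + abs(sigma) * sum(math.sqrt(phi[2 * l] * phi[2 * (k - l)]) for l in range(k + 1)) / Nl)

k = 3
phi = [1.0] * (2 * k + 1)                  # constant control parameters phi_l = 1
assert master_rhs(phi, k, Nl=10**4) < 2.0  # the right-hand side stays of order one
```

This is only a toy evaluation of the deterministic bound, not part of the proof; the actual argument iterates the inequality in the sense of stochastic domination.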
*Proof of Proposition [Proposition 10](#prop:zig){reference-type="ref" reference="prop:zig"} (a) for pure $\mathrm{Im}\,G$ chains.* We now consider the master inequalities [\[eq:AVmasterITERATE\]](#eq:AVmasterITERATE){reference-type="eqref" reference="eq:AVmasterITERATE"} for $t=T$, with $T$ the time defined in the statement of Proposition [Proposition 10](#prop:zig){reference-type="ref" reference="prop:zig"}. We use a two-step induction. The base case consists of the cases $k=1,2$ (using $|\sigma|\le 1$): $$\begin{split} \label{eq:mast12phi} \Phi_1(T)&\prec 1+\frac{\sqrt{\phi_2}}{(N\hat{\ell}_T)^{1/4}}+\frac{\phi_1}{N\hat{\ell}_T}, \\ \Phi_2(T)&\prec 1+\frac{\sqrt{\phi_4}+\sqrt{\phi_2}}{(N\hat{\ell}_T)^{1/4}}+ \frac{\phi_2 + \phi_1 + \sqrt{\phi_2}}{N\hat{\ell}_T}+\frac{\phi_1^2 + \phi_1 \sqrt{\phi_2} + \phi_2}{(N\hat{\ell}_T)^{3/2}}. \end{split}$$ To estimate $\phi_4$ in [\[eq:mast12phi\]](#eq:mast12phi){reference-type="eqref" reference="eq:mast12phi"} we rely on the following *reduction inequality*; its proof is given in Appendix [6](#sec:addtech){reference-type="ref" reference="sec:addtech"}. **Lemma 15** (Reduction inequality). *Fix $k\ge 2$, and assume that $\Phi_l(t)\prec \phi_l$ holds uniformly[^12] in $t\in [0,T]$ for $0\le l\le 2k$. Then $$\label{eq:redinphi} \Phi_{2k}(T)\prec \begin{cases} (N\hat{\ell}_T)^{1/2}+\frac{1}{(N \hat{\ell}_T)^{1/2}}\phi_k^2 \quad & k \,\, \mathrm{even} \\ (N\hat{\ell}_T)^{1/2}+\phi_{k-1}+\phi_{k+1}+ \frac{1}{(N \hat{\ell}_T)^{1/2}}\phi_{k+1}\phi_{k-1} \quad & k \,\, \mathrm{odd}. \end{cases}$$* The following abstract iteration lemma shows how to use the master inequalities for improving the bound on $\Phi$. **Lemma 16** (Iteration). *Let $X=X_N(\hat{\ell})$ be an $N$-dependent random variable depending also on the parameter $\hat{\ell}$. Fix $\epsilon,\delta>0$. 
Suppose that for any $l\in {\mathbb N}$ and any $x>0$ the fact that $X\prec x$ uniformly for $\hat{\ell}\ge N^{-1+l\epsilon}$ implies $$\label{eq:iterationrep} X\prec A+\frac{x}{B}+x^{1-\alpha}C^\alpha,$$ uniformly for $\hat{\ell}\ge N^{-1+(l+l')\epsilon}$, for some constants $l'\in {\mathbb N}$, $B\ge N^\delta>0$, $A,C>0$, and $\alpha\in (0,1)$, and suppose we also know that $X\prec N^D$ uniformly[^13] in $\hat{\ell}\ge N^{-1+\epsilon}$. Then $$X\prec A+C,$$ uniformly for $\hat{\ell}\ge N^{-1+(1+ \kappa l')\epsilon}$, for some $\kappa=\kappa(\alpha,D,\delta)$.* *Proof.* The proof is a simple iteration of [\[eq:iterationrep\]](#eq:iterationrep){reference-type="eqref" reference="eq:iterationrep"} $\kappa$ times; it is immediate to see that $\kappa$ depends only on $\alpha,D,\delta$. ◻ Notice that using Lemma [Lemma 16](#lem:iteration){reference-type="ref" reference="lem:iteration"} reduces the domain of parameters $\eta_i,\rho_i$ for which the master inequalities [\[eq:AVmasterITERATE\]](#eq:AVmasterITERATE){reference-type="eqref" reference="eq:AVmasterITERATE"} hold, e.g. from $\hat{\ell}_T\ge N^{-1+l\epsilon}$ to $\hat{\ell}_T\ge N^{-1+(l+l')\epsilon}$, and so on. However, this can happen only finitely many times, and so it does not affect the estimates in the sense of stochastic domination that always allows for a small $N$-power tolerance that can be achieved by adjusting $\epsilon$ small enough. For simplicity, we ignore this subtlety here, see [@iidpaper Sections 4.1--4.3] for a more detailed explanation. 
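The mechanism behind Lemma [Lemma 16](#lem:iteration){reference-type="ref" reference="lem:iteration"} can be illustrated by a scalar toy model: iterating $x\mapsto A+x/B+x^{1-\alpha}C^\alpha$ from a crude a priori bound of size $N^D$ settles, after finitely many steps, at order $A+C$. A minimal sketch (the numerical values are ours, chosen only for illustration):

```python
# Toy model of the iteration in the lemma: X -> A + X/B + X^{1-alpha} * C^alpha,
# started from a crude a priori bound (playing the role of N^D).
A, B, C, alpha = 1.0, 100.0, 1.0, 0.5

x = 1e10  # crude initial bound
for _ in range(40):
    x = A + x / B + x ** (1 - alpha) * C ** alpha

# After finitely many iterations the bound settles at order A + C.
assert x < 3 * (A + C)
```

Each iteration roughly multiplies the excess over $A+C$ by $\max(1/B, (C/x)^\alpha)<1$, which is why the number of required steps $\kappa$ depends only on $\alpha$, $D$ and $\delta$.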
Using iteration from Lemma [Lemma 16](#lem:iteration){reference-type="ref" reference="lem:iteration"} and the reduction inequality [\[eq:redinphi\]](#eq:redinphi){reference-type="eqref" reference="eq:redinphi"} for $k=2$ we obtain $$\Phi_1(T)\prec 1+\frac{\sqrt{\phi_2}}{(N\hat{\ell}_T)^{1/4}} \quad \text{and} \quad \Phi_2(T)\prec 1+\frac{\phi_1}{N\hat{\ell}_T}+\frac{\phi_1^2}{(N\hat{\ell}_T)^{3/2}}.$$ Then, plugging the first relation into the second, and using iteration again we conclude $$\Phi_1(T)\prec 1 \quad \text{and} \quad \Phi_2(T)\prec 1\,.$$ To prove the same relation for $\Phi_l(T)$ with $l\ge 3$, we use a step-two induction. Fix an even $k\ge 4$ and assume as our induction hypothesis that $\Phi_l(T)\prec 1$ for any $1\le l\le k-2$. We now prove that $\Phi_l(T)\prec 1$ also holds for $l=k-1, k$. From [\[eq:AVmasterITERATE\]](#eq:AVmasterITERATE){reference-type="eqref" reference="eq:AVmasterITERATE"}, using $N \hat{\ell}_T \ge 1$ and the induction hypothesis $\Phi_l(T)\prec \phi_l:=1$ for $1\le l\le k-2$, we have $$\begin{split} \Phi_{k-1}(T)&\prec 1+\frac{\sqrt{\phi_{2(k-1)}}}{(N\hat{\ell}_T)^{1/4}} + \frac{\phi_{k-1} + \sqrt{\phi_k}}{N \hat{\ell}_T}+\frac{1}{(N\hat{\ell}_T)^{1/4}}\sum_{l=k/2}^{k-1}\sqrt{\phi_{2l}}\,, \\ \Phi_k(T)&\prec 1+\frac{\sqrt{\phi_{2k}}}{(N\hat{\ell}_T)^{1/4}}+ \frac{\phi_k + \phi_{k-1} + \sqrt{\phi_k}}{N\hat{\ell}_T}+\frac{1}{(N\hat{\ell}_T)^{1/4}}\sum_{l=k/2}^k\sqrt{\phi_{2l}} \,. 
\end{split}$$ Then using [\[eq:redinphi\]](#eq:redinphi){reference-type="eqref" reference="eq:redinphi"} and iteration from Lemma [Lemma 16](#lem:iteration){reference-type="ref" reference="lem:iteration"} together with $\phi_l= 1$ for any $1\le l\le k-2$ and $N \hat{\ell}_T \ge 1$, we obtain $$\Phi_{k-1}(T)\prec 1+\frac{\sqrt{\phi_{k}}}{(N\hat{\ell}_T)^{1/4}} \quad \text{and} \quad \Phi_k(T)\prec 1+ \frac{ \phi_{k-1} }{N\hat{\ell}_T} \,.$$ Plugging the first relation into the second, we obtain by iteration that $$\Phi_{k-1}(T)\prec 1 \qquad \text{and} \qquad \Phi_k(T)\prec 1\,.$$ This concludes the induction step and hence the proof of Proposition [Proposition 10](#prop:zig){reference-type="ref" reference="prop:zig"} (a) modulo the proof of the master inequalities, Proposition [Proposition 14](#pro:masterinIM){reference-type="ref" reference="pro:masterinIM"}, that will be done next. ◻ ### Proof of Proposition [Proposition 14](#pro:masterinIM){reference-type="ref" reference="pro:masterinIM"} {#subsec:pfmasterAV} As a preparation for the proof of the master inequalities (Proposition [Proposition 14](#pro:masterinIM){reference-type="ref" reference="pro:masterinIM"}), we recall that $t \mapsto \eta_{i,t}$ is decreasing and $\rho_{i,s} \sim \rho_{i,t}$ for any $0 \le s \le t\lesssim1$ (see [\[eq:characteristics\]](#eq:characteristics){reference-type="eqref" reference="eq:characteristics"}, [\[eq:characrecall\]](#eq:characrecall){reference-type="eqref" reference="eq:characrecall"}, and the paragraphs around). *Proof of Proposition [Proposition 14](#pro:masterinIM){reference-type="ref" reference="pro:masterinIM"}.* We begin with the case $k=1$. 
Hence, for $A_1=A$, we start by rewriting the flow [\[eq:eqk1im\]](#eq:eqk1im){reference-type="eqref" reference="eq:eqk1im"} with $\mathrm{Im}\,G$ replaced by $G= G_t(z_t)$ (recall [\[eq:defphi\]](#eq:defphi){reference-type="eqref" reference="eq:defphi"}): $$\label{eq:eq1} \mathrm{d}\langle GA\rangle=\sum_{a,b=1}^N\partial_{ab}\langle GA\rangle\frac{\mathrm{d}B_{ab}}{\sqrt{N}}+\frac{1}{2}\langle GA\rangle \mathrm{d}t+\langle G-m\rangle\langle G^2A\rangle \mathrm{d}t+\frac{\sigma}{N}\langle GAGG^\mathfrak{t}\rangle\mathrm{d}t \,.$$ We point out that the additional term $\frac{1}{2}\langle GA\rangle$ in the rhs. of [\[eq:eq1\]](#eq:eq1){reference-type="eqref" reference="eq:eq1"} can be incorporated into the lhs. by differentiating $e^{-t/2}\langle GA\rangle$; the extra exponential factor is irrelevant since $e^{t/2}\sim 1$ for our times $t\lesssim 1$. Note that the same argument applies to the term $$\frac{k}{2}\langle (\widehat{{G}}_{[\hat{1}, \hat{k}],t}-\widehat{M}_{[\hat{1}, \hat{k}],t})A_k\rangle$$ appearing in [\[eq:flowkaim\]](#eq:flowkaim){reference-type="eqref" reference="eq:flowkaim"} for general $k$. We are now ready to obtain the master inequality for $\Phi_1(t)$. Assume $\Phi_k(t)\prec \phi_k$ for $k=1,2$, in the sense of uniformity explained after [\[phiphi\]](#phiphi){reference-type="eqref" reference="phiphi"} (recall that $\Phi_1(0)\prec 1$ by [\[eq:initialphi\]](#eq:initialphi){reference-type="eqref" reference="eq:initialphi"}), and we will prove improved bounds on $\Phi_1(t)$. We first consider the third summand in [\[eq:eq1\]](#eq:eq1){reference-type="eqref" reference="eq:eq1"}. Here, we use the integral representation (see also [@iidpaper Lemma 5.1]) $$\label{eq:niceintrep} G^2(z) = \frac{1}{2 \pi \mathrm{i}} \oint_\Gamma \frac{G(w)}{(w-z)^2} \mathrm{d}w\,,$$ which simply follows from residue calculus. 
Here, $\Gamma$ is a tiny circle of radius $|\mathrm{Im}\,z|/2$ around $z \in {\mathbb C}\setminus {\mathbb R }$, which ensures that $|\mathrm{Im}\,w| |\mathrm{Im}\,m(w)| \sim |\mathrm{Im}\,z| |\mathrm{Im}\,m(z)|$ as follows by elementary continuity properties of $m(w)$. In this way, applying [\[eq:niceintrep\]](#eq:niceintrep){reference-type="eqref" reference="eq:niceintrep"} for every fixed time $s \le t$ and using the fact that the deterministic approximation of $\langle G^2 A \rangle$ vanishes as $\langle A\rangle =0$, we obtain (with the $G_s:= G_s(z_s)$, $m_s:=m(z_s)$ notation) $$\big|\langle G_s^2 A \rangle\big| \prec \frac{1}{\eta_s} \frac{\rho_s \langle|A|^2 \rangle^{1/2}}{N \sqrt{\hat{\ell}_s}} \phi_1\,.$$ Hence, in combination with the single resolvent local law $|\langle G_s-m_s\rangle|\prec 1/(N\eta_s)$, we find $$\label{psi11} \frac{N\sqrt{\hat{\ell}_t}}{\rho_t \langle|A|^2 \rangle^{1/2}} \int_0^t \langle G_s-m_s\rangle \langle G^2_s A \rangle\, \mathrm{d}s\prec \frac{N\sqrt{\hat{\ell}_t}}{\rho_t \langle|A|^2 \rangle^{1/2}} \int_0^t \phi_1\; \frac{\rho_s\langle|A|^2 \rangle^{1/2}}{N^2\eta_s^2\hat{\ell}_s^{1/2}}\,\mathrm{d}s\lesssim \frac{\phi_1}{N\hat{\ell}_t}\log N.$$ In the last step we used the integration estimate [\[eq:newintrule\]](#eq:newintrule){reference-type="eqref" reference="eq:newintrule"} and the fact that along the characteristics $\hat{\ell}_s\gtrsim \hat{\ell}_t$ for $0\le s\le t$. The prefactor $N \sqrt{\hat{\ell}_t}/(\rho_t \langle|A|^2 \rangle^{1/2})$ is included in anticipation of the same prefactor in the definition of $\Phi_1$ in [\[eq:defphi\]](#eq:defphi){reference-type="eqref" reference="eq:defphi"}. 
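Although not needed for the proof, the representation [\[eq:niceintrep\]](#eq:niceintrep){reference-type="eqref" reference="eq:niceintrep"} is easy to verify numerically for a small Hermitian test matrix by discretizing the circle $\Gamma$; a minimal NumPy sketch (all names are ours):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 5
H = rng.standard_normal((N, N))
H = (H + H.T) / np.sqrt(2)          # a small Hermitian test matrix

z = 0.3 + 0.2j
G = np.linalg.inv(H - z * np.eye(N))

# Discretize the circle Gamma of radius |Im z|/2 around z.
r = abs(z.imag) / 2
theta = np.linspace(0, 2 * np.pi, 2000, endpoint=False)
w = z + r * np.exp(1j * theta)
dw = 1j * r * np.exp(1j * theta) * (2 * np.pi / len(theta))

# (2*pi*i)^{-1} times the contour integral of G(w)/(w-z)^2, as a Riemann sum.
G2 = sum(np.linalg.inv(H - wi * np.eye(N)) / (wi - z) ** 2 * dwi
         for wi, dwi in zip(w, dw)) / (2j * np.pi)

assert np.allclose(G2, G @ G, atol=1e-8)
```

For the resolvent $G(z)=(H-z)^{-1}$ one has $\partial_z G = G^2$, so [\[eq:niceintrep\]](#eq:niceintrep){reference-type="eqref" reference="eq:niceintrep"} is just Cauchy's integral formula for the derivative; since the integrand is analytic in a neighbourhood of $\Gamma$, the trapezoidal discretization converges very fast.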
Then we proceed with the estimate of the quadratic variation of the martingale term in [\[eq:eq1\]](#eq:eq1){reference-type="eqref" reference="eq:eq1"}: $$\begin{split} \frac{1}{N}\sum_{a,b=1}^N \big[\big|\partial_{ab}\langle G_sA\rangle|^2+\sigma\partial_{ab}\langle G_sA\rangle\overline{\partial_{ba}\langle G_sA\rangle}\big]\mathrm{d}t&\lesssim\frac{1}{N^3}\sum_{a,b=1}^N \big|(G_sAG_s)_{ab}|^2\mathrm{d}t \\ &= \frac{1}{N^2}\langle G_s A G_sG_s^* A G_s^*\rangle\mathrm{d}t= \frac{1}{N^2\eta_t^2}\langle \mathrm{Im}\,G_s A \mathrm{Im}\,G_s A\rangle\mathrm{d}t, \end{split}$$ where we used that $\mathrm{d}[B_{ab},\overline{B_{cd}}]=\delta_{ac}\delta_{bd}+ \sigma\delta_{ad}\delta_{bc}$ and the Ward identity $GG^*=\frac{\mathrm{Im}\,G}{\mathrm{Im}\,z}$. Then, we write $$\langle \mathrm{Im}\,G_s A \mathrm{Im}\,G_s A\rangle =\langle \widehat{M}_{[\widehat{1},\widehat{2}],s}A\rangle + \Big( \langle \mathrm{Im}\,G_s A \mathrm{Im}\,G_s A\rangle -\langle \widehat{M}_{[\widehat{1},\widehat{2}],s}A\rangle \Big) \prec \rho_s^2 \langle |A|^2 \rangle+\frac{\rho_s^2 \langle |A|^2 \rangle}{\sqrt{N\hat{\ell}_s}} \phi_2\,.$$ Here we used that the deterministic approximation $\langle \widehat{M}_{[\widehat{1},\widehat{2}],s}A\rangle$ is bounded by $\rho_s^2 \langle |A|^2 \rangle$ and we used [\[eq:defphi\]](#eq:defphi){reference-type="eqref" reference="eq:defphi"} together with $\Phi_2(s)\prec \phi_2$. For the time integration of the quadratic variation term, with the appropriate prefactor, we obtain $$\label{eq:qv1} \begin{split} \frac{N\sqrt{\hat{\ell}_t}}{\rho_t \langle |A|^2 \rangle^{1/2}} & \left(\int_0^t \frac{\langle \mathrm{Im}\,G_s A \mathrm{Im}\,G_s A\rangle}{N^2\eta_s^2} \,\mathrm{d}s\right)^{1/2} \\ &\prec \frac{N\sqrt{\hat{\ell}_t}}{\rho_t \langle |A|^2\rangle^{1/2}}\left(\int_0^t \frac{\rho_s^2 \langle |A|^2 \rangle}{N^2\eta_s^2}\left(1 +\frac{\phi_2}{(N\hat{\ell}_s)^{1/2}}\right) \,\mathrm{d}s\right)^{1/2}\lesssim 1 +\frac{\sqrt{\phi_2}}{(N\hat{\ell}_t)^{1/4}}\,. 
\end{split}$$ Here in the last inequality we used that along the characteristics $\hat{\ell}_s\gtrsim \hat{\ell}_t$ for $0\le s\le t$ and the integration rule [\[eq:newintrule\]](#eq:newintrule){reference-type="eqref" reference="eq:newintrule"}. Using the Burkholder-Davis-Gundy (BDG) inequality we conclude that, with high probability, the stochastic term in [\[eq:eq1\]](#eq:eq1){reference-type="eqref" reference="eq:eq1"} satisfies the same estimate [\[eq:qv1\]](#eq:qv1){reference-type="eqref" reference="eq:qv1"} as its quadratic variation. Next, we estimate the last term in the rhs. of [\[eq:eq1\]](#eq:eq1){reference-type="eqref" reference="eq:eq1"}: $$\begin{split} \label{eq:newk=1} \frac{N\sqrt{\hat{\ell}_t}}{\rho_t\langle |A|^2\rangle^{1/2}}\int_0^t \frac{|\sigma|}{N}\big|\langle G_sAG_sG_s^\mathfrak{t}\rangle\big|\,\mathrm{d}s&\le \frac{N\sqrt{\hat{\ell}_t}}{\rho_t\langle |A|^2\rangle^{1/2}}\int_0^t \frac{1}{N\eta_s^{3/2}}\langle \mathrm{Im}\,G_sA\mathrm{Im}\,G_s A\rangle^{1/2} \langle \mathrm{Im}\,G_s \rangle^{1/2}\,\mathrm{d}s \\ &\prec \frac{N\sqrt{\hat{\ell}_t}}{\rho_t\langle |A|^2\rangle^{1/2}}\int_0^t \frac{\rho^{1/2}_s}{N\eta_s^{3/2}}\left(\langle |A|^2\rangle \rho_s^2+\frac{\langle |A|^2\rangle \rho_s^2\phi_2}{\sqrt{N\hat{\ell}_s}}\right)^{1/2}\,\mathrm{d}s \\ &\lesssim 1+\frac{\sqrt{\phi_2}}{(N\hat{\ell}_t)^{1/4}}, \end{split}$$ where in the first inequality we used the Schwarz inequality together with several Ward identities, and in the second inequality the single resolvent local law $|\langle G_s-m_s\rangle|\prec 1/(N\eta_s)$ to show that $\langle\mathrm{Im}\,G_s\rangle\prec \rho_s$ (recall that we consider the regime $N\eta_s\rho_s\ge N^\epsilon$, so $1/(N\eta_s)\le \rho_s$).
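The Ward identity $GG^*=\mathrm{Im}\,G/\mathrm{Im}\,z$, used repeatedly in these estimates, is a purely algebraic identity for resolvents of Hermitian matrices; it can be checked numerically as follows (a minimal sketch, with our own variable names):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 7
H = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
H = (H + H.conj().T) / 2            # Hermitian test matrix

z = 0.1 + 0.05j
G = np.linalg.inv(H - z * np.eye(N))
ImG = (G - G.conj().T) / 2j         # Im G = (G - G*)/(2i)

# Ward identity: G G* = Im G / Im z.
assert np.allclose(G @ G.conj().T, ImG / z.imag)
```

Algebraically, $G-G^* = (H-z)^{-1}\big((H-\bar z)-(H-z)\big)(H-\bar z)^{-1} = 2\mathrm{i}\,(\mathrm{Im}\,z)\, GG^*$, which is exactly the identity above.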
Putting all these estimates together, and using that $\Phi_1(0)\prec 1$ by [\[eq:initialphi\]](#eq:initialphi){reference-type="eqref" reference="eq:initialphi"} to bound the initial condition after integration, we obtain the first master inequality $$\Phi_1(t)\prec 1+\frac{\phi_1}{N\hat{\ell}_t}+\frac{\sqrt{\phi_2}}{(N\hat{\ell}_t)^{1/4}},$$ again in the sense of uniformity explained after [\[phiphi\]](#phiphi){reference-type="eqref" reference="phiphi"}. For the proof of the master inequalities [\[eq:AVmasterITERATE\]](#eq:AVmasterITERATE){reference-type="eqref" reference="eq:AVmasterITERATE"} with $k\ge 2$, a fundamental input for the estimates of the various terms in [\[eq:flowkaim\]](#eq:flowkaim){reference-type="eqref" reference="eq:flowkaim"} is the following *$G^2$-Lemma*. Recall that even though we are interested only in pure $\mathrm{Im}\,G$ chains, their evolution equation [\[eq:flowkaim\]](#eq:flowkaim){reference-type="eqref" reference="eq:flowkaim"} necessarily contains mixed chains as well. The $G^2$-Lemma turns them back into pure $\mathrm{Im}\,G$ chains. It expresses how to estimate *not* strictly alternating purely $\mathrm{Im}\,G$ chains in terms of strictly alternating ones, based upon the integral representation [\[eq:intrepIM\]](#eq:intrepIM){reference-type="eqref" reference="eq:intrepIM"}. Note that this representation involves the non-analytic function $\mathrm{Im}\,G$; hence simple and flexible contour deformations are prohibited, contrary to the $k=1$ case, where we did not care about preserving $\mathrm{Im}\,G$'s and the contour integral [\[eq:niceintrep\]](#eq:niceintrep){reference-type="eqref" reference="eq:niceintrep"} with the analytic $G(z)$ was applicable. For brevity we will state the $G^2$-Lemma for spectral parameters $z_1, ... , z_k$ without time dependence, but eventually we will use it for $z_{1,t}, ... , z_{k,t}$ at any fixed time along the flow.
The proof is given in Section [4.1.3](#subsec:proofG^2){reference-type="ref" reference="subsec:proofG^2"} below. **Lemma 17** ($G^2$-Lemma). *Fix $k\ge 2$. Let $i,j \in [k]$ with $j-i \ge 1$ and assume that $\Phi_l\prec \phi_l$ holds uniformly (in the sense explained after [\[phiphi\]](#phiphi){reference-type="eqref" reference="phiphi"}) for some control parameters $\phi_l \ge 1$ for $l=1,2,\ldots , k$. Then, for all versions of $\widehat{G}_{[i^\#,j^\#]}$ and $\widehat{M}_{[i^\#,j^\#]}$, i.e. for any choice of $\#$ indicating star (adjoint), hat (imaginary part) or simply no 'decoration', we have the following:[^14] $$\label{eq:redbm} \left| \big\langle \widehat{M}_{[i^\#, j^\#]} \big\rangle \right| \prec \left(\frac{\rho_i \rho_j}{\eta_i \eta_j}\right)^{1/2} \Big(\prod_{n =i+1}^{j-1} \rho_n \Big) \, N^{\frac{j-i}{2}-1} \, \Big(\prod_{m = i}^{j-1} \langle |A_m|^2 \rangle^{1/2}\Big)$$ and (the decorations at the indices $i$ and $j$ on $\widehat{G}$ and on $\widehat{M}$ have to be matching) $$\label{eq:redbg} \begin{split} \left| \big\langle \widehat{G}_{[i^\#,j^\#]} - \widehat{M}_{[i^\#,j^\#]}\big\rangle \right| \prec \, \left(\frac{\rho_i \rho_j}{\eta_i \eta_j} \right)^{1/2} \, \Big(\prod_{n =i+1}^{j-1} \rho_n \Big) \, \frac{ N^{\frac{j-i}{2}-1} }{\sqrt{N \hat{\ell}}} \, \Big(\prod_{m = i}^{j-1} \langle |A_m|^2 \rangle^{1/2}\Big) \tilde{\phi}_{j-i} \,, \end{split}$$ where we used the notation $\tilde{\phi}_{j-i} = \phi_{j-i} + \boldsymbol{1}( j-i \ \mathrm{odd} )\sqrt{\phi_{j-i-1} \phi_{j-i+1}}$ (as in [\[tildephi\]](#tildephi){reference-type="eqref" reference="tildephi"}).* *Moreover, it holds that (now $\#$ indicates star (adjoint) or no 'decoration') $$\label{eq:schwarzeasy} \left| \big\langle \widehat{{G}}_{[\hat{1}, \hat{k}]}^{(i^\#)}A_k\big\rangle \right| \prec\left( \frac{\rho_i}{\eta_i}\right)^{1/2} \big| \big\langle \mathrm{Im}\,G_i \big(A_i \mathrm{Im}\,G_{i+1} ... A_{i-1} \big)\mathrm{Im}\,G_i \big(A_i \mathrm{Im}\,G_{i+1} ... 
A_{i-1}\big)^*\big\rangle\big|^{1/2} \,.$$* Since all resolvent chains and their $M$-approximations are multi-linear in the $A$'s, by a simple scaling we may assume, without loss of generality, that $\langle |A_j|^2 \rangle = 1$ for all $j \in [k]$. This shortens some formulas. We start our estimates on $\Phi_k(t)$ with bounding the quadratic variation of the martingale term in [\[eq:flowkaim\]](#eq:flowkaim){reference-type="eqref" reference="eq:flowkaim"}: $$\begin{split} &\frac{1}{N}\sum_{a,b=1}^N \big[\big|\partial_{ab} \langle \widehat{{G}}_{[\hat{1}, \hat{k}]}A_k\rangle\big|^2 +\sigma \partial_{ab} \langle \widehat{{G}}_{[\hat{1}, \hat{k}]}A_k\rangle\overline{\partial_{ba} \langle \widehat{{G}}_{[\hat{1}, \hat{k}]}A_k\rangle}\big] \\ &\qquad\qquad\qquad\qquad\quad\lesssim \frac{1}{N^2} \sum_{i=1}^{k} \big\langle \big(G_i A_i \mathrm{Im}\,G_{i+1} ... A_{i-1} G_i \big) \big(G_i A_i \mathrm{Im}\,G_{i+1} ... A_{i-1} G_i \big)^*\big\rangle \\ &\qquad\qquad\qquad\qquad\quad= \sum_{i=1}^{k}\frac{ \big\langle \mathrm{Im}\,G_i \big(A_i \mathrm{Im}\,G_{i+1} ... A_{i-1} \big)\mathrm{Im}\,G_i \big(A_i \mathrm{Im}\,G_{i+1} ... A_{i-1}\big)^*\big\rangle }{N^2 \eta_i^2} \,, \end{split}$$ where we omitted the time dependence. Adding the prefactor from the definition of $\Phi_k(t)$, we find that $$\label{eq:boundneediter} \begin{split} &\frac{\sqrt{N \hat{\ell}_{t}}}{N^{k/2 - 1} \, \Big(\prod_{i\in [k]} \rho_{i,t}\Big) }\left(\int_0^t \frac{ \big\langle \mathrm{Im}\,G_{i,s} \big(... 
\big)\mathrm{Im}\,G_{i,s} \big(...\big)^*\big\rangle }{N^2 \eta_{i,s}^2} \,\mathrm{d}s\right)^{1/2} \\ \prec&\, \frac{\sqrt{N \hat{\ell}_{t}}}{N^{k/2 - 1} \, \Big(\prod_{i\in [k]} \rho_{i,t}\Big) } \left(\int_0^t \frac{ N^{k-2}\Big(\prod_{i\in [k]} \rho_{i,s}\Big)^2 }{N\eta_s^2}\left(1+\frac{\phi_{2k}}{(N\hat{\ell}_s)^{1/2}}\right) \,\mathrm{d}s\right)^{1/2} \lesssim \, 1+\frac{\sqrt{\phi_{2k}}}{(N\hat{\ell}_t)^{1/4}}\,, \end{split}$$ analogously to [\[eq:qv1\]](#eq:qv1){reference-type="eqref" reference="eq:qv1"}, where we again used that along the characteristics $\hat{\ell}_s\gtrsim \hat{\ell}_t$ for $0\le s\le t$ and the integration rule [\[eq:newintrule\]](#eq:newintrule){reference-type="eqref" reference="eq:newintrule"}. Then, using the BDG inequality we conclude the same estimate in high probability for the martingale term in [\[eq:flowkaim\]](#eq:flowkaim){reference-type="eqref" reference="eq:flowkaim"}. Next, we bound the first two terms in the last line of [\[eq:flowkaim\]](#eq:flowkaim){reference-type="eqref" reference="eq:flowkaim"}. 
We have $$\label{eq:boundlastline} \begin{split} & \qquad \frac{\sqrt{N \hat{\ell}_{t}}}{N^{k/2 - 1} \, \Big(\prod_{i\in [k]} \rho_{i,t}\Big) }\int_0^t \big| \langle G_{i,s}-m_{i,s}\rangle \langle \widehat{{G}}^{(i)}_{[\hat{1},\hat{k}],s}A_k\rangle \big| \, \mathrm{d}s \\ &\prec \frac{\sqrt{N \hat{\ell}_{t}}}{N^{k/2 - 1} \, \Big(\prod_{i\in [k]} \rho_{i,t}\Big) }\int_0^t \frac{\rho_{i,s}^{1/2}}{N\eta_{i,s}^{3/2}} N^{(k-1)/2}\Big(\prod_{i\in [k]} \rho_{i,s}\Big)\left(1+\frac{\phi_{2k}}{(N\hat{\ell}_s)^{1/2}}\right)^{1/2} \, \mathrm{d}s\lesssim 1+\frac{\sqrt{\phi_{2k}}}{(N\hat{\ell}_{t})^{1/4}}\,, \end{split}$$ where we used the bound in [\[eq:schwarzeasy\]](#eq:schwarzeasy){reference-type="eqref" reference="eq:schwarzeasy"} together with a usual single resolvent local law $| \langle G_{i,s}-m_{i,s}\rangle | \prec (N \eta_{i,s})^{-1}$ and applied a similar reasoning as for [\[eq:boundneediter\]](#eq:boundneediter){reference-type="eqref" reference="eq:boundneediter"}. Then, we estimate the terms in $\Omega_\sigma$ of [\[eq:flowkaim\]](#eq:flowkaim){reference-type="eqref" reference="eq:flowkaim"}. 
For $j\ne i$ we have $$\begin{split} & \qquad \frac{\sqrt{N \hat{\ell}_{t}}}{N^{k/2 - 1} \, \Big(\prod_{i\in [k]} \rho_{i,t}\Big)}\int_0^t \frac{1}{N}\big| \langle G_{[\widehat{i},j],s}G_{[\widehat{j},i],s}^\mathfrak{t}\rangle \big| \, \mathrm{d}s \\ &\le \frac{\sqrt{N \hat{\ell}_{t}}}{N^{k/2 - 1} \, \Big(\prod_{i\in [k]} \rho_{i,t}\Big) }\int_0^t \frac{1}{N} \langle G_{[\widehat{i},j],s}G_{[\widehat{i},j],s}^*\rangle^{1/2} \langle G_{[\widehat{j},i],s}^*G_{[\widehat{j},i],s}\rangle^{1/2} \, \mathrm{d}s \\ &\prec \sqrt{N \hat{\ell}_{t}}\int_0^t \frac{1}{N\eta_{i,s}\eta_{j,s}} \left(1+\frac{\phi_{2(j-i)}}{\sqrt{N\hat{\ell}_t}}\right)^{1/2}\left(1+\frac{\phi_{2(k-j+i)}}{\sqrt{N\hat{\ell}_t}}\right)^{1/2} \, \mathrm{d}s \\ &\lesssim \frac{1}{\sqrt{N\hat{\ell}_t}}+\frac{\sqrt{\phi_{2(j-i)}}}{(N\hat{\ell}_{t})^{3/4}}+\frac{\sqrt{\phi_{2(k-j+i)}}}{(N\hat{\ell}_{t})^{3/4}}+\frac{\sqrt{\phi_{2(j-i)}\phi_{2(k-j+i)}}}{N\hat{\ell}_{t}}\,, \end{split}$$ where in the first inequality we used Schwarz and in the second inequality the Ward identity (see [\[eq:newk=1\]](#eq:newk=1){reference-type="eqref" reference="eq:newk=1"} for similar computations in a simpler case). Similarly, for $j=i$ we get a bound $1+\sqrt{\phi_{2k}}/(N\hat{\ell}_t)^{1/4}$. To combine these two cases in a simpler bound we just estimate $$\begin{split} &\frac{\sqrt{N \hat{\ell}_{t}}}{N^{k/2 - 1} \, \Big(\prod_{i\in [k]} \rho_{i,t}\Big)}\int_0^t \frac{1}{N}\big| \langle G_{[\widehat{i},j],s}G_{[\widehat{j},i],s}^\mathfrak{t}\rangle \big| \, \mathrm{d}s \\ &\qquad\qquad\qquad\qquad\quad\lesssim \frac{1}{\sqrt{N\hat{\ell}_t}}+\frac{\sqrt{\phi_{2(j-i)}}}{(N\hat{\ell}_{t})^{1/4}}+\frac{\sqrt{\phi_{2(k-j+i)}}}{(N\hat{\ell}_{t})^{1/4}}+\frac{\sqrt{\phi_{2(j-i)}\phi_{2(k-j+i)}}}{N\hat{\ell}_{t}}\,. \end{split}$$ We are now left with the terms $\Omega_1,\Omega_2,\Omega_3,\Omega_4$ of [\[eq:flowkaim\]](#eq:flowkaim){reference-type="eqref" reference="eq:flowkaim"}. 
We write out the estimates for $\Omega_1$ as all the other $\Omega_a$, $a=2,3,4$, are completely analogous. Using [\[eq:redbm\]](#eq:redbm){reference-type="eqref" reference="eq:redbm"}--[\[eq:redbg\]](#eq:redbg){reference-type="eqref" reference="eq:redbg"} for $i<j$ we estimate $$\begin{split} \label{eq:critbound} &\frac{\sqrt{N \hat{\ell}_{t}}}{N^{k/2 - 1} \, \Big(\prod_{i\in [k]} \rho_{i,t}\Big) }\int_0^t \big| \langle \widehat{{G}}_{[\hat{i}, j],s}-\widehat{M}_{[\hat{i},j],s}\rangle\langle \widehat{{M}}_{[\hat{j},i],s}\rangle \big| \, \mathrm{d}s \\ &\prec\frac{\sqrt{N \hat{\ell}_{t}}}{N^{k/2 - 1} \, \Big(\prod_{i\in [k]} \rho_{i,t}\Big) } \int_0^t \frac{N^{(j-i)/2-1}}{\sqrt{N\hat{\ell}_s}} \Big(\prod_{n\in [i+1,j-1]} \rho_{n,s}\Big) \, \frac{\rho_{i,s} \rho_{j,s}}{\eta_{i,s} \eta_{j,s}} \, \Big(\prod_{n\in [i,j]^c} \rho_{n,s}\Big) N^{(k-j+i)/2-1} \tilde{\phi}_{j-i}\,\mathrm{d}s \\ &\lesssim \frac{\sqrt{N \hat{\ell}_{t}}}{N^{k/2 - 1} \, \Big(\prod_{i\in [k]} \rho_{i,t}\Big) } \int_0^t \frac{\tilde{\phi}_{j-i}}{N \eta_s^2} \frac{N^{k/2-1}}{\sqrt{N\hat{\ell}_s}} \Big(\prod_{i\in [k]} \rho_{i,s}\Big)\,\mathrm{d}s \lesssim \frac{\tilde{\phi}_{j-i}}{N \hat{\ell}_t}, \end{split}$$ where $[i,j]^c:= [1,i-1]\cup [j+1,k]$. Similarly, we bound $$\label{eq:critbound2} \begin{split} \frac{\sqrt{N \hat{\ell}_{t}}}{N^{k/2 - 1} \, \Big(\prod_{i\in [k]} \rho_{i,t}\Big) } \int_0^t \big| \langle \widehat{{G}}_{[\hat{i}, j],s}-\widehat{M}_{[\hat{i},j],s}\rangle\langle \widehat{{G}}_{[\hat{j},i],s}-\widehat{M}_{[\hat{j}, i],s}\rangle \big| \,\mathrm{d}s \prec \frac{ \tilde{\phi}_{j-i} \tilde{\phi}_{k-j+i}}{(N\hat{\ell}_t)^{3/2}}\,. \end{split}$$ Finally, we estimate the last term in the last line of the rhs. 
of [\[eq:flowkaim\]](#eq:flowkaim){reference-type="eqref" reference="eq:flowkaim"} as $$\label{eq:2ndtermlhs} \begin{split} \frac{\sqrt{N \hat{\ell}_{t}}}{N^{k/2 - 1} \, \Big(\prod_{i\in [k]} \rho_{i,t}\Big) } \int_{0}^{t} \left| \langle \widehat{{G}}_{[\hat{1}, \hat{k}],s}A_k\rangle \frac{\langle \mathrm{Im}\,G_{i,s}-\mathrm{Im}\,m_{i,s}\rangle}{\eta_{i,s}} \right| \, \mathrm{d}s \prec \frac{1}{\sqrt{N \hat{\ell}_t}} + \frac{\phi_k}{N \hat{\ell}_t}\,, \end{split}$$ where we again used the usual single resolvent local law, the integration rule [\[eq:newintrule\]](#eq:newintrule){reference-type="eqref" reference="eq:newintrule"} and $$\big| \langle \widehat{{G}}_{[\hat{1}, \hat{k}],s}A_k\rangle \big| \prec N^{k/2 - 1} \, \Big(\prod_{i\in [k]} \rho_{i,s}\Big) \left( 1 + \frac{\phi_k}{\sqrt{N\hat{\ell}_s}}\right)\,.$$ Putting all these estimates [\[eq:boundneediter\]](#eq:boundneediter){reference-type="eqref" reference="eq:boundneediter"}--[\[eq:2ndtermlhs\]](#eq:2ndtermlhs){reference-type="eqref" reference="eq:2ndtermlhs"} together, we thus conclude [\[eq:AVmasterITERATE\]](#eq:AVmasterITERATE){reference-type="eqref" reference="eq:AVmasterITERATE"}. This finishes the proof of Proposition [Proposition 14](#pro:masterinIM){reference-type="ref" reference="pro:masterinIM"}, modulo the proof of Lemma [Lemma 17](#lem:G^2lemma){reference-type="ref" reference="lem:G^2lemma"} that will be done next. 
◻ ### Proof of Lemma [Lemma 17](#lem:G^2lemma){reference-type="ref" reference="lem:G^2lemma"} {#subsec:proofG^2} As a preparation for our proof, we observe that the estimate [\[eq:Mbound\]](#eq:Mbound){reference-type="eqref" reference="eq:Mbound"} (modulo logarithmic corrections in $\ell$) even holds true if the condition $N \ell \ge 1$ with $$\ell = \min_i [\eta_i (\rho_i + \boldsymbol{1}(i \notin \mathfrak{I}_k))]= \eta_{i_{\min}} (\rho_{i_{\min}} + \boldsymbol{1}({i_{\min}} \notin \mathfrak{I}_k))$$ is violated, but the *second smallest* $$\ell_2 := \min_{i \neq i_{\min}} [\eta_i (\rho_i + \boldsymbol{1}(i \notin \mathfrak{I}_k))]$$ satisfies $N \ell_2 \ge 1$. More precisely, under this weaker assumption, we still have that $$\label{eq:Mbound2ndsmallest} \left| \langle \mathcal{M}(z_1, A_1, ... , A_{k-1}, z_k; \mathfrak{I}_k) A_k \rangle \right| \lesssim \big(1 + \boldsymbol{1}(i_{\min} \notin \mathfrak{I}_k) |\log \ell|\big) \left(\prod_{i \in \mathfrak{I}_k} \rho_i\right) N^{k/2 - 1}\prod_{j \in [k]} \langle |A_j|^2 \rangle^{1/2}\,.$$ This simply follows by realizing that the key estimate within the proof of [\[eq:Mbound\]](#eq:Mbound){reference-type="eqref" reference="eq:Mbound"}, namely [\[eq:intrepbound\]](#eq:intrepbound){reference-type="eqref" reference="eq:intrepbound"} in Appendix [6](#sec:addtech){reference-type="ref" reference="sec:addtech"}, can alternatively be estimated as $$\left| m^{(\mathfrak{I}_k)}[S] \right| \lesssim \big( 1 + \boldsymbol{1}(i_{\min} \notin \mathfrak{I}_k, i_{\min} \in S) |\log \ell| \big)\frac{\prod_{i \in S \cap \mathfrak{I}_k} \rho_i}{\ell_2^{|S| - 1}},$$ and following the steps leading to the proof of Lemma [Lemma 3](#lem:Mbound){reference-type="ref" reference="lem:Mbound"} (a).[^15] We now turn to the actual proof of Lemma [Lemma 17](#lem:G^2lemma){reference-type="ref" reference="lem:G^2lemma"} and again assume that, by simple scaling, $\langle |A_m|^2 \rangle = 1$ for all $m \in [k]$.
We start with the proof of [\[eq:redbm\]](#eq:redbm){reference-type="eqref" reference="eq:redbm"} for both $\#$'s indicating no decoration and assuming, for definiteness, that $\eta_i=\mathrm{Im}\,z_i >0$ and $\eta_j=\mathrm{Im}\,z_j > 0$; all other cases can be treated similarly and are hence omitted. In this case, we use the integral representation [@multiG Eq. (3.15)] (which simply follows from [\[eq:Mdefim\]](#eq:Mdefim){reference-type="eqref" reference="eq:Mdefim"}--[\[eq:Mdivdiff\]](#eq:Mdivdiff){reference-type="eqref" reference="eq:Mdivdiff"} using multilinearity)[^16] $$\label{eq:intrepMbasic} \big\langle \widehat{M}_{[i, j]} \big\rangle = \frac{1}{\pi } \int_{\mathbb R }\frac{\big\langle\widehat{M}(x+\mathrm{i}\zeta, A_i, z_{i+1}, ... , z_{j-1}) A_{j-1}\big\rangle}{(x-z_i + \mathrm{i}\zeta)(x - z_j + \mathrm{i}\zeta)} \mathrm{d}x$$ with $\zeta := (\eta_i \wedge \eta_j)/2$. To estimate the $x$-integration in [\[eq:intrepMbasic\]](#eq:intrepMbasic){reference-type="eqref" reference="eq:intrepMbasic"}, we will apply the following basic lemma, which shall frequently be used in the sequel. Its proof is omitted as it is a simple Hölder's inequality and elementary calculus using basic properties of $\rho(z)$. **Lemma 18**. *Under the setting and assumptions described above, for any $\alpha \in [0,1]$, we have that $$\label{eq:xdeprestore} \frac{1}{\zeta^{ \alpha}}\int_{{\mathbb R }} \frac{\big(\rho(x + \mathrm{i}\zeta)\big)^{1 - \alpha} }{| x-z_i + \mathrm{i}\zeta| |x - z_j + \mathrm{i}\zeta|} \mathrm{d}x \prec \frac{1}{(\eta_i \eta_j)^{1/2}}\left(\frac{\rho_i \rho_j}{\big(({\eta}_i \rho_i) ({\eta}_j \rho_j)\big)^{\alpha}}\right)^{1/2} \,.$$* Therefore, plugging in [\[eq:Mbound2ndsmallest\]](#eq:Mbound2ndsmallest){reference-type="eqref" reference="eq:Mbound2ndsmallest"} with $\boldsymbol{1}(...) 
= 0$ for the numerator in [\[eq:intrepMbasic\]](#eq:intrepMbasic){reference-type="eqref" reference="eq:intrepMbasic"} and then using [\[eq:xdeprestore\]](#eq:xdeprestore){reference-type="eqref" reference="eq:xdeprestore"}, we obtain $$\label{eq:intrepM} \begin{split} \left| \big\langle \widehat{M}_{[i, j]} \big\rangle \right| \lesssim \left(\prod_{n = i+1}^{j-1} \rho_n\right) N^{(j-i)/2 - 1} \int_{\mathbb R }\frac{\rho(x + \mathrm{i}\zeta)}{| x-z_i + \mathrm{i}\zeta| |x - z_j + \mathrm{i}\zeta|} \mathrm{d}x \prec \left(\frac{\rho_i \rho_j }{\eta_i \eta_j}\right)^{1/2} \left(\prod_{n = i+1}^{j-1} \rho_n\right) N^{(j-i)/2 - 1} \,, \end{split}$$ completing the proof of [\[eq:redbm\]](#eq:redbm){reference-type="eqref" reference="eq:redbm"}. We now turn to the proof of [\[eq:redbg\]](#eq:redbg){reference-type="eqref" reference="eq:redbg"}, again focusing on the case where both $\#$'s indicate no decoration and assuming that $\eta_i=\normalcolor\mathrm{Im}\,z_i >0$ and $\eta_j=\mathrm{Im}\,z_j > 0$. As the first step, we apply the integral representations [\[eq:intrepIM\]](#eq:intrepIM){reference-type="eqref" reference="eq:intrepIM"} and [\[eq:intrepMbasic\]](#eq:intrepMbasic){reference-type="eqref" reference="eq:intrepMbasic"} (see [@multiG Eqs. (3.14) and (3.15)]) to find $$\label{eq:intrepG-M} \begin{split} \left| \big\langle \widehat{G}_{[i, j]} - \widehat{M}_{[i, j]} \big\rangle \right| \lesssim \int_{\mathbb R }\frac{\big| \big\langle \big(\mathrm{Im}\,G(x + \mathrm{i}\zeta) A_i \widehat{G}_{[\, \widehat{i+1}, \, \widehat{j-1}\, ]} - \widehat{M}(x+\mathrm{i}\zeta, A_i, ...)\big) A_{j-1}\big\rangle\big|}{| x-z_i + \mathrm{i}\zeta| |x - z_j + \mathrm{i}\zeta|} \mathrm{d}x \end{split}$$ with $\zeta = (\eta_i \wedge \eta_j)/2$ and split the integral into an *above the scale* and a *below the scale* part. 
This concept refers to spectral regimes $x\in {\mathbb R }$ where the typical eigenvalue spacing $\rho(x + \mathrm{i}\zeta)/N$ is larger or smaller than the given $\zeta$. More precisely, we fix an arbitrarily small $\xi > 0$ and decompose ${\mathbb R }$ into[^17] $$\label{eq:abovebelow} \big\{ x: N \rho(x+\mathrm{i}\zeta) \zeta \ge N^\xi \big\}\, \dot{\cup} \, \big\{ x: N \rho(x+\mathrm{i}\zeta) \zeta < N^\xi \big\} =: I_{\mathrm{above}} \, \dot{\cup}\, I_{\mathrm{below}}\,.$$ For the *above the scale* part, we use that $\Phi_{j-i} \prec \phi_{j-i}$ and estimate this part of the integral [\[eq:intrepG-M\]](#eq:intrepG-M){reference-type="eqref" reference="eq:intrepG-M"} by $$\label{eq:above1} \int_{I_{\rm above}} \frac{\rho(x + \mathrm{i}\zeta)}{| x-z_i + \mathrm{i}\zeta| |x - z_j + \mathrm{i}\zeta| }\left(\prod_{n = i+1}^{j-1} \rho_n\right) \frac{N^{(k-i)/2 - 1}}{\sqrt{N \hat{\ell}(x)}} \phi_{j-i}\mathrm{d}x \,,$$ where we emphasized that now $\hat{\ell}(x) = \zeta \rho(x+\mathrm{i}\zeta) \wedge \min_{n \in [i+1,j-1]} \eta_n \rho_n$ depends on the integration variable $x$ since the integrated chain in [\[eq:intrepG-M\]](#eq:intrepG-M){reference-type="eqref" reference="eq:intrepG-M"} contains a resolvent at spectral parameter $x+\mathrm{i}\zeta$. Next, we further split $I_{\rm above}$ into two parts $I_{\rm above} = I_{{\rm above}, =} \, \dot{\cup} \, I_{{\rm above}, <}$ with $$\label{eq:above=<} I_{{\rm above}, =} := \big\{ x : \hat{\ell}(x) = \rho(x + \mathrm{i}\zeta) \zeta\big\} \quad \text{and} \quad I_{{\rm above}, <} := \big\{ x : \hat{\ell}(x) < \rho(x + \mathrm{i}\zeta) \zeta\big\},$$ depending on whether the minimum is attained at the special spectral argument $x+\mathrm{i}\zeta$ or not, and estimate each of them separately. 
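Before doing so, let us briefly indicate how the omitted proof of Lemma [Lemma 18](#lem:xrestore){reference-type="ref" reference="lem:xrestore"} goes in the simplest case $\alpha = 0$; the general case is an interpolation of this computation with the $\alpha = 1$ case via Hölder's inequality. By Schwarz and the Poisson (semigroup) representation of $\rho$, we have $$\int_{{\mathbb R }}\frac{\rho(x+\mathrm{i}\zeta)}{| x-z_i + \mathrm{i}\zeta| |x - z_j + \mathrm{i}\zeta|}\, \mathrm{d}x \le \prod_{n \in \{i,j\}}\left(\int_{{\mathbb R }}\frac{\rho(x+\mathrm{i}\zeta)}{(x-E_n)^2+(\eta_n+\zeta)^2}\, \mathrm{d}x\right)^{1/2} = \prod_{n \in \{i,j\}}\left(\frac{\pi \, \rho\big(E_n + \mathrm{i}(\eta_n + 2\zeta)\big)}{\eta_n+\zeta}\right)^{1/2} \lesssim \left(\frac{\rho_i\rho_j}{\eta_i\eta_j}\right)^{1/2},$$ where $E_n := \mathrm{Re}\, z_n$, and in the last step we used $\zeta \le \eta_n/2$ together with the elementary comparison $\rho(E_n+\mathrm{i}\eta') \le (\eta'/\eta_n)\rho_n \lesssim \rho_n$ for $\eta' \in [\eta_n, 2\eta_n]$.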
In this way, we find the contribution from $I_{{\rm above}, =}$ to [\[eq:above1\]](#eq:above1){reference-type="eqref" reference="eq:above1"} to equal $$\label{eq:above2} \frac{1}{\sqrt{N}} \left[\frac{1}{\zeta^{1/2}}\int_{I_{{\rm above},=}} \frac{\big(\rho(x + \mathrm{i}\zeta)\big)^{1/2}}{| x-z_i + \mathrm{i}\zeta| |x - z_j + \mathrm{i}\zeta| } \mathrm{d}x\right]\rho_{i+1} \ldots\rho_{j-1} N^{(j-i)/2 - 1} \phi_{j-i} \,.$$ By means of Lemma [Lemma 18](#lem:xrestore){reference-type="ref" reference="lem:xrestore"} with $\alpha = 1/2$ applied to the integral in $\big[\cdots\big]$, this can be bounded as $$\label{eq:above3} \begin{split} \frac{1}{(\eta_i \eta_j)^{1/2}} \frac{1}{\sqrt{N}} \frac{ \sqrt{\rho_i} \, \rho_{i+1} \ldots \rho_{j-1} \sqrt{\rho_j}}{\sqrt{({\eta}_i \rho_i)^{1/2}} \sqrt{({\eta}_j \rho_j )^{1/2}}}N^{(j-i)/2 - 1} \phi_{j-i} \le \left(\frac{\rho_i \rho_j }{\eta_i\eta_j}\right)^{1/2}\left(\prod_{n = i+1}^{j-1} \rho_n\right) \frac{N^{(j-i)/2 - 1}}{\sqrt{N \hat{\ell}}} \phi_{j-i} \,. \end{split}$$ For $I_{{\rm above}, <}$ the argument is completely analogous, yielding exactly the same bound as in [\[eq:above3\]](#eq:above3){reference-type="eqref" reference="eq:above3"}. This completes the bound for the *above the scale* part. For the *below the scale* part, we estimate the two terms in the numerator in [\[eq:intrepG-M\]](#eq:intrepG-M){reference-type="eqref" reference="eq:intrepG-M"} separately; in this regime the local law is anyway not effective in the sense that $G-M$ is not smaller than $G$.
For the $\widehat{M}$-term, we recall the bound [\[eq:Mbound2ndsmallest\]](#eq:Mbound2ndsmallest){reference-type="eqref" reference="eq:Mbound2ndsmallest"}, and estimate $$\label{eq:belowM} \begin{split} &\int_{I_{\rm below}} \frac{\big| \big\langle \widehat{M}(x+\mathrm{i}\zeta, A_i, ...)A_{j-1}\big\rangle\big|}{| x-z_i + \mathrm{i}\zeta| |x - z_j + \mathrm{i}\zeta|} \mathrm{d}x \lesssim \, N^{(j-i)/2 - 1}\rho_{i+1}\ldots \rho_{j-1}\left[ \int_{I_{\rm below}}\frac{\rho(x + \mathrm{i}\zeta) }{| x-z_i + \mathrm{i}\zeta| |x - z_j + \mathrm{i}\zeta|} \mathrm{d}x\right] \\[2mm] \prec &\frac{1}{(\eta_i \eta_j)^{1/2}}\frac{1}{N} \frac{\sqrt{\rho_i} \, \rho_{i+1}\ldots \rho_{j-1} \, \sqrt{\rho_j}}{({\eta}_i \rho_i)^{1/2} ({\eta}_j \rho_j)^{1/2}}N^{(j-i)/2 - 1} \lesssim \left(\frac{\rho_i \rho_j }{\eta_i\eta_j}\right)^{1/2}\left(\prod_{n = i+1}^{j-1} \rho_n\right) \frac{N^{(j-i)/2 - 1}}{N \hat{\ell}} \,. \end{split}$$ To go from the second to the third line, we used that $\rho(x + \mathrm{i}\zeta) \zeta \prec N^{-1}$ for $x \in I_{\rm below}$ (recall that $\xi> 0$ in the definition [\[eq:abovebelow\]](#eq:abovebelow){reference-type="eqref" reference="eq:abovebelow"} may be chosen arbitrarily small) and employed Lemma [Lemma 18](#lem:xrestore){reference-type="ref" reference="lem:xrestore"} with $\alpha = 1$. In the ultimate step, we utilized $\eta_i \rho_i \wedge \eta_j \rho_j \ge \hat{\ell}$ together with $N \hat{\ell}\gtrsim 1$. This concludes the discussion of the $\widehat{{M}}$-term. Next, we turn to the $\widehat{{G}}$-term in [\[eq:intrepG-M\]](#eq:intrepG-M){reference-type="eqref" reference="eq:intrepG-M"} in the regime $x \in I_{\rm below}$ and first focus on the case where $j-i$ is even. 
Here, we employ a Schwarz inequality in order to be able to exploit $$\label{eq:monotone} \begin{split} \big| \big\langle \mathrm{Im}\,G(x + \mathrm{i}\zeta) &A_i \widehat{G}_{[\, \widehat{i+1}, \, \widehat{j-1}\, ]} A_{j-1}\big\rangle\big| \\ \le \, & \frac{\zeta_x}{\zeta} \big| \big\langle \mathrm{Im}\,G(x + \mathrm{i}\zeta_x) (A_i ... \mathrm{Im}\,G_{r-1} A_{r-1}) \mathrm{Im}\,G_{r} (A_i ... \mathrm{Im}\,G_{r-1} A_{r-1})^*\big\rangle \big|^{1/2} \\ & \quad \times\big| \big\langle \mathrm{Im}\,G_{r} (A_{r} ... \mathrm{Im}\,G_{j-1} A_{j-1}) \mathrm{Im}\,G(x + \mathrm{i}\zeta_x) (A_{r} ... \mathrm{Im}\,G_{j-1} A_{j-1}) ^*\big\rangle \big|^{1/2} \end{split}$$ where $\zeta_x > \zeta$ is implicitly defined via $N \rho(x + \mathrm{i}\zeta_x) \zeta_x = N^\xi$ and we denoted $r := (i+j)/2$. After application of a Schwarz inequality, we find this part of [\[eq:intrepG-M\]](#eq:intrepG-M){reference-type="eqref" reference="eq:intrepG-M"} to be bounded by $$\label{eq:below1} \begin{split} \left(\int_{I_{\rm below}} \frac{\zeta_x}{\zeta}\frac{\big| \big\langle \mathrm{Im}\,G(x + \mathrm{i}\zeta_x) (A_i ... \mathrm{Im}\,G_{r-1} A_{r-1}) \mathrm{Im}\,G_{\frac{j+i}{2}} (A_i ... \mathrm{Im}\,G_{r-1} A_{r-1})^*\big\rangle \big|}{| x-z_i + \mathrm{i}\zeta| |x - z_j + \mathrm{i}\zeta|} \mathrm{d}x\right)^{1/2} \\ \times \left(\int_{I_{\rm below}} \frac{\zeta_x}{\zeta}\frac{ \big| \big\langle \mathrm{Im}\,G_{r} (A_{r} ... \mathrm{Im}\,G_{j-1} A_{j-1}) \mathrm{Im}\,G(x + \mathrm{i}\zeta_x) (A_{r} ... \mathrm{Im}\,G_{j-1} A_{j-1})^*\big\rangle \big| }{| x-z_i + \mathrm{i}\zeta| |x - z_j + \mathrm{i}\zeta|} \mathrm{d}x\right)^{1/2} \end{split}$$ Adding and subtracting the respective $\widehat{{M}}$-terms for both resolvent chains in [\[eq:below1\]](#eq:below1){reference-type="eqref" reference="eq:below1"}, we are left with two terms for each integral. 
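We remark that the prefactor $\zeta_x/\zeta$ in [\[eq:monotone\]](#eq:monotone){reference-type="eqref" reference="eq:monotone"} stems from the elementary monotonicity of the map $\eta \mapsto \eta\, \mathrm{Im}\,G(x+\mathrm{i}\eta)$: writing $\lambda_a$ and $\boldsymbol{u}_a$ for the eigenvalues and eigenvectors of the underlying matrix, we have, in the sense of quadratic forms for $\zeta_x \ge \zeta$, $$\mathrm{Im}\,G(x+\mathrm{i}\zeta) = \sum_a \frac{\zeta}{(\lambda_a - x)^2 + \zeta^2}\, \boldsymbol{u}_a \boldsymbol{u}_a^* \le \frac{\zeta_x}{\zeta} \sum_a \frac{\zeta_x}{(\lambda_a - x)^2 + \zeta_x^2}\, \boldsymbol{u}_a \boldsymbol{u}_a^* = \frac{\zeta_x}{\zeta}\, \mathrm{Im}\,G(x+\mathrm{i}\zeta_x)\,,$$ since $(\zeta_x^2 - \zeta^2)(\lambda_a - x)^2 \ge 0$ for every $a$.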
For concreteness, we estimate the one in the first line of [\[eq:below1\]](#eq:below1){reference-type="eqref" reference="eq:below1"}; the second line is treated in the same way. The first line in [\[eq:below1\]](#eq:below1){reference-type="eqref" reference="eq:below1"} is bounded by (the square root of) $$\begin{split} \frac{1}{\zeta}\int_{I_{\rm below}} \mathrm{d}x\frac{\zeta_x \rho(x + \mathrm{i}\zeta_x)}{| x-z_i + \mathrm{i}\zeta| |x - z_j + \mathrm{i}\zeta| } &\left(\prod_{n = i+1}^{r-1} \rho_n\right)^2 \rho_{r} \, N^{\frac{j-i}{2}-1}( 1+\phi_{j-i})\\ &\prec( 1+\phi_{j-i})\left(\frac{ \rho_i \rho_j}{\eta_i\eta_j}\right)^{1/2} \left(\prod_{n = i+1}^{r-1} \rho_n\right)^2 \rho_{r} \, \frac{N^{\frac{j-i}{2}-1}}{N \hat{\ell}} \,. \end{split}$$ Here, we used that $N \rho(x + \mathrm{i}\zeta_x) \zeta_x = N^\xi$ for arbitrarily small $\xi > 0$ and employed Lemma [Lemma 18](#lem:xrestore){reference-type="ref" reference="lem:xrestore"} (with $\alpha=1$) in estimates analogous to [\[eq:above3\]](#eq:above3){reference-type="eqref" reference="eq:above3"} and [\[eq:belowM\]](#eq:belowM){reference-type="eqref" reference="eq:belowM"}. Combining this with the identical estimate for the second line of [\[eq:below1\]](#eq:below1){reference-type="eqref" reference="eq:below1"} and using $N \hat{\ell}\ge 1$ and $\phi_{j-i} \ge 1$, we finally deduce that $$\label{eq:below2} \eqref{eq:below1} \prec \phi_{j-i}\left(\frac{ \rho_i \rho_j}{\eta_i\eta_j}\right)^{1/2} \left(\prod_{n = i+1}^{j-1} \rho_n\right) \frac{N^{(j-i)/2 - 1}}{\sqrt{N \hat{\ell}}} \,.$$ For $j-i$ being odd, only the monotonicity argument [\[eq:monotone\]](#eq:monotone){reference-type="eqref" reference="eq:monotone"} is different: $$\begin{split} \big| \big\langle \mathrm{Im}\,G(x + \mathrm{i}\zeta) &A_i \widehat{G}_{[\, \widehat{i+1}, \, \widehat{j-1}\, ]} A_{j-1}\big\rangle\big| \\ \le \, & \frac{\zeta_x}{\zeta} \big| \big\langle \mathrm{Im}\,G(x + \mathrm{i}\zeta_x) (A_i ... \mathrm{Im}\,G_{r-1} A_{r-1}) \mathrm{Im}\,G_{r} (A_i ...
\mathrm{Im}\,G_{r-1} A_{r-1})^*\big\rangle \big|^{1/2} \\ & \quad \times\big| \big\langle \mathrm{Im}\,G_{r} (A_{r+1} ... \mathrm{Im}\,G_{j-1} A_{j-1}) \mathrm{Im}\,G(x + \mathrm{i}\zeta_x) (A_{r+1} ... \mathrm{Im}\,G_{j-1} A_{j-1})^*\big\rangle \big|^{1/2} \,, \end{split}$$ where we now denoted $r:= (i+j+1)/2$. This asymmetry in the lengths of the resolvent chains now leads to the term $\sqrt{\phi_{j-i+1} \phi_{j-i-1}}$ in [\[eq:redbg\]](#eq:redbg){reference-type="eqref" reference="eq:redbg"}, the rest of the argument is identical. Finally, we turn to the proof of [\[eq:schwarzeasy\]](#eq:schwarzeasy){reference-type="eqref" reference="eq:schwarzeasy"}. Again, we focus on the case where $\#$ indicates no decoration. By application of a Schwarz inequality, we find $$\label{eq:schwarzeasyproof} \begin{split} \left| \big\langle \widehat{{G}}_{[\hat{1}, \hat{k}]}^{(i)}A_k\big\rangle \right| &= \left|\big\langle \mathrm{Im}\,G_1 A_1 ... \mathrm{Im}\,G_{i-1} A_{i-1} \mathrm{Im}\,G_i G_i A_i ... \mathrm{Im}\,G_k A_k\big\rangle \right| \\ &\le |\langle G_i G_i^* \rangle|^{1/2} \left| \big\langle \mathrm{Im}\,G_i(A_i\mathrm{Im}\,G_{i+1}\dots A_{i-1})\mathrm{Im}\,G_i(A_i\mathrm{Im}\,G_{i+1}\dots A_{i-1})^* \big\rangle \right|^{1/2} \\ &\prec\left( \frac{\rho_i}{\eta_i}\right)^{1/2} \left| \big\langle \mathrm{Im}\,G_i(A_i\mathrm{Im}\,G_{i+1}\dots A_{i-1})\mathrm{Im}\,G_i(A_i\mathrm{Im}\,G_{i+1}\dots A_{i-1})^* \big\rangle \right|^{1/2}\,, \end{split}$$ where in the last step we used the Ward identity $GG^* = \mathrm{Im}\,G/\eta$ together with the usual single resolvent local law applied to $\mathrm{Im}\,G_i$. This concludes the proof of Lemma [Lemma 17](#lem:G^2lemma){reference-type="ref" reference="lem:G^2lemma"} which was the last missing piece for the proof of Proposition [Proposition 10](#prop:zig){reference-type="ref" reference="prop:zig"} (a) for pure $\mathrm{Im}\,G$ chains. 
◻ ## Proof of Proposition [Proposition 10](#prop:zig){reference-type="ref" reference="prop:zig"} (b) for pure $\mathrm{Im}\,G$-chains {#subsec:pureIMISO} In this section, we briefly explain how to derive Proposition [Proposition 10](#prop:zig){reference-type="ref" reference="prop:zig"} (b) from Proposition [Proposition 10](#prop:zig){reference-type="ref" reference="prop:zig"} (a). For fixed spectral parameters and bounded deterministic vectors $\Vert \bm x \Vert \,, \Vert \bm y \Vert \lesssim 1$, we have $$\label{eq:isofromav} \left|\langle \bm x, (\widehat{{G}}_{[\hat{1}, \widehat{k+1}]}-\widehat{M}_{[\hat{1},\widehat{k+1}]})\bm y\rangle\right| \lesssim \left|\big\langle (\widehat{{G}}_{[\hat{1}, \widehat{k+1}]}-\widehat{M}_{[\hat{1},\widehat{k+1}]})A_{k+1}\big\rangle\right| + \left|\big\langle \widehat{{G}}_{[\hat{1}, \widehat{k+1}]}-\widehat{M}_{[\hat{1},\widehat{k+1}]}\big\rangle\right|$$ with the special choice $A_{k+1} := N \bm y \bm x^* - \langle \bm x , \bm y \rangle$. Next, using that $\langle |A_{k+1}|^2 \rangle^{1/2} \lesssim N^{1/2}$ we find from Proposition [Proposition 10](#prop:zig){reference-type="ref" reference="prop:zig"} (a) for pure $\mathrm{Im}\,G$ chains the first term in [\[eq:isofromav\]](#eq:isofromav){reference-type="eqref" reference="eq:isofromav"} to be bounded as $$\left|\big\langle (\widehat{{G}}_{[\hat{1}, \widehat{k+1}]}-\widehat{M}_{[\hat{1},\widehat{k+1}]})A_{k+1}\big\rangle\right| \prec \Big(\prod_{i\in [k+1]} \rho_{i}\Big)\frac{N^{k/2 }}{\sqrt{N \hat{\ell}}} \prod_{j \in [k]} \langle |A_j|^2 \rangle^{1/2}\,.$$ For the second term, we apply [\[eq:redbg\]](#eq:redbg){reference-type="eqref" reference="eq:redbg"} from Lemma [Lemma 17](#lem:G^2lemma){reference-type="ref" reference="lem:G^2lemma"} (note that by Proposition [Proposition 10](#prop:zig){reference-type="ref" reference="prop:zig"} (a) for pure $\mathrm{Im}\,G$ chains, we have $\Phi_k\prec \phi_k:= 1$ and hence also $\tilde\phi_k=1$) and obtain $$\left|\big\langle
\widehat{{G}}_{[\hat{1}, \widehat{k+1}]}-\widehat{M}_{[\hat{1},\widehat{k+1}]}\big\rangle\right| \prec \left(\frac{\rho_1 \rho_{k+1}}{\eta_1 \eta_{k+1}}\right)^{1/2}\Big(\prod_{i = 2}^{k} \rho_{i}\Big)\frac{N^{k/2-1 }}{\sqrt{N \hat{\ell}}} \prod_{j \in [k]} \langle |A_j|^2 \rangle^{1/2} \le \Big(\prod_{i\in [k+1]} \rho_{i}\Big)\frac{N^{k/2 }}{\sqrt{N \hat{\ell}}} \prod_{j \in [k]} \langle |A_j|^2 \rangle^{1/2}\,,$$ where in the last step we used $\eta_1 \rho_1 \wedge \eta_{k+1} \rho_{k+1} \ge \hat{\ell}$ and $N \hat{\ell}\ge 1$. This concludes the proof of Proposition [Proposition 10](#prop:zig){reference-type="ref" reference="prop:zig"} (b) for pure $\mathrm{Im}\,G$ chains. ## Proof of Proposition [Proposition 10](#prop:zig){reference-type="ref" reference="prop:zig"} (b) for mixed chains {#subsec:mixed} We consider mixed resolvent chains $$\mathcal{G}_1 A_1 ... \mathcal{G}_k A_k$$ with $\mathcal{G}_j \in \{ G_j, \mathrm{Im}\,G_j\}$ and traceless matrices $A_1, ... , A_k \in {\mathbb C}^{N \times N}$, and explain how the respective bounds in [\[eq:mainAV\]](#eq:mainAV){reference-type="eqref" reference="eq:mainAV"}--[\[eq:mainISO\]](#eq:mainISO){reference-type="eqref" reference="eq:mainISO"} are obtained from the multi-resolvent local law for pure $\mathrm{Im}\,G$-chains derived in Sections [4.1](#subsec:pureIM){reference-type="ref" reference="subsec:pureIM"}--[4.2](#subsec:pureIMISO){reference-type="ref" reference="subsec:pureIMISO"}. We will henceforth focus on the average case, the isotropic bounds can immediately be obtained from those by following Section [4.2](#subsec:pureIMISO){reference-type="ref" reference="subsec:pureIMISO"}. Recalling $$\ell = \min_{j \in [k]}\big[ \eta_j (\rho_j + \boldsymbol{1}(j \notin \mathfrak{I}_k))\big]$$ where $\mathfrak{I}_k$ denotes the set of indices $j \in [k]$ where $\mathcal{G}_j = \mathrm{Im}\,G_j$, the goal of this section is to prove that $$\label{eq:mainAVproof} \left| \langle \mathcal{G}_1 A_1 ... 
\mathcal{G}_k A_k \rangle - \langle \mathcal{M}_{[1,k]}A_k \rangle \right| \prec \left[\left(\prod_{i \in \mathfrak{I}_k} \rho_i \right) \wedge \max_{i \in [k]} \sqrt{\rho_i}\right] \, \frac{N^{k/2 - 1}}{\sqrt{N \ell}} \, \prod_{j \in [k]} \langle |A_j|^2 \rangle^{1/2}\,.$$ In order to do so, we iteratively apply the integral representation [\[eq:intrepIM\]](#eq:intrepIM){reference-type="eqref" reference="eq:intrepIM"} with $m=1$ for every $\mathcal{G}_j$ such that $j \notin \mathfrak{I}_k$. In Section [4.3.1](#subsec:I=notempty){reference-type="ref" reference="subsec:I=notempty"}, this procedure will immediately yield the claimed bound [\[eq:mainAVproof\]](#eq:mainAVproof){reference-type="eqref" reference="eq:mainAVproof"} for $\mathfrak{I}_k \neq \emptyset$ (recall from Remark [Remark 6](#rmk:MHS){reference-type="ref" reference="rmk:MHS"} (ii) that in this case the minimum in [\[eq:mainAVproof\]](#eq:mainAVproof){reference-type="eqref" reference="eq:mainAVproof"} is always realized by the product). In the complementary case, $\mathfrak{I}_k = \emptyset$, which has already been studied in [@A2], the outcome of iteratively applying [\[eq:intrepIM\]](#eq:intrepIM){reference-type="eqref" reference="eq:intrepIM"} is the natural continuation of the pattern obtained for $\mathfrak{I}_k \neq \emptyset$. However, in this way we only find the weaker bound, where in [\[eq:mainAVproof\]](#eq:mainAVproof){reference-type="eqref" reference="eq:mainAVproof"} the minimum $\big[... \wedge ...\big]$ is replaced by one. The improvement to include the small factor $\max_{i \in [k]} \sqrt{\rho_i}$ requires a short separate argument, which we provide in Section [4.3.2](#subsec:I=empty){reference-type="ref" reference="subsec:I=empty"}. ### The case $\mathfrak{I}_k \neq \emptyset$ {#subsec:I=notempty} For concreteness, we consider the case where $\mathfrak{I}_k = [k-1]$, i.e. $\mathcal{G}_k = G_k$ with $\mathrm{Im}\,z_k > 0$ w.l.o.g. and all other $\mathcal{G}$'s are $\mathrm{Im}\,G$'s.
Then, using the integral representation [\[eq:intrepIM\]](#eq:intrepIM){reference-type="eqref" reference="eq:intrepIM"} with $m=1$ and $\eta = \zeta = \mathrm{Im}\,z_k/2$, and its analog for the deterministic approximation (see [@multiG Eqs. (3.14) and (3.15)] and [\[eq:intrepG-M\]](#eq:intrepG-M){reference-type="eqref" reference="eq:intrepG-M"} above), we find that $$\begin{split} &\left| \langle \mathrm{Im}\,{G}_1 A_1 ... G_k A_k \rangle - \langle \mathcal{M}(z_1, A_1, ... , z_k; [k-1])A_k \rangle \right| \\ & \qquad \lesssim \int_{\mathbb R }\frac{ \left| \langle \mathrm{Im}\,{G}_1 A_1 ... \mathrm{Im}\,G(x + \mathrm{i}\zeta) A_k \rangle - \langle \mathcal{M}(z_1, A_1, ... , x+ \mathrm{i}\zeta; [k])A_k \rangle \right|}{|x - z_k + \mathrm{i}\zeta|} \, \mathrm{d}x \end{split}$$ We then follow the steps in the proof of Lemma [Lemma 17](#lem:G^2lemma){reference-type="ref" reference="lem:G^2lemma"} starting from [\[eq:intrepG-M\]](#eq:intrepG-M){reference-type="eqref" reference="eq:intrepG-M"} in order to estimate the integral. In particular, we split the integration region into $I_{\rm above}$ and $I_{\rm below}$, just as in [\[eq:abovebelow\]](#eq:abovebelow){reference-type="eqref" reference="eq:abovebelow"}. In the treatment of these regimes, the two main differences compared to the proof of Lemma [Lemma 17](#lem:G^2lemma){reference-type="ref" reference="lem:G^2lemma"} are the following: - We use the $M$-bound in [\[eq:Mbound2ndsmallest\]](#eq:Mbound2ndsmallest){reference-type="eqref" reference="eq:Mbound2ndsmallest"} with logarithmic corrections, which can be absorbed into $\prec$. 
- Lemma [Lemma 18](#lem:xrestore){reference-type="ref" reference="lem:xrestore"} gets replaced by the bound $$\int_{\mathbb R }\frac{\big(\rho(x+\mathrm{i}\zeta)\big)^\alpha}{|x-z_k + \mathrm{i}\zeta|} \, \mathrm{d}x \prec 1 \qquad \text{for all} \qquad \alpha > 0\,,$$ which can easily be seen using that $\mathrm{Im}\,z_k \ge N^{-1}$ and $\rho(w)$ decays polynomially as $|w| \to \infty$. For example, instead of [\[eq:above2\]](#eq:above2){reference-type="eqref" reference="eq:above2"} we estimate (recall that $I_{\rm above}$ is further split into $I_{\rm above, =}$ and $I_{\rm above, <}$ in [\[eq:above=\<\]](#eq:above=<){reference-type="eqref" reference="eq:above=<"}) $$\begin{split} \frac{1}{\sqrt{N}} \left[\frac{1}{\zeta^{1/2}}\int_{I_{{\rm above},=}} \hspace{-2mm}\frac{\big(\rho(x + \mathrm{i}\zeta)\big)^{1/2}}{| x-z_k + \mathrm{i}\zeta|} \mathrm{d}x\right]\rho_{1} \ldots \rho_{k-1} N^{k/2 - 1} \prec \left(\prod_{i \in [k-1]} \rho_i \right) \, \frac{N^{k/2 - 1}}{\sqrt{N \ell}} \,, \end{split}$$ neglecting the product of Hilbert-Schmidt norms. We point out that, compared to the estimates in the pure $\mathrm{Im}\,G$-case, now $\ell := \min_{j \in [k]}\big[ \eta_j (\rho_j + \boldsymbol{1}(j \neq k))\big]$ and $\rho_k$ disappeared from the rhs. Therefore, as a result, we find the claimed bound [\[eq:mainAVproof\]](#eq:mainAVproof){reference-type="eqref" reference="eq:mainAVproof"} for $\mathfrak{I}_k = [k-1]$. All other cases with $\mathfrak{I}_k \neq \emptyset$ follow by iteratively applying this strategy. This completes the proof of Proposition [Proposition 10](#prop:zig){reference-type="ref" reference="prop:zig"} (b) if $\mathfrak{I}_k \ne \emptyset$. ### The case $\mathfrak{I}_k = \emptyset$ {#subsec:I=empty} As mentioned above, in order to obtain the improvement by $\max_{i \in [k]} \sqrt{\rho_i}$, we now give a separate argument. 
We thereby closely follow the steps in Section [4.1](#subsec:pureIM){reference-type="ref" reference="subsec:pureIM"} and point out only the main differences. In particular, we now use the flow [\[eq:flowka\]](#eq:flowka){reference-type="eqref" reference="eq:flowka"}, together with the following lemma proven in Appendix [6.3](#sec:addproofsec4){reference-type="ref" reference="sec:addproofsec4"}, instead of [\[eq:flowkaim\]](#eq:flowkaim){reference-type="eqref" reference="eq:flowkaim"}. Here, similarly to [\[eq:flowka\]](#eq:flowka){reference-type="eqref" reference="eq:flowka"}, the absence of hats indicates that *none* of the resolvents $\mathcal{G}$ in the chain approximated by $M$ is an $\mathrm{Im}\,G$. **Lemma 19**. *We have $$\partial_t \langle M_{[1,k],t}A_k\rangle=\frac{k}{2}\langle M_{[1,k],t}A_k\rangle+\sum_{1\le i<j\le k}\langle M_{[i,j],t}\rangle \langle M_{[j,i],t}\rangle.$$* Moreover, using the shorthand notations $$\eta_t := \min_{i \in [k]} \eta_{i,t} \quad \text{and} \quad \rho_t:= \max_{i \in [k]} \rho_{i,t}\,,$$ we introduce the new normalized differences $${\Psi}_k(t):= \frac{\sqrt{N \eta_{t}}}{N^{k/2 - 1} \, \sqrt{\rho_t} \prod_{j \in [k]} \langle |A_j|^2 \rangle^{1/2}} \big| \langle ({G}_{[{1},{k}],t}- {M}_{[{1},{k}],t})A_k \rangle\big|$$ for every $k \in {\mathbb N}$. The $\Psi_k$'s introduced here are the no-$\mathrm{Im}\,G$-analogs of the $\Phi_k$'s defined in [\[eq:defphi\]](#eq:defphi){reference-type="eqref" reference="eq:defphi"}, i.e. all hats are removed and we replaced $\hat{\ell}_t \to \eta_t$ as well as $\prod_i \rho_{i,t} \to \sqrt{\rho_t}$. In the following, we will derive master inequalities for the $\Psi_k$'s, analogously to Proposition [Proposition 14](#pro:masterinIM){reference-type="ref" reference="pro:masterinIM"}.
However, compared to the proof in Section [4.1](#subsec:pureIM){reference-type="ref" reference="subsec:pureIM"}, we now have two major simplifications: - Since the bound [\[eq:mainAVproof\]](#eq:mainAVproof){reference-type="eqref" reference="eq:mainAVproof"} for $\mathfrak{I}_k \neq \emptyset$ is already proven, the contribution of the quadratic variation term in [\[eq:flowka\]](#eq:flowka){reference-type="eqref" reference="eq:flowka"}, which automatically carries two $\mathrm{Im}\,G$'s, is easily estimated as (again assuming $\langle |A_j|^2 \rangle = 1$ for all $j \in [k]$ henceforth) $$\begin{split} \frac{\sqrt{N \eta_{t}}}{N^{k/2 - 1} \, \sqrt{\rho_{t}} }&\left(\int_0^t \frac{ \big\langle \mathrm{Im}\,G_{i,s} \big(A_i G_{i+1,s}... A_{i-1} \big)\mathrm{Im}\,G_{i,s} \big(A_i G_{i+1,s}... A_{i-1} \big)^*\big\rangle }{N^2 \eta_{i,s}^2} \,\mathrm{d}s\right)^{1/2} \\ \prec&\, \frac{\sqrt{N \eta_{t}}}{N^{k/2 - 1} \, \sqrt{\rho_{t}} } \left(\int_0^t \frac{ N^{k-2} \rho_{i,s}^2 }{N\eta_{i,s}^2} \,\mathrm{d}s\right)^{1/2} \lesssim \, \sqrt{\frac{\rho_{i,t} \eta_t}{\rho_t \eta_{i,t}}} \le 1\,, \end{split}$$ analogously to [\[eq:boundneediter\]](#eq:boundneediter){reference-type="eqref" reference="eq:boundneediter"}. Note that in the first step, we did not use the overestimate $1/\eta_{i,s} \le 1/ \eta_s$ inside the integral as done in [\[eq:boundneediter\]](#eq:boundneediter){reference-type="eqref" reference="eq:boundneediter"}. The same reasoning applies to the analog of the last line in [\[eq:flowkaim\]](#eq:flowkaim){reference-type="eqref" reference="eq:flowkaim"}. We point out that, in this section, the already proven bounds for resolvent chains containing at least one $\mathrm{Im}\,G$ make the usage of reduction inequalities as in Lemma [Lemma 15](#lem:redinphi){reference-type="ref" reference="lem:redinphi"} obsolete. 
- For treating the analogues of $\Omega_1,\Omega_2,\Omega_3,\Omega_4$ in [\[eq:flowkaim\]](#eq:flowkaim){reference-type="eqref" reference="eq:flowkaim"}, it is not necessary to \"restore\" $\mathrm{Im}\,G$'s via the integral representation [\[eq:intrepIM\]](#eq:intrepIM){reference-type="eqref" reference="eq:intrepIM"} as in the proof of the $G^2$-Lemma [Lemma 17](#lem:G^2lemma){reference-type="ref" reference="lem:G^2lemma"}. Instead, in the course of proving an analog of Lemma [Lemma 17](#lem:G^2lemma){reference-type="ref" reference="lem:G^2lemma"} (again suppressing the time dependence of the $z$'s as well as $\eta$ and $\rho$) it is sufficient to apply resolvent identities for $|z_i- z_j| \ge \eta$ and the integral representation $$G(z_i) G(z_j) = \frac{1}{2 \pi \mathrm{i}} \int_\Gamma \frac{G(w)}{(w-z_i)(w-z_j)} \mathrm{d}w\,,$$ for $|z_i - z_j| \le \eta$. In this case $z_i$ and $z_j$ are necessarily on the same halfplane ($\mathrm{Im}\,z_i \mathrm{Im}\,z_j > 0$) and, just as in [\[eq:niceintrep\]](#eq:niceintrep){reference-type="eqref" reference="eq:niceintrep"}, $\Gamma$ is a tiny contour encircling $z_i, z_j \in {\mathbb C}\setminus {\mathbb R }$ in such a way that $\mathrm{dist}(\Gamma, \{z_i,z_j\}) \sim \eta$, which ensures that $|\mathrm{Im}\,m(w)| \lesssim \max_{i \in [k]} \rho_i$ on $\Gamma$ as follows by elementary continuity properties of $m(w)$. 
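As a quick numerical sanity check (not part of the argument), the integral representation above can be tested on a small symmetric matrix: by the residue theorem, the contour integral picks up the poles at $z_i$ and $z_j$ and reproduces $G(z_i)G(z_j)$ via the resolvent identity. The matrix size, spectral parameters, contour radius and node count below are illustrative choices, not quantities from the proof.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
A = rng.standard_normal((n, n))
H = (A + A.T) / 2                          # small symmetric stand-in for a Wigner matrix

def G(z):
    return np.linalg.inv(H - z * np.eye(n))

z1, z2 = 0.10 + 0.50j, -0.10 + 0.60j       # same half-plane: Im z1 * Im z2 > 0
c, r = (z1 + z2) / 2, 0.3                  # small contour around z1, z2, away from the real axis
theta = np.linspace(0.0, 2.0 * np.pi, 2001)[:-1]
w = c + r * np.exp(1j * theta)
dw = 1j * r * np.exp(1j * theta) * (2.0 * np.pi / len(theta))

# (2*pi*i)^{-1} * \int_Gamma G(w) / ((w - z1)(w - z2)) dw, periodic trapezoidal rule
integral = sum(G(wk) / ((wk - z1) * (wk - z2)) * dwk
               for wk, dwk in zip(w, dw)) / (2j * np.pi)
assert np.allclose(integral, G(z1) @ G(z2), atol=1e-8)
```

Since the integrand is analytic in a neighborhood of the contour (the spectrum of $H$ is real and stays outside $\Gamma$), the periodic trapezoidal rule converges exponentially fast, so a couple of thousand nodes already give near machine precision.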
As a consequence, for fixed $k \in {\mathbb N}$, we find, assuming $\Psi_l\prec \psi_l$ for some control parameters $\psi_l \ge 1$ for $l=1,2,\ldots , k$ in the usual sense of uniformity explained below [\[phiphi\]](#phiphi){reference-type="eqref" reference="phiphi"}, that $$\left| \big\langle M_{[i, j]} \big\rangle \right| \prec \frac{1}{\eta} \, N^{\frac{j-i}{2}-1} \, \Big(\prod_{m = i}^{j-1} \langle |A_m|^2 \rangle^{1/2}\Big)\,,$$ as an analog of [\[eq:redbm\]](#eq:redbm){reference-type="eqref" reference="eq:redbm"} and $$\begin{split} \left| \big\langle {G}_{[i,j]} - {M}_{[i,j]}\big\rangle \right| \prec \, \frac{1}{\eta} \, N^{\frac{j-i}{2}-1} \sqrt{\frac{\rho }{{N \eta}}} \, \Big(\prod_{m = i}^{j-1} \langle |A_m|^2 \rangle^{1/2}\Big) {\psi}_{j-i} \,, \end{split}$$ as an analog of [\[eq:redbg\]](#eq:redbg){reference-type="eqref" reference="eq:redbg"}, for all $i,j \in [k]$ with $j-i \ge 1$. Overall, using the above two simplifications and following the arguments in [\[eq:boundneediter\]](#eq:boundneediter){reference-type="eqref" reference="eq:boundneediter"}--[\[eq:2ndtermlhs\]](#eq:2ndtermlhs){reference-type="eqref" reference="eq:2ndtermlhs"}, we arrive at the following new set of master inequalities. **Proposition 20** (Master inequalities II). *Fix $k\in {\mathbb N}$ and $t\in [0,T]$. Assume that $\Psi_l(s)\prec \psi_l$ for any $1\le l\le k$ uniformly in $s\in [0,t]$ (in the sense of [\[phiphi\]](#phiphi){reference-type="eqref" reference="phiphi"}) and set $\psi_0 :=1$. 
Then we have the *master inequalities* $$\begin{split} \label{eq:AVmasterITERATE2} \Psi_k(t)\prec 1+ \frac{1}{N \hat{\ell}_t}\sum_{l=1}^{k} {\psi}_l +\frac{1}{(N\hat{\ell}_{t})^{3/2}}\sum_{l=1}^{k-1} {\psi}_l {\psi}_{k-l} +\frac{|\sigma|}{(N\hat{\ell}_t)^{1/4}}\sum_{l=1}^k\sqrt{\psi_{2l}}+\frac{|\sigma|}{N\hat{\ell}_t}\sum_{l=0}^k \sqrt{\psi_{2l}\psi_{2(k-l)}} \end{split}$$ where we denoted $\hat{\ell}_t = \min_{i \in [k]} \eta_{i,t} \rho_{i,t}$ for brevity (recall [\[eq:newintrule\]](#eq:newintrule){reference-type="eqref" reference="eq:newintrule"}; not to be confused with the $\ell$ used around [\[eq:mainAVproof\]](#eq:mainAVproof){reference-type="eqref" reference="eq:mainAVproof"}!).* Using that $N \hat{\ell}_t \ge N^\epsilon$ and iteration (Lemma [Lemma 16](#lem:iteration){reference-type="ref" reference="lem:iteration"}), analogously to Section [4.1.1](#subsec:master){reference-type="ref" reference="subsec:master"}, we can immediately deduce that $\Psi_k(T) \prec 1$ where $T$ is the time defined in the statement of Proposition [Proposition 10](#prop:zig){reference-type="ref" reference="prop:zig"}. This concludes the proof of Proposition [Proposition 10](#prop:zig){reference-type="ref" reference="prop:zig"} (b) for the remaining case $\mathfrak{I}_k = \emptyset$. ## Modifications for general $\sigma={\mathbb E }\chi_{\mathrm{od}}^2$. {#sec:general} The proof of Proposition [Proposition 10](#prop:zig){reference-type="ref" reference="prop:zig"} presented so far assumed for simplicity that $\sigma = {\mathbb E }\chi^2_{\mathrm{od}}$ is real and ${\mathbb E }\chi^2_{\mathrm{d}} = 1+\sigma$. We now explain how to prove the general case, when these two restrictions are lifted. The only changes concern the choice of the initial condition and of the evolution $B_t$ in the flow [\[eq:OUOUOU\]](#eq:OUOUOU){reference-type="eqref" reference="eq:OUOUOU"}. 
If $\sigma$ is not real, we modify the evolution in [\[eq:OUOUOU\]](#eq:OUOUOU){reference-type="eqref" reference="eq:OUOUOU"} in such a way that the entries of $B_t$ are $\sqrt{t}$ times a standard complex Gaussian, and we modify the initial condition in [\[eq:OUOUOU\]](#eq:OUOUOU){reference-type="eqref" reference="eq:OUOUOU"} from $W_0=W$ to $W_0=\widetilde{W}_T$, with another Wigner matrix $\widetilde{W}_T$ prepared such that $$\label{eq:simpl1} e^{-T/2}\widetilde{W}_T+\sqrt{1-e^{-T}}U\stackrel{\mathrm{d}}{=}W.$$ Here $U$ is a GUE matrix, which is independent of $\widetilde{W}_T$. We point out that the limiting eigenvalue density of $\widetilde{W}_T$ does not change along the flow [\[eq:OUOUOU\]](#eq:OUOUOU){reference-type="eqref" reference="eq:OUOUOU"} as a consequence of the fact that ${\mathbb E }|(W_t)_{ab}|^2$, for $a>b$, is preserved, and only $${\mathbb E }(W_t)_{ab}^2=e^{-t} {\mathbb E }(\widetilde{W}_T)_{ab}^2, \qquad\quad {\mathbb E }(W_t)_{aa}^2=e^{-t/2}{\mathbb E }(\widetilde{W}_T)_{aa}^2 + \frac{1}{N} \sqrt{1-e^{-t}}\,, \qquad t\in [0, T]\,,$$ change. The fact that ${\mathbb E }(W_t)_{ab}^2$ and ${\mathbb E }(W_t)_{aa}^2$ do change along the flow contributes to a change of order $1/N$ in the averaged Stieltjes transform of $W_t$; this change is easily seen to be negligible for the precision of the local laws we are considering here.
If $\sigma\in{\mathbb R }$ but ${\mathbb E }\chi_{\mathrm{d}}^2 \ne 1+\sigma$, similarly to [\[eq:simpl1\]](#eq:simpl1){reference-type="eqref" reference="eq:simpl1"}, we choose $B_t$ so that its entries have variance $t$ times the variance of $W$ for the off--diagonal entries and ${\mathbb E }(B_t)_{aa}^2=(1+\sigma)t$, and we can prepare yet another Wigner matrix $\widehat{W}_T$ such that $$\label{eq:simpl2} e^{-T/2}\widehat{W}_T+\sqrt{1-e^{-T}}\widehat{U}\stackrel{\mathrm{d}}{=}W,$$ with $\widehat{U}$ being independent of $\widehat{W}_T$ and having the same entry distribution as $W$ except for the diagonal entries having variance ${\mathbb E }\widehat{U}_{aa}^2=\frac{1}{N}(1+\sigma)$. The second moments of $(\widehat{W}_t)_{ab}$ are preserved, and only the diagonal changes according to $${\mathbb E }(\widehat{W}_t)_{aa}^2=e^{-t/2}{\mathbb E }(\widehat{W}_T)_{aa}^2+\frac{1}{N}\sqrt{1-e^{-t}}(1+\sigma);$$ hence the limiting eigenvalue distribution is still given by the semicircular law. # Green function comparison: Proof of Proposition [Proposition 11](#prop:zag){reference-type="ref" reference="prop:zag"} {#sec:GFT} In this section, we remove the Gaussian component introduced in Proposition [Proposition 10](#prop:zig){reference-type="ref" reference="prop:zig"} by a Green function comparison (GFT) argument, i.e. we prove Proposition [Proposition 11](#prop:zag){reference-type="ref" reference="prop:zag"}. For simplicity, we will write the detailed proof only in the case of no imaginary parts, i.e. $\mathfrak{I}_k = \emptyset$ and $\mathfrak{I}_{k+1} = \emptyset$ in the average and isotropic case, respectively. The minor modifications needed for handling the other cases will be briefly discussed in Section [5.4](#subsec:withIM){reference-type="ref" reference="subsec:withIM"} below.
Before entering the proof, we point out that typical GFT arguments (starting from [@TVActa]) are used to compare the distribution of a genuinely fluctuating observable under two different matrix ensembles whose single entry distributions have matching first few moments. Technically, a family of interpolating ensembles is constructed which may be finite (e.g. Lindeberg replacement strategy) or continuous (e.g. along an Ornstein-Uhlenbeck flow) and the change of the distribution in question is closely monitored along the interpolation. In this standard setup for GFT, however, local laws serve as *a priori* bounds obtained by independent methods and they are assumed to hold for all interpolating ensembles in between. In other words, concentration--type information about resolvents $G(z)$ with $\mathrm{Im}\,z$ well above the eigenvalue spacing is turned into information on the distribution of $G(z)$ with $\mathrm{Im}\,z$ at, or even slightly below the eigenvalue spacing. Our application of GFT is different in spirit, since we aim to prove local laws for one ensemble knowing them for the other one. Thus GFT needs to be done *self-consistently* with monitoring a carefully designed quantity that satisfies a Gronwall-type inequality along the interpolation. We remark that more than ten years ago Knowles and Yin in [@KnowYin2] used GFT in a similar spirit to prove a single resolvent local law for ensembles where the deterministic approximation $M$ to $G$ is not a multiple of the identity matrix (for example deformed Wigner matrices). Later, a much more direct and generally applicable alternative method based upon the matrix Dyson equation [@AEK1; @AEKS] has been developed to prove such local laws without GFT. Our current dynamical approach revives the idea of a self-consistent GFT, since it naturally serves as a counterpart of the characteristic flow to remove the Gaussian component added along that flow.
In fact, the approach of [@KnowYin2] also used a tandem of gradual reduction of $\eta=\mathrm{Im}\,z$ (called *bootstrapping* steps) and a self-consistent GFT (called *interpolation* steps), see Fig. 1.1 in [@KnowYin2]. However, the bootstrapping step in [@KnowYin2] was much less effective than the characteristic flow, which does the $\eta$-reduction in one step even for a much more complex multi-resolvent chain. In the GFT step, we use the simple entry-by-entry Lindeberg replacement strategy, which is better suited to our complicated resolvent chains, instead of a special continuous interpolation as in [@KnowYin2], but the core of both techniques is a self-consistent Gronwall argument. The main technical challenge in our proof is that the error in one step of the Lindeberg replacement is not always sufficiently small, but by carefully monitoring the errors in each step, we gain from summing them up explicitly. We will explain this mechanism in Example [Example 32](#ex:ave){reference-type="ref" reference="ex:ave"}. Now we turn to the actual proof. Recalling the notations $$\label{eq:parameters} \eta := \min_i |\mathrm{Im}\,z_i| \quad \text{and} \quad \rho:= \pi^{-1} \max_i |\mathrm{Im}\,m_i|\,,$$ we begin by distinguishing the *averaged* and *isotropic* control quantities $$\begin{aligned} \label{eq:Psikav} \Psi_k^{\rm av} & := \frac{\sqrt{N \eta}}{N^{k/2-1} \sqrt{\rho}} \big| \big\langle \big(G_1 A_1 ... G_k - M_{[1,k]}\big)A_k \big\rangle\big| \\ \label{eq:Psikiso} \Psi_k^{\rm iso} (\bm x, \bm y) &:= \frac{\sqrt{N\eta}}{N^{k/2} \sqrt{\rho}} \big| \big(G_1 A_1 ... A_k G_{k+1} - M_{[1,k+1]}\big)_{\bm{x}\bm{y}} \big|\,, \end{aligned}$$ where $\bm x , \bm y \in {\mathbb C}^N$ are unit deterministic vectors and the traceless matrices $A_i \in {\mathbb C}^{N \times N}$ are assumed to have normalized Hilbert-Schmidt norms, $\langle |A_i|^2 \rangle^{1/2} = 1$.
Recall that, in [\[eq:Psikav\]](#eq:Psikav){reference-type="eqref" reference="eq:Psikav"}--[\[eq:Psikiso\]](#eq:Psikiso){reference-type="eqref" reference="eq:Psikiso"}, we only consider chains without $\mathrm{Im}\,G$'s; the more general cases will be discussed later in Section [5.4](#subsec:withIM){reference-type="ref" reference="subsec:withIM"}. Finally, we point out that our notation in [\[eq:Psikav\]](#eq:Psikav){reference-type="eqref" reference="eq:Psikav"}--[\[eq:Psikiso\]](#eq:Psikiso){reference-type="eqref" reference="eq:Psikiso"} already suppresses the dependence on the spectral parameters and deterministic matrices, since the sets of these are considered fixed along the argument. In the following, we will often say that an estimate on $\Psi$ holds *uniformly*, by which we will always mean uniformity in all unit deterministic vectors and all choices of subsets of spectral parameters and deterministic matrices as explained in Proposition [Proposition 11](#prop:zag){reference-type="ref" reference="prop:zag"} (a). Now, the goal of this section is to prove Proposition [Proposition 11](#prop:zag){reference-type="ref" reference="prop:zag"}. More precisely, we will show that, if the optimal multi-resolvent local laws $$\label{eq:multiG} \Psi_k^{\rm av} + \Psi_k^{\rm iso} \prec 1, \quad \text{for all fixed} \quad k \in {\mathbb N},$$ hold *uniformly* for a Wigner matrix with some given single entry distributions, then they also hold for every other Wigner matrix with different single entry distributions, again *uniformly*, provided that the *first three moments* of the entries of these two ensembles *match*. A fundamental input for our proof is that the corresponding single resolvent local laws hold for *every* Wigner matrix ensemble [@EYY2012; @KnowYin; @BEKYY], i.e. the following Green function comparison argument is not needed for them. **Theorem 21**.
*For fixed $\epsilon > 0$, we have $$\label{eq:singleGoptimal} |\langle G -m \rangle| \prec \frac{1}{N \eta} \,, \qquad \big|\big(G-m\big)_{\bm{x} \bm{y}}\big| \prec \sqrt{\frac{\rho}{N \eta}} + \frac{1}{N \eta}$$ uniformly in unit deterministic vectors $\bm x, \bm y$ and at spectral parameter $z \in {\mathbb C}\setminus {\mathbb R }$ with $\eta=|\mathrm{Im}\,z| \ge N^{-1+\epsilon}$ and $\mathrm{Re}\,z \in {\mathbb R }$, where $\rho = \pi^{-1} |\mathrm{Im}\,m(z)|$.* For convenience, these single resolvent laws will be expressed in the compact form $$\Psi_0^{\rm av} + \Psi_0^{\rm iso} \prec 1\,,$$ which extends [\[eq:Psikav\]](#eq:Psikav){reference-type="eqref" reference="eq:Psikav"}--[\[eq:Psikiso\]](#eq:Psikiso){reference-type="eqref" reference="eq:Psikiso"} when no traceless matrices $A$ are present (see, e.g., [@multiG; @A2]). Before starting the proof, we recall some notation which has already been used in the statement of Proposition [Proposition 11](#prop:zag){reference-type="ref" reference="prop:zag"}. We will distinguish between the two ensembles compared in the GFT argument by using different letters, $v_{ab}$ and $w_{ab}$, for their matrix elements, and we shall occasionally use the notation $H^{(\bm{v})}$ and $H^{(\bm{w})}$ to indicate the difference. Alternatively, one could denote the matrix elements by a universal letter $h_{ab}$ and distinguish the two ensembles in the underlying measure, especially in the expectations ${\mathbb E }_{\bm{v}}$ and ${\mathbb E }_{\bm{w}}$. However, since the proof of Proposition [Proposition 11](#prop:zag){reference-type="ref" reference="prop:zag"} works by replacing the matrix elements one-by-one in $N(N+1)/2$ steps, we use the first notation, analogously to [@EYbook Section 16]. 
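For illustration only, the averaged bound of Theorem 21 can be observed numerically at a global scale $\eta \sim 1$ by comparing $\langle G \rangle$ with the semicircle Stieltjes transform $m(z)$, the root of $m^2 + zm + 1 = 0$ with $\mathrm{Im}\,m \cdot \mathrm{Im}\,z > 0$. The GUE-like normalization, matrix size, spectral parameter and the generous constant in the tolerance are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 300
X = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
H = (X + X.conj().T) / (2 * np.sqrt(N))    # GUE-like: E|H_ab|^2 = 1/N, semicircle on [-2, 2]

z = 0.3 + 0.5j
eigs = np.linalg.eigvalsh(H)
avg_G = np.mean(1.0 / (eigs - z))          # <G(z)> = N^{-1} Tr (H - z)^{-1}

# m(z): root of m^2 + z m + 1 = 0 on the branch with Im m and Im z of equal sign
s = np.sqrt(z * z - 4)
m = (-z + s) / 2
if m.imag * z.imag < 0:
    m = (-z - s) / 2

eta = abs(z.imag)
assert abs(avg_G - m) < 10 / (N * eta)     # |<G - m>| ≺ 1/(N eta), generous constant
```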
## Preliminaries The principal idea of the proof is as follows: First, we fix a bijective ordering $$\phi : \{(i,j) \in[N]^2 : i \le j \} \to [\gamma(N)]\,, \qquad \gamma(N):= \frac{N(N+1)}{2}$$ on the index set of independent entries of a Wigner matrix. Then, according to the induced ordering, the matrix elements are swapped one-by-one from the distribution $v_{ab}$ to $w_{ab}$ in $\gamma(N) \sim N^2$ steps. In particular, at step $\gamma \in \{0\}\cup[\gamma(N)]$ in this replacement procedure, the resulting matrix $H^{(\gamma)}$ has entries which are distributed according to $w_{ij}$ whenever $\phi\big((i,j)\big) \le \gamma$ and according to $v_{ij}$ whenever $\phi\big((i,j)\big) > \gamma$, i.e. $H^{(0)} = H^{(\bm{v})}$ and $H^{(\gamma(N))} = H^{(\bm{w})}$. This one-by-one replacement of the matrix elements naturally requires understanding the *isotropic* law [\[eq:zagmultiGISO\]](#eq:zagmultiGISO){reference-type="eqref" reference="eq:zagmultiGISO"}, as already indicated in [\[eq:Psikiso\]](#eq:Psikiso){reference-type="eqref" reference="eq:Psikiso"}. In order to derive [\[eq:multiG\]](#eq:multiG){reference-type="eqref" reference="eq:multiG"} also for $H^{(\bm{w})}$, we compute high moments of $\Psi_k^{\rm av/iso}$ for $H^{(\gamma)}$ and $H^{(\gamma-1)}$ for general $\gamma \in [\gamma(N)]$ and compare the results. Given sufficiently good one-step bounds, a telescopic argument will yield the estimate [\[eq:multiG\]](#eq:multiG){reference-type="eqref" reference="eq:multiG"} also for $H^{(\bm{w})}$. These \"sufficiently good\" one-step bounds are essentially required to accommodate the large number $O(N^2)$ of necessary replacements in order to arrive at $H^{(\gamma(N))}$. A key feature of our proof, in contrast to previous applications of the replacement strategy, is that the errors will not always be $o(N^{-2})$ in each step, but their cumulative size after summation is still $o(1)$.
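The one-by-one replacement procedure just described can be sketched in a few lines; the small size $N=4$, the row-major ordering $\phi$ on the upper triangle, and the Gaussian test entries are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 4

def wigner(rng):
    # real symmetric test matrix with E H_ab^2 = 1/N (illustrative normalization)
    A = rng.standard_normal((N, N)) / np.sqrt(N)
    return (A + A.T) / np.sqrt(2)

Hv, Hw = wigner(rng), wigner(rng)          # the two ensembles H^(v) and H^(w)

# a bijective ordering phi on the independent entries {(i, j) : i <= j}
pairs = [(i, j) for i in range(N) for j in range(i, N)]
assert len(pairs) == N * (N + 1) // 2      # = gamma(N)

H = Hv.copy()                              # H^(0) = H^(v)
for (i, j) in pairs:                       # step gamma = position of (i, j) in the ordering
    H_prev = H.copy()
    H[i, j], H[j, i] = Hw[i, j], Hw[j, i]  # swap a single pair of matrix elements
    # H^(gamma) and H^(gamma - 1) differ only at positions (i, j) and (j, i)
    assert all((a, b) in {(i, j), (j, i)} for a, b in np.argwhere(H != H_prev))

assert np.array_equal(H, Hw)               # H^(gamma(N)) = H^(w)
```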
The proof of Proposition [Proposition 11](#prop:zag){reference-type="ref" reference="prop:zag"} is divided into two main parts: At first, in **Part (a)**, Section [5.2](#subsec:iso){reference-type="ref" reference="subsec:iso"}, we show the isotropic part of [\[eq:multiG\]](#eq:multiG){reference-type="eqref" reference="eq:multiG"}, that is $\Psi_k^{\rm iso} \prec 1$, via a double induction on the number $k \in {\mathbb N}$ of traceless matrices and the moment $p \in {\mathbb N}$ taken of $\Psi_k^{\rm iso}$, i.e. ${\mathbb E }|\Psi_k^{\rm iso}|^p$. Thereby, we crucially use that the $\prec$-bound is (essentially) equivalent to controlling arbitrarily high moments up to an $N^\xi$-error with arbitrarily small $\xi> 0$. Afterwards, in **Part (b)**, Section [5.3](#subsec:av){reference-type="ref" reference="subsec:av"}, using Part (a) as an input, we will demonstrate $\Psi_k^{\rm av} \prec 1$ (and thus conclude the proof of Proposition [Proposition 11](#prop:zag){reference-type="ref" reference="prop:zag"} for $\mathfrak{I}_{k+1}=\emptyset$ resp. $\mathfrak{I}_k=\emptyset$) for every fixed $k$ via a single induction on the moment $p$. The main reason for this order of the argument is that the one-by-one replacement in step $\gamma$ is conducted via resolvent expansion focusing on the differing matrix entries at positions $(i,j) = \phi^{-1}(\gamma)$ and $(j,i)$, and thereby it naturally produces isotropic quantities (see Lemma [Lemma 23](#lem:resolexp){reference-type="ref" reference="lem:resolexp"} below). Hence, the argument for $\Psi_k^{\rm av}$ cannot be self-contained and must rely on $\Psi_k^{\rm iso}$, which in fact will not involve the averaged local laws at all. We fix some further notation.
We have an initial Wigner matrix $H^{(0)}:= H^{(\bm{v})}$ and iteratively define $$H^{(\gamma)} := H^{(\gamma-1)} - \frac{1}{\sqrt{N}}\Delta_V^{(\gamma)} + \frac{1}{\sqrt{N}}\Delta_W^{(\gamma)},$$ a sequence of Wigner matrices for $\gamma \in [\gamma(N)]$, where we denoted[^18] $$\begin{aligned} \label{eq:Deltas} \Delta_V^{(\gamma)}:= \sqrt{N}\frac{E^{(ij)} (H^{(\bm{v})})_{ij} + E^{(ji)} (H^{(\bm{v})})_{ji} }{1 + \delta_{ij}} \quad \text{and} \quad \Delta_W^{(\gamma)}:= \sqrt{N}\frac{E^{(ij)}(H^{(\bm{w})})_{ij} + E^{(ji)} (H^{(\bm{w})})_{ji} }{1 + \delta_{ij}} \,. \end{aligned}$$ Here, $\phi\big((i,j)\big) = \gamma$ and $E^{(ij)}$ denotes the matrix whose matrix elements are zero everywhere except at position $(i,j)$, i.e. $(E^{(ij)})_{k\ell} = \delta_{i k} \delta_{j \ell}$. The denominator $1 + \delta_{ij}$ is introduced to account for the factor of two in the numerator occurring for diagonal indices. Note that $H^{(\gamma)}$ and $H^{(\gamma-1)}$ differ only in the $(i,j)$ and $(j,i)$ matrix elements, and they can be written as $$\begin{aligned} H^{(\gamma-1)} = \widecheck{H}^{(\gamma)} + \frac{1}{\sqrt{N}} \Delta_V^{(\gamma)} \quad \text{and} \quad H^{(\gamma)} = \widecheck{H}^{(\gamma)} + \frac{1}{\sqrt{N}} \Delta_W^{(\gamma)} \end{aligned}$$ with a matrix $\widecheck{H}^{(\gamma)}$ whose matrix elements are zero at the $(i,j)$ and $(j,i)$ positions. Similarly, we denote the corresponding resolvents at spectral parameter $z_j \in {\mathbb C}\setminus {\mathbb R }$ by $$\label{GG} G_j^{(\gamma)} := (H^{(\gamma)} - z_j)^{-1}\,, \quad G_j^{(\gamma-1)} := (H^{(\gamma-1)} - z_j)^{-1}\,, \quad \text{and} \quad \widecheck{G}_j^{(\gamma)} := (\widecheck{H}^{(\gamma)} - z_j)^{-1}\,.$$ Observe that, at each step $\gamma$ in the replacement procedure, the deterministic approximation to a resolvent chain involving $G^{(\gamma)}$ is the same.
This is because only the first two moments of the matrix elements of $H^{(\gamma)}$ determine this approximation, symbolically denoted by $M$, via the *Matrix Dyson Equation (MDE)*, see, e.g., [@MDEreview]. For a chain in the checked resolvents $\widecheck{G}$, the approximating $M$ differs *in principle* from the non-checked ones, simply because the self-energy operator $\widecheck{\mathcal{S}}^{(\gamma)}[R] = {\mathbb E }[\widecheck{H}^{(\gamma)} R \widecheck{H}^{(\gamma)}]$ associated with $\widecheck{H}^{(\gamma)}$ is no longer exactly the averaged trace $\langle \cdot \rangle$. However, since this discrepancy introduces an error of size $1/N$ in the MDE, which is a stable equation, this will not be visible in the local laws [\[eq:multiG\]](#eq:multiG){reference-type="eqref" reference="eq:multiG"}. Therefore, we shall henceforth ignore this minor point and shall just define the normalized differences $$\Psi_k^{{\rm av}, (\gamma)}\,, \quad \widecheck{\Psi}_k^{{\rm av}, (\gamma)} \,, \quad \Psi_k^{{\rm iso}, (\gamma)} (\bm x, \bm y)\,, \quad \text{and} \quad \widecheck{\Psi}_k^{{\rm iso}, (\gamma)} (\bm x, \bm y) \,,$$ exactly as in [\[eq:Psikav\]](#eq:Psikav){reference-type="eqref" reference="eq:Psikav"}--[\[eq:Psikiso\]](#eq:Psikiso){reference-type="eqref" reference="eq:Psikiso"}, but with $G_j$ replaced by $G_j^{(\gamma)}$ and $\widecheck{G}_j^{(\gamma)}$, respectively. We emphasize again that the deterministic counterparts in all of the normalized differences are the *same*. We can now turn to the actual proof. ## Part (a): Proof of the isotropic law {#subsec:iso} In this first part, we exclusively work with isotropic quantities and we shall hence drop the superscript $^{\rm iso}$ in the entire Section [5.2](#subsec:iso){reference-type="ref" reference="subsec:iso"}. As already mentioned above, we shall prove the claim by a *double induction* on $k$ and the moment $p$ taken of $\Psi_k$, i.e. ${\mathbb E }|\Psi_k|^p$.
Thereby, the primary induction parameter is $k$ and our goal is to show that, if for some $k \in {\mathbb N}$ we have $$\label{eq:ind0} \max_{\gamma \le \gamma(N)}\Psi^{(\gamma)}_{k'} + \max_{\gamma \le \gamma(N)}\widecheck{\Psi}^{(\gamma)}_{k'} \prec 1\,, \qquad \forall \, k' \in \{0, ... , k-1\}\,,$$ then also $$\label{eq:ind1} \max_{\gamma \le \gamma(N)}\Psi^{(\gamma)}_{k} + \max_{\gamma \le \gamma(N)}\widecheck{\Psi}^{(\gamma)}_{k} \prec 1\,.$$ Within the proof of [\[eq:ind1\]](#eq:ind1){reference-type="eqref" reference="eq:ind1"}, for a fixed $k$, we will then crucially use that the $\prec$-bound is equivalent to controlling arbitrarily high moments ${\mathbb E }|\Psi_k|^p$ up to an $N^\xi$-error for an arbitrarily small $\xi > 0$. Therefore, we use another secondary induction on the moment $p$. More precisely, in order to establish [\[eq:ind1\]](#eq:ind1){reference-type="eqref" reference="eq:ind1"} from [\[eq:ind0\]](#eq:ind0){reference-type="eqref" reference="eq:ind0"}, our goal is to show that, for any fixed $k\in {\mathbb N}$, if for some $p \in{\mathbb N}$ we have that $$\max_{\gamma \le \gamma(N)}\big\Vert \Psi^{(\gamma)}_{k} \big\Vert_{p-1} + \max_{\gamma \le \gamma(N)}\big\Vert \widecheck{\Psi}^{(\gamma)}_{k} \big\Vert_{p-1} \lesssim N^\xi$$ for any $\xi>0$, then also $$\label{eq:conclusion} \max_{\gamma \le \gamma(N)}\big\Vert \Psi^{(\gamma)}_{k} \big\Vert_{p} + \max_{\gamma \le \gamma(N)}\big\Vert \widecheck{\Psi}^{(\gamma)}_{k} \big\Vert_{p} \lesssim N^\xi \,$$ holds for any $\xi>0$, where the implicit constants depend on $k$, $p$ and $\xi$. Here, for a random variable $X$, we used the definition $\Vert X\Vert_{p}:=[{\mathbb E }|X|^p]^{1/p}$.
To summarize, as the *induction hypothesis*, given some arbitrary fixed $p,k \in {\mathbb N}$, we will assume that $$\label{eq:inductionhypo} \max_{\gamma \le \gamma(N)}\Psi^{(\gamma)}_{k'} + \max_{\gamma \le \gamma(N)}\widecheck{\Psi}^{(\gamma)}_{k'} \prec 1 \quad \text{and} \quad \max_{\gamma \le \gamma(N)}\big\Vert \Psi^{(\gamma)}_{k} \big\Vert_{p-1} + \max_{\gamma \le \gamma(N)}\big\Vert \widecheck{\Psi}^{(\gamma)}_{k} \big\Vert_{p-1} \le C_{k,p,\xi} N^\xi$$ hold uniformly for all ${k' \in \{0,...,k-1\}}$ and $\xi > 0$ with an appropriate $N$-independent constant. Then we will conclude [\[eq:conclusion\]](#eq:conclusion){reference-type="eqref" reference="eq:conclusion"}. The overall *base case* ($k=1$, $p=1$) is easy to verify: it solely consists of the usual isotropic law (the first estimate in [\[eq:inductionhypo\]](#eq:inductionhypo){reference-type="eqref" reference="eq:inductionhypo"} for $k'=0$) and the trivial bound ${\mathbb E }| \Psi_{1}|^0 = 1$ (the second estimate in [\[eq:inductionhypo\]](#eq:inductionhypo){reference-type="eqref" reference="eq:inductionhypo"} for $k=1$ and $p=1$). We start with two arbitrary but fixed bounded deterministic vectors $\Vert \bm x \Vert, \Vert \bm y \Vert \lesssim 1$ and introduce the set $$\label{eq:Ixy} I_{\bm x \bm y} := \{\bm x, \bm y\} \cup \{{\bm e}_a : a \in [N]\} \subset {\mathbb C}^N$$ of vectors, which will naturally arise along the argument (see [\[eq:vectormax\]](#eq:vectormax){reference-type="eqref" reference="eq:vectormax"} below), where ${\bm e}_a$ denotes the standard basis vector in the coordinate direction $a$. Note that the cardinality of $I_{\bm x \bm y}$ is $N+2$. After defining[^19] $$\Omega_k^p(\gamma) := \max_{\bm u, \bm v \in I_{\bm x \bm y}} \Vert {\Psi}_k^{(\gamma)}(\bm u, \bm v) \Vert_p^p\,$$ (we omitted the dependence on $\bm x, \bm y$ in the notation, as they are considered fixed along the whole argument), the principal goal of the induction step is to prove the following proposition. 
**Proposition 22** (Gronwall estimate). *Fix $p, k \in {\mathbb N}$ and assume [\[eq:inductionhypo\]](#eq:inductionhypo){reference-type="eqref" reference="eq:inductionhypo"} holds. Then, for any $\xi > 0$, there exist some constants $C_1, C_2 >0$ (depending on $p$, $k$, and $\xi$, but independent of $N$, $\bm x$, and $\bm y$) such that $$\label{eq:gronwall} \Omega_k^p(\gamma_0) \le C_1 \frac{1}{N^2} \sum_{\gamma < \gamma_0} \Omega_k^p(\gamma) + C_2 N^\xi$$ for every $\gamma_0 \in [\gamma(N)]$.* Note that [\[eq:gronwall\]](#eq:gronwall){reference-type="eqref" reference="eq:gronwall"} is a discrete Gronwall inequality for $\Omega_k^p(\gamma)$. Hence, having Proposition [Proposition 22](#prop:gronwall){reference-type="ref" reference="prop:gronwall"} at hand (note that, in particular, $\Omega_k^p(0) \le C_2 N^\xi$), we obtain $$\label{eq:gronwallconclude} \max_{\gamma \le \gamma(N)} \Omega_k^p(\gamma) \le C_2 \mathrm{e}^{C_1} N^\xi \le C_3(k,p,\xi) N^\xi\,,$$ uniformly in $\bm x$ and $\bm y$ and all choices of spectral parameters and traceless deterministic matrices, which then implies the $\Psi$-part of [\[eq:conclusion\]](#eq:conclusion){reference-type="eqref" reference="eq:conclusion"}. In the next subsections we present auxiliary results necessary for the proof of Proposition [Proposition 22](#prop:gronwall){reference-type="ref" reference="prop:gronwall"} which will then be concluded in Section [5.2.5](#sec:Propgron){reference-type="ref" reference="sec:Propgron"}. The $\widecheck{\Psi}$-part of [\[eq:conclusion\]](#eq:conclusion){reference-type="eqref" reference="eq:conclusion"} and thus the induction step will finally be completed in Section [5.2.6](#sec:complete){reference-type="ref" reference="sec:complete"}. In order to simplify notation, we shall henceforth drop the subscripts for all resolvents and deterministic matrices, i.e. write $G_j =G$ and $A_j =A$ instead. 
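The conclusion drawn from the discrete Gronwall inequality can be checked numerically by saturating the inequality with equality; the constants below are arbitrary toy values, and we take $N^\xi = 1$ for simplicity.

```python
import numpy as np

N = 50
C1, C2 = 2.0, 3.0                     # arbitrary toy constants
gamma_N = N * (N + 1) // 2
omega = np.zeros(gamma_N + 1)
omega[0] = C2                         # Omega(0) <= C2
partial_sum = omega[0]

for g0 in range(1, gamma_N + 1):
    # worst case: saturate Omega(g0) = C1/N^2 * sum_{gamma < g0} Omega(gamma) + C2
    omega[g0] = C1 / N**2 * partial_sum + C2
    partial_sum += omega[g0]

# induction gives Omega(g0) <= C2 (1 + C1/N^2)^{g0} <= C2 exp(C1 (N+1)/(2N)) <= C2 e^{C1}
assert omega.max() <= C2 * np.exp(C1)
```

The point is that although $\gamma(N) \sim N^2$ terms are summed, each carries the small factor $C_1/N^2$, so the worst-case growth is geometric with rate $1 + C_1/N^2$ and exponentiates to the fixed constant $e^{C_1}$.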
### Preliminaries The fundamental building block of our proof is the following elementary lemma on resolvent expansion. Note that we need to express $G^{(\gamma-1)}, G^{(\gamma)}$ in terms of the \"unperturbed\" resolvent $\widecheck{G}^{(\gamma)}$ of $\widecheck{H}^{(\gamma)}$ that has zero elements in the $\gamma$-th position, and conversely, we need to express $\widecheck{G}^{(\gamma)}$ in terms of both \"perturbed\" resolvents using $\Delta_V^{(\gamma)}$ and $\Delta_W^{(\gamma)}$ from [\[eq:Deltas\]](#eq:Deltas){reference-type="eqref" reference="eq:Deltas"} as perturbations, see [\[GG\]](#GG){reference-type="eqref" reference="GG"}. We work with finite resolvent expansions up to some order $m$, independent of $N$, to be determined later. The last term therefore always contains the original resolvent as well, and it will have to be estimated deterministically by its norm; if $m$ is large enough, this will be affordable. **Lemma 23** (Resolvent expansions). *For every fixed $m\in {\mathbb N}$, it holds that* *$$\label{eq:forwardexp} \widecheck{G}^{(\gamma)} = \sum_{\ell = 0}^{m} N^{-\ell/2} \big( G^{(\gamma)} \Delta_W^{(\gamma)} \big)^\ell G^{(\gamma)} + N^{-(m+1)/2} \big( G^{(\gamma)} \Delta_W^{(\gamma)} \big)^{m+1} \widecheck{G}^{(\gamma)}$$ and $$\label{eq:backwardexp} {G}^{(\gamma)} = \sum_{\ell = 0}^{m} (-1)^\ell N^{-\ell/2} \big(\widecheck{G}^{(\gamma)} \Delta_W^{(\gamma)} \big)^\ell \widecheck{G}^{(\gamma)} + (-1)^{(m+1)} N^{-(m+1)/2} \big( \widecheck{G}^{(\gamma)} \Delta_W^{(\gamma)} \big)^{m+1} {G}^{(\gamma)}\,.$$* *These relations also hold verbatim when replacing $G^{(\gamma)} \to G^{(\gamma-1)}$ and $\Delta_W^{(\gamma)} \to \Delta_V^{(\gamma)}$. 
◻* We now expand each $G^{(\gamma)}$ in $$\label{eq:gamma} \big|\Psi_k^{(\gamma)}(\bm x, \bm y)\big|^p = \left(\frac{N \eta}{\rho}\right)^{p/2} N^{-pk/2} \big| \big( (G^{(\gamma)} A)^k G^{(\gamma)} - M_{[1,k+1]}\big)_{\bm x \bm y} \big|^p$$ and each $G^{(\gamma-1)}$ in $$\label{eq:gamma-1} \big|\Psi_k^{(\gamma-1)}(\bm x, \bm y)\big|^p = \left(\frac{N \eta}{\rho}\right)^{p/2} N^{-pk/2} \big| \big( (G^{(\gamma-1)} A)^k G^{(\gamma-1)} - M_{[1,k+1]}\big)_{\bm x \bm y} \big|^p$$ according to [\[eq:backwardexp\]](#eq:backwardexp){reference-type="eqref" reference="eq:backwardexp"} (for some $m \ge 4$ to be determined below, depending on $p$ and $k$; see [\[eq:truncation\]](#eq:truncation){reference-type="eqref" reference="eq:truncation"}) and sort the resulting terms by their power $r = 0,1,2,...$ of $N^{-1/2}$. Then we take the expectation with respect to $w_{ij}$ and $v_{ij}$, respectively (recall that $\phi\big((i,j)\big) = \gamma$), and use the moment matching condition [\[eq:momentmatch\]](#eq:momentmatch){reference-type="eqref" reference="eq:momentmatch"}. As a result, we find that the terms with a prefactor $N^{-r/2}$ for $r = 0, 1,2,3$ are algebraically *exactly the same* for both [\[eq:gamma\]](#eq:gamma){reference-type="eqref" reference="eq:gamma"} and [\[eq:gamma-1\]](#eq:gamma-1){reference-type="eqref" reference="eq:gamma-1"}. The conclusion of this argument is formalized in the following lemma. **Lemma 24**. 
*For any fixed $(i,j) \in [N]^2$ with $i \le j$ and $\gamma=\phi(i,j)$ we have that $$\begin{aligned} \label{eq:gammaexp} {\mathbb E }_{w_{ij}} \big|\Psi_k^{(\gamma)}(\bm x, \bm y)\big|^p = \sum_{r= 0}^3 N^{-r/2} \alpha_{k, r}^{(\gamma)}(\bm x, \bm y)\big|\widecheck{\Psi}_k^{(\gamma)}(\bm x, \bm y)\big|^{p-r} + \text{\rm higher order terms} \\[2mm] \label{eq:gamma-1exp} {\mathbb E }_{v_{ij}} \big|\Psi_k^{(\gamma-1)}(\bm x, \bm y)\big|^p = \sum_{r= 0}^3 N^{-r/2} \alpha_{k, r}^{(\gamma)}(\bm x, \bm y)\big|\widecheck{\Psi}_k^{(\gamma)}(\bm x, \bm y)\big|^{p-r} + \text{\rm higher order terms} \end{aligned}$$ for some *identical* coefficients $\alpha_{k, r}^{(\gamma)}(\bm x, \bm y)$ independent of $v_{ij}$ and $w_{ij}$ whose precise values are (mostly) irrelevant. Here \"higher order terms\" denote terms with prefactor $N^{-r/2}$ with $r\ge 4$.* In the following Sections [5.2.2](#subsec:fourthorder){reference-type="ref" reference="subsec:fourthorder"}--[5.2.4](#subsec:truncation){reference-type="ref" reference="subsec:truncation"}, preparing the conclusion of the proof of Proposition [Proposition 22](#prop:gronwall){reference-type="ref" reference="prop:gronwall"} in Section [5.2.5](#sec:Propgron){reference-type="ref" reference="sec:Propgron"}, we will discuss the higher order terms in [\[eq:gammaexp\]](#eq:gammaexp){reference-type="eqref" reference="eq:gammaexp"} and [\[eq:gamma-1exp\]](#eq:gamma-1exp){reference-type="eqref" reference="eq:gamma-1exp"}. These have to be estimated individually by size when we consider the difference of [\[eq:gammaexp\]](#eq:gammaexp){reference-type="eqref" reference="eq:gammaexp"} and [\[eq:gamma-1exp\]](#eq:gamma-1exp){reference-type="eqref" reference="eq:gamma-1exp"}. Recall that we will eventually compare $\Psi_k^{(0)}(\bm x, \bm y)$ and $\Psi_k^{(\gamma(N))}(\bm x, \bm y)$ in $\gamma(N) = O(N^2)$ many steps, which is why, roughly speaking, the higher order terms must all be bounded by $1/N^2$. 
More precisely, we will use the following telescopic summation: For every $\gamma_0 \in [\gamma(N)]$, it holds that $$\label{eq:telescope} \left| \Vert \Psi_k^{(\gamma_0)}(\bm x, \bm y) \Vert_p^p - \Vert \Psi_k^{(0)}(\bm x, \bm y) \Vert_p^p \right| \le \sum_{1 \le \gamma \le \gamma_0} \left| \Vert \Psi_k^{(\gamma)}(\bm x, \bm y) \Vert_p^p - \Vert \Psi_k^{(\gamma-1)}(\bm x, \bm y) \Vert_p^p \right|\,.$$ In the next Section [5.2.2](#subsec:fourthorder){reference-type="ref" reference="subsec:fourthorder"}, we will explain the term with $r=4$ in Lemma [Lemma 24](#lem:firstthree){reference-type="ref" reference="lem:firstthree"}, i.e. the one with $N^{-2}$-prefactor, in detail. All other higher order terms with $r\ge 5$ but still involving only the resolvent $\widecheck{G}^{(\gamma)}$ are completely analogous, in fact easier (see Section [5.2.3](#subsec:lowerorder){reference-type="ref" reference="subsec:lowerorder"} later for some detail). Afterwards, in Section [5.2.4](#subsec:truncation){reference-type="ref" reference="subsec:truncation"}, we will discuss how the maximal order $m$ of the resolvent expansion [\[eq:backwardexp\]](#eq:backwardexp){reference-type="eqref" reference="eq:backwardexp"} has to be chosen in order to accommodate the remainder term involving a non-checked resolvent $G^{(\gamma)}$ (resp. $G^{(\gamma-1)}$). Throughout the following argument we shall focus on the higher order terms in [\[eq:gammaexp\]](#eq:gammaexp){reference-type="eqref" reference="eq:gammaexp"}; the treatment of [\[eq:gamma-1exp\]](#eq:gamma-1exp){reference-type="eqref" reference="eq:gamma-1exp"} is exactly the same. Whenever it does not lead to confusion, we shall henceforth drop the superscript $\gamma$.
### Fourth order terms in Lemma [Lemma 24](#lem:firstthree){reference-type="ref" reference="lem:firstthree"} {#subsec:fourthorder} The goal of the current Section [5.2.2](#subsec:fourthorder){reference-type="ref" reference="subsec:fourthorder"} is to show that the terms of order $r=4$ arising in the telescopic summation [\[eq:telescope\]](#eq:telescope){reference-type="eqref" reference="eq:telescope"} can be bounded by the rhs. of [\[eq:gronwall\]](#eq:gronwall){reference-type="eqref" reference="eq:gronwall"}. In the following, we denote (cf. [\[eq:Deltas\]](#eq:Deltas){reference-type="eqref" reference="eq:Deltas"}) $$\label{eq:Delta} \Delta = \Delta^{(\gamma)} = \frac{E^{(ij)} + E^{(ji)}}{1 + \delta_{ij}}$$ and find, similarly to [\[eq:Deltas\]](#eq:Deltas){reference-type="eqref" reference="eq:Deltas"}, after taking the full expectation, the $r=4$ (i.e. $1/N^{2}$) prefactor of the higher order terms in [\[eq:gammaexp\]](#eq:gammaexp){reference-type="eqref" reference="eq:gammaexp"} to be bounded by (a constant times) $$\label{eq:fourthorder} {\mathbb E }\sum_{d=1}^{4 \wedge p} \big|\widecheck{\Psi}_k(\bm x, \bm y)\big|^{p-d} \left(\frac{N \eta}{\rho}\right)^{d/2} N^{-dk/2} \sum_{4 \Delta \, \leadsto \, d } \big| \underbrace{\big(... \Delta... \Delta ... \big)_{\bm{x} \bm{y}}\ldots \big(... \Delta... \big)_{\bm{x} \bm{y}}}_{\text{four} \, \Delta\; \text{in a total} \; d \, \text{chains}} \big|\,.$$ Here $d$ counts the number of formerly \"intact\" resolvent chains $\big( (\widecheck{G} A)^k \widecheck{G} \big)_{\bm x \bm y}$, which have been 'destroyed' by at least one replacement $\widecheck{G} \to \widecheck{G} \Delta \widecheck{G}$ due to the expansion [\[eq:backwardexp\]](#eq:backwardexp){reference-type="eqref" reference="eq:backwardexp"}. The symbol $$\label{eq:summation} \sum_{4 \Delta \, \leadsto \, d }$$ indicates that we sum over all possibilities to destroy exactly $d$ chains by four $\Delta$'s. 
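As a quick combinatorial sanity check (not part of the argument): for $d=1$, placing four $\Delta$'s into a single chain splits it into five sub-chains $(k_1, \dots, k_5)$ with $\sum_l k_l = k$, so the inner summation runs over $\binom{k+4}{4}$ placements by stars and bars — in particular, the sum is finite. A minimal Python sketch (variable names are ours):

```python
from itertools import product
from math import comb

def compositions(k, parts):
    """All tuples of `parts` nonnegative integers summing to k (brute force)."""
    return [t for t in product(range(k + 1), repeat=parts) if sum(t) == k]

# For d = 1, four Deltas split one chain into five sub-chains
# (k_1, ..., k_5) with k_1 + ... + k_5 = k; stars and bars gives C(k+4, 4).
for k in range(6):
    assert len(compositions(k, 5)) == comb(k + 4, 4)

print(len(compositions(3, 5)))  # 35
```
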
Note that a chain may be \"destroyed\" by more than one $\Delta$; therefore $d$ may be less than four. After using the explicit form of $\Delta$, we altogether arrive at a finite sum of products of $4+d$ chains. **Example 25**. For example, for $d=1$ we have that $$\label{eq:d=1example} \begin{split} &\sum_{4 \Delta \, \leadsto \, 1 } \big| \big(... \Delta... \Delta...\Delta...\Delta... \big)_{\bm{x} \bm{y}}\big| \\ &= \sum_{\substack{k_1, ... , k_5 \ge 0 : \\ \sum_{l} k_l = k}}\big| \big( (\widecheck{G} A)^{k_1} \widecheck{G} \Delta (\widecheck{G} A)^{k_2} \widecheck{G} \Delta (\widecheck{G} A)^{k_3} \widecheck{G} \Delta (\widecheck{G} A)^{k_4} \widecheck{G} \Delta (\widecheck{G} A)^{k_5} \widecheck{G}\big)_{\bm x \bm y} \big| \\ &= \sum_{\substack{k_1, ... , k_5 \ge 0 : \\ \sum_{l} k_l = k}}\left[\big| \big( (\widecheck{G} A)^{k_1} \widecheck{G}\big)_{\bm x \bm{e}_i} \big( (\widecheck{G} A)^{k_2} \widecheck{G}\big)_{\bm e_j \bm{e}_j} \big( (\widecheck{G} A)^{k_3} \widecheck{G}\big)_{\bm e_i \bm{e}_i} \big( (\widecheck{G} A)^{k_4} \widecheck{G}\big)_{\bm e_j \bm{e}_j} \big( (\widecheck{G} A)^{k_5} \widecheck{G}\big)_{\bm e_i \bm{y}} \big| + ... \right] \end{split}$$ with the neglected summands being analogous, only with different placements of $\bm e_i$ and $\bm e_j$, as produced by the structure of $\Delta$. For general $d$, in each of the $4+d$ resolvent chains in the rhs. of [\[eq:fourthorder\]](#eq:fourthorder){reference-type="eqref" reference="eq:fourthorder"}, we now add and subtract the corresponding deterministic $M$-term, $(\widecheck{G}A)^k \widecheck{G} = ((\widecheck{G}A)^k \widecheck{G}-M_{k+1}) + M_{k+1}$ (see also [\[eq:Mshorthand\]](#eq:Mshorthand){reference-type="eqref" reference="eq:Mshorthand"} below), schematically written as $G = (G-M) + M$. In the sequel, we will distinguish the following two complementary cases: - At least $d$ of the $d+4$ resolvent chains are replaced by their fluctuating part, $G-M$.
- At least five of the $d+4$ resolvent chains are replaced by their deterministic counterpart, $M$. [Case (i):]{.ul} In case (i), we first separate those possibilities from [\[eq:summation\]](#eq:summation){reference-type="eqref" reference="eq:summation"}, where the destruction of the $d$ chains $\big( (\widecheck{G} A)^k \widecheck{G}\big)_{\bm x \bm y}$ in fact *preserves* $d$ resolvent chains each with $k$ traceless matrices $A$, but with deterministic vectors, which are not $\bm x$ and $\bm{y}$. This happens when all four $\Delta$'s are placed at the ends of the chains. For example, if $d=1$, we separate these possibilities as $$\label{eq:sustainexample} \begin{split} \widecheck{G}_{\bm x \bm e_i} \widecheck{G}_{\bm e_j \bm e_j} \widecheck{G}_{\bm e_i \bm e_i} \widecheck{G}_{\bm e_j \bm e_j} \big( (\widecheck{G} A)^k \widecheck{G} \big)_{\bm e_i \bm y} &+ ... \quad \text{or} \\ \widecheck{G}_{\bm x \bm e_i} \widecheck{G}_{\bm e_j \bm e_j} \big( (\widecheck{G} A)^k \widecheck{G} \big)_{\bm e_i \bm e_i} \widecheck{G}_{\bm e_j \bm e_j} \widecheck{G}_{\bm e_i \bm y}&+ ... \,. \end{split}$$ In the following, we shall focus on the first exemplary term in [\[eq:sustainexample\]](#eq:sustainexample){reference-type="eqref" reference="eq:sustainexample"}. 
Its fluctuating part $$\label{eq:G-Mfull} \big( (\widecheck{G} A)^k \widecheck{G} -M_{[1,k+1]}\big)_{\bm e_i \bm y}$$ can then be paired with the leftover $\big(N \eta /\rho\big)^{1/2} N^{-k/2}$ in [\[eq:fourthorder\]](#eq:fourthorder){reference-type="eqref" reference="eq:fourthorder"} and thereby produces a further full $\big|\widecheck{\Psi}_k^{(\gamma)}(\bm e_i, \bm y)\big|$; the remaining factors in [\[eq:sustainexample\]](#eq:sustainexample){reference-type="eqref" reference="eq:sustainexample"}, each a single resolvent entry, are simply estimated by one, $$\label{eq:Gbdd1GFT} |\widecheck{G}_{\bm u \bm v}|\prec 1\,, \qquad \bm u, \bm v \in I_{\bm{x}\bm{y}} \quad \mbox{cf.~\eqref{eq:Ixy}}\,,$$ by the usual isotropic law [\[eq:singleGoptimal\]](#eq:singleGoptimal){reference-type="eqref" reference="eq:singleGoptimal"}. All these terms stemming from [\[eq:fourthorder\]](#eq:fourthorder){reference-type="eqref" reference="eq:fourthorder"} and constituting a full $\big|\widecheck{\Psi}_k^{(\gamma)}\big|$ (or $\big|\widecheck{\Psi}_k^{(\gamma)}\big|^d$ for general $d \in [4 \wedge p]$) can then be estimated by $$\label{eq:vectormax} \widecheck{\Omega}_k^p(\gamma) := \max_{\bm u, \bm v \in I_{\bm x \bm y}} \Vert \widecheck{\Psi}_k^{(\gamma)}(\bm u, \bm v) \Vert_p^p\,.$$ Now, after having separated the possibilities from [\[eq:summation\]](#eq:summation){reference-type="eqref" reference="eq:summation"}, where the destruction preserves $d$ resolvent chains with $k$ deterministic matrices in between, we are left with those which solely create *strictly shorter* chains by the procedure $4 \Delta \leadsto d$. These terms can entirely be treated by our *induction hypothesis* [\[eq:inductionhypo\]](#eq:inductionhypo){reference-type="eqref" reference="eq:inductionhypo"}: The power of $\widecheck{\Psi}_k^{(\gamma)}$ has been reduced by (at least) one (cf.
the second estimate in [\[eq:inductionhypo\]](#eq:inductionhypo){reference-type="eqref" reference="eq:inductionhypo"}) and $\widecheck{\Psi}_{k'}^{(\gamma)} + \Psi_{k'}^{(\gamma)} \prec 1$ uniformly in $\gamma$ for *strictly* shorter chains, $k' < k$, has already been shown (first estimate in [\[eq:inductionhypo\]](#eq:inductionhypo){reference-type="eqref" reference="eq:inductionhypo"}). **Example 26**. Writing $$\label{eq:Mshorthand} M_{j-i+1} \equiv M_{[i,j]} \quad \text{for} \quad 1 \le i < j \le k+1\,,$$ with a slight abuse of notation, we estimate the $d=1$ term in [\[eq:fourthorder\]](#eq:fourthorder){reference-type="eqref" reference="eq:fourthorder"} (after having split off the cases when one of the $k_l$'s equals $k$ and all others are zero in [\[eq:sustainexample\]](#eq:sustainexample){reference-type="eqref" reference="eq:sustainexample"}) as $$\label{eq:d=1exampleCase1} \begin{split} {\mathbb E }& \big|\widecheck{\Psi}_k(\bm x, \bm y)\big|^{p-1} \left(\frac{N \eta}{\rho}\right)^{1/2} N^{-k/2}\times \\ & \ \times \sum_{\substack{0\le k_l \le k-1 : \\ \sum_{l} k_l = k}}\Big[\big| \big( (\widecheck{G} A)^{k_1} \widecheck{G} - M_{k_1+1}\big)_{\bm x \bm{e}_i} \big(M_{k_2+1}\big)_{\bm e_j \bm{e}_j} \big( M_{k_3+1}\big)_{\bm e_i \bm{e}_i} \big( M_{k_4+1}\big)_{\bm e_j \bm{e}_j} \big( M_{k_5+1}\big)_{\bm e_i \bm{y}} \big| \\ & \quad + \big| \big( (\widecheck{G} A)^{k_1} \widecheck{G} - M_{k_1+1}\big)_{\bm x \bm{e}_i} \big((\widecheck{G} A)^{k_2} \widecheck{G} -M_{k_2+1}\big)_{\bm e_j \bm{e}_j} \big( M_{k_3+1}\big)_{\bm e_i \bm{e}_i} \big( M_{k_4+1}\big)_{\bm e_j \bm{e}_j} \big( M_{k_5+1}\big)_{\bm e_i \bm{y}} \big| + ... \Big] \\ \lesssim \, &N^\xi \left(\frac{N \eta}{\rho}\right)^{1/2} N^{-k/2}\sum_{\substack{0\le k_l \le k-1 : \\ \sum_{l} k_l = k}} \left[\left( \frac{\rho}{N \eta} \right)^{1/2} N^{\sum_l k_l/2}+ \left( \frac{\rho}{N \eta} \right) N^{\sum_l k_l/2} + ... \right]\lesssim N^\xi\,, \end{split}$$ where analogous summands (i.e. 
having further $G-M$ factors instead of $M$, or other arrangements of standard basis vectors $\bm e_i, \bm e_j$ stemming from [\[eq:Delta\]](#eq:Delta){reference-type="eqref" reference="eq:Delta"}) are again indicated by dots. In the first estimate, we used that $\big| (M_{j+1})_{\bm u \bm v} \big| \lesssim N^{j/2}$ for all $\bm u, \bm v \in I_{\bm x \bm y}$ from Lemma [Lemma 3](#lem:Mbound){reference-type="ref" reference="lem:Mbound"} (b) together with the induction hypothesis [\[eq:inductionhypo\]](#eq:inductionhypo){reference-type="eqref" reference="eq:inductionhypo"}. In the general case, $d \ge 1$, the argument works analogously to the above example: The minimal number of $d$ fluctuating terms, each carrying a $(\rho/ N \eta)^{1/2}$-factor, cancels the leftover $(N\eta/\rho)^{d/2}$-factor in [\[eq:fourthorder\]](#eq:fourthorder){reference-type="eqref" reference="eq:fourthorder"}. The remaining $N^{k_l/2}$-factors can then be handled by a simple power counting. Overall, we find that all the terms in [\[eq:fourthorder\]](#eq:fourthorder){reference-type="eqref" reference="eq:fourthorder"} summarized in Case (i) can be bounded by $$\label{eq:fluctuationfinal} C_1 \widecheck{\Omega}_k^p(\gamma) + C_2 N^\xi$$ for some positive constants $C_1, C_2 > 0$, which shall henceforth be used generically, i.e. their values might change from line to line (but remain uniformly bounded in $\gamma$). [Case (ii):]{.ul} For the second case, we recall that all the purely deterministic terms are *independent* of $\gamma$, i.e., as emphasized above, at each replacement step the deterministic approximation to a resolvent chain is the same. However, it is *not* sufficient to just estimate every $M$-term blindly via $\big| (M_{j+1})_{\bm u \bm v} \big| \lesssim N^{j/2}$, as done in [\[eq:d=1exampleCase1\]](#eq:d=1exampleCase1){reference-type="eqref" reference="eq:d=1exampleCase1"}.
Instead, we need to *gain from the summation* in [\[eq:telescope\]](#eq:telescope){reference-type="eqref" reference="eq:telescope"} over all replacement positions. This is the main new element of our proof compared with previous GFT arguments. **Example 27**. We again look at our $d=1$ example. Using the notation [\[eq:Mshorthand\]](#eq:Mshorthand){reference-type="eqref" reference="eq:Mshorthand"}, we find the *trivial estimate* $$\label{eq:d=1exampleCase2} \begin{split} &{\mathbb E }\big|\widecheck{\Psi}_k(\bm x, \bm y)\big|^{p-1} \left(\frac{N \eta}{\rho}\right)^{1/2} N^{-k/2} \sum_{\substack{0\le k_l \le k: \\ \sum_{l} k_l = k}}\left[\big| \big( M_{k_1+1}\big)_{\bm x \bm{e}_i} \big(M_{k_2+1}\big)_{\bm e_j \bm{e}_j} \big( M_{k_3+1}\big)_{\bm e_i \bm{e}_i} \big( M_{k_4+1}\big)_{\bm e_j \bm{e}_j} \big( M_{k_5+1}\big)_{\bm e_i \bm{y}} \big| + ... \right] \\ \lesssim \, &N^\xi \left(\frac{N \eta}{\rho}\right)^{1/2} N^{-k/2} \sum_{\substack{0\le k_l \le k: \\ \sum_{l} k_l = k}} \left[N^{\sum_l k_l/2}+ ... \right]\lesssim N^\xi \left(\frac{N \eta}{\rho}\right)^{1/2}\,, \end{split}$$ where we again used the induction hypothesis [\[eq:inductionhypo\]](#eq:inductionhypo){reference-type="eqref" reference="eq:inductionhypo"} and $\big| (M_{j+1})_{\bm u \bm v} \big| \lesssim N^{j/2}$. This bound is off by a factor $(N \eta/\rho)^{1/2}$, which we will now improve on. Indeed, the point in *gaining from the summation* is that, although at each individual step $\gamma$, the deterministic terms in [\[eq:d=1exampleCase2\]](#eq:d=1exampleCase2){reference-type="eqref" reference="eq:d=1exampleCase2"} might be large, *on average* over $\gamma$ their contribution is bounded. 
More precisely, fixing one constellation of $k_l$'s in [\[eq:d=1exampleCase2\]](#eq:d=1exampleCase2){reference-type="eqref" reference="eq:d=1exampleCase2"} and using ${\mathbb E }\big|\widecheck{\Psi}_k\big|^{p-1} \lesssim N^\xi$ , we find the average of the first line in [\[eq:d=1exampleCase2\]](#eq:d=1exampleCase2){reference-type="eqref" reference="eq:d=1exampleCase2"} over all $i,j \in [N]$ to be bounded by (a constant times) $$\label{eq:d=1exampleCase2SUM} \begin{split} &N^\xi \left(\frac{N \eta}{\rho}\right)^{1/2} N^{-k/2} \frac{1}{N^2} \sum_{i,j } \left[\big| \big( M_{k_1+1}\big)_{\bm x \bm{e}_i} \big(M_{k_2+1}\big)_{\bm e_j \bm{e}_j} \big( M_{k_3+1}\big)_{\bm e_i \bm{e}_i} \big( M_{k_4+1}\big)_{\bm e_j \bm{e}_j} \big( M_{k_5+1}\big)_{\bm e_i \bm{y}} \big| + ... \right] \\ \lesssim \, &N^\xi \left(\frac{N \eta}{\rho}\right)^{1/2} \frac{1}{N^2} \sum_{i,j } \left[ \frac{\big| (M_{k_1+1})_{\bm x \bm e_i} \big|}{N^{k_1/2}} + ... \right] \\ \lesssim \, &N^\xi \left(\frac{N \eta}{\rho}\right)^{1/2} \frac{1}{N} \sqrt{N} \left[ \frac{\sqrt{\big(|M_{k_1+1}|^2 \big)_{\bm x \bm x} } }{N^{k_1/2}}+ ... \right] \lesssim \, N^\xi \left(\frac{\eta}{\rho}\right)^{1/2} \lesssim N^\xi\,. \end{split}$$ To go from the first to the second line, we used $\big| (M_{j+1})_{\bm u \bm v} \big| \lesssim N^{j/2}$ for all but the first $M$ factor. 
Next, we used a Schwarz inequality for the $i$-summation, which involves the off-diagonal term $(M_{k_1+1})_{\bm x \bm e_i}$: $$\label{eq:schwarzfirst} \sum_i \big| (M_{k_1+1})_{\bm x \bm e_i} \big| \le \sqrt{N} \left(\sum_i \big| (M_{k_1+1})_{\bm x \bm e_i} \big|^2 \right)^{1/2} \le \sqrt{N}\sqrt{ \big(| M_{k_1+1}|^2 \big)_{\bm x \bm x} }\,.$$ In the penultimate estimate, we used that $$\label{eq:M^2estimate} \sqrt{\big(|M_{j+1}|^2 \big)_{\bm u \bm u} } \lesssim N^{j/2}\,,$$ as follows from the fact that $N^{j/2}$ is in fact the operator norm bound for $M_{j+1}$, and the final estimate in [\[eq:d=1exampleCase2SUM\]](#eq:d=1exampleCase2SUM){reference-type="eqref" reference="eq:d=1exampleCase2SUM"} simply used the general fact $\eta/\rho \lesssim 1$. We point out that we could even have gained another $1/\sqrt{N}$-factor from the $i$-summation by not estimating $\big( M_{k_5+1}\big)_{\bm e_i \bm{y}}$ trivially by $N^{k_5/2}$ but using $$\label{eq:schwarzfirst2} \begin{split} \sum_i \big| (M_{k_1+1})_{\bm x \bm e_i} \big( M_{k_5+1}\big)_{\bm e_i \bm{y}}\big| & \le \left(\sum_i \big| (M_{k_1+1})_{\bm x \bm e_i} \big|^2 \right)^{1/2} \left(\sum_i \big| (M_{k_5+1})_{\bm e_i \bm y} \big|^2 \right)^{1/2} \\ &\le \sqrt{ \big(| M_{k_1+1}|^2 \big)_{\bm x \bm x} } \sqrt{ \big(| M_{k_5+1}|^2 \big)_{\bm y \bm y}} \end{split}$$ instead of [\[eq:schwarzfirst\]](#eq:schwarzfirst){reference-type="eqref" reference="eq:schwarzfirst"}. However, we do not need this additional factor $1/\sqrt{N}$ here. Finally, note that the $j$-summation in [\[eq:d=1exampleCase2SUM\]](#eq:d=1exampleCase2SUM){reference-type="eqref" reference="eq:d=1exampleCase2SUM"} would have been useless, since the $j$-terms are diagonal. The summation gain is effective only for off-diagonal terms as in [\[eq:schwarzfirst\]](#eq:schwarzfirst){reference-type="eqref" reference="eq:schwarzfirst"}.
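The Schwarz step [\[eq:schwarzfirst\]](#eq:schwarzfirst){reference-type="eqref" reference="eq:schwarzfirst"} can also be illustrated numerically: for any matrix and unit vector, the absolute row sum is bounded by $\sqrt{N}$ times the root of the corresponding diagonal entry of $|M|^2 = MM^*$. A minimal sketch, with an arbitrary random matrix standing in for $M_{k_1+1}$ and a random unit vector standing in for $\bm x$ (both illustrative choices of ours, not the actual deterministic term):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200

# Arbitrary complex matrix standing in for M_{k_1+1}; random real unit
# vector standing in for bm x (illustrative stand-ins).
M = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
x = rng.standard_normal(N)
x /= np.linalg.norm(x)

row = x @ M                    # the entries (M)_{x e_i}, i = 1, ..., N
lhs = np.abs(row).sum()        # sum_i |M_{x e_i}|
# sqrt(N) * sqrt((M M^*)_{xx}) = sqrt(N) * sqrt(sum_i |M_{x e_i}|^2)
rhs = np.sqrt(N) * np.sqrt(np.abs(row @ row.conj()))

assert lhs <= rhs  # Cauchy-Schwarz, as in the display above
```
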
The above example indicates the following general mechanism: After estimating all the $G-M$-type terms with the aid of the induction hypothesis [\[eq:inductionhypo\]](#eq:inductionhypo){reference-type="eqref" reference="eq:inductionhypo"}, and estimating the $M$-factors just trivially by their size, we are left with an excess $(N \eta/\rho)^{u/2}$-factor, for some $u \in [4]$. In order to remove this leftover factor, we need at least $u$ *(collectively) summable bounded $M$-terms* like $$\label{eq:Msummable} \frac{\big| (M_{k_1+1})_{\bm x \bm e_i} \big|}{N^{k_1/2}}$$ in [\[eq:d=1exampleCase2SUM\]](#eq:d=1exampleCase2SUM){reference-type="eqref" reference="eq:d=1exampleCase2SUM"} (see also [\[eq:M\^2estimate\]](#eq:M^2estimate){reference-type="eqref" reference="eq:M^2estimate"}). In fact, each of these collectively summable factors will gain one $1/\sqrt{N}$ compared to the trivial estimate, like the one in [\[eq:d=1exampleCase2\]](#eq:d=1exampleCase2){reference-type="eqref" reference="eq:d=1exampleCase2"}. Here, the notion \"collective\" refers to particular index structures, which allow an effective summation. Denoting terms like [\[eq:Msummable\]](#eq:Msummable){reference-type="eqref" reference="eq:Msummable"} symbolically by $M_{\bm x \bm e_i}$ for brevity, by *(collectively) summable bounded $M$-terms* we mean the following possible index structures $$\label{eq:Msums} \begin{split} u=1 \, :& \qquad \sum_{i,j} |M_{\bm x\bm e_i}| \quad \text{or} \quad \sum_{i,j} |M_{\bm e_j \bm y}| \quad \text{or} \quad ...\\ u=2 \, :& \qquad \sum_{i,j} |M_{\bm x\bm e_i}| |M_{\bm e_j \bm y}|\quad \text{or} \quad \sum_{i,j} |M_{\bm x\bm e_i}| |M_{\bm e_i \bm y}| \quad \text{or} \quad ...\\ u=3\, :& \qquad \sum_{i,j} |M_{\bm x\bm e_i}| |M_{\bm e_i \bm y}||M_{\bm e_j \bm y}|\quad \text{or} \quad \sum_{i,j} |M_{\bm x\bm e_i}| |M_{\bm e_j \bm y}|^2 \quad \text{or} \quad ... 
\\ u=4\, :& \qquad \sum_{i,j} |M_{\bm x\bm e_i}| |M_{\bm x \bm e_j}| |M_{\bm e_i \bm y}||M_{\bm e_j \bm y}|\quad \text{or} \quad \sum_{i,j} |M_{\bm x\bm e_i}|^2 |M_{\bm e_j \bm y}|^2 \quad \text{or} \quad ... \end{split}$$ where dots are always indicating other similar terms, obtained from trivial exchanges $\bm x \leftrightarrow \bm y$ or $i \leftrightarrow j$. In principle, every summation over $i$ and $j$ potentially gains a full $1/N$-factor each -- provided that there are enough $M$'s with suitable indices as in [\[eq:Msums\]](#eq:Msums){reference-type="eqref" reference="eq:Msums"}. The existence of $u$ *collectively summable bounded $M$-terms* then ensures that of this potential $1/N^2$-improvement at least a $1/N^{u/2}$-gain is effective. More precisely, as an example, for the first column of terms in [\[eq:Msums\]](#eq:Msums){reference-type="eqref" reference="eq:Msums"} we have that $$\label{eq:MsumsSchwarz} \begin{split} u=1 \, :& \quad \sum_{i,j} |M_{\bm x\bm e_i}| \le N^{3/2} \left(\sum_i |M_{\bm x\bm e_i}|^2\right)^{1/2}\lesssim N^{2-1/2}\\ u=2 \, :& \quad \sum_{i,j} |M_{\bm x\bm e_i}| |M_{\bm e_j \bm y}| \le N \left(\sum_i |M_{\bm x\bm e_i}|^2\right)^{1/2} \left(\sum_j |M_{\bm e_j\bm y}|^2\right)^{1/2}\lesssim N^{2-2/2}\\ u=3\, :& \quad \sum_{i,j} |M_{\bm x\bm e_i}| |M_{\bm e_i \bm y}||M_{\bm e_j \bm y}| \\ & \qquad \quad \le N^{1/2} \left(\sum_i |M_{\bm x\bm e_i}|^2\right)^{1/2} \left(\sum_i |M_{\bm e_i \bm y}|^2\right)^{1/2} \left(\sum_j |M_{\bm e_j\bm y}|^2\right)^{1/2}\lesssim N^{2-3/2} \\ u=4\, :& \quad \sum_{i,j} |M_{\bm x\bm e_i}| |M_{\bm x \bm e_j}| |M_{\bm e_i \bm y}||M_{\bm e_j \bm y}| \\ &\qquad \quad \le \left(\sum_i |M_{\bm x\bm e_i}|^2\right)^{1/2} \left(\sum_i |M_{\bm e_i \bm y}|^2\right)^{1/2} \left(\sum_j |M_{\bm e_j\bm y}|^2\right)^{1/2} \left(\sum_j |M_{\bm x \bm e_j}|^2\right)^{1/2}\lesssim N^{2-4/2} \end{split}$$ by application of Schwarz inequalities like in [\[eq:schwarzfirst\]](#eq:schwarzfirst){reference-type="eqref" 
reference="eq:schwarzfirst"}--[\[eq:schwarzfirst2\]](#eq:schwarzfirst2){reference-type="eqref" reference="eq:schwarzfirst2"} and using that $\Vert M \Vert \lesssim 1$. We point out that the $\eta/\rho\le 1$ factor within each excess $(N \eta/\rho)^{1/2}$ would not be able to compensate for excess $N$-factors; but the *gains from the summation* are obtained solely on the level of $N$'s. It follows from a simple counting argument (or simply by considering all cases directly), that for any $u \in [4]$, we find an appropriately summable index structure within the at least five purely deterministic terms, as in [\[eq:Msums\]](#eq:Msums){reference-type="eqref" reference="eq:Msums"}--[\[eq:MsumsSchwarz\]](#eq:MsumsSchwarz){reference-type="eqref" reference="eq:MsumsSchwarz"}. Hence, we deduce that $$\label{eq:fourthorderbound} \eqref{eq:fourthorder} \le C_1 \widecheck{\Omega}_k^p(\gamma) + C_2 N^\xi \bigg( 1 + \sum_{u=1}^{4} \left(\frac{N \eta}{\rho}\right)^{u/2} \left|\big[u \, \text{sum. bdd.}\, M \text{-terms}\big]^{(\gamma)}_{\bm x, \bm y}\right| \bigg)\,,$$ where $$\label{eq:Mterms} \big[u\, \text{sum. bdd.}\, M \text{-terms}\big]^{(\gamma)}_{\bm x, \bm y}$$ stands symbolically for a product of *$u$ collectively summable bounded deterministic terms*, like [\[eq:Msummable\]](#eq:Msummable){reference-type="eqref" reference="eq:Msummable"}, for which we have just shown the following. **Lemma 28**. *It holds that $$\label{eq:Mestimate} \sum_{\gamma \in [\gamma(N)]}\left|\big[u\, \text{\rm sum. bdd.}\, M \text{\rm -terms}\big]^{(\gamma)}_{\bm x, \bm y}\right| \lesssim N^{2 -u/2}\,.$$* Combining [\[eq:fourthorderbound\]](#eq:fourthorderbound){reference-type="eqref" reference="eq:fourthorderbound"} with [\[eq:Mestimate\]](#eq:Mestimate){reference-type="eqref" reference="eq:Mestimate"}, this concludes the argument for the fourth order terms in [\[eq:gammaexp\]](#eq:gammaexp){reference-type="eqref" reference="eq:gammaexp"}. 
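The gains in [\[eq:MsumsSchwarz\]](#eq:MsumsSchwarz){reference-type="eqref" reference="eq:MsumsSchwarz"} rely only on the operator norm bound $\Vert M \Vert \lesssim 1$ and can be checked numerically; a minimal sketch of the $u=2$ case, with orthogonal matrices (operator norm exactly one) standing in for the rescaled $M$-terms — an illustrative choice on our part:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 300

# Orthogonal matrices from a QR decomposition have operator norm 1; they
# stand in for the rescaled M-terms (only ||M|| <= 1 enters the bound).
Q1, _ = np.linalg.qr(rng.standard_normal((N, N)))
Q2, _ = np.linalg.qr(rng.standard_normal((N, N)))
x = rng.standard_normal(N); x /= np.linalg.norm(x)
y = rng.standard_normal(N); y /= np.linalg.norm(y)

# u = 2 index structure: sum_{i,j} |M_{x e_i}| |M'_{e_j y}| factorizes into
# two one-index sums, each at most sqrt(N) by Schwarz, giving N^{2 - 2/2}
# instead of the trivial N^2.
double_sum = np.abs(x @ Q1).sum() * np.abs(Q2 @ y).sum()
assert double_sum <= N ** (2 - 2 / 2)
```
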
### Further higher order terms in Lemma [Lemma 24](#lem:firstthree){reference-type="ref" reference="lem:firstthree"} {#subsec:lowerorder} Just as in the previous Section [5.2.2](#subsec:fourthorder){reference-type="ref" reference="subsec:fourthorder"}, the goal of the current Section [5.2.3](#subsec:lowerorder){reference-type="ref" reference="subsec:lowerorder"} is to show that the terms of order $r\ge 5$ arising in the telescopic summation [\[eq:telescope\]](#eq:telescope){reference-type="eqref" reference="eq:telescope"} can be bounded by the rhs. of [\[eq:gronwall\]](#eq:gronwall){reference-type="eqref" reference="eq:gronwall"}. For these other higher order terms in [\[eq:gammaexp\]](#eq:gammaexp){reference-type="eqref" reference="eq:gammaexp"} with $r \ge 5$ and involving *only* $\widecheck{G}$ (and not ${G}$), the two cases distinguished above for $r=4$ generalize to the following. - At least $d$ of the $d+r$ resolvent chains are replaced by their fluctuating part, $G-M$. - At least $r+1$ of the $d+r$ resolvent chains are replaced by their deterministic counterpart, $M$. For Case (i'), we separate a $1/N^2$-prefactor and find that the remaining part can be estimated by $$\label{eq:higherorderboundCase1} C_1 N^{-(r-4)/2}\widecheck{\Omega}_k^p(\gamma) + C_2 N^\xi N^{-(r-4)/2} \,,$$ completely analogously to [\[eq:fluctuationfinal\]](#eq:fluctuationfinal){reference-type="eqref" reference="eq:fluctuationfinal"}. In fact, we gain an additional $N^{-(r-4)/2}\ll 1$ factor in both terms. This reflects the idea that more $G-M$ terms are better because their presumed bounds carry a factor $(\rho/ N\eta)^{1/2}$ (encoded in the prefactor $(N\eta/\rho)^{1/2}$ in the definition of $\Psi_k^{\rm iso}$ in [\[eq:Psikiso\]](#eq:Psikiso){reference-type="eqref" reference="eq:Psikiso"}). 
For Case (ii'), we include the additional $N^{-(r-4)/2}$ (after having separated a $1/N^2$-prefactor) into our counting of the leftover $(N \eta/\rho)^{u/2}$-factor (recall the discussion below [\[eq:Msummable\]](#eq:Msummable){reference-type="eqref" reference="eq:Msummable"}). In this way, we find that the maximal number of such leftover factors is $r-(r-4) = 4$. Hence, for every $u \in [4]$, we find an appropriately summable index structure, completely analogously to [\[eq:Msums\]](#eq:Msums){reference-type="eqref" reference="eq:Msums"}, and deduce that (leaving out the separated $1/N^2$-prefactor) $$\label{eq:higherorderboundfinal} \begin{split} r^{\rm th} \, &\text{order term in} \, \eqref{eq:gammaexp} \\ &\le C_1 N^{-(r-4)/2}\widecheck{\Omega}_k^p(\gamma) + C_2 N^\xi \bigg( N^{-(r-4)/2} + \sum_{u=1}^{4} \left(\frac{N \eta}{\rho}\right)^{u/2} \left|\big[u \, \text{sum. bdd.}\, M \text{-terms}\big]^{(\gamma)}_{\bm x, \bm y}\right|\bigg)\,, \end{split}$$ which can be directly incorporated into [\[eq:fourthorderbound\]](#eq:fourthorderbound){reference-type="eqref" reference="eq:fourthorderbound"} after adjusting the constants. Note that while the contributions from Case (i') improve for larger $r$, the terms from Case (ii') that carry many $M$-factors do not. Combining [\[eq:higherorderboundfinal\]](#eq:higherorderboundfinal){reference-type="eqref" reference="eq:higherorderboundfinal"} with [\[eq:Mestimate\]](#eq:Mestimate){reference-type="eqref" reference="eq:Mestimate"}, this concludes the argument for the higher order terms in [\[eq:gammaexp\]](#eq:gammaexp){reference-type="eqref" reference="eq:gammaexp"}. ### Truncation of the resolvent expansion {#subsec:truncation} It remains to discuss the *truncation terms*, which involve *not* only $\widecheck{G}$, but also ${G}$, i.e. the order $m \in {\mathbb N}$ for the truncation of the resolvent expansion [\[eq:backwardexp\]](#eq:backwardexp){reference-type="eqref" reference="eq:backwardexp"}.
Also here, our goal is to show that the contribution of these terms arising in the telescopic summation [\[eq:telescope\]](#eq:telescope){reference-type="eqref" reference="eq:telescope"} can be bounded by the rhs. of [\[eq:gronwall\]](#eq:gronwall){reference-type="eqref" reference="eq:gronwall"}. After expanding each resolvent in [\[eq:gamma\]](#eq:gamma){reference-type="eqref" reference="eq:gamma"} via [\[eq:backwardexp\]](#eq:backwardexp){reference-type="eqref" reference="eq:backwardexp"}, for every fixed $q \ge 1$, we collect those terms which contain the final summand in [\[eq:backwardexp\]](#eq:backwardexp){reference-type="eqref" reference="eq:backwardexp"} (the *truncation term*), and hence ${G}$ exactly $q$ times. For these terms with $q \ge 1$ fixed, we then proceed as follows: Estimate those chains within the truncation term in which ${G}$ appears trivially by norm, $\Vert G \Vert \le 1/\eta$ (note that there are at most $k+1$ resolvents in such chains and we can afford estimating all of them by $1/\eta$ not just the last one $G$) and use $\Vert A \Vert \le \sqrt{N} \langle |A|^2 \rangle^{1/2}$ (recall that we assumed $\langle |A|^2\rangle^{1/2}=1$ around [\[eq:Psikav\]](#eq:Psikav){reference-type="eqref" reference="eq:Psikav"}--[\[eq:Psikiso\]](#eq:Psikiso){reference-type="eqref" reference="eq:Psikiso"}), and treat the other factors by our induction hypothesis [\[eq:inductionhypo\]](#eq:inductionhypo){reference-type="eqref" reference="eq:inductionhypo"} (resulting in an $N^\xi$ factor). In this way, we conclude the estimate $$\label{eq:truncation} \big[q\, \text{truncation terms}\big] \lesssim N^\xi \frac{(N\eta/\rho)^{p/2}}{\big(N^{\frac{m+1}{2}}\big)^q} \left(\frac{N^{k/2}}{\eta^{k+1}} \right)^q = \frac{N^\xi}{N^{2q}} \frac{1}{N^{p(q-1)/2}} \left(\frac{\eta}{\rho}\right)^{p/2}\frac{1}{(N \eta)^{(k+1)q}} \lesssim \frac{N^\xi}{N^2}$$ when choosing $m = p+3k+5$, where in the last step we used that $\eta /\rho \lesssim 1$ and $N \eta \gg 1$. 
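The exponent bookkeeping in [\[eq:truncation\]](#eq:truncation){reference-type="eqref" reference="eq:truncation"} with the choice $m = p+3k+5$ can be verified symbolically; a minimal sketch (assuming `sympy` is available; the symbols mirror those in the display, and the common $N^\xi$-factor is dropped):

```python
import sympy as sp

N, eta, rho, p, k, q = sp.symbols('N eta rho p k q', positive=True)
m = p + 3 * k + 5  # truncation order chosen in the text

# Middle expression of the truncation estimate (without the N^xi factor):
lhs = (N * eta / rho) ** (p / 2) * (N ** ((m + 1) / 2)) ** (-q) \
      * (N ** (k / 2) / eta ** (k + 1)) ** q
# Claimed rewriting on the right-hand side of the same display:
rhs = N ** (-2 * q) * N ** (-p * (q - 1) / 2) * (eta / rho) ** (p / 2) \
      * (N * eta) ** (-(k + 1) * q)

# The two expressions agree identically in N, eta, rho, p, k, q.
assert sp.simplify(lhs / rhs) == 1
```
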
We remark that $(N\eta/\rho)^{p/2}$ in [\[eq:truncation\]](#eq:truncation){reference-type="eqref" reference="eq:truncation"} comes from the prefactor of $\Psi_k$, $\big(N^{\frac{m+1}{2}}\big)^{-q}$ from the cumulant order of the truncation terms, and $\big(N^{k/2}/\eta^{k+1}\big)^{q}$ from the trivial bounds. ### Proof of Proposition [Proposition 22](#prop:gronwall){reference-type="ref" reference="prop:gronwall"} {#sec:Propgron} As mentioned above [\[eq:Delta\]](#eq:Delta){reference-type="eqref" reference="eq:Delta"}, the treatment of the higher order terms in [\[eq:gamma-1exp\]](#eq:gamma-1exp){reference-type="eqref" reference="eq:gamma-1exp"} is identical to our discussion above. Therefore, summarizing Sections [5.2.2](#subsec:fourthorder){reference-type="ref" reference="subsec:fourthorder"}--[5.2.4](#subsec:truncation){reference-type="ref" reference="subsec:truncation"}, we have proven the following. **Lemma 29**. *Fix $p,k \in {\mathbb N}$ and assume that the induction hypothesis [\[eq:inductionhypo\]](#eq:inductionhypo){reference-type="eqref" reference="eq:inductionhypo"} holds. Then, for every $\gamma \in [\gamma(N)]$, we have that $$\left| \Vert \Psi_k^{(\gamma)}(\bm x, \bm y) \Vert_p^p - \Vert \Psi_k^{(\gamma-1)}(\bm x, \bm y) \Vert_p^p \right| \le \frac{C_1}{N^2} \widecheck{\Omega}_k^p(\gamma) + C_2 \frac{N^\xi}{N^2} \bigg(1 + \sum_{u=1}^{4} \left(\frac{N \eta}{\rho}\right)^{u/2} \left|\big[u\, \text{\rm sum. bdd.}\, M \text{\rm -terms}\big]^{(\gamma)}_{\bm x, \bm y}\right|\bigg)\,,$$ where $\big[u \, \text{\rm sum.
bdd.}\, M \text{\rm -terms}\big]^{(\gamma)}_{\bm x, \bm y}$ is understood as explained below [\[eq:Mterms\]](#eq:Mterms){reference-type="eqref" reference="eq:Mterms"}.* Next, employing the telescopic summation from [\[eq:telescope\]](#eq:telescope){reference-type="eqref" reference="eq:telescope"} we find that $$\label{eq:telescopic} \Vert \Psi_k^{(\gamma_0)}(\bm x, \bm y) \Vert_p^p \le C_1 \frac{1}{N^2} \sum_{\gamma < \gamma_0} \widecheck{\Omega}_k^p(\gamma) + C_2 N^\xi + \frac{N^\xi}{N^2} \sum_{\gamma < \gamma_0} \bigg( \sum_{u=1}^{4} \left(\frac{N \eta}{\rho}\right)^{u/2} \left|\big[u \, \text{\rm sum. bdd.}\, M \text{\rm -terms}\big]^{(\gamma)}_{\bm x, \bm y}\right|\bigg)$$ after having absorbed $\Vert \Psi_k^{(0)}(\bm x, \bm y) \Vert_p^p$ into $C_2 N^\xi$ by our initial assumption that we have multi-resolvent local laws [\[eq:multiG\]](#eq:multiG){reference-type="eqref" reference="eq:multiG"} for the Wigner matrix $H^{(\bf v)} = H^{(0)}$. We are left with discussing the first and last term on the rhs. of [\[eq:telescopic\]](#eq:telescopic){reference-type="eqref" reference="eq:telescopic"}. For the first term, we rely on the following lemma, which says that, in particular, we can replace each $\widecheck{\Omega}_k^p(\gamma)$ in [\[eq:telescopic\]](#eq:telescopic){reference-type="eqref" reference="eq:telescopic"} by $\Omega_k^p(\gamma)$, absorbing the additional error into $C_2$. **Lemma 30**. *Fix $p, k \in {\mathbb N}$. 
Then, for every fixed $\gamma \in [\gamma(N)]$, the expressions (recall [\[eq:vectormax\]](#eq:vectormax){reference-type="eqref" reference="eq:vectormax"}) $$\Omega_k^p(\gamma)\,, \quad \Omega_k^p(\gamma-1)\,, \quad \text{and} \quad \widecheck{\Omega}_k^p(\gamma)$$ are comparable up to an additive error of order $N^\xi$ for arbitrarily small $\xi > 0$.* *Proof.* We give a sketch of the simple argument based on Lemma [Lemma 29](#lem:onestep){reference-type="ref" reference="lem:onestep"} in combination with Lemma [Lemma 24](#lem:firstthree){reference-type="ref" reference="lem:firstthree"}: Similarly to the proof of Lemma [Lemma 29](#lem:onestep){reference-type="ref" reference="lem:onestep"}, we first expand $G^{(\gamma)}$ (resp. $G^{(\gamma-1)}$) in $\Vert \Psi_k^{(\gamma)}(\bm x, \bm y) \Vert_p^p$ (resp. $\Vert \Psi_k^{(\gamma-1)}(\bm x, \bm y) \Vert_p^p$) by means of [\[eq:backwardexp\]](#eq:backwardexp){reference-type="eqref" reference="eq:backwardexp"} and realize that $\alpha_{k, 0}^{(\gamma)}(\bm x, \bm y) = 1$ in [\[eq:gammaexp\]](#eq:gammaexp){reference-type="eqref" reference="eq:gammaexp"}--[\[eq:gamma-1exp\]](#eq:gamma-1exp){reference-type="eqref" reference="eq:gamma-1exp"}. The various terms arising in the expansion (now for all $r \ge 1$ and not only for $r \ge 4$) are dealt with as explained in Sections [5.2.2](#subsec:fourthorder){reference-type="ref" reference="subsec:fourthorder"}--[5.2.4](#subsec:truncation){reference-type="ref" reference="subsec:truncation"}. 
However, there is a major simplification, since we do not need to gain from the summation as in Case (ii) in Section [5.2.2](#subsec:fourthorder){reference-type="ref" reference="subsec:fourthorder"}: The maximal excess power $u$ of the leftover $(N \eta/\rho)^{1/2}$-factor is bounded by the order $r$ of the expansions in [\[eq:gammaexp\]](#eq:gammaexp){reference-type="eqref" reference="eq:gammaexp"}--[\[eq:gamma-1exp\]](#eq:gamma-1exp){reference-type="eqref" reference="eq:gamma-1exp"} (simply because at order $r$, there are at most $d=r$ destroyed resolvent chains), such that the characteristic $1/N^{r/2}$-factor at order $r$ balances this excess. Finally, we take a maximum over all $\bm u, \bm v \in I_{\bm x, \bm y}$ for all $\Vert \widecheck{\Psi}_k^{(\gamma)}(\bm u, \bm v) \Vert_p^p$ arising through the expansion (see [\[eq:vectormax\]](#eq:vectormax){reference-type="eqref" reference="eq:vectormax"}). This finishes the sketch of the proof of Lemma [Lemma 30](#lem:checknochecksim){reference-type="ref" reference="lem:checknochecksim"}. ◻ For the last term in [\[eq:telescopic\]](#eq:telescopic){reference-type="eqref" reference="eq:telescopic"}, we extend the summation $\sum_{\gamma < \gamma_0}$ to all indices $i,j \in [N]$; it is an upper bound as we only sum positive terms. Then, for every fixed $u\in [4]$, we need to gain from this summation of $\big[u \, \text{\rm sum. bdd.}\, M \text{\rm -terms}\big]^{(\gamma)}_{\bm x, \bm y}$ over all $\gamma \in [\gamma(N)]$ precisely $N^{-u/2}$ compared to the naive $N^2$-size of the summation. This was achieved in Lemma [Lemma 28](#lem:sumMs){reference-type="ref" reference="lem:sumMs"} by the index structure [\[eq:Msums\]](#eq:Msums){reference-type="eqref" reference="eq:Msums"} of the factors and application of several Schwarz inequalities [\[eq:MsumsSchwarz\]](#eq:MsumsSchwarz){reference-type="eqref" reference="eq:MsumsSchwarz"}. 
Hence, combining [\[eq:telescopic\]](#eq:telescopic){reference-type="eqref" reference="eq:telescopic"} with Lemma [Lemma 30](#lem:checknochecksim){reference-type="ref" reference="lem:checknochecksim"} and Lemma [Lemma 28](#lem:sumMs){reference-type="ref" reference="lem:sumMs"}, we find that $$\Vert \Psi_k^{(\gamma_0)}(\bm x, \bm y) \Vert_p^p \le C_1 \frac{1}{N^2} \sum_{\gamma < \gamma_0} {\Omega}_k^p(\gamma) + C_2 N^\xi \,.$$ Since the rhs. is independent of the elements in $I_{\bm x, \bm y}$ (recall [\[eq:vectormax\]](#eq:vectormax){reference-type="eqref" reference="eq:vectormax"}), we may as well maximize over those on the lhs. and arrive at Proposition [Proposition 22](#prop:gronwall){reference-type="ref" reference="prop:gronwall"}. ◻ ### Conclusion of the induction step {#sec:complete} Having Proposition [Proposition 22](#prop:gronwall){reference-type="ref" reference="prop:gronwall"} and hence [\[eq:gronwallconclude\]](#eq:gronwallconclude){reference-type="eqref" reference="eq:gronwallconclude"} at hand, we can immediately deduce $$\max_{\gamma \le \gamma(N)} \widecheck{\Omega}_k^p(\gamma) \lesssim N^\xi$$ from Lemma [Lemma 30](#lem:checknochecksim){reference-type="ref" reference="lem:checknochecksim"} above. This proves the $\widecheck\Psi$-part of [\[eq:conclusion\]](#eq:conclusion){reference-type="eqref" reference="eq:conclusion"} and thus finishes the induction step. Therefore, using uniformity of this bound, we conclude the proof of the isotropic multi-resolvent local laws [\[eq:zagmultiGISO\]](#eq:zagmultiGISO){reference-type="eqref" reference="eq:zagmultiGISO"}. ## Part (b): Proof of the averaged law {#subsec:av} The general idea of the proof of the averaged law is exactly the same as in the previous section: We replace all matrix elements one-by-one in $\gamma(N) \sim N^2$ steps and sum up the changes over all positions $\gamma\in [\gamma(N)]$ (cf. [\[eq:telescope\]](#eq:telescope){reference-type="eqref" reference="eq:telescope"}).
However, there are several (minor) differences in the averaged case compared to Section [5.2](#subsec:iso){reference-type="ref" reference="subsec:iso"}, which we will explain in the following. Since both averaged and isotropic normalized differences, [\[eq:Psikav\]](#eq:Psikav){reference-type="eqref" reference="eq:Psikav"} and [\[eq:Psikiso\]](#eq:Psikiso){reference-type="eqref" reference="eq:Psikiso"}, appear, we shall henceforth reintroduce the superscripts $^{\rm av}$ and $^{\rm iso}$. Moreover, contrary to the isotropic proof, in this part it is sufficient to consider an arbitrary fixed $k \in {\mathbb N}$ and perform a *single induction* on the moment $p$ taken of $\Psi_k^{\rm av}$, i.e. ${\mathbb E }|\Psi_k^{\rm av}|^p = \Vert \Psi_k^{\rm av}\Vert_p^p$. We point out that the induction on $k$ used in the previous section is not needed, because the proof of the isotropic laws has already been concluded (see [\[eq:fourthorderAV\]](#eq:fourthorderAV){reference-type="eqref" reference="eq:fourthorderAV"} later). Hence, as the *induction hypothesis*, we will assume that $$\label{eq:inductionhypoav} \max_{\gamma \le \gamma(N)} \Vert \Psi_k^{\rm av, (\gamma)} \Vert_{p-1} + \max_{\gamma \le \gamma(N)} \Vert \widecheck{\Psi}_k^{\rm av, (\gamma)} \Vert_{p-1} \lesssim N^\xi$$ holds uniformly in traceless matrices for all $\xi > 0$, and our goal is to prove the same relation with $p$ replacing $p-1$. The base case is thus simply the trivial bound ($p=1$) given by ${\mathbb E }|\Psi_k^{\rm av}|^0 = 1$. To ease notation, just as in Section [5.2](#subsec:iso){reference-type="ref" reference="subsec:iso"}, we will drop the subscripts for all resolvents and deterministic matrices, i.e. write $G_j =G$ and $A_j =A$ instead. Moreover, whenever it does not lead to confusion, we will drop all further sub- and superscripts.
Completely analogously to Section [5.2](#subsec:iso){reference-type="ref" reference="subsec:iso"}, we use resolvent expansions from Lemma [Lemma 23](#lem:resolexp){reference-type="ref" reference="lem:resolexp"} to prove the exact agreement of the orders $r \in \{0,1,2,3\}$ as in Lemma [Lemma 24](#lem:firstthree){reference-type="ref" reference="lem:firstthree"}. For the higher order terms (again focusing on the most critical fourth order ones, see Section [5.2.2](#subsec:fourthorder){reference-type="ref" reference="subsec:fourthorder"}), we argue completely analogously to [\[eq:fourthorder\]](#eq:fourthorder){reference-type="eqref" reference="eq:fourthorder"}, but now we have an additional effect: Whenever an intact averaged chain gets destroyed by a replacement $G \to G \Delta G$ from a derivative, we obtain (a sum of) isotropic chains with a $1/N$ prefactor from the normalization of the trace, i.e. $$\label{eq:avtoiso} \langle (GA)^k \rangle \longrightarrow \langle G \Delta (GA)^k \rangle = \frac{1}{N} \big( (GA)^kG \big)_{\bm e_i \bm e_j} + \frac{1}{N} \big( (GA)^kG \big)_{\bm e_j \bm e_i}\,.$$ In this way, the analogue of [\[eq:fourthorder\]](#eq:fourthorder){reference-type="eqref" reference="eq:fourthorder"} reads $$\label{eq:fourthorderAV} {\mathbb E }\sum_{d=1}^{4 \wedge p} \big|\widecheck{\Psi}_k^{\rm av}(\bm x, \bm y)\big|^{p-d} \left(\frac{N \eta}{\rho}\right)^{d/2}N^{-d(k/2-1)} \frac{1}{N^d} \sum_{(4-d) \Delta \, \leadsto \, d } \big| \underbrace{\big(... \Delta... \Delta ... \big)_{\bm{e}_i \bm{e}_j}\cdot ... \cdot \big(... \Delta... \big)_{\bm{e}_j \bm{e}_i}}_{(4-d)\; \Delta \; \text{in a total} \; d \, \text{iso chains}} \big|\,,$$ where the isotropic chains referred to in [\[eq:fourthorderAV\]](#eq:fourthorderAV){reference-type="eqref" reference="eq:fourthorderAV"}, are precisely those obtained in [\[eq:avtoiso\]](#eq:avtoiso){reference-type="eqref" reference="eq:avtoiso"}. 
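For orientation, the identity [\[eq:avtoiso\]](#eq:avtoiso){reference-type="eqref" reference="eq:avtoiso"} follows from cyclicity of the trace; as a sketch, assuming the standard convention that the replacement matrix is $\Delta = \bm e_i \bm e_j^* + \bm e_j \bm e_i^*$ (its precise normalization is immaterial here), we have $$\langle G \Delta (GA)^k \rangle = \frac{1}{N} \mathrm{Tr}\big( G\, \bm e_i \bm e_j^*\, (GA)^k \big) + \frac{1}{N} \mathrm{Tr}\big( G\, \bm e_j \bm e_i^*\, (GA)^k \big) = \frac{1}{N} \big( (GA)^k G \big)_{\bm e_j \bm e_i} + \frac{1}{N} \big( (GA)^k G \big)_{\bm e_i \bm e_j}\,.$$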
In particular, one $\Delta$ has already been \"used\" for each destroyed averaged chain, hence only $(4-d)$ $\Delta$'s are placed in the isotropic chains (recall [\[eq:summation\]](#eq:summation){reference-type="eqref" reference="eq:summation"}). Observe that, after writing $N^{-d(k/2-1)} /N^d = N^{-dk/2}$, aside from the unit vectors in the isotropic chains, the structure of [\[eq:fourthorderAV\]](#eq:fourthorderAV){reference-type="eqref" reference="eq:fourthorderAV"} is exactly the same as for [\[eq:fourthorder\]](#eq:fourthorder){reference-type="eqref" reference="eq:fourthorder"}. Next, in each of the resulting four resolvent chains in the rhs. of [\[eq:fourthorderAV\]](#eq:fourthorderAV){reference-type="eqref" reference="eq:fourthorderAV"}, as before we add and subtract the corresponding $M$-term, again schematically written as $G = (G-M) + M$. Exactly as in the previous section, we have to distinguish two cases. - At least $d$ of the $4$ resolvent chains are replaced by their fluctuating part, $G-M$. - At least $5-d$ of the $4$ resolvent chains are replaced by their deterministic counterpart, $M$. [Case (i):]{.ul} First, we note that, since there are only strictly lower moments of $\Psi_k^{\rm av}$ appearing in [\[eq:fourthorderAV\]](#eq:fourthorderAV){reference-type="eqref" reference="eq:fourthorderAV"} after the resolvent expansion, we can directly employ the *induction hypothesis* [\[eq:inductionhypoav\]](#eq:inductionhypoav){reference-type="eqref" reference="eq:inductionhypoav"}, i.e. there is no possibility of preserving the destroyed chains unlike in [\[eq:sustainexample\]](#eq:sustainexample){reference-type="eqref" reference="eq:sustainexample"}.
Therefore, by additionally applying the already established isotropic laws from the previous section in combination with $\big| (M_{j+1})_{\bm u \bm v} \big| \lesssim N^{j/2}$ (recall also [\[eq:Mshorthand\]](#eq:Mshorthand){reference-type="eqref" reference="eq:Mshorthand"}), we find that $$\label{eq:case1avfinal} \text{Case (i) terms of \eqref{eq:fourthorderAV}} \lesssim N^\xi \sum_{d=1}^{4 \wedge p} \left(\frac{N \eta}{\rho}\right)^{d/2} N^{-dk/2} \left[ N^{dk/2} \left( \frac{\rho}{N \eta}\right)^{d/2} +...\right]\lesssim N^\xi\,,$$ indicating terms with more than $d$ factors of $G-M$ by dots. This concludes the discussion of Case (i).\ [Case (ii):]{.ul} For the second case, we again recall that all purely deterministic terms are independent of the replacement step $\gamma$. Moreover, completely analogously to Case (ii) in Section [5.2.2](#subsec:fourthorder){reference-type="ref" reference="subsec:fourthorder"}, it is not sufficient to just estimate every isotropic $M$-term blindly -- instead we again need to *gain from the summation* over all replacement positions. We again illustrate this by an example. **Example 32**. We first consider $d=1$ and use the notation [\[eq:Mshorthand\]](#eq:Mshorthand){reference-type="eqref" reference="eq:Mshorthand"}. Then, by means of the induction hypothesis [\[eq:inductionhypoav\]](#eq:inductionhypoav){reference-type="eqref" reference="eq:inductionhypoav"}, we have the trivial estimate $$\label{eq:d=1exampleCase2AV} \begin{split} &{\mathbb E }\big|\widecheck{\Psi}^{\rm av}_k(\bm x, \bm y)\big|^{p-1} \left(\frac{N \eta}{\rho}\right)^{1/2} N^{-k/2}\sum_{\substack{0\le k_l \le k: \\ \sum_{l} k_l= k}}\left[\big| \big( M_{k_1+1}\big)_{\bm e_i \bm{e}_i} \big(M_{k_2+1}\big)_{\bm e_j \bm{e}_j} \big( M_{k_3+1}\big)_{\bm e_i \bm{e}_i} \big( M_{k_4+1}\big)_{\bm e_j \bm{e}_j} \big| + ... 
\right] \\ \lesssim \, &N^\xi \left(\frac{N \eta}{\rho}\right)^{1/2} N^{-k/2}\sum_{\substack{0\le k_l \le k: \\ \sum_{l} k_l = k}} \left[ N^{\sum_l k_l/2} + ... \right] \lesssim N^\xi \left(\frac{N \eta}{\rho}\right)^{1/2}\,, \end{split}$$ analogously to [\[eq:d=1exampleCase2\]](#eq:d=1exampleCase2){reference-type="eqref" reference="eq:d=1exampleCase2"}. Again, this bound is off by a factor $(N \eta/\rho)^{1/2}$, which can be improved on by averaging over all replacement positions. Compared to the isotropic case, we can no longer gain from summing over off-diagonal terms of the form $M_{\bm x \bm e_i}$. Instead, now we sum over squares of terms of the form $M_{\bm e_i \bm e_i}$ and estimate them by $$\label{eq:schwarzAV} \sum_{i} \big| M_{\bm e_i \bm{e}_i}\big|^2 \le \sum_{i,j} \big|M_{\bm e_i \bm{e}_j}\big|^2 \le \sum_i \big( |M|^2\big)_{\bm e_i \bm{e}_i} = N \langle |M|^2 \rangle\,,$$ similarly to [\[eq:schwarzfirst\]](#eq:schwarzfirst){reference-type="eqref" reference="eq:schwarzfirst"}--[\[eq:schwarzfirst2\]](#eq:schwarzfirst2){reference-type="eqref" reference="eq:schwarzfirst2"}. Note that [\[eq:schwarzAV\]](#eq:schwarzAV){reference-type="eqref" reference="eq:schwarzAV"} is better than the trivial bound, which would give $N\Vert M \Vert^2$. The key for exploiting this improvement is the following lemma, the proof of which is given in Appendix [6](#sec:addtech){reference-type="ref" reference="sec:addtech"}. **Lemma 31**. *Using the assumptions and notations from Lemma [Lemma 3](#lem:Mbound){reference-type="ref" reference="lem:Mbound"} and the normalization $\langle |A_i|^2 \rangle = 1$, we have that $$\label{eq:M^2estimateAV} \big\langle \big|\mathcal{M}(z_1, A_1, ...
, A_k, z_{k+1}; \mathfrak{I}_{k+1})\big|^2 \big\rangle \lesssim N^{k} \left(\prod_{i \in \mathfrak{I}_{k+1}} \rho_i \right)^2\left[ \left(\frac{ \max_{i\in [k+1]} \big(\rho_i + \boldsymbol{1}(i \notin \mathfrak{I}_{k+1})\big)}{N \ell}\right)^2 \vee \frac{1}{N} \right]\,.$$* Applying [\[eq:M\^2estimateAV\]](#eq:M^2estimateAV){reference-type="eqref" reference="eq:M^2estimateAV"} for $k= k_l$ and $\mathfrak{I}_{k_l+1} = \emptyset$ (recall [\[eq:Mdef\]](#eq:Mdef){reference-type="eqref" reference="eq:Mdef"}, [\[eq:Mdefim\]](#eq:Mdefim){reference-type="eqref" reference="eq:Mdefim"}, and [\[eq:Mshorthand\]](#eq:Mshorthand){reference-type="eqref" reference="eq:Mshorthand"}), we see the bound $$\label{eq:gainAVapplied} \langle |M_{k_l+1}|^2 \rangle \lesssim N^{k_l} \left[\left(\frac{\rho}{N \eta}\right)^2 \vee \frac{1}{N}\right]\,.$$ We remark that this estimate is better by the factor $\big[\big(N \eta/\rho\big)^{-2} \vee N^{-1}\big] \ll 1$ compared to the naive norm bound $|(M_{k_l+1})_{\bm u \bm v}|^2 \le \Vert M_{k_l+1} \Vert^2 \lesssim N^{k_l}$ from Lemma [Lemma 3](#lem:Mbound){reference-type="ref" reference="lem:Mbound"} (b) employed in [\[eq:d=1exampleCase2AV\]](#eq:d=1exampleCase2AV){reference-type="eqref" reference="eq:d=1exampleCase2AV"}. Hence, fixing one constellation of $k_l$'s in [\[eq:d=1exampleCase2AV\]](#eq:d=1exampleCase2AV){reference-type="eqref" reference="eq:d=1exampleCase2AV"}, we find the average of the first line in [\[eq:d=1exampleCase2AV\]](#eq:d=1exampleCase2AV){reference-type="eqref" reference="eq:d=1exampleCase2AV"} over all $i,j \in [N]$ to be bounded by $$\label{eq:d=1exampleCase2AVSUM} \begin{split} &N^\xi \left(\frac{N \eta}{\rho}\right)^{1/2} N^{-k/2} \frac{1}{N^2}\sum_{i,j}\left[\big| \big( M_{k_1+1}\big)_{\bm e_i \bm{e}_i} \big(M_{k_2+1}\big)_{\bm e_j \bm{e}_j} \big( M_{k_3+1}\big)_{\bm e_i \bm{e}_i} \big( M_{k_4+1}\big)_{\bm e_j \bm{e}_j} \big| + ... 
\right] \\ \lesssim &N^\xi\left(\frac{N \eta}{\rho}\right)^{1/2} N^{-k/2} \frac{1}{N^2} \left[\prod_{l \in [4]} \left(\sum_{i} \big|\big( M_{k_l+1}\big)_{\bm e_i \bm{e}_i}\big|^2 \right)^{1/2} + ... \right] \\ \lesssim &N^\xi \left(\frac{N \eta}{\rho}\right)^{1/2} N^{-k/2}\frac{1}{N^2}\left[\left(\prod_{l \in [4]} N^{k_l+1} \left[\left(\frac{\rho}{N \eta}\right)^2 \vee \frac{1}{N}\right]\right)^{1/2}+ ... \right] \\ \lesssim &N^\xi \left[\left(\frac{\rho}{N\eta}\right)^{7/2} \vee \left(\frac{\eta}{\rho}\right)^{1/2} \frac{1}{N^{3/2}} \right] \lesssim N^\xi\,. \end{split}$$ To go from the first to the second line, we employed a trivial Schwarz inequality. To go to the penultimate line, we used [\[eq:schwarzAV\]](#eq:schwarzAV){reference-type="eqref" reference="eq:schwarzAV"} with $M = M_{k_l+1}$. For the final estimate, we employed $(\prod_{l \in [4]} N^{k_l+1})^{1/2} = N^{k/2+2}$. Next, we consider one example for $d=4$, where all four resolvent chains are replaced by their deterministic counterpart. In this case, the analog of [\[eq:d=1exampleCase2AVSUM\]](#eq:d=1exampleCase2AVSUM){reference-type="eqref" reference="eq:d=1exampleCase2AVSUM"} reads $$\begin{split} &N^\xi \left(\frac{N \eta}{\rho}\right)^{2} N^{-2k} \frac{1}{N^2}\sum_{i,j}\left[\big| \big( M_{k+1}\big)_{\bm e_i \bm{e}_j} \big(M_{k+1}\big)_{\bm e_j \bm{e}_i} \big( M_{k+1}\big)_{\bm e_i \bm{e}_j} \big( M_{k+1}\big)_{\bm e_j \bm{e}_i} \big| + ... \right] \\ \lesssim &N^\xi\left(\frac{N \eta}{\rho}\right)^{2} N^{-k} \frac{1}{N^2} \left[\sum_{i,j} \big|\big( M_{k+1}\big)_{\bm e_i \bm{e}_j}\big|^2 + ... \right] \\ \lesssim &N^\xi \left(\frac{N \eta}{\rho}\right)^{2} N^{-k}\frac{1}{N^2}\left[ N^{k+1}\left[\left(\frac{\rho}{N \eta}\right)^2 \vee \frac{1}{N}\right]+ ... \right] \lesssim N^\xi \left[\frac{1}{N} \vee \left(\frac{\eta}{\rho}\right)^2\right] \lesssim N^\xi\,. 
\end{split}$$ To go from the first to the second line, we estimated two factors of $M_{k+1}$ by their norm, $\big| (M_{k+1})_{\bm u \bm v}\big| \lesssim N^{k/2}$. Next, to go to the third line, we employed [\[eq:schwarzAV\]](#eq:schwarzAV){reference-type="eqref" reference="eq:schwarzAV"} and Lemma [Lemma 31](#lem:gainAV){reference-type="ref" reference="lem:gainAV"}. The final estimate used $\eta/\rho \lesssim 1$. The above examples showcase the general mechanism for the terms in Case (ii): After estimating all the $(G-M)$-type terms with the aid of the induction hypothesis [\[eq:inductionhypoav\]](#eq:inductionhypoav){reference-type="eqref" reference="eq:inductionhypoav"}, we are left with an excess $(N \eta/\rho)^{u/2}$-factor, for some $u \in [4]$. Analogously to [\[eq:Msummable\]](#eq:Msummable){reference-type="eqref" reference="eq:Msummable"}--[\[eq:Msums\]](#eq:Msums){reference-type="eqref" reference="eq:Msums"}, this leftover factor is then controlled by *gaining from the summation* like in [\[eq:M\^2estimateAV\]](#eq:M^2estimateAV){reference-type="eqref" reference="eq:M^2estimateAV"}. We skip the simple counting argument ensuring this gain. The treatment of the further higher order terms and the truncation of the resolvent expansion is completely analogous to Sections [5.2.3](#subsec:lowerorder){reference-type="ref" reference="subsec:lowerorder"} and [5.2.4](#subsec:truncation){reference-type="ref" reference="subsec:truncation"}, respectively. 
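For the reader's convenience, the elementary power counting behind the final step of the $d=4$ example above can be made explicit: collecting all powers of $N$ gives $$\left(\frac{N \eta}{\rho}\right)^{2} N^{-k}\,\frac{1}{N^2}\cdot N^{k+1}\left[\left(\frac{\rho}{N \eta}\right)^2 \vee \frac{1}{N}\right] = \left(\frac{N\eta}{\rho}\right)^{2} \frac{1}{N}\left[\left(\frac{\rho}{N \eta}\right)^2 \vee \frac{1}{N}\right] = \frac{1}{N} \vee \left(\frac{\eta}{\rho}\right)^2\,,$$ in agreement with the bound stated there.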
Therefore, by telescopic summation like in [\[eq:telescope\]](#eq:telescope){reference-type="eqref" reference="eq:telescope"}, we find that $$\max_{\gamma \le \gamma(N)} \Vert \Psi_k^{\rm av, (\gamma)} \Vert_p^p + \max_{\gamma \le \gamma(N)} \Vert \widecheck{\Psi}_k^{\rm av, (\gamma)}\Vert_p^p \lesssim \Vert \Psi_k^{\rm av, (0)}\Vert_p^p + N^\xi \lesssim N^\xi\,,$$ where in the last step we absorbed $\Vert \Psi_k^{\rm av, (0)}\Vert_p^p$ into $N^\xi$ by our initial assumption that we have multi-resolvent local laws [\[eq:multiG\]](#eq:multiG){reference-type="eqref" reference="eq:multiG"} for the matrix $H^{(\bm v)} = H^{(0)}$. The checked version is obtained completely analogously to Lemma [Lemma 30](#lem:checknochecksim){reference-type="ref" reference="lem:checknochecksim"}. This completes the proof of the induction step. We have thus finished the argument for the averaged case and hence the proof of Proposition [Proposition 11](#prop:zag){reference-type="ref" reference="prop:zag"}. ◻ ## The case $\mathfrak{I}_k \neq \emptyset$ (resp. $\mathfrak{I}_{k+1} \neq \emptyset$) {#subsec:withIM} In this section, we explain how to adjust the above argument for proving Proposition [Proposition 11](#prop:zag){reference-type="ref" reference="prop:zag"} in the case that at least one of the resolvents in the chains of interest $$\langle \mathcal{G}_1 A_1 ... \mathcal{G}_k A_k \rangle \qquad \text{and} \qquad \big( \mathcal{G}_1 A_1 ... \mathcal{G}_k A_k \mathcal{G}_{k+1}\big)_{\bm x \bm y}$$ is an imaginary part, i.e. $\mathcal{G}_i = \mathrm{Im}\,G_i$ for at least one index $i \in [k]$ (resp. $i \in [k+1]$). Recall the local laws for the average and isotropic chain from [\[eq:zagmultiG\]](#eq:zagmultiG){reference-type="eqref" reference="eq:zagmultiG"} and [\[eq:zagmultiGISO\]](#eq:zagmultiGISO){reference-type="eqref" reference="eq:zagmultiGISO"}, respectively.
Compared to the case of no imaginary parts, handled in the previous Sections [5.2](#subsec:iso){reference-type="ref" reference="subsec:iso"}--[5.3](#subsec:av){reference-type="ref" reference="subsec:av"}, there are now two changes: First, the bound contains the product $\prod_{i \in \mathfrak{I}} \rho_i$ (instead of one). Second, the smallness factor $(N\eta/\rho)^{-1/2}$ from before is now replaced by $(N \ell)^{-1/2}$. For adjusting the first change, the simple but key insight is that, when applying the resolvent expansion from Lemma [Lemma 23](#lem:resolexp){reference-type="ref" reference="lem:resolexp"} to both $G$ and $G^*$ in $\mathrm{Im}\,G = \frac{1}{2 \mathrm{i}}(G-G^*)$, we can always \"restore\" exactly one $\mathrm{Im}\,G$ on the rhs. More precisely, taking [\[eq:backwardexp\]](#eq:backwardexp){reference-type="eqref" reference="eq:backwardexp"} for concreteness and using $\Delta = \Delta^*$, we have that $$\begin{split} \mathrm{Im}\,G = \frac{1}{2 \mathrm{i}} \big[G - G^*\big] &= \frac{1}{2 \mathrm{i}} \bigg[ \left(\widecheck{G} - N^{-1/2} \widecheck{G} \Delta \widecheck{G}+ N^{-1} \widecheck{G} \Delta\widecheck{G} \Delta \widecheck{G} + ... \right) \\ & \qquad \qquad - \left(\widecheck{G}^* - N^{-1/2} \widecheck{G}^* \Delta \widecheck{G}^*+ N^{-1} \widecheck{G}^* \Delta\widecheck{G}^* \Delta \widecheck{G}^*+ ... \right)\bigg] \\ & = \mathrm{Im}\,\widecheck{G} - N^{-1/2} \big(\mathrm{Im}\,\widecheck{G} \Delta \widecheck{G} + \widecheck{G}^*\Delta \mathrm{Im}\,\widecheck{G}\big) \\[1mm] & \qquad \qquad + N^{-1} \big( \mathrm{Im}\,\widecheck{G} \Delta \widecheck{G} \Delta \widecheck{G}+ \widecheck{G}^*\Delta \mathrm{Im}\,\widecheck{G} \Delta \widecheck{G} + \widecheck{G}^*\Delta \widecheck{G}^*\Delta \mathrm{Im}\,\widecheck{G}\big) + ... \end{split}$$ In this way, the imaginary parts in the original chain are \"preserved\" by the resolvent expansion.
Recall that $|\mathrm{Im}\,\widecheck{G}_{\bm u \bm v}(z)| \prec \rho(z)$ (as a consequence of [\[eq:singleGoptimal\]](#eq:singleGoptimal){reference-type="eqref" reference="eq:singleGoptimal"} for $N |\mathrm{Im}\,z| \rho(z) \gg 1$; recall $N \hat{\ell}\gg 1$), which improves [\[eq:Gbdd1GFT\]](#eq:Gbdd1GFT){reference-type="eqref" reference="eq:Gbdd1GFT"}. In particular, using Lemma [Lemma 3](#lem:Mbound){reference-type="ref" reference="lem:Mbound"}, we find that the factor $\big(\prod_{i \in \mathfrak{I}} \rho_i\big)^{-d}$ stemming from the correct normalization of the analog of $\Psi_k^{\rm av/iso}$ in [\[eq:Psikav\]](#eq:Psikav){reference-type="eqref" reference="eq:Psikav"}--[\[eq:Psikiso\]](#eq:Psikiso){reference-type="eqref" reference="eq:Psikiso"} and thus appearing in the expression analogous to [\[eq:fourthorder\]](#eq:fourthorder){reference-type="eqref" reference="eq:fourthorder"} is naturally compensated by a product of $\rho$'s stemming from the destroyed chains. For adjusting the second change, it suffices to replace every $\eta/\rho$ appearing in Sections [5.2](#subsec:iso){reference-type="ref" reference="subsec:iso"}--[5.3](#subsec:av){reference-type="ref" reference="subsec:av"} by $\ell$ and realize that the complement of the interesting regime, i.e. the regime $\ell\ge 1$, is already covered by Proposition [Proposition 8](#prop:initial){reference-type="ref" reference="prop:initial"}. # Additional technical results {#sec:addtech} In this section we prove several additional technical results which are used in the main sections.
## Bounds on the deterministic approximations *Proofs of Lemma [Lemma 3](#lem:Mbound){reference-type="ref" reference="lem:Mbound"} and the claim in Remark [Remark 6](#rmk:MHS){reference-type="ref" reference="rmk:MHS"} (ii).* We will first prove the following stronger bound in Lemma [Lemma 33](#lem:Mboundstrong){reference-type="ref" reference="lem:Mboundstrong"}, from which we immediately deduce Lemma [Lemma 3](#lem:Mbound){reference-type="ref" reference="lem:Mbound"} and the claim in Remark [Remark 6](#rmk:MHS){reference-type="ref" reference="rmk:MHS"} (ii). The proof of the following lemma is given at the end of the current section. **Lemma 33**. *Fix $k \ge 1$. Consider spectral parameters $z_1, ... , z_k \in {\mathbb C}\setminus {\mathbb R }$ and traceless matrices $A_1, ... , A_k \in {\mathbb C}^{N \times N}$, and define for every $j \in [k]$ $${\eta}_j := |\mathrm{Im}\,z_j|, \qquad \rho_j := \frac{1}{\pi}|\mathrm{Im}\,m_{\rm sc}(z_j)|, \qquad \ell := \min_j\big[\eta_j (\rho_j + \boldsymbol{1}(j \notin \mathfrak{I}_k))\big]\,.$$ Then, for every $1 \le s \le \lfloor k/2 \rfloor$, it holds that $$\label{eq:Mboundstrong} \left| \sum_{\substack{\pi\in\mathrm{NC}([k]): \\ |\pi| = k+1-s}} \langle \mathrm{pTr}_{K(\pi)}(A_1,\ldots,A_{k-1})A_k \rangle \prod_{S\in\pi} m_\circ^{(\mathfrak{I}_k)}[S] \right| \lesssim \left(\prod_{j \in \mathfrak{I}_k} \rho_j \right) \frac{1}{\ell^{s-1}} \prod_{\substack{S\in K(\pi) \\ |S| \ge 2}}\prod_{j\in S} \left\langle|A_j|^{|S|}\right\rangle^{\frac{1}{|S|}} \,,$$ where $m_\circ^{(\mathfrak{I})}[S]$ is defined above [\[eq:Mdivdiff\]](#eq:Mdivdiff){reference-type="eqref" reference="eq:Mdivdiff"}. For $s > \lfloor k/2 \rfloor$ the lhs.
of [\[eq:Mboundstrong\]](#eq:Mboundstrong){reference-type="eqref" reference="eq:Mboundstrong"} equals zero.* For the proof of Lemma [Lemma 3](#lem:Mbound){reference-type="ref" reference="lem:Mbound"} (a) and the claim in Remark [Remark 6](#rmk:MHS){reference-type="ref" reference="rmk:MHS"} (ii) concerning [\[eq:mainAV\]](#eq:mainAV){reference-type="eqref" reference="eq:mainAV"} we use that $\langle |A|^p\rangle^{1/p} \le N^{\frac{p-2}{2p}} \langle |A|^2 \rangle^{1/2}$ for any $p \ge 2$, and hence $$\text{rhs. of} \, \eqref{eq:Mboundstrong} \lesssim N^{k/2 - 1} \left(\prod_{j \in \mathfrak{I}_k} \rho_j \right) \left(\prod_{j=1}^k \langle |A_j|^2 \rangle^{1/2} \right) \frac{1}{(N \ell)^{s-1}}\,.$$ This shows that, in particular, all terms with $s>1$ in [\[eq:Mboundstrong\]](#eq:Mboundstrong){reference-type="eqref" reference="eq:Mboundstrong"} are explicitly smaller than the error term in [\[eq:mainAV\]](#eq:mainAV){reference-type="eqref" reference="eq:mainAV"}, where we used that $N \ell \gg 1$. The $s=1$ term exactly constitutes the deterministic approximation in [\[eq:Mreplace\]](#eq:Mreplace){reference-type="eqref" reference="eq:Mreplace"}, i.e. the sum in [\[eq:Mboundstrong\]](#eq:Mboundstrong){reference-type="eqref" reference="eq:Mboundstrong"} contains exactly one term $$\sum_{\substack{\pi\in\mathrm{NC}([k]): \\ |\pi| = k}} \langle \mathrm{pTr}_{K(\pi)}(A_1,\ldots,A_{k-1})A_k \rangle \prod_{S\in\pi} m_\circ^{(\mathfrak{I}_k)}[S] = \bigg(\prod_{j \in \mathfrak{I}_k} \mathrm{Im}\,m_j\bigg) \bigg( \prod_{j \notin \mathfrak{I}_k} m_j\bigg)\langle A_1 ... A_k \rangle \,.$$ Here we used that $|\pi| = k$ implies that the Kreweras complement consists of the full set, $K(\pi) = [k]$. Finally, for the proof of Lemma [Lemma 3](#lem:Mbound){reference-type="ref" reference="lem:Mbound"} (b) and the claim in Remark [Remark 6](#rmk:MHS){reference-type="ref" reference="rmk:MHS"} (ii) concerning [\[eq:mainISO\]](#eq:mainISO){reference-type="eqref" reference="eq:mainISO"} (i.e. 
the corresponding isotropic bounds) we argue completely analogously to Section [4.2](#subsec:pureIMISO){reference-type="ref" reference="subsec:pureIMISO"}. ◻ It remains to prove Lemma [Lemma 33](#lem:Mboundstrong){reference-type="ref" reference="lem:Mboundstrong"}. *Proof of Lemma [Lemma 33](#lem:Mboundstrong){reference-type="ref" reference="lem:Mboundstrong"}.* Fix an arbitrary non-crossing partition $\pi \in \mathrm{NC}([k])$ consisting of $|\pi| = k+1-s$ blocks. First, note that, in order to get a non-vanishing partial trace $$\langle \mathrm{pTr}_{K(\pi)}(A_1,\ldots,A_{k-1})A_k \rangle = \prod_{S\in K(\pi)}\left\langle\prod_{j\in S}A_j\right\rangle$$ the minimal size of a block $S \in K(\pi)$ is two (using that the $A_i$'s are traceless). Therefore, by application of Hölder's inequality, $$\label{eq:Apartfinal} \begin{split} \big| \langle \mathrm{pTr}_{K(\pi)}(A_1,\ldots,A_{k-1})A_k \rangle \big| \le \prod_{\substack{S\in K(\pi) \\ |S| \ge 2}}\prod_{j\in S} \left\langle|A_j|^{|S|}\right\rangle^{\frac{1}{|S|}} \,. \end{split}$$ In order to estimate $\prod_{S\in\pi} m_\circ^{(\mathfrak{I}_k)}[S]$, we recall the Möbius inversion formula [@thermalisation Lemma 2.16] $$\label{eq:Mobius} m_\circ^{(\mathfrak{I}_k)}[S] = m^{(\mathfrak{I}_k)}[S] + \sum_{\substack{\pi \in \mathrm{NC}(S) \\ |\pi| \ge 2}} (-1)^{|\pi|-1} \left( \prod_{T \in K(\pi)} C_{|T| -1} \right) \prod_{U \in \pi} m^{(\mathfrak{I}_k)}[U]$$ where $C_n$ is the $n^{\rm th}$ Catalan number. Hence, it suffices to bound the iterated divided differences $m^{(\mathfrak{I}_k)}[S]$ for a subset $S \subset [k]$ as $$\label{eq:intrepbound} \left| m^{(\mathfrak{I}_k)}[S] \right| \lesssim \frac{\prod_{i \in \mathfrak{I}_k \cap S} \rho_i}{\ell^{|S| - 1}}$$ which is a direct consequence of the integral representation [\[eq:Mdivdiff\]](#eq:Mdivdiff){reference-type="eqref" reference="eq:Mdivdiff"}. 
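To illustrate how [\[eq:intrepbound\]](#eq:intrepbound){reference-type="eqref" reference="eq:intrepbound"} follows from [\[eq:Mdivdiff\]](#eq:Mdivdiff){reference-type="eqref" reference="eq:Mdivdiff"} in the simplest case, consider, as a sketch, $S = \{j_1, j_2\}$ with $S \cap \mathfrak{I}_k = \emptyset$, where $m^{(\mathfrak{I}_k)}[S]$ is the divided difference of $m_{\rm sc}$; a Schwarz inequality together with $\int_{\mathbb R }\rho(x) |x - z|^{-2} \, \mathrm{d}x \lesssim \rho(z)/|\mathrm{Im}\,z|$ yields $$\big| m^{(\mathfrak{I}_k)}[S] \big| \le \int_{\mathbb R }\frac{\rho(x)}{|x - z_{j_1}| \, |x - z_{j_2}|} \, \mathrm{d}x \lesssim \left(\frac{\rho_{j_1}}{\eta_{j_1}} \, \frac{\rho_{j_2}}{\eta_{j_2}}\right)^{1/2} \lesssim \frac{1}{\ell}\,,$$ where the last step uses $\rho_j \lesssim 1$ and the definition of $\ell$; the general case follows in the same way.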
Indeed, combining [\[eq:Mobius\]](#eq:Mobius){reference-type="eqref" reference="eq:Mobius"} with [\[eq:intrepbound\]](#eq:intrepbound){reference-type="eqref" reference="eq:intrepbound"} and using that the sum in [\[eq:Mobius\]](#eq:Mobius){reference-type="eqref" reference="eq:Mobius"} is restricted to partitions of $S$ with at least two blocks, we obtain $$\label{eq:zpartfinal} \left| \prod_{S\in\pi} m_\circ^{(\mathfrak{I}_k)}[S] \, \right| \lesssim \left(\prod_{i \in \mathfrak{I}_k} \rho_i\right)\frac{1}{\ell^{s-1}}$$ where we additionally used that the original non-crossing partition $\pi \in \mathrm{NC}([k])$ consists of exactly $k+1-s$ blocks. Combining [\[eq:zpartfinal\]](#eq:zpartfinal){reference-type="eqref" reference="eq:zpartfinal"} with [\[eq:Apartfinal\]](#eq:Apartfinal){reference-type="eqref" reference="eq:Apartfinal"}, we conclude the proof of [\[eq:Mboundstrong\]](#eq:Mboundstrong){reference-type="eqref" reference="eq:Mboundstrong"}. For $s > \lfloor k/2 \rfloor$, we note that the Kreweras complement $K(\pi)$ necessarily contains singletons, and hence the lhs. of [\[eq:Mboundstrong\]](#eq:Mboundstrong){reference-type="eqref" reference="eq:Mboundstrong"} vanishes since $\langle A_i\rangle=0$. ◻ We conclude this section by giving the proof of Lemma [Lemma 31](#lem:gainAV){reference-type="ref" reference="lem:gainAV"}. *Proof of Lemma [Lemma 31](#lem:gainAV){reference-type="ref" reference="lem:gainAV"}.* The principal idea of the proof is very similar to the previous ones given in this section, hence we provide only a brief argument.
Recalling [\[eq:Mdefim\]](#eq:Mdefim){reference-type="eqref" reference="eq:Mdefim"}--[\[eq:Mdivdiff\]](#eq:Mdivdiff){reference-type="eqref" reference="eq:Mdivdiff"}, we have that $$\label{eq:M^2AVestpf} \big\langle \big| \mathcal{M}(z_1,A_1,\dots,A_{k},z_{k+1};\mathfrak{I}_{k+1})\big|^2 \big\rangle \lesssim \sum_{\pi\in\mathrm{NC}([k+1])} \big\langle \big| \mathrm{pTr}_{K(\pi)}(A_1,\ldots,A_{k}) \big|^2 \big\rangle \left| \prod_{S\in\pi} m_\circ^{(\mathfrak{I}_{k+1})}[S] \right|^2\,.$$ Next, analogously to Lemma [Lemma 33](#lem:Mboundstrong){reference-type="ref" reference="lem:Mboundstrong"} above, we decompose the summation over all partitions $\pi$ into groups, where $|\pi| = k+2-s$ with $1 \le s \le \lceil (k+1)/2 \rceil$ is fixed (note that $\lfloor \cdot \rfloor$ got replaced by $\lceil \cdot \rceil$ due to the presence of a non-traceless identity matrix). Moreover, for fixed $s$ we distinguish two cases in [\[eq:M\^2AVestpf\]](#eq:M^2AVestpf){reference-type="eqref" reference="eq:M^2AVestpf"} (recall [\[eq:partrdef\]](#eq:partrdef){reference-type="eqref" reference="eq:partrdef"}): For Case (i), we assume that the unique block $\mathfrak{B}(k+1) \in K(\pi)$ containing $k+1$ contains no other elements, i.e. $\mathfrak{B}(k+1)\setminus \{k+1\} = \emptyset$. For Case (ii), we assume that $\mathfrak{B}(k+1)\setminus \{k+1\} \neq \emptyset$.\ [Case (i).]{.ul} First, we note that necessarily $s \ge 2$ in this case. Then, we have that $$\big\langle \big| \mathrm{pTr}_{K(\pi)}(A_1,\ldots,A_{k}) \big|^2 \big\rangle \le \left(\prod_{\substack{S\in K(\pi) \setminus \mathfrak{B}(k+1) \\ |S| \ge 2}}\prod_{j\in S} \left\langle|A_j|^{|S|}\right\rangle^{\frac{1}{|S|}} \right)^2 \le \left(\frac{N^{k/2}}{N^{s-1}}\right)^2\,,$$ analogously to [\[eq:Apartfinal\]](#eq:Apartfinal){reference-type="eqref" reference="eq:Apartfinal"}. 
Since in Case (i), $z_1$ and $z_{k+1}$ are always together in one block $S \in \pi$ with $|\pi| = k+2-s$, we obtain $$\left| \prod_{S\in\pi} m_\circ^{(\mathfrak{I}_{k+1})}[S] \, \right|^2 \lesssim \left[\frac{\left(\prod_{i \in \mathfrak{I}_{k+1}} \rho_i\right) \wedge \max_{i \in [k+1]} \rho_i}{\ell^{s-1}} \right]^2$$ analogously to [\[eq:zpartfinal\]](#eq:zpartfinal){reference-type="eqref" reference="eq:zpartfinal"} by means of [\[eq:Mobius\]](#eq:Mobius){reference-type="eqref" reference="eq:Mobius"} and the integral representation [\[eq:Mdivdiff\]](#eq:Mdivdiff){reference-type="eqref" reference="eq:Mdivdiff"}. The additional $\wedge \max_{i \in [k+1]} \rho_i$, which is effective only for $\mathfrak{I}_{k+1} = \emptyset$, comes from the estimate $$\int_{\mathbb R }\frac{\rho(x)}{|x - z_1| \, |x - z_{k+1}|} \mathrm{d}x\lesssim \frac{\rho_1 \vee \rho_{k+1}}{\ell}\,,$$ easily obtained by a Schwarz inequality.\ [Case (ii).]{.ul} In this case, the above estimates of the two factors in [\[eq:M\^2AVestpf\]](#eq:M^2AVestpf){reference-type="eqref" reference="eq:M^2AVestpf"} modify to $$\big\langle \big| \mathrm{pTr}_{K(\pi)}(A_1,\ldots,A_{k}) \big|^2 \big\rangle \le \left(\prod_{\substack{S\in K(\pi) \\ |S| \ge 2}} \ \left(\prod_{j\in S_1} \left\langle|A_j|^{2(|S_1|-1)}\right\rangle^{\frac{1}{2(|S_1|-1)}}\right) \left(\prod_{i = 2}^s \prod_{j\in S_i} \left\langle|A_j|^{|S_i|}\right\rangle^{\frac{1}{|S_i|}}\right) \right)^2\,,$$ assuming that $S_1 = \mathfrak{B}(k+1)$, and $$\left| \prod_{S\in\pi} m_\circ^{(\mathfrak{I}_{k+1})}[S] \, \right|^2 \lesssim \left[\frac{\prod_{i \in \mathfrak{I}_{k+1}} \rho_i}{\ell^{s-1}} \right]^2\,.$$ Putting the two cases together and using $\langle |A|^p\rangle^{1/p} \le N^{\frac{p-2}{2p}} \langle |A|^2 \rangle^{1/2}$ for any $p \ge 2$ together with $N \ell > 1$ and the normalization $\langle |A_j|^2 \rangle = 1$, we find that $$\big\langle \big| \mathcal{M}(z_1,A_1,\dots,A_{k},z_{k+1};\mathfrak{I}_{k+1})\big|^2 \big\rangle \lesssim N^k \, 
\left(\prod_{i \in \mathfrak{I}_{k+1}} \rho_i\right)^2 \left[ \left(\frac{ \max_{i\in [k+1]} \big(\rho_i + \boldsymbol{1}(i \notin \mathfrak{I}_{k+1})\big)}{N \ell}\right)^2 + \frac{1}{N} \right]\,.$$ ◻ ## Complex moment matching {#app:moma} In order to conduct the third step of our proof, the Green function comparison (GFT) of Proposition [Proposition 11](#prop:zag){reference-type="ref" reference="prop:zag"}, we need to guarantee the moment matching condition [\[eq:momentmatch\]](#eq:momentmatch){reference-type="eqref" reference="eq:momentmatch"} of the single entry distributions. For real random variables (or complex ones with independent real and imaginary parts), the argument ensuring this (and even an approximately matching fourth moment) is standard (see, e.g., [@EYbook Lemma 16.2]) and based on an explicit construction of a distribution supported on three points in ${\mathbb R }$. However, for general complex random variables, this construction is not sufficient; we now present its complex variant. Let $Z$ be a complex random variable and denote its moments by $$\label{eq:moment} m_{i,j}=m_{i,j}(Z) := {\mathbb E }\big[\overline{Z}^i Z^j\big] \qquad \text{for} \qquad i,j \in {\mathbb N}_0\,,$$ and call $i+j$ the *order* of $m_{i,j}$. Clearly $m_{0,0} = 1$ and $m_{i,j} = \overline{m}_{j,i}$, so we can focus on $m_{i,j}$ with $i\le j$. **Lemma 34**. *Let $m_{0,2}, m_{0,3}, m_{1,2} \in {\mathbb C}$ with $|m_{0,2}| \le 1$. Then there exists a complex random variable $Z$ supported on at most eleven points $z_1, ... , z_{11} \in {\mathbb C}$, such that its moments [\[eq:moment\]](#eq:moment){reference-type="eqref" reference="eq:moment"} are given by $$\label{eq:momentmatch2} m_{0,1}(Z) = 0\,, \quad m_{1,1}(Z) = 1\,, \quad m_{0,2}(Z) = m_{0,2}\,, \quad m_{0,3}(Z) = m_{0,3}\,, \quad \text{and} \quad m_{1,2}(Z) = m_{1,2}\,.$$* **Remark 35**. 
*A generalized version of this problem (constructing an atomic measure with an arbitrary number of prescribed moments), known as the *truncated complex $K$-moment problem*, has been solved by Curto and Fialkow in [@CuFi]. To keep our result self-contained, we give a simple independent proof for the special case of three moments that we need here.* Having Lemma [Lemma 34](#lem:momentmatch){reference-type="ref" reference="lem:momentmatch"} at hand, one can easily see that there exists a random variable that has the prescribed first three moments and has an independent Gaussian component of given variance $\gamma>0$. More precisely, given $m_{0,1}= 0$, $m_{1,1} = 1$, $m_{0,2}$, $m_{0,3}$, and $m_{1,2}$ with $|m_{0,2}| \le 1$ as the set of moments of $\chi_{\mathrm{od}}$, we look for a representation of $Z$ in the form $$Z := (1 - \gamma)^{1/2} Z' + \gamma^{1/2} \xi_G \quad \text{with} \quad \gamma \in (0,1) \quad \text{fixed}$$ with some random variable $Z'$ to be constructed, where $\xi_G$ is a centered complex Gaussian random variable having second moments $m_{0,2}(\xi_G) = m_{0,2}$ and $m_{1,1}(\xi_G) = 1$. The moments of $Z'$ thus satisfy the relations $$\label{seq} m_{i,j} = (1 - \gamma)^{(i+j)/2} m_{i,j}(Z') + \gamma^{(i+j)/2} m_{i,j}(\xi_G) \quad \text{with} \quad 1 \le i+j \le 3.$$ In particular, $|m_{0,2}(Z')| = |m_{0,2}|\le 1$, so the moment sequence $m_{i,j}(Z')$ from [\[seq\]](#seq){reference-type="eqref" reference="seq"} satisfies the only nontrivial condition of Lemma [Lemma 34](#lem:momentmatch){reference-type="ref" reference="lem:momentmatch"}. Therefore, by Lemma [Lemma 34](#lem:momentmatch){reference-type="ref" reference="lem:momentmatch"}, we can construct the random variable $Z'$. Finally, we remark that all random variables involved have arbitrarily high moments (cf. Assumption [Assumption 1](#ass:entries){reference-type="ref" reference="ass:entries"}).
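As a quick numerical aside, the relation [\[seq\]](#seq){reference-type="eqref" reference="seq"} uses only the independence of $Z'$ and $\xi_G$ and the vanishing of their first moments, not Gaussianity. The following Python sketch (ours; the discrete atoms are hypothetical stand-ins for $Z'$ and $\xi_G$, not the variables constructed above) verifies it for all orders $1 \le i+j \le 3$.

```python
import math
from itertools import product

def moments(atoms, probs, max_order=3):
    """m_{i,j} = E[conj(Z)^i Z^j] for i + j <= max_order, by direct summation."""
    return {(i, j): sum(p * z.conjugate() ** i * z ** j for z, p in zip(atoms, probs))
            for i in range(max_order + 1) for j in range(max_order + 1 - i)}

# Centered discrete stand-ins for Z' and xi_G (hypothetical atoms; the mean of
# each two-point law below is zero by construction).
Zp_atoms, Zp_probs = [2 + 1j, -1 - 0.5j], [1 / 3, 2 / 3]
Xi_atoms, Xi_probs = [1 + 2j, -0.5 - 1j], [1 / 3, 2 / 3]

gamma = 0.3
a, b = math.sqrt(1 - gamma), math.sqrt(gamma)

# Distribution of Z = a Z' + b xi (independent), via all atom pairs.
Z_atoms = [a * z1 + b * z2 for z1, z2 in product(Zp_atoms, Xi_atoms)]
Z_probs = [p1 * p2 for p1, p2 in product(Zp_probs, Xi_probs)]

mZ = moments(Z_atoms, Z_probs)
mZp = moments(Zp_atoms, Zp_probs)
mXi = moments(Xi_atoms, Xi_probs)
for (i, j), val in mZ.items():
    if 1 <= i + j <= 3:
        # cross terms vanish because all first moments are zero
        assert abs(val - (a ** (i + j) * mZp[(i, j)] + b ** (i + j) * mXi[(i, j)])) < 1e-12
```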
This moment matching argument shows how to choose the distribution of the initial condition $W_0$ of the Ornstein-Uhlenbeck flow [\[eq:OUOUOU\]](#eq:OUOUOU){reference-type="eqref" reference="eq:OUOUOU"} so that after time $T=\gamma$ it will match with the distribution of the original matrix $W$ up to three moments. *Proof of Lemma [Lemma 34](#lem:momentmatch){reference-type="ref" reference="lem:momentmatch"}.* We only outline the construction of the points $z_1, ... , z_{11} \in {\mathbb C}$; the precise computations are a simple exercise in calculus and linear algebra and are hence omitted. We set $z_{11} = 0$ to be the origin. The remaining ten points are then placed on five lines through the origin, carrying two points each, i.e. we put $$z_j = r_j \mathrm{e}^{\mathrm{i}\varphi_j} \quad \text{and} \quad z_{11-j} = \hat{z}_j := - \hat{r}_j \mathrm{e}^{\mathrm{i}\varphi_j} \quad \text{with} \quad r_j, \hat{r}_j \ge 0\,, \varphi_j \in [0, 2 \pi) \quad \text{for} \quad j \in [5]\,.$$ For simplicity, we can even prescribe four of the five angular variables in such a way that the corresponding points lie on the real and imaginary axes and the two diagonals, i.e. set $\varphi_j := j \pi/4$ for $j \in [4]$. We then take the law of $Z$ to be of the form $$\sum_{j \in [5]} \big( p_j \delta_{z_j} + \hat{p}_j \delta_{\hat{z}_j} \big) + \bigg(1 - \sum_{j \in [5]} (p_j + \hat{p}_j)\bigg) \delta_0$$ for weights $p_j, \hat{p}_j \ge 0$ satisfying $\sum_{j \in [5]}(p_j + \hat{p}_j) \le 1$. As mentioned above, it is a simple exercise to show that the remaining parameters $r_j, \hat{r}_j, p_j, \hat{p}_j \ge 0$ for $j \in [5]$ and $\varphi_5 \in [0, 2 \pi)$ can be chosen in such a way as to accommodate [\[eq:momentmatch2\]](#eq:momentmatch2){reference-type="eqref" reference="eq:momentmatch2"}.
More precisely, taking $A_j := p_j r_j = \hat{p}_j \hat{r}_j \ge 0$ for $j \in [5]$ (this ensures $m_{0,1}(Z) = 0$), $r_5 = \hat{r}_5$, and using our choices of $\varphi_j = j \pi/4$ for $j \in [4]$, the two complex conditions $m_{0,3}(Z) = m_{0,3}$ and $m_{1,2}(Z) = m_{1,2}$ turn into four real linear equations for the variables $C_j := B_j (r_j - \hat{r}_j) \in {\mathbb R }$ for $j \in [4]$ with $B_j := A_j (r_j + \hat{r}_j) \ge 0$. The determinant of this linear system can easily be seen to be non-vanishing, and it thus determines the difference variables $r_j - \hat{r}_j \in {\mathbb R }$ for $j \in [4]$. Finally, the independent variables $\varphi_5 \in [0, 2 \pi)$ and $B_j := A_j (r_j + \hat{r}_j) \ge 0$ for $j \in [5]$ can easily be chosen to satisfy $m_{1,1}(Z) = 1$ and $m_{0,2}(Z) = m_{0,2}$. ◻ ## Additional proofs for Section [4](#sec:opAedge){reference-type="ref" reference="sec:opAedge"} {#sec:addproofsec4} *Proofs of Lemmas [Lemma 12](#lem:Mcancel){reference-type="ref" reference="lem:Mcancel"} and [Lemma 19](#lem:cancM){reference-type="ref" reference="lem:cancM"}.* The claim of Lemma [Lemma 12](#lem:Mcancel){reference-type="ref" reference="lem:Mcancel"} follows by multi-linearity from Lemma [Lemma 19](#lem:cancM){reference-type="ref" reference="lem:cancM"}. For the proof of Lemma [Lemma 19](#lem:cancM){reference-type="ref" reference="lem:cancM"}, we will use a *tensorization argument* (or *meta argument*) similar to [@metaargument] and [@iidpaper Proof of Lemma D.1]. Throughout this proof the size $N$ of $W$ is fixed. For $d\in{\mathbb N}$ consider the $(Nd)\times (Nd)$ Wigner matrix ${\bm W}^{(d)}$, i.e. the entries of ${\bm W}^{(d)}$ have variance $1/(Nd)$.
Let ${\bm W}_t^{(d)}$ be the Ornstein-Uhlenbeck flow as in [\[eq:OUOUOU\]](#eq:OUOUOU){reference-type="eqref" reference="eq:OUOUOU"} with initial condition ${\bm W}_0^{(d)}={\bm W}^{(d)}$, and define its resolvent ${\bm G}_{i,t}^{(d)}:=({\bm W}_t^{(d)}-z_{i,t})^{-1}$, then the deterministic approximation of the resolvent is still given by $m_1$, the Stieltjes transform of the semicircular law. We now explain that also the deterministic approximation of products of resolvents and deterministic matrices is unchanged. For $1\le i\le k$, define ${\bm A}_i^{(d)}:=A_i\otimes I_d$, with $I_d$ denoting the $d$--dimensional identity, then for ${\bm M}_{[1,k],t}^{(d)}$ defined as in [\[eq:Mdef\]](#eq:Mdef){reference-type="eqref" reference="eq:Mdef"} with ${\bm M}_{i,t}^{(d)}$ and ${\bm A}_i^{(d)}$ we have $$\label{eq:usefulrel} {\bm M}_{[1,k],t}^{(d)}:={\bm M}^{(d)}(z_{1,t},{\bm A}_1^{(d)}, \dots, {\bm A}_{k-1}^{(d)},z_{k,t})=M(z_{1,t},A_1,\dots,A_{k-1},z_{k,t})\otimes I_d.$$ Fix $0<s<t$, then integrating [\[eq:flowka\]](#eq:flowka){reference-type="eqref" reference="eq:flowka"} for the bold faced resolvent and deterministic matrices, in time from $s$ to $t$ and taking the expectation we obtain $$\begin{split} &\langle {\bm M}_{[1,k],t}^{(d)}\bm A_k\rangle- \langle {\bm M}_{[1,k],s}^{(d)}\bm A_k\rangle \\ =&-{\mathbb E }\langle (\bm{G}_{[1,k],t}-{\bm M}_{[1,k],t}^{(d)})\bm A_k\rangle+{\mathbb E }\langle (\bm{G}_{[1,k],s}-{\bm M}_{[1,k],s}^{(d)})\bm A_k\rangle+\frac{k}{2}\int_s^t{\mathbb E }\langle \bm{G}_{[1,k],r}\bm A_k\rangle\,\mathrm{d}r \\ &+ \sum_{i,j=1\atop i< j}^k\int_s^t{\mathbb E }\langle \bm{G}_{[i,j],r}\rangle\langle \bm{G}_{[j,i],r}\rangle\, \mathrm{d}r+\sum_{i=1}^k \int_s^t{\mathbb E }\langle \bm G_{i,r}-m_{i,r}\rangle \langle \bm{G}^{(i)}_{[1,k],r}\bm A_k\rangle\, \mathrm{d}r+\frac{\sigma}{Nd}\sum_{i,j=1\atop i\le j}^k\int_s^t {\mathbb E }\langle \bm{G}_{[i,j],r}\bm{G}_{[j,i],r}^\mathfrak{t}\rangle\, \mathrm{d}r\,. 
\end{split}$$ Using the global law in Proposition [Proposition 8](#prop:initial){reference-type="ref" reference="prop:initial"} and [\[eq:usefulrel\]](#eq:usefulrel){reference-type="eqref" reference="eq:usefulrel"}, and taking the limit $d\to \infty$, this implies that for $|\mathrm{Im}\,z_i|\gtrsim 1$ we have $$\label{eq:almthereder} \langle M_{[1,k],t}A_k\rangle- \langle M_{[1,k],s}A_k\rangle=\frac{k}{2}\int_s^t\langle M_{[1,k],r}A_k\rangle\,\mathrm{d}r+\sum_{i,j=1, \atop i<j}^{k-1}\int_s^t\langle M_{[i,j],r}\rangle \langle M_{[j,i],r}\rangle\, \mathrm{d}r.$$ Finally, dividing [\[eq:almthereder\]](#eq:almthereder){reference-type="eqref" reference="eq:almthereder"} by $t-s$ and taking the limit $s\to t$, we conclude the proof of Lemma [Lemma 19](#lem:cancM){reference-type="ref" reference="lem:cancM"}. ◻ *Proof of Lemma [Lemma 15](#lem:redinphi){reference-type="ref" reference="lem:redinphi"}.* The proof of this lemma is very similar to [@A2 Lemma 3.3]. Hence we give the argument only for the case where $k$ is even; if $k$ is odd, the proof is completely analogous. Moreover, for notational simplicity we henceforth drop the time dependence and the precise indices of $G$'s and $A$'s, i.e. write $\mathrm{Im}\,G \equiv \mathrm{Im}\,G_i$, $A \equiv A_j$, $\rho \equiv \rho_i$ and so on.
Then, by application of the general bound $$| \langle B_1 B_2 B_3 B_4 \rangle | \le N \prod_{i=1}^4 \langle |B_i|^2 \rangle^{1/2} \quad \text{for all} \quad B_i \in {\mathbb C}^{N \times N}$$ applied to $B_i = \sqrt{|\mathrm{Im}\,G|} A (\mathrm{Im}\,G A)^{k/2-1} \sqrt{|\mathrm{Im}\,G|}$ and with the aid of [\[eq:Mbound\]](#eq:Mbound){reference-type="eqref" reference="eq:Mbound"}, we find that $$\begin{split} \Phi_{2k} &= \frac{\sqrt{N \hat{\ell}}}{N^{k-1} \, \rho^{2k} \, \langle |A|^2 \rangle^{k} } \big| \big\langle (\mathrm{Im}\,G A)^{2k} - \widehat{{M}}_{[\hat{1},\widehat{2k}]} A \big\rangle \big| \\ &\lesssim \sqrt{N \hat{\ell}} + \frac{\sqrt{N \hat{\ell}}}{N^{k-1} \, \rho^{2k} \, \langle |A|^2 \rangle^{k} } N \big| \big\langle (\mathrm{Im}\,G A)^{k} \big\rangle \big|^2 \\ &\prec \sqrt{N \hat{\ell}} + \frac{\sqrt{N \hat{\ell}}}{N^{k-1} \, \rho^{2k} \, \langle |A|^2 \rangle^{k} } N \left[ N^{k/2-1} \rho^k \langle |A|^2\rangle^{k/2} \left( 1 + \frac{\phi_k}{\sqrt{N \hat{\ell}}} \right) \right]^2 \\ &\lesssim \sqrt{N \hat{\ell}} + \frac{\phi_k^2}{\sqrt{N \hat{\ell}}} \,. \end{split}$$ We remark that, in order to bound $\big\langle (\mathrm{Im}\,G A)^{k} \big\rangle$ in terms of $\phi_k$, we added and subtracted the corresponding $M$-term and used the assumption that $\Phi_k \prec \phi_k$. ◻ 10 A. Adhikari, J. Huang. Dyson Brownian motion for general $\beta$ and potential at the edge. *Probability Theory and Related Fields* **178**, 893--950 (2020). A. Adhikari, B. Landon. Local law and rigidity for unitary Brownian motion. arXiv: [2202.06714](https://arxiv.org/abs/2202.06714) (2022). A. Adhikari, S. Dubova, C. Xu, J. Yin. Eigenstate Thermalization Hypothesis for Generalized Wigner Matrices. arXiv: [2302.00157](https://arxiv.org/abs/2302.00157) (2023). O.H. Ajanki, L. Erdős, T. Krüger. Stability of the matrix Dyson equation and random matrices with correlations. *Probability Theory and Related Fields* **173**, 293--373 (2019). J. Alt, L. Erdős, T. Krüger, D. 
Schröder. Correlated random matrices: Band rigidity and edge universality. *The Annals of Probability* **48**, 963--1001 (2020). L. Benigni, P. Lopatto. Optimal delocalization for generalized Wigner matrices. *Advances in Mathematics* **396**, 108109 (2022). L. Benigni, P. Lopatto. Fluctuations in local Quantum Unique Ergodicity for generalized Wigner matrices. *Communications in Mathematical Physics* **391**, 401--454 (2022). L. Benigni, N. Chen, P. Lopatto, X. Xie. Fluctuations in Quantum Unique Ergodicity at the Spectral Edge. arXiv: [2303.11142](https://arxiv.org/abs/2303.11142) (2023). A. Bloemendal, L. Erdős, A. Knowles, H.-T. Yau, J. Yin. Isotropic local laws for sample covariance and generalized Wigner matrices. *Electronic Journal of Probability* **19**, 33, 1--53 (2014). P. Bourgade. Extreme gaps between eigenvalues of Wigner matrices. *Journal of the European Mathematical Society* **24**, 2823--2873 (2021). P. Bourgade, H. Falconet. Liouville quantum gravity from random matrix dynamics. arXiv: [2206.03020](https://arxiv.org/abs/2206.03029) (2022). S. Brooks, E. Lindenstrauss. Joint quasimodes, positive entropy, and quantum unique ergodicity. *Inventiones Mathematicae* **198**, 219--259 (2014). G. Cipolloni, L. Erdős, D. Schröder. Eigenstate Thermalisation Hypothesis for Wigner matrices. *Communications in Mathematical Physics* **388**, 1005--1048 (2021). G. Cipolloni, L. Erdős, D. Schröder. Functional Central Limit Theorems for Wigner matrices. Accepted to *The Annals of Applied Probability*, arXiv: [2012.13218](https://arxiv.org/abs/2012.13218) (2020). G. Cipolloni, L. Erdős, D. Schröder. Normal fluctuation in quantum ergodicity for Wigner matrices. *The Annals of Probability* **50**, 984--1012 (2022). G. Cipolloni, L. Erdős, D. Schröder. Thermalisation for Wigner matrices. *Journal of Functional Analysis* **282**, 109394 (2022). G. Cipolloni, L. Erdős, D. Schröder. Optimal multi-resolvent local laws for Wigner matrices. 
*Electronic Journal of Probability* **27**, 1--38 (2022). G. Cipolloni, L. Erdős, D. Schröder. Rank-uniform local law for Wigner matrices. *Forum of Mathematics, Sigma* **10**, E96 (2022). G. Cipolloni, L. Erdős, D. Schröder. Mesoscopic central limit theorem for non-Hermitian random matrices. arXiv: [2210.12060](https://arxiv.org/abs/2210.12060) (2022). G. Cipolloni, L. Erdős, J. Henheik, D. Schröder. Optimal Lower Bound on Eigenvector Overlaps for non-Hermitian Random Matrices. arXiv: [2301.03549](https://arxiv.org/abs/2301.03549) (2023). G. Cipolloni, L. Erdős, J. Henheik, O. Kolupaiev. Gaussian fluctuations in the equipartition principle for Wigner matrices. *Forum of Mathematics, Sigma* **11**, E74 (2023). Y. Colin de Verdière. Ergodicité et fonctions propres du laplacien. *Communications in Mathematical Physics* **102**, 497--502 (1985). N. Cook, W. Hachem, J. Najim, D. Renfrew. Non-Hermitian random matrices with a variance profile (I): deterministic equivalents and limiting ESDs. *Electronic Journal of Probability* **23** No. 110, 1--61 (2018). R. Curto, L. Fialkow. The truncated complex $K$-moment problem. *Transactions of the American Mathematical Society* **352**, 2825--2855 (2000). J. M. Deutsch. Quantum statistical mechanics in a closed system. *Physical Review A* **43**, 2046--2049 (1991). B. Eckhardt, S. Fishman, J. Keating, O. Agam, J. Main, K. Müller. Approach to ergodicity in quantum wave functions. *Physical Review E* **52**, 5893 (1995). L. Erdős, B. Schlein, H.-T. Yau. Local semicircle law and complete delocalization for Wigner random matrices. *Communications in Mathematical Physics* **287**, 641--655 (2009). L. Erdős, H.-T. Yau, J. Yin. Rigidity of eigenvalues of generalized Wigner matrices. *Advances in Mathematics* **229**, 1435--1515 (2012). L. Erdős, A. Knowles, H.-T. Yau, J. Yin. The local semicircle law for a general class of random matrices. *Electronic Journal of Probability* **18**, No. 59, 1--58 (2013). L. Erdős, H.-T. Yau. 
*A dynamical approach to random matrix theory*. American Mathematical Society, Vol. 28 (2017). L. Erdős, T. Krüger, D. Schröder. Random matrices with slow correlation decay. *Forum of Mathematics, Sigma* **7**, E8 (2019). L. Erdős. The Matrix Dyson Equation and its applications for random matrices. In *IAS/Park City Mathematics Series* Volume 26, 75--158. Editors: Alexei Borodin, Ivan Corwin, Alice Guionnet (2019) M. Feingold, A. Peres. Distribution of matrix elements of chaotic systems. *Physical Review A* **34**, 591 (1986). R. Holowinsky, K. Soundararajan. Mass equidistribution for Hecke eigenforms. *Annals of Mathematics* **172**, 1517--1528 (2010). J. Huang, B. Landon. Rigidity and a mesoscopic central limit theorem for Dyson Brownian motion for general $\beta$ and potentials. *Probability Theory and Related Fields* **175**, 209--253 (2019). Y. He, A. Knowles. Mesoscopic eigenvalue statistics of Wigner matrices. *The Annals of Applied Probability* **27**, 1510--1550 (2017). A. Knowles, J. Yin. The isotropic semicircle law and deformation of Wigner matrices. *Communications on Pure and Applied Mathematics* **66**, 1663--1749 (2013). A. Knowles, J. Yin. Anisotropic local laws for random matrices. *Probability Theory and Related Fields* **169**, 257--352 (2017). G. Kreweras. Sur les partitions non croisées d'un cycle. *Discrete mathematics* **1**, 333--350 (1972). B. Landon, P. Lopatto, P. Sosoe. Single eigenvalue fluctuations of general Wigner-type matrices. arXiv: [2105.01178](https://arxiv.org/abs/2105.01178) (2021). B. Landon, P. Sosoe. Almost-optimal bulk regularity conditions in the CLT for Wigner matrices. arXiv: [2204.03419](https://arxiv.org/abs/2204.03419) (2022). E. Lindenstrauss. Invariant measures and arithmetic quantum unique ergodicity. *Annals of Mathematics* **163**, 165--219 (2006). Z. Rudnick, P. Sarnak. The behaviour of eigenstates of arithmetic hyperbolic manifolds. *Communications in Mathematical Physics* **161**, 195--213 (1994). A. I. 
S̆nirel'man. Ergodic properties of eigenfunctions. *Uspekhi Matematicheskikh Nauk* **29**, 181--182 (1974). K. Soundararajan. Quantum unique ergodicity for $\mathrm{SL}_2({\mathbb Z})\setminus \mathbb{H}$. *Annals of Mathematics* **172**, 1529--1538 (2010). M. Srednicki. Chaos and quantum thermalization. *Physical Review E* **50**, 888--901 (1994). T. Tao, V. Vu. Random matrices: Universality of local eigenvalue statistics. *Acta Mathematica* **206**, 127--204 (2011). S. Zelditch. Uniform distribution of eigenfunctions on compact hyperbolic surfaces. *Duke Mathematical Journal* **55**, 919--941 (1987). [^1]: Note that $\langle |\mathring{A}|^2 \rangle^{1/2}$ is substantially smaller than $\lVert \mathring{A}\rVert$ for matrices $\mathring{A}$ of low rank; in fact, if $\mbox{rank}(\mathring{A})=1$, then $\lVert \mathring{A}\rVert= \sqrt{N} \langle |\mathring{A}|^2 \rangle^{1/2}$, losing the entire $\sqrt{N}$ factor in [\[eth\]](#eth){reference-type="eqref" reference="eth"} compared with the optimal [\[eth1\]](#eth1){reference-type="eqref" reference="eth1"}. [^2]: We point out that the end of the proof of Theorem 2.2 in the published version of [@A2] contained a small error; a correct and in fact simpler argument was given in the updated arXiv:2203.01861 version of the paper. [^3]: By inspecting our proof, it is easy to see that actually we do not need to assume that the off-diagonal entries of $W$ are identically distributed. We only need that they all have the same second moments, but higher moments can be different. [^4]: Traditionally [@EYY2012; @KnowYin; @BEKYY], local laws did not consider an arbitrary test matrix $A$, but only $A=I$ or special rank one projections, leading to the *isotropic local laws*. General $A$ was included later, e.g. in [@slowcorr]. [^5]: Calligraphic letters like $\mathcal{G}, \mathcal{M}$ indicate that we may consider $\mathrm{Im}\,G$ instead of some resolvents $G$ in the chain.
[^6]: From now on we use the convention that traceless matrices are denoted by $A$, while general deterministic matrices are denoted by $B$. [^7]: *The isotropic bound for $|\langle \bm x, \mathcal{M} \bm y\rangle|$ in [\[eq:MboundISO\]](#eq:MboundISO){reference-type="eqref" reference="eq:MboundISO"} is the same as the norm bound $\| \mathcal{M}\|$.* [^8]: Rigidity asserts that the increasingly ordered eigenvalues $\lambda_i$ are very close to the $i$-th $N$-quantile $\gamma_i$ of the semicircle density $\rho_{\rm sc}$ in the sense $|\lambda_i-\gamma_i|\prec N^{-2/3} [i\wedge (N+1-i)]^{-1/3}$, i.e. each eigenvalue is strongly concentrated around the corresponding quantile essentially on the scale of the local eigenvalue spacing. [^9]: Strictly speaking, we use this Brownian motion only when $\sigma : = {\mathbb E }\chi^2_{\mathrm{od}}$ is real and ${\mathbb E }\chi^2_{\mathrm{d}} = 1+ \sigma$; otherwise we need a small modification, see later in Section [4](#sec:opAedge){reference-type="ref" reference="sec:opAedge"}. [^10]: *We point out that the index $j$ realizing the minimum may change along the time evolution. Additionally, by [\[eq:characteristics\]](#eq:characteristics){reference-type="eqref" reference="eq:characteristics"} and the text below it, we note that if $\min_i N\eta_i\rho_i\ge N^\epsilon$ then $\min_i N\eta_{i,t}\rho_{i,t}\ge N^\epsilon$ for any $0\le t\le T$.* [^11]: *This condition can easily be relaxed to matching up to an error of size $N^{-2}$ as done, e.g., in [@EYbook Theorem 16.1].* [^12]: *Here and in the sequel when we say that such a relation involving $\Phi(t)$ \"holds uniformly\", we mean that it holds uniformly in traceless deterministic matrices $A_i$ and in all spectral parameters satisfying $N\hat{\ell}_t \ge N^\epsilon$.* [^13]: *We remark that $D,\delta,\alpha$ are $N$--independent constants; all the other quantities may depend on $N$.* [^14]: *Note that we use the $\prec$-notation also for purely deterministic quantities.
The reason is that it conveniently absorbs irrelevant $|\log \eta| \lesssim (\log N)$-factors coming from slightly singular integrals, see Footnote [\[log\]](#log){reference-type="ref" reference="log"}.* [^15]: [\[log\]]{#log label="log"} The logarithmic corrections are stemming from the estimate $\int_{\mathbb R }\frac{\rho(x)}{|x-z|} \mathrm{d}x \lesssim 1+\big| \log |\mathrm{Im}\,z| \big|$ (cf. [\[eq:Mdivdiff\]](#eq:Mdivdiff){reference-type="eqref" reference="eq:Mdivdiff"}). [^16]: Alternatively, this can also be obtained using [\[eq:intrepIM\]](#eq:intrepIM){reference-type="eqref" reference="eq:intrepIM"} for $m=2$: The resolvent chain, which is approximated by $\langle \widehat{M}_{[i,j]} \rangle$ contains a $G_j G_i$-factor after cyclicity of the trace. Applying [\[eq:intrepIM\]](#eq:intrepIM){reference-type="eqref" reference="eq:intrepIM"} for $m=2$ to this part of the chain and using a *meta argument* like in Appendix [6.3](#sec:addproofsec4){reference-type="ref" reference="sec:addproofsec4"}, we can also conclude [\[eq:intrepMbasic\]](#eq:intrepMbasic){reference-type="eqref" reference="eq:intrepMbasic"}. [^17]: To be precise, in the integral [\[eq:intrepG-M\]](#eq:intrepG-M){reference-type="eqref" reference="eq:intrepG-M"} we first need to cut-off the regime where $|x| \ge N^{100}$, say, and estimate this contribution by a simple norm bound using that the spectrum of the Wigner matrix is contained in $[-2-\epsilon, 2+ \epsilon]$ with very high probability [@EYY2012]. Such technicality about the irrelevant, very far out $x$-regime will henceforth be ignored. [^18]: Observe that in this normalization, the non-zero entries of $\Delta_V^{(\gamma)}$ and $\Delta_W^{(\gamma)}$ are of order one random variables. [^19]: Here, $p$ is a superscript, not a power.
--- abstract: | In this paper we study a version of (non-Markovian) first passage percolation on graphs, where the transmission time between two connected vertices is non-iid, but increases by a *penalty factor* polynomial in their expected degrees. Based on the exponent of the penalty-polynomial, this makes it increasingly harder to transmit to and from high-degree vertices. This choice is motivated by awareness or time-limitations. For the underlying graph models we choose spatial random graphs that have power-law degree distributions, so that the effect of the penalisation becomes visible: (finite and infinite) Geometric Inhomogeneous Random Graphs, and Scale-Free Percolation. In these spatial models, the connection probability between two vertices depends on their spatial distance and on their expected degrees. We identify the parameter-regimes where the transmission time between two far away vertices $x,y$ is respectively *polynomial* ($\Theta(|x-y|^{\eta_0})$ for some $\eta_0<1$), and *linear* ($\Theta(|x-y|)$) in their Euclidean distance. In this paper we present proofs of lower bounds and the upper bound for the linear regime. These complement the matching upper bounds for the polynomial regime in our companion paper. Together with the companion paper and other work, our results imply that the transmission time between $x,y$ undergoes three phase transitions as the penalty exponent increases. The four phases are: bounded (explosive), at most polylogarithmic, polynomial but sublinear, and linear transmission times. We conjecture universality of this phenomenon across models. author: - "Júlia Komjáthy[^1], John Lapinskas[^2], Johannes Lengler[^3], Ulysse Schaller[^4]" bibliography: - references.bib title: Four universal growth regimes in degree-dependent first passage percolation on spatial random graphs II. --- # Introduction {#sec:intro} First passage percolation (FPP) is a natural way to understand geodesics in random metric spaces.
In this paper, we investigate a natural extension of first passage percolation, where the transmission time of the first passage percolation process across an edge depends on its direct surroundings in the underlying graph, in particular on the degrees of the sending and receiving vertex [@komjathy2020stopping]. We abbreviate this process as $1$-FPP. For the underlying graph models we choose spatial models with power-law degree distributions, i.e., with highly varying degrees, so that the dependence of the transmissions on the local surroundings causes non-negligible effects. The dependence is such that the transmission time from and towards high-degree vertices is slowed down by a polynomial of their (expected) degrees, which makes it harder (but not impossible) to transmit to and from "superspreaders". This new modification of FPP has several interesting features. It reveals **topological features** of the underlying graphs hidden from unmodified versions. Classical FPP tends to strongly depend on the highest-degree vertices and shows the 'explosion' phenomenon for almost all transmission-time distributions, i.e., the process reaches infinitely many vertices in finite time [@komjathy2020explosion], while with $1$-FPP explosion can be stopped by increasing the exponent of the penalty polynomial [@komjathy2020stopping] above a given value depending on the power-law exponent. Here we study exactly those cases, and show that $1$-FPP gives a large number of examples of polynomial intrinsic ball-growth *faster than the dimension* of the underlying space. This is rare in spatial graph models, and shows that the new modification cannot simply be mapped and studied as a simpler process on a spatial graph with different parameters. Finally, 1-FPP allows for **more realistic modelling of real phenomena**, because it can model the *cost* or *transmission time* of an edge in a way that takes into account the direct surroundings in the graph.
This opens up various interesting applications [@bonaventura2014characteristic; @ding2018centrality; @pu2015epidemic]. *Degree-dependent first passage percolation, 1-FPP, briefly.* We model the transmission time through the edge $e=xy$ between vertices $x,y$ as the product of an independent and identically distributed (iid) random factor $L_{xy}$ and a factor $(W_xW_y)^{\mu}$ for $\mu \geq 0$, where $W_x$ and $W_y$ are (up to a constant factor) the expected degrees of $x$ and $y$, which we call vertex-weights. Starting from some initial vertex at time $0$, the process then spreads through the underlying graph using the usual rules of first passage percolation; the transmission time along a path is the sum of the transmission times of its edges, and the transmission time between two vertices $x,y$ is the minimum transmission time over all paths connecting them. For example, when the $L_{xy}$ are iid exponential random variables, the penalty factor makes the transmission time across the edge $xy$ exponential with mean $(W_xW_y)^\mu$. We shall work with general distributions for $L$, and show that the phenomena we observe are universal. The 1-FPP model is inspired by degree-dependent bond percolation [@hooyberghs2010biased] and by topology-biased random walks [@bonaventura2014characteristic; @lee2009centrality; @zlatic2010topologically], in which the transition probabilities from a vertex depend on the degrees of its neighbours. Those works also assume a polynomial dependence on the degrees. *Motivation.* The choice of degree-dependent transmission times is not just motivated by the related work mentioned above, but comes directly from applications. Actual contacts do not scale linearly with network connectivity due to **limited time or awareness** [@feldman2017high], and this type of penalty has been used to model the **sublinear impact of superspreaders** as a function of contacts [@giuraniuc2006criticality; @karsai2006nonequilibrium; @miritello2013time].
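The mechanics just described can be made concrete with a minimal sketch (ours, not from the paper; the graph format, the choice $L\sim\mathrm{Exp}(1)$, and the helper name `cost_distance` are illustrative assumptions). It computes the $1$-FPP transmission time between two vertices by running Dijkstra's algorithm on the penalised edge costs $L_{xy}(W_xW_y)^{\mu}$:

```python
import heapq
import random

def cost_distance(adj, weights, mu, source, target, rng):
    """Transmission time between source and target under 1-FPP:
    each edge xy costs L_xy * (W_x * W_y)**mu, with L_xy drawn iid
    (here from Exp(1) as an illustrative choice), and the distance is
    the minimum total cost over paths (computed via Dijkstra)."""
    # One symmetric cost per undirected edge, keyed by frozenset.
    cost = {frozenset((x, y)): rng.expovariate(1.0)
            * (weights[x] * weights[y]) ** mu
            for x in adj for y in adj[x]}
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, x = heapq.heappop(heap)
        if x == target:
            return d
        if d > dist.get(x, float("inf")):
            continue  # stale heap entry
        for y in adj[x]:
            nd = d + cost[frozenset((x, y))]
            if nd < dist.get(y, float("inf")):
                dist[y] = nd
                heapq.heappush(heap, (nd, y))
    return float("inf")
```

With constant vertex-weights $w$, every edge cost, and hence the cost-distance, is rescaled by $w^{2\mu}$ relative to $\mu=0$ -- the multiplicative role of the penalty in its simplest form.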
$1$-FPP is interesting also from the applied perspective, since it shows that the transmission of diseases can follow different regimes *in the same social network*, depending on how the infection rate across a single connection decreases with the degrees of the persons involved. In real communication networks, higher degrees do lead to less communication per neighbour, but less than proportionally [@feldman2017high], which corresponds to $0<\mu<1$. This leads to another interpretation of $1$-FPP: the spread of information in two-way communication networks like chats or phone calls. Consider a network in which every node is available for communication with a neighbouring node at random points in time, but communication can only happen if both partners are available. If availability towards each neighbour is independent of the degrees, this leads to the classical iid FPP, i.e., $\mu=0$. If each node has a fixed time-budget of availability that it needs to split among its neighbours, then the availability becomes inversely proportional to the degree. Without coordination, the expected waiting time for communication between two neighbouring nodes is proportional to the product of the degrees, which yields $1$-FPP with $\mu=1$. *The results, heuristically.* The underlying graph models we consider -- (finite and infinite) Geometric Inhomogeneous Random Graphs, and Scale-Free Percolation -- have power-law degree distributions [@bringmann2019geometric; @deijfen2013scale], i.e., $\mathbb{P}(\deg(v)= x) \propto x^{-\tau}$, with $\tau$ being the power-law exponent (we allow slowly varying correction terms). We set the power-law exponent to be $\tau\in(2,3)$, i.e., the degree-distribution has finite mean but infinite variance. This causes the graph distance ball $B_G(v, r)=\{u: d_G(u,v)\le r\}$ to grow *doubly-exponentially* [@deijfen2013scale; @van2017explosion] and classical FPP to show *explosion* [@komjathy2020explosion].
We then prove that when the exponent $\mu$ of the penalty-polynomial of $1$-FPP is in a given interval $(\mu_1, \mu_2)$ (depending among others on the power-law exponent $\tau$), the transmission time between two nodes $x$ and $y$ is polynomial in the Euclidean distance $|x-y|$, with exponent strictly less than one. Here we present the lower bound and in the companion paper [@komjathy2022one1] we present the matching upper bound up to an $o(1)$ term in the exponent, so combined we obtain upper and lower bounds of the form $|x-y|^{\eta_0 \pm o(1)}$ for $\eta_0<1$. In the accompanying paper [@komjathy2022one1] we also show that whenever $\mu<\mu_1$, transmission times are at most polylogarithmic in $|x-y|$, hence the two papers together prove that distances of order $|x-y|^{\eta_0 \pm o(1)}$ with $\eta_0<1$ do occur in the parameter regime $\mu\in(\mu_1, \mu_2)$, but not outside this interval: When $\mu>\mu_2$, i.e., when the exponent of the penalty-polynomial of $1$-FPP is larger than $\mu_2$, then we prove that the transmission time between two nodes $x$ and $y$ is between $\kappa_1 |x-y|$ and $\kappa_2 |x-y|$ for two constants $\kappa_1,\kappa_2>0$, with the lower bound valid in all dimensions and the upper bound valid in dimension at least $2$. In dimension $1$ we prove an upper bound $|x-y|^{1+o(1)}$ in the accompanying paper [@komjathy2022one1]. Nevertheless, (at least) linear distances in dimension $1$ are somewhat surprising, since generally long-range spatial models either do not percolate in dimension $1$ or show shorter distances [@biskup2004scaling; @trapman2010growth; @deijfen2013scale; @schulman1983long]. Together with [@komjathy2022one1] and previous work [@komjathy2020stopping], our results show that transmission times undergo three phase transitions between different regimes as the exponent $\mu$ of the penalty factor of transmission times increases.
We thus observe four different *universality classes*: an *explosive regime* where the transmission time between two nodes $x$ and $y$ converges to a limiting distribution which is independent of the geometric distance; a *polylogarithmic regime* where the transmission time grows with distance, but at most as $(\log |x-y|)^{O(1)}$; a *polynomial sub-linear regime* where the transmission time is polynomial, $|x-y|^{\eta_0}$ with exponent $\eta_0$ less than 1; and a *linear regime* where the transmission time is proportional to $|x-y|$. None of these regimes is restricted to boundary cases in the parameter space, i.e., each occurs throughout a proper interval of the parameter $\mu$. To the best of our knowledge, there is only one other model, long-range first passage percolation (LRFPP) by Chatterjee and Dey [@chatterjee2016multiple], for which a similarly rich set of regimes in proper subspaces of the parameter space has been proven. This model is rather different, since it uses the *complete graph* on $\mathbb{Z}^d$ as underlying network, and a distance-dependent transmission time. We refer the reader to the discussion in [@komjathy2022one1] for details, and also for a thorough discussion of other related literature. Arguably, $1$-FPP with its variety of regimes reflects spreading processes on social networks better than simpler models (e.g., iid FPP), since the speed of dissemination in social networks can range from extremely fast (e.g., when news or memes spread through social media) to rather slow, geometry-dominated patterns (e.g., as diseases spread through regions and countries), with only the dynamics of the process changing but not (much) the underlying network. We leave the investigation of any phase-boundary cases in the parameter space, i.e., where $\mu$ takes the value that separates two phases of growth, for future work.
## Graph Models {#sec:graph_model} We will consider undirected, simple graphs with vertex set $\mathcal{V}\subseteq \mathbb{R}^d$. We use standard graph notation, which we summarize along with other common terminology in Section [1.4](#sec:notation){reference-type="ref" reference="sec:notation"}. We consider three random graph models: *Scale-Free Percolation* (SFP), *Infinite Geometric Inhomogeneous Random Graphs* (IGIRG)[^5], and (finite) *Geometric Inhomogeneous Random Graphs* (GIRG). Since the latter model contains *Hyperbolic Random Graphs* (HypRG) as special case, our results also hold for HypRG. The main difference between the models is the vertex set $\mathcal V$. For SFP, we use $\mathcal V := \mathbb Z^d$, where $d \in \mathbb{N}$. For IGIRG, $\mathcal V$ is given by a Poisson point process on $\mathbb R^d$ of intensity one with respect to the Lebesgue measure. The formal definition is: **Definition 1** (SFP, IGIRG, GIRG). *Let $d\in \mathbb{N}$, $\tau >2$, $\alpha\in(1,\infty)$, and $\overline{c}>\underline{c}>0$. Let $\ell:[1,\infty)\rightarrow(0,\infty)$ be a slowly varying function, and let $h:\mathbb{R}^d\times[1,\infty)\times[1,\infty)\rightarrow[0,1]$ be a function satisfying $$\begin{aligned} \label{eq:connection_prob} \underline{c}\cdot\min\left\{1, \dfrac{w_1w_2}{|x|^d}\right\}^{\alpha} \le h(x,w_1,w_2)\le \overline{c}\cdot\min\left\{1, \dfrac{w_1w_2}{|x|^d}\right\}^{\alpha}.\end{aligned}$$ We call $d$ the *dimension*, $\tau$ the *power-law exponent*, $\alpha$ the *long-range parameter*, and $h$ the *connection probability*.* *For SFP, set $\mathcal V := \mathbb Z^d$, for IGIRG, let $\mathcal V$ be given by a Poisson point process on $\mathbb R^d$ of intensity one.[^6] For each $x\in\mathcal V$, we draw a *weight* $W_x$ independently from a probability distribution on $[1, \infty)$ satisfying $$\label{eq:power_law} F_W(w)=\mathbb{P}( W\le w)= 1-\frac{\ell(w)}{w^{\tau-1}}.$$ We denote the weight vector $(W_x)_{x\in \mathcal V}$ by $\mathcal{W}$. 
Conditioned on $\mathcal V$ and $\mathcal W$, every edge $xy$ is present independently with probability $h(x-y,W_x,W_y)$.* *A finite GIRG is obtained by restricting an IGIRG to a cube $Q_n$ of volume $n$ centred at $0$.* For finite GIRG models we are interested in the behaviour as $n\to \infty$. Definition [Definition 1](#def:girg){reference-type="ref" reference="def:girg"} leads to a slightly less general model than those e.g. in [@bringmann2019geometric] and [@komjathy2020stopping]. There, the original definition had a different scaling of the geometric space vs connection probabilities. However, the resulting graphs are identical in distribution after rescaling, see [@komjathy2020stopping] for a comparison. Finally, [@bringmann2019geometric] considered the torus topology on the cube, identifying "left" and "right" boundaries, but this does not make a difference for our results. Next we define $1$-dependent FPP on these graphs. **Definition 2** (1-dependent first passage percolation (1-FPP)). *Let $G(\mathcal V, \mathcal E)$ be a graph and to each vertex $v\in \mathcal V$ associate a vertex-weight $W_v$. For each edge $xy\in \mathcal E$, draw an iid random variable $L_{xy}$ from distribution $L$, and set the *transmission cost* of an edge $xy$ as $$\label{eq:cost} \mathcal{C}(xy):=L_{xy}(W_xW_y)^{\mu}$$ for a fixed parameter $\mu>0$ called the *penalty strength*. Setting $\mathcal{C}(xy)$ defines a cost-distance $d_{\mathcal C}$ between the vertices (see [\[eq:cost_distance\]](#eq:cost_distance){reference-type="eqref" reference="eq:cost_distance"} below), that we call $1$-dependent first passage percolation. 
We define the *cost-ball* $B_r^{\mathcal C}(x)$ of radius $r\ge 0$ around a vertex $x$ to be the set of all vertices to which there is a path of cost at most $r$ from $x$.* We usually assume that the cumulative distribution function (cdf) $F_L:[0,\infty)\rightarrow[0,1]$ of $L$ satisfies the following (exceptions to this assumption will be made explicit when applicable): **Assumption 3**. There exist constants $t_0,\,c_1,\,c_2,\,\beta>0$ such that $$\begin{aligned} \label{eq:F_L-condition} c_1t^{\beta}\le F_L(t)\le c_2t^{\beta}\mbox{ for all }t\in[0,t_0].\end{aligned}$$ Without much effort, Assumption [Assumption 3](#assu:L){reference-type="ref" reference="assu:L"} could be relaxed to $\lim_{x\to0} \log F_L(x)/\log x=\beta$, but having the stronger form in [\[eq:F_L-condition\]](#eq:F_L-condition){reference-type="eqref" reference="eq:F_L-condition"} increases the readability of the paper. For the same reason, we also exclude extensions to $\alpha = \infty$ in Definition [Definition 1](#def:girg){reference-type="ref" reference="def:girg"} and $\beta = \infty$ in [\[eq:F_L-condition\]](#eq:F_L-condition){reference-type="eqref" reference="eq:F_L-condition"} from the main body of the paper, and discuss those separately in Section [1.2.1](#sec:threshold){reference-type="ref" reference="sec:threshold"}. We will call the set of parameters $$\label{eq:parameters} \textnormal{\texttt{par}}\xspace:= \{d, \tau, \alpha, \mu, \beta, \underline{c}, \overline{c}, c_1, c_2, t_0\}$$ the *model parameters*. We generally take the standpoint that $\mu$ is the easiest parameter to change, e.g. by adjusting the behaviour of individuals corresponding to high-degree vertices: increasing $\mu$ means gradually slowing down the spreading process around high-degree vertices. We also phrase our results from this point of view. ## Results {#sec:results} We now present two phases (polynomial but sublinear, and linear) of the transmission times between two far away nodes.
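As a concrete (and purely illustrative) companion to Definition [Definition 1](#def:girg){reference-type="ref" reference="def:girg"}, the following sketch samples a finite window of SFP, taking $\ell\equiv 1$ in [\[eq:power_law\]](#eq:power_law){reference-type="eqref" reference="eq:power_law"} and the connection probability $h$ equal to $\min\{1, w_1w_2/|x|^d\}^{\alpha}$, i.e., with $\underline{c}=\overline{c}=1$ in [\[eq:connection_prob\]](#eq:connection_prob){reference-type="eqref" reference="eq:connection_prob"}; the function name and interface are our own:

```python
import itertools
import random

def sample_sfp(side, d, tau, alpha, rng):
    """Sample a finite window of Scale-Free Percolation: vertices are the
    lattice points of {0,...,side-1}^d, vertex-weights follow the Pareto
    law P(W > w) = w**(-(tau-1)) on [1, infinity) (i.e. ell = 1), and an
    edge xy is present with probability min(1, W_x W_y / |x-y|^d)**alpha."""
    vertices = list(itertools.product(range(side), repeat=d))
    # Inverse-transform sampling: if U is uniform on (0,1], then
    # U**(-1/(tau-1)) has the desired Pareto tail on [1, infinity).
    weights = {v: (1.0 - rng.random()) ** (-1.0 / (tau - 1))
               for v in vertices}
    edges = []
    for x, y in itertools.combinations(vertices, 2):
        eucl = sum((a - b) ** 2 for a, b in zip(x, y)) ** 0.5
        p = min(1.0, weights[x] * weights[y] / eucl ** d) ** alpha
        if rng.random() < p:
            edges.append((x, y))
    return weights, edges
```

Note that lattice neighbours at distance $1$ are always connected in this sketch, since all weights are at least $1$; the long-range behaviour is controlled by $\alpha$.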
Without loss of generality we fix one of the endpoints as $0\in \mathbb{R}^d$. For IGIRG and GIRG, this means that we will condition on the vertex set $\mathcal{V}$ containing $0$, i.e., we consider the resulting Palm distribution. Recall the power-law exponent $\tau$ of vertex-weights from [\[eq:power_law\]](#eq:power_law){reference-type="eqref" reference="eq:power_law"}, the long-range parameter $\alpha$ from the connection probability in [\[eq:connection_prob\]](#eq:connection_prob){reference-type="eqref" reference="eq:connection_prob"}, and $\beta$ of the transmission times from Assumption [Assumption 3](#assu:L){reference-type="ref" reference="assu:L"}. We assume $\tau\in(2,3)$; this ensures that the degree distribution has finite mean but infinite variance, and that graph distances grow doubly-logarithmically [@bringmann2016average]. In the companion paper [@komjathy2022one1] and in [@komjathy2020stopping] we show that cost-distances are at most polylogarithmic if $\alpha <2$ or if $\mu < (3-\tau)/\beta$. Thus here we only consider the complementary case (neglecting phase boundaries) that $\alpha > 2$ and $\mu > (3-\tau)/\beta$. We first define the two values that will separate the phases of growth as $\mu$ changes: $$\begin{aligned} \label{eq:mu_pol_log} \mu_{\log}:=\frac{3-\tau}{\beta}, \quad \mu_{\mathrm{pol}}:= \frac1d+\frac{3-\tau}{\min\{\beta, d(\alpha-2)\}}= \max\{\mu_{\mathrm{\log}} + \tfrac1d, \mu_{\mathrm{pol},\alpha} \}\end{aligned}$$ with $\mu_{\mathrm{pol},\alpha}:=\tfrac{1}{d}+\tfrac{3-\tau}{d(\alpha-2)}=\tfrac{\alpha-(\tau-1)}{d(\alpha-2)}$. These values are thus all positive and finite when $\tau\in(2,3)$ and $\beta\in(0,\infty)$, as guaranteed by Assumption [Assumption 3](#assu:L){reference-type="ref" reference="assu:L"}. When $\alpha>2$, $\mu_{\mathrm{pol},\alpha}$ is above $1/d$ as well.
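The closed form of $\mu_{\mathrm{pol},\alpha}$ stated above, as well as its limiting value as $\alpha\to\infty$ (used in Section [1.2.1](#sec:threshold){reference-type="ref" reference="sec:threshold"} below), follow from a one-line computation:

```latex
\mu_{\mathrm{pol},\alpha}
= \frac{1}{d}+\frac{3-\tau}{d(\alpha-2)}
= \frac{(\alpha-2)+(3-\tau)}{d(\alpha-2)}
= \frac{\alpha-(\tau-1)}{d(\alpha-2)}
\;\xrightarrow{\;\alpha\to\infty\;}\; \frac{1}{d},
```

where the first expression also shows $\mu_{\mathrm{pol},\alpha}>1/d$ for every finite $\alpha>2$, since $3-\tau>0$ for $\tau\in(2,3)$.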
The values are also well-defined for $\tau=3$, see Theorem [\[thm:linear_polynomial_lower_bound\]](#thm:linear_polynomial_lower_bound){reference-type="ref" reference="thm:linear_polynomial_lower_bound"} below about results with $\tau\ge 3$. Then we define the polynomial growth exponent. For $\alpha>2, \tau\in(2,3)$, for all $\mu > \mu_{\log}$, let $$\begin{aligned} \label{eq:eta_0} \eta_0 = \eta_0(d, \alpha,\tau,\beta,\mu) := \begin{cases} 1 & \mbox{ if $\mu>\mu_{\mathrm{pol}}$,}\\ \min\left\{d(\mu-\mu_{\log}), \mu/\mu_{\mathrm{pol},\alpha}\right\} & \mbox{ if $\mu\le\mu_{\mathrm{pol}}$,} \end{cases}\end{aligned}$$ and note that $\eta_0>0$ for all $\mu>\mu_{\log}$, and $\eta_0<1$ exactly when $\mu< \mu_{\mathrm{pol}}$ by [\[eq:mu_pol_log\]](#eq:mu_pol_log){reference-type="eqref" reference="eq:mu_pol_log"}. We present special cases when $\alpha = \infty$ or $\beta=\infty$ and extensions to $\tau\ge3$ in Section [1.2.1](#sec:threshold){reference-type="ref" reference="sec:threshold"} below. **Theorem 4** (Polynomial Lower Bound). *Consider $1$-FPP on IGIRG, GIRG, or SFP of Definition [Definition 1](#def:girg){reference-type="ref" reference="def:girg"} satisfying the assumptions given in [\[eq:power_law\]](#eq:power_law){reference-type="eqref" reference="eq:power_law"}--[\[eq:F_L-condition\]](#eq:F_L-condition){reference-type="eqref" reference="eq:F_L-condition"}. Assume that $\alpha>2$, $\tau\in(2,3)$, and $\mu>\mu_{\log}$. Then for any $\varepsilon>0$ almost surely there exists $r >0$ (independent of $n$ in case of finite GIRG) such that $$\begin{aligned} \mbox{for all } x\in \mathcal{V}\mbox{ with } |x|\ge r: d_{\mathcal{C}}(0,x) \ge |x|^{\eta_0-\varepsilon}. \end{aligned}$$* Note that $r$ is random, so it depends on the instance of IGIRG. 
For GIRG, "independent of $n$" means that for every $q >0$ there is $r_0 = r_0(q)$ independent of $n$ such that we can satisfy Theorem [Theorem 4](#thm:polynomial_regime){reference-type="ref" reference="thm:polynomial_regime"} with an $r$ such that $\mathbb{P}(r > r_0) < q$.[^7] Note that the lower bound holds simultaneously for all vertices $x$ at distance at least $r$ from $0$. This means that we also obtain a geometric bound on the location of the cost-ball around $0$. However, the matching upper bounds in [@komjathy2022one1] are not uniform over all $x$ with $|x|=r$, so we can only bound the cost-ball in one direction. We rephrase Theorem [Theorem 4](#thm:polynomial_regime){reference-type="ref" reference="thm:polynomial_regime"} in terms of intrinsic ball growth. **Corollary 5**. *Consider $1$-FPP on IGIRG, GIRG, or SFP of Definition [Definition 1](#def:girg){reference-type="ref" reference="def:girg"} satisfying the assumptions given in [\[eq:power_law\]](#eq:power_law){reference-type="eqref" reference="eq:power_law"}--[\[eq:F_L-condition\]](#eq:F_L-condition){reference-type="eqref" reference="eq:F_L-condition"} and with $0\in \mathcal{V}$. Assume that $\alpha>2$, $\tau\in(2,3)$, and $\mu>\mu_{\log}$. Then for all $\varepsilon>0$, almost surely there exists $r_0$ such that for all $r\ge r_0$,* (a) *$\mathcal{B}_r^{\mathcal{C}}(0) \subseteq \{x\in \mathbb{R}^d : |x| \le r^{1/\eta_0 +\varepsilon}\}$, and* (b) *$|\mathcal{B}_r^{\mathcal{C}}(0)| \le r^{d/\eta_0 +\varepsilon}$.* Part (a) of Corollary [Corollary 5](#cor:ball-growth){reference-type="ref" reference="cor:ball-growth"} rephrases Theorem [Theorem 4](#thm:polynomial_regime){reference-type="ref" reference="thm:polynomial_regime"}, and (b) follows from (a) since the number of vertices in a (Euclidean) ball of radius $r^{1/\eta_0 +\varepsilon}$ around $0$ concentrates around the volume of this ball (uniformly for all $r\ge r_0$), both for IGIRG and SFP.
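The case distinctions defining $\mu_{\log}$, $\mu_{\mathrm{pol}}$ and $\eta_0$ in [\[eq:mu_pol_log\]](#eq:mu_pol_log){reference-type="eqref" reference="eq:mu_pol_log"}--[\[eq:eta_0\]](#eq:eta_0){reference-type="eqref" reference="eq:eta_0"} can be bundled into a small evaluator (an illustrative helper of ours, not part of the paper):

```python
def eta_0(d, alpha, tau, beta, mu):
    """Growth exponent eta_0 of eq. (eta_0), valid for alpha > 2,
    tau in (2,3), beta in (0,inf) and mu > mu_log = (3-tau)/beta."""
    mu_log = (3.0 - tau) / beta
    mu_pol_alpha = 1.0 / d + (3.0 - tau) / (d * (alpha - 2.0))
    mu_pol = 1.0 / d + (3.0 - tau) / min(beta, d * (alpha - 2.0))
    assert mu > mu_log, "eta_0 is only considered for mu > mu_log"
    if mu > mu_pol:
        return 1.0  # linear regime
    return min(d * (mu - mu_log), mu / mu_pol_alpha)  # sublinear regime
```

For instance, with $d=1$, $\alpha=3$, $\tau=2.5$, $\beta=1$ one gets $\mu_{\log}=0.5$ and $\mu_{\mathrm{pol}}=1.5$, so $\mu=1$ yields $\eta_0=0.5$, while any $\mu>1.5$ gives $\eta_0=1$.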
To provide context, we cite the corresponding upper bound from [@komjathy2022one1]. For $\tau \in (2,3)$, for all edge-densities, the infinite component exists, is unique, and has positive density, but not necessarily density one.[^8] Hence for an upper bound we need to condition on $0$ and $x$ both being in the infinite component. **Theorem 6** (Polynomial Upper Bound [@komjathy2022one1]). *Consider $1$-FPP on IGIRG or SFP of Definition [Definition 1](#def:girg){reference-type="ref" reference="def:girg"} satisfying the assumptions given in [\[eq:power_law\]](#eq:power_law){reference-type="eqref" reference="eq:power_law"}--[\[eq:F_L-condition\]](#eq:F_L-condition){reference-type="eqref" reference="eq:F_L-condition"} with $0\in\mathcal{V}$. Assume that $\alpha>2$, $\tau\in(2,3)$, and $\mu>\mu_{\log}$. Let $\mathcal{C}_\infty$ be the infinite component. Then for any $\varepsilon>0$, $$\begin{aligned} \label{eq:polynomial-upper} \lim_{|x|\rightarrow\infty} \mathbb{P}\left(d_{\mathcal{C}}(0,x)\le |x|^{\eta_0+\varepsilon} \mid 0, x\in \mathcal{C}_\infty\right)=1. \end{aligned}$$* In [@komjathy2022one1] we also prove a corresponding theorem for finite GIRGs that we omit here. Observe that Theorem [Theorem 6](#thm:polynomial-upper){reference-type="ref" reference="thm:polynomial-upper"} does not hold *simultaneously* for all vertices at a certain norm, rather, the convergence holds in probability. The reason for this is that we did not make any assumptions on the *tail-behaviour* of the distribution of $L$, and hence we cannot exclude (clusters of) vertices with all edges having very large $L$ values so that these are reached much later, violating the upper bound in [\[eq:polynomial-upper\]](#eq:polynomial-upper){reference-type="eqref" reference="eq:polynomial-upper"}. 
Hence Theorem [Theorem 6](#thm:polynomial-upper){reference-type="ref" reference="thm:polynomial-upper"} is not strong enough to provide an analogous lower bound to Corollary [Corollary 5](#cor:ball-growth){reference-type="ref" reference="cor:ball-growth"}(a), since for that we would need a statement over *all* vertices in distance at most $r$. Yet, even though some vertices at norm $r$ might have long cost-distance to $0$, linearly many of them do satisfy $d_{\mathcal{C}}(0,x)\le |x|^{\eta_0+\varepsilon}$, so we still obtain the following corollary when combining Theorems [Theorem 4](#thm:polynomial_regime){reference-type="ref" reference="thm:polynomial_regime"} and [Theorem 6](#thm:polynomial-upper){reference-type="ref" reference="thm:polynomial-upper"}: **Corollary 7**. *Consider $1$-FPP on IGIRG or SFP of Definition [Definition 1](#def:girg){reference-type="ref" reference="def:girg"} satisfying the assumptions given in [\[eq:power_law\]](#eq:power_law){reference-type="eqref" reference="eq:power_law"}--[\[eq:F_L-condition\]](#eq:F_L-condition){reference-type="eqref" reference="eq:F_L-condition"} with $0\in\mathcal{V}$. Assume that $\alpha>2$, $\tau\in(2,3)$, and $\mu>\mu_{\log}$. Let $\mathcal{C}_\infty$ be the infinite component. Then for any $\varepsilon>0$, $$\begin{aligned} \lim_{|x|\rightarrow\infty} \mathbb{P}\left( \left|\frac{\log d_{\mathcal{C}}(0,x)}{\log |x|} - \eta_0\right| \le \varepsilon\mid 0,x \in \mathcal{C}_\infty \right)&=1, \label{eq:part-a-earlier}\\ \lim_{r\rightarrow\infty} \mathbb{P}\left(\left|\frac{\log |\mathcal{B}_r^{\mathcal{C}}(0)|}{\log r}- \frac{d}{\eta_0}\right| \le \varepsilon\mid 0\in \mathcal{C}_\infty \right)&=1. 
\label{eq:part-b-earlier}\end{aligned}$$* Note that [\[eq:part-b-earlier\]](#eq:part-b-earlier){reference-type="eqref" reference="eq:part-b-earlier"} follows immediately from [\[eq:part-a-earlier\]](#eq:part-a-earlier){reference-type="eqref" reference="eq:part-a-earlier"} because for the lower bound on $|\mathcal{B}_r^{\mathcal{C}}(0)|$ it suffices that a constant fraction of vertices at distance at most $r$ have cost-distance at most $r^{\eta_0+\varepsilon}$, which is implied by (the upper bound in) [\[eq:part-a-earlier\]](#eq:part-a-earlier){reference-type="eqref" reference="eq:part-a-earlier"}. Hence in [\[eq:part-b-earlier\]](#eq:part-b-earlier){reference-type="eqref" reference="eq:part-b-earlier"} we indeed obtain the absolute value inside the probability sign. Our next result refines Theorem [Theorem 4](#thm:polynomial_regime){reference-type="ref" reference="thm:polynomial_regime"} and [\[eq:part-a-earlier\]](#eq:part-a-earlier){reference-type="eqref" reference="eq:part-a-earlier"} in Corollary [Corollary 7](#cor:polynomial-total){reference-type="ref" reference="cor:polynomial-total"} when $\eta_0=1$ in [\[eq:eta_0\]](#eq:eta_0){reference-type="eqref" reference="eq:eta_0"}. I.e., when $\mu > \mu_{\mathrm{pol}}$, we can sharpen both upper and lower bounds if the dimension is at least $2$: theoremLinearRegime [\[thm:linear_regime\]]{#thm:linear_regime label="thm:linear_regime"} Consider $1$-FPP on IGIRG or SFP of Definition [Definition 1](#def:girg){reference-type="ref" reference="def:girg"} satisfying the assumptions given in [\[eq:power_law\]](#eq:power_law){reference-type="eqref" reference="eq:power_law"}--[\[eq:F_L-condition\]](#eq:F_L-condition){reference-type="eqref" reference="eq:F_L-condition"} with $0\in\mathcal{V}$. Assume that $\alpha>2$, $\tau\in(2,3)$, $\mu>\mu_{\mathrm{pol}}$, and additionally $d\ge 2$. Let $\mathcal{C}_\infty$ be the infinite component.
Then there exist constants $\kappa_1,\kappa_2>0$ depending only on the model parameters such that $$\begin{aligned} \label{eq:linear_regime} \lim_{|x|\rightarrow\infty} \mathbb{P}\left(\kappa_1 |x| < d_{\mathcal{C}}(0,x)<\kappa_2 |x| \ \mid \ \textnormal{$0,x \in \mathcal{C}_{\infty}$}\right)=1. \end{aligned}$$ The lower bound is actually valid in a more general setting, see Theorem [\[thm:linear_polynomial_lower_bound\]](#thm:linear_polynomial_lower_bound){reference-type="ref" reference="thm:linear_polynomial_lower_bound"} below. Our proof of the lower bound is a generalisation of ideas from Berger [@berger2004lower] to the edge-weighted one-dependent setting, and indeed we recover his result on graph-distances in Long-Range Percolation and the extension to Scale-Free Percolation in [@deprez2015inhomogeneous] when we set $\alpha>2,\tau>3,\mu=0, \beta=\infty$. However, we give a proof that avoids Kingman's subadditive ergodic theorem that both papers [@berger2004lower; @deprez2015inhomogeneous] use. The statements of Theorem [Theorem 4](#thm:polynomial_regime){reference-type="ref" reference="thm:polynomial_regime"} and Corollary [Corollary 5](#cor:ball-growth){reference-type="ref" reference="cor:ball-growth"} remain valid in finite-sized GIRGs, since finite GIRGs are obtained as subgraphs of IGIRG, and hence distances can only increase in GIRG versus the surrounding infinite model. For the upper bound in Theorem [\[thm:linear_regime\]](#thm:linear_regime){reference-type="ref" reference="thm:linear_regime"}, the extension to finite GIRGs requires a proof: theoremFiniteGraph [\[thm:finite_graph\]]{#thm:finite_graph label="thm:finite_graph"} Consider GIRG of Definition [Definition 1](#def:girg){reference-type="ref" reference="def:girg"} satisfying the assumptions given in [\[eq:power_law\]](#eq:power_law){reference-type="eqref" reference="eq:power_law"}--[\[eq:F_L-condition\]](#eq:F_L-condition){reference-type="eqref" reference="eq:F_L-condition"}. 
Assume that $\alpha>2$, $\tau\in(2,3)$, $\mu>\mu_{\mathrm{pol}}$, and additionally $d\ge 2$. Let $\mathcal{C}_{\max}^{(n)}$ be the largest component in $Q_n$. Let $u_n,v_n$ be two vertices chosen uniformly at random from $\mathcal{C}_{\max}^{(n)}$. Then there exist constants $\kappa_1,\kappa_2>0$ depending only on the model parameters such that $$\begin{aligned} \label{eq:finite-linear-regime} \lim_{n\to \infty}\mathbb{P}\left(\kappa_1 |u_n-v_n| < d_{\mathcal{C}}(u_n,v_n)<\kappa_2 |u_n-v_n| \right)=1. \end{aligned}$$ Next we present extensions, in particular for the $\tau>3, \alpha>2$ case where we have very mild conditions on the distribution of $L$, see Corollary [Corollary 9](#cor:classical){reference-type="ref" reference="cor:classical"} below. ### Limit Cases and Extensions {#sec:threshold} The results above naturally extend to cases/models that may informally be described as $\alpha = \infty$ or $\beta = \infty$, and to some extent to $\tau\ge3$ as well. We start with $\alpha = \infty$. This means that we replace the condition [\[eq:connection_prob\]](#eq:connection_prob){reference-type="eqref" reference="eq:connection_prob"} on the connection probability $h(\cdot)$ by $$\begin{aligned} \label{eq:alpha_infty} h(x,w_1,w_2) \begin{cases} \ = 0,\quad & \text{if } \tfrac{w_1w_2}{|x|^d} < c', \\ \ \ge \underline{c}\quad & \text{if }\tfrac{w_1w_2}{|x|^d} \geq 1, \end{cases} \end{aligned}$$ for some constants $\underline c, c' \in(0,1]$. We use the bound $\tfrac{w_1w_2}{|x|^d} \geq 1$ in the second row for convenience, as it allows us to write the proofs for different cases in a consistent way, but it could easily be replaced by $\tfrac{w_1w_2}{|x|^d} \geq c''$ for any other constant $c''\ge c'$. The connectivity function $h$ in [\[eq:alpha_infty\]](#eq:alpha_infty){reference-type="eqref" reference="eq:alpha_infty"} covers the so-called threshold regime for *hyperbolic random graphs* when we also set $d=1$, see [@bringmann2019geometric Theorem 2.3]. 
In this case, when $\tau\in(2,3)$, we extend the definitions [\[eq:mu_pol_log\]](#eq:mu_pol_log){reference-type="eqref" reference="eq:mu_pol_log"} and [\[eq:eta_0\]](#eq:eta_0){reference-type="eqref" reference="eq:eta_0"} in the natural way, since $\lim_{\alpha\to \infty}\mu_{\mathrm{pol},\alpha}=1/d$: $$\begin{aligned} \label{eq:alpha-infty-definitions} \mu_{\log}:=\frac{3-\tau}{\beta}, \quad \mu_{\mathrm{pol}}:= \frac{1}{d}+\frac{3-\tau}{\beta},\quad \eta_0 := \begin{cases} 1 & \mbox{ if $\mu>\mu_{\mathrm{pol}}$,}\\ d\cdot(\mu-\mu_{\log}) & \mbox{ if $\mu\le\mu_{\mathrm{pol}}$.} \end{cases}\end{aligned}$$ To describe the case $\beta = \infty$, we replace [\[eq:F_L-condition\]](#eq:F_L-condition){reference-type="eqref" reference="eq:F_L-condition"} by the condition $$\begin{aligned} \label{eq:beta_infty} \lim_{t\to 0} F_L(t)/t^{\beta} = 0 \mbox{ for all }0<\beta <\infty.\end{aligned}$$ This means that the cdf of $L> 0$ is flatter than any polynomial near $0$. In particular, this condition is satisfied if $F_L$ has no probability mass around zero, for example in the case $L \equiv 1$. In this case, we again replace [\[eq:mu_pol_log\]](#eq:mu_pol_log){reference-type="eqref" reference="eq:mu_pol_log"}-[\[eq:eta_0\]](#eq:eta_0){reference-type="eqref" reference="eq:eta_0"} naturally by $$\begin{aligned} \label{eq:beta-infty-definitions} \mu_{\log}:=0, \quad \mu_{\mathrm{pol}}:= \max\{\tfrac{1}{d}, \mu_{\mathrm{pol},\alpha}\}, \quad \eta_0 := \begin{cases} 1 & \mbox{ if $\mu>\mu_{\mathrm{pol}}$,}\\ \min\{ d\mu, \mu/\mu_{\mathrm{pol},\alpha}\} & \mbox{ if $\mu\le\mu_{\mathrm{pol}}$.} \end{cases}\end{aligned}$$ We mention that these definitions stay valid also for $\tau=3$. 
Finally, in the case $\alpha=\beta=\infty$ we replace [\[eq:mu_pol_log\]](#eq:mu_pol_log){reference-type="eqref" reference="eq:mu_pol_log"}-[\[eq:eta_0\]](#eq:eta_0){reference-type="eqref" reference="eq:eta_0"} by $$\begin{aligned} \label{eq:alpha-beta-infty-definitions} \mu_{\log}:=0, \quad \mu_{\mathrm{pol}}:= \tfrac{1}{d}, \quad \eta_0 := \min\{1,d\mu\}.\end{aligned}$$ Our main results still hold for these limit regimes. We remark that the corresponding upper bounds also hold, see [@komjathy2022one1 Theorem 1.8]. **Theorem 8** (Extension to threshold IGIRGs/GIRGs, and $\beta=\infty$). *  [\[thm:threshold_regimes\]]{#thm:threshold_regimes label="thm:threshold_regimes"}* (a) *Theorems [Theorem 4](#thm:polynomial_regime){reference-type="ref" reference="thm:polynomial_regime"}, [\[thm:linear_regime\]](#thm:linear_regime){reference-type="ref" reference="thm:linear_regime"}, and [\[thm:finite_graph\]](#thm:finite_graph){reference-type="ref" reference="thm:finite_graph"} still hold for $\alpha=\infty$ if we replace the definitions of $\mu_{\mathrm{pol}}, \mu_{\log}, \eta_0$ in [\[eq:mu_pol_log\]](#eq:mu_pol_log){reference-type="eqref" reference="eq:mu_pol_log"}-[\[eq:eta_0\]](#eq:eta_0){reference-type="eqref" reference="eq:eta_0"} by their values in [\[eq:alpha-infty-definitions\]](#eq:alpha-infty-definitions){reference-type="eqref" reference="eq:alpha-infty-definitions"}.* (b) *Theorems [Theorem 4](#thm:polynomial_regime){reference-type="ref" reference="thm:polynomial_regime"}, [\[thm:linear_regime\]](#thm:linear_regime){reference-type="ref" reference="thm:linear_regime"}, and [\[thm:finite_graph\]](#thm:finite_graph){reference-type="ref" reference="thm:finite_graph"} still hold for $\beta=\infty$ if we replace the definitions of $\mu_{\mathrm{pol}}, \mu_{\log}, \eta_0$ in [\[eq:mu_pol_log\]](#eq:mu_pol_log){reference-type="eqref" reference="eq:mu_pol_log"}-[\[eq:eta_0\]](#eq:eta_0){reference-type="eqref" reference="eq:eta_0"} by their values in 
[\[eq:beta-infty-definitions\]](#eq:beta-infty-definitions){reference-type="eqref" reference="eq:beta-infty-definitions"}.* (c) *Theorems [Theorem 4](#thm:polynomial_regime){reference-type="ref" reference="thm:polynomial_regime"}, [\[thm:linear_regime\]](#thm:linear_regime){reference-type="ref" reference="thm:linear_regime"}, and [\[thm:finite_graph\]](#thm:finite_graph){reference-type="ref" reference="thm:finite_graph"} still hold for $\alpha=\beta=\infty$ if we replace the definitions of $\mu_{\mathrm{pol}}, \mu_{\log}, \eta_0$ in [\[eq:mu_pol_log\]](#eq:mu_pol_log){reference-type="eqref" reference="eq:mu_pol_log"}-[\[eq:eta_0\]](#eq:eta_0){reference-type="eqref" reference="eq:eta_0"} by their values in [\[eq:alpha-beta-infty-definitions\]](#eq:alpha-beta-infty-definitions){reference-type="eqref" reference="eq:alpha-beta-infty-definitions"}.* In Theorem [\[thm:threshold_regimes\]](#thm:threshold_regimes){reference-type="ref" reference="thm:threshold_regimes"}(b), the requirement that $\mu>\mu_{\mathrm{\log}}=0$ implies that we exclude the case $\mu=0$. Indeed, $\mu=0$ corresponds to the setting of classical iid first-passage percolation, where, whenever $\tau\in(2,3)$, explosion occurs under mild conditions on $L$, see [@komjathy2020explosion Theorems 2.11, 2.12]. Theorem [\[thm:threshold_regimes\]](#thm:threshold_regimes){reference-type="ref" reference="thm:threshold_regimes"}(a) implies the analogous result for hyperbolic random graphs (HypRG) by setting $d=1$ in [\[eq:alpha-infty-definitions\]](#eq:alpha-infty-definitions){reference-type="eqref" reference="eq:alpha-infty-definitions"}, except for some technical details. By our Definition [Definition 1](#def:girg){reference-type="ref" reference="def:girg"}, the number of vertices in a finite GIRG is random, and has Poisson distribution with parameter $n$, while in the usual definition of HypRG [@krioukov2010hyperbolic; @gugelmann2012random] and GIRG [@bringmann2019geometric] the number of vertices is exactly $n$.
Further, in HypRG the vertex-weights follow an $n$-dependent distribution $W^{(n)}$ that converges to a limiting distribution $W$ [@komjathy2020explosion]. These issues can be overcome by coupling techniques, e.g. those presented in [@komjathy2020explosion]: a model with exactly $n$ vertices can be squeezed between two GIRGs with Poisson intensities $1-\sqrt{4\log n/n}$ and $1+\sqrt{4\log n/n}$, and the $n$-dependent vertex-weights can be coupled to the limiting ones; we avoid spelling out the details and refer the reader to [@komjathy2020explosion Claims 3.2, 3.3]. Finally, the lower bounds in Theorems [Theorem 4](#thm:polynomial_regime){reference-type="ref" reference="thm:polynomial_regime"} and [\[thm:linear_regime\]](#thm:linear_regime){reference-type="ref" reference="thm:linear_regime"} also hold under much weaker, though more technical, conditions. In particular, for $\tau > 3$ we can strongly relax Assumption [Assumption 3](#assu:L){reference-type="ref" reference="assu:L"} on $L$. **Theorem** (Lower bounds). [\[thm:linear_polynomial_lower_bound\]]{#thm:linear_polynomial_lower_bound label="thm:linear_polynomial_lower_bound"} Consider $1$-FPP on IGIRG or SFP of Definition [Definition 1](#def:girg){reference-type="ref" reference="def:girg"} satisfying the assumptions given in [\[eq:power_law\]](#eq:power_law){reference-type="eqref" reference="eq:power_law"}--[\[eq:cost\]](#eq:cost){reference-type="eqref" reference="eq:cost"} with $0\in\mathcal{V}$ (but not necessarily Assumption [Assumption 3](#assu:L){reference-type="ref" reference="assu:L"} on $L$). Assume that $\alpha>2$ and $\tau\in(2,\infty)$.
\(1\) *Conditions for polynomial distance lower bound.* If $\boldsymbol{\tau\in (2,3]}$ and $L$ satisfies Assumption [Assumption 3](#assu:L){reference-type="ref" reference="assu:L"} with some $\beta\in(0,\infty]$ so that $\boldsymbol{\mu > \mu_{\log}}$, then for all $\varepsilon>0$, almost surely there is $r>0$ such that $$\begin{aligned} \label{eq:lin-poly-lower-bound} \mbox{for all $x\in \mathcal{V}$ with $|x|\ge r$}: d_{\mathcal{C}}(0,x) \ge |x|^{\eta_0-\varepsilon}. \end{aligned}$$ \(2\) *Conditions for strictly linear distance lower bound.* If *either* $\boldsymbol{\tau>3}$ and $\mu\ge 0$, and $L$ in [\[eq:cost\]](#eq:cost){reference-type="eqref" reference="eq:cost"} satisfies $\mathbb P(L>0)=1$, *or* $\boldsymbol{\tau\in(2,3]}$ and $L$ satisfies Assumption [Assumption 3](#assu:L){reference-type="ref" reference="assu:L"} with some $\beta>0$ so that $\boldsymbol{\mu > \mu_{\mathrm{pol}}}$, then there exists $\kappa>0$ such that almost surely there is $r>0$ such that $$\begin{aligned} \label{eq:linear-lower} \mbox{for all $x\in \mathcal{V}$ with $|x|\ge r$}: d_{\mathcal{C}}(0,x) \ge \kappa |x|. \end{aligned}$$ Compared to Theorems [Theorem 4](#thm:polynomial_regime){reference-type="ref" reference="thm:polynomial_regime"} and [\[thm:linear_regime\]](#thm:linear_regime){reference-type="ref" reference="thm:linear_regime"}, Theorem [\[thm:linear_polynomial_lower_bound\]](#thm:linear_polynomial_lower_bound){reference-type="ref" reference="thm:linear_polynomial_lower_bound"} covers the boundary case $\tau=3$. More importantly, it states that the linear lower bound in [\[eq:linear-lower\]](#eq:linear-lower){reference-type="eqref" reference="eq:linear-lower"} holds for $\tau> 3$ and $\alpha>2$ under very mild conditions on $L$ and $\mu$, i.e., we allow arbitrary edge weight distributions $L$ that have no probability mass at zero, and arbitrary penalty strength $\mu\ge 0$. This includes *classical first passage percolation* by setting $\mu=0$ and $L$ arbitrary, a.s. 
positive, but also the case $L \equiv 1$ and $\mu=0$, giving *graph distances*. In this latter case we recover the result of Berger [@berger2004lower] for long-range percolation (LRP) and its extension by Deprez, Hazra, and Wüthrich [@deprez2015inhomogeneous] for SFP. We state the case of *classical* (iid) first passage percolation on finite-variance degree models ($\tau>3$) with long-range parameter $\alpha>2$ as an immediate corollary. **Corollary 9**. *Consider classical first passage percolation with iid transmission times from distribution $L$ satisfying $\mathbb{P}(L>0)=1$ on IGIRG, GIRG or SFP of Definition [Definition 1](#def:girg){reference-type="ref" reference="def:girg"} with $\tau>3, \alpha>2$. Then there exists $\kappa>0$ such that almost surely there is $r>0$ such that $$\begin{aligned} \mbox{for all $x\in \mathcal{V}$ with $|x|\ge r$}: d_{\mathcal{C}}(0,x) \ge \kappa |x|. \end{aligned}$$* In Theorem [\[thm:linear_polynomial_lower_bound\]](#thm:linear_polynomial_lower_bound){reference-type="ref" reference="thm:linear_polynomial_lower_bound"} (unlike in Theorem [\[thm:linear_regime\]](#thm:linear_regime){reference-type="ref" reference="thm:linear_regime"}), since we state lower bounds, we do not condition on $0,x \in \mathcal{C}_\infty$. In the case $\tau > 3$, an infinite component need not exist (this depends on the constants and slowly varying functions), so conditioning on $0,x \in \mathcal{C}_\infty$ would not make sense. However, for parameters that ensure an infinite component of positive density, the event $0,x \in \mathcal{C}_\infty$ has positive probability, and [\[eq:lin-poly-lower-bound\]](#eq:lin-poly-lower-bound){reference-type="eqref" reference="eq:lin-poly-lower-bound"} and [\[eq:linear-lower\]](#eq:linear-lower){reference-type="eqref" reference="eq:linear-lower"} remain true after conditioning.
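For orientation, the two regimes of this lower-bound theorem can be instantiated in the threshold case $\alpha=\beta=\infty$, where [\[eq:alpha-beta-infty-definitions\]](#eq:alpha-beta-infty-definitions){reference-type="eqref" reference="eq:alpha-beta-infty-definitions"} gives $\mu_{\log}=0$, $\mu_{\mathrm{pol}}=1/d$ and $\eta_0=\min\{1,d\mu\}$. Informally (this is only a restatement of the statements above, not an additional claim), for $\tau\in(2,3]$ and all $x\in\mathcal{V}$ with $|x|\ge r$:

$$d_{\mathcal{C}}(0,x)\ \ge\ \begin{cases} |x|^{\min\{1,d\mu\}-\varepsilon} & \text{if } \mu>\mu_{\log}=0,\\ \kappa\,|x| & \text{if } \mu>\mu_{\mathrm{pol}}=\tfrac{1}{d}. \end{cases}$$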
**Discontinuity at $\tau=3$.** Remarkably, for $\tau > 3$ there is no analogue of the strictly polynomial regime in Theorem [Theorem 4](#thm:polynomial_regime){reference-type="ref" reference="thm:polynomial_regime"}, even though the limits $\lim_{\tau\nearrow 3}\mu_{\textrm{pol}}=1/d$ and $\lim_{\tau\nearrow 3}\eta_0=\mu d$ exist algebraically and lie in $(0,1)$. So if we fix some $\mu < 1/d$ and let $\tau \nearrow 3$, the cost-distances grow polynomially with exponents bounded away from one (e.g., they approach $1/2$ from below for $\mu = 1/(2d)$). But as soon as $\tau > 3$ is reached, the exponent "jumps" to one. In other words, even though $\mu_{\textrm{pol}}$ converges to a positive limit $1/d$ as $\tau \nearrow 3$, it drops to $0$ as soon as $\tau >3$ is reached. In this sense, the parameter space is discontinuous in $\eta_0$ and $\mu_{\textrm{pol}}$. ## Proof outline and main methodological contributions {#sec:outline} *Proof outline of the lower bounds.* For lower bounds, we need to show that every path from a vertex $x$ to $0$ is expensive. For this, we quantify the property that 'most long edges are expensive' in Lemma [Lemma 11](#lem:no_long_cheap_edge){reference-type="ref" reference="lem:no_long_cheap_edge"}. This allows us to generalise a renormalisation method from Berger [@berger2004lower]. For long-range percolation, Berger considers a growing system of boxes around the origin and defines a box $Q$ as *good* if it contains no edge whose length is of linear order in the box size and the same property holds recursively for most of its child-boxes, which are a certain set of non-disjoint sub-boxes that cover $Q$ multiple times. In the setting of 1-FPP, we modify the definition since long edges *do* exist when $\tau\in(2,3)$ (which is implied by the doubly-logarithmic graph distances), but these edges are typically very expensive.
We then show inductively the deterministic statement that once a box is good, the cost-distance is "large" between any pair of vertices inside the box whose Euclidean distance is linear in the box size. By "large" we mean either linear in the Euclidean distance or polynomial with an exponent less than one, depending on the model parameters; this is unlike the linear graph-distances in [@berger2004lower]. Polynomial cost-distances occur when $\mu\in(\mu_{\mathrm{log}},\mu_{\mathrm{pol}})$. For the inductive step, for any two sufficiently distant vertices $u$, $v$ in the same good box $Q$, we use that any path $\pi_{u,v}$ between them must either *(i)* contain a long and thus expensive edge, or *(ii)* contain many long disjoint sub-segments in good child-boxes of $Q$, whose costs we can lower-bound by induction; see Figure [\[fig:lower-bound-proof\]](#fig:lower-bound-proof){reference-type="ref" reference="fig:lower-bound-proof"}. We give the definition of good boxes in the FPP setting in Section [2.2](#sec:good-boxes){reference-type="ref" reference="sec:good-boxes"}, after calculating the probability that long but cheap edges exist in a box in Section [2.1](#sec:long-cheap-edges){reference-type="ref" reference="sec:long-cheap-edges"}. In Section [2.3](#sec:inside-good-blocks){reference-type="ref" reference="sec:inside-good-blocks"} we then show that there are no cheap paths *within* a good box, and in Section [2.4](#sec:lower_bound_polynomial){reference-type="ref" reference="sec:lower_bound_polynomial"} that there are no cheap paths at all, thus also excluding cheap paths that leave and return to the same box. Although those two sections broadly follow the proof in [@berger2004lower], our formulation gives the lower bound directly for all vertices, while [@berger2004lower] only showed it along a sequence of norms for the vertices, and then used Kingman's subadditive ergodic theorem to extend the result to all vertices.
We avoid this by improving on the conditions in [@berger2004lower Lemma 2] for the 1-FPP, which allows us to extend the proof to non-linear regimes, in which Kingman's theorem is not applicable (see Proposition [Proposition 15](#prop:path_in_good_block){reference-type="ref" reference="prop:path_in_good_block"} and its proof). Finally, in Section [2.5](#sec:limit-cases-lower){reference-type="ref" reference="sec:limit-cases-lower"} we show the analogous lower bounds for $\alpha = \infty$ and/or $\beta = \infty$ in Theorem [\[thm:threshold_regimes\]](#thm:threshold_regimes){reference-type="ref" reference="thm:threshold_regimes"} using coupling arguments. *Proof outline for the upper bounds.*[\[proof:idea-upper\]]{#proof:idea-upper label="proof:idea-upper"} For a sufficiently large constant $M$, we consider the subgraph $G_M$ of IGIRG/SFP induced by the vertices with weights in the interval $I_M:=[M,2M]$, keeping only the edges among them with edge-cost at most $M^{3\mu}$. We partition the space into boxes of side length $R:= M^{2/d}/\sqrt{d}$. We relate the connectivity of boxes to a site-bond percolation process $\omega$, where a box is occupied if its induced subgraph of $G_M$ is connected and has at most a bounded number $K$ of vertices, and two occupied boxes are joined by a bond if there is an edge of $G_M$ between their subgraphs. We then dominate $\omega$ from below by an iid Bernoulli bond percolation $\omega^\star$ on $\mathbb{Z}^d$, which has edge-density increasing with $M$ whenever $\tau \in (2,3)$. Thus for $M$ large enough $\omega^\star$ is supercritical, and we can use a result by Antal and Pisztora [@antal1996chemical] that distances in the infinite component $\mathcal{C}_\infty^\star$ of $\omega^\star$ scale linearly with the Euclidean distance. The union of boxes corresponding to $\mathcal{C}_\infty^\star$ in $G_M$ results in an infinite connected subgraph $\mathcal{H}_\infty$ of $G_M$ with linear cost-distances.
We then adapt a local-density result of Deuschel and Pisztora in [@DP-percolation], saying that the infinite cluster $\mathcal{C}_\infty^\star$ of $\omega^\star$ comes near every vertex $z\in \mathbb{Z}^d$, to hold also for quite general models of random geometric graphs with high edge density (see Definition [Definition 18](#def:dense-geometric){reference-type="ref" reference="def:dense-geometric"}), and thus also for $G_M$, see Corollary [Corollary 22](#cor:dense-subgraph){reference-type="ref" reference="cor:dense-subgraph"}. These results may be of independent interest. They also allow us to construct a low-cost path in the IGIRG/SFP from the starting vertices $0,x$ to a nearby vertex in $\mathcal{H}_\infty$. When $\tau >3$, the edge-density in $G_M$ and thus in $\omega^\star$ *decreases* with $M$, which is why IGIRG with $\tau >3$ is non-robust under percolation, while IGIRG with $\tau\in(2,3)$ is. For a GIRG in a finite box, we additionally need to ensure that the constructed paths stay inside the box of the graph. For this we prove that there exist linear-distance paths between any sites $u$ and $v$ in the infinite component which deviate "little" from the straight line segment $S_{u,v}$ (this is implicitly present in [@antal1996chemical], and we prove it for general models of random geometric graphs with high density, see Lemma [Lemma 21](#lem:dense-geometric-linear-paths){reference-type="ref" reference="lem:dense-geometric-linear-paths"}). Hence, for random vertices $u_n,v_n$ in the giant component of GIRG, there is a.a.s. a cheap path $\pi_{u_n,v_n}$ from $u_n$ to $v_n$ in the corresponding IGIRG with small deviation. Since $u_n$ and $v_n$ are random, they are unlikely to be close to the boundary of the box, and hence $\pi_{u_n,v_n}$ is completely contained in GIRG.
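The first step of this outline, box occupancy over a Poisson point process, can be illustrated with a toy computation. The sketch below (our own simplification, not the paper's construction) declares a box occupied as soon as it contains a point, instead of requiring a connected induced subgraph of $G_M$ with at most $K$ vertices; since point counts in disjoint boxes are independent Poisson variables, the occupancy indicators already form an iid Bernoulli field, which is the structure the domination by $\omega^\star$ exploits.

```python
import math
import random

# Toy box renormalisation for a unit-intensity Poisson point process on
# [0, A)^2, with boxes of side R (simplified occupancy: >= 1 point per box).
random.seed(1)
A, R = 60, 1
n_boxes = (A // R) ** 2

# Sample N ~ Poisson(A^2) points via exponential inter-arrival times.
n_pts, s = 0, random.expovariate(1.0)
while s < A * A:
    n_pts += 1
    s += random.expovariate(1.0)

occupied = set()
for _ in range(n_pts):
    x, y = random.uniform(0, A), random.uniform(0, A)
    occupied.add((int(x // R), int(y // R)))

# Counts in disjoint boxes are iid Poisson(R^2), so each box is occupied
# independently with probability 1 - exp(-R^2), about 0.632 for R = 1.
empirical = len(occupied) / n_boxes
theoretical = 1.0 - math.exp(-R * R)
```

With these illustrative values the empirical occupancy fraction matches $1-e^{-R^2}$ up to random fluctuations of order $n_{\text{boxes}}^{-1/2}$.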
For a graph $G=(\mathcal V,\mathcal E)$ and a set $A\subseteq \mathbb{R}^d$, we denote by $G[A]$ the induced subgraph of $G$ with vertex set $\mathcal V \cap A$. For two vertices $x,y\in \mathcal V$, we denote the edge between them by $xy$. For a path $\pi$, we denote the number of edges in $\pi$ by $|\pi|$. Given a cost function $\mathcal C: \mathcal E \to [0,\infty]$ on the edges, the cost of a collection of edges $\pi$ is the sum of the costs of the edges in the collection, $\mathcal{C}(\pi):=\sum_{e\in\pi}\mathcal{C}(e)$. For convenience we define $\mathcal{C}(xx):=0$ for all $x\in \mathcal{V}$. We define the *cost-distance* between vertices $x$ and $y$ as $$\begin{aligned} \label{eq:cost_distance} d_{\mathcal{C}}(x,y):=\inf\{\mathcal{C}(\pi) : \pi \textnormal{ is a path from } x \textnormal{ to } y \textnormal{ in $G$}\}.\end{aligned}$$ We define the graph distance $d_G(x,y)$ similarly, by setting all edge-costs to $1$. The graph will usually be clear from the context. We denote the Euclidean norm of $x \in \mathbb R^d$ by $|x|$. We denote by $B_r(x) := \{y\in \mathbb{R}^d : |x-y|\le r\}$ the Euclidean ball with radius $r\geq 0$ around $x$, and by $\mathcal B_r(x):=\{y\in\mathcal V : |x-y|\le r\} = B_r(x) \cap \mathcal V$ the set of vertices in this ball. The minimal notational difference is intentional, as we do not expect any confusion to arise between the two sets. The *graph-distance ball* and *cost-distance ball* (or *cost-ball* for short) around a vertex $x$ are the vertex sets $\mathcal B_r^G(x):=\{y\in\mathcal V : d_G(x,y)\le r\}$ and $\mathcal B_r^{\mathcal C}(x):=\{y\in\mathcal V : d_{\mathcal C}(x,y)\le r\}$, respectively. We set $B_r := B_r(0)$, and similarly for $\mathcal B_r$, $\mathcal B_r^G$ and $\mathcal B_r^{\mathcal C}$ if $0$ is a vertex. We define $\partial B_r(x) := B_r(x)\setminus \{y\in \mathbb{R}^d : |x-y| < r\}$, and define $\partial \mathcal{B}_r$, $\partial \mathcal{B}_r^G$ and $\partial \mathcal{B}_r^{\mathcal{C}}$ similarly.
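On a finite graph, the cost-distance in [\[eq:cost_distance\]](#eq:cost_distance){reference-type="eqref" reference="eq:cost_distance"} can be computed by Dijkstra's algorithm, since all edge-costs are nonnegative. A minimal sketch on a hypothetical three-vertex instance (this toy graph is ours, not from the paper; setting every cost to $1$ would return the graph distance $d_G$ instead):

```python
import heapq

def cost_distance(adj, x, y):
    """d_C(x, y) via Dijkstra; adj maps a vertex to (neighbour, cost) pairs.

    Edge-costs are nonnegative, matching C : E -> [0, infty]. Returns
    float('inf') if y is not reachable from x.
    """
    dist = {x: 0.0}
    heap = [(0.0, x)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == y:
            return d
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, c in adj.get(u, []):
            nd = d + c
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return float("inf")

# Hypothetical toy instance: the cheapest 0-2 route is 0-1-2 with cost 1.5,
# beating the direct edge of cost 3.0.
adj = {
    0: [(1, 0.5), (2, 3.0)],
    1: [(0, 0.5), (2, 1.0)],
    2: [(0, 3.0), (1, 1.0)],
}
print(cost_distance(adj, 0, 2))  # 1.5
```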
In particular, $\partial \mathcal{B}_1^G(v)$ is the neighbourhood of a vertex $v$. For parameters $a,b >0$ (model parameters or otherwise), we use "for all $a{\,\gg_{\star}\,}b$" as a shortcut for "$\forall b>0:\, \exists a_0 = a_0(b):\, \forall a\ge a_0$". We also say "$a {\,\gg_{\star}\,}b$" to mean that $a \ge a_0(b)$. We use $a {\,\ll_{\star}\,}b$ analogously, and may use more than two parameters. For example, "for $a{\,\gg_{\star}\,}b,c$" means "$\forall b,c>0:\, \exists a_0=a_0(b,c):\, \forall a \ge a_0$". A measurable function $\ell:(0,\infty) \to (0,\infty)$ is *slowly varying* if $\lim_{x\to \infty} \ell(ax)/\ell(x) =1$ for all $a>0$. We denote the natural logarithm by $\log$. For $n \in \mathbb{N}$ we abbreviate $[n]:= \{1,\ldots,n\}$. For $S \subseteq \mathbb{R}^d$, we denote the Lebesgue measure of $S$ by $\textnormal{Vol}(S)$. # Lower Bounds {#sec:lower-bound} The main part of this section is the proof of Theorem [\[thm:linear_polynomial_lower_bound\]](#thm:linear_polynomial_lower_bound){reference-type="ref" reference="thm:linear_polynomial_lower_bound"}, which in turn implies the lower bounds in Theorem [Theorem 4](#thm:polynomial_regime){reference-type="ref" reference="thm:polynomial_regime"} and Theorem [\[thm:linear_regime\]](#thm:linear_regime){reference-type="ref" reference="thm:linear_regime"}. Note that the corresponding lower bound for GIRG in Theorem [\[thm:finite_graph\]](#thm:finite_graph){reference-type="ref" reference="thm:finite_graph"} also follows trivially since GIRG is a subgraph of IGIRG. Throughout the section, we will maintain Assumption [Assumption 3](#assu:L){reference-type="ref" reference="assu:L"} on $L$ unless explicitly noted otherwise. Before we start with the proof, we note an elementary lemma stating that the product of two random variables with regularly varying tails sharing the same tail exponent again has a regularly varying tail with the same tail exponent. **Lemma 10**.
*Let $X, Y$ be two independent positive random variables with cumulative distributions $F_X(x)=1-\ell_1(x)x^{-\tau}$ and $F_Y(y)=1-\ell_2(y)y^{-\tau}$, respectively, where $\ell_1$ and $\ell_2$ are slowly varying functions. Then the cumulative distribution of their product $Z:=XY$ is given by $F_Z(z)=1- \ell^{\star}(z)z^{-\tau}$ for some slowly varying function $\ell^{\star}$.* *Proof.* This is a consequence of [@embrechts1980closure Corollary, Page 3]. ◻ ## Upper bound on the number of long and cheap edges {#sec:long-cheap-edges} As mentioned in Section [1.3](#sec:outline){reference-type="ref" reference="sec:outline"}, we start by developing a renormalisation argument. The argument builds on the basic observation that a box is unlikely to contain an edge that is both long in Euclidean distance and cheap, i.e., whose cost is small. This follows from a straightforward but tedious calculation and is summarised in the following lemma. The more interesting part of the proof comes afterwards, when we turn this basic property into the statement that all paths between a vertex $x$ and $0$ are at least polynomially expensive. **Lemma 11**. *Consider $1$-FPP on IGIRG or SFP of Definition [Definition 1](#def:girg){reference-type="ref" reference="def:girg"} satisfying the assumptions given in [\[eq:power_law\]](#eq:power_law){reference-type="eqref" reference="eq:power_law"}--[\[eq:F_L-condition\]](#eq:F_L-condition){reference-type="eqref" reference="eq:F_L-condition"} with $0\in\mathcal{V}$. Assume that $\tau\in(2,\infty)$ and $\beta>0$. For all $\varepsilon> 0$, if $N>0$ is sufficiently large relative to $\varepsilon$, then the following holds. Let $A>N$ and $a > 0$.
Then the expected number of edges in $[-A/2,A/2]^d$ with length at least $N$ and cost at most $N^{ad}$ is at most $$\begin{aligned} \label{eq:no_long_cheap_edge} \begin{cases} A^dN^{\varepsilon}\Big( N^{-d(\alpha-1)} + N^{-d(\tau-2)}\Big) & \mbox{ if $a\ge\mu$,}\\ A^dN^{\varepsilon}\Big(N^{-d(\alpha-1)} + N^{-d(\alpha - 1 - \frac{a}{\mu}(\alpha-(\tau-1)))} + N^{-d(\tau-2+(\mu-a)\beta)} \Big) & \mbox{ if $a<\mu$.} \end{cases}\end{aligned}$$ The formula for the case $a \ge \mu$ remains valid without the restriction on the cost of the edges, and in this case $L$ does not need to satisfy Assumption [Assumption 3](#assu:L){reference-type="ref" reference="assu:L"}.* Before the proof, let us informally explain the formula, suppressing slowly varying factors in the discussion. The factor $A^d$ is simply the (expected) number of vertices in $[-A/2,A/2]^d$. The term $N^{\varepsilon}$ comes from applying Potter's bound [@bingham1989regular] to bound the slowly varying function that appears in the distribution function of the vertex weights in [\[eq:power_law\]](#eq:power_law){reference-type="eqref" reference="eq:power_law"} when we integrate over the distribution of the products of weights $W_uW_v$ in the connectivity function in [\[eq:connection_prob\]](#eq:connection_prob){reference-type="eqref" reference="eq:connection_prob"}. For the case $a\geq \mu$, the two terms in the brackets represent two types of edges. Consider a vertex $v$ of constant weight. Then the first term $N^{-d(\alpha-1)}$ counts the expected number of neighbours $u$ in distance $\sim N$ (e.g., in distance $[N,2N]$) of constant weight, up to negligible factors. The second term $N^{-d(\tau-2)}$ counts the number of neighbours $u$ in distance $\sim\! N$ of weight $\sim\! N^{d}$, which is exactly the weight needed to get constant connection probability from [\[eq:connection_prob\]](#eq:connection_prob){reference-type="eqref" reference="eq:connection_prob"}.
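In symbols (our paraphrase of this heuristic, suppressing slowly varying and constant factors): a fixed vertex $v$ has of order $N^d$ candidate neighbours at distance $\sim\! N$, each such $u$ has weight $W_u \gtrsim N^d$ with probability roughly $N^{-d(\tau-1)}$, and such a pair is then connected with constant probability, so the expected number of these edges is

$$N^{d}\cdot N^{-d(\tau-1)}\cdot \Theta(1)\ =\ N^{-d(\tau-2)}.$$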
When $a\geq \mu$, the cost of such edges is typically less than $N^{ad}$, which is why we can ignore the cost condition in this case. There are potentially other possibilities, but their contribution to the expectation is negligible. In the case $a < \mu$, the first term $N^{-d(\alpha-1)}$ is similar to the case $a\ge \mu$, coming from edges between vertices of constant weight, and so the cost is typically constant as well. The third term $N^{-d((\tau-2)+ (\mu-a)\beta)}$ comes from edges between vertices of weights $W_v \sim 1$ and $W_u \sim N^d$. These receive a cost penalty of $N^{d\mu}$, so the probability that the edge has cost at most $N^{ad}$ is roughly $F_L(N^{-d(\mu-a)}) \approx N^{-d(\mu-a)\beta}$. Finally, the second term $N^{-d(\alpha-1 - a/\mu(\alpha -(\tau-1)))}$ comes from edges between vertices of weights $W_v \sim 1$ and $W_u \sim N^{ad/\mu}$. With this weight, the typical cost of the edge is $N^{ad}$, so the cost condition is satisfied. For fixed $v$, there are $\Theta(N^{d})$ vertices $u$ in distance $\sim\! N$; each such $u$ has probability $N^{-(ad/\mu)(\tau-1)}$ of having weight $W_u \sim N^{ad/\mu}$, and the probability of being adjacent to $v$ with this weight is $(N^{ad/\mu}/N^{d})^\alpha$ by [\[eq:connection_prob\]](#eq:connection_prob){reference-type="eqref" reference="eq:connection_prob"}. (All up to negligible terms.) Together, this yields the second term. Note that the term $(N^{ad/\mu}/N^{d})^\alpha$ for the connection probability is only correct if the bracket is at most one, i.e., if $a<\mu$. *Proof of Lemma [Lemma 11](#lem:no_long_cheap_edge){reference-type="ref" reference="lem:no_long_cheap_edge"}.* Throughout this proof, we will denote by $C_1, C_2, \ldots$ finite positive constants depending on $\varepsilon$ and the set $\textnormal{\texttt{par}}\xspace$ of model parameters. For readability, we allow these to appear in the middle of a calculation without necessarily being defined beforehand.
Note that the statement of the lemma is stronger for smaller $\varepsilon>0$. So without loss of generality, we can assume that we take an $\varepsilon>0$ small enough so that $$\begin{aligned} \label{eq:delta_no_long_cheap_edge_1} -d(\tau-2)+\varepsilon/2<0 \quad \textnormal{and} \quad -d(\alpha-1)+\varepsilon/2<0. \end{aligned}$$ If $\alpha-(\tau-1)-\mu\beta<0$, we can also assume that $$\begin{aligned} \label{eq:delta_no_long_cheap_edge_2} \alpha-(\tau-1)-\mu\beta+\varepsilon\frac{\mu}{2ad}<0. \end{aligned}$$ Let $E(A,N,a)$ denote the expected number of edges in $[-A/2,A/2]^d$ with length at least $N$ and cost at most $N^{ad}$. We first compute $E(A,N,a)$ for SFP, which has vertex set $\mathbb{Z}^d$. Let $$\label{eq:lambda_r} \Lambda(r):=\mathbb{E}[(1 \wedge W_xW_y/r^d)^{\alpha} F_L(N^{ad}(W_xW_y)^{-\mu})].$$ Using conditional expectation we have $$\begin{aligned} E(A,N,a) &= \sum_{\substack{x,y \in [-A/2,A/2]^d\cap \mathbb{Z}^d\\|x-y| \ge N}} \mathbb{E}\left[\mathbbm{1}_{\{xy \textnormal{ is an edge}\}}\cdot\mathbbm{1}_{\{\mathcal{C}(xy)\le N^{ad}\}}\right] \\&\le \sum_{\substack{x,y \in[-A/2,A/2]^d\cap\mathbb{Z}^d\\N\le|x-y|\le dA}} \mathbb{E}\left[\overline{c}\left(1 \wedge \dfrac{W_xW_y}{|x-y|^d}\right) ^\alpha \cdot F_L(N^{ad}(W_xW_y)^{-\mu}) \right] \\ &= \overline{c} \sum_{x \in [-A/2,A/2]^d\cap \mathbb{Z}^d} \sum_{\substack{y \in [-A/2,A/2]^d\cap \mathbb{Z}^d\\N\le|x-y|\le dA}} \Lambda(|x-y|). \end{aligned}$$ Note that the number of vertices satisfies $\refstepcounter{constant}\label{cst:no_long_cheap_edge2}|\mathbb{Z}^d\cap [-A/2,A/2]^d|\le C_{\ref{cst:no_long_cheap_edge2}}A^d$. In order to simplify calculations, we will replace the second sum by an integral.
More precisely, by usual isoperimetric inequalities for $\mathbb{Z}^d$, there is a constant $\refstepcounter{constant}\label{cst:no_long_cheap_edge3}C_{\ref{cst:no_long_cheap_edge3}}=C_{\ref{cst:no_long_cheap_edge3}}(d)$ such that $$\begin{aligned} \label{eq:ENA} E(A,N,a)\le C_{\ref{cst:no_long_cheap_edge3}}A^d \int_{r=N}^{dA} r^{d-1} \Lambda(r)\,\mathrm{d}r. \end{aligned}$$ For IGIRG, we obtain the same formula [\[eq:ENA\]](#eq:ENA){reference-type="eqref" reference="eq:ENA"} in a simpler way. In expectation, there are $A^d$ vertices in $[-A/2,A/2]^d$. For any fixed vertex, the expected number of neighbours in IGIRG in distance between $N$ and $dA$ with edge cost at most $N^{ad}$ is given by the integral $\int_{r=N}^{dA} r^{d-1} \Lambda(r)\,\mathrm{d}r$. Since $dA$ bounds the diameter of $[-A/2,A/2]^d$, this includes all neighbours of this type in $[-A/2,A/2]^d$. Hence, the upper bound [\[eq:ENA\]](#eq:ENA){reference-type="eqref" reference="eq:ENA"} also holds for IGIRG and its subgraph GIRG. Next we bound $\Lambda(r)$ in [\[eq:ENA\]](#eq:ENA){reference-type="eqref" reference="eq:ENA"}. Defined in [\[eq:lambda_r\]](#eq:lambda_r){reference-type="eqref" reference="eq:lambda_r"}, $\Lambda(r)$ only depends on the product $W_xW_y=:Z$, not on the individual weights of the two vertices. By Lemma [Lemma 10](#lem:product_distribution){reference-type="ref" reference="lem:product_distribution"}, the distribution of $Z$ is of the form $f_Z(z)=\ell^{\star}(z)z^{-\tau}$, where $\ell^{\star}$ is a slowly varying function. For the sake of simplicity, we will work with $Z$ having a density, but a proof using only Lebesgue-Stieltjes integration is similar. 
We recall from [\[eq:ENA\]](#eq:ENA){reference-type="eqref" reference="eq:ENA"} that $r>N$ and rewrite [\[eq:lambda_r\]](#eq:lambda_r){reference-type="eqref" reference="eq:lambda_r"} using the law of total probability as $$\label{eq:lambda-r-detailed} \Lambda(r)= \int_{z=1}^{\infty} \left(1\wedge\dfrac{z}{r^d}\right)^\alpha \cdot F_L(N^{ad}z^{-\mu}) \cdot \dfrac{\ell^{\star}(z)}{z^{\tau}}\,\mathrm{d}z.$$ We now split into cases depending on the value of $a$. **Case 1: $\boldsymbol{a \ge \mu}$.** In this case we first show that for all $\varepsilon>0$, for all sufficiently large $r$ (i.e., larger than some $r_0(\varepsilon)$), [\[local-lambda\]]{#local-lambda label="local-lambda"} $$\label{eq:local-lambda} \Lambda(r)\le C_{\ref{local-lambda}} (r^{-d(\tau-1)} + r^{-d\alpha})r^{\varepsilon/2}.$$ We split the inner integral of [\[eq:lambda-r-detailed\]](#eq:lambda-r-detailed){reference-type="eqref" reference="eq:lambda-r-detailed"} into two parts, at $r^d$. The first part is given by $$\begin{aligned}\label{eq:I_1} I_1 := \int_{z=1}^{r^d} \left(1\wedge\dfrac{z}{r^d}\right)^\alpha \cdot\underbrace{F_L(N^{ad}z^{-\mu})}_{\le 1} \cdot \dfrac{\ell^{\star}(z)}{z^{\tau}} \,\mathrm{d}z \le r^{-\alpha d} \int_{z=1}^{r^d} z^{\alpha-\tau}\ell^{\star}(z) \,\mathrm{d}z. \end{aligned}$$ Since $r>N$, and we will later let $N\to \infty$, we can use Karamata's theorem [@bingham1989regular Prop.
1.5.6], and obtain that for $\alpha-(\tau-1)>0$, [\[cst:no_long_cheap_edge5\]]{#cst:no_long_cheap_edge5 label="cst:no_long_cheap_edge5"} $$\begin{aligned} \label{eq:I1-case1} I_1\le r^{-\alpha d} \int_{z=1}^{r^d}z^{\alpha-\tau}\ell^{\star}(z)\,\mathrm{d}z \le r^{-\alpha d} C_{\ref{cst:no_long_cheap_edge5}}r^{d(\alpha-(\tau-1))}\ell^{\star}(r^d)\le C_{\ref{cst:no_long_cheap_edge5}}r^{-d(\tau-1)+\varepsilon/2} \end{aligned}$$ for $N$ (and thus $r$) large enough, where we used Potter's bound [@bingham1989regular] to get $\ell^{\star}(r)=o(r^{\varepsilon/(2d)})$ as $r\to\infty$, and thus for sufficiently large $N$ (and hence $r$) we obtain that $I_1$ in [\[eq:I_1\]](#eq:I_1){reference-type="eqref" reference="eq:I_1"} in the $\alpha>\tau-1$ case is bounded from above by the first term in [\[eq:local-lambda\]](#eq:local-lambda){reference-type="eqref" reference="eq:local-lambda"}. If $\alpha-\tau+1=0$, we again use $\ell^{\star}(z)=o(z^{\varepsilon/(2d)})$ as $z\to\infty$ to get [\[cst:no_long_cheap_edge7\]]{#cst:no_long_cheap_edge7 label="cst:no_long_cheap_edge7"} [\[cst:no_long_cheap_edge8\]]{#cst:no_long_cheap_edge8 label="cst:no_long_cheap_edge8"} $$\begin{aligned} \label{eq:case1-I1-2} I_1 \le C_{\ref{cst:no_long_cheap_edge7}}r^{-\alpha d}\int_{z=1}^{r^d} z^{\varepsilon/(2d)-1}\,\mathrm{d}z \le C_{\ref{cst:no_long_cheap_edge8}}r^{-\alpha d + \varepsilon/2}, \end{aligned}$$ which is the second term in [\[eq:local-lambda\]](#eq:local-lambda){reference-type="eqref" reference="eq:local-lambda"}. Finally, when $\alpha<\tau-1$, we use Potter's bound [@bingham1989regular] to get[\[cst:no_long_cheap_edge9\]]{#cst:no_long_cheap_edge9 label="cst:no_long_cheap_edge9"} $\ell^{\star}(z)\le C_{\ref{cst:no_long_cheap_edge9}} z^{\varepsilon}$ as $z\to\infty$, and since $\alpha-\tau+\varepsilon< -1$ we get that the integral is bounded by some constant and hence the bound [\[eq:case1-I1-2\]](#eq:case1-I1-2){reference-type="eqref" reference="eq:case1-I1-2"} remains valid. 
Combining equations [\[eq:I1-case1\]](#eq:I1-case1){reference-type="eqref" reference="eq:I1-case1"}--[\[eq:case1-I1-2\]](#eq:case1-I1-2){reference-type="eqref" reference="eq:case1-I1-2"}, we obtain that regardless of the relation between $\alpha$ and $\tau-1$, $I_1$ satisfies the bound in [\[eq:local-lambda\]](#eq:local-lambda){reference-type="eqref" reference="eq:local-lambda"}. The second part of the inner integral in [\[eq:lambda-r-detailed\]](#eq:lambda-r-detailed){reference-type="eqref" reference="eq:lambda-r-detailed"} is $$\begin{aligned} I_2 := \int_{z=r^d}^{\infty} \left(1\wedge\dfrac{z}{r^d}\right)^\alpha \cdot F_L(N^{ad}z^{-\mu}) \cdot \dfrac{\ell^{\star}(z)}{z^{\tau}}\,\mathrm{d}z \le \int_{z=r^d}^{\infty} z^{-\tau}\ell^{\star}(z)\,\mathrm{d}z, \end{aligned}$$ since $F_L\le 1$ always holds. Since $r^d\rightarrow\infty$ as $N\rightarrow\infty$, Proposition 1.5.10 of [@bingham1989regular] tells us that for $N$ large enough [\[cst:no_long_cheap_edge12\]]{#cst:no_long_cheap_edge12 label="cst:no_long_cheap_edge12"} $$\begin{aligned} I_2\le \int_{z=r^d}^{\infty} z^{-\tau}\ell^{\star}(z)\,\mathrm{d}z \le C_{\ref{cst:no_long_cheap_edge12}}r^{-d(\tau-1)}\ell^{\star}(r^d) \end{aligned}$$ Again using Potter's bound, for sufficiently large $N$ we have that $I_2$ is dominated by the first term in [\[eq:local-lambda\]](#eq:local-lambda){reference-type="eqref" reference="eq:local-lambda"}. This finishes the proof that [\[eq:local-lambda\]](#eq:local-lambda){reference-type="eqref" reference="eq:local-lambda"} holds when $a\ge \mu$. 
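As a numerical sanity check on [\[eq:local-lambda\]](#eq:local-lambda){reference-type="eqref" reference="eq:local-lambda"}, one can evaluate $\Lambda(r)$ in a toy case with $d=1$, $\ell^{\star}\equiv 1$ and $F_L\equiv 1$ (the bound in Case 1 does not use the cost restriction) and read off the dominant exponent $d(\tau-1)$ when $\alpha>\tau-1$. The parameter values below are illustrative choices of ours, not from the paper:

```python
import math

def Lambda(r, alpha=3.0, tau=2.5):
    """Evaluate Lambda(r) = int_1^infty (1 ^ z/r)^alpha z^(-tau) dz numerically
    for d = 1, ell* == 1 and F_L == 1 (illustrative parameters)."""
    # Bulk part, z in [1, r]: midpoint rule on a logarithmic grid.
    n = 200_000
    h = math.log(r) / n
    bulk = 0.0
    for i in range(n):
        z = math.exp((i + 0.5) * h)
        bulk += (z / r) ** alpha * z ** (-tau) * z * h  # dz = z d(log z)
    # Tail part, z in [r, infty): the minimum equals 1, so the integral is exact.
    tail = r ** (1.0 - tau) / (tau - 1.0)
    return bulk + tail

# The bound predicts Lambda(r) ~ r^{-d(tau-1)} = r^{-1.5} when alpha > tau - 1;
# the empirical log-log slope between r = 10 and r = 100 recovers this exponent.
slope = math.log(Lambda(10.0) / Lambda(100.0)) / math.log(10.0)
print(round(slope, 2))  # close to tau - 1 = 1.5
```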
Returning our attention to $E(A, N, a)$ in [\[eq:ENA\]](#eq:ENA){reference-type="eqref" reference="eq:ENA"}, and using now [\[eq:local-lambda\]](#eq:local-lambda){reference-type="eqref" reference="eq:local-lambda"}, we have for $\tau>2$ [\[cst:no_long_cheap_edge14\]]{#cst:no_long_cheap_edge14 label="cst:no_long_cheap_edge14"} [\[cst:no_long_cheap_edge15\]]{#cst:no_long_cheap_edge15 label="cst:no_long_cheap_edge15"} $$\begin{aligned} E(A,N,a)&\le C_{\ref{cst:no_long_cheap_edge3}} A^d \int_{r=N}^{dA} r^{d-1} C_{\ref{local-lambda}} (r^{-d(\tau-1)} + r^{-d\alpha})r^{\varepsilon/2}\mathrm{d}r \\ &= C_{\ref{cst:no_long_cheap_edge14}} A^d\int_{r=N}^{dA} r^{-d(\tau-2)+\varepsilon/2-1} + r^{-d(\alpha-1)+\varepsilon/2-1}\,\mathrm{d}r \\ &\stackrel{\eqref{eq:delta_no_long_cheap_edge_1}}{\le} C_{\ref{cst:no_long_cheap_edge15}}A^dN^{\varepsilon/2}\left(N^{-d(\tau-2)}+N^{-d(\alpha-1)}\right) \le A^dN^{\varepsilon}\left(N^{-d(\tau-2)}+N^{-d(\alpha-1)}\right), \end{aligned}$$ where the last inequality holds for $N$ large enough. Thus we have proved [\[eq:no_long_cheap_edge\]](#eq:no_long_cheap_edge){reference-type="eqref" reference="eq:no_long_cheap_edge"} when $a \ge \mu$. Observe that we only used the trivial bound $F_L\le 1$ in the proof, hence the statement also holds without any restriction on the edge-costs and without Assumption [Assumption 3](#assu:L){reference-type="ref" reference="assu:L"}. **Case 2: $\boldsymbol{a < \mu}$.** In this case we again start by bounding $\Lambda(r)$ in [\[eq:lambda-r-detailed\]](#eq:lambda-r-detailed){reference-type="eqref" reference="eq:lambda-r-detailed"}. We recall the constant $t_0$ from [\[eq:F_L-condition\]](#eq:F_L-condition){reference-type="eqref" reference="eq:F_L-condition"}. Note that $t_0^{-1/\mu}N^{ad/\mu}$ is smaller than $N^d$ (and thus $r^d$) for $N$ large enough.
We assume this inequality and split the integral of [\[eq:lambda-r-detailed\]](#eq:lambda-r-detailed){reference-type="eqref" reference="eq:lambda-r-detailed"} into three parts. In the first part we bound the factor $F_L$ by $1$ from above: $$\begin{aligned} \widetilde I_1 := \int_{z=1}^{t_0^{-1/\mu}N^{ad/\mu}} \left(1\wedge\dfrac{z}{r^d}\right)^\alpha \cdot F_L(N^{ad}z^{-\mu}) \cdot z^{-\tau}\ell^{\star}(z)\,\mathrm{d}z \le r^{-d\alpha}\int_{z=1}^{t_0^{-1/\mu}N^{ad/\mu}} z^{\alpha-\tau}\ell^{\star}(z)\,\mathrm{d}z. \end{aligned}$$ This is the same as $I_1$ in the previous case, except for the upper limit of integration. Using the same reasoning, we get that [\[cst:no_long_cheap_edge16\]]{#cst:no_long_cheap_edge16 label="cst:no_long_cheap_edge16"} $$\begin{aligned} \label{eq:I_1_bound_case2} \widetilde I_1 \le C_{\ref{cst:no_long_cheap_edge16}}(t_0)r^{-d\alpha} \left(N^{\varepsilon/2}+N^{\frac{ad}{\mu}(\alpha-\tau+1)+\varepsilon/2}\right) \end{aligned}$$ for $N$ large enough. In the second part the argument of $F_L$ will be at most $t_0$, hence we can use that $F_L(t)\le c_2t^\beta$ in this regime: $$\begin{aligned} \widetilde I_2 &:= \int_{z=t_0^{-1/\mu}N^{ad/\mu}}^{r^d} \left(1\wedge\dfrac{z}{r^d}\right)^\alpha \cdot F_L(N^{ad}z^{-\mu}) \cdot z^{-\tau}\ell^{\star}(z)\,\mathrm{d}z \\ &\stackrel{\eqref{eq:F_L-condition}}{\le} c_2r^{-d\alpha} \int_{z=t_0^{-1/\mu}N^{ad/\mu}}^{r^d} z^{\alpha-\tau}(N^{ad}z^{-\mu})^{\beta}\ell^{\star}(z)\,\mathrm{d}z \\ &= c_2N^{ad\beta}r^{-d\alpha} \int_{z=t_0^{-1/\mu}N^{ad/\mu}}^{r^d} z^{\alpha-\tau-\mu\beta}\ell^{\star}(z)\,\mathrm{d}z. 
\end{aligned}$$ If $\alpha-\tau-\mu\beta+1>0$, we can again use Proposition 1.5.8 of [@bingham1989regular] on the integral to get that [\[cst:no_long_cheap_edge17\]]{#cst:no_long_cheap_edge17 label="cst:no_long_cheap_edge17"} [\[cst:no_long_cheap_edge18\]]{#cst:no_long_cheap_edge18 label="cst:no_long_cheap_edge18"} $$\begin{aligned} \label{eq:I2-tilde-new} \widetilde I_2 \le c_2N^{ad\beta}r^{-d\alpha} C_{\ref{cst:no_long_cheap_edge17}}r^{d(\alpha-\tau-\mu\beta+1)}\ell^{\star}(r^d) \le C_{\ref{cst:no_long_cheap_edge18}}N^{ad\beta}r^{-d(\tau+\mu\beta-1)+\varepsilon/2} \end{aligned}$$ for $N$ large enough by Potter's bound. If $\alpha-\tau-\mu\beta+1=0$, we use $\ell^{\star}(z)=o(z^{\frac{\varepsilon}{2d}})$ to get [\[cst:no_long_cheap_edge19\]]{#cst:no_long_cheap_edge19 label="cst:no_long_cheap_edge19"} [\[cst:no_long_cheap_edge20\]]{#cst:no_long_cheap_edge20 label="cst:no_long_cheap_edge20"} $$\begin{aligned} \label{eq:I_2_bound_case2-2} \widetilde I_2 \le C_{\ref{cst:no_long_cheap_edge19}}N^{ad\beta}r^{-d\alpha} \int_{z=t_0^{-1/\mu}N^{ad/\mu}}^{r^d} z^{\frac{\varepsilon}{2d}-1} \,\mathrm{d}z \le C_{\ref{cst:no_long_cheap_edge20}}N^{ad\beta}r^{-d(\tau-1+\mu\beta)+\varepsilon/2}. 
\end{aligned}$$ Finally, when $\alpha-\tau-\mu\beta+1<0$, we use Potter's bound to get $\ell^{\star}(z)=o(z^{\frac{\mu\varepsilon}{2ad}})$, and since in this case we assume that [\[eq:delta_no_long_cheap_edge_2\]](#eq:delta_no_long_cheap_edge_2){reference-type="eqref" reference="eq:delta_no_long_cheap_edge_2"} holds, the integral is convergent and we have [\[cst:no_long_cheap_edge21\]]{#cst:no_long_cheap_edge21 label="cst:no_long_cheap_edge21"} [\[cst:no_long_cheap_edge22\]]{#cst:no_long_cheap_edge22 label="cst:no_long_cheap_edge22"} $$\begin{aligned} \label{eq:I_2_bound_case2-3} \widetilde I_2 \le C_{\ref{cst:no_long_cheap_edge21}}N^{ad\beta}r^{-d\alpha} \int_{z=t_0^{-1/\mu}N^{ad/\mu}}^{r^d} z^{\alpha-\tau-\mu\beta+\frac{\mu\varepsilon}{2ad}} \,\mathrm{d}z \le C_{\ref{cst:no_long_cheap_edge22}}r^{-d\alpha}N^{\frac{ad}{\mu}(\alpha-\tau+1)+\varepsilon/2}. \end{aligned}$$ Combining equations [\[eq:I2-tilde-new\]](#eq:I2-tilde-new){reference-type="eqref" reference="eq:I2-tilde-new"}--[\[eq:I_2\_bound_case2-3\]](#eq:I_2_bound_case2-3){reference-type="eqref" reference="eq:I_2_bound_case2-3"}, and using the fact that $r \ge N$ in the inner integral, we get [\[cst:no_long_cheap_edge23\]]{#cst:no_long_cheap_edge23 label="cst:no_long_cheap_edge23"} $$\begin{aligned} \label{eq:I_2_bound_case2} \widetilde I_2\le C_{\ref{cst:no_long_cheap_edge23}}\left(N^{ad\beta}r^{-d(\tau-1+\mu\beta)} + r^{-d\alpha}N^{\frac{ad}{\mu}(\alpha-\tau+1)}\right) r^{\varepsilon/2}. \end{aligned}$$ Finally, the third part of the integral in $\Lambda(r)$ in [\[eq:lambda-r-detailed\]](#eq:lambda-r-detailed){reference-type="eqref" reference="eq:lambda-r-detailed"} is given by $$\begin{aligned} \widetilde I_3 := \int_{z=r^d}^{\infty} \left(1\wedge\dfrac{z}{r^d}\right)^\alpha \cdot F_L(N^{ad}z^{-\mu}) \cdot z^{-\tau}\ell^{\star}(z)\,\mathrm{d}z \le c_2 N^{ad\beta} \int_{z=r^d}^{\infty}z^{-\mu\beta-\tau}\ell^{\star}(z)\,\mathrm{d}z. 
\end{aligned}$$ Again, we can apply Proposition 1.5.10 of [@bingham1989regular] to treat the inner integral and get, by Potter's bound when $N$ is sufficiently large, [\[cst:no_long_cheap_edge24\]]{#cst:no_long_cheap_edge24 label="cst:no_long_cheap_edge24"} [\[cst:no_long_cheap_edge25\]]{#cst:no_long_cheap_edge25 label="cst:no_long_cheap_edge25"} $$\begin{aligned} \label{eq:I_3_bound_case2} \widetilde I_3 \le C_{\ref{cst:no_long_cheap_edge24}} N^{ad\beta}r^{-d(\tau-1+\mu\beta)}\ell^{\star}(r^d)\le C_{\ref{cst:no_long_cheap_edge25}}N^{ad\beta}r^{-d(\tau-1+\mu\beta)+\varepsilon/2}. \end{aligned}$$ Combining [\[eq:I_1\_bound_case2\]](#eq:I_1_bound_case2){reference-type="eqref" reference="eq:I_1_bound_case2"}, [\[eq:I_2\_bound_case2\]](#eq:I_2_bound_case2){reference-type="eqref" reference="eq:I_2_bound_case2"} and [\[eq:I_3\_bound_case2\]](#eq:I_3_bound_case2){reference-type="eqref" reference="eq:I_3_bound_case2"}, while keeping in mind that $r\ge N$, we get [\[cst:no_long_cheap_edge26\]]{#cst:no_long_cheap_edge26 label="cst:no_long_cheap_edge26"} $$\begin{aligned} \Lambda(r) = \widetilde I_1 + \widetilde I_2 + \widetilde I_3 \le C_{\ref{cst:no_long_cheap_edge26}}\left(r^{-d\alpha}+r^{-d\alpha}N^{\frac{ad}{\mu}(\alpha-\tau+1)} + N^{ad\beta}r^{-d(\tau+\mu\beta-1)}\right)r^{\varepsilon/2}. 
\end{aligned}$$ Returning again to [\[eq:ENA\]](#eq:ENA){reference-type="eqref" reference="eq:ENA"}, using this bound on $\Lambda(r)$ we obtain [\[cst:no_long_cheap_edge28\]]{#cst:no_long_cheap_edge28 label="cst:no_long_cheap_edge28"} [\[cst:no_long_cheap_edge27\]]{#cst:no_long_cheap_edge27 label="cst:no_long_cheap_edge27"} $$\begin{aligned} E(A,N,a) \le &C_{\ref{cst:no_long_cheap_edge28}}A^d\int_{r=N}^{dA} r^{-d(\alpha-1)+\varepsilon/2-1} \left(1+N^{\frac{ad}{\mu}(\alpha-\tau+1)}\right) + r^{-d(\tau-2+\mu\beta)+\varepsilon/2-1}N^{ad\beta}\,\mathrm{d}r \\ \stackrel{\eqref{eq:delta_no_long_cheap_edge_1}}{\le} &C_{\ref{cst:no_long_cheap_edge27}}A^dN^{\varepsilon/2} \left(N^{-d(\alpha-1)} + N^{-d(\alpha-1-\frac{a}{\mu}(\alpha-\tau+1))} + N^{-d(\tau-2+(\mu-a)\beta)}\right) \\ \le &A^dN^{\varepsilon}\left(N^{-d(\alpha-1)} + N^{-d(\alpha-1-\frac{a}{\mu}(\alpha-\tau+1))} + N^{-d(\tau-2+(\mu-a)\beta)}\right), \end{aligned}$$ where the last inequality holds for $N$ large enough. Thus we have proved [\[eq:no_long_cheap_edge\]](#eq:no_long_cheap_edge){reference-type="eqref" reference="eq:no_long_cheap_edge"} when $a < \mu$. ◻ ## Good blocks {#sec:good-boxes} In the renormalisation scheme, we shall shortly cover the space with blocks that are iteratively contained in larger and larger boxes. The smallest boxes are level-$1$ boxes, and we group them together in level-$2$ boxes, and so on. A level-$k$ box thus contains sub-boxes of level $k-1$, which in turn contain sub-boxes of level $k-2$, and so on until we reach the level-$1$ boxes. Now we turn to the definition of good blocks. Berger [@berger2004lower] defined them as boxes that do not contain edges of linear length (in the box size), and additionally the same property must hold for certain translates and recursively for most of their sub-boxes. In our setting, we have to modify the definition since long edges do exist. 
However, we can define a box to be 'good' if all edges of linear length are costly enough and arrive at the following definition. **Definition 12**. *Let $k_0:=16\cdot30^d$, let $A_1>1$ and $u\in (0,1)$ be constants, and define $A_k := A_1k!^2$ for all $k \ge k_0$.* *A *$k$-block* is defined as a $d$-dimensional cube of side length $A_k$. For $k > k_0$, there is a natural partition of a $k$-block $Q$ into $(k-1)$-blocks; we call these the *children* of $Q$. We denote by $Q_k$ the $k$-block centred at the origin.* *We will define the notion of goodness recursively. Let $\eta\in(0,1]$. We say that a $k_0$-block $Q$ is *$\eta$-good* if every edge internal to $Q$ has cost at least $u$. For $k > k_0$ and a $k$-block $Q$, consider all $3^d$ $k$-blocks $Q'$ of the form $Q+jA_{k-1}/2$ for $j\in\{ -1, 0, 1 \}^d$. We say that $Q$ is *$\eta$-good* if for all $Q'=Q+jA_{k-1}/2$:* (i) *Every edge internal to $Q'$ of length larger than $A_{k-1}/100$ has cost at least $uA_k^{\eta}$.* (ii) *All but at most $3^d$ children of $Q'$ are $\eta$-good.* We will fix $A_1$ and then $u$ in Definition [Definition 12](#def:good_box){reference-type="ref" reference="def:good_box"} such that they satisfy equations [\[eq:u_cond\]](#eq:u_cond){reference-type="eqref" reference="eq:u_cond"}, [\[eq:A_1\_cond_3\]](#eq:A_1_cond_3){reference-type="eqref" reference="eq:A_1_cond_3"}, [\[eq:A_1\_cond_2\]](#eq:A_1_cond_2){reference-type="eqref" reference="eq:A_1_cond_2"}, [\[eq:A_1\_cond_1\]](#eq:A_1_cond_1){reference-type="eqref" reference="eq:A_1_cond_1"} and [\[eq:A_1\_cond\]](#eq:A_1_cond){reference-type="eqref" reference="eq:A_1_cond"} below. 
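The bookkeeping of Definition 12 can be sketched in a few lines; the value $A_1=2$ and the small levels below are purely illustrative (the definition only uses $k\ge k_0=16\cdot 30^d$), but they show that consecutive side lengths differ by the factor $k^2$, so that each $k$-block has $k^{2d}$ children.

```python
from math import factorial

def block_side(k, A1):
    """Side length A_k = A_1 * (k!)^2 of a k-block (Definition 12)."""
    return A1 * factorial(k) ** 2

A1, d = 2, 3                              # illustrative values
for k in range(2, 6):
    ratio = block_side(k, A1) // block_side(k - 1, A1)
    assert ratio == k ** 2                # A_k / A_{k-1} = k^2
    children = ratio ** d                 # number of (k-1)-blocks inside
    assert children == k ** (2 * d)
```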
We will also assume that $A_1$ is large enough so that the upper bound of Lemma [Lemma 11](#lem:no_long_cheap_edge){reference-type="ref" reference="lem:no_long_cheap_edge"} holds for certain choices of $N$ and $\varepsilon$ specified case-by-case below in [\[eq:general_E(k)-2\]](#eq:general_E(k)-2){reference-type="eqref" reference="eq:general_E(k)-2"}, [\[eq:general_E(k)\]](#eq:general_E(k)){reference-type="eqref" reference="eq:general_E(k)"}, [\[eq:eps-1-def\]](#eq:eps-1-def){reference-type="eqref" reference="eq:eps-1-def"}, [\[eq:eps-2-def\]](#eq:eps-2-def){reference-type="eqref" reference="eq:eps-2-def"} and [\[eq:eps-3-def\]](#eq:eps-3-def){reference-type="eqref" reference="eq:eps-3-def"}. Recall the definitions of $\mu_{\mathrm{pol}}$ from [\[eq:mu_pol_log\]](#eq:mu_pol_log){reference-type="eqref" reference="eq:mu_pol_log"} and $\eta_0$ from [\[eq:eta_0\]](#eq:eta_0){reference-type="eqref" reference="eq:eta_0"}. **Proposition 13**. *Consider $1$-FPP on IGIRG or SFP of Definition [Definition 1](#def:girg){reference-type="ref" reference="def:girg"} satisfying the assumptions given in [\[eq:power_law\]](#eq:power_law){reference-type="eqref" reference="eq:power_law"}--[\[eq:cost\]](#eq:cost){reference-type="eqref" reference="eq:cost"} with $0\in\mathcal{V}$, but $L$ not necessarily satisfying Assumption [Assumption 3](#assu:L){reference-type="ref" reference="assu:L"}. Assume that* 1) *[\[item:1\]]{#item:1 label="item:1"} $\boldsymbol{\alpha>2}$, $\boldsymbol{\tau>3}$, $\mu\ge 0$ arbitrary, and the distribution of $L$ is arbitrary satisfying $\mathbb P(L>0)=1$. Then there are choices for $A_1$ and $u$ in Definition [Definition 12](#def:good_box){reference-type="ref" reference="def:good_box"} for which a.s. 
there exists $k_1\ge k_0$ such that $Q_k$ is $1$-good for all $k\ge k_1$.* 2) *[\[item:2\]]{#item:2 label="item:2"} $\boldsymbol{\alpha>2}$, $\boldsymbol{\tau\in(2,3]}$, $\boldsymbol{\mu>\mu_{\log}}$, and $L$ satisfies [\[eq:F_L-condition\]](#eq:F_L-condition){reference-type="eqref" reference="eq:F_L-condition"} in Assumption [Assumption 3](#assu:L){reference-type="ref" reference="assu:L"}. Then for any $\delta>0$ there are choices for $A_1$ and $u$ in Definition [Definition 12](#def:good_box){reference-type="ref" reference="def:good_box"} for which a.s. there exists $k_1\ge k_0$ such that $Q_k$ is $(\eta_0-\delta)$-good for all $k\ge k_1$.* 3) *[\[item:3\]]{#item:3 label="item:3"} $\boldsymbol{\alpha>2}$ , $\boldsymbol{\tau\in(2,3]}$, $\boldsymbol{\mu>\mu_{\mathrm{pol}}}$, and $L$ satisfies [\[eq:F_L-condition\]](#eq:F_L-condition){reference-type="eqref" reference="eq:F_L-condition"} in Assumption [Assumption 3](#assu:L){reference-type="ref" reference="assu:L"}. Then there are choices for $A_1$ and $u$ in Definition [Definition 12](#def:good_box){reference-type="ref" reference="def:good_box"} for which a.s. there exists $k_1\ge k_0$ such that $Q_k$ is $1$-good for all $k\ge k_1$.* **Remark 14**. Proposition [Proposition 13](#prop:large_box_good){reference-type="ref" reference="prop:large_box_good"} implicitly limits the type of paths present between two vertices: a path either uses short edges (of which it needs to use many if the endpoints are far away) or it uses long edges, which do have high cost. We will see that in Case [\[item:1\]](#item:1){reference-type="ref" reference="item:1"} paths that have long edges are simply not present. In Case [\[item:3\]](#item:3){reference-type="ref" reference="item:3"} they are present, but the cost of long edges is so high that the paths are not more efficient than paths which only use short edges (corresponding to case $\eta_0=1$ of linear cost-distances). 
Case [\[item:2\]](#item:2){reference-type="ref" reference="item:2"} is the most exotic: long edges are present, but their cost is on a polynomial scale compared to their Euclidean length, and their precise exponent will give polynomial (but nonlinear) cost-distances. The proof we give closely follows that of [@deprez2015inhomogeneous Lemma 14]. We show that $$\begin{aligned} \label{eq:tail_bound_k1} \mathbb{P}(Q_k \textnormal{ is not } (\eta_0-\delta)\textnormal{-good}) \le e^{-k} \end{aligned}$$ for $k\ge k_0$, where $\delta$ is an arbitrary positive constant for Case [\[item:2\]](#item:2){reference-type="ref" reference="item:2"} and $\delta=0$ for Cases [\[item:1\]](#item:1){reference-type="ref" reference="item:1"} and [\[item:3\]](#item:3){reference-type="ref" reference="item:3"}. Thus $$\begin{aligned} \mathbb{P}(\forall k\ge k': Q_{k} \textnormal{ is } (\eta_0-\delta)\textnormal{-good}) \ge 1- \sum_{k\ge k'} e^{-k} \ge 1- 2e^{-k'}. \end{aligned}$$ The existence of $k_1$ (so that $Q_k$ is $(\eta_0-\delta)$-good for all $k\ge k_1$) then follows from the Borel-Cantelli Lemma, and for a fixed $q>0$ we can achieve $\mathbb{P}(k_1 \le k') \ge 1-q$ for ${k'} = |\log(q/2)|$. We will show in the later parts of the section that the lower bounds on cost-distances follow *deterministically* from Proposition [Proposition 13](#prop:large_box_good){reference-type="ref" reference="prop:large_box_good"}, for all vertices at distance at least $r_0:=c\cdot A_{k_1}$ for a constant $c$, and thus for all vertices at distance $r\ge r_0=c\cdot A_{|\log(q/2)|} = cA_1\cdot (|\log(q/2)|!)^2 \ge e^{c'\cdot \log(1/q) \cdot \log \log(1/q)}$ for a constant $c'>0$. However, we did not try to optimise these bounds. 
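The constant $2$ in the display above comes from $\sum_{k\ge k'}e^{-k}=e^{-k'}/(1-e^{-1})\approx 1.582\,e^{-k'}\le 2e^{-k'}$; a one-line numerical check:

```python
import math

def tail_sum(k_prime, terms=200):
    """Truncated geometric tail sum of e^{-k} over k >= k'; the omitted
    remainder beyond `terms` summands is negligible."""
    return sum(math.exp(-k) for k in range(k_prime, k_prime + terms))

# 1/(1 - e^{-1}) ~ 1.582 < 2, so the tail is at most 2 e^{-k'}
for kp in range(1, 30):
    assert tail_sum(kp) <= 2 * math.exp(-kp)
```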
Compared to [@deprez2015inhomogeneous], which was developed for graph distances, in order to show [\[eq:tail_bound_k1\]](#eq:tail_bound_k1){reference-type="eqref" reference="eq:tail_bound_k1"} we additionally need to derive an upper bound on the probability that property *(i)* of Definition [Definition 12](#def:good_box){reference-type="ref" reference="def:good_box"} fails, i.e., to handle the cost of edges as well, not just their length. We do this using Lemma [Lemma 11](#lem:no_long_cheap_edge){reference-type="ref" reference="lem:no_long_cheap_edge"}. *Proof of Proposition [Proposition 13](#prop:large_box_good){reference-type="ref" reference="prop:large_box_good"}.* In order to unify notation, we define $\eta_0 := 1$ and $\delta := 0$ in Cases [\[item:1\]](#item:1){reference-type="ref" reference="item:1"} and [\[item:3\]](#item:3){reference-type="ref" reference="item:3"}, which is also consistent with the definition of $\eta_0$ in [\[eq:eta_0\]](#eq:eta_0){reference-type="eqref" reference="eq:eta_0"}. So we need to show $(\eta_0-\delta)$-goodness in all three cases. We fix the values of $A_1$ and $u$ from Definition [Definition 12](#def:good_box){reference-type="ref" reference="def:good_box"}. They will both depend on the set $\textnormal{\texttt{par}}\xspace$ of model parameters and on $\delta$ (in Case [\[item:2\]](#item:2){reference-type="ref" reference="item:2"}). We will first choose $A_1$ to be suitably large, and then choose $u$ to be suitably small as a function of $A_1$. We will prove the result using the Borel-Cantelli Lemma, by showing that $\sum_{k\ge k_0} \mathbb{P}\left(Q_k \textnormal{ is not } (\eta_0-\delta)\textnormal{-good}\right)$ is finite. Let $\psi_k:=\mathbb{P}(Q_k \textnormal{ is not } (\eta_0-\delta)\textnormal{-good})$. We will show inductively that $\psi_k \le e^{-k}$. 
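The size of $k_0=16\cdot 30^d$ from Definition 12 can be sanity-checked numerically: it is large enough that $3^dk^{4d}e^{-2(k-1)} \le \tfrac{1}{2}e^{-k}$ for all $k\ge k_0$. A quick sketch in log form; the $50$-value window is illustrative, since the logarithm of the left/right ratio is decreasing in $k$ once $k>4d$.

```python
import math

def k0_condition_holds(k, d):
    """Check 3^d * k^{4d} * e^{-2(k-1)} <= (1/2) e^{-k} in log form,
    avoiding overflow for large k."""
    lhs = d * math.log(3) + 4 * d * math.log(k) - 2 * (k - 1)
    return lhs <= -k - math.log(2)

for d in (1, 2, 3):
    k0 = 16 * 30 ** d
    assert all(k0_condition_holds(k, d) for k in range(k0, k0 + 50))
```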
Note that our choice of $k_0=16\cdot 30^d$ in Definition [Definition 12](#def:good_box){reference-type="ref" reference="def:good_box"} ensures that $$\begin{aligned} \label{eq:k_0_cond} 3^dk^{4d}e^{-2(k-1)} \le \tfrac{1}{2}e^{-k} \qquad\text{ for all $k\geq k_0$.} \end{aligned}$$ **Base case.** We start by bounding $\psi_{k_0}$. In all three cases we have $\mathbb P(L>0)=1$ (Assumption [Assumption 3](#assu:L){reference-type="ref" reference="assu:L"} is a stronger assumption), so we can choose the constant $u\in (0,1)$ from Definition [Definition 12](#def:good_box){reference-type="ref" reference="def:good_box"} small enough so that $$\label{eq:u_cond} F_L(u)\le e^{-2k_0}/(4\mathbb{E}[T]) \textnormal{, where } T \textnormal{ is the number of edges inside } Q_{k_0}.$$ Here, $T=T_{k_0}$ has finite expectation because the number of vertices in $Q_{k_0}$ is fixed (in SFP) or has a finite second moment (in IGIRG), and the number of edges with both endpoints within $Q_{k_0}$ is at most the square of the number of vertices. Since the vertex-weights are always at least $1$ and $\mu\ge 0$, for each edge it holds that $\mathcal C(e) \ge L_e$. Hence by Markov's inequality and a union bound, $$\begin{aligned} \psi_{k_0} &= \mathbb{P}(\exists e \in Q_{k_0} \textnormal{ with } \mathcal{C}(e) < u) \le \mathbb{P}(\exists e \in Q_{k_0} \textnormal{ with } L_e\le u) \\ &\le\mathbb{P}(T>2e^{k_0}\mathbb{E}[T]) + \mathbb{P}(\exists e \in Q_{k_0} \textnormal{ with } L_e\le u \mid T\le 2e^{k_0}\mathbb{E}[T]) \\ &\le \dfrac{1}{2}e^{-k_0} + 2e^{k_0}\mathbb{E}[T]\cdot \dfrac{e^{-2k_0}}{4\mathbb{E}[T]} = e^{-k_0}. \end{aligned}$$ **Bounding the failure probability of property *(i)*.** For the induction step, we need to derive an upper bound for the probability that property *(i)* in the definition of $(\eta_0-\delta)$-goodness fails. 
Using Markov's inequality and translation invariance, for any fixed $j\in\{ -1, 0, 1 \}^d$ we have $$\begin{aligned} &\mathbb{P}(Q_k+jA_{k-1}/2 \textnormal{ fails to have property \emph{(i)}}) \\ &\le \mathbb{E}[|\{vw\subset Q_k \textnormal{ edge} : |v-w|\ge A_{k-1}/100, \mathcal{C}(vw)\le uA_k^{\eta_0-\delta}\}|] =: E(k). \end{aligned}$$ We will bound $E(k)$ from above using Lemma [Lemma 11](#lem:no_long_cheap_edge){reference-type="ref" reference="lem:no_long_cheap_edge"}. We will specify $\varepsilon,a$ in Lemma [Lemma 11](#lem:no_long_cheap_edge){reference-type="ref" reference="lem:no_long_cheap_edge"} later, only depending on [`par`]{.nodecor}. Then we will set $A := A_k$, $N:=A_{k-1}/100<A$, and choose $A_1$ large enough (with respect to $\varepsilon, a$) so that Lemma [Lemma 11](#lem:no_long_cheap_edge){reference-type="ref" reference="lem:no_long_cheap_edge"} can be applied. When we are able to set $a\ge \mu$, then with these choices of $A, N$ we obtain, for some constant $C>0$ that depends only on [`par`]{.nodecor}, $$\label{eq:general_E(k)-2} \begin{aligned} E(k) &\le A_k^d\cdot\frac{A_{k-1}^{\varepsilon}}{100^{\varepsilon}}\left(\frac{A_{k-1}^{-d(\alpha-1)}}{100^{-d(\alpha-1)}} + \frac{A_{k-1}^{-d(\tau-2)}}{100^{-d(\tau-2)}} \right) \\ &\le 100^{C}k^{2d}\left(A_{k-1}^{-d(\alpha-2)+\varepsilon} + A_{k-1}^{-d(\tau-3)+\varepsilon}\right), \end{aligned}$$ where we substituted $A_k = k^2 A_{k-1}$ for the second inequality. Recall that this bound holds for any $L$ and does not require Assumption [Assumption 3](#assu:L){reference-type="ref" reference="assu:L"}. Observe that the second term only tends to zero when $\tau>3$, so we can set $a\ge \mu$ only when $\tau>3$. For $\tau\le 3$, we will have to set $a<\mu$, again with $A := A_k$ and $N:=A_{k-1}/100<A$. 
By possibly increasing the constant $C>0$, if $A_1$ is sufficiently large so that Lemma [Lemma 11](#lem:no_long_cheap_edge){reference-type="ref" reference="lem:no_long_cheap_edge"} can be applied, then [\[eq:no_long_cheap_edge\]](#eq:no_long_cheap_edge){reference-type="eqref" reference="eq:no_long_cheap_edge"} yields $$\label{eq:general_E(k)} \begin{aligned} E(k) &\le A_k^d\cdot\frac{A_{k-1}^{\varepsilon}}{100^{\varepsilon}}\left(\frac{A_{k-1}^{-d(\alpha-1)}}{100^{-d(\alpha-1)}} + \frac{A_{k-1}^{-d(\alpha-1-\frac{a}{\mu}(\alpha-(\tau-1)))}}{100^{-d(\alpha-1-\frac{a}{\mu}(\alpha-\tau+1))}} + \frac{A_{k-1}^{-d(\tau-2+(\mu-a)\beta)}}{100^{-d(\tau-2+(\mu-a)\beta)}} \right) \\ &\le 100^{C}k^{2d}\left(A_{k-1}^{-d(\alpha-2)+\varepsilon} + A_{k-1}^{-d(\alpha-2-\frac{a}{\mu}(\alpha-(\tau-1)))+\varepsilon} + A_{k-1}^{-d(\tau-3+(\mu-a)\beta)+\varepsilon}\right). \end{aligned}$$ Note that this case of Lemma [Lemma 11](#lem:no_long_cheap_edge){reference-type="ref" reference="lem:no_long_cheap_edge"} required Assumption [Assumption 3](#assu:L){reference-type="ref" reference="assu:L"} on $L$. We now split into cases depending on the values of $\tau$ and $\mu$ to specify $a$ and $\varepsilon$. Proposition [Proposition 13](#prop:large_box_good){reference-type="ref" reference="prop:large_box_good"} always assumes $\alpha>2$ but we emphasise this for readability. **Case 1: $\boldsymbol{\tau >3, \alpha>2}$, and $\boldsymbol{\mu\ge 0}$, $L>0$ a.s., otherwise arbitrary.** In this case we have $\eta_0=1$ and $\delta=0$. Let us choose $a>\max\{\mu,1/d\}$. Then since $ad>1=\eta_0$ we can choose $A_1$ so large that $$\begin{aligned} \label{eq:A_1_cond_3} \left(\frac{A_{k-1}}{100}\right)^{ad} \ge A_{k-1}k^2 = A_{k} \quad \textnormal{for all } k > k_0. 
\end{aligned}$$ Since $\tau>3, \alpha>2$, we will set in [\[eq:general_E(k)-2\]](#eq:general_E(k)-2){reference-type="eqref" reference="eq:general_E(k)-2"} $$\begin{aligned} \label{eq:eps-1-def} \varepsilon:=\frac{d}{2}\min\left\{\alpha-2, \tau-3 \right\}>0. \end{aligned}$$ Plugging this $\varepsilon$ into the upper bound [\[eq:general_E(k)-2\]](#eq:general_E(k)-2){reference-type="eqref" reference="eq:general_E(k)-2"} yields $$\begin{aligned} E(k)&\le 2\cdot100^{C}k^{2d}A_{k-1}^{-\varepsilon} \le 100^{C+1} A_1^{-\varepsilon} k^{2d} ((k-1)!)^{-2\varepsilon}. \end{aligned}$$ **Case 2: $\boldsymbol{\tau \in (2,3], \alpha>2}$, and $\boldsymbol{\mu \in (\mu_{\log}, \mu_{\mathrm{pol}}]}$.** In this case, $\delta >0$ and Assumption [Assumption 3](#assu:L){reference-type="ref" reference="assu:L"} needs to hold for $L$. Define $$\begin{aligned} \label{eq:local-a-case2} a:=\min\left\{\mu-\frac{3-\tau}{\beta}, \frac{\mu(\alpha-2)}{\alpha-(\tau-1)}\right\}-\frac{\delta}{2d} = \frac{\eta_0}{d} -\frac{\delta}{2d}, \end{aligned}$$ where the last equation can be seen by using $\eta_0$ from [\[eq:eta_0\]](#eq:eta_0){reference-type="eqref" reference="eq:eta_0"} and $\mu_{\log} = (3-\tau)/\beta$ from [\[eq:mu_pol_log\]](#eq:mu_pol_log){reference-type="eqref" reference="eq:mu_pol_log"}. Also note that $\eta_0>0$ since $\mu >\mu_{\log} = (3-\tau)/\beta$, and $\alpha > 2\ge (\tau-1)$, and that the statement of Proposition [Proposition 13](#prop:large_box_good){reference-type="ref" reference="prop:large_box_good"} is stronger for smaller $\delta$. Therefore, we can assume that $a>0$. Note also that the first term of the minimum is at most $\mu$ and $\delta>0$, which implies $a<\mu$. (In fact, the second term of the minimum is also at most $\mu$.) 
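With illustrative (hypothetical) admissible parameter values — any $\tau\in(2,3]$, $\alpha>2$, $\mu>(3-\tau)/\beta$ and small $\delta>0$ would do — the defining properties of this choice of $a$ can be checked directly; the tiny tolerance only guards against floating-point round-off in the inequality that is tight by construction.

```python
def a_case2(tau, alpha, beta, mu, d, delta):
    """Case 2 exponent: a = min{mu-(3-tau)/beta, mu(alpha-2)/(alpha-(tau-1))} - delta/(2d)."""
    m = min(mu - (3 - tau) / beta, mu * (alpha - 2) / (alpha - (tau - 1)))
    return m - delta / (2 * d)

# illustrative admissible parameters (hypothetical values)
tau, alpha, beta, mu, d, delta = 2.5, 3.0, 1.0, 1.0, 2, 0.1
a = a_case2(tau, alpha, beta, mu, d, delta)
tol = 1e-12
assert 0 < a < mu
# the two rearranged inequalities used in the sequel
assert 3 - tau - (mu - a) * beta <= -beta * delta / (2 * d) + tol
assert 2 - alpha + (a / mu) * (alpha - (tau - 1)) \
    <= -(alpha - (tau - 1)) * delta / (2 * mu * d) + tol
```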
Moreover, by the definition of $a$ and the fact that $\alpha>2$ implies $\alpha-(\tau-1)>0$, rearranging [\[eq:local-a-case2\]](#eq:local-a-case2){reference-type="eqref" reference="eq:local-a-case2"} yields that $$\begin{aligned} \label{eq:large-box-good-consts} 3-\tau-(\mu-a)\beta\le-\frac{\beta\delta}{2d}, \qquad 2-\alpha+\frac{a}{\mu}(\alpha-(\tau-1))\le -\frac{(\alpha-(\tau-1))\delta}{2\mu d}. \end{aligned}$$ We choose $A_1$ large enough so that $$\begin{aligned} \label{eq:A_1_cond_2} \left(\frac{A_{k-1}}{100}\right)^{\eta_0-\delta/2} = \left(\frac{A_k}{100k^2}\right)^{\eta_0-\delta/2}\ge A_k^{\eta_0-\delta} > uA_k^{\eta_0-\delta} \quad \textnormal{for all } k >k_0. \end{aligned}$$ By definition of $a$, cf. equation [\[eq:local-a-case2\]](#eq:local-a-case2){reference-type="eqref" reference="eq:local-a-case2"}, $(A_{k-1}/100)^{ad}=(A_{k-1}/100)^{\eta_0-\delta/2} > uA_k^{\eta_0-\delta}$ as desired. Using $\alpha-(\tau-1) >0$ the first term in the upper bound [\[eq:general_E(k)\]](#eq:general_E(k)){reference-type="eqref" reference="eq:general_E(k)"} is dominated by the second term, and [\[eq:general_E(k)\]](#eq:general_E(k)){reference-type="eqref" reference="eq:general_E(k)"} becomes $$\begin{aligned} E(k)&\le 2\cdot100^{C}k^{2d}\left( A_{k-1}^{-d(\alpha-2-\frac{a}{\mu}(\alpha-\tau+1))+\varepsilon} + A_{k-1}^{-d(\tau-3+(\mu-a)\beta)+\varepsilon}\right). \end{aligned}$$ Now we set $$\label{eq:eps-2-def} \varepsilon:=\frac{\delta}{4}\min\left\{\beta, \frac{\alpha-(\tau-1)}{\mu}\right\}>0.$$ Using this $\varepsilon$, combined with [\[eq:large-box-good-consts\]](#eq:large-box-good-consts){reference-type="eqref" reference="eq:large-box-good-consts"}, it follows that $$\begin{aligned} E(k)&\le 100^{C+1}k^{2d}A_{k-1}^{-\varepsilon} = 100^{C+1} A_1^{-\varepsilon} k^{2d} ((k-1)!)^{-2\varepsilon}. 
\end{aligned}$$ **Case 3: $\boldsymbol{\tau \in (2,3]}$, $\boldsymbol{\alpha>2}$, and $\boldsymbol{\mu >\mu_{\mathrm{pol}}}$.** Here again we need Assumption [Assumption 3](#assu:L){reference-type="ref" reference="assu:L"} to hold for $L$, and we have $\eta_0=1$ and $\delta = 0$. Remembering the definition of $\mu_{\mathrm{pol}}$ in [\[eq:mu_pol_log\]](#eq:mu_pol_log){reference-type="eqref" reference="eq:mu_pol_log"}, rearranging $\mu>\mu_{\mathrm{pol}}$ yields that both $d(\alpha-2)-(\alpha-\tau+1)/\mu>0$ and $d(\tau-3+(\mu-1/d)\beta)>0$ hold. So we will show that we can set in [\[eq:general_E(k)\]](#eq:general_E(k)){reference-type="eqref" reference="eq:general_E(k)"} the following values of $\varepsilon$ and $a$: $$\begin{aligned} \label{eq:eps-3-def} \varepsilon:=\frac{1}{4}\min\left\{d(\alpha-2)-\frac{\alpha-(\tau-1)}{\mu}, d\left(\tau-3+(\mu-1/d)\beta\right)\right\}>0, \end{aligned}$$ and $$\begin{aligned} \label{eq:local-a-case3} a := \frac{1}{2}\Big(\frac1d + \min\left\{\frac{\mu(\alpha-2)}{\alpha-(\tau-1)}, \mu-\frac{3-\tau}{\beta} \right\}\Big) >0. \end{aligned}$$ We establish first that indeed $a<\mu$ and also that $ad>1$. To see the first, observe that $a$ averages $1/d$ with the minimum of two different quantities. By [\[eq:mu_pol_log\]](#eq:mu_pol_log){reference-type="eqref" reference="eq:mu_pol_log"} we have $1/d\le\mu_{\mathrm{pol}} <\mu$. Since $\tau\in(2,3]$, $(\alpha-2)/(\alpha-(\tau-1))\le1$ and so $\mu(\alpha-2)/(\alpha-\tau+1)\le\mu$, and $\mu-(3-\tau)/\beta\le\mu$. This implies in particular that $a$ averages one quantity strictly smaller than $\mu$ with another quantity smaller than or equal to $\mu$, showing that $a<\mu$. Since $\mu>\mu_{\mathrm{pol}}$, rearranging also yields $$\begin{aligned} \frac{\mu(\alpha-2)}{\alpha-(\tau-1)} > \frac{1}{d} \quad \textnormal{and} \quad \mu-\frac{3-\tau}{\beta} > \frac{1}{d}, \end{aligned}$$ so $a>1/d$. 
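These two facts can also be checked numerically for concrete (hypothetical, illustrative) parameter values; the two inequalities asserted first are exactly the rearranged forms of $\mu>\mu_{\mathrm{pol}}$ displayed above.

```python
def a_case3(tau, alpha, beta, mu, d):
    """Case 3 exponent: the average of 1/d and the minimum of two quantities."""
    m = min(mu * (alpha - 2) / (alpha - (tau - 1)), mu - (3 - tau) / beta)
    return 0.5 * (1 / d + m)

# illustrative parameters with mu large enough that both rearranged
# inequalities hold (hypothetical values)
tau, alpha, beta, mu, d = 2.5, 3.0, 1.0, 2.0, 2
assert mu * (alpha - 2) / (alpha - (tau - 1)) > 1 / d
assert mu - (3 - tau) / beta > 1 / d
a = a_case3(tau, alpha, beta, mu, d)
assert 1 / d < a < mu
```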
This implies $ad>1=\eta_0$, and thus, we can choose $A_1$ so large that $$\begin{aligned} \label{eq:A_1_cond_1} \left(\frac{A_{k-1}}{100}\right)^{ad} \ge A_k = A_k^{\eta_0-\delta} > uA_k^{\eta_0-\delta} \quad \textnormal{for all } k > k_0. \end{aligned}$$ Since $\alpha>2$ and $\tau\le 3$, we have $\alpha-(\tau-1)>0$ and thus, as in Case 2 the upper bound [\[eq:general_E(k)\]](#eq:general_E(k)){reference-type="eqref" reference="eq:general_E(k)"} becomes $$\begin{aligned} \label{eq:Ek_case3} E(k)&\le 2\cdot100^{C}k^{2d}\left( A_{k-1}^{-d(\alpha-2-\frac{a}{\mu}(\alpha-(\tau-1)))+\varepsilon} + A_{k-1}^{-d(\tau-3+(\mu-a)\beta)+\varepsilon}\right). \end{aligned}$$ The definition of $\varepsilon$ in [\[eq:eps-3-def\]](#eq:eps-3-def){reference-type="eqref" reference="eq:eps-3-def"} and $a$ in [\[eq:local-a-case3\]](#eq:local-a-case3){reference-type="eqref" reference="eq:local-a-case3"} implies $2\varepsilon\le \tfrac12d(\tau-3+(\mu-1/d)\beta)$ and $a\le\tfrac12(1/d + \mu -(3-\tau)/\beta)$. From these, it is elementary to check that $$\begin{aligned} -d(\tau-3+(\mu-a)\beta) \le -2\varepsilon. \end{aligned}$$ Likewise, from $2\varepsilon\le \tfrac12(d(\alpha-2)-(\alpha-(\tau-1))/\mu)$ and $a\le\tfrac12(1/d + \mu(\alpha-2)/(\alpha-\tau+1))$ we derive $$\begin{aligned} -d\left(\alpha-2-\frac{a}{\mu}(\alpha-\tau+1)\right) \le -2\varepsilon, \end{aligned}$$ so [\[eq:Ek_case3\]](#eq:Ek_case3){reference-type="eqref" reference="eq:Ek_case3"} simplifies to $$\begin{aligned} E(k) \le 100^{C+1}k^{2d}A_{k-1}^{-\varepsilon} = 100^{C+1}A_1^{-\varepsilon}k^{2d}((k-1)!)^{-2\varepsilon}. \end{aligned}$$ We remark that our approach does not give $\delta=0$ in the boundary case $\mu=\mu_{\mathrm{pol}}$ for $\tau \in (2,3)$, which would yield linear distances. 
The reason is that for $\delta =0$ we cannot find a value of $a$ such that $ad > 1$ holds and which gives a negative exponent in the error bound [\[eq:no_long_cheap_edge\]](#eq:no_long_cheap_edge){reference-type="eqref" reference="eq:no_long_cheap_edge"}. Since Lemma [Lemma 11](#lem:no_long_cheap_edge){reference-type="ref" reference="lem:no_long_cheap_edge"} is essentially tight, this means that there exist long edges whose cost is sublinear in their length. **Induction step.** Combining the three cases above, we conclude that in all cases we have for an appropriately chosen $\varepsilon>0$ (depending on the values of $\mu$ and $\tau$) $$\begin{aligned} \label{eq:failing-property1} \mathbb{P}(Q_k+jA_{k-1}/2 \textnormal{ fails to have property \emph{(i)}}) \le E(k) \le 100^{C+1} A_1^{-\varepsilon} k^{2d} ((k-1)!)^{-2\varepsilon}. \end{aligned}$$ Importantly, both $C$ and $\varepsilon$ depend only on [`par`]{.nodecor}. We define $$\label{eq:Cprime} C'=C'(A_1):=100^{C+1} A_1^{-\varepsilon},$$ which is a constant depending on [`par`]{.nodecor} and $A_1$. We note that for $k>k_0$, the $k$-block $Q_k$ is not $(\eta_0-\delta)$-good if at least one of its $3^d$ translations $Q_k+jA_{k-1}/2, j\in\{-1, 0, 1\}^d$ fails to have property *(i)* or *(ii)* of Definition [Definition 12](#def:good_box){reference-type="ref" reference="def:good_box"}. The failure probability of property *(i)* is bounded by [\[eq:failing-property1\]](#eq:failing-property1){reference-type="eqref" reference="eq:failing-property1"}. Therefore, translation invariance together with a union bound, and recalling property *(ii)*, implies that $$\begin{aligned} \psi_k \le 3^d\left(C'k^{2d}((k-1)!)^{-2\varepsilon} + \mathbb{P}(\textnormal{at least } 3^d+1 \textnormal{ children of } Q_k \textnormal{ are not } (\eta_0-\delta)\textnormal{-good})\right). \end{aligned}$$ Observe the following: if a box has at least $3^d+1$ bad child-boxes, one can find at least one pair of non-neighbouring bad child-boxes. 
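This pigeonhole observation is easy to verify by brute force in a small window: a set of cells that are pairwise $\ell^\infty$-neighbouring must fit in a $2\times\dots\times 2$ window, so it has at most $2^d$ elements; hence any $2^d+1$ (a fortiori any $3^d+1$) bad children contain a non-neighbouring pair. A sketch for $d=2$:

```python
from itertools import combinations, product

def has_far_pair(cells):
    """True if some pair of cells is at l-infinity distance >= 2,
    i.e. the cells are non-neighbouring child-boxes."""
    return any(max(abs(a - b) for a, b in zip(p, q)) >= 2
               for p, q in combinations(cells, 2))

d = 2
grid = list(product(range(4), repeat=d))   # a 4x4 window of child-boxes
# every choice of 2^d + 1 = 5 distinct cells contains a far pair
assert all(has_far_pair(c) for c in combinations(grid, 2 ** d + 1))
```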
Hence the event in the probability above implies that there are at least two child-boxes $Q^{\star}_{k-1}$ and $Q^{\star\star}_{k-1}$ of $Q_k$ that are not $(\eta_0-\delta)$-good and whose centres have distance at least $2A_{k-1}$. In particular, any vertex in $Q^{\star}_{k-1}$ is at Euclidean distance at least $A_{k-1}$ from any vertex in $Q^{\star\star}_{k-1}$, which ensures that the events $\{Q^{\star}_{k-1} \textnormal{ is not } (\eta_0-\delta)\textnormal{-good}\}$ and $\{Q^{\star\star}_{k-1} \textnormal{ is not } (\eta_0-\delta)\textnormal{-good}\}$ are independent. Since there are $k^{2d}$ child-boxes of $Q_k$, there are at most $\binom{k^{2d}}{2}\le k^{4d}$ ways to choose $Q^{\star}_{k-1}$ and $Q^{\star\star}_{k-1}$, which are independently bad with probability $\psi_{k-1}$. So we deduce the bound $$\begin{aligned} \psi_k \le 3^d\left(C'k^{2d}((k-1)!)^{-2\varepsilon} + k^{4d}\psi_{k-1}^2\right). \end{aligned}$$ Note that $C'\rightarrow0$ as $A_1\rightarrow\infty$, by [\[eq:Cprime\]](#eq:Cprime){reference-type="eqref" reference="eq:Cprime"}. Thus, we can choose $A_1$ large enough (and this choice only depends on [`par`]{.nodecor} and $\varepsilon$) so that for all $k > k_0$, $$\begin{aligned} \label{eq:A_1_cond} \psi_k \le \tfrac{1}{2}e^{-k} + 3^dk^{4d}\psi_{k-1}^2. \end{aligned}$$ We now prove by induction on $k$ that $$\begin{aligned} \label{eq:psi_k_bound} \psi_k \le e^{-k} \end{aligned}$$ for all $k\ge k_0$. Indeed, we have already seen that [\[eq:psi_k\_bound\]](#eq:psi_k_bound){reference-type="eqref" reference="eq:psi_k_bound"} holds for $k=k_0$. For $k>k_0$, assuming that [\[eq:psi_k\_bound\]](#eq:psi_k_bound){reference-type="eqref" reference="eq:psi_k_bound"} holds for $k-1$, we get $$\begin{aligned} &\psi_k\le \tfrac{1}{2}e^{-k} + 3^dk^{4d}\psi_{k-1}^2 \le \tfrac{1}{2}e^{-k} + 3^dk^{4d}e^{-2(k-1)} \stackrel{\eqref{eq:k_0_cond}}{\le} \tfrac{1}{2}e^{-k} + \tfrac{1}{2}e^{-k} = e^{-k}. 
\end{aligned}$$ This shows by induction that [\[eq:psi_k\_bound\]](#eq:psi_k_bound){reference-type="eqref" reference="eq:psi_k_bound"} holds for all $k\ge k_0$, which yields $$\begin{aligned} \sum_{k\ge k_0} \mathbb{P}\left(Q_k \textnormal{ is not } (\eta_0-\delta)\textnormal{-good}\right) = \sum_{k\ge k_0}\psi_k \le \sum_{k\ge k_0} e^{-k} < +\infty \end{aligned}$$ and concludes the proof of Proposition [Proposition 13](#prop:large_box_good){reference-type="ref" reference="prop:large_box_good"} by the Borel-Cantelli Lemma. ◻ ## Paths within good blocks {#sec:inside-good-blocks} The next proposition carries out the renormalisation scheme and is an important step in proving Theorem [\[thm:linear_regime\]](#thm:linear_regime){reference-type="ref" reference="thm:linear_regime"} and Theorem [Theorem 4](#thm:polynomial_regime){reference-type="ref" reference="thm:polynomial_regime"}. **Proposition 15**. *Let $\eta\in(0,1]$ and recall $k_0$ from Definition [Definition 12](#def:good_box){reference-type="ref" reference="def:good_box"}. There exists a constant $u^{\star}$ depending only on [`par`]{.nodecor} and $\eta$ such that for all $k\ge k_0$ the following holds.* *(1) If $Q$ is an $\eta$-good $k$-block and $x,y\in Q$ satisfy $|x-y|> A_k/16$, then every path from $x$ to $y$ within $Q$ has cost at least $u^{\star}|x-y|^{\eta}$ (deterministically).* *(2) If the $(k-1)$-block $Q'$ and the $k$-block $Q$ centred at $x$ are both $\eta$-good, and if $y\in Q$ satisfies $|x-y|>A_{k-1}/8$, then every path from $x$ to $y$ within $Q$ has cost at least $u^{\star}|x-y|^{\eta}/ 30^{d+2}$ (deterministically).* **Remark 16**. Our proof of (1) is an adaptation of the proof of Lemma 2 in [@berger2004lower], with the difference that the continuous-valued edge-costs make the argument slightly more complicated.
The statement of (2) allows us to prove strictly linear cost-distances in the case $\eta = 1$, avoiding Kingman's subadditive ergodic theorem that finishes the proof in [@berger2004lower]. Note that since (2) is symmetric in $x$ and $y$, we could instead require the condition for the blocks centred at $y$. *Proof of Proposition [Proposition 15](#prop:path_in_good_block){reference-type="ref" reference="prop:path_in_good_block"}.* [\[cst:path_in_good_block\]]{#cst:path_in_good_block label="cst:path_in_good_block"} We will show the following claim by induction: there exists a constant $C_{\ref{cst:path_in_good_block}}$ (which depends on the same parameters as $u^{\star}$ does) such that for every $k\ge k_0$, if $Q$ is an $\eta$-good $k$-block and $x,y\in Q$ satisfy $|x-y|> A_k/16$, then every path $\pi$ from $x$ to $y$ within $Q$ has cost at least $$\begin{aligned} \label{eq:inductive_lower_bound} \mathcal C(\pi)\ge C_{\ref{cst:path_in_good_block}}\Lambda(k)|x-y|^{\eta}, \qquad \textnormal{where } \qquad \Lambda(k):=\prod_{h=k_0}^k\left(1-\frac{k_0}{h^2}\right).\end{aligned}$$ Then taking $u^{\star}=C_{\ref{cst:path_in_good_block}} \prod_{h=k_0}^{\infty}(1-k_0/h^2)>0$ shows (1). To show (2), we will slightly modify the last step of the induction in (1). To ease notation, we will assume that $Q=Q_k$, i.e., $Q$ is the $k$-block centred at the origin. For the base case, consider $x,y\in Q_{k_0}$ that satisfy $|x-y|>A_{k_0}/16$, where $Q_{k_0}$ is an $\eta$-good $k_0$-block. Then any path between $x$ and $y$ contains at least one edge and since $Q_{k_0}$ is an $\eta$-good $k_0$-block, the path has cost at least $u$ by Definition [Definition 12](#def:good_box){reference-type="ref" reference="def:good_box"}.
It is here that we use the assumption that $\mathbb P(L > 0)=1$; compare this to the base case in the proof of Proposition [Proposition 13](#prop:large_box_good){reference-type="ref" reference="prop:large_box_good"}, i.e., the argument at and below [\[eq:u_cond\]](#eq:u_cond){reference-type="eqref" reference="eq:u_cond"}. Since $|x-y|\le A_{k_0}\sqrt{d}$, and $\eta\le 1$, we define $C_{\ref{cst:path_in_good_block}}:=u/(A_{k_0}\sqrt{d})$ so that [\[eq:inductive_lower_bound\]](#eq:inductive_lower_bound){reference-type="eqref" reference="eq:inductive_lower_bound"} holds for $k=k_0$. Note that $C_{\ref{cst:path_in_good_block}}\le u/\sqrt{d}$. Now assume that [\[eq:inductive_lower_bound\]](#eq:inductive_lower_bound){reference-type="eqref" reference="eq:inductive_lower_bound"} holds up to $k-1$ for some $k > k_0$. Let us consider an $\eta$-good $k$-block $Q_k$, and $x,y\in Q_k$ with $|x-y|>A_k/16$. Let $\pi = (v_1, \ldots, v_l)$ be a path from $v_1=x$ to $v_l=y$ within $Q_k$. *Case A:* If $\pi$ contains one or more edges of length greater than $A_{k-1}/100$, then since $Q_k$ is $\eta$-good, by property *(i)* of Definition [Definition 12](#def:good_box){reference-type="ref" reference="def:good_box"}, the path has cost at least $$\begin{aligned} uA_k^{\eta} \ge \frac{u|x-y|^{\eta}}{\sqrt{d}^{\eta}} \ge C_{\ref{cst:path_in_good_block}}|x-y|^{\eta} \ge C_{\ref{cst:path_in_good_block}}\Lambda(k)|x-y|^{\eta},\end{aligned}$$ since $\eta\le 1$, so we are done. *Case B:* When every edge of $\pi$ has length at most $A_{k-1}/100$, we will split $\pi$ into subpaths as follows. Remember that for every translation $Q_k+jA_{k-1}/2$ of $Q_k$, $j\in\{-1, 0, 1\}^d$, at most $3^d$ children of $Q_k+jA_{k-1}/2$ are not $\eta$-good (call these bad); thus we have in total at most $9^d$ bad child-boxes in the union of all translations (some might overlap). Denote these bad children by $B_1,B_2, \ldots, B_{p}$ with $p\le9^d$, and let $B:=B_1 \cup\ldots\cup B_{p}$.
If $\pi\cap B = \emptyset$, define $\pi_1:=\pi$. Otherwise, we will decompose $\pi$ into *good segments* $\pi_s$ followed by *bad segments* $\sigma_t$, so that $\pi$ is the concatenation of the good and bad segments: $\pi=(\pi_1, \sigma_1, \pi_2, \sigma_2, \ldots, \pi_{S-1}, \sigma_T, \pi_{S})$ for some $S, T$, where some of these segments might be empty, and the last vertex of a segment is the first vertex of the next segment. We now divide the vertices of $\pi=(v_1, \ldots, v_l)$ into the segments. Intuitively, the edge-set of a good segment stays fully outside of the bad child-boxes. Let $a_1$ be the smallest index $i\le l$ so that $v_i\in B$ and let $b_1$ be the index of the containing bad child-box: $v_{a_1}\in B_{b_1}$ (choose $b_1$ arbitrarily if there is more than one possibility). Then we set $\pi_1=(v_1, \ldots, v_{a_1-1})$ as the first good segment. Let $z_1$ be the largest value $z$ such that $v_z\in B_{b_1}$; then $\sigma_1=(v_{a_1-1}, \ldots, v_{z_1+1})$ is the first bad segment. (Note that there may be vertices on this segment outside $B_{b_1}$.) Inductively, let $a_{s+1}$ be the smallest $a>z_s$ so that $v_a\in B$, let $b_{s+1}$ be so that $v_{a_{s+1}}\in B_{b_{s+1}}$ and let $z_{s+1}$ be the largest $z$ with $v_z\in B_{b_{s+1}}$. The further good segments are then defined as $\pi_2:=(v_{z_1+1}, \ldots, v_{a_2-1})$ and so on, up to $\pi_S$, and the bad segments as $\sigma_1:=(v_{a_1-1}, \ldots, v_{z_1+1}), \sigma_2:=(v_{a_2-1}, \ldots, v_{z_2+1})$ and so on up to $\sigma_T$. Observe that the bad segment $\sigma_t$ contains the two edges $v_{a_t-1} v_{a_t}$ and $v_{z_t}v_{z_t+1}$ that are not fully contained in $B_{b_t}$. Consequently, the good segments contain only edges that lie *completely outside* the bad set $B$. Note that $S-1\le T\le p \le 9^d$, and $S-1$ may or may not equal $T$ since two bad segments may directly follow each other. For a path $\rho$, denote by $\mathcal{D}(\rho)$ the Euclidean distance between its endpoints.
Moreover, for $v,w\in\rho$, let $\rho[v, w]$ be the subpath (segment) from $v$ to $w$ on $\rho$. By the triangle inequality, $|x-y|\le \sum_{s=1}^S\mathcal{D}(\pi_s) + \sum_{t=1}^T\mathcal{D}(\sigma_{t})$. Moreover, since the diameter of $B_{b_{t}}$ is $\sqrt{d}A_{k-1}$ (as $B_{b_t}$ is a level-$(k-1)$ box), and every edge in $\pi$ has length at most $A_{k-1}/100$ (by assumption of Case B), for every bad segment $\sigma_{t}$ we have $$\begin{aligned} \label{eq:bad-box-span} \mathcal{D}(\sigma_{t}) \le |v_{a_{t}}-v_{z_{t}}| + |v_{a_{t}-1}-v_{a_{t}}| + |v_{z_{t}}-v_{z_{t}+1}| \le \sqrt{d}A_{k-1}+2A_{k-1}/100 \le 2dA_{k-1}.\end{aligned}$$ Let $$\begin{aligned} \label{eq:IS-def} I:=\{ s \in \{1,\ldots,S\} \mid \mathcal{D}(\pi_s)>A_{k-1}/2\}\end{aligned}$$ be the set of those good segments whose two endpoints span more than $A_{k-1}/2$ Euclidean distance. Then $\sum_{s\notin I} \mathcal{D}(\pi_s)\le SA_{k-1}/2$, trivially, since there are $S$ good segments. Since the inductive statement assumes $|x-y|>A_k/16$, combining this with [\[eq:bad-box-span\]](#eq:bad-box-span){reference-type="eqref" reference="eq:bad-box-span"} and the triangle inequality above yields the following pigeon-hole estimate: $$\begin{aligned} \begin{split} \label{eq:lower-bound-sum-distance} \sum_{s\in I} \mathcal{D}(\pi_s) &\ge |x-y| - \sum_{t=1}^T\mathcal{D}(\sigma_{t}) - \sum_{s\notin I} \mathcal{D}(\pi_s) \ge |x-y|-(2dT+S/2)A_{k-1} \\ &{ \buildrel (\star) \over \ge}\ |x-y|-30^dA_{k-1} { \buildrel (\dagger) \over \ge}\ \left(1-\frac{16\cdot30^d}{k^2}\right)|x-y|\ { \buildrel (\Box) \over =}\ \left(1-\frac{k_0}{k^2}\right)|x-y|, \end{split}\end{aligned}$$ where we used that $S-1, T \le 9^d$ to get inequality $(\star)$, that $|x-y|\ge A_k/16$ and that $A_k=k^2 A_{k-1}$ to get $(\dagger)$, and then the definition of $k_0$ in Definition [Definition 12](#def:good_box){reference-type="ref" reference="def:good_box"} to get $(\Box)$.
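Inequality $(\star)$ above hides a small numerical fact: in the worst case $T= 9^d$ and $S= 9^d+1$, so one needs $2dT+S/2\le 30^d$. A minimal sketch (not part of the proof, just a sanity check) confirming this for small dimensions:

```python
# Check the constant comparison behind inequality (*): with at most
# T = 9^d bad segments and S = 9^d + 1 good segments, we need
# 2*d*T + S/2 <= 30^d for every dimension d.
for d in range(1, 11):
    worst_case = 2 * d * 9**d + (9**d + 1) / 2
    assert worst_case <= 30**d, f"fails in dimension {d}"
print("2dT + S/2 <= 30^d holds for d = 1, ..., 10")
```

The margin grows with $d$, since $(30/9)^d$ eventually dwarfs the factor $2d$.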
For later reference, we note that under the condition $|x-y|\ge 2\cdot 30^d A_{k-1}$ (that we will assume in proving (2)), the following weaker version of [\[eq:lower-bound-sum-distance\]](#eq:lower-bound-sum-distance){reference-type="eqref" reference="eq:lower-bound-sum-distance"} still holds: $$\begin{aligned} \begin{split} \label{eq:lower-bound-sum-distance2} \sum_{s\in I} \mathcal{D}(\pi_s) \ge |x-y|-30^dA_{k-1} \ge \frac{1}{2}|x-y|. \end{split}\end{aligned}$$ Returning to the proof of [\[eq:inductive_lower_bound\]](#eq:inductive_lower_bound){reference-type="eqref" reference="eq:inductive_lower_bound"}, observe that [\[eq:lower-bound-sum-distance\]](#eq:lower-bound-sum-distance){reference-type="eqref" reference="eq:lower-bound-sum-distance"} bounds the total Euclidean distance $\mathcal D$ spanned by the endpoints of good segments $\pi_s$. Now we will work towards switching $\mathcal D$ to $\mathcal C$, the cost of the segment. We claim that if $s\in I$, then $$\begin{aligned} \label{eq:lower_bound_long_non_bad_path} \mathcal{C}(\pi_s) \ge C_{\ref{cst:path_in_good_block}} \Lambda(k-1)\mathcal{D}(\pi_s)^{\eta}.\end{aligned}$$ In order to prove [\[eq:lower_bound_long_non_bad_path\]](#eq:lower_bound_long_non_bad_path){reference-type="eqref" reference="eq:lower_bound_long_non_bad_path"}, we will show inductively that we can partition $\pi_s=(v_{z_s+1}, \ldots, v_{a_{s+1}-1})$ into sub-segments $\pi_{s,i}=\pi_s[x_{s,i}, y_{s,i}]$ (where $y_{s,i}=x_{s,i+1}$, i.e., the end-vertex of a sub-segment is the starting vertex of the next sub-segment) for $i=1,\ldots, q_s$, such that for all $i$: (I) $\mathcal{D}(\pi_{s,i})> A_{k-1}/16$, i.e., the endpoints of the sub-segment span enough Euclidean distance, and (II) $\pi_{s,i}\subseteq B_{A_{k-1}/4}(x_{s,i})$, i.e., the whole sub-segment is contained in the Euclidean ball of radius $A_{k-1}/4$ centred around $x_{s,i}$.
We will construct the $\pi_{s,i}$ greedily, with the induction hypothesis that if $\pi_s$ is *not* covered by the first $i$ sub-segments (i.e. if $\pi_s\neq\cup_{j=1}^{i}\pi_{s,j}$), then $|y_{s,i}-v_{a_{s+1}-1}|>A_{k-1}/16$ (remembering that $v_{a_{s+1}-1}$ is the last vertex of $\pi_s$). For the base case $i=0$, we take $y_{s,0}$ to be the first vertex of $\pi_s$, and since $s\in I$, by [\[eq:IS-def\]](#eq:IS-def){reference-type="eqref" reference="eq:IS-def"}, we have $|y_{s,0}-v_{a_{s+1}-1}| = \mathcal{D}(\pi_s) > A_{k-1}/2 > A_{k-1}/16$, so the induction hypothesis is satisfied. Now assume by induction that we already have $\pi_{s,1}, \ldots, \pi_{s,i}$ satisfying (I), (II) and $|y_{s,i}-v_{a_{s+1}-1}|>A_{k-1}/16$ for some $i\ge1$, and let us construct $\pi_{s,i+1}$. *Case B1:* If the segment $\pi_s[y_{s,i},v_{a_{s+1}-1}]$ satisfies (II), i.e. if $\pi_s[y_{s,i}, v_{a_{s+1}-1}]\subseteq B_{A_{k-1}/4}(y_{s,i})$ then we define $q_s:=i+1$, $\pi_{s,q_s}:=\pi_s[y_{s,i},v_{a_{s+1}-1}]$ and the procedure terminates. $\pi_{s,q_s}$ also satisfies (I) since $\mathcal{D}(\pi_{s,q_s}) = |y_{s,i}-v_{a_{s+1}-1}|>A_{k-1}/16$ by the induction hypothesis, as the first $i$ sub-segments did not yet cover $\pi_s$. *Case B2:* If $\pi_s[y_{s,i},v_{a_{s+1}-1}] \nsubseteq B_{A_{k-1}/4}(y_{s,i})$ we distinguish two cases depending on $|y_{s,i}-v_{a_{s+1}-1}|$. *Case B2a*: $|y_{s,i}-v_{a_{s+1}-1}|\ge 5A_{k-1}/32$. Define $\pi_{s,i+1}$ to be the path obtained by following $\pi_s$ from $y_{s,i}:=x_{s,i+1}$ until reaching the first vertex on $\pi_s$ that spans larger than $A_{k-1}/16$ Euclidean distance from $x_{s,i+1}$, and let this vertex be $y_{s, i+1}$. The vertex $y_{s, i+1}$ exists since $|x_{s,i+1}-v_{a_{s+1}-1}| = |y_{s,i}-v_{a_{s+1}-1}| > A_{k-1}/16$ was our assumption. Then $\pi_{s,i+1}$ satisfies (I) by our definition of $y_{s, i+1}$.
Since every edge of $\pi_s$ has length at most $A_{k-1}/100$ (since we are under Case B), and every vertex of $\pi_{s,i+1}=\pi_s[x_{s,i+1}, y_{s,i+1}]$ except $y_{s,i+1}$ is within distance $A_{k-1}/16$ of $x_{s,i+1}$, $\pi_{s,i+1}$ is also contained in a ball of radius $A_{k-1}/16+A_{k-1}/100<A_{k-1}/4$ centred at $x_{s,i+1}$, and thus (II) is satisfied as well. Finally, by the triangle inequality we have $$\begin{aligned} |y_{s,i+1}-v_{a_{s+1}-1}| &\ge |y_{s,i}-v_{a_{s+1}-1}|-|y_{s,i}-y_{s,i+1}|\\ & \ge 5A_{k-1}/32-(A_{k-1}/16+A_{k-1}/100) > A_{k-1}/16,\end{aligned}$$ which shows the induction step for this case. *Case B2b*: $|y_{s,i}-v_{a_{s+1}-1}|< 5A_{k-1}/32$. Intuitively, this case means that while $y_{s,i}$ is close to $v_{a_{s+1}-1}$ in space, the path $\pi_s[y_{s,i},v_{a_{s+1}-1}]$ wanders far off and then comes back before reaching the last vertex $v_{a_{s+1}-1}$. Let $y'$ be the first vertex on the path $\pi_s$ starting from $y_{s,i}:=x_{s,i+1}$ such that $|y'-x_{s,i+1}|\ge A_{k-1}/4$ (this vertex must exist since $\pi_s[y_{s,i}, v_{a_{s+1}-1}] \nsubseteq B_{A_{k-1}/4}(y_{s,i})$, as we are under Case B2). Then define $y_{s,i+1}$ to be the vertex right before $y'$ on $\pi_s[x_{s,i+1},y']$ and let $\pi_{s,i+1}:=\pi_s[x_{s,i+1},y_{s,i+1}]$. Again, since every edge of $\pi_s$ has length at most $A_{k-1}/100$ (since we are under Case B), we have $\mathcal{D}(\pi_{s,i+1}) \ge |x_{s,i+1}-y'|-|y_{s,i+1}-y'| \ge A_{k-1}/4-A_{k-1}/100 > A_{k-1}/16$ so (I) is satisfied. Moreover, $\pi_{s,i+1}$ satisfies (II) by our definition of $y_{s,i+1}$. Finally, we use the triangle inequality to get $$\begin{aligned} |y_{s,i+1}-v_{a_{s+1}-1}| &\ge |y'-x_{s,i+1}| - |y_{s,i+1}-y'| - |v_{a_{s+1}-1}-x_{s,i+1}| \\ &\ge A_{k-1}/4-A_{k-1}/100-5A_{k-1}/32 > A_{k-1}/16,\end{aligned}$$ which concludes the induction step.
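The greedy construction in Cases B1–B2b can be summarised algorithmically. The following simplified sketch (our own illustration, not the proof's exact case analysis; the path is a hypothetical list of points in $\mathbb{R}^d$ and $A$ plays the role of $A_{k-1}$) cuts a new sub-segment each time the walk first spans more than $A/16$ from the current start, or just before it would leave the $A/4$-ball around that start:

```python
import math

def dist(p, q):
    """Euclidean distance between two points given as coordinate tuples."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def greedy_partition(path, A):
    """Return the list of indices where sub-segments of `path` start.
    A cut is made when a vertex first spans > A/16 from the current
    start (Case B2a), or just before the path leaves the A/4-ball
    around the current start (Case B2b).  A possible short tail at the
    end of the path is left uncut (Case B1 handles it in the proof)."""
    cuts = [0]
    start = path[0]
    for i, v in enumerate(path):
        if dist(start, v) >= A / 4:      # about to leave the ball: cut before v
            cuts.append(i - 1)
            start = path[i - 1]
        elif dist(start, v) > A / 16:    # spanned enough distance: cut at v
            cuts.append(i)
            start = v
    return cuts
```

On a straight path with unit steps and $A=160$, each sub-segment spans $11 > A/16 = 10$ and stays well inside the $A/4$-ball, mirroring properties (I) and (II).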
The reason for requiring (II) is that the obtained partition of $\pi_s$ has the property that every sub-segment $\pi_{s,i}$ *is contained in an $\eta$-good $(k-1)$-block* (among the children of $Q_k$ and its translations). This is due to a geometric statement: any ball of radius $A_{k-1}/4$ within $Q_k$ is contained in at least one $(k-1)$-level block that is a child of some shift $Q_k+\underline j A_{k-1}/2$ for some $\underline j \in \{-1,0,1\}^d$. By (I) and the induction hypothesis [\[eq:inductive_lower_bound\]](#eq:inductive_lower_bound){reference-type="eqref" reference="eq:inductive_lower_bound"}, we have $\mathcal{C}(\pi_{s,i}) \ge C_{\ref{cst:path_in_good_block}} \Lambda(k-1)\mathcal{D}(\pi_{s,i})^{\eta}$ and therefore $$\begin{aligned} \mathcal{C}(\pi_s) &= \sum_{i=1}^{q_s}\mathcal{C}(\pi_{s,i}) \ge C_{\ref{cst:path_in_good_block}} \Lambda(k-1)\sum_{i=1}^{q_s} \mathcal{D}(\pi_{s,i})^{\eta} \\ &{\buildrel (\star) \over \ge}\ C_{\ref{cst:path_in_good_block}} \Lambda(k-1)\left(\sum_{i=1}^{q_s} \mathcal{D}(\pi_{s,i})\right)^{\eta} {\buildrel (\triangle) \over \ge} C_{\ref{cst:path_in_good_block}} \Lambda(k-1)\mathcal{D}(\pi_s)^{\eta},\end{aligned}$$ where we got inequality $(\star)$ as follows: since $\eta\le 1$ the function $x^\eta$ is subadditive, i.e., $x^{\eta} + y^{\eta} \ge (x+y)^{\eta}$, and $(\triangle)$ is the triangle inequality applied to $\pi_s$ and its sub-segments. This shows [\[eq:lower_bound_long_non_bad_path\]](#eq:lower_bound_long_non_bad_path){reference-type="eqref" reference="eq:lower_bound_long_non_bad_path"}. Recall now the set $I$ from [\[eq:IS-def\]](#eq:IS-def){reference-type="eqref" reference="eq:IS-def"}, and the pigeon-hole argument in [\[eq:lower-bound-sum-distance\]](#eq:lower-bound-sum-distance){reference-type="eqref" reference="eq:lower-bound-sum-distance"}, which holds under the assumption $|x-y|\ge A_k/16$.
By the same subadditivity argument $(\star)$ for the function $x^\eta$, we can finally deduce that $$\begin{aligned} \begin{split}\label{eq:lower-bound-cost-sum} \mathcal{C}(\pi) &\ge \sum_{s\in I}\mathcal{C}(\pi_s) \stackrel{\eqref{eq:lower_bound_long_non_bad_path}}{\ge} C_{\ref{cst:path_in_good_block}}\Lambda(k-1)\sum_{s\in I} \mathcal{D}(\pi_s)^{\eta} \stackrel{(\star)}{\ge} C_{\ref{cst:path_in_good_block}} \Lambda(k-1)\bigg(\sum_{s\in I} \mathcal{D}(\pi_s)\bigg)^{\eta} \\ &\stackrel{\eqref{eq:lower-bound-sum-distance}}{\ge} C_{\ref{cst:path_in_good_block}} \Lambda(k-1) \left(1-\frac{k_0}{k^2}\right)^{\eta} |x-y|^{\eta} \ \stackrel{\eta \le 1}{\ge} \ C_{\ref{cst:path_in_good_block}} \Lambda(k)|x-y|^{\eta}, \end{split}\end{aligned}$$ where $\Lambda(k)$ is defined in [\[eq:inductive_lower_bound\]](#eq:inductive_lower_bound){reference-type="eqref" reference="eq:inductive_lower_bound"}. This finishes the inductive demonstration of [\[eq:inductive_lower_bound\]](#eq:inductive_lower_bound){reference-type="eqref" reference="eq:inductive_lower_bound"} and concludes the proof of part (1) of the proposition. For part (2), we distinguish two sub-cases. If $|x-y|\ge 2\cdot 30^dA_{k-1}$, then [\[eq:lower-bound-sum-distance2\]](#eq:lower-bound-sum-distance2){reference-type="eqref" reference="eq:lower-bound-sum-distance2"} holds, and we can repeat the calculation in [\[eq:lower-bound-cost-sum\]](#eq:lower-bound-cost-sum){reference-type="eqref" reference="eq:lower-bound-cost-sum"}, getting $$\begin{aligned} \mathcal{C}(\pi) \ge C_{\ref{cst:path_in_good_block}} \Lambda(k-1)\left(\sum_{s\in I} \mathcal{D}(\pi_s)\right)^{\eta} &\stackrel{\eqref{eq:lower-bound-sum-distance2}}{\ge} 2^{-\eta}C_{\ref{cst:path_in_good_block}} \Lambda(k-1) |x-y|^{\eta},\end{aligned}$$ which is stronger than required. So consider the remaining case, $A_{k-1}/8 < |x-y|< 2\cdot 30^dA_{k-1}$.
As before, if the path $\pi$ from $x$ to $y$ contains an edge of length more than $A_{k-1}/100$, the claim follows from $Q_k$ being $\eta$-good (Case A). So assume otherwise. Then there exists a vertex $v$ on $\pi$ with $|x-v| \in (A_{k-1}/16, A_{k-1}/8]$, since $|x-y|>A_{k-1}/8$ and every edge has length at most $A_{k-1}/100$. Let $v$ be the first such vertex on $\pi$ (starting from $x$), and let $\pi'=\pi[x,v]$. Then $v \in Q'$ and $\pi' \subseteq Q'$. Since $Q'$ is good by assumption, we may apply part (1) to the path $\pi'$ and conclude $$\begin{aligned} \mathcal{C}(\pi) \ge \mathcal{C}(\pi') \ge u^{\star}|x-v|^{\eta} \geq u^{\star}\left(\frac{A_{k-1}}{16}\right)^{\eta} \geq u^{\star}\left(\frac{|x-y|}{32\cdot 30^d}\right)^{\eta} \ \stackrel{\eta \le 1}{\ge} \ u^{\star}|x-y|^\eta/30^{d+2},\end{aligned}$$ as required, finishing the proof of Proposition [Proposition 15](#prop:path_in_good_block){reference-type="ref" reference="prop:path_in_good_block"}. ◻ ## Proofs of lower bounds {#sec:lower_bound_polynomial} We are now ready to prove Theorem [\[thm:linear_polynomial_lower_bound\]](#thm:linear_polynomial_lower_bound){reference-type="ref" reference="thm:linear_polynomial_lower_bound"}.
*Proof of Theorem [\[thm:linear_polynomial_lower_bound\]](#thm:linear_polynomial_lower_bound){reference-type="ref" reference="thm:linear_polynomial_lower_bound"}.* [\[proof:linear_polynomial_lower_bound\]]{#proof:linear_polynomial_lower_bound label="proof:linear_polynomial_lower_bound"} We will prove both claims [\[eq:lin-poly-lower-bound\]](#eq:lin-poly-lower-bound){reference-type="eqref" reference="eq:lin-poly-lower-bound"} and [\[eq:linear-lower\]](#eq:linear-lower){reference-type="eqref" reference="eq:linear-lower"} simultaneously by showing that the bound $$\begin{aligned} \label{eq:lin-poly-lower-bound-1} d_{\mathcal{C}}(0,x) \ge \frac{u^{\star}|x|^{\eta}}{30^{d+2}\sqrt{d}} \end{aligned}$$ holds simultaneously for all $x$ with sufficiently large $|x|=:r$, where $u^\star$ is the constant from Proposition [Proposition 15](#prop:path_in_good_block){reference-type="ref" reference="prop:path_in_good_block"}. The only difference between the two statements is that we prove [\[eq:lin-poly-lower-bound-1\]](#eq:lin-poly-lower-bound-1){reference-type="eqref" reference="eq:lin-poly-lower-bound-1"} for all $\eta < \eta_0$ if $\mu \in (\mu_{\log},\mu_{\mathrm{pol}}]$, and for $\eta := \eta_0 = 1$ in the cases $\mu > \mu_{\mathrm{pol}}$ or $\tau>3$. By Proposition [Proposition 13](#prop:large_box_good){reference-type="ref" reference="prop:large_box_good"}, a.s. there is some (random) $k_1\ge k_0$ such that for all $k\ge k_1$, the $k$-block $Q_k$ centred around $0$ is $\eta$-good in the sense of Definition [Definition 12](#def:good_box){reference-type="ref" reference="def:good_box"}: every edge of length larger than $A_{k-1}/100$, internal to $Q_k$ and its shifts $Q'=Q_k+\underline j A_{k-1}/2$ by vectors $\underline j\in \{-1,0,1\}^d$, has cost at least $u A_k^\eta$, and there are at most $3^d$ bad child-boxes of $Q_k$ and/or the $Q'$s.
This is the only place where we need to discriminate between the cases $\tau\in(2,3)$ and $\mu \in (\mu_{\log},\mu_{\mathrm{pol}}]$, or $\tau\in(2,3)$ and $\mu > \mu_{\mathrm{pol}}$; or $\tau >3$. Proposition [Proposition 13](#prop:large_box_good){reference-type="ref" reference="prop:large_box_good"} allows us to choose $\eta := \eta_0=1$ only in the latter two cases, otherwise we choose $\eta=\eta_0-\delta$ for an arbitrarily small $\delta>0$. For each vertex $x\in\mathbb{R}^d$, define $k^{\star}(x)$ to be the smallest integer $k\in\mathbb{N}$ with $x \in Q_k$. We will show that [\[eq:lin-poly-lower-bound-1\]](#eq:lin-poly-lower-bound-1){reference-type="eqref" reference="eq:lin-poly-lower-bound-1"} holds for all $x$ with $k^{\star}(x) \ge k_1 + 1$, which in particular implies the inequality for all $|x| > \sqrt{d}A_{k_1}/2$. So let us fix some $x$ and suppose that $k^{\star}:=k^{\star}(x) \ge k_1+1$. In particular we have $|x|\le \sqrt{d}A_{k^{\star}}/2$. Let $\pi$ be a path from $0$ to $x$, and let $k'$ be the smallest index such that $\pi \subseteq Q_{k'}$. Note that $k' \ge k^\star$ because $x \not\in Q_{k^\star-1}$. Then there exists a vertex $v$ on $\pi$ in $Q_{k'} \setminus Q_{k'-1}$, and in particular $|v| > A_{k'-1}/2$. If $k'=k^\star$ then we pick $v:=x$, otherwise we pick any such vertex $v$. By the second part of Proposition [Proposition 15](#prop:path_in_good_block){reference-type="ref" reference="prop:path_in_good_block"}, we have $$\begin{aligned} \mathcal{C}(\pi)\ge u^{\star}|v|^{\eta}/30^{d+2}. \end{aligned}$$ In the case $k'=k^\star$, [\[eq:lin-poly-lower-bound-1\]](#eq:lin-poly-lower-bound-1){reference-type="eqref" reference="eq:lin-poly-lower-bound-1"} follows from $x=v$ and thus $|v|=|x|$. 
In the case $k'>k^\star$, it follows from $|v| > A_{k'-1}/2 \ge A_{k^\star}/2 \ge |x|/\sqrt{d}$ and $\eta \le 1$, finishing the proof of [\[eq:lin-poly-lower-bound\]](#eq:lin-poly-lower-bound){reference-type="eqref" reference="eq:lin-poly-lower-bound"} and [\[eq:linear-lower\]](#eq:linear-lower){reference-type="eqref" reference="eq:linear-lower"}. ◻ Theorem [\[thm:linear_polynomial_lower_bound\]](#thm:linear_polynomial_lower_bound){reference-type="ref" reference="thm:linear_polynomial_lower_bound"} contains Theorem [Theorem 4](#thm:polynomial_regime){reference-type="ref" reference="thm:polynomial_regime"} and the lower bound in Theorem [\[thm:linear_regime\]](#thm:linear_regime){reference-type="ref" reference="thm:linear_regime"} as special cases. Moreover, Corollaries [Corollary 5](#cor:ball-growth){reference-type="ref" reference="cor:ball-growth"} and [Corollary 7](#cor:polynomial-total){reference-type="ref" reference="cor:polynomial-total"} follow immediately from Theorem [Theorem 4](#thm:polynomial_regime){reference-type="ref" reference="thm:polynomial_regime"} and from Theorems [Theorem 4](#thm:polynomial_regime){reference-type="ref" reference="thm:polynomial_regime"} and [Theorem 6](#thm:polynomial-upper){reference-type="ref" reference="thm:polynomial-upper"}, respectively. Finally, the lower bound in Theorem [\[thm:finite_graph\]](#thm:finite_graph){reference-type="ref" reference="thm:finite_graph"} is a trivial consequence of Theorem [\[thm:linear_regime\]](#thm:linear_regime){reference-type="ref" reference="thm:linear_regime"} since GIRG is a subgraph of IGIRG, which can only increase distances. Thus, we have proven all lower bounds except for the limit cases in Theorem [\[thm:threshold_regimes\]](#thm:threshold_regimes){reference-type="ref" reference="thm:threshold_regimes"}. 
## Limit Cases, Lower Bounds {#sec:limit-cases-lower} In this section, we prove the lower bounds in the limit cases, i.e., we prove the lower bounds in Theorem [\[thm:threshold_regimes\]](#thm:threshold_regimes){reference-type="ref" reference="thm:threshold_regimes"}. We achieve this by coupling a model with $\alpha=\infty$ to a suitable model with $\alpha < \infty$, and likewise for $\beta = \infty$. For $\alpha=\infty$, we keep the vertex set and the weights identical in the two coupled models, but we use a subset of the edges of the $\alpha<\infty$ model to obtain the $\alpha=\infty$ model (with identical costs on the edges that are kept). For $\beta =\infty$, we can use the same vertex set, weights, and edge set in the two coupled models with $\beta=\infty$ and $\beta<\infty$, but we decrease the edge costs. *Proof of Theorem [\[thm:threshold_regimes\]](#thm:threshold_regimes){reference-type="ref" reference="thm:threshold_regimes"} (lower bounds).* [\[proof:threshhold_regimes_lower\]]{#proof:threshhold_regimes_lower label="proof:threshhold_regimes_lower"} *(a)* First consider the case $\alpha = \infty$.
For Theorem [Theorem 4](#thm:polynomial_regime){reference-type="ref" reference="thm:polynomial_regime"}, we need to study the case $\mu > \mu_{\log} = (3-\tau)/\beta$ from [\[eq:alpha-infty-definitions\]](#eq:alpha-infty-definitions){reference-type="eqref" reference="eq:alpha-infty-definitions"} and show that for sufficiently large $|x|$, $$\label{eq:limit-poly-rep} d_{\mathcal C}(0,x) \ge|x|^{\eta_{0,\infty}-\varepsilon},$$ where, recalling that $\mu_{\mathrm{pol}}=(3-\tau)/\beta + 1/d$ in the $\alpha=\infty$ case (see [\[eq:alpha-infty-definitions\]](#eq:alpha-infty-definitions){reference-type="eqref" reference="eq:alpha-infty-definitions"}), $$\begin{aligned} \eta_{0,\infty} = \begin{cases} 1 & \mbox{ if $\mu>\mu_{\mathrm{pol}}$,}\\ d\cdot (\mu-\mu_{\log}) & \mbox{ if $\mu\le\mu_{\mathrm{pol}}$.} \end{cases}\end{aligned}$$ To show [\[eq:limit-poly-rep\]](#eq:limit-poly-rep){reference-type="eqref" reference="eq:limit-poly-rep"}, we will show that IGIRG (and also GIRG and SFP) are *monotone* in $\alpha$ in the following sense. We first explain it for $\alpha <\infty$. Let $\alpha, \alpha' \in (1,\infty)$ with $\alpha' <\alpha$. Let $\overline c \ge \underline c >0$, and consider a realisation of IGIRG, say $G$, with parameters $\alpha, \overline c, \underline c >0$, and arbitrary other parameters. Set $\overline c' := \underline c' := \overline c$ and consider a second IGIRG $G'$ with parameters $\alpha',\overline c',\underline c'$, and otherwise identical parameters to $G$. 
Then conditioned on the vertex set $\mathcal V$ and the weight vector $\mathcal W$, by [\[eq:connection_prob\]](#eq:connection_prob){reference-type="eqref" reference="eq:connection_prob"}, the edge probabilities of $G$ (of $G'$) are given by a function $h$ (a function $h'$), and $h$ and $h'$ satisfy the relation $$\begin{aligned} h(x,w_1,w_2) \le \overline c \cdot\min\{1,(w_1w_2)/|x|^d\}^\alpha \le \underline c' \cdot\min\{1,(w_1w_2)/|x|^d\}^{\alpha'} \le h'(x,w_1,w_2).\end{aligned}$$ Therefore, we can couple $G$ to $G'$ such that $G$ is a subgraph of $G'$. Note that $G'$ has the same model parameters as $G$ except for $\alpha'$, and except for the parameters $\overline c,\underline c$ whose values do not appear in Theorem [Theorem 4](#thm:polynomial_regime){reference-type="ref" reference="thm:polynomial_regime"}, nor in any of the other theorems. For $\alpha=\infty$ and any $\alpha' \in (1,\infty)$, the same coupling is possible, as we show next. In this case, the defining equation [\[eq:alpha_infty\]](#eq:alpha_infty){reference-type="eqref" reference="eq:alpha_infty"} for $h$ includes two parameters $\underline{c}, c' >0$, and requires that $h(x,w_1,w_2) = 0$ for $(w_1w_2)/|x|^d < c'$ and $h(x,w_1,w_2) \geq \underline c$ for $(w_1w_2)/|x|^d \ge 1$. In all other cases, we still have $h(x,w_1,w_2) \leq 1$ because it is a probability. Hence by setting $$\begin{aligned} h'(x,w_1,w_2) := \begin{cases} \underline c \cdot \min\{1,(w_1w_2)/|x|^d\}^{\alpha'} & \text{ if } \tfrac{w_1w_2}{|x|^d} < c', \\ 1 & \text{ if }\tfrac{w_1w_2}{|x|^d} \ge c', \end{cases}\end{aligned}$$ we can ensure that $h'(x,w_1,w_2) \ge h(x,w_1,w_2)$ for all $x,w_1,w_2$. 
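The domination $h'\ge h$ can be illustrated numerically. In the following sketch the parameter values ($\underline c$, $c'$, $\alpha'$) are illustrative assumptions, not values from the paper; it suffices to dominate the *largest* admissible $\alpha=\infty$ profile, which is zero below the threshold $c'$ and at most $1$ elsewhere:

```python
# Sketch of the alpha = infinity coupling.  An admissible connection
# function h vanishes when the ratio (w1*w2)/|x|^d is below c' and is
# at most 1 otherwise; h' is the dominating alpha' < infinity kernel
# defined in the text.  c_low, c_prime, alpha_prime are illustrative.
c_low, c_prime, alpha_prime = 0.3, 0.05, 2.0

def h_worst(ratio):
    """Largest admissible alpha = infinity profile at a given ratio."""
    return 0.0 if ratio < c_prime else 1.0

def h_prime(ratio):
    """The dominating kernel h' from the text."""
    if ratio < c_prime:
        return c_low * min(1.0, ratio) ** alpha_prime
    return 1.0

# h' dominates every admissible h, since it dominates the largest one:
assert all(h_prime(i / 1000) >= h_worst(i / 1000) for i in range(2001))
```

Dominating pointwise in the connection probability is exactly what allows $G$ to be coupled as a subgraph of $G'$.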
Moreover, it can easily be checked that $h'$ satisfies the conditions in [\[eq:connection_prob\]](#eq:connection_prob){reference-type="eqref" reference="eq:connection_prob"} for $\alpha'$ with $\overline c' := \max\{1,\underline{c}, (c')^{-\alpha'}\}$ and $\underline c' := \min\{1,\underline c\}$, i.e., we obtain an IGIRG model with parameter $\alpha'$. To resume the proof of Theorem [\[thm:threshold_regimes\]](#thm:threshold_regimes){reference-type="ref" reference="thm:threshold_regimes"}, consider IGIRG $G$ for $\alpha = \infty$, and for some $\beta,\mu,\tau$ such that $\mu > \mu_{\mathrm{log}}$, and let $\varepsilon>0$. We claim that there exists $\alpha' <\infty$ such that $\eta_0' := \eta_0(\alpha',\beta,\mu,\tau) > \eta_{0,\infty}-\tfrac\varepsilon 2$. Indeed, this follows because we have defined $\eta_{0,\infty}$ such that $\lim_{\alpha'\to\infty} \eta_0(\alpha',\beta,\mu,\tau) = \eta_{0,\infty}$, as can be seen by comparing $\eta_0$ in [\[eq:eta_0\]](#eq:eta_0){reference-type="eqref" reference="eq:eta_0"} (see also [\[eq:mu_pol_log\]](#eq:mu_pol_log){reference-type="eqref" reference="eq:mu_pol_log"}) to $\eta_{0, \infty}$ in [\[eq:alpha-infty-definitions\]](#eq:alpha-infty-definitions){reference-type="eqref" reference="eq:alpha-infty-definitions"}. Now we couple $G$ with an IGIRG $G'$ with parameter $\alpha'$ (and different $\underline c, \overline c$), but identical parameters $\beta,\mu,\tau$. Theorem [Theorem 4](#thm:polynomial_regime){reference-type="ref" reference="thm:polynomial_regime"} applies to $G'$, so we obtain for sufficiently large $|x|$, $$\label{eq:limit-poly-rep2} d_{\mathcal C}^{G'}(0,x) \ge|x|^{\eta_{0,\infty}-\varepsilon}.$$ Since $G$ is a subgraph of $G'$, cost-distances in $G$ are larger, so [\[eq:limit-poly-rep2\]](#eq:limit-poly-rep2){reference-type="eqref" reference="eq:limit-poly-rep2"} remains true if we replace $d_{\mathcal C}^{G'}$ by $d_{\mathcal C}^{G}$.
Likewise, for the linear lower bound in Theorem [\[thm:linear_regime\]](#thm:linear_regime){reference-type="ref" reference="thm:linear_regime"} with $\alpha=\infty$, for any $\mu > \mu_{\mathrm{pol}}$ (i.e., $\eta_{0,\infty} =1$) we can find $\alpha'$ such that $\eta_0(\alpha',\beta,\mu,\tau) =1$ with the same parameters $\beta,\mu,\tau$. Hence, the same coupling also shows that for sufficiently large $|x|$, $$\begin{aligned} d_{\mathcal{C}}(0,x) > \kappa_1 |x|.\end{aligned}$$ This implies the lower bound in Theorem [\[thm:linear_regime\]](#thm:linear_regime){reference-type="ref" reference="thm:linear_regime"} for $\alpha = \infty$. Finally, the same argument also holds for GIRG, since for fixed $r>0$, two randomly chosen vertices $u_n,v_n$ in a box of volume $n$ will a.a.s. satisfy $|u_n - v_n| \ge r$ as $n\to \infty$. Thus the lower bound in Theorem [\[thm:finite_graph\]](#thm:finite_graph){reference-type="ref" reference="thm:finite_graph"} is also implied. This concludes the case $\alpha = \infty$. *(b)*&*(c)* Now we come to the case $\beta = \infty$. Note that we have shown part *(a)* already, so in particular the couplings that we construct now are also valid when $\alpha=\beta=\infty$. The idea is the same as for $\alpha=\infty$, but the coupling is much easier. Since the parameter $\beta$ does not influence the graph structure, we can use the same graph, but with different transmission costs. More precisely, consider IGIRG $G$ with $\beta=\infty$, i.e., the cumulative distribution function $F_L$ of the cost variables satisfies $\lim_{t\to 0} F_L(t)/t^{\beta} = 0 \mbox{ for all }0<\beta <\infty$. For any $\beta'\in(0,\infty)$, there exists $t_0\in (0,1)$ such that the distribution $F_L'$ given by $F_L'(t) = t^{\beta'}$ for $t<t_0$ and $F_L'(t) = 1$ for $t\geq t_0$ dominates $F_L$. The distribution $F_L'$ satisfies condition [\[eq:F_L-condition\]](#eq:F_L-condition){reference-type="eqref" reference="eq:F_L-condition"}, so it yields an IGIRG model.
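To see concretely why such a $t_0$ exists, consider the hypothetical $\beta=\infty$ cost distribution $F_L(t)=e^{-1/t}$ (our illustrative example, not from the paper), which indeed satisfies $\lim_{t\to0}F_L(t)/t^{\beta'}=0$ for every $\beta'$; with $\beta'=2$ one may take $t_0=0.2$:

```python
import math

# Illustrative beta = infinity cost distribution: F_L(t) = exp(-1/t)
# decays faster than any polynomial at 0, so near 0 it is dominated
# by F_L'(t) = t^{beta'}.  beta' = 2 and t0 = 0.2 are example values.
beta_prime, t0 = 2.0, 0.2

def F_L(t):
    return math.exp(-1.0 / t) if t > 0 else 0.0

def F_L_prime(t):
    return t ** beta_prime if t < t0 else 1.0

# F_L' >= F_L pointwise, i.e. costs drawn from F_L' are stochastically smaller:
assert all(F_L_prime(i / 1000) >= F_L(i / 1000) for i in range(1, 1001))
```

The pointwise domination of CDFs is exactly the stochastic domination used in the coupling: the $G'$ costs can be sampled below the $G$ costs edge by edge.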
Hence, we can couple $G$ to an IGIRG $G'$ with any parameter $\beta' \in (0,\infty)$ (and otherwise identical parameters except for the parameters $t_0,c_1,c_2$ that appear in [\[eq:F_L-condition\]](#eq:F_L-condition){reference-type="eqref" reference="eq:F_L-condition"}) such that $G$ and $G'$ are the same graph, and for every edge the transmission cost in $G$ is at least as large as in $G'$. We can now turn to proving the lower bound in Theorem [Theorem 4](#thm:polynomial_regime){reference-type="ref" reference="thm:polynomial_regime"} for $\beta =\infty$, so consider IGIRG $G$ for $\beta =\infty$, and let $\mu >0 = \mu_{\mathrm{log}}$. According to [\[eq:beta-infty-definitions\]](#eq:beta-infty-definitions){reference-type="eqref" reference="eq:beta-infty-definitions"}, in this case we set $\eta_{0,\infty} := 1$ for $\mu > \mu_{\mathrm{pol}}$ and $\eta_{0,\infty} := \min\{d\mu,\mu/\mu_{\mathrm{pol},\alpha}\}$ otherwise. Since $\eta_{0,\infty} = \lim_{\beta'\to\infty} \eta_0(\alpha,\beta',\mu,\tau)$, we can find $\beta' \in (0,\infty)$ such that $\eta_0' := \eta_0(\alpha,\beta',\mu,\tau) \geq \eta_{0,\infty} -\tfrac\varepsilon 2$, and couple $G$ to an IGIRG $G'$ with parameter $\beta'$, as described above. Moreover, by choosing $\beta'$ large enough, we can also ensure that $\mu > \mu_{\mathrm{log}}(\beta',\tau,\mu) = (3-\tau)/\beta'$. Then we can apply Theorem [Theorem 4](#thm:polynomial_regime){reference-type="ref" reference="thm:polynomial_regime"} to $G'$, and obtain that for sufficiently large $|x|$, $$d_{\mathcal C}^{G'}(0,x) \ge|x|^{\eta_0'-\varepsilon/2}.$$ Since cost-distances in $G'$ are less than or equal to cost-distances in $G$, the statement remains true if we replace $d_{\mathcal C}^{G'}$ by $d_{\mathcal C}^{G}$, and since $\eta_0'-\tfrac\varepsilon 2 \geq \eta_{0,\infty}-\varepsilon$, we may replace $\eta_0'-\tfrac\varepsilon 2$ by $\eta_{0,\infty}-\varepsilon$.
This yields the lower bound of Theorem [Theorem 4](#thm:polynomial_regime){reference-type="ref" reference="thm:polynomial_regime"} for $G$. As before, the proof for SFP is verbatim the same, and the proofs of the linear lower bound (i.e., the lower bound in Theorem [\[thm:linear_regime\]](#thm:linear_regime){reference-type="ref" reference="thm:linear_regime"}) and the lower bound for GIRG (the lower bound of Theorem [\[thm:finite_graph\]](#thm:finite_graph){reference-type="ref" reference="thm:finite_graph"}) are analogous. This concludes the case $\beta =\infty$, and concludes the proof of the lower bounds of Theorem [\[thm:threshold_regimes\]](#thm:threshold_regimes){reference-type="ref" reference="thm:threshold_regimes"}. ◻ # Linear Regime, Upper Bound {#sec:linear-regime} In the following section, we prove the upper bound of Theorem [\[thm:linear_regime\]](#thm:linear_regime){reference-type="ref" reference="thm:linear_regime"}, that cost-distance in IGIRG and SFP scales at most linearly with Euclidean distance, and the corresponding part of Theorem [\[thm:finite_graph\]](#thm:finite_graph){reference-type="ref" reference="thm:finite_graph"} for finite GIRGs in a box. We shall make the renormalisation argument outlined on page quantitative. We start with classical results on iid bond percolation, but first a definition: **Definition 17** (Deviation from straight line). *Given $u,v \in \mathbb{R}^d$, let $S_{u,v}$ denote the line segment between $u,v$. For $x\in \mathbb{R}^d$ we define the deviation $\mathrm{dev}_{uv}(x):=\|x-S_{u,v}\|$. Given a path $\pi=x_1\ldots x_k$ in a graph $G$ with vertices in $\mathbb{R}^d$, we define the *deviation of $\pi$ from $S_{uv}$* as $\mathrm{dev}_{uv}(\pi):=\max\{\mathrm{dev}_{uv}(x_i) \colon i \in [k]\}$.
Finally, let the *deviation of $\pi$* be $\mathrm{dev}(\pi) := \max\{\mathrm{dev}_{x_1x_k}(x_i) \colon i \in [k]\}$, i.e., its deviation from the segment between the endpoints.* The next two lemmas state two properties for bond percolation with high enough edge-density in $\mathbb{Z}^d$. First, the density of the infinite component is large in every ball $B_{(\log r)^{3/2}}(y)$, where $y$ may vary through $r^d$ vertices around the origin. Second, vertices in the infinite component can be joined by a path of linear length with small deviation. The first statement is implicit in [@DP-percolation], the second in [@antal1996chemical]. We give proofs in the appendix for completeness on pages and respectively. lemmaBondPercolationDensity [\[lem:bond-percolation-density\]]{#lem:bond-percolation-density label="lem:bond-percolation-density"} Let $d \in \mathbb{N}$ with $d \ge 2$, let $\varepsilon,\delta,\sigma \in (0,1)$, and let $r>0$. Let $\omega^\star$ be an iid Bernoulli bond percolation on $\mathbb{Z}^d$ with edge-retention probability $p := 1-\varepsilon$. Then, whenever $r {\,\gg_{\star}\,}\varepsilon, \delta$ and $\varepsilon{\,\ll_{\star}\,}\sigma,d$, almost surely $\omega^\star$ has a unique infinite component $\mathcal{C}_\infty^\star$ with $\mathbb{P}(0\in \mathcal{C}_\infty^\star)\ge 1-\sigma$ and $$\label{eq:bond-percolation-density} \forall x\in \mathbb{Z}^d:\quad \mathbb{P}\Big(\forall y\in B_r(x), \, \frac{|B_{(\log r)^{3/2}}(y) \cap \mathcal{C}_\infty^\star|}{|B_{(\log r)^{3/2}}(y)\cap\mathbb{Z}^d|} \ge 1-\sigma\Big) \ge 1-\delta.$$ lemmaBondPercolationLinear [\[lem:bond-percolation-linear\]]{#lem:bond-percolation-linear label="lem:bond-percolation-linear"} Let $d \in \mathbb{N}$ with $d \ge 2$, let $\zeta,\varepsilon,c \in (0,1)$, and let $\kappa,r>0$. Let $\omega^\star$ be an iid Bernoulli bond percolation on $\mathbb{Z}^d$ with edge-retention probability $p := 1-\varepsilon$.
Let $\mathcal{C}_\infty^\star$ be the infinite component of $\omega^\star$. For all $x\in\mathbb{Z}^d$, let $\mathcal{A}^\star_{\mathrm{linear}}(r,\kappa,\zeta,x)$ be the event that for all $y\in\mathcal{C}_\infty^\star \setminus B_r(x)$ there is a path $\pi$ from $x$ to $y$ with length at most $|\pi|\le\kappa |x-y|$ and deviation at most $\mathrm{dev}(\pi)\le \zeta |x-y|$. Then whenever $r{\,\gg_{\star}\,}\zeta,\varepsilon$, and $\kappa, 1/\varepsilon,1/c {\,\gg_{\star}\,}d$, $$\begin{aligned} \label{eq:bond-percolation-distances} \mathbb{P}(\mathcal{A}^\star_{\mathrm{linear}}(r,\kappa,\zeta,x) \mid x\in\mathcal{C}_\infty^\star)\ge 1-e^{-cr}. \end{aligned}$$ In order to apply Lemmas [\[lem:bond-percolation-density\]](#lem:bond-percolation-density){reference-type="ref" reference="lem:bond-percolation-density"} and [\[lem:bond-percolation-linear\]](#lem:bond-percolation-linear){reference-type="ref" reference="lem:bond-percolation-linear"} to IGIRG and SFP, we will need a coupling to bond percolation. We provide this in Lemma [Lemma 19](#lem:bond-percolation-coupling){reference-type="ref" reference="lem:bond-percolation-coupling"} for any graph model satisfying Definition [Definition 18](#def:dense-geometric){reference-type="ref" reference="def:dense-geometric"} below. Given a graph $G = (\mathcal V,\mathcal E)$ with vertex set $\mathcal V \subseteq \mathbb{R}^d$ and a set $A \subseteq \mathbb{R}^d$, we write $G[A]$ for the induced subgraph of $G$ on $\mathcal V[A] := \mathcal V\cap A$. Two (half-open) boxes are called *neighbouring* if their closures have non-empty intersection. **Definition 18** (Dense geometric random graphs, generally). *Let $d \in \mathbb{N}$ and let $\mathcal{S}$ be a partition of $\mathbb{R}^d$ into half-open boxes of the form $[a_1,a_1+R)\times\dots\times[a_d,a_d+R)$ for some $a_1,\dots,a_d \in \mathbb{R}$ and side-length $R>0$. 
When $\mathcal{S}$ is given, for all $z \in \mathbb{Z}^d$, we write $S_z$ for the unique box in $\mathcal{S}$ containing $R\!\cdot \!z$. Let $G=(\mathcal{V},\mathcal{E})$ be a random graph whose vertex set is either $\mathbb{Z}^d$ or is given by a homogeneous PPP on $\mathbb{R}^d$. For all $\varepsilon> 0$, we say $G$ is an *$(R,\varepsilon)$-dense geometric graph* with *boxing scheme* $\mathcal{S}$ if it satisfies the following properties (i)-(iii), and we call $G$ a *strong $(R,D,\varepsilon)$-dense geometric graph* if it satisfies (i)-(iv):* (i) *[\[item:dense-1\]]{#item:dense-1 label="item:dense-1"} $G[S_1]$ and $G[S_2]$ are independent for any disjoint Lebesgue measurable sets $S_1,S_2$;* (ii) *[\[item:dense-2\]]{#item:dense-2 label="item:dense-2"} for all boxes $S \in \mathcal{S}$, $\mathbb{P}(G[S] \text{ is non-empty and connected}\,) \geq 1-\varepsilon$;* (iii) *[\[item:dense-3\]]{#item:dense-3 label="item:dense-3"} for all neighbouring boxes $S_1, S_2 \in \mathcal{S}$, $$\mathbb{P}(G[S_1 \cup S_2] \text{ contains at least one edge from $S_1$ to $S_2$}) \geq 1-\varepsilon;$$* (iv) *[\[item:dense-4\]]{#item:dense-4 label="item:dense-4"} for all boxes $S \in \mathcal{S}$, $\mathbb{P}(G[S] \text{ has diameter at most $D$}\,) \geq 1-\varepsilon$.* Observe that $z\mapsto S_z$ is a bijection from $\mathbb{Z}^d$ to $\mathcal{S}$; this forms the basis of the coupling in Lemma [Lemma 19](#lem:bond-percolation-coupling){reference-type="ref" reference="lem:bond-percolation-coupling"} below, essentially acting as renormalisation. In IGIRG and SFP, the random graph $G_M$ described at the start of the section will take the role of the dense geometric graph (see Corollary [Corollary 22](#cor:dense-subgraph){reference-type="ref" reference="cor:dense-subgraph"}). **Lemma 19** (Renormalisation-coupling to bond-percolation). *Let $d \in \mathbb{N}$, $\varepsilon\in (0,1)$, and $K,R>0$. 
Suppose $K,1/\varepsilon{\,\gg_{\star}\,}R,d$ and let $G$ be an $(R,\varepsilon)$-dense geometric graph with boxing scheme $\mathcal{S}$, and let $\omega^\star$ be an iid Bernoulli bond percolation with retention probability $1 - 20d\varepsilon$. Then there exists a coupling between $G$ and $\omega^\star$ such that whenever $z_1z_2$ is open in $\omega^\star$: $G[S_{z_1}]$ and $G[S_{z_2}]$ are non-empty and connected; $G[S_{z_1}]$ and $G[S_{z_2}]$ contain at most $K$ vertices each; and there is an edge from $\mathcal{V}[S_{z_1}]$ to $\mathcal{V}[S_{z_2}]$ in $G$.* *Proof.* In Definition [Definition 18](#def:dense-geometric){reference-type="ref" reference="def:dense-geometric"}, the vertex set is either a homogeneous PPP or $\mathbb{Z}^d$. If $\mathcal{V}$ is a homogeneous PPP, there exists $K>0$ such that for all boxes $S \in \mathcal{S}$, $$\begin{aligned} \label{eq:dense_geometric_K} \mathbb{P}(|\mathcal{V}[S]| \le K) \ge 1-\varepsilon. \end{aligned}$$ If $\mathcal{V}=\mathbb{Z}^d$, then $|\mathcal{V}[S]| \le (R+1)^d$ and so [\[eq:dense_geometric_K\]](#eq:dense_geometric_K){reference-type="eqref" reference="eq:dense_geometric_K"} holds trivially with $K = (R+1)^d$. We now carry out a one-step renormalisation and define a site-bond percolation $\omega$ on $\mathbb{Z}^d$. Recall that $\mathcal{S}$ in Definition [Definition 18](#def:dense-geometric){reference-type="ref" reference="def:dense-geometric"} is a boxing scheme with side-length $R$ and $S_z$ is the box containing $R\!\cdot\!z$ for $z\in \mathbb{Z}^d$. In the renormalised lattice, we set a *site* $z\in\mathbb{Z}^d$ *occupied* in $\omega$ if $G[S_z]$ contains at most $K$ vertices and is connected. We set two neighbouring sites $z,z'$ connected by an *open bond* in $\omega$ if both sites are occupied and there is at least one edge in $G$ between $G[S_{z}], G[S_{z'}]$.
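As a toy illustration of this renormalised field $\omega$ (illustrative only: a one-dimensional path of boxes, with the box-level graph events replaced by independent coins while adjacent bonds still share their site variables), one can compare the probability that all bonds along a path are open with the product bound $p^{|S|}$ for $p=1-20d\varepsilon$, the retention probability of the iid percolation in the lemma:

```python
import random

random.seed(1)
d, eps, n_bonds, trials = 1, 0.01, 6, 20000
p = 1 - 20 * d * eps  # retention probability of the dominated iid percolation

# Union-bound arithmetic: 1-(4d-1)*5*eps >= 1-20*d*eps =: p for every d >= 1.
assert all((4 * k - 1) * 5 <= 20 * k for k in range(1, 50))

def all_bonds_open():
    """One sample of omega on a path of n_bonds+1 sites: a site is occupied with
    probability 1-2*eps, and a bond is open when both of its endpoints are
    occupied and an independent 1-eps coin (a stand-in for the box-crossing
    edge event) succeeds. Adjacent bonds are dependent through shared sites."""
    occupied = [random.random() < 1 - 2 * eps for _ in range(n_bonds + 1)]
    return all(
        occupied[i] and occupied[i + 1] and random.random() < 1 - eps
        for i in range(n_bonds)
    )

freq = sum(all_bonds_open() for _ in range(trials)) / trials
# Consistent with P(all bonds in S open) >= p^{|S|} despite the dependence.
assert freq >= p ** n_bonds
```

The point of the comparison is that the dependent field is much better connected than the iid lower bound $p^{|S|}$ requires, which is what leaves room for the Strassen-type domination used in the proof.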
Sites are occupied independently of each other in $\omega$ by Definition [Definition 18](#def:dense-geometric){reference-type="ref" reference="def:dense-geometric"}[\[item:dense-1\]](#item:dense-1){reference-type="eqref" reference="item:dense-1"}, each with probability at least $1-2\varepsilon$ by [\[item:dense-2\]](#item:dense-2){reference-type="eqref" reference="item:dense-2"} and [\[eq:dense_geometric_K\]](#eq:dense_geometric_K){reference-type="eqref" reference="eq:dense_geometric_K"}, and bonds that do not share a site are also open independently by [\[item:dense-1\]](#item:dense-1){reference-type="eqref" reference="item:dense-1"}. However, bonds that do share a site (say, $zz'$ and $zz''$) are *not* open independently since they are both influenced by $G[S_z]$. We now couple $\omega$ to a Bernoulli bond percolation $\omega^\star$ on $\mathbb{Z}^d$. Similar ideas have been used before --- see [@andjel1993characteristic; @liggett1997domination] in particular. By Definition [Definition 18](#def:dense-geometric){reference-type="ref" reference="def:dense-geometric"}[\[item:dense-2\]](#item:dense-2){reference-type="eqref" reference="item:dense-2"} and [\[item:dense-3\]](#item:dense-3){reference-type="eqref" reference="item:dense-3"} and by [\[eq:dense_geometric_K\]](#eq:dense_geometric_K){reference-type="eqref" reference="eq:dense_geometric_K"}, every bond is open with probability at least $1-5\varepsilon$, where the factor of $5$ comes from a union bound over the two sites being occupied and over [\[item:dense-3\]](#item:dense-3){reference-type="eqref" reference="item:dense-3"}. We next show an approximate independence as follows. Consider a bond $zz'$ in $\omega$, and let $N(zz') := \{\text{bonds incident to $z$ or $z'$}\} \setminus \{zz'\}$, with size $|N(zz')| = 4d-2$.
For any set of bonds $S \subseteq N(zz')$, $$\begin{aligned} \mathbb{P}(zz' \text{ open in } \omega\mid \text{all bonds in $S$ open}) \nonumber & \geq \mathbb{P}(zz' \text{ open} \text{ and } \text{all bonds in $S$ open}) \\ & \geq 1-(4d-1)\cdot 5\varepsilon\geq 1-20d\varepsilon=:p \end{aligned}$$ by a union bound. Using that bonds that do not share a site are independently open, one can iterate this argument to show that for every finite set $S$ of bonds, $\mathbb{P}(\text{all bonds in $S$ are open}) \geq p^{|S|}$. Hence, by Strassen's theorem [@lindvall1999strassen] we can couple $\omega$ with an independent bond percolation $\omega^{\star}$ on $\mathbb{Z}^d$ with retention-probability $p$, where every open bond in $\omega^{\star}$ is open in $\omega$. ◻ **Definition 20** (Bernoulli-induced infinite subgraph). *Let $\varepsilon\in (0,1)$ be such that Lemma [\[lem:bond-percolation-density\]](#lem:bond-percolation-density){reference-type="ref" reference="lem:bond-percolation-density"} applies with $\varepsilon_{\ref{lem:bond-percolation-density}}:=20d \varepsilon$ for some $r>0$ and some $\delta,\sigma \in(0,1)$. Let $G$ be an $(R,\varepsilon)$-dense geometric graph with boxing scheme $\mathcal{S}$ and let $\omega^\star$ be an iid Bernoulli bond percolation process with retention probability $p := 1-20d\varepsilon$ given by the coupling of Lemma [Lemma 19](#lem:bond-percolation-coupling){reference-type="ref" reference="lem:bond-percolation-coupling"}. When $d\ge 2$, write $\mathcal{C}_\infty^\star$ for the (unique) infinite component of $\omega^\star$ guaranteed by Lemma [\[lem:bond-percolation-density\]](#lem:bond-percolation-density){reference-type="ref" reference="lem:bond-percolation-density"}. 
We then define the following infinite subgraph of $G$: $$\label{eq:h-infty} \mathcal{H}_\infty(G,\mathcal{S},\omega^\star) = \bigcup_{z \in \mathcal{C}_\infty^\star} \mathcal{V}[S_z].$$ When there is no danger of confusion, we write $\mathcal{H}_\infty := \mathcal{H}_\infty(G,\mathcal{S},\omega^\star)$ and we may not explicitly define the bond percolation process $\omega^\star$.* The graph $\mathcal{H}_\infty$ corresponds to vertices in renormalised boxes that belong to $\mathcal{C}_\infty^\star$ of the renormalised bond-percolation model, hence we "blow up" $\mathcal{C}_\infty^\star$ to correspond to boxes in $G$. By the connectivity of $\mathcal{C}_\infty^\star$ and the properties of the coupling set out in Lemma [Lemma 19](#lem:bond-percolation-coupling){reference-type="ref" reference="lem:bond-percolation-coupling"}, $G[\mathcal{H}_\infty]$ is also a.s. connected. However, $\mathcal{H}_\infty$ may only be a proper subset of some infinite component $\mathcal{C}_\infty$ of $G$. Moreover, $G$ need not have a unique infinite component. Indeed, suppose $G$ is a random graph model with a PPP as $\mathcal{V}$, where each vertex is coloured red or blue independently at random, and there is no edge with endpoints of different colours. If the probability of 'red' is sufficiently high with respect to the box side length $R$, so that with probability at least $1-\varepsilon$ a box $S\in\mathcal{S}$ contains no blue vertices, then Definition [Definition 18](#def:dense-geometric){reference-type="ref" reference="def:dense-geometric"}[\[item:dense-1\]](#item:dense-1){reference-type="eqref" reference="item:dense-1"}--[\[item:dense-2\]](#item:dense-2){reference-type="eqref" reference="item:dense-2"} may be satisfied, and there will be an infinite component of red vertices. However, there may also be an infinite component of blue vertices with arbitrary behaviour.
Of course, if $\varepsilon$ is small, then $\mathcal{H}_\infty$ has density larger than $1/2$ and so no other infinite component can have density larger than $1/2$, so $\mathcal{H}_\infty$ is uniquely determined as the infinite component with density close to $1$. We now show in Lemma [\[lem:dense-geometric-locally-dense\]](#lem:dense-geometric-locally-dense){reference-type="ref" reference="lem:dense-geometric-locally-dense"} that $\mathcal{H}_\infty$ is "locally dense" in $G$, with density close to one, and in Lemma [Lemma 21](#lem:dense-geometric-linear-paths){reference-type="ref" reference="lem:dense-geometric-linear-paths"} that $\mathcal{H}_\infty$ is a well-connected set containing short low-deviation paths between many pairs of vertices. lemmaLocallyDense [\[lem:dense-geometric-locally-dense\]]{#lem:dense-geometric-locally-dense label="lem:dense-geometric-locally-dense"} Let $d \in \mathbb{N}$ with $d \ge 2$, let $\varepsilon,\delta,\sigma \in (0,1)$, and let $r,R>0$. Suppose that $r{\,\gg_{\star}\,}\varepsilon, \delta,R$, and that $\varepsilon{\,\ll_{\star}\,}\sigma,d$. Let $G$ be an $(R,\varepsilon)$-dense geometric graph with boxing scheme $\mathcal{S}$. Then a.s. $\mathcal{H}_\infty$ is infinite and $G[\mathcal{H}_\infty]$, with $\mathcal{H}_\infty$ as defined in [\[eq:h-infty\]](#eq:h-infty){reference-type="eqref" reference="eq:h-infty"}, is connected; for any box $S\in \mathcal{S}$, $\mathbb{P}(\mathcal{V}\cap S\subseteq \mathcal{H}_\infty)\ge 1-\sigma$, and moreover $$\begin{aligned} \forall x\in\mathbb{R}^d:\quad \mathbb{P}\left( \forall y\in B_r(x), \, \frac{|B_{(\log r)^2}(y) \cap \mathcal{H}_\infty|}{|B_{(\log r)^2}(y)\cap\mathcal{V}|} \ge 1-\sigma\right) \ge 1-\delta.
\label{eq:dense-geometric-density} \end{aligned}$$ The statement in [\[eq:dense-geometric-density\]](#eq:dense-geometric-density){reference-type="eqref" reference="eq:dense-geometric-density"} gets stronger if one decreases the radius $(\log r)^2$, since then the infinite subgraph $\mathcal{H}_\infty$ is present already at lower radii near every vertex $y$ inside $B_r(x)$. For our proofs later, $(\log r)^2$ is more than strong enough, but one could improve [\[eq:dense-geometric-density\]](#eq:dense-geometric-density){reference-type="eqref" reference="eq:dense-geometric-density"} to having $\Theta(\log r)$ as radius around $y$. If the vertex set of $G$ is $\mathbb{Z}^d$, then Lemma [\[lem:dense-geometric-locally-dense\]](#lem:dense-geometric-locally-dense){reference-type="ref" reference="lem:dense-geometric-locally-dense"} follows from Lemma [\[lem:bond-percolation-density\]](#lem:bond-percolation-density){reference-type="ref" reference="lem:bond-percolation-density"} (applied with some $\sigma_{\ref{lem:bond-percolation-density}}$ that is sufficiently small compared to $\sigma$ in [\[eq:dense-geometric-density\]](#eq:dense-geometric-density){reference-type="eqref" reference="eq:dense-geometric-density"}) by the coupling in Lemma [Lemma 19](#lem:bond-percolation-coupling){reference-type="ref" reference="lem:bond-percolation-coupling"}. If the vertex set comes from a Poisson point process, we use that the number of vertices in each ball $B_{(\log r)^2}(y) \cap \mathcal{H}_\infty$ concentrates around its mean and that only an at most $\sigma_{\ref{lem:bond-percolation-density}}$-fraction of boxes have too few/too many vertices in $B_{(\log r)^2}(y) \setminus \mathcal{H}_\infty$. The details are straightforward and we give a proof of Lemma [\[lem:dense-geometric-locally-dense\]](#lem:dense-geometric-locally-dense){reference-type="ref" reference="lem:dense-geometric-locally-dense"} in the appendix on page .
**Lemma 21** (Linear graph distances in dense geometric random graphs). *Let $d \in \mathbb{N}$ with $d \ge 2$, let $\zeta,\varepsilon,\delta \in (0,1)$, and let $\kappa,r,R,C>0$. Let $G$ be an $(R,\varepsilon)$-dense geometric graph with boxing scheme $\mathcal{S}$. For $x \in \mathcal{V}$, let $\mathcal{A}_{\mathrm{linear}}(r,\kappa,C,\zeta,x)$ be the event that for all $u \in B_r(x)\cap \mathcal{H}_\infty$ and $v \in \mathcal{H}_\infty$, there is a path from $u$ to $v$ in $G$ of length at most $\kappa|u-v|+C$ and deviation at most $\zeta|u-v|+C$. Then, whenever $C{\,\gg_{\star}\,}r {\,\gg_{\star}\,}\varepsilon,\delta,\zeta,R,d$ and $\kappa {\,\gg_{\star}\,}R,d$ and $\varepsilon{\,\ll_{\star}\,}d$, we have $\mathbb{P}(\mathcal{A}_{\mathrm{linear}}(r,\kappa,C,\zeta,x))\ge 1-\delta$.* *Proof.* Let $\omega^\star$ be a bond percolation process with retention probability $1-20d\varepsilon$ as required in Lemma [Lemma 19](#lem:bond-percolation-coupling){reference-type="ref" reference="lem:bond-percolation-coupling"}, so that $\mathcal{H}_\infty = \mathcal{H}_\infty(G,\mathcal{S},\omega^\star)$ in [\[eq:h-infty\]](#eq:h-infty){reference-type="eqref" reference="eq:h-infty"} exists. Indeed, since $\varepsilon{\,\ll_{\star}\,}d$, by Lemma [\[lem:bond-percolation-density\]](#lem:bond-percolation-density){reference-type="ref" reference="lem:bond-percolation-density"} $\omega^\star$ has a unique infinite component $\mathcal{C}_\infty^\star$ a.s. and $\mathcal{H}_\infty$ is well-defined. Let $K>0$ be as in Lemma [Lemma 19](#lem:bond-percolation-coupling){reference-type="ref" reference="lem:bond-percolation-coupling"}, so that $K$ is a function of $d$ and $R$, and let $\kappa' = \sqrt \kappa$ and $C' = \sqrt{C}$.
Denote the event $\mathcal{A}^\star_{\mathrm{linear}}(r/R,\kappa',\zeta,z)$ of Lemma [\[lem:bond-percolation-linear\]](#lem:bond-percolation-linear){reference-type="ref" reference="lem:bond-percolation-linear"} by $\mathcal{A}^\star_1(z)$, so if $\mathcal{A}_1^\star(z)$ occurs then there is a short low-deviation path from $z$ to any site in $\mathcal{C}_\infty^\star \setminus B_{r/R}(z)$. Let $\mathcal{A}_2^\star(z)$ be the event that for all $v \in \mathcal{C}_\infty^\star\cap B_{r/R}(z)$, there is a path in $\omega^\star$ from $z$ to $v$ of length and deviation at most $C'$. Observe that if $\mathcal{A}^\star_1(z) \cap \mathcal{A}^\star_2(z)$ occurs, then for all $v \in \mathcal{C}_\infty^\star$ there is a path in $\omega^\star$ from $z$ to $v$ of length at most $C' + \kappa'|z-v|$ and deviation at most $C' + \zeta|z-v|$. Recall that for $z\in \mathbb{Z}^d$, we denote by $S_z$ the box in $\mathcal{S}$ that contains $R\!\cdot\!z$. We define our last event as $$\begin{aligned} \mathcal{A}^+(x) &:= \bigcap_{z\in\mathbb{Z}^d\,:\, S_z \cap B_r(x)\ne\emptyset} \Big(\big(\mathcal{A}_1^\star(z) \cap \mathcal{A}_2^\star(z)\big) \cup \{z \notin \mathcal{C}_\infty^\star\}\Big). \end{aligned}$$ Using the coupling of Lemma [Lemma 19](#lem:bond-percolation-coupling){reference-type="ref" reference="lem:bond-percolation-coupling"} we will prove that $$\begin{aligned} \label{eq:dense-geometric-linear-main-event} \mathcal{A}^+(x) &\subseteq \mathcal{A}_{\mathrm{linear}}(r,\kappa,C,\zeta,x),\\\label{eq:dense-geometric-linear-A1-bound} \sum_{z \in \mathbb{Z}^d: S_z\cap B_r(x)\ne\emptyset}\mathbb{P}\big(\neg\mathcal{A}_1^\star(z) \cap \{z \in \mathcal{C}_\infty^\star\}\big) &\le \delta/2,\\\label{eq:dense-geometric-linear-A2-bound} \mathbb{P}\bigg(\bigcap_{z \in \mathbb{Z}^d: S_z\cap B_r(x)\ne\emptyset}\big(\mathcal{A}_2^\star(z) \cup \{z\notin\mathcal{C}_\infty^\star\} \big)\bigg) &\ge 1 - \delta/2. 
\end{aligned}$$ Given [\[eq:dense-geometric-linear-main-event\]](#eq:dense-geometric-linear-main-event){reference-type="eqref" reference="eq:dense-geometric-linear-main-event"}--[\[eq:dense-geometric-linear-A2-bound\]](#eq:dense-geometric-linear-A2-bound){reference-type="eqref" reference="eq:dense-geometric-linear-A2-bound"}, a union bound gives that $\mathbb{P}(\mathcal{A}_{\mathrm{linear}}(r,\kappa,C,\zeta,x)) \ge \mathbb{P}(\mathcal{A}^+)\ge 1-\delta$, as required. We first prove [\[eq:dense-geometric-linear-A1-bound\]](#eq:dense-geometric-linear-A1-bound){reference-type="eqref" reference="eq:dense-geometric-linear-A1-bound"}, starting with $$\sum_{z \in \mathbb{Z}^d\,:\,S_z\cap B_r(x)\ne\emptyset}\mathbb{P}\big(\neg\mathcal{A}_1^\star(z) \cap \{z \in \mathcal{C}_\infty^\star\}\big) \le \sum_{z \in \mathbb{Z}^d\,:\,S_z\cap B_r(x)\ne\emptyset}\mathbb{P}\big(\neg\mathcal{A}_1^\star(z) \mid z \in \mathcal{C}_\infty^\star\big).$$ By Lemma [\[lem:bond-percolation-linear\]](#lem:bond-percolation-linear){reference-type="ref" reference="lem:bond-percolation-linear"}, since $\kappa'{\,\gg_{\star}\,}d {\,\gg_{\star}\,}\varepsilon$ and $r{\,\gg_{\star}\,}\varepsilon,\zeta,R,d$, there exists $c_{\ref{lem:bond-percolation-linear}}$ depending only on $d$ such that each term in the sum is at most $e^{-c_{\ref{lem:bond-percolation-linear}}r/R}$. Since $S_z$ is a box of side-length $R$, the number of terms $|\{z\in \mathbb{Z}^d: S_z\cap B_r(x)\ne\emptyset\}|$ is at most $(3r/R)^d$, and since $r{\,\gg_{\star}\,}\delta, R, d$ (and thus $r{\,\gg_{\star}\,}c_{\ref{lem:bond-percolation-linear}}$), $$\sum_{z \in \mathbb{Z}^d\,:\,S_z\cap B_r(x)\ne\emptyset}\mathbb{P}\big(\neg\mathcal{A}_1^\star(z) \cap \{z \in \mathcal{C}_\infty^\star\}\big) \le (3r/R)^de^{-c_{\ref{lem:bond-percolation-linear}}r/R} \le \delta/2$$ for all sufficiently large $r$ given $\delta$.
We next prove [\[eq:dense-geometric-linear-A2-bound\]](#eq:dense-geometric-linear-A2-bound){reference-type="eqref" reference="eq:dense-geometric-linear-A2-bound"}. The maximum length and deviation of a path between any pair of sites in $\mathcal{C}_\infty^\star \cap B_{3r/R}(x)$ is a random variable which is a.s. finite, so since $C'=\sqrt{C}{\,\gg_{\star}\,}\delta,r$ this maximum must be at most $C'$ with probability at least $1-\delta/2$, for all sufficiently large $C'$. Thus [\[eq:dense-geometric-linear-A2-bound\]](#eq:dense-geometric-linear-A2-bound){reference-type="eqref" reference="eq:dense-geometric-linear-A2-bound"} follows. Finally, we prove [\[eq:dense-geometric-linear-main-event\]](#eq:dense-geometric-linear-main-event){reference-type="eqref" reference="eq:dense-geometric-linear-main-event"}. Suppose $\mathcal{A}^+(x)$ occurs, and let $u \in B_r(x) \cap \mathcal{H}_\infty$ and $v \in \mathcal{H}_\infty$ be two vertices of $G$. Let $u^- \in \mathbb{Z}^d$ be such that $u \in S_{u^-}$, and let $v^- \in \mathbb{Z}^d$ be such that $v \in S_{v^-}$; thus $u^- = \lfloor u/R \rfloor$ and $v^- = \lfloor v/R \rfloor$, and $u^-,v^-\in\mathcal{C}^\star_\infty$ by definition of $\mathcal{H}_\infty$ in [\[eq:h-infty\]](#eq:h-infty){reference-type="eqref" reference="eq:h-infty"}. Since $\mathcal{A}^+(x)$ occurs and $u^-\in\mathcal{C}^\star_\infty$, the event $\mathcal{A}_1^\star(u^-) \cap \mathcal{A}_2^\star(u^-)$ also occurs; thus there exists a path $\pi^\star = z_1\dots z_m$ (for some $m$) from $u^-$ to $v^-$ in $\omega^\star$ with $$\begin{aligned} |\pi^\star| \le C'+\kappa'|u^- - v^-|,\qquad \mbox{dev}(\pi^\star) \le C'+\zeta|u^- - v^-|. \end{aligned}$$ The statement of the lemma becomes weaker with larger $\kappa$, so we may just prove it for one fixed value $\kappa=\kappa(R,d)$, and deduce it for all larger values of $\kappa$. We may thus assume for this fixed value of $\kappa$ that $C'{\,\gg_{\star}\,}\kappa$ and $C' {\,\gg_{\star}\,}\kappa'=\sqrt{\kappa}$.
For the same reason we may assume $C'{\,\gg_{\star}\,}\zeta$. Since also $C'{\,\gg_{\star}\,}R,d$ and the diameter of each box is at most $R\sqrt{d}$, we have $$\begin{aligned} \label{eq:dense-geometric-pistar} |\pi^\star| \le 2C'+ \kappa'|u - v|/R,\qquad \mbox{dev}(\pi^\star) \le 2C'+\zeta|u - v|/R. \end{aligned}$$ By the properties of the coupling with $\omega^\star$ set out in Lemma [Lemma 19](#lem:bond-percolation-coupling){reference-type="ref" reference="lem:bond-percolation-coupling"}, for each $z_i\in \pi^\star$, the graph $G[S_{z_i}]$ is connected and contains between $1$ and $K$ vertices, and for all $i \le m-1$ there is an edge between $G[S_{z_i}]$ and $G[S_{z_{i+1}}]$. Since $u \in S_{z_1}$ and $v \in S_{z_m}$, we can thus find a path $\pi^G$ from $u$ to $v$ in $G[S_{z_1} \cup \dots \cup S_{z_m}]$. Since each box contains at most $K$ vertices, it follows from [\[eq:dense-geometric-pistar\]](#eq:dense-geometric-pistar){reference-type="eqref" reference="eq:dense-geometric-pistar"} that $$|\pi^G| \le Km = K(|\pi^\star|+1) \le 2K\kappa'|u-v|/R + K(2C'+1).$$ Likewise, since each box has side length $R$ and diameter at most $R\sqrt{d}$, $$\mathrm{dev}(\pi^G) \le R\sqrt{d} + R\cdot\mathrm{dev}(\pi^\star) \le \zeta |u-v| + 2RC' + R\sqrt{d}.$$ Recall that $K$ is a function of $d$ and $R$, and that $\kappa,C{\,\gg_{\star}\,}d,R$; we therefore recover $|\pi^G| \le \kappa|u-v|+C$ and $\mathrm{dev}(\pi^G) \le \zeta |u-v|+C$. Thus $\mathcal{A}_{\mathrm{linear}}(r,\kappa,C,\zeta,x)$ occurs, and so $\mathcal{A}^+(x)\subseteq \mathcal{A}_{\mathrm{linear}}(r,\kappa,C,\zeta,x)$ as claimed. ◻ We will state the following corollary in a stronger form than necessary here; we only need part [\[item:cor-3\]](#item:cor-3){reference-type="eqref" reference="item:cor-3"} and the 'strong' version of part [\[item:cor-1\]](#item:cor-1){reference-type="eqref" reference="item:cor-1"} below in our companion paper [@komjathy2022one1].
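The deviation of Definition 17, used throughout the bookkeeping above, is elementary to compute for concrete paths; here is a minimal sketch (illustrative only, independent of the random graph models), implementing $\mathrm{dev}_{uv}(x)$ as the Euclidean distance to the segment $S_{u,v}$ and $\mathrm{dev}(\pi)$ as its maximum over the path:

```python
import math

def dist_to_segment(x, u, v):
    """Euclidean distance from point x to the line segment S_{u,v}."""
    ux = [xi - ui for xi, ui in zip(x, u)]
    uv = [vi - ui for vi, ui in zip(v, u)]
    denom = sum(c * c for c in uv)
    # Project x onto the line through u and v, then clamp onto the segment.
    t = 0.0 if denom == 0 else max(0.0, min(1.0, sum(a * b for a, b in zip(ux, uv)) / denom))
    closest = [ui + t * c for ui, c in zip(u, uv)]
    return math.dist(x, closest)

def deviation(path):
    """dev(pi): maximum distance of a path vertex from the segment
    between the first and last vertex of the path."""
    u, v = path[0], path[-1]
    return max(dist_to_segment(x, u, v) for x in path)

# A straight path has deviation 0; a detour through (2,3) deviates by 3.
assert deviation([(0, 0), (2, 0), (4, 0)]) == 0.0
assert deviation([(0, 0), (2, 3), (4, 0)]) == 3.0
```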
Recall from [\[eq:parameters\]](#eq:parameters){reference-type="eqref" reference="eq:parameters"} that $\textnormal{\texttt{par}}\xspace$ is the set of model parameters. **Corollary 22** (Linear costs in Bernoulli-induced infinite subgraph in IGIRG and SFP). *Consider $1$-FPP in Definition [Definition 2](#def:1-FPP){reference-type="ref" reference="def:1-FPP"} on a graph $G=(\mathcal{V},\mathcal{E})$ that is an IGIRG or SFP satisfying assumptions [\[eq:power_law\]](#eq:power_law){reference-type="eqref" reference="eq:power_law"}--[\[eq:F_L-condition\]](#eq:F_L-condition){reference-type="eqref" reference="eq:F_L-condition"} with $\tau\in(2,3)$. Let $\delta,\varepsilon,\sigma \in (0,1)$, let $r,M,C,\kappa,\zeta,t > 0$, and let $R := M^{2/d}/\sqrt{d}$. Suppose $C{\,\gg_{\star}\,}r {\,\gg_{\star}\,}M,\zeta,\delta$, and that $\kappa {\,\gg_{\star}\,}M {\,\gg_{\star}\,}\varepsilon,\sigma,t, \textnormal{\texttt{par}}\xspace$. Finally, suppose that $\varepsilon{\,\ll_{\star}\,}\sigma,\textnormal{\texttt{par}}\xspace$. Let $I_M:=[M,2M]$, and let $G_M=(\mathcal{V}_M,\mathcal{E}_M)$ be the subgraph of $G$ formed by vertices $\mathcal{V}_M:=\{v\in\mathcal{V}: W_v\in I_M\}$ and edges $\mathcal{E}_M:=\{uv\in\mathcal{E}: u,v\in \mathcal{V}_M, \mathcal{C}(uv)\le M^{3\mu}\}$. For $t$ as above, let $\{x_1, \dots, x_t\} \subseteq \mathbb{Z}^d$ for SFP and $\{x_1, \dots, x_t \}\subseteq \mathbb{R}^d$ for IGIRG, and let $\mathcal{F}:= \{x_1, \dots, x_t \in \mathcal{V}\}$ and $\mathcal{F}_M := \{x_1, \dots, x_t\in\mathcal{V}_M\}$. Then:* (i) *[\[item:cor-1\]]{#item:cor-1 label="item:cor-1"} For all dimensions $d\ge 1$, conditioned on either $\mathcal{F}$ or $\mathcal{F}_M$, $G_M$ is a strong $(R,2, e^{-M^{3-\tau-\varepsilon}})$-dense and an $(R,\varepsilon)$-dense geometric graph in the sense of Definition [Definition 18](#def:dense-geometric){reference-type="ref" reference="def:dense-geometric"}.* (ii) *[\[item:cor-2\]]{#item:cor-2 label="item:cor-2"} For all dimensions $d\ge 2$, a.s. 
$\mathcal{H}_\infty$ is infinite, $G[\mathcal{H}_\infty]$ is connected, $G$ has a unique infinite component $\mathcal{C}_\infty$, and $\mathcal{H}_\infty \subseteq \mathcal{V}(\mathcal{C}_\infty) \cap \mathcal{V}_M$.* (iii) *[\[item:cor-3\]]{#item:cor-3 label="item:cor-3"} For all dimensions $d\ge 2$, and $x \in \mathbb{R}^d$, $$\label{eq:dense-C_infinity^M} \mathbb{P}\Big(\forall y\in B_r(x), \, \frac{|B_{(\log r)^2}(y) \cap \mathcal{H}_\infty|}{|B_{(\log r)^2}(y)\cap\mathcal{V}_M|} \ge 1-\sigma\,\Big|\,\mathcal{F}\Big) \ge 1-\delta.$$* (iv) *[\[item:cor-4\]]{#item:cor-4 label="item:cor-4"} For $x \in \mathbb{R}^d$, let $\mathcal{A}_\mathrm{linearcost}(r,\kappa,C,\zeta,x)$ be the event that for all $u \in B_r(x) \cap \mathcal{H}_\infty$ and all $v \in \mathcal{H}_\infty$, there is a path from $u$ to $v$ of cost at most $\kappa |u-v| + C$ and deviation at most $\zeta |u-v| + C$. Then for all dimensions $d\ge 2$, $\mathbb{P}(\mathcal{A}_{\mathrm{linearcost}}(r,\kappa,C,\zeta,x)\mid \mathcal{F}) \ge 1-\delta$.* *Proof.* Let $\mathcal{S}$ be a boxing scheme of side length $R=M^{2/d}/\sqrt{d}$. Using Definition [Definition 18](#def:dense-geometric){reference-type="ref" reference="def:dense-geometric"}, we now prove that $G_M$ is a strong $(R,2,\varepsilon_{\ref{def:dense-geometric}})$-dense geometric graph for $\varepsilon_{\ref{def:dense-geometric}}:= e^{-M^{3-\tau-\varepsilon}}$ with boxing scheme $\mathcal{S}$. Note that this automatically implies that $G_M$ is also $(R,\varepsilon)$-dense, since $M {\,\gg_{\star}\,}\varepsilon$. Definition [Definition 18](#def:dense-geometric){reference-type="ref" reference="def:dense-geometric"}(i) follows from the definitions of the models. For the other conditions we first lower-bound the expected number of vertices in a box $S\in \mathcal{S}$.
In both IGIRG and SFP, we use [\[eq:power_law\]](#eq:power_law){reference-type="eqref" reference="eq:power_law"} to bound: $$\label{eq:W-in-IM} \begin{aligned} \mathbb{P}(W\in I_M) &= (1-F_W(M)) - (1- F_W(2M)) = \frac{\ell(M)}{M^{\tau-1}} - \frac{\ell(2M)}{(2M)^{\tau-1}} \\ &= M^{-(\tau-1)} \ell(M) \left(1- 2^{-(\tau-1)} \frac{\ell(2M)}{\ell(M)}\right)\ge M^{-(\tau-1)-\varepsilon/4}, \end{aligned}$$ where to obtain the last inequality we used that $\ell$ is slowly-varying, so $\ell(2M)/\ell(M)\to 1$, and that $M{\,\gg_{\star}\,}\varepsilon$. In IGIRG, $\mathcal{V}_M$ follows a homogeneous PPP with intensity $\mathbb{P}(W\in I_M)$ on $\mathbb{R}^d$; since $S$ has side length $R$, $|\mathcal{V}_M[S]|$ is therefore a Poisson random variable with mean $\lambda_M(\mathbb{R}^d):=R^d\mathbb{P}(W\in I_M)$, which is at least $R^dM^{-(\tau-1)-\varepsilon/4} = d^{-d/2}M^{3-\tau-\varepsilon/4}$. Since $\tau<3$ and we assumed $\varepsilon{\,\ll_{\star}\,}\textnormal{\texttt{par}}\xspace$, the exponent is positive, and so $\lambda_M \ge M^{3-\tau-\varepsilon/2}$ since $M {\,\gg_{\star}\,}d, \varepsilon$. Similarly, in SFP the number of vertices in $\mathcal{V}_M[S]$ follows a binomial distribution with mean $\lambda_M(\mathbb{Z}^d):=|\mathbb{Z}^d\cap S|\cdot \mathbb{P}(W\in I_M)\ge d^{-d/2}M^{3-\tau-\varepsilon/4}/2$ (where the factor of $2$ suffices to account for boundary effects since $M$ is large), which is again at least $M^{3-\tau-\varepsilon/2}$ for $M{\,\gg_{\star}\,}\varepsilon$. We now study the edge probabilities within the box $S$. Consider two vertices $u,v \in \mathcal{V}_M[S]$. Their distance is at most the diameter of the box, $\sqrt{d}R\le M^{2/d}$, and $W_u,W_v\in I_M=[M, 2M]$, so $W_uW_v/|u-v|^d\ge 1$ holds. Recall that the edges have cost $(W_uW_v)^\mu L_{uv}\le 4^\mu M^{2\mu}L_{uv}$ in $1$-FPP, while we keep the edge in $G_M$ only if this edge cost is at most $M^{3\mu}$.
Thus for all vertices $u,v\in \mathcal{V}_M[S]$, $$\begin{aligned} \begin{split}\label{eq:M-mesh-probability} \mathbb P\big(uv \in \mathcal{E}, \mathcal{C}(uv)\le M^{3\mu}\mid u,v\in \mathcal{V}_M[S] \big) &\ge \underline{c}\left(1 \wedge \frac{W_uW_v}{|u-v|^d}\right)^{\alpha} \cdot F_L\big((W_uW_v)^{-\mu}M^{3\mu}\big) \\ &\ge \underline{c}F_L(4^{-\mu}M^{\mu}) \ge \underline{c}/2, \end{split} \end{aligned}$$ where the last inequality holds since $M {\,\gg_{\star}\,}\textnormal{\texttt{par}}\xspace$. Note that [\[eq:M-mesh-probability\]](#eq:M-mesh-probability){reference-type="eqref" reference="eq:M-mesh-probability"} holds *uniformly* over the weights in $I_M$ and locations of vertices in $S$, and is also valid when $\alpha=\infty$ or $\beta=\infty$. With the vertex set $\mathcal{V}_M$ exposed, the presence of edges in $G_M[S]$ therefore stochastically dominates an independent collection of Bernoulli$(\underline{c}/2)$ random variables. Thus, the graph $G_M[S]$ dominates an Erdős-Rényi random graph with number of vertices distributed as $\mbox{Poisson}(\lambda_M(\mathbb{R}^d))$ or binomial with mean $\lambda_M(\mathbb{Z}^d)$, with $\lambda_M(\mathbb{R}^d), \lambda_M(\mathbb{Z}^d)$ both at least $M^{3-\tau-\varepsilon/2}$, and *constant* connection probability $\underline{c}/2$. This Erdős-Rényi random graph is non-empty and connected with diameter at most two with probability at least $1-\exp(-\Theta(\lambda_M))\ge 1-\exp(-M^{3-\tau-\varepsilon})=1-\varepsilon_{\ref{def:dense-geometric}}$, see [@frieze2023random Theorem 7.1].
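The domination step above can be sanity-checked numerically. The following sketch (an illustration only, not part of the proof; the values $\lambda = 60$ and $p = 0.5$ are ad-hoc stand-ins for $\lambda_M$ and $\underline{c}/2$) estimates the probability that an Erdős-Rényi graph on a Poisson number of vertices with constant connection probability is non-empty and has diameter at most two:

```python
import math
import random

def er_diameter_le_two(n, p, rng):
    """Sample G(n, p) and check: non-empty and diameter <= 2."""
    if n == 0:
        return False
    adj = [set() for _ in range(n)]
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].add(v)
                adj[v].add(u)
    # Diameter <= 2 iff every pair is adjacent or shares a neighbour.
    return all(v in adj[u] or adj[u] & adj[v]
               for u in range(n) for v in range(u + 1, n))

def sample_poisson(lam, rng):
    """Knuth's product-of-uniforms sampler; fine for moderate lam."""
    threshold, k, prod = math.exp(-lam), 0, 1.0
    while True:
        prod *= rng.random()
        if prod <= threshold:
            return k
        k += 1

rng = random.Random(0)
lam, p, trials = 60.0, 0.5, 100
good = sum(er_diameter_le_two(sample_poisson(lam, rng), p, rng)
           for _ in range(trials))
print(good / trials)  # close to 1 for these parameters
```

For a fixed pair of vertices, the failure probability $(1-p)(1-p^2)^{n-2}$ is stretched-exponentially small in $n$, which mirrors the $1-\exp(-\Theta(\lambda_M))$ bound quoted from [@frieze2023random].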
Hence, $G_M$ satisfies conditions [\[item:dense-2\]](#item:dense-2){reference-type="eqref" reference="item:dense-2"}--[\[item:dense-4\]](#item:dense-4){reference-type="eqref" reference="item:dense-4"} of Definition [Definition 18](#def:dense-geometric){reference-type="ref" reference="def:dense-geometric"} with $\varepsilon_{\ref{def:dense-geometric}}$ and is thus a strong $(R,2,\varepsilon_{\ref{def:dense-geometric}})$-dense geometric graph, which proves that [\[item:cor-1\]](#item:cor-1){reference-type="eqref" reference="item:cor-1"} of Corollary [Corollary 22](#cor:dense-subgraph){reference-type="ref" reference="cor:dense-subgraph"} holds unconditionally. Moreover, since $R{\,\gg_{\star}\,}t$, the above argument still holds conditioned on any intersection $\mathcal{F}$ of at most $t$ events of the form $z \in \mathcal{V}$ or $z\in\mathcal{V}_M$; from this point on in the proof, we always condition on $\mathcal{F}$, and the only property of $G_M$ we use is that it is an $(R,\varepsilon)$-dense geometric graph conditioned on $\mathcal{F}$. When $d\ge 2$, let $\omega^\star$ be an iid Bernoulli bond percolation with retention probability $1-20d\varepsilon$, and recall $\mathcal{H}_\infty := \mathcal{H}_\infty(G_M, \mathcal{S},\omega^\star)$ from [\[eq:h-infty\]](#eq:h-infty){reference-type="eqref" reference="eq:h-infty"} in Definition [Definition 20](#def:dense-geometric-H){reference-type="ref" reference="def:dense-geometric-H"}. 
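The retention probability $1-20d\varepsilon$ of $\omega^\star$ is close to $1$. As a standalone numerical illustration of why such highly supercritical bond percolation leaves behind a very dense open cluster (the values $n=80$, $p=0.95$ are ad hoc and unrelated to the lemma's actual parameters):

```python
import random
from collections import Counter

def largest_cluster_fraction(n, p, seed=0):
    """Bernoulli bond percolation on the n x n grid graph: keep each
    nearest-neighbour edge independently with probability p and return
    |largest open cluster| / n^2, computed via union-find."""
    rng = random.Random(seed)
    parent = list(range(n * n))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    for x in range(n):
        for y in range(n):
            v = x * n + y
            if x + 1 < n and rng.random() < p:
                union(v, v + n)  # edge to (x+1, y)
            if y + 1 < n and rng.random() < p:
                union(v, v + 1)  # edge to (x, y+1)

    sizes = Counter(find(v) for v in range(n * n))
    return max(sizes.values()) / (n * n)

frac = largest_cluster_fraction(n=80, p=0.95)
print(frac)  # typically well above 0.99 at retention probability 0.95
```

This is the finite-box analogue of the density statements for $\mathcal{C}_\infty^\star$ proved in the Appendix.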
Since $\varepsilon{\,\ll_{\star}\,}\sigma,d$ and $r{\,\gg_{\star}\,}\varepsilon, \delta, R$, Corollary [Corollary 22](#cor:dense-subgraph){reference-type="ref" reference="cor:dense-subgraph"}[\[item:cor-2\]](#item:cor-2){reference-type="eqref" reference="item:cor-2"}--[\[item:cor-3\]](#item:cor-3){reference-type="eqref" reference="item:cor-3"} are now immediate from Lemma [\[lem:dense-geometric-locally-dense\]](#lem:dense-geometric-locally-dense){reference-type="ref" reference="lem:dense-geometric-locally-dense"}, except the uniqueness of $\mathcal{C}_\infty$, which was proved earlier in [@deijfen2013scale] for SFP and in [@deprez2019scale] for IGIRG. Recall the definition of $\mathcal{A}_{\mathrm{linear}}(r,\kappa, C, \zeta,x)$ from Lemma [Lemma 21](#lem:dense-geometric-linear-paths){reference-type="ref" reference="lem:dense-geometric-linear-paths"}: in particular, that the *graph* distance in $G_M$ between two vertices is at most $\kappa$ times the Euclidean distance between them. Since every edge of $G_M$ has cost at most $M^{3\mu}$, the *cost* distance between those vertices in $G_M$ is then at most $\kappa M^{3\mu}$ times the Euclidean distance. Hence, $\mathcal{A}_{\mathrm{linearcost}}(r,\kappa,C,\zeta,x) \supseteq \mathcal{A}_{\mathrm{linear}}(r,\kappa/M^{3\mu},C,\zeta,x)$; thus Corollary [Corollary 22](#cor:dense-subgraph){reference-type="ref" reference="cor:dense-subgraph"}[\[item:cor-4\]](#item:cor-4){reference-type="eqref" reference="item:cor-4"} follows from Lemma [Lemma 21](#lem:dense-geometric-linear-paths){reference-type="ref" reference="lem:dense-geometric-linear-paths"}. ◻ The above proof works whenever $3-\tau>0$, which guarantees that the expected degree of vertices in $G_M$, i.e., $M^{3-\tau +o(1)}$, grows with $M$. We chose the box-size $R=M^{2/d}$ so that it corresponds to the connectivity radius of a vertex in $\mathcal{V}_M$.
For $\tau>3$, the expected degree in $G_M$ tends to $0$ as $M$ increases and so $G_M$ has no infinite component for large $M$ (even if we removed the restriction on edge costs). Corollary [Corollary 22](#cor:dense-subgraph){reference-type="ref" reference="cor:dense-subgraph"}[\[item:cor-4\]](#item:cor-4){reference-type="eqref" reference="item:cor-4"} guarantees linear-cost low-deviation paths within $\mathcal{H}_\infty \subseteq \mathcal{C}_\infty$ when $d\ge 2$. However, when connecting two arbitrary vertices, e.g. $0$ and $x$, in IGIRG/SFP, we cannot assume that they fall in $\mathcal{H}_\infty$. We solve this issue using the following two claims. **Claim 23** (The infinite cluster of IGIRG/SFP is dense). *Let $G=(\mathcal{V},\mathcal{E})$ be an IGIRG or SFP satisfying assumptions [\[eq:power_law\]](#eq:power_law){reference-type="eqref" reference="eq:power_law"}--[\[eq:F_L-condition\]](#eq:F_L-condition){reference-type="eqref" reference="eq:F_L-condition"} with $d\ge 2$ and $\tau\in(2,3)$. Then there exists $\rho > 0$ such that for all $u,v \in \mathbb{R}^d$ (for IGIRG) or $u,v\in\mathbb{Z}^d$ (for SFP), we have $\mathbb{P}(u,v \in \mathcal{C}_\infty \mid u,v \in \mathcal{V}) \ge \rho$.* **Claim 24** (Connecting to the Bernoulli-induced infinite subgraph). *Consider the setting of Corollary [Corollary 22](#cor:dense-subgraph){reference-type="ref" reference="cor:dense-subgraph"} with $d\ge2$, recall $\mathcal{H}_\infty \subseteq \mathcal{V}_M$ from [\[eq:h-infty\]](#eq:h-infty){reference-type="eqref" reference="eq:h-infty"}, and let $\mathcal{C}_\infty$ be the infinite component of $G$ containing $\mathcal{H}_\infty$. Let $u,v \in \mathbb{R}^d$ (for IGIRG) or $\mathbb{Z}^d$ (for SFP). Then, whenever $r{\,\gg_{\star}\,}M, \delta$, $$\label{eq:path-contained} \begin{aligned} \mathbb{P}\Big( \exists u^\star\in \mathcal{H}_\infty\!\cap\! 
B_r(u); &\ \exists \mbox{ a path }\pi^G_{u,u^\star}\subseteq \mathcal{E}(G):\\ &\mathcal{V}(\pi^G_{u,u^\star}) \subseteq B_r(u), \mathcal{C}(\pi^G_{u,u^\star}) \le C \mid u,v \in \mathcal{C}_\infty\Big)\ge 1-\delta. \end{aligned}$$* *Proof of Claim [Claim 23](#claim:two-in-Cinfty){reference-type="ref" reference="claim:two-in-Cinfty"}.* Fix two locations $u,v\in \mathbb{R}^d$ or $u,v \in \mathbb{Z}^d$, let $M{\,\gg_{\star}\,}\textnormal{\texttt{par}}\xspace$, and recall from the calculation in [\[eq:W-in-IM\]](#eq:W-in-IM){reference-type="eqref" reference="eq:W-in-IM"} that $$\mathbb{P}(u,v \in \mathcal{V}_M \mid u,v\in\mathcal{V}) =\mathbb{P}(W\in I_M)^2 \ge M^{-2(\tau-1)-\varepsilon/2}\ge M^{-2\tau}.$$ Further, by Corollary [Corollary 22](#cor:dense-subgraph){reference-type="ref" reference="cor:dense-subgraph"}[\[item:cor-1\]](#item:cor-1){reference-type="eqref" reference="item:cor-1"}, $G_M$ is an $(R, \varepsilon)$-dense geometric graph even when conditioned on the event $\mathcal{F}_M=\{u,v\in \mathcal{V}_M\}$. This means that the coupling in Lemma [Lemma 19](#lem:bond-percolation-coupling){reference-type="ref" reference="lem:bond-percolation-coupling"} remains valid under the conditioning, and the high-density result $\mathbb{P}(S\cap\mathcal{V}\in \mathcal{H}_\infty\mid u,v\in \mathcal{V}_M) \ge 1-\sigma$ in Lemma [\[lem:dense-geometric-locally-dense\]](#lem:dense-geometric-locally-dense){reference-type="ref" reference="lem:dense-geometric-locally-dense"} holds also conditioned on $u,v\in \mathcal{V}_M$. So, a union bound yields that $$\begin{aligned} \mathbb{P}(u, v\in \mathcal{H}_\infty\mid u,v \in \mathcal{V}_M) \ge 1-\mathbb{P}(u\notin \mathcal{H}_\infty \mid u,v \in \mathcal{V}_M) - \mathbb{P}(v \notin \mathcal{H}_\infty \mid u,v \in \mathcal{V}_M)\\ \ge 1-2\sigma\ge 1/2, \end{aligned}$$ whenever $\sigma\le 1/4$.
Hence $$\mathbb{P}(u,v \in \mathcal{H}_\infty \mid u,v\in \mathcal{V}) = \mathbb{P}(u,v \in \mathcal{H}_\infty \mid u,v\in\mathcal{V}_M) \cdot \mathbb{P}(u,v \in \mathcal{V}_M \mid u,v\in \mathcal{V})\ge 1/(2M^{2\tau}).$$ Since $\mathcal{H}_\infty \subseteq V(\mathcal{C}_\infty)$ by Corollary [Corollary 22](#cor:dense-subgraph){reference-type="ref" reference="cor:dense-subgraph"}[\[item:cor-2\]](#item:cor-2){reference-type="eqref" reference="item:cor-2"}, the result follows with $\rho = 1/(2M^{2\tau})$. ◻ *Proof of Claim [Claim 24](#claim:linear-start-vertices){reference-type="ref" reference="claim:linear-start-vertices"}.* For $r' >0$ let $\mathcal{B}^\star(r',M,u):= \mathcal{H}_\infty\cap B_{r'}(u)$. By Corollary [Corollary 22](#cor:dense-subgraph){reference-type="ref" reference="cor:dense-subgraph"}[\[item:cor-2\]](#item:cor-2){reference-type="eqref" reference="item:cor-2"}--[\[item:cor-3\]](#item:cor-3){reference-type="eqref" reference="item:cor-3"}, applied with $\delta/2$ instead of $\delta$, $\mathcal{H}_\infty$ has a positive density in $\mathcal{V}_M$ around $u$. Hence, for $r'{\,\gg_{\star}\,}M,\delta$, $$\begin{aligned} \label{eq:start-neighbourhood-nonempty} \mathbb{P}(\mathcal{B}^\star(r',M,u) = \emptyset \mid u,v \in \mathcal{C}_\infty) \le \delta/2. \end{aligned}$$ Fix such $r'$. Because $\mathcal{H}_\infty\subseteq \mathcal{C}_\infty$ and $\mathcal{C}_\infty$ is connected, paths exist within $\mathcal{C}_\infty$ from $u$ to vertices in $\mathcal{H}_\infty$. In particular, conditioned on $u,v\in\mathcal{C}_\infty$, fix any procedure to uniquely select a path $\pi_{uu^\star}^G$ from $u$ to each $u^\star\in \mathcal{B}^\star(r',M,u)$ (for example, take the paths with lowest deviation, breaking ties by random coin-flips).
Then the random variables $$\begin{aligned} X &:= \inf\{r>0 \mid \forall u^\star \in \mathcal{B}^\star(r',M,u): V(\pi_{uu^\star}^G) \subseteq B_r(u)\},\\ Y &:= \inf\{C >0 \mid \forall u^\star \in \mathcal{B}^\star(r',M,u): \mathcal{C}(\pi_{uu^\star}^G) \le C\}, \end{aligned}$$ are a.s. finite conditioned on $u,v\in \mathcal{C}_\infty$. On choosing $r$ and $C$ suitably large, we have $$\begin{aligned} \label{eq:start-neighbourhood-maxcost} \mathbb{P}(X > r \text{ or } Y > C \mid u,v\in\mathcal{C}_\infty) \le \delta/2, \end{aligned}$$ and the result follows from a union bound over [\[eq:start-neighbourhood-nonempty\]](#eq:start-neighbourhood-nonempty){reference-type="eqref" reference="eq:start-neighbourhood-nonempty"} and [\[eq:start-neighbourhood-maxcost\]](#eq:start-neighbourhood-maxcost){reference-type="eqref" reference="eq:start-neighbourhood-maxcost"}. ◻ The next corollary combines the previous claims and lemmas and constructs a linear-cost low deviation path between two vertices in the unique infinite component of SFP/IGIRG. **Corollary 25** (Linear costs in 1-FPP on IGIRG/SFP). *Consider $1$-FPP on IGIRG or SFP of Definition [Definition 1](#def:girg){reference-type="ref" reference="def:girg"} satisfying the assumptions given in [\[eq:power_law\]](#eq:power_law){reference-type="eqref" reference="eq:power_law"}--[\[eq:F_L-condition\]](#eq:F_L-condition){reference-type="eqref" reference="eq:F_L-condition"} with $0\in\mathcal{V}$. Assume that the dimension $d\ge 2$, and $\alpha>2, \tau\in(2,3)$, $\mu>\mu_{\mathrm{pol}}$. Let $\kappa,r > 0$, and let $\delta,\zeta \in (0,1)$. Suppose $r{\,\gg_{\star}\,}\delta, \zeta, \textnormal{\texttt{par}}\xspace$, and that $\kappa{\,\gg_{\star}\,}\textnormal{\texttt{par}}\xspace$. Let $\mathcal{C}_\infty$ be the unique infinite component guaranteed by Corollary [Corollary 22](#cor:dense-subgraph){reference-type="ref" reference="cor:dense-subgraph"}[\[item:cor-2\]](#item:cor-2){reference-type="eqref" reference="item:cor-2"}. 
Let $u,v \in \mathbb{R}^d$ for IGIRG or $\mathbb{Z}^d$ for SFP with $|u-v| \ge r$. Let $\mathcal{A}_{\mathrm{linearcost}}(u,v, \kappa, \zeta)$ be the event that there is a path joining $u$ and $v$ in $G$ with cost at most $\kappa|u-v|$ and deviation at most $\zeta|u-v|$. Then $$\mathbb{P}(\mathcal{A}_{\mathrm{linearcost}}(u,v,\kappa,\zeta)\mid u,v\in\mathcal{C}_\infty) \ge 1-\delta.$$* *Proof.* Let $\rho\in (0,1)$ be as in Claim [Claim 23](#claim:two-in-Cinfty){reference-type="ref" reference="claim:two-in-Cinfty"}. Let $r_{\ref{cor:dense-subgraph}}$, $C_{\ref{cor:dense-subgraph}}>0$ be as in Corollary [Corollary 22](#cor:dense-subgraph){reference-type="ref" reference="cor:dense-subgraph"}[\[item:cor-4\]](#item:cor-4){reference-type="eqref" reference="item:cor-4"} applied with $\kappa_{\ref{cor:dense-subgraph}} = \kappa/2$, $\delta_{\ref{cor:dense-subgraph}} = \rho\delta/3$, $\zeta_{\ref{cor:dense-subgraph}} = \zeta/2$ and $t_{\ref{cor:dense-subgraph}} = 2$ (and any suitable values of $\varepsilon$, $M$, $\sigma$). Let $r_{\ref{claim:linear-start-vertices}}$ and $C_{\ref{claim:linear-start-vertices}}$ be as in Claim [Claim 24](#claim:linear-start-vertices){reference-type="ref" reference="claim:linear-start-vertices"} applied with $\delta_{\ref{claim:linear-start-vertices}} = \delta/3$. In Corollary [Corollary 22](#cor:dense-subgraph){reference-type="ref" reference="cor:dense-subgraph"}[\[item:cor-4\]](#item:cor-4){reference-type="eqref" reference="item:cor-4"}, the requirement $r_{\ref{cor:dense-subgraph}}{\,\gg_{\star}\,}M, \zeta, \delta$ is assumed and in Claim [Claim 24](#claim:linear-start-vertices){reference-type="ref" reference="claim:linear-start-vertices"} $r_{\ref{claim:linear-start-vertices}}{\,\gg_{\star}\,}M$, thus we may increase the first value to assume $r_{\ref{cor:dense-subgraph}}\ge r_{\ref{claim:linear-start-vertices}}$. 
We may assume that $r{\,\gg_{\star}\,}r_{\ref{cor:dense-subgraph}}, r_{\ref{claim:linear-start-vertices}}, C_{\ref{cor:dense-subgraph}}, C_{\ref{claim:linear-start-vertices}}$. We define events as follows. (a) Let $\mathcal{A}_1$ be the event that there is a path $\pi^1$ from $u$ to some vertex $u^\star \in \mathcal{H}_\infty \cap B_{r_{\ref{claim:linear-start-vertices}}}(u)$ of cost at most $C_{\ref{claim:linear-start-vertices}}$ and with $\mathcal{V}(\pi^1) \subseteq B_{r_{\ref{claim:linear-start-vertices}}}(u)$. (b) Let $\mathcal{A}_2$ be the event that *every* vertex $u_1 \in \mathcal{H}_\infty\cap B_{r_{\ref{cor:dense-subgraph}}}(u)$ is joined to every vertex $u_2 \in \mathcal{H}_\infty$ by a path $\pi^2_{u_1,u_2}$ of cost at most $(\kappa/2)|u_1-u_2| + C_{\ref{cor:dense-subgraph}}$ and deviation at most $(\zeta/2)|u_1-u_2|+C_{\ref{cor:dense-subgraph}}$. (c) Let $\mathcal{A}_3$ be the event that there is a path $\pi^3$ from some vertex $v^\star$ in $\mathcal{H}_\infty \cap B_{r_{\ref{claim:linear-start-vertices}}}(v)$ to $v$ of cost at most $C_{\ref{claim:linear-start-vertices}}$ and with $\mathcal{V}(\pi^3) \subseteq B_{r_{\ref{claim:linear-start-vertices}}}(v)$. Observe that if $\mathcal{A}_1\cap\mathcal{A}_2\cap\mathcal{A}_3$ occurs, then since $|u-v|\ge r$ is large, the path $\pi^1\pi^2_{u^\star,v^\star}\pi^3$ has cost at most $\kappa|u-v|/2 + C_{\ref{cor:dense-subgraph}} + 2C_{\ref{claim:linear-start-vertices}} \le \kappa |u-v|$ and deviation at most $\zeta|u-v|/2 + C_{\ref{cor:dense-subgraph}} + 2r_{\ref{claim:linear-start-vertices}} \le \zeta |u-v|$.
We must therefore prove $$\label{eq:linear-regime-proof-goal} \mathbb{P}(\mathcal{A}_1\cap\mathcal{A}_2\cap\mathcal{A}_3\mid u,v\in \mathcal{C}_\infty) \ge 1-\delta.$$ By our choice of $r_{\ref{claim:linear-start-vertices}}$ and $C_{\ref{claim:linear-start-vertices}}$, Claim [Claim 24](#claim:linear-start-vertices){reference-type="ref" reference="claim:linear-start-vertices"} implies that $$\mathbb{P}(\neg\mathcal{A}_1 \mid u,v \in \mathcal{C}_\infty) \le \delta/3,\qquad \mathbb{P}(\neg\mathcal{A}_3 \mid u,v \in \mathcal{C}_\infty) \le \delta/3.$$ Similarly, by Corollary [Corollary 22](#cor:dense-subgraph){reference-type="ref" reference="cor:dense-subgraph"}[\[item:cor-4\]](#item:cor-4){reference-type="eqref" reference="item:cor-4"} we have $\mathbb{P}(\neg\mathcal{A}_2 \mid u,v\in\mathcal{V}) \le \rho\delta/3$. Thus by Claim [Claim 23](#claim:two-in-Cinfty){reference-type="ref" reference="claim:two-in-Cinfty"}, $$\mathbb{P}(\neg\mathcal{A}_2 \mid u,v \in \mathcal{C}_\infty) \le \frac{\mathbb{P}(\neg\mathcal{A}_{2} \mid u,v\in\mathcal{V})}{\mathbb{P}(u,v\in\mathcal{C}_\infty\mid u,v\in \mathcal{V})} \le \frac{\rho\delta/3}{\rho} = \delta/3.$$ Applying a union bound, [\[eq:linear-regime-proof-goal\]](#eq:linear-regime-proof-goal){reference-type="eqref" reference="eq:linear-regime-proof-goal"} follows as required.
◻ *Proof of Theorems [\[thm:linear_regime\]](#thm:linear_regime){reference-type="ref" reference="thm:linear_regime"} and [\[thm:threshold_regimes\]](#thm:threshold_regimes){reference-type="ref" reference="thm:threshold_regimes"}.* We have already proven the lower bounds: namely the lower bound of Theorem [\[thm:linear_regime\]](#thm:linear_regime){reference-type="ref" reference="thm:linear_regime"} follows directly from [\[eq:linear-lower\]](#eq:linear-lower){reference-type="eqref" reference="eq:linear-lower"} in Theorem [\[thm:linear_polynomial_lower_bound\]](#thm:linear_polynomial_lower_bound){reference-type="ref" reference="thm:linear_polynomial_lower_bound"}, with proof on page , and the proof of the lower bound in Theorem [\[thm:threshold_regimes\]](#thm:threshold_regimes){reference-type="ref" reference="thm:threshold_regimes"} can be found on page . The upper bounds of both theorems follow from Corollary [Corollary 25](#lem:linear-regime-join-two){reference-type="ref" reference="lem:linear-regime-join-two"}. ◻ It remains to prove Theorem [\[thm:finite_graph\]](#thm:finite_graph){reference-type="ref" reference="thm:finite_graph"}, which states that all results also hold in the finite GIRG model if $u_n,v_n$ are chosen uniformly at random from the largest component $\mathcal{C}_{\max}^{(n)}$. *Proof of Theorem [\[thm:finite_graph\]](#thm:finite_graph){reference-type="ref" reference="thm:finite_graph"}.* Let $\kappa_1 > 0$ be small enough to take the role of $\kappa$ in [\[eq:linear-lower\]](#eq:linear-lower){reference-type="eqref" reference="eq:linear-lower"} in Theorem [\[thm:linear_polynomial_lower_bound\]](#thm:linear_polynomial_lower_bound){reference-type="ref" reference="thm:linear_polynomial_lower_bound"}, and let $\kappa_2>0$ be large enough to take the role of $\kappa$ in Corollary [Corollary 25](#lem:linear-regime-join-two){reference-type="ref" reference="lem:linear-regime-join-two"}.
Let $G_n = (\mathcal{V}_n,\mathcal{E}_n)$ be a GIRG satisfying the assumptions of the theorem statement, let $\mathcal{C}_{\max}^{(n)}$ be the largest component of $G_n$, and let $u_n$ and $v_n$ be vertices chosen independently and uniformly at random from $\mathcal{C}_{\max}^{(n)}$. For all $u,v\in \mathbb{R}^d$ and all graphs $H$, let $\mathcal{A}(H,u,v)$ be the event that $u$ and $v$ have cost-distance between $\kappa_1 |u-v|$ and $\kappa_2 |u-v|$ in the (sub)graph $H$. Then it suffices to prove that for all $\delta \in (0,1)$, whenever $n{\,\gg_{\star}\,}\delta$, $\mathbb{P}(\neg\mathcal{A}(G_n,u_n,v_n)) \le \delta$ holds. Let $x_n,y_n \in \mathcal{V}_n$ be chosen independently and uniformly at random from $\mathcal{V}_n$. Whenever $n{\,\gg_{\star}\,}\delta$, it is known [@komjathy2020stopping Theorem 3.11] that $\mathcal{C}_{\max}^{(n)}$ has constant density whp, so with probability at least $1/2$, we have $|\mathcal{C}_{\max}^{(n)}| \ge \delta^{1/4} |\mathcal{V}_n|$. Thus $\mathbb{P}(x_n,y_n \in \mathcal{C}_{\max}^{(n)}) \ge \sqrt{\delta}/2$, and so $$\begin{aligned} \mathbb{P}(\neg\mathcal{A}(G_n,u_n,v_n)) &= \mathbb{P}(\neg\mathcal{A}(G_n,x_n,y_n) \mid x_n,y_n \in \mathcal{C}_{\max}^{(n)}) \\ &\le 2\mathbb{P}(\neg\mathcal{A}(G_n,x_n,y_n) \mbox{ and }x_n,y_n\in\mathcal{C}_{\max}^{(n)})/\sqrt{\delta}. \end{aligned}$$ Recall that for all $n>0$, $Q_n := [-n^{1/d}/2, n^{1/d}/2]^d$. Let $x_n',y_n' \in Q_n$ be random points chosen independently and uniformly at random from the Lebesgue measure in $Q_n$. Let $\mathcal{V}_n'$ be a Poisson point process of unit intensity conditioned on $x_n',y_n'\in\mathcal{V}_n'$, i.e. a Palm process. It is known that the total variation distance of $(\mathcal{V}_n', x_n', y_n')$ from $(\mathcal{V}_n, x_n, y_n)$ converges to zero as $n\to\infty$, and in particular is at most $\delta^{3/2}/12$ when $n$ is sufficiently large.
Thus on taking $G_n'$ to be a GIRG with vertex set $\mathcal{V}_n'$, we have $$\begin{aligned} \mathbb{P}(\neg\mathcal{A}(G_n,u_n,v_n)) \le (2/\sqrt{\delta})(\delta^{3/2}/12) + 2\mathbb{P}(\neg\mathcal{A}(G_n',x_n',y_n') \mbox{ and }x_n',y_n'\in\mathcal{C}_{\max}^{(n)})/\sqrt{\delta}. \end{aligned}$$ We may couple $G_n'$ to an IGIRG $G^+$ in such a way that $G_n' = G^+[Q_n]$. Let $\mathcal{C}_\infty$ be the infinite component of $G^+$. Further, the giant component of $G_n'$ is part of $\mathcal{C}_\infty$ with probability tending to $1$ as $n$ tends to infinity, see [@komjathy2020explosion]. So for $n$ large enough, $\mathbb{P}(\mathcal{C}_{\max}^{(n)} \nsubseteq \mathcal{C}_\infty) \le \delta^{3/2}/12$, and hence $$\mathbb{P}(\neg\mathcal{A}(G_n,u_n,v_n)) \le \delta/3 + 2\mathbb{P}(\neg\mathcal{A}(G_n',x_n',y_n') \mbox{ and }x_n',y_n'\in\mathcal{C}_\infty)/\sqrt{\delta}.$$ Let $r$ be large enough for Corollary [Corollary 25](#lem:linear-regime-join-two){reference-type="ref" reference="lem:linear-regime-join-two"} and Theorem [\[thm:linear_polynomial_lower_bound\]](#thm:linear_polynomial_lower_bound){reference-type="ref" reference="thm:linear_polynomial_lower_bound"} to apply to $\delta^{3/2}/12$, taking $\zeta = \delta^2$ in Corollary [Corollary 25](#lem:linear-regime-join-two){reference-type="ref" reference="lem:linear-regime-join-two"}. Let $X$ be the set of pairs $(x,y) \in Q_{(1-d\delta^2)n}$ such that $|x-y| \ge r$, and let $\mathcal{X}_n$ be the event that $(x_n',y_n') \in X$. Observe that $\mathbb{P}(\neg\mathcal{X}_n) \le \delta^{3/2}/12$ if $\delta{\,\ll_{\star}\,}d$ and $n{\,\gg_{\star}\,}r$. Thus $$\begin{aligned} \mathbb{P}(\neg\mathcal{A}(G_n,u_n,v_n)) &\le \delta/2 + 2\mathbb{P}(\neg\mathcal{A}(G_n',x_n',y_n') \mbox{ and }x_n',y_n'\in\mathcal{C}_\infty\mid \mathcal{X}_n\mbox{ and }x_n',y_n' \in \mathcal{V}_n)/\sqrt\delta\\ &\le \delta/2 + 2\max_{x,y\in X}\mathbb{P}(\neg\mathcal{A}(G_n',x,y) \mid x_n=x,y_n=y,x,y\in\mathcal{C}_\infty)/\sqrt\delta.
\end{aligned}$$ By Theorem [\[thm:linear_polynomial_lower_bound\]](#thm:linear_polynomial_lower_bound){reference-type="ref" reference="thm:linear_polynomial_lower_bound"}, the lower cost bound in the event $\mathcal{A}(G_n',x,y)$ fails with probability at most $\delta^{3/2}/8$. Moreover, when $x,y \in Q_{(1-d\delta^2)n}$, the upper cost bound in $\mathcal{A}(G_n',x,y)$ occurs whenever $x$ and $y$ are joined by a path in $\mathcal{C}_\infty$ with cost at most $\kappa |x-y|$ and deviation at most $\delta^2 |x-y| \le \delta^2 nd$; thus by Corollary [Corollary 25](#lem:linear-regime-join-two){reference-type="ref" reference="lem:linear-regime-join-two"}, the upper bound fails with probability at most $\delta^{3/2}/8$. By a union bound, it follows that $\mathbb{P}(\neg\mathcal{A}(G_n,u_n,v_n)) \le \delta$ as required. ◻ # Appendix {#sec:appendix} In this appendix we prove Lemmas [\[lem:bond-percolation-density\]](#lem:bond-percolation-density){reference-type="ref" reference="lem:bond-percolation-density"}, [\[lem:bond-percolation-linear\]](#lem:bond-percolation-linear){reference-type="ref" reference="lem:bond-percolation-linear"} and [\[lem:dense-geometric-locally-dense\]](#lem:dense-geometric-locally-dense){reference-type="ref" reference="lem:dense-geometric-locally-dense"}. For Lemma [\[lem:bond-percolation-density\]](#lem:bond-percolation-density){reference-type="ref" reference="lem:bond-percolation-density"} we use the following theorem, which is a simplified version of [@DP-percolation Theorem 1.1]. **Theorem 26**. *For all $d \in \mathbb{N}$ with $d \ge 2$ and all $\sigma \in (0,1/2)$ there exists $p_0 = p_0(\sigma,d) \in (0,1)$ such that the following holds. Let $\omega^\star$ be an iid Bernoulli bond percolation on $\mathbb{Z}^d$ with edge-retention probability $p \ge p_0$.
For $S \subseteq \mathbb{R}^d$, let $$\label{eq:A-dense-perc} \mathcal{A}_\mathrm{dense}(S,\sigma):=\{ \mbox{$S$ contains a unique cluster $\mathcal{C}$ with $|\mathcal{C}| \ge (1-\sigma)\textnormal{Vol}(S)$}\}.$$ Let $S_n = [-n/2, n/2]^d$. Then $$\begin{aligned} \limsup_{n\to\infty} \frac{1}{n^{d-1}}\log\mathbb{P}(\neg\mathcal{A}_\mathrm{dense}(S_n,\sigma)) < 0. \end{aligned}$$* The corresponding [@DP-percolation Theorem 1.1] is stated for site percolation. Since the event $\mathcal{A}_\mathrm{dense}(S,\sigma)$ is monotone, the same result for bond percolation also holds via the standard domination in which we retain a vertex in a coupled site percolation process if and only if we retain all its edges in bond percolation. We now use Theorem [Theorem 26](#thm:DP-percolation){reference-type="ref" reference="thm:DP-percolation"} to prove Lemma [\[lem:bond-percolation-density\]](#lem:bond-percolation-density){reference-type="ref" reference="lem:bond-percolation-density"}: *Proof.* [\[proof:lemma3.2\]]{#proof:lemma3.2 label="proof:lemma3.2"} The statement that $\mathbb{P}(0\in \mathcal{C}_\infty^\star)\ge 1-\sigma$ as the edge-retention probability $1-\varepsilon$ tends to $1$ follows directly from the continuity of the theta-function in the supercritical regime (in particular, near $1$), see [@grimmett1999percolation]. We move on to showing [\[eq:bond-percolation-density\]](#eq:bond-percolation-density){reference-type="eqref" reference="eq:bond-percolation-density"}. By translation invariance, it is enough to show the statement for $x=0\in \mathbb{Z}^d$. Recall $\mathcal{A}_\mathrm{dense}(S_n, \sigma)$ from [\[eq:A-dense-perc\]](#eq:A-dense-perc){reference-type="eqref" reference="eq:A-dense-perc"} and that $S_n = [-n/2,n/2]^d$.
By Theorem [Theorem 26](#thm:DP-percolation){reference-type="ref" reference="thm:DP-percolation"}, taking $p_0:=1-\varepsilon$ and $\eta{\,\ll_{\star}\,}\sigma$, since the dimension is $d\ge2$, $$\label{eq:percolation-density-0} \forall n \ge n_0\colon \mathbb{P}(\mathcal{A}_\mathrm{dense}(S_n,\sigma/2)) \ge 1 - \exp(-\eta n^{d-1}) \ge 1 - \exp(-\eta n).$$ By the Borel-Cantelli lemma, almost surely $\mathcal{A}_{\mathrm{dense}}(S_n,\sigma/2)$ occurs for all but finitely many values of $n$. Moreover, if $\mathcal{A}_{\mathrm{dense}}(S_n,\sigma/2)$ and $\mathcal{A}_{\mathrm{dense}}(S_{n+1},\sigma/2)$ both occur for suitably large $n$, their respective clusters must intersect by the pigeonhole principle and hence must be equal. Thus almost surely there exists a single cluster $\mathcal{C}_\infty^\star$ such that for all but finitely many $n$, $|\mathcal{C}_\infty^\star \cap S_n| \ge (1-\sigma/2)\textnormal{Vol}(S_n)$. Such a cluster is necessarily both infinite and unique, as required. Hence, let $$\label{eq:a-infty} \mathcal{A}_\infty(S,\sigma/2):=\{ |S \cap \mathcal{C}_\infty^\star| \ge (1-\sigma/2)\textnormal{Vol}(S)\}.$$ Then using [\[eq:percolation-density-0\]](#eq:percolation-density-0){reference-type="eqref" reference="eq:percolation-density-0"}, and the above intersection property when one moves from $S_i$ to $S_{i+1}$, there exists $n_1 \ge n_0$ such that $$\label{eq:percolation-density-1} \forall n \ge n_1\colon\mathbb{P}(\mathcal{A}_\infty(S_n,\sigma/2)) \ge 1 - \sum_{i\ge n}\mathbb{P}(\neg\mathcal{A}_{\mathrm{dense}}(S_i,\sigma/2)) \ge 1 - \exp(-\eta n/2).$$ We now develop a boxing scheme. By definition, $S_{4r}=[-2r, 2r]^d$ which fully contains $B_r(0)$, and further, for each $y\in B_r(0)$, also $B_{(\log r)^{3/2}}(y)$ is fully contained in $S_{4r}$. For all $r>0$, let $\ell(r) := (\log r)^{4/3}$, so that $r = \exp(\ell(r)^{3/4})$. 
Let $\mathcal{S}_{4r}$ be a partition of $S_{4r}$ into at most $(4r/\ell(r))^d$ boxes of (equal) side length $\ell \in [\ell(r),2\ell(r)]$. We may assume $r{\,\gg_{\star}\,}\delta, \sigma,\eta$ to be so large that $\ell(r) \ge n_1$ and furthermore: (i) [\[item:box-1\]]{#item:box-1 label="item:box-1"} $(4r/\ell(r))^d\cdot \exp(-\eta \ell/2) = 4^d\exp(d\ell(r)^{3/4}-\eta \ell/2)/\ell(r)^{d} \le \delta$ for all $\ell\in[\ell(r), 2\ell(r)]$ and (ii) [\[item:box-2\]]{#item:box-2 label="item:box-2"} for all $y \in B_r(x)$, there exist disjoint boxes $S_1^{\scriptscriptstyle{(y)}},\dots,S_t^{\scriptscriptstyle{(y)}} \in \mathcal{S}_{4r}$ such that $S_i^{\scriptscriptstyle{(y)}} \subseteq B_{(\log r)^{3/2}}(y)$ for all $i\in[t]$ and such that $\textnormal{Vol}(S_1^{\scriptscriptstyle{(y)}} \cup \dots \cup S_t^{\scriptscriptstyle{(y)}}) \ge (1-\sigma/2)|B_{(\log r)^{3/2}}(y)\cap\mathbb{Z}^d|$. Here, [\[item:box-1\]](#item:box-1){reference-type="eqref" reference="item:box-1"} implies that the probability that there is a box $S$ in the boxing scheme $\mathcal{S}_{4r}$ for which $\mathcal{A}_{\infty}(S, \sigma/2)$ does not hold is at most $\delta$, by combining [\[eq:percolation-density-1\]](#eq:percolation-density-1){reference-type="eqref" reference="eq:percolation-density-1"} with a union bound over the at most $(4r/\ell(r))^d$ boxes.
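Condition [\[item:box-1\]](#item:box-1){reference-type="eqref" reference="item:box-1"} is a race between the polynomially many boxes and the stretched-exponentially small per-box failure probability. A short numerical illustration (with arbitrary values $d=2$, $\eta=1$, unrelated to the actual constants) confirms that the left-hand side eventually drops below any fixed $\delta$, albeit only for very large $r$:

```python
import math

def box_count_bound(r, d, eta):
    """LHS of condition (i) at the worst case l = l(r) = (log r)^(4/3):
    (number of boxes) times (per-box failure probability)."""
    ell = math.log(r) ** (4 / 3)
    return (4 * r / ell) ** d * math.exp(-eta * ell / 2)

# The polynomial box count r^d eventually loses to exp(eta*(log r)^(4/3)/2).
for r in (1e4, 1e8, 1e16, 1e32):
    print(r, box_count_bound(r, d=2, eta=1.0))
```

For $d=2$ and $\eta=1$, the bound is still large at $r = 10^{16}$ but is far below $10^{-6}$ at $r = 10^{32}$, matching the requirement $r{\,\gg_{\star}\,}\delta,\sigma,\eta$ above.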
So, introducing $\mathcal{A}_\mathrm{good}(4r,\sigma/2)$ as the event that for all $S \in \mathcal{S}_{4r}$ the event $\mathcal{A}_\infty(S,\sigma/2)$ in [\[eq:a-infty\]](#eq:a-infty){reference-type="eqref" reference="eq:a-infty"} occurs, we obtain $$\label{eq:bond-percolation-density-prob} \mathbb{P}(\mathcal{A}_\mathrm{good}(4r,\sigma/2)) \ge 1 - (4r/\ell(r))^d\exp(-\eta \ell/2) \ge 1 - \delta.$$ Suppose that $\mathcal{A}_\mathrm{good}(4r,\sigma/2)$ occurs, fix some $y\in B_r(0)$, and let $S_1^{\scriptscriptstyle{(y)}},\dots,S_t^{\scriptscriptstyle{(y)}}$ be the boxes in [\[item:box-2\]](#item:box-2){reference-type="eqref" reference="item:box-2"} contained in $B_{(\log r)^{3/2}}(y)$. Then by the uniqueness of the cluster $\mathcal{C}_\infty^\star$, $$\begin{aligned} \frac{|B_{(\log r)^{3/2}}(y) \cap \mathcal{C}_\infty^\star|}{|B_{(\log r)^{3/2}}(y) \cap \mathbb{Z}^d|} &\ge \frac{1}{|B_{(\log r)^{3/2}}(y)\cap \mathbb{Z}^d|}\sum_{i \in [t]}|S_i^{\scriptscriptstyle{(y)}} \cap \mathcal{C}_\infty^\star|\\ &\ge \frac{1}{|B_{(\log r)^{3/2}}(y)\cap \mathbb{Z}^d|}\sum_{i \in [t]}(1-\tfrac{\sigma}{2})\textnormal{Vol}(S_i^{\scriptscriptstyle{(y)}}) \ge (1-\tfrac{\sigma}{2})^2 > 1-\sigma, \end{aligned}$$ where in the last step we used that the union of the boxes $S_1^{\scriptscriptstyle{(y)}}, \dots, S_t^{\scriptscriptstyle{(y)}}$ covers at least a $(1-\sigma/2)$ proportion of the vertices in $B_{(\log r)^{3/2}}(y)\cap \mathbb{Z}^d$ by [\[item:box-2\]](#item:box-2){reference-type="eqref" reference="item:box-2"}. Hence, [\[eq:bond-percolation-density\]](#eq:bond-percolation-density){reference-type="eqref" reference="eq:bond-percolation-density"} follows from [\[eq:bond-percolation-density-prob\]](#eq:bond-percolation-density-prob){reference-type="eqref" reference="eq:bond-percolation-density-prob"}.
◻ We next prove Lemma [\[lem:bond-percolation-linear\]](#lem:bond-percolation-linear){reference-type="ref" reference="lem:bond-percolation-linear"}, which states that the infinite component of highly supercritical Bernoulli bond percolation has linear graph distances realised by paths with sublinear deviation. *Proof.* [\[proof:lemma3.3\]]{#proof:lemma3.3 label="proof:lemma3.3"} We first recall a result on linear scaling of distances in $\mathcal{C}^\star_\infty$. Let $\{x\leftrightarrow y\}$ be the event that $x,y\in \mathbb{Z}^d$ lie in the same component of $\omega^\star$, and let $d_\star(x,y)$ denote their graph distance in $\omega^\star$. By [@antal1996chemical Theorem 1.1], there exist $\kappa',\varepsilon_0>0$ such that for all $\varepsilon\le \varepsilon_0$ and for all $x \in \mathbb{Z}^d$,[^9] $$\label{eq:dense-geometric-perc-0} \limsup_{|y|\to\infty} \frac{1}{|x-y|} \log\mathbb{P}\big(\{x\leftrightarrow y\} \cap \{d_\star(x,y) > \kappa'|x-y|\}\big) < 0.$$ We say that a site $x\in\mathbb{Z}^d$ has *$r$-linear scaling* if $x \in \mathcal{C}^\star_\infty$ and $$\label{eq:r-linear-scaling} d_\star(x,y) \le \kappa' |x-y| \quad \mbox{ holds for all } y \in \mathcal{C}_\infty^\star \setminus B_r(x).$$ By a union bound over all $y\in \mathbb{Z}^d\setminus B_r(x)$, it follows from [\[eq:dense-geometric-perc-0\]](#eq:dense-geometric-perc-0){reference-type="eqref" reference="eq:dense-geometric-perc-0"} that there exist $c_1,r_1>0$ (depending on $d$) such that for all $x \in \mathbb{Z}^d$ and $r\ge r_1$, $$\label{eq:dense-geometric-perc} \mathbb{P}\big(x \mbox{ has $r$-linear scaling or }x \notin \mathcal{C}^\star_\infty\big) \ge 1 - e^{-c_1r}.$$ We now use [\[eq:dense-geometric-perc\]](#eq:dense-geometric-perc){reference-type="eqref" reference="eq:dense-geometric-perc"} to show that there exist constants $c_2,r_2>0$ (depending on $\zeta$ and $d$) such that if $x,y \in \mathcal{C}_\infty^\star$ and $|x-y|\ge r_2$, then with probability at least $1-e^{-c_2|x-y|}$
there exists a linear-length low-deviation path from $x$ to $y$. Let $\kappa := 2\sqrt{d} \kappa'$ and $K := (\kappa'+1)\sqrt{d}/\zeta$, and let $x,y \in \mathcal{C}_\infty^\star$. Cover the straight line segment $S_{x,y}$ by a sequence of $k \le 2K$ cubes $Q^{\scriptscriptstyle{(1)}},\ldots,Q^{\scriptscriptstyle{(k)}}$ in such a way that (see also Figure [1](#fig:small-deviation){reference-type="ref" reference="fig:small-deviation"}): (a) each cube $Q^{\scriptscriptstyle{(i)}}$ has side length $|x-y|/K$; (b) $Q^{\scriptscriptstyle{(1)}}$ is the cube covering $x$, $Q^{\scriptscriptstyle{(k)}}$ is the cube covering $y$, and $Q^{\scriptscriptstyle{(i)}} \cap Q^{\scriptscriptstyle{(i+1)}}\cap S_{x,y} \ne \emptyset$ for all $i \in [k-1]$; (c) for all $i \in [k-1]$, the intersection $Q^{\scriptscriptstyle{(i)}} \cap Q^{\scriptscriptstyle{(i+1)}}$ contains a cube of side-length $|x-y|/(8K)$; (d) all pairs of sites $a,b$ that fall into distinct sets in the list $\{x\}, Q^{\scriptscriptstyle{(1)}} \cap Q^{\scriptscriptstyle{(2)}} , Q^{\scriptscriptstyle{(2)}} \cap Q^{\scriptscriptstyle{(3)}} ,\ldots, Q^{\scriptscriptstyle{(k-1)}} \cap Q^{\scriptscriptstyle{(k)}} , \{y\}$ satisfy $|a-b| \ge |x-y|/(2K)$. ![ Example for $k=3$ cubes covering the line from $x$ to $y$. Any two adjacent cubes overlap in a region of volume at least $4^{-d}\mathrm{Vol}(Q^{\scriptscriptstyle{(i)}})$ that contains a cube of side-length $|x-y|/(8K)$, and any two intersections have distance at least $r/2 = |x-y|/(2K)$, where $r = |x-y|/K$ is the side-length of the cubes.
](small_deviations.pdf){#fig:small-deviation width="95%"} Then the following events occur with probability at least $1-e^{-c_2|x-y|}$ if $|x-y|\ge r_2$ (we specify $c_2$ and $r_2$ below): (i) [\[item:event-1\]]{#item:event-1 label="item:event-1"} For all $i \in [k-1]$, $Q^{\scriptscriptstyle{(i)}} \cap Q^{\scriptscriptstyle{(i+1)}}$ contains at least one site $s_i \in \mathcal{C}^\star_\infty$; (ii) [\[item:event-2\]]{#item:event-2 label="item:event-2"} Both $x$ and $y$ have $r$-linear scaling with $r=|x-y|/(2K)$ conditioned on $x,y\in \mathcal{C}_\infty^\star$; (iii) [\[item:event-3\]]{#item:event-3 label="item:event-3"} For all $i \in [k-1]$, all sites $z \in Q^{\scriptscriptstyle{(i)}} \cap Q^{\scriptscriptstyle{(i+1)}} \cap \mathcal{C}^\star_\infty$ have $r$-linear scaling with $r=|x-y|/(2K)$. Indeed, since $\varepsilon{\,\ll_{\star}\,}d$, $\mathcal{C}^\star_\infty$ has density at least $1/2$, and since the intersection $Q^{\scriptscriptstyle{(i)}} \cap Q^{\scriptscriptstyle{(i+1)}}$ contains a cube $S$ of side-length $|x-y|/(8K)$, [\[eq:percolation-density-1\]](#eq:percolation-density-1){reference-type="eqref" reference="eq:percolation-density-1"} applies, i.e., the event in [\[eq:a-infty\]](#eq:a-infty){reference-type="eqref" reference="eq:a-infty"} that $S\cap \mathcal{C}_\infty^\star$ contains linearly many vertices in the volume of $S$ holds with probability at least $1-\exp(-\eta|x-y|/(8K))$. This event implies that there is at least one vertex in $\mathcal{C}_\infty^\star\cap S$, and with $c_3:=\eta/(8K)$, there exists $r_3>0$ depending on $\zeta,d$ such that [\[item:event-1\]](#item:event-1){reference-type="eqref" reference="item:event-1"} holds with probability at least $1-e^{-c_3|x-y|}$ whenever $|x-y|\ge r_3$.
Moreover, using [\[eq:dense-geometric-perc\]](#eq:dense-geometric-perc){reference-type="eqref" reference="eq:dense-geometric-perc"} and a union bound over $i\in[k-1]$ and over all sites $z\in Q^{\scriptscriptstyle{(i)}} \cap Q^{\scriptscriptstyle{(i+1)}}$ (polynomially many in $|x-y|$) and over $x,y$, we deduce that there exists $r_4>0$ depending on $d$ such that [\[item:event-2\]](#item:event-2){reference-type="eqref" reference="item:event-2"} and [\[item:event-3\]](#item:event-3){reference-type="eqref" reference="item:event-3"} also hold with probability at least $1-e^{-c_1|x-y|/(4K)}$ whenever $|x-y|\ge r_4$. Taking $r_2 = \max\{r_3,r_4\}$ and $c_2$ small enough so that $e^{-c_3 r}+e^{-c_1r/(4K)} \le e^{-c_2r}$ for all $r\ge r_2$ guarantees that the above events occur with probability at least $1-e^{-c_2|x-y|}$ whenever $|x-y|\ge r_2$. Suppose the events in [\[item:event-1\]](#item:event-1){reference-type="eqref" reference="item:event-1"}--[\[item:event-3\]](#item:event-3){reference-type="eqref" reference="item:event-3"} all occur. Now choose, for all $i\in[k-1]$, sites $s_i\in \mathcal{C}_\infty^\star \cap Q^{\scriptscriptstyle{(i)}} \cap Q^{\scriptscriptstyle{(i+1)}}$. Writing $s_0=x, s_k=y$, it holds for all $i\in[k]$ that $$\label{eq:distance-sisi} |s_{i-1}-s_{i}|>|x-y|/(2K) \quad \mbox{and} \quad |s_{i-1}-s_{i}|\le \sqrt{d}|x-y|/K.$$ Then the $r$-linear-scaling property of $s_i$ and $s_{i-1}$ with $r=|x-y|/(2K)$ in [\[eq:r-linear-scaling\]](#eq:r-linear-scaling){reference-type="eqref" reference="eq:r-linear-scaling"}, combined with [\[eq:distance-sisi\]](#eq:distance-sisi){reference-type="eqref" reference="eq:distance-sisi"}, implies deterministically that for all $i \in [k]$ there exists a path $\pi_i$ from $s_{i-1}$ to $s_i$ in $\mathcal{C}_\infty^\star$ of length at most $\kappa'\sqrt{d} |x-y|/K$.
Since $s_{i-1}$ has distance at most $\sqrt{d}|x-y|/K$ from the segment $S_{x,y}$ and $\pi_i$ contains nearest neighbour edges, by the definition of $K=(\kappa'+1)\sqrt{d}/\zeta$ at the beginning of the proof and Definition [Definition 17](#def:deviation){reference-type="ref" reference="def:deviation"}, $$\mathrm{dev}_{xy}(\pi_i) \le \kappa' \sqrt{d} |x-y|/K+\sqrt{d}|x-y|/K = \zeta |x-y|.$$ Let $\pi = \pi_1\ldots \pi_k$; then since $k\le 2K$ and $\kappa = 2\sqrt{d} \kappa'$, $\pi$ has length at most $k\kappa'\sqrt{d} |x-y|/K \le \kappa|x-y|$ and deviation at most $\max_i\mathrm{dev}_{xy}(\pi_i)\le \zeta|x-y|$ (see Definition [Definition 17](#def:deviation){reference-type="ref" reference="def:deviation"}). Therefore, we have shown that if $x,y\in\mathcal{C}_\infty^\star$ and $|x-y|\ge r_2$, then with probability at least $1-e^{-c_2|x-y|}$ there exists a path $\pi$ from $x$ to $y$ with length at most $|\pi|\le\kappa |x-y|$ and deviation at most $\mathrm{dev}(\pi)\le \zeta |x-y|$. Taking a union bound over all $y\in\mathcal{C}_\infty^\star \setminus B_r(x)$ and taking $|x-y|\ge r {\,\gg_{\star}\,}\zeta$ yields [\[eq:bond-percolation-distances\]](#eq:bond-percolation-distances){reference-type="eqref" reference="eq:bond-percolation-distances"} with $c=c_2/2$. ◻ Lastly, we prove Lemma [\[lem:dense-geometric-locally-dense\]](#lem:dense-geometric-locally-dense){reference-type="ref" reference="lem:dense-geometric-locally-dense"}, which uses Lemma [\[lem:bond-percolation-density\]](#lem:bond-percolation-density){reference-type="ref" reference="lem:bond-percolation-density"} to obtain that the infinite subgraph $\mathcal{H}_\infty$ of an $(R, \varepsilon)$-dense random geometric graph also has overall high density. *Proof.* [\[proof:lemma3.7\]]{#proof:lemma3.7 label="proof:lemma3.7"} Let $\omega^\star$ be a bond percolation process with retention probability $1-20d\varepsilon$.
We take $\varepsilon$ small enough that Lemma [\[lem:bond-percolation-density\]](#lem:bond-percolation-density){reference-type="ref" reference="lem:bond-percolation-density"} applies with $\sigma_{\ref{lem:bond-percolation-density}} = \sigma^3$ and $\delta_{\ref{lem:bond-percolation-density}} = \delta/3$; thus $\omega^\star$ has a unique infinite component $\mathcal{C}_\infty^\star$ and $\mathcal{H}_\infty$ is well-defined in [\[eq:h-infty\]](#eq:h-infty){reference-type="eqref" reference="eq:h-infty"}. Then, since $\mathcal{H}_\infty$ consists of those boxes $S_z$ for which $z\in \mathbb{Z}^d$ belongs to $\mathcal{C}_\infty^\star$ in $\omega^\star$, if $z\in \mathcal{C}_\infty^\star$, then $z$ has at least one open adjacent bond. By the defining coupling in Lemma [Lemma 19](#lem:bond-percolation-coupling){reference-type="ref" reference="lem:bond-percolation-coupling"}, such a bond is only open in $\omega^\star$ if $G[S_z]$ is non-empty and connected. This implies that if $z\in \mathcal{C}_\infty^\star$ then all the vertices in $G[S_z]$ are in $\mathcal{H}_\infty$. Since $\mathbb{P}(0\in \mathcal{C}_\infty^\star)\ge 1-\sigma_{\ref{lem:bond-percolation-density}}$ in Lemma [\[lem:bond-percolation-density\]](#lem:bond-percolation-density){reference-type="ref" reference="lem:bond-percolation-density"}, the translation invariance of $\mathbb{Z}^d$ readily implies that for any box $S$, $\mathbb{P}(\mathcal{V}\cap S\subseteq \mathcal{H}_\infty) \ge 1-\sigma_{\ref{lem:bond-percolation-density}}\ge 1-\sigma$. We turn to prove [\[eq:dense-geometric-density\]](#eq:dense-geometric-density){reference-type="eqref" reference="eq:dense-geometric-density"}. Throughout, let $\rho = (\log r)^2$ for brevity. We set out some preliminary notation and observations. Recall that for all $z\in \mathbb{Z}^d$, $S_z$ is the unique box in $\mathcal{S}$ such that $Rz \in S_z$ and that boxes have side-length $R$.
For all sets $A \subseteq \mathbb{R}^d$ we write $A^\star = \{z \in \mathbb{Z}^d\colon S_z \subseteq A\}$ for the renormalisation of $A$; thus $\mathrm{Boxes}(A):=\bigcup_{z \in A^\star} S_z$ is the union of all boxes fully contained in $A$. Finally, for all $y \in \mathbb{R}^d$, by $\lfloor y/R\rfloor$ we mean the coordinate-wise lower integer part; thus $\lfloor y/R\rfloor$ is the renormalised site in $\mathbb{Z}^d$ corresponding to (the box containing) $y\in \mathbb{R}^d$. Clearly then $y\in S_{\lfloor y/R\rfloor}$. Since $r{\,\gg_{\star}\,}R$, and the diameter of each box is $\sqrt{d}R$, it is not hard to see that for all $y \in \mathbb{R}^d$, $$\begin{aligned} B_{{(\log r)^2}-R\sqrt{d}}(y)& \subseteq \mathrm{Boxes}(B_{(\log r)^2}(y)) \subseteq B_{(\log r)^2}(y),\\ B_{{(\log r)^2}/R - 2\sqrt{d}}(\lfloor y/R\rfloor)\cap\mathbb{Z}^d &\subseteq B_{{(\log r)^2}}(y)^\star \subseteq B_{{(\log r)^2}/R + \sqrt{d}}(\lfloor y/R\rfloor),\\ B_{(\log r)^2-R\sqrt{d}}(y)^\star&\subseteq B_{(\log r)^2}(R\lfloor y/R\rfloor)^\star \cap B_{(\log r)^2}(y)^\star. \end{aligned}$$ Writing $\rho:=(\log r)^2$, there thus exists $C>0$ depending on $R$ and $d$ (but not on $r$, and hence not on $\rho$) such that for all $y \in \mathbb{R}^d$, $$\begin{aligned} \label{eq:dense-geometric-boxes-1} \big|\mathrm{Vol}(B_{\rho}(y)) - R^d|B_{\rho}(y)^\star|\big| &\le C{\rho}^{d-1} \le \sigma^4 \rho^d,\\ \label{eq:dense-geometric-boxes-2} \big||B_{\rho}(y)^\star| - |B_{{\rho}/R}(\lfloor y/R\rfloor)\cap \mathbb{Z}^d| \big| &\le C{\rho}^{d-1} \le \sigma^4\rho^d,\\ \label{eq:dense-geometric-boxes-3} |B_{\rho}(R\lfloor y/R\rfloor)^\star \setminus B_{\rho}(y)^\star| &\le C \rho^{d-1} \le \sigma^4 \rho^d.
\end{aligned}$$ These all intuitively recover the isoperimetric properties of $\mathbb{R}^d$: for sufficiently large $\rho$, the volume of a ball of radius $\rho$ can be well approximated, with an error of order $\rho^{d-1}$, by a boxing scheme of side-length $R$ [\[eq:dense-geometric-boxes-1\]](#eq:dense-geometric-boxes-1){reference-type="eqref" reference="eq:dense-geometric-boxes-1"}; the number of boxes needed has an error of the same order [\[eq:dense-geometric-boxes-2\]](#eq:dense-geometric-boxes-2){reference-type="eqref" reference="eq:dense-geometric-boxes-2"}; and switching from a renormalised set around $y$ to the renormalised set around the renormalised site corresponding to $y$ also causes only a small error [\[eq:dense-geometric-boxes-3\]](#eq:dense-geometric-boxes-3){reference-type="eqref" reference="eq:dense-geometric-boxes-3"}. Now we relate [\[eq:bond-percolation-density\]](#eq:bond-percolation-density){reference-type="eqref" reference="eq:bond-percolation-density"} to [\[eq:dense-geometric-density\]](#eq:dense-geometric-density){reference-type="eqref" reference="eq:dense-geometric-density"}. Note that [\[eq:bond-percolation-density\]](#eq:bond-percolation-density){reference-type="eqref" reference="eq:bond-percolation-density"} has radius $(\log r)^{3/2}$ inside the probability sign concerning overall high density in $\omega^\star$, while [\[eq:dense-geometric-density\]](#eq:dense-geometric-density){reference-type="eqref" reference="eq:dense-geometric-density"} has radius $(\log r)^2$ concerning overall high density of $\mathcal{H}_\infty$ in $G$. Hence, given $r, R$, let $r^\star$ be such that $(\log r^\star)^{3/2} = (\log r)^2/R$, i.e. $r^\star := \exp(((\log r)^2/R)^{2/3})$, and clearly for all sufficiently large $r{\,\gg_{\star}\,}R$, $r^\star > 2r/R$. We will now apply Lemma [\[lem:bond-percolation-density\]](#lem:bond-percolation-density){reference-type="ref" reference="lem:bond-percolation-density"} with $r=r^\star$.
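The first of these approximations can be sanity-checked numerically. A minimal sketch for $d=2$, $R=1$, counting the unit boxes fully contained in a Euclidean ball and comparing with its volume (the constant $3\pi$ is an illustrative choice of $C$, not a constant from the paper):

```python
# Sanity check of |Vol(B_rho(y)) - R^d |B_rho(y)^*|| <= C rho^{d-1} for d = 2, R = 1:
# the number of unit boxes inside B_rho(y) matches pi*rho^2 up to a boundary term
# of order rho. The constant 3*pi below is an illustrative choice of C.
import math

def boxes_inside(y, rho):
    """Count z in Z^2 whose unit box [z1, z1+1] x [z2, z2+1] lies in B_rho(y)."""
    n, lim = 0, int(rho) + 2
    for z1 in range(int(y[0]) - lim, int(y[0]) + lim):
        for z2 in range(int(y[1]) - lim, int(y[1]) + lim):
            # the box lies in the (convex) Euclidean ball iff all its corners do
            if all(math.hypot(z1 + a - y[0], z2 + b - y[1]) <= rho
                   for a in (0, 1) for b in (0, 1)):
                n += 1
    return n

rho, y = 200.0, (0.3, 0.7)
vol = math.pi * rho ** 2                 # Vol(B_rho(y)) in d = 2
err = vol - boxes_inside(y, rho)
assert 0 <= err <= 3 * math.pi * rho     # boundary error of order rho^{d-1}
```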
Then, the radius inside the fraction in [\[eq:bond-percolation-density\]](#eq:bond-percolation-density){reference-type="eqref" reference="eq:bond-percolation-density"} is exactly $(\log r^\star)^{3/2}=(\log r)^2/R$. So, for $x\in\mathbb{R}^d$, when we write the event in [\[eq:bond-percolation-density\]](#eq:bond-percolation-density){reference-type="eqref" reference="eq:bond-percolation-density"} for the site $\lfloor x/R\rfloor\in \mathbb{Z}^d$ and using radius $r^\star$ and $\sigma_{\ref{lem:bond-percolation-density}}=\sigma^3$, we obtain $$\label{eq:dense-geometric-adense-star} \mathcal{A}_\mathrm{dense}^\star(x, r^\star) = \Big\{\forall y \in B_{r^\star}(\lfloor x/R\rfloor)\colon \frac{|B_{(\log r)^2/R}(y) \cap \mathcal{C}_\infty^\star|}{|B_{(\log r)^2/R}(y) \cap \mathbb{Z}^d|} \ge 1-\sigma^3\Big\}.$$ Now applying Lemma [\[lem:bond-percolation-density\]](#lem:bond-percolation-density){reference-type="ref" reference="lem:bond-percolation-density"} yields directly that $\mathbb{P}(\mathcal{A}_{\mathrm{dense}}^\star(x,r^\star)) \ge 1-\delta_{\ref{lem:bond-percolation-density}} = 1-\delta/3$. When we consider any $y \in B_{r}(x)$ in $\mathbb{R}^d$, clearly $\lfloor y/R\rfloor \in B_{2r/R}(\lfloor x/R\rfloor) \subseteq B_{r^\star}(\lfloor x/R\rfloor)$ because $r^\star\ge 2r/R$; thus if [\[eq:dense-geometric-adense-star\]](#eq:dense-geometric-adense-star){reference-type="eqref" reference="eq:dense-geometric-adense-star"} holds, then also $$\forall y \in B_{r}(x)\colon \frac{|B_{(\log r)^2/R}(\lfloor y/R\rfloor ) \cap \mathcal{C}_\infty^\star|}{|B_{(\log r)^2/R}(\lfloor y/R\rfloor) \cap \mathbb{Z}^d|} \ge 1-\sigma^3.$$ Note that $B_{(\log r)^2/R}(\lfloor y/R\rfloor )$, when we move from sites in $\mathbb{Z}^d$ to boxes in $\mathcal{S}$, corresponds roughly to the set of boxes contained in $B_{(\log r)^2}(y)$. In fact [\[eq:dense-geometric-boxes-2\]](#eq:dense-geometric-boxes-2){reference-type="eqref" reference="eq:dense-geometric-boxes-2"} exactly quantifies that this error is small.
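How large "sufficiently large" must be for $r^\star > 2r/R$ can be made concrete: unwinding the definition of $r^\star$ gives roughly $\log r > R^2$. A quick numerical illustration (the value $R=5$ and the two sample radii are hypothetical, purely for illustration):

```python
# Illustration of when r* = exp(((log r)^2 / R)^(2/3)) exceeds 2r/R:
# roughly when log r > R^2. R and the sample radii are hypothetical.
import math

R = 5.0

def r_star(r):
    return math.exp((math.log(r) ** 2 / R) ** (2 / 3))

assert r_star(1e6) < 2 * 1e6 / R       # r = 10^6 is not yet "sufficiently large"
assert r_star(1e12) > 2 * 1e12 / R     # r = 10^12 is: log r ~ 27.6 > R^2 = 25
```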
Hence, since $r$ is large, it follows from [\[eq:dense-geometric-boxes-2\]](#eq:dense-geometric-boxes-2){reference-type="eqref" reference="eq:dense-geometric-boxes-2"} that $\mathcal{A}_\mathrm{dense}^\star(x,r^\star)$ also implies the following event: $$\label{eq:dense-geometric-adense} \mathcal{A}_{\mathrm{dense}} (x,r):= \Big\{\forall y \in B_{r}(x)\colon \frac{|B_{(\log r)^2}(y)^\star \cap \mathcal{C}_\infty^\star|}{|B_{(\log r)^2}(y)^\star|} \ge 1-2\sigma^3\Big\}.$$ We have shown that $$\label{eq:dense-geometric-adense-bound} \mathbb{P}(\mathcal{A}_{\mathrm{dense}}(x,r)) \ge \mathbb{P}(\mathcal{A}_{\mathrm{dense}}^\star(x,r^\star)) \ge 1-\delta/3.$$ Interpreting the event on the lhs, $\mathcal{A}_{\mathrm{dense}}$ says that sites in $\mathcal{C}_\infty^\star$ have high density in $B_{(\log r)^2}(y)^\star$, or equivalently that boxes containing vertices of $\mathcal{H}_\infty$ are dense in $B_{(\log r)^2}(y)$; meanwhile, the event in [\[eq:dense-geometric-density\]](#eq:dense-geometric-density){reference-type="eqref" reference="eq:dense-geometric-density"} says that the vertices inside these boxes are dense in $B_{(\log r)^2}(y)\cap \mathcal{V}$. Thus we can achieve the high density in [\[eq:dense-geometric-density\]](#eq:dense-geometric-density){reference-type="eqref" reference="eq:dense-geometric-density"} if we can control the number of vertices per box. We now prove a concentration bound for the number of vertices of $G$ in a given collection of boxes; since $\mathcal{C}_\infty^\star$ is random, we will essentially sum the errors over all realisations of $\mathcal{C}_\infty^\star$ satisfying $\mathcal{A}_{\mathrm{dense}}(x,r)$. We again abbreviate $\rho:=(\log r)^2$.
In what follows, we denote by $A$ any subset of $B_{\rho}(y)^\star$ with $|A| \le 2\sigma^3 |B_{\rho}(y)^\star|$, so that $A$ can essentially serve as a possible realisation of the *complement* of $\mathcal{C}_\infty^\star$ inside the ball $B_{\rho}(y)^\star$ when the event $\mathcal{A}_{\mathrm{dense}}(x,r)$ holds. Then, define $\mathcal{A}_{\mathrm{box}}$ as $$\label{eq:dense-geometric-local-0a} \mathcal{A}_{\mathrm{box}}:= \bigcap_{y \in B_r(x)} \bigcap_{\substack{A\subseteq B_{\rho}(y)^\star\\|A| \le 2\sigma^3 |B_{\rho}(y)^\star|}}\bigg\{ \Big|\bigcup_{z \in B_{\rho}(y)^\star \setminus A} \mathcal{V}[S_z]\Big| \ge (1-\sigma/5)\mathrm{Vol}(B_{\rho}(y))\bigg\},$$ i.e., that leaving out the boxes in any not-too-large set $A$ from $B_{\rho}(y)^\star$, the remaining boxes still contain enough vertices proportional to the volume. When $\mathcal{V}= \mathbb{Z}^d$, $\mathcal{A}_{\mathrm{box}}$ always occurs. For $\mathcal{V}$ a Poisson point process, we dominate $\mathcal{A}_{\mathrm{box}}$ by an intersection of events as follows. For all $y \in \mathbb{R}^d$ and $A \subseteq \mathbb{Z}^d$, let $$\label{eq:dense-geometric-local-0b} \mathcal{A}_{\mathrm{box}}(y,A) := \bigg\{ \Big|\bigcup_{z \in B_{\rho}(y)^\star \setminus A} \mathcal{V}[S_z]\Big| \ge (1-\sigma/10)R^d|B_{\rho}(y)^\star|\bigg\}.$$ Applying [\[eq:dense-geometric-boxes-1\]](#eq:dense-geometric-boxes-1){reference-type="eqref" reference="eq:dense-geometric-boxes-1"} yields that $(1-\sigma/10)R^d|B_{\rho}(y)^\star| \ge (1-\sigma/5)\textnormal{Vol}(B_{\rho}(y))$.
Moreover, by [\[eq:dense-geometric-boxes-3\]](#eq:dense-geometric-boxes-3){reference-type="eqref" reference="eq:dense-geometric-boxes-3"}, we can lower-bound $B_{\rho}(y)^\star$ in [\[eq:dense-geometric-local-0a\]](#eq:dense-geometric-local-0a){reference-type="eqref" reference="eq:dense-geometric-local-0a"} by $B_{\rho}(R\lfloor y/R\rfloor)^\star \setminus A_y$ for some set $A_y$ with $|A_y| \le \sigma^3 |B_{\rho}(y)^\star|$, effectively discretising the continuous intersection over $y \in B_r(x)$. Thus $$\begin{aligned} \mathcal{A}_{\mathrm{box}} &\supseteq \bigcap_{y \in B_r(x)} \bigcap_{\substack{A\subseteq B_{\rho}(y)^\star\\|A| \le 2\sigma^3 |B_{\rho}(y)^\star|}} \mathcal{A}_{\mathrm{box}}(y,A) \supseteq \bigcap_{y \in B_r(x)} \bigcap_{\substack{A\subseteq B_{\rho}(R\lfloor y/R\rfloor)^\star\\|A| \le 4\sigma^3 |B_{\rho}(R\lfloor y/R\rfloor)^\star|}} \mathcal{A}_{\mathrm{box}}(R\lfloor y/R\rfloor,A)\nonumber\\ \label{eq:dense-geometric-local-2} &\supseteq \bigcap_{z \in B_{2r}(x)^\star} \bigcap_{\substack{A\subseteq B_{\rho}(Rz)^\star\\|A| \le 4\sigma^3 |B_{\rho}(Rz)^\star|}} \mathcal{A}_{\mathrm{box}}(Rz,A). \end{aligned}$$ We obtained the last row by noting that the rhs of the first row is the same event for all $y$ with the same renormalised site $\lfloor y/R\rfloor$.
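The subset-counting step used next rests on the standard bound $\sum_{k\le m}\binom{N}{k} \le (eN/m)^m$; a quick exact check at illustrative sizes (these $N$ and $m$ are hypothetical, not the $\rho^d$-scale quantities of the proof):

```python
# Exact check of the standard bound  sum_{k<=m} C(N, k) <= (e*N/m)^m,
# which controls the number of small subsets A in the union bound.
# N and m are illustrative, e.g. m = 4*sigma^3*N with sigma ~ 0.22.
import math

N, m = 1000, 40
subsets = sum(math.comb(N, k) for k in range(m + 1))   # exact integer count
assert subsets <= (math.e * N / m) ** m                # entropy-type upper bound
```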
When $\mathcal{V}$ is a PPP, the number of vertices in boxes in $B_\rho(y)^\star\setminus A$ in [\[eq:dense-geometric-local-0b\]](#eq:dense-geometric-local-0b){reference-type="eqref" reference="eq:dense-geometric-local-0b"} follows a Poisson distribution with mean $R^d|B_{\rho}(y)^\star \setminus A|$; thus when $|A| \le 4\sigma^3 |B_{\rho}(y)^\star|$, this mean is at least $(1-\sigma/20)R^d|B_{\rho}(y)^\star|$, and so by a Chernoff bound we arrive at $$\mathbb{P}(\neg \mathcal{A}_{\mathrm{box}}(y,A)) \le \exp(-\sigma^2\rho^d/300).$$ Next, some combinatorics deals with the union obtained when taking the complement of the event in [\[eq:dense-geometric-local-2\]](#eq:dense-geometric-local-2){reference-type="eqref" reference="eq:dense-geometric-local-2"}: since $\sigma$ is small, the number of sets $A \subseteq B_{\rho}(Rz)^\star$ with $|A| \le 4\sigma^3|B_{\rho}(Rz)^\star|$ is at most $\exp(\sigma^2\rho^d/600)$. Moreover, there are at most $4(r/R)^d$ choices of $z \in B_{2r}(x)^\star$ in [\[eq:dense-geometric-local-2\]](#eq:dense-geometric-local-2){reference-type="eqref" reference="eq:dense-geometric-local-2"}. Hence using a union bound, $\rho = (\log r)^2$, and $r{\,\gg_{\star}\,}\delta, \sigma$, for Poisson $\mathcal{V}$, $$\begin{aligned} \label{eq:dense-geometric-avert-bound} \mathbb{P}(\mathcal{A}_{\mathrm{box}}) \ge 1 - 4(r/R)^d\exp(\sigma^2\rho^d/600) \cdot \exp(-\sigma^2\rho^d/300) \ge 1 - \delta/3. \end{aligned}$$ Our final event, $\mathcal{A}_{\mathrm{ball}}$, says that all radius-$\rho=(\log r)^2$ balls near $x$ contain at most roughly the expected number of vertices, so that $$\label{eq:a-ball} \begin{aligned} \mathcal{A}_{\mathrm{ball}} &= \bigcap_{y \in B_r(x)} \Big\{|B_{\rho}(y) \cap \mathcal{V}| \le (1+\sigma/5)\textnormal{Vol}(B_{\rho}(y))\Big\} \\ &\supseteq \bigcap_{y \in B_r(x) \cap \mathbb{Z}^d} \Big\{ |B_{\rho}(y) \cap \mathcal{V}| \le (1+\sigma/10)\textnormal{Vol}(B_{\rho}(y))\Big\}.
\end{aligned}$$ where in the second row we discretised the space to obtain a finite intersection, at the cost of tightening the tolerance to $\sigma/10$. It is then immediate from Chernoff bounds and a union bound over $y$ that $$\label{eq:dense-geometric-aball-bound} \mathbb{P}(\mathcal{A}_{\mathrm{ball}}) \ge 1 - \delta/3.$$ Now by a union bound on their complements in [\[eq:dense-geometric-adense-bound\]](#eq:dense-geometric-adense-bound){reference-type="eqref" reference="eq:dense-geometric-adense-bound"}, [\[eq:dense-geometric-avert-bound\]](#eq:dense-geometric-avert-bound){reference-type="eqref" reference="eq:dense-geometric-avert-bound"}, [\[eq:dense-geometric-aball-bound\]](#eq:dense-geometric-aball-bound){reference-type="eqref" reference="eq:dense-geometric-aball-bound"}, the intersection $\mathcal{A}_{\mathrm{dense}}\cap\mathcal{A}_{\mathrm{box}}\cap\mathcal{A}_{\mathrm{ball}}$ occurs with probability at least $1-\delta$. Assume that the intersection of the three events occurs and let $y \in B_r(x)$. Since $\mathcal{A}_{\mathrm{dense}}$ occurs, $|B_{\rho}(y)^\star \setminus \mathcal{C}_\infty^\star| \le 2\sigma^3|B_{\rho}(y)^\star|$. Since $\mathcal{A}_{\mathrm{box}}$ in [\[eq:dense-geometric-local-0a\]](#eq:dense-geometric-local-0a){reference-type="eqref" reference="eq:dense-geometric-local-0a"} also occurs, taking $A = B_{\rho}(y)^\star \setminus \mathcal{C}_\infty^\star$ yields $$\begin{aligned} |B_{\rho}(y) \cap \mathcal{H}_\infty| &= \Big|B_{\rho}(y)\cap \bigcup_{z \in \mathcal{C}_\infty^\star} \mathcal{V}[S_z] \Big| \ge \Big|\bigcup_{z \in B_{\rho}(y)^\star \cap \mathcal{C}_\infty^\star}\mathcal{V}[S_z]\Big| \ge (1-\sigma/5)\textnormal{Vol}(B_{\rho}(y)).
\end{aligned}$$ Finally, since $\mathcal{A}_{\mathrm{ball}}$ occurs in [\[eq:a-ball\]](#eq:a-ball){reference-type="eqref" reference="eq:a-ball"}, it follows that $$\frac{|B_{\rho}(y)\cap \mathcal{H}_\infty|}{|B_{\rho}(y) \cap \mathcal{V}|} \ge \frac{(1-\sigma/5)\textnormal{Vol}(B_{\rho}(y))}{(1+\sigma/5)\textnormal{Vol}(B_{\rho}(y))} \ge 1-\sigma.$$ Hence, the event in [\[eq:dense-geometric-density\]](#eq:dense-geometric-density){reference-type="eqref" reference="eq:dense-geometric-density"} is implied by the intersection of $\mathcal{A}_{\mathrm{dense}}\cap\mathcal{A}_{\mathrm{box}}\cap\mathcal{A}_{\mathrm{ball}}$, which finishes the proof of [\[eq:dense-geometric-density\]](#eq:dense-geometric-density){reference-type="eqref" reference="eq:dense-geometric-density"}. ◻ [^1]: Delft University of Technology, j.komjathy\@tudelft.nl [^2]: University of Bristol, john.lapinskas\@bristol.ac.uk [^3]: ETH Zürich, johannes.lengler\@inf.ethz.ch [^4]: ETH Zürich, ulysse.schaller\@inf.ethz.ch U.S. was supported by the Swiss National Science Foundation \[grant number 200021_192079\]. [^5]: They have also been called EGIRG, where E stands for extended [@komjathy2020explosion]. [^6]: *If we take an IGIRG and rescale the underlying space $\mathbb R^d$ by a factor $\lambda$, then we obtain a random graph which satisfies all conditions of IGIRGs except that the density of the Poisson point process is $\lambda^{-d}$ instead of one. Thus it is no restriction to assume density one.* [^7]: For fixed $q$, the $r_0$ in our proof grows like $\exp(c\cdot \log(1/q)\cdot \log\log(1/q))$ for a constant $c>0$, see Remark [Remark 14](#rem:large_box_good){reference-type="ref" reference="rem:large_box_good"}. [^8]: In SFP, there are choices of $h$ that enforce all nearest-neighbour edges to be present, so density one is possible, but not guaranteed. 
[^9]: The formulation of [@antal1996chemical Theorem 1.1] allows $\kappa'$ to depend on the edge-retention probability $p=1-\varepsilon$ because it allows $p$ to be arbitrarily close to the criticality threshold. So it is not quite clear from the formulation that $\kappa'$ is independent of $\varepsilon$. However, the condition $\varepsilon\le\varepsilon_0$ means that our $p$ is bounded away from the criticality threshold, and the proof in [@antal1996chemical] relies on a domination argument, so we may use the same $\kappa'$ for all $\varepsilon\le \varepsilon_0$.
{ "id": "2309.11880", "title": "Four universal growth regimes in degree-dependent first passage\n percolation on spatial random graphs II", "authors": "J\\'ulia Komj\\'athy, John Lapinskas, Johannes Lengler, Ulysse Schaller", "categories": "math.PR cs.SI math.CO q-bio.PE", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | We introduce the notion of Lie-Yamaguti algebra bundle, define its cohomology groups with coefficients in a representation and show that such bundles appear naturally from geometric considerations in the work of M. Kikkawa, which motivates us to introduce this object in the proper mathematical framework. We also study abelian extensions of Lie-Yamaguti algebra bundles and investigate their relationship with a suitable cohomology group. address: - RKMVERI, Belur, Howrah-711202, India. - TCG Centres for Research and Education in Science and Technology, Institute for Advancing Intelligence, Sector V, Salt lake, Kolkata 700091, INDIA. - TCG Centres for Research and Education in Science and Technology, Institute for Advancing Intelligence, Sector V, Salt lake, Kolkata 700091, INDIA. author: - Saikat Goswami - Goutam Mukherjee title: LIE-YAMAGUTI ALGEBRA BUNDLE --- # Introduction {#$1$} Triple systems in algebra may be traced back to the works of P. Jordan, J. v. Neumann and E. Wigner [@JNW] in quantum mechanics, and N. Kemmer [@NK39; @NK43] in particle physics. The notion of Lie triple system was formally introduced as an algebraic object by Jacobson [@NJ] in connection with problems which arose from quantum mechanics. Nomizu [@KN] proved that affine connections with parallel torsion and curvature are locally equivalent to invariant connections on reductive homogeneous spaces, and that each such space has a canonical connection for which parallel translation along geodesics agrees with the natural action of the group. Let $M$ be a smooth manifold equipped with a linear connection $\nabla.$ Let $e\in M$ be a given fixed point.
Then there is a local multiplication $\mu$ at $e$ compatible with $\nabla$, which is given by $$\mu (x, y) = \exp_x \circ \tau_{e,x} \circ \exp_e^{-1}(y),$$ where $\exp_x$ denotes the exponential mapping at $x$ and $\tau_{e,x}$ denotes the parallel displacement of tangent vectors along the geodesic joining $e$ to $x$ in a normal neighbourhood of $e$ [@MK75]. If $M$ is a reductive homogeneous space $A/K$ with the canonical connection, due to K. Nomizu, then the local multiplication $\mu$ given above satisfies some special property (cf. [@KN]). In particular, if $M$ is a Lie group $A$ itself, then the canonical connection reduces to the connection of [@CS] and the local multiplication $\mu$ coincides locally with the multiplication of $A$. Motivated by this fact, M. Kikkawa [@MK75] investigated the problem of the existence of a global differentiable binary system on a reductive homogeneous space $A/K,$ which coincides locally with the above geodesic local multiplication $\mu$, and observed that the problem is related to the canonical connection and to the general Lie triple system defined on the tangent space $T_eM.$ In that paper, Kikkawa renamed the notion of general Lie triple system as *Lie triple algebra*. Kinyon and Weinstein [@KW] observed that Lie triple algebras, which they called Lie-Yamaguti algebras in their paper, can be constructed from Leibniz algebras. Leibniz algebras are a non-antisymmetric analogue of Lie algebras introduced by J. L. Loday [@Loday93]. **Organization of the paper**: In §[2](#$2$){reference-type="ref" reference="$2$"}, we set up notations, recall some known definitions and results. In §[3](#$3$){reference-type="ref" reference="$3$"}, we introduce the main object of study of the present paper, namely, the notion of a *Lie-Yamaguti algebra bundle*, illustrate examples of such bundles and describe a general method of constructing such bundles.
In §[4](#$4$){reference-type="ref" reference="$4$"}, we introduce the concept of representation of Lie-Yamaguti algebra bundles which is required to introduce cohomology groups of Lie-Yamaguti algebra bundles. In §[5](#$5$){reference-type="ref" reference="$5$"}, we define cohomology groups of a Lie-Yamaguti algebra bundle with coefficients in a given representation. Finally, in §[6](#$6$){reference-type="ref" reference="$6$"}, we study (abelian) extensions of Lie-Yamaguti algebra bundles and establish their connection to cohomology. # Preliminaries {#$2$} The aim of this section is to recall some basic definitions and set up notations to be followed throughout the paper. Let $\mathbb K$ be a given field. **Definition 1**. A Lie algebra is a vector space $\mathfrak g$ over $\mathbb K$ equipped with a $\mathbb K$-bilinear operation $[~,~] : \mathfrak g\times \mathfrak g \rightarrow \mathfrak g$ satisfying 1. (Anti-symmetry): $[x, y] = -[y, x]$ for all $x, y \in \mathfrak g$; 2. (Jacobi identity): $[[x,y],z] + [[y,z],x] + [[z,x],y] =0$ for all $x, y, z \in \mathfrak g.$ **Definition 2**. A Leibniz algebra is a vector space $\mathfrak g$ over $\mathbb K$ equipped with a $\mathbb K$-bilinear operation $\cdot : \mathfrak g \times \mathfrak g \rightarrow \mathfrak g$ satisfying the Leibniz identity $$x\cdot (y\cdot z) = (x\cdot y)\cdot z + y\cdot (x\cdot z)$$ for all $x, y, z \in \mathfrak g.$ It is easy to see that in the presence of antisymmetry the Leibniz identity reduces to the Jacobi identity. Thus, Lie algebras are examples of Leibniz algebras. See [@Loday93] for many other non-trivial examples of Leibniz algebras. **Definition 3**. A Lie triple system is a vector space $\mathfrak g$ over $\mathbb K$ equipped with a $\mathbb K$-trilinear operation $$\{~,~,~\} : \mathfrak g\times \mathfrak g\times \mathfrak g \rightarrow \mathfrak g$$ satisfying 1. $\{ x, y, z\} = -\{ y, x, z\}$ for all $x, y, z \in \mathfrak g$; 2.
$\{ x,y,z\} + \{ y,z,x\} + \{ z,x,y\} = 0$ for all $x, y, z \in \mathfrak g$; 3. $\{ x, y, \{ u, v, w\} \} = \{ \{ x, y, u \} , v, w\} + \{ u, \{ x, y, v\} , w\} + \{ u, v, \{ x, y, w\} \}$\ for all $x, y, u, v, w \in \mathfrak g.$ The following is an interesting example of a Lie triple system which arose from physics [@NJ]. **Example 4**. We denote by $M_{n+1}(\mathbb R)$ the set of all $(n+1) \times (n+1)$ matrices over the field $\mathbb R$, which is an associative algebra with respect to matrix multiplication. Let $\delta_{ij}$ denote the Kronecker delta symbol $$\delta_{ij}= \left\{\begin{array}{ll} 0 & i \neq j \\ 1 & i=j \end{array} \right.$$ and $e_{i,j}$ denote the elementary matrix which has $1$ in the $(i,j)$-entry as its only non-zero entry. Let $\mathfrak m$ be the subspace of $M_{n+1}(\mathbb R)$ spanned by the matrices $G_i$ for $i=1,2,\cdots,n$, where $G_i = e_{i,n+1}-e_{n+1, i}.$ As an example, for $n=3,$ the matrix $G_2 \in M_4(\mathbb R)$ is given by $$G_2 = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \\ \end{pmatrix}.$$ Then, the subspace $\mathfrak m$ is closed under the ternary product $$\{A, B, C\}:=[[A,B],C], ~~A, B, C \in \mathfrak m,$$ where $[A,B] := AB - BA$ is the commutator bracket. Explicitly, the trilinear product of the basis elements is given by $$[[G_i,G_j],G_k] = \delta_{ki} G_j - \delta_{kj} G_i.$$ It turns out that $(\mathfrak m, \{~, ~,~\})$ is a Lie triple system, first used in [@Duffin] to provide a significant and elegant algebraic formalism of the Meson equations. It was introduced formally as a Lie triple system by Jacobson [@NJ] and is hence known as the Meson field. *Remark 5*. Note that any Lie algebra $(\mathfrak g, [~,~])$ can be viewed as a Lie triple system with the trilinear operation $$\{ x, y, z\} := [[x, y], z]$$ for all $x, y, z \in \mathfrak g.$ **Definition 6**.
A Lie-Yamaguti algebra $(\mathfrak g, [~,~], \{~, ~, ~\})$ is a vector space $\mathfrak g$ equipped with a $\mathbb K$-bilinear operation $$[~, ~]: \mathfrak g \times \mathfrak g \rightarrow \mathfrak g$$ and a $\mathbb K$-trilinear operation $$\{~, ~, ~\} : \mathfrak g\times \mathfrak g \times \mathfrak g \rightarrow \mathfrak g$$ such that for all $x, y, z, u, v, w \in \mathfrak g$ the following relations hold: $$[x,y] = -[y,x]; \tag{LY1} \label{LY1}$$ $$\{x, y, z\} = -\{y,x, z\}; \tag{LY2} \label{LY2}$$ $$\Sigma_{\circlearrowleft (x, y, z)}([[x,y], z] + \{x, y, z\}) =0; \tag{LY3} \label{LY3}$$ $$\Sigma_{\circlearrowleft (x, y, z)}\{[x, y], z, u\} = 0; \tag{LY4} \label{LY4}$$ $$\{x, y, [u, v]\} = [\{x, y, u\}, v] + [u, \{x, y, v\}]; \tag{LY5} \label{LY5}$$ $$\{x, y, \{u, v, w\}\} = \{\{x, y, u\}, v, w\} + \{u, \{x, y, v\}, w\} + \{u, v, \{x, y, w\}\}. \tag{LY6} \label{LY6}$$ Here, $\Sigma_{\circlearrowleft (x,y,z)}$ denotes the sum over cyclic permutations of $x$, $y$ and $z$. *Remark 7*. Notice that if the trilinear product in a Lie-Yamaguti algebra is trivial, that is, if $\{~,~,~\} = 0,$ then (LY$2$), (LY$4$), (LY$5$), and (LY$6$) are trivial, and (LY$1$) and (LY$3$) define a Lie algebra structure on $\mathfrak g$. On the other hand, if the binary product is trivial, that is, $[~,~] =0,$ then (LY$1$), (LY$4$), and (LY$5$) are trivial, and (LY$2$), (LY$3$), together with (LY$6$) define a Lie triple system on $\mathfrak g.$ The following result is well-known. **Lemma 8**. *Let $(\mathfrak g, [~, ~])$ be a Lie algebra over $\mathbb K$. Then, $\mathfrak g$ has a Lie-Yamaguti algebra structure induced by the given Lie bracket, the trilinear operation being: $$\{a, b, c\} = [[a, b], c]$$ for all $a, b, c \in \mathfrak g$.* **Example 9**. Let $(\mathfrak g, \cdot)$ be a Leibniz algebra.
Define a bilinear operation and a trilinear operation as follows: $$[~, ~]: \mathfrak g \times \mathfrak g \to \mathfrak g, ~~ [a, b] := a\cdot b-b\cdot a,~~ a, b \in \mathfrak g;$$ $$\{~, ~, ~\}: \mathfrak g \times \mathfrak g \times \mathfrak g \to \mathfrak g, ~~\{a, b, c\} := -(a\cdot b)\cdot c,~~a, b, c \in \mathfrak g.$$ Then, $(\mathfrak g, [~,~],\{~,~,~\})$ is a Lie-Yamaguti algebra. Let $(\mathfrak g, \langle~,~\rangle)$ be a Lie algebra. Recall that a reductive decomposition of $\mathfrak g$ is a vector space direct sum $\mathfrak g = \mathfrak h \oplus\mathfrak m$ satisfying $\langle\mathfrak h, \mathfrak h\rangle \subseteq\mathfrak h$ and $\langle\mathfrak h, \mathfrak m\rangle \subseteq \mathfrak m.$ In this case, we call $(\mathfrak h, \mathfrak m)$ a *reductive pair*. **Example 10**. Let $(\mathfrak g, \langle~, ~\rangle)$ be a Lie algebra with a reductive decomposition $\mathfrak g = \mathfrak h \oplus\mathfrak m.$ Then, there exist a natural binary and a natural ternary product on $\mathfrak m$ defined by $$[a, b] := \pi_{\mathfrak m}(\langle a, b\rangle),~~\{a, b, c\} := \langle \pi_{\mathfrak h}(\langle a, b\rangle), c \rangle,$$ where $\pi_{\mathfrak m}$ and $\pi_{\mathfrak h}$ are the projections on $\mathfrak m$ and $\mathfrak h$, respectively. These products endow $\mathfrak m$ with the structure of a Lie-Yamaguti algebra [@BBM]. **Example 11**. Consider the vector space $\mathfrak g$ over $\mathbb K$ generated by $\{e_1, e_2, e_3\}.$ Define a bilinear operation $[~,~]$ and a trilinear operation $\{~, ~, ~\}$ on $\mathfrak g$ as follows. $$[e_1, e_2] = e_3;~~\{e_1, e_2, e_1\} = e_3.$$ All other brackets of the basis elements are either determined by the definition of Lie-Yamaguti algebra or else are zero. Then, $\mathfrak g$ with the above operations is a Lie-Yamaguti algebra. See [@ABCO] for a classification of some low-dimensional Lie-Yamaguti algebras. **Definition 12**.
Let $(\mathfrak g, [~,~], \{~,~,~\})$, $(\mathfrak g^\prime, [~,~]^\prime, \{~,~,~\}^\prime)$ be two Lie-Yamaguti algebras. A homomorphism $$\phi : (\mathfrak g, [~,~], \{~,~,~\}) \rightarrow (\mathfrak g^\prime, [~,~]^\prime, \{~,~,~\}^\prime)$$ of Lie-Yamaguti algebras is a $\mathbb K$-linear map $\phi : \mathfrak g \rightarrow \mathfrak g^\prime$ satisfying $$\phi ([x, y]) = [\phi (x), \phi (y)]^\prime,~~~~\phi (\{x, y, z\}) = \{\phi (x), \phi (y), \phi (z)\}^\prime$$ for all $x, y, z \in \mathfrak g.$ A homomorphism $$\phi : (\mathfrak g, [~,~], \{~,~,~\}) \rightarrow (\mathfrak g^\prime, [~,~]^\prime, \{~,~,~\}^\prime)$$ of Lie-Yamaguti algebras is an isomorphism if there exists a homomorphism $$\phi^\prime: (\mathfrak g^\prime, [~,~]^\prime, \{~,~,~\}^\prime) \rightarrow (\mathfrak g, [~,~], \{~,~,~\})$$ such that $\phi^\prime \circ \phi = id_{\mathfrak g}$ and $\phi \circ \phi^\prime = id_{\mathfrak g^\prime}.$ The set of all self-isomorphisms of a Lie-Yamaguti algebra $(\mathfrak g, [~,~], \{~,~,~\})$ is obviously a group under composition of maps and is denoted by ${Aut}_{LY}(\mathfrak g).$ The notion of Lie algebra bundle was introduced in [@DL]. For smooth Lie algebra bundles we refer to [@mackenzie]. Other notions of algebra bundles are available in the literature and have appeared in various contexts. Let $M$ be a smooth manifold (Hausdorff and second countable, hence, paracompact). Let $C^\infty(M)$ be the algebra of smooth functions on $M$. Let $TM$ be the tangent bundle of $M$. Recall that a vector field on $M$ is a smooth section of the tangent bundle $TM.$ Let us denote the space of vector fields on $M$ by $\chi (M).$ It is well-known that $\chi (M)$ is a $C^\infty (M)$-module.
Moreover, $\chi (M)$ is a Lie algebra with the commutator bracket: $$[\alpha, \beta] := \alpha \beta -\beta \alpha$$ for $\alpha, \beta \in \chi (M).$ Here, for $\alpha, \beta \in \chi (M)$ and $p \in M,$ the action of $\alpha \beta (p)$ on a smooth function $f \in C^\infty (M)$ is given by $$\alpha \beta (p)(f) = \alpha_p(\beta f),$$ where $\beta f \in C^\infty(M)$ is given by $\beta f (m) = \beta_m (f), ~~m \in M.$ For a (smooth) vector bundle $p : L \rightarrow M,$ often denoted by $\xi = (L, p, M),$ we denote the space of smooth sections of $L$ by $\Gamma L.$ It is well-known that $\Gamma L$ is a $C^\infty (M)$-module. For any $m\in M,$ we denote the fibre of the vector bundle $\xi$ over $m$ by $L_m$ or sometimes by $\xi_m.$ Henceforth, we will work in the smooth category and with $\mathbb K = \mathbb R.$ **Definition 13**. Let $(L, p, M)$ be a vector bundle and let $[~, ~]$ be a section of the bundle $Alt^2(L)$ such that for each $m \in M,$ $$[~,~]_m : L_m \times L_m \rightarrow L_m$$ is a Lie algebra bracket on $L_m.$ We call such a section a field of Lie algebra brackets in $L.$ **Definition 14**. A Lie algebra bundle is a vector bundle $(L, p, M)$ together with a field of Lie algebra brackets $$m \mapsto [~, ~]_m,~m \in M.$$ Thus, for a Lie algebra bundle $(L, p, M),$ each fibre $L_m$ is a Lie algebra which varies smoothly as $m$ varies over $M.$ In other words, the assignment $m\mapsto [~,~]_m,~~m \in M$ is smooth. **Definition 15**. Let $\mathfrak g$ be a given Lie algebra. A locally trivial Lie algebra bundle with fibre $\mathfrak g$ is a vector bundle $(L, p, M)$ together with a field of Lie algebra brackets $$m\mapsto [~, ~]_m,~~m \in M$$ such that $M$ admits an open covering $\{U_i\}$ equipped with local trivializations $\{\psi_i: U_i\times \mathfrak g \rightarrow p^{-1}(U_i)\}$ for which each $\psi_{i,m},~~m \in U_i$ ($\psi_i$ restricted to each fibre $L_m$) is a Lie algebra isomorphism.
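Each fibre of a Lie algebra bundle carries a Lie algebra structure in the sense of Definition 1. As a purely illustrative sanity check (ours, not part of the paper; all helper names are ad hoc), the two axioms can be verified numerically for the commutator bracket on $3\times 3$ real matrices, the finite-dimensional model of the bracket on $\chi(M)$ recalled above:

```python
import random

# 3x3 matrices as nested lists, with the commutator bracket
# [A, B] = AB - BA written out by hand (no external libraries).
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def matadd(A, B):
    return [[A[i][j] + B[i][j] for j in range(3)] for i in range(3)]

def matsub(A, B):
    return [[A[i][j] - B[i][j] for j in range(3)] for i in range(3)]

def bracket(A, B):
    return matsub(matmul(A, B), matmul(B, A))

def is_zero(A, tol=1e-9):
    return all(abs(A[i][j]) < tol for i in range(3) for j in range(3))

random.seed(0)
A, B, C = [[[random.uniform(-1, 1) for _ in range(3)] for _ in range(3)]
           for _ in range(3)]

# Anti-symmetry: [A, B] + [B, A] = 0.
assert is_zero(matadd(bracket(A, B), bracket(B, A)))

# Jacobi identity: [[A,B],C] + [[B,C],A] + [[C,A],B] = 0.
jacobi = matadd(bracket(bracket(A, B), C),
                matadd(bracket(bracket(B, C), A), bracket(bracket(C, A), B)))
assert is_zero(jacobi)
```

The same check applies verbatim in each fibre of a Lie algebra bundle of matrix Lie algebras.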
A homomorphism $\phi: (L, p, M) \rightarrow (L^\prime, p^\prime, M^\prime)$ of Lie algebra bundles is a vector bundle morphism $(\phi, \phi_0),$ where $\phi_0: M\rightarrow M^\prime,$ such that $\phi|_{L_m}: L_m \rightarrow L^\prime_{\phi_0(m)},~m \in M,$ is a Lie algebra homomorphism. # Lie-Yamaguti Algebra Bundle {#$3$} In this section we introduce the notion of a Lie-Yamaguti algebra bundle and discuss related results. All vector bundles and vector bundle maps are assumed to be smooth and $\mathbb K = \mathbb R.$ **Definition 16**. Let $\xi = (L, p, M)$ be a (real) vector bundle. Let $\langle~, \cdots , ~\rangle$ be a section of the bundle $Hom(\xi^{\otimes k}, \xi).$ We call such a section a $k$-field of ($\mathbb K$-multilinear) brackets in $\xi.$ Thus, a $k$-field of brackets in $\xi$ is a smooth assignment $$m\mapsto (\langle~,\cdots, ~\rangle_m : \xi_m \times \cdots \times \xi_m \rightarrow \xi_m)$$ of a multilinear operation on $\xi_m$, $m \in M.$ **Definition 17**. A Lie-Yamaguti algebra bundle is a vector bundle $\xi=(L, p, M)$ together with a $2$-field of brackets $$m\mapsto [~, ~]_m,~~ m\in M$$ and a $3$-field of brackets $$m \mapsto \{~,~, ~\}_m,~~m \in M$$ which make each fibre $\xi_m,$ $m\in M,$ a Lie-Yamaguti algebra. **Definition 18**. Let $(\mathfrak g, [~,~]_{\mathfrak g}, \{~, ~, ~\}_{\mathfrak g})$ be a given Lie-Yamaguti algebra. A locally trivial Lie-Yamaguti algebra bundle is a vector bundle $\xi=(L, p, M)$ together with a $2$-field of brackets $$m\mapsto [~, ~]_m,~~ m \in M$$ and a $3$-field of brackets $$m \mapsto \{~,~, ~\}_m,~~ m \in M$$ such that $M$ admits an open covering $\{U_i\}$ equipped with local trivializations $\{\psi_i: U_i\times \mathfrak g \rightarrow p^{-1}(U_i)\}$ for which each $\psi_{i,m},~~m \in U_i$ ($\psi_i$ restricted to each fibre $\xi_m$) is a Lie-Yamaguti algebra isomorphism. *Remark 19*.
Thus, for a Lie-Yamaguti algebra bundle as defined above, each fibre $\xi_m= p^{-1}(m),~~m\in M,$ together with the binary operation $[~,~]_m$ and the ternary operation $\{~, ~,~\}_m$ is a Lie-Yamaguti algebra isomorphic to $\mathfrak g,$ and the assignments $$m \mapsto [~,~]_m,~~m\mapsto \{~, ~, ~\}_m$$ vary smoothly over $M.$ In other words, a Lie-Yamaguti algebra bundle over $M$ is a vector bundle over $M$ such that each fibre of the bundle has a Lie-Yamaguti algebra structure isomorphic to $\mathfrak g.$ An obvious example of a Lie-Yamaguti algebra bundle is the trivial bundle over a smooth manifold $M$ with fibres a Lie-Yamaguti algebra. **Example 20**. Let $(\mathfrak g, [~,~], \{~, ~, ~\})$ be a given Lie-Yamaguti algebra and $M$ be any smooth manifold. Then the trivial vector bundle $\xi = M\times \mathfrak g$ with the projection onto the first factor $\pi_1: M\times \mathfrak g \to M$ is a Lie-Yamaguti algebra bundle, called the product Lie-Yamaguti algebra bundle. We have the following example from Lemma [Lemma 8](#Lie to LYA){reference-type="ref" reference="Lie to LYA"}. **Example 21**. Any Lie algebra bundle $(L, p, M, [~,~])$ is a Lie-Yamaguti algebra bundle, where the $3$-field of brackets on $M$ induced by the $2$-field of Lie brackets $$m \mapsto [~,~]_m, ~~m\in M$$ is defined by $$\{a, b, c\}_m := [[a, b]_m, c]_m,~~m\in M,$$ for $a, b, c \in L_m, ~m\in M.$ **Definition 22**. Let $\xi = (L, p, M)$ be a Lie algebra bundle with the field of Lie algebra brackets $m \mapsto [~,~]_m,~~m\in M.$ A reductive decomposition of $\xi$ is a pair $(L^1, L^2)$ of sub-bundles of $L$ such that $L$ is a Whitney sum $L = L^1\oplus L^2$ satisfying $[ L^1_m, L^1_m]_m \subseteq L^1_m$ and $[L^1_m, L^2_m]_m \subseteq L^2_m.$ In this case, we call $(L^1, L^2)$ a reductive pair. For a reductive pair as above, let $\pi^i: L\to L^i,~~i= 1, 2,$ denote the vector bundle projection maps. **Example 23**.
Let $(L^1, L^2)$ be a reductive decomposition of a Lie algebra bundle $\xi = (L, p, M)$ as described in the above definition. Then, define a $2$-field of brackets and a $3$-field of brackets $$m\mapsto \langle~,~\rangle_m,~~ m\mapsto \{~,~,~\}_m,~m \in M,$$ on the vector bundle $(L^2, p|_{L^2}, M)$ as follows. Let $a, b, c \in L^2_m,~~m \in M.$ $$\langle a, b \rangle_m := \pi^2([a, b]_m),~~\{a, b, c\}_m := [\pi^1([a, b]_m), c]_m.$$ Then, as in the case of Example [Example 10](#reductive){reference-type="ref" reference="reductive"}, the vector bundle $(L^2, p|_{L^2}, M)$ is a Lie-Yamaguti algebra bundle equipped with the $2$-field of brackets and the $3$-field of brackets as defined above. Next, we discuss an interesting example of a Lie-Yamaguti algebra bundle that arose from the work of M. Kikkawa [@MK64; @MK75; @MK99] to characterize some local geometric properties. We recall some definitions which are necessary to describe our next example. Recall that a linear connection on a smooth manifold $M$ is an $\mathbb R$-bilinear map $$\nabla: \chi (M) \times \chi (M) \to \chi (M),$$ written $\nabla_XY$ for $\nabla(X,Y)$, satisfying the two properties stated below: For all $X,Y \in \chi (M)$ - $\nabla_XY$ is $C^{\infty}(M)$-linear in $X$. - (Leibniz rule) $\nabla_XY$ satisfies the Leibniz rule in $Y$: For all $f \in C^{\infty} (M)$, $$\nabla_X(fY) = (Xf)Y+f(\nabla_XY).$$ Now, let $M$ be a smooth manifold with a linear connection $\nabla$.
Recall that - the torsion tensor of the connection $\nabla$ is the $C^{\infty}(M)$-bilinear map $S: \chi(M) \times \chi(M) \to \chi(M)$ defined as $$S(X,Y) := \nabla_XY - \nabla_YX - [X,Y],~~~X, Y \in \chi (M),$$ where $[X,Y]$ is the Lie bracket of $\chi (M)$ and - the curvature tensor of the connection $\nabla$ is the $C^{\infty}(M)$-trilinear map $R:\chi (M) \times \chi (M) \times \chi (M) \to \chi (M)$ defined as $$R(X,Y)Z := \nabla_X \nabla_Y Z - \nabla_Y \nabla_X Z - \nabla_{[X,Y]} Z,~~~X, Y, Z \in \chi(M).$$ Recall the following definitions [@MK75]. **Definition 24**. Let $M$ be a smooth manifold with a connection $\nabla$. Let $S$ and $R$ denote the torsion and curvature tensors of $\nabla,$ respectively. Then, $(M,\nabla)$ is said to be a locally reductive space if $\nabla S = 0$ and $\nabla R =0;$ that is, - for all $X,Y,Z \in \chi (M)$; $\nabla_X S(Y,Z) = 0;$ - for all $X, U, V, W \in \chi (M)$; $\nabla_X R(U,V)W = 0.$ **Definition 25**. Let $G$ be a connected Lie group and $H$ be a closed subgroup of $G.$ Then the homogeneous space $M = G/H$ is said to be reductive if and only if $G$ acts effectively on $M$ and the Lie algebra $\mathfrak g$ of $G$ admits a direct sum decomposition as $$\mathfrak g = \mathfrak m \oplus \mathfrak h,$$ where $\mathfrak h$ is the Lie algebra of $H$ and $\mathfrak m$ is a subspace of $\mathfrak g$ with $Ad(H)\mathfrak m \subseteq \mathfrak m.$ Next, we recall the notion of homogeneous Lie loops. **Definition 26**. Let $G=(G,\mu)$ be a binary system with the binary operation $$\mu:G \times G \to G.$$ Then $G$ is a loop if there is a (two-sided) identity $e \in G$, $xe=ex=x ~~ (x \in G)$, and the left and right translations of $G$ by any element $x \in G$, denoted by $$L_x,R_x:G \to G;~L_x(y) = xy,~ R_x(y) = yx ~~~~ (y \in G),$$ are permutations of $G$. **Definition 27**. A loop $G$ is said to have the left inverse property if for any $x \in G$ there exists an element $x^{-1} \in G$ such that $$x^{-1}(xy) = y ~~~~ (y \in G).$$ **Definition 28**.
Let $L_0(G)$ be the group generated by all left inner mappings, i.e., $$L_{x,y} = L_{xy}^{-1} \circ L_x \circ L_y ~~~~ (x,y \in G).$$ A loop $G$ is called a left A-loop if the left inner mapping group $L_0(G)$ is a subgroup of the automorphism group $Aut(G)$ of $G$. **Definition 29**. A loop $(G,\mu)$ is said to be a homogeneous loop if it is a left A-loop with the left inverse property. **Definition 30**. A homogeneous Lie loop $G$ is a homogeneous loop which is also a smooth manifold such that the loop multiplication $\mu: G \times G \to G$ is smooth. The following construction produces examples of locally reductive spaces. - Let $G$ be a connected homogeneous Lie loop equipped with the canonical connection. - Define $K(G):=$ the closure of $L_0(G)$ in the smooth automorphism group $Aut(G)$ of $G$, and consider the semi-direct product $A(G)=G \times K(G)$. Since $G$ is connected, $L_0(G)$ is connected, and consequently $K(G)$ is also connected. $A(G)$ is also a connected Lie group with the product manifold structure. Further, $A(G)$ contains $K(G)$ as a closed subgroup. - The homogeneous space $A(G)/K(G)$ is reductive. Consider the reductive homogeneous space $A(G)/K(G)$ equipped with the canonical connection. Then, we have the following results from [@MK75]. **Theorem 31**. *For a connected homogeneous Lie loop $G$, the map $$i:G \to A(G)/K(G), ~~~i(x) = x \times K(G),$$ is a connection preserving loop isomorphism onto $A(G)/K(G)$ with multiplication $$(x \times K(G)).(y \times K(G)) = (xy) \times K(G) ~~~~~ (x,y \in G),$$ with respect to the canonical connections on $G$ and $A(G)/K(G)$.* As a result, any connected homogeneous Lie loop with canonical connection can be identified with a reductive homogeneous space with canonical connection. The following result of M. Kikkawa tells us that any reductive homogeneous space with canonical connection is locally reductive. **Theorem 32**.
*Let $S$ and $R$ denote the torsion and curvature tensors of the canonical connection $\nabla$ of a reductive homogeneous space $M=G/H$, respectively. Then $\nabla$ is locally reductive, i.e., $\nabla S=0$ and $\nabla R = 0.$* **Corollary 33**. *Any connected homogeneous Lie loop with the canonical connection is a locally reductive space.* Below is a list of some examples of homogeneous Lie loops. **Example 34**. Any Lie group is a homogeneous Lie loop. **Example 35**. The set of all positive definite real symmetric matrices, denoted by $P_n$, is a homogeneous Lie loop, with loop multiplication $\mu$ given by $$\mu (X, Y) = X^{\frac{1}{2}}Y X^{\frac{1}{2}},~~ X, Y \in P_n.$$ We are now in a position to describe a Lie-Yamaguti algebra bundle which arose from the work of M. Kikkawa. Since any connected homogeneous Lie loop with canonical connection is a locally reductive space, we obtain the following example (cf. Theorem $7.2$ of [@MK75]). **Example 36**. Let $M$ be a connected homogeneous Lie loop with the canonical connection. Let the associated torsion and curvature tensors be $S$ and $R$, respectively. Let $\xi = (TM, p, M)$ be the tangent bundle of $M.$ Define a $2$-field of brackets and a $3$-field of brackets on $M$ as follows: $$m\mapsto [a,b]_m = S_m(a,b); ~~ m\mapsto \{a,b,c\}_m = R_m(a,b) c ~~~~~ (a,b,c \in T_mM).$$ Then $\xi$ is a Lie-Yamaguti algebra bundle. Next, we discuss a general existence theorem for locally trivial Lie-Yamaguti algebra bundles. **Definition 37**. Let $(\mathfrak g, [~,~], \{~,~,~\})$ be a Lie-Yamaguti algebra and $G$ be a Lie group. We say that $G$ acts on $\mathfrak g$ if there exists a smooth homomorphism $$\phi : G \to {Aut}_{LY}(\mathfrak g), ~~g\mapsto \phi_g.$$ Given such an action $\phi,$ we simply write $ga =: \phi_g(a),~g\in G,~ a \in \mathfrak g.$ Note that any closed subgroup of ${Aut}_{LY}(\mathfrak g)$ acts smoothly on $\mathfrak g$ and is a closed subgroup of the general linear group $GL_n(\mathbb R).$ **Definition 38**.
Let $G$ be a Lie group and $M$ a smooth manifold. A family of smooth transition maps in $M$ with values in $G$ is an atlas $\{U_i: i\in I\}$ of $M$ together with a collection of smooth maps $$g_{ij} : U_i \cap U_j \to G,~~ i, j \in I,$$ where $I$ is any index set which we may assume to be countable, satisfying the following condition. For $i, j, k \in I$ with $U_i\cap U_j \cap U_k \neq \emptyset,$ $$g_{ij}(m)\cdot g_{jk}(m) = g_{ik}(m),~ m \in U_i \cap U_j \cap U_k.$$ It follows from the above condition, by taking $i = j= k,$ that for any $i\in I,$ $g_{ii}(m),~m \in U_i,$ is the identity of $G.$ The above condition is known as the cocycle condition. We have the following existence result for locally trivial Lie-Yamaguti algebra bundles, whose proof is parallel to that of the clutching construction in the theory of fibre bundles [@steenrod]. We sketch the proof. **Theorem 39**. *Let $(\mathfrak g, [~,~], \{~, ~, ~\})$ be a Lie-Yamaguti algebra equipped with a smooth action of a Lie group $G.$ Let $M$ be a smooth manifold with a given countable atlas $\{U_i :\ i \in I\}$ together with a family of smooth transition maps $$g_{ij} : U_i \cap U_j \to G,~~ i, j \in I,$$ in $M$ with values in $G.$ Then, there exists a locally trivial Lie-Yamaguti algebra bundle over $M,$ with $\mathfrak g$ as the fibre, $G$ as the structure group of the bundle and with $\{g_{ij}\}$ as the associated transition maps.* *Proof.* Consider the following space, where $I$ has the discrete topology: $$\tilde{L} := \bigcup _{i \in I}\{(u, a, i)| u\in U_i,~ a\in \mathfrak g, ~ i\in I\}.$$ Define an equivalence relation on $\tilde{L}$ by $(u, a, i) \sim (v, b, j)$ if and only if $u = v,~b = g_{ij}(u)a.$ Let $L = \tilde{L}/\sim.$ Let us denote the equivalence class of $(u, a, i)$ by $[u, a, i].$ Let $q : \tilde{L} \to L, (u, a,i) \mapsto [u, a, i],$ be the quotient map and $p : L \to M,~~ [u, a, i] \mapsto u,$ be the natural projection map.
If $q_i = q|_{(U_i \times \mathfrak g \times \{i\})},$ then it is readily seen that $q_i$ is injective, $(q_i(U_i \times \mathfrak g \times \{i\}), q_i^{-1})$ is a smooth chart on $L$ and $p : L \to M$ is a smooth vector bundle. We now show that $\xi = (L, p, M)$ is a Lie-Yamaguti algebra bundle. Let $m \in M$ and $\xi_m$ be the fibre over $m.$ Define a $2$-field of brackets $m \mapsto [~, ~]_m$ and a $3$-field of brackets $m \mapsto \{~, ~, ~\}_m$ as follows. Note that for $i \in I,$ the map $$\{\psi_i: U_i\times \mathfrak g \rightarrow p^{-1}(U_i)\}$$ defined by $$\psi_i (u, a) = q(u, a, i),~u \in U_i,~ a \in \mathfrak g,$$ gives the local trivialization of the vector bundle $\xi.$ Let $\psi_{i,m},~~m \in U_i\subset M,$ denote the restriction of $\psi_i$ to $\{m\}\times \mathfrak g.$ Let $a, b, c \in \xi_m,~m \in M.$ Choose $i \in I$ such that $m \in U_i.$ Define $$[a, b]_m := \psi_{i,m}([\psi_{i,m}^{-1}(a), \psi_{i, m}^{-1}(b)])$$ and $$\{a, b, c\}_m := \psi_{i,m}(\{\psi_{i,m}^{-1}(a), \psi_{i, m}^{-1}(b), \psi_{i, m}^{-1}(c)\}).$$ Then, it is routine to verify that $\xi$ is a locally trivial Lie-Yamaguti algebra bundle with fibre $\mathfrak g.$ ◻ *Remark 40*. The above theorem provides a general method of constructing a locally trivial Lie-Yamaguti algebra bundle from any Lie group of symmetries of a given Lie-Yamaguti algebra, over a manifold equipped with a family of smooth transition maps taking values in the group of symmetries. In particular, we may apply the above method to any Lie group of symmetries of the Lie-Yamaguti algebras discussed in the previous section to construct examples of Lie-Yamaguti algebra bundles. **Definition 41**. Let $\xi = (L, p, M)$ and $\xi^\prime = (L^\prime, p^\prime, M^\prime)$ be two Lie-Yamaguti algebra bundles.
A homomorphism $\phi: (L, p, M) \rightarrow (L^\prime, p^\prime, M^\prime)$ from $\xi$ to $\xi^\prime$ is a vector bundle morphism $(\tilde{\phi}, \phi),$ where $\tilde{\phi}: L \rightarrow L^\prime$ is the map between the total spaces and $\phi : M\rightarrow M^\prime$ is the map between the base spaces, such that $\tilde{\phi}|_{L_m}: L_m \rightarrow L^\prime_{\phi (m)}$ is a Lie-Yamaguti algebra homomorphism for each $m\in M.$ A homomorphism $\phi: \xi \rightarrow \xi^\prime$ of two Lie-Yamaguti algebra bundles over the same base space $M$ is a vector bundle morphism $\phi: \xi \rightarrow \xi^\prime$ such that $\phi|_{\xi_m}: \xi_m \rightarrow \xi^\prime_m$ is a Lie-Yamaguti algebra homomorphism for all $m \in M.$ Moreover, if $\phi|_{\xi_m}$ is a linear bijection for each $m \in M,$ then $\xi=(L, p, M)$ is said to be isomorphic to $\xi^\prime= (L^\prime, p^\prime, M).$ **Definition 42**. A Lie-Yamaguti algebra bundle $\xi$ is said to be trivial if it is isomorphic to a product Lie-Yamaguti algebra bundle. # Representation of Lie-Yamaguti Algebra Bundles {#$4$} The aim of this section is to introduce the notion of representation of Lie-Yamaguti algebra bundles. Our definition of representation of a Lie-Yamaguti algebra bundle is based on the definition of representation of a Lie-Yamaguti algebra [@KY]. **Definition 43**. Let $\xi = (L, p, M)$ be a Lie-Yamaguti algebra bundle and $\eta = (E, q, M)$ be a vector bundle.
For any point $m\in M,$ let $\eta_m$ denote the fibre $\eta_m = q^{-1}(m)$ of the bundle $\eta$ over $m.$ A representation of the Lie-Yamaguti algebra bundle $\xi$ on the vector bundle $\eta$ consists of vector bundle morphisms $$\rho : \xi \to \textrm{End} (\eta), ~~D,~ \theta : \xi \otimes \xi \to \textrm{End} (\eta)$$ such that these maps restricted to each fibre satisfy the conditions (RLYB$1$) - (RLYB$6$) as described below, where the bilinear maps $$D|_{\xi_m}, ~\theta|_{\xi_m} : \xi_m\times \xi_m \to \textrm{End}(\eta_m),$$ obtained by restricting $D,~\theta$ to a fibre $\xi_m,$ are denoted by $D_m$ and $\theta_m,$ respectively, and similarly, $\rho_m$ is the linear map $$\rho|_{\xi_m} : \xi_m \to \textrm{End}(\eta_m).$$ For any $m \in M$ and $a, b, c, d \in \xi_m,$ $$\begin{aligned} &D_m(a,b) + \theta_m(a,b) - \theta_m(b,a) = [\rho_m(a),\rho_m(b)] - \rho_m([a,b]_m); \label{RLYB1} \tag{RLYB1} \\ &\theta_m(a,[b,c]_m) - \rho_m(b) \theta_m(a,c) + \rho_m(c) \theta_m(a,b) = 0; \label{RLYB2} \tag{RLYB2} \\ &\theta_m([a,b]_m,c) - \theta_m(a,c) \rho_m(b) + \theta_m(b,c) \rho_m(a) = 0; \label{RLYB3} \tag{RLYB3} \\ &\theta_m(c,d) \theta_m(a,b) - \theta_m(b,d) \theta_m(a,c) - \theta_m(a,\{b,c,d\}_m) + D_m(b,c) \theta_m(a,d) = 0; \label{RLYB4} \tag{RLYB4} \\ &[D_m(a,b),\rho_m(c)] = \rho_m(\{a,b,c\}_m); \label{RLYB5} \tag{RLYB5} \\ &[D_m(a,b),\theta_m(c,d)] = \theta_m(\{a,b,c\}_m,d) + \theta_m(c,\{a,b,d\}_m). \label{RLYB6} \tag{RLYB6} \end{aligned}$$ Here, $[~,~]$ on the left-hand and right-hand sides of (RLYB$1$), (RLYB$5$) and (RLYB$6$) applied to endomorphisms denotes the commutator in $\textrm{End}(\eta_m).$ We shall denote a representation of a Lie-Yamaguti algebra bundle $\xi$ on a vector bundle $\eta$ as described above by $(\eta;~\rho,~D,~ \theta).$ A representation $(\eta;~\rho,~D,~ \theta)$ of a Lie-Yamaguti algebra bundle $\xi$ is also called a $\xi$-module. *Remark 44*.
As in the case of representations of Lie-Yamaguti algebras [@KY], given a representation $(\eta;~\rho,~D,~ \theta)$ of a Lie-Yamaguti algebra bundle $\xi,$ we have, for every $m\in M,$ $$D_m([a, b]_m, c) + D_m([b, c]_m, a) + D_m([c, a]_m, b) = 0,\label{RLYB7} \tag{RLYB7}$$ for any $a,~ b,~ c \in \xi_m.$ **Example 45**. Given a Lie-Yamaguti algebra bundle $\xi$ over $M$, we may consider $\xi$ as a $\xi$-module, which gives us the adjoint representation of $\xi$ on itself. Explicitly, for each $m \in M,$ $\rho_m,~D_m,~\theta_m$ are given by $$\rho_m (a): b \mapsto [a, b]_m;~~D_m(a, b): c \mapsto \{a, b, c\}_m;~~ \theta_m (a, b) : c \mapsto \{c, a, b\}_m,$$ for any $a,~ b,~c \in \xi_m.$ *Remark 46*. Observe that for a $0$-dimensional manifold $M = \{pt\},$ a Lie-Yamaguti algebra bundle $\xi$ over $M$ is simply a Lie-Yamaguti algebra, and a representation $\eta$ of $\xi$ in this case reduces to a representation of the Lie-Yamaguti algebra $\xi.$ More generally, given any representation $(\eta; \rho, D, \theta)$ of a Lie-Yamaguti algebra bundle $\xi = (L, p, M)$ on the vector bundle $\eta = (E, q, M)$ over a smooth manifold $M,$ $(\eta_m; \rho_m, D_m, \theta_m)$ may be viewed as a representation of the Lie-Yamaguti algebra $\xi_m$ for any $m\in M.$ # Cohomology of Lie-Yamaguti Algebra Bundle {#$5$} In this section we introduce the cohomology of a Lie-Yamaguti algebra bundle with coefficients in a representation. The definition is motivated by the definition of the cohomology of a Lie-Yamaguti algebra as introduced in [@KY-cohomology]. We use Remark [Remark 46](#restriction){reference-type="ref" reference="restriction"} to introduce our definition. **Definition 47**. Let $\xi = (L, p, M)$ be a Lie-Yamaguti algebra bundle and $(\eta; \rho, D, \theta)$ be a $\xi$-module.
Let us denote the $2$-field and the $3$-field of brackets which make the vector bundle $\xi$ a Lie-Yamaguti algebra bundle by $$m\mapsto [~,~]_m,~~ m \mapsto \{~, ~, ~\}_m,~~ m\in M.$$ Let $$C^1(\xi; \eta) = \mbox{Hom}~(\xi; \eta).$$ Let $C^0(\xi; \eta)$ be the subspace spanned by the diagonal elements $(f,f) \in C^1(\xi;\eta) \times C^1(\xi;\eta)$. For $n\geq 2,$ let $C^n( \xi; \eta)$ be the space of all vector bundle maps $f : \xi^{\otimes n} \to \eta,$ that is, $f \in\mbox{Hom}~(\xi^{\otimes n}; \eta),$ such that the resulting $n$-linear maps $f_m=f|_{\xi^{\otimes n}_m} : \xi_m \times \cdots \times \xi_m \to \eta_m$ satisfy $$f_m(x_1, \ldots, x_{2i-1}, x_{2i}, \ldots, x_n) = 0,$$ if $x_{2i-1} = x_{2i}$ for some $i= 1, \ldots, [n/2].$ For $p \geq 1,$ set $$C^{(2p, 2p+1)}(\xi; \eta) := C^{2p}(\xi;\eta) \times C^{2p+1}(\xi; \eta).$$ Any element $(f, g) \in C^{(2p, 2p+1)}(\xi; \eta)$ will be referred to as a $(2p, 2p+1)$-cochain. For $p\geq 1,$ we define a coboundary operator $$\delta = (\delta_I, \delta_{II}) : C^{(2p, 2p+1)}(\xi;\eta) \to C^{(2p+2, 2p+3)}(\xi;\eta),$$ $$(f, g) \mapsto \delta(f, g)= (\delta_If, \delta_{II}g),$$ by defining it fibre-wise using the formula introduced by K. Yamaguti [@KY-cohomology].
In other words, for any $m \in M,$ $$\delta (f, g)_m = ((\delta_I)_mf_m, (\delta_{II})_mg_m).$$ Explicitly, for $m \in M$ and $x_1, \ldots, x_{2p+2} \in \xi_m,$ $$\begin{aligned} & (\delta_I)_m f_m (x_1, \ldots, x_{2p+2})\\ =& (-1)^p [\rho_m(x_{2p+1})g_m(x_1, \ldots,x_{2p}, x_{2p+2}) - \rho_m(x_{2p+2})g_m(x_1, \ldots,x_{2p}, x_{2p+1})\\ -& g_m(x_1, \ldots, x_{2p}, [x_{2p+1}, x_{2p+2}]_m)]\\ +& \sum_{k = 1}^p(-1)^{k+1} D_m(x_{2k-1}, x_{2k})f_m(x_1, \ldots, \hat{x}_{2k-1}, \hat{x}_{2k}, \ldots, x_{2p+2})\\ +& \sum_{k=1}^{p+1}\sum_{j = 2k+1}^{2p+2}(-1)^kf_m (x_1, \ldots,\hat{x}_{2k-1}, \hat{x}_{2k}, \ldots,\{x_{2k-1}, x_{2k}, x_j\}_m, \ldots, x_{2p+2}).\end{aligned}$$ Let $x_1, \ldots, x_{2p+3} \in \xi_m.$ Then, $$\begin{aligned} & (\delta_{II})_m g_m (x_1, \ldots, x_{2p+3})\\ =& (-1)^p[\theta_m(x_{2p+2}, x_{2p+3})g_m(x_1, \ldots, x_{2p+1})\\ -& \theta_m(x_{2p+1}, x_{2p+3})g_m(x_1, \ldots, x_{2p}, x_{2p+2})]\\ +& \sum_{k = 1}^{p+1}(-1)^{k+1} D_m(x_{2k-1}, x_{2k})g_m(x_1, \ldots, \hat{x}_{2k-1}, \hat{x}_{2k}, \ldots, x_{2p+3})\\ +& \sum_{k=1}^{p+1}\sum_{j = 2k+1}^{2p+3}(-1)^kg_m (x_1, \ldots,\hat{x}_{2k-1}, \hat{x}_{2k}, \ldots,\{x_{2k-1}, x_{2k}, x_j\}_m, \ldots, x_{2p+3}).\end{aligned}$$ Now observe that for any $m\in M,$ the coboundary operator $\delta_m$ is precisely the coboundary operator for the Lie-Yamaguti algebra $\xi_m$ with coefficients in $\eta_m$ (cf. Remark [Remark 46](#restriction){reference-type="ref" reference="restriction"}), and since $\delta_m \circ \delta_m = 0$ [@KY-cohomology] we obtain the following result. **Lemma 48**. *For $p\geq 1,$ the coboundary operator $$\delta = (\delta_I, \delta_{II}) : C^{2p}(\xi;\eta) \times C^{2p+1}(\xi; \eta) \to C^{2p+2}(\xi;\eta) \times C^{2p+3}(\xi; \eta)$$ satisfies $\delta \circ\delta = 0.$* **Definition 49**.
For the case $p\geq 2,$ let $Z^{(2p, 2p+1)}(\xi; \eta)$ be the subspace of $C^{(2p, 2p+1)}(\xi;\eta)$ spanned by $(f, g)$ such that $\delta (f, g) = 0$ and $B^{(2p, 2p+1)}(\xi; \eta)$ be the subspace $\delta (C^{(2p-2, 2p-1)}(\xi;\eta)).$ Then, the $(2p, 2p+1)$-cohomology group of the Lie-Yamaguti algebra bundle $\xi$ with coefficients in $\eta$ is defined by $$H^{(2p, 2p+1)}(\xi; \eta) := \frac{Z^{(2p, 2p+1)}(\xi; \eta)}{B^{(2p, 2p+1)}(\xi; \eta)}.$$ We next consider the case $p=1,$ and define the cohomology group $H^{(2,3)}(\xi; \eta).$ Define a coboundary operator $$\delta = (\delta_I, \delta_{II}): C^0(\xi; \eta) \to C^{(2, 3)} (\xi; \eta),~~ (f, f) \mapsto (\delta_If, \delta_{II}f),$$ where for $x_1, ~x_2, ~x_3 \in \xi_m,~ m\in M,$ $$\begin{aligned} (\delta_I)_m f_m (x_1, x_2) & = \rho_m(x_1)f_m(x_2)- \rho_m(x_2)f_m(x_1) -f_m([x_1, x_2]_m)\\ (\delta_{II})_mf_m(x_1, x_2, x_3) & = \theta_m(x_2, x_3)f_m(x_1) - \theta_m(x_1, x_3)f_m(x_2)\\ & + D_m(x_1, x_2)f_m(x_3)- f_m(\{x_1, x_2, x_3\}_m). \end{aligned}$$ Furthermore, we define another coboundary operator $$\delta^* = (\delta^*_I, \delta^*_{II}) : C^{(2, 3)}(\xi; \eta) \to C^{(3, 4)}(\xi; \eta)$$ as follows. 
Let $m\in M$ and $x_1, ~x_2, ~x_3,~x_4 \in \xi_m.$ Then for $(f, g) \in C^{(2, 3)}(\xi; \eta),$ $$\begin{aligned} &(\delta^*_I)_mf_m(x_1, x_2, x_3)\\ =& -\rho_m(x_1)f_m(x_2, x_3)-\rho_m(x_2)f_m(x_3, x_1)-\rho_m(x_3)f_m(x_1, x_2)\\ +& f_m([x_1,x_2]_m,x_3) + f_m([x_2,x_3]_m,x_1) + f_m([x_3,x_1]_m,x_2) \\ +& g_m(x_1, x_2, x_3) + g_m(x_2, x_3, x_1) + g_m(x_3, x_1, x_2),\end{aligned}$$ and $$\begin{aligned} & (\delta^*_{II})_mg_m (x_1, x_2, x_3, x_4)\\ & = \theta_m(x_1, x_4)f_m(x_2, x_3) + \theta_m(x_2, x_4)f_m(x_3, x_1) + \theta_m(x_3, x_4)f_m(x_1, x_2)\\ & + g_m([x_1,x_2]_m,x_3, x_4) + g_m([x_2,x_3]_m,x_1, x_4) + g_m([x_3,x_1]_m, x_2, x_4).\end{aligned}$$ Following [@KY-cohomology], we have for each $f\in C^1(\xi; \eta)$ $$\delta_I\delta_I f = \delta^*_I\delta_If= 0~~\mbox{and}~~\delta_{II}\delta_{II}f = \delta^*_{II}\delta_{II}f= 0.$$ In general, for $(f, g) \in C^{(2p, 2p+1)}(\xi; \eta)$ $$(\delta \circ \delta)(f, g) = (\delta_I\circ\delta_I(f), \delta_{II}\circ \delta_{II}(g)) =0.$$ **Definition 50**. We define $$H^1(\xi;\eta) := \{ f \in C^1( \xi; \eta) | \delta_If = 0,~\delta_{II}f = 0\}.$$ For $p= 1$, we define the cohomology $H^{(2,3)}(\xi; \eta)$ as follows. **Definition 51**. Let $Z^{(2, 3)}(\xi; \eta)$ be the subspace of $C^{(2,3)}(\xi; \eta)$ spanned by $(f, g)$ such that $\delta_If = \delta^*_If= 0,$ and $\delta_{II}g = \delta^*_{II}g= 0.$ Let $$B^{(2,3)}(\xi; \eta) = \{\delta (f, f)| f\in C^1(\xi; \eta)\}.$$ Then, the $(2, 3)$-cohomology group of the Lie-Yamaguti algebra bundle $\xi$ with coefficients in $\eta$ is defined by $$H^{(2, 3)}(\xi; \eta) = \frac{Z^{(2, 3)}(\xi; \eta)}{B^{(2,3)}(\xi; \eta)}.$$ # Extensions of Lie-Yamaguti algebra bundles {#$6$} Chevalley and Eilenberg [@CE] showed that extensions of Lie algebras can be interpreted in terms of certain cohomology groups. Later, D. K. Harrison [@Harrison] showed that certain Harrison cohomology groups of commutative algebras can be related to extensions of commutative algebras. 
In the same spirit, Yamaguti showed that the $(2,3)$-cohomology of a Lie-Yamaguti algebra with coefficients in a representation may be interpreted in terms of isomorphism classes of extensions of the Lie-Yamaguti algebra. The aim of this section is to introduce the notion of extension of Lie-Yamaguti algebra bundles and to describe isomorphism classes of extensions in terms of the $(2,3)$-cohomology of such bundles as introduced in the previous section. We begin with some definitions. **Definition 52**. Let $\xi = (L, p, M)$ be a Lie-Yamaguti algebra bundle. Let us denote the associated $2$-field and $3$-field of brackets by $$m\to [~,~]_m~~\mbox{and}~~m \to \{~, ~,~\}_m,~ m\in M,$$ respectively. An ideal of $\xi$ is a sub-bundle $\eta$ of the vector bundle $\xi$ such that for all $m \in M,$ $v\in \eta_m,$ $a,~b \in \xi_m$ $$[v, a]_m \in \eta_m ~~\text{and}~~ \{v, a, b\}_m \in \eta_m,~ \{a, b, v\}_m \in \eta_m.$$ An ideal $\eta$ of $\xi$ is said to be **abelian** if for all $m\in M,$ $u, v \in \eta_m$ and $a \in \xi_m,$ $$[u, v]_m = 0,~~\{u, v, a\}_m = \{u, a, v\}_m = \{a, u, v\}_m = 0.$$ **Definition 53**. Let $\tilde{\xi} = (\tilde{L}, \tilde{p}, M)$ and $\eta = (E, q, M)$ be Lie-Yamaguti algebra bundles. An extension of Lie-Yamaguti algebra bundles over $M$ is a short exact sequence in the category of Lie-Yamaguti algebra bundles over $M$ (that is, restriction to each fibre yields a short exact sequence of vector spaces where the maps involved are Lie-Yamaguti algebra homomorphisms) $$\begin{tikzcd} 0 & \eta & \tilde{\xi} & \xi & 0 \arrow[from=1-1, to=1-2] \arrow["i", from=1-2, to=1-3] \arrow["j", from=1-3, to=1-4] \arrow[from=1-4, to=1-5]. \end{tikzcd}$$ We call $\tilde{\xi}$ an extension of $\xi$ by $\eta$, and denote it by ${Ext}_{\tilde{\xi}}$. An extension as above is said to be **abelian** if $\eta$ is an abelian ideal of $\tilde{\xi}.$ Throughout this section, we will consider only abelian extensions. 
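Before proceeding, the coboundary operators $\delta$ and $\delta^*$ of the previous section can be sanity-checked numerically in a single fibre. The sketch below is our own illustration, not part of the formal development: it takes the fibre to be the Lie-Yamaguti algebra $(\mathbb{R}^3, \times)$ with ternary bracket $\{x,y,z\} := [[x,y],z]$, uses the adjoint-type action as the representation, and verifies $\delta^*_I(\delta_I f) = 0$ and $\delta^*_{II}(\delta_{II} f) = 0$ for a random $f \in C^1$; all function names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

# A single fibre: the Lie-Yamaguti algebra (R^3, x) with
# [a,b] = a x b and {a,b,c} = [[a,b],c].
def br(a, b):            # binary bracket [a,b]
    return np.cross(a, b)

def tr(a, b, c):         # ternary bracket {a,b,c}
    return np.cross(np.cross(a, b), c)

# Adjoint-type representation on eta = R^3:
# rho(a)v = [a,v], D(a,b)v = {a,b,v}, theta(a,b)v = {v,a,b}.
def rho(a, v): return br(a, v)
def D(a, b, v): return tr(a, b, v)
def theta(a, b, v): return tr(v, a, b)

# A 1-cochain f is a linear map; represent it by a random matrix.
F0 = rng.standard_normal((3, 3))
f = lambda x: F0 @ x

# delta = (delta_I, delta_II) : C^0 -> C^(2,3), fibre-wise formulas.
def dI_f(x1, x2):
    return rho(x1, f(x2)) - rho(x2, f(x1)) - f(br(x1, x2))

def dII_f(x1, x2, x3):
    return (theta(x2, x3, f(x1)) - theta(x1, x3, f(x2))
            + D(x1, x2, f(x3)) - f(tr(x1, x2, x3)))

# delta^* = (delta^*_I, delta^*_II) : C^(2,3) -> C^(3,4).
def dstarI(F, G, x1, x2, x3):
    cyc = [(x1, x2, x3), (x2, x3, x1), (x3, x1, x2)]
    return (sum(-rho(a, F(b, c)) for a, b, c in cyc)
            + sum(F(br(a, b), c) for a, b, c in cyc)
            + sum(G(a, b, c) for a, b, c in cyc))

def dstarII(F, G, x1, x2, x3, x4):
    return (theta(x1, x4, F(x2, x3)) + theta(x2, x4, F(x3, x1))
            + theta(x3, x4, F(x1, x2))
            + G(br(x1, x2), x3, x4) + G(br(x2, x3), x1, x4)
            + G(br(x3, x1), x2, x4))

x = [rng.standard_normal(3) for _ in range(4)]
assert np.allclose(dstarI(dI_f, dII_f, *x[:3]), 0)   # delta*_I(delta_I f) = 0
assert np.allclose(dstarII(dI_f, dII_f, *x), 0)      # delta*_II(delta_II f) = 0
```

Here `dstarI` follows the sign convention in which the cyclic $\rho$-terms carry a minus sign, as used in the computation in the proof of Proposition 56.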
A **section** of the vector bundle map $j:\tilde{\xi} \to \xi$ is a vector bundle map $\sigma: \xi \to \tilde{\xi}$ such that $j \circ \sigma = id_{\xi}$. Note that if $\sigma$ is a section as above then $\tilde{\xi}$ may be viewed as a Whitney sum of $\xi$ and $\eta.$ Here, we identify $\eta$ with its image $i(\eta).$ In other words, $\tilde{\xi} = \xi \oplus \eta$. Since we are concerned with smooth vector bundles over smooth (paracompact) manifolds, any short exact sequence splits. We now introduce the notion of equivalence of two extensions. **Definition 54**. Two extensions of $\xi$ by $\eta$, say ${Ext}_{\tilde{\xi}}: 0 \to \eta \xrightarrow[]{i} \tilde{\xi} \xrightarrow[]{j} \xi \to 0$ and ${Ext}_{\hat{\xi}}: 0 \to \eta \xrightarrow[]{\hat{i}} \hat{\xi} \xrightarrow[]{\hat{j}} \xi \to 0$, are said to be equivalent if there exists a Lie-Yamaguti algebra bundle isomorphism $f:\tilde{\xi} \to \hat{\xi}$ such that the following diagram is commutative. $$\begin{tikzcd} && {} \\ 0 & \eta & \tilde{\xi} & \xi & 0 \\ 0 & \eta & \hat{\xi} & \xi & 0 \arrow[from=2-1, to=2-2] \arrow["i", from=2-2, to=2-3] \arrow[from=2-4, to=2-5] \arrow[from=3-1, to=3-2] \arrow["\hat{j}"', from=3-3, to=3-4] \arrow[from=3-4, to=3-5] \arrow["f"', from=2-3, to=3-3] \arrow["id"', from=2-2, to=3-2] \arrow["id", from=2-4, to=3-4] \arrow["\hat{i}"', from=3-2, to=3-3] \arrow["j", from=2-3, to=2-4] \end{tikzcd}$$ Note that any extension $\tilde{\xi}$ of $\xi$ by $\eta$ is isomorphic to $\xi \oplus \eta$ as vector bundles. Next, we show that any given extension ${Ext}_{\tilde{\xi}}: 0 \to \eta \xrightarrow[]{i} \tilde{\xi} \xrightarrow[]{j} \xi \to 0$ induces a representation of the Lie-Yamaguti algebra bundle $\xi$ on the vector bundle $\eta.$ It turns out that equivalent extensions induce the same representation on $\eta$. 
Let ${Ext}_{\tilde{\xi}}: 0 \to \eta \xrightarrow[]{i} \tilde{\xi} \xrightarrow[]{j} \xi \to 0$ be a given extension of $\xi$ by $\eta$, that is, $$\begin{tikzcd} 0 & \eta & {\tilde{\xi}} & \xi & 0 \arrow[from=1-1, to=1-2] \arrow["i", from=1-2, to=1-3] \arrow["j", from=1-3, to=1-4] \arrow[from=1-4, to=1-5] \end{tikzcd}$$ and $\sigma:\xi \to \tilde{\xi}$ be a section of $j:\tilde{\xi} \to \xi$. Let us denote the associated $2$-field and $3$-field of brackets of the Lie-Yamaguti algebra bundle $\tilde{\xi}$ by $$m\to [~,~]^{\sim}_m~~\mbox{and}~~m \to \{~, ~,~\}^{\sim}_m,~ m\in M,$$ respectively. Define vector bundle morphisms $\rho:\xi \to \textrm{End}(\eta)$ and $D,\theta: \xi \otimes \xi \to \textrm{End}(\eta)$ fibre-wise as follows. Let $m\in M.$ For any $a,b \in \xi_m$ and $v \in \eta_m$ $$\begin{aligned} \rho_m(a)(v) &:= [\sigma_m(a),v]^{\sim}_m \label{rep1}\\ D_m(a,b)(v) &:= \{\sigma_m(a),\sigma_m(b),v\}^{\sim}_m \label{rep2}\\ \theta_m(a,b)(v) &:= \{v, \sigma_m(a),\sigma_m(b)\}^{\sim}_m. \label{rep3}\end{aligned}$$ **Proposition 55**. *The above data yield a representation $(\eta;~\rho,~D,~ \theta)$ of $\xi$. Furthermore,* 1. *The definition of $\rho, D$, and $\theta$ does not depend on the choice of the section $\sigma$, that is, given any two sections of $j$, say $\sigma^1$ and $\sigma^2$, we have $$[\sigma^1_m(a),v]^{\sim}_m = [\sigma^2_m(a),v]^{\sim}_m ~,~~\{\sigma^1_m(a),\sigma^1_m(b),v\}^{\sim}_m = \{\sigma^2_m(a),\sigma^2_m(b),v\}^{\sim}_m ~,$$ $$\text{and}~~ \{v,\sigma^1_m(a),\sigma^1_m(b)\}^{\sim}_m = \{v,\sigma^2_m(a),\sigma^2_m(b)\}^{\sim}_m.$$* 2. *Equivalent extensions induce the same representation on $\eta$, that is, given any two equivalent extensions, say ${Ext}_{\tilde{\xi}}$ and ${Ext}_{\hat{\xi}}$ with induced representations $(\eta;~\rho,~D,~ \theta)$ and $(\eta;~\rho^\prime,~D^\prime,~ \theta^\prime)$ respectively. 
Then $\rho = \rho^\prime,~ D=D^\prime,~\theta = \theta^\prime$.* *Proof.* Let $m \in M,$ $a,b,c,d \in \xi_m$ and $v \in \eta_m.$ Let $\sigma:\xi \to \tilde{\xi}$ be a given section of $j:\tilde{\xi} \to \xi.$ Then, we have the following equalities. From the condition ([\[LY3\]](#LY3){reference-type="ref" reference="LY3"}) of $\tilde{\xi}_m$ we get $$\sum_{\circlearrowleft(\sigma_m (a),\sigma_m (b),v)} [[\sigma_m(a),\sigma_m(b)]^{\sim},v]^{\sim} + \sum_{\circlearrowleft(\sigma_m (a),\sigma_m (b),v)} \{\sigma_m(a),\sigma_m(b),v\}^{\sim} = 0$$ which reduces to ([\[RLYB1\]](#RLYB1){reference-type="ref" reference="RLYB1"}): $$D_m(a,b) + \theta_m(a,b) - \theta_m(b,a) = [\rho_m(a),\rho_m(b)] - \rho_m([a,b]_m).$$ By ([\[LY5\]](#LY5){reference-type="ref" reference="LY5"}) of $\tilde{\xi}$ we get $$\begin{aligned} & \{\sigma_m(a),v,[\sigma_m(b),\sigma_m(c)]_m^{\sim}\}_m^{\sim} \\ & = [\{\sigma_m(a),v,\sigma_m(b)\}_m^{\sim},\sigma_m(c)]_m^{\sim} + [\sigma_m(b),\{\sigma_m(a),v,\sigma_m(c)\}_m^{\sim}]_m^{\sim}\end{aligned}$$ which reduces to ([\[RLYB2\]](#RLYB2){reference-type="ref" reference="RLYB2"}): $$\theta_m(a,[b,c]_m) = \rho_m(b)\theta_m(a,c) - \rho_m(c)\theta_m(a,b).$$ By ([\[LY4\]](#LY4){reference-type="ref" reference="LY4"}) of $\tilde{\xi}_m$ we get $$\sum_{\circlearrowleft (\sigma_m (a),\sigma_m (b),v)} \{[\sigma_m(a),\sigma_m(b)]_m^{\sim},v,\sigma_m(c)\}_m^{\sim} = 0$$ which reduces to ([\[RLYB3\]](#RLYB3){reference-type="ref" reference="RLYB3"}): $$\theta_m([a,b]_m,c) = \theta_m(a,c) \rho_m(b) - \theta_m(b,c)\rho_m(a).$$ By ([\[LY6\]](#LY6){reference-type="ref" reference="LY6"}) of $\tilde{\xi}_m$ we get $$\begin{aligned} & \{v,\sigma_m(a),\{\sigma_m(b),\sigma_m(c),\sigma_m(d)\}_m^{\sim}\}_m^{\sim} \\ & =\{\{v,\sigma_m(a),\sigma_m(b)\}_m^{\sim},\sigma_m(c),\sigma_m(d)\}_m^{\sim} + \{\sigma_m(b),\{v,\sigma_m(a),\sigma_m(c)\}_m^{\sim},\sigma_m(d)\}_m^{\sim} \\ & + \{\sigma_m(b),\sigma_m(c),\{v,\sigma_m(a),\sigma_m(d)\}_m^{\sim}\}_m^{\sim} \end{aligned}$$ which reduces to 
([\[RLYB4\]](#RLYB4){reference-type="ref" reference="RLYB4"}): $$\theta_m(a,\{b,c,d\}_m) = \theta_m(c,d)\theta_m(a,b) - \theta_m(b,d)\theta_m(a,c) + D_m(b,c)\theta_m(a,d).$$ By ([\[LY5\]](#LY5){reference-type="ref" reference="LY5"}) of $\tilde{\xi}_m$ we get $$\begin{aligned} & \{\sigma_m(a),\sigma_m(b),[\sigma_m(c),v]_m^{\sim}\}_m^{\sim}\\ & = [\{\sigma_m(a),\sigma_m(b),\sigma_m(c)\}_m^{\sim},v]_m^{\sim} + [\sigma_m(c),\{\sigma_m(a),\sigma_m(b),v\}_m^{\sim}]_m^{\sim}\end{aligned}$$ which reduces to ([\[RLYB5\]](#RLYB5){reference-type="ref" reference="RLYB5"}): $$D_m(a,b)\rho_m(c) = \rho_m(c)D_m(a,b) + \rho_m(\{a,b,c\}_m).$$ By ([\[LY6\]](#LY6){reference-type="ref" reference="LY6"}) of $\tilde{\xi}_m$ we get $$\begin{aligned} & \{\sigma_m(a),\sigma_m(b),\{v,\sigma_m(c),\sigma_m(d)\}_m^{\sim}\}_m^{\sim} \\ & = \{\{\sigma_m(a),\sigma_m(b),v\}_m^{\sim},\sigma_m(c),\sigma_m(d)\}_m^{\sim} + \{v,\{\sigma_m(a),\sigma_m(b),\sigma_m(c)\}_m^{\sim},\sigma_m(d)\}_m^{\sim} \\ & + \{v,\sigma_m(c),\{\sigma_m(a),\sigma_m(b),\sigma_m(d)\}_m^{\sim}\}_m^{\sim} \end{aligned}$$ which reduces to ([\[RLYB6\]](#RLYB6){reference-type="ref" reference="RLYB6"}): $$D_m(a,b)\theta_m(c,d) = \theta_m(c,d)D_m(a,b) + \theta_m(\{a,b,c\}_m,d) + \theta_m(c,\{a,b,d\}_m).$$ Therefore, $(\eta; \rho,~D,~\theta)$ is a representation of $\xi$. Hence any extension of $\xi$ by $\eta$ gives a representation of $\xi$ on $\eta$. Next we show that the definition of $\theta$ is independent of the choice of the section. The proofs that the definitions of $\rho$ and $D$ do not depend on the choice of the section $\sigma$ are similar, hence, we omit the details. Let $\sigma,~ \sigma^\prime:\xi \to \tilde{\xi}$ be two sections of $j:\tilde{\xi} \to \xi.$ Let $m \in M.$ Then, for any $a \in \xi_m$ $$j(\sigma_m(a) - \sigma_m^\prime(a)) = 0.$$ Therefore, $\sigma_m(a) - \sigma_m^\prime(a) \in \operatorname*{Ker}(j) = \eta_m$, so that $\sigma_m(a) = \sigma_m^\prime(a) + v_a$ for some $v_a \in \eta_m$. 
Since we are considering an abelian extension, for any $v\in \eta_m,$ $a,~b \in \xi_m$ we have $$\begin{aligned} \{v, \sigma_m(a),\sigma_m(b)\}_m^{\sim} &= \{v,\sigma_m^\prime (a)+v_a,\sigma_m^\prime (b)+v_b\}_m^{\sim} \\ &= \{v,\sigma_m^\prime (a),\sigma_m^\prime (b)+v_b\}_m^{\sim} + \{v,v_a,\sigma_m^\prime (b)+v_b\}_m^{\sim} \\ &= \{v,\sigma_m^\prime (a),\sigma_m^\prime (b)\}_m^{\sim} +\{v,\sigma_m^\prime(a),v_b\}_m^{\sim}\\ &= \{v,\sigma_m^\prime (a),\sigma_m^\prime (b)\}_m^{\sim}. \end{aligned}$$ Finally, we show that two equivalent extensions of $\xi$ by $\eta$ induce the same representation. Suppose that ${Ext}_{\tilde{\xi}}$ and ${Ext}_{\hat{\xi}}$ are two equivalent extensions of $\xi.$ Let us denote the associated $2$-field and $3$-field of brackets of the Lie-Yamaguti algebra bundle $\hat{\xi}$ by $$m\to [~,~]^{\wedge}_m~~\mbox{and}~~m \to \{~, ~,~\}^{\wedge}_m,~ m\in M,$$ respectively. Let $f: \tilde{\xi} \to \hat{\xi}$ be a Lie-Yamaguti algebra bundle isomorphism satisfying $f \circ i = \hat{i}$ and $\hat{j} \circ f = j.$ Thus, the following diagram is commutative. $$\begin{tikzcd} && {} \\ 0 & \eta & \tilde{\xi} & \xi & 0 \\ 0 & \eta & \hat{\xi} & \xi & 0 \arrow[from=2-1, to=2-2] \arrow["i", from=2-2, to=2-3] \arrow[from=2-4, to=2-5] \arrow[from=3-1, to=3-2] \arrow["\hat{j}"', from=3-3, to=3-4] \arrow[from=3-4, to=3-5] \arrow["f"', from=2-3, to=3-3] \arrow["id"', from=2-2, to=3-2] \arrow["id", from=2-4, to=3-4] \arrow["\hat{i}"', from=3-2, to=3-3] \arrow["j", from=2-3, to=2-4] \end{tikzcd}$$ Let $\sigma: \xi \to \tilde{\xi}$ and $\sigma^\prime: \xi \to \hat{\xi}$ be sections of $j$ and $\hat{j}$ respectively. 
Then, for any $a \in \xi_m,~ m\in M$ we have $$\hat{j} \circ f (\sigma_m(a)) = j\circ (\sigma_m (a)) = a = \hat{j}\circ (\sigma_m^\prime (a))$$ $$\Rightarrow \hat{j}(f(\sigma_m(a))-\sigma_m^\prime (a)) = 0.$$ This implies $f(\sigma_m(a))-\sigma_m^\prime (a) \in \operatorname*{Ker}(\hat{j}_m) = \eta_m$, that is, $f(\sigma_m(a)) = \sigma_m^\prime (a) + v_a$ for some $v_a \in \eta_m$. Thus, we have for any $a,b \in \xi_m$ and $v \in \eta_m$ $$f\big(\{v,\sigma_m (a),\sigma_m(b)\}_m^{\sim}\big) = \{f(v),f(\sigma_m (a)),f(\sigma_m (b))\}_m^{\wedge} = \{v,\sigma_m^\prime (a),\sigma_m^\prime(b)\}_m^{\wedge}.$$ Note that $f(v)=v$ follows from the commutativity of the first square in the diagram. Therefore, equivalent extensions induce the same $\theta$. Similarly one can show that equivalent extensions induce the same $D$ and $\rho$. ◻ So far, we have seen that any extension ${Ext}_{\tilde{\xi}}$ $$\begin{tikzcd} 0 & \eta & {\tilde{\xi}} & \xi & 0 \arrow[from=1-1, to=1-2] \arrow["i", from=1-2, to=1-3] \arrow["j", from=1-3, to=1-4] \arrow[from=1-4, to=1-5] \end{tikzcd}$$ of $\xi$ by $\eta$ induces a representation $(\eta; \rho, D, \theta)$ of $\xi$, where the vector bundle maps $\rho, D$, and $\theta$ are defined by ([\[rep1\]](#rep1){reference-type="ref" reference="rep1"})- ([\[rep3\]](#rep3){reference-type="ref" reference="rep3"}) in terms of a section $\sigma:\xi \to \tilde{\xi}$ of $j:\tilde{\xi} \to \xi$. 
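The identities ([\[RLYB1\]](#RLYB1){reference-type="ref" reference="RLYB1"})--([\[RLYB6\]](#RLYB6){reference-type="ref" reference="RLYB6"}) obtained in the proof of Proposition 55 can be spot-checked numerically for a concrete representation. The following sketch is our own illustration (the example and all names are assumptions, not from the text): it works in a single fibre, taking the Lie-Yamaguti algebra $(\mathbb{R}^3,\times)$ with $\{a,b,c\} := [[a,b],c]$ and the adjoint-type operators $\rho, D, \theta$ analogous to ([\[rep1\]](#rep1){reference-type="ref" reference="rep1"})--([\[rep3\]](#rep3){reference-type="ref" reference="rep3"}).

```python
import numpy as np

rng = np.random.default_rng(1)

def br(a, b): return np.cross(a, b)                  # [a,b]
def tr(a, b, c): return np.cross(np.cross(a, b), c)  # {a,b,c} = [[a,b],c]

def hat(a):
    # matrix of the linear map v -> a x v
    return np.array([[0.0, -a[2], a[1]],
                     [a[2], 0.0, -a[0]],
                     [-a[1], a[0], 0.0]])

rho = hat                                   # rho(a)v = [a,v]
def Dm(a, b): return hat(br(a, b))          # D(a,b)v = {a,b,v}
def th(a, b):                               # theta(a,b)v = {v,a,b}
    return np.array([tr(e, a, b) for e in np.eye(3)]).T

a, b, c, d = (rng.standard_normal(3) for _ in range(4))

# (RLYB1) D(a,b) + theta(a,b) - theta(b,a) = [rho(a),rho(b)] - rho([a,b])
assert np.allclose(Dm(a, b) + th(a, b) - th(b, a),
                   rho(a) @ rho(b) - rho(b) @ rho(a) - rho(br(a, b)))
# (RLYB2) theta(a,[b,c]) = rho(b) theta(a,c) - rho(c) theta(a,b)
assert np.allclose(th(a, br(b, c)), rho(b) @ th(a, c) - rho(c) @ th(a, b))
# (RLYB3) theta([a,b],c) = theta(a,c) rho(b) - theta(b,c) rho(a)
assert np.allclose(th(br(a, b), c), th(a, c) @ rho(b) - th(b, c) @ rho(a))
# (RLYB4) theta(a,{b,c,d}) = theta(c,d)theta(a,b) - theta(b,d)theta(a,c)
#                            + D(b,c)theta(a,d)
assert np.allclose(th(a, tr(b, c, d)),
                   th(c, d) @ th(a, b) - th(b, d) @ th(a, c) + Dm(b, c) @ th(a, d))
# (RLYB5) D(a,b) rho(c) = rho(c) D(a,b) + rho({a,b,c})
assert np.allclose(Dm(a, b) @ rho(c), rho(c) @ Dm(a, b) + rho(tr(a, b, c)))
# (RLYB6) D(a,b) theta(c,d) = theta(c,d) D(a,b) + theta({a,b,c},d) + theta(c,{a,b,d})
assert np.allclose(Dm(a, b) @ th(c, d),
                   th(c, d) @ Dm(a, b) + th(tr(a, b, c), d) + th(c, tr(a, b, d)))
```

Each operator is realized as a $3\times 3$ matrix, so the identities become matrix equations checked on random inputs.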
Therefore, as discussed in the previous section, we have the following cochain complex of the Lie-Yamaguti algebra bundle $\xi$ with coefficients in the induced representation $(\eta; \rho, D, \theta)$ of $\xi.$ $$\begin{tikzcd} {C^0(\xi;\eta)} & {C^{(2, 3)}(\xi;\eta)} & {C^{(4, 5)}(\xi;\eta)} & \cdots \\ & {C^{(3, 4)}(\xi;\eta)} \arrow["\delta", from=1-2, to=1-3] \arrow["\delta", from=1-3, to=1-4] \arrow["{\delta^*}", from=1-2, to=2-2] \arrow["{\delta}", from=1-1, to=1-2] \end{tikzcd}$$ Our next goal is to attach a $(2,3)$-cocycle of the above cochain complex to ${Ext}_{\tilde{\xi}}.$ Fix a section $\sigma:\xi \to \tilde{\xi}$ of $j:\tilde{\xi} \to \xi$. Define two maps $f:\xi \otimes \xi \to \eta$ and $g:\xi \otimes \xi \otimes \xi \to \eta$ in the following way. Let $m \in M.$ Denote by $f_m$ and $g_m$ the resulting bilinear and trilinear maps obtained by restricting $f$ and $g$ to the fibres $(\xi \otimes \xi)_m$ and $(\xi \otimes \xi \otimes \xi)_m$ respectively. For all $a_1,~a_2,~a_3 \in \xi_m,$ define $$\begin{aligned} f_m(a_1, a_2) &:= [\sigma_m(a_1),\sigma_m(a_2)]_m^{\sim} - \sigma_m([a_1,a_2]_m) \\ g_m(a_1, a_2, a_3) &:= \{\sigma_m(a_1),\sigma_m(a_2), \sigma_m (a_3)\}_m^{\sim} - \sigma_m(\{a_1, a_2, a_3\}_m).\end{aligned}$$ Note that $(f,g) \in C^{(2, 3)}(\xi;\eta).$ **Proposition 56**. 
*For any given abelian extension ${Ext}_{\tilde{\xi}}$ of $\xi$ by $\eta$, the cochain $(f,g) \in C^{(2, 3)}(\xi; \eta)$ as defined above is a $(2,3)$-cocycle.* *Proof.* To show that $(f,g)$ is a $(2,3)$-cocycle, we need to show $$\delta(f,g)= 0 ~~\text{and}~~ \delta^*(f,g) = 0,$$ that is, $$\delta_If = 0,~ \delta_{II}g = 0 ~~\text{and}~~ \delta_I^*f=0,~ \delta_{II}^*g=0.$$ Recall that the representation induced by the given extension is given by the vector bundle morphisms $\rho, D,$ and $\theta,$ where for $a,b \in \xi_m$ and $v \in \eta_m$, $m\in M,$ $$\begin{aligned} \rho_m(a)(v) &= [\sigma_m(a),v]_m^{\sim} \\ D_m(a,b)(v) &= \{\sigma_m(a),\sigma_m(b),v\}_m^{\sim} \\ \theta_m(a,b)(v) &= \{v, \sigma_m (a),\sigma_m (b)\}_m^{\sim}. \end{aligned}$$ Let $a_i\in \xi_m,$ $1\leq i \leq 5.$ By the definitions of $\delta$ and $\delta^*$ we get $$\begin{aligned} & (\delta_I)_m f_m(a_1, a_2,a_3,a_4) \\ & = - \rho_m(a_3)g_m(a_1,a_2,a_4) + \rho_m(a_4)g_m(a_1,a_2,a_3) + g_m(a_1, a_2, [a_3, a_4]_m) \\ & + D_m(a_1,a_2) f_m(a_3,a_4) - f_m(\{a_1,a_2,a_3\}_m ,a_4) - f_m(a_3,\{a_1,a_2,a_4\}_m) \\ &= 0,\end{aligned}$$ where we have used the definition of the representation as given above and ([\[LY5\]](#LY5){reference-type="ref" reference="LY5"}). 
Similarly, we obtain using ([\[LY6\]](#LY6){reference-type="ref" reference="LY6"}) $$\begin{aligned} & (\delta_{II})_mg_m(a_1, a_2,a_3,a_4,a_5) \\ & = - \theta_m(a_4,a_5)g_m(a_1,a_2,a_3) + \theta_m(a_3,a_5)g_m(a_1,a_2,a_4)\\ & + D_m(a_1,a_2)g_m(a_3,a_4,a_5) - D_m(a_3,a_4)g_m(a_1,a_2,a_5)\\ & - g_m(\{a_1,a_2,a_3\}_m,a_4,a_5) - g_m(a_3,\{a_1,a_2,a_4\}_m,a_5) \\ & - g_m(a_3,a_4,\{a_1,a_2,a_5\}_m) + g_m(a_1,a_2,\{a_3,a_4,a_5\}_m) \\ &= 0.\end{aligned}$$ Moreover, from the above definition of the representation and ([\[LY3\]](#LY3){reference-type="ref" reference="LY3"}) we get $$\begin{aligned} & (\delta_I^*)_mf_m (a_1, a_2, a_3) \\ & = - \sum_{\circlearrowleft(a_1,a_2,a_3)}\rho_m(a_1)f_m(a_2,a_3) + \sum_{\circlearrowleft(a_1,a_2,a_3)}f_m([a_1,a_2]_m,a_3)\\ & + \sum_{\circlearrowleft(a_1,a_2,a_3)} g_m(a_1,a_2,a_3) \\ & = 0,\end{aligned}$$ and from ([\[LY4\]](#LY4){reference-type="ref" reference="LY4"}) we obtain $$\begin{aligned} & (\delta_{II}^*)_m 
g_m(a_1, a_2,a_3,a_4) \\ & = \theta_m(a_1,a_4)f_m(a_2,a_3) + \theta_m(a_2,a_4)f_m(a_3,a_1) + \theta_m(a_3,a_4)f_m(a_1,a_2) \\ & + g_m([a_1,a_2]_m,a_3,a_4) + g_m([a_2,a_3]_m,a_1,a_4) + g_m([a_3,a_1]_m,a_2,a_4) \\ & = 0.\end{aligned}$$ Thus, $(f,g)\in C^{(2,3)}(\xi;\eta)$ is a $(2,3)$-cocycle. ◻ By a routine calculation we obtain the following result. **Corollary 57**. *If $\sigma, \sigma^\prime : \xi \to \tilde{\xi}$ are any two chosen sections of $j:\tilde{\xi} \to \xi$ and $(f, g),~(f^\prime,g^\prime)$ are the corresponding cocycles as obtained in Proposition [Proposition 56](#cocycle-extension){reference-type="ref" reference="cocycle-extension"}, then $(f,g)$ and $(f^\prime, g^\prime)$ are cohomologous. Hence, the extension ${Ext}_{\tilde{\xi}}$ of $\xi$ by $\eta$ determines uniquely an element of $H^{(2, 3)}(\xi;\eta).$* On the other hand, given a Lie-Yamaguti algebra bundle $\xi$ equipped with a representation $(\eta; \rho, D, \theta),$ any $(2, 3)$-cocycle in $Z^{(2,3)}(\xi; \eta)$ determines an abelian extension of $\xi$ by $\eta$ which is unique up to equivalence. Let $\xi$ be a given Lie-Yamaguti algebra bundle over $M$ and $(\eta; \rho, D, \theta)$ be a given representation of $\xi.$ Let $(f, g) \in Z^{(2,3)}(\xi; \eta).$ Then, we have the following result. **Lemma 58**. *Given $(f,g) \in Z^{(2, 3)}(\xi; \eta),$ the vector bundle $\tilde{\xi} = \xi \oplus \eta$ becomes a Lie-Yamaguti algebra bundle, where the associated $2$-field and $3$-field $$m \mapsto [~,~]_m^\sim,~~ m\mapsto \{~, ~, ~\}_m^\sim,~~ m\in M$$ are given by $$\begin{aligned} & [a_1+w_1,a_2+w_2]_m^{\sim} \\ & : = [a_1,a_2]_m + f_m(a_1,a_2) + \rho_m(a_1)(w_2) - \rho_m(a_2)(w_1) \\ &\{a_1+w_1,a_2+w_2,a_3+w_3\}_m^{\sim}\\ & := \{a_1,a_2,a_3\}_m + g_m(a_1,a_2,a_3) + D_m(a_1,a_2)(w_3) \\ & - \theta_m(a_1,a_3)(w_2) + \theta_m(a_2,a_3)(w_1) \end{aligned}$$ where $a_1,a_2,a_3 \in \xi_m$ and $w_1,w_2,w_3 \in \eta_m$. 
It is convenient to denote this Lie-Yamaguti algebra bundle by $\xi\oplus_{(f,g)}\eta$ to emphasize that it is induced by the given cocycle.* *Proof.* Clearly the assignments $$m \mapsto [~,~]_m^\sim,~~ m\mapsto \{~, ~, ~\}_m^\sim,~~ m\in M$$ as defined in the statement are smooth. So, it is enough to show that for any $m \in M,~ \tilde{\xi}_m$ is a Lie-Yamaguti algebra. Let $m \in M$. It is easy to see that ([\[LY1\]](#LY1){reference-type="ref" reference="LY1"}) and ([\[LY2\]](#LY2){reference-type="ref" reference="LY2"}) hold for $[~,~]_m^{\sim}$ and $\{~,~,~\}_m^{\sim}$ defined above. To verify ([\[LY6\]](#LY6){reference-type="ref" reference="LY6"}) we proceed as follows. $$\begin{aligned} \big\{a_1&+w_1,a_2+w_2,\{b_1+v_1,b_2+v_2,b_3+v_3\}_m^{\sim}\big\}_m^{\sim} \\ &= \big\{a_1+w_1,~ a_2+w_2,~ \{b_1,b_2,b_3\}_m + g_m(b_1,b_2,b_3) \\ &~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +D_m(b_1,b_2)(v_3) - \theta_m(b_1,b_3)(v_2) + \theta_m(b_2,b_3)(v_1)\big\}_m^{\sim} \\ &= \{a_1,a_2,\{b_1,b_2,b_3\}_m\}_m + g_m(a_1,a_2,\{b_1,b_2,b_3\}_m) + D_m(a_1,a_2)g_m(b_1,b_2,b_3) \\ &~~~~~~ + D_m(a_1,a_2) D_m(b_1,b_2)(v_3) - D_m(a_1,a_2) \theta_m(b_1,b_3)(v_2) + D_m(a_1,a_2) \theta_m(b_2,b_3)(v_1) \\ &~~~~~~ - \theta_m(a_1,\{b_1,b_2,b_3\}_m)(w_2) + \theta_m(a_2,\{b_1,b_2,b_3\}_m)(w_1) \end{aligned}$$ $$\begin{aligned} \big\{\{a_1&+w_1,a_2+w_2,b_1+v_1\}_m^{\sim},b_2+v_2,b_3+v_3\big\}_m^{\sim} \\ &= \big\{\{a_1,a_2,b_1\}_m+g_m(a_1,a_2,b_1) + D_m(a_1,a_2)(v_1) \\ &~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - \theta_m(a_1,b_1)(w_2) + \theta_m(a_2,b_1)(w_1),~ b_2+v_2,~b_3+v_3\big\}_m^{\sim} \\ &= \{\{a_1,a_2,b_1\}_m,b_2,b_3\}_m + g_m(\{a_1,a_2,b_1\}_m,b_2,b_3) + D_m(\{a_1,a_2,b_1\}_m,b_2)(v_3) \\ &~~~~~ - \theta_m(\{a_1,a_2,b_1\}_m,b_3)(v_2) + \theta_m(b_2,b_3)g_m(a_1,a_2,b_1) + \theta_m(b_2,b_3)D_m(a_1,a_2)(v_1) \\ &~~~~~ - \theta_m(b_2,b_3)\theta_m(a_1,b_1)(w_2) + \theta_m(b_2,b_3)\theta_m(a_2,b_1)(w_1) \end{aligned}$$ $$\begin{aligned} 
\big\{b_1&+v_1,\{a_1+w_1,a_2+w_2,b_2+v_2\}_m^{\sim},b_3+v_3\big\}_m^{\sim} \\ &= \big\{b_1+v_1,~\{a_1,a_2,b_2\}_m+g_m(a_1,a_2,b_2) \\ &~~~~~~~~~~~~~~~~~~~~~~~~~~~ + D_m(a_1,a_2)(v_2) - \theta_m(a_1,b_2)(w_2) + \theta_m(a_2,b_2)(w_1),~ b_3+v_3\big\}_m^{\sim} \\ &= \{b_1,\{a_1,a_2,b_2\}_m,b_3\}_m + g_m(b_1,\{a_1,a_2,b_2\}_m,b_3) + D_m(b_1,\{a_1,a_2,b_2\}_m)(v_3) \\ &~~~~~ + \theta_m(\{a_1,a_2,b_2\}_m,b_3)(v_1) - \theta_m(b_1,b_3)g_m(a_1,a_2,b_2) - \theta_m(b_1,b_3)D_m(a_1,a_2)(v_2) \\ &~~~~~ + \theta_m(b_1,b_3)\theta_m(a_1,b_2)(w_2) - \theta_m(b_1,b_3)\theta_m(a_2,b_2)(w_1) \end{aligned}$$ $$\begin{aligned} \big\{b_1&+v_1,b_2+v_2,\{a_1+w_1,a_2+w_2,b_3+v_3\}_m^{\sim}\big\}_m^{\sim} \\ &= \big\{b_1+v_1,~b_2+v_2,~\{a_1,a_2,b_3\}_m+g_m(a_1,a_2,b_3) \\ &~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + D_m(a_1,a_2)(v_3) - \theta_m(a_1,b_3)(w_2) + \theta_m(a_2,b_3)(w_1)\big\}_m^{\sim} \\ &= \{b_1,b_2,\{a_1,a_2,b_3\}_m\}_m + g_m(b_1,b_2,\{a_1,a_2,b_3\}_m) + D_m(b_1,b_2)g_m(a_1,a_2,b_3) \\ &~~~~~ + D_m(b_1,b_2)D_m(a_1,a_2)(v_3) - D_m(b_1,b_2) \theta_m(a_1,b_3)(w_2) + D_m(b_1,b_2) \theta_m(a_2,b_3)(w_1) \\ &~~~~~ - \theta_m(b_1,\{a_1,a_2,b_3\}_m)(v_2) + \theta_m(b_2,\{a_1,a_2,b_3\}_m)(v_1) \end{aligned}$$ Using ([\[RLYB6\]](#RLYB6){reference-type="ref" reference="RLYB6"}), ([\[RLYB4\]](#RLYB4){reference-type="ref" reference="RLYB4"}) and the definition of coboundary maps we can show $$\begin{aligned} \big\{a_1+w_1,&a_2+w_2,\{b_1+v_1,b_2+v_2,b_3+v_3\}_m^{\sim}\big\}_m^{\sim} \\ &= \big\{\{a_1+w_1,a_2+w_2,b_1+v_1\}_m^{\sim},b_2+v_2,b_3+v_3\big\}_m^{\sim} \\ &~~~~~~~~~~~~~~~~ + \big\{b_1+v_1,\{a_1+w_1,a_2+w_2,b_2+v_2\}_m^{\sim},b_3+v_3\big\}_m^{\sim} \\ &~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + \big\{b_1+v_1,b_2+v_2,\{a_1+w_1,a_2+w_2,b_3+v_3\}_m^{\sim}\big\}_m^{\sim} \end{aligned}$$ giving us ([\[LY6\]](#LY6){reference-type="ref" reference="LY6"}). 
The other relations ([\[LY3\]](#LY3){reference-type="ref" reference="LY3"}), ([\[LY4\]](#LY4){reference-type="ref" reference="LY4"}), and ([\[LY5\]](#LY5){reference-type="ref" reference="LY5"}) can be verified in the same way. Thus, $\xi \oplus_{(f,g)} \eta$ is a Lie-Yamaguti algebra bundle. ◻ Observe that the Lie-Yamaguti algebra brackets on the fibres of $\xi \oplus_{(f,g)} \eta$ make $\eta$ an abelian ideal in $\xi \oplus_{(f,g)} \eta$ and we have the following extension of $\xi$ by $\eta$: $$\begin{tikzcd} 0 & \eta & {\xi\oplus_{(f,g)}\eta} & \xi & 0 \arrow[from=1-1, to=1-2] \arrow["i", from=1-2, to=1-3] \arrow["j", from=1-3, to=1-4] \arrow[from=1-4, to=1-5] \end{tikzcd}$$ where $i$ is the inclusion map and $j$ is the projection map. Let $(h, k) \in Z^{(2,3)}(\xi; \eta)$ be another cocycle. Then, we have the following result. **Lemma 59**. *Two extensions $0 \to \eta \to \xi \oplus_{(f,g)} \eta \to \xi \to 0$ and $0 \to \eta \to \xi \oplus_{(h,k)} \eta \to \xi \to 0$ are equivalent if and only if $(f, g),~(h,k) \in Z^{(2, 3)}(\xi; \eta)$ are cohomologous.* *Proof.* Let the two extensions $0 \to \eta \xrightarrow[]{i} \xi \oplus_{(f,g)} \eta \xrightarrow[]{p} \xi \to 0$ and $0 \to \eta \xrightarrow[]{i} \xi \oplus_{(h,k)} \eta \xrightarrow[]{p} \xi \to 0$ be equivalent through a Lie-Yamaguti algebra bundle isomorphism $$\gamma:\xi \oplus_{(f,g)} \eta \to \xi \oplus_{(h,k)} \eta.$$ Then for each $m \in M$ we have the following equivalence of abelian extensions. 
$$\begin{tikzcd} 0 & {\eta_m} & {\xi_m \oplus_{(f_m,g_m)} \eta_m } & {\xi_m} & 0 \\ 0 & {\eta_m} & {\xi_m \oplus_{(h_m,k_m)} \eta_{m} } & {\xi_m} & 0 \arrow[from=1-1, to=1-2] \arrow[from=1-2, to=1-3] \arrow[from=1-3, to=1-4] \arrow[from=1-4, to=1-5] \arrow[from=2-1, to=2-2] \arrow[from=2-2, to=2-3] \arrow[from=2-3, to=2-4] \arrow[from=2-4, to=2-5] \arrow["id"', from=1-2, to=2-2] \arrow["{\gamma_m}"', from=1-3, to=2-3] \arrow["id", from=1-4, to=2-4] \end{tikzcd}$$ To show that $(f,g)$ and $(h,k)$ are cohomologous it is enough to show that for each $m \in M$, $(f_m,g_m)$ and $(h_m,k_m)$ are cohomologous, that is, $$(f_m,g_m) - (h_m,k_m) \in B^{(2,3)}(\xi_m;\eta_m).$$ We define a map $\lambda_m: \xi_m \to \eta_m$ by $\lambda_m(a) = \gamma_m(a) - a,$ from which one can show $$f_m-h_m = (\delta_{I})_m(\lambda_m) ~~\text{and}~~ g_m-k_m = (\delta_{II})_m(\lambda_m).$$ Conversely, assume that for each $m \in M,~ (f_m,g_m)$ and $(h_m,k_m)$ are in the same cohomology class, that is, $(f_m,g_m)-(h_m,k_m) = \delta_m(\lambda_m, \lambda_m)$. Then, $\gamma_m:\xi_m \oplus_{(f,g)}\eta_m \to \xi_m \oplus_{(h,k)} \eta_m$ defined by $$\gamma_m(a+v) = a + \lambda_m(a) + v$$ gives the required isomorphism. ◻ By summarizing the above observations we have the following theorem. **Theorem 60**. *To each equivalence class of abelian extensions of $\xi$ by $\eta$ there corresponds an element of $H^{(2, 3)}(\xi; \eta)$.\ Suppose $\xi$ is a given Lie-Yamaguti algebra bundle over $M$ equipped with a representation $(\eta; \rho,~D,\theta).$ To each cohomology class $[(f, g)] \in H^{(2, 3)}(\xi; \eta)$, there is an extension of $\xi$ by $\eta$ $$\begin{tikzcd} 0 & \eta & {\xi\oplus_{(f, g)}\eta} & \xi & 0 \arrow[from=1-1, to=1-2] \arrow["i", from=1-2, to=1-3] \arrow["j", from=1-3, to=1-4] \arrow[from=1-4, to=1-5] \end{tikzcd}$$ which is unique up to equivalence of extensions.* We conclude by addressing the following question. 
**Question: Does there exist a notion of Lie-Yamaguti algebroid which appears as the infinitesimal version of some groupoid with relevant structure?**  \ BFGM03 Hani Abdelwahab, Elisabete Barreiro, Antonio J. Calder$\acute{o}$n, and Amir Fern$\acute{a}$ndez Ouaridi, *The classification of nilpotent Lie-Yamaguti algebras*, *Linear Algebra and its Applications*, Vol. 654 (2022), 339-378. Pilar Benito, Murray Bremner and Sara Madariaga, *Symmetric matrices, orthogonal Lie algebras and Lie-Yamaguti algebras*, *Linear and Multilinear Algebra*, Vol. 63 (2015), 1257-1281. $\acute{E}$. Cartan and J. A. Schouten, *On the geometry of the group manifold of simple and semi-simple groups*, *Proc. Acad. Amsterdam*, 29 (1926), 803-815. C. Chevalley and S. Eilenberg, *Cohomology theory of Lie groups and Lie algebras*, *Trans. Amer. Math. Soc.*, Vol. 63 (1948), 85-124. A. Douady and M. Lazard, *Espaces fibr$\acute{e}$s en alg$\grave{e}$bres de Lie et en groupes*, *Invent. Math.*, Vol. 1 (1966), 133-151. R. J. Duffin, *On the characteristic matrices of covariant systems*, *Physical Review*, Vol. 54 (1938), Page 1114. D. K. Harrison, *Commutative Algebras and Cohomology*, *Trans. Amer. Math. Soc.*, Vol. 104 (1962), 191-204. N. Jacobson, *Lie and Jordan triple systems*, *Amer. J. Math.*, Vol. 71 (1949), 149-170. P. Jordan, J. v. Neumann and E. Wigner, *On an algebraic generalization of the quantum mechanical formalism*, *Annals of Mathematics,* Vol. 35 (1934), 29-64. N. Kemmer, *Particle aspect of meson theory*, *Proceedings of the Royal Society,* Vol. 173 (1939), 91-116. N. Kemmer, *The algebra of meson matrices*, *Proceedings of the Cambridge Philosophical Society,* Vol. 39 (1943), 189-196. M. Kikkawa, *On local loops in affine manifolds*, *J. Sci. Hiroshima Univ. A-I,* Vol. 28 (1964), 199-207. M. Kikkawa, *Geometry of Homogeneous Lie loops*, *Hiroshima Math. J.*, Vol. 5 (1975), 141-179. M. Kikkawa, *Geometry of Homogeneous Left Lie Loops and Tangent Lie Triple Algebras*, *Mem. 
Fac. Sci. Eng. Shimane Univ. Series B: Mathematical Science*, Vol. 32 (1999), 57--68. M. K. Kinyon and A. Weinstein, *Leibniz algebras, Courant algebroids, and multiplications on reductive homogeneous spaces*, *American J. Math.*, Vol. 123 (2001), 525-550. J. L. Loday, *Une version non commutative des alg$\grave{e}$bres de Lie: les alg$\grave{e}$bres de Leibniz*, *Enseign. Math.* Vol. 39 (1993), 269-293. K. C. H. Mackenzie, *General theory of Lie groupoids and Lie algebroids*, *London Mathematical Society Lecture Note Series, 213, Cambridge University Press, Cambridge*, 2005. Zbl 1078.58011 MR 2157566. K. Nomizu, *Invariant affine connections on homogeneous spaces*, *Amer. J. Math* Vol. 76 (1954), 33-65. Norman Steenrod, *The Topology of Fibre Bundles*, *Princeton University Press,* 1951. K. Yamaguti, *On the Lie triple system and its generalization*, *J. Sci. Hiroshima Univ.*, Ser. A Vol. 21 (1958), 155-160. K. Yamaguti, *On cohomology groups of General Lie Triples systems*, *Kumamoto J. Sci., Series A*, Vol. 8, No. 4 (1969), 135-146.
arxiv_math
{ "id": "2309.05508", "title": "Lie-Yamaguti Algebra Bundle", "authors": "Saikat Goswami and Goutam Mukherjee", "categories": "math.RA math.DG", "license": "http://creativecommons.org/publicdomain/zero/1.0/" }
arxiv_math
{ "id": "2309.08383", "title": "Dynamical Analysis of an Allelopathic Phytoplankton Model with Fear\n Effect", "authors": "Shangming Chen, Fengde Chen, Vaibhava Srivastava and Rana D. Parshad", "categories": "math.DS q-bio.PE", "license": "http://creativecommons.org/licenses/by/4.0/" }
--- abstract: | The classical Maclaurin inequality asserts that the elementary symmetric means $$s_k(y) \coloneqq \frac{1}{\binom{n}{k}} \sum_{1 \leq i_1 < \dots < i_k \leq n} y_{i_1} \dots y_{i_k}$$ obey the inequality $s_\ell(y)^{1/\ell} \leq s_k(y)^{1/k}$ whenever $1 \leq k \leq \ell \leq n$ and $y = (y_1,\dots,y_n)$ consists of non-negative reals. We establish a variant $$|s_\ell(y)|^{\frac{1}{\ell}} \ll \frac{\ell^{1/2}}{k^{1/2}} \max (|s_k(y)|^{\frac{1}{k}}, |s_{k+1}(y)|^{\frac{1}{k+1}})$$ of this inequality in which the $y_i$ are permitted to be negative. In this regime the inequality is sharp up to constants. Such an inequality was previously known without the $k^{1/2}$ factor in the denominator. address: UCLA Department of Mathematics, Los Angeles, CA 90095-1555. author: - Terence Tao title: A Maclaurin type inequality --- # Introduction Given $n$ real numbers $y = (y_1,\dots,y_n) \in \mathbb{R}^n$ and $0 \leq k \leq n$, let $s_k(y)$ denote the elementary symmetric means $$s_k(y) \coloneqq \frac{1}{\binom{n}{k}} \sum_{1 \leq i_1 < \dots < i_k \leq n} y_{i_1} \dots y_{i_k}$$ (thus for instance $s_0(y)=1$). These means arise as normalized coefficients of the degree $n$ polynomial with roots $y_1,\dots,y_n$: $$\label{poly} \prod_{i=1}^n (z-y_i) = \sum_{k=0}^n (-1)^k \binom{n}{k} s_k(y) z^{n-k}.$$ We call an $n+1$-tuple $(s_0,\dots,s_n)$ of real numbers *attainable* if it is of the form $(s_0(y),\dots,s_n(y))$ for some $y$, or equivalently if the polynomial $\sum_{k=0}^n (-1)^k \binom{n}{k} s_k z^{n-k}$ is monic with all roots real. For example, the attainable $3$-tuples take the form $$(s_0,s_1,s_2) = \left(1, \frac{y_1+y_2}{2}, y_1 y_2\right)$$ and range in the set $\{ (s_0,s_1,s_2): s_0=1, s_2 \leq s_1^2 \}$. More generally, for any $n$ the set of attainable $n+1$-tuples is some closed semi-algebraic set [@sylvester]; see [@niculescu §3] for an explicit description in the cases $n=3,4$, which is related to the discriminant of a degree $n$ polynomial.
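Though not part of the paper, the identity above suggests a simple way to compute the $s_k$ in practice. The following sketch (ours; it assumes NumPy is available) reads the normalized means off the coefficients of the monic polynomial with roots $y$ and cross-checks them against the defining sum:

```python
import itertools
import math
import numpy as np

def elem_sym_means(y):
    # prod_i (z - y_i) = sum_k (-1)^k C(n,k) s_k(y) z^{n-k}, so the s_k can
    # be read off the coefficients of the monic polynomial with roots y.
    y = np.asarray(y, dtype=float)
    n = len(y)
    c = np.poly(y)  # coefficients, highest degree first; c[0] = 1
    return [(-1) ** k * c[k] / math.comb(n, k) for k in range(n + 1)]

def elem_sym_means_direct(y):
    # brute-force evaluation straight from the definition of s_k
    n = len(y)
    return [sum(math.prod(c) for c in itertools.combinations(y, k)) / math.comb(n, k)
            for k in range(n + 1)]

y = [0.5, -1.2, 2.0, 3.3, -0.4]
fast = elem_sym_means(y)
slow = elem_sym_means_direct(y)
assert all(abs(a - b) < 1e-9 for a, b in zip(fast, slow))
```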
As another example, by taking all the $y_i$ to be $1$, we see that the $n+1$-tuple $(1,\dots,1)$ is attainable for any $n$. There are several inequalities known for attainable tuples $(s_0,\dots,s_n)$, most famously the classical *Newton inequality* [@newton] $$\label{newton} s_{k-1} s_{k+1} \leq s_k^2$$ for any $1 \leq k < n$. If the underlying real variables $y_1,\dots,y_n$ are non-negative, the Newton inequalities can be used to easily establish the well-known *Maclaurin inequalities* [@maclaurin pp. 80-81] $$\label{maclaurin} s_1 \geq s_2^{\frac{1}{2}} \geq \dots \geq s_n^{\frac{1}{n}}.$$ Many further inequalities in the non-negative case are known; a classical reference in this regard is [@polya], and some more recently discovered inequalities may be found for instance in [@mitrinovic], [@sato], [@mitev], [@mc], [@fw], [@cgs], [@tin]. We also note the inequalities of Lin and Trudinger [@lint] which apply when a certain specified number of the $y_i$ (or the $s_k$) are known to be positive, and which have numerous applications to the study of partial differential equations. Returning to the case of arbitrary real $y_1,\dots,y_n$, the Newton inequalities were strengthened by Rosset [@rosset] to the cubic Newton inequalities $$4 (s_{k+2}^2 - s_{k+1} s_{k+3}) ( s_{k+1}^2 - s_k s_{k+2} ) \geq (s_{k+1} s_{k+2} - s_k s_{k+3})^2,$$ valid for $0 \leq k \leq n-3$ and arbitrary attainable $(s_0,\dots,s_n)$. A further quartic strengthening of this inequality was then established in [@niculescu]; these inequalities are related to the solution by radicals of the general cubic and quartic respectively. Some further inequalities between the $s_k$ may be found in (or derived from) [@hunter], [@timofte; @timofte-2; @timofte-3], [@rukhin], [@nicu-2]. On the other hand, the Maclaurin inequality [\[maclaurin\]](#maclaurin){reference-type="eqref" reference="maclaurin"} can fail in general once the $y_i$ are not required to be non-negative.
Indeed, it is possible for an intermediate entry $s_k$ in an attainable tuple $(s_0,\dots,s_n)$ to vanish without the subsequent entries $s_{k+1},\dots,s_n$ vanishing. A key example occurs when $n$ is even, and the $y_1,\dots,y_n$ take values in $\{-1,+1\}$, with exactly $n/2$ of the $y_i$ equal to each of the signs $+1, -1$. From the binomial expansion $$(x-1)^{n/2} (x+1)^{n/2} = (x^2-1)^{n/2} = \sum_{j=0}^{n/2} (-1)^j \binom{n/2}{j} x^{2j}$$ and [\[poly\]](#poly){reference-type="eqref" reference="poly"} we see that $$\label{sk-form} s_k = s_k(y_1,\dots,y_n) = (-1)^{k/2} \frac{\binom{n/2}{k/2}}{\binom{n}{k}} 1_{k \text{ even}}.$$ Using the standard inequalities $$\label{standard} \frac{n^k}{k^k} \leq \binom{n}{k} \leq \frac{n^k}{k!}$$ for all $0 \leq k \leq n$, which follow immediately from rearranging the obvious inequalities $$\prod_{j=0}^{k-1} (1 - \frac{j}{k}) \leq \prod_{j=0}^{k-1} (1 - \frac{j}{n}) \leq 1,$$ combined with the weak Stirling bounds $$\label{weak-stirling} \frac{k^k}{e^k} \leq k! \leq k^k$$ (where the first inequality is immediate from the Taylor expansion $e^k = \sum_{j=0}^k \frac{k^j}{j!} \geq \frac{k^k}{k!}$ of $e^k$) we thus see for this specific example that $$\label{k-even} |s_k|^{\frac{1}{k}} \asymp \frac{k^{1/2}}{n^{1/2}} 1_{k \text{ even}}$$ for $1 \leq k \leq n$. Here we use the standard asymptotic notation $X \ll Y$, $Y \gg X$ or $X=O(Y)$ to denote the estimate $|X| \leq C Y$ for some absolute constant $C>0$, and $X \asymp Y$ to denote the bound $X \ll Y \ll X$. Thus we have a breakdown of [\[maclaurin\]](#maclaurin){reference-type="eqref" reference="maclaurin"} in the example [\[sk-form\]](#sk-form){reference-type="eqref" reference="sk-form"}, even when absolute value signs are placed on the $s_k$: smallness (or even vanishing) of one quantity $|s_k|^{\frac{1}{k}}$ does not imply smallness of all subsequent quantities $|s_\ell|^{\frac{1}{\ell}}$ for $k < \ell \leq n$. 
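This extremal example is easy to reproduce numerically. The sketch below (ours, not from the paper; NumPy assumed) verifies the closed form for $s_k$ above and that $|s_k|^{1/k}$ is comparable to $(k/n)^{1/2}$ for even $k$:

```python
import math
import numpy as np

n = 12
y = [1.0] * (n // 2) + [-1.0] * (n // 2)   # half the entries +1, half -1
c = np.poly(y)
s = [(-1) ** k * c[k] / math.comb(n, k) for k in range(n + 1)]

# s_k = (-1)^{k/2} C(n/2, k/2) / C(n, k) for even k, and s_k = 0 for odd k
for k in range(1, n + 1):
    if k % 2 == 0:
        expected = (-1) ** (k // 2) * math.comb(n // 2, k // 2) / math.comb(n, k)
    else:
        expected = 0.0
    assert abs(s[k] - expected) < 1e-9

# |s_k|^{1/k} is comparable to sqrt(k/n) when k is even
ratios = [abs(s[k]) ** (1 / k) / math.sqrt(k / n) for k in range(2, n + 1, 2)]
assert all(0.3 < r < 3.0 for r in ratios)
```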
However, it was observed in[^1] [@gy] that if *two* consecutive quantities $|s_k|^{\frac{1}{k}}, |s_{k+1}|^{\frac{1}{k+1}}$ were small in magnitude, then the subsequent entries $|s_\ell|^{\frac{1}{\ell}}$ for $k+1 < \ell \leq n$ were also small in magnitude, though with some loss in the bounds. In fact, as observed in [@dhh; @mrt] the arguments in [@gy] yield an inequality of the form $$\label{prev-bound} |s_\ell|^{\frac{1}{\ell}} \ll \ell^{1/2} \max_{k' = k, k+1} |s_{k'}|^{\frac{1}{k'}}$$ whenever $1 \leq k \leq \ell \leq n$. For the convenience of the reader, we give a proof of [\[prev-bound\]](#prev-bound){reference-type="eqref" reference="prev-bound"} in Section [2](#prelim-sec){reference-type="ref" reference="prelim-sec"}. A comparison of [\[prev-bound\]](#prev-bound){reference-type="eqref" reference="prev-bound"} with [\[k-even\]](#k-even){reference-type="eqref" reference="k-even"} suggests that the former inequality is not sharp; also, when $\ell = k, k+1$, the factor $\ell^{1/2}$ in [\[prev-bound\]](#prev-bound){reference-type="eqref" reference="prev-bound"} is clearly unnecessary. The main result of this note is the following improvement of [\[prev-bound\]](#prev-bound){reference-type="eqref" reference="prev-bound"}, which was conjectured[^2] in the MathOverflow post <https://mathoverflow.net/questions/446254>: **Theorem 1** (Maclaurin type inequality). *Let $(s_0,\dots,s_n)$ be an attainable tuple. Then we have $$\label{main-eq} |s_\ell|^{\frac{1}{\ell}} \ll \max_{k' = k, k+1} \left( \frac{\ell}{k'}\right)^{1/2} |s_{k'}|^{\frac{1}{k'}} \asymp \left( \frac{\ell}{k}\right)^{1/2} \max_{k' = k, k+1} |s_{k'}|^{\frac{1}{k'}}$$ for all $1 \leq k \leq \ell \leq n$.* This bound is optimal up to constants, as can be seen from [\[k-even\]](#k-even){reference-type="eqref" reference="k-even"}. In particular, it may be used to make minor improvements to some of the error bounds in [@dhh; @mrt]. **Remark 2**. 
The bound [\[main-eq\]](#main-eq){reference-type="eqref" reference="main-eq"} can of course be expressed in terms of the non-normalized elementary symmetric functions $$S_k(y) \coloneqq \binom{n}{k} s_k(y) = \sum_{1 \leq i_1 < \dots < i_k \leq n} y_{i_1} \dots y_{i_k}.$$ Indeed, from [\[standard\]](#standard){reference-type="eqref" reference="standard"}, we see that [\[main-eq\]](#main-eq){reference-type="eqref" reference="main-eq"} is equivalent to the inequality $$|S_\ell(y)|^{\frac{1}{\ell}} \ll \left( \frac{k}{\ell}\right)^{1/2}\max_{k' = k, k+1} |S_{k'}(y)|^{\frac{1}{k'}}$$ for all real $y_1,\dots,y_n$ and all $1 \leq k < \ell \leq n$. Using the weak Stirling bounds [\[weak-stirling\]](#weak-stirling){reference-type="eqref" reference="weak-stirling"}, we may also write the bound [\[main-eq\]](#main-eq){reference-type="eqref" reference="main-eq"} in the following alternate form: if one has the bounds $$|S_k(y)|^2 \leq \frac{\theta^k}{k!}$$ and $$|S_{k+1}(y)|^2 \leq \frac{\theta^{k+1}}{(k+1)!}$$ for some $1 \leq k < n$ and $\theta>0$, then one has $$|S_\ell(y)|^2 \leq \frac{(C\theta)^\ell}{\ell!}$$ for all $k \leq \ell \leq n$ and some absolute constant $C$. This is the form of the inequality that was conjectured in the MathOverflow post mentioned previously. The approach in [@gy] to proving bounds such as [\[prev-bound\]](#prev-bound){reference-type="eqref" reference="prev-bound"} was based on the arithmetic-geometric mean inequality, which among other things implies that $$\label{amgm} |s_n(y)|^{\frac{2}{n}} = (y_1^2 \dots y_n^2)^{\frac{1}{n}} \leq \frac{1}{n} (y_1^2 + \dots + y_n^2)$$ for any real numbers $y_1,\dots,y_n$ (which are not required to be non-negative); this readily implies the $k=1, \ell=n$ case of [\[prev-bound\]](#prev-bound){reference-type="eqref" reference="prev-bound"}, and the general case can be deduced by using some well-known symmetry properties of the space of attainable tuples. 
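The arithmetic-geometric mean step just quoted holds with no sign condition on the $y_i$, and since $s_n(y) = y_1 \dots y_n$ it is easy to confirm on random real inputs. A quick numerical check (ours, not from the paper; NumPy assumed):

```python
import numpy as np

# |s_n(y)|^{2/n} = (y_1^2 ... y_n^2)^{1/n} <= (y_1^2 + ... + y_n^2) / n
# for arbitrary real y_1, ..., y_n (no non-negativity required).
rng = np.random.default_rng(0)
n = 10
for _ in range(100):
    y = rng.normal(size=n)
    lhs = np.prod(y ** 2) ** (1.0 / n)
    rhs = np.mean(y ** 2)
    assert lhs <= rhs + 1e-12
```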
This is similar to the standard proof of the Newton inequality [\[newton\]](#newton){reference-type="eqref" reference="newton"} that starts from the special case $k=1, n=2$ and then establishes the general case using the aforementioned symmetries. Our approach will proceed instead by exploiting the following inequality which, while simple to prove, has not appeared previously in the literature to the author's knowledge. **Theorem 3** (New inequality). *Let $(s_0,\dots,s_n)$ be an attainable tuple. Then for any $r>0$ and $1 \leq \ell \leq n$, one has $$\label{r1} \sum_{m=0}^\ell \binom{\ell}{m} |s_m| r^m \geq (1+ |s_\ell|^{2/\ell} r^2)^{\ell/2}$$ or equivalently $$\label{r2} \sum_{m=0}^\ell \binom{\ell}{m} |s_m| r^{\ell-m} \geq (|s_\ell|^{2/\ell} +r^2)^{\ell/2}.$$* We establish this inequality in Section [3](#new-ineq){reference-type="ref" reference="new-ineq"}. Observe from [\[sk-form\]](#sk-form){reference-type="eqref" reference="sk-form"} and the binomial theorem that these inequalities are completely sharp (at least when $\ell=n$), thus giving support to the heuristic that the tuple [\[sk-form\]](#sk-form){reference-type="eqref" reference="sk-form"} has extremal behavior amongst the set of all attainable tuples. Morally speaking, the estimates in Theorem [Theorem 1](#thm-main){reference-type="ref" reference="thm-main"} (working for instance in the model case $\ell=n$ and $|s_n|=1$, which is easy to reduce to) should then follow by applying [\[r1\]](#r1){reference-type="eqref" reference="r1"} with[^3] $r \asymp (k/n)^{1/2}$ and using [\[standard\]](#standard){reference-type="eqref" reference="standard"}, provided that we can somehow neglect the contribution of the $m \neq k,k+1$ terms on the left-hand side of [\[r1\]](#r1){reference-type="eqref" reference="r1"}. This turns out to be possible after a certain amount of technical preparation (relying on the aforementioned symmetries of the space of attainable tuples) to pass to an "extremal" attainable tuple.
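Since Theorem 3 makes no sign assumption on the $y_i$, it can be spot-checked on random real data. The sketch below (ours; NumPy assumed) tests the first inequality of the theorem with $\ell = n$ for several values of $r$:

```python
import math
import numpy as np

rng = np.random.default_rng(1)
n = 10
y = rng.normal(size=n)              # arbitrary real inputs, signs allowed
c = np.poly(y)
s = [(-1) ** k * c[k] / math.comb(n, k) for k in range(n + 1)]

# sum_{m=0}^n C(n,m) |s_m| r^m >= (1 + |s_n|^{2/n} r^2)^{n/2} for all r > 0
for r in (0.1, 0.5, 1.0, 2.0, 5.0):
    lhs = sum(math.comb(n, m) * abs(s[m]) * r ** m for m in range(n + 1))
    rhs = (1 + abs(s[n]) ** (2.0 / n) * r ** 2) ** (n / 2)
    assert lhs >= rhs * (1 - 1e-9)
```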
## Acknowledgments The author is supported by NSF grant DMS-1764034. # Preliminary bounds {#prelim-sec} In this section we record some preliminary bounds, in particular establishing the bound [\[prev-bound\]](#prev-bound){reference-type="eqref" reference="prev-bound"} using the method from [@gy]. The collection of attainable tuples enjoys some useful (and well-known) symmetries, which we shall rely on frequently in our arguments: **Lemma 4** (Symmetries of attainable tuples). *Let $(s_0,\dots,s_n)$ be an attainable tuple.* - *(Scaling) For any real $\lambda$, $(s_0, \lambda s_1, \dots, \lambda^n s_n)$ is attainable.* - *(Reflection) If $s_n \neq 0$, then $(1, s_{n-1}/s_n, \dots, s_0/s_n)$ is attainable. (In particular, if $|s_n|=1$, then $\pm (s_n,\dots,s_0)$ is attainable with $\pm$ the sign of $s_n$.)* - *(Truncation) If $1 \leq \ell \leq n$, then $(s_0,\dots,s_\ell)$ is attainable.* *Proof.* We can write $s_k = s_k(y_1,\dots,y_n)$ for some real $y_1,\dots,y_n$. The claims (i), (ii) are immediate from the homogeneity identity $$s_k(\lambda y_1,\dots,\lambda y_n) = \lambda^k s_k(y_1,\dots,y_n)$$ and the reflection identity $$s_k(1/y_1,\dots,1/y_n) = s_{n-k}(y_1,\dots,y_n) / s_n(y_1,\dots,y_n)$$ respectively for all $0 \leq k \leq n$ (note that the non-vanishing of $s_n(y_1,\dots,y_n)$ implies that all the $y_1,\dots,y_n$ are non-zero). To prove (iii), observe from $n-\ell$ applications of Rolle's theorem that the degree $\ell$ polynomial $$\frac{\ell!}{n!} \frac{d^{n-\ell}}{dx^{n-\ell}} \prod_{i=1}^n (z-y_i) = \sum_{k=0}^\ell (-1)^k \binom{\ell}{k} s_k(y_1,\dots,y_n) z^{\ell-k}$$ is monic with all roots real, and hence the tuple $(s_0(y_1,\dots,y_n),\dots,s_\ell(y_1,\dots,y_n))$ is attainable. ◻ We may now prove [\[prev-bound\]](#prev-bound){reference-type="eqref" reference="prev-bound"}. By Lemma [Lemma 4](#syms){reference-type="ref" reference="syms"}(iii) we may assume without loss of generality that $\ell=n$. 
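The truncation symmetry just invoked can also be illustrated numerically: differentiating the monic polynomial with roots $y$ and renormalizing, as in the proof of Lemma 4(iii), yields a degree-$\ell$ monic real-rooted polynomial whose normalized coefficients are again $s_0(y),\dots,s_\ell(y)$. A small sketch (ours; NumPy assumed):

```python
import math
import numpy as np

n, l = 9, 5
y = np.array([-2.0, -1.5, -0.5, 0.3, 0.8, 1.1, 1.7, 2.4, 3.0])  # distinct reals
c = np.poly(y)
s = [(-1) ** k * c[k] / math.comb(n, k) for k in range(n + 1)]

# Q(z) = (l!/n!) d^{n-l}/dz^{n-l} prod_i (z - y_i) is monic of degree l ...
P = np.poly1d(c)
Q = P.deriv(n - l) * (math.factorial(l) / math.factorial(n))
assert abs(Q.coeffs[0] - 1.0) < 1e-9

# ... with all roots real (by n - l applications of Rolle's theorem) ...
assert np.max(np.abs(np.roots(Q.coeffs).imag)) < 1e-6

# ... and its normalized coefficients reproduce s_0, ..., s_l
s_trunc = [(-1) ** k * Q.coeffs[k] / math.comb(l, k) for k in range(l + 1)]
assert all(abs(s[k] - s_trunc[k]) < 1e-7 for k in range(l + 1))
```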
The bound is trivial or vacuous when $n \leq 2$, so we may assume $n \geq 3$. From the Newton identity $$\sum_{i=1}^n y_i^2 = n^2 s_1(y_1,\dots,y_n)^2 - 2 \binom{n}{2} s_2(y_1,\dots,y_n)$$ and [\[amgm\]](#amgm){reference-type="eqref" reference="amgm"} we obtain the bound $$|s_n(y_1,\dots,y_n)|^{\frac{2}{n}} \leq n s_1(y_1,\dots,y_n)^2 - (n-1) s_2(y_1,\dots,y_n).$$ By the triangle inequality, we conclude for any attainable tuple $(s_0,\dots,s_n)$ that $$|s_n|^{\frac{2}{n}} \leq n s_1^2 + (n-1) |s_2|\leq \max_{k=1,2} 2n |s_k|^{\frac{2}{k}}.$$ (This already establishes the $k=1$ case of [\[prev-bound\]](#prev-bound){reference-type="eqref" reference="prev-bound"}.) Using Lemma [Lemma 4](#syms){reference-type="ref" reference="syms"}(ii), we conclude that $$|s_n|^{-\frac{2}{n}} \leq \max_{k=1,2} 2n (|s_{n-k}|/|s_n|)^{\frac{2}{k}}$$ if $s_n \neq 0$. With our assumption $n \geq 3$, this rearranges as $$|s_n|^{\frac{1}{n}} \leq \max_{k=1,2} (2n)^{\frac{k}{2(n-k)}} |s_{n-k}|^{\frac{1}{n-k}}.$$ We may remove the hypothesis $s_n \neq 0$, since this inequality clearly holds when $s_n$ vanishes. It will be convenient to replace the factor $(2n)^{\frac{k}{2(n-k)}}$ by an expression which will form a telescoping product. From the monotone decreasing convex nature of $t \mapsto \frac{1}{t-1}$ for $t > 1$, one has the inequality $$\frac{k}{2(n-k)} \leq \int_{n-k}^n \frac{dt}{2(t-1)} = \frac{1}{2} \log \frac{n-1}{n-k-1}$$ for $k=1,2$. We conclude that $$|s_n|^{\frac{1}{n}} \leq \max_{k=1,2} (2n)^{\frac{1}{2} \log \frac{n-1}{n-k-1}} |s_{n-k}|^{\frac{1}{n-k}}.$$ A routine induction on $n$ then shows that $$\label{nk} |s_n|^{\frac{1}{n}} \leq \max_{k'=k,k+1} (2n)^{\frac{1}{2} \log \frac{n-1}{n-k'-1}} |s_{n-k'}|^{\frac{1}{n-k'}}$$ whenever $1 \leq k \leq n-2$.
If we again make the temporary assumption that $s_n \neq 0$, we can use Lemma [Lemma 4](#syms){reference-type="ref" reference="syms"}(ii) to conclude that $$|s_n|^{-\frac{1}{n}} \leq \max_{k'=k,k+1} (2n)^{\frac{1}{2} \log \frac{n-1}{n-k'-1}} (|s_{k'}|/|s_n|)^{\frac{1}{n-k'}}$$ which rearranges as $$|s_n|^{\frac{1}{n}} \leq \max_{k'=k,k+1} (2n)^{\frac{n-k'}{2k'} \log \frac{n-1}{n-k'-1}} |s_{k'}|^{\frac{1}{k'}}.$$ We may again remove the condition $s_n \neq 0$ here, since the bound clearly holds when $s_n=0$. In the region $k \leq n/2$, we have $\log \frac{n-1}{n-k'-1} \leq \frac{k'}{n-k'} + O( \frac{k'}{n^2})$ and hence $$\label{sk} |s_n|^{\frac{1}{n}} \ll n^{1/2} \max_{k'=k,k+1} |s_{k'}|^{\frac{1}{k'}}.$$ In the region $n/2 < k \leq n-2$, we instead use [\[nk\]](#nk){reference-type="eqref" reference="nk"} (with $k$ replaced by $n+1-k$) to obtain $$|s_n|^{\frac{1}{n}} \leq \max_{k'=k,k+1} (2n)^{\frac{1}{2} \log \frac{n-1}{k'-1}} |s_{k'}|^{\frac{1}{k'}}.$$ But $\frac{n-1}{k'-1} \leq 2 + O(\frac{1}{n})$, and so in this region we also have [\[sk\]](#sk){reference-type="eqref" reference="sk"} since $\log 2 \leq 1$. Finally, [\[sk\]](#sk){reference-type="eqref" reference="sk"} trivially holds when $k=n-1$. We have thus established [\[sk\]](#sk){reference-type="eqref" reference="sk"} for all $1 \leq k < n$. Since we had previously reduced to the case $\ell=n$, we obtain [\[prev-bound\]](#prev-bound){reference-type="eqref" reference="prev-bound"} as required. **Remark 5**. It is not difficult to make the implied constant in [\[prev-bound\]](#prev-bound){reference-type="eqref" reference="prev-bound"} more explicit; see [@dhh Theorem 2] or [@mrt Theorem 5.2] for examples of such explicit versions of this bound (with slightly different normalizations). # Proof of Theorem [Theorem 3](#thm-new){reference-type="ref" reference="thm-new"} {#new-ineq} We now prove Theorem [Theorem 3](#thm-new){reference-type="ref" reference="thm-new"}. 
As in the proof of [\[prev-bound\]](#prev-bound){reference-type="eqref" reference="prev-bound"}, we may invoke Lemma [Lemma 4](#syms){reference-type="ref" reference="syms"}(iii) and assume without loss of generality that $\ell=n$. We may assume that $s_n \neq 0$ since the desired bound is trivial otherwise, and then by Lemma [Lemma 4](#syms){reference-type="ref" reference="syms"}(i) we may normalize $|s_n|=1$. The bounds [\[r1\]](#r1){reference-type="eqref" reference="r1"} and [\[r2\]](#r2){reference-type="eqref" reference="r2"} are clearly equivalent (either by replacing $r$ with $1/r$, or by using Lemma [Lemma 4](#syms){reference-type="ref" reference="syms"}(ii)), so we will content ourselves with proving [\[r2\]](#r2){reference-type="eqref" reference="r2"}. Fix an attainable tuple $(s_0,\dots,s_n)$ with $|s_n|=1$ and a real number $r>0$. We write $s_k = s_k(y)$ for some real $y_1,\dots,y_n$; from the normalization $|s_n|=1$ we see that the $y_i$ are non-zero. We form the polynomial $$\label{pz} P(z) \coloneqq \prod_{j=1}^n (z-y_j)$$ of one complex variable $z$. From [\[poly\]](#poly){reference-type="eqref" reference="poly"} and the triangle inequality we have $$|P(ir)| \leq \sum_{k=0}^n \binom{n}{k} |s_k| r^{n-k}$$ so it will suffice to establish the lower bound $$\log |P(ir)| \geq \frac{n}{2} \log(1+r^2).$$ But by taking absolute values and then logarithms in [\[pz\]](#pz){reference-type="eqref" reference="pz"}, we may expand $$\log |P(ir)| = \frac{1}{2} \sum_{j=1}^n \log(y_j^2 + r^2).$$ From the convexity of $t \mapsto \log(e^t+a)$ for $t \in \mathbb{R}$ and $a>0$ we have the inequality $$\log(e^t+a) \geq \log(1+a) + \frac{1}{1+a} t;$$ specializing to the case $a = r^2$, $t = \log y_j^2$ we conclude $$\log (y_j^2 + r^2) \geq \log(1+r^2) + \frac{2}{1+r^2} \log |y_j|.$$ But from the normalization $|s_n|=1$ we have $\sum_{j=1}^n \log |y_j|=0$. The claim follows. **Remark 6**. 
It is tempting to use the contour integral representation $$(-1)^k \binom{n}{k} s_k = \frac{1}{2\pi i} \int_\gamma \frac{P(z)}{z^{n-k+1}}\ dz$$ for any $0 \leq k \leq n$ and any contour $\gamma$ winding once anti-clockwise around the origin, together with the saddle point method, to try to control $s_k$ in terms of bounds on $P(z)$. We were not able to do this rigorously (mostly because the geometry of the subharmonic function $\log |P(z)|$ could be moderately complicated despite the singularities being constrained to the real axis), but eventually landed upon the more elementary approach of estimating $P$ at a single complex value $ir$ as a workable substitute for the saddle point method. # Main argument Now we can establish Theorem [Theorem 1](#thm-main){reference-type="ref" reference="thm-main"}. The idea is to work with an "extremal" tuple and then show that the contributions of the $m \neq k,k+1$ terms in [\[r2\]](#r2){reference-type="eqref" reference="r2"} are negligible in that case. In order to guarantee the existence of a (near)-extremizer, it is convenient to truncate the range of the parameters $k,\ell,n$. Let $N$ be a large natural number, and let $A = A_N$ be the best constant for which one has the inequality $$\label{A-def} |s_\ell|^{\frac{1}{\ell}} \leq A \max_{k' = k, k+1} \left( \frac{\ell}{k'}\right)^{1/2} |s_{k'}|^{\frac{1}{k'}}$$ whenever $1 \leq k \leq \ell \leq n \leq N$ and $(s_0,\dots,s_n)$ is attainable. By [\[prev-bound\]](#prev-bound){reference-type="eqref" reference="prev-bound"}, $A$ is finite (this is the only place in the argument where we exploit the upper bound of $N$). It will suffice to show that $A$ is bounded uniformly in $N$. Thus, we may assume for sake of contradiction that $A$ is larger than any specified constant.
By Lemma [Lemma 4](#syms){reference-type="ref" reference="syms"}(ii), (iii), we have $$|1/s_\ell|^{\frac{1}{\ell}} \leq A \max_{k' = k, k+1} \left( \frac{\ell}{k'}\right)^{1/2} |s_{\ell-k'}/s_\ell|^{\frac{1}{k'}}$$ whenever $1 \leq k \leq \ell \leq n \leq N$ and $(s_0,\dots,s_N)$ is attainable with $s_\ell \neq 0$. This can be rearranged as $$\label{flip} |s_\ell|^{\frac{1}{\ell}} \leq \max_{k' = k, k+1} A^{\frac{k'}{\ell-k'}} \left( \frac{\ell}{k'}\right)^{\frac{k'}{2(\ell-k')}} |s_{\ell-k'}|^{\frac{1}{\ell-k'}}.$$ Obviously this bound also holds in the $s_\ell=0$ case, so the condition $s_\ell \neq 0$ may then be removed. The bounds [\[A-def\]](#A-def){reference-type="eqref" reference="A-def"}, [\[flip\]](#flip){reference-type="eqref" reference="flip"} involve positive powers of the potentially unbounded quantity $A$, which at first glance renders them unsuitable for our application. However, we can counteract these losses by working with an "extremal" tuple that gains back a negative power of $A$. Indeed, by definition of $A$, we can find $1 \leq k < \ell \leq n \leq N$ and an attainable tuple $(s_0,\dots,s_N)$ such that $$\label{near} |s_\ell|^{\frac{1}{\ell}} > e^{-1/n} A \max_{k' = k, k+1} \left( \frac{\ell}{k'}\right)^{1/2} |s_{k'}|^{\frac{1}{k'}}$$ (say); one should think of this tuple as being "near-extremal" for the inequality [\[A-def\]](#A-def){reference-type="eqref" reference="A-def"}. From [\[prev-bound\]](#prev-bound){reference-type="eqref" reference="prev-bound"} and the largeness of $A$, we may assume that $k$ (and hence $\ell, n$) is larger than any specified absolute constant. By Lemma [Lemma 4](#syms){reference-type="ref" reference="syms"}(iii) we may assume without loss of generality that $\ell=n$. From [\[near\]](#near){reference-type="eqref" reference="near"} we then have $s_n$ non-zero, and by Lemma [Lemma 4](#syms){reference-type="ref" reference="syms"}(i) we may assume that we have the normalization $|s_n|=1$.
Thus we now have $$\label{s-major} |s_{k'}|^{\frac{1}{k'}} \leq e^{1/n} A^{-1} \left( \frac{k'}{n}\right)^{1/2}$$ for $k'=k,k+1$; note in particular the negative power of $A$ on the right-hand side. Inserting this into [\[flip\]](#flip){reference-type="eqref" reference="flip"} (with $k$ replaced by $n+1-k$, and $\ell=n$) we conclude that $$1 = |s_n|^{\frac{1}{n}} \ll A^{\frac{2k'-n}{n-k}} \left( \frac{n}{k'}\right)^{\frac{2k'-n}{2(n-k')}}$$ for some $k'=k,k+1$. If $k \geq 2n/3$, this leads to a contradiction since $A$ is large. Thus we may assume[^4] that $k < 2n/3$. From Theorem [3](#new-ineq){reference-type="ref" reference="new-ineq"} and our choice of normalizations, we have the inequality $$\label{contra} \sum_{m=0}^n \binom{n}{m} |s_m| r^m \geq (1+r^2)^{n/2}$$ for any $r>0$. We will show that for $A$ large enough, this bound is inconsistent with [\[s-major\]](#s-major){reference-type="eqref" reference="s-major"}, [\[A-def\]](#A-def){reference-type="eqref" reference="A-def"}, [\[flip\]](#flip){reference-type="eqref" reference="flip"} for a suitable choice of $r$. In order to do this, we must first obtain upper bounds for $\binom{n}{m} |s_m|$ for all $0 \leq m \leq n$ (not just $m=k,k+1$). In the region $k \leq m \leq n$, we may simply apply [\[A-def\]](#A-def){reference-type="eqref" reference="A-def"} and [\[s-major\]](#s-major){reference-type="eqref" reference="s-major"} to obtain the upper bound[^5] $$|s_m| \leq O\left(\frac{m}{n}\right)^{m/2}$$ and hence by [\[standard\]](#standard){reference-type="eqref" reference="standard"} $$\label{mbound} \binom{n}{m} |s_m| \leq O\left(\frac{n}{m}\right)^{m/2}.$$ Now we consider the problem of bounding $s_m$ in the complementary range $0 \leq m < k$. 
To do this, we apply [\[flip\]](#flip){reference-type="eqref" reference="flip"} to the reversed sequence $\pm (s_n,\dots,s_0)$ (which is attainable for some choice of sign $\pm$ thanks to Lemma [Lemma 4](#syms){reference-type="ref" reference="syms"}(ii) and the normalization $|s_n|=1$) and with $k, \ell$ replaced by $k-m, n-m$ respectively to conclude that $$|s_m|^{\frac{1}{n-m}} \leq \max_{k' = k, k+1} A^{\frac{k'-m}{n-k'}} \left( \frac{n-m}{k'-m}\right)^{\frac{k'-m}{2(n-k')}} |s_{k'}|^{\frac{1}{n-k'}}$$ and thus by [\[s-major\]](#s-major){reference-type="eqref" reference="s-major"} (and noting that $\frac{(n-m)k'}{n-k}= O(k') = O(n)$ since $k \leq 2n/3$ and $n$ is large) $$|s_m| \ll A^{\frac{(k'-m) (n-m)}{n-k'}} \left( \frac{n-m}{k'-m}\right)^{\frac{(k'-m)(n-m)}{2(n-k')}} \left( A^{-1} \left(\frac{k'}{n}\right)^{1/2} \right)^{\frac{(n-m)k'}{n-k'}}$$ for some $k'=k,k+1$. This bound is somewhat complicated, so we shall now simplify the right-hand side (conceding some factors roughly of the form $O(1)^m$, which will turn out to be acceptable losses). From the trivial bound $\frac{n-m}{k'-m} \leq \frac{n}{k'-m}$, we conclude that $$|s_m| \ll A^{-\frac{m (n-m)}{n-k'}} \left( \frac{n}{k'-m}\right)^{\frac{(k'-m)(n-m)}{2(n-k')}} \left(\frac{k'}{n}\right)^{\frac{(n-m)k'}{2(n-k')}}.$$ Since $\frac{n-m}{n-k'} \geq 1$, we may replace the $A^{-\frac{m (n-m)}{n-k'}}$ term with $A^{-m}$. 
Next, we use the Bernoulli inequality bound $$\left(\frac{k'}{k'-m}\right)^{k'-m} \leq \exp\left( \frac{m}{k'-m} (k'-m) \right) = e^m$$ to conclude that $$|s_m| \ll O(A^{-1})^m \left( \frac{n}{k'}\right)^{-\frac{m(n-m)}{2(n-k')}}.$$ By [\[standard\]](#standard){reference-type="eqref" reference="standard"}, we thus have $$\begin{aligned} \binom{n}{m} |s_m| &\ll O(A^{-1})^m \left( \frac{n}{m} \right)^m \left( \frac{n}{k'}\right)^{-\frac{m(n-m)}{2(n-k')}} \\ &= O\left( \frac{k'}{Am} \right)^m \left( \frac{n}{k'}\right)^{\frac{m}{2} + \frac{m(m-k')}{2(n-k')}}.\end{aligned}$$ Since $\frac{m(m-k')}{n-k'} \leq 0$, $\frac{n}{k'} \geq 1$, and $k' \asymp k$, we conclude that $$\label{m-1} \binom{n}{m} |s_m| \leq O\left( \frac{k}{Am} \right)^m \left( \frac{n}{k}\right)^{\frac{m}{2}}$$ since we may absorb the implied constant into the $O()^m$ term when $m \geq 1$, and the bound is trivially true for $m=0$. We now insert the bounds [\[mbound\]](#mbound){reference-type="eqref" reference="mbound"}, [\[m-1\]](#m-1){reference-type="eqref" reference="m-1"} into [\[contra\]](#contra){reference-type="eqref" reference="contra"} with $$r \coloneqq \delta \left( \frac{k}{n}\right)^{1/2}$$ and $\delta>0$ a small absolute constant to be chosen later. We conclude that $$\begin{aligned} (1 + \delta^2 k/n)^{n/2} &\leq \sum_{m \leq k} O\left( \frac{k \delta}{Am} \right)^m + \sum_{m=k}^\infty O\left(\frac{\delta k^{1/2}}{m^{1/2}}\right)^m \\ &\leq \sum_{m=0}^\infty \frac{O(k\delta/A)^m}{m!} + \sum_{m=k}^\infty O(\delta)^m \\ &\leq \exp( O(\delta k/A) ) + 2^{-k}\end{aligned}$$ (say) if $\delta$ is small enough; on the other hand, we have $$1+\delta^2 k/n \geq \exp( \delta^2 k/2n )$$ if $\delta$ is small enough. We conclude that $$\exp( O(\delta k/A) ) + 2^{-k} \geq \exp(\delta^2 k / 4).$$ If $A$ is sufficiently large depending on $\delta$, this implies $$\exp( \delta^2 k / 8 ) + 2^{-k} \geq \exp(\delta^2 k / 4),$$ but this leads to a contradiction if $k$ is large enough.
This completes the proof of Theorem [Theorem 1](#thm-main){reference-type="ref" reference="thm-main"}. A. Cuttler, C. Greene, M. Skandera, *Inequalities for symmetric means*, European J. Combin. **32** (2011), no.6, 745--761. D. Doron, P. Hatami, W. Hoza, *Log-seed pseudorandom generators via iterated restrictions*, 35th Computational Complexity Conference, Art. No. 6, 36 pp. LIPIcs. Leibniz Int. Proc. Inform., 169, Schloss Dagstuhl. Leibniz-Zentrum für Informatik, Wadern, 2020. S. Favaro, S. Walker, *On a general Maclaurin's inequality*, Proceedings of the American Mathematical Society **146** (2018), 175--188. P. Gopalan, A. Yehudayoff, *Inequalities and tail bounds for elementary symmetric polynomials*, Electron. Colloq. Comput. Complexity, TR14-019, 2014. G. Hardy, J. E. Littlewood, G. Pólya, Inequalities, Cambridge Mathematical Library 2nd ed., 1952. D. B. Hunter, *The positive-definiteness of the complete symmetric functions of even order*, Math. Proc. Cambridge Philos. Soc. 82 (1977), 255--258. M. Lin, N. Trudinger, *On some inequalities for elementary symmetric functions*, Bull. Austral. Math. Soc. **50** (1994), no.2, 317--326. C. Maclaurin, *A Second Letter from Mr. Colin McLaurin to Martin Folkes, Esq.; Concerning the Roots of Equations, with the Demonstration of Other Rules in Algebra*, Phil. Trans. **36** (1729), 59--96. R. Meka, O. Reingold, A. Tal, *Pseudorandom generators for width-3 branching programs*, In Proc. 51st STOC, pp. 626--637. ACM Press, 2019. G. Milovanović, A. Cvetković, *Some inequalities for symmetric functions and an application to orthogonal polynomials*, J. Math. Anal. Appl. **311** (2005), no.1, 191--208. T. Mitev, *New inequalities between elementary symmetric polynomials*, JIPAM. J. Inequal. Pure Appl. Math. **4** (2003), no.2, Article 48, 11 pp. S. Mitrinović, *Certain inequalities for elementary symmetric function*. Univ. Beograd. Publ. Elektrotehn. Fak. Ser. Mat. Fiz. No. **1** (1967), 17--20. I.
Newton, Arithmetica universalis: sive de compositione et resolutione arithmetica liber, 1707. C. P. Niculescu, *A new look at Newton's inequalities*, J. Inequal. Pure and Appl. Math., **1**(2) (2000), Article 17. C. Niculescu, *Interpolating Newton's inequalities*, Bull. Math. Soc. Sci. Math. Roumanie (N.S.) **47** (2004), no.1-2, 67--83. S. Rosset, *Normalized symmetric functions, Newton inequalities and a new set of stronger inequalities*, Amer. Math. Monthly **96** (1989), 815--820. A. Rukhin, *Bounds on elementary symmetric functions*, Linear Algebra Appl. **448** (2014), 329--342. N. Sato, *Symmetric polynomial inequalities*, Crux Mathematicorum with Mathematical Mayhem, **27** (2001), 529--533. J. Sylvester, *On a theory of syzygetic relations of two rational integral functions comprising an application to the theory of Sturm's functions and that of the greatest algebraical common measure*, Phil. Trans. of the Royal Soc. of London, CXLIII (1853), 407--548. V. Timofte, *On the positivity of symmetric polynomial functions. I. General results.* J. Math. Anal. Appl. **284** (2003), no.1, 174--190. V. Timofte, *On the positivity of symmetric polynomial functions. II. Lattice general results and positivity criteria for degrees 4 and 5*, J. Math. Anal. Appl. **304** (2005), no.2, 652--667. V. Timofte, *On the positivity of symmetric polynomial functions. III. Extremal polynomials of degree 4*, J. Math. Anal. Appl. **307** (2005), no.2, 565--578. G. Tinaztepe, R. Tinaztepe, *A refinement of Newton and Maclaurin inequalities through abstract convexity*, Turkish J. Math. **46** (2022), no.3, 699--709. [^1]: This assertion is not explicit in [@gy], which focused more on the related question of whether smallness of $|s_{k+1}|/|s_k|$ and $|s_{k+2}|/|s_k|$ implied smallness of $|s_\ell|/|s_k|$ for all larger $\ell$, but as observed by several authors [@dhh; @mrt], the methods in [@gy] extend to cover the situation discussed here.
[^2]: The form of the bound in that post is slightly different from the one presented here, but is logically equivalent to it; see Remark [Remark 2](#eqv){reference-type="ref" reference="eqv"}. [^3]: This choice of $r$ is suggested by the law of large numbers (or de Moivre--Laplace theorem), which among other things indicates that the binomial distribution $m \mapsto (1+r^2)^{-n/2} \binom{n/2}{m/2} 1_{m \text{ even}} r^m$ peaks at $m \approx \frac{r^2}{1+r^2} n$. [^4]: In fact we could improve this to $k \leq n/2$ with a little more effort, but for our arguments the only thing that is important is that we can ensure that $n-k' \asymp n-k \asymp n$ for $k'=k,k+1$. [^5]: We adopt the convention that the asymptotic $O()$ notation is applied before exponentiation if the exponent appears outside the parentheses; thus for instance $O(\frac{m}{n})^{m/2}$ denotes a quantity bounded by $(C \frac{m}{n})^{m/2}$ for some absolute constant $C>0$.
--- abstract: | We show the time decay of spherically symmetric Coulomb waves in $\mathbb R^{3}$ for the case of a repulsive charge. By means of a distorted Fourier transform adapted to $H=-\Delta+q\cdot |x|^{-1}$, with $q>0$, we explicitly compute the kernel of the evolution operator $e^{itH}$. A detailed analysis of the kernel is then used to prove that for large times, $e^{i t H}$ obeys an $L^1 \to L^\infty$ dispersive estimate with the natural decay rate $t^{-\frac 32}$. address: - Department of Mathematics, Yale University, New Haven, CT 06511 - Department of Mathematics, Yale University, New Haven, CT 06511 - Department of Mathematics, Brown University, Providence, RI 02912 - Department of Mathematics, Rutgers University, Piscataway, NJ 08854 author: - Adam Black - Ebru Toprak - Bruno Vergara Biggio - Jiahua Zou bibliography: - coulombdispersion.bib title: $L^1\rightarrow L^\infty$ Dispersive estimates for Coulomb waves --- # Introduction ## Motivation and main result The time-dependent *Coulomb wave equation* describes the quantum dynamics of a charge under the influence of a long-range spherically symmetric potential in $\mathbb R^{3}$: $$\begin{aligned} \label{eq.Coulomb} \begin{split} &\big(i \partial_{t} -H)u=0\,,\qquad H:=-\Delta + \frac{q}{|x|} \,,\quad q\in\mathbb R\\ &u(0,x) = f(x) \end{split}\end{aligned}$$ In the attractive case, $q<0$, this equation has been used since the birth of quantum mechanics to describe the time-evolution of the electron in a hydrogen atom [@Schroe]. In this paper, we focus on the repulsive case, $q>0$, which was introduced by Yost, Wheeler and Breit in 1936 [@YWB] in connection with the interaction of charges of the same sign. The corresponding solutions are called *Coulomb waves* [@ESII]. 
As is well known, the free Schrödinger equation in $\mathbb R^3$ (that is ([\[eq.Coulomb\]](#eq.Coulomb){reference-type="ref" reference="eq.Coulomb"}) without the potential term) enjoys many dispersive estimates describing the spreading of the wave packet, the most fundamental of which is $$\begin{aligned} \label{disperse} \| e^{i t \Delta} f\|_{L_{}^{\infty}(\mathbb R^{3})}\leq \frac{C}{|t|^{3/2}} \|f\|_{L^{1}(\mathbb R^{3})}\,,\end{aligned}$$ with constant independent of $f$ and $t$. This is a direct consequence of the fact that the propagator of the free Schrödinger equation in $\mathbb R^3$ can be computed as $e^{it\Delta} f=k_{t}* f$, where $$\begin{aligned} k_{t}(x)= (4\pi i t)^{-3/2} e^{\frac{|x|^{2}}{4i t}}.\end{aligned}$$ Our goal in this paper is to extend the estimate [\[disperse\]](#disperse){reference-type="eqref" reference="disperse"} to $e^{itH}$ for radial data. Namely, we prove the following theorem: **Theorem 1**. *Let $H=-\Delta+\frac{q}{|x|}$ be the Coulomb Hamiltonian in $\mathbb R^{3}$, with $q>0$. Then, for any spherically symmetric function $f$ one has the following *a priori* estimate $$\label{eq.dispersive} \| e^{i t H} f\|_{L_{}^{\infty}(\mathbb R^{3})} \leq \frac{C}{|t|^{3/2}} \|f\|_{L_{}^{1}(\mathbb R^{3})}\,,\qquad \lvert t \rvert \geq 1\,,$$ with constant independent of $f$ and $t$.* As explained in the appendix, $H$ is a self-adjoint operator on $H^2(\mathbb R^3)$, so the meaning of $e^{itH}$ is clear. In later work, we plan to extend this result to non-radial functions; see the discussion in Section 1.3.
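As a concrete illustration of [\[disperse\]](#disperse){reference-type="eqref" reference="disperse"} (our own sketch, not part of the paper's argument), one can test the kernel formula on the Gaussian datum $f(y)=e^{-|y|^2}$: the convolution $k_t*f$ at $x=0$ is a Gaussian integral whose value is $(4it-1)^{-3/2}$, so that $|e^{it\Delta}f(0)|=(1+16t^2)^{-3/4}\lesssim t^{-3/2}$. The short sketch below compares a direct radial quadrature of the kernel formula with this closed form and checks the decay rate.

```python
import cmath

def gauss3d(a, R=8.0, n=4000):
    # trapezoid rule for the radial form of a 3d Gaussian integral:
    # int_{R^3} e^{-a|y|^2} dy = 4*pi * int_0^R r^2 e^{-a r^2} dr  (Re a > 0, tail beyond R negligible)
    h = R / n
    total = 0j
    for k in range(n + 1):
        r = k * h
        w = 0.5 if k in (0, n) else 1.0
        total += w * r * r * cmath.exp(-a * r * r)
    return 4 * cmath.pi * h * total

for t in [1.0, 2.0, 5.0, 10.0]:
    # kernel formula at x = 0 for f(y) = e^{-|y|^2}:
    # (4*pi*i*t)^{-3/2} * int e^{|y|^2/(4it)} e^{-|y|^2} dy, a Gaussian integral with a = 1 - 1/(4it)
    a = 1 - 1 / (4j * t)
    val = gauss3d(a) / (4 * cmath.pi * 1j * t) ** 1.5
    closed = (4j * t - 1) ** -1.5  # closed-form value of the same expression
    assert abs(val - closed) < 1e-4 * abs(closed)
    # |u(t,0)| = (1 + 16 t^2)^{-3/4}, so t^{3/2} |u(t,0)| stays bounded: the t^{-3/2} decay
    assert abs(val) * t ** 1.5 < 0.2
```

The branch cuts of the principal complex powers are compatible here, which is why the naive quadrature and the closed form agree without any phase correction.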
Besides its intrinsic interest, the estimate ([\[eq.dispersive\]](#eq.dispersive){reference-type="ref" reference="eq.dispersive"}) serves as an initial step towards understanding the modified scattering of the *Hartree* or *Gross-Pitaevskii* equation in $\mathbb R^{3}$: $$(i\partial_{t}-\Delta) u+\left((-\Delta)^{-1} |u|^{2}\right)u=0\,.$$ This is a nonlinear equation modeling the dynamics of non-relativistic bosonic many-body particle systems in the mean-field limit, which has been extensively studied together with the NLS equation, see e.g. [@HN; @KP]. Indeed, taking radial perturbations leads to the equation $$\left(i\partial_{t}- \partial^2_r + \frac{1}{r} \right)v+\varepsilon^{2} v\int\limits_{\mathbb R_{+}}\frac{1}{\max\{r,s\}} |v(s,t)|^{2}ds=0\,,$$ with $0<\varepsilon\ll 1$, whose linearization is given by the Coulomb equation ([\[eq.Coulomb\]](#eq.Coulomb){reference-type="ref" reference="eq.Coulomb"}) for radial data. Thus, any investigation of the nonlinear scattering phenomenon of the Hartree equation must begin with a detailed understanding of the long-time asymptotics of solutions to ([\[eq.Coulomb\]](#eq.Coulomb){reference-type="ref" reference="eq.Coulomb"}). ## Prior work The problem of extending pointwise dispersive estimates to Hamiltonians of the form ${-\Delta +V(x)}$ has attracted considerable attention, given that these estimates serve as crucial tools for the subsequent analysis of both linear and nonlinear problems. In the linear setting, they give rise to Strichartz estimates, and in the nonlinear realm they can be used to prove the stability of solitons, see e.g. [@SW; @Weder; @survey; @survey2]. Nevertheless, most of the existing studies rely on either the perturbation of the free resolvent operator or the use of Duhamel's formula.
Consequently, these approaches require that the potential $V$ be bounded or small in some suitable sense or decay faster than $|x|^{-1}$ at infinity, see the notable references [@BG; @EG; @GS; @JSS; @RSchlag; @Yaj] for some of the diverse methods employed in this area. From this perspective, the Coulomb potential $\lvert x \rvert^{-1}$ is pathological because of its singularity at the origin and slow decay at infinity. For instance, while pointwise estimates have been investigated for inverse square potentials [@FFF; @KT], the same methods cannot be directly applied to $H$ due to the differing scaling behavior between the Coulomb potential and the Laplacian. To the best of our knowledge, there is only one study quantifying the dispersive properties of the Hamiltonian $H$. In [@Miz], Mizutani considers the more general operator $$\begin{aligned} H_1:= -\Delta + Z |x|^{-\mu} + \varepsilon V_S(x)\end{aligned}$$ on $\mathbb R^n$ where $\mu\in (0,2)$ and $|\partial_x^{\alpha}\{ V_S(x) \} | \leq C \langle x \rangle^{-1-\mu -|\alpha|}$, which for $\varepsilon=0$ and $\mu=1$ is our operator $H$. For $\varepsilon\geq 0$ sufficiently small depending on $Z,\mu,$ and $V_S$, they show Strichartz estimates (including the endpoint) for $H_1$, i.e., $$\begin{aligned} \|e^{itH_1} f \|_{L^p(\mathbb R:L^q(\mathbb R^n))} \leq C \|f\|_{L^2(\mathbb R^n)}, \,\,\,\,\,\ \frac{2}{p}= \frac{n}{2} - \frac{n}{q} , \,\,\,\,\,\ (n,p,q)\neq(2,\infty,\infty). \end{aligned}$$ We emphasize that our estimates are instead pointwise in that they control the $L^\infty$ norm for all large $t$. Furthermore, we explicitly compute the kernel of the evolution operator $e^{itH}$ for radial data, which may be of independent interest. 
## Overview of the proof Though we will mostly focus on radial waves, let us consider the spherical decomposition of $L^2(\mathbb R^{3}) = \bigoplus_{{\ell} =0}^\infty L^{2}(\mathbb R^{+},\,r^2 dr)\otimes L_{{\ell}}$, where $r:=|x|$ and $L_{{\ell}}$ denotes the $\ell$-th eigenspace of spherical harmonics of angular momentum $\ell=0,1,2\dots$. The Coulomb Hamiltonian $H$ restricted to $L_{\ell}$ is then unitarily equivalent to the Sturm-Liouville operator $- \partial^2_r +\frac{\ell (\ell+1)}{r^2} + \frac{q}{r}$: $$H|_{L_{\ell}}=H_{\ell,q} := r^{-1} \Big(- \partial^2_r +\frac{\ell (\ell+1)}{r^2} + \frac{q}{r} \Big) r\,.$$ Thus, any $f \in L^2(\mathbb R^3)$ can be represented as $$\label{eq.partialwaves} f(r,\omega) = r \sum_{\ell =0}^{\infty} f_{\ell}(r)Y_{\ell,m} (\omega)\,, \quad f_{\ell}(r):=\sum_{m=-\ell}^{\ell} \langle f(r, \cdot), Y_{\ell,m} \rangle\,, \qquad r\in\mathbb R^{+}\,,\omega\in\mathbb{S}^{2}\,,$$ where the inner product is understood in the sense of $L^{2}(\mathbb{S}^{2})$, $f_{\ell} \in L^{2}(\mathbb R^{+},\,r^2 dr)$ and $Y_{\ell,m}$ denotes the $(\ell,m)$-spherical harmonic, i.e. $-\Delta_{\mathbb{S}^2} Y_{\ell,m} = \ell (\ell+1) Y_{\ell,m}$. In this paper, we treat the radial sector, $\ell=0$, leaving the analysis of other angular momenta to a subsequent work. As such, we must understand the half-line Schrödinger operator $$\begin{aligned} -\frac{d^2}{dr^2}+\frac{q}{r}\,.\end{aligned}$$ This operator is not essentially self-adjoint (indeed, it is limit circle at $r=0$), so we must be careful to choose the correct self-adjoint extension whose dynamics coincide with those of $H_{0,q}$, which we denote $\mathcal{L}_q$. We refer the reader to the appendix for the construction of $\mathcal{L}_q$ and its relevant properties. To explicitly describe the time-evolution of $\mathcal{L}_q$, we derive its *distorted Fourier transform*, which diagonalizes the operator.
This consists of a distorted Fourier basis of appropriately selected generalized eigenfunctions of $\mathcal{L}_q$ and a spectral measure $\rho(d\sigma)$, which yields the representation $$\begin{aligned} \label{Lprop} e^{it\mathcal{L}_q}g (r) =\int\limits_0^{\infty}\int\limits_0^{\infty} e^{it \sigma ^2} \phi_q(\sigma,r) \phi_q(\sigma,s) g(s) \:ds \:\rho(d\sigma). \end{aligned}$$ While the existence of such a transform is quite classical for many classes of potentials, the potential $r^{-1}$ is *strongly singular*, so that additional care is required to develop its spectral theory. The distorted Fourier transform of strongly singular potentials has been studied in [@GZ]. However, the Coulomb potential is not singular enough to apply the results therein verbatim. In the appendix, we adapt the results of [@GZ] to treat the Coulomb case, and derive a distorted Fourier basis and spectral measure given by $$\begin{aligned} \label{phiandmeasure} \phi_q(\sigma,r) = (2i\sigma)^{-1} M_{\frac{iq}{2 \sigma}, \frac 12} ( 2 i \sigma r), \,\,\,\,\,\,\, d\rho(\sigma)= 2\mu^2(\sigma) d \sigma \end{aligned}$$ where $\mu^2(\sigma)= q\sigma [e^{\frac{q\pi}{\sigma}} -1]^{-1}$, $\mu \geq0$ and $M_{\frac{iq}{2 \sigma}, \frac 12} ( 2 i \sigma r)$ is the Whittaker $M$-function, see 13.14 in [@NIST]. We also mention the work of Fulton [@F], in which Frobenius solutions from zero are used to derive the distorted Fourier transform and spectral measure in the case of strongly singular potentials.
Substituting [\[phiandmeasure\]](#phiandmeasure){reference-type="eqref" reference="phiandmeasure"} into [\[Lprop\]](#Lprop){reference-type="eqref" reference="Lprop"} and employing the relation $H_{0,q} = r^{-1} \mathcal{L}_q r$, we obtain $$\begin{aligned} \label{eq.kernel} e^{itH_{0,q}}g(r)&= \frac{2q}{r}\int\limits_0^{\infty}\int\limits_0^{\infty} e^{it q^2\sigma^2} e(q\sigma, r) e(q\sigma, s) s g(s) ds d \sigma \\ &=\int\limits_0^{\infty} K_t(r,s) s^2 g(s) ds \nonumber\,,\end{aligned}$$ where $$\begin{aligned} \label{dfb} e(q\sigma, r) = \mu(q \sigma) \phi_1(q\sigma,r) ,\,\,\,\,\, K_t(r,s)= \frac{2q}{rs}\int\limits_0^{\infty} e^{it q^2\sigma^2} e(q\sigma, r) e(q\sigma, s) d \sigma. \end{aligned}$$ Therefore, Theorem [Theorem 1](#thm.main){reference-type="ref" reference="thm.main"} holds provided that we establish $$\begin{aligned} \label{eq.kernelest} \sup_{r\geq s>0} | K_{t}(r,s)|\lesssim t^{-3/2},\,\,\,\,\,\ t\geq 1. \end{aligned}$$ It should be stressed that, despite the apparent simplicity of $K_t$ in terms of special functions, we require several very delicate approximations in order to prove [\[eq.kernelest\]](#eq.kernelest){reference-type="eqref" reference="eq.kernelest"}. In particular, for use in the oscillatory integral defining $K_t$, it is important that we obtain $C^2$ control jointly in the variables $r$ and $\sigma$. While for large $\sigma$ known integral representations of $e(q\sigma,r)$ suffice, as $\sigma \to 0$ these turn out to be useless. Therefore, one needs to perform a detailed analysis of the eigenvalue problem $$\begin{aligned} \label{eigenfeq} -\frac{d^2 f}{dr^2} + \frac{f}{r} = \sigma^2 f\end{aligned}$$ as $\sigma\rightarrow 0$, in which we have conveniently normalized the charge constant to $q=1$. By rescaling, this loses no generality, so we adopt this convention from now on.
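For the reader's convenience, we spell out the reduction of the theorem to [\[eq.kernelest\]](#eq.kernelest){reference-type="eqref" reference="eq.kernelest"} (a one-line estimate, paraphrased here in our own words): since $K_t$ is symmetric in $r$ and $s$, the bound on $\sup_{r\geq s>0}|K_t|$ controls $K_t$ for all $r,s>0$, and hence for radial $g$, $$\begin{aligned} |e^{itH_{0,q}}g(r)| \leq \sup_{r,s>0}|K_t(r,s)| \int\limits_0^{\infty} s^2 |g(s)|\: ds = \frac{\sup_{r,s>0}|K_t(r,s)|}{4\pi}\, \|g\|_{L^1(\mathbb R^3)} \lesssim t^{-3/2}\, \|g\|_{L^1(\mathbb R^3)}\,,$$ uniformly in $r$, which is exactly [\[eq.dispersive\]](#eq.dispersive){reference-type="eqref" reference="eq.dispersive"} for radial data.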
The fundamental set of solutions to [\[eigenfeq\]](#eigenfeq){reference-type="eqref" reference="eigenfeq"} consists of $M_{\frac{i}{2 \sigma}, \frac{1}{2}} (2 i \sigma r)$ and $W_{\frac{i}{2 \sigma}, \frac{1}{2}} (2 i \sigma r)$, the latter being the Whittaker-$W$ function. With $\phi(\sigma,r):=\phi_1(\sigma,r)$, as $r$ approaches 0, for any fixed $\sigma>0$ $$\begin{aligned} \label{phiat0} \phi(\sigma,r)= r ( 1+ O(\sigma r))\end{aligned}$$ whereas for $r\rightarrow\infty$, by equations (13.14.32), (13.14.21), and (5.4.3) in [@NIST], we have that $$\begin{aligned} \label{MAsy} &\phi(\sigma,r) \sim C_0\sigma^{-\frac{1}{2}} [ e^{\frac{\pi}{\sigma}}-1]^{\frac 12} \sin(\Theta(\sigma,r))\\ &\Theta(\sigma,r)=\sigma r-\frac{\log(2\sigma r)}{2\sigma}+\theta_0(\sigma)\end{aligned}$$ for some absolute constant $C_0$ and phase correction $\theta_0(\sigma)$. Heuristically, one may see these asymptotics via the WKB method. Let $$\begin{aligned} \rho_{\sigma}(s,r)=\int\limits_s^{r} \sqrt{ \lvert Q \rvert } du,\,\,\,Q=u^{-1} - \sigma^2\end{aligned}$$ be the *Agmon distance* [@Ag Ch.5] from $s$ to $r$. To the right of the *turning point* $r_*=\sigma^{-2}$, WKB predicts that $\phi$ will grow as $e^{\rho_\sigma(0,r_*)}$ before oscillations set in, which are governed by $e^{i\rho_\sigma(r_*,r)}$. Since $$\begin{aligned} \rho_{\sigma}(0,r_*)=\frac{\pi}{2\sigma}\end{aligned}$$ and $$\begin{aligned} \rho_{\sigma}(r_*,r)=\sigma^{-1}\left(r\sigma^2-\frac{1}{2}\log(r\sigma^2)-\frac{1}{2}-\log(2)+O\left((r\sigma^2)^{-1}\right)\right)\,, \end{aligned}$$ the predictions of WKB exactly match the asymptotics ([\[MAsy\]](#MAsy){reference-type="ref" reference="MAsy"}), where $\sigma^{-\frac{1}{2}}$ may be regarded as the usual WKB prefactor of $Q^{-\frac{1}{4}}$.
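The value $\rho_{\sigma}(0,r_*)=\frac{\pi}{2\sigma}$ quoted above is an elementary computation, which we record for convenience: substituting $u=r_*\sin^2\theta=\sigma^{-2}\sin^2\theta$, so that $du=2\sigma^{-2}\sin\theta\cos\theta\, d\theta$ and $\sqrt{|Q|}=\sigma\cot\theta$, one finds $$\begin{aligned} \rho_{\sigma}(0,r_*)=\int\limits_0^{\sigma^{-2}}\sqrt{\frac{1}{u}-\sigma^2}\:du=\frac{2}{\sigma}\int\limits_0^{\pi/2}\cos^2\theta\: d\theta=\frac{\pi}{2\sigma}\,.\end{aligned}$$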
In equation [\[dfb\]](#dfb){reference-type="eqref" reference="dfb"}, the role of the multiplier $\mu$ is precisely to normalize the distorted Fourier basis so that $$\lim_{r\to\infty} |e(\sigma,r) - c_0\sin(\Theta(\sigma,r))|=0$$ for all $\sigma>0$ and an absolute constant $c_0 \in \mathbb R$. To make the approximation precise, we use the *Liouville--Green* (LG) transform [@Olver Ch. 6], which is standard practice for semi-classical problems with a simple turning point. Indeed, similar semi-classical problems with inverse square or exponentially decaying potentials have been studied in [@Wmain; @DSS; @Wexp; @survey2] (see Section 2 for a discussion on the differences). The LG transform is a change of variables $\zeta$ that transforms a second-order ODE with a simple turning point into a *perturbed Airy equation* of the form $$-\sigma^{2}\frac{d^2f}{d\zeta^2}=(\zeta+\sigma^2V(\zeta)) f$$ for a suitable potential $V$. In this way, we obtain fundamental systems of solutions to [\[eigenfeq\]](#eigenfeq){reference-type="eqref" reference="eigenfeq"} with good asymptotics on both sides of the turning point, see Proposition [Proposition 8](#smallexp){reference-type="ref" reference="smallexp"} and Proposition [Proposition 13](#oscBasisPr){reference-type="ref" reference="oscBasisPr"}. Unfortunately, the approximation on the left side of the turning point cannot be extended all the way to the origin due to the local singularity in $V(\zeta)$ introduced by the Coulomb potential. However, referring to Nakamura's investigation in [@Nak], it is reasonable to anticipate a rapid decay of $e(\sigma,r)$ as $\sigma$ approaches 0. In [@Nak], the resolvent operator of $-\Delta + W$ where $|W(x)| \lesssim\langle x \rangle^{-\rho}$ for $\rho \in(0,2)$ is studied.
It was proved that if $W$ is also positive then there exist $\beta, \gamma >0$, such that $$\begin{aligned} \label{Nakdec} \| F(|x| \leq \beta \sigma^{-2/\rho} ) F((-\Delta+W) \leq \sigma^2 )\|_{L^2 \to L^2} \lesssim\exp(- \gamma \sigma^{ 1-\frac{2}{\rho}} ), \,\,\,\, \sigma^2 \in (0,1]\end{aligned}$$ where $F(A)$ denotes the characteristic function of the set $A$. It is important to note that the Coulomb potential takes the form of $W$ with $\rho=1$ as $x$ becomes large. Therefore, we should compare our estimate on $e(\sigma,r)$ with $e^{-\frac{\gamma}{\sigma}}$ as $\sigma \to 0$. With this consideration, we employ a different transformation of [\[eigenfeq\]](#eigenfeq){reference-type="eqref" reference="eigenfeq"} around zero and derive an approximation to $e(\sigma,r)$ via modified Bessel functions [@NIST Ch.10] that captures the expected behavior, although it cannot be extended up to the turning point. We then connect this approximation to the Airy approximation mentioned above. Lemma [Lemma 5](#lem:bb){reference-type="ref" reference="lem:bb"} below provides a complementary perspective to the estimate given in [\[Nakdec\]](#Nakdec){reference-type="eqref" reference="Nakdec"}. It appears to us that one cannot avoid two connection problems when considering potentials that decay like $r^{-1}$. Indeed, in [@PSV], the same problem arose while performing a turning point analysis of a similar ODE arising in the context of the Klein-Gordon equation on a stationary spherically symmetric black hole. The approximations in terms of Airy and Bessel functions obtained in Section 5 of that work are similar to those we obtain in Section 2.
We also mention that while approximations of Coulomb eigenfunctions via Bessel and Airy functions have appeared in the literature since the 1950s [@AS; @ESI; @ESII], the form of these approximations is completely unsuitable for use inside the oscillatory integral ([\[eq.kernel\]](#eq.kernel){reference-type="ref" reference="eq.kernel"}) as we require explicit estimates of $C^{2}$-errors in both $r$ and the semi-classical parameter $\sigma$. To the best of our knowledge, the approximations derived in Sections 2 and 3 are novel. In particular, the results in the aforementioned references cannot be applied directly in the proof of estimate [\[eq.kernelest\]](#eq.kernelest){reference-type="eqref" reference="eq.kernelest"}, for which one needs the control of several derivatives in order to extract the time decay from the phase. See Section 4 for the oscillatory estimates that lead to the proof of [\[eq.kernelest\]](#eq.kernelest){reference-type="eqref" reference="eq.kernelest"}. ## Notation and conventions For the benefit of the reader, we define here the notation and conventions we use throughout this paper: - $\langle x \rangle= (1+x^2)^{\frac 12}$. - $f(x) \lesssim g(x)$ indicates that there exists some constant $C>0$ so that $f(x) \leq Cg(x)$ for all $x$ in some specified domain. We will also use this notation for functions that depend on several variables. - $f(x) \sim g(x)$ indicates that $cg(x) \leq f(x) \leq Cg(x)$ for some $C>c>0$ independent of $x$. - $a(x,\sigma) = O_k(\sigma^m x^p )$ indicates that $|\partial^j_\sigma \{a(x,\sigma)\}| \lesssim\sigma^{m-j}x^p$ for $j= 0,1,\dots ,k$. - $\chi_c(x):\mathbb{R}_+\rightarrow\mathbb{R}$ is a smooth cut-off function supported on $[0,c]$ that is equal to $1$ when $x\leq \frac{2c}{3}$, and $\tilde{\chi}_c(x)= 1 - \chi_c(x)$. - For two functions $f(x)$ and $g(x)$, $W[f,g](x)$ denotes their Wronskian $$\begin{aligned} W[f,g](x)=f(x)g'(x)-f'(x)g(x)\,.\end{aligned}$$ ## Organization of the paper The paper is organized as follows.
In Sections 2 and 3, we derive approximations to the Whittaker M-function $e(\sigma,r)$ for small and large energies $\sigma$, respectively. We devote Section 4 to the proof of our main Theorem [Theorem 1](#thm.main){reference-type="ref" reference="thm.main"}. For this purpose, we employ the previous eigenfunction approximations to establish the oscillatory integral estimate [\[eq.kernelest\]](#eq.kernelest){reference-type="eqref" reference="eq.kernelest"}, which leads immediately to the estimate [\[eq.dispersive\]](#eq.dispersive){reference-type="eqref" reference="eq.dispersive"}. Finally, in an Appendix, we provide the details on the construction of the distorted Fourier basis [\[dfb\]](#dfb){reference-type="eqref" reference="dfb"} and thus demonstrate the form of the kernel $K_t$. ## Acknowledgement {#acknowledgement .unnumbered} This research was conducted as part of the Brown-Yale PDE seminar. We thank the other participants for stimulating discussions. We are particularly grateful to the organizers, Benoît Pausader and Wilhelm Schlag, who suggested this problem and provided guidance in the writing of this paper. We also thank Wilhelm for notes that formed the basis of the Appendix and Haram Ko for helpful revisions to those notes. # Eigenfunction approximation: Small Energies The main aim of this section is to find a good approximation for the distorted Fourier basis $e(\sigma,r) = -i \sigma^{-\frac 12}[e^{ \frac{\pi}{\sigma}} -1]^{-\frac 12} M_{ \frac{i}{2 \sigma}, \frac 12} ( 2 i \sigma r )$ when $\sigma <c$ for some sufficiently small $c$. The principal findings from this section that we employ for the analysis of the oscillatory integrals are summarized in Proposition [Proposition 14](#langerBounds){reference-type="ref" reference="langerBounds"}, Corollary [Corollary 15](#oscCor){reference-type="ref" reference="oscCor"}, and Corollary [Corollary 18](#cor:low){reference-type="ref" reference="cor:low"}. In what follows, we take $q=1$.
In particular, we construct approximate solutions to the ODE $$\begin{aligned} \label{rODE} -f''(r)+\frac{f(r)}{r}=\sigma^2f(r)\end{aligned}$$ for $\sigma$ small. It is convenient to change to the variable $x=\sigma^2 r$ whereupon ([\[rODE\]](#rODE){reference-type="ref" reference="rODE"}) becomes $$\begin{aligned} \label{xODE} -\sigma^{2}f''(x)+(x^{-1}-1)f(x)=0\,.\end{aligned}$$ As $\sigma\rightarrow 0$, [\[xODE\]](#xODE){reference-type="eqref" reference="xODE"} is a semi-classical problem with a simple turning point at $x=1$. Thus, we use the Liouville-Green transform to obtain solutions in terms of Airy functions, which is standard practice for such ODEs (see Chapter 11 in [@Olver]). In this section, we closely follow [@Wmain] where a similar analysis is performed for an ODE with a potential with $r^{-2}$ asymptotics. Unlike in [@Wmain], we cannot extend the Airy function approximation to a neighborhood of $x=0$. Indeed, the error in the approximation blows up in $\sigma$ as $x \to 0$ due to the exponential growth of $\mathop{\mathrm{Bi}}(z)$ as $z \to \infty$, see Proposition [Proposition 8](#smallexp){reference-type="ref" reference="smallexp"}. Therefore, in this regime we resort to an approximation in terms of the modified Bessel function $I_1$. Note that Erdélyi and Swanson use $I_1$ in [@ESII] to approximate the Whittaker function $M_{ \frac{i}{2 \sigma}, \frac 12} ( 2 i \sigma r )$, but the error term derived there is not suitable for our purposes. Therefore, our analysis in the next section may be regarded as a refined version of theirs. ## Bessel function approximation: $x\ll 1$ In this section, we approximate $e(\sigma,r)$ in terms of the modified Bessel function $I_1$ when $x\in [0,\delta]$ for some $\delta<1$. 
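Although not needed in the sequel, it may help the reader to note that the Liouville--Green variable $\eta(x)=\int_0^x\sqrt{y^{-1}-1}\,dy$ introduced in the next lemma admits the elementary closed form $\eta(x)=\arcsin(\sqrt{x})+\sqrt{x(1-x)}$ on $[0,1]$ (our own observation, easily checked by differentiation). A short numerical sanity sketch:

```python
import math

def eta(x):
    # closed form of the integral from 0 to x of sqrt(1/y - 1) dy, valid on [0, 1]
    return math.asin(math.sqrt(x)) + math.sqrt(x * (1 - x))

# eta maps [0, 1] onto [0, pi/2]
assert eta(0.0) == 0.0
assert abs(eta(1.0) - math.pi / 2) < 1e-12

# eta'(x) = sqrt(1/x - 1), checked by a centered difference quotient
for x in [0.1, 0.3, 0.5, 0.7, 0.9]:
    h = 1e-6
    num = (eta(x + h) - eta(x - h)) / (2 * h)
    assert abs(num - math.sqrt(1 / x - 1)) < 1e-6

# eta(x) = 2*sqrt(x) + O(x^{3/2}) near zero, as used in the lemma
for x in [1e-2, 1e-4, 1e-6]:
    assert abs(eta(x) - 2 * math.sqrt(x)) < x ** 1.5
```

In particular, the second check confirms the small-$x$ expansion of $\eta$ that drives the smoothness of $\tilde V_-$ near the origin.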
Let $Q(x) = x^{-1} -1$ and for $0\leq x\leq 1$ define $$\begin{aligned} \label{etadef} \eta(x) =\int\limits_0^x \sqrt{Q(y)}\, dy\,.%, \,\,\,\,\,\,\,\, f = \Big( \frac{\eta}{ \eta^{\prime}} \Big)^{\f 12} \omega( \eta) \end{aligned}$$ The properties of this function are summarized in the following lemma: **Lemma 2**. *The map $\eta$ is a smooth transformation from $[0,1]$ to $[0,\frac{\pi}{2}]$ and with respect to the change of variables $$\begin{aligned} \omega(\eta)=p^{\frac{1}{2}}f,\,\,p= \frac{\eta'}{\eta} \end{aligned}$$ ([\[xODE\]](#xODE){reference-type="ref" reference="xODE"}) transforms to $$\begin{aligned} \label{bessel} \sigma^2 \big( \ddot{\omega} (\eta) + \frac{ \dot{\omega} (\eta)}{ \eta} \big) - \omega(\eta) = \sigma^2 V_-(\eta) \omega(\eta) \end{aligned}$$ where $$\begin{aligned} V_-(\eta)= \eta^{-1}p^{-\frac{1}{2}}\frac{dp^{\frac{1}{2}}}{d\eta}+p^{-\frac{1}{2}}\frac{d^2p^{\frac{1}{2}}}{d\eta^2} \end{aligned}$$ Here, $\dot{ }$ represents the derivative with respect to $\eta$ and $'$ the derivative with respect to $x$.* *Furthermore, we may write $$\begin{aligned} V_-(\eta)=\frac{1}{\eta^2}+\tilde{V}_-(\eta)\end{aligned}$$ for $\tilde{V}_-$ smooth on any interval of the form $[0,\delta]$, $\delta<\frac{\pi}{2}$.* *Proof.* The smoothness of $\eta$ is clear. 
Furthermore, one computes that $$\begin{aligned} &\dot{\omega}=\eta^{-1}p^{-\frac{1}{2}}f'+\frac{dp^{\frac{1}{2}}}{d\eta}f\\ \begin{split} \ddot{\omega}&=-\eta^{-2}p^{-\frac{1}{2}}f'-\eta^{-1} p^{-1}\frac{dp^{\frac{1}{2}}}{d\eta}f'+\eta^{-2}p^{-\frac{3}{2}}f''+\eta^{-1}p^{-1}\frac{dp^{\frac{1}{2}}}{d\eta}f'+\frac{d^2p^{\frac{1}{2}}}{d\eta^2}f\\ &=\eta^{-2}p^{-\frac{3}{2}}f''-\eta^{-1}\dot{\omega}+\left( \eta^{-1}p^{-\frac{1}{2}}\frac{dp^{\frac{1}{2}}}{d\eta}+p^{-\frac{1}{2}}\frac{d^2p^{\frac{1}{2}}}{d\eta^2} \right) \omega \end{split}\end{aligned}$$ and thus, using that $\sigma^2f''=Qf$ and that $(\eta')^2=Q$, the above expression for $\ddot{\omega}$ may be rewritten as ([\[bessel\]](#bessel){reference-type="ref" reference="bessel"}). Furthermore, by the chain rule, one can calculate $$\begin{aligned} V_-(\eta) &= \frac{1}{ 4 \eta^2} - \frac{3}{4} \frac{ (\eta^{\prime \prime})^2}{ (\eta^{\prime})^4} + \frac{1}{2} \frac{\eta^{\prime \prime \prime}}{ (\eta^{\prime})^3} \\ &= \frac{1}{ \eta^2} +\Big[ - \frac{3}{4} \frac{ (\eta^{\prime \prime})^2}{ (\eta^{\prime})^4} + \frac{1}{2} \frac{\eta^{\prime \prime \prime}}{ (\eta^{\prime})^3} - \frac{3}{4 \eta^2} \Big] = \frac{1}{ \eta^2} + \tilde{V}_-(\eta)\,.\end{aligned}$$ Moreover, since $\eta(x) = 2x^{\frac 12} + O_{\infty}( x^{3/2})$ for $x < 1$, one has $|\partial^j_{\eta}\tilde{V}_-(\eta)| \lesssim 1$ for $j=0,1,\dots$ on any interval $[0,\delta]$, $\delta < \pi/2$. ◻ With Lemma [Lemma 2](#etaLem){reference-type="ref" reference="etaLem"} in hand, we look for the solution to [\[bessel\]](#bessel){reference-type="eqref" reference="bessel"} that is relevant to $e(\sigma,r)$. Recall that the equation [\[rODE\]](#rODE){reference-type="eqref" reference="rODE"} has a basis of solutions given by $M_{\frac{i}{2\sigma},\frac 12}(2 i x/\sigma )$, which is a multiple of $e(\sigma, x/\sigma^2 )$, and $W_{\frac{i}{2\sigma},\frac 12}(2 i x/\sigma )$.
For any fixed $\sigma$, the former vanishes to first order at $x=0$ whereas the latter is non-vanishing there [@NIST (13.14.17)]. Transforming these solutions under the change of variables defined in the above Lemma, the relation $p^{\frac 12}\sim x^{-\frac 12}$ shows that [\[bessel\]](#bessel){reference-type="eqref" reference="bessel"} must have two linearly independent solutions $\phi_-$ and $\phi_+$ satisfying the asymptotics $$\begin{aligned} \label{phi12} \phi_-(\sigma,\eta) \sim \eta \,\,\,\ \text{and} \,\,\, \phi_+(\sigma,\eta) \sim \eta^{-1} \end{aligned}$$ as $\eta\rightarrow 0$. Therefore, $\phi_-$ is characterized, up to scaling, as the unique solution to [\[bessel\]](#bessel){reference-type="eqref" reference="bessel"} that is vanishing (or even finite) at $\eta=0$. In the following proposition we identify $\phi_-$ in terms of $I_1$ and connect it to $e(\sigma, x/\sigma^2 )$ using $$\begin{aligned} \label{smap} e(\sigma, x/\sigma^2 ) &= -i \sigma^{-\frac 12} [e^{\frac{\pi}{\sigma}} -1]^{-\frac 12} M_{\frac{i}{2\sigma},\frac 12}(2ix/\sigma) \\ &= 2\sigma^{-\frac 32} [e^{\frac{\pi}{\sigma}} -1]^{-\frac 12} x ( 1+ O(x/\sigma)) \,\,\,\, \text{as} \,\,\,\ x \to 0 ,\nonumber\end{aligned}$$ which follows from [\[phiat0\]](#phiat0){reference-type="eqref" reference="phiat0"}. **Proposition 3**.
*For any $\delta\in(0,\frac{\pi}{2})$, there exists $c>0$ such that for all $\sigma \in [0,c)$, on $\eta\in[0,\delta]$, $e(\sigma,x/\sigma^2)$ has the form $$\begin{aligned} \label{exp:bessel} e(\sigma, x/\sigma^2) =2 \sqrt{2} \sigma^{-\frac 12} [ e^{\frac {\pi}{\sigma}} -1]^{-\frac 12} \Big( \frac{\eta}{ \eta^{\prime}} \Big)^{\frac 12} I_1(\eta/\sigma) (1+a_-(\sigma,\eta))\,, \end{aligned}$$ where $\rho= \sigma^{-1} \eta$, $I_1$ is the modified Bessel function of the first kind, and $a_-$ is a smooth function satisfying the bounds $$\begin{aligned} &|a_-( \sigma,\eta)|\lesssim\sigma ,\,\,\ \lvert \dot{a}_-(\sigma,\eta) \rvert\lesssim\sigma, \,\,\,\,\,\, |\ddot{a}_-( \sigma,\eta)|\lesssim 1, \\ &|\partial_\sigma\{a_-(\sigma,\eta)\}|\lesssim 1,\,\,\lvert \partial_{\sigma }\{\dot{a}_-(\sigma,\eta)\} \rvert\lesssim\sigma^{-2} ,\,\,|\partial_\sigma^2\{a_-(\sigma,\eta)\}|\lesssim\sigma^{-2}\end{aligned}$$ uniformly on $[0,\delta]$.* **Remark 4**. *We remark that one can see from the calculations in Lemma [Lemma 2](#etaLem){reference-type="ref" reference="etaLem"} that ${\tilde{V}_- =O((\eta - \pi/2)^{-3})}$ as $\eta \to \pi/2$. 
This is the main reason why we cannot extend the domain of $\eta$ past the turning point $\eta=\frac{\pi}{2}$.* *Proof.* In the variable $\rho= \sigma^{-1} \eta$, the equation [\[bessel\]](#bessel){reference-type="eqref" reference="bessel"} becomes the perturbed Bessel equation $$\begin{aligned} \partial^2_\rho\{ \omega(\rho)\} + \rho^{-1} \partial_\rho\{ \omega(\rho)\} - ( \rho^{-2} +1) \omega(\rho) = \sigma^2 \tilde{V}_-( \sigma \rho) \omega(\rho)\,.\end{aligned}$$ A fundamental system for the homogeneous equation $$\begin{aligned} \partial^2_\rho\{ \omega(\rho)\} + \rho^{-1} \partial_\rho\{ \omega(\rho)\} - ( \rho^{-2} +1) \omega(\rho) =0\end{aligned}$$ is given by modified Bessel functions of first order $I_1(\rho),K_1(\rho)$ so that by variation of parameters, the function $$\begin{aligned} &\phi_-(\sigma,\rho)=I_1(\rho)+ \sigma^2 \int\limits_0^\rho \frac{ [-I_1(\rho) K_1(u) +K_1(\rho) I_1(u)] \tilde{V}_-( \sigma u )\phi_-(\sigma, u)}{ W(K_1(u), I_1(u))} \:du \end{aligned}$$ solves [\[bessel\]](#bessel){reference-type="eqref" reference="bessel"} (provided that the integral on the right converges) and vanishes at $\rho=0$.
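The Wronskian normalization that enters the next step is $W[K_1,I_1](u)=K_1(u)I_1'(u)-K_1'(u)I_1(u)=u^{-1}$ [@NIST (10.28.2)]. As a numerical sanity check (our own sketch, not part of the argument), this can be verified from the standard integral representations $I_1(u)=\frac{1}{\pi}\int_0^\pi e^{u\cos t}\cos t\, dt$ and $K_1(u)=\int_0^\infty e^{-u\cosh t}\cosh t\, dt$ [@NIST (10.32.3), (10.32.9)], whose $u$-derivatives are obtained by differentiating under the integral sign:

```python
import math

def I1(u, n=2000):
    # I_1(u) = (1/pi) * int_0^pi e^{u cos t} cos t dt
    h = math.pi / n
    s = sum((0.5 if k in (0, n) else 1.0) * math.exp(u * math.cos(k * h)) * math.cos(k * h)
            for k in range(n + 1))
    return h * s / math.pi

def dI1(u, n=2000):
    # I_1'(u) = (1/pi) * int_0^pi e^{u cos t} cos^2 t dt
    h = math.pi / n
    s = sum((0.5 if k in (0, n) else 1.0) * math.exp(u * math.cos(k * h)) * math.cos(k * h) ** 2
            for k in range(n + 1))
    return h * s / math.pi

def K1(u, T=25.0, n=4000):
    # K_1(u) = int_0^inf e^{-u cosh t} cosh t dt  (integrand is negligible beyond t = T)
    h = T / n
    s = sum((0.5 if k in (0, n) else 1.0) * math.exp(-u * math.cosh(k * h)) * math.cosh(k * h)
            for k in range(n + 1))
    return h * s

def dK1(u, T=25.0, n=4000):
    # K_1'(u) = - int_0^inf e^{-u cosh t} cosh^2 t dt
    h = T / n
    s = sum((0.5 if k in (0, n) else 1.0) * math.exp(-u * math.cosh(k * h)) * math.cosh(k * h) ** 2
            for k in range(n + 1))
    return -h * s

for u in [0.5, 1.0, 2.0, 5.0]:
    wronskian = K1(u) * dI1(u) - dK1(u) * I1(u)
    assert abs(wronskian - 1 / u) < 1e-3 / u  # W[K_1, I_1](u) = 1/u
```

The same normalization is what produces the factor $u$ in the integrand of the fixed-point equation below.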
Evaluating the Wronskian via [@NIST (10.28.2)], plugging in the ansatz $\phi_-(\sigma,\rho)=I_1(\rho)(1+ a_-(\sigma,\eta))$ with $\eta=\sigma\rho$, and noting that $I_1 (u)$ has no positive real zeros we obtain the equation for $a_-(\sigma,\eta)$ $$\begin{aligned} \label{a-} a_-(\sigma, \eta) &= \sigma^2\int\limits_0^\rho u [K_1(u)I_1(u) - I_1^{-1} (\rho)K_1(\rho) I_1^2(u)] \tilde{V}_-( \sigma u ) ( 1+ a_- (\sigma, \sigma u ))\: du \\ & =: \sigma^2\int\limits_0^{\rho} M(\sigma,u) ( 1+ a_- (\sigma, \sigma u )) du \nonumber\,.\end{aligned}$$ We will first prove that $a_-(\sigma, \eta)$ is well-defined and bounded by analyzing the leading term $$\begin{aligned} a_{-,0}(\sigma, \eta) := \sigma^2\int\limits_0^\rho u [K_1(u) I_1(u)- I_1^{-1} (\rho)K_1(\rho) I^2_1(u)] \tilde{V}_-( \sigma u ) du \,.\end{aligned}$$ For this, we record the following bounds on $I_1$ and $K_1$: $$\begin{aligned} \begin{split}\label{IKbounds} &\lvert \partial_{u}^j\{K_1(u)\} \rvert \lesssim u^{-1-j}\langle u \rangle^{j+\frac{1}{2}} e^{-u},\,\,\,\,\,\ j=0,1,2,\dots \\ & \lvert \partial_u^j\{I_1(u)\} \rvert \sim u^{1-j}\langle u \rangle^{j-\frac{3}{2}} e^{u}, \,\,\,\,\,\ j=0,1, \\ &\lvert \partial_u^2\{I_1(u)\} \rvert \lesssim u\langle u \rangle^{-\frac{3}{2}} e^u\,, \end{split}\end{aligned}$$ which may easily be deduced from [@NIST (10.30.1-2) and (10.40.1-2)].
In particular, they imply that $$\begin{aligned} &| I_1(u) K_1(u)| \lesssim\langle u \rangle^{-1}\\ &|I_1^{-1} (\rho)K_1(\rho)| \lesssim\rho^{-2} \langle\rho \rangle^2e^{-2\rho} \\ &|I_1^2(u)| \lesssim u^2 \langle u \rangle^{-3} e^{2u} \,.\end{aligned}$$ Therefore, since by Lemma [Lemma 2](#etaLem){reference-type="ref" reference="etaLem"} $\tilde{V}_-(\sigma u)$ is bounded for $u\leq \rho$, which is in turn less than $\sigma^{-1}\delta$, we may write $$\begin{aligned} \begin{split}\label{ltone} |a_{-,0}(\sigma, \eta)| &\lesssim\sigma^2 \int\limits_0^{\rho} u\left\langle u \right\rangle^{-1}\: du + \sigma^2 \rho^{-2} \left\langle \rho \right\rangle^{2} e^{-2 \rho}\int\limits_0^{\rho } u^3\left\langle u \right\rangle^{-3} e^{2u} du \\ &\lesssim\sigma^2 \langle\rho \rangle+\sigma^2\lesssim\sigma\,. \end{split}\end{aligned}$$ Moreover, as $x \to 0$, we have $|a_{-,0}(\sigma, \eta)| \lesssim\sigma^2 \rho^2 \lesssim x$. These bounds may easily be extended to $a_-(\sigma, \eta)$ itself by a contraction argument. In particular, we simply think of ([\[a-\]](#a-){reference-type="ref" reference="a-"}) as the fixed point equation $$\begin{aligned} a_-=T(1)+ T(a_-)\end{aligned}$$ for the linear operator $T$ given by $Ta= \sigma^2\int\limits_0^\rho M(\sigma,u) a(\sigma u)\: du$. For $\sigma$ small enough, our computations show that $T$ is a contraction on $L^\infty_\eta[0,\delta]$ and moreover that $T(1)$ lies in this space. This implies that the $L^{\infty}$ norm of the fixed point, given by $a_- =\sum_{n=0}^\infty T^{n+1}(1)$, is bounded by a constant times the norm of the first term, which is $O\left( \sigma \right)$. Having established the existence and boundedness of $a_-(\sigma,\eta)$, we now turn to the bounds on its derivatives. We first treat the $\eta$-derivative.
We have that $$\begin{aligned} \dot{a}_- (\sigma, \eta) = -\sigma \partial_{\rho} \{I_1^{-1} (\rho)K_1(\rho)\}\int_0^\rho u I^2_1(u) \tilde{V}_-( \sigma u ) ( 1+ a_- (\sigma, \sigma u )) du\,.\end{aligned}$$ To estimate this integral, we use that by [\[IKbounds\]](#IKbounds){reference-type="eqref" reference="IKbounds"} $$\begin{aligned} \lvert \partial_{\rho} \{I_1^{-1} (\rho)K_1(\rho)\} \rvert \lesssim\rho^{-3} \langle\rho \rangle^{3} e^{- 2\rho}\end{aligned}$$ so that $$\begin{aligned} \lvert \dot{a}_{-}(\sigma,\eta) \rvert \lesssim\sigma \rho^{-3}\left\langle \rho \right\rangle^{3} e^{-2\rho}\int_{0}^{\rho}u^3\left\langle u \right\rangle^{-3} e^{2u} \: du \lesssim\sigma\,.\end{aligned}$$ Differentiating again, we find that $$\begin{aligned} \ddot{a}_- (\sigma, \eta) &= -\partial_{\rho} \{I_1^{-1} (\rho)K_1(\rho)\}\rho I^2_1(\rho) \tilde{V}_-( \eta ) ( 1+ a_- (\sigma, \eta ))\\ &-\partial^2_{\rho} \{I_1^{-1} (\rho)K_1(\rho)\}\int\limits_0^\rho uI^2_1(u) \tilde{V}_-( \sigma u ) ( 1+ a_- (\sigma, \sigma u )) \:du\,.\end{aligned}$$ By [\[IKbounds\]](#IKbounds){reference-type="eqref" reference="IKbounds"}, the first term is uniformly bounded. For the second, we use $$\begin{aligned} \lvert \partial^2_{\rho} \{I_1^{-1} (\rho)K_1(\rho)\} \rvert \lesssim\rho^{-4} \langle\rho \rangle^{4} e^{-2 \rho} \end{aligned}$$ to argue similarly that the second term is uniformly bounded as well. Finally, we estimate the $\sigma$-derivatives of $a_-(\sigma, \eta)$.
For the first derivative, we have $$\begin{aligned} \partial_\sigma \{a_-(\sigma, \eta )\}& =\sigma^{-1} (2 a_-(\sigma,\eta) - \eta \dot{a}_-(\sigma,\eta))\nonumber\\ & + \sigma^2\int\limits_0^\rho u^2 [K_1(u) - I_1^{-1} (\rho)K_1(\rho) I_1(u)] \tilde{V}_-'( \sigma u )I_1(u) ( 1+ a_- (\sigma, \sigma u ))\: du\nonumber\\ & + \sigma^2\int\limits_0^\rho u^2 [K_1(u) - I_1^{-1} (\rho)K_1(\rho) I_1(u)] \tilde{V}_-( \sigma u )I_1(u) \dot{a}_- (\sigma, \sigma u )\: du\nonumber\\ &+ \sigma^2\int\limits_0^\rho u [K_1(u) - I_1^{-1} (\rho)K_1(\rho) I_1(u)] \tilde{V}_-( \sigma u ) I_1(u) \partial_\sigma\{ a_- (\sigma,x)\}|_{x=\sigma u} \:du \nonumber\\ &=:A_1(\sigma,\eta)+A_2(\sigma,\eta)+A_3(\sigma,\eta)+A_4(\sigma,\eta)\,.\nonumber\end{aligned}$$ By the bounds on $a_-(\sigma,\eta)$ and $\dot{a}_- (\sigma, \eta)$, it is clear that $|A_1(\sigma,\eta)|\lesssim 1$. Since $\tilde{V}_-$ is uniformly smooth by Lemma [Lemma 2](#etaLem){reference-type="ref" reference="etaLem"}, it is easy to argue as for the bound $\lvert a_{-,0} \rvert \lesssim\sigma$ that $\lvert A_2(\sigma,\eta) \rvert \lesssim 1$, the only difference being an extra power of $u$ in the integral. Similarly, $\lvert A_3(\sigma,\eta) \rvert\lesssim\sigma$ due to the previously derived bound $\lvert \dot{a}_-(\sigma,\eta) \rvert\lesssim\sigma$.
We have shown then that $\partial_{\sigma}\{a_-(\sigma,\eta)\}$ satisfies a fixed point equation of the form $$\begin{aligned} \partial_{\sigma}\{a_-(\sigma,\eta)\} =O(1)+ \sigma^2\int\limits_0^\rho u [K_1(u) - I_1^{-1} (\rho)K_1(\rho) I_1(u)] \tilde{V}_-( \sigma u ) I_1(u) \partial_\sigma\{ a_- (\sigma, \sigma u )\} \:du \end{aligned}$$ and since we have already shown via ([\[ltone\]](#ltone){reference-type="ref" reference="ltone"}) that the last term is bounded in terms of $\sigma\sup_{\eta\in [0,\delta)} \lvert \partial_\sigma\{ a_- (\sigma, \eta )\} \rvert$, we may iterate this equation to find that, for $\sigma$ sufficiently small, $\partial_{\sigma} \{a_-(\sigma,\eta)\}$ is uniformly bounded independent of $\sigma$ on the domain under consideration. For the mixed $\sigma$ and $\eta$ derivative, we first compute that $$\begin{aligned} &\partial_{\sigma}\{\dot{a}_-(\sigma,\eta)\} =\sigma^{-1}\dot{a}_-(\sigma,\eta)-\sigma^{-1}\eta\ddot{a}_{-}(\sigma,\eta)\\ &-\sigma \partial_{\rho} \{I_1^{-1}(\rho)K_1(\rho)\}\int\limits_{0}^{\rho} u^2I_1^2(u)\tilde{V}_-'(\sigma u)(1+a_-(\sigma,\sigma u))\: du \\ &-\sigma \partial_{\rho} \{I_1^{-1}(\rho)K_1(\rho)\}\int\limits_{0}^{\rho} u^2I_1^2(u)\tilde{V}_-(\sigma u)\dot{a}_-(\sigma,\sigma u)\: du \\ &-\sigma \partial_{\rho} \{I_1^{-1}(\rho)K_1(\rho)\}\int\limits_{0}^{\rho} uI_1^2(u)\tilde{V}_-(\sigma u)\partial_\sigma \{a_-(\sigma,x)\}|_{x=\sigma u} \: du\,.\end{aligned}$$ The first two terms are bounded by $\sigma^{-1}$ and the third and fourth are bounded by a constant times $$\begin{aligned} \sigma\rho^{-3} \left\langle \rho \right\rangle^{3} e^{-2\rho}\int_{0}^{\rho} u^4\left\langle u \right\rangle^{-3} e^{2u} \: du \lesssim\sigma\left\langle \rho \right\rangle^{2} \lesssim\sigma^{-1}\,.\end{aligned}$$ Similarly, the last term is $O(\sigma^{-1})$ by the fact that $\partial_{\sigma}\{a_-(\sigma,\eta)\}$ is bounded. The second $\sigma$-derivative is now estimated by differentiating each of $A_i$ for $i=1,2,3,4$ in turn.
It is easy to see from the bounds we have already developed that $$\begin{aligned} \lvert \partial_{\sigma}\{A_1(\sigma,\eta)\} \rvert \lesssim\sigma^{-2}\,,\end{aligned}$$ the dominant term being $\sigma^{-1}\eta \partial_{\sigma}\{\dot{a}_-(\sigma,\eta) \}$. Furthermore, $$\begin{aligned} \lvert \partial_{\sigma} \{A_2(\sigma,\eta)\} \rvert &\lesssim\sigma^{-1} \lvert A_2(\sigma,\eta) \rvert+\Big|\eta\partial_{\sigma}\{I_1^{-1}(\rho)K_1(\rho)\} \int\limits_{0}^{\rho} u^2I_1^2(u)\: du \Big|\\ &+\sigma^2 \Big|\int_0^\rho u^3 [-K_1(u)I_1(u) + I_1^{-1} (\rho)K_1(\rho) I_1^2(u)] \: du \Big|\end{aligned}$$ which is in total bounded in terms of $\sigma^{-1}$. By the same token, $$\begin{aligned} \lvert \partial_{\sigma}\{A_3(\sigma,\eta)\} \rvert &\lesssim\sigma^{-1}|A_3(\sigma,\eta)|+\sigma\Big|\eta\partial_{\sigma}\{I_1^{-1}(\rho)K_1(\rho)\} \int\limits_{0}^{\rho} u^2I_1^2(u)\: du \Big|\\ &+\sigma \Big|\int\limits_0^\rho u^2\langle u \rangle[K_1(u)I_1(u) - I_1^{-1} (\rho)K_1(\rho) I_1^2(u)] \: du\Big| \end{aligned}$$ where we have used that $\lvert \partial_{\sigma}\{\dot{a}_-(\sigma,\eta)\} \rvert\lesssim\sigma^{-1}$. As before, we may argue that all three terms are bounded by a constant times $\sigma^{-1}$.
One can also show that $$\begin{aligned} \partial_\sigma\{A_4(\sigma,\eta)\}=O(\sigma^{-1})+ \sigma^2\int\limits_0^\rho u [K_1(u) - I_1^{-1} (\rho)K_1(\rho) I_1(u)] \tilde{V}_-( \sigma u ) I_1(u) \partial_\sigma^2\{ a_- (\sigma, \sigma u )\} \:du\end{aligned}$$ so that $$\begin{aligned} \partial_\sigma^2\{ a_- (\sigma, \eta )\}=O(\sigma^{-1})+ \sigma^2\int\limits_0^\rho u [K_1(u) - I_1^{-1} (\rho)K_1(\rho) I_1(u)] \tilde{V}_-( \sigma u ) I_1(u) \partial_\sigma^2\{ a_- (\sigma, \sigma u )\} \:du\,.\end{aligned}$$ Since the last term is bounded in terms of $\sigma\sup_{\eta\in [0,\delta)} \lvert \partial_\sigma^2\{ a_- (\sigma, \eta )\} \rvert$, as before we can iterate for small enough $\sigma$ to see that in fact $\partial_{\sigma}^2\{a_-(\sigma,\eta)\} =O(\sigma^{-1})$, as claimed. Finally, we match the solution $\phi_-(\sigma,\eta)$ to $e(\sigma,x/\sigma^2)$. For any fixed $\sigma$, as $x \to 0$ we have $$\begin{aligned} \Big( \frac{\eta(x)}{ \eta^{\prime}(x)} \Big)^{\frac 12} \phi_-(\sigma,\rho) = \frac{\sqrt{2} x} { \sigma} (1+ O( x/\sigma)) \,,\end{aligned}$$ as a consequence of [@NIST (10.30.1)]. Comparing this expansion to [\[smap\]](#smap){reference-type="eqref" reference="smap"} we obtain [\[exp:bessel\]](#exp:bessel){reference-type="eqref" reference="exp:bessel"}. ◻ Before we focus on the other regimes of $x$, we give the following bounds which will be useful for the oscillatory estimates. **Lemma 5**. *For any $\delta\in (0,1)$ there exist $c>0$ and $\varepsilon>0$ so that $$\begin{aligned} \label{besselbound} |\partial^j_\sigma \{ e(\sigma, r)\}| \lesssim e^{-\frac{\varepsilon}{\sigma}} r,\,\,\,\ j=0,1,2\end{aligned}$$ uniformly for $\sigma\in[0,\min\{c,\delta r^{-\frac{1}{2}}\}]$ and $r>0$.* *Proof.* We will use throughout the proof that we are considering $\sigma$ so that $x=\sigma^2r\leq \delta<1$.
First, this allows us to apply Proposition [\[prop:bessel\]](#prop:bessel){reference-type="ref" reference="prop:bessel"} on the interval $[0,\eta(\delta)]$ to obtain the representation [\[exp:bessel\]](#exp:bessel){reference-type="eqref" reference="exp:bessel"}. By series expansion, $$\begin{aligned} \eta(\sigma^2 r) = 2 (\sigma^2 r) ^{\frac 12} -{\frac 13} (\sigma^2 r)^{\frac 32} + O_2( (\sigma^2 r)^{\frac 52}) \end{aligned}$$ which gives $$\begin{aligned} \Big( \frac{\eta}{ \eta^{\prime}} \Big)^{\frac 12}(\sigma^2r) = \sqrt{2} \sigma r^{\frac 12} ( 1+ O_2(\sigma^2 r))\,.\end{aligned}$$ Therefore, using [\[IKbounds\]](#IKbounds){reference-type="eqref" reference="IKbounds"} we obtain $$\begin{aligned} |I_1 \Big(\frac{\eta(\sigma^2 r)}{\sigma}\Big) | \lesssim\min \Big\{ 1, \Big| \frac{\eta(\sigma^2 r)}{\sigma}\Big| \Big\} e^{\frac{\eta(\sigma^2 r)}{\sigma}} \lesssim e^{\frac{\pi}{2\sigma} - \frac{\varepsilon}{\sigma}} \min\{1,r^{\frac 12} \} \,.\end{aligned}$$ for $\varepsilon<\frac{\pi}{2}-\eta(\delta)$. Hence, by [\[exp:bessel\]](#exp:bessel){reference-type="eqref" reference="exp:bessel"} $$\begin{aligned} |e(\sigma, r)|\lesssim\sigma^{-\frac 12}[ e^{\frac {\pi}{\sigma}} -1]^{-\frac 12} \sigma r^{\frac 12} |I_1 \Big(\frac{\eta(\sigma^2 r)}{\sigma}\Big) | |1+a_-(\sigma,\eta)| \lesssim e^{-\frac{\varepsilon}{\sigma}} r^{\frac 12} \min\{1,r^{\frac 12} \} \lesssim e^{-\frac{\varepsilon}{\sigma}} r.\end{aligned}$$ This establishes [\[besselbound\]](#besselbound){reference-type="eqref" reference="besselbound"} for $j=0$. Next, we estimate the $\sigma$-derivative. For the remainder of the proof we suppress the dependence of $\eta$ on $\sigma^2r$. By the chain rule one has $$\begin{aligned} \partial_\sigma \{ I_1 (\eta/\sigma) \} = I_1^{\prime} (\eta/\sigma) [ -\sigma^{-2} \eta + \sigma^{-1} \frac{d\eta}{d\sigma}]\,.\end{aligned}$$ and we have $|\frac{d\eta}{d\sigma}| \lesssim r^{\frac 12}$ for $\sigma\lesssim r^{-\frac{1}{2}}$.
Therefore, by [\[IKbounds\]](#IKbounds){reference-type="eqref" reference="IKbounds"} $$\begin{aligned} |\partial_\sigma \{ I_1 (\eta/\sigma) \}| \lesssim e^{\frac{\pi}{2\sigma}-\frac{\varepsilon}{\sigma}} \sigma^{-1} r^{\frac 12}\end{aligned}$$ and $$\begin{aligned} \label{est1} |\partial_\sigma \Big\{ \Big( \frac{\eta}{ \eta^{\prime}} \Big)^{\frac 12} I_1 (\eta/\sigma) \Big\}| \lesssim e^{\frac{\pi}{2\sigma}-\frac{\varepsilon}{\sigma}} r\,.\end{aligned}$$ Now, we note that, $$\begin{aligned} %\label{est2} \partial_\sigma \{ a_-(\sigma, \eta )\} = \partial_\sigma\{ a_-(\sigma, \eta(x))\}|_{x=\sigma^2 r} + \dot{a}_-(\sigma, \eta) \frac{d \eta}{ d \sigma}\,.\end{aligned}$$ Therefore, $$\begin{aligned} \label{est2} |\partial_\sigma \{ a_-(\sigma, \eta(\sigma^2 r)) \}| \lesssim 1 +\sigma r^{\frac 12} \lesssim 1 \,.\end{aligned}$$ Using [\[est1\]](#est1){reference-type="eqref" reference="est1"} and [\[est2\]](#est2){reference-type="eqref" reference="est2"} in [\[exp:bessel\]](#exp:bessel){reference-type="eqref" reference="exp:bessel"} and applying the product rule, we obtain [\[besselbound\]](#besselbound){reference-type="eqref" reference="besselbound"} for $j=1$. For the second derivative in $\sigma$, we start by computing $$\begin{aligned} \partial^2_\sigma \{ I_1 (\eta/\sigma)\} = I_1^{\prime \prime} (\eta/\sigma) [-\sigma^{-2} \eta +\sigma^{-1}\frac{d \eta}{d \sigma} ]^2 + I_1^{\prime} (\eta/\sigma) [2\sigma^{-3} \eta -2 \sigma^{-2} \frac{d \eta}{d \sigma}+ \sigma^{-1}\frac{d^2 \eta}{d \sigma^2}]\,.\end{aligned}$$ Using [\[IKbounds\]](#IKbounds){reference-type="eqref" reference="IKbounds"} for $j=2$, we have $$\begin{aligned} | I_1^{\prime \prime} (\eta/\sigma) [-\sigma^{-2} \eta +\sigma^{-1}\frac{d \eta}{d \sigma} ]^2 | \lesssim e^{\frac{\pi}{2\sigma}-\frac{\varepsilon}{\sigma}} (\sigma^{-1}r^{\frac 12})^2 \,. 
\end{aligned}$$ Moreover, $|\frac{d^2 \eta}{d \sigma^2}| \lesssim\sigma^{-1} r^{\frac 12}$, and $| I_1^{\prime} (\eta/\sigma)| \lesssim e^{\frac{\pi}{2\sigma}-\frac{\varepsilon}{\sigma}}$. Therefore, we have $$\begin{aligned} | \partial^2_\sigma \{ I_1 (\eta/\sigma)\}| \lesssim e^{\frac{\pi}{2\sigma}-\frac{\varepsilon}{\sigma}} \sigma^{-2} r\end{aligned}$$ and by the chain rule $$\begin{aligned} \label{secder2} | \partial_\sigma^2 \Big\{ \Big( \frac{\eta}{ \eta^{\prime}} \Big)^{\frac 12} I_1 \Big(\frac{\eta}{\sigma}\Big)\Big\}\Big| \lesssim e^{\frac{\pi}{2\sigma}-\frac{\varepsilon}{\sigma}} [ \sigma^{-2} r + \sigma^{-1} r ] \lesssim e^{\frac{\pi}{2\sigma}-\frac{\varepsilon}{\sigma}} \sigma^{-2}r.\end{aligned}$$ In the last line we used the fact that $\sigma^2 r < c$ and $\sigma <c$. Moreover, $$\begin{gathered} \label{secder3} |\partial^2_\sigma\{a_-( \sigma, \eta(\sigma^2 r))\}| \lesssim| \partial^2_\sigma\{a_-(\sigma, \eta(x)\}|_{x=\sigma^2 r}| + |\partial_{\sigma}\{ \dot{a}_-(\sigma, \eta)\}\frac{d \eta}{d \sigma}| \\ +| \ddot{ a}_-(\sigma,\eta) \big(\frac{d \eta}{d \sigma}\big)^2| + |\dot{a}_-(\sigma, \eta) \frac{d^2 \eta}{d \sigma^2} | \end{gathered}$$ and hence $|\partial^2_\sigma\{a_-( \sigma, \eta(\sigma^2 r))\}| \lesssim\sigma^{-2} + \sigma^{-2} r^{\frac 12} + r \lesssim\sigma^{-3}$. Finally, the chain rule together with [\[secder2\]](#secder2){reference-type="eqref" reference="secder2"} and [\[secder3\]](#secder3){reference-type="eqref" reference="secder3"} gives [\[besselbound\]](#besselbound){reference-type="eqref" reference="besselbound"} for $j=2$. ◻ ## Airy function approximation: $x\sim 1$ Let $Q(u)=u^{-1}-1$ and define the *Liouville-Green transform* $$\begin{aligned} \zeta(x)=\mathop{\mathrm{sign}}(x-1)\left|\frac{3}{2}\int\limits_{1}^{x} \sqrt{\lvert Q(u) \rvert } \: du \right|^{\frac{2}{3}} \,.\end{aligned}$$ Its properties are summarized in the following lemmas: **Lemma 6**.
*The map $x\mapsto \zeta(x)$ defines a smooth change of variables from $(0,\infty)$ onto $(-(\frac{3\pi}{4})^{\frac{2}{3}},\infty)$. Furthermore, $\zeta$ has the explicit form given by $$\begin{aligned} \label{zetaForm} \frac{2}{3}\zeta^{\frac{3}{2}}(x)&=\sqrt{x(x-1)} -\log(\sqrt{x} +\sqrt{x-1} ),\quad x\geq 1,\\ -\frac{2}{3}(-\zeta(x))^{\frac{3}{2}}&=\sqrt{x(1-x)} -\frac{1}{2}\arccos(2x-1),\quad x\leq 1.\nonumber\end{aligned}$$* *The function $q=-\frac{Q}{\zeta}$ is non-negative and satisfies $\sqrt{q}=\frac{d\zeta}{dx}$. Under the transformation $w(\zeta)=q^{\frac{1}{4}}f$, the equation ([\[xODE\]](#xODE){reference-type="ref" reference="xODE"}) becomes $$\begin{aligned} \label{zetaODE} -\sigma^2\ddot{w}(\zeta)=(\zeta+\sigma^2V)w(\zeta) \end{aligned}$$ where $$\begin{aligned} V=-q^{-\frac{1}{4}}\frac{d^2q^{\frac{1}{4}}}{d\zeta^2}\,.\end{aligned}$$ Here and for the rest of the paper we use  $\dot{} = \frac{d}{d\zeta}$.* *Proof.* The smoothness of $\zeta$ is clear away from $x=1$ and at this point it is a simple consequence of the fact that $Q$ vanishes only to first order. Indeed, we may expand $\sqrt{\lvert Q \rvert }$ into a series in powers of $\sqrt{\lvert x-1 \rvert }$ and integrate term by term to find that $$\begin{aligned} \int_{1}^{x} \sqrt{\lvert Q(u) \rvert } \: du=\frac{2}{3}(x-1)^{\frac{3}{2}}(1+O(x-1)) \end{aligned}$$ from which the claim is immediate. We omit the proof of [\[zetaForm\]](#zetaForm){reference-type="eqref" reference="zetaForm"} and [\[zetaODE\]](#zetaODE){reference-type="eqref" reference="zetaODE"} as it can be verified by differentiation.
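As a numerical sanity check of [\[zetaForm\]](#zetaForm){reference-type="eqref" reference="zetaForm"}, one can compare the defining integral of $\zeta$ with the stated closed forms, and confirm the left endpoint $\zeta(0^+)=-(3\pi/4)^{\frac{2}{3}}$ via $\int_0^1\sqrt{(1-u)/u}\,du=\pi/2$. The sketch below is illustrative only, not part of the proof, and assumes the SciPy library.

```python
# Illustrative numerical check (not part of the proof) of the explicit form
# (zetaForm) of the Liouville-Green variable, with Q(u) = 1/u - 1:
#   (2/3) zeta^{3/2}   = sqrt(x(x-1)) - log(sqrt(x) + sqrt(x-1)),  x >= 1,
#  -(2/3)(-zeta)^{3/2} = sqrt(x(1-x)) - (1/2) arccos(2x - 1),      x <= 1.
import numpy as np
from scipy.integrate import quad

def zeta_by_quadrature(x):
    # zeta(x) = sign(x-1) | (3/2) int_1^x sqrt(|Q(u)|) du |^{2/3}
    integral, _ = quad(lambda u: np.sqrt(abs(1.0 / u - 1.0)), 1.0, x)
    return np.sign(x - 1.0) * abs(1.5 * integral) ** (2.0 / 3.0)

def zeta_closed_form(x):
    if x >= 1.0:
        rhs = np.sqrt(x * (x - 1.0)) - np.log(np.sqrt(x) + np.sqrt(x - 1.0))
        return (1.5 * rhs) ** (2.0 / 3.0)
    rhs = np.sqrt(x * (1.0 - x)) - 0.5 * np.arccos(2.0 * x - 1.0)  # rhs <= 0 here
    return -((-1.5 * rhs) ** (2.0 / 3.0))

max_err = max(abs(zeta_by_quadrature(x) - zeta_closed_form(x))
              for x in [0.05, 0.5, 0.9, 1.0, 1.5, 3.0])

# Left endpoint: zeta(0+) = -(3*pi/4)^{2/3} = zeta^*, since the integral below is pi/2.
half_pi, _ = quad(lambda u: np.sqrt((1.0 - u) / u), 0.0, 1.0)
```

Both branches of the closed form agree with the quadrature to within the integration tolerance, and the endpoint integral reproduces $\pi/2$, confirming the range $(-(\frac{3\pi}{4})^{\frac{2}{3}},\infty)$ of $\zeta$.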
◻ For reference, we record all of the notation relevant to the Liouville-Green transform: $$\begin{aligned} \begin{split} &Q(u)=u^{-1}-1,\,q=-\frac{Q}{\zeta}\\ &\zeta(x)=\mathop{\mathrm{sign}}(x-1)\left|\frac{3}{2}\int\limits_{1}^{x} \sqrt{\lvert Q(u) \rvert } \: du \right|^{\frac{2}{3}}\\ &w=q^{\frac{1}{4}}f,\,V=-q^{-\frac{1}{4}}\frac{d^2q^{\frac{1}{4}}}{d\zeta^2} \end{split}\,.\end{aligned}$$ **Lemma 7**. *Let $\zeta^* = - \big( 3\pi/4)^{\frac 23 }$ and $\zeta \in (\zeta^*, 0]$ then we have $|\partial^j_{\zeta} V| \lesssim 1$ for $j=0,1,2\dots$.* *Proof.* We note that for $|x-1| <1$ one has $\zeta \sim (x-1)$, and therefore $q^{\frac 14} = \sum_{k=0}^{\infty} c_k \zeta^k$ for some $c_k \in \mathbb R$. This shows that $|\partial^j_{\zeta} V| \lesssim 1$ in the range of $|\zeta|<1$. On the other hand, as $x \to 0$, one has $(\zeta - \zeta^*)^{\frac 32} \sim x$, therefore $q^{\frac 14} \sim (\zeta - \zeta^*)^{-\frac 38}$ and $V (\zeta) = O_{\infty}( (\zeta - \zeta^*)^{-2})$. This shows that $|\partial^j_{\zeta} V| \lesssim 1$ as long as $|\zeta - \zeta^* |> \delta >0$. ◻ We may now construct a basis of solutions to ([\[zetaODE\]](#zetaODE){reference-type="ref" reference="zetaODE"}) in terms of the Airy functions $\mathop{\mathrm{Ai}}$ and $\mathop{\mathrm{Bi}}$ whose properties may be found in [@Olver]. **Proposition 8**. *Let $\delta>0$.
Then there exists $c>0$ such that for all $\sigma \in [0,c]$, a fundamental system of solutions to [\[zetaODE\]](#zetaODE){reference-type="eqref" reference="zetaODE"} in the range $\zeta^*+\delta < \zeta \leq 0$ is given by $$\begin{aligned} \begin{split}\label{airyBasis} \phi_1(\sigma,\zeta) = \mathop{\mathrm{Ai}}(\tau)( 1+{ \sigma} a_1( \sigma,\zeta) ) \\ \phi_2(\sigma,\zeta) = \mathop{\mathrm{Bi}}(\tau)( 1+ {\sigma}a_2(\sigma,\zeta)) \end{split}\end{aligned}$$ where $\tau:= - \sigma^{-\frac{2}{3}} \zeta$ and $a_1$ and $a_2$ are smooth functions satisfying the bounds $$\begin{aligned} & | a_j(\sigma,\zeta) | \lesssim 1 ,\,\,\,| \dot{a}_j(\sigma,\zeta)| \lesssim\sigma^{-\frac{1}{3}} ,\,\,\,\, | \ddot{a}_j(\sigma,\zeta)| \lesssim\sigma^{-\frac 43},\,\,\\ & \lvert \partial_{\sigma}\{a_j(\sigma,\zeta)\} \rvert \lesssim\sigma^{-\frac{4}{3}},\,\,\,\lvert \partial_{\sigma}\{\dot{a}_j(\sigma,\zeta)\} \rvert\lesssim\sigma^{-7/3},\,\,\,\lvert \partial_{\sigma}^{2}\{a_j(\sigma,\zeta)\} \rvert\lesssim\sigma^{-\frac{10}{3}} \end{aligned}$$ for $j=1,2$ uniformly on $[\zeta^*+\delta,0]$.* **Remark 9**. *The range of $\zeta$ corresponds to $x\in [\delta',1]$ for some $\delta'>0$ independent of $\sigma$. The restriction is designed to avoid the singularity of $V$ at $\zeta=\zeta^*$, see the proof of Lemma [Lemma 7](#smallV){reference-type="ref" reference="smallV"}. Note also that this approximation is only possible because $\tau>0$ for $\zeta<0$ and thus the Airy functions do not have zeroes in this regime.* *Proof.* Write $\phi_{1,0}(\sigma,\zeta)=\mathop{\mathrm{Ai}}(\tau)$ and $\phi_{2,0}(\sigma,\zeta)=\mathop{\mathrm{Bi}}(\tau)$.
The variable $\tau$ is chosen so that $$\begin{aligned} -\sigma^2\ddot{\phi}_{j,0}-\zeta \phi_{j,0}=0\end{aligned}$$ for each of $j=1,2$ where $\dot{ }=\frac{\partial}{\partial\zeta}$. Therefore, $$\begin{aligned} -\sigma^2\ddot{\phi}_{j}-\zeta\phi_{j}=-\left( \sigma^{3}\phi_{j,0}^2\dot{a}_j \right)^./\phi_{j,0}\end{aligned}$$ and plugging the representations ([\[airyBasis\]](#airyBasis){reference-type="ref" reference="airyBasis"}) into ([\[zetaODE\]](#zetaODE){reference-type="ref" reference="zetaODE"}) yields the equation for $a_j$ $$\begin{aligned} \label{ajEq} \left( \phi_{j,0}^2\dot{a}_j \right)^.=-\sigma^{-1}V\phi_{j,0}^2(1+\sigma a_j)\,.\end{aligned}$$ The solution to this equation for $j=2$ with $a_2(\sigma,0)=0$ and $\dot{a}_2(\sigma,0)=0$ is given by $$\begin{aligned} \label{a2full} a_2(\sigma,\zeta) = - \sigma^{ \frac{1}{3}}\int\limits_{0}^{ - \sigma^{-\frac{2}{3}} \zeta} \mathop{\mathrm{Bi}}^2(u) \Big[\int\limits_u^{ - \sigma^{-\frac{2}{3}} \zeta} \mathop{\mathrm{Bi}}^{-2} (v) \:dv \Big] V(-\sigma^{\frac{2}{3}} u) ( 1+ \sigma a_2(\sigma,-\sigma^{\frac{2}{3}} u)) \:du \:.
\end{aligned}$$ We now recall the following expansions of the Airy functions found in [@Olver]: $$\begin{aligned} \label{asymptoticAB} &\mathop{\mathrm{Bi}}(x) = \pi^{-\frac 1 2} x^{-\frac 14} e^{{\frac 23}x^{\frac 32}} ( 1+ O (x^{-\frac 32})) \,\, \text{as}\,\, x \to \infty \\ &\mathop{\mathrm{Bi}}(x) \geq \mathop{\mathrm{Bi}}(0) > 0 , \,\,\,\text{ for $x\geq 0$} \nonumber\\ & \mathop{\mathrm{Ai}}(x) = \frac{1}{2}\pi^{-\frac 1 2} x^{-\frac 14} e^{-{\frac 23}x^{\frac 32}} ( 1+ O (x^{-\frac 32})) \,\, \text{as}\,\,\, x \to \infty \\ & \mathop{\mathrm{Ai}}(x) > 0 \,\,\,\text{ for $x\geq 0$}\,. \nonumber\end{aligned}$$ These asymptotics, together with the identity $$\begin{aligned} \label{Bil} \int\limits_{x_0}^{x_1} \mathop{\mathrm{Bi}}^{-2}(y)\:dy=\pi^{-1}\left(\frac{\mathop{\mathrm{Ai}}}{\mathop{\mathrm{Bi}}}(x_0)-\frac{\mathop{\mathrm{Ai}}}{\mathop{\mathrm{Bi}}}(x_1)\right)\end{aligned}$$ which holds for $0\leq x_0<x_1$, imply that for $x_0\geq 0$ $$\begin{aligned} \left| \mathop{\mathrm{Bi}}^2(x_0)\int\limits_{x_0}^{x_1}\mathop{\mathrm{Bi}}^{-2}(y)\:dy\right|\lesssim\left\langle x_0 \right\rangle^{-\frac{1}{2}}\end{aligned}$$ so that $$\begin{aligned} \label{Bil_2} \left|\int_{0}^{ - \sigma^{-\frac{2}{3}} \zeta} \mathop{\mathrm{Bi}}^2(u) \Big[\int\limits_u^{ - \sigma^{-\frac{2}{3}} \zeta} \mathop{\mathrm{Bi}}^{-2} (v) \:dv \Big] f(u) \:du \right| \lesssim\left\langle \sigma^{-\frac{2}{3}}\zeta \right\rangle^{\frac{1}{2}} \|f\|_{\infty}\,.\end{aligned}$$ Moreover, $$\begin{aligned} \label{Bil2} \left|\mathop{\mathrm{Bi}}^{-2}(x_0)\int\limits_0^{x_0}\mathop{\mathrm{Bi}}^{2}(y)\:dy\right|\lesssim\left\langle x_0 \right\rangle^{-\frac{1}{2}}\end{aligned}$$ which comes from inserting the above asymptotics into the integral and then computing for $x>1$ $$\begin{aligned} x^{\frac{1}{2}}e^{-\frac{4}{3}x^\frac{3}{2}}\int\limits_1^{x}y^{-\frac{1}{2}}e^{\frac{4}{3}y^\frac{3}{2}}\:dy&\lesssim
x^{\frac{1}{2}}e^{-\frac{4}{3}x^\frac{3}{2}}\int\limits_1^{x^\frac{3}{2}}u^{-\frac{2}{3}}e^{\frac{4}{3}u}\:du= x^{\frac{1}{2}}e^{-\frac{4}{3}x^\frac{3}{2}}\left(\frac{3}{4}u^{-\frac{2}{3}}e^{\frac{4}{3}u}\bigg\vert_1^{x^\frac{3}{2}}+\frac{1}{2}\int\limits_1^{x^\frac{3}{2}}u^{-\frac{5}{3}}e^{\frac{4}{3}u}\:du\right)\\ &\lesssim x^{\frac{1}{2}}e^{-\frac{4}{3}x^\frac{3}{2}}\left(x^{-1}e^{\frac{4}{3}x^\frac{3}{2}}\right)=x^{-\frac{1}{2}}\,.\end{aligned}$$ To estimate [\[a2full\]](#a2full){reference-type="eqref" reference="a2full"}, we let $$\begin{aligned} a_{2,0}(\zeta) := - \sigma^{ \frac 13 }\int\limits_{0}^{ - \sigma^{-\frac{2}{3}} \zeta} \mathop{\mathrm{Bi}}^2(u) \Big[\int\limits_u^{ - \sigma^{-\frac{2}{3}} \zeta} \mathop{\mathrm{Bi}}^{-2} (v) \:dv \Big] V(-\sigma^{\frac{2}{3}}u) \:du\end{aligned}$$ be the leading term, where we have suppressed the $\sigma$-dependence of $a_2$ for now. By Lemma [Lemma 7](#smallV){reference-type="ref" reference="smallV"} $V(-\sigma^{\frac{2}{3}}u)$ is bounded on the domain of integration when $\zeta\in [\zeta^*+\delta,0]$ so using [\[Bil_2\]](#Bil_2){reference-type="eqref" reference="Bil_2"} we have that $$\begin{aligned} \label{a20est} |a_{2,0}(\zeta)|\lesssim\sigma^{\frac 13} \left\langle \sigma^{-\frac{2}{3}}\zeta \right\rangle^{\frac{1}{2}} \lesssim 1\,.\end{aligned}$$ Now, a contraction argument shows that $|a_2(\zeta)| \lesssim 1$, as claimed. We next consider the $\zeta$ derivative of $a_2(\zeta)$.
One has that $$\begin{aligned} \label{adot} \dot a_2(\zeta) = \sigma^{-\frac{1}{3}} \mathop{\mathrm{Bi}}^{-2}(-\sigma^{-\frac{2}{3}} \zeta)\int\limits_0^{-\sigma^{-\frac{2}{3}} \zeta} \mathop{\mathrm{Bi}}^2(u) V(-\sigma^{\frac{2}{3}} u) ( 1+ \sigma a_2 (-\sigma^{\frac{2}{3}} u)) \:du\end{aligned}$$ so that, since $V$ is bounded and we have shown that $a_2$ itself is bounded, we see that by ([\[Bil2\]](#Bil2){reference-type="ref" reference="Bil2"}) $$\begin{aligned} \lvert \dot{a}_2(\zeta) \rvert \lesssim\sigma^{-\frac{1}{3}}\left\langle \sigma^{-\frac{2}{3}}\zeta \right\rangle^{-\frac{1}{2}}\,,\end{aligned}$$ which is less than $\sigma^{-\frac{1}{3}}$, as claimed. For the second $\zeta$-derivative, we use ([\[ajEq\]](#ajEq){reference-type="ref" reference="ajEq"}) to write $$\begin{aligned} \ddot{a}_2(\zeta)=-\sigma^{-1}V(\zeta) (1+\sigma a_2(\zeta))-2[\dot{\phi}_{2,0}\phi_{2,0}^{-1}](\zeta)\dot{a}_2(\zeta)\,.\end{aligned}$$ The first term is clearly bounded by $\sigma^{-1}$ while the second is bounded in terms of $$\begin{aligned} \sigma^{-1}\lvert \mathop{\mathrm{Bi}}'(-\sigma^{-\frac{2}{3}}\zeta)\mathop{\mathrm{Bi}}^{-1}(-\sigma^{-\frac{2}{3}}\zeta) \rvert \lesssim\sigma^{-\frac{4}{3}}.\end{aligned}$$ By [@NIST (9.7.8)], $\lvert \mathop{\mathrm{Bi}}'(-\sigma^{-\frac{2}{3}}\zeta) \rvert\lesssim\left\langle \sigma^{-\frac{2}{3}}\zeta \right\rangle^{\frac{1}{4}} e^{\frac{2}{3}\sigma^{-1}\lvert \zeta \rvert ^{\frac{3}{2}}}$, which shows that $\lvert \ddot{a}_2(\zeta) \rvert \lesssim\sigma^{-\frac{4}{3}}$. Now, we consider the $\sigma$-derivatives of $a_2$.
Let $F(\sigma,u) = V(-\sigma^{\frac 23} u) ( 1+\sigma a_2(\sigma,-\sigma^{\frac 23}u))$, then by ([\[a2full\]](#a2full){reference-type="ref" reference="a2full"}) we can compute $$\begin{aligned} \partial_{\sigma} \{a_2(\sigma,\zeta)\} & =-\frac{1}{3 \sigma } [ a_2(\sigma,\zeta)-2\zeta \dot{a}_2(\sigma,\zeta) ] \\ &-\sigma^{\frac 13} \int\limits_{0}^{-\sigma^{-\frac{2}{3}}\zeta} \mathop{\mathrm{Bi}}^2(u)\Big[\int\limits_{u}^{-\sigma^{-\frac{2}{3}}\zeta} \mathop{\mathrm{Bi}}^{-2}(v)\: dv\Big] \partial_{\sigma}\{F(u,\sigma)\}\: du =:B_1(\sigma,\zeta)+B_2(\sigma,\zeta)\end{aligned}$$ Using the bounds previously established for $a_2(\sigma,\zeta)$, we can deduce $|B_1(\sigma,\zeta)|\lesssim\sigma^{-\frac{4}{3}}$. Moreover, we may write $$\begin{aligned} \partial_{\sigma}\{F(u,\sigma)\}= O(\sigma^{-\frac 13} \langle u \rangle)+ \sigma V(-\sigma^{\frac 23}u)\partial_\sigma[a_2(\sigma,v)]_{v=-\sigma^{\frac 23}u}. \end{aligned}$$ Therefore, by [\[Bil_2\]](#Bil_2){reference-type="eqref" reference="Bil_2"} we have $$\begin{aligned} B_2(\sigma,\zeta)= O\big(\sigma^{\frac 13} \langle\sigma^{-\frac 23} \zeta \rangle^{\frac 32}\big) -\sigma^{\frac 43} \int\limits_{0}^{-\sigma^{-\frac{2}{3}}\zeta} \mathop{\mathrm{Bi}}^2(u)\Big[\int\limits_{u}^{-\sigma^{-\frac{2}{3}}\zeta} \mathop{\mathrm{Bi}}^{-2}(v)\: dv\Big] V(-\sigma^{\frac 23}u)\partial_\sigma[a_2(\sigma,v)]_{v=-\sigma^{\frac 23}u }\: du \end{aligned}$$ Letting $$T(a) := \sigma^{\frac 13} \int\limits_{0}^{-\sigma^{-\frac{2}{3}}\zeta} \mathop{\mathrm{Bi}}^2(u)\Big[\int\limits_{u}^{-\sigma^{-\frac{2}{3}}\zeta} \mathop{\mathrm{Bi}}^{-2}(v)\: dv\Big] V(-\sigma^{\frac 23}u)a(\sigma,u) \: du$$ we obtain $$\begin{aligned} \partial_{\sigma} \{a_2(\sigma,\zeta)\} = O(\sigma^{-\frac 43}) + \sigma T\Big( \partial_\sigma[a_2(\sigma,v)]_{v=-\sigma^{\frac 23}u }\Big)\end{aligned}$$ Now, by a contraction argument we obtain that $\lvert \partial_{\sigma}\{a_2(\sigma,\zeta)\} \rvert\lesssim\sigma^{-\frac{4}{3}}$.
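The estimates in this regime repeatedly use the leading-order growth and decay of $\mathop{\mathrm{Bi}}$ and $\mathop{\mathrm{Ai}}$ from [\[asymptoticAB\]](#asymptoticAB){reference-type="eqref" reference="asymptoticAB"}; as a side remark, these leading-order constants can be confirmed numerically. The sketch below is illustrative only, not part of the argument, and assumes the SciPy library.

```python
# Illustrative numerical check (not part of the proof) of the leading-order
# asymptotics (asymptoticAB):
#   Bi(x) ~        pi^{-1/2} x^{-1/4} exp(+(2/3) x^{3/2})   as x -> infinity,
#   Ai(x) ~ (1/2) pi^{-1/2} x^{-1/4} exp(-(2/3) x^{3/2})   as x -> infinity.
import numpy as np
from scipy.special import airy  # airy(x) -> (Ai, Ai', Bi, Bi')

def leading_order_ratios(x):
    """Ratios of Ai, Bi to their stated leading terms; both should tend to 1."""
    ai, _, bi, _ = airy(x)
    prefactor = np.pi ** -0.5 * x ** -0.25
    bi_ratio = bi / (prefactor * np.exp(2.0 / 3.0 * x ** 1.5))
    ai_ratio = ai / (0.5 * prefactor * np.exp(-2.0 / 3.0 * x ** 1.5))
    return bi_ratio, ai_ratio

bi_ratio, ai_ratio = leading_order_ratios(20.0)
```

At $x=20$ both ratios differ from $1$ only at the size of the $O(x^{-\frac32})$ correction, consistent with [\[asymptoticAB\]](#asymptoticAB){reference-type="eqref" reference="asymptoticAB"}.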
Proceeding onward, we differentiate ([\[adot\]](#adot){reference-type="ref" reference="adot"}) with respect to $\sigma$ to find that $$\begin{aligned} \partial_{\sigma}\{\dot{a}_2(\sigma,\zeta) \}&=-\frac{1}{3}\sigma^{-1}\dot{a}_2(\sigma,\zeta)+\frac{2}{3}\sigma^{-\frac{5}{3}}\zeta\mathop{\mathrm{Bi}}'(-\sigma^{-\frac{2}{3}}\zeta)\mathop{\mathrm{Bi}}^{-1}(-\sigma^{-\frac{2}{3}}\zeta)\dot{a}_2(\sigma,\zeta)\\ &+\frac{2}{3}\sigma^{-2}\zeta V(\zeta)(1+\sigma a_2(\sigma,\zeta)) \\ &+\sigma^{-\frac{1}{3}} \mathop{\mathrm{Bi}}^{-2}(-\sigma^{-\frac{2}{3}} \zeta)\int\limits_0^{-\sigma^{-\frac{2}{3}} \zeta} \mathop{\mathrm{Bi}}^2(u) \partial_{\sigma} \{F(u,\sigma)\} \:du\,.\end{aligned}$$ It is easy to see using previously derived inequalities that each term is $O(\sigma^{-2})$ except possibly the second term. For that, we write $$\begin{aligned} \lvert \sigma^{-\frac{5}{3}}\zeta\mathop{\mathrm{Bi}}'(-\sigma^{-\frac{2}{3}}\zeta)\mathop{\mathrm{Bi}}^{-1}(-\sigma^{-\frac{2}{3}}\zeta)\dot{a}_2(\sigma,\zeta) \rvert \lesssim\sigma^{-2}\lvert \zeta \rvert \left\langle \sigma^{-\frac{2}{3}}\zeta \right\rangle^{\frac{1}{2}} \lvert \dot{a}_2(\sigma,\zeta) \rvert \end{aligned}$$ and recall that $\lvert \dot{a}_2(\sigma,\zeta) \rvert\lesssim\sigma^{-\frac{1}{3}}\left\langle \sigma^{-\frac{2}{3}}\zeta \right\rangle^{-\frac{1}{2}}$. It follows then that $\lvert \partial_{\sigma}\{\dot{a}_2(\sigma,\zeta)\} \rvert\lesssim\sigma^{-2}$. For the second $\sigma$-derivative of $a_2$, we differentiate each of $B_j(\sigma,\zeta)$, $j=1,2$, separately. First, it is easy to see that $$\begin{aligned} \lvert \partial_{\sigma} \{B_1(\sigma,\zeta)\} \rvert \lesssim\sigma^{-1}\lvert B_1(\sigma,\zeta) \rvert+ \sigma^{-1}\big(\lvert \partial_\sigma\{a_2(\sigma,\zeta)\} \rvert + \sigma^{-1} \lvert \partial_{\sigma}\{\dot{a}_2(\sigma,\zeta)\} \rvert\big)\end{aligned}$$ which is in total $O(\sigma^{-3})$, the dominant term being the last term.
Next, differentiating $B_2(\sigma,\zeta)$ we have $$\begin{aligned} \label{B2der} \partial_{\sigma} \{B_2(\sigma,\zeta)\} &= \frac{1}{3\sigma}B_2(\sigma,\zeta) +\frac{2}{3} \sigma^{-\frac{4}{3}} \zeta \mathop{\mathrm{Bi}}^{-2}(-\sigma^{-\frac{2}{3}} \zeta)\int\limits_0^{-\sigma^{-\frac{2}{3}} \zeta} \mathop{\mathrm{Bi}}^2(u) \partial_\sigma\{F(u,\sigma)\}\:du\\+&\sigma^{\frac 13}\int_{0}^{\tau}\mathop{\mathrm{Bi}}^2(u)\left[\int_{u}^{\tau}\mathop{\mathrm{Bi}}^{-2}(v) \: dv\right] \partial_\sigma^2\{F(u,\sigma)\} \: du\,. \nonumber\end{aligned}$$ Calculations similar to those employed in the estimation of $B_2(\sigma,\zeta)$ show that the first two terms on the right-hand side of [\[B2der\]](#B2der){reference-type="eqref" reference="B2der"} are $O(\sigma^{-\frac{7}{3}})$. To estimate the last term, we compute $$\begin{aligned} \partial_\sigma^2\{F(u,\sigma)\} = O( \sigma^{-1} \langle u \rangle^{2} + \sigma^{2}) + \sigma V(-\sigma^{\frac 23}u)\partial_\sigma^2[a_2(\sigma,v)]_{v=-\sigma^{\frac 23}u} \:.\end{aligned}$$ Hence, using [\[Bil_2\]](#Bil_2){reference-type="eqref" reference="Bil_2"}, we deduce $$\begin{aligned} \partial_{\sigma} \{B_2(\sigma,\zeta)\} = O(\sigma^{-\frac 73}) + \sigma T\Big( \partial_\sigma^2[a_2(\sigma,v)]_{v=-\sigma^{\frac 23}u} \Big).\end{aligned}$$ This, in turn, leads to $$\begin{aligned} \partial_\sigma^2\{a_2(\sigma,\zeta)\}= O(\sigma^{-3})+ \sigma T\Big( \partial_\sigma^2[a_2(\sigma,v)]_{v=-\sigma^{\frac 23}u} \Big)\,,\end{aligned}$$ which, with a contraction argument, yields $|\partial_\sigma^2\{a_2(\sigma,\zeta)\}|\lesssim\sigma^{-3}$. Having proven all of the stated bounds on $a_2$, we now turn to $a_1$. We make the reduction ansatz $\phi_1(\zeta) =g(\zeta) \phi_2(\zeta)$ and find that $g$ solves $(\phi_2^2 \dot{g})^.=0$.
To simplify the analysis that follows, we extend the functions $\phi_1$ and $\phi_2$, defined at the moment on $[\zeta^*+\delta,0]$, to the interval $(-\infty,0]$ in such a way that the proven bounds still hold. We then choose the solution $g$ of the form $$\begin{aligned} g(\zeta) &= \pi\int\limits_{\tau}^{ \infty} \mathop{\mathrm{Bi}}^{-2}(u) ( 1+ \sigma a_2(-\sigma^{\frac 23} u))^{-2} \:du \,,\end{aligned}$$ which yields $$\begin{aligned} \label{phi2} \phi_1(\zeta ) = \pi \mathop{\mathrm{Bi}}(\tau) ( 1+\sigma a_2(\zeta))\int\limits_{\tau}^{\infty} \mathop{\mathrm{Bi}}^{-2}(u) ( 1+ \sigma a_2(-\sigma^{\frac 23}u))^{-2} \:du \,. \end{aligned}$$ Here, we suppressed the $\sigma$ dependence of $a_2$. We now write $(1+\sigma a_2)^{-2}=:1+\sigma \tilde{a}_2$; for $\sigma$ sufficiently small, $\tilde{a}_2$ satisfies all of the same bounds as $a_2$ because $\lvert a_2 \rvert \lesssim 1$. Recalling the identities $$\begin{aligned} \label{ftc} \frac{d}{dx}\Big\{\frac{\mathop{\mathrm{Ai}}}{\mathop{\mathrm{Bi}}}(x)\Big\} =-\pi^{-1}\mathop{\mathrm{Bi}}^{-2}(x),\,\,\frac{d}{dx}\Big\{\frac{\mathop{\mathrm{Bi}}}{\mathop{\mathrm{Ai}}}(x)\Big\} =\pi^{-1}\mathop{\mathrm{Ai}}^{-2}(x) \end{aligned}$$ and the fact that $\mathop{\mathrm{Ai}}(u)$ and $\mathop{\mathrm{Bi}}(u)$ are strictly positive for $u\geq 0$, we integrate by parts to see that $$\begin{gathered} \pi\int\limits_{\tau}^{ \infty} \mathop{\mathrm{Bi}}^{-2}(u) ( 1+\sigma \tilde{a}_2( -\sigma^{\frac 23} u) ) \:du=\\ \left[\frac{\mathop{\mathrm{Ai}}}{\mathop{\mathrm{Bi}}}(u)(1+\sigma\tilde{a}_2(-\sigma^{\frac{2}{3}}u))\right]\bigg\vert_{\infty}^{\tau} - \sigma^{\frac{5}{3}}\int\limits_{\tau}^{\infty} \frac{\mathop{\mathrm{Ai}}}{ \mathop{\mathrm{Bi}}}(u) \dot{\tilde{a}}_2( -\sigma^{\frac 23} u) \:du\,.
\end{gathered}$$ Therefore, $$\begin{aligned} \phi_1(\zeta)=\mathop{\mathrm{Ai}}(\tau){(1+\sigma a_2(\zeta))}\Big[ ( 1+ \sigma \tilde{a}_2(\zeta)) - \sigma^{\frac 53 } \frac{\mathop{\mathrm{Bi}}}{ \mathop{\mathrm{Ai}}}(\tau) \int\limits_{\tau}^{ \infty}\frac{\mathop{\mathrm{Ai}}}{ \mathop{\mathrm{Bi}}}(u) \dot{\tilde{a}}_2( -\sigma^{\frac 23} u) \:du \Big]\end{aligned}$$ From this, we infer that $$\begin{aligned} a_1(\zeta)&=a_2(\zeta)+(1+\sigma a_2(\zeta))\Big[ \tilde{a}_2(\zeta)+ \sigma^{\frac 23 } \frac{\mathop{\mathrm{Bi}}}{ \mathop{\mathrm{Ai}}}(\tau) \int\limits_{-\sigma^{-\frac 23} \zeta}^{\infty}\frac{\mathop{\mathrm{Ai}}}{ \mathop{\mathrm{Bi}}}(u) \dot{\tilde{a}}_2( -\sigma^{\frac 23} u) \:du \Big]\\ &:=a_2(\zeta)+(1+\sigma a_2(\zeta))\left[\tilde{a}_2(\zeta)+\tilde{a}_1(\zeta)\right]\end{aligned}$$ so it suffices to control $$\begin{aligned} \tilde{a}_1(\zeta) =\sigma^{\frac{2}{3}}\frac{\mathop{\mathrm{Bi}}}{\mathop{\mathrm{Ai}}}(\tau)\int_{\tau}^{ \infty}\frac{\mathop{\mathrm{Ai}}}{ \mathop{\mathrm{Bi}}}(u) \dot{\tilde{a}}_2( -\sigma^{\frac 23} u) \:du \,.\end{aligned}$$ To begin with, we use that $\lvert \dot{\tilde{a}}_2(\zeta) \rvert\lesssim\sigma^{-\frac{1}{3}}$ to write $$\begin{aligned} \lvert \tilde{a}_1(\zeta) \rvert &\lesssim\sigma^{\frac{1}{3}}e^{\frac{4}{3}\left\langle \sigma^{-\frac{2}{3}}\zeta \right\rangle^{\frac{3}{2}}}\int_{-\sigma^{-\frac{2}{3}}\zeta}^{\infty} e^{-\frac{4}{3}\left\langle u \right\rangle^{\frac{3}{2}} }\: du \\ &\leq\sigma^{\frac{1}{3}}e^{\frac{4}{3}\left\langle \sigma^{-\frac{2}{3}}\zeta \right\rangle^{\frac{3}{2}} }\left\langle \sigma^{-\frac{2}{3}}\zeta \right\rangle^{-\frac{1}{2}}\int\limits_{-\sigma^{-\frac{2}{3}}\zeta}^{\infty} e^{-\frac{4}{3}\left\langle u \right\rangle^{\frac{3}{2}}}\left\langle u \right\rangle^{\frac{1}{2}} \: du \\ &\lesssim\sigma^{\frac{1}{3}}\left\langle \sigma^{-\frac{2}{3}}\zeta \right\rangle^{-\frac{1}{2}} \,.\end{aligned}$$ We compute further that $$\begin{aligned} \label{BZetaDer} 
\dot{\tilde{a}}_1(\zeta) =-\pi^{-1}\sigma^{-\frac{2}{3}}(\mathop{\mathrm{Ai}}\mathop{\mathrm{Bi}})^{-1}(\tau)\tilde{a}_1(\zeta)+\dot{\tilde{a}}_2(\zeta)\,.\end{aligned}$$ The second term we have already bounded by $\sigma^{-\frac{1}{3}}$ and the first obeys this bound as well because $\lvert (\mathop{\mathrm{Ai}}\mathop{\mathrm{Bi}})^{-1}(x) \rvert \lesssim\left\langle x \right\rangle^{\frac{1}{2}}$. Proceeding onward, $$\begin{aligned} \partial_{\zeta}^2\{\tilde{a}_1(\zeta)\} =\pi^{-1}\sigma^{-\frac{4}{3}}\frac{d}{du}\left[(\mathop{\mathrm{Ai}}\mathop{\mathrm{Bi}})^{-1}(u)\right]\vert_{u=\tau}\tilde{a}_1(\zeta)-\pi^{-1}\sigma^{-\frac{2}{3}}(\mathop{\mathrm{Ai}}\mathop{\mathrm{Bi}})^{-1}(\tau) \dot{\tilde{a}}_1 (\zeta)+\ddot{\tilde{a}}_2(\zeta)\,.\end{aligned}$$ From the fact that $\lvert \frac{d}{du}\{(\mathop{\mathrm{Ai}}\mathop{\mathrm{Bi}})^{-1}(u)\} \rvert \lesssim\left\langle u \right\rangle^{-\frac{1}{2}}$, it is easily checked as before that each term is bounded by at worst $\sigma^{-\frac{4}{3}}$, thus establishing all of the desired bounds on the $\zeta$-derivatives of $\tilde{a}_1$. 
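The identities [\[ftc\]](#ftc){reference-type="eqref" reference="ftc"} invoked repeatedly above are equivalent to the Airy Wronskian $\mathop{\mathrm{Ai}}(x)\mathop{\mathrm{Bi}}'(x)-\mathop{\mathrm{Ai}}'(x)\mathop{\mathrm{Bi}}(x)=\pi^{-1}$. As a purely numerical sanity check (no step of the proof depends on it), assuming SciPy is available:

```python
import numpy as np
from scipy.special import airy

# W{Ai, Bi} = Ai*Bi' - Ai'*Bi = 1/pi; dividing by Bi^2 (resp. Ai^2) gives
# d/dx (Ai/Bi) = -1/(pi Bi^2) and d/dx (Bi/Ai) = 1/(pi Ai^2), as recalled above.
for x in [0.0, 0.5, 1.0, 2.0, 5.0]:
    ai, aip, bi, bip = airy(x)
    wronskian = ai * bip - aip * bi
    assert abs(wronskian - 1.0 / np.pi) < 1e-12

    # central finite difference of Ai/Bi against the closed form
    h = 1e-6
    ai_p, _, bi_p, _ = airy(x + h)
    ai_m, _, bi_m, _ = airy(x - h)
    deriv = (ai_p / bi_p - ai_m / bi_m) / (2 * h)
    assert abs(deriv + 1.0 / (np.pi * bi**2)) < 1e-6
```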
For the $\sigma$-derivatives, it is convenient to first rewrite $$\begin{aligned} \tilde{a}_1(\sigma,\zeta)=\frac{\mathop{\mathrm{Bi}}}{\mathop{\mathrm{Ai}}}(\tau)\int_{-\zeta}^{\infty} \frac{\mathop{\mathrm{Ai}}}{\mathop{\mathrm{Bi}}}(\sigma^{-\frac{2}{3}}v)\dot{\tilde{a}}_2(\sigma,-v)\: dv \,,\end{aligned}$$ so that $$\begin{aligned} \partial_{\sigma} \{\tilde{a}_1(\sigma,\zeta)\} &=\frac{2}{3\pi}\sigma^{-\frac{5}{3}}\zeta(\mathop{\mathrm{Ai}}\mathop{\mathrm{Bi}})^{-1}(\tau)\tilde{a}_1(\sigma,\zeta)\\ &+\frac{2}{3\pi}\sigma^{-\frac{5}{3}}\frac{\mathop{\mathrm{Bi}}}{\mathop{\mathrm{Ai}}}(\tau)\int_{-\zeta}^{\infty} \mathop{\mathrm{Bi}}^{-2}(\sigma^{-\frac{2}{3}}v)v\dot{\tilde{a}}_2(\sigma,-v)\: dv\\ &+\frac{\mathop{\mathrm{Bi}}}{\mathop{\mathrm{Ai}}}(\tau)\int_{-\zeta}^{\infty} \frac{\mathop{\mathrm{Ai}}}{\mathop{\mathrm{Bi}}}(\sigma^{-\frac{2}{3}}v)\partial_{\sigma} \{\dot{\tilde{a}}_2\}(\sigma,-v) \: dv\\ &=: D_1(\sigma,\zeta)+ D_2(\sigma,\zeta)+ D_3(\sigma,\zeta)\,.\end{aligned}$$ Arguing as before, it is easy to see that $\lvert D_1(\sigma,\zeta) \rvert\lesssim\sigma^{-\frac{4}{3}}$ whereas $$\begin{aligned} \label{D2est} \lvert D_2(\sigma,\zeta) \rvert \lesssim\sigma^{-\frac{2}{3}}\frac{\mathop{\mathrm{Bi}}}{\mathop{\mathrm{Ai}}}(\tau)\int\limits_{\tau}^{\infty}\mathop{\mathrm{Bi}}^{-2}(v)v \: dv =\sigma^{-\frac{2}{3}}\frac{\mathop{\mathrm{Bi}}}{\mathop{\mathrm{Ai}}}(\tau)\left[\pi\frac{\mathop{\mathrm{Ai}}}{\mathop{\mathrm{Bi}}}(\tau)\tau+\pi\int\limits_\tau^\infty \frac{\mathop{\mathrm{Ai}}}{\mathop{\mathrm{Bi}}}(u)\,du\right]\lesssim\sigma^{-\frac{4}{3}}\end{aligned}$$ where the second term in brackets may be estimated exactly as in the original bound for $\tilde{a}_1$. By using that $\lvert \partial_{\sigma} \{\dot{\tilde{a}}_2(\sigma,\zeta)\} \rvert\lesssim\sigma^{-2}$, we may similarly argue that $\lvert D_3(\sigma,\zeta) \rvert \lesssim\sigma^{-\frac{4}{3}}$. 
Thus, we conclude that $\lvert \partial_{\sigma} \{\tilde{a}_1(\sigma,\zeta)\} \rvert \lesssim\sigma^{-\frac{4}{3}}$. For the mixed derivative $\partial_{\sigma} \{\dot{\tilde{a}}_1(\sigma,\zeta)\}$, we differentiate ([\[BZetaDer\]](#BZetaDer){reference-type="ref" reference="BZetaDer"}) to find that $$\begin{aligned} \partial_{\sigma} \{\dot{\tilde{a}}_1(\sigma,\zeta)\} & =\frac{2}{3\pi}\sigma^{-\frac{5}{3}}(\mathop{\mathrm{Ai}}\mathop{\mathrm{Bi}})^{-1}(\tau)\tilde{a}_1(\sigma,\zeta) +\frac{2}{3\pi}\sigma^{-\frac{7}{3}}\zeta\frac{d}{du}\{(\mathop{\mathrm{Ai}}\mathop{\mathrm{Bi}})^{-1}(u)\}\vert_{u=\tau}\tilde{a}_1(\sigma,\zeta)\\ &-\pi^{-1}\sigma^{-\frac{2}{3}}(\mathop{\mathrm{Ai}}\mathop{\mathrm{Bi}})^{-1}(\tau)\partial_{\sigma} \{\tilde{a}_1(\sigma,\zeta)\} +\partial_{\sigma} \{\dot{\tilde{a}}_2(\sigma,\zeta)\} \,,\end{aligned}$$ and it is merely a matter of collecting previously derived bounds to deduce that each term is bounded by $\sigma^{-2}$, except the third, which is bounded by $\sigma^{-7/3}$. Finally, to bound the second $\sigma$-derivative of $\tilde{a}_1$, we comment on the derivatives of each of $D_i(\sigma,\zeta)$, $i=1,2,3$, separately. The expression $\partial_{\sigma}\{D_1(\sigma,\zeta)\}$ is essentially the same as $\partial_{\sigma} \{\dot{\tilde{a}}_2(\sigma,\zeta)\}$ with the loss of an additional $\sigma$ power, so it is bounded in terms of $\sigma^{-3}$. 
For $D_2(\sigma,\zeta)$, one differentiates to find that $$\begin{aligned} \lvert \partial_{\sigma} \{D_2(\sigma,\zeta) \} \rvert \lesssim\sigma^{-1}\lvert D_2(\sigma,\zeta) \rvert +\sigma^{-\frac{5}{3}}(\mathop{\mathrm{Ai}}\mathop{\mathrm{Bi}})^{-1}(\tau)\lvert D_2(\sigma,\zeta) \rvert +\\ \sigma^{-\frac{10}{3}}\frac{\mathop{\mathrm{Bi}}}{\mathop{\mathrm{Ai}}}(\tau)\int_{-\zeta}^{\infty} \frac{d}{du}\{\mathop{\mathrm{Bi}}^{-2}(u)\}\vert_{\sigma^{-\frac{2}{3}}v} v^2\: dv+\sigma^{-4}\frac{\mathop{\mathrm{Bi}}}{\mathop{\mathrm{Ai}}}(\tau)\int_{-\zeta}^{\infty} \mathop{\mathrm{Bi}}^{-2}(\sigma^{-\frac{2}{3}}v)v \: dv \,,\end{aligned}$$ where in the second integral we have used the bound $\lvert \partial_{\sigma}\{\dot{\tilde{a}}_2(\sigma,\zeta)\} \rvert\lesssim\sigma^{-2}$. Collecting bounds easily shows that the first term is bounded by $\sigma^{-\frac{7}{3}}$ and the second by $\sigma^{-\frac{10}{3}}$. The first integral is bounded in terms of $$\begin{aligned} \sigma^{-\frac{4}{3}}\frac{\mathop{\mathrm{Bi}}}{\mathop{\mathrm{Ai}}}(\tau)\int\limits_{\tau}^{\infty} \frac{d}{du}\{\mathop{\mathrm{Bi}}^{-2}(u)\}u^2 \: du =\sigma^{-\frac{4}{3}}\frac{\mathop{\mathrm{Bi}}}{\mathop{\mathrm{Ai}}}(\tau)\left[-\mathop{\mathrm{Bi}}^{-2}(\tau)\tau^2-2\int\limits_\tau^\infty\mathop{\mathrm{Bi}}^{-2}(u)u\:du\right]\lesssim\sigma^{-\frac{8}{3}} \end{aligned}$$ where we may bound the second term as in [\[D2est\]](#D2est){reference-type="eqref" reference="D2est"}. Similarly, the second integral is bounded by $\sigma^{-10/3}$. 
We now compute $$\begin{aligned} \partial_{\sigma}\{D_3(\sigma,\zeta)\} =\frac{2}{3\pi}\sigma^{-\frac{5}{3}}\zeta(\mathop{\mathrm{Ai}}\mathop{\mathrm{Bi}})^{-1}(\tau)D_3(\sigma,\zeta)+\frac{2}{3\pi}\sigma^{-\frac{5}{3}}\frac{\mathop{\mathrm{Bi}}}{\mathop{\mathrm{Ai}}}(\tau)\int_{-\zeta}^{\infty} \mathop{\mathrm{Bi}}^{-2}(\sigma^{-\frac{2}{3}}v)v \partial_{\sigma} \{\dot{\tilde{a}}_2(\sigma,-v)\} \: dv \\ +\frac{\mathop{\mathrm{Bi}}}{\mathop{\mathrm{Ai}}}(\tau)\int_{-\zeta}^{\infty} \frac{\mathop{\mathrm{Ai}}}{\mathop{\mathrm{Bi}}}(\sigma^{-\frac{2}{3}}v)\partial_{\sigma}^2\{\dot{\tilde{a}}_2(\sigma,-v)\} \: dv\,.\end{aligned}$$ The first two terms are easily bounded by $\sigma^{-\frac{10}{3}}$. Using that $\partial_{\sigma}^2\{\dot{\tilde{a}}_2 \}(\sigma,-v)=\frac{\partial}{\partial u}\{\partial_{\sigma}^2\{\tilde{a}_2(\sigma,u)\} \}\vert_{u=-v}$, we integrate by parts in the last term to rewrite it as $$\begin{aligned} \frac{\mathop{\mathrm{Bi}}}{\mathop{\mathrm{Ai}}}(\tau)\left(\frac{\mathop{\mathrm{Ai}}}{\mathop{\mathrm{Bi}}}(\tau)\partial_{\sigma}^2\{\tilde{a}_2(\sigma,\zeta)\}-\pi^{-1}\sigma^{-\frac{2}{3}}\int_{-\zeta}^{\infty} \mathop{\mathrm{Bi}}^{-2}(\sigma^{-\frac{2}{3}}v)\partial_{\sigma}^2\{\tilde{a}_2(\sigma,-v)\} \: dv \right)\,,\end{aligned}$$ which is controlled by $\partial_{\sigma}^2\{\tilde{a}_2(\sigma,\zeta)\}= O(\sigma^{-3})$. It follows then that $\partial_{\sigma}^2\{\tilde{a}_1\} =O(\sigma^{-\frac{10}{3}})$. This was the last bound we needed to demonstrate, so the proof of the lemma is complete. ◻ **Corollary 10**. *Let $\sigma$ be sufficiently small. 
Then for $x\in[\frac{1}{2},1+\delta]$, $$\begin{aligned} \label{esmall} e(\sigma,r(x))&=A(\sigma) (q(x))^{-\frac{1}{4}}\phi_1(\sigma,x)+B(\sigma)(q(x))^{-\frac{1}{4}}\phi_2(\sigma,x)\,,\\ A(\sigma)&= \sigma^{-\frac{1}{6}}(2+O(\sigma)),\,\,B(\sigma)=O\left( \sigma^{\frac{5}{6}}e^{-\sigma^{-1}(\frac{\pi}{2}-1)} \right) \nonumber\,.\end{aligned}$$* *Proof.* We match the Bessel function approximation of $e(\sigma,r(x))$ to the basis $\{q^{-\frac{1}{4}}\phi_1(\tau),q^{-\frac{1}{4}}\phi_2(\tau)\}$ at $x=\frac{1}{2}$. We have that $$\begin{aligned} \label{BesselHalf} e(\sigma,r(1/2))= \sqrt{2} \sigma^{-\frac 12} [ e^{\frac {\pi}{\sigma}} -1]^{-\frac 12}\sqrt{\eta_*}I_1(\sigma^{-1} \eta_*)(1+a(\sigma,\eta_*))\,,\end{aligned}$$ where $\eta_*=\eta(\frac{1}{2})$ and we have used that $\eta'(\frac{1}{2})=1$ because $\eta'=\sqrt{Q}$. Using [@NIST (10.40.1)], we have from ([\[BesselHalf\]](#BesselHalf){reference-type="ref" reference="BesselHalf"}) that $$\begin{aligned} e(\sigma,r(\frac{1}{2}))=2\sqrt{2} [ e^{\frac {\pi}{\sigma}} -1]^{-\frac 12}\sqrt{\eta_*}\frac{e^{\frac{\eta_*}{\sigma}}\sigma^{\frac{1}{2}}}{\sqrt{2\pi \eta_*} }(1+O(\sigma))&= \frac{2}{\sqrt{\pi}}e^{\frac{\eta_*}{\sigma}}[e^{\frac{\pi}{\sigma}}-1]^{-\frac{1}{2}}(1+O(\sigma))\\ &=\frac{e^{\sigma^{-1}(\eta_*-\frac{\pi}{2})}}{\sqrt{\pi}}(2+O(\sigma))\end{aligned}$$ and also $$\begin{aligned} \partial_x[e(\sigma,r(x))]_{x=\frac{1}{2}}=2\sqrt{2} [ e^{\frac {\pi}{\sigma}} -1]^{-\frac 12}\sqrt{\eta_*}\sigma^{-\frac{3}{2}}I_1'(\sigma^{-1}\eta_*)(1+O(\sigma))&=\frac{2}{\sqrt{\pi}\sigma}e^{\frac{\eta_*}{\sigma}}[e^{\frac{\pi}{\sigma}}-1]^{-\frac{1}{2}}(1+O(\sigma))\\ &=\frac{e^{\sigma^{-1}(\eta_*-\frac{\pi}{2})}}{\sqrt{\pi}\sigma}(2+O(\sigma))\,.\end{aligned}$$ We also have that $$\begin{aligned} \phi_1(\sigma,\zeta(\frac{1}{2}))=q(\frac{1}{2})^{-\frac{1}{4}}\mathop{\mathrm{Ai}}(\sigma^{-\frac{2}{3}}\zeta_*)(1+\sigma a_1(\sigma,\zeta_*))\\ 
\phi_2(\sigma,\zeta(\frac{1}{2}))=q(\frac{1}{2})^{-\frac{1}{4}}\mathop{\mathrm{Bi}}(\sigma^{-\frac{2}{3}}\zeta_*)(1+\sigma a_2(\sigma,\zeta_*))\end{aligned}$$ where $\zeta_*=-\zeta(\frac{1}{2})$. Because $\zeta_*>0$, we use the asymptotics [@NIST (9.7.5) and (9.7.6)] to see that $$\begin{aligned} \phi_1(\sigma,\zeta(\tfrac{1}{2}))=\sigma^{\frac{1}{6}}\frac{e^{-\frac{2\zeta_*^{\frac{3}{2}}}{3\sigma}}}{2\sqrt{\pi} }(1+O(\sigma))\\ \phi_2(\sigma,\zeta(\tfrac{1}{2}))=\sigma^{\frac{1}{6}}\frac{e^{\frac{2\zeta_*^{\frac{3}{2}}}{3\sigma}}}{\sqrt{\pi} }(1+O(\sigma))\end{aligned}$$ and [@NIST (9.7.6) and (9.7.8)] for $$\begin{aligned} &\partial_x[\phi_1(\sigma,\zeta(x))]_{x=\frac{1}{2}}=-q^{-\frac{1}{4}}(\frac{1}{2})\zeta'(\frac{1}{2})\sigma^{-\frac{2}{3}}\mathop{\mathrm{Ai}}'(\sigma^{-\frac{2}{3}}\zeta_*)(1+O(\sigma))=\sigma^{-\frac{5}{6}}\frac{e^{-\frac{2\zeta_*^{\frac{3}{2}}}{3\sigma}}}{2\sqrt{\pi} }(1+O(\sigma))\\ &\partial_x[\phi_2(\sigma,\zeta(x))]_{x=\frac{1}{2}}=-q^{-\frac{1}{4}}(\frac{1}{2})\zeta'(\frac{1}{2})\sigma^{-\frac{2}{3}}\mathop{\mathrm{Bi}}'(\sigma^{-\frac{2}{3}}\zeta_*)(1+O(\sigma)) =-\sigma^{-\frac{5}{6}}\frac{e^{\frac{2\zeta_*^{\frac{3}{2}}}{3\sigma}}}{\sqrt{\pi} }(1+O(\sigma))\end{aligned}$$ using that $q(\frac{1}{2})=\zeta_*^{-1}=[\zeta'(\frac{1}{2})]^2$. 
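The leading-order large-argument asymptotics of $\mathop{\mathrm{Ai}}$ and $\mathop{\mathrm{Bi}}$ just invoked are easy to confirm in floating point (in the displays above the $x^{1/4}$ factors cancel against $q^{-\frac{1}{4}}(\frac{1}{2})=\zeta_*^{1/4}$). A small sanity check, assuming SciPy is available:

```python
import numpy as np
from scipy.special import airy

def ai_leading(x):
    # leading term: Ai(x) ~ e^{-2 x^{3/2}/3} / (2 sqrt(pi) x^{1/4})
    return np.exp(-2.0 * x**1.5 / 3.0) / (2.0 * np.sqrt(np.pi) * x**0.25)

def bi_leading(x):
    # leading term: Bi(x) ~ e^{2 x^{3/2}/3} / (sqrt(pi) x^{1/4})
    return np.exp(2.0 * x**1.5 / 3.0) / (np.sqrt(np.pi) * x**0.25)

for x in [4.0, 6.0, 10.0]:
    ai, _, bi, _ = airy(x)
    # the relative error of the leading term decays like x^{-3/2}
    assert abs(ai / ai_leading(x) - 1.0) < 5.0 / x**1.5
    assert abs(bi / bi_leading(x) - 1.0) < 5.0 / x**1.5
```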
It follows that $$\begin{aligned} W\left[ \phi_1(\sigma,\zeta(\cdot)),\phi_2(\sigma,\zeta(\cdot)) \right]=-\frac{\sigma^{-\frac{2}{3}}}{\pi}(1+O(\sigma))\end{aligned}$$ where $W$ is the Wronskian evaluated at $x=\frac{1}{2}$ and also $$\begin{aligned} &W[e(\sigma,r(\cdot)),\phi_1(\sigma,\zeta(\cdot))]=O\left( \sigma^{\frac{1}{6}}e^{\sigma^{-1}(\eta_*-\frac{\pi}{2}-\frac{2}{3}\zeta_*^{\frac{3}{2}})} \right) =O\left( \sigma^{\frac{1}{6}}e^{-\sigma^{-1}(\frac{\pi}{2}-1)} \right) \\ &W[e(\sigma,r(\cdot)),\phi_2(\sigma,\zeta(\cdot))]=-\frac{\sigma^{-\frac{5}{6}}}{\pi}e^{\sigma^{-1}(\eta_*-\frac{\pi}{2}+\frac{2}{3}\zeta_*^{\frac{3}{2}})}(2+O(\sigma))=-\frac{\sigma^{-\frac{5}{6}}}{\sqrt{\pi} }(2+O(\sigma))\end{aligned}$$ where the last equality on each line follows from the facts that $\eta_*-\frac{2}{3}\zeta_*^{\frac{3}{2}}=1$ and $\eta_*+\frac{2}{3}\zeta_*^{\frac{3}{2}}=\frac{\pi}{2}$. Therefore, $$\begin{aligned} &A(\sigma)=\frac{W\left[e(\sigma,r(\cdot)),\phi_2(\sigma,\zeta(\cdot))\right]}{W\left[ \phi_1(\sigma,\zeta(\cdot)),\phi_2(\sigma,\zeta(\cdot)) \right]}= \sigma^{-\frac{1}{6}}(1+O(\sigma))\,,\\ &B(\sigma)=-\frac{W\left[e(\sigma,r(\cdot)),\phi_1(\sigma,\zeta(\cdot))\right]}{W\left[ \phi_1(\sigma,\zeta(\cdot)),\phi_2(\sigma,\zeta(\cdot)) \right]}=O\left( \sigma^{\frac{5}{6}}e^{-\sigma^{-1}(\frac{\pi}{2}-1)} \right)\,.\end{aligned}$$ ◻ ## Oscillatory Airy approximation: $x\gg 1$ **Proposition 11**. *When $\zeta\geq 0$, the potential $V$ satisfies the bounds $$\begin{aligned} |\partial^j_\zeta V(\zeta)|\lesssim\langle\zeta \rangle^{-2-j}\,\,\ j=0,1,2. \end{aligned}$$* *Proof.* Since by the above $\zeta'$ is smooth and non-vanishing, the identity $q=(\zeta')^2$ shows that near $0$, $V=-q^{-\frac{1}{4}}\frac{d^2q^{\frac{1}{4}}}{d\zeta^2}$ is bounded, so we need only show that $V$ has the claimed behavior as $\zeta\rightarrow\infty$. 
With ${ }^.=\frac{d}{d\zeta}$, one computes first that $$\begin{aligned} \label{Vfromq} q^{-\frac{1}{4}}\frac{d^2q^{\frac{1}{4}}}{d\zeta^2}=-\frac{3}{16}q^{-2} \dot{q}^{2}+\frac{1}{4}q^{-1}\ddot{q}\end{aligned}$$ so we need only find asymptotics for $q$ in terms of $\zeta$. Recalling the definition of $\zeta$ as a function of $x$, we have that for $x\geq 1$ $$\begin{aligned} \frac{3}{2}\zeta^{\frac{3}{2}}(x)=\int_{1}^{x} \sqrt{1-u^{-1}} \: du=\int_{1}^{x} \left(1-\tfrac{1}{2}u^{-1}+O(u^{-2})\right)\: du=x-\tfrac{1}{2}\log x+c+O(x^{-1}) \,.\end{aligned}$$ The chain rule applied to the above equality then shows that $\zeta'(x)=O(\zeta^{-\frac{1}{2}})$, where every derivative of $O(\zeta^{-\frac{1}{2}})$ loses a power of $\zeta$, that is, it exhibits symbol behavior. It follows then that $q=(\zeta')^2=O(\zeta^{-1})$ for $x$ large, from which the bound on $V$ follows from ([\[Vfromq\]](#Vfromq){reference-type="ref" reference="Vfromq"}) and one may obtain the bounds on $V'$ and $V^{(2)}$ by differentiating this equality. ◻ **Proposition 13**. *For any $\sigma>0$ sufficiently small, the following holds: in the range $\zeta\geq 0$, a basis of solutions to ([\[zetaODE\]](#zetaODE){reference-type="ref" reference="zetaODE"}) is given by $$\begin{aligned} \begin{split}\label{oscBasis} \psi_+=(\mathop{\mathrm{Ai}}(\tau)+i\mathop{\mathrm{Bi}}(\tau))[1+\sigma b_+(\sigma,\zeta)]\\ \psi_-=(\mathop{\mathrm{Ai}}(\tau)-i\mathop{\mathrm{Bi}}(\tau))[1+\sigma b_-(\sigma,\zeta)] \end{split} \end{aligned}$$ with $\tau=-\sigma^{-\frac{2}{3}}\zeta$ and $b_\pm$ are smooth functions that satisfy the bounds $$\begin{aligned} &\lvert b_{\pm}(\sigma,\zeta) \rvert \lesssim\left\langle \zeta \right\rangle^{-\frac{3}{2}},\,\, \lvert \dot{b}_{\pm}(\sigma,\zeta) \rvert \lesssim\sigma^{-\frac{1}{3}}\left\langle \sigma^{-\frac{2}{3}}\zeta \right\rangle^{-\frac{1}{2}}\left\langle \zeta \right\rangle^{-2},\,\, \lvert \ddot{b}_{\pm}(\sigma,\zeta) \rvert \lesssim\sigma^{-1}\left\langle \zeta \right\rangle^{-2}\\ 
&\partial_{\sigma}[b_{\pm}(\sigma,\zeta)]\lesssim\sigma^{-1}\left\langle \zeta \right\rangle^{-\frac{3}{2}},\,\,\partial_{\sigma}^2[b_{\pm}(\sigma,\zeta)]\lesssim\sigma^{-3}, \,\,\, \partial_{\sigma}[\dot{b}_{\pm}(\sigma,\zeta)]\lesssim\sigma^{-2}\left\langle \zeta \right\rangle^{-1}.\end{aligned}$$* **Remark 12**. *This proposition is very similar to Proposition 9 of [@Wmain]. Indeed, the bounds on $b_{\pm}$ and $\dot{b}_{\pm}$ are produced via the same proof, as the only inputs are the asymptotics of $V$, which in this regime of $\zeta$ are the same as in that paper. However, for our purposes we also require an additional derivative in $\zeta$ and derivatives in the semi-classical parameter $\sigma$ ($\hbar$ in [@Wmain]). Note that in both settings, a representation of the form ([\[oscBasis\]](#oscBasis){reference-type="ref" reference="oscBasis"}) is only possible because $\mathop{\mathrm{Ai}}$ and $\mathop{\mathrm{Bi}}$ have no common zeroes, and therefore ([\[oscBasis\]](#oscBasis){reference-type="ref" reference="oscBasis"}) does not fix the zeroes of any solution.* *Proof.* Let $\psi_{\pm,0}(\zeta,\sigma)=\mathop{\mathrm{Ai}}(\tau)\pm i\mathop{\mathrm{Bi}}(\tau)$. Similar to [\[ajEq\]](#ajEq){reference-type="eqref" reference="ajEq"}, we obtain the equations $$\begin{aligned} \left( \psi_{\pm,0}^2\dot{b}_\pm \right)^.=-\sigma^{-1}V\psi_{\pm,0}^2(1+\sigma b_\pm) \end{aligned}$$ whose solution with $b_\pm(\infty)=0$ and $\dot{b}_\pm(\infty)=0$ is given by $$\begin{aligned} \label{voltEq} b_{\pm}(\zeta)=-\sigma^{-1}\int_{\zeta}^{\infty}\int_{\zeta}^{u} \psi_{\pm,0}^{-2}(v)\: dv\:\psi_{\pm,0}^2(u)V(u)(1+\sigma b_\pm(u)) \: du\,, \end{aligned}$$ where for now we have suppressed the dependence on $\sigma$ in the integrand. 
From [@Olver], we have the asymptotic expansion $$\begin{aligned} \label{oscAiry} \mathop{\mathrm{Ai}}(-z)\pm i\mathop{\mathrm{Bi}}(-z)=\frac{1}{\sqrt{\pi} z^{\frac{1}{4}}}e^{\mp i(\frac{2}{3}z^{\frac{3}{2}}-\frac{\pi}{4})}(1+O(z^{-\frac{3}{2}})) \end{aligned}$$ where the $O(z^{-\frac{3}{2}})$ term may be differentiated as a symbol. Thus, for $0<x_0<x_1$ $$\begin{gathered} \label{psiinverse} \int_{x_0}^{x_1} (\mathop{\mathrm{Ai}}(-z)\pm i\mathop{\mathrm{Bi}}(-z))^{-2}\: dz \\ =\int_{x_0}^{x_1} z^{\frac{1}{2}}e^{\mp i\frac{4}{3}z^{\frac{3}{2}}}a(z)\: dz =\mp\frac{1}{2i}e^{\mp i \frac{4}{3}z^{\frac{3}{2}}}a(z)\bigg\vert_{x_0}^{x_1}\pm \frac{1}{2i}\int_{x_0}^{x_1} e^{\mp i \frac{4}{3}z^{\frac{3}{2}}}a'(z)\: dz\end{gathered}$$ for $a(z)=1+O(z^{-\frac{3}{2}})$. This shows that the above integral is $O(1)$ for all $x_0$ and $x_1$. The main term with respect to $\sigma$ in ([\[voltEq\]](#voltEq){reference-type="ref" reference="voltEq"}), $$\begin{aligned} b_{\pm,0}(\zeta)=-\sigma^{-1}\int_{\zeta}^{\infty}\int_{\zeta}^{u} \psi_{\pm,0}^{-2}(v)\: dv\:\psi_{\pm,0}^2(u)V(u)\:du\,,\end{aligned}$$ satisfies the bound $$\begin{aligned} \lvert b_{\pm,0}(\zeta) \rvert \lesssim\sigma^{\frac{1}{3}}\int_{\sigma^{-\frac{2}{3}}\zeta}^{\infty} \left\langle u \right\rangle^{-\frac{1}{2}}\lvert V(\sigma^{\frac{2}{3}}u) \rvert \: du \end{aligned}$$ where we have changed variables and used that by ([\[oscAiry\]](#oscAiry){reference-type="ref" reference="oscAiry"}) $$\begin{aligned} \lvert (\mathop{\mathrm{Ai}}(-z)+i\mathop{\mathrm{Bi}}(-z))^{2} \rvert \lesssim\left\langle z \right\rangle^{-\frac{1}{2}}\,. \end{aligned}$$ By Proposition [Proposition 11](#Vbound){reference-type="ref" reference="Vbound"}, we see then that $$\begin{aligned} \lvert b_{\pm,0}(\zeta) \rvert \lesssim\sigma^{\frac{1}{3}}\int_{\sigma^{-\frac{2}{3}}\zeta}^{\infty} \left\langle u \right\rangle^{-\frac{1}{2}}\left\langle \sigma^{\frac{2}{3}}u \right\rangle^{-2}\: du \lesssim\left\langle \zeta 
\right\rangle^{-\frac{3}{2}}\,,\end{aligned}$$ where the last inequality comes from bounding the integrand by $\sigma^{-\frac{4}{3}}u^{-\frac{5}{2}}$ when $\sigma^{-\frac{2}{3}}\zeta$ is large. We can now extend this bound to $b_{\pm}$ by a contraction argument, as is explained in Proposition [Proposition 3](#prop:bessel){reference-type="ref" reference="prop:bessel"}, by considering the linear operator $$Ta= -\sigma^{-1}\int_{\zeta}^{\infty}\int_{\zeta}^{u} \psi_{\pm,0}^{-2}(v)\: dv\:\psi_{\pm,0}^2(u)V(u)a(u) \: du$$ as a map on the weighted space $\left\langle \zeta \right\rangle^{-\frac{3}{2}} L^\infty_\zeta$. For the $\zeta$-derivative bounds, we first write $$\begin{aligned} \label{bdotInt} \dot{b}_\pm(\zeta)=\sigma^{-1}\psi_{\pm,0}^{-2}(\zeta)\int_{\zeta}^{\infty} \psi_{\pm,0}^2(u)V(u)(1+\sigma b_\pm(u))\: du \end{aligned}$$ and use ([\[oscAiry\]](#oscAiry){reference-type="ref" reference="oscAiry"}) to see that $\psi_{\pm,0}^2(\zeta)=e^{i\frac{4}{3\sigma}\zeta^{\frac{3}{2}}}\omega(\sigma^{-\frac{2}{3}}\zeta)$ for some $\omega(u)$ with $\lvert \omega(u) \rvert \lesssim\left\langle u \right\rangle^{-\frac{1}{2}}$ and $\lvert \omega'(u) \rvert \lesssim\left\langle u \right\rangle^{-\frac{3}{2}}$. 
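The chain of estimates for $b_{\pm,0}$ can be sanity-checked by quadrature: writing $\left\langle u \right\rangle=(1+u^2)^{1/2}$, the quantity $\sigma^{\frac{1}{3}}\int_{\sigma^{-2/3}\zeta}^{\infty}\left\langle u \right\rangle^{-\frac{1}{2}}\left\langle \sigma^{2/3}u \right\rangle^{-2}\,du$ indeed stays comparable to $\left\langle \zeta \right\rangle^{-\frac{3}{2}}$ uniformly as $\sigma$ shrinks. A rough numerical confirmation (not part of the argument), assuming SciPy is available:

```python
import numpy as np
from scipy.integrate import quad

def bracket(u):
    # Japanese bracket <u> = (1 + u^2)^{1/2}
    return np.sqrt(1.0 + u * u)

def b0_bound(sigma, zeta):
    # sigma^{1/3} * int_{sigma^{-2/3} zeta}^infty <u>^{-1/2} <sigma^{2/3} u>^{-2} du
    lower = sigma ** (-2.0 / 3.0) * zeta
    val, _ = quad(
        lambda u: bracket(u) ** -0.5 * bracket(sigma ** (2.0 / 3.0) * u) ** -2.0,
        lower, np.inf)
    return sigma ** (1.0 / 3.0) * val

# the ratio against <zeta>^{-3/2} stays bounded as sigma -> 0
for sigma in [1e-2, 1e-3]:
    for zeta in [0.0, 0.5, 1.0, 2.0, 4.0]:
        ratio = b0_bound(sigma, zeta) / bracket(zeta) ** -1.5
        assert 0.0 < ratio < 5.0
```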
When $\zeta >\sigma^{\frac{2}{3}}$, we may exploit the oscillatory phase by integrating by parts in the above integral via $$\begin{aligned} \psi_{\pm,0}^2=(2i\zeta^{\frac{1}{2}}/\sigma)^{-1}\omega(\sigma^{-\frac{2}{3}}\zeta)\frac{d}{d\zeta}[e^{i\frac{4}{3\sigma}\zeta^{\frac{3}{2}}}]\end{aligned}$$ to find that $$\begin{aligned} \dot{b}_\pm(\zeta)=\frac{1}{2i}\psi^{-2}_{\pm,0}(\zeta)[u^{-\frac{1}{2}}\omega(\sigma^{-\frac{2}{3}}u)e^{i\frac{4}{3\sigma}u^{\frac{3}{2}}}V(u)(1+\sigma b_{\pm}(u))]\bigg\vert_{\zeta}^\infty\label{bdot_1}\\ -\frac{1}{2i}\psi_{\pm,0}^{-2}(\zeta)\int_{\zeta}^{\infty}e^{i\frac{4}{3\sigma}u^{\frac{3}{2}}}\frac{d}{du}[u^{-\frac{1}{2}}\omega(\sigma^{-\frac{2}{3}}u)V(u)(1+\sigma b_{\pm}(u))] \: du \label{bdot_2}\,.\end{aligned}$$ Using the bounds on $\omega$ we see that the term on the right hand side of the equality in ([\[bdot_1\]](#bdot_1){reference-type="ref" reference="bdot_1"}) is bounded by $$\begin{aligned} \zeta^{-\frac{1}{2}}\left\langle \sigma^{-\frac{2}{3}}\zeta \right\rangle^{-\frac{1}{2}}\left\langle \zeta \right\rangle^{-2}\lesssim\sigma^{-\frac{1}{3}}\left\langle \sigma^{-\frac{2}{3}}\zeta \right\rangle^{-\frac{1}{2}}\left\langle \zeta \right\rangle^{-2}\,,\end{aligned}$$ with the last inequality coming from the assumption on $\zeta$. 
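The modulus bound $\lvert \psi_{\pm,0}^2(\zeta)\rvert \lesssim \left\langle \sigma^{-2/3}\zeta\right\rangle^{-\frac{1}{2}}$ underlying these manipulations reflects the fact that $\mathop{\mathrm{Ai}}(-z)^2+\mathop{\mathrm{Bi}}(-z)^2\to (\pi\sqrt{z})^{-1}$ as $z\to\infty$: the oscillations cancel in the squared modulus. A quick numerical check, assuming SciPy is available:

```python
import numpy as np
from scipy.special import airy

# Ai(-z)^2 + Bi(-z)^2 -> 1/(pi sqrt(z)) as z -> infinity: the oscillations of
# Ai and Bi cancel in the squared modulus of Ai(-z) + i Bi(-z).
for z in [5.0, 10.0, 25.0, 50.0]:
    ai, _, bi, _ = airy(-z)
    modulus_sq = ai**2 + bi**2
    predicted = 1.0 / (np.pi * np.sqrt(z))
    assert abs(modulus_sq / predicted - 1.0) < 1.0 / z
```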
By differentiating the product in the integrand of ([\[bdot_2\]](#bdot_2){reference-type="ref" reference="bdot_2"}), one checks via the bounds on $\omega$ and Proposition [Proposition 11](#Vbound){reference-type="ref" reference="Vbound"} that every term other than the term in which the derivative falls on $b_\pm$ is $O(\sigma^{-\frac{1}{3}}\left\langle \sigma^{-\frac{2}{3}}\zeta \right\rangle^{-\frac{1}{2}}\left\langle \zeta \right\rangle^{-2})$ so that we may write $$\begin{aligned} \dot{b}_\pm(\zeta)= O(\sigma^{-\frac{1}{3}}\left\langle \sigma^{-\frac{2}{3}}\zeta \right\rangle^{-\frac{1}{2}}\left\langle \zeta \right\rangle^{-2}) -\frac{\sigma}{2i}\psi_{\pm,0}^{-2}(\zeta)\int_{\zeta}^{\infty}e^{i\frac{4}{3\sigma}\zeta^{\frac{3}{2}}}O(u^{-\frac{1}{2}} \left\langle \sigma^{-\frac{2}{3}}u \right\rangle^{-\frac{1}{2}}\left\langle u \right\rangle^{-2})\dot{b}_\pm(u) \: du\,.\end{aligned}$$ Iterating this equality shows that the second term is dominated by the first, whence $| \dot{b}_{\pm}(\zeta)| \lesssim\sigma^{-\frac{1}{3}}\left\langle \sigma^{-\frac{2}{3}}\zeta \right\rangle^{-\frac{1}{2}}\left\langle \zeta \right\rangle^{-2}$. As for the case when $\sigma^{-\frac{2}{3}}\zeta \leq 1$, we simply write ([\[bdotInt\]](#bdotInt){reference-type="ref" reference="bdotInt"}) as $$\begin{aligned} \sigma^{-1}\psi_{\pm,0}^{-2}(\zeta)\int_{\zeta}^{\sigma^{\frac{2}{3}}} \psi_{\pm,0}^2(u)V(u)(1+\sigma b_\pm(u))\: du\\ +\sigma^{-1}\psi_{\pm,0}^{-2}(\zeta)\int_{\sigma^{\frac{2}{3}}}^{\infty} \psi_{\pm,0}^2(u)V(u)(1+\sigma b_\pm(u))\: du \,,\end{aligned}$$ where the first term is clearly bounded by $\sigma^{-\frac{1}{3}}$ and the second one can be estimated similarly to [\[bdot_1\]](#bdot_1){reference-type="eqref" reference="bdot_1"}. 
Thus, in either case we see that $$\begin{aligned} \lvert \dot{b}_{\pm}(\zeta) \rvert \lesssim\sigma^{-\frac{1}{3}}\left\langle \sigma^{-\frac{2}{3}}\zeta \right\rangle^{-\frac{1}{2}}\left\langle \zeta \right\rangle^{-2}\end{aligned}$$ as claimed. For the second $\zeta$-derivative, we differentiate ([\[bdotInt\]](#bdotInt){reference-type="ref" reference="bdotInt"}) to find that $$\begin{aligned} \ddot{b}_{\pm}(\zeta)=-\sigma^{-1} V(\zeta)(1+\sigma b_{\pm}(\zeta))-2\dot{\psi}_{\pm,0}(\zeta)\psi_{\pm,0}^{-1}(\zeta)\dot{b}_{\pm}(\zeta)\end{aligned}$$ Since $\lvert \psi_{\pm,0}^{-1}(\zeta) \rvert \lesssim\left\langle \sigma^{-\frac{2}{3}}\zeta \right\rangle^{\frac{1}{4}}$ and by ([\[oscAiry\]](#oscAiry){reference-type="ref" reference="oscAiry"}) $\lvert \dot{\psi}_{\pm,0}(\zeta) \rvert \lesssim\sigma^{-\frac{2}{3}}\left\langle \sigma^{-\frac{2}{3}}\zeta \right\rangle^{\frac{1}{4}}$, the previously derived bounds on $b_\pm$ and $\dot{b}_{\pm}$ show that $$\begin{aligned} \lvert \ddot{b}_{\pm}(\zeta) \rvert \lesssim\sigma^{-1}\left\langle \zeta \right\rangle^{-2}\end{aligned}$$ We now demonstrate the bounds on the $\sigma$-derivatives of $b_{\pm}$. 
To begin, we rewrite ([\[voltEq\]](#voltEq){reference-type="ref" reference="voltEq"}) as $$\begin{aligned} b_\pm(\sigma,\zeta)&=-\sigma^{\frac{1}{3}}\int_{\sigma^{-\frac{2}{3}}\zeta}^{\infty}\int\limits_{\sigma^{-\frac{2}{3}}\zeta}^{u} (\mathop{\mathrm{Ai}}\pm i\mathop{\mathrm{Bi}})^{-2}(-v)\: dv (\mathop{\mathrm{Ai}}\pm i\mathop{\mathrm{Bi}})^2(-u)V(\sigma^{\frac{2}{3}}u)(1+\sigma b(\sigma,\sigma^{\frac{2}{3}}u))\: du \\ &=-\sigma^{\frac{1}{3}}\int_{\sigma^{-\frac{2}{3}}\zeta}^{\infty}\int\limits_{\sigma^{-\frac{2}{3}}\zeta}^{u} (\mathop{\mathrm{Ai}}\pm i\mathop{\mathrm{Bi}})^{-2}(-v)\: dv (\mathop{\mathrm{Ai}}\pm i\mathop{\mathrm{Bi}})^2(-u)F(u,\sigma) \:du\end{aligned}$$ and then differentiate with respect to $\sigma$ to find that $$\begin{aligned} \partial_{\sigma}[b_{\pm}(\sigma,\zeta)]&=\frac{1}{3}\sigma^{-1}b_{\pm}(\sigma,\zeta)\\ &-\frac{2}{3}\sigma^{-\frac{4}{3}}\psi_{\pm,0}^{-2}(\zeta)\zeta\int_{\sigma^{-\frac{2}{3}}\zeta}^{\infty} (\mathop{\mathrm{Ai}}\pm i\mathop{\mathrm{Bi}})^2(-u)F(u,\sigma)\: du \\ &-\sigma^{\frac{1}{3}}\int_{\sigma^{-\frac{2}{3}}\zeta}^{\infty}\int\limits_{\sigma^{-\frac{2}{3}}\zeta}^{u} (\mathop{\mathrm{Ai}}\pm i\mathop{\mathrm{Bi}})^{-2}(-v)\: dv (\mathop{\mathrm{Ai}}\pm i\mathop{\mathrm{Bi}})^2(-u)\partial_\sigma[F(u,\sigma)] \:du \\ &=: F_1^{\pm}(\sigma,\zeta)+ F_2^{\pm}(\sigma,\zeta)+F_3^{\pm}(\sigma,\zeta).\end{aligned}$$ Our previous bound on $b$ shows that $|F^{\pm}_1(\sigma,\zeta)|\lesssim\sigma^{-1}\left\langle \zeta \right\rangle^{-\frac{3}{2}}$. Also, from ([\[bdotInt\]](#bdotInt){reference-type="ref" reference="bdotInt"}), we see that $F_2^{\pm}(\sigma,\zeta)= -\frac{2}{3}\sigma^{-1}\zeta\dot{b}_{\pm}(\sigma,\zeta)$ and is therefore bounded by $\sigma^{-\frac{4}{3}}\left\langle \sigma^{-\frac{2}{3}}\zeta \right\rangle^{-\frac{1}{2}}\left\langle \zeta \right\rangle^{-2}\zeta$, which is again bounded by $\sigma^{-1}\left\langle \zeta \right\rangle^{-\frac{3}{2}}$. 
We compute that $$\begin{aligned} \begin{split} \partial_\sigma[F(u,\sigma)]&= \frac{2}{3}\sigma^{-\frac{1}{3}}V'(\sigma^{\frac{2}{3}}u)u(1+\sigma b_{\pm}(\sigma,\sigma^{\frac{2}{3}}u))+V(\sigma^{\frac{2}{3}}u)b_{\pm}(\sigma,\sigma^{\frac{2}{3}}u)\\ &+\frac{2}{3}\sigma^{\frac{2}{3}}uV(\sigma^{\frac{2}{3}}u)\dot{b}_{\pm}(\sigma,\sigma^{\frac{2}{3}}u)+\sigma V(\sigma^{\frac{2}{3}}u)\partial_{\sigma}[b_{\pm}(\sigma,v)]_{v=\sigma^{\frac 23}u} \end{split}\label{sigmaF}\\ &=O\left( \sigma^{-\frac{1}{3}}\left\langle u \right\rangle\left\langle \sigma^{\frac{2}{3}}u \right\rangle^{-3} \right)+ O\left( \left\langle \sigma^{\frac{2}{3}}u \right\rangle^{-\frac{7}{2}} \right) + O\left( \sigma^{\frac{1}{3}}\left\langle u \right\rangle^{\frac{1}{2}}\left\langle \sigma^{\frac{2}{3}}u \right\rangle^{-4} \right) \nonumber\\ &+\sigma V(\sigma^{\frac{2}{3}}u)\partial_{\sigma}[b_{\pm}(\sigma,v)]_{v=\sigma^{\frac 23}u} \nonumber\\ &=O\left( \sigma^{-\frac{1}{3}}\left\langle u \right\rangle\left\langle \sigma^{\frac{2}{3}}u \right\rangle^{-3} \right)+\sigma V(\sigma^{\frac{2}{3}}u)\partial_{\sigma}[b_{\pm}(\sigma,v)]_{v=\sigma^{\frac 23}u}\nonumber\,.\end{aligned}$$ Inserting the last line into $F_3^{\pm}(\sigma,\zeta)$ shows that $$\begin{aligned} F_3^{\pm}(\sigma,\zeta)&=\int\limits_{\sigma^{-\frac{2}{3}}\zeta}^{\infty} O\left( \left\langle u \right\rangle^{\frac{1}{2}}\left\langle \sigma^{\frac{2}{3}}u \right\rangle^{-3} \right) \: du +T\Big(\partial_\sigma[b_{\pm}(\sigma,v)]_{v=\sigma^{\frac 23}u}\Big)\\ &=O\left( \sigma^{-1}\left\langle \zeta \right\rangle^{-\frac{3}{2}} \right) +T\Big(\partial_\sigma[b_{\pm}(\sigma,v)]_{v=\sigma^{\frac 23}u}\Big)\end{aligned}$$ with $$\begin{aligned} T(a)=-\sigma^{\frac{4}{3}}\int_{\sigma^{-\frac{2}{3}}\zeta}^{\infty}\int\limits_{\sigma^{-\frac{2}{3}}\zeta}^{u} (\mathop{\mathrm{Ai}}\pm i\mathop{\mathrm{Bi}})^{-2}(-v)\: dv (\mathop{\mathrm{Ai}}\pm i\mathop{\mathrm{Bi}})^2(-u) V(\sigma^{\frac{2}{3}}u)a(\sigma^{\frac{2}{3}}u) \:du\,.\end{aligned}$$ 
Collecting various bounds, we have shown that $$\begin{aligned} \partial_{\sigma}[b_{\pm}(\sigma,\zeta)]&=O\left( \sigma^{-1}\left\langle \zeta \right\rangle^{-\frac{3}{2}}\right)+T\Big(\partial_\sigma[b_{\pm}(\sigma,v)]_{v=\sigma^{\frac 23}u}\Big)\,.\label{sigmabFixedPt}\end{aligned}$$ For small enough $\sigma$, $T$ is a contraction on the weighted space $\left\langle \zeta \right\rangle^{-\frac{3}{2}}L^\infty_\zeta$ because $$\begin{aligned} \lvert T(\left\langle \zeta \right\rangle^{-\frac{3}{2}}) \rvert \lesssim\sigma^{\frac{4}{3}}\int_{\sigma^{-\frac{2}{3}}\zeta}^{\infty} \left\langle u \right\rangle^{-\frac{1}{2}}\left\langle \sigma^{\frac{2}{3}}u \right\rangle^{-\frac{7}{2}}\: du \lesssim\sigma \left\langle \zeta \right\rangle^{-3}\end{aligned}$$ so we may conclude that the first term in ([\[sigmabFixedPt\]](#sigmabFixedPt){reference-type="ref" reference="sigmabFixedPt"}) bounds the second, i.e. $$\begin{aligned} \lvert \partial_{\sigma}[b_{\pm}(\sigma,\zeta)] \rvert \lesssim\sigma^{-1}\left\langle \zeta \right\rangle^{-\frac{3}{2}}\,.\end{aligned}$$ The second $\sigma$-derivative will require an estimate on the mixed derivative $\partial_\sigma[\dot{b}_{\pm}(\sigma,\zeta)]$. This easily follows from the bounds we have in hand by differentiating in $\zeta$ each of $F_i^{\pm}(\sigma,\zeta)$ for $i=1,2,3$. 
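The contraction step used here (and in the earlier fixed-point arguments) is ordinary Picard iteration for a Volterra-type equation $b=f+Tb$ with operator norm below one. A toy discretized illustration, with a hypothetical kernel and data chosen only to make the operator norm small:

```python
import numpy as np

# Toy model of b = f + T b with (T b)(x) = eps * int_x^L e^{-(u-x)} b(u) du.
# For eps < 1 the map b -> f + T b contracts in the sup norm, so Picard
# iteration converges geometrically -- the mechanism behind the bounds above.
L, n, eps = 10.0, 800, 0.2
x = np.linspace(0.0, L, n)
dx = x[1] - x[0]
f = np.exp(-x)

def T(b):
    out = np.empty_like(b)
    for i in range(n):
        vals = np.exp(-(x[i:] - x[i])) * b[i:]
        # trapezoidal rule on [x_i, L]
        out[i] = eps * dx * (vals.sum() - 0.5 * (vals[0] + vals[-1]))
    return out

b = np.zeros_like(x)
diffs = []
for _ in range(8):
    b_new = f + T(b)
    diffs.append(np.max(np.abs(b_new - b)))
    b = b_new

# successive corrections shrink at rate ~eps, i.e. geometrically
assert all(d2 < 0.5 * d1 for d1, d2 in zip(diffs, diffs[1:]))
```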
Clearly $\partial_{\zeta}[F_1^{\pm}(\sigma,\zeta)]$ contributes $\sigma^{-\frac{4}{3}}\left\langle \sigma^{-\frac{2}{3}}\zeta \right\rangle^{-\frac{1}{2}}\left\langle \zeta \right\rangle^{-2}$ while it is easy to see that $$\begin{aligned} \partial_{\zeta} [ F_2^{\pm}(\sigma,\zeta) ]&=-2\psi^{-1}_{\pm,0}(\zeta)\dot{\psi}_{\pm,0}(\zeta)F_2^{\pm}(\sigma,\zeta)+\zeta^{-1}F_2^{\pm}(\sigma,\zeta)+\frac{2}{3}\sigma^{-2}\zeta V(\zeta)(1+\sigma b_{\pm}(\sigma,\zeta))\\ &=O\left( \sigma^{-2} \left\langle \zeta \right\rangle^{-1} \right) \,.\end{aligned}$$ Finally, the bound we have obtained on $\partial_{\sigma}[b_{\pm}(\sigma,\zeta)]$ may be used in ([\[sigmaF\]](#sigmaF){reference-type="ref" reference="sigmaF"}) to show that $$\begin{aligned} \partial_{\zeta}[ F_3^{\pm}(\sigma,\zeta)]&=\sigma^{-\frac{1}{3}}\psi^{-2}_{\pm,0}(\zeta)\int\limits_{\sigma^{-\frac{2}{3}}\zeta}^{\infty} (\mathop{\mathrm{Ai}}\pm i \mathop{\mathrm{Bi}})^2(-u)\partial_\sigma[F(u,\sigma)]\: du \\ &\lesssim\sigma^{-\frac{2}{3}} \left\langle \sigma^{-\frac{2}{3}}\zeta \right\rangle^{\frac{1}{2}}\int\limits_{\sigma^{-\frac{2}{3}}\zeta}^{\infty} \left\langle u \right\rangle^{\frac{1}{2}}\left\langle \sigma^{\frac{2}{3}}u \right\rangle^{-3} \: du \lesssim\sigma^{-2}\left\langle \zeta \right\rangle^{-1} \end{aligned}$$ for a total bound of $$\begin{aligned} \partial_{\sigma}[\dot{b}_{\pm}(\sigma,\zeta)]\lesssim\sigma^{-2}\left\langle \zeta \right\rangle^{-1} \,.\end{aligned}$$ For the second $\sigma$-derivative, we proceed similarly and differentiate each of $F_i^{\pm}(\sigma,\zeta)$. 
First, $$\begin{aligned} \partial_\sigma[F_1^{\pm}(\sigma,\zeta)]=-\sigma^{-1}(F_1^{\pm}(\sigma,\zeta))+\frac{1}{3}\sigma^{-1}\partial_\sigma[b_\pm(\sigma,\zeta)]\lesssim\sigma^{-2}\left\langle \zeta \right\rangle^{-\frac{3}{2}}\end{aligned}$$ whereas $$\begin{aligned} \partial_{\sigma}[F_2^{\pm}&(\sigma,\zeta)]=-\frac{4}{3}\sigma^{-1}F_2^{\pm}(\sigma,\zeta)-\frac{4}{3}\sigma^{-\frac{5}{3}}\zeta\psi_{\pm,0}^{-1}(\zeta)(\mathop{\mathrm{Ai}}\pm i \mathop{\mathrm{Bi}})'(-\sigma^{-\frac{2}{3}}\zeta)F_2^{\pm}(\sigma,\zeta)\\ &-\frac{4}{9}\sigma^{-3}\zeta^2V(\zeta)(1+\sigma b_{\pm}(\sigma,\zeta))-\frac{2}{3}\sigma^{-\frac{4}{3}}\psi^{-2}_{\pm,0}(\zeta)\zeta\int\limits_{\sigma^{-\frac{2}{3}}\zeta}^{\infty} (\mathop{\mathrm{Ai}}\pm i\mathop{\mathrm{Bi}})^2(-u)\partial_\sigma[F(u,\sigma)]\: du \,.\end{aligned}$$ The first term is $O\left( \sigma^{-2}\left\langle \zeta \right\rangle^{-\frac{3}{2}} \right)$ and the second term is $O\left( \sigma^{-3} \right)$ because we have already shown that $|F_2^{\pm}(\sigma,\zeta)|\lesssim\sigma^{-\frac{4}{3}}\left\langle \sigma^{-\frac{2}{3}}\zeta \right\rangle^{-\frac{1}{2}}\left\langle \zeta \right\rangle^{-2}\zeta$. The third term is also $O(\sigma^{-3})$ and the fourth is bounded by $\sigma^{-\frac{5}{3}}\psi_{\pm,0}^{-2}(\zeta)\zeta F_3^{\pm}(\sigma,\zeta)$ which is again $O(\sigma^{-3})$ by the previously obtained bound on $F_3^{\pm}(\sigma,\zeta)$. 
We compute further that $$\begin{aligned} \partial_\sigma[F_3^{\pm}(\sigma,\zeta)]&=\frac{1}{3}\sigma^{-1}F_3^{\pm}(\sigma,\zeta)-\frac{2}{3}\sigma^{-\frac{4}{3}}\psi_{\pm,0}^{-2}(\zeta)\int\limits_{\sigma^{-\frac{2}{3}}\zeta}^{\infty} (\mathop{\mathrm{Ai}}\pm i\mathop{\mathrm{Bi}})^2(-u)\partial_\sigma[F(u,\sigma)]\: du\\ &-\sigma^{\frac{1}{3}}\int_{\sigma^{-\frac{2}{3}}\zeta}^{\infty}\int\limits_{\sigma^{-\frac{2}{3}}\zeta}^{u} (\mathop{\mathrm{Ai}}\pm i \mathop{\mathrm{Bi}})^{-2}(-v)\: dv (\mathop{\mathrm{Ai}}\pm i\mathop{\mathrm{Bi}})^2(-u)\partial_\sigma^2[F(u,\sigma)]\: du \,.\end{aligned}$$ Arguing similarly, one may easily see that the first two terms are $O(\sigma^{-3})$. For the last term, we need to estimate $\partial_\sigma^2[F(u,\sigma)]$. A series of elementary operations shows that $$\begin{aligned} \partial_{\sigma}^2[F(u,\sigma)] =O\left( \sigma^{-\frac{4}{3}}\left\langle \sigma^{\frac{2}{3}}u \right\rangle^{-2} \left\langle u \right\rangle \right) +\sigma V(\sigma^{\frac{2}{3}}u)\partial_{\sigma}^2[b_{\pm}(\sigma,v)]_{v=\sigma^{\frac{2}{3}}u}\end{aligned}$$ and therefore $$\begin{aligned} \partial_{\sigma}[F_3^{\pm}(\sigma,\zeta)]&=O(\sigma^{-3})\\ &-\sigma^{\frac{1}{3}}\int_{\sigma^{-\frac{2}{3}}\zeta}^{\infty}\int\limits_{\sigma^{-\frac{2}{3}}\zeta}^{u} (\mathop{\mathrm{Ai}}\pm i \mathop{\mathrm{Bi}})^{-2}(-v)\: dv (\mathop{\mathrm{Ai}}\pm i\mathop{\mathrm{Bi}})^2(-u)O\left( \sigma^{-\frac{4}{3}}\left\langle \sigma^{\frac{2}{3}}u \right\rangle^{-2} \left\langle u \right\rangle \right) \: du\\ &+T\Big(\partial_{\sigma}^2[b_{\pm}(\sigma,v)]_{v=\sigma^{\frac{2}{3}}u}\Big)\,.\end{aligned}$$ The middle term is easily seen to be $O\left( \sigma^{-2}\left\langle \zeta \right\rangle^{-\frac{1}{2}} \right)$ so that all of our estimates on the $\sigma$-derivatives of $F_i^{\pm}(\sigma,\zeta)$ show that $\partial_{\sigma}^2[b_{\pm}(\sigma,\zeta)]$ satisfies a fixed point equation of the form $$\begin{aligned} 
\partial_{\sigma}^2[b_{\pm}(\sigma,\zeta)]=O(\sigma^{-3})+T\Big(\partial_{\sigma}^2[b_{\pm}(\sigma,v)]_{v=\sigma^{\frac{2}{3}}u}\Big)\,.\end{aligned}$$ To conclude, we need only show that $T$ is a contraction on $L^\infty$ for $\sigma$ sufficiently small, but this follows from the computation $$\begin{aligned} \lvert T(1) \rvert \lesssim\sigma^{\frac{4}{3}}\int_{\sigma^{-\frac{2}{3}}\zeta}^{\infty}\left\langle u \right\rangle^{-\frac{1}{2}} \left\langle \sigma^{\frac{2}{3}}u \right\rangle^{-2} \: du\lesssim\sigma\left\langle \zeta \right\rangle^{-\frac{3}{2}}\end{aligned}$$ so that as before it follows that $\partial_\sigma^2[b_{\pm}(\sigma,\zeta)]=O(\sigma^{-3})$, which completes the proof. ◻ We would now like to use the oscillatory basis to provide an approximation of $e(\sigma,r)$ that is suitable for use inside the oscillatory integral defining $K_t$. This requires detailed bounds on the function $$\begin{aligned} \zeta_r(\sigma):=\sigma^{-1}\frac{2}{3}\zeta^{\frac{3}{2}}(\sigma^2r) \,.\end{aligned}$$ **Proposition 14**. *Fix constants $k>2$, $0<c<1$, and $s>kc^{-2}$. Then for any $r\geq s$ the function $\zeta_r(\sigma)= \sigma^{-1}\frac{2}{3}\zeta^{\frac{3}{2}}(\sigma^2r)$ satisfies the inequalities $$\begin{aligned} &\zeta_r'(\sigma)\sim r\label{langerBound_1}\\ &\lvert \zeta_r''(\sigma) \rvert \lesssim\frac{r}{\sigma}\\ &\zeta_r''(\sigma)<0\label{langerBound_3}\end{aligned}$$ uniformly for $\sigma$ in the region $[ks^{-\frac{1}{2}},c]$. 
Furthermore, if $s<r$, then $$\begin{aligned} & r-s \lesssim\zeta_r'(\sigma) - \zeta_s'(\sigma)\lesssim r\label{langerDifferenceBound_0}\\ &\lvert \zeta_r''(\sigma)-\zeta_s''(\sigma) \rvert \lesssim\frac{r-s}{\sigma}\label{langerDifferenceBound_1}\\ &\zeta_r''(\sigma)-\zeta_s''(\sigma)<0\,.\label{langerDifferenceBound_2}\end{aligned}$$ Here, all derivatives are with respect to $\sigma$ and $\lesssim$ and $\sim$ indicate bounds with respect to constants that depend only on $k$ and $c$ (i.e. not on $r$ and $s$).* *Proof.* Recall that for $z\geq 1$ $$\begin{aligned} \frac{2}{3}\zeta^{\frac{3}{2}}(z)=\int_{1}^{z} \sqrt{1-u^{-1}} \: du \end{aligned}$$ or explicitly $$\begin{aligned} \label{zetapower} \frac{2}{3}\zeta^{\frac{3}{2}}(z)=\sqrt{z(z-1)} -\log(\sqrt{z} +\sqrt{z-1} )\,.\end{aligned}$$ One computes that $$\begin{aligned} \zeta_r'(\sigma)&=2r\sqrt{1-(\sigma^{2}r)^{-1}} - \frac{2}{3}\zeta^{\frac{3}{2}}(\sigma^{2}r)\sigma^{-2}\\ &=r\sqrt{1-(\sigma^2r)^{-1}} +\sigma^{-2}\log(\sigma r^{\frac{1}{2}}+\sqrt{\sigma^{2}r-1})\,.\end{aligned}$$ Since $\sigma\gtrsim s^{-\frac{1}{2}}$ on the regime in question, the log term is positive so the above is clearly greater than $r\sqrt{1-k^{-2}}$. For the upper bound, one writes $$\begin{aligned} \log(\sigma r^{\frac{1}{2}}+\sqrt{\sigma^2r-1} )\leq \log(2\sigma r^{\frac{1}{2}})\end{aligned}$$ and then checks that as a function of $\sigma$, $\sigma^{-2}\log(\sigma a)$ has a global maximum of $\frac{a^2}{2e}$. 
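The derivative bounds can be spot-checked numerically from the closed form [\[zetapower\]](#zetapower){reference-type="eqref" reference="zetapower"}. The following sketch (numpy assumed; an illustration, not part of the proof) differences $\zeta_r$ on a grid with $\sigma^2 r$ well above the turning point and confirms both $\zeta_r'(\sigma)\sim r$ and the concavity $\zeta_r''(\sigma)<0$:

```python
import numpy as np

def F(z):
    # F(z) = (2/3) * zeta^{3/2}(z), the explicit formula valid for z >= 1.
    return np.sqrt(z * (z - 1.0)) - np.log(np.sqrt(z) + np.sqrt(z - 1.0))

def zeta_r(sigma, r):
    return F(sigma**2 * r) / sigma

r = 1000.0
h = 1e-5
for sigma in np.linspace(0.1, 0.5, 9):   # sigma^2 * r ranges over [10, 250]
    # Central differences for the first and second sigma-derivatives.
    zp = (zeta_r(sigma + h, r) - zeta_r(sigma - h, r)) / (2 * h)
    zpp = (zeta_r(sigma + h, r) - 2 * zeta_r(sigma, r) + zeta_r(sigma - h, r)) / h**2
    assert 0.9 * r < zp < 2.0 * r   # zeta_r' is comparable to r
    assert zpp < 0.0                # zeta_r'' < 0 away from the turning point
```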
To bound the second derivative, we first calculate $$\begin{aligned} \zeta_r''(\sigma)&=2\sigma^{-3}(1-(\sigma^2r)^{-1})^{-\frac{1}{2}}-2r\sigma^{-1}\sqrt{1-(\sigma^2r)^{-1}} +2\sigma^{-3}\frac{2}{3}\zeta^{\frac{3}{2}}(\sigma^2r)\\ &=2\sigma^{-3}(1-(\sigma^2r)^{-1})^{-\frac{1}{2}}-2\sigma^{-3}\log(\sigma r^{\frac{1}{2}}+\sqrt{\sigma^2r-1} )\end{aligned}$$ and then observe that because $\sigma^{-2}\lesssim r$ $$\begin{aligned} \lvert \zeta_r''(\sigma) \rvert \lesssim\sigma^{-1}(r+\sigma^{-2}\log(2\sigma r^{\frac{1}{2}})) \lesssim\frac{r}{\sigma}\,.\end{aligned}$$ The negativity of $\zeta_r''$ follows from the above expression and the fact that $(1-(\sigma^2r)^{-1})^{-\frac{1}{2}}$ is a decreasing function of $\sigma^2r$ while $\log(\sigma r^{\frac{1}{2}}+\sqrt{\sigma^2r-1})$ is an increasing function of $\sigma^2r$, and one may verify that their difference is negative at, say, $\sigma^2r=4$; since on the region in question $\sigma^2r\geq k^2>4$, the difference remains negative there. For the estimate on the difference $\zeta_r'-\zeta_s'$, we observe that for any fixed $\sigma$ the map $r\mapsto\zeta_r'(\sigma)$ is increasing and $\frac{ d \zeta^{\prime}_r}{dr}$ is uniformly bounded below and above for all allowed $\sigma$ and $s$. Hence, the mean value theorem implies ([\[langerDifferenceBound_0\]](#langerDifferenceBound_0){reference-type="ref" reference="langerDifferenceBound_0"}) and [\[langerDifferenceBound_1\]](#langerDifferenceBound_1){reference-type="eqref" reference="langerDifferenceBound_1"}. To see ([\[langerDifferenceBound_2\]](#langerDifferenceBound_2){reference-type="ref" reference="langerDifferenceBound_2"}), notice that $\zeta_r''$ is a negative decreasing function of $r$ for any fixed $\sigma$. ◻ **Corollary 15**. *Let $\sigma<c$ for $c>0$ sufficiently small. 
Then for all $x\geq 1$ $$\begin{aligned} \label{elarge} e(\sigma,r(x))=c_-(\sigma)q^{-\frac{1}{4}}\psi_+(\tau(x))+c_+(\sigma)q^{-\frac{1}{4}}\psi_-(\tau(x))\\ c_-(\sigma)= \sigma^{-\frac{1}{6}}(1+O(\sigma)),\,\,c_+(\sigma)=\sigma^{-\frac{1}{6}}(1+O(\sigma)) \nonumber\,.\end{aligned}$$ Furthermore, for some constant $C>0$ large enough, we may write $$\begin{aligned} e(\sigma,r)=e^{i\zeta_r(\sigma)}a_+(\sigma,r)+e^{-i\zeta_r(\sigma)}a_-(\sigma,r)\end{aligned}$$ where the functions $a_\pm$ are smooth and satisfy the bounds $$\begin{aligned} &\lvert a_\pm(\sigma,r) \rvert \lesssim 1\\ &\lvert \partial_{\sigma}[a_\pm(\sigma,r)] \rvert \lesssim\sigma^{-1} \lesssim s^{\frac{1}{2}}\\ &\lvert \partial^2_{\sigma}[a_\pm(\sigma,r)] \rvert \lesssim s\end{aligned}$$ uniformly for all $0<s\leq r$ and $\sigma\in[Cs^{-\frac{1}{2}},c]$.* *Proof.* First, we connect the expression derived for $e(\sigma,r)$ in Corollary [Corollary 10](#AiryCor){reference-type="ref" reference="AiryCor"} to the basis constructed in Proposition [Proposition 13](#oscBasisPr){reference-type="ref" reference="oscBasisPr"} at $\zeta=0$ (that is, $x=1$). Based on Corollary [Corollary 10](#AiryCor){reference-type="ref" reference="AiryCor"}, by regarding $r$ as a function $x(\zeta)$ we have $$\begin{aligned} &e(\sigma,r)|_{\zeta=0}=2 \sigma^{-\frac{1}{6}}\mathop{\mathrm{Ai}}(0)(1+O(\sigma))\\ &\partial_{\zeta}\left[e(\sigma,r)\right]_{\zeta=0}=-2 \sigma^{-\frac{5}{6}}\mathop{\mathrm{Ai}}'(0)(1+O(\sigma)) \end{aligned}$$ since we may absorb the $\mathop{\mathrm{Bi}}$ term into the $O(\sigma)$ because the coefficient $B(\sigma)=O(\sigma^{\infty})$ and $q(0)=1$. In what follows, all of the Wronskians are evaluated at $\zeta=0$. 
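The Wronskian computations that follow rest on the classical identity $W[\mathop{\mathrm{Ai}},\mathop{\mathrm{Bi}}]=\mathop{\mathrm{Ai}}(x)\mathop{\mathrm{Bi}}'(x)-\mathop{\mathrm{Ai}}'(x)\mathop{\mathrm{Bi}}(x)=\frac{1}{\pi}$, constant in $x$. A quick numerical confirmation (scipy assumed; not part of the argument):

```python
import math
from scipy.special import airy

# scipy.special.airy returns (Ai, Ai', Bi, Bi') at the given point.
for x in (-5.0, 0.0, 2.0):
    ai, aip, bi, bip = airy(x)
    w = ai * bip - aip * bi          # the Wronskian W[Ai, Bi](x)
    assert abs(w - 1.0 / math.pi) < 1e-10
```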
It follows that $$\begin{aligned} W\left[e(\sigma,r(x(\cdot))),\psi_\pm(\sigma,\cdot)\right]=\mp 2i \sigma^{-\frac{5}{6}}W[\mathop{\mathrm{Ai}},\mathop{\mathrm{Bi}}](1+O(\sigma))=\mp\frac{i}{2\pi }\sigma^{-\frac{5}{6}}(1+O(\sigma)) \end{aligned}$$ while $$\begin{aligned} W[\psi_+(\sigma,\cdot),\psi_-(\sigma,\cdot)]&=-\sigma^{-\frac{2}{3}}W[\mathop{\mathrm{Ai}}+i\mathop{\mathrm{Bi}},\mathop{\mathrm{Ai}}-i\mathop{\mathrm{Bi}}](1+O(\sigma))=2i\sigma^{-\frac{2}{3}}W[\mathop{\mathrm{Ai}},\mathop{\mathrm{Bi}}](1+O(\sigma))\\ &=\frac{2i}{\pi}\sigma^{-\frac{2}{3}}(1+O(\sigma)) \end{aligned}$$ from which we may determine $c_{\pm}(\sigma)$ immediately. Now, using ([\[oscAiry\]](#oscAiry){reference-type="ref" reference="oscAiry"}), one may easily see that $$\begin{aligned} e(\sigma,r)&= (-Q)^{-\frac{1}{4}}(x)e^{-i(\zeta_r(\sigma)-\frac{\pi}{4})}(1+O\left( (\zeta_r(\sigma))^{-1}\right))(1+\sigma b_+(\sigma))\\ &+(-Q)^{-\frac{1}{4}}(x)e^{i(\zeta_r(\sigma)-\frac{\pi}{4})}(1+O\left( (\zeta_r(\sigma))^{-1}\right))(1+\sigma b_-(\sigma))\\ &=e^{-i\zeta_r(\sigma)}a_-(\sigma,r)+e^{i\zeta_r(\sigma)}a_+(\sigma,r)\end{aligned}$$ so we are only left to show the bounds on $a_\pm$ under the assumption that $r\geq s>C \sigma^{-2}$. The bound $\lvert a_\pm \rvert \lesssim 1$ is immediate from the fact that for $\sigma^2r>C$, $\zeta_r(\sigma)$ is bounded below. 
For the derivatives, by making liberal use of the fact that $\sigma^2r>C$, one computes that $$\begin{aligned} &\lvert (-Q(\sigma^2r))^{-\frac{1}{4}} \rvert \lesssim 1\\ &\lvert \partial_{\sigma} [(-Q(\sigma^2r))^{-\frac{1}{4}}] \rvert =\frac{1}{2}(1-(\sigma^2r)^{-1})^{-\frac{5}{4}}\sigma^{-3}r^{-1}\lesssim\sigma^{-1}\lesssim s^{\frac{1}{2}}\\ &\lvert \partial_{\sigma}^2[(-Q(\sigma^2r))^{-\frac{1}{4}}] \rvert =\frac{5}{4}(1-(\sigma^2r)^{-1})^{-\frac{9}{4}}\sigma^{-6}r^{-2}+\frac{3}{2}(1-(\sigma^2r)^{-1})^{-\frac{5}{4}}\sigma^{-4}r^{-1}\lesssim\sigma^{-2}\lesssim s\,.\end{aligned}$$ Taylor expanding [\[zetapower\]](#zetapower){reference-type="eqref" reference="zetapower"} we also have $\zeta_r(\sigma) \sim \sigma r$ when $\sigma^2 r >2$. Therefore, by ([\[langerBound_1\]](#langerBound_1){reference-type="ref" reference="langerBound_1"}) we obtain $$\begin{aligned} \lvert \partial_\sigma[(\zeta_r(\sigma))^{-1}] \rvert=\lvert \zeta_r'(\sigma)(\zeta_r(\sigma))^{-2} \rvert \lesssim 1.\end{aligned}$$ Similarly $$\begin{aligned} \lvert \partial_{\sigma}^2[(\zeta_r(\sigma))^{-1}] \rvert \lesssim\lvert \zeta_r''(\sigma)(\zeta_r(\sigma))^{-2} \rvert +\lvert (\zeta_r'(\sigma))^{2}(\zeta_r(\sigma))^{-3} \rvert \lesssim\sigma^{-1} \lesssim s^{\frac{1}{2}}\end{aligned}$$ again from Proposition [Proposition 14](#langerBounds){reference-type="ref" reference="langerBounds"}. 
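Several of these estimates rest on the oscillatory Airy asymptotics behind ([\[oscAiry\]](#oscAiry){reference-type="ref" reference="oscAiry"}), namely $(\mathop{\mathrm{Ai}}+i\mathop{\mathrm{Bi}})(-u)\approx i\,\pi^{-1/2}u^{-1/4}e^{-i(\frac{2}{3}u^{3/2}+\frac{\pi}{4})}$ for large $u$. A numerical spot check (scipy assumed; illustration only):

```python
import numpy as np
from scipy.special import airy

# Compare (Ai + i Bi)(-u) against its leading oscillatory asymptotic; the
# relative error of the leading term is O(u^{-3/2}) as u grows.
for u in (20.0, 40.0, 80.0):
    ai, _, bi, _ = airy(-u)
    val = ai + 1j * bi
    phase = (2.0 / 3.0) * u**1.5 + np.pi / 4.0
    approx = 1j * np.pi**-0.5 * u**-0.25 * np.exp(-1j * phase)
    assert abs(val - approx) / abs(approx) < 0.01
```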
Now, one uses Proposition [Proposition 13](#oscBasisPr){reference-type="ref" reference="oscBasisPr"} and Proposition [Proposition 14](#langerBounds){reference-type="ref" reference="langerBounds"} to see that $$\begin{aligned} &\lvert b_{\pm}(\sigma,\zeta(\sigma^2r)) \rvert \lesssim 1\\ &\lvert \partial_\sigma[b_\pm(\sigma,\zeta(\sigma^2r))] \rvert\lesssim\lvert \sigma r\zeta'(\sigma^2r)\dot{b}_{\pm}(\sigma,\zeta(\sigma^2r)) \rvert +\partial_\sigma[b_{\pm}(\sigma,\zeta)]\vert_{\zeta=\zeta(\sigma^2r)}\\ &\lesssim(\sigma r)(\sigma^2r)^{-\frac{1}{3}}(\zeta(\sigma^2r))^{-\frac{5}{2}}+\sigma^{-1}\lesssim\sigma^{-1}\lesssim s^{\frac{1}{2}}\,,\end{aligned}$$ where $\dot{ }$ represents the derivative with respect to the second variable and thus $$\begin{aligned} &\lvert 1+\sigma b_{\pm}(\sigma,\zeta(\sigma^2r)) \rvert \lesssim 1\\ &\lvert \partial_\sigma[\sigma b_\pm(\sigma,\zeta(\sigma^2r))] \rvert\lesssim\lvert b_\pm(\sigma,\zeta(\sigma^2r)) \rvert+ \lvert \sigma^2r\zeta'(\sigma^2r)\dot{b}_{\pm}(\sigma,\zeta(\sigma^2r)) \rvert +\sigma\partial_\sigma[b_{\pm}(\sigma,\zeta)]\vert_{\zeta=\zeta(\sigma^2r)}\\ &\lesssim(\zeta(\sigma^2r))^{-\frac{3}{2}}+(\sigma^2r)(\sigma^2r)^{-\frac{1}{3}}(\zeta(\sigma^2r))^{-\frac{5}{2}}+(\zeta(\sigma^2r))^{-\frac{3}{2}}\lesssim 1\,.\end{aligned}$$ As usual suppressing the variable $\sigma^2r$ in $\zeta$, we obtain $$\begin{aligned} \lvert \partial^2_\sigma[\sigma b_{\pm}(\sigma,\zeta)] \rvert\lesssim\lvert \partial_\sigma[b_{\pm}(\sigma,\zeta(\sigma^2 r))] \rvert +\sigma\lvert \partial^2_\sigma [b_{\pm}(\sigma,\zeta(\sigma^2 r))] \rvert \,.\end{aligned}$$ The first term is less than $\sigma^{-1}$ by our previous computation and for the second we write $$\begin{aligned} \partial^2_\sigma[b_\pm(\sigma,\zeta(\sigma^2 r))]=\partial_{\sigma}^2[b_\pm(\sigma,\zeta(x))]_{x=\sigma^2 r} +(2r\zeta'(\sigma^2r)+4(\sigma r)^2\zeta''(\sigma^2r))\dot{b}_{\pm} (\sigma,\zeta) \\+2\sigma 
r\zeta'(\sigma^2r)\partial_{\sigma}[\dot{b}_{\pm}(\sigma,\zeta(x))]_{x=\sigma^2r} +(2\sigma r\zeta'(\sigma^2r))^2\ddot{b}_\pm (\sigma,\zeta)\end{aligned}$$ and using various bounds from Proposition [Proposition 13](#oscBasisPr){reference-type="ref" reference="oscBasisPr"} shows that $\lvert \partial^2_\sigma[b_{\pm}(\sigma,\zeta(x))] \rvert_{x=\sigma^2r} \lesssim\sigma^{-3}$. This gives a total bound of $$\begin{aligned} \lvert \partial^2_\sigma[b_{\pm}(\sigma,\zeta(\sigma^2 r))] \rvert \lesssim\sigma^{-2}\lesssim s\,.\end{aligned}$$ Summarizing all of these derivative computations, we have shown that $$\begin{aligned} &\lvert (-Q(\sigma^2r))^{-\frac{1}{4}} \rvert \lesssim 1,\,\, \lvert \partial_{\sigma} [(-Q(\sigma^2r))^{-\frac{1}{4}}] \rvert \lesssim\sigma^{-1} \lesssim s^{\frac{1}{2}},\,\, \lvert \partial_{\sigma}^2[(-Q(\sigma^2r))^{-\frac{1}{4}}] \rvert \lesssim s\\ &O((\zeta_r(\sigma))^{-1})\lesssim 1,\,\, \lvert \partial_\sigma[(\zeta_r(\sigma))^{-1}] \rvert\lesssim 1,\,\,\lvert \partial_{\sigma}^2[(\zeta_r(\sigma))^{-1}] \rvert \lesssim\sigma^{-1} \lesssim s^{\frac{1}{2}}\\ &\lvert 1+\sigma b_\pm(\sigma, \zeta(\sigma^2r)) \rvert \lesssim 1,\,\,\lvert \frac{d}{d\sigma}[\sigma b_{\pm}(\sigma, \zeta(\sigma^2 r))] \rvert \lesssim 1,\,\,\lvert \frac{d^2}{d\sigma^2}[\sigma b_{\pm}(\sigma,\zeta(\sigma^2 r))] \rvert \lesssim s\end{aligned}$$ and then by the Leibniz rule $$\begin{aligned} \lvert \partial_{\sigma} a_{\pm}(\sigma,r) \rvert \lesssim\sigma^{-1} \lesssim s^{\frac{1}{2}},\,\,\lvert \partial_{\sigma}^2a_\pm(\sigma,r) \rvert \lesssim s\end{aligned}$$ as claimed. ◻ **Remark 16**. *We remark that our connection in Corollary [Corollary 15](#oscCor){reference-type="ref" reference="oscCor"} is consistent with the asymptotic behavior of $M_{\frac{i}{2 \sigma}, \frac 12} ( 2 i \sigma r )$. 
Using [@NIST (13.14.32) and (13.14.21)] one can calculate the following asymptotic behavior as $\sigma r \to \infty$ $$\begin{aligned} \label{Mexp} M_{\frac{i}{2 \sigma}, \frac 12} ( 2 i \sigma r ) \sim \frac{i e^{\frac{\pi}{4 \sigma}} }{ | \Gamma(1+\frac{i}{2 \sigma})|} \sin \Big( \sigma r - \frac{1}{2 \sigma} \log (2 \sigma r) + \theta(\sigma) \Big) \,,\end{aligned}$$ where $\theta(\sigma):= \arg(\Gamma(1+i/(2\sigma)))$. Therefore, by [\[erep\]](#erep){reference-type="eqref" reference="erep"} we have $e(\sigma, r )\sim \sin \big( \sigma r - \frac{1}{2 \sigma} \log (2 \sigma r ) + \theta(\sigma)\big)$ as $\sigma r \to \infty$. Here, we used the fact that $|\Gamma(i s)| = \sqrt{ \frac{\pi}{s \sinh(\pi s)}}$, see (5.4.3) in [@NIST].* *On the other hand, by Stirling's formula [@NIST (5.11.1)], we have as $\sigma \to 0$, $$\begin{aligned} \theta(\sigma) = \frac{- \ln (2 \sigma)}{2 \sigma}- \frac{1}{2 \sigma} + \frac{\pi}{4} - \frac{\sigma}{3} + O_2(\sigma^3) \nonumber\end{aligned}$$ and therefore, $$\begin{aligned} e^{\pm i( \sigma r - \frac{1}{2\sigma} \log(2 \sigma r) +\theta(\sigma))} = e^{\pm i( \sigma r - \frac{1}{2 \sigma} - \frac{ \ln (4 \sigma^2 r )}{ 2 \sigma}+ \frac{\pi}{4})} ( 1+ O(\sigma)). 
\end{aligned}$$* *By [@NIST (10.4.60), (10.4.64)], for large $x>0$ $$\begin{aligned} \mathop{\mathrm{Ai}}(-x)&=\frac{1}{\sqrt{\pi}x^{1/4}}\sin\left(\frac{2}{3}x^{3/2}+\frac{\pi}{4}\right)+O\left(\frac{1}{x^{3/2}}\right),\\ \mathop{\mathrm{Bi}}(-x)&=\frac{1}{\sqrt{\pi}x^{1/4}}\cos\left(\frac{2}{3}x^{3/2}+\frac{\pi}{4}\right)+O\left(\frac{1}{x^{3/2}}\right).\end{aligned}$$ The consistency is then clear using the following expansions $$\begin{aligned} & q^{-\frac{1}{4}}\psi_+(\tau(\sigma^2 r))= \frac{ \sigma^{\frac 16}}{ \pi^{\frac 12}} e^{-i( \sigma r - \frac{1}{2 \sigma} - \frac{ \ln (4 \sigma^2 r )}{ 2 \sigma}- \frac{\pi}{4})} (1+ O( (\sigma r)^{-1})) \nonumber\\ &q^{-\frac{1}{4}}\psi_-(\tau(\sigma^2 r))= \frac{\sigma^{\frac 16} }{ \pi^{\frac 12}} e^{i(\sigma r - \frac{1}{2 \sigma} - \frac{ \ln (4 \sigma^2 r )}{ 2 \sigma} -\frac{\pi}{4})} (1+ O( (\sigma r)^{-1})) \nonumber\end{aligned}$$ as $(\sigma r) \to \infty$ in Corollary [Corollary 15](#oscCor){reference-type="ref" reference="oscCor"}.* We finish this section with the following lemma and its corollary. **Lemma 17**. *Let $c$ be sufficiently small such that for all $\sigma <c$ [\[esmall\]](#esmall){reference-type="eqref" reference="esmall"} and [\[elarge\]](#elarge){reference-type="eqref" reference="elarge"} hold. Moreover let $\frac 3 4 \leq n <1$, $m < \infty$ and define $\chi_n^m := \tilde{\chi}_n \chi_m$. 
Then one has $$\begin{aligned} &|\chi_c(\sigma) \chi_n^m (\sigma^2 r) e(\sigma, r)| \lesssim\chi_c(\sigma) \chi_n^m (\sigma^2 r) \sigma^2 r \nonumber\\ &|\chi_c(\sigma)\chi_n^m (\sigma^2 r) \partial_\sigma\{e( \sigma, r )\}| \lesssim\chi_c(\sigma) \chi_n^m (\sigma^2 r)\, r \nonumber\\ &|\chi_c(\sigma)\chi_n^m (\sigma^2 r) \partial_\sigma^2\{e(\sigma, r )\}| \lesssim\chi_c(\sigma) \chi_n^m (\sigma^2 r) \sigma^{-2} r \nonumber\,.\end{aligned}$$* *Proof.* Recall that by [\[esmall\]](#esmall){reference-type="eqref" reference="esmall"} and [\[elarge\]](#elarge){reference-type="eqref" reference="elarge"} we have $$\begin{aligned} e(\sigma, r) = c_+(\sigma) \frac{\mathop{\mathrm{Ai}}(-\sigma^{-\frac 23} \zeta(\sigma^2 r))}{q^{\frac 14} (\sigma^2 r)} (1 + \sigma a(\sigma, \zeta(\sigma^2 r))) + c _-(\sigma) \frac{\mathop{\mathrm{Bi}}(-\sigma^{-\frac 23} \zeta(\sigma^2 r))}{q^{\frac 14} (\sigma^2 r)} (1 + \sigma b(\sigma, \zeta(\sigma^2 r)))\end{aligned}$$ where $a,b$ satisfy the bounds in Proposition [Proposition 8](#smallexp){reference-type="ref" reference="smallexp"} for $\sigma^2 r \leq 1$, and the bounds in Proposition [Proposition 13](#oscBasisPr){reference-type="ref" reference="oscBasisPr"} for $\sigma^2 r \geq 1$. We show the statement first for the leading terms. We note that by the definition of $\tilde{\chi}_n$, given $n \geq \frac 34$, the cut-off $\tilde{\chi}_n \chi_m(x)$ is supported for $x \geq \frac 12$. 
Therefore, by expansions [\[asymptoticAB\]](#asymptoticAB){reference-type="eqref" reference="asymptoticAB"}, [\[oscAiry\]](#oscAiry){reference-type="eqref" reference="oscAiry"} and the fact that $\frac 23 ( -\zeta(\frac 12))^{\frac 32} =\frac{\pi}{4} - {\frac 12}$ we have $$\begin{aligned} &|\chi_c(\sigma) \chi_n^m(x) \mathop{\mathrm{Bi}}(-\sigma^{-\frac 23} \zeta(x))| \lesssim e^{\frac{1}{\sigma}( \frac{\pi}{4} - {\frac 12})} \sigma^{\frac 16} \label{Biders} \\ &|\chi_c(\sigma)\chi_n^m(x) \dot{ \mathop{\mathrm{Bi}}}(-\sigma^{-\frac 23} \zeta(x))| \lesssim e^{\frac{1}{\sigma}( \frac{\pi}{4} - {\frac 12})}\sigma^{-\frac 5 6},\nonumber\\ &|\chi_c(\sigma)\chi_n^m(x) \ddot{ \mathop{\mathrm{Bi}}}(-\sigma^{-\frac 23} \zeta(x))| \lesssim e^{\frac{1}{\sigma}( \frac{\pi}{4} - {\frac 12})}\sigma^{-\frac{11}{6}} \nonumber \end{aligned}$$ As usual we use  $\dot{}$  for $\zeta$-derivatives. Similarly $$\begin{aligned} &|\chi_c(\sigma)\chi_n^m(x)\mathop{\mathrm{Ai}}(-\sigma^{-\frac 23} \zeta(x))| \lesssim\sigma^{\frac 16} \label{Aiders}. \\ &|\chi_c(\sigma)\chi_n^m(x)\dot{ \mathop{\mathrm{Ai}}}(-\sigma^{-\frac 23} \zeta(x))| \lesssim\sigma^{-\frac 5 6} \nonumber\\ &| \chi_c(\sigma)\chi_n^m(x)\ddot{ \mathop{\mathrm{Ai}}}(-\sigma^{-\frac 23} \zeta(x))| \lesssim\sigma^{-\frac{11}{6}} \nonumber. 
\end{aligned}$$ Furthermore, by definition of $q$, we have for $x\geq {\frac 12}$ $$\begin{aligned} &\partial_x^j\{\zeta(x)\} \lesssim \langle x \rangle^{2/3-j} ,\,\,\ \partial_x^j\{ q^{-\frac 14}(x)\} \lesssim \langle x \rangle^{1/6-j} \,.\label{qders}\end{aligned}$$ As a result, using [\[Biders\]](#Biders){reference-type="eqref" reference="Biders"}, [\[qders\]](#qders){reference-type="eqref" reference="qders"}, and $c_-(\sigma)$ from Corollary [Corollary 15](#oscCor){reference-type="ref" reference="oscCor"} we obtain $$\begin{aligned} \label{variable} \Big| \chi_c(\sigma) \chi_n^m (\sigma^2 r) c_-(\sigma) \frac{\mathop{\mathrm{Bi}}(-\sigma^{-\frac 23} \zeta(\sigma^2 r)}{ q^{\frac 14}(\sigma^2 r)}\Big| \lesssim\chi_n^m (\sigma^2 r) e^{ - \frac{1}{\sigma}( \frac{\pi}{4}-{\frac 12})} \langle\sigma^2 r \rangle^{\frac 16} \lesssim e^{ - \frac{1}{\sigma}( \frac{\pi}{4}-{\frac 12})} \sigma^{2} r\,.\end{aligned}$$ In the last inequality we used the fact that $n \leq \sigma^2 r$. As usual, to shorten the notation, for the rest of the proof we avoid using the variable $\sigma^2 r$ in $\zeta$. We continue by estimating the $\sigma$-derivative of the leading term. 
We compute $$\begin{gathered} \label{middleBider} | \chi_c \chi_n^m \partial_{\sigma} \{ c_-(\sigma) q^{-\frac 14}(\zeta) \mathop{\mathrm{Bi}}(-\sigma^{-\frac 23}\zeta)\} | \lesssim\chi_c \chi_n^m [ c_-^{\prime}(\sigma)(q(\sigma^2 r))^{-\frac 14} \mathop{\mathrm{Bi}}(-\sigma^{-\frac 23}\zeta) \\ + c_-(\sigma)\frac{\partial (q(\sigma^2 r))^{-\frac 14}}{\partial \sigma } \mathop{\mathrm{Bi}}(-\sigma^{-\frac 23}\zeta)+ c_-(\sigma)(q(\sigma^2 r))^{-\frac 14} \dot{ \mathop{\mathrm{Bi}}}(-\sigma^{-\frac 23}\zeta) \frac{d \zeta}{d \sigma} ] \end{gathered}$$ Using [\[Biders\]](#Biders){reference-type="eqref" reference="Biders"}, [\[qders\]](#qders){reference-type="eqref" reference="qders"} and $\big| \frac{d \zeta(\sigma^2 r)}{d \sigma}\big| \lesssim\sigma r$, we estimate $|\eqref{middleBider}| \lesssim e^{ - \frac{1}{\sigma}( \frac{\pi}{4}-{\frac 12})} r$. Similarly, using [\[Biders\]](#Biders){reference-type="eqref" reference="Biders"}, [\[qders\]](#qders){reference-type="eqref" reference="qders"} and $\big| \frac{d \zeta(\sigma^2 r)}{d \sigma}\big| \lesssim\sigma r$, one can compute $$\begin{aligned} \label{middleBisecder} | \chi_c \chi_n^m \partial^2_{\sigma} \{ c_-(\sigma) q^{-\frac 14}(\zeta) \mathop{\mathrm{Bi}}(-\sigma^{-\frac 23}\zeta)\} | \lesssim e^{ - \frac{1}{\sigma}( \frac{\pi}{4}-{\frac 12})} \sigma^{-2} r.\end{aligned}$$ As expected from the computation of [\[middleBider\]](#middleBider){reference-type="eqref" reference="middleBider"}, the restricting term $\sigma^{-2} r$ in [\[middleBisecder\]](#middleBisecder){reference-type="eqref" reference="middleBisecder"} arises when both of the two derivatives fall on $\mathop{\mathrm{Bi}}(-\sigma^{-\frac 23}\zeta)$. This situation leads us to the bound $\sigma^{-2} (\sigma r)^2$. However, due to the constraint $r \leq m \sigma^{-2}$, we can simplify this estimate to $\sigma^{-2}r$. 
With a similar observation and using [\[Aiders\]](#Aiders){reference-type="eqref" reference="Aiders"}, [\[qders\]](#qders){reference-type="eqref" reference="qders"} we have $$\begin{aligned} \label{middleAi} &| \chi_c \chi_n^m c_+(\sigma) q^{-\frac 14}(\zeta) \mathop{\mathrm{Ai}}(-\sigma^{-\frac 23} \zeta) | \lesssim\sigma^2 r\\ & | \chi_c \chi_n^m \partial^j_{\sigma} \{ c_+(\sigma) q^{-\frac 14}(\zeta) \mathop{\mathrm{Ai}}(-\sigma^{-\frac 23} \zeta)\} | \lesssim\sigma^{-j} r,\,\,\ j=1,2\,.\nonumber\end{aligned}$$ Hence, we obtain the bounds for the leading terms. We next estimate the error terms. We first start with $\sigma^2 r \leq 1$. For that we use Proposition [Proposition 8](#smallexp){reference-type="ref" reference="smallexp"}. We will only estimate $a$. The bounds on $b$ can be estimated similarly. We first note that $|a|\lesssim 1$, and $$\begin{aligned} \label{firstderer} |\chi_c \chi_n^m \partial_{\sigma}\{a(\sigma, \zeta(\sigma^2r))\}|\lesssim\chi_c \chi_n^m[\, | \partial_\sigma\{a(\sigma, \zeta(x))\}|_{x=\sigma^2 r} + |\dot a(\sigma, \zeta) \frac{d \zeta}{d \sigma}|\,] \lesssim\sigma^{-\frac 43}\,.\end{aligned}$$ Furthermore, $$\begin{aligned} \label{secondderer} | \chi_c \chi_n^m\partial^2_{\sigma}\{a(\sigma, \zeta(\sigma^2r))\}|\lesssim\chi_c \chi_n^m[\, | \partial^2_\sigma\{a(\sigma, \zeta(x))\}|_{x=\sigma^2 r} + |\partial_\sigma\{\dot a(\sigma, \zeta)\}\frac{d \zeta}{d \sigma}| \\ +| \ddot a(\sigma,\zeta) \big(\frac{d \zeta}{d \sigma}\big)^2| + |\dot a(\sigma, \zeta) \frac{d^2 \zeta}{d \sigma^2} |\,] \lesssim\chi_n^m \sigma^{-\frac 43} r \lesssim\sigma^{-\frac{10}{3}} \:.\nonumber\end{aligned}$$ We next comment on the error term when $\sigma^2 r \geq 1$. For that we use Proposition [Proposition 13](#oscBasisPr){reference-type="ref" reference="oscBasisPr"} and will estimate $\sigma b_{\pm}$. 
In fact, using $\sigma b_{\pm}$ in [\[firstderer\]](#firstderer){reference-type="eqref" reference="firstderer"} and [\[secondderer\]](#secondderer){reference-type="eqref" reference="secondderer"} instead of $a$, we see that the same bounds hold for the $\sigma$-derivatives of $\sigma b_{\pm}$ in the support of $\chi_c \chi_n^m$. Combining these bounds with [\[variable\]](#variable){reference-type="eqref" reference="variable"}, [\[middleBider\]](#middleBider){reference-type="eqref" reference="middleBider"}, [\[middleBisecder\]](#middleBisecder){reference-type="eqref" reference="middleBisecder"}, [\[middleAi\]](#middleAi){reference-type="eqref" reference="middleAi"} we obtain the statement. Regarding the apparent discontinuity at $\sigma^2 r=1$, we note that the function we expand is $M_{\frac{i}{2\sigma},{\frac 12}}(2i \sigma r)$, which is analytic in a vicinity of the turning point. ◻ The following corollary follows from Lemma [Lemma 5](#lem:bb){reference-type="ref" reference="lem:bb"} and Lemma [Lemma 17](#lem:ab){reference-type="ref" reference="lem:ab"}. **Corollary 18**. *Fix $c >0$ sufficiently small and $k < \infty$. Then for any $\beta \geq 0$ we have $$\begin{aligned} &|\chi_c(\sigma) \chi_k (\sigma^2 r) e(\sigma, r)| \lesssim\chi_{c}(\sigma) \chi_{k} (\sigma^2 r) r \sigma^2\,, \nonumber\\ &|\chi_c(\sigma)\chi_k (\sigma^2 r) \partial_\sigma\{e( \sigma, r )\}| \lesssim\chi_c(\sigma) r [ \sigma^{\beta}\chi_{\frac 12}(\sigma^2 r) + \chi^k_{\frac 12}(\sigma^2 r)]\,, \nonumber\\ &|\chi_c(\sigma)\chi_k (\sigma^2 r) \partial_\sigma^2\{e(\sigma, r )\}| \lesssim\chi_c(\sigma) r [ \sigma^{\beta}\chi_{\frac 12}(\sigma^2 r) + \sigma^{-2} \chi^k_{\frac 12}(\sigma^2 r)]\,. \nonumber\end{aligned}$$* # Eigenfunction approximation: Large Energies In this section, we will consider the energies when $\sigma \geq c >0$ where $c \ll 1$. 
Recalling [\[erep\]](#erep){reference-type="eqref" reference="erep"}, we have $$\begin{aligned} \label{erep} e(\sigma, r) &= -i \sigma^{-\frac 12} [ e^{\frac {\pi}{\sigma}} -1]^{-\frac 12} M_{\frac{i}{2 \sigma}, \frac 12} ( 2 i \sigma r) \\ & =-i \sigma^{-\frac 12} [ e^{\frac {\pi}{\sigma}} -1]^{-\frac 12} e^{-i \sigma r} ( 2 i \sigma r) M\Big( 1 - i/(2 \sigma), 2, 2 i \sigma r \Big)\,.\nonumber\end{aligned}$$ Here $M(a,b,z)$ is Kummer's confluent hypergeometric function. In this section we prove Proposition [Proposition 19](#prop:lexp){reference-type="ref" reference="prop:lexp"}. **Proposition 19**. *Let $k<\infty$; then the following expansions are valid for $e(\sigma,r)$ $$\begin{aligned} & \widetilde{\chi}_c(\sigma) \chi_k(\sigma r) e(\sigma, r)= \widetilde{\chi}_c(\sigma) \chi_k(\sigma r) \sigma^{\frac 12} [e^{\frac{\pi}{\sigma}} -1]^{-\frac 12} r ( 1+ O_2(\sigma r)) \label{lsest}\\ & \widetilde{\chi}_c(\sigma) \widetilde{\chi}_k(\sigma r) e(\sigma, r) =-\frac{i}{\sqrt{\pi}} \widetilde{\chi}_c(\sigma) \widetilde{\chi}_k(\sigma r) [ e^{ i(\sigma r - \frac{ \ln(2\sigma r)}{\sigma}- \theta(\sigma))} +e^{- i(\sigma r - \frac{ \ln(2\sigma r)}{\sigma} + \theta(\sigma))}] +\mathcal{E} (\sigma,r) \label{llest} \end{aligned}$$ where $\theta(\sigma)= \arg(\Gamma(1+i/(2 \sigma)))$ and $$\begin{aligned} \label{epsbound} | \mathcal{E}(\sigma,r)| \lesssim 1 ,\,\,\,\ | \partial_{\sigma} \{\mathcal{E}(\sigma,r)\}| \lesssim r \,,\,\,\,\,\,\ | \partial_{\sigma}^2\{\mathcal{E}(\sigma,r)\}| \lesssim\sigma^{-1} r\,.\end{aligned}$$* Before we start the proof, we state a couple of expressions for $M(a,b,z)$. These formulas can be found in [@NIST Chapter 13]. We will use [\[sumexp\]](#sumexp){reference-type="eqref" reference="sumexp"} to prove [\[lsest\]](#lsest){reference-type="eqref" reference="lsest"} and [\[intrep\]](#intrep){reference-type="eqref" reference="intrep"} to prove [\[llest\]](#llest){reference-type="eqref" reference="llest"}. 
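As a numerical aside (scipy assumed; real parameters only, whereas the application below takes the complex value $a=1-\frac{i}{2\sigma}$), the series and Euler integral representations of $M(a,b,z)$ recalled next can be cross-checked against each other and against a library evaluation:

```python
import math
from scipy.special import hyp1f1
from scipy.integrate import quad

# Check at the illustrative values a = 1/2, b = 2, z = 1.3 that
#   M(a,b,z) = Gamma(b)/(Gamma(a) Gamma(b-a)) * int_0^1 e^{zs} s^{a-1} (1-s)^{b-a-1} ds
# agrees with the truncated power series and with scipy's hyp1f1.
a, b, z = 0.5, 2.0, 1.3

# Truncated power series (terms decay factorially, so 40 terms suffice).
series, term = 1.0, 1.0
for s in range(1, 40):
    term *= (a + s - 1) / ((b + s - 1) * s) * z
    series += term

# Euler integral; the endpoint singularity s^{-1/2} is integrable.
integral, _ = quad(lambda s: math.exp(z * s) * s**(a - 1) * (1 - s)**(b - a - 1), 0.0, 1.0)
integral *= math.gamma(b) / (math.gamma(a) * math.gamma(b - a))

ref = hyp1f1(a, b, z)
assert abs(series - ref) < 1e-10
assert abs(integral - ref) < 1e-6
```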
One has $$\begin{aligned} \label{sumexp} M(a,b,z) = 1+ \sum_{s=1}^{\infty} \frac{a(a+1)...(a+(s-1))}{ b(b+1)...(b+(s-1)) s!} z^s \end{aligned}$$ provided $b$ is not a nonpositive integer, and $$\begin{aligned} \label{intrep} M(a, b, z)= \frac{\Gamma(b)}{ \Gamma(a) \Gamma(b-a)}\int_0^{1} e^{z s } s^{a-1} (1-s)^{b-a-1}\: ds \end{aligned}$$ for $\Re(b) > \Re(a) >0$. We start with the following lemma which analyzes the integral in [\[intrep\]](#intrep){reference-type="eqref" reference="intrep"} for $a= 1-\frac{i}{2\sigma}$, $b=2$ and $z= 2i\sigma r$. **Lemma 20**. *We have the following expansion $$\begin{gathered} \label{eq:Mintest} \widetilde{\chi}_c(\sigma) \tilde{\chi}_c(\sigma r)\int\limits_0^{1} e^{2 i \sigma r s} s^{- \frac{i}{ 2\sigma}} (1-s)^{\frac{i}{2 \sigma}} ds \\ = \frac{\widetilde{\chi}_c(\sigma) \tilde{\chi}_c(\sigma r)}{(2i\sigma r)}\Bigg[ e^{2i \sigma r} (2 \sigma r i)^{-\frac{i}{2\sigma}} \Gamma\Big(1+ \frac{i}{ 2\sigma}\Big) - (-2 i \sigma r )^{\frac{i}{2\sigma}} \Gamma\Big(1- \frac{i}{ 2\sigma}\Big) \Bigg] \\ + e^{\frac{\pi}{2 \sigma}} \Big[ b_+(\sigma,r) + e^{2 i \sigma r} b_-(\sigma,r) \Big] \end{gathered}$$ where $| b_{\pm}(\sigma,r) | \lesssim|\sigma r|^{-2}$ , $|\partial_\sigma^{j} \{ b_{\pm}(\sigma,r) \} | \lesssim\sigma^{-j} |\sigma r|^{-1}$ for $j=0,1,2$.* *Proof.* Using the contour below, we obtain [\[contour\]](#contour){reference-type="eqref" reference="contour"}. 
$$\begin{gathered} \label{contour} \int_\varepsilon^{1-\varepsilon} e^{2 i \sigma r s} s^{- \frac{i}{ 2\sigma}} (1-s)^{\frac{i}{2 \sigma}} \:ds=\int\limits_{c_\varepsilon} e^{2 i \sigma r s} s^{- \frac{i}{ 2\sigma}} (1-s)^{\frac{i}{2 \sigma}} ds \\ +\int\limits_{l_1}e^{2 i \sigma r s} s^{- \frac{i}{ 2\sigma}} (1-s)^{\frac{i}{2 \sigma}} \:ds +\int\limits_{l_{\mathcal{R}}}e^{2 i \sigma r s} s^{- \frac{i}{ 2\sigma}} (1-s)^{\frac{i}{2 \sigma}} \:ds \\ +\int\limits_{l_2}e^{2 i \sigma r s} s^{- \frac{i}{ 2\sigma}} (1-s)^{\frac{i}{2 \sigma}} \: ds + \int\limits_{\tilde{c}_\varepsilon}e^{2 i \sigma r s} s^{- \frac{i}{ 2\sigma}} (1-s)^{\frac{i}{2 \sigma}} \:ds\,. \end{gathered}$$ Note that if $\sigma \geq c >0$, then $$\begin{aligned} &\Big|\int\limits_{c_\varepsilon} e^{2 i \sigma r s} s^{- \frac{i}{ 2\sigma}} (1-s)^{\frac{i}{2 \sigma}} \:ds \Big|= \varepsilon \Big|\int\limits_{\pi/2}^{\pi} e^{2i \sigma r \varepsilon e^{is}} ( \varepsilon e^{is})^{-\frac{i}{2\sigma}} (1- \varepsilon e^{is})^{\frac{i}{ 2\sigma}} e^{is}\:ds \Big| \leq \varepsilon e^{\frac{\pi}{2 c}} \\ &\Big|\int\limits_{\tilde{c}_\varepsilon} e^{2 i \sigma r s} s^{- \frac{i}{ 2\sigma}} (1-s)^{\frac{i}{2 \sigma}} \:ds \Big|= \varepsilon \Big|\int\limits_{\pi/2}^{\pi} e^{2i \sigma r (1+\varepsilon e^{is})} (1+ \varepsilon e^{is})^{-\frac{i}{ 2\sigma}} (- \varepsilon e^{is})^{\frac{i}{2\sigma}}e^{is}\:ds \Big| \leq \varepsilon e^{\frac{\pi}{2 c}}\,.\end{aligned}$$ Hence, the first and last terms on the right side of the equality in [\[contour\]](#contour){reference-type="eqref" reference="contour"} go to zero as $\varepsilon \to 0$. 
Moreover, as $\sigma r \geq c$ $$\begin{aligned} \Big|\int\limits_{l_{\mathcal{R}}}e^{2 i \sigma r s} s^{- \frac{i}{ 2\sigma}} (1-s)^{\frac{i}{2 \sigma}} \:ds \Big|= \Big|\int\limits_{0}^{1} e^{2i \sigma r (i \mathcal{R}+s)} (i\mathcal{R}+ s)^{-{\frac{i}{2 \sigma}}} (1-(i\mathcal{R}+s))^{\frac{i}{2 \sigma}} \: ds \Big| \leq e^{-2\mathcal{R}c} e^{\frac{\pi}{2 c}}\,.\end{aligned}$$ Therefore, the third term on the right side of the equality in [\[contour\]](#contour){reference-type="eqref" reference="contour"} goes to zero as $\mathcal{R} \to \infty$. We next estimate the second and fourth terms on the right side of the equality in [\[contour\]](#contour){reference-type="eqref" reference="contour"}. Parametrizing the paths and letting $\varepsilon \to 0$, $\mathcal{R} \to \infty$ we have $$\begin{aligned} \label{a1a2sum} \widetilde{\chi}_c(\sigma) \widetilde{\chi}_c(\sigma r)\int\limits_0^{1} e^{2 i \sigma r s} s^{- \frac{i}{ 2\sigma}} (1-s)^{\frac{i}{2 \sigma}} ds=\widetilde{\chi}_c(\sigma) \widetilde{\chi}_c(\sigma r)( A_1+A_2)\end{aligned}$$ where $$\begin{aligned} &A_1(\sigma,r):= \int\limits_0^{\infty} e^{2 i \sigma r (is)} (is)^{- \frac{i}{ 2\sigma}} (1-is)^{\frac{i}{2 \sigma}} i\:ds\\ &A_2(\sigma,r):=\int\limits_0^{\infty} e^{2 i \sigma r (1+i s)} (1+is)^{- \frac{i}{ 2\sigma}} (-is)^{\frac{i}{2 \sigma}} (-i)\:ds\,.\end{aligned}$$ We can write $$\begin{aligned} \label{eq:A1} A_1(\sigma,r)&=\int\limits_0^{\infty} e^{-2 \sigma r s} (is)^{- \frac{i}{ 2\sigma}} i\:ds +\int\limits_0^{\infty} e^{-2 \sigma rs } (is)^{- \frac{i}{ 2\sigma}} [(1-is)^{\frac{i}{2 \sigma}}-1] i\:ds \\ &=(-2 i \sigma r )^{ -1 + \frac{i}{2\sigma}} \Gamma(1-i/(2\sigma))+ e^{\frac{\pi}{2\sigma}}\int\limits_0^{\infty} e^{-\frac{\pi}{2\sigma}} e^{-2 \sigma r s} (is)^{- \frac{i}{ 2\sigma}}[(1-is)^{\frac{i}{2 \sigma}}-1 ] i\:ds \nonumber\end{aligned}$$ and similarly $$\begin{aligned} A_2(\sigma, r) &= e^{2 i \sigma r}\int\limits_0^{\infty} e^{-2\sigma rs} (1+is)^{-\frac{i}{2\sigma}} 
(-is)^{\frac{i}{2\sigma}} (-i) \:ds \\ & = e^{2 i \sigma r}\int\limits_0^{\infty} e^{-2\sigma rs} (-is)^{\frac{i}{2\sigma}} (-i) ds -e^{2 i \sigma r}\int\limits_0^{\infty} e^{-2\sigma rs} [(1+is)^{-\frac{i}{2\sigma}}-1] (-is)^{\frac{i}{2\sigma}} i\:ds \nonumber\\ & = e^{2i \sigma r} (2 \sigma r i)^{-1-\frac{i}{2\sigma}} \Gamma(1+ i/(2\sigma)) - e^{2 i \sigma r}\int\limits_0^{\infty} e^{-2\sigma rs} [(1+is)^{-\frac{i}{2\sigma}}-1] (-is)^{\frac{i}{2\sigma}} i\:ds \nonumber\,.\end{aligned}$$ If we let $$\begin{aligned} &b_+(\sigma,r):=\int\limits_0^{\infty} e^{-\frac{\pi}{2\sigma}} e^{-2 \sigma r s} (is)^{- \frac{i}{ 2\sigma}}[(1-is)^{\frac{i}{2 \sigma}}-1 ] i\:ds \\ &b_-(\sigma,r):=-\int\limits_0^{\infty} e^{-\frac{\pi}{2\sigma}}e^{-2\sigma rs} [(1+is)^{-\frac{i}{2\sigma}}-1] (-is)^{\frac{i}{2\sigma}} i\:ds\end{aligned}$$ then plugging $A_1$, $A_2$ in [\[a1a2sum\]](#a1a2sum){reference-type="eqref" reference="a1a2sum"} and comparing to [\[eq:Mintest\]](#eq:Mintest){reference-type="eqref" reference="eq:Mintest"}, one can see that it is enough to show that $b_{\pm}$ satisfy the bounds $|\partial_\sigma^{j} \{ b_{\pm}(\sigma,r) \} | \lesssim\sigma^{-j} |\sigma r|^{-1}$, $| b_{\pm}(\sigma,r) | \lesssim|\sigma r|^{-2}$. Below, we prove the bounds for $b_+$; the bounds for $b_-$ follow similarly. Observe that $|(is)^{- \frac{i}{ 2\sigma}}[(1-is)^{\frac{i}{2 \sigma}}-1 ]| \lesssim 1$. Furthermore, for any $s<1$, $(1- i s)^{\frac{i}{2 \sigma}}= 1 + O( s/\sigma)$ and therefore $$\begin{aligned} |(is)^{- \frac{i}{ 2\sigma}}[(1-is)^{\frac{i}{2 \sigma}}-1 ]| \lesssim\sigma^{-1} s\,.
\end{aligned}$$ This allows us to deduce, through interpolation, the following inequality for any $\sigma \geq c$ and $\alpha \in [0,1]$: $$\begin{aligned} \label{b1est} | b_+(\sigma,r)| \lesssim\sigma^{-\alpha}\int\limits_0^{1/|\sigma r|} s^{\alpha} \:ds + \sigma^{-\alpha}\int\limits_{ 1/|\sigma r|}^{\infty} \frac{s^{\alpha}}{ ( \sigma r s)^{5/2}} \:ds \lesssim|\sigma r |^{-(1+\alpha)}\,.\end{aligned}$$ Taking $\alpha=1$ yields $|b_+(\sigma,r)| \lesssim |\sigma r|^{-2}$. We next estimate $\partial_{\sigma}b_+(\sigma,r)$. Since the integral defining $b_+$ converges absolutely and locally uniformly in $\sigma$, we can differentiate under the integral sign. Hence, we first estimate the $\sigma$ derivative of the integrand in $b_+(\sigma,r)$. One has $$\begin{aligned} \label{derexp1} \partial_\sigma \{ e^{-\frac{\pi}{2\sigma}} e^{-2 \sigma r s} & (is)^{- \frac{i}{ 2\sigma}}[(1-is)^{\frac{i}{2 \sigma}}-1 ] \} \\ & = (is)^{- \frac{i}{ 2\sigma}} \partial_\sigma \{ e^{-\frac{\pi}{2\sigma}}e^{-2 \sigma r s} [(1-is)^{\frac{i}{2 \sigma}}-1 ] \} \nonumber\\ & +e^{-\frac{\pi}{2\sigma}} e^{-2 \sigma r s} [(1-is)^{\frac{i}{2 \sigma}}-1 ] \partial_\sigma \{ (is)^{- \frac{i}{ 2\sigma}}\}\,.
\nonumber \end{aligned}$$ Using the fact that $e^{-2 \sigma r s} \lesssim\langle\sigma r s \rangle^{-\ell}$ for any $\ell >0$, the first term in [\[derexp1\]](#derexp1){reference-type="eqref" reference="derexp1"} can be estimated as $$\begin{gathered} \label{derest1} | (is)^{- \frac{i}{ 2\sigma}} \partial_\sigma \{ e^{-\frac{\pi}{2\sigma}} e^{-2 \sigma r s} [(1-is)^{\frac{i}{2 \sigma}}-1 ] \}| \lesssim e^{-2 \sigma r s} [ ( rs )+ \sigma^{-2} \langle s \rangle^{0+} ] \\ \lesssim\langle\sigma r s \rangle^{-5/2} [ ( r s)+ \sigma^{-2} \langle s \rangle^{0+} ]\,.\end{gathered}$$ For the second term, we first have $$\begin{aligned} \label{derest2} \tilde{\chi}_c(\sigma) |e^{-\frac{\pi}{4\sigma}} [(1-is)^{\frac{i}{2 \sigma}} - 1]|\lesssim\frac{s}{\sigma} \chi(s < 1) + \chi(s \geq 1).\end{aligned}$$ Therefore, the second term in [\[derexp1\]](#derexp1){reference-type="eqref" reference="derexp1"} can be estimated for $\sigma \geq c$ as $$\begin{aligned} \label{derest3} | e^{-\frac{\pi}{2\sigma}} e^{-2 \sigma r s} [(1-is)^{\frac{i}{2 \sigma}}-1 ] \partial_\sigma \{ (is)^{- \frac{i}{ 2\sigma}}\}| \lesssim\langle\sigma r s \rangle^{-3/2}[ \frac{s}{\sigma} \chi(s <1) + \chi(s \geq 1)] \frac{\log |s|}{\sigma^2}\,.\end{aligned}$$ Using [\[derest1\]](#derest1){reference-type="eqref" reference="derest1"}, [\[derest3\]](#derest3){reference-type="eqref" reference="derest3"} and [\[derexp1\]](#derexp1){reference-type="eqref" reference="derexp1"} we obtain $$\begin{aligned} \label{intex} | \widetilde{\chi}_c(\sigma)& \tilde{\chi}_c(\sigma r)\partial_\sigma b_+(\sigma,r) | \lesssim\widetilde{\chi}_c(\sigma) \tilde{\chi}_c(\sigma r)\int\limits_0^{1/|\sigma r|} [\sigma^{-3} \max\{s^{1-},s^{0+}\}+ \sigma^{-1}] ds \\+ &\widetilde{\chi}_c(\sigma)\tilde{\chi}_c(\sigma r)\int\limits_{|\sigma r|^{-1}} ^{\infty} [ (rs) (\sigma rs)^{-5/2} + \sigma^{-2} s^{0+} (\sigma r s)^{-3/2}] ds \lesssim\sigma^{-1} |\sigma r|^{-1} \nonumber\,.\end{aligned}$$ We next estimate the second derivative of the 
integrand in $b_+(\sigma,r)$. One has $$\begin{aligned} \label{derexp2} \partial^2_\sigma \{ e^{-\frac{\pi}{2\sigma}} &e^{-2 \sigma r s} (is)^{-\frac{i}{ 2\sigma}} [(1-is)^{\frac{i}{2 \sigma}}-1 ] \} = (is)^{- \frac{i}{ 2\sigma}} \partial^2_\sigma \{ e^{-\frac{\pi}{2\sigma}}e^{-2 \sigma r s} [(1-is)^{\frac{i}{2 \sigma}}-1 ] \} \\ & +e^{-\frac{\pi}{2\sigma}} e^{-2 \sigma r s} [(1-is)^{\frac{i}{2 \sigma}}-1 ] \partial^2_\sigma \{ (is)^{- \frac{i}{ 2\sigma}}\} \nonumber\\ &+ 2 \partial_\sigma \{ e^{-\frac{\pi}{4\sigma}}e^{-2 \sigma r s} [(1-is)^{\frac{i}{2 \sigma}}-1 ] \} \partial_\sigma \{e^{-\frac{\pi}{4\sigma}} (is)^{- \frac{i}{ 2\sigma}}\}\nonumber\,.\end{aligned}$$ We have the following estimate for the first term in [\[derexp2\]](#derexp2){reference-type="eqref" reference="derexp2"}. $$\begin{gathered} \label{derest4} \tilde{\chi}_c(\sigma) | (is)^{- \frac{i}{ 2\sigma}} \partial^2_\sigma \{ e^{-\frac{\pi}{2\sigma}}e^{-2 \sigma r s} [(1-is)^{\frac{i}{2 \sigma}}-1 ] \}| \\ \lesssim\langle\sigma r s \rangle^{-7/2} [ (rs)^2 + (rs) \langle s \rangle^{0+} \sigma^{-2} + \sigma^{-3} \langle s \rangle^{0+}] \,.
\end{gathered}$$ Moreover, using [\[derest2\]](#derest2){reference-type="eqref" reference="derest2"} we have $$\begin{gathered} \label{derest6} \tilde{\chi}_c(\sigma) |e^{-\frac{\pi}{2\sigma}} e^{-2 \sigma r s} [(1-is)^{\frac{i}{2 \sigma}}-1 ] \partial^2_\sigma \{ (is)^{- \frac{i}{ 2\sigma}}\} | \\ \lesssim\langle\sigma r s \rangle^{-3/2}[ \frac{s}{\sigma} \chi(s <1) + \chi(s \geq 1)] \sigma^{-3} \max\{s^{0+}, s^{0-}\}\,.\end{gathered}$$ and $$\begin{gathered} \label{derest5} \tilde{\chi}_c(\sigma)\big| \partial_\sigma \{ e^{-\frac{\pi}{4\sigma}}e^{-2 \sigma r s} [(1-is)^{\frac{i}{2 \sigma}}-1 ] \}\big| \\ \lesssim\langle\sigma r s\rangle^{-3/2} [ (rs) + \sigma^{-2} s \chi(s < 1) + \sigma^{-2} s^{0+} \chi (s \geq 1)] \,.\end{gathered}$$ Using [\[derest5\]](#derest5){reference-type="eqref" reference="derest5"} we also estimate the last term in [\[derexp2\]](#derexp2){reference-type="eqref" reference="derexp2"} as $$\begin{gathered} \label{derest7} \tilde{\chi}_c(\sigma) | \partial_\sigma \{ e^{-\frac{\pi}{4\sigma}}e^{-2 \sigma r s} [(1-is)^{\frac{i}{2 \sigma}}-1 ] \} \partial_\sigma \{e^{-\frac{\pi}{4\sigma}} (is)^{- \frac{i}{ 2\sigma}}\}| \\ \lesssim\langle\sigma r s \rangle^{-3/2} [ (rs) + \sigma^{-2} s \chi(s < 1) + \sigma^{-2} s^{0+} \chi (s\geq 1)] \sigma^{-2} \log|s| \,.\end{gathered}$$ Using [\[derest4\]](#derest4){reference-type="eqref" reference="derest4"}, [\[derest5\]](#derest5){reference-type="eqref" reference="derest5"}, [\[derest7\]](#derest7){reference-type="eqref" reference="derest7"} and [\[derexp2\]](#derexp2){reference-type="eqref" reference="derexp2"} in a similar way to [\[intex\]](#intex){reference-type="eqref" reference="intex"}, we obtain $$\begin{aligned} \label{derb1est} | \widetilde{\chi}_c(\sigma) \tilde{\chi}_c(\sigma r)\partial^2_\sigma b_+(\sigma,r) | \lesssim\sigma^{-2} |\sigma r|^{-1}.\end{aligned}$$ The estimates [\[b1est\]](#b1est){reference-type="eqref" reference="b1est"}, [\[intex\]](#intex){reference-type="eqref" reference="intex"} and
[\[derb1est\]](#derb1est){reference-type="eqref" reference="derb1est"} establish the required bounds for $b_+(\sigma,r)$; the bounds for $b_-(\sigma,r)$ follow in the same way. ◻ *Proof of Proposition [Proposition 19](#prop:lexp){reference-type="ref" reference="prop:lexp"}.* We first prove [\[lsest\]](#lsest){reference-type="eqref" reference="lsest"}. Note that by [\[sumexp\]](#sumexp){reference-type="eqref" reference="sumexp"} we have $$\begin{aligned} \label{Msexp} M\Big( 1 - i/(2 \sigma), 2, 2 i \sigma r \Big) = 1 + i \sigma r + \frac{r}{2} + E(\sigma,r) \end{aligned}$$ where $$\begin{aligned} E(\sigma,r) = (1-\frac{i}{2\sigma})(2- \frac{i}{2\sigma})(\sigma r)^2 \sum_{s=0}^{\infty} c_s(\sigma) (\sigma r)^s\,.\end{aligned}$$ Since $\limsup_{s \to \infty} \big| \frac{c_{s+1}(\sigma)}{c_s(\sigma)}\big| =0$, the sum is convergent in the support of $\chi_k(\sigma r)$. Moreover, in the support of $\widetilde{\chi}_c(\sigma )$ one has $r \lesssim(\sigma r)$ and therefore, $$|\partial^j_\sigma \{ E(\sigma,r) \} | \lesssim\sigma^{2-j} r^2 .$$ Using [\[Msexp\]](#Msexp){reference-type="eqref" reference="Msexp"}, the expansion $e^{-i \sigma r} = 1 - i\sigma r + O_2( (\sigma r )^2)$ for $\sigma r<1$ and $$\begin{aligned} \label{gammaexp} \frac{e^{\frac{\pi}{4\sigma}}}{| \Gamma(1+ \frac{i}{2 \sigma})|} = (\pi)^{-\frac 12}\sigma^{\frac 12} [ e^{\frac{\pi}{\sigma}}-1]^{\frac 12} \end{aligned}$$ in [\[erep\]](#erep){reference-type="eqref" reference="erep"}, we obtain the statement. For the proof of [\[llest\]](#llest){reference-type="eqref" reference="llest"} we use Lemma [Lemma 20](#lem:Mintest){reference-type="ref" reference="lem:Mintest"}.
Note that by [\[intrep\]](#intrep){reference-type="eqref" reference="intrep"} we have $$\begin{aligned} \label{intrepsigma} M(1-i/(2 \sigma), 2, 2 i \sigma r)= \frac{1}{| \Gamma(1+ i/(2 \sigma))|^2}\int_0^{1} e^{2 i \sigma r s} s^{- \frac{i}{ 2\sigma}} (1-s)^{\frac{i}{2 \sigma}} ds\,.\end{aligned}$$ Moreover, $$\begin{aligned} \Gamma(1\pm i/(2\sigma)) = |\Gamma(1+ i/(2\sigma))| e^{\pm i \theta(\sigma)}\end{aligned}$$ where $\theta(\sigma) = \arg \Gamma(1+ i/(2\sigma))$ and $$(\mp 2i\sigma r)^{\pm \frac{i}{2\sigma} }= e^{\frac{\pi}{4\sigma}} e^{\pm \frac{i\log(2\sigma r)}{2\sigma}}.$$ Therefore, using Lemma [Lemma 20](#lem:Mintest){reference-type="ref" reference="lem:Mintest"}, [\[intrepsigma\]](#intrepsigma){reference-type="eqref" reference="intrepsigma"} and [\[gammaexp\]](#gammaexp){reference-type="eqref" reference="gammaexp"} in [\[erep\]](#erep){reference-type="eqref" reference="erep"}, we have in the support of $\widetilde{\chi}_c(\sigma) \tilde{\chi}_c(\sigma r)$ $$\begin{aligned} e(\sigma, \sigma^2 r) &= -i(\pi)^{-\frac 12} [e^{-i(\sigma r - \sigma^{-1} \ln(2\sigma r) + \theta(\sigma))} + e^{i(\sigma r - \sigma^{-1} \ln(2\sigma r) + \theta(\sigma))} ] \\ & -i(\pi)^{-\frac 12} ( 2 i \sigma r) \frac{e^{\frac{\pi}{4\sigma}}}{| \Gamma(1+ \frac{i}{2 \sigma})|} [ e^{-i\sigma r} b_+(\sigma,r) + e^{i\sigma r} b_-(\sigma,r)] \nonumber\,.\end{aligned}$$ Letting $\mathcal{E} (\sigma,r):= \pi^{-1} ( 2 \sigma r) \sigma^{\frac 12} [ e^{\frac{\pi}{\sigma}}-1]^{\frac 12} [ e^{-i\sigma r} b_+(\sigma,r) + e^{i\sigma r} b_-(\sigma,r)]$, we see that $\mathcal{E} (\sigma,r)$ satisfies the required bounds. We omit the verification of these bounds, as it amounts to elementary differentiation.
◻ # Proof of Theorem [Theorem 1](#thm.main){reference-type="ref" reference="thm.main"} {#proof} In this section we estimate the kernel $$K_t(r,s) = \frac{1 }{2rs}\int\limits_0^{\infty} e^{it q^2 \sigma^2} e(q\sigma, r) e(q\sigma, s) \: d\sigma,$$ showing that $\sup_{r,s} | K_t(r,s)| \lesssim t^{-\frac 32}$ for $r,s \geq 0$ and $t \geq 1$. This bound is sufficient to establish Theorem [Theorem 1](#thm.main){reference-type="ref" reference="thm.main"}, since one has $$\begin{aligned} \| e^{itH_{0,q}} g \|_{L^{\infty}(\mathbb R^3)}= \Big\|\int\limits_0^{\infty} K_t(r,s) s^2 g(s) ds \Big\|_{L_r^{\infty}(\mathbb R^3)} \lesssim\sup_{r,s} | K_t(r,s)| \| g\|_{ L^{1,s^2}([0,\infty))}. \end{aligned}$$ We normalize $q=1$ and choose $c <1$ sufficiently small to write $$\begin{aligned} K_t(r,s)&= \frac{1}{rs}\int\limits_0^{\infty} e^{it \sigma^2} \chi_c(\sigma) e(\sigma, r) e(\sigma, s) \: d\sigma + \frac{1}{rs}\int\limits_0^{\infty} e^{it \sigma^2} \tilde{\chi}_c(\sigma) e(\sigma, r) e(\sigma, s) \: d\sigma \\ &= K_t^l(r,s) + K_t^h(r,s) \nonumber\,.\end{aligned}$$ ## Estimation of $K_t^l(r,s)$ In this section, we will prove that **Proposition 21**. *For any $c>0$ sufficiently small, we have that $$\begin{aligned} \sup_{r,s} | K_t^l(r,s) | \lesssim t^{-\frac 32}.\end{aligned}$$* We prove Proposition [Proposition 21](#prop:low){reference-type="ref" reference="prop:low"} with a series of lemmas.
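Before carrying this out, it may help to see the target $t^{-\frac 32}$ decay rate emerge in a model case. The sketch below (Python; the bump cutoff and the quadratically vanishing amplitude are illustrative stand-ins for the cutoffs and amplitude bounds appearing in the lemmas, not the actual $e(\sigma,r)$) evaluates $I(t)=\int_0^1 e^{it\sigma^2}\sigma^2\chi(\sigma)\,d\sigma$ numerically and compares $|I(t)|$ at two values of $t$:

```python
# Model oscillatory integral I(t) = int_0^1 e^{i t s^2} s^2 chi(s) ds with a
# smooth, compactly supported cutoff chi.  The stationary point s = 0 and the
# quadratically vanishing amplitude force |I(t)| ~ C t^{-3/2}; the specific
# bump below is an illustrative choice, not the chi_c of the text.
import math
from scipy.integrate import quad

def chi(s, c=1.0):
    # standard smooth bump supported on [0, c); vanishes to infinite order at c
    return math.exp(-1.0 / (1.0 - (s / c) ** 2)) if s < c else 0.0

def I(t):
    # integrate real and imaginary parts of e^{i t s^2} s^2 chi(s) separately
    re, _ = quad(lambda s: math.cos(t * s * s) * s * s * chi(s), 0.0, 1.0, limit=1000)
    im, _ = quad(lambda s: math.sin(t * s * s) * s * s * chi(s), 0.0, 1.0, limit=1000)
    return complex(re, im)

# quadrupling t should shrink |I(t)| by roughly 4^{3/2} = 8
ratio = abs(I(100.0)) / abs(I(400.0))
```

Stationary phase at $\sigma=0$ predicts $|I(t)|\approx \chi(0)\frac{\sqrt{\pi}}{4}\,t^{-3/2}$, so the ratio computed above should be close to $8$; the boundary $\sigma=1$ contributes negligibly because the bump vanishes to infinite order there.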
Fix some constant $k \geq 4$ and write $$\begin{aligned} \label{maintermlow} K_t^l(r,s)= & \frac{1}{rs}\int_0^{\infty} e^{it \sigma^2} \chi_c(\sigma)\chi_{k}(\sigma^2 r) \chi_{k}(\sigma^2 s) e(\sigma,r) e(\sigma,s) \: d\sigma \\ &+ \frac{1}{rs}\int_0^{\infty} e^{it \sigma^2} \chi_c(\sigma) [\chi_{k}(\sigma^2 r) \tilde{\chi}_k(\sigma^2 s) + \tilde{\chi}_{k}(\sigma^2 r) \chi_k(\sigma^2 s) ] e(\sigma,r) e(\sigma,s)\: d\sigma \nonumber\\ &+ \frac{1}{rs}\int_0^{\infty} e^{it \sigma^2} \chi_c(\sigma) \tilde{\chi}_k(\sigma^2 s) \tilde{\chi}_k(\sigma^2 r)e(\sigma,r) e(\sigma,s) \: d\sigma \nonumber\\ =& K_1(r,s; t) + K_2(r,s; t) +K_3(r,s; t) \nonumber\,. \end{aligned}$$ By symmetry, we may always assume that $r\geq s$. Furthermore, observe that the support of $\chi_c(\sigma)\tilde{\chi}_k(\sigma^2r)$ is empty unless $r\geq \frac{k}{c^2}>1$ so we are free to assume that $r\geq s >1$ when considering $K_3$. **Lemma 22**. *We have that $| K_1(r,s;t)| \lesssim t^{-\frac 32}.$* *Proof.* Let $a(\sigma;r,s)= (rs)^{-1}\chi_c(\sigma)\chi_k(\sigma^2 r) \chi_k(\sigma^2 s) e(\sigma,r) e(\sigma,s)$. With $^\prime$ denoting the derivative with respect to $\sigma$, it is easy to see that, as a function of $\sigma$, $\chi_k(\sigma^2 r) = O_{\infty}(\sigma^{0})$ from the computation $\chi^{\prime}_k(\sigma^2 r) = \chi^{\prime}(\sigma^2 r) (2 \sigma r)$ and the fact that $\sigma^2 r \sim1$ on the support of $\chi^{\prime}(\sigma^2 r)$; the same holds for $\chi_k(\sigma^2 s)$.
Therefore, we may use the bounds from Corollary [Corollary 18](#cor:low){reference-type="ref" reference="cor:low"} to see that $$\begin{aligned} \label{smsmain} | a(\sigma;r,s) | \lesssim\sigma^4 , \,\,\, | a^{\prime}(\sigma;r,s)| \lesssim\sigma^2 , \,\,\,\ |a^{\prime \prime}(\sigma;r,s)| \lesssim 1\,.\end{aligned}$$ Integrating by parts via the identity $e^{it\sigma^2}=(i2t\sigma)^{-1}\frac{d}{d\sigma}[e^{it\sigma^2}]$ and suppressing the variables $r$ and $s$, we obtain $$\begin{aligned} K_1(r,s;t) = \frac{1}{2it}\int\limits_0^{\infty} e^{it \sigma^2} \Big(\frac{a(\sigma)}{\sigma}\Big)^{\prime} d \sigma =\frac{1}{2it}\int\limits_{\sigma < t^{-\frac 12}} e^{it \sigma^2} \Big(\frac{a(\sigma)}{\sigma}\Big)^{\prime} d \sigma +\frac{1}{2it}\int\limits_{\sigma \geq t^{-\frac 12}} e^{it \sigma^2} \Big(\frac{a(\sigma)}{\sigma}\Big)^{\prime} d \sigma \,.\end{aligned}$$ By [\[smsmain\]](#smsmain){reference-type="eqref" reference="smsmain"}, we have $\Big|\Big(\frac{a(\sigma;r,s)}{\sigma}\Big)^{\prime}\Big| \lesssim\sigma$, therefore, the first term is bounded by $t^{-2} \lesssim t^{-\frac 32}$. We apply another integration by parts to the second term to bound it by $$\begin{aligned} \label{nobound} t^{-2}\int\limits_{\sigma \geq t^{-\frac 12}} \Big| \frac{1}{\sigma} \Big(\frac{a(\sigma)}{\sigma}\Big)^{\prime \prime}\Big| + \Big| \frac{1}{\sigma^2}\Big(\frac{a(\sigma)}{\sigma}\Big)^{\prime} \Big| \: d\sigma \lesssim t^{-2}\int\limits_{\sigma \geq t^{-\frac 12}} \sigma^{-2} \: d\sigma \lesssim t^{-\frac 32} \,.\end{aligned}$$ Here, we omit the boundary term arising from the integration by parts since it is bounded by the integral in [\[nobound\]](#nobound){reference-type="eqref" reference="nobound"}. This finishes the proof. ◻ To estimate the other terms in [\[maintermlow\]](#maintermlow){reference-type="eqref" reference="maintermlow"}, we prove the following oscillatory integral estimate. **Lemma 23**.
*Suppose that for all $r>\frac{k}{c^2}$, $\omega_r(\sigma):[0,\infty)\rightarrow\mathbb{R}$ is a $C^2$ phase function and $\delta_r:\mathbb{R}\rightarrow \mathbb{R}$ is a weight function satisfying* 1. *$0<\delta_r \lesssim\omega_r'(\sigma)\lesssim r$* 2. *$\omega_r''(\sigma)<0$ and $\lvert \omega_r''(\sigma) \rvert \lesssim\frac{\delta_r}{\sigma}$* *and $a_r(\sigma):[0,\infty)\rightarrow \mathbb{C}$ is a $C^2$ amplitude function satisfying* 1. *$\lvert a_r(\sigma) \rvert \lesssim\frac{\sigma^2}{r}$* 2. *$\lvert a_r'(\sigma) \rvert \lesssim\sigma^2$* 3. *$\int_{0}^{\infty} \sigma^{-1}(\lvert a_r''(\sigma) \rvert+r\lvert a_r'(\sigma) \rvert)\chi(\sigma) \: d\sigma \lesssim 1$* *uniformly for $\sigma\in [k^{\frac{1}{2}} r^{-\frac 12},c]$ and $r>0$. Then with $\chi(\sigma) := \chi_c(\sigma) \tilde{\chi}_k(\sigma^2 r)$, the integral $$\begin{aligned} I^{\pm}(r,s;t) := \int\limits_0^{\infty} e^{i (t \sigma^2 \pm \omega_r(\sigma))} \chi(\sigma) a_r(\sigma) \:d\sigma \end{aligned}$$ satisfies $|I^{\pm}(r,s;t)| \lesssim t^{-\frac 32}$ with an implicit constant that does not depend on $a_r$ or $\omega_r$.* **Remark 24**. *In the application of this lemma, the phase and amplitude may depend additionally on the parameter $s$. The last sentence of the statement indicates that as long as the bounds on $\omega_r$ and $a_r$ hold uniformly in $s$, the conclusion will also hold uniformly in $s$.* *Proof.* As before, we first integrate by parts via $e^{it\sigma^2}=(2it\sigma)^{-1}\frac{d}{d\sigma}[e^{it\sigma^2}]$ to find that $$\begin{aligned} I^{\pm}(r,s;t) &=\frac{1}{2it}\int\limits_{0}^{\infty} e^{i(t\sigma^2\pm\omega_r(\sigma))} [ b_{\pm} (\sigma;r) + \tilde{b} (\sigma;r) ]\chi(\sigma) \: d\sigma \\ &=I^{\pm}_1+I^{\pm}_2\end{aligned}$$ for $$\begin{aligned} b_{\pm} (\sigma;r) = \pm i\sigma^{-1} \omega_r^{\prime}(\sigma) a_r(\sigma) \,\,\,\ \tilde{b} (\sigma;r) = \sigma^{-1} [\chi(\sigma) a_r(\sigma)]^{\prime}-\sigma^{-2} a_r(\sigma)\,.\end{aligned}$$ We first estimate $I_2^\pm$.
Split the integral as $$\begin{aligned} I^{\pm}_2 = \frac{1}{2it} \int\limits_{0}^{t^{-\frac{1}{2}}} e^{i(t\sigma^2\pm \omega_r(\sigma))} \tilde{b}(\sigma;r) \:d\sigma + \frac{1}{2it}\int\limits_{t^{-\frac{1}{2}}}^\infty e^{i(t\sigma^2 \pm \omega_r(\sigma))} \tilde{b}(\sigma;r) \:d\sigma \end{aligned}$$ and observe that the assumptions on $a_r$ imply that $$\begin{aligned} & |\tilde{b} (\sigma; r) |\lesssim\sigma^{-1} \lvert a^{\prime}_r(\sigma) \rvert +\sigma^{-2} \lvert a_r(\sigma) \rvert \lesssim\sigma, \\ &| \tilde{b}^{\prime}(\sigma; r)| \lesssim\sigma^{-1} |a_r^{\prime \prime}(\sigma)| + \sigma^{-2} |a_r^{\prime}(\sigma)| + \sigma^{-3} |a_r(\sigma)| \lesssim\sigma^{-1} [ |a_r^{\prime \prime}(\sigma)|+ r |a_r^{\prime}(\sigma)|] \end{aligned}$$ where for the final term in the second line we have used that $\sigma^{-3}\lvert a_r(\sigma) \rvert \lesssim 1/(\sigma r)\lesssim 1$. Therefore, the first integral is bounded by $t^{-\frac{3}{2}}$ and for the second we apply another integration by parts (ignoring the easily estimated boundary term) to bound it by $$\begin{gathered} t^{-2}\int\limits_{t^{-\frac 12}}^{\infty} \Big( | [\sigma^{-1}\tilde{b}^{\prime} (\sigma;r)]^{\prime} |+ |\sigma^{-1}\tilde{b}(\sigma;r) \omega^{\prime}_r(\sigma)| \Big)\chi(\sigma) \:d\sigma\\ \lesssim t^{-\frac{3}{2}}\int\limits_{0}^\infty \sigma^{-1} (\lvert a_r''(\sigma) \rvert+ r\lvert a_r'(\sigma) \rvert)\chi(\sigma) \: d\sigma \lesssim t^{-3/2}\,.\end{gathered}$$ We now turn our attention to $I^{\pm}_1$. Here, we must treat the $\omega_r(\sigma)$ term as part of the phase so we write $$\begin{aligned} I^{\pm}_1=(2it)^{-1}\int_{0}^{\infty} e^{it\Phi^{\pm}_{r,t}(\sigma)} b_{\pm}(\sigma;r) \chi(\sigma)\: d\sigma \,\,\ \text{with} \,\,\,\ \Phi^{\pm}_{r,t}(\sigma):=\sigma^{2}\pm t^{-1} \omega_r(\sigma)\,.\end{aligned}$$ We have $(\Phi^{\pm}_{r,t})'(\sigma)=2\sigma\pm t^{-1}\omega^{\prime}_r(\sigma)$, and $(\Phi^{\pm}_{r,t})''(\sigma)=2\pm t^{-1}\omega_r^{\prime \prime}(\sigma)$.
As $\omega^{\prime}_r>0$ and $\omega_r^{\prime \prime}<0$, only $\Phi_{r,t}^{-}$ has a stationary point and it is automatically non-degenerate. Since the phase in $I_1^+$ is non-stationary, the integral is easily estimated. As before, the integrand is $O(\sigma)$ so we may split the domain of integration at $t^{-\frac{1}{2}}$ and integrate by parts to find that $$\begin{aligned} \label{ibp2} I_1^{+}\lesssim t^{-\frac{3}{2}} +t^{-2}\int\limits_{t^{-\frac{1}{2}}}^{\infty}\left| \frac{ b^{\prime}_+(\sigma;r) }{(\Phi^{+}_{r,t}(\sigma))'} + b_+(\sigma;r) \frac{d}{d\sigma}[((\Phi^{+}_{r,t})^{\prime})^{-1}(\sigma)] \right|\chi(\sigma) \: d\sigma \,.\end{aligned}$$ Now by applying various properties of $a_r$ and $\omega_r$, we observe that $$\begin{aligned} & |b_{\pm}(\sigma;r) |\lesssim\sigma^{-1} r |a_r(\sigma)| \lesssim\sigma, \label{bbounds} \\ & | b_{\pm}^{\prime}(\sigma;r)| \lesssim\sigma^{-2} |[\omega_r^{\prime} a_r](\sigma) |+ \sigma^{-1} |[\omega_r^{\prime \prime }a_r](\sigma)| + \sigma^{-1} |[\omega_r^{\prime} a_r^{\prime}](\sigma)| \lesssim 1+ \sigma^{-1} r |a_r^{\prime}(\sigma)| \nonumber\end{aligned}$$ so that because $|(\Phi^{+}_{r,t})^{\prime}(\sigma)|^{-1} \lesssim\sigma^{-1}$ we have $$\begin{aligned} \left| \frac{ b^{\prime}_+(\sigma;r) }{(\Phi^{+}_{r,t}(\sigma))'}\right| \lesssim\sigma^{-1} + \sigma^{-2} r \lvert a_r^{\prime}(\sigma) \rvert \end{aligned}$$ which makes an admissible contribution. Furthermore, observe that $$\begin{aligned} \frac{|(\Phi_{r,t}^{+})^{\prime \prime}(\sigma)|}{ |(\Phi_{r,t}^{+})^{\prime}(\sigma)|^2} \lesssim\frac{2+\delta_r/(t\sigma)}{(2\sigma +\delta_r/t)^2}\end{aligned}$$ so that $$\begin{aligned} \left|b_+(\sigma;r)\frac{d}{d\sigma}[((\Phi^+_{r,t})^{\prime})^{-1}](\sigma)\right|\lesssim(2\sigma+\delta_r/t)^{-1}< \sigma^{-1}.\end{aligned}$$ Integrating now shows that $|I_1^+| \lesssim t^{-\frac{3}{2}}$. We now treat $I_1^{-}$. Since the stationary point is not explicitly calculable, some care is required.
Because of the lower bound on $\omega_r'$, we may find $C$ depending only on $c$ so that if $t<\delta_r C$ then $(\Phi^{-}_{r,t})'<-1$ uniformly on $\mathop{\mathrm{supp}}\chi$. Using this, we break into cases depending on the value of $t$:\ [Case 1: $t < \delta_r C$ ]{.ul}\ Due to the lower bound on $\lvert (\Phi^{-}_{r,t})' \rvert$, the phase is non-stationary and therefore the integral may be estimated similarly to $I_1^+$.\ [Case 2: $t\geq \delta_r C$ ]{.ul}\ In this regime, the phase may become stationary, however any stationary point will be non-degenerate because $( \Phi^{-}_{r,t})''\geq 2$ on $\text{supp} \chi$ uniformly in $r$ by the properties of $\omega_r$. Indeed, because the second derivative is bounded below away from $0$, we claim that we may always find some $\sigma_*\in \text{supp} \chi$ so that $|(\Phi^-_{r,t})'(\sigma)| \geq 2 |\sigma-\sigma_*|$ on $\text{supp} \chi$. If $(\Phi^-_{r,t})'$ vanishes at some $\sigma_*$ then this is immediate from the mean value theorem. Otherwise, we know that $(\Phi^-_{r,t})'$ is increasing on $[a,b]=\text{supp}\chi$ so we must have that either $(\Phi^-_{r,t})'(a)>0$ or $(\Phi^-_{r,t})'(b)<0$ if $(\Phi^-_{r,t})'$ does not vanish. In either case, the claim is easily seen to hold with $\sigma_*=a$ or $b$, respectively.
With this in mind, we write $$\begin{aligned} I^-_1&=\frac{1}{2it}\int\limits_{\lvert \sigma-\sigma_* \rvert <t^{-\frac{1}{2}}} e^{it\Phi^-_{r,t}(\sigma)} b_-(\sigma;r)\chi(\sigma) \: d\sigma +\frac{1}{2it}\int_{\lvert \sigma-\sigma_* \rvert >t^{-\frac{1}{2}}} e^{it\Phi^-_{r,t}(\sigma)} b_-(\sigma;r)\chi(\sigma)\: d\sigma\,.\end{aligned}$$ As before, the integrand of the first integral is bounded so by integrating by parts in the second we see that we need only estimate $$\begin{aligned} t^{-2}\int\limits_{\lvert \sigma-\sigma_* \rvert >t^{-\frac{1}{2}}} \Bigg( \frac{ b_-^{\prime}(\sigma;r) }{(\Phi^-_{r,t})^{\prime}(\sigma)} - \frac{b_-(\sigma;r)(\Phi^-_{r,t})''(\sigma)}{ ((\Phi^-_{r,t})'(\sigma))^2 } +\frac{b_-(\sigma;r)}{(\Phi^-_{r,t})'(\sigma)} \chi'(\sigma)\Bigg) \,d \sigma .\end{aligned}$$ The term with $\chi'$ is easily seen to be admissible whereas the rest of the integral may be bounded by $$\begin{aligned} & t^{-2}\int\limits_{\lvert \sigma-\sigma_* \rvert>t^{-\frac{1}{2}} }^{} \left(\frac{|b_-^{\prime}(\sigma;r) |}{\lvert \sigma-\sigma_* \rvert }+ \frac{|b_- (\sigma;r)| \sup\lvert (\Phi^{-}_{r,t})''(\sigma) \rvert }{\lvert \sigma-\sigma_* \rvert^2}\right)\chi(\sigma)\: d\sigma \\ &\lesssim t^{-\frac{3}{2}}\int\limits_0^{\infty} \lvert b_-'(\sigma;r) \rvert \chi(\sigma) \:d\sigma + t^{-2}\int_{\lvert \sigma-\sigma_* \rvert>t^{-\frac{1}{2}} }^{} \frac{ |b_-(\sigma;r)| \sup| (\Phi^{-}_{r,t})''(\sigma)| }{ |\sigma - \sigma_*|^2} d \sigma\,. \end{aligned}$$ The bounds in $\eqref{bbounds}$ show that the first integral is bounded by $t^{-\frac{3}{2}}$ whereas for the second we use that $$\begin{aligned} \lvert b_-(\sigma;r)(\Phi^{-}_{r,t})''(\sigma) \rvert \lesssim\sigma + t^{-1}\sigma\lvert \omega_r''(\sigma) \rvert \lesssim 1 + \frac{\delta_r}{t}\lesssim 1\end{aligned}$$ to conclude. This finishes the proof. ◻ We are now ready to show that **Lemma 25**.
*We have that [\[lem:sml\]]{#lem:sml label="lem:sml"}$| K_2(r,s;t)| \lesssim t^{-\frac 32}$.* *Proof.* Since $\tilde{\chi}_k(\sigma^2r)\chi_k(\sigma^2s)$ only has non-empty support if $r> s$, it suffices to consider $$\begin{aligned} \int_{0}^{\infty} e^{it\sigma^2}\frac{e(\sigma,r)}{r}\frac{e(\sigma,s)\chi_k(\sigma^2s)}{s}\chi(\sigma)\: d\sigma \end{aligned}$$ where as in Lemma [Lemma 23](#lem:osc2){reference-type="ref" reference="lem:osc2"} $\chi(\sigma)=\chi_c(\sigma)\tilde{\chi}_k(\sigma^2r)$. By Corollary [Corollary 15](#oscCor){reference-type="ref" reference="oscCor"}, on the support of $\chi$, $e(\sigma,r)= e^{i\zeta_r(\sigma)}a_{+}(\sigma,r)+e^{- i\zeta_r(\sigma)}a_{-}(\sigma,r)$. Therefore, we need to estimate the integrals $$\begin{aligned} \label{smalllarge} I_{\pm}:=\int_{0}^{\infty} e^{i(t\sigma^2\pm\zeta_r)}\frac{a_{\pm}(\sigma,r)}{r}\frac{b_s(\sigma)}{s}\chi(\sigma)\: d\sigma \end{aligned}$$ where $b_s(\sigma):=\chi_{k}(\sigma^2s)e(\sigma,s)$. We verify the conditions of Lemma [Lemma 23](#lem:osc2){reference-type="ref" reference="lem:osc2"}. Proposition [Proposition 14](#langerBounds){reference-type="ref" reference="langerBounds"} shows that $\zeta_r$ satisfies the hypotheses of the lemma so we need only check that $a_{r,s}(\sigma)=\frac{a_{\pm}(\sigma,r)}{r}\frac{b_s(\sigma)}{s}$ satisfies the hypotheses as well, uniformly in $s$.
From Corollary [Corollary 15](#oscCor){reference-type="ref" reference="oscCor"} we obtain $$\begin{aligned} \lvert a_-(\sigma,r) \rvert \lesssim 1,\,\,\lvert a_-'(\sigma,r) \rvert \lesssim\sigma^{-1},\,\,\lvert a_-''(\sigma,r) \rvert \lesssim r \end{aligned}$$ and from Corollary [Corollary 18](#cor:low){reference-type="ref" reference="cor:low"} that $$\begin{aligned} \lvert b_s(\sigma) \rvert \lesssim s\sigma^2,\,\,\lvert b_s'(\sigma) \rvert \lesssim s[\sigma^2\chi_{\frac{1}{2}}(\sigma^2s)+\chi_{\frac{1}{2},k}(\sigma^2s)],\,\,\lvert b_s''(\sigma) \rvert \lesssim s[\sigma^2\chi_{\frac{1}{2}}(\sigma^2s)+\sigma^{-2}\chi_{\frac{1}{2},k}(\sigma^2s)], \end{aligned}$$ where $\chi_{\frac{1}{2},k}=(\tilde{\chi}_{\frac{1}{2}}\cdot\chi_{k})(\sigma^2s)$. Clearly $\lvert a_{r,s}(\sigma) \rvert \lesssim\frac{\sigma^2}{r}$ and furthermore $$\begin{aligned} \lvert a_{r,s}'(\sigma) \rvert \lesssim r^{-1}[\sigma^2\chi_{\frac{1}{2}}(\sigma^2s)+\chi_{\frac{1}{2},k}(\sigma^2s)]+\frac{\sigma}{r}\lesssim\sigma^2\end{aligned}$$ since $r^{-1}\lesssim\sigma^2$ on the support of $\chi$. Proceeding, one may also check that $$\begin{aligned} \lvert a_{r,s}''(\sigma) \rvert \lesssim\sigma\chi_{\frac{1}{2}}(\sigma^2s)+\chi_{\frac{1}{2},k}(\sigma^2s)\,.\end{aligned}$$ It now follows from the computation $$\begin{aligned} \int_{0}^{\infty}\sigma^{-1}\chi_{\frac{1}{2},k}(\sigma^2s)\chi(\sigma) \: d\sigma\leq \log(2k)/2\end{aligned}$$ that $\int\limits_{0}^{\infty}\sigma^{-1}(\lvert a_{r,s}''(\sigma) \rvert +r\lvert a_{r,s}'(\sigma) \rvert) \: d\sigma\lesssim 1$ so we conclude from Lemma [Lemma 23](#lem:osc2){reference-type="ref" reference="lem:osc2"} that $I_{\pm}=O(t^{-\frac{3}{2}})$. ◻ We next prove that **Lemma 26**.
*$|K_3(r,s;t)| \lesssim t^{-\frac 32}.$* *Proof.* By Corollary [Corollary 15](#oscCor){reference-type="ref" reference="oscCor"}, in this $\sigma$ regime, $e(\sigma,r) e(\sigma,s)$ can be written as a sum of the terms $$\begin{aligned} e^{ - i (\zeta_r(\sigma)\pm \zeta_s(\sigma))} a_{- }(\sigma,r) a_{\pm }(\sigma,s), \,\,\,\ e^{ i (\zeta_r(\sigma)\pm \zeta_s(\sigma))} a_{+}(\sigma,r) a_{\pm}(\sigma,s)\,. \end{aligned}$$ Therefore, it suffices to bound $$\begin{aligned} \label{largelarge} &\int\limits_0^{\infty} e^{it \sigma^2 - i (\zeta_r(\sigma)\pm \zeta_s(\sigma))} \frac{a_{- }(\sigma,r)}{r} \frac{a_{\pm }(\sigma,s) \tilde{\chi}_k(\sigma^2 s)}{s}\chi(\sigma)\:d \sigma \\ &\int_0^{\infty} e^{it \sigma^2 + i (\zeta_r(\sigma)\pm \zeta_s(\sigma))} \frac{a_{+}(\sigma,r)}{r} \frac{a_{\pm }(\sigma,s) \tilde{\chi}_k(\sigma^2 s)}{s}\chi(\sigma)\:d \sigma\,,\end{aligned}$$ where $\chi(\sigma):=\chi_c(\sigma) \tilde{\chi}_k(\sigma^2 r)$. In order to apply Lemma [Lemma 23](#lem:osc2){reference-type="ref" reference="lem:osc2"}, we must verify that the phases $\zeta_r+\zeta_s$ and $\zeta_r-\zeta_s$ and the amplitude $a_{r,s}(\sigma)= \frac{a_+(\sigma,r)}{r}\frac{a_+(\sigma,s)\tilde{\chi}_k(\sigma^2s)}{s}$ satisfy the hypotheses of the lemma (the latter being sufficient because $a_+$ and $a_-$ obey the same bounds). When $r=s$, the phase $\zeta_r-\zeta_s$ degenerates to $0$. Setting aside this easily treated case, the conditions on the phases are satisfied by Proposition [Proposition 14](#langerBounds){reference-type="ref" reference="langerBounds"} so we consider the amplitude $a_{r,s}(\sigma)$.
From Corollary [Corollary 15](#oscCor){reference-type="ref" reference="oscCor"}, we see that $$\begin{aligned} \lvert a_{r,s}(\sigma) \rvert \lesssim\frac{\tilde{\chi}_k(\sigma^2s)}{rs}\lesssim\frac{\sigma^2}{r}\end{aligned}$$ and also that $$\begin{aligned} \lvert a^{\prime}_{r,s}(\sigma) \rvert \lesssim\frac{\sigma^{-1}\tilde{\chi}_k(\sigma^2s)}{rs}\lesssim\frac{1}{rs^{\frac{1}{2}}}\end{aligned}$$ which is indeed less than $\sigma^2$ on the domain in question. Furthermore, we have that $$\begin{aligned} \lvert a''_{r,s}(\sigma) \rvert \lesssim r^{-1}\end{aligned}$$ from which one can easily check that $\int\limits_{0}^{\infty} \sigma^{-1}(\lvert a''_{r,s}(\sigma) \rvert +r\lvert a'_{r,s}(\sigma) \rvert) \chi(\sigma)\: d\sigma \lesssim 1$. Applying Lemma [Lemma 23](#lem:osc2){reference-type="ref" reference="lem:osc2"} now completes the proof. ◻ *Proof of Proposition [Proposition 21](#prop:low){reference-type="ref" reference="prop:low"}.* Combining the bounds for $K_1$, $K_2$ and $K_3$, we obtain the statement. ◻ ## Estimate of $K_t^h(r,s)$ We will prove the following proposition. **Proposition 27**. *Let $c<1$ and $t \geq 1$. Then we have $\sup_{r,s} | K_t^h(r,s) | \lesssim t^{-\frac 32}.$* As in the previous section, we will prove Proposition [Proposition 27](#prop:high){reference-type="ref" reference="prop:high"} with a series of lemmas.
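All of the amplitudes $e(\sigma,\cdot)$ estimated in these lemmas were built from Kummer's function $M(a,b,z)$ through the series and Euler integral representations stated at the beginning of the section. As a quick numerical sanity check of that integral representation, the sketch below compares the two with mpmath (the parameter values are illustrative ones satisfying $\Re b > \Re a > 0$, not the $a=1-\frac{i}{2\sigma}$, $b=2$ of the proofs):

```python
# Cross-check Kummer's function M(a,b,z) = 1F1(a;b;z) against the Euler
# integral representation
#   M(a,b,z) = Gamma(b)/(Gamma(a)*Gamma(b-a)) * int_0^1 e^{z s} s^(a-1) (1-s)^(b-a-1) ds,
# valid for Re(b) > Re(a) > 0.  The parameters below are illustrative.
from mpmath import mp, mpc, gamma, quad, exp, hyp1f1

mp.dps = 30  # working precision in decimal digits

def M_integral(a, b, z):
    integrand = lambda s: exp(z * s) * s ** (a - 1) * (1 - s) ** (b - a - 1)
    return gamma(b) / (gamma(a) * gamma(b - a)) * quad(integrand, [0, 1])

a, b = mp.mpf('0.5'), 2           # Re(b) > Re(a) > 0
z = mpc(1, 2)                     # complex argument, mimicking z = 2*i*sigma*r
series_val = hyp1f1(a, b, z)      # mpmath sums the defining series
integral_val = M_integral(a, b, z)
err = abs(series_val - integral_val)
```

mpmath's default tanh-sinh quadrature absorbs the integrable endpoint singularity $s^{-1/2}$ here; for the purely imaginary exponent $a-1=-\frac{i}{2\sigma}$ used in the proofs the integrand instead oscillates near the endpoints, which is exactly what the contour deformation of Lemma 20 is designed to handle.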
We let $k \geq 4$ and write $$\begin{aligned} \label{maintermhigh} K_t^h(r,s)= & \frac{1}{rs}\int\limits_0^{\infty} e^{it \sigma^2} \tilde{\chi}_c(\sigma)\chi_{k}(\sigma r) \chi_{k}(\sigma s) e(\sigma,r) e(\sigma,s)\: d\sigma \\ &+ \frac{1}{rs}\int\limits_0^{\infty} e^{it \sigma^2} \tilde{\chi}_c(\sigma) [\chi_{k}(\sigma r) \tilde{\chi}_k(\sigma s) + \tilde{\chi}_{k}(\sigma r) \chi_k(\sigma s) ] e(\sigma,r) e(\sigma,s)\: d\sigma \nonumber\\ &+ \frac{1}{rs}\int\limits_0^{\infty} e^{it \sigma^2} \tilde{\chi}_k(\sigma) \tilde{\chi}_k(\sigma s) \tilde{\chi}_c(\sigma^2 r)e(\sigma,r) e(\sigma,s) \: d\sigma\nonumber\\ =& \tilde{K}_1(r,s;t)+\tilde{K}_2(r,s;t)+ \tilde{K}_3(r,s;t). \nonumber \end{aligned}$$ We start estimating the first term. **Lemma 28**. *$| \tilde{K}_1(r,s;t)| \lesssim t ^{-\frac 32}.$* *Proof.* Let $a(\sigma;r,s)= (rs)^{-1} \tilde{\chi}_c(\sigma) \chi_{k}(\sigma r) \chi_{k}(\sigma s) e(\sigma,r) e(\sigma,s)$. Then, using [\[lsest\]](#lsest){reference-type="eqref" reference="lsest"}, we have $$\begin{aligned} a(\sigma;r,s) = \tilde{\chi}_c(\sigma)\chi_k(\sigma r) \chi_k(\sigma s) \sigma [e^{\frac{\pi}{\sigma}} -1]^{-1} ( 1 + O_2(\sigma s) + O_2(\sigma r) )\,.\end{aligned}$$ Therefore, we have $$\begin{aligned} |a(\sigma;r,s)| \lesssim\sigma^2 \tilde{\chi}_c(\sigma)\chi_k(\sigma r) \chi_c(\sigma s)\,.\end{aligned}$$ Moreover, since $\chi_c(\sigma r) = O_{\infty}(\sigma^{0})$ we in fact have $a(\sigma;r,s)= \tilde{\chi}_c(\sigma)\chi_c(\sigma r) O_2(\sigma^2 )$. Hence, integrating by parts twice, we obtain $$\begin{aligned} \label{ibptwice} \Big| \int\limits_0^{\infty} e^{i t \sigma^2} a(\sigma;r,s) d \sigma\Big| \lesssim t^{-2}\int\limits_0^{\infty} \Big| \Big(\frac{1}{\sigma} \Big( \frac{ a(\sigma;r,s)}{\sigma} \Big)^{\prime} \Big)^{\prime} \Big|\: d \sigma \lesssim t^{-2}\int\limits_{c}^{\infty} \sigma^{-2} d\sigma \lesssim t^{-2}\,. \end{aligned}$$ We obtain no boundary terms since the support of the integral is away from both zero and infinity.
◻ We next prove the following oscillatory lemma, which will be useful for estimating the contributions of the remaining terms. **Lemma 29**. *Let $a(\sigma)= \tilde{\chi}_c(\sigma) O_2( \sigma )$. Then one has $$\begin{aligned} I(r,s;t)=\int\limits_0^{\infty} e^{it \sigma^2 \pm i \varphi_u(\sigma)} \tilde{\chi}_k(\sigma r) a(\sigma) \: d \sigma \lesssim t^{-\frac 32} \max\{u,r\}\end{aligned}$$ provided that the condition [\[condu\]](#condu){reference-type="eqref" reference="condu"} holds for $\varphi_u(\sigma)$ within the support of the integral: $$\begin{aligned} \label{condu} \varphi^{\prime}_u(\sigma) \sim u, \quad \varphi^{\prime \prime}_u(\sigma) < 0, \quad |\varphi^{\prime \prime}_u(\sigma)| \lesssim\sigma^{-2} u\,.\end{aligned}$$* *Proof.* As in the proof of Lemma [Lemma 23](#lem:osc2){reference-type="ref" reference="lem:osc2"}, we start with an integration by parts and write $$\begin{aligned} I^{\pm}(r,s;t) &= \frac{1}{2 it}\int\limits_0^{\infty} e^{it \sigma^2 \pm i \varphi_u(\sigma)}[ b_1(\sigma;r) + b_2(\sigma;r,u)] \:d \sigma \\ &=: I^{\pm}_1 + I^{\pm}_2 \nonumber\end{aligned}$$ with $$\begin{aligned} b_1(\sigma;r):= \sigma^{-1} ( \tilde{\chi}_k(\sigma r) a(\sigma))^{\prime} - \sigma^{-2} \tilde{\chi}_k(\sigma r) a(\sigma), \,\,\,\,\,\ b_2(\sigma;r,u):= \pm i \sigma^{-1} \tilde{\chi}_k(\sigma r) a(\sigma) \varphi_u^{\prime}(\sigma)\,.\end{aligned}$$ We apply another integration by parts to $I^{\pm}_1$ to bound it by $$\begin{aligned} t^{-2}\int\limits_0^{\infty} | (\sigma^{-1} b_1(\sigma;r))^{\prime}|+ \sigma^{-1} | b_1(\sigma;r) \varphi_u^{\prime}(\sigma)| \: d \sigma \,.\end{aligned}$$ We have $\varphi_u^{\prime}(\sigma) \lesssim u$, $|b_1| \lesssim\sigma^{-1}$ and $| (\sigma^{-1} b_1)^{\prime}| \lesssim\tilde{\chi}_k(\sigma r) \sigma^{-3} \lesssim\sigma^{-2} r$. Therefore, $I^{\pm}_1 \lesssim t^{-2} \max\{r,u\}$. We next focus on $I^{\pm}_2$. We let $\Phi_{\pm}(\sigma) = \sigma^2 \pm t^{-1} \varphi_u (\sigma)$.
Note that the conditions on $\varphi_u$ are arranged so that only $\Phi_{-}(\sigma)$ might have a critical point. In fact, as $\varphi^{\prime \prime}_u(\sigma)<0$, for each fixed $r$, if this critical point exists then it must be non-degenerate in the support of $\tilde{\chi}_k(\sigma r)$. Furthermore, since $\varphi^{\prime}_u(\sigma)$ is decreasing in $\sigma$, as opposed to the increasing function $2 \sigma$, there is always $\sigma_* \sim u/t$ such that $| \Phi_-^{\prime}(\sigma) | \gtrsim |\sigma - \sigma_*|$. With this in mind, we first focus on $I^{-}_2$ and divide it as $$\begin{aligned} I^{-}_2 = (2it)^{-1}\int\limits_{|\sigma - \sigma_*| \leq t^{-\frac 12}} e^{it \Phi_{-}(\sigma)} b_2(\sigma; r,u)\: d \sigma + (2it)^{-1}\int\limits_{|\sigma - \sigma_*| > t^{-\frac 12}} e^{it \Phi_{-}(\sigma)} b_2(\sigma; r,u) \:d \sigma. \end{aligned}$$ We have $|b_2(\sigma;r,u)| \lesssim u$, therefore the first term in $I^{-}_2$ is bounded by $t^{-\frac 32} u$. We apply another integration by parts to bound the second term in $I^{-}_2$ by $$\begin{aligned} \label{lastterm} t^{-2}\int\limits_{|\sigma - \sigma_*| > t^{-\frac 12}} \frac{|b_2^{\prime}(\sigma;r,u)|}{| \Phi_-^{\prime}(\sigma)|} + \frac{|b_2(\sigma;r,u)|| \Phi_-^{\prime \prime }(\sigma)|}{| \Phi_-^{\prime}(\sigma)|^2} \: d \sigma \end{aligned}$$ where we omit the boundary term since it is simply bounded by [\[lastterm\]](#lastterm){reference-type="eqref" reference="lastterm"}. Note that we have $|b_2^{\prime}(\sigma;r,u)| \lesssim\sigma^{-1} u$.
Therefore, $$\frac{|b_2^{\prime}(\sigma;r,u)|}{| \Phi_-^{\prime}(\sigma)|} \lesssim\frac{u} {\sigma |\sigma - \sigma_*|} \lesssim\frac{u}{\sigma^{2}} + \frac{u}{|\sigma - \sigma_*|^2}.$$ Furthermore, as $\Phi_-^{\prime \prime }(\sigma) = 2 - t^{-1} \varphi_u^{\prime \prime}$ and $| \varphi_u^{\prime \prime}| \lesssim\sigma^{-2} u$, $$\frac{|b_2(\sigma;r,u)|| \Phi_-^{\prime \prime }(\sigma)|}{| \Phi_-^{\prime}(\sigma)|^2} \lesssim\frac{ u}{|\sigma - \sigma_*|^2} + \frac{u^2 }{ |\sigma - \sigma_*|^2 \sigma^2t} \lesssim\frac{ u}{|\sigma - \sigma_*|^2} + \frac{ u \sigma_*}{|\sigma - \sigma_*|^2 \sigma^2}.$$ Note that if $\sigma_* \leq 1$, then $$\frac{|b_2(\sigma;r,u)|| \Phi_-^{\prime \prime }(\sigma)|}{| \Phi_-^{\prime}(\sigma)|^2}\lesssim\frac{ u}{|\sigma - \sigma_*|^2}$$ as $\sigma > c$. On the other hand, if $\sigma_* \geq 1$ and $|\sigma - \sigma_*| \leq \sigma_*/2$ then $\sigma \sim \sigma_*$ and $$\frac{ u \sigma_*}{|\sigma - \sigma_*|^2 \sigma^2} \lesssim\frac{u }{ \sigma_* |\sigma - \sigma_*|^2} \lesssim\frac{ u}{|\sigma - \sigma_*|^2}.$$ If $\sigma_* \geq 1$ and $|\sigma - \sigma_*| \geq \sigma_*/2$ then $$\frac{ u \sigma_*}{|\sigma - \sigma_*|^2 \sigma^2} \lesssim\frac{u}{ \sigma^2 \sigma_*} \lesssim\frac{ u}{\sigma^2}.$$ Therefore, the integrand in $\eqref{lastterm}$ is bounded by $u \sigma^{-2} + u |\sigma- \sigma_*|^{-2}$. The first term is integrable away from zero and the integration of the second one in $|\sigma - \sigma_*| \geq t^{-\frac 12}$ is bounded by $u t^{\frac 12}$. Therefore, we have $I^{-}_2 \lesssim u t^{-3/2}$. Finally, we consider $I^{+}_2$ where we have no critical points.
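A remark on the elementary splitting used above: the passage from $\frac{u}{\sigma|\sigma-\sigma_*|}$ to $\frac{u}{\sigma^2}+\frac{u}{|\sigma-\sigma_*|^2}$ is the inequality $\frac{1}{ab}\le\frac{1}{a^2}+\frac{1}{b^2}$ (a consequence of $2ab\le a^2+b^2$) with $a=\sigma$, $b=|\sigma-\sigma_*|$. A quick randomized sanity check, with arbitrary sample values:

```python
import random

random.seed(0)
for _ in range(10000):
    sigma = random.uniform(0.01, 100.0)
    sigma_star = random.uniform(0.01, 100.0)
    if abs(sigma - sigma_star) < 1e-9:
        continue  # the bound is only needed away from the singularity
    lhs = 1.0 / (sigma * abs(sigma - sigma_star))
    rhs = 1.0 / sigma ** 2 + 1.0 / abs(sigma - sigma_star) ** 2
    # 2ab <= a^2 + b^2 implies 1/(ab) <= 1/a^2 + 1/b^2 (with a factor-2 margin)
    assert lhs <= rhs
print("splitting inequality verified on random samples")
```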
We apply another integration by parts to this integral to see $$\begin{aligned} I^+_2 \lesssim t^{-2}\int\limits_0^{\infty} \frac{|b_2^{\prime}(\sigma;r,u)|}{| \Phi_+^{\prime}(\sigma)|} + \frac{|b_2(\sigma;r,u)|| \Phi_+^{\prime \prime }(\sigma)|}{| \Phi_+^{\prime}(\sigma)|^2}\: d \sigma \lesssim t^{-2} u \end{aligned}$$ since $|b_2^{\prime}(\sigma; r,u)| \lesssim\sigma^{-1} u$, $| \Phi_+^{\prime}(\sigma)|^{-1} \lesssim|\sigma + t^{-1} \varphi^{\prime}_u (\sigma) |^{-1} \lesssim\sigma^{-1}$ and $|b_2(\sigma;r,u)| \lesssim u$, $| \Phi_+^{\prime}(\sigma)|^{-1}\lesssim|t^{-1} \varphi^{\prime}_u (\sigma)|^{-1} \lesssim t/u$. ◻ We continue estimating the term $\tilde{K}_2(r,s;t)$. **Lemma 30**. *$|\tilde{K}_2(r,s;t)| \lesssim t^{-\frac 32}$.* *Proof.* Taking into account the symmetry in $\tilde{K}_2(r,s;t)$ with respect to $r$ and $s$, we concentrate on estimating the following expression: $$(rs)^{-1}\int\limits_0^{\infty}e^{it\sigma^2} \tilde{\chi}_c(\sigma) \tilde{\chi}_k(\sigma r) \chi_k(\sigma s) e(\sigma,r) e(\sigma,s) \: d\sigma.$$ Using Proposition [Proposition 19](#prop:lexp){reference-type="ref" reference="prop:lexp"}, we compute $$\begin{aligned} \label{hhhl} \tilde{\chi}_c(\sigma) \tilde{\chi}_k(\sigma r) e(\sigma,r) \chi_k(\sigma s) e(\sigma,s) &= e^{ i ( \sigma r - \sigma^{-1} \log(2 \sigma r))} e^{ i \theta(\sigma)} \tilde{\chi}_k(\sigma r) \tilde{\chi}_c(\sigma) \chi_k(\sigma s) e(\sigma,s) \\ & + e^{ -i ( \sigma r - \sigma^{-1} \log(2 \sigma r))} e^{- i \theta(\sigma)} \tilde{\chi}_k(\sigma r) \tilde{\chi}_c(\sigma) \chi_k(\sigma s) e(\sigma,s) \nonumber\\ & + \mathcal{E}(\sigma,r) \chi_k(\sigma s) e(\sigma,s) \nonumber\,.\end{aligned}$$ We first focus on the first two terms in [\[hhhl\]](#hhhl){reference-type="eqref" reference="hhhl"}.
Taking $u=r$ and applying Lemma [Lemma 29](#lem:osc3){reference-type="ref" reference="lem:osc3"} to $\varphi_r(\sigma) = \sigma r - \sigma^{-1} \log(2 \sigma r)$ and $a(\sigma)= \tilde{\chi}_c(\sigma) e^{ \pm i \theta(\sigma)} \chi_k(\sigma s) e(\sigma,s)$, we proceed to estimate $$\begin{aligned} \label{highsmlg} (rs)^{-1}\int\limits_0^{\infty} e^{it \sigma^2 \pm i \varphi_r(\sigma)} \tilde{\chi}_c(\sigma) \tilde{\chi}_k(\sigma r) a(\sigma) \: d \sigma. \end{aligned}$$ Note that $\varphi_r^{\prime}(\sigma) = r + \sigma^{-2} [\log(2 \sigma r) - 1]$. Within the domain of $\tilde{\chi}_k(\sigma r)$, it holds that $\log(2 \sigma r) > 1$, leading to the inequality $r \leq \varphi_r^{\prime}(\sigma) \leq (1 + c^{-1})r$. Additionally, we find that $\varphi_r^{\prime \prime}(\sigma) = -\sigma^{-3}[ 2 \log(2 \sigma r) - 3] < 0$. Therefore, in the support of $\tilde{\chi}_k(\sigma r)$, it is evident that $|\varphi_r^{\prime \prime}(\sigma)| \lesssim\sigma^{-2} r$. Utilizing [\[lsest\]](#lsest){reference-type="eqref" reference="lsest"}, along with the fact that $| \theta^{\prime}| \lesssim\sigma^{-2}$ and $|\theta^{\prime \prime}| \lesssim\sigma^{-3}$, we observe that $a(\sigma)= s \tilde{\chi}_c(\sigma) O_2(\sigma )$. Hence, by applying Lemma [Lemma 29](#lem:osc3){reference-type="ref" reference="lem:osc3"}, we can conclude that $|\eqref{highsmlg}|\lesssim t^{-\frac{3}{2}}$. We next consider the last term in the expansion of $\tilde{\chi}_c(\sigma) \tilde{\chi}_k(\sigma r) e(\sigma,r) \chi_k(\sigma s) e(\sigma,s)$.
That is, we need to bound $$\begin{aligned} \label{epsilon} (rs)^{-1}\int\limits_0^{\infty} e^{i t \sigma^2} \tilde{\chi}_k(\sigma s) e(\sigma,s) \mathcal{E}(\sigma,r)\: d\sigma\,.\end{aligned}$$ By [\[lsest\]](#lsest){reference-type="eqref" reference="lsest"} and [\[epsbound\]](#epsbound){reference-type="eqref" reference="epsbound"} we have $$\begin{aligned} |\partial^j_\sigma\{ \mathcal{E}_{\pm}(\sigma,r) \chi_k(\sigma s) e(\sigma,s)\}| \lesssim\sigma^{2-j} rs \tilde{\chi}_c(\sigma) \chi_k(\sigma s) \,.\end{aligned}$$ Hence, by integration by parts as in [\[ibptwice\]](#ibptwice){reference-type="eqref" reference="ibptwice"}, we bound [\[epsilon\]](#epsilon){reference-type="eqref" reference="epsilon"} by $t^{-2}$. This finishes the proof. ◻ We finally prove **Lemma 31**. *$|\tilde{K}_3(r,s;t)| \lesssim t^{-\frac 32}$.* *Proof.* We start computing the integrand in $\tilde{K}_3(r,s;t)$. Note that by Proposition [Proposition 19](#prop:lexp){reference-type="ref" reference="prop:lexp"}, we may write $\tilde{\chi}_c(\sigma r)e(\sigma,r) \tilde{\chi}_c(\sigma s)e(\sigma,s)$ as the sum of the following terms $$\begin{aligned} \label{hhhh} & e^{ \pm i( [ \sigma r - \sigma^{-1} \log(2 \sigma r)]+[\sigma s - \sigma^{-1} \log(2 \sigma s)])}e^{\pm 2i \theta(\sigma)} \widetilde{\chi}_c(\sigma r) \widetilde{\chi}_k(\sigma) \widetilde{\chi}_k(\sigma s),\\ & e^{\pm i( [\sigma r - \sigma^{-1} \log(2 \sigma r)] - [\sigma s - \sigma^{-1} \log(2 \sigma s)])} \widetilde{\chi}_c(\sigma r) \widetilde{\chi}_k(\sigma) \widetilde{\chi}_k(\sigma s), \nonumber\\ & e^{\pm i ( \sigma s - \sigma^{-1} \log(2 \sigma s))} \widetilde{\chi}_k(\sigma s)e^{ \pm i \theta(\sigma)} \mathcal{E}(\sigma,r), \nonumber\\ & e^{\pm i ( \sigma r - \sigma^{-1} \log(2 \sigma r))} \widetilde{\chi}_k(\sigma r)e^{ \pm i \theta(\sigma)} \mathcal{E} (\sigma,s), \nonumber\\ & \mathcal{E}(\sigma,s) \mathcal{E} (\sigma,r) \nonumber\,.\end{aligned}$$ We first consider the last three terms in
[\[hhhh\]](#hhhh){reference-type="eqref" reference="hhhh"}. Since the third and fourth terms are symmetric, it will be enough to bound $$\begin{aligned} \label{highlglg} (rs)^{-1}\int\limits_0^{\infty} e^{it \sigma^2 \pm i \varphi_r(\sigma)} \tilde{\chi}_c(\sigma) \tilde{\chi}_k(\sigma r) a_{\pm}(\sigma) d \sigma + (rs)^{-1}\int_0^{\infty} e^{it \sigma^2} \mathcal{E}(\sigma,r) \mathcal{E}(\sigma,s) \: d \sigma\,,\end{aligned}$$ where $a_{\pm}(\sigma)= e^{ \pm i \theta(\sigma)} \mathcal{E}_{\pm}(\sigma,s)$ and $\varphi_r(\sigma) =\sigma r - \sigma^{-1} \log(2 \sigma r)$. By the previous lemma, $\varphi_r(\sigma)$ satisfies the conditions in [\[condu\]](#condu){reference-type="eqref" reference="condu"}. Moreover, $a_{\pm}(\sigma) = s \tilde{\chi}_c(\sigma) O_2(\sigma )$. Hence, by Lemma [Lemma 29](#lem:osc3){reference-type="ref" reference="lem:osc3"} we conclude that the first term in [\[highlglg\]](#highlglg){reference-type="eqref" reference="highlglg"} is bounded by $t^{-\frac 32}$. For the second term in [\[highlglg\]](#highlglg){reference-type="eqref" reference="highlglg"}, we have $$\begin{aligned} |\mathcal{E}(\sigma,r) \mathcal{E}(\sigma,s) | \lesssim 1, \quad | \partial_\sigma^j\{\mathcal{E}(\sigma,r) \mathcal{E}(\sigma,s)\} | \lesssim\sigma^{2-j} rs , \,\,\,\ j=1,2\,.\end{aligned}$$ Therefore, by integrating by parts twice we can bound the second term in [\[highlglg\]](#highlglg){reference-type="eqref" reference="highlglg"} by $t^{-2}$. We finally focus on the first two terms in [\[hhhh\]](#hhhh){reference-type="eqref" reference="hhhh"}. For the first term, we let $\varphi_{r+ s} (\sigma)= [ \sigma r - \sigma^{-1} \log(2 \sigma r)]+ [\sigma s - \sigma^{-1} \log(2 \sigma s)]$ and estimate the integral $$\begin{aligned} \label{r+s} (rs)^{-1}\int\limits_0^{\infty} e^{it\sigma^2 \pm i \varphi_{r+s}(\sigma)} \tilde{\chi}_c(\sigma) \tilde{\chi}_k(\sigma r) \tilde{\chi}_k(\sigma s) e^{\pm 2i \theta(\sigma)} \: d \sigma.
\end{aligned}$$ Note that [\[r+s\]](#r+s){reference-type="eqref" reference="r+s"} is symmetric with respect to $r$ and $s$. Therefore, without loss of generality, we assume $r\geq s$ and use Lemma [Lemma 29](#lem:osc3){reference-type="ref" reference="lem:osc3"} for $a_{\pm}(\sigma) = \tilde{\chi}_c(\sigma) \tilde{\chi}_k(\sigma s) e^{\pm 2i \theta(\sigma)}$. While it is evident that $a_{\pm}(\sigma) = s \tilde{\chi}_c(\sigma) O_2(\sigma)$, we still need to demonstrate that $\varphi_{r + s}$ satisfies the conditions in [\[condu\]](#condu){reference-type="eqref" reference="condu"}. We calculate $\varphi^{\prime}_{r + s} (\sigma) = (r+s) + \sigma^{-2} \left[ \log( 2 \sigma r) + \log( 2 \sigma s) - 2 \right]$. Thus, within the domain of $\widetilde{\chi}_k(\sigma s) \widetilde{\chi}_k(\sigma r)$, we have $r+s \leq \varphi^{\prime}_{r + s} \leq (r+s) (1 + 1/c)$. It is also possible to compute $\varphi^{\prime \prime}_{r + s} = -\sigma^{-3}\left[ 2 \log(2 \sigma r) + 2 \log(2 \sigma s) - 6 \right] < 0$. Noting that $r+s \leq 2 r$, we can deduce from Lemma [Lemma 29](#lem:osc3){reference-type="ref" reference="lem:osc3"} that $|\eqref{r+s} |\lesssim t^{-\frac{3}{2}}$. Finally, we consider the second term in [\[hhhh\]](#hhhh){reference-type="eqref" reference="hhhh"} and estimate $$\begin{aligned} \label{r-s} (rs)^{-1}\int\limits_0^{\infty} e^{it\sigma^2 \pm i \varphi_{r-s}(\sigma)} \tilde{\chi}_k(\sigma r) a(\sigma) \: d \sigma \end{aligned}$$ where $\varphi_{r - s} (\sigma) = [ \sigma r - \sigma^{-1} \log(2 \sigma r)]- [\sigma s - \sigma^{-1} \log(2 \sigma s)]$ and $a(\sigma) = \tilde{\chi}_c(\sigma ) \tilde{\chi}_k(\sigma s)$. We compute $\varphi^{\prime}_{r - s} (\sigma) = (r-s) + \sigma^{-2} \log( r/s)$. Since we can assume $r\geq s$ due to the symmetry, we immediately have $\varphi^{\prime}_{r - s} (\sigma) \geq r-s$.
Furthermore, by the mean value theorem, we have $0 < \log(\sigma r) - \log(\sigma s) \lesssim (r-s) s^{-1}$ and $$\sigma^{-2} \log( r/s) \lesssim\frac{ (r-s)} {\sigma^2 s} \lesssim c^{-1} (r-s)$$ in the support of $\sigma s \geq k$. Hence, $\varphi^{\prime}_{r - s} (\sigma) \sim r-s$ and $|\varphi^{\prime \prime}_{r - s} (\sigma)| \lesssim\sigma^{-2} (r-s)$. Moreover, $a(\sigma) = s \tilde{\chi}_c(\sigma ) O_2(\sigma)$ and therefore, by Lemma [Lemma 29](#lem:osc3){reference-type="ref" reference="lem:osc3"} we have $|\eqref{r-s}| \lesssim t^{-\frac 32}$. ◻ *Proof of Proposition [Proposition 27](#prop:high){reference-type="ref" reference="prop:high"}.* Combining the bounds for $\tilde{K}_1$, $\tilde{K}_2$ and $\tilde{K}_3$, we obtain the statement. ◻ # Kernel of the Coulomb evolution The analysis above stems from the following explicit representation of the time evolution of $H_{0,q}$ for $q>0$: $$\begin{aligned} \label{evolrep} [e^{itH_{0,q} } f](r) = -\frac{q}{2r} \int\limits_0^{\infty}\int\limits_0^{\infty} e^{it \sigma^2} M_{\frac{iq}{2\sigma},\frac 12}(2 i\sigma r) M_{\frac{iq}{2\sigma},\frac 12}(2i \sigma s) sf(s) \sigma^{-1} [e^{\frac{q \pi}{\sigma}} -1]^{-1} \: d\sigma \:ds \end{aligned}$$ where $M_{\frac{iq}{2 \sigma}, \frac 12} (\cdot)$ is the Whittaker-M function (see [@NIST Ch 13]).
Upon substituting $\sigma$ with $q\sigma$, equation [\[evolrep\]](#evolrep){reference-type="eqref" reference="evolrep"} transforms into: $$\begin{aligned} \frac{q}{2r}\int\limits_0^{\infty}\int\limits_0^{\infty} e^{it q^2 \sigma^2} e(q \sigma,r) e(q \sigma,s) sf(s) \sigma^{-1} [e^{\frac{\pi}{ \sigma}} - 1]^{-1} \: d\sigma \: ds\,.\end{aligned}$$ Here, the function $e(q\sigma,r)$ is defined as: $$\begin{aligned} e(q\sigma,r) :=-i\sigma^{-\frac{1}{2}} [e^{\frac{\pi}{\sigma}} - 1]^{-\frac{1}{2}} M_{\frac{i}{2 \sigma}, \frac{1}{2}} (2iq\sigma r).\end{aligned}$$ This representation is obtained by diagonalizing $r H_{0,q} r^{-1} = -\frac{d^2}{dr^2} +\frac{q}{r}$ via the *distorted Fourier transform*. The purpose of this section is to explain the proof of ([\[evolrep\]](#evolrep){reference-type="ref" reference="evolrep"}), namely **Theorem 32**. *For all $f\in rC^\infty_{0,\mathrm{rad}}(\mathbb{R}^3)$, the equality ([\[evolrep\]](#evolrep){reference-type="ref" reference="evolrep"}) holds.* ## Review of Weyl-Titchmarsh theory We begin by briefly recalling some of the basic spectral theory of half-line Schrödinger operators with a regular left endpoint. In particular, we summarize the construction of the distorted Fourier transform. This theory is well-known and more details may be found in [@CW], [@CL Ch.9], [@DS Sect. XIII.5], [@EK Ch. 2], [@Ev], [@GZ], [@Hi Ch. 10], [@Ko], [@Le], [@LS Ch. 2], [@Na Ch. VI], [@Pe Ch. 6], [@RS Ch.X], [@Ti Chs. II, III], [@W Sects. 7--10]. While our potential is not regular at $0$ due to the $\frac{1}{r}$ singularity, the theory of singular potentials is developed in parallel to the regular case. Consider the symmetric Schrödinger operator $$\begin{aligned} H=-\frac{d^2}{dx^2} + V(x),\,\,\,\,V=\overline{V}\in L^1_{\text{loc}}(\mathbb{R}_+)\end{aligned}$$ with domain $\mathcal{D}(H)=C^2_{0}(\mathbb{R}_+)$.
We assume that $V\in L^1(0,1)$ and that it is *limit point* at $\infty$, that is, for any $z\in \mathbb{C}\setminus \mathbb{R}$, the space of solutions to $Hf=zf$ that are $L^2$ at $\infty$ is at most $1$-dimensional. For instance, it is sufficient (but by no means necessary) to assume that $V$ is bounded at $\infty$. For $\alpha \in [0,\pi]$, let $H_{\alpha}$ be the self-adjoint extension of $H$ with the domain $$\begin{aligned} \mathcal{D}_{\alpha}:=\{ g \in H^2(\mathbb R_+) \mid \sin(\alpha) g^{\prime}(0) + \cos(\alpha) g(0) =0 \}. \end{aligned}$$ We first define $\phi_\alpha(z,x)$ and $\theta_\alpha(z,x)$ as the fundamental system of solutions to $H_{\alpha}f = -z^2f$, for $z \in \mathbb{C}$, that satisfy $$\begin{aligned} \label{phitheta} \phi_{\alpha}(z,0)= - \theta_{\alpha}^{\prime}(z,0) = -\sin(\alpha),\,\,\,\ \phi^{\prime}_{\alpha}(z,0)= \theta_{\alpha}(z,0)= \cos(\alpha), \,\,\,\ W(\phi(z, \cdot), \theta(z,\cdot))=1.\end{aligned}$$ Because $V$ is $L^1$ near $0$, the existence of $\phi_\alpha$ and $\theta_\alpha$ is assured by Picard iteration, as is their analyticity as functions of $z$. Furthermore, they are real-valued for $z^2\in\mathbb R$. We next define a *Weyl solution* $\psi_{\alpha}(z,\cdot)$ near infinity (or zero) to be a non-zero solution to $H_{\alpha}f = -z^2f$ that is $L^2$ near infinity (or zero). We note that, as long as $V$ is continuous and real valued in $(0,\infty)$, there exists at least one Weyl solution near infinity and at least one Weyl solution near zero; see Theorem X.6 of [@RS]. Since $H$ is in the limit point case at infinity, the Weyl solution near infinity is unique (up to scaling), whereas because $H$ is in the limit circle case at zero, all solutions are Weyl solutions near zero.
Hence, we can uniquely characterize the Weyl solution at infinity as $$\begin{aligned} \psi_\alpha(x,z)=\theta_\alpha(x,z)+m(z)\phi_{\alpha}(x,z)\end{aligned}$$ where $m(z)= W(\theta(z,\cdot), \psi(z,\cdot))$ is the Weyl-m function, which is analytic for $z^2 \in \mathbb{C} \backslash \mathbb{R}$. Note that this representation is possible because the $\theta_\alpha$ coefficient of $\psi_\alpha$ cannot vanish or $\psi_\alpha$ would be an eigenfunction for non-real $z^2$. The significance of the Weyl solution is that it allows us to write the resolvent kernel or Green's function via $$\begin{aligned} \label{Res} (H_{\alpha}+z^2)^{-1} f(x) =\int\limits_0^{\infty} [\phi(x,z)\psi(y,z)\chi_{[0<x<y]} + \phi(y,z)\psi(x,z)\chi_{[x>y>0]} ]f(y) \:dy. \end{aligned}$$ With these objects in hand, we are ready to define the distorted Fourier transform: **Proposition 33**. *For $f\in C_0([0,\infty))$ let $$\begin{aligned} [U_\alpha f](\lambda)=\int_{0}^{\infty} f(x)\phi_\alpha(x,\lambda)\: dx \,.\end{aligned}$$ Then we have the following Plancherel theorem $$\begin{aligned} \|f\|_{L^2(\mathbb{R}_+)}=\|U_\alpha f\|_{L^2(\mathbb{R},\rho)}\end{aligned}$$ and inversion formula $$\begin{aligned} f(x)=\lim_{b\to\infty}\int_{-b}^{b} [U_\alpha f](\lambda)\phi_\alpha(x,\lambda)\: \rho(d\lambda)\,.\end{aligned}$$ In particular, for any $F \in C(\mathbb R)$ and $f \in C_0^{\infty}([0,\infty))$, we have $$\begin{aligned} \label{diag} [F(H_{\alpha}) f](\cdot)=\int\limits_{\sigma(H_{\alpha})}\int_0^{\infty} F(\sigma^2) \phi_{\alpha}(\sigma,\cdot) \phi_{\alpha}(\sigma,x) f(x) \:dx \: \rho(d\sigma). \end{aligned}$$* We refer to $\phi_\alpha$ as the distorted Fourier basis and to $\rho$ as the associated spectral measure. The proof comes from plugging [\[Res\]](#Res){reference-type="eqref" reference="Res"} into Stone's formula.
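As a concrete illustration of Proposition 33, consider the free case $V=0$ with Dirichlet boundary condition ($\alpha=0$); this is the classical half-line sine transform, not the Coulomb setting, and we assume its standard normalization $\phi_0(x,\lambda)=\frac{\sin(\lambda x)}{\lambda}$, $\rho(d\lambda)=\frac{2}{\pi}\lambda^2\,d\lambda$. For $f(x)=xe^{-x}$ one computes $[U_0 f](\lambda)=\frac{2}{(1+\lambda^2)^2}$ and $\|f\|^2_{L^2(\mathbb{R}_+)}=\frac14$, so the Plancherel identity can be checked numerically:

```python
import math

def Uf(lam):
    # closed-form distorted Fourier transform of f(x) = x e^{-x}:
    # int_0^inf x e^{-x} sin(lam x)/lam dx = 2 / (1 + lam^2)^2
    return 2.0 / (1.0 + lam * lam) ** 2

# ||f||^2 = int_0^inf x^2 e^{-2x} dx = Gamma(3) / 2^3 = 1/4
lhs = 0.25

# ||U f||^2 in L^2(rho) with rho(dlam) = (2/pi) lam^2 dlam, by composite Simpson's rule;
# the integrand decays like lam^{-6}, so the tail beyond lam = 60 is negligible,
# and the endpoint contributions (lam = 0 and lam = 60) vanish to working precision
n, B = 20000, 60.0
h = B / n
acc = 0.0
for i in range(1, n):
    lam = i * h
    acc += (4 if i % 2 else 2) * Uf(lam) ** 2 * lam * lam
rhs = (2.0 / math.pi) * acc * h / 3.0

assert abs(lhs - rhs) < 1e-6
print("Plancherel identity:", lhs, "vs", rhs)
```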
Recall that Stone's formula is given for $\lambda^2 \in\mathbb{R}$ and $f \in C_0^{\infty}([0,\infty))$ as $$\begin{aligned} \lim_{\varepsilon\to0+} \frac{1}{\pi i}\int\limits_a^b \langle[R(\lambda^2+i\varepsilon)- R(\lambda^2-i\varepsilon) ]f,f \rangle\: \lambda \: d\lambda = \langle[E(a,b)+\frac12(E(\{a\})+E(\{b\}))] f,f \rangle\,,\end{aligned}$$ where $-\infty\leq a\leq b\leq \infty$, $E(\cdot)$ is the spectral resolution of $H_{\alpha}$, $R(z):=(H_{\alpha}-z)^{-1}$ is the resolvent operator, and we adopt the convention $E(\{\pm\infty\})=0$. The result then follows from the fact that $\rho(d\lambda)$ is recoverable via the weak-$*$ limit as $\varepsilon\rightarrow 0$ of $$\begin{aligned} \frac{\lambda }{\pi i}[m(\lambda^2 +i\varepsilon)-m(\lambda^2-i\varepsilon)]\:d\lambda\,. \end{aligned}$$ ## Proof of Theorem [Theorem 32](#dftThm){reference-type="ref" reference="dftThm"} {#proof-of-theorem-dftthm} This section closely follows [@GZ], which in turn relies on the idea of [@HS] to determine the spectral measure via Stone's formula. Let $\mathcal{L}_q$ be the half-line Schrödinger operator that is unitarily equivalent to $H_{0,q}$, which we recall is the restriction of the Coulomb Hamiltonian $H$ to the radial sector. In order to apply the above scheme to $\mathcal{L}_q$, we must first make sense of it as a self-adjoint operator. First, recall the following simple consequence of the Kato-Rellich theorem: **Lemma 34** (Theorem X.15 in [@RS]). *Suppose that $V:\mathbb{R}^3\rightarrow \mathbb{R}$ is equal to $V_1+V_2$ where $V_1\in L^2(\mathbb{R}^3)$ and $V_2\in L^\infty(\mathbb{R}^3)$. Then $-\Delta +V$ is essentially self-adjoint on $C_0^\infty(\mathbb{R}^3)$ and self-adjoint on $H^2(\mathbb{R}^3)$.* Clearly then our Hamiltonian $H$ is a self-adjoint operator with domain $H^2(\mathbb{R}^3)$. Recall that $\mathcal{L}_q$ is the half-line Schrödinger operator given by conjugating $H_{0,q}$ by $r$. 
Thus, it is automatically self-adjoint on the domain $r H^2_{\textrm{rad}}(\mathbb{R}^3)$ (where we regard functions in $H^2_{\textrm{rad}}(\mathbb{R}^3)$ as functions on $\mathbb{R}_+$). In particular, for $g\in \mathcal{D}(\mathcal{L})$ the function $\frac{g(r)}{r}$ is continuous at $r=0$. To compute the resolvent of $\mathcal{L}_q=\mathcal{L}$, first observe that a fundamental system of solutions to $\mathcal{L}f=-z^2f$ for $\Re z^2>0$ is given by the Whittaker functions [@NIST 13.14] $$\begin{aligned} M_{-\frac{q}{ 2 z}, \frac{1}{2}} ( 2 z r),\,\,W_{-\frac{q}{ 2 z}, \frac{1}{2}} ( 2 zr).\end{aligned}$$ These are solutions of Whittaker's equation $$\begin{aligned} W''(\omega)+\left(-\frac{1}{4}+\frac{\kappa}{\omega}+\frac{\frac{1}{4}-\mu^2}{\omega^2}\right)W(\omega)=0\end{aligned}$$ which is related to $\mathcal{L}f=-z^2f$ via $r = \frac{\omega}{2 z }$ for $\kappa=-\frac{q}{2z}$ and $\mu=\frac{1}{2}$. By [@NIST (13.14.6)], we have $$\begin{aligned} \label{WM} \frac{M_{-\frac{q}{ 2 z}, \frac{1}{2}} ( 2 z r)}{2z} =r e^{-rz} \Big[1+\sum_{n=1}^\infty \frac{(q+2z)(q/2+2z)\cdots (q/n+2z)}{(n+1)!} r^n \Big] \end{aligned}$$ and thus $\phi (z,r) := (2 z )^{-1} M_{-\frac{q}{ 2 z}, \frac{1}{2}} ( 2 z r)$ is the unique solution satisfying the boundary condition for $\mathcal{D}(\mathcal{L})$, normalized so that $\phi'(z,0)=1$. It is real analytic for $z^2\leq 0$. Furthermore, by [@NIST (13.14.26)] we compute the Wronskian as $$W[ \phi(z,r ), W_{-\frac{q}{ 2 z},\frac{1}{2}} ( 2zr )] = W[M_{-\frac{q}{ 2 z}, \frac{1}{2}} ( \cdot ), W_{-\frac{q}{ 2 z},\frac{1}{2}} ( \cdot )] = - \frac{1}{\Gamma(1+\frac{q}{2z})} = -\frac{2z}{q\Gamma(q/(2z))}.$$ We therefore set $\psi (z,r) := -\frac{q}{2z} \Gamma( q/(2z)) W_{-\frac{q}{ 2 z},\frac{1}{2}} ( 2 z r)$ to ensure that $W[\phi,\psi]=1$. This gives us the following representation of the resolvent of $\mathcal{L}$: **Proposition 35**.
*For $\Re z>0$, the resolvent kernel of $\mathcal{L}$ is given by $$\begin{aligned} (\mathcal{L}+z^2)^{-1}(r,s)=\begin{cases}\phi(z,r)\psi(z,s)\,,&0<r\leq s\\ \psi(z,r)\phi(z,s)\,,&0<s\leq r \end{cases}\,. \end{aligned}$$* *Proof.* This follows from the form of the resolvent of a Sturm-Liouville operator and the fact that $\phi$ is the unique solution satisfying the boundary condition of $\mathcal{D}(\mathcal{L})$. ◻ In particular, if we let $-z^2 = \sigma^2 \pm i\varepsilon$ for $\sigma^2 \geq0$, then we obtain $$\begin{aligned} (\mathcal{L}-(\sigma^2 \pm i0))^{-1}(r,s) = \begin{cases}\phi( \pm i \sigma ,r) \psi (\pm i\sigma,s)\,, & 0<r\leq s \\ \psi(\pm i\sigma, r) \phi(\pm i\sigma,s )\,, & 0<s\leq r \end{cases} \,.\end{aligned}$$ These considerations suggest that, as in the classical theory, $\phi$ should give the distorted Fourier basis. Moreover, the limiting forms of the Whittaker functions [@NIST 13.14.20-1] show that $\psi$ is the only decaying solution at $\infty$, i.e., it is the Weyl solution. To proceed, we need to define the $\theta$ function to determine the spectral measure $\rho$. Due to the strong singularity of $V$ at zero, we will not be able to pick the $\theta$ function as in [\[phitheta\]](#phitheta){reference-type="eqref" reference="phitheta"}. In [@GZ], Gesztesy and Zinchenko proved that if $V$ is real valued and $H$ is in the limit point case at both end points then the Weyl-m function exists provided that $Hf = zf$ has a solution $\tilde{\phi}(z,x)$ in $O$, an open neighborhood of $\mathbb R$, that is (a) analytic for $x \in(0,\infty)$ and $z \in O$, (b) real valued for $x \in(0,\infty)$ and $z \in \mathbb R$, and (c) in $L^2$ around $x=0$ for $z \in \mathbb{C} \backslash \mathbb{R}$ with sufficiently small $|\Im z|$; see Hypothesis 3.1 of [@GZ]. They further showed that if the singularity point of $V$ and the end point of $H$ agree, then the Weyl-m function is a scalar function of $z$. We note that $\phi(z,r)$ satisfies (a), (b), and (c).
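As a numerical sanity check (illustrative real parameter values, not part of the argument) that $\phi(z,r)=(2z)^{-1}M_{-\frac{q}{2z},\frac12}(2zr)$ indeed solves $-u''+\frac{q}{r}u=-z^2u$: for $\mu=\tfrac12$ the Whittaker function reduces to $M_{\kappa,\frac12}(\omega)=\omega e^{-\omega/2}\,{}_1F_1(1-\kappa;2;\omega)$, so a truncated Kummer series suffices. The sample values $q=1$, $z=0.7$ below are arbitrary:

```python
import math

def kummer_1f1(a, b, x, terms=80):
    # truncated Kummer series: 1F1(a; b; x) = sum_n (a)_n / (b)_n * x^n / n!
    total, term = 1.0, 1.0
    for n in range(terms):
        term *= (a + n) / (b + n) * x / (n + 1)
        total += term
    return total

q, z = 1.0, 0.7          # illustrative parameter values
kappa = -q / (2.0 * z)

def u(r):
    # phi(z, r) = (2z)^{-1} M_{-q/(2z), 1/2}(2 z r) = r e^{-z r} 1F1(1 - kappa; 2; 2 z r)
    return r * math.exp(-z * r) * kummer_1f1(1.0 - kappa, 2.0, 2.0 * z * r)

h = 1e-4
for r in [0.5, 1.0, 1.5]:
    upp = (u(r + h) - 2.0 * u(r) + u(r - h)) / (h * h)   # second difference ~ u''(r)
    assert abs(upp - (q / r + z * z) * u(r)) < 1e-5      # -u'' + (q/r) u = -z^2 u
print("phi(z, r) solves the Coulomb ODE numerically")
```

The same series also reproduces the normalization $\phi'(z,0)=1$, since $u(r)=r+O(r^2)$ as $r\to0$.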
We follow an argument similar to the one used in [@GZ] and find the fundamental system of solutions to $\mathcal{L} f = -z^2f$ at a reference point $x_0=1$. We take the first solution as $\phi(z,r)$ and pick a $\theta(z,r)$ such that $W(\phi(z,r), \theta(z,r))=1$, which we are free to do by Picard iteration from this point. Then, we must have $$\begin{aligned} \psi(z,r) = \theta(z,r) + m(z) \phi(z,r), \,\,\,\,\ m(z) = W(\theta(z,r), \psi(z,r)). \end{aligned}$$ As in the proof of Proposition [Proposition 35](#dftPr){reference-type="ref" reference="dftPr"}, we use Stone's formula to obtain $$\begin{aligned} \label{ftkernel} [e^{it \mathcal{L} } f](r)= \int\limits_0^{\infty}\int\limits_0^{\infty} e^{it \sigma^2} \phi(i\sigma, r) \phi(i\sigma,s) f(s) \:\rho(d\sigma) \:ds, \,\,\,\,\ \frac{d\rho}{d\sigma}= \frac{2 \sigma}{\pi} \Im ( m(i \sigma)). \end{aligned}$$ Here, we have used that $\sigma(\mathcal{L})= \sigma_{\textrm{ac}}(\mathcal{L})=[0,\infty)$ since $\mathcal{L}$ is a positive operator and for any $\sigma \in \mathbb R$ we have $\phi(\pm i\sigma,r), \psi(\pm i \sigma, r) \in L_{\delta}^{2} \backslash L^2$ for $\delta>\frac 12$, where $L^{2}_{\delta}:=\{ f : (1+r^2)^{-\frac{\delta}2} f \in L^2\}$. Note that $L^2_{-\delta}$, being the dual space of $L^2_{\delta}$, is dense in $L^2$. Finally, let us determine the density of $\rho$.
Note that $\theta(z,r)$ has to be real analytic for $z=i \sigma$; therefore, we must have $\theta(i \sigma,r) = \Re( \psi(i\sigma ,r)) + b(\sigma) \phi(i\sigma,r)$ for some real valued $b(\sigma)$ as $$W[ \theta(i\sigma,\cdot), \phi(i\sigma,\cdot)]= \Re( W[ \psi(i\sigma, \cdot ), \phi(i\sigma,\cdot)]) =1 \,.$$ Hence, we compute $$\begin{aligned} m(i \sigma ) & = W(\theta(i\sigma,\cdot), \psi(i\sigma, \cdot) ) \\ &= 2^{-1} W[ \psi(i\sigma,\cdot) +\overline{\psi(i \sigma,\cdot)}, \psi(i\sigma,\cdot)] + b(\sigma) \nonumber\\ & = \Im( \psi (i\sigma,\cdot) \overline{\psi^{\prime}(i\sigma,\cdot)}) + b(\sigma) .\nonumber \end{aligned}$$ Moreover, we have as $r \to 0$, $$\psi(i \sigma ,r) = -1 + c(i \sigma)r - r \log r + O(r^{2-})$$ where $\Im(c(i \sigma)) =\sigma -q[\frac{\pi}{2} + \Im (\psi^{(0)}( 1- iq/(2 \sigma)))]$ and $\psi^{(0)}(z)$ is the digamma function. Therefore, $\Im( m(i \sigma))= \Im(c(i \sigma))$ and using $\Im(\psi^{(0)}( 1+ i y)) = -(2y)^{-1} + \frac{\pi}{2} \coth (\pi y)$, see [@NIST (5.7.5)], we obtain $d \rho(\sigma) = 2q \sigma [e^{\frac{q\pi}{\sigma}} -1]^{-1}\, d\sigma$, and inserting this into [\[ftkernel\]](#ftkernel){reference-type="eqref" reference="ftkernel"} gives $$\begin{aligned} \label{finalform} [e^{it \mathcal{L} } f](r) &= -\frac{q}{2} \int\limits_0^{\infty}\int\limits_0^{\infty} e^{it \sigma^2} M_{\frac{iq}{2\sigma},\frac 12}(2 i\sigma r) M_{\frac{iq}{2\sigma},\frac 12}(2i \sigma s) f(s) \sigma^{-1} [e^{\frac{q \pi}{\sigma}} -1]^{-1} \: d\sigma \: ds \\ & = -\frac{q}{2}\int\limits_0^{\infty}\int\limits_0^{\infty} e^{itq^2 \sigma^2} M_{\frac{i}{2\sigma},\frac 12}(2 iq\sigma r) M_{\frac{i}{2\sigma},\frac 12}(2i q\sigma s) f(s) \sigma^{-1} [e^{\frac{\pi}{\sigma}} -1]^{-1} \: d\sigma \: ds \nonumber\,. \end{aligned}$$ Using the fact that $\mathcal{L}= r H_{0,q}r^{-1}$, we obtain [\[evolrep\]](#evolrep){reference-type="eqref" reference="evolrep"}.
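The identity $\Im(\psi^{(0)}(1+iy))=-\frac{1}{2y}+\frac{\pi}{2}\coth(\pi y)$ used in the last step can be checked directly: from the series $\psi^{(0)}(1+w)=-\gamma+\sum_{n\ge1}\big(\frac1n-\frac1{n+w}\big)$ one gets $\Im(\psi^{(0)}(1+iy))=\sum_{n\ge1}\frac{y}{n^2+y^2}$, and the sum can be compared numerically against the closed form (midpoint tail correction; sample values of $y$ are arbitrary):

```python
import math

def im_digamma_1_plus_iy(y, N=100000):
    # Im psi^(0)(1 + iy) = sum_{n >= 1} y / (n^2 + y^2)  (partial fraction series)
    s = sum(y / (n * n + y * y) for n in range(1, N + 1))
    # midpoint approximation of the tail: sum_{n > N} ~ int_{N + 1/2}^inf y/(x^2 + y^2) dx
    return s + math.pi / 2.0 - math.atan((N + 0.5) / y)

for y in [0.3, 0.8, 2.0]:
    closed_form = -1.0 / (2.0 * y) + (math.pi / 2.0) / math.tanh(math.pi * y)
    assert abs(im_digamma_1_plus_iy(y) - closed_form) < 1e-9
print("Im psi(1 + iy) identity confirmed numerically")
```

With $y=q/(2\sigma)$, this is exactly the computation that turns $\Im(c(i\sigma))$ into the Bose-type factor $[e^{q\pi/\sigma}-1]^{-1}$ in the spectral density.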
arxiv_math
{ "id": "2309.01313", "title": "$L^1\\rightarrow L^\\infty$ Dispersive estimates for Coulomb waves", "authors": "Adam Black, Ebru Toprak, Bruno Vergara Biggio and Jiahua Zou", "categories": "math.AP", "license": "http://creativecommons.org/licenses/by-nc-nd/4.0/" }
--- abstract: | This manuscript investigates the classical problem of determining conditions on the parameters $\alpha,\beta \in \mathbb{C}$ for which the integral transform $$C_{\alpha\beta}[\varphi](z):=\int_{0}^{z} \bigg(\frac{\varphi(\zeta)}{\zeta (1-\zeta)^{\beta}}\bigg)^\alpha\,d\zeta$$ is also univalent in the unit disk, where $\varphi$ is a normalized univalent function. Additionally, whenever $\varphi$ belongs to some subclasses of the class of univalent functions, the univalence features of the harmonic mappings corresponding to $C_{\alpha\beta}[\varphi]$ and its rotations are derived. As applications to our primary findings, a few non-trivial univalent harmonic mappings are also provided. The primary tools employed in this manuscript are Becker's univalence criteria and the shear construction developed by Clunie and Sheil-Small. author: - title: Univalence of horizontal shear of Cesàro type transforms --- Integral transform; Shear construction; Harmonic univalent mappings; Starlike functions; Convex functions; Close-to-convex functions # Introduction {#IntroductionSection} Let $\mathcal{A}$ denote the class of all analytic functions $\varphi$ in the open unit disk $\mathbb{D}=\lbrace z\in \mathbb{C}:\, |z|<1\rbrace$ with the normalization $\varphi(0)=0$ and $\varphi'(0)=1$. The subclass $\mathcal{S}$ of $\mathcal{A}$ consists of all univalent functions in $\mathbb{D}$. A function $\varphi\in\mathcal{A}$ is said to be *starlike of order $\delta$*, $0\le \delta<1$, if it satisfies ${\rm Re}\,[z\varphi'(z)/\varphi(z)]>\delta$ for all $z\in\mathbb{D}$, and is said to be *convex* if ${\rm Re}\,[1+z\varphi''(z)/\varphi'(z)]>0$ for all $z\in\mathbb{D}$. The subclass of $\mathcal{S}$ made up of starlike functions of order $\delta$ is denoted by the symbol $\mathcal{S}^*(\delta)$. It should be noted that a function $\varphi$ is referred to as *starlike* if it is a member of $\mathcal{S}^*(0)=:\mathcal{S}^*$. 
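These analytic conditions are easy to probe numerically. As a quick illustrative check of ours (not part of the original discussion), the following Python sketch samples the starlikeness functional ${\rm Re}\,[z\varphi'(z)/\varphi(z)]$ for the Koebe function $\varphi(z)=z/(1-z)^2$ on a polar grid inside $\mathbb{D}$:

```python
import cmath

def koebe(z):
    # Koebe function k(z) = z/(1-z)^2, the extremal example in S
    return z / (1 - z)**2

def koebe_prime(z):
    return (1 + z) / (1 - z)**3

# sample the starlikeness functional Re[z k'(z)/k(z)] on a polar grid
vals = []
for r in (0.1, 0.5, 0.9, 0.99):
    for k in range(1, 360):
        z = r * cmath.exp(1j * k * cmath.pi / 180)
        vals.append((z * koebe_prime(z) / koebe(z)).real)

min_val = min(vals)   # stays positive: the Koebe function is starlike
```

Indeed, here $z\varphi'(z)/\varphi(z)=(1+z)/(1-z)$, whose real part equals $(1-|z|^2)/|1-z|^2>0$, so the Koebe function is starlike of order $0$, consistent with the definitions above.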
We designate the class of convex univalent functions by $\mathcal{K}$. A function $\varphi\in\mathcal{A}$ is known as *close-to-convex* if and only if $\int_{\theta_1}^{\theta_2}{\rm Re}\,[1+z\varphi''(z)/\varphi'(z)]\,d\theta>-\pi,~z=re^{i\theta}$, for each $r\in(0,1)$ and for each pair of real numbers $\theta_1,\theta_2$ with $\theta_1<\theta_2$. The class of close-to-convex functions is denoted by $\mathcal{CC}$. It is well-known that $\mathcal{K}\subsetneq \mathcal{S}^*\subsetneq \mathcal{CC}\subsetneq \mathcal{S}$. The traditional Alexander Theorem, which asserts that $\varphi\in \mathcal{S^{*}}$ if and only if $J[\varphi]\in \mathcal{K}$, where the Alexander transform $J[\varphi]$ of $\varphi\in \mathcal{A}$ is defined by $$J[\varphi](z)=\int_{0}^{z}\frac{\varphi(\zeta)}{\zeta}\, d\zeta,$$ provides an important relationship between the classes $\mathcal{S^{*}}$ and $\mathcal{K}$. According to [@Dur83 §8.4], if $\varphi\in\mathcal{S}$, then $J[\varphi]$ is not always in $\mathcal{S}$. This provides impetus to research the preserving properties of the Alexander and related transforms of classical classes of univalent functions; see for instance [@KS20] and references therein. The Alexander transform was initially generalized to the following form (see [@Cau67; @Cau71; @MW71; @Nun69]) in order to investigate the univalence characteristics of the integral transforms of the aforementioned kind: $$J_{\alpha}[\varphi](z)=\int_{0}^{z}\Big(\frac{\varphi(\zeta)}{\zeta}\Big)^{\alpha}\, d\zeta, \quad \alpha\in\mathbb{C}.$$ Note that $J_{1}[\varphi]=J[\varphi]$ and $J_{\alpha}[\varphi]=(I_{\alpha} \circ J)[\varphi]$, where $I_{\alpha}[\varphi]$ is the Hornich scalar multiplication operator of a *locally univalent function* $\varphi$ (i.e.
$\varphi'(z)\neq 0$) in $\mathbb{D}$ defined by $$I_{\alpha}[\varphi](z)=(\alpha \star \varphi(z))=\int_{0}^{z} \lbrace \varphi'(\zeta) \rbrace^{\alpha}\, d\zeta.$$ The operator $J_{\alpha}[\varphi]$ was later considered by Kim and Merkes [@KM72], and they showed that $J_{\alpha}(\mathcal{S})\subset \mathcal{S}$ for $|\alpha|\leq 1/4$. Further, the complete range of $\alpha$ for $J_{\alpha}(\mathcal{S})\subset \mathcal{S}$ was found by Aksent'ev and Nezhmetdinov [@AN87]. For the univalence of the operator $I_\alpha[\varphi]$, the ranges of $\alpha$ are obtained in [@Pfa75; @Roy65] whenever $\varphi$ is an analytic univalent function. Moreover, for the meromorphic univalent functions $\varphi$, conditions on $\alpha$ are obtained in [@NP03] for which $I_\alpha[\varphi]$ is also meromorphic univalent. Readers can also see the work of Ponnusamy and Singh [@PS96] for the univalence properties of the transforms $I_\alpha[\varphi]$ and $J_\alpha[\varphi]$ when $\varphi$ varies over other classical subclasses of $\mathcal{S}$. It is worth noting that the univalence of the transforms $I_\alpha[\varphi]$ and $J_\alpha[\varphi]$ generates numerous examples of integral transforms which are indeed univalent. In addition to the significance of the Alexander transform in the context of univalency, the Cesàro transform of $\varphi\in\mathcal{A}$, which is defined by $$C[\varphi](z)=\int_{0}^{z} \frac{\varphi(\zeta)}{\zeta (1-\zeta)}\, d\zeta,$$ has also been taken into account (see [@HM74]). It is worth recalling that if $\varphi\in\mathcal{S}$ then $C[\varphi]$ may not be in $\mathcal{S}$, see [@HM74 Theorem 3]. Furthermore, in view of [@HM74 p. 424], the Koebe function illustrates that starlike functions need not be preserved by the Cesàro transform. However, it is proved that the transform $C[\varphi]$ preserves the class $\mathcal{K}$; see [@HM74 Theorem 1].
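For concreteness, the Cesàro transform can be evaluated numerically. The sketch below (an illustration of ours, not from the cited works) approximates $C[\varphi](z)$ by a midpoint rule along the segment $[0,z]$ and checks it against the closed form $-\log(1-z)$ obtained for the identity $\varphi(z)=z$:

```python
import cmath

def cesaro(phi, z, n=20000):
    # midpoint rule along [0, z] for C[phi](z) = ∫ phi(ζ)/(ζ(1-ζ)) dζ
    total = 0j
    for k in range(n):
        zeta = (k + 0.5) / n * z
        total += phi(zeta) / (zeta * (1 - zeta))
    return total * z / n

z0 = 0.4 + 0.3j
approx = cesaro(lambda w: w, z0)   # phi(z) = z gives C[phi](z) = -log(1-z)
exact = -cmath.log(1 - z0)
err = abs(approx - exact)
```

The integrand is analytic on the segment, so the midpoint rule converges at rate $O(n^{-2})$ and `err` is far below the test tolerance.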
This fact encourages us to investigate the univalence properties of a generalised integral transform that incorporates both the Alexander and Cesàro transforms, which is defined by $$\label{Eq1.1P1} C_{\alpha\beta}[\varphi](z)=J_\alpha[\varphi]\oplus I_{\alpha\beta}[\chi]=\int_{0}^{z} \Big(\frac{\varphi(\zeta)}{\zeta (1-\zeta)^{\beta}}\Big)^{\alpha}\, d\zeta, \quad \alpha,\beta\in \mathbb{C},$$ where $\chi(z)=-\log(1-z)$ with a suitable branch. Here, $\oplus$ denotes the Hornich addition operator defined by $$(\varphi\oplus \psi)(z)=\int_0^z \varphi'(\zeta)\,\psi'(\zeta)\,d\zeta$$ between $\varphi,\psi\in \mathcal{A}$ with $\varphi'(z)\neq 0$ and $\psi'(z)\neq 0$. It is important to note that the operator $C_{\alpha\beta}[\varphi]$ is equivalent to the form having the integrand $(\varphi(\zeta)/\zeta)^\alpha(1-\zeta)^{-\delta}$ for some $\delta\in\mathbb{C}$. In our case, $\delta=\alpha\beta$. We write $C_\alpha[\varphi]:=C_{\alpha 1}[\varphi]$. Consequently, it should be noticed that $C_{11}[\varphi]=C_1[\varphi]=C[\varphi]$, $C_{\alpha 0}[\varphi]=J_\alpha[\varphi]$, and $C_{\alpha\beta}[\varphi]=(I_{\alpha}\circ C_{\beta})[\varphi]$, where $$C_{\beta}[\varphi](z)=\int_{0}^{z} \frac{\varphi(\zeta)}{\zeta (1-\zeta)^{\beta}}\, d\zeta, \quad \beta\in\mathbb{C}.$$ While $\varphi$ varies over specific subclasses of $\mathcal{S}$, the analytic and geometric properties of $C_\beta[\varphi]$ have been explored in [@KS20-1; @KS20; @PSS19]. The major objective of this manuscript is to deepen our understanding of the univalence of Cesàro type integral transforms of analytic functions to the harmonic setting. Let $\mathbb{H}$ denote the class of all harmonic mappings $f=h+\overline{g}$ in $\mathbb{D}$ with the normalization $h(0)=g(0)=0$ and $h'(0)=1$. Here, the functions $h$ and $g$ are called the *analytic* and the *co-analytic* parts of $f$, respectively. 
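The Hornich decomposition above can be checked at the level of derivatives: $(C_{\alpha\beta}[\varphi])' = (J_\alpha[\varphi])'\cdot(I_{\alpha\beta}[\chi])'$ with $\chi'(z)=1/(1-z)$. A quick numerical sanity check of ours (the sample point, parameters, and test input are arbitrary illustrative choices; principal-branch powers agree here because the arguments involved stay small):

```python
# Sample point and parameters are arbitrary illustrative choices.
alpha, beta = 0.2, 0.5
z = 0.2 + 0.1j
phi = z / (1 - z)**2                        # Koebe function as test input

# derivative of C_{alpha,beta}[phi] versus the Hornich factorization
lhs = (phi / (z * (1 - z)**beta))**alpha    # (C_{alpha beta}[phi])'(z)
rhs = (phi / z)**alpha * ((1 - z)**-1)**(alpha * beta)   # (J_alpha[phi])' * (I_{alpha beta}[chi])'
diff = abs(lhs - rhs)
```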
The notations $$\mathcal{S}_{\mathbb{H}}=\{f\in \mathbb{H}:\,\mbox{ $f$ is univalent in $\mathbb{D}$}\} ~\mbox{ and }~ \mathcal{CC}_{\mathbb{H}}=\{f\in \mathbb{H}:\,\mbox{ $f$ is close-to-convex in $\mathbb{D}$}\},$$ respectively, represent the class of harmonic univalent and harmonic close-to-convex mappings in $\mathbb{D}$. Here, $f\in \mathbb{H}$ is called a close-to-convex function if $f(\mathbb{D})$ is a close-to-convex domain [@CS84]. Note that $\mathcal{CC}_{\mathbb{H}}\subsetneq \mathcal{S}_{\mathbb{H}}$. Now we recall that a complex-valued harmonic mapping $f=h+\overline{g}$ defined on a simply connected domain $\Omega$ is called *locally univalent* if the Jacobian of $f$ defined by $J_{f}=|h'|^2-|g'|^2$ is non-vanishing. Further, it is called sense-preserving if $J_{f}>0$, or equivalently, the second complex dilatation $\omega=g'/h'$ has the property that $|\omega(z)|<1$ in $\Omega$, see [@Lew36]. In this context, $f=h+\overline{g}$ is called the *horizontal shear* of $h-g=:\varphi$ with its dilatation $\omega=g'/h'$. To this end, one can use the method of shear construction as a tool to construct univalent harmonic mappings that are convex in one direction. A domain is said to be convex in the horizontal direction (CHD) if its intersection with each horizontal line is connected (or empty). A function $\varphi$ defined on $D$ is said to be *convex in the horizontal direction* (CHD) if $\varphi(D)$ is convex in the horizontal direction. The following algorithm describes the horizontal shear construction for $f=h+\overline{g}$: **Algorithm for horizontal shear construction.** 1. choosing a conformal mapping $\varphi$ which is convex in the horizontal direction; 2. choosing a dilatation $\omega$; 3. computing $h$ and $g$ by solving the system of equations $h-g=:\varphi,~\omega=g'/h'$; 4. constructing the harmonic mapping $f=h+\overline{g}$.
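The steps above can be sketched numerically. Solving the system in step 3 gives $h'=\varphi'/(1-\omega)$ and $g'=\omega h'$; the sketch below (a hypothetical illustration of ours, with the classical choices $\varphi(z)=z/(1-z)$ and dilatation $\omega(z)=z$) integrates these along $[0,z]$ and confirms that the shear reproduces $h-g=\varphi$:

```python
def shear(phi_prime, omega, z, n=4000):
    """Steps 2-4: h' = phi'/(1 - omega), g' = omega*h', integrated along [0, z]."""
    h = g = 0j
    for k in range(n):
        zeta = (k + 0.5) / n * z            # midpoint rule on the segment [0, z]
        hp = phi_prime(zeta) / (1 - omega(zeta))
        h += hp
        g += omega(zeta) * hp
    return h * z / n, g * z / n

z0 = 0.3 + 0.2j
h, g = shear(lambda w: 1 / (1 - w)**2,      # phi(z) = z/(1-z), a CHD conformal map
             lambda w: w, z0)               # dilatation omega(z) = z
residual = abs((h - g) - z0 / (1 - z0))     # the shear reproduces h - g = phi
```

The harmonic mapping is then $f=h+\overline{g}$, whose dilatation is $g'/h'=\omega$ by construction.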
Clunie and Sheil-Small first introduced this approach in [@CS84], and it was subsequently used by others (see for instance, [@Dur04 Section 3.4, p. 36] and [@PQR14]). Geometrically, a given locally univalent analytic function is sheared (i.e. stretched and translated) along parallel lines to produce a harmonic mapping onto a domain convex in one direction. In our discussion, we use this algorithm to take into account harmonic mappings that correspond to the integral transform $C_{\alpha\beta}$ and its rotation with some dilatation depending upon $\alpha$ and $\beta$. We now recall that Bravo et al. [@BHV17] extended Ahlfors' univalence criterion [@Ahl74] to the harmonic case in order to study the univalence of $I_\alpha[\varphi]$ for complex-valued harmonic mappings. In fact, in [@ABHSV20], a new approach was initiated to extend the problem of univalence of $I_\alpha[\varphi]$ and $J_\alpha[\varphi]$ to the case of harmonic mappings using the method of shear construction [@CS84]. Neither of these two works, however, covers the Cesàro integral transform or its generalization, in either the analytic or the harmonic context. This is the primary justification for our consideration of the integral transform $C_{\alpha\beta}[\varphi]$ to broaden the issues researched in [@ABHSV20]. Indeed, in order to have additional information that incorporates the discoveries from [@ABHSV20], we present a general approach for addressing such issues. Moreover, this generates a number of integral transforms of functions that are harmonic and univalent. # Preliminaries {#PreliminariesSection} In this section we collect basic definitions and some well-known results which are used in the subsequent sections. The harmonic Schwarzian and pre-Schwarzian derivatives for sense-preserving harmonic mappings $f=h+\overline{g}$ are investigated in detail by Hernández and Martin in [@HM15].
Further applications of harmonic Schwarzian and pre-Schwarzian derivatives for sense-preserving harmonic mappings can be found in [@HM13; @HM15-1] and, more recently, [@BHPV22] includes such investigations on logharmonic mappings. Note that the pre-Schwarzian derivative of a sense-preserving harmonic mapping $f=h+\overline{g}$ is defined by $$\label{Eq2.1P1} P_{f}=\frac{h''}{h'}-\frac{\overline{\omega}\omega'}{1-|\omega|^2} =\frac{\partial}{\partial z} \log(J_{f}).$$ If $f$ is analytic (i.e. $g\equiv 0$) then $P_{f}=h''/h'$, which is nothing but the classical pre-Schwarzian derivative of $f=h$. Moreover, the authors of [@HM15] demonstrated that, given a sense-preserving harmonic mapping $f$, $P_{f+\overline{af}}=P_{f}$ for $a\in \mathbb{D}$, and they established an extension of Becker's criterion of univalence. **Lemma A.** *Let $f=h+\overline{g}$ be a sense-preserving harmonic mapping in the unit disk $\mathbb{D}$ with dilatation $\omega$. If for all $z\in \mathbb{D}$ $$(1-|z|^{2})|zP_{f}(z)|+\frac{|z\omega^{'}(z)|(1-|z|^{2})}{1-|\omega(z)|^{2}}\leq 1,$$ then $f$ is univalent. The constant $1$ is the best possible bound.* Similar types of univalence criteria for harmonic mappings can be found in [@ANS16]. Similar to the case of analytic univalent functions, the notion of pre-Schwarzian derivatives is also used to obtain certain necessary and sufficient conditions for harmonic univalent functions; see [@LP19] and Lemma A respectively. Moreover, in 2016, Graf obtained certain bounds of the pre-Schwarzian and Schwarzian derivatives in terms of the order of linear and affine invariant families of sense-preserving harmonic mappings of the unit disk; see [@Gra16]. It is also noteworthy that for the class of uniformly locally univalent harmonic mappings, the authors of [@LP18] provided a relationship between its pre-Schwarzian norm and uniformly hyperbolic radius, and also characterized uniformly locally univalent sense-preserving harmonic mappings in multiple ways.
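The identity $P_f=\frac{\partial}{\partial z}\log(J_f)$ in [\[Eq2.1P1\]](#Eq2.1P1){reference-type="eqref" reference="Eq2.1P1"} can be checked on a concrete example (an illustration of ours, not taken from [@HM15]): for $h(z)=z$ and $g(z)=z^2/2$, the dilatation is $\omega(z)=z$, $J_f=1-|z|^2$, and the formula gives $P_f(z)=-\overline{z}/(1-|z|^2)$. Comparing against a finite-difference Wirtinger derivative:

```python
import math

def logJ(x, y):
    # J_f = |h'|^2 - |g'|^2 = 1 - |z|^2 for h(z) = z, g(z) = z^2/2
    return math.log(1 - (x * x + y * y))

x, y, eps = 0.3, 0.2, 1e-6
dx = (logJ(x + eps, y) - logJ(x - eps, y)) / (2 * eps)
dy = (logJ(x, y + eps) - logJ(x, y - eps)) / (2 * eps)
wirtinger = (dx - 1j * dy) / 2              # ∂/∂z = (∂/∂x - i ∂/∂y)/2

z = complex(x, y)
P_f = -z.conjugate() / (1 - abs(z)**2)      # h''/h' = 0, only the dilatation term survives
err = abs(wirtinger - P_f)
```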
It is also important to study sufficient conditions for close-to-convexity which also generate more univalent functions. In this direction, the following useful result is quoted from [@BJJ13 Theorem 4]: **Lemma B.** *Let $f=h+\overline{g}$ be a harmonic mapping in $\mathbb{D}$, with $h'(0)\neq 0$ and $${\rm Re}\,\bigg[1+\frac{zh''(z)}{h'(z)}\bigg]> c$$ for some $c$ with $-1/2 <c \leq 0$, for all $z\in \mathbb{D}$. If the dilatation $\omega(z)$ satisfies the condition $|\omega(z)|< \cos(\pi c)$ for $z\in \mathbb{D}$, then $f$ is close-to-convex in $\mathbb{D}$.* One can note that $\omega(z)\to 0$ whenever $c \to (-1/2)^+$. Therefore, the case $c=-1/2$ was studied separately by Bharanedhar and Ponnusamy [@BP14]. This was initially a conjecture by Mocanu (see [@Moc11 p. 764]) which was later settled in [@BL11] for the case $\theta =0$. The authors of [@MP16; @PK15] further provided some general sufficient conditions for a sense-preserving harmonic mapping to be close-to-convex. Next we deal with certain necessary conditions for univalency of functions belonging to a linear invariant family (LIF) of analytic functions. A family $\mathcal{L}$ of normalized locally univalent functions is called LIF, if for any function $\varphi\in \mathcal{L}$, we have $$\frac{(\varphi\circ \varphi_{a})(z)-\varphi(a)}{(1-|a|^{2})\varphi'(a)} \in \mathcal{L},$$ for each automorphism $\varphi_{a}(z)=(z+a)/(1+\overline{a}z)$ of $\mathbb{D}$. The concept of LIF was introduced by Pommerenke in 1964 (see [@Pom64]) and since then it has been widely studied in different contexts including harmonic mappings of the single and several complex variables, see for example [@Dur04; @GK03]. The quantity $$\gamma:=\sup\{|a_2(\varphi)|:\,\varphi(z)\in \mathcal{L}\}$$ is what determines the *order of a family* $\mathcal{L}$, where $a_2(\varphi)$ is the second Taylor coefficient of $\varphi(z)$.
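Returning to Lemma B, its hypotheses are straightforward to verify numerically for concrete data. The sketch below uses our own illustrative choices (not from [@BJJ13]): $h(z)=-\log(1-z)$, for which $1+zh''(z)/h'(z)=1/(1-z)$ has real part greater than $1/2$ on $\mathbb{D}$, together with $c=-1/4$ and dilatation $\omega(z)=0.7z$:

```python
import cmath, math

c = -0.25
cos_bound = math.cos(math.pi * c)           # cos(-pi/4), about 0.7071

# h(z) = -log(1-z): 1 + z h''(z)/h'(z) = 1/(1-z); sample its real part on a grid
re_vals = []
for r in (0.5, 0.9, 0.99):
    for k in range(360):
        z = r * cmath.exp(1j * k * math.pi / 180)
        re_vals.append((1 / (1 - z)).real)

hyp_curvature = min(re_vals) > c            # first hypothesis of Lemma B
hyp_dilatation = 0.7 < cos_bound            # |omega(z)| = 0.7|z| < 0.7 < cos(pi*c)
```

Both hypotheses hold, so by Lemma B any harmonic mapping $f=h+\overline{g}$ with this analytic part and this dilatation is close-to-convex in $\mathbb{D}$.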
Let $\mathcal{L}(\gamma)$ be a linear invariant family of analytic functions in $\mathbb{D}$ of order $\gamma$, $\gamma\ge 1$ (see [@CCP71; @Pom64]). Since $|a_2(\varphi)|\le 2$ for a function $\varphi\in\mathcal{S}$, it is evident that $\mathcal{S}=\mathcal{L}(2)$. In connection with the order of LIF, the following lemma, recently proved in [@ABHSV20 Lemma 3], is used in this manuscript. **Lemma C.** *For each univalent function $\varphi \in \mathcal{L}(\gamma)$, $1\le \gamma < \infty$, we have $$(1-|z|^2)\Big|\frac{z\varphi'(z)}{\varphi(z)}\Big| \leq 2\gamma$$ for all $z\in \mathbb{D}$.* Next we focus on the concept of stable harmonic univalent functions defined as follows. For this, we frequently use the notation $\mathbb{T}$ to denote the unit circle $|z|=1$. A sense-preserving harmonic mapping $f = h + \overline{g}$ is called *stable harmonic univalent* (resp. *stable harmonic close-to-convex*) in $\mathbb{D}$ if all the mappings $f_\lambda= h + \lambda \overline{g}$, $\lambda\in\mathbb{T}$, are univalent (resp. close-to-convex) in $\mathbb{D}$. We use the notations $\mathcal{SHU}$ and $\mathcal{SHCC}$ to denote the class of stable harmonic univalent functions and the class of stable harmonic close-to-convex functions, respectively. Note that the following inclusion relations are well-known: $$\mathcal{SHU}\subsetneq \mathcal{S}_{\mathbb{H}},~ \mathcal{SHCC}\subsetneq \mathcal{CC}_{\mathbb{H}},$$ and also as discussed in [@HM13-1] we have $$\mathcal{SHCC} \subsetneq \mathcal{SHU}.$$ Surprisingly, the authors of [@HM13-1] provided the following useful characterization for a stable harmonic mapping. **Lemma D.** *A function $f=h+\overline{g}$ belongs to $\mathcal{SHU}$ (resp. $\mathcal{SHCC}$) if and only if for all $\lambda\in\mathbb{T}$, the analytic function $h+\lambda g$ is univalent (resp.
close-to-convex).* # Univalence properties This section is devoted to the problem of studying the univalence of the integral transform $C_{\alpha\beta}[\varphi]$ whenever $\varphi$ belongs to certain subclasses of the class $\mathcal{S}$. In addition, we also aim to extend the problem of univalence of $C_{\alpha\beta}[\varphi]$ to the setting of harmonic mappings in the plane. For this purpose, we use the method of shear construction as noted in Section 1. Throughout this paper we consider $\alpha, \beta \in \mathbb{C}$ unless otherwise specified. The first result of this section obtains conditions on $\alpha$ and $\beta$ for which $C_{\alpha\beta}[\varphi]$ is univalent in $\mathbb{D}$ whenever $\varphi\in\mathcal{S}$. **Theorem 1**. *If $\varphi \in \mathcal{S}$, then $C_{\alpha \beta}[\varphi]$ is contained in $\mathcal{S}$ for $|\alpha|\leq {1}/{[2(2+|\beta|)]}$.* *Proof.* By the definition of $C_{\alpha\beta}[\varphi]$, logarithmic differentiation followed by the triangle inequality leads to $$(1-|z|^2)\bigg|\frac{z(C_{\alpha \beta}[\varphi])''(z)} {(C_{\alpha \beta}[\varphi])'(z)}\bigg| \leq (1-|z|^2) |\alpha|\left( \bigg|\frac{z\varphi'(z)}{\varphi(z)}-1\bigg| +\bigg|\frac{\beta z}{1-z}\bigg|\right).$$ If $\varphi \in \mathcal{S}$, then Theorem 9 of [@Goo83 p. 69] gives that $\Big|\cfrac{z\varphi'(z)}{\varphi(z)}-1\Big|\leq 2/(1-|z|)$ and so it follows that $$(1-|z|^2)\bigg|\frac{z(C_{\alpha \beta}[\varphi])''(z)} {(C_{\alpha \beta}[\varphi])'(z)}\bigg| \leq |\alpha| \Big(2(1+|z|)+|\beta|(1+|z|)\Big)< 2|\alpha|(2+|\beta|).$$ Now, by the Becker criterion [@Bec72] for the univalence of an analytic function (see also [@Pom75 Theorem 6.7, p. 172] and [@GK03 Theorem 3.3.1, p. 130]), $C_{\alpha\beta}[\varphi]$ is univalent in $\mathbb{D}$ provided $2|\alpha|(2+|\beta|)\leq 1$ and hence the result follows. ◻ *Remark 1*.
We believe that the bound for $\alpha$ in Theorem [Theorem 1](#thm3.1P1){reference-type="ref" reference="thm3.1P1"} may be improved further; however, for $\alpha, \beta$ satisfying $|\alpha|(2+|\beta|)\geq 2$, we ensure the existence of a function $\varphi \in \mathcal{S}$ such that $C_{\alpha \beta}[\varphi]\notin \mathcal{S}$. This can be seen by considering the Koebe function $\varphi(z)=z/(1-z)^2,\, z\in \mathbb{D}$. Indeed, the corresponding integral transform $$C_{\alpha \beta}[\varphi](z)=\int_{0}^{z} (1-\zeta)^{-\alpha(2+\beta)}\, d\zeta$$ is trivially not univalent for $-\alpha(2+\beta)=2$. *Remark 2*. For the choice $\beta=0$, Theorem [Theorem 1](#thm3.1P1){reference-type="ref" reference="thm3.1P1"} is equivalent to [@KM72 Theorem 3]. As a consequence of Theorem [Theorem 1](#thm3.1P1){reference-type="ref" reference="thm3.1P1"}, one may generate a number of integral transforms that are indeed univalent. Our next purpose is to construct harmonic mappings corresponding to the integral transforms $C_{\alpha\beta}$ through shear construction. From the algorithm described in Section 1, we need to show that $C_{\alpha\beta}$ is CHD. **Definition 2**. A domain $D \subset \mathbb{C}$ is called *convex in the direction* $\theta \,(0\leq \theta <\pi)$ if every line parallel to the line through $0$ and $e^{i\theta}$ has a connected or empty intersection with $D$. A univalent harmonic mapping $f$ in $D$ is said to be *convex in the direction* $\theta$ if $f(D)$ is convex in the direction $\theta$. The case $\theta=0$ corresponds to CHD. **Theorem 3**.
*If $\varphi \in \mathcal{S^*(\delta)}$, then $C_{\alpha \beta}[\varphi]$ is convex in one direction in $\mathbb{D}$ for all $\alpha,\beta\geq 0$ satisfying $\alpha(\beta+2(1-\delta))\leq 3$.* *Proof.* By the definition of $C_{\alpha\beta}[\varphi]$, we have $$\begin{aligned} 1+{\rm Re}\,\left[\frac{z(C_{\alpha\beta}[\varphi])''(z)} {(C_{\alpha\beta}[\varphi])'(z)}\right] & = 1+\alpha {\rm Re}\,\left[\frac{z\varphi'(z)}{\varphi(z)}-1+\frac{\beta z}{1-z}\right]\\ & > 1-\alpha +\alpha \delta-\alpha \beta/2\ge -1/2, \end{aligned}$$ where the last inequality holds by our assumption $\alpha(\beta+2(1-\delta))\leq 3$. Therefore, by using [@UME52 Theorem 1], one can conclude that $C_{\alpha \beta}[\varphi]$ is convex in one direction in $\mathbb{D}$. ◻ The following result characterizes a function to be CHD. **Lemma E** ([@RZ76 Theorem 1]). *Let $\varphi$ be a non-constant analytic function in $\mathbb{D}$. The function $\varphi$ is CHD if and only if there are numbers $\mu$ and $\nu$, $0\leq \mu <2\pi$ and $0\leq \nu \leq \pi$, such that $${\rm Re}\{e^{i\mu}(1-2ze^{-i\mu}\cos\nu+z^2 e^{-2i\mu})\varphi'(z)\}\geq 0, \quad z\in \mathbb{D}.$$* *Remark 3*. By Theorem [Theorem 3](#thm3.4P1){reference-type="ref" reference="thm3.4P1"} we learn that the operator $C_{\alpha \beta}[\varphi]$ need not be CHD under the same assumptions. However, for all $\alpha,\beta\geq 0$ satisfying $\alpha(\beta+2(1-\delta))\leq 3$, the rotation $C^\theta_{\alpha \beta}[\varphi](z):=e^{-i\theta}C_{\alpha \beta}[\varphi](e^{i\theta}z)$ of $C_{\alpha \beta}[\varphi](z)$ will be CHD for a suitable choice of $\theta$ whenever $\varphi \in \mathcal{S}^*(\delta)$. In particular, we write $J^{\theta}_{\alpha}[\varphi](z):=e^{-i\theta}J_{\alpha }[\varphi](e^{i\theta}z)$ and $C^{\theta}_{\alpha}[\varphi](z):=e^{-i\theta}C_{\alpha}[\varphi](e^{i\theta}z)$. 
For instance, we here present an integral operator that is convex in one direction, but not in horizontal direction, which becomes CHD with a suitable rotation. For the function $\varphi(z)=z/(1-z^2)$, one can show that by Theorem [Theorem 3](#thm3.4P1){reference-type="ref" reference="thm3.4P1"}, the integral transform $J_{3/2}[\varphi](z)=\int_{0}^{z} (1-\zeta^2)^{-3/2} \, d\zeta$ is convex in one direction. At this moment we do not have any analytical proof for $J_{3/2}[\varphi](z)$ to be non-CHD; however the Mathematica graphics tool confirms it (see Figure [2](#Fig!AP!){reference-type="ref" reference="Fig!AP!"}). As a result, we now show that the rotation operator $J^{\pi/4}_{3/2}[\varphi](z)$ is CHD. ![The images $J_{3/2}[\varphi](\mathbb{D})$ and $J^{\pi/4}_{3/2}[\varphi](\mathbb{D})$ for $\varphi(z)=z(1-z^2)^{-1}$](Ex1aP1.eps "fig:"){#Fig!AP! width="5cm" height="5cm"} non-CHD $J_{3/2}[\varphi](\mathbb{D})$ ![The images $J_{3/2}[\varphi](\mathbb{D})$ and $J^{\pi/4}_{3/2}[\varphi](\mathbb{D})$ for $\varphi(z)=z(1-z^2)^{-1}$](Ex1bP1.eps "fig:"){#Fig!AP! width="6cm" height="5cm"} CHD $J^{\pi/4}_{3/2}[\varphi](\mathbb{D})$ Lemma E, for the choices $\mu=\pi/4,\nu=\pi/2$, leads us in proving $${\rm Re}\{e^{i\pi/4}(1-iz^2)(J^{\pi/4}_{3/2}[\varphi])'\}={\rm Re}\{(1-iz^2)^{-1/2} \}>0.$$ This is equivalent to proving $|\arg (1-iz^2)^{-1/2}|<\pi/2$. For this, consider $$k(z)=\int_{0}^{z} (1-i\zeta^2)^{-1}\, d\zeta$$ and we obtain $$1+{\rm Re}\bigg[\frac{z k''(z)}{k'(z)}\bigg]=1+2{\rm Re}\bigg[\frac{iz^2}{1-iz^2}\bigg]>0.$$ This shows that $k(z)$ is a convex function and therefore, one can obtain $$|\arg (1-iz^2)^{-1/2}|=1/2\cdot|\arg (1-iz^2)^{-1}|<\pi/2.$$ Therefore, $J^{\pi/4}_{3/2}[\varphi](\mathbb{D})$ is CHD. We now define the corresponding harmonic mapping $F^\theta_{\alpha\beta}$ of the integral transform $C^\theta_{\alpha\beta}[\varphi]$ by using the shear construction algorithm as stated in Section 1. 
Theorem [Theorem 3](#thm3.4P1){reference-type="ref" reference="thm3.4P1"} and Remark [Remark 3](#remark3.5P1){reference-type="ref" reference="remark3.5P1"} justify the validity of the following definition: **Definition 4**. Let $\alpha, \beta\ge 0$ and $\alpha(\beta+2(1-\delta))\leq 3$. Then we define $F^\theta_{\alpha\beta}(z)=H(z)+\overline{G(z)}$, with the usual normalization $H(0)=G(0)=0, H'(0)=1$ and $G'(0)=0$, as a *horizontal shear* of $C^{\theta}_{\alpha\beta}[\varphi](z)=H(z)-G(z)$ having its dilatation $w_{\alpha\beta}(z)=\alpha(1+\beta)w(z)$ for some analytic function $w(z)$ satisfying $|w(z)|<1$. Note that one can choose $w$ in such a way that the condition $|w_{\alpha\beta}(z)|<1$ is satisfied. In particular, we also use the notations $\mathcal{F}^\theta_\alpha$ and $\mathcal{G}^\theta_\alpha$ for the horizontal shears of $C^\theta_{\alpha}[\varphi]$ and $J^\theta_\alpha[\varphi]$ with their dilatations $w_{\alpha1}$ and $w_{\alpha0}$, respectively. One can take $F^\theta_{\alpha\beta}=H+\overline{G}$ as a vertical shear of the analytic function $C^\theta_{\alpha\beta}[\varphi]=H+G$ for some $\theta~(0\leq \theta <\pi)$ with the same normalization. However, this small change in the sign produces a serious structural difference (see [@Dur04 Section 3.4, p. 40]). Next, we provide a counterexample to the statement that $F^\theta_{11}\in \mathcal{S}_{\mathbb{H}}$, a horizontal shear of $C^\theta[\varphi]$, while $\varphi$ ranges over the class $\mathcal{S}^*(\delta),~0\le \delta< 1$. This motivates us to study the univalence property of $F^\theta_{\alpha\beta}$ under certain restrictions on the parameters $\alpha$ and $\beta$. We begin our investigation with the counterexample followed by the main results. **Example 5**. For $\lambda\in\mathbb{T}$, consider a locally univalent analytic function $\varPhi_{\lambda,\theta}=H+\lambda{G}$ in $\mathbb{D}$.
Now $F^\theta_{11}=H+\overline{G}$ is a well-defined sense-preserving harmonic mapping, a horizontal shear of $C^\theta[\varphi]=H-G$, with its dilatation $w_{11}=G'/H'$. For our counterexample, we take $\varphi(z)=z/(1-z)^2$ with $\theta=0$ and $w(z)=z/2$. For any $\lambda\in\mathbb{T}$, it is easy to see that the function $\varPhi_{\lambda, 0}=H+\lambda{G}$ satisfies $$\nonumber \varPhi'_{\lambda, 0}(z)=H'(z)\cdot [1+\lambda \,w_{11}(z)]=(C^0_{11}[\varphi])'(z) \cdot \frac{1+\lambda z}{1-z}.$$ Thus, for all $z\in \mathbb{D}$ and for all $\lambda\in\mathbb{T}$, we compute $$(1-|z|^2)\bigg|\frac{\varPhi''_{\lambda, 0}(z)}{\varPhi'_{\lambda, 0}(z)}\bigg|=(1-|z|^2)\bigg|\frac{4}{1-z}+\frac{\lambda}{1+\lambda z}\bigg|.$$ By choosing $z=1/2$ and $\lambda=1$, we notice that $$\sup_{z\in \mathbb{D}}(1-|z|^2)\bigg|\frac{\varPhi''_{\lambda, 0}(z)}{\varPhi'_{\lambda, 0}(z)}\bigg|\geq \frac{26}{4}>6,$$ which contradicts the well-known univalence criterion (an immediate consequence of [@Dur83 Theorem 2.4]). Therefore, $\varPhi_{1,0}=H+ G$ is not univalent. It follows by Lemma D that $F^\theta_{11}\notin \mathcal{S}_{\mathbb{H}}$. A graph illustrating the non-univalence of $F^\theta_{11}$ for $\varphi(z)=z/(1-z)^2$ is also shown in Figure [3](#Fig2P1){reference-type="ref" reference="Fig2P1"}. ![Image of $\mathbb{D}$ under $F_{11}$](Fig2.eps){#Fig2P1 width="6.5cm"} In what follows, our first main result provides conditions on $\alpha$ and $\beta$ for which $F^\theta_{\alpha\beta}$, with its dilatation $w_{\alpha\beta}$, is univalent whenever $\varphi$ is a starlike function of order $\delta,~0\le \delta<1$. For this purpose, we use the idea of linearly connected domains. **Theorem 6**. *Let $\varphi\in\mathcal{S}^*(\delta)$, and $F^\theta_{\alpha\beta}=H+\overline{G}$ be a sense-preserving harmonic mapping in $\mathbb{D}$ with dilatation $w_{\alpha\beta}$.
Then for all non-negative parameters $\alpha,\beta$ such that $\alpha(\beta+2(1-\delta))\leq 2$ with $\alpha(1+\beta) \|w\| <1/3$, the corresponding $F^\theta_{\alpha \beta}$ is univalent in $\mathbb{D}$.* *Proof.* Let $F^\theta_{\alpha \beta}=H+\overline{G}$ be a sense-preserving harmonic mapping, which is a horizontal shear of $C^\theta_{\alpha\beta}[\varphi]$. We have $$\begin{aligned} 1+{\rm Re}\,\left[\frac{z(C^\theta_{\alpha\beta}[\varphi])''(z)} {(C^\theta_{\alpha\beta}[\varphi])'(z)}\right] & = 1+\alpha {\rm Re}\,\left[\frac{z e^{i\theta}\varphi'(ze^{i\theta})}{\varphi(ze^{i\theta})}-1+\frac{\beta ze^{i\theta}}{1-ze^{i\theta}}\right]\\ & = 1+\alpha {\rm Re}\,\left[\frac{\zeta \varphi'(\zeta)}{\varphi(\zeta)}-1+\frac{\beta \zeta}{1-\zeta}\right], \quad \zeta=e^{i\theta}z\\ & > 1-\alpha +\alpha \delta-\alpha \beta/2\ge 0, \end{aligned}$$ where the last inequality holds by our assumption. Therefore, $C^\theta_{\alpha\beta}[\varphi]$ is a convex function and so $C^\theta_{\alpha\beta}[\varphi](\mathbb{D})$ is a $1$-linearly connected domain; see for instance [@CH07; @Pom92]. Using Lemma 7 of [@ABHSV20], we conclude that $F^\theta_{\alpha \beta}$ is univalent for $\alpha(1+\beta) \|w\|<1/3$. ◻ *Remark 4*. Since $\mathcal{K}\subset \mathcal{S}^*(1/2)$, Theorem [Theorem 6](#Thm3.7P1){reference-type="ref" reference="Thm3.7P1"} is also valid whenever $\varphi$ is a convex function. We have a couple of immediate consequences of Theorem [Theorem 6](#Thm3.7P1){reference-type="ref" reference="Thm3.7P1"} which give the univalency of $\mathcal{G}^\theta_\alpha$ and $\mathcal{F}^\theta_\alpha$. **Corollary 7**. *Let $\varphi\in\mathcal{K}$, and $\mathcal{G}^\theta_{\alpha}=H+\overline{G}$ be a horizontal shear of $J^\theta_{\alpha}[\varphi]$ with dilatation $w_{\alpha 0}$ in $\mathbb{D}$. Then for all $\alpha\in [0,2]$ with $\alpha\|w\|<1/3$, the mapping $\mathcal{G}^\theta_{\alpha}$ is univalent in $\mathbb{D}$.* **Corollary 8**. 
*Let $\varphi\in\mathcal{K}$, and $\mathcal{F}^\theta_{\alpha}=H+\overline{G}$ be a horizontal shear of $C^\theta_{\alpha}[\varphi]$ with dilatation $w_{\alpha 1}$ in $\mathbb{D}$. Then for all $\alpha\in [0,1]$ with $\alpha\|w\|<1/6$, the mapping $\mathcal{F}^\theta_{\alpha}$ is univalent in $\mathbb{D}$.* Next we focus on the univalence of $F^\theta_{\alpha\beta}$ in terms of harmonic pre-Schwarzian derivative, where Lemma A plays a crucial role. For this, a simplified version of the pre-Schwarzian derivative of $F^\theta_{\alpha\beta}$ is required. Indeed, by using [\[Eq2.1P1\]](#Eq2.1P1){reference-type="eqref" reference="Eq2.1P1"}, a direct calculation shows that the pre-Schwarzian derivative of $F^\theta_{\alpha\beta}$ is obtained as $$\begin{aligned} \label{Eq3.1P1} P_{F^\theta_{\alpha \beta}}(z) & =\alpha \Big[\frac{e^{i\theta}\varphi'(ze^{i\theta})}{\varphi(ze^{i\theta})} -\frac{1}{z}+\frac{\beta e^{i\theta}}{1-e^{i\theta}z}\\ \nonumber & \hspace*{0.9cm}+(1+\beta) w'(z)\Big(\frac{1-\overline{\alpha(1+\beta) w(z)}} {(1-\alpha(1+\beta) w(z))(1-|\alpha(1+\beta)|^2|w(z)|^2)}\Big)\Big]. \end{aligned}$$ For the sake of convenience, we define the following notation. Using the classical Schwarz-Pick lemma, we observe that $$\label{Eq3.2P1} \| w^{*} \|= \sup_{z\in \mathbb{D}} \frac{|w^{'}(z)|(1-|z|^{2})}{1-|w(z)|^{2}}\leq 1,$$ where $\Vert w^{*} \Vert$ is called the *hyperbolic norm* of $w(z)$. Thus, we have **Theorem 9**. *Let $F^\theta_{\alpha\beta}=H+\overline{G}$ be a sense-preserving harmonic mapping in $\mathbb{D}$ with dilatation $w_{\alpha\beta}$. If $\varphi\in \mathcal{L}(\gamma)$, then* 1. *for $\beta\geq 1$, $F^\theta_{\alpha \beta}\in \mathcal{S}_{\mathbb{H}}$ for all non-negative values of $\alpha$ satisfying $$\label{Eq3.3P1} \alpha\leq \frac{1}{2\gamma+2\beta+(1+\beta)\,\Vert w^{*} \Vert(1+\Vert w \Vert)}.$$* 2. *for $0\leq \beta<1$, two cases arise.* 1.
*If $(\beta+2(1+\beta)\,\Vert w^{*} \Vert (1+\Vert w \Vert))\le 2(1-\beta)$, then $F^\theta_{\alpha \beta}\in \mathcal{S}_{\mathbb{H}}$ for all non-negative values of $\alpha$ satisfying $$\label{Eq3.4P1} \alpha\leq \frac{4(1-\beta)}{\Big[4(2\gamma+1)(1-\beta)+(\beta+(1+\beta)\,\| w^{*} \|(1+\|w\|))^2+4(1-\beta^2)\| w^{*} \|\Big]}.$$* 2. *If $(\beta+2(1+\beta)\,\Vert w^{*} \Vert (1+\Vert w \Vert))> 2(1-\beta)$, then $F^\theta_{\alpha \beta}\in \mathcal{S}_{\mathbb{H}}$ for all non-negative values of $\alpha$ satisfying the inequality [\[Eq3.3P1\]](#Eq3.3P1){reference-type="eqref" reference="Eq3.3P1"}.* *Proof.* Note that, by Lemma C and [\[Eq3.1P1\]](#Eq3.1P1){reference-type="eqref" reference="Eq3.1P1"}, for all $z\in \mathbb{D}$ we estimate $$\begin{aligned} (1-|z|^{2})|z P_{F^\theta_{\alpha \beta}}(z)| & =(1-|z|^{2})\alpha\left|\frac{z e^{i\theta}\varphi'(z e^{i\theta})}{\varphi(z e^{i\theta})}-1 +\frac{\beta z e^{i\theta}}{1-z e^{i\theta}}\right.\\ &\hspace*{3cm} \left.+\frac{z(1+\beta)w'(z)(1-\overline{\alpha(1+\beta) w(z)})} {(1-\alpha(1+\beta) w(z))(1-(\alpha(1+\beta))^2|w(z)|^2)}\right|\\ & \leq \alpha \left[(1-|z|^{2})\Big|\frac{ze^{i\theta}\varphi'(ze^{i\theta})}{\varphi(ze^{i\theta})}\Big|+1-|z|^2+\beta|z|(1+|z|)\right.\\ & \hspace*{5cm}\left.+\frac{(1-|z|^{2})(1+\beta)|w'(z)||z|}{1-(\alpha(1+\beta))^2|w(z)|^2}\right]\\ & \leq \alpha \left[2\gamma+1+(\beta-1)|z|^{2}+\left(\beta+(1+\beta)\Vert w^{*} \Vert\Vert w \Vert\right)|z|\right].\end{aligned}$$ To find the supremum of the right-hand expression, we consider two cases: 1. The case $\beta\geq 1$. In this case, the maximum value of the right-hand expression holds trivially for $|z|=1$. 
This implies that $$(1-|z|^{2})|z P_{F^\theta_{\alpha \beta}}(z)| \leq \alpha [2\gamma+2\beta+(1+\beta)\,\Vert w^{*} \Vert\Vert w \Vert].$$ Thus, we compute $$(1-|z|^{2})|z P_{F^\theta_{\alpha \beta}}(z)|+\frac{|z w_{\alpha\beta}^{'}(z)|(1-|z|^{2})}{1-|w_{\alpha\beta}(z)|^{2}}\leq \alpha [2\gamma+2\beta+(1+\beta)\,\Vert w^{*} \Vert(1+\Vert w \Vert)].$$ It follows from Lemma A that $F^\theta_{\alpha \beta}$ is univalent in $\mathbb{D}$ if $\alpha$ and $\beta$ satisfy the bound given in [\[Eq3.3P1\]](#Eq3.3P1){reference-type="eqref" reference="Eq3.3P1"}. 2. The case $\beta< 1$. Clearly, the maximum value of the right-hand expression is attained for $$|z|=\frac{1}{2(1-\beta)}(\beta+(1+\beta)\,\Vert w^{*} \Vert \Vert w \Vert).$$ The supremum quantity is discussed through two subcases, namely, 1. The subcase $(\beta+(1+\beta)\,\Vert w^{*} \Vert \Vert w \Vert) \le 2(1-\beta)$. For this case, we have $$(1-|z|^{2})|z P_{F^\theta_{\alpha \beta}}(z)| \leq \frac{\alpha}{4(1-\beta)}\Big[4(2\gamma+1)(1-\beta)+(\beta+(1+\beta)\,\| w^{*} \|(1+\|w\|))^2\Big],$$ and thus, $$\begin{aligned} &(1-|z|^{2})|z P_{F^\theta_{\alpha \beta}}(z)|+\frac{|z w_{\alpha\beta}^{'}(z)|(1-|z|^{2})}{1-|w_{\alpha\beta}(z)|^{2}}\\ &\hspace*{0.2cm} \leq \frac{\alpha}{4(1-\beta)}\Big[4(2\gamma+1)(1-\beta)+(\beta+(1+\beta)\,\| w^{*} \|(1+\|w\|))^2\\ &\hspace*{5cm} +4(1-\beta^2)\| w^{*} \|\Big]. \end{aligned}$$ Again using Lemma A, we conclude that $F^\theta_{\alpha \beta}$ is univalent in $\mathbb{D}$ whenever $\alpha$ satisfies the inequality [\[Eq3.4P1\]](#Eq3.4P1){reference-type="eqref" reference="Eq3.4P1"}. 2. The subcase $(\beta+2(1+\beta)\Vert w^{*} \Vert (1+\Vert w \Vert)) > 2(1-\beta)$. Trivially, the maximum value of the right-hand expression holds for $|z|=1$. Similarly, as an application of Lemma A, it then follows that $F^\theta_{\alpha \beta}$ is univalent in $\mathbb{D}$ whenever $\alpha$ and $\beta$ satisfy the inequality [\[Eq3.3P1\]](#Eq3.3P1){reference-type="eqref" reference="Eq3.3P1"}.
This completes the proof. ◻ This concludes our study of the univalence properties of $F^\theta_{\alpha\beta}$ for $\varphi$ belonging to specific subclasses of $\mathcal{S}$. # Stable harmonic univalence properties This section deals with the stable harmonic univalence properties of $F^\theta_{\alpha\beta}$. It is evident that $\mathcal{SHU}\subsetneq \mathcal{S}_{\mathbb{H}}$. As demonstrated in Example [Example 5](#Ex3.6P1){reference-type="ref" reference="Ex3.6P1"}, $F^\theta_{11}\not\in\mathcal{S}_{\mathbb{H}}$ and hence $F^\theta_{11}\not\in\mathcal{SHU}$. Therefore, it is also important to study the stable harmonic univalence properties of $F^\theta_{\alpha\beta}$. In fact, our findings show that the conditions on $\alpha$ and $\beta$ must be modified appropriately for $F^\theta_{\alpha\beta}\in\mathcal{SHU}$, compared with those obtained in the case of $F^\theta_{\alpha\beta}\in\mathcal{S}_{\mathbb{H}}$. Our first result determines conditions on $\alpha$ and $\beta$ for which $F^\theta_{\alpha\beta}\in\mathcal{SHU}$ whenever $\varphi\in\mathcal{S}^*(\delta)$. **Theorem 10**. *Let $F^\theta_{\alpha\beta}$ be a sense-preserving harmonic mapping in $\mathbb{D}$ with dilatation $w_{\alpha\beta}$. If $\varphi\in\mathcal{S}^*(\delta)$, then $F^\theta_{\alpha \beta}\in \mathcal{SHU}$ for all non-negative $\alpha,\beta$ satisfying $$\label{Eq4.1P1} \alpha\leq \frac{1}{2\Big(2+\beta+(1+\beta)\,\Vert w^{*} \Vert (1+\Vert w \Vert)\Big)}.$$* *Proof.* Since $\varphi\in\mathcal{S}^*(\delta)$, we have $\varphi(0)=0$, which justifies the local univalence of $C^\theta_{\alpha \beta}[\varphi]$ and so $F^\theta_{\alpha \beta}=H+\overline{G}$ is well-defined.
It is easy to see that for any $\lambda\in\mathbb{T}$, the function $\varPhi_{\lambda, \theta}=H+\lambda G$ satisfies $$\label{Eq4.2P1} \varPhi'_{\lambda, \theta}(z)=H'(z)\cdot [1+\lambda w_{\alpha \beta}(z)]=(C^\theta_{\alpha \beta}[\varphi])'(z) \cdot \frac{1+\lambda \alpha(1+\beta) \,w(z)}{1-\alpha(1+\beta) \,w(z)}.$$ Hence, for all $z\in \mathbb{D}$, we have $$\begin{aligned} \label{Eq4.3P1} (1-|z|^{2})\bigg|\frac{z\,\varPhi''_{\lambda, \theta}(z)}{\varPhi'_{\lambda, \theta}(z)}\bigg| =(1-|z|^{2})\alpha \bigg|\frac{z e^{i\theta}\varphi'(z e^{i\theta})}{\varphi(ze^{i\theta})}-1+\frac{\beta ze^{i\theta}} {1-ze^{i\theta}} &+\frac{\lambda(1+\beta) z \,w'(z)}{1+\lambda (1+\beta)\alpha \,w(z)}\\ &\nonumber +\frac{z (1+\beta)\,w'(z)}{1-\alpha(1+\beta) \,w(z)}\bigg|. \end{aligned}$$ Since $w(z)$ is a self-map of $\mathbb{D}$ and $|z\varphi'(z)/\varphi(z)-1| \leq 2/(1-|z|)$, by the classical distortion theorem for $\mathcal{S}$ and [\[Eq3.2P1\]](#Eq3.2P1){reference-type="eqref" reference="Eq3.2P1"}, we find $$\begin{aligned} (1-|z|^{2})\bigg|z\frac{\varPhi''_{\lambda, \theta}(z)}{\varPhi'_{\lambda, \theta}(z)}\bigg| & \leq \alpha\Big(2(1+|z|)+\beta(1+|z|)+2(1+\beta)\,\Vert w^{*} \Vert (1+\Vert w \Vert)|z|\Big)\\ & \le \alpha\Big(4+2\beta+2(1+\beta)\,\Vert w^{*} \Vert (1+\Vert w \Vert)\Big).\end{aligned}$$ It follows that $\varPhi_{\lambda, \theta}$ satisfies the Becker univalence criterion for all $\lambda \in \mathbb{T}$ (see [@Bec72] and also [@GK03 Theorem 3.3.1, p. 130]), whenever $\alpha,\beta$ are related by [\[Eq4.1P1\]](#Eq4.1P1){reference-type="eqref" reference="Eq4.1P1"}. Therefore, by Lemma D, $F^\theta_{\alpha \beta}$ belongs to the class $\mathcal{SHU}$ under the restriction given by [\[Eq4.1P1\]](#Eq4.1P1){reference-type="eqref" reference="Eq4.1P1"}. ◻ For the choice $\beta=1$, Theorem [Theorem 10](#Thm4.1P1){reference-type="ref" reference="Thm4.1P1"} produces the stable harmonic univalence of $\mathcal{F}^\theta_\alpha$ as follows: **Corollary 11**. 
*Let $\mathcal{F}^\theta_{\alpha}$ be a horizontal shear of $C^\theta_{\alpha}[\varphi]$ with dilatation $w_{\alpha 1}$ in $\mathbb{D}$. If $\varphi \in \mathcal{S}^*(\delta)$, then $\mathcal{F}^\theta_{\alpha}\in \mathcal{SHU}$ for all non-negative $\alpha$ satisfying $$\alpha\leq \frac{1}{2\Big(3+2\Vert w^{*} \Vert (1+\Vert w \Vert)\Big)}.$$* Similarly, for the choice $\beta=0$, Theorem [Theorem 10](#Thm4.1P1){reference-type="ref" reference="Thm4.1P1"} produces the well-known fact about the stable harmonic univalency of $\mathcal{G}^\theta_\alpha$ (see [@ABHSV20 Theorem 2]), for $\alpha\geq 0$, as follows: **Corollary 12**. *Let $\mathcal{G}^\theta_{\alpha}$ be a horizontal shear of $J^\theta_{\alpha}[\varphi]$ with dilatation $w_{\alpha 0}$ in $\mathbb{D}$. If $\varphi \in \mathcal{S}^*(\delta)$, then $\mathcal{G}^\theta_{\alpha}\in \mathcal{SHU}$ for all non-negative $\alpha$ satisfying $$\alpha\leq \frac{1}{2\Big(2+\Vert w^{*} \Vert (1+\Vert w \Vert)\Big)}.$$* Next we discuss the stable harmonic univalence of $F^\theta_{\alpha\beta}$ when $\varphi$ belongs to a class of linear invariant family. **Theorem 13**. *Let $\alpha\ge 0$ and $F^\theta_{\alpha\beta}$ be a sense-preserving harmonic mapping in $\mathbb{D}$ with dilatation $w_{\alpha\beta}$. If $\varphi\in\mathcal{L}(\gamma)$, $1\le \gamma<\infty$, then we have* 1. *For $\beta\geq 1$, $F^\theta_{\alpha \beta}\in \mathcal{SHU}$ for all values of $\alpha$ satisfying $$\label{Eq4.4P1} \alpha\leq \frac{1}{2\Big(\gamma+\beta+(1+\beta)\,\|w^{*}\| (1+\|w\|)\Big)}.$$* 2. *For $0\le \beta<1$, two cases arise.* 1. *If $\beta+2(1+\beta)\,\|w^{*}\| (1+\|w\|)\le 2(1-\beta)$, then $F^\theta_{\alpha \beta}\in \mathcal{SHU}$ for all values of $\alpha$ satisfying $$\label{Eq4.5P1} \alpha\leq \frac{4(1-\beta)}{4(2\gamma+1)(1-\beta)+(\beta+2(1+\beta)\,\| w^{*} \|(1+\|w\|))^2}.$$* 2. 
*If $\beta+2(1+\beta)\,\|w^{*} \|(1+\|w\|) > 2(1-\beta)$, then $F^\theta_{\alpha \beta}\in \mathcal{SHU}$ for values of $\alpha$ satisfying the inequality [\[Eq4.4P1\]](#Eq4.4P1){reference-type="eqref" reference="Eq4.4P1"}.* *Proof.* Using Lemma C and [\[Eq4.3P1\]](#Eq4.3P1){reference-type="eqref" reference="Eq4.3P1"}, we get $$\begin{aligned} (1-|z|^{2})\bigg|\frac{z\varPhi''_{\lambda, \theta}(z)}{\varPhi'_{\lambda, \theta}(z)}\bigg| & \leq \alpha\Big(2\gamma+1-|z|^{2}+\beta(1+|z|)|z|+2(1+\beta)\,\|w^{*}\| (1+\|w\|)|z|\Big)\\ & = \alpha\Big(2\gamma+1+(\beta-1)|z|^{2}+(\beta+2(1+\beta)\,\|w^{*}\|(1+\|w\|))|z|\Big).\end{aligned}$$ To find the supremum of the right-hand expression, we consider two cases: 1. The case $\beta\geq 1$. In this case, the maximum value of the right-hand expression holds trivially for $|z|=1$. Therefore, $\varPhi_{\lambda, \theta}$ satisfies the Becker univalence criterion for all $\lambda \in \mathbb{T}$ whenever $\alpha$ satisfies the inequality [\[Eq4.4P1\]](#Eq4.4P1){reference-type="eqref" reference="Eq4.4P1"}. 2. The case $0\le \beta< 1$. Clearly, the maximum value of the right-hand expression is attained for $$|z|=\frac{1}{2(1-\beta)}(\beta+2(1+\beta)\,\|w^{*}\|(1+\|w \|)).$$ The supremum quantity is discussed through two subcases, namely, 1. The subcase $\beta+2(1+\beta)\,\|w^{*}\|(1+\|w\|)\le 2(1-\beta)$. In this case, $\varPhi_{\lambda, \theta}$ satisfies the Becker univalence criterion for all $\lambda \in \mathbb{T}$ when $\alpha$ satisfies the inequality [\[Eq4.5P1\]](#Eq4.5P1){reference-type="eqref" reference="Eq4.5P1"}. 2. The subcase $\beta+2(1+\beta)\,\|w^{*}\|(1+\|w\|) > 2(1-\beta)$. Trivially, the maximum value of the right-hand expression holds for $|z|=1$. It follows that $\varPhi_{\lambda, \theta}$ satisfies the Becker univalence criterion for all $\lambda \in \mathbb{T}$ whenever $\alpha$ satisfies the inequality [\[Eq4.4P1\]](#Eq4.4P1){reference-type="eqref" reference="Eq4.4P1"}. This completes the proof. 
◻ Up to this point we have dealt with the stable harmonic univalence properties of $F^\theta_{\alpha\beta}$ whenever $\varphi$ is univalent. In the next section we examine the close-to-convexity of $F^\theta_{\alpha\beta}$ whenever $\varphi$ belongs to certain subclasses of $\mathcal{S}$. Additionally, we offer bounds on $\alpha$ and $\beta$ under which $F^\theta_{\alpha\beta}$ is close-to-convex. # Close-to-Convexity properties Recall that $\mathcal{CC}_{\mathbb{H}}\subsetneq \mathcal{S}_{\mathbb{H}}$. The function $F^\theta_{11}$ does not belong to $\mathcal{S}_{\mathbb{H}}$, as seen in Example [Example 5](#Ex3.6P1){reference-type="ref" reference="Ex3.6P1"}, and consequently $F^\theta_{11}\not\in\mathcal{CC}_{\mathbb{H}}$. It is therefore natural to study the close-to-convexity of $F^\theta_{\alpha\beta}$. In fact, our results show that the conditions on $\alpha$ and $\beta$ must be modified appropriately for $F^\theta_{\alpha\beta}\in\mathcal{CC}_{\mathbb{H}}$, compared with those obtained in the case of $F^\theta_{\alpha\beta}\in\mathcal{S}_{\mathbb{H}}$. Our first result in this section provides the conditions on $\alpha$ and $\beta$ under which $F^\theta_{\alpha\beta}\in \mathcal{CC}_{\mathbb{H}}$ whenever $\varphi\in \mathcal{S}^*(\delta)$. **Theorem 14**. *Let $F^\theta_{\alpha\beta}=H+\overline{G}$ be a sense-preserving harmonic mapping in $\mathbb{D}$ with dilatation $w_{\alpha\beta}$. If $\varphi\in \mathcal{S}^*(\delta)$ and $w(z)=\cos(\pi c) z/2$, for some $c$ with $-1/2< c<0$, then for all non-negative parameters $\alpha,\beta$ satisfying $\alpha(1+\beta)\leq 1$ with $\alpha(2(1-\delta)+\beta)\le -2c$, we have $F^\theta_{\alpha\beta}\in \mathcal{CC}_{\mathbb{H}}$.* *Proof.* Since $\varphi\in \mathcal{S}^*(\delta)$, by Definition [Definition 4](#Def3.5P1){reference-type="ref" reference="Def3.5P1"}, the harmonic mapping $F^\theta_{\alpha\beta}=H+\overline{G}$ is well-defined.
Clearly, for the given choice of $w(z)$, we have $$|w_{\alpha\beta}(z)|=\alpha(1+\beta) |w(z)|<\frac{\cos(\pi |c|)}{2}<\cos(\pi |c|).$$ Since $C^\theta_{\alpha\beta}[\varphi]=H-G$ satisfies $(C^\theta_{\alpha\beta}[\varphi])'(z)=H'(z)(1-w_{\alpha\beta}(z))$, for all $z\in \mathbb{D}$, it follows that $$\begin{aligned} 1+{\rm Re} \bigg[\frac{z H''(z)}{H'(z)}\bigg] & =1+\alpha{\rm Re}\bigg[\frac{ze^{i\theta}\varphi'(ze^{i\theta})}{\varphi(z e^{i\theta})}-1+\frac{\beta ze^{i\theta}}{1-ze^{i\theta}}\bigg]+{\rm Re}\bigg[\frac{z w_{\alpha\beta}'(z)}{1-w_{\alpha\beta}(z)}\bigg]\\ & =1+\alpha{\rm Re}\bigg[\frac{\zeta\varphi'(\zeta)}{\varphi(\zeta)}-1+\frac{\beta \zeta}{1-\zeta}\bigg]-{\rm Re}\bigg[\frac{-\alpha(1+\beta)zw'(z)}{1-\alpha(1+\beta)w(z)}\bigg]\\ & > 1+ \alpha\delta-\alpha-\alpha\beta/2-1 \geq c, \end{aligned}$$ with $\zeta=e^{i\theta} z$, where the last inequality follows since $\alpha(2(1-\delta)+\beta)\le -2c$. Therefore, by Lemma B, $F^\theta_{\alpha\beta}$ is a close-to-convex mapping. ◻ Recall that $\mathcal{K}\subset \mathcal{S}^*(1/2)$. Therefore, choosing $\beta=0$ in Theorem [Theorem 14](#Thm5.1P1){reference-type="ref" reference="Thm5.1P1"} yields the following close-to-convexity result for $\mathcal{G}^\theta_\alpha$. **Corollary 15**. *Let $\mathcal{G}^\theta_{\alpha}$ be a sense-preserving harmonic mapping in $\mathbb{D}$ with dilatation $w_{\alpha0}$. If $\varphi\in \mathcal{K}$ and $w(z)=\cos(\pi c) z/2$, for some $c$ with $-1/2< c<0$, then for all $\alpha \in [0,-2c]$, the mapping $\mathcal{G}^\theta_{\alpha}\in\mathcal{CC}_{\mathbb{H}}$.* In a similar fashion, if one chooses $\beta=1$ in Theorem [Theorem 14](#Thm5.1P1){reference-type="ref" reference="Thm5.1P1"}, then the close-to-convexity of $\mathcal{F}^\theta_\alpha$ follows. **Corollary 16**. *Let $\mathcal{F}^\theta_{\alpha}$ be a sense-preserving harmonic mapping in $\mathbb{D}$ with dilatation $w_{\alpha1}$.
If $\varphi\in \mathcal{K}$ and $w(z)=\cos(\pi c) z/2$, for some $c$ with $-1/2< c<0$, then for all $\alpha \in [0,-c]$, the mapping $\mathcal{F}^\theta_{\alpha}\in\mathcal{CC}_{\mathbb{H}}$.* The stable harmonic close-to-convexity of $F^\theta_{\alpha\beta}$, whenever $\varphi\in\mathcal{S}^*(\delta)$, is the subject of our next major finding. This relies on the following elementary lemma. Throughout the remainder of this section we choose $w(z)=z/2$. **Lemma 17**. *Let $F^\theta_{\alpha\beta}$ be a sense-preserving harmonic mapping in $\mathbb{D}$ with dilatation $w_{\alpha\beta}$. Then for all $\lambda\in\mathbb{T}$ and for all non-negative $\alpha,\beta$ with $\alpha(1+\beta)\leq 1$, we have $$\bigg| \arg \bigg(\frac{2+\lambda \alpha(1+\beta) z}{2-\alpha(1+\beta) z}\bigg)\bigg| \leq 2\arcsin(r\alpha(1+\beta)),$$ where $r=|z|<1$.* *Proof.* For any $\lambda\in\mathbb{T}$, the relation [\[Eq4.2P1\]](#Eq4.2P1){reference-type="eqref" reference="Eq4.2P1"} leads us to consider the integral $$I(z)=\int_{0}^{z}\frac{ \varPhi'_{\lambda, \theta}(\zeta)}{(C^\theta_{\alpha \beta}[\varphi])'(\zeta)}\, d\zeta =\int_{0}^{z}\frac{2+\lambda \alpha(1+\beta) \zeta}{2-\alpha(1+\beta) \zeta}\, d\zeta.$$ Hence, taking the logarithmic derivative of $I'(z)$, for all $z\in \mathbb{D}$ we obtain $$\label{Eq5.1P1} 1+{\rm Re} \bigg[\frac{z I''(z)}{I'(z)}\bigg]=1+{\rm Re}\bigg[\frac{z \lambda \alpha(1+\beta)}{2+\lambda \alpha(1+\beta) z}\bigg]-{\rm Re}\bigg[\frac{-z\alpha(1+\beta)}{2-\alpha(1+\beta) z}\bigg].$$ It follows that $${\rm Re}\bigg[\frac{z \lambda \alpha(1+\beta)}{2+\lambda \alpha(1+\beta) z}\bigg]=\frac{\partial}{\partial \theta}\{\arg (2+\lambda \alpha(1+\beta) re^{i\theta})\}, \quad z=re^{i\theta}.$$ Geometrically, since $2+\lambda \alpha(1+\beta)z$ is a Möbius transformation, it maps each circle $|z|=r<1$ onto another circle. It thus follows that $\arg (2+\lambda \alpha(1+\beta)z)$ increases as $z$ moves around the circle $|z|=r$ in the positive sense.
That is, $$\frac{\partial}{\partial \theta}\{\arg (2+\lambda \alpha(1+\beta) re^{i\theta})\}>0, \quad z=re^{i\theta}.$$ Equivalently, on the one hand, we have $${\rm Re}\bigg[\frac{z \lambda \alpha(1+\beta)}{2+\lambda \alpha(1+\beta)z}\bigg]>0.$$ On the other hand, one can easily see that $${\rm Re}\bigg[\frac{-z\alpha(1+\beta)}{2-\alpha(1+\beta)z}\bigg]\le \frac{1}{2-|z|}<1.$$ Thus, by [\[Eq5.1P1\]](#Eq5.1P1){reference-type="eqref" reference="Eq5.1P1"}, we obtain $$1+{\rm Re} \bigg[\frac{z I''(z)}{I'(z)}\bigg]>0,$$ leading to the convexity of $I(z)$ in $\mathbb{D}$. Now, the rotation theorem for convex functions [@Dur83 Page 103] yields $$|\arg (I'(z))|=\bigg| \arg \bigg(\frac{2+\lambda \alpha(1+\beta)z}{2-\alpha (1+\beta)z}\bigg)\bigg| \leq 2\arcsin(r\alpha(1+\beta)), \quad |z|=r<1,$$ completing the proof. ◻ **Theorem 18**. *Let $F^\theta_{\alpha\beta}$ be a sense-preserving harmonic mapping in $\mathbb{D}$ with dilatation $w_{\alpha\beta}$. If $\varphi\in\mathcal{S}^*(\delta)$ and $\alpha \in [0, 1/(\sqrt{2}(1+\beta))]$, then $F^\theta_{\alpha \beta}\in \mathcal{SHCC}$.* *Proof.* Let $\lambda \in \mathbb{T}$ be arbitrary. Consider $\varPhi_{\lambda, \theta}$ as defined in the proof of Theorem [Theorem 10](#Thm4.1P1){reference-type="ref" reference="Thm4.1P1"}.
For $0\leq t_{2}-t_{1}\leq 2\pi$ and $z=re^{it}$, we first compute $$\begin{aligned} \int_{t_1}^{t_2} {\rm Re}\,\bigg [1+\frac{z \varPhi''_{\lambda, \theta}(z)}{\varPhi'_{\lambda, \theta}(z)}\bigg] \, dt & = \int_{t_1}^{t_2} \bigg(1+ {\rm Re}\, \bigg[\frac{\alpha z e^{i\theta}\varphi'(z e^{i\theta})}{\varphi(ze^{i\theta})}-\alpha+\frac{\alpha \beta ze^{i\theta}}{1-ze^{i\theta}}\\ &\hspace*{2cm} +\frac{z\lambda \alpha(1+\beta)}{2+\lambda \alpha (1+\beta)z}+\frac{z\alpha(1+\beta)}{2-\alpha(1+\beta)z} \bigg]\,\bigg)\, dt\\ & > \Big(1+(\delta-1)\alpha-\frac{\alpha \beta}{2}\Big)(t_{2}-t_{1})\\ & \hspace*{1cm} +{\rm arg}\, \bigg(\frac{2+\lambda \alpha(1+\beta) re^{it_2}}{2+\lambda \alpha(1+\beta) re^{it_1}}\cdot \frac{2-\alpha (1+\beta)re^{it_1}}{2-\alpha (1+\beta)re^{it_2}} \bigg). \end{aligned}$$ Since $t_2-t_1\ge 0$, it follows that $$\begin{aligned} \int_{t_1}^{t_2} {\rm Re}\,\bigg [1+\frac{z\varPhi''_{\lambda, \theta}(z)}{\varPhi'_{\lambda, \theta}(z)}\bigg] \, dt & > {\rm arg}\, \bigg(\frac{2+\lambda \alpha (1+\beta) re^{it_2}}{2-\alpha(1+\beta) re^{it_2}}\bigg)\\ &\hspace*{2cm} +{\rm arg}\, \bigg( \frac{2-\alpha (1+\beta) re^{it_1}}{2+\lambda\alpha (1+\beta) re^{it_1}} \bigg)\\ & \geq -4\,\arcsin(r\alpha(1+\beta))>-4\,\arcsin(\alpha(1+\beta)), \end{aligned}$$ where the second inequality holds by Lemma [Lemma 17](#Lem5.4P1){reference-type="ref" reference="Lem5.4P1"}. Note that if $\arcsin(\alpha(1+\beta))\leq \pi/4$, or equivalently $0\leq \alpha(1+\beta)\leq 1/\sqrt{2}$, then we immediately obtain $$\int_{t_1}^{t_2} {\rm Re} \bigg[ 1+\frac{z\varPhi''_{\lambda, \theta}(z)}{\varPhi'_{\lambda, \theta}(z)}\bigg]\, dt > -\pi.$$ Hence $\varPhi_{\lambda, \theta}$ is a close-to-convex mapping in the unit disk. This completes the proof.
◻ As we recall $\mathcal{K}\subsetneq\mathcal{S}^*(1/2)$, Theorem [Theorem 18](#Thm5.5P1){reference-type="ref" reference="Thm5.5P1"} provides the following immediate consequences, respectively for $\beta=0$ and $\beta=1$: **Corollary 19**. *Let $\mathcal{G}^\theta_{\alpha}$ be a horizontal shear of $J^\theta_{\alpha}[\varphi]$ with dilatation $w_{\alpha 0}$ in $\mathbb{D}$. If $\varphi\in\mathcal{K}$, then for all $\alpha\in[0, 1/\sqrt{2}]$, we have $\mathcal{G}^\theta_{\alpha}\in\mathcal{SHCC}$.* **Corollary 20**. *Let $\mathcal{F}^\theta_{\alpha}$ be a horizontal shear of $C^\theta_{\alpha}[\varphi]$ with dilatation $w_{\alpha 1}$ in $\mathbb{D}$. If $\varphi\in\mathcal{K}$, then for all $\alpha\in [0, 1/(2\sqrt{2})]$, we have $\mathcal{F}^\theta_{\alpha}\in\mathcal{SHCC}$.* # Applications {#Sec6P1} As an application of Theorems [Theorem 6](#Thm3.7P1){reference-type="ref" reference="Thm3.7P1"}, [Theorem 10](#Thm4.1P1){reference-type="ref" reference="Thm4.1P1"}, [Theorem 14](#Thm5.1P1){reference-type="ref" reference="Thm5.1P1"} and [Theorem 18](#Thm5.5P1){reference-type="ref" reference="Thm5.5P1"}, in this section we construct harmonic univalent mappings $F^\theta_{\alpha\beta}$ for certain elementary choices of $\varphi$ and their dilatations. **Example 21**. Consider the non-constant analytic function $w(z)=-z$.
We choose $\varphi(z)=z/(1-z)\in \mathcal{S^{*}}$ in the definition of $C_{\alpha\beta}[\varphi]$ and obtain $$\label{Eq6.1P1} C^\theta_{\alpha\beta}[\varphi](z)=e^{-i\theta}\int_{0}^{z} \frac{1}{(1-e^{i\theta}\zeta)^{\alpha (\beta+1)}}\, d\zeta=e^{-2i\theta}\bigg(\frac{1-(1-e^{i\theta}z)^{1-\alpha(1+\beta)}}{1-\alpha(1+\beta)}\bigg).$$ By Remark [Remark 3](#remark3.5P1){reference-type="ref" reference="remark3.5P1"}, first we note that $C^\theta_{\alpha\beta}[\varphi]$ is CHD for some $\theta~ (0\leq \theta <\pi)$ and following the construction given in Section 3, we can construct $F^\theta_{\alpha\beta}=H+\overline{G}$, a horizontal shear of $C^\theta_{\alpha\beta}[\varphi]=H-G$ defined by [\[Eq6.1P1\]](#Eq6.1P1){reference-type="eqref" reference="Eq6.1P1"} with dilatation $w_{\alpha \beta}=-\alpha(1+\beta)z$. It leads to $$H-G=C^\theta_{\alpha\beta}[\varphi] ~~\mbox{ and }~~ \frac{G'(z)}{H'(z)}=-\alpha(1+\beta) z.$$ The second equation along with the differentiation of the first equation produces a system of equations in $H'$ and $G'$. An elementary calculation thus yields $$H(z)=e^{-i\theta}\int_{0}^{z} \frac{1}{(1-e^{i\theta}\zeta)^{\alpha(1+\beta)}(1+\alpha(1+\beta)\zeta)}\, d\zeta$$ and $$G(z)=e^{-i\theta}\int_{0}^{z} \frac{-\alpha(1+\beta)\zeta}{(1-e^{i\theta}\zeta)^{\alpha(1+\beta)}(1+\alpha(1+\beta)\zeta)}\, d\zeta$$ under the usual normalization $H(0)=G(0)=0$. Therefore, the harmonic mapping $$\begin{aligned} \label{Eq6.2P1} F^\theta_{\alpha\beta}(z)=H(z)+\overline{G(z)} \end{aligned}$$ maps the unit disk onto a domain convex in the horizontal direction. By using Theorem [Theorem 6](#Thm3.7P1){reference-type="ref" reference="Thm3.7P1"}, the mapping $F^\theta_{\alpha\beta}$ given by [\[Eq6.2P1\]](#Eq6.2P1){reference-type="eqref" reference="Eq6.2P1"} is univalent for all non-negative $\alpha,\beta$ satisfying $$\label{Eq6.3P1} \alpha(1+\beta)<1/3,$$ since in this case we have $\|w\|=1$. 
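The arithmetic behind the bound [\[Eq6.3P1\]](#Eq6.3P1){reference-type="eqref" reference="Eq6.3P1"} for concrete parameter pairs is easy to check directly. The following sketch is a supplementary numerical illustration, not part of the original argument; the helper name `univalence_condition` is ours. It evaluates the condition of Theorem [Theorem 6](#Thm3.7P1){reference-type="ref" reference="Thm3.7P1"} with $\|w\|=1$:

```python
# Illustrative check (not from the paper) of the univalence condition
# alpha*(1+beta)*||w|| < 1/3 from Theorem 6, specialised to w(z) = -z,
# for which ||w|| = 1.  The helper name is ours.

def univalence_condition(alpha, beta, w_norm=1.0):
    """True iff alpha*(1+beta)*||w|| < 1/3."""
    return alpha * (1 + beta) * w_norm < 1 / 3

# alpha = 1/5, beta = 1/2: alpha*(1+beta) = 3/10 < 1/3, bound satisfied.
print(univalence_condition(1/5, 1/2))  # True
# alpha = 1/2, beta = 1: alpha*(1+beta) = 1 >= 1/3, bound violated.
print(univalence_condition(1/2, 1))    # False
```

The two parameter pairs tested are the ones used for the images discussed next.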
The first image of Figure [5](#Fig3P1){reference-type="ref" reference="Fig3P1"} demonstrates the univalence of $F^\theta_{\alpha\beta}$ for $\alpha,\beta$ satisfying [\[Eq6.3P1\]](#Eq6.3P1){reference-type="eqref" reference="Eq6.3P1"}, whereas the second image shows that there are non-univalent functions $F^\theta_{\alpha\beta}$ for $\alpha,\beta$ not satisfying [\[Eq6.3P1\]](#Eq6.3P1){reference-type="eqref" reference="Eq6.3P1"}. ![The image domains $F^\theta_{\alpha\beta}(\mathbb{D})$ for the above choices of $\alpha,\beta$.](Fig3aP1.eps "fig:"){#Fig3P1 width="6.5cm"} For $\alpha=1/5$ and $\beta=1/2$ ![The image domains $F^\theta_{\alpha\beta}(\mathbb{D})$ for the above choices of $\alpha,\beta$.](Fig3bP1.eps "fig:"){#Fig3P1 width="5.5cm" height="5.8cm"} For $\alpha=1/2$ and $\beta=1$ *Remark 5*. As a consequence of Example [Example 21](#Ex6.1P1){reference-type="ref" reference="Ex6.1P1"}, there are some $\alpha,\beta$ with $1/3\le \alpha(1+\beta)\le 1$ for which $F^\theta_{\alpha\beta}$ is locally univalent but not univalent. Moreover, a similar remark also applies to the subsequent examples. **Example 22**. Consider $\varphi(z)=z$ and the function $w(z)=(2z+1)/(2+z)$.
For this $\varphi$, the definition of $C^\theta_{\alpha\beta}[\varphi]$ reduces to $$\label{Eq6.4P1} C^\theta_{\alpha\beta}[\varphi](z)=e^{-i\theta}\int_{0}^{z} \frac{1}{(1-e^{i\theta}\zeta)^{\alpha \beta}}\, d\zeta=e^{-2i\theta}\bigg(\frac{1-(1-e^{i\theta}z)^{1-\alpha \beta}}{1-\alpha \beta}\bigg).$$ As in Example [Example 21](#Ex6.1P1){reference-type="ref" reference="Ex6.1P1"}, we can construct $F^\theta_{\alpha\beta}=H+\overline{G}$, which yields $$H-G=C^\theta_{\alpha\beta}[\varphi] ~~\mbox{ and }~~ \frac{G'(z)}{H'(z)}= \alpha(1+\beta)\frac{2z+1}{2+z}.$$ By solving these two equations, we obtain $$H(z)=e^{-i\theta}\int_{0}^{z} \frac{2+\zeta}{(1-e^{i\theta}\zeta)^{\alpha \beta}[(1-2\alpha(1+\beta))\zeta+2-\alpha(1+\beta)]}\, d\zeta$$ and $$G(z)=e^{-i\theta}\int_{0}^{z}\frac{\alpha(1+\beta)(2\zeta+1)}{(1-e^{i\theta}\zeta)^{\alpha \beta}[(1-2\alpha(1+\beta))\zeta+2-\alpha(1+\beta)]}\, d\zeta$$ under the standard normalization $H(0)=G(0)=0$. Therefore, the harmonic mapping $$\begin{aligned} \label{Eq6.5P1} F^\theta_{\alpha\beta}(z)=H(z)+\overline{G(z)}\end{aligned}$$ maps the unit disk $\mathbb{D}$ onto a domain convex in the horizontal direction. Now, Theorem [Theorem 10](#Thm4.1P1){reference-type="ref" reference="Thm4.1P1"} gives that $F^\theta_{\alpha\beta}$ given by [\[Eq6.5P1\]](#Eq6.5P1){reference-type="eqref" reference="Eq6.5P1"} belongs to $\mathcal{SHU}$ for all non-negative $\alpha,\beta$ satisfying the bound $$\label{Eq6.6P1} \alpha\leq \frac{1}{2(2+\beta+2(1+\beta))},$$ since in this case $\|w\|=1=\|w^*\|$.
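With $\|w\|=\|w^*\|=1$, the right-hand side of the bound of Theorem [Theorem 10](#Thm4.1P1){reference-type="ref" reference="Thm4.1P1"} simplifies to $1/(2(4+3\beta))$; for $\beta=1$ this gives $\alpha\le 1/14$. The following sketch is a supplementary illustration, not part of the original argument; the helper name `shu_bound` is ours. It evaluates the bound with exact rational arithmetic:

```python
from fractions import Fraction

# Illustrative evaluation (not from the paper) of the SHU bound of
# Theorem 10 with ||w|| = ||w*|| = 1:
#   alpha <= 1 / (2*(2 + beta + 2*(1 + beta))) = 1 / (2*(4 + 3*beta)).
def shu_bound(beta):
    """Right-hand side of the SHU bound for rational beta."""
    return Fraction(1, 2) / (4 + 3 * Fraction(beta))

print(shu_bound(1))  # 1/14, the value of alpha used for beta = 1
print(shu_bound(0))  # 1/8
```

This explains the parameter choice $\alpha=1/14$, $\beta=1$ appearing in the figures.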
While the second image of Figure [7](#Fig4P1){reference-type="ref" reference="Fig4P1"} shows that there are non-stable harmonic univalent functions $F^\theta_{\alpha\beta}$ for $\alpha,\beta$ not satisfying [\[Eq6.6P1\]](#Eq6.6P1){reference-type="eqref" reference="Eq6.6P1"}, the first image of Figure [7](#Fig4P1){reference-type="ref" reference="Fig4P1"} demonstrates that $F^\theta_{\alpha\beta}$ is a stable harmonic univalent function for $\alpha,\beta$ satisfying [\[Eq6.6P1\]](#Eq6.6P1){reference-type="eqref" reference="Eq6.6P1"}. ![The image domains $F^\theta_{\alpha\beta}(\mathbb{D})$ for the above choices of $\alpha,\beta$.](Fig4aP1.eps "fig:"){#Fig4P1 width="5.5cm"} For $\alpha=1/14$ and $\beta=1$ ![The image domains $F^\theta_{\alpha\beta}(\mathbb{D})$ for the above choices of $\alpha,\beta$.](Fig4bP1.eps "fig:"){#Fig4P1 width="5.5cm" height="5cm"} For $\alpha=1/2$ and $\beta=1$ **Example 23**. We consider the analytic function $w(z)=\cos(\pi c)z/2$ and choose $\varphi(z)=z/(1-z)^2$ in the definition of $C_{\alpha\beta}[\varphi]$ to obtain $$\label{Eq6.7P1} C^\theta_{\alpha\beta}[\varphi](z)=e^{-i\theta}\int_{0}^{z} \frac{1}{(1-e^{i\theta}\zeta)^{\alpha (2+\beta)}}\, d\zeta=e^{-2i\theta}\bigg(\frac{1-(1-e^{i\theta}z)^{1-\alpha(2+\beta)}}{1-\alpha(2+\beta)}\bigg).$$ Following similar steps as explained in Example [Example 21](#Ex6.1P1){reference-type="ref" reference="Ex6.1P1"}, one can easily obtain $$H(z)=e^{-i\theta}\int_{0}^{z} \frac{2}{(1-e^{i\theta}\zeta)^{\alpha(2+\beta)}(2-\alpha(1+\beta)\cos(\pi c)\zeta)}\, d\zeta$$ and $$G(z)=e^{-i\theta}\int_{0}^{z} \frac{\alpha(1+\beta)\cos(\pi c)\zeta}{(1-e^{i\theta}\zeta)^{\alpha(2+\beta)}(2-\alpha(1+\beta)\cos(\pi c)\zeta)}\, d\zeta$$ under the usual normalization $H(0)=G(0)=0$. Therefore, the harmonic mapping $$\begin{aligned} \label{Eq6.8P1} F^\theta_{\alpha\beta}(z)=H(z)+\overline{G(z)} \end{aligned}$$ maps the unit disk onto a domain convex in the horizontal direction.
By using Theorem [Theorem 14](#Thm5.1P1){reference-type="ref" reference="Thm5.1P1"}, the mapping $F^\theta_{\alpha\beta}$ given by [\[Eq6.8P1\]](#Eq6.8P1){reference-type="eqref" reference="Eq6.8P1"} is a close-to-convex mapping for all non-negative $\alpha,\beta$ satisfying $$\label{Eq6.9P1} \alpha(2+\beta)\le -2c.$$ The first image of Figure [9](#Fig5P1){reference-type="ref" reference="Fig5P1"} illustrates the close-to-convexity of $F^\theta_{\alpha\beta}$ for $\alpha,\beta$ satisfying [\[Eq6.9P1\]](#Eq6.9P1){reference-type="eqref" reference="Eq6.9P1"} with $c=-4/10$, whereas the second image of Figure [9](#Fig5P1){reference-type="ref" reference="Fig5P1"} indicates the existence of non-close-to-convex $F^\theta_{\alpha\beta}$ for $\alpha,\beta$ not satisfying [\[Eq6.9P1\]](#Eq6.9P1){reference-type="eqref" reference="Eq6.9P1"}. ![The image domains $F^\theta_{\alpha\beta}(\mathbb{D})$ for the above choices of $\alpha,\beta$.](Fig5aP1.eps "fig:"){#Fig5P1 height="4.1cm" width="6cm"} For $\alpha=4/15$ and $\beta=1$ ![The image domains $F^\theta_{\alpha\beta}(\mathbb{D})$ for the above choices of $\alpha,\beta$.](Fig5bP1.eps "fig:"){#Fig5P1 width="5cm" height="4.1cm"} For $\alpha=-1$ and $\beta=1$ **Example 24**.
Consider $w(z)=z/2$ and choose $\varphi(z)=z/(1-z)$ in the definition of $C_{\alpha\beta}[\varphi]$ to obtain $$\label{Eq6.10P1} C^\theta_{\alpha\beta}[\varphi](z)=e^{-i\theta}\int_{0}^{z} \frac{1}{(1-e^{i\theta}\zeta)^{\alpha (1+\beta)}}\, d\zeta=e^{-2i\theta}\bigg(\frac{1-(1-e^{i\theta}z)^{1-\alpha(1+\beta)}}{1-\alpha(1+\beta)}\bigg).$$ As in Example [Example 21](#Ex6.1P1){reference-type="ref" reference="Ex6.1P1"}, we find $F^\theta_{\alpha\beta}=H+\overline{G}$, where $$H(z)=e^{-i\theta}\int_{0}^{z} \frac{2}{(2-\alpha(1+\beta) \zeta)(1-e^{i\theta}\zeta)^{\alpha(1+\beta)}}\, d\zeta$$ and $$G(z)=e^{-i\theta}\int_{0}^{z} \frac{\alpha(1+\beta) \zeta}{(2-\alpha(1+\beta) \zeta)(1-e^{i\theta}\zeta)^{\alpha(1+\beta)}}\, d\zeta$$ under the usual normalization $H(0)=G(0)=0$. Therefore, the harmonic mapping $$\begin{aligned} \label{Eq6.11P1} F^\theta_{\alpha\beta}(z)=H(z)+\overline{G(z)} \end{aligned}$$ maps the unit disk onto a domain convex in the horizontal direction. It follows from Theorem [Theorem 18](#Thm5.5P1){reference-type="ref" reference="Thm5.5P1"} that the mapping $F^\theta_{\alpha\beta}$ given by [\[Eq6.11P1\]](#Eq6.11P1){reference-type="eqref" reference="Eq6.11P1"} belongs to $\mathcal{SHCC}$ for all non-negative $\beta$ and all $\alpha \in [0, 1/(\sqrt{2}(1+\beta))]$. The first image of Figure [11](#Fig6P1){reference-type="ref" reference="Fig6P1"} shows the stable harmonic close-to-convexity of $F^\theta_{\alpha\beta}$ for $\alpha,\beta$ satisfying this constraint, whereas the second image of Figure [11](#Fig6P1){reference-type="ref" reference="Fig6P1"} suggests the existence of non-stable harmonic close-to-convex $F^\theta_{\alpha\beta}$ for $\alpha,\beta$ violating it.
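The constraint $\alpha(1+\beta)\le 1/\sqrt{2}$ of Theorem [Theorem 18](#Thm5.5P1){reference-type="ref" reference="Thm5.5P1"} can likewise be verified numerically for the parameter pairs used in the figures. The following sketch is a supplementary illustration, not part of the original argument; the helper name `shcc_condition` is ours:

```python
import math

# Illustrative check (not from the paper) of the SHCC condition
# alpha*(1+beta) <= 1/sqrt(2) from Theorem 18.
def shcc_condition(alpha, beta):
    return alpha * (1 + beta) <= 1 / math.sqrt(2)

# alpha = 7/10, beta = 1/100: 0.7*1.01 = 0.707 <= 0.7071..., bound holds.
print(shcc_condition(7/10, 1/100))  # True
# alpha = 2, beta = -1/2: 2*(1/2) = 1 > 0.7071..., bound fails.
print(shcc_condition(2, -1/2))      # False
```

The first pair barely satisfies the bound, which is consistent with the near-extremal choice made for the first image.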
![The image domains $F^\theta_{\alpha\beta}(\mathbb{D})$ for the above choices of $\alpha,\beta$.](Fig6aP1.eps "fig:"){#Fig6P1 width="5.7cm" height="4.1cm"} For $\alpha=7/10$ and $\beta=1/100$ ![The image domains $F^\theta_{\alpha\beta}(\mathbb{D})$ for the above choices of $\alpha,\beta$.](Fig6bP1.eps "fig:"){#Fig6P1 width="5cm" height="4.1cm"} For $\alpha=2$ and $\beta=-1/2$ # Concluding Remarks In the light of Theorem [Theorem 3](#thm3.4P1){reference-type="ref" reference="thm3.4P1"} and Remark [Remark 3](#remark3.5P1){reference-type="ref" reference="remark3.5P1"}, the operator $C^\theta_{\alpha \beta}[\varphi]$ is CHD for all non-negative $\alpha,\beta$ satisfying $\alpha(\beta+2(1-\delta))\leq 3$ whenever $\varphi\in\mathcal{S}^*(\delta)$. This has been used in Definition [Definition 4](#Def3.5P1){reference-type="ref" reference="Def3.5P1"} and subsequently in the relevant results. It would be further interesting to concentrate on the problem for the remaining values of $\alpha$ and $\beta$. We recall from [@HM74 Theorem 1] that the Cesàro transform $C[\varphi]$ preserves the class $\mathcal{K}$. However, its corresponding harmonic mapping $F^0_{11}$ is not necessarily convex whenever $\varphi\in\mathcal{K}$. Indeed, by choosing $\varphi(z)=z/(1-z)$ and $w(z)=z/2$, we construct $F^0_{11}=H+\overline{G}$ with its dilatation $w_{11}(z)=z$. 
Now we define an analytic function $\varPhi_{\lambda, 0}:=H+\lambda G$, $\lambda\in\mathbb{T}$, so that $$\nonumber \varPhi'_{\lambda,0}(z)=H'(z)\cdot [1+\lambda \,w_{11}(z)]=(C[\varphi])'(z) \cdot \frac{1+\lambda z}{1-z}.$$ Thus, for all $z\in \mathbb{D}$, we compute $${\rm Re}\bigg[1+\frac{z \varPhi''_{\lambda,0}(z)}{\varPhi'_{\lambda,0}(z)}\bigg]={\rm Re} \bigg[1+\frac{3z}{1-z}+\frac{\lambda z}{1+\lambda z}\bigg].$$ By choosing $z=-1/2$ and $\lambda=1$, we note that $${\rm Re}\bigg[1+\frac{z\varPhi''_{\lambda, 0}(z)}{\varPhi'_{\lambda, 0}(z)}\bigg]=-1<0.$$ Thus, by [@HM13-1 Theorem 3.1], $F^0_{11}=H+\overline{G}$ is not a convex harmonic mapping in $\mathbb{D}$. Following this, it is important to study the preserving property of $C^\theta_{\alpha\beta}[\varphi]$ when $\varphi\in\mathcal{K}$. This is seen in the proof of Theorem [Theorem 6](#Thm3.7P1){reference-type="ref" reference="Thm3.7P1"}. Indeed, we notice that for all non-negative $\alpha,\beta$ with $\alpha(\beta+2(1-\delta))\leq 2$, the integral transform $C^\theta_{\alpha\beta}[\varphi]$ preserves the class $\mathcal{K}$. However, it would be interesting to find ranges of $\alpha$ and $\beta$ under which $F^\theta_{\alpha\beta}$ is convex whenever $\varphi\in\mathcal{K}$. This remains an open problem. On the one hand, the manuscript deals with sufficient conditions for the univalence of $F^\theta_{\alpha\beta}$ under certain constraints on $\alpha,\beta$; on the other hand, we observe from Section [6](#Sec6P1){reference-type="ref" reference="Sec6P1"} that there are non-univalent functions $F^\theta_{\alpha\beta}$ for some choices of $\alpha,\beta$ not satisfying such constraints. This observation suggests studying necessary conditions for the univalence of $F^\theta_{\alpha\beta}$ in terms of bounds on $\alpha$ and $\beta$, which remains open as well. # Acknowledgement(s) {#acknowledgements .unnumbered} The authors of this manuscript are grateful to Professor S.
Ponnusamy for generously giving his time to speak with us on this subject. # Disclosure statement {#disclosure-statement .unnumbered} The authors declare that they have no conflict of interest. # Funding {#funding .unnumbered} The work of the second author is supported by the University Grants Commission, Ref. No.: 1163/(CSIR-UGC NET JUNE 2019). The authors also acknowledge the support of the DST-FIST Project (File No.: SR/FST/MS I/2018/26) for providing research facilities in the department. [A]{.smallcaps}hlfors, L. A.: Sufficient condition for quasiconformal extension. Ann. of Math. Stud. **79**, 23--29 (1974) [A]{.smallcaps}ksent'ev, L. A., Nezhmetdinov, I. R.: Sufficient conditions for univalence of certain integral transforms. Tr. Semin. Kraev. Zadacham. Kazan. **18**, 3--11 (1982) (in Russian); translation in: Amer. Math. Soc. Transl. **136**, 1--9 (1987) [A]{.smallcaps}rbeláez, H., Bravo, V., Hernández, R., Sierra, W., Venegas, O.: A new approach for the univalence of certain integral of harmonic mappings. Indag. Math. (N.S.) **31**, 525--535 (2020) [A]{.smallcaps}vkhadiev, F. G., Nasibullin, R. G., Shafigullin, I. K.: Becker type univalence conditions for harmonic mappings. Russian Math. **60**(11), 69--73 (2016) [B]{.smallcaps}ecker, J.: Löwnersche Differentialgleichung und quasikonform fortsetzbare schlichte Funktionen. J. Reine Angew. Math. **255**, 23--43 (1972) [B]{.smallcaps}haranedhar, S. V., Ponnusamy, S.: Coefficient conditions for harmonic univalent mappings and hypergeometric mappings. Rocky Mountain J. Math. **24**, 753--777 (2014) [B]{.smallcaps}ravo, V., Hernández, R., Ponnusamy, S., Venegas, O.: Pre-Schwarzian and Schwarzian derivatives of logharmonic mappings. Monatsh. Math. **199**, 733--754 (2022) [B]{.smallcaps}ravo, V., Hernández, R., Venegas, O.: On the univalence of certain integral transform for harmonic mappings. J. Math. Anal. Appl. **455**(1), 381--388 (2017) [B]{.smallcaps}shouty, D., Joshi, S. S., Joshi, S.
B.: On close-to-convex harmonic mappings. Complex Var. Elliptic Equ. **58**(9), 1195--1199 (2013) [B]{.smallcaps}shouty, D., Lyzzaik, A.: Close-to-convexity criteria for planar harmonic mappings. Complex Anal. Oper. Theory **5**, 767--774 (2011) [C]{.smallcaps}ampbell, D. M., Cima, J. A., Pfaltzgraff, J. A.: Linear spaces and linear-invariant families of locally univalent analytic functions. Manuscripta Math. **4**, 1--30 (1971) [C]{.smallcaps}ausey, W. M.: The close-to-convexity and the univalence of an integral. Math. Z. **99**, 207--212 (1967) [C]{.smallcaps}ausey, W. M.: The univalence of an integral. Proc. Amer. Math. Soc. **27**, 500--503 (1971) [C]{.smallcaps}huaqui, M., Hernández, R.: Univalent harmonic mappings and linearly connected domains. J. Math. Anal. Appl. **332**, 1189--1194 (2007) [C]{.smallcaps}lunie, J., Sheil-Small, T.: Harmonic univalent functions. Ann. Acad. Sci. Fenn. Ser. A.I. **9**, 3--25 (1984) [D]{.smallcaps}uren, P. L.: Univalent Functions. Springer-Verlag, New York (1983) [D]{.smallcaps}uren, P. L.: Harmonic Mappings in the Plane. Cambridge University Press (2004) [G]{.smallcaps}oodman, A. W.: Univalent Functions. Vol. **1**, Mariner Publishing Company, Florida (1983) [G]{.smallcaps}raf, S. Yu.: On the Schwarzian norm of harmonic mappings. Probl. Anal. Issues Anal. **5**(23), 20--32 (2016) [G]{.smallcaps}raham, I., Kohr, G.: Geometric Function Theory in One and Higher Dimensions. Marcel Dekker Inc. New York, Basel (2003) [H]{.smallcaps}artmann, F. W., MacGregor, T. H.: Matrix transformations of univalent power series. J. Aust. Math. Soc. **18**, 419--435 (1974) [H]{.smallcaps}ernández, R., Martín, M. J.: Quasi-conformal extensions of harmonic mappings in the plane. Ann. Acad. Sci. Fenn. Ser. A. I Math. **38**, 617--630 (2013) [H]{.smallcaps}ernández, R., Martín, M. J.: Stable geometric properties of analytic and harmonic functions. Math. Proc. Cambridge Philos. Soc. **155**(2), 343--359 (2013) [H]{.smallcaps}ernández, R., Martín, M.
J.: Criteria for univalence and quasiconformal extension of harmonic mappings in terms of the Schwarzian derivative. Arch. Math. **104**(1), 53--59 (2015) [H]{.smallcaps}ernández, R., Martín, M. J.: Pre-Schwarzian and Schwarzian derivatives of harmonic mappings. J. Geom. Anal. **25**(1), 64--91 (2015) [K]{.smallcaps}im, Y. J., Merkes, E. P.: On an integral of powers of a spirallike function. Kyungpook Math. J. **12**(2), 249--253 (1972) [K]{.smallcaps}umar, S., Sahoo, S. K.: Properties of $\beta$-Cesàro operators on $\alpha$-Bloch space. Rocky Mountain J. Math. **50**(5), 1723--1746 (2020) [K]{.smallcaps}umar, S., Sahoo, S. K.: Preserving properties and pre-Schwarzian norms of nonlinear integral transforms. Acta Math. Hungar. **162**, 84--97 (2020) [L]{.smallcaps}ewy, H.: On the non-vanishing of the Jacobian in certain one-to-one mappings. Bull. Amer. Math. Soc. **42**, 689--692 (1936) [L]{.smallcaps}iu, G., Ponnusamy, S.: Uniformly locally univalent harmonic mappings associated with the pre-Schwarzian norm. Indag. Math. (N.S.) **29**(2), 752--778 (2018) [L]{.smallcaps}iu, G., Ponnusamy, S.: Harmonic pre-Schwarzian and its applications. Bull. Sci. Math. **152**, 150--168 (2019) [M]{.smallcaps}erkes, E. P., Wright, D. J.: On the univalence of a certain integral. Proc. Amer. Math. Soc. **27**(1), 97--100 (1971) [M]{.smallcaps}ocanu, P. T.: Injectivity conditions in the complex plane. Complex Anal. Oper. Theory **5**, 759--766 (2011) [M]{.smallcaps}uhanna, Y. A., Ponnusamy, S.: Extreme points method and univalent harmonic mappings. Contemp. Math. **667**, 223--237 (2016) [N]{.smallcaps}ezhmetdinov, I. R., Ponnusamy, S.: On the univalence of an integral on a subclass of meromorphic convex univalent functions. Hokkaido Math. J. **32**, 401--413 (2003) [N]{.smallcaps}unokawa, M.: On the univalence of a certain integral. Trans. Amer. Math. Soc. **146**, 439--446 (1969) [P]{.smallcaps}faltzgraff, J. A.: Univalence of the integral of $f'(z)^{\lambda}$. Bull. Lond. Math.
Soc. **7**, 254--256 (1975) [P]{.smallcaps}ommerenke, Ch.: Linear-invariante Familien analytischer Funktionen I. Math. Ann. **155**, 108--154 (1964) [P]{.smallcaps}ommerenke, Ch.: Univalent Functions. Vandenhoeck and Ruprecht, Göttingen, Germany (1975) [P]{.smallcaps}ommerenke, Ch.: Boundary Behaviour of Conformal Maps. Springer-Verlag, Heidelberg (1992) [P]{.smallcaps}onnusamy, S., Kaliraj, A. S.: Constants and characterization for certain classes of univalent harmonic mappings. Mediterr. J. Math. **12**, 647--665 (2015) [P]{.smallcaps}onnusamy, S., Quach, T., Rasila, A.: Harmonic shears of slit and polygonal mappings. Appl. Math. Comput. **233**, 588--598 (2014) [P]{.smallcaps}onnusamy, S., Sahoo, S. K., Sugawa, T.: Hornich operations on functions of bounded boundary rotations and order alpha. Comput. Methods Funct. Theory **19**(3), 455--472 (2019) [P]{.smallcaps}onnusamy, S., Singh, V.: Univalence of certain integral transforms. Glas. Mat. Ser. III **31**(51), 253--261 (1996) [R]{.smallcaps}oyster, W. C.: On the univalence of a certain integral. Michigan Math. J. **12**, 385--387 (1965) [R]{.smallcaps}oyster, W. C., Ziegler, M.: Univalent functions convex in one direction. Publ. Math. Debrecen **23**, 339--345 (1976) [U]{.smallcaps}mezawa, T.: Analytic functions convex in one direction. J. Math. Soc. Japan **4**(2), 194--202 (1952)
{ "id": "2310.01965", "title": "Univalence of horizontal shear of Ces\\`aro type transforms", "authors": "Swadesh Kumar Sahoo and Sheetal Wankhede", "categories": "math.CV", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | Flexible estimation of the mean outcome under a treatment regimen (i.e., value function) is the key step toward personalized medicine. We define our target parameter as a conditional value function given a set of baseline covariates, which we refer to as a stratum-based value function. We focus on a semiparametric class of decision rules and propose a sieve-based nonparametric covariate-adjusted regimen-response curve estimator within that class. Our work contributes in several ways. First, we propose an inverse probability weighted nonparametrically efficient estimator of the smoothed regimen-response curve function. We show that asymptotic linearity is achieved when the nuisance functions are undersmoothed sufficiently. Asymptotic and finite sample criteria for undersmoothing are proposed. Second, using Gaussian process theory, we propose simultaneous confidence intervals for the smoothed regimen-response curve function. Third, we provide consistency and a convergence rate for the optimizer of the regimen-response curve estimator; this enables us to estimate an optimal semiparametric rule. The latter is important as the optimizer corresponds to the optimal dynamic treatment regimen. Some finite-sample properties are explored with simulations. bibliography: - chalipwbib.bib title: Nonparametric estimation of a covariate-adjusted counterfactual treatment regimen response curve --- # Introduction Marginal structural models have been widely used in causal inference as a tool for estimating the mean outcome under a given decision rule [@robins1998marginal] and constructing optimal treatment regimens. This approach requires modeling the potential outcome given a set of baseline covariates, including treatment and possibly some subset of baseline information on a subject.
Existing methods typically impose a parametric causal model on the mean of the potential outcome and draw inference under the assumption that the model is correctly specified [@robins2000marginal; @murphy2001marginal; @joffe2004model; @orellana2010dynamic]. The latter is particularly restrictive when the conditioning set of treatment and baseline covariates includes some continuous variables. Although model selection approaches have been proposed [@van2003unified], it is likely that finite sample size prevents recovering the true model and leads to poor out-of-sample performance [@shinozaki2020understanding]. [@neugebauer2007nonparametric] proposed a nonparametric marginal structural model framework to improve the causal interpretability of the estimated parameters of a potentially misspecified parametric working model. However, when the working model is misspecified, the causally interpretable parameters might not be the parameters of interest and the resulting treatment regimen might be suboptimal. The parameters of the proposed marginal structural model can be estimated using an inverse probability weighted loss function where the weights are the conditional probability of following a particular decision rule. The consistency and causal interpretability of the corresponding estimators rely on a correctly specified weight function. To mitigate the chance of model misspecification, one may decide to model the weight function nonparametrically using data-adaptive techniques. However, this choice may result in nonregular estimators of the primary parameters in the marginal structural model. This lack of regularity is because the rate of convergence for inverse probability weighted estimators relies on the rate of convergence of the weight function, which will be slower than the desired root-$n$ rate when data-adaptive techniques are used.
To overcome this concern, authors have used so-called doubly robust estimators for the parameters in the working marginal structural model; that is, estimators that are consistent as long as either the treatment model or the outcome model is correctly specified [@robins2004optimal; @wahed2004optimal; @orellana2010dynamic; @rosenblum2011marginal; @petersen2014targeted; @ertefaie2015identifying]. However, doubly-robust nonparametric estimators may suffer from two important issues: (1) the performance of the estimators depends on the modeling choice of nuisance parameters and they can be irregular with large bias and a slow rate of convergence [@van2014targeted; @benkeser2017doubly]; and (2) the quality of the constructed decision rule still relies on a correctly specified marginal structural model. As an alternative to the marginal structural model framework, several authors have proposed methods to estimate nonparametric decision rules that circumvent the need for specifying a parametric outcome model. [@zhao2012estimating] proposed an inverse probability weighting approach to estimate a nonparametric decision rule [@dudik2011doubly; @zhao2015new]. Doubly robust augmented inverse probability weighting procedures have also been proposed [@rubin2012statistical; @zhang2012robust; @liu2018augmented; @athey2021policy]. Due to the nonparametric nature of the estimated decision rule, inference for the resulting value function (i.e., mean outcome under the estimated rule) is challenging. [@zhao2012estimating] developed Fisher consistency, the excess risk bound, and the convergence rate of the value function to provide a theoretical guarantee for their proposed method. [@athey2021policy] derived bounds for the regret function under the estimated nonparametric treatment strategy and showed that the bounds decay as $n^{-1/2}$ as long as the Vapnik--Chervonenkis (VC) dimension of the class of decision rules does not grow too fast.
Other recent germane contributions on estimating optimal treatment regimes and conditional average treatment effects include [@swaminathan2015batch], [@kallus2018balanced], [@nie2021quasi], [@zhao2022selective], and references therein. We define a stratum-based value function as the conditional expectation of the potential outcome under a treatment regimen given a prespecified set of baseline covariates $\bm{V}$. The value is a function of regimens and we are interested in understanding the functional association between the mean outcome and the regimens within a class, which we refer to as the *regimen-response curve*. We consider a current status data setting with two levels of coarsening at random (i.e., treatment assignment and censoring). We overcome the two important shortcomings of the existing methods by: (1) allowing the regimen-response curve to be estimated nonparametrically; and (2) estimating the weight functions of the inverse probability weighted loss function using a data-adaptive technique. Recently, [@ertefaie2022nonparametric] proposed a nonparametric efficient inverse probability weighted estimator for the average causal effect where the weights are estimated using an undersmoothed highly adaptive lasso. Below, we generalize the same technique to estimate conditional average causal effects via nonparametric marginal structural models. An important byproduct of our nonparametric regimen-response curve estimator is a method to construct semiparametric decision rules where the parameters of the rule are allowed to be a nonparametric function of the set of baseline covariates (i.e., a ${V}$-specific optimal decision rule). The parameters are defined as the optimizer of the regimen-response curve, which allows us to provide a rate of convergence for the corresponding functional parameter estimators using empirical process theory.
Our theoretical contributions can be summarized as follows: - Considering a kernel smoothed regimen-response curve function as the target parameter, we show that the proposed inverse probability weighted estimator (1) is nonparametrically efficient when the weight functions are properly undersmoothed; (2) converges to a Gaussian process at a root-$n$ rate; and (3) is uniformly consistent at a derived rate. - Considering the regimen-response curve function as the target parameter, we derive the asymptotic normality of our estimator and show that the bias caused by the kernel smoothing of the estimator vanishes under some mild assumptions and properly undersmoothed kernel and weight functions. - Using a novel empirical process algorithm, we show that the minimizer of the estimated regimen-response curve is a consistent estimator of the minimizer of the true regimen-response curve and derive the convergence rate. This result is of independent interest as it derives a rate of convergence for an optimizer of a nonparametrically estimated function. In contrast to the existing literature, which targets the value function under the estimated rule, our estimand is the stratum-based value function. Deriving the asymptotic properties of our estimand is more challenging because it will be a non-pathwise differentiable functional when $\bm{V}$ includes at least one continuous variable [@bickel1998efficient]. Moreover, existing methods estimate either parametric or nonparametric rules, where in each case flexibility or interpretability is sacrificed. Our proposed semiparametric rule lands in between and provides a reasonable trade-off between flexibility and interpretability. For example, clinicians might have a strong opinion about an important tailoring variable and want to ensure that the correct functional form of that variable is captured in the decision rule.
In such settings, we can include that variable in the set $\bm{V}$ and provide a stratum-based decision rule where the decision rule will be a nonparametric function of $\bm{V}$. # Preliminaries ## Notation and Formulation {#sec:notation} Let $T$ be a univariate failure time, $\bm{W} \in \mathcal{W} \subset \mathbb{R}^{p}$ be a vector of all available baseline covariates measured before treatment $A \in \{0,1\}$, and $C$ be a censoring variable. Let $\bm{S}\in \mathcal{S} \subset \mathbb{R}^{q}$ be a subvector of $\bm{W}$ and $\bm{V} = \bm{W} \setminus \bm{S}$ where $\bm{V} \in \mathcal{V} \subset \mathbb{R}^{p-q}$. For simplicity of notation, we denote $p-q$ as $r$. A decision rule $d^{\theta} \in \mathcal{D}$ is defined as a function that maps values of $\bm{S}$ to a treatment option $\{0,1\}$. In our formulation $\mathcal{D}$ is a class of deterministic rules indexed by a vector of coefficients. Specifically, we consider $d^{\theta} =I\{\bm{S}^\top \theta>0\}$ with $\theta \in \Theta \subset \mathbb{R}^q$ where $I(.)$ is an indicator function. Let $T^\theta$ be the potential outcome that would have been observed under the decision rule $d^\theta$ for a given value of $\theta \in \Theta$. Suppose that we have the full data $X = \{(T^\theta: d^\theta \in \mathcal{D}), \bm{W}\} \sim P_X \in \mathcal{M}^F$ where $\mathcal{M}^F$ is a nonparametric full data model. Define $E_{P_X}\{I(T^\theta >t) \mid \bm{V}\}=\Psi_{P_X}(\theta,\bm{V}) \in \mathcal{F}_{\lambda}$ where $\mathcal{F}_{\lambda}$ is a class of cadlag functions with sectional variation norm bounded by $\lambda$ and $t$ is a predetermined time point. 
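For concreteness, the linear rule class $d^\theta = I\{\bm{S}^\top \theta > 0\}$ can be coded directly. This is our own minimal sketch (the helper name `decision_rule` and the sample values are ours, not the paper's):

```python
import numpy as np

def decision_rule(S, theta):
    """Deterministic rule d^theta(S) = I(S^T theta > 0), applied to a batch.

    S:     (n, q) array of tailoring covariates
    theta: (q,) coefficient vector indexing the rule
    """
    return (S @ theta > 0).astype(int)

# Hypothetical example with q = 2: the rule "treat when S1 > S2"
S = np.array([[2.0, 1.0], [0.5, 1.5], [1.0, 1.0]])
theta = np.array([1.0, -1.0])
print(decision_rule(S, theta))  # -> [1 0 0]
```

Note that only the direction of $\theta$ matters for the rule, which is why $\theta$ is typically restricted to a compact set $\Theta$ such as the unit sphere.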
The full-data risk of $\Psi$ is defined as $E_{P_X} \int_\theta \ell_{\theta}(\Psi,x) dF(\theta)$ where $\ell$ is the log-likelihood loss function, $\ell_{\theta}(\Psi) = I(T^\theta >t) \log \left(\frac{\Psi}{1-\Psi}\right) +\log(1-\Psi),$ and $dF(\theta)$ is a weight function/measure on the Euclidean set of all $\theta$-values (e.g., multivariate normal with possibly large variance). Define a $\bm{V}$-specific optimal decision rule that maximizes the $t$-year survival probability as $\theta_{0}(\bm{V}) = \mathop{\mathrm{argmax}}_{\theta \in \Theta} \Psi_{P_X}(\theta,\bm{V})$. Hence, we allow $\theta$ to be a functional parameter mapping $\mathcal{V}$ onto $\mathbb{R}^q$. The observed data $\mathcal{O} = \{Y = \min(T,C), \Delta = I(T<C), A,\bm{W}\}$ follow some probability distribution $P_0$ that lies in some nonparametric model $\mathcal{M}$. Suppose we observe $n$ independent, identically distributed trajectories of $\mathcal{O}$. Let $\Delta^c = \Delta+(1-\Delta)I\{Y>t \}$ and $G: \mathcal{M} \rightarrow \mathcal{G}$ be a functional nuisance parameter where $\mathcal{G}=\{G(P): P \in \mathcal{M}\}$. In our setting, $G(P)\{(A,\Delta^c) \mid \bm{W}\}$ denotes the joint treatment and censoring mechanism under a distribution $P$. We refer to $G(P_0)$ as $G_0$ and $G(P)$ as $G$. Under the coarsening at random assumption, $G(P)\{(A,\Delta^c)\mid {X}\}=G(P)\{(A,\Delta^c) \mid \bm{W}\}$. Denote $G^a \equiv G^a(P)(A \mid \bm{W})$ and $G^c \equiv G^c(P)(\Delta^c \mid \bm{W},A)$. We denote the latter two functions under $P_0$ as $G^a_0$ and $G^c_0$. Also define $Q: \mathcal{M} \rightarrow \mathcal{Q}$ and $\mathcal{Q}=\{Q(P): P \in \mathcal{M}\}$ where $Q_0(\theta,\bm{W})= \mathbb{E}_{P_0}\{I(Y^{\theta}>t) \mid \bm{W}\}$. Let $\Pi$ be the projection operator in the Hilbert space $L_0^2(P)$ with inner product $\langle h_1, h_2 \rangle =E_P(h_1h_2)$.
The loss function $\int_\theta \ell_{\theta}(\Psi) dF(\theta)$ is based on the full data, which are not observable, and thus cannot be used to define our full data functional parameter $\Psi_{P_X}$. However, under the consistency ($T_i=T^{A_i}$) and no-unmeasured-confounding ($A \perp T^a \mid \bm{W}$ and $\Delta^c \perp T \mid A, \bm{W}$) assumptions, it can be identified using the observed data distribution $P_0$. Specifically, we define an inverse probability weighted mapping of the full-data loss as $$\begin{aligned} L_{G}(\Psi)=\int_{\theta} \left[ \frac{\Delta^c I( A= d^\theta)\xi(\theta,\bm{V}) }{G\{(A,\Delta^c) \mid \bm{W}\} } \{ I(Y >t) \log \left(\frac{\Psi}{1-\Psi}\right) +\log(1-\Psi)\} \right] dF(\theta). \label{eq:lossf}\end{aligned}$$ Accordingly, define the regimen-response curve as $\Psi_{P_0} = \mathop{\mathrm{argmin}}_{\Psi \in \mathcal{F}_\lambda} E_{P_0} L_{G_0}(\Psi)$. Following our convention, we denote $\Psi_{P_0}$ as $\Psi_{0}$. We define a highly adaptive estimator of $\Psi_{P_0}$ as $\Psi_n = \mathop{\mathrm{argmin}}_{\Psi \in \mathcal{F}_{\lambda_n}} P_n L_{G_n}(\Psi)$ where $\lambda_n$ is a data adaptively selected $\lambda$ and $G_n$ is an estimator of $G_0$. The choice of the function $\xi$ is inconsequential to the derivation of the efficient influence function of our target parameter but it may impact the level of undersmoothing needed in finite samples. One may consider $\xi(\theta,\bm{V})=1$ or $\xi(\theta,\bm{V})=G\{(A,\Delta^c) \mid \bm{V}\}$. The latter choice corresponds to the stabilized weight.
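For a fixed $\theta$ and $\xi(\theta,\bm{V})=1$, the empirical version of this weighted loss can be sketched as follows. This is our own illustration (all names are ours), written with the negative of $\ell_\theta$ so that the returned risk is minimized:

```python
import numpy as np

def ipw_loglik_risk(psi, surv_ind, delta_c, a, d_theta, g_hat, eps=1e-10):
    """Empirical IPW risk P_n L_G(Psi) at a single rule d^theta, with xi = 1.

    psi:      (n,) fitted values Psi(theta, V_i)
    surv_ind: (n,) indicators I(Y_i > t)
    delta_c:  (n,) censoring indicators Delta^c_i
    a:        (n,) observed treatments A_i
    d_theta:  (n,) rule-assigned treatments d^theta(S_i)
    g_hat:    (n,) estimates of G{(A_i, Delta^c_i) | W_i}
    """
    psi = np.clip(psi, eps, 1 - eps)
    w = delta_c * (a == d_theta) / g_hat                    # IPW weights
    ll = surv_ind * np.log(psi / (1 - psi)) + np.log(1 - psi)
    return -np.mean(w * ll)                                 # negative log-likelihood risk

# With all weights equal to 1 and psi = 1/2, the risk reduces to log 2
n = 4
risk = ipw_loglik_risk(np.full(n, 0.5), np.array([1, 0, 1, 1]),
                       np.ones(n), np.ones(n), np.ones(n), np.ones(n))
print(round(risk, 6))  # -> 0.693147
```

In practice this risk would be averaged over draws of $\theta$ from $dF(\theta)$ and minimized over $\Psi$ in the constrained class $\mathcal{F}_{\lambda_n}$.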
It forms a linear combination of indicator basis functions to minimize the expected value of a loss function under the constraint that the $L_1$-norm of the coefficient vector is bounded by a finite constant value. Let $\mathcal{D}[0,\tau]$ be the Banach space of $d$-variate real-valued cadlag functions (right-continuous with left-hand limits) on a cube $[0,\tau] \in \mathbb{R}^d$. For each function $f \in \mathcal{D}[0,\tau]$ define the supremum norm as $\| f \|_{\infty}=\sup_{w \in [0,\tau]} |f(w)|$. For any subset $s$ of $\{1,\ldots,d\}$, we partition $[0,\tau]$ into $\{0\} \cup \{ \cup_s (0_s,\tau_s]\}$ and define the sectional variation norm of such $f$ as $$\| f\|^*_\zeta = |f(0)|+\sum_{s \subset\{1,\ldots,d\}} \int_{0_s}^{\tau_s} |df_s(u_s)|,$$ where the sum is over all nonempty subsets of $\{1,\ldots,d\}$. For a given subset $s \subset \{1,\ldots,d\}$, define $u_s =(u_j :j \in s)$ and $u_{-s}$ as the complement of $u_s$. Then, $f_s: [0_s,\tau_s] \rightarrow \mathbb{R}$ is defined as $f_s(u_s) = f(u_s,0_{-s})$. Thus, $f_s(u_s)$ is a section of $f$ that sets the components in the complement of $s$ equal to zero and varies only along the variables in $u_s$. Let $G^a \equiv G(P)(A \mid W)$ and $G^c \equiv G(P)(\Delta^c \mid W,A)$ denote the treatment and censoring mechanisms under an arbitrary distribution $P \in \mathcal{M}$. Assuming that our nuisance functional parameters $G^a,G^c \in \mathcal{D}[0,\tau]$ have finite sectional variation norm, we can represent $\text{logit } G^a$ as [@gill1993inefficient] $$\begin{aligned} \text{logit } G^a(w):&=\text{logit } G^a(0)+\sum_{s \subset\{1,\ldots,d\}} \int_{0_s}^{w_s} dG^a_s(u_s) \nonumber \\ &=\text{logit } G^a(0)+\sum_{s \subset\{1,\ldots,d\}} \int_{0_s}^{\tau_s} I(u_s \leq w_s)dG^a_s(u_s).
\label{eq:hal1}\end{aligned}$$ The representation ([\[eq:hal1\]](#eq:hal1){reference-type="ref" reference="eq:hal1"}) can be approximated using a discrete measure that puts mass on each observed $W_{s,i}$ denoted by $\alpha_{s,i}$. Let $\phi_{s,i}(c_s)= I(w_{i,s} \leq c_s)$ where $w_{i,s}$ are support points of $dG^a_s$. We then have $$\text{logit } G^a_\alpha = \alpha_0+\sum_{s \subset\{1,\ldots,d\}}\sum_{i=1}^{n} \alpha_{s,i} \phi_{s,i},$$ where $|\alpha_0|+\sum_{s \subset\{1,\ldots,d\}}\sum_{i=1}^{n} |\alpha_{s,i}|$ is an approximation of the sectional variation norm of $\text{logit } G^a$. The loss based highly adaptive lasso estimator $\alpha_n$ is defined as $$\alpha_n (\lambda)= \arg \min_{\alpha: |\alpha_0|+\sum_{s \subset\{1,\ldots,d\}}\sum_{i=1}^{n} |\alpha_{s,i}|<\lambda} P_n L( \text{logit } G^a_\alpha),$$ where $L(.)$ is a given loss function. Denote $G^a_{n,\lambda} \equiv G^a_{\alpha_n(\lambda)}$ as the highly adaptive lasso estimate of $G^a$. Similarly, we can define $G^{c}_{n,\lambda}$ as the highly adaptive lasso estimate of $G^c$. When the nuisance parameter corresponds to a conditional probability of a binary variable (e.g., propensity score with a binary treatment or a binary censoring indicator), the log-likelihood loss can be used. In practice, $\lambda$ is unknown and must be obtained using data. We refer to $\lambda_n$ as a value of $\lambda$ that is determined using data. # Dynamic marginal structural models with highly adaptive lasso {#sec:est} ## The challenge {#sec:challenge} The target functional parameter $\Psi_0$ is not a pathwise differentiable function, which makes inference challenging. The existing literature approximates this functional by a pathwise differentiable function $\Psi_{\beta}$, where $\beta$ is a vector of unknown parameters [@van2006causal; @van2007statistical; @murphy2001marginal; @robins2008estimation; @orellana2010dynamic; @neugebauer2007nonparametric].
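The indicator-basis construction of the previous subsection can be materialized as a design matrix and handed to any $L_1$-penalized logistic solver. The sketch below is ours (the name `hal_design` is hypothetical), enumerating one column $\phi_{s,i}$ per knot $i$ and nonempty subset $s$:

```python
import numpy as np

def hal_design(W):
    """Build the zero-order HAL design: columns phi_{s,i}(w) = I(w_{i,s} <= w_s),
    one per observation i and nonempty subset s of the d coordinates.

    W: (n, d) covariate matrix; returns an (n, n * (2^d - 1)) 0/1 matrix
       whose row k evaluates every basis function at W_k.
    """
    n, d = W.shape
    cols = []
    for mask in range(1, 2 ** d):                 # nonempty subsets s
        s = [j for j in range(d) if mask >> j & 1]
        # entry [k, i] = prod_{j in s} I(W[i, j] <= W[k, j])
        cols.append(np.all(W[:, None, s] >= W[None, :, s], axis=2))
    return np.concatenate(cols, axis=1).astype(float)

W = np.array([[0.0], [1.0], [2.0]])
# knots 0, 1, 2 yield columns [1,1,1], [0,1,1], [0,0,1]
print(hal_design(W))
```

Fitting `logit G^a` then amounts to an $L_1$-penalized logistic regression of $A$ on this design, with the penalty level playing the role of the sectional variation bound $\lambda$.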
The working model provides a user-specified summary of the unknown true function relating the expectation of the potential outcome under a strategy. Hence, the quality of the constructed decision rule depends on how well the working model approximates the true function. To mitigate this concern we propose a kernel-smoothed highly adaptive lasso estimator of $\Psi_0(\bm{v}_0,\theta)$. Specifically, we define a regimen-response curve estimator as $$\begin{aligned} \Psi_{nh}(\bm{v}_0,\theta) = P_n U(P_n,\bm{v}_0,\Psi_n) \label{eq:propest} \end{aligned}$$ where $U(P_n,\bm{v}_0,\Psi_n) = (P_n K_{h,\bm{v}_0})^{-1} \{K_{h,\bm{v}_0} \Psi_n(\theta)\}$ and $\Psi_n$ is a highly adaptive lasso estimate of $\Psi_0$ obtained from the loss function ([\[eq:lossf\]](#eq:lossf){reference-type="ref" reference="eq:lossf"}). In this formulation, $K_{h,\bm{v}_0}(v) = h^{-r} K\{(v-\bm{v}_0)h^{-1}\}$ with $K(v) = K(v_1,\cdots,v_r)$, where $K(v)$ is a kernel satisfying $\int K(v) dv=1$. Moreover, we assume that $K(v) = \prod_{j=1}^r K_u(v_j)$ is a product kernel defined by a univariate kernel $K_u(.)$ and that $K(v)$ is a $J$-orthogonal kernel, that is, $\int K_u(v) dv=1$ and $\int K_u(v) v^j dv=0$ for $j=1,\cdots,J$. We study the asymptotic behaviour of our estimator relative to $\Psi_0(\bm{v_0},\theta)$ and a kernel-smoothed parameter $$\Psi_{0h}(\bm{v}_0,\theta) = P_0 U(P_0,\bm{v}_0,\Psi_0),$$ where $h$ is a fixed bandwidth. In general, when the nuisance parameter $G_0$ is estimated using data-adaptive regression techniques, the resulting regimen-response curve estimator $\Psi_{nh}(\bm{v}_0,\theta)$ will not be asymptotically linear, which makes inference challenging.
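The localization $K_{h,\bm{v}_0}/(P_n K_{h,\bm{v}_0})$ entering $U(P_n,\bm{v}_0,\Psi_n)$ can be sketched with a simple product kernel. We use the Epanechnikov kernel purely for illustration (the paper allows higher-order $J$-orthogonal kernels), and all names below are ours:

```python
import numpy as np

def local_weights(V, v0, h):
    """Normalized local weights K_{h,v0}(V_i) / (P_n K_{h,v0}) built from a
    product Epanechnikov kernel. Assumes at least one V_i falls within
    bandwidth h of v0, so the empirical normalizer is positive.

    V:  (n, r) conditioning covariates
    v0: (r,) evaluation point
    h:  bandwidth
    """
    u = (V - v0) / h
    ku = np.where(np.abs(u) <= 1, 0.75 * (1 - u ** 2), 0.0)  # univariate K_u
    k = ku.prod(axis=1) / h ** V.shape[1]                    # K_{h,v0}(V_i)
    return k / k.mean()                                      # divide by P_n K_{h,v0}

# The smoothed curve estimate is then the weighted empirical mean:
# Psi_nh(v0, theta) = mean( local_weights(V, v0, h) * Psi_n(theta, V_i) ).
V = np.array([[0.0], [0.5], [2.0]])
w = local_weights(V, np.array([0.0]), 1.0)
print(w)  # the point far from v0 receives weight zero; weights average to one
```

Shrinking $h$ narrows the stratum around $\bm{v}_0$, which is the bias-variance trade-off that the undersmoothing theory below controls.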
For example, considering the target parameter $\Psi_{0h}(\bm{v}_0,\theta)$, we have $$\begin{aligned} \Psi_{nh}(\bm{v}_0,\theta)-\Psi_{0h}(\bm{v}_0,\theta) =& (P_n-P_0) U(P_0,\bm{v}_0,\Psi_0) + \nonumber \\ &P_0\{U(P_n,\bm{v}_0,\Psi_n)-U(P_0,\bm{v}_0,\Psi_0)\} + \nonumber\\ &(P_n-P_0) \{U(P_n,\bm{v}_0,\Psi_n)-U(P_0,\bm{v}_0,\Psi_0)\}. \label{eq:aslin} \end{aligned}$$ The first term on the right-hand side of ([\[eq:aslin\]](#eq:aslin){reference-type="ref" reference="eq:aslin"}) is exactly linear and the third term is negligible under a Donsker condition on $\Psi_n$. The problem arises because data-adaptive regression techniques have a rate of convergence slower than the desired root-$n$ rate, and thus, the bias term $P_0\{U(P_n,\bm{v}_0,\Psi_n)-U(P_0,\bm{v}_0,\Psi_0)\}$ need not vanish at the root-$n$ rate. ## The intuition We show that when the nuisance parameters are properly undersmoothed, the asymptotic linearity of our estimator $\Psi_{nh}(\bm{v}_0,\theta)$ can be recovered. Specifically, in the proof of Theorem [Theorem 2](#th:movh){reference-type="ref" reference="th:movh"}, we show that $$\begin{aligned} P_0\{U(P_n,\bm{v}_0,\Psi_n)-U(P_0,\bm{v}_0,\Psi_0)\} = &(P_n-P_0)\tilde D^*_{h,\bm{v_0}}(P_0)+o_p(n^{-1/2}) \nonumber\\ &\hspace{.2in} +P_n \frac{K_{h,\bm{v}_0}\Delta^c I( A= d^\theta)}{G_nP_n K_{h,\bm{v}_0}} \left\{I(Y\geq t)-\Psi_n \right\} \label{eq:scorepsi}\\ &\hspace{.2in} + P_n \frac{K_{h,\bm{v}_0}I( A= d^\theta)}{G_n P_n K_{h,\bm{v}_0}}(Q_n-\Psi_n)\left(\Delta^c-G_{n}^c \right) \label{eq:scoredelta}\\ &\hspace{.2in} + P_n \frac{K_{h,\bm{v}_0 }(Q_n-\Psi_n)}{G_{n}^a P_n K_{h,\bm{v}_0}} \left\{I( A= d^\theta)-G_{n}^a\right\}, \label{eq:scorea}\end{aligned}$$ where $\tilde D^*_{h,\bm{v_0}}(P_0)$ includes certain components of the canonical gradient of our statistical parameter $\Psi_{0h}(\bm{v_0},\theta)$ and thus contributes to the efficient influence function of our estimator.
The last three terms do not converge to zero at the appropriate rate and thus induce non-negligible bias. However, the terms ([\[eq:scorepsi\]](#eq:scorepsi){reference-type="ref" reference="eq:scorepsi"}), ([\[eq:scoredelta\]](#eq:scoredelta){reference-type="ref" reference="eq:scoredelta"}) and ([\[eq:scorea\]](#eq:scorea){reference-type="ref" reference="eq:scorea"}) resemble the form of score equations for $I(Y\geq t)$, $\Delta^c$ and $I( A= d^\theta)$. We first discuss the score equations for $G^.$ where $. \in \{a,c\}$. We can generate a score $S_g(G_n^.)$ for a nonparametric estimator $G_n^.$ through paths $\{G_{n,\epsilon}^{.g}(w)\}$ such that $$\begin{aligned} \text{logit }G_{n,\epsilon}^{.g}(w) &= \text{logit }G_n^.(0) \{1+\epsilon g(0)\} + \sum_{s \subset\{1,\ldots,d\}} \int_{0_s}^{w_s} \phi_{s,u_s}(w) \{1+\epsilon g(s,u_s)\} d \text{logit }G_{n,s}^.(u),\end{aligned}$$ where $g$ is a uniformly bounded function on $[0,\tau]^p$. Accordingly, when a function is estimated using the highly adaptive lasso, the score functions can be generated by a path $\{1+\epsilon g(s,j)\} \beta_{n,s,j}$ for a bounded vector $g$ as $$\label{eq:score} S_g(G_{n,\lambda_n}^.) = \frac{d}{d \text{logit }G^._{n,\lambda_n}} L( \text{logit }G^._{n,\lambda_n}) \left\{ \sum_{(s,j)} g(s,j) \beta_{n,s,j} \phi_{s,j} \right\},$$ where $L(\cdot)$ is the log-likelihood loss function. For example, for the treatment indicator, $L(G^a) = A \log \left(\frac{G^a}{1-G^a}\right) +\log(1-G^a)$. The penalty in the highly adaptive lasso imposes a restriction on the $g$ function. Specifically, the path $\{1+\epsilon g(s,j)\} \beta_{n,s,j}$ can be generated using those $g$ functions that do not change the $L_1$-norm of the parameters. Let $\beta^g_{n,s,j}=\{1+\epsilon g(s,j)\} \beta_{n,s,j}$ denote the set of perturbed parameters.
Then, for small enough $\epsilon$ such that $\{1+\epsilon g(s,j)\}>0$, $\|\beta^g_{n,s,j}\|_1= \|\beta_{n,s,j}\|_1 \{1+\epsilon g(s,j)\} = \|\beta_{n,s,j}\|_1 + \epsilon g(s,j) \|\beta_{n,s,j}\|_1$. Hence, the restriction is satisfied when the inner product of $g$ and the vector $|\beta|$ is zero (i.e., $\langle g, |\beta|\rangle=0$). We now provide a thought experiment on how undersmoothing the nuisance function estimates using a highly adaptive lasso can eliminate the terms ([\[eq:scorepsi\]](#eq:scorepsi){reference-type="ref" reference="eq:scorepsi"}), ([\[eq:scoredelta\]](#eq:scoredelta){reference-type="ref" reference="eq:scoredelta"}) and ([\[eq:scorea\]](#eq:scorea){reference-type="ref" reference="eq:scorea"}). We first ignore the $L_1$-norm restriction on the choice of function $g$, which is not valid but conveys the main idea of the approach. Without the restriction, one can choose $g(s_0,j_0) = I(s=s_0,j=j_0)$. The latter choice perturbs one coefficient at a time and corresponds to maximum likelihood estimators. Then, for any $s_0$ and $j_0$, the score function ([\[eq:score\]](#eq:score){reference-type="ref" reference="eq:score"}) becomes $$S_I(G_{n,\lambda_n}^.) = \frac{d}{d \text{logit }G^._{n,\lambda_n}} L( \text{logit }G^._{n,\lambda_n}) \left( \beta_{n,s_0,j_0} \phi_{s_0,j_0} \right).$$ Because $\beta_{n,s_0,j_0}$ is a finite constant for all $s_0$ and $j_0$, solving the above score equation is equivalent to solving $$\frac{d}{d \text{logit }G^._{n,\lambda_n}} L( \text{logit }G^._{n,\lambda_n}) \phi_{s_0,j_0}.$$ For example, for the treatment indicator, the score equation is given by $(A - G_{n,\lambda_n}^a) \phi_{s_0,j_0}$. As we undersmooth the fit, we solve more and more score equations (i.e., the number of score equations increases with the number of features included in the model), and any linear combination of those score equations will also be solved.
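The two facts above — the closed form of the path-wise score and the claim that an unpenalized, maximum-likelihood-type fit solves every per-basis score equation, and hence every linear combination of them — can be checked numerically. The sketch below is a toy illustration only: a plain Newton logistic fit on a small simulated basis matrix stands in for the highly adaptive lasso, and all variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 500, 3
Phi = rng.normal(size=(n, k))          # toy stand-in for the basis phi_{s,j}
A = (rng.random(n) < 1 / (1 + np.exp(-Phi[:, 0]))).astype(float)

def loglik(b):
    """Empirical mean of L(G^a) = A logit(G^a) + log(1 - G^a)."""
    eta = Phi @ b                      # logit G^a
    return np.mean(A * eta - np.log1p(np.exp(eta)))

# 1) The score along the path beta_eps = {1 + eps g(s,j)} beta has closed
#    form (A - G^a) * Phi (g * beta): check by central finite differences.
beta0, g, eps = rng.normal(size=k), rng.normal(size=k), 1e-6
numeric = (loglik((1 + eps * g) * beta0) - loglik((1 - eps * g) * beta0)) / (2 * eps)
G0 = 1 / (1 + np.exp(-(Phi @ beta0)))
closed_form = np.mean((A - G0) * (Phi @ (g * beta0)))
assert np.isclose(numeric, closed_form, atol=1e-6)

# 2) At an unpenalized fit (Newton's method) every per-basis score
#    equation (A - G^a) phi_{s0,j0} is solved exactly ...
beta = np.zeros(k)
for _ in range(25):
    G = 1 / (1 + np.exp(-(Phi @ beta)))
    H = Phi.T @ (Phi * (G * (1 - G))[:, None])
    beta += np.linalg.solve(H, Phi.T @ (A - G))
G = 1 / (1 + np.exp(-(Phi @ beta)))
scores = Phi.T @ (A - G) / n
assert np.max(np.abs(scores)) < 1e-8

# ... and therefore any linear combination of those score equations is solved too.
c = rng.normal(size=k)
assert abs(np.mean((Phi @ c) * (A - G))) < 1e-6
```

In the undersmoothed highly adaptive lasso the per-basis scores are solved only up to $o_p(n^{-1/2})$ rather than exactly, but the linear-combination argument is the same.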
We now consider the term ([\[eq:scorea\]](#eq:scorea){reference-type="ref" reference="eq:scorea"}) and let $f=\frac{K_{h,\bm{v}_0 }(Q_n-\Psi_n)}{G_{n}^a P_n K_{h,\bm{v}_0}}$. Assuming that $Q_0$, $\Psi_0$ and $G_0 = G_0^aG_0^c$ are càdlàg with finite sectional variation norm (Assumption [Assumption 1](#assump:cadlag){reference-type="ref" reference="assump:cadlag"} in Section [4](#sec:theory){reference-type="ref" reference="sec:theory"}), [@gill1993inefficient] showed that $f$ can be approximated as a linear combination of indicator basis functions. Therefore, if we sufficiently undersmooth $G_{n,\lambda}^a$, we will solve ([\[eq:scorea\]](#eq:scorea){reference-type="ref" reference="eq:scorea"}) to within $o_p(n^{-1/2})$. That is, $$P_n \frac{K_{h,\bm{v}_0 }(Q_n-\Psi_n)}{G_{n}^a P_n K_{h,\bm{v}_0}} \left\{I( A= d^\theta)-G_{n}^a\right\}= o_p(n^{-1/2}).$$ The same argument can be made to show that the other terms ([\[eq:scorepsi\]](#eq:scorepsi){reference-type="ref" reference="eq:scorepsi"}) and ([\[eq:scoredelta\]](#eq:scoredelta){reference-type="ref" reference="eq:scoredelta"}) can be made negligible when $\Psi_n$ and $G_{n}^c$ are properly undersmoothed. Hence, the challenging term will be asymptotically linear with $$\begin{aligned} P_0\{U(P_n,\bm{v}_0,\Psi_n)-U(P_0,\bm{v}_0,\Psi_0)\} = &(P_n-P_0)\tilde D^*_{h,\bm{v_0}}(P_0)+o_p(n^{-1/2}). \end{aligned}$$ Now, we study the actual case where the choice of function $g$ is restricted to those that do not change the $L_1$-norm of the coefficients. Lemma [Lemma 6](#lem:ucondition-fixedh){reference-type="ref" reference="lem:ucondition-fixedh"} shows that when ([\[eq:basis3-fixedh\]](#eq:basis3-fixedh){reference-type="ref" reference="eq:basis3-fixedh"}), ([\[eq:basis1-fixedh\]](#eq:basis1-fixedh){reference-type="ref" reference="eq:basis1-fixedh"}) and ([\[eq:basis2-fixedh\]](#eq:basis2-fixedh){reference-type="ref" reference="eq:basis2-fixedh"}) are satisfied, our proposed estimator achieves asymptotic linearity.
As we undersmooth the fit, we start adding features with small coefficients into the model. Conditions ([\[eq:basis3-fixedh\]](#eq:basis3-fixedh){reference-type="ref" reference="eq:basis3-fixedh"}), ([\[eq:basis1-fixedh\]](#eq:basis1-fixedh){reference-type="ref" reference="eq:basis1-fixedh"}) and ([\[eq:basis2-fixedh\]](#eq:basis2-fixedh){reference-type="ref" reference="eq:basis2-fixedh"}) imply that the $L_1$-norm must be increased until the corresponding score equations are solved to a precision of $o_p(n^{-1/2})$. Here, we provide details for the score equations corresponding to the treatment indicator, where $S_g(G_{n,\lambda_n}^a) = (A - G_{n,\lambda_n}^a) \left\{\sum_{(s,j)} g(s,j) \beta_{n,s,j} \phi_{s,j} \right\}$. Let $r(g,G_{n,\lambda_n}^a) = \sum_{(s,j)} g(s,j) \lvert \beta_{n,s,j} \rvert$. For small enough $\epsilon$, $$\begin{aligned} \sum_{(s,j)} \lvert \{1+\epsilon g(s,j)\} \beta_{n,s,j} \rvert &= \sum_{(s,j)} \{1 + \epsilon g(s,j)\} \lvert \beta_{n,s,j} \rvert \\ &=\sum_{(s,j)} \lvert \beta_{n,s,j} \rvert + \epsilon r(g,G_{n,\lambda_n}^a).\end{aligned}$$ Hence, for any $g$ satisfying $r(g,G_{n,\lambda_n}^a)=0$ (i.e., $g$ does not change the $L_1$-norm of the coefficients), we have $P_n S_g(G_{n,\lambda_n}^a) = 0$. Let $D\{f,G_{n,\lambda_n}^a\} = f \cdot (A - G_{n,\lambda_n}^a)$, where $f$ is defined above, and let $\tilde{f}$ be an approximation of $f$ using the basis functions that satisfy condition ([\[eq:basis1-fixedh\]](#eq:basis1-fixedh){reference-type="ref" reference="eq:basis1-fixedh"}). Then, $D\{\tilde{f}, G_{n,\lambda_n}^a\} \in \{S_g(G_{n,\lambda_n}^a): \lVert g \rVert_{\infty} < \infty \}$. Thus, there exists $g^{\star}$ such that $D(\tilde{f}, G_{n,\lambda_n}^a) = S_{g^{\star}} (G_{n,\lambda_n}^a)$; however, for this particular choice of $g^{\star}$, $r(g^{\star},G_{n,\lambda_n}^a)$ may not be zero (i.e., the restriction on $g^{\star}$ might be violated).
Now, define $g$ such that $g(s,j) = g^{\star}(s,j)$ for $(s,j) \neq (s^{\star}, j^{\star})$; $g(s^{\star}, j^{\star})$ is defined such that $$\begin{aligned} \label{eq:restr} \sum_{(s,j) \neq (s^{\star},j^{\star})} g^{\star}(s,j) \lvert \beta_{n,s,j} \rvert + g(s^{\star}, j^{\star}) \lvert \beta_{n, s^{\star}, j^{\star}} \rvert = 0.\end{aligned}$$ That is, $g$ matches $g^{\star}$ everywhere except at a single point $(s^{\star}, j^{\star})$, where it is forced to take a value such that $r(g,G_{n,\lambda_n}^a)=0$. As a result, for such a choice of $g$, $P_n S_{g} (G_{n,\lambda_n}^a) = 0$ by definition. Below, we show that $P_n S_{g}(G^a_{n,\lambda_n}) - P_n D(\tilde{f}, G_{n,\lambda_n}^a) = o_p(n^{-1/2})$ which then implies that $P_n D(\tilde{f}, G_{n,\lambda_n}^a) = o_p(n^{-1/2})$. We note that the choice of $(s^{\star}, j^{\star})$ is inconsequential. $$\begin{aligned} P_n S_{g}(G_{n,\lambda_n}^a) - P_n D\{\tilde{f}, G_{n,\lambda_n}^a\} &= P_n S_{g}(G_{n,\lambda_n}^a) - P_n S_{g^*}(G_{n,\lambda_n}^a) \\ &=P_n \left\{ \frac{d}{d\text{logit }G^a_{n,\lambda_n}} L( \text{logit }G^a_{n,\lambda_n}) \left[\sum_{(s,j)} \left\{g(s,j) - g^{\star}(s,j)\right\} \beta_{n,s,j} \phi_{s,j} \right] \right\} \\ &= P_n \left[\frac{d}{d\text{logit }G^a_{n,\lambda_n}} L( \text{logit }G^a_{n,\lambda_n}) \left\{ g(s^{\star},j^{\star}) - g^{\star}(s^{\star},j^{\star})\right\} \beta_{n,s^{\star},j^{\star}} \phi_{s^{\star},j^{\star}} \right] \\ &=O_p \left(P_n \left[\frac{d}{d\text{logit }G^a_{n,\lambda_n}} L( \text{logit }G^a_{n,\lambda_n})\phi_{s^{\star},j^{\star}} \right] \right) \\ & = o_p(n^{-1/2}).\end{aligned}$$ The details of the fourth equality are given in the proof of Lemma [Lemma 6](#lem:ucondition-fixedh){reference-type="ref" reference="lem:ucondition-fixedh"}, and the last equality follows from the assumption that $\min_{(s,j) \in \mathcal{J}_n } \lVert P_n \frac{d}{d\text{logit }G^a_{n,\lambda_n}} L(\text{logit }G^a_{n,\lambda_n}) (\phi_{s,j}) \rVert = o_p(n^{-1/2})$ for
$L(\cdot)$ being log-likelihood loss (i.e., condition ([\[eq:basis1-fixedh\]](#eq:basis1-fixedh){reference-type="ref" reference="eq:basis1-fixedh"})). As $P_n S_{g}(G^a_{n,\lambda_n}) = 0$, it follows that $P_n D(\tilde{f},G^a_{n,\lambda_n}) = o_p(n^{-1/2})$. Using this result, under mild assumptions, we show that $P_n D(f,G^a_{n,\lambda_n})= o_p(n^{-1/2})$, indicating that the term ([\[eq:scorea\]](#eq:scorea){reference-type="ref" reference="eq:scorea"}) will be asymptotically negligible (see the proof of Lemma [Lemma 6](#lem:ucondition-fixedh){reference-type="ref" reference="lem:ucondition-fixedh"} for details).

## The estimator

To improve the finite sample performance of our estimator we propose to estimate the nuisance parameters using cross-fitting  [@klaassen1987consistent; @zheng2011cross; @chernozhukov2017double]. We split the data at random into $B$ mutually exclusive and exhaustive sets of size approximately $n B^{-1}$. Let $P_{n,b}^0$ and $P_{n,b}^1$ denote the empirical distribution of a training and a validation sample, respectively. For a given $\lambda$ and $h$, exclude a single (validation) fold of data and fit the highly adaptive lasso estimator using data from the remaining $(B-1)$ folds; use this model to estimate the nuisance parameters for samples in the holdout (validation) fold. By repeating the process $B$ times, we will have estimates of the nuisance parameters for all sample units.
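The fold logic just described can be sketched as follows. This is a minimal illustration, not the paper's implementation: a plain Newton logistic fit on simulated data stands in for the highly adaptive lasso, and all names ($W$, $A$, `G_hat`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(5)
n, B = 300, 5
W = rng.normal(size=(n, 3))                       # covariates (toy)
A = (rng.random(n) < 1 / (1 + np.exp(-W[:, 0]))).astype(float)

def fit_logistic(X, y, iters=25):
    """Newton logistic fit with intercept; a stand-in for the highly adaptive lasso."""
    X1 = np.column_stack([np.ones(len(y)), X])
    b = np.zeros(X1.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-(X1 @ b)))
        H = X1.T @ (X1 * (p * (1 - p))[:, None]) + 1e-8 * np.eye(X1.shape[1])
        b += np.linalg.solve(H, X1.T @ (y - p))
    return b

# Cross-fitting: for each validation fold, fit on the remaining B-1 folds
# and predict the propensity G^a on the held-out fold, so every sample
# unit receives an out-of-fold nuisance estimate.
folds = np.array_split(rng.permutation(n), B)
G_hat = np.empty(n)
for val_idx in folds:
    train_idx = np.setdiff1d(np.arange(n), val_idx)
    b = fit_logistic(W[train_idx], A[train_idx])
    X1_val = np.column_stack([np.ones(len(val_idx)), W[val_idx]])
    G_hat[val_idx] = 1 / (1 + np.exp(-(X1_val @ b)))

assert G_hat.shape == (n,)
assert np.all((G_hat > 0) & (G_hat < 1))
```

In the actual procedure the same split is reused for $\Psi_{n,\lambda,b}$, $G^a_{n,\lambda,b}$ and $G^c_{n,\lambda,b}$ at each candidate $\lambda$ and $h$.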
Accordingly, we define the cross-fitted IPW estimator $$\begin{aligned} \Psi_{nh}^{\textsuperscript{CF}}(\bm{v}_0,\theta) = B^{-1} \sum_{b=1}^B P_{n,b}^1 U(P_{n,b}^1,\bm{v}_0,\Psi_{n,\lambda,b}) \label{eq:propest} \end{aligned}$$ where $U(P_{n,b}^1,\bm{v}_0,\Psi_{n,\lambda,b}) = (G_{n,\lambda,b} P_{n,b}^1 K_{h,\bm{v}_0})^{-1} \{K_{h,\bm{v}_0} \Delta^c I( A= d^\theta) \Psi_{n,\lambda,b}\}$, and $\Psi_{n,\lambda,b}$ and $G_{n,\lambda,b}$ are highly adaptive lasso estimates of $\Psi_0$ and $G_0$ fitted on the training sample of the b^th^ sample split for a given $\lambda$. The estimator uses a ($J-1$)-orthogonal kernel with bandwidth $h$ centered at $\bm{v_0}$.

## Data-adaptive bandwidth selector {#sec:bandw}

The proposed estimator $\Psi_{nh}(\bm{v}_0,\theta)$ approaches $\Psi_0(\bm{v}_0,\theta)$ as $h_n \rightarrow 0$, at the cost of increasing the variance of the estimator. The mean squared error (MSE) provides an ideal criterion for the bias-variance trade-off. However, because the bias is unknown, the MSE cannot be used directly. Here we propose an approach that circumvents the need to know the bias while targeting the optimal bias-variance trade-off. Let $\sigma_{nh}$ be a consistent estimator of $\sigma_{0} = [E\{D^{*2}_{\bm{v}_0}(P_0)\}]^{1/2}$ where $D^{*}_{\bm{v}_0}(P_0)$ is the efficient influence function for $\Psi_{nh}(\bm{v}_0,\theta)$ (see Theorem [Theorem 2](#th:movh){reference-type="ref" reference="th:movh"}). The goal is to find an $h$ that minimizes the MSE or, equivalently, to set the derivative of the MSE to zero.
That is, $$\{\Psi_{nh}(\bm{v}_0,\theta) - \Psi_0(\bm{v}_0,\theta)\} \frac{\partial }{\partial h} \Psi_{nh}(\bm{v}_0,\theta) + \frac{\sigma_{nh}}{(nh^r)^{1/2}} \frac{\partial }{\partial h}\left\{\sigma_{nh}/(nh^r)^{1/2}\right\}=0.$$ We know that the optimal (up to a constant) bias-variance trade-off implies $\{\Psi_{nh}(\bm{v}_0,\theta) - \Psi_0(\bm{v}_0,\theta)\} \approx \sigma_{nh}/(nh^r)^{1/2}$ [@van2018cv]. Hence, the optimal $h$ will also solve one of the following equations $$\begin{aligned} \frac{\partial }{\partial h}\Psi_{nh}(\bm{v}_0,\theta) \pm \kappa \left(\frac{\partial }{\partial h}\left\{\sigma_{nh}/(nh^r)^{1/2}\right\}\right) =0, \label{eq:hcri}\end{aligned}$$ where $\kappa$ is a positive constant. Consider a scenario where $\Psi_{nh}(\bm{v}_0,\theta)$ shows an increasing trend as $h_n$ approaches zero. This implies that $\Psi_0(\bm{v}_0,\theta)>\Psi_{nh}(\bm{v}_0,\theta)- \kappa \sigma_{nh}/(nh^r)^{1/2}$. We then define our optimal finite sample bandwidth selector as $$\begin{aligned} h_n = \mathop{\mathrm{argmax}}_h \Psi_{nh}(\bm{v}_0,\theta)- \kappa \sigma_{nh}/(nh^r)^{1/2}.\end{aligned}$$ Similarly, when $\Psi_{nh}(\bm{v}_0,\theta)$ shows a decreasing trend as $h_n$ approaches zero, we define $h_n = \mathop{\mathrm{argmin}}_h \Psi_{nh}(\bm{v}_0,\theta)+ \kappa \sigma_{nh}/(nh^r)^{1/2}$. A natural choice for the constant $\kappa$ is the $(1-\alpha)$-quantile of the standard normal distribution (i.e., $\zeta_{1-\alpha}$). Hence, $h_n$ is the bandwidth value that minimizes the difference between the true value $\Psi_0(\bm{v}_0,\theta)$ and the corresponding lower (upper) confidence bound.

## Undersmoothing criteria {#sec:criteria}

The asymptotic linearity results of Theorems [Theorem 2](#th:movh){reference-type="ref" reference="th:movh"} and [Theorem 3](#th:fixedh){reference-type="ref" reference="th:fixedh"} rely on estimating the corresponding nuisance parameters using an undersmoothed highly adaptive lasso.
Specifically, Theorem [Theorem 2](#th:movh){reference-type="ref" reference="th:movh"} requires that

(a) $\left|P_n D_{CAR}^a(P_n,G_n^a,Q_n,\Psi_n,\theta)\right|=o_p\left((nh^r)^{-1/2}\right)$,

(b) $\left|P_n D_{CAR}^c(P_n,G_n,Q_n,\Psi_n,\theta)\right|=o_p\left((nh^r)^{-1/2}\right)$,

(c) $\left| P_n D_{CAR}^\Psi(P_n,G_n,\Psi_n,\theta) \right|= o_p\left((nh^r)^{-1/2}\right)$,

where $D_{CAR}^a(P_n,G_n^a,Q_n,\Psi_n,\theta) = \frac{K_{h,\bm{v}_0}}{P_n K_{h,\bm{v}_0}} (Q_n-\Psi_n)\left(\frac{G_n^a- I( A= d^\theta)}{G_n}\right)$, $D_{CAR}^c(P_n,G_n,Q_n,\Psi_n,\theta) = \frac{K_{h,\bm{v}_0}I( A= d^\theta)}{P_n K_{h,\bm{v}_0}} (Q_n-\Psi_n)\left(\frac{G_n^c-\Delta^c }{G_n}\right)$ and $D_{CAR}^\Psi(P_n,G_n,\Psi_n,\theta)=\frac{K_{h,v}\Delta^c I( A= d^\theta) }{G_n P_n K_{h,v}} \{ \Psi_n -I(Y >t) \}$. Motivated by our theoretical results, we propose the following practical $L_1$-norm bound selection criteria for $G_n^a$, $G_n^c$ and $\Psi_n$: $$\begin{aligned} \lambda_{n,G^a} &= \mathop{\mathrm{argmin}}_{\lambda} \left| B^{-1} \sum_{b=1}^B P_{n,b}^1 D_{CAR}^a(P_{n,b}^1,G_{n,\lambda,b}^a,G_{n,b}^c,Q_{n,b},\Psi_{n,b},\theta)\right|, \label{eq:lamga}\\ \lambda_{n,G^c} &= \mathop{\mathrm{argmin}}_{\lambda} \left| B^{-1} \sum_{b=1}^B P_{n,b}^1 D_{CAR}^c(P_{n,b}^1,G_{n,b}^a,G_{n,\lambda,b}^c,Q_{n,b},\Psi_{n,b},\theta)\right|, \label{eq:lamgc}\\ \lambda_{n,\Psi} &= \min \left\{ \lambda: \left| B^{-1} \sum_{b=1}^B P_{n,b}^1 D_{CAR}^\Psi(P_{n,b}^1,G_{n,b}^a,G_{n,b}^c,\Psi_{n,\lambda,b},\theta) \right| \leq \frac{\sigma_{nh}}{(n \log n)^{1/2}} \right\}, \label{eq:lamm}\end{aligned}$$ where $G_{n,b}^a$, $G_{n,b}^c$, $\Psi_{n,b}$ and $Q_{n,b}$ are cross-validated highly adaptive lasso estimates of the corresponding nuisance parameters with the $L_1$-norm bound based on the global cross-validation selector. To achieve the Gaussian process convergence results in Theorem [Theorem 3](#th:fixedh){reference-type="ref" reference="th:fixedh"} (with a fixed $h$), we would require stronger conditions.
Specifically, the conditions listed above must hold uniformly for any $\bm{v} \in \mathcal{V}$, that is, $\sup_{v \in \mathcal{V}} \left|P_n D_{CAR}^a(P_n,G_n^a,Q_n,\Psi_n,\theta)\right|=o_p(n^{-1/2})$,\ $\sup_{v \in \mathcal{V}} \left|P_n D_{CAR}^c(P_n,G_n,Q_n,\Psi_n,\theta)\right|=o_p(n^{-1/2})$ and $\sup_{v \in \mathcal{V}} \left| P_n D_{CAR}^\Psi(P_n,G_n,\Psi_n,\theta) \right|= o_p(n^{-1/2})$. Accordingly, the practical criteria are given by $$\begin{aligned} \lambda_{n,G^a}^{u} &= \mathop{\mathrm{argmin}}_{\lambda} \sup_{v \in \tilde{\mathcal{V}}}\left| B^{-1} \sum_{b=1}^B P_{n,b}^1 D_{CAR}^a(P_{n,b}^1,G_{n,\lambda,b}^a,G_{n,b}^c,Q_{n,b},\Psi_{n,b},\theta)\right|, \label{eq:lamgua}\\ \lambda_{n,G^c}^{u} &= \mathop{\mathrm{argmin}}_{\lambda} \sup_{v \in \tilde{\mathcal{V}}}\left| B^{-1} \sum_{b=1}^B P_{n,b}^1 D_{CAR}^c(P_{n,b}^1,G_{n,b}^a,G_{n,\lambda,b}^c,Q_{n,b},\Psi_{n,b},\theta)\right|, \label{eq:lamguc}\\ \lambda_{n,\Psi}^{u} &= \min \left\{ \lambda: \sup_{v \in \tilde{\mathcal{V}}} \left| B^{-1} \sum_{b=1}^B P_{n,b}^1 D_{CAR}^\Psi(P_{n,b}^1,G_{n,b}^a,G_{n,b}^c,\Psi_{n,\lambda,b},\theta) \right|\leq \frac{\sigma_{nh}}{(n \log n)^{1/2}} \right\}, \label{eq:lammu}\end{aligned}$$ where $\tilde{\mathcal{V}}=(\tilde{\bm{v}}_{01},\cdots,\tilde{\bm{v}}_{0\alpha})$ includes $\alpha$ sample points of $\mathcal{V}$. The asymptotic linearity result of Theorem [Theorem 3](#th:fixedh){reference-type="ref" reference="th:fixedh"} with a fixed bandwidth $h$ relies on similar undersmoothing criteria as those listed in (a)--(c) but with $h$ being replaced by one, and thus, the same practical undersmoothing criteria as ([\[eq:lamga\]](#eq:lamga){reference-type="ref" reference="eq:lamga"})--([\[eq:lamm\]](#eq:lamm){reference-type="ref" reference="eq:lamm"}) can be considered.
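In practice, the $\mathop{\mathrm{argmin}}$-type criteria amount to evaluating the cross-fitted empirical mean of the relevant $D_{CAR}$ term on a grid of candidate $L_1$-norm bounds and selecting the minimizer. A minimal sketch with toy numbers standing in for the fold-specific values (in a real analysis these would come from refitting the highly adaptive lasso at each candidate $\lambda$):

```python
import numpy as np

rng = np.random.default_rng(6)
lambdas = np.linspace(0.5, 5.0, 20)     # candidate L1-norm bounds (toy grid)
B = 5                                   # number of cross-fitting folds

# Hypothetical fold-by-lambda values of P^1_{n,b} D_CAR: a U-shape in lambda
# (score decreases, then the inverse-propensity variance takes over) plus noise.
d_car = (lambdas - 2.7) ** 2 / 10 + rng.normal(scale=0.01, size=(B, lambdas.size))

criterion = np.abs(d_car.mean(axis=0))  # |B^{-1} sum_b P^1_{n,b} D_CAR| per lambda
lam_selected = lambdas[np.argmin(criterion)]

assert 2.0 < lam_selected < 3.5         # the selector lands near the U's bottom
```

The $\min\{\lambda : \lvert\cdot\rvert \leq \sigma_{nh}/(n\log n)^{1/2}\}$ criterion for $\lambda_{n,\Psi}$ would instead scan the grid upward from the cross-validated bound and stop at the first $\lambda$ meeting the threshold.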
In Theorems [Theorem 2](#th:movh){reference-type="ref" reference="th:movh"} and [Theorem 3](#th:fixedh){reference-type="ref" reference="th:fixedh"}, we show that the proposed estimators are asymptotically efficient when they solve the efficient influence function, that is, $P_n D_{CAR}^\diamond = o_p\left((nh^r)^{-1/2}\right)$ and $P_n D_{CAR}^\diamond = o_p(n^{-1/2})$, respectively, for $\diamond \in \{a,c,\Psi\}$. The $\mathop{\mathrm{argmin}}$ criteria proposed in this section correspond to the most efficient estimators for the given data. Increasing $\lambda$ results in decreasing the empirical mean of the pseudo score functions $\frac{K_{h,\bm{v}_0}}{P_n K_{h,\bm{v}_0}} (Q_n-\Psi_n)\left(G_{n,\lambda}^a- I( A= d^\theta)\right)$ and $\frac{K_{h,\bm{v}_0}I( A= d^\theta)}{P_n K_{h,\bm{v}_0}} (Q_n-\Psi_n)\left(G_{n,\lambda}^c-\Delta^c \right)$ and increasing the variance of $(G_{n,\lambda}^a)^{-1}$ and $(G_{n,\lambda}^c)^{-1}$. At a certain point in the grid of $\lambda$, decreases in the empirical mean of the pseudo score functions are insufficient for satisfying the $\mathop{\mathrm{argmin}}$ conditions, at which point $\left|P_n D_{CAR}^\diamond \right|$ starts increasing on account of $(G_{n,\lambda}^a)^{-1}$ and $(G_{n,\lambda}^c)^{-1}$. Unlike $\left|P_n D_{CAR}^a\right|$ and $\left|P_n D_{CAR}^c\right|$, $\left|P_n D_{CAR}^\Psi\right|$ decreases as $\lambda$ increases, which motivates the undersmoothing selection criteria ([\[eq:lamm\]](#eq:lamm){reference-type="ref" reference="eq:lamm"}) and ([\[eq:lammu\]](#eq:lammu){reference-type="ref" reference="eq:lammu"}).
Specifically, the sectional variation norm bounds $\lambda_{n,\Psi}$ and $\lambda_{n,\Psi}^{u}$ are defined as the smallest values (larger than that chosen by the cross-validation selector) for which the left-hand side of the condition is less than $\sigma_{nh}/(n \log n)^{1/2}$, where $\sigma_{nh}$ is a consistent estimator of $\sigma_{0} = [E\{D^{*2}_{\bm{v}_0}(P_0)\}]^{1/2}$ with $D^{*}_{\bm{v}_0}(P_0)$ being the efficient influence function for $\Psi_{nh}(\bm{v}_0,\theta)$. The cutpoint will lead to a bias that is of smaller magnitude than the standard error, and thus, will not affect the coverage. **Remark 1**. *The variance estimator $\sigma_{nh}^2$ is calculated using the efficient influence function. One can, instead, use a conservative variance estimator obtained by an influence function where the nuisance parameters are assumed to be known. That is $$\begin{aligned} \label{eq:conservative-variance} \sigma_{nh}^2 &= P_n \left\{\frac{K_{h,\bm{v}_0}}{P_n K_{h,\bm{v}_0}}\Psi_n(\theta) - \Psi_{0}(\bm{v}_0,\theta)\right\}^2, \end{aligned}$$ where $\Psi_{0}(\bm{v}_0,\theta)$ can be replaced by its corresponding estimator. The variance estimator ([\[eq:conservative-variance\]](#eq:conservative-variance){reference-type="ref" reference="eq:conservative-variance"}) can also be used in Section [3.4](#sec:bandw){reference-type="ref" reference="sec:bandw"} where we proposed an adaptive bandwidth selector.*

# Theoretical results {#sec:theory}

The asymptotic properties of our estimator rely on the following assumptions. **Assumption 1** (Complexity of nuisance functions).
*The functions $Q_0$, $\Psi_0$, $G_{0}^a$ and $G_{0}^c$ are càdlàg with finite sectional variation norm.* Assumption [Assumption 1](#assump:cadlag){reference-type="ref" reference="assump:cadlag"} characterizes the global smoothness assumption of the true functions, which is much weaker than local smoothness assumptions like those characterized by an assumption utilizing Hölder balls [e.g., @robins2008higher; @robins2017minimax; @mukherjee2017semiparametric]. The assumption facilitates the fast convergence rate of $n^{-1/3} \log(n)^{p/2}$ obtainable by the highly adaptive lasso (regardless of dimensionality $p$) [@van2017generally; @van2017uniform; @bibaut2019fast]. The latter rate is generally faster than the typical minimax rates that appear under differing, but similarly positioned, assumptions regarding function classes in nonparametric estimation. In fact, we expect that almost all true conditional mean models that we have to deal with in real data will satisfy Assumption [Assumption 1](#assump:cadlag){reference-type="ref" reference="assump:cadlag"}. The sectional variation norm is defined in more detail in Appendix [8](#sec:secnorm){reference-type="ref" reference="sec:secnorm"}. **Assumption 2**. *(causal inference and censoring assumptions) [\[assump:basic\]]{#assump:basic label="assump:basic"} For $i=1,\dotsc,n$,*

- **Consistency* of the observed outcome: $T_i = T_i^{A_i}$;*

- **Unconfoundedness* of the treatment and censoring mechanisms: $A_i \perp T_i^a | \mathbf{W}_i,~\forall a \in \{0,1\}$, and $\Delta_i^c \perp T_i \mid A_i, \bm{W}_i$;*

- **Positivity* of the treatment and censoring mechanisms: $\min_{a,\delta} G(A=a,\Delta^c=\delta \mid \mathbf{W} )>\gamma$ for all $\mathbf{W} \in \mathcal{W}$ where $\gamma$ is a small positive constant.*

Assumption [\[assump:basic\]](#assump:basic){reference-type="ref" reference="assump:basic"} lists standard causal identifiability assumptions.
The consistency assumption ensures that a subject's potential outcome is not affected by other subjects' treatment levels and there are no different versions of treatment. Assumption [\[assump:basic\]](#assump:basic){reference-type="ref" reference="assump:basic"}b is a version of the coarsening at random assumption that imposes conditional independence between (1) treatment and potential outcomes given the measured covariates; and (2) censoring indicator and the observed outcome given the treatment and measured covariates. Positivity states that every subject has some chance of receiving treatment level $a$ with censoring status $\delta$ regardless of covariates. Using Assumption [\[assump:basic\]](#assump:basic){reference-type="ref" reference="assump:basic"}, the full data risk of $\Psi$ can be identified using the observed data as $$E_{P_X} \int_\theta \ell_{\theta}(\Psi,x) dF(\theta) = E_{P_0} L_{G_0}(\Psi),$$ where $L_{G_0}(\Psi)$ and $\ell_{\theta}(\Psi,x)$ were defined in Section [2.1](#sec:notation){reference-type="ref" reference="sec:notation"}. Even if Assumptions [\[assump:basic\]](#assump:basic){reference-type="ref" reference="assump:basic"}a-b do not hold, it may still be of interest to estimate $E_{P_X} \int_\theta \ell_{\theta}(\Psi,x) dF(\theta)$ using $P_n L_{G_0}(\Psi)$ as an adjusted measure of association. **Remark 2**. *The censoring at random assumption implies that $C \perp T \mid A,\bm{W}$ on $C<T$, which is weaker than $C \perp T \mid A,\bm{W}$ [@van2003unified Chapter 1]. However, because the additional restriction cannot be identified using the observed data, it will not impose any restriction on the tangent space and thus, the statistical model remains nonparametric.* **Assumption 3** (Undersmoothing).
*The level of undersmoothing satisfies the following conditions:*

- *Fixed $h$: Let ${f}_\phi^\Psi$, ${f}_\phi^a$ and ${f}_\phi^c$ be the projections of $f^\Psi = \Delta^c I( A= d^\theta)/G_0$, $f^a = (Q_0-\Psi_0)/ G_{0}$ and $f^c = I(A=d^\theta)(Q_0-\Psi_0)/ G_{0}$ onto a linear span of basis functions $\phi_{s,j}$ in $L^2(P)$, for $\phi_{s,j}$ satisfying conditions ([\[eq:basis3-fixedh\]](#eq:basis3-fixedh){reference-type="ref" reference="eq:basis3-fixedh"}), ([\[eq:basis1-fixedh\]](#eq:basis1-fixedh){reference-type="ref" reference="eq:basis1-fixedh"}) and ([\[eq:basis2-fixedh\]](#eq:basis2-fixedh){reference-type="ref" reference="eq:basis2-fixedh"}) of Lemma [Lemma 6](#lem:ucondition-fixedh){reference-type="ref" reference="lem:ucondition-fixedh"}. Then, $\lVert f^\Psi - {f}_\phi^\Psi \rVert_{2,\mu} = O_p(n^{-1/4})$, $\lVert f^a - {f}_\phi^a \rVert_{2,\mu} = O_p(n^{-1/4})$ and $\lVert f^c - {f}_\phi^c \rVert_{2,\mu} = O_p(n^{-1/4})$.*

- *Converging $h$: The functions $f_h^\Psi = K_{h,\bm{v}_0}f^\Psi$, $f_h^a= K_{h,\bm{v}_0}f^a$ and $f_h^c= K_{h,\bm{v}_0}f^c$ are such that $\tilde f^\Psi = h^rf_h^\Psi$, $\tilde f^a=h^rf_h^a$ and $\tilde f^c=h^rf_h^c$ are càdlàg with finite sectional variation norm. Let $\tilde{f}_\phi^\Psi$, $\tilde{f}_\phi^a$ and $\tilde{f}_\phi^c$ be projections of $\tilde f^\Psi$, $\tilde f^a$ and $\tilde f^c$ onto the linear span of the basis functions $\phi_{s,j}$ in $L^2(P)$, where $\phi_{s,j}$ satisfies conditions ([\[eq:basis3\]](#eq:basis3){reference-type="ref" reference="eq:basis3"}), ([\[eq:basis1\]](#eq:basis1){reference-type="ref" reference="eq:basis1"}) and ([\[eq:basis2\]](#eq:basis2){reference-type="ref" reference="eq:basis2"}) of Lemma [Lemma 5](#lem:ucondition-movh){reference-type="ref" reference="lem:ucondition-movh"}.
Then, $\lVert \tilde f^\Psi - \tilde{f}_\phi^\Psi \rVert_{2,\mu} = O_p(n^{-1/3})$, $\lVert \tilde f^a - \tilde{f}_\phi^a \rVert_{2,\mu} = O_p(n^{-1/3})$ and $\lVert \tilde f^c - \tilde{f}_\phi^c \rVert_{2,\mu} = O_p(n^{-1/3})$.* *where $\mu$ is an appropriate measure (i.e., the Lebesgue or the counting measure).* Assumption [Assumption 3](#assump:proj){reference-type="ref" reference="assump:proj"} is arguably the most important of the conditions needed to retain the asymptotic linearity of $\Psi_{nh}(\bm{v}_0,\theta)$. Assumption [Assumption 3](#assump:proj){reference-type="ref" reference="assump:proj"} (a) states that when the bandwidth $h$ is fixed and the estimated coarsening mechanisms (i.e., $G^a$ and $G^c$) and $\Psi_n$ are sufficiently undersmoothed, the generated features can approximate the functions $f^a$, $f^c$ and $f^\Psi$ at the required $O_p(n^{-1/4})$ rate. Lemma [Lemma 6](#lem:ucondition-fixedh){reference-type="ref" reference="lem:ucondition-fixedh"} in the supplementary material provides the theoretical undersmoothing conditions required. Assumption [Assumption 3](#assump:proj){reference-type="ref" reference="assump:proj"} (b) corresponds to the case where the bandwidth is allowed to converge to zero and requires conditions similar to those in part (a). Under Assumption [Assumption 1](#assump:cadlag){reference-type="ref" reference="assump:cadlag"}, ${f}_\phi^\Psi$, ${f}_\phi^a$ and ${f}_\phi^c$ (and $\tilde f^\Psi$, $\tilde f^a$ and $\tilde f^c$) will fall into the class of càdlàg functions with finite sectional variation norm. Hence, these functions can be approximated using the highly adaptive lasso generated basis functions at an $n^{-1/3}$ rate (up to a $\log n$ factor). Importantly, the required rate in Assumption [Assumption 3](#assump:proj){reference-type="ref" reference="assump:proj"} (a) is $O_p(n^{-1/4})$, which is slower than the $O_p(n^{-1/3})$ rate obtained by the highly adaptive lasso estimator.
In Remark [Remark 4](#rem:slowrate){reference-type="ref" reference="rem:slowrate"}, we discuss that, under a certain smoothness assumption, the required rate in Assumption [Assumption 3](#assump:proj){reference-type="ref" reference="assump:proj"} (b) can also be relaxed to $O_p(n^{-1/4})$. Consequently, this key assumption is not particularly strong. **Assumption 4**. *There exist constants $\kappa \geq 0$ and $C>0$ such that, for all $l>0$, $pr(0<|\bm{S}^\top{{\theta}}_0|<l) \leq C l^\kappa$.* Assumption [Assumption 4](#assump:margin){reference-type="ref" reference="assump:margin"} is the margin assumption of [@audibert2007fast], which can also be found in [@tsybakov2004optimal]. The assumption is needed to derive the rate of convergence of $\theta_{nh}(\bm{v}) = \mathop{\mathrm{argmin}}_{\theta \in \Theta} \Psi_{nh}(\bm{v},\theta)$ (Theorem [Theorem 4](#th:thetrate){reference-type="ref" reference="th:thetrate"}). The extreme case of $\kappa=\infty$ corresponds to a case where there is a margin around zero, that is, $pr(0<|\bm{S}^\top{{\theta}}_0|<l) =0$. As shown in the proof of Theorem [Theorem 4](#th:thetrate){reference-type="ref" reference="th:thetrate"}, the latter leads to the fastest rate of convergence for $\theta_{nh}(\bm{v})$. **Theorem 1**. *Let $\Psi$ be a functional parameter that is identified as $\Psi_0 = \mathop{\mathrm{argmin}}_{\Psi \in \mathcal{F}_{\lambda}}P_0 L_{G_0}(\Psi)$ where the loss function $L$ is defined in [\[eq:lossf\]](#eq:lossf){reference-type="ref" reference="eq:lossf"}. Assume the functional parameter space $\mathcal{F}_\lambda$ is contained in the class of all $p$-variate càdlàg functions with sectional variation norm bounded by $\lambda$. Let $\Psi_n = \mathop{\mathrm{argmin}}_{\Psi \in \mathcal{F}_{\lambda_n}}P_n L_{G_n}(\Psi)$ where $\lambda_n$ is the cross-validated selector of $\lambda$ and let $d_0(\Psi_n,\Psi_0) = P_0 L_{G_0}(\Psi_n)-P_0 L_{G_0}(\Psi_0)$.
Also assume $\tilde{\mathcal{F}}_{\tilde \lambda} = \{L_{G_0}(\Psi)-L_{G_0}(\Psi_0): \Psi \in \mathcal{F}_{\lambda}\}$ falls into the class of càdlàg functions with sectional variation norm bounded by $\tilde \lambda$. Then, under Assumptions [Assumption 1](#assump:cadlag){reference-type="ref" reference="assump:cadlag"} and [\[assump:basic\]](#assump:basic){reference-type="ref" reference="assump:basic"}, $d_0(\Psi_n,\Psi_0) = O_p(n^{-2/3} (\log n)^{4(p-1)/3})$.* Theorem [Theorem 1](#th:halrate){reference-type="ref" reference="th:halrate"} generalizes the results of [@van2017generally] to settings where the estimation of $\Psi_0$ involves estimation of nuisance parameters $G_0$. The proof of the results requires techniques to handle the additional complexities due to the presence of such nuisance parameters. The theorem relies on Assumption [Assumption 1](#assump:cadlag){reference-type="ref" reference="assump:cadlag"} without the need for undersmoothing the nuisance function estimators. **Theorem 2**. *Let $\Psi_0(\bm{v}_0,\theta)=E \{I(Y^\theta>t) \mid \bm{V}=\bm{v}_0\}$ be $J$-times continuously differentiable at $\bm{v}_0$. Suppose the support of $\mathbf{W}$ is uniformly bounded, i.e., $\mathcal{W} \subseteq [0,\tau]^p$ for some finite constant $\tau$. Let $\Psi_{n,\lambda_{n}^\Psi}$, $G^a_{n,\lambda_{n}^a}$ and $G^c_{n,\lambda_{n}^c}$ be highly adaptive lasso estimators of $\Psi_0$, $G_{0}^a$ and $G_{0}^c$ with $L_{1}$-norm bounds equal to $\lambda_{n}^\Psi$, $\lambda_{n}^a$ and $\lambda_{n}^c$, respectively.
Assuming that*

- *the bandwidth $h$ satisfies $h^r \rightarrow 0$ and $nh^{3r/2} \rightarrow \infty$,*

- *Assumptions [Assumption 1](#assump:cadlag){reference-type="ref" reference="assump:cadlag"} and [\[assump:basic\]](#assump:basic){reference-type="ref" reference="assump:basic"} hold,*

- *Assumption [Assumption 3](#assump:proj){reference-type="ref" reference="assump:proj"} holds,*

*we have $$\Psi_{nh}(\bm{v}_0,\theta)- \Psi_0(\bm{v}_0,\theta) = (P_n-P_0) D_{\bm{v}_0}^*(P_0) + h^J B_0(J,\bm{v}_0)+ o_p\left((nh^r)^{-1/2}\right),$$ where $\Psi_{nh}(\bm{v}_0,\theta)$ is obtained using a ($J-1$)-orthogonal kernel, $G_{n,\lambda_{n}^a,\lambda_{n}^c} =G^a_{n,\lambda_{n}^a}G^c_{n,\lambda_{n}^c}$ and $D_{\bm{v}_0}^*(P_0)$ is the corresponding efficient influence function defined in the proof of the theorem. Moreover, assuming that $nh^{r+2J} \rightarrow 0$, $(nh^r)^{1/2}\{\Psi_{nh}(\bm{v}_0,\theta)- \Psi_0(\bm{v}_0,\theta)\}$ converges to a mean-zero normal random variable.* Theorem [Theorem 2](#th:movh){reference-type="ref" reference="th:movh"} gives conditions under which the proposed kernel estimator $\Psi_{nh}(\bm{v}_0,\theta)$ is consistent for $\Psi_0(\bm{v}_0,\theta)$, and it also gives the corresponding rate of convergence. In general, this result follows if the bandwidth decreases with sample size sufficiently slowly, and if the nuisance function $G$ is estimated sufficiently well. The standard local linear kernel smoothing requires that $nh^{3r} \rightarrow \infty$ to control the variance [@wasserman2006all Chapter 5]. Using the entropy number of the class of càdlàg functions with finite sectional variation norm, we showed that the bandwidth can converge to zero at a much faster rate (i.e., $nh^{3r/2} \rightarrow \infty$) than the standard results. Condition (ii) comprises the standard complexity and causal assumptions.
Condition (iii) indicates the level of undersmoothing of the nuisance parameter estimators required to achieve asymptotic linearity of our estimator. Section [14](#sec:uniform){reference-type="ref" reference="sec:uniform"} in the supplementary material provides uniform convergence results with a rate for our regimen-response curve estimator $\Psi_{nh}(\bm{v}_0,\theta)$ for all $\bm{v}_0 \in \mathcal{V}$.

**Remark 3** (The optimal bandwidth rate). *Theorem [Theorem 2](#th:movh){reference-type="ref" reference="th:movh"} shows that in order to eliminate the bias term, we must undersmooth the kernel such that $nh^{r+2J} \rightarrow 0$ (i.e., $h<n^{-\frac{1}{r+2J}}$). In combination with rate condition (i) in the theorem, we have $n^{-\frac{2}{3r}}<h<n^{-\frac{1}{r+2J}}$. Note that for the latter constraint to hold we must have $J>r/4$. Also, the theoretical undersmoothing conditions in Lemma [Lemma 5](#lem:ucondition-movh){reference-type="ref" reference="lem:ucondition-movh"} require $J>r$. Hence, the constraint implies that the optimal bandwidth rate is $h_n = n^{-\frac{1}{r+2J}}$ with $J>r$.*

**Remark 4** (Assumption [Assumption 3](#assump:proj){reference-type="ref" reference="assump:proj"} (b)). *We can allow a slower rate of convergence of $O_p(n^{-1/4})$ in Assumption [Assumption 3](#assump:proj){reference-type="ref" reference="assump:proj"} (b) (i.e., $\lVert \tilde f^\Psi - \tilde{f}_\phi^\Psi \rVert_{2,\mu} = O_p(n^{-1/4})$, $\lVert \tilde f^a - \tilde{f}_\phi^a \rVert_{2,\mu} = O_p(n^{-1/4})$ and $\lVert \tilde f^c - \tilde{f}_\phi^c \rVert_{2,\mu} = O_p(n^{-1/4})$) at the cost of requiring a higher level of smoothness for $\Psi_0(\bm{v}_0,\theta)=E \{I(Y^\theta>t) \mid \bm{V}=\bm{v}_0\}$. Specifically, the optimal bandwidth rate would be $h_n = n^{-\frac{1}{r+2J}}$ with $J>\frac{5}{2}r$.*

**Theorem 3**. *Let $\Psi_{0h}(\bm{v}_0,\theta)=(P_0 K_{h,\bm{v}_0})^{-1} \{K_{h,\bm{v}_0} \Psi_0(\theta)\}$.
Suppose the support of $\bm{W}$ is uniformly bounded, i.e., $\mathcal{W} \subseteq [0,\tau]^p$ for some finite constant $\tau$. Let $\Psi_{n,\lambda_{n}^\Psi}$, $G^a_{n,\lambda_{n}^a}$ and $G^c_{n,\lambda_{n}^c}$ be highly adaptive lasso estimators of $\Psi_0$, $G_{0}^a$ and $G_{0}^c$ with $L_{1}$-norm bounds equal to $\lambda_{n}^\Psi$, $\lambda_{n}^a$ and $\lambda_{n}^c$, respectively. Under Assumptions [Assumption 1](#assump:cadlag){reference-type="ref" reference="assump:cadlag"}, [\[assump:basic\]](#assump:basic){reference-type="ref" reference="assump:basic"} and [Assumption 3](#assump:proj){reference-type="ref" reference="assump:proj"}, the estimator $\Psi_{nh}(\bm{v}_0,\theta)$ will be asymptotically linear. That is, $$\Psi_{nh}(\bm{v}_0,\theta)- \Psi_{0h}(\bm{v}_0,\theta) = (P_n-P_0) D_{h,\bm{v}_0}^*(P_0) + o_p(n^{-1/2}),$$ where $G_{n,\lambda_{n}^a,\lambda_{n}^c} =G^a_{n,\lambda_{n}^a}G^c_{n,\lambda_{n}^c}$ and $D_{h,\bm{v}_0}^*(P_0)$ is the corresponding efficient influence function defined in the proof of the theorem. Moreover, $\sqrt n \{\Psi_{nh}(\theta) - \Psi_{0h}(\theta)\}$ converges weakly, as a random element of the càdlàg function space endowed with the supremum norm, to a Gaussian process with covariance structure implied by the covariance function $\rho (\bm{v},\bm{v}') = P_0 D^*_{h,\bm{v}}(P_0) D^*_{h,\bm{v}'}(P_0)$.*

Theorem [Theorem 3](#th:fixedh){reference-type="ref" reference="th:fixedh"} strengthens the results of Theorem [Theorem 2](#th:movh){reference-type="ref" reference="th:movh"} when the bandwidth is fixed. Specifically, it shows that for a fixed bandwidth $h>0$, $\sqrt n \{\Psi_{nh}(\theta)- \Psi_{0h}(\theta)\}$ converges to a Gaussian process. This is an important result because it enables us to construct simultaneous confidence intervals for the entire curve $\Psi_{0h}(\bm{v},\theta),$ $\bm{v} \in \mathcal{V}$ and $\theta \in \Theta$.

**Theorem 4**. 
*Let $\theta_0(\bm{v}) = \mathop{\mathrm{argmin}}_{\theta \in \Theta} \Psi_0(\bm{v},\theta)$ and $\theta_{nh}(\bm{v}) = \mathop{\mathrm{argmin}}_{\theta \in \Theta} \Psi_{nh}(\bm{v},\theta)$ where both $\Psi_0$ and $\Psi_{nh}$ are defined in Theorem [Theorem 2](#th:movh){reference-type="ref" reference="th:movh"}. Then, under Assumptions [Assumption 1](#assump:cadlag){reference-type="ref" reference="assump:cadlag"}-[Assumption 4](#assump:margin){reference-type="ref" reference="assump:margin"}, $\|\theta_{nh}(\bm{v}_0)-\theta_0(\bm{v}_0)\|_{2}=O_p\left( n^{\frac{(r-4J)(2\kappa+4)}{(8J+4r)(3\kappa+8)}} \right)$.* Characterizing the minimizer (or in general optimizer) of the estimated regimen-response curve is of interest as it forms an optimal individualized decision rule. Under the margin assumption (i.e., Assumption [Assumption 4](#assump:margin){reference-type="ref" reference="assump:margin"}) and using a novel iterative empirical process theory, in Theorem [Theorem 4](#th:thetrate){reference-type="ref" reference="th:thetrate"}, we derive the convergence rate of a minimizer of a nonparametrically estimated function $\Psi_{nh}(\bm{v},\theta)$. # Simulation Studies We examine the performance of our estimator through two simulation studies. Within these studies, we compare the results of our estimator against theoretical values and random forest based estimation, demonstrating the practical benefits of using undersmoothed highly adaptive lasso based estimation when the data generating mechanism is unknown. In both scenarios $W \sim \text{Unif}(-1, 1)$ and $V \sim \text{Unif}(-1.3, 1.3)$. Additionally $\Delta|W,V \sim \text{Bernoulli}\{\text{expit}(0.5W-0.5V)\}$ and $Y|A,W,V \sim \text{Bernoulli}\{\text{expit}(0.5W + V^2 + 2VW - 2AVW)\}.$ For each scenario we sample $n \in \{240, 480, 960, 2400\}$ observations, applying our estimator for each sample. 
The results displayed below give average effects over 360 replicates of each scenario and sample size pair, where we set $\xi(\theta,\bm{V})=1$. The latter implies that $\Psi_n$ is obtained by solving a set of simpler score equations (compared with $\xi(\theta,\bm{V})=G\{(A,\Delta^c)\mid \bm{V}\}$), which may call for more undersmoothing in order to solve $D_{CAR}^\Psi(P_n,G_n,\Psi_n)$. The fundamental difference between the scenarios comes in the treatment assignment mechanism. In Scenario 1, we have $A \sim \text{Bernoulli}(0.5)$, giving us a randomized trial scenario, whereas in Scenario 2, we have $A \sim \text{Bernoulli}\{\text{expit}(0.7W + 0.5V + 0.5VW)\}.$ The highly adaptive lasso estimate of $G^a(A\mid W,V)$ requires solving fewer score functions in Scenario 1 compared with Scenario 2. In fact, the only score equation that needs to be solved in the former is $P_n\{A-G^a_n(A\mid W,V)\}=0$. Hence, a higher level of undersmoothing is required to solve $D_{CAR}^a(P_n,G_n^a,Q_n,\Psi_n,\theta)$. In general, the simpler the $G\{(A,\Delta^c)\mid W,V\}$, the more undersmoothing is needed to solve $D_{CAR}^a(P_n,G_n^a,Q_n,\Psi_n,\theta)$ and $D_{CAR}^c(P_n,G_n^a,Q_n,\Psi_n,\theta)$. In all simulations we target the estimands $\Psi_{0h}({v}_0, \theta)$ and $\Psi_{0}({v}_0, \theta)$ using the estimator $\Psi_{nh}({v}_0, \theta)$, where ${v}_0 = 0.5$, and we consider the bandwidths $h \in n^{-1/3} \times\{0.5,1,1.5,2,2.5,3,4\}$. Recall that the optimal bandwidth rate is $n^{\frac{-1}{2J+r}}$ with $J>r$. In our simulation studies $r=1$ and we set $J=r$, leading to a rate of $n^{-1/3}$, which is faster than the $n^{-1/5}$ obtained by setting $J=2>r$.
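The data-generating mechanisms of the two scenarios described above are simple to reproduce. The sketch below is a minimal illustration in plain NumPy (not the R hal9001 pipeline used for the actual estimates; `simulate` is an illustrative name) that draws one sample from either scenario.

```python
import numpy as np

def expit(x):
    """Inverse logit."""
    return 1.0 / (1.0 + np.exp(-x))

def simulate(n, scenario=1, rng=None):
    """Draw n observations from the simulation data-generating mechanism.

    Scenario 1 is a randomized trial (A ~ Bernoulli(0.5)); Scenario 2 has
    confounded treatment assignment."""
    rng = np.random.default_rng(rng)
    W = rng.uniform(-1.0, 1.0, n)
    V = rng.uniform(-1.3, 1.3, n)
    if scenario == 1:
        A = rng.binomial(1, 0.5, n)
    else:
        A = rng.binomial(1, expit(0.7 * W + 0.5 * V + 0.5 * V * W))
    # Censoring indicator and binary outcome, as specified in the text.
    Delta = rng.binomial(1, expit(0.5 * W - 0.5 * V))
    Y = rng.binomial(1, expit(0.5 * W + V**2 + 2 * V * W - 2 * A * V * W))
    return W, V, A, Delta, Y
```

In the studies, such samples of size $n \in \{240, 480, 960, 2400\}$ would then be passed to the HAL- or RF-based weight estimators.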
The choice of rate matters in cases where the target parameter is $\Psi_0$ and the goal is to show that the scaled biases remain constant and coverage rates are nominal even under the faster rate of $n^{-1/3}$ (Figures [2](#fig:convergh){reference-type="ref" reference="fig:convergh"} and [3](#fig:convergopth){reference-type="ref" reference="fig:convergopth"}). Coverage results are also given for $\Psi_0({v}_0, \theta)$ where for each sample $h_n$ is chosen from the possible bandwidths as described in Section [3.4](#sec:bandw){reference-type="ref" reference="sec:bandw"}. In all cases highly adaptive lasso was implemented using the [R]{.sans-serif} package [hal9001]{.sans-serif} considering quadratic basis functions for up to 2-way interactions between all predictor variables.

![Simulation studies Scenario 1: The target parameter of interest is $\Psi_{0h}$. The plots show the scaled bias and coverage rate when the weight functions are estimated using an undersmoothed highly adaptive lasso (HAL) and a random forest (RF). ](fixedHps3_2.pdf){#fig:fixedh width="100%"}

We estimate the weight functions (i.e., propensity score and censoring indicator models) using an undersmoothed highly adaptive lasso (HAL) and a random forest (RF). In both cases, $\Psi_n$ is a highly adaptive lasso estimate of $\Psi_0$ obtained based on the loss function ([\[eq:lossf\]](#eq:lossf){reference-type="ref" reference="eq:lossf"}). Figure [1](#fig:fixedh){reference-type="ref" reference="fig:fixedh"} shows the scaled bias and the coverage rate of the estimators when the target parameter is $\Psi_{0h}$. The results confirm our theoretical result presented in Theorem [Theorem 3](#th:fixedh){reference-type="ref" reference="th:fixedh"}. The scaled bias of the estimator with undersmoothed HAL is considerably smaller than the one obtained using RF.
Importantly, while the coverage rate of the undersmoothed HAL estimator remains close to the nominal rate of 95%, the coverage rate of the RF based estimator sharply declines as the sample size increases. To assess our theoretical result of Theorem [Theorem 2](#th:movh){reference-type="ref" reference="th:movh"}, we plotted the scaled bias and the coverage rate of the estimators when the target parameter is $\Psi_{0}$. Figure [2](#fig:convergh){reference-type="ref" reference="fig:convergh"} shows that the undersmoothed HAL outperforms RF based estimators. Figures [5](#fig:supfixedh){reference-type="ref" reference="fig:supfixedh"}-[6](#fig:supconvergh){reference-type="ref" reference="fig:supconvergh"} in the Supplementary Material show that similar results hold in Scenario 2. Finally, Figure [3](#fig:convergopth){reference-type="ref" reference="fig:convergopth"} shows that our proposed data-adaptive bandwidth selector performs very well when combined with undersmoothed HAL estimators of the weight functions. Specifically, even with the smaller sample size of $n=240$, our approach results in nearly 95% coverage and, as the sample size increases, it quickly recovers the nominal rate. Figure [4](#fig:psinps1){reference-type="ref" reference="fig:psinps1"} displays $\Psi_{nh}(v_0=0.5,\theta)$ against $\theta$ values. As the sample size increases, the bias decreases across all bandwidth values. Figure [7](#fig:thetrate){reference-type="ref" reference="fig:thetrate"} in the Supplementary Material shows the scaled bias of $\theta_{nh}$ when the weight functions are estimated using an undersmoothed highly adaptive lasso. The plot shows that the scaled bias is relatively constant across different sample sizes, thereby confirming our result in Theorem [Theorem 4](#th:thetrate){reference-type="ref" reference="th:thetrate"} (by setting $J=r=1$ and $\kappa\rightarrow \infty$). ![Simulation studies Scenario 1: The target parameter of interest is $\Psi_{0}$. 
The plots show the scaled bias and coverage rate when the weight functions are estimated using an undersmoothed highly adaptive lasso (HAL) and a random forest (RF).](convergingHps3_2.pdf){#fig:convergh width="100%"}

![Simulation studies: The target parameter of interest is $\Psi_{0}$. The dotted line shows the nominal rate. The optimal bandwidth $h$ is selected using the proposed data adaptive approach. The plots show the coverage rates in Scenarios 1 and 2 where the weight functions are estimated using an undersmoothed highly adaptive lasso (HAL) and a random forest (RF).](coveragehopt.pdf){#fig:convergopth width=".9\\textwidth"}

![Simulation studies Scenario 1: The target parameter of interest is $\Psi_{0}$ (black solid line). The plots show $\Psi_{nh}$ for different bandwidths when the weight functions are estimated using an undersmoothed highly adaptive lasso and $v_0=0.5$.](psinps1.pdf){#fig:psinps1 width="100%"}

# Concluding Remarks

## Multi-stage decision making

A natural and important extension of our work is to learn dynamic treatment regimens in multi-stage problems. In settings with many decision points (e.g., mobile health), it would be beneficial to consider a stochastic class of decision rules rather than a deterministic one [@kennedy2019nonparametric; @luckett2019estimating].

## Score based undersmoothing criteria

The practical undersmoothing criteria proposed in Section [3.5](#sec:criteria){reference-type="ref" reference="sec:criteria"} require calculating the efficient influence function and deriving the relevant $D_{CAR}$ terms. The latter can be challenging and tedious, for example, in multi-stage settings (i.e., time-varying treatment) with many decision points. This motivates the derivation of score based criteria which require only the score of the corresponding nuisance function.
This will, in fact, resemble closely the theoretical undersmoothing conditions listed in Lemmas [Lemma 5](#lem:ucondition-movh){reference-type="ref" reference="lem:ucondition-movh"}, [Lemma 6](#lem:ucondition-fixedh){reference-type="ref" reference="lem:ucondition-fixedh"} and [Lemma 7](#lem:ucondition-uconvergence){reference-type="ref" reference="lem:ucondition-uconvergence"} in the Appendix and Supplementary Material. For example, for the treatment assignment model, we can consider $$\label{eq:score} \lambda_{n,G^a} = \mathop{\mathrm{argmin}}_{\lambda} B^{-1} \sum_{b=1}^B \left[ \sum_{(s,j) \in \mathcal{J}_n} \frac{1}{ \lVert \beta_{n,\lambda,b} \rVert_{L_1}} \bigg\lvert P_{n,b}^1 \tilde S_{s,j}(\phi,G_{n,\lambda,b}) \bigg\rvert \right],$$ in which $\lVert \beta_{n,\lambda} \rVert_{L_1} = \lvert \beta_{n,\lambda,0} \rvert + \sum_{s \subset\{1, \ldots, d\}} \sum_{j=1}^{n} \lvert \beta_{n,\lambda,s,j} \rvert$ is the $L_1$-norm of the coefficients $\beta_{n,\lambda,s,j}$ in the highly adaptive lasso estimator $G_{n,\lambda}$ for a given $\lambda$, and $\tilde S_{s,j}(\phi, G_{n,\lambda,b}) = \phi_{s,j}(W) \{A - G_{n,\lambda,b}(1 \mid W)\}\{G_{n,\lambda,b} (1 \mid W)\}^{-1}$. Note that $\tilde S_{s,j}(\phi, G_{n,\lambda,b})$ involves the score equation for the propensity score model (i.e., $\phi_{s,j}(W) \{A - G_{n,\lambda,b}(1 \mid W)\}$) and the inverse of the estimated propensity score. A more detailed discussion of the score based criteria is provided in Section 4.2 of [@ertefaie2022nonparametric]. However, the theoretical properties of these criteria and their performance in regimen-curve estimation problems are unknown and merit further research.

## Near positivity violation

In general, near positivity violation can negatively impact the performance of causal estimators. In particular, it can lead to increased bias and variance in inverse probability weighted estimators, and undersmoothing may exacerbate the issue.
One possible remedy is to consider truncation, where individuals whose estimated weight functions are smaller than a given cutpoint are removed from the analyses [@crump2009dealing]. There are also other methods to trim the population in an interpretable way [@traskin2011defining; @fogarty2016discrete]. Another possible way to improve the performance of our approach under near positivity violation is to fit a highly adaptive lasso while enforcing the positivity constraints $\min (G^a_n, 1 - G^a_n) > \gamma$ and $\min (G^c_n, 1 - G^c_n) > \gamma$, for a positive constant $\gamma$. One can even make $\gamma$ data dependent (i.e., consider $\gamma_n$). This is an interesting direction from both methodological and practical points of view.

## Variable selection

In our proposed method, we assume that the sets of variables included in the regimen response-curve function and the decision rule (i.e., $\bm{V}$ and $\bm{S}$, respectively) are a priori specified. Particularly, in health research, variable selection in decision making is important as it may reduce the treatment costs and the burden on patients. Moreover, the quality of the constructed rules can be severely hampered by the inclusion of many extraneous variables [@jones2022valid]. Therefore, it will be interesting to systematically investigate how to perform variable selection in the regimen response-curve function and the decision rule and to provide theoretical guarantees.

# Theoretical undersmoothing conditions {#app:theoreticalconditions}

**Lemma 5** (Theoretical undersmoothing conditions when $h_n \rightarrow 0$). *Let $\Psi \equiv \Psi(\theta,\bm{V})$, $G^a \equiv G(P)(A \mid \bm{W})$ and $G^c \equiv G(P)(\Delta^c \mid \bm{W},A)$ denote $E(Y^{d^\theta}\mid V)$, the treatment mechanism and the censoring mechanism, respectively, under an arbitrary distribution $P \in \mathcal{M}$.
Let $\Psi_{n,\lambda_n^\Psi}$, $G^a_{n,\lambda_n^a}$ and $G^c_{n,\lambda_n^c}$ be the highly adaptive lasso estimators of $\Psi$, $G^a$ and $G^c$ using $L_1$-norm bounds $\lambda_n^\Psi$, $\lambda_n^a$ and $\lambda_n^c$, respectively. Choose $\lambda_n^\Psi$, $\lambda_n^a$ and $\lambda_n^c$ such that $$\begin{aligned} \min_{(s,j) \in \mathcal{J}_n^\Psi} {\bigg \Vert} P_n \frac{d}{d \Psi_{n,\lambda_n^\Psi}} L( \Psi_{n,\lambda_n^\Psi}) (\phi_{s,j}) {\bigg \Vert} &= o_p\left(n^{-1/2}h^{r/2}\right), \label{eq:basis3}\\ \min_{(s,j) \in \mathcal{J}_n^a} {\bigg \Vert} P_n \frac{d}{d\text{logit } G_{n,\lambda_n^a}^a} L(\text{logit }G_{n,\lambda_n^a}^a) (\phi_{s,j}) {\bigg \Vert}&= o_p\left(n^{-1/2}h^{r/2}\right), \label{eq:basis1} \\ \min_{(s,j) \in \mathcal{J}_n^c} {\bigg \Vert} P_n \frac{d}{d\text{logit } G_{n,\lambda_n^c}^c} L(\text{logit }G_{n,\lambda_n^c}^c) (\phi_{s,j}) {\bigg \Vert} &= o_p\left(n^{-1/2}h^{r/2}\right), \label{eq:basis2} \end{aligned}$$ where, in ([\[eq:basis3\]](#eq:basis3){reference-type="ref" reference="eq:basis3"}), $L(\cdot)$ is the loss function ([\[eq:lossf\]](#eq:lossf){reference-type="ref" reference="eq:lossf"}), and, in ([\[eq:basis1\]](#eq:basis1){reference-type="ref" reference="eq:basis1"}) and ([\[eq:basis2\]](#eq:basis2){reference-type="ref" reference="eq:basis2"}), $L(\cdot)$ is the log-likelihood loss. Also, $\mathcal{J}_n^.$ is a set of indices for the basis functions with nonzero coefficients in the corresponding model. Let $D^\Psi(f_h^\Psi,\Psi_{n}) = f_h^\Psi \cdot \{I(Y>t) - \Psi_{n}\}$, $D^a(f_h^a,G_{n}^a) = f_h^a \cdot (A - G_{n}^a)$, and $D^c(f_h^c,G_{n}^c) = f_h^c \cdot (\Delta^c - G_{n}^c)$. The functions $f_h^\Psi$, $f_h^a$ and $f_h^c$ are such that $\tilde f^\Psi = h^rf_h^\Psi$, $\tilde f^a=h^rf_h^a$ and $\tilde f^c=h^rf_h^c$ are càdlàg with finite sectional variation norm.
Let $\tilde{f}_\phi^\Psi$, $\tilde{f}_\phi^a$ and $\tilde{f}_\phi^c$ be projections of $\tilde f^\Psi$, $\tilde f^a$ and $\tilde f^c$ onto the linear span of the basis functions $\phi_{s,j}$ in $L^2(P)$, where the $\phi_{s,j}$ satisfy conditions ([\[eq:basis3\]](#eq:basis3){reference-type="ref" reference="eq:basis3"}), ([\[eq:basis1\]](#eq:basis1){reference-type="ref" reference="eq:basis1"}) and ([\[eq:basis2\]](#eq:basis2){reference-type="ref" reference="eq:basis2"}), respectively. Then, under the optimal bandwidth rate $h = n^{\frac{-1}{r+2J}}$ with $J>r$, we have $P_n D^\Psi({f}_h^\Psi,\Psi_{n}) = o_p\left((nh^r)^{-1/2}\right)$, $P_n D^a({f}_h^a,G_{n}^a) = o_p\left((nh^r)^{-1/2}\right)$ and $P_n D^c({f}_h^c,G_{n}^c) = o_p\left((nh^r)^{-1/2}\right)$.*

**Lemma 6** (Theoretical undersmoothing conditions for a fixed bandwidth $h$). *Let $\Psi \equiv \Psi(\theta,\bm{V})$, $G^a \equiv G(P)(A \mid \bm{W})$ and $G^c \equiv G(P)(\Delta^c \mid \bm{W},A)$ denote $E(Y^{d^\theta}\mid V)$, the treatment mechanism and the censoring mechanism, respectively, under an arbitrary distribution $P \in \mathcal{M}$. Let $\Psi_{n,\lambda_n^\Psi}$, $G^a_{n,\lambda_n^a}$ and $G^c_{n,\lambda_n^c}$ be the highly adaptive lasso estimators of $\Psi$, $G^a$ and $G^c$ using $L_1$-norm bounds $\lambda_n^\Psi$, $\lambda_n^a$ and $\lambda_n^c$, respectively.
Choosing $\lambda_n^\Psi$, $\lambda_n^a$ and $\lambda_n^c$ such that $$\begin{aligned} \min_{(s,j) \in \mathcal{J}_n^\Psi} {\bigg \Vert} P_n \frac{d}{d \Psi_{n,\lambda_n^\Psi}} L( \Psi_{n,\lambda_n^\Psi}) (\phi_{s,j}) {\bigg \Vert} &= o_p\left(n^{-1/2}\right), \label{eq:basis3-fixedh}\\ \min_{(s,j) \in \mathcal{J}_n^a} {\bigg \Vert} P_n \frac{d}{d\text{logit } G_{n,\lambda_n^a}^a} L(\text{logit }G_{n,\lambda_n^a}^a) (\phi_{s,j}) {\bigg \Vert}&= o_p\left(n^{-1/2}\right), \label{eq:basis1-fixedh} \\ \min_{(s,j) \in \mathcal{J}_n^c} {\bigg \Vert} P_n \frac{d}{d\text{logit } G_{n,\lambda_n^c}^c} L(\text{logit }G_{n,\lambda_n^c}^c) (\phi_{s,j}) {\bigg \Vert} &= o_p\left(n^{-1/2}\right), \label{eq:basis2-fixedh} \end{aligned}$$ where, in ([\[eq:basis3-fixedh\]](#eq:basis3-fixedh){reference-type="ref" reference="eq:basis3-fixedh"}), $L(\cdot)$ is the loss function ([\[eq:lossf\]](#eq:lossf){reference-type="ref" reference="eq:lossf"}), and, in ([\[eq:basis1-fixedh\]](#eq:basis1-fixedh){reference-type="ref" reference="eq:basis1-fixedh"}) and ([\[eq:basis2-fixedh\]](#eq:basis2-fixedh){reference-type="ref" reference="eq:basis2-fixedh"}), $L(\cdot)$ is log-likelihood loss. Also, $\mathcal{J}_n^.$ is a set of indices for the basis functions with nonzero coefficients in the corresponding model. Let $D^\Psi(f^\Psi,\Psi_{n}) = f^\Psi \cdot \{I(Y>t) - \Psi_{n}\}$, $D^a(f^a,G_{n}^a) = f^a \cdot (A - G_{n}^a)$, and $D^c(f^c,G_{n}^c) = f^c \cdot (\Delta^c - G_{n}^c)$. The functions $f^\Psi$, $f^a$ and $f^c$ are càdlàg with finite sectional variation norm. 
Let $\tilde{f}_\phi^\Psi$, $\tilde{f}_\phi^a$ and $\tilde{f}_\phi^c$ be projections of $\tilde f^\Psi$, $\tilde f^a$ and $\tilde f^c$ onto the linear span of the basis functions $\phi_{s,j}$ in $L^2(P)$, where the $\phi_{s,j}$ satisfy conditions ([\[eq:basis3-fixedh\]](#eq:basis3-fixedh){reference-type="ref" reference="eq:basis3-fixedh"}), ([\[eq:basis1-fixedh\]](#eq:basis1-fixedh){reference-type="ref" reference="eq:basis1-fixedh"}) and ([\[eq:basis2-fixedh\]](#eq:basis2-fixedh){reference-type="ref" reference="eq:basis2-fixedh"}), respectively. Then, we have $P_n D^\Psi({f}^\Psi,\Psi_{n}) = o_p\left(n^{-1/2}\right)$, $P_n D^a({f}^a,G_{n}^a) = o_p\left(n^{-1/2}\right)$ and $P_n D^c({f}^c,G_{n}^c) = o_p\left(n^{-1/2}\right)$.*

**Lemma 7** (Theoretical undersmoothing conditions for uniform convergence with a fixed bandwidth $h$). *Let $\Psi \equiv \Psi(\theta,\bm{V})$, $G^a \equiv G(P)(A \mid \bm{W})$ and $G^c \equiv G(P)(\Delta^c \mid \bm{W},A)$ denote $E(Y^{d^\theta}\mid V)$, the treatment mechanism and the censoring mechanism, respectively, under an arbitrary distribution $P \in \mathcal{M}$. Let $\Psi_{n,\lambda_n^\Psi}$, $G^a_{n,\lambda_n^a}$ and $G^c_{n,\lambda_n^c}$ be the highly adaptive lasso estimators of $\Psi$, $G^a$ and $G^c$ using $L_1$-norm bounds $\lambda_n^\Psi$, $\lambda_n^a$ and $\lambda_n^c$, respectively.
Choose $\lambda_n^\Psi$, $\lambda_n^a$ and $\lambda_n^c$ such that $$\begin{aligned} \min_{(s,j) \in \mathcal{J}_n^\Psi} \sup_{v \in {\mathcal{V}}} {\bigg \Vert} P_n \frac{d}{d \Psi_{n,\lambda_n^\Psi}} L( \Psi_{n,\lambda_n^\Psi}) (\phi_{s,j}) {\bigg \Vert} &= o_p(n^{-1/2}), \label{eq:basis3u}\\ \min_{(s,j) \in \mathcal{J}_n^a} \sup_{v \in {\mathcal{V}}}{\bigg \Vert} P_n \frac{d}{d\text{logit } G_{n,\lambda_n^a}^a} L(\text{logit }G_{n,\lambda_n^a}^a) (\phi_{s,j}) {\bigg \Vert}&= o_p(n^{-1/2}), \label{eq:basis1u} \\ \min_{(s,j) \in \mathcal{J}_n^c} \sup_{v \in {\mathcal{V}}}{\bigg \Vert} P_n \frac{d}{d\text{logit } G_{n,\lambda_n^c}^c} L(\text{logit }G_{n,\lambda_n^c}^c) (\phi_{s,j}) {\bigg \Vert} &= o_p(n^{-1/2}), \label{eq:basis2u} \end{aligned}$$ where, in ([\[eq:basis3u\]](#eq:basis3u){reference-type="ref" reference="eq:basis3u"}), $L(\cdot)$ is the loss function ([\[eq:lossf\]](#eq:lossf){reference-type="ref" reference="eq:lossf"}), and, in ([\[eq:basis1u\]](#eq:basis1u){reference-type="ref" reference="eq:basis1u"}) and ([\[eq:basis2u\]](#eq:basis2u){reference-type="ref" reference="eq:basis2u"}), $L(\cdot)$ is the log-likelihood loss. Also, $\mathcal{J}_n^.$ is a set of indices for the basis functions with nonzero coefficients in the corresponding model. Let $D^\Psi(f^\Psi,\Psi_{n}) = f^\Psi \cdot \{I(Y>t) - \Psi_{n}\}$, $D^a(f^a,G_{n}^a) = f^a \cdot (A - G_{n}^a)$, and $D^c(f^c,G_{n}^c) = f^c \cdot (\Delta^c - G_{n}^c)$. Here, $f^\Psi$, $f^a$ and $f^c$ are càdlàg with finite sectional variation norm, and we let $\tilde{f}^\Psi$, $\tilde{f}^a$ and $\tilde{f}^c$ be projections of $f^\Psi$, $f^a$ and $f^c$ onto the linear span of the basis functions $\phi_{s,j}$ in $L^2(P)$, where the $\phi_{s,j}$ satisfy conditions ([\[eq:basis3u\]](#eq:basis3u){reference-type="ref" reference="eq:basis3u"}), ([\[eq:basis1u\]](#eq:basis1u){reference-type="ref" reference="eq:basis1u"}) and ([\[eq:basis2u\]](#eq:basis2u){reference-type="ref" reference="eq:basis2u"}), respectively.
Assuming $\lVert f^\Psi - \tilde{f}^\Psi \rVert_{2,P_0} = O_p(n^{-1/4})$, $\lVert f^a - \tilde{f}^a \rVert_{2,P_0} = O_p(n^{-1/4})$ and $\lVert f^c - \tilde{f}^c \rVert_{2,P_0} = O_p(n^{-1/4})$, it follows that $P_n D^\Psi(\tilde{f}^\Psi,\Psi_{n}) = o_p(n^{-1/2})$, $P_n D^a(\tilde{f}^a,G_{n}^a) = o_p(n^{-1/2})$ and $P_n D^c(\tilde{f}^c,G_{n}^c) = o_p(n^{-1/2})$, and consequently $P_n D^\Psi({f}^\Psi,\Psi_{n}) = o_p(n^{-1/2})$, $P_n D^a({f}^a,G_{n}^a) = o_p(n^{-1/2})$ and $P_n D^c({f}^c,G_{n}^c) = o_p(n^{-1/2})$.*

# Sectional variation norm {#sec:secnorm}

Suppose that the function of interest falls into a càdlàg class of functions on a $p$-dimensional cube $[0,\tau] \subset \mathbb{R}^p$ with finite sectional variation norm, where $\tau$ is the upper bound of all supports, which is assumed to be finite. The $p$-dimensional cube $[0,\tau]$ can be represented as a union of lower dimensional cubes (i.e., $l$-dimensional with $l\leq p$) plus the origin. That is, $[0,\tau] = \{\cup_s(0,\tau_s]\} \cup \{0\}$, where $\cup_s$ is over all subsets $s$ of $\{1,2,\cdots,p\}$. Denote $\mathcal{D}[0,\tau]$ as the Banach space of $p$-variate real-valued càdlàg functions on a cube $[0,\tau] \subset \mathbb{R}^p$. For each function $f \in \mathcal{D}[0,\tau]$, we define the $s$-th section of $f$ as $f_s(u) = f\{u_1 I(1 \in s),\cdots, u_p I(p \in s)\}$. By definition, $f_s(u)$ varies along the variables in $u_s$ according to $f$ while setting the other variables to zero. We define the sectional variation norm of a given $f$ as $$\lVert f \rVert^{*}_\zeta = \lvert f(0) \rvert +\sum_{s \subset\{1,\ldots,p\}} \int_{0_s}^{\tau_s} \lvert df_s(u) \rvert,$$ where the sum is over all subsets of the coordinates $\{1,\ldots,p\}$. The term $\int_{0_s}^{\tau_s} \lvert df_s(u) \rvert$ denotes the $s$-specific variation norm. To clarify the definition, let's consider the simple case of $p=1$.
In this case, the variation norm of a function $f$ over $[0,\tau]$ is defined as $$\lVert f \rVert^{*}_\zeta = \sup_{Q \in \mathcal{Q}} \sum_{i=0}^{n_q-1} \lvert f(c_{i+1}) - f(c_{i}) \rvert$$ where $c_0=0$, $Q$ is a partition of $[0,\tau]$ of the form $(0,c_1],(c_1,c_2],\cdots,(c_{n_q-1},c_{n_q}=\tau]$ and $\mathcal{Q}$ is the class of all possible partitions. For differentiable functions $f$, the sectional variation norm is $\lVert f \rVert^{*}_\zeta = \int_{0}^{\tau} \lvert f'(u)\rvert du$. For $p=2$, the variation norm of a function $f$ over $[0,\tau]\subset \mathbb{R}^2$ is defined as $$\lVert f \rVert^{*}_\zeta = \sup_{Q \in \mathcal{Q}} \sum_{i=0}^{n_{q_1}-1}\sum_{j=0}^{n_{q_2}-1} \lvert f(u_{i+1},v_{j+1}) - f(u_{i},v_{j+1})- f(u_{i+1},v_{j})+f(u_{i},v_{j})\rvert$$ where $u_0=v_0=0$, $Q$ is a partition of $[0,\tau]$ formed by $(0,u_1],(u_1,u_2],\cdots,(u_{n_{q_1}-1},u_{n_{q_1}}=\tau]$ and $(0,v_1],(v_1,v_2],\cdots,(v_{n_{q_2}-1},v_{n_{q_2}}=\tau]$, and $\mathcal{Q}$ is the class of all possible partitions. In practice, we choose the support points $u_{s,i}$ to be the observed values of $\bm{W}$, denoted as $\tilde w_{s,1},\cdots,\tilde w_{s,n}$. Hence the indicator basis functions will be $\phi_{s,i} (w_s) = I(w_s \geq \tilde w_{s,i})$. For simplicity, let's consider $p=1$; then, in a hypothetical example, the design matrix based on indicator basis functions would be

| $W$ | $I(W\geq 1.45)$ | $I(W\geq 0.84)$ | $I(W\geq 1.0)$ | $I(W\geq 0.16)$ | $\cdots$ |
|----------|-----------------|-----------------|----------------|-----------------|----------|
| 1.45 | 1 | 1 | 1 | 1 | $\cdots$ |
| 0.84 | 0 | 1 | 0 | 1 | $\cdots$ |
| 1.0 | 0 | 1 | 1 | 1 | $\cdots$ |
| 0.16 | 0 | 0 | 0 | 1 | $\cdots$ |
| $\cdots$ | $\cdots$ | $\cdots$ | $\cdots$ | $\cdots$ | $\cdots$ |

The first author was supported by National Institute on Drug Abuse under award number R01DA048764, and National Institute of Neurological Disorders and Stroke under award numbers R33NS120240 and R61NS12024. The third author was supported in part by NIH Grant R01AI074345.
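To make the indicator-basis construction above concrete, the following minimal Python sketch builds the $p=1$ design matrix from the observed values of $W$ (the paper's implementation uses the R package hal9001; `indicator_design` is an illustrative helper, not part of any package).

```python
import numpy as np

def indicator_design(w):
    """Design matrix for the p = 1 indicator basis: entry (i, k) is
    phi_k(w_i) = I(w_i >= w_k), with knots placed at the observed values."""
    w = np.asarray(w, dtype=float)
    return (w[:, None] >= w[None, :]).astype(int)

W = np.array([1.45, 0.84, 1.0, 0.16])
X = indicator_design(W)
# Every diagonal entry is 1 since w_i >= w_i; e.g. the row for W = 0.84
# is [0, 1, 0, 1], clearing only the knots at 0.84 and 0.16.
```

A highly adaptive lasso fit then amounts to an $L_1$-penalized regression of the outcome on the columns of such a matrix (extended to all interaction sections when $p>1$).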
**Supplement to "Nonparametric estimation of a covariate-adjusted counterfactual treatment regimen response curve"**

# Proof of Lemmas [Lemma 5](#lem:ucondition-movh){reference-type="ref" reference="lem:ucondition-movh"}, [Lemma 6](#lem:ucondition-fixedh){reference-type="ref" reference="lem:ucondition-fixedh"} and [Lemma 7](#lem:ucondition-uconvergence){reference-type="ref" reference="lem:ucondition-uconvergence"} {#sec:proofs1}

*Proof of Lemma [Lemma 5](#lem:ucondition-movh){reference-type="ref" reference="lem:ucondition-movh"}.* We first prove the result for the treatment indicator and $G^a$. Let $\tilde f^a = h^rf_h^a$ and $D^a\{\tilde f^a,G^a_{n,\lambda_n}\} = \tilde f^a \cdot (A - G^a_{n,\lambda_n})$, where $\tilde f^a$ is a càdlàg function with a finite sectional variation norm, and let $\tilde{f}_\phi^a$ be an approximation of $\tilde f^a$ using the basis functions $\phi$ that satisfy condition ([\[eq:basis1\]](#eq:basis1){reference-type="ref" reference="eq:basis1"}). $$\begin{aligned} P_n D^a(f_h^a, G^a_{n,\lambda_n}) &= h^{-r}P_n D^a(\tilde f^a, G^a_{n,\lambda_n}) \\ & = h^{-r}P_n D^a(\tilde f^a - \tilde{f}_\phi^a, G^a_{n,\lambda_n}) +h^{-r}P_n D^a( \tilde{f}_\phi^a, G^a_{n,\lambda_n}) \\ & = h^{-r}(P_n-P_0) D^a(\tilde f^a - \tilde{f}_\phi^a, G^a_{n,\lambda_n}) +h^{-r}P_0 D^a(\tilde f^a - \tilde{f}_\phi^a, G^a_{n,\lambda_n})+h^{-r}P_n D^a( \tilde{f}_\phi^a, G^a_{n,\lambda_n}) \\ & = h^{-r} O_p(n^{-1/2} n^{-1/6}) + h^{-r} O_p(n^{-1/3}n^{-1/3})+h^{-r}P_n D^a( \tilde{f}_\phi^a, G^a_{n,\lambda_n}).\end{aligned}$$ The last equality follows from $\|\tilde f^a - \tilde{f}_\phi^a\|_2 = O_p(n^{-1/3})$ and Lemma [Lemma 8](#lem:vw){reference-type="ref" reference="lem:vw"} as $$\begin{aligned} (P_n-P_0) D^a(\tilde f^a - \tilde{f}_\phi^a, G^a_{n,\lambda_n}) &\leq \sup_{ \|\tilde f^a - \tilde{f}_\phi^a\|_2 \leq n^{-1/3}} | (P_n-P_0)D^a(\tilde f^a - \tilde{f}_\phi^a, G^a_{n,\lambda_n}) | \nonumber \\ & =O_p\{ n^{-1/2} \mathcal{E}(n^{-1/3},L^2(P))\} \leq 
O_p(n^{-1/2}n^{-1/6}). \end{aligned}$$ Moreover, by the convergence rate of the highly adaptive lasso estimate (i.e., $\lVert G^a_{0} - G^a_{n,\lambda_n} \rVert_{2,P_0} = O_p(n^{-1/3})$), $$P_0 D^a(\tilde f^a - \tilde{f}_\phi^a, G^a_{n,\lambda_n}) = P_0 (\tilde f^a - \tilde{f}_\phi^a) (G_0^a-G^a_{n,\lambda_n}) =O_p(n^{-1/3}) O_p(n^{-1/3}).$$ Therefore, the proof is complete if we show $$h^{-r} O_p(n^{-1/2} n^{-1/6}) + h^{-r} O_p(n^{-1/3}n^{-1/3})+h^{-r}P_n D^a( \tilde{f}_\phi^a, G^a_{n,\lambda_n}) = o_p\left((nh^r)^{-1/2}\right).$$ Considering the optimal bandwidth rate of $h^r = n^{\frac{-r}{r+2J}}$, the first term satisfies the desired rate when $h^{-r} n^{-1/2} n^{-1/6} < n^{-1/2} h^{-r/2}$, which implies $\frac{-1}{6} < \frac{-r}{2r+4J}$. The inequality is satisfied for $J>r$. We can similarly show that $h^{-r} O_p(n^{-2/3}) = o_p\left((nh^r)^{-1/2}\right)$ for $J>r$. Hence, for $J>r$, $$P_n D^a(f_h^a, G^a_{n,\lambda_n}) = o_p\left((nh^r)^{-1/2}\right) + h^{-r}P_n D^a( \tilde{f}_\phi^a, G^a_{n,\lambda_n}).$$ In the following we show that $h^{-r}P_n D^a( \tilde{f}_\phi^a, G^a_{n,\lambda_n})$ is also of the order $o_p\left((nh^r)^{-1/2}\right)$. For simplicity of notation, let $G^{a\dagger} = \text{logit }G^a$. Define a set of score functions generated by a path $\{1+\epsilon g(s,j)\} \beta_{n,s,j}$ for a uniformly bounded vector $g$ as $$S_g(G^a_{n,\lambda_n}) = \frac{d}{dG^{a\dagger}_{n,\lambda_n}} L( G^{a\dagger}_{n,\lambda_n}) \left\{ \sum_{(s,j)} g(s,j) \beta_{n,s,j} \phi_{s,j} \right\}.$$ When $L(\cdot)$ is the log-likelihood loss function, $L(G) = A \log \left(\frac{G}{1-G}\right) +\log(1-G)$. Expressed in terms of the logit, this becomes $L(G^{a\dagger}) = A G^{a\dagger} -\log\{1+\exp(G^{a\dagger})\}$, so that $S_g(G^a) = (A - G^a_{n,\lambda_n}) \left\{\sum_{(s,j)} g(s,j) \beta_{n,s,j} \phi_{s,j} \right\}$. Let $r(g,G^a_{n,\lambda_n}) = \sum_{(s,j)} g(s,j) \lvert \beta_{n,s,j} \rvert$.
For small enough $\epsilon$, $$\begin{aligned} \sum_{(s,j)} \lvert \{1+\epsilon g(s,j)\} \beta_{n,s,j} \rvert &= \sum_{(s,j)} \{1 + \epsilon g(s,j)\} \lvert \beta_{n,s,j} \rvert \\ &=\sum_{(s,j)} \lvert \beta_{n,s,j} \rvert + \epsilon r(g,G_{n,\lambda_n}).\end{aligned}$$ Hence, for any $g$ satisfying $r(g,G_{n,\lambda_n})=0$, we have $P_n S_g(G^a_{n,\lambda_n}) = 0$, because such a path preserves the $L_1$ norm of the coefficients, so the perturbed coefficients remain feasible and, by optimality of the lasso fit, the empirical risk is stationary along the path at $\epsilon=0$. Because $D^a\{\tilde{f}_\phi^a, G^a_{n,\lambda_n}\} \in \{S_g(G^a_{n,\lambda_n}): \lVert g \rVert_{\infty} < \infty \}$, there exists $g^{\star}$ such that $D^a(\tilde{f}_\phi^a, G^a_{n,\lambda_n}) = S_{g^{\star}} (G^a_{n,\lambda_n})$. However, for this particular choice of $g^{\star}$, $r(g^{\star},G^a_{n,\lambda_n})$ may not be zero. Now, define $g$ such that $g(s,j) = g^{\star}(s,j)$ for $(s,j) \neq (s^{\star}, j^{\star})$; $g(s^{\star}, j^{\star})$ is defined such that $$\begin{aligned} \label{eq:restr} \sum_{(s,j) \neq (s^{\star},j^{\star})} g^{\star}(s,j) \lvert \beta_{n,s,j} \rvert + g(s^{\star}, j^{\star}) \lvert \beta_{n, s^{\star}, j^{\star}} \rvert = 0.\end{aligned}$$ That is, $g$ matches $g^{\star}$ everywhere but for a single point $(s^{\star}, j^{\star})$, where it is forced to take a value such that $r(g,G_{n,\lambda_n})=0$. As a result, for such a choice of $g$, $P_n S_{g} (G^a_{n,\lambda_n}) = 0$ by definition. Below, we show that $P_n S_{g}(G^a_{n,\lambda_n}) - P_n D^a\{\tilde{f}_\phi^a, G^a_{n,\lambda_n}\} = o_p\left(n^{-1/2} h^{r/2}\right)$, which then implies that $h^{-r} P_n D^a\{\tilde{f}_\phi^a, G^a_{n,\lambda_n}\} = o_p\left((nh^r)^{-1/2}\right)$. We note that the choice of $(s^{\star}, j^{\star})$ is inconsequential.
$$\begin{aligned} P_n S_{g}(G^a_{n,\lambda_n}) - P_n D^a\{\tilde{f}_\phi^a, G^a_{n,\lambda_n}\} &= P_n S_{g}(G^a_{n,\lambda_n}) - P_n S_{g^*}(G^a_{n,\lambda_n}) \\ &=P_n \left\{ \frac{d}{dG^{a\dagger}_{n,\lambda_n}} L( G^{a\dagger}_{n,\lambda_n}) \left[\sum_{(s,j)} \left\{g(s,j) - g^{\star}(s,j)\right\} \beta_{n,s,j} \phi_{s,j} \right] \right\} \\ &= P_n \left[\frac{d}{dG^{a\dagger}_{n,\lambda_n}} L( G^{a\dagger}_{n,\lambda_n}) \left\{ g(s^{\star},j^{\star}) - g^{\star}(s^{\star},j^{\star})\right\} \beta_{n,s^{\star},j^{\star}} \phi_{s^{\star},j^{\star}} \right] \\ & = P_n \left[\frac{d}{dG^{a\dagger}_{n,\lambda_n}} L( G^{a\dagger}_{n,\lambda_n}) \kappa(s^{\star},j^{\star}) \phi_{s^{\star},j^{\star}} \right]\\ &= \kappa(s^{\star},j^{\star}) P_n \left[\frac{d}{dG^{a\dagger}_{n,\lambda_n}} L( G^{a\dagger}_{n,\lambda_n}) \phi_{s^{\star},j^{\star}} \right],\end{aligned}$$ where the third equality follows from equation ([\[eq:restr\]](#eq:restr){reference-type="ref" reference="eq:restr"}) above with $$\kappa(s^{\star},j^{\star}) = -\frac{\sum_{(s,j) \neq (s^{\star},j^{\star})} g^{\star}(s,j) \lvert \beta_{n,s,j} \rvert }{\lvert \beta_{n,s^{\star},j^{\star}} \rvert} \beta_{n,s^{\star},j^{\star}} - g^{\star}(s^{\star},j^{\star}) \beta_{n,s^{\star},j^{\star}}.$$ Moreover, $$\left\lvert \kappa(s^{\star},j^{\star}) \right\rvert \leq \sum_{(s,j)} \lvert g^{\star}(s,j) \beta_{n,s,j} \rvert.$$ Assuming $\tilde f_\phi^a$ has finite sectional variation norm, the $L_1$ norm of the coefficients approximating $\tilde f_\phi^a$ will be finite, which implies that $\sum_{(s,j)} \lvert g^{\star}(s,j) \beta_{n,s,j} \rvert$ is finite, and thus $\lvert \kappa(s^{\star},j^{\star}) \rvert = O_p(1)$.
Then, $$\begin{aligned} P_n S_{g}(G^a_{n,\lambda_n}) - P_n D^a(\tilde{f}_\phi^a, G^a_{n,\lambda_n}) &= O_p \left(P_n \left[\frac{d}{dG^{a\dagger}_{n,\lambda_n}} L( G^{a\dagger}_{n,\lambda_n})\phi_{s^{\star},j^{\star}} \right] \right) \\ & = o_p\left(n^{-1/2}h^{r/2}\right),\end{aligned}$$ where the last equality follows from the assumption that $\min_{(s,j) \in \mathcal{J}_n } \lVert P_n \frac{d}{dG^{a\dagger}_{n,\lambda_n}} L(G^{a\dagger}_{n,\lambda_n}) (\phi_{s,j}) \rVert = o_p\left(n^{-1/2}h^{r/2}\right)$ for $L(\cdot)$ being the log-likelihood loss. As $P_n S_{{g}}(G^a_{n,\lambda_n}) = 0$, it follows that $P_n D^a(\tilde{f}_\phi^a,G^a_{n,\lambda_n}) = o_p\left(n^{-1/2}h^{r/2}\right)$ and thus $h^{-r}P_n D^a(\tilde{f}_\phi^a,G^a_{n,\lambda_n}) = o_p\left((nh^r)^{-1/2}\right)$.\ We just showed that $P_n D^a(f_h^a,G^a_{n,\lambda_n}) = o_p\left((nh^r)^{-1/2}\right)$. Similarly we can show that $P_n D^c(f_h^c,G^c_{n,\lambda_n}) = o_p\left((nh^r)^{-1/2}\right)$ and $P_n D^\Psi(f_h^\Psi,\Psi_{n}) = o_p\left((nh^r)^{-1/2}\right)$, which completes the proof. ◻ *Proof of Lemma [Lemma 6](#lem:ucondition-fixedh){reference-type="ref" reference="lem:ucondition-fixedh"}.* The proof follows from the proof of Lemma [Lemma 5](#lem:ucondition-movh){reference-type="ref" reference="lem:ucondition-movh"} with the bandwidth $h$ replaced by a constant value (e.g., one). ◻ *Proof of Lemma [Lemma 7](#lem:ucondition-uconvergence){reference-type="ref" reference="lem:ucondition-uconvergence"}.* We first prove the result for the treatment indicator and $G^a$.
$$\begin{aligned} \sup_{v \in \mathcal{V}} \left| P_n D^a( f^a, G^a_{n,\lambda_n}) \right| &\leq \sup_{v \in \mathcal{V}}\left|P_n D^a( f^a - \tilde{f}_\phi^a, G^a_{n,\lambda_n})\right| +\sup_{v \in \mathcal{V}}\left|P_n D^a( \tilde{f}_\phi^a, G^a_{n,\lambda_n})\right| \\ & \leq \sup_{v \in \mathcal{V}} \left|(P_n-P_0) D^a( f^a - \tilde{f}_\phi^a, G^a_{n,\lambda_n})\right| +\sup_{v \in \mathcal{V}} \left|P_0 D^a( f^a - \tilde{f}_\phi^a, G^a_{n,\lambda_n})\right|\\ &+\sup_{v \in \mathcal{V}} \left|P_n D^a( \tilde{f}_\phi^a, G^a_{n,\lambda_n})\right| \\ & = O_p(n^{-1/2} n^{-1/6}) + O_p(n^{-1/3}n^{-1/3})+\sup_{v \in \mathcal{V}} \left|P_n D^a( \tilde{f}_\phi^a, G^a_{n,\lambda_n})\right|.\end{aligned}$$ The last equality follows from $\sup_{v \in \mathcal{V}} \|\tilde f^a - \tilde{f}_\phi^a\|_2 = O_p(n^{-1/3})$ and Lemma [Lemma 8](#lem:vw){reference-type="ref" reference="lem:vw"} as $$\begin{aligned} \sup_{v \in \mathcal{V}} (P_n-P_0) D^a(\tilde f^a - \tilde{f}_\phi^a, G^a_{n,\lambda_n}) &\leq \sup_{ \substack{ v \in \mathcal{V} \\ \|\tilde f^a - \tilde{f}_\phi^a\|_2 \leq n^{-1/3}}} | (P_n-P_0)D^a(\tilde f^a - \tilde{f}_\phi^a, G^a_{n,\lambda_n}) | \nonumber \\ & =O_p\{ n^{-1/2} \mathcal{E}(n^{-1/3},L^2(P))\} \leq O_p(n^{-1/2}n^{-1/6}). 
\end{aligned}$$ Moreover, by the convergence rate of the highly adaptive lasso estimate (i.e., $\sup_{v \in \mathcal{V}}\lVert G^a_{0} - G^a_{n,\lambda_n} \rVert_{2,P_0} = O_p(n^{-1/3})$), $$\sup_{v \in \mathcal{V}} \left| P_0 D^a(\tilde f^a - \tilde{f}_\phi^a, G^a_{n,\lambda_n}) \right| = \sup_{v \in \mathcal{V}} \left| P_0 (\tilde f^a - \tilde{f}_\phi^a) (G_0^a-G^a_{n,\lambda_n})\right| =O_p(n^{-1/3}) O_p(n^{-1/3}).$$ Therefore, the proof is complete if we show $$O_p(n^{-1/2} n^{-1/6}) + O_p(n^{-1/3}n^{-1/3})+\sup_{v \in \mathcal{V}} \left|P_n D^a( \tilde{f}_\phi^a, G^a_{n,\lambda_n})\right| = o_p(n^{-1/2}).$$ Since the first two terms are already $o_p(n^{-1/2})$, this reduces to bounding the last term; that is, $$\sup_{v \in \mathcal{V}} \left| P_n D^a(f^a, G^a_{n,\lambda_n}) \right| = o_p(n^{-1/2}) + \sup_{v \in \mathcal{V}} \left|P_n D^a( \tilde{f}_\phi^a, G^a_{n,\lambda_n})\right|.$$ In the following we show that $\sup_{v \in \mathcal{V}} \left|P_n D^a( \tilde{f}_\phi^a, G^a_{n,\lambda_n})\right|$ is also of the order $o_p(n^{-1/2})$. For simplicity of notation, let $G^{a\dagger} = \text{logit }G^a$. Define a set of score functions generated by a path $\{1+\epsilon g(s,j)\} \beta_{n,s,j}$ for a uniformly bounded vector $g$ as $$S_g(G^a_{n,\lambda_n}) = \frac{d}{dG^{a\dagger}_{n,\lambda_n}} L( G^{a\dagger}_{n,\lambda_n}) \left\{ \sum_{(s,j)} g(s,j) \beta_{n,s,j} \phi_{s,j} \right\}.$$ When $L(\cdot)$ is the log-likelihood loss function, $L(G) = A \log \left(\frac{G}{1-G}\right) +\log(1-G)$. In terms of the logit, $L(G^{a\dagger}) = A G^{a\dagger} -\log\{1+\exp(G^{a\dagger})\}$ and $S_g(G^a) = (A - G^a_{n,\lambda_n}) \left\{\sum_{(s,j)} g(s,j) \beta_{n,s,j} \phi_{s,j} \right\}$. Let $r(g,G^a_{n,\lambda_n}) = \sum_{(s,j)} g(s,j) \lvert \beta_{n,s,j} \rvert$.
For small enough $\epsilon$, $$\begin{aligned} \sum_{(s,j)} \lvert \{1+\epsilon g(s,j)\} \beta_{n,s,j} \rvert &= \sum_{(s,j)} \{1 + \epsilon g(s,j)\} \lvert \beta_{n,s,j} \rvert \\ &=\sum_{(s,j)} \lvert \beta_{n,s,j} \rvert + \epsilon r(g,G_{n,\lambda_n}).\end{aligned}$$ Hence, for any $g$ satisfying $r(g,G_{n,\lambda_n})=0$, we have $P_n S_g(G^a_{n,\lambda_n}) = 0$, because such a path preserves the $L_1$ norm of the coefficients, so the perturbed coefficients remain feasible and, by optimality of the lasso fit, the empirical risk is stationary along the path at $\epsilon=0$. Because $D^a\{\tilde{f}_\phi^a, G^a_{n,\lambda_n}\} \in \{S_g(G^a_{n,\lambda_n}): \lVert g \rVert_{\infty} < \infty \}$, there exists $g^{\star}$ such that $D^a(\tilde{f}_\phi^a, G^a_{n,\lambda_n}) = S_{g^{\star}} (G^a_{n,\lambda_n})$. However, for this particular choice of $g^{\star}$, $r(g^{\star},G^a_{n,\lambda_n})$ may not be zero. Now, define $g$ such that $g(s,j) = g^{\star}(s,j)$ for $(s,j) \neq (s^{\star}, j^{\star})$; $g(s^{\star}, j^{\star})$ is defined such that $$\begin{aligned} \label{eq:restr} \sum_{(s,j) \neq (s^{\star},j^{\star})} g^{\star}(s,j) \lvert \beta_{n,s,j} \rvert + g(s^{\star}, j^{\star}) \lvert \beta_{n, s^{\star}, j^{\star}} \rvert = 0.\end{aligned}$$ That is, $g$ matches $g^{\star}$ everywhere but for a single point $(s^{\star}, j^{\star})$, where it is forced to take a value such that $r(g,G_{n,\lambda_n})=0$. As a result, for such a choice of $g$, $P_n S_{g} (G^a_{n,\lambda_n}) = 0$ by definition. Below, we show that $P_n S_{g}(G^a_{n,\lambda_n}) - P_n D^a\{\tilde{f}_\phi^a, G^a_{n,\lambda_n}\} = o_p(n^{-1/2} )$, which then implies that $P_n D^a\{\tilde{f}_\phi^a, G^a_{n,\lambda_n}\} = o_p(n^{-1/2})$. We note that the choice of $(s^{\star}, j^{\star})$ is inconsequential.
$$\begin{aligned} P_n S_{g}(G^a_{n,\lambda_n}) - P_n D^a\{\tilde{f}_\phi^a, G^a_{n,\lambda_n}\} &= P_n S_{g}(G^a_{n,\lambda_n}) - P_n S_{g^*}(G^a_{n,\lambda_n}) \\ &=P_n \left\{ \frac{d}{dG^{a\dagger}_{n,\lambda_n}} L( G^{a\dagger}_{n,\lambda_n}) \left[\sum_{(s,j)} \left\{g(s,j) - g^{\star}(s,j)\right\} \beta_{n,s,j} \phi_{s,j} \right] \right\} \\ &= P_n \left[\frac{d}{dG^{a\dagger}_{n,\lambda_n}} L( G^{a\dagger}_{n,\lambda_n}) \left\{ g(s^{\star},j^{\star}) - g^{\star}(s^{\star},j^{\star})\right\} \beta_{n,s^{\star},j^{\star}} \phi_{s^{\star},j^{\star}} \right] \\ & = P_n \left[\frac{d}{dG^{a\dagger}_{n,\lambda_n}} L( G^{a\dagger}_{n,\lambda_n}) \kappa(s^{\star},j^{\star}) \phi_{s^{\star},j^{\star}} \right]\\ &= \kappa(s^{\star},j^{\star}) P_n \left[\frac{d}{dG^{a\dagger}_{n,\lambda_n}} L( G^{a\dagger}_{n,\lambda_n}) \phi_{s^{\star},j^{\star}} \right],\end{aligned}$$ where the third equality follows from equation ([\[eq:restr\]](#eq:restr){reference-type="ref" reference="eq:restr"}) above with $$\kappa(s^{\star},j^{\star}) = -\frac{\sum_{(s,j) \neq (s^{\star},j^{\star})} g^{\star}(s,j) \lvert \beta_{n,s,j} \rvert }{\lvert \beta_{n,s^{\star},j^{\star}} \rvert} \beta_{n,s^{\star},j^{\star}} - g^{\star}(s^{\star},j^{\star}) \beta_{n,s^{\star},j^{\star}}.$$ Moreover, $$\left\lvert \kappa(s^{\star},j^{\star}) \right\rvert \leq \sum_{(s,j)} \lvert g^{\star}(s,j) \beta_{n,s,j} \rvert.$$ Assuming $\tilde f_\phi^a$ has finite sectional variation norm, the $L_1$ norm of the coefficients approximating $\tilde f_\phi^a$ will be finite, which implies that $\sum_{(s,j)} \lvert g^{\star}(s,j) \beta_{n,s,j} \rvert$ is finite, and thus $\lvert \kappa(s^{\star},j^{\star}) \rvert = O_p(1)$.
Hence, $$\begin{aligned} \sup_{v \in \mathcal{V}} \left| P_n S_{g}(G^a_{n,\lambda_n}) - P_n D^a(\tilde{f}_\phi^a, G^a_{n,\lambda_n}) \right| &= O_p \left( \sup_{v \in \mathcal{V}} \left| P_n \left[\frac{d}{dG^{a\dagger}_{n,\lambda_n}} L( G^{a\dagger}_{n,\lambda_n})\phi_{s^{\star},j^{\star}} \right] \right|\right) \\ & = o_p(n^{-1/2}),\end{aligned}$$ where the last equality follows from the assumption that $\min_{(s,j) \in \mathcal{J}_n } \sup_{v \in \mathcal{V}} \lVert P_n \frac{d}{dG^{a\dagger}_{n,\lambda_n}} L(G^{a\dagger}_{n,\lambda_n}) (\phi_{s,j}) \rVert = o_p(n^{-1/2})$ for $L(\cdot)$ being the log-likelihood loss. As $P_n S_{{g}}(G^a_{n,\lambda_n}) = 0$, it follows that $P_n D^a(\tilde{f}_\phi^a,G^a_{n,\lambda_n}) = o_p(n^{-1/2})$.\ We just showed that $P_n D^a(f^a,G^a_{n,\lambda_n}) = o_p(n^{-1/2})$. Similarly we can show that $P_n D^c(f^c,G^c_{n,\lambda_n}) = o_p(n^{-1/2})$ and $P_n D^\Psi(f^\Psi,\Psi_{n}) = o_p(n^{-1/2})$, which completes the proof. ◻ # Other Lemmas **Lemma 8** (Lemma 3.4.2 in [@vaart1996weak]). *For a class of functions $\mathcal{F}$ with envelope $\textbf{F} = \sup_{f \in \mathcal{F}} |f(x)|$ bounded from above by $M < \infty$, we have $$E_P \sup_{f \in \mathcal{F}, \|f\|_P \leq \delta} n^{1/2}(P_n-P_0)(f) \leq \mathcal{E}(\delta,\mathcal{F},L^2(P)) \left\{1+\frac{\mathcal{E}(\delta,\mathcal{F},L^2(P))}{\delta^2 n^{1/2}}M\right\}.$$ This implies that $\sup_{f \in \mathcal{F}, \|f\|_P \leq \delta} n^{1/2}(P_n-P_0)(f)$ is bounded in probability by the right-hand side.* **Lemma 9** (Proposition 2 in [@bibaut2019fast]). *Let $p \geq 2$ and $M>0$. Denote by $\mathcal{F}_{p,M}$ the class of càdlàg functions on $[0,1]^p$ with sectional variation norm smaller than $M$. Suppose that $P_0$ is such that, for all $1 \leq b \leq \infty$ and every real-valued function $f$ on $[0,1]^p$, $\| f\|_{P_0,b} = c(b)\| f\|_{\mu,b}$, for some $c(b)>0$, and where $\mu$ is the Lebesgue measure.
Then, for any $0<\delta<1$, the bracketing entropy integral of $\mathcal{F}_{p,M}$ with respect to the $\| .\|_{P_0,b}$ norm is bounded by $$\mathcal{E}_{[]}(\delta,\mathcal{F}_{p,M},\| .\|_{P_0,b}) \leq \{C(b,p) M \delta\}^{1/2} \mid \log(\delta/M) \mid ^{p-1},$$ where $C(b,p)$ is a constant that depends only on $b$ and $p$.* **Lemma 10**. *Consider a sequence of processes $\{n^{1/2}(P_n-P_0) H_{n,h} (f) \}$ where $$H_{n,h} (f) = h^{r/2} K_{h,\bm{v}_0} ( f_n- f_0).$$ Assuming the following:* - *The function $f_0$ belongs to a càdlàg class of functions with finite sectional variation norm.* - *The function $f_n$ is the highly adaptive lasso estimate of $f_0$.* - *The bandwidth $h_n$ satisfies $h^r \rightarrow 0$ and $nh^{3r/2} \rightarrow \infty$.* *Then $n^{1/2}(P_n-P_0) H_{n,h} (f_n) = o_p(1)$.* *Proof.* Let $\tilde K_{h,\bm{v}_0}(v) = h^r K_{h,\bm{v}_0}(v)$. Define $h^{-r/2}\tilde H_{n,h}(f) =H_{n,h}(f)$. The function $\tilde H_n(f)$ belongs to a class of functions $\mathcal{F}_n = \{\tilde H_{n,h}(f) : h\}$. We have $$\begin{aligned} \| \tilde H_n(f_n) \|_{P_0} &\leq \| \tilde K_{h,\bm{v}_0}(v) \|_{P_0} \| ( f_n-f_0 ) \|_{P_0} \\ &= O_p(h^{r/2})O_p(n^{-1/3}), \end{aligned}$$ where the last equality follows from the kernel and the highly adaptive lasso rate of convergence. Using the result of Lemma [Lemma 9](#lem:laan){reference-type="ref" reference="lem:laan"}, $n^{1/2} (P_n-P_0) \tilde H_n(f_n) = O_p(h^{r/4} n^{-1/6})$. Therefore, $$\begin{aligned} n^{1/2}(P_n-P_0) H_n(f_n) = O_p(h^{-r/4}n^{-1/6}).\end{aligned}$$ Thus, to obtain the desired rate we must have $h^{-r/4}n^{-1/6} =o(1)$, which implies that $h^r$ has to converge to zero slower than $n^{-2/3}$ (i.e., $h^r>n^{-2/3}$). Note that under continuity and no smoothness assumptions, the rate for the bandwidth is $h^r=n^{-1/3}$, which is much slower than $n^{-2/3}$.
Thus, under continuity, for every level of smoothness the optimal choice of $h$ will satisfy the constraint $h^r > n^{-2/3}$ (i.e., no additional condition is imposed). ◻ **Lemma 11**. *Let $f_n$ be the highly adaptive lasso estimate of $f_0$, where $f_0$ belongs to a càdlàg class of functions with finite sectional variation norm. Then $$P_n \frac{K_{h,\bm{v}_0}f_0}{P_0 K_{h,\bm{v}_0}} \left(f_0-f_n\right) = P_n \frac{K_{h,\bm{v}_0}f_n}{P_n K_{h,\bm{v}_0}} \left(f_0-f_n\right)+o_p\left((nh^r)^{-1/2}\right).$$* *Proof.* We have $$\begin{aligned} P_n \frac{K_{h,\bm{v}_0}f_0}{P_0 K_{h,\bm{v}_0}} \left(f_0-f_n\right) &= P_n \frac{K_{h,\bm{v}_0}f_n}{P_n K_{h,\bm{v}_0}} \left(f_0-f_n\right) + P_n K_{h,\bm{v}_0} (f_0-f_n) \left( \frac{f_0}{P_0 K_{h,\bm{v}_0}}-\frac{f_n}{P_n K_{h,\bm{v}_0}}\right) \\ &=P_n \frac{K_{h,\bm{v}_0}f_n}{P_n K_{h,\bm{v}_0}} \left(f_0-f_n\right) + (P_n-P_0) K_{h,\bm{v}_0} (f_0-f_n) \left( \frac{f_0}{P_0 K_{h,\bm{v}_0}}-\frac{f_n}{P_n K_{h,\bm{v}_0}}\right)\\ &\hspace{2in}+P_0 K_{h,\bm{v}_0} (f_0-f_n) \left( \frac{f_0}{P_0 K_{h,\bm{v}_0}}-\frac{f_n}{P_n K_{h,\bm{v}_0}}\right)\\ &=P_n \frac{K_{h,\bm{v}_0}f_n}{P_n K_{h,\bm{v}_0}} \left(f_0-f_n\right)+o_p\left((nh^r)^{-1/2}\right).\end{aligned}$$ The last equality follows because $P_0 K_{h,\bm{v}_0} (f_0-f_n) \left( \frac{f_0}{P_0 K_{h,\bm{v}_0}}-\frac{f_n}{P_n K_{h,\bm{v}_0}}\right)=o_p\left((nh^r)^{-1/2}\right)$ by the highly adaptive lasso rate of convergence. ◻ **Lemma 12**. *Let $P_0L_{G_0}(\Psi_n,\Psi_0) \equiv P_0L_{G_0}(\Psi_n) - P_0L_{G_0}(\Psi_0)$, where $L_{G}$ is the weighted squared-loss function defined in ([\[eq:lossf\]](#eq:lossf){reference-type="ref" reference="eq:lossf"}), and let $\mathcal{M}_\lambda=\{ m \in \mathcal{M}: \| m\|_{\nu} < \lambda\}$ be the set of càdlàg functions with variation norm smaller than $\lambda$.
Then, uniformly over $\lambda$, we have $$\left | P_0 \{L_{G_0}(\Psi_n,\Psi_0)-L_{G_n}(\Psi_n,\Psi_0)\}\right | \leq C \left\{ d_{01}(G_n,G_0) \right\}^{1/2} \left\{ P_0L_{G_0}(\Psi_n,\Psi_0) \right\}^{1/2},$$ where $C$ is a positive finite constant.* **Lemma 13**. *Suppose $\|\theta-\theta_0\|_2 = O_p(\delta_n)$ and $f(\theta) = \frac{h^rK_{h,\bm{v}_0} I( A= d^\theta)}{G_0(\theta)}Q_0(\theta)$. Under Assumption [Assumption 4](#assump:margin){reference-type="ref" reference="assump:margin"}, $\|f(\theta)-f(\theta_0)\|_2 = O_p(h^{r/2}\delta_n)$.* *Proof.* We assume that $\bm{S}$ and $Q_0(\theta)$ are bounded uniformly by some constants $C_S$ and $C_Q$. Moreover, by the strong positivity assumption, $\min\{G_0(\theta,\bm{W}),1-G_0(\theta,\bm{W})\}>\gamma$ for all $\bm{W} \in \mathcal{W}$ and $\theta \in \Theta$. By definition the norm can be written as $$\begin{aligned} \|f(\theta)-f(\theta_0)\|_2 &= \left\|h^rK_{h,\bm{v}_0} \left\{\frac{I( A= d^\theta)}{G_0(\theta)}Q_0(\theta)- \frac{I( A= d^{\theta_0})}{G_0(\theta_0)}Q_0(\theta_0) \right\} \right\|_2\\ & \leq \|h^rK_{h,\bm{v}_0} \|_2 \left\|(\gamma)^{-1} \max\{|Q_0(\theta_0)|,|Q_0(\theta)|\} I(d^{\theta} \neq d^{\theta_0})\right\|_2\\ & = \|h^rK_{h,\bm{v}_0} \|_2 \left\|(\gamma)^{-1} \max\{|Q_0(\theta_0)|,|Q_0(\theta)|\} I(d^{\theta} \neq d^{\theta_0}) \{ I(|\bm{S}^\top \theta_0|>t)+I(|\bm{S}^\top \theta_0|<t)\}\right\|_2\\ & \leq \gamma^{-1} C_{Q} \|h^rK_{h,\bm{v}_0} \|_2 \left\{C t^{\kappa/2}+\left\|I(|\bm{S}^\top \theta_0-\bm{S}^\top \theta|>t) \right\|_2\right\}\\ & \leq \gamma^{-1} C_{Q} \|h^rK_{h,\bm{v}_0} \|_2 \left\{ C t^{\kappa/2}+ \frac{\|\bm{S}^\top \theta_0-\bm{S}^\top \theta\|_2}{t}\right\}.
\end{aligned}$$ The fourth inequality follows from Assumption [Assumption 4](#assump:margin){reference-type="ref" reference="assump:margin"}, the Cauchy-Schwarz inequality and $|\bm{S}^\top \theta_0-\bm{S}^\top \theta|>|\bm{S}^\top \theta_0|$. The latter holds because $I(d^\theta \neq d^{\theta_0}) = 1$ when (a) $\bm{S}^\top {{\theta}}>0$ and $\bm{S}^\top \theta_0<0$; or (b) $\bm{S}^\top \theta<0$ and $\bm{S}^\top \theta_0>0$. The former and latter imply that $\bm{S}^\top {\theta}_0-\bm{S}^\top \theta<\bm{S}^\top \theta_0<0$ and $0<\bm{S}^\top \theta_0<\bm{S}^\top \theta_0-\bm{S}^\top \theta$, respectively, and thus, $|\bm{S}^\top \theta_0-\bm{S}^\top \theta|>|\bm{S}^\top \theta_0|$. The fifth inequality holds by Markov's inequality. The upper bound is minimized when $t=C_t \|\bm{S}^\top \theta_0-\bm{S}^\top \theta\|_2^{\frac{1}{\kappa/2+1}}$ for a constant $C_t$ which depends on $\kappa$ and $C$. Hence, $$\begin{aligned} \label{lem:wbound} \|f(\theta)-f(\theta_0)\|_2 &\leq C^{\dagger} \|h^rK_{h,\bm{v}_0} \|_2 \left\{ C \|\bm{S}^\top \theta_0-\bm{S}^\top \theta\|_2^{\frac{\kappa}{\kappa+2}}+ \|\bm{S}^\top \theta_0-\bm{S}^\top \theta\|_2^{\frac{\kappa}{\kappa+2}}\right\}\nonumber\\ &=O_p(h^{r/2}\delta^{\frac{\kappa}{\kappa+2}}).
\end{aligned}$$ Hence, when there is a margin around zero, that is, $pr(0<|\bm{S}^\top{{\theta}}_0|<l) =0$, the right-hand side of ([\[lem:wbound\]](#lem:wbound){reference-type="ref" reference="lem:wbound"}) reduces to $O_p(h^{r/2}\delta)$. ◻ # Proof of Theorem [Theorem 1](#th:halrate){reference-type="ref" reference="th:halrate"}: Rate of convergence of HAL with unknown nuisance parameters {#proof-of-theorem-thhalrate-rate-of-convergence-of-hal-with-unknown-nuisance-parameters} Define $P_0L_{G_0}(\Psi_n,\Psi_0) \equiv P_0L_{G_0}(\Psi_n) - P_0L_{G_0}(\Psi_0)$. Then, using the definition of the loss-based dissimilarity, we have $$\begin{aligned} 0\leq d_0(\Psi_n,\Psi_0 )&=P_0L_{G_0}(\Psi_n,\Psi_0)\\ &=(P_0-P_n) L_{G_0}(\Psi_n,\Psi_0) + P_n L_{G_0}(\Psi_n,\Psi_0) \\ &=O_p(n^{-2/3} (\log n)^{4(p-1)/3})+P_n L_{G_n}(\Psi_n,\Psi_0)+ P_n\{L_{G_0}(\Psi_n,\Psi_0)-L_{G_n}(\Psi_n,\Psi_0)\} \\ &\leq O_p(n^{-2/3} (\log n)^{4(p-1)/3})+(P_n-P_0) \{L_{G_0}(\Psi_n,\Psi_0)-L_{G_n}(\Psi_n,\Psi_0)\} \\ &\hspace{3in} + P_0 \{L_{G_0}(\Psi_n,\Psi_0)-L_{G_n}(\Psi_n,\Psi_0)\} \\ &=P_0 \{L_{G_0}(\Psi_n,\Psi_0)-L_{G_n}(\Psi_n,\Psi_0)\} +O_p(n^{-2/3} (\log n)^{4(p-1)/3}).\end{aligned}$$ The inequality uses $P_n L_{G_n}(\Psi_n,\Psi_0) \leq 0$, which holds because $\Psi_n$ minimizes the empirical risk $P_n L_{G_n}$ (so that $P_n L_{G_n}(\Psi_n) \leq P_n L_{G_n}(\Psi_0)$). Moreover, using Lemma [Lemma 12](#lm:convrate){reference-type="ref" reference="lm:convrate"} we have $$\begin{aligned}
d_0(\Psi_n,\Psi_0 ) &\leq C \left\{ d_{01}(G_n,G_0) \right\}^{1/2} \left\{ d_0(\Psi_n,\Psi_0 ) \right\}^{1/2} + O_p(n^{-2/3} (\log n)^{4(p-1)/3}) \\ &= O_p(n^{-2/3} (\log n)^{4(p-1)/3}) + O_p(n^{-1/3} (\log n)^{2(p-1)/3}) \left\{ d_0(\Psi_n,\Psi_0 ) \right\}^{1/2},\end{aligned}$$ where $C$ is a positive finite constant. The last equality follows from the convergence rate of the HAL minimum loss-based estimator of $G_0$. Let $r_1(n)=O_p(n^{-2/3} (\log n)^{4(p-1)/3})$ and $r_2(n)=O_p(n^{-1/3} (\log n)^{2(p-1)/3})$. Viewing the above as a quadratic inequality in $x = \left\{d_0(\Psi_n,\Psi_0 )\right\}^{1/2}$, namely $x^2 - r_2(n)x - r_1(n) \leq 0$, we obtain $$\left\{d_0(\Psi_n,\Psi_0 )\right\}^{1/2} \leq \frac{ \{4 r_1(n) + r_2^2(n) \}^{1/2} +r_2(n) }{2},$$ which implies that $d_0(\Psi_n,\Psi_0 )=O_p(n^{-2/3} (\log n)^{4(p-1)/3})$. # Proof of Theorem [Theorem 2](#th:movh){reference-type="ref" reference="th:movh"}: Asymptotic linearity of the regimen-response curve estimator {#proof-of-theorem-thmovh-asymptotic-linearity-of-the-regimen-response-curve-estimator} We represent the difference between $\Psi_{nh}(\bm{v}_0,\theta)$ and $\Psi_0(\bm{v}_0,\theta)$ as $$\Psi_{nh}(\bm{v}_0,\theta) - \Psi_0(\bm{v}_0,\theta)= \{ \Psi_{0h}(\bm{v}_0,\theta) - \Psi_0(\bm{v}_0,\theta)\}+\{\Psi_{nh}(\bm{v}_0,\theta) - \Psi_{0h}(\bm{v}_0,\theta)\}.$$ Moreover, we can write $$\begin{aligned} \label{eq:proofth1} \Psi_{nh}(\bm{v}_0,\theta) - \Psi_{0h}(\bm{v}_0,\theta) = &P_n K_{h,\bm{v}_0}\Psi_n \left(\frac{1}{P_n K_{h,\bm{v}_0}}-\frac{1}{P_0 K_{h,\bm{v}_0}} \right) \\ &+ (P_n-P_0)\frac{K_{h,\bm{v}_0}}{P_0 K_{h,\bm{v}_0}}(\Psi_{n}-\Psi_{0}) + (P_n-P_0)\frac{K_{h,\bm{v}_0}}{P_0 K_{h,\bm{v}_0}}(\Psi_{0}) \nonumber\\ &+P_0\frac{K_{h,\bm{v}_0}}{P_0 K_{h,\bm{v}_0}}(\Psi_{n}-\Psi_{0}).
\nonumber\end{aligned}$$ ## Convergence rate of $\Psi_{0h}(\bm{v}_0,\theta) - \Psi_0(\bm{v}_0,\theta)$ We consider a ($J-1$)-orthogonal kernel with bandwidth $h$ centered at $\bm{v}_0$. Then, following Lemma 25.1 in [@van2018targeted], as $h \rightarrow 0$, $$\Psi_{0h}(\bm{v}_0,\theta) - \Psi_0(\bm{v}_0,\theta) = h^{J} B_0(J,\bm{v}_0),$$ where $$B_0(J,\bm{v}_0) = \sum_{\{\eta \in \{0,\cdots,J\}^d: \sum_l \eta_l = J\}} \Psi_0(\bm{v}_0)^\eta \int_s k(s) \frac{\prod_l s_l ^{\eta_l}}{\prod_l \eta_l !} ds.$$ ## Asymptotic behaviour of the first term on the RHS of ([\[eq:proofth1\]](#eq:proofth1){reference-type="ref" reference="eq:proofth1"}) Because $(P_n-P_0) K_{h,\bm{v}_0} = O_p(n^{-1/2} h^{-r/2})$, $$\begin{aligned} P_n K_{h,\bm{v}_0} \Psi_n \left(\frac{1}{P_n K_{h,\bm{v}_0}}-\frac{1}{P_0 K_{h,\bm{v}_0}}\right) =& (P_n-P_0) K_{h,\bm{v}_0} \Psi_n \left(\frac{1}{P_n K_{h,\bm{v}_0}}-\frac{1}{P_0 K_{h,\bm{v}_0}}\right) \\ &+P_0 K_{h,\bm{v}_0} \Psi_n \left(\frac{1}{P_n K_{h,\bm{v}_0}}-\frac{1}{P_0 K_{h,\bm{v}_0}}\right)\\ =&P_0 K_{h,\bm{v}_0} \Psi_n \left(\frac{-(P_n-P_0) K_{h,\bm{v}_0}}{P_n K_{h,\bm{v}_0}P_0 K_{h,\bm{v}_0}}\right)+o_p(n^{-1/2}h^{-r/2})\\ =&P_0 K_{h,\bm{v}_0} \Psi_n \left(\frac{-(P_n-P_0) K_{h,\bm{v}_0}}{P_0^2 K_{h,\bm{v}_0}}\right)\\ &+P_0 K_{h,\bm{v}_0} \Psi_n \left(\frac{-\{(P_n-P_0) K_{h,\bm{v}_0}\}^2}{P_n K_{h,\bm{v}_0}P_0 K_{h,\bm{v}_0}}\right)+o_p(n^{-1/2}h^{-r/2})\\ & =\frac{-(P_n-P_0)K_{h,\bm{v}_0}}{P_0K_{h,\bm{v}_0}} P_0 \frac{K_{h,\bm{v}_0}}{P_0K_{h,\bm{v}_0}}(\Psi_n-\Psi_{0})+\frac{-(P_n-P_0)K_{h,\bm{v}_0}}{P_0K_{h,\bm{v}_0}} \Psi_{0h}\\ &+o_p(n^{-1/2}h^{-r/2})\\ & =\frac{-(P_n-P_0)K_{h,\bm{v}_0}}{P_0K_{h,\bm{v}_0}} \Psi_{0h}+o_p(n^{-1/2}h^{-r/2}).\end{aligned}$$ The second and third equalities follow because $\{(P_n-P_0) K_{h,\bm{v}_0}\}^2=O_p(n^{-1} h^{-r})$, and thus, the corresponding terms are of order $o_p(n^{-1/2}h^{-r/2})$. In the last equality we have also used the consistency of $\Psi_n$. ## Asymptotic negligibility of the second term on the RHS of ([\[eq:proofth1\]](#eq:proofth1){reference-type="ref" reference="eq:proofth1"}) {#sec:th1p2} We will show that the second order term $(P_n-P_0) K_{h,\bm{v}_0} (\Psi_n-\Psi_0) = o_p\left((nh^r)^{-1/2}\right)$. Let $\tilde K_{h,\bm{v}_0}(v) = h^r K_{h,\bm{v}_0}(v)$. Define $H_{n,h}(m) = K_{h,\bm{v}_0} (m-\Psi_0)$ and $h^{-r}\tilde H_{n,h}(m) =H_{n,h}(m)$. The function $\tilde H_n(m)$ belongs to a class of functions $\mathcal{F}_n = \{\tilde H_{n,h}(m) : h\}$. We have $$\begin{aligned} \| \tilde H_n(\Psi_n) \|_{P_0} &\leq \| \tilde K_{h,\bm{v}_0}(v) \|_{P_0} \| ( \Psi_n -\Psi_0) \|_{P_0} \\ &= O_p(h^{r/2})O_p(n^{-1/3}). \end{aligned}$$ The rest follows from Lemma [Lemma 10](#lem:hnneg){reference-type="ref" reference="lem:hnneg"}. Therefore, $(P_n-P_0)\frac{K_{h,\bm{v}_0}}{P_0 K_{h,\bm{v}_0}}(\Psi_{n}-\Psi_{0})=o_p\left((nh^r)^{-1/2}\right)$ when the bandwidth $h_n$ satisfies $h^r \rightarrow 0$ and $nh^{3r/2} \rightarrow \infty$.
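For completeness, the bandwidth condition just stated can be verified by direct arithmetic from the remainder bound $O_p(h^{-r/4}n^{-1/6})$ in Lemma [Lemma 10](#lem:hnneg){reference-type="ref" reference="lem:hnneg"}: $$h^{-r/4}n^{-1/6} = o(1) \iff n^{1/6}h^{r/4} \rightarrow \infty \iff \left(n^{1/6}h^{r/4}\right)^{6} = nh^{3r/2} \rightarrow \infty,$$ which is also equivalent to requiring that $h^r$ converge to zero more slowly than $n^{-2/3}$.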
## Asymptotic behaviour of the fourth term on the RHS of ([\[eq:proofth1\]](#eq:proofth1){reference-type="ref" reference="eq:proofth1"}) {#sec:th1p3} The fourth term can be written as $$\begin{aligned} P_0\frac{K_{h,\bm{v}_0}}{P_0 K_{h,\bm{v}_0}}(\Psi_{n}-\Psi_{0}) =&P_0\frac{K_{h,\bm{v}_0}\Delta^c I( A= d^\theta)}{G_0P_0 K_{h,\bm{v}_0}}(\Psi_{n}-\Psi_{0}) \\ =&P_0\frac{K_{h,\bm{v}_0}\Delta^c I( A= d^\theta)}{G_0P_0 K_{h,\bm{v}_0}}\{\Psi_{n}-I(Y\geq t)\}\\ =&(P_n-P_0)\frac{K_{h,\bm{v}_0}\Delta^c I( A= d^\theta)}{G_0P_0 K_{h,\bm{v}_0}}\{I(Y\geq t)-\Psi_{0}\}\\ &-P_n \frac{K_{h,\bm{v}_0}\Delta^c I( A= d^\theta)}{G_0P_0 K_{h,\bm{v}_0}} \{I(Y\geq t)-\Psi_n\}+o_p\left((nh^r)^{-1/2}\right).\end{aligned}$$ The third equality follows from Section [12.3](#sec:th1p2){reference-type="ref" reference="sec:th1p2"}. The last term on the RHS can be represented as $$\begin{aligned} P_n \frac{K_{h,\bm{v}_0}\Delta^c I( A= d^\theta)}{G_0P_0 K_{h,\bm{v}_0}} \{I(Y\geq t)-\Psi_n\} &= P_n \frac{K_{h,\bm{v}_0}\Delta^c I( A= d^\theta)}{G_nP_0 K_{h,\bm{v}_0}} \{I(Y\geq t)-\Psi_n\}\\ & +P_n \frac{K_{h,\bm{v}_0}\Delta^c I( A= d^\theta)}{P_0 K_{h,\bm{v}_0}} \{I(Y\geq t)-\Psi_n\}\left(\frac{G_n-G_0}{G_0G_n}\right)\\ &=P_n \frac{K_{h,\bm{v}_0}\Delta^c I( A= d^\theta)}{G_nP_0 K_{h,\bm{v}_0}} \{I(Y\geq t)-\Psi_n\}+ \\ &(P_n-P_0) \frac{K_{h,\bm{v}_0}\Delta^c I( A= d^\theta)}{P_0 K_{h,\bm{v}_0}} \{I(Y\geq t)-\Psi_n\}\left(\frac{G_n-G_0}{G_0G_n}\right)\\ & +P_0 \frac{K_{h,\bm{v}_0}\Delta^c I( A= d^\theta)}{P_0 K_{h,\bm{v}_0}} \{I(Y\geq t)-\Psi_n\}\left(\frac{G_n-G_0}{G_0G_n}\right).\end{aligned}$$ Using Lemma [Lemma 10](#lem:hnneg){reference-type="ref" reference="lem:hnneg"} and similar techniques used in Section [12.3](#sec:th1p2){reference-type="ref" reference="sec:th1p2"}, we have $(P_n-P_0) \frac{K_{h,\bm{v}_0}\Delta^c I( A= d^\theta)}{P_0 K_{h,\bm{v}_0}} \{I(Y\geq t)-\Psi_n\}\left(\frac{G_n-G_0}{G_0G_n}\right)=o_p\left((nh^r)^{-1/2}\right)$. Let $Q_0 = E \{I(Y\geq t) | A= d^\theta,\bm{W}\}$.
Then $$\begin{aligned} P_0 \frac{K_{h,\bm{v}_0}\Delta^c I( A= d^\theta)}{P_0 K_{h,\bm{v}_0}} \{I(Y\geq t)-\Psi_n\}\left(\frac{G_n-G_0}{G_0G_n}\right) =& P_0 \frac{K_{h,\bm{v}_0}\Delta^c I( A= d^\theta)}{P_0 K_{h,\bm{v}_0}} (Q_0-\Psi_n) \left(\frac{G_n-G_0}{G_0G_n}\right)\\ =&-P_0 \frac{K_{h,\bm{v}_0}G_0}{P_0 K_{h,\bm{v}_0}} (Q_0-\Psi_n)\left(\frac{G_0-G_n }{G_0^2 }\right)\\ &+P_0 \frac{K_{h,\bm{v}_0}G_0}{P_0 K_{h,\bm{v}_0}} (Q_0-\Psi_n)\left(\frac{(G_0-G_n)^2 }{G_nG_0^2 }\right).\end{aligned}$$ Using the rate of convergence of the highly adaptive lasso estimator, $P_0 \frac{K_{h,\bm{v}_0}G_0}{P_0 K_{h,\bm{v}_0}} (Q_0-\Psi_n)\left(\frac{(G_0-G_n)^2 }{G_nG_0^2 }\right)=o_p\left((nh^r)^{-1/2}\right)$. Moreover, $$\begin{aligned} P_0 \frac{K_{h,\bm{v}_0}(Q_0-\Psi_n)}{P_0 K_{h,\bm{v}_0}}\left(\frac{G_0-G_n }{G_0}\right)&= P_0 \frac{K_{h,\bm{v}_0}(Q_0-\Psi_n)}{P_0 K_{h,\bm{v}_0}}\left(\frac{G_{0}^cG_{0}^a-G_{n}^cG_{n}^a }{G_{0}^cG_{0}^a}\right) \\ &=P_0 \frac{K_{h,\bm{v}_0}(Q_0-\Psi_n)}{P_0 K_{h,\bm{v}_0}} G_{0}^a\left(\frac{G_{0}^c-G_{n}^c }{G_{0}^cG_{0}^a}\right)+P_0 \frac{K_{h,\bm{v}_0}(Q_0-\Psi_n)}{P_0 K_{h,\bm{v}_0}} G_{n}^c\left(\frac{G_{0}^a-G_{n}^a}{G_{0}^cG_{0}^a}\right) \\ & = -(P_n-P_0) \frac{K_{h,\bm{v}_0}I( A= d^\theta)}{G_0^a P_0 K_{h,\bm{v}_0}} (Q_0-\Psi_0)\left(\frac{\Delta^c-G_{0}^c }{G_0^c}\right) \\ & \hspace{.2in}-(P_n-P_0) \frac{K_{h,\bm{v}_0}(Q_0-\Psi_0)}{P_0 K_{h,\bm{v}_0}} \left(\frac{I( A= d^\theta)-G_{0}^a}{G_0^a}\right)\\ &\hspace{.2in} + P_n \frac{K_{h,\bm{v}_0}I( A= d^\theta)}{G_0^a P_0 K_{h,\bm{v}_0}}(Q_0-\Psi_n)\left(\frac{\Delta^c-G_{n}^c }{G_0^c}\right) \\ &\hspace{.2in} + P_n \frac{K_{h,\bm{v}_0 }(Q_0-\Psi_n)}{P_0 K_{h,\bm{v}_0}} G_{n}^c\left(\frac{I( A=
d^\theta)-G_{n}^a}{G_{0}^cG_{0}^a}\right)+o_p\left((nh^r)^{-1/2}\right).\end{aligned}$$ The first equality follows because $$\begin{aligned} (P_n-P_0) \frac{K_{h,\bm{v}_0}I( A= d^\theta)}{G_0^a P_0 K_{h,\bm{v}_0}} (Q_0-\Psi_n) \left(\frac{\Delta^c-G_{n}^c }{G_0^c}\right) &= (P_n-P_0) \frac{K_{h,\bm{v}_0}I( A= d^\theta)}{G_0^a P_0 K_{h,\bm{v}_0}} (Q_0-\Psi_0) \left(\frac{\Delta^c-G_{0}^c }{G_0^c}\right)\\ &\hspace{2.5in}+o_p\left((nh^r)^{-1/2}\right), \\ (P_n-P_0) \frac{K_{h,\bm{v}_0}(Q_0-\Psi_n)}{P_0 K_{h,\bm{v}_0}} G_{n}^c\left(\frac{I( A= d^\theta)-G_{n}^a}{G_0^a G_0^c}\right)&=(P_n-P_0) \frac{K_{h,\bm{v}_0}(Q_0-\Psi_0)}{P_0 K_{h,\bm{v}_0}} \left(\frac{I( A= d^\theta)-G_{0}^a}{G_0^a}\right)\\ &\hspace{2.5in}+o_p\left((nh^r)^{-1/2}\right).\end{aligned}$$ Moreover, by Lemma [Lemma 11](#lem:pnterm){reference-type="ref" reference="lem:pnterm"}, we have $$\begin{aligned} P_n \frac{K_{h,\bm{v}_0}I( A= d^\theta)}{G_0^a P_0 K_{h,\bm{v}_0}} (Q_0-\Psi_n)\left(\frac{\Delta^c-G_{n}^c }{G_0^c}\right) &= P_n \frac{K_{h,\bm{v}_0}I( A= d^\theta)}{G_n^a P_0 K_{h,\bm{v}_0}} (Q_n-\Psi_n)\left(\frac{\Delta^c-G_{n}^c }{G_n^c}\right)+o_p\left((nh^r)^{-1/2}\right)\\ P_n \frac{K_{h,\bm{v}_0}(Q_0-\Psi_n)}{P_0 K_{h,\bm{v}_0}} G_{n}^c\left(\frac{I( A= d^\theta)-G_{n}^a}{G_{0}^cG_{0}^a}\right) &=P_n \frac{K_{h,\bm{v}_0}(Q_n-\Psi_n)}{P_n K_{h,\bm{v}_0}} \left(\frac{I( A= d^\theta)-G_{n}^a}{G_{n}^a}\right)+o_p\left((nh^r)^{-1/2}\right)\end{aligned}$$ Hence, if we undersmooth $G_{n}^a$ and $G_{n}^c$ such that $P_n \frac{K_{h,\bm{v}_0}I( A= d^\theta)}{G_n^a P_0 K_{h,\bm{v}_0}} (Q_n-\Psi_n)\left(\frac{\Delta^c-G_{n}^c }{G_n^c}\right)=o_p\left((nh^r)^{-1/2}\right)$ and $P_n \frac{K_{h,\bm{v}_0}(Q_n-\Psi_n)}{P_n K_{h,\bm{v}_0}} \left(\frac{I( A= d^\theta)-G_{n}^a}{G_{n}^a}\right)=o_p\left((nh^r)^{-1/2}\right)$, then $$\begin{aligned} P_0 \frac{K_{h,\bm{v}_0}(Q_0-\Psi_n)}{P_0 K_{h,\bm{v}_0}}\left(\frac{G_0-G_n }{G_0}\right)&= -(P_n-P_0) \frac{K_{h,\bm{v}_0}I( A= d^\theta)}{G_0^a P_0 K_{h,\bm{v}_0}} 
(Q_0-\Psi_0)\left(\frac{\Delta^c-G_{0}^c }{G_0^c}\right) \\ & \hspace{.2in}-(P_n-P_0) \frac{K_{h,\bm{v}_0}(Q_0-\Psi_0)}{P_0 K_{h,\bm{v}_0}} \left(\frac{I( A= d^\theta)-G_{0}^a}{G_0^a}\right)+o_p\left((nh^r)^{-1/2}\right).\end{aligned}$$ ## The influence function. {#sec:if} Gathering all the terms, we have, $$\begin{aligned} \Psi_{nh}(\bm{v}_0,\theta) - \Psi_0(\bm{v}_0,\theta) =& (P_n-P_0) \frac{K_{h,\bm{v}_0}\Delta^c I( A= d^\theta)}{G_0P_0 K_{h,\bm{v}_0}}I(Y\geq t) \\ &- (P_n-P_0) \frac{K_{h,\bm{v}_0}I( A= d^\theta)}{G_0^a P_0 K_{h,\bm{v}_0}} Q_0\left(\frac{\Delta^c-G_{0}^c }{G_0^c}\right) \\ & -(P_n-P_0) \frac{K_{h,\bm{v}_0}Q_0}{ P_0 K_{h,\bm{v}_0}} \left(\frac{I( A= d^\theta)-G_{0}^a}{G_0^a}\right) \\ &- (P_n-P_0)\frac{K_{h,\bm{v}_0}}{P_0K_{h,\bm{v}_0}}\Psi_{0h}(\bm{v}_0,\theta)+ h^{J} B_0(J)+o_p\left((nh^r)^{-1/2}\right).\end{aligned}$$ Thus, the asymptotic normality follows from the choice of bandwidth such that $(nh^r)^{1/2}h^{J} =O(1)$ which implies $h_n=n^{-1/(2J+r)}$. Thus, $(nh^r)^{1/2}\{\Psi_{nh}(\bm{v}_0,\theta)- \Psi_0(\bm{v}_0,\theta)\}$ converges to a mean zero normal random variable when $\frac{h_n}{n^{-1/(2J+r)}} \rightarrow 0$ (i.e., $h_n$ converges to zero faster than $n^{-1/(2J+r)}$). ## The optimal bandwidth rate. {#sec:opthrate} The rate constraints obtained in Lemma [Lemma 10](#lem:hnneg){reference-type="ref" reference="lem:hnneg"} (i.e., $h>n^{-\frac{2}{3r}}$) and Section [12.5](#sec:if){reference-type="ref" reference="sec:if"} (i.e., $h<n^{-\frac{1}{r+2J}}$) imply that the optimal bandwidth rate is $h_n = n^{-\frac{1}{r+2J}}$. Note that for the constraint $n^{-\frac{2}{3r}}<h<n^{-\frac{1}{r+2J}}$ to hold we must have $J>r/4$ which is a very mild condition. For example, for $r<4$, we can still pick $J=1$. 
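The bandwidth algebra above can be sanity-checked numerically. The following Python snippet (illustrative only, not part of the proof) verifies that at $h_n = n^{-1/(2J+r)}$ the standard-deviation scale $(nh^r)^{1/2}$ and the bias order $h^J$ balance exactly, and that the window $n^{-2/(3r)} < h < n^{-1/(2J+r)}$ is nonempty precisely when $J > r/4$:

```python
import math

def sd_times_bias(n, h, r, J):
    # product of the sd scaling (n h^r)^{1/2} and the bias order h^J
    return (n * h**r) ** 0.5 * h**J

for n in (10**3, 10**6):
    for r, J in ((1, 1), (2, 1), (3, 2)):
        h = n ** (-1.0 / (2 * J + r))
        # exact balance: the product equals 1 for every n
        assert abs(sd_times_bias(n, h, r, J) - 1.0) < 1e-8

# the window n^{-2/(3r)} < h < n^{-1/(2J+r)} is nonempty
# exactly when 2/(3r) > 1/(2J+r), i.e. when J > r/4
for r in (1, 2, 3, 8):
    for J in (1, 2, 3):
        window_nonempty = 2 / (3 * r) > 1 / (2 * J + r)
        assert window_nonempty == (J > r / 4)
```

In particular, for $r < 4$ the choice $J = 1$ already satisfies the condition, consistent with the remark above.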
# Proof of Theorem [Theorem 3](#th:fixedh){reference-type="ref" reference="th:fixedh"}: Asymptotic linearity of the regimen-response curve estimator for a fixed bandwidth $h$ {#proof-of-theorem-thfixedh-asymptotic-linearity-of-the-regimen-response-curve-estimator-for-a-fixed-bandwidth-h} $$\begin{aligned} \label{eq:proofth2} \Psi_{nh}(\bm{v}_0,\theta) - \Psi_{0h}(\bm{v}_0,\theta) = &P_n K_{h,\bm{v}_0}\Psi_n \left(\frac{1}{P_n K_{h,\bm{v}_0}}-\frac{1}{P_0 K_{h,\bm{v}_0}} \right) \\ &+ (P_n-P_0)\frac{K_{h,\bm{v}_0}}{P_0 K_{h,\bm{v}_0}}(\Psi_{n}-\Psi_{0}) + (P_n-P_0)\frac{K_{h,\bm{v}_0}}{P_0 K_{h,\bm{v}_0}}(\Psi_{0}) \nonumber\\ &+P_0\frac{K_{h,\bm{v}_0}}{P_0 K_{h,\bm{v}_0}}(\Psi_{n}-\Psi_{0}). \nonumber% &P_n \frac{K_{h,\bm{v}_0}\Delta^c I( A= d^\theta)}{P_0 K_{h,\bm{v}_0}} \left(\frac{\Psi_n }{G_n }-\frac{\Psi_0 }{G_0 }\right)+ \nonumber\\ % &(P_n-P_0) \frac{K_{h,\bm{v}_0}\Delta^c I( A= d^\theta)}{P_0 K_{h,\bm{v}_0}} \left(\frac{\Psi_0 }{G_0 }\right) \nonumber\end{aligned}$$ ## Asymptotic behaviour of the first term on the RHS of ([\[eq:proofth2\]](#eq:proofth2){reference-type="ref" reference="eq:proofth2"}) Following the same steps used in the proof of Theorem [Theorem 2](#th:movh){reference-type="ref" reference="th:movh"} by consistency of $\Psi_n$ and because, for fixed $h$, $(P_n-P_0)K_{h,\bm{v}_0}=O_p(n^{-1/2})$, we have $$\begin{aligned} P_n K_{h,\bm{v}_0}\Psi_n \left(\frac{1}{P_n K_{h,\bm{v}_0}}-\frac{1}{P_0 K_{h,\bm{v}_0}} \right) = \frac{-(P_n-P_0)K_{h,\bm{v}_0}}{P_0K_{h,\bm{v}_0}} \Psi_{0h}(\bm{v}_0,\theta)+o_p(n^{-1/2}). \end{aligned}$$ Under assumptions [Assumption 1](#assump:cadlag){reference-type="ref" reference="assump:cadlag"} and [\[assump:basic\]](#assump:basic){reference-type="ref" reference="assump:basic"}, the equality above holds uniformly on the space $\mathcal{V}$. 
## Asymptotic negligibility of the second term on the RHS of ([\[eq:proofth2\]](#eq:proofth2){reference-type="ref" reference="eq:proofth2"}) {#sec:th2p2} We show that $\sup_{v \in \mathcal{V}} \left|(P_n-P_0) \frac{K_{h,v}}{P_0 K_{h,v}} \left( \Psi_n - \Psi_0 \right)\right| = o_p(n^{-1/2})$. Let $f_{v}(m) = m-\Psi_0$ and $\mathcal{F}_n = \{f_{v}(m): \sup_{v \in \mathcal{V}} \|f_{v}(m)\|<\delta_n, m \in \mathcal{D}[0,\tau], \|m\|^*_\nu<\infty, v\in\mathcal{V} \}$. Then we have $$\begin{aligned} \sup_{v \in \mathcal{V}} \left|(P_n-P_0) \frac{K_{h,v}}{P_0 K_{h,v}} \left( \Psi_n-\Psi_0\right)\right| \leq \sup_{\substack{v \in \mathcal{V}\\ f_{v}(m) \in \mathcal{F}_n}} \left|(P_n-P_0) \frac{K_{h,v}}{P_0 K_{h,v}} f_{v}(m) \right| \end{aligned}$$ By consistency of $\Psi_n$, we can consider $\delta_n \rightarrow 0$, which implies that the right-hand side of the inequality above is of order $o_p(n^{-1/2})$. ## Asymptotic behaviour of the fourth term on the RHS of ([\[eq:proofth2\]](#eq:proofth2){reference-type="ref" reference="eq:proofth2"}) Using the same techniques as in the proof of Theorem [Theorem 2](#th:movh){reference-type="ref" reference="th:movh"}, we can show $$\begin{aligned} P_0\frac{K_{h,\bm{v}_0}}{P_0 K_{h,\bm{v}_0}}(\Psi_{n}-\Psi_{0}) = &(P_n-P_0)\frac{K_{h,\bm{v}_0}\Delta^c I( A= d^\theta)}{G_0P_0 K_{h,\bm{v}_0}}\{I(Y\geq t)-\Psi_{0}\}\\ &-P_n \frac{K_{h,\bm{v}_0}\Delta^c I( A= d^\theta)}{G_0P_0 K_{h,\bm{v}_0}} \{I(Y\geq t)-\Psi_n\}+o_p(n^{-1/2}).\end{aligned}$$ This equality holds uniformly over $v \in \mathcal{V}$, because $\sup_{v \in \mathcal{V}} \left|(P_n-P_0) \frac{K_{h,v}}{P_0 K_{h,v}} \left( \Psi_n - \Psi_0 \right)\right| = o_p(n^{-1/2})$ as shown in Section [13.2](#sec:th2p2){reference-type="ref" reference="sec:th2p2"}. 
Moreover, following the result in Section [12.4](#sec:th1p3){reference-type="ref" reference="sec:th1p3"}, $$\begin{aligned} P_n \frac{K_{h,\bm{v}_0}\Delta^c I( A= d^\theta)}{G_0P_0 K_{h,\bm{v}_0}} \{I(Y\geq t)-\Psi_n\} =& P_n \frac{K_{h,\bm{v}_0}\Delta^c I( A= d^\theta)}{G_nP_0 K_{h,\bm{v}_0}} \{I(Y\geq t)-\Psi_n\}\\ & -(P_n-P_0) \frac{K_{h,\bm{v}_0}I( A= d^\theta)}{G_0^a P_0 K_{h,\bm{v}_0}} (Q_0-\Psi_0)\left(\frac{\Delta^c-G_{0}^c }{G_0^c}\right) \\ & -(P_n-P_0) \frac{K_{h,\bm{v}_0}(Q_0-\Psi_0)}{P_0 K_{h,\bm{v}_0}} \left(\frac{I( A= d^\theta)-G_{0}^a}{G_0^a}\right)\\ & + P_n \frac{K_{h,\bm{v}_0}I( A= d^\theta)}{G_0^a P_0 K_{h,\bm{v}_0}}(Q_0-\Psi_n)\left(\frac{\Delta^c-G_{n}^c }{G_0^c}\right) \\ & + P_n \frac{K_{h,\bm{v}_0 }(Q_0-\Psi_n)}{P_0 K_{h,\bm{v}_0}} G_{n}^c\left(\frac{I( A= d^\theta)-G_{n}^a}{G_{0}^cG_{0}^a}\right)+o_p(n^{-1/2}).\end{aligned}$$ The equality above also holds uniformly over $v \in \mathcal{V}$ because - $\sup_{v \in \mathcal{V}} \left| (P_n-P_0) \frac{K_{h,\bm{v}_0}\Delta^c I( A= d^\theta)}{P_0 K_{h,\bm{v}_0}} \{I(Y\geq t)-\Psi_n\}\left(\frac{G_n-G_0}{G_0G_n}\right) \right|=o_p(n^{-1/2})$, - $\sup_{v \in \mathcal{V}} \left| P_0 \frac{K_{h,\bm{v}_0}G_0}{P_0 K_{h,\bm{v}_0}} (Q_0-\Psi_n)\left(\frac{(G_0-G_n)^2 }{G_nG_0^2 }\right)\right|=o_p(n^{-1/2})$, - $\sup_{v \in \mathcal{V}} \left| P_0 K_{h,\bm{v}_0}\{G_n-G_0\} \left( \frac{Q_0-\Psi_n}{G_0P_0 K_{h,\bm{v}_0}}-\frac{Q_n-\Psi_n}{G_nP_n K_{h,\bm{v}_0}}\right)\right|=o_p(n^{-1/2})$. 
## Convergence to a Gaussian process If we undersmooth $G_{n}^a$, $G_{n}^c$ and $\Psi_n$ such that $\sup_{v \in \mathcal{V}} \left|P_n \frac{K_{h,\bm{v}_0 }(Q_0-\Psi_n)}{P_0 K_{h,\bm{v}_0}} G_{n}^c\left(\frac{I( A= d^\theta)-G_{n}^a}{G_{0}^cG_{0}^a}\right)\right|=o_p(n^{-1/2})$, $\sup_{v \in \mathcal{V}} \left| P_n \frac{K_{h,\bm{v}_0}I( A= d^\theta)}{G_0^a P_0 K_{h,\bm{v}_0}}(Q_0-\Psi_n)\left(\frac{\Delta^c-G_{n}^c }{G_0^c}\right) \right|= o_p(n^{-1/2})$, and $\sup_{v \in \mathcal{V}} \left| P_n \frac{K_{h,v}\Delta^c I( A= d^\theta) }{G_n P_n K_{h,v}} \{ \Psi_n -I(Y >t) \} \right|= o_p(n^{-1/2})$, then $$\begin{aligned} \sqrt n \{\Psi_{nh}(\bm{v}_0,\theta) - \Psi_{0h}(\bm{v}_0,\theta)\} = \sqrt n (P_n-P_0) D_v^*(P_0) + o_p(1) \Rightarrow_d GP.\end{aligned}$$ where $$\begin{aligned} D_{\bm{v}}^*(P_0) = \frac{K_{h,\bm{v}_0}\Delta^c I( A= d^\theta)}{G_0P_0 K_{h,\bm{v}_0}} &I(Y\geq t) -\frac{K_{h,\bm{v}_0}I( A= d^\theta)}{G_0^a P_0 K_{h,\bm{v}_0}} Q_0\left(\frac{\Delta^c-G_{0}^c }{G_0^c}\right)\\ &-\frac{K_{h,\bm{v}_0}Q_0}{P_0 K_{h,\bm{v}_0}} \left(\frac{I( A= d^\theta)-G_{0}^a}{G_0^a}\right)+\frac{K_{h,\bm{v}_0}}{P_0K_{h,\bm{v}_0}}\Psi_{0h}(\bm{v}_0,\theta).\end{aligned}$$ Hence, for all $\bm{v} \in \mathcal{V}$, $\sqrt n \{\Psi_{nh}(\bm{v},\theta) - \Psi_{0h}(\bm{v},\theta)\}$ converges weakly as a random element of the cadlag function space endowed with the supremum norm to a Gaussian process $GP$ with covariance structure implied by the covariance function $\rho (\bm{v},\bm{v}') = P_0 D^*_{\bm{v}}(P_0) D^*_{\bm{v}'}(P_0)$. # Uniform consistency of the regimen-response curve estimator {#sec:uniform} In Theorem [Theorem 2](#th:movh){reference-type="ref" reference="th:movh"} we showed that our estimator is pointwise consistent. In this section we show that our estimator is uniformly consistent as well in the sense that $\sup_{v \in \mathcal{V}} |\Psi_{nh}(\bm{v}_0,\theta) - \Psi_0(\bm{v}_0,\theta)| = o_p(1)$ and derive a rate of convergence. 
Recall that $$\Psi_{nh}(\bm{v}_0,\theta) - \Psi_0(\bm{v}_0,\theta)= \{ \Psi_{0h}(\bm{v}_0,\theta) - \Psi_0(\bm{v}_0,\theta)\}+\{\Psi_{nh}(\bm{v}_0,\theta) - \Psi_{0h}(\bm{v}_0,\theta)\},$$ where $$\begin{aligned} \Psi_{nh}(\bm{v}_0,\theta) - \Psi_{0h}(\bm{v}_0,\theta) = &P_n K_{h,\bm{v}_0}\Psi_n \left(\frac{1}{P_n K_{h,\bm{v}_0}}-\frac{1}{P_0 K_{h,\bm{v}_0}} \right) \\ &+ (P_n-P_0)\frac{K_{h,\bm{v}_0}}{P_0 K_{h,\bm{v}_0}}(\Psi_{n}-\Psi_{0}) + (P_n-P_0)\frac{K_{h,\bm{v}_0}}{P_0 K_{h,\bm{v}_0}}\Psi_{0} \nonumber\\ &+P_0\frac{K_{h,\bm{v}_0}}{P_0 K_{h,\bm{v}_0}}(\Psi_{n}-\Psi_{0}). \nonumber% &P_n \frac{K_{h,\bm{v}_0}\Delta^c I( A= d^\theta)}{P_0 K_{h,\bm{v}_0}} \left(\frac{\Psi_n }{G_n }-\frac{\Psi_0 }{G_0 }\right)+ \nonumber\\ % &(P_n-P_0) \frac{K_{h,\bm{v}_0}\Delta^c I( A= d^\theta)}{P_0 K_{h,\bm{v}_0}} \left(\frac{\Psi_0 }{G_0 }\right) \nonumber\end{aligned}$$ We know that $\sup_{v \in \mathcal{V}} |\Psi_{0h}(\bm{v}_0,\theta) - \Psi_0(\bm{v}_0,\theta)| = O_p(h^r)$, $\sup_{v \in \mathcal{V}} (P_n-P_0) K_{h,\bm{v}} = O_p( n^{-1/2}h^{-r/2})$, and $\sup_{v \in \mathcal{V}} \left|(P_n-P_0) \frac{K_{h,v}}{P_0 K_{h,v}} \left( \Psi_n-\Psi_0\right)\right| = o_p(n^{-1/2}h^{-r/2})$. Moreover, $\sup_{v \in \mathcal{V}} \left| P_0 \frac{K_{h,\bm{v}_0}}{P_0 K_{h,\bm{v}_0}} \left(\Psi_n-\Psi_0\right) \right| = O_p(\sup_{v \in \mathcal{V}} \|\Psi_n-\Psi_0\|)$. 
Thus, assuming that $\sup_{v \in \mathcal{V}} \|\Psi_n-\Psi_0\|=O_p(b_n)$, we have $$\sup_{v \in \mathcal{V}} |\Psi_{nh}(\bm{v}_0,\theta) - \Psi_0(\bm{v}_0,\theta)| = O_p\left( n^{-1/2}h^{-r/2}+h^r+b_n \right).$$ # Proof of Theorem [Theorem 4](#th:thetrate){reference-type="ref" reference="th:thetrate"}: Consistency of $\theta_{nh}$ {#sec:ratethetaconvh} The upper bound of $E_P \sup_{f \in \mathcal{F}, \|f\|_P \leq \delta} (n)^{1/2}(P_n-P_0)(f)$ given in Lemma [Lemma 8](#lem:vw){reference-type="ref" reference="lem:vw"} is either dominated by $\mathcal{E}(\delta,\mathcal{F},L^2(P))$ or $\mathcal{E}^2(\delta,\mathcal{F},L^2(P))/(\delta^2 n^{1/2})$. The switch occurs when $\mathcal{E}(\delta,\mathcal{F},L^2(P)) = \delta^2 n^{1/2}$. By Lemma [Lemma 9](#lem:laan){reference-type="ref" reference="lem:laan"}, for the cadlag class of functions $\mathcal{F}=\mathcal{F}_{d,M}$, we have $\mathcal{E}(\delta,\mathcal{F},L^2(P)) = \delta^{1/2} |\log \delta|^{p-1}$. Therefore, the switch occurs when $\delta^2 n^{1/2}=\delta^{1/2} |\log \delta|^{p-1}$ which implies $\delta = n^{-1/3}|\log n|^{2(p-1)/3}$ (This is because $\log \delta = -1/3\log n+\log |\log \delta|^{p-1}$ where $\log n$ is the leading term). So the result of Lemma [Lemma 8](#lem:vw){reference-type="ref" reference="lem:vw"} can be written as $$\begin{aligned} E_P \sup_{f \in \mathcal{F}, \|f\|_P \leq \delta} n^{1/2}(P_n-P_0)(f) \leq &I(\delta \geq n^{-1/3}| \log n|^{2(p-1)/3} )\mathcal{E}(\delta,\mathcal{F},L^2(P)) \\ &+I(\delta \leq n^{-1/3}|\log n|^{2(p-1)/3} ) \left\{\frac{\mathcal{E}(\delta,\mathcal{F},L^2(P))}{\delta^2 n^{1/2}}M\right\}.\end{aligned}$$ In the following, for the ease of notation, we denote $\theta_{nh}(v_0)\equiv \theta_{nh}$ and $\theta_{0}(v_0)\equiv \theta_{0}$. Define a loss-based dissimilarity $d(\theta_n,\theta_0) = \Psi_0(\theta_{nh}) - \Psi_0(\theta_0)$. 
Then by definition $$\begin{aligned} 0 \leq d(\theta_{nh},\theta_0) &= (\Psi_0-\Psi_{nh})(\theta_{nh}) - (\Psi_0-\Psi_{nh})(\theta_0) + \Psi_{nh}(\theta_{nh})-\Psi_{nh}(\theta_0) \\ & \leq -\{(\Psi_{nh}-\Psi_0)(\theta_{nh}) - (\Psi_{nh}-\Psi_0)(\theta_0) \}\\ &=(P_n-P_0) \{D^*_0(\theta_{nh})-D^*_0(\theta_0)\} + h^{J} B_0(\theta_{nh}-\theta_0) + R(\theta_{nh})-R(\theta_0).\end{aligned}$$ The last equality follows from Theorem [Theorem 2](#th:movh){reference-type="ref" reference="th:movh"}. Assuming $\frac{h_n}{n^{-1/(2J+r)}} \rightarrow 0$ (i.e., $h_n$ converges to zero faster than $n^{-1/(2J+r)}$), we have $$\begin{aligned} 0 \leq d(\theta_{nh},\theta_0) \leq (P_n-P_0) \{D^*_0(\theta_{nh})-D^*_0(\theta_0)\} + \{R(\theta_{nh})-R(\theta_0)\}+o(n^{-J/(2J+r)}).\end{aligned}$$ Let $\tilde d(\theta_{nh},\theta_{0h})=(P_n-P_0) \{D^*_0(\theta_{nh})-D^*_0(\theta_{0h})\}$. Using the rate obtained for the remainder terms in Theorem [Theorem 2](#th:movh){reference-type="ref" reference="th:movh"}, we have $d(\theta_{nh},\theta_0) = \tilde d(\theta_{nh},\theta_{0h}) + \{R(\theta_{nh})-R(\theta_0)\} = \tilde d(\theta_{nh},\theta_{0h})+ o_p(n^{-1/2}h^{-r/2})$. We proceed by assuming that the convergence rate of $\tilde d(\theta_{nh},\theta_{0h})$ is the dominating rate. We will show that this assumption holds later in the proof. Recall that $$\begin{aligned} D^*_0(\theta)= \frac{K_{h,\bm{v}_0}\Delta^c I( A= d^\theta)}{G_0P_0 K_{h,\bm{v}_0}}I(Y\geq t) - \frac{K_{h,\bm{v}_0}I( A= d^\theta)}{G_0^a P_0 K_{h,\bm{v}_0}} Q_0\left(\frac{\Delta^c-G_{0}^c }{G_0^c}\right) - &\frac{K_{h,\bm{v}_0}Q_0}{P_0 K_{h,\bm{v}_0}} \left(\frac{I( A= d^\theta)-G_{0}^a}{G_0^a}\right) \\ &- \frac{K_{h,\bm{v}_0}}{P_0K_{h,\bm{v}_0}}\Psi_{0h}(\bm{v}_0,\theta). \end{aligned}$$ Let $f=D^*_0(\theta)-D^*_0(\theta_0)$ and $h^{-r}\tilde{f} = f$. 
Then, $$\begin{aligned} (P_n-P_0) \tilde f &\leq \sup_{ \|\tilde f\|_P \leq h^{r/2}} | (P_n-P_0)\tilde f | \nonumber \\ & =O_p\{ n^{-1/2} \mathcal{E}(h^{r/2},L^2(P))\} \label{pth:bound1}\end{aligned}$$ The equality ([\[pth:bound1\]](#pth:bound1){reference-type="ref" reference="pth:bound1"}) follows from Markov's inequality and the result of Lemma [Lemma 13](#lem:wrate){reference-type="ref" reference="lem:wrate"}. Hence, $(P_n-P_0) f = O_p(n^{-1/2} h^{r/4}h^{-r}) = O_p( n^{-1/2} h^{-3r/4})$. Using a Taylor expansion, we have $d(\theta_{nh},\theta_0) =\Psi_0(\theta_{nh}) - \Psi_0(\theta_0)= (\theta_{nh}-\theta_0)^\top\frac{\partial^2 \Psi_0}{\partial \theta^2} (\theta_{nh}-\theta_0)+o(\|\theta_{nh}-\theta_0\|^2)$, which implies $\|\theta_{nh}-\theta_0\|_{2,\mu}=O_p(n^{-1/4}h^{-3r/8})$. Let $\delta = n^{-1/4}h^{-3r/8}$. Using Lemma [Lemma 13](#lem:wrate){reference-type="ref" reference="lem:wrate"}, $$\begin{aligned} (P_n-P_0) \tilde f &\leq \sup_{ \|\tilde f\|_P \leq h^{r/2}\delta} |(P_n-P_0)\tilde f| \nonumber \\ & =O_p\{ n^{-1/2} \mathcal{E}(h^{r/2}\delta^{\frac{\kappa}{\kappa+2}},L^2(P))\}, \label{pth:bound2}\end{aligned}$$ Then $d(\theta_{nh},\theta_0) = O_p(n^{-1/2}h^{-3r/4} \delta^{\frac{\kappa}{2\kappa+4}})$. The iteration continues until there is no improvement in the rate. That is, $\delta^2 = n^{-1/2}h^{-3r/4} \delta^{\frac{\kappa}{2\kappa+4}}$, which implies $$\delta = \left( n^{-1/2} h^{-3r/4}\right)^{\frac{2\kappa+4}{3\kappa+8}}.$$ Hence, $\|\theta_{nh}(v_0)-\theta_0(v_0)\|_{2}=O_p\left(\left( n^{-1/2} h^{-3r/4}\right)^{\frac{2\kappa+4}{3\kappa+8}}\right)$. 
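The fixed-point algebra above, and the limiting behaviour of the resulting bandwidth-condition exponent, can be checked numerically. The following Python snippet is illustrative only (the parameter values are arbitrary, not from the paper):

```python
# sanity check of the fixed point: delta^2 = a * delta^{kappa/(2 kappa + 4)},
# with a = n^{-1/2} h^{-3r/4}, is solved by delta = a^{(2 kappa + 4)/(3 kappa + 8)}
def fixed_point(a, kappa):
    return a ** ((2 * kappa + 4) / (3 * kappa + 8))

for a in (1e-4, 0.3):
    for kappa in (0.5, 1.0, 4.0, 20.0):
        delta = fixed_point(a, kappa)
        lhs = delta ** 2
        rhs = a * delta ** (kappa / (2 * kappa + 4))
        assert abs(lhs - rhs) <= 1e-12 * max(lhs, rhs)

# the exponent kappa/(r(3 kappa + 4)) appearing in the bandwidth
# condition increases to its limit 1/(3r) as kappa grows
r = 2.0
exps = [k / (r * (3 * k + 4)) for k in (1, 10, 100, 10**6)]
assert all(x < y for x, y in zip(exps, exps[1:]))
assert abs(exps[-1] - 1 / (3 * r)) < 1e-6
```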
When there is a margin around zero, that is, $pr(0<|\bm{S}^\top{{\theta}}_0|<l) =0$, we will have $\|\theta_{nh}(v_0)-\theta_0(v_0)\|_{2,\mu}=O_p\left( n^{-1/3} h^{-r/2}\right)$. At each iteration the convergence rate of $d(\theta_{nh},\theta_0)$ improves, and at the fixed point it achieves the rate $d(\theta_{nh},\theta_0)=O_p\left(\left( n^{-1/2} h^{-3r/4}\right)^{\frac{4\kappa+8}{3\kappa+8}}\right)$. Hence, to show that the latter rate dominates the remainder term rate, it is sufficient to show that $$\frac{(nh^r)^{1/2}}{\left( n^{1/2} h^{3r/4}\right)^{\frac{4\kappa+8}{3\kappa+8}}} \rightarrow \infty.$$ The condition is satisfied when $h < n^{-\frac{\kappa}{r(3\kappa+4)}}$. The right-hand side is a decreasing function of $\kappa$ and as $\kappa \rightarrow \infty$, $n^{-\frac{\kappa}{r(3\kappa+4)}} \rightarrow n^{-\frac{1}{3r}}$. This rate condition is satisfied when $n^{-2/(3r)}<h<n^{-1/(2J+r)}$ where $J>r/4$ and for the optimal rate of $h_n = n^{-1/(2J+r)}$. So, no further condition is imposed on the bandwidth rate and the optimal rate remains $h_n = n^{-1/(2J+r)}$. Consequently, $$\|\theta_{nh}(v_0)-\theta_0(v_0)\|_{2}=O_p\left( n^{\frac{(r-4J)(2\kappa+4)}{(8J+4r)(3\kappa+8)}} \right).$$ # Additional figures {#sec:addfig} ![Simulation studies Scenario 2: The target parameter of interest is $\Psi_{0h}$. The plots show the scaled bias and coverage rate when the weight functions are estimated using an undersmoothed highly adaptive lasso (HAL) and a random forest (RF). ](fixedHps1_2.pdf){#fig:supfixedh width="100%"} ![Simulation studies Scenario 2: The target parameter of interest is $\Psi_{0}$. The plots show the scaled bias and coverage rate when the weight functions are estimated using an undersmoothed highly adaptive lasso (HAL) and a random forest (RF).](convergingHps1.pdf){#fig:supconvergh width="100%"} ![Simulation studies Scenarios 1 and 2: The target parameter of interest is the true minimizer of $\Psi_0$ (i.e., $\theta_{0}$). 
The plots show the scaled bias of $\theta_{nh}$ when the weight functions are estimated using an undersmoothed highly adaptive lasso.](thetabias_ps12HAL.pdf){#fig:thetrate width="100%"}
--- abstract: | This work focuses on regularity criteria of the 3D generalized magneto-micropolar fluid system in terms of the pressure in Lorentz spaces, inspired by the recent works in [@FS22] and [@LN22]. author: - | Jae-Myoung Kim$^{1}$\ $^{{\small 1}}$Department of Mathematics Education, Andong National University,\ Andong 36729, Republic of Korea,\ Email: jmkim02\@anu.ac.kr title: Regularity criteria of 3D generalized magneto-micropolar fluid system in terms of the pressure --- Key words: Regularity criteria; Weak solution, generalized magneto-micropolar fluid system # Introduction This paper is concerned with the regularity conditions of the weak solutions to the generalized magneto-micropolar fluid equations in $\mathbb R^3$, which are described by $$\label{mag-mipolar} \left\{ \begin{aligned} \partial_t u +( u \cdot \nabla) u + \nabla \pi& = -( \mu + \chi)\Lambda^{2\alpha} u+ \chi \nabla \times w + (b \cdot \nabla) b, \\ \partial_t w + (u \cdot \nabla) w & = -\kappa \Lambda^{2\gamma} w+ \eta\nabla (\nabla \cdot w) + \chi \nabla \times u -2 \chi w, \\ \partial_t b + (u \cdot \nabla) b & = -\nu \Lambda^{2\beta} b+ (b \cdot \nabla) u, \\ \nabla \cdot u (\cdot,t) & = \nabla \cdot b(\cdot,t)\, = 0, \end{aligned} \right.$$ where $u = u(x, t)$, $w = w(x, t)$, $b = b(x, t)$ and $\pi: =\pi(x, t)=\mathcal{P}+\frac{|b|^2}{2}$ denote the fluid velocity, the micro-rotation velocity (angular velocity of the rotation of the fluid particles), the magnetic field and the total pressure, respectively. The notation $\Lambda := (-\Delta)^{1/2}$ stands for the square root of the Laplacian. The positive constant $\kappa$ in ([\[mag-mipolar\]](#mag-mipolar){reference-type="ref" reference="mag-mipolar"}) corresponds to the angular viscosity, $\nu$ is the inverse of the magnetic Reynolds number and $\chi$ is the micro-rotational viscosity. 
We consider the initial value problem of [\[mag-mipolar\]](#mag-mipolar){reference-type="eqref" reference="mag-mipolar"}, which requires initial conditions $$\label{ini} u(x,0)=u_0(x),\quad w(x,0)=w_0(x) \quad \text{and} \quad b(x,0)=b_0(x), \qquad x\in\mathbb R^3,$$ and we also assume that $\text{div} \ u_0=0=\text{div} \ b_0$. The authors in [@CH19 Theorem 2.2] established the existence of a Leray-Hopf weak solution on the whole space $\mathbb R^3 \times (0, T)$ to the generalized Navier-Stokes equation. It is worth pointing out that $\dot{H}^{\frac{5-4\alpha}{2}}$ is a critical space; that is, the $\dot{H}^{\frac{5-4\alpha}{2}}$ norm is invariant under the scaling $u_\lambda(x, t)=\lambda^{2\alpha-1}u(\lambda x, \lambda^{2\alpha}t)$ with any $\lambda>0$, which maps solutions to solutions. By the Sobolev embedding theorem, it is checked that $\dot{H}^{\frac{5-3\alpha}{2}}(\mathbb R^3)\hookrightarrow L^{\frac{6}{3\alpha-2}}(\mathbb R^3)$. Then $$\|u\|_{L^4(0,T ;L^{\frac{6}{3\alpha-2}})} \leq C\|u\|^{1/2}_{L^\infty(0,T ;\dot{H}^{\frac{5-4\alpha}{2}})} \|u\|^{1/2}_{L^2(0,T ;\dot{H}^{\frac{5-2\alpha}{2}})},\quad \frac{2\alpha}{4}+\frac{3}{\frac{6}{3\alpha-2}}=2\alpha-1.$$ For the regularity issues to weak solutions to the generalized Navier-Stokes equation, we refer to [@Chae07], [@FO08], [@FFY13]. Recently, Deng and Shang [@DS21] obtained global-in-time existence and uniqueness of smooth solutions to the problem [\[mag-mipolar\]](#mag-mipolar){reference-type="eqref" reference="mag-mipolar"}--[\[ini\]](#ini){reference-type="eqref" reference="ini"} if $\alpha\geq \frac{1}{2}+\frac{n}{4}$, $\alpha +\gamma \geq \max (2,\frac{n}{2})$, and $\alpha +\beta \geq 1 +\frac{n}{2}$. 
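The criticality of $\dot{H}^{\frac{5-4\alpha}{2}}$ can be verified by exponent bookkeeping: the prefactor $\lambda^{2\alpha-1}$ contributes $2\alpha-1$ and the dilation $x \mapsto \lambda x$ contributes $s-3/2$ to the power of $\lambda$ in $\|u_\lambda(\cdot,t)\|_{\dot H^s(\mathbb R^3)}$, by the standard dilation identity. The following rational-arithmetic check in Python is illustrative only (the sample values of $\alpha$ are arbitrary):

```python
from fractions import Fraction as F

def lambda_power(alpha, s):
    # power of lambda picked up by ||u_lambda(., t)||_{\dot H^s(R^3)}:
    # 2*alpha - 1 from the prefactor, s - 3/2 from the dilation x -> lambda x
    return (2 * alpha - 1) + (s - F(3, 2))

for alpha in (F(1), F(9, 8), F(5, 4)):
    s_crit = (5 - 4 * alpha) / 2          # the critical exponent (5 - 4 alpha)/2
    assert lambda_power(alpha, s_crit) == 0      # scale invariant
    assert lambda_power(alpha, s_crit + F(1, 10)) != 0   # any other s is not
```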
On the other hand, Fan and Zhong [@FS22] established local-in-time existence and uniqueness of smooth solutions to the problem [\[mag-mipolar\]](#mag-mipolar){reference-type="eqref" reference="mag-mipolar"}--[\[ini\]](#ini){reference-type="eqref" reference="ini"} for $\alpha+\gamma>1$, and furthermore they gave some regularity criteria via the gradient of velocity in meaningful and appropriate spaces. For the regularity criteria in Lorentz spaces, Li and Niu [@LN22] proved that a weak solution $(u, w, b)$ of the standard 3D MHD equations becomes regular under scaling-invariant conditions on the total pressure, in particular the so-called Serrin condition $\pi \in L^{q,\infty}(0,\, T; L^{p,\infty}({\mathbb{R}}^3))$ with $3/p+2/q \leq 2$ and $p>\frac{3}{2}$ (compare to [@[BG]], [@Zhou05], [@Zhou06], [@Zhou07], [@BPR14], [@[Suzuki1]], [@[Suzuki2]] for Navier--Stokes equations). For $p,q\in[1,\infty]$, we define $$\|f\|_{L^{p,q}(\Omega)}=\left\{\begin{aligned} &\Big(p\int_{0}^{\infty}\alpha^{q}|\{x\in \Omega:|f(x)|>\alpha\}|^{\frac{q}{p}}\frac{d\alpha}{\alpha}\Big)^{\frac{1}{q}} , ~~~q<\infty, \\ &\sup_{\alpha>0}\alpha\ |\{x\in \Omega:|f(x)|>\alpha\}|^{\frac{1}{p}} ,~~~q=\infty, \end{aligned}\right.$$ (see, e.g., [@Neil], [@Grafakos], [@Maly]). Motivated by the recent works in [@FS22] and [@LN22], the purpose of this note is to establish regularity criteria for the 3D generalized magneto-micropolar fluid system [\[mag-mipolar\]](#mag-mipolar){reference-type="eqref" reference="mag-mipolar"} in terms of the pressure in Lorentz spaces. In this paper, we assume that $\gamma=\nu=\chi=1$ and $\alpha=\beta$. Our results are stated as follows. **Theorem 1**. *Let $0 < T < \infty$ and $u_0, b_0, w_0 \in H^m(\mathbb R^3)$ with $m > \frac{5}{2}$ and $1\leq \alpha, \gamma \leq \frac{5}{4}$. 
There exists a sufficiently small constant $\varepsilon > 0$ such that if $\pi$ or $\nabla \pi$ satisfies* (A) *$\pi \in L^{q,\infty}(0,T; L ^{p,\infty}(\mathbb{R}^{3}))$  and  $$\|\pi\|_{L^{q,\infty}(0,T; L ^{p,\infty}(\mathbb{R}^{3}))} \leq\varepsilon, ~ \text{with} ~~\frac{3}{p}+\frac{2\alpha}{q}=2(2\alpha-1) , ~\frac{3}{2(\alpha-1)}<q<\infty;$$* (B) *$\nabla \pi \in L^{\frac{2r\alpha}{2r\alpha-3}}(0,T; L ^{r,\infty}(\mathbb{R}^{3}))$ with $\frac{3}{2\alpha} < r \leq \infty,$* *then the weak solution $\left( u,b\right)$ is regular on $(0,T].$* **Theorem 2**. *Let $\alpha=\beta=\gamma=1$. Assume $(u_0, b_0) \in L^2_{\sigma}(\mathbb{R}^{3}) \cap L^4_{\sigma}(\mathbb{R}^{3})$ and $w_0 \in L^2(\mathbb{R}^{3}) \cap L^4(\mathbb{R}^{3})$. Let the triple $(u, b, w)$ be a weak solution to system [\[mag-mipolar\]](#mag-mipolar){reference-type="eqref" reference="mag-mipolar"} on some time interval $[0, T)$ with $0<T<\infty$. There exists a sufficiently small constant $\epsilon > 0$ such that if $\nabla \mathcal{P}$ and $b$ satisfy $\nabla \mathcal{P} \in L^{\frac{2r}{3r-3},\infty}(0,T; L ^{r,\infty}(\mathbb{R}^{3}))$ with $$\| \nabla \mathcal{P}\|_{L^{\frac{2r}{3r-3},\infty}(0, T ;L^{r,\infty})}\leq \epsilon, \quad \frac{3}{2} < r \leq \infty,$$ and $$\| b\|_{L^{\frac{2a_1}{a_1-3},\infty}(0, T ;L^{a_1,\infty})}< \infty, \quad 3 < a_1 \leq \infty,$$ then the weak solution $\left( u,b, w\right)$ is regular on $(0,T].$* # Proof of Theorem [Theorem 1](#the1.1){reference-type="ref" reference="the1.1"} {#proof-of-theorem-the1.1} To control the fractional diffusion term, we recall the following result (see e.g. [@Codorba]). **Lemma 3**. *Let $0 < \alpha < 2$ and $v,\Lambda^\alpha v \in L^p(\mathbb R^3)$ with $p=2k$, $k\in \mathbb{N}$. Then $$\int |v|^{p-2}v\Lambda^\alpha v\, dx \geq \frac{1}{p} \int |\Lambda^{\frac{\alpha}{2}} v^{\frac{p}{2}}|^2\,dx.$$* Also, we recall the following nonlinear Gronwall-type inequality established in [@PX20] (see also [@BPR14] and [@LRM16]). 
**Lemma 4**. *Let $T > 0$ and $\varphi \in L_{loc}([0, T ))$ be a non-negative function. Assume further that $$\varphi(t) \leq C_0 + C_1\int^t_0 \mu(s)\varphi(s)\,ds + \kappa \int^t_0 \lambda(s)^{1-\epsilon}\varphi(s)^{1+A(\epsilon)}\, ds,\quad \forall \ 0 < \epsilon < \epsilon_0,$$ where $\kappa, \epsilon_0 > 0$ are constants, $\mu \in L^1(0, T )$ and $A(\epsilon)> 0$ satisfies $\lim_{\epsilon \rightarrow 0}\frac{A(\epsilon)}{\epsilon}= c_0 > 0$. Then $\varphi$ is bounded on $[0, T ]$ if $\|\lambda\|_{L^{1,\infty}(0,T)} < c^{-1}_0 \kappa^{-1}$.* In order to derive the regularity criteria of weak solutions to the system ([\[mag-mipolar\]](#mag-mipolar){reference-type="ref" reference="mag-mipolar"}), we first rewrite it in terms of the symmetrized variables. Let us denote $$\label{variable} z^+=u+b,\ \ \text{\ }z^-=u-b.$$ Then the system [\[mag-mipolar\]](#mag-mipolar){reference-type="eqref" reference="mag-mipolar"} can be reformulated as $$\left\{ \begin{array}{c} \partial _{t}z^{+}+\Lambda^{2\alpha} z^{+}+(z^{-}\cdot \nabla )z^{+}-\chi\nabla \times w+\nabla \pi =0, \\ \partial _{t}z^{-}+\Lambda^{2\alpha} z^{-}+(z^{+}\cdot \nabla )z^{-}-\chi\nabla \times w+\nabla \pi =0, \\ \partial _{t}w+\kappa\Lambda^{2\gamma} w+ (\frac{z^++ z^-}{2})\cdot \nabla w - \eta\nabla \mbox{div}\ w + 2\chi w-\chi\nabla \times(\frac{z^++ z^-}{2}) =0, \\ \nabla \cdot z^{+}=\nabla \cdot z^{-}=0, \\ z^{+}(x,0)=z^{+}_{0}(x),\text{ \ }z^{-}(x,0)=z^{-}_{0}(x), \end{array} \right. 
\label{fmmp-100}$$ It is easy to show the following global $L^2$-bound, $$\|(u,b,w)(t)\|^2_{L^2}+ \int^t_0 (\|\Lambda^\alpha u(\tau )\|^2_{L^2} + \|\Lambda^\gamma w(\tau)\|^2_{L^2}+ \|\Lambda^\alpha b(\tau)\|^2_{L^2})\,d\tau \leq C.$$ ***Proof:*** Multiplying the first and the second equations of [\[fmmp-100\]](#fmmp-100){reference-type="eqref" reference="fmmp-100"} by $\left\vert z^{+}\right\vert ^{2}z^{+}$ and $\left\vert z^{-}\right\vert ^{2}z^{-}$, respectively, integrating by parts and summing up, we have $$\frac{1}{4}\frac{d}{dt}|\!|\Big(z^{+},z^{-}\Big)|\!|_{L^{4}}^{4}+ |\!|\Lambda^{\alpha} \Big(|z^+|^2,|z^-|^2\Big)|\!|^2_{L^2}$$ $$\begin{aligned} \label{L4-estimate} =-\underbrace{\int_{\mathbb{R}^{3}}\nabla \pi \cdot (z^{+}\left\vert z^{+}\right\vert ^{2}+z^{-}\left\vert z^{-}\right\vert ^{2})dx}_{\mathcal{J}_1}+\underbrace{\int_{\mathbb{R}^{3}} (|z^+|^2 z^{+}+|z^-|^2 z^{-}) \cdot (\nabla \times w)dx}_{\mathcal{J}_2} .\end{aligned}$$ Taking the divergence of the first equation of [\[fmmp-100\]](#fmmp-100){reference-type="eqref" reference="fmmp-100"} and using the fact that $\mbox{div}(\nabla \times w) = 0$, we see $$-\Delta \pi = \mbox{div}\mbox{div}(z^+\otimes z^-),$$ and thus, $$||\pi||_{L^{q,2}}\leq C||z^+\otimes z^-||_{L^{q,2}}\leq C||z^+||_{L^{2q,4}}||z^-||_{L^{2q,4}}=C|||z^+|^2||^{1/2}_{L^{q,2}}|||z^-|^2||^{1/2}_{L^{q,2}}$$ $$\leq C(|||z^+|^2||_{L^{q,2}}+|||z^-|^2||_{L^{q,2}}).$$ Using integration by parts, Hölder's inequality in Lorentz spaces, Young's inequality and the Sobolev embedding for the fractional power, we note that for $p>1$, $q>2$ and $\frac{5}{2}>\alpha>0$ with $$\label{relation} \frac{1}{q}+\frac{1}{2p}+\frac{5-2\alpha}{6}=1,$$ $$\int_{\mathbb{R}^{3}}\nabla \pi \cdot \left\vert z^{+}\right\vert ^{2}z^{+}\, dx=-\int_{\mathbb{R}^{3}} \mbox{div} (z^{+}\left\vert z^{+}\right\vert ^{2})\pi\, dx\leq C\int_{\mathbb{R}^{3}} |z^{+}||\nabla |z^{+}|^2||\pi|\, dx$$ $$\leq C\|z^+\|_{L^{2q,4}}\|\nabla |z^+|^2\|_{L^{\frac{6}{5-2\alpha},2}}\||\pi|^{1/2}\|_{L^{2p,\infty}}\||\pi|^{1/2}\|_{L^{2q,4}},$$ 
$$=C\||z^+|^2\|^{1/2}_{L^{q,2}}\|\nabla |z^+|^2\|_{L^{\frac{6}{5-2\alpha},2}}\|\pi\|^{1/2}_{L^{p,\infty}}\|\pi\|^{1/2}_{L^{q,2}}$$ $$\leq C\||z^+|^2\|^{1/2}_{L^{q,2}}\|\nabla |z^+|^2\|_{L^{\frac{6}{5-2\alpha},2}}\|\pi\|^{1/2}_{L^{p,\infty}}\Big(|||z^+|^2||^{1/2}_{L^{q,2}}+|||z^-|^2||^{1/2}_{L^{q,2}}\Big)$$ $$\leq C\||z^+|^2\|_{L^{q,2}}\|\nabla |z^+|^2\|_{L^{\frac{6}{5-2\alpha},2}}\|\pi\|^{1/2}_{L^{p,\infty}}$$ $$+C\||z^+|^2\|^{1/2}_{L^{q,2}}\|\nabla |z^+|^2\|_{L^{\frac{6}{5-2\alpha},2}}\|\pi\|^{1/2}_{L^{p,\infty}}|||z^-|^2||^{1/2}_{L^{q,2}}$$ $$\leq C\|\pi\|^{1/2}_{L^{p,\infty}}\Big(\||z^+|^2\|_{L^{q,2}}+\||z^-|^2\|_{L^{q,2}}\Big)\|\nabla |z^+|^2\|_{L^{\frac{6}{5-2\alpha},2}}.$$ In the same way, $\int_{\mathbb{R}^{3}}\nabla \pi \cdot z^{-}\left\vert z^{-}\right\vert ^{2}\, dx$ can be bounded by $$C\|\pi\|^{1/2}_{L^{p,\infty}}\Big(\||z^+|^2\|_{L^{q,2}}+\||z^-|^2\|_{L^{q,2}}\Big)\|\nabla |z^-|^2\|_{L^{\frac{6}{5-2\alpha},2}}.$$ Thus, we get $$\mathcal{J}_1 \leq C\|\pi\|^{1/2}_{L^{p,\infty}}\Big(\||z^+|^2\|_{L^{q,2}}+\||z^-|^2\|_{L^{q,2}}\Big) \Big(\|\nabla|z^+|^2\|_{L^{\frac{6}{5-2\alpha},2}}+\|\nabla|z^-|^2\|_{L^{\frac{6}{5-2\alpha},2}}\Big)$$ $$\leq C\|\pi\|^{1/2}_{L^{p,\infty}}\Big(\||z^+|^2\|^{1-\Big(\frac{3}{2\alpha}+\frac{3}{q\alpha}\Big)}_{L^{2}}+\||z^-|^2\|^{1-\Big(\frac{3}{2\alpha}+\frac{3}{q\alpha}\Big)}_{L^{2}}\Big)$$ $$\times \Big(\|\nabla|z^+|^2\|^{1+\Big(\frac{3}{2\alpha}+\frac{3}{q\alpha}\Big)}_{L^{\frac{6}{5-2\alpha},2}}+\|\nabla|z^-|^2\|^{1+\Big(\frac{3}{2\alpha}+\frac{3}{q\alpha}\Big)}_{L^{\frac{6}{5-2\alpha},2}}\Big)$$ $$\leq C\|\pi\|^{\frac{2\alpha p}{4\alpha p-2p-3}}_{L^{p,\infty}}\Big(\||z^+|^2\|^2_{L^{2}}+\||z^-|^2\|^2_{L^{2}}\Big) +\frac{1}{16}\Big(\|\Lambda^{\alpha}|z^+|^2\|^2_{L^2}+\|\Lambda^{\alpha}|z^-|^2\|^2_{L^2}\Big).$$ For 
$\mathcal{J}_2$, integrating by parts, we note that $$\int_{\mathbb{R}^{3}} |z^+|^2 z^{+} \cdot (\nabla \times w)dx\leq |\!|w|\!|_{L^4}|\!|\nabla |z^+|^2|\!|_{L^{2}}|\!|z^+|\!|_{L^4}\leq|\!|w|\!|^2_{L^4}\|z^+\|^2_{L^{4}}+\frac{1}{16}|\!|\nabla |z^+|^2|\!|^2_{L^{2}}$$ $$\leq|\!|w|\!|^2_{L^4}\|z^+\|^2_{L^{4}}+\frac{1}{16}|\!||z^+|^2|\!|^{2\theta}_{L^{2}}|\!|\Lambda^\alpha |z^+|^2|\!|^{2(1-\theta)}_{L^{2}},\quad \theta=\frac{\alpha-1}{\alpha}$$ $$\leq C(|\!|w|\!|^4_{L^4}+\|z^+\|^4_{L^{4}})+\frac{C}{16}|\!||z^+|^2|\!|^{2}_{L^{2}}+\frac{1}{16}|\!|\Lambda^\alpha |z^+|^2|\!|^{2}_{L^{2}}$$ $$\leq C(|\!|w|\!|^4_{L^4}+\|z^+\|^4_{L^{4}})+\frac{1}{16}|\!|\Lambda^\alpha |z^+|^2|\!|^{2}_{L^{2}}.$$ And thus, $\mathcal{J}_2$ is bounded by $$\mathcal{J}_2 \leq C(|\!|w|\!|^4_{L^4}+\|z^+\|^4_{L^{4}}+\|z^-\|^4_{L^{4}})+\frac{1}{16}\Big(|\!|\Lambda^\alpha |z^+|^2|\!|^{2}_{L^{2}}+|\!|\Lambda^\alpha |z^-|^2|\!|^{2}_{L^{2}}\Big).$$ To get the $L^4$-estimate for $w$, as before, multiplying the third equation of [\[fmmp-100\]](#fmmp-100){reference-type="eqref" reference="fmmp-100"} by $\left\vert w\right\vert ^{2}w$, integrating by parts and summing up, we have $$\frac{1}{4}\frac{d}{dt}|\!|w|\!|_{L^{4}}^{4}+ |\!|\Lambda^\gamma |w|^2|\!|^2_{L^2}+2\chi|\!|w|\!|_{L^{4}}^{4}+\kappa|\!||w|\mbox{div}\ w|\!|_{L^{2}}^{2}$$ $$\begin{aligned} \label{L4-estimate-w} =\underbrace{\frac{\chi}{2}\int_{\mathbb{R}^{3}}|w|^2 w \cdot (\nabla \times (z^+ + z^-))dx}_{\mathcal{J}_3}- \underbrace{\int_{\mathbb{R}^{3}} \mbox{div}\ w\ (w \cdot \nabla|w|^2)dx}_{\mathcal{J}_4}.\end{aligned}$$ In the same manner as $\mathcal{J}_2$, $\mathcal{J}_3$ is bounded by $$\mathcal{J}_3\leq C(|\!|w|\!|^{4}_{L^4}+|\!|z^+|\!|^{4}_{L^4}+|\!|z^-|\!|^{4}_{L^4})+\frac{1}{16}|\!|\Lambda^\gamma |w|^2|\!|^2_{L^2},\quad \gamma\geq1.$$ In a similar way, $\mathcal{J}_4$ is also bounded by $$\mathcal{J}_4\leq \kappa|\!||w|\mbox{div}\ w|\!|_{L^{2}}|\!|\nabla |w|^2|\!|_{L^{2}}\leq C|\!|\nabla |w|^2|\!|^2_{L^{2}}+\frac{\kappa}{16}|\!||w|\mbox{div}\ 
w|\!|^2_{L^{2}}, \quad \gamma\geq1.$$ Let $Y(t):=\|(z^+,z^-,w)\|^4_{L^4(\mathbb{R}^{3})}$; then [\[L4-estimate\]](#L4-estimate){reference-type="eqref" reference="L4-estimate"} becomes $$\label{qq111} \frac{d}{dt}Y(t)\lesssim \|\pi\|^{q}_{L^{p,\infty}(\mathbb{R}^{3})}Y(t),\quad\quad q=\frac{2\alpha p}{4\alpha p-2p-3}.$$ Now, we use an argument similar to the one in the work of Bosia et al. [@BPR14]. For $\kappa > 0$, choose $q_\kappa = q -\kappa(q+\frac{\alpha}{2\alpha-1}-\frac{3c_0}{4(2\alpha-1)})$ and $r_\kappa:=\frac{q-\kappa(q+\frac{\alpha}{2\alpha-1}-\frac{3c_0}{4(2\alpha-1)})}{\frac{2}{3}(q(2\alpha-1)-\alpha)(1-\kappa)+\frac{c_0\kappa}{2} }$ with $$\left\{\begin{aligned}\label{ro} &\frac{3}{p_{\kappa}}+\frac{2\alpha}{q_{\kappa}}=2(2\alpha-1), \\ &\frac{q_{\kappa}}{p_{\kappa}}=\frac{q\big(1-\kappa\big)}{p}+\frac{c_0\kappa}{2}, \end{aligned}\right.$$ so that the pair $(p_{\kappa}, q_{\kappa})$ also satisfies $3/p_{\kappa}+2\alpha/q_{\kappa}=2(2\alpha-1)$. Due to the above relation, we get $$\|\pi\|^{q_{\kappa}}_{L^{p_{\kappa},\infty}(\mathbb{R}^{3})}\lesssim \|\pi\|^{q(1-\kappa)}_{L^{p ,\infty}(\mathbb{R}^{3})} \|\pi\|^{4\kappa}_{L^{2,\infty}}\lesssim \|\pi\|^{q(1-\kappa)}_{L^{p ,\infty}(\mathbb{R}^{3})} \|\pi\|^{4\kappa}_{L^{2 }(\mathbb{R}^{3})}$$ $$\label{2.2000} \lesssim \|\pi\|^{q(1-\kappa)}_{L^{p ,\infty}(\mathbb{R}^{3})} \Big(|\!||u|^{2}|\!|^{4\kappa}_{L^{2 }(\mathbb{R}^{3})}+|\!||b|^{2}|\!|^{4\kappa}_{L^{2 }(\mathbb{R}^{3})}\Big).$$ 
Using the estimate [\[2.2000\]](#2.2000){reference-type="eqref" reference="2.2000"}, [\[qq111\]](#qq111){reference-type="eqref" reference="qq111"} becomes $$\frac{d}{dt}Y(t)\lesssim \|\pi\|^{q_{\kappa}}_{L^{p_{\kappa},\infty}(\mathbb{R}^{3})}Y(t) \leq C \|\pi\|^{q(1-\kappa)}_{L^{p ,\infty}(\mathbb{R}^{3})} Y(t)^{1+2\kappa}.$$ Integrating with respect to time from $0$ to $t$ with $0\leq t<T$, $$Y(t) \leq CY(0)+C\int_0^t\|\pi\|^{q(1-\kappa)}_{L^{p ,\infty}(\mathbb{R}^{3})} Y(s)^{1+2\kappa}\,ds,$$ or equivalently, $$\|z^+(t)\|^4_{L^4(\mathbb{R}^{3})}+\|z^-(t)\|^4_{L^4(\mathbb{R}^{3})}+\|w(t)\|^4_{L^4(\mathbb{R}^{3})} \leq C\Big(\|z_0^+\|^4_{L^4(\mathbb{R}^{3})}+\|z_0^-\|^4_{L^4(\mathbb{R}^{3})}+\|w_0\|^4_{L^4(\mathbb{R}^{3})}\Big)$$ $$+C\int_0^t\|\pi\|^{q(1-\kappa)}_{L^{p ,\infty}(\mathbb{R}^{3})} \Big(\|z^+\|^4_{L^4(\mathbb{R}^{3})}+\|z^-\|^4_{L^4(\mathbb{R}^{3})}+\|w\|^4_{L^4(\mathbb{R}^{3})}\Big)^{1+2\kappa}\,ds.$$ Due to Lemma [Lemma 4](#grow){reference-type="ref" reference="grow"}, we are now able to complete the proof of Theorem 1.1 under assumption $(A)$ of Theorem 1.1. *Part (B): The proof is almost the same as the arguments in [@Duan12] or [@Kim22]; however, for the reader's convenience, we give a sketch. 
Multiplying both sides of $\eqref{fmmp-100}_1$ by $z^+|z^+|^{3r-4}$ and then integrating over $\mathbb R^3$, we conclude that $$\begin{aligned} \frac{1}{3r-2}\frac{d}{dt}&\int_{\mathbb{R}^{3}}|z^+|^{3r-2}dx +\frac{4(3r-4)}{(3r-2)^2}\int_{\mathbb{R}^{3}}|\Lambda^{\alpha}| z^+|^{\frac{3r-2}{2}}|^{2}\,dx\end{aligned}$$ $$\begin{aligned} \label{L4-estimate} \lesssim \underbrace{\int_{\mathbb{R}^{3}}\nabla \pi \cdot |z^+|^{3r-4}z^+ dx }_{\mathcal{J}_5}+\frac{1}{2}\underbrace{\int_{\mathbb{R}^{3}}|z^+|^{3r-4}z^+ \cdot (\nabla \times w )dx}_{\mathcal{J}_6}.\end{aligned}$$ By integration by parts and the Hölder inequality, $\mathcal{J}_5$ can be written as $$\begin{aligned} \label{aaa-1000} \mathcal{J}_5&\leq (3r - 4) \int_{\mathbb{R}^{3}} | \pi||\nabla |z^+||z^+|^{3r-4}\,dx \\ &\leq \frac{2(3r-4)}{(3r-2)}\Big(\int_{\mathbb{R}^{3}}|\pi|^2|z^+|^{3r-4}\,dx\Big)^\frac{1}{2} \Big(\int_{\mathbb{R}^{3}}|\nabla|z^+|^{\frac{3r-2}{2}}|^2\,dx\Big)^\frac{1}{2}\notag.\end{aligned}$$ Note that if $0\leq I\leq a$ and $0\leq I\leq b$, then $I\leq \sqrt{ab}$. 
Combining $\mathcal{J}_5$ in [\[L4-estimate\]](#L4-estimate){reference-type="eqref" reference="L4-estimate"} and [\[aaa-1000\]](#aaa-1000){reference-type="eqref" reference="aaa-1000"}, we get $$\begin{aligned} \mathcal{J}_5&\lesssim \Big(\int_{\mathbb{R}^{3}} |\nabla \pi| |z^+|^{3r-3}\,dx\Big)^{1/2}\Big(\int_{\mathbb{R}^{3}}|\pi|^2|z^+|^{3r-4}\,dx\Big)^\frac{1}{4} \Big(\int_{\mathbb{R}^{3}}|\nabla|z^+|^{\frac{3r-2}{2}}|^2\,dx\Big)^\frac{1}{4}\\ &\leq C \Big(\int_{\mathbb{R}^{3}} |\nabla \pi|\Big( |z^+|^{2}+|z^-|^{2}\Big)^{(3r-3)/2}\,dx\Big)^{2/3}\Big(\int_{\mathbb{R}^{3}}|\pi|^2\Big( |z^+|^{2}+|z^-|^{2}\Big)^{(3r-4)/2}\,dx\Big)^{1/3}\\ &\hspace{2cm}+\frac{3r-4}{(3r-2)^2}\Big(\int_{\mathbb{R}^{3}}|\nabla |z^+|^{\frac{3r-2}{2}}|^2\,dx\Big).\end{aligned}$$ Due to $$\int_{\mathbb{R}^{3}} | \pi|^2\Big( |z^+|^{2}+|z^-|^2\Big)^{\frac{3r-4}{2}}\,dx\lesssim \|\pi\|_{L^{\frac{3r}{2},6r-6}}^2\||z^+|^2+|z^-|^2\|_{L^{\frac{3r}{2},\frac{3r-3}{2}}}^{\frac{3r-4}{2}}$$ $$\lesssim \||z^+|^2+|z^-|^2\|_{L^{\frac{3r}{2},6r-6}}^2\||z^+|^2+|z^-|^2\|_{L^{\frac{3r}{2},\frac{3r-3}{2}}}^{\frac{3r-4}{2}}$$ $$\lesssim \||z^+|^2+|z^-|^2\|_{L^{\frac{3r}{2},\frac{3r-3}{2}}}^2\||z^+|^2+|z^-|^2\|_{L^{\frac{3r}{2},\frac{3r-3}{2}}}^{\frac{3r-4}{2}}=\||z^+|^2+|z^-|^2\|_{L^{\frac{3r}{2},\frac{3r-3}{2}}}^{\frac{3r}{2}},$$ and $$\int_{\mathbb{R}^{3}} | \nabla \pi|\Big( |z^+|^{2}+|z^-|^2\Big)^{\frac{3r-3}{2}}\,dx\lesssim \|\nabla \pi\|_{L^{r,\infty}}\||z^+|^2+|z^-|^2\|_{L^{\frac{3r}{2},\frac{3r-3}{2}}}^{\frac{3r-3}{2}},$$ $\mathcal{J}_5$ is estimated by $$\label{aaaaa-10} \mathcal{J}_5\leq C\|\nabla \pi\|^{\frac{2}{3}}_{L^{r,\infty}}\||z^+|^2+|z^-|^2\|_{L^{\frac{3r}{2},\frac{3r-2}{2}}}^{\frac{3r-2}{2}}+\frac{3r-4}{(3r-2)^2}\Big(\int_{\mathbb{R}^{3}}|\nabla |z^+|^{\frac{3r-2}{2}}|^2\,dx\Big).$$ Next, for $\mathcal{J}_6$, using the Hölder and Young inequalities, we have $$\label{aaaaa-20} \mathcal{J}_6\lesssim \|w\|_{L^{3r-2}}\||z^+|^{\frac{3r-4}{2}}\|_{L^{\frac{2(3r-2)}{3r-4}}}\|\nabla 
|z^+|^{\frac{3r-2}{2}}\|_{L^2}$$ $$\lesssim\|w\|^2_{L^{3r-2}}\||z^+|^{\frac{3r-4}{2}}\|^2_{L^{\frac{2(3r-2)}{3r-4}}}+\|\nabla |z^+|^{\frac{3r-2}{2}}\|^2_{L^2} \lesssim(\|w\|^{3r-2}_{L^{3r-2}}+\|z^+\|^{3r-2}_{L^{3r-2}})+\|\nabla |z^+|^{\frac{3r-2}{2}}\|^2_{L^2}.$$ Then, considering the estimates [\[aaaaa-10\]](#aaaaa-10){reference-type="eqref" reference="aaaaa-10"} and [\[aaaaa-20\]](#aaaaa-20){reference-type="eqref" reference="aaaaa-20"}, [\[L4-estimate\]](#L4-estimate){reference-type="eqref" reference="L4-estimate"} reduces to $$\begin{aligned}\label{2.13} \frac{d}{dt}&\int_{\mathbb{R}^{3}}|z^+|^{3r-2}dx +\int_{\mathbb{R}^{3}}|\Lambda^{\alpha}| z^+|^{\frac{3r-2}{2}}|^{2}\,dx \end{aligned}$$ $$\lesssim \|\nabla \pi\|^{\frac{2}{3}}_{L^{r,\infty}}\||z^+|^2+|z^-|^2\|_{L^{\frac{3r}{2},\frac{3r-2}{2}}}^{\frac{3r-2}{2}}+ C(\|w\|^{3r-2}_{L^{3r-2}}+\|z^+\|^{3r-2}_{L^{3r-2}})+\|\nabla |z^+|^{\frac{3r-2}{2}}\|^2_{L^2}$$ $$\leq C \|\nabla \pi\|^{\frac{2}{3}}_{L^{r,\infty}}\||z^+|^2+|z^-|^2\|_{L^{\frac{3r}{2},\frac{3r-2}{2}}}^{\frac{3r-2}{2}}+ C(\|w\|^{3r-2}_{L^{3r-2}}+\|z^+\|^{3r-2}_{L^{3r-2}})+\frac{1}{256}\|\Lambda^{\alpha} |z^+|^{\frac{3r-2}{2}}\|^2_{L^2},$$ where we use the estimate $$\|\nabla |z^+|^{\frac{3r-2}{2}}\|^2_{L^2} \leq C \||z^+|^{\frac{3r-2}{2}}\|^{2\theta}_{L^2}\|\Lambda^{\alpha} |z^+|^{\frac{3r-2}{2}}\|^{2(1-\theta)}_{L^2} \leq C\||z^+|^{\frac{3r-2}{2}}\|^{2}_{L^2}+\frac{1}{256}\|\Lambda^{\alpha} |z^+|^{\frac{3r-2}{2}}\|^{2}_{L^2}.$$ In a similar fashion, for the equation $\eqref{fmmp-100}_2$, we have $$\begin{aligned}\label{2.14} \frac{d}{dt}&\int_{\mathbb{R}^{3}}|z^-|^{3r-2}dx +\int_{\mathbb{R}^{3}}|\Lambda^{\alpha}| z^-|^{\frac{3r-2}{2}}|^{2}\,dx \end{aligned}$$ $$\leq C \|\nabla \pi\|^{\frac{2}{3}}_{L^{r,\infty}}\||z^+|^2+|z^-|^2\|_{L^{\frac{3r}{2},\frac{3r-2}{2}}}^{\frac{3r-2}{2}}+ C(\|w\|^{3r-2}_{L^{3r-2}}+\|z^-\|^{3r-2}_{L^{3r-2}})+\frac{1}{256}\|\Lambda^{\alpha} |z^-|^{\frac{3r-2}{2}}\|^2_{L^2}.$$ After summing up 
[\[2.13\]](#2.13){reference-type="eqref" reference="2.13"} and [\[2.14\]](#2.14){reference-type="eqref" reference="2.14"}, using Sobolev embedding and Young's inequality, we obtain $$\begin{aligned}\label{2.15-10} \frac{d}{dt}&\int_{\mathbb{R}^{3}}\Big(|z^+|^{3r-2}+|z^-|^{3r-2}\Big)dx +\int_{\mathbb{R}^{3}}\Big(|\Lambda^{\alpha}| z^+|^{\frac{3r-2}{2}}|^{2}+|\Lambda^{\alpha}| z^-|^{\frac{3r-2}{2}}|^{2}\Big)\,dx\\ &\lesssim\|\nabla \pi\|^{\frac{2}{3}}_{L^{r,\infty}}\Big(\||z^+|^{\frac{3r-2}{2}}\|_{L^{\frac{6r}{3r-2},1}}^{2}+\||z^-|^{\frac{3r-2}{2}}\|_{L^{\frac{6r}{3r-2},1}}^{2}\Big)\\ &\hspace{4cm}+C(\|w\|^{3r-2}_{L^{3r-2}}+\|z^+\|^{3r-2}_{L^{3r-2}}+\|z^-\|^{3r-2}_{L^{3r-2}})\\ &\lesssim\|\nabla \pi\|^{\frac{2}{3}}_{L^{r,\infty}}\Big(\||z^+|^{\frac{3r-2}{2}}\|_{L^{2}}^{(2-\frac{2}{r\alpha})}\|\Lambda^{\alpha}|z^+|^{\frac{3r-2}{2}}\|_{L^{2}}^{\frac{2}{r\alpha}} +\||z^-|^{\frac{3r-2}{2}}\|_{L^{2}}^{(2-\frac{2}{r\alpha})}\|\Lambda^{\alpha}|z^-|^{\frac{3r-2}{2}}\|_{L^{2}}^{\frac{2}{r\alpha}}\Big)\\ &\hspace{4cm}+C(\|w\|^{3r-2}_{L^{3r-2}}+\|z^+\|^{3r-2}_{L^{3r-2}}+\|z^-\|^{3r-2}_{L^{3r-2}})\\ &\lesssim\|\nabla \pi\|^{\frac{2r\alpha}{3r\alpha-3}}_{L^{r,\infty}}\Big(\||z^+|^{\frac{3r-2}{2}}\|^2_{L^2}+\||z^-|^{\frac{3r-2}{2}}\|^2_{L^2}\Big) +\frac{1}{8}\int_{\mathbb{R}^{3}}\Big(|\Lambda^{\alpha}| z^+|^{\frac{3r-2}{2}}|^{2}+|\Lambda^{\alpha}| z^-|^{\frac{3r-2}{2}}|^{2}\Big)\,dx\\ &\hspace{4cm}+C(\|w\|^{3r-2}_{L^{3r-2}}+\|z^+\|^{3r-2}_{L^{3r-2}}+\|z^-\|^{3r-2}_{L^{3r-2}})\\ &\lesssim\|\nabla \pi\|^{\frac{2r\alpha}{3r\alpha-3}}_{L^{r,\infty}}\Big(\|z^+\|^{3r-2}_{L^{3r-2}(\mathbb{R}^{3})}+\|z^-\|^{3r-2}_{L^{3r-2}}\Big) +\frac{1}{8}\int_{\mathbb{R}^{3}}\Big(|\Lambda^{\alpha}| z^+|^{\frac{3r-2}{2}}|^{2}+|\Lambda^{\alpha}| z^-|^{\frac{3r-2}{2}}|^{2}\Big)\,dx\\ &\hspace{4cm}+C(\|w\|^{3r-2}_{L^{3r-2}}+\|z^+\|^{3r-2}_{L^{3r-2}}+\|z^-\|^{3r-2}_{L^{3r-2}}). 
\end{aligned}$$ To control the $L^{3r-2}$-estimate for $w$, multiplying both sides of $\eqref{fmmp-100}_3$ by $w|w|^{3r-4}$, we have $$\begin{aligned} \frac{1}{3r-2}\frac{d}{dt}&\int_{\mathbb{R}^{3}}|w|^{3r-2}dx +\frac{4(3r-4)}{(3r-2)^2}\int_{\mathbb{R}^{3}}|\Lambda^{\gamma}| w|^{\frac{3r-2}{2}}|^{2}\,dx\end{aligned}$$ $$+\int_{\mathbb{R}^{3}}|w|^{3r-2}dx+\int_{\mathbb{R}^{3}}|w|^{3r-4}|\mbox{\rm div}\ w|^2\,dx$$ $$\begin{aligned} \label{2.16} =\underbrace{\frac{\chi}{2}\int_{\mathbb{R}^{3}}|w|^{3r-4}w \cdot (\nabla \times (z^++z^-) )dx}_{\mathcal{J}_7}+ \underbrace{\int_{\mathbb{R}^{3}} \mbox{\rm div}\ w\ (w \cdot \nabla |w|^{3r-4})dx}_{\mathcal{J}_8}.\end{aligned}$$ In the same manner as $\mathcal{J}_2$, the term $\mathcal{J}_7$ is bounded by $$\label{2.17} \mathcal{J}_7\lesssim(\|w\|^{3r-2}_{L^{3r-2}}+\|z^+\|^{3r-2}_{L^{3r-2}}+\|z^-\|^{3r-2}_{L^{3r-2}})+\frac{1}{256}\|\Lambda^{\gamma} |w|^{\frac{3r-2}{2}}\|^2_{L^2}.$$ For $\mathcal{J}_8$, we get $$\label{2.18} \mathcal{J}_8\leq C\|\nabla |w|^{\frac{3r-2}{2}}\|^2_{L^2}+\frac{1}{256}\int_{\mathbb{R}^{3}}|w|^{3r-4}|\mbox{\rm div}\ w|^2\,dx$$ $$\leq C\|w\|^{3r-2}_{L^{3r-2}}+\frac{1}{256}\Big(\|\Lambda^{\gamma} |w|^{\frac{3r-2}{2}}\|^2_{L^2}+\||w|^{\frac{3r-4}{2}}|\mbox{\rm div}\ w|\|^2_{L^2}\Big).$$ Summing up all the estimates [\[2.15-10\]](#2.15-10){reference-type="eqref" reference="2.15-10"}--[\[2.18\]](#2.18){reference-type="eqref" reference="2.18"}, we have $$\begin{aligned}\label{2.15} \frac{d}{dt}&(\|z^+\|^{3r-2}_{L^{3r-2}(\mathbb{R}^{3})}+\|z^-\|^{3r-2}_{L^{3r-2}(\mathbb{R}^{3})}+\|w\|^{3r-2}_{L^{3r-2}(\mathbb{R}^{3})})\\ &\lesssim\|\nabla \pi\|^{\frac{2r\alpha}{3r\alpha-3}}_{L^{r,\infty}}\Big(\|z^+\|^{3r-2}_{L^{3r-2}(\mathbb{R}^{3})}+\|z^-\|^{3r-2}_{L^{3r-2}}\Big) +(\|w\|^{3r-2}_{L^{3r-2}}+\|z^+\|^{3r-2}_{L^{3r-2}}+\|z^-\|^{3r-2}_{L^{3r-2}}). 
\end{aligned}$$ Let $\mathcal{Y}(t):=\|z^+\|^{3r-2}_{L^{3r-2}(\mathbb{R}^{3})}+\|z^-\|^{3r-2}_{L^{3r-2}(\mathbb{R}^{3})}+\|w\|^{3r-2}_{L^{3r-2}(\mathbb{R}^{3})}$; then [\[2.15\]](#2.15){reference-type="eqref" reference="2.15"} becomes $$\frac{d}{dt}\mathcal{Y}(t) \leq C\| \nabla \pi \|^{\frac{2r\alpha}{3r\alpha-3}}_{L^{r,\infty}(\mathbb{R}^{3})}\mathcal{Y}(t)+C\mathcal{Y}(t).$$ As in the previous case, this allows us to finish the proof of Theorem [Theorem 1](#the1.1){reference-type="ref" reference="the1.1"}. $\Box$\ * # Proof of Theorem [Theorem 2](#the1.2){reference-type="ref" reference="the1.2"} {#proof-of-theorem-the1.2} Following the argument in [@Zhou06] or [@JWW20], we can establish a Serrin-type regularity criterion in terms of the gradient of the pressure function $\pi$. Indeed, from [\[L4-estimate\]](#L4-estimate){reference-type="eqref" reference="L4-estimate"}, we know $$\frac{1}{4}\frac{d}{dt}\|(u,b,w)\|_{L^{4}}^{4}+ \|\nabla(|u|^2,|b|^2,|w|^2)\|^2_{L^2}$$ $$+|\!||u||\nabla u||\!|^2_{L^2}+|\!||b||\nabla b||\!|^2_{L^2}+|\!||w||\nabla w||\!|^2_{L^2}+2\chi\|w\|_{L^{4}}^{4}+\kappa|||w|\mbox{div}\ w||_{L^{2}}^{2}$$ $$\lesssim \underbrace{\int_{\mathbb{R}^{3}}|\nabla \pi| |u|^3|\nabla |u|^2| dx}_{\mathcal{J}_1} +\underbrace{\int_{\mathbb{R}^{3}}(b\cdot \nabla) b\cdot |u|^2u dx}_{\mathcal{J}_2}$$ $$+\frac{1}{2}\underbrace{\int_{\mathbb{R}^{3}}\nabla(|b|^2)\cdot |u|^2u \,dx}_{\mathcal{J}_3}+\underbrace{\int_{\mathbb{R}^{3}}(b\cdot \nabla)u\cdot |b|^2b\, dx}_{\mathcal{J}_4}+\frac{\chi}{2}\underbrace{\int_{\mathbb{R}^{3}}|u|^2 u \cdot (\nabla \times w )dx}_{\mathcal{J}_5}$$ $$\begin{aligned} \label{L4-estimate-10} +\underbrace{\frac{\chi}{2}\int_{\mathbb{R}^{3}}|w|^2 w \cdot (\nabla \times u )dx}_{\mathcal{J}_6}- \underbrace{\int_{\mathbb{R}^{3}} \mbox{div}\ w\ (w \cdot \nabla|w|^2)dx}_{\mathcal{J}_7}.\end{aligned}$$ For this result, only the estimate of $\mathcal{J}_1$ changes, as follows: for $\gamma>1$ $$\int_{\mathbb{R}^{3}}\nabla \pi \cdot|u|^{2}u\, dx\leq \||\nabla 
\pi|^{1/2}\|_{L^{4,4}} \||\nabla \pi|^{1/2}\|_{L^{2\gamma,\infty}} \||u|^3\|_{L^{\frac{4\gamma}{3\gamma-2},\frac{4}{3}}}$$ $$=\|\nabla\pi\|^{1/2}_{L^{2,2}} \|\nabla \pi\|^{1/2}_{L^{\gamma,\infty}} \|u\|^3_{L^{\frac{12\gamma}{3\gamma-2},4}}\leq \frac{1}{4}\|\nabla \pi\|^2_{L^2}+ C\|\nabla \pi\|^{2/3}_{L^{\gamma,\infty}} \|u\|^4_{L^{\frac{12\gamma}{3\gamma-2},4}}$$ $$\leq \frac{1}{4}\|\nabla \pi\|^2_{L^2}+ C\|\nabla \pi\|^{\frac{2}{3}}_{L^{\gamma,\infty}} \||u|^2\|^2_{L^{\frac{6\gamma}{3\gamma-2},2}}$$ $$\leq \frac{1}{4}\|\nabla \pi\|^2_{L^2}+ C\|\nabla \pi\|^{\frac{2}{3}}_{L^{\gamma,\infty}} \||u|^2\|^{2(1-\frac{1}{\gamma})}_{L^{2,2}}|\!|\nabla|u|^2|\!|^{\frac{2}{\gamma}}_{L^{2,2}}$$ $$\leq \frac{1}{16}\|\nabla \pi\|^2_{L^2}+\frac{1}{8}|\!|\nabla|u|^2|\!|^{2}_{L^{2}} +C\|\nabla \pi\|^{\frac{2\gamma}{3(\gamma-1)}}_{L^{\gamma,\infty}} \|u\|^{4}_{L^{4}},$$ and thus $$\mathcal{J}_1 \leq \frac{1}{4}\|\nabla \pi\|^2_{L^2}+\frac{1}{16}|\!|\nabla|u|^2|\!|^{2}_{L^{2}} +C\|\nabla \pi\|^{\frac{2\gamma}{3(\gamma-1)}}_{L^{\gamma,\infty}} |\!|u|\!|^{4}_{L^{4}}.$$ Using the estimate $$\|\nabla \pi\|^2_{L^2}\lesssim \|(u\cdot \nabla )u+(b\cdot \nabla )b\|^2_{L^2}\lesssim |\!||u||\nabla u||\!|^2_{L^2}+ |\!||b||\nabla b||\!|^2_{L^2},$$ we get $$\mathcal{J}_1 \leq C \|\nabla \pi\|^{\frac{2\gamma}{3(\gamma-1)}}_{L^{\gamma,\infty}}|\!|u|\!|^{4}_{L^{4}}+\frac{1}{8}( |\!||u||\nabla u||\!|^2_{L^2}+ |\!||b||\nabla b||\!|^2_{L^2}).$$ Using integration by parts, $\mathcal{J}_2$, $\mathcal{J}_3$ and $\mathcal{J}_4$ are bounded by $$\int_{\mathbb{R}^{3}}|u||b|^2 (|\nabla |u|^2|+|\nabla |b|^2|) dx \leq C|\!|(|u|^2+|b|^2)b|\!|^2_{L^2}+\frac{1}{16}(|\!|\nabla |u|^2|\!|_{L^2}^2+|\!|\nabla |b|^2|\!|_{L^2}^2)$$ $$\leq C|\!|b|\!|^{\frac{2a_1}{a_1-3}}_{L^{a_1}}(|\!| |u|^2|\!|^{2}_{L^2}+|\!| |b|^2|\!|^{2}_{L^2})+\frac{1}{16}(|\!|\nabla|u|^2|\!|^{2}_{L^2}+|\!|\nabla|b|^2|\!|^{2}_{L^2}),$$ where we use the following inequality: 
$$|\!|b|\!|^2_{L^{a_1}}|\!||u|^2|\!|^2_{L^{\frac{2a_1}{a_1-2}}}\lesssim |\!|b|\!|^2_{L^{a_1}}|\!||u|^2|\!|^{2(1-\frac{3}{a_1})}_{L^2}|\!|\nabla|u|^2|\!|^{\frac{6}{a_1}}_{L^2} \leq C |\!|b|\!|^{\frac{2a_1}{a_1-3}}_{L^{a_1}}|\!| |u|^2|\!|^{2}_{L^2}+\frac{1}{16}|\!|\nabla|u|^2|\!|^{2}_{L^2}.$$ In a similar way, for $\mathcal{J}_5$ and $\mathcal{J}_6$, one shows $$|\mathcal{J}_5|\leq C\int_{\mathbb{R}^{3}}(|u|^{4}+|w|^{4})\,dx+\frac{1}{16}\int_{\mathbb{R}^{3}}||w||\nabla w||^2\,dx,$$ and $$|\mathcal{J}_6|\leq C\int_{\mathbb{R}^{3}} |\mbox{div}\ w|^2dx+\frac{1}{16}\int_{\mathbb{R}^{3}} |\nabla |w|^2|^2dx.$$ Plugging this into [\[L4-estimate-10\]](#L4-estimate-10){reference-type="eqref" reference="L4-estimate-10"}, we get $$\begin{aligned} \frac{d}{dt}&\|(u,b,w)\|_{L^{4}}^{4}+ \|\nabla(|u|^2,|b|^2,|w|^2)\|^2_{L^2} +|\!||u||\nabla u||\!|^2_{L^2}+|\!||b||\nabla b||\!|^2_{L^2}+|\!||w||\nabla w||\!|^2_{L^2}\\ &\lesssim \| \nabla \pi \|^{p}_{L^{q,\infty}(\mathbb{R}^{3})}\|(u,b,w)\|_{L^{4}}^{4} +|\!|b|\!|^{\frac{2a_1}{a_1-3}}_{L^{a_1}}\|(u,b,w)\|_{L^{4}}^{4}\\ &\lesssim\| \nabla \pi \|^{p_{\kappa}}_{L^{q_{\kappa},\infty}(\mathbb{R}^{3})}\|(u,b,w)\|_{L^{4}}^{4}+|\!|b|\!|^{\frac{2a_1}{a_1-3}}_{L^{a_1}}\|(u,b,w)\|_{L^{4}}^{4}\\ &\lesssim\| \nabla \pi \|^{p(1-\kappa)}_{L^{q,\infty }(\mathbb{R}^{3})}\| \nabla \pi \|^{c_{1}\kappa}_{L^{2 }(\mathbb{R}^{3})}\|(u,b,w)\|_{L^{4}}^{4}+|\!|b|\!|^{\frac{2a_1}{a_1-3}}_{L^{a_1}}\|(u,b,w)\|_{L^{4}}^{4}\\ &\leq C\| \nabla \pi \|^{p(1-\kappa)}_{L^{q,\infty}(\mathbb{R}^{3})}\Big\| |u||\nabla u|+|b||\nabla b| \Big\|^{c_{1}\kappa}_{L^{2 }(\mathbb{R}^{3})}\|(u,b,w)\|_{L^{4}}^{4}+|\!|b|\!|^{\frac{2a_1}{a_1-3}}_{L^{a_1}}\|(u,b,w)\|_{L^{4}}^{4}\\ &\leq C\| \nabla \pi \|^{\frac{2p(1-\kappa)}{2-c_{1}\kappa}}_{L^{q,\infty}(\mathbb{R}^{3})} \| u \|^{\frac{8}{2-c_{1}\kappa}}_{L^{4 }(\mathbb{R}^{3})}+|\!|b|\!|^{\frac{2a_1}{a_1-3}}_{L^{a_1}}\|(u,b,w)\|_{L^{4}}^{4}+\frac 18\Big(|\!| |u||\nabla u| |\!|^{2}_{L^{2 }(\mathbb{R}^{3})}+|\!||b||\nabla b||\!|^{2}_{L^{2 
}(\mathbb{R}^{3})}\Big)\\ &\leq C\| \nabla \pi \|^{p(1-\delta)}_{L^{q,\infty}(\mathbb{R}^{3})} \|(u,b,w)\|_{L^{4}(\mathbb{R}^{3})}^{4(1+2\delta) }+|\!|b|\!|^{\frac{2a_1}{a_1-3}}_{L^{a_1}}\|(u,b,w)\|_{L^{4}}^{4}.\end{aligned}$$ Notice that $2/p_{\kappa}+3/q_{\kappa}=3$. Choosing $\delta=\frac{(2-c_{1})\kappa}{2-c_{1}\kappa},~~c_{1}=\frac{4}{3}$, it finally follows that $$\frac{d}{dt}\|(u,b,w)\|_{L^{4}}^{4}\lesssim \| \nabla \pi \|^{p(1-\delta)}_{L^{q,\infty}(\mathbb{R}^{3})} \|(u,b,w)\|_{L^{4}(\mathbb{R}^{3})}^{4(1+2\delta) }.$$ As before, this allows us to finish the proof of Theorem [Theorem 2](#the1.2){reference-type="ref" reference="the1.2"}. L.C. Berselli and G.P. Galdi, Regularity criteria involving the pressure for the weak solutions of the Navier-Stokes equations, Proc. Amer. Math. Soc., 130 (2002) 3585--3595. S. Bosia, V. Pata and J. C. Robinson, A weak-$L^p$ Prodi-Serrin type regularity criterion for the Navier-Stokes equations, J. Math. Fluid Mech., 16 (2014), 721--725. D. Chae, On the regularity conditions for the Navier--Stokes and related equations, Rev. Mat. Iberoam. 23 (2007), 371--384. M. Colombo, S. Haffter, Global regularity for the hyperdissipative Navier-Stokes equation below the critical order, ArXiv e-prints, November 2019. A. Córdoba, D. Córdoba, A maximum principle applied to quasi-geostrophic equations, Commun. Math. Phys. 249 (2004), 511--528. L. Deng, H. Shang, Global well-posedness for n-dimensional magneto-micropolar equations with hyperdissipation, Appl. Math. Lett. 111 (2021) 106610. H. Duan, On regularity criteria in terms of pressure for the 3D viscous MHD equations. Appl. Anal. 91 (2012) 947--952. J. Fan, T. Ozawa, On the regularity criteria for the generalized Navier--Stokes equations and Lagrangian averaged Euler equations, Differ. Integral Equ., 21 (2008), 443--457. J. Fan, Y. Fukumoto, Y. Zhou, Logarithmically improved regularity criteria for the generalized Navier-Stokes and related equations. Kinet. Relat. 
Models 6 (2013), 545--556. J. Fan, X. Zhong, Regularity criteria for 3D generalized incompressible magneto-micropolar fluid equations. Appl. Math. Lett. 127 (2022), Paper No. 107840, 5 pp. L. Grafakos, Classical Fourier analysis. 2nd Edition, Springer, 2008. X. Ji, Y. Wang, W. Wei, New regularity criteria based on pressure or gradient of velocity in Lorentz spaces for the 3D Navier-Stokes equations. J. Math. Fluid Mech. 22 (2020), Art. 13, 8 pp. J.-M. Kim, On regularity criteria via pressure for the 3D MHD equations in a half space. Adv. Math. Phys. 2022, Art. ID 6954802, 7 pp. J. Malý, Advanced theory of differentiation--Lorentz spaces, March 2003, http://www.karlin.mff.cuni.cz/\~maly/lorentz.pdf. Z. Li, P. Niu, New regularity criteria for the 3D magneto-micropolar fluid equations in Lorentz spaces. Math. Methods Appl. Sci. 44 (2021), no. 7, 6056--6066. M. Loayza, M. A. Rojas-Medar, A weak-$L^p$ Prodi-Serrin type regularity criterion for the micropolar fluid equations. J. Math. Phys. 57 (2016) 021512, 6 pp. R. O'Neil, Convolution operators and $L^{p,q}$ spaces. Duke Math. J., 30 (1963), 129--142. B. Pineau, X. Yu, On Prodi-Serrin type conditions for the 3D Navier-Stokes, Nonlinear Anal. 190 (2020), 111612, 15 pp. T. Suzuki, Regularity criteria of weak solutions in terms of the pressure in Lorentz spaces to the Navier-Stokes equations. J. Math. Fluid Mech., 14 (2012), 653--660. T. Suzuki, A remark on the regularity of weak solutions to the Navier-Stokes equations in terms of the pressure in Lorentz spaces. Nonlinear Analysis: Theory, Methods & Applications, 75 (2012), 3849--3853. Y. Zhou, On regularity criteria in terms of pressure for the Navier-Stokes equations in $\mathbb{R}^{3}$, Proc. Amer. Math. Soc., 134 (2005) 149--156. Y. Zhou, Regularity criteria for the 3D MHD equations in terms of the pressure. Internat. J. Non-Linear Mech. 41 (2006), 1174--1180. Y. Zhou, Regularity criteria for the generalized viscous MHD equations. Ann. Inst. H. Poincaré Anal. 
Non Linéaire 24 (2007), 491--505.
--- author: - | Tahir Shamsher$^{a}$, S. Pirzada$^{b}$, Mushtaq A. Bhat$^{c}$\ $^{}$ *$^{a,b}$Department of Mathematics, University of Kashmir, Srinagar, Kashmir, India*\ $^{}$*$^{c}$Department of Mathematics, National Institute of Technology, Srinagar, India*\ $^{a}$`tahir.maths.uok@gmail.com`; $^{b}$`pirzadasd@kashmiruniversity.ac.in`\ $^{c}$`mushtaqab@nitsri.net` title: On the Estrada index of unicyclic and bicyclic signed graphs --- Let $\Gamma=(G, \sigma)$ be a signed graph of order $n$ with eigenvalues $\mu_1,\mu_2,\ldots,\mu_n.$ We define the Estrada index of a signed graph $\Gamma$ as $EE(\Gamma)=\sum_{i=1}^ne^{\mu_i}$. We characterize the signed unicyclic graphs with the maximum Estrada index. The signed graph $\Gamma$ is said to have the pairing property if $\mu$ is an eigenvalue whenever $-\mu$ is an eigenvalue of $\Gamma$ and both $\mu$ and $-\mu$ have the same multiplicities. If $\Gamma_{p}^-(n, m)$ denotes the set of all unbalanced graphs on $n$ vertices and $m$ edges with the pairing property, we determine the signed graphs having the maximum Estrada index in $\Gamma_{p}^-(n, m)$, when $m=n$ and $m=n+1$. Finally, we find the signed graphs among all unbalanced complete bipartite signed graphs having the maximum Estrada index. # Introduction {#sec1} A signed graph $\Gamma$ is an ordered pair $(G, \sigma)$ in which $G$ is an underlying graph and $\sigma$ is a function from the edge set $E(G)$ to $\{-1,1\}$, which is called a signing. For a signed graph $\Gamma=(G, \sigma)$ and a subgraph $H \subset G$, we use the notation $(H, \sigma)$ to denote the signed subgraph of $\Gamma=(G, \sigma)$, where $\sigma$ is the restriction of the mapping $\sigma: E(G) \rightarrow \{-1,1\}$ to the edge set $E(H)$. The adjacency matrix $A_{\Gamma}=\left(a_{i j}\right)$ of a signed graph $\Gamma=(G, \sigma)$ is obtained from that of the underlying graph by putting $-1$ or $1$, whenever the corresponding edge is negative or positive, respectively. 
The polynomial $\varphi(\Gamma, x)=\operatorname{det}\left(x I-A_{\Gamma}\right)$ is called the characteristic polynomial of the signed graph $\Gamma=(G, \sigma)$. For brevity, the spectrum of the adjacency matrix $A_{\Gamma}$ is called the spectrum of the signed graph $(G, \sigma)$. Let the signed graph $\Gamma$ of order $n$ have distinct eigenvalues $\mu_1(\Gamma),\mu_2(\Gamma),\ldots,\mu_k(\Gamma)$ (we drop $\Gamma$ where the signed graph is understood) and let their respective multiplicities be $m_1,m_2,\ldots,m_k$. The adjacency spectrum of $\Gamma$ is written as $Spec(\Gamma)=\{\mu^{(m_1)}_1(\Gamma),\mu^{(m_2)}_2(\Gamma),\ldots,\mu^{(m_k)}_k(\Gamma)\}$. From the definition, it follows that the matrix $A_{\Gamma}$ is real symmetric and hence the eigenvalues $\mu_{1}((G, \sigma)) \geq \mu_{2}((G, \sigma)) \geq \dots \geq \mu_{n}((G, \sigma))$ of the signed graph $(G, \sigma)$ are all real numbers. The largest eigenvalue $\mu_{1}(\Gamma)$ is also known as the index of the signed graph $\Gamma$.\ The concept of signature switching is necessary when dealing with signed graphs. Let $Z$ be a subset of the vertex set $V(\Gamma)$. The switched signed graph $\Gamma^{Z}$ is obtained from $\Gamma$ by reversing the signs of the edges in the cut $[Z, V(\Gamma) \backslash Z]$, and the signed graphs $\Gamma$ and $\Gamma^{Z}$ are said to be switching equivalent. Switching equivalence is an equivalence relation that preserves the eigenvalues. The switching class of $\Gamma$ is denoted by $[\Gamma]$. The sign of a cycle is the product of the signs of its edges. A signed cycle $C_{n\sigma}$ on $n$ vertices is positive (or negative) if it contains an even (or odd) number of negative edges, respectively. A signed graph is said to be balanced if all of its signed cycles are positive; otherwise, it is unbalanced. That is, a signed graph is balanced if it switches to the signed graph with the all-positive signature. 
Otherwise, it is said to be unbalanced. By $\sigma \sim+,$ we mean that the signature $\sigma$ is equivalent to the all-positive signature, and the corresponding signed graph is equivalent to its underlying graph. In general, the signature is determined by the set of positive cycles. Hence, all trees are switching equivalent to the all-positive signature. Equivalently, we can say that the edge signs of bridges are irrelevant. Finally, the signature of a balanced signed graph is switching equivalent to the all-positive one. In our drawings of signed graphs, we represent the negative edges with dashed lines and the positive edges with solid lines. A connected signed graph is said to be unicyclic if it has the same number of vertices and edges. If the number of edges is one more than the number of vertices, then it is said to be bicyclic. For definitions and notations of graphs, we refer to [@sp].\ The signed graph $\Gamma$ is said to have the pairing property if $\mu$ is an eigenvalue whenever $-\mu$ is an eigenvalue of $\Gamma$ and both $\mu$ and $-\mu$ have the same multiplicities. The signed graph $\Gamma=(G,+)$ with the all-positive signature has the pairing property if and only if its underlying graph $G$ is bipartite. For an arbitrary signature $\sigma$ this is not true in general.\ The rest of the paper is organized as follows. In Section $2$, we extend the Estrada index to signed graphs. Furthermore, we characterize the signed unicyclic graphs with the maximum Estrada index. In Section $3$, we determine the signed graphs with the maximum Estrada index in the sets of all unbalanced unicyclic and bicyclic graphs having the pairing property, respectively. # Estrada index in signed graphs {#sec2} The Estrada index of a graph, a graph-spectrum-based structural descriptor, is defined as the trace of the exponential of its adjacency matrix and was first proposed by Estrada in $2000$. de la Peña et al. [@de] recommended calling it the Estrada index, which has since become widely adopted. 
This index can be used to measure a range of things, including the degree of protein folding [@e1; @e2; @e3] and the subgraph centrality and bipartivity of complex networks [@e4; @e5]. Because of the exceptional usefulness of the graph Estrada index, various Estrada indices based on the eigenvalues of other graph matrices have been investigated. Estrada index-based invariants concerning the Laplacian matrix, signless Laplacian matrix, distance matrix, distance Laplacian matrix and distance signless Laplacian matrix have been studied, see [@e6].\ In social networks, the balance (stability) [@e7; @e8] of a signed network can be quantified by $$k=\frac{tr(e^{A_{(G,\sigma)}})}{tr(e^{A_{(G,+)}})},$$ where $tr(X)$ denotes the trace of the matrix $X$. Motivated by Equation $(2.1)$, we define the Estrada index for a signed graph, in full analogy with the Estrada index of a graph, as $$EE(\Gamma)=EE((G,\sigma))=\sum_{i=1}^ne^{\mu_i},$$ where $\mu_1$, $\mu_2$, $\ldots$, $\mu_n$ are the eigenvalues of the signed graph $\Gamma$. The Seidel matrix $S_G$ of a simple graph $G$ with $n$ vertices and adjacency matrix $A_G$ is defined as $S_G = J -I -2A_G$. Obviously, the Seidel matrix is the adjacency matrix of some signed graph $\Gamma=(K_n, \sigma)$, where $K_n$ is the complete graph on $n$ vertices. Therefore, Eq. $(2.2)$ is an extension of the Seidel Estrada index [@se].\ For a non-negative integer $k$, let $M_{k}(\Gamma)=\sum_{i=1}^{n} \mu_{i}^{k}$ denote the $k$-th spectral moment of $\Gamma$. From the Taylor expansion of $\mathrm{e}^{\mu_{i}}$, $E E(\Gamma)$ in $(2.2)$ can be rewritten as $$E E(\Gamma)=\sum_{k=0}^{\infty} \frac{M_{k}(\Gamma)}{k !}.$$ It is well-known that $M_{k}(\Gamma)$ equals the difference between the number of positive and the number of negative closed walks of length $k$ in $\Gamma$. Mathematically, we have $$M_{k}(\Gamma)=w^+(k)-w^-(k),$$ where $w^+(k)$ and $w^-(k)$ are, respectively, the numbers of positive and negative closed walks of length $k$ in $\Gamma$. 
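The quantities above are straightforward to evaluate numerically. The following is a minimal sketch (assuming NumPy is available; the unbalanced triangle used as the example is illustrative and is not one of the graphs from this paper) that computes $EE(\Gamma)$ from the eigenvalues of the signed adjacency matrix and checks the spectral moments $M_k$ against the values stated above:

```python
import numpy as np

def estrada_index(A):
    """Estrada index EE = sum of exp(mu_i) over eigenvalues of the signed adjacency matrix."""
    return float(np.sum(np.exp(np.linalg.eigvalsh(A))))

def spectral_moment(A, k):
    """k-th spectral moment M_k = tr(A^k), the signed count of closed k-walks."""
    return float(np.trace(np.linalg.matrix_power(A, k)))

# Illustrative example: unbalanced triangle (one negative edge encoded as -1).
A = np.array([[0, 1, 1],
              [1, 0, -1],
              [1, -1, 0]], dtype=float)

n, m = 3, 3
assert abs(spectral_moment(A, 0) - n) < 1e-9        # M_0 = n
assert abs(spectral_moment(A, 1)) < 1e-9            # M_1 = 0
assert abs(spectral_moment(A, 2) - 2 * m) < 1e-9    # M_2 = 2m
# M_3 = 6(t^+ - t^-); here t^+ = 0 and t^- = 1, so M_3 = -6.
assert abs(spectral_moment(A, 3) + 6) < 1e-9
print(estrada_index(A))
```

Since this triangle is switching equivalent to the all-negative triangle, its spectrum is $\{-2, 1, 1\}$ and the printed value equals $e^{-2}+2e$.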
In particular, we have $M_{0}(\Gamma)=n$, $~ M_{1}(\Gamma)=0$, $~ M_{2}(\Gamma)=2m$ and $~ M_{3}(\Gamma)=6(t^+-t^-)$, where $n$ is the number of vertices, $m$ is the number of edges, $t^+$ is the number of positive triangles and $t^-$ is the number of negative triangles in the signed graph $\Gamma$. Let $\Gamma_{1}$ and $\Gamma_{2}$ be two signed graphs. If $M_{ k}\left(\Gamma_{1}\right) \geq M_{ k}\left(\Gamma_{2}\right)$ holds for every positive integer $k$, then, by Eq. $(2.3)$, we get $E E\left(\Gamma_{1}\right) \geq E E\left(\Gamma_{2}\right)$. Further, if the strict inequality $M_{k}\left(\Gamma_{1}\right)>M_{ k}\left(\Gamma_{2}\right)$ holds for at least one integer $k$, then $E E\left(\Gamma_{1}\right)>E E\left(\Gamma_{2}\right)$. It is easy to see that if $\Gamma$ has $q$ connected components $\Gamma_{1}, \Gamma_{2}, \ldots, \Gamma_{q}$, then $E E(\Gamma)=$ $\sum_{i=1}^{q} E E\left(\Gamma_{i}\right).$ So, we shall investigate the Estrada index of connected signed graphs from now on. One classical problem of graph spectra is to identify the extremal graphs with respect to the Estrada index in some given class of graphs, see, for example, [@h; @zu; @g1]. For a signed tree, all signatures are equivalent. The following result shows that among all signed trees on $n$ vertices, the signed path $P_n$ has the minimum and the signed star $S_n$ has the maximum Estrada index. ![Signed graphs $\Gamma_i$, $i=1,2,\ldots,6$.](Fig1){#Figure 1} **Theorem 1**. *[@h] If $T_{n}$ is an $n$-vertex tree different from $S_{n}$ and $P_{n}$, then $$EE((P_{n}, \sigma))<EE((T_{n}, \sigma))<EE((S_{n}, \sigma)).$$* **Remark 1.1** Let $\Gamma = (G, +)$ be a connected signed graph of order $n$ with the all-positive signature and let $e$ be a positive edge. The signed graph $\Gamma^{\prime}=\Gamma+e$ is obtained from $\Gamma$ by adding the edge $e$. 
It is easy to see that any self-returning walk of length $k$ of $\Gamma$ is also a self-returning walk of length $k$ of $\Gamma^{\prime}$. Thus, $M_{ k}\left(\Gamma'\right) \geq M_{ k}\left(\Gamma\right)$ and $EE(\Gamma^{\prime})\geq EE(\Gamma).$ But in general, adding a signed edge between two non-adjacent vertices of the signed graph $\Gamma = (G, \sigma)$ may not increase the Estrada index. Consider the signed graphs $\Gamma_i$, $i=1, ~2, \ldots, 6$, as shown in Figure $1$. Their spectra are given by $Spec(\Gamma_1)=\{2,-2, 0^2\}$, $Spec(\Gamma_2)=\{\frac{-1+\sqrt{17}}{2},1, 0, \frac{-1-\sqrt{17}}{2}\}$, $Spec(\Gamma_3)=\{2,1,-1,-2\}$, $Spec(\Gamma_4)=\{\sqrt{5},1,-1, -\sqrt{5}\}$, $Spec(\Gamma_5)=\{2.17,0.31,-1,-1.48\}$ and $Spec(\Gamma_6)=\{2,1,-1,-2\}$, respectively. The signed graph $\Gamma_2$ is obtained from $\Gamma_1$ by adding a negative edge between two non-adjacent vertices, and clearly $EE(\Gamma_2)=8.55<9.52= EE(\Gamma_1).$ The signed graph $\Gamma_4$ is obtained from $\Gamma_3$ by adding a negative edge between two non-adjacent vertices, and $EE(\Gamma_4)=12.54 > 10.61= EE(\Gamma_3).$ Furthermore, the signed graph $\Gamma_6$ is obtained from $\Gamma_5$ by adding a positive edge, and $EE(\Gamma_5)=10.71 > 10.61= EE(\Gamma_6).$ Therefore, the edge addition (deletion) technique cannot be used to compare the Estrada index in signed graphs. **Theorem 2**. *Let $G$ be a graph on $n$ vertices. Then $EE((G,+))\geq EE((G,-))$, with strict inequality if and only if $G$ contains at least one odd cycle.* **Proof.** Let $G$ be a graph on $n$ vertices. Put $\Gamma_1=(G,+)$ and $\Gamma_2=(G,-)$. Then, by Eq. $(2.2)$, we have $$\begin{aligned} EE(\Gamma_1)-EE(\Gamma_2)&=\sum_{i=1}^ne^{\mu_i(\Gamma_1)}-\sum_{i=1}^ne^{\mu_i(\Gamma_2)} \\ &= \sum_{i=1}^n(e^{\mu_i(\Gamma_1)}-e^{\mu_i(\Gamma_2)}). \end{aligned}$$ The signed graph $\Gamma_2$ can be obtained from the signed graph $\Gamma_1$ by negating each edge. Therefore $Spec(\Gamma_1)=-Spec(\Gamma_2)$. Thus, by rearrangement of Eq. 
$(2.5)$ and using Taylor's expansion, we have $$\begin{aligned} EE(\Gamma_1)-EE(\Gamma_2)&= \sum_{i=1}^n(e^{\mu_i(\Gamma_1)}-e^{-\mu_i(\Gamma_1)})\\ &= 2\sum_{k=0}^{\infty}\frac{M_{2k+1}(\Gamma_1)}{(2k+1)!}. \end{aligned}$$ The signed graph $\Gamma_1$ has the all-positive signature and therefore, by Eq. $(2.4)$, $M_{2k+1}(\Gamma_1)\geq 0$. If $\Gamma_1$ has an odd cycle of size $l$, then $M_{l}(\Gamma_1)> 0$. Hence the result follows. ------------------------------------------------------------------------ There exist exactly two switching classes of signatures on an odd unicyclic graph (that is, a unicyclic graph whose unique cycle has odd length): one in which the cycle is all positive and one in which it is all negative. The following result is an immediate consequence of Theorem [Theorem 2](#2.3){reference-type="ref" reference="2.3"}. **Corollary 3**. *Let $G$ be an odd unicyclic graph of order $n$ and let $\Gamma_1$ be any balanced signed graph on $G$ and $\Gamma_2$ be any unbalanced one. Then $EE(\Gamma_1) > EE(\Gamma_2)$.* Let $G$ be a bipartite unicyclic graph of girth $l$ and order $n$ and let $\Gamma_1$ be any balanced signed graph on $G$ and $\Gamma_2$ be any unbalanced one. It is easy to see that $M_{2k+1}(\Gamma_1)=0=M_{2k+1}(\Gamma_2)$ for each $k\geq 0$, $M_{2k}(\Gamma_1)=M_{2k}(\Gamma_2)$ for $2k\leq l-2$ and $M_{2k}(\Gamma_1)>M_{2k}(\Gamma_2)$ for $2k\geq l$. In particular, $M_{l}(\Gamma_1)=M_{l}(\Gamma_2)+4l.$ Thus, by Eqs. $(2.3)$ and $(2.4)$, we have the following lemma. **Lemma 4**. *Let $G$ be a bipartite unicyclic graph of order $n$ and let $\Gamma_1$ be any balanced signed graph on $G$ and $\Gamma_2$ be any unbalanced one. Then $EE(\Gamma_1) > EE(\Gamma_2)$.* Let $\Gamma^+(n,l)$ and $\Gamma^-(n,l)$ denote the sets of balanced and unbalanced unicyclic signed graphs with $n$ vertices containing a cycle of length $l\le n$, respectively. Also, we denote by $\Gamma_n^{l+}$ (resp.
$\Gamma_n^{l-}$), the signed graph obtained by identifying the center of the signed star $S_{n-l+1}$ with a vertex of the positive cycle $C_{l+}$ (resp. the negative cycle $C_{l-}$). Du et al. [@zu] characterized the unique signed unicyclic graph with all-positive signature having the maximum Estrada index and proved the following. **Lemma 5**. *Let $\Gamma=(G,+)$ be a unicyclic graph on $n \geq 4$ vertices. Then $EE(\Gamma) \leq EE(\Gamma_n^{3+})$ with equality if and only if $\Gamma$ is isomorphic to $\Gamma_n^{3+}$.* The following result is directly obtained from Corollary [Corollary 3](#1.4){reference-type="ref" reference="1.4"}, Lemma [Lemma 4](#1.5){reference-type="ref" reference="1.5"} and Lemma [Lemma 5](#1.6){reference-type="ref" reference="1.6"}. **Theorem 6**. *Let $\Gamma=(G,\sigma)$ be a signed unicyclic graph on $n \geq 4$ vertices. Then $EE(\Gamma) \leq EE(\Gamma_n^{3+})$ with equality if and only if $\Gamma$ is switching equivalent to $\Gamma_n^{3+}$.* Next, we show that the Estrada index of the balanced cycle $C_{n+}$ is almost equal to the Estrada index of the unbalanced cycle ${C}_{n-}$. **Theorem 7**. *Let $C_{n+}$ and $C_{n-}$ be the balanced and unbalanced cycles on $n$ vertices respectively. Then $E E\left(C_{n-}\right) \approx n J_{0}\approx E E\left(C_{n+}\right),$ where $J_{0}=\sum_{r \geqslant 0} \frac{1}{(r !)^{2}}=2.27958530 \ldots .$ Also, $EE\left(C_{n+}\right)=EE\left(C_{n-}\right)+\epsilon_n$, where $\epsilon_n\rightarrow 0$ as $n\rightarrow \infty$.* **Proof.** The Estrada index of the $n$-vertex signed cycle $C_{n+}$ can be approximated [@ec] as $$E E\left(C_{n+}\right) \approx n J_{0},$$ where $J_{0}=\sum_{r \geqslant 0} \frac{1}{(r !)^{2}}=2.27958530 \ldots .$\ The eigenvalues of the unbalanced cycle $C_{n-}$ are given by $2 \cos \frac{ (2r+1) \pi}{n}, \quad r=0, 1,2, \ldots, n-1$.
Therefore $$E E\left(C_{n-}\right)=\sum_{r=0}^{n-1} \mathrm{e}^{2 \cos ( (2r+1) \pi / n)} .$$ The angles $(2r+1) \pi / n$, for $r=0,1,2, \ldots, n-1,$ are equally spaced in the interval $(0,2 \pi)$. Recognizing the sum as a Riemann sum, we have $$E E\left(C_{n-}\right)=n\left(\frac{1}{n} \sum_{r=0}^{n-1} \mathrm{e}^{2 \cos ((2 r+1) \pi / n)}\right) \approx n\left(\frac{1}{2 \pi} \int_{0}^{2 \pi} \mathrm{e}^{2 \cos x} \mathrm{~d} x\right).$$ As $\mathrm{e}^{2 \cos x}$ is symmetric about $x=\pi$ on $[0,2\pi]$, we have $$\int_{0}^{2 \pi} \mathrm{e}^{2 \cos x} \mathrm{~d} x=2 \int_{0}^{\pi} \mathrm{e}^{2 \cos x} \mathrm{~d} x=2\pi J_{0},$$ where $J_{0}=I_0(2)$ is a special value of the modified Bessel function of the first kind (see [@bes]): $$J_{0}=\sum_{r=0}^{\infty} \frac{1}{(r !)^{2}}=2.27958530 \ldots.$$ In view of this, $$E E\left(C_{n-}\right) \approx n J_{0}=2.27958530 n .$$ Eqs. $(2.7)$ and $(2.8)$ give $$EE\left(C_{n-}\right) \approx n J_{0} \approx EE\left(C_{n+}\right) .$$ Note that $M_{k}(C_{n+})=M_{k}(C_{n-})$ for $k\leq n-1$ and $M_{k}(C_{n+})\geq M_{k}(C_{n-})$ for $k\geq n$. In particular, $M_{n}(C_{n+})=M_{n}(C_{n-})+4n.$ Thus, by Eqs. $(2.3)$ and $(2.4)$, we get $$EE(C_{n+})-EE(C_{n-})= \frac{4n}{n!}+\sum_{k=n+1}^{\infty}\frac{M_{k}(C_{n+})-M_{k}(C_{n-})}{k!}.$$ The eigenvalues of $C_{n+}$ and $C_{n-}$ are, respectively, given by $2 \cos \frac{ 2(r-1) \pi}{n}$ and $2 \cos \frac{ (2r+1) \pi}{n}, \quad r=0, 1,2, \ldots, n-1.$ Therefore $$\begin{aligned} \sum_{k=n+1}^{\infty}\frac{M_{k}(C_{n+})-M_{k}(C_{n-})}{k!}&=\sum_{k=n+1}^{\infty}\frac{\sum_{r=0}^{n-1}2^k(\cos^k \frac{ 2(r-1) \pi}{n}- \cos^k \frac{ (2r+1) \pi}{n})}{k!}. \end{aligned}$$ The maximum value of the function $\cos^k \frac{ 2(r-1) \pi}{n}- \cos^k \frac{ (2r+1) \pi}{n}$ is $2$. Thus, by Eq.
$(2.11)$, we have $$\begin{aligned} \sum_{k=n+1}^{\infty}\frac{M_{k}(C_{n+})-M_{k}(C_{n-})}{k!} &\leq \sum_{k=n+1}^{\infty}\frac{\sum_{r=0}^{n-1}2^{k+1}}{k!} \\ &=\sum_{k=n+1}^{\infty}\frac{2^{k+1}n}{k!} \\ &= \frac{2^{n+2}n}{(n+1)!}+ \frac{2^{n+3}n}{(n+2)!}+\frac{2^{n+4}n}{(n+3)!}+\cdots \\&\leq \frac{2^{n+2}n}{n!}\left(\frac{1}{(n+1)}+ \frac{2}{(n+1)^2}+\frac{2^2}{(n+1)^3}+\cdots \right). \end{aligned}$$ The series $\frac{1}{(n+1)}+ \frac{2}{(n+1)^2}+\frac{2^2}{(n+1)^3}+\dots$ is an infinite geometric progression with common ratio $\frac{2}{n+1}$, whose sum equals $\frac{1}{n-1}$. By inequality $(2.12)$, we obtain $$\begin{aligned} \sum_{k=n+1}^{\infty}\frac{M_{k}(C_{n+})-M_{k}(C_{n-})}{k!} &\leq \frac{2^{n+2}n}{n!(n-1)} . \end{aligned}$$ Eqs. $(2.10)$ and $(2.13)$ imply that $$\begin{aligned} EE(C_{n+})-EE(C_{n-}) &\leq \frac{2^{n+2}n+4n(n-1)}{n!(n-1)} , \end{aligned}$$ where the term $\frac{2^{n+2}n+4n(n-1)}{n!(n-1)}\sim \frac{2^{n+2}}{n!}$ tends to zero as the girth $n$ becomes large. Moreover, the accuracy of Eq. $(2.9)$ can be seen from the data given in Table 1.
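The approximation $EE(C_{n\pm})\approx nJ_0$ and the entries of Table 1 can be reproduced directly from the eigenvalue formulas; a minimal numerical sketch (the function names are ours, for illustration only):

```python
import math

# J0 = I_0(2) = sum_{r>=0} 1/(r!)^2, the Bessel-type constant from the paper
J0 = sum(1.0 / math.factorial(r) ** 2 for r in range(30))

def ee_cycle_plus(n):
    # eigenvalues of the balanced cycle C_{n+}: 2*cos(2*pi*r/n), r = 0,...,n-1
    return sum(math.exp(2 * math.cos(2 * math.pi * r / n)) for r in range(n))

def ee_cycle_minus(n):
    # eigenvalues of the unbalanced cycle C_{n-}: 2*cos((2r+1)*pi/n), r = 0,...,n-1
    return sum(math.exp(2 * math.cos((2 * r + 1) * math.pi / n)) for r in range(n))

for n in (3, 10, 15):
    print(n, ee_cycle_plus(n), n * J0, ee_cycle_minus(n))
```

For $n=10$ the three printed values agree with the corresponding row of Table 1 to within $10^{-4}$, and the agreement improves rapidly as $n$ grows, as the bound above predicts.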
As seen from this data, except for the first few values of $n$ $(n\leq 9)$, the accuracy is more than sufficient.\
**Table 1.** Approximate and exact values of the Estrada index of the $n$-vertex signed cycles $\left(C_{n+}\right)$ and $\left(C_{n-}\right).$

| $n$ | $E E\left(C_{n+}\right)$ | $n J_{0}$ | $E E\left(C_{n-}\right)$ |
|----:|-------------------------:|----------:|-------------------------:|
| 3 | $8.1248150$ | $6.8387561$ | $5.571899$ |
| 4 | $9.5243914$ | $9.1183414$ | $8.7127342$ |
| 5 | $11.4961863$ | $11.3979268$ | $11.2993665$ |
| 6 | $13.6967139$ | $13.6775122$ | $13.658309$ |
| 7 | $15.9602421$ | $15.9570975$ | $15.9533523$ |
| 8 | $18.2371256$ | $18.2366829$ | $18.2368574$ |
| 9 | $20.5163225$ | $20.5162683$ | $20.5163962$ |
| 10 | $22.7958591$ | $22.7958536$ | $22.7958491$ |
| 11 | $25.0754389$ | $25.0754390$ | $25.0754200$ |
| 12 | $27.3550237$ | $27.3550243$ | $27.3550195$ |
| 13 | $29.6346089$ | $29.6346097$ | $29.6345864$ |
| 14 | $31.9141942$ | $31.9141951$ | $31.9141892$ |
| 15 | $34.1937795$ | $34.1937804$ | $34.1937780$ |

Hence the result follows. ------------------------------------------------------------------------ The main tool used to prove Lemma [Lemma 5](#1.6){reference-type="ref" reference="1.6"} is the construction of mappings which increase the $k$-th spectral moment for each $k$, combined with Eq. $(2.3)$. For example, consider the following result. **Theorem 8**. **[@zu]* For the all-positive signature $\sigma$ and $4\leq l \leq n$, we have $EE(\Gamma_n^{(l+1)+})<EE(\Gamma_n^{l+})$.* But for signed unicyclic graphs, in general, we cannot construct a mapping which increases the $k$-th spectral moment for each $k$. To illustrate this, consider the signed unicyclic graphs $\Gamma_5^{4-}$ and $\Gamma_5^{5-}$. Their spectra are given by $Spec(\Gamma_5^{5-})=\{\frac{1+\sqrt{5}}{2}^2, \frac{1-\sqrt{5}}{2}^2,-2\}$ and $Spec(\Gamma_5^{4-})=\{\sqrt{3},\sqrt{2}, 0,-\sqrt{2},-\sqrt{3}\}$, respectively.
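These two spectra determine the Estrada indices and low-order spectral moments compared next; a quick numerical check (helper names are illustrative):

```python
import math

phi, psi = (1 + 5 ** 0.5) / 2, (1 - 5 ** 0.5) / 2

# spectra as listed above: Gamma_5^{5-} and Gamma_5^{4-}
spec_5minus = [phi, phi, psi, psi, -2.0]
spec_4minus = [3 ** 0.5, 2 ** 0.5, 0.0, -(2 ** 0.5), -(3 ** 0.5)]

def ee(spec):
    # Estrada index: sum of e^{mu_i} over the eigenvalues
    return sum(math.exp(mu) for mu in spec)

def moment(spec, k):
    # k-th spectral moment M_k = sum of mu_i^k
    return sum(mu ** k for mu in spec)

print(ee(spec_5minus), ee(spec_4minus))                # Estrada indices
print(moment(spec_5minus, 4), moment(spec_4minus, 4))  # M_4: 30 vs 26
print(moment(spec_5minus, 5), moment(spec_4minus, 5))  # M_5: -10 vs 0
```

The output confirms that the graph with the larger Estrada index does not dominate in every spectral moment.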
It is easy to see that $EE(\Gamma_5^{5-})=11.30> 11.18 = EE(\Gamma_5^{4-})$, while at the same time $M_4(\Gamma_5^{5-})= 30 > 26= M_4(\Gamma_5^{4-})$ but $M_5(\Gamma_5^{5-})= -10 < 0= M_5(\Gamma_5^{4-})$, so the spectral moments are not uniformly ordered. Thus the problem of finding the unbalanced unicyclic graphs with extremal Estrada index is interesting. # Unbalanced unicyclic and bicyclic graphs with the pairing property and with maximal Estrada index {#sec2} In this section, we first characterize the unbalanced bipartite unicyclic graphs with the maximal Estrada index. Given two non-increasing real number sequences $\alpha=\{\alpha_1,\alpha_2, \alpha_3,\ldots, \alpha_n\}$ and $\beta=\{\beta_{1}, \beta_{2}, \beta_{3}, \ldots, \beta_{n}\}$, we say that $\alpha$ majorizes $\beta$, denoted by $\alpha \succeq \beta$, if $\sum_{j=1}^{t} \alpha_{j} \geq \sum_{j=1}^{t} \beta_{j}$ for each $t=1, \ldots, n$, with equality for $t=n.$ If, in addition, $\alpha \neq \beta$, we write $\alpha \succ \beta$. **Lemma 9**. **[@sf]* Let $f:\mathbb{R}\rightarrow \mathbb{R}$ be a strictly convex function. If $\alpha \succeq \beta$, then $\sum_{i=1}^{n} f\left(\alpha_{i}\right) \geq$ $\sum_{i=1}^{n} f\left(\beta_{i}\right) .$ Also, if $\alpha \neq \beta$, then $\sum_{i=1}^{n} f\left(\alpha_{i}\right)>\sum_{i=1}^{n} f\left(\beta_{i}\right)$.* Let $\Gamma_{p}(n, m)$ denote the set of all signed graphs on $n$ vertices and $m$ edges with the pairing property. The following result will be useful in the sequel. **Theorem 10**. *Let $\Gamma_{1}$, $\Gamma_{2}$ $\in \Gamma_{p}(n, m)$ be two signed graphs on $n$ vertices and $m$ edges with the pairing property. If $\Gamma_{1}$ has exactly four non-zero eigenvalues and $\Gamma_{2}$ has at least four non-zero eigenvalues with $\mu_{1}\left(\Gamma_{1}\right)>$ $\mu_{1}\left(\Gamma_{2}\right)$, then $E E\left(\Gamma_{1}\right)>E E\left(\Gamma_{2}\right)$.* **Proof.** Let $\Gamma_{1}$, $\Gamma_{2}$ $\in \Gamma_{p}(n, m)$.
Therefore, we have $\sum_{i=1}^{n} \mu_{i}^{2}(\Gamma_1)=\sum_{i=1}^{n} \mu_{i}^{2}(\Gamma_2)=2 m$. The signed graphs $\Gamma_{1}$ and $\Gamma_{2}$ have the pairing property, so we get $M_{2k+1}(\Gamma_1)=0=M_{2k+1}(\Gamma_2)$ for each $k\geq 0$. Let $\mu_{1}(\Gamma_2)\geq \mu_{2}(\Gamma_2)\geq\mu_{3}(\Gamma_2)\geq \dots \geq \mu_{2r}(\Gamma_2)$, $r\geq 2$, be the non-zero eigenvalues of $\Gamma_2$. Thus, by Eq. $(2.3)$ and using the pairing property, we have $$EE\left(\Gamma_{1}\right)= n-2r+ 2\sum_{k=0}^{\infty} \frac{g_{k}\left(\mu_{1}^{2}\left(\Gamma_{1}\right), \mu_{2}^{2}\left(\Gamma_{1}\right),0,\dots,0\right)}{(2 k) !}$$ and $$EE\left(\Gamma_{2}\right)= n-2r+ 2\sum_{k=0}^{\infty} \frac{g_{k}\left(\mu_{1}^{2}\left(\Gamma_{2}\right), \mu_{2}^{2}\left(\Gamma_{2}\right), \dots, \mu_{r}^{2}\left(\Gamma_{2}\right)\right)}{(2 k) !} ,$$ where $g_k(x_1,x_2,x_3,\ldots,x_r)=x_1^k+x_2^k+x_3^k+\dots + x_r^k$ for a non-negative integer $k$, and $\mu_{1}^{2}\left(\Gamma_{1}\right)+\mu_{2}^{2}\left(\Gamma_{1}\right)=m=\mu_{1}^{2}\left(\Gamma_{2}\right)+\mu_{2}^{2}\left(\Gamma_{2}\right)+\cdots+\mu_{r}^{2}\left(\Gamma_{2}\right)$.
Now, Eqs. $(3.15)$ and $(3.16)$ imply that $$EE\left(\Gamma_{1}\right)-EE\left(\Gamma_{2}\right)= 2\sum_{k=2}^{\infty} \frac{[g_{k}\left(\mu_{1}^{2}\left(\Gamma_{1}\right), \mu_{2}^{2}\left(\Gamma_{1}\right),0,\ldots,0\right)-g_{k}\left(\mu_{1}^{2}\left(\Gamma_{2}\right), \mu_{2}^{2}\left(\Gamma_{2}\right), \dots, \mu_{r}^{2}\left(\Gamma_{2}\right)\right)]}{(2 k) !} .$$ We know that the function $f(x)=x^{2k}$ is strictly convex for every positive integer $k.$\ As $\mu_{1}\left(\Gamma_{1}\right)>\mu_{1}\left(\Gamma_{2}\right)$ and $\sum_{i=1}^{n} \mu_{i}^{2}(\Gamma_1)=\sum_{i=1}^{n} \mu_{i}^{2}(\Gamma_2)=2 m$, the vector\ $\alpha=(\mu_{1}^{2}\left(\Gamma_{1}\right), \mu_{2}^{2}\left(\Gamma_{1}\right),0,\ldots,0)$ majorizes $\beta =(\mu_{1}^{2}\left(\Gamma_{2}\right), \mu_{2}^{2}\left(\Gamma_{2}\right), \cdots, \mu_{r}^{2}\left(\Gamma_{2}\right)),$ that is, $\alpha \succ \beta$. Thus, by Lemma [Lemma 9](#3.1){reference-type="ref" reference="3.1"}, we have $$g_{k}\left(\mu_{1}^{2}\left(\Gamma_{1}\right), \mu_{2}^{2}\left(\Gamma_{1}\right),0,\ldots,0\right)>g_{k}\left(\mu_{1}^{2}\left(\Gamma_{2}\right), \mu_{2}^{2}\left(\Gamma_{2}\right), \dots, \mu_{r}^{2}\left(\Gamma_{2}\right)\right)$$ for $k \geq 2.$ Hence, by Eq. $(3.17)$, we get $E E\left(\Gamma_{1}\right)>E E\left(\Gamma_{2}\right)$ and the result follows. ------------------------------------------------------------------------ **Lemma 11**. **[@uc]* Let $\Gamma^-(n,l)$ be the set of unbalanced unicyclic graphs with $n$ vertices and containing a cycle of length $l\le n$. Then\ $(i)$ for any $\Gamma \in \Gamma^-(n,l)$, we have $\mu_1(\Gamma_n^{l-})\geq \mu_1(\Gamma)$ with equality if and only if $\Gamma$ is switching equivalent to $\Gamma_n^{l-}$;\ $(ii)$ $\mu_1(\Gamma_n^{l-})>\mu_1(\Gamma_n^{(l+1)-}).$* The following result [@horn] shows that the eigenvalues of an induced signed subgraph of the signed graph $\Gamma$ interlace the eigenvalues of $\Gamma$. **Lemma 12**.
*Let $\Gamma$ be an $n$-vertex signed graph with eigenvalues $\mu_{1} \geq \dots \geq \mu_{n}$, and let $\Gamma^{\prime}$ be an induced signed subgraph of $\Gamma$ with $m$ vertices. If the eigenvalues of $\Gamma^{\prime}$ are $\lambda_{1} \geq \dots \geq \lambda_{m}$, then $\mu_{n-m+i} \leq \lambda_{i} \leq \mu_{i},$ where $i=1, \ldots, m .$* We now recall Schwenk's formula, which can be found in [@b1]. **Lemma 13**. *Let $u$ be any fixed vertex of a signed graph $\Gamma$. Then $$\varphi(\Gamma, x)=x \varphi(\Gamma-u, x)-\sum_{vu \in E(\Gamma)} \varphi(\Gamma-v-u, x)-2 \sum_{Y \in \mathcal{Y}_{u}} \sigma(Y) \varphi(\Gamma-Y, x),$$ where $\mathcal{Y}_{u}$ is the set of all signed cycles passing through $u$, and $\Gamma-Y$ is the graph obtained from $\Gamma$ by deleting $Y$.* Since a unicyclic graph contains a unique cycle, a signed unicyclic graph has the pairing property if and only if its underlying graph is bipartite. Next, we characterize the unique unbalanced bipartite unicyclic graph with the maximum Estrada index among all unbalanced bipartite unicyclic graphs. ![Signed graphs $\Gamma_1$ and $\Gamma_2$, which appear in the statement of Theorem [Theorem 15](#3.7){reference-type="ref" reference="3.7"}.](Fig2){#Figure 1} **Theorem 14**. *Let $\Gamma_{p}^-(n, n)$ be the set of all unbalanced bipartite unicyclic graphs on $n$ vertices. If $\Gamma \in \Gamma_{p}^-(n, n)$, then $EE(\Gamma) \leq EE(\Gamma_n^{4-})$ with equality if and only if $\Gamma$ is switching equivalent to $\Gamma_n^{4-}$.* **Proof.** Let $\Gamma \in \Gamma_{p}^-(n, n)$ be an unbalanced bipartite unicyclic graph on $n$ vertices. By applying Lemma [Lemma 11](#3.3){reference-type="ref" reference="3.3"}, we get $\mu_1(\Gamma) \leq \mu_1(\Gamma_n^{4-})$ with equality if and only if $\Gamma$ is switching equivalent to $\Gamma_n^{4-}$.
Now, by Lemma [Lemma 13](#3.5){reference-type="ref" reference="3.5"}, the characteristic polynomial of $\Gamma_n^{4-}$ is given by $\varphi(\Gamma_n^{4-}, x)=x^{n-4}\{x^4-nx^2+2(n-2)\}.$ Clearly, the signed graph $\Gamma_n^{4-}$ has four non-zero eigenvalues. Suppose the signed graph $\Gamma \in \Gamma_{p}^-(n, n)$ contains a cycle of length $l\geq 4$. Then the unbalanced cycle $C_{l-}$ is an induced signed subgraph of $\Gamma$. The eigenvalues of $C_{l-}$ are given by $2 \cos \frac{ (2r+1) \pi}{l}, \quad r=0, 1,2, \ldots, l-1$. Since $\Gamma$ is bipartite, $l$ is even; the eigenvalue $2 \cos \frac{ (2r+1) \pi}{l}$ vanishes only when $l=2(2r+1)$, which can happen for at most two values of $r$ (and only when $l\equiv 2 \pmod 4$), so $C_{l-}$ has at least four non-zero eigenvalues. Hence the result follows by Theorem [Theorem 10](#3.2){reference-type="ref" reference="3.2"} and Lemma [Lemma 12](#3.4){reference-type="ref" reference="3.4"}. ------------------------------------------------------------------------ **Theorem 15**. *Let $\Gamma_{p}^-(n, n+1)$ be the set of all unbalanced bicyclic graphs on $n$ $(n\geq5)$ vertices with the pairing property. If $\Gamma \in \Gamma_{p}^-(n, n+1)$ is not switching equivalent to $\Gamma_1$ or $\Gamma_2$, then $EE(\Gamma_1)> EE(\Gamma_2)>EE(\Gamma),$ where $\Gamma_1$ and $\Gamma_2$ are the signed graphs on $n$ vertices shown in Fig. 2.* **Proof.** By Lemma [Lemma 13](#3.5){reference-type="ref" reference="3.5"}, the characteristic polynomials of $\Gamma_1$ and $\Gamma_2$ are, respectively, given by $\varphi(\Gamma_1, x)=x^{n-6}(x^2-1)\{x^4-nx^2+n-5\}$ and $\varphi(\Gamma_2, x)=x^{n-4}\{x^4-(n+1)x^2+2(n-2)\}.$ It is easy to see that $Spec(\Gamma_1)= \{\pm\sqrt{\frac{n\pm\sqrt{n^2-4n+20}}{2}},1,0^{n-6},-1\}$ and $Spec(\Gamma_2)=$\ $\{\pm\sqrt{\frac{n+1\pm\sqrt{(n+1)^2-8(n-2)}}{2}},0^{n-4}\}$ respectively.
Therefore\ $$EE(\Gamma_1)=n-6+e^{\sqrt{\frac{n+\sqrt{n^2-4n+20}}{2}}}+e^{\sqrt{\frac{n-\sqrt{n^2-4n+20}}{2}}}+e^{-\sqrt{\frac{n+\sqrt{n^2-4n+20}}{2}}}+e^{-\sqrt{\frac{n-\sqrt{n^2-4n+20}}{2}}}+e+e^{-1}.$$ Also,\ $$\begin{split} EE(\Gamma_2)&=n-4+e^{\sqrt{\frac{n+1+\sqrt{(n+1)^2-8(n-2)}}{2}}}+e^{\sqrt{\frac{n+1-\sqrt{(n+1)^2-8(n-2)}}{2}}}+e^{-\sqrt{\frac{n+1+\sqrt{(n+1)^2-8(n-2)}}{2}}}\\&+e^{-\sqrt{\frac{n+1-\sqrt{(n+1)^2-8(n-2)}}{2}}}. \end{split}$$ Note that $\sqrt{\frac{n+\sqrt{n^2-4n+20}}{2}}> \sqrt{\frac{n+1+\sqrt{(n+1)^2-8(n-2)}}{2}}$ for $n \geq 5$. We can check that the right-hand side of $(3.18)$ is greater than that of $(3.19)$ for $n\geq 5$, which proves that $EE(\Gamma_1)> EE(\Gamma_2)$.\ Let $\Gamma \in \Gamma_{p}^-(n, n+1)\backslash \{\Gamma_1, \Gamma_2\}$ be an unbalanced bicyclic graph with the pairing property. To prove the result it is enough to show that $EE(\Gamma_2)>EE(\Gamma)$, where $\Gamma$ is not switching equivalent to $\Gamma_1$ or $\Gamma_2$. Since $\mu_1(\Gamma_1)> \mu_1(\Gamma_2)$ for $n\geq 5$, by \[[@bo], Theorem 3.1\] we get $\mu_1(\Gamma_1)>\mu_1(\Gamma_2)> \mu_1(\Gamma)$, where $\Gamma$ is not switching equivalent to $\Gamma_1$ or $\Gamma_2$. The signed graph $\Gamma_2$ has four non-zero eigenvalues. Clearly, the signed graph $\Gamma$ contains a 4-vertex signed path $P_4$ as an induced subgraph for $n\geq 5$. The characteristic polynomial of $P_4$ is given by $\varphi(P_4, x)=x^4-3x^2+1.$ Thus the signed path $P_4$ has four non-zero eigenvalues. By the interlacing Lemma [Lemma 12](#3.4){reference-type="ref" reference="3.4"}, the signed graph $\Gamma$ has at least four non-zero eigenvalues. Hence the result follows by Theorem [Theorem 10](#3.2){reference-type="ref" reference="3.2"}. ------------------------------------------------------------------------ **Lemma 16**. **[@z1]* Let $(G, \sigma)$ be a connected signed graph.
Then, we have $\mu_{1}((G, \sigma)) \leq \mu_{1}((G, +))$ with equality if and only if $(G, \sigma)$ switches to $(G, +)$.* **Lemma 17**. **[@z2]* For an eigenvalue $\mu$ of a connected signed graph $\Gamma$, there exists a switching equivalent signed graph $\Gamma'$, for which the $\mu$-eigenspace contains an eigenvector whose non-zero coordinates are of the same sign.* Let $\Gamma = (K_{m, n},\sigma)$ be a signed complete bipartite graph, where $K_{m, n}$ is the complete bipartite graph on $m+n$ vertices. We denote by $S(K_{m, n},-)$ the set of all unbalanced complete bipartite graphs on $m+n$ vertices. Also, let $\Gamma^*$ be an unbalanced complete bipartite graph that contains exactly one negative edge. Finally, we characterize the unbalanced complete bipartite graphs with the maximum Estrada index. **Theorem 18**. *Let $S(K_{m, n},-)$ be the set of all unbalanced complete bipartite graphs on $m+n$ vertices. If $\Gamma \in S(K_{m, n},-)$, then $EE(\Gamma^*)\geq EE(\Gamma)$ with equality if and only if $\Gamma$ is switching equivalent to $\Gamma^*$, where $\Gamma^*$ is an unbalanced complete bipartite signed graph that contains exactly one negative edge.* **Proof.** Let $\left(\Gamma_{1}, \Gamma_{2}, \ldots, \Gamma_{k}\right)$ be a sequence consisting of the representatives of all switching equivalence classes of unbalanced complete bipartite graphs with $m+n$ vertices such that the representatives are ordered non-increasingly by the largest eigenvalue (index) and chosen in such a way that, for $1 \leq j \leq k$, the $\mu_{1}(\Gamma_{j})$-eigenspace contains an eigenvector whose non-zero coordinates are positive (the existence of such $\Gamma_{j}$ is guaranteed by Lemma [Lemma 17](#3.9){reference-type="ref" reference="3.9"}). By Lemma [Lemma 16](#3.8){reference-type="ref" reference="3.8"}, the signed complete bipartite graph with all-positive signature has the maximum index.
Therefore, by \[[@z3], Theorem $3.2$\], the signed graph with maximum index among all unbalanced complete bipartite graphs on $m+n$ vertices is $\Gamma^*$ (up to switching), where $\Gamma^*$ is an unbalanced complete bipartite signed graph that contains exactly one negative edge. That is, for each $\Gamma \in S(K_{m, n},-)$, we have $\mu_1(\Gamma) \leq \mu_1(\Gamma^*)$ with equality if and only if $\Gamma$ is switching equivalent to $\Gamma^*$. Now, by \[[@ts], Theorem 4.1\], the spectrum of the signed graph $\Gamma^*$ is given by $$Spec(\Gamma^*)=\{\pm\sqrt{\frac{mn \pm \sqrt{n^2+2(m-1)(n-2)^2+n^2(m-1)^2}}{2}},0^{m+n-4}\}.$$ Clearly, the signed graph $\Gamma^*$ has four non-zero eigenvalues. Also, with a suitable labelling of the vertices of $\Gamma \in S(K_{m, n},-)$, its adjacency matrix is given by $$A_\Gamma =\left(\begin{array}{cc} O_{m\times m} & B_{m\times n} \\ B_{m\times n} ^{\top}& O_{n\times n} \end{array}\right),$$ where $B_{m\times n}$ is a matrix whose entries are either $1$ or $-1$. We know that $rank(A_\Gamma)= rank(B_{m\times n} )+rank(B_{m\times n}^\top )=2\, rank(B_{m\times n} )$. It is easy to see that $rank(A_\Gamma)\geq 4$: if $rank(B_{m\times n} )=1$, then $\Gamma$ is switching equivalent to a signed complete bipartite graph with all-positive signature, a contradiction. Thus the signed graph $\Gamma \in S(K_{m, n},-)$ has at least four non-zero eigenvalues. Hence the result follows by Theorem [Theorem 10](#3.2){reference-type="ref" reference="3.2"}. ------------------------------------------------------------------------ This research is supported by SERB-DST research project number CRG/2020/000109. The research of Tahir Shamsher is supported by SRF financial assistance from the Council of Scientific and Industrial Research (CSIR), New Delhi, India.\ Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study.\ The authors declare that there is no conflict of interest amongst them.
J. Askari, A. Iranmanesh and K. C. Das, Seidel-Estrada index, J. Inequal. Appl. **120** (2016) 9 pp.
F. Belardo, E. M. Li Marzi and S. K. Simic, Combinatorial approach for computing the characteristic polynomial of a matrix, Linear Algebra Appl. **433** (2010) 1513--1523.
M. Brunetti and Z. Stanic, Ordering signed graphs with large index, Ars Math. Contemp. **22(4)** (2022) P4.05, 14 pp.
H. Deng, A proof of a conjecture on the Estrada index, MATCH Commun. Math. Comput. Chem. **62** (2009) 599--606.
Z. Du and B. Zhou, The Estrada index of unicyclic graphs, Linear Algebra Appl. **436** (2012) 3149--3159.
E. Estrada, Characterization of 3D molecular structure, Chem. Phys. Lett. **319** (2000) 713--718.
E. Estrada, Characterization of the folding degree of proteins, Bioinformatics **18** (2002) 697--704.
E. Estrada, Characterization of the amino acid contribution to the folding degree of proteins, Proteins **54** (2004) 727--737.
E. Estrada and J. A. Rodríguez-Velázquez, Subgraph centrality in complex networks, Phys. Rev. E **71** (2005) 056103.
E. Estrada and J. A. Rodríguez-Velázquez, Spectral measures of bipartivity in complex networks, Phys. Rev. E **72** (2005) 046105.
E. Estrada, The many facets of the Estrada indices of graphs and networks, SeMA Journal **79** (2022) 57--125.
E. Estrada, Rethinking structural balance in signed social networks, Discrete Appl. Math. **268** (2019) 70--90.
E. Estrada and M. Benzi, Walk-based measure of balance in signed networks: detecting lack of balance in social networks, Phys. Rev. E **90(4)** (2014) 042802.
I. S. Gradshteyn and I. M. Ryzhik, Tables of Integrals, Series, and Products, Academic Press, New York, 1965.
I. Gutman and A. Graovac, Estrada index of cycles and paths, Chem. Phys. Lett. **436** (2007) 294--296.
G. H. Hardy, J. E. Littlewood and G. Pólya, Inequalities, Cambridge University Press, Cambridge, 1952.
C. He, Y. Li, H. Shan and W. Wang, On the index of unbalanced signed bicyclic graphs, Comput. Appl. Math. **40** (2021) Art. No. 124.
R. A. Horn and C. R. Johnson, Matrix Analysis, Cambridge University Press, 1990.
J. A. de la Peña, I. Gutman and J. Rada, Estimating the Estrada index, Linear Algebra Appl. **427** (2007) 70--76.
S. Pirzada, T. Shamsher and M. A. Bhat, On the eigenvalues of signed complete bipartite graphs, https://doi.org/10.48550/arXiv.2111.07262.
S. Pirzada, An Introduction to Graph Theory, Universities Press, Orient Blackswan, Hyderabad, 2012.
M. Souri, F. Heydari and M. Maghasedi, Maximizing the largest eigenvalues of signed unicyclic graphs, Discrete Math. Algorithms Appl. **12** (2020) 2050016.
Z. Stanic, Perturbations in a signed graph and its index, Discuss. Math. Graph Theory **38** (2018) 841--852.
Z. Stanic, Integral regular net-balanced signed graphs with vertex degree at most four, Ars Math. Contemp. **17** (2019) 103--114.
H. Zhao and Y. Jia, On the Estrada index of bipartite graph, MATCH Commun. Math. Comput. Chem. **61** (2009) 495--501.
--- abstract: | We study optimal control problems governed by abstract infinite dimensional stochastic differential equations using the dynamic programming approach. In the first part, we prove Lipschitz continuity, semiconcavity and semiconvexity of the value function under several sets of assumptions, and thus derive its $C^{1,1}$ regularity in the space variable. Based on this regularity result, we construct optimal feedback controls using the notion of the $B$-continuous viscosity solution of the associated Hamilton--Jacobi--Bellman equation. This is done in the case when the diffusion coefficient is independent of the control variable. author: - "Filippo de Feo[^1]" - "Andrzej Święch[^2]" - "Lukas Wessels[^3]" date: October 4, 2023 title: "Stochastic optimal control in Hilbert spaces: $C^{1,1}$ regularity of the value function and optimal synthesis via viscosity solutions " --- # Introduction In this paper, we show how to use the theory of viscosity solutions in Hilbert spaces to construct optimal feedback controls for optimal control problems governed by abstract infinite dimensional stochastic differential equations (SDEs). Such evolution equations include for example controlled semilinear stochastic partial differential equations (SPDEs). The general problem we study is the following. Let $T <\infty$ be a fixed terminal time, let $H$ be a real, separable Hilbert space, let $\Lambda$ be a real, separable Banach space and let $\Lambda_0$ be a convex subset of $\Lambda$. 
For an initial time $0\leq t < T$, we consider the optimal control problem of minimizing the cost functional $$\label{costfunctional} J(t,x;a(\cdot)) := \mathbb{E} \left [ \int_t^T l(X(s),a(s)) \mathrm{d}s + g(X(T)) \right ]$$ over a class of appropriately defined admissible controls $a(\cdot):[t,T]\times\Omega \to \Lambda_0$, subject to the abstract SDE $$\label{state} \begin{cases} \mathrm{d}X(s) = [ A X(s) + b(X(s),a(s)) ] \mathrm{d}s + \sigma(X(s),a(s)) \mathrm{d}W(s),\quad s\in [t,T]\\ X(t) = x\in H. \end{cases}$$ Here, $l:H\times\Lambda_0\to\mathbb{R}$ and $g:H\to\mathbb{R}$ denote the running and terminal cost, respectively, $A:\mathcal{D}(A) \subset H \to H$ is a linear unbounded operator, $b:H\times \Lambda_0 \to H$ and $\sigma:H\times \Lambda_0 \to L_2(\Xi,H)$ denote the drift and the diffusion coefficient functions, respectively, and the SDE is driven by a cylindrical Wiener process $(W(s))_{s\in [t,T]}$ taking values in some real, separable Hilbert space $\Xi$ and defined on some filtered probability space $(\Omega, \mathscr{F},(\mathscr{F}_s^t)_{s\in[t,T]},\mathbb{P})$, where $(\mathscr{F}_s^t)_{s\in[t,T]}$ is the natural filtration generated by $(W(s))_{s\in [t,T]}$. The notation and the precise definitions will be explained in Section [2](#sec:defnotass){reference-type="ref" reference="sec:defnotass"}. Our goal in this paper is the construction of an optimal feedback control. One of the major approaches to control problems of this kind is the dynamic programming approach introduced by Richard Bellman in the 1950s, see [@bellman1957]. In this approach, the central object of study is the so-called value function defined as $$V(t,x) := \inf_{a(\cdot)} J(t,x;a(\cdot)),$$ which, provided it is sufficiently regular, can be used to derive sufficient optimality conditions (also known as verification theorems) as well as to construct optimal feedback controls (known as optimal synthesis).
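In a simple finite dimensional toy setting ($H=\mathbb{R}$, $A=0$, constant control), the cost functional above can be estimated by Monte Carlo simulation of an Euler--Maruyama discretization of the state equation; all concrete choices of $b$, $\sigma$, $l$, $g$ and the constant control below are illustrative assumptions, not taken from the paper:

```python
import math
import random

def estimate_cost(x0, a, T=1.0, n_steps=200, n_paths=2000,
                  b=lambda x, a: -x + a,          # toy drift (assumption)
                  sigma=lambda x, a: 0.2,         # toy diffusion (assumption)
                  l=lambda x, a: x * x + a * a,   # toy running cost (assumption)
                  g=lambda x: 0.0,                # toy terminal cost (assumption)
                  seed=0):
    """Monte Carlo / Euler-Maruyama estimate of J(0, x0; a) for a constant control a."""
    rng = random.Random(seed)
    dt = T / n_steps
    total = 0.0
    for _ in range(n_paths):
        x, cost = x0, 0.0
        for _ in range(n_steps):
            cost += l(x, a) * dt                     # left-endpoint quadrature of the running cost
            dw = rng.gauss(0.0, math.sqrt(dt))       # Brownian increment
            x += b(x, a) * dt + sigma(x, a) * dw     # Euler-Maruyama step
        total += cost + g(x)
    return total / n_paths

print(estimate_cost(1.0, 0.0))
```

With $\sigma\equiv 0$ and $a=0$ the scheme reduces to the deterministic ODE $\dot X=-X$, for which $J=\int_0^1 e^{-2s}\,\mathrm{d}s=(1-e^{-2})/2\approx 0.432$, a convenient consistency check.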
Obtaining the value function in the first place is a challenging task. To approach this problem, Bellman formally derived the, in general fully nonlinear, so-called Hamilton--Jacobi--Bellman (HJB) equation which should be satisfied by the value function. In our case the associated HJB equation has the form $$\label{HJB} \begin{cases} v_t + \langle Ax,Dv\rangle_H + \inf_{a\in \Lambda_0} \mathcal{F}(x,Dv,D^2v,a) =0,\quad (t,x)\in (0,T)\times H\\ v(T,\cdot) = g, \end{cases}$$ where the Hamiltonian function $\mathcal{F}:H\times H \times S(H)\times \Lambda_0\to \mathbb{R}$ is given by $$\mathcal{F}(x,p,P,a) := \frac12 \text{Tr} \left [ \sigma(x,a) \sigma^{\ast}(x,a) P \right ] + \langle b(x,a),p \rangle_H + l(x,a).$$ In a second step, assuming sufficient regularity, the solution of the HJB equation can also be used to derive verification theorems and perform optimal synthesis. For this reason, the construction of optimal feedback controls is intrinsically related to the study of the regularity of the solution of the HJB equation [\[HJB\]](#HJB){reference-type="eqref" reference="HJB"}. In many interesting cases, the value function is not sufficiently regular to satisfy the HJB equation in a classical sense. Therefore, various notions of solution have been applied to the study of HJB equations. In finite dimensional spaces, existence, uniqueness and regularity of classical and strong solutions of parabolic HJB equations have been well studied (see, e.g., [@Dong; @Lieber; @Krylov1; @Wang1; @Wang2]) and classical verification theorems and existence of optimal feedback controls can be found for instance in [@FlRie; @yong1999] (see also [@Krylov] for related results). When classical solutions do not exist, e.g., in the case of fully nonlinear and degenerate equations, the notion of viscosity solution has generally been used. We refer the reader for instance to [@bardi_1997; @CIL; @FlSon] for excellent overviews of the subject.
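Formally, once a sufficiently regular solution $v$ of the HJB equation [\[HJB\]](#HJB){reference-type="eqref" reference="HJB"} is available, optimal synthesis amounts to selecting, for each $(t,x)$, a minimizer of the Hamiltonian,
$$a^{*}(t,x) \in \mathop{\mathrm{arg\,min}}_{a\in \Lambda_0} \mathcal{F}\big(x,Dv(t,x),D^2v(t,x),a\big),$$
and feeding it back into the state equation. This is only the standard formal recipe of dynamic programming; making it rigorous in the present infinite dimensional setting is precisely the difficulty addressed in this paper.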
In the absence of uniform parabolicity, regularity results tend to be short-time and rely on techniques which are typically based on comparison theorems or explicit representation formulas to show Lipschitz continuity, and semiconcavity and semiconvexity of viscosity solutions in the spatial variable. We refer the readers to [@CanSin; @FlSon; @Ishii; @IshiiLions; @Lions-book; @yong1999] for various results in this direction. $C^{1,1}$ regularity results using a weaker notion of uniform ellipticity/parabolicity can be found in [@LionsIII] (see also [@Krylov] for earlier one- and two-sided second derivative estimates without the use of viscosity solutions). Applications of viscosity solutions to stochastic optimal control problems are discussed in classical monographs [@FlSon; @yong1999]. A stochastic verification theorem and some results about optimal feedbacks using viscosity solutions can be found in [@yong1999; @gozzi_swiech_zhou_2005; @gozzi_swiech_zhou_2010]. Since the notion of viscosity solution does not require any regularity of the solution, these results rely on set-valued sub/superdifferentials (jets) of the value function. In infinite dimensional Hilbert spaces, the problem is even more complicated since we have fewer techniques available and all HJB equations of type [\[HJB\]](#HJB){reference-type="eqref" reference="HJB"} are in some sense degenerate. Classical solutions are hard to come by and several different notions of generalized solutions, like viscosity solutions, mild solutions, mild solutions in $L^2$ spaces, solutions using backward SDEs, etc., have been introduced to resolve this issue. Early results can be found in [@barbu_daprato_1983], see also [@DZ02] for the study of linear second order equations in Hilbert spaces. The recent monograph [@fabbri2017] contains a comprehensive overview of all the previously mentioned approaches. 
Apart from viscosity solutions, all these notions of generalized solutions assume some regularity of the solutions and their theories are mostly restricted to semilinear equations with a degree of non-degeneracy guaranteeing some smoothing properties of the transition semigroups generated by the linear parts of the equations, which in addition may also satisfy some structure conditions. Various regularity results using these notions of solutions are discussed in [@fabbri2017 Chapters 4--6]. These regularity results can then be used to derive verification theorems and construct optimal feedback controls, see [@fabbri2017 Sections 4.8, 5.5, 6.5 and the bibliographical notes of these sections]. In most cases, an approximation procedure by more regular solutions is used to circumvent the issue of lacking sufficient regularity. Another direction is to obtain weaker versions of Itô's formula. This was done for instance in [@federico_gozzi_2018] and we refer to this paper for a discussion of other such results. While the notion of viscosity solution adapts with some modifications to fully nonlinear degenerate equations in Hilbert spaces without unbounded operators, the presence of unbounded operators introduces serious difficulties. Firstly, there are various notions of viscosity solutions with subtle differences for such so-called "unbounded" equations. In this paper we use the approach of $B$-continuous viscosity solutions introduced in [@CLIV; @CLV; @SW94]; see [@fabbri2017 Chapter 3] for a comprehensive presentation. Secondly, since the lack of any a priori regularity persists from the finite dimensional case, verification theorems and optimal synthesis become even more complicated. For deterministic problems we refer for instance to [@LY Chapter 6], [@can-fr; @fgs_2008] for some results. 
We also mention the paper [@GomNur] which has results about the existence of minimizers, the value function and the associated HJB equation for deterministic calculus of variation problems in a Hilbert space. In the stochastic case, versions of the finite dimensional result from [@yong1999; @gozzi_swiech_zhou_2005; @gozzi_swiech_zhou_2010] appeared recently in [@chen2022; @stannat_wessels_2021; @wessels2023], again using set-valued differentials of the value function. There are very few regularity results for viscosity solutions of HJB equations in Hilbert spaces. When the equation has only bounded and continuous terms, $C^{1,1}$ regularity in $x$ was proved for some classes of first order equations in [@bensoussan_2019; @gangbo_meszaros_2020]. For special equations, regularity can also be derived from explicit representation formulas, see [@lasry1986]. For bounded second order equations, semiconcavity estimates were derived in [@lions-infdim1]. Moreover, when the HJB equation has some non-degeneracy on a closed subspace of $H$, partial $C^{1,1}$ regularity was derived using semiconcavity and classical finite dimensional non-degeneracy arguments. $C^{1,1}$ regularity in $x$ based on showing semiconcavity and semiconvexity of the value function for a class of semilinear degenerate parabolic second order HJB equations in a Hilbert space related to a specific mean field control problem with common additive noise was obtained in [@mayorga_swiech_2022]. $C^{1,1}$ regularity of the value function for a control problem coming from mean field games was also proved in [@bensoussan_2020]. The equations considered in [@bensoussan_2020; @mayorga_swiech_2022] do not have unbounded operators. In [@SwTe], $W^{2,\infty}_Q$ regularity was proved for viscosity solutions of fully nonlinear obstacle problems with the so-called $Q$-elliptic operator, with the nuclear self-adjoint operator $Q>0$ governing the degenerate ellipticity of the PDE. 
Regarding viscosity solutions of "unbounded" HJB equations of type [\[HJB\]](#HJB){reference-type="eqref" reference="HJB"}, there are almost no results besides Lipschitz continuity. Semiconcavity estimates for the value function for deterministic control problems related to first order HJB equations are in [@LY]. Other available differentiability results were proven for problems with delays (rewritten in the Hilbert space $H=\mathbb R^n \times L^2([-d,0];\mathbb R^n)$): Partial finite dimensional $C^1$ and $C^{1,\alpha}$ regularity[^4] for viscosity solutions of the HJB equations related to optimal control problems with delays only in the state were obtained in [@goldys_1] for first order equations, in [@rosestolato] for a linear second order Kolmogorov equation and recently in [@deFeo-Federico-Swiech] for a class of fully nonlinear second order equations. Regarding the case of problems with delays in the state and in the control, a directional $C^1$ regularity result was obtained in [@tacconi] for a first order HJB equation (that is, for the deterministic case), while for stochastic problems we refer to [@deFeo_2023] for remarks about why the method of [@deFeo-Federico-Swiech] could not be applied there to obtain the partial $C^{1,\alpha}$ regularity. Hence, to the best of our knowledge, there are no "full" $C^1$ regularity results available in the literature for viscosity solutions of (first or second order) HJB equations in Hilbert spaces with unbounded operators. Using viscosity solutions, on the one hand we do not have any a priori regularity of a solution, however on the other hand, contrary to the case of mild solutions, we know in advance that the value function is the unique viscosity solution of [\[HJB\]](#HJB){reference-type="eqref" reference="HJB"}. This is a big help which was exploited in [@mayorga_swiech_2022] to perform optimal synthesis. 
In that work, based on the $C^{1,1}$ regularity of the value function in the space variable, the authors construct optimal feedback controls in a special case coming from a mean field control problem (mentioned above). Let us also mention that for deterministic problems with delays, existence of optimal feedback control was obtained in [@goldys_2] (using the partial regularity result of [@goldys_1]) and in [@tacconi]. A similar result was obtained in [@deFeoSwiech] for stochastic optimal control problems with delay using $B$-continuous viscosity solutions, a partial regularity result from [@deFeo-Federico-Swiech] and an elaborate double approximation procedure which made it possible to find approximations of the value function which are regular enough so that Itô's formula could be used in order to prove a verification theorem. This manuscript has two main goals. The first is to prove $C^{1,1}$ regularity in the space variable $x$ of the value function for the stochastic optimal control problem [\[costfunctional\]](#costfunctional){reference-type="eqref" reference="costfunctional"}--[\[state\]](#state){reference-type="eqref" reference="state"}. The second is to show, in the semilinear case, how to use the HJB equation [\[HJB\]](#HJB){reference-type="eqref" reference="HJB"} and the notion of $B$-continuous viscosity solution to construct optimal feedback controls under this minimal regularity assumption. In order to prove $C^{1,1}$ regularity in $x$ of the value function in the fully nonlinear case, we prove its semiconcavity and semiconvexity. The $C^{1,1}$ regularity is then well-known, see [@lasry1986]. The semiconcavity is an expected property and it is proved by adapting and modifying the proof in the finite dimensional case from [@yong1999]. 
The semiconvexity, on the other hand, is less standard and we prove it in three different cases: (1) when the running cost is strongly uniformly convex in the control variable; (2) when the state equation is linear and the cost is convex; (3) via comparison results for mild solutions of the state equation. In the third case, we need a comparison result for SPDEs, and existing results such as [@kotelenez1992; @manthey1999; @milian2002] do not apply to our situation. Therefore, along the way, we prove two comparison results, Theorems [Theorem 23](#comparisonSPDEs1){reference-type="ref" reference="comparisonSPDEs1"} and [Theorem 27](#comparisonSPDEs){reference-type="ref" reference="comparisonSPDEs"}, which may be of independent interest. Our $C^{1,1}$ regularity results are, to the best of our knowledge, the first continuous Fréchet differentiability[^5] results for the value function of optimal control problems in Hilbert spaces with unbounded operators (hence, under appropriate conditions, also for solutions of the corresponding HJB equation) when diffusion coefficients may be completely degenerate. We point out that our results and techniques can also be applied to deterministic problems, i.e., when $\sigma=0$. Regarding the second goal, in the semilinear case, we use the notion of the $B$-continuous viscosity solution of the HJB equation [\[HJB\]](#HJB){reference-type="eqref" reference="HJB"} to construct optimal feedback controls under the space $C^{1,1}$ minimal regularity assumption. In this respect, under appropriate conditions, we extend the method of [@mayorga_swiech_2022] to the case of general semilinear HJB equations in Hilbert spaces with unbounded operators. Our method is based on the simple observation that, if the value function $V$ is $C^{1,1}$ regular in $x$, then, under appropriate conditions, it is also a viscosity solution of a linear Kolmogorov equation for which we have an explicit solution given by a Feynman--Kac formula. 
It then follows that the underlying SDE for this solution formula gives the optimal trajectory and the optimal feedback control. This way, a proof of a verification theorem (which requires applying Itô's formula to the value function) is avoided. The ideas behind this method are straightforward but we are not aware of any explicit use of it (other than [@mayorga_swiech_2022]) in the context of viscosity solutions. Hence, the results of the present paper, together with those in [@deFeoSwiech] are, to the best of our knowledge, the first results on optimal synthesis for optimal control problems for SDEs in Hilbert spaces with unbounded operators using viscosity solutions. Moreover, the results here apply to general Hilbert spaces. Finally we point out that the optimal synthesis result can also be applied to deterministic problems. A limitation of using the notion of $B$-continuous viscosity solution is that the operator $A$ in [\[state\]](#state){reference-type="eqref" reference="state"} and [\[HJB\]](#HJB){reference-type="eqref" reference="HJB"} must be maximal dissipative. Moreover the theory requires $A$ to satisfy one of two conditions, the so-called strong $B$-condition or the weak $B$-condition. Examples of operators satisfying these conditions can be found in [@fabbri2017 Section 3.1.1]. Another example of an operator $A$ satisfying the weak $B$-condition is the operator coming from stochastic delay equations, see [@deFeo-Federico-Swiech]. The paper is organized as follows. In Section [2](#sec:defnotass){reference-type="ref" reference="sec:defnotass"} we introduce the notation and the basic assumptions we will use throughout the rest of the paper. In Section [3](#sec:regularity){reference-type="ref" reference="sec:regularity"} we prove regularity results for the value function and deduce its $C^{1,1}$ regularity. Based on this regularity result, optimal feedback controls are constructed in Section [4](#sec:optsynt){reference-type="ref" reference="sec:optsynt"}. 
Finally, in Section [5](#weakBcase){reference-type="ref" reference="weakBcase"}, we extend our results to SDEs involving unbounded operators that satisfy the weak $B$-condition. # Definitions, Notation and Basic Assumptions {#sec:defnotass} Throughout the paper, $H$ is a real, separable Hilbert space with inner product $\langle\cdot,\cdot\rangle_H$ and norm $\|\cdot\|_H$, and $\Lambda_0$ is a convex subset of a real, separable Banach space $\Lambda$ with norm $\|\cdot\|_{\Lambda}$. The variables in $\Lambda$ will be denoted by $a$. Moreover, $\Xi$ is a real, separable Hilbert space and we fix $p>2$. We will write $B_R\subset H$ to denote the closed ball of radius $R>0$ in $H$. We say that $\mu=(\Omega, \mathscr{F}, (\mathscr{F}^t_s)_{s\in [t,T]}, \mathbb{P}, (W(s))_{s\in [t,T]})$ is a reference probability space (cf. [@fabbri2017 Definition 2.7]) if: - $(\Omega, \mathscr{F},\mathbb{P})$ is a complete probability space. - $(W(s))_{s\in[t,T]}$ is a cylindrical Wiener process in $\Xi$ with $W(t) = 0$ $\mathbb{P}$--a.s. - $\mathscr{F}^t_s = \sigma(\mathscr{F}^{t,0}_s,\mathcal{N})$, where $\mathscr{F}^{t,0}_s = \sigma(W(\tau) : t\leq \tau \leq s )$ is the filtration generated by $W$, and $\mathcal{N}$ is the collection of all $\mathbb{P}$--null sets in $\mathscr{F}$. The set of admissible controls $\mathcal{U}_t$ consists of all $\mathscr{F}^t_s$--progressively measurable processes $a(\cdot):[t,T]\times\Omega \to \Lambda_0$ such that $$\label{eq:integr} \mathbb{E} \left [ \int_t^T \|a(s) \|_{\Lambda}^p \mathrm{d}s \right ] <\infty.$$ **Remark 1**. Note that condition [\[eq:integr\]](#eq:integr){reference-type="eqref" reference="eq:integr"} is only relevant if $\Lambda$ is unbounded. Furthermore, the requirement that $p>2$ is necessary in order to guarantee continuity of the paths of the solution of the state equation [\[state\]](#state){reference-type="eqref" reference="state"} which is used explicitly and implicitly in many parts of the paper. 
We define the value function $V:[0,T]\times H\to \mathbb{R}$ as $$V(t,x) := \inf_{a(\cdot)\in \mathcal{U}_t} J(t,x;a(\cdot)),$$ where $J$ is given by equation [\[costfunctional\]](#costfunctional){reference-type="eqref" reference="costfunctional"}. Note that this definition of the value function coincides with the definition in which the infimum is also taken over all reference probability spaces. For more details, see [@fabbri2017 Chapter 2 and in particular Theorem 2.22]. Next, we introduce the spaces which will be used in the paper. Let $\mathcal{X}_1,\mathcal{X}_2$ be Banach spaces. By $L(\mathcal{X}_1,\mathcal{X}_2)$ we denote the space of bounded linear operators mapping from $\mathcal{X}_1$ to $\mathcal{X}_2$. For Hilbert spaces $H_1,H_2$, we denote by $L_2(H_1,H_2)$ the space of Hilbert--Schmidt operators. In the case $H=H_1=H_2$, we denote $L(H) = L(H,H)$, and $S(H)\subset L(H)$ denotes the subspace of self-adjoint operators. If $C\in L(H)$, $C\geq 0$ means that $\langle Cx,x\rangle_H \geq 0$ for all $x\in H$. We say that $B\in S(H)$ is strictly positive if $\langle Bx,x\rangle_H> 0$ for all $x\in H, x\not=0$. We denote by $C^{1,2}((0,T)\times H)$ the space of functions $\varphi:(0,T)\times H\to\mathbb R$ such that, denoting the variables by $(t,x)$, $\partial_t\varphi, D\varphi, D^2\varphi$ are continuous, where $\partial_t\varphi$ is the partial derivative of $\varphi$ with respect to $t$ and $D\varphi, D^2\varphi$ are the first and second order Fréchet derivatives of $\varphi$ with respect to the $x$ variable. If $v:H\times\Lambda \to Z$, where $Z$ is a Banach space, we will write $D_x v, D_a v$ to denote the partial Fréchet derivatives of $v$ with respect to the variables $x$ and $a$ respectively. By $C^{1,1}(\mathcal{X}_1,\mathcal{X}_2)$ we denote the space of all once Fréchet differentiable mappings $v:\mathcal{X}_1\to \mathcal{X}_2$ with Lipschitz continuous derivative. 
In the case $\mathcal{X}_2=\mathbb{R}$, we write $C^{1,1}(\mathcal{X}_1) := C^{1,1}(\mathcal{X}_1,\mathbb{R})$. Throughout the paper, $A$ is a linear, densely defined, maximal dissipative operator in $H$. Note that $A$ is the generator of a $C_0$-semigroup of contractions on $H$. We will denote this semigroup by $(\mathrm{e}^{sA})_{s\geq 0}$. Since in some parts of the paper we will have to assume that the functions $b,\sigma$ are differentiable, to avoid any confusion we will assume that the functions $b,\sigma, l$ are defined on the whole space $H\times \Lambda$, however all conditions will be required to hold only on $H\times \Lambda_0$. In some parts of the paper we will also use the following two assumptions on the operator $A$. **Assumption 2**. *(Strong $B$-condition) There is a strictly positive, self-adjoint operator $B\in L(H)$ such that $A^{\ast} B\in L(H)$ and for some $c_0\geq 0$ $$-A^{\ast} B+c_0 B \geq I.$$* **Assumption 3**. *(Weak $B$-condition) There is a strictly positive, self-adjoint operator $B\in L(H)$ such that $A^{\ast} B\in L(H)$ and for some $c_0\geq 0$ $$-A^{\ast} B+c_0 B \geq 0.$$* We refer to [@fabbri2017 Section 3.1.1] for the proof that the weak $B$-condition is always satisfied with $B=((I-A)(I-A^{\ast}))^{-1}$ and $c_0=1$. We also refer to [@fabbri2017 Section 3.1.1] for examples of operators satisfying the strong and the weak $B$-conditions. An important case of an operator $A$ satisfying the weak $B$-condition is the operator coming from optimal control problems with delay from [@deFeo-Federico-Swiech Section 3.2] (denoted $\tilde A$ there), where $B=(A^{-1})^{\ast} A^{-1}$ and $c_0=0$ are taken. Using the operator $B$ from Assumption [Assumption 2](#assumptionA){reference-type="ref" reference="assumptionA"} or Assumption [Assumption 3](#assumptionAw){reference-type="ref" reference="assumptionAw"}, we extend the space $H$ in the following way. **Definition 4**. Let $B\in S(H)$ be strictly positive. 
The space $H_{-1}$ is the completion of $H$ with respect to the norm $$\|x\|_{-1}^2 := \langle Bx,x \rangle_H.$$ Note that $H_{-1}$ is a Hilbert space when endowed with the inner product $$\langle x,y\rangle_{-1} := \langle B^{\frac12} x, B^{\frac12} y \rangle_H.$$ The notions of semiconcavity and semiconvexity will play a crucial role in our work. **Definition 5**. A function $v:H\to\mathbb{R}$ is called semiconcave if there is a constant $C>0$ such that $$\lambda v(x) + (1-\lambda) v(x^{\prime}) - v(\lambda x + (1-\lambda)x^{\prime}) \leq C \lambda(1-\lambda) \| x - x^{\prime} \|_H^2$$ for all $\lambda \in [0,1]$ and $x,x^{\prime}\in H$. $v$ is called semiconvex if $-v$ is semiconcave. The constant $C$ in Definition [Definition 5](#def:semiconcave){reference-type="ref" reference="def:semiconcave"} is called the *semiconcavity constant* of $v$. The semiconcavity constant of $-v$ is called the *semiconvexity constant* of $v$. We also introduce the following notion of semiconcavity and semiconvexity in $H_{-1}$. **Definition 6**. A function $v:H\to\mathbb{R}$ is called semiconcave in $H_{-1}$ if there is a constant $C>0$ such that $$\lambda v(x) + (1-\lambda) v(x^{\prime}) - v(\lambda x + (1-\lambda)x^{\prime}) \leq C \lambda(1-\lambda) \| x - x^{\prime} \|_{-1}^2$$ for all $\lambda \in [0,1]$ and $x,x^{\prime}\in H$. $v$ is called semiconvex in $H_{-1}$ if $-v$ is semiconcave. When it is clear that we deal with functions $v$ which are semiconcave/semiconvex in $H_{-1}$, the constant $C$ in Definition [Definition 6](#def:semiconvex){reference-type="ref" reference="def:semiconvex"} will also be called the semiconcavity constant of $v$ and the semiconcavity constant of $-v$ will be called the semiconvexity constant of $v$. Note that the function $v$ in Definition [Definition 6](#def:semiconvex){reference-type="ref" reference="def:semiconvex"} does not need to be defined on $H_{-1}$ but only on $H$. 
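Since $\|\cdot\|_H$ is induced by an inner product, we have the elementary identity $$\lambda \|x\|_H^2 + (1-\lambda) \|x^{\prime}\|_H^2 - \|\lambda x + (1-\lambda)x^{\prime}\|_H^2 = \lambda(1-\lambda) \| x - x^{\prime} \|_H^2$$ for all $\lambda\in[0,1]$ and $x,x^{\prime}\in H$, which yields a standard equivalent formulation of Definition [Definition 5](#def:semiconcave){reference-type="ref" reference="def:semiconcave"}: $v$ is semiconcave with constant $C$ if and only if the function $x\mapsto v(x) - C\|x\|_H^2$ is concave. The analogous statement holds for Definition [Definition 6](#def:semiconvex){reference-type="ref" reference="def:semiconvex"} with $\|\cdot\|_H$ replaced by $\|\cdot\|_{-1}$, since $\|\cdot\|_{-1}$ is also induced by an inner product. 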
# Regularity of the Value Function {#sec:regularity} ## Lipschitz Continuity {#lipschitz} In this section, we prove Lipschitz continuity of the value function. We impose the following assumptions on the coefficients of the state equation. **Assumption 7**. 1. *The function $b$ is continuous on $H\times \Lambda_0$ and there exists a constant $C>0$ such that $$\|b(x,a)-b(x^{\prime},a) \|_H \leq C \|x-x^{\prime} \|_H$$ for all $x,x^{\prime}\in H$ and $a\in\Lambda_0$.* 2. *There exists a constant $C>0$ such that $$\|b(x,a)\|_H \leq C(1+\|x\|_H+\|a\|_{\Lambda} )$$ for all $x\in H$ and $a\in\Lambda_0$.* 3. *The function $\sigma$ is continuous on $H\times \Lambda_0$ and there exists a constant $C>0$ such that $$\|\sigma(x,a)-\sigma(x^{\prime},a) \|_{L_2(\Xi,H)} \leq C \|x-x^{\prime} \|_H$$ for all $x,x^{\prime}\in H$ and $a\in\Lambda_0$.* 4. *There exists a constant $C>0$ such that $$\|\sigma(x,a)\|_{L_2(\Xi,H)} \leq C(1+\|x\|_H+\|a\|_{\Lambda} )$$ for all $x\in H$ and $a\in\Lambda_0$.* **Assumption 8**. 1. *There exists a constant $C>0$ such that $$\| b(x,a) - b(x^{\prime},a^{\prime} ) \|_H \leq C (\|x-x^{\prime}\|_H + \|a-a^{\prime}\|_{\Lambda} )$$ for all $x,x^{\prime}\in H$ and $a,a^{\prime}\in \Lambda_0$.* 2. *There exists a constant $C>0$ such that $$\| \sigma(x,a) - \sigma(x^{\prime},a^{\prime} ) \|_{L_2(\Xi,H)} \leq C (\|x-x^{\prime}\|_H + \|a-a^{\prime}\|_{\Lambda} )$$ for all $x,x^{\prime}\in H$ and $a,a^{\prime}\in \Lambda_0$.* Under these assumptions, for fixed $t \in [0,T], x \in H ,$ $a( \cdot) \in \mathcal U_t$, the state equation [\[state\]](#state){reference-type="eqref" reference="state"} has a unique solution with continuous trajectories, denoted by $X(s;t,x,a(\cdot))$, see, e.g., [@chow2007 Chapter 6, Theorem 6.5]; see also [@DZ14 Theorem 7.2][^6]. 
The solution satisfies $$\label{xeq:mildsupest} \mathbb{E}\left [\sup_{s\in[t,T]}\|X(s;t,x,a(\cdot))\|_H^p\right ]\leq C_p(T)\left(1+\|x\|_H^p +\mathbb{E}\int_t^T \|a(r) \|_{\Lambda}^p \mathrm{d}r \right).$$ For $x_0,x_1\in H$ and $a_0(\cdot),a_1(\cdot)\in \mathcal{U}_t$, define $$\label{x1x0differentcontrols} \begin{cases} X_0(s) = X(s;t,x_0,a_0(\cdot))\\ X_1(s)=X(s;t,x_1,a_1(\cdot)). \end{cases}$$ In the proof of Lemma [Lemma 9](#estimatex1x0one){reference-type="ref" reference="estimatex1x0one"} below and later, we will use [@fabbri2017 Proposition 1.166]. Note that in [@fabbri2017], the coefficients of the state equation are uniformly bounded in the control variable, while in our case they are not. Nevertheless, adapting the proof to our case, the functions $f$ and $\Phi$ defined in the proof of [@fabbri2017 Proposition 1.166] are still in $M^p_{\mu}(t,T;H)$ and $\mathcal{N}_I^p(t,T;H)$ (since $Q=I$ in our case), respectively, and thus the proof can be carried out verbatim. We have the following estimates. **Lemma 9**. 1. *Let Assumption [Assumption 7](#bsigmalipschitzfirstvariable){reference-type="ref" reference="bsigmalipschitzfirstvariable"} be satisfied. Let $a_0(\cdot) = a_1(\cdot) = a(\cdot)\in \mathcal{U}_t$. Then, there are constants $C_1,C_2\geq 0$ independent of $T$, such that $$\mathbb{E} \left [ \sup_{s\in [t,T]} \| X_1(s) - X_0(s) \|_H^{2} \right ] \leq C_1 \mathrm{e}^{C_2(T-t)}\|x_1 - x_0 \|_H^{2}$$ for all $x_0,x_1\in H$ and $a(\cdot)\in \mathcal{U}_t$.* 2. *Let Assumption [Assumption 8](#bsigmalipschitz){reference-type="ref" reference="bsigmalipschitz"} be satisfied. 
Then, there are constants $C_1,C_2\geq 0$ independent of $T$, such that $$\label{eq:strongest} \mathbb{E} \left [ \sup_{s\in [t,T]} \| X_1(s) - X_0(s) \|_H^{2} \right ] \leq C_1 \mathrm{e}^{C_2(T-t)} \left ( \|x_1 - x_0 \|_H^{2} + \mathbb{E} \left [ \int_t^T \| a_1(s) - a_0(s) \|_{\Lambda}^2 \mathrm{d}s \right ] \right )$$ for all $x_0,x_1\in H$ and $a_0(\cdot), a_1(\cdot) \in \mathcal{U}_t$.* *Proof.* We are only going to prove part $(ii)$; the proof of part $(i)$ follows along the same lines. Applying [@fabbri2017 Proposition 1.166], we have $$\label{itosformula3} \begin{split} &\| X_1(s) - X_0(s) \|_H^2\\ &\leq \| x_1 - x_0 \|_H^2 + 2 \int_t^s \langle b(X_1(r),a_1(r)) - b(X_0(r),a_0(r)) , X_1(r) - X_0(r) \rangle_H \mathrm{d}r\\ &\quad + \int_t^s \| \sigma(X_1(r),a_1(r)) - \sigma(X_0(r),a_0(r)) \|_{L_2(\Xi,H)}^2 \mathrm{d}r\\ &\quad + 2 \int_t^s \langle X_1(r)-X_0(r), (\sigma(X_1(r),a_1(r)) - \sigma(X_0(r),a_0(r))) \mathrm{d}W(r) \rangle_H. \end{split}$$ Due to Burkholder--Davis--Gundy inequality (see, e.g., [@fabbri2017 Theorem 1.80]) and the Lipschitz continuity of $\sigma$, we have $$\begin{split} &\mathbb{E} \left [ \sup_{s\in [t,T]} \left | \int_t^s \langle X_1(r)-X_0(r), (\sigma(X_1(r),a_1(r)) - \sigma(X_0(r),a_0(r))) \mathrm{d}W(r) \rangle_H \right | \right ]\\ &\leq C \mathbb{E} \left [ \left ( \int_t^T \| X_1(s) - X_0(s) \|_H^2 \left ( \| X_1(s) - X_0(s) \|_H^2 + \| a_1(s) - a_0(s) \|_{\Lambda}^2 \right ) \mathrm{d}s \right )^{\frac12} \right ]\\ &\leq \frac12 \mathbb{E} \left [ \sup_{s\in [t,T]} \| X_1(s) - X_0(s) \|_H^2 \right ] + C \mathbb{E} \left [ \int_t^T \| X_1(s) - X_0(s) \|_H^2 + \| a_1(s) - a_0(s) \|_{\Lambda}^2 \mathrm{d}s \right ]. 
\end{split}$$ Therefore, we derive from [\[itosformula3\]](#itosformula3){reference-type="eqref" reference="itosformula3"} using the Lipschitz continuity of $b$ and $\sigma$ $$\begin{split} &\mathbb{E} \left [ \sup_{s\in [t,T]} \| X_1(s) - X_0(s) \|_H^2 \right ]\\ &\leq C \| x_1 - x_0 \|_H^2 + C \int_t^T \mathbb{E} \left [ \sup_{r\in [t,s]} \| X_1(r) - X_0(r) \|_H^2 + \| a_1(s) - a_0(s) \|_{\Lambda}^2 \right ] \mathrm{d}s. \end{split}$$ Applying Grönwall's inequality yields the claim. ◻ **Assumption 10**. 1. *The function $l$ is continuous on $H\times\Lambda_0$ and there exists a constant $C>0$ such that $$|l(x,a) - l(x^{\prime},a)| \leq C \|x-x^{\prime}\|_H$$ for all $x,x^{\prime}\in H$ and $a\in \Lambda_0$.* 2. *There exists a constant $C>0$ such that $$|l(x,a)| \leq C (1+\|x\|_H+\|a\|_{\Lambda}^2)$$ for all $x\in H$ and $a\in \Lambda_0$.* 3. *There exists a constant $C>0$ such that $$|g(x) - g(x^{\prime})| \leq C \|x-x^{\prime}\|_H$$ for all $x,x^{\prime}\in H$.* 4. *If $\Lambda_0$ is unbounded we assume that $l$ and $g$ are bounded from below.* Assumption *(iv)* above is made to guarantee that the value function is well defined. It can be replaced by a different assumption guaranteeing an appropriate growth of $l$ in the control variable or by imposing restrictions on the quantity [\[eq:integr\]](#eq:integr){reference-type="eqref" reference="eq:integr"} for admissible controls, however we do not pursue this direction here. We note that it follows easily from [\[xeq:mildsupest\]](#xeq:mildsupest){reference-type="eqref" reference="xeq:mildsupest"} and Assumption [Assumption 10](#lglipschitzfirstvariable){reference-type="ref" reference="lglipschitzfirstvariable"} that $|V(t,x)|\leq C(1+\|x\|_H)$ for all $(t,x)\in [0,T]\times H$. **Theorem 11**. 
*Let Assumptions [Assumption 7](#bsigmalipschitzfirstvariable){reference-type="ref" reference="bsigmalipschitzfirstvariable"} and [Assumption 10](#lglipschitzfirstvariable){reference-type="ref" reference="lglipschitzfirstvariable"} be satisfied. Then there are constants $C_1,C_2$ independent of $T$, such that $$|V(t,x) - V(t,y)| \leq C_1 \mathrm{e}^{C_2(T-t)} \|x-y\|_H$$ for all $t\in [0,T]$ and $x,y\in H$.* *Proof.* Fix $t\in [0,T]$, $x_0,x_1\in H$ and $a(\cdot)\in\mathcal{U}_t$. Let $X_0$ and $X_1$ be given by equation [\[x1x0differentcontrols\]](#x1x0differentcontrols){reference-type="eqref" reference="x1x0differentcontrols"} with $a_0(\cdot) = a_1(\cdot) = a(\cdot)$. Using Lipschitz continuity of $l$ and $g$ in the $x$-variable and Lemma [Lemma 9](#estimatex1x0one){reference-type="ref" reference="estimatex1x0one"}(i) we have $$\begin{split} & |J(t,x_1;a(\cdot)) - J(t,x_0;a(\cdot))| \\ &\leq C \mathbb{E} \left [ \int_t^T \| X_1(s) - X_0(s) \|_H \mathrm{d}s + \| X_1(T) - X_0(T) \|_H \right ] \\ &\leq C (T-t+1) \mathbb{E} \left [ \sup_{s\in [t,T]} \| X_1(s) - X_0(s) \|_H \right ] \leq C_1 \mathrm{e}^{C_2(T-t)} \| x_1-x_0 \|_H \end{split}$$ for some $C_1,C_2\geq 0$. Since $a(\cdot)\in\mathcal{U}_t$ was arbitrary and $|\inf_{a(\cdot)} J(t,x_1;a(\cdot)) - \inf_{a(\cdot)} J(t,x_0;a(\cdot))| \leq \sup_{a(\cdot)} |J(t,x_1;a(\cdot)) - J(t,x_0;a(\cdot))|$, the claim follows. ◻ We point out that if in addition Assumption [Assumption 2](#assumptionA){reference-type="ref" reference="assumptionA"} is satisfied, using [@fabbri2017 Lemma 3.23] we also have that $V(t,\cdot)$ is Lipschitz continuous with respect to the $\|\cdot\|_{-1}$ norm for every $0\leq t<T$, however the Lipschitz constant blows up as $t$ approaches $T$. ## Semiconcavity **Assumption 12**. 1. *Let $b:H\times\Lambda \to H$ be Fréchet differentiable in the first variable and let there be a constant $C>0$ such that $$\| D_xb(x,a) - D_xb(x^{\prime},a) \|_{L(H)} \leq C \|x-x^{\prime}\|_H$$ for all $x,x^{\prime}\in H$ and $a\in \Lambda_0$.* 2. 
*Let $\sigma:H\times\Lambda \to L_2(\Xi,H)$ be Fréchet differentiable in the first variable and let there be a constant $C>0$ such that $$\| D_x\sigma(x,a) - D_x\sigma(x^{\prime},a) \|_{L(H,L_2(\Xi,H))} \leq C \|x-x^{\prime}\|_H$$ for all $x,x^{\prime}\in H$ and $a\in \Lambda_0$.* **Assumption 13**. 1. *Let $b:H\times\Lambda\to H$ be Fréchet differentiable and let there be a constant $C>0$ such that $$\| D_xb(x,a) - D_xb(x^{\prime},a^{\prime})\|_{L(H)} + \| D_ab(x,a) - D_ab(x^{\prime},a^{\prime}) \|_{L(\Lambda,H)} \leq C (\|x-x^{\prime}\|_H + \|a-a^{\prime}\|_{\Lambda})$$ for all $x,x^{\prime}\in H$ and $a,a^{\prime}\in \Lambda_0$.* 2. *Let $\sigma:H\to L_2(\Xi,H)$ be Fréchet differentiable and let there be a constant $C>0$ such that $$\| D\sigma(x) - D\sigma(x^{\prime}) \|_{L(H,L_2(\Xi,H))} \leq C \|x-x^{\prime}\|_H$$ for all $x,x^{\prime}\in H$.* Recall the definition of $X_0$ and $X_1$ given in equation [\[x1x0differentcontrols\]](#x1x0differentcontrols){reference-type="eqref" reference="x1x0differentcontrols"}. Furthermore, for $\lambda\in [0,1]$ we introduce $$\label{lambdadefinition} \begin{cases} a_{\lambda}(s) = \lambda a_1(s) + (1-\lambda) a_0(s)\\ x_{\lambda} = \lambda x_1 +(1-\lambda)x_0\\ X_{\lambda}(s) = X(s;t,x_{\lambda},a_{\lambda}(\cdot))\\ X^{\lambda}(s) = \lambda X_1(s) +(1-\lambda)X_0(s). \end{cases}$$ **Lemma 14**. 1. *Let Assumptions [Assumption 7](#bsigmalipschitzfirstvariable){reference-type="ref" reference="bsigmalipschitzfirstvariable"} and [Assumption 12](#bsigmafrechetfirstvariable){reference-type="ref" reference="bsigmafrechetfirstvariable"} be satisfied. Let $a_0(\cdot) = a_1(\cdot) = a(\cdot)\in \mathcal{U}_t$. 
There are constants $C_1,C_2\geq 0$ independent of $T$ where $C_2$ depends only on the constant $C_2$ in Lemma [Lemma 9](#estimatex1x0one){reference-type="ref" reference="estimatex1x0one"}, such that $$\mathbb{E} \left [ \sup_{s\in [t,T]} \| X^{\lambda}(s) - X_{\lambda}(s) \|_H \right ] \leq C_1 \mathrm{e}^{C_2(T-t)} \lambda (1-\lambda) \|x_1-x_0\|_H^2$$ for all $\lambda \in [0,1]$, $x_0,x_1\in H$ and $a(\cdot)\in \mathcal{U}_t$.* 2. *Let Assumptions [Assumption 8](#bsigmalipschitz){reference-type="ref" reference="bsigmalipschitz"} and [Assumption 13](#bsigmafrechet){reference-type="ref" reference="bsigmafrechet"} be satisfied. There are constants $C_1,C_2\geq 0$, where $C_2$ depends only on the constant $C_2$ in Lemma [Lemma 9](#estimatex1x0one){reference-type="ref" reference="estimatex1x0one"}, such that $$\label{eq:xlambda} \mathbb{E} \left [ \sup_{s\in [t,T]} \| X^{\lambda}(s) - X_{\lambda}(s) \|_H \right ] \leq C_1 \mathrm{e}^{C_2(T-t)} \lambda (1-\lambda) \left ( \|x_1-x_0\|_H^2 + \mathbb{E} \left [ \int_t^T \|a_1(s) - a_0(s) \|_{\Lambda}^2 \mathrm{d}s \right ] \right )$$ for all $\lambda \in [0,1]$, $x_0,x_1\in H$ and $a_0(\cdot),a_1(\cdot) \in \mathcal{U}_t$.* *Proof.* We are only going to prove part (ii); the proof of part (i) follows along the same lines. For $\theta\in [0,1]$, we define $$\begin{cases} \bar X_0(\theta) = X^{\lambda}(s) + \theta \lambda (X_0(s) -X_1(s))\\ \bar X_1(\theta) = X^{\lambda}(s) + \theta (1-\lambda)(X_1(s) -X_0(s)) \end{cases}$$ and $$\begin{cases} \bar a_0(\theta) = a_{\lambda}(s) + \theta \lambda (a_0(s) -a_1(s))\\ \bar a_1(\theta) = a_{\lambda}(s) + \theta (1-\lambda)(a_1(s) -a_0(s)). 
\end{cases}$$ Note that $$\begin{split} &\lambda b(X_1(s),a_1(s)) +(1-\lambda) b(X_0(s),a_0(s)) - b(X^{\lambda}(s),a_{\lambda}(s))\\ &= \lambda (1-\lambda) \int_0^1 ( D_x b(\bar X_1(\theta), \bar a_1(\theta)) - D_x b(\bar X_0(\theta), \bar a_0(\theta))) (X_1(s) - X_0(s)) \mathrm{d}\theta\\ &\quad + \lambda (1-\lambda) \int_0^1 ( D_a b(\bar X_1(\theta), \bar a_1(\theta)) - D_a b(\bar X_0(\theta), \bar a_0(\theta))) (a_1(s) - a_0(s)) \mathrm{d}\theta. \end{split}$$ Therefore, using Assumption [Assumption 13](#bsigmafrechet){reference-type="ref" reference="bsigmafrechet"}, we obtain $$\label{estimateb} \begin{split} &\| \lambda b(X_1(s),a_1(s)) +(1-\lambda) b(X_0(s),a_0(s)) - b(X^{\lambda}(s),a_{\lambda}(s)) \|_{H}\\ &\leq C \lambda (1-\lambda) \left ( \|X_1(s) - X_0(s)\|_H^2 + \|a_1(s) -a_0(s)\|_{\Lambda}^2 \right ). \end{split}$$ Since $\sigma$ is independent of the control, we obtain similarly $$\label{estimatesigma} \| \lambda \sigma(X_1(s)) +(1-\lambda) \sigma(X_0(s)) - \sigma(X^{\lambda}(s)) \|_{L_2(\Xi,H)} \leq C \lambda (1-\lambda) \|X_1(s) - X_0(s)\|_H^2.$$ Applying [@fabbri2017 Proposition 1.166], we obtain $$\label{itosformula4} \begin{split} &\| X_{\lambda}(s) - X^{\lambda}(s) \|_H^2\\ &\leq 2 \int_t^s \langle \lambda b(X_1(r),a_1(r)) +(1-\lambda) b(X_0(r),a_0(r)) - b(X^{\lambda}(r),a_{\lambda}(r)) , X_{\lambda}(r) - X^{\lambda}(r) \rangle_H \mathrm{d}r\\ &\quad + \int_t^s \| \lambda \sigma(X_1(r)) +(1-\lambda) \sigma(X_0(r)) - \sigma(X^{\lambda}(r)) \|_{L_2(\Xi,H)}^2 \mathrm{d}r\\ &\quad + 2 \int_t^s \Big \langle X_{\lambda}(r) - X^{\lambda}(r), ( \lambda \sigma(X_1(r)) +(1-\lambda) \sigma(X_0(r)) - \sigma(X^{\lambda}(r)) ) \mathrm{d}W(r)\Big \rangle_H. 
\end{split}$$ Now, first note that $$\begin{split} & \int_t^s \langle \lambda b(X_1(r),a_1(r)) +(1-\lambda) b(X_0(r),a_0(r)) - b(X^{\lambda}(r),a_{\lambda}(r)) , X_{\lambda}(r) - X^{\lambda}(r) \rangle_H \mathrm{d}r \\ &\leq \int_t^s \| \lambda b(X_1(r),a_1(r)) +(1-\lambda) b(X_0(r),a_0(r)) - b(X^{\lambda}(r),a_{\lambda}(r)) \|_H \| X_{\lambda}(r) - X^{\lambda}(r) \|_H \mathrm{d}r\\ &\leq \frac18 \sup_{r\in [t,T]} \| X_{\lambda}(r) - X^{\lambda}(r) \|^2_H\\ &\quad + C \left ( \int_t^T \| \lambda b(X_1(r),a_1(r)) +(1-\lambda) b(X_0(r),a_0(r)) - b(X^{\lambda}(r),a_{\lambda}(r)) \|_{H} \mathrm{d}r \right )^2 \end{split}$$ and, by Burkholder--Davis--Gundy inequality, $$\begin{split} &\mathbb{E} \bigg [ \sup_{s\in [t,T]} \bigg | \int_t^s \Big \langle X_{\lambda}(r) - X^{\lambda}(r), (\lambda \sigma(X_1(r)) +(1-\lambda) \sigma(X_0(r)) - \sigma(X^{\lambda}(r))) \mathrm{d}W(r) \Big \rangle_H \bigg |^{\frac12} \bigg ]\\ &\leq \mathbb{E} \bigg [ \bigg ( \int_t^T \| X_{\lambda}(r) - X^{\lambda}(r) \|_H^2 \| \lambda \sigma(X_1(r)) +(1-\lambda) \sigma(X_0(r)) - \sigma(X^{\lambda}(r)) \|_{L_2(\Xi,H)}^2 \mathrm{d}r \bigg )^{\frac14} \bigg ]\\ &\leq \frac18 \mathbb{E} \left [ \sup_{r\in [t,T]} \| X_{\lambda}(r) - X^{\lambda}(r) \|_H \right ]\\ &\quad + C \mathbb{E} \left [ \left ( \int_t^T \| \lambda \sigma(X_1(r)) +(1-\lambda) \sigma(X_0(r)) - \sigma(X^{\lambda}(r)) \|_{L_2(\Xi,H)}^2 \mathrm{d}r \right )^{\frac12} \right ]. 
\end{split}$$ Thus, taking the absolute value and square root of both sides of inequality [\[itosformula4\]](#itosformula4){reference-type="eqref" reference="itosformula4"}, taking the supremum over $[t,T]$, taking the expectation and using the previous two inequalities as well as [\[estimateb\]](#estimateb){reference-type="eqref" reference="estimateb"} and [\[estimatesigma\]](#estimatesigma){reference-type="eqref" reference="estimatesigma"}, we obtain $$\mathbb{E} \left [ \sup_{s\in [t,T]} \left \|X_{\lambda}(s) - X^{\lambda}(s)\right \|_H \right ] \leq C \lambda (1-\lambda) \mathbb{E} \left [ (T-t)\sup_{s\in [t,T]} \|X_1(s) - X_0(s)\|_H^2 + \int_t^T \|a_1(s) -a_0(s)\|_{\Lambda}^2 \mathrm{d}s \right ].$$ Applying Lemma [Lemma 9](#estimatex1x0one){reference-type="ref" reference="estimatex1x0one"}(ii) concludes the proof. ◻ It is clear from the proof that $C_2$ can be any constant bigger than $C_2$ from Lemma [Lemma 9](#estimatex1x0one){reference-type="ref" reference="estimatex1x0one"}. **Assumption 15**. 1. *Let $l(\cdot,a)$ be semiconcave uniformly in $a\in \Lambda_0$.* 2. *Let $g$ be semiconcave.* **Theorem 16**. *Let Assumptions [Assumption 7](#bsigmalipschitzfirstvariable){reference-type="ref" reference="bsigmalipschitzfirstvariable"}, [Assumption 10](#lglipschitzfirstvariable){reference-type="ref" reference="lglipschitzfirstvariable"}, [Assumption 12](#bsigmafrechetfirstvariable){reference-type="ref" reference="bsigmafrechetfirstvariable"} and [Assumption 15](#lgsemiconcave){reference-type="ref" reference="lgsemiconcave"} be satisfied. Then, for every $t\in[0,T]$, the function $V(t,\cdot)$ is semiconcave with semiconcavity constant $C_1 \mathrm{e}^{C_2(T-t)}$ for some $C_1,C_2\geq 0$ independent of $t,T$.* *Proof.* The proof is analogous to the finite dimensional case, which can be found, e.g., in [@yong1999 Chapter 4, Proposition 4.5]. 
It is sufficient to show that for every $a(\cdot)\in \mathcal{U}_t$, the function $J(t,\cdot;a(\cdot))$ is semiconcave with the semiconcavity constant independent of $a(\cdot)$. Let $X_0$ and $X_1$ be given by [\[x1x0differentcontrols\]](#x1x0differentcontrols){reference-type="eqref" reference="x1x0differentcontrols"} with $a_0(\cdot) = a_1(\cdot) = a(\cdot)$, and let $x_{\lambda}$, $X_{\lambda}$ and $X^{\lambda}$ be given by [\[lambdadefinition\]](#lambdadefinition){reference-type="eqref" reference="lambdadefinition"}. Note that $a_{\lambda}(\cdot) = a(\cdot)$ in this case. Then $$\begin{split} &\lambda J(t,x_1;a(\cdot)) + (1-\lambda) J(t,x_0;a(\cdot)) - J(t,x_{\lambda};a(\cdot))\\ &= \mathbb{E} \left [ \int_t^T \lambda l(X_1(s),a(s)) + (1-\lambda) l(X_0(s),a(s)) - l(X_{\lambda}(s),a(s)) \mathrm{d}s \right ]\\ &\quad + \mathbb{E} \left [ \lambda g(X_1(T)) + (1-\lambda) g(X_0(T)) - g(X_{\lambda}(T)) \right ]. \end{split}$$ Due to the semiconcavity of $l(\cdot,a)$, we have $$\lambda l(X_1(s),a(s)) + (1-\lambda) l(X_0(s),a(s)) - l(X^{\lambda}(s),a(s)) \leq C \lambda(1-\lambda) \|X_1(s) - X_0(s) \|_H^2$$ and the corresponding inequality holds for $g$. Thus, $$\begin{split} &\lambda J(t,x_1;a(\cdot)) + (1-\lambda)J(t,x_0;a(\cdot)) - J(t,x_{\lambda};a(\cdot))\\ &\leq C\lambda(1-\lambda) \mathbb{E} \left [ \int_t^T \| X_1(s) - X_0(s) \|_H^2 \mathrm{d}s \right ] + \mathbb{E} \left [ \int_t^T |l(X^{\lambda}(s),a(s)) - l(X_{\lambda}(s),a(s)) | \mathrm{d}s \right ]\\ &\quad + C\lambda (1-\lambda) \mathbb{E} \left [ \| X_1(T) - X_0(T) \|_H^2 \right ] + \mathbb{E} \left [ | g(X^{\lambda}(T)) - g(X_{\lambda}(T)) | \right ]. 
\end{split}$$ Using the Lipschitz continuity of $l$ and $g$ as well as Lemma [Lemma 14](#estimatexlambdasamecontrol){reference-type="ref" reference="estimatexlambdasamecontrol"}(i), we obtain $$\lambda J(t,x_1;a(\cdot)) + (1-\lambda) J(t,x_0;a(\cdot)) - J(t,x_{\lambda};a(\cdot)) \leq C_1 \mathrm{e}^{C_2(T-t)} \lambda (1-\lambda) \|x_1 - x_0\|^2_H.$$ This proves the semiconcavity of $J(t,\cdot;a(\cdot))$ with the required constant. ◻ ## Semiconvexity In this subsection, we derive semiconvexity of the value function under three different sets of assumptions. ### Case 1: Uniform Convexity of the Running Cost in the Control Variable **Assumption 17**. 1. *There exist constants $C,\nu \geq 0$ such that the map $$H\times\Lambda \ni (x,a)\mapsto l(x,a) + C\|x\|_H^2 - \nu \|a\|^2_{\Lambda}$$ is convex.* 2. *Let $g:H\to\mathbb{R}$ be semiconvex.* **Theorem 18**. *Let Assumptions [Assumption 8](#bsigmalipschitz){reference-type="ref" reference="bsigmalipschitz"}, [Assumption 10](#lglipschitzfirstvariable){reference-type="ref" reference="lglipschitzfirstvariable"}, [Assumption 13](#bsigmafrechet){reference-type="ref" reference="bsigmafrechet"} and [Assumption 17](#lgsemiconvex){reference-type="ref" reference="lgsemiconvex"} be satisfied. Then there exists a constant $\nu_0$ depending only on the data of the problem such that if $\nu\geq \nu_0$, then $V(t,\cdot)$ is semiconvex with constant $C_1\mathrm{e}^{C_2T}$.* *Proof.* Let $\varepsilon>0$, $x_0,x_1\in H$. 
Let $X_0(s),X_1(s),x_{\lambda},X^{\lambda}(s),X_{\lambda}(s)$ be as in [\[x1x0differentcontrols\]](#x1x0differentcontrols){reference-type="eqref" reference="x1x0differentcontrols"} and [\[lambdadefinition\]](#lambdadefinition){reference-type="eqref" reference="lambdadefinition"}, respectively, for $a_0(\cdot)=a_0^{\varepsilon}(\cdot), a_1(\cdot)=a_1^{\varepsilon}(\cdot)$, where $$V(t,x_0)\geq J(t,x_0; a_0^{\varepsilon}(\cdot))-\varepsilon,\quad V(t,x_1)\geq J(t,x_1; a_1^{\varepsilon}(\cdot))-\varepsilon$$ and for $0\leq\lambda\leq 1$ set $a_\lambda^{\varepsilon}(\cdot)= \lambda a_1^{\varepsilon}(\cdot)+(1-\lambda)a_0^{\varepsilon}(\cdot)$. Then $$\label{semiconvexityinequality} \begin{split} &\varepsilon+\lambda V(t,x_1)+(1-\lambda )V(t,x_0)-V(t,x_\lambda)\\ &\geq \lambda J(t,x_1;a_1^{\varepsilon}(\cdot)) + (1-\lambda) J(t,x_0; a_0^{\varepsilon}(\cdot)) - J(t,x_{\lambda};a_{\lambda}^{\varepsilon}(\cdot)) \\ &= \mathbb{E} \left [ \int_t^T \lambda l(X_1(s),a_1^{\varepsilon}(s)) + (1-\lambda)l(X_0(s),a_0^{\varepsilon}(s)) - l(X_{\lambda}(s),a_{\lambda}^{\varepsilon}(s)) \mathrm{d}s \right ]\\ &\quad + \mathbb{E} \left [ \lambda g(X_1(T)) + (1-\lambda)g(X_0(T)) - g(X_{\lambda}(T)) \right ]\\ &= \mathbb{E} \left [ \int_t^T \lambda l(X_1(s),a_1^{\varepsilon}(s)) + (1-\lambda)l(X_0(s),a_0^{\varepsilon}(s)) - l(X^{\lambda}(s),a_{\lambda}^{\varepsilon}(s)) \mathrm{d}s \right ]\\ &\quad + \mathbb{E} \left [ \int_t^T l(X^{\lambda}(s),a_{\lambda}^{\varepsilon}(s)) - l(X_{\lambda}(s),a_{\lambda}^{\varepsilon}(s)) \mathrm{d}s \right ]\\ &\quad + \mathbb{E} \left [ \lambda g(X_1(T)) + (1-\lambda)g(X_0(T)) - g(X^{\lambda}(T)) + g(X^{\lambda}(T)) - g(X_{\lambda}(T)) \right ]. 
\end{split}$$ Due to Assumption [Assumption 17](#lgsemiconvex){reference-type="ref" reference="lgsemiconvex"}, we have $$\begin{split} &\lambda l(X_1(s),a_1^{\varepsilon}(s)) + (1-\lambda)l(X_0(s),a_0^{\varepsilon}(s)) - l(X^{\lambda}(s),a_{\lambda}^{\varepsilon}(s))\\ &\geq - C\lambda(1-\lambda)\|X_1(s)-X_0(s)\|_H^2 + \nu \lambda (1-\lambda) \|a_1^{\varepsilon}(s) - a_0^{\varepsilon}(s) \|_{\Lambda}^2 \end{split}$$ and $$\lambda g(X_1(T)) + (1-\lambda)g(X_0(T)) - g(X^{\lambda}(T)) \geq -C\lambda(1-\lambda)\|X_1(T)-X_0(T)\|_H^2.$$ Therefore, using Assumption [Assumption 10](#lglipschitzfirstvariable){reference-type="ref" reference="lglipschitzfirstvariable"}, [\[eq:strongest\]](#eq:strongest){reference-type="eqref" reference="eq:strongest"} and [\[eq:xlambda\]](#eq:xlambda){reference-type="eqref" reference="eq:xlambda"}, we derive from [\[semiconvexityinequality\]](#semiconvexityinequality){reference-type="eqref" reference="semiconvexityinequality"} that $$\begin{split} &\varepsilon+\lambda V(t,x_1)+(1-\lambda )V(t,x_0)-V(t,x_\lambda)\\ &\geq - C_1\mathrm{e}^{C_2(T-t)} \lambda(1-\lambda) \left ( \|x_1-x_0\|_H^2 + \mathbb{E} \left [ \int_t^T \|a_1^{\varepsilon}(s) - a_0^{\varepsilon}(s) \|_{\Lambda}^2 \mathrm{d}s \right ] \right )\\ &\quad + \nu \lambda (1-\lambda) \mathbb{E} \left [ \int_t^T \|a_1^{\varepsilon}(s) - a_0^{\varepsilon}(s) \|_{\Lambda}^2 \mathrm{d}s \right ] \geq - C_1\mathrm{e}^{C_2(T-t)} \lambda(1-\lambda) \|x_1-x_0\|_H^2 \end{split}$$ if $\nu\geq \nu_0=C_1\mathrm{e}^{C_2T}$. Since $\varepsilon>0$ is arbitrary, this implies that the function $V(t,\cdot)$ is semiconvex. ◻ ### Case 2: Linear State Equation and Convex Costs **Assumption 19**. 1. *Let $b:H\times\Lambda\to H$ and $\sigma: H \times \Lambda \to L_2(\Xi,H)$ be linear.* 2. *Let $l:H\times\Lambda_0\to\mathbb{R}$ and $g:H\to\mathbb{R}$ be convex.* **Theorem 20**. 
*Let Assumptions [Assumption 10](#lglipschitzfirstvariable){reference-type="ref" reference="lglipschitzfirstvariable"} and [Assumption 19](#blinearlgconvex){reference-type="ref" reference="blinearlgconvex"} be satisfied. Then, for every $t\in [0,T]$, the function $V(t,\cdot)$ is convex.* Note that in contrast to Theorem [Theorem 18](#convexity1){reference-type="ref" reference="convexity1"}, in Theorem [Theorem 20](#convexity2){reference-type="ref" reference="convexity2"} we do not require strong convexity in the control of the cost function $l$ of the type used in Assumption [Assumption 17](#lgsemiconvex){reference-type="ref" reference="lgsemiconvex"}(i). Also the full Assumption [Assumption 10](#lglipschitzfirstvariable){reference-type="ref" reference="lglipschitzfirstvariable"} is not necessary but we use it here since it guarantees that the value function is well defined. *Proof.* We use the same notation as in the proof of Theorem [Theorem 18](#convexity1){reference-type="ref" reference="convexity1"}. Note that due to the linearity of the state equation, we have $X^{\lambda}(s) = X_{\lambda}(s)$. Due to Assumption [Assumption 19](#blinearlgconvex){reference-type="ref" reference="blinearlgconvex"}, we have $$\begin{split} &\lambda J(t,x_1;a_1^{\varepsilon}(\cdot)) + (1-\lambda) J(t,x_0; a_0^{\varepsilon}(\cdot)) - J(t,x_{\lambda};a_{\lambda}^{\varepsilon}(\cdot))\\ &= \mathbb{E} \left [ \int_t^T \lambda l(X_1(s),a_1^{\varepsilon}(s)) + (1-\lambda)l(X_0(s),a_0^{\varepsilon}(s)) - l(X_{\lambda}(s),a_{\lambda}^{\varepsilon}(s)) \mathrm{d}s \right ]\\ &\quad + \mathbb{E} \left [ \lambda g(X_1(T)) + (1-\lambda)g(X_0(T)) - g(X_{\lambda}(T)) \right ] \geq 0, \end{split}$$ from which the convexity of $V(t,\cdot)$ follows. ◻ ### Case 3: Comparison for the State Equation In this subsection we show the convexity of the value function by applying comparison principles for SPDEs. 
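The mechanism behind Theorem [Theorem 20](#convexity2){reference-type="ref" reference="convexity2"} — linearity of the state equation gives $X^{\lambda}(s)=X_{\lambda}(s)$ exactly, and convexity of the costs then passes to $J$ — can be sanity-checked in a one-dimensional Euler--Maruyama discretisation. The drift $b(x,a)=x+a$, the costs $l(x,a)=x^2+a^2$, $g(x)=x^2$, and all numerical parameters below are illustrative choices, not part of the abstract setting:

```python
import random

DT, N, SIGMA = 0.01, 200, 0.3

def noise_path(seed=7):
    # One fixed Brownian increment path, shared by all simulations below.
    rng = random.Random(seed)
    return [rng.gauss(0.0, DT ** 0.5) for _ in range(N)]

def simulate(x0, control, noise):
    """Euler-Maruyama for the linear toy dynamics dX = (X + a(s)) ds + sigma dW."""
    x, path = x0, [x0]
    for a, dw in zip(control, noise):
        x = x + (x + a) * DT + SIGMA * dw
        path.append(x)
    return path

def cost(path, control):
    """Convex sample cost with l(x,a) = x^2 + a^2 and g(x) = x^2."""
    return sum((x * x + a * a) * DT for x, a in zip(path, control)) + path[-1] ** 2

w = noise_path()
lam, a1, a0 = 0.3, [0.5] * N, [-0.2] * N
a_lam = [lam * u + (1 - lam) * v for u, v in zip(a1, a0)]

X1, X0 = simulate(1.0, a1, w), simulate(-1.0, a0, w)
X_lam = simulate(lam * 1.0 + (1 - lam) * (-1.0), a_lam, w)

# Linearity: the path started from the convex combination IS the convex combination.
gap_path = max(abs(x - (lam * y + (1 - lam) * z))
               for x, y, z in zip(X_lam, X1, X0))
# Convexity of the sample cost along the interpolation.
gap_cost = lam * cost(X1, a1) + (1 - lam) * cost(X0, a0) - cost(X_lam, a_lam)
print(gap_path, gap_cost)
```

The path gap vanishes up to floating-point error, and the convexity gap $\lambda J_1+(1-\lambda)J_0-J_{\lambda}$ is non-negative for this sample, as in the proof of Theorem 20.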
Throughout this section we set the Hilbert space $H=L^2(\mathcal{O})$, for some domain $\mathcal{O}\subset \mathbb{R}^d$, $d\in\mathbb{N}$. Moreover, for $x_1,x_2\in L^2(\mathcal{O})$, $x_1\leq x_2$ is understood in the almost everywhere sense. Finally, for $x \in L^2(\mathcal{O})$ we denote by $x_+$ its positive part, that is $x_+(\xi)=x(\xi)_+$, $\xi\in \mathcal{O}$, where for $a\in\mathbb{R}$, $a_+ := \max\{a,0\}$. #### Stochastic Partial Differential Equations (Monotonicity) To state a comparison result we impose the following assumptions. **Assumption 21**. 1. *Let $\mathrm{e}^{sA}$ be positivity preserving for every $s\ge 0$, i.e., for $x\in H$ with $x\geq 0$ it holds that $\mathrm{e}^{sA}x \geq 0$.* 2. *Let $\sigma(x,a) \equiv \sigma\in L_2(\Xi,H)$.* **Assumption 22**. *Let $\bar b:\Omega\times[t,T]\times H\to H$ be $\textit{Prog}_{[t,T]}\otimes {\mathscr B}(H)/ {\mathscr B}(H)$ measurable (where $\textit{Prog}_{[t,T]}$ is the $\sigma$-field of progressively measurable sets, see e.g., [@RY page 44] or [@fabbri2017 bottom of page 23], and ${\mathscr B}(H)$ is the Borel $\sigma$-field in $H$) and assume moreover that (suppressing as usual the first variable) $$\label{assumptionstar} \| [ \bar b(s,x_1) - \bar b(s,x_2) ]_+ \|_H \leq C \| [x_1-x_2]_+ \|_H$$ for all $s\in [t,T]$ and $x_1,x_2\in H$.* Under these assumptions, we have the following comparison result. We emphasize that in Theorem [Theorem 23](#comparisonSPDEs1){reference-type="ref" reference="comparisonSPDEs1"} we do not assume that mild solutions $X_i$, $i=1,2$, of equation [\[eq:mildxi\]](#eq:mildxi){reference-type="eqref" reference="eq:mildxi"} are unique. They only have to satisfy the conditions imposed in the definition of mild solutions, see [@fabbri2017 Definition 1.119]. **Theorem 23**. 
*Let Assumptions [Assumption 21](#comparisonprinciple){reference-type="ref" reference="comparisonprinciple"} and [Assumption 22](#comp:monotonicity){reference-type="ref" reference="comp:monotonicity"} be satisfied and let $0\leq t<T$. Let $X_i$, $i=1,2$, be a mild solution of $$\label{eq:mildxi} \begin{cases} \mathrm{d}X(s) = [AX(s) + \bar b(s,X(s))+f_i(s) ] \mathrm{d}s + \sigma \mathrm{d}W(s),\quad s\in [t,T]\\ X(t)=x_i\in L^2(\mathcal{O}), \end{cases}$$ where $f_i:\Omega\times [t,T] \to H$ are $\textit{Prog}_{[t,T]}/\mathscr{B}(H)$ measurable and $f_i \in L^1(t,T;H)$ $\mathbb{P}$--almost surely, $i=1,2$. Furthermore, let $$\label{assumptioncomparison} \begin{cases} x_1 \geq x_2,\\ f_1(s) \geq f_2(s). \end{cases}$$ Then $X_1(s) \geq X_2(s)$ for all $s\in [t,T]$.* *Proof.* Since $A$ generates a positivity preserving semigroup and by assumption [\[assumptioncomparison\]](#assumptioncomparison){reference-type="eqref" reference="assumptioncomparison"}, we have $$X_2(s) - X_1(s)\leq \int_t^s \mathrm{e}^{(s-r)A} ( \bar b(r,X_2(r)) - \bar b(r,X_1(r)) ) \mathrm{d}r \leq \int_t^s \mathrm{e}^{(s-r)A} [ \bar b(r,X_2(r)) - \bar b(r,X_1(r)) ]_+ \mathrm{d}r$$ and since the right hand side is a non-negative function, $$[ X_2(s) - X_1(s) ]_+ \leq \int_t^s \mathrm{e}^{(s-r)A} [ \bar b(r,X_2(r)) - \bar b(r,X_1(r)) ]_+ \mathrm{d}r.$$ Thus, by [\[assumptionstar\]](#assumptionstar){reference-type="eqref" reference="assumptionstar"}, $$\begin{split} \left \| [ X_2(s) - X_1(s) ]_+ \right \|_H &\leq \int_t^s \left \| \mathrm{e}^{(s-r)A} [ \bar b(r,X_2(r)) - \bar b(r,X_1(r)) ]_+ \right \|_H \mathrm{d}r\leq C \int_t^s \left \| [ X_2(r) - X_1(r) ]_+ \right \|_H \mathrm{d}r. \end{split}$$ Grönwall's inequality now yields $\| [ X_2(s) - X_1(s) ]_+ \|_H = 0$, that is, $X_2(s) \leq X_1(s)$. ◻ **Assumption 24**. 1. *Let $l:H \times \Lambda_0\to \mathbb{R}$ be convex.* 2. *Let $g:H \to \mathbb{R}$ be convex.* 3. 
*Let $l(\cdot,a)$ and $g$ satisfy $$\label{eq:lgdecreasing} \begin{cases} l(x_1,a) \geq l(x_2,a),\\ g(x_1) \geq g(x_2), \end{cases}$$ for all $x_1,x_2\in H$ such that $x_1\leq x_2$ and $a\in \Lambda_0$.* Using Theorem [Theorem 23](#comparisonSPDEs1){reference-type="ref" reference="comparisonSPDEs1"} we now obtain the following result. **Theorem 25**. *Let Assumptions [Assumption 7](#bsigmalipschitzfirstvariable){reference-type="ref" reference="bsigmalipschitzfirstvariable"}, [Assumption 10](#lglipschitzfirstvariable){reference-type="ref" reference="lglipschitzfirstvariable"}, [Assumption 21](#comparisonprinciple){reference-type="ref" reference="comparisonprinciple"} and [Assumption 24](#comparisonprinciple2){reference-type="ref" reference="comparisonprinciple2"} be satisfied. Assume in addition that $b$ satisfies $$\label{bconvexity} \lambda b(x_1,a_1)+(1-\lambda)b(x_0,a_0) \leq b(\lambda x_1+(1-\lambda)x_0,\lambda a_1+(1-\lambda)a_0)$$ for all $x_0,x_1\in H, a_0,a_1\in \Lambda_0, \lambda\in[0,1]$. Then $V(t,\cdot)$ is convex for $0\leq t\leq T$.* *Proof.* We use the same notation as in the proof of Theorem [Theorem 18](#convexity1){reference-type="ref" reference="convexity1"}. First note that $$\mathrm{d}X_{\lambda}(s) = [ AX_{\lambda}(s) + b(X_{\lambda}(s),a^{\varepsilon}_{\lambda}(s)) ] \mathrm{d}s + \sigma \mathrm{d}W(s)$$ and $$\begin{split} \mathrm{d}X^{\lambda}(s) & = [ A X^{\lambda}(s) + \lambda b(X_1(s),a^{\varepsilon}_1(s)) + (1-\lambda) b(X_0(s),a^{\varepsilon}_0(s)) ] \mathrm{d}s + \sigma \mathrm{d}W(s)\\ &= [ A X^{\lambda}(s) + b(X^{\lambda}(s),a^{\varepsilon}_{\lambda}(s))+\lambda b(X_1(s),a^{\varepsilon}_1(s)) + (1-\lambda) b(X_0(s),a^{\varepsilon}_0(s))-b(X^{\lambda}(s),a^{\varepsilon}_{\lambda}(s)) ] \mathrm{d}s + \sigma \mathrm{d}W(s). 
\end{split}$$ Due to [\[bconvexity\]](#bconvexity){reference-type="eqref" reference="bconvexity"}, we have $$\lambda b(X_1(s),a^{\varepsilon}_1(s)) + (1-\lambda) b(X_0(s),a^{\varepsilon}_0(s)) -b(X^{\lambda}(s),a^{\varepsilon}_{\lambda}(s)) \leq 0.$$ Thus, setting $$f_1(s)=0, \quad f_2(s) = \lambda b(X_1(s),a^{\varepsilon}_1(s)) + (1-\lambda) b(X_0(s),a^{\varepsilon}_0(s)) - b(X^{\lambda}(s),a^{\varepsilon}_{\lambda}(s)),$$ we have $f_1(s)\geq f_2(s)$. Therefore, we can apply Theorem [Theorem 23](#comparisonSPDEs1){reference-type="ref" reference="comparisonSPDEs1"} to obtain $$\label{comparisonresult} X_{\lambda}(s) \geq X^{\lambda}(s).$$ Now, since $l$ and $g$ are non-increasing in the $x$-variable and convex, we have $$\begin{split} V(t,x_{\lambda}) &\leq J(t,x_{\lambda};a^{\varepsilon}_{\lambda}(\cdot)) =\mathbb{E} \left [ \int_t^T l(X_{\lambda}(s),a^{\varepsilon}_{\lambda}(s))\mathrm{d}s + g(X_{\lambda}(T)) \right ]\\ &\leq \mathbb{E} \left [ \int_t^T l(X^{\lambda}(s),a^{\varepsilon}_{\lambda}(s))\mathrm{d}s + g(X^{\lambda}(T)) \right ]\\ &\leq \mathbb{E} \left [ \int_t^T (\lambda l(X_1(s),a^{\varepsilon}_1(s)) + (1-\lambda) l(X_0(s),a^{\varepsilon}_0(s))) \mathrm{d}s + \lambda g(X_1(T)) + (1-\lambda) g(X_0(T)) \right ]\\ &= \lambda J(t,x_1;a^{\varepsilon}_1(\cdot)) + (1-\lambda) J(t,x_0;a^{\varepsilon}_0(\cdot))\\ &\leq \lambda V(t,x_1) + (1-\lambda) V(t,x_0) + \varepsilon, \end{split}$$ which concludes the proof. ◻ **Remark 26**. Note that the conclusion of Theorem [Theorem 25](#convexity3){reference-type="ref" reference="convexity3"} still holds if the inequality in [\[bconvexity\]](#bconvexity){reference-type="eqref" reference="bconvexity"} is reversed (that is $b$ is "convex") and [\[eq:lgdecreasing\]](#eq:lgdecreasing){reference-type="eqref" reference="eq:lgdecreasing"} is satisfied whenever $x_1\geq x_2$ (that is $l$ and $g$ are "non-decreasing"). 
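The comparison argument of Theorem [Theorem 23](#comparisonSPDEs1){reference-type="ref" reference="comparisonSPDEs1"} can be visualised in the simplest one-dimensional situation, where $A=0$ (so $\mathrm{e}^{sA}=\mathrm{id}$ is trivially positivity preserving) and the additive noise cancels in the difference of two solutions driven by the same path. The drift $\sin$, the forcing levels, and the step size are illustrative choices:

```python
import math
import random

DT = 0.01

def euler(x0, f, noise, drift=math.sin, sigma=0.4):
    """Euler scheme for dX = (drift(X) + f) ds + sigma dW along a fixed noise path."""
    x, path = x0, [x0]
    for dw in noise:
        x = x + (drift(x) + f) * DT + sigma * dw
        path.append(x)
    return path

rng = random.Random(0)
noise = [rng.gauss(0.0, math.sqrt(DT)) for _ in range(500)]  # one shared path of W

X1 = euler(1.0, 0.5, noise)   # x_1 >= x_2 and f_1 >= f_2, as in the theorem
X2 = euler(0.0, 0.0, noise)

# The shared additive noise cancels in the difference, so the ordering is pathwise.
min_gap = min(a - b for a, b in zip(X1, X2))
print(min_gap)
```

The difference $X_1-X_2$ obeys a deterministic recursion that stays positive thanks to the Lipschitz constant of the drift; this is the one-dimensional shadow of the Grönwall step in the proof.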
#### Stochastic Partial Differential Equations (Nemytskii Type) In this part, we apply Theorem [Theorem 23](#comparisonSPDEs1){reference-type="ref" reference="comparisonSPDEs1"} to derive a comparison result for SPDEs with Nemytskii type coefficients. Similar comparison results were proven for example in [@kotelenez1992; @manthey1999; @milian2002]. However, none of these results applies directly to our situation. **Theorem 27**. *Let Assumption [Assumption 21](#comparisonprinciple){reference-type="ref" reference="comparisonprinciple"} hold. Let $\mathfrak{b}:\Omega\times[t,T]\times \mathbb{R} \to \mathbb{R}$ be $\textit{Prog}_{[t,T]}\otimes {\mathscr B}(\mathbb{R})/ {\mathscr B}(\mathbb{R})$ measurable (where ${\mathscr B}(\mathbb{R})$ is the Borel $\sigma$-field in $\mathbb{R}$) and let there be a constant $C_L$ such that (suppressing as usual the first variable) $$|\mathfrak{b}(s,\mathrm{x}_1) - \mathfrak{b}(s,\mathrm{x}_2) | \leq C_L |\mathrm{x}_1-\mathrm{x}_2|$$ for all $s\in [t,T]$ and $\mathrm{x}_1,\mathrm{x}_2\in \mathbb{R}$. Furthermore, assume that $\mathfrak{b}(\cdot,0)\in L^1(t,T)$ $\mathbb{P}$--almost surely. Let $b$ denote the Nemytskii operator associated with $\mathfrak{b}$, that is $b(s,x)(\xi):=\mathfrak{b}(s,x(\xi))$ for $s\in[t,T],x\in H$ and $\xi\in\mathcal{O}$. Furthermore, let $f_1,f_2: \Omega\times [t,T] \to H$ be $\textit{Prog}_{[t,T]}/ {\mathscr B}(H)$ measurable and $f_1,f_2\in L^1(t,T;H)$ $\mathbb{P}$--almost surely. Let $X_i$ be a mild solution of $$\begin{cases} \mathrm{d}X(s) = [AX(s) + b(s,X(s)) +f_i(s) ] \mathrm{d}s + \sigma \mathrm{d}W(s),\quad s\in [t,T]\\ X(t)=x_i\in L^2(\mathcal{O}), \end{cases}$$ $i=1,2$. Furthermore, let $$\begin{cases} x_1\geq x_2,\\ f_1(s) \geq f_2(s). 
\end{cases}$$ Then $X_1(s) \geq X_2(s)$ for all $s\in [t,T]$.* *Proof of Theorem [Theorem 27](#comparisonSPDEs){reference-type="ref" reference="comparisonSPDEs"}.* Note that for $C\in\mathbb{R}$, the process $Y_i(s) := \mathrm{e}^{C(s-t)} X_i(s)$, $s\in [t,T]$, is a mild solution of $$\mathrm{d}Y(s) = [ AY(s) + CY(s) + \mathrm{e}^{C(s-t)} b(s,\mathrm{e}^{-C(s-t)}Y(s)) + \mathrm{e}^{C(s-t)} f_i(s) ] \mathrm{d}s + \mathrm{e}^{C(s-t)}\sigma \mathrm{d}W(s).$$ We set $C=C_L$, where $C_L$ is the Lipschitz constant of $\mathfrak{b}$. Then $$\bar b(s,x) := C_L x + \mathrm{e}^{C_L(s-t)} b(s,\mathrm{e}^{-C_L(s-t)} x)$$ satisfies Assumption [Assumption 22](#comp:monotonicity){reference-type="ref" reference="comp:monotonicity"}. Indeed, note that due to the Lipschitz continuity of $\mathfrak{b}$, we have $$\begin{split} C_L\mathrm{x}_1 + \mathrm{e}^{C_L(s-t)} \mathfrak{b}(s,\mathrm{e}^{-C_L(s-t)}\mathrm{x}_1) - C_L\mathrm{x}_2 - \mathrm{e}^{C_L(s-t)} \mathfrak{b}(s,\mathrm{e}^{-C_L(s-t)}\mathrm{x}_2) \leq \begin{cases} 2C_L(\mathrm{x}_1-\mathrm{x}_2),& \text{if}\; \mathrm{x}_1\geq \mathrm{x}_2\\ 0,& \text{if}\; \mathrm{x}_1\leq \mathrm{x}_2, \end{cases} \end{split}$$ and therefore, $$\left [ C_L\mathrm{x}_1 + \mathrm{e}^{C_L(s-t)} \mathfrak{b}(s,\mathrm{e}^{-C_L(s-t)}\mathrm{x}_1) - C_L\mathrm{x}_2 - \mathrm{e}^{C_L(s-t)} \mathfrak{b}(s,\mathrm{e}^{-C_L(s-t)}\mathrm{x}_2) \right ]_+ \leq 2C_L [\mathrm{x}_1 - \mathrm{x}_2 ]_+$$ from which [\[assumptionstar\]](#assumptionstar){reference-type="eqref" reference="assumptionstar"} follows. Thus, we can apply Theorem [Theorem 23](#comparisonSPDEs1){reference-type="ref" reference="comparisonSPDEs1"}, which concludes the proof. ◻ **Assumption 28**. 
*The function $\mathfrak{b}: \mathbb{R} \times\Lambda_0\to \mathbb{R}$ is continuous and there exists a constant $C>0$ such that $$\begin{cases} |\mathfrak{b}(\mathrm{x}_1,a) - \mathfrak{b}(\mathrm{x}_2,a) | \leq C |\mathrm{x}_1 - \mathrm{x}_2|\\ | \mathfrak{b}(\mathrm{x},a) | \leq C(1 + |\mathrm{x}| + \| a \|_{\Lambda} ) \end{cases}$$ for all $\mathrm{x},\mathrm{x}_1,\mathrm{x}_2\in \mathbb{R}$, $a\in \Lambda_0$, and $$\lambda \mathfrak{b}(\mathrm{x}_1,a_1) +(1-\lambda) \mathfrak{b}(\mathrm{x}_0,a_0) \leq \mathfrak{b}(\lambda \mathrm{x}_1 +(1-\lambda)\mathrm{x}_0,\lambda a_1+(1-\lambda)a_0),$$ for all $\lambda \in [0,1]$, $\mathrm{x}_0,\mathrm{x}_1\in \mathbb{R}$ and $a_0,a_1\in \Lambda_0$.* We consider the control problem [\[costfunctional\]](#costfunctional){reference-type="eqref" reference="costfunctional"}--[\[state\]](#state){reference-type="eqref" reference="state"}, where $b:H\times \Lambda_0 \to H$ is given by the Nemytskii operator associated with $\mathfrak{b}$, i.e., $$b(x,a)(\xi) := \mathfrak{b}(x(\xi),a)$$ for $x\in H$, $a\in \Lambda_0$ and $\xi\in\mathcal{O}$. **Theorem 29**. *Let Assumptions [Assumption 10](#lglipschitzfirstvariable){reference-type="ref" reference="lglipschitzfirstvariable"}, [Assumption 21](#comparisonprinciple){reference-type="ref" reference="comparisonprinciple"}, [Assumption 24](#comparisonprinciple2){reference-type="ref" reference="comparisonprinciple2"} and [Assumption 28](#comparisonprinciple3){reference-type="ref" reference="comparisonprinciple3"} be satisfied. Then $V(t,\cdot)$ is convex.* The proof follows along the same lines as the proof of Theorem [Theorem 25](#convexity3){reference-type="ref" reference="convexity3"}. **Remark 30**. Note that in this whole subsection, the restriction to additive noise is crucial only in the proofs of the comparison theorems. 
If the comparison property [\[comparisonresult\]](#comparisonresult){reference-type="eqref" reference="comparisonresult"}, together with the assumptions imposed on $l$ and $g$, holds, the same arguments would give convexity of the value function for problems with multiplicative noise. ## $C^{1,1}$ Regularity of the Value Function {#sec:C11reg} It is well known that if $v:H\to\mathbb{R}$ is both semiconvex and semiconcave, then $v\in C^{1,1}(H)$. The proof of this result in a Hilbert space can be found for instance in [@lasry1986]. Thus, in each of the cases of Subsection [3.3](#semiconvexity){reference-type="ref" reference="semiconvexity"}, in combination with the semiconcavity of the value function proved in Subsection [3.2](#semiconcavity){reference-type="ref" reference="semiconcavity"}, we obtain that $V(t,\cdot)$ is $C^{1,1}$ on $H$ for all $t\in [0,T]$. We point out that this $C^{1,1}$ regularity result for the value function is applicable also to deterministic problems, i.e., when $\sigma=0.$ # Optimal Synthesis {#sec:optsynt} In this section, we show how to use viscosity solution methods and the fact that the value function is $C^{1,1}$ in the space variable to construct optimal feedback controls. Since we only have one derivative of the value function, in order to perform optimal synthesis, we need to assume that the diffusion coefficient $\sigma$ is independent of the control variable. **Assumption 31**. 
*Let $\sigma(x,a) = \sigma(x)$ be independent of the control and let there be a constant $C>0$ such that $$\| \sigma(x) - \sigma(x^{\prime}) \|_{L_2(\Xi,H)} \leq C \|x-x^{\prime}\|_{-1}$$ for all $x,x^{\prime}\in H$.* In this case, the HJB equation is given by $$\label{HJBsemilinear} \begin{cases} v_t + \langle Ax, Dv \rangle_H + \frac 1 2 \text{Tr} [ \sigma(x) \sigma^{\ast}(x) D^2 v ] + \mathcal{H}(x,Dv) =0\\ v(T,\cdot) = g, \end{cases}$$ where the Hamiltonian $\mathcal{H}:H\times H \to \mathbb{R}$ is given by $$\mathcal{H}(x,p)=\inf_{a\in \Lambda_0} \mathcal{F}(x,p,a),\quad \mathcal{F}(x,p,a) := \langle p,b(x,a) \rangle_H + l(x,a)$$ for all $x,p \in H$. For $m>0$, we define $$\mathcal{H}_m(x,p):=\inf_{a\in \Lambda_0, \|a\|_\Lambda \leq m} \mathcal{F}(x,p,a)$$ for all $x,p \in H$. Since the coefficients of our control problem may not be bounded in $a \in \Lambda_0$, we assume the following. **Assumption 32**. *For every $R>0$ there is an $m_R>0$ such that $$\mathcal{H}(x,p)=\mathcal{H}_{m_R}(x,p)$$ for all $x,p\in H,\|p\|_H\leq R$.* This assumption is satisfied for instance if $b(x,a)=b_1(x)+b_2(x,a)$, where $b_1,b_2$ satisfy Assumption [Assumption 7](#bsigmalipschitzfirstvariable){reference-type="ref" reference="bsigmalipschitzfirstvariable"}(i) and $$\|b_2(x,a)\|_H\leq C(1+\|a\|_{\Lambda})$$ for all $x\in H, a\in\Lambda_0$, and $$\lim_{\|a\|_{\Lambda} \to +\infty}\frac{l(x,a)}{\|a\|_{\Lambda}}=+\infty$$ uniformly in $x \in H$. Let us recall the definition of a viscosity solution of [\[HJBsemilinear\]](#HJBsemilinear){reference-type="eqref" reference="HJBsemilinear"}, see [@fabbri2017 Definition 3.35]. **Definition 33**. 
A function $\psi:(0,T)\times H\to \mathbb{R}$ is a test function if $\psi=\varphi+h$, where: - $\varphi \in C^{1,2}((0,T)\times H), \varphi$ is $B$--lower semicontinuous and $\varphi, \varphi_t, D\varphi , D^2\varphi , A^{\ast}D\varphi$ are uniformly continuous on $(0,T)\times H$; - $h(t,x)=h_0(t,\|x\|_H)$, where $h_0 \in C^{1,2}((0,T)\times\mathbb R), h_0(t,\cdot)$ is even and for every $t\in(0,T), h_0(t,\cdot)$ is non-decreasing on $[0,+\infty)$. We say that a function is locally bounded if it is bounded on bounded subsets of its domain. **Definition 34**. 1. A locally bounded upper semicontinuous function $v:(0,T]\times H\to\mathbb{R}$ is a viscosity subsolution of [\[HJBsemilinear\]](#HJBsemilinear){reference-type="eqref" reference="HJBsemilinear"} if $v$ is $B$--upper semicontinuous on $(0,T)\times H$, $v(T,x)\leq g(x)$ for $x\in H$ and, whenever $v-\psi$ has a local maximum at $(t,x) \in (0,T)\times H$ for a test function $\psi =\varphi+h$, then $$\psi_t(t,x) +\langle x, A^{\ast}D\varphi(t,x) \rangle_H + \frac 1 2 \text{Tr} [ \sigma(x) \sigma^{\ast}(x) D^2 \psi(t,x) ] + \mathcal{H}(x,D\psi(t,x)) \geq 0.$$ 2. A locally bounded lower semicontinuous function $v:(0,T]\times H\to\mathbb{R}$ is a viscosity supersolution of [\[HJBsemilinear\]](#HJBsemilinear){reference-type="eqref" reference="HJBsemilinear"} if $v$ is $B$--lower semicontinuous on $(0,T)\times H$, $v(T,x)\geq g(x)$ for $x\in H$ and, whenever $v+\psi$ has a local minimum at $(t,x) \in (0,T)\times H$ for a test function $\psi =\varphi+h$, then $$-\psi_t(t,x) -\langle x, A^{\ast}D\varphi(t,x) \rangle_H - \frac 1 2 \text{Tr} [ \sigma(x) \sigma^{\ast}(x) D^2 \psi(t,x) ] + \mathcal{H}(x,-D\psi(t,x)) \leq 0.$$ 3. 
A function $v:(0,T]\times H\to\mathbb{R}$ is a viscosity solution of [\[HJBsemilinear\]](#HJBsemilinear){reference-type="eqref" reference="HJBsemilinear"} if it is both a viscosity subsolution of [\[HJBsemilinear\]](#HJBsemilinear){reference-type="eqref" reference="HJBsemilinear"} and a viscosity supersolution of [\[HJBsemilinear\]](#HJBsemilinear){reference-type="eqref" reference="HJBsemilinear"}. We have the following uniqueness result. **Proposition 35**. *Let Assumptions [Assumption 2](#assumptionA){reference-type="ref" reference="assumptionA"}, [Assumption 7](#bsigmalipschitzfirstvariable){reference-type="ref" reference="bsigmalipschitzfirstvariable"}, [Assumption 10](#lglipschitzfirstvariable){reference-type="ref" reference="lglipschitzfirstvariable"}, [Assumption 31](#hp:sigma_independent_control){reference-type="ref" reference="hp:sigma_independent_control"}, [Assumption 32](#coercivity){reference-type="ref" reference="coercivity"} hold. Then $V$ is the unique viscosity solution of [\[HJBsemilinear\]](#HJBsemilinear){reference-type="eqref" reference="HJBsemilinear"} in the set $$\begin{split} S=\Big\{&u:[0,T]\times H\to\mathbb{R}: |u(t,x)|\leq C(1+\|x\|_H^k)\quad\textit{for some}\,\,k\geq 0, \\ &|u(t,x)-u(t,y)|\leq C\|x-y\|_H\quad\textit{for all}\,\,t\in(0,T],x,y\in H,\,\,\textit{and} \\ &\lim_{t\to T}|u(t,x)-g(e^{(T-t)A}x)|=0 \quad\textit{uniformly on bounded subsets of}\,\,H\Big\}. \end{split}$$ Moreover, $V$ is uniformly continuous on every set $[0,\tau]\times B_R$ for every $0<\tau<T, R>0$.* Note that we cannot immediately use [@fabbri2017 Theorem 3.67] as in our case the coefficients of the control problem are not bounded in $\Lambda$. *Proof.* 1. Let $m>0$ and consider the truncated control set $\mathcal{U}^m_t= \left \{a(\cdot)\in \mathcal{U}_t: \| a\|_{L^{\infty}((t,T)\times\Omega;\Lambda)} \leq m \right\}$. 
Define the value function for the optimal control problem with controls $\mathcal{U}^m_t$ by $$V^m(t,x) := \inf_{a(\cdot)\in \mathcal{U}^m_t} J(t,x;a(\cdot)).$$ By [@fabbri2017 Theorem 3.67], $V^m$ is the unique viscosity solution of $$\label{eq:hjb_V_m} \begin{cases} v_t + \langle Dv,Ax \rangle_H +\frac 1 2\text{Tr} [ \sigma(x) \sigma^{\ast}(x) D^2 v ] + \mathcal{H}_m(x,Dv) =0\\ v(T,\cdot) = g \end{cases}$$ in $S$. Moreover, it follows from [@fabbri2017 Proposition 3.62 and Lemma 3.19(i)] that $V^m$ is uniformly continuous on $[0,\tau]\times B_R$ for every $0<\tau<T, R>0$. 2. Note that, as in Theorem [Theorem 11](#th:V_lip){reference-type="ref" reference="th:V_lip"}, $V^m(t,\cdot)$ is Lipschitz uniformly in $t$ with Lipschitz constant $R>0$ independent of $m.$ Then, if $\psi$ is a test function in the definition of viscosity solution of [\[eq:hjb_V\_m\]](#eq:hjb_V_m){reference-type="eqref" reference="eq:hjb_V_m"} and $V^m-\psi$ (respectively, $V^m+\psi$) achieves a local maximum (respectively, minimum) at $(\overline t,\overline x)$, we have $\|D \psi(\overline t,\overline x)\|_H \leq R$. Thus, setting $\overline m = m_R$, this implies $$V^m=V^{\overline m}$$ for all $m \geq \overline m$. 3. We prove that $$V^{\overline m}=V.$$ Since obviously $V^{\overline m} \geq V$ as $\mathcal{U}^{\overline m}_t \subset \mathcal{U}_t$, we are left to show that $$\label{eq:V_m_leq_V} V^{\overline m} \leq V.$$ Indeed, let $t \in[0,T]$, $x \in H$. Fix $\varepsilon>0$ and let $a^\varepsilon(\cdot)$ be an $\varepsilon$-optimal control for $V$, i.e., $J(t,x;a^\varepsilon(\cdot)) \leq V(t,x)+\varepsilon$. Fix $m > \overline m$ and let $a_m^\varepsilon(\cdot) \in \mathcal{U}^m_t$ be a truncation at $m$ of $a^\varepsilon(\cdot)$, that is, $a_m^\varepsilon(t)=a^\varepsilon(t)$ if $\|a^\varepsilon(t)\|_\Lambda \leq m$ and $a_m^\varepsilon(t)=a_0$ for some fixed $a_0\in\Lambda_0$ otherwise. 
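The truncation $a_m^\varepsilon(\cdot)$ just described admits a simple deterministic sanity check ($\sigma=0$, a scalar state); the dynamics, the costs, and the spiky control below are illustrative choices. The cost of the truncated control approaches the cost of the original one as the truncation level $m$ grows, and coincides with it once $m$ dominates the control:

```python
import math

DT, N = 0.01, 1000

def cost(control):
    """Toy cost for dX = (-X + a) ds (sigma = 0), l(x,a) = x^2 + a^2, g(x) = x^2."""
    x, J = 1.0, 0.0
    for a in control:
        J += (x * x + a * a) * DT
        x += (-x + a) * DT
    return J + x * x

# A control with rare spikes of growing height on top of a bounded part.
a = [5.0 + 0.01 * k if k % 97 == 0 else math.sin(0.01 * k) for k in range(N)]

J_full = cost(a)
# a_m: keep a(t) where |a(t)| <= m, replace it by a_0 = 0 otherwise.
diffs = [abs(cost([v if abs(v) <= m else 0.0 for v in a]) - J_full)
         for m in (1.0, 7.0, 20.0)]
print(diffs)
```

Raising $m$ keeps more of the original control, so the cost gap shrinks, and it vanishes once all spikes fall below the truncation level.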
First, by Step 2 and the definition of $V^m$, we have $$\label{eq:V_bar_m_leq_J_a_m} V^{\overline m}(t,x)=V^m(t,x) \leq J(t,x;a_m^\varepsilon(\cdot)).$$ We now prove $$\label{eq:J(a_m_eps)_to_J(a_eps)} \lim_{m \to \infty} J(t,x;a_m^\varepsilon(\cdot)) =J(t,x;a^\varepsilon(\cdot))$$ for all $\varepsilon>0$. Indeed, denoting by $X^\varepsilon_m, X^\varepsilon$ the solutions of the state equation with initial time $t$, initial datum $x$ and controls $a_m^\varepsilon(\cdot)$, $a^\varepsilon(\cdot)$, respectively, we have $$\begin{split} | J(t,x;a_m^\varepsilon(\cdot)) - J(t,x;a^\varepsilon(\cdot))| & \leq \int_t^T \mathbb E |l(X^\varepsilon_m(s),a_m^\varepsilon(s))-l(X^\varepsilon(s),a_m^\varepsilon(s))| \mathrm{d}s\\ &\quad + \int_t^T \mathbb E |l(X^{\varepsilon}(s),a_m^\varepsilon(s))-l(X^{\varepsilon}(s),a^\varepsilon(s))| \mathrm{d}s\\ &\quad + \mathbb E |g(X^\varepsilon_m(T))-g(X^\varepsilon(T))|=:I_1^m+I_2^m+I_3^m. \end{split}$$ Noticing that $a_m^\varepsilon(s) \xrightarrow{m \to \infty} a^\varepsilon(s)$ for a.e. $s$, a.s., by dominated convergence we have $I_2^m \xrightarrow{m\to \infty}0.$\ For $I_1^m,$ by Assumption [Assumption 10](#lglipschitzfirstvariable){reference-type="ref" reference="lglipschitzfirstvariable"}, Lemma [Lemma 9](#estimatex1x0one){reference-type="ref" reference="estimatex1x0one"} and dominated convergence, we have $$\begin{split} I_1^m &\leq C\int_t^T \mathbb E \| X^\varepsilon_m(s)-X^\varepsilon(s)\|_H \mathrm{d}s \leq C \mathbb E \left[ \sup_{s \in [t,T]}\| X^\varepsilon_m(s)-X^\varepsilon(s)\|_H \right]\\ &\leq C \mathbb E \left[ \sup_{s \in [t,T]}\| X^\varepsilon_m(s)-X^\varepsilon(s)\|_H^2 \right]^{1/2} \leq C \left ( \int_t^T \mathbb E \|a_m^\varepsilon (s)-a^\varepsilon(s) \|_\Lambda^2 \mathrm{d}s \right)^{1/2} \xrightarrow{m \to \infty} 0. \end{split}$$ A similar argument also shows that $I_3^m \xrightarrow{m \to \infty} 0$.
This proves [\[eq:J(a_m\_eps)\_to_J(a_eps)\]](#eq:J(a_m_eps)_to_J(a_eps)){reference-type="eqref" reference="eq:J(a_m_eps)_to_J(a_eps)"}. Taking the limit $m \to \infty$ in [\[eq:V_bar_m\_leq_J\_a_m\]](#eq:V_bar_m_leq_J_a_m){reference-type="eqref" reference="eq:V_bar_m_leq_J_a_m"} we therefore obtain $$V^{\overline m}(t,x) \leq J(t,x;a^\varepsilon(\cdot)) \leq V(t,x)+ \varepsilon,$$ where the last inequality follows from the fact that, by definition, $a^\varepsilon(\cdot)$ is an $\varepsilon$-optimal control for $(t,x)$. Letting $\varepsilon \to 0$ we get [\[eq:V_m\_leq_V\]](#eq:V_m_leq_V){reference-type="eqref" reference="eq:V_m_leq_V"}. 4. From the previous steps it follows that $V$ is the unique viscosity solution of [\[eq:hjb_V\_m\]](#eq:hjb_V_m){reference-type="eqref" reference="eq:hjb_V_m"} for all $m\geq\overline m$ and, since $V(t,\cdot)$ is Lipschitz, $V$ is also a viscosity solution of [\[HJBsemilinear\]](#HJBsemilinear){reference-type="eqref" reference="HJBsemilinear"}. If $u\in S$ is another viscosity solution of [\[HJBsemilinear\]](#HJBsemilinear){reference-type="eqref" reference="HJBsemilinear"}, both $V$ and $u$ are viscosity solutions of [\[eq:hjb_V\_m\]](#eq:hjb_V_m){reference-type="eqref" reference="eq:hjb_V_m"} for all sufficiently large $m$, and by the comparison principle for viscosity solutions of [\[eq:hjb_V\_m\]](#eq:hjb_V_m){reference-type="eqref" reference="eq:hjb_V_m"} given by [@fabbri2017 Theorem 3.54] (or the existence and uniqueness theorem [@fabbri2017 Theorem 3.67]) we conclude that $u=V$.  ◻ Motivated by the regularity results of the previous section, we now impose the following assumption. **Assumption 36**. *For every $0\leq t \leq T$, let $V(t,\cdot)\in C^{1,1}(H)$, and let there exist $C\geq 0$ such that $\|DV(t,x)\|_H\leq C$ and $\|DV(t,x)-DV(t,x')\|_H\leq C\|x-x'\|_H$ for all $0\leq t \leq T$ and $x,x'\in H$.* We first prove the following regularity result. **Lemma 37**.
*Let $V$ be uniformly continuous on $[0,\tau]\times B_R$ for every $0<\tau<T$ and $R>0$, and let Assumption [Assumption 36](#hp:V_C_11){reference-type="ref" reference="hp:V_C_11"} hold. Then $DV$ is uniformly continuous on $[0,\tau]\times B_R$ for every $0<\tau<T$ and $R>0$.* *Proof.* Fix $0<\tau<T$ and $R>0$ and assume, for the sake of contradiction, that there exist $\varepsilon>0$ and sequences $(t_n,x_n)_{n\in{\mathbb{N}}} ,(s_n,y_n)_{n\in{\mathbb{N}}} \subset [0,\tau] \times B_R$ such that $|t_n-s_n| \to 0, \|x_n-y_n\|_H \to 0$ and $\|DV(t_n,x_n)- DV(s_n,y_n)\|_H \geq \varepsilon.$ We remark that in the following, the constants $C$ may depend on $R$. Define $$V_n(t,x):=V(t,x)-V(s_n,y_n)- \langle DV(s_n,y_n),x-y_n\rangle_H.$$ Note that $V_n$ is uniformly continuous on $[0,\tau] \times B_R$ with a modulus of continuity $\rho$ independent of $n$, and that $V_n(t, \cdot) \in C^{1,1}$ for every $t\leq T$, with the Lipschitz constant of $DV_n$ independent of $n$ and $t$. Moreover, note that $DV_n(t,x)=DV(t,x)-DV(s_n,y_n).$ Then $$V_n(s_n,y_n)=0, \quad DV_n(s_n,y_n)=0, \quad \|DV_n(t_n,x_n)\|_H=\|DV(t_n,x_n)- DV(s_n,y_n)\|_H \geq \varepsilon.$$ Hence, for every $y \in H$ such that $$\label{eq:norm_y-y_n} \| y-y_n\|_H \leq \sqrt{\rho(|t_n-s_n|)}+2\|x_n-y_n \|_H,$$ by the Mean Value Theorem (denoting $y_{\theta_n}:=\theta_n y + (1-\theta_n)y_n$ for some $\theta_n \in [0,1]$) we have $$\begin{split} |V_n(s_n,y)|&=|V_n(s_n,y)-V_n(s_n,y_n)|=| \langle DV_n(s_n,y_{\theta_n}),y-y_n \rangle_H| =| \langle DV_n(s_n,y_{\theta_n})- DV_n(s_n,y_n),y-y_n \rangle_H| \\ &\leq C \|y-y_n\|_H^2 \leq C \left[\rho(|t_n-s_n|)+\|x_n-y_n \|_H^2 \right], \end{split}$$ where we have also used the fact that $V_n(t, \cdot) \in C^{1,1}$ for every $t \leq T$ with Lipschitz constant of $DV_n$ independent of $n$ and $t \leq T$.
It follows that $$\label{eq_norm_v_n_t_n_y} |V_n(t_n,y)|\leq |V_n(s_n,y)| + \rho(|t_n-s_n|)\leq C \left[\rho(|t_n-s_n|)+\|x_n-y_n \|_H^2 \right].$$ Defining $y \in H$ by $$y:=x_n+\frac{DV_n(t_n,x_n)}{\|DV_n(t_n,x_n) \|_H} \left [ \sqrt{\rho(|t_n-s_n|)}+\|x_n-y_n \|_H \right ],$$ we have [\[eq:norm_y-y_n\]](#eq:norm_y-y_n){reference-type="eqref" reference="eq:norm_y-y_n"}, so that [\[eq_norm_v\_n_t\_n_y\]](#eq_norm_v_n_t_n_y){reference-type="eqref" reference="eq_norm_v_n_t_n_y"} holds. Hence, we have a first inequality for $V_n(t_n,y)$: $$\label{inequalityVone} C \left[\rho(|t_n-s_n|)+\|x_n-y_n \|_H^2 \right] \geq V_n(t_n,y).$$ Note also that $x_n$ satisfies [\[eq:norm_y-y_n\]](#eq:norm_y-y_n){reference-type="eqref" reference="eq:norm_y-y_n"}, so that [\[eq_norm_v\_n_t\_n_y\]](#eq_norm_v_n_t_n_y){reference-type="eqref" reference="eq_norm_v_n_t_n_y"} holds for $V_n(t_n,x_n)$. Then, since $V_n(t, \cdot) \in C^{1,1}$ for every $t \leq T$ with Lipschitz constant of $DV_n$ independent of $n$ and $t \leq T$, we have $$\label{inequalityVtwo} \begin{split} V_n(t_n,y) & \geq V_n(t_n,x_n) + \left \langle DV_n(t_n,x_n), \frac{DV_n(t_n,x_n)}{\|DV_n(t_n,x_n) \|_H} \right \rangle_H \left [ \sqrt{\rho(|t_n-s_n|)}+\|x_n-y_n \|_H \right ]\\ & \quad - C \|y-x_n\|_H^2\\ & =V_n(t_n,x_n) +\|DV_n(t_n,x_n) \|_H \left [ \sqrt{\rho(|t_n-s_n|)}+\|x_n-y_n \|_H \right ]- C \|y-x_n\|_H^2\\ & \geq - C \left[\rho(|t_n-s_n|)+\|x_n-y_n \|_H^2 \right] + \varepsilon \left [ \sqrt{\rho(|t_n-s_n|)}+\|x_n-y_n \|_H \right ], \end{split}$$ where in the last inequality we have used [\[eq_norm_v\_n_t\_n_y\]](#eq_norm_v_n_t_n_y){reference-type="eqref" reference="eq_norm_v_n_t_n_y"} for $V_n(t_n,x_n)$, the bound $\|DV_n(t_n,x_n)\|_H \geq \varepsilon$, and the identity $\|y-x_n\|_H=\sqrt{\rho(|t_n-s_n|)}+\|x_n-y_n\|_H$.
Combining the two inequalities [\[inequalityVone\]](#inequalityVone){reference-type="eqref" reference="inequalityVone"} and [\[inequalityVtwo\]](#inequalityVtwo){reference-type="eqref" reference="inequalityVtwo"} for $V_n(t_n,y)$, we obtain $$C \left[\rho(|t_n-s_n|)+\|x_n-y_n \|_H^2 \right] \geq \varepsilon \left [ \sqrt{\rho(|t_n-s_n|)}+\|x_n-y_n \|_H \right ],$$ which is a contradiction for large $n$. ◻ For $x,p\in H$, let $\Gamma(x,p) \subset \Lambda_0$ denote the set of all $a^{\ast}\in \Lambda_0$ such that $$\mathcal{H}(x,p,a^{\ast}) = \inf_{a\in \Lambda_0} \mathcal{H}(x,p,a).$$ We impose the following assumption. **Assumption 38**. 1. *There exists a selection function $$\gamma : H \times H \to \Lambda_0,\quad (x,p) \mapsto \gamma(x,p) \in \Gamma(x,p),$$ which is Lipschitz continuous in both variables.* 2. *For every $R>0$ there is a modulus $\omega_R$ such that $$|l(x,a)-l(x,a')|\leq \omega_R(\|a-a'\|_{\Lambda})$$ for all $x\in H$ and $a,a'\in\Lambda_0$ such that $\|x\|_H$, $\|a\|_{\Lambda},\|a'\|_{\Lambda}\leq R$.* For an example in which Assumption [Assumption 38](#infimumattained){reference-type="ref" reference="infimumattained"} is satisfied, see Example [Example 40](#exampleinfimumattained){reference-type="ref" reference="exampleinfimumattained"} below. Due to Assumption [Assumption 38](#infimumattained){reference-type="ref" reference="infimumattained"}, it is immediate that $V$, which is the viscosity solution of the HJB equation [\[HJBsemilinear\]](#HJBsemilinear){reference-type="eqref" reference="HJBsemilinear"}, is also a viscosity solution of the linear equation $$\label{reducedHJB} v_t + \langle Ax,Dv\rangle_H + \frac12 \text{Tr} [ \sigma(x) \sigma^{\ast}(x) D^2v ] + \langle \tilde b(t,x), Dv\rangle_H +\tilde l(t,x) = 0,$$ where $\tilde b(t,x):= b(x,\gamma(x,DV(t,x)))$ and $\tilde l(t,x):=l(x,\gamma(x,DV(t,x)))$. **Theorem 39**.
*Let Assumptions [Assumption 2](#assumptionA){reference-type="ref" reference="assumptionA"}, [Assumption 8](#bsigmalipschitz){reference-type="ref" reference="bsigmalipschitz"}, [Assumption 10](#lglipschitzfirstvariable){reference-type="ref" reference="lglipschitzfirstvariable"}, [Assumption 31](#hp:sigma_independent_control){reference-type="ref" reference="hp:sigma_independent_control"}, [Assumption 32](#coercivity){reference-type="ref" reference="coercivity"}, [Assumption 36](#hp:V_C_11){reference-type="ref" reference="hp:V_C_11"}, [Assumption 38](#infimumattained){reference-type="ref" reference="infimumattained"} hold. Then the pair $(a^{\ast}(\cdot),X^{\ast}(\cdot))$ defined by $$\begin{cases} a^{\ast}(s) = \gamma(X^{\ast}(s),DV(s,X^{\ast}(s)))\\ X^{\ast}(s) = X(s,x;a^{\ast}(\cdot)) \end{cases}$$ is an optimal pair for the optimal control problem [\[costfunctional\]](#costfunctional){reference-type="eqref" reference="costfunctional"}--[\[state\]](#state){reference-type="eqref" reference="state"}; in particular, the control problem admits an optimal feedback control.* We point out that this result also applies to deterministic problems, i.e., when $\sigma=0$. *Proof.* Consider the linear equation [\[reducedHJB\]](#reducedHJB){reference-type="eqref" reference="reducedHJB"}. Thanks to Lemma [Lemma 37](#lemma:DV_locally_uniform_continuous){reference-type="ref" reference="lemma:DV_locally_uniform_continuous"} and our assumptions, the functions $\tilde b$ and $\tilde l$ are uniformly continuous on $[0,\tau] \times B_R$ for every $0<\tau<T, R>0$, and moreover $$\|\tilde b(t,x)-\tilde b(t,y)\|_H\leq C\|x-y\|_H$$ for all $t\in[0,T]$, $x,y\in H$, and for every $R>0$ there is a modulus $\tilde{\omega}_R$ such that $$|\tilde l(t,x)-\tilde l(t,y)|\leq \tilde{\omega}_R(\|x-y\|_H)$$ for all $t\in[0,T]$, $x,y\in H$, $\|x\|_H,\|y\|_H\leq R$.
Hence, we can apply [@fabbri2017 Theorem 3.67] (with the control set being a singleton) to get the existence of a unique viscosity solution of [\[reducedHJB\]](#reducedHJB){reference-type="eqref" reference="reducedHJB"} in $S$, given by the Feynman--Kac formula $$\label{eq:feynman_kac} v(t,x) = \mathbb{E} \left [ \int_t^T \tilde l(s,X(s)) \mathrm{d}s + g(X(T)) \right ],$$ where $X(s)$ is the solution of $$\label{stateexplicitsolution} \begin{cases} \mathrm{d}X(s) = [ A X(s) + \tilde b(s,X(s)) ] \mathrm{d}s + \sigma(X(s))\mathrm{d}W(s)\\ X(t) = x. \end{cases}$$ We note that in order to apply the comparison theorem [@fabbri2017 Theorem 3.54] (used in [@fabbri2017 Theorem 3.67] to obtain uniqueness of viscosity solutions) to equation [\[reducedHJB\]](#reducedHJB){reference-type="eqref" reference="reducedHJB"}, it is enough that the functions $\tilde b$ and $\tilde l$ are uniformly continuous on $[0,\tau] \times B_R$ for every $0<\tau<T, R>0$, rather than on $[0,T] \times B_R$. Since $V$ is also a viscosity solution of [\[reducedHJB\]](#reducedHJB){reference-type="eqref" reference="reducedHJB"}, by uniqueness of viscosity solutions of [\[reducedHJB\]](#reducedHJB){reference-type="eqref" reference="reducedHJB"} we have $$\label{eq:Vrepresentation} V(t,x)=v(t,x) = \mathbb{E} \left [ \int_t^T l(X(s),\gamma(X(s),DV(s,X(s)))) \mathrm{d}s + g(X(T)) \right ].$$ This shows the optimality of the feedback control $a^{\ast}(\cdot)$. ◻ We give an example in which Assumption [Assumption 38](#infimumattained){reference-type="ref" reference="infimumattained"} is satisfied. **Example 40**. Let $\Lambda_0=\Lambda=H$, $l(x,a) = l_1(x) + l_2(a)$ and $b(x,a) = b(x) - a$, where $l_2\in C^{1,1}(H)$ satisfies Assumption [Assumption 32](#coercivity){reference-type="ref" reference="coercivity"} and is such that $l_2(a)-\nu\|a\|_H^2$ is convex for some $\nu>0$.
Then $$\mathcal{H}(x,p,a) := \langle p, b(x) \rangle_H - \langle p,a\rangle_H + l_1(x) + l_2(a).$$ Since $a\mapsto\mathcal{H}(x,p,a)$ is strictly convex and differentiable, the infimum is attained at the unique solution of the first-order condition $D_a\mathcal{H}(x,p,a)=-p+Dl_2(a)=0$, that is, at $$a^{\ast} = Dl_2^{-1}(p).$$ In particular, Assumption [Assumption 38](#infimumattained){reference-type="ref" reference="infimumattained"} is satisfied. Indeed, due to the convexity of $l_2(\cdot) - \nu \| \cdot \|_H^2$ and the differentiability of $l_2$, we have $$l_2(x) \geq l_2(y) + \langle Dl_2(y),x-y\rangle_H + \nu \|x-y\|_H^2$$ as well as $$l_2(y) \geq l_2(x) + \langle Dl_2(x),y-x\rangle_H + \nu \|x-y\|_H^2.$$ Adding these two inequalities yields $$\label{injectivity} \langle Dl_2(x) - Dl_2(y),x-y\rangle_H \geq 2\nu \|x-y\|_H^2.$$ This implies that $Dl_2$ is injective. Moreover, since $l_2 - \nu \|\cdot \|_H^2$ is convex (Assumption [Assumption 17](#lgsemiconvex){reference-type="ref" reference="lgsemiconvex"}(i)), $Dl_2 - 2\nu \text{Id}$ is maximal monotone. Therefore, $Dl_2 -2\nu \text{Id} + \lambda \text{Id}$ is surjective for all $\lambda>0$. In particular (taking $\lambda = 2\nu$), $Dl_2$ is surjective and thus bijective. Moreover, by [\[injectivity\]](#injectivity){reference-type="eqref" reference="injectivity"} we have $$\begin{split} \| Dl_2^{-1}(x) - Dl_2^{-1}(y) \|_H^2 &\leq \frac{1}{2\nu} \langle Dl_2^{-1}(x) - Dl_2^{-1}(y), x-y \rangle_H\leq \frac{1}{2\nu} \| Dl_2^{-1}(x) - Dl_2^{-1}(y) \|_H \|x-y\|_H. \end{split}$$ Therefore, $Dl_2^{-1}$ is Lipschitz continuous with Lipschitz constant $1/(2\nu)$, so Assumption [Assumption 38](#infimumattained){reference-type="ref" reference="infimumattained"}(1) is satisfied; part (2) also holds, since $Dl_2$ is bounded on bounded sets and hence $l_2$ is Lipschitz continuous on bounded sets. # Case of Weak $B$-Condition {#weakBcase} In this section, we investigate the regularity of the value function and perform optimal synthesis under the so-called weak $B$-condition (Assumption [Assumption 3](#assumptionAw){reference-type="ref" reference="assumptionAw"}).
To do this, we need better regularity properties of the coefficients of the state equation [\[stateexplicitsolution\]](#stateexplicitsolution){reference-type="eqref" reference="stateexplicitsolution"}. ## Lipschitz Continuity in $\|\cdot\|_{-1}$ {#lipschitzw} We will need the following assumptions. **Assumption 41**. 1. *The function $b$ is continuous on $H\times \Lambda_0$ and there exists a constant $C>0$ such that $$\|b(x,a)-b(x^{\prime},a) \|_H \leq C \|x-x^{\prime} \|_{-1}$$ for all $x,x^{\prime}\in H$ and $a\in\Lambda_0$.* 2. *There exists a constant $C>0$ such that $$\|b(x,a)\|_H \leq C(1+\|x\|_H+\|a\|_{\Lambda} )$$ for all $x\in H$ and $a\in\Lambda_0$.* 3. *The function $\sigma$ is continuous on $H\times \Lambda_0$ and there exists a constant $C>0$ such that $$\|\sigma(x,a)-\sigma(x^{\prime},a) \|_{L_2(\Xi,H)} \leq C \|x-x^{\prime} \|_{-1}$$ for all $x,x^{\prime}\in H$ and $a\in\Lambda_0$.* 4. *There exists a constant $C>0$ such that $$\|\sigma(x,a)\|_{L_2(\Xi,H)} \leq C(1+\|x\|_H+\|a\|_{\Lambda} )$$ for all $x\in H$ and $a\in\Lambda_0$.* **Assumption 42**. 1. *There exists a constant $C>0$ such that $$\| b(x,a) - b(x^{\prime},a^{\prime} ) \|_H \leq C (\|x-x^{\prime}\|_{-1} + \|a-a^{\prime}\|_{\Lambda} )$$ for all $x,x^{\prime}\in H$ and $a,a^{\prime}\in \Lambda_0$.* 2. *There exists a constant $C>0$ such that $$\| \sigma(x,a) - \sigma(x^{\prime},a^{\prime} ) \|_{L_2(\Xi,H)} \leq C (\|x-x^{\prime}\|_{-1} + \|a-a^{\prime}\|_{\Lambda} )$$ for all $x,x^{\prime}\in H$ and $a,a^{\prime}\in \Lambda_0$.* We have the following counterpart of Lemma [Lemma 9](#estimatex1x0one){reference-type="ref" reference="estimatex1x0one"}. **Lemma 43**. *For $x_0,x_1\in H$ and $a_0(\cdot),a_1(\cdot)\in \mathcal{U}_t$, define $X_0(s) = X(s;t,x_0,a_0(\cdot))$, $X_1(s)=X(s;t,x_1,a_1(\cdot))$. Then the following hold:* 1.
*Let Assumptions [Assumption 3](#assumptionAw){reference-type="ref" reference="assumptionAw"} and [Assumption 41](#bsigmalipschitzfirstvariablew){reference-type="ref" reference="bsigmalipschitzfirstvariablew"} be satisfied. Let $a_0(\cdot) = a_1(\cdot) = a(\cdot)\in \mathcal{U}_t$. Then there are constants $C_1,C_2\geq 0$ independent of $T$, such that $$\mathbb{E} \left [ \sup_{s\in [t,T]} \| X_1(s) - X_0(s) \|_{-1}^{2} \right ] \leq C_1 \mathrm{e}^{C_2(T-t)}\|x_1 - x_0 \|_{-1}^{2}$$ for all $x_0,x_1\in H$ and $a(\cdot)\in \mathcal{U}_t$.* 2. *Let Assumptions [Assumption 3](#assumptionAw){reference-type="ref" reference="assumptionAw"} and [Assumption 42](#bsigmalipschitzw){reference-type="ref" reference="bsigmalipschitzw"} be satisfied. Then there are constants $C_1,C_2\geq 0$ independent of $T$, such that $$\label{ew:-1weak} \mathbb{E} \left [ \sup_{s\in [t,T]} \| X_1(s) - X_0(s) \|_{-1}^{2} \right ] \leq C_1 \mathrm{e}^{C_2(T-t)} \left ( \|x_1 - x_0 \|_{-1}^{2} + \mathbb{E} \left [ \int_t^T \| a_1(s) - a_0(s) \|_{\Lambda}^2 \mathrm{d}s \right ] \right )$$ for all $x_0,x_1\in H$ and $a_0(\cdot), a_1(\cdot) \in \mathcal{U}_t$.* *Proof.* The proof follows the proof of Lemma [Lemma 9](#estimatex1x0one){reference-type="ref" reference="estimatex1x0one"}, so we only point out the necessary adjustments. We only discuss part $(ii)$. We now apply [@fabbri2017 Proposition 1.164] and use Assumption [Assumption 3](#assumptionAw){reference-type="ref" reference="assumptionAw"} to get $$\begin{split} &\| X_1(s) - X_0(s) \|_{-1}^2\leq \| x_1 - x_0 \|_{-1}^2\\ &\quad+ 2 \int_t^s (c_0\| X_1(r) - X_0(r) \|_{-1}^2+\langle b(X_1(r),a_1(r)) - b(X_0(r),a_0(r)) , B(X_1(r) - X_0(r)) \rangle_H) \mathrm{d}r\\ &\quad + \int_t^s \| B^{\frac{1}{2}}(\sigma(X_1(r),a_1(r)) - \sigma(X_0(r),a_0(r))) \|_{L_2(\Xi,H)}^2 \mathrm{d}r\\ &\quad + 2 \int_t^s \langle B(X_1(r)-X_0(r)), (\sigma(X_1(r),a_1(r)) - \sigma(X_0(r),a_0(r))) \mathrm{d}W(r) \rangle_H.
\end{split}$$ The rest of the proof proceeds as in the proof of Lemma [Lemma 9](#estimatex1x0one){reference-type="ref" reference="estimatex1x0one"}; however, we now use Assumption [Assumption 42](#bsigmalipschitzw){reference-type="ref" reference="bsigmalipschitzw"}. ◻ Assumption [Assumption 10](#lglipschitzfirstvariable){reference-type="ref" reference="lglipschitzfirstvariable"} is now replaced by the following assumption. **Assumption 44**. 1. *The function $l$ is continuous on $H\times\Lambda_0$ and there exists a constant $C>0$ such that $$|l(x,a) - l(x^{\prime},a)| \leq C \|x-x^{\prime}\|_{-1}$$ for all $x,x^{\prime}\in H$ and $a\in \Lambda_0$.* 2. *There exists a constant $C>0$ such that $$|l(x,a)| \leq C (1+\|x\|_H+\|a\|_{\Lambda}^2)$$ for all $x\in H$ and $a\in \Lambda_0$.* 3. *There exists a constant $C>0$ such that $$|g(x) - g(x^{\prime})| \leq C \|x-x^{\prime}\|_{-1}$$ for all $x,x^{\prime}\in H$.* 4. *If $\Lambda_0$ is unbounded, we assume that $l$ and $g$ are bounded from below.* The value function is now Lipschitz continuous in $x$ with respect to the $\|\cdot\|_{-1}$ norm: **Theorem 45**. *Let Assumptions [Assumption 3](#assumptionAw){reference-type="ref" reference="assumptionAw"}, [Assumption 41](#bsigmalipschitzfirstvariablew){reference-type="ref" reference="bsigmalipschitzfirstvariablew"} and [Assumption 44](#lglipschitzfirstvariablew){reference-type="ref" reference="lglipschitzfirstvariablew"} be satisfied. Then there are constants $C_1,C_2$ independent of $T$, such that $$|V(t,x) - V(t,y)| \leq C_1 \mathrm{e}^{C_2(T-t)} \|x-y\|_{-1}$$ for all $t\in [0,T]$ and $x,y\in H$.* The proof of Theorem [Theorem 45](#th:V_lipw){reference-type="ref" reference="th:V_lipw"} is the same as the proof of Theorem [Theorem 11](#th:V_lip){reference-type="ref" reference="th:V_lip"} if one uses [\[ew:-1weak\]](#ew:-1weak){reference-type="eqref" reference="ew:-1weak"}, so it is omitted.
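Although the proof is omitted, the chain of estimates behind Theorem [Theorem 45](#th:V_lipw){reference-type="ref" reference="th:V_lipw"} can be recorded in one line (a sketch; here $X_x,X_y$ denote the state trajectories started at time $t$ from $x$ and $y$ under the same control, and the constants are renamed between inequalities): $$|V(t,x) - V(t,y)| \leq \sup_{a(\cdot)\in\mathcal{U}_t} |J(t,x;a(\cdot)) - J(t,y;a(\cdot))| \leq C\, \mathbb{E} \left [ \sup_{s\in[t,T]} \|X_x(s) - X_y(s)\|_{-1} \right ] \leq C_1 \mathrm{e}^{C_2(T-t)} \|x-y\|_{-1},$$ where the second inequality uses the $\|\cdot\|_{-1}$-Lipschitz continuity of $l$ and $g$ from Assumption [Assumption 44](#lglipschitzfirstvariablew){reference-type="ref" reference="lglipschitzfirstvariablew"}, and the third follows from Lemma [Lemma 43](#estimatex1x0onew){reference-type="ref" reference="estimatex1x0onew"}(i) together with the Cauchy--Schwarz inequality.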
## Semiconcavity in $H_{-1}$ {#semiconcavityw} We now show that $V$ extends to a function which is semiconcave in $H_{-1}$. We need to strengthen Assumptions [Assumption 12](#bsigmafrechetfirstvariable){reference-type="ref" reference="bsigmafrechetfirstvariable"} and [Assumption 13](#bsigmafrechet){reference-type="ref" reference="bsigmafrechet"}. We point out that Assumption [Assumption 47](#bsigmafrechetw){reference-type="ref" reference="bsigmafrechetw"} is not needed for the semiconcavity result of this subsection, which relies only on part (1) of Lemma [Lemma 48](#estimatexlambdasamecontrolw){reference-type="ref" reference="estimatexlambdasamecontrolw"} below; part (2), which requires it, is used later for semiconvexity. **Assumption 46**. 1. *Let $b:H\times\Lambda \to H$ be Fréchet differentiable in the first variable and let there be a constant $C>0$ such that $$\| (D_xb(x,a) - D_xb(x^{\prime},a))(x-x^{\prime}) \|_{-1} \leq C \|x-x^{\prime}\|_{-1}^2$$ for all $x,x^{\prime}\in H$ and $a\in \Lambda_0$.* 2. *Let $\sigma:H\times\Lambda \to L_2(\Xi,H)$ be Fréchet differentiable in the first variable and let there be a constant $C>0$ such that $$\| (D_x\sigma(x,a) - D_x\sigma(x^{\prime},a))(x-x^{\prime}) \|_{L_2(\Xi,H_{-1})} \leq C \|x-x^{\prime}\|_{-1}^2$$ for all $x,x^{\prime}\in H$ and $a\in \Lambda_0$.* **Assumption 47**. 1. *Let $b:H\times\Lambda\to H$ be Fréchet differentiable and let there be a constant $C>0$ such that $$\begin{split} &\| (D_x b(x,a) - D_x b(x^{\prime},a^{\prime}))(x-x^{\prime}) + ( D_a b(x,a) - D_a b(x^{\prime},a^{\prime}))(a-a^{\prime})\|_{-1}\\ &\leq C \left ( \|x-x^{\prime}\|^2_{-1} + \| a-a^{\prime} \|_{\Lambda}^2 \right ) \end{split}$$ for all $x,x^{\prime}\in H$, $a,a^{\prime}\in \Lambda_0$.* 2.
*Let $\sigma:H\to L_2(\Xi,H)$ be Fréchet differentiable and let there be a constant $C>0$ such that $$\| (D\sigma(x) - D\sigma(x^{\prime}))(x-x^{\prime})\|_{L_2(\Xi,H_{-1})} \leq C \|x-x^{\prime}\|_{-1}^2$$ for all $x,x^{\prime}\in H$.* For given $x_0,x_1\in H$ and $a_0(\cdot), a_1(\cdot)\in \mathcal{U}_t$, let $X_0$ and $X_1$ be given by [\[x1x0differentcontrols\]](#x1x0differentcontrols){reference-type="eqref" reference="x1x0differentcontrols"}, and for $\lambda\in [0,1]$ we define $a_{\lambda}(\cdot), x_{\lambda}, X_{\lambda}, X^{\lambda}$ as in [\[lambdadefinition\]](#lambdadefinition){reference-type="eqref" reference="lambdadefinition"}. We now have the following version of Lemma [Lemma 14](#estimatexlambdasamecontrol){reference-type="ref" reference="estimatexlambdasamecontrol"}: **Lemma 48**. 1. *Let Assumptions [Assumption 3](#assumptionAw){reference-type="ref" reference="assumptionAw"}, [Assumption 41](#bsigmalipschitzfirstvariablew){reference-type="ref" reference="bsigmalipschitzfirstvariablew"} and [Assumption 46](#bsigmafrechetfirstvariablew){reference-type="ref" reference="bsigmafrechetfirstvariablew"} be satisfied. Let $a_0(\cdot) = a_1(\cdot) = a(\cdot)\in \mathcal{U}_t$. There are constants $C_1,C_2\geq 0$ independent of $T$ where $C_2$ depends only on the constant $C_2$ in Lemma [Lemma 43](#estimatex1x0onew){reference-type="ref" reference="estimatex1x0onew"} and $c_0$ in Assumption [Assumption 3](#assumptionAw){reference-type="ref" reference="assumptionAw"}, such that $$\mathbb{E} \left [ \sup_{s\in [t,T]} \| X^{\lambda}(s) - X_{\lambda}(s) \|_{-1} \right ] \leq C_1 \mathrm{e}^{C_2(T-t)} \lambda (1-\lambda) \|x_1-x_0\|_{-1}^2$$ for all $\lambda \in [0,1]$, $x_0,x_1\in H$ and $a(\cdot)\in \mathcal{U}_t$.* 2. 
*Let Assumptions [Assumption 3](#assumptionAw){reference-type="ref" reference="assumptionAw"}, [Assumption 42](#bsigmalipschitzw){reference-type="ref" reference="bsigmalipschitzw"} and [Assumption 47](#bsigmafrechetw){reference-type="ref" reference="bsigmafrechetw"} be satisfied. There are constants $C_1,C_2\geq 0$, where $C_2$ depends only on the constant $C_2$ in Lemma [Lemma 43](#estimatex1x0onew){reference-type="ref" reference="estimatex1x0onew"} and $c_0$ in Assumption [Assumption 3](#assumptionAw){reference-type="ref" reference="assumptionAw"}, such that $$\mathbb{E} \left [ \sup_{s\in [t,T]} \| X^{\lambda}(s) - X_{\lambda}(s) \|_{-1} \right ] \leq C_1 \mathrm{e}^{C_2(T-t)} \lambda (1-\lambda) \left ( \|x_1-x_0\|_{-1}^2 + \mathbb{E} \left [ \int_t^T \|a_0(s) - a_1(s) \|_{\Lambda}^2 \mathrm{d}s \right ] \right )$$ for all $\lambda \in [0,1]$, $x_0,x_1\in H$ and $a_0(\cdot),a_1(\cdot) \in \mathcal{U}_t$.* *Proof.* The proof is similar to the proof of Lemma [Lemma 14](#estimatexlambdasamecontrol){reference-type="ref" reference="estimatexlambdasamecontrol"} with minor modifications.
Repeating its steps and using Assumption [Assumption 47](#bsigmafrechetw){reference-type="ref" reference="bsigmafrechetw"}, instead of [\[estimateb\]](#estimateb){reference-type="eqref" reference="estimateb"} we now have $$\begin{split} &\| \lambda b(X_1(s),a_1(s)) +(1-\lambda) b(X_0(s),a_0(s)) - b(X^{\lambda}(s),a_{\lambda}(s)) \|_{-1}\\ &\leq C \lambda (1-\lambda) \left ( \|X_1(s) - X_0(s)\|_{-1}^2 + \|a_1(s) -a_0(s)\|_{\Lambda}^2 \right ) \end{split}$$ and in place of [\[estimatesigma\]](#estimatesigma){reference-type="eqref" reference="estimatesigma"}, $$\| \lambda \sigma(X_1(s)) +(1-\lambda) \sigma(X_0(s)) - \sigma(X^{\lambda}(s)) \|_{L_2(\Xi,H_{-1})} \leq C \lambda (1-\lambda) \|X_1(s) - X_0(s)\|_{-1}^2.$$ Then, applying [@fabbri2017 Proposition 1.164] and Assumption [Assumption 3](#assumptionAw){reference-type="ref" reference="assumptionAw"}, we obtain in place of [\[itosformula4\]](#itosformula4){reference-type="eqref" reference="itosformula4"} $$\label{itosformula4w} \begin{split} &\| X_{\lambda}(s) - X^{\lambda}(s) \|_{-1}^2\leq 2c_0 \int_t^s \| X_{\lambda}(r) - X^{\lambda}(r) \|_{-1}^2 \mathrm{d}r\\ &+ 2 \int_t^s \langle \lambda b(X_1(r),a_1(r)) +(1-\lambda) b(X_0(r),a_0(r)) - b(X^{\lambda}(r),a_\lambda(r)) , X_{\lambda}(r) - X^{\lambda}(r) \rangle_{-1} \mathrm{d}r\\ &\quad + \int_t^s \| ( \lambda \sigma(X_1(r)) +(1-\lambda) \sigma(X_0(r)) - \sigma(X^{\lambda}(r))) \|_{L_2(\Xi,H_{-1})}^2 \mathrm{d}r\\ &\quad + 2 \int_t^s \left \langle X_{\lambda}(r) - X^{\lambda}(r), ( \lambda \sigma(X_1(r)) +(1-\lambda) \sigma(X_0(r)) - \sigma(X^{\lambda}(r)) ) \mathrm{d}W(r)\right \rangle_{-1}. 
\end{split}$$ Now $$\left(2c_0 \int_t^s \| X_{\lambda}(r) - X^{\lambda}(r) \|_{-1}^2 \mathrm{d}r\right)^{\frac{1}{2}}\leq \frac{1}{4}\sup_{r\in [t,s]} \| X^{\lambda}(r) - X_{\lambda}(r) \|_{-1} +2 c_0\int_t^s \sup_{\tau\in [t,r]}\| X_{\lambda}(\tau) - X^{\lambda}(\tau) \|_{-1}\mathrm{d}r$$ and $$\begin{split} & \left|2 \int_t^s \langle \lambda b(X_1(r),a_1(r)) +(1-\lambda) b(X_0(r),a_0(r)) - b(X^{\lambda}(r),a_\lambda(r)) , X_{\lambda}(r) - X^{\lambda}(r) \rangle_{-1} \mathrm{d}r\right|^{\frac{1}{2}}\\ & \leq \left(2\int_t^s \| \lambda b(X_1(r),a_1(r)) +(1-\lambda) b(X_0(r),a_0(r)) - b(X^{\lambda}(r),a_{\lambda}(r)) \|_{-1} \| X_{\lambda}(r) - X^{\lambda}(r) \|_{-1}\mathrm{d}r\right)^{\frac{1}{2}}\\ &\leq \frac14 \sup_{r\in [t,s]} \| X_{\lambda}(r) - X^{\lambda}(r) \|_{-1} + C\int_t^s \| \lambda b(X_1(r),a_1(r)) +(1-\lambda) b(X_0(r),a_0(r)) - b(X^{\lambda}(r),a_{\lambda}(r)) \|_{-1} \mathrm{d}r \\ & \leq \frac14 \sup_{r\in [t,s]} \| X_{\lambda}(r) - X^{\lambda}(r) \|_{-1} +C\lambda (1-\lambda)\int_t^T \left ( \|X_1(r) - X_0(r)\|_{-1}^2 + \|a_1(r) -a_0(r)\|_{\Lambda}^2 \right ) \mathrm{d}r.
\end{split}$$ Furthermore, by the Burkholder--Davis--Gundy inequality, $$\begin{split} &\mathbb{E} \bigg [ \sup_{s\in [t,T]} \bigg | \int_t^s \Big \langle X_{\lambda}(r) - X^{\lambda}(r), (\lambda \sigma(X_1(r)) +(1-\lambda) \sigma(X_0(r)) - \sigma(X^{\lambda}(r))) \mathrm{d}W(r) \Big \rangle_{-1} \bigg |^{\frac12} \bigg ]\\ &\leq \mathbb{E} \bigg [ \bigg ( \int_t^T \| X_{\lambda}(r) - X^{\lambda}(r) \|_{-1}^2 \| \lambda \sigma(X_1(r)) +(1-\lambda) \sigma(X_0(r)) - \sigma(X^{\lambda}(r)) \|_{L_2(\Xi,H_{-1})}^2 \mathrm{d}r \bigg )^{\frac14} \bigg ]\\ &\leq \frac14 \mathbb{E} \left [ \sup_{r\in [t,s]} \| X_{\lambda}(r) - X^{\lambda}(r) \|_{-1} \right ] +C \lambda (1-\lambda)\mathbb{E} \left [ \left ( \int_t^T \|X_1(r) - X_0(r)\|_{-1}^4\mathrm{d}r \right )^{\frac12} \right ] \\ & \leq \frac14 \mathbb{E} \left [ \sup_{r\in [t,s]} \| X_{\lambda}(r) - X^{\lambda}(r) \|_{-1} \right ] +C \lambda (1-\lambda)(T-t)\mathbb{E} \left [\sup_{s\in [t,T]}\|X_1(s) - X_0(s)\|_{-1}^2\right ]. \end{split}$$ Therefore, if we take the absolute value of the right hand side of inequality [\[itosformula4w\]](#itosformula4w){reference-type="eqref" reference="itosformula4w"}, then the square root of both sides, and the expectation of both sides, we obtain $$\begin{split} &\mathbb{E} \left [ \sup_{r\in [t,s]} \| X_{\lambda}(r) - X^{\lambda}(r) \|_{-1} \right ]\leq 8c_0 \int_t^s \mathbb{E}\left[\sup_{\tau\in [t,r]}\| X_{\lambda}(\tau) - X^{\lambda}(\tau) \|_{-1}\right] \mathrm{d}r\\ &\quad+ C\lambda (1-\lambda)\mathbb{E}\left[(T-t)\sup_{u\in [t,T]}\|X_1(u) - X_0(u)\|_{-1}^2+\int_t^T \|a_1(r) -a_0(r)\|_{\Lambda}^2 \mathrm{d}r\right]. \end{split}$$ We now apply Lemma [Lemma 43](#estimatex1x0onew){reference-type="ref" reference="estimatex1x0onew"}(ii) and use Grönwall's inequality to conclude the proof. ◻ Finally, we strengthen Assumption [Assumption 15](#lgsemiconcave){reference-type="ref" reference="lgsemiconcave"} to the following one. **Assumption 49**. 1.
*Let $l(\cdot,a)$ be semiconcave in $H_{-1}$ uniformly in $a\in \Lambda_0$.* 2. *Let $g$ be semiconcave in $H_{-1}$.* **Theorem 50**. *Let Assumptions [Assumption 3](#assumptionAw){reference-type="ref" reference="assumptionAw"}, [Assumption 41](#bsigmalipschitzfirstvariablew){reference-type="ref" reference="bsigmalipschitzfirstvariablew"}, [Assumption 44](#lglipschitzfirstvariablew){reference-type="ref" reference="lglipschitzfirstvariablew"}, [Assumption 46](#bsigmafrechetfirstvariablew){reference-type="ref" reference="bsigmafrechetfirstvariablew"} and [Assumption 49](#lgsemiconcavew){reference-type="ref" reference="lgsemiconcavew"} be satisfied. Then, for every $t\in[0,T]$, the function $V(t,\cdot)$ is semiconcave in $H_{-1}$ with semiconcavity constant $C_1 \mathrm{e}^{C_2(T-t)}$ for some $C_1,C_2\geq 0$ independent of $t,T$.* *Proof.* The proof repeats the steps of the proof of Theorem [Theorem 16](#th:semiconc){reference-type="ref" reference="th:semiconc"}, now using the new assumptions, and Lemma [Lemma 43](#estimatex1x0onew){reference-type="ref" reference="estimatex1x0onew"}(i) and Lemma [Lemma 48](#estimatexlambdasamecontrolw){reference-type="ref" reference="estimatexlambdasamecontrolw"}(i) instead of Lemma [Lemma 9](#estimatex1x0one){reference-type="ref" reference="estimatex1x0one"}(i) and Lemma [Lemma 14](#estimatexlambdasamecontrol){reference-type="ref" reference="estimatexlambdasamecontrol"}(i). ◻ ## Semiconvexity {#semiconvexityw} In this subsection, we show some cases when the value function is semiconvex. Since the classical Nemytskii operator does not satisfy Assumption [Assumption 41](#bsigmalipschitzfirstvariablew){reference-type="ref" reference="bsigmalipschitzfirstvariablew"}, we will only consider versions of Cases 1 and 2 from Section [3.3](#semiconvexity){reference-type="ref" reference="semiconvexity"}. 
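Throughout, semiconcavity and semiconvexity in $H_{-1}$ are understood in the standard sense, which we recall for the reader's convenience (the normalization of the constant may differ between references): $u:H\to\mathbb{R}$ is semiconcave in $H_{-1}$ with constant $C$ if $$\lambda u(x) + (1-\lambda) u(y) - u(\lambda x + (1-\lambda) y) \leq C \lambda(1-\lambda) \|x-y\|_{-1}^2$$ for all $x,y\in H$ and $\lambda\in[0,1]$, and $u$ is semiconvex in $H_{-1}$ if $-u$ is semiconcave in $H_{-1}$. This is the form in which the estimates of Lemma [Lemma 48](#estimatexlambdasamecontrolw){reference-type="ref" reference="estimatexlambdasamecontrolw"} enter the proofs.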
### Uniform Convexity of Running Cost in the Control Variable Assumption [Assumption 17](#lgsemiconvex){reference-type="ref" reference="lgsemiconvex"} is now changed to Assumption [Assumption 51](#lgsemiconvexw){reference-type="ref" reference="lgsemiconvexw"} below. **Assumption 51**. 1. *There exist constants $C,\nu \geq 0$ such that the map $$H\times\Lambda \ni (x,a)\mapsto l(x,a) + C\|x\|_{-1}^2 - \nu \|a\|^2_{\Lambda}$$ is convex.* 2. *Let $g:H\to\mathbb{R}$ be semiconvex in $H_{-1}$.* **Theorem 52**. *Let Assumptions [Assumption 3](#assumptionAw){reference-type="ref" reference="assumptionAw"}, [Assumption 42](#bsigmalipschitzw){reference-type="ref" reference="bsigmalipschitzw"}, [Assumption 44](#lglipschitzfirstvariablew){reference-type="ref" reference="lglipschitzfirstvariablew"}, [Assumption 47](#bsigmafrechetw){reference-type="ref" reference="bsigmafrechetw"} and [Assumption 51](#lgsemiconvexw){reference-type="ref" reference="lgsemiconvexw"} be satisfied. Then there exists a constant $\nu_0$ depending only on the data of the problem such that if $\nu\geq \nu_0$, then $V(t,\cdot)$ is semiconvex in $H_{-1}$ with constant $C_1\mathrm{e}^{C_2T}$ for some constants $C_1,C_2\geq 0$.* *Proof.* The proof is analogous to the proof of Theorem [Theorem 18](#convexity1){reference-type="ref" reference="convexity1"} with obvious changes due to the new assumptions, and using Lemma [Lemma 43](#estimatex1x0onew){reference-type="ref" reference="estimatex1x0onew"}(ii) and Lemma [Lemma 48](#estimatexlambdasamecontrolw){reference-type="ref" reference="estimatexlambdasamecontrolw"}(ii) instead of Lemma [Lemma 9](#estimatex1x0one){reference-type="ref" reference="estimatex1x0one"}(ii) and Lemma [Lemma 14](#estimatexlambdasamecontrol){reference-type="ref" reference="estimatexlambdasamecontrol"}(ii). ◻ ### Linear State Equation and Convex Costs In this case we need to enhance Assumption [Assumption 19](#blinearlgconvex){reference-type="ref" reference="blinearlgconvex"} to the following one. **Assumption 53**. 1.
*The functions $b,\sigma$ extend to linear functions $b:H_{-1}\times\Lambda\to H$ and $\sigma: H_{-1} \times \Lambda \to L_2(\Xi,H)$.* 2. *The functions $l:H\times\Lambda_0\to\mathbb{R}$ and $g:H\to\mathbb{R}$ are convex.* The proof of Theorem [Theorem 54](#convexity2w){reference-type="ref" reference="convexity2w"} is the same as that of Theorem [Theorem 20](#convexity2){reference-type="ref" reference="convexity2"}. **Theorem 54**. *Let Assumptions [Assumption 3](#assumptionAw){reference-type="ref" reference="assumptionAw"}, [Assumption 44](#lglipschitzfirstvariablew){reference-type="ref" reference="lglipschitzfirstvariablew"} and [Assumption 53](#blinearlgconvexw){reference-type="ref" reference="blinearlgconvexw"} be satisfied. Then, for every $t\in [0,T]$, the function $V(t,\cdot)$ is convex.* ## $C^{1,1}$ Regularity of the Value Function {#c11-regularity-of-the-value-function} Similarly to Section [3.4](#sec:C11reg){reference-type="ref" reference="sec:C11reg"}, since $V(t,\cdot), 0\leq t\leq T$, is Lipschitz in the $\|\cdot\|_{-1}$ norm, if it is semiconvex in $H_{-1}$ and semiconcave in $H_{-1}$, $V$ can be extended to a function (still denoted by $V$) such that $V(t,\cdot)\in C^{1,1}(H_{-1})$. ## Optimal Synthesis {#sec:optsyntw} In the weak $B$-condition case, the following equivalent of the uniqueness result Proposition [Proposition 35](#prop:V_viscosity_sol){reference-type="ref" reference="prop:V_viscosity_sol"} is in fact easier to prove than in the strong $B$-condition case. **Proposition 55**.
*Let Assumptions [Assumption 3](#assumptionAw){reference-type="ref" reference="assumptionAw"}, [Assumption 31](#hp:sigma_independent_control){reference-type="ref" reference="hp:sigma_independent_control"}, [Assumption 32](#coercivity){reference-type="ref" reference="coercivity"}, [Assumption 41](#bsigmalipschitzfirstvariablew){reference-type="ref" reference="bsigmalipschitzfirstvariablew"}, [Assumption 44](#lglipschitzfirstvariablew){reference-type="ref" reference="lglipschitzfirstvariablew"} hold. Then $V$ is the unique viscosity solution of [\[HJBsemilinear\]](#HJBsemilinear){reference-type="eqref" reference="HJBsemilinear"} in the set $$\begin{split} S=\Big\{&u:[0,T]\times H\to\mathbb{R}: |u(t,x)|\leq C(1+\|x\|_H^k)\quad\textit{for some}\,\,k\geq 0, \\ &|u(t,x)-u(t,y)|\leq C\|x-y\|_H\quad\textit{for all}\,\,t\in(0,T],x,y\in H,\,\,\textit{and} \\ &\lim_{t\to T}|u(t,x)-g(x)|=0 \quad\textit{uniformly on bounded subsets of}\,\,H\Big\}. \end{split}$$ Moreover, $V$ is uniformly continuous in the $|\cdot|\times\|\cdot\|_{-1}$ norm on every set $[0,T]\times B_R$ for every $R>0$.* *Proof.* The proof repeats the steps of the proof of Proposition [Proposition 35](#prop:V_viscosity_sol){reference-type="ref" reference="prop:V_viscosity_sol"} with a few adjustments. Using [@fabbri2017 Theorem 3.66], the functions $V^m$ are the unique viscosity solutions of [\[eq:hjb_V\_m\]](#eq:hjb_V_m){reference-type="eqref" reference="eq:hjb_V_m"} in $S$ and it follows from [@fabbri2017 Proposition 3.61] that $V^m$ is uniformly continuous in the $|\cdot|\times\|\cdot\|_{-1}$ norm on every set $[0,T]\times B_R$ for every $R>0$. We then show that $V^m=V^{\bar m}$ for $m\geq \bar m$ for some $\bar m$ as in Step 2, and then show that $V^m$ converge to $V$ as in Step 3.
The only difference is that we now use Assumption [Assumption 44](#lglipschitzfirstvariablew){reference-type="ref" reference="lglipschitzfirstvariablew"} and Lemma [Lemma 43](#estimatex1x0onew){reference-type="ref" reference="estimatex1x0onew"}. We conclude as in Step 4, noting that uniqueness now follows from [@fabbri2017 Theorem 3.50] (or from the existence and uniqueness theorem [@fabbri2017 Theorem 3.66]). ◻ We strengthen Assumption [Assumption 36](#hp:V_C_11){reference-type="ref" reference="hp:V_C_11"}. **Assumption 56**. *Let for every $0\leq t \leq T$, $V(t,\cdot)\in C^{1,1}(H)$ and let there exist $C\geq 0$ such that $\|DV(t,x)\|_H\leq C, \|DV(t,x)-DV(t,x')\|_H\leq C\|x-x'\|_{-1}$ for all $0\leq t \leq T,x,x'\in H$.* Lemma [Lemma 37](#lemma:DV_locally_uniform_continuous){reference-type="ref" reference="lemma:DV_locally_uniform_continuous"} now gives us that $DV$ is uniformly continuous on $[0,T]\times B_R$ for every $R>0$. We change Assumption [Assumption 38](#infimumattained){reference-type="ref" reference="infimumattained"} to **Assumption 57**. 1. *There exists a selection function $$\gamma : H \times H \to \Lambda_0,\quad (x,p) \mapsto \gamma(x,p) \in \Gamma(x,p),$$ which is Lipschitz continuous in both variables with respect to the norm $\|\cdot\|_{-1}\times\|\cdot\|_H$.* 2. *For every $R>0$ there is a modulus $\omega_R$ such that $$|l(x,a)-l(x,a')|\leq \omega_R(\|a-a'\|_{\Lambda})$$ for all $x\in H$ and $a,a'\in\Lambda_0$ such that $\|x\|_H$, $\|a\|_{\Lambda}$, $\|a'\|_{\Lambda}\leq R$.* **Theorem 58**.
*Let Assumptions [Assumption 3](#assumptionAw){reference-type="ref" reference="assumptionAw"}, [Assumption 42](#bsigmalipschitzw){reference-type="ref" reference="bsigmalipschitzw"}, [Assumption 44](#lglipschitzfirstvariablew){reference-type="ref" reference="lglipschitzfirstvariablew"}, [Assumption 31](#hp:sigma_independent_control){reference-type="ref" reference="hp:sigma_independent_control"}, [Assumption 32](#coercivity){reference-type="ref" reference="coercivity"}, [Assumption 56](#hp:V_C_11w){reference-type="ref" reference="hp:V_C_11w"}, [Assumption 57](#infimumattainedw){reference-type="ref" reference="infimumattainedw"} hold. Then the pair $(a^{\ast}(s),X^{\ast}(s))$, where $$\begin{cases} a^{\ast}(s) = \gamma(X^{\ast}(s),DV(s,X^{\ast}(s)))\\ X^{\ast}(s) = X(s,x;a^{\ast}(\cdot)) \end{cases}$$ is an optimal couple for the optimal control problem [\[costfunctional\]](#costfunctional){reference-type="eqref" reference="costfunctional"}--[\[state\]](#state){reference-type="eqref" reference="state"} and the control problem has an optimal feedback control.* *Proof.* We just need to update the proof of Theorem [Theorem 39](#th:optsynth){reference-type="ref" reference="th:optsynth"}. We notice that by our assumptions, the functions $\tilde b(t,x)= b(x,\gamma(x,DV(t,x)))$ and $\tilde l(t,x)=l(x,\gamma(x,DV(t,x)))$ are uniformly continuous on bounded subsets of $[0,T]\times H$, and moreover $$\|\tilde b(t,x)-\tilde b(t,y)\|_H\leq C\|x-y\|_{-1}$$ for all $t\in[0,T]$, $x,y\in H$, and for every $R>0$ there is a modulus $\sigma_R$ such that $$|\tilde l(t,x)-\tilde l(t,y)|\leq \sigma_R(\|x-y\|_{-1})$$ for all $t\in[0,T]$ and $x,y\in H$ such that $\|x\|_H,\|y\|_H\leq R$. Thus, we can apply [@fabbri2017 Theorem 3.66] to claim that [\[reducedHJB\]](#reducedHJB){reference-type="eqref" reference="reducedHJB"} has a unique viscosity solution in $S$ given by [\[eq:feynman_kac\]](#eq:feynman_kac){reference-type="eqref" reference="eq:feynman_kac"}.
However, since $V$ is also a viscosity solution of [\[reducedHJB\]](#reducedHJB){reference-type="eqref" reference="reducedHJB"}, using uniqueness we obtain [\[eq:Vrepresentation\]](#eq:Vrepresentation){reference-type="eqref" reference="eq:Vrepresentation"}, which gives the optimality of the feedback control $a^{\ast}(\cdot)$. ◻ V. Barbu and G. Da Prato, *Hamilton--Jacobi equations in Hilbert spaces*, Res. Notes in Math. 86, Pitman, Boston, 1983. M. Bardi and I. Capuzzo-Dolcetta, *Optimal control and viscosity solutions of Hamilton--Jacobi--Bellman equations*, Systems Control Found. Appl. 12, Birkhäuser, Boston, 1997. R. Bellman, *Dynamic programming*, Princeton University Press, Princeton, 1957. A. Bensoussan and P. Yam, *Control problem on space of random variables and master equation*, ESAIM Control Optim. Calc. Var. 25 (2019), Paper No. 10, 36 pp. A. Bensoussan, P. Graber and P. Yam, *Control on Hilbert spaces and application to some mean field type control problems*, preprint, https://arxiv.org/abs/2005.10770 (2020). P. Cannarsa and H. Frankowska, *Value function and optimality conditions for semilinear control problems*, Appl. Math. Optim. 26 (1992), no. 2, 139--169. P. Cannarsa and C. Sinestrari, *Semiconcave functions, Hamilton--Jacobi equations, and optimal control*, Progr. Nonlinear Differential Equations Appl. 58, Birkhäuser, Boston, 2004. L. Chen and Q. Lü, *Stochastic verification theorem for infinite dimensional stochastic control systems*, preprint, https://arxiv.org/abs/2209.09576 (2022). P.-L. Chow, *Stochastic partial differential equations*, Chapman & Hall/CRC Appl. Math. Nonlinear Sci. Ser., Chapman & Hall/CRC, Boca Raton, 2007. M. G. Crandall, H. Ishii and P.-L. Lions, *User's guide to viscosity solutions of second order partial differential equations*, Bull. Amer. Math. Soc. (N.S.) 27 (1992), no. 1, 1--67. M. G. Crandall and P.-L. Lions, *Viscosity solutions of Hamilton--Jacobi equations in infinite dimensions. IV.
Hamiltonians with unbounded linear terms*, J. Funct. Anal. 90 (1990), no. 2, 237--283. M. G. Crandall and P.-L. Lions, *Viscosity solutions of Hamilton--Jacobi equations in infinite dimensions. V. Unbounded linear terms and $B$-continuous solutions*, J. Funct. Anal. 97 (1991), no. 2, 417--465. G. Da Prato and J. Zabczyk, *Second order partial differential equations in Hilbert spaces*, London Math. Soc. Lecture Note Ser. 293, Cambridge University Press, Cambridge, 2002. G. Da Prato and J. Zabczyk, *Stochastic equations in infinite dimensions*, 2nd ed., Encyclopedia Math. Appl. 152, Cambridge University Press, Cambridge, 2014. F. de Feo, *Stochastic optimal control problems with delays in the state and in the control via viscosity solutions and an economical application*, arXiv preprint arXiv:2308.14506 (2023). F. de Feo, S. Federico and A. Święch, *Optimal control of stochastic delay differential equations and applications to path-dependent financial and economic models*, preprint, https://arxiv.org/abs/2302.08809 (2023). F. de Feo and A. Święch, *Optimal control of stochastic delay differential equations: Optimal feedback controls*, arXiv preprint arXiv:2309.05029 (2023). G. C. Dong, *Nonlinear partial differential equations of second order*, Transl. Math. Monogr. 95, American Mathematical Society, Providence, 1991. G. Fabbri, F. Gozzi and A. Święch, *Verification theorem and construction of $\varepsilon$-optimal controls for control of abstract evolution equations*, J. Convex Anal. 17 (2010), no. 2, 611--642. G. Fabbri, F. Gozzi and A. Święch, *Stochastic optimal control in infinite dimension: Dynamic programming and HJB equations*, with a contribution by Marco Fuhrman and Gianmario Tessitore, Probab. Theory Stoch. Model. 82, Springer, Cham, 2017. S. Federico, B. Goldys and F. Gozzi, *HJB equations for the optimal control of differential equations with delays and state constraints, I: Regularity of viscosity solutions*, SIAM J. Control Optim. 48 (2010), no.
8, 4910--4937. S. Federico, B. Goldys and F. Gozzi, *HJB equations for the optimal control of differential equations with delays and state constraints, II: Verification and optimal feedbacks*, SIAM J. Control Optim. 49 (2011), no. 6, 2378--2414. S. Federico and F. Gozzi, *Verification theorems for stochastic optimal control problems in Hilbert spaces by means of a generalized Dynkin formula*, Ann. Appl. Probab. 28 (2018), no. 6, 3558--3599. S. Federico and E. Tacconi, *Dynamic programming for optimal control problems with delays in the control variable*, SIAM J. Control Optim. 52 (2014), no. 2, 1203--1236. W. H. Fleming and R. W. Rishel, *Deterministic and stochastic optimal control*, Applications of Mathematics 1, Springer, Berlin--New York, 1975. W. H. Fleming and H. M. Soner, *Controlled Markov processes and viscosity solutions*, Stoch. Model. Appl. Probab. 25, Springer, New York, 2006. W. Gangbo and A. Mészáros, *Global well‐posedness of master equations for deterministic displacement convex potential mean field games*, Comm. Pure Appl. Math. 75 (2022), no. 12, 2685--2801. D. Gomes and L. Nurbekyan, *On the minimizers of calculus of variations problems in Hilbert spaces*, Calc. Var. Partial Differential Equations 52 (2015), no. 1-2, 65--93. F. Gozzi, A. Święch and X. Y. Zhou, *A corrected proof of the stochastic verification theorem within the framework of viscosity solutions*, SIAM J. Control Optim. 43 (2005), no. 6, 2009--2019. F. Gozzi, A. Święch and X. Y. Zhou, *Erratum: "A corrected proof of the stochastic verification theorem within the framework of viscosity solutions"*, SIAM J. Control Optim. 48 (2010), no. 6, 4177--4179. H. Ishii, *On the equivalence of two notions of weak solutions, viscosity solutions and distribution solutions*, Funkcial. Ekvac. 38 (1995), no. 1, 101--120. H. Ishii and P.-L. Lions, *Viscosity solutions of fully nonlinear second-order elliptic partial differential equations*, J. Differential Equations 83 (1990), no. 1, 26--78. P. 
Kotelenez, *Comparison methods for a class of function valued stochastic partial differential equations*, Probab. Theory Related Fields 93 (1992), no. 1, 1--19. N. V. Krylov, *Controlled diffusion processes*, Applications of Mathematics 14, Springer, New York--Berlin, 1980. N. V. Krylov, *Sobolev and viscosity solutions for fully nonlinear elliptic and parabolic equations*, Math. Surveys Monogr. 233, American Mathematical Society, Providence, 2018. J.-M. Lasry and P.-L. Lions, *A remark on regularization in Hilbert spaces*, Israel J. Math. 55 (1986), 257--266. X. J. Li and J. M. Yong, *Optimal control theory for infinite-dimensional systems*, Systems Control Found. Appl., Birkhäuser, Boston, 1995. G. M. Lieberman, *Second order parabolic differential equations*, World Scientific Publishing Co., River Edge, 1996. P.-L. Lions, *Generalized solutions of Hamilton--Jacobi equations*, Res. Notes in Math. 69 Pitman, Boston--London, 1982. P.-L. Lions, *Optimal control of diffusion processes and Hamilton--Jacobi--Bellman equations. III. Regularity of the optimal cost function*, in Nonlinear Partial Differential Equations and Their Applications. Collège de France Seminar, Vol. V (Paris, 1981/1982), Res. Notes in Math. 93, H. Brézis and J.-L. Lions, eds., Pitman, Boston, 1983, 95--205. P.-L. Lions, *Viscosity solutions of fully nonlinear second-order equations and optimal stochastic control in infinite dimensions. I. The case of bounded stochastic evolutions*, Acta Math. 161 (1988), no. 3-4, 243--278. R. Manthey and T. Zausinger, *Stochastic evolution equations in $L^{2\nu}_\rho$*, Stoch. Stoch. Rep. 66 (1999), no. 1-2, 37--85. S. Mayorga and A. Święch, *Finite dimensional approximations of Hamilton--Jacobi--Bellman equations for stochastic particle systems with common noise*, SIAM J. Control Optim. 61 (2023), no. 2, 820--851. A. Milian, *Comparison theorems for stochastic evolution equations*, Stoch. Stoch. Rep. 72 (2002), no. 1-2, 79--108. D. Revuz and M. 
Yor, *Continuous martingales and Brownian motion*, Grundlehren Math. Wiss., 293, Springer-Verlag, Berlin, 1999. M. Rosestolato and A. Święch, *Partial regularity of viscosity solutions for a class of Kolmogorov equations arising from mathematical finance*, J. Differential Equations 262 (2017), no. 3, 1897--1930. W. Stannat and L. Wessels, *Necessary and Sufficient Conditions for Optimal Control of Semilinear Stochastic Partial Differential Equations*, preprint, https://arxiv.org/abs/2112.09639 (2021). A. Święch, *"Unbounded" second order partial differential equations in infinite-dimensional Hilbert spaces*, Comm. Partial Differential Equations 19 (1994), no. 11-12, 1999--2036. A. Święch and E. V. Teixeira, *Regularity for obstacle problems in infinite dimensional Hilbert spaces*, Adv. Math. 220 (2009), no. 3, 964--983. L. Wang, *On the regularity of fully nonlinear parabolic equations: I*, Comm. Pure Appl. Math. 45 (1992), 27--76. L. Wang, *On the regularity of fully nonlinear parabolic equations: II*, Comm. Pure Appl. Math. 45 (1992), 141--178. L. Wessels, *Optimal control of stochastic reaction-diffusion equations*, Doctoral Thesis, Technische Universität Berlin, Berlin, Germany, 2022. J. Yong and X. Y. Zhou, *Stochastic Controls: Hamiltonian Systems and HJB Equations*, Appl. Math. (N.Y.) 43, Springer, New York, 1999. [^1]: Dipartimento di Matematica, Politecnico di Milano, Milano, 20133, Italy; Email: filippo.defeo\@polimi.it [^2]: School of Mathematics, Georgia Institute of Technology, Atlanta, GA 30332, USA; Email: swiech\@math.gatech.edu [^3]: School of Mathematics, Georgia Institute of Technology, Atlanta, GA 30332, USA; Email: wessels\@gatech.edu [^4]: This partial $C^1$ and $C^{1,\alpha}$ regularity is with respect to the so-called "present" variable $x_0 \in \mathbb R^n$. [^5]: With the approaches via mild solutions or BSDEs, usually only Gateaux differentiability is obtained.
[^6]: Note that, in contrast to our situation, in [@DZ14], the coefficients of the state equation, depending on $\omega$, are uniformly bounded in $\omega$ (while in our case this is not true due to their unboundedness in the control). However, due to [\[eq:integr\]](#eq:integr){reference-type="eqref" reference="eq:integr"}, a straightforward modification of the arguments leads to the desired existence and uniqueness result.
--- abstract: | We introduce the concept of a Fock bundle, a smooth principal bundle over a surface equipped with a special kind of adjoint-valued 1-form, as a new tool for studying character varieties of surface groups. Fock bundles are similar to Higgs bundles, with the crucial difference that no complex structure is fixed on the underlying surface. Fock bundles are the gauge-theoretic realization of higher complex structures. We associate a canonical connection to a Fock bundle equipped with a compatible symmetric pairing and hermitian structure. The space of flat Fock bundles maps to the character variety of the split real form. Determining the hermitian structure such that this connection is flat gives a non-linear PDE similar to Hitchin's equation. We explicitly construct solutions for Fock bundles in the Fuchsian locus. Ellipticity of the relevant linear operator provides a map from a neighborhood of the Fuchsian locus in the space of higher complex structures modulo higher diffeomorphisms to a neighborhood of the Fuchsian locus in the Hitchin component. author: - Georgios Kydonakis, Charlie Reid, Alexander Thomas title: Fock bundles and Hitchin components --- # Introduction and results ## Context Let $S$ be a smooth closed orientable surface of genus at least 2. The moduli space of complex structures on $S$ modulo diffeomorphisms isotopic to the identity is famously known as the *Teichmüller space*. The Poincaré--Koebe uniformization theorem implies that the Teichmüller space also describes all marked hyperbolic structures on $S$. The holonomy representation of a hyperbolic structure leads to a discrete and faithful representation $\pi_1 S \to \mathrm{PSL}_2(\mathbb{R})$, thus allowing for an algebraic realization of the Teichmüller space of $S$ as a connected component of the character variety $\mathcal{X}(\pi_1 S,\mathrm{PSL}_2(\mathbb{R}))$. Generalizing the uniformization theorem to higher rank Lie groups is the main motivation of this work.
Representations of fundamental groups and their links to geometric structures are captured by the character variety. For a Lie group $G$, the $G$-*character variety* $$\mathcal{X}(\pi_1 S,G)=\mathrm{Hom}(\pi_1 S,G)/G$$ is defined as the space of isomorphism classes of completely reducible representations, where $G$ acts by conjugation. Higher Teichmüller theory concerns the study of connected components of character varieties for more general Lie groups than $\mathrm{PSL}_2(\mathbb{R})$ which share analogous properties with the classical Teichmüller space. The first step in this direction was taken by Hitchin in [@Hit92], where he found a connected component of $\mathcal{X}(\pi_1 S,\mathrm{PSL}_n(\mathbb{R}))$ (and more generally for any adjoint group of the split real form of a complex simple Lie group) which is contractible, and naturally contains a copy of Teichmüller space. Representations parametrized by these components, now most often called *Hitchin components*, were later shown to be discrete and faithful [@Labourie06; @FG]. The *mapping class group* $\mathrm{Mod}(S)$ of the surface $S$, that is, the group of all orientation-preserving diffeomorphisms of $S$ modulo the ones which are isotopic to the identity, acts naturally on the Teichmüller space by changing the marking. This action is properly discontinuous and the resulting quotient is the moduli space of Riemann surfaces with the same topological type as $S$. Similarly, $\mathrm{Mod}(S)$ acts on any character variety by precomposition. Fixing a complex structure on $S$, the nonabelian Hodge correspondence identifies $\mathcal{X}(\pi_1 S,G)$, for any reductive Lie group $G$, with the moduli space of polystable $G$-Higgs bundles. The nonabelian Hodge correspondence is very effective in providing complex analytic descriptions of character varieties $\mathcal{X}(\pi_1 S,G)$, their Hitchin components, and other Teichmüller space-like components, now called *positive* components.
However, a major drawback in this correspondence is that one has to fix a complex structure on $S$, and so $\mathrm{Mod}(S)$ does not act on the moduli of Higgs bundles. In [@Labourie], Labourie proposed a remedy which, if generally applicable, could provide a canonical way to associate a complex structure on $S$ to a given point in the Hitchin component. This analytic method involving harmonic maps and minimal surfaces has received considerable attention and has proven to be effective in several cases of higher Teichmüller spaces but only for groups of rank 2; we refer to [@Lab17] and [@CTT] for general accounts on split real simple Lie groups of rank 2 and Hermitian Lie groups of rank 2 respectively. Yet, it recently became clear that this approach via minimal surfaces towards establishing a canonical choice of complex structure on $S$ and thus obtaining a mapping class group equivariant parametrization of the associated higher Teichmüller spaces in terms of holomorphic data, fails for groups of rank greater than 2. In [@SaSm], Sagman and Smillie demonstrated the existence of numerous equivariant minimal surfaces for certain Hitchin representations $\rho: \pi_1(S) \to G$, when $G$ is a split real semisimple Lie group of rank at least 3, building on the breakthrough accomplished by Marković [@Mar], who applied his New Main Inequality to get an analogous statement in the case of $\mathrm{PSL}_2(\mathbb{R})^3$. A new approach towards realizing an equivariant parametrization of the Hitchin components was proposed by Vladimir Fock and the third author [@FT] in terms of newly introduced geometric structures called *higher complex structures*. These were originally defined using the punctual Hilbert scheme of the plane and a moduli space of higher complex structures was defined in [@FT] as a quotient by Hamiltonian diffeomorphisms of the cotangent bundle of the surface preserving the zero section $S\subset T^*S$ setwise. 
Such diffeomorphisms were called *higher diffeomorphisms* and the resulting quotient space, called the *geometric Hitchin space* and denoted by $\hat{\mathcal{T}}^n(S)$, recovers the classical Teichmüller space when $n=2$. Moreover, this space is a manifold of complex dimension equal to that of the Hitchin component for $G=\mathrm{PSL}_n(\mathbb{R})$ [@FT Theorem 2]. Its cotangent bundle can be parametrized by a set of tensors on the underlying surface, satisfying a certain compatibility condition called *$\mu$-holomorphicity* [@FT Theorem 3]. The main conjecture suggested in [@FT] was that the geometric Hitchin space is canonically diffeomorphic to the Hitchin component. In [@Nolte], Nolte introduced the notion of *harmonic* representatives for higher complex structures and studied in detail the group of higher diffeomorphisms. There, the geometric Hitchin space is denoted by $\mathcal{T}^n(S)$ and is called the degree-$n$ Fock--Thomas space. Moreover, a canonical diffeomorphism from $\mathcal{T}^3(S)$ to the $\mathrm{PSL}_3(\mathbb{R})$-Hitchin component was constructed. This construction is, in fact, $\mathrm{Mod}(S)$-equivariant. More generally, the space $\mathcal{T}^n(S)$ is diffeomorphic to a ball of complex dimension $(g-1)(n^2-1)$, meaning that it is abstractly diffeomorphic to the Hitchin component; see [@FT Theorem 2] and [@Nolte Corollary 1.7]. However, Nolte's method for obtaining this canonical $\mathrm{Mod}(S)$-equivariant diffeomorphism uses the affine spheres perspective on the $\mathrm{PSL}_3(\mathbb{R})$-Hitchin component by Labourie [@Lab07] and Loftin [@Loftin], and so the technique does not directly apply for higher rank.
In addition, central associated notions such as higher diffeomorphisms and the $\mu$-holomorphicity condition become more transparent. We introduce a new type of object $(P,\Phi,\sigma)$, which we call *$G$-Fock bundle*, consisting of a smooth principal $G$-bundle $P$ together with an adjoint-valued 1-form $\Phi$, called a *Fock field*, that satisfies certain conditions. In this triple, $\sigma$ is an involution on $P$ generalizing the notion of a symmetric pairing on a vector bundle. This notion is similar to a $G$-Higgs bundle, with the crucial difference that it does not involve fixing a priori any complex structure on the underlying surface $S$. Isomorphism classes of $\mathop{\mathrm{PSL}}_n(\mathbb{C})$-Fock bundles are equivalent to higher complex structures of order $n$. A $G$-Fock field $\Phi$ as above naturally induces a complex structure on $S$. Varying the Fock field also varies this complex structure. We then set out to establish a passage from $G$-Fock bundles to connections analogously to the nonabelian Hodge correspondence. Our theory associates to a Fock bundle $(P,\Phi,\sigma)$ equipped with a compatible hermitian structure $\rho$, a connection $\nabla=\Phi+d_A+\Phi^{*}$, where $\Phi^{*}=-\rho(\Phi)$ is the hermitian conjugate of $\Phi$ and $d_A$ is the unique unitary $\sigma$-invariant connection satisfying $d_A\Phi=0$. We conjecture that there exists a hermitian structure such that $\nabla$ is flat. The flatness equation for $\nabla$ is similar to Hitchin's self-duality equations over a Riemann surface [@Hit87] and, in fact, coincides with Hitchin's equation in the most trivial examples, the so-called Fuchsian locus (Fock bundles coming from uniformizing Higgs bundles). Moreover, we prove the conjecture in a neighborhood of the Fuchsian locus. For the family of flat $G$-Fock bundles, we show that the monodromy of the connection is always in the split real form of $G$.
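The curvature computation behind this flatness equation is short; the following sketch (not a verbatim excerpt from the later sections) records it, using the convention $[\alpha\wedge\beta]=\alpha\wedge\beta+\beta\wedge\alpha$ for $\mathfrak{g}$-valued 1-forms:

```latex
% Since d_A is unitary, applying \rho to d_A\Phi = 0 gives
% d_A\Phi^{*} = -\rho(d_A\Phi) = 0, and \Phi^{*}\wedge\Phi^{*} = 0 follows
% from \Phi\wedge\Phi = 0 by taking hermitian adjoints.  Hence
\begin{aligned}
F(\nabla) &= F(A) + d_A\Phi + d_A\Phi^{*}
             + \Phi\wedge\Phi + \Phi^{*}\wedge\Phi^{*}
             + [\Phi\wedge\Phi^{*}] \\
          &= F(A) + [\Phi\wedge\Phi^{*}],
\end{aligned}
% so flatness of \nabla = \Phi + d_A + \Phi^{*} amounts to the Hitchin-type
% equation F(A) + [\Phi\wedge\Phi^{*}] = 0.
```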
Rudiments of the approach towards establishing a passage to a canonical family of flat connections have appeared in the previous articles [@thomas-flat-conn; @ThoWKB] and [@Tho22] by the third author. In particular, it was shown in [@thomas-flat-conn] that the cotangent bundle of the moduli space of higher complex structures can be embedded into a 1-parameter family of spaces whose sections are flat formal connections. The theory of $\mathfrak{g}$-complex structures was introduced in [@Tho22], extending the case of $\mathfrak{sl}_n(\mathbb{C})$ to a general complex simple Lie algebra $\mathfrak{g}$, and constitutes the starting point for our definition of $G$-Fock bundles. The introduction of Fock bundles and their relationship to families of connections points towards a broader framework to study character varieties, which includes both the theory of Higgs bundles and that of Fock bundles. Fixing a complex structure on $S$, one may consider a $G$-principal bundle $P$ on $S$, together with a connection $d_A$ and a field $\Phi\in \Omega^1(S,\mathfrak{g}_P)$ satisfying $[\Phi\wedge\Phi]=0$, $d_A(\Phi)=0$ and with fixed conjugacy class of $\Phi^{0,1}$. Here we used the complex structure to decompose $\Phi$ into Hodge types. Under this setup, whenever $\Phi^{0,1}=0$, the field $\Phi$ is holomorphic with respect to the holomorphic structure on $P$ given by $d_A^{0,1}$, thus leading to the theory of Higgs bundles. On the other hand, whenever $\Phi^{0,1}$ is principal nilpotent, this leads to our theory of Fock bundles. We hope that to such data, one can always associate a flat connection, in analogy to the nonabelian Hodge correspondence. Alternative conditions for the conjugacy class of $\Phi^{0,1}$ can possibly lead to alternative approaches to character varieties, yet the case when $\Phi^{0,1}$ is principal nilpotent describes the only possible conjugacy class which stays invariant under coordinate change on $S$.
In other words, *the approach via Fock bundles within this broader framework is the only one which is independent of the complex structure on $S$*. ## Results and structure We now make the statements appearing in the article more precise. Throughout the whole paper we denote by $S$ a smooth closed connected orientable surface of genus at least 2 and by $G$ a complex simple Lie group with Lie algebra $\mathfrak{g}$. The main new object we introduce is the notion of a $G$-Fock bundle. We restrict to the case $G=\mathrm{SL}_n(\mathbb{C})$ here for simplicity and give the full definition in Section [3](#Sec:Fock-bundles){reference-type="ref" reference="Sec:Fock-bundles"}. **Definition 1** (Definition [Definition 18](#defn_Fock-bundle){reference-type="ref" reference="defn_Fock-bundle"}). *An *$\mathrm{SL}_n(\mathbb{C})$-Fock bundle* over $S$ is a triple $(E,\Phi,g)$ where $E$ is a complex vector bundle over $S$ with fixed volume form, non-degenerate symmetric pairing $g:E\times E\to \underline{\mathbb{C}}$, where $\underline{\mathbb{C}}$ denotes the trivial line bundle, and $\Phi\in \Omega^1(S,\mathfrak{sl}(E))$ satisfying* 1. *$\Phi\wedge\Phi = 0$,* 2. *$\Phi(v)(z)$ is principal nilpotent for all $z\in S$ and all vectors $v\in T_zS$.* 3. *$\Phi$ is $g$-self-adjoint.* *We refer to $\Phi$ as the *Fock field*.* This generalizes easily to general $G$: a $G$-Fock bundle is a principal $G$-bundle $P$ with a certain involution $\sigma$ playing the role of the symmetric pairing, and an adjoint-valued 1-form $\Phi$ satisfying the analogous three properties above. An important example of $G$-Fock bundles, the so-called *Fuchsian locus*, arises from the underlying smooth principal bundle to the uniformizing $G$-Higgs bundle when we equip $S$ with a complex structure (see Section [3.2](#Sec:fuchsian-locus){reference-type="ref" reference="Sec:fuchsian-locus"}). 
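To make the three conditions in the definition concrete, here is a minimal local sketch for $n=2$; the frame and the coefficient $\mu$ (reminiscent of a Beltrami coefficient) are illustrative choices for this sketch, not notation fixed by the definition:

```latex
% In local coordinates (x,y), take
\[
  \Phi = A\,dx + \mu A\,dy,
  \qquad
  A = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix},
  \qquad
  \mu \in \mathbb{C}\setminus\mathbb{R}.
\]
% (i)   \Phi\wedge\Phi = [A,\mu A]\,dx\wedge dy = 0.
% (ii)  For v = a\,\partial_x + b\,\partial_y with (a,b)\in\mathbb{R}^2\setminus\{0\},
%       \Phi(v) = (a+\mu b)A is nonzero (as \mu\notin\mathbb{R}) and nilpotent,
%       hence principal nilpotent in \mathfrak{sl}_2(\mathbb{C}).
% (iii) \Phi is self-adjoint for the symmetric pairing g(u,w) = u^{\top} J w,
%       J = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, since J A J^{-1} = A^{\top}.
```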
The link to higher complex structures is given by the following: **Proposition 1** (Proposition [Proposition 22](#Prop:link-to-g-C-str){reference-type="ref" reference="Prop:link-to-g-C-str"}). *An $\mathrm{SL}_n(\mathbb{C})$-Fock bundle induces a higher complex structure of order $n$ on $S$. Isomorphism classes of $\mathrm{PSL}_n(\mathbb{C})$-Fock bundles are equivalent to higher complex structures.* In Subsection [3.3](#Sec:var-Fock-fields){reference-type="ref" reference="Sec:var-Fock-fields"} we describe variations of Fock bundles via the cohomology of a certain chain complex. In particular, we prove that a Fock bundle has no infinitesimal automorphisms, hence is stable in that sense. The main result of Section [4](#Sec:can-connection){reference-type="ref" reference="Sec:can-connection"} is the construction of a canonical connection associated to a Fock bundle equipped with a compatible positive hermitian structure. A hermitian structure $\rho$ is an involution on $P$ associated to the compact real form of $G$. It is said to be *compatible* with a Fock bundle if $\rho$ and $\sigma$ commute. The property of being *positive* is a certain open condition, see Definition [Definition 52](#Def:pos-Higgs-field){reference-type="ref" reference="Def:pos-Higgs-field"} for the precise statement. **Theorem 2** (Theorem [Theorem 48](#Thm-filling-in){reference-type="ref" reference="Thm-filling-in"}). *For a $G$-Fock bundle $(P,\Phi,\sigma)$ equipped with a compatible, positive hermitian structure $\rho$, there is a unique unitary, $\sigma$-invariant connection $d_A$ satisfying $d_A\Phi=0$.* This result is analogous to the existence of a Chern connection on a holomorphic bundle induced by a hermitian structure. 
Thus, to the data $(P,\Phi,\sigma,\rho)$ as above we can associate a connection $\Phi+d_A+\Phi^{*}$, where $\Phi^*=-\rho(\Phi)$, which preserves a split-real structure, and has curvature $$\label{Eq-intro:hitchin-like} F(A)+[\Phi\wedge\Phi^{*}].$$ Our main conjecture states: **Conjecture 3** (Conjecture [Conjecture 56](#main-conj){reference-type="ref" reference="main-conj"}). *For a $G$-Fock bundle $(P,\Phi,\sigma)$, there exists a unique compatible positive hermitian structure $\rho$ such that the associated connection $\Phi+d_A+\Phi^{*}$ is flat, in other words, such that $F(A)+[\Phi\wedge\Phi^{*}]=0$.* This would give a map from the space of Fock bundles to the space of Hitchin representations. The conjecture is true for examples of Fock bundles from the Fuchsian locus obtained by the nonabelian Hodge correspondence (the uniformizing Higgs bundles). Section [5](#Sec:ellipticity){reference-type="ref" reference="Sec:ellipticity"} analyzes the conjecture near the Fuchsian locus. The following result provides strong evidence towards the validity of the conjecture in general: **Theorem 4** (Theorem [Theorem 60](#Thm:ellipticity){reference-type="ref" reference="Thm:ellipticity"}). *Let $(P,\Phi,\sigma)$ be a $G$-Fock bundle equipped with a compatible positive hermitian structure $\rho$. The derivative of the map from the $\mathop{\mathrm{Aut}}(P,\sigma)$-conjugacy class of $\Phi$ to the curvature of the resulting connection $F(A)+[\Phi\wedge\Phi^{*}]$ is an elliptic isomorphism.* We then use the implicit function theorem in Section [5.2](#Sec:impl-fct-thm){reference-type="ref" reference="Sec:impl-fct-thm"} to conclude that the space of solutions to Equation ([\[Eq-intro:hitchin-like\]](#Eq-intro:hitchin-like){reference-type="ref" reference="Eq-intro:hitchin-like"}) is a Banach manifold which maps locally diffeomorphically to the space of Fock bundles. 
In particular, we get a map from a neighborhood of the Fuchsian locus in the space of higher complex structures to the Hitchin component. In [@Nolte] it is shown that on each higher diffeomorphism orbit there is a harmonic representative unique up to isotopy. By restricting our map to this slice, we get the following: **Theorem 5** (Theorem [Theorem 64](#Thm:neighborhood){reference-type="ref" reference="Thm:neighborhood"}). *There is an open neighborhood of the Fuchsian locus in the moduli space $\mathcal{T}^n(S)$ of higher complex structures of order $n$, which has a canonical map to the $\mathop{\mathrm{PSL}}_n(\mathbb{R})$-Hitchin component, via solution of Equation ([\[Eq-intro:hitchin-like\]](#Eq-intro:hitchin-like){reference-type="ref" reference="Eq-intro:hitchin-like"}).* It is expected that the restriction to harmonic higher complex structures is not actually necessary because our map to the character variety should be constant along higher diffeomorphism orbits. In Section [6](#Sec:higher-diffeos){reference-type="ref" reference="Sec:higher-diffeos"}, we analyze the action of special $\lambda$-dependent gauge transformations on a family of flat connections of the form $\lambda^{-1}\Phi+d_A+\lambda \Phi^{*}$ where $\lambda\in\mathbb{C}^*$ is a parameter. In the case of $\mathrm{SL}_n(\mathbb{C})$, these special gauge transformations relate to higher diffeomorphisms: **Theorem 6** (Theorem [Theorem 68](#hamiltonian-first-variations-coincide){reference-type="ref" reference="hamiltonian-first-variations-coincide"}).
*The variation of an $\mathrm{SL}_n(\mathbb{C})$-Fock field $\Phi$ induced by an infinitesimal gauge transformation $\lambda^{-1} \eta$ with $\eta=\Phi(v_1)\cdots \Phi(v_k)$ is equivalent to the infinitesimal action of the Hamiltonian $H=v_1\cdots v_k$ on the higher complex structure induced by $\Phi$.* This theorem gives a clear gauge-theoretic meaning to higher diffeomorphisms, which in the theory of higher complex structures are Hamiltonian diffeomorphisms of $T^*S$ preserving the zero-section $S\subset T^*S$ setwise. This realization of higher diffeomorphisms as gauge transformations, together with our ellipticity result, gives the following: **Proposition 7** (Proposition [Proposition 69](#prop:constancy){reference-type="ref" reference="prop:constancy"}). *A differentiable family $\Phi_t$ of solutions to Equation ([\[Eq-intro:hitchin-like\]](#Eq-intro:hitchin-like){reference-type="ref" reference="Eq-intro:hitchin-like"}) for $G=\mathop{\mathrm{SL}}_n(\mathbb{C})$ which induces a family of higher diffeomorphic higher complex structures maps to a constant path of representations.* Together with the main Conjecture [Conjecture 3](#Conj-intro){reference-type="ref" reference="Conj-intro"}, this would give the desired map from $\mathcal{T}^n(S)$ to the $\mathop{\mathrm{PSL}}_n(\mathbb{R})$-Hitchin component, without the need for harmonic representatives. In the final Section [7](#Sec:covectors){reference-type="ref" reference="Sec:covectors"} we consider $G$-Fock bundles $(P,\Phi,\sigma,\rho)$ equipped with a hermitian structure $\rho$ not necessarily commuting with $\sigma$. We parametrize the space of unitary connections $d_A$ satisfying $d_A\Phi=0$ by so-called *covectors*. In the case of $\mathrm{SL}_n(\mathbb{C})$, covectors can be identified with cotangent vectors to the space of higher complex structures.
We conjecture that we can still solve the equation $F(A)+[\Phi\wedge\Phi^{*}]=0$ for small and $\mu$-holomorphic covectors and that the monodromies of the flat connections $\Phi+d_A+\Phi^{*}$ describe a tubular neighborhood of the $G$-Hitchin component inside the complex character variety $\chi(\pi_1S,G)$. This picture generalizes the work of Donaldson [@Donaldson] and Trautwein [@Traut] on the space of almost-Fuchsian representations in the $\mathrm{SL}_2(\mathbb{C})$-case. The $\mu$-holomorphicity condition from the theory of higher complex structures gets a precise gauge-theoretical interpretation: **Theorem 8** (Theorem [Theorem 82](#Thm:mu-holo){reference-type="ref" reference="Thm:mu-holo"}). *For an $\mathrm{SL}_n(\mathbb{C})$-Fock bundle, the condition $F(A)\in\mathrm{Im}(\mathop{\mathrm{ad}}_\Phi)\subset\Omega^2(S,\mathfrak{g}_P)$ is equivalent to the $\mu$-holomorphicity condition [\[Eq:mu-holo-cond-hcs\]](#Eq:mu-holo-cond-hcs){reference-type="eqref" reference="Eq:mu-holo-cond-hcs"}.* ***Acknowledgements.*** We warmly thank Abdelmalek Abdesselam, Andrea Bianchi, Jeff Danciger, Vladimir Fock, Dan Freed, Andy Neitzke and Alex Nolte for many fruitful discussions and insights. We are also grateful to the Alexander von Humboldt Foundation for support towards completing this project, to the National Science Foundation (grants DMS-1937215, and DMS-1945493), to the SPP 2026 *Geometry at Infinity* for a travel grant and to the University of Heidelberg and the University of Texas at Austin where most of the work has been carried out. # Preliminaries We gather necessary material for the main part of the text. 
We review higher complex structures in Subsection [2.1](#Sec:HCS){reference-type="ref" reference="Sec:HCS"}, the nonabelian Hodge correspondence in [2.2](#Sec:nonabelian-Hodge){reference-type="ref" reference="Sec:nonabelian-Hodge"}, the Lie-theoretic background in [2.3](#Sec:princ-nilp-elements){reference-type="ref" reference="Sec:princ-nilp-elements"} (especially principal nilpotent elements) and involutions on principal $G$-bundles in [2.4](#Sec:involutions){reference-type="ref" reference="Sec:involutions"}. Part of the material in the Lie-theoretic section seems new; the rest is well-known. ## Higher complex structures {#Sec:HCS} Motivated by the aspiration to describe components of real character varieties as moduli spaces of geometric structures, Vladimir V. Fock and the third author introduced higher complex structures [@FT] and conjectured that they parametrize $\mathrm{PSL}_n(\mathbb{R})$-Hitchin components. These were, in turn, generalized to $\mathfrak{g}$-complex structures where $\mathfrak{g}$ is a complex simple Lie algebra in [@Tho22]. We give here a brief account, concentrating on the properties we need in the subsequent sections. ### Definitions Let $S$ be a closed orientable surface of genus at least 2. A complex structure on $S$ is equivalent to an *almost complex structure*, that is, an automorphism $J$ of $TS$ satisfying $J^2=-\mathrm{id}$. The complexified tangent bundle then decomposes as $$T^\mathbb{C}S=T^{1,0}S\oplus T^{0,1}S,$$ where $T^{1,0}S$ and $T^{0,1}S$ are pointwise the eigendirections of $J$. Since $T^{1,0}S$ is the complex conjugate of $T^{0,1}S$, the complex structure is uniquely encoded by $T^{1,0}S$, which is a non-real direction of $T^\mathbb{C}S$. This in turn is uniquely encoded by a certain ideal $I$ in $\mathrm{Sym}(T^{\mathbb{C}}S)$ generated by $T^{0,1}S$ and $(T^{1,0}S)^2$.
Geometrically, $\mathrm{Sym}(T^{\mathbb{C}}S)$ is the ring of functions on $T^{*\mathbb{C}}S$ which are polynomial on fibers, and $I$ cuts out an infinitesimal thickening of the zero section of $T^{*\mathbb{C}}S$ in the direction of $T^{1,0}S$. To describe this equivalence more explicitly, fix a reference complex structure on $S$ and local coordinates $(z,\bar{z})$. Denote by $(p,\bar{p})$ the linear coordinates on $T^{*\mathbb{C}}S$ corresponding to vector fields $\partial_z$ and $\partial_{\bar{z}}$. Any other complex structure can then be described by an ideal $I$ locally of the form $$I=\langle p^2, \bar{p}-\mu(z,\bar{z}) p\rangle,$$ where $\mu$ is known as the *Beltrami differential*, which satisfies $\lvert \mu\rvert \neq 1$. The holomorphic cotangent bundle of the new complex structure is defined by the equation $\bar{p} = \mu(z,\bar{z})p$. **Definition 1**. *A *higher complex structure of order $n$* (or of rank $n-1$) on $S$ is a special ideal $I$ in $\mathrm{Sym}(T^{\mathbb{C}}S)$ locally of the form $$\label{Eq:hcs-ideal} I=\langle p^n, -\bar{p}+\mu_2(z,\bar{z}) p+\mu_3(z,\bar{z}) p^2+...+\mu_n(z,\bar{z}) p^{n-1}\rangle,$$ where $\mu_2=\mu$ is the usual Beltrami differential and $\mu_\ell$ for $\ell=3,...,n$ are called *higher Beltrami differentials* (see [@FT Proposition 1]).* Globally, $\mu_\ell$ is a smooth section of $K^{1-\ell}\otimes \bar{K}$ where $K$ denotes the canonical bundle. A usual complex structure is a higher complex structure of order 2, i.e. of rank 1. An important feature of these structures is the forgetful map, which associates to a higher complex structure of order $n$ a structure of order $n-1$ by forgetting the last Beltrami differential $\mu_n$. In particular, *any higher complex structure induces a complex structure on $S$*. **Remark 1**. *The space of higher complex structures as defined above has two connected components. A higher complex structure induces a complex structure which in turn induces an orientation on $S$.
This orientation coincides with the one induced from the reference complex structure iff $\lvert\mu_2\rvert <1$. Changing the reference complex structure to the complex conjugate one changes $\mu_2$ to $1/\bar{\mu}_2$, which has norm strictly greater than 1.* More generally, let $\mathfrak{g}$ be a complex simple Lie algebra. An element $x\in\mathfrak{g}$ is called *principal nilpotent* if $\mathop{\mathrm{ad}}_x$ is nilpotent and $\dim Z(x)=\mathrm{rk}\,\mathfrak{g}$, where $Z(x)=\{y\in \mathfrak{g}\mid [x,y]=0\}$ denotes the centralizer (see also Section [2.3](#Sec:princ-nilp-elements){reference-type="ref" reference="Sec:princ-nilp-elements"}). **Definition 2** (Definition 4.1 in [@Tho22]). *A *$\mathfrak{g}$-complex structure* on $S$ is a $G$-conjugacy class of fields $\Phi\in\Omega^1(S,\mathfrak{g})$ locally of the form $\Phi=\Phi_1(z,\bar{z})dz+\Phi_2(z,\bar{z})d\bar{z}$ such that $\Phi_1$ is principal nilpotent and $\Phi_2\in Z(\Phi_1)$, subject to a certain inequality explained below.* There is a unique coefficient $\mu_2$ for which the linear combination $\mu_2\Phi_1-\Phi_2$ is not principal nilpotent [@Tho22 Proposition 2.21]. The inequality we impose is $\lvert\mu_2\rvert <1$. This allows us to identify $\mu_2$ with a Beltrami differential of a complex structure on $S$. Therefore a $\mathfrak{g}$-complex structure induces a complex structure on $S$. For $\mathfrak{g}=\mathfrak{sl}_n(\mathbb{C})$ we get the notion of higher complex structure as follows: principal nilpotent elements in $\mathfrak{g}$ form a single conjugacy orbit. So we can fix a principal nilpotent element $F\in\mathfrak{sl}_n(\mathbb{C})$ and use the gauge freedom to fix $\Phi_1=F$. Using the standard representation of $\mathfrak{sl}_n(\mathbb{C})$ on $\mathbb{C}^n$, the centralizer of $F$ in $\mathfrak{g}$ is spanned by the polynomials in $F$ without constant term.
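To make this concrete, here is our own worked illustration for $n=3$: the principal nilpotent element and its centralizer are $$F=\begin{pmatrix}0&0&0\\1&0&0\\0&1&0\end{pmatrix},\qquad F^2=\begin{pmatrix}0&0&0\\0&0&0\\1&0&0\end{pmatrix},\qquad Z(F)=\operatorname{span}\{F,F^2\},$$ so after gauging $\Phi_1=F$ the second component takes the form $\Phi_2=\mu_2 F+\mu_3 F^2$. Since $F^3=0$, the polynomials vanishing on $(\Phi_1,\Phi_2)$ are exactly those in the ideal $\langle p^3,\ -\bar{p}+\mu_2 p+\mu_3 p^2\rangle$, which is the local form [\[Eq:hcs-ideal\]](#Eq:hcs-ideal){reference-type="eqref" reference="Eq:hcs-ideal"} for $n=3$.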
Hence the ideal of polynomials $P\in\mathbb{C}[p,\bar{p}]$ such that $P(\Phi_1,\Phi_2)=0$ (which makes sense since $\Phi_1$ and $\Phi_2$ commute) is of the form [\[Eq:hcs-ideal\]](#Eq:hcs-ideal){reference-type="eqref" reference="Eq:hcs-ideal"}. ### Moduli space For complex structures, the associated moduli space is the *Teichmüller space*, where complex structures are considered modulo diffeomorphisms of $S$ isotopic to the identity. For higher complex structures, one needs to mod out by a larger group in order to get a finite-dimensional moduli space. We restrict the exposition to the case $\mathfrak{g}=\mathfrak{sl}_n(\mathbb{C})$ here. For $\mathfrak{g}$ of classical type, see [@Tho22 Section 4]. For general complex simple $\mathfrak{g}$, the moduli space of $\mathfrak{g}$-complex structures has not yet been constructed. **Definition 3** (Definition 3 in [@FT]). *A *higher diffeomorphism* of a surface $S$ is a Hamiltonian diffeomorphism of $T^*S$ preserving the zero-section $S \subset T^*S$ setwise. The group of higher diffeomorphisms is denoted by $\mathrm{Ham}_0(T^*S)$.* Diffeomorphisms of $T^*S$ fixing the zero-section act on the completed symmetric algebra $\widehat{\mathrm{Sym}} (T^\mathbb{C}S)$ of power series. The map from ideals in $\mathrm{Sym} (T^\mathbb{C}S)$ to ideals in $\widehat{\mathrm{Sym}} (T^\mathbb{C}S)$ is injective on those which contain some power of the ideal $(T^\mathbb{C}S) \subset \mathrm{Sym} (T^\mathbb{C}S)$. This includes higher complex structures, so we can just as well view higher complex structures as ideals in $\widehat{\mathrm{Sym}} (T^\mathbb{C}S)$ where diffeomorphisms naturally act. A precise study of the space $\mathrm{Ham}_0(T^*S)$ and the action by diffeomorphisms appears in [@Nolte Section 7]. **Definition 4**.
*The *moduli space of higher complex structures of order $n$*, denoted by $\mathcal{T}^n(S)$, is the space of higher complex structures of order $n$, denoted by $\mathbb{M}^n(S)$, modulo higher diffeomorphisms.* *The *Fuchsian locus* in $\mathbb{M}^n(S)$ consists of those higher complex structures with trivial higher Beltrami differentials. The Fuchsian locus in $\mathcal{T}^n(S)$ is the image under the projection $\mathbb{M}^n(S)\to\mathcal{T}^n(S)$.* This moduli space is finite-dimensional of dimension $(n^2-1)(2g-2)$, contractible and admits a forgetful map $\mathcal{T}^n(S)\to\mathcal{T}^{n-1}(S)$, see [@FT Theorem 2] and [@Nolte Theorem 1.1]. The Fuchsian locus inside $\mathcal{T}^n(S)$ is a copy of Teichmüller space, which is $\mathcal{T}^2(S)$. The main conjecture within this theory concerns the existence of a canonical diffeomorphism between $\mathcal{T}^n(S)$ and the $\mathrm{PSL}_n(\mathbb{R})$-Hitchin component which is equivariant with respect to the natural mapping class group action. This conjecture has been proven for $n=3$ by Nolte in [@Nolte], using techniques which are special to $n=3$ and analogous to a positive resolution of the Labourie Conjecture for $n=3$ [@Labourie], but which are known to fail for higher $n$ (see [@SaSm]). Finally, we present the description of the total cotangent bundle $T^*\mathcal{T}^n(S)$. **Theorem 5** (Theorem 3 in [@FT]).
*The cotangent bundle $T^*\mathcal{T}^n(S)$ is the space of $\mathrm{Ham}_0(T^*S)$-equivalence classes of tensors $\mu_\ell\in \Gamma(K^{1-\ell}\otimes\bar{K})$ and $t_\ell\in\Gamma(K^\ell)$ for $\ell=2,...,n$ satisfying $$\label{Eq:mu-holo-cond-hcs} -\bar{\partial}t_k\!+\!\mu_2\partial t_k\!+\!kt_k\partial\mu_2+\sum_{l=1}^{n-k}((l\!+\!k)t_{k+l}\partial\mu_{l+2}+(l\!+\!1)\mu_{l+2}\partial t_{k+l})=0.$$ We refer to this condition as the *$\mu$-holomorphicity condition*.* The tensors $\mu_\ell$ are the higher Beltrami differentials from [\[Eq:hcs-ideal\]](#Eq:hcs-ideal){reference-type="eqref" reference="Eq:hcs-ideal"}. The tensors $t_\ell$ describe the covector. For a trivial higher complex structure, that is, when $\mu_k=0$ for all $k\in\{2,...,n\}$, the condition simply reduces to $\bar\partial t_k=0$. Thus, $\mu$-holomorphicity can be seen as *a generalization of the usual holomorphicity condition*. ## Nonabelian Hodge theory and twistor approach {#Sec:nonabelian-Hodge} Building on the fundamental theorems by Narasimhan--Seshadri and Eells--Sampson, nonabelian Hodge theory provides an abundance of methods that can be used for the study of character varieties via holomorphic techniques and Higgs bundles, as well as analytic techniques from the theory of harmonic maps. We provide a brief overview of this holomorphic viewpoint and the twistor space framework for character varieties. ### The nonabelian Hodge correspondence Let $X$ be a compact Kähler manifold and let $G$ be a connected complex reductive Lie group.
For $H \subseteq G$ a maximal compact subgroup of $G$, its complexification $H^{\mathbb{C}}$ is isomorphic to $G$, and the Cartan decomposition for the corresponding Lie algebras $\mathfrak{g}:=\mathrm{Lie}(G)$ and $\mathfrak{h}:=\mathrm{Lie}(H)$ reads in this case as $$\mathfrak{g}=\mathfrak{h} \oplus i\mathfrak{h}.$$ A flat principal $G$-bundle $(P, \Theta)$ over $X$ is equivalent to a reductive representation $\rho: \pi_1(X) \to G$ via the *Riemann--Hilbert correspondence*, for $\Theta$ a 1-form on $P$ with values in $\mathfrak{g}$ (a principal connection on $P$). The flatness condition means that $\Theta$ satisfies the equation $$d\Theta +\tfrac{1}{2}[\Theta \wedge \Theta]=0.$$ Since $\rho$ is assumed to be reductive, Corlette's Theorem [@Corlette] provides the existence of a $\rho$-equivariant harmonic map from the universal cover $\widetilde{X}$ of $X$ to the associated symmetric space, $$f: \widetilde{X}\to G/H.$$ An equivariant map from $\widetilde{X}$ to $G/H$ is the same as a reduction of structure group of the principal $G$-bundle $P$ to the maximal compact $H$. This means there is an $H$-bundle $P_H$, and an $H$-equivariant map $\iota :P_H\to P$, and the flat connection ${{\iota }^{*}}\Theta$ on $P_{H}$ now splits as $${{\iota }^{*}}\Theta =A+\psi,$$ where $A$ is a connection on $P_{H}$, and $\psi$ descends to a 1-form on $X$ with values in the bundle associated to $P_{H}$ via the isotropy representation $\mathrm{Ad}:H\to \mathrm{GL}(i\mathfrak{h})$. The flatness of the connection ${{\iota }^{*}}\Theta$ implies the two equations $$\begin{aligned} {{F}_{A}}+\tfrac{1}{2}\left[ \psi \wedge\psi \right]&=0\\ {{d}_{A}}\psi &=0,\end{aligned}$$ and harmonicity of the map $f$ gives the additional equation $$d_{A}^{*}\psi =0.$$ In the case when $G=\mathrm{SL}_n(\mathbb{C})$, a reduction to a maximal compact is the same as a unit volume hermitian metric on the associated vector bundle. 
In the light of this equivalence, we call the special metric provided by Corlette's Theorem a *harmonic hermitian metric*. An application of the Siu--Sampson Theorem [@Sam Theorem 1] now gives that, for a reduction of structure group as above, the $(0,1)$-part ${{\bar{\partial }}_{{{P}_{H}}}}$ of the connection $A$ and the $(1,0)$-part $\varphi$ of the 1-form $\psi$ satisfy the equations $$\bar{\partial }^2_{P_H}=0\;, \;\; \bar{\partial }_{P_H}\varphi =0\;, \;\; [\varphi \wedge\varphi ]=0.$$ The operator ${{\bar{\partial }}_{{{P}_{H}}}}$ defines a holomorphic structure on the ${{C}^{\infty }}$-principal bundle $P_{H}$; extending the structure group from $H$ to ${{H}^{\mathbb{C}}}$ we finally have: **Definition 6**. *For a compact Kähler manifold $X$ and a connected complex reductive Lie group $G$, a *$G$-Higgs bundle* over $X$ is a pair $(P,\varphi)$, where* - *$P$ is a holomorphic principal $G$-bundle over $X$, and* - *$\varphi \in {{\Omega}^{1,0}}\left( X,\mathfrak{g}_P \right)$ satisfying ${{\bar{\partial }}_{{{P}_{H}}}}\varphi =0$ and $[\varphi \wedge\varphi ]=0$.* *Equivalently, we can think of the Higgs field $\varphi$ as a holomorphic section $\varphi \in \mathrm{H}^0\left( X,\mathfrak{g}_P \otimes K \right)$, where $\mathfrak{g}_P$ denotes the adjoint bundle of $P$.* **Remark 2**. *In the case when $X$ is a compact Riemann surface, the two conditions $\bar{\partial }^2_{P_H}=0$ and $[\varphi \wedge\varphi ]=0$ are automatically satisfied.* **Remark 3**. *When $G \subset \mathrm{GL}_n( \mathbb{C})$, a $G$-Higgs bundle can be naturally interpreted as a Higgs bundle $(E, \Phi)$ in the original sense of Hitchin [@Hit87], [@Simpson] together with some additional structure reflecting the structure of the group $G$.
In particular, when $G=\mathrm{SL}_n(\mathbb{C})$, an $\mathrm{SL}_n(\mathbb{C})$-Higgs bundle is described by a pair $(E,\Phi)$, where $E$ is a holomorphic rank $n$ vector bundle with trivial determinant and the Higgs field $\Phi$ is a holomorphic section $\Phi \in \mathrm{H}^0\left( X,\mathrm{End}(E)\otimes K \right)$ with $\mathrm{tr}(\Phi)=0$.* The flatness of the connection $\Theta$ finally gives the so-called *Hitchin equation* for the $G$-Higgs bundle $\left( P,\varphi \right)$ constructed above, $$\label{G_Hitchin_eq} F(A) + [\varphi\wedge \varphi^*]=0,$$ where $\varphi^*$ denotes the hermitian adjoint coming from the Cartan involution of $\mathfrak{g}$ extended to $\mathfrak{g}_P\otimes \Omega^1(S)$. So far we have described the passage from a reductive representation to a Higgs bundle; the opposite direction is provided by Simpson's Theorem [@Simpson] and its generalizations [@GGM], which state that for a $G$-Higgs bundle $\left( P,\varphi \right)$ with vanishing Chern classes, there exists a reduction of structure group of $P$ such that Equation ([\[G_Hitchin_eq\]](#G_Hitchin_eq){reference-type="ref" reference="G_Hitchin_eq"}) holds if and only if $\left( P,\varphi \right)$ is polystable. The polystability condition appropriately extends Mumford's stability condition for vector bundles, and is implemented in order to construct the *moduli space of $G$-Higgs bundles*, or Dolbeault moduli space, $\mathcal{M}_{Dol} (X,G)$ as a GIT quotient. The nonabelian Hodge correspondence for a Kähler manifold $X$ with underlying topological manifold $M$ therefore describes a bijection between this Dolbeault moduli space and the character variety, or Betti moduli space, $\mathcal{M}_B (M,G)$ of reductive fundamental group representations: **Theorem 7** (Nonabelian Hodge correspondence).
*There is a real-analytic isomorphism $\mathcal{M}_{Dol} (X,G) \cong \mathcal{M}_B (M,G)$.* ### The twistor approach We now restrict attention to a compact Riemann surface $X$ with underlying smooth surface $S$. For a polystable Higgs bundle $\left( {{{\bar{\partial }}}_{{{E}_{H}}}},\varphi \right)$ over $X$, Hitchin [@Hit87] interpreted Equation ([\[G_Hitchin_eq\]](#G_Hitchin_eq){reference-type="ref" reference="G_Hitchin_eq"}) together with the condition of holomorphicity for the Higgs field $\varphi$, ${{{\bar{\partial }}}_{{{E}_{H}}}}(\varphi)=0$, in terms of a set of three moment maps for the action of the unitary gauge group. Following symplectic reduction techniques, the moduli space $\mathcal{M}_{Hit}$ of solutions to this set of equations was constructed as a *hyperkähler quotient*. This means that $\mathcal{M}_{Hit}$ is a $4n$-dimensional Riemannian manifold equipped with three covariantly constant (with respect to the Levi-Civita connection) orthogonal automorphisms $I, J$ and $K$ of the tangent bundle $T\mathcal{M}_{Hit}$ which satisfy the quaternionic identities $${{I}^{2}}={{J}^{2}}={{K}^{2}}=IJK=-1.$$ In fact, any linear combination of the form $$\label{cx_str_hyperk} i(a,b,c)=aI+bJ+cK$$ satisfies $i^2=-\mathrm{id}$ if and only if $a^2+b^2+c^2 = 1$. Taking $\lambda = (a,b,c)$ in the unit sphere $S^2\cong\mathbb{CP}^1$, we have a 1-parameter family of complex structures on $\mathcal{M}_{Hit}$. Then, using the Riemannian metric on $\mathcal{M}_{Hit}$, one gets a family of symplectic structures which, combined with the complex structures above, give a 1-parameter family of Kähler structures. In the light of [@HKLR87], the *twistor approach* allows one to incorporate all complex structures in the 1-parameter family described above into a single complex structure on a larger manifold, the *twistor space of* $\mathcal{M}_{Hit}$.
As a smooth manifold, this is defined as the product manifold $$Z=\mathcal{M}_{Hit} \times \mathbb{CP}^1.$$ One can equip $Z$ with a complex structure as follows: at a point $(m,\lambda)$ of $Z$ it is given by the pair $(i_{\lambda}, i_0)$, where $i_0$ denotes the standard complex structure on $\mathbb{CP}^1$ and $i_{\lambda}$ is as in Equation ([\[cx_str_hyperk\]](#cx_str_hyperk){reference-type="ref" reference="cx_str_hyperk"}), for $\lambda = (a,b,c)\in S^2\cong\mathbb{CP}^1$. This defines an almost-complex structure and it is not hard to prove its integrability. The projection $p:Z\to \mathbb{CP}^1$ is holomorphic and each copy $\{m\}\times \mathbb{CP}^1$ of the projective line is a holomorphic section of this projection, called a *twistor line*. Using ideas of Deligne, Simpson [@Sim97] constructed the twistor space for $\mathcal{M}_{Hit}$ as the moduli space of $\lambda$-connections which he called *Hodge moduli space* $\mathcal{M}_{Hod}$. In this broader picture, the Betti moduli space $\mathcal{M}_B (S,G)$ and the Dolbeault moduli space $\mathcal{M}_{Dol} (X,G)$ are two special fibers of the holomorphic fiber bundle $\mathcal{M}_{Hod}\to \mathbb{CP}^1$. One possible way to describe Deligne's $\lambda$-connections is via what we call three-term connections, which is discussed next. ### Three-term connections {#sec:examples of 3-term conn} For a complex reductive Lie group $G$, the work of Hitchin [@Hit87] and its extensions show that the cotangent bundle of the space of holomorphic $G$-bundles on a given Riemann surface $X$ can be mapped isomorphically to the space of families of connections of the form $$\mathcal{A}(\lambda)=\lambda^{-1}\Phi +d_A + \lambda \Phi^{*},$$ where $\lambda \in \mathbb{C}^{*}$ is a parameter, $\Phi$ a holomorphic Higgs field, $\Phi^{*}$ the hermitian conjugate with respect to a harmonic metric $h$, and $d_A$ the associated Chern connection.
In the Higgs bundle setting, this is a family of *flat connections*: the $(0,1)$-part of the background connection $d_A$ defines a holomorphic structure on the bundle, and the Higgs field $\Phi$ defines a cotangent vector to the space of holomorphic bundles. The family $\mathcal{A}(\lambda)$ gives a family of maps from the moduli space of flat $G$-connections to itself depending on the parameter $\lambda \in \mathbb{C}^{*}$. In the limit $\lambda \to 0$ (resp. $\lambda\to \infty$) these structures tend to the moduli space of $G$-Higgs bundles on $X$ (resp. the complex conjugate $\bar{X}$) with its Kähler structure. Another important example of a 1-parameter family of flat connections depending on a parameter $\lambda \in \mathbb{C}^{*}$, analogous to the Hitchin family of flat connections described above, was given by Fock in [@Fock]. These flat connections are determined by solutions of the cosh-Gordon equation and describe a candidate for the twistor space of almost-Fuchsian representations. In the approach of [@Fock], the complex structure on the surface is a function of a background connection determined as above, and is not fixed once for all connections in the family as in Hitchin's case. ## Principal nilpotent elements {#Sec:princ-nilp-elements} Let $\mathfrak{g}$ be a complex simple Lie algebra and $G$ be a Lie group with Lie algebra $\mathfrak{g}$. For $x\in\mathfrak{g}$ we denote by $Z(x)=\{y\in \mathfrak{g}\mid [x,y]=0\}$ its centralizer in $\mathfrak{g}$. We also denote by $Z_G(x)$ the centralizer of $x$ in $G$ and by $Z(G)$ the center of $G$. **Definition 8**. *An element of $\mathfrak{g}$ is called *regular* if the dimension of its centralizer equals the rank of the Lie algebra. A regular nilpotent element is called *principal nilpotent*.* A more general statement holds about the dimension of the centralizer: for any $x \in \mathfrak{g}$, we have $\dim Z(x) \geq \mathop{\mathrm{rk}}(\mathfrak{g})$ (see for example Lemma 2.1.15 in [@Collingwood]).
So the regular elements minimize this dimension. For $\mathfrak{g}=\mathfrak{sl}_n(\mathbb{C})$, a nilpotent element is principal nilpotent iff it has maximal rank, i.e. it is of rank $n-1$. **Theorem 9**. *[@Kost Corollary 5.5.][\[Thm:one-reg-orbit\]]{#Thm:one-reg-orbit label="Thm:one-reg-orbit"} All principal nilpotent elements are conjugate under the adjoint action of the Lie group $G$.* For a principal nilpotent element $F$, its centralizer has properties quite analogous to those of a Cartan subalgebra (the centralizer of a regular semisimple element): **Theorem 10**. *For $F$ a principal nilpotent element, its centralizer $Z(F)$ is abelian and nilpotent.* Using a limit argument one can show even more: for any element $x\in \mathfrak{g}$, there is an abelian subalgebra of $Z(x)$ of dimension $\mathop{\mathrm{rk}}(\mathfrak{g})$; see [@Kost Theorem 5.7]. The nilpotency of $Z(F)$ can be found in [@Steinberg Corollary in Section 3.7]. The *Jacobson--Morozov lemma* states that any non-zero nilpotent element $F\in\mathfrak{g}$ can be included into an $\mathfrak{sl}_2$-subalgebra, the image of an injective homomorphism from $\mathfrak{sl}_2(\mathbb{C})$ into $\mathfrak{g}$. An $\mathfrak{sl}_2$-subalgebra is called *principal* if it contains a principal nilpotent element. It follows from Theorem [\[Thm:one-reg-orbit\]](#Thm:one-reg-orbit){reference-type="ref" reference="Thm:one-reg-orbit"} that all principal $\mathfrak{sl}_2$-subalgebras are conjugate. Given any $\mathfrak{sl}_2$-subalgebra, the Lie algebra $\mathfrak{g}$ splits into irreducible $\mathfrak{sl}_2$-modules. For a principal one, none of these modules is trivial and exactly one module is of dimension 3 (the principal $\mathfrak{sl}_2$-subalgebra itself). **Proposition 11**.
*For a principal nilpotent element $F\in \mathfrak{g}$, we have $Z(F)\subset \mathrm{Im}(\mathop{\mathrm{ad}}_F)$.* To see this, include $F$ into a principal $\mathfrak{sl}_2$-subalgebra, decompose $\mathfrak{g}$ into irreducible $\mathfrak{sl}_2$-modules and use $\mathfrak{sl}_2$-representation theory. The important point is that there is no trivial $\mathfrak{sl}_2$-module. **Proposition 12**. *The $\mathfrak{sl}_2$-subalgebras containing a given principal nilpotent $F$ are acted on transitively by $Z_G(F)\cong \exp(Z(F))\times Z(G)$ with stabilizer $\exp(\mathbb{C}F)\times Z(G)$. Hence the set of $\mathfrak{sl}_2$-subalgebras containing $F$ is a torsor for the quotient group, which is isomorphic to $\mathbb{C}^{\mathrm{rk}(\mathfrak{g}) -1}$. In particular, this space is contractible.* The structure of the centralizer $Z_G(F)$ is described in Lemma 3.7.3 in [@Collingwood]. It states that for a non-zero nilpotent element $x$ the centralizer $Z_G(x)$ is a semidirect product between $\exp(Z(x)\cap \mathrm{Im}(\mathop{\mathrm{ad}}_x))$, which is $\exp(Z(F))$ for $x=F$ by Proposition [Proposition 11](#Prop:center-in-image){reference-type="ref" reference="Prop:center-in-image"}, and the centralizer of an $\mathfrak{sl}_2$-subalgebra containing $x$, which in the principal case is $Z(G)$. A basis $(F,H,E)$ of an $\mathfrak{sl}_2$-subalgebra, satisfying the standard relations $[H,E]=2E$, $[H,F]=-2F$ and $[E,F]=H$, is called an *$\mathfrak{sl}_2$-triple*. Fix a principal $\mathfrak{sl}_2$-triple $(F,H,E)$. To this triple, Hitchin [@Hit92] associates a Lie algebra involution $\sigma_0:\mathfrak{g}\to \mathfrak{g}$ as follows. Using the decomposition of $\mathfrak{g}$ into irreducible $\mathfrak{sl}_2$-modules, the involution $\sigma_0$ is uniquely determined by negating all highest and lowest weight vectors [@Hit92 Proposition 6.1].
He also defines a compact real form $\rho_0:\mathfrak{g}\to \mathfrak{g}$ which extends $$E \mapsto -F, \;\;\;\; H \mapsto -H, \;\;\;\; F \mapsto -E$$ and commutes with $\sigma_0$. Finally, he shows that $\tau_0 := \sigma_0\rho_0$ is a split real structure. In fact, $\sigma_0$ can be defined up to conjugation by inner automorphisms as the product of commuting split and compact real forms. It will be useful to understand the set of involutions conjugate to $\sigma_0$ which negate a given principal nilpotent element. **Lemma 13**. *The collection of involutions $\sigma$ conjugate to $\sigma_0$ which negate a given principal nilpotent $F$ is acted on simply transitively by $\exp(Z(F))$. In particular, this space is contractible.* *Proof.* Let $\sigma,\sigma'$ be two such involutions which negate $F$. They differ by an inner automorphism: $\sigma' = \sigma\circ \mathop{\mathrm{Ad}}_\gamma$ where $\gamma\in G$. We see that $\mathop{\mathrm{Ad}}_\gamma F = \sigma\sigma'\cdot F = F$, so $\gamma\in Z_G(F)$. Since $Z_G(F) \cong \exp(Z(F))\times Z(G)$, we see that $\exp(Z(F))$ must act transitively. From this fact, we deduce Corollary [Corollary 14](#cor:sigma_negate_centralizer){reference-type="ref" reference="cor:sigma_negate_centralizer"} below, stating that $\sigma$ negates the whole centralizer $Z(F)$. Hence it inverts $\exp(Z(F))$. We have to check that if $z\in \exp(Z(F))$ is different from the identity, then $\sigma\circ \mathop{\mathrm{Ad}}_z \neq \sigma$. For $X\in \mathfrak{g}$ we have: $$\sigma(\mathop{\mathrm{Ad}}_z X) = \mathop{\mathrm{Ad}}_{\sigma(z)} \sigma(X) = \mathop{\mathrm{Ad}}_{z^{-1}} \sigma(X).$$ If this were always equal to $\sigma(X)$, then $z$ would be central, which contradicts the fact that it is the exponential of a nilpotent element. ◻ **Corollary 14**.
*Any $\sigma$ conjugate to $\sigma_0$ which negates $F$ actually negates all of $Z(F)$.* *Proof.* This is true for the involution $\sigma$ of Hitchin's construction, and this property is unchanged under inner automorphisms by elements in $\exp(Z(F))$. ◻ **Lemma 15**. *Let $\sigma$ be an involution conjugate to $\sigma_0$ and let $F\in\mathfrak{g}$ be principal nilpotent with $\sigma(F)=-F$. Then there is a unique $\sigma$-invariant $\mathfrak{sl}_2$-subalgebra containing $F$.* *Proof.* Proposition [Proposition 12](#Prop:space-of-sl2-f){reference-type="ref" reference="Prop:space-of-sl2-f"} shows that $\exp(Z(F))$ acts transitively on the space of $\mathfrak{sl}_2$-subalgebras containing $F$. By Corollary [Corollary 14](#cor:sigma_negate_centralizer){reference-type="ref" reference="cor:sigma_negate_centralizer"}, we know that $\sigma$ inverts $\exp(Z(F))$. The involution $\sigma$ acts on the set of principal $\mathfrak{sl}_2$-subalgebras containing $F$, intertwining the action of $\exp(Z(F))$ with its inverse. More explicitly, let $\mathfrak{p}\subset \mathfrak{g}$ be a principal $\mathfrak{sl}_2$-subalgebra containing $F$, and let $z\in \exp(Z(F))$. Then: $$\sigma(\mathop{\mathrm{Ad}}_{z}\mathfrak{p})=\mathop{\mathrm{Ad}}_{z^{-1}}\sigma(\mathfrak{p}).$$ It will follow from this property that there is a unique fixed point of the action by $\sigma$, which we can construct as a kind of midpoint. Let $z$ be the unique element of $\exp(Z(F))$ such that $\mathop{\mathrm{Ad}}_{z^2}(\mathfrak{p}) = \sigma(\mathfrak{p})$. Then $$\sigma(\mathop{\mathrm{Ad}}_{z}(\mathfrak{p})) = \mathop{\mathrm{Ad}}_{z^{-1}}(\sigma(\mathfrak{p})) = \mathop{\mathrm{Ad}}_{z^{-1}}(\mathop{\mathrm{Ad}}_{z^2}(\mathfrak{p})) = \mathop{\mathrm{Ad}}_{z}(\mathfrak{p}).$$ Therefore, $\mathop{\mathrm{Ad}}_{z}(\mathfrak{p})$ is an $\mathfrak{sl}_2$-subalgebra fixed by $\sigma$.
If there were two fixed subalgebras $\mathfrak{p},\mathfrak{p}'$, then $\mathfrak{p}' = \mathop{\mathrm{Ad}}_z \mathfrak{p}$ for some $z$. Such $z$ satisfies $\sigma(z)=z$; since $\sigma$ inverts $\exp(Z(F))$, this forces $z^2=1$, and hence $z$ is the identity. ◻ Finally, we will need the following lemma: **Lemma 16**. *For a principal nilpotent element $F\in\mathfrak{g}$ and an element $F'\in Z(F)$, we have $\mathrm{Im}(\mathop{\mathrm{ad}}_{F'})\subset\mathrm{Im}(\mathop{\mathrm{ad}}_{F})$.* *Proof.* By the Jacobson--Morozov lemma we can complete $F$ into a principal $\mathfrak{sl}_2$-triple $(F,H,E)$. Choose a compact real form $\rho$ such that $E=-\rho(F)$. Let $E'=-\rho(F')$. Since $F'\in Z(F)$ we get $E'\in Z(E)$. The centralizer $Z(E)$ is abelian, hence we have $Z(E)\subset Z(E')$. With respect to the hermitian inner product $\mathop{\mathrm{tr}}(\rho(.).)$ on $\mathfrak{g}$, $E$ and $F$ are adjoint, and so are $E'$ and $F'$. This implies that $\mathrm{Im}(\mathop{\mathrm{ad}}_F)$ is the perp of $\ker(\mathop{\mathrm{ad}}_E)=Z(E)$, and $\mathrm{Im}(\mathop{\mathrm{ad}}_{F'})$ is the perp of $\ker(\mathop{\mathrm{ad}}_{E'})=Z(E')$. This concludes the proof since $Z(E)\subset Z(E')$. ◻ ## Involutions and reductions of structure group {#Sec:involutions} There are three basic structures often put on a complex vector bundle: a hermitian structure, a symmetric pairing, and a real structure. In this section we explain precisely how to generalize these notions to principal $G$-bundles for connected complex simple $G$. Fix an antiholomorphic involution $\rho_0:G\to G$ whose fixed point locus is a maximal compact subgroup, and another commuting antiholomorphic involution $\tau_0$ whose fixed point locus is a split real subgroup. Call their composition $\sigma_0 := \rho_0\tau_0$. These involutions induce (and are determined by) anti-linear involutions of the Lie algebra $\mathfrak{g}$, which we will refer to by the same symbols. **Remark 4**.
*The involution $\sigma_0$ can equivalently be obtained by Hitchin's construction using a principal $\mathfrak{sl}_2$-triple [@Hit92 Proposition 6.1].* **Definition 17**. *Let $G$ be a group, let $\epsilon_0:G\to G$ be an involution, and let $P$ be a principal $G$-bundle on a manifold $M$. An *$\epsilon_0$-structure* on $P$ is an involution $\epsilon: P\to P$ such that $$\epsilon(p.g)=p.\epsilon_0(g)\;\;\;\forall\, p\in P, g\in G.$$* An $\epsilon_0$-structure on a $G$-bundle $P\to M$ is equivalent to a reduction of structure group of $P$ from $G$ to the group of fixed points $G^{\epsilon_0}$. Indeed, the fixed point locus of $\epsilon$ is naturally a $G^{\epsilon_0}$-bundle. The special case of $\rho_0$-structures, which we will refer to as *hermitian structures*, can be understood more concretely. A hermitian structure $\rho$ on a $G$-bundle $P$ induces an involution (which we also call $\rho$) on the adjoint bundle $\mathfrak{g}_P$. The eigenspaces of $\rho$ give a fiberwise Cartan decomposition of $\mathfrak{g}_P$. Since $G^{\rho_0}$ is precisely the subgroup of $G$ which preserves a Cartan decomposition, the reduction to $G^{\rho_0}$ is fully specified by this involution of $\mathfrak{g}_P$. In the case of $\sigma_0$- and $\tau_0$-structures, we also get involutions $\sigma$ and $\tau$ of $\mathfrak{g}_P$, but there is slightly more data in the structure. This is because the subgroup of $G$ whose conjugation action preserves $\tau_0$ contains not only $G^{\tau_0}$ but also the center of $G$. So just specifying an involution $\tau$ of $\mathfrak{g}_P$, which is conjugate to $\tau_0$ in each fiber, only gives a reduction to a slightly larger group. The same is true for $\sigma$. In the case $G=G_{ad}$ where the center is trivial, this is not an issue, and we may think of $\tau_0$- and $\sigma_0$-structures purely as involutions of the adjoint bundle.
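For $\mathfrak{g}=\mathfrak{sl}_n(\mathbb{C})$, a standard model of this triple of involutions is $\rho_0(X)=-\bar{X}^T$ (fixing $\mathfrak{su}(n)$), $\tau_0(X)=\bar{X}$ (fixing $\mathfrak{sl}_n(\mathbb{R})$), and $\sigma_0=\rho_0\tau_0:X\mapsto -X^T$ (fixing $\mathfrak{so}(n,\mathbb{C})$). As a purely illustrative numerical sanity check (not part of the argument; the choice $n=4$ and the random test matrix are ours), one can verify in numpy that these are commuting involutions, with $\sigma_0$ complex-linear and $\rho_0,\tau_0$ anti-linear:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
# a random traceless complex matrix, i.e. an element of sl_n(C)
X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
X -= np.trace(X) / n * np.eye(n)

rho = lambda A: -A.conj().T   # compact real form: fixes su(n)
tau = lambda A: A.conj()      # split real form: fixes sl_n(R)
sigma = lambda A: -A.T        # composition rho∘tau: fixes so(n, C)

# all three maps are involutions
for f in (rho, tau, sigma):
    assert np.allclose(f(f(X)), X)

# rho and tau commute, and sigma = rho∘tau
assert np.allclose(rho(tau(X)), tau(rho(X)))
assert np.allclose(sigma(X), rho(tau(X)))

# sigma is C-linear, while rho and tau are anti-linear
assert np.allclose(sigma(1j * X), 1j * sigma(X))
assert np.allclose(rho(1j * X), -1j * rho(X))
assert np.allclose(tau(1j * X), -1j * tau(X))
```

Note that this normalization is the standard matrix model; it agrees with Hitchin's construction only up to conjugation by an inner automorphism.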
# Fock bundles {#Sec:Fock-bundles} In this section, we introduce Fock bundles, analyze the so-called Fuchsian locus and describe the variation in the Fock field. ## Fock bundles and higher complex structures Fix a smooth closed orientable surface $S$ with genus at least 2. Consider a complex simple Lie group $G$ with associated Lie algebra $\mathfrak{g}$. Throughout, we will fix commuting involutions $\rho_0, \tau_0$ and $\sigma_0$ of $G$ such that $\rho_0$ is a compact real form, $\tau_0$ is a split real form, and $\sigma_0 = \rho_0\tau_0$. For a principal $G$-bundle $P$, denote by $\mathfrak{g}_P$ the associated $\mathfrak{g}$-bundle using the adjoint action of $G$ on $\mathfrak{g}$. Recall the notions of a principal nilpotent element and a $\sigma_0$-structure from the previous Subsections [2.3](#Sec:princ-nilp-elements){reference-type="ref" reference="Sec:princ-nilp-elements"} and [2.4](#Sec:involutions){reference-type="ref" reference="Sec:involutions"}. The main notion we introduce in this article is the following: **Definition 18**. *A *$G$-Fock bundle* over $S$ is a triple $(P,\Phi,\sigma)$, where $P$ is a principal $G$-bundle over $S$, $\sigma$ is a $\sigma_0$-structure, and $\Phi\in \Omega^1(S,\mathfrak{g}_P)$ is a $\mathfrak{g}_P$-valued 1-form satisfying* 1. *$[\Phi\wedge\Phi] = 0$,* 2. *$\Phi(v)(z)$ is principal nilpotent for all $z\in S$ and all non-zero vectors $v\in T_zS$.* 3. *$\sigma(\Phi) = -\Phi$.* *We shall call a field $\Phi$ defined as above a *Fock field* and will often refer to the second condition above as the *nilpotency condition*. An isomorphism of Fock bundles $(P,\Phi,\sigma) \to (P',\Phi',\sigma')$ is a $G$-bundle isomorphism $P \to P'$ taking $\Phi$ to $\Phi'$ and $\sigma$ to $\sigma'$.* Note that all conditions in Definition [Definition 18](#defn_Fock-bundle){reference-type="ref" reference="defn_Fock-bundle"} are given pointwise. We stress that no term in the definition of a Fock bundle is considered to be holomorphic. 
There are two main cases one should highlight: - An $\mathop{\mathrm{SL}}_n(\mathbb{C})$-Fock bundle is a vector bundle $E$ of rank $n$ with fixed volume form, equipped with a symmetric pairing $g$ (a complex bilinear non-degenerate symmetric form) and a Fock field $\Phi$ satisfying the three conditions above. - For $G=G_{ad}$ the adjoint group, a $G$-Fock bundle is specified up to isomorphism by the pair $(P,\Phi)$. The data contained in $\sigma$ is actually very little: We will see later in Proposition [Proposition 39](#Prop:sigma-dependence){reference-type="ref" reference="Prop:sigma-dependence"} that a $\sigma_0$-structure $\sigma$ which negates $\Phi$ always exists locally, and always exists globally if $G=G_{ad}$ is the adjoint group. For any $G$, any two choices of $\sigma$ are conjugate by an automorphism fixing $\Phi$, thus give isomorphic Fock bundles. That is why we will sometimes write $(P,\Phi)$ for a $G$-Fock bundle. By [@Tho22 Proposition 2.21], we know that there is a unique complex line $L\subset T^\mathbb{C}_zS$ such that for all $v\in L$ the element $\Phi(v)(z)$ is not principal nilpotent. Note that the uniqueness needs the Lie algebra $\mathfrak{g}$ to be simple. The nilpotency condition in Definition [Definition 18](#defn_Fock-bundle){reference-type="ref" reference="defn_Fock-bundle"} then implies that this direction $L$ avoids the real locus and hence encodes a complex structure on $S$ (see Section [2.1](#Sec:HCS){reference-type="ref" reference="Sec:HCS"}). **Proposition 19**. *A $G$-Fock bundle induces a complex structure on $S$.* In the sequel, unless stated explicitly otherwise, whenever we work with complex local coordinates on $S$, we use the complex structure induced by the Fock bundle. For such a complex coordinate $z$ on $S$, we can locally write $\Phi=\Phi_1 dz+\Phi_2 d\bar{z}$. The condition $[\Phi\wedge\Phi]=0$ for a Fock field can then be written as $[\Phi_1,\Phi_2]=0$.
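Both pointwise conditions are easy to test in coordinates. The following sketch (numpy; the values of $\mu_2,\mu_3$ and the choice $n=3$ are illustrative choices of ours, matching the local normal form derived just below) checks that for $\Phi_1$ the standard principal nilpotent and $\Phi_2$ a polynomial in $\Phi_1$ with $|\mu_2|\neq 1$, the commutation condition $[\Phi_1,\Phi_2]=0$ holds and $v\Phi_1+\bar{v}\Phi_2$ is principal nilpotent (nilpotent of rank $n-1$) for randomly sampled non-zero $v$:

```python
import numpy as np

n = 3
# standard principal nilpotent of sl_3: sum of the E_{i+1,i}
Phi1 = np.diag(np.ones(n - 1), k=-1)
mu2, mu3 = 0.3 + 0.2j, 1.5 - 0.7j          # illustrative, with |mu2| != 1
Phi2 = mu2 * Phi1 + mu3 * (Phi1 @ Phi1)    # a polynomial in Phi1

# the commutation condition [Phi1, Phi2] = 0
assert np.allclose(Phi1 @ Phi2 - Phi2 @ Phi1, 0)

# nilpotency condition: v*Phi1 + conj(v)*Phi2 has rank n-1 for v != 0
rng = np.random.default_rng(1)
for v in rng.standard_normal(20) + 1j * rng.standard_normal(20):
    M = v * Phi1 + np.conj(v) * Phi2
    assert np.linalg.matrix_rank(M) == n - 1            # principal nilpotent
    assert np.allclose(np.linalg.matrix_power(M, n), 0)  # nilpotent
```

The rank assertion works because $v\Phi_1+\bar{v}\Phi_2=(v+\mu_2\bar{v})\Phi_1+\mu_3\bar{v}\Phi_1^2$, and $v+\mu_2\bar{v}\neq 0$ for all $v\neq 0$ precisely when $|\mu_2|\neq 1$.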
By construction of the complex structure on $S$, we know that $\Phi_2$ is not principal nilpotent. Hence $\Phi_1$ has to be principal nilpotent. We now describe more explicitly what $\mathrm{SL}_n(\mathbb{C})$-Fock bundles look like locally. We start with $n=2$. Let $E$ be a complex vector bundle of degree zero and rank $2$ over $S$ with a fixed volume form $\nu$. **Lemma 20**. *Fix an arbitrary complex coordinate on $S$. An $\mathrm{SL}_2(\mathbb{C})$-Fock field on $E$ locally is of the form $$\Phi=\begin{pmatrix} 0&0\\1&0\end{pmatrix}dz+\begin{pmatrix} 0&0\\\mu_2&0\end{pmatrix}d\bar{z},$$ where $\mu_2$ is a local complex function on $S$ with $\mu_2\bar{\mu}_2\neq 1$.* Note that globally, $\mu_2$ is the Beltrami differential of the complex structure on $S$ induced by the Fock bundle. *Proof.* There is a local trivialization of $E$ in which $\Phi_1$ is given as in the statement of the lemma since all principal nilpotent elements are conjugate. The condition $[\Phi_1,\Phi_2]=0$ then implies the form of $\Phi_2$. Finally, the nilpotency condition implies that for any non-zero real tangent vector $v\partial+\bar{v}\bar\partial$ (where $v$ is a complex function) the combination $v\Phi_1+\bar{v}\Phi_2$ is principal nilpotent. This means that $v+\mu_2 \bar{v}\neq 0$ for all $v\neq 0$. This is equivalent to the condition $\lvert\mu_2\rvert\neq 1$. ◻ We can generalize this local description to higher rank. Let $E$ be a complex vector bundle of degree zero and rank $n$ over $S$ with a fixed volume form $\nu$. **Lemma 21**. *Fix an arbitrary complex coordinate on $S$. 
An $\mathrm{SL}_n(\mathbb{C})$-Fock field on $E$ locally is of the form $\Phi=\Phi_1dz+\Phi_2d\bar{z}$ with $$\Phi_1=\sum_{i=1}^{n-1}E_{i+1,i}\;\text{ and }\; \Phi_2=\mu_2 \Phi_1+\mu_3\Phi_1^2+...+\mu_n \Phi_1^{n-1},$$ where the $\mu_k$ are local complex functions on $S$ with $\mu_2\bar{\mu}_2\neq 1$.* The proof is similar to the one given above; there is only one conjugacy class of principal nilpotent elements, which explains the form of $\Phi_1$. The centralizer of $\Phi_1$ is the set of polynomials in $\Phi_1$, which provides the form of $\Phi_2$. In the complex structure induced by the Fock field, we always have $\mu_2=0$. The case when all higher Beltrami differentials vanish is the so-called *Fuchsian locus*, which will be analysed in Section [3.2](#Sec:fuchsian-locus){reference-type="ref" reference="Sec:fuchsian-locus"} below. Comparing Definition [Definition 18](#defn_Fock-bundle){reference-type="ref" reference="defn_Fock-bundle"} of a $G$-Fock bundle to Definition [Definition 2](#def-g-C-str){reference-type="ref" reference="def-g-C-str"} of a $\mathfrak{g}$-complex structure, we immediately see the following: **Proposition 22**. *Any $G$-Fock bundle induces a $\mathfrak{g}$-complex structure on $S$. For the adjoint group $G_{ad}$, the isomorphism class of a $G_{ad}$-Fock bundle is equivalent to a $\mathfrak{g}$-complex structure.* The second assertion follows directly from Corollary [Corollary 32](#Prop:P-topology){reference-type="ref" reference="Prop:P-topology"} below. For $G=\mathrm{SL}_n(\mathbb{C})$, we get a direct link between Fock bundles and the ideals describing higher complex structures of order $n$: **Proposition 23**. *Let $(E, \Phi)$ be an $\mathrm{SL}_n(\mathbb{C})$-Fock bundle over a surface $S$.
The map $$p:\left \{ \begin{array}{ccc} \mathrm{Sym}(T^{\mathbb{C}}S) & \longrightarrow &\mathrm{End}(E) \\ v_1\cdots v_k & \mapsto &\Phi(v_1)\cdots \Phi(v_k) \end{array} \right.$$ is well-defined and the kernel of $p$ defines a higher complex structure.* *Proof.* To prove that $p$ is well-defined, one has to show that the expression $\Phi(v_1)\cdots \Phi(v_k)$ remains unchanged under permutation of $(v_1,...,v_k)$. This follows from $[\Phi\wedge \Phi]=0$. The matrix viewpoint of higher complex structures analyzed in [@Tho22 Section 4.2] implies that the kernel of $p$ is a higher complex structure. ◻ **Proposition 24**. *A $G$-Fock bundle $(P,\Phi,\sigma)$ has no infinitesimal automorphisms. Thus, all Fock bundles are stable in this sense.* *Proof.* Consider an infinitesimal gauge transformation $\eta\in\Omega^0(S,\mathfrak{g}_P)$. In order to preserve $\Phi$, we need $\eta\in Z(\Phi)$. Since $Z(\Phi)$ is negated by $\sigma$ (see Corollary [Corollary 14](#cor:sigma_negate_centralizer){reference-type="ref" reference="cor:sigma_negate_centralizer"}), the only way for $\eta$ to preserve $\sigma$ is to be zero. ◻ For $\mathop{\mathrm{SL}}_n(\mathbb{C})$, the Fock field $\Phi$ induces a natural filtration $\mathcal{F}$ on the bundle $E$: **Proposition 25**. *An $\mathop{\mathrm{SL}}_n(\mathbb{C})$-Fock bundle has a natural increasing filtration given by $\mathcal{F}_k=\ker \Phi(v)^k$, where $v$ is a local non-vanishing vector field. The filtration is independent of the choice of $v$.* The proposition follows directly from the fact that $\Phi(v)$ is a principal nilpotent element. Independence follows from $[\Phi\wedge\Phi]=0$ and the fact that $Z(\Phi)$ is abelian. ## Fuchsian Locus {#Sec:fuchsian-locus} The Fuchsian locus in a Hitchin component, for an adjoint group, is the set of representations which factor through $\mathop{\mathrm{PSL}}_2(\mathbb{R})$.
Similarly, we will introduce the Fuchsian locus in the space of Fock bundles for an adjoint group as the subset induced from $\mathop{\mathrm{PSL}}_2(\mathbb{C})$-Fock bundles. When $G$ has nontrivial center, we call a Fock bundle Fuchsian if the associated Fock bundle for $G_{ad}$ is Fuchsian. It will turn out that every Fock bundle with $\Phi_2 = 0$ is Fuchsian, so all Fock bundles are deformable to the Fuchsian locus. Every Fock bundle in the Fuchsian locus can be equipped with a holomorphic structure, making it into a Higgs bundle. This is the uniformizing Higgs bundle. **Proposition 26**. *Any $\mathop{\mathrm{SL}}_2(\mathbb{C})$-Fock bundle is of the form $(E,\Phi,g)$ where $E = K^{1/2}\oplus K^{-1/2}$ (for the complex structure on $S$ induced from the Fock bundle), $g$ is the evident symmetric pairing in which these line subbundles are isotropic, and $$\Phi=\begin{pmatrix} 0&0\\1&0\end{pmatrix}$$ where $1$ denotes the canonical 1-form valued in $\mathop{\mathrm{Hom}}(K^{1/2},K^{-1/2})\cong K^{-1}$.* *Proof.* Let $(E,\Phi,g)$ be an $\mathop{\mathrm{SL}}_2(\mathbb{C})$-Fock bundle. The symmetric pairing $g$ gives an isotropic decomposition into dual line bundles $E = L\oplus L^{-1}$. We get a decomposition $$\mathfrak{sl}(E) = L^{-2} \oplus \underline{\mathbb{C}} \oplus L^{2}$$ in which $\sigma:X\mapsto -X^{*_g}$ acts by $(-1,1,-1)$. The Fock field $\Phi$ must be valued in the $-1$ eigenspace $L^2 \oplus L^{-2}$. Since $\Phi$ is nilpotent, it is valued in only one of these line bundles at any given point, and since it is nowhere vanishing, it is valued in only one of the line bundles globally. Without loss of generality, suppose it is valued in $L^{-2}$. Since it is nowhere vanishing, $\Phi$ is an isomorphism from the holomorphic tangent bundle of $S$ to $L^{-2}$, so $L$ must be a square root of the canonical bundle.
◻ As a corollary, any $\mathop{\mathrm{SL}}_2(\mathbb{C})$-Fock bundle has a natural upgrade to a Higgs bundle for the induced complex structure, because $K^{1/2}\oplus K^{-1/2}$ is naturally a holomorphic vector bundle. We see that an $\mathop{\mathrm{SL}}_2(\mathbb{C})$-Fock bundle is equivalent to a complex structure together with a spin structure. Similarly, a $\mathop{\mathrm{PSL}}_2(\mathbb{C})$-Fock bundle is equivalent to a complex structure. **Proposition 27**. *Any $\mathop{\mathrm{PSL}}_2(\mathbb{C})$-Fock bundle is of the form $(P,\Phi,\sigma)$ where $P = K_* \times_{\mathbb{C}^*} \mathop{\mathrm{PSL}}_2(\mathbb{C})$ with adjoint bundle $\mathfrak{sl}_2(\mathbb{C})_P = K^{-1}\oplus \underline{\mathbb{C}} \oplus K$, $\Phi$ is the canonical $K^{-1}$-valued $1$-form, and $\sigma$ acts by $(-1,1,-1)$ on the adjoint bundle. Here, $K_*$ denotes the $\mathbb{C}^*$-bundle of nonzero covectors.* In particular, $P$ is topologically trivial because $K$ has even degree. Any $\mathop{\mathrm{PSL}}_2(\mathbb{C})$-Fock bundle can be upgraded to an $\mathop{\mathrm{SL}}_2(\mathbb{C})$-Fock bundle by choosing a square root of $K$. Now restrict attention to an adjoint group $G_{ad}$. Then, the principal 3-dimensional subgroup is always $\mathop{\mathrm{PSL}}_2(\mathbb{C})$. This is because the $\mathfrak{sl}_2$-representations appearing in $\mathfrak{g}$ are always odd dimensional. Fix a principal embedding $\mathop{\mathrm{PSL}}_2(\mathbb{C})\to G_{ad}$ such that the diagram $$\begin{tikzcd} \mathop{\mathrm{PSL}}_2(\mathbb{C}) \arrow[r] \arrow[d,"\sigma_0"] & G_{ad} \arrow[d, "\sigma_0"] \\ \mathop{\mathrm{PSL}}_2(\mathbb{C}) \arrow[r] & G_{ad} \end{tikzcd}$$ commutes. **Definition 28**. 
*If $(P,\Phi,\sigma)$ is a $\mathop{\mathrm{PSL}}_2(\mathbb{C})$-Fock bundle, then the *induced $G_{ad}$-Fock bundle* is the triple $(P',\Phi',\sigma')$ where* - *$P'$ is the induced bundle $P\times_{\mathop{\mathrm{PSL}}_2(\mathbb{C})} G_{ad}$, or alternatively $P^\sigma \times_{\mathop{\mathrm{PSL}}_2(\mathbb{C})^{\sigma_0}} G_{ad}$.* - *$\Phi'$ is the composition of $\Phi$ with the inclusion $\mathfrak{sl}_2(\mathbb{C})_{P}\to \mathfrak{g}_{P'}$, and* - *$\sigma'$ is $\sigma_0$ acting on the right factor $G_{ad}$ in the second description of $P'$.* The same definition will work for general $G$ with $\mathop{\mathrm{PSL}}_2(\mathbb{C})$ possibly replaced by $\mathop{\mathrm{SL}}_2(\mathbb{C})$. **Definition 29**. *A $G_{ad}$-Fock bundle is in the *Fuchsian locus* if it is induced from a $\mathop{\mathrm{PSL}}_2(\mathbb{C})$-Fock bundle. For general $G$, a $G$-Fock bundle is in the Fuchsian locus if the associated $G_{ad}$-Fock bundle is in the Fuchsian locus.* **Example 30**. *The $\mathop{\mathrm{SL}}_n(\mathbb{C})$-Fock bundle induced from an $\mathop{\mathrm{SL}}_2(\mathbb{C})$-Fock bundle is the pair $(E,\Phi)$ given by $$E=K^{(n-1)/2}\oplus K^{(n-3)/2}\oplus ...\oplus K^{(1-n)/2} \;\; \text{ and } \;\; \Phi=\sum_{i=1}^{n-1}E_{i+1,i},$$ where $K$ denotes the canonical bundle on $S$, $K^{1/2}$ is a choice of a square-root and $E_{i+1,i}$ is the canonical identity 1-form valued in $\mathrm{Hom}(K^{(n+1-2i)/2},K^{(n-1-2i)/2}) = K^{-1}$. Again, if we view $E$ as a smooth vector bundle, this is a Fock bundle, but if we view it as a holomorphic bundle, it becomes the uniformizing $\mathrm{SL}_n(\mathbb{C})$-Higgs bundle.* **Proposition 31**. *A Fock bundle $(P,\Phi,\sigma)$ is in the Fuchsian locus if and only if $\Phi_2=0$.* *Proof.* A Fock field in the Fuchsian locus clearly has no $(0,1)$-part.
For the converse, by Lemma [Lemma 15](#lemma:unique-sigma-sl2){reference-type="ref" reference="lemma:unique-sigma-sl2"}, we know that there is a unique $\sigma$-invariant $\mathfrak{sl}_2$-subbundle $\mathfrak{p}$ of $\mathfrak{g}_P$ containing the image of $\Phi$, because the image of $\Phi$ is a line bundle of principal nilpotents negated by $\sigma$. The normalizer in $G_{ad}$ of a principal $\mathfrak{sl}_2$-subalgebra is just the principal $\mathop{\mathrm{PSL}}_2(\mathbb{C})$ it generates. To see this, first recall that the centralizer of a principal $\mathfrak{sl}_2$ is trivial, then note that anything in the normalizer can be multiplied by an element of $\mathop{\mathrm{PSL}}_2(\mathbb{C})$ to be in the centralizer. The subbundle $\mathfrak{p}$ thus gives a reduction of the structure group of $P$ from $G_{ad}$ to $\mathop{\mathrm{PSL}}_2(\mathbb{C})$. Call this reduction of structure group $Q\subset P$. The induced map of adjoint bundles is simply the inclusion $\mathfrak{p}\to \mathfrak{g}_P$. The involution $\sigma$ on $\mathfrak{g}_P$ restricts to an involution on $\mathfrak{p}$, and in fact is the unique extension of this involution conjugate to $\sigma_0$. We see that $(P,\Phi,\sigma)$ is the induction from $\mathop{\mathrm{PSL}}_2(\mathbb{C})$ to $G_{ad}$ of the Fock bundle $(Q,\Phi,\sigma|_{Q})$. ◻ **Corollary 32**. *Let $(P,\Phi,\sigma)$ be a $G$-Fock bundle. Then $P$ is topologically trivial.* *Proof.* We can deform $\Phi$ continuously to get to the Fuchsian locus where $\Phi_2 = 0$. This does not alter the topology of $P$. We then consider the induced $G_{ad}$-bundle $P_{ad}$. If $P_{ad}$ is trivial, then so is $P$. We know that $P_{ad}$ is induced from a $\mathop{\mathrm{PSL}}_2(\mathbb{C})$-Fock bundle. We have seen in Proposition [Proposition 27](#Prop:psl2-fock-bundle){reference-type="ref" reference="Prop:psl2-fock-bundle"} that those are topologically trivial.
◻ As in the $\mathop{\mathrm{SL}}_n(\mathbb{C})$ case, $G$-Fock bundles in the Fuchsian locus are exactly those obtained by uniformizing $G$-Higgs bundles by forgetting the holomorphic structure. For any $G$, there is a finite collection of $G$-Fock bundles which induce a given $G_{ad}$-Fock bundle. **Proposition 33**. *Let $A$ be the subgroup of the center of $G$ which is fixed by $\sigma_0$. There is a simply transitive action of $\mathrm{H}^1(S,A)$ on the set of $G$-Fock bundles which induce a given $G_{ad}$-Fock bundle.* The proof is once more abstract bundle theory, which we leave to the reader. Instead, we describe the case of $\mathop{\mathrm{SL}}_n(\mathbb{C})$. The center of $\mathop{\mathrm{SL}}_n(\mathbb{C})$ consists of the $n$-th roots of unity, and $\sigma_0$ acts on it by inversion, so for $n$ even $A$ is $\{1,-1\}$ whereas for $n$ odd $A$ is trivial. This means that for $n$ even there are $2^{2g}$ $\mathop{\mathrm{SL}}_n(\mathbb{C})$-Fock bundles for every $\mathrm{PSL}_n(\mathbb{C})$-Fock bundle, whereas for $n$ odd $\mathrm{SL}_n(\mathbb{C})$- and $\mathrm{PSL}_n(\mathbb{C})$-Fock bundles are the same. ## Variations of Fock bundles {#Sec:var-Fock-fields} Simpson [@Simpson] studied the infinitesimal deformation space of stable Higgs bundles via a hypercohomology group for suitable chain complexes. In a similar way, we study the variations of $G$-Fock bundles $(P, \Phi,\sigma)$. We will first forget about $\sigma$ and study the variations of Fock fields, then see what happens when we introduce $\sigma$. ### Variation complex without $\sigma_0$-structure Consider the complex $\Omega^{\bullet}(S,\mathfrak{g}_P)$ of $\mathfrak{g}_P$-valued differential forms, with differential given by the adjoint action of the Fock field $\Phi$: $$\label{phi-cohom} \Omega^0(S,\mathfrak{g}_P)\xrightarrow{\mathop{\mathrm{ad}}_\Phi\;} \Omega^1(S,\mathfrak{g}_P)\xrightarrow{\mathop{\mathrm{ad}}_\Phi\;}\Omega^2(S,\mathfrak{g}_P).$$ **Proposition 34**.
*The complex [\[phi-cohom\]](#phi-cohom){reference-type="eqref" reference="phi-cohom"} is a chain complex.* *Proof.* Indeed, using the Jacobi identity and $[\Phi\wedge \Phi]=0$, we get for all $A\in \Omega^0(S,\mathfrak{g}_P)$: $$[\Phi\wedge [\Phi,A]]=-[\Phi\wedge [\Phi,A]]+[A,[\Phi\wedge\Phi]],$$ which implies $\mathop{\mathrm{ad}}_\Phi^2(A)=0$. ◻ We call the cohomology groups of this chain complex the *$\Phi$-cohomology groups* and denote them by $\mathrm{H}^{\bullet}(\Phi)$. The zeroth cohomology group $\mathrm{H}^0(\Phi)$ describes the centralizer of $\Phi$. To be more precise, the centralizers of $\Phi(v)(z)$ for all non-zero real vectors $v\in T_zS$ coincide. This follows from $[\Phi\wedge\Phi]=0$, the nilpotency condition and the general fact that the centralizer of a principal nilpotent element is abelian. This is why we often write $Z(\Phi)\subset \Omega^0(S,\mathfrak{g}_P)$ for any of these centralizers. We have the following: **Proposition 35**. *The dimensions of $\mathrm{H}^0(\Phi), \mathrm{H}^1(\Phi)$ and $\mathrm{H}^2(\Phi)$ are respectively $\mathrm{rk}(\mathfrak{g})$, $2\mathrm{rk}(\mathfrak{g})$ and $\mathrm{rk}(\mathfrak{g})$.* *Proof.* We have $\mathrm{H}^0(\Phi)=\ker\,\mathop{\mathrm{ad}}_\Phi=Z(\Phi)$, which by the nilpotency condition is of dimension $\mathrm{rk}(\mathfrak{g})$. The natural pairing between $a\in\ker\,\mathop{\mathrm{ad}}_\Phi\subset\Omega^0(S,\mathfrak{g}_P)$ and $b\in\Omega^2(S,\mathfrak{g}_P)$ given by $\int_S \mathop{\mathrm{tr}}(ab)$, where $\mathop{\mathrm{tr}}$ denotes the Killing form on $\mathfrak{g}$, descends to cohomology. This follows from the cyclicity property $\mathop{\mathrm{tr}}([a,b]c)=\mathop{\mathrm{tr}}([b,c]a)$ of the Killing form. Hence $\mathrm{H}^2(\Phi)\cong \mathrm{H}^0(\Phi)^*$ which gives $\dim \mathrm{H}^2(\Phi) =\dim \mathrm{H}^0(\Phi)= \mathrm{rk}(\mathfrak{g})$.
Finally, the dimension of $\mathrm{H}^1(\Phi)$ can be computed via the Euler characteristic of the complex, which is zero. ◻ The first $\Phi$-cohomology group describes variations of $G$-Fock fields which do not necessarily preserve the nilpotency condition: **Proposition 36**. *The first $\Phi$-cohomology group $\mathrm{H}^1(\Phi)$ describes variations of a $G$-Fock field $\Phi$ leaving the condition $[\Phi\wedge\Phi]=0$ invariant, modulo gauge transformations.* *Proof.* A variation $\delta \Phi$ preserves the condition $[\Phi\wedge\Phi]=0$ if and only if $[\Phi\wedge\delta \Phi]=0$, that is, if and only if $\delta \Phi$ is a cocycle. An infinitesimal gauge transformation $\eta\in \Omega^0(S,\mathfrak{g}_P)$ induces $\delta\Phi=[\Phi,\eta]$, which is a coboundary. ◻ We next analyze variations of Fock bundles including the nilpotency condition. The space $\Omega^1(S,\mathfrak{g}_P)$ has a natural symplectic structure $\omega$ defined by $$\label{Eq:sympl-str-1-forms} \omega(\alpha,\beta)=\int_S \mathop{\mathrm{tr}}\, \alpha\wedge\beta,$$ where $\mathop{\mathrm{tr}}$ denotes the Killing form of $\mathfrak{g}$. The symplectic structure $\omega$ descends to a symplectic structure on $\mathrm{H}^1(\Phi)$. Indeed, for $\alpha,\beta\in \Omega^1(S,\mathfrak{g}_P)$ representing classes in $\mathrm{H}^1(\Phi)$, adding a coboundary $[\gamma,\Phi]$ to $\alpha$ adds $$\mathrm{tr}([\gamma,\Phi] \wedge\beta) = \mathrm{tr}(\gamma [\Phi\wedge\beta]) = 0.$$ The same holds when adding a coboundary to $\beta$. Hence the symplectic form only depends on the cohomology classes. Denote by $\mathrm{Var}(\Phi)$ the variations of $\Phi$ which both preserve $[\Phi\wedge\Phi]=0$ and the nilpotency condition. **Proposition 37**.
*The variations of Fock fields $\mathrm{Var}(\Phi)$ are described by the direct sum $$\mathrm{Var}(\Phi)\cong \mathrm{Im}(\mathop{\mathrm{ad}}_\Phi)\oplus Z(\Phi)\bar{K},$$ where $Z(\Phi)\subset \Omega^0(S,\mathfrak{g}_P)$ denotes the kernel of $\mathop{\mathrm{ad}}_\Phi$ and $\bar{K}$ the conjugate canonical bundle.* *Proof.* Using the complex structure on $S$ induced from the Fock bundle, we can locally write $\Phi=\Phi_1 dz+\Phi_2d\bar{z}$, where $\Phi_1$ is principal nilpotent. Since all principal nilpotent elements are conjugate, any variation preserving the nilpotency condition is equivalent modulo $\mathrm{Im} \mathop{\mathrm{ad}}_{\Phi}$ to a variation which changes $\Phi_2$ without changing $\Phi_1$. Such a variation is of the form $Z(\Phi_1)d\bar{z}$ since $[\Phi\wedge\delta\Phi]=0$. ◻ **Corollary 38**. *For a $G$-Fock bundle $(P, \Phi)$ over $S$, $\mathrm{Var}(\Phi)$ forms an isotropic subspace in $(\Omega^1(S,\mathfrak{g}_P), \omega)$. Modulo $\mathrm{Im}(\mathop{\mathrm{ad}}_{\Phi})$, it descends to an isotropic subspace of $\mathrm{H}^1(\Phi)$.* *Proof.* Take two variations of the Fock field, $\delta\Phi=\mathop{\mathrm{ad}}_\Phi(\eta)+Cd\bar{z}$ and $\delta\Phi'=\mathop{\mathrm{ad}}_{\Phi}(\eta')+C'd\bar{z}$, with $C, C'\in Z(\Phi_1)$ and $\eta,\eta'\in\Omega^0(S,\mathfrak{g}_P)$. One now checks that the symplectic form $\omega(\delta\Phi,\delta\Phi')$ defined by [\[Eq:sympl-str-1-forms\]](#Eq:sympl-str-1-forms){reference-type="eqref" reference="Eq:sympl-str-1-forms"} vanishes. Indeed, since $\mathop{\mathrm{ad}}_\Phi$ squares to zero, we get $\mathrm{tr} \mathop{\mathrm{ad}}_\Phi(\eta)\wedge \mathop{\mathrm{ad}}_{\Phi}(\eta')=0$. Moreover, $Cd\bar{z}\wedge C'd\bar{z}=0$ and the wedge product of the crossed terms also vanishes since $C, C'\in Z(\Phi_1)$.
◻ Note that pointwise, $\mathrm{Var}(\Phi)$ is a Lagrangian in $T^*S\otimes \mathfrak{g}$ (where we choose a volume form at a point to identify $\Lambda^2(T^*S)$ with $\mathbb{C}$). ### Introduction of $\sigma_0$-structures {#sec_sigma_split_coh} We first analyse the possible $\sigma_0$-structures $\sigma$ one can put on a pair $(P,\Phi)$ where $\Phi$ satisfies the first two conditions of Definition [Definition 18](#defn_Fock-bundle){reference-type="ref" reference="defn_Fock-bundle"}. **Proposition 39**. *For any such pair $(P,\Phi)$, there locally exist $\sigma_0$-structures making it into a Fock bundle. If $G=G_{ad}$, they exist globally.* *Proof.* Recall that $\Phi_1$ is valued in principal nilpotent elements. By Lemma [Lemma 13](#lem:space_of_sigma){reference-type="ref" reference="lem:space_of_sigma"} there is a contractible space of choices of involution of each fiber of the adjoint bundle which negate the image of $\Phi_1$. We can thus choose $\bar{\sigma}:{\mathfrak{g}_P}\to \mathfrak{g}_P$ which negates $\Phi_1$ globally. By Corollary [Corollary 14](#cor:sigma_negate_centralizer){reference-type="ref" reference="cor:sigma_negate_centralizer"}, $\bar{\sigma}$ will also negate the centralizer of $\Phi_1$, so must negate $\Phi_2$ as well. When $G$ is adjoint, $\bar{\sigma}$ uniquely determines a $\sigma_0$-structure $\sigma:P\to P$. When $G$ is not adjoint, there is slightly more data to a $\sigma_0$-structure. The involution $\bar{\sigma}$ gives a reduction to $\pi^{-1}(G_{ad}^{\sigma_0})$ where $\pi:G\to G_{ad}$ is the projection. A $\sigma_0$-structure on the other hand is a reduction to $G^{\sigma_0}$, which is the identity component of this group. Reductions of a principal bundle to the identity component of its structure group are sections of a bundle with discrete fibers, so they always exist locally. ◻ **Proposition 40**. *The infinitesimal variations of $\sigma_0$-structures negating $\Phi$ are described by $\mathrm{H}^0(\Phi)$.* *Proof.* Consider a $\sigma_0$-structure $\sigma$ satisfying $\sigma(\Phi)=-\Phi$.
Since all $\sigma_0$-structures are conjugate, any infinitesimal variation $\delta\sigma$ is described by $\xi\in\mathfrak{g}^{-\sigma}$ and given by $\delta\sigma(x)=[\xi,x]$ for $x\in\mathfrak{g}$. The variations negating $\Phi$ have to satisfy $\delta\sigma(\Phi)=0$, hence $\xi\in\ker\mathop{\mathrm{ad}}_\Phi$, i.e. $\xi\in \mathrm{H}^0(\Phi)$. ◻ Actually a stronger statement is true: by Lemma [Lemma 13](#lem:space_of_sigma){reference-type="ref" reference="lem:space_of_sigma"} the exponential of $\mathrm{H}^0(\Phi)=Z(\Phi)$ acts simply transitively on the space of choices of $\sigma_0$-structure. In particular, all $\sigma_0$ structures are conjugate. We can now use the $\sigma_0$-structure $\sigma$ in the $\Phi$-cohomology. Define $\Omega^{k,\sigma}(S,\mathfrak{g}_P)$ to be the space of $\sigma$-invariant $\mathfrak{g}_P$-valued $k$-forms, and similarly $\Omega^{k,-\sigma}(S,\mathfrak{g}_P)$ to be the space of $\sigma$-anti-invariant forms. Since $\sigma(\Phi)=-\Phi$, the $\Phi$-cohomology complex splits as a direct sum of two complexes $$\Omega^{0,\pm \sigma}(S,\mathfrak{g}_P) \xrightarrow{\mathop{\mathrm{ad}}_\Phi\;} \Omega^{1,\mp \sigma}(S,\mathfrak{g}_P) \xrightarrow{\mathop{\mathrm{ad}}_\Phi\;} \Omega^{2,\pm \sigma}(S,\mathfrak{g}_P),$$ and we may define $\mathrm{H}^{k,\sigma}(\Phi)$ and $\mathrm{H}^{k,-\sigma}(\Phi)$ as the subspaces represented by $\sigma$-invariant and respectively $\sigma$-anti-invariant elements of these complexes. Equivalently, we can define them as the $\sigma$-invariant and $\sigma$-anti-invariant parts of $\mathrm{H}^k(\Phi)$: $$\mathrm{H}^k(\Phi)\cong \mathrm{H}^{k,\sigma}(\Phi)\oplus \mathrm{H}^{k,-\sigma}(\Phi).$$ Moreover, one sees that the Lie bracket respects the $\sigma$-grading, because $\sigma$ commutes with the Lie bracket. **Proposition 41**. *We have $\mathrm{H}^{k,\sigma}(\Phi)=0$, for $k=0,1,2$.* *Proof.* One needs to show that $\sigma$ acts by $-1$ on $\Phi$-cohomology. This is a pointwise statement. 
For $k=0$, the 0-th cohomology is simply the center $Z(\Phi)$, on which $\sigma$ acts by $-1$. The natural pairing between 0-forms and 2-forms is $\sigma$-invariant. Hence $\sigma$ also acts by $-1$ on $\mathrm{H}^2(\Phi)$. For $k=1$, the centralizer $Z(\Phi)\bar{K}$ descends to a Lagrangian subspace in $\mathrm{H}^1(\Phi)$. This subspace is negated by $\sigma$, and so is its complement under the symplectic pairing on 1-forms. Together these describe the entire space $\mathrm{H}^1(\Phi)$. ◻ In view of Proposition [Proposition 41](#Prop-no-sigma-cohom){reference-type="ref" reference="Prop-no-sigma-cohom"}, we have $\mathrm{H}^k(\Phi)=\mathrm{H}^{k,-\sigma}(\Phi)$, hence $[\mathrm{H}^k(\Phi),\mathrm{H}^\ell(\Phi)]\subset \mathrm{H}^{k+\ell,\sigma}(\Phi)=0$. We thus have the following: **Corollary 42**. *For representatives $\alpha,\beta$ of cohomology classes $[\alpha] \in \mathrm{H}^k(\Phi)$ and $[\beta] \in \mathrm{H}^\ell(\Phi)$, the bracket $[\alpha, \beta]$ is a coboundary.* # Connections associated to Fock bundles {#Sec:can-connection} In nonabelian Hodge theory, one associates to a polystable Higgs bundle over a Riemann surface a gauge equivalence class of Hermitian-Yang-Mills metrics, and subsequently a flat bundle. In this section, we will associate to Fock bundles a certain family of connections. Our main conjecture is that we can moreover choose these connections to be flat. To begin with, we define a compatible connection on a Fock bundle: **Definition 43**. *On a $G$-Fock bundle $(P,\Phi, \sigma)$, a connection $d_A$ is called *$\Phi$-compatible* if $d_A\Phi=0$.* Note that $d_A\Phi=0$ is an affine equation in the connection matrix $A$, which readily yields the existence of $\Phi$-compatible connections. Indeed, local solutions can be found by choosing a local trivialization of $P$ in which the centralizer of $\Phi$ is constant.
In this trivialization, $d\Phi$ will be valued in the centralizer of $\Phi$, thus it will be in the image of $\mathop{\mathrm{ad}}_{\Phi}$ (see Proposition [Proposition 11](#Prop:center-in-image){reference-type="ref" reference="Prop:center-in-image"}), allowing us to find a matrix-valued $1$-form $A_0$ with $d\Phi + [A_0\wedge\Phi]=0$. These local solutions can now be patched together using a partition of unity. We thus have the following: **Proposition 44**. *There exist $\Phi$-compatible connections on any given $G$-Fock bundle.* ## Three-term connections {#Sec:3-term-connections} In Section [2.2.3](#sec:examples of 3-term conn){reference-type="ref" reference="sec:examples of 3-term conn"} we reviewed two sorts of families of 3-term connections from the works of Hitchin [@Hit87] and Fock [@Fock], depending on a parameter $\lambda \in \mathbb{C}^{*}$. The form of these families comes from the general theory of hyperkähler manifolds and their twistor space [@HKLR87]. Within this framework, we consider a family of connections on a principal $G$-bundle $P$ over a surface $S$ of the form $$\mathcal{A}(\lambda)= \lambda^{-1}\Phi+d_A+\lambda \Psi,$$ where $\lambda\in \mathbb{C}^*$ is a parameter, $d_A$ is a fixed background connection on $P$ and $\Phi,\Psi\in\Omega^1(S,\mathfrak{g}_P)$. The curvature of a connection in this family is a Laurent polynomial in $\lambda$ given by: $$\begin{aligned} \label{curvature_3term} F(\mathcal{A}(\lambda))&=[\mathcal{A}(\lambda)\wedge\mathcal{A}(\lambda)] \nonumber \\ & = \lambda^{-2}[\Phi\wedge\Phi]+\lambda^{-1}d_A(\Phi)+ F(A)+[\Phi\wedge\Psi]+\lambda d_A(\Psi)+\lambda^2[\Psi\wedge\Psi],\end{aligned}$$ where $F(A)$ denotes the curvature of the fixed background connection $d_A$. In order to have a family $\mathcal{A}(\lambda)$ of flat connections, i.e.
flat for all values of $\lambda$, all five coefficients of the Laurent polynomial in [\[curvature_3term\]](#curvature_3term){reference-type="eqref" reference="curvature_3term"} have to vanish. Note that if $\Phi$ and $\Psi$ are Fock fields, then the terms $\lambda^{-2}[\Phi\wedge\Phi]$ and $\lambda^2[\Psi\wedge\Psi]$ in ([\[curvature_3term\]](#curvature_3term){reference-type="ref" reference="curvature_3term"}) do vanish. If, in addition, the background connection $d_A$ is a connection compatible with both $\Phi$ and $\Psi$, then the curvature $F(\mathcal{A}(\lambda))$ becomes independent of the parameter $\lambda$. In this situation, the flatness of $\mathcal{A}(\lambda)$ is equivalent to the equation $$\label{Fock equation} F(A)+[\Phi\wedge\Psi] = 0.$$ Note the similarity of this equation to the form of the well-known Hitchin equation from [@Hit87]. In Section [4.3](#Sec:canonical-connection){reference-type="ref" reference="Sec:canonical-connection"} below, we will show the existence of a connection $d_A$ compatible with both Fock fields $\Phi$ and $\Psi$. For this to be possible, the two fields have to satisfy a certain transversality condition. ## Transversality We next define the notion of transverse Fock fields. **Definition 45**. *For a pair of Fock fields $\Phi, \Psi \in \Omega^1(S,\mathfrak{g}_P)$, we call $\Phi$ and $\Psi$ *transverse* if $\Omega^1(S,\mathfrak{g}_P)=\ker(\mathop{\mathrm{ad}}_\Phi)\oplus \mathrm{Im}(\mathop{\mathrm{ad}}_\Psi)$ and $\Omega^1(S,\mathfrak{g}_P)=\ker(\mathop{\mathrm{ad}}_\Psi)\oplus \mathrm{Im}(\mathop{\mathrm{ad}}_\Phi)$.* Figure [1](#Fig:transversality){reference-type="ref" reference="Fig:transversality"} illustrates the transversality condition; note that the figure shows the decomposition of $\Omega^1(S,\mathfrak{g}_P)$ as a direct sum, not as a union. ![Decomposition of $\Omega^1(S,\mathfrak{g}_P)$ for transverse $\Phi$ and $\Psi$.](transversality-2.pdf){#Fig:transversality height="4.5cm"} **Proposition 46**. 
*For transverse $\Phi$ and $\Psi$, the cohomology $\mathrm{H}^1(\Phi)$ can be identified with $\ker(\mathop{\mathrm{ad}}_\Phi)\cap\ker(\mathop{\mathrm{ad}}_\Psi)$.* *Proof.* Let $[\alpha]\in \mathrm{H}^1(\Phi)$ be represented by $\alpha\in \ker(\mathop{\mathrm{ad}}_\Phi)$. Using the transversality condition $\Omega^1(S,\mathfrak{g}_P)=\ker(\mathop{\mathrm{ad}}_\Psi)\oplus \mathrm{Im}(\mathop{\mathrm{ad}}_\Phi)$, we see that the projection onto $\ker(\mathop{\mathrm{ad}}_\Psi)$ changes $\alpha$ only by a coboundary. Since $\mathop{\mathrm{ad}}_\Phi^2=0$, this projected representative still lies in $\ker(\mathop{\mathrm{ad}}_\Phi)$, hence in $\ker(\mathop{\mathrm{ad}}_\Phi)\cap\ker(\mathop{\mathrm{ad}}_\Psi)$. We can thus use this projection to obtain a preferred representative for the class $[\alpha]$. ◻ Moreover, in the setting of $\sigma$-split $\Phi$-cohomology from Section [3.3.2](#sec_sigma_split_coh){reference-type="ref" reference="sec_sigma_split_coh"}, we get the following corollary: **Corollary 47**. *For a pair of transverse Fock fields $\Phi, \Psi \in \Omega^1(S,\mathfrak{g}_P)$, one has: $$\Omega^1(S,\mathfrak{g}_P)^\sigma=\mathrm{Im}(\mathop{\mathrm{ad}}_\Phi)^\sigma\oplus \mathrm{Im}(\mathop{\mathrm{ad}}_\Psi)^\sigma.$$* This follows from the fact that $\mathrm{H}^{1,\sigma}(\Phi)=0$ (see Proposition [Proposition 41](#Prop-no-sigma-cohom){reference-type="ref" reference="Prop-no-sigma-cohom"}) and the identification of $\ker(\mathop{\mathrm{ad}}_\Phi)\cap \ker(\mathop{\mathrm{ad}}_\Psi)$ with $\mathrm{H}^1(\Phi)$. The transversality condition is an open condition. In the unitary setting (where the bundle $P$ is equipped with a hermitian structure $\rho$), we will see that the Fock field $\Psi=\Phi^{*}$ can be made transverse to $\Phi$ by a suitable diagonal gauge transformation. ## Canonical connection from transverse Fock fields {#Sec:canonical-connection} We now associate a canonical connection $d_A$ to a given pair of transverse Fock fields $\Phi$ and $\Psi$. This yields a family of 3-term connections $\lambda^{-1}\Phi+d_A+\lambda\Psi$.
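The direct-sum conditions in the definition of transversality are pointwise linear-algebra statements, so they can be sanity-checked numerically. The following sketch (not part of the paper; it models a single fibre by $\mathfrak{gl}_n$ and replaces the two Fock fields by a principal nilpotent matrix $N$ and its transpose, a hypothetical stand-in) verifies that $\mathfrak{gl}_n=\ker(\mathop{\mathrm{ad}}_N)\oplus\mathrm{Im}(\mathop{\mathrm{ad}}_{N^T})$:

```python
import numpy as np

def ad(X):
    """Matrix of the map M -> [X, M] on n x n matrices (column-stacked vec)."""
    n = X.shape[0]
    I = np.eye(n)
    return np.kron(I, X) - np.kron(X.T, I)

def kernel_basis(M, tol=1e-10):
    """Columns form an orthonormal basis of ker(M), computed via SVD."""
    _, s, vh = np.linalg.svd(M)
    rank = int(np.sum(s > tol))
    return vh[rank:].conj().T

n = 4
N = np.diag(np.ones(n - 1), k=-1)          # principal nilpotent: lower shift matrix

ker_adN = kernel_basis(ad(N))              # centralizer of N, dimension n in gl_n
u, s, _ = np.linalg.svd(ad(N.T))
im_adNT = u[:, : int(np.sum(s > 1e-10))]   # column basis of Im(ad_{N^T})

# Transversality at a point: gl_n = ker(ad_N) + Im(ad_{N^T}) as a direct sum,
# i.e. the stacked bases together have full rank n^2.
total_rank = np.linalg.matrix_rank(np.hstack([ker_adN, im_adNT]))
print(ker_adN.shape[1], im_adNT.shape[1], total_rank)
```

The check succeeds because $\mathop{\mathrm{ad}}_N$ and $\mathop{\mathrm{ad}}_{N^T}$ are adjoint for the positive definite form $\mathrm{tr}(X^{*}Y)$, so $\ker(\mathop{\mathrm{ad}}_N)$ is the orthogonal complement of $\mathrm{Im}(\mathop{\mathrm{ad}}_{N^T})$; this mirrors the role of the pseudo-hermitian form in the unitary setting.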
The existence and uniqueness of $d_A$ brings to mind the Chern connection on a holomorphic bundle with hermitian structure. **Theorem 48**. *Let $(P,\Phi,\sigma)$ be a $G$-Fock bundle equipped with a second Fock field $\Psi$, transverse to $\Phi$. Then there is a unique $\sigma$-invariant connection $d_A$ on $P$ which is compatible with both $\Phi$ and $\Psi$, i.e. solving the equations $$d_A\Phi=d_A\Psi=0.$$* In the sequel, we call $d_A$ the *canonical connection* and refer to the construction of $d_A$ as the *filling-in procedure* (since it gives the middle term in the 3-term connection). The associated 3-term connection $\Phi+d_A+\Psi$ will be called the *canonical 3-term connection*. *Proof.* *Existence*: By Proposition [Proposition 44](#Prop:comp-connections){reference-type="ref" reference="Prop:comp-connections"}, we first find a compatible connection $d_{A_0}$ on $P$, i.e. such that $d_{A_0}\Phi = 0$. The existence of $d_{A_0}$ relies on the fact that for appropriate local trivializations, $d\Phi$ lies in the image of $\mathop{\mathrm{ad}}_\Phi$. Note that we can choose $d_{A_0}$ to be $\sigma$-invariant since $\sigma(\Phi)=-\Phi$. The same argument as for $\Phi$ shows that $d_{A_0}\Psi=d\Psi+[A_0\wedge\Psi]$ lies in the image of $\mathop{\mathrm{ad}}_\Psi$. Hence there is a section $R\in\Omega^1(S,\mathfrak{g}_P)$ such that $d_{A_0}\Psi=[R\wedge\Psi]$. Since both $\Psi$ and $d_{A_0}\Psi$ are $\sigma$-anti-invariant, we can choose $R$ to be $\sigma$-invariant. By Corollary [Corollary 47](#Coro:sigma-decompo-transverse){reference-type="ref" reference="Coro:sigma-decompo-transverse"} (since $\Phi$ and $\Psi$ are transverse) we can decompose $R=R_1+R_2$ where $R_1\in\mathrm{Im}(\mathop{\mathrm{ad}}_\Phi)^\sigma$ and $R_2\in\mathrm{Im}(\mathop{\mathrm{ad}}_\Psi)^\sigma$. Since $\mathop{\mathrm{ad}}_\Psi^2=0$, we have $[R\wedge\Psi]=[R_1\wedge\Psi]$. We claim that $d_A:=d_{A_0}-R_1$ is a solution. 
Indeed, $d_A\Psi=0$ by definition of $R_1$, and $d_A\Phi=-[R_1\wedge\Phi]=0$ since $R_1\in\mathrm{Im}(\mathop{\mathrm{ad}}_\Phi)$ and $\mathop{\mathrm{ad}}_\Phi^2=0$. Finally, both $d_{A_0}$ and $R_1$ are $\sigma$-invariant. Therefore $d_A$ is indeed a solution. *Uniqueness*: The space of solutions to the equations $d_{A}\Phi=d_{A}\Psi=0$ is an affine subspace of the space of connections. If $d_A$ is one such solution, then all solutions are of the form $d_A+C$, where $C\in \ker(\mathop{\mathrm{ad}}_{\Phi})\cap \ker(\mathop{\mathrm{ad}}_{\Psi})$. Proposition [Proposition 41](#Prop-no-sigma-cohom){reference-type="ref" reference="Prop-no-sigma-cohom"} shows that the involution $\sigma$ acts by $-1$ on the $\Phi$-cohomology, so it acts by $-1$ on $\ker(\mathop{\mathrm{ad}}_{\Phi})\cap \ker(\mathop{\mathrm{ad}}_{\Psi})$. Since $d_A+C$ and $d_A$ are $\sigma$-invariant, $C$ has to be $\sigma$-invariant. Therefore $C=0$. ◻ ## Unitary case {#Sec:unitary-case} We equip a $G$-Fock bundle $(P,\Phi,\sigma)$ with a compatible hermitian structure $\rho$. Ideally we would like to apply the construction of a canonical connection from Theorem [Theorem 48](#Thm-filling-in){reference-type="ref" reference="Thm-filling-in"} for the pair of Fock fields $\Phi$ and $\Psi=\Phi^{*}=-\rho(\Phi)$. However, the construction cannot be used directly since the Fock fields $\Phi$ and $\Phi^{*}$ need not be transverse. We will see that we can always conjugate $\Phi$ so that $\Phi$ and $\Phi^{*}$ become transverse. **Definition 49**.
*A hermitian structure $\rho$ on a $G$-Fock bundle $(P,\Phi,\sigma)$ is called *compatible* if the involutions $\rho$ and $\sigma$ of $\mathfrak{g}_P$ commute.* In the case of an $\mathrm{SL}_n(\mathbb{C})$-Fock bundle $(E,\Phi,g)$, this is equivalent to the existence of a real vector bundle $E^{\mathbb{R}}$ equipped with a real metric $g^{\mathbb{R}}$ such that the complexification of $E^{\mathbb{R}}$ gives the bundle $E$, the complex linear extension of the real metric $g^{\mathbb{R}}$ gives $g$ and the hermitian extension gives the hermitian form $h$. **Proposition 50**. *For any $G$-Fock bundle $(P,\Phi, \sigma)$, there is a compatible hermitian structure.* *Proof.* The statement is pointwise, so we can reduce to a problem of Lie algebra involutions on $\mathfrak{g}$. The proposition directly follows from the fact that for any Lie algebra involution (here $\sigma$), there is a compact real form which commutes with it [@knapp Theorem 6.16]. ◻ **Proposition 51**. *Let $(P,\Phi,\sigma)$ be a $G$-Fock bundle equipped with a compatible hermitian structure $\rho$. Then $(P,\Phi^*,\sigma)$ is again a $G$-Fock bundle.* *Proof.* Applying the hermitian conjugation to $[\Phi\wedge\Phi]=0$ we get the same condition for $\Phi^{*}$. Since the set of principal nilpotent matrices is invariant under hermitian conjugation, $\Phi^*$ satisfies the nilpotency condition. Finally, $\Phi^{*}$ is negated by $\sigma$: $$\sigma(\Phi^{*})=-\sigma\circ\rho(\Phi) = -\rho\circ\sigma(\Phi)=\rho(\Phi)=-\Phi^{*}.$$ ◻ In order to use the filling-in procedure of Theorem [Theorem 48](#Thm-filling-in){reference-type="ref" reference="Thm-filling-in"}, we need $\Phi$ and $\Phi^{*}$ to be transverse. 
For that, note that the spaces $\mathrm{Im}(\mathop{\mathrm{ad}}_{\Phi})$ and $\ker(\mathop{\mathrm{ad}}_{\Phi^{*}})$ are orthogonal complements with respect to the (pseudo-)hermitian form on $T^*S\otimes\mathfrak{g}_P$ given by $$\langle \alpha, \beta\rangle d\bar{z}\wedge dz=\mathrm{tr}(\alpha^{*}\wedge \beta),$$ where $\mathop{\mathrm{tr}}$ denotes the Killing form on $\mathfrak{g}$. These are indeed orthogonal complements because $\mathop{\mathrm{ad}}_\Phi$ and $\mathop{\mathrm{ad}}_{\Phi^{*}}$ are adjoint with respect to the inner product on the entire space $\Lambda^\bullet T^*S\otimes\mathfrak{g}_P$ given by this same formula. Note that we chose a volume form to define this, but different choices of volume form will lead to metrics which differ by a scalar, so the notions of adjoint and orthogonal complement do not depend on this choice. This hermitian form is not positive definite, though: the squared norm of $a\,dz+b\,d\bar{z}$ is given by $\mathrm{tr}(a^{*} a)-\mathrm{tr}(b^{*} b)$. On $\Omega^1(S,\mathfrak{g}_P)$, the pseudo-hermitian form is given by the formula $$\label{Eq:herm-form} \langle \alpha,\beta\rangle = -\tfrac{i}{2}\int_S\mathop{\mathrm{tr}}\alpha^*\wedge\beta.$$ Recall that a subspace $W$ of a vector space equipped with an indefinite form is complementary to $W^\perp$ precisely when the form is non-degenerate when restricted to $W$. This motivates the following: **Definition 52**. *For a $G$-Fock bundle $(P,\Phi)$ with hermitian form $\rho$, the Fock field $\Phi$ is called *positive* if $\mathrm{Im}(\mathop{\mathrm{ad}}_\Phi)\subset \Omega^1(S,\mathfrak{g}_P)$ is positive-definite with respect to the pseudo-hermitian form in [\[Eq:herm-form\]](#Eq:herm-form){reference-type="eqref" reference="Eq:herm-form"}. The space of positive Fock fields is denoted by $\mathcal{P}$.
We then also refer to $\rho$ as a positive hermitian structure.* Note that for a positive Fock field $\Phi\in\mathcal{P}$, the fields $\Phi$ and $\Phi^{*}$ are transverse. Note further that the Fuchsian locus, as described in Section [3.2](#Sec:fuchsian-locus){reference-type="ref" reference="Sec:fuchsian-locus"}, is included in $\mathcal{P}$ since $\mathrm{Im}(\mathop{\mathrm{ad}}_{\Phi})$ consists only of $(1,0)$-forms. **Lemma 53**. *For every Fock field $\Phi\in \Omega^1(S,\mathfrak{g}_P)$ there is a diagonal gauge transformation $a\in \mathrm{Aut}(P,\sigma)$ such that $a^{-1} \Phi a$ is positive.* *Proof.* Consider a $G$-Fock bundle $(P,\Phi,\sigma)$. Decompose $\Phi=\Phi_1+\Phi_2$. There is an $\mathfrak{sl}_2$-subbundle $\mathfrak{p}\subset \mathfrak{g}_P$ obtained by completing $\Phi_1$ into a principal $\mathfrak{sl}_2$-triple. Let $H$ be a non-vanishing section of the line subbundle $\mathfrak{l}\subset \mathfrak{p}$ consisting of the semisimple elements of the principal $\mathfrak{sl}_2$-triples (acting on the nilpotent part by scaling). Note that $\mathfrak{l}$ is a degree zero line bundle, so this section indeed exists. Consider the gauge transformation $a_t=\exp(tH)$. Since $H$ is $\sigma$-invariant, we have $a_t\in\mathrm{Aut}(P,\sigma)$. The element $H$ allows us to decompose the bundle $\mathfrak{g}_P\cong \oplus_{k\in\mathbb{Z}}\, \mathfrak{g}_{P,k}$ where elements in $\mathfrak{g}_{P,k}$ are eigenvectors of $\mathop{\mathrm{ad}}_H$ with eigenvalue $k$. We know that $\Phi_1\in \mathfrak{g}_{P,-2}$ since $\Phi_1$ and $H$ are part of the same $\mathfrak{sl}_2$-triple. We have $\Phi_2\in Z(\Phi_1)$, i.e. $\Phi_2$ is a linear combination of lowest weight vectors. In addition, $\Phi_2$ is not regular since we work in the complex structure induced from $(P,\Phi)$.
This implies that $\Phi_2\in\oplus_{k\leq -4}\mathfrak{g}_{P,k}$ since in the decomposition of $\mathfrak{g}_P$ into irreducible $\mathfrak{sl}_2$-modules (via adjoint action by $\mathfrak{p}$), only one module is of dimension 3. Therefore the quantity $e^{2t}a_t^{-1}\Phi a_t$ converges to $\Phi_1$ as $t\to\infty$. Since $\Phi_1$ is clearly positive and positive Fock fields form an open set, we conclude. ◻ For transverse $\Phi$ and $\Phi^{*}$ we can use the filling-in procedure from Theorem [Theorem 48](#Thm-filling-in){reference-type="ref" reference="Thm-filling-in"} to get a canonical connection $d_A$. **Proposition 54**. *For a Fock bundle $(P,\Phi,\sigma)$ equipped with a compatible positive hermitian structure $\rho$, the canonical connection $d_A$ is unitary.* *Proof.* One can first check that both $d_A$ and $\rho(d_A)$ are $\sigma$-invariant solutions to the equations $d_A(\Phi)=0$ and $d_A(\Phi^{*})=0$. Indeed, since $\sigma$ and $\rho$ commute, $\rho(d_A)$ is $\sigma$-invariant. Applying $\rho$ to the equation $d_A\Phi=0$ gives $\rho(d_{A})(\Phi^{*})=0$. Similarly, applying $\rho$ to $d_A(\Phi^{*})=0$ gives $\rho(d_A)(\Phi)=0$. Hence $\rho(d_A)$ is also a solution. By uniqueness from Theorem [Theorem 48](#Thm-filling-in){reference-type="ref" reference="Thm-filling-in"}, we get $\rho(d_A)=d_A$, thus the connection $d_A$ is unitary. ◻ An important example of positive Fock fields is the Fuchsian locus: **Proposition 55**. *If $(P,\Phi,\sigma)$ lies in the Fuchsian locus, then there is a compatible positive hermitian structure $\rho$. In addition, the canonical connection $d_A$ is the Chern connection.* *Proof.* From Proposition [Proposition 31](#prop:fock-fuchsian-locus){reference-type="ref" reference="prop:fock-fuchsian-locus"}, we know that a Fock bundle in the Fuchsian locus comes from some uniformizing $G$-Higgs bundle $(V,\Phi)$.
From [@Hit92] we know that $V$ can be equipped with a compatible hermitian structure $\rho$ and $\sigma_0$-structure $\sigma$. Since $\Phi\in\Omega^{1,0}(S,\mathfrak{g}_V)$, the image of $\mathop{\mathrm{ad}}_\Phi$ is positive definite for the form defined in [\[Eq:herm-form\]](#Eq:herm-form){reference-type="eqref" reference="Eq:herm-form"}, i.e. $\Phi$ is positive. From nonabelian Hodge theory, we know that the Chern connection $d_A$ satisfies $d_A\Phi=0$ and $\sigma(d_A)=d_A$ [@Hit92 Equation (7.2)]. Hence the Chern connection is the result of the filling-in procedure by the uniqueness part of Theorem [Theorem 48](#Thm-filling-in){reference-type="ref" reference="Thm-filling-in"}. ◻ ## Main Conjecture We have seen that on a $G$-Fock bundle $(P,\Phi,\sigma)$ with compatible positive hermitian structure $\rho$, there is a canonical 3-term connection $\Phi+d_A+\Phi^{*}$. Our main conjecture is that we can choose $\rho$ to get a *flat connection*. We give evidence in favor of the conjecture. Instead of varying $\rho$, we can equivalently fix the hermitian structure $\rho$ and vary the gauge class of the Fock field. **Conjecture 56**. *Let $(P, \Phi_0, \sigma)$ be a $G$-Fock bundle over $S$ equipped with a compatible hermitian structure $\rho$. Then there exists a unique hermitian endomorphism-valued function $$\eta\in \Omega^0(S,\mathfrak{g}_P^{\sigma, -\rho})$$ such that $\Phi = e^{-\eta} \Phi_0 e^{\eta}$ is positive, and the corresponding 3-term connection $\Phi + d_A + \Phi^{*}$ is flat, that is, it satisfies the equation $$\label{Eq:Hitchin-like-3} F(A) + [\Phi\wedge\Phi^{*}] = 0.$$* The conjecture is appealingly similar to nonabelian Hodge theory, which associates a flat connection to the gauge equivalence class of any polystable Higgs bundle. In our setting, Fock bundles are automatically stable; see Proposition [Proposition 24](#Prop:no-automorphisms){reference-type="ref" reference="Prop:no-automorphisms"}.
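For intuition on the curvature term in the conjectural flatness equation, here is a pointwise numerical sketch (not from the paper; it assumes a hypothetical simplest model in $\mathfrak{gl}_n$, representing $\Phi$ at a point by $N\,dz$ with $N$ the lower shift matrix, so that $[\Phi\wedge\Phi^{*}]$ has $dz\wedge d\bar z$-coefficient $[N,N^{*}]$ up to sign conventions). The commutator comes out diagonal, hermitian and traceless, the same shape as the curvature term it must cancel in a Hitchin-type equation:

```python
import numpy as np

n = 4
N = np.diag(np.ones(n - 1), k=-1)        # pointwise stand-in for the (1,0)-part of Phi
comm = N @ N.conj().T - N.conj().T @ N   # [N, N*], the dz ∧ dz̄ coefficient up to convention

is_hermitian = np.allclose(comm, comm.conj().T)
is_diagonal = np.allclose(comm, np.diag(np.diag(comm)))
is_traceless = abs(np.trace(comm)) < 1e-12
print(np.diag(comm).real)                # prints the diagonal weights [-1, 0, 0, 1]
```

This is the same diagonal commutator that appears in the Toda-type form of Hitchin's equations for the uniformizing Higgs bundle, which is why the Fuchsian locus provides the first evidence for the conjecture.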
Apart from the similarity with nonabelian Hodge theory, there is strong evidence for the conjecture, discussed below: 1. There is a class of Fock bundles for which Conjecture [Conjecture 56](#main-conj){reference-type="ref" reference="main-conj"} is indeed true, namely Fock bundles in the Fuchsian locus from Section [3.2](#Sec:fuchsian-locus){reference-type="ref" reference="Sec:fuchsian-locus"}. Fix a complex structure on $S$ and consider the uniformizing $G$-Higgs bundle $(V,\Phi)$, i.e. the Higgs bundle in the Hitchin section with principal nilpotent $\Phi$. Forgetting about the holomorphic structure, we get a $G$-Fock bundle $(P,\Phi)$. Proposition [Proposition 55](#Prop:chern-like-connection){reference-type="ref" reference="Prop:chern-like-connection"} tells us that $\Phi$ and $\Phi^{*}$ are transverse and that the Chern connection results from our construction of 3-term connections. Therefore, Conjecture [Conjecture 56](#main-conj){reference-type="ref" reference="main-conj"} follows from the nonabelian Hodge correspondence in this case. 2. The strongest argument in favor of the conjecture is that the linearization is an elliptic isomorphism. This is developed below in Section [5.1](#Sec:elliptic){reference-type="ref" reference="Sec:elliptic"}. We can then conclude that the subset of Fock fields for which there exists a solution is open. 3. Another interesting class of examples pertains to *harmonic higher complex structures* as introduced in [@Nolte]. Fix a complex structure on $S$ giving a hyperbolic metric $g_S$ (via uniformization). A higher complex structure is called *harmonic* if its induced complex structure on $S$ coincides with the fixed one and if $$\label{harmonicity_cond} \bar\partial(\bar{\mu}_k g_S^{k-1})=0,$$ for all $k\in\{3,...,n\}$.
The gauge-theoretic meaning of condition ([\[harmonicity_cond\]](#harmonicity_cond){reference-type="ref" reference="harmonicity_cond"}) can be given as follows: denote by $h_S$ the hermitian structure on $V=K^{(n-1)/2}\oplus...\oplus K^{(1-n)/2}$ induced from the hyperbolic metric on $S$. Denote again by $E$ the underlying smooth complex vector bundle of $V$. Then a Fock field $\Phi$, with $(1,0)$-part fixed to be the Fuchsian Fock field in Example [Example 30](#Ex:Fuchsian-locus){reference-type="ref" reference="Ex:Fuchsian-locus"}, corresponds to a harmonic higher complex structure (via Proposition [Proposition 23](#link_hcs){reference-type="ref" reference="link_hcs"}) if and only if $\bar\partial(\Phi^{*_{h_S}})=0$. Indeed, the hermitian metric $h_S$ is diagonal and is given by $h_S=\mathrm{diag}(g_S^{(1-n)/2},g_S^{(3-n)/2},...,g_S^{(n-1)/2})$. The non-zero entries of $(\Phi^{0,1})^{*_{h_S}}$ are given by $\bar{\mu}_k g_S^{k-1}$. One of the main results of [@Nolte] is that every equivalence class of higher complex structures (modulo higher diffeomorphisms) has a harmonic representative (unique up to the action of usual diffeomorphisms isotopic to the identity). In Section [6](#Sec:higher-diffeos){reference-type="ref" reference="Sec:higher-diffeos"} below, we give the gauge-theoretic meaning of higher diffeomorphisms as special gauge transformations. Hence it seems that we can reduce Conjecture [Conjecture 56](#main-conj){reference-type="ref" reference="main-conj"} to harmonic higher complex structures, in which the elliptic condition $\bar\partial(\Phi^{*_{h_S}})=0$ holds. We are very grateful to Alexander Nolte for his insight in this description. 4. 
In the case of an $\mathrm{SL}_n(\mathbb{C})$-Fock bundle $(E,\Phi,g)$ equipped with a hermitian structure $h$, we have two filtrations $\mathcal{F}$ and $\mathcal{F}^*$ associated to $(E,\Phi)$ and $(E,\Phi^{*})$ respectively by Proposition [Proposition 25](#Prop:filtration){reference-type="ref" reference="Prop:filtration"}. They are transverse since $\mathcal{F}_k$ is $h$-orthogonal to $\mathcal{F}_{n-k}^*$. Therefore by considering $L_k=\mathcal{F}_k\cap \mathcal{F}_{n-k}^*$, we get a line decomposition of $E$ for which $\Phi$ is lower triangular, $\Phi^*$ is upper triangular and $h$ is diagonal. This line decomposition might help to find estimates. 5. Further evidence for the validity of the conjecture comes from a symplectic viewpoint. The Atiyah--Bott symplectic form on the space of all connections gives a presymplectic form on the space of appropriate Fock bundles. The gauge group action gives a moment map which is precisely given by $F(A)+[\Phi\wedge\Phi^{*}]$ in this case. Then Conjecture [Conjecture 56](#main-conj){reference-type="ref" reference="main-conj"} becomes equivalent to an infinite-dimensional symplectic reduction. The difficulty here comes from the fact that the presymplectic form has degenerate directions, in particular, the orbits of the higher diffeomorphism action. Along the complex gauge orbits, the form is non-degenerate though. This is work in progress. ## Flat Fock bundles and the Hitchin component {#Flat_Fock_vs_Hitchin_comp} In this section, Fock bundles with a flat canonical 3-term connection are linked to the Hitchin component. We show that the monodromy is always in the split real form, and assuming Conjecture [Conjecture 56](#main-conj){reference-type="ref" reference="main-conj"}, one then gets a map from Fock bundles to the Hitchin component. Let $(P,\Phi, \sigma)$ be a $G$-Fock bundle equipped with a compatible positive hermitian structure $\rho$. **Proposition 57**. 
*Suppose the canonical 3-term connection $\nabla=\Phi+d_A+\Phi^{*}$ associated to the tuple $(P,\Phi,\sigma,\rho)$ as above is flat. Then the monodromy of $\nabla$ lies in the split real form of $G$.* *Proof.* We simply show that $\Phi+d_A+\Phi^{*}$ is invariant under the involution $\tau=\sigma\rho=\rho\sigma$. Recall that $\sigma(\Phi)=-\Phi$. Hence $$\tau(\Phi)=\rho\circ\sigma(\Phi)=\rho(-\Phi)=-\rho(\Phi)=\Phi^{*}.$$ Since the canonical connection $d_A$ is unitary, we have $\rho(d_A)=d_A$. Since $d_A$ is also $\sigma$-invariant, we get $\tau(d_A)=\rho(\sigma(d_A))=d_A$. Therefore, $\Phi+d_A+\Phi^{*}$ is $\tau$-invariant. By [@Hit92 Proposition 6.1], the monodromy of the connection $\nabla$ lies in the split real form of $G$. ◻ **Proposition 58**. *Assume the main conjecture [Conjecture 56](#main-conj){reference-type="ref" reference="main-conj"} is true. Then we get a map from the space of $G$-Fock bundles equipped with a compatible hermitian structure to the Hitchin component.* *Proof.* Consider a $G$-Fock bundle $(P,\Phi,\sigma)$ together with a compatible hermitian structure $\rho$. The main conjecture associates a flat connection $\Phi+d_A+\Phi^{*}$ which, by the previous proposition, has monodromy in the split real form. We can continuously deform any Fock field $\Phi$ to a Fock field with vanishing $(0,1)$-part, which is a point in the Fuchsian locus by Proposition [Proposition 31](#prop:fock-fuchsian-locus){reference-type="ref" reference="prop:fock-fuchsian-locus"}. Using the main conjecture, this gives a continuous path in the character variety for the split real form. Since the monodromy of any point in the Fuchsian locus is in the Hitchin component, the same has to be true for $(P,\Phi,\sigma)$. ◻ # Neighborhood of the Fuchsian locus {#Sec:ellipticity} We prove the Main Conjecture [Conjecture 56](#main-conj){reference-type="ref" reference="main-conj"} in a neighborhood of the Fuchsian locus.
Throughout this section, let $(P,\Phi, \sigma)$ be a $G$-Fock bundle equipped with a compatible positive hermitian structure $\rho$. ## Linearized Equation {#Sec:elliptic} We show that the differential of the map from positive Fock fields to the curvature $F(A) + [\Phi\wedge \Phi^{*}]$ is an elliptic operator of order two and, in fact, is an isomorphism of appropriate Sobolev spaces. We compute the change in curvature as we infinitesimally conjugate $\Phi$ by a hermitian endomorphism. Let $(P,\sigma,\rho)$ be a principal $G$-bundle equipped with $\sigma_0$-structure $\sigma$ and compatible hermitian structure $\rho$. Define $F$ to be the map $\mathcal{P}\to\Omega^2(S,\mathfrak{g}_P)$ given by the curvature of the canonical 3-term connection: $$\label{Eq:curv-operator} F(\Phi):=F(A)+[\Phi\wedge\Phi^*].$$ **Lemma 59**. *Let $\Phi_t$ be a smooth path of positive Fock fields on a $G^{\sigma,\rho}$-bundle with $\Phi_0 = \Phi$, and whose derivative at $t=0$ is $$\dot{\Phi} := [\Phi,\eta]$$ for some $\eta\in \Omega^0(S,\mathfrak{g}_P^{\sigma,-\rho})$. Let $A_t$ be the path of connections obtained by the filling-in procedure of Theorem [Theorem 48](#Thm-filling-in){reference-type="ref" reference="Thm-filling-in"}. We have $$\dot{A} = Q d_A \eta,$$ where $Q$ is the involution on $\Omega^{1}(S,\mathfrak{g}_P)^\sigma\cong \mathrm{Im}(\mathop{\mathrm{ad}}_\Phi)^\sigma\oplus\mathrm{Im}(\mathop{\mathrm{ad}}_{\Phi^*})^\sigma$ defined by $-1$ on $\mathrm{Im}(\mathop{\mathrm{ad}}_\Phi)^\sigma$ and by $1$ on $\mathrm{Im}(\mathop{\mathrm{ad}}_{\Phi^*})^\sigma$. Note that we used Corollary [Corollary 47](#Coro:sigma-decompo-transverse){reference-type="ref" reference="Coro:sigma-decompo-transverse"} here.* *Proof.* It suffices to check that the derivatives of $d_A\Phi$ and $d_A\Phi^{*}$ vanish at $t=0$.
One sees $$\frac{d}{dt} d_A\Phi = d_A \dot\Phi + [\dot{A}\wedge\Phi]=-[\Phi\wedge d_A \eta] + [\dot{A}\wedge\Phi] = [\Phi\wedge(\dot{A} -d_A\eta)]$$ and similarly $$\frac{d}{dt} d_A\Phi^{*} = [\Phi^{*}\wedge(\dot{A} +d_A\eta)].$$ The fact that $\mathop{\mathrm{ad}}_{\Phi}^2=0$ and $\mathop{\mathrm{ad}}_{\Phi^{*}}^2=0$ implies that $\dot{A} = Q d_A \eta$ indeed solves both of these equations. ◻ We can now easily calculate the variation of curvature as we move the Fock field in the direction of $\eta$. **Theorem 60**. *The derivative of the map $F$ from [\[Eq:curv-operator\]](#Eq:curv-operator){reference-type="eqref" reference="Eq:curv-operator"} is the differential operator $L:\Omega^0(S,\mathfrak{g}_P^{\sigma,-\rho}) \to \Omega^2(S,\mathfrak{g}_P^{\sigma,\rho})$ given by $$L\eta = d_A Q d_A \eta + [[\Phi,\eta]\wedge\Phi^{*}] - [\Phi\wedge [\Phi^{*},\eta]].$$ The operator $L$ is elliptic, and extends to a continuous linear isomorphism of Sobolev spaces $$H_{l+2}(S,\mathfrak{g}_P^{\sigma,-\rho})\to H_{l}(S,\Lambda^2T^*S\otimes\mathfrak{g}_P^{\sigma,\rho})$$ for all $l\in\mathbb{N}$, where $H_l$ denotes the Sobolev space with $l$ square-integrable derivatives.* *Proof.* To compute the symbol of $L$, we only need to look at the highest-order term $d_A Q d_A$. This is a composition of three operators, so its symbol is the composition of three symbols. The symbol $\sigma_{d_A}$ of $d_A$ is given by the formula $$\sigma_{d_A}(\alpha)\eta = \alpha\wedge\eta.$$ This is a map $T^*S\times \Lambda^\bullet T^*S\otimes \mathfrak{g}_P\to \Lambda^\bullet T^*S\otimes \mathfrak{g}_P$. Note that this does not actually depend on the connection $d_A$. Since $Q$ is an operator of order zero, the symbol of $Q$ is just itself.
Putting these together, we get the symbol of $L$: $$\sigma_{L}(\alpha)\eta = \alpha \wedge Q (\alpha \eta).$$ The operator $L$ is elliptic if this is an isomorphism from $\mathfrak{g}_P^{\sigma,-\rho}$ to $\Lambda^2T^*S \otimes \mathfrak{g}_P^{\sigma,\rho}$ for all non-zero $\alpha$. It suffices that $-\frac{i}{2}\mathop{\mathrm{tr}}(\eta \alpha \wedge Q(\alpha \eta))$ is a definite form in $\eta$ for all non-zero $\alpha$. This is indeed true, because if we write $\alpha\eta = \xi + \xi^{*}$ where $\xi$ is in $\mathrm{Im}(\mathop{\mathrm{ad}}_\Phi)$, we get $$-\tfrac{i}{2}\mathop{\mathrm{tr}}(\eta \alpha \wedge Q(\alpha \eta)) = -\tfrac i2\mathop{\mathrm{tr}}((\xi + \xi^{*}) \wedge (-\xi + \xi^{*})) = i \mathop{\mathrm{tr}}(\xi^{*}\wedge\xi)=-2\lVert \xi \rVert^2,$$ which is negative-definite on $\mathcal{P}$ by [Definition 52](#Def:pos-Higgs-field){reference-type="ref" reference="Def:pos-Higgs-field"}. Now we prove that $L$ induces an isomorphism $H_1\to H_{-1}$. The essential point is that the quadratic form $\eta\mapsto -\tfrac{i}{2}\int_S \mathop{\mathrm{tr}}(\eta L\eta)$ is equivalent to the square of the standard $H_1$-norm (defined using any choice of metrics). Recall the pseudo-hermitian product given by Equation [\[Eq:herm-form\]](#Eq:herm-form){reference-type="eqref" reference="Eq:herm-form"}.
We use integration by parts and conjugation invariance of the trace to write $$\begin{aligned} -\tfrac{i}{2}\int_S \mathop{\mathrm{tr}}(\eta L\eta) &= -\tfrac{i}{2}\int_S \mathop{\mathrm{tr}}(\eta d_A Q d_A \eta) + \mathop{\mathrm{tr}}(\eta[[\Phi,\eta]\wedge\Phi^{*}]) - \mathop{\mathrm{tr}}(\eta[\Phi\wedge [\Phi^{*},\eta]])\\ &= -\tfrac{i}{2}\int_S -\mathop{\mathrm{tr}}(d_A \eta \wedge Q d_A \eta) + \mathop{\mathrm{tr}}([\eta,\Phi^{*}]\wedge[\Phi,\eta]) - \mathop{\mathrm{tr}}([\eta,\Phi] \wedge [\Phi^{*},\eta])\\ &=-\tfrac{i}{2}\int_S 2 \mathop{\mathrm{tr}}(\pi_{\mathrm{Im}\,\mathop{\mathrm{ad}}_\Phi}(d_A \eta)^{*} \wedge \pi_{\mathrm{Im}\,\mathop{\mathrm{ad}}_\Phi}(d_A \eta))+ 2\mathop{\mathrm{tr}}([\eta,\Phi^{*}]\wedge[\Phi,\eta])\\ &= 2\lVert \pi_{\mathrm{Im}\,\mathop{\mathrm{ad}}_\Phi}(d_A\eta) \rVert^2 +2\lVert [\Phi,\eta]\rVert^2.\end{aligned}$$ This is the square of a norm on $\Omega^0(S,\mathfrak{g}_P^{\sigma,-\rho})$, given by the integral of a positive definite norm of the first derivatives of $\eta$ plus a positive definite norm of $\eta$ itself. This means it is equivalent to the standard $H_1$-norm. Recall that $H_{-1}(S,E\otimes \Lambda^2T^*S)$ can be defined as the topological dual to $H_1(E)$ for any vector bundle $E$. The norms being equivalent means that $L$ induces an isomorphism $H_1\to H_{-1}$. This implies that the induced maps $H_l \to H_{l-2}$ are all isomorphisms for $l\in\mathbb{N}, l\geq 1$. ◻ ## Application of the implicit function theorem {#Sec:impl-fct-thm} In this section we use standard techniques for nonlinear elliptic PDE to show that for Fock fields with small enough $(0,1)$-part, i.e. near the Fuchsian locus, there is a hermitian structure such that the canonical connection is flat. We start by recalling the implicit function theorem: **Theorem 61** (Implicit function theorem). *Let $W\subset X\times Y$ be an open subset of a product of differentiable Banach manifolds.
Let $f:W \to Z$ be a continuously differentiable function to a third differentiable Banach manifold. Suppose that $f(x,y)=z$, and that $df_{(x,y)}$ restricts to an isomorphism from $T_y Y$ to $T_zZ$. Then there is a function $g$ from a neighborhood $U$ of $x$ to $Y$ with $g(x)=y$, satisfying $$f(u,g(u)) = z$$ for all $u\in U$. Furthermore, $g$ is continuously differentiable.* We will apply this where $X$ is the space of Fock fields for a principal bundle $P$ equipped with $\sigma$ and $\rho$, $Y$ is the space of sections of $\mathfrak{g}_P^{\sigma,-\rho}$, $W\subset X\times Y$ is the subset of $(\Phi,\eta)$ where $e^{-\eta}\Phi e^\eta\in \mathcal{P}$, $Z$ is the space of two-forms valued in $\mathfrak{g}^{\sigma,\rho}$, and $f$ is the map taking $(\Phi,\eta)$ to the curvature of the canonical connection $d_A$ associated to the conjugated Fock field: $$f: \left \{ \begin{array}{ccc} W &\longrightarrow & \Omega^2(S, \mathfrak{g}_P^{\sigma,\rho}) \\ (\Phi,\eta) &\mapsto & F(e^{-\eta}\Phi e^{\eta}) \end{array} \right.$$ where $F$ is the map from Equation [\[Eq:curv-operator\]](#Eq:curv-operator){reference-type="eqref" reference="Eq:curv-operator"}. We need to equip all these spaces with differentiable Banach manifold structures such that the criteria of the implicit function theorem are satisfied. To do this we should understand the derivative of the map from Fock fields to curvature. Let $\dot\Phi\in \mathrm{Var}(\Phi)^{-\sigma}$ be a tangent vector at $\Phi$ to the space of Fock fields. Let $\dot A$ be the corresponding variation in the canonical connection. The derivatives of $d_A\Phi$ and $d_A\Phi^*$ must be zero, giving us two equations for $\dot{A}$.
$$d_A\dot\Phi + [\dot{A}\wedge \Phi] = 0$$ $$d_A\dot\Phi^* + [\dot{A}\wedge \Phi^*] = 0$$ By transversality, we know that $\mathop{\mathrm{ad}}_\Phi$ restricts to an isomorphism from $\mathrm{Im}(\mathop{\mathrm{ad}}_{\Phi^*})^{\sigma}\subset \Omega^1(S,\mathfrak{g}_P)$ to $\mathrm{Im}(\mathop{\mathrm{ad}}_{\Phi})^{-\sigma}\subset \Omega^2(S,\mathfrak{g}_P)$. Let $Y:\mathrm{Im}(\mathop{\mathrm{ad}}_{\Phi})^{-\sigma} \to \mathrm{Im}(\mathop{\mathrm{ad}}_{\Phi^*})^{\sigma}$ denote the inverse of this map. We can express $\dot{A}\in\Omega^1(S,\mathfrak{g}_P)^{\sigma,\rho}$ in terms of $Y$, using Corollary [Corollary 47](#Coro:sigma-decompo-transverse){reference-type="ref" reference="Coro:sigma-decompo-transverse"}. **Lemma 62**. *The variation of the connection $d_A$ is given by $$\dot{A} = - Y(d_A\dot\Phi) + Y(d_A\dot\Phi)^*.$$* It follows that the total change in the curvature $F(\Phi)$ is $$dF_{\Phi}(\dot\Phi) = -d_A Y(d_A\dot\Phi) + d_A Y(d_A\dot\Phi)^* + [\dot\Phi \wedge \Phi^*] + [\Phi \wedge \dot\Phi^*].$$ All we will use here is that this is a second order operator which varies continuously (in the operator norm topology) with respect to variations of $\Phi$ in the $C^0$-topology. This is because the linear maps $Y$ depend continuously on $\Phi$. Since $dF_\Phi$ is a second order operator, it extends to a continuous map $H_k(\mathrm{Var}(\Phi)) \to H_{k-2}(\Lambda^2T^*S\otimes\mathfrak{g}^{\sigma,\rho})$ where $H_k$ denotes the Sobolev Hilbert space of sections with $k$ square-integrable derivatives, where $k\in\mathbb{N}$ with $k\geq 2$. The Sobolev embedding theorem tells us that in two dimensions $H_k$ embeds in $C^0$ for $k\geq 2$. If we equip $\mathcal{P}$ with the $H_k$-topology, and $\Omega^2(S,\mathfrak{g}^{\sigma,\rho})$ with the $H_{k-2}$-topology, then $F$ is continuously differentiable.
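As a consistency check, the formula of Lemma 62 can be verified against the two equations above. A sketch, writing $B:=Y(d_A\dot\Phi)\in\mathrm{Im}(\mathop{\mathrm{ad}}_{\Phi^{*}})$, so that $[\Phi\wedge B]=d_A\dot\Phi$ by the definition of $Y$ and, locally, $B=[\Phi^{*},b]$ (signs depend on the conventions for $*$ and for brackets of forms):

```latex
\begin{aligned}
  d_A\dot\Phi + [\dot{A}\wedge\Phi]
    &= d_A\dot\Phi + [(-B + B^{*})\wedge\Phi]
     = d_A\dot\Phi - [\Phi\wedge B] + [\Phi\wedge B^{*}]\\
    &= [\Phi\wedge B^{*}]
     = \pm\tfrac{1}{2}\,[[\Phi\wedge\Phi],\,b^{*}] = 0,
\end{aligned}
```

using $[\Phi\wedge\Phi]=0$; the equation for $\Phi^{*}$ is checked analogously, using $\mathop{\mathrm{ad}}_{\Phi^{*}}^2=0$.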
A Sobolev inequality tells us that multiplication $H_k \times H_k\to H_k$ is continuous in dimension 2, so the map $(\Phi,\eta)\mapsto e^{-\eta}\Phi e^{\eta}$ is continuous (and continuously differentiable). Finally, by Theorem [Theorem 60](#Thm:ellipticity){reference-type="ref" reference="Thm:ellipticity"} we know that the partial derivative of $f$ with respect to $\eta$ is an isomorphism. All the hypotheses of the implicit function theorem are satisfied, so we get the following: **Proposition 63**. *Let $k\geq 2$. The set of Fock fields $\Phi$ of class $H_k$, such that there exists $\eta \in H_k(S,\mathfrak{g}_P^{\sigma,-\rho})$ with flat canonical connection associated to $e^{-\eta}\Phi e^\eta$, is open. Furthermore, the map from $\Phi$ to this solution $\eta$ is differentiable.* Note that solutions $\eta$ for a given $\Phi$ are isolated, but at this point we have not proven them to be unique. Since the operator $L$ is elliptic in the variable $\eta$, and continuously differentiable with respect to the $C^0$-topology (thus also the $H_k$-topology), it can be approximated by its linearization for small continuous fluctuations of $\eta$. Standard elliptic regularity arguments then imply that if $\Phi$ is smooth, then $\eta$ must be as well. ## Solution in a neighborhood Now we restrict to $\mathrm{SL}_n(\mathbb{C})$ to get a link from a neighborhood of the Fuchsian locus in the moduli space of higher complex structures $\mathcal{T}^n(S)$ to the Hitchin component. Proposition [Proposition 63](#prop:openness){reference-type="ref" reference="prop:openness"} gives us a map from a neighborhood $U\subset \mathbb{M}^n(S)$ of the Fuchsian locus in the space of higher complex structures to the Hitchin component. In Section [6](#Sec:higher-diffeos){reference-type="ref" reference="Sec:higher-diffeos"} we will see that this map is locally constant along higher diffeomorphism orbits.
Unfortunately, this is not quite enough to conclude that we have a canonical map from a neighborhood of the Fuchsian locus in $\mathcal{T}^n(S)$ to the Hitchin component, because we cannot guarantee that the intersection of $U$ with higher diffeomorphism orbits is connected. Luckily there is a convenient slice of the higher diffeomorphism action, namely the harmonic higher complex structures defined in [@Nolte]. Let $\mathcal{H}\mathbb{M}^n(S)$ denote the space of harmonic higher complex structures of order $n$ on $S$. A harmonic higher complex structure amounts to a complex structure and a list of tensors $(\mu_{3},...,\mu_{n})$ with $\mu_k$ a section of $K^{1-k}\otimes \bar{K}$ satisfying $\bar\partial(\bar{\mu}_k g_S^{k-1})=0$, where $g_S$ denotes the hyperbolic metric on $S$ associated to the complex structure. Theorem 8.2 in [@Nolte] states that any higher complex structure modulo higher diffeomorphism admits a harmonic representative, unique up to isotopy: $$\mathbb{M}^n(S)/\mathrm{Ham}_0(T^*S) \cong \mathcal{H}\mathbb{M}^n(S)/\mathrm{Diff}_0(S).$$ There is a natural action of $\mathbb{R}^*$ on higher complex structures by $$t\cdot(\mu_{3},...,\mu_{n}) = (t\mu_3, ..., t^{n-2}\mu_n)$$ coming from scalar multiplication by $t$ in $T^*S$. This action clearly preserves $\mathcal{H}\mathbb{M}^n(S)$. Let $\mathcal{H}\mathbb{M}^{n}_{*}(S) \subset \mathcal{H}\mathbb{M}^n(S)$ denote the subset for which there is a path of solutions to ([\[Eq:Hitchin-like-3\]](#Eq:Hitchin-like-3){reference-type="ref" reference="Eq:Hitchin-like-3"}), for $\{(t\mu_{3},...,t^{n-2}\mu_{n}) \vert \,\, t\in [0,1]\}$ starting at the Fuchsian solution. Proposition [Proposition 63](#prop:openness){reference-type="ref" reference="prop:openness"} gives that this path of solutions is unique if it exists, so we have a natural map from $\mathcal{H}\mathbb{M}^{n}_{*}(S)$ to the Hitchin component via evaluating at the end of the path.
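The definition of $\mathcal{H}\mathbb{M}^{n}_{*}(S)$ is, in a finite-dimensional caricature, a continuation argument: openness of the solvable set lets one follow the unique solution as $t$ grows from $0$ to $1$. A minimal numerical sketch, with a hypothetical scalar equation $F(x,t)=0$ standing in for the curvature equation and an everywhere-invertible derivative playing the role of the isomorphism $L$ (all names here are illustrative assumptions, not the paper's objects):

```python
# Toy stand-in: F mimics the curvature map, dFdx the elliptic isomorphism L,
# and t the scaling parameter on the Beltrami differentials.
def F(x, t):
    return x**3 + x - t

def dFdx(x, t):
    return 3 * x**2 + 1        # invertible for every x, like L near the Fuchsian locus

def continue_path(t_final=1.0, steps=20, newton_iters=5):
    x = 0.0                    # the "Fuchsian" solution: F(0, 0) = 0
    for i in range(1, steps + 1):
        t = t_final * i / steps
        for _ in range(newton_iters):      # Newton correction at each parameter value
            x -= F(x, t) / dFdx(x, t)
    return x

x1 = continue_path()
assert abs(F(x1, 1.0)) < 1e-10
```

Each small increase of $t$ is corrected by Newton iterations from the previous solution, mirroring how Proposition 63 propagates solvability along the path from the Fuchsian locus.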
Proposition [Proposition 63](#prop:openness){reference-type="ref" reference="prop:openness"} also implies that $\mathcal{H}\mathbb{M}^{n}_{*}(S)$ is open. Note moreover that $\mathcal{H}\mathbb{M}^{n}_{*}(S)$ is necessarily $\mathrm{Diff}_0(S)$-invariant because we can pull back solutions by diffeomorphisms. We can define $\mathcal{T}^n_{*}(S)\subset \mathcal{T}^n(S)$ to be the quotient of $\mathcal{H}\mathbb{M}^{n}_{*}(S)$ by $\mathrm{Diff}_0(S)$. The map from $\mathcal{H}\mathbb{M}^{n}_{*}(S)$ to the Hitchin component factors through $\mathcal{T}^n_{*}(S)$ because diffeomorphisms isotopic to the identity cannot change monodromy. Therefore, we have the following result: **Theorem 64**. *There is an open neighborhood of the Fuchsian locus, namely $\mathcal{T}^n_{*}(S)$, which has a canonical map to the Hitchin component.* Of course it is the nature of the map that is interesting, not its mere existence. # Higher diffeomorphisms {#Sec:higher-diffeos} In this section, we exhibit a gauge-theoretic interpretation for higher diffeomorphisms. These can be described by special $\lambda$-dependent gauge transformations acting on a family of flat 3-term connections $\lambda^{-1}\Phi+d_A+\lambda\Psi$. For $\Psi=\Phi^{*}$, this action does not change the associated point in the Hitchin component since a gauge transformation does not change the monodromy. ## Special $\lambda$-dependent gauge transformations Consider a $G$-Fock bundle $(P,\Phi,\sigma)$ equipped with a second Fock field $\Psi$. Assume the existence of a connection $d_A$ on $P$ such that $$\mathcal{A}(\lambda)=\lambda^{-1}\Phi+d_A+\lambda\Psi$$ is flat for all $\lambda \in \mathbb{C}^*$. The space of all compatible triples $(\Phi,\Psi,d_A)$ on $(P,\sigma)$ such that the associated connection is flat is denoted by $\mathcal{C}^{fl}(P,\sigma)$.
Consider a 3-term infinitesimal gauge transformation $$\eta(\lambda)=\lambda^{-1}\eta_{-1}+\eta_0+\lambda\eta_1 \;\text{ where }\; \eta_{-1},\eta_0,\eta_1 \in \Omega^0(S,\mathfrak{g}_P).$$ The action on $\mathcal{A}(\lambda)$ is given by $$\begin{aligned} \label{Eq:3-term-gauge-action} \eta(\lambda).\mathcal{A}(\lambda) &= d\eta(\lambda)+[\mathcal{A}(\lambda),\eta(\lambda)] \nonumber\\ &= \lambda^{-2}[\Phi,\eta_{-1}]+\lambda^{-1}(d_A(\eta_{-1})+[\Phi,\eta_0])+d_A(\eta_0)+[\Phi,\eta_1]+[\Psi,\eta_{-1}] \nonumber\\ & \;\;\;+ \lambda(d_A(\eta_1)+[\Psi,\eta_0])+\lambda^2[\Psi,\eta_1]. \end{aligned}$$ Since the action of $\eta_0$ is the usual complex gauge action, we set $\eta_0=0$ here to concentrate on the new $\lambda$-dependent gauge action. In order to preserve the space of 3-term connections, the lowest and highest terms in $\lambda$ have to vanish. This gives that $[\Phi,\eta_{-1}]=0$ and $[\Psi,\eta_{1}]=0$. Hence $\eta_{-1}\in Z(\Phi)$ and $\eta_1\in Z(\Psi)$. The variation of $\Phi$ is then given by $$\label{variation-phi} \delta \Phi = d_A(\eta_{-1}).$$ Similarly we get $\delta\Psi =d_A(\eta_1)$ and $\delta A = [\eta_{-1},\Psi]+[\eta_1,\Phi]$. Note that the variation $\delta A$, coming from the gauge transformation, is the same as the one induced from the filling-in procedure of Theorem [Theorem 48](#Thm-filling-in){reference-type="ref" reference="Thm-filling-in"}, since gauge transformations preserve curvature. **Definition 65**. *A *special $\lambda$-dependent gauge* of type $(\Phi,\Psi,d_A)\in\mathcal{C}^{fl}(P,\sigma)$ is an element of $\Omega^0(S,\mathfrak{g}_P\otimes \mathbb{C}[\lambda^{\pm 1}])$ of the form $\lambda^{-1}\eta_{-1}+\lambda\eta_1$ such that $\eta_{-1}\in Z(\Phi)$ and $\eta_1\in Z(\Psi)$.* Note that the notion of special $\lambda$-dependent gauge depends explicitly on $\Phi$ and $\Psi$. Considering Fock fields up to these transformations gives equivalence classes, but this equivalence relation does not come from a group action.
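The bookkeeping in the expansion above can be sanity-checked pointwise: dropping the derivative term $d\eta(\lambda)$, the bracket $[\mathcal{A}(\lambda),\eta(\lambda)]$ is a Laurent polynomial in $\lambda$ whose coefficients are exactly the displayed brackets. A toy verification with random matrices standing in for the coefficient zero-forms (purely illustrative, not part of the argument):

```python
import numpy as np

rng = np.random.default_rng(0)

def bracket(x, y):
    return x @ y - y @ x

def rand():
    return rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))

# Stand-ins for Phi, A, Psi and eta_{-1}, eta_0, eta_1 at a point of S.
Phi, A, Psi = rand(), rand(), rand()
em1, e0, e1 = rand(), rand(), rand()

# Claimed Laurent coefficients of [A(lambda), eta(lambda)]; pointwise the
# derivative terms d_A(eta_i) reduce to their bracket parts [A, eta_i].
coeffs = {
    -2: bracket(Phi, em1),
    -1: bracket(A, em1) + bracket(Phi, e0),
     0: bracket(A, e0) + bracket(Phi, e1) + bracket(Psi, em1),
     1: bracket(A, e1) + bracket(Psi, e0),
     2: bracket(Psi, e1),
}

for lam in (0.7, -1.3, 2.0 + 0.5j):
    lhs = bracket(Phi / lam + A + lam * Psi, em1 / lam + e0 + lam * e1)
    rhs = sum(c * lam ** k for k, c in coeffs.items())
    assert np.allclose(lhs, rhs)
```

In particular, demanding that the $\lambda^{-2}$ and $\lambda^{2}$ coefficients vanish recovers the conditions $\eta_{-1}\in Z(\Phi)$ and $\eta_1\in Z(\Psi)$.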
In the unitary setting, where $(P,\Phi)$ is equipped with a compatible hermitian structure $\rho$ and $\Psi=\Phi^{*}=-\rho(\Phi)$, we ask for $\eta_1=\eta_{-1}^*$ in order to preserve $\rho$. **Proposition 66**. *Variations coming from special $\lambda$-dependent gauge transformations are tangent to $\mathcal{C}^{fl}(P,\sigma)$.* *Proof.* Let $\mathcal{A}(\lambda)\in\mathcal{C}^{fl}(P,\sigma)$. Since gauge transformations preserve curvature, the flatness of $\mathcal{A}(\lambda)$ is preserved. We have also already seen that the space of 3-term connections is preserved. It remains to show that $\Phi$ and $\Psi$ stay Fock fields. It is enough to prove it for $\Phi$. Let us show that $\sigma$ negates the variation $\delta \Phi=d_A(\eta_{-1})$. Since $\eta_{-1}\in Z(\Phi)$, it is negated by $\sigma$. The canonical connection $d_A$ being $\sigma$-invariant, we get that $\delta\Phi$ is negated by $\sigma$. The only remaining point is that the nilpotency condition is preserved. This is a bit more delicate and needs Lemma [Lemma 16](#lemma:im-incl){reference-type="ref" reference="lemma:im-incl"}. Consider a local chart in which $\Phi=Fdz+\Phi_2d\bar{z}$, where $F$ is a fixed principal nilpotent element. We have to show that $\delta\Phi=d_A(\eta_{-1})\in \mathrm{Var}(\Phi)$. By Proposition [Proposition 37](#Prop:Phi-variation){reference-type="ref" reference="Prop:Phi-variation"} we know that $\mathrm{Var}(\Phi)\cong \mathrm{Im}(\mathop{\mathrm{ad}}_\Phi)\oplus Z(\Phi)\bar{K}$. The (1,0)-part of $d_A(\eta_{-1})$ is $\partial \eta_{-1}+[A_1,\eta_{-1}]$. We have $\partial \eta_{-1}\in Z(F)$ and $Z(F)\subset \mathrm{Im}(\mathop{\mathrm{ad}}_F)$ since $Z(F)$ describes the space generated by lowest weight vectors. From $[\eta_{-1},\Phi]=0$, we get $\eta_{-1}\in Z(F)$, hence $[A_1,\eta_{-1}]\in \mathrm{Im}(\mathop{\mathrm{ad}}_{\eta_{-1}})\subset \mathrm{Im}(\mathop{\mathrm{ad}}_F)$ by Lemma [Lemma 16](#lemma:im-incl){reference-type="ref" reference="lemma:im-incl"}.
Therefore, locally there is $R\in\Omega^0(S,\mathfrak{g}_P)$ such that $[R,F]=\partial \eta_{-1}+[A_1,\eta_{-1}]$. We show that $d_A(\eta_{-1})-[R,\Phi]\in Z(\Phi)\bar{K}$. By definition of $R$ this difference has no (1,0)-part. It is sufficient to prove $\bar{\partial}\eta_{-1}+[A_2,\eta_{-1}]-[R,\Phi_2]\in Z(F)$. We compute $$\label{Eq:nilp-aux} [F,\bar{\partial}\eta_{-1}+[A_2,\eta_{-1}]-[R,\Phi_2]] = [\eta_{-1},[A_2,F]]-[\Phi_2,[R,F]],$$ where we used $[F,\eta_{-1}]=0$ (hence also $[F,\bar{\partial}\eta_{-1}]=0$), the Jacobi identity and $[F,\Phi_2]=0$. Now the identity $d_A\Phi=0$ gives $$[A_2,F]=\bar{\partial}\Phi_2+[A_1,\Phi_2].$$ Together with $[R,F]=\partial \eta_{-1}+[A_1,\eta_{-1}]$ (from the definition of $R$), we continue Equation [\[Eq:nilp-aux\]](#Eq:nilp-aux){reference-type="eqref" reference="Eq:nilp-aux"}: $$[\eta_{-1},[A_2,F]]-[\Phi_2,[R,F]]=[\eta_{-1},\bar{\partial}\Phi_2]-[\Phi_2,\partial \eta_{-1}]=0,$$ where the remaining terms cancel, using the Jacobi identity and $[\eta_{-1},\Phi_2]=0$. ◻ **Remark 5**. *At first glance it might be surprising that the action is only defined on the space of *flat* 3-term connections. In fact, the condition $d_A\Phi=0$ is needed in order to get a variation $\delta\Phi$ which preserves $[\Phi\wedge\Phi]=0$, i.e. which satisfies $[\Phi\wedge\delta\Phi]=0$. Indeed, since $\delta\Phi=d_A(\eta_{-1})$ and $[\Phi,\eta_{-1}]=0$, we get $$[\Phi\wedge\delta\Phi]=[\Phi\wedge d_A(\eta_{-1})]=[d_A(\Phi),\eta_{-1}]=0.$$ The condition $F(A)+[\Phi\wedge\Psi]=0$ ensures the preservation of $d_A(\Phi)=0$: $$\delta(d_A(\Phi))= d_A(d_A(\eta_{-1}))+[([\eta_{-1},\Psi]+[\eta_1,\Phi]) \wedge \Phi]=[F(A)+[\Phi\wedge\Psi], \eta_{-1}]=0.$$* The variation $\delta \Phi=d_A(\eta_{-1})$ seems to depend on the connection $d_A$. We show that this is not really the case: **Proposition 67**.
*Up to usual gauge transformations, the variation $\delta\Phi=d_A(\eta_{-1})$ does not depend on the $\Phi$-compatible connection $d_A$.* The main ingredient of the proof is Corollary [Corollary 42](#coboundaries){reference-type="ref" reference="coboundaries"} about the vanishing of brackets in $\Phi$-cohomology. *Proof.* If $\lambda^{-1}\Phi+ d_B+\lambda\Psi$ is another flat 3-term connection, then we prove that there exists a usual gauge transformation $R$ such that $d_A(\eta_{-1})-d_B(\eta_{-1}) = [R,\Phi].$ This means that the variation of $\Phi$ in [\[variation-phi\]](#variation-phi){reference-type="eqref" reference="variation-phi"} does not depend on $d_A$ modulo gauge transformations. Note that $d_A\Phi=0$ and $d_B\Phi=0$ imply $[A-B,\Phi]=0$, and also $d_A(\eta_{-1})-d_B(\eta_{-1}) = [A-B,\eta_{-1}]$. Hence in the language of $\Phi$-cohomology, $A-B$ is a 1-cocycle and $\eta_{-1}$ is a 0-cocycle. We have seen in Corollary [Corollary 42](#coboundaries){reference-type="ref" reference="coboundaries"} that the bracket of cohomology classes of Fock fields is always a coboundary. This gives the existence of $R$. ◻ ## First variation formula {#Sec:higher-diff-action-via-gauge} Now we restrict to the case of $\mathrm{SL}_n(\mathbb{C})$-Fock bundles $(E,\Phi,g)$. We prove that the induced infinitesimal action of special $\lambda$-gauge transformations on the higher complex structure is given by the infinitesimal action of higher diffeomorphisms. Here we concentrate on the action on $\Phi$ and consider a $\lambda$-dependent gauge transformation $\lambda^{-1}\xi$ (we slightly change notation from the previous subsection, writing $\xi=\eta_{-1}$), where $\xi\in\Omega^0(S,\mathfrak{sl}(E))$ with $[\Phi,\xi]=0$. This implies that $\xi$ is a polynomial in $\Phi$, evaluated in various vector fields. Recall the variation of $\Phi$ from Equation [\[variation-phi\]](#variation-phi){reference-type="eqref" reference="variation-phi"}: $$\delta \Phi = d_A\xi.$$ **Theorem 68**.
*The variation on a Fock field $\Phi$ induced by a gauge transformation $\lambda^{-1} \xi$ with $\xi=\Phi(v_1)\cdots \Phi(v_k)$ is equivalent to the action of the Hamiltonian $H=v_1\cdots v_k$ on the higher complex structure induced by $\Phi$.* The proof is a direct computation using local coordinates and the condition $d_A\Phi=0$. *Proof.* The statement is local. Hence consider a coordinate system $(z,\bar{z})$ on $S$ and a gauge fixing in which the Fock field $\Phi$ is $\Phi=F \, dz+Q(F)d\bar{z}$, where $F$ is a fixed principal nilpotent element and $Q$ is a polynomial without constant term. We can decompose $\xi=\Phi(v_1)\cdots \Phi(v_k)$ into monomials in $F$. So let us suppose that $\xi=w_k(z,\bar{z})F^k$. We then get $$\delta \Phi = d_A\xi = d\xi+[A,\xi] = \left(\partial w_k F^k+w_k[A_1,F^k]\right)dz+\left(\bar{\partial}w_k F^k+w_k[A_2,F^k]\right)d\bar{z},$$ where we used $A=A_1dz+A_2d\bar{z}$. With $\Phi=\Phi_1dz+\Phi_2d\bar{z}$, we get $$\label{var-phi-1-2} \delta \Phi_1 = \partial w_k F^k+w_k[A_1,F^k] \;\;\text{ and }\;\; \delta \Phi_2 = \bar{\partial}w_k F^k+w_k[A_2,F^k].$$ From $\Phi_2=Q(F)=\mu_2 F+\mu_3F^2+...+\mu_nF^{n-1}$, we get $$\delta \Phi_2 = \sum_{k=2}^n \delta \mu_k \, F^{k-1} + \mu_2\delta F+\mu_3(\delta F \,F+F\delta F)+...+\mu_n\sum_{\ell=0}^{n-2} F^\ell\delta F\, F^{n-2-\ell}.$$ Hence using Equation [\[var-phi-1-2\]](#var-phi-1-2){reference-type="eqref" reference="var-phi-1-2"} one has $$\label{aux-1} \sum_{k=2}^n \delta \mu_k F^{k-1} = \bar{\partial}w_k F^k+w_k[A_2,F^k]-\sum_{m=2}^n\mu_m \sum_{\ell=0}^{m-2}F^\ell\left(\partial w_k F^k+w_k[A_1,F^k]\right)F^{m-2-\ell}.$$ The flatness condition now gives $$\begin{aligned} 0=d_A\Phi &= -\bar{\partial}\Phi_1+\partial\Phi_2 +[A_1,\Phi_2]+[\Phi_1,A_2] \\ &=\sum_{m=2}^n \partial\mu_m F^{m-1}+[A_1,Q(F)]-[A_2,F].\end{aligned}$$ We may thus deduce $$\label{aux-2} [A_2,F^k]=\sum_{\ell=0}^{k-1}F^\ell[A_2,F]F^{k-1-\ell}=kF^{k-1}\sum_{m=2}^n \partial\mu_m F^{m-1}+\sum_{m=2}^n\sum_{j=0}^{k-1}\mu_m 
F^j[A_1,F^{m-1}]F^{k-1-j}.$$ The last part of the expression needs some more manipulation: $$\begin{aligned} \sum_{j=0}^{k-1} F^j[A_1,F^{m-1}]F^{k-1-j} &= \sum_{j=0}^{k-1}\sum_{\ell=0}^{m-2}F^{j+\ell}[A_1,F]F^{m-2-\ell}F^{k-1-j}\\ &= \sum_{\ell=0}^{m-2}F^\ell[A_1,F^k]F^{m-2-\ell}.\end{aligned}$$ Combining this last equation with Equations [\[aux-2\]](#aux-2){reference-type="eqref" reference="aux-2"} and [\[aux-1\]](#aux-1){reference-type="eqref" reference="aux-1"}, we get $$\sum_{k=2}^n \delta \mu_k F^{k-1} = \bar{\partial}w_k \, F^k+kw_kF^{k-1}\sum_{m=2}^n \partial\mu_m F^{m-1}-\partial w_k\, F^k\sum_{m=2}^{n}(m-1)\mu_mF^{m-2}.$$ This gives exactly the first variation formula for higher complex structures. In order to see this, compare to the Poisson bracket expression (see [@FT Section 3.3]): $$\begin{aligned} \sum_{k=2}^n \delta \mu_k \, p^{k-1} &= \{w_kp^k,-\bar{p}+\mu_2 p+\mu_3 p^2+...+\mu_np^{n-1}\} \\ &= \bar{\partial}w_k \,p^k+kw_kp^{k-1}\sum_{m=2}^n\partial\mu_m\, p^{m-1}-\partial w_k\, p^{k}\sum_{m=2}^n (m-1)\mu_m p^{m-2}.\end{aligned}$$ This completes the proof of the theorem. ◻ **Remark 6**. *For a higher complex structure with Beltrami differentials $\mu_k=0$, for all $k\in\{2,...,n\}$, the argument above can be simplified to $$\delta \mu_2\, F+...+\delta\mu_n\, F^{n-1} = \bar{\partial}w_k\, F^k+w_k[A_2,F^k]$$ and the flatness condition provides that $[A_2,F^k]=\bar{\partial}\Phi_1=0$.* ## Constant monodromy along paths of higher diffeomorphisms In the previous subsection, we lifted infinitesimal higher diffeomorphisms to 3-term gauge transformations. Now, we show that under the right circumstances, a path of flat 3-term connections inducing higher diffeomorphic higher complex structures must be obtained from a path of 3-term gauge transformations. This shows that within the subset of higher complex structures where our main conjecture holds, monodromy is locally constant along higher diffeomorphism orbits. 
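The comparison with the Poisson bracket can also be automated. With the convention $\{f,g\}=f_p g_z - f_z g_p + f_{\bar p} g_{\bar z} - f_{\bar z} g_{\bar p}$, which reproduces the displayed expression (sign conventions may differ from [@FT]), a symbolic check for a small hypothetical case:

```python
import sympy as sp

z, zb, p, pb = sp.symbols('z zb p pb')

def poisson(f, g):
    # Poisson bracket on T^*S in coordinates (z, p) and (zb, pb),
    # with signs chosen to reproduce the displayed formula.
    return (sp.diff(f, p)*sp.diff(g, z) - sp.diff(f, z)*sp.diff(g, p)
            + sp.diff(f, pb)*sp.diff(g, zb) - sp.diff(f, zb)*sp.diff(g, pb))

n, k = 4, 2                         # hypothetical small case
w = sp.Function('w')(z, zb)
mu = {m: sp.Function('mu%d' % m)(z, zb) for m in range(2, n + 1)}

f = w*p**k
g = -pb + sum(mu[m]*p**(m - 1) for m in range(2, n + 1))

claimed = (sp.diff(w, zb)*p**k
           + k*w*p**(k - 1)*sum(sp.diff(mu[m], z)*p**(m - 1) for m in range(2, n + 1))
           - sp.diff(w, z)*p**k*sum((m - 1)*mu[m]*p**(m - 2) for m in range(2, n + 1)))

assert sp.expand(poisson(f, g) - claimed) == 0
```

The same check passes for other values of $k$ and $n$, matching the first variation formula term by term.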
We prove this statement in the setting with hermitian structure, though it surely has an analogue for more general 3-term connections. **Proposition 69**. *Suppose $\lambda^{-1}\Phi_t + d_{A_t} + \lambda\Phi_t^{*}$ is a differentiable family of flat 3-term connections on a vector bundle $E$ with compatible symmetric pairing and hermitian metric, and $\Phi_t$ is a family of positive $\mathrm{SL}_n(\mathbb{C})$-Fock fields inducing a path of higher diffeomorphic higher complex structures. Then the $3$-term connections $\lambda^{-1}\Phi_t + d_{A_t} + \lambda\Phi_t^{*}$ are all gauge equivalent.* *Proof.* The fact that the higher complex structures are higher diffeomorphic gives us a time-dependent Hamiltonian $h_t:T^*S \to \mathbb{C}$ which satisfies $$\frac{d}{dt} \Phi_t = d_{A_t}(h_t(\Phi_t)) + [\Phi_t,\eta_t],$$ where $\eta_t$ is a path in $\Omega^0(S,\mathfrak{g}_P^\sigma)$. This follows from Theorem [Theorem 68](#hamiltonian-first-variations-coincide){reference-type="ref" reference="hamiltonian-first-variations-coincide"}. Decompose $\eta_t$ into its hermitian and anti-hermitian parts $\eta_t=\eta_t^{-\rho}+\eta_t^{\rho}$. If $\eta_t^{-\rho} = 0$, then we are done because the path of 3-term infinitesimal gauge transformations $$\lambda^{-1}h_t(\Phi_t) + \eta_t + \lambda\, h_t(\Phi_t)^{*}$$ induces the correct time derivative of $\Phi_t$, thus must induce the correct derivative of the whole 3-term family. At each time $t$, the derivative of the curvature $F_{A_t} + [\Phi_t\wedge\Phi_t^{*}]$ with respect to a change in $\Phi_t$ is a linear operator. We have just argued that changing $\Phi$ by $d_{A_t}(h_t(\Phi_t)) + [\Phi_t,\eta_t^{\rho}]$ does not change the curvature, so the only thing that could be changing the curvature is $[\Phi_t,\eta_t^{-\rho}]$. The change in curvature induced by a hermitian gauge transformation of $\Phi$ is the operator $L$ from Theorem [Theorem 60](#Thm:ellipticity){reference-type="ref" reference="Thm:ellipticity"}.
Since we are assuming our connections to be flat, we have $L(\eta_t^{-\rho})=0$. This implies $\eta_t^{-\rho} = 0$ because $L$ has trivial kernel. ◻ If we assume the main Conjecture [Conjecture 56](#main-conj){reference-type="ref" reference="main-conj"}, then Proposition [Proposition 69](#prop:constancy){reference-type="ref" reference="prop:constancy"} tells us that the map from Fock bundles to the Hitchin component factors through the projection to $\mathcal{T}^n(S)$. Note the importance of the higher diffeomorphism group being connected. # Covectors {#Sec:covectors} In the previous sections we have seen the conjectural link between Fock bundles and the Hitchin component. We believe that we can extend this link to a tubular neighborhood of the Hitchin component inside the character variety $\chi(\pi_1S,G)$ for the complex Lie group $G$. For this, we no longer require the canonical connection $d_A$ to be $\sigma$-invariant. We will see that $d_A$ is then determined by cotangent vectors to the space of higher complex structures. This also leads to a gauge-theoretic meaning of the $\mu$-holomorphicity condition. ## Definition of covectors The setting of our investigations is now the following. Consider a $G$-Fock bundle $(P,\Phi,\sigma)$ on the surface $S$ equipped with a hermitian structure $\rho$. *The main difference with the previous sections is that $\rho$ is not assumed to commute with $\sigma$.* We assume $\Phi\in\mathcal{P}$ to be a positive Fock field (see Definition [Definition 52](#Def:pos-Higgs-field){reference-type="ref" reference="Def:pos-Higgs-field"}). This implies that $\Phi$ and $\Phi^{*}=-\rho(\Phi)$ are transverse. We have seen in Proposition [Proposition 37](#Prop:Phi-variation){reference-type="ref" reference="Prop:Phi-variation"} that the tangent space to the space of Fock bundles modulo usual gauge transformations is given by $Z(\Phi)\bar{K}$. This motivates the following definition of a covector: **Definition 70**.
*A *covector* for a $G$-Fock bundle $(P,\Phi)$ is an element of $(Z(\Phi)\bar{K})^*$.* We will give several realizations of covectors. For $\mathrm{SL}_n(\mathbb{C})$, we will see in Section [7.3](#Sec:mu-holo-meaning){reference-type="ref" reference="Sec:mu-holo-meaning"} below the link to the cotangent bundle to higher complex structures. **Lemma 71**. *For any $\Phi\in \mathcal{P}$ we have: $$\mathfrak{g}_P\otimes T^*S = \mathrm{Im}(\mathop{\mathrm{ad}}_\Phi) \oplus \mathrm{Im}(\mathop{\mathrm{ad}}_{\Phi^{*}}) \oplus Z(\Phi) \bar{K} \oplus Z(\Phi^{*}) K.$$* The same statement holds for $\Omega^1(S,\mathfrak{g}_P)$, where we use $\mathrm{Im}(\mathop{\mathrm{ad}}_\Phi)$ both for the subbundle and its sections. The proof uses a fixed point argument coming from a contraction. *Proof.* We do the proof at a point $p\in S$, so we may as well choose an identification of $(\mathfrak{g}_P)_p$ with $\mathfrak{g}$, and a local coordinate so we get a basis $dz,d\bar{z}$ of $T^*S\otimes \mathbb{C}$. Both vector spaces in the lemma have the same dimension, namely $2\mathrm{dim}(\mathfrak{g})$. All we have to show is that the sum on the right-hand side is direct. Suppose we have a sum $$[\Phi,A] + [\Phi^{*},B] + C d\bar{z} + D dz = 0$$ where $A,B,C,D\in \mathfrak{g}$ and $C,D$ are in the centralizers of $\Phi$ and $\Phi^{*}$ respectively. We want to show that the only solution to this equation is the trivial solution $[\Phi,A]=[\Phi^{*},B]=C d\bar{z}=D dz=0$. Split our linear relation into $dz$ and $d\bar{z}$ parts: $$[\Phi_1, A] + [\Phi_2^*, B] + D = 0$$ $$[\Phi_1^*, B] + [\Phi_2, A] + C = 0$$ Note that $\mathfrak{g}$ is an orthogonal direct sum of $\mathrm{Im}(\mathop{\mathrm{ad}}_{\Phi_1})$ and $Z(\Phi_1^*)$, and likewise for $\mathrm{Im}(\mathop{\mathrm{ad}}_{\Phi_1^*})$ and $Z(\Phi_1)$.
This means, for example, that $[\Phi_1, A]$ is $-\pi_{\mathrm{Im}(\Phi_1)}([\Phi_2^*, B])$, where $\pi_{\mathrm{Im}(\Phi_1)}$ is orthogonal projection onto $\mathrm{Im}(\mathop{\mathrm{ad}}_{\Phi_1})$. Let $c_\Phi:\mathrm{Im}(\mathop{\mathrm{ad}}_{\Phi_1})\to \mathrm{Im}(\mathop{\mathrm{ad}}_{\Phi_2})$ denote the map which takes $[\Phi_1, A]$ to $[\Phi_2, A]$ for any $A$. Define $c_{\Phi^{*}}$ analogously. We see that $[\Phi_1,A]$ has to be a fixed point of the following map: $$(-\pi_{\mathrm{Im}(\Phi_1)}) \circ c_{\Phi^{*}} \circ (-\pi_{\mathrm{Im}(\Phi^{*}_1)}) \circ c_{\Phi}.$$ The positive definiteness condition $\Phi\in\mathcal{P}$ is precisely that $c_\Phi$ (and thus $c_{\Phi^{*}}$) decreases norm. Orthogonal projections do not increase norm, so this composition must decrease norm. This implies that $[\Phi_1,A]$ must be zero. We immediately get that $[\Phi,A]=[\Phi^{*},B]=0$. The centralizers of $\Phi$ and $\Phi^{*}$ have trivial intersection because they consist of nilpotent elements preserving opposite flags, so $C$ and $D$ are zero as well. ◻ An equivalent way to phrase this lemma is as follows: for a positive Fock field $\Phi$ we have $$\Omega^1(S,\mathfrak{g}_P)= \mathrm{Var}(\Phi)\oplus \mathrm{Var}(\Phi^{*}),$$ where we used $\mathrm{Var}(\Phi)=\mathrm{Im}(\mathop{\mathrm{ad}}_\Phi)\oplus Z(\Phi)\bar{K}$ from Proposition [Proposition 37](#Prop:Phi-variation){reference-type="ref" reference="Prop:Phi-variation"}. Recall from Section [3.3](#Sec:var-Fock-fields){reference-type="ref" reference="Sec:var-Fock-fields"} the symplectic form $\omega=\int_S \mathop{\mathrm{tr}}\alpha\wedge\beta$ on $\Omega^1(S,\mathfrak{g}_P)$. Both subspaces $\mathrm{Var}(\Phi)$ and $\mathrm{Var}(\Phi^*)$ are maximal isotropic with respect to $\omega$. Pointwise they give a decomposition of $T^*S\otimes \mathfrak{g}$ into two Lagrangian subspaces. Therefore, pairing with $\omega$ gives the duality $\mathrm{Var}(\Phi)\cong \mathrm{Var}(\Phi^{*})^*$.
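The isotropy of $\mathrm{Var}(\Phi)$ can be seen directly. For $\alpha=[\Phi,a]+c$ and $\beta=[\Phi,b]+c'$ in $\mathrm{Var}(\Phi)$, with $a,b\in\Omega^0(S,\mathfrak{g}_P)$ and $c,c'\in Z(\Phi)\bar{K}$, each term of $\mathop{\mathrm{tr}}\,\alpha\wedge\beta$ vanishes (a sketch, up to signs coming from the graded conventions):

```latex
\begin{aligned}
  2\,[\Phi\wedge[\Phi,b]] &= [[\Phi\wedge\Phi],b] = 0
     &&\text{(graded Jacobi identity and } [\Phi\wedge\Phi]=0\text{)},\\
  \operatorname{tr}\big([\Phi,a]\wedge[\Phi,b]\big)
     &= \pm\operatorname{tr}\big(a\,[\Phi\wedge[\Phi,b]]\big) = 0
     &&\text{(invariance of the trace form)},\\
  \operatorname{tr}\big([\Phi,a]\wedge c\big)
     &= \pm\operatorname{tr}\big(a\,[\Phi\wedge c]\big) = 0
     &&(c\in Z(\Phi)),\\
  \operatorname{tr}\big(c\wedge c'\big) &= 0
     &&\text{(both are } (0,1)\text{-forms)}.
\end{aligned}
```

The same computation with $\Phi^{*}$ gives the isotropy of $\mathrm{Var}(\Phi^{*})$; maximality then follows from the dimension count above.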
The space of covectors is related to the cohomology group $\mathrm{H}^1(\Phi)$. From $\Omega^1(S,\mathfrak{g}_P)=\mathrm{Var}(\Phi)\oplus\mathrm{Var}(\Phi^{*})$ and $\mathrm{Var}(\Phi)\subset \ker\,\mathop{\mathrm{ad}}_\Phi$, we get $\ker\,\mathop{\mathrm{ad}}_\Phi=\mathrm{Var}(\Phi)\oplus (\ker\,\mathop{\mathrm{ad}}_\Phi\cap \mathrm{Var}(\Phi^{*}))$. Hence from $\mathrm{Var}(\Phi)=Z(\Phi)\bar{K}\oplus \mathrm{Im}(\mathop{\mathrm{ad}}_\Phi)$ we deduce $$\label{Eq:h1-aux} \mathrm{H}^1(\Phi)\cong Z(\Phi)\bar{K}\oplus (\ker\,\mathop{\mathrm{ad}}_\Phi\cap \mathrm{Var}(\Phi^{*})).$$ The latter can be identified with the space of covectors: **Proposition 72**. *The symplectic pairing induces an isomorphism between $(Z(\Phi)\bar{K})^*$ and $\ker\,\mathop{\mathrm{ad}}_\Phi\cap\mathrm{Var}(\Phi^{*})$.* *Proof.* Both spaces have the same dimension $\mathrm{rk}(\mathfrak{g})$. This follows from $\dim \mathrm{H}^1(\Phi)=2\mathrm{rk}(\mathfrak{g})$ (see Proposition [Proposition 35](#Prop:dim-phi-cohom){reference-type="ref" reference="Prop:dim-phi-cohom"}) and Equation [\[Eq:h1-aux\]](#Eq:h1-aux){reference-type="eqref" reference="Eq:h1-aux"}. Given a non-zero element $x\in \ker\,\mathop{\mathrm{ad}}_\Phi\cap\mathrm{Var}(\Phi^{*})$, there is $z\in\Omega^1(S,\mathfrak{g}_P)$ such that $\omega(x,z)\neq 0$ since $\omega$ is non-degenerate. The symplectic pairing between $\ker\,\mathop{\mathrm{ad}}_\Phi\cap\mathrm{Var}(\Phi^{*})$ and $\mathrm{Var}(\Phi^{*})\oplus \mathrm{Im}(\mathop{\mathrm{ad}}_\Phi)$ is trivial, hence we can reduce to $z\in Z(\Phi)\bar{K}$ by Lemma [Lemma 71](#lemma:contraction){reference-type="ref" reference="lemma:contraction"}. Therefore, the map $x\mapsto \omega(x,.)$ from $\ker\,\mathop{\mathrm{ad}}_\Phi\cap\mathrm{Var}(\Phi^{*})$ to $(Z(\Phi)\bar{K})^*$ is injective, thus an isomorphism. 
◻ Combining this proposition with Equation [\[Eq:h1-aux\]](#Eq:h1-aux){reference-type="eqref" reference="Eq:h1-aux"}, we get $$\mathrm{H}^1(\Phi)\cong Z(\Phi)\bar{K}\oplus (Z(\Phi)\bar{K})^*.$$ This shows that $\mathrm{H}^1(\Phi)$ describes variations of $\Phi$ and a covector modulo gauge transformations. ## Canonical connection with covectors We want to equip $P$ with a unitary $\Phi$-compatible connection $d_A$, generalizing the filling-in procedure from Theorem [Theorem 48](#Thm-filling-in){reference-type="ref" reference="Thm-filling-in"}, to get a canonical 3-term connection $\Phi+d_A+\Phi^{*}$. Denote by $\Pi$ the space of all unitary connections which are $\Phi$-compatible (solutions to $d_A\Phi=0$). Note that such connections are automatically $\Phi^{*}$-compatible. Since $\Phi$ and $\Phi^{*}$ are transverse, $\Pi$ is non-empty and is modeled on $\rho$-invariant elements of $\ker\,\mathop{\mathrm{ad}}_\Phi\cap\ker\,\mathop{\mathrm{ad}}_{\Phi^{*}}$. We start by analysing the vector space $(\ker\,\mathop{\mathrm{ad}}_\Phi\cap\ker\,\mathop{\mathrm{ad}}_{\Phi^{*}})^\rho$ and show that it is a realization of the space of covectors. **Proposition 73**. *The symplectic pairing gives an isomorphism $$\ker(\mathop{\mathrm{ad}}_\Phi)\cap\ker(\mathop{\mathrm{ad}}_{\Phi^{*}})\cong (Z(\Phi)\bar{K}\oplus Z(\Phi^{*})K)^*.$$* *Proof.* Both spaces have the same dimension. Indeed, the right-hand side is of dimension $\dim Z(\Phi)+\dim Z(\Phi^{*})=2\mathrm{rk}(\mathfrak{g})$. The left-hand side can be identified with $\mathrm{H}^1(\Phi)$, which is of the same dimension by Proposition [Proposition 35](#Prop:dim-phi-cohom){reference-type="ref" reference="Prop:dim-phi-cohom"}. The symplectic pairing between $\ker\,\mathop{\mathrm{ad}}_\Phi\cap\ker\,\mathop{\mathrm{ad}}_{\Phi^{*}}$ and $\mathrm{Im}(\mathop{\mathrm{ad}}_\Phi)\oplus\mathrm{Im}(\mathop{\mathrm{ad}}_{\Phi^{*}})$ is trivial by the cyclicity of the Killing form.
Since $\omega$ is non-degenerate and using Lemma [Lemma 71](#lemma:contraction){reference-type="ref" reference="lemma:contraction"}, the map from $\ker\,\mathop{\mathrm{ad}}_\Phi\cap\ker\,\mathop{\mathrm{ad}}_{\Phi^{*}}$ to $(Z(\Phi)\bar{K}\oplus Z(\Phi^{*})K)^*$ given by $x\mapsto \omega(x,\cdot)$ is injective. Therefore it is an isomorphism. ◻ Since $\rho$ exchanges $Z(\Phi)\bar{K}$ with $Z(\Phi^{*})K$, the $\rho$-invariant part of $(Z(\Phi)\bar{K}\oplus Z(\Phi^{*})K)^*$ maps isomorphically onto either of the two factors. Thus: **Corollary 74**. *The symplectic pairing induces an isomorphism between $(Z(\Phi)\bar{K})^*$ and $(\ker\,\mathop{\mathrm{ad}}_\Phi\cap\ker\,\mathop{\mathrm{ad}}_{\Phi^{*}})^\rho$.* Using this identification, we sometimes call an element of $(\ker\,\mathop{\mathrm{ad}}_\Phi\cap\ker\,\mathop{\mathrm{ad}}_{\Phi^{*}})^\rho$ a *covector*. In the case of $\mathrm{SL}_n(\mathbb{C})$, the centralizer $Z(\Phi)=Z(\Phi_1)$ is generated by the powers of $\Phi_1$. Hence a covector $t\in (\ker\,\mathop{\mathrm{ad}}_\Phi\cap\ker\,\mathop{\mathrm{ad}}_{\Phi^{*}})^\rho$ is uniquely described by $(t_k=\mathop{\mathrm{tr}}\,\Phi_1^{k}t^{1,0})_{1\leq k\leq n-1}$. This is similar to the formula for covectors in higher complex structures [@thomas-flat-conn Proposition 4.2]. Now we study the affine space $\Pi$ of unitary $\Phi$-compatible connections. For a connection $d_A\in\Pi$, we put $A^{-\sigma}=\tfrac{1}{2}(d_A-\sigma(d_A))\in \Omega^1(S,\mathfrak{g}_P^{-\sigma})$. It is important to notice that $A^{-\sigma}\in \ker(\mathop{\mathrm{ad}}_\Phi)$. Indeed, this is the $\sigma$-invariant part of $d_A\Phi=0$. Note also that in general, since $\rho$ and $\sigma$ do not necessarily commute, $A^{-\sigma}\notin \Pi$. Again we use the symplectic pairing with $Z(\Phi)\bar{K}$ and define the map $$\psi: \left \{ \begin{array}{ccc} \Pi &\rightarrow & (Z(\Phi)\bar{K})^* \\ d_A &\mapsto & \int_S\mathop{\mathrm{tr}}(A^{-\sigma}\,\cdot) \end{array} \right.$$ **Proposition 75**.
*The map $\psi$ is an affine map covering the linear map from Corollary [Corollary 74](#coro:covector-para){reference-type="ref" reference="coro:covector-para"}. Hence it is an isomorphism.* *Proof.* For $d_A,d_B\in\Pi$, we have $d_A-d_B=A-B\in (\ker\,\mathop{\mathrm{ad}}_\Phi\cap\ker\,\mathop{\mathrm{ad}}_{\Phi^{*}})^\rho$. In particular $A-B\in \ker\,\mathop{\mathrm{ad}}_\Phi$. Since $A^{-\sigma}\in\ker\,\mathop{\mathrm{ad}}_\Phi$ we also have $A^{-\sigma}-B^{-\sigma}\in \ker\,\mathop{\mathrm{ad}}_\Phi$. By Proposition [Proposition 41](#Prop-no-sigma-cohom){reference-type="ref" reference="Prop-no-sigma-cohom"} there is no $\sigma$-invariant $\Phi$-cohomology, so $[A-B]=[A^{-\sigma}-B^{-\sigma}]\in \mathrm{H}^1(\Phi)$. Therefore $$\psi(d_A)-\psi(d_B)=\int_S\mathop{\mathrm{tr}}((A^{-\sigma}-B^{-\sigma})\,\cdot)=\int_S\mathop{\mathrm{tr}}((A-B)\,\cdot),$$ where we used that the pairing with $Z(\Phi)\bar{K}$ depends only on the cohomology class (since it pairs to zero with $\mathrm{Im}\,\mathop{\mathrm{ad}}_\Phi$). ◻ **Proposition 76**. *Let $\mathcal{A}$ denote the space of all connections on $P$. The map $\mathcal{A}\times Z(\Phi)\bar{K}\to \mathbb{C}$ given by $(d_A,x)\mapsto \int_S\mathop{\mathrm{tr}}(A^{-\sigma}x)$ is independent of the choice of $\sigma$ (among $\sigma_0$-structures negating $\Phi$), and is gauge invariant.* *Proof.* For the first claim, we know from Proposition [Proposition 40](#Prop:var-sigma){reference-type="ref" reference="Prop:var-sigma"} that an infinitesimal variation of a $\sigma_0$-structure negating $\Phi$ is described by $\mathrm{H}^0(\Phi)$. More precisely, a variation $\delta\sigma$ is described by $\delta\sigma(x)=[\xi,x]$ for all $x\in \mathfrak{g}$ where $\xi\in \mathrm{H}^0(\Phi)=Z(\Phi)$. The change of $A^{-\sigma}=\frac{1}{2}(A-\sigma(A))$ is given by $\delta A^{-\sigma}=-\frac{1}{2}[\xi,A] \in \mathrm{Im}(\mathop{\mathrm{ad}}_\xi)$.
Since $Z(\Phi)$ is abelian we get $\int_S\mathop{\mathrm{tr}}([\xi,A_1]x)=\int_S\mathop{\mathrm{tr}}([x,\xi]A_1)=0$. For the second claim, consider a gauge transformation $\eta\in\mathop{\mathrm{Aut}}(P)$. The actions on $\Phi$ and $A^{-\sigma}$ are given by $\eta.\Phi=\eta\Phi \eta^{-1}$ and $\eta.A^{-\sigma}=\eta A^{-\eta.\sigma}\eta^{-1}$ where $\eta.\sigma$ is the pull-back of $\sigma$ along $\eta$. Note that $\eta.\sigma$ is compatible with $\eta.\Phi$. We distinguish two cases. If $\eta.\sigma=\sigma$, i.e. $\eta\in\mathop{\mathrm{Aut}}(P,\sigma)$, the pairing stays unchanged since $Z(\eta\Phi\eta^{-1})=\eta Z(\Phi)\eta^{-1}$ and the Killing form is Ad-invariant. If $\eta.\Phi=\Phi$, we conclude by the first part since $\eta.\sigma$ is then $\Phi$-compatible. Both cases generate all gauge transformations since $\mathfrak{g}^\sigma$ is a maximal Lie subalgebra of $\mathfrak{g}$. ◻ From all this, we get a canonical base point in $\Pi$, generalizing the filling-in procedure from Theorem [Theorem 48](#Thm-filling-in){reference-type="ref" reference="Thm-filling-in"}. **Theorem 77**. *Consider a $G$-Fock bundle $(P,\Phi,\sigma)$ equipped with a hermitian structure $\rho$. If $\Phi\in\mathcal{P}$ is positive, then there is a unique unitary $\Phi$-compatible connection $d_{A_0}$ such that $A_0^{-\sigma}\in\mathrm{Var}(\Phi)$. This is independent of the choice of $\sigma$.* *Proof.* We start with uniqueness: if $d_{A_0}\in\Pi$ is such that $A_0^{-\sigma}\in\mathrm{Var}(\Phi)$, then $\psi(d_{A_0})=0$ since $Z(\Phi)\bar{K}\subset \mathrm{Var}(\Phi)$ which is isotropic. By Proposition [Proposition 75](#Prop:affine-iso){reference-type="ref" reference="Prop:affine-iso"}, $\psi$ is an isomorphism, which gives uniqueness. To show existence, consider $d_{A_0}=\psi^{-1}(\{0\})$. Then $\int_S\mathop{\mathrm{tr}}(A_0^{-\sigma}\,\cdot)$ vanishes both on $Z(\Phi)\bar{K}$ and on $\mathrm{Im}(\mathop{\mathrm{ad}}_\Phi)$ since $A_0^{-\sigma}\in\ker\,\mathop{\mathrm{ad}}_\Phi$.
Hence $A_0^{-\sigma}\in\mathrm{Var}(\Phi)^{\perp_\omega}=\mathrm{Var}(\Phi)$. Finally, the independence of the choice of $\sigma$ follows directly from Proposition [Proposition 76](#Prop:covector){reference-type="ref" reference="Prop:covector"}. ◻ A more conceptual formulation of the previous theorem is given by: **Corollary 78**. *The space of unitary connections $d_A$ on $P$ such that $d_A \Phi=0$ is naturally the dual space to $T_\Phi( \mathcal{P})/\mathrm{Im}(\mathop{\mathrm{ad}}_\Phi)$.* *Proof.* By definition, we have $T_\Phi(\mathcal{P})=\mathrm{Var}(\Phi)\cong \mathrm{Im}(\mathop{\mathrm{ad}}_\Phi)\oplus Z(\Phi)\bar{K}$. Corollary [Corollary 74](#coro:covector-para){reference-type="ref" reference="coro:covector-para"} shows that $(\ker\,\mathop{\mathrm{ad}}_\Phi\cap\ker\,\mathop{\mathrm{ad}}_{\Phi^{*}})^\rho$, which is identified with the space of unitary $\Phi$-compatible connections via Theorem [Theorem 77](#Thm:gen-filling-in){reference-type="ref" reference="Thm:gen-filling-in"}, is dual to $Z(\Phi)\bar{K}$. ◻ The generalized filling-in procedure [Theorem 77](#Thm:gen-filling-in){reference-type="ref" reference="Thm:gen-filling-in"} allows us to describe a point $d_A\in\Pi$ in two different but equivalent ways. First, using the canonical base point $d_{A_0}\in \Pi$, we can write $$d_A=d_{A_0}+t$$ where $t\in(\ker\,\mathop{\mathrm{ad}}_\Phi\cap\ker\,\mathop{\mathrm{ad}}_{\Phi^{*}})^\rho$ is a covector. The second description simply decomposes $d_A$ into its $\sigma$-invariant and $\sigma$-anti-invariant parts: $$d_A=d_{A^\sigma}+A^{-\sigma}.$$ Note that in general, neither $d_{A^\sigma}$ nor $A^{-\sigma}$ is in $\Pi$. Proposition [Proposition 75](#Prop:affine-iso){reference-type="ref" reference="Prop:affine-iso"} shows that using the pairing with $Z(\Phi)\bar{K}$ to parametrize $d_A\in\Pi$ gives the same result whether we use $t$ or $A^{-\sigma}$. This is why we also refer to $A^{-\sigma}$ as being the covector.
We formulate an extension of our main conjecture [Conjecture 56](#main-conj){reference-type="ref" reference="main-conj"} including the data of a covector. Let $(P, \Phi)$ be a $G$-Fock bundle over $S$ equipped with a hermitian structure $\rho$ such that $\Phi\in\mathcal{P}$ is positive. Denote by $\mu$ the $\mathfrak{g}$-complex structure on $S$ induced by $\Phi$ and let $t\in(Z(\Phi)\bar{K})^*$ be a covector. From Theorem [Theorem 77](#Thm:gen-filling-in){reference-type="ref" reference="Thm:gen-filling-in"} we know that there is a unique unitary $\Phi$-compatible connection $d_A$ described by $t$. We need the following definition. **Definition 79**. *A covector $t$ is called *$\mu$-holomorphic* if the associated connection $d_A$ has curvature $F(A)\in\mathrm{Im}(\mathop{\mathrm{ad}}_\Phi)\subset\Omega^2(S,\mathfrak{g}_P)$.* We will see below in Theorem [Theorem 82](#Thm:mu-holo){reference-type="ref" reference="Thm:mu-holo"} that this condition coincides with the usual $\mu$-holomorphicity from Equation [\[Eq:mu-holo-cond-hcs\]](#Eq:mu-holo-cond-hcs){reference-type="eqref" reference="Eq:mu-holo-cond-hcs"} in the case of $\mathrm{SL}_n(\mathbb{C})$. For now, we show that this notion only depends on $\mu$ and $t$: **Proposition 80**. *The condition $F(A)\in\mathrm{Im}(\mathop{\mathrm{ad}}_{\Phi})$ only depends on $t$ and the $\mathfrak{g}$-complex structure $\mu$ induced by $\Phi$.* *Proof.* The condition clearly depends only on the isomorphism class of the Fock bundle since both $\Phi$ and $F(A)$ get conjugated under a gauge transformation. To show independence from the choice of $\rho$, note that changing $\rho$ changes $d_A$ by a coboundary term: the new middle term $d_{A'}$ is given by $d_{A'}=d_A+[C,\Phi]$ for some $C\in \Omega^0(S,\mathfrak{g}_P)$ (see Theorem [Theorem 48](#Thm-filling-in){reference-type="ref" reference="Thm-filling-in"}).
The curvature change is then $$F(A')=F(A)+d[C,\Phi]+[A\wedge [C,\Phi]]+\tfrac{1}{2}[[C,\Phi]\wedge[C,\Phi]].$$ Modulo $\mathrm{Im}(\mathop{\mathrm{ad}}_{\Phi})$, the second term equals $[C,d\Phi]$, the third term equals $[C,[A\wedge \Phi]]$ (using the Jacobi identity) and the last term vanishes (using again the Jacobi identity and $(\mathop{\mathrm{ad}}_\Phi)^2=0$). Therefore $F(A')-F(A)\equiv [C,d\Phi+[A\wedge\Phi]]=0 \,\mod\,\mathrm{Im}(\mathop{\mathrm{ad}}_\Phi)$ using $d_A\Phi=0$. ◻ We can now formulate the extension of our main conjecture, including the covector: **Conjecture 81**. *Let $(P, \Phi)$ be a $G$-Fock bundle with a hermitian structure $\rho$ such that $\Phi\in\mathcal{P}$ is positive. Denote by $\mu$ the induced $\mathfrak{g}$-complex structure on $S$ and consider a covector $t\in(Z(\Phi)\bar{K})^*$.* *If $t$ is $\mu$-holomorphic and small, then there is a gauge transformation $\eta\in \Omega^0(S,\mathfrak{g}_P^{-\rho})$ such that the 3-term connection associated to $e^{-\eta}\Phi e^{\eta}$ and covector $t$ is flat.* Assuming Conjecture [Conjecture 81](#main-conj-covec){reference-type="ref" reference="main-conj-covec"}, we get a map from Fock bundles with compatible connection to the character variety by taking the monodromy of the flat connection $\Phi+d_A+\Phi^{*}$. We expect that the image of this map is a tubular neighborhood of the Hitchin component (which corresponds exactly to those Fock bundles with covector zero). This picture generalizes the work of Donaldson [@Donaldson] and Trautwein [@Traut] on the space of almost-Fuchsian representations in the $\mathrm{SL}_2(\mathbb{C})$-case. ## $\mu$-holomorphicity {#Sec:mu-holo-meaning} We now give the gauge-theoretic meaning of the $\mu$-holomorphicity condition [\[Eq:mu-holo-cond-hcs\]](#Eq:mu-holo-cond-hcs){reference-type="eqref" reference="Eq:mu-holo-cond-hcs"}.
For that we consider an $\mathrm{SL}_n(\mathbb{C})$-Fock bundle $(E,\Phi,g)$ with hermitian metric $h$ and assume that $\Phi$ is positive. Denote by $\sigma$ and $\rho$ the involutions on $\mathfrak{sl}(E)$ induced by $g$ and $h$. Denote by $\mu$ the induced higher complex structure on $S$. Finally, let $A^{-\sigma}$ be a covector. This describes a 3-term connection $\Phi+d_A+\Phi^*$ by Theorem [Theorem 77](#Thm:gen-filling-in){reference-type="ref" reference="Thm:gen-filling-in"}. Recall from the previous subsections that the pairing with $Z(\Phi)\bar{K}$ parametrizes covectors. Since $Z(\Phi)=Z(\Phi_1)$ is generated by powers of $\Phi_1$, we consider $$\label{formula-t} t_k=\mathop{\mathrm{tr}}\Phi_1^{k-1}A^{-\sigma}_1$$ where $2\leq k\leq n$ and $A_1^{-\sigma}$ denotes the $(1,0)$-part of $A^{-\sigma}$. **Theorem 82**. *Let $(E,\Phi,g)$ be an $\mathop{\mathrm{SL}}_n(\mathbb{C})$-Fock bundle with hermitian structure $h$ and positive $\Phi$. Denote by $\mu=(\mu_k)_{2\leq k\leq n}$ the induced higher complex structure and let $t=(t_k)_{2\leq k\leq n}$ be the covector data from [\[formula-t\]](#formula-t){reference-type="eqref" reference="formula-t"}. Then the $\mu$-holomorphicity condition $$(-\bar{\partial}\!+\!\mu_2\partial\!+\!k\partial\mu_2)t_{k}+\sum_{l=1}^{n-k}((l\!+\!k)\partial\mu_{l+2}+(l\!+\!1)\mu_{l+2}\partial)t_{k+l}=0$$ for all $k\in\{2,3,...,n\}$, is equivalent to the condition $$F(A)\in\mathrm{Im}(\mathop{\mathrm{ad}}_\Phi)\subset\Omega^2(S,\mathfrak{sl}(E)),$$ where $F(A)$ is the curvature of the unique unitary $\Phi$-compatible connection described by the covector $t$.* We can make the condition more symmetric in $\Phi$ and $\Phi^*$. Since $d_A$ is unitary, its curvature is $\rho$-invariant. Hence $F(A)\in\mathrm{Im}(\mathop{\mathrm{ad}}_{\Phi})$ is equivalent to $F(A)\in\mathrm{Im}(\mathop{\mathrm{ad}}_{\Phi})\cap\mathrm{Im}(\mathop{\mathrm{ad}}_{\Phi^*})$. Another useful reformulation using $\Phi$-cohomology reads $[F(A)]= 0 \in \mathrm{H}^2(\Phi)$.
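For $n=2$ the equivalence can be checked symbolically. Below is a minimal sketch with sympy, assuming the local gauge $\Phi=F\,dz+\mu_2F\,d\bar{z}$ together with $A^{-\sigma}=t_2E\,dz+\mu_2t_2E\,d\bar{z}$ and $A^\sigma=aH\,dz+bH\,d\bar{z}$ (the normalizations of Example 83), and the convention $[\alpha\wedge\beta](\partial_z,\partial_{\bar z})=[\alpha_z,\beta_{\bar z}]-[\alpha_{\bar z},\beta_z]$, which is an assumption of this illustration:

```python
import sympy as sp

z, zb = sp.symbols('z zbar')
mu2, t2, a = (sp.Function(f)(z, zb) for f in ('mu2', 't2', 'a'))

# standard sl_2 basis
E = sp.Matrix([[0, 1], [0, 0]])
F = sp.Matrix([[0, 0], [1, 0]])
H = sp.Matrix([[1, 0], [0, -1]])
br = lambda X, Y: X*Y - Y*X

# d_{A^sigma}(Phi) = 0 forces 2b - 2a*mu2 + d(mu2) = 0
b = a*mu2 - sp.diff(mu2, z)/2

# dz^dzbar-coefficient of d_{A^sigma}(A^{-sigma}) for
# A^{-sigma} = t2*E dz + mu2*t2*E dzbar and A^sigma = a*H dz + b*H dzbar
two_form = (sp.diff(mu2*t2, z) - sp.diff(t2, zb))*E \
    + br(a*H, mu2*t2*E) - br(b*H, t2*E)
lhs = sp.expand((F*two_form).trace())
rhs = sp.expand(-sp.diff(t2, zb) + mu2*sp.diff(t2, z) + 2*sp.diff(mu2, z)*t2)
assert sp.simplify(lhs - rhs) == 0
```

The assertion reproduces $\mathop{\mathrm{tr}} F\,d_{A^\sigma}(A^{-\sigma})=(-\bar\partial+\mu_2\partial+2\partial\mu_2)t_2$, so in this gauge the curvature condition reduces to the $n=2$ $\mu$-holomorphicity equation.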
From Proposition [Proposition 41](#Prop-no-sigma-cohom){reference-type="ref" reference="Prop-no-sigma-cohom"}, we know that there is no $\sigma$-invariant $\Phi$-cohomology. Hence it is sufficient to show that $[F(A)^{-\sigma}]=0 \in\mathrm{H}^2(\Phi)$. Decomposing $d_A=d_{A^\sigma}+A^{-\sigma}$, we get $$F(A)=F(A^\sigma)+A^{-\sigma}\wedge A^{-\sigma}+d_{A^\sigma}(A^{-\sigma}).$$ Therefore $[F(A)]=[d_{A^\sigma}(A^{-\sigma})] \mod \mathrm{H}^2(\Phi)$. The symplectic pairing on $\Phi$-cohomology induces an isomorphism $\mathrm{H}^2(\Phi)\cong \mathrm{H}^0(\Phi)^*$. Thus, an element in $\mathrm{H}^2(\Phi)$ is trivial if and only if its pairing with $\mathrm{H}^0(\Phi)=Z(\Phi)$ is trivial. The centralizer $Z(\Phi)$ is generated (as vector space) by the elements $\Phi_1^k$ for $1\leq k\leq n$. We will see that the condition $$\label{imp-eq} \mathop{\mathrm{tr}}\Phi_1^k d_{A^\sigma}(A^{-\sigma})=0$$ gives the $\mu$-holomorphicity condition for $t_{k+1}$. Before going to the general case, consider the example for $n=2$. **Example 83**. *Fix an arbitrary complex structure on $S$ and fix a standard basis $(F,H,E)$ in $\mathfrak{sl}_2$. We work in a gauge where $\Phi=F \, dz+\mu_2F\, d\bar{z}$. By Equation [\[formula-t\]](#formula-t){reference-type="eqref" reference="formula-t"}, we know that $A^{-\sigma}=t_2E\, dz+\mu_2t_2 E \,d\bar{z}$. Since the only $\sigma$-invariant part of $\mathfrak{sl}_2$ is spanned by $H$, we can put $A^\sigma=aH\, dz+bH\, d\bar{z}$ for some local functions $a$ and $b$. From $d_{A^\sigma}(\Phi)=0$, we get $2b-2a\mu_2+\partial\mu_2=0$. This yields $$\begin{aligned} \mathop{\mathrm{tr}}Fd_{A^\sigma}(A^{-\sigma}) &= -\bar{\partial}t_2+\partial(\mu_2t_2)+2a\mu_2t_2-2bt_2 \\ &= -\bar{\partial}t_2+\mu_2 \partial t_2+2\partial\mu_2\, t_2,\end{aligned}$$ which is the $\mu$-holomorphicity condition for $n=2$.* ### Interlude: Natural basis from principal $\mathfrak{sl}_2$-triple {#Sec:interlude} Consider a complex simple Lie algebra $\mathfrak{g}$. 
By a theorem of Kostant (see Theorem [\[Thm:one-reg-orbit\]](#Thm:one-reg-orbit){reference-type="ref" reference="Thm:one-reg-orbit"}), we know that there is a unique principal $\mathfrak{sl}_2$-triple in $\mathfrak{g}$ up to conjugation. Fix such a triple $(E,H,F)$. It induces two decompositions of $\mathfrak{g}$. First, by weights of $\mathop{\mathrm{ad}}_H$: $$\mathfrak{g}\cong \bigoplus_{k\in\mathbb{Z}} \mathfrak{g}_k \;\; \text{ where } \;\; \mathfrak{g}_k=\{g\in \mathfrak{g}\mid [H,g]=k g\}.$$ Second, via the bracket action, $\mathfrak{g}$ becomes an $\mathfrak{sl}_2$-module which can be decomposed into irreducible representations: $$\mathfrak{g}\cong \bigoplus_{i\in\mathbb{N}} n_iV_{i},$$ where $V_i$ is the irreducible representation of $\mathfrak{sl}_2$ of dimension $2i+1$ and $n_i\in \mathbb{N}$ are the multiplicities. In the sequel we only consider $\mathfrak{g}=\mathfrak{sl}_n(\mathbb{C})$. Then, we know that $n_i=1$ for $1\leq i\leq n-1$ and $n_i=0$ otherwise. Using both decompositions, we get $$\label{line-decompo} \mathfrak{g}\cong \bigoplus_{k,i} \mathfrak{g}_k\cap V_{i},$$ which is a line decomposition; see also Figure [2](#Fig:g-decompo){reference-type="ref" reference="Fig:g-decompo"}. ![Line decomposition of $\mathfrak{sl}_n$ by a principal triple](principal-sl2-decomposition.pdf){#Fig:g-decompo height="4cm"} All irreducible representations of $\mathfrak{sl}_2$ are highest weight representations. This means that for a given irreducible representation $V$, there is a highest weight vector $v\in V\backslash\{0\}$ with $E.v=0$. Then acting successively with $F$ generates all of $V$. In our setting, the highest weight vector of $V_i$ is given by $E^i$. Hence a basis adapted to the line decomposition [\[line-decompo\]](#line-decompo){reference-type="eqref" reference="line-decompo"} is given by $$\label{def-G} G_{i,j}=\mathop{\mathrm{ad}}_F^{i-j}(E^i) \in V_i\cap \mathfrak{g}_{2j}$$ where $i\in \{1,...,n-1\}$ and $j\in \{-i,-i+1,...,i-1,i\}$.
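The construction of the basis $G_{i,j}$ can be carried out concretely. The following numpy sketch for $\mathfrak{sl}_4$ uses one standard normalization of the principal triple (subdiagonal $F$; the normalization is an assumption of the illustration) and checks that $G_{i,j}\in\mathfrak{g}_{2j}$ and that the $G_{i,j}$ form a basis:

```python
import numpy as np

n = 4
# principal sl2-triple in sl_n: F subdiagonal ones, E chosen so that [E, F] = H
F = np.diag(np.ones(n-1), -1)
E = np.diag([k*(n-k) for k in range(1, n)], 1).astype(float)
ad = lambda X, Y: X @ Y - Y @ X
H = ad(E, F)
assert np.allclose(ad(H, E), 2*E) and np.allclose(ad(H, F), -2*F)

# G_{i,j} = ad_F^{i-j}(E^i), starting from the highest weight vector E^i of V_i
G = {}
for i in range(1, n):
    v = np.linalg.matrix_power(E, i)
    for j in range(i, -i - 1, -1):
        G[(i, j)] = v
        v = ad(F, v)

# each G_{i,j} has ad_H-weight 2j, and together the G_{i,j} form a basis of sl_n
for (i, j), g in G.items():
    assert np.allclose(ad(H, g), 2*j*g)
M = np.column_stack([g.ravel() for g in G.values()])
assert np.linalg.matrix_rank(M) == n*n - 1
```

The count also matches: $\sum_{i=1}^{n-1}(2i+1)=n^2-1=\dim\mathfrak{sl}_n$.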
A nice property of this line decomposition is its behavior under the trace: $$\mathop{\mathrm{tr}}(G_{i,j}G_{k,\ell}) = 0 \;\text{ if }\; (k,\ell)\neq (i,-j).$$ In terms of Figure [2](#Fig:g-decompo){reference-type="ref" reference="Fig:g-decompo"}, this says that the trace of a product of two elements of the basis is only non-zero if the two corresponding dots lie symmetrically with respect to the middle axis. More details can be found in [@Malek] (in particular Section 4), where the Lie bracket is computed in the basis $(G_{i,j})$ using a graphical calculus. ### Proof of Theorem [Theorem 82](#Thm:mu-holo){reference-type="ref" reference="Thm:mu-holo"} {#proof-of-theorem-thmmu-holo} The proof of Theorem [Theorem 82](#Thm:mu-holo){reference-type="ref" reference="Thm:mu-holo"} is a nice but lengthy computation, using principally the Lie theory of the decomposition [\[line-decompo\]](#line-decompo){reference-type="eqref" reference="line-decompo"}. Since Theorem [Theorem 82](#Thm:mu-holo){reference-type="ref" reference="Thm:mu-holo"} is a local statement, we can work in local coordinates. To do that, we fix an arbitrary complex structure on $S$. The general setting is the following: using a gauge transformation, we can fix $$\Phi(z,\bar{z})=F\, dz+ Q(F)\,d\bar{z}\;\;\text{ where } Q(F)=\textstyle\sum_{k=2}^n \mu_k(z,\bar{z}) F^{k-1}.$$ Further put $A^{-\sigma}=B\,dz+C\, d\bar{z}$. From $[A^{-\sigma}\wedge \Phi]=0$ we have $$[F,C]=[Q(F),B].$$ Finally, we put $A^\sigma=M_1\, dz+M_2\, d\bar{z}$ where $M_i\in \mathfrak{g}^\sigma$. Since $\mathfrak{g}^\sigma\subset \mathrm{Im}(\mathop{\mathrm{ad}}_F)$, we can write $M_i=[F,N_i]$ with suitable $N_i\in \mathfrak{g}$, where $i=1,2$.
Recall from Equation [\[imp-eq\]](#imp-eq){reference-type="eqref" reference="imp-eq"} that we want to compute $$\mathop{\mathrm{tr}}F^k d_{A^\sigma}(A^{-\sigma})= \mathop{\mathrm{tr}}(F^k dA^{-\sigma}) + \mathop{\mathrm{tr}}(F^k[A^\sigma\wedge A^{-\sigma}]).$$ Using the above notations and the definition $t_{k+1}=\mathop{\mathrm{tr}}F^kB$ we get $$\begin{aligned} \label{Eq-proof-mu-holo-0} \mathop{\mathrm{tr}}(F^kd_{A^\sigma}(A^{-\sigma})) &= \mathop{\mathrm{tr}}(F^k(\partial C-\bar{\partial}B)) + \mathop{\mathrm{tr}}(F^k([M_1,C]+[B,M_2])) \nonumber\\ &= -\bar{\partial}t_{k+1}+\mathop{\mathrm{tr}}(F^k\partial C)+\mathop{\mathrm{tr}}M_1[C,F^k]+\mathop{\mathrm{tr}}M_2[F^k,B].\end{aligned}$$ We analyze the different terms separately. [Step 1:]{.ul} We compute $$\begin{aligned} \mathop{\mathrm{tr}}M_1[C,F^k] &=\mathop{\mathrm{tr}}[F,N_1][C,F^k] = \mathop{\mathrm{tr}}N_1[[C,F^k],F] \\ &= \mathop{\mathrm{tr}}N_1[[C,F],F^k]= \mathop{\mathrm{tr}}N_1[[B,Q(F)],F^k] \\ &= \mathop{\mathrm{tr}}N_1[[B,F^k],Q(F)] = \mathop{\mathrm{tr}}[B,F^k][Q(F),N_1]\end{aligned}$$ where we used $[C,F]=[B,Q(F)]$, cyclicity of the trace and the Jacobi identity. Combined with $M_2=[F,N_2]$ we get $$\label{Eq-proof-mu-holo-1} \mathop{\mathrm{tr}}M_1[C,F^k]+\mathop{\mathrm{tr}}M_2[F^k,B] = \mathop{\mathrm{tr}}[F^k,B]X$$ where $X=[N_1,Q(F)]+[F,N_2]$. [Interlude:]{.ul} From the flatness equation $d_{A^\sigma}(\Phi)=0$, we get an important relation for $X$: $$\begin{aligned} 0=d_{A^\sigma}(\Phi)&= \partial Q(F)+[M_1,Q(F)]+[F,M_2]\\ &= \partial Q(F)+[[F,N_1],Q(F)]+[F,[F,N_2]] \\ &= \partial Q(F)-[[N_1,Q(F)]+[F,N_2],F]\end{aligned}$$ where we used the Jacobi identity. Therefore: $$\label{eq-for-X} \partial Q(F)=[X,F].$$ We can explicitly solve Equation [\[eq-for-X\]](#eq-for-X){reference-type="eqref" reference="eq-for-X"} in $X$. 
From the line decomposition of $\mathfrak{sl}_n(\mathbb{C})$ (see Figure [2](#Fig:g-decompo){reference-type="ref" reference="Fig:g-decompo"}), we see that we should look for a solution of the form $X=\sum_{\ell=2}^n \alpha_\ell \partial\mu_\ell\,[F^{\ell-1},E]$. Then we get $$\begin{aligned} \sum_{\ell=2}^n \partial\mu_\ell\, F^{\ell-1} = \partial Q(F) &= [X,F]=\sum_{\ell=2}^n \alpha_\ell\partial\mu_\ell[[F^{\ell-1},E],F]\\ &= \sum_{\ell=2}^n \alpha_\ell\partial\mu_\ell (2\ell-2)F^{\ell-1}\end{aligned}$$ using the Jacobi identity and $F^{\ell-1}\in \mathfrak{g}_{-2\ell+2}$. Therefore we find $\alpha_\ell=\frac{1}{2\ell-2}$, so $$\label{sol-X} X=\sum_{\ell=2}^n \frac{\partial\mu_\ell}{2\ell-2}\,[F^{\ell-1},E].$$ We see that we found $X$ up to an element of $\ker\,\mathop{\mathrm{ad}}_F$. This choice does not matter since we compute the symplectic pairing with $Z(\Phi)$. [Step 2:]{.ul} Now we come back to Equation [\[Eq-proof-mu-holo-1\]](#Eq-proof-mu-holo-1){reference-type="eqref" reference="Eq-proof-mu-holo-1"} using the explicit expression [\[sol-X\]](#sol-X){reference-type="eqref" reference="sol-X"} for $X$: $$\label{Eq-proof-mu-holo-2} \mathop{\mathrm{tr}}[F^k,B]X = \sum_{\ell=2}^n \frac{\partial\mu_\ell}{2\ell-2} \mathop{\mathrm{tr}}[F^k,B][F^{\ell-1},E].$$ Using cyclicity of the trace and Jacobi identity, we get $$\begin{aligned} \label{Eq:proof-mu-holo-16} \mathop{\mathrm{tr}}[F^k,B][F^{\ell-1},E] &= \mathop{\mathrm{tr}}[F,\sum_{j=0}^{k-1}F^jBF^{k-1-j}][F^{\ell-1},E] \nonumber\\ &= \sum_{j=0}^{k-1}\mathop{\mathrm{tr}}F^j BF^{k-1-j}[[F^{\ell-1},E],F] \nonumber\\ &= \sum_{j=0}^{k-1}\mathop{\mathrm{tr}}F^j BF^{k-1-j}[-H,F^{\ell-1}] \nonumber\\ &= (2\ell-2)\sum_{j=0}^{k-1}\mathop{\mathrm{tr}}F^j BF^{k-1-j}F^{\ell-1} \nonumber\\ &= (2\ell-2)k\mathop{\mathrm{tr}}BF^{k+\ell-2} \nonumber\\ &= (2\ell-2)kt_{k+\ell-1}\end{aligned}$$ where we used $F^{\ell-1}\in \mathfrak{g}_{2-2\ell}$ and the definition of $t_{k+\ell-1}$.
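Both the explicit solution [\[sol-X\]](#sol-X){reference-type="eqref" reference="sol-X"} and the trace identity just derived are pure Lie-algebra statements, so they can be tested numerically, replacing the derivatives $\partial\mu_\ell$ by scalars. A sketch for $\mathfrak{sl}_4$ (the normalization of the principal triple is an assumption of the illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
# principal triple: F subdiagonal ones, E chosen so that [E, F] = H
F = np.diag(np.ones(n-1), -1)
E = np.diag([k*(n-k) for k in range(1, n)], 1).astype(float)
br = lambda X, Y: X @ Y - Y @ X
mp, tr = np.linalg.matrix_power, np.trace

# X = sum_l c_l/(2l-2) [F^{l-1}, E] solves [X, F] = sum_l c_l F^{l-1},
# where the random scalars c_l stand in for the derivatives of mu_l
c = rng.standard_normal(n + 1)
X = sum(c[l]/(2*l - 2) * br(mp(F, l-1), E) for l in range(2, n + 1))
dQ = sum(c[l] * mp(F, l-1) for l in range(2, n + 1))
assert np.allclose(br(X, F), dQ)

# trace identity tr([F^k, B][F^{l-1}, E]) = (2l-2) k tr(B F^{k+l-2}),
# which holds for an arbitrary (here random) matrix B
B = rng.standard_normal((n, n))
for k in range(1, n):
    for l in range(2, n + 1):
        lhs = tr(br(mp(F, k), B) @ br(mp(F, l-1), E))
        rhs = (2*l - 2) * k * tr(B @ mp(F, k + l - 2))
        assert np.isclose(lhs, rhs)
```

Only the Jacobi identity, the weight of $F^{\ell-1}$ and cyclicity of the trace enter, which is why the checks are independent of the choice of $B$.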
Therefore Equation [\[Eq-proof-mu-holo-2\]](#Eq-proof-mu-holo-2){reference-type="eqref" reference="Eq-proof-mu-holo-2"}, using Equation [\[Eq-proof-mu-holo-1\]](#Eq-proof-mu-holo-1){reference-type="eqref" reference="Eq-proof-mu-holo-1"}, becomes $$\label{Eq-proof-mu-holo-3} \mathop{\mathrm{tr}}M_1[C,F^k]+\mathop{\mathrm{tr}}M_2[F^k,B] = \mathop{\mathrm{tr}}[F^k,B]X = \sum_{\ell=2}^n k (\partial\mu_\ell) t_{k+\ell-1}.$$ [Step 3:]{.ul} The remaining term in Equation [\[Eq-proof-mu-holo-0\]](#Eq-proof-mu-holo-0){reference-type="eqref" reference="Eq-proof-mu-holo-0"} can now be computed: $$\begin{aligned} \mathop{\mathrm{tr}}F^k\partial C = \partial \mathop{\mathrm{tr}}(F^kC) &= \frac{1}{2k}\partial \mathop{\mathrm{tr}}([F^k,H]C)\\ &= \frac{1}{2k}\partial \mathop{\mathrm{tr}}(F^k[[E,F],C])\\ &= \frac{1}{2k}\partial \mathop{\mathrm{tr}}(F^k([[C,F],E]+[[E,C],F]))\\ &= \frac{1}{2k}\partial \mathop{\mathrm{tr}}(F^k[[B,Q(F)],E])\\ &= \frac{1}{2k}\partial\left(\sum_{\ell=2}^n \mathop{\mathrm{tr}}(\mu_\ell F^k[[B,F^{\ell-1}],E])\right)\\ &= \frac{1}{2k}\partial\left(\sum_{\ell=2}^n \mu_\ell\mathop{\mathrm{tr}}([F^k,B][F^{\ell-1},E])\right).\end{aligned}$$ Using $\mathop{\mathrm{tr}}[F^k,B][F^{\ell-1},E] =(2\ell-2)kt_{k+\ell-1}$ (Equation [\[Eq:proof-mu-holo-16\]](#Eq:proof-mu-holo-16){reference-type="eqref" reference="Eq:proof-mu-holo-16"}), we conclude $$\label{Eq-proof-mu-holo-4} \mathop{\mathrm{tr}}F^k\partial C = \sum_{\ell=2}^n(\ell-1)\partial(\mu_\ell t_{k+\ell-1}).$$ Putting Equations [\[Eq-proof-mu-holo-0\]](#Eq-proof-mu-holo-0){reference-type="eqref" reference="Eq-proof-mu-holo-0"}, [\[Eq-proof-mu-holo-3\]](#Eq-proof-mu-holo-3){reference-type="eqref" reference="Eq-proof-mu-holo-3"} and [\[Eq-proof-mu-holo-4\]](#Eq-proof-mu-holo-4){reference-type="eqref" reference="Eq-proof-mu-holo-4"} together, we get that the condition $\mathop{\mathrm{tr}}(F^kd_{A^\sigma}A^{-\sigma})=0$ is equivalent to 
$$-\bar{\partial}t_{k+1}+\sum_{\ell=2}^n\left((k+\ell-1)\partial\mu_\ell+(\ell-1)\mu_\ell\partial\right)t_{k+\ell-1}=0$$ which is exactly the $\mu$-holomorphicity condition. **Remark 7**. *It is surprising that in the proof above, we never used integration by parts. The statement is in some sense pointwise true (without being a pointwise statement).* ## Higher diffeomorphism action on covectors {#Sec:higher-diffeos-on-covectors} We extend the gauge-theoretic implementation of the action of higher diffeomorphisms on flat Fock bundles in Section [6.2](#Sec:higher-diff-action-via-gauge){reference-type="ref" reference="Sec:higher-diff-action-via-gauge"} to the setting with covectors. In the case of $\mathrm{SL}_n(\mathbb{C})$, we recover the variation formula of the covector data $t_k$ from the theory of higher complex structures. Consider a $G$-Fock bundle $(P,\Phi,\sigma)$ with a hermitian structure $\rho$ (not necessarily commuting with $\sigma$) such that $\Phi$ is positive. Use $\sigma$ to decompose a unitary $\Phi$-compatible connection $d_A=d_{A^{\sigma}}+A^{-\sigma}$, where $A^{-\sigma}$ is the covector. Recall from Section [6](#Sec:higher-diffeos){reference-type="ref" reference="Sec:higher-diffeos"} that a higher diffeomorphism is a special $\lambda$-dependent gauge transformation $\lambda^{-1}\xi+\lambda\xi^{*}$ where $\xi\in Z(\Phi)$. The variations of $\Phi$ and $d_A$ are given by $\delta\Phi=d_A\xi$ and $\delta A=[\xi^{*},\Phi]+[\xi,\Phi^{*}]$. In the framework with covectors, we define the action of higher diffeomorphisms the same way. The only problem is that the variation $\delta \Phi=d_A\xi$ does not preserve $\sigma(\Phi)=-\Phi$ since $\xi\in Z(\Phi)$ is $\sigma$-anti-invariant but $d_A$ is not $\sigma$-invariant. Thus, higher diffeomorphisms also change $\sigma$. There is a way out that keeps the same $\sigma$: we can use an ordinary gauge transformation $\eta$ to make $\delta\Phi$ anti-invariant under $\sigma$.
The infinitesimal gauge transformation $\lambda^{-1}\xi+\eta+\lambda\xi^{*}$ induces the variation $$\delta\Phi=d_{A^\sigma}(\xi)+[A^{-\sigma},\xi]+[\eta,\Phi].$$ The first term is $\sigma$-anti-invariant; we wish to choose $\eta$ so that the remaining terms vanish. Then the action on $\Phi$ is simply given by $\delta\Phi=d_{A^\sigma}(\xi)$. **Proposition 84**. *There exists $\eta\in\Omega^0(S,\mathfrak{g}_P)$ such that $[A^{-\sigma},\xi]+[\eta,\Phi]=0$.* *Proof.* This is an application of Corollary [Corollary 42](#coboundaries){reference-type="ref" reference="coboundaries"} stating that the bracket between cohomology classes is a coboundary. We have $\xi\in Z(\Phi)=\mathrm{H}^0(\Phi)$ and $A^{-\sigma}\in\text{ker}\,\text{ad}_{\Phi}$. Since $\xi\in Z(\Phi)$ the bracket $[A^{-\sigma},\xi]$ only depends on the class $[A^{-\sigma}]\in \mathrm{H}^1(\Phi)$. In addition, the bracket is a coboundary, which gives the existence of $\eta$. ◻ Now we restrict attention to the case of an $\mathrm{SL}_n(\mathbb{C})$-Fock bundle $(E,\Phi,g)$ and compute explicitly the action on the covector data. Put $\xi=\Phi(v_1)\cdots \Phi(v_k)$ which is associated to the Hamiltonian $H=v_1\cdots v_k$. Consider the infinitesimal gauge transformation $\lambda^{-1}\xi+\eta+\lambda\xi^{*}$, where $\eta$ is explicitly given by $$\label{def-eta-0} \eta = A^{-\sigma}(v_1)\Phi(v_2)\cdots \Phi(v_k)+...+\Phi(v_1)\cdots \Phi(v_{k-1})A^{-\sigma}(v_k).$$ Note that $\sigma(\eta)=-\eta$ and that $\eta=0$ if $t_k=0$ for all $k$ (recall that $t_k=\mathop{\mathrm{tr}}\Phi_1^{k-1}A_1^{-\sigma}$). **Proposition 85**. *The element $\eta$ is well-defined, i.e. it does not depend on the order of the $v_i$.* *Proof.* This will follow from $\Phi\wedge\Phi=0$ and $[A^{-\sigma}\wedge \Phi]=0$. Let us compute what happens if we exchange $v_1$ and $v_2$.
From $[A^{-\sigma}\wedge \Phi](v_1,v_2)=0$ we get $$[A^{-\sigma}(v_1),\Phi(v_2)]=[A^{-\sigma}(v_2),\Phi(v_1)].$$ Hence $$A^{-\sigma}(v_1)\Phi(v_2)+\Phi(v_1)A^{-\sigma}(v_2) = A^{-\sigma}(v_2)\Phi(v_1)+\Phi(v_2)A^{-\sigma}(v_1).$$ Since the $\Phi(v_i)$ commute among themselves, $\eta$ will not change under the exchange between $v_1$ and $v_2$. In a similar way, one proves this for any exchange between $v_i$ and $v_{i+1}$. Hence $\eta$ is invariant under any permutation. ◻ **Proposition 86**. *We have $[A^{-\sigma},\xi]+[\Phi,\eta]=0$.* *Proof.* Since the equation to prove is additive, we can assume the Hamiltonian to be $H=v_1\cdots v_k$. Then, the first term is $$[A^{-\sigma},\Phi(v_1)\cdots \Phi(v_k)] = \sum_{i=1}^{k}\Phi(v_1)\cdots \Phi(v_{i-1})[A^{-\sigma},\Phi(v_i)]\Phi(v_{i+1})\cdots \Phi(v_k).$$ Using $\Phi\wedge\Phi=0$ the second term is $$\sum_{i=1}^{k}\Phi(v_1)\cdots \Phi(v_{i-1})[\Phi,A^{-\sigma}(v_i)]\Phi(v_{i+1})\cdots \Phi(v_k).$$ The flatness identity $[A^{-\sigma},\Phi]=0$ implies $[A^{-\sigma},\Phi(v_i)]+[\Phi,A^{-\sigma}(v_i)]=0 \,\forall\, i$, which concludes the proof. ◻ As a consequence, we get $$\label{higher-diff-action-extended-2} \delta \Phi = d_{A^\sigma}(\xi).$$ In particular, the property $\sigma(\Phi)=-\Phi$ is preserved. The variation of the middle term $d_A$ is given by (see also Equation [\[Eq:3-term-gauge-action\]](#Eq:3-term-gauge-action){reference-type="eqref" reference="Eq:3-term-gauge-action"}): $$\delta A=d_A\eta+[\xi,\Phi^{*}]+[\xi^{*},\Phi].$$ Thus, the variation of $A^{-\sigma}$, which is important for the covector data, is given by $$\label{higher-diff-action-extended-1} \delta A^{-\sigma} = d_{A^\sigma}(\eta)+[\xi,(\Phi^{*})^\sigma]+[(\xi^{*})^{\sigma},\Phi].$$ We can now state the variation formula of the differentials $t_k$. **Proposition 87**.
*The variation of $t_k$ under a Hamiltonian $H=w_\ell p^{\ell-1}$ is given by $$\delta t_k = (k+\ell-2)t_{k+\ell-2}\partial w_\ell+(\ell-1)w_\ell\partial t_{k+\ell-2}.$$* This is in accordance with the computations from the classical perspective on higher complex structures. Before giving the proof, which is a direct computation, let us see an example: **Example 88**. *For $n=2$, the $\mu$-holomorphicity condition reads $(-\bar{\partial}+\mu_2\partial+2\partial\mu_2)t_2=0$. A Hamiltonian $H=w_2p$ induces $$\begin{aligned} \delta \mu_2 &= (\bar{\partial}-\mu_2\partial+\partial\mu_2)w_2,\\ \delta t_2 &= w_2\partial t_2+2t_2 \partial w_2.\end{aligned}$$ One can check that the $\mu$-holomorphicity is preserved.* Note that the $\mu$-holomorphicity condition is preserved under the higher diffeomorphism action in general: it is a gauge action and hence preserves the flatness of the 3-term connection. *Proof of Proposition [Proposition 87](#Prop:var-covectors){reference-type="ref" reference="Prop:var-covectors"}.* We start in a gauge in which $\Phi_1=Fdz$ is a fixed principal nilpotent element ($F$ will only change under the higher diffeomorphism action). Recall the definition $t_k=\mathop{\mathrm{tr}}(F^{k-1}A^{-\sigma}_1)$ and the notation $B=A^{-\sigma}_1$, from which we get $$\delta t_k=\mathop{\mathrm{tr}}\left(\delta\left(F^{k-1}\right)B + F^{k-1}\delta B\right).$$ From Equations [\[higher-diff-action-extended-1\]](#higher-diff-action-extended-1){reference-type="eqref" reference="higher-diff-action-extended-1"} and [\[higher-diff-action-extended-2\]](#higher-diff-action-extended-2){reference-type="eqref" reference="higher-diff-action-extended-2"}, we get $$\delta\left(F^{k-1}\right) = \sum_{i=0}^{k-2}F^i (\delta F)F^{k-i-2} = \sum_{i=0}^{k-2}F^i (\partial\xi+[A^\sigma_1,\xi])F^{k-2-i}$$ and $$\delta B = \partial\eta+[A^\sigma_1,\eta]+[\xi,(\Phi_2)^{*,\sigma}]+[\xi^{*,\sigma},F].$$ The last two terms of $\delta B$ do not contribute to the variation $\delta t_k$.
Indeed $\mathop{\mathrm{tr}}F^{k-1}[\xi^{*,\sigma},F]=0$ by cyclicity of the trace and $\mathop{\mathrm{tr}}F^{k-1}[\xi,(\Phi_2)^{*,\sigma}]=0$ since $[\xi,F^{k-1}]=0$. Using $\xi=w_\ell F^{\ell-1}$ and $\eta=w_\ell\sum_{j=0}^{\ell-2}F^j B F^{\ell-2-j}$, we get $$\delta\left(F^{k-1}\right) = (k-1)\partial w_\ell \,F^{k+\ell-3} + w_\ell\sum_{i=0}^{k-2}F^i[A^\sigma_1,F^{\ell-1}]F^{k-2-i}$$ and $$\partial\eta+[A^\sigma_1,\eta] = \partial w_\ell \sum_{j=0}^{\ell-2}F^jBF^{\ell-2-j}+w_\ell\sum_{j=0}^{\ell-2}F^j (\partial B) F^{\ell-2-j}+w_\ell[A^\sigma_1,\sum_{j=0}^{\ell-2}F^jBF^{\ell-2-j}].$$ Therefore using $t_{k+\ell-2}=\mathop{\mathrm{tr}}F^{k+\ell-3}B$ we get $$\begin{aligned} \delta t_k &= \mathop{\mathrm{tr}}\delta(F^{k-1})B+\mathop{\mathrm{tr}}F^{k-1}\delta B\\ &= (k-1)\partial w_\ell t_{k+\ell-2}+\partial w_\ell \sum_{j=0}^{\ell-2}t_{k+\ell-2}+w_\ell\sum_{j=0}^{\ell-2}\partial t_{k+\ell-2} \\ & \; +w_\ell \sum_{i=0}^{k-2}\mathop{\mathrm{tr}}\left(F^i[A^\sigma_1,F^{\ell-1}]F^{k-2-i}B\right)+w_\ell\sum_{j=0}^{\ell-2}\mathop{\mathrm{tr}}\left(F^{k-1}[A^\sigma_1,F^jBF^{\ell-2-j}]\right)\\ &= \left((k+\ell-2)\partial w_\ell+(\ell-1)w_\ell\partial\right)t_{k+\ell-2}.\end{aligned}$$ The last line comes from the cancellation of two terms. Using cyclicity we get: $$\begin{aligned} \mathop{\mathrm{tr}}\sum_{i=0}^{k-2}F^i[A^\sigma_1,F^{\ell-1}]F^{k-2-i}B &= \mathop{\mathrm{tr}}A^\sigma_1 \sum_{i=0}^{k-2}[F^{\ell-1},F^{k-2-i}BF^i] \\ &= \mathop{\mathrm{tr}}A^\sigma_1 \sum_{i=0}^{k-2}\sum_{j=0}^{\ell-2}F^{k-2-i}F^j[F,B]F^{\ell-2-j}F^i.\end{aligned}$$ Similarly $$\mathop{\mathrm{tr}}\sum_{j=0}^{\ell-2}F^{k-1}[A^\sigma_1,F^jBF^{\ell-2-j}] = \mathop{\mathrm{tr}}A^\sigma_1 \sum_{j=0}^{\ell-2}\sum_{i=0}^{k-2}F^jF^{k-2-i}[B,F]F^iF^{\ell-2-j}.$$ Hence the two terms cancel out. ◻

[Department of Mathematics, University of Patras]{.smallcaps}\ Panepistimioupolis Patron, Patras 26504, Greece\ *E-mail address*: `gkydonakis@math.upatras.gr`

[Department of Mathematics, University of Texas at Austin]{.smallcaps}\ 2515 Speedway, Austin, TX 78712, USA\ *E-mail address*: `charliereid@utexas.edu`

[Institut für Mathematik, Universität Heidelberg]{.smallcaps}\ Berliner Str. 41-49, 69120 Heidelberg, Germany\ *E-mail address*: `athomas@mathi.uni-heidelberg.de`
--- abstract: | Hurwitz numbers enumerate branched morphisms between Riemann surfaces with fixed numerical data. They represent important objects in enumerative geometry that are accessible by combinatorial techniques. In the past decade, many variants of Hurwitz numbers have appeared in the literature. In this paper, we focus on one such exciting variant that arises naturally from the theory of topological recursion: Pruned Hurwitz numbers. These are defined as an enumeration of a relevant subset of branched morphisms between Riemann surfaces that yields smaller numbers than their classical counterparts while retaining maximal information. Thus, pruned Hurwitz numbers may be viewed as the *core* of the Hurwitz problem. In this paper, we develop the combinatorial theory of pruned Hurwitz numbers. In particular, motivated by the successful application of combinatorial techniques to classical Hurwitz numbers, we derive two new combinatorial expressions of pruned Hurwitz numbers. Firstly, we show that they may be expressed in terms of Hurwitz mobiles, which are tree-like structures arising from the theory of random planar maps. Secondly, we prove a tropical correspondence theorem which allows the enumeration of pruned Hurwitz numbers in terms of tropical covers. address: - "S. G. Fitzgerald: School of Mathematics 17, Westland Row, Trinity College Dublin, Dublin 2, Ireland" - "M. A. Hahn: School of Mathematics 17, Westland Row, Trinity College Dublin, Dublin 2, Ireland" - "S. Kelly: School of Mathematics 17, Westland Row, Trinity College Dublin, Dublin 2, Ireland" author: - Sean Gearoid Fitzgerald - Marvin Anas Hahn - Síofra Kelly bibliography: - Hurwitz.bib title: Combinatorics of pruned Hurwitz numbers --- # Introduction ## Single and double Hurwitz numbers Hurwitz numbers count branched coverings of Riemann surfaces with fixed ramification data and genera.
In this work, we are particularly interested in two important families that arise from specific kinds of ramification data: *single Hurwitz numbers* and *double Hurwitz numbers*. Single Hurwitz numbers enumerate branched coverings of the projective line with arbitrary ramification over $0$ and simple ramification elsewhere. **Definition 1**. Let $d>0$, $g\ge0$, $\mu$ a partition of $d$ and let $b=2g-2+\ell(\mu)+d$. We fix $p_1,\dots,p_b\in\mathbb{P}^1$. We call a holomorphic map $f\colon S\to\mathbb{P}^1$ a Hurwitz cover of type $(g,\mu)$ if - $S$ is a compact, connected Riemann surface of genus $g$, - the ramification profile of $0$ is $\mu$, - the ramification profile of $p_i$ is $(2,1,\dots,1)$, - the pre-images of $0$ are labelled by $1,\dots,\ell(\mu)$, such that the pre-image labelled $i$ has ramification index $\mu_i$. We call two Hurwitz covers $f\colon S\to\mathbb{P}^1$ and $f'\colon S'\to\mathbb{P}^1$ equivalent if there exists an isomorphism $g\colon S\to S'$, such that $f=f'\circ g$. We denote by $\mathfrak{H}_g(\mu)$ the set of all equivalence classes of Hurwitz covers of type $(g,\mu)$.\ Finally, we define the associated single Hurwitz number as $$H_g(\mu)=\sum_{[f]\in\mathfrak{H}_g(\mu)}\frac{1}{|\mathrm{Aut}(f)|}.$$ Dating back to Hurwitz' original work [@hurwitz1891riemann], single Hurwitz numbers are well--studied objects with various remarkable properties. A striking example is the celebrated ELSV formula [@ekedahl1999hurwitz; @ekedahl2000hurwitz], which relates single Hurwitz numbers to the intersection theory of the moduli space $\overline{\mathcal{M}}_{g,n}$ of genus $g$ curves with $n$ marked points.
As a direct corollary one obtains that single Hurwitz numbers are polynomials up to a combinatorial factor.\ More precisely, for fixed $n$, there exists a polynomial $P_g$ in $n$ variables, such that for all partitions $\mu=(\mu_1,\dots,\mu_n)$, we have $$\frac{1}{b!}H_g(\mu)=\prod_{i=1}^n\frac{\mu_i^{\mu_i}}{\mu_i!}P_g(\mu_1,\dots,\mu_n).$$ Hurwitz already made the observation that Hurwitz numbers may be expressed as a combinatorial factorisation problem in the symmetric group. For single Hurwitz numbers this manifests as follows. **Theorem 2**. *Let $g$ be a non-negative integer, $d$ a positive integer and $\mu$ a partition of $d$. Furthermore, let $b=2g-2+\ell(\mu)+d$. Then, we have $$H_g(\mu)= \frac{1}{d!} \left| \left\{ \begin{array}{l} (\sigma, \tau_1, \ldots, \tau_b) \textrm{ such that:} \\ \bullet \ \sigma,\tau_i \in S_d, \ 1\leq i \leq b, \\ \bullet \ \mathcal{C} (\sigma) = \mu, \\ \bullet \ \text{the }\tau_i \text{ are transpositions}, \\ \bullet \ \sigma \tau_1 \cdots \tau_b =\mathrm{id}, \\ \bullet \ \textrm{the cycles of }\sigma\textrm{ are labelled by }1,\dots,\ell(\mu),\\\textrm{such that the cycle labelled }i\textrm{ has length }\mu_i,\\ \bullet \ \text{the subgroup generated by } (\sigma, \tau_1, \ldots, \tau_b)\\ \text{ acts transitively on } [d]. \end{array}\right\} \right|,$$ where $\mathcal{C}(\sigma)$ denotes the cycle type of $\sigma$, i.e. the partition of $d$ corresponding to its conjugacy class.* In [@zbMATH01539310], Okounkov introduced a natural generalisation of single Hurwitz numbers, namely *double Hurwitz numbers*, and studied them in the context of integrable systems. **Definition 3**. Let $d>0$, $g\ge0$, $\mu,\nu$ partitions of $d$ and let $b=2g-2+\ell(\mu)+\ell(\nu)$. We fix $p_1,\dots,p_b\in\mathbb{P}^1$.
We call a holomorphic map $f\colon S\to\mathbb{P}^1$ a Hurwitz cover of type $(g,\mu,\nu)$ if - $S$ is a compact, connected Riemann surface of genus $g$, - the ramification profile of $0$ is $\mu$, the ramification profile of $\infty$ is $\nu$, - the ramification profile of $p_i$ is $(2,1,\dots,1)$, - the pre-images of $0$ (resp. $\infty$) are labelled by $1,\dots,\ell(\mu)$ (resp. $1,\dots,\ell(\nu)$), such that the pre-image labelled $i$ has ramification index $\mu_i$ (resp. $\nu_i$). We call two Hurwitz covers $f\colon S\to\mathbb{P}^1$ and $f'\colon S'\to\mathbb{P}^1$ equivalent if there exists an isomorphism $g\colon S\to S'$, such that $f=f'\circ g$. We denote by $\mathfrak{H}_g(\mu,\nu)$ the set of all equivalence classes of Hurwitz covers of type $(g,\mu,\nu)$.\ Finally, we define the associated double Hurwitz number as $$H_g(\mu,\nu)=\sum_{[f]\in\mathfrak{H}_g(\mu,\nu)}\frac{1}{|\mathrm{Aut}(f)|}.$$ We see immediately that $H_g(\mu)=H_g(\mu,(1^d))$. Analogously to single Hurwitz numbers, double Hurwitz numbers also admit an expression in terms of factorisations in the symmetric group. **Theorem 4**. *Let $g$ be a non-negative integer, $d$ a positive integer and $\mu,\nu$ partitions of $d$. Furthermore, let $b=2g-2+\ell(\mu)+\ell(\nu)$.
Then, we have $$H_g(\mu,\nu)= \frac{1}{d!} \left| \left\{ \begin{array}{l} (\sigma_1, \tau_1, \ldots, \tau_b,\sigma_2) \text{ such that:} \\ \bullet \ \sigma_1, \sigma_2, \tau_i \in S_d, \ 1\leq i \leq b, \\ \bullet \ \mathcal{C} (\sigma_1) = \mu, \mathcal{C}(\sigma_2) = \nu, \\ \bullet \ \text{the }\tau_i \text{ are transpositions}, \\ \bullet \ \textrm{the cycles of }\sigma_1\textrm{ are labelled by }1,\dots,\ell(\mu),\\\textrm{such that the cycle labelled }i\textrm{ has length }\mu_i,\\ \bullet \ \textrm{the cycles of }\sigma_2\textrm{ are labelled by }1,\dots,\ell(\nu),\\\textrm{such that the cycle labelled }i\textrm{ has length }\nu_i,\\ \bullet \ \sigma_1 \tau_1 \cdots \tau_b\sigma_2 =\mathrm{id}, \\ \bullet \ \text{the subgroup generated by } (\sigma_1, \tau_1, \ldots, \tau_b, \sigma_2)\\ \ \ \text{ acts transitively on } [d]. \end{array}\right\} \right|.$$* Quite remarkably, double Hurwitz numbers share many features with single Hurwitz numbers. In [@goulden2005towards], the geometry of double Hurwitz numbers was explored with a view towards a connection to the intersection theory of moduli spaces resembling the ELSV formula. As a key feature towards such a connection, a polynomial behaviour of double Hurwitz numbers was identified. More precisely, the authors considered the following set-up.\ Let $m,n>0$ be integers and define the hyperplane $$\mathcal{H}_{m,n}=\left\{(\mu,\nu)\in\mathbb{N}^m\times\mathbb{N}^n\mid\sum\mu_i=\sum\nu_j\right\}.$$ Further, define a hyperplane arrangement, the so-called *resonance arrangement*, in $\mathcal{H}_{m,n}$ $$\mathcal{R}_{m,n}=\left\{\sum_{i\in I}\mu_i-\sum_{j\in J}\nu_j=0\mid I\subset[m],J\subset[n]\right\}.$$ We call the hyperplanes in $\mathcal{R}_{m,n}$ *walls* and the connected components of $\mathcal{H}_{m,n}\backslash|\mathcal{R}_{m,n}|$ *chambers* -- where $|\mathcal{R}_{m,n}|$ is the support of $\,\mathcal{R}_{m,n}$, i.e. the union of all hyperplanes.
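To make the chamber decomposition concrete, the position of a point $(\mu,\nu)\in\mathcal{H}_{m,n}$ relative to the walls can be computed directly from the definition. The following Python sketch is our own illustration (the function names are not standard): it records the sign of $\delta=\sum_{i\in I}\mu_i-\sum_{j\in J}\nu_j$ for every non-trivial pair $(I,J)$; two points lie in a common open chamber precisely when neither lies on a wall and their sign vectors agree.

```python
from itertools import chain, combinations

def subsets(s):
    """All subsets of the index set s, as tuples."""
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

def wall_signs(mu, nu):
    """Sign of delta = sum_{i in I} mu_i - sum_{j in J} nu_j for every
    non-trivial pair (I, J).  The pairs (emptyset, emptyset) and
    ([m], [n]) are excluded, since delta vanishes there identically
    on all of H_{m,n}.  Indices are 0-based."""
    m, n = len(mu), len(nu)
    signs = {}
    for I in subsets(range(m)):
        for J in subsets(range(n)):
            if (len(I), len(J)) in ((0, 0), (m, n)):
                continue
            delta = sum(mu[i] for i in I) - sum(nu[j] for j in J)
            signs[(I, J)] = (delta > 0) - (delta < 0)
    return signs

def on_a_wall(mu, nu):
    """True iff (mu, nu) lies on the support of the resonance arrangement."""
    return 0 in wall_signs(mu, nu).values()

def same_chamber(p1, p2):
    """True iff two points of H_{m,n} lie in the same open chamber."""
    return (not on_a_wall(*p1) and not on_a_wall(*p2)
            and wall_signs(*p1) == wall_signs(*p2))
```

For instance, $((3,1),(2,2))$ and $((5,1),(3,3))$ share a chamber, while $((2,2),(2,2))$ lies on the wall $\mu_1-\nu_1=0$.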
Now, one may consider the map $$\begin{aligned} H_g\colon\mathcal{H}_{m,n}&\to\mathbb{Q}\\ (\mu,\nu)&\mapsto H_g(\mu,\nu).\end{aligned}$$ **Theorem 5** ([@goulden2005towards Theorem 2.1]). *The map $H_g$ is piecewise polynomial. More precisely, for each chamber $C$ of $\mathcal{R}_{m,n}$, there exists a polynomial $P_g^C$ in $m+n$ variables of degree $4g-3+m+n$, such that for all $(\mu,\nu)\in C$ we have $H_g(\mu,\nu)=P_g^C(\mu,\nu)$.* The polynomial structure of double Hurwitz numbers was further studied in [@shadrin2008chamber] in genus $0$ employing intersection theoretic methods. The higher genus case was studied in [@cavalieri2011wall] via tropical geometry and in [@johnson2015double] using a representation theoretic approach.\ All these works explored the differences between the polynomials in different chambers. More precisely, they derived *wall-crossing formulae*. **Definition 6**. Let $C_1$ and $C_2$ be two adjacent chambers in $\mathcal{R}_{m,n}$ separated by a wall $\delta=\sum_{i\in I}\mu_i-\sum_{j\in J}\nu_j$. We assume that $\delta>0$ in $C_1$ and $\delta<0$ in $C_2$. Furthermore, let $g$ be a non-negative integer. Then, we define the associated wall-crossing as $$WC_\delta^g=P_g^{C_1}-P_g^{C_2}.$$ It turns out that these wall--crossings may again be expressed in terms of Hurwitz numbers with smaller input data. For the purpose of this manuscript, we only state the genus $0$ result. **Theorem 7** ([@shadrin2008chamber]). *Let $m,n$ be positive integers and $C$ a chamber of $\mathcal{R}_{m,n}$ adjacent to a fixed wall $\delta=\sum_{i\in I}\mu_i-\sum_{j\in J}\nu_j=0$ with $\delta>0$ in $C$. Let $\tilde{C}$ be another chamber neighbouring $C$ and sharing the wall $\delta$ in codimension $1$.
Then, we have $$WC_\delta^0(\mu,\nu)=\binom{m+n-2}{|I|+|J|-1}\cdot\delta\cdot H_0(\mu_I,(\nu_J,\delta))\cdot H_0((\mu_{I^c},\delta),(\nu_{J^c}))$$ for all $(\mu,\nu)\in C$.* We want to highlight the work in [@cavalieri2011wall] employing a tropical approach towards Hurwitz numbers. It is based on the derivation of a tropical interpretation of double Hurwitz numbers obtained in [@cavalieri2010tropical], i.e. an expression of double Hurwitz numbers in terms of maps between combinatorial graphs, so-called *tropical covers*. The wall--crossing formulae in arbitrary genus then follow from an intricate analysis of spaces of these graphs in different chambers of polynomiality. In particular, the Hurwitz numbers with smaller input data in the wall--crossing formulae arise from cutting the involved graphs into smaller graphs, each contributing to a smaller Hurwitz problem. A non-tropical, combinatorial study of double Hurwitz numbers in genus $0$ was undertaken in [@duchi2014bijections]. The basis of this work is a well--known graph-theoretic interpretation of double Hurwitz numbers. We note that, while also an expression in terms of graphs, this interpretation differs significantly from the previously mentioned tropical one. Namely, while the tropical correspondence involves a *weighted bijection*, the one employed in *loc.cit.* is actually a $1$-to-$1$ bijection. The involved graphs are called *Hurwitz galaxies*. Starting from Hurwitz galaxies, the authors of [@duchi2014bijections] proceed to employ an ingenious idea from graph theory, similar to Schaeffer's bijection, to derive a correspondence between Hurwitz galaxies and tree-like structures called *Hurwitz mobiles*. Due to their close proximity to trees, Hurwitz mobiles allow for simplified counting arguments, which in *loc.cit.* enabled a new proof of Hurwitz' original closed formula for genus $0$ single Hurwitz numbers and the resolution of a conjecture of Kazarian and Zvonkine.
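For small degree, the factorisation descriptions of Theorems 2 and 4 can be checked by brute force over the symmetric group. The following Python sketch is our own illustration, not part of the cited works; it omits the cycle labelling, which is harmless whenever all parts of $\mu$ are pairwise distinct (the labelling is then unique).

```python
from fractions import Fraction
from itertools import permutations, product
from math import factorial

def compose(p, q):
    # permutations of {0, ..., d-1} as tuples; (p * q)(x) = p(q(x))
    return tuple(p[q[x]] for x in range(len(p)))

def cycle_type(p):
    seen, ct = set(), []
    for x in range(len(p)):
        if x not in seen:
            y, length = x, 0
            while y not in seen:
                seen.add(y)
                y, length = p[y], length + 1
            ct.append(length)
    return tuple(sorted(ct, reverse=True))

def transitive(gens, d):
    # orbit of 0 under the generated subgroup must be all of {0, ..., d-1}
    orbit, stack = {0}, [0]
    while stack:
        x = stack.pop()
        for g in gens:
            if g[x] not in orbit:
                orbit.add(g[x])
                stack.append(g[x])
    return len(orbit) == d

def single_hurwitz(g, mu):
    # brute-force count of the factorisations in Theorem 2, divided by d!
    # (cycle labels omitted: unique when the parts of mu are pairwise distinct)
    d = sum(mu)
    b = 2 * g - 2 + len(mu) + d
    perms = list(permutations(range(d)))
    transpositions = [p for p in perms if cycle_type(p) == (2,) + (1,) * (d - 2)]
    sigmas = [p for p in perms if cycle_type(p) == tuple(sorted(mu, reverse=True))]
    identity = tuple(range(d))
    count = 0
    for sigma in sigmas:
        for taus in product(transpositions, repeat=b):
            w = sigma
            for t in taus:
                w = compose(w, t)  # builds sigma * tau_1 * ... * tau_b
            if w == identity and transitive((sigma,) + taus, d):
                count += 1
    return Fraction(count, factorial(d))
```

For instance, `single_hurwitz(0, (2,))` returns $1/2$ and `single_hurwitz(0, (2, 1))` returns $4$, in agreement with Hurwitz's classical genus $0$ formula $H_0(\mu)=b!\,d^{n-3}\prod_{i=1}^n\mu_i^{\mu_i}/\mu_i!$.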
## Pruned Hurwitz numbers In the past decade, many new variants of Hurwitz numbers were introduced. In this work, we focus on so-called *pruned single Hurwitz numbers*, which we will denote $PH_g(\mu)$ and which were first studied in [@do2018pruned] in the context of topological recursion.[^1] Pruned Hurwitz numbers were originally defined as an enumeration of a subset of the branched covers enumerated by single Hurwitz numbers (see [Definition 41](#def:pruned){reference-type="ref" reference="def:pruned"} for a precise definition). This enumeration has remarkable combinatorial properties and pruned single Hurwitz numbers are in many ways better behaved than their classical counterparts. For example, for fixed $n$, there exists a polynomial $Q_g$ in $n$ variables, such that $$\frac{1}{b!}PH_g(\mu)=Q_g(\mu_1,\dots,\mu_n),$$ i.e. pruned single Hurwitz numbers are actually polynomial without an involved combinatorial pre-factor [@do2018pruned Theorem 1]. Moreover, it was proved in [@do2018pruned] that pruned single Hurwitz numbers determine their classical counterparts and vice versa. Therefore, they provide a smaller, in some sense better behaved, enumerative invariant that may be viewed as the *core* of the Hurwitz number problem.\ In [@zbMATH06791415], pruned double Hurwitz numbers $PH_g(\mu,\nu)$ were introduced. Analogously to the single case, pruned double Hurwitz numbers enumerate a subset of $\mathfrak{H}_g(\mu,\nu)$ and again this smaller enumerative invariant captures the key features of double Hurwitz numbers. In particular, pruned double Hurwitz numbers are piecewise polynomial with the same chamber structure as their classical counterparts. Wall--crossing formulae for pruned double Hurwitz numbers, however, remain an open problem.
Moreover, mirroring the idea for single pruned Hurwitz numbers, it was proved in [@zbMATH06791415] that pruned double Hurwitz numbers determine double Hurwitz numbers and vice versa. ## Main results The aim of this paper is to explore the combinatorics of pruned Hurwitz numbers and lay the foundations for further combinatorial analysis.\ The first part is dedicated to applying the techniques from [@duchi2014bijections] to this setting. In [@hahn2020bi], an interpretation of pruned single and double Hurwitz numbers in terms of Hurwitz galaxies was given. Due to their aptitude for enumerative applications, it is natural to ask for an expression of pruned Hurwitz numbers in terms of Hurwitz mobiles. In order to achieve this, we review the bijection between Hurwitz galaxies and Hurwitz mobiles derived in [@duchi2014bijections] in [4](#sec:bij){reference-type="ref" reference="sec:bij"}. This bijection proceeds through several steps, enhancing and altering Hurwitz galaxies. While the combinatorial data is preserved through this process, it becomes re-encoded. For our purposes, we rephrase the bijection in the rich language of *Dyck paths*. The properties and applications of Dyck paths and their various forms (also known as contour functions, Dyck codes, standard contour codes etc.) have been the subject of much combinatorial research [@dyckmodk; @dycknondecreasing; @dycknondecreasingenum; @dyckcoloured]. Our rephrasing of the bijection in [@duchi2014bijections] in this language allows us to perform an intricate combinatorial analysis to derive a new correspondence theorem expressing pruned single Hurwitz numbers in terms of what we call *pruned Hurwitz mobiles* in [4](#sec:bij){reference-type="ref" reference="sec:bij"}. This achieves an enumeration of genus $0$ pruned single Hurwitz numbers in terms of tree-like graphs.\ In the second part, we study the tropical combinatorics of pruned double Hurwitz numbers.
While in the past, tropical derivations of enumerative invariants were obtained either using representation theoretic methods [@cavalieri2018graphical; @cavalieri2010tropical] or via degeneration techniques applied to the underlying algebro-geometric objects [@cavalieri2016tropicalizing], we employ a purely combinatorial approach. The starting point is a cut--and--join recursion of pruned double Hurwitz numbers derived in [@zbMATH06791415]. By tracing this recursion back to its infinite set of base cases, we build tropical covers without the use of any underlying geometry or representation theory in [6](#sec:troppru){reference-type="ref" reference="sec:troppru"}. The correspondence theorem, expressing pruned double Hurwitz numbers in terms of tropical covers, is then obtained by analysing the multiplicity of each cover in the recursion. This allows us to study the polynomiality of pruned double Hurwitz numbers via tropical techniques in [6.3](#sec:polytrop){reference-type="ref" reference="sec:polytrop"}. ## Structure In [2](#sec:graphs){reference-type="ref" reference="sec:graphs"}, we review the basics on branching graphs, Hurwitz galaxies, Hurwitz mobiles and Dyck paths. These are the main objects in the bijective correspondences between $\mathfrak{H}_g(\mu,\nu)$ and graph theoretic structures. Based on this, we introduce pruned single and double Hurwitz numbers in [3](#sec:prunhur){reference-type="ref" reference="sec:prunhur"} and discuss some of their key properties in more detail. In [4](#sec:bij){reference-type="ref" reference="sec:bij"}, we sketch the bijection between covers contributing to genus zero double Hurwitz numbers and Hurwitz mobiles obtained in [@duchi2014bijections]. We rephrase this bijection in terms of Dyck paths to prove our correspondence theorem, which enumerates genus zero pruned single Hurwitz numbers in terms of pruned Hurwitz mobiles.
After that, we move on to the tropical combinatorics of pruned double Hurwitz numbers and first introduce some basics on tropical covers in [5](#sec:trop){reference-type="ref" reference="sec:trop"}. We prove the correspondence theorem, enumerating pruned double Hurwitz numbers via weighted tropical covers in [6](#sec:troppru){reference-type="ref" reference="sec:troppru"}. Finally, we explore the polynomiality of pruned double Hurwitz numbers via their new tropical interpretation in [6.3](#sec:polytrop){reference-type="ref" reference="sec:polytrop"}. ## Acknowledgements {#acknowledgements .unnumbered} The first and third author acknowledge partial support by the Hamilton Trust fund during the work on this paper. # Bijections for Hurwitz numbers: Branching graphs, Galaxies and mobiles {#sec:graphs} In this section, we review a classical construction expressing double Hurwitz numbers in terms of graphs on surfaces and explore some of their combinatorics. There are several equivalent ways to produce such a correspondence. ## Branching graphs and Galaxies First, we focus on two of them: *Branching graphs* and *Hurwitz galaxies*. These graphs are obtained by pulling back graphs on $\mathbb{P}^1$ along Hurwitz covers. We begin with the following definition. **Definition 8**. A good graph on a surface $S$ is a graph $\Gamma$ embedded on $S$ such that: 1. $S \backslash \Gamma$ is homeomorphic to a disjoint union of open disks, 2. Wherever two edges cross there is a vertex, 3. Edges that end without a vertex are called half-edges. **Definition 9**. Let $d$ be a positive integer, $g$ a non-negative integer and $\mu, \nu$ be ordered partitions of $d$. Again, let $b=2g-2+\ell(\mu)+\ell(\nu)$. We define a branching graph of type $(g, \mu, \nu)$ to be a good graph $\Gamma$ on a compact oriented surface $S$ of genus $g$ that satisfies the following: 1. There are $\ell(\nu)$ vertices, labelled $1, \dots, \ell(\nu)$. 
The vertex labelled $i$ is adjacent to $\nu_i \cdot b$ half-edges, labelled cyclically counter-clockwise by $1, \dots, b$. 2. There are exactly $b$ full edges labelled by $1, \dots, b$. 3. The $\ell(\mu)$ faces are labelled by $1, \dots, \ell(\mu)$ and the face labelled $i$ has perimeter $\mathrm{per}(i)=\mu_i$. We define an isomorphism of branching graphs as an orientation-preserving homeomorphism of surfaces which induces an isomorphism of graphs preserving vertex, edge and face labels. Further, we denote by $B_g(\mu,\nu)$ the set of all isomorphism classes of branching graphs of type $(g,\mu,\nu)$. We illustrate a branching graph in the following example. **Example 10**. We fix $g=0$, $\mu=(2,1)$ and $\nu=(1,1,1)$. A branching graph of type $(0,\mu,\nu)$ is depicted in [\[fig:brgraph\]](#fig:brgraph){reference-type="ref" reference="fig:brgraph"}. All three vertices have valency $3$. Since $b=3$, this reflects the partition $\nu$. The outer face has perimeter $2$, while the inner face has perimeter $1$ corresponding to $\mu$. The following folklore theorem connects branching graphs to double Hurwitz numbers. **Theorem 11**. *Let $g$ be a non-negative integer, $d$ a positive integer and $\mu,\nu$ partitions of $d$. Then, we have $$H_g(\mu,\nu)=\sum_{[\Gamma]\in B_g(\mu,\nu)}\frac{1}{|\mathrm{Aut}(\Gamma)|}.$$* *Sketch of proof.* Let $\zeta_1,\dots,\zeta_b$ be the $b$-th roots of unity. The idea behind the proof is to consider the *star graph* $G$ on $\mathbb{P}^1$ with a unique vertex $v$ at $\infty$ and non-intersecting half-edges connecting $v$ to each $\zeta_i$ (see [\[fig:stargraph\]](#fig:stargraph){reference-type="ref" reference="fig:stargraph"}).
Choosing $p_i=\zeta_i$ in [Definition 3](#def:hurwitzdouble){reference-type="ref" reference="def:hurwitzdouble"} and a Hurwitz cover $f\colon S\to\mathbb{P}^1$ of type $(g,\mu,\nu)$, we may consider the graph $\Gamma=f^{-1}(G)\subset S$, where vertices of $\Gamma$ are pre-images of the vertex on $\mathbb{P}^1$ and edges are obtained by two half-edges meeting at a pre-image of $\zeta_i$. It is easy to see that $f^{-1}(G)$ is indeed a branching graph of type $(g,\mu,\nu)$. For the other direction, we observe that a branching graph is a good graph with half-edges. Thus, the Riemann existence theorem together with the Galois correspondence for topological covers allows us to reconstruct the original branched covering of $\mathbb{P}^1$. ◻ Next, we define Hurwitz galaxies. **Definition 12**. Let $d$ be a positive integer, $g$ a non-negative integer, $\mu,\nu$ partitions of $d$, and $b=2g-2+\ell(\mu)+\ell(\nu)$. Then, we define a Hurwitz galaxy of type $(g,\mu,\nu)$ to be a graph $G$ on a compact oriented surface $S$ of genus $g$, such that 1. $G$ partitions $S$ into $\ell(\mu)+\ell(\nu)$ disjoint faces, each homeomorphic to an open disk. 2. The faces are bi-coloured black and white, such that each edge is adjacent to a white and a black face. 3. The white faces are labelled by $1, \dots, \ell(\mu)$, the black faces by $1, \dots, \ell(\nu)$, such that the boundary of each white face labelled $i$ contains $\mu_i \cdot (b+1)$ vertices and the boundary of each black face $i$ contains $\nu_i \cdot (b+1)$ vertices. 4. The vertices are coloured cyclically clockwise with respect to the adjacent white faces by $0,1, \dots, b$. 5. For each $i \in \{1, \dots, b\}$ there are $d-1$ vertices with colour $i$, where $d-2$ of these vertices are 2-valent, and one is 4-valent. There are $d$ vertices with colour $0$, each of which is 2-valent. An isomorphism between Hurwitz galaxies is an orientation-preserving homeomorphism of their respective surfaces which induces an isomorphism of graphs preserving vertex, edge and face labels.
We denote by $G_{g}(\mu,\nu)$ the set of isomorphism classes of Hurwitz galaxies of type $(g,\mu,\nu)$. **Example 13**. We fix $g,\mu,\nu$ as in [Example 10](#ex:brgraph){reference-type="ref" reference="ex:brgraph"}. An example of a Hurwitz galaxy of type $(0,(2,1),(1,1,1))$ is depicted in [1](#fig:hurgal){reference-type="ref" reference="fig:hurgal"}. The arrows indicate the orientation of the faces with respect to the ordering of the adjacent vertices. ![A Hurwitz galaxy of type $(0,(2,1),(1,1,1))$.](galaxyexample.pdf){#fig:hurgal} ![The circle graph on the Riemann sphere whose vertices are $\zeta_1,\dots,\zeta_b$, the $b$-th roots of unity for fixed $b$.](goodgraphexample1.pdf){#fig:circlegraph} The following theorem is proved similarly to [Theorem 11](#thm-brangr){reference-type="ref" reference="thm-brangr"} by pulling back the graph in [2](#fig:circlegraph){reference-type="ref" reference="fig:circlegraph"} whose vertices are the $b$-th roots of unity. **Theorem 14**. *Let $g$ be a non-negative integer, $d$ a positive integer and $\mu,\nu$ partitions of $d$. Then, we have $$H_g(\mu,\nu)=\sum_{[G]\in G_g(\mu,\nu)}\frac{1}{|\mathrm{Aut}(G)|}.$$* **Remark 15**. As a consequence of the above discussion, there is a bijection between branching graphs and Hurwitz galaxies. The combinatorial correspondence is described in [@zbMATH06791415 Figure 3, Proposition 9]. In fact, the branching graph in [\[fig:brgraph\]](#fig:brgraph){reference-type="ref" reference="fig:brgraph"} and the Hurwitz galaxy in [1](#fig:hurgal){reference-type="ref" reference="fig:hurgal"} correspond to each other and give rise to the same branched cover in $\mathfrak{H}_g(\mu,\nu)$. ## Distance Labelling and Geodesic Edges We now explore some of the combinatorics of Hurwitz galaxies. For a more detailed account, we refer to [@duchi2014bijections].
To begin with, we define a marked Hurwitz galaxy as a tuple $(G,x_0)$ where $G$ is a Hurwitz galaxy as defined above, and $x_0$ is a vertex of $G$ with colour $0$. When the marked vertex is clear from the context, we mostly denote a marked Hurwitz galaxy $(G,x_0)$ by $G$. We denote by $\mathcal{G}_g(\mu,\nu)$ the set of isomorphism classes of marked Hurwitz galaxies of type $(g,\mu,\nu)$, where isomorphisms respect the marked vertex.\ The introduction of this distinguished vertex allows us to define a notion of distance on a Hurwitz galaxy. **Definition 16**. Let $G$ be a Hurwitz galaxy with marked vertex $x_0$. We define the distance labelling of a vertex $x$ of $G$ to be the number $\delta(x)$ of edges contained in a shortest oriented path from $x_0$ to $x$. Note that every vertex can be reached by an oriented path from $x_0$ due to the fact that $G$ is connected and each oriented edge belongs to a cycle. Thus $\delta$ is well-defined and we get the following properties: 1. The colour and distance label of a vertex $x$ are related by $c(x)\equiv \delta (x)\, \mathrm{mod}\, b+1$. 2. For any oriented edge $x \rightarrow y$, $\delta(y) \equiv \delta(x)+1$ (mod $b+1$). Since the distance label of $y$ corresponds to the length of the shortest oriented path to $y$, we also must have $\delta(y) \leq \delta(x) +1$. **Definition 17**. Let $G$ be a marked Hurwitz galaxy with distinguished vertex $x_0$. We define the weight of an edge $e=x \rightarrow y$ as the non-negative integer quantity given by: $$\label{eq:weightdef} w(e)=\frac{\delta(x)+1-\delta(y)}{b+1}$$ An edge $e$ is called geodesic if $w(e)=0$ and non-geodesic otherwise. **Example 18**. We consider the Hurwitz galaxy in [1](#fig:hurgal){reference-type="ref" reference="fig:hurgal"} and fix the $0$-coloured vertex on the bottom left as the marked vertex. The resulting marked Hurwitz galaxy is depicted in [3](#fig:hurgalmarked){reference-type="ref" reference="fig:hurgalmarked"}.
The marked vertex is coloured green and the non-geodesic edges are coloured yellow. The vertices are labelled by $(a,b)$, where $a$ is the colour of the vertex and $b$ is the distance from the marked vertex. ![A marked Hurwitz galaxy of type $(0,(2,1),(1,1,1))$. The marked vertex is coloured green and the non-geodesic edges are coloured yellow. The vertices are labelled by $(a,b)$, where $a$ is the colour of the vertex and $b$ is the distance from the marked vertex.](galaxywithdistances.pdf){#fig:hurgalmarked} Consider some arbitrary vertex $v$ of a face $F$ of degree $i$ of a marked Hurwitz galaxy $G$. Let $p$ be some path around the boundary of $F$ that starts at $v$, passes through each edge of $F$ once and finishes at $v$ (a total of $(b+1)i$ edges). Then the change in distance label from the beginning of $p$ to the end of $p$ is given by $$\sum_{e=[x,y] \in p} \delta(x)-\delta(y).$$ Since $p$ begins and ends at $v$, however, the total variation in distance labels is in fact equal to $0$. Thus: $$\sum_{e=[x,y] \in p} \delta(x)-\delta(y) =0.$$ Using the definition of $w(e)$ as in equation ([\[eq:weightdef\]](#eq:weightdef){reference-type="ref" reference="eq:weightdef"}) together with the vanishing sum above, we conclude that: $$\sum_{e=[x,y] \in p} w(e)= \sum_{e=[x,y] \in p} \frac{\delta(x)+1-\delta(y)}{b+1} = \sum_{e=[x,y] \in p} \frac{1}{b+1} = \frac{(b+1)i}{b+1} =i$$ Thus, we have the following lemma. **Lemma 19**. *Let $G$ be a marked Hurwitz galaxy. Then, the sum of the weights of the edges incident to any face of degree $i$ is $i$.* ## Hurwitz mobiles Hurwitz mobiles are tree-like structures introduced in [@duchi2014bijections] consisting of black polygons, white polygons and edges between them. They are in some sense easier to work with than Hurwitz galaxies or branching graphs, due to their tree-like behaviour.\ Since our use of Hurwitz mobiles is restricted to the genus $0$ case, we also only give the definition in that situation.
In [@duchi2014bijections], the Hurwitz mobiles we consider are called *free*. **Definition 20**. Let $d$ be a positive integer and $\mu, \nu$ be ordered partitions of $d$. Furthermore let $b=\ell(\mu)+\ell(\nu)-2$. A Hurwitz mobile of type $(\mu, \nu)$ is then a connected partially oriented graph consisting of: 1. $d$ white nodes forming $\ell(\mu)$ disjoint oriented simple cycles, of length $\mu_i$, for $i=1, \dots ,\ell(\mu)$. We refer to these cycles as white polygons. 2. $d$ black nodes forming $\ell(\nu)$ disjoint oriented simple cycles, of length $\nu_i$, for $i=1, \dots ,\ell(\nu)$. We refer to these cycles as black polygons. 3. $b$ non-oriented edges with non-negative weights such that: - each zero weight edge has its endpoints on white polygons. - each positive weight edge is incident to a black polygon and a white polygon. - The sum of the weights of edges incident to any $i$-gon is $i$. An edge-labelled Hurwitz mobile is then a Hurwitz mobile in which each of the weighted edges has a distinct associated label taken in the set $\{ 0, 1, \dots , b \}$. We denote the set of Hurwitz mobiles of type $(\mu,\nu)$ by $M(\mu,\nu)$. **Example 21**. Let $\mu=(2,1)$ and $\nu=(1,1,1)$. A Hurwitz mobile of type $((2,1),(1,1,1))$ is depicted in [4](#fig:hurmobile){reference-type="ref" reference="fig:hurmobile"}. The labels of the edges are depicted as numbers inside circles, whereas the number of dashes of each edge indicates their weight. For example, the edge labelled $3$ has weight $0$, whereas the edge labelled $0$ has weight $1$. ![A Hurwitz mobile of type $((2,1),(1,1,1))$](mobileexample.pdf){#fig:hurmobile} Given a mobile $M$, there exists an operation $\sigma$ on $M$ called a shift that, when applied to $M$, yields another mobile $\sigma(M)$. This operation is of interest to us as it partitions $M(\mu,\nu)$, the set of Hurwitz mobiles, into so-called shift-equivalence classes. **Definition 22**. Let $M$ be a Hurwitz mobile.
Its shift $\sigma(M)$ is the Hurwitz mobile obtained by translating the two endpoints of the edge of label $b$ to the next vertices of the adjacent polygons according to their orientation, and then incrementing each edge label in $M$ by $1$ modulo $b+1$.\ We call two Hurwitz mobiles shift equivalent if one may be obtained from the other by a finite sequence of shifts. We will denote by $\mathcal{M}(\mu,\nu)$ the set of shift equivalence classes of mobiles in $M(\mu,\nu)$. The local structure of the shift is depicted in [\[fig:shiftex\]](#fig:shiftex){reference-type="ref" reference="fig:shiftex"}. **Proposition 23**. *Given a Hurwitz mobile $M\in M(\mu,\nu)$, we have $\sigma^{b+1}(M)=M$ and there are $b+1$ distinct graphs in the shift equivalence class of each mobile in $M(\mu,\nu)$.* **Example 24**. The shift equivalence class of the Hurwitz mobile corresponding to the marked Hurwitz galaxy in [3](#fig:hurgalmarked){reference-type="ref" reference="fig:hurgalmarked"} is depicted in [\[fig:shiftequivalence\]](#fig:shiftequivalence){reference-type="ref" reference="fig:shiftequivalence"}. ![](mobileexample.pdf){width="0.9\\linewidth"} ![](shiftequivalence1.pdf){width="0.9\\linewidth"} ![](shiftequivalence2.pdf){width="0.9\\linewidth"} ![](shiftequivalence3.pdf){width="0.9\\linewidth"} ![](mobileexample.pdf){width="0.9\\linewidth"} ## Hurwitz Dyck Paths In this subsection, we lay the groundwork towards understanding the relation between Hurwitz galaxies and Hurwitz mobiles. In order to achieve this, we introduce the notion of Hurwitz Dyck paths, which are a special class of Dyck paths that appear in the combinatorics of Hurwitz mobiles. These objects build on ideas of [@duchi2014bijections] and prove to be closely related to the classes of Dyck paths found in [@dyckmodk].
In [4](#sec:bij){reference-type="ref" reference="sec:bij"} we provide a bijection between these objects and Hurwitz galaxies and mobiles, before using this bijection to provide a classification of pruned Hurwitz mobiles in [4.2](#sec:prunhurdyck){reference-type="ref" reference="sec:prunhurdyck"}. **Definition 25**. For some positive integer $n$, a Dyck path of length $2n$ is a lattice path consisting of $n$ up-steps of the form $u=(1,1)$ as well as $n$ down-steps of the form $d=(1,-1)$ which starts at $(0,0)$, ends at $(2n,0)$ and stays (weakly) above the $x$-axis. Additionally, an occurrence of $ud$ (resp. $du$) within a Dyck path is called a *peak* (resp. *valley*), an occurrence of $u^h d^h$ is called a *pyramid of height* $h$, a down-step beginning at height $1$ is called a *return step* and for any up-step $U$ its *matching down-step* is the nearest down-step (to the right) such that the numbers of up-steps and down-steps between them are equal. We say that a Dyck path with only one return step is *primitive*. For a vertex $v$, we denote its $x$-coordinate by $v_x$ and its $y$-coordinate by $v_y$. Before defining single Hurwitz Dyck paths, we introduce two new notions. **Definition 26**. Let $b$ be a positive integer and $n$ some multiple of $b$. A $b$-Dyck path of length $2n$ is a Dyck path of length $2n$ in which the length of every maximal ascent (a maximal path of up-steps) is a multiple of $b$.\ A vertex $v$ is called *distinguished* if the number of up-steps in the Dyck path prior to $v$ is a multiple of $b$ and $v$ is succeeded by an up-step. Let $m$ be the number of distinguished vertices; we then label the distinguished vertices by $1,\dots,m$. **Definition 27**. Let $t$ be a positive integer. A $t$-marked Dyck path is a Dyck path in which $t$ pairs of vertices of down-steps have been marked, such that for any pair $(v,w)$ the $y$-coordinates of $v$ and $w$ coincide.
Furthermore, the $t$ pairs are labelled $1,\dots,t$.\ Given a pair $(v,w)$, we define its interval $[v,w]$ to be the lattice path from $v$ to $w$ in the Dyck path. We define its *essential interval* $[v,w]^e$ as $$[v,w]^e=[v,w]\backslash \bigcup_{(v',w')}[v',w'],$$ where the union runs over all marked pairs $(v',w')$ in $[v,w]$. For a marked pair $(v,w)$ in a $b$-Dyck path, we define its degree as $$\mathrm{deg}((v,w))=\frac{|\{\textrm{up-steps in }[v,w]^e\}|}{b}=\frac{|\{\textrm{down-steps in }[v,w]^e\}|}{b}.$$ We are now ready to define single Hurwitz Dyck paths. **Definition 28**. Let $d$ be a positive integer and $\mu$ be an ordered partition of $d$. Furthermore let $b=\ell(\mu)+d-2$.\ A *single Hurwitz Dyck* path of type $\mu$ is then a primitive $\ell(\mu)$-marked $b$-Dyck path of length $2bd$, such that: 1. the pair $((0,0),(2bd,0))$ is marked, 2. the set of $y$-coordinates of marked pairs modulo $b+1$ is of size $\ell(\mu)$, 3. the set of $y$-coordinates of distinguished vertices modulo $b+1$ is of size $d$, 4. the set of non-zero $y$-coordinates of marked pairs modulo $b+1$ and the set of non-zero $y$-coordinates of distinguished vertices modulo $b+1$ are disjoint, 5. for any two marked pairs of vertices $(v_1,w_1)$ and $(v_2,w_2)$, assuming w.l.o.g. that $(v_1)_x\le (w_1)_x$ and $(v_2)_x\le(w_2)_x$, we have that if $(v_1)_x\le (v_2)_x$, then $(w_2)_x\le(w_1)_x$. In other words, pairs of vertices admit a non-crossing condition, 6. the marked pair labelled $i$ has degree $\mu_i$. We denote the set of single Hurwitz Dyck paths of type $\mu$ by $D(\mu)$. **Example 29**. In [\[fig:contourfunctionexample\]](#fig:contourfunctionexample){reference-type="ref" reference="fig:contourfunctionexample"}, we illustrate a single Hurwitz Dyck path of type $(2,1)$. The distinguished vertices are marked red. The green vertices represent the marked pairs in this example. Note that $b=3$.
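The two most basic conditions above, namely the Dyck path condition of Definition 25 and the $b$-Dyck condition of Definition 26, are easy to experiment with by brute force. The following sketch (illustrative Python, not part of the source material; the helper names `dyck_paths` and `is_b_dyck` are our own) enumerates Dyck paths and filters for those whose maximal ascents all have length divisible by $b$:

```python
from itertools import product

def dyck_paths(n):
    """All Dyck paths of length 2n as strings of 'u' (up) and 'd' (down)."""
    paths = []
    for steps in product("ud", repeat=2 * n):
        h = 0
        ok = True
        for s in steps:
            h += 1 if s == "u" else -1
            if h < 0:          # the path must stay weakly above the x-axis
                ok = False
                break
        if ok and h == 0:      # the path must end at height 0
            paths.append("".join(steps))
    return paths

def is_b_dyck(path, b):
    """Check that every maximal run of up-steps has length a multiple of b."""
    run = 0
    for s in path + "d":       # sentinel down-step flushes the final run
        if s == "u":
            run += 1
        else:
            if run % b != 0:
                return False
            run = 0
    return True

# The number of Dyck paths of length 2n is the Catalan number C_n.
print(len(dyck_paths(3)))                              # C_3 = 5
# For b = 1 every Dyck path is a 1-Dyck path.
print(sum(is_b_dyck(p, 1) for p in dyck_paths(3)))     # 5
# For b = 2, n = 2: only 'uudd' qualifies ('udud' has ascents of length 1).
print(sum(is_b_dyck(p, 2) for p in dyck_paths(2)))     # 1
```

Distinguished vertices and the marked pairs of Definitions 26 to 28 are deliberately omitted; the sketch is exponential in $n$ and only intended for very small paths.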
## Basics on Cacti In [4](#sec:bij){reference-type="ref" reference="sec:bij"}, we outline the bijection between marked Hurwitz galaxies of genus $0$ and shift equivalence classes of mobiles given in [@duchi2014bijections]. While the authors of *loc. cit.* construct an explicit map $\Phi\colon\mathcal{G}_0(\mu,\nu)\to\mathcal{M}(\mu,\nu)$, the proof of its bijectivity moves through combinatorial objects called *cacti*. We now review some of the basic notions. **Definition 30**. *[@duchi2014bijections]* We denote by $C_g(\mu,\nu)$ the set of graphs on surfaces of genus $g$ with one boundary such that the following hold: 1. (Face colour condition) There are $\ell(\mu)$ white faces and $\ell(\nu)$ black faces, labelled $1,\dots,\ell(\mu)$ and $1,\dots,\ell(\nu)$. The white face labelled $i$ has degree $\mu_i$. Similarly, the degree of the black face labelled $j$ is $\nu_j$. 2. There are three types of edges. These are: - Internal edges, that are incident to a black face and a white face. - White boundary edges, that are oriented and have a white face on their right hand side. - Black boundary edges, that are oriented and have a black face on their left hand side. We say that a vertex is active if it has at least one incoming white boundary edge. 3. (Vertex colour condition) All vertices are incident to the boundary and have a colour in $\{ 0, \dots, b \}$. Each boundary edge $u \rightarrow v$ that joins a vertex $u$ with colour $c(u)$ to a vertex $v$ with colour $c(v)$ satisfies $c(v)=c(u)+1\,\mathrm{mod}\,b+1$. 4. (Hurwitz condition) There are $d-1$ active vertices of each colour. We call an element $C$ of $C_g(\mu,\nu)$ a cactus of type $(g,\mu,\nu)$. **Example 31**. Let $g=0$, $\mu=(2,1)$ and $\nu=(1,1,1)$. A cactus of type $(0,(2,1),(1,1,1))$ is depicted in [5](#fig:cactusexample){reference-type="ref" reference="fig:cactusexample"}. ![An example of a cactus $C$ of type $(0,(2,1),(1,1,1))$. Internal edges are shown in green.
The vertex colours are included within the small circles along the boundary of $C$.](cactusexample.pdf){#fig:cactusexample width="0.5\\linewidth"} One can see that for cacti in $C_0(\mu,\nu)$, like $C$ in Figure [5](#fig:cactusexample){reference-type="ref" reference="fig:cactusexample"}, the graph consists of polygons that are arranged to form a tree-like structure (a kind of cactus -- hence the terminology). Due to this tree-like structure, as well as the black and white galaxy-like faces, we see that cacti share some of the important properties of both Hurwitz galaxies and Hurwitz mobiles. As a first step, we associate a cactus to a marked Hurwitz galaxy. We begin with the following definitions. **Definition 32**. Let $G\in\mathcal{G}_0(\mu,\nu)$ and $v$ be a vertex with two incoming geodesic edges. We define a splitting of $v$ in $G$ as the graph $\tilde{G}$ obtained by replacing $v$ by two new vertices, each carrying one incoming geodesic edge and the outgoing edge following it in clockwise direction around $v$. **Definition 33**. Let $G\in\mathcal{G}_0(\mu,\nu)$ be a marked Hurwitz galaxy. Then, we define $\Theta(G)$ to be the marked graph obtained from $G$ by splitting all vertices with two incoming geodesic edges and removing non-geodesic edges. **Example 34**. Let $G$ be the marked Hurwitz galaxy in [3](#fig:hurgalmarked){reference-type="ref" reference="fig:hurgalmarked"}. The graph $\Theta(G)$ is illustrated in [6](#fig:treegalaxy){reference-type="ref" reference="fig:treegalaxy"}. ![The construction of the tree $\Theta(G)$ corresponding to the marked galaxy $G$ of [3](#fig:hurgalmarked){reference-type="ref" reference="fig:hurgalmarked"}. 
Notice the splitting of the 4-valent vertex of colour $3$, as well as the distance labelling which agrees with the distance labelling in [3](#fig:hurgalmarked){reference-type="ref" reference="fig:hurgalmarked"}.](treegalaxyex.pdf){#fig:treegalaxy width="0.5\\linewidth"} Note that by [@duchi2014bijections Proposition 3], the graph $\Theta(G)$ is a tree and for each vertex $v\in\Theta(G)$ the distance $\delta(v)$ (defined in the Hurwitz galaxy) is the distance of $v$ to the marked vertex in $\Theta(G)$. **Construction 35**. Let $G$ be a marked Hurwitz galaxy on some compact oriented surface $S$ of genus $g$. 1. Consider the graph $\Theta(G)$ on $S$. 2. Since $\Theta(G)$ is a tree, we have that $S \setminus \Theta(G)$ has one open boundary and its closure is a surface $S^{\partial}$ of genus $g$ with one boundary. 3. We let $\Gamma(G)$ be the graph induced by $G$ on $S^{\partial}$. By [@duchi2014bijections Lemma 4], we have $\Gamma(G)\in C_g(\mu,\nu)$. We see that $\Gamma(G)$ contains faces and non-geodesic edges that it inherits from the faces and non-geodesic edges of $G$.\ The geodesic edges of $G$ produce two boundary edges of $\Gamma(G)$, one white boundary edge and one black boundary edge. In total, we have the cases illustrated in [\[fig:localcactus\]](#fig:localcactus){reference-type="ref" reference="fig:localcactus"} (see also [@duchi2014bijections Section 4.1]). ![A vertex with one incoming geodesic edge produces two boundary edges in the construction of $\Gamma(G)$ (one white boundary edge and one black boundary edge).](localcactus1.pdf){width="0.75\\linewidth"} ![A 4-valent vertex with one incoming geodesic edge and one incoming non-geodesic edge produces the structure shown on the right.
There are three vertices corresponding to the vertex of colour $c$, as well as an internal edge corresponding to the non-geodesic edge in $G$.](localcactus2.pdf){width="0.75\\linewidth"} ![A 4-valent vertex with two incoming geodesic edges produces three vertices in $\Gamma(G)$; one of these is incident to two white polygons.](localcactus3.pdf){width="0.75\\linewidth"} For a vertex $v$ with two adjacent edges $e$ and $e'$ traversing along the same part of the boundary, we call $(v,e,e')$ a *boundary corner*. **Example 36**. The cactus considered in [Example 31](#ex:cactusex){reference-type="ref" reference="ex:cactusex"} and illustrated in [5](#fig:cactusexample){reference-type="ref" reference="fig:cactusexample"} is obtained from the marked Hurwitz galaxy in [3](#fig:hurgalmarked){reference-type="ref" reference="fig:hurgalmarked"} via [Construction 35](#constr:galtocac){reference-type="ref" reference="constr:galtocac"}. This is illustrated at the top of [\[fig:galtocac\]](#fig:galtocac){reference-type="ref" reference="fig:galtocac"}. At the bottom, we include the canonical corner labelling in red. ![](cactusconstruction.pdf){width="0.9\\linewidth"} ![](cornerlabel1.pdf){width="0.9\\linewidth"} **Definition 37**. Given a cactus $C$, a *canonical corner labelling* is a mapping $\delta$ from the set of boundary corners of $C$ into the non-negative integers such that: - the minimum label is $0$, - for each boundary edge $e = u \rightarrow v$, $\delta(c^{\prime})=\delta(c)+1$, where $c$ is the boundary corner incident to $e$ at $u$ and $c^{\prime}$ is the boundary corner incident to $e$ at $v$. For any galaxy $G$, the corner labelling of $\Gamma(G)$ inherited from the distance labelling on $G$ is canonical by construction. Furthermore, any cactus $C\in C_g(\mu,\nu)$ has a unique canonical corner labelling by [@duchi2014bijections Lemma 5]. **Definition 38**.
*[@duchi2014bijections]* The canonical corner labelling $\delta$ of a cactus $C \in C_g(\mu,\nu)$ is said to be coherent if for each vertex $u \in C$, all boundary corners of $u$ have the same label. In this case the canonical corner labelling yields a vertex labelling called the *coherent canonical labelling* of $C$. We denote by $C_g^c(\mu,\nu)$ the set of cacti of $C_g(\mu,\nu)$ whose canonical corner labelling is coherent. **Remark 39**. By [@duchi2014bijections Proposition 4], we have $C_0(\mu,\nu)=C^c_0(\mu,\nu)$, i.e. all canonical corner labellings in genus zero are coherent. We end this section with the following definition. **Definition 40**. A cactus $C$ of $C_g^c(\mu,\nu)$ is said to be proper if the colour of its vertices with canonical label $0$ is $0$. We denote by $C_g^{0c}(\mu,\nu)$ the set of proper cacti. Elements of this set are said to have a proper canonical corner labelling.\ # Pruned Hurwitz numbers {#sec:prunhur} In [@do2018pruned], a new variant of Hurwitz numbers was introduced, so-called *pruned Hurwitz numbers*. Their motivation stems from the theory of Chekhov--Eynard--Orantin topological recursion. In this theory, one starts with a spectral curve as input datum and then recursively constructs a sequence of multi-differential forms. Astonishingly, for many enumerative invariants, one may find a spectral curve such that the coefficients of the multi-differentials obtained via topological recursion are exactly the enumerative invariants we started with.\ For example, for the spectral curve $$x(z)=ze^{-z}\quad\mathrm{and}\quad y(z)=z,$$ the multi-differentials $\omega_{g,n}$ that are the output of topological recursion satisfy $$\omega_{g,n}=\sum_{\mu_1,\dots,\mu_n=1}^\infty\frac{{H}_g(\mu)}{b!}\prod_{i=1}^n\mu_ix_i^{\mu_i-1} \mathrm{d}x_1\cdots\mathrm{d}x_n,$$ where $b$ is the number of simple branch points of the respective Hurwitz numbers.
This result is the content of the Bouchard-Mariño conjecture [@zbMATH05380331] that was proved in [@borot2011matrix; @eynard2011laplace].\ The idea in [@do2018pruned] is to expand the differentials $\omega_{g,n}$ in the $z$-coordinates instead of the $x$-coordinates to obtain $$\omega_{g,n}=\sum_{\mu_1,\dots,\mu_n=1}^\infty\frac{{PH}_g(\mu)}{b!}\prod_{i=1}^n\mu_iz_i^{\mu_i-1} \mathrm{d}z_1\cdots\mathrm{d}z_n.$$ The astonishing observation of *loc. cit.* is that the coefficients $PH_g(\mu)$ admit a natural interpretation as enumerative invariants that are now called *pruned single Hurwitz numbers*.\ We begin with their definition. **Definition 41**. Let $g$ be a non-negative integer, $d$ a positive integer and $\mu$ a partition of $d$. We define $PB_g(\mu)\subset B_g(\mu,(1^d))$ to be the subset of branching graphs of type $(g,\mu,1^d)$ with no $1$-valent vertices.\ We define the associated *pruned single Hurwitz number* as $$PH_g(\mu)=\sum_{[\Gamma]\in PB_g(\mu)}\frac{1}{|\mathrm{Aut}(\Gamma)|}.$$ Next, we rephrase the definition of pruned Hurwitz numbers in terms of Hurwitz galaxies. To begin with, we need the following definition. **Definition 42**. Let $P$ be some face of a Hurwitz galaxy $G$. We say that $P$ is a bubble if it contains exactly one 4-valent vertex $v$. **Example 43**. In [7](#fig:bubble1){reference-type="ref" reference="fig:bubble1"}, we illustrate a Hurwitz galaxy with a black bubble. ![A Hurwitz galaxy of type $(0,(2,1),(1,1,1))$ with a bubble.](bubbleexample1.pdf){#fig:bubble1} **Definition 44**. Let $g$ be a non-negative integer, $d$ a positive integer and $\mu$ a partition of $d$. Then, we define $PG_g(\mu)\subset G_g(\mu,1^d)$ to be the subset of Hurwitz galaxies of type $(g,\mu,1^d)$ without black faces that are bubbles. Recall from [Remark 15](#rem:brgalcorr){reference-type="ref" reference="rem:brgalcorr"} that there is a combinatorial correspondence between branching graphs and Hurwitz galaxies.
It is easy to see that Hurwitz galaxies in $PG_g(\mu)$ exactly correspond to branching graphs in $PB_g(\mu)$ ([@hahn2020bi Proposition 3.3]) and thus that $$PH_g(\mu)=\sum_{[G]\in PG_g(\mu)}\frac{1}{|\mathrm{Aut}(G)|}.$$ Next, we define a natural generalisation of $PH_g(\mu)$, namely pruned double Hurwitz numbers. For this it will be convenient to consider the pruning condition with respect to pre-images of $0$ instead of $\infty$. **Definition 45**. Let $g$ be a non-negative integer, $d$ a positive integer, $\mu,\nu$ partitions of $d$. Then, we define $PG_g(\mu,\nu)\subset G_g(\mu,\nu)$ to be the subset of Hurwitz galaxies of type $(g,\mu,\nu)$ without white faces that are bubbles. We call the elements of $PG_g(\mu,\nu)$ pruned Hurwitz galaxies. In particular, we define pruned double Hurwitz numbers as $$PH_g(\mu,\nu)=\sum_{[G]\in PG_g(\mu,\nu)}\frac{1}{|\mathrm{Aut}(G)|}.$$ For some purposes, it is useful to ignore automorphisms and thus, we also define $$\widehat{PH}_g(\mu,\nu)=\sum_{[G]\in PG_g(\mu,\nu)}1.$$ We note that $PH_g(\mu,\nu)=\widehat{PH}_g(\mu,\nu)$ unless $\mu=\nu=(d)$. Moreover, we have $PH_g(\nu)=PH_g(1^d,\nu)$. Pruned double Hurwitz numbers share many properties with their classical counterparts. Here we collect the ones relevant for our work. Firstly, pruned Hurwitz numbers may be expressed in terms of factorisations in the symmetric group. To formulate this correspondence, we need the following definition. **Definition 46**. For a permutation $\sigma \in S_d$, the *support of* $\sigma$, denoted $\mathrm{supp}(\sigma)$, is the set of elements of $[d] = \{1, \ldots, d\}$ that are not fixed by $\sigma$. **Example 47**. Consider the permutations $\sigma_1 = (246)\in S_7$ and $\sigma_2 = (23)\in S_7$.
Then, $$\mathrm{supp}(\sigma_1) = \{ 2,4,6\}, \quad \mathrm{supp}(\sigma_2) = \{ 2, 3\}, \quad \text{ and } \mathrm{supp}(\sigma_1) \cap \mathrm{supp}(\sigma_2) = \{ 2\}.$$ Using this notation we may express pruned double Hurwitz numbers as the following weighted count. **Theorem 48** ([@zbMATH06791415 Theorem 41]). *Let $g\geq 0$ and $d > 0$ be integers, and let $\mu, \nu$ be partitions of $d$. Then, we have $$PH_g(\mu, \nu) = \frac{1}{d!} \cdot \left| \left\{ \begin{array}{l} (\sigma_1, \tau_1, \ldots, \tau_b, \sigma_2) \text{ such that:} \\ \bullet \ b = 2g-2+\ell(\mu) + \ell(\nu), \\ \bullet \ \sigma_1, \sigma_2, \tau_i \in S_d, \\ \bullet \ \mathcal{C} (\sigma_1) = \mu,\ \mathcal{C}(\sigma_2) = \nu, \\ \bullet \ \tau_i \text{ are transpositions }, \\ \bullet \ \sigma_1 \tau_1 \cdots \tau_b = \sigma_2^{-1} \\ \bullet \ \text{the subgroup generated by } (\sigma_1, \tau_1, \ldots, \tau_b, \sigma_2)\\ \ \ \text{ acts transitively on } [d], \\ \bullet \ \textrm{the cycles of }\sigma_1\textrm{ are labelled by }1,\dots,\ell(\mu),\\\textrm{such that the cycle labelled }i\textrm{ has length }\mu_i,\\ \bullet \ \textrm{the cycles of }\sigma_2\textrm{ are labelled by }1,\dots,\ell(\nu),\\\textrm{such that the cycle labelled }i\textrm{ has length }\nu_i,\\ \bullet \ \sum_{j=1}^b|\mathrm{supp}(c_i) \cap \mathrm{supp}(\tau_j)| \geq 2, \textrm{ for all } i = 1, \ldots , \ell(\mu),\\ \ \ \textrm{where }c_i\textrm{ denotes the cycle of }\sigma_1\textrm{ labelled }i. \end{array}\right\} \right|$$* **Remark 49**. We note that the last condition differs from the one in [@zbMATH06791415]. Indeed, the given interpretation of pruned double Hurwitz numbers in *loc. cit.* is not correct as the condition there misses loops in branching graphs. However, the two expressions only differ for $PH_0((\mu_1),(\nu_1,\nu_2))$. Moreover, pruned double Hurwitz numbers admit polynomial behaviours reminiscent of those of classical Hurwitz numbers.
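As a plausibility check of the count in Theorem 48, one can enumerate the tuples directly for small $d$. The following brute-force sketch (illustrative Python, not from the source; the function names are our own) treats the special case $g=0$, $\mu=(d)$, $\ell(\nu)=2$, where $b=1$, transitivity is automatic because $\sigma_1$ is a $d$-cycle, and the pruning condition holds trivially since $\mathrm{supp}(\sigma_1)=[d]$:

```python
from itertools import permutations
from fractions import Fraction
from math import factorial
from collections import Counter

def cycle_type(p):
    """Cycle type of a permutation given as a tuple p with i -> p[i]."""
    seen, lens = [False] * len(p), []
    for i in range(len(p)):
        if not seen[i]:
            j, l = i, 0
            while not seen[j]:
                seen[j], j, l = True, p[j], l + 1
            lens.append(l)
    return tuple(sorted(lens, reverse=True))

def ph0_single_cycle(nu):
    """PH_0((d), nu) for nu = (nu1, nu2) via the count of Theorem 48.

    Here b = 2*0 - 2 + 1 + 2 = 1, so we enumerate pairs (sigma1, tau1)
    with sigma1 a d-cycle and tau1 a transposition, setting
    sigma2 = (sigma1 tau1)^{-1}; since inversion preserves cycle type,
    it suffices to test the type of sigma1 tau1 against nu.
    """
    d = sum(nu)
    target = tuple(sorted(nu, reverse=True))
    # number of ways to label sigma2's cycles so that the cycle
    # labelled i has length nu_i
    labelings = 1
    for m in Counter(nu).values():
        labelings *= factorial(m)
    count = 0
    for s1 in permutations(range(d)):
        if cycle_type(s1) != (d,):          # sigma1 must be a d-cycle
            continue
        for i in range(d):
            for j in range(i + 1, d):
                t = list(range(d))
                t[i], t[j] = j, i            # transposition (i j)
                prod = tuple(s1[t[k]] for k in range(d))  # sigma1 after tau1
                if cycle_type(prod) == target:
                    count += 1
    return Fraction(count * labelings, factorial(d))

print(ph0_single_cycle((2, 1)))  # 1
print(ph0_single_cycle((3, 1)))  # 1
print(ph0_single_cycle((2, 2)))  # 1
```

For these inputs the computed value is $1$, matching $PH_0((a),(b,c))=1$ as recorded in [@zbMATH06791415 Example 36]. The enumeration grows like $d!\cdot d^2$, so it is only feasible for small $d$.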
More precisely, let $m,n>0$ be integers and recall the hyperplane $$\mathcal{H}_{m,n}=\left\{(\mu,\nu)\in\mathbb{N}^m\times\mathbb{N}^n\mid\sum\mu_i=\sum\nu_j\right\}$$ and the resonance arrangement inside this hyperplane $$\mathcal{R}_{m,n}=\left\{\sum_{i\in I}\mu_i-\sum_{j\in J}\nu_j=0\mid I\subset[m],J\subset[n]\right\}.$$ We may now consider the map $$\begin{aligned} PH_g\colon\mathcal{H}_{m,n}&\to\mathbb{Q}\\ (\mu,\nu)&\mapsto PH_g(\mu,\nu).\end{aligned}$$ **Theorem 50** ([@zbMATH06791415 Theorem 31]). *The map $PH_g$ is piecewise polynomial. More precisely, for each chamber $C$ of $\mathcal{R}_{m,n}$, there exists a polynomial $P_g^C$ in $m+n$ variables of degree $4g-3+m+n$, such that for all $(\mu,\nu)\in C$ we have $PH_g(\mu,\nu)=P_g^C(\mu,\nu)$.* Wall-crossing formulae reminiscent of the classical situation remain an open problem. We end this section with the computation of the polynomial expression of certain pruned double Hurwitz numbers. The first two cases were already obtained in [@zbMATH06791415 Example 36]. **Example 51**. - For $g=0$, $\ell(\mu) = 1$, and $\ell(\nu) = 2$, we have $$PH_0((a), (b, c)) = 1.$$ - For $g=0$, $\ell(\mu) = 2 = \ell(\nu)$, we have $$PH_0 ((a, b), (c,d) ) = 2 \cdot \min \{ a, b, c, d\}.$$ - Generalising Example 36 in [@zbMATH06791415], we consider the case $\ell(\mu)= n \geq 3$, $\ell(\nu) = 2$. That is, $\mu= (\mu_1, \ldots, \mu_n)$ and $\nu = (\nu_1, \nu_2)$. Then, $$PH_0(\mu, \nu) = \frac{1}{n} \cdot \left| \left\{ \begin{array}{l} (\alpha_1, \ldots, \alpha_n, i_1, \ldots, i_n, j_1, \ldots, j_n) \text{ such that:} \\ \bullet \ (i_1, \ldots, i_n) , (j_1, \ldots, j_n) \text{ are ordered} \\ \bullet \ \text{tuples with entries in } [n], \\ \bullet \ \sum_{i=1}^n \alpha_i = \nu_1, \\ \bullet \ 0 \leq \alpha_i \leq \mu_i, \\ \bullet \ \text{If } j_k < j_{k-1}: \ \alpha_i \geq 1, \\ \bullet \ \text{If } j_k > j_{k-1}: \ \alpha_i \leq \mu_{i-1}.
\end{array}\right\} \right|.$$ We see immediately that $PH_0(\mu,\nu)$ is polynomial in each chamber of the resonance arrangement. - We have $PH_0((a), (a)) = 0$ and $PH_0((a, b), (c)) = 0$. Furthermore, for all $PH_g(\mu, \nu)$ with $(g, \ell(\nu) ) = (0, 1)$ we find that $$PH_g((\mu_1, \ldots, \mu_m), (\nu_1) ) = 0.$$ # Pruned Hurwitz mobiles {#sec:bij} In this section, we express pruned single Hurwitz numbers in terms of pruned Hurwitz mobiles. To begin with, in [4.1](#sec:hurdyckbij){reference-type="ref" reference="sec:hurdyckbij"} we recall the bijection between marked Hurwitz galaxies of genus $0$ and shift equivalence classes of mobiles given in [@duchi2014bijections] and rephrase it in terms of single Hurwitz Dyck paths. We then derive an expression of pruned single Hurwitz numbers in terms of a distinguished subset of single Hurwitz Dyck paths in [4.2](#sec:prunhurdyck){reference-type="ref" reference="sec:prunhurdyck"}. Finally, we employ this expression to prove a correspondence between pruned Hurwitz galaxies and a distinguished subset of Hurwitz mobiles in [\[sec:prunhurmob\]](#sec:prunhurmob){reference-type="ref" reference="sec:prunhurmob"}, giving an interpretation of pruned single Hurwitz numbers in terms of what we call *pruned Hurwitz mobiles*. ## Single Hurwitz numbers and Dyck paths {#sec:hurdyckbij} We construct a bijective map $$\Phi\colon \mathcal{G}_0(\mu,\nu)\to\mathcal{M}(\mu,\nu)$$ between marked Hurwitz galaxies and shift equivalence classes of mobiles. Here, we focus on the case of $\nu=(1^d)$ and rephrase the bijection in terms of single Hurwitz Dyck paths, introduced in [Definition 28](#def:singlehurwitzdyck){reference-type="ref" reference="def:singlehurwitzdyck"}. **Construction 52**. Let $G\in\mathcal{G}_0(\mu,\nu)$. The steps of the construction of $\Phi(G)$, as described in [@duchi2014bijections Section 2] are as follows: 1.
Polygons: Place in each face of degree $i$ of the galaxy $G$ an oriented $i$-gon -- oriented clockwise in the white faces and counter-clockwise in the black faces. These are the nodes and arcs of $\Phi(G)$, i.e. the white and black polygons of the mobile. 2. Construction lines: Join with dashed lines the $i$ nodes in each face $F$ of degree $i$ to the centres of the $i$ edges given by $b\rightarrow 0$ on the boundary of the face. This divides the face of degree $i$ into $i+1$ sub-regions: the interior of the white or black polygon, as well as $i$ sub-regions that each have on their boundary a path $0 \rightarrow 1 \rightarrow \dots \rightarrow b$ along the boundary of $F$, as well as dashed lines and an arc joining two adjacent nodes of the $i$-gon. 3. Positive weight edges: For each non-geodesic edge $e = u \rightarrow v$ with weight $w(e)$, let $F_{\circ}$ and $F_{\bullet}$ be the white and black faces incident to $e$ respectively, and let $x$ (resp. $y$) denote the origin of the unique arc of the polygon incident to the same sub-region of $F_{\circ}$ (resp. $F_{\bullet}$) as the vertex $v$. Create an edge with label $c(v)$ and weight $w(e)$ between the nodes $x$ and $y$ (constructed through the edge $e$). 4. Zero weight edges: For each vertex $v$ of $G$ with colour $c(v)$ that has two incoming geodesic edges, let $F_{\circ}$ and $F_{\circ}'$ denote the two incident white faces, and let $y$ (resp. $y^{\prime}$) denote the origin of the unique arc of the respective polygon incident to the same sub-region of $F_{\circ}$ (resp. $F_{\circ}'$) as $v$. Create an edge with label $c(v)$ and weight zero between $y$ and $y^{\prime}$ (constructed to pass through the vertex $v$). 5. Remove the dashed lines. We call the resulting graph $\Phi(G)$. **Example 53**.
We illustrate [Construction 52](#constr:galtomob){reference-type="ref" reference="constr:galtomob"} in [8](#fig:galaxytomobile){reference-type="ref" reference="fig:galaxytomobile"} for the marked galaxy of [3](#fig:hurgalmarked){reference-type="ref" reference="fig:hurgalmarked"}. We obtain as a result the Hurwitz mobile in [4](#fig:hurmobile){reference-type="ref" reference="fig:hurmobile"}. ![The construction of an edge-labelled Hurwitz mobile from the galaxy $G$ of type $(0,(2,1),(1,1,1))$ in [3](#fig:hurgalmarked){reference-type="ref" reference="fig:hurgalmarked"}. The mobile $\Phi(G)$ obtained from this construction (edges shown in red) is the mobile of type $(0,(2,1),(1,1,1))$ of [4](#fig:hurmobile){reference-type="ref" reference="fig:hurmobile"}.](galaxytomobile1.pdf){#fig:galaxytomobile width="0.6\\linewidth"} **Theorem 54** ([@duchi2014bijections Theorem 1]). *Let $\mu,\nu$ be partitions of $d$. Then $$\Phi\colon\mathcal{G}_0(\mu,\nu)\to\mathcal{M}(\mu,\nu)$$ is bijective.* Since the proof of [Theorem 54](#thm-schaeffer){reference-type="ref" reference="thm-schaeffer"} is constructive, it will play an important role in our proof of [Theorem 70](#thm:classhurmob){reference-type="ref" reference="thm:classhurmob"}. Thus, we outline the key steps and refer to [@duchi2014bijections Section 4] for a more detailed account. Rather than proving explicitly that $\Phi$ is bijective, the authors of [@duchi2014bijections] split $\Phi$ into two steps, then prove that each is bijective. We translate parts of the proof into the language of Dyck paths. As a first step, we consider the map $\Gamma$ associating a cactus to a marked Hurwitz galaxy. It was proved in [@duchi2014bijections Corollary 8] that $\Gamma$ is a bijection. We focus on the case $(0,\mu,(1^d))$ and rephrase the bijection in terms of the single Hurwitz Dyck paths introduced in [Definition 28](#def:singlehurwitzdyck){reference-type="ref" reference="def:singlehurwitzdyck"}.
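The basic up-step/down-step bookkeeping used in the following construction can be sketched in a few lines of code. This is a minimal illustration under conventions of our own choosing: the boundary-word representation and the function names are not part of the construction, and the markings and distinguished vertices of the full construction are omitted.

```python
def boundary_to_path(word):
    """Translate a boundary word into the list of heights of a lattice
    path: each black boundary edge ('B') contributes an up-step and
    each white boundary edge ('W') a down-step, starting at height 0."""
    heights = [0]
    for edge in word:
        heights.append(heights[-1] + (1 if edge == 'B' else -1))
    return heights

def is_dyck(heights):
    """A lattice path is a Dyck path if it starts and ends at height 0
    and never passes below the x-axis."""
    return heights[0] == 0 and heights[-1] == 0 and min(heights) >= 0

# the hypothetical boundary word 'BBWBWW' yields heights 0,1,2,1,2,1,0
example = boundary_to_path('BBWBWW')
```

The heights recorded here play the role of the canonical corner labels of the cactus vertices.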
**Construction 55**. Given some cactus $C \in C_0(\mu,1^d)$, we construct a lattice path as follows: 1. We begin at $(0,0)$ and construct the lattice path by moving in a counterclockwise direction from the marked vertex along the boundary of $C$. 2. For each black boundary edge, we insert an up-step, and for each white boundary edge, we insert a down-step. 3. For any vertex in $C$ incident to two white faces, we indicate the two instances it appears in the lattice path as a marked pair as in [Definition 27](#def:tdyckpath){reference-type="ref" reference="def:tdyckpath"}. 4. When crossing a vertex from a white to a black boundary edge, distinguish the corresponding lattice point. 5. As $C$ only has one boundary forming a cyclic path, we end again at the marked vertex, which ends the construction. We denote the constructed lattice path by $D(C)$. **Remark 56**. We note that by construction, for a given vertex in a cactus $C$, the $y$-coordinate of the corresponding vertex in $D(C)$ agrees with its canonical corner label. The constructed lattice paths lie in $D(\mu)$. More precisely, we have the following result. **Proposition 57**. *Given some cactus $C \in C_0(\mu,1^d)$, the lattice path $D(C)$ is a single Hurwitz Dyck path of type $\mu$.\ * *Proof.* We first show that for a given cactus $C$, the lattice path $D(C)$ lies in $D(\mu)$. To show that $D(C)$ is a Dyck path, we need to prove that it does not have vertices with negative $y$-coordinate. This follows from [Remark 56](#rem:cornerlabel){reference-type="ref" reference="rem:cornerlabel"}. The fact that it is a $b$-Dyck path follows from the fact that each black face in $C$ possesses exactly $b$ black boundary edges and precisely one internal edge, since the weights of the edges of a face sum to its degree in a galaxy. The Dyck path is $\ell(\mu)$ marked, since we have $\ell(\mu)+d-1$ colours in a galaxy (and thus in a cactus) of type $(0,\mu,(1^d))$.
These give rise to $\ell(\mu)+d-2$ many $4$-valent vertices and the unique marked vertex. Each black face has a unique internal edge in the cactus corresponding to a non-geodesic edge in the galaxy. As we have $d$ black faces, we have $\ell(\mu)-1$ many $4$-valent vertices in a galaxy without incoming non-geodesic edges. These correspond to the vertices in $C$ incident to two white faces. Thus, we obtain $\ell(\mu)-1$ pairs of markings. In addition, we have marked the beginning and end of the Dyck path. The fact that each element of the pair has the same $y$-coordinate follows from the fact that the canonical corner labelling is coherent. In particular, this implies conditions (1) and (2) of [Definition 28](#def:singlehurwitzdyck){reference-type="ref" reference="def:singlehurwitzdyck"}. By the same consideration, we obtain conditions (3) and (4). Conditions (5) and (6) follow by construction and the fact that the boundary of a cactus forms a single cyclic path. ◻ Next, we consider the opposite direction. **Construction 58**. Let $D \in D(\mu)$ be a single Hurwitz Dyck path. We construct a cactus from $D$. 1. We fix $d$ black polygons of degree $1$ and $\ell(\mu)$ white polygons of degrees $\mu_i$, labelled $1,\dots,d$ and $1,\dots,\ell(\mu)$ respectively. The white polygon labelled $i$ consists of $(b+1)\mu_i$ vertices coloured clockwise $0,\dots,b,0,\dots$. Similarly, the black polygon labelled $j$ consists of $b+1$ vertices coloured counterclockwise $0,\dots,b$. 2. Let $i$ be the label of $((0,0),(2bd,0))$ and consider the white polygon $F$ labelled $i$. 3. We choose a vertex $v$ of colour $0$ in $F$ to be the marked vertex. 4. We consider the first sequence of $b$ up-steps, connecting $(0,0)$ to $(b,b)$, and consider the corresponding distinguished vertex labelled $j$. We glue the edge connecting a vertex of colour $b$ to a vertex of colour $0$ of the black polygon labelled $j$ to the edge $w\to v$ in $F$ connecting a vertex of colour $b$ to $0$ as well.
Furthermore, we consider the glued edge to be an internal edge. 5. Let the next distinguished/marked vertex of $D$ occur after $k$ down-steps ($0 \leq k \leq b$). We have two possible cases: 1. this vertex is a marked vertex labelled $\tilde{i}$. In this case, move in a counterclockwise direction along $k$ edges of the polygon $F$, starting at the vertex $w$ and ending at some vertex $\tilde{w}$. Glue at this vertex a white face of degree $\mu_{\tilde{i}}$ (so that the colours of the glued vertices coincide). 2. this vertex is a distinguished vertex labelled $\tilde{j}$. As in step 4, we attach the black polygon labelled $\tilde{j}$ to the white face. The gluing is performed along the edge of the white face that is $k$ steps in the clockwise direction from the previous internal edge, with matching colours. 6. We now iteratively repeat step 5 by traversing along white faces for down-steps, gluing in white faces (and traversing along them) for marked vertices and gluing in black faces for distinguished vertices. Note that after meeting the second vertex of a marked pair, we then traverse along the already constructed corresponding white face. **Example 59**. In [9](#fig:dycktocactus){reference-type="ref" reference="fig:dycktocactus"}, we illustrate [Construction 58](#constr:dyckcacti){reference-type="ref" reference="constr:dyckcacti"} for the Dyck path in [\[fig:contourfunctionexample\]](#fig:contourfunctionexample){reference-type="ref" reference="fig:contourfunctionexample"}. We see that the first marked pair $((0,0),(18,0))$ has degree $2$. Thus, we start with a white polygon of degree $2$ with a fixed marked vertex.\ Traversing along the Dyck path, we first meet a distinguished vertex at height $0$, which corresponds to gluing in the black polygon in the second step. We encounter the next distinguished vertex at height $2$. Thus, this corresponds to the gluing of the black polygon in the third step.
Traversing further, we encounter the first vertex of a marked pair at height $3$, which gives rise to the gluing of a white face in the fourth step. The third distinguished vertex at height $1$ finally yields the gluing in the last step. Traversing further along the Dyck path confirms that this is the entire cactus, and we end up again at the marked vertex. ![We build a cactus iteratively for the Dyck path in [\[fig:contourfunctionexample\]](#fig:contourfunctionexample){reference-type="ref" reference="fig:contourfunctionexample"}.](dycktocactus.pdf){#fig:dycktocactus width="0.95\\linewidth"} The following proposition follows by construction. **Proposition 60**. *Given a single Hurwitz Dyck path $D\in D(\mu)$, the cactus $\mathcal{C}(D)$ lies in $C_0(\mu,(1^d))$. In particular, the map $$\begin{aligned} D\colon C_0(\mu,1^d)&\to D(\mu)\\ C&\mapsto D(C) \end{aligned}$$ is a bijection.* **Remark 61**. We note that traversing along the Dyck path corresponds to traversing along the boundary of the cactus. The vertices of the Dyck path that are identified in the cactus are exactly those that are connected by a horizontal line that lies weakly below the Dyck path. Next, we describe how to construct Hurwitz mobiles from cacti. Recall that given a cactus $C\in C_0(\mu,\nu)$, its internal edges correspond to non-geodesic edges in its corresponding Hurwitz galaxy and the vertices incident to two white faces correspond to the vertices in the galaxy that split. Thus, we can apply [Construction 52](#constr:galtomob){reference-type="ref" reference="constr:galtomob"} again to obtain a Hurwitz mobile $\Pi(C)$. The following theorem was proved in [@duchi2014bijections Proposition 7]. **Theorem 62**. *The map $$\begin{aligned} \Pi\colon C_0(\mu,(1^d))&\to\mathcal{M}(\mu,(1^d))\\ C&\mapsto [\Pi(C)] \end{aligned}$$ is a bijection.* We remark that the shift equivalence classes of the Hurwitz mobiles correspond to shifting the colour of each vertex in a cactus by $1$ mod $b+1$.
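The identification rule of Remark 61 lends itself to a direct computation: given the heights of the lattice points of a Dyck path, one can group the points into the classes that are identified in the cactus. The following Python sketch uses a height-list representation of our own choosing and ignores the markings and distinguished vertices of the path.

```python
def gluing_classes(heights):
    """Group the lattice points (indices into the height list) into
    the classes identified in the cactus: two points at equal height
    are identified when the horizontal segment between them lies
    weakly below the path, i.e. the path never dips below that height
    in between (cf. Remark 61). The relation is transitive on such
    points, so a single left-to-right pass of relabelling suffices."""
    n = len(heights)
    labels = list(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            if heights[i] == heights[j] and min(heights[i:j + 1]) >= heights[i]:
                labels[j] = labels[i]
    classes = {}
    for idx, rep in enumerate(labels):
        classes.setdefault(rep, []).append(idx)
    return sorted(classes.values())
```

For instance, on the path with heights $0,1,2,1,2,1,0$ the two points at height $2$ are not identified, since the path dips to height $1$ between them, while all three points at height $1$ are.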
It is immediate from the construction that [Construction 58](#constr:dyckcacti){reference-type="ref" reference="constr:dyckcacti"} can easily be adapted to construct Hurwitz mobiles from single Hurwitz Dyck paths. Finally, we discuss how Dyck paths encode the procedure that describes how to re-glue a cactus to a Hurwitz galaxy. **Definition 63**. Let $D\in D(\mu)$ be a single Hurwitz Dyck path. Let $v$ and $w$ be lattice points on $D$ with the same height. Let $H$ be the horizontal line connecting $v$ and $w$. We call $H$ a *gluing line* if 1. $v$ and $w$ are not valleys in $D$, 2. the only other vertices of $D$ on $H$ are valleys. We note that gluing lines are exactly those maximal horizontal lines that lie weakly below the Dyck path. The following lemma was essentially proved in the proof of [@duchi2014bijections Proposition 5]. **Lemma 64**. *Let $D\in D(\mu)$ be a single Hurwitz Dyck path and $C$ its corresponding cactus via [Construction 58](#constr:dyckcacti){reference-type="ref" reference="constr:dyckcacti"}. Moreover, let $G=\Gamma^{-1}(C)$ be the marked Hurwitz galaxy corresponding to $C$.\ Then, $G$ is obtained from $C$ by gluing those tuples of vertices that lie on a shared gluing line.* ## Pruned Hurwitz Dyck Paths and pruned Hurwitz mobiles {#sec:prunhurdyck} In this subsection, we express pruned single Hurwitz numbers in terms of *pruned Hurwitz mobiles*. We begin by deriving a classification of *pruned single Hurwitz Dyck paths*, where we call a single Hurwitz Dyck path $D$ pruned if it gives rise to a pruned Hurwitz galaxy. **Proposition 65** (Classification of pruned single Hurwitz Dyck paths).
*A single Hurwitz Dyck path $D$ of type $\mu$ is pruned if and only if the following conditions are satisfied:* - *(descent condition) any sequence of down-steps of the form $d^b$ that begins at a peak of $D$ must contain a vertex belonging to a marked pair.* - *(lowest distinguished vertex condition) either the distinguished vertex $v$ of lowest non-zero height in $D$ is not the second distinguished vertex of $D$ (i.e. the only distinguished vertex to the left of $v$ in $D$ is the origin) or if it is the second distinguished vertex of $D$, then there must be a vertex belonging to a marked pair to the left of $v$.* *Proof.* If the descent condition and the lowest distinguished vertex condition are satisfied, then clearly any black face carries at least two $4$-valent vertices in the corresponding galaxy.\ For the other direction, let $G$ be a pruned Hurwitz galaxy and $D(G)$ be its corresponding single Hurwitz Dyck path. Recall that any black face of a galaxy is a bubble if and only if it contains exactly one $4$-valent vertex. Therefore, any black face in $G$ contains at least two $4$-valent vertices. As outlined in [Construction 58](#constr:dyckcacti){reference-type="ref" reference="constr:dyckcacti"}, each sequence of $b$ up-steps starting at a distinguished vertex corresponds to a black face of the cactus $C$ and thus the galaxy $G$ that we started with. Moreover, the vertices of $D(G)$ corresponding to the $4$-valent vertices of $G$ are precisely the distinguished vertices (excluding the origin) and the marked pairs (excluding the pair $((0,0),(2bd,0))$). The vertices of $D$ corresponding to the marked vertex of $G$ are $(0,0)$ and $(2bd,0)$. By [Lemma 64](#lem:gluehor){reference-type="ref" reference="lem:gluehor"}, all vertices of $G$ are obtained by gluing lattice points on gluing lines of $D$.
If the descent condition is violated, then the preceding black face is not glued to a marked pair or an extra distinguished vertex and therefore only contains one $4$-valent vertex corresponding to the single distinguished vertex. If the lowest distinguished vertex condition is violated, then the black face in $G$ which contains the marked vertex contains just one $4$-valent vertex, since it is glued to no vertices belonging to marked pairs, and to a single distinguished vertex, namely the second one. In either case, we obtain a bubble, which is a contradiction, since $G$ was assumed to be pruned. ◻ Before we can state the main theorem of this section, we have to introduce several notions. **Definition 66**. Let $P$ be a black polygon in a single Hurwitz mobile $M$, and let $y$ be the weight $1$ edge incident to $P$. Furthermore, let $z$ be some other labelled edge of $M$. We define the white distance $d_{\circ}(y,z)$ (resp. black distance $d_{\bullet}(y,z)$) between $y$ and $z$ to be the number of white polygon arcs (resp. black polygon arcs) traversed in a counterclockwise path that starts at $y$, traverses along the black arc belonging to $P$, continues along $M$ and ends at $z$. **Definition 67**. Let $P$ be a black polygon in a single Hurwitz mobile $M$, and let $y$ be the weight $1$ edge incident to $P$. We say that the edge labelled $y$ is *interrupted* by some edge $z$ of $M$ if: - $d_{\circ}(y,z)<d_{\bullet}(y,z)$, or - $d_{\circ}(y,z)=d_{\bullet}(y,z)$ with $y<z$. **Example 68**. We consider the Hurwitz mobile depicted in [4](#fig:hurmobile){reference-type="ref" reference="fig:hurmobile"}. Then, we have e.g. $d_{\circ}(0,2)=d_{\circ}(2,1)=d_{\circ}(1,0)=1$ and $d_{\circ}(0,1)=2$. Moreover, we have $d_{\bullet}(0,2)=d_{\bullet}(2,1)=d_{\bullet}(1,0)=1$ and $d_{\bullet}(0,1)=2$.
Then, we see that the edge labelled $0$ is interrupted by the edges labelled $2$ and $1$ since $d_{\circ}(0,2)=d_{\bullet}(0,2)=1$ and $d_{\circ}(0,1)=d_{\bullet}(0,1)=2$ with $0<1,2$. On the other hand, the edges labelled $2$ and $1$ are not interrupted by the edges labelled $1$ and $0$ respectively since $2>1>0$. **Definition 69**. We call a single Hurwitz mobile $M$ pruned if it corresponds to a pruned Hurwitz galaxy. Moreover, we call $M$ in standard form if it is the element of its shift equivalence class that is the image of a marked Hurwitz galaxy under [Construction 52](#constr:galtomob){reference-type="ref" reference="constr:galtomob"}. **Theorem 70** (Classification of pruned single Hurwitz mobiles). *A single Hurwitz mobile $M$ in standard form of type $\mu$ is pruned if and only if each black polygon $P$ of $M$ satisfies either of the following two conditions:* 1. *The edge of weight $1$ that is incident to $P$ has label $y \neq 0$ and is interrupted by the next labelled edge $z$ in $M$ (i.e. the first edge reached in a counterclockwise path starting at $y$).* 2. *The edge of weight $1$ that is incident to $P$ has label $0$ and:* - *working counter-clockwise from the edge $0$, the next labelled edge in the mobile is a non-weighted edge, or* - *the next labelled edge in the mobile (again working counter-clockwise from the edge $0$) is a weight $1$ edge of label $y$ and there exists some edge of label $z \neq 0$ in $M$ that does not interrupt $y$.* *Proof.* Let $M$ be a Hurwitz mobile of type $(\mu,1^d)$, $G$ the corresponding Hurwitz galaxy and $D$ the respective Dyck path. We show that $M$ satisfies the first condition if and only if $D$ satisfies the descent condition. Moreover, we prove that $M$ satisfies the second condition if and only if $D$ satisfies the lowest distinguished vertex condition. Then, the theorem follows.\ Let $M$ satisfy the first condition. 
Let $P$ be a black polygon in $M$ and $\tilde{u}$ be the corresponding sequence of $b$ up-steps in $D$. Let $\tilde{d}$ be the set of $b$ down-steps in $D$ that is glued to $\tilde{u}$ along horizontal lines. Then, the last lattice point of $\tilde{d}$ has label $y\,\mathrm{mod}\,(b+1)$. If $\tilde{d}$ is not a connected sequence, then there is nothing to prove. If it is a connected sequence, then we see that it must contain a vertex belonging to a marked pair. This is due to the fact that the number of down-steps traversed before the next distinguished vertex or marked pair is precisely $(b+1)\cdot d_{\circ}(y,z)+y-z-d_{\bullet}(y,z)$. - If $d_{\circ}(y,z)=0$, then we have $y>z$ since the two edges have to live on the same node. Then, we have that $(b+1)\cdot d_{\circ}(y,z)+y-z-d_{\bullet}(y,z)\le y-z\leq b$. We cannot have $y-z=b$ since $4$-valent vertices must have distinct labels in galaxies. - If $d_{\circ}(y,z)=1$ but $z>y$, we have $(b+1)\cdot d_{\circ}(y,z)+y-z-d_{\bullet}(y,z)= b+1+y-z-d_{\bullet}(y,z)$. Since the edge $y$ is incident to a black face, we have $d_{\bullet}(y,z)\ge1$ (in fact it is equal to $1$, since $z$ is the first edge interrupting $y$). Therefore, we obtain with $z>y$ that $b+1+y-z-d_{\bullet}(y,z)<b$. Note that $d_{\circ}(y,z)$ cannot be greater than $1$ as $d_{\bullet}(y,z)=1$ and $y$ is necessarily interrupted by $z$. Thus, in the two possible cases, the next distinguished vertex or vertex belonging to a marked pair is part of the $\tilde{d}$ down-steps. Finally, we note that any sequence of down-steps as considered in the descent condition arises this way. This proves one direction. The reverse direction is obtained by working the same steps backwards.\ Now, let $M$ satisfy the second condition. Moreover, let $P$ be the black polygon that is incident to the edge of label $0$. Let $v$ be the lowest distinguished vertex in $D$ corresponding to the edge of label $z$.
If $v$ is not the second distinguished vertex in $D$, there is nothing to prove. If it is the second distinguished vertex, then we prove that $M$ has to satisfy case (a), which immediately proves the assertion that there is a marked pair to the left of $v$. Therefore, assume that $v$ is the second distinguished vertex in $D$ and that $M$ satisfies case (b) of the second condition.\ The edge of label $y$ in case (b) corresponds to the vertex $v$. Let $z$ be an edge that does not interrupt $y$ and let $w$ be the corresponding vertex. We note that $w$ is to the right of $v$. Considering the lattice path on $D$ from $v$ to $w$, we define $D_{\textrm{up}}(w,v)$ to be the number of up-steps on this path and $D_{\textrm{down}}(w,v)$ to be the number of down-steps. If $w$ is a distinguished vertex, then clearly we have $$\label{equ:inequ} D_{\textrm{up}}(w,v)-D_{\textrm{down}}(w,v)>0.$$ If $w$ is a marked vertex, its $y$-coordinate can only be smaller than the $y$-coordinate of $v$ if it comes from a pair with one lattice point on the first sequence of up-steps of $D$ and the other on the last sequence of down-steps. This is, however, not possible, since $y$ is the first labelled edge after $0$. Thus, we again have [\[equ:inequ\]](#equ:inequ){reference-type="ref" reference="equ:inequ"}. Moreover, we observe that $$D_{\textrm{down}}(w,v)=(b+1)\cdot d_{\circ}(y,z)+y-z-d_{\bullet}(y,z)\quad\textrm{and}\quad D_{\textrm{up}}(w,v)=b\cdot d_{\bullet}(y,z).$$ Therefore, in total, we obtain $$0<D_{\textrm{up}}(w,v)-D_{\textrm{down}}(w,v)=(b+1)(d_{\bullet}(y,z)-d_{\circ}(y,z))+z-y.$$ However, since $z,y\neq0$, we have that $|z-y|<b$. Therefore, we either have $d_{\bullet}(y,z)>d_{\circ}(y,z)$ or $d_{\bullet}(y,z)=d_{\circ}(y,z)$ and $y<z$. In both cases, we obtain that $z$ does interrupt $y$, a contradiction. Therefore, we are in case (a). This proves that the second condition in the theorem implies the lowest distinguished vertex condition.
Again working backwards, we obtain the inverse implication, which completes the proof. ◻ # Tropical Hurwitz covers {#sec:trop} The idea of connecting Hurwitz numbers to tropical geometry, interpreting double Hurwitz numbers as the weighted count of the constructed tropical graphs, was introduced in [@cavalieri2010tropical]. This tropical interpretation proved fruitful in producing polynomiality results; in particular, it was used in the proof of wall-crossing formulae for double Hurwitz numbers in genus $0$. ## Tropical graphs {#sec:tropgra} Tropical geometry can be considered as a "combinatorial shadow" of algebraic geometry, where piecewise linear objects called tropical graphs can be obtained as skeletons of degenerate algebraic curves. Though this tropicalisation procedure loses information, many properties of the algebraic curves can still be determined from their corresponding tropical curves. To begin, we define the edges of a tropical graph as follows. **Definition 71**. Let $\Gamma$ be a connected graph. We say that an edge is an *end* if it is adjacent to a $1$-valent vertex. Edges that are not ends are called *bounded edges*. We use $V(\Gamma)$ to denote the vertex set of the graph $\Gamma$. Furthermore, we denote the set of $1$-valent vertices (i.e. leaves) by $V_\infty(\Gamma)$, whereas we denote by $V_0(\Gamma)$ the set of vertices with a valency greater than $1$, called inner vertices. Moreover, we use $E(\Gamma)$ to denote the edge set of the graph $\Gamma$, $E_\infty(\Gamma)$ to denote the subset of ends, and $E_0(\Gamma)$ to denote the subset of bounded edges. Thus, we may define a tropical curve as follows. **Definition 72**. An *abstract tropical curve* is a connected graph $\Gamma$ (with $E(\Gamma) \neq \emptyset$) such that 1. $\Gamma \backslash V_\infty (\Gamma)$ is a metric graph.
That is, $\Gamma$ is equipped with a map $$l : E(\Gamma) \rightarrow \mathbb{R} \cup \{\infty \}$$ $$e \mapsto l(e)$$ such that $l(E(\Gamma) \backslash E_\infty(\Gamma) ) \subset \mathbb{R}$ and all ends $e \in E_\infty (\Gamma)$ have length $l(e) = \infty$, 2. the inner vertices $v\in V_0(\Gamma)$ have non-negative integer weights. Namely, the graph $\Gamma$ is further equipped with a map $$g: V_0(\Gamma) \rightarrow \mathbb{N}$$ $$v \mapsto g_v.$$ If we consider a vertex $v\in V_0(\Gamma)$, we may define the integer $g_v$ to be the *genus of* $v$. The genus of the curve $\Gamma$ is defined to satisfy $$g(\Gamma) = \beta_1(\Gamma) + \sum_{v \in V_0(\Gamma)} g_v,$$ where $\beta_1(\Gamma)$ is the first Betti number of $\Gamma$. In our analysis, we only consider so-called "explicit" tropical curves $\Gamma$, which have $g_v = 0$ for every inner vertex. That is, the genus of our graphs satisfies $$g (\Gamma) = \beta_1(\Gamma).$$ **Definition 73**. The *combinatorial type* of a tropical curve is the equivalence class of tropical curves where any two curves are equivalent if they differ only by the lengths of their edges. One tropical curve that is important in our story is the tropical projective line. **Example 74**. We denote by $\mathbb{T}\mathbb{P}^{1}(\mathbb{C})$ the tropical $\mathbb{P}^{1}(\mathbb{C})$, and define this as $\mathbb{T}\mathbb{P}^{1}(\mathbb{C}) = \mathbb{R} \cup \{ \pm \infty \}$. Furthermore, we may construct vertices on this curve by picking a finite number of points $p_i \in \mathbb{R}$ and creating a vertex at each point. We define the length of an edge joining two inner vertices to be the absolute distance between the corresponding points in $\mathbb{R}$, and we define the length of the ends to be $\infty$. The genus of the tropical projective line is $g(\mathbb{T}\mathbb{P}^{1}(\mathbb{C})) = 0$, as expected; [10](#fig:tropproj){reference-type="ref" reference="fig:tropproj"} shows that this graph is a tree.
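As a quick sanity check of these genus formulae, the first Betti number of a small connected graph, and with it the genus of an abstract tropical curve, can be computed directly. This is a minimal Python sketch; the representation of a graph by a vertex count and an edge list is a choice of our own, and only connected graphs are considered.

```python
def first_betti(num_vertices, edges):
    """First Betti number of a connected graph: |E| - |V| + 1."""
    return len(edges) - num_vertices + 1

def genus(num_vertices, edges, vertex_genera):
    """Genus of an abstract tropical curve:
    g(Gamma) = beta_1(Gamma) + sum of the vertex genera g_v.
    For "explicit" curves all g_v vanish, so g = beta_1."""
    return first_betti(num_vertices, edges) + sum(vertex_genera)

# a path graph (a tree) has first Betti number 0, a cycle has 1
tree_genus = genus(3, [(0, 1), (1, 2)], [0, 0, 0])
```

In particular, for the tropical projective line (a tree with all vertex genera zero) this recovers genus $0$, as in the example above.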
![The tropical projective curve.](Figures/tropproj.pdf){#fig:tropproj width="\\linewidth"} **Remark 75**. The above example is slightly "cheating": it is a tropical curve with two $1$-valent vertices, while all other vertices are $2$-valent. ## Tropical covers {#sec:tropcov} There are many equivalent tropicalisations of Hurwitz numbers, including working via the symmetric group. In tropical geometry, algebraic curves tropicalise to combinatorial graphs. Therefore, maps between Riemann surfaces *should* tropicalise to maps between graphs. **Definition 76**. A *tropical cover* of two tropical curves $\Gamma_1, \Gamma_2$ is a surjective map $f: \Gamma_1 \rightarrow \Gamma_2$ which satisfies the following conditions: 1. (Locally integer affine linear) For any edge $e\in E(\Gamma_1)$, $f(e)$ is contained either in an edge of $\Gamma_2$ or in an inner vertex of $\Gamma_2$. Moreover, the map $f$ is piecewise integer affine linear. Namely, on an edge $e$ the map is locally a dilation by a positive integer $w(e)$, called the weight of the edge, so that $$l(f(e)) = w(e) l(e).$$ 2. (Balancing) For an inner vertex $v\in V_0(\Gamma_1)$, the local degree of $f$ at $v$ (denoted by $d_v$) is defined as follows: consider an edge $e_2$ adjacent to $f(v)$ in $\Gamma_2$, and sum the weights of all edges $e_1$ adjacent to $v$ in $\Gamma_1$ that map to $e_2$: $$d_v = \sum_{e_1 \mapsto e_2} w(e_1).$$ The balancing condition thus ensures that the local degree of $f$ at $v$ is well defined, and independent of the choice of $e_2$. In our case, we choose the vertex sets $V(\Gamma_1)$ and $V(\Gamma_2)$ in such a way that the tropical covers considered above are maps between graphs. In particular, the images and preimages of vertices are vertices. **Definition 77**.
A tropical cover between curves $\Gamma_1$ and $\Gamma_2$ is called a *tropical Hurwitz cover* if it satisfies the *local Riemann--Hurwitz condition* at every vertex $v\in \Gamma_1$, meaning that if $v \mapsto v_2$ with local degree $d_v$, we have $$\label{local RH condition eq} 0 \leq d_v(2-2g(v_2)) - \sum_{e \text{ adj. to }v} \Big(w(e) -1 \Big) - (2-2g(v)).$$ The right-hand side of this inequality can be thought of as a measure of the ramification at the vertex $v$, which cannot be negative. When considering explicit tropical curves, this expression simplifies to $$\label{local RH condition eq explicit} 0 \leq 2d_v - \sum_{e \text{ adj. to }v} \Big(w(e) -1 \Big) - 2.$$ **Definition 78**. The *degree* $d$ of a tropical cover is the sum over all local degrees of preimages of a point $y\in \Gamma_2$. That is, we consider all points $x\in \Gamma_1$ such that $x\mapsto y$ and obtain $$d = \sum_{x\mapsto y} d_x.$$ By the balancing condition ([Definition 76](#tropical cover def){reference-type="ref" reference="tropical cover def"} (2)), the degree of the tropical cover is independent of the choice of $y\in \Gamma_2$ that we take. Let us consider an end $e\in E_\infty(\Gamma_2)$, and let $\mu_e \vdash d$ be the partition obtained from the weights $w(e_1)$ of the ends $e_1\in E_\infty(\Gamma_1)$ which are mapped by $f$ onto $e$. **Definition 79**. The partition $\mu_e$ is called the *ramification profile* above $e$. **Definition 80**. We define the *combinatorial type of a tropical cover* to be the equivalence class of covers when we drop all metric information about the curves. Namely, it consists of the combinatorial types of the tropical curves $\Gamma_1$ and $\Gamma_2$, the weights of the map, and the information of which edges map to which. Given a tropical cover $f:\Gamma_1 \rightarrow \Gamma_2$, the combinatorial type of the cover, and the lengths of $\Gamma_2$, it is possible to recover the metric on $\Gamma_1$.
Indeed, let us consider any edge $e_1$ of $\Gamma_1$ with weight $w(e_1)$ and unknown length $l(e_1)$. If $f(e_1)=e_2$, then we have $l(e_2) = w(e_1)\cdot l(e_1)$ by [Definition 76](#tropical cover def){reference-type="ref" reference="tropical cover def"} $(1)$. **Remark 81**. It turns out that the local Riemann--Hurwitz condition is in fact a *realizability* condition. Namely, only tropical covers satisfying [\[local RH condition eq\]](#local RH condition eq){reference-type="ref" reference="local RH condition eq"} at every vertex can be degenerations of covers of algebraic curves. Now, we wish to define morphisms of tropical curves. **Definition 82**. Tropical covers $f_1: \Gamma_1 \rightarrow \tilde{\Gamma}$ and $f_2 : \Gamma_2 \rightarrow \tilde{\Gamma}$ are called *isomorphic* if there exists an isomorphism $\phi: \Gamma_1 \rightarrow \Gamma_2$ of the underlying tropical curves $\Gamma_1, \Gamma_2$ such that $f_2 \circ \phi = f_1$. In particular, we identify curves for which the following diagram commutes. $$\begin{tikzcd}%[column sep=small] \Gamma_1 \arrow[rd, "f_1" ' near start] \arrow[rr, "\phi" ] & & \Gamma_2 \arrow[ld, "f_2" near start] \\ & \tilde{\Gamma} & \end{tikzcd}$$ Isomorphism of covers is an equivalence relation, allowing us to consider representatives of isomorphism classes of covers. In our discussion, this amounts to considering covers as equivalent whenever the maps send the same vertices in $\Gamma_1$ to the same vertices in $\tilde{\Gamma}$, though when edges $e_i$ adjacent to a vertex $v$ in $\Gamma_1$ are mapped to an edge $\tilde{e}$ in $\tilde{\Gamma}$, the weights may be permuted among the edges $e_i$. There are some shapes of special interest that appear in our tropical monodromy graphs, as they result in automorphisms of the tropical graphs. ![Balanced left pointing fork, balanced right pointing fork, and balanced wiener.](Figures/forksnwieners1.pdf){#fig:forks n weiners width="\\linewidth"} **Definition 83**.
A *balanced left-pointing fork* (resp. *right-pointing*) is a tripod with weights $w, w, 2w$ such that the edges of weight $w$ lie over $0$ (resp. $b+1$) (see [11](#fig:forks n weiners){reference-type="ref" reference="fig:forks n weiners"}). **Definition 84**. A *balanced wiener* appears in the graph when a strand of weight $2w$ splits into two strands of weight $w$, and then rejoins into a strand of weight $2w$ (see [11](#fig:forks n weiners){reference-type="ref" reference="fig:forks n weiners"}). ## Tropical double Hurwitz numbers {#sec:tropdhn} The tropical monodromy graphs of [@cavalieri2010tropical] arise using an analysis of cut-and-join procedures in the symmetric group $S_d$. Fixing integers $g\geq 0$ and $d>0$, taking $\mu, \nu$ to be partitions of $d$, and taking $b=2g-2+\ell(\mu)+\ell(\nu)>0$, they are constructed as follows. **Definition 85**. *Monodromy graphs of type* $(g, \mu, \nu)$ are tropical graphs with a map projecting to the segment $[0 , b+1] \subset \mathbb{T}\mathbb{P}^{1}(\mathbb{C})$, constructed as follows: 1. Begin with $m$ strands over $0$ that are labelled by $\mu_1, \ldots, \mu_m$. These $\mu_i$ are called the weights of their respective strands. 2. Create a $3$-valent vertex over the point $1$ by either joining two strands or cutting one strand that has weight strictly greater than $1$. - For a join, the new strand is weighted with the sum of the weights of the strands joined, - For a cut, the new strands are weighted in all possible positive ways that sum to the weight of the cut strand. 3. Consider one representative for each isomorphism class of tropical covers (as in [Definition 82](#def isomorphic covers){reference-type="ref" reference="def isomorphic covers"}). 4. Repeat steps $2, 3$ above for each consecutive integer up to $b$. 5. Consider all connected graphs that conclude over $b+1$ with $n$ strands of weights $\nu_1, \ldots, \nu_n$. **Remark 86**.
These tropical monodromy graphs should be treated as abstract graphs with weighted edges, mapping to the segment $[0, b+1]$ of the tropical projective line. That is to say, the relative positions of the strands are inconsequential, and there are no crossings between the strands. **Remark 87**. These monodromy graphs can be considered as graphs with half edges. The vertex set $V(G)$ of a monodromy graph $G$ consists of the $b$ many $3$-valent vertices, the edge set $E(G)$ consists of the inner edges between these $b$ vertices, and we can consider the set of half edges $E^\prime$ to consist of the ends, i.e. the unbounded rays over $(-\infty, 1)$ and $(b+1, \infty)$ labelled with the parts of $\mu$ and $\nu$ respectively. Let us illustrate this construction with an example. ![Monodromy graphs of type $(1, (5), (4, 1) )$.](Figures/monogrex.pdf){#fig:monogrex2 width="\\linewidth"} **Example 88**. We consider monodromy graphs of type $(1, (5), (4, 1) )$. That is, we construct connected graphs with $1$ strand of weight $5$ above $0$, and with $2$ strands of weights $1$ and $4$ above $4$. Furthermore, these graphs have $b=2-2+1+2=3$ many $3$-valent vertices over the points $1, 2$, and $3$ where strands may be joined, or cut into two strands of positive weight. The graphs of this form are depicted in [12](#fig:monogrex2){reference-type="ref" reference="fig:monogrex2"}. We note that the implicit map for these graphs is the projection to the segment $[0, 4] \subset \mathbb{T}\mathbb{P}^{1}(\mathbb{C})$. We further note that cover $B$ is the only cover admitting a nontrivial automorphism, arising from a balanced wiener (as defined in [Definition 84](#def wiener){reference-type="ref" reference="def wiener"}). Using these tropical monodromy graphs, we may define tropical double Hurwitz numbers as follows. **Definition 89**. Let $g\geq 0, d>0$ be integers, $\mu, \nu$ partitions of $d$, and $b=2g-2+\ell(\mu) +\ell(\nu) >0$.
The *tropical double Hurwitz number* $H_g^{\text{trop}}(\mu, \nu)$ is the weighted sum over monodromy graphs $\Gamma$ of type $(g, \mu, \nu)$ given by the formula $$H_g^{\text{trop}}(\mu, \nu) = \sum_{\Gamma} \frac{|\mathop{\mathrm{Aut}}(\mu)| |\mathop{\mathrm{Aut}}(\nu)|}{|\mathop{\mathrm{Aut}}(\Gamma )|} \prod_{e \text{ inner edge}} w(e)$$ taking the product over the inner edge weights of the monodromy graph $\Gamma$, where the size of the automorphism group of $\Gamma$ is given by a factor of $2$ for each balanced fork and each balanced wiener. Using these tropical double Hurwitz numbers we may enumerate classical double Hurwitz numbers as a weighted sum of tropical covers using the following tropical correspondence theorem. **Theorem 90**. *([@cavalieri2010tropical Section 4]) Fix integers $d>0$ and $g\geq 0$, and let $\mu$, $\nu$ be partitions of $d$. Then, the classical count of the double Hurwitz number is equal to the tropical count, i.e. $$H_g(\mu, \nu) = H_g^{\text{trop}}(\mu, \nu).$$* We give a demonstration of this count as follows. **Example 91**. We consider the monodromy graphs of type $(1, (5), (4, 1))$ constructed in [Example 88](#ex monogr2){reference-type="ref" reference="ex monogr2"} (which can be seen in [12](#fig:monogrex2){reference-type="ref" reference="fig:monogrex2"}). Let us note the following information: - The graph $B$ contains a balanced wiener, meaning that $|\mathop{\mathrm{Aut}}(B)| = 2$. - None of the other graphs contain balanced forks or balanced wieners, so $|\mathop{\mathrm{Aut}}(\Gamma)| = 1$ for $\Gamma = A, C, D, E, F, G$. - The partitions $\mu = (5),\ \nu = (4, 1)$ admit no nontrivial automorphisms, giving $|\mathop{\mathrm{Aut}}(\mu)| = 1 = |\mathop{\mathrm{Aut}}(\nu)|$.

  Graph $\Gamma$   $\frac{|\mathop{\mathrm{Aut}}(\mu)||\mathop{\mathrm{Aut}}(\nu)|}{|\mathop{\mathrm{Aut}}(\Gamma )|}$   $\prod_{e \text{ inner edge}} w(e)$   Total
  ---------------- ----------------------------------------------------------------------------------------------------- ------------------------------------- -------
  [A]{.roman}      $1$                                                                                                   $4 \cdot 5$                           $20$
  [B]{.roman}      $1/2$                                                                                                 $4\cdot 2 \cdot 2$                    $8$
  [C]{.roman}      $1$                                                                                                   $4\cdot 3$                            $12$
  [D]{.roman}      $1$                                                                                                   $4\cdot 3$                            $12$
  [E]{.roman}      $1$                                                                                                   $2\cdot 3 \cdot 5$                    $30$
  [F]{.roman}      $1$                                                                                                   $3\cdot 2 \cdot 2$                    $12$
  [G]{.roman}      $1$                                                                                                   $2\cdot 3$                            $6$

  : [\[table2DHN\]]{#table2DHN label="table2DHN"}The weight for each monodromy graph of type $(1, (5), (4, 1))$.

By [Definition 89](#deftropdhn){reference-type="ref" reference="deftropdhn"} and [Theorem 90](#trop corr thm for DHN){reference-type="ref" reference="trop corr thm for DHN"}, $H_1((5), (4,1))$ is found by summing over the total column in [1](#table2DHN){reference-type="ref" reference="table2DHN"} to yield $$H_1^{\textrm{trop}} ((5), (4,1)) = 100 = H_1((5), (4, 1)).$$

# Tropical pruned Hurwitz numbers {#sec:troppru}

In this section we construct a new interpretation of pruned double Hurwitz numbers $PH_g(\mu, \nu)$ using tropical covers via the cut-and-join recursion for $PH_g(\mu, \nu)$. Based on this new interpretation, we study the polynomial structure of pruned double Hurwitz numbers from a tropical perspective in [6.3](#sec:polytrop){reference-type="ref" reference="sec:polytrop"}.

## Pruned monodromy graphs

To state the aforementioned recursion, we first fix $g\geq 0$ and $d>0$, consider $\mu, \nu$ partitions of $d$, and write $b = 2g-2 + \ell(\mu) + \ell(\nu)$. **Theorem 92**.
*([@zbMATH06791415 Theorem 24]) For $b >0$ and $(g, \ell(\nu))\notin \{ (0, 1), (0, 2)\}$, the pruned double Hurwitz number $PH_g(\mu, \nu)$ satisfies the following recursion $$\begin{gathered} PH_g(\mu, \nu) = \sum_{i<j} \sum_{I \subset \{1, \ldots, \ell(\mu)\} } \ \ \sum_{\mathclap{\substack{\alpha + |\mu_{I^c}| \\ = \nu_i + \nu_j } }} \ \alpha \cdot \frac{(b-1)!}{(b-(|I^c|+1))!} \cdot (|I^c| +1)! \cdot \prod_{s\in \mu_{I^c}} s \cdot PH_g(\mu_I, (\nu \backslash\{\nu_i, \nu_j\}, \alpha) ) \\ + \frac{1}{2} \sum_{i=1}^{\ell(\nu)} \sum_{I \subset \{1, \ldots, \ell(\mu)\} } \ \ \ \ \sum_{\mathclap{\substack{\alpha + \beta + |\mu_{I^c}| \\ = \nu_i } }} \ \ \alpha \cdot \beta \cdot \frac{(b-1)!}{(b-(|I^c|+1))!} \cdot (|I^c| +1) \cdot \prod_{s\in \mu_{I^c}} s \cdot PH_{g-1}(\mu_I, (\nu \backslash \{\nu_i \} , \alpha, \beta ) ) \\ + \frac{1}{2} \sum_{i=1}^{\ell(\nu)} \sum_{g_1 + g_2 = g}^{\text{stable}} \quad \sum_{\mathclap{\substack{\nu_{J_1} \coprod \nu_{J_2} = \\ \nu \backslash \{\nu_i\} } }} \qquad \sum_{\mathclap{\substack{I_1, I_2 \subset\\ \{ 1, \ldots, \ell(\mu ) \}\\ \text{disjoint} } }} \qquad \ \sum_{\mathclap{\substack{\alpha + \beta +\\ |\mu_{(I_1 \cup I_2)^c}|\\ = \nu_i } }} \quad \alpha \cdot \beta \cdot \frac{(b-1)!}{b_1! \cdot b_2!} \cdot (|(I_1 \cup I_2)^c| +1)! \cdot \\ \prod_{s\in \mu_{(I_1 \cup I_2)^c}} s \cdot PH_{g_1}(\mu_{I_1}, (\nu_{J_1}, \alpha)) \cdot PH_{g_2} (\mu_{I_2}, (\nu_{J_2}, \beta)) . \end{gathered}$$ where $b_1 = 2g_1 -2 + |I_1| + |J_1| +1 , \ b_2 = 2g_2 -2 + |I_2| + |J_2| +1$, and the "stable" terms in the sum mean that we exclude terms with $(g_i, |J_i|) \in \{ (0, 1), (0, 2)\}$.* Using this recursion we introduce a new "pruned" monodromy graph structure. These graphs may additionally feature coloured ends and $n$-valent vertices, which we formalise as follows. **Definition 93**. The set of coloured ends of a graph $\Gamma$ is a proper subset $CE(\Gamma) \subsetneq E_\infty(\Gamma)$ of the set of ends of the graph. **Definition 94**.
A regular edge $e$ of a graph $\Gamma$ is an edge (or end) that is not a coloured end. In particular, $e\in E(\Gamma) \backslash CE(\Gamma)$. When drawn, we represent these edges as expected: regular edges are drawn as plain black edges and coloured ends are drawn as red dashed lines in our figures. **Definition 95**. A regular $n$-valent vertex is a vertex that is joined by $n$ regular strands. We specify certain types of vertices as follows. **Definition 96**. Initial vertices are regular $n$-valent vertices joined by $n-2$ ends on the left, which split into two regular edges on the right. **Definition 97**. A secondary vertex is a regular $3$-valent vertex that is either: joined by two regular inner edges on the left that join into one regular edge on the right, or joined by one regular inner edge on the left that splits into two regular edges on the right. We construct our pruned monodromy graphs such that each strand of our graph must first be joined to an initial vertex. These initial vertices create new automorphism shapes in our graphs, which we call $k$-pronged forks. **Definition 98**. A $k$*-pronged fork* appears in a graph when $k$ strands of weight $\mu_1, \ldots, \mu_k$ lying over $0$ join at an initial vertex. We take the automorphism factor of a $k$-pronged fork to be $|\mathop{\mathrm{Aut}}(\mu_1, \ldots, \mu_k)|$, the order of the automorphism group of the partition $(\mu_1, \ldots, \mu_k)$. When constructing our new curves and covers, we wish to define a new equivalence relation between the tropical covers. In this instance, we do not care about the ordering of vertices of disjoint connected components before the components join together. We make this precise as follows. **Definition 99**.
We call two tropical covers $\pi_1 : \Gamma_1 \rightarrow \mathbb{T}\mathbb{P}^{1}(\mathbb{C}), \pi_2 : \Gamma_2 \rightarrow \mathbb{T}\mathbb{P}^{1}(\mathbb{C})$ of the tropical projective line *quasi-isomorphic* if: - the underlying graphs $\Gamma_1, \Gamma_2$ are isomorphic with isomorphism $\phi : \Gamma_1 \rightarrow \Gamma_2$, where coloured edges map to coloured edges, - the covers carry the same edge weights $w(e) = w(\phi(e))$ for an edge $e\in \Gamma_1$, - the covers carry the same vertex multiplicities $m(v) = m(\phi(v))$ for a vertex $v\in \Gamma_1$, - and furthermore the edge $\phi(e)$ carries the same direction as $e$. That is, if $e$ joins vertices above the points $i$ and $j$ with $i<j$, then $\phi(e)$ joins the images of these vertices, lying above points $i^\prime$ and $j^\prime$ with $i^\prime < j^\prime$, respectively. Intuitively, quasi-isomorphic covers are covers of graphs with the same structure and the same weights, though if we consider any partial graph from the left, vertices that originate in disjoint connected components may project to $\mathbb{T}\mathbb{P}^{1}(\mathbb{C})$ in a different order up until the components connect (see [13](#fig:quasi){reference-type="ref" reference="fig:quasi"}). ![Isomorphic graphs with quasi-isomorphic covers projecting to the segment $[0, 6]\subset \mathbb{T}\mathbb{P}^{1}(\mathbb{C})$.](Figures/quasi.pdf){#fig:quasi width="\\linewidth"} Now, we wish to formulate a tropical interpretation of pruned double Hurwitz numbers using the recursion as stated in [Theorem 92](#thm pruned recursion){reference-type="ref" reference="thm pruned recursion"}. **Construction 100**. Let $g\geq 0$ and $d> 0$ be integers, and let $\mu = (\mu_1, \ldots , \mu_m)$ and $\nu = (\nu_1, \ldots, \nu_n)$ be partitions of $d$. We assume that $b = 2g-2 + m + n > 0$ and $(g, \ell(\nu)) \notin \{ (0, 1), (0,2) \}$. We associate to this data a tropical graph $\Gamma$ and a map to the segment $[0, \tilde{b} + 1]$ as follows: 1.
We start with $m$ strands over $0$ that are labelled by $\mu_1, \ldots, \mu_m$, where $\mu_k$ is the weight of the respective strand. We choose a proper subset $I\subsetneq [m]$ of these strands to be coloured ends, keeping the remaining strands $\mu_j$, $j \in [m ] \backslash I$, as regular strands. 2. The recursion tells us the specific ways in which we may cut regular edges and join regular or coloured edges at any given vertex. Over the first vertex, we deal with the base case where $(g, \ell(\nu))$ is locally equal to $(0, 2)$. We consider the other base cases in [Remark 101](#rem base case prunedtrop){reference-type="ref" reference="rem base case prunedtrop"}. 3. Over the point $1$ we construct a regular $n$-valent vertex, where $n\geq 3$. To do this, we join together $n-2$ regular edges that originate over $0$ to the vertex lying above $1$, which we then split into two outgoing regular edges on the right. We do not join any coloured edges to the vertex above $1$. This vertex is an initial vertex. 4. This process of creating initial vertices is repeated at each successive integer until every regular strand originating above $0$ has been joined to an $n$-valent vertex, where $n\geq 3$. In doing this, we only consider attaching regular strands from the left that are original regular edges $\mu_j,\ j\in [m]\backslash I$ originating above $0$. In particular, we do not join any inner edges or coloured edges of our graph to these initial vertices. 5. Upon attaching every regular strand $\mu_j, \ j\in [m]\backslash I$, to an initial vertex, we may begin to construct secondary vertices. We create $s = b - \ell(\mu)$ secondary vertices. Let us denote by $i$ the number of initial vertices of $\Gamma$ that we have constructed. Thus, we find that $\Tilde{b} = i+s$, so we are projecting to the segment $[0, i+s+1]$. 6.
Over subsequent vertices, the recursion tells us the manner in which we can construct these secondary vertices. That is, for the vertex above $i+1$ we may attach edges in two ways. Namely, we may create a regular $3$-valent vertex by either: - joining two regular strands at the vertex, with one new regular edge coming out of the vertex on the right, - or cutting one regular strand, which we split into two new regular edges exiting the vertex to the right. When we have appropriately constructed our $3$-valent secondary vertex, we may join a non-negative number of coloured ends $\mu_k$, $k\in I$, to this same vertex. 7. Consider one representative for each quasi-isomorphism class of labelled graphs. 8. We repeat steps $(6)$ and $(7)$ for each consecutive integer up to $\Tilde{b}$. 9. When we reach the vertex above $\tilde{b}$, we consider connected graphs that terminate with $\ell(\nu) = n$ edges above $\Tilde{b} +1$. We thus obtain a graph $\Gamma$ with a map projecting to the interval $[0, \Tilde{b}+1]$. We call this graph (along with the projection map) a pruned monodromy graph of type $(g, \mu, \nu)_{\text{P}}$. **Remark 101**. The base cases that are not covered by the construction are those with $$(g, \ell(\nu)) \in \{ (0, 1), (0, 2) \}.$$ We construct monodromy graphs of type $(g, \mu, \nu)_{\text{P}}$ in order to express the tropical pruned double Hurwitz number $PH^{\text{trop}}_g(\mu, \nu)$ as a weighted count of tropical graphs. However, we know by [Example 51](#base case pruned ex){reference-type="ref" reference="base case pruned ex"} that $PH_0((\mu_1, \ldots, \mu_m) , (\nu_1)) = 0$ for $m\in \{1,2, \ldots\}$. Thus, we represent these cases by the tropical graphs as illustrated in [14](#fig:basecaseprunes){reference-type="ref" reference="fig:basecaseprunes"} (a) and (b). When computing the weighted count of these graphs we prescribe these values to be exactly the classical pruned double Hurwitz count.
That is, we say that in graph (b) corresponding to $PH^{\text{trop}}_0( (\mu_1, \ldots, \mu_m) , (\nu_1) )$, the single vertex has multiplicity $m(v) = 0$. By convention, the pruned monodromy graph (a) on zero vertices has weight $0$. Moreover, we prescribe the multiplicity of the single vertex in graph (c) to be the pruned double Hurwitz number $PH_0((\mu_1, \ldots, \mu_m), (\nu_1, \nu_2))$. Furthermore, any graphs that result from those of [14](#fig:basecaseprunes){reference-type="ref" reference="fig:basecaseprunes"} (a) and (b) have a vertex multiplicity factor of $0$. Thus, in our construction we *could* have allowed the vertex above $1$ to be joined by two (or more) regular strands on the left that combine into one regular strand on the right, but this is essentially a redundant case -- when we take the weighted sum over these graphs, their weight is $0$ from the first vertex. ![The tropical picture for $PH_0( (\mu_1), (\nu_1))$, $PH_0((\mu_1, \ldots, \mu_m), (\nu_1))$, and $PH_0((\mu_1, \ldots, \mu_m), (\nu_1, \nu_2))$.](Figures/basecaseprunes.pdf){#fig:basecaseprunes width="\\linewidth"} **Definition 102**. A pruned monodromy graph of type $(g, \mu, \nu)_{\text{P}}$, with $b=2g - 2 +\ell(\mu) +\ell(\nu)$, is a graph $\Gamma$ projecting to the segment $[0, \tilde{b} +1]$ with the following properties: 1. The graph $\Gamma$ has $c$ many coloured edges, where $0 \leq c<\ell(\mu)$. 2. The graph $\Gamma$ has $s = b - \ell(\mu)$ many secondary vertices. Furthermore, if $\Gamma$ has $i$ initial vertices, then taking $\Tilde{b} = i+s$, $\Gamma$ projects to the segment $[0, s + i + 1]$. 3. The genus of $\Gamma$ is $g$. 4. We assign a weight to each edge $e$. This weight $w(e)$ is a positive integer. 5. At each inner vertex the sum of the weights of the incoming edges (both regular and coloured) equals the sum of the weights of the outgoing edges. 6. The weights of the $n$ strands above $\Tilde{b} + 1$ are $\nu_1, \ldots, \nu_n$.
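The numerical bookkeeping in this definition is straightforward to check mechanically. The following Python sketch (the helper names are our own, not from the text) verifies the count $s = b - \ell(\mu)$ of secondary vertices and the balancing property (5) at a sample inner vertex.

```python
# Sketch: sanity checks for Definition 102 (helper names are illustrative).

def num_secondary_vertices(g, mu, nu):
    """s = b - len(mu), where b = 2g - 2 + len(mu) + len(nu)."""
    b = 2 * g - 2 + len(mu) + len(nu)
    return b - len(mu)

def is_balanced(incoming, outgoing):
    """Property (5): incoming weights (regular and coloured) sum to outgoing weights."""
    return sum(incoming) == sum(outgoing)

# A pruned monodromy graph of type (2, (1,1,1), (3))_P has b = 6, hence 3 secondary vertices.
print(num_secondary_vertices(2, (1, 1, 1), (3,)))  # 3

# A cut vertex: a strand of weight 4 plus a coloured end of weight 1 joining,
# split into two outgoing strands of weights 2 and 3.
print(is_balanced([4, 1], [2, 3]))  # True
```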
We denote the set of pruned monodromy graphs of type $(g,\mu,\nu)\mathop{\mathrm{_{\text{P}}}}$ by $\mathfrak{TP}_g(\mu,\nu)$. ## Tropical pruned double Hurwitz numbers As in the case of tropical double Hurwitz numbers, we wish to define pruned double Hurwitz numbers as the weighted sum over tropical graphs. **Definition 103**. Tropical pruned double Hurwitz numbers $PH^{\text{trop}}_g (\mu, \nu)$ are the weighted sum of pruned monodromy graphs $\Gamma$ of type $(g, \mu, \nu)\mathop{\mathrm{_{\text{P}}}}$, $$PH^{\text{trop}}_g (\mu, \nu) = \sum_{\Gamma\in \mathfrak{TP}_g(\mu,\nu)} \frac{|\mathop{\mathrm{Aut}}(\mu)||\mathop{\mathrm{Aut}}(\nu)|}{|\mathop{\mathrm{Aut}}(\Gamma)|}\prod_{e \text{ inner edge}} w(e) \prod_{\hat{e} \text{ col. edge}} w(\hat{e}) \prod_{v \text{ vertex}} m(v)$$ where the size of the automorphism group of $\Gamma$ is given by a factor of $2$ for each balanced right-pointing fork and each balanced wiener, together with a factor of $|\mathop{\mathrm{Aut}}(\mu_1, \ldots, \mu_k)|$ for any $k$-pronged fork of weights $\mu_1, \ldots, \mu_k$. Furthermore, the product over the edge weights ranges over the inner edges in the first product and over the coloured edges in the second product. Moreover, the multiplicity of each vertex is prescribed as follows: - If $(g, \ell(\nu)) = (0, 1)$, then the graph looks like [14](#fig:basecaseprunes){reference-type="ref" reference="fig:basecaseprunes"} (a) or (b). In case (a), we define the product of the vertex multiplicities over zero vertices to be $0$. Likewise, if we consider the vertex $v$ in case (b), where we have a regular $n$-valent vertex with $n-1$ edges joining from the left into one edge on the right, we prescribe this to have multiplicity $$m(v)=0.$$ - The vertex multiplicity for an initial vertex $v$, i.e.
a regular $n$-valent vertex as depicted in [14](#fig:basecaseprunes){reference-type="ref" reference="fig:basecaseprunes"} (c), is the local pruned double Hurwitz number: $$m(v) = PH_0( (\mu_1, \ldots, \mu_{n-2}), (\nu_1, \nu_2)).$$ - For a given secondary vertex $v$, let us denote by $|c_v|$ the number of coloured ends joining $v$, and denote by $|\Tilde{c}|$ the number of coloured ends joined to vertices coming before $v$ in the connected component that $v$ lies in. Furthermore, we denote by $s$ the number of secondary vertices that come before $v$ in the connected component of the graph that $v$ lies in, while we denote by $v_i$ the initial vertices in the same connected component as $v$ that come before $v$. - If the secondary vertex $v$ consists of a regular strand being cut into two regular strands (possibly with $|c_v|$ coloured edges joining into $v$), then the multiplicity $m(v)$ of $v$ is $$m(v) = \frac{\Big( \sum_{v_i} \big(\mathop{\mathrm{val}}(v_i) -2\big) + s + |\Tilde{c}| +|c_v|\Big)!}{\Big(\sum_{v_i} \big(\mathop{\mathrm{val}}(v_i) -2\big) + s + |\Tilde{c}|\Big)!} \cdot \big( |c_v| +1\big)!.$$ In particular, $m(v)=1$ if no coloured edges join $v$. - If two regular strands from the same connected component of the graph before $v$ are joined together at the secondary vertex $v$, then the multiplicity of $v$ is $$m(v) = \frac{\Big(\sum_{v_i} \big(\mathop{\mathrm{val}}(v_i) -2\big) + s + |\Tilde{c}| +|c_v|\Big)!}{\Big(\sum_{v_i} \big(\mathop{\mathrm{val}}(v_i) -2\big) + s + |\Tilde{c}|\Big)!} \cdot \big(|c_v| +1\big).$$ As in the above case, $m(v)=1$ if no coloured edges join $v$. - If two regular strands from two disjoint connected components join at the inner vertex $v$, there are two cases that we distinguish between: *degenerate disconnected joins* and *non-degenerate disconnected joins*. - Degenerate disconnected joins are the cases in our recursion that do not qualify as "stable", which arise when $(g, \ell(\nu)) \in \{(0, 1), (0 ,2)\}$.
In particular, if one (or both) of the strands joining at $v$ comes from a connected component that is a graph of genus $g=0$ (i.e. a tree) up to this vertex $v$ *and* furthermore has only one choice or two choices of strands to join at vertex $v$, then $v$ has degenerate multiplicity: $$m(v) = 0.$$ - A non-degenerate disconnected join occurs in the case in which the strands joining at $v$ come from connected components $\Gamma_1, \Gamma_2$ that, up to this vertex, are not trees, and/or have more than three strands above $v$ that we may choose to join at $v$. In this case, we denote by $s_j$ and $|\tilde{c_j}|$ the number of secondary vertices and the number of coloured edges respectively in each connected component $\Gamma_j$ before the vertex $v$, $j\in \{1, 2\}$. Furthermore, we denote by $v_{i_j}$ the initial vertices in the connected component $\Gamma_j$, $j \in \{1, 2 \}$, coming before the vertex $v$. Then the multiplicity of $v$ is $$m(v) = \frac{\Big(\sum_{v_{i_1}} \big(\mathop{\mathrm{val}}(v_{i_1}) -2\big) + s_1 +\sum_{v_{i_2}} \big(\mathop{\mathrm{val}}(v_{i_2}) -2\big) + s_2 + |\Tilde{c_1}| + |\Tilde{c_2}| +|c_v|\Big)!}{\Big(\sum_{v_{i_1}} \big(\mathop{\mathrm{val}}(v_{i_1}) -2\big) + s_1 + |\Tilde{c_1}|\Big)!\cdot \Big(\sum_{v_{i_2}} \big(\mathop{\mathrm{val}}(v_{i_2}) -2\big) + s_2 + |\Tilde{c_2}|\Big)!} \cdot \big(|c_v| +1\big)!.$$ **Remark 104**. In [Remark 81](#realizability){reference-type="ref" reference="realizability"}, the "realisability" condition of tropical curves was considered. We may note that the pruned tropical graphs that we construct are not what we would call the "natural" tropicalisation. The covers that we establish are a purely combinatorial way of giving a geometric meaning to a combinatorial count. We now state our tropical correspondence theorem for pruned double Hurwitz numbers as follows. **Theorem 105**.
*(Correspondence theorem for tropical pruned double Hurwitz numbers) Let us fix integers $d>0$ and $g\geq 0$, and let $\mu, \nu$ be partitions of $d$. Then, the classical count of the pruned double Hurwitz numbers is equal to the tropical count. Namely, $$PH_g(\mu, \nu) = PH^{\text{trop}}_g(\mu, \nu).$$* *Proof.* We prove this correspondence by induction. To begin, we consider the base cases. By definition these are equal to their classical pruned double Hurwitz analogues. It remains to show that tropical pruned double Hurwitz numbers satisfy the same recursion as the classical count. In particular, let us assume that $$PH^{\text{trop}}_{g^\prime}(\mu^\prime, \nu^\prime) = PH_{g^\prime}(\mu^\prime, \nu^\prime)$$ for all $g^\prime, \mu^\prime, \nu^\prime$ such that $2g^\prime -2 + \ell(\mu^\prime) + \ell(\nu^\prime) = k \geq 1$. Now, we claim that $$PH^{\text{trop}}_g(\mu, \nu) = PH_g(\mu, \nu)$$ for $g, \mu, \nu$ such that $2g-2 + \ell(\mu) + \ell(\nu) = k+1$ (excluding the base cases $(g, \ell(\nu)) \in \{(0, 1), (0, 2)\}$). To show this equality, we first recall the recursion for pruned double Hurwitz numbers as in [Theorem 92](#thm pruned recursion){reference-type="ref" reference="thm pruned recursion"}. We split this into three distinct parts: $PH_g(\mu, \nu)^\text{cut}, PH_g(\mu, \nu)^\text{cj}, PH_g(\mu, \nu)^\text{dj}$. - Let us consider the "cut" case in the recursion, and denote by $PH_g(\mu, \nu)^\text{cut}$ the first sum: $$\begin{gathered} \label{pruned c recur} PH_g(\mu, \nu)^\text{cut} = \sum_{i<j} \sum_{I \subset \{1, \ldots, \ell(\mu)\} } \ \ \sum_{\mathclap{\substack{\alpha + |\mu_{I^c}| \\ = \nu_i + \nu_j } }} \ \alpha \cdot \frac{(b-1)!}{(b-(|I^c|+1))!} \cdot (|I^c| +1)! \cdot \prod_{s\in \mu_{I^c}} s \cdot PH_g(\mu_I, (\nu \backslash\{\nu_i, \nu_j\}, \alpha) ) .
\end{gathered}$$ - Let us consider the "connected join" case in the recursion, denoting by $PH_g(\mu, \nu)^\text{cj}$ the second sum: $$\begin{gathered} \label{pruned cj recur} PH_g(\mu, \nu)^\text{cj} = \frac{1}{2} \sum_{i=1}^{\ell(\nu)} \sum_{I \subset \{1, \ldots, \ell(\mu)\} } \ \ \ \ \sum_{\mathclap{\substack{\alpha + \beta + |\mu_{I^c}| \\ = \nu_i } }} \ \ \alpha \cdot \beta \cdot \frac{(b-1)!}{(b-(|I^c|+1))!} \cdot (|I^c| +1) \cdot \prod_{s\in \mu_{I^c}} s \cdot PH_{g-1}(\mu_I, (\nu \backslash \{\nu_i \} , \alpha, \beta ) ). \end{gathered}$$ - Let us consider the "disconnected join" case in the recursion, and denote the third sum by $PH_g(\mu, \nu)^\text{dj}$: $$\begin{gathered} \label{pruned dj recur} PH_g(\mu, \nu)^\text{dj} = \frac{1}{2} \sum_{i=1}^{\ell(\nu)} \sum_{g_1 + g_2 = g}^{\text{stable}} \quad \sum_{\mathclap{\substack{\nu_{J_1} \coprod \nu_{J_2} = \\ \nu \backslash \{\nu_i\} } }} \qquad \sum_{\mathclap{\substack{I_1, I_2 \subset\\ \{ 1, \ldots, \ell(\mu ) \}\\ \text{disjoint} } }} \qquad \ \sum_{\mathclap{\substack{\alpha + \beta +\\ |\mu_{(I_1 \cup I_2)^c}|\\ = \nu_i } }} \quad \alpha \cdot \beta \cdot \frac{(b-1)!}{b_1! \cdot b_2!} \cdot (|(I_1 \cup I_2)^c| +1)! \cdot \\ \prod_{s\in \mu_{(I_1 \cup I_2)^c}} s \cdot PH_{g_1}(\mu_{I_1}, (\nu_{J_1}, \alpha)) \cdot PH_{g_2} (\mu_{I_2}, (\nu_{J_2}, \beta)) . \end{gathered}$$ Putting this together, we have $PH_g(\mu, \nu) = PH_g(\mu, \nu)^{\text{cut}} + PH_g(\mu, \nu)^{\text{cj}} + PH_g(\mu, \nu)^{\text{dj}}$. Now, we can recall [Definition 103](#pruned weights){reference-type="ref" reference="pruned weights"}, where we took the weighted sum over pruned monodromy graphs $\Gamma$ of type $(g, \mu, \nu)\mathop{\mathrm{_{\text{P}}}}$ in order to calculate $PH^{\text{trop}}_g(\mu, \nu)$ for $2g-2+\ell(\mu)+\ell(\nu) = k+1$. We may split the set of monodromy graphs $\Gamma \in \mathfrak{TP}_g(\mu, \nu)$ into three distinct sets of graphs based on the type of vertex constructed over the final point.
That is, - We denote by $\Gamma^\text{cut}$ the set of those pruned monodromy graphs where one regular strand is cut into two over the final vertex, possibly with coloured edges joining this vertex. - Similarly, we denote by $\Gamma^{\text{cj}}$ the subset of these pruned monodromy graphs such that there is a connected join over the last vertex. That is, two regular strands from the same connected component join into one strand at the final vertex, possibly with coloured edges joining from the left. - Finally, we denote by $\Gamma^{\text{dj}}$ those pruned monodromy graphs that have a disconnected join over the final vertex, possibly with coloured edges joining from the left. In particular, we have that $\{\ \Gamma \ | \ \Gamma \text{ pruned monodromy graph of type } (g, \mu, \nu)\mathop{\mathrm{_{\text{P}}}}\} = \Gamma^\text{cut} \sqcup \Gamma^\text{cj} \sqcup \Gamma^\text{dj}$. Then, we may take $PH^{\text{trop}}_g(\mu, \nu)^i$ to be the weighted sum over the pruned monodromy graphs in $\Gamma^i$, where $i \in \{ \text{cut, cj, dj} \}$. Summing these three quantities gives us our total pruned count $PH^{\text{trop}}_g(\mu, \nu)$. To show that $PH^{\text{trop}}_g(\mu, \nu)^i = PH_g(\mu, \nu)^i$ for $i\in \{\textrm{cut, cj, dj}\}$ we consider the following analysis. - $PH^{\text{trop}}_g(\mu, \nu)^\text{cut}$ satisfies [\[pruned c recur\]](#pruned c recur){reference-type="ref" reference="pruned c recur"}. Indeed, each pruned monodromy graph in $\Gamma^\text{cut}$ is obtained from a pruned monodromy graph of type $(g, \mu_I, (\nu \backslash \{\nu_i, \nu_j\}, \alpha))$ by cutting a strand of weight $\alpha$ over the final vertex and joining any coloured edges if $\mu_I \neq \mu$. By the induction step $PH^{\text{trop}}_g(\mu_I, (\nu \backslash \{\nu_i, \nu_j\}, \alpha)) = PH_g(\mu_I, (\nu \backslash \{\nu_i, \nu_j\}, \alpha))$.
Now, $PH^{\text{trop}}_g(\mu, \nu)^\text{cut}$ is calculated by multiplying the new edge weights, the new vertex multiplicity, any new automorphism factors, and the weight of the old graph up to that point. By construction, our vertex multiplicity is exactly the combinatorial factor that features in the recursion. The tropical edge weights correspond to the weights $\alpha$ and $s$, for $s\in \mu_{I^c}$, in the recursion. Moreover, the automorphism factors of the graphs negate any overcounting of quasi-isomorphic covers, whereas the automorphism factors of the partitions give the multiplicity needed for labelled pruned double Hurwitz numbers. Thus, $PH^{\text{trop}}_g(\mu, \nu)^\text{cut} = PH_g(\mu, \nu)^\text{cut}$. - Similarly, each pruned monodromy graph in $\Gamma^\text{cj}$ arises by taking a pruned monodromy graph of type $(g-1, \mu_I, (\nu\backslash \{\nu_i\}, \alpha, \beta))$, joining regular strands of weight $\alpha, \beta$ over the final vertex, where $\alpha, \beta$ belong to the same connected component, and joining any coloured strands of weight $s$, $s\in \mu_{I^c}$, if $I^c \neq \emptyset$. Analogously, $PH^{\text{trop}}_g(\mu, \nu)^\text{cj}$ satisfies [\[pruned cj recur\]](#pruned cj recur){reference-type="ref" reference="pruned cj recur"}. Again, this follows by construction, with the combinatorial factor in the recursion appearing as the vertex multiplicity in our graphs, and with $PH_{g-1}( \mu_I, (\nu\backslash \{\nu_i\}, \alpha, \beta)) = PH^{\text{trop}}_{g-1}(\mu_I, (\nu\backslash \{\nu_i\}, \alpha, \beta))$ by the induction hypothesis. Furthermore, the automorphism factors count as described above by cancelling any overcounting of quasi-isomorphic graphs and ensuring the right count for labelled covers. Thus, $PH^{\text{trop}}_g(\mu, \nu)^\text{cj} = PH_g(\mu, \nu)^\text{cj}$.
- Finally, pruned monodromy graphs in $\Gamma^\text{dj}$ are formed by taking pruned monodromy graphs of type $(g_1, \mu_{I_1}, (\nu_{J_1}, \alpha)), (g_2, \mu_{I_2}, (\nu_{J_2}, \beta))$ and joining regular strands of weight $\alpha, \beta$ over the final vertex, along with any coloured edges. Then $PH^{\text{trop}}_g(\mu, \nu)^\text{dj}$ satisfies [\[pruned dj recur\]](#pruned dj recur){reference-type="ref" reference="pruned dj recur"}. Indeed, the weights and the vertex multiplicity are defined this way by construction, where we do not take into consideration the ordering of the vertices in the different connected components, as we consider quasi-isomorphism classes of graphs. We find that $PH^{\text{trop}}_g(\mu, \nu)^\text{dj} = PH_g(\mu, \nu)^\text{dj}$. Putting these three parts together, we find our desired equality $$PH^{\text{trop}}_g(\mu, \nu) = PH_g(\mu, \nu).$$ ◻ To end this subsection, we give an example computing a pruned double Hurwitz number using this correspondence theorem. ![All pruned monodromy graphs of type $(2, (1, 1, 1), (3))\mathop{\mathrm{_{\text{P}}}}$.](Figures/bigprunes2.pdf){#fig:bigprunes width="\\linewidth"} **Example 106**. Let us consider $PH^{\text{trop}}_2((1, 1, 1), (3))$. All pruned monodromy graphs of type $(2, (1, 1, 1), (3))\mathop{\mathrm{_{\text{P}}}}$ are shown in [15](#fig:bigprunes){reference-type="ref" reference="fig:bigprunes"}. The initial vertex multiplicities are the local pruned double Hurwitz numbers $$PH_0 ((1, 1, 1), (2, 1)) = 6, \qquad PH_0 ((1, 1), (1, 1)) = 2.$$ Furthermore, the vertex multiplicity of any secondary vertex $v$ to which no coloured edges join is $m(v)= 1$. Then, we find the weights of our graphs in [2](#TablePrunes){reference-type="ref" reference="TablePrunes"}.

  Graph $\Gamma$   $\frac{|\mathop{\mathrm{Aut}}(\mu)||\mathop{\mathrm{Aut}}(\nu)|}{|\mathop{\mathrm{Aut}}(\Gamma )|}$   $\prod_{e \text{ inner edge}} w(e)$   $\prod_{\hat{e} \text{ col. edge}} w(\hat{e})$   $\prod_{v \text{ vertex}} m(v)$   Total
  ---------------- ----------------------------------------------------------------------------------------------------- ------------------------------------- ------------------------------------------------ --------------------------------- -------
  [A]{.roman}      $6 \cdot (1/6)$                                                                                       $2\cdot 3\cdot 2$                     $1$                                              $6$                               $72$
  [B]{.roman}      $6 \cdot (1/6)$                                                                                       $2\cdot 2$                            $1$                                              $6$                               $24$
  [C]{.roman}      $6 \cdot (1/6)\cdot (1/2)$                                                                            $2\cdot 2$                            $1$                                              $6$                               $12$
  [D]{.roman}      $6 \cdot (1/2)$                                                                                       $2$                                   $1$                                              $2\cdot (3!/2!) \cdot 2!$         $72$
  [E]{.roman}      $6 \cdot (1/2)\cdot(1/2)$                                                                             $2$                                   $1$                                              $2\cdot (3!/2!) \cdot 2!$         $36$
  [F]{.roman}      $6 \cdot (1/2)\cdot (1/2)$                                                                            $3\cdot 2$                            $1$                                              $2\cdot (3!/2!)\cdot 2$           $108$
  [G]{.roman}      $6 \cdot (1/2)\cdot (1/2)$                                                                            $2\cdot 2$                            $1$                                              $2\cdot (4!/3!) \cdot 2!$         $96$
  [H]{.roman}      $6 \cdot (1/2) \cdot (1/2)\cdot (1/2)$                                                                $2$                                   $1$                                              $2\cdot (5!/4!) \cdot 2$          $30$

  : [\[TablePrunes\]]{#TablePrunes label="TablePrunes"} The weight for each pruned monodromy graph of type $(2, (1, 1, 1), (3))\mathop{\mathrm{_{\text{P}}}}$.

Summing the total column gives us $$PH^{\text{trop}}_2((1, 1, 1), (3)) = 450 = PH_2((1, 1, 1), (3)).$$

## Polynomiality in genus $0$ {#sec:polytrop}

In this subsection, we study the polynomiality of pruned double Hurwitz numbers in genus $0$ from a tropical perspective. Based on this we take first steps towards wall-crossing formulae for pruned double Hurwitz numbers in genus $0$. We begin with the notion of the combinatorial type of a pruned monodromy graph. **Definition 107**. Let $\pi\colon\Gamma\to[0,\tilde{b}+1]$ be a pruned monodromy graph as in [Definition 102](#def:prunedmonod){reference-type="ref" reference="def:prunedmonod"}. The map $\pi$ induces a partial ordering $\mathcal{O}$ on the edge set of $\Gamma$. We call the tuple $(\Gamma,\mathcal{O})$, where we forget the weights of $\pi$ but retain the information in [Definition 102](#def:prunedmonod){reference-type="ref" reference="def:prunedmonod"}, the combinatorial type of $\pi$.
When there is no potential for confusion, we will denote the combinatorial type of $\pi$ by $\Gamma$. Recall the set $\mathfrak{TP}_g(\mu,\nu)$ of pruned monodromy graphs of type $(g,\mu,\nu)\mathop{\mathrm{_{\text{P}}}}$. For positive integers $m$ and $n$, we denote by $\mathfrak{TP}_g(m,n)$ the set of all tuples $(\Gamma,\mathcal{O})$ such that there exist partitions $\mu,\nu$ of lengths $m$ and $n$ for which $(\Gamma,\mathcal{O})$ is the combinatorial type of a pruned monodromy graph in $\mathfrak{TP}_g(\mu,\nu)$. Since by definition the number of vertices and edges of a pruned monodromy graph is bounded in terms of $g$, $m$, and $n$, the set $\mathfrak{TP}_g(m,n)$ is finite.\
We now fix $g=0$ for the rest of our discussion. Since in this case $\Gamma$ is a tree, fixing the weights $\mu_1,\dots,\mu_m$ and $\nu_1,\dots,\nu_n$ of the strands over $0$ and $\infty$ also determines the weights of all inner edges by the balancing condition of [Definition 102](#def:prunedmonod){reference-type="ref" reference="def:prunedmonod"} (5). By the arguments in [@cavalieri2010tropical Lemma 6.4], the weight of an edge is a linear polynomial in the $\mu_i$ and $\nu_j$. More precisely, for an edge $e$ of $\Gamma$ we have $$\omega(e)=\sum_{i\in I}\mu_i-\sum_{j\in J}\nu_j,$$ where $I\subset[m]$ and $J\subset[n]$ index the in- and out-strands of the component of $\Gamma\backslash \{e\}$ from which $e$ points away.\
Now, we observe that the vertex multiplicity of any secondary vertex in [Definition 103](#pruned weights){reference-type="ref" reference="pruned weights"} depends only on the combinatorial type of a pruned monodromy graph.
Moreover, the multiplicities of the primary vertices are exactly the pruned double Hurwitz numbers considered in the third case of [Example 51](#base case pruned ex){reference-type="ref" reference="base case pruned ex"}, which were observed to be piecewise polynomial with respect to the resonance arrangement.\
We note, however, that here we need to introduce a refinement of the resonance arrangement in order to deduce piecewise polynomiality from the tropical perspective. Recall the hyperplane $$\mathcal{H}_{m,n}=\left\{(\mu,\nu)\in\mathbb{N}^m\times\mathbb{N}^n\mid\sum\mu_i=\sum\nu_j\right\}$$ and the *resonance arrangement* in $\mathcal{H}_{m,n}$ given by $$\mathcal{R}_{m,n}=\left\{\sum_{i\in I}\mu_i-\sum_{j\in J}\nu_j=0\mid I\subset[m],J\subset[n]\right\}.$$

**Definition 108**. Let $m$ and $n$ be positive integers. We define the refined resonance arrangement in $\mathcal{H}_{m,n}$ by $$\tilde{\mathcal{R}}_{m,n}=\left\{\sum_{i\in I}\alpha(i)\mu_i-\sum_{j\in J}\beta(j)\nu_j=0\mid I\subset[m],J\subset[n],\alpha(i),\beta(j)\in\{-1,1\}\right\}.$$

Note that the vertex multiplicity of any primary vertex of a tropical monodromy graph in $\mathfrak{TP}_g(\mu,\nu)$ is given by $$PH_0\left((\mu_{i_1},\dots,\mu_{i_s}),\left(\sum_{I_1}\mu_i-\sum_{J_1}\nu_j,\sum_{I_2}\mu_i-\sum_{J_2}\nu_j\right)\right)$$ for some subsets $I_1,I_2\subset[m]$ and $J_1,J_2\subset[n]$. Substituting the partition $(\sum_{I_1}\mu_i-\sum_{J_1}\nu_j,\sum_{I_2}\mu_i-\sum_{J_2}\nu_j)$ into the equations of the resonance arrangement, we obtain equations of the form that define $\tilde{\mathcal{R}}_{m,n}$. Thus, we have proved the following.

**Theorem 109**. *Let $(\Gamma,\mathcal{O})\in \mathfrak{TP}_0(m,n)$ and denote by $m(\Gamma,\mathcal{O})(\mu,\nu)$ the contribution of the pruned monodromy graphs in $\mathfrak{TP}_0(\mu,\nu)$ of combinatorial type $(\Gamma,\mathcal{O})$ to $PH_0(\mu,\nu)$.
Then the map $$\begin{aligned} m(\Gamma,\mathcal{O})\colon\mathcal{H}_{m,n}&\to\mathbb{Q}\\ (\mu,\nu)&\mapsto m(\Gamma,\mathcal{O})(\mu,\nu) \end{aligned}$$ is piecewise polynomial with respect to the refined resonance arrangement $\tilde{\mathcal{R}}_{m,n}$.*

We remark that in all examples we computed, the map $m(\Gamma,\mathcal{O})$ was in fact piecewise polynomial with respect to the coarser arrangement $\mathcal{R}_{m,n}$. We illustrate this phenomenon in the following example.

**Example 110**. Let $g=0$, $\mu=(\mu_1,\mu_2)$ and $\nu=(\nu_1,\nu_2,\nu_3)$. We consider the combinatorial type of a pruned monodromy graph in [\[fig:prungraphpoly\]](#fig:prungraphpoly){reference-type="ref" reference="fig:prungraphpoly"}, and the chamber of the resonance arrangement given by $\mu_1,\mu_2>\nu_1,\nu_2,\nu_3$. Note that these inequalities automatically imply $\nu_i+\nu_j>\mu_1,\mu_2$ for all $i\neq j$. The multiplicity of this combinatorial type is $$2\,\mathrm{min}(\mu_1,\mu_2,\mu_1+\mu_2-\nu_1,\nu_1)\,(\mu_1+\mu_2-\nu_1).$$ Since $\mu_1,\mu_2>\nu_1$, we need to compute $\mathrm{min}(\mu_1+\mu_2-\nu_1,\nu_1)$. We observe that $\mu_1+\mu_2-\nu_1>\nu_1$ is equivalent to $\mu_1+\mu_2>2\nu_1$, which holds in the chosen chamber. Therefore, the contribution of the graph in [\[fig:prungraphpoly\]](#fig:prungraphpoly){reference-type="ref" reference="fig:prungraphpoly"} to $PH_0((\mu_1,\mu_2),(\nu_1,\nu_2,\nu_3))$ is $$2\nu_1(\mu_1+\mu_2-\nu_1).$$

[^1]: We note that the main ideas were already present in [@zvonkine2004algebra; @irving2009minimal].
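The chamber computation in Example 110 is easy to sanity-check numerically. The following Python sketch (our illustration, not part of the paper's computations) enumerates lattice points of $\mathcal{H}_{2,3}$ lying in the chamber $\mu_1,\mu_2>\nu_1,\nu_2,\nu_3$ and checks that the displayed multiplicity agrees with the polynomial $2\nu_1(\mu_1+\mu_2-\nu_1)$ throughout:

```python
from itertools import product

def multiplicity(mu1, mu2, nu1):
    # multiplicity of the combinatorial type from Example 110
    return 2 * min(mu1, mu2, mu1 + mu2 - nu1, nu1) * (mu1 + mu2 - nu1)

def in_chamber(mu, nu):
    # the chamber mu_1, mu_2 > nu_1, nu_2, nu_3 of the resonance arrangement
    return min(mu) > max(nu)

checked = 0
for mu1, mu2, nu1, nu2 in product(range(1, 15), repeat=4):
    nu3 = mu1 + mu2 - nu1 - nu2          # balancing: sum(mu) == sum(nu)
    if nu3 < 1 or not in_chamber((mu1, mu2), (nu1, nu2, nu3)):
        continue
    # in this chamber the min is attained at nu_1, so the contribution
    # reduces to the polynomial 2*nu_1*(mu_1 + mu_2 - nu_1)
    assert multiplicity(mu1, mu2, nu1) == 2 * nu1 * (mu1 + mu2 - nu1)
    checked += 1
print(checked, "chamber points checked")
```

The assertion never fails, consistent with the observation that on this chamber the min-expression collapses to a single polynomial.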
{ "id": "2310.05225", "title": "Combinatorics of pruned Hurwitz numbers", "authors": "Sean Gearoid Fitzgerald, Marvin Anas Hahn, S\\'iofra Kelly", "categories": "math.CO math.AG", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | The classification of the maximal subgroups of the Monster $\mathbf{M}$ is believed to be complete subject to an unpublished result of Holmes and Wilson asserting that $\mathbf{M}$ has no maximal subgroups that are almost simple with socle isomorphic to ${\rm PSL}_2(8)$, ${\rm PSL}_2(16)$, or ${\rm PSU}_3(4)$. We prove this result for ${\rm PSL}_2(16)$, with the intention that the other two cases will be dealt with in an expanded version of this paper. Our proof is supported by reproducible computations carried out using Seysen's publicly available Python package `mmgroup` for computing with $\mathbf{M}$. address: School of Mathematics, Monash University, Clayton VIC 3800, Australia author: - Heiko Dietrich - Melissa Lee - Tomasz Popiel title: | Indeed, the Monster has no almost simple\ maximal subgroup with socle ${\rm PSL}_2(16)$ --- [^1] # Introduction {#sec_intro} The Monster $\mathbf{M}$ is the largest of the 26 sporadic finite simple groups, and contains all but six of the other sporadic groups as subgroups or subquotients. The maximal subgroups of all of the sporadic groups other than $\mathbf{M}$ have been classified for some time, but classifying the maximal subgroups of $\mathbf{M}$ has been considerably more difficult. Based on a significant body of work (some 15 papers) due chiefly to Norton, Wilson, and Holmes, the classification was seemingly complete as of 2017, apart from the possibility that $\mathbf{M}$ contained as-yet-undiscovered maximal subgroups that are almost simple with socle isomorphic to one of ${\rm PSL}_2(8)$, ${\rm PSL}_2(13)$, ${\rm PSL}_2(16)$, or ${\rm PSU}_3(4)$. Wilson reported that he had eliminated the cases ${\rm PSL}_2(8)$ and ${\rm PSU}_3(4)$ [@W17 p. 65], and that the case ${\rm PSL}_2(16)$ had been eliminated by Holmes [@W17b p. 877], but the proofs of these results were never published. The remaining case of ${\rm PSL}_2(13)$ was dealt with in our recent paper [@jems]. 
We made extensive use of Seysen's ground-breaking Python package `mmgroup` [@sey_python; @sey20; @sey22] for computing with $\mathbf{M}$ to classify all subgroups of $\mathbf{M}$ that are almost simple with socle ${\rm PSL}_2(13)$. In particular, we found a (unique) new class of maximal subgroups of $\mathbf{M}$ isomorphic to ${\rm PGL}_2(13)$. According to the existing literature, our result [@jems] completes the classification of the maximal subgroups of $\mathbf{M}$, with the caveat that Holmes and Wilson's non-existence results for the cases ${\rm PSL}_2(8)$, ${\rm PSL}_2(16)$, and ${\rm PSU}_3(4)$ remain unpublished. The aim of this paper is to provide a reproducible proof of the non-existence result for the case ${\rm PSL}_2(16)$. The other two cases are currently works-in-progress; we intend to deal with them in an expanded version of this paper. For now, we announce (and prove) the following result. **Theorem 1**. *The Monster contains no maximal subgroup that is almost simple with socle ${\rm PSL}_2(16)$.* As in [@jems], our proof is supported by computations carried out using `mmgroup`. We refer the reader to [@W17; @W17b] and references therein for further discussion of the history of the maximal subgroup problem for $\mathbf{M}$, and to [@jems; @sey_python; @sey20; @sey22] for details on computing with $\mathbf{M}$ in `mmgroup` that are not summarised here. Note that we complement `mmgroup` with our own Python implementations of standard algorithms for generating random elements and determining the order of a subgroup $G$ of $\mathbf{M}$ from a generating set for $G$; see [@jems Section 2.4]. This functionality is required to reproduce our proof; see Remark [Remark 6](#remPSL2(16)){reference-type="ref" reference="remPSL2(16)"}. # Preliminaries {#sec_prel} Most of our group-theoretic notation is standard, and usually follows the Atlas [@atlas], with the notable exception that we write ${\rm PSL}_d(q)$ instead of ${\rm L}_d(q)$. 
An extension of a group $B$ by a group $A$ is denoted $A.B$, where $A$ is the normal subgroup. The notation $A{:}B$ highlights that an extension splits, and $A{\cdot}B$ denotes a non-split extension. We denote by ${\rm D}_n$, ${\rm A}_n$, and ${\rm S}_n$ the dihedral group of order $n$, the alternating group of degree $n$, and the symmetric group of degree $n$. We also use $n$ to denote a cyclic group of order $n$, and $q^k$ to denote an elementary abelian group of order $q^k$ for $q$ a prime power and $k$ a positive integer; for example, the Borel subgroup of ${\rm PSL}_2(16)$ is denoted by $2^4{:}15$. We often use subscripts to indicate group element orders; for example, $g_5$ might denote an element of order $5$.

## The Monster {#sec_MM}

The Monster $\mathbf{M}$ has two classes of involutions, labelled $2A$ and $2B$ in Atlas [@atlas] notation. The respective involution centralisers are maximal subgroups of $\mathbf{M}$ isomorphic to $2{\cdot}\mathbf{B}$, the double cover of the Baby Monster $\mathbf{B}$, and $2^{1+24}{\cdot}{\rm Co}_1$, where ${\rm Co}_1$ is Conway's first sporadic group. We fix $z\in 2B$ and write $$\mathbf{G}=C_\mathbf{M}(z) \cong 2^{1+24}{\cdot}{\rm Co}_1,$$ although we may sometimes abuse notation and write $\mathbf{G}$ for some unspecified conjugate of $2^{1+24}{\cdot}{\rm Co}_1$.
We have in mind that $z$ is the central involution in the *fixed* copy of $\mathbf{G}$ in `mmgroup`, which can be defined by the `mmgroup` command ${\footnotesize \rm\texttt{z = MM("M<x\_1000h>")}}$. The normal subgroup $2^{1+24}$ of $\mathbf{G}$ is denoted by $\mathbf{Q}$. In [@jems], we described how to use `mmgroup` to construct a group homomorphism $\pi\colon \mathbf{G}\to {\rm GL}_{24}(2)$ whose image is a $24$-dimensional representation of ${\rm Co}_1$. Mapping generators and keeping track of straight-line programs (SLPs) allowed us to use the computer algebra system Magma [@magma] to compute with this matrix representation of ${\rm Co}_1$, and to pull back elements from Magma to `mmgroup`. Given a $2B$-involution $b\in \mathbf{M}$, `mmgroup` is able to find an element $h\in\mathbf{M}$ such that $b^h=z$. This is done using the method `conjugate_involution()`, which also determines which of the two $\mathbf{M}$-classes of involutions a given involution belongs to. This allows us to compute in the centraliser of any $2B$-involution as efficiently as we can compute in the fixed group $\mathbf{G}$. This strategy of "changing post" has been used extensively by Holmes and Wilson; see, for example, [@HW08 Section 1.4], and also [@black Section 3]. When applied to a $2A$-involution $y\in \mathbf{M}$, `conjugate_involution()` returns an element conjugating $y$ to a 'standard' $2A$-involution, labelled `MM("M<d_200h>")`, but we note that `mmgroup` does not contain a copy of $C_\mathbf{M}(y) \cong 2{\cdot}\mathbf{B}$; see also the discussion in Section [3](#secA5){reference-type="ref" reference="secA5"}.

## Known subgroups of $\mathbf{M}$ isomorphic to ${\rm PSL}_2(16)$ {#sec_known_subgroups}

The Monster has two conjugacy classes of elements of order $5$, denoted by $5A$ and $5B$.
Norton and Wilson [@N98; @NW02] have classified the subgroups of $\mathbf{M}$ isomorphic to ${\rm PSL}_2(16)$ whose elements of order $5$ belong to class $5A$. These subgroups are listed in [@N98 Table 5], along with their centralisers, and the centralisers of their centralisers. Up to conjugacy, there is a unique ${\rm PSL}_2(16) < \mathbf{M}$ containing $5A$-elements. This group is not maximal in $\mathbf{M}$, and no almost simple extension of it is maximal in $\mathbf{M}$. Every as-yet-unclassified subgroup of $\mathbf{M}$ isomorphic to ${\rm PSL}_2(16)$ must have all of its elements of order $5$ lying in the $\mathbf{M}$-class $5B$. By [@NW02 Table 3], such a subgroup must also satisfy certain other conjugacy class fusion restrictions; in particular, its elements of order $2$ must belong to the $\mathbf{M}$-class $2B$. # Subgroups of $\mathbf{M}$ isomorphic to ${\rm A}_5$ {#secA5} The group ${\rm PSL}_2(16)$ contains ${\rm A}_5$ as a maximal subgroup, so we can attempt to generate subgroups of $\mathbf{M}$ isomorphic to ${\rm PSL}_2(16)$ by starting with appropriate subgroups isomorphic to ${\rm A}_5$. The subgroups ${\rm A}_5$ of $\mathbf{M}$ have been classified by Norton [@N98 Section 4]. We collect some information about those conjugacy classes of ${\rm A}_5 < \mathbf{M}$ that could, in principle, lead to 'new' subgroups ${\rm PSL}_2(16)$. Recall that ${\rm A}_5$ has unique conjugacy classes of elements of orders $2$ and $3$, and two classes of elements of order $5$. The character table of $\mathbf{M}$, which is available in the computer algebra system GAP [@GAPbc; @gap], shows that for every element $g_5 \in \mathbf{M}$ of order $5$, the non-trivial powers of $g_5$ lie in a single conjugacy class; equivalently, the $\mathbf{M}$-classes $5A$ and $5B$ are rational, i.e. all complex irreducible characters of $\mathbf{M}$ take rational (hence integer) values on these classes. 
Because a Sylow $5$-subgroup of ${\rm A}_5$ is cyclic of order $5$, it follows that all elements of order $5$ in a subgroup ${\rm A}_5$ of $\mathbf{M}$ belong to a single $\mathbf{M}$-class. As explained in Section [2.2](#sec_known_subgroups){reference-type="ref" reference="sec_known_subgroups"}, every as-yet-unclassified subgroup of $\mathbf{M}$ isomorphic to ${\rm PSL}_2(16)$ has its elements of order $5$ lying in the $\mathbf{M}$-class $5B$. By [@N98 Table 3], the Monster has eight conjugacy classes of subgroups isomorphic to ${\rm A}_5$, but only three of these contain $5B$-elements. Table [1](#tab_A5){reference-type="ref" reference="tab_A5"} shows the $\mathbf{M}$-classes containing the elements of orders $2$, $3$, and $5$ in each such ${\rm A}_5$, and the centraliser of the ${\rm A}_5$ in $\mathbf{M}$. The two classes of ${\rm A}_5$ with elements lying in the $\mathbf{M}$-classes $2B$, $3B$, and $5B$ can be distinguished as follows. One has centraliser ${\rm S}_3$ and is contained in the subgroup ${\rm Th}$ of a maximal subgroup ${\rm S}_3 \times {\rm Th}$ of $\mathbf{M}$; the other has centraliser generated by a $2A$-involution and is contained in a maximal subgroup $2{\cdot}\mathbf{B}$. Per Norton [@N98] and Holmes and Wilson [@HW08], these ${\rm A}_5 < \mathbf{M}$ are said to be of *type* $T$ and *type* $B$, respectively. The third type of ${\rm A}_5 < \mathbf{M}$ that we need to consider contains elements from the $\mathbf{M}$-classes $2B$, $3C$, and $5B$. Holmes and Wilson [@HW08] say that such an ${\rm A}_5$ has *type* $BCB$, but we shall say that it has *type* $G$ because we find a copy in the maximal subgroup $\mathbf{G}$ of $\mathbf{M}$.
  Subgroup ${\rm A}_5$        $A_G\leqslant\mathbf{G}$   $A_T< {\rm Th} < {\rm S}_3 \times {\rm Th}$   $A_B< 2{\cdot}\mathbf{B}$
  --------------------------- -------------------------- --------------------------------------------- ---------------------------
  Class fusions               $(2B,3C,5B)$               $(2B,3B,5B)$                                  $(2B,3B,5B)$
  $C_\mathbf{M}({\rm A}_5)$   ${\rm D}_{10}$             ${\rm S}_3$                                   $2$

  : The conjugacy classes of subgroups ${\rm A}_5 < \mathbf{M}$ containing $5B$-elements, their class fusions in $\mathbf{M}$, and their centralisers in $\mathbf{M}$; see [@N98 Table 3]. In the third column, ${\rm Th}$ denotes Thompson's sporadic group.

Let us generically denote by $A_G$, $A_T$, or $A_B$, respectively, a subgroup ${\rm A}_5$ of $\mathbf{M}$ of type $G$, $T$, or $B$. We reiterate that each such subgroup of $\mathbf{M}$ is unique up to conjugacy [@N98 Section 4]. As discussed in [@jems], when working with `mmgroup`, it is desirable to perform calculations in the fixed maximal subgroup $\mathbf{G}$ of $\mathbf{M}$ when possible, because `mmgroup` has certain functionality for computing in $\mathbf{G}$ that it does not have in general. In particular, one can compute certain character values of elements in $\mathbf{G}$ using the method `chi_G_x0()`, but this method does not apply to arbitrary elements of $\mathbf{M}$. We were able to find a subgroup $A_G$ in $\mathbf{G}$ via random search, but we note that it is *not* possible to find copies of $A_T$ or $A_B$ in $\mathbf{G}$.
Indeed, $A_B<C_\mathbf{M}(C_\mathbf{M}(A_B)) \cong 2{\cdot}\mathbf{B}$, and $C_{\mathbf{M}}(A_B)$ is generated by the central involution of $2{\cdot}\mathbf{B}$, which lies in the $\mathbf{M}$-class $2A$. Therefore, $C_{\mathbf{M}}(A_B)$ does not intersect the class $2B$, and so $A_B$ is not contained in any $2B$-involution centraliser, i.e. in any conjugate of $\mathbf{G}$. Similarly, $A_T<C_\mathbf{M}(C_\mathbf{M}(A_T)) \cong {\rm Th} < 2{\cdot}\mathbf{B}$ with $C_\mathbf{M}(A_T) \cong {\rm S}_3$. Given that the centre of $2{\cdot}\mathbf{B}$ is contained in $C_\mathbf{M}(A_T)$, which has a unique class of involutions, we see that $A_T$ is also not contained in $\mathbf{G}$. It was relatively straightforward to find a subgroup $A_T$ of $\mathbf{M}$. The construction is summarised in Proposition [Proposition 3](#prop_A5){reference-type="ref" reference="prop_A5"}, but we first describe the basic strategy. We first used Bray's method [@bray] to find various elements in the centraliser $2{\cdot}\mathbf{B} < \mathbf{M}$ of a certain $2A$-involution, namely the 'standard' $2A$-involution in `mmgroup` (see Section [2.1](#sec_MM){reference-type="ref" reference="sec_MM"}). We were able to deduce that, amongst these elements, we had a certain pair of generators $a$ and $b$ for $2{\cdot}\mathbf{B}$.
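Bray's method itself is simple to state: for an involution $t$ and an arbitrary element $x$, set $c=[t,x]$; if $c$ has odd order $2k+1$ then $xc^k$ centralises $t$, while if $c$ has even order $2k$ then $c^k$ centralises $t$ (as does $[t,x^{-1}]^k$). The following toy Python sketch (ours, using plain permutations rather than `mmgroup` elements) illustrates the method on a small symmetric group:

```python
from itertools import permutations

def mul(p, q):
    # compose permutations given as image tuples: apply p first, then q
    return tuple(q[p[i]] for i in range(len(p)))

def inv(p):
    r = [0] * len(p)
    for i, pi in enumerate(p):
        r[pi] = i
    return tuple(r)

def order(p):
    e, q, n = tuple(range(len(p))), p, 1
    while q != e:
        q, n = mul(q, p), n + 1
    return n

def bray_centralising_element(t, x):
    # c = [t, x] = t^-1 * x^-1 * t * x; t is an involution, so t^-1 = t
    c = mul(mul(mul(t, inv(x)), t), x)
    k, r = divmod(order(c), 2)
    if r:                     # odd order 2k+1: x * c^k centralises t
        y = x
    else:                     # even order 2k: c^k centralises t
        y = tuple(range(len(t)))
    for _ in range(k):
        y = mul(y, c)
    return y

t = (1, 0, 3, 2, 4, 5)        # the involution (1 2)(3 4) in S6
found = set()
for x in permutations(range(6)):
    y = bray_centralising_element(t, x)
    assert mul(y, t) == mul(t, y)   # y really lies in the centraliser of t
    found.add(y)
print(len(found), "distinct centralising elements produced")
```

In `mmgroup` the same loop would run over random elements of $\mathbf{M}$, with element orders computed there instead; the group-theoretic guarantee that each output centralises $t$ is identical.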
(We verified that our $a$ and $b$ really were generators by finding elements of sufficiently many different orders in $\langle a,b \rangle$ and inferring from the list of maximal subgroups of $\mathbf{B}$ that $\langle a,b \rangle \cong 2{\cdot}\mathbf{B}$.) We chose the generators for our copy of $2{\cdot}\mathbf{B}$ such that they project to *standard generators* $c$ and $d$ for $\mathbf{B}$ itself, in the sense defined by Wilson [@W96] and the Atlas website [@atlas]; namely, generators $c,d\in\mathbf{B}$ such that $c$ belongs to $\mathbf{B}$-class $2C$, $d$ belongs to $\mathbf{B}$-class $3A$, $cd$ has order $55$, and $(cd)^4(dc)^2d^2cd^2$ has order $23$. The Atlas website provides SLPs for constructing various subgroups of $\mathbf{B}$ from standard generators. We were thereby able to construct all subgroups in the chain ${\rm S}_5 < {\rm Th} < 2{\cdot}\mathbf{B}$, and then we finally found a copy of $A_T$ in the ${\rm S}_5$. To confirm that we really did have an $A_T$, we calculated certain character values of (conjugates in $\mathbf{G}$ of) its elements of orders $2$ and $5$, and found a centralising element of order $3$; see the proof of Proposition [Proposition 3](#prop_A5){reference-type="ref" reference="prop_A5"}. Constructing a copy of $A_B$ was significantly more difficult. We first constructed a subgroup ${\rm A}_5\times {\rm A}_{12}$ of $\mathbf{M}$, which has index $2$ in the maximal subgroup $({\rm A}_5\times {\rm A}_{12}){:}2$.
As explained by Norton [@N98 Section 4], the group ${\rm A}_5\times {\rm A}_{12}$ contains subgroups ${\rm A}_5$ of both types $T$ and $B$ as diagonal subgroups, and it is possible to distinguish between the two types by considering orbit lengths in the natural $12$-point permutation representation of the ${\rm A}_{12}$. This allowed us to find a copy of $A_B$ in ${\rm A}_5\times {\rm A}_{12}$. Proposition [Proposition 2](#prop_A12){reference-type="ref" reference="prop_A12"} provides generators for our copy of ${\rm A}_5\times {\rm A}_{12} < \mathbf{M}$. Proposition [Proposition 3](#prop_A5){reference-type="ref" reference="prop_A5"} shows that our copy of $A_B$ in ${\rm A}_5\times {\rm A}_{12}$ does indeed have the claimed type. (Of course, one might naturally ask how we constructed the ${\rm A}_5\times {\rm A}_{12}$ in the first place; for the sake of exposition, this is explained in Remark [Remark 5](#remA12){reference-type="ref" reference="remA12"}.) ``` {#fig:A5A12 .python float="" breaklines="true" language="Python" captionpos="b" texcl="false" frame="lines" caption="Generators for ${\\rm A}_5\\times{\\rm A}_{12} < \\mathbf{M}$ in {\\footnotesize \\rm\\texttt{mmgroup}} format; see also \\cite{ourfile} and the proofs of Propositions~\\ref{prop_A12} and~\\ref{prop_A5}." 
label="fig:A5A12"} # generators for A12 x3 = MM("M<y_31h*x_0d92h*d_85ah*p_240874113*l_1*p_80762880*l_1*p_221802288*t_1*l_2*p_50160000*l_1*p_232003248*l_2*t_2*l_1*p_78988800*l_1*p_182328960*l_1*t_1*l_2*p_118018560*l_1*t_1*l_1*p_183216000*l_1>") x10 = MM("M<y_491h*x_18h*d_77ah*p_179668320*l_1*p_68344320*l_2*p_159709440*l_2*t_1*l_1*p_70561920*l_2*p_242647728*l_2*t_1*l_1*p_79875840*l_1*p_182772480*l_1*t_1*l_1*p_4012800*l_2*t_1*l_2*p_117575040*l_1>") # generators for A5 commuting with A12 a2 = MM("M<y_511h*x_19e5h*d_0f88h*p_175676956*l_2*p_127776000*t_2*l_1*p_60360960*l_1*p_193416960*l_2*t_1*l_1*p_69231360*l_2*p_162370608*l_2*t_2*l_1*p_67457280>") a3 = MM("M<y_411h*x_158eh*d_64fh*p_160702030*l_2*p_1900800*l_2*p_684131*t_1*l_1*p_1499520*l_1*p_32064306*l_2*t_1*l_2*p_1394880*l_1*p_22320*l_2*p_98880*t_2*l_2*p_2830080*l_2*p_21469865*t_2*l_2*p_2830080*l_2*p_106661290*t_1*l_2*p_2597760*l_1*p_43613421*t_2*l_2*p_2830080*l_2*p_96456578>") # generators for A5 < A12 with orbits of size 6 and 6 on 12 points b2 = MM("M<y_599h*x_41ah*d_6b7h*p_240430467*l_1*p_70561920*l_1*p_140194560*t_1*l_1*p_81206400*l_2*p_169023408*l_1*t_1*l_2*p_79432320*l_2*p_212044848*l_2*t_1*l_2*p_59917440*l_1*p_157048416>") b3 = MM("M<y_1eeh*x_15e7h*d_0d65h*p_141989494*l_1*p_59473920*l_2*p_131767728*l_2*t_2*l_2*p_50160000*l_2*p_179224368*l_2*t_2*l_1*p_71005440*l_1*p_243091248*l_1*t_2*l_1*p_58143360*l_2*p_179667936>") ``` **Proposition 2**. *The elements $x_3$ and $x_{10}$ defined in `mmgroup` format in Listing [\[fig:A5A12\]](#fig:A5A12){reference-type="ref" reference="fig:A5A12"} generate a subgroup of $\mathbf{M}$ isomorphic to ${\rm A}_{12}$. The elements $a_2$ and $a_3$ defined in Listing [\[fig:A5A12\]](#fig:A5A12){reference-type="ref" reference="fig:A5A12"} generate a subgroup of $\mathbf{M}$ isomorphic to ${\rm A}_5$ that commutes with $\langle x_3, x_{10} \rangle$. 
In particular, $\langle x_3,x_{10},a_2,a_3 \rangle \cong {\rm A}_5 \times {\rm A}_{12} < \mathbf{M}$.*

*Proof.* A direct calculation in `mmgroup` shows that $x_3$ and $x_{10}$ satisfy the presentation $$\begin{aligned} \langle x_3, x_{10} \mid &x_3^3 = x_{10}^{10} = (x_3x_{10})^{11} = [x_3,x_{10}]^2 = \\ &(x_3x_{10}^{-2}x_3x_{10}^2)^2 = [x_3,x_{10}^3]^2 = (x_3x_{10}^{-4}x_3x_{10}^4)^2 = [x_3,x_{10}^5]^2 = 1 \rangle\end{aligned}$$ for ${\rm A}_{12}$; see [@CoxeterMoser p. 67]. Similarly, $a_2$ and $a_3$ satisfy the presentation $\langle a_2, a_3 \mid a_2^2=a_3^3=(a_2a_3)^5=1 \rangle$ for ${\rm A}_5$, and $a_2$ and $a_3$ commute with $x_3$ and $x_{10}$. Given that both ${\rm A}_{12}$ and ${\rm A}_5$ are simple groups, the desired result follows from von Dyck's theorem [@handbook Theorem 2.53]. ◻

``` {#fig:A5 .python float="" breaklines="true" language="Python" captionpos="b" texcl="false" frame="lines" caption="Generators $g_2$ and $g_3$ for subgroups $A_G,A_T,A_B \\cong {\\rm A}_5$ of $\\mathbf{M}$ in {\\footnotesize \\rm\\texttt{mmgroup}} format; see also \\cite{ourfile}. In the third case, the $g_i$ are defined in terms of the elements $a_i$ and $b_i$ given in Listing~\\ref{fig:A5A12}. The elements $c_i$ generate the centraliser of $\\langle g_2,g_3 \\rangle$ in $\\mathbf{M}$. In the second and third cases, $i_2$ is a $2B$-involution centralising the element $g_2g_3$ of order $5$, and $h$ is an element conjugating $i_2$ to the central involution in the fixed copy of $\\mathbf{G}\\cong 2^{1+24}{\\cdot}{\\rm Co}_1 < \\mathbf{M}$ in {\\footnotesize \\rm\\texttt{mmgroup}}. See also the proof of Proposition~\\ref{prop_A5}."
label="fig:A5"} # type G g2 = MM("M<y_4f6h*x_1f98h*d_0b7h*p_67615847*l_1*p_2999040*l_1*p_86264262*l_2*p_11172480>") g3 = MM("M<y_4e1h*x_19cbh*d_9c8h*p_19643307*l_1*p_2999040*l_1*p_64003504*l_2*p_1478400>") c2 = MM("M<x_1000h>") c5 = MM("M<y_548h*x_34ah*d_0a9ch*p_243281095*l_1*p_1457280*l_2*p_43255315*t_2*l_1*p_3840*l_2*p_465936*l_2*p_1101120*t_1*l_2*p_2787840*l_2*p_32009429*l_1*t_2*l_2*p_2956800*l_1*p_64018007*t_1*l_2*p_2880*l_2*p_3120*l_2*p_2579520*t_2*l_2*p_2830080*l_2*p_42706069*t_1*l_2*p_2787840*l_2*p_148289>") # type T g2 = MM("M<y_82h*x_140eh*d_327h*p_130881367*l_1*p_80319360*l_1*p_131324208*l_1*t_1*l_1*p_69674880*l_2*p_160152960*l_1*t_1*l_1*p_48829440*l_1*p_230229120*l_2*t_1*l_1*p_70561920*l_1*p_87859296>") g3 = MM("M<y_430h*x_0d4h*d_8a2h*p_242204766*l_2*p_60804480*l_2*p_11552640*l_2*t_1*l_2*p_49272960*l_1*p_172128000*l_2*t_1*l_1*p_59917440*l_1*p_239986560*l_1*t_2*l_2*p_3125760*l_2*t_1*l_2*p_47055360>") c2 = MM("M<d_200h>") c3 = MM("M<y_4cdh*x_1274h*d_499h*p_8151915*l_2*p_1900800*l_2*p_43255347*t_2*l_2*p_2597760*l_1*p_479249*l_2*t_2*l_1*p_4654080*t_1*l_2*p_2956800*l_1*p_53436116*t_2*l_2*p_2386560*l_2*p_85412773*t_1*l_1*p_1499520*l_1*p_106661296>") i2 = MM("M<y_1d9h*x_1d53h*d_170h*p_157936168*l_2*p_68344320*l_2*p_202730880*l_2*t_1*l_1*p_78545280*l_1*p_212044848*l_2*t_2*l_1*p_80762880*l_2*p_149508480*l_2*t_1*l_1*p_81206400*l_1*p_85198176>") h = MM("M<y_17eh*x_143ah*d_0c93h*p_48068830*l_2*p_2956800*l_1*p_43160055*t_2*l_2*p_1943040*l_2*p_1471043*l_1*t_2*l_1*p_1499520*l_1*p_32513830*l_1*t_1*l_2*p_2830080*l_2*p_85329986*t_2*l_2*p_1985280*l_1*p_96485399*t_1*l_2*p_2386560*l_2*p_85330945>") # type B g2, g3 = a2*b2, a3*b3 c2 = MM("M<y_15h*x_1c83h*d_955h*p_191219869*l_2*p_48829440*l_2*p_85198080*t_1*l_2*p_7560960*l_2*p_1795200*t_2*l_1*p_67013760*l_1>") i2 = MM("M<y_487h*x_1426h*d_602h*p_173036153*l_2*p_47055360*l_1*p_53264640*t_1*l_1*p_60360960*l_1*p_182772480*l_2*t_1*l_1*p_59473920*l_2*p_192086400*l_2*t_1*l_1*p_3569280*l_2*t_2*l_2*p_66570240*l_1>") h = 
MM("M<y_4f1h*x_9bch*d_0f77h*p_106507260*l_1*p_80762880*l_2*p_213375504*t_2*l_1*p_1499520*l_2*p_583047*t_2*l_2*p_1900800*l_2*p_1040998*t_2*l_2*p_2386560*l_2*p_21331401*t_1>")
```

The following result summarises our construction of the $G$-, $T$-, and $B$-type subgroups ${\rm A}_5$ of $\mathbf{M}$.

**Proposition 3**. *The subgroups of $\mathbf{M}$ generated by the elements $g_2$ and $g_3$ given in Listing [\[fig:A5\]](#fig:A5){reference-type="ref" reference="fig:A5"} are isomorphic to ${\rm A}_5$ and have types $G$, $T$, and $B$, according to the "type" indicated in the listing.*

*Proof.* In each case, a direct calculation in `mmgroup` confirms that $g_2$ and $g_3$ satisfy the presentation for ${\rm A}_5$ given in the proof of Proposition [Proposition 2](#prop_A12){reference-type="ref" reference="prop_A12"}, namely, $g_2^2=g_3^3=(g_2g_3)^5=1$. It remains to verify that $\langle g_2,g_3 \rangle \cong {\rm A}_5$ has type $G$, $T$, and $B$ in the three respective cases. Note that the arguments that follow involve the auxiliary elements $c_i$, $i_2$, and $h$ defined in Listing [\[fig:A5\]](#fig:A5){reference-type="ref" reference="fig:A5"}. In the first case (type $G$), the element $c_2$ is the central involution $z$ in the fixed copy of the maximal subgroup $\mathbf{G}\cong 2^{1+24}{\cdot}{\rm Co}_1 < \mathbf{M}$ in `mmgroup` (cf. Section [2.1](#sec_MM){reference-type="ref" reference="sec_MM"}). A direct calculation shows that $c_2$ centralises $g_2$ and $g_3$, so $\langle g_2,g_3 \rangle < \mathbf{G}$. The `mmgroup` method `chi_G_x0()` can therefore be used to calculate the character values of $g_2$, $g_3$, and $g_2g_3$ in the $196883$-dimensional complex representation of $\mathbf{M}$.
The character values are $275$, $-1$, and $8$, which indicates that $g_2$, $g_3$, and $g_2g_3$ belong to the $\mathbf{M}$-classes $2B$, $3C$, and $5B$, respectively, confirming that $\langle g_2,g_3 \rangle$ has type $G$. (Note also that $c_5$ has order $5$, centralises $g_2$ and $g_3$, and is inverted by $c_2$. Therefore, $\langle c_2,c_5 \rangle = C_\mathbf{M}(\langle g_2,g_3 \rangle) \cong {\rm D}_{10}$.)

In the second case (type $T$), the method `conjugate_involution()` confirms that $g_2 \in 2B$. The element $i_2$ is a $2B$-involution that centralises $g_5 = g_2g_3$, but not $g_2$ or $g_3$. The element $h$ conjugates $i_2$ to the central involution in $\mathbf{G}$, and therefore also conjugates $g_5$ into $\mathbf{G}$. The method `chi_G_x0()` confirms that $g_5^h$ and hence $g_5$ belong to the $\mathbf{M}$-class $5B$, so it follows from Table [1](#tab_A5){reference-type="ref" reference="tab_A5"} that $\langle g_2,g_3 \rangle$ has type $G$, $T$, or $B$. The elements $c_2$ and $c_3$ centralise $\langle g_2,g_3 \rangle$, and have orders $2$ and $3$, respectively. In particular, $\langle g_2,g_3 \rangle$ is centralised by an element of order $3$, so it must be of type $T$ according to Table [1](#tab_A5){reference-type="ref" reference="tab_A5"}.

In the third case (type $B$), $g_2 \in 2B$, and $i_2$ and $h$ have the same properties as in the second case, so proceeding as in that case confirms that $g_2g_3 \in 5B$, whence $\langle g_2,g_3 \rangle$ has type $G$, $T$, or $B$. The element $c_2$ centralises $\langle g_2,g_3 \rangle$ and lies in the $\mathbf{M}$-class $2A$, so $\langle g_2,g_3 \rangle$ is contained in a conjugate of $2{\cdot}\mathbf{B}$. This shows that $\langle g_2,g_3 \rangle$ does not have type $G$, because $A_G$ is contained in a $2B$-centraliser and $C_\mathbf{M}(A_G) \cong {\rm D}_{10}$ has a unique class of involutions.
To show that $\langle g_2,g_3 \rangle$ has type $B$, we explain how we constructed $\langle g_2,g_3 \rangle$ as a diagonal subgroup of the copy of ${\rm A}_5 \times {\rm A}_{12}$ given in Proposition [Proposition 2](#prop_A12){reference-type="ref" reference="prop_A12"}. The proof of Proposition [Proposition 2](#prop_A12){reference-type="ref" reference="prop_A12"} says (in particular) that the elements $x_3$ and $x_{10}$ defined in Listing [\[fig:A5A12\]](#fig:A5A12){reference-type="ref" reference="fig:A5A12"} satisfy a certain presentation for ${\rm A}_{12}$. Now consider the following elements of the symmetric group on $\{1,\ldots,12\}$: $$y_3=(1,2,3) \quad \text{and} \quad y_{10} = (1,3)(2,4,5,6,7,8,9,10,11,12).$$ These elements satisfy the same presentation for ${\rm A}_{12}$ as $x_3$ and $x_{10}$. We construct $\langle y_3,y_{10} \rangle \cong {\rm A}_{12}$ in Magma [@magma] and look for a subgroup $A \cong {\rm A}_5$ such that a diagonal subgroup of ${\rm A}_5 \times A < {\rm A}_5 \times {\rm A}_{12}$ has type $B$. This is done by considering the orbits of $A$ on the points $\{1,\ldots,12\}$. By [@N98 Table 4, rows 5--7], we need to choose an $A$ with orbit lengths $12$; $6$ and $6$; or $6$, $5$, and $1$. We take an $A$ with orbit lengths $6$ and $6$, and use the Magma function InverseWordMap to record generators for $A$ as words in $y_3$ and $y_{10}$. This allows us to use our generators $x_3$ and $x_{10}$ for $\text{A}_{12}$ in `mmgroup` to construct a subgroup $\overline{A}$ of $\langle x_3,x_{10} \rangle$ that is an image of $A$ under some automorphism of ${\rm A}_{12}$. (For brevity, we do not include the SLPs here; see instead our supporting Python code [@ourfile].) Because all automorphisms of ${\rm A}_{12}$ preserve cycle structure in the natural $12$-point representation, the group $\overline{A} < \langle x_3,x_{10} \rangle$ still has the required orbit-length property. 
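The small permutation computations just described are easy to reproduce outside Magma. The following plain-Python sketch (an illustration, not our actual code) checks the cycle structure of $y_3$ and $y_{10}$ and verifies that together they act transitively on $\{1,\ldots,12\}$, the kind of orbit bookkeeping used above.

```python
# Sketch: basic checks on the permutations y3 = (1,2,3) and
# y10 = (1,3)(2,4,5,6,7,8,9,10,11,12) defined in the text.
# Permutations are dicts mapping each point of {1,...,12} to its image.

def perm_from_cycles(cycles, n=12):
    p = {i: i for i in range(1, n + 1)}
    for cyc in cycles:
        for a, b in zip(cyc, cyc[1:] + cyc[:1]):
            p[a] = b
    return p

def compose(p, q):                 # apply q first, then p
    return {i: p[q[i]] for i in q}

def order(p):
    e = {i: i for i in p}
    q, n = dict(p), 1
    while q != e:
        q, n = compose(p, q), n + 1
    return n

def orbit(point, gens):
    seen, frontier = {point}, [point]
    while frontier:
        x = frontier.pop()
        for g in gens:
            if g[x] not in seen:
                seen.add(g[x])
                frontier.append(g[x])
    return seen

y3  = perm_from_cycles([(1, 2, 3)])
y10 = perm_from_cycles([(1, 3), (2, 4, 5, 6, 7, 8, 9, 10, 11, 12)])

print(order(y3), order(y10))           # 3 10
print(len(orbit(1, [y3, y10])))        # 12: a single orbit, so transitive
```

The same `orbit` routine, applied to generators of a candidate subgroup $A$, yields the orbit lengths ($12$; $6$ and $6$; or $6$, $5$, and $1$) required by the selection criterion above.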
The group $\overline{A}$ is generated by the elements $b_2$ and $b_3$ in Listing [\[fig:A5A12\]](#fig:A5A12){reference-type="ref" reference="fig:A5A12"}, which satisfy the aforementioned presentation for ${\rm A}_5$, i.e. $b_2^2=b_3^3=(b_2b_3)^5=1$. The elements $g_2$ and $g_3$ given under "type $B$" in Listing [\[fig:A5\]](#fig:A5){reference-type="ref" reference="fig:A5"} generate a diagonal subgroup of ${\rm A}_5 \times \overline{A} < {\rm A}_5 \times {\rm A}_{12}$. ◻

**Remark 4**. *Note that neither of the groups $\langle a_2,a_3 \rangle \cong {\rm A}_5$ and $\langle b_2,b_3 \rangle \cong {\rm A}_5$ given in Listing [\[fig:A5A12\]](#fig:A5A12){reference-type="ref" reference="fig:A5A12"} has type $G$, $T$, or $B$. Per [@N98 Tables 3--4], the group $\langle a_2,a_3 \rangle$ belongs to the unique class of ${\rm A}_5 < \mathbf{M}$ containing $2A$, $3A$, and $5A$ elements; and $\langle b_2,b_3 \rangle$ belongs to the unique class containing $2B$, $3A$, and $5A$ elements.*

**Remark 5**. *Although they are not needed for the proof of Proposition [Proposition 2](#prop_A12){reference-type="ref" reference="prop_A12"}, we provide some details as to how we constructed the subgroup ${\rm A}_5\times{\rm A}_{12}$ of $\mathbf{M}$ given in Listing [\[fig:A5A12\]](#fig:A5A12){reference-type="ref" reference="fig:A5A12"}. Recall that we had constructed a copy of $2{\cdot}\mathbf{B}$ in $\mathbf{M}$ and found generators that project to standard generators for $\mathbf{B}$, enabling us to use SLPs from the Atlas website to construct various subgroups of $\mathbf{B}$ or $2{\cdot}\mathbf{B}$.
To construct ${\rm A}_5\times{\rm A}_{12}$, we considered the chain of subgroups ${\rm A}_{12} < {\rm HN}<{\rm HN}{:}2<\mathbf{B}$, where ${\rm HN}$ is the Harada--Norton sporadic group. We constructed a subgroup $2 \times \text{A}_{12}$ of $2{\cdot}\mathbf{B}$, and then used a random search to find the elements $x_3$ and $x_{10}$ given in Listing [\[fig:A5A12\]](#fig:A5A12){reference-type="ref" reference="fig:A5A12"}, which generate a copy $X$ of $\text{A}_{12}$ itself. To find the group $\langle a_2,a_3 \rangle \cong {\rm A}_5$ given in Listing [\[fig:A5A12\]](#fig:A5A12){reference-type="ref" reference="fig:A5A12"}, which centralises $X \cong {\rm A}_{12}$, we first found a $2B$-involution in $X$ and conjugated it to the central involution in $\mathbf{G}$ via some $h \in \mathbf{M}$. Working with $X^h \cap \mathbf{G}< \mathbf{G}$ and the homomorphism $\pi \colon \mathbf{G}\to {\rm GL}_{24}(2)$ described in Section [2.1](#sec_MM){reference-type="ref" reference="sec_MM"}, we used Magma to construct the centraliser of $\pi(X^h \cap \mathbf{G})$ in $\pi(\mathbf{G}) \cong {\rm Co}_1$ and pulled back generators of this centraliser to $\mathbf{G}$. We then adjusted the pulled-back generators by elements in the normal subgroup $\mathbf{Q}$ of $\mathbf{G}$ to obtain appropriate coset representatives centralising $X^h \cap \mathbf{G}$. Finally, we used a random search in the group generated by these elements to find sufficiently many elements that commute with the whole of $X^h$ and generate a group isomorphic to ${\rm A}_5$.*

# Proof of Theorem [Theorem 1](#mainthm){reference-type="ref" reference="mainthm"} {#sec_L16}

Let $P = {\rm PSL}_2(16)$. We now show that the Monster contains no maximal subgroup that is almost simple with socle isomorphic to $P$.
The group $P$ has order $2^4{\cdot}3{\cdot}5{\cdot}17$ and maximal subgroups $2^4{:}15$, ${\rm A}_5$, ${\rm D}_{34}$, and ${\rm D}_{30}$, all unique up to conjugacy. There are two classes of elements of order $5$ in $P$, both of which intersect a maximal ${\rm A}_5$ non-trivially. For each element $g_5$ of order $5$ in a fixed maximal ${\rm A}_5 < P$, there are exactly $10$ involutions $j_2\in P$ such that $\langle g_5,j_2\rangle\cong {\rm D}_{10}$ and $\langle {\rm A}_5,j_2\rangle=P$. As explained in Sections [2.2](#sec_known_subgroups){reference-type="ref" reference="sec_known_subgroups"} and [3](#secA5){reference-type="ref" reference="secA5"}, every as-yet-unclassified subgroup of $\mathbf{M}$ isomorphic to $P$ must have its elements of orders $2$ and $5$ lying in the $\mathbf{M}$-classes $2B$ and $5B$. In particular, a maximal ${\rm A}_5$ in such a subgroup must have type $G$, $T$, or $B$. Given that $A_G$, $A_T$, and $A_B$ are unique up to conjugacy in $\mathbf{M}$, it suffices to classify the subgroups of $\mathbf{M}$ isomorphic to $P$ that contain one of the *fixed* groups $A_G$, $A_T$, or $A_B$ defined in Listing [\[fig:A5\]](#fig:A5){reference-type="ref" reference="fig:A5"}.

Let $A = \langle g_2,g_3 \rangle$ be one of the groups $A_G$, $A_T$, or $A_B$ in Listing [\[fig:A5\]](#fig:A5){reference-type="ref" reference="fig:A5"}, and note that we also refer to some of the other elements defined there. Recall from the proof of Proposition [Proposition 3](#prop_A5){reference-type="ref" reference="prop_A5"} that $g_5 = g_2g_3$ has order $5$ in each case, and that, in the second and third cases, $i_2$ is a $2B$-involution centralising $g_5$. Let us define $i_2$ to be the central involution in $\mathbf{G}$ when $A=A_G$, so that we can discuss all three cases together. We need to find all involutions $j_2 \in \mathbf{M}$ that invert $g_5$ by conjugation, because such involutions are precisely those yielding $\langle g_5,j_2 \rangle \cong {\rm D}_{10}$.
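The numerical facts about $P$ quoted above are easy to check from the standard order formula $|{\rm PSL}_2(q)| = q(q^2-1)/\gcd(2,q-1)$; a quick sketch:

```python
from math import gcd

def order_psl2(q):
    # |PSL_2(q)| = q(q - 1)(q + 1) / gcd(2, q - 1)
    return q * (q - 1) * (q + 1) // gcd(2, q - 1)

n = order_psl2(16)
assert n == 4080 == 2**4 * 3 * 5 * 17

# The orders of the maximal subgroups 2^4:15, A5, D34, D30 all divide |P|.
for m in (2**4 * 15, 60, 34, 30):
    assert n % m == 0
```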
All such involutions lie in the normaliser $N=N_\mathbf{M}(\langle g_5\rangle)$ of $\langle g_5 \rangle$ in $\mathbf{M}$, so we need to find a subgroup of $N$ that contains all of them. Because $g_5 \in 5B$, it follows from [@W88 Theorem 5] that $N \cong 5_+^{1+6}{:}2{\cdot}{\rm J}_2{:}4$, where ${\rm J}_2$ is the second Janko group. Recall that $A_G < \mathbf{G}$, and that, in the other two cases, $h$ conjugates $\langle g_5,i_2 \rangle$ into $\mathbf{G}$. Let us set $h=1$ in the case $A = A_G$. The group $\mathbf{G}\cong 2^{1+24}{\cdot}{\rm Co}_1$ intersects the $\mathbf{M}$-class $5B$ in two $\mathbf{G}$-classes. These $\mathbf{G}$-classes are labelled $5A$ and $5C$ in the character table of $2^{1+24}{\cdot}{\rm Co}_1$ in GAP [@GAPbc; @gap]. They project to classes also labelled $5A$ and $5C$ (respectively) in the character table of the quotient ${\rm Co}_1$, and can be distinguished by the dimension of their fixed-point spaces on the $24$-dimensional module for ${\rm Co}_1$ in characteristic $2$. Specifically, the $5A$-elements have a trivial fixed-point space, and the $5C$-elements have a $4$-dimensional fixed-point space. It turns out that if $A=A_G$ or $A_B$ then $g_5^h$ belongs to the $\mathbf{G}$-class $5C$, and if $A=A_T$ then $g_5^h$ belongs to the $\mathbf{G}$-class $5A$. This can be verified using the `mmgroup` method `chi_G_x0()`.

To find a sufficiently large subgroup of $N$, we proceed as follows. The element $h$ gives us a conjugate $\langle g_5^h \rangle$ of the subgroup $\langle g_5 \rangle$ of $N$ inside $\mathbf{G}$. (Recall that $h:=1$ when $A=A_G$.)
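The fixed-point-space criterion used above to separate the $\mathbf{G}$-classes $5A$ and $5C$ is plain linear algebra over $\mathbb F_2$: compute $\dim\ker(g - I)$ for the action on the $24$-dimensional module. The sketch below does this by Gaussian elimination mod $2$; the matrices are illustrative stand-ins, not the actual ${\rm Co}_1$ representation.

```python
import numpy as np

def rank_gf2(M):
    """Rank of a 0/1 matrix over GF(2) by Gaussian elimination."""
    A = np.array(M, dtype=np.uint8) % 2
    r = 0
    for c in range(A.shape[1]):
        piv = next((i for i in range(r, A.shape[0]) if A[i, c]), None)
        if piv is None:
            continue
        A[[r, piv]] = A[[piv, r]]       # swap pivot row into place
        for i in range(A.shape[0]):
            if i != r and A[i, c]:
                A[i] ^= A[r]            # eliminate column c elsewhere
        r += 1
    return r

def fixed_space_dim(g):
    """dim ker(g - I) over GF(2); note -I = +I mod 2."""
    n = g.shape[0]
    return n - rank_gf2((g + np.eye(n, dtype=np.uint8)) % 2)

# Companion matrix of x^4 + x^3 + x^2 + x + 1 over GF(2): an order-5 block
# with no nonzero fixed vectors (1 is not a root of that polynomial mod 2).
C = np.array([[0, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 0, 1],
              [0, 0, 1, 1]], dtype=np.uint8)

# A 24x24 order-5 element built from five such blocks plus a 4-dimensional
# identity block; its fixed-point space is 4-dimensional, like a 5C-element.
g = np.eye(24, dtype=np.uint8)
for b in range(5):
    g[4*b:4*b+4, 4*b:4*b+4] = C

print(fixed_space_dim(g))    # 4
```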
We use the homomorphism $\pi \colon \mathbf{G}\to {\rm GL}_{24}(2)$ described in Section [2.1](#sec_MM){reference-type="ref" reference="sec_MM"} to construct the normaliser of $\pi(g_5^h)$ in $\text{Co}_1$, pull back generators of this normaliser to $\mathbf{G}$, and adjust the pulled-back generators by elements in $\mathbf{Q}$ to obtain appropriate coset representatives generating the normaliser of $g_5^h$ in $\mathbf{G}$, i.e. the subgroup $N^h \cap \mathbf{G}$ of $N^h$. This yields the subgroup $K_1 = (N^h \cap \mathbf{G})^{h^{-1}}$ of $N$, which we seek to extend. We find a second $2B$-involution $i_2'$ centralising $g_5$ by random search in $N^h \cap \mathbf{G}$, conjugate $i_2'$ to the central involution in $\mathbf{G}$ via some $h' \in \mathbf{M}$, and repeat the process of passing to ${\rm Co}_1$ and pulling back to $\mathbf{G}$ to produce the subgroup $K_2 = (N^{h'} \cap \mathbf{G})^{(h')^{-1}}$ of $N$. At this point, we have the subgroup $K = \langle K_1,K_2 \rangle$ of $N$. We then conduct a random search in $K$ for all involutions $j_2 \in \mathbf{M}$ that invert $g_5$. The $(2B,2B,5B)$ class multiplication coefficient of $\mathbf{M}$, which can be calculated from the character table of $\mathbf{M}$ in GAP, tells us that there are $3{,}150{,}000$ such involutions. We are able to find all of them (without having to check whether $K=N$), and we test each one to determine whether the group $S = \langle g_2,g_3,j_2 \rangle$ is isomorphic to $P$. For $A = A_G$ and $A = A_T$, this is never the case. For $A = A_B$, we find precisely $40$ involutions $j_2$ such that $S \cong P$. Moreover, each of these $40$ involutions commutes with the $2A$-involution $c_2$ in the "type $B$" case of Listing [\[fig:A5\]](#fig:A5){reference-type="ref" reference="fig:A5"}. Given that $c_2$ generates the centraliser of the subgroup $A_B$ of $S$ (see Table [1](#tab_A5){reference-type="ref" reference="tab_A5"}), it follows that $C_\mathbf{M}(S) = C_\mathbf{M}(A_B)$. 
In particular, $S < C_\mathbf{M}(C_\mathbf{M}(S)) \cong 2{\cdot}\mathbf{B}$, so $S$ is not maximal in $\mathbf{M}$. It remains to show that no almost simple extension of one of the $40$ groups $S = \langle A_B,j_2 \rangle \cong P$ can be maximal in $\mathbf{M}$, in the event that such an extension arises in $\mathbf{M}$. Every almost simple extension $E<\mathbf{M}$ of $S$ normalises $S$, and $N_\mathbf{M}(S)$ normalises $C_\mathbf{M}(S)$, so $S \leqslant E \leqslant N_\mathbf{M}(S) \leqslant N_\mathbf{M}(C_\mathbf{M}(S))$. As explained above, $C_\mathbf{M}(S)$ is generated by a $2A$-involution, so $N_\mathbf{M}(C_\mathbf{M}(S)) = C_\mathbf{M}(C_\mathbf{M}(S)) \cong 2{\cdot}\mathbf{B}$. In particular, $E$ is certainly not maximal in $\mathbf{M}$.

**Remark 6**. *The files containing the involutions $j_2$ are too large to upload to our GitHub repository [@ourfile], but the Python code given there includes generators for (subgroups of) $N$ from which the $j_2$ can be recovered by random search. To check whether each $S = \langle g_2,g_3,j_2 \rangle$ is isomorphic to $P$, we first checked whether $j_2g_2$ and $j_2g_3$ have orders that arise in $P$. This ruled out the vast majority of cases. In the remaining cases, we checked whether $|S|=|P|$. Every $S$ that passed that test also satisfied $S \cong P$, which we checked by computing the order of every element of $S$ and applying the Main Theorem of [@PSL2qCharacterisation]. (Recall that we use our own Python implementations of algorithms for generating random elements and determining the order of a subgroup of $\mathbf{M}$ from a generating set, as noted in Section [1](#sec_intro){reference-type="ref" reference="sec_intro"}.)*

**Remark 7**.
*The subgroups $S \cong {\rm PSL}_2(16)$ of $\mathbf{M}$ containing $A_B$ do not seem to be explicitly mentioned in the literature. As noted, $S < 2{\cdot}\mathbf{B}$. Because $S$ is simple, it projects to some $T \cong {\rm PSL}_2(16)$ in $\mathbf{B}$. Because $|T|$ is divisible by $5$ and $17$, the only maximal subgroups of $\mathbf{B}$ that can contain $T$ are $2.{}^2{\rm E}_6(2){:}2$, Fischer's group ${\rm Fi}_{23}$, $2^{9+16}.{\rm PSp}_8(2)$, and $(2^2 \times {\rm F}_4(2)) {:} 2$. (The maximal subgroups of $\mathbf{B}$ were determined across several papers, as explained in [@W17 Section 3.5]; they are listed e.g. on the Atlas website [@atlas].) Elements of order $5$ in $S$ lie in the $\mathbf{M}$-class $5B$. A conjugacy class fusion calculation in GAP shows that no elements of order $5$ in $2.{}^2{\rm E}_6(2){:}2$ or ${\rm Fi}_{23}$ lift to elements in $2{\cdot}\mathbf{B}$ that lie in the $\mathbf{M}$-class $5B$. The class fusions from $(2^2 \times {\rm F}_4(2)) {:} 2$ to $\mathbf{B}$ are not stored in GAP, but the following argument shows that $T \not < (2^2 \times {\rm F}_4(2)) {:} 2$. The Baby Monster has two classes of elements of order $5$, called $5A$ and $5B$, with respective centraliser orders $2^{10}{\cdot}3^2{\cdot}5^4{\cdot}7{\cdot}11$ and $2^7{\cdot}3{\cdot}5^6$. A class fusion calculation shows that these classes lift to the $\mathbf{M}$-classes with the same labels. The group $(2^2 \times {\rm F}_4(2)) {:} 2$ has a unique class of elements of order $5$, with centraliser of order $2^7 {\cdot} 3^2 {\cdot} 5^2$. Comparing $3$-parts of centraliser orders, we see that elements of order $5$ in $(2^2 \times {\rm F}_4(2)) {:} 2 < \mathbf{B}$ lift to elements in the $\mathbf{M}$-class $5A$.
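The $3$-part comparison in this fusion argument is a one-line computation with the centraliser orders quoted above; a sketch:

```python
def p_part(n, p):
    """Largest power of the prime p dividing n."""
    pp = 1
    while n % p == 0:
        n //= p
        pp *= p
    return pp

# Centraliser orders quoted above:
c_5A_B = 2**10 * 3**2 * 5**4 * 7 * 11   # 5A in the Baby Monster
c_5B_B = 2**7  * 3    * 5**6            # 5B in the Baby Monster
c_5_F4 = 2**7  * 3**2 * 5**2            # order-5 class in (2^2 x F4(2)):2

print(p_part(c_5_F4, 3), p_part(c_5A_B, 3), p_part(c_5B_B, 3))   # 9 9 3
```

A centraliser taken in a subgroup of $\mathbf{B}$ embeds in the centraliser taken in $\mathbf{B}$, so the $3$-part $9$ is incompatible with the class $5B$, whose centraliser has $3$-part $3$.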
Therefore, $S < 2{\cdot}\mathbf{B}$ projects to a group isomorphic to ${\rm PSL}_2(16)$ in a maximal subgroup $2^{9+16}.{\rm PSp}_8(2)$ of $\mathbf{B}$. (Note that we have not yet checked how many conjugacy classes of subgroups of $\mathbf{M}$ our groups $S$ comprise; we intend to answer this question as part of an expanded version of this paper, cf. Section [1](#sec_intro){reference-type="ref" reference="sec_intro"}.)*

A. K. Asboei, S. S. S. Amiri, A. Iranmanesh. A new characterization of ${\rm PSL}(2,q)$ for some $q$. Ukrainian Math. J. 67 (1297--1305) 2016.

W. Bosma, J. Cannon, C. Playoust. The Magma algebra system I: The user language. J. Symb. Comp. 24 (235--265) 1997.

J. N. Bray. An improved method for generating the centralizer of an involution. Arch. Math. 74 (241--245) 2000.

T. Breuer. CTblLib --- a GAP package, Version 1.3.2, 2021.\
<https://www.math.rwth-aachen.de/homes/Thomas.Breuer/ctbllib/>

J. H. Conway, R. T. Curtis, S. P. Norton, R. A. Parker, R. A. Wilson. An atlas of finite groups. Oxford University Press, 1985. Atlas website: <http://atlas.math.rwth-aachen.de/Atlas/v3/>

H. S. M. Coxeter, W. O. J. Moser. Generators and relations for discrete groups. 3rd edition. Springer-Verlag, 1972.

H. Dietrich, M. Lee, T. Popiel. The maximal subgroups of the Monster. <https://arxiv.org/abs/2304.14646>

H. Dietrich, M. Lee, T. Popiel. Python code accompanying this paper.\
<https://github.com/melissa-maths/M>

The GAP Group. GAP -- Groups, Algorithms and Programming. <https://gap-system.org>

P. E. Holmes, R. A. Wilson. On subgroups of the Monster containing $A_5$'s. J. Algebra 319 (2653--2667) 2008.

D. F. Holt, B. Eick, E. A. O'Brien. Handbook of computational group theory. Discrete Mathematics and its Applications. Chapman & Hall/CRC, FL, 2005.

S. P. Norton. Anatomy of the Monster: I.
The atlas of finite groups: ten years on (Birmingham, 1995), 198--214, London Math. Soc. Lecture Note Ser., 249, Cambridge Univ. Press, Cambridge, 1998.

S. P. Norton, R. A. Wilson. Anatomy of the Monster: II. Proc. London Math. Soc. (3) 84 (581--598) 2002.

M. Seysen. The Python package `mmgroup`. <https://github.com/Martin-Seysen/mmgroup>\
Documentation: <https://mmgroup.readthedocs.io/en/latest/>

M. Seysen. A computer-friendly construction of the Monster. 2020. <https://arxiv.org/abs/2002.10921>

M. Seysen. A fast implementation of the Monster group. 2022. <https://arxiv.org/abs/2203.04223>

R. A. Wilson. The odd-local subgroups of the Monster. J. Austral. Math. Soc. 44 (1--16) 1988.

R. A. Wilson. Standard generators for sporadic simple groups. J. Algebra 184 (505--515) 1996.

R. A. Wilson. The Monster and black-box groups. 2013. <https://arxiv.org/abs/1310.5016>

R. A. Wilson. Maximal subgroups of sporadic groups. Finite simple groups: thirty years of the atlas and beyond. In Contemp. Math. 694 (57--72), Amer. Math. Soc., Providence, RI, 2017.

R. A. Wilson. The uniqueness of ${\rm PSU}_3(8)$ in the Monster. Bull. London Math. Soc. 49 (877--880) 2017.

[^1]: The second author acknowledges the support of an Australian Research Council Discovery Early Career Researcher Award (project number DE230100579). We thank Thomas Breuer, Frank Lübeck, Klaus Lux, Martin Seysen, and Rob Wilson for helpful discussions related to this paper and our earlier paper [@jems].
arXiv:2310.03317 [math.GR]: "Indeed, the Monster has no almost simple maximal subgroup with socle $\text{PSL}_2(16)$", by Heiko Dietrich, Melissa Lee, Tomasz Popiel.
---
abstract: |
  A CUR factorization is often utilized as a substitute for the singular value decomposition (SVD), especially when a concrete interpretation of the singular vectors is challenging. Moreover, if the original data matrix possesses properties like nonnegativity and sparsity, a CUR decomposition can better preserve them compared to the SVD. An essential aspect of this approach is the methodology used for selecting a subset of columns and rows from the original matrix. This study investigates the effectiveness of *one-round sampling* and iterative subselection techniques and introduces new iterative subselection strategies based on iterative SVDs. One provably appropriate technique for index selection in constructing a CUR factorization is the discrete empirical interpolation method (DEIM). Our contribution aims to improve the approximation quality of the DEIM scheme by iteratively invoking it in several rounds, in the sense that we select subsequent columns and rows based on the previously selected ones. That is, we modify $A$ after each iteration by removing the information that has been captured by the previously selected columns and rows. We also discuss how iterative procedures for computing a few singular vectors of large data matrices can be integrated with the new iterative subselection strategies. We present the results of numerical experiments, providing a comparison of one-round sampling and iterative subselection techniques, and demonstrating the improved approximation quality associated with using the latter.
author:
- Perfect Y. Gidisu
- Michiel E. Hochstenbach
title: A DEIM-CUR factorization with iterative SVDs
---

Keywords: DEIM, CUR decomposition, iterative SVD, low-rank approximation, adaptive sampling, iterative subselection

# Introduction {#sec:intro}

In data analysis applications and machine learning, the data set is often represented by a matrix $A\in \mathbb R^{m\times n}$, where $m$ is the number of data points and $n$ is the number of features or variables. Data matrices are usually large, and in many cases a key step in the analysis is to approximate the data using a few features and/or a few data points, so that one can easily manipulate, understand, and interpret the data. The optimal low-rank approximation, in both the spectral and Frobenius norms, is given by the truncated singular value decomposition (TSVD). Alternatively, this approximation problem can be reduced to identifying a good subset of columns and rows of the data matrix, which leads to a CUR factorization. A CUR decomposition is an alternative to the SVD, motivated by the fact that in several applications it can be challenging to give a concrete interpretation of the singular vectors. Additionally, the singular vectors fail to preserve properties such as nonnegativity and sparsity that the original data matrix may have. The theoretical computer science and numerical linear algebra communities have extensively studied column subset selection (index selection) algorithms. In the former, researchers have primarily focused on selecting good columns using randomized algorithms with provable error bounds [@optimalboutsidis; @deshpande2006matrix; @Drineas; @frieze2004fast; @guruswami2012optimal]. In the latter, researchers have primarily focused on deterministic algorithms, which exploit the SVD or rank-revealing QR factorizations that select columns by pivoting rules [@bischof1991structure; @chandrasekaran1994; @gu1996; @Voronin].
This paper focuses on the deterministic algorithms that exploit the SVD for index selection; in particular, the discrete empirical interpolation method (DEIM) [@Barrault; @Chaturantabut; @Drmac; @Sorensen].

Throughout the paper, we denote the spectral (2-norm) and the Frobenius norm by $\lVert\cdot\rVert$ and $\lVert\cdot\rVert_F$, respectively, and the infinity-norm by $\lVert\cdot\rVert_\infty$. The notations $C^+$ and $C^\top$ denote the Moore--Penrose pseudoinverse and the transpose of $C$, respectively. We use MATLAB notation to index vectors and matrices; thus, $A(:,\mathbf p)$ denotes the $k$ columns of $A$ whose corresponding indices are in the vector $\mathbf p\in \mathbb N_+^k$.

The DEIM algorithm [@Barrault; @Chaturantabut] is a technique used to select some important column and row indices from an $m \times n$ data matrix $A$ [@Sorensen], where without loss of generality $m \ge n$. We wish to select $k\ll n$ relevant row and column indices. The first step in the DEIM procedure is to compute a (reduced) SVD, $A = U \Sigma V^\top$. The associated cost when a direct method is used is $\mathcal O(mn^2)$, independent of the value of $k$. The paper also considers iterative methods to approximate the leading $k$ singular vectors. Having the left and right singular vectors contained in $U$ and $V$, respectively, we are interested in selecting distinct row indices $s_1, \dots, s_k$ from the set $\{1,\dots,m\}$ and column indices $p_1, \dots, p_k$ from the set $\{1,\dots,n\}$. The result of the method may also be represented by an $m \times k$ row *selection matrix* $S$ and an $n \times k$ column *selection matrix* $P$, whose columns are the standard basis vectors indexed by the selected indices.
The corresponding CUR factorization is of the form (in line with [@Str19], we will use the letter $M$ rather than $U$ for the middle matrix, because $U$ is used to denote the matrix containing left singular vectors) $$\label{eq: cur} \begin{array}{ccccc} A& \approx & C & M & R~, \\[-0.5mm] {\scriptstyle m\times n} & & {\scriptstyle m\times k} &{\scriptstyle k\times k} & {\scriptstyle k\times n}~ \end{array}$$ where the matrices $C=AP$ and $R=S^\top A$ (both of full rank) are a subset of the columns and rows of $A$, respectively, and the middle matrix $M$ of full rank is computed such that the decomposition is as close to $A$ as possible. Given $C$ and $R$, a standard procedure to determine $M$ (see, e.g., [@Sorensen Sec. 2], where also an alternative is presented) is by two consecutive least squares problems:

1. Solve the least squares problem $CX \approx A$ for $X \in \mathbb R^{k \times n}$, with solution $X = (C^\top C)^{-1} C^\top A$.
2. Solve the least squares problem $R^\top M^\top \approx X^\top$ for $M \in \mathbb R^{k \times k}$, with solution $M = X R^\top (RR^\top)^{-1}$.

Both steps are optimal with respect to the spectral and Frobenius norms. It is important to note that the solution in the spectral norm may not be unique. In many applications, one cares primarily about key columns or rows of $A$, rather than an explicit $A\approx CMR$ factorization; one may then use an interpolative decomposition of the form $A \approx C\widehat M$ or $A \approx \widehat M R$ [@Voronin]. We will now describe how the DEIM scheme selects the row indices; the selection of the column indices follows similarly.
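Before turning to the index selection, note that the two least squares steps above translate directly into NumPy. The following is a minimal sketch with illustrative index choices, using `lstsq` rather than forming the normal equations explicitly, which is numerically safer:

```python
import numpy as np

def cur_middle_matrix(A, C, R):
    """Given C = A P and R = S^T A of full rank, compute M so that
    C M R is the two-step least squares approximation of A."""
    # Step 1: solve C X ~ A in the least squares sense  =>  X = C^+ A
    X = np.linalg.lstsq(C, A, rcond=None)[0]           # k x n
    # Step 2: solve R^T M^T ~ X^T                       =>  M = X R^+
    M = np.linalg.lstsq(R.T, X.T, rcond=None)[0].T     # k x k
    return M

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 6))
p, s = [0, 2, 4], [1, 3, 5]            # example column / row indices
C, R = A[:, p], A[s, :]
M = cur_middle_matrix(A, C, R)
err = np.linalg.norm(A - C @ M @ R)    # approximation error in Frobenius norm
```

For full-rank $C$ and $R$, the result coincides with the closed form $M = C^+ A R^+$.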
The DEIM algorithm starts from the dominant left singular vector $\mathbf u_1$; the first index $s_1$ corresponds to the largest-magnitude entry of $\mathbf u_1$, that is, $|\mathbf u_1(s_1)|=\lVert\mathbf u_1\rVert_\infty$. Given that $I$ is the identity matrix, let $\mathbf s=[s_1]$, $S_1=I(:,s_1)$, $U_1=[\mathbf u_1]$, and define an oblique projection operator as $\mathbb{S}_1=\mathbf u_1(S_1^\top \mathbf u_1)^{-1}S_1^\top$. Suppose we have $j-1$ indices, so that $$\mathbf s_{j-1}=\begin{bmatrix}s_1\\\vdots\\s_{j-1}\end{bmatrix}, \quad S_{j-1}=I(:,\mathbf s_{j-1}), \quad U_{j-1}=[\mathbf u_1,\dots,\mathbf u_{j-1}],$$ and $$\mathbb{S}_{j-1}=U_{j-1}(S_{j-1}^\top U_{j-1})^{-1}S_{j-1}^\top.$$ Compute the residual vector $\mathbf r_j=\mathbf u_j-\mathbb{S}_{j-1}\mathbf u_j$, and select the next index $s_j$ such that $|\mathbf r_j(s_j)|=\lVert\mathbf r_j\rVert_\infty$. It is worth noting that applying the oblique projection operator $\mathbb{S}_{j-1}$ to $\mathbf u_j$ ensures that the entries of $\mathbf r_j$ at the positions $s_1,\dots,s_{j-1}$ are $0$, which guarantees nonrepeating indices. We also note that for the projector $\mathbb{S}_{j-1}$ to exist at the $j$th step, $S_{j-1}^\top U_{j-1}$ must be nonsingular; the linear independence of the columns of $U$ guarantees this. The DEIM index selection procedure is summarized in [\[algo:DEIM\]](#algo:DEIM){reference-type="ref" reference="algo:DEIM"} [^1]. A variant of the DEIM scheme proposed by [@Drmac], called QDEIM, involves computing a column-pivoted QR factorization of the transpose of the singular vector matrix; the selected column and row indices correspond to the indices of the first $k$ pivoted columns.

In this paper, we explore iterative subselection variants of the DEIM scheme.
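For reference, the one-round DEIM selection loop described above fits in a few lines of NumPy; this sketch is illustrative rather than the implementation used in our experiments:

```python
import numpy as np

def deim_indices(U):
    """DEIM index selection: one index per column of U (m x k, full rank)."""
    m, k = U.shape
    s = np.zeros(k, dtype=int)
    s[0] = np.argmax(np.abs(U[:, 0]))
    for j in range(1, k):
        Uj, sj = U[:, :j], s[:j]
        # residual r = u_j - U_j (S^T U_j)^{-1} S^T u_j; r is zero at s_1..s_{j-1}
        c = np.linalg.solve(Uj[sj, :], U[sj, j])
        r = U[:, j] - Uj @ c
        s[j] = np.argmax(np.abs(r))
    return s

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 10))
U, _, Vt = np.linalg.svd(A, full_matrices=False)
k = 4
rows = deim_indices(U[:, :k])       # row indices from left singular vectors
cols = deim_indices(Vt.T[:, :k])    # column indices from right singular vectors
```

Applying the function to $U(:,1{:}k)$ and $V(:,1{:}k)$ gives the row and column indices, respectively.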
Our contribution aims to improve the approximation quality of the DEIM scheme by iteratively invoking it, in the sense that we select subsequent columns and rows based on the previously selected ones. That is, we modify $A$ after each iteration by removing the information that has been captured by the previously selected columns and rows. We show this by adapting an existing iterative subselection technique for the DEIM index selection scheme and also propose a new strategy. We also discuss how iterative procedures for computing a few singular vectors of large data matrices can be used with our proposed methods. To the best of our knowledge, this is the first deterministic DEIM-type algorithm for large-scale data sets.

# Adaptive sampling for column subset selection problem {#sec:adapsam}

The iterative subselection strategies proposed in this work are related to the so-called *adaptive sampling* for column subset selection. In this section, we provide an overview of the adaptive sampling technique proposed in [@deshpande2006matrix]. The authors introduce a probabilistic method that iteratively selects a subset of columns in multiple rounds to construct a rank-$k$ approximation of a matrix. This approach has been demonstrated to provide improved accuracy and flexibility compared to *one-round sampling* methods, i.e., selection schemes that obtain all $k$ columns in a single round.

The adaptive sampling method of [@deshpande2006matrix], as summarized in [\[algo:AdapSamp1\]](#algo:AdapSamp1){reference-type="ref" reference="algo:AdapSamp1"}, involves alternating between two steps in each round: selecting a subset of columns and updating the probability distribution over all columns. The selection of columns in each round is influenced by the columns picked in previous rounds. Suppose we aim to select $k$ columns from the matrix $A$.
The process begins with an initial probability distribution and randomly selects $c<k$ columns to form a matrix $C$. The selection of columns is based on the norms of the columns, as described in [@deshpande2006matrix; @frieze2004fast]. Each column $j$ is chosen with a probability $\text{pr}_i^{(j)}={\lVert E_{i-1}^{(j)}\rVert^2}\,/\,{\lVert E_{i-1}\rVert_F^2}$ (as in [\[probline\]](#probline){reference-type="ref" reference="probline"} of [\[algo:AdapSamp1\]](#algo:AdapSamp1){reference-type="ref" reference="algo:AdapSamp1"}). After selecting $c$ columns, the probabilities are updated based on the chosen columns, and $c$ new columns are sampled and added to the matrix $C$. This iterative process continues until all $k$ columns are selected. Note that assigning zero probability to previously selected indices, as described in [\[probline1\]](#probline1){reference-type="ref" reference="probline1"}, represents a sampling without replacement strategy. The authors present a detailed explanation and theoretical analysis of this adaptive sampling technique, emphasizing its advantages and diverse applications [@deshpande2006matrix]. The algorithm improves the accuracy of a CUR decomposition compared to one-round sampling methods as demonstrated in [@deshpande2006matrix; @deshpande2006adaptive; @paul2015column; @Zhang]. Moreover, it allows for flexibility by accommodating different criteria for selecting column and row subsets based on specific problem requirements, which we will also discuss in [3](#sec:iterDEIM){reference-type="ref" reference="sec:iterDEIM"}. In their approach ([\[algo:AdapSamp1\]](#algo:AdapSamp1){reference-type="ref" reference="algo:AdapSamp1"}), a constant number of columns is selected per iteration, and the residual is computed as $E=A-CC^+A$. Nevertheless, one notable drawback of this adaptive sampling technique is its probabilistic nature, introducing randomness into the index selection process. 
This randomness may sacrifice predictability to some extent. Applying the algorithm multiple times to the same data may lead to inconsistent factorizations, yielding varying CUR decompositions across runs. This lack of predictability may hamper the reproducibility of the decomposition and may impact downstream tasks that rely on it, potentially leading to inconsistent outcomes. As a result, the practical use of a CUR decomposition obtained by this type of probabilistic algorithm may be compromised. To address this limitation, in the next section, we introduce a modified approach based on the DEIM scheme to implement the iterative subselection algorithm proposed in [@deshpande2006matrix], and we also propose a new iterative subselection strategy. By incorporating the DEIM scheme, our method provides a deterministic technique for the iterative subselection of column indices, in contrast to the original methods, which utilized a probabilistic approach [@optimalboutsidis; @deshpande2006matrix; @deshpande2006adaptive; @paul2015column; @Zhang]. This deterministic nature enhances the reproducibility of the CUR decomposition, making it more suitable for practical applications.

# Small-scale DEIM type CUR with iterative SVDs {#sec:iterDEIM}

In this section, we introduce new index-picking schemes for constructing a CUR decomposition. The standard DEIM scheme computes an SVD of $A$ once, after which the indices are picked one at a time in a "locally optimal" fashion. The new methods that we present here compute an SVD in every iteration. The algorithms adaptively select columns and rows of $A$ in several rounds. In each iteration, we modify $A$ by removing the information that has been captured by the previously selected columns and rows. The time complexities of the various proposed methods after $t$ rounds are summarized in [1](#ch6tab:overview){reference-type="ref" reference="ch6tab:overview"}.
This includes the computational time for computing an SVD and an updated $A$ (residual) matrix in every round.

| Method | Matrix | SVD | Time svd | $X$ or $M$ | Residual ($E$) |
|--------|--------|-----|----------|------------|----------------|
| **CADP-CX** ([\[algo:AdapSamp\]](#algo:AdapSamp){reference-type="ref" reference="algo:AdapSamp"}) | Small | Full | $\mathcal O(tmn^2)$ | $\mathcal O(tmnk)$ | $\mathcal O(tmnk)$ |
| **DADP-CX** ([\[algo:ExAdapDEIM\]](#algo:ExAdapDEIM){reference-type="ref" reference="algo:ExAdapDEIM"}) | | | | | |
| **DADP-CUR** ([\[algo:NewAdapDEIM\]](#algo:NewAdapDEIM){reference-type="ref" reference="algo:NewAdapDEIM"}) | | | | | |
| **Large: DADP-CX** ([\[algo:NewAdapDEIM2\]](#algo:NewAdapDEIM2){reference-type="ref" reference="algo:NewAdapDEIM2"}) | Large | Few | $\mathcal O(mn\cdot \text{nr}_\text{in})$ | $\mathcal O(mnk)$ | -- |

: Summary of the dominant work of the different algorithms after $t$ rounds. The time complexity columns exclude the computational cost of the DEIM scheme, as it is approximately the same for all algorithms. The complexity estimates and the other descriptions are similar for the first three algorithms.

It is important to highlight that when constructing a CUR factorization using [\[algo:AdapSamp,algo:ExAdapDEIM\]](#algo:AdapSamp,algo:ExAdapDEIM){reference-type="ref" reference="algo:AdapSamp,algo:ExAdapDEIM"}, one needs to perform almost twice as many computations of the SVD, $X$, and $E$ as required by [\[algo:NewAdapDEIM\]](#algo:NewAdapDEIM){reference-type="ref" reference="algo:NewAdapDEIM"}.
For the large-scale algorithm, estimating the precise time complexity of computing a low-rank SVD using an iterative method (refer to [\[algo:itersvd\]](#algo:itersvd){reference-type="ref" reference="algo:itersvd"}), as done in [\[algo:NewAdapDEIM2\]](#algo:NewAdapDEIM2){reference-type="ref" reference="algo:NewAdapDEIM2"}, can be challenging. The iterative approach involves a series of matrix-vector multiplications and orthogonalization steps. The computational cost of the Lanczos algorithm is typically dominated by matrix-vector multiplications, which is $\mathcal O(mn)$ times the total number of inner iterations (denoted by $\text{nr}_\text{in}$ in the table) needed. It is worth noting that we do not explicitly compute the residual in [\[algo:NewAdapDEIM2\]](#algo:NewAdapDEIM2){reference-type="ref" reference="algo:NewAdapDEIM2"} as done in [\[algo:AdapSamp,algo:ExAdapDEIM,algo:NewAdapDEIM\]](#algo:AdapSamp,algo:ExAdapDEIM,algo:NewAdapDEIM){reference-type="ref" reference="algo:AdapSamp,algo:ExAdapDEIM,algo:NewAdapDEIM"}. ## A DEIM based adaptive sampling for column subset selection We present a deterministic variant of the iterative subselection scheme discussed in [2](#sec:adapsam){reference-type="ref" reference="sec:adapsam"}. The newly proposed algorithm (**CADP-CX**) builds upon the original adaptive sampling algorithm [@deshpande2006matrix] by leveraging the benefits of the DEIM technique. The procedure is summarized in [\[algo:AdapSamp\]](#algo:AdapSamp){reference-type="ref" reference="algo:AdapSamp"}. The method involves iteratively selecting a constant number of column indices, denoted as $c$, from $A$ in multiple rounds. We start by computing the leading $c<k$ singular vectors of $A$. Next, we apply the DEIM scheme ([\[algo:DEIM\]](#algo:DEIM){reference-type="ref" reference="algo:DEIM"}) to these singular vectors, resulting in the first set of $c$ indices. 
We then update $A$ by computing the residual matrix $E$ using the interpolative decomposition (as described in [\[line:res\]](#line:res){reference-type="ref" reference="line:res"} of [\[algo:AdapSamp\]](#algo:AdapSamp){reference-type="ref" reference="algo:AdapSamp"}). Next, we compute the leading $c$ singular vectors of $E$ and apply the DEIM procedure again to obtain the next set of $c$ indices. This process is repeated until we have selected all $k$ required indices.

$\mathbf p=[\ ]; \quad E_0=A$

One consequence of utilizing the DEIM scheme here is that a column of $A$ that was selected in an earlier iteration may be selected again in a later one. Zeroing out the entries of the singular vectors that correspond to previously selected indices in [\[algo:AdapSamp\]](#algo:AdapSamp){reference-type="ref" reference="algo:AdapSamp"} is a possible sampling-without-replacement strategy that alleviates this problem: since the DEIM procedure selects the index corresponding to the entry of largest magnitude in a given vector, zeroing out these entries after they are chosen guarantees that the corresponding indices will not be selected again. With regard to memory and computational complexity, computing the residual in [\[line:res\]](#line:res){reference-type="ref" reference="line:res"} involves a full pass over the matrix and has a space complexity of $\mathcal O(mn)$. Given that we use the DEIM procedure and select $c$ columns per iteration, a full SVD requires $\mathcal O(mn^2)$, one run of the DEIM algorithm requires $\mathcal O(mc^2)$, and computing the residual in [\[line:res\]](#line:res){reference-type="ref" reference="line:res"} costs $\mathcal O(mnk)$. The overall time complexity after $t$ rounds is $\mathcal O(tmn^2 +tmc^2 +tmnk)$.
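The deterministic variant just described can be sketched compactly (assuming a dense $A$ small enough for full SVDs; `deim` and `cadp_cx` are our own illustrative names, not the paper's code):

```python
import numpy as np

def deim(V):
    """Standard DEIM index selection on the columns of V (n x c):
    greedily pick the row where the interpolation residual is largest."""
    p = [int(np.argmax(np.abs(V[:, 0])))]
    for j in range(1, V.shape[1]):
        coef = np.linalg.solve(V[p, :j], V[p, j])
        r = V[:, j] - V[:, :j] @ coef       # residual of interpolating column j
        p.append(int(np.argmax(np.abs(r))))
    return p

def cadp_cx(A, k, c):
    """CADP-CX sketch: per round, apply DEIM to the leading c right
    singular vectors of the residual E = A - C C^+ A."""
    p = []
    E = A.copy()
    while len(p) < k:
        cc = min(c, k - len(p))
        V = np.linalg.svd(E, full_matrices=False)[2][:cc].T.copy()
        V[p, :] = 0.0                       # zero old picks: no duplicates
        p += deim(V)
        C = A[:, p]
        E = A - C @ (np.linalg.pinv(C) @ A)
    return p
```

Zeroing the rows of $V$ at previously chosen indices implements the without-replacement strategy, since DEIM can never pick a row whose entries are all zero.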
## A new iterative subselection method

In [\[algo:ExAdapDEIM\]](#algo:ExAdapDEIM){reference-type="ref" reference="algo:ExAdapDEIM"}, we introduce a new iterative subselection strategy (**DADP-CX**) for a CUR factorization, which differs from the method employed in our proposed [\[algo:AdapSamp\]](#algo:AdapSamp){reference-type="ref" reference="algo:AdapSamp"} and from the adaptive sampling procedures in previous works [@optimalboutsidis; @deshpande2006matrix; @paul2015column; @Zhang]. In contrast to the previous strategy, which selects a fixed number of columns or rows in each iteration, the new strategy dynamically adjusts the selection schedule based on the decay of the singular values of the data (the relative magnitudes of the singular values). The motivation is that the decay of the singular values carries information about the significance of the different components in the data: by exploiting this decay pattern, the subselection can prioritize, in each round, the columns or rows that contribute the most to the data's overall structure and information. This adaptability allows for a more data-driven selection process.

Set $E=A$;$\mathbf p=[\ ]$;$\mathbf s=[\ ]$ Repeat steps 1--9 on $A^\top$ to find the row indices $\mathbf s$[\[repeat:steps\]]{#repeat:steps label="repeat:steps"} $M = A(:,\mathbf p) \ \backslash \ (A \ / \ A(\mathbf s,:))$

With a user-defined threshold $\delta\in (0,1]$, the small-scale version of our method begins by computing the leading singular vectors corresponding to the singular values greater than the threshold times the largest singular value of $A$, i.e., all $\sigma_i >\delta\cdot\sigma_1$.
Let $b$ denote the number of singular values satisfying this condition. Additionally, we introduce an extra parameter $\ell$ to establish an upper limit on the number of indices per round, taking into account the number of singular values that exceed the threshold. Consequently, we select the first $c=\min(b,\ell)$ column indices denoted by $\mathbf p_c$ by applying the DEIM scheme to the leading $c$ right singular vectors ($V_c$). Subsequently, we proceed to construct an interpolative decomposition using the chosen column indices and compute the residual matrix $E$ by subtracting this approximation from $A$, i.e., $E=A-CC^+A$. To determine the next set of indices, we repeat the aforementioned process on $E$. Thus, we compute the leading singular vectors of $E$ corresponding to singular values greater than $\delta$ times the largest singular value of $E$ and repeat the procedure mentioned earlier. As previously mentioned, since the DEIM scheme selects indices corresponding to entries with the largest magnitude, we set the entries in the right singular vectors that correspond to the previously selected indices to zero. This prevents the selection of duplicate indices. We continue this procedure until all $k$ indices are selected. We expect that the multiple passes through $A$ would lead to a reduced approximation error. It is worth mentioning that in [\[repeat:steps\]](#repeat:steps){reference-type="ref" reference="repeat:steps"}, there is no need to compute the initial SVD of $A^\top$ since we can store the initial left singular vectors from the SVD of $A$. In addition to the new selection strategy described in [\[algo:ExAdapDEIM\]](#algo:ExAdapDEIM){reference-type="ref" reference="algo:ExAdapDEIM"}, we also define an alternative way to compute the residual in the index selection process, which is presented in [\[algo:NewAdapDEIM\]](#algo:NewAdapDEIM){reference-type="ref" reference="algo:NewAdapDEIM"}. 
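The resulting selection schedule is simple to state in code (a sketch; `round_size` is our own name, and the singular values are assumed sorted nonincreasingly):

```python
import numpy as np

def round_size(sigma, delta, ell):
    """DADP-CX round size: the number b of singular values of the current
    residual exceeding delta * sigma_1, capped at the per-round limit ell.
    sigma is assumed sorted in nonincreasing order."""
    b = int(np.sum(sigma > delta * sigma[0]))
    return min(max(b, 1), ell)
```

For instance, for singular values $(10, 8, 5, 1)$ and $\delta = 0.7$, only two singular values exceed $7$, so two indices are selected in that round; a $\delta$ close to $1$ drives the round size toward one index per round.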
The newly proposed iterative subselection algorithms ([\[algo:AdapSamp,algo:ExAdapDEIM\]](#algo:AdapSamp,algo:ExAdapDEIM){reference-type="ref" reference="algo:AdapSamp,algo:ExAdapDEIM"}) and existing adaptive sampling procedures such as those outlined in [@optimalboutsidis; @deshpande2006matrix; @deshpande2006adaptive; @paul2015column; @Zhang] define the residual as the error incurred by projecting the matrix $A$ onto either the column space of $C$ or the row space of $R$, i.e., $E = A - CC^+A$ or $E = A - AR^+R$, respectively. In contrast, this new method (**DADP-CUR**) defines the residual as the error incurred by simultaneously projecting $A$ onto both the column space of $C$ and the row space of $R$. This means computing a CUR factorization at each step using only the selected columns and rows. Set $E=A$;$\mathbf p=[\ ]$;$\mathbf s=[\ ]$ By considering the simultaneous projection onto the column and row spaces, we aim to use a residual that provides a more accurate representation of the error in the CUR factorization. It takes into account the combined effect of selecting specific columns and rows on capturing the underlying structure and information in the data. This approach offers several potential advantages. It allows for a more comprehensive assessment of the error in the index selection process, considering the contributions from both the columns and rows. Furthermore, it ensures that the residual accurately reflects the approximation quality obtained by a CUR factorization using the selected columns and rows. Additionally, it has the potential to reduce computational costs compared to [\[algo:ExAdapDEIM\]](#algo:ExAdapDEIM){reference-type="ref" reference="algo:ExAdapDEIM"}, as the latter approach involves performing nearly twice the number of SVDs required by [\[algo:NewAdapDEIM\]](#algo:NewAdapDEIM){reference-type="ref" reference="algo:NewAdapDEIM"}. 
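For illustration, the two residual definitions can be contrasted as follows (a dense sketch with explicit pseudoinverses; the function names are ours):

```python
import numpy as np

def residual_cx(A, p):
    """Residual after projecting onto the column space of C = A[:, p]."""
    C = A[:, p]
    return A - C @ (np.linalg.pinv(C) @ A)          # E = A - C C^+ A

def residual_cur(A, p, s):
    """DADP-CUR residual: project simultaneously onto the column space of
    C = A[:, p] and the row space of R = A[s, :], i.e. E = A - C M R
    with M = C^+ A R^+."""
    C, R = A[:, p], A[s, :]
    M = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)
    return A - C @ M @ R
```

When the selected columns and rows span the whole matrix, both residuals vanish; in general the CUR residual reflects the combined effect of the column and row choices.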
Without showing specifics, it is worth mentioning that [\[algo:AdapSamp\]](#algo:AdapSamp){reference-type="ref" reference="algo:AdapSamp"} can be adapted to use the newly defined residual. Note that when $\delta=0$, both [\[algo:ExAdapDEIM,algo:NewAdapDEIM\]](#algo:ExAdapDEIM,algo:NewAdapDEIM){reference-type="ref" reference="algo:ExAdapDEIM,algo:NewAdapDEIM"} are equivalent to the DEIM-type CUR factorization. In terms of time complexity, suppose we need $t$ iterations in [\[algo:ExAdapDEIM,algo:NewAdapDEIM\]](#algo:ExAdapDEIM,algo:NewAdapDEIM){reference-type="ref" reference="algo:ExAdapDEIM,algo:NewAdapDEIM"} to select all $k$ columns and rows. The cost of computing $E$ is $\mathcal O(tmnk)$. The costs of an SVD and of one run of the DEIM scheme are $\mathcal O(mn^2)$ and $\mathcal O(nc^2)$, respectively, where $c$ is the maximum number of columns selected per iteration. Therefore, the overall cost of the algorithms is $\mathcal O(tmn^2 +t(m+n)c^2 +tmnk)$. However, constructing $C$ and $R$ using [\[algo:ExAdapDEIM\]](#algo:ExAdapDEIM){reference-type="ref" reference="algo:ExAdapDEIM"} requires two runs of the selection procedure, one for the columns and one for the rows. Thus, its cost is almost twice that of [\[algo:NewAdapDEIM\]](#algo:NewAdapDEIM){reference-type="ref" reference="algo:NewAdapDEIM"}. For small matrices, these iterative subselection techniques may be worthwhile, as the costs are modest and the quality of the approximations may increase. However, these schemes may be especially interesting for large matrices, for which an SVD may be too expensive, and iterative methods are used to compute the left and right singular vectors. We will study this situation in the next section.

# Large-scale DEIM type CUR with iterative SVDs {#sec:large-scale}

For large-scale matrices, taking an SVD every round in [\[algo:AdapSamp,algo:ExAdapDEIM,algo:NewAdapDEIM\]](#algo:AdapSamp,algo:ExAdapDEIM,algo:NewAdapDEIM){reference-type="ref" reference="algo:AdapSamp,algo:ExAdapDEIM,algo:NewAdapDEIM"} will usually be prohibitively expensive.
Indeed, even one (reduced) SVD will be too costly, which means that the standard DEIM-type CUR decomposition is generally not affordable. However, the proposed algorithm is suitable for large-scale data, as approximating the largest singular vectors by iterative (Krylov) methods is usually a relatively easy task. Additionally, here we do not explicitly compute the residual matrix as done in the previously proposed algorithms; this is done implicitly in the computation of the approximate singular vectors. Furthermore, instead of computing the full SVD as we do in [\[algo:AdapSamp,algo:ExAdapDEIM,algo:NewAdapDEIM\]](#algo:AdapSamp,algo:ExAdapDEIM,algo:NewAdapDEIM){reference-type="ref" reference="algo:AdapSamp,algo:ExAdapDEIM,algo:NewAdapDEIM"}, we now carry out:

1: Approximate $\widehat U$ and $\widehat V$ of $E$.

This can be carried out efficiently by an implicitly restarted version of Lanczos bidiagonalization (see, e.g., [@baglama2005augmented]). The idea is as follows. Let $k$ and ${\widehat k}$, with $k < {\widehat k}$, be the minimal and maximal dimensions of the subspaces. We first carry out ${\widehat k}$ steps of Lanczos bidiagonalization, summarized by the matrix equations $$E\widehat V_{\widehat k} = \widehat U_{\widehat k} B_{\widehat k}, \quad E^\top \widehat U_{\widehat k}= \widehat V_{\widehat k}B_{\widehat k}^\top + \beta_{\widehat k}\widehat\mathbf v_{{\widehat k}+1} \mathbf e_{\widehat k}^\top,$$ where $B_{\widehat k}$ is bidiagonal. The singular values of $B_{\widehat k}$ are approximations to the singular values of $E$, and its singular vectors lead to approximations to the singular vectors of $E$.
With the SVD $B_{\widehat k}= W \widehat\Sigma Z^\top$, we get $$E(\widehat V_{\widehat k}Z) = (\widehat U_{\widehat k}W) \widehat\Sigma, \quad E^\top(\widehat U_{\widehat k}W) = (\widehat V_{\widehat k}Z) \widehat\Sigma + \beta_{\widehat k}\widehat\mathbf v_{{\widehat k}+1} (W^\top \mathbf e_{\widehat k})^\top.$$ For any upper triangular matrix $\widehat\Sigma$ an elegant implicit restart procedure is possible; here, $\widehat\Sigma$ is even diagonal. Order the singular values in the desired way, in this case nonincreasingly. Partition the transformed basis, redefining $\widehat U_k$ and $\widehat V_k$: $$\label{partition} (\widehat U_{\widehat k}W) =: [\widehat U_k \ \, \widehat U_{{\widehat k}-k}], \quad (\widehat V_{\widehat k}Z) =: [\widehat V_k \ \, \widehat V_{{\widehat k}-k}], \quad \widehat\Sigma = {\mbox{\scriptsize $\left[\!\! \begin{array}{cc} \widehat\Sigma_k \\ & \widehat\Sigma_{{\widehat k}-k} \end{array} \!\! \right]$}},$$ and redefine $B_k = \widehat\Sigma_k$, $\beta_{k+1} := \beta_{\widehat k}$, $\widehat\mathbf v_{k+1} := \widehat\mathbf v_{{\widehat k}+1}$, and $\mathbf f_k := W^\top \mathbf e_{\widehat k}$. We can now conveniently restart from the decomposition $$E\widehat V_k = \widehat U_k B_k, \quad E^\top \widehat U_k = \widehat V_k B_k^\top + \beta_{k+1} \widehat\mathbf v_{k+1} \mathbf f_k^\top.$$ The pair $(\widehat U_k, \widehat V_k)$ may be viewed as a pair of approximate invariant spaces with error $\|\mathbf f_k\|$. The spaces are expanded with Lanczos bidiagonalization to dimension ${\widehat k}$, after which the selection procedure is carried out again. This scheme is repeated until the quantity $\|\mathbf f_k\|$ is sufficiently small. We summarize the method in [\[algo:itersvd\]](#algo:itersvd){reference-type="ref" reference="algo:itersvd"}. Note that the MATLAB built-in function svds is a different implementation of a related technique.
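The basic bidiagonalization relations above can be reproduced with a plain Golub--Kahan--Lanczos recurrence (a dense sketch with full reorthogonalization and without the implicit restart):

```python
import numpy as np

def lanczos_bidiag(E, k, seed=0):
    """k steps of Golub-Kahan (Lanczos) bidiagonalization with full
    reorthogonalization, yielding E V = U B and
    E^T U = V B^T + beta * v_next * e_k^T, with B upper bidiagonal."""
    m, n = E.shape
    rng = np.random.default_rng(seed)
    U, V, B = np.zeros((m, k)), np.zeros((n, k)), np.zeros((k, k))
    v = rng.standard_normal(n)
    v /= np.linalg.norm(v)
    beta, u_prev = 0.0, np.zeros(m)
    for j in range(k):
        V[:, j] = v
        u = E @ v - beta * u_prev
        u -= U[:, :j] @ (U[:, :j].T @ u)       # reorthogonalize left basis
        alpha = np.linalg.norm(u)
        u /= alpha
        U[:, j], B[j, j] = u, alpha
        w = E.T @ u - alpha * v
        w -= V[:, :j + 1] @ (V[:, :j + 1].T @ w)  # reorthogonalize right basis
        beta = np.linalg.norm(w)
        v = w / beta
        if j + 1 < k:
            B[j, j + 1] = beta
        u_prev = u
    return U, B, V, beta, v                     # v plays the role of v_{k+1}
```

The singular values of the small bidiagonal matrix $B$ then approximate those of $E$, as in the restart procedure described above.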
Generate $E\widehat V_k = \widehat U_k B_k$,  $E^\top \widehat U_k = \widehat V_kB_k^\top + \beta_{k+1} \widehat\mathbf v_{k+1} \mathbf f_k^\top$ Set $E=A$;$\mathbf p=[\ ]$;\ Repeat steps 1--9 on $A^\top$ to find the row indices $\mathbf s$\ $M = A(:,\mathbf p) \ \backslash \ (A \ / \ A(\mathbf s,:))$

In [\[algo:NewAdapDEIM2\]](#algo:NewAdapDEIM2){reference-type="ref" reference="algo:NewAdapDEIM2"} we provide the large-scale version of [\[algo:ExAdapDEIM\]](#algo:ExAdapDEIM){reference-type="ref" reference="algo:ExAdapDEIM"} by employing [\[algo:itersvd\]](#algo:itersvd){reference-type="ref" reference="algo:itersvd"}. Without showing details, it is worth noting that this can also be adapted for [\[algo:AdapSamp,algo:NewAdapDEIM\]](#algo:AdapSamp,algo:NewAdapDEIM){reference-type="ref" reference="algo:AdapSamp,algo:NewAdapDEIM"}. It is important to note that the threshold parameter $\delta$ and the upper limit $\ell$ on the number of indices to be selected per round are incorporated within the implementation of [\[algo:itersvd\]](#algo:itersvd){reference-type="ref" reference="algo:itersvd"}. The efficiency of this method stems from the fact that, for the procedure in [\[algo:itersvd\]](#algo:itersvd){reference-type="ref" reference="algo:itersvd"}, we do not need the matrix $E$ in explicit form; only matrix-vector products (MVs) with $E$ and $E^\top$ are necessary (see [\[implicit_E\]](#implicit_E){reference-type="ref" reference="implicit_E"}). The routine of [\[algo:itersvd\]](#algo:itersvd){reference-type="ref" reference="algo:itersvd"} takes a number of MVs that depends on the distribution of the singular values and on the starting vector. The cost of computing the singular values and vectors in [\[linesvds\]](#linesvds){reference-type="ref" reference="linesvds"} depends on the total number of inner iterations of [\[algo:itersvd\]](#algo:itersvd){reference-type="ref" reference="algo:itersvd"}.
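The implicit treatment of $E$ can be sketched as follows: with $Q$ an orthonormal basis of the selected columns, $E = (I - QQ^\top)A$, and the iterative SVD only needs two matvec callbacks (a sketch; the names are ours):

```python
import numpy as np

def implicit_residual(A, Q):
    """Matvec callbacks for E = (I - Q Q^T) A without forming E.
    Q (m x k) has orthonormal columns spanning the selected columns C."""
    def E_mv(x):                     # E @ x: one MV with A plus O(mk) work
        y = A @ x
        return y - Q @ (Q.T @ y)
    def Et_mv(y):                    # E.T @ y, using (I - Q Q^T) symmetric
        return A.T @ (y - Q @ (Q.T @ y))
    return E_mv, Et_mv
```

Each callback costs one MV with $A$ (or $A^\top$) plus the $\mathcal O(mk)$ correction $Q(Q^\top\mathbf x)$, matching the cost discussion that follows.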
The number of iterations required by the Lanczos algorithm depends on the size of the matrix and on the distribution of its singular values. In a matrix-vector product $E\mathbf x$ for a vector $\mathbf x$, the component $A\mathbf x$ costs $\mathcal O(mn)$ for a full matrix. In the case of a sparse matrix with $d$ nonzero entries per row, the cost reduces to $\mathcal O(md)$. The aggregated cost of [\[lineincQR\]](#lineincQR){reference-type="ref" reference="lineincQR"} is only $\mathcal O(mk^2)$. In [\[implicit_E\]](#implicit_E){reference-type="ref" reference="implicit_E"}, the computation of $Q(Q^\top \mathbf x)$ requires $\mathcal O(mk)$ operations. The cost of solving the least squares problem $M = C^+AR^+$ would be $\mathcal O(mnk)$, which is relatively expensive. Nevertheless, it is important to highlight that this step is necessary as the final step of all CUR methods. As previously mentioned, it is not necessary to compute the initial SVD of $A^\top$ in this case, as we can simply retain the initial left singular vectors obtained from the SVD of $A$. The value of $\delta$ will typically depend on the data set. A value close to $1$ may be favorable for the approximation result but is more expensive, since [\[algo:itersvd\]](#algo:itersvd){reference-type="ref" reference="algo:itersvd"} then needs to be carried out approximately $k$ times. However, $\delta=1$ implies that we select one index per iteration, and thus we need just the first right and left singular vectors of $E$, corresponding to the largest singular value. We can reduce the computational cost by specifying an earlier convergence criterion for finding the approximate leading right and left singular vectors. We use Wedin's theorem for this. The theorem bounds the distance between subspaces; a proof can be found in, e.g., [@stewart1990matrix pp. 260--262]. **Theorem 1**.
*(Wedin's Theorem) Given $E\in\mathbb R^{m\times n}$, let $$[U_1\ U_2 \ U_3]^\top \ E\ [V_1 \ V_2]= \left[ \begin{array}{cc} \Sigma_1&0\\0&\Sigma_2\\0&0 \end{array} \right],$$ be the SVD of $E$ (where the singular values are not necessarily nonincreasing). The singular subspaces of interest are in the column spaces of $U_1$ and $V_1$. Let the inexact/approximate singular subspaces be in the column spaces of $\widehat U_1$ and $\widehat V_1$ in the decomposition $$[ \widehat U_1\ \widehat U_2 \ \widehat U_3]^\top \ \widehat E\ [\widehat V_1 \ \widehat V_2]= \left[ \begin{array}{cc} \widehat\Sigma_1&0\\0& \widehat\Sigma_2\\0&0 \end{array} \right].$$ Now let $\Phi$ be the matrix of canonical angles between $\mathop{\mathrm{Range}}(U_1)$ and $\mathop{\mathrm{Range}}(\widehat U_1)$, and $\Theta$ be the matrix of canonical angles between $\mathop{\mathrm{Range}}(V_1)$ and $\mathop{\mathrm{Range}}(\widehat V_1)$. Given the residuals $F_1=E\widehat V_1 -\widehat U_1\widehat\Sigma_1$, $F_2=E^\top \widehat U_1 -\widehat V_1\widehat\Sigma_1$, suppose that there is a number $\alpha>0$ such that $$\min |\sigma(\widehat\Sigma_1)-\sigma(\Sigma_2)| \ge \alpha \quad \text{and} \quad \sigma_{\min}(\widehat\Sigma_1)\ge \alpha.$$ Then $$\sqrt{\lVert\sin{\Phi}\rVert_F^2 + \lVert\sin{\Theta}\rVert_F^2}\le\frac{\sqrt{\lVert F_1\rVert_F^2+\lVert F_2\rVert_F^2}}{\alpha}.$$* [Theorem 1](#thm:wedins){reference-type="ref" reference="thm:wedins"} shows that the computed singular vectors extracted by the projection method are optimal up to the factor in the right-hand side of the above inequality. This implies that any change in the entries of the computed singular vectors is bounded by this factor. Note that $\Sigma_2$ is unknown. For our context, we use $\widehat\Sigma_2$ as an approximation to $\Sigma_2$. Since we are only concerned with approximating the first leading right singular vector $\widehat\mathbf v_1$, we approximate $\alpha\approx\widehat\sigma_1 - \widehat\sigma_2$. 
Let $m_1(\widehat\mathbf v_1)$ and $m_2(\widehat\mathbf v_1)$ denote the largest and second-largest entries in $\widehat\mathbf v_1$, respectively, and let $\mathbf{f}_2=E^\top \widehat\mathbf u_1-\widehat\sigma_1 \widehat\mathbf v_1$ be the residual vector (associated with residual matrix $F_2$). The above svds routine results in $\mathbf{f}_1=E\widehat\mathbf v_1 -\widehat\sigma_1 \widehat\mathbf u_1 =0$ (associated with residual matrix $F_1$). The DEIM algorithm selects the index corresponding to the largest element in the magnitude of a vector. Therefore, when $\delta=1$, one can set an early convergence criterion to find the first singular vector that corresponds to the largest singular value, using the following approximate bound: $$m_1(\widehat\mathbf v_1)-m_2(\widehat\mathbf v_1)\lesssim 2\,\frac{\lVert\mathbf{f}_2\rVert}{\widehat\sigma_1 - \widehat\sigma_2}.$$ # Error Analysis In this section, we provide a known theoretical error bound applicable to a general class of CUR factorizations. Thus, this bound also holds for our proposed methods. A detailed constructive proof is in [@Sorensen], but we provide the necessary details here for the reader's convenience. Let $P\in \mathbb R^{n\times k}$ and $S\in \mathbb R^{m\times k}$ be selection matrices with some columns of the identity indexed by the indices selected by employing the index selection techniques described in this paper. **Theorem 2**. *[@Sorensen Thm. 4.1] Given $A\in \mathbb R^{m \times n}$ and a target rank $k$, let $V\in \mathbb R^{n\times k}$ and $U\in \mathbb R^{m\times k}$ be the leading $k$ right and left singular vectors of $A$. 
Suppose $C=AP$ and $R=S^\top \!A$ are of full rank and that $V^\top \!P$ and $S^\top U$ are nonsingular. Then, with $M=C^+\!AR^+$, a rank-$k$ CUR decomposition constructed by the proposed techniques satisfies $$\lVert A-CMR\rVert\le(\eta_\mathbf s+ \eta_\mathbf p)\,\sigma_{k+1} \quad \text{with} \quad \eta_\mathbf p< \sqrt{\tfrac{nk}{3}}\,2^k, \quad \eta_\mathbf s< \sqrt{\tfrac{mk}{3}}\,2^k,$$ where $\eta_\mathbf p=\lVert(V^\top\!P)^{-1}\rVert$ and $\eta_\mathbf s=\lVert(S^\top U)^{-1}\rVert$.*

Let $\mathbb{P} = P(V^\top P)^{-1}V^\top$ and $\mathbb{S}=U(S^\top U)^{-1}S^\top$ be oblique projectors. Note that $V^\top \mathbb{P}=V^\top$ and $\mathbb{S}U=U$, implying that $V^\top (I-\mathbb{P})=0$ and $(I-\mathbb{S})U=0$. Using $M=C^+\!AR^+$, we have [@Mahoney; @Sorensen] $$\begin{aligned} \lVert A-CMR\rVert &=\lVert A-CC^+AR^+R\rVert=\lVert(I-CC^+)A +CC^+A(I-R^+R)\rVert\\ & \le \lVert(I-CC^+)A\rVert+\lVert CC^+\rVert\,\lVert A(I-R^+R)\rVert.\end{aligned}$$ Leveraging the fact that $CC^+$ is an orthogonal projector, so that $\lVert CC^+\rVert=1$, and using [@Sorensen Lemmas 4.1 and 4.2], $$\begin{aligned} \lVert(I-CC^+)A\rVert & \le \lVert A(I-\mathbb{P})\rVert =\lVert A(I-VV^\top)(I-\mathbb{P})\rVert\\ & \le \lVert(V^\top \!P)^{-1}\rVert\,\lVert A(I-VV^\top)\rVert, \\ \lVert A(I-R^+\!R)\rVert & \le \lVert(I-\mathbb{S})A\rVert =\lVert(I-\mathbb{S})(I-UU^\top)A\rVert\\ & \le \lVert(S^\top U)^{-1}\rVert\,\lVert(I-UU^\top)A\rVert,\end{aligned}$$ we have that $$\lVert A-CMR\rVert\le \lVert(V^\top \!P)^{-1}\rVert\,\lVert A(I-VV^\top)\rVert+\lVert(S^\top U)^{-1}\rVert\,\lVert(I-UU^\top)A\rVert.$$ Since $U$ and $V$ contain the leading $k$ left and right singular vectors, respectively, $\lVert(I-UU^\top)A\rVert=\lVert A(I-VV^\top)\rVert=\sigma_{k+1}$.
Hence $$\lVert A-CMR\rVert\le (\lVert(V^\top \!P)^{-1}\rVert +\lVert(S^\top U)^{-1}\rVert)\cdot\sigma_{k+1}.$$ ------------------------------------------------------------------------ \ # Experiments We conduct numerical experiments to evaluate the empirical performance of the DEIM scheme [@Sorensen], the QDEIM procedure [@Drmac], the MaxVol method [@Goreinov2010], and the iterative subselection techniques discussed in this paper. The following are the iterative subselection methods we evaluate: 1. **CADP-CX**:  refers to [\[algo:AdapSamp\]](#algo:AdapSamp){reference-type="ref" reference="algo:AdapSamp"}. 2. **DADP-CX**:  represents [\[algo:ExAdapDEIM\]](#algo:ExAdapDEIM){reference-type="ref" reference="algo:ExAdapDEIM"}. 3. **DADP-CUR**:  corresponds to [\[algo:NewAdapDEIM\]](#algo:NewAdapDEIM){reference-type="ref" reference="algo:NewAdapDEIM"}. 4. **CADP-CUR**: denotes the adapted version of [\[algo:AdapSamp\]](#algo:AdapSamp){reference-type="ref" reference="algo:AdapSamp"} with the residual defined as $E=A-CMR$. To assess the effectiveness of our algorithms, we test them on various data analysis tasks in different application domains, such as analyzing recommendation systems, categorizing text and retrieving information, and image compression. Our evaluation includes synthetic and real-world data matrices, both sparse and dense, with varying sizes ranging from small to large scale, which we summarize in [2](#tab:exps){reference-type="ref" reference="tab:exps"}. In the implementation, we perform the column-pivoted QR factorization and the reduced SVD using the MATLAB built-in functions qr and svd, respectively. For [\[algo:itersvd,algo:NewAdapDEIM2\]](#algo:itersvd,algo:NewAdapDEIM2){reference-type="ref" reference="algo:itersvd,algo:NewAdapDEIM2"}, we use our implementation of the Lanczos bidiagonalization method of [@baglama2005augmented] by incorporating the threshold parameter $\delta$ and upper limit $\ell$ on the number of singular vectors to be computed. 
Unless otherwise stated, in all the experiments we use as default the number of rounds $t=10$, the parameter $\delta=0.8$, and upper limit $\ell=k/10$.

| **Exp.** | **Domain** | **Matrix** | $m$ | $n$ |
|----------|------------|------------|-----|-----|
| 1 | Synthetic | Sparse | 100000 | 300 |
| 2 | Text categorization | Sparse | 139 | 15210 |
| 3 | Text categorization | Sparse | 8293 | 18933 |
| 4 | Image compression | Dense | 13500 | 5000 |
| 5 | Recommendation system | Dense | 14116 | 100 |
| 6 | Economic modeling | Sparse | 29610 | 29610 |
| 7 | Computational fluid dynamics problem | Sparse | 30412 | 30412 |
| 8 | Image compression | Dense | 50000 | 3072 |

: Various examples and dimensions considered.[\[tab:exps\]]{#tab:exps label="tab:exps"}

**Experiment 3**. *In our first experiment, we investigate how different choices of $\delta$ and the number of rounds $t$ affect the approximation accuracy of the various iterative subselection strategies. We use the relative approximation error ${\lVert A-CMR\rVert}\,/\,{\lVert A\rVert}$ as the evaluation metric. For this experiment, just as in [@Sorensen], we generate a sparse, nonnegative matrix $A \in \mathbb R^{m \times n}$, with $m=100000$ and $n=300$, of the form $$A=\sum_{j=1}^{10}\frac{2}{j}\, \mathbf x_j\ \mathbf y_j^\top + \sum_{j=11}^{300}\frac{1}{j}\, \mathbf x_j\ \mathbf y_j^\top,$$ where $\mathbf x_j \in \mathbb R^{m}$ and $\mathbf y_j \in \mathbb R^{n}$ are sparse vectors with random nonnegative entries (i.e., $\mathbf x_j={\sf sprand}(m,1,0.025)$ and $\mathbf y_j={\sf sprand}(n,1,0.025)$).*

*From [\[fig: 1\]](#fig: 1){reference-type="ref" reference="fig: 1"} we observe that increasing the number of rounds $t$ or $\delta$ does not necessarily lead to a monotonic decrease in the approximation errors. The result implies that one needs to carefully choose the parameter $\delta$ or the number of rounds to get the optimal advantage of using the iterative subselection strategies. 
For this experiment, we also observe that using the residual $E=A-CMR$ instead of $A-CC^+A$ for the iterative subselection yields better approximation errors in the delta strategy while it produces worse approximation errors in the constant number of columns strategy.* **Experiment 4**. *Our next experiment is to demonstrate that the iterative subselection techniques yield better approximation results than one-round sampling. We perform the experiment using four real data sets and report the relative approximation error ${\lVert A-CMR\rVert}\,/\,{\lVert A\rVert}$ of each algorithm on each data set.* *The first two data sets are relevant to text categorization and information retrieval applications. In such data analysis problems, a "bag of words" approach is commonly employed to represent documents. We opt for the Reuters-21578 text categorization collection, which comprises documents that were featured on Reuters' newswire in $1987$. This data set is extensively used as a benchmark in the text classification community, consisting of $21578$ documents categorized into $135$ categories. For our experiment, we use the preprocessed data set, which has $18933$ unique terms and $8293$ documents [@cai2005document]. We normalize the rows of the sparse matrix, which has dimensions $8293 \times 18933$, to have a unit length. The second data set, the Internet term document data, is from the Technion Repository of Text Categorization Datasets (TechTC) [@Gabrilovich]. We use the test set $26$, which consists of a collection of $139$ documents on two topics with $15210$ terms describing each document[^2]. As in [@Sorensen], the $139 \times 15210$ TechTC matrix rows are scaled to have a unit 2-norm.* *The third data set is the Gisette data. Gisette is a handwritten digit recognition problem. The problem is to separate the highly confusable digits '4' and '9'. The digits have been size-normalized and centered in a fixed-size image of dimension $28\times 28$. 
The resulting data set is of dimension $13500\times 5000$.* *The next data set pertains to the recommendation system analysis field, where the primary objective is to provide service or purchase suggestions to users. Collaborative filtering is a commonly used technique in recommendation systems, which involves recommending items to users that were previously liked by customers with comparable preferences. The Jester data set is frequently employed as a benchmark in recommendation system research [@goldberg2001eigentaste]. The data set comprises $73421$ users and their ratings for $100$ jokes. We limit our analysis to users who have provided ratings for all $100$ jokes, resulting in a $14116 \times 100$ matrix. We normalize the matrix by subtracting the mean of each column from all the entries in that column.* *The final two data sets are standard test matrices specifically designed for sparse matrix problems. These data matrices are sourced from the publicly available SuiteSparse Matrix Collection. The data set, referred to as g7jac100, is derived from the "Overlapping Generations Model" used to study the social security systems of the G7 nations. It is a sparse matrix with dimensions $29610\times 29610$ and contains $335972$ numerically nonzero entries with a low rank of $21971$. We utilize the invextr1-new matrix, associated with a computational fluid dynamics problem, which has dimensions $30412\times 30412$ and represents a rank-$27502$ structure. It contains $1793881$ nonzero entries.* *From [\[fig: 2\]](#fig: 2){reference-type="ref" reference="fig: 2"}, we can see that in all cases our iterative subselection-based CUR algorithm has a lower approximation error than all the *one-round* deterministic index selection algorithms considered. We also observe that the approximation errors of the QDEIM and the MaxVol techniques do not always decrease monotonically with increasing $k$ values. 
We choose the number of rounds for the CADP-CUR and CADP-CX algorithms to be $t=10$, and the parameter $\delta=0.8$ and upper limit $\ell=k/10$ for the DADP-CUR and DADP-CX algorithms. The results of all four proposed iterative subselection algorithms are comparable.* **Experiment 5**. *As stated in Section [4](#sec:large-scale){reference-type="ref" reference="sec:large-scale"}, when dealing with large-scale matrices, performing a full (even reduced) SVD in each iteration of algorithms [\[algo:AdapSamp\]](#algo:AdapSamp){reference-type="ref" reference="algo:AdapSamp"}, [\[algo:ExAdapDEIM\]](#algo:ExAdapDEIM){reference-type="ref" reference="algo:ExAdapDEIM"}, and [\[algo:NewAdapDEIM\]](#algo:NewAdapDEIM){reference-type="ref" reference="algo:NewAdapDEIM"} can often become excessively costly. To evaluate the efficiency of the proposed algorithms, which compute the full SVD, compared to their respective large-scale versions that utilize Lanczos bidiagonalization to find a limited number of singular vectors (refer to [\[algo:NewAdapDEIM2\]](#algo:NewAdapDEIM2){reference-type="ref" reference="algo:NewAdapDEIM2"}), we conduct experiments using a large-scale data set. Specifically, we utilize the cifar-10 data set, which consists of $60000$ color images sized $32\times32$ pixels, divided into ten different classes with $6000$ images per class. The data set is divided into $50000$ training set images and $10000$ test set images. We focus on the training data set containing $50000$ images. Each image in the data set has been reshaped into a 1D array of length $3072$, resulting in a dense matrix of size $50000\times3072$. For our analysis, we approximate this matrix using a rank-100 approximation.* *Table [3](#tab:smlascale){reference-type="ref" reference="tab:smlascale"} presents the results obtained from running the various algorithms. 
We observe that the large-scale variants (i.e., the various adaptations of [\[algo:NewAdapDEIM2\]](#algo:NewAdapDEIM2){reference-type="ref" reference="algo:NewAdapDEIM2"}), which utilize an iterative method for computing a few singular vectors, demonstrate higher efficiency while maintaining comparable approximation quality compared to the algorithms that compute the full SVD. Notably, both for the full SVD and the iterative SVD routines, the algorithms with the residual defined as $E=A-CMR$ exhibit greater efficiency than those with the residual computed as $E=A-CC^+A$. Therefore, our new approach to computing the residual for the iterative subselection proves to be more efficient than the existing method while maintaining similar approximation accuracy for this experiment.*

  ---------------- ------------------ -------------------- ------------------ ---------------------
  *Method*         *Full SVD*                              *Iterative SVD*    
                   *Relative error*   *Runtime (s)*        *Relative error*   *Runtime (s)*
  ***CADP-CUR***   *$0.028$*          *$3.24\cdot 10^2$*   *$0.028$*          *$1.61\cdot 10^2$*
  ***DADP-CUR***   *$0.027$*          *$5.06\cdot 10^2$*   *$0.027$*          *$1.66\cdot 10^2$*
  ***CADP-CX***    *$0.026$*          *$6.21\cdot 10^2$*   *$0.026$*          *$5.40\cdot 10^2$*
  ***DADP-CX***    *$0.026$*          *$1.15\cdot 10^3$*   *$0.026$*          *$5.78\cdot 10^2$*
  ---------------- ------------------ -------------------- ------------------ ---------------------

  : *Comparison of large-scale iterative subselection algorithms (iterative method for computing a few singular vectors) and small-scale iterative subselection algorithms (full SVD computation) on the cifar-10 data set approximation.*

# Conclusions

New approaches for selecting subsets of columns and rows using iterative subselection strategies have been presented. The first one is a DEIM adaptation of the so-called adaptive sampling [@deshpande2006matrix] for column subset selection. This procedure follows a fixed selection schedule, choosing a predetermined number of columns or rows in each iteration. 
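As a rough illustration of the fixed-schedule idea, the following NumPy sketch selects $k$ columns in $t$ rounds of $k/t$ columns each, scoring candidates by the dominant right singular vectors of the current residual $E=A-CC^+A$. This is a simplified stand-in, not the DEIM-based procedure of the paper: the scoring rule and the test matrix are illustrative choices only.

```python
import numpy as np

def iterative_column_select(A, k, t):
    """Fixed-schedule iterative column subselection (illustrative sketch).

    In each of t rounds, pick k/t new columns using the dominant right
    singular vectors of the residual E = A - C C^+ A, then update C.
    """
    m, n = A.shape
    per_round = k // t
    selected = []
    E = A.copy()
    for _ in range(t):
        # leverage-type scores from the top right singular vectors of E
        _, _, Vt = np.linalg.svd(E, full_matrices=False)
        scores = np.sum(Vt[:per_round] ** 2, axis=0)
        scores[selected] = -np.inf              # never re-select a column
        new = np.argsort(scores)[::-1][:per_round]
        selected.extend(new.tolist())
        C = A[:, selected]
        # residual: error of projecting A onto the column space of C
        E = A - C @ np.linalg.pinv(C) @ A
    return sorted(selected), E

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 12)) @ rng.standard_normal((12, 30))  # rank 12
idx, E = iterative_column_select(A, k=10, t=5)
rel_err = np.linalg.norm(E) / np.linalg.norm(A)
```

Replacing the SVD-based scores by DEIM, QDEIM, or MaxVol indices recovers schemes closer to those compared in the experiments above.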
In contrast, the second proposed iterative subselection strategy dynamically adjusts the selection schedule based on the decay of the singular values of the data. This approach aims to prioritize the selection of columns or rows that contribute the most to the overall structure and information of the data. By considering the significance of singular values and leveraging their decay pattern, the algorithm can adapt to the unique characteristics of the data, resulting in a more data-driven selection process. Additionally, we also introduce an alternative approach for computing the residual in the index selection process. The first two iterative subselection algorithms we propose, i.e., [\[algo:AdapSamp\]](#algo:AdapSamp){reference-type="ref" reference="algo:AdapSamp"} and [\[algo:ExAdapDEIM\]](#algo:ExAdapDEIM){reference-type="ref" reference="algo:ExAdapDEIM"}, as well as existing adaptive sampling procedures [@optimalboutsidis; @deshpande2006matrix; @deshpande2006adaptive; @paul2015column; @Zhang], define the residual as the error resulting from projecting the matrix $A$ onto either the column space of $C$ or the row space of $R$, i.e., $E = A - CC^+A$ or $E = A - AR^+R$, respectively. In contrast, our new method defines the residual as the error incurred by simultaneously projecting $A$ onto both the column space of $C$ and the row space of $R$. This entails computing a CUR factorization at each step using only the selected columns and rows. We have also discussed how iterative procedures for computing a few singular vectors of large data matrices can be used with the newly proposed strategies. 
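The two residual conventions discussed above can be compared directly on a toy example. In the NumPy sketch below the selected column and row indices are arbitrary placeholders, and the middle factor is taken as $M=C^+AR^+$, one standard choice; neither is prescribed by a particular algorithm of this paper.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 8)) @ rng.standard_normal((8, 40))  # low rank
cols = [0, 3, 7, 11, 19]    # hypothetical selected column indices
rows = [2, 5, 13, 21, 34]   # hypothetical selected row indices
C, R = A[:, cols], A[rows, :]

# residual used by classical adaptive sampling:
# project A onto the column space of C only
E_col = A - C @ np.linalg.pinv(C) @ A

# residual based on the CUR factorization itself, with the
# standard middle factor M = C^+ A R^+
M = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)
E_cur = A - C @ M @ R

nrm = np.linalg.norm(A)
print(np.linalg.norm(E_col) / nrm, np.linalg.norm(E_cur) / nrm)
```

Since $CMR = CC^+AR^+R$ projects $A$ onto both subspaces, $\lVert A-CMR\rVert_F \geq \lVert A-CC^+A\rVert_F$ always holds; the experiments above concern how the two choices trade off cost and selection quality, not this inequality.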
We have presented an adaptation of [\[algo:ExAdapDEIM\]](#algo:ExAdapDEIM){reference-type="ref" reference="algo:ExAdapDEIM"} for the large-scale case in [\[algo:NewAdapDEIM2\]](#algo:NewAdapDEIM2){reference-type="ref" reference="algo:NewAdapDEIM2"}, which can straightforwardly be adapted for [\[algo:AdapSamp\]](#algo:AdapSamp){reference-type="ref" reference="algo:AdapSamp"} and [\[algo:NewAdapDEIM\]](#algo:NewAdapDEIM){reference-type="ref" reference="algo:NewAdapDEIM"}. For each of the iterative subselection strategies proposed in this paper, we invoke the DEIM index selection method. However, we note that other deterministic index selection schemes such as the QDEIM technique [@Drmac] and the MaxVol procedure [@Goreinov2010] may be employed. We have demonstrated through empirical analysis that the proposed methods in this work can produce better approximation results than the traditional method of *one-round sampling* of all columns and rows. Overall, the proposed techniques may be useful for improving the accuracy of a CUR decomposition, but may also introduce additional complexities that need to be carefully addressed. The choice of whether to use the proposed iterative subselection methods or not may depend on the specific problem or application, as well as the trade-offs between accuracy, complexity, and computational resources. # Acknowledgements {#acknowledgements .unnumbered} This work has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 812912. # References {#references .unnumbered} J. Baglama, L. Reichel, Augmented implicitly restarted Lanczos bidiagonalization methods, SIAM J. Sci. Comput. 27 (2005) 19--42. M. Barrault, Y. Maday, N. C. Nguyen, A. T. Patera, An 'empirical interpolation' method: Application to efficient reduced-basis discretization of partial differential equations, Comptes Rendus Math. 339 (2004) 667--672. C. H. Bischof, P. C. 
Hansen, Structure-preserving and rank-revealing QR-factorizations, SIAM J. Sci. Comput. 12 (1991) 1332--1350. C. Boutsidis, D. P. Woodruff, Optimal CUR matrix decompositions, SIAM J. Comput. 46 (2017) 543--589. D. Cai, X. He, J. Han, Document clustering using locality preserving indexing, IEEE Trans. Knowl. Data Eng. 17 (2005) 1624--1637. S. Chandrasekaran, I. C. F. Ipsen, On rank-revealing factorisations, SIAM J. Matrix Anal. Appl. 15 (1994) 592--622. S. Chaturantabut, D. C. Sorensen, Nonlinear model reduction via discrete empirical interpolation, SIAM J. Sci. Comput. 32 (2010) 2737--2764. A. Deshpande, L. Rademacher, S. S. Vempala, G. Wang, Matrix approximation and projective clustering via volume sampling, Theory Comput. 2 (2006) 225--247. A. Deshpande, S. Vempala, Adaptive sampling and fast low-rank matrix approximation, in: Proceedings of the 10th RANDOM APPROX, 2006, pp. 292--303. P. Drineas, M. W. Mahoney, S. Muthukrishnan, Relative-error CUR matrix decompositions, SIAM J. Matrix Anal. Appl. 30 (2008) 844--881. Z. Drmac, S. Gugercin, A new selection operator for the discrete empirical interpolation method---Improved a priori error bound and extensions, SIAM J. Sci. Comput. 38 (2016) A631--A648. A. Frieze, R. Kannan, S. Vempala, Fast Monte-Carlo algorithms for finding low-rank approximations, J. ACM 51 (2004) 1025--1041. E. Gabrilovich, S. Markovitch, Text categorization with many redundant features: Using aggressive feature selection to make SVMs competitive with C4.5, in: The 21st International Conference on Machine Learning, 2004, p. 41. K. Goldberg, T. Roeder, D. Gupta, C. Perkins, Eigentaste: A constant time collaborative filtering algorithm, Inform. Retrieval 4 (2001) 133--151. S. A. Goreinov, I. V. Oseledets, D. V. Savostyanov, E. E. Tyrtyshnikov, N. L. Zamarashkin, How to find a good submatrix, in: Matrix Methods: Theory, Algorithms And Applications, World Scientific, Singapore, 2010, pp. 247--256. M. Gu, S. C. 
Eisenstat, Efficient algorithms for computing a strong rank-revealing QR factorization, SIAM J. Sci. Comput. 17 (1996) 848--869. V. Guruswami, A. K. Sinop, Optimal column-based low-rank matrix reconstruction, in: Proc. Annu. ACM-SIAM Symp., 2012, pp. 1207--1214. M. W. Mahoney, P. Drineas, CUR matrix decompositions for improved data analysis, Proc. Natl. Acad. Sci. USA 106 (2009) 697--702. S. Paul, M. Magdon-Ismail, P. Drineas, Column selection via adaptive sampling, Adv. Neural Inf. Process Syst. 28 (2015). D. C. Sorensen, M. Embree, A DEIM induced CUR factorization, SIAM J. Sci. Comput. 38 (2016) A1454--A1482. G. W. Stewart, J. G. Sun, Matrix Perturbation Theory, Academic Press, 1990. G. Strang, Linear Algebra and Learning from Data, SIAM, Philadelphia, 2019. S. Voronin, P. G. Martinsson, Efficient algorithms for CUR and interpolative matrix decompositions, Adv. Comput. Math. 43 (2017) 495--516. S. Wang, Z. Zhang, Improving CUR matrix decomposition and the Nyström approximation via adaptive sampling, J. Mach. Learn. Res. 14 (2013) 2729--2769. [^1]: Note that the backslash operator used in the algorithms is a Matlab-type notation for solving linear systems and least-squares problems. [^2]: *<http://gabrilovich.com/resources/data/techtc/>*
*Source: Perfect Y. Gidisu and Michiel E. Hochstenbach, "A DEIM-CUR factorization with iterative SVDs", arXiv:2310.00636 [math.NA], CC BY 4.0.*
# Introduction Reaction-diffusion equations have been used to study a wide range of phenomena within the natural sciences, and are a topic of great mathematical intrigue in their own right. They appear as models for the spread of populations [@fisher1937wave; @kolmogorov1937etude; @aronson1978multidimensional], phase transitions [@henkel2004non; @wang; @soner], combustion [@berestycki1985traveling; @berestycki1989multi], and chemical reactions [@allen1979microscopic; @bramson1991asymptotic]. In this work, we explore *fractional* reaction-diffusion equations that model populations that exhibit long range dispersal. Fractional reaction-diffusion equations are reaction-diffusion equations in which the diffusive term is replaced by the generator of a pure jump process (namely, a stable process). As a result, they present new challenges that have not yet been fully explored in a probabilistic context.\ In recent decades, fractional reaction-diffusion equations and reaction-diffusion equations with anomalous diffusion have surged in popularity. This is in part due to their relevance as physical models. From a purely mathematical viewpoint, they pose technical difficulties that classical parabolic equations like the Fisher-KPP equation do not. Much of the theoretical work on these topics, to date, has been led by the PDE community [@mellet; @gui2015traveling; @cabre; @cabre1; @cabre2; @Mellet2]. The effect of long range dispersal on various models has also been studied numerically [@Mancinelli; @del2009truncation; @hallatschek2014acceleration], and applied to data from epidemics [@brockmann2013hidden] and plant populations [@clark2003estimating; @marco2011comparing; @cannas2006long].\ In the context of mathematical biology, fractional reaction-diffusion equations arise naturally as models for populations that exhibit long range dispersal (i.e. the capacity for offspring to, on rare occasions, establish very far away from their parent). 
This behaviour is ubiquitous in nature and is a crucial survival mechanism for many organisms, particularly those insular species that must travel vast distances to populate new regions. Examples include the dispersal of plant seeds (which can travel by wind, water, and can be transported internally or externally by animals) [@Cain:2000], fungi [@Buschbom:2007], and small insects such as flies, moths, and bees (which have the secondary effect of facilitating the hybridisation of the flora they pollinate) [@barrett1996reproductive; @reatini2021influence]. The ability for certain organisms to populate islands through long range dispersal can have a profound impact on the biological composition of the land, increasing the genetic diversity of isolated regions [@weigelt2015global; @heaney2000dynamic; @reatini2021influence]. One example of this is the Hawaiian angiosperm flora, that cannot be attributed to a single mainland source, but instead have the genetic composition of flora from across circum-Pacific regions [@carlquist1967biota; @weigelt2015global]. Another recently observed instance of long range dispersal was that of a single finch that travelled over 100 km to an island in the Galápagos where it went on to produce hybrid offspring with the resident population [@lamichhaney2018rapid; @reatini2021influence].\ In this work, we use the fractional Allen--Cahn equation to model the motion of *hybrid zones* in populations exhibiting long range dispersal. Hybrid zones are narrow geographical regions where two genetically distinct species meet and reproduce, resulting in individuals of mixed ancestry (hybrid individuals). Hybrid zones have been observed extensively in nature. Examples include the European house mouse [@hunt1973biochemical] and North American warbler birds [@birds] (see [@barton1989adaptation] for an extensive list of examples). There are two primary mechanisms acting to maintain hybrid zones. 
The first is due to a change in environment where the two populations meet. The second, which will be of interest to this work, is when selection acts against hybrid individuals. In this setting, the hybrid zone is maintained for large times through a balance of negative selection with the dispersal of individuals. We will show that the long-time behaviour of hybrid zones maintained by selection in populations that exhibit long range dispersal converges to motion by mean curvature flow under a large range of possible spatial scalings. This new family of scalings, as well as our explicit description of the interface width and speed of convergence, distinguishes our work from that of [@imbert2009phasefield], where it was shown that the solution interface of the fractional Allen--Cahn equation with a diffusive scaling converges to mean curvature flow with respect to the scaling parameter. ## The Allen--Cahn equation and hybrid zones The one-dimensional Allen--Cahn equation is a reaction-diffusion equation that takes the form $$\begin{aligned} \label{allencahnequation} \partial_t {u^\varepsilon(t,x) }= \Delta u^\varepsilon(t,x) - \frac{1}{\varepsilon^{2}}f(u^\varepsilon(t,x)) \end{aligned}$$ for $\varepsilon>0$ fixed and all $t>0$, $x\in \mathbb{R}$. This equation can be obtained from the unscaled equation $\partial_t u(t,x) = \Delta u(t,x) - f(u(t,x))$ by defining $u^\varepsilon(t,x):= u(t/\varepsilon^2, x/\varepsilon).$ Here, $f\in C^2(\mathbb{R})$ is assumed to have precisely three zeros, $v_-, v_0$ and $v_+$, such that $$\begin{aligned} &f<0 \text{ on } (-\infty, v_-)\cup (v_0, v_+), \\ &f>0 \text{ on } (v_-, v_0)\cup (v_+, \infty), \\ &f'(v_-), f'(v_+)>0 \text{ and } f'(v_0)<0. 
\end{aligned}$$ In equation ([\[allencahnequation\]](#allencahnequation){reference-type="ref" reference="allencahnequation"}), the diffusive term $\Delta$ acts to smooth solutions (in the context of a biological model, this could be viewed as the 'mixing' of the population), while the nonlinearity $f$ drives solutions towards one of two states, $v_-$ or $v_+$ (in a population, these states might correspond to the dominance of a particular allele). These opposing effects, which are characteristic of reaction-diffusion equations, create a solution interface separating the two states. Over large spatial and temporal scales (corresponding to $\varepsilon\to 0$) this interface appears sharp and one can study its motion.\ The Allen--Cahn equation was originally introduced in [@allen1979microscopic] to model the motion of curved *antiphase boundaries* (APB) in crystalline solids. These are defective regions in the crystal lattice where atoms have the opposite configuration to that predicted by their lattice system, producing a positive excess of free energy in the system [@allen1979microscopic]. This is a non-equilibrium state of the lattice, resulting in the diffusive movement of the APB to minimise the total area of the boundary. The motion of the APB can then be modelled by the solution interface of the Allen--Cahn equation. In fact, in their original work [@allen1979microscopic], Allen and Cahn already note the relevance of long range dispersal to interfacial motion, mentioning that such motion sometimes requires long range dispersal and citing the growth of a (solid) precipitate from a supersaturated solution and the motion of interfaces with impurities as two examples. 
It was observed by Allen and Cahn in [@allen1979microscopic] that, in the case of local dispersion, the velocity of the interfacial motion described by equation ([\[allencahnequation\]](#allencahnequation){reference-type="ref" reference="allencahnequation"}) was proportional to the mean curvature of the boundary. Bronsard and Kohn [@bronsard1990; @bronsard1991] and De Mottoni and Schatzman [@mottoni1989; @mottoni1990] provided a rigorous proof of this under a variety of dimensional and regularity restrictions, and in 1992, Chen proved this result in all dimensions under relatively weak regularity assumptions [@chen].\ By viewing the Allen--Cahn equation as a model for a hybrid zone in a population with local dispersal, Chen's result has a biological interpretation. As explained in [@etheridge2017branching], the connection between hybrid zones and the Allen--Cahn equation can be motivated as follows. Consider a single genetic locus in a diploid population with allele types $a$ and $A$ that is in Hardy-Weinberg proportions. That is, if the proportion of $a$-alleles in the parental population is $w$, then the proportions of parents of type $aa$, $aA$ and $AA$ are $w^2, 2w(1-w)$ and $(1-w)^2$, respectively. To reflect our assumption that the hybrid zone is maintained by selection against heterozygotes, we assign to each of the allele pairs $aa, aA$ and $AA$ the relative fitnesses $1, 1-s$ and $1$, for $s>0$ a small selection parameter. These fitnesses refer to the relative proportion of germ cells produced by heterozygotes and homozygotes during reproduction. 
It follows that if $w$ is the proportion of type $a$ alleles before reproduction, then the proportion of type $a$ alleles after reproduction is $$w^* = \frac{w^2 + w(1-w)(1-{s})}{1 - 2{s}w(1-w)} = w+ {s}w(1-w)(2w-1)+\mathcal{O}(s^2).$$ Taking ${s}= \frac{1}{N}$ and measuring time in units of $N$ generations, the above calculation implies that, as $N\to \infty$, $\frac{dw}{dt} = w(1-w)(2w-1).$ Adding (local) dispersal and applying a diffusive scaling $t\mapsto \varepsilon^2 t$, $x\mapsto \varepsilon x$ for $t>0$ and $x\in \mathbb{R}^2$ gives $$\begin{aligned} \label{ACE} \frac{\partial w}{\partial t} = \Delta w + \frac{1}{\varepsilon^2}w(1-w)(2w-1). \end{aligned}$$ Chen's result tells us that solutions to the scaled Allen--Cahn equation ([\[ACE\]](#ACE){reference-type="ref" reference="ACE"}) (also known as Nagumo's equation, see [@nagumo1962active; @nagumoref]) converge as $\varepsilon\to 0$ to the indicator function of a set whose boundary evolves according to motion by mean curvature flow. Since the hybrid zone of a diploid population should correspond to the level set $w(t,x) = \tfrac{1}{2}$ for $w$ a solution of ([\[ACE\]](#ACE){reference-type="ref" reference="ACE"}), Chen's result tells us that hybrid zones will follow motion by mean curvature flow over large spatial and temporal scales. We note that, although $x\in \mathbb{R}^2$ is the biologically relevant case, Chen's result and our own hold in all spatial dimensions $\mathbbm{d}\geq 2$.\ In Etheridge et al. [@etheridge2017branching], the authors used purely probabilistic techniques to reprove Chen's result. This was accomplished by constructing a probabilistic dual to ([\[ACE\]](#ACE){reference-type="ref" reference="ACE"}) in terms of ternary branching Brownian motions. Additionally, their proof could be adapted to incorporate genetic drift, which refers to the randomness inherent in reproduction within finite populations. This was achieved via a so-called Spatial-$\Lambda$-Fleming-Viot process. 
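The first-order expansion of $w^*$ in the selection parameter $s$ above is a short exercise in Taylor expansion; a quick symbolic check (an illustrative SymPy sketch, not part of the original argument) confirms it:

```python
import sympy as sp

w, s = sp.symbols('w s')
# proportion of type-a alleles after one round of reproduction
w_star = (w**2 + w*(1 - w)*(1 - s)) / (1 - 2*s*w*(1 - w))

# expand w* to first order in s and compare with w + s*w*(1-w)*(2*w-1)
first_order = sp.series(w_star, s, 0, 2).removeO()
diff = sp.simplify(first_order - (w + s*w*(1 - w)*(2*w - 1)))
print(diff)  # 0
```

The same expansion is what produces the cubic nonlinearity $w(1-w)(2w-1)$ in equation ([\[ACE\]](#ACE){reference-type="ref" reference="ACE"}).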
It was shown in [@etheridge2017branching] that, provided the genetic drift is not too strong, the limiting mean curvature flow behaviour observed in the deterministic case is preserved by the scaled stochastic model. This stochastic result can be extended to the stable setting, and will appear in the PhD thesis of the first author. An interesting avenue for further research would be to find the critical strength of genetic drift that determines if motion by mean curvature flow is preserved. This has been accomplished by Etheridge et al. [@ian1] in the Brownian setting, but is more difficult to identify in the stable setting.\ To model the motion of hybrid zones in populations that exhibit long range dispersal, we consider the fractional Allen--Cahn equation $$\begin{aligned} \label{dodeedo} {\partial_t u(t,x) }= -(-\Delta)^{\tiny \frac{\alpha}{2}} u(t,x) +su(t,x)(1-u(t,x))(2u(t,x)-1),\end{aligned}$$ for all $t>0$ and $x\in \mathbb{R}$ where ${s}$ is a small selection parameter and $-(-\Delta)^{\frac{\alpha}{2}}$ is the generator of an $\alpha$-stable process. In view of Chen's result, it is natural to ask: will mean curvature flow be preserved in populations that exhibit long range dispersal? The answer to this should, of course, depend on the strength of the dispersal mechanism. This is determined by the index $\alpha \in (0,2]$. When $\alpha = 2$, the fractional Laplacian and ordinary Laplacian coincide, so in view of Chen's result [@chen], we expect mean curvature flow to be preserved for $\alpha$ sufficiently close to $2$. As $\alpha$ approaches $0$, the severity and frequency of large jumps increases, so for small $\alpha$ it seems unlikely that motion by mean curvature flow will be seen in the limit. This intuition is supported by results in the PDE literature, with the threshold between the two behaviours occurring at $\alpha=1$. 
For example, in [@caffarelli2010convergence], Caffarelli and Souganidis consider a threshold dynamics type algorithm to simulate a moving front governed by the fractional heat equation. The resulting interface was shown to converge to mean curvature flow for $\alpha\geq 1$ and 'weighted' mean curvature flow for $0<\alpha<1$. The results from the unpublished manuscript [@imbert2009phasefield] suggest that equation ([\[dodeedo\]](#dodeedo){reference-type="ref" reference="dodeedo"}), with the scaling $I(\varepsilon) = \varepsilon$, should converge as $\varepsilon\to 0$ to motion by mean curvature flow when $\alpha\in (1,2)$, however such a result is certainly out of reach with our techniques. As we will see, our result is stated for a large family of possible scalings of equation ([\[dodeedo\]](#dodeedo){reference-type="ref" reference="dodeedo"}), which does not include the diffusive scaling taken in [@imbert2009phasefield].\ We now briefly outline the structure of this paper. First, in Section [2](#secone){reference-type="ref" reference="secone"}, we state our main result, Theorem [\[maintheorem\]](#maintheorem){reference-type="ref" reference="maintheorem"}. In Section [3](#ch:2){reference-type="ref" reference="ch:2"}, we go on to construct a probabilistic representation of solutions to the fractional Allen--Cahn equation. We then prove a one-dimensional analogue of our main result in Section [\[sectionmajorityvotinginonedimension\]](#sectionmajorityvotinginonedimension){reference-type="ref" reference="sectionmajorityvotinginonedimension"}. This will enable us to prove Theorem [\[maintheorem\]](#maintheorem){reference-type="ref" reference="maintheorem"} in Section [\[ch:3\]](#ch:3){reference-type="ref" reference="ch:3"}. Supplementary calculations will be provided in the appendix. # Main Results {#secone} To begin, we recall the definition of the mean curvature at a point on a hypersurface $M\subset \mathbb{R}^\mathbbm{d}$. 
Let $n:M\to \mathbb{R}^\mathbbm{d}$ be the Gauss map, i.e. the map that assigns to each point $p\in M$ the outward facing unit vector $n(p)$ orthogonal to the tangent space of $M$ at $p$, denoted $T_pM$. By choosing an appropriate orthonormal basis of the tangent space $T_pM$ for all $p\in M$, we can define the shape operator $\mathbb{S}_p$ at $p$ (locally) as the negative Jacobian of $n$ at $p$. It can be shown that there exists an inner product on $T_pM$ (called the first fundamental form) and $\mathbb{S}_p$ can be diagonalised, $\mathbb{S}_p = \text{diag}(\kappa_1(p), ..., \kappa_{\mathbbm{d}-1}(p)).$ The *mean curvature at $p$* is then $$\kappa(p) := \frac{1}{\mathbbm{d}-1}\sum_{i=1}^{\mathbbm{d}-1}\kappa_i(p).$$ Fix $\mathscr{T}>0$. Let $S^{\mathbbm{d}-1}\subset \mathbb{R}^{\mathbbm{d}}$ be the unit sphere and $(\mathbf{\Gamma}_t)_{0\leq t < \mathscr{T}}$ be a family of smooth embeddings $S^{\mathbbm{d}-1} \to \mathbb{R}^\mathbbm{d}$. Let $\mathbf{n} = \mathbf{n}_t(\phi)$ be the unit inward normal vector to $\mathbf{\Gamma}_t$ at $\phi$ and let $\kappa = \kappa_t(\phi)$ be the mean curvature of $\mathbf{\Gamma}_t$ at $\phi$. Then $(\mathbf{\Gamma}_t)_{0\leq t < \mathscr{T}}$ is a *mean curvature flow* if, for all $t$ and $\phi$, $$\begin{aligned} \label{mcfeqn} \frac{\partial \mathbf{\Gamma}_t(\phi)}{\partial t} = \kappa_t(\phi)\mathbf{n}_t(\phi).\end{aligned}$$ It can be shown that mean curvature flow in dimension $\mathbbm{d}=2$ (also called curve shortening flow) terminates after a finite time $\mathscr{T}$, and by the theorems of Gage and Hamilton (1986) and Grayson (1987), any smoothly embedded closed curve shrinks to a point as $t\uparrow \mathscr{T}$. When $\mathbbm{d}>2$, this does not always hold as singularities may develop. Following Chen [@chen], we shall impose sufficient regularity conditions to ensure the existence of a finite time $\mathscr{T}$ before which the mean curvature flow exists and does not develop a singularity. 
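A simple sanity check of the flow equation above: with the normalised mean curvature $\kappa(p) = \frac{1}{\mathbbm{d}-1}\sum_i \kappa_i(p)$, a sphere of radius $R$ in any dimension $\mathbbm{d}\geq 2$ has $\kappa = 1/R$, so its radius obeys $\frac{dR}{dt} = -1/R$, giving $R(t) = \sqrt{R_0^2 - 2t}$ and extinction time $\mathscr{T} = R_0^2/2$ independently of the dimension. The following Python sketch (ours, purely illustrative) confirms the closed form numerically:

```python
import numpy as np

# Shrinking sphere under the normalised mean curvature flow:
# dR/dt = -1/R, with exact solution R(t) = sqrt(R0**2 - 2*t).
R0, dt = 2.0, 1e-5
R, t = R0, 0.0
while t < 1.0:              # integrate up to t = 1 < extinction time T = 2
    R -= dt / R             # explicit Euler step of dR/dt = -1/R
    t += dt
exact = np.sqrt(R0**2 - 2 * t)
print(abs(R - exact))       # small discretisation error
```

Note that with the unnormalised convention $\kappa = \sum_i \kappa_i$ the extinction time would instead depend on $\mathbbm{d}$; the check above assumes the averaged definition used in this paper.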
For an overview of existence results for mean curvature flow see [@etheridge2017branching].\ Let $\mathbbm{d}\geq 1$ and denote the standard Euclidean distance in $\mathbb{R}^\mathbbm{d}$ by $|\cdot|$. The fractional Laplacian is defined on functions $u:\mathbb{R}^\mathbbm{d}\to \mathbb{R}$ with sufficient decay by $$\begin{aligned} -(-\Delta)^{\tiny \frac{\alpha}{2}}u(x) := C_\alpha \lim_{r\to 0} \int_{\mathbb{R}^\mathbbm{d}\backslash {{B}}_r(x)} \frac{u(y)-u(x)}{| y-x |^{\mathbbm{d}+\alpha}}dy, \end{aligned}$$ where $C_\alpha:= \frac{\alpha 2^{\alpha-1} \Gamma\left(\frac{\alpha+\mathbbm{d}}{2}\right)}{\pi^{\frac{\mathbbm{d}}{2}}\Gamma\left(1-\frac{ \alpha}{2}\right)}$ and ${{B}}_r(x)\subset \mathbb{R}^\mathbbm{d}$ is the ball of radius $r$ about $x$. We will show that in dimension $\mathbbm{d}\geq 2$, for suitable initial conditions, the solution of the scaled fractional Allen--Cahn equation $$\begin{aligned} \label{mainequation22} \begin{cases} \partial_t u^\varepsilon = -\sigma_\alpha {I(\varepsilon)^{\alpha-2}}(-\Delta)^{\tiny \frac{\alpha}{2}} u^\varepsilon + {\varepsilon^{-2}}u^\varepsilon(1-u^\varepsilon)(2u^\varepsilon-1), \ \ \ t\geq 0, \ x\in \mathbb{R}^\mathbbm{d}\\ u^\varepsilon(0,x) = p(x), \ \ \ x\in \mathbb{R}^\mathbbm{d}\end{cases} \end{aligned}$$ converges as $\varepsilon \to 0$ to the indicator function of a set whose boundary evolves according to mean curvature flow. Here, $$\begin{aligned} \label{defn_sig_alpha} \sigma_\alpha:= \left(\tfrac{2-\alpha}{\alpha}\right)^{\frac{\alpha}{2}} {\Gamma\left(1-\tfrac{\alpha}{2}\right)} \end{aligned}$$ is a normalising constant that will simplify our later calculations, and $I$ can be any function satisfying the following. 
[\[assumptions2\]]{#assumptions2 label="assumptions2"} For some $\delta>0$, assume that $I: (0,\delta) \to (0,\infty)$ satisfies (A) $\lim \limits_{\varepsilon \rightarrow 0} I(\varepsilon) |\log \varepsilon|^k=0 \, \, \, \, \, \forall\, k \in \mathbb{N},$[\[assumptions2_A\]]{#assumptions2_A label="assumptions2_A"} (B) $\lim \limits_{\varepsilon \rightarrow 0} \dfrac{\varepsilon^2 }{I(\varepsilon)^2}|\log \varepsilon| = 0,$ [\[assumptions2_B\]]{#assumptions2_B label="assumptions2_B"} (C) $\lim \limits_{\varepsilon \rightarrow 0} \dfrac{I(\varepsilon)^{2 \alpha}}{\varepsilon^2} |\log \varepsilon|^\alpha = 0.$[\[assumptions2_C\]]{#assumptions2_C label="assumptions2_C"} The rate of convergence and width of the 'solution interface' in our convergence result will ultimately depend on this choice of $I$. Note that Assumption [\[assumptions2\]](#assumptions2){reference-type="ref" reference="assumptions2"} [\[assumptions2_B\]](#assumptions2_B){reference-type="ref" reference="assumptions2_B"} and Assumption [\[assumptions2\]](#assumptions2){reference-type="ref" reference="assumptions2"} [\[assumptions2_C\]](#assumptions2_C){reference-type="ref" reference="assumptions2_C"} are incompatible as soon as $\alpha \leq 1$. When $\alpha >1$, a standing example that fulfills all the conditions is $I(\varepsilon)=\varepsilon |\log \varepsilon|$.\ In Imbert and Souganidis' work [@imbert2009phasefield], they consider a class of reaction-diffusion equations with diffusive term given by a singular integral operator. In the case when this operator is the fractional Laplacian $(-\Delta)^{\frac{\alpha}{2}}$ with $\alpha\in (1,2)$, their scaled equation amounts to $$\begin{aligned} \label{imbertsequation} \partial_t u^\varepsilon = -\varepsilon^{\alpha-2}(-\Delta)^{\frac{\alpha}{2}} u^\varepsilon + \varepsilon^{-2}f(u^\varepsilon)\end{aligned}$$ for a bistable nonlinearity $f$. 
The authors show that solutions to ([\[imbertsequation\]](#imbertsequation){reference-type="ref" reference="imbertsequation"}) converge as $\varepsilon\to 0$ to the indicator function of a set whose boundary evolves under motion by mean curvature flow. This result is distinct from our own, since we consider a family of possible scalings $I(\varepsilon)$. In particular, our result does not include the case when $I(\varepsilon)=\varepsilon$, which is when equations ([\[mainequation22\]](#mainequation22){reference-type="ref" reference="mainequation22"}) and ([\[imbertsequation\]](#imbertsequation){reference-type="ref" reference="imbertsequation"}) coincide. Equation ([\[mainequation22\]](#mainequation22){reference-type="ref" reference="mainequation22"}) can be obtained from the unscaled fractional Allen--Cahn equation by scaling time and space by $$t\mapsto \varepsilon^2 t \text{ and } x\mapsto \left(\sigma_ \alpha {I(\varepsilon)^{\alpha-2}}\varepsilon^2\right)^{\frac{1}{\alpha}}x.$$ To see this, let $\iota_\varepsilon:= \left(\sigma_ \alpha {I(\varepsilon)^{\alpha-2}}\varepsilon^2\right)^{\frac{1}{\alpha}}$ and define $$u^\varepsilon(t,x):= u(\varepsilon^2 t, \iota_\varepsilon x).$$ Then, by definition of the fractional Laplacian, $$-(-\Delta)^{\frac{\alpha}{2}}u(\varepsilon^2 t, \iota_\varepsilon x) = -\iota_\varepsilon^\alpha (-\Delta)^{\frac{\alpha}{2}} u^\varepsilon(t,x),$$ so using that $u$ is a solution to the unscaled equation $\partial_t u = -(-\Delta)^{\frac{\alpha}{2}}u + f(u)$ where $f(u) = u(1-u)(2u-1)$, we have $$\begin{aligned} \partial_t u^\varepsilon(t,x) &= \varepsilon^{-2} \,\partial_t u(\varepsilon^2 t, \iota_\varepsilon x)\\ &= \varepsilon^{-2}\left( - (-\Delta)^{\frac{\alpha}{2}} u(\varepsilon^2t, \iota_\varepsilon x) + f(u(\varepsilon^2 t, \iota_\varepsilon x))\right)\\ &= -\varepsilon^{-2}\iota_\varepsilon^\alpha(-\Delta)^{\frac{\alpha}{2}} u^\varepsilon(t,x) + \varepsilon^{-2}f(u^\varepsilon(t,x)),\end{aligned}$$ which is equivalent to 
([\[mainequation22\]](#mainequation22){reference-type="ref" reference="mainequation22"}) by definition of $\iota_\varepsilon$.\ Using the definition of $\sigma_\alpha$ from ([\[defn_sig_alpha\]](#defn_sig_alpha){reference-type="ref" reference="defn_sig_alpha"}), as $\alpha\to 2^-$, $\left(\sigma_\alpha {I(\varepsilon)^{\alpha-2}}\varepsilon^2\right)^{\frac{1}{\alpha}}x \to \varepsilon x$ which is consistent with the diffusive scaling considered in the local setting of [@etheridge2017branching]. Note that the spatial scaling factor in the fractional setting, $\left(\sigma_\alpha I(\varepsilon)^{\alpha-2}\varepsilon^2\right)^{\frac{1}{\alpha}}$, converges to zero faster than the spatial scaling factor in the local setting, $\varepsilon$. This follows by Assumption [\[assumptions2\]](#assumptions2){reference-type="ref" reference="assumptions2"} [\[assumptions2_B\]](#assumptions2_B){reference-type="ref" reference="assumptions2_B"}, since $$\lim_{\varepsilon\to0} \frac{\left(\sigma_\alpha I(\varepsilon)^{\alpha-2}\varepsilon^2\right)^{\frac{1}{\alpha}}}{\varepsilon}=\lim_{\varepsilon\to0} \left(\frac{\varepsilon}{I(\varepsilon)}\right)^{\frac{2-\alpha}{\alpha}}=0.$$ This suggests that, with our method of proof, to observe a hybrid zone evolving by motion by mean curvature in the original spatial coordinates, one must 'zoom out' much more in the stable (non-local) setting than in the local setting. We discuss the origin of our chosen scaling more in Section [3.2](#choice_scaling_sec){reference-type="ref" reference="choice_scaling_sec"}. Our assumptions on the initial condition $p$ in ([\[mainequation22\]](#mainequation22){reference-type="ref" reference="mainequation22"}) mirror those in [@etheridge2017branching]. First, $p$ is assumed to take values in $[0,1]$. 
Set $$\begin{aligned} \label{gammadefinition}\mathbf{\Gamma}_0 := \left\{x\in \mathbb{R}^\mathbbm{d}: p(x) = \tfrac{1}{2}\right\}.\end{aligned}$$ Assume $\mathbf{\Gamma}_0$ is a smooth hypersurface and is the boundary of an open set homeomorphic to a sphere. Further suppose the following. [\[assumptions1\]]{#assumptions1 label="assumptions1"} Let $p: \mathbb{R}^\mathbbm{d}\to [0,1]$ and $\mathbf{\Gamma}_0$ be as in ([\[gammadefinition\]](#gammadefinition){reference-type="ref" reference="gammadefinition"}). Denote the shortest Euclidean distance between a point $x\in \mathbb{R}^\mathbbm{d}$ and the surface $\mathbf{\Gamma}_0$ by $\text{dist}(x, \mathbf{\Gamma}_0).$ (A) $\mathbf{\Gamma}_0$ is $C^a$ for some $a>3$. (B) For $x$ inside $\mathbf{\Gamma}_0$, $p(x)<\frac{1}{2}$, and for $x$ outside $\mathbf{\Gamma}_0$, $p(x)>\frac{1}{2}$.[\[assumptions1_B\]]{#assumptions1_B label="assumptions1_B"} (C) There exist $r, \gamma >0$ such that, for all $x\in \mathbb{R}^\mathbbm{d}$, $\left|p(x) - \frac{1}{2}\right| \geq \gamma(\text{dist}(x, \mathbf{\Gamma}_0)\wedge r).$[\[assumptions1_C\]]{#assumptions1_C label="assumptions1_C"} Assumption [\[assumptions1\]](#assumptions1){reference-type="ref" reference="assumptions1"} [\[assumptions1_B\]](#assumptions1_B){reference-type="ref" reference="assumptions1_B"} establishes a sign convention, and Assumption [\[assumptions1\]](#assumptions1){reference-type="ref" reference="assumptions1"} [\[assumptions1_C\]](#assumptions1_C){reference-type="ref" reference="assumptions1_C"} ensures $p(x)$ is bounded away from $\tfrac{1}{2}$ when $x$ is away from the interface. 
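As a concrete illustration (not taken from the paper; the profile and the constants $r$, $\gamma$ below are our own choices), the radially symmetric initial condition $p(x) = \tfrac{1}{2} + \tfrac{1}{2}\tanh(|x|-1)$ in $\mathbbm{d}=2$ has $\mathbf{\Gamma}_0$ equal to the unit circle and satisfies Assumptions [\[assumptions1\]](#assumptions1){reference-type="ref" reference="assumptions1"}; the following sketch checks this numerically.

```python
import numpy as np

# Illustrative initial condition: p(x) = 1/2 + (1/2) * tanh(|x| - 1),
# so that Gamma_0 = {p = 1/2} is the unit circle and p < 1/2 inside it.
def p(x):
    return 0.5 + 0.5 * np.tanh(np.linalg.norm(x, axis=-1) - 1.0)

# For this profile dist(x, Gamma_0) = ||x| - 1|, and Assumption (C) holds
# with r = 1 and gamma = tanh(1)/2, since tanh is concave on [0, infty).
r, gamma = 1.0, np.tanh(1.0) / 2

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=(10_000, 2))
dist = np.abs(np.linalg.norm(x, axis=-1) - 1.0)
assert np.all(np.abs(p(x) - 0.5) >= gamma * np.minimum(dist, r) - 1e-12)
```

Here Assumption (B) holds by construction, and the grid check above confirms Assumption (C) on a random sample of points.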
Together, these conditions ensure that the mean curvature flow started from $\mathbf{\Gamma}_0$, $(\mathbf{\Gamma}_t(\cdot))_{t\geq 0}$, exists up until some finite time $\mathscr{T}$.\ Following [@etheridge2017branching], we let $d(x,t)$ be the signed distance from $\mathbf{\Gamma}_{t}$ to $x$, which we choose to be negative inside $\mathbf{\Gamma}_{t}$ and positive outside $\mathbf{\Gamma}_{t}$. Then, as sets, $$\mathbf{\Gamma}_{t} = \left\{x \in \mathbb{R}^\mathbbm{d}: d(x,t) =0\right\}.$$ Lastly, define $F(\varepsilon)$ by $$\begin{aligned} \label{defnofF} \hspace{5 mm} F(\varepsilon) =\frac{I(\varepsilon)^{2}}{\varepsilon^\frac{2}{\alpha}}|\log\varepsilon| + I(\varepsilon)^{\alpha-1}. \end{aligned}$$ Note that, for any $\alpha \in (1,2)$ and function $I:(0,\delta)\to (0,\infty)$ satisfying Assumptions [\[assumptions2\]](#assumptions2){reference-type="ref" reference="assumptions2"}, $F(\varepsilon)\to 0$ as $\varepsilon\to 0$.\ The scaling function $I$ and parameter $\alpha$ will be fixed throughout this work. Just as we have done for $F$, when defining new functions, we will typically not make explicit their dependence on the choice of $I$ and choice of $\alpha$. Our main theorem is the following. [\[maintheorem\]]{#maintheorem label="maintheorem"} Let $\alpha \in (1,2)$, $\mathbbm{d}\geq 2$ and fix a function $I$ satisfying Assumptions [\[assumptions2\]](#assumptions2){reference-type="ref" reference="assumptions2"}. Suppose $u^\varepsilon$ solves equation ([\[mainequation22\]](#mainequation22){reference-type="ref" reference="mainequation22"}) with initial condition $p$ satisfying Assumptions [\[assumptions1\]](#assumptions1){reference-type="ref" reference="assumptions1"}. Let $\mathscr{T}$ and $d(x, t)$ be as above, $F$ be as in ([\[defnofF\]](#defnofF){reference-type="ref" reference="defnofF"}) and fix $T^*\in (0, \mathscr{T})$. 
Then there exist $\varepsilon_\mathbbm{d}(\alpha), a_\mathbbm{d}(\alpha), c_\mathbbm{d}(\alpha), m >0$ such that, for $\varepsilon \in (0, \varepsilon_\mathbbm{d})$ and $a_\mathbbm{d}\varepsilon^2 |\log\varepsilon|\leq t\leq T^*$, (1) for $x$ with $d(x, t)\geq c_\mathbbm{d}I(\varepsilon)|\log\varepsilon|$, we have $u^\varepsilon(t, x)\geq 1-m\frac{\varepsilon^2}{I(\varepsilon)^2}-mF(\varepsilon),$ (2) for $x$ with $d(x, t) \leq - c_\mathbbm{d}I(\varepsilon)|\log\varepsilon|$, we have $u^\varepsilon(t, x)\leq m\frac{\varepsilon^2}{I(\varepsilon)^2}+mF(\varepsilon)$. Of course, we could have stated Theorem [\[maintheorem\]](#maintheorem){reference-type="ref" reference="maintheorem"} in terms of an error function $F'(\varepsilon):= F(\varepsilon) + \frac{\varepsilon^2}{I(\varepsilon)^2}$. We choose to make the $\frac{\varepsilon^2}{I(\varepsilon)^2}$ term explicit since it will also appear in the one-dimensional analogue of Theorem [\[maintheorem\]](#maintheorem){reference-type="ref" reference="maintheorem"}.\ Throughout this work, we often discuss the *solution interface* and its corresponding width. The term solution interface refers to the spatial region outside of which the solution $u^\varepsilon(t,\cdot)$ is very close to zero or one. Explicitly, in Theorem [\[maintheorem\]](#maintheorem){reference-type="ref" reference="maintheorem"} the solution interface at time $t$ is the set of $x\in \mathbb{R}^\mathbbm{d}$ for which $|d(x,t)|\leq c_\mathbbm{d}I(\varepsilon)|\log\varepsilon|$, and we call $2c_\mathbbm{d}I(\varepsilon)|\log\varepsilon|$ the *interface width*. We will refer to the error bounds on $u^\varepsilon$ in Theorem [\[maintheorem\]](#maintheorem){reference-type="ref" reference="maintheorem"} as the *sharpness* of the interface. In the following example, we observe that neither the $F(\varepsilon)$ term nor the $\frac{\varepsilon^2}{I(\varepsilon)^2}$ term in the sharpness of the interface is dominant in general. 
(1) It is easy to verify that $I(\varepsilon) = \varepsilon|\log\varepsilon|$ satisfies Assumptions [\[assumptions2\]](#assumptions2){reference-type="ref" reference="assumptions2"}, so for this choice of $I$ the interface width is $\mathcal{O}\left(\varepsilon|\log\varepsilon|^{2}\right)$. One can also check that $F(\varepsilon) = \mathcal{O}(\varepsilon^{\alpha-1}|\log \varepsilon|^{\alpha-1})$, which is dominated by $\frac{\varepsilon^2}{I(\varepsilon)^2}$, so the sharpness of the interface in Theorem [\[maintheorem\]](#maintheorem){reference-type="ref" reference="maintheorem"} is $\mathcal{O}\left({\varepsilon^2}{I(\varepsilon)^{-2}}\right)=\mathcal{O}\left({|\log\varepsilon|^{-2}}\right)$. (2) We now provide an example in which $F(\varepsilon)$ dominates ${\varepsilon^2}{I(\varepsilon)^{-2}}$. Set $I(\varepsilon) = \varepsilon^{q}$ where $q = \tfrac{3\alpha +1}{2\alpha(1+\alpha)}.$ This choice of $I$ satisfies Assumptions [\[assumptions2\]](#assumptions2){reference-type="ref" reference="assumptions2"}, and the resulting interface width and sharpness given by Theorem [\[maintheorem\]](#maintheorem){reference-type="ref" reference="maintheorem"} are $\mathcal{O}\left(\varepsilon^{q}|\log\varepsilon|\right)$ and $\mathcal{O}\left(\varepsilon^{\frac{\alpha-1}{\alpha+1}}|\log\varepsilon|^\alpha\right)$, respectively. We often reference the Brownian analogue of Theorem [\[maintheorem\]](#maintheorem){reference-type="ref" reference="maintheorem"} proved using probabilistic techniques in [@etheridge2017branching]. This result, originally due to Chen [@chen], is given as follows (here we state the version found in [@etheridge2017branching Theorem 1.3]). 
[\[browniantheorem\]]{#browniantheorem label="browniantheorem"} Let $u^\varepsilon$ solve $$\begin{aligned} \label{BMeth} \begin{cases} \partial_t u^\varepsilon = \Delta u^\varepsilon + {\varepsilon^{-2}}u^\varepsilon(1-u^\varepsilon)(2u^\varepsilon-1), \ \ \ t\geq 0, \ x\in \mathbb{R}^\mathbbm{d}\\ u^\varepsilon(0,x) = p(x), \ \ \ x\in \mathbb{R}^\mathbbm{d}\end{cases} \end{aligned}$$ with initial condition $p$ satisfying Assumptions [\[assumptions1\]](#assumptions1){reference-type="ref" reference="assumptions1"}. Let $\mathscr{T}$ and $d(x,t)$ be as above. Fix $T^*\in (0,\mathscr{T})$ and $k\in \mathbb{N}$. Then there exist $\varepsilon_\mathbbm{d}(k), a_\mathbbm{d}(k), c_\mathbbm{d}(k)>0$ such that, for all $\varepsilon\in (0, \varepsilon_\mathbbm{d})$ and $t$ with $a_\mathbbm{d}\varepsilon^2|\log \varepsilon|\leq t \leq T^*,$ (1) for $x$ such that $d(x,t)\geq c_\mathbbm{d}\varepsilon|\log \varepsilon|,$ we have $u^\varepsilon(t,x) \geq 1-\varepsilon^k$, (2) for $x$ such that $d(x,t)\leq -c_\mathbbm{d}\varepsilon|\log\varepsilon|,$ we have $u^\varepsilon(t,x)\leq \varepsilon^k.$ The interface width $\mathcal{O}(\varepsilon |\log\varepsilon|)$ in Theorem [\[browniantheorem\]](#browniantheorem){reference-type="ref" reference="browniantheorem"} is not achievable in Theorem [\[maintheorem\]](#maintheorem){reference-type="ref" reference="maintheorem"}, but we may approximate it by choosing $I(\varepsilon)= \varepsilon^{\delta}$ where $\tfrac{1}{\alpha}<\delta <1$, and considering $\delta\to 1$. The interface width and sharpness in this case are $\mathcal{O}(\varepsilon^\delta |\log\varepsilon|)$ and $\mathcal{O}(\varepsilon^{2-2\delta})$, respectively. We note some key differences between our result (Theorem [\[maintheorem\]](#maintheorem){reference-type="ref" reference="maintheorem"}) and Theorem [\[browniantheorem\]](#browniantheorem){reference-type="ref" reference="browniantheorem"}. 
Firstly, in Theorem [\[browniantheorem\]](#browniantheorem){reference-type="ref" reference="browniantheorem"}, the width of the solution interface is $\mathcal{O}(\varepsilon|\log\varepsilon|)$, compared to the strictly larger width of $\mathcal{O}(I(\varepsilon)|\log\varepsilon|)$ in the fractional setting. Secondly, the sharpness of the interface in Theorem [\[browniantheorem\]](#browniantheorem){reference-type="ref" reference="browniantheorem"}, $\varepsilon^k$, is much better than the sharpness in Theorem [\[maintheorem\]](#maintheorem){reference-type="ref" reference="maintheorem"}. Both of these differences are consistent with the intuition gained from the (soon to be formalised) probabilistic representation of solutions to the ordinary and fractional Allen--Cahn equations in terms of Brownian and $\alpha$-stable motions, respectively. In the stable case, the rare large jumps of the $\alpha$-stable motion act to 'fatten' the interface compared to the Brownian case. While the effect of these large jumps is small enough that we still observe mean curvature flow (as $\alpha>1$), it does result in a much less sharp interface.\ As we have mentioned, to prove our result, we adapt techniques from Etheridge et al. [@etheridge2017branching]. However, our work differs significantly from theirs, since the method of proof in [@etheridge2017branching] cannot be applied to the stable setting in a straightforward way. To overcome this, we devote a significant portion of this paper to constructing a series of couplings that enable us to compare the probabilistic dual to equation ([\[mainequation22\]](#mainequation22){reference-type="ref" reference="mainequation22"}) to another quantity for which the proofs in [@etheridge2017branching] can more easily be adapted. 
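Before moving on, a quick numerical sanity check (illustrative only; the value $\alpha = 1.5$, the choice $k=5$, and the grid of $\varepsilon$ values are our own) shows the three quantities in Assumptions [\[assumptions2\]](#assumptions2){reference-type="ref" reference="assumptions2"} tending to zero for the standard example $I(\varepsilon) = \varepsilon|\log\varepsilon|$:

```python
import math

alpha = 1.5                              # assumed value in (1, 2); our choice
I = lambda e: e * abs(math.log(e))       # the standard example from the text

rows = []
for e in [1e-2, 1e-4, 1e-6, 1e-8]:
    L = abs(math.log(e))
    rows.append((
        I(e) * L**5,                          # Assumption (A) with k = 5
        (e / I(e))**2 * L,                    # Assumption (B)
        I(e)**(2 * alpha) / e**2 * L**alpha,  # Assumption (C)
    ))
    print(f"eps={e:.0e}  A={rows[-1][0]:.3e}  B={rows[-1][1]:.3e}  C={rows[-1][2]:.3e}")

# Each column decreases towards zero as eps decreases.
assert all(rows[i][j] > rows[i + 1][j] for i in range(3) for j in range(3))
```

Note that the quantity in Assumption (A) decays only logarithmically for this choice of $I$, so it shrinks slowly along this grid even though the limit is zero.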
# Majority voting in one dimension {#ch:2} ## A probabilistic dual In this section, we define a probabilistic dual to the scaled fractional Allen--Cahn equation ([\[mainequation22\]](#mainequation22){reference-type="ref" reference="mainequation22"}), which is very similar to the probabilistic dual to the ordinary Allen--Cahn equation developed in [@etheridge2017branching]. In our setting, we consider a ternary-branching $\alpha$-stable motion in which each individual, independently, follows an $\alpha$-stable motion, until the end of its exponentially distributed lifetime (with mean $\varepsilon^2$) at which point it splits into three particles. Let $Y(t)$ denote a $\mathbbm{d}$-dimensional $\alpha$-stable process and ${\boldsymbol{Y}}(t)$ denote a $\mathbbm{d}$-dimensional historical ternary branching $\alpha$-stable motion. That is, ${\boldsymbol{Y}}(t)$ traces out the space-time trees that record the position of all particles alive at time $s$ for $s\in [0,t]$. Throughout this work, we adopt the following convention. Recall that $\sigma_\alpha := \left(\tfrac{2-\alpha}{\alpha}\right)^{\frac{\alpha}{2}}\Gamma\left(1-\tfrac{\alpha}{2}\right).$ [\[assumption3\]]{#assumption3 label="assumption3"} All $\alpha$-stable motions have generator $-\sigma_\alpha I(\varepsilon )^{\alpha-2}(-\Delta)^{\tiny \frac{\alpha}{2}}$ for a fixed $\alpha \in (1,2).$ To record the genealogy of the process we employ the Ulam-Harris notation to label individuals by elements of $\mathcal{U} = \bigcup_{m=0}^\infty \{1, 2, 3\}^m.$ For example, $(3,1)$ represents the particle which is the first child of the third child of the initial ancestor $\emptyset$. Let $N(t)\subset \mathcal{U}$ denote the set of individuals alive at time $t$.\ We call ${\cal T}$ a *time-labelled ternary tree* if ${\cal T}$ is a finite subset of $\mathcal{U}$ with each internal vertex $v$ labelled with a time $t_v>0$, where $t_v$ is strictly greater than the label of the parent vertex of $v$. 
Ignoring the spatial position of individuals, we see that ${\boldsymbol{Y}}(t)$ traces out a time-labelled ternary tree which associates to each branch point the time of the branching event. Let $Y_i(t)$ be the $\alpha$-stable motion traced out by individual $i$ in ${\boldsymbol{Y}}(t)$. Denote the time-labelled ternary tree traced out by ${\boldsymbol{Y}}(t)$ by ${{\cal T}({\boldsymbol{Y}}(t))}.$ [\[definitionmajority\]]{#definitionmajority label="definitionmajority"} Fix $p:\mathbb{R}^\mathbbm{d}\to [0,1]$ and define the *majority voting procedure* on ${{\cal T}({\boldsymbol{Y}}(t))}$ as follows. (1) each leaf $i$ of ${{\cal T}({\boldsymbol{Y}}(t))}$ independently votes $1$ with probability $p(Y_i(t))$ and otherwise votes $0$; (2) at each branching event in ${{\cal T}({\boldsymbol{Y}}(t))}$, the vote of the parent particle $j$ is given by the majority vote of its offspring $(j,1), (j,2), (j,3)$. This voting procedure runs inward from the leaves of ${{\cal T}({\boldsymbol{Y}}(t))}$ to the root $\emptyset$. Under this voting procedure, define $\mathbb{V}_p({\boldsymbol{Y}}(t))$ to be the vote associated to the root $\emptyset$ of the ternary branching stable tree. For $x\in \mathbb{R}^\mathbbm{d}$, we shall write $\mathbb{P}_x$ and $\mathbb{E}_x$ for the probability measure and expectation associated to the law of a stable motion starting at $x$. Write $\mathbb{P}_x^{\varepsilon}$ for the probability measure under which $({\boldsymbol{Y}}(t), t\geq 0)$ has the law of the historical process of a ternary branching $\alpha$-stable motion in $\mathbb{R}^\mathbbm{d}$ with branching rate $\varepsilon^{-2}$ started from a single particle at location $x$ at time $0$. We write $\mathbb{E}_x^\varepsilon$ for the corresponding expectation. 
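The effect of a single round of majority voting can be made explicit: if the three offspring vote $1$ independently with probability $p$, the parent votes $1$ with probability $g(p) = p^3 + 3p^2(1-p)$, and $g(p) - p = p(1-p)(2p-1)$ is exactly the nonlinearity in ([\[mainequation22\]](#mainequation22){reference-type="ref" reference="mainequation22"}). The following sketch (purely illustrative, not part of the formal development) iterates $g$ to show how repeated voting drives the vote probability away from the unstable fixed point $\tfrac{1}{2}$ towards $0$ or $1$:

```python
# One round of majority voting among three i.i.d. child votes, each equal
# to 1 with probability p: the parent votes 1 iff at least two children do.
def g(p: float) -> float:
    return p**3 + 3 * p**2 * (1 - p)

# g(p) - p recovers the Allen--Cahn nonlinearity p(1-p)(2p-1).
assert abs((g(0.3) - 0.3) - 0.3 * (1 - 0.3) * (2 * 0.3 - 1)) < 1e-12

# p = 1/2 is an unstable fixed point: iterating the vote over successive
# generations sends p < 1/2 towards 0 and p > 1/2 towards 1.
for p0 in (0.45, 0.5, 0.55):
    p = p0
    for _ in range(30):
        p = g(p)
    print(f"p0 = {p0}: after 30 generations of voting -> {p:.6f}")
```

This bistability of the voting map is the mechanism behind the sharp interfaces described in the introduction.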
We emphasise that the variable $\varepsilon$ used to define the speed of the stable process, $I(\varepsilon)^{\alpha-2}$, is the same variable $\varepsilon$ that defines the branching rate, $\varepsilon^{-2}.$ Then the root vote $\mathbb{V}_p({\boldsymbol{Y}}(t))$ provides us with a dual to equation ([\[mainequation22\]](#mainequation22){reference-type="ref" reference="mainequation22"}) in the following sense. [\[votedual\]]{#votedual label="votedual"} Let $p:\mathbb{R}^\mathbbm{d}\to [0,1]$. Then $$\begin{aligned} \label{milk} u^\varepsilon(t,x) := \mathbb{P}^\varepsilon_x[\mathbb{V}_p({\boldsymbol{Y}}(t))=1]\end{aligned}$$ is a solution to equation ([\[mainequation22\]](#mainequation22){reference-type="ref" reference="mainequation22"}) with initial condition $u^\varepsilon(0,x) = p(x)$. *Sketch of proof of Theorem [\[votedual\]](#votedual){reference-type="ref" reference="votedual"}.* This proof follows closely that of [@etheridge2017branching Theorem 2.2]. In this proof we drop the superscript $\varepsilon$ on $\mathbb{P}^\varepsilon_x, \mathbb{E}^\varepsilon_x$ and $u^\varepsilon$. Let $u$ be as in ([\[milk\]](#milk){reference-type="ref" reference="milk"}) and consider $u(t+\delta t, x)$. If $\tau$ is the time of the first branching event, we have $$\begin{aligned} u(t+\delta t, x) &= \mathbb{P}_x[\mathbb{V}_p({\boldsymbol{Y}}(t+\delta t))=1 \left \vert \right.\tau< \delta t]\, \mathbb{P}[\tau\leq \delta t]\nonumber \\ & \ \ + \mathbb{P}_x[\mathbb{V}_p({\boldsymbol{Y}}(t+\delta t))=1 \left \vert \right.\tau> \delta t]\, \left(1-\mathbb{P}[\tau\leq \delta t]\right).\label{week} \end{aligned}$$ We will consider each of the terms in ([\[week\]](#week){reference-type="ref" reference="week"}) separately. Let $V_1, V_2,$ and $V_3$ be the votes of the three offspring created at time $\tau$. 
Conditional on $\tau\leq \delta t$, the probability of a second branching event before time $\delta t$ is $\mathcal{O}(\delta t)$, so for $s\leq \delta t$ $$\mathbb{E}_x\left[V_1 \left \vert \right.(\tau, Y_\tau) = (s, y)\right] = \mathbb{E}_y\left[u(t, Y_{\delta t -s})\right] + \mathcal{O}(\delta t).$$ Assuming sufficient regularity of $u$ (which follows from that of the fractional heat semigroup [@bassregularity]), it follows that $$\mathbb{E}_x[V_1 \left \vert \right.\tau< \delta t] = u(t,x) + \mathcal{O}(\delta t).$$ Since the root vote of ${\cal T}({\boldsymbol{Y}}(t+\delta t))$ will equal one if and only if at most one of $V_1, V_2,$ and $V_3$ is zero, it follows by conditional independence of each $V_i$ given $(\tau, Y_\tau)$ that $$\begin{aligned} \mathbb{P}_x[\mathbb{V}_p({\boldsymbol{Y}}(t+\delta t))=1 \left \vert \right.\tau\leq \delta t] &= u(t,x)^3 +3u(t,x)^2(1-u(t,x)) + o(1)\\ &= u(t,x)(1-u(t,x))(2u(t,x)-1) + u(t,x) + o(1). \end{aligned}$$ For the second term in ([\[week\]](#week){reference-type="ref" reference="week"}) note that $$\mathbb{P}_x[\mathbb{V}_p({\boldsymbol{Y}}(t+\delta t))=1 \left \vert \right.\tau> \delta t] = \mathbb{E}_x[\mathbb{P}_{Y_{ \delta t}}(\mathbb{V}_p({\boldsymbol{Y}}( t))=1)] = \mathbb{E}_x[u(t, Y_{\delta t})]$$ by the Markov property of ${\boldsymbol{Y}}$ at time $\delta t$. Since $\mathbb{P}[\tau\leq \delta t] = \delta t \varepsilon^{-2} + \mathcal{O}(\delta t^2)$, putting the above estimates into ([\[week\]](#week){reference-type="ref" reference="week"}) gives $$\begin{aligned} & \lim_{\delta t \to 0} \frac{u(t+\delta t,x) -u(t,x)}{\delta t} \\ &=\lim_{\delta t \to 0}\frac{\mathbb{E}_x(u(t, Y_{\delta t}))-u(t,x)}{\delta t}+ {\varepsilon^{-2}}u(t,x)(1-u(t,x))(2u(t,x)-1) \\ &=- \sigma_\alpha I(\varepsilon)^{\alpha-2}(-\Delta)^{\tiny \frac{\alpha}{2}} u(t,x) +{\varepsilon^{-2}}u(t,x)(1-u(t,x))(2u(t,x)-1). 
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \qedhere\end{aligned}$$ ◻ With this in mind, we can restate Theorem [\[maintheorem\]](#maintheorem){reference-type="ref" reference="maintheorem"} as follows. [\[maintheorem2\]]{#maintheorem2 label="maintheorem2"} Fix a function $I$ satisfying Assumptions [\[assumptions2\]](#assumptions2){reference-type="ref" reference="assumptions2"}. Suppose the initial condition $p$ satisfies Assumptions [\[assumptions1\]](#assumptions1){reference-type="ref" reference="assumptions1"}. Let $\mathscr{T}$ and $d(x, t)$ be as in Section [2](#secone){reference-type="ref" reference="secone"}, $F$ be as in ([\[defnofF\]](#defnofF){reference-type="ref" reference="defnofF"}) and fix $T^*\in (0, \mathscr{T})$. Then there exist $\varepsilon_\mathbbm{d}(\alpha),$ $a_\mathbbm{d}(\alpha),$ $c_\mathbbm{d}(\alpha),$ $m>0$ such that, for $\varepsilon \in (0, \varepsilon_\mathbbm{d})$ and $a_\mathbbm{d}\varepsilon^2 |\log\varepsilon|\leq t\leq T^*$, (1) for $x$ with $d(x, t)\geq c_\mathbbm{d}I(\varepsilon)|\log\varepsilon|$, $\mathbb{P}_x^\varepsilon\left[\mathbb{V}_p({\boldsymbol{Y}}(t)) = 1\right]\geq 1- m\dfrac{\varepsilon^2}{I(\varepsilon)^2}-mF(\varepsilon),$ (2) for $x$ with $d(x, t) \leq - c_\mathbbm{d}I(\varepsilon)|\log\varepsilon|$, $\mathbb{P}_x^\varepsilon\left[\mathbb{V}_p({\boldsymbol{Y}}(t)) = 1\right]\leq m\dfrac{\varepsilon^2}{I(\varepsilon)^2}+mF(\varepsilon)$. For the sake of completeness, we mention the Brownian analogue of Theorem [\[votedual\]](#votedual){reference-type="ref" reference="votedual"} and Theorem [\[maintheorem2\]](#maintheorem2){reference-type="ref" reference="maintheorem2"} from [@etheridge2017branching]. There, the authors considered a historical ternary branching $\mathbbm{d}$-dimensional Brownian motion $(\boldsymbol{W}(t), t\geq 0)$ with branching rate $\varepsilon^{-2}$. Let $\mathbb{P}_x^\varepsilon$ and $\mathbb{V}_p$ be defined as above but for the process $(\boldsymbol{W}(t), t\geq 0)$. 
Then, by [@etheridge2017branching Theorem 2.2], given $p:\mathbb{R}^\mathbbm{d}\to [0,1],$ $$\begin{aligned} \label{prob_brown}u^\varepsilon(t,x):= \mathbb{P}^\varepsilon_x[\mathbb{V}_p(\boldsymbol{W}(t))=1]\end{aligned}$$ is a solution to equation ([\[BMeth\]](#BMeth){reference-type="ref" reference="BMeth"}), and Theorem [\[browniantheorem\]](#browniantheorem){reference-type="ref" reference="browniantheorem"} can be restated in terms of $\mathbb{P}_x^\varepsilon[\mathbb{V}_p(\boldsymbol{W}(t))=1]$. We will refer to the restatement of Theorem [\[browniantheorem\]](#browniantheorem){reference-type="ref" reference="browniantheorem"} in terms of $\mathbb{P}_x^\varepsilon[\mathbb{V}_p(\boldsymbol{W}(t))=1]$ as the 'probabilistic version of Theorem [\[browniantheorem\]](#browniantheorem){reference-type="ref" reference="browniantheorem"}'. The duality representation ([\[prob_brown\]](#prob_brown){reference-type="ref" reference="prob_brown"}) developed in Etheridge et al. [@etheridge2017branching] is similar to that of solutions to the Fisher--KPP equation in terms of binary branching Brownian motions developed by Skorohod and McKean [@skorokhod1964branching; @McKean]. The dual described in [@etheridge2017branching] was novel in that it generalised Skorohod and McKean's result to equations with an Allen--Cahn type non-linearity. It was later found in the Master's thesis of [@zach], and independently in [@an2022voting], that a semilinear heat equation can be expressed in this way if and only if the nonlinearity of the equation belongs to a certain very general family of polynomials. It will be convenient to distinguish between one-dimensional and multidimensional $\alpha$-stable processes. 
We adopt the convention that $X(t)$ will denote the one-dimensional $\alpha$-stable process, with the corresponding historical branching stable process denoted by $\boldsymbol{X}(t).$ When $\mathbbm{d}>1$, we denote the $\alpha$-stable process by $Y(t)$ and denote the corresponding historical branching stable process by ${\boldsymbol{Y}}(t)$. ## Remarks on the choice of scaling {#choice_scaling_sec} Now that we have described the probabilistic dual to the fractional Allen--Cahn equation, we can explain the origin of the scaling taken in equation ([\[mainequation22\]](#mainequation22){reference-type="ref" reference="mainequation22"}). As we will see, this choice of scaling is intimately linked to our strategy of proof for Theorem [\[maintheorem2\]](#maintheorem2){reference-type="ref" reference="maintheorem2"}. To explain this, we will need to consider stable processes run at varying speeds. For this reason, in this section (and this section only) we do *not* adopt Assumption [\[assumption3\]](#assumption3){reference-type="ref" reference="assumption3"}, so the speeds of all stable processes, stable subordinators, and historical stable trees will be made explicit.\ Let $(Y_t^\varepsilon)_{t\geq 0}$ be a $\mathbbm{d}$-dimensional $\varepsilon$-truncated $\alpha$-stable process with $\alpha\in (1,2)$, i.e. $Y_t^\varepsilon$ is a Lévy process with Lévy measure given (up to a multiplicative constant) by $$\nu(dx) = |x|^{-\alpha - \mathbbm{d}} \mathbbm{1}_{|x|<\varepsilon}\,dx.$$ By [@cohen2007gaussian Proposition 3.2], for some $c_\alpha>0$, $$\begin{aligned} \label{driftwood}c_\alpha \varepsilon^{\frac{\alpha}{2}-1}Y^\varepsilon_t \xrightarrow[]{w} W_t \ \text{ as } \varepsilon\to 0\end{aligned}$$ where $(W_t)_{t\geq 0}$ is a standard $\mathbbm{d}$-dimensional Brownian motion and $\xrightarrow[]{w}$ denotes weak convergence. 
It is straightforward to show using characteristic exponents and the Lévy-Khintchine formula that $$\begin{aligned} \label{scalingdiscussion} \varepsilon^{\frac{\alpha-2}{\alpha}}Y^{\varepsilon^{2/\alpha}}_t \stackrel{D}{=} Y^{\varepsilon}_{\varepsilon^{\alpha-2}t},\end{aligned}$$ where $(Y^{\varepsilon^{2/\alpha}}_t)_{t\geq 0}$ denotes the $\varepsilon^{2/\alpha}$-truncated $\alpha$-stable process. Therefore, by replacing $\varepsilon$ by $\varepsilon^{2/\alpha}$ in ([\[driftwood\]](#driftwood){reference-type="ref" reference="driftwood"}), and applying ([\[scalingdiscussion\]](#scalingdiscussion){reference-type="ref" reference="scalingdiscussion"}), we see that $$\begin{aligned} \label{epsilon_limit}c_\alpha Y^\varepsilon_{\varepsilon^{\alpha-2}t} \xrightarrow[]{w} W_t \ \text{ as } \varepsilon\to 0.\end{aligned}$$ More generally, one could replace $\varepsilon$ in ([\[epsilon_limit\]](#epsilon_limit){reference-type="ref" reference="epsilon_limit"}) by a function $I(\varepsilon)$, satisfying $I(\varepsilon)\to 0$ as $\varepsilon\to 0$. The $I(\varepsilon)$-truncated stable process $Y^{I(\varepsilon)}_{I(\varepsilon)^{\alpha-2}t}$ will approximate a Brownian motion, so it is reasonable to expect that the probabilistic version of Theorem [\[browniantheorem\]](#browniantheorem){reference-type="ref" reference="browniantheorem"} holds when the branching Brownian motion is replaced by a branching $I(\varepsilon)$-truncated stable process, run at speed $I(\varepsilon)^{\alpha-2}$. Therefore to prove Theorem [\[maintheorem2\]](#maintheorem2){reference-type="ref" reference="maintheorem2"}, it should suffice to show: 1. 
the probabilistic version of Theorem [\[browniantheorem\]](#browniantheorem){reference-type="ref" reference="browniantheorem"} holds when $\boldsymbol{W}(t)$ is replaced by a $\mathbbm{d}$-dimensional ternary branching $I(\varepsilon)$-truncated stable tree run at speed $I(\varepsilon)^{\alpha-2}$, denoted ${\boldsymbol{Y}}^{I(\varepsilon)}(I(\varepsilon)^{\alpha-2}t)$; 2. there exists a coupling of the root votes of ${\boldsymbol{Y}}(I(\varepsilon)^{\alpha-2}t)$ and ${\boldsymbol{Y}}^{I(\varepsilon)}(I(\varepsilon)^{\alpha-2}t)$ in such a way that Step 1 implies Theorem [\[maintheorem2\]](#maintheorem2){reference-type="ref" reference="maintheorem2"}. The purpose of this two-step approach is that, by using a truncated stable process (which is not heavy tailed) we can more readily adapt proofs from the Brownian case of [@etheridge2017branching].\ The discussion above explains why the fractional Laplacian in equation ([\[mainequation22\]](#mainequation22){reference-type="ref" reference="mainequation22"}) is sped up by a factor of $I(\varepsilon)^{\alpha-2}$: this is the precise speed that the truncated stable process must run at in order for it to approximate a Brownian motion, allowing us to prove Step 1. However, we have not addressed our choice of $I(\varepsilon)$ as outlined by Assumptions [\[assumptions2\]](#assumptions2){reference-type="ref" reference="assumptions2"}. In particular, we require $\lim_{\varepsilon\to 0} \frac{\varepsilon}{I(\varepsilon)}=0$, so $I(\varepsilon)\neq \varepsilon$, which might be unexpected in view of the limit ([\[epsilon_limit\]](#epsilon_limit){reference-type="ref" reference="epsilon_limit"}), and the result of [@imbert2009phasefield] where the scaling $I(\varepsilon)=\varepsilon$ was used. 
This is because we must carefully balance two opposing effects: the truncation level must be small enough that the truncated motion is 'similar' to a Brownian motion (to prove Step 1), but it must be large enough so that the truncated motion and original stable motion are themselves 'similar' (to prove Step 2). In particular, the truncation level must be large enough so that the probability of an individual in the ternary stable tree ${\boldsymbol{Y}}$ making a jump larger than the truncation is sufficiently small, enabling ${\boldsymbol{Y}}$ and ${\boldsymbol{Y}}^{I(\varepsilon)}$ to be coupled.\ Although Step 1 holds when $I(\varepsilon)=\varepsilon$, Step 2 does not. More concretely, consider Assumption [\[assumptions2\]](#assumptions2){reference-type="ref" reference="assumptions2"} [\[assumptions2_B\]](#assumptions2_B){reference-type="ref" reference="assumptions2_B"}: $\lim_{\varepsilon \to 0}\frac{\varepsilon^2}{I(\varepsilon)^2}{|\log \varepsilon|} = 0.$ Recall that the ternary branching stable motion branches at rate $\varepsilon^{-2}$. Suppose $\tau\sim \mathit{Exp}(\varepsilon^{-2})$ is the time of one such branching event. 
Then, for $k\in \mathbb{N}$, conditional on $\tau \leq k\varepsilon^2|\log\varepsilon|$ (which happens with probability $1-\varepsilon^k$), an individual in the tree ${\boldsymbol{Y}}(I(\varepsilon)^{\alpha-2}t)$ is expected to make, at most, $$\begin{aligned} \label{numjump}k\frac{\varepsilon^2}{I(\varepsilon)^2}|\log\varepsilon|\end{aligned}$$ jumps larger than $I(\varepsilon)$ in its lifetime, because the arrival rate of jumps larger than $I(\varepsilon)$ made by a stable process run at speed $I(\varepsilon)^{\alpha-2}$ is $$\begin{aligned} \label{donthaveany}\mathcal{O}\left( {I(\varepsilon)^{\alpha-2}}\int_{I(\varepsilon)}^\infty {x^{{-\alpha}-1}}dx \right)= \mathcal{O}\left({I(\varepsilon)^{-2}}\right).\end{aligned}$$ This follows because the arrival rate of jumps larger than $I(\varepsilon)$ made by the $\mathbbm{d}$-dimensional process $Y_{I(\varepsilon)^{\alpha-2}t}$ is proportional to $I(\varepsilon)^{\alpha-2}\int_{x\in \mathbb{R}^\mathbbm{d}: |x|> I(\varepsilon)}\nu(dx)$ where $\nu(dx) = |x|^{-\alpha - \mathbbm{d}}dx$ is the Lévy measure of $(Y_t)_{t\geq 0}$. To prove Step 2, each individual stable motion in ${\boldsymbol{Y}}(I(\varepsilon)^{\alpha-2}t)$ should approximate (asymptotically in $\varepsilon$) an $I(\varepsilon)$-truncated process, so the quantity ([\[numjump\]](#numjump){reference-type="ref" reference="numjump"}) should converge to zero with $\varepsilon$, which is precisely Assumption [\[assumptions2\]](#assumptions2){reference-type="ref" reference="assumptions2"} [\[assumptions2_B\]](#assumptions2_B){reference-type="ref" reference="assumptions2_B"}. 
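To make the bound ([\[numjump\]](#numjump){reference-type="ref" reference="numjump"}) concrete, here is a minimal numerical sketch. It takes the arrival rate in ([\[donthaveany\]](#donthaveany){reference-type="ref" reference="donthaveany"}) to be exactly $I(\varepsilon)^{-2}/\alpha$ (the value of the one-dimensional integral, rather than just its order), an assumption made purely for illustration: the expected number of large jumps diverges for $I(\varepsilon)=\varepsilon$ but vanishes for, say, $I(\varepsilon)=\varepsilon|\log\varepsilon|$.

```python
import math

def large_jump_rate(alpha, I):
    # Rate of jumps larger than I for a stable process run at speed
    # I^(alpha-2):  I^(alpha-2) * int_I^inf x^(-alpha-1) dx = I^(-2)/alpha,
    # matching the O(I^(-2)) order in (donthaveany).
    return I**(alpha - 2) * I**(-alpha) / alpha

def expected_large_jumps(alpha, eps, I, k=1):
    # Expected number of such jumps over a lifetime of length
    # k * eps^2 * |log eps|, as in (numjump) (up to the constant 1/alpha).
    return k * eps**2 * abs(math.log(eps)) * large_jump_rate(alpha, I)

# With I(eps) = eps the expectation is of order |log eps| and diverges,
# whereas I(eps) = eps*|log eps| sends it to zero as eps -> 0.
for eps in (1e-2, 1e-4, 1e-8):
    assert expected_large_jumps(1.5, eps, eps * abs(math.log(eps))) < 1
    assert expected_large_jumps(1.5, eps, eps) > 1
```

This is exactly the tension behind Assumption [\[assumptions2\]](#assumptions2){reference-type="ref" reference="assumptions2"} [\[assumptions2_B\]](#assumptions2_B){reference-type="ref" reference="assumptions2_B"}: the truncation level must grow slightly faster than $\varepsilon$ for the expected number of large jumps per lifetime to vanish.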
The remaining assumptions on $I(\varepsilon)$ from Assumptions [\[assumptions2\]](#assumptions2){reference-type="ref" reference="assumptions2"} arise from several technical lemmas needed to prove Theorem [\[maintheorem2\]](#maintheorem2){reference-type="ref" reference="maintheorem2"}.\ To the best of our knowledge, Theorem [\[maintheorem2\]](#maintheorem2){reference-type="ref" reference="maintheorem2"} is the first result on the solution interface of equation ([\[mainequation22\]](#mainequation22){reference-type="ref" reference="mainequation22"}) with our chosen scaling. The work of [@imbert2009phasefield] suggests that our result should hold even when $I(\varepsilon)=\varepsilon.$ However, it does not seem likely that we can achieve the $I(\varepsilon)=\varepsilon$ scaling using our current method of proof and, conversely, it is unclear if the method of proof in [@imbert2009phasefield] could be adapted to handle our chosen scaling.\ The two-step proof described above motivates our choice of scaling. However, in reality, we opt to work with $W\left(R^{I(\varepsilon)^2}_{I(\varepsilon)^{\alpha-2}t}\right),$ a Brownian motion subordinated by an $I(\varepsilon)^2$-truncated $\frac{\alpha}{2}$-stable subordinator run at speed $I(\varepsilon)^{\alpha-2}$, instead of the $I(\varepsilon)$-truncated stable process $Y^{I(\varepsilon)}_{I(\varepsilon)^{\alpha-2}t}$. The central idea of our proof remains the same as before; however, we found the subordinated process easier to work with (more on this below). Moreover, the error made by approximating the stable process $Y_{I(\varepsilon)^{\alpha-2}t}$ by $W\left(R^{I(\varepsilon)^2}_{I(\varepsilon)^{\alpha-2}t}\right)$ is roughly equal to the error we would obtain using $Y^{I(\varepsilon)}_{I(\varepsilon)^{\alpha-2}t}$. Recall that $Y_{I(\varepsilon)^{\alpha-2}t}\stackrel{D}{=} W(R_{I(\varepsilon)^{\alpha-2}t})$, a Brownian motion subordinated by an $\frac{\alpha}{2}$-stable subordinator. 
The arrival rate of jumps larger than $I(\varepsilon)^2$ made by the $\frac{\alpha}{2}$-stable subordinator is $$\mathcal{O}\left({I(\varepsilon)^{\alpha-2}}\int_{I(\varepsilon)^2}^\infty {x^{-\frac{\alpha}{2}-1}}dx\right) = \mathcal{O}\left({I(\varepsilon)^{-2}}\right),$$ which is the same order as the arrival rate of jumps larger than $I(\varepsilon)$ made by a stable process from ([\[donthaveany\]](#donthaveany){reference-type="ref" reference="donthaveany"}).\ Like the truncated stable process, the subordinated Brownian motion is not heavy tailed, and its explicit description in terms of a Brownian motion allows us to more elegantly adapt the Brownian proof of [@etheridge2017branching]. In Step 1 of our approach, it is much more straightforward to compare this subordinated Brownian motion to a standard Brownian motion than it would have been to compare a truncated stable process to a Brownian motion. The error made by approximating the subordinated Brownian motion $W\left(R^{I(\varepsilon)^2}_{I(\varepsilon)^{\alpha-2}t}\right)$ by a standard Brownian motion can be quantified by considering the difference $$\left|R^{I(\varepsilon)^2}_{I(\varepsilon)^{\alpha-2}t}-t\right|.$$ We will see in Section [3.6](#acouplingargument){reference-type="ref" reference="acouplingargument"} that this difference is ultimately the source of the error term $F(\varepsilon)$ in our main result, Theorem [\[maintheorem\]](#maintheorem){reference-type="ref" reference="maintheorem"}.

# Majority voting in one dimension[\[sectionmajorityvotinginonedimension\]]{#sectionmajorityvotinginonedimension label="sectionmajorityvotinginonedimension"}

In this section, we prove a one-dimensional analogue of Theorem [\[maintheorem2\]](#maintheorem2){reference-type="ref" reference="maintheorem2"}, which will be used in the proof of Theorem [\[maintheorem2\]](#maintheorem2){reference-type="ref" reference="maintheorem2"} in Section [\[ch:3\]](#ch:3){reference-type="ref" reference="ch:3"}. 
This parallels the structure of proof for the Brownian result, Theorem [\[browniantheorem\]](#browniantheorem){reference-type="ref" reference="browniantheorem"}.\ For each $x\in \mathbb{R}$, let $\mathbb{P}_x$ be the law of a one-dimensional $\alpha$-stable process $X(t)$ satisfying Assumption [\[assumption3\]](#assumption3){reference-type="ref" reference="assumption3"} started at $x$, with corresponding expectation $\mathbb{E}_x$. Write $\mathbb{P}_x^\varepsilon$ for the probability measure under which $(\boldsymbol{X}(t), t\geq0)$ has the law of a one-dimensional historical ternary branching $\alpha$-stable motion, with branching rate $\varepsilon^{-2}$ started from a single particle at location $x$ at time $0$. Write $\mathbb{E}_x^\varepsilon$ for the corresponding expectation. In accordance with Assumption [\[assumption3\]](#assumption3){reference-type="ref" reference="assumption3"}, each particle in $(\boldsymbol{X}(t), t\geq 0)$ is assumed to run at speed $\sigma_\alpha I(\varepsilon)^{\alpha-2}$ for $\sigma_\alpha$ given in ([\[defn_sig_alpha\]](#defn_sig_alpha){reference-type="ref" reference="defn_sig_alpha"}).\ Define $$\begin{aligned} \label{xx} p_0(x)=\mathbbm{1}_{\{x \geq 0\}}.\end{aligned}$$ With this choice of initial condition, under majority voting (Definition [\[definitionmajority\]](#definitionmajority){reference-type="ref" reference="definitionmajority"}), the leaves of ${\cal T}(\boldsymbol{X}(t))$ will vote one if and only if they are on the right half line. Denote the root vote of ${\cal T}(\boldsymbol{X}(t))$ under majority voting with initial condition ([\[xx\]](#xx){reference-type="ref" reference="xx"}) by ${\mathbb{V}}(\boldsymbol{X}(t)):= {\mathbb{V}}_{p_0}(\boldsymbol{X}(t))$. The following result is the natural analogue of Theorem [\[maintheorem2\]](#maintheorem2){reference-type="ref" reference="maintheorem2"} in one dimension. [\[maintheorem1D\]]{#maintheorem1D label="maintheorem1D"} Let $T^*\in (0,\infty)$. 
Suppose $I$ satisfies Assumptions [\[assumptions2\]](#assumptions2){reference-type="ref" reference="assumptions2"} [\[assumptions2_A\]](#assumptions2_A){reference-type="ref" reference="assumptions2_A"}-[\[assumptions2_B\]](#assumptions2_B){reference-type="ref" reference="assumptions2_B"}. Then there exist $c_1(\alpha), \varepsilon_1(\alpha)>0$ such that, for all $t\in [0,T^*]$ and all $\varepsilon \in (0, \varepsilon_1),$ (1) for $x \geq c_1I(\varepsilon)|\log\varepsilon|$, we have $\mathbb{P}_x^\varepsilon[{\mathbb{V}}(\boldsymbol{X}(t)) =1] \geq 1 - \frac{\varepsilon^2}{I(\varepsilon)^2},$ (2) for $x \leq -c_1I(\varepsilon)|\log\varepsilon|$, we have [$\mathbb{P}_x^\varepsilon[{\mathbb{V}}(\boldsymbol{X}(t)) =1] \leq \frac{\varepsilon^2}{I(\varepsilon)^2}$]{.upright}. This result tells us that, for positive $x$, 'typical' leaves of the tree ${\cal T}(\boldsymbol{X}(t))$ based at $x$ are more likely to vote $1$ than $0$. As we mentioned in our introduction, Theorem [\[maintheorem1D\]](#maintheorem1D){reference-type="ref" reference="maintheorem1D"} is weaker than the actual one-dimensional result that will be used to prove Theorem [\[maintheorem2\]](#maintheorem2){reference-type="ref" reference="maintheorem2"} (at this stage, we have not developed the technical jargon needed to state it). This stronger result (Theorem [\[mainteo1dmarked\]](#mainteo1dmarked){reference-type="ref" reference="mainteo1dmarked"}) will be developed in Section [3.3](#couplingmarkedsystems){reference-type="ref" reference="couplingmarkedsystems"}, and shown to imply Theorem [\[maintheorem1D\]](#maintheorem1D){reference-type="ref" reference="maintheorem1D"}. There is some evidence in the literature that the interface width in Theorem [\[maintheorem1D\]](#maintheorem1D){reference-type="ref" reference="maintheorem1D"} could be improved. 
Let $f$ be any bistable nonlinearity and consider the one-dimensional equation $$\begin{aligned} \label{onedsystem} \begin{cases} (-\Delta)^{\frac{\alpha}{2}}u(y) = f(u(y)) \ \ \forall \, y \in \mathbb{R},\\ \lim \limits_{y\to \infty} u(y) = 1,\ \lim \limits_{y\to - \infty} u(y) = 0.\end{cases}\end{aligned}$$ Then, by [@gui2015traveling Proposition 3.2], a solution $u \in C^2(\mathbb{R})$ to ([\[onedsystem\]](#onedsystem){reference-type="ref" reference="onedsystem"}) satisfies $$\frac{A}{y^{\alpha}}\leq 1 - u(y) \leq \frac{B}{y^{\alpha}}$$ for all $y>1$ and some $A, B > 0$ (with a similar inequality holding if $y\leq -1$). We rewrite this equation under our scaling by setting $z = \varepsilon^{\frac{2}{\alpha}} I(\varepsilon)^{1-\frac{2}{\alpha}} y,$ and obtain $$\begin{aligned} \frac{A \varepsilon^2 I(\varepsilon)^{\alpha-2}}{z^\alpha} \leq 1-u(z)\leq \frac{B \varepsilon^2 I(\varepsilon)^{\alpha-2}}{z^\alpha}.\end{aligned}$$ Therefore if $z\geq I(\varepsilon)$, $1-u(z)$ is of order $\frac{\varepsilon^2}{I(\varepsilon)^2}$, the interface sharpness from Theorem [\[maintheorem1D\]](#maintheorem1D){reference-type="ref" reference="maintheorem1D"}. This suggests that Theorem [\[maintheorem1D\]](#maintheorem1D){reference-type="ref" reference="maintheorem1D"} should hold for an interface of width $I(\varepsilon)$, and that this would be the narrowest width possible for the given interface sharpness. However, we were only able to prove Theorem [\[maintheorem1D\]](#maintheorem1D){reference-type="ref" reference="maintheorem1D"} for an interface width of order $I(\varepsilon)|\log\varepsilon|$. The choice of initial condition $p_0(x)={\mathbbm{1}_{\{x\geq 0\}}}$ affords us several useful inequalities. 
First, for all $x_1, x_2\in \mathbb{R}$ with $x_1\leq x_2,$ we have $$\begin{aligned} \label{monotonicity1}\mathbb{P}_{x_1}^\varepsilon[{\mathbb{V}}(\boldsymbol{X}(t))=1]\leq \mathbb{P}_{x_2}^\varepsilon[{\mathbb{V}}(\boldsymbol{X}(t))=1].\end{aligned}$$ For any time-labelled ternary tree ${\cal T}$, write $$\mathbb{P}_x^t({\cal T}) := \mathbb{P}^\varepsilon_x[{\mathbb{V}}(\boldsymbol{X}(t))=1 \left \vert \right.{{\cal T}(\boldsymbol{X}(t))}= {\cal T}].$$ It then follows by symmetry of $\alpha$-stable motions and the definition of $p_0(x)$ that, for any $x\in \mathbb{R}$ and $t>0$, $$\begin{aligned} \label{symetrytree1}\mathbb{P}_x^t({\cal T}) = 1- \mathbb{P}_{-x}^t({\cal T}). \end{aligned}$$ Setting $x=0$ in equation ([\[symetrytree1\]](#symetrytree1){reference-type="ref" reference="symetrytree1"}) yields $\mathbb{P}_0^t({\cal T}) = \frac{1}{2}$, so by monotonicity ([\[monotonicity1\]](#monotonicity1){reference-type="ref" reference="monotonicity1"}) $$\begin{aligned} \label{tannn}\mathbb{P}_x^t({\cal T})\geq \tfrac{1}{2} \text{ for } x>0, \text{ and } \, \mathbb{P}_x^t({\cal T})\leq \tfrac{1}{2} \text{ for } x<0. \end{aligned}$$ It will be convenient in our later calculations to introduce notation for the majority voting system. Mimicking [@etheridge2017branching], define the function $g:[0,1]^3\to [0,1]$ by $$\begin{aligned} \label{majorityvotingfunction} g(p_1,p_2,p_3) = p_1p_2p_3 + p_1p_2(1-p_3)+p_2p_3(1-p_1)+p_3p_1(1-p_2).\end{aligned}$$ This is the probability that a majority vote gives the result $1$, in the special case when the three voters are independent and have probabilities $p_1, p_2$ and $p_3$ of voting $1$. We will abuse notation slightly and write $g(q):= g(q, q,q)$. Note that, for all $q\in [0,1],$ $$\begin{aligned} \label{symmetryofg} g(q) = 1-g(1-q). 
\end{aligned}$$

## A coupling of voting systems {#couplingmarkedsystems}

In this section, we will couple the root vote of ${\cal T}(\boldsymbol{X}(t))$ under majority voting to the root vote of another ternary branching process under a different voting system. This other branching process will be a ternary branching subordinated Brownian motion, with subordinator given by a truncated $\tfrac{\alpha}{2}$-stable subordinator. We endow this process with a voting procedure that we call 'marked majority voting'. Once we have achieved this coupling of root votes, we state a more general theorem in terms of this new branching process that will imply Theorem [\[maintheorem1D\]](#maintheorem1D){reference-type="ref" reference="maintheorem1D"}.\ The intuition behind marked majority voting, which we denote by ${\mathbb{V}}^\times$, is straightforward. However, formally proving a coupling of the majority and marked majority systems ${\mathbb{V}}$ and ${\mathbb{V}}^\times$ is more challenging. To aid us with this, we introduce an intermediate voting system $\widehat{{\mathbb{V}}}$ that can be readily compared to both voting systems. We call this the 'exponentially marked' voting procedure. The definition of the exponentially marked voting procedure, together with a proof that it couples with the ordinary majority voting system in the appropriate sense (Theorem [\[teo:ineqmark\]](#teo:ineqmark){reference-type="ref" reference="teo:ineqmark"}), makes up the content of Section [3.3.1](#expmarkedsection){reference-type="ref" reference="expmarkedsection"}. After this, in Section [3.3.2](#sectionnmarked){reference-type="ref" reference="sectionnmarked"}, we define the marked majority voting procedure that will be carried through the one-dimensional proof, and prove (using the intermediate voting system) that it can be coupled to majority voting. 
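As a brief aside before formalising the marked systems: every voting procedure in this section is built on the majority rule $g$ of ([\[majorityvotingfunction\]](#majorityvotingfunction){reference-type="ref" reference="majorityvotingfunction"}). A minimal Python check of its closed form on equal inputs and of the symmetry ([\[symmetryofg\]](#symmetryofg){reference-type="ref" reference="symmetryofg"}):

```python
def g(p1, p2, p3):
    # Probability that a majority vote among three independent voters,
    # voting 1 with probabilities p1, p2, p3, returns the vote 1
    # (the function (majorityvotingfunction)).
    return p1*p2*p3 + p1*p2*(1 - p3) + p2*p3*(1 - p1) + p3*p1*(1 - p2)

# For equal inputs, g(q) := g(q, q, q) = 3q^2 - 2q^3, and g(q) = 1 - g(1-q),
# which is the symmetry (symmetryofg).
for q in (0.0, 0.25, 0.5, 0.75, 1.0):
    assert abs(g(q, q, q) - (3*q**2 - 2*q**3)) < 1e-12
    assert abs(g(q, q, q) - (1 - g(1 - q, 1 - q, 1 - q))) < 1e-12
```

The fixed points of $q\mapsto 3q^2-2q^3$ at $0$, $\tfrac{1}{2}$ and $1$ are what drive the sharpening of votes along the tree.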
Theorem [\[maintheorem1D\]](#maintheorem1D){reference-type="ref" reference="maintheorem1D"} will then be a consequence of a more general theorem stated in terms of the marked voting system on the subordinated Brownian tree (Theorem [\[mainteo1dmarked\]](#mainteo1dmarked){reference-type="ref" reference="mainteo1dmarked"}), which will be proved in Section [3.4](#saltchips){reference-type="ref" reference="saltchips"}.

### Exponentially marked voting {#expmarkedsection}

In this section we will couple the majority voting system on ${\cal T}(\boldsymbol{X}(t))$ with the exponentially marked voting system defined on a ternary branching subordinated Brownian motion. Fix $\varepsilon>0$ throughout. We will consider an $I(\varepsilon)^2$-truncated $\tfrac{\alpha}{2}$-stable subordinator denoted $R^\varepsilon_t$. Assumption [\[assumption3\]](#assumption3){reference-type="ref" reference="assumption3"} will also be adopted for all stable subordinators. To be precise, if $$\nu(dx) := \frac{\alpha}{2 \Gamma\left(1-\tfrac{\alpha}{2}\right)}x^{-1-\frac{\alpha}{2}}dx$$ is the Lévy measure of the $\tfrac{\alpha}{2}$-stable subordinator, then the Lévy measure of $R^\varepsilon_t$ is given by $$\begin{aligned} \sigma_\alpha I(\varepsilon)^{\alpha-2}\nu(dx)\mathbbm{1}_{0\leq x\leq \frac{2-\alpha}{\alpha} I(\varepsilon)^2}.\end{aligned}$$ In using the notation $R^\varepsilon_t$ we suppress the true speed of the process and its truncation level, which were made explicit previously in Section [3.2](#choice_scaling_sec){reference-type="ref" reference="choice_scaling_sec"}. Henceforth we will use this suppressed notation. Moreover, although we technically truncate at level $\frac{2-\alpha}{\alpha} I(\varepsilon)^2$, we shall refer to this simply as the $I(\varepsilon)^2$-truncated stable subordinator. Let $\boldsymbol{B}_{R^\varepsilon}(t)$ denote the historical process of a ternary branching $R^\varepsilon_t$-subordinated Brownian motion with branching rate $\varepsilon^{-2}$. 
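The constant $\sigma_\alpha$ is chosen so that jumps of the untruncated subordinator exceeding the truncation level arrive at rate exactly $I(\varepsilon)^{-2}$, as computed below for the exponential times $\tau^\times_i$. A minimal numerical sanity check of that computation, assuming (since ([\[defn_sig_alpha\]](#defn_sig_alpha){reference-type="ref" reference="defn_sig_alpha"}) lies outside this excerpt) the value $\sigma_\alpha = \Gamma\left(1-\tfrac{\alpha}{2}\right)\left(\tfrac{2-\alpha}{\alpha}\right)^{\alpha/2}$ implied by the displayed equalities:

```python
import math

def exceedance_rate(alpha, I):
    # Rate at which an alpha/2-stable subordinator with Levy measure
    # nu(dx) = alpha/(2*Gamma(1-alpha/2)) * x^(-1-alpha/2) dx, run at speed
    # sigma_alpha * I^(alpha-2), makes jumps larger than ((2-alpha)/alpha)*I^2.
    # The value of sigma_alpha below is an assumption inferred from the
    # surrounding computation; (defn_sig_alpha) is outside this excerpt.
    sigma_alpha = math.gamma(1 - alpha/2) * ((2 - alpha)/alpha)**(alpha/2)
    a = (2 - alpha)/alpha * I**2                      # truncation level
    tail = (2/alpha) * a**(-alpha/2)                  # int_a^inf x^(-alpha/2-1) dx
    return sigma_alpha * I**(alpha - 2) * alpha/(2*math.gamma(1 - alpha/2)) * tail

# The rate collapses to exactly I^(-2), independently of alpha in (0, 2).
for alpha in (0.5, 1.0, 1.5):
    assert abs(exceedance_rate(alpha, 0.05) - 0.05**(-2)) < 1e-6
```

The cancellation of every $\alpha$-dependent factor is what makes the marking times $\tau^\times_i$ exactly $\mathit{Exp}(I(\varepsilon)^{-2})$-distributed.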
Unless stated otherwise, all subordinators in this work will be zero at time zero.\ Let us now make precise the form of the coupling that we desire. As before, let $\mathbb{V}(\boldsymbol{X}(t))$ denote the root vote of ${\cal T}(\boldsymbol{X}(t))$ under majority voting (Definition [\[definitionmajority\]](#definitionmajority){reference-type="ref" reference="definitionmajority"}). We will define a voting system on ${\cal T}(\boldsymbol{B}_{R^\varepsilon}(t))$ with root vote $\widehat{\mathbb{V}}(\boldsymbol{B}_{R^\varepsilon}(t))$ satisfying $$\begin{aligned} \label{cupcakes}\mathbb{P}^\varepsilon_x\left[\mathbb{V}(\boldsymbol{X}(t))=1\right] \geq \mathbb{P}^\varepsilon_x\left[\widehat{{\mathbb{V}}}(\boldsymbol{B}_{R^\varepsilon}(t))=1\right] \ \text{ for all } \ x \geq 0, \end{aligned}$$ with the reverse inequality holding for $x< 0$. Having obtained this, it will suffice to prove the analogue of Theorem [\[maintheorem1D\]](#maintheorem1D){reference-type="ref" reference="maintheorem1D"} with $\mathbb{V}(\boldsymbol{X})$ replaced by $\widehat{\mathbb{V}}(\boldsymbol{B}_{R^\varepsilon}).$ In this way, we will have incorporated the problematic 'large jumps' of the $\alpha$-stable process $\boldsymbol{X}(t)$ into the voting system $\widehat{\mathbb{V}}$.\ To define the voting system on ${\cal T}(\boldsymbol{B}_{R^\varepsilon}(t))$, consider a collection of independent $\tfrac{\alpha}{2}$-stable subordinators, $\{R_i\}_{i\in M(t)}$, where $M(t)$ denotes the set of individuals that have ever been alive in ${\cal T}(\boldsymbol{X}(t))$. For each $i\in M(t)$, let $\tau^{\times}_i$ be the first time that $R_i$ makes a jump of size larger than $\frac{2-\alpha}{\alpha}I(\varepsilon)^2$ (these are the exponential times after which the voting system is named). 
Explicitly, $$\tau^{\times}_i := \inf\left\{t\geq0 : |R_i(t)-R_i(t-)| > \tfrac{2-\alpha}{\alpha}I(\varepsilon)^2\right\}.$$ For each $i$, $\tau^{\times}_i$ is exponentially distributed with parameter $$\begin{aligned} &\int_{\frac{2-\alpha}{\alpha}I(\varepsilon)^2}^\infty \sigma_\alpha I(\varepsilon)^{\alpha-2}\nu(dx)\\ &= \tfrac{\alpha}{2}\left(\tfrac{2-\alpha}{\alpha}\right)^{\frac{\alpha}{2}} I(\varepsilon)^{\alpha-2}\int_{\frac{2-\alpha}{\alpha}I(\varepsilon)^2}^\infty {x^{-\frac{\alpha}{2}-1}} dx \\ &= I(\varepsilon)^{-2} \end{aligned}$$ using the definition of $\sigma_\alpha$ from ([\[defn_sig_alpha\]](#defn_sig_alpha){reference-type="ref" reference="defn_sig_alpha"}) (indeed, we chose $\sigma_\alpha$ so that the above equality would hold). Therefore $\{\tau^{\times}_i\}_{i\in M(t)}$ is a family of i.i.d. $\mathit{Exp}(I(\varepsilon)^{-2})$ variables. We associate to the particle in ${\cal T}(\boldsymbol{B}_{R^\varepsilon}(t))$ with label $i$ its lifetime, $\tau_i\wedge t$, for $\tau_i \sim \mathit{Exp}(\varepsilon^{-2}).$ Fix $\varepsilon>0$. For $p:\mathbb{R}\to [0,1]$, define the *exponentially marked voting procedure* on ${{\cal T}(\boldsymbol{B}_{R^\varepsilon}(t))}$ as follows. (1) Each individual $B_i(R^\varepsilon_i)$ is said to be *marked* if $\tau^{\times}_i <\tau_i.$ Each marked individual votes $1$ with probability $\tfrac{1}{2}$ and otherwise votes $0$. (2) Each unmarked leaf $i$ of ${{\cal T}(\boldsymbol{B}_{R^\varepsilon}(t))}$, independently, votes $1$ with probability\ $p(B_i(R^\varepsilon_i(t)))$ and otherwise votes $0$. (3) At each branch point in ${{\cal T}(\boldsymbol{B}_{R^\varepsilon}(t))}$, if the parent particle $k$ is unmarked, she votes according to the majority vote of her three offspring $(k,1), (k,2)$ and $(k,3)$. 
[\[definitionnonmarkov\]]{#definitionnonmarkov label="definitionnonmarkov"} Under this voting procedure, define $\widehat{{\mathbb{V}}}_p(\boldsymbol{B}_{R^\varepsilon}(t))$ to be the vote associated to the root $\emptyset$ of ${\cal T}(\boldsymbol{B}_{R^\varepsilon}(t))$. When an individual in ${\cal T}(\boldsymbol{B}_{R^\varepsilon}(t))$ is marked, its vote is independent of the votes of its ancestors. Therefore if at least two individuals born at the same branching event are marked, the vote of their parent is independent of all of its ancestors, making it effectively random. Reassuringly, this scenario is very unlikely, since down a 'typical' line of descent in ${\cal T}(\boldsymbol{B}_{R^\varepsilon}(t))$, two individuals will not be marked at the same branching event.\ We now describe the intuition behind the exponentially marked voting procedure. Recall that an $\frac{\alpha}{2}$-stable subordinated Brownian motion is equal in distribution to an $\alpha$-stable process, so we may consider the historical ternary branching $\tfrac{\alpha}{2}$-stable subordinated Brownian motion $\boldsymbol{B}_R(t)$ in place of $\boldsymbol{X}(t)$. Suppose the trees ${\cal T}(\boldsymbol{B}_R(t))$ and ${\cal T}(\boldsymbol{B}_{R^\varepsilon}(t))$ rooted at $x>0$ have been generated up to time $t$ and that they have the same branching structure, written as ${\cal T}(\boldsymbol{B}_{R^\varepsilon}(t)) = {\cal T}(\boldsymbol{B}_R(t))$. Then each individual $B_i(R_i)$ in ${\cal T}(\boldsymbol{B}_R(t))$ can be associated to the individual $B_i(R^\varepsilon_i)$ in ${\cal T}(\boldsymbol{B}_{R^\varepsilon}(t))$. If the subordinator $R_i$ has not made a large jump (i.e. a jump bigger than $I(\varepsilon)^2$) in its lifetime ($\tau_i^\times \geq\tau_i)$, then $B_i(R^\varepsilon_i)$ votes in the same way as $B_i(R_i)$ according to majority voting. 
However, if $R_i$ does make a large jump ($\tau_i^\times <\tau_i$), then, since $x>0$, $B_i(R_i)$ is more likely to jump into the right half-line than the left. Therefore the vote of $B_i(R_i)$ should be one with probability strictly greater than $1/2$. In contrast, when $\tau_i^\times<\tau_i$, $B_i(R^\varepsilon_i)$ votes one with probability exactly $1/2$, which reduces the probability that the root vote of ${\cal T}(\boldsymbol{B}_{R^\varepsilon}(t))$ will equal one, so we expect ([\[cupcakes\]](#cupcakes){reference-type="ref" reference="cupcakes"}) to hold.\ Now, instead of considering the initial condition $p_0(x) = \mathbbm{1}_{\{x\geq 0\}}$ as we did for majority voting, we will use $$\begin{aligned} \label{phat}\widehat{p}_0(x) = u_+ \mathbbm{1}_{\{x \geq 0\}} + u_- \mathbbm{1}_{\{x \leq 0\}},\end{aligned}$$ where $0< u_- < u_+ < 1$ satisfy $1-u_+ = u_-$. We will fix a choice of $u_-$ and $u_+$ later (see ([\[asympfixedpoint1\]](#asympfixedpoint1){reference-type="ref" reference="asympfixedpoint1"}) and ([\[asympfixedpoint2\]](#asympfixedpoint2){reference-type="ref" reference="asympfixedpoint2"})). For this choice of initial condition, we write $\widehat{\mathbb{V}} := \widehat{\mathbb{V}}_{{\widehat{p}}_0}$. Noting that $\widehat{p}_0$ is symmetric, for any $x_1\leq x_2 \in \mathbb{R}$, $$\begin{aligned} \label{awake} \mathbb{P}^\varepsilon_{x_1}\left[\widehat{{\mathbb{V}}}(\boldsymbol{B}_{R^\varepsilon}(t))=1\right] \leq \mathbb{P}^\varepsilon_{x_2}\left[\widehat{{\mathbb{V}}}(\boldsymbol{B}_{R^\varepsilon}(t))=1\right]. 
\end{aligned}$$ Let ${\cal T}$ be a time-labelled ternary tree and define $$\widehat{\mathbb{P}}_x^t({\cal T}) := \mathbb{P}_x^\varepsilon\left[\widehat{{\mathbb{V}}}(\boldsymbol{B}_{R^\varepsilon}(t))=1 \left \vert \right.{\cal T}= {{\cal T}(\boldsymbol{B}_{R^\varepsilon}(t))}\right].$$ Then, since $0$ and $1$ are exchangeable in the exponentially marked voting system, $$\begin{aligned} \label{tired} \widehat{\mathbb{P}}_x^t({\cal T}) = 1- \widehat{\mathbb{P}}_{-x}^t({\cal T})\end{aligned}$$ for all $x\in \mathbb{R}$ and $t\geq 0$. Setting $x=0$ in equation ([\[tired\]](#tired){reference-type="ref" reference="tired"}) shows that $\widehat{\mathbb{P}}_0^t({\cal T}) = \tfrac{1}{2}$ for all $t>0$, and together with monotonicity ([\[awake\]](#awake){reference-type="ref" reference="awake"}), this gives $$\widehat{\mathbb{P}}_x^t({\cal T}) \geq \tfrac{1}{2} \ \text{ for } \ x>0, \ \ \widehat{\mathbb{P}}_x^t({\cal T}) \leq \tfrac{1}{2} \ \text{ for } \ x<0.$$ To conclude this section, we prove the claimed coupling of voting systems. [\[teo:ineqmark\]]{#teo:ineqmark label="teo:ineqmark"} Let $\varepsilon>0$ and $t\geq 0$. Then (1) for all $x\geq0$, $\mathbb{P}^\varepsilon_x[{\mathbb{V}}(\boldsymbol{X}(t))=1] \geq \mathbb{P}^\varepsilon_x\left[\widehat{{\mathbb{V}}}(\boldsymbol{B}_{R^\varepsilon}(t))=1\right],$ (2) for all $x\leq 0$, $\mathbb{P}^\varepsilon_x[{\mathbb{V}}(\boldsymbol{X}(t))=1] \leq \mathbb{P}^\varepsilon_x\left[\widehat{{\mathbb{V}}}(\boldsymbol{B}_{R^\varepsilon}(t))=1\right]$. *Proof.* We only prove the first inequality, since the second inequality will follow by the symmetry relations ([\[symetrytree1\]](#symetrytree1){reference-type="ref" reference="symetrytree1"}) and ([\[tired\]](#tired){reference-type="ref" reference="tired"}). 
Recall the initial conditions for ${\mathbb{V}}(\boldsymbol{X}(t))$ and $\widehat{{\mathbb{V}}}(\boldsymbol{B}_{R^\varepsilon}(t))$ are given by $$p_0(x) = \mathbbm{1}_{\{x\geq 0\}} \ \text{ and } \ \widehat{p}_0(x) = u_+\mathbbm{1}_{\{x\geq 0\}} + u_-\mathbbm{1}_{\{x\leq 0\}}$$ respectively. To ease notation, let $p \equiv p_0$ and $\widehat{p}\equiv \widehat{p}_0$ for the remainder of this proof.\ First, by coupling branching structures and branching times of both trees, we can assume ${\cal T}(\boldsymbol{B}_{R^\varepsilon}(t)) = {\cal T}(\boldsymbol{X}(t)),$ so it suffices to show that $$\begin{aligned} \label{teotreeversion} \mathbb{P}_x^t({\cal T}) \geq \widehat{\mathbb{P}}_x^t({\cal T}) \ \text{for all } x\geq 0\end{aligned}$$ for any time-labelled ternary tree ${\cal T}$. Denote the time of the first branching event in ${\cal T}(\boldsymbol{B}_{R^\varepsilon}(t)) = {\cal T}(\boldsymbol{X}(t))$ by $\tau$ (which corresponds to $\tau_\emptyset$ in Definition [\[definitionnonmarkov\]](#definitionnonmarkov){reference-type="ref" reference="definitionnonmarkov"}). Let $\tau^\times\sim \mathit{Exp}(I(\varepsilon)^{-2})$ be the exponential random variable that determines if the ancestral individual in ${\cal T}(\boldsymbol{B}_{R^\varepsilon}(t))$ is marked. We proceed by induction on the number of branching events in ${\cal T}$.\ To prove the base case, let ${\cal T}_0$ denote the tree with a root and a single leaf. Conditional on $\left\{{\cal T}_0 = {{\cal T}(\boldsymbol{B}_{R^\varepsilon}(t))}\right\}$, let $B(R^\varepsilon_t)$ be the position of the single individual at time $t$ where $(B_s)_{s\geq0}$ is a standard Brownian motion and $(R^\varepsilon_s)_{s\geq0}$ is an $I(\varepsilon)^2$-truncated $\frac{\alpha}{2}$-stable subordinator. Under the exponentially marked voting procedure, this individual votes $1$ with probability $\widehat{p}(B(R^\varepsilon_t))$ if she is unmarked (i.e. 
$\tau^\times\geq \tau$), or she votes $1$ with probability $\tfrac{1}{2}$ if she is marked ($\tau^\times < \tau$). Since the event $\{{{\cal T}(\boldsymbol{B}_{R^\varepsilon}(t))}= {\cal T}_0\}$ is equivalent to $\{\tau>t\}$, we have, for all $x\geq 0$, $$\begin{aligned} \label{needcoffee3} \widehat{\mathbb{P}}_x^t({\cal T}_0) &= \mathbb{E}^{\varepsilon}_x\left[ \widehat{p}\left(B(R^\varepsilon_t)\right) \mathbbm{1}_{\tau^\times\geq \tau}\left \vert \right.\tau>t\right] + \tfrac{1}{2}\mathbb{P}^{\varepsilon}_x[\tau^\times < \tau\left \vert \right.\tau>t]\nonumber \\ &=\mathbb{E}^{\varepsilon}_x\left[ \widehat{p}\left(B(R^\varepsilon_t)\right)\left \vert \right.\tau>t\right] \mathbb{P}^{\varepsilon}_x[ \tau^\times \geq \tau\left \vert \right.\tau>t] + \tfrac{1}{2} \mathbb{P}^{\varepsilon}_x[\tau^\times < \tau\left \vert \right.\tau>t], \end{aligned}$$ where in the second line we have used that, conditional on the event $\{\tau>t\}$, the events $\{B(R^\varepsilon_t)>0\}$ and $\{\tau^\times \geq \tau\}$ are independent. 
We next observe that $$\begin{aligned} \label{needcoffee2} & \mathbb{E}^{\varepsilon}_x\left[ \widehat{p}\left(B(R^\varepsilon_t)\right)\left \vert \right.\tau>t\right] \mathbb{P}^{\varepsilon}_x[ \tau^\times \geq \tau\left \vert \right.\tau>t] + \tfrac{1}{2} \mathbb{P}^{\varepsilon}_x[\tau^\times < \tau\left \vert \right.\tau>t] \nonumber \\ & \ \ \ \ \ \leq \mathbb{E}^\varepsilon_x\left[ \widehat{p}\left(B(R^\varepsilon_t)\right)\left \vert \right.\tau>t\right]\mathbb{P}_x^\varepsilon[ \tau^\times \geq t] + \tfrac{1}{2}\mathbb{P}_x^\varepsilon[\tau^\times < t].\end{aligned}$$ To see this, note that $\mathbb{P}_x^\varepsilon\left[ \tau^\times \geq \tau\left \vert \right.\tau>t\right] \leq \mathbb{P}_x^\varepsilon\left[ \tau^\times \geq t\right]$ and, since $x\geq 0$, by similar arguments as those used to obtain ([\[tannn\]](#tannn){reference-type="ref" reference="tannn"}), we have $\mathbb{E}^\varepsilon_x\left[ \widehat{p}\left(B(R^\varepsilon_t)\right)\left \vert \right.\tau>t\right]\geq \tfrac{1}{2}$. Therefore ([\[needcoffee2\]](#needcoffee2){reference-type="ref" reference="needcoffee2"}) holds if $$\begin{aligned} \tfrac{1}{2}\left(\mathbb{P}^{\varepsilon}_x[\tau^\times < \tau\left \vert \right.\tau>t] - \mathbb{P}_x^\varepsilon[\tau^\times < t]\right) \leq \tfrac{1}{2}\left(\mathbb{P}_x^\varepsilon\left[ \tau^\times \geq t\right]-\mathbb{P}_x^\varepsilon\left[ \tau^\times \geq \tau\left \vert \right.\tau>t\right] \right),\end{aligned}$$ or, equivalently, $$\mathbb{P}^{\varepsilon}_x[\tau^\times < \tau\left \vert \right.\tau>t] +\mathbb{P}_x^\varepsilon\left[ \tau^\times \geq \tau\left \vert \right.\tau>t\right] \leq \mathbb{P}_x^\varepsilon\left[ \tau^\times \geq t\right]+\mathbb{P}_x^\varepsilon[\tau^\times < t],$$ which holds trivially. 
Therefore ([\[needcoffee2\]](#needcoffee2){reference-type="ref" reference="needcoffee2"}) holds, and combining ([\[needcoffee3\]](#needcoffee3){reference-type="ref" reference="needcoffee3"}) with ([\[needcoffee2\]](#needcoffee2){reference-type="ref" reference="needcoffee2"}) we obtain $$\begin{aligned} \label{needcoffee} \widehat{\mathbb{P}}_x^t({\cal T}_0) \leq \mathbb{E}^\varepsilon_x\left[ \widehat{p}\left(B(R^\varepsilon_t)\right)\left \vert \right.\tau>t\right]\mathbb{P}_x^\varepsilon[ \tau^\times \geq t] + \tfrac{1}{2}\mathbb{P}_x^\varepsilon[\tau^\times < t].\end{aligned}$$ Continuing the proof of the base case, consider the leaf in $\boldsymbol{X}(t)$. Conditional on $\left\{{\cal T}(\boldsymbol{X}(t))={\cal T}_0\right\}$, abuse notation and denote the position of the single individual in ${\cal T}(\boldsymbol{X}(t))$ at time $t$ by $B(R_t)$ for $(B_s)_{s\geq0}$ a standard Brownian motion and $(R_s)_{s\geq0}$ an $\tfrac{\alpha}{2}$-stable subordinator. Define $$\overline{\tau} := \inf\left\{ t\geq 0: |R_t-R_{t-}| >\tfrac{2-\alpha}{\alpha}I(\varepsilon)^2 \right\}$$ which describes the first time that $R_t$ makes a jump of size greater than $\frac{2-\alpha}{\alpha} I(\varepsilon)^2$. Of course, $\overline{\tau} \stackrel{D}{=} \tau^\times$, but $\overline{\tau}$ is defined in terms of the subordinator of the ancestral particle in $\boldsymbol{X}(t)$. 
Noting that $\overline{\tau}$ is independent of $\tau$, we have $$\begin{aligned} \label{sosleepy} \mathbb{P}_x^t({\cal T}_0) &=\mathbb{E}^{\varepsilon}_x[p\left(B(R_t)\right) \left \vert \right.\overline{\tau}\geq t, \tau>t] \mathbb{P}_x^\varepsilon[\overline{\tau}\geq t] +\mathbb{E}^{\varepsilon}_x[p(B(R_t)) \left \vert \right.\overline{\tau}<t<\tau] \mathbb{P}_x^\varepsilon[\overline{\tau}<t]\nonumber \\ &\geq \mathbb{E}^{\varepsilon}_x[p(B(R^\varepsilon_t))\left \vert \right.\tau>t]\,\mathbb{P}_x^\varepsilon[\overline{\tau}\geq t] + \tfrac{1}{2}\mathbb{P}_x^\varepsilon[\overline{\tau}<t] \end{aligned}$$ using that, conditional on $\{\overline{\tau} > t\}, B(R_t) \stackrel{D}{=} B(R^\varepsilon_t)$, and $\mathbb{E}^\varepsilon_x[p(B(R_t)) \left \vert \right.\overline{\tau}<t<\tau] \geq \tfrac{1}{2}$ since $x\geq 0$, which can be shown using a similar identity to ([\[symetrytree1\]](#symetrytree1){reference-type="ref" reference="symetrytree1"}). By definition of $p$ and $\widehat{p}$, $\mathbb{E}^\varepsilon_x[p(B(R_t))\left \vert \right.\tau>t] \geq \mathbb{E}^\varepsilon_x[\widehat{p}(B(R_t))\left \vert \right.\tau>t]$ for all $x\geq 0$, which, together with ([\[needcoffee\]](#needcoffee){reference-type="ref" reference="needcoffee"}) and ([\[sosleepy\]](#sosleepy){reference-type="ref" reference="sosleepy"}) gives us $$\mathbb{P}_x^t({\cal T}_0) \geq \widehat{\mathbb{P}}_x^t({\cal T}_0)$$ for $x\geq 0$, proving the base case.\ Now suppose that, for all trees with at most $n-1>1$ branching events, ([\[teotreeversion\]](#teotreeversion){reference-type="ref" reference="teotreeversion"}) holds. Let ${\cal T}^n$ be a tree with $n$ branching events. Define the first three trees of descent, denoted ${\cal T}_1, {\cal T}_2,$ and ${\cal T}_3$, to be the three subtrees of ${\cal T}^n$ generated at time $\tau$. Note that ${\cal T}_1, {\cal T}_2,$ and ${\cal T}_3$ have strictly less than $n$ branching events. 
Write $$\begin{aligned} g\left(\mathbb{P}^{t-\tau}_{X_\tau}({\cal T}\star)\right):=g\left(\mathbb{P}_{X_\tau}^{t-\tau}({\cal T}_1),\mathbb{P}_{X_\tau}^{t-\tau}({\cal T}_2),\mathbb{P}_{X_\tau}^{t-\tau}({\cal T}_3)\right)\end{aligned}$$ and define $g\left(\widehat{\mathbb{P}}^{t-\tau}_{X_\tau}({\cal T}\star)\right)$ similarly. By ([\[symetrytree1\]](#symetrytree1){reference-type="ref" reference="symetrytree1"}) $$\begin{aligned} \label{symmmmmm} g\left(\mathbb{P}^{t-\tau}_{X_\tau}({\cal T}\star)\right) = 1-g\left(\mathbb{P}^{t-\tau}_{-X_\tau}({\cal T}\star)\right) \ \text{and} \ g\left(\widehat{\mathbb{P}}^{t-\tau}_{X_\tau}({\cal T}\star)\right) = 1-g\left(\widehat{\mathbb{P}}^{t-\tau}_{-X_\tau}({\cal T}\star)\right).\ \end{aligned}$$ Let $T_n:=\left\{{{\cal T}(\boldsymbol{B}_{R^\varepsilon}(t))}= {{\cal T}(\boldsymbol{X}(t))}= {\cal T}^n\right\}$. By almost identical arguments to those used to obtain ([\[sosleepy\]](#sosleepy){reference-type="ref" reference="sosleepy"}), but now conditioning on the event $\{\overline{\tau} >\tau\}$, we have $$\begin{aligned} &\mathbb{P}_x^t({\cal T}^n) \nonumber\\ &= \mathbb{E}_x^\varepsilon\left[g\left(\mathbb{P}^{t-\tau}_{B(R^\varepsilon_\tau)}({\cal T}\star)\right) \mathbbm{1}_{\overline{\tau} > \tau}\left \vert \right.T_n\right] +\mathbb{E}_x^\varepsilon\left[g\left(\mathbb{P}^{t-\tau}_{B(R_\tau)}({\cal T}\star)\right) \left \vert \right.\overline{\tau} \leq \tau, T_n\right] \mathbb{P}_x^\varepsilon[\overline{\tau} \leq \tau\left \vert \right.T_n]\nonumber \\ &\geq \mathbb{E}^\varepsilon_x\left[g\left(\mathbb{P}^{t-\tau}_{B(R^\varepsilon_\tau)}({\cal T}\star)\right) \mathbbm{1}_{\overline{\tau} > \tau}\left \vert \right.T_n\right] + \tfrac{1}{2} \mathbb{P}_x^\varepsilon[\overline{\tau} \leq \tau\left \vert \right.T_n],\label{protein} \end{aligned}$$ using that, for all $x\geq 0$, $\mathbb{E}_x^\varepsilon\left[g\left(\mathbb{P}^{t-\tau}_{B(R_\tau)}({\cal T}\star)\right) \left \vert \right.\overline{\tau} \leq \tau, T_n\right] 
\geq \tfrac{1}{2}$ by a similar symmetry relation to ([\[tired\]](#tired){reference-type="ref" reference="tired"}). By definition of the exponentially marked voting system, for $\tau^\times$ as above, we also have $$\begin{aligned} &\widehat{\mathbb{P}}_x^t({\cal T}^n) = \mathbb{E}_x^\varepsilon\left[g\left(\widehat{\mathbb{P}}^{t-\tau}_{B(R^\varepsilon_\tau)}({\cal T}\star)\right) \mathbbm{1}_{\tau^\times > \tau}\left \vert \right.T_n\right] + \tfrac{1}{2}\mathbb{P}_x^\varepsilon[\tau^\times \leq \tau\left \vert \right.T_n]. \label{protein2} \end{aligned}$$ Therefore, since $\overline{\tau} \stackrel{D}{=} \tau^\times$, by ([\[protein\]](#protein){reference-type="ref" reference="protein"}) and ([\[protein2\]](#protein2){reference-type="ref" reference="protein2"}), it suffices to show that $$\begin{aligned} \label{endresult} \mathbb{E}_x^\varepsilon\left[g\left(\mathbb{P}^{t-\tau}_{B(R_\tau^\varepsilon)}({\cal T}\star)\right)\mathbbm{1}_{\overline{\tau} > \tau}\left \vert \right.T_n\right] \geq \mathbb{E}_x^\varepsilon\left[g\left(\widehat{\mathbb{P}}^{t-\tau}_{B(R_\tau^\varepsilon)}({\cal T}\star)\right) \mathbbm{1}_{\overline{\tau}> \tau}\left \vert \right.T_n \right].\end{aligned}$$Now, for all $x\geq0$, $$\begin{aligned} &\mathbb{E}^\varepsilon_x\left[g\left(\mathbb{P}^{t-\tau}_{B(R_\tau^\varepsilon)}({\cal T}\star)\right)\mathbbm{1}_{\overline{\tau} > \tau}\left \vert \right.T_n\right]\\ &= \mathbb{E}^\varepsilon_x\left[g\left(\mathbb{P}^{t-\tau}_{B(R_\tau^\varepsilon)}({\cal T}\star)\right)\mathbbm{1}_{\overline{\tau} > \tau}\left \vert \right.B(R_\tau^\varepsilon) >0, T_n\right]\mathbb{P}_x^\varepsilon\left[B(R_\tau^\varepsilon) >0\left \vert \right.T_n\right]\nonumber \\ & \ \ \ + \mathbb{E}_x^\varepsilon\left[g\left(\mathbb{P}^{t-\tau}_{B(R_\tau^\varepsilon)}({\cal T}\star)\right)\mathbbm{1}_{\overline{\tau} > \tau} \left \vert \right.B(R_\tau^\varepsilon)\leq 0, T_n\right]\mathbb{P}^\varepsilon_x[B(R_\tau^\varepsilon) \leq0\left \vert \right.T_n]\\ &= 
\mathbb{E}_x^\varepsilon\left[g\left(\mathbb{P}^{t-\tau}_{B(R_\tau^\varepsilon)}({\cal T}\star)\right)\mathbbm{1}_{\overline{\tau} > \tau}\left \vert \right.B(R_\tau^\varepsilon) >0, T_n\right]\mathbb{P}^\varepsilon_x[B(R_\tau^\varepsilon) >0 \left \vert \right.T_n] \\ & \ \ \ + \mathbb{P}_x^\varepsilon[\overline{\tau}>\tau \left \vert \right.T_n]\mathbb{P}^\varepsilon_x[B(R_\tau^\varepsilon) \leq0\left \vert \right.T_n] \\ & \ \ \ - \mathbb{E}_x^\varepsilon\left[g\left(\mathbb{P}^{t-\tau}_{-B(R_\tau^\varepsilon)}({\cal T}\star)\right)\mathbbm{1}_{\overline{\tau}> \tau}\left \vert \right.B(R_\tau^\varepsilon) \leq0, T_n\right]\mathbb{P}^\varepsilon_x[B(R_\tau^\varepsilon) \leq0 \left \vert \right.T_n]\\ &= \mathbb{E}_x^\varepsilon\left[g\left(\mathbb{P}^{t-\tau}_{B(R_\tau^\varepsilon)}({\cal T}\star)\right)\mathbbm{1}_{\overline{\tau} > \tau}\left \vert \right.B(R_\tau^\varepsilon) >0, T_n\right] \left(\mathbb{P}^\varepsilon_x[B(R_\tau^\varepsilon) >0\left \vert \right.T_n]\vphantom{[g\left(\mathbb{P}^{t-\tau}_{B(R_\tau^\varepsilon)}({\cal T}\star)\right)} \right. \\ & \ \ \left. \vphantom{[g\left(\mathbb{P}^{t-\tau}_{B(R_\tau^\varepsilon)}({\cal T}\star)\right)} - \mathbb{P}^\varepsilon_x[B(R_\tau^\varepsilon) \leq 0\left \vert \right.T_n]\right) +\mathbb{P}^\varepsilon_x[\overline{\tau}>\tau \left \vert \right.T_n]\mathbb{P}^\varepsilon_x[B(R_\tau^\varepsilon) \leq0\left \vert \right.T_n]\\ &\geq \mathbb{E}_x^\varepsilon\left[g\left(\widehat{\mathbb{P}}^{t-\tau}_{B(R_\tau^\varepsilon)}({\cal T}\star)\right)\mathbbm{1}_{\overline{\tau} > \tau}\left \vert \right.B(R_\tau^\varepsilon) >0, T_n\right] \left(\vphantom{[g\left(\mathbb{P}^{t-\tau}_{B(R_\tau^\varepsilon)}({\cal T}\star)\right)} \mathbb{P}^\varepsilon_x[B(R_\tau^\varepsilon) >0\left \vert \right.T_n] \right.
\\ & \ \ \left.- \mathbb{P}^\varepsilon_x[B(R_\tau^\varepsilon) \leq 0\left \vert \right.T_n]\vphantom{[g\left(\mathbb{P}^{t-\tau}_{B(R_\tau^\varepsilon)}({\cal T}\star)\right)} \right) +\mathbb{P}^\varepsilon_x[\overline{\tau}>\tau \left \vert \right.T_n]\mathbb{P}^\varepsilon_x[B(R_\tau^\varepsilon) \leq0 \left \vert \right.T_n]\\ &= \mathbb{E}^\varepsilon_x\left[g\left(\widehat{\mathbb{P}}^{t-\tau}_{B(R_\tau^\varepsilon)}({\cal T}\star)\right)\mathbbm{1}_{\overline{\tau} > \tau}\left \vert \right.T_n\right]\end{aligned}$$ where, in the second equality, we have applied the symmetry ([\[symmmmmm\]](#symmmmmm){reference-type="ref" reference="symmmmmm"}), and in the second to last line, we used monotonicity of $g$ together with our inductive hypothesis, and that, given $x\geq 0$, the difference $\mathbb{P}^\varepsilon_x[B(R_\tau^\varepsilon) >0 \left \vert \right.T_n] - \mathbb{P}^\varepsilon_x[B(R_\tau^\varepsilon) \leq 0\left \vert \right.T_n]$ is non-negative. The final equality follows by reversing the arguments used above but for $\widehat{\mathbb{P}}^{t-\tau}_{B(R_\tau^\varepsilon)}({\cal T}\star).$ We conclude that ([\[endresult\]](#endresult){reference-type="ref" reference="endresult"}) holds, proving our inductive step. ◻ ### Marked majority voting {#sectionnmarked} In this section, we describe what will be the final voting system in one dimension, denoted ${\mathbb{V}}^\times$, defined on ${\cal T}(\boldsymbol{B}_{R^\varepsilon}(t))$. This voting system will be carried throughout the one-dimensional proof. In spirit, ${\mathbb{V}}^\times$ is very similar to $\widehat{\mathbb{V}}$, but no longer relies on knowing the lifetime of particles in order to mark them. Instead, particles are marked (independently) when they are born. 
In Theorem [\[teo:ineqmark2\]](#teo:ineqmark2){reference-type="ref" reference="teo:ineqmark2"}, it will be shown that our new voting system ${\mathbb{V}}^\times$ can be coupled to the exponentially marked voting system $\widehat{{\mathbb{V}}}$, so by Theorem [\[teo:ineqmark\]](#teo:ineqmark){reference-type="ref" reference="teo:ineqmark"}, it can also be coupled to the original majority voting system ${\mathbb{V}}$.\ Under the marked majority voting system, particles will be marked (independently) with probability $b_\varepsilon$, defined as follows. Recall that, for $i\in M(t)$, $\tau^{\times}_i \sim \mathit{Exp}(I(\varepsilon)^{-2})$ is the first time the subordinator $(R_i(s))_{s\geq 0}$ makes a jump of size larger than $\frac{2-\alpha}{\alpha}I(\varepsilon)^2$. Further recall that $\tau_i~\sim~\mathit{Exp}(\varepsilon^{-2})$, where $\tau_i\wedge t$ is the lifetime of the particle $X_i \stackrel{D}{=}B_i(R_i)$ in $\boldsymbol{X}(t)$. Define $$\begin{aligned} \label{bdeltapage} b_\varepsilon := \mathbb{P}[\tau^{\times}_i < \tau_i] =\frac{I(\varepsilon)^{-2}}{ I(\varepsilon)^{-2} + \varepsilon^{-2}} \sim \frac{\varepsilon^2}{I(\varepsilon)^2} \end{aligned}$$ where $x \sim y$ for some $x, y$ depending on $\varepsilon$ means that there exist constants $c, d>0$ independent of $\varepsilon$ such that $cy < x < dy$. The quantity $b_\varepsilon$ is the probability that the subordinator associated to individual $i$ makes a large jump in its lifetime (if individual $i$ is a leaf, $b_\varepsilon$ gives an upper bound on this probability). By Assumption [\[assumptions2\]](#assumptions2){reference-type="ref" reference="assumptions2"} [\[assumptions2_B\]](#assumptions2_B){reference-type="ref" reference="assumptions2_B"}, $b_\varepsilon \to 0$ as $\varepsilon \to 0$. [\[vtimesdef\]]{#vtimesdef label="vtimesdef"} Let $\varepsilon>0$. For $p:\mathbb{R}\to [0,1]$, we define a *marked majority* voting procedure on ${{\cal T}(\boldsymbol{B}_{R^\varepsilon}(t))}$ as follows.
(1) At each branch point in ${{\cal T}(\boldsymbol{B}_{R^\varepsilon}(t))}$, the parent particle $j$ marks each of her three offspring $(j,1), (j,2)$ and $(j,3)$ independently with probability $b_\varepsilon$. Each marked particle (independently) votes $1$ with probability $\tfrac{1}{2}$ and otherwise votes $0$. (2) Each unmarked leaf $i$ of ${{\cal T}(\boldsymbol{B}_{R^\varepsilon}(t))}$, independently, votes $1$ with probability\ $p(B_i(R^\varepsilon_i))$ and otherwise votes $0$. (3) At each branch point in ${{\cal T}(\boldsymbol{B}_{R^\varepsilon}(t))}$, if the parent particle $k$ is unmarked, she votes according to the majority vote of her three offspring $(k,1), (k,2)$ and $(k,3)$. Observe that the initial ancestor of ${\cal T}(\boldsymbol{B}_{R^\varepsilon}(t))$ is never marked under this procedure. With the marked majority voting procedure described above, define $\mathbb{V}^\times_p$ to be the vote associated to the root $\emptyset$ of ${{\cal T}(\boldsymbol{B}_{R^\varepsilon}(t))}$. The marked majority voting procedure makes it more difficult for the root of ${{\cal T}(\boldsymbol{B}_{R^\varepsilon}(t))}$ (rooted at $x>0$) to vote $1$ compared to majority voting. Indeed, even if all three offspring vote $1$ at a branch point, under marked majority voting the parent particle can still vote $0$ with positive probability. This can be viewed as the penalty one must pay for truncating the underlying spatial motion (where this truncation really takes place on the subordinator).
In other words, to couple ${\mathbb{V}}^\times(\boldsymbol{B}_{R^\varepsilon}(t))$ to ${\mathbb{V}}(\boldsymbol{X}(t)),$ the voting procedure ${\mathbb{V}}^\times$ should make it more difficult for individuals to vote $1$, to compensate for the new underlying spatial motion, which makes it easier for individuals to vote $1$ (since, for a tree rooted at $x>0$, $\boldsymbol{B}_{R^\varepsilon}(t)$ is more likely to remain on the right-half line than $\boldsymbol{X}(t)$).\ Here, we will use the same initial condition $\widehat{p}_0$ ([\[phat\]](#phat){reference-type="ref" reference="phat"}) that we used for exponentially marked voting, and write $\mathbb{V}^\times := {\mathbb{V}}^\times_{\widehat{p}_0}.$ With this choice of initial condition, the marked majority voting system, $\mathbb{V}^\times$, retains many of the symmetry relations exploited in [@etheridge2017branching], which we have already used here for ${\mathbb{V}}$ and $\widehat{{\mathbb{V}}}$. Namely, for all $x_1, x_2\in \mathbb{R}$ with $x_1\leq x_2$, $$\begin{aligned} \label{monotonicity} \mathbb{P}^\varepsilon_{x_1}\left[\mathbb{V}^{\times}(\boldsymbol{B}_{R^\varepsilon}(t))=1\right] \leq \mathbb{P}_{x_2}^\varepsilon \left[\mathbb{V}^{\times}(\boldsymbol{B}_{R^\varepsilon}(t))=1\right],\end{aligned}$$ and, for any time-labelled tree ${\cal T}$, if we set $${\accentset{\times}{\mathbb{P}}}^{_{_{\text{\scriptsize{$t$} } }}}_{x}({\cal T}) = \mathbb{P}_x^\varepsilon\left[\mathbb{V}^{\times}(\boldsymbol{B}_{R^\varepsilon}(t))=1 \left \vert \right.{\cal T}(\boldsymbol{B}_{R^\varepsilon}(t))={\cal T}\right],$$ then by symmetry of the historical stable process and exchangeability of $0$ and $1$ in the marked voting procedure, $$\begin{aligned} \label{symmetry} {\accentset{\times}{\mathbb{P}}}^{_{_{\text{\scriptsize{$t$} } }}}_x({\cal T}) = 1-{\accentset{\times}{\mathbb{P}}}^{_{_{\text{\scriptsize{$t$} } }}}_{-x}({\cal T}) \end{aligned}$$ for all ${\cal T}$, $x\in \mathbb{R},$ and $t\geq 0$.
Setting $x=0$ in ([\[symmetry\]](#symmetry){reference-type="ref" reference="symmetry"}) gives ${\accentset{\times}{\mathbb{P}}}^{_{_{\text{\scriptsize{$t$} } }}}_0({\cal T})=\frac{1}{2}$, so by monotonicity ([\[monotonicity\]](#monotonicity){reference-type="ref" reference="monotonicity"}), for any time-labelled ternary tree ${\cal T}$, $$\begin{aligned} \label{coco}{\accentset{\times}{\mathbb{P}}}^{_{_{\text{\scriptsize{$t$} } }}}_{x}({\cal T}) \geq \tfrac{1}{2} \ \text{ for } x>0, \text{ and } \, {\accentset{\times}{\mathbb{P}}}^{_{_{\text{\scriptsize{$t$} } }}}_x({\cal T}) \leq \tfrac{1}{2} \ \text{ for } x<0. \end{aligned}$$ We next introduce notation for our marked majority voting procedure. Recall that $g$ from ([\[majorityvotingfunction\]](#majorityvotingfunction){reference-type="ref" reference="majorityvotingfunction"}) is the majority voting function associated to $\mathbb{V}$. Define the *marked majority voting function* ${g_\times}: [0,1]^3 \to [0,1]$ by $${g_\times}(p_1, p_2, p_3):= g\left((1-b_\varepsilon)p_1 + \tfrac{b_\varepsilon}{2},(1-b_\varepsilon)p_2 + \tfrac{b_\varepsilon}{2},(1-b_\varepsilon)p_3 + \tfrac{b_\varepsilon}{2} \right).$$ This is the probability that an unmarked parent particle votes $1$ under $\mathbb{V}^\times$, in the special case when the three offspring are independent and have probabilities $p_1, p_2$ and $p_3$ of voting $1$ if they are unmarked. We abuse notation and write ${g_\times}(q) := {g_\times}(q, q, q)$. It is easy to check using symmetry of the majority voting function ([\[symmetryofg\]](#symmetryofg){reference-type="ref" reference="symmetryofg"}) that, for all $q\in [0,1]$, $$\begin{aligned} \label{symmetryofmarkedg} g_\times(q) = 1-g_\times(1-q).\end{aligned}$$ Let $\{u_-, \tfrac{1}{2}, u_+\}$ be the three solutions to ${g_\times}(q) = q$, satisfying $0<u_-<\tfrac{1}{2}<u_+<1$. 
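For illustration, the voting function and its fixed points can be checked numerically. The sketch below assumes the standard three-voter majority rule $g(p)=3p^2-2p^3$ (the probability that at least two of three independent Bernoulli($p$) voters vote $1$; this choice is consistent with the values $g'(\tfrac{1}{2})=\tfrac{3}{2}$ and $g'(0)=0$ used later in this section), and all function names are ours.

```python
import math

def g(p):
    # three-voter majority rule: at least two of three independent
    # Bernoulli(p) voters vote 1 (an assumed form; see the lead-in)
    return 3 * p**2 - 2 * p**3

def g_times(q, b):
    # marked majority voting function: each offspring is first replaced
    # by a fair coin with probability b, then the majority is taken
    return g((1 - b) * q + b / 2)

def fixed_points(b):
    # closed-form solutions u_-, 1/2, u_+ of g_times(q, b) = q
    r = math.sqrt((1 - b)**3 * (1 - 3 * b)) / (2 * (1 - b)**3)
    return 0.5 - r, 0.5, 0.5 + r

b = 0.01
u_minus, half, u_plus = fixed_points(b)
# all three are fixed points, ordered as 0 < u_- < 1/2 < u_+ < 1
for u in (u_minus, half, u_plus):
    assert abs(g_times(u, b) - u) < 1e-12
assert 0 < u_minus < 0.5 < u_plus < 1
# the symmetry g_times(q) = 1 - g_times(1 - q) forces u_- + u_+ = 1
assert abs(u_minus + u_plus - 1) < 1e-12
# Taylor expansion: u_- = (3/4) b^2 + O(b^3)
assert abs(u_minus - 0.75 * b**2) < 3 * b**3
```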
The exact derivation of these fixed points can be found in Proposition [\[appendixfixedpointsofg\]](#appendixfixedpointsofg){reference-type="ref" reference="appendixfixedpointsofg"}. It will be useful later to note that these fixed points can be approximated by Taylor expansion as $$\begin{aligned} u_- &= \frac{1}{2} - \frac{\sqrt{(1 - b_\varepsilon)^3 (1 - 3 b_\varepsilon)}}{2 (1 - b_\varepsilon)^3} = \frac{3}{4}b_\varepsilon^2 +\mathcal{O}(b_\varepsilon^3) \label{asympfixedpoint1}\\ u_+ &= \frac{1}{2} + \frac{\sqrt{(1 - b_\varepsilon)^3 (1 - 3 b_\varepsilon)}}{2 (1 - b_\varepsilon)^3} = 1- \frac{3}{4}b_\varepsilon^2 +\mathcal{O}(b_\varepsilon^3). \label{asympfixedpoint2}\end{aligned}$$ Henceforth, these fixed points $u_+$ and $u_-$ will be used to define the initial condition $$\widehat{p}_0(x) = u_+ \mathbbm{1}_{\{x\geq 0\}} + u_- \mathbbm{1}_{\{x\leq 0\}}.$$ We now return to the coupling of ${\mathbb{V}}^\times$ and the original voting system ${\mathbb{V}}$ via the intermediate system $\widehat{\mathbb{V}}$. [\[teo:ineqmark2\]]{#teo:ineqmark2 label="teo:ineqmark2"} Let $\widehat{\mathbb{V}}$ be the exponentially marked majority voting system (Definition [\[definitionnonmarkov\]](#definitionnonmarkov){reference-type="ref" reference="definitionnonmarkov"}) and $\mathbb{V}^\times$ be the marked majority voting system (Definition [\[vtimesdef\]](#vtimesdef){reference-type="ref" reference="vtimesdef"}), both with initial condition $\widehat{p}_0(x) = u_+\mathbbm{1}_{\{x\geq 0\}} + u_-\mathbbm{1}_{\{x\leq 0\}}$. Then, for all $x\in \mathbb{R}$ and $t\geq 0$, $$\begin{aligned} \mathbb{P}^\varepsilon_x\left[\widehat{{\mathbb{V}}}(\boldsymbol{B}_{R^\varepsilon}(t))=1\right] = \mathbb{P}^\varepsilon_x\left[{\mathbb{V}}^\times(\boldsymbol{B}_{R^\varepsilon}(t))=1\right](1-b_\varepsilon) +\frac{b_\varepsilon}{2}.\end{aligned}$$ *Proof.* Recall that, under the voting system $\widehat{\mathbb{V}}$, each particle $i$ is marked if $\tau^{\times}_i<\tau_i$.
By definition of $b_\varepsilon$, all particles in ${\cal T}(\boldsymbol{B}_{R^\varepsilon}(t))$ are marked with probability $b_\varepsilon$ under both $\widehat{{\mathbb{V}}}$ and ${\mathbb{V}}^\times$, except for the ancestral particle, which remains unmarked under ${\mathbb{V}}^\times$ by definition. Conditioning on the marking of the ancestral particle in $\widehat{\mathbb{V}}$, we obtain $$\begin{aligned} \mathbb{P}^\varepsilon_x\left[\widehat{{\mathbb{V}}}(\boldsymbol{B}_{R^\varepsilon}(t))=1\right] &= \mathbb{P}^\varepsilon_x\left[\widehat{{\mathbb{V}}}(\boldsymbol{B}_{R^\varepsilon}(t))=1 \left \vert \right.\tau^\times > \tau_0\right]\mathbb{P}^\varepsilon_x[\tau^\times > \tau_0] \\ & \ \ + \mathbb{P}^\varepsilon_x\left[\widehat{{\mathbb{V}}}(\boldsymbol{B}_{R^\varepsilon}(t))=1 \left \vert \right.\tau^\times \leq \tau_0\right]\mathbb{P}^\varepsilon_x[\tau^\times \leq \tau_0] \\ &= \mathbb{P}^\varepsilon_x\left[\mathbb{V}^{\times}(\boldsymbol{B}_{R^\varepsilon}(t))=1\right](1-b_\varepsilon) +\frac{b_\varepsilon}{2},\end{aligned}$$ where the last line follows by definition of $b_\varepsilon$. ◻ Finally, by Theorem [\[teo:ineqmark\]](#teo:ineqmark){reference-type="ref" reference="teo:ineqmark"} and Theorem [\[teo:ineqmark2\]](#teo:ineqmark2){reference-type="ref" reference="teo:ineqmark2"}, to prove our main one-dimensional result, Theorem [\[maintheorem1D\]](#maintheorem1D){reference-type="ref" reference="maintheorem1D"}, it suffices to show the following. [\[mainteo1dmarked\]]{#mainteo1dmarked label="mainteo1dmarked"} Suppose $I$ satisfies Assumptions [\[assumptions2\]](#assumptions2){reference-type="ref" reference="assumptions2"} [\[assumptions2_A\]](#assumptions2_A){reference-type="ref" reference="assumptions2_A"}-[\[assumptions2_B\]](#assumptions2_B){reference-type="ref" reference="assumptions2_B"}. Fix $k\in \mathbb{N}$ and $T^*\in (0,\infty)$.
Let $u_+, u_-$ be as in ([\[asympfixedpoint1\]](#asympfixedpoint1){reference-type="ref" reference="asympfixedpoint1"}) and ([\[asympfixedpoint2\]](#asympfixedpoint2){reference-type="ref" reference="asympfixedpoint2"}). Then there exist $c_1(\alpha, k), \varepsilon_1(\alpha, k)>0$ such that, for all $t\in [0,T^*]$ and all $\varepsilon \in (0, \varepsilon_1(k)),$ (1) for $x \geq c_1(k)I(\varepsilon)|\log\varepsilon|$, we have $\mathbb{P}^\varepsilon_x\left[\mathbb{V}^{\times}(\boldsymbol{B}_{R^\varepsilon}(t))=1\right] \geq u_+ - \varepsilon^k,$ (2) for $x \leq -c_1(k)I(\varepsilon)|\log\varepsilon|$, we have $\mathbb{P}^\varepsilon_x\left[\mathbb{V}^{\times}(\boldsymbol{B}_{R^\varepsilon}(t))=1\right] \leq u_- +\varepsilon^k$. Observe that the sharpness of the interface in Theorem [\[mainteo1dmarked\]](#mainteo1dmarked){reference-type="ref" reference="mainteo1dmarked"} is of the same order as the sharpness from the Brownian result, Theorem [\[browniantheorem\]](#browniantheorem){reference-type="ref" reference="browniantheorem"}. This is a result of the truncated subordinated Brownian motion behaving similarly to a Brownian motion (as discussed in Section [3.2](#choice_scaling_sec){reference-type="ref" reference="choice_scaling_sec"}). 
[\[cool\]]{#cool label="cool"} By a similar proof to that of Theorem [\[votedual\]](#votedual){reference-type="ref" reference="votedual"}, we see that $$u^\varepsilon(t, x) = \mathbb{P}^\varepsilon_x\left[\mathbb{V}^{\times}(\boldsymbol{B}_{R^\varepsilon}(t))=1\right]$$ is a solution to the equation $$\begin{aligned} \nonumber \partial_t u^\varepsilon &= \mathcal{L}^\varepsilon u^\varepsilon + {\varepsilon^{-2}} (g_\times(u^\varepsilon) - u^\varepsilon)\\ &= \mathcal{L}^\varepsilon u^\varepsilon + \mathcal{O}\left({\varepsilon^2}{I(\varepsilon)^{-4}}\right)\left(\tfrac{1}{2}-u^\varepsilon\right)^3 + \mathcal{O}\left({\varepsilon^{-2}}\right)u^\varepsilon(2u^\varepsilon-1)(1-u^\varepsilon)\label{icecream}\end{aligned}$$ with initial condition $u^\varepsilon(0, x) = \widehat{p}_0(x)$, where $\mathcal{L}^\varepsilon$ denotes the infinitesimal generator of $(B(R^\varepsilon_t))_{t\geq 0}$. Rather remarkably, the work from this section tells us that solutions to ([\[icecream\]](#icecream){reference-type="ref" reference="icecream"}) and ([\[mainequation22\]](#mainequation22){reference-type="ref" reference="mainequation22"}) are related. More precisely, the couplings from Theorem [\[teo:ineqmark\]](#teo:ineqmark){reference-type="ref" reference="teo:ineqmark"} and Theorem [\[teo:ineqmark2\]](#teo:ineqmark2){reference-type="ref" reference="teo:ineqmark2"} tell us that solutions to equation ([\[icecream\]](#icecream){reference-type="ref" reference="icecream"}) (after transformation by the function $v\mapsto (1-b_\varepsilon)v+\tfrac{b_\varepsilon}{2}$) are lower and upper bounds to solutions of the scaled fractional Allen--Cahn equation ([\[mainequation22\]](#mainequation22){reference-type="ref" reference="mainequation22"}) restricted to $x\geq 0$ and $x\leq 0$, respectively. It would be interesting to see if this relationship provides any insights into a PDE-theoretic proof of our main result. 
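To make the reaction term in ([\[icecream\]](#icecream){reference-type="ref" reference="icecream"}) concrete, a short numerical sketch (again assuming, for illustration only, the three-voter majority rule $g(p)=3p^2-2p^3$, with an arbitrary small value standing in for $b_\varepsilon$) confirms that the unmarked reaction term $g(u)-u$ is exactly the cubic nonlinearity $u(2u-1)(1-u)$, and that marking perturbs it uniformly by $\mathcal{O}(b_\varepsilon)$:

```python
def g(p):
    # three-voter majority rule (assumed form; see the lead-in)
    return 3 * p**2 - 2 * p**3

def g_times(u, b):
    # marked majority voting function
    return g((1 - b) * u + b / 2)

def cubic(u):
    # the cubic nonlinearity appearing in the reaction term
    return u * (2 * u - 1) * (1 - u)

b = 1e-3
for i in range(101):
    u = i / 100
    # unmarked case: g(u) - u = u(2u-1)(1-u) exactly
    assert abs((g(u) - u) - cubic(u)) < 1e-12
    # marking shifts the reaction term by O(b), uniformly in u
    assert abs((g_times(u, b) - u) - cubic(u)) < 5 * b
```

The second bound follows from $|g_\times(u)-g(u)| \leq \sup|g'|\cdot b\,|\tfrac{1}{2}-u| \leq \tfrac{3}{4}b$ for this choice of $g$.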
## Proof of Theorem [\[mainteo1dmarked\]](#mainteo1dmarked){reference-type="ref" reference="mainteo1dmarked"} {#saltchips} We now prove Theorem [\[mainteo1dmarked\]](#mainteo1dmarked){reference-type="ref" reference="mainteo1dmarked"}. To do so, we will adapt ideas from both [@etheridge2017branching; @durrett2020motion]. In [@etheridge2017branching], the majority voting function $g$ was used throughout, while [@durrett2020motion] builds upon this work and considers more general voting functions. This makes [@durrett2020motion] useful when proving results about the marked majority voting function $g_\times$.\ Throughout this section, we take the initial condition $$\widehat{p}_0(x)=u_+ \mathbbm{1}_{\{x\geq0\}}+u_-\mathbbm{1}_{\{x\leq0\}}.$$ Our next result verifies that the marked majority voting procedure cannot reduce the positive voting bias on the leaves when the root, $x$, is non-negative. Here, we say a leaf has a 'positive voting bias' if it has a preference for voting one instead of zero, which is the case when the tree is rooted at a non-negative point $x$. Once we have shown Lemma [\[ineq:nobranch\]](#ineq:nobranch){reference-type="ref" reference="ineq:nobranch"}, we will follow the strategy of [@etheridge2017branching] to show that, after enough time has passed, with high probability enough rounds of voting have occurred to ensure that the positive voting bias at a leaf is amplified to a large voting bias at the root.\ By the symmetry ([\[symmetry\]](#symmetry){reference-type="ref" reference="symmetry"}), a similar result to Lemma [\[ineq:nobranch\]](#ineq:nobranch){reference-type="ref" reference="ineq:nobranch"} will hold for the negative voting bias on the leaves when $x<0$. In view of this symmetry, we often state results only for positive $x$ when convenient to do so. 
Recall that $${\accentset{\times}{\mathbb{P}}}^{_{_{\text{\scriptsize{$t$} } }}}_{x}({\cal T}) := \mathbb{P}_x^\varepsilon\left[\mathbb{V}^{\times}(\boldsymbol{B}_{R^\varepsilon}(t))=1 \left \vert \right.{\cal T}(\boldsymbol{B}_{R^\varepsilon}(t))={\cal T}\right].$$ [\[ineq:nobranch\]]{#ineq:nobranch label="ineq:nobranch"} For any time-labelled ternary tree ${\cal T}$, time $t>0$, and any $x\geq 0$, [$$\begin{aligned} {\accentset{\times}{\mathbb{P}}}^{_{_{\text{\scriptsize{$t$} } }}}_x({\cal T}) &\geq u_+ \mathbb{P}_x[ B(R^\varepsilon_t) \geq 0] + u_-\mathbb{P}_x[B(R_t^\varepsilon) \leq 0].\end{aligned}$$]{.upright} *Proof.* This proof follows exactly [@durrett2020motion Lemma 3.1], by an inductive argument on the number of branching events in ${\cal T}(\boldsymbol{B}_{R^\varepsilon}(t))$ together with symmetry of the voting function $g_\times$ and symmetry of the transition density for $(B(R^\varepsilon_t))_{t\geq 0}.$ ◻ Lemma [\[ineq:nobranch\]](#ineq:nobranch){reference-type="ref" reference="ineq:nobranch"} partly motivated our definitions of $\widehat{{\mathbb{V}}}$ and ${\mathbb{V}}^\times$. Recall that marked individuals in ${\mathbb{V}}^\times$ and $\widehat{{\mathbb{V}}}$ vote $1$ or $0$ with equal probability. However, the proof of Theorem [\[teo:ineqmark\]](#teo:ineqmark){reference-type="ref" reference="teo:ineqmark"} would have simplified greatly if we had asked marked individuals under $\widehat{{\mathbb{V}}}$ to vote $0$ with probability $1$. Technically, this version of $\widehat{{\mathbb{V}}}$ would only satisfy part (1) of Theorem [\[teo:ineqmark\]](#teo:ineqmark){reference-type="ref" reference="teo:ineqmark"}. To obtain part (2) of Theorem [\[teo:ineqmark\]](#teo:ineqmark){reference-type="ref" reference="teo:ineqmark"}, one would need to define $\widehat{{\mathbb{V}}}$ so that marked individuals vote $1$ with probability $1$. In fact, we will explore these voting systems more in Section [\[ch:3\]](#ch:3){reference-type="ref" reference="ch:3"}. 
Unlike $g_\times$, which satisfies ([\[symmetryofmarkedg\]](#symmetryofmarkedg){reference-type="ref" reference="symmetryofmarkedg"}), the voting function corresponding to this other system would not be symmetric. As a result, the proof of [@durrett2020motion Lemma 3.1] would no longer apply, and we are unsure if Lemma [\[ineq:nobranch\]](#ineq:nobranch){reference-type="ref" reference="ineq:nobranch"} would hold at all. The symmetry of $g_\times$ will be used in several other proofs throughout this section as well.\ We now show that the iterative voting procedure amplifies a small positive bias at the leaves to a large positive voting bias at the root. To do this, we define $g^{(n)}_\times(q)$ inductively by $$g^{(1)}_\times(q) = {g_\times}(q), \ g^{(n+1)}_\times(q) = g^{(n)}_\times({g_\times}(q)).$$ Noting that the ancestral particle is never marked under $\mathbb{V}^\times$, we see that $g^{(n)}_\times(q)$ is the probability of voting $1$ at the root of an $n$-level regular ternary tree under $\mathbb{V}^\times$ if the votes of the *unmarked* leaves are i.i.d. Bernoulli($q$). In the following result, we consider the rate of convergence of $g_\times$ to its fixed points. Let $I$ be a scaling function satisfying Assumptions [\[assumptions2\]](#assumptions2){reference-type="ref" reference="assumptions2"} [\[assumptions2_A\]](#assumptions2_A){reference-type="ref" reference="assumptions2_A"}-[\[assumptions2_B\]](#assumptions2_B){reference-type="ref" reference="assumptions2_B"} throughout. [\[lemma:iterative_voting\]]{#lemma:iterative_voting label="lemma:iterative_voting"} Fix $k\in \mathbb{N}$. 
There exists $A(k)<\infty$ and $\varepsilon_1(k)>0$ such that, for all $\varepsilon \in (0, \varepsilon_1)$ and $n \geq A(k)|\log \varepsilon|$, $$g^{(n)}_\times\left( \tfrac{1}{2} + \varepsilon \right) \geq u_+ - \varepsilon^{k} \ \text{ and } \ g^{(n)}_\times\left( \tfrac{1}{2} - \varepsilon \right) \leq u_- + \varepsilon^{k}.$$ *Proof.* We follow the proof of [@durrett2020motion Lemma 3.2], with some important changes to reflect that our voting function, $g_\times$, depends on the parameter $\varepsilon$. Recall that $$g_\times(p) = g((1-b_\varepsilon)p+\tfrac{b_\varepsilon}{2})$$ where $b_\varepsilon = \mathcal{O}\left(\tfrac{\varepsilon^2}{I(\varepsilon)^2}\right)$. We prove only the first inequality since the second follows by completely symmetric arguments. First, we show that there exists $C_k>0$ and some fixed $q_0>0$ such that, after $n\geq C_k |\log\varepsilon|$ iterations, $$\begin{aligned} \label{geqn1} g^{(n)}_\times(u_+-q) \geq u_+-\varepsilon^k\end{aligned}$$ for all $q\leq q_0$. We then show that there exists $D>0$ such that, after $n \geq D|\log\varepsilon|$ iterations, $$\begin{aligned} \label{geqn2} g_\times^{(n)}\left(\tfrac{1}{2}+\varepsilon\right)\geq u_+-q_0.\end{aligned}$$ Combining ([\[geqn1\]](#geqn1){reference-type="ref" reference="geqn1"}) and ([\[geqn2\]](#geqn2){reference-type="ref" reference="geqn2"}) then gives the result. To prove ([\[geqn1\]](#geqn1){reference-type="ref" reference="geqn1"}), choose $\varepsilon_1$ sufficiently small so that $b_\varepsilon \leq \frac{1}{6}$ for all $\varepsilon \in (0,\varepsilon_1)$. Then $$\begin{aligned} \label{half} g_\times'\left( \tfrac{1}{2} \right) &= (1-b_\varepsilon) g'\left(\tfrac{1}{2} \right) =\tfrac{3}{2} (1-b_\varepsilon) \geq \tfrac{5}{4} > 1. 
\end{aligned}$$ Next, using that $g'(0)=0$ and $g'$ is continuous, together with the estimate ([\[asympfixedpoint1\]](#asympfixedpoint1){reference-type="ref" reference="asympfixedpoint1"}), for $\varepsilon_1$ sufficiently small we have $$g'\left(u_-(1-b_\varepsilon)+\tfrac{b_\varepsilon}{2}\right) < \tfrac{1}{4}$$ for all $\varepsilon\in (0,\varepsilon_1).$ It follows that, for this choice of $\varepsilon_1$, $$g_\times'(u_-) =(1-b_\varepsilon) g'\left(u_-(1-b_\varepsilon)+ \tfrac{b_\varepsilon}{2}\right) < \tfrac{1}{4}.$$ Since $g_\times'\left(\tfrac{1}{2}\right)>1$ and $g_\times'$ is continuous, $$\begin{aligned} q_0 := \inf \left\{ q \geq 0 : g_\times'(u_-+q) \geq \tfrac{1}{2} \right\} > 0.\end{aligned}$$ By the Mean Value Theorem and definition of $q_0$, together with the symmetry of the marked voting function ([\[symmetryofmarkedg\]](#symmetryofmarkedg){reference-type="ref" reference="symmetryofmarkedg"}), for all $q < q_0$ $$u_+ - g_\times(u_+ - q) = g_\times(u_- +q) - u_- \leq q\, g_\times'(u_-+q_0) = \tfrac{q}{2}.$$ Iterating this yields $$u_+ - g_\times^{(n)}(u_+-q) \leq \tfrac{q}{2^n} \leq \tfrac{1}{2^n}.$$ It follows that there exists $C_k >0$ such that if $n \geq C_k |\log \varepsilon|$ and $q \leq q_0$ then $$g_\times^{(n)}(u_+-q) \geq u_+ - \varepsilon^k,$$ thereby proving ([\[geqn1\]](#geqn1){reference-type="ref" reference="geqn1"}). We now prove ([\[geqn2\]](#geqn2){reference-type="ref" reference="geqn2"}). By ([\[half\]](#half){reference-type="ref" reference="half"}), our choice of $\varepsilon_1$ ensures that $g_\times'\left(\tfrac{1}{2}\right)>1$ for all $\varepsilon \in (0, \varepsilon_1)$.
Since $g_\times$ is increasing and $u_-, \tfrac{1}{2}, u_+$ are the only fixed points of $g_\times$, we have $$\begin{aligned} g_\times(q) > q \text{ for all } q \in \left(\tfrac{1}{2}, u_+ \right).\end{aligned}$$ By definition of $q_0$ and since $g_\times'$ is increasing on $(0, \tfrac{1}{2})$, $u_+-q_0-\tfrac{1}{2} = \tfrac{1}{2}-q_0-u_->0$. Therefore, as the infimum of a continuous, strictly positive function over a compact interval, $$\begin{aligned} q_1 := \inf_{\varepsilon\leq q \leq u_+-q_0-\frac{1}{2}} \frac{g_\times\left(\tfrac{1}{2}+q\right)-\left(\frac{1}{2}+q\right)}{q} > 0, \end{aligned}$$ and so for $q \in \left[\varepsilon,u_+-q_0-\frac{1}{2}\right]$, by definition of $q_1$ we have $$\begin{aligned} \label{rowrowrow} g_\times\left(\tfrac{1}{2}+q\right)-\tfrac{1}{2} \geq (1+q_1)q.\end{aligned}$$ Now, if $g_\times\left(\frac{1}{2}+\varepsilon\right) \geq u_+ -q_0$, we are done. If not, we can apply ([\[rowrowrow\]](#rowrowrow){reference-type="ref" reference="rowrowrow"}) twice to obtain $$\begin{aligned} g_\times^{(2)}\left(\tfrac{1}{2}+\varepsilon\right)&=g_\times\left(\tfrac{1}{2} + \left(g_\times(\tfrac{1}{2}+\varepsilon) - \tfrac{1}{2}\right)\right) \\ &\geq (1+q_1)\left[g_\times(\tfrac{1}{2}+\varepsilon)-\tfrac{1}{2}\right] + \tfrac{1}{2}\\ &\geq (1+q_1)^2\varepsilon+\tfrac{1}{2}.\end{aligned}$$ Repeating this argument $n-1$ times, we obtain $$g_\times^{(n)}\left( \tfrac{1}{2} + \varepsilon \right) \geq (1+q_1)^n \varepsilon + \tfrac{1}{2}.$$ It follows that, for $D := \tfrac{1}{\log(1+q_1)}$, after $n > D |\log \varepsilon|$ iterations, $$g_\times^{(n)}\left( \tfrac{1}{2} + \varepsilon \right) \geq u_+ - q_0.$$ Setting $A = C_k + D$ proves the result. ◻ The following useful inequality for $g_\times$ will be used in the proof of Theorem [\[mainteo1dmarked\]](#mainteo1dmarked){reference-type="ref" reference="mainteo1dmarked"}.
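The amplification described by Lemma [\[lemma:iterative_voting\]](#lemma:iterative_voting){reference-type="ref" reference="lemma:iterative_voting"} can also be observed numerically. In the sketch below (illustration only: $g$ is again taken to be the three-voter majority rule $3p^2-2p^3$, and the constants are arbitrary choices rather than the $A(k)$ and $b_\varepsilon$ of the lemma), a bias of $\pm\varepsilon$ at the leaves is driven towards the fixed points $u_\pm$ after order $|\log\varepsilon|$ rounds of voting:

```python
import math

def g(p):
    # three-voter majority rule (assumed form; see the lead-in)
    return 3 * p**2 - 2 * p**3

def g_times(q, b):
    # marked majority voting function
    return g((1 - b) * q + b / 2)

def iterate(q, b, n):
    # g_times composed with itself n times: n rounds of voting
    for _ in range(n):
        q = g_times(q, b)
    return q

eps, b = 1e-3, 1e-4
n = int(40 * abs(math.log(eps)))   # O(|log eps|) rounds suffice
up = iterate(0.5 + eps, b, n)
down = iterate(0.5 - eps, b, n)
# the small biases are amplified towards u_+ ~ 1 - (3/4) b^2 and
# u_- ~ (3/4) b^2 respectively
assert up > 0.99 and down < 0.01
# the symmetry g_times(q) = 1 - g_times(1 - q) is preserved by iteration
assert abs(up + down - 1) < 1e-9
```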
[\[easyboundg\]]{#easyboundg label="easyboundg"} If $p_1,p_2,p_3 \geq \tfrac{1}{2}$ then $$g_\times(p_1,p_2,p_3) \geq \min (p_1,p_2,p_3,u_+).$$ If $p_1,p_2,p_3 \leq \tfrac{1}{2}$ then $$g_\times(p_1,p_2,p_3) \leq \max(p_1,p_2,p_3,u_-).$$ *Proof.* We prove only the first inequality, since the second one follows by a symmetric argument. Denote $$p_{\min} = \min\{ p_1,p_2,p_3,u_+\}.$$ If $p_1, p_2, p_3 \geq \tfrac{1}{2},$ it follows that $\tfrac{1}{2} \leq p_{\min} \leq u_+$. Since $g_\times$ is increasing in each variable, $$g_\times(p_1,p_2,p_3) \geq g_\times(p_{\min}).$$ Therefore it suffices to show that $g_\times(p_{\min}) \geq p_{\min}$. For this recall that $\{u_-,\tfrac{1}{2},u_+\}$ are the only fixed points of $g_\times$. Factorising $g_\times(p_{\min})-p_{\min}$ yields $$g_\times(p_{\min}) - p_{\min} = (1-b_\varepsilon)^3(p_{\min}- u_-)(2p_{\min}-1)(u_+-p_{\min}).$$ Since $\tfrac{1}{2} \leq p_{\min} \leq u_+$, $g_\times(p_{\min})-p_{\min} \geq 0$, as required. ◻ The following lemma states that, with high probability, by time $t\geq a_1\varepsilon^2 |\log\varepsilon|$, each ancestral line of descent in ${\cal T}(\boldsymbol{B}_{R^\varepsilon}(t))$ contains at least a constant multiple of $|\log \varepsilon|$ branching events. Let $${\cal T}_n^{reg} = \cup_{k\leq n} \{1, 2, 3\}^k \subset \mathcal{U}$$ denote the $n$-level regular ternary tree, and for $l\in \mathbb{R}$, let ${\cal T}^{reg}_l = {\cal T}^{reg}_{\lceil l \rceil}$. For ${\cal T}$ a time-labelled ternary tree, we use ${\cal T}^{reg}_l\subseteq {\cal T}$ to mean that, as subtrees of $\mathcal{U}$, ${\cal T}^{reg}_l$ is contained inside ${\cal T}$ (ignoring time labels). [\[defnofa\]]{#defnofa label="defnofa"} Let $k\in \mathbb{N}$ and $A(k)$ be as in Lemma [\[lemma:iterative_voting\]](#lemma:iterative_voting){reference-type="ref" reference="lemma:iterative_voting"}. 
There exist $a_1(\alpha, k) >0$ and $\varepsilon_1(\alpha, k)>0$ such that, for all $\varepsilon \in (0, \varepsilon_1)$ and $t \geq a_1(k)\varepsilon^2 |\log \varepsilon|$, $$\begin{aligned} \mathbb{P}^\varepsilon \left[\mathcal{T}\left(\boldsymbol{B}_{R^\varepsilon}(t)\right) \supseteq \mathcal{T}^{\text{reg}}_{A(k)|\log\varepsilon|}\right]\geq 1 - \varepsilon^{k}. \end{aligned}$$ *Proof.* This proof proceeds exactly as that of [@etheridge2017branching Lemma 2.10], where the authors estimate the probability that a single leaf of $\mathcal{T}^{\text{reg}}_{A(k)|\log\varepsilon|}$ is not in $\mathcal{T}\left(\boldsymbol{B}_{R^\varepsilon}(t)\right)$ and combine this with a union bound summing over all leaves. ◻ Next, we control the displacement of leaves from the root of $\boldsymbol{B}_{R^\varepsilon}(t)$. [\[displacement lemma\]]{#displacement lemma label="displacement lemma"} Fix $k\in \mathbb{N}$ and let $a_1(k)$ be as in Lemma [\[defnofa\]](#defnofa){reference-type="ref" reference="defnofa"}. There exist $\varepsilon_1(\alpha, k)~>~0$ and $l_1(\alpha, k)>0$ such that, for all $\varepsilon \in (0, \varepsilon_1)$ and $s\leq a_1(k) \varepsilon^2|\log\varepsilon|,$ $$\mathbb{P}^\varepsilon_x\left[\exists i\in N(s) : |B_i(R^\varepsilon_i(s))-x| \geq l_1(k) I(\varepsilon)|\log \varepsilon| \right] \leq \varepsilon^k.$$ Lemma [\[displacement lemma\]](#displacement lemma){reference-type="ref" reference="displacement lemma"} highlights the importance of working with a truncated subordinated Brownian motion instead of the original stable process. In the proof of Lemma [\[displacement lemma\]](#displacement lemma){reference-type="ref" reference="displacement lemma"}, we control the position of the leaves in ${\cal T}(\boldsymbol{B}_{R^\varepsilon}(t))$ using a many-to-one lemma. 
If we were to use the same approach for the stable tree (without any truncation), we could not obtain the polynomial error $\varepsilon^k$ in Lemma [\[displacement lemma\]](#displacement lemma){reference-type="ref" reference="displacement lemma"}, which is crucial to our later proofs. *Proof of Lemma [\[displacement lemma\]](#displacement lemma){reference-type="ref" reference="displacement lemma"}.* First note that for any $m>0$ $$\begin{aligned} \nonumber & \mathbb{P}^\varepsilon_x\left[\exists i\in N(s) : |B_i(R_i^\varepsilon(s))-x| \geq m I(\varepsilon)|\log\varepsilon| \right] \\ & \ \ \ \leq \mathbb{E}^\varepsilon_x[N(s)]\mathbb{P}_0\left[|B(R_s^\varepsilon)| \geq m I(\varepsilon)|\log\varepsilon| \right] \nonumber \\ & \ \ \ = e^{\frac{2 s}{\varepsilon^2}} \mathbb{P}_0\left[|B(R_s^\varepsilon)| \geq m I(\varepsilon)|\log\varepsilon| \right] \nonumber \\ & \ \ \ \leq \varepsilon^{-2a_1} \mathbb{P}_0\left[|B(R_s^\varepsilon)| \geq m I(\varepsilon)|\log\varepsilon|\right].\end{aligned}$$ Denote the density of $R_s^\varepsilon$ by $f_{\varepsilon}.$ Then for any $h\geq 0$, partitioning over the event $\{R^\varepsilon_s\leq hI(\varepsilon)^2|\log\varepsilon|\}$ and its complement, we obtain$$\begin{aligned} &\mathbb{P}_0\left[|B(R_s^\varepsilon)| \geq m I(\varepsilon)|\log\varepsilon|\right] \\ &\leq \int_{0}^{hI(\varepsilon)^2|\log \varepsilon|} \mathbb{P}_0\left[|B_t| \geq m I(\varepsilon)|\log\varepsilon| \, | \ R_s^\varepsilon = t\right]f_\varepsilon(t) dt + \mathbb{P}_0[R_s^\varepsilon \geq hI(\varepsilon)^2 |\log\varepsilon|]\\ &\leq \sup_{0\leq t \leq h I(\varepsilon)^2|\log \varepsilon|}\mathbb{P}_0[|B_t| \geq m I(\varepsilon)|\log\varepsilon|] + \mathbb{P}_0[ R_s^\varepsilon \geq hI(\varepsilon)^2 |\log\varepsilon|].\end{aligned}$$ We bound each term separately. 
First, by a Chernoff bound, for all $t\leq h I(\varepsilon)^2|\log\varepsilon|,$$$\begin{aligned} \mathbb{P}_0\left[|B_t| \geq mI(\varepsilon)|\log \varepsilon|\right] &= \mathbb{P}_0\left[\sqrt{2t}|Z|\geq m I(\varepsilon) |\log\varepsilon|\right]\\ &\leq \mathbb{P}_0\left[\sqrt{2h}|Z|\geq m |\log\varepsilon|^{\frac{1}{2}} \right]\\ &\leq \exp\left(-\tfrac{1}{4}\tfrac{m^2}{h}|\log \varepsilon|\right)\\ &= \varepsilon^{\tfrac{m^2}{4h}}.\end{aligned}$$ Fix $h := k+2a_1(k)+1$. By Lemma [\[boundonsubordinator\]](#boundonsubordinator){reference-type="ref" reference="boundonsubordinator"}, there exists $\varepsilon_1(k)>0$ such that, for all $\varepsilon \in (0,\varepsilon_1)$ $$\begin{aligned} \mathbb{P}_0\left[ R_s^\varepsilon \geq h I(\varepsilon)^2 |\log\varepsilon|\right] &\leq \varepsilon^{k+2a_1(k)}.\end{aligned}$$ Putting this together, we obtain $$\begin{aligned} \mathbb{P}^\varepsilon_x\left[\exists i\in N(s) : |B_i(R_i^\varepsilon(s))-x| \geq m I(\varepsilon)|\log\varepsilon| \right] \leq \varepsilon^k + \varepsilon^{\tfrac{m^2}{4h}-2a_1(k)}\end{aligned}$$ so the result holds by choosing $m = l_1(k)$ sufficiently large. ◻ In the following proof of Theorem [\[mainteo1dmarked\]](#mainteo1dmarked){reference-type="ref" reference="mainteo1dmarked"}, we suppose $x\geq 2l_1(k)I(\varepsilon)|\log\varepsilon|$, $s^*:=a_1(k)\varepsilon^2|\log\varepsilon|$ and consider the cases $t\leq s^*$ and $t\geq s^*$ separately. First, if $t\leq s^*$, by Lemma [\[displacement lemma\]](#displacement lemma){reference-type="ref" reference="displacement lemma"}, with high probability none of the particles in ${\cal T}(\boldsymbol{X}(t))$ have moved a distance further than $l_1(k)I(\varepsilon)|\log\varepsilon|,$ and the result follows easily. 
When $t\geq s^*$, we use Lemma [\[ineq:nobranch\]](#ineq:nobranch){reference-type="ref" reference="ineq:nobranch"} to show that the leaves of $\boldsymbol{B}_{R^\varepsilon}(t)$ have a positive voting bias, which, by Lemma [\[defnofa\]](#defnofa){reference-type="ref" reference="defnofa"} is magnified by $\mathcal{O}(|\log\varepsilon|)$ rounds of voting, so Lemma [\[lemma:iterative_voting\]](#lemma:iterative_voting){reference-type="ref" reference="lemma:iterative_voting"} applies and gives the result. *Proof of Theorem [\[mainteo1dmarked\]](#mainteo1dmarked){reference-type="ref" reference="mainteo1dmarked"}.* Our approach of truncating the stable subordinator now allows us to follow the strategy of proof of [@etheridge2017branching Theorem 2.6]. We suppress the superscript $\varepsilon$ of $\mathbb{P}^\varepsilon_x$ throughout. Fix $k\in \mathbb{N}$ and $T^*\in (0,\infty).$ Let $\varepsilon<\tfrac{1}{2}$ and define $z_\varepsilon$ implicitly by the relation $$\begin{aligned} \label{iloverowing} \mathbb{P}_{z_\varepsilon}[B(R_{T^*}^\varepsilon) \geq 0] = \tfrac{1}{2}+ (u_+-u_-)^{-1}\varepsilon,\end{aligned}$$ and note that $z_\varepsilon \sim \varepsilon \sqrt{4\pi R^\varepsilon_{T^*}}$ as $\varepsilon \to 0$. Moreover, with arbitrarily high probability, $R_{T^*}^\varepsilon$ is bounded above by a constant (depending on $T^*$). Comparing the above estimate with the definition of $z_\varepsilon$, there exists a constant $C(T^*)$ such that $z_\varepsilon \leq C(T^*) \varepsilon$ as $\varepsilon \to 0$ (with arbitrarily high probability). Suppose $\varepsilon_1(k) < \tfrac{1}{2}$ is sufficiently small so that Lemmas [\[defnofa\]](#defnofa){reference-type="ref" reference="defnofa"} and [\[displacement lemma\]](#displacement lemma){reference-type="ref" reference="displacement lemma"} hold for all $\varepsilon \in (0, \varepsilon_1)$. 
Let $c_1(k)=2l_1(k)$, for $l_1(k)$ as in Lemma [\[displacement lemma\]](#displacement lemma){reference-type="ref" reference="displacement lemma"}. By Assumption [\[assumptions2\]](#assumptions2){reference-type="ref" reference="assumptions2"} [\[assumptions2_B\]](#assumptions2_B){reference-type="ref" reference="assumptions2_B"}, $\varepsilon I(\varepsilon)^{-1} \to 0$ as $\varepsilon \to 0$, so we can choose $\varepsilon_1$ sufficiently small so that $$\begin{aligned} l_1(k) I(\varepsilon)|\log\varepsilon| + z_\varepsilon \leq c_1(k) I(\varepsilon)|\log\varepsilon|\end{aligned}$$ for all $\varepsilon\in (0,\varepsilon_1)$ with arbitrarily high probability. Let $a_1$ be as in Lemma [\[defnofa\]](#defnofa){reference-type="ref" reference="defnofa"} and define $$s^*(\varepsilon)= a_1(k)\varepsilon^2|\log\varepsilon|.$$ Let $t\in (0, s^*)$ and $z\geq c_1(k) I(\varepsilon)|\log\varepsilon|$. Note that $g_\times(u_+) = u_+$, so if the initial condition is constant with $p(x) \equiv u_+$, then $$\begin{aligned} \label{con}\mathbb{P}_z\left[{\mathbb{V}}^\times_{u_+}(\boldsymbol{B}_{R^\varepsilon}(t)) = 1\right] = u_+ \text{ for all } t>0, z\in \mathbb{R}. 
\end{aligned}$$ Recall that the initial condition for marked majority voting is chosen to be $\widehat{p}_0 = u_+ \mathbbm{1}_{\{x\geq 0\}} + u_- \mathbbm{1}_{\{x< 0\}}$, so $$\begin{aligned} &\mathbb{P}_z\left[{\mathbb{V}}^\times(\boldsymbol{B}_{R^\varepsilon}(t)) = 0\right] \leq \mathbb{P}_z\left[\exists \, i\in N(t) :\vphantom{{\mathbb{V}}^\times} |B_i(R_i^\varepsilon(t))-z| \geq c_1 I(\varepsilon)|\log\varepsilon| \right] \\ & \ \ \ \ \ \ \ + \mathbb{P}_z\left[\{{\mathbb{V}}^\times(\boldsymbol{B}_{R^\varepsilon}(t)) = 0\} \cap \{\nexists\, i\in N(t) : |B_i(R_i^\varepsilon(t))-z| \geq c_1I(\varepsilon)|\log\varepsilon|\} \right] \\ &\ \ \ \ \ \leq \varepsilon^k + 1-u_+ \\ &\ \ \ \ \ = \varepsilon^k + u_- \end{aligned}$$ where we have used ([\[con\]](#con){reference-type="ref" reference="con"}) in the second inequality. This proves the result when $t<s^*.$ Now suppose $t \in [s^*, T^*]$ and $z\geq c_1(k) I(\varepsilon)|\log\varepsilon|.$ Let ${\cal T}_{s^*} = {\cal T}(\boldsymbol{B}_{R^\varepsilon}(s^*))$ be the time-labelled tree of the branching stable process at time $s^*$. Define $$q_{t-s^*}(z) = \mathbb{P}_z[{\mathbb{V}}^\times(\boldsymbol{B}_{R^\varepsilon}(t-s^*))=1]$$ for all $z\in \mathbb{R}$. Write $\{\boldsymbol{B}_{R^\varepsilon}(s^*)>z_\varepsilon\}$ for the event that $B_i(R^\varepsilon_i(s^*))>z_\varepsilon$ for all $i\in N(s^*)$. 
Then $$\begin{aligned} \label{fries} \mathbb{P}_z[{\mathbb{V}}^\times(\boldsymbol{B}_{R^\varepsilon}(t))=1] &= \mathbb{P}_z\left[\mathbb{V}^\times_{q_{t-s^*}(\cdot)}(\boldsymbol{B}_{R^\varepsilon}(s^*))=1\right] \nonumber \\ &\geq \mathbb{P}_z \left[\left\{\mathbb{V}^\times_{q_{t-s^*}(z_\varepsilon)}(\boldsymbol{B}_{R^\varepsilon}(s^*))=1\right\} \cap \{\boldsymbol{B}_{R^\varepsilon}(s^*)>z_\varepsilon\}\right]\nonumber \\ &\geq \mathbb{P}_z\left[\mathbb{V}^\times_{q_{t-s^*}(z_\varepsilon)}(\boldsymbol{B}_{R^\varepsilon}(s^*))=1\right] - \varepsilon^k,\end{aligned}$$ where the first line follows by the Markov property of $\boldsymbol{B}_{R^\varepsilon}$ at time $s^*$, the second line follows by monotonicity ([\[monotonicity\]](#monotonicity){reference-type="ref" reference="monotonicity"}), and the third line follows by Lemma [\[displacement lemma\]](#displacement lemma){reference-type="ref" reference="displacement lemma"} and our assumption $z\geq c_1 I(\varepsilon)|\log\varepsilon|$. 
Now, by Lemma [\[ineq:nobranch\]](#ineq:nobranch){reference-type="ref" reference="ineq:nobranch"} and the definition of $z_\varepsilon$ ([\[iloverowing\]](#iloverowing){reference-type="ref" reference="iloverowing"}) (noting that $t-s^* \leq T^*$), we have $$\begin{aligned} \label{burgers} q_{t-s^*}(z_\varepsilon) &\geq \mathbb{P}_{z_\varepsilon}[B(R^\varepsilon_{t-s^*}) \geq 0] u_+ + \mathbb{P}_{z_\varepsilon}[B(R^\varepsilon_{t-s^*}) \leq 0] u_- \nonumber\\ &\geq u_+\left( \tfrac{1}{2} + (u_+-u_-)^{-1}\varepsilon \right) + u_-\left( \tfrac{1}{2} - (u_+-u_-)^{-1}\varepsilon \right)\nonumber\\ &= \tfrac{1}{2} + \varepsilon.\end{aligned}$$ Substituting ([\[burgers\]](#burgers){reference-type="ref" reference="burgers"}) into ([\[fries\]](#fries){reference-type="ref" reference="fries"}), we obtain $$\begin{aligned} \label{shake} \mathbb{P}_z\left[{\mathbb{V}}^\times(\boldsymbol{B}_{R^\varepsilon}(t))=1\right] \geq \mathbb{P}_z\left[\mathbb{V}^\times_{\frac{1}{2}+\varepsilon}(\boldsymbol{B}_{R^\varepsilon}(s^*))=1\right]-\varepsilon^k.\end{aligned}$$ Note that, if $p_i \geq \tfrac{1}{2} + \varepsilon$ for $i=1, 2, 3$, then $g_\times(p_1, p_2, p_3) \geq \min(p_1, p_2, p_3,u_+)$ from Lemma [\[easyboundg\]](#easyboundg){reference-type="ref" reference="easyboundg"}. Therefore, if each leaf of ${\cal T}(\boldsymbol{B}_{R^\varepsilon}(s^*))$ votes $1$ independently with probability greater than $\tfrac{1}{2} + \varepsilon$, and ${\cal T}(\boldsymbol{B}_{R^\varepsilon}(s^*)) \supseteq {\cal T}^{reg}_{A|\log\varepsilon|}$, then each leaf in ${\cal T}^{reg}_{A|\log\varepsilon|}$ also votes $1$ with probability greater than $\tfrac{1}{2} + \varepsilon$. 
By Lemma [\[defnofa\]](#defnofa){reference-type="ref" reference="defnofa"}, ${\cal T}(\boldsymbol{B}_{R^\varepsilon}(s^*))\supseteq {\cal T}^{reg}_{A|\log\varepsilon|}$ with probability at least $1-\varepsilon^k$, so by Lemma [\[lemma:iterative_voting\]](#lemma:iterative_voting){reference-type="ref" reference="lemma:iterative_voting"} $$\begin{aligned} \mathbb{P}_z\left[\mathbb{V}^\times_{\frac{1}{2}+\varepsilon}(\boldsymbol{B}_{R^\varepsilon}(s^*))=1\right] &\geq (1-\varepsilon^k)g_\times^{(\lceil A|\log\varepsilon|\rceil)}\left(\tfrac{1}{2}+ \varepsilon\right)\\ &\geq (1-\varepsilon^k)(u_+-\varepsilon^k) \\ &\geq u_+-2\varepsilon^k.\end{aligned}$$ Substituting this into ([\[shake\]](#shake){reference-type="ref" reference="shake"}) yields $$\begin{aligned} \label{conven}\mathbb{P}_z\left[{\mathbb{V}}^\times(\boldsymbol{B}_{R^\varepsilon}(t))=1\right] \geq u_+-3\varepsilon^k,\end{aligned}$$ thereby proving part (1) of Theorem [\[mainteo1dmarked\]](#mainteo1dmarked){reference-type="ref" reference="mainteo1dmarked"}. Part (2) of Theorem [\[mainteo1dmarked\]](#mainteo1dmarked){reference-type="ref" reference="mainteo1dmarked"} follows by completely symmetric arguments. ◻ [\[coeffignore\]]{#coeffignore label="coeffignore"} Observe that ([\[conven\]](#conven){reference-type="ref" reference="conven"}) contains a coefficient in front of the polynomial error term, $\varepsilon^k$, that is not mentioned in the statement of Theorem [\[mainteo1dmarked\]](#mainteo1dmarked){reference-type="ref" reference="mainteo1dmarked"}. Our convention here and in the following sections is that coefficients of polynomial error terms will not be stated in theorems and propositions (this convention was also used in [@etheridge2017branching]). 
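The bias-amplification mechanism driving Lemma [\[lemma:iterative_voting\]](#lemma:iterative_voting){reference-type="ref" reference="lemma:iterative_voting"} can be illustrated numerically. The snippet below is a sanity check only, not part of the argument; it assumes the unmarked case $b_\varepsilon = 0$, in which $g_\times$ on equal inputs reduces to the classical three-vote majority function $g(p)=3p^2-2p^3$, with fixed points $u_-=0$, $\tfrac{1}{2}$ and $u_+=1$.

```python
# Sanity check (not part of the argument): in the unmarked case b_eps = 0,
# the diagonal of the three-vote majority function is g(p) = 3p^2 - 2p^3,
# with fixed points u_- = 0, 1/2 and u_+ = 1.  Starting from 1/2 + eps,
# the bias should be amplified to within eps^k of u_+ in O(|log eps|) rounds.
import math

def g(p):
    # probability that the majority of three independent p-votes is 1
    return 3 * p**2 - 2 * p**3

eps, k = 1e-3, 2
p, rounds = 0.5 + eps, 0
while p < 1 - eps**k:
    p = g(p)
    rounds += 1

# the number of rounds needed is a constant multiple of |log eps|
assert rounds <= 10 * abs(math.log(eps))
```

In the marked case $0<b_\varepsilon<\tfrac{1}{2}$ the same geometric picture holds with the outer fixed points $u_\pm$ pulled inside $(0,1)$, which is what Lemma [\[lemma:iterative_voting\]](#lemma:iterative_voting){reference-type="ref" reference="lemma:iterative_voting"} quantifies.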
## Slope of the interface Just as in [@etheridge2017branching], to prove the multidimensional result, Theorem [\[maintheorem2\]](#maintheorem2){reference-type="ref" reference="maintheorem2"}, we make use of a lower bound on the 'slope' of the interface in dimension $\mathbbm{d}=1$. For a time-labelled ternary tree ${\cal T}$, recall that $${\accentset{\times}{\mathbb{P}}}^{_{_{\text{\scriptsize{$t$} } }}}_{x}({\cal T}):= \mathbb{P}^\varepsilon_x\left[\mathbb{V}^{\times}(\boldsymbol{B}_{R^\varepsilon}(t))=1 \left \vert \right.{\cal T}(\boldsymbol{B}_{R^\varepsilon}(t))={\cal T}\right].$$ Suppose $x\geq 0$ and $\eta>0$. Then for any time-labelled ternary tree ${\cal T}$ and any time $t$, $$\begin{aligned} \label{yoga} \textup{$ {\accentset{\times}{\mathbb{P}}}^{_{_{\text{\scriptsize{$t$} } }}}_{x}({\cal T})-{\accentset{\times}{\mathbb{P}}}^{_{_{\text{\scriptsize{$t$} } }}}_{x-\eta}({\cal T}) \geq {\accentset{\times}{\mathbb{P}}}^{_{_{\text{\scriptsize{$t$} } }}}_{x+\eta}({\cal T})-{\accentset{\times}{\mathbb{P}}}^{_{_{\text{\scriptsize{$t$} } }}}_x({\cal T}).$ } \end{aligned}$$ *Proof.* We follow the strategy of [@etheridge2017branching], adapted to take account of our different choice of voting function $g_\times$. We proceed by induction on the number of branching events. Let ${\cal T}_0$ denote a time-labelled tree with a root and a single leaf. Recall that, under ${\mathbb{V}}^\times$, the initial ancestor is never marked. Denote the transition density of $B(R^\varepsilon_t)$ started at $z\in \mathbb{R}$ by $p_{z,t}(\cdot)$. 
Then for $x\geq 0$ and $\eta>0$, $${\accentset{\times}{\mathbb{P}}}^{_{_{\text{\scriptsize{$t$} } }}}_{x} ({\cal T}_0)-{\accentset{\times}{\mathbb{P}}}^{_{_{\text{\scriptsize{$t$} } }}}_{x-\eta}({\cal T}_0) = \int_{x-\eta}^x p_{0,t}(z)dz\geq \int_x^{x+\eta} p_{0,t}(z)dz = {\accentset{\times}{\mathbb{P}}}^{_{_{\text{\scriptsize{$t$} } }}}_{x+\eta}({\cal T}_0)-{\accentset{\times}{\mathbb{P}}}^{_{_{\text{\scriptsize{$t$} } }}}_{x}({\cal T}_0).$$ Now assume the inequality holds for all time-labelled ternary trees with at most $n$ branch points. Let ${\cal T}$ be a time-labelled ternary tree with $n+1$ internal vertices, and denote the time of the first branching event in ${\cal T}$ by $\tau$. Let ${\cal T}_1, {\cal T}_2$ and ${\cal T}_3$ denote the three trees of descent from the first branching event in ${\cal T}$. Write $$\begin{aligned} g_\times\left({\accentset{\times}{\mathbb{P}}}^{_{_ {\text{\scriptsize{$t\hspace{-.2em}-\hspace{-.2em}\tau$}}}}}_{x}({\cal T}\star)\right):=g_\times\left({\accentset{\times}{\mathbb{P}}}^{_{_ {\text{\scriptsize{$t\hspace{-.2em}-\hspace{-.2em}\tau$}}}}}_{x}({\cal T}_1),{\accentset{\times}{\mathbb{P}}}^{_{_ {\text{\scriptsize{$t\hspace{-.2em}-\hspace{-.2em}\tau$}}}}}_{x}({\cal T}_2),{\accentset{\times}{\mathbb{P}}}^{_{_ {\text{\scriptsize{$t\hspace{-.2em}-\hspace{-.2em}\tau$}}}}}_{x}({\cal T}_3)\right).\end{aligned}$$ Then $$\begin{aligned} {\accentset{\times}{\mathbb{P}}}^{_{_{\text{\scriptsize{$t$} } }}}_x({\cal T})-{\accentset{\times}{\mathbb{P}}}^{_{_{\text{\scriptsize{$t$} } }}}_{x-\eta}({\cal T}) &=\mathbb{E}^\varepsilon_x\left[g_\times\left({\accentset{\times}{\mathbb{P}}}^{_{_ {\text{\scriptsize{$t\hspace{-.2em}-\hspace{-.2em}\tau$}}}}}_{B(R_\tau^\varepsilon)}({\cal T}\star ) \right)\right] -\mathbb{E}^\varepsilon_{x-\eta}\left[g_\times\left({\accentset{\times}{\mathbb{P}}}^{_{_ {\text{\scriptsize{$t\hspace{-.2em}-\hspace{-.2em}\tau$}}}}}_{B(R_\tau^\varepsilon)}({\cal T}\star ) \right)\right] \\ &= \int_{-\infty}^\infty 
\left[g_\times\left({\accentset{\times}{\mathbb{P}}}^{_{_ {\text{\scriptsize{$t\hspace{-.2em}-\hspace{-.2em}\tau$}}}}}_{y}({\cal T}\star ) \right) - g_\times\left({\accentset{\times}{\mathbb{P}}}^{_{_ {\text{\scriptsize{$t\hspace{-.2em}-\hspace{-.2em}\tau$}}}}}_{y-\eta}({\cal T}\star ) \right) \right]p_{x,\tau}(y)dy\\ &= \int_0^\infty \left[g_\times({\accentset{\times}{\mathbb{P}}}^{_{_ {\text{\scriptsize{$t\hspace{-.2em}-\hspace{-.2em}\tau$}}}}}_{y}({\cal T}\star ) ) - g_\times\left({\accentset{\times}{\mathbb{P}}}^{_{_ {\text{\scriptsize{$t\hspace{-.2em}-\hspace{-.2em}\tau$}}}}}_{y-\eta}({\cal T}\star ) \right) \right] (p_{x,\tau}(y)-p_{x,\tau}( -y))dy,\end{aligned}$$ where the final line follows from the symmetry relations ([\[symmetry\]](#symmetry){reference-type="ref" reference="symmetry"}) and ([\[symmetryofmarkedg\]](#symmetryofmarkedg){reference-type="ref" reference="symmetryofmarkedg"}). By identical arguments applied to the right hand side of ([\[yoga\]](#yoga){reference-type="ref" reference="yoga"}), we obtain $$\begin{aligned} \allowdisplaybreaks &\left( {\accentset{\times}{\mathbb{P}}}^{_{_{\text{\scriptsize{$t$} } }}}_{x}({\cal T})-{\accentset{\times}{\mathbb{P}}}^{_{_{\text{\scriptsize{$t$} } }}}_{x-\eta}({\cal T}) \right) - \left({\accentset{\times}{\mathbb{P}}}^{_{_{\text{\scriptsize{$t$} } }}}_{x+\eta}({\cal T})-{\accentset{\times}{\mathbb{P}}}^{_{_{\text{\scriptsize{$t$} } }}}_{x}({\cal T}) \right) \\ &= \int_0^\infty \left(g_\times\left({\accentset{\times}{\mathbb{P}}}^{_{_ {\text{\scriptsize{$t\hspace{-.2em}-\hspace{-.2em}\tau$}}}}}_{y}({\cal T}\star ) \right) - g_\times\left({\accentset{\times}{\mathbb{P}}}^{_{_ {\text{\scriptsize{$t\hspace{-.2em}-\hspace{-.2em}\tau$}}}}}_{y-\eta}({\cal T}\star ) \right)\right)(p_{x,\tau}(y)-p_{x,\tau}(-y))dy\\& \ \ - \int_0^\infty \left(g_\times\left({\accentset{\times}{\mathbb{P}}}^{_{_ {\text{\scriptsize{$t\hspace{-.2em}-\hspace{-.2em}\tau$}}}}}_{y+\eta}({\cal T}\star ) \right) - 
g_\times\left({\accentset{\times}{\mathbb{P}}}^{_{_ {\text{\scriptsize{$t\hspace{-.2em}-\hspace{-.2em}\tau$}}}}}_{y}({\cal T}\star ) \right)\right) (p_{x,\tau}(y)-p_{x,\tau}(-y))dy.\end{aligned}$$ Since $x\geq 0$, $p_{x,\tau}(y)-p_{x,\tau}(-y)\geq 0$ for $y\geq 0$, and it suffices to show that, for $y\geq 0$, $$\begin{aligned} \label{chocolate} \left(g_\times\left({\accentset{\times}{\mathbb{P}}}^{_{_ {\text{\scriptsize{$t\hspace{-.2em}-\hspace{-.2em}\tau$}}}}}_{y}({\cal T}\star ) \right) - g_\times\left({\accentset{\times}{\mathbb{P}}}^{_{_ {\text{\scriptsize{$t\hspace{-.2em}-\hspace{-.2em}\tau$}}}}}_{y-\eta}({\cal T}\star ) \right)\right) - \left(g_\times\left({\accentset{\times}{\mathbb{P}}}^{_{_ {\text{\scriptsize{$t\hspace{-.2em}-\hspace{-.2em}\tau$}}}}}_{y+\eta}({\cal T}\star ) \right) - g_\times\left({\accentset{\times}{\mathbb{P}}}^{_{_ {\text{\scriptsize{$t\hspace{-.2em}-\hspace{-.2em}\tau$}}}}}_{y}({\cal T}\star ) \right)\right) \geq 0.\end{aligned}$$ By the inductive hypothesis, for each $i = 1, 2, 3$ $$\begin{aligned} \label{caramel}\left({\accentset{\times}{\mathbb{P}}}^{_{_ {\text{\scriptsize{$t\hspace{-.2em}-\hspace{-.2em}\tau$}}}}}_y({\cal T}_i)-{\accentset{\times}{\mathbb{P}}}^{_{_ {\text{\scriptsize{$t\hspace{-.2em}-\hspace{-.2em}\tau$}}}}}_{y-\eta}({\cal T}_i)\right)-\left({\accentset{\times}{\mathbb{P}}}^{_{_ {\text{\scriptsize{$t\hspace{-.2em}-\hspace{-.2em}\tau$}}}}}_{y+\eta}({\cal T}_i)-{\accentset{\times}{\mathbb{P}}}^{_{_ {\text{\scriptsize{$t\hspace{-.2em}-\hspace{-.2em}\tau$}}}}}_y({\cal T}_i)\right)\geq 0\end{aligned}$$ for $y\geq 0$. 
By monotonicity of $g_\times$ and ([\[caramel\]](#caramel){reference-type="ref" reference="caramel"}), $$\begin{aligned} \label{taylor} g_\times\left({\accentset{\times}{\mathbb{P}}}^{_{_ {\text{\scriptsize{$t\hspace{-.2em}-\hspace{-.2em}\tau$}}}}}_{y-\eta}({\cal T}\star )\right) \leq g_\times\left(2{\accentset{\times}{\mathbb{P}}}^{_{_ {\text{\scriptsize{$t\hspace{-.2em}-\hspace{-.2em}\tau$}}}}}_y({\cal T}\star)-{\accentset{\times}{\mathbb{P}}}^{_{_ {\text{\scriptsize{$t\hspace{-.2em}-\hspace{-.2em}\tau$}}}}}_{y+\eta}({\cal T}\star)\right).\end{aligned}$$ Substituting ([\[taylor\]](#taylor){reference-type="ref" reference="taylor"}) into ([\[chocolate\]](#chocolate){reference-type="ref" reference="chocolate"}), it suffices to show $$\begin{aligned} \label{vanilla} \hspace*{-1cm} g_\times\left({\accentset{\times}{\mathbb{P}}}^{_{_ {\text{\scriptsize{$t\hspace{-.2em}-\hspace{-.2em}\tau$}}}}}_{y+\eta}({\cal T}\star)\right)-2g_\times\left({\accentset{\times}{\mathbb{P}}}^{_{_ {\text{\scriptsize{$t\hspace{-.2em}-\hspace{-.2em}\tau$}}}}}_{y}({\cal T}\star)\right)+g_\times\left(2{\accentset{\times}{\mathbb{P}}}^{_{_ {\text{\scriptsize{$t\hspace{-.2em}-\hspace{-.2em}\tau$}}}}}_{y}({\cal T}\star)-{\accentset{\times}{\mathbb{P}}}^{_{_ {\text{\scriptsize{$t\hspace{-.2em}-\hspace{-.2em}\tau$}}}}}_{y+\eta}({\cal T}\star)\right)\leq 0.\end{aligned}$$ By definition of $g_\times$, ([\[vanilla\]](#vanilla){reference-type="ref" reference="vanilla"}) is equivalent to $$\begin{aligned} \label{pistachio} g\left((1-b_\varepsilon){\accentset{\times}{\mathbb{P}}}^{_{_ {\text{\scriptsize{$t\hspace{-.2em}-\hspace{-.2em}\tau$}}}}}_{y+\eta}({\cal T}\star)+\tfrac{b_\varepsilon}{2}\right)-2g\left((1-b_\varepsilon){\accentset{\times}{\mathbb{P}}}^{_{_ {\text{\scriptsize{$t\hspace{-.2em}-\hspace{-.2em}\tau$}}}}}_{y}({\cal T}\star)+\tfrac{b_\varepsilon}{2}\right)\nonumber \\ +g\left((1-b_\varepsilon)(2{\accentset{\times}{\mathbb{P}}}^{_{_ 
{\text{\scriptsize{$t\hspace{-.2em}-\hspace{-.2em}\tau$}}}}}_{y}({\cal T}\star)-{\accentset{\times}{\mathbb{P}}}^{_{_ {\text{\scriptsize{$t\hspace{-.2em}-\hspace{-.2em}\tau$}}}}}_{y+\eta}({\cal T}\star))+\tfrac{b_\varepsilon}{2}\right) \leq 0.\end{aligned}$$ To see that ([\[pistachio\]](#pistachio){reference-type="ref" reference="pistachio"}) holds, note that $$\begin{aligned} &g(p_1+q_1, p_2 +q_2, p_3 +q_3) - 2g(p_1, p_2, p_3) + g(p_1-q_1, p_2-q_2, p_3-q_3) \\ & \ \ \ = 2q_1 q_2 (1-2p_3) + 2q_2 q_3 (1-2p_1) + 2q_3 q_1(1-2p_2).\end{aligned}$$ Setting $p_i = (1-b_\varepsilon){\accentset{\times}{\mathbb{P}}}^{_{_ {\text{\scriptsize{$t\hspace{-.2em}-\hspace{-.2em}\tau$}}}}}_{y}({\cal T}_i) + \tfrac{b_\varepsilon}{2}$ and $q_i = (1-b_\varepsilon)\left({\accentset{\times}{\mathbb{P}}}^{_{_ {\text{\scriptsize{$t\hspace{-.2em}-\hspace{-.2em}\tau$}}}}}_{y+\eta}({\cal T}_i) - {\accentset{\times}{\mathbb{P}}}^{_{_ {\text{\scriptsize{$t\hspace{-.2em}-\hspace{-.2em}\tau$}}}}}_{y}({\cal T}_i)\right)$, we see that ([\[pistachio\]](#pistachio){reference-type="ref" reference="pistachio"}) will hold if $q_i \geq 0$ and $p_i\geq \tfrac{1}{2}$. The former holds since each ${\accentset{\times}{\mathbb{P}}}^{_{_ {\text{\scriptsize{$t\hspace{-.2em}-\hspace{-.2em}\tau$}}}}}_{y}({\cal T}_i)$ is non-decreasing in $y$, and the latter is equivalent to ${\accentset{\times}{\mathbb{P}}}^{_{_ {\text{\scriptsize{$t\hspace{-.2em}-\hspace{-.2em}\tau$}}}}}_{y}({\cal T}_i)\geq \frac{1}{2},$ which holds by ([\[coco\]](#coco){reference-type="ref" reference="coco"}) since $y\geq 0$. ◻ With this, we can prove the slope of the interface result. [\[cor:slope\]]{#cor:slope label="cor:slope"} Let $\varepsilon_1(\alpha)$ and $c_1(\alpha)$ be as in Theorem [\[mainteo1dmarked\]](#mainteo1dmarked){reference-type="ref" reference="mainteo1dmarked"}. Let $\varepsilon < \min(\varepsilon_1, \frac{1}{24}).$ Suppose that for some $t\in [0, T^*]$ and $z\in \mathbb{R}$, $$\left|\mathbb{P}^\varepsilon_z[\mathbb{V}^{\times}(\boldsymbol{B}_{R^\varepsilon}(t))=1]-\tfrac{1}{2}\right| \leq \tfrac{5}{12},$$ and let $w\in \mathbb{R}$ with $|z-w|\leq c_1( \alpha)I(\varepsilon)|\log \varepsilon|$. 
Then $$\left|\mathbb{P}^\varepsilon_z[\mathbb{V}^{\times}(\boldsymbol{B}_{R^\varepsilon}(t))=1]-\mathbb{P}^\varepsilon_w[\mathbb{V}^{\times}(\boldsymbol{B}_{R^\varepsilon}(t))=1]\right| \geq \frac{|z-w|}{48c_1(\alpha)I(\varepsilon)|\log\varepsilon|}.$$ *Proof.* The proof follows exactly that of [@etheridge2017branching Corollary 2.12], replacing the interface width $\varepsilon|\log\varepsilon|$ with $I(\varepsilon)|\log\varepsilon|.$ ◻ ## Coupling one-dimensional and $\mathbbm{d}$-dimensional processes {#acouplingargument} In this section, we will construct a coupling of the one-dimensional and multidimensional voting systems, so that the results of Section [\[sectionmajorityvotinginonedimension\]](#sectionmajorityvotinginonedimension){reference-type="ref" reference="sectionmajorityvotinginonedimension"} can be used to prove our multidimensional result in the next section. To accomplish this, we require the following regularity properties also used in [@etheridge2017branching], which follow from Assumptions [\[assumptions1\]](#assumptions1){reference-type="ref" reference="assumptions1"} by [@chen]. Recall that the sets $(\mathbf{\Gamma}_t)_{0\leq t <\mathscr{T}}$ denote the mean curvature flow of $\boldsymbol{\Gamma}_0$ defined in ([\[gammadefinition\]](#gammadefinition){reference-type="ref" reference="gammadefinition"}). (1) There exists $c_0>0$ such that for all $t\in [0,T^*]$ and $x\in\{y : |d(y,t)|\leq c_0\}$ $$\begin{aligned} \label{A1} |\nabla d(x,t) | = 1. \end{aligned}$$ Moreover, $d$ is a $C^{a, \frac{a}{2}}$ function in $\{(x,t) : |d(x,t)|\leq c_0, t\leq T^*\}$ for $a$ as in Assumption [\[assumptions1\]](#assumptions1){reference-type="ref" reference="assumptions1"} (A). (2) Viewing $\mathbf{n} := \nabla d$ as the positive normal direction, for $x\in \mathbf{\Gamma}_t$, the normal velocity of $\mathbf{\Gamma}_t$ at $x$ is $-\dot{d}(x,t)$, and the curvature of $\mathbf{\Gamma}_t$ at $x$ is $-\Delta d(x,t)$. 
Thus, the equation defining mean curvature flow, equation ([\[mcfeqn\]](#mcfeqn){reference-type="ref" reference="mcfeqn"}), becomes $$\begin{aligned} \dot{d}(x,t)= \Delta d(x,t) \end{aligned}$$ for all $x$ such that $d(x,t)=0$. (3) There exists $C_0>0$ such that for all $t\in [0,T^*]$ and $x$ such that $|d(x,t)|\leq c_0$ for $c_0$ as in the first assumption, $$\begin{aligned} \label{A3} \left|\nabla \left(\dot{d}(x,t) - \Delta d(x,t)\right)\right|\leq C_0. \end{aligned}$$ (4) There exist $v_0, V_0>0$ such that for all $t\in [0, T^*-v_0]$ and all $s\in [t, t+v_0]$, $$\begin{aligned} \label{A4} |d(x,t) - d(x,s)| \leq V_0(s-t). \end{aligned}$$ The condition ([\[A1\]](#A1){reference-type="ref" reference="A1"}) ensures that, for all $t\geq 0,$ the region $\{x : |d(x,t)|\leq c_0\}$ is not self-intersecting. That is, for any $x$ with $|d(x,t)|\leq c_0$, the closed ball centred at $x$ of radius $|d(x,t)|$ intersects $\mathbf{\Gamma}_t$ at precisely one point.\ Before explaining our result, let us briefly recall the coupling in [@etheridge2017branching] that compares $d(W_s, t-s)$, the distance from a $\mathbbm{d}$-dimensional Brownian motion to $\mathbf{\Gamma}_{t-s}$, to a one-dimensional Brownian motion. [\[sar\]]{#sar label="sar"} Let $(W_s)_{s\geq 0}$ denote a $\mathbbm{d}$-dimensional Brownian motion started at $x\in \mathbb{R}^\mathbbm{d}$. Suppose that $t\leq T^*, \beta \leq c_0$ and let $$T_\beta = \inf (\{s\in [0,t):|d(W_s, t-s)|\geq \beta\} \cup \{t\}).$$ Then we can couple $(W_s)_{s\geq 0}$ with a one-dimensional Brownian motion $(B_s)_{s\geq 0}$ started from $z=d(x,t)$ in such a way that for $s\leq T_\beta$, $$\begin{aligned} \label{sarahscoupling} B_s - C_0\beta s \leq d(W_s, t-s) \leq B_s + C_0\beta s.\end{aligned}$$ This result was a key ingredient in the proofs of [@etheridge2017branching Proposition 2.17, Lemma 2.18] that gave a comparison between the multidimensional and one-dimensional results. 
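The regularity properties above can be sanity-checked on a concrete example; the example and code below are illustrations of ours, not taken from [@etheridge2017branching] or [@chen]. For mean curvature flow of a circle of initial radius $R_0$ in the plane, $\mathbf{\Gamma}_t$ is the circle of radius $R(t)=\sqrt{R_0^2-2t}$ and the signed distance function is $d(x,t)=|x|-R(t)$; a finite-difference computation confirms the interface identity $\dot{d}=\Delta d$ from item (2).

```python
# Sanity check (illustrative example, not from the paper): mean curvature
# flow of a circle of initial radius R0 in the plane has R(t) = sqrt(R0^2 - 2t)
# and signed distance d(x,t) = |x| - R(t).  On the interface (d = 0) the
# identity dot{d} = Delta d should hold; we verify it by finite differences.
import math

R0 = 1.0

def R(t):
    return math.sqrt(R0**2 - 2 * t)

def d(x, y, t):
    return math.hypot(x, y) - R(t)

t, h = 0.1, 1e-5
x, y = R(t) / math.sqrt(2), R(t) / math.sqrt(2)  # a point on Gamma_t

d_dot = (d(x, y, t + h) - d(x, y, t - h)) / (2 * h)      # time derivative
laplacian = (d(x + h, y, t) + d(x - h, y, t) + d(x, y + h, t)
             + d(x, y - h, t) - 4 * d(x, y, t)) / h**2   # spatial Laplacian

# dot{d} = -R'(t) = 1/R(t) and Delta d = 1/|x| agree on the interface
assert abs(d_dot - laplacian) < 1e-3
```

Away from the interface $\dot{d}=1/R(t)$ while $\Delta d = 1/|x|$, so the identity genuinely holds only where $d(x,t)=0$, consistent with the statement in item (2).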
It turns out that [@etheridge2017branching Proposition 2.17, Lemma 2.18] are extremely sensitive to any change in the coupling ([\[sarahscoupling\]](#sarahscoupling){reference-type="ref" reference="sarahscoupling"}). Indeed, if there is any additional drift term in the left and right bounds of ([\[sarahscoupling\]](#sarahscoupling){reference-type="ref" reference="sarahscoupling"}) (that is *not* of the form $f(\varepsilon)s$ for some $f$ satisfying $\lim_{\varepsilon\to 0}f(\varepsilon)=0$), then this error propagates, and the strategy of proof in [@etheridge2017branching] no longer works. This provides us with a major hurdle, since, if we mimic the proof of the Brownian coupling but with subordinated Brownian motions, the drift term in ([\[sarahscoupling\]](#sarahscoupling){reference-type="ref" reference="sarahscoupling"}) changes drastically. To overcome this, we employ not one but *two* coupling results. The first, Theorem [\[teo:subestimate\]](#teo:subestimate){reference-type="ref" reference="teo:subestimate"}, is a straightforward adaptation of Proposition [\[sar\]](#sar){reference-type="ref" reference="sar"} to our setting. Our second (and final) coupling will then follow by replacing the multidimensional subordinated Brownian motion in the previous coupling result by one that is *shifted* along an appropriately chosen outward facing normal vector of $\mathbf{\Gamma}_{t-s}$ (this is the content of Theorem [\[couplingtheoremforz\]](#couplingtheoremforz){reference-type="ref" reference="couplingtheoremforz"}). [\[teo:subestimate\]]{#teo:subestimate label="teo:subestimate"} Let $k\in \mathbb{N}$. Let $(W_t)_{t \geq 0}$ be a $\mathbbm{d}$-dimensional standard Brownian motion started at $x\in \mathbb{R}^\mathbbm{d}$, and $(R_t^\varepsilon)_{t \geq 0}$ be an $I(\varepsilon)^2$-truncated $\frac{\alpha}{2}$-stable subordinator satisfying Assumption [\[assumption3\]](#assumption3){reference-type="ref" reference="assumption3"}. 
Fix $t\leq T^*$ and $\beta < c_0$ for $c_0$ as in ([\[A3\]](#A3){reference-type="ref" reference="A3"}). Define the stopping time $$\begin{aligned} T_\beta &= \inf(\{s \in [0, (k+1)\varepsilon^2|\log\varepsilon|): |d(W_{s},t-s)|>\beta \}\cup \{t\}). \end{aligned}$$ Fix $s\geq 0$. If $R^\varepsilon_s < T_\beta \wedge t$, then there exists a one-dimensional standard Brownian motion $(B_t)_{t\geq 0}$ started at $d(x,t)$, constants $C_0, D_0>0$ and $\varepsilon_1(k)>0$ such that, for all $\varepsilon\in (0,\varepsilon_1(k))$, with probability at least $1-\varepsilon^{k+1}$, $$\begin{aligned} \label{thisisthecoupling} |d(W(R^\varepsilon_s),t-s) - B(R^\varepsilon_s)| \leq C_0 \beta s + D_0(k+2)I(\varepsilon)^2|\log\varepsilon|.\end{aligned}$$ *Proof.* We first rewrite $d(W(R^\varepsilon_s),t-s)$ as $$\begin{aligned} \label{eq:browniandistance} d(W({R^\varepsilon_s}),t-R^\varepsilon_s) + \left[d(W({R_s^\varepsilon}),t-s) - d(W({R^\varepsilon_s}),t-R^\varepsilon_s)\right].\end{aligned}$$ By Proposition [\[sar\]](#sar){reference-type="ref" reference="sar"}, since $R^\varepsilon_s \leq T_\beta\wedge t$ by assumption, there exists a one-dimensional Brownian motion $(B_t)_{t\geq 0}$ started from $d(x,t)$ and $C_0>0$ such that $$\begin{aligned} \label{purp} B({R^\varepsilon_s}) - C_0 \beta R_s^\varepsilon \leq d(W(R_s^\varepsilon),t-R^\varepsilon_s) &\leq B({R^\varepsilon_s}) + C_0 \beta R_s^\varepsilon.
\end{aligned}$$ By ([\[A4\]](#A4){reference-type="ref" reference="A4"}), the second term in ([\[eq:browniandistance\]](#eq:browniandistance){reference-type="ref" reference="eq:browniandistance"}) is bounded as $$\begin{aligned} \label{sparkle} |d(W({R^\varepsilon_s}),t-s) - d(W({R^\varepsilon_s}),t-R^\varepsilon_s)| \leq V_0 |R^\varepsilon_s -s|.\end{aligned}$$ Combining ([\[eq:browniandistance\]](#eq:browniandistance){reference-type="ref" reference="eq:browniandistance"}), ([\[purp\]](#purp){reference-type="ref" reference="purp"}) and ([\[sparkle\]](#sparkle){reference-type="ref" reference="sparkle"}), $$\begin{aligned} |d(W(R^\varepsilon_s),t-s) - B(R^\varepsilon_s)| &\leq C_0 \beta R^\varepsilon_s + V_0|R^\varepsilon_s - s|\\ &\leq C_0 \beta s + D_0 |R^\varepsilon_s -s|\end{aligned}$$ where $D_0 := V_0 + C_0c_0$ for $c_0$ as in ([\[A1\]](#A1){reference-type="ref" reference="A1"}), using that $R^\varepsilon_s \leq s + |R^\varepsilon_s - s|$ and $\beta < c_0$. By Lemma [\[boundonsubordinator\]](#boundonsubordinator){reference-type="ref" reference="boundonsubordinator"}, there exists $\varepsilon_1(k)>0$ such that, for all $\varepsilon \in (0,\varepsilon_1(k)),$ $$\mathbb{P}(|R^\varepsilon_s - s| > (k+2)I(\varepsilon)^2|\log\varepsilon|)\leq \varepsilon^{k+1}$$ and the result follows. ◻ We now define the shifted subordinated Brownian motions that will ultimately move the unwanted drift term in ([\[thisisthecoupling\]](#thisisthecoupling){reference-type="ref" reference="thisisthecoupling"}) into the multidimensional spatial motion. [\[definitionofZ\]]{#definitionofZ label="definitionofZ"} Let $(W(R^\varepsilon_t))_{t \geq 0}$ be a $\mathbbm{d}$-dimensional subordinated Brownian motion. Fix $0< t, T < \mathscr{T}$, $l>0$ and $\beta < c_0$ for $c_0$ as in ([\[A1\]](#A1){reference-type="ref" reference="A1"}).
Let $x_s \in \mathbf{\Gamma}_{t-s}$ be the unique point on $\mathbf{\Gamma}_{t-s}$ that is the shortest distance from $W(R^\varepsilon_s)$, and $\mathbf{v}_s$ be the outward facing unit vector perpendicular to the tangent hypersurface of $\mathbf{\Gamma}_{t-s}$ at $x_s$. Then we define the processes $(Z^+_s)_{0 \leq s \leq T}$ and $(Z^-_s)_{0 \leq s \leq T}$ by $$\begin{aligned} \label{zplusdef} Z_s^+ &= \begin{cases} W(R^\varepsilon_s) +l I(\varepsilon)^2|\log \varepsilon|\mathbf{v}_s & \text{ if } |d(W(R^\varepsilon_s), t-s)| \leq \beta \\ W(R^\varepsilon_s) & \text{ otherwise.} \end{cases}\\ \label{zminusdef} Z_s^- &= \begin{cases} W(R^\varepsilon_s) -l I(\varepsilon)^2|\log \varepsilon|\mathbf{v}_s & \text{ if } |d(W(R^\varepsilon_s), t-s)| \leq \beta \\ W(R^\varepsilon_s) & \text{ otherwise.} \end{cases}\end{aligned}$$ Observe that we may choose $\varepsilon$ sufficiently small so that any point $x$ on the line segment between $W(R_s^\varepsilon)$ and $Z^+_s$ (or $Z^-_s$) satisfies $d(x,t-s) \leq c_0$. Then by ([\[A1\]](#A1){reference-type="ref" reference="A1"}) $|\nabla d(x, t-s)|=1$ and $\{z: d(z,t)\leq c_0\}$ is not self intersecting. This implies that $\mathbf{\Gamma}_{t-s}$ is sufficiently 'flat' near $x$ to ensure that there is a unique point $y\in \mathbf{\Gamma}_{t-s}$ that is the closest point on $\mathbf{\Gamma}_{t-s}$ to both $Z^+_s$ and $W(R^\varepsilon_s)$ (and similarly for $Z^-_s$ and $W(R^\varepsilon_s)$). Therefore, provided $\varepsilon$ is sufficiently small, we have $$\begin{aligned} \label{ineqZminus} & d(Z^+_s,t-s) = d(W(R^\varepsilon_s), t-s)+lI(\varepsilon)^2|\log \varepsilon|\\ &d(Z^-_s,t-s) = d(W(R^\varepsilon_s), t-s)-l I(\varepsilon)^2|\log \varepsilon|.\label{ineqZplus} \end{aligned}$$ Consequently, we obtain the following important restatement of Theorem [\[teo:subestimate\]](#teo:subestimate){reference-type="ref" reference="teo:subestimate"}. 
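For a concrete instance of ([\[ineqZminus\]](#ineqZminus){reference-type="ref" reference="ineqZminus"})–([\[ineqZplus\]](#ineqZplus){reference-type="ref" reference="ineqZplus"}), take the interface to be a sphere of radius $r$, so that the signed distance is $|x|-r$ and the outward unit normal at the closest point is $x/|x|$: shifting by $\delta$ along this normal shifts the signed distance by exactly $\delta$. A minimal numerical sketch (illustrative only; the names and the value of $\delta$, standing in for $lI(\varepsilon)^2|\log\varepsilon|$, are ours):

```python
import math

def signed_distance_sphere(x, r):
    # Signed distance to the sphere {|y| = r}: positive outside, negative inside.
    return math.sqrt(sum(c * c for c in x)) - r

def shift_along_normal(x, delta):
    # Shift x by delta along the outward unit normal x/|x| of the sphere
    # through its closest point (delta < 0 gives the inward shift, as for Z^-).
    norm = math.sqrt(sum(c * c for c in x))
    return [c * (1.0 + delta / norm) for c in x]

r = 2.0       # interface radius
delta = 0.05  # stands in for l * I(eps)^2 * |log eps|
x = [1.0, 2.0, 2.0]                      # |x| = 3, so the signed distance is 1
z_plus = shift_along_normal(x, delta)    # analogue of Z^+
z_minus = shift_along_normal(x, -delta)  # analogue of Z^-
```

The uniqueness of the closest point, guaranteed here by the sphere's geometry and in general by ([\[A1\]](#A1){reference-type="ref" reference="A1"}) for $\varepsilon$ small, is exactly what makes the shift act additively on the signed distance.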
[\[couplingtheoremforz\]]{#couplingtheoremforz label="couplingtheoremforz"} Let $k\in \mathbb{N}$. For $\alpha \in (1,2)$ and $D_0$ as in Theorem [\[teo:subestimate\]](#teo:subestimate){reference-type="ref" reference="teo:subestimate"}, let $Z^+_t$ and $Z^-_t$ be as in Definition [\[definitionofZ\]](#definitionofZ){reference-type="ref" reference="definitionofZ"} for $l := D_0 (k+2),$ started at $x\in \mathbb{R}^\mathbbm{d}$. Let $(R_t^\varepsilon)_{t \geq 0}$ be an $I(\varepsilon)^2$-truncated $\frac{\alpha}{2}$-stable subordinator satisfying Assumption [\[assumption3\]](#assumption3){reference-type="ref" reference="assumption3"}. Fix $t\leq T^*$, $\beta < c_0$ for $c_0$ as in ([\[A3\]](#A3){reference-type="ref" reference="A3"}). Define the stopping time $$\begin{aligned} T_\beta &= \inf(\{s \in [0, (k+1)\varepsilon^2|\log\varepsilon|): |d(W_{s},t-s)|>\beta \}\cup \{t\}). \end{aligned}$$ Fix $s\geq 0$. If $R^\varepsilon_s < T_\beta \wedge t$, then there exists a one-dimensional standard Brownian motion $(B_t)_{t\geq 0}$ started at $d(x,t)$ and $C_0>0$ such that, with probability at least $1-\varepsilon^{k+1}$, $$\begin{aligned} \label{thisisthecoupling:copiedlabel} d(Z^+_s,t-s) &\geq B(R^\varepsilon_s)-C_0 \beta s \\ d(Z^-_s,t-s) &\leq B(R^\varepsilon_s)+ C_0 \beta s.\label{oxford}\end{aligned}$$ *Proof.* This follows from Theorem [\[teo:subestimate\]](#teo:subestimate){reference-type="ref" reference="teo:subestimate"} and equations ([\[ineqZminus\]](#ineqZminus){reference-type="ref" reference="ineqZminus"}), ([\[ineqZplus\]](#ineqZplus){reference-type="ref" reference="ineqZplus"}). 
◻ As we have seen in Theorem [\[couplingtheoremforz\]](#couplingtheoremforz){reference-type="ref" reference="couplingtheoremforz"}, $Z^+$ and $Z^-$ satisfy ([\[thisisthecoupling:copiedlabel\]](#thisisthecoupling:copiedlabel){reference-type="ref" reference="thisisthecoupling:copiedlabel"}) and ([\[oxford\]](#oxford){reference-type="ref" reference="oxford"}) when $l:= D_0(k+2)$ in ([\[zplusdef\]](#zplusdef){reference-type="ref" reference="zplusdef"}) and ([\[zminusdef\]](#zminusdef){reference-type="ref" reference="zminusdef"}). For the remainder of this work, we shall take $l:= D_0(k+2)$ in the definition of $Z^+$ and $Z^-$, where the choice of $k$ will be clear in the given context. Equipped with Theorem [\[couplingtheoremforz\]](#couplingtheoremforz){reference-type="ref" reference="couplingtheoremforz"}, we will be able to use our one-dimensional result, but for the processes ${Z}^+, {Z}^-$ instead of a $\mathbbm{d}$-dimensional stable process. To translate this back to a result in terms of stable processes, we use the following comparison between root votes.\ Let $\boldsymbol{Z}^+$ be the ternary branching process (with branching rate $\varepsilon^{-2}$) in which individuals independently travel according to $(Z^+_s)_{0\leq s\leq \mathscr{T}}$. Define $\boldsymbol{Z}^-$ similarly. Denote the historical ternary branching process associated to the $R^\varepsilon_t$-subordinated $\mathbbm{d}$-dimensional Brownian motion by $\boldsymbol{W}_{\hspace{-.1cm} R^\varepsilon}$. [\[gronwallforZ\]]{#gronwallforZ label="gronwallforZ"} Let $0 < t < \mathscr{T}$, $0 \leq \beta < c_0$, $k \in \mathbb{N}$, $p:\mathbb{R}^\mathbbm{d}\to [0,1]$ and $F$ be as in ([\[defnofF\]](#defnofF){reference-type="ref" reference="defnofF"}). Let $\boldsymbol{Z}^+(t)$ and $\boldsymbol{Z}^-(t)$ be the historical path of branching processes defined above. 
Then, for any $x \in \mathbb{R}^\mathbbm{d}$, there exist $m_1, m_2>0$ such that $$\begin{aligned} & \left| \mathbb{P}_x^\varepsilon[\mathbb{V}^\times_p(\boldsymbol{Z}^-(t))=1] - \mathbb{P}_x^\varepsilon[\mathbb{V}^\times_p(\boldsymbol{W}_{\hspace{-.1cm} R^\varepsilon}(t))=1] \right| \leq m_1e^{-\frac{t}{\varepsilon^2}}+m_2F(\varepsilon) \label{zminuss} \\ & \left| \mathbb{P}_x^\varepsilon[\mathbb{V}^\times_p(\boldsymbol{Z}^+(t))=1] - \mathbb{P}_x^\varepsilon[\mathbb{V}^\times_p(\boldsymbol{W}_{\hspace{-.1cm} R^\varepsilon}(t))=1] \right|\leq m_1e^{-\frac{t}{\varepsilon^2}}+m_2F(\varepsilon). \label{zplus} \end{aligned}$$ Proposition [\[gronwallforZ\]](#gronwallforZ){reference-type="ref" reference="gronwallforZ"} is integral to the proof of the main multidimensional result. While intuitively the root votes in ([\[zminuss\]](#zminuss){reference-type="ref" reference="zminuss"}) and ([\[zplus\]](#zplus){reference-type="ref" reference="zplus"}) should be close (since the spatial motions are), obtaining the precise bound above requires lengthy calculations that are not illuminating. So as not to disrupt our flow, we defer the proof of Proposition [\[gronwallforZ\]](#gronwallforZ){reference-type="ref" reference="gronwallforZ"} to Section [5.2](#gronwallappendix){reference-type="ref" reference="gronwallappendix"} of the appendix. Observe that Proposition [\[gronwallforZ\]](#gronwallforZ){reference-type="ref" reference="gronwallforZ"} marks the first appearance of the term $F(\varepsilon)$ that will later contribute to the sharpness of the interface in Theorem [\[maintheorem\]](#maintheorem){reference-type="ref" reference="maintheorem"}. It is also the first time we require that $\alpha \in (1,2)$ and that Assumption [\[assumptions2\]](#assumptions2){reference-type="ref" reference="assumptions2"} [\[assumptions2_C\]](#assumptions2_C){reference-type="ref" reference="assumptions2_C"} holds, to ensure that $F(\varepsilon) \to 0$.
# Majority voting in dimension $\mathbbm{d}\geq 2$[\[ch:3\]]{#ch:3 label="ch:3"} In this section, we will use the one-dimensional result to prove the main multidimensional result, Theorem [\[maintheorem2\]](#maintheorem2){reference-type="ref" reference="maintheorem2"}. To begin, in Section [4.1](#sectioncoupvotingd){reference-type="ref" reference="sectioncoupvotingd"}, we will prove a series of couplings. This will allow us to restate Theorem [\[maintheorem2\]](#maintheorem2){reference-type="ref" reference="maintheorem2"} in terms of the processes $\boldsymbol{Z}^+(t)$ and $\boldsymbol{Z}^-(t)$ in Theorem [\[new_multi_d\_theorem\]](#new_multi_d_theorem){reference-type="ref" reference="new_multi_d_theorem"}. We go on to prove Theorem [\[new_multi_d\_theorem\]](#new_multi_d_theorem){reference-type="ref" reference="new_multi_d_theorem"} in Section [4.2](#generationoftheinterfacesection){reference-type="ref" reference="generationoftheinterfacesection"} and Section [4.3](#propinterfacesection){reference-type="ref" reference="propinterfacesection"} following similar arguments to those in [@etheridge2017branching]. The proof of a technical lemma will make up the content of Section [4.4](#sectionproofofuglylemma){reference-type="ref" reference="sectionproofofuglylemma"}.\ Let us briefly recall the notation introduced in Section [3](#ch:2){reference-type="ref" reference="ch:2"}. We write $X_t$ for the one-dimensional $\alpha$-stable process (with associated historical ternary branching process $\boldsymbol{X}(t)$), and $Y_t$ for the $\mathbbm{d}$-dimensional $\alpha$-stable process (with associated historical ternary branching process ${\boldsymbol{Y}}(t)$). 
The one-dimensional $R^\varepsilon_t$-subordinated Brownian motion is denoted $B(R_t^\varepsilon)$ (with associated historical ternary branching process $\boldsymbol{B}_{R^\varepsilon}(t)$), and the $\mathbbm{d}$-dimensional $R^\varepsilon_t$-subordinated Brownian motion is denoted $W(R_t^\varepsilon)$ (with associated historical ternary branching process $\boldsymbol{W}_{\hspace{-.1cm} R^\varepsilon}(t)$). As ever, all stable processes and subordinators are assumed to satisfy Assumption [\[assumption3\]](#assumption3){reference-type="ref" reference="assumption3"}. ## A coupling of voting systems in higher dimensions {#sectioncoupvotingd} Recall that ${Z}^+(t)$ and ${Z}^-(t)$ satisfy the coupling result Theorem [\[couplingtheoremforz\]](#couplingtheoremforz){reference-type="ref" reference="couplingtheoremforz"}. This is almost identical to the coupling result from the Brownian setting, Proposition [\[sar\]](#sar){reference-type="ref" reference="sar"}. Using this and our one-dimensional result (Theorem [\[mainteo1dmarked\]](#mainteo1dmarked){reference-type="ref" reference="mainteo1dmarked"}) it will be straightforward to prove an analogue of Theorem [\[maintheorem2\]](#maintheorem2){reference-type="ref" reference="maintheorem2"} for the processes $\boldsymbol{Z}^+(t)$ and $\boldsymbol{Z}^-(t)$ by adapting the techniques of [@etheridge2017branching]. In this section, we will show that this analogue of Theorem [\[maintheorem2\]](#maintheorem2){reference-type="ref" reference="maintheorem2"} for the processes $\boldsymbol{Z}^+(t)$ and $\boldsymbol{Z}^-(t)$ (stated in Theorem [\[new_multi_d\_theorem\]](#new_multi_d_theorem){reference-type="ref" reference="new_multi_d_theorem"}) will imply Theorem [\[maintheorem\]](#maintheorem){reference-type="ref" reference="maintheorem"}. 
To do this, we construct the following couplings: $$\begin{aligned} \mathbb{P}^\varepsilon_x\left[\mathbb{V}_p(\boldsymbol{Y}(t))=1\right] &\stackrel{Prop \ \ref{votingcoup2}}{\approx} \mathbb{P}^\varepsilon_x\left[\mathbb{V}_p^+(\boldsymbol{W}_{\hspace{-.1cm} R^\varepsilon}(t))=1\right]\\ &\stackrel{Prop \ \ref{votingcoup1}}{\approx} \mathbb{P}^\varepsilon_x\left[\mathbb{V}_p^\times(\boldsymbol{W}_{\hspace{-.1cm} R^\varepsilon}(t))=1\right] \\ & \stackrel{Prop \ \ref{gronwallforZ}}{\approx} \mathbb{P}^\varepsilon_x\left[\mathbb{V}_p^\times(\boldsymbol{Z}^-(t))=1\right]\end{aligned}$$ where the voting system $\mathbb{V}_p^+$ will be defined in Definition [\[abel\]](#abel){reference-type="ref" reference="abel"}. As we will see, a similar series of couplings also relate $\mathbb{P}^\varepsilon_x\left[\mathbb{V}_p(\boldsymbol{Y}(t))=1\right]$ to $\mathbb{P}^\varepsilon_x\left[\mathbb{V}_p^\times(\boldsymbol{Z}^+(t))=1\right]$. Note that the final coupling of $\mathbb{V}_p^\times(\boldsymbol{Z}^-(t))$ to $\mathbb{V}_p^\times (\boldsymbol{W}_{R^\varepsilon}(t))$ has already been developed in Section [3](#ch:2){reference-type="ref" reference="ch:2"}.\ We now proceed to construct a coupling of $\mathbb{V}_p({\boldsymbol{Y}}(t))$ to $\mathbb{V}_p^\times(\boldsymbol{W}_{\hspace{-.1cm} R^\varepsilon}(t))$. To begin, we introduce the positively and negatively biased asymmetric marked voting procedures. [\[abel\]]{#abel label="abel"}Let $\varepsilon>0$ and $t\geq0$. Let $b_\varepsilon$ be as in ([\[bdeltapage\]](#bdeltapage){reference-type="ref" reference="bdeltapage"}). For a fixed function $p:\mathbb{R}^{\mathbbm{d}}\to [0,1]$, we define the *positively biased* (resp. *negatively biased*) asymmetric marked voting procedures on ${\cal T}(\boldsymbol{W}_{\hspace{-.1cm} R^\varepsilon}(t))$ as follows. 
(1) At each branch point in ${\cal T}(\boldsymbol{W}_{\hspace{-.1cm} R^\varepsilon}(t))$, the parent particle $j$ marks each of their three offspring $(j,1), (j,2)$ and $(j,3)$ independently with probability $b_\varepsilon$. All marked particles vote $1$ with probability $1$ (resp. $0$ for the negatively biased procedure). (2) Each unmarked leaf $i$ of ${\cal T}(\boldsymbol{W}_{\hspace{-.1cm} R^\varepsilon}(t))$, independently, votes $1$ with probability\ $p(W_i(R^\varepsilon_i(t)))$ and otherwise votes $0$. (3) At each branch point in ${\cal T}(\boldsymbol{W}_{\hspace{-.1cm} R^\varepsilon}(t))$, if the parent particle $k$ is unmarked, she votes according to the majority vote of her three offspring $(k,1), (k,2)$ and $(k,3)$. Define $\mathbb{V}_p^+$ (resp. $\mathbb{V}_p^-$) to be the vote associated to the root $\emptyset$ of the ternary branching truncated stable tree under the positively biased (negatively biased) asymmetric marked voting procedure described above. We can now prove the first coupling result. [\[votingcoup2\]]{#votingcoup2 label="votingcoup2"} For all $\varepsilon>0$, $x \in \mathbb{R}^\mathbbm{d}$, $t \geq 0$ and $p:\mathbb{R}^\mathbbm{d}\to [0,1]$ we have $$\begin{aligned} (1-b_\varepsilon)\mathbb{P}^\varepsilon_x[\mathbb{V}_p^-(\boldsymbol{W}_{\hspace{-.1cm} R^\varepsilon}(t))=1] \leq \mathbb{P}^\varepsilon_x[\mathbb{V}_p({\boldsymbol{Y}}(t))=1] \leq (1-b_\varepsilon)\mathbb{P}^\varepsilon_x[\mathbb{V}_p^+(\boldsymbol{W}_{\hspace{-.1cm} R^\varepsilon}(t))=1]+b_\varepsilon. \end{aligned}$$ *Proof.* This proof proceeds almost identically to the proof of the one-dimensional coupling of voting systems, Theorems [\[teo:ineqmark\]](#teo:ineqmark){reference-type="ref" reference="teo:ineqmark"} and [\[teo:ineqmark2\]](#teo:ineqmark2){reference-type="ref" reference="teo:ineqmark2"}, using an intermediate asymmetric exponentially marked voting system.
More specifically, define the negatively biased exponential marked voting procedure $\widehat{{\mathbb{V}}}^-_p$ like the exponential marked voting procedure from Definition [\[definitionnonmarkov\]](#definitionnonmarkov){reference-type="ref" reference="definitionnonmarkov"}, except that marked individuals vote $1$ with probability $0$. Similarly define the positively biased exponential marked voting procedure $\widehat{{\mathbb{V}}}^+_p$ where marked individuals vote $1$ with probability $1$. Then, mimicking the proof of Theorem [\[teo:ineqmark\]](#teo:ineqmark){reference-type="ref" reference="teo:ineqmark"}, we can show that, for all $x\in \mathbb{R}^\mathbbm{d}$, $$\begin{aligned} \label{comparison_multi_mark} \mathbb{P}_x^\varepsilon\left[\widehat{{\mathbb{V}}}^-_p(\boldsymbol{W}_{\hspace{-.1cm} R^\varepsilon}(t))=1\right]\leq \mathbb{P}^\varepsilon_x[\mathbb{V}_p({\boldsymbol{Y}}(t))=1] \leq \mathbb{P}_x^\varepsilon\left[\widehat{{\mathbb{V}}}^+_p(\boldsymbol{W}_{\hspace{-.1cm} R^\varepsilon}(t))=1\right].\end{aligned}$$ Finally, by conditioning on whether or not the initial ancestor in $\boldsymbol{W}_{\hspace{-.1cm} R^\varepsilon}(t)$ is marked (just as we did in the proof of Theorem [\[teo:ineqmark2\]](#teo:ineqmark2){reference-type="ref" reference="teo:ineqmark2"}) we obtain $$\mathbb{P}_x^\varepsilon\left[\widehat{{\mathbb{V}}}^-_p(\boldsymbol{W}_{\hspace{-.1cm} R^\varepsilon}(t))=1\right]\geq (1-b_\varepsilon)\mathbb{P}^\varepsilon_x[\mathbb{V}_p^-(\boldsymbol{W}_{\hspace{-.1cm} R^\varepsilon}(t))=1]$$ and $$\mathbb{P}_x^\varepsilon\left[\widehat{{\mathbb{V}}}^+_p(\boldsymbol{W}_{\hspace{-.1cm} R^\varepsilon}(t))=1\right]\leq (1-b_\varepsilon)\mathbb{P}^\varepsilon_x[\mathbb{V}_p^+(\boldsymbol{W}_{\hspace{-.1cm} R^\varepsilon}(t))=1]+b_\varepsilon,$$ proving the result. 
◻ In the one-dimensional analogue of Proposition [\[votingcoup2\]](#votingcoup2){reference-type="ref" reference="votingcoup2"} (Theorems [\[teo:ineqmark\]](#teo:ineqmark){reference-type="ref" reference="teo:ineqmark"}, [\[teo:ineqmark2\]](#teo:ineqmark2){reference-type="ref" reference="teo:ineqmark2"}), we were able to couple ${\mathbb{V}}(\boldsymbol{X}(t))$ and ${\mathbb{V}}^\times(\boldsymbol{B}_{R^\varepsilon}(t))$ using the (symmetric) exponentially marked voting procedure $\widehat{{\mathbb{V}}}.$ However, we could not adapt this proof to couple $\mathbb{V}_p({\boldsymbol{Y}}(t))$ to $\mathbb{V}_p^\times(\boldsymbol{W}_{\hspace{-.1cm} R^\varepsilon}(t))$ (having instead to use the auxiliary voting procedures $\mathbb{V}_p^+$ and $\mathbb{V}_p^-$). This is because the initial condition $p$ is no longer assumed to be symmetric.\ To better understand this, let us revisit the proof of Theorem [\[teo:ineqmark\]](#teo:ineqmark){reference-type="ref" reference="teo:ineqmark"}, where we showed that, for $x\geq 0$, $$\begin{aligned} \label{pepper_steak} \mathbb{P}_x^\varepsilon[{\mathbb{V}}(\boldsymbol{X}(t))=1]\geq \mathbb{P}_x^\varepsilon\left[\widehat{{\mathbb{V}}}(\boldsymbol{B}_{R^\varepsilon}(t))=1\right]\end{aligned}$$ with the reverse inequality holding when $x<0$. 
We then obtained a coupling of ${\mathbb{V}}(\boldsymbol{X}(t))$ to ${\mathbb{V}}^\times(\boldsymbol{B}_{R^\varepsilon}(t))$ by showing in Theorem [\[teo:ineqmark2\]](#teo:ineqmark2){reference-type="ref" reference="teo:ineqmark2"} that $$\mathbb{P}_x^\varepsilon\left[\widehat{{\mathbb{V}}}(\boldsymbol{B}_{R^\varepsilon}(t))=1\right] = (1-b_\varepsilon)\mathbb{P}_x^\varepsilon[{\mathbb{V}}^\times(\boldsymbol{B}_{R^\varepsilon}(t))=1]+\frac{b_\varepsilon}{2}.$$ To prove Theorem [\[teo:ineqmark\]](#teo:ineqmark){reference-type="ref" reference="teo:ineqmark"}, we used an inductive argument on the number of branching events in ${\cal T}(\boldsymbol{B}_{R^\varepsilon}(t))$ and ${\cal T}(\boldsymbol{X}(t)).$ In the base case, when ${\cal T}(\boldsymbol{X}(t)) = {\cal T}(\boldsymbol{B}_{R^\varepsilon}(t))={\cal T}_0$, the tree with a single leaf, we considered the single individual in ${\cal T}(\boldsymbol{X}(t))$, whose position satisfies $X(t)\stackrel{D}{=} B(R_t)$ for $(B_t)_{t\geq 0}$ a standard one-dimensional Brownian motion and $(R_t)_{t\geq 0}$ an $\frac{\alpha}{2}$-stable subordinator.
We saw in equation ([\[sosleepy\]](#sosleepy){reference-type="ref" reference="sosleepy"}) that, if $\overline{\tau}$ was the first time that $R_t$ made a large jump, and $\tau$ was the time of the first branching event in ${\cal T}(\boldsymbol{X}(t)),$ then if $x\geq 0$ and $p_0(x)=\mathbbm{1}_{\{x\geq 0\}}$ $$\begin{aligned} \label{ahalf}\mathbb{E}_x^\varepsilon[p_0(B(R_t)) \, |\, \overline{\tau}< t < \tau]\geq \tfrac{1}{2},\end{aligned}$$ from which it followed that $$\begin{aligned} \label{sosleepy2} \mathbb{P}_x^t({\cal T}_0) &=\mathbb{E}^{\varepsilon}_x[p_0\left(B(R_t)\right) \left \vert \right.\overline{\tau}\geq t, \tau>t] \mathbb{P}_x^\varepsilon[\overline{\tau}\geq t] +\mathbb{E}^{\varepsilon}_x[p_0(B(R_t)) \left \vert \right.\overline{\tau}<t<\tau] \mathbb{P}_x^\varepsilon[\overline{\tau}<t]\nonumber \\ &\geq \mathbb{E}^{\varepsilon}_x[p_0(B(R^\varepsilon_t))\left \vert \right.\tau>t]\,\mathbb{P}_x^\varepsilon[\overline{\tau}\geq t] + \tfrac{1}{2}\mathbb{P}_x^\varepsilon[\overline{\tau}<t]. \end{aligned}$$ The quantity in ([\[sosleepy2\]](#sosleepy2){reference-type="ref" reference="sosleepy2"}) is an upper bound for $\widehat{\mathbb{P}}_x^t({\cal T}_0),$ so by induction we obtained ([\[pepper_steak\]](#pepper_steak){reference-type="ref" reference="pepper_steak"}).\ In the multidimensional setting, for a general initial condition $p$, this argument does not hold. More specifically, ([\[ahalf\]](#ahalf){reference-type="ref" reference="ahalf"}) need not hold since $p$ may not be symmetric; instead, we only have the trivial inequality $\mathbb{E}_x^\varepsilon[p(W(R_t))\, |\, \overline{\tau}<t<\tau]\geq 0$. 
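The only property of the motion used for ([\[ahalf\]](#ahalf){reference-type="ref" reference="ahalf"}) is the symmetry of the increment $B(R_t)-x$ about $0$: if $Z$ is symmetric, then $\mathbb{P}(x+Z\geq 0)\geq \mathbb{P}(Z\geq 0)\geq \tfrac{1}{2}$ for every $x\geq 0$. A toy exact-arithmetic check with a discrete symmetric increment (purely illustrative, not the distribution appearing in the proof):

```python
from fractions import Fraction

# A symmetric discrete increment Z, standing in for B(R_t) - x.
support = [-2, -1, 0, 1, 2]
weights = [Fraction(1, 5)] * 5  # uniform, hence symmetric about 0

def prob_nonnegative(x):
    # P(x + Z >= 0) for the symmetric increment Z above.
    return sum(w for z, w in zip(support, weights) if x + z >= 0)

# For every starting point x >= 0 the probability is at least 1/2.
checks = {x: prob_nonnegative(x) for x in [0, 1, 2, 3]}
```

The asymmetry of a general initial condition $p$ in the multidimensional setting is precisely what destroys this mechanism, as the next paragraph explains.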
Using this, the multidimensional analogue of ([\[sosleepy2\]](#sosleepy2){reference-type="ref" reference="sosleepy2"}) becomes $$\mathbb{P}_x^t({\cal T}_0) \geq \mathbb{E}^{\varepsilon}_x[p(W(R^\varepsilon_t))\left \vert \right.\tau>t]\,\mathbb{P}_x^\varepsilon[\overline{\tau}\geq t],$$ where the right hand side is an upper bound for the probability that, conditional on ${\cal T}(\boldsymbol{W}_{R^\varepsilon}(t))={\cal T}_0$, the single individual votes $1$ under the negatively biased exponentially marked voting procedure $\widehat{{\mathbb{V}}}_p^-$ defined in the proof of Proposition [\[votingcoup2\]](#votingcoup2){reference-type="ref" reference="votingcoup2"}. Similarly, we can use the trivial bound $\mathbb{E}_x^\varepsilon[p(W(R_t))\, |\, \overline{\tau}<t<\tau]\leq 1$ to obtain $$\mathbb{P}_x^t({\cal T}_0) \leq \mathbb{E}^{\varepsilon}_x[p(W(R^\varepsilon_t))\left \vert \right.\tau>t]\,\mathbb{P}_x^\varepsilon[\overline{\tau}\geq t]+\mathbb{P}_x^\varepsilon[\overline{\tau}\leq t]$$ where the right hand side is equal to the probability that, conditional on ${\cal T}(\boldsymbol{W}_{R^\varepsilon}(t))={\cal T}_0$, the single individual votes $1$ under the positively biased exponentially marked voting procedure, $\widehat{{\mathbb{V}}}_p^+.$ These bounds, together with an inductive argument, can be used to prove equation ([\[comparison_multi_mark\]](#comparison_multi_mark){reference-type="ref" reference="comparison_multi_mark"}) from the proof of Proposition [\[votingcoup2\]](#votingcoup2){reference-type="ref" reference="votingcoup2"}.
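The biased voting procedures of Definition [\[abel\]](#abel){reference-type="ref" reference="abel"} are easy to illustrate on a regular ternary tree of fixed depth with i.i.d. leaf votes: an offspring of an unmarked parent votes $1$ with probability $b_\varepsilon + (1-b_\varepsilon)q$ under $\mathbb{V}_p^+$ (and $(1-b_\varepsilon)q$ under $\mathbb{V}_p^-$) whenever it would vote $1$ with probability $q$ if unmarked, so the root-vote probability is obtained by iterating the majority function $g(q)=q^2(3-2q)$. A sketch (the regular-tree setting and the names are ours, not the paper's):

```python
def g(q):
    # Majority function: probability that at least two of three independent
    # votes are 1 when each vote is 1 with probability q.
    return q * q * (3.0 - 2.0 * q)

def root_vote_prob(p_leaf, b, depth, bias):
    """Root-vote probability for a regular ternary tree of the given depth
    with i.i.d. leaf votes.  bias=+1: marked offspring vote 1 (as in V_p^+);
    bias=-1: marked offspring vote 0 (as in V_p^-); bias=0: no marking."""
    v = p_leaf
    for _ in range(depth):
        if bias > 0:
            v = g(b + (1.0 - b) * v)  # a marked child votes 1
        elif bias < 0:
            v = g((1.0 - b) * v)      # a marked child votes 0
        else:
            v = g(v)
    return v

b_eps, depth = 0.05, 8
probs = {q: (root_vote_prob(q, b_eps, depth, +1),
             root_vote_prob(q, b_eps, depth, 0),
             root_vote_prob(q, b_eps, depth, -1))
         for q in (0.3, 0.5, 0.7)}
```

Since $g$ is nondecreasing on $[0,1]$, the iteration preserves the ordering of the three root-vote probabilities (negatively biased $\leq$ unbiased $\leq$ positively biased), mirroring the sandwich in Proposition [\[votingcoup2\]](#votingcoup2){reference-type="ref" reference="votingcoup2"}.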
Just as in Remark [\[cool\]](#cool){reference-type="ref" reference="cool"}, we can write down the partial differential equation solved by $\mathbb{P}^\varepsilon_x[\mathbb{V}_p^+(\boldsymbol{W}_{\hspace{-.1cm} R^\varepsilon}(t))=1]$ and $\mathbb{P}^\varepsilon_x[\mathbb{V}_p^-(\boldsymbol{W}_{\hspace{-.1cm} R^\varepsilon}(t))=1].$ Denote the infinitesimal generator of $(W(R^\varepsilon_t))_{t\geq 0}$ by $\mathcal{L}^\varepsilon$. Then it is straightforward to verify, using similar arguments to those in the proof of Theorem [\[votedual\]](#votedual){reference-type="ref" reference="votedual"}, that $v_+^\varepsilon(t,x) := \mathbb{P}^\varepsilon_x[\mathbb{V}_p^+(\boldsymbol{W}_{\hspace{-.1cm} R^\varepsilon}(t))=1]$ solves $$\begin{aligned} \partial_t v^\varepsilon_+ = \mathcal{L}^\varepsilon v^\varepsilon_+ + {\varepsilon^{-2}}f_+(v^\varepsilon_+), \ \ v_+^\varepsilon(0,x)=p(x)\end{aligned}$$ and $v_-^\varepsilon(t,x) := \mathbb{P}^\varepsilon_x[\mathbb{V}_p^-(\boldsymbol{W}_{\hspace{-.1cm} R^\varepsilon}(t))=1]$ solves $$\begin{aligned} \partial_t v^\varepsilon_- = \mathcal{L}^\varepsilon v^\varepsilon_- + {\varepsilon^{-2}}f_-(v^\varepsilon_-), \ \ v_-^\varepsilon(0,x)=p(x)\end{aligned}$$ where the nonlinearities $f_+$ and $f_-$ are given by $$f_+(x) := x(1-x)(2x-1) -2b_\varepsilon^3(1-x)^3-3b_\varepsilon^2(1-x)^2(2x-1)+6b_\varepsilon x(1-x)^2$$ and $$f_-(x) := x(1-x)(2x-1)+2b_\varepsilon^3x^3- 3b_\varepsilon^2x^2(2x-1)-6b_\varepsilon x^2(1-x).$$ Proposition [\[votingcoup2\]](#votingcoup2){reference-type="ref" reference="votingcoup2"} relates solutions to these equations to the original (scaled) fractional Allen--Cahn equation, equation ([\[mainequation22\]](#mainequation22){reference-type="ref" reference="mainequation22"}). The positively and negatively biased voting systems can be compared to our (symmetric) marked system as follows.
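As a sanity check on the algebra (ours, not part of the paper), note that one expects $f_\pm(x)=g_\pm(x)-x$, where $g(q)=q^2(3-2q)$ is the majority function, $g_+(q)=g((1-b_\varepsilon)q+b_\varepsilon)$ and $g_-(q)=g((1-b_\varepsilon)q)$ encode the biased offspring votes, exactly as $x(1-x)(2x-1)=g(x)-x$ for the unbiased system. Both identities can be verified in exact rational arithmetic on a grid (agreement of two polynomials of degree at most three in each of $x$ and $b$ on a $5\times 5$ grid forces equality):

```python
from fractions import Fraction

def g(q):
    # Majority function for three independent votes.
    return q * q * (3 - 2 * q)

def f_plus(x, b):
    # Nonlinearity stated for the positively biased system.
    return (x * (1 - x) * (2 * x - 1)
            - 2 * b**3 * (1 - x)**3
            - 3 * b**2 * (1 - x)**2 * (2 * x - 1)
            + 6 * b * x * (1 - x)**2)

def f_minus(x, b):
    # Nonlinearity stated for the negatively biased system.
    return (x * (1 - x) * (2 * x - 1)
            + 2 * b**3 * x**3
            - 3 * b**2 * x**2 * (2 * x - 1)
            - 6 * b * x**2 * (1 - x))

grid = [Fraction(k, 7) for k in range(5)]
plus_ok = all(f_plus(x, b) == g((1 - b) * x + b) - x
              for x in grid for b in grid)
minus_ok = all(f_minus(x, b) == g((1 - b) * x) - x
               for x in grid for b in grid)
```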
Combining Propositions [\[votingcoup2\]](#votingcoup2){reference-type="ref" reference="votingcoup2"} and [\[votingcoup1\]](#votingcoup1){reference-type="ref" reference="votingcoup1"} will give us the desired comparison between $\mathbb{P}^\varepsilon_x[\mathbb{V}_p({\boldsymbol{Y}}(t))=1]$ and $\mathbb{P}_x^\varepsilon[\mathbb{V}_p^\times(\boldsymbol{W}_{\hspace{-.1cm} R^\varepsilon}(t))=1]$. [\[votingcoup1\]]{#votingcoup1 label="votingcoup1"} There exists $C>0$ such that, for all $\varepsilon>0$, $x \in \mathbb{R}^\mathbbm{d}$, $t \geq 0$ and $p:\mathbb{R}^\mathbbm{d}\to [0,1]$, $$\begin{aligned} \sup_{x \in \mathbb{R}^\mathbbm{d}} \left(\mathbb{P}^\varepsilon_x[\mathbb{V}_p^+(\boldsymbol{W}_{\hspace{-.1cm} R^\varepsilon}(t))=1]-\mathbb{P}^\varepsilon_x[\mathbb{V}_p^\times(\boldsymbol{W}_{\hspace{-.1cm} R^\varepsilon}(t))=1]\right) &\leq C b_\varepsilon \label{poto1} \\ \sup_{x \in \mathbb{R}^\mathbbm{d}} \left(\mathbb{P}_x^\varepsilon[\mathbb{V}_p^\times(\boldsymbol{W}_{\hspace{-.1cm} R^\varepsilon}(t))=1]-\mathbb{P}^\varepsilon_x[\mathbb{V}_p^-(\boldsymbol{W}_{\hspace{-.1cm} R^\varepsilon}(t))=1]\right) &\leq C b_\varepsilon. \label{poto2} \end{aligned}$$ *Proof.* We prove only ([\[poto1\]](#poto1){reference-type="ref" reference="poto1"}), noting that ([\[poto2\]](#poto2){reference-type="ref" reference="poto2"}) follows by symmetric arguments. Define $g_+:[0,1] \to [0,1]$ by $g_+(q)=g((1-b_\varepsilon)q+b_\varepsilon)$ where $g$ is the ordinary majority voting function. This is the probability that an unmarked parent particle votes $1$ under $\mathbb{V}_p^+$, in the special case when the three offspring are independent and each have probability $q$ of voting $1$ if they are unmarked. Write $\tau$ for the time of the first branching event in $\boldsymbol{W}_{\hspace{-.1cm} R^\varepsilon}(\cdot)$. 
To ease notation, set $$u_\times^\varepsilon(t, x) = \mathbb{P}_x^\varepsilon[\mathbb{V}_p^\times(\boldsymbol{W}_{\hspace{-.1cm} R^\varepsilon}(t))=1] \ \text{ and } \ u_+^\varepsilon(t,x)=\mathbb{P}_x^\varepsilon[\mathbb{V}_p^+(\boldsymbol{W}_{\hspace{-.1cm} R^\varepsilon}(t))=1].$$ Then, by the Markov property at time $t \wedge \tau$ and definition of $\mathbb{V}_p^\times$ and $\mathbb{V}_p^+$ (noting that the initial ancestor is never marked in both schemes) we have $$\begin{aligned} u_\times^\varepsilon(t, x) &= \mathbb{E}_x^\varepsilon\left[g_\times(u_\times^\varepsilon(t-\tau,W(R^\varepsilon_{\tau})))\mathbbm{1}_{\tau \leq t}\right] + \mathbb{E}_x^\varepsilon\left[p(W(R_t^\varepsilon))\mathbbm{1}_{\tau>t}\right] \\ u_+^\varepsilon(t,x) &= \mathbb{E}_x^\varepsilon\left[g_+\left(u_+^\varepsilon\left(t-\tau,W(R^\varepsilon_{\tau})\right)\right)\mathbbm{1}_{\tau \leq t}\right] + \mathbb{E}_x^\varepsilon\left[p(W(R_t^\varepsilon))\mathbbm{1}_{\tau>t}\right].\end{aligned}$$ It follows that $$\begin{aligned} |u_\times^\varepsilon(t,x) - u_+^\varepsilon(t,x)| & \leq \mathbb{E}_x^\varepsilon\left[\left|g_\times(u_\times^\varepsilon(t-\tau, W(R^\varepsilon_{\tau}))) - g_+(u_+^\varepsilon(t-\tau, W(R^\varepsilon_{\tau})))\right| \mathbbm{1}_{\tau \leq t} \right].\end{aligned}$$ By definition of $g_\times$ and $g_+$, and since $g$ is Lipschitz with constant $\tfrac{3}{2}$, we have $$\begin{aligned} &|u_\times^\varepsilon(t,x) - u_+^\varepsilon(t,x)| \\ & \leq \tfrac{3}{2}\mathbb{E}_x^\varepsilon\left[\left| (1-b_\varepsilon)(u_\times^\varepsilon(t-\tau,W(R^\varepsilon_{\tau})) - u_+^\varepsilon(t-\tau, W(R^\varepsilon_{\tau})))- \tfrac{b_\varepsilon}{2} \right| \mathbbm{1}_{\{\tau \leq t\}} \right] \\ & \leq \tfrac{3}{4} b_\varepsilon + \tfrac{3}{2}(1-b_\varepsilon) \mathbb{E}_x^\varepsilon\left[\left|u_\times^\varepsilon(t-\tau, W(R^\varepsilon_{\tau}))-u_+^\varepsilon(t-\tau, W(R^\varepsilon_{\tau}))\right|\mathbbm{1}_{\{\tau \leq t\}}\right] \\ &= \tfrac{3}{4} b_\varepsilon +
\tfrac{3}{2}(1-b_\varepsilon) \int_0^t \frac{e^{-\rho \varepsilon^{-2}}}{\varepsilon^2} \mathbb{E}^\varepsilon_x\left[|u_\times^\varepsilon(t-\rho, W(R^\varepsilon_\rho))-u_+^\varepsilon(t-\rho, W(R^\varepsilon_\rho))|\right] d\rho \\ &\leq \tfrac{3}{4} b_\varepsilon + \tfrac{3}{2}(1-b_\varepsilon) \int_0^t \frac{e^{-\rho \varepsilon^{-2}}}{\varepsilon^2} \lVert u_\times^\varepsilon(t-\rho, \cdot)-u_+^\varepsilon(t-\rho, \cdot)\rVert_\infty d\rho,\end{aligned}$$ where $\lVert \cdot \rVert_\infty$ denotes the uniform norm, and we have used that $\tau \sim \mathit{Exp}(\varepsilon^{-2})$ is independent of the spatial motion. Noting that the above inequality holds for all $x\in \mathbb{R}^\mathbbm{d}$, and applying the change of variables $\rho \mapsto t - \rho$, we obtain $$\begin{aligned} \lVert u_\times^\varepsilon(t, \cdot) - u_+^\varepsilon(t, \cdot) \rVert_\infty \leq \tfrac{3}{4}b_\varepsilon + \tfrac{3}{2}e^{-t \varepsilon^{-2}}\int_{0}^t e^{\rho\varepsilon^{-2}} \varepsilon^{-2} \lVert u_\times^\varepsilon(\rho, \cdot) - u_+^\varepsilon(\rho, \cdot)\rVert_\infty d\rho.\end{aligned}$$ By an adaptation of Grönwall's inequality, available, for instance, in [@dragomir2002some Theorem 15], $$\begin{aligned} \lVert u_\times^\varepsilon(t, \cdot) - u_+^\varepsilon(t, \cdot) \rVert_\infty &\leq \tfrac{3}{4}b_\varepsilon \exp \left(\tfrac{3}{2}\int_0^t \exp\left( - s \varepsilon^{-2} \right) \varepsilon^{-2}\, ds \right) \\ &= \tfrac{3}{4}b_\varepsilon \exp\left(\tfrac{3}{2} \mathbb{P}[\tau \leq t] \right) \\ & \leq \tfrac{3}{4}b_\varepsilon \exp\left( \tfrac{3}{2} \right).\end{aligned}$$ Setting $C:= \tfrac{3}{4}\exp\left({\tfrac{3}{2}}\right)$ gives the result. ◻ Now, using the coupling result Proposition [\[gronwallforZ\]](#gronwallforZ){reference-type="ref" reference="gronwallforZ"} from Section [3.6](#acouplingargument){reference-type="ref" reference="acouplingargument"}, we obtain our main coupling result of this section.
[\[new_corollary\]]{#new_corollary label="new_corollary"} Let $\varepsilon\in (0,1)$, $x\in \mathbb{R}^\mathbbm{d}$ and $p:\mathbb{R}^\mathbbm{d}\to [0,1]$. Let $F$ be as in ([\[defnofF\]](#defnofF){reference-type="ref" reference="defnofF"}). Then there exist $a_\mathbbm{d}(\alpha)>0$ and $m>0$ such that, for all $t\geq a_\mathbbm{d}\varepsilon^2|\log \varepsilon|,$ $$\mathbb{P}^\varepsilon_x[\mathbb{V}_p({\boldsymbol{Y}}(t))=1]\leq \mathbb{P}^\varepsilon_x[\mathbb{V}_p^\times(\boldsymbol{Z}^-(t))=1] + mF(\varepsilon) + mb_\varepsilon$$ and $$\mathbb{P}^\varepsilon_x[\mathbb{V}_p({\boldsymbol{Y}}(t))=1]\geq (1-b_\varepsilon)\mathbb{P}^\varepsilon_x[\mathbb{V}_p^\times(\boldsymbol{Z}^+(t))=1] - mF(\varepsilon) - m b_\varepsilon.$$ *Proof.* We prove only the first inequality, noting that the second follows by similar arguments. By Propositions [\[votingcoup2\]](#votingcoup2){reference-type="ref" reference="votingcoup2"}, [\[votingcoup1\]](#votingcoup1){reference-type="ref" reference="votingcoup1"}, and [\[gronwallforZ\]](#gronwallforZ){reference-type="ref" reference="gronwallforZ"}, there exist $m_1, m_2, C>0$ such that $$\begin{aligned} \mathbb{P}^\varepsilon_x[\mathbb{V}_p({\boldsymbol{Y}}(t))=1]&\leq (1-b_\varepsilon)\mathbb{P}^\varepsilon_x[\mathbb{V}_p^\times(\boldsymbol{W}_{\hspace{-.1cm} R^\varepsilon}(t))=1]+(C+1)b_\varepsilon \\ & \leq (1-b_\varepsilon)\left(\mathbb{P}^\varepsilon_x[\mathbb{V}_p^\times(\boldsymbol{Z}^-(t))=1]+ m_1e^{-\frac{t}{\varepsilon^2}}+m_2F(\varepsilon)\right)+(C+1)b_\varepsilon.\end{aligned}$$ Choose $a_\mathbbm{d}$ sufficiently large so that, for $t\geq a_\mathbbm{d}\varepsilon^2|\log \varepsilon|,$ $e^{-\frac{t}{\varepsilon^2}}\leq F(\varepsilon).$ Choosing $m$ sufficiently large then gives the upper bound.
◻ Next, we will state our main theorem for $\boldsymbol{Z}^+(t)$ and $\boldsymbol{Z}^-(t)$ and show using Corollary [\[new_corollary\]](#new_corollary){reference-type="ref" reference="new_corollary"} that it implies Theorem [\[maintheorem\]](#maintheorem){reference-type="ref" reference="maintheorem"}. Recall that $u_-= \frac{3}{4}b_\varepsilon^2 + \mathcal{O}(b_\varepsilon^3)$ and $u_+= 1-\frac{3}{4}b_\varepsilon^2 + \mathcal{O}(b_\varepsilon^3).$ [\[new_multi_d\_theorem\]]{#new_multi_d_theorem label="new_multi_d_theorem"} Fix $I$ satisfying Assumptions [\[assumptions2\]](#assumptions2){reference-type="ref" reference="assumptions2"} and $k\in \mathbb{N}$. Suppose the initial condition $p$ satisfies Assumptions [\[assumptions1\]](#assumptions1){reference-type="ref" reference="assumptions1"}. Let $\mathscr{T}$ and $d(x, t)$ be as in Section [2](#secone){reference-type="ref" reference="secone"}, $F$ be as in ([\[defnofF\]](#defnofF){reference-type="ref" reference="defnofF"}) and fix $T^*\in (0, \mathscr{T})$. Let $u_+, u_-$ be as in ([\[asympfixedpoint1\]](#asympfixedpoint1){reference-type="ref" reference="asympfixedpoint1"}) and ([\[asympfixedpoint2\]](#asympfixedpoint2){reference-type="ref" reference="asympfixedpoint2"}). 
Then there exist $\varepsilon_\mathbbm{d}(\alpha, k), a_\mathbbm{d}(\alpha, k), c_\mathbbm{d}(\alpha, k)>0$ such that, for $\varepsilon \in (0, \varepsilon_\mathbbm{d})$ and $a_\mathbbm{d}\varepsilon^2 |\log\varepsilon|\leq t\leq T^*$, (1) for $x$ with $d(x,t) \geq c_\mathbbm{d}I(\varepsilon)|\log \varepsilon|$, $\mathbb{P}_x^\varepsilon\left[\mathbb{V}_p^\times(\boldsymbol{Z}^+(t)) = 1\right]\geq u_+-\varepsilon^k,$ (2) for $x$ with $d(x,t) \leq -c_\mathbbm{d}I(\varepsilon)|\log \varepsilon|,$ $\mathbb{P}_x^\varepsilon\left[\mathbb{V}_p^\times(\boldsymbol{Z}^-(t)) = 1\right]\leq u_-+\varepsilon^k.$ To see that this implies Theorem [\[maintheorem\]](#maintheorem){reference-type="ref" reference="maintheorem"}, let $k\in \mathbb{N}$ and suppose $$\mathbb{P}_x^\varepsilon\left[\mathbb{V}_p^\times(\boldsymbol{Z}^+(t)) = 1\right]\geq u_+-\varepsilon^k.$$ By Corollary [\[new_corollary\]](#new_corollary){reference-type="ref" reference="new_corollary"}, this implies $$\mathbb{P}^\varepsilon_x[\mathbb{V}_p({\boldsymbol{Y}}(t))=1] \geq (1-b_\varepsilon)(u_+-\varepsilon^k)-mF(\varepsilon)-mb_\varepsilon$$ for some $m>0$. Since $u_+\geq 1-b_\varepsilon$, it is straightforward to see that, for $\varepsilon>0$ sufficiently small, we may increase $m$ as necessary so that $$\mathbb{P}^\varepsilon_x[\mathbb{V}_p({\boldsymbol{Y}}(t))=1] \geq 1-mF(\varepsilon)-m \frac{\varepsilon^2}{I(\varepsilon)^2}.$$ Similar arguments using Theorem [\[new_multi_d\_theorem\]](#new_multi_d_theorem){reference-type="ref" reference="new_multi_d_theorem"} (2) prove the lower bound in Theorem [\[maintheorem\]](#maintheorem){reference-type="ref" reference="maintheorem"}. ## Generation of the interface {#generationoftheinterfacesection} We now show that in a time $\mathcal{O}(\varepsilon^2|\log\varepsilon|)$, an interface of width $\mathcal{O}(I(\varepsilon)|\log\varepsilon|)$ is created.
Here, we refer to the solution interface associated to the partial differential equation solved by $\mathbb{P}_x^\varepsilon[\mathbb{V}_p^\times(\boldsymbol{Z}^-(t))=1]$ with initial condition $p$. We will make use of the following one-dimensional result, where we recall that ${\mathbb{V}}^\times = {\mathbb{V}}^\times_{\widehat{p}_0}$ is the marked majority voting system with initial condition $$\widehat{p}_0(x) = u_+\mathbbm{1}_{\{x\geq 0\}}+u_-\mathbbm{1}_{\{x< 0\}}.$$ [\[boundsofvote\]]{#boundsofvote label="boundsofvote"} Let $a_1$ be as in Lemma [\[defnofa\]](#defnofa){reference-type="ref" reference="defnofa"} and fix $k\in \mathbb{N}$. Then there exists $\varepsilon_\mathbbm{d}(k) >0$ such that, for all $\varepsilon\in (0,\varepsilon_\mathbbm{d})$, $t\geq a_1(k)\varepsilon^2|\log\varepsilon|$ and $x\in \mathbb{R}$, $$\begin{aligned} \label{greenwallet} u_- - \varepsilon^k \leq \mathbb{P}^\varepsilon_x[{\mathbb{V}}^\times(\boldsymbol{B}_{R^\varepsilon}(t))=1]\leq u_+ + \varepsilon^k.\end{aligned}$$ *Proof.* We prove the right-hand inequality in ([\[greenwallet\]](#greenwallet){reference-type="ref" reference="greenwallet"}) and note that the left-hand inequality follows by very similar arguments. It is easy to verify that $\delta:= g_\times'(u_+) = \mathcal{O}(b_\varepsilon)<1$, and by the Mean Value Theorem, since $g_\times'$ is decreasing on $[u_+, 1],$ for all $q\in (0, 1-u_+]$, $$g_\times(u_++q) - g_\times(u_+) \leq \delta q.$$ Since $u_+$ is a fixed point of $g_\times,$ for $\varepsilon$ sufficiently small, $g_\times(q)<q$ for $q \in (u_+, 1]$, so iterating the above inequality as in the proof of Lemma [\[lemma:iterative_voting\]](#lemma:iterative_voting){reference-type="ref" reference="lemma:iterative_voting"} gives us $$g_\times^{(n)}(u_++q)-u_+ \leq \delta^n q$$ for all $q\in (0, 1-u_+]$. In particular, $$g^{(n)}_\times(u_+ + (1-u_+)) - u_+ \leq \delta^n(1-u_+)\leq \varepsilon^k$$ after $n\geq C(k)|\log \varepsilon|$ iterations, for some $C(k)>0$.
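This geometric contraction is easy to observe numerically. The sketch below is illustration only: `g` and `g_times` are hypothetical stand-ins (the three-voter majority kernel, with marked voters voting uniformly at random), chosen to be consistent with the asymptotics $u_+ = 1-\tfrac34 b_\varepsilon^2 + \mathcal{O}(b_\varepsilon^3)$ recalled earlier; the paper's actual definitions appear in an earlier section.

```python
# Stand-ins, for illustration only (not asserted to be the paper's g, g_x).
def g(p):
    return p * p * (3 - 2 * p)      # three-voter majority kernel

b = 0.05                            # stands in for b_eps

def g_times(p):
    return g((1 - b) * p + b / 2)   # marked voters vote uniformly at random

# Locate the fixed point u_+ near 1 by iterating from 1.
u = 1.0
for _ in range(10000):
    u = g_times(u)
u_plus = u
# Consistent with u_+ = 1 - (3/4) b^2 + O(b^3):
assert abs(u_plus - (1 - 0.75 * b * b)) < 10 * b**3

# delta = g_x'(u_plus) = O(b) < 1, estimated by a forward difference.
h = 1e-7
delta = (g_times(u_plus + h) - g_times(u_plus)) / h
assert 0 < delta < 1

# The iterates from 1 contract geometrically: gap_n <= delta^n * gap_0.
x, gaps = 1.0, []
for n in range(30):
    gaps.append(x - u_plus)
    x = g_times(x)
for n, gap in enumerate(gaps):
    assert gap <= delta**n * gaps[0] + 1e-9
```

With these stand-ins, $\delta \approx 3b_\varepsilon$, so the gap drops below $\varepsilon^k$ after $\mathcal{O}(|\log\varepsilon|)$ iterations, as in the proof.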
That is, $g^{(n)}_\times(1)\leq u_+ +\varepsilon^k$ if $n\geq C(k)|\log\varepsilon|$. We note that, since $g_\times$ is increasing on $[0,1]$, the iterates $g_\times^{(n)}(x)$ are largest at $x = 1$. Finally, by Lemma [\[defnofa\]](#defnofa){reference-type="ref" reference="defnofa"}, for $t\geq a_1(k)\varepsilon^2|\log\varepsilon|,$ $$\mathbb{P}^\varepsilon_x\left[{\cal T}(\boldsymbol{B}_{R^\varepsilon}(t))\supset {\cal T}^{reg}_{A(k)|\log\varepsilon|}\right]\geq 1-\varepsilon^k.$$ Therefore, when $t\geq a_1(k)\varepsilon^2 |\log\varepsilon|$, $\mathbb{P}^\varepsilon_x[{\mathbb{V}}^\times(\boldsymbol{B}_{R^\varepsilon}(t))=1] \leq u_+ + 2\varepsilon^k.$ ◻ It is straightforward to adapt the proof of Lemma [\[defnofa\]](#defnofa){reference-type="ref" reference="defnofa"} to show that, for all $k\in \mathbb{N}$ and $A(k)$ as in Lemma [\[lemma:iterative_voting\]](#lemma:iterative_voting){reference-type="ref" reference="lemma:iterative_voting"}, there exist $\rho_\mathbbm{d}^+(k)>0$ and $\varepsilon_\mathbbm{d}>0$ such that, for all $\varepsilon\in (0, \varepsilon_\mathbbm{d})$, $x\in \mathbb{R}^\mathbbm{d}$ and $t\geq \rho_\mathbbm{d}^+(k)\varepsilon^2|\log \varepsilon|$, $$\mathbb{P}^\varepsilon_x\left[{\cal T}(\boldsymbol{Z}^+(t))\supset {\cal T}^{reg}_{A(k)|\log\varepsilon|}\right]\geq 1-\varepsilon^k.$$ Similarly, there exist $\rho_\mathbbm{d}^-(k)$ and $\varepsilon_\mathbbm{d}'>0$ such that, for all $\varepsilon\in (0, \varepsilon_\mathbbm{d}')$ and $t\geq \rho_\mathbbm{d}^-(k)\varepsilon^2|\log\varepsilon|,$ $$\mathbb{P}^\varepsilon_x\left[{\cal T}(\boldsymbol{Z}^-(t))\supset {\cal T}^{reg}_{A(k)|\log\varepsilon|}\right]\geq 1-\varepsilon^k.$$ With this, we can adapt the proof of Proposition [\[boundsofvote\]](#boundsofvote){reference-type="ref" reference="boundsofvote"} to show that, for any $k\in \mathbb{N}$ and $\varepsilon$ sufficiently small, if $t\geq \rho_\mathbbm{d}^+(k)\varepsilon^2|\log\varepsilon|$, $$\begin{aligned} \label{rhoplus} u_- -
\varepsilon^k \leq \mathbb{P}^\varepsilon_x[{\mathbb{V}}^\times_p(\boldsymbol{Z}^+(t))=1]\leq u_+ + \varepsilon^k\end{aligned}$$ for any initial condition $p$, and if $t\geq \rho_\mathbbm{d}^-(k)\varepsilon^2|\log\varepsilon|$, $$\begin{aligned} \label{rhominus} u_- - \varepsilon^k \leq \mathbb{P}^\varepsilon_x[{\mathbb{V}}^\times_p(\boldsymbol{Z}^-(t))=1]\leq u_+ + \varepsilon^k.\end{aligned}$$ In our later proofs, we will want ([\[greenwallet\]](#greenwallet){reference-type="ref" reference="greenwallet"}), ([\[rhoplus\]](#rhoplus){reference-type="ref" reference="rhoplus"}) and ([\[rhominus\]](#rhominus){reference-type="ref" reference="rhominus"}) to hold simultaneously, so it will be useful to define $$\begin{aligned} \label{defrho} \rho_\mathbbm{d}(k) := \rho_\mathbbm{d}^-(k)\vee\rho_\mathbbm{d}^+(k)\vee a_1(k).\end{aligned}$$ [\[generationoftheinterface\]]{#generationoftheinterface label="generationoftheinterface"} Let $k\in \mathbb{N}$ and $\rho_\mathbbm{d}(k)$ be as in equation ([\[defrho\]](#defrho){reference-type="ref" reference="defrho"}). Fix $I$ satisfying Assumptions [\[assumptions2\]](#assumptions2){reference-type="ref" reference="assumptions2"} and let $u_+, u_-$ be as in ([\[asympfixedpoint1\]](#asympfixedpoint1){reference-type="ref" reference="asympfixedpoint1"}) and ([\[asympfixedpoint2\]](#asympfixedpoint2){reference-type="ref" reference="asympfixedpoint2"}).
Then there exist $\varepsilon_\mathbbm{d}(\alpha, k), b_\mathbbm{d}(\alpha, k) >0$ such that, for all $\varepsilon \in (0,\varepsilon_\mathbbm{d})$, setting $$\begin{aligned} t_\mathbbm{d}(k, \varepsilon)&:= \rho_\mathbbm{d}(k)\varepsilon^2|\log\varepsilon|, \\ t'_\mathbbm{d}(k,\varepsilon)&:= (2 \rho_\mathbbm{d}(k)+k+1)\varepsilon^2|\log\varepsilon|,\end{aligned}$$ the following hold for $t\in [t_\mathbbm{d}, t'_\mathbbm{d}]$: (1) for $d(x,t) \geq b_\mathbbm{d}(k) I(\varepsilon)|\log\varepsilon|$, we have $\mathbb{P}^\varepsilon_x[\mathbb{V}^\times_p(\boldsymbol{Z}^-(t))=1] \geq u_+-\varepsilon^k,$ (2) for $d(x, t) \leq -b_\mathbbm{d}(k) I(\varepsilon)|\log\varepsilon|$, we have $\mathbb{P}^\varepsilon_x[\mathbb{V}^\times_p(\boldsymbol{Z}^-(t))=1] \leq u_-+\varepsilon^k$. By almost identical arguments, Proposition [\[generationoftheinterface\]](#generationoftheinterface){reference-type="ref" reference="generationoftheinterface"} holds when $\boldsymbol{Z}^-$ is replaced with $\boldsymbol{Z}^+$. Note that our choices of $t_\mathbbm{d}$ and $t_\mathbbm{d}'$ are stricter than needed for this result alone, but it will be convenient to define them in this way for use in later proofs. *Proof.* We follow the proof of [@etheridge2017branching Proposition 2.16] closely, and consider the multidimensional analogues of Lemmas [\[defnofa\]](#defnofa){reference-type="ref" reference="defnofa"} and [\[displacement lemma\]](#displacement lemma){reference-type="ref" reference="displacement lemma"}.
First, by choice of $\rho_\mathbbm{d}^-$, there exists $\varepsilon_\mathbbm{d}(\alpha, k)>0$ such that, for all $\varepsilon \in (0,\varepsilon_\mathbbm{d})$, $x\in \mathbb{R}^\mathbbm{d}$ and $t\geq \rho_\mathbbm{d}^-(k) \varepsilon^2 |\log\varepsilon|,$ $$\begin{aligned} \label{multidtree} \mathbb{P}^\varepsilon_x \left[\mathcal{T}(\boldsymbol{Z}^-(t)) \supseteq \mathcal{T}^{\text{reg}}_{A(k)|\log\varepsilon|}\right]\geq 1 - \varepsilon^{k}\end{aligned}$$ for $A(k)$ as in Lemma [\[lemma:iterative_voting\]](#lemma:iterative_voting){reference-type="ref" reference="lemma:iterative_voting"}. By standard estimates for the multidimensional standard normal variable, the proof of Lemma [\[displacement lemma\]](#displacement lemma){reference-type="ref" reference="displacement lemma"} can be adapted to show that there exist $h_\mathbbm{d}(k)>0$ and $\varepsilon_\mathbbm{d}(k)>0$ such that, for all $\varepsilon \in (0,\varepsilon_\mathbbm{d})$ and $t\in [t_\mathbbm{d},t_\mathbbm{d}']$, $$\begin{aligned} \label{displacementmultid} \mathbb{P}_x^\varepsilon\left[\exists i\in N(s) : | W_i(R^\varepsilon_i(s))-x| \geq h_\mathbbm{d}(k) I(\varepsilon)|\log\varepsilon| \right] \leq \varepsilon^k.\end{aligned}$$ By definition of $Z_s^-$, $| Z^-_s| \leq | W(R^\varepsilon_s)| + D_0(k+2)I(\varepsilon)^2|\log\varepsilon|,$ for $D_0$ the constant from Theorem [\[teo:subestimate\]](#teo:subestimate){reference-type="ref" reference="teo:subestimate"}. Therefore ([\[displacementmultid\]](#displacementmultid){reference-type="ref" reference="displacementmultid"}) implies that, for $\varepsilon$ sufficiently small, there exists $l_\mathbbm{d}(k) > h_\mathbbm{d}(k)$ for which $$\begin{aligned} \mathbb{P}_x^\varepsilon\left[\exists i\in N(s) : | Z^-_i(s)-x| \geq l_\mathbbm{d}(k) I(\varepsilon)|\log\varepsilon| \right] \leq \varepsilon^k.\end{aligned}$$ Set $b_\mathbbm{d}= 2l_\mathbbm{d}$. Recall that $d(x,t)$ is the signed distance between $x\in \mathbb{R}^\mathbbm{d}$ and $\mathbf{\Gamma}_{t}$.
By the regularity assumption on $\mathbf{\Gamma}_{t}$ ([\[A4\]](#A4){reference-type="ref" reference="A4"}), there exist $v_0, V_0>0$ such that, for $t\leq v_0$ and $x\in \mathbb{R}^\mathbbm{d}$, $|d(x,0)-d(x,t)|\leq V_0t$. Reduce $\varepsilon_\mathbbm{d}$ if necessary so that $t_\mathbbm{d}'\leq v_0$ for all $\varepsilon\in (0,\varepsilon_\mathbbm{d}).$ Let $\varepsilon\in (0,\varepsilon_\mathbbm{d})$, $t\in [t_\mathbbm{d}, t_\mathbbm{d}']$ and $x$ be such that $d(x,t)\geq b_\mathbbm{d}I(\varepsilon)|\log\varepsilon|$ and $| Z^-_i(t)-x|\leq l_\mathbbm{d}I(\varepsilon)|\log\varepsilon|$. It follows by the triangle inequality and Lipschitz continuity of $d(\cdot, t)$ that $$\begin{aligned} d(Z^-_i(t),0) &\geq d(x,t) - \left|d(x,t)-d(Z^-_i(t),t)\right|-\left| d(Z^-_i(t),t)-d(Z^-_i(t),0)\right|\\ &\geq b_\mathbbm{d}I(\varepsilon)|\log\varepsilon| - l_\mathbbm{d}I(\varepsilon)|\log\varepsilon|- V_0t_\mathbbm{d}'\\ &=\tfrac{1}{2}b_\mathbbm{d}I(\varepsilon)|\log\varepsilon| - V_0(2\rho_\mathbbm{d}+k+1)\varepsilon^2|\log\varepsilon|.\end{aligned}$$ By Assumption [\[assumptions2\]](#assumptions2){reference-type="ref" reference="assumptions2"} [\[assumptions2_B\]](#assumptions2_B){reference-type="ref" reference="assumptions2_B"} we may reduce $\varepsilon_\mathbbm{d}$ if necessary so that $$d(Z^-_i(t),0)\geq \tfrac{1}{4}b_\mathbbm{d}I(\varepsilon)|\log\varepsilon|$$ for all $\varepsilon\in (0,\varepsilon_\mathbbm{d}).$ Finally, by Assumptions [\[assumptions1\]](#assumptions1){reference-type="ref" reference="assumptions1"} (B)-(C), and reducing $\varepsilon_\mathbbm{d}$ if necessary, $$\begin{aligned} \label{12ineq} p(Z^-_i(t)) &\geq \tfrac{1}{2}+ \gamma\left(\tfrac{1}{4}b_\mathbbm{d}I(\varepsilon)|\log\varepsilon| \wedge r\right)\nonumber \\ &\geq \tfrac{1}{2}+\varepsilon\end{aligned}$$ for all $\varepsilon\in (0,\varepsilon_\mathbbm{d})$. 
We then combine ([\[multidtree\]](#multidtree){reference-type="ref" reference="multidtree"}), ([\[displacementmultid\]](#displacementmultid){reference-type="ref" reference="displacementmultid"}) and ([\[12ineq\]](#12ineq){reference-type="ref" reference="12ineq"}) exactly as in the proof of Theorem [\[mainteo1dmarked\]](#mainteo1dmarked){reference-type="ref" reference="mainteo1dmarked"} to obtain that, for $\varepsilon\in (0,\varepsilon_\mathbbm{d})$, $t\in [t_\mathbbm{d}, t_\mathbbm{d}']$ and $x$ such that $d(x,t)\geq b_\mathbbm{d}I(\varepsilon)|\log\varepsilon|,$ $$\mathbb{P}_x^\varepsilon[\mathbb{V}_p^\times(\boldsymbol{Z}^-(t))=1] \geq u_+-3\varepsilon^k.$$ The upper bound is obtained using the same approach. ◻ ## Propagation of the interface {#propinterfacesection} In this section, we will compare $\mathbb{V}_p^\times(\boldsymbol{Z}^-(t))$ to ${\mathbb{V}}^\times(\boldsymbol{B}_{R^\varepsilon}(t))$, and use this to show that the interface propagates. Throughout this section, define $$\begin{aligned} \label{defn_gamma} \gamma(t):= K_1e^{K_2t}I(\varepsilon)|\log\varepsilon|\end{aligned}$$ where the choice of $K_1, K_2$ and $\varepsilon$ will be clear in the given context. [\[propogationofinterface\]]{#propogationofinterface label="propogationofinterface"} Let $l \in \mathbb{N}$ with $l\geq 4$ and fix $I$ satisfying Assumptions [\[assumptions2\]](#assumptions2){reference-type="ref" reference="assumptions2"}. Let $t_\mathbbm{d}$ be as in Proposition [\[generationoftheinterface\]](#generationoftheinterface){reference-type="ref" reference="generationoftheinterface"}.
There exist $K_1, K_2>0$ such that for $\gamma(\cdot)$ as in ([\[defn_gamma\]](#defn_gamma){reference-type="ref" reference="defn_gamma"}), $\varepsilon\in (0,\varepsilon_\mathbbm{d})$ and $t\in [t_\mathbbm{d}(l), T^*]$ we have $$\begin{aligned} \label{firstpropagation} \sup_{x\in\mathbb{R}^\mathbbm{d}} \left(\mathbb{P}^\varepsilon_x[\mathbb{V}^\times_p(\boldsymbol{Z}^-(t)) =1]- \mathbb{P}^\varepsilon_{d(x,t)+\gamma(t)}[{\mathbb{V}}^\times(\boldsymbol{B}_{R^\varepsilon}(t))=1]\right) \leq \varepsilon^l\end{aligned}$$ and $$\begin{aligned} \label{Secondpropagation} \sup_{x\in \mathbb{R}^\mathbbm{d}} \left(\mathbb{P}^\varepsilon_x[\mathbb{V}^\times_p(\boldsymbol{Z}^+(t))=0]- \mathbb{P}^\varepsilon_{d(x,t)-\gamma(t)}[{\mathbb{V}}^\times(\boldsymbol{B}_{R^\varepsilon}(t))=0]\right)\leq \varepsilon^l.\end{aligned}$$ Throughout this section, we will extend the domain of $g_\times:[0,1]\to[0,1]$ to all of $\mathbb{R}$. Namely, we set $$g_\times(p) = \begin{cases} g_\times(0)& \text{if } p<0 \\ g_\times(p) &\text{if } p\in [0,1]\\ g_\times(1) & \text{if } p>1. \end{cases}$$ Key to the proof of Proposition [\[propogationofinterface\]](#propogationofinterface){reference-type="ref" reference="propogationofinterface"} will be Lemma [\[biguglylemma\]](#biguglylemma){reference-type="ref" reference="biguglylemma"}, which parallels [@etheridge2017branching Lemma 2.18]. The proof of Theorem [\[new_multi_d\_theorem\]](#new_multi_d_theorem){reference-type="ref" reference="new_multi_d_theorem"} will then follow easily. We defer the lengthy proof of Lemma [\[biguglylemma\]](#biguglylemma){reference-type="ref" reference="biguglylemma"} to Section [4.4](#sectionproofofuglylemma){reference-type="ref" reference="sectionproofofuglylemma"}.
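The clamped extension of $g_\times$ defined above can be sketched in code. A minimal sketch (the stand-in used for `g_times` below is hypothetical, purely for illustration — only the clamping itself mirrors the display):

```python
# Extend g_times: [0,1] -> [0,1] to all of R by clamping the argument,
# mirroring the piecewise definition above: p < 0 maps to g_x(0), p > 1 to g_x(1).
def extend_clamped(g_times):
    def g_ext(p):
        return g_times(min(max(p, 0.0), 1.0))
    return g_ext

# Hypothetical stand-in for g_times: three-voter majority kernel.
g_ext = extend_clamped(lambda p: p * p * (3 - 2 * p))
assert g_ext(-3.0) == g_ext(0.0) == 0.0
assert g_ext(2.0) == g_ext(1.0) == 1.0
```

The extension is constant outside $[0,1]$, so it remains monotone and keeps the same Lipschitz constant, which is what the later estimates need.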
[\[biguglylemma\]]{#biguglylemma label="biguglylemma"} Let $K_1>0$, $l \in \mathbb{N}$ with $l\geq 4$, and fix $I$ satisfying Assumptions [\[assumptions2\]](#assumptions2){reference-type="ref" reference="assumptions2"}. Let $t_\mathbbm{d}'$ be as in Proposition [\[generationoftheinterface\]](#generationoftheinterface){reference-type="ref" reference="generationoftheinterface"}. Then there exist $K_2 = K_2(K_1, l)>0$ and $\varepsilon_\mathbbm{d}(K_1, K_2, l)>0$ such that, for $\gamma(\cdot)$ as in ([\[defn_gamma\]](#defn_gamma){reference-type="ref" reference="defn_gamma"}), $\varepsilon\in (0, \varepsilon_\mathbbm{d})$, $x\in \mathbb{R}^\mathbbm{d}$, $s\in [0, (l+1) \varepsilon^2|\log\varepsilon|]$ and $t\in [t'_{\mathbbm{d}}(l), T^*]$, $$\begin{aligned} \label{2.18eqn1} &\mathbb{E}_x\left[g_\times\left(\mathbb{P}^\varepsilon_{d(Z^-_s, t-s) + \gamma(t-s)}[\mathbb{V}^\times(\boldsymbol{B}_{R^\varepsilon}(t-s))=1] + \varepsilon^l \right) \right] \nonumber \\ & \ \ \leq \frac{3}{4} \varepsilon^l+ \mathbb{E}_{d(x,t)}\left[g_\times\left(\mathbb{P}^{\varepsilon}_{B(R_s^\varepsilon) +\gamma(t)}[\mathbb{V}^\times(\boldsymbol{B}_{R^\varepsilon}(t-s))=1] \right) \right] + \mathbbm{1}_{\{s \leq \varepsilon^3\}}\varepsilon^l \end{aligned}$$ and $$\begin{aligned} \label{2.18eqn2} & \mathbb{E}_x\left[g_\times\left(\mathbb{P}^\varepsilon_{d(Z^+_s, t-s) - \gamma(t-s)}[\mathbb{V}^\times(\boldsymbol{B}_{R^\varepsilon}(t-s))=0] + \varepsilon^l \right)\right] \nonumber \\ & \ \ \leq \frac{3}{4} \varepsilon^l +\mathbb{E}_{d(x,t)}\left[g_\times\left(\mathbb{P}^{\varepsilon}_{B(R_s^\varepsilon)-\gamma(t)}[\mathbb{V}^\times(\boldsymbol{B}_{R^\varepsilon}(t-s))=0]\right)\right]+ \mathbbm{1}_{\{s \leq \varepsilon^3\}}\varepsilon^l.
\end{aligned}$$ *Proof of Proposition [\[propogationofinterface\]](#propogationofinterface){reference-type="ref" reference="propogationofinterface"}.* We only prove ([\[firstpropagation\]](#firstpropagation){reference-type="ref" reference="firstpropagation"}), since ([\[Secondpropagation\]](#Secondpropagation){reference-type="ref" reference="Secondpropagation"}) follows by completely symmetric arguments. Set $K_1 = b_\mathbbm{d}(l) +c_1(l)$ for $b_\mathbbm{d}$ as in Proposition [\[generationoftheinterface\]](#generationoftheinterface){reference-type="ref" reference="generationoftheinterface"} and $c_1$ as in Theorem [\[mainteo1dmarked\]](#mainteo1dmarked){reference-type="ref" reference="mainteo1dmarked"}. Take $\varepsilon_\mathbbm{d}>0$ sufficiently small so that Theorem [\[mainteo1dmarked\]](#mainteo1dmarked){reference-type="ref" reference="mainteo1dmarked"}, Proposition [\[generationoftheinterface\]](#generationoftheinterface){reference-type="ref" reference="generationoftheinterface"}, Proposition [\[boundsofvote\]](#boundsofvote){reference-type="ref" reference="boundsofvote"} and Lemma [\[biguglylemma\]](#biguglylemma){reference-type="ref" reference="biguglylemma"} hold for all $\varepsilon \in (0, \varepsilon_\mathbbm{d})$. We first observe that, for $\varepsilon \in (0,\varepsilon_\mathbbm{d})$, $t\in [t_\mathbbm{d}(l), t'_\mathbbm{d}(l)]$ (for $t_\mathbbm{d}$ and $t_\mathbbm{d}'$ as in Proposition [\[generationoftheinterface\]](#generationoftheinterface){reference-type="ref" reference="generationoftheinterface"}) and $x\in \mathbb{R}^\mathbbm{d}$, $$\begin{aligned} \label{koala} \mathbb{P}^\varepsilon_x[\mathbb{V}^\times_p(\boldsymbol{Z}^-(t)) =1] - \mathbb{P}^\varepsilon_{d(x,t)+\gamma(t)}[{\mathbb{V}}^\times(\boldsymbol{B}_{R^\varepsilon}(t))=1] \leq \varepsilon^l.\end{aligned}$$ To see this, first suppose that $d(x, t) \leq -b_\mathbbm{d}(l) I(\varepsilon)|\log\varepsilon|$. 
Now, reducing $\varepsilon_\mathbbm{d}$ if necessary, by Proposition [\[generationoftheinterface\]](#generationoftheinterface){reference-type="ref" reference="generationoftheinterface"} $$\begin{aligned} \mathbb{P}^\varepsilon_x[\mathbb{V}^\times_p(\boldsymbol{Z}^- (t)) =1] \leq u_- + \varepsilon^l.\end{aligned}$$ Also, by Proposition [\[boundsofvote\]](#boundsofvote){reference-type="ref" reference="boundsofvote"}, $$\begin{aligned} \mathbb{P}^\varepsilon_{d(x,t)+\gamma(t)}[{\mathbb{V}}^\times(\boldsymbol{B}_{R^\varepsilon}(t))=1] &\geq u_- -\varepsilon^l,\end{aligned}$$ hence ([\[koala\]](#koala){reference-type="ref" reference="koala"}) holds. Here, we continue to ignore coefficients in front of polynomial error terms, following Remark [\[coeffignore\]](#coeffignore){reference-type="ref" reference="coeffignore"}. If we added a coefficient to the error term in ([\[koala\]](#koala){reference-type="ref" reference="koala"}), it would appear in all polynomial error terms that follow, but would not affect our proof. Now suppose $d(x,t) \geq -b_\mathbbm{d}(l) I(\varepsilon)|\log\varepsilon|$. Then, reducing $\varepsilon_\mathbbm{d}$ if necessary, $$d(x,t) + \gamma(t)\geq c_1(l) I(\varepsilon)|\log\varepsilon|,$$ so by Theorem [\[mainteo1dmarked\]](#mainteo1dmarked){reference-type="ref" reference="mainteo1dmarked"}, $\mathbb{P}^\varepsilon_{d(x,t)+\gamma(t)}[{\mathbb{V}}^\times(\boldsymbol{B}_{R^\varepsilon}(t))=1] \geq u_+-\varepsilon^l$. By ([\[rhominus\]](#rhominus){reference-type="ref" reference="rhominus"}), $$\mathbb{P}^\varepsilon_x[\mathbb{V}^\times_p(\boldsymbol{Z}^-(t))=1]\leq u_+ +\varepsilon^l,$$ and ([\[koala\]](#koala){reference-type="ref" reference="koala"}) holds. It remains to verify ([\[koala\]](#koala){reference-type="ref" reference="koala"}) for $t\in [t_\mathbbm{d}', T^*]$.
Assume for the purpose of a contradiction that there exists $t\in [t_\mathbbm{d}', T^*]$ such that, for some $x\in \mathbb{R}^\mathbbm{d}$, $$\mathbb{P}^\varepsilon_x[\mathbb{V}^\times_p(\boldsymbol{Z}^-(t))=1]- \mathbb{P}^\varepsilon_{d(x,t)+\gamma(t)}[{\mathbb{V}}^\times(\boldsymbol{B}_{R^\varepsilon}(t))=1] >\varepsilon^l.$$ Let $T'$ be the infimum of the set of such $t$, and choose $$T\in [T', \min(T' +\varepsilon^{l+3}, T^*)],$$ which is in the set of such $t$. So there exists some $x \in \mathbb{R}^\mathbbm{d}$ such that $$\begin{aligned} \mathbb{P}^\varepsilon_x[\mathbb{V}^\times_p(\boldsymbol{Z}^-(T))=1]-\mathbb{P}^\varepsilon_{d(x,T)+\gamma(T)}[{\mathbb{V}}^\times(\boldsymbol{B}_{R^\varepsilon}(T))=1]> \varepsilon^l.\end{aligned}$$ We will show $$\begin{aligned} \label{cake2} \mathbb{P}^\varepsilon_x[\mathbb{V}^\times_p(\boldsymbol{Z}^-(T))=1] \leq \tfrac{7}{8}\varepsilon^l+\mathbb{P}^\varepsilon_{d(x,T)+\gamma(T)}[{\mathbb{V}}^\times(\boldsymbol{B}_{R^\varepsilon}(T))=1].\end{aligned}$$ Let $\tau$ be the time of the first branching event in $\boldsymbol{Z}^-(T)$ and $Z_\tau^-$ be the position of the initial ancestor particle at that time.
Then, by the Strong Markov Property at time $\tau \wedge (T-t_\mathbbm{d}),$ $$\begin{aligned} \label{markovpropeqn} \mathbb{P}_x^\varepsilon[\mathbb{V}_p^\times(\boldsymbol{Z}^-(T))=1]= \mathbb{E}_x^\varepsilon\left[g_\times(\mathbb{P}_{Z^-_\tau}^\varepsilon [\mathbb{V}_p^\times(\boldsymbol{Z}^-(T-\tau))=1]) \mathbbm{1}_{\tau \leq T-t_\mathbbm{d}}\right]\nonumber\\ + \mathbb{E}_x^\varepsilon\left[\mathbb{P}^\varepsilon_{Z^-_{T-t_\mathbbm{d}}}[\mathbb{V}_p^\times(\boldsymbol{Z}^-(t_\mathbbm{d}))=1]\mathbbm{1}_{\tau\geq T-t_\mathbbm{d}}\right].\end{aligned}$$ Since $T-t_\mathbbm{d}\geq t_\mathbbm{d}'- t_\mathbbm{d}> (l+1)\varepsilon^2|\log\varepsilon|$ and $\tau\sim \mathit{Exp}(\varepsilon^{-2}),$ the second term on the right side of ([\[markovpropeqn\]](#markovpropeqn){reference-type="ref" reference="markovpropeqn"}) is bounded by $$\begin{aligned} \nonumber \mathbb{E}_x^\varepsilon\left[\mathbb{P}^\varepsilon_{Z^-_{T-t_\mathbbm{d}}}[\mathbb{V}_p^\times(\boldsymbol{Z}^-(t_\mathbbm{d}))=1]\mathbbm{1}_{\tau\geq T-t_\mathbbm{d}}\right]&\leq \mathbb{P}\left[\tau\geq (l+1)\varepsilon^2|\log\varepsilon|\right]\\ &=\varepsilon^{l+1}.\label{2.49}\end{aligned}$$ To bound the first term on the right hand side of ([\[markovpropeqn\]](#markovpropeqn){reference-type="ref" reference="markovpropeqn"}), we partition over the event $\{\tau\leq \varepsilon^{l+3}\}$ (which has probability at most $\varepsilon^{l+1}$) and its complement to obtain $$\begin{aligned} \label{2.50} &\mathbb{E}_x^\varepsilon\left[g_\times(\mathbb{P}_{Z^-_\tau}^\varepsilon [\mathbb{V}_p^\times(\boldsymbol{Z}^-(T-\tau))=1]) \mathbbm{1}_{\tau \leq T-t_\mathbbm{d}}\right]\nonumber\\ &\leq \mathbb{E}_x^\varepsilon\left[g_\times(\mathbb{P}_{Z^-_\tau}^\varepsilon [\mathbb{V}_p^\times(\boldsymbol{Z}^-(T-\tau))=1]) \mathbbm{1}_{\varepsilon^{l+3}\leq \tau \leq T-t_\mathbbm{d}}\right] + \varepsilon^{l+1}\nonumber\\ &\leq \mathbb{E}_x^\varepsilon\left[g_\times\left(\mathbb{P}^\varepsilon_{d(Z^-_\tau,T-\tau)+\gamma(T-\tau)}[\mathbb{V}^\times(\boldsymbol{B}_{R^\varepsilon}(T-\tau))=1] + \varepsilon^l\right) \mathbbm{1}_{\tau \leq T-t_\mathbbm{d}}\right] + \varepsilon^{l+1},\end{aligned}$$ where the last line follows by minimality of $T'$, since $\varepsilon^{l+3} \leq \tau \leq T-t_\mathbbm{d}$ implies $T-\tau \in [t_\mathbbm{d}, T')$, and by monotonicity of $g_\times$.
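The two clock estimates used in this step are elementary consequences of $\tau\sim \mathit{Exp}(\varepsilon^{-2})$: $\mathbb{P}[\tau\geq (l+1)\varepsilon^2|\log\varepsilon|]=\varepsilon^{l+1}$ exactly, and $\mathbb{P}[\tau\leq \varepsilon^{l+3}]=1-e^{-\varepsilon^{l+1}}\leq \varepsilon^{l+1}$. A quick numerical check with illustrative values of $\varepsilon$ and $l$:

```python
import math

eps, l = 0.1, 4          # illustrative values only
lam = eps**-2            # rate of the branching clock, tau ~ Exp(eps^-2)

# P[tau >= (l+1) eps^2 |log eps|] = exp(-(l+1)|log eps|) = eps^(l+1)
t = (l + 1) * eps**2 * abs(math.log(eps))
p_late = math.exp(-lam * t)
assert abs(p_late - eps**(l + 1)) < 1e-15

# P[tau <= eps^(l+3)] = 1 - exp(-eps^(l+1)) <= eps^(l+1), since 1-e^(-x) <= x
p_early = 1 - math.exp(-lam * eps**(l + 3))
assert p_early <= eps**(l + 1)
```

Both identities are used repeatedly in this and the following display.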
Then, conditioning on the value of $\tau$, and noting that the path of the ancestral particle $(B(R^\varepsilon_\cdot))$ is independent of $\tau$,$$\begin{aligned} \label{2.51} &\mathbb{E}_x^\varepsilon\left[g_\times\left(\mathbb{P}^\varepsilon_{d(Z^-_\tau,T-\tau)+\gamma(T-\tau)}[\mathbb{V}^\times(\boldsymbol{B}_{R^\varepsilon}(T-\tau))=1] + \varepsilon^l \right) \mathbbm{1}_{ \tau \leq T-t_\mathbbm{d}} \right]\nonumber \\ &\leq \int_0^{(l+1)\varepsilon^2|\log\varepsilon|}\frac{e^{-\varepsilon^{-2}s}}{\varepsilon^2} \mathbb{E}_x\left[g_\times\left(\mathbb{P}^\varepsilon_{d(Z^-_s,T-s)+ \gamma(T-s)}[\mathbb{V}^\times(\boldsymbol{B}_{R^\varepsilon}(T-s))=1] + \varepsilon^l \right) \right]ds\nonumber \\ & \ \ \ \ + \mathbb{P}[\tau\geq (l+1)\varepsilon^2|\log\varepsilon|]\nonumber\\ &\leq \int_0^{(l+1)\varepsilon^2|\log\varepsilon|}\frac{e^{-\varepsilon^{-2}s}}{\varepsilon^2} \mathbb{E}_{d(x,T)}\left[g_\times\left(\mathbb{P}^\varepsilon_{B(R_s^\varepsilon)+\gamma(T)}[\mathbb{V}^\times(\boldsymbol{B}_{R^\varepsilon}(T-s))=1]\right) \right]ds \nonumber \\ & \ \ \ \ + \varepsilon^{l+1} + \varepsilon^{l}\left(\tfrac{3}{4} + \mathbb{P}[\tau \leq \varepsilon^3]\right)\nonumber\\ &\leq \mathbb{E}_{d(x,T)}^\varepsilon\left[g_\times \left(\mathbb{P}^\varepsilon_{B(R^\varepsilon_{\tau'})+\gamma(T)}[\mathbb{V}^\times(\boldsymbol{B}_{R^\varepsilon}(T-\tau'))=1]\right)\mathbbm{1}_{\tau'\leq T-t_\mathbbm{d}} \right]\nonumber\\ &\ \ \ \ + \tfrac{3}{4}\varepsilon^l + 2\varepsilon^{l+1},\end{aligned}$$ where the second inequality follows by Lemma [\[biguglylemma\]](#biguglylemma){reference-type="ref" reference="biguglylemma"}. Here $\tau'$ denotes the time of the first branching event in $\boldsymbol{B}_{R^\varepsilon}$, which has the same distribution as $\tau$. The final inequality holds since $T\geq t_\mathbbm{d}'$, so $T-t_\mathbbm{d}\geq (l+1)\varepsilon^2|\log\varepsilon|$.
Putting ([\[2.49\]](#2.49){reference-type="ref" reference="2.49"}), ([\[2.50\]](#2.50){reference-type="ref" reference="2.50"}) and ([\[2.51\]](#2.51){reference-type="ref" reference="2.51"}) into ([\[markovpropeqn\]](#markovpropeqn){reference-type="ref" reference="markovpropeqn"}), we obtain $$\begin{aligned} \mathbb{P}_x^\varepsilon[\mathbb{V}_p^\times(\boldsymbol{Z}^-(T))=1] &\leq\mathbb{E}_{d(x,T)}^\varepsilon\left[g_\times\left(\mathbb{P}^\varepsilon_{B(R^\varepsilon_{\tau'})+\gamma(T)}[\mathbb{V}^\times(\boldsymbol{B}_{R^\varepsilon}(T-\tau'))=1]\right)\mathbbm{1}_{\tau'\leq T-t_\mathbbm{d}} \right]\\ & \ \ + \tfrac{3}{4} \varepsilon^l + 4 \varepsilon^{l+1}\\ & \ \ \leq \mathbb{P}^\varepsilon_{d(x,T)+\gamma(T)}[\mathbb{V}^\times(\boldsymbol{B}_{R^\varepsilon}(T))=1] + \tfrac{3}{4} \varepsilon^l + 4 \varepsilon^{l+1} ,\end{aligned}$$ where the last line follows by the Strong Markov Property for $(\boldsymbol{B}_{R^\varepsilon}(\cdot))$ at time $\tau'\wedge (T-t_\mathbbm{d})$. We can reduce $\varepsilon_\mathbbm{d}$ if necessary so that $4 \varepsilon^{l+1} + \tfrac{3}{4} \varepsilon^l \leq \tfrac{7}{8} \varepsilon^l$ for all $\varepsilon \in (0,\varepsilon_\mathbbm{d})$. This gives ([\[cake2\]](#cake2){reference-type="ref" reference="cake2"}), thereby proving ([\[firstpropagation\]](#firstpropagation){reference-type="ref" reference="firstpropagation"}). The inequality ([\[Secondpropagation\]](#Secondpropagation){reference-type="ref" reference="Secondpropagation"}) follows by a similar argument, using ([\[2.18eqn2\]](#2.18eqn2){reference-type="ref" reference="2.18eqn2"}). ◻ With this, we can now prove Theorem [\[new_multi_d\_theorem\]](#new_multi_d_theorem){reference-type="ref" reference="new_multi_d_theorem"}. *Proof of Theorem [\[new_multi_d\_theorem\]](#new_multi_d_theorem){reference-type="ref" reference="new_multi_d_theorem"}.* Set $c_\mathbbm{d}(l) := c_1(l) +K_1e^{K_2T^*}$. 
Then, for any $x\in \mathbb{R}^\mathbbm{d}$ and $t\in [t_\mathbbm{d}, T^*]$ such that $d(x,t)\leq -c_\mathbbm{d}(l)I(\varepsilon)|\log\varepsilon|$, we have $$d(x,t)+K_1e^{K_2 t}I(\varepsilon)|\log\varepsilon| \leq - c_1(l)I(\varepsilon)|\log\varepsilon|.$$ Then, by Theorem [\[mainteo1dmarked\]](#mainteo1dmarked){reference-type="ref" reference="mainteo1dmarked"} and ([\[firstpropagation\]](#firstpropagation){reference-type="ref" reference="firstpropagation"}), reducing $\varepsilon_\mathbbm{d}$ if necessary so that $\varepsilon_\mathbbm{d}<\varepsilon_1(l),$ $$\mathbb{P}_x^\varepsilon[\mathbb{V}_p^\times(\boldsymbol{Z}^-(t))=1]\leq u_- + 2\varepsilon^l.$$ Similarly, for $x$ and $t$ such that $d(x,t)\geq c_\mathbbm{d}(l)I(\varepsilon)|\log\varepsilon|$, by Theorem [\[mainteo1dmarked\]](#mainteo1dmarked){reference-type="ref" reference="mainteo1dmarked"} and ([\[Secondpropagation\]](#Secondpropagation){reference-type="ref" reference="Secondpropagation"}), $\mathbb{P}_x^\varepsilon[\mathbb{V}_p^\times(\boldsymbol{Z}^+(t))=1]\geq u_+-2\varepsilon^l$. Theorem [\[new_multi_d\_theorem\]](#new_multi_d_theorem){reference-type="ref" reference="new_multi_d_theorem"} then holds by setting $a_\mathbbm{d}:=\rho_\mathbbm{d}$. ◻ *Proof of Theorem [\[maintheorem\]](#maintheorem){reference-type="ref" reference="maintheorem"}.* This follows immediately by combining Theorem [\[new_multi_d\_theorem\]](#new_multi_d_theorem){reference-type="ref" reference="new_multi_d_theorem"} and Corollary [\[new_corollary\]](#new_corollary){reference-type="ref" reference="new_corollary"}. ◻ ## Proof of Lemma [\[biguglylemma\]](#biguglylemma){reference-type="ref" reference="biguglylemma"} {#sectionproofofuglylemma} To prove Lemma [\[biguglylemma\]](#biguglylemma){reference-type="ref" reference="biguglylemma"}, we follow the proof of [@etheridge2017branching Lemma 2.18] and consider separately the cases $|d(x,t)| \leq DI(\varepsilon)|\log\varepsilon|$ and $|d(x,t)| \geq DI(\varepsilon)|\log\varepsilon|$, for some large $D>0$.
Since neither the one-dimensional process $B(R^\varepsilon_s)$ nor the multidimensional process $Z^+$ (or $Z^-$) travels further than a distance $\mathcal{O}(I(\varepsilon)|\log\varepsilon|)$ in time $s = \mathcal{O}(\varepsilon^2|\log(\varepsilon)|)$ with high probability, we will see that, if $D$ is sufficiently large and $|d(x,t)|\leq D I(\varepsilon)|\log\varepsilon|$, the result follows from the main one-dimensional result for $\mathbb{V}_p^\times(\boldsymbol{B}_{R^\varepsilon})$, Theorem [\[mainteo1dmarked\]](#mainteo1dmarked){reference-type="ref" reference="mainteo1dmarked"}. When $|d(x,t)|\leq D I(\varepsilon)|\log\varepsilon|,$ we apply Theorem [\[couplingtheoremforz\]](#couplingtheoremforz){reference-type="ref" reference="couplingtheoremforz"} so that, with probability at least $1-\varepsilon^{l+1},$ $$d(Z^-_s, t-s)\leq B(R_s^\varepsilon) + \mathcal{O}(I(\varepsilon)|\log\varepsilon|)s.$$ Using this and monotonicity of $g_\times$ we can bound the left hand side of ([\[2.18eqn1\]](#2.18eqn1){reference-type="ref" reference="2.18eqn1"}) by $$\begin{aligned} \label{roses} \mathbb{E}_{d(x,t)}\left[g_\times\left(\mathbb{P}^{\varepsilon}_{B(R_s^\varepsilon) +\gamma(t-s)+\mathcal{O}(s)I(\varepsilon)|\log\varepsilon|}[{\mathbb{V}}^\times(\boldsymbol{B}_{R^\varepsilon}(t))=1]+\varepsilon^l\right)\right] + \mathcal{O}(\varepsilon^{l+1}). \end{aligned}$$ We then control ([\[roses\]](#roses){reference-type="ref" reference="roses"}) by considering two cases: when the argument of $g_\times$ in ([\[roses\]](#roses){reference-type="ref" reference="roses"}) is bounded away from $\tfrac{1}{2}$, and when it is close to $\tfrac{1}{2}$. In the former case, we use that $|g_\times'(y)| < \tfrac{2}{3}$ when $y$ is bounded far enough away from $\tfrac{1}{2}$, together with monotonicity of $g_\times$, to obtain ([\[2.18eqn1\]](#2.18eqn1){reference-type="ref" reference="2.18eqn1"}).
In the second case, we apply the slope of the interface result, Corollary [\[cor:slope\]](#cor:slope){reference-type="ref" reference="cor:slope"}, to bound the difference between the two expectations appearing in the inequality ([\[2.18eqn1\]](#2.18eqn1){reference-type="ref" reference="2.18eqn1"}) directly. *Proof of Lemma [\[biguglylemma\]](#biguglylemma){reference-type="ref" reference="biguglylemma"}.* Fix $l\geq 4$. For all $u\geq 0$ and $z \in \mathbb{R}$, let $$\mathbb{Q}_z^{\varepsilon, u} = \mathbb{P}_z^\varepsilon[\mathbb{V}^\times(\boldsymbol{B}_{R^\varepsilon}(u))=1].$$ Let $C_0$ be as in ([\[A3\]](#A3){reference-type="ref" reference="A3"}) and $c_1$ be defined as in Theorem [\[mainteo1dmarked\]](#mainteo1dmarked){reference-type="ref" reference="mainteo1dmarked"}. Let $$\begin{aligned} \label{defnofR} R:= 2c_1(l)+4(l+1) \mathbbm{d}+1\end{aligned}$$ and fix $K_2$ such that $$\begin{aligned} \label{K2} K_1(K_2-C_0)-C_0R-C_1= c_1(1). \end{aligned}$$ To start we let $\varepsilon_\mathbbm{d}(l) = \varepsilon_1(l)$ where $\varepsilon_1(l)$ is defined in Theorem [\[mainteo1dmarked\]](#mainteo1dmarked){reference-type="ref" reference="mainteo1dmarked"}.\ Following the proof of [@etheridge2017branching Lemma 2.18], we begin by estimating the probability that a $\mathbbm{d}$-dimensional subordinated Brownian motion moves further than a distance $I(\varepsilon)|\log \varepsilon|$ in time $s\leq (l+1)\varepsilon^2 |\log\varepsilon|$. 
Define the event $$A_x = \left\{\sup_{u\in [0,R^\varepsilon_s]} | W_u-x |\leq 2 (l+1) I(\varepsilon)|\log\varepsilon|\right\}.$$ Then, bounding $| W_{u}-x |$ by the sum of the moduli of $\mathbbm{d}$ one-dimensional Brownian motions, and by Lemma [\[boundonsubordinator\]](#boundonsubordinator){reference-type="ref" reference="boundonsubordinator"}, which bounds the displacement of the subordinator $R^\varepsilon_s$ for small times, we obtain $$\begin{aligned} \label{Acomp} \mathbb{P}_x[A_x^c] &\leq 2\mathbbm{d}\mathbb{P}_0\left[\sup_{u\in[0,R_s^\varepsilon]} B_u> 2(l+1) I(\varepsilon)|\log\varepsilon|\right]\nonumber \\ &\leq 2\mathbbm{d}\mathbb{P}_0\left[\sup_{u\in[0,(l+2)I(\varepsilon)^2|\log\varepsilon|]} B_u> 2(l+1) I(\varepsilon)|\log\varepsilon|\right] + 2\mathbbm{d}\varepsilon^{l+1}\nonumber \\ &\leq 4\mathbbm{d}\mathbb{P}_0\left[B_1 > 2((l+1)|\log\varepsilon|)^{1/2}\right] + 2\mathbbm{d}\varepsilon^{l+1}\nonumber \\ &\leq 6\mathbbm{d}\varepsilon^{l+1}\end{aligned}$$ where the second inequality follows by the reflection principle, and the final inequality follows by identical arguments to those in the proof of Lemma [\[displacement lemma\]](#displacement lemma){reference-type="ref" reference="displacement lemma"}. Now consider the three cases: (i) $d(x, t) \leq -(2c_1(l) +2(l+1) \mathbbm{d}+ K_1e^{K_2(t-s)})I(\varepsilon)|\log\varepsilon|$; (ii) $d(x, t) \geq (2c_1(l) +2(l+1) \mathbbm{d}+ K_1e^{K_2(t-s)})I(\varepsilon)|\log\varepsilon|$; and (iii) $|d(x, t)| \leq (2c_1(l) +2(l+1) \mathbbm{d}+ K_1e^{K_2(t-s)})I(\varepsilon)|\log\varepsilon|$.
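For completeness, the Gaussian tail estimate behind the final step of ([\[Acomp\]](#Acomp){reference-type="ref" reference="Acomp"}) can be sketched as follows (here we assume the normalisation $B_1\sim \mathcal{N}(0,2)$, matching the heat kernel $h_r(x,y)=(4\pi r)^{-\mathbbm{d}/2}e^{-|x-y|^2/4r}$ used later; for variance one the bound is only stronger): $$\mathbb{P}_0\left[B_1 > 2\left((l+1)|\log\varepsilon|\right)^{1/2}\right] \leq \exp\left(-\tfrac{4(l+1)|\log\varepsilon|}{4}\right) = \varepsilon^{l+1},$$ using the Chernoff-type bound $\mathbb{P}[B_1>x]\leq e^{-x^2/4}$; together with the prefactors this gives $4\mathbbm{d}\,\varepsilon^{l+1}+2\mathbbm{d}\,\varepsilon^{l+1}=6\mathbbm{d}\,\varepsilon^{l+1}$.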
**Case (i):** By ([\[A3\]](#A3){reference-type="ref" reference="A3"}), there exist $v_0, V_0>0$ such that, if $s\leq v_0$ and $x\in \mathbb{R}^\mathbbm{d}$, then $$|d(x,t)-d(x,t-s)|\leq V_0s.$$ Reduce $\varepsilon_\mathbbm{d}$ if necessary to ensure that, for all $\varepsilon \in (0,\varepsilon_{\mathbbm{d}}),$ $(l+1)\varepsilon^2|\log\varepsilon| \leq v_0.$ Then if $A_x$ occurs, $$\begin{aligned} &d(W(R^\varepsilon_s), t-s) + K_1e^{K_2(t-s)}I(\varepsilon)|\log\varepsilon| \\&\leq -(2c_1(l) + 2(l+1) \mathbbm{d}) I(\varepsilon)|\log\varepsilon| + |d(W(R^\varepsilon_s), t-s) - d(x,t)|\nonumber\\ &\leq -(2c_1(l) + 2(l+1) \mathbbm{d})I(\varepsilon)|\log\varepsilon| + |d(x,t) - d(x,t-s)| + |W(R^\varepsilon_s) - x| \nonumber\\ &\leq -2c_1(l)I(\varepsilon)|\log\varepsilon| +V_0(l+1)\varepsilon^2|\log\varepsilon|. \end{aligned}$$ By Assumption [\[assumptions2\]](#assumptions2){reference-type="ref" reference="assumptions2"} [\[assumptions2_B\]](#assumptions2_B){reference-type="ref" reference="assumptions2_B"}, we may reduce $\varepsilon_\mathbbm{d}$ if necessary so that $$d(W(R^\varepsilon_s), t-s) + K_1e^{K_2(t-s)}I(\varepsilon)|\log\varepsilon| \leq -c_1( l)I(\varepsilon)|\log\varepsilon|,$$ for all $\varepsilon \in (0,\varepsilon_\mathbbm{d}).$ Then, since $d(Z^-_s, t-s) \leq d(W(R_s^\varepsilon), t-s),$ $$d(Z^-_s, t-s) + K_1e^{K_2(t-s)}I(\varepsilon)|\log\varepsilon| \leq -c_1(l)I(\varepsilon)|\log\varepsilon|.$$ So, by Theorem [\[mainteo1dmarked\]](#mainteo1dmarked){reference-type="ref" reference="mainteo1dmarked"} and definition of $g_\times$, $$\begin{aligned} \mathbb{E}_x\left[g_\times\left(\mathbb{Q}^{\varepsilon,t-s}_{d(Z^-_s, t-s)+\gamma(t-s)}+\varepsilon^l\right)\right]&\leq \mathbb{E}_x[g_\times(u_- + 2\varepsilon^l)\mathbbm{1}_{A_x}] + \mathbb{P}_x\left[A_x^c\right]\\ &\leq u_- + 6\mathbbm{d}\varepsilon^{l+1}+12\varepsilon^{l}b_\varepsilon \end{aligned}$$ where the last line follows by calculating $g_\times(u_- + 2\varepsilon^l)$ explicitly and reducing 
$\varepsilon_\mathbbm{d}$ if necessary.\ Next, recall that $g_\times(y) = g((1-b_\varepsilon)y+\tfrac{b_\varepsilon}{2})$ for $y\in [0,1].$ So $$g_\times'(y) = 6(1-b_\varepsilon)\left((1-b_\varepsilon)y+\tfrac{b_\varepsilon}{2}\right)\left(1-\left((1-b_\varepsilon)y+\tfrac{b_\varepsilon}{2}\right)\right).$$ Hence, if $$\begin{aligned} \label{tay} (1-b_\varepsilon)(y+\delta)+\tfrac{b_\varepsilon}{2}\leq \tfrac{1}{9} \ \ \text{or} \ \ (1-b_\varepsilon)y+\tfrac{b_\varepsilon}{2}\geq \tfrac{8}{9}\end{aligned}$$ then $$\begin{aligned} \label{geqnp} g_\times(y+\delta)\leq g_\times(y)+\tfrac{2}{3}\delta.\end{aligned}$$ Indeed, under ([\[tay\]](#tay){reference-type="ref" reference="tay"}) the quantity $(1-b_\varepsilon)r+\tfrac{b_\varepsilon}{2}$ lies in $[0,\tfrac{1}{9}]\cup[\tfrac{8}{9},1]$ for every $r$ between $y$ and $y+\delta$, so that $g_\times'(r)\leq 6\cdot\tfrac{1}{9}\cdot\tfrac{8}{9}=\tfrac{48}{81}<\tfrac{2}{3}$ there, and ([\[geqnp\]](#geqnp){reference-type="ref" reference="geqnp"}) follows from the mean value theorem. From Proposition [\[boundsofvote\]](#boundsofvote){reference-type="ref" reference="boundsofvote"} (since $t-s\geq \rho_\mathbbm{d}(l)\varepsilon^2|\log\varepsilon|$) and ([\[geqnp\]](#geqnp){reference-type="ref" reference="geqnp"}), reducing $\varepsilon$ if necessary, $$\mathbb{E}_{d(x,t)}\left[g_\times\left(\mathbb{Q}^{\varepsilon,t-s}_{B(R^\varepsilon_s) +\gamma(t)} \right)\right] \geq u_- - \tfrac{2}{3} \varepsilon^l .$$ By choosing $\varepsilon$ small enough so that $6 \mathbbm{d}\varepsilon^{l+1} + 12\varepsilon^{l}b_\varepsilon + \tfrac{2}{3} \varepsilon^l \leq \tfrac{3}{4}\varepsilon^{l}$, ([\[2.18eqn1\]](#2.18eqn1){reference-type="ref" reference="2.18eqn1"}) holds in this case.\ \ **Case (ii):** Suppose now that $d(x, t) \geq (2c_1(l) +2(l+1) \mathbbm{d}+ K_1e^{K_2(t-s)})I(\varepsilon)|\log\varepsilon|$.
Using this, together with a similar argument to that used to obtain ([\[Acomp\]](#Acomp){reference-type="ref" reference="Acomp"}), we have $$\begin{aligned} \mathbb{P}_{d(x,t)}[B(R^\varepsilon_s) < c_1(l)I(\varepsilon)|\log\varepsilon|] \leq \varepsilon^{l+1}.\end{aligned}$$ It follows that $$\begin{aligned} \mathbb{E}_{d(x,t)}\left[g_\times \left(\mathbb{Q}^{\varepsilon, t-s}_{B(R_s^\varepsilon)+\gamma(t)}\right)\right] &\geq \mathbb{E}_{d(x,t)}\left[g_\times \left(\mathbb{Q}^{\varepsilon, t-s}_{B(R_s^\varepsilon) +\gamma(t)}\right)\mathbbm{1}_{\{B(R_s^\varepsilon)\geq c_1(l)I(\varepsilon)|\log\varepsilon|\}} \right]\\ &\geq g_\times(u_+-\varepsilon^l) -\varepsilon^{l+1}\\ &\geq u_+-\varepsilon^{l+1}-12\varepsilon^{l}b_\varepsilon\end{aligned}$$ where the second inequality follows by Theorem [\[mainteo1dmarked\]](#mainteo1dmarked){reference-type="ref" reference="mainteo1dmarked"}, and in the third inequality we expand $g_\times(u_+ - \varepsilon^l)$ and reduce $\varepsilon_\mathbbm{d}$ if necessary. From Proposition [\[boundsofvote\]](#boundsofvote){reference-type="ref" reference="boundsofvote"} we obtain that, for $\varepsilon$ small enough, $$\begin{aligned} \mathbb{E}_x\left[g_\times\left(\mathbb{Q}^{\varepsilon,t-s}_{d(Z^-_s, t-s)+\gamma(t-s)}+\varepsilon^l\right)\right] \leq u_+ + \tfrac{2}{3}\varepsilon^l.\end{aligned}$$ Hence, reducing $\varepsilon$ if necessary, ([\[2.18eqn1\]](#2.18eqn1){reference-type="ref" reference="2.18eqn1"}) holds in this case.\ **Case (iii):** Finally, suppose $|d(x, t)| \leq (2c_1(l) +2(l+1) \mathbbm{d}+ K_1e^{K_2(t-s)})I(\varepsilon)|\log(\varepsilon)|$. If $A_x$ occurs and $u\in [0,(l+2)I(\varepsilon)^2|\log\varepsilon|]$, $$\begin{aligned} & |d(W({R_u^\varepsilon}), t-u)| \\ &\leq |W({R^\varepsilon_u}) - x| + |d(x, t)| + |d(x, t)-d(x, t-u)| \\ &\leq (2c_1(l) + 4(l+1) \mathbbm{d}+K_1e^{K_2(t-s)})I(\varepsilon)|\log\varepsilon| + V_0(l+2)I(\varepsilon)^2|\log(\varepsilon)|.
\end{aligned}$$ Therefore, reducing $\varepsilon$ if necessary, with probability at least $1-\varepsilon^{l+1}$, $$|d(W(R^\varepsilon_s),t-s)| \leq (R+K_1e^{K_2(t-s)})I(\varepsilon)|\log\varepsilon|$$ for $R$ as in ([\[defnofR\]](#defnofR){reference-type="ref" reference="defnofR"}). Now set $$\begin{aligned} \beta = (R+K_1e^{K_2(t-s)})I(\varepsilon)|\log\varepsilon|. \label{defnbeta}\end{aligned}$$ Reduce $\varepsilon_\mathbbm{d}$ if necessary so that, for all $\varepsilon \in (0, \varepsilon_\mathbbm{d})$, $\beta \leq c_0/2$, for $c_0$ as in ([\[A1\]](#A1){reference-type="ref" reference="A1"}). Recall that $$T_\beta = \inf\left(\{s \in [0, (l+1)\varepsilon^2|\log\varepsilon|): |d(W_{s},t-s)|>\beta \}\cup \{t\}\right).$$ Note that $\mathbb{P}[R^\varepsilon_s >T_\beta] \leq 2\varepsilon^{l+1}:$ by the above calculation, if $A_x$ occurs, then $T_\beta > R^\varepsilon_s$ with probability at least $1-\varepsilon^{l+1}$. Therefore, by Theorem [\[couplingtheoremforz\]](#couplingtheoremforz){reference-type="ref" reference="couplingtheoremforz"}, and reducing $\varepsilon$ if necessary so that $T_\beta <t,$ $$\begin{aligned} d(Z^-_s, t-s) &\leq B(R_s^\varepsilon) + C_0\beta s \label{boundexpE} \end{aligned}$$ with probability at least $1-2\varepsilon^{l+1}$. Then, by monotonicity of $g_\times$ and ([\[boundexpE\]](#boundexpE){reference-type="ref" reference="boundexpE"}), partitioning over $\{R^\varepsilon_s > T_\beta\}$ and $A_x$, we obtain $$\begin{aligned} \label{qeq} &\mathbb{E}_x\left[g_\times\left(\mathbb{Q}^{\varepsilon,t-s}_{d(Z^-_s, t-s)+\gamma(t-s)}+\varepsilon^l \right)\right] \\ \nonumber &\leq \mathbb{E}_{d(x,t)}\left[g_\times\left(\mathbb{Q}^{\varepsilon,t-s}_{B(R^\varepsilon_s) + C_0\beta s + \gamma(t-s)}+\varepsilon^l\right)\right] + (2+6\mathbbm{d}) \varepsilon^{l+1}.
\end{aligned}$$ Let $$D := \left\{\left|\mathbb{Q}^{\varepsilon,t-s}_{B(R^\varepsilon_s) + C_0\beta s + \gamma(t-s)}-\tfrac{1}{2} \right|\leq \tfrac{5}{12}\right\}.$$ We consider $D$ and $D^c$ separately to bound the right hand side of ([\[qeq\]](#qeq){reference-type="ref" reference="qeq"}). First suppose the event $D$ occurs. Then, by definition of $\beta$ ([\[defnbeta\]](#defnbeta){reference-type="ref" reference="defnbeta"}), $$\begin{aligned} \label{Keqn} &K_1e^{K_2t}I(\varepsilon)|\log\varepsilon| - \left(C_0\beta s+ K_1e^{K_2(t-s)}I(\varepsilon)|\log\varepsilon| \right)\nonumber \\ &= \left(K_1e^{K_2(t-s)}\left(e^{K_2s}-1-C_0s\right)-C_0Rs\right) I(\varepsilon)|\log\varepsilon| \nonumber \\ &\geq \left(K_1(K_2-C_0)-C_0R\right)sI(\varepsilon)|\log\varepsilon| \nonumber \\ &=c_1(1)sI(\varepsilon)|\log\varepsilon| \end{aligned}$$ where the final equality follows by ([\[K2\]](#K2){reference-type="ref" reference="K2"}). Reducing $\varepsilon_\mathbbm{d}$ if necessary so that $\varepsilon_\mathbbm{d}< \min(\varepsilon_1(1), \tfrac{1}{24}),$ for $\varepsilon \in (0, \varepsilon_\mathbbm{d})$ we may apply Corollary [\[cor:slope\]](#cor:slope){reference-type="ref" reference="cor:slope"} with $$z= B(R_s^\varepsilon)+C_0\beta s+K_1e^{K_2(t-s)}I(\varepsilon)|\log\varepsilon|$$ and $$w=z+ c_1(1)sI(\varepsilon)|\log\varepsilon| \leq B(R_s^\varepsilon) + \gamma(t)$$ to give $$\begin{aligned} \label{2.63} \mathbb{Q}^{\varepsilon,t-s}_{B(R_s^\varepsilon) + C_0\beta s + \gamma(t-s)}\mathbbm{1}_D \leq \left(\mathbb{Q}^{\varepsilon,t-s}_{B(R_s^\varepsilon) + \gamma(t)}-\tfrac{1}{48}s\right)\mathbbm{1}_D.\end{aligned}$$ Now suppose the event $D^c$ occurs. Reduce $\varepsilon_\mathbbm{d}$ if necessary so that $$\tfrac{1}{12}<\tfrac{1}{9}-\varepsilon^l(1-b_\varepsilon)-\tfrac{b_\varepsilon}{2},$$ which implies ([\[tay\]](#tay){reference-type="ref" reference="tay"}) for $\delta = \varepsilon^l$. 
Thus, for $\varepsilon \in (0,\varepsilon_\mathbbm{d})$, we have $$\begin{aligned} \label{2.65} g_\times\left( \mathbb{Q}^{\varepsilon,t-s}_{B(R_s^\varepsilon) + C_0\beta s + \gamma(t-s)} + \varepsilon^l \right)\mathbbm{1}_{D^c} &\leq \left(g_\times\left( \mathbb{Q}^{\varepsilon,t-s}_{B(R_s^\varepsilon)+C_0\beta s +\gamma(t-s)}\right) + \tfrac{2}{3}\varepsilon^l \right)\mathbbm{1}_{D^c} \nonumber \\ &\leq \left(g_\times\left( \mathbb{Q}^{\varepsilon,t-s}_{B(R_s^\varepsilon)+\gamma(t)}\right) + \tfrac{2}{3}\varepsilon^l \right)\mathbbm{1}_{D^c}\end{aligned}$$ where the first inequality follows by ([\[geqnp\]](#geqnp){reference-type="ref" reference="geqnp"}) and the second inequality by ([\[Keqn\]](#Keqn){reference-type="ref" reference="Keqn"}) and monotonicity of $g_\times$. Putting ([\[2.63\]](#2.63){reference-type="ref" reference="2.63"}) and ([\[2.65\]](#2.65){reference-type="ref" reference="2.65"}) into ([\[qeq\]](#qeq){reference-type="ref" reference="qeq"}), and since $2+6\mathbbm{d}\leq 8\mathbbm{d}$ we obtain $$\begin{aligned} & \mathbb{E}_x\left[g_\times\left(\mathbb{Q}^{\varepsilon,t-s}_{d(Z^-_s, t-s)+\gamma(t-s)}+\varepsilon^l\right)\right]\\ & \leq \mathbb{E}_{d(x,t)}\left[g_\times\left(\mathbb{Q}^{\varepsilon,t-s}_{B(R_s^\varepsilon)+\gamma(t)}-\tfrac{1}{48}s+\varepsilon^l\right)\mathbbm{1}_D\right]+ \mathbb{E}_{d(x,t)}\left[\left(g_\times\left(\mathbb{Q}^{\varepsilon,t-s}_{B(R_s^\varepsilon)+\gamma(t)}\right)+\tfrac{2}{3}\varepsilon^l\right)\mathbbm{1}_{D^c}\right] \\ & \ \ +8\mathbbm{d}\varepsilon^{l+1}\\ &\leq \mathbb{E}_{d(x,t)}\left[g_\times\left(\mathbb{Q}^{\varepsilon,t-s}_{B(R_s^\varepsilon)+\gamma(t)}\right)\right]+\tfrac{2}{3}\varepsilon^l + \varepsilon^l\mathbbm{1}_{\{\tiny\frac{1}{48}s\leq \varepsilon^l\}} +8\mathbbm{d}\varepsilon^{l+1}\end{aligned}$$ where in the final inequality, we use that $g_\times'(y)\leq \tfrac{3}{2}$ for all $y\in [0,1]$. 
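The Lipschitz constant invoked in this last step is immediate from the derivative formula for $g_\times$: writing $z=(1-b_\varepsilon)y+\tfrac{b_\varepsilon}{2}\in[0,1]$ and using $z(1-z)\leq \tfrac{1}{4}$, $$g_\times'(y) = 6(1-b_\varepsilon)z(1-z)\leq 6\cdot\tfrac{1}{4} = \tfrac{3}{2} \qquad \text{for all } y\in[0,1].$$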
Further reducing $\varepsilon_\mathbbm{d}$ if necessary so that $8\mathbbm{d}\varepsilon^{l+1}\leq \tfrac{1}{12}\varepsilon^l$ and $48 \varepsilon^l \leq \varepsilon^3$ for all $\varepsilon \in (0,\varepsilon_\mathbbm{d})$ gives the result. ◻ # Appendix In this appendix, we will calculate the fixed points of $g_\times$ (Section [5.1](#app_g){reference-type="ref" reference="app_g"}), prove Proposition [\[gronwallforZ\]](#gronwallforZ){reference-type="ref" reference="gronwallforZ"} (Section [5.2](#gronwallappendix){reference-type="ref" reference="gronwallappendix"}) and provide several supplementary calculations for the truncated subordinator $R^\varepsilon_s$ (Section [5.3](#appsub){reference-type="ref" reference="appsub"}). ## Fixed points of $g_\times$ {#app_g} [\[appendixfixedpointsofg\]]{#appendixfixedpointsofg label="appendixfixedpointsofg"} The function $g_\times$ has fixed points $u_-, \tfrac{1}{2},$ and $u_+$, where $$\begin{aligned} u_- &= \tfrac{1}{2}- \tfrac{\sqrt{(1-b_\varepsilon)^3(1-3b_\varepsilon)}}{2(1-b_\varepsilon)^3}, \ \ u_+ = \tfrac{1}{2}+ \tfrac{\sqrt{(1-b_\varepsilon)^3(1-3b_\varepsilon)}}{2(1-b_\varepsilon)^3}.\end{aligned}$$ *Proof.* We aim to find $a$ such that $g_\times\left(\tfrac{1}{2} + a\right) = \tfrac{1}{2}+a$. Now, $$\begin{aligned} g_\times\left(\tfrac{1}{2} + a\right) &= 3\left((1-b_\varepsilon)(\tfrac{1}{2} + a) + \tfrac{b_\varepsilon}{2}\right)^2-2\left((1-b_\varepsilon)(\tfrac{1}{2} + a) + \tfrac{b_\varepsilon}{2}\right)^3\\ &= 2(b_\varepsilon-1)^3a^3 + \tfrac{3}{2}(1-b_\varepsilon)a + \tfrac{1}{2}.\end{aligned}$$ Setting this equal to $\tfrac{1}{2}+a$ and dividing through by $a$ (the root $a=0$ corresponds to the fixed point $\tfrac{1}{2}$), we obtain the quadratic equation $$2(1-b_\varepsilon)^3a^2 -\tfrac{3}{2}(1-b_\varepsilon) + 1 = 0,$$ for which $u_- - \tfrac{1}{2}$ and $u_+ - \tfrac{1}{2}$ are clearly solutions.
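These roots can be sanity-checked numerically; the following is a minimal Python sketch (the function names `g_times` and `fixed_points` are ours, and $b_\varepsilon$ is given an illustrative value with $0\leq b_\varepsilon < \tfrac{1}{3}$ so that the square root is real):

```python
import math

def g(x):
    # the cubic voting polynomial g(x) = 3x^2 - 2x^3
    return 3 * x ** 2 - 2 * x ** 3

def g_times(y, b):
    # the perturbed voting function g_x(y) = g((1 - b) y + b / 2)
    return g((1 - b) * y + b / 2)

def fixed_points(b):
    # the claimed fixed points u_- and u_+; real for 0 <= b < 1/3
    r = math.sqrt((1 - b) ** 3 * (1 - 3 * b)) / (2 * (1 - b) ** 3)
    return 0.5 - r, 0.5 + r

b = 0.05  # illustrative value of b_epsilon
u_minus, u_plus = fixed_points(b)
for u in (u_minus, 0.5, u_plus):
    # each claimed fixed point satisfies g_times(u, b) = u up to rounding
    assert abs(g_times(u, b) - u) < 1e-12
```

For $b_\varepsilon=0$ the fixed points reduce to $0$, $\tfrac{1}{2}$ and $1$, the fixed points of $g$ itself.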
To see that $g_\times(\tfrac{1}{2}) = \tfrac{1}{2},$ note that $$g_\times\left(\tfrac{1}{2}\right) = g\left((1-b_\varepsilon)\tfrac{1}{2}+ \tfrac{b_\varepsilon}{2}\right) = g\left(\tfrac{1}{2}\right) = \tfrac{1}{2}. \qedhere$$ ◻ ## Proof of Proposition [\[gronwallforZ\]](#gronwallforZ){reference-type="ref" reference="gronwallforZ"} {#gronwallappendix} Before proving Proposition [\[gronwallforZ\]](#gronwallforZ){reference-type="ref" reference="gronwallforZ"}, we will need the following result. [\[regofdensity\]]{#regofdensity label="regofdensity"} Let $h_t(x,\cdot)$ denote the transition density of a $\mathbbm{d}$-dimensional Brownian motion $W_t$ started at $x$. There exists a constant $C>0$ such that, for all $r> 0,$ $$|h_r(x,y) - h_r(x,y+z) | \leq C r^{-\frac{\mathbbm{d}+1}{2}} | z |$$ for all $x,y, z \in \mathbb{R}^\mathbbm{d}$. *Proof.* Fix $r> 0$. Since $$h_r(x,y) = \frac{1}{(4 \pi r)^{\frac{\mathbbm{d}}{2}}} \exp\left(- \tfrac{| x - y |^2}{4 r} \right),$$ by the Mean Value Theorem $$\begin{aligned} |h_r(x,y) - h_r(x,y+z) | \leq \frac{1}{(4 \pi r)^{\frac{\mathbbm{d}}{2}}} | z | \left| \nabla \exp\left(-\tfrac{| \xi | ^2}{4r}\right) \right|\end{aligned}$$ for some $\xi$ on the line segment between $y-x$ and $y+z-x$. Now, $$\begin{aligned} \left| \nabla \exp\left(-\tfrac{|\xi|^2}{4r}\right) \right| = \frac{| \xi |}{2r} \exp\left(-\tfrac{| \xi |^2}{4r} \right) \leq {r^{-\frac{1}{2}}} \end{aligned}$$ since $x e^{-x^2}\leq 1$ for all $x\in \mathbb{R}$. The result follows by setting $C:= (4\pi)^{-\frac{\mathbbm{d}}{2}}.$ ◻ We now prove Proposition [\[gronwallforZ\]](#gronwallforZ){reference-type="ref" reference="gronwallforZ"}. *Proof of Proposition [\[gronwallforZ\]](#gronwallforZ){reference-type="ref" reference="gronwallforZ"}.* We prove ([\[zplus\]](#zplus){reference-type="ref" reference="zplus"}) and note that ([\[zminuss\]](#zminuss){reference-type="ref" reference="zminuss"}) follows by identical arguments.
Denote the standard Euclidean distance in $\mathbb{R}^\mathbbm{d}$ as $|\cdot|$ throughout. To begin, we will construct a coupling of $\boldsymbol{Z}^+(t)$ to a historical ternary branching subordinated Brownian motion, $\boldsymbol{W}_{\hspace{-.1cm} R^\varepsilon}(t)$. Define $$u(t,x):= \mathbb{P}_x[\mathbb{V}^\times_p(\boldsymbol{Z}^+(t))=1] \ \text{ and } \ v(t,x):= \mathbb{P}_x[\mathbb{V}^\times_p(\boldsymbol{W}_{R^\varepsilon}(t))=1].$$ Abuse notation and let $\tau$ denote the time of the first branching event in both $\boldsymbol{Z}^+(t)$ and $\boldsymbol{W}_{\hspace{-.1cm} R^\varepsilon}(t)$. By the Markov property at time $\tau$, $u$ and $v$ can be written as $$\begin{aligned} \label{cafe}u(t,x) &= \mathbb{E}_x[g_\times(u(t-\tau,Z^+_\tau)) \mathbbm{1}_{\tau \leq t}] + \mathbb{E}_x[p(Z^+_t)\mathbbm{1}_{\tau > t}]\\ \label{hotchoccy} v(t,x) &= \mathbb{E}_x[g_\times(v(t-\tau, W(R_\tau^\varepsilon))) \mathbbm{1}_{\tau \leq t}] + \mathbb{E}_x[p(W(R_t^\varepsilon)) \mathbbm{1}_{\tau > t}].\end{aligned}$$ To bound the difference of $u$ and $v$, we control the difference of the first terms and second terms in ([\[cafe\]](#cafe){reference-type="ref" reference="cafe"}) and ([\[hotchoccy\]](#hotchoccy){reference-type="ref" reference="hotchoccy"}) separately. First, since $\tau\sim \mathit{Exp}(\varepsilon^{-2}),$ $$\begin{aligned} \label{red} \mathbb{E}_x[p(Z^+_t)\mathbbm{1}_{\tau > t}] - \mathbb{E}_x[p(W(R_t^\varepsilon)) \mathbbm{1}_{\tau > t}] \leq \mathbb{P}[t \leq \tau] \leq e^{-\frac{t}{\varepsilon^2}}.\end{aligned}$$ To bound the difference of the first terms in ([\[cafe\]](#cafe){reference-type="ref" reference="cafe"}) and ([\[hotchoccy\]](#hotchoccy){reference-type="ref" reference="hotchoccy"}), set $t^* := k\varepsilon^2|\log(\varepsilon)|$ and $\delta_\varepsilon:=D_0(k+2)I^2(\varepsilon)|\log(\varepsilon)|$ for $D_0$ as in Theorem [\[teo:subestimate\]](#teo:subestimate){reference-type="ref" reference="teo:subestimate"}. 
Denote the transition density of $W(R^\varepsilon_t)$ started at $x$ by $f_t(x,\cdot)$. Then, since $g_\times$ is bounded above by one and, by definition of $Z^+_t$, $$\begin{aligned} \nonumber &\mathbb{E}_x\left[g_\times(u(t-\tau,Z^+_\tau))\mathbbm{1}_{\tau \leq t}\right] \\ \nonumber & \leq\mathbb{E}_{x}\left[g_\times(u(t-\tau, Z^+_\tau))\mathbbm{1}_{\tau \leq t \wedge t^*}\right] + \mathbb{P}[\tau>t^*] \\ \nonumber & \leq \sup_{\substack{| w | \leq \delta_\varepsilon}} \mathbb{E}_{x}\left[g_\times(u(t-\tau, W(R^\varepsilon_\tau)+w))\mathbbm{1}_{\tau \leq t \wedge t^*}\right] + \varepsilon^k \\ \nonumber & = \sup_{\substack{| w | \leq \delta_\varepsilon}} \mathbb{E}\left[\int_{\mathbb{R}^\mathbbm{d}} g_\times(u(t-\tau,z+w)) f_\tau(x,z) dz \mathbbm{1}_{\tau \leq t \wedge t^*}\right] + \varepsilon^k \\ \nonumber &= \sup_{\substack{| w | \leq \delta_\varepsilon}} \mathbb{E}\left[\int_{\mathbb{R}^\mathbbm{d}} g_\times(u(t-\tau,z)) f_\tau(x,z- w) dz \mathbbm{1}_{\tau \leq t \wedge t^*}\right] + \varepsilon^k \\ \nonumber & \leq \mathbb{E}\left[\int_{\mathbb{R}^\mathbbm{d}} g_\times(u(t-\tau,z)) f_\tau(x,z) dz \mathbbm{1}_{\tau \leq t \wedge t^*}\right] + \varepsilon^k \\ \label{last}& \ \ \ + \sup_{\substack{| w | \leq \delta_\varepsilon}} \mathbb{E}\left[\int_{\mathbb{R}^\mathbbm{d}} g_\times(u(t-\tau,z))[f_\tau(x,z-w)-f_\tau(x,z)] \mathbbm{1}_{\tau \leq t \wedge t^*} dz\right]. \end{aligned}$$ Consider the $\mathbbm{d}$-dimensional ball ${{B}}_x\left(\tau^{{\frac{1}{\alpha}}}+\delta_\varepsilon\right) :=\left\{ z \in \mathbb{R}^\mathbbm{d} : | z-x | \leq \tau^{\frac{1}{\alpha}} +\delta_\varepsilon\right\}$. To ease notation, let ${{B}}_x := {{B}}_x\left(\tau^{{\frac{1}{\alpha}}}+\delta_\varepsilon\right)$. 
Here, ${{B}}_x$ has been chosen so that, for $z \in {{B}}_x$, the difference $|f_\tau(x, z-w) - f_\tau(x,z)|$ is sufficiently small, and for $z\in {{B}}_x^{\mathsf{c}}$ (the complement of ${{B}}_x$ in $\mathbb{R}^\mathbbm{d}$), the probability of the subordinated Brownian motion jumping from $x$ to $z$ by time $\tau$ is sufficiently small. Then, to bound the last term in ([\[last\]](#last){reference-type="ref" reference="last"}) we use that $g(u(t-\tau, z))$ is bounded by $1$ to get $$\begin{aligned} \nonumber & \sup_{\substack{| w | \leq \delta_\varepsilon}} \mathbb{E}\left[\int_{\mathbb{R}^\mathbbm{d}} g(u(t-\tau, z))[f_\tau(x,z-w)-f_\tau(x,z)] \mathbbm{1}_{\tau \leq t \wedge t^*} dz\right] \\ \nonumber & \leq \sup_{\substack{| w | \leq \delta_\varepsilon}} \mathbb{E}\left[\int_{\mathbb{R}^\mathbbm{d}} |f_\tau(x,z-w)-f_\tau(x,z)| \mathbbm{1}_{\tau \leq t \wedge t^*} dz \right] \\ \nonumber & \leq \sup_{\substack{| w | \leq \delta_\varepsilon}} \mathbb{E}\left[\int_{{{B}}_x} |f_\tau(x,z-w)-f_\tau(x,z)| dz \mathbbm{1}_{\tau \leq t \wedge t^*} \right] \\ & \label{twointegrals}\ \ \ + \sup_{| w| \leq \delta_\varepsilon} \mathbb{E}\left[\int_{{{B}}_x^{\mathsf{c}}} |f_\tau(x,z-w)-f_\tau(x,z)| dz \mathbbm{1}_{\tau \leq t \wedge t^*} \right].\end{aligned}$$ To bound the first term of ([\[twointegrals\]](#twointegrals){reference-type="ref" reference="twointegrals"}), first note that $$\mathbb{P}[\tau < \delta_\varepsilon^{\alpha}] \leq 1-\exp\left( -{\delta_\varepsilon^\alpha}{\varepsilon^{-2}} \right) \leq {I(\varepsilon)^{2\alpha}}{\varepsilon^{-2}} |\log(\varepsilon)|^\alpha.$$ By Proposition [\[regofdensity\]](#regofdensity){reference-type="ref" reference="regofdensity"} (noting that $f_\tau(x, \cdot) \equiv h_{R^\varepsilon_\tau}(x,\cdot)$ for $h$ the transition density of a $\mathbbm{d}$-dimensional Brownian motion) using that the first integral is bounded above by two, and allowing the constant $C$ to change from line to line, $$\begin{aligned} \label{chillip} 
\nonumber & \sup_{| w | \leq \delta_\varepsilon} \mathbb{E}\left[\int_{{{B}}_x} |f_\tau(x,z-w)-f_\tau(x,z)| dz \mathbbm{1}_{\tau \leq t \wedge t^*} \right] \\ \nonumber & \leq \sup_{| w | \leq \delta_\varepsilon} C \mathbb{E}\left[ \int_{{{B}}_x} | w | (R_\tau^\varepsilon)^{-\frac{\mathbbm{d}+1}{2}} dz\mathbbm{1}_{\tau \leq t \wedge t^*} \mathbbm{1}_{\tau \geq \delta^\alpha_\varepsilon}\right] + 2{I(\varepsilon)^{2\alpha}}{\varepsilon^{-2}} |\log(\varepsilon)|^\alpha \\ \nonumber &\leq C \delta_\varepsilon \mathbb{E}\left[V({{B}}_x) (R_\tau^\varepsilon)^{-\frac{\mathbbm{d}+1}{2}} \mathbbm{1}_{\tau \geq \delta^\alpha_\varepsilon} \right] + 2{I(\varepsilon)^{2\alpha}}{\varepsilon^{-2}} |\log(\varepsilon)|^\alpha\\ & \leq C \delta_\varepsilon \mathbb{E}\left[ (2\tau)^{\frac{\mathbbm{d}}{\alpha}} (R_\tau^\varepsilon)^{-\frac{\mathbbm{d}+1}{2}} \mathbbm{1}_{\tau \geq \delta^\alpha_\varepsilon}\right] + 2{I(\varepsilon)^{2\alpha}}{\varepsilon^{-2}} |\log(\varepsilon)|^\alpha \end{aligned}$$ where $V({{B}}_x)$ denotes the volume of ${{B}}_x$ which is proportional to $(\tau^{\frac{1}{\alpha}}+\delta_\varepsilon)^{\mathbbm{d}}$, and, conditional on $\tau \geq \delta^{\alpha}_\varepsilon$, is bounded above by $(2\tau)^{\frac{\mathbbm{d}}{\alpha}}$. In Lemma [\[newLemma:NegMomentsForR\]](#newLemma:NegMomentsForR){reference-type="ref" reference="newLemma:NegMomentsForR"}, we will bound the $-p$-th moments of $(R^\varepsilon_s)_{s\geq0}$. 
Using this, and letting $D_1, D_2$ change from line to line, we obtain $$\begin{aligned} \mathbb{E}\left[\tau^{\frac{\mathbbm{d}}{\alpha}}(R_\tau^\varepsilon)^{-\frac{\mathbbm{d}+1}{2}} \mathbbm{1}_{\tau \geq \delta^\alpha_\varepsilon} \right] &\leq \mathbb{E}\left[\tau^{\frac{\mathbbm{d}}{\alpha}}(R_{\tau}^\varepsilon)^{-\frac{\mathbbm{d}+1}{2}} \right]\\ &\leq D_1 \mathbb{E}\left[\tau^{\frac{\mathbbm{d}}{\alpha}}e^\tau\right] + D_2\mathbb{E}\left[\tau^{-\frac{1}{\alpha}}e^\tau\right]\\ &\leq D_1 \varepsilon^{\frac{2\mathbbm{d}}{\alpha}} + D_2 \varepsilon^{-\frac{2}{\alpha}}.\end{aligned}$$ Substituting this back into ([\[chillip\]](#chillip){reference-type="ref" reference="chillip"}), and again letting the constants change from line to line, we obtain $$\begin{aligned} \nonumber &\sup_{| w | \leq \delta_\varepsilon} \mathbb{E}\left[\int_{{{B}}_x} |f_\tau(x,z-w)-f_\tau(x,z)| dz \mathbbm{1}_{\tau\leq t \wedge t^*} \right] \\ & \leq C \delta_\varepsilon\left(\varepsilon^{\frac{2\mathbbm{d}}{\alpha}} + \varepsilon^{-\frac{2}{\alpha}}\right)+ 2{I(\varepsilon)^{2\alpha}}{\varepsilon^{-2}} |\log(\varepsilon)|^\alpha \nonumber \\ &= C_1\varepsilon^{\frac{2\mathbbm{d}}{\alpha}}I(\varepsilon)^{2}|\log(\varepsilon)| + C_2 {I(\varepsilon)^{2}}{\varepsilon^{-\frac{2}{\alpha}}}|\log\varepsilon| + 2{I(\varepsilon)^{2\alpha}}{\varepsilon^{-2}} |\log(\varepsilon)|^\alpha\nonumber \\ & \leq C_2 {I(\varepsilon)^{2}}{\varepsilon^{-\frac{2}{\alpha}}}|\log\varepsilon|,\label{lasteqn}\end{aligned}$$ where the final inequality follows by Assumptions [\[assumptions2\]](#assumptions2){reference-type="ref" reference="assumptions2"} (B)-(C) and we allow $C_2$ to change from line to line. By Assumption [\[assumptions2\]](#assumptions2){reference-type="ref" reference="assumptions2"} (C), ([\[lasteqn\]](#lasteqn){reference-type="ref" reference="lasteqn"}) goes to $0$ with $\varepsilon$. 
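The exponential moments used above can be computed explicitly: for $\tau \sim \mathit{Exp}(\varepsilon^{-2})$ and any $p > -1$ (in particular $p=\tfrac{\mathbbm{d}}{\alpha}$ and $p=-\tfrac{1}{\alpha}$, since $\alpha>1$), a sketch of the computation, with constants absorbed into $D_1$ and $D_2$, is $$\mathbb{E}\left[\tau^{p}e^{\tau}\right] = \varepsilon^{-2}\int_0^\infty t^{p} e^{-(\varepsilon^{-2}-1)t}\,dt = \frac{\Gamma(p+1)\,\varepsilon^{-2}}{(\varepsilon^{-2}-1)^{p+1}} = \Gamma(p+1)\left(1-\varepsilon^2\right)^{-(p+1)}\varepsilon^{2p} \leq 2\,\Gamma(p+1)\,\varepsilon^{2p}$$ for $\varepsilon$ small enough, which gives the powers $\varepsilon^{\frac{2\mathbbm{d}}{\alpha}}$ and $\varepsilon^{-\frac{2}{\alpha}}$ above.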
To bound the second term in ([\[twointegrals\]](#twointegrals){reference-type="ref" reference="twointegrals"}) we use [@chen:2003 Theorem 1.1], which provides upper bounds on the transition density of a Brownian motion subordinated by a truncated stable subordinator whose truncation level is independent of $\varepsilon$. To apply this result, we rewrite $W(R^\varepsilon_s)$ in terms of a $1$-truncated subordinated Brownian motion as follows. Let $(U_{s}^{a})_{s \geq 0}$ denote an $\frac{\alpha}{2}$-stable subordinator with truncation level $a$ (and no speed change). Let $\stackrel{D}{=}$ denote equality in distribution. Then $$R_s^\varepsilon \stackrel{D}{=} U_{s I(\varepsilon)^{\alpha-2}}^{I(\varepsilon)^2} \stackrel{D}{=}I(\varepsilon)^2 U_{s I(\varepsilon)^{-2}}^{1}$$ where the first equality follows by definition and the second equality follows by showing that, if $\Psi$ and $\Psi'$ are the characteristic exponents of $(U^{I(\varepsilon)^2}_s)_{s\geq 0}$ and $(U^1_s)_{s\geq 0}$ respectively, then $I(\varepsilon)^\alpha \Psi(\theta) = \Psi'(\theta I(\varepsilon)^2),$ which follows easily from the Lévy-Khintchine formula. Then, by the scaling property of Brownian motion, $$\begin{aligned} W(R_s^\varepsilon) \stackrel{D}{=} I(\varepsilon) W(U_{s I(\varepsilon)^{-2}}^1). \label{scalingtruncBM} \end{aligned}$$ Denote the transition density of $(W(U^{1}_t))_{t \geq 0}$ by $\hat{f}_t(x,y)$.
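For completeness, the scaling identity behind ([\[scalingtruncBM\]](#scalingtruncBM){reference-type="ref" reference="scalingtruncBM"}) can be sketched at the level of Laplace exponents (the computation for the characteristic exponents $\Psi,\Psi'$ is identical), assuming the $a$-truncated $\tfrac{\alpha}{2}$-stable subordinator has Lévy measure $\nu_a(dx)=c\,x^{-1-\frac{\alpha}{2}}\mathbbm{1}_{(0,a]}(x)\,dx$: substituting $x=au$, $$\Phi_a(\lambda) = c\int_0^a\left(1-e^{-\lambda x}\right)x^{-1-\frac{\alpha}{2}}\,dx = a^{-\frac{\alpha}{2}}\,c\int_0^1\left(1-e^{-(a\lambda)u}\right)u^{-1-\frac{\alpha}{2}}\,du = a^{-\frac{\alpha}{2}}\,\Phi_1(a\lambda),$$ so that taking $a=I(\varepsilon)^2$ gives $I(\varepsilon)^{\alpha}\Phi_{I(\varepsilon)^2}(\lambda)=\Phi_1\left(I(\varepsilon)^2\lambda\right)$, the relation stated above.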
By ([\[scalingtruncBM\]](#scalingtruncBM){reference-type="ref" reference="scalingtruncBM"}), $\hat{f}_t(x,\cdot)$ is related to the transition density of $W(R^\varepsilon_s)$ by $$f_t(x,y) = \hat{f}_{I(\varepsilon)^{-2}t}(I(\varepsilon)^{-1}x,I(\varepsilon)^{-1}y).$$ Therefore the second term in ([\[twointegrals\]](#twointegrals){reference-type="ref" reference="twointegrals"}) can be rewritten as $$\sup_{| w| \leq \delta_\varepsilon} \mathbb{E}\left[\int_{{{B}}_x^{\mathsf{c}}} \left|\hat{f}_{I(\varepsilon)^{-2}\tau }(I(\varepsilon)^{-1}x,I(\varepsilon)^{-1}(z-w))-\hat{f}_{I(\varepsilon)^{-2}\tau }(I(\varepsilon)^{-1}x,I(\varepsilon)^{-1}z)\right| dz \mathbbm{1}_{\tau \leq t \wedge t^*}\right]$$ which, by [@chen:2003 Theorem 1.1], is bounded above by $$\begin{aligned} \nonumber & \sup_{| w| \leq \delta_\varepsilon} \mathbb{E}\left[\int_{{{B}}_x^{\mathsf{c}}} \frac{D \tau I(\varepsilon)^{-2}}{(|z - w| I(\varepsilon)^{-1} )^{\mathbbm{d}+\alpha}} dz \right] +\mathbb{E}\left[\int_{{{B}}_x^{\mathsf{c}}} \frac{D\tau I(\varepsilon)^{-2}}{(|z| I(\varepsilon)^{-1} )^{\mathbbm{d}+\alpha}} dz \right] \\ \nonumber & = \sup_{| w| \leq \delta_\varepsilon} \mathbb{E}\left[\int_{{{B}}_x^{\mathsf{c}}} \frac{D\tau I(\varepsilon)^{\alpha-1}}{|z-w|^{\mathbbm{d}+\alpha}} dz \right]+ \mathbb{E}\left[\int_{{{B}}_x^{\mathsf{c}}} \frac{D \tau I(\varepsilon)^{\alpha-1}}{|z|^{\mathbbm{d}+\alpha}} dz \right] \\ \nonumber & \leq D I(\varepsilon)^{\alpha-1} \mathbb{E}\left[ \tau \int_{\tau^{\frac{1}{\alpha}}}^\infty \frac{1}{z^{\alpha+1}} dz \right] \\ \nonumber & \leq D I(\varepsilon)^{\alpha-1} \mathbb{E}[\tau (\tau^{-1})] \\ &\label{anotherbound} = D I(\varepsilon)^{\alpha-1}, \end{aligned}$$ for some $D>0$ that we allow to change from line to line. Here we have applied the change of variables $z-w\mapsto z$ and passed to polar coordinates in the third line. Note that, since $\alpha>1$, the last quantity goes to $0$ with $\varepsilon$.
Recall from ([\[defnofF\]](#defnofF){reference-type="ref" reference="defnofF"}) that $$F(\varepsilon) = {I(\varepsilon)^{2}}{\varepsilon^{-\frac{2}{\alpha}}}|\log\varepsilon| + I(\varepsilon)^{\alpha-1}.$$ Then, choosing $m>0$ sufficiently large and $\varepsilon$ sufficiently small so that $$mF(\varepsilon) \geq \varepsilon^k + DI(\varepsilon)^{\alpha-1} + C_2 {I(\varepsilon)^{2}}{\varepsilon^{-\frac{2}{\alpha}}}|\log\varepsilon|,$$ by ([\[anotherbound\]](#anotherbound){reference-type="ref" reference="anotherbound"}) and ([\[lasteqn\]](#lasteqn){reference-type="ref" reference="lasteqn"}) we can bound ([\[twointegrals\]](#twointegrals){reference-type="ref" reference="twointegrals"}) to obtain $$\begin{aligned} &\mathbb{E}_x\left[g_\times(u(t-\tau, Z^+_\tau))\mathbbm{1}_{\tau \leq t}\right]\nonumber\\ &\leq \mathbb{E}\left[\int_{\mathbb{R}^\mathbbm{d}} g_\times(u(t-\tau, z)) f_\tau(x,z) dz \mathbbm{1}_{\tau \leq t \wedge t^*}\right] + mF(\varepsilon). \label{pink}\end{aligned}$$ Finally, we note that $$\begin{aligned} \mathbb{E}_x\left[g_\times(u(t-\tau, W(R^\varepsilon_\tau))\mathbbm{1}_{\tau\leq t}\right] &\geq \mathbb{E}_x\left[g_\times(u(t-\tau, W(R^\varepsilon_\tau))\mathbbm{1}_{\tau\leq t\wedge t^*}\right] \\ &= \mathbb{E}\left[\int_{\mathbb{R}^\mathbbm{d}} g_\times(u(t-\tau, z))f_\tau(x,z)dz \mathbbm{1}_{\tau\leq t\wedge t^*}\right],\end{aligned}$$ which, together with ([\[pink\]](#pink){reference-type="ref" reference="pink"}) and ([\[red\]](#red){reference-type="ref" reference="red"}) gives $$\begin{aligned} &u(t,x)-v(t,x) \\ &\leq \mathbb{E}\left[\int g_\times(u(t-\tau, z)) f_\tau(x,z) dz \mathbbm{1}_{\tau \leq t \wedge t^*}\right]-\mathbb{E}\left[\int g_\times(v(t-\tau, z)) f_\tau(x,z) dz \mathbbm{1}_{\tau \leq t \wedge t^*}\right] \\ & \ \ +e^{-\frac{t}{\varepsilon^2}}+ mF(\varepsilon).\end{aligned}$$ Using the same approach we can obtain the lower bound on $u(t,x) - v(t,x).$ Namely, by analogy with ([\[last\]](#last){reference-type="ref" 
reference="last"}), $$\begin{aligned} &\mathbb{E}_x\left[g_\times(u(t-\tau,Z^+_\tau))\mathbbm{1}_{\tau \leq t}\right] \\ &\geq \mathbb{E}\left[\int_{\mathbb{R}^\mathbbm{d}} g(u(t-\tau, z))[f_\tau(x,{z}-{w})-f_\tau(x,{z})]\, \mathbbm{1}_{\tau \leq t \wedge t^*} dz\right]\\ & \ \ +\mathbb{E}\left[g(u(t-\tau, W(R^{\varepsilon}_\tau))) \mathbbm{1}_{\tau \leq t \wedge t^*}\right] \\ &\geq - \sup_{\substack{| w | \leq \delta_\varepsilon}}\left| \mathbb{E}\left[\int_{\mathbb{R}^\mathbbm{d}} g(u(t-\tau, z))[f_\tau(x,{z}-{w})-f_\tau(x,{z})]\, \mathbbm{1}_{\tau \leq t \wedge t^*} dz\right]\right|\\ & \ \ +\mathbb{E}\left[g(u(t-\tau, W(R^{\varepsilon}_\tau))) \mathbbm{1}_{\tau \leq t \wedge t^*}\right]\end{aligned}$$ and the final term can be bounded identically as before to give us that $$\begin{aligned} &u(t,x)-v(t,x) \\&\geq \mathbb{E}\left[\int g_\times(u(t-\tau, z)) f_\tau(x,z) dz \mathbbm{1}_{\tau \leq t \wedge t^*}\right]-\mathbb{E}\left[\int g_\times(v(t-\tau, z)) f_\tau(x,z) dz \mathbbm{1}_{\tau \leq t \wedge t^*}\right] \\ & \ \ -\left( e^{-\frac{t}{\varepsilon^2}}+mF(\varepsilon) \right).\end{aligned}$$ Therefore, using that $g_\times$ is Lipschitz with constant $\tfrac{3}{2}$, we obtain $$\begin{aligned} &|u(t,x)-v(t,x)| \\ &\leq \mathbb{E}\left[\left|\int (g(u(t-\tau, z))-g(v(t-\tau,z)))f_\tau(x,z) dz \right|\mathbbm{1}_{\tau \leq t}\right]+ e^{-\frac{t}{\varepsilon^2}}+mF(\varepsilon)\\ & \leq \tfrac{3}{2} \mathbb{E}\left[|u(t-\tau,W(R^\varepsilon_\tau))-v(t-\tau, W(R^\varepsilon_\tau)) |\mathbbm{1}_{\tau \leq t}\right] + e^{-\frac{t}{\varepsilon^2}}+mF(\varepsilon) \\ & \leq \tfrac{3}{2} \int_0^t \lVert u(\rho, \cdot)-v(\rho,\cdot) \rVert_{\infty} e^{-(t-\rho) \varepsilon^{-2}} \varepsilon^{-2} d\rho + e^{-\frac{t}{\varepsilon^2}}+mF(\varepsilon).\end{aligned}$$ Finally, using Gronwall's inequality [@movljankulov1972ob] (see also [@dragomir2002some Theorem 15]) we deduce that $$\begin{aligned} \lVert u(t,\cdot) - v(t,\cdot) \rVert_\infty &\leq \left(
e^{-\frac{t}{\varepsilon^2}}+mF(\varepsilon)\right) \exp\left( \tfrac{3}{2} \int_0^t \exp(-u \varepsilon^{-2}) \varepsilon^{-2} du \right), \\ & \leq \left( e^{-\frac{t}{\varepsilon^2}}+ mF(\varepsilon)\right) \exp\left( \tfrac{3}{2} \right),\end{aligned}$$ which gives the desired bound by choosing an appropriate $m_1, m_2>0$. ◻ ## Truncated subordinator calculations {#appsub} Throughout this section, let $(R^\varepsilon_s)_{s\geq 0}$ be the $I(\varepsilon)^2$-truncated $\tfrac{\alpha}{2}$-stable subordinator with Lévy measure given by $$\begin{aligned} \tfrac{\alpha}{2}\left(\tfrac{2-\alpha}{\alpha}\right)^{\frac{\alpha}{2}} I(\varepsilon)^{\alpha-2}{y^{-1-\frac{\alpha}{2}}} \mathbbm{1}_{\left\{0\leq y\leq \frac{2-\alpha}{\alpha} I(\varepsilon)^2\right\}}dy.\end{aligned}$$ Denote by $\mathbb{P}$ the probability measure under which $(R^\varepsilon_s)_{s\geq0}$, started at $R^\varepsilon_0 = 0$ has this distribution, and let $\mathbb{E}$ denote the corresponding expectation. For all $s, \lambda \geq 0$ the Laplace transform $\phi(\lambda):= \mathbb{E}\left[\exp\left(-\lambda R^\varepsilon_s\right)\right]$ of $R^\varepsilon_s$ is $$\begin{aligned} \label{laplace_transform_R} \phi(\lambda) &= \exp\left(\tfrac{\alpha}{2}\left(\tfrac{2-\alpha}{\alpha}\right)^{\frac{\alpha}{2}} I(\varepsilon)^{\alpha-2}s\int_0^{\frac{2-\alpha}{\alpha}I(\varepsilon)^2} \frac{e^{-\lambda y}-1}{y^{\frac{\alpha}{2}+1}}dy\right).\end{aligned}$$ The following lemma will enable us to show, in Proposition [\[boundonsubordinator\]](#boundonsubordinator){reference-type="ref" reference="boundonsubordinator"}, that $R^\varepsilon_s$ is close to $s$ for small times $s$, which is a crucial component of our proof of Theorem [\[teo:subestimate\]](#teo:subestimate){reference-type="ref" reference="teo:subestimate"}. [\[expectedvalueofR\]]{#expectedvalueofR label="expectedvalueofR"} Let $(R^\varepsilon_s)_{s\geq 0}$ be as above. 
For all $s\geq 0$, $\mathbb{E}[R_s^\varepsilon] = s.$ *Proof.* The expected value of $R^\varepsilon_s$ can be calculated explicitly by considering the derivative of its Laplace transform, namely $\mathbb{E}[R_s^\varepsilon] = -\frac{d}{d\lambda}\phi(\lambda)|_{\lambda =0}$. Denote $I:= I(\varepsilon)$ throughout. Fix $\lambda, s \geq 0$. Using integration by parts, $$\begin{aligned} &\int_0^{\frac{2-\alpha}{\alpha}I^2} \frac{e^{-\lambda y}-1}{y^{\frac{\alpha}{2}+1}}dy = -\tfrac{2}{\alpha}\left(\tfrac{2-\alpha}{\alpha}\right)^{-\frac{\alpha}{2}}I^{-\alpha}(e^{-\frac{2-\alpha}{\alpha}\lambda I^2}-1) - \tfrac{2}{\alpha}\lambda^{\frac{\alpha}{2}} \gamma\left(1-\tfrac{\alpha}{2}, \tfrac{2-\alpha}{\alpha}\lambda I^2\right)\end{aligned}$$ where $\gamma(s,x) := \int_0^x t^{s-1}e^{-t}dt$ is the lower incomplete gamma function. So, using ([\[laplace_transform_R\]](#laplace_transform_R){reference-type="ref" reference="laplace_transform_R"}), $\phi(\lambda)$ can be written as $$\begin{aligned} \phi(\lambda) = \exp\left(- I^{-2}s\left(e^{- \frac{2-\alpha}{\alpha}\lambda I^2}-1\right) - \left({\tfrac{2-\alpha}{\alpha}}\right)^{\frac{\alpha}{2}}I^{\alpha-2}\lambda^{\frac{\alpha}{2}} s\gamma\left(1-\tfrac{\alpha}{2}, \tfrac{2-\alpha}{\alpha}\lambda I^2\right) \right).\end{aligned}$$ By differentiating this quantity with respect to $\lambda$, we obtain $$\begin{aligned} & \mathbb{E}[R_s^\varepsilon] \\ &= \left. - \tfrac{2-\alpha}{\alpha}s + \left({\tfrac{2-\alpha}{\alpha}}\right)^{\frac{\alpha}{2}}I^{\alpha-2}s\left(\tfrac{\alpha}{2}\lambda^{\frac{\alpha}{2}-1} \gamma\left(1-\tfrac{\alpha}{2}, \tfrac{2-\alpha}{\alpha}\lambda I^2\right)+ \lambda^{\frac{\alpha}{2}} \frac{d\gamma }{d \lambda}\left(1-\tfrac{\alpha}{2}, \tfrac{2-\alpha}{\alpha}\lambda I^2\right)\right) \right|_{\lambda=0}.\end{aligned}$$ This can be calculated by considering the following identities for $\gamma$. 
First, by the definition of $\gamma$ and a change of variables$$\begin{aligned} \gamma\left(1-\tfrac{\alpha}{2}, \tfrac{2-\alpha}{\alpha}\lambda I^2\right) = I^{2-\alpha} \int_0^{\frac{2-\alpha}{\alpha}\lambda} z^{-\frac{\alpha}{2}}e^{-zI^2}dz.\end{aligned}$$ Therefore $$\begin{aligned} \lambda^{\frac{\alpha}{2}}\frac{d\gamma}{d\lambda} \left.\left(1-\tfrac{\alpha}{2}, \tfrac{2-\alpha}{\alpha}\lambda I^2\right)\right|_{\lambda=0} &=\left. I^{2-\alpha} \left(\tfrac{2-\alpha}{\alpha}\right)^{1-\frac{\alpha}{2}}e^{-\frac{2-\alpha}{\alpha}\lambda I^2}\right|_{\lambda=0}\\ &= I^{2-\alpha} \left(\tfrac{2-\alpha}{\alpha}\right)^{1-\frac{\alpha}{2}}.\end{aligned}$$ Under a different change of variables, $\gamma$ also satisfies $$\begin{aligned} \left.\lambda^{\frac{\alpha}{2}-1}\gamma\left(1-\tfrac{\alpha}{2}, \tfrac{2-\alpha}{\alpha}\lambda I^2\right)\right|_{\lambda=0} &= \left(\tfrac{2-\alpha}{{\alpha}}\right)^{1-\frac{\alpha}{2}}\int_0^{I^2}\left.z^{-\frac{\alpha}{2}}e^{-\frac{2-\alpha}{\alpha}\lambda z}dz\right|_{\lambda=0} \\ &=\left(\tfrac{2-\alpha}{{\alpha}}\right)^{1-\frac{\alpha}{2}} \left(\tfrac{2}{2-\alpha}\right)I^{2-\alpha}.\end{aligned}$$ Putting this all together we obtain $$\begin{aligned} \mathbb{E}[R_s^\varepsilon] &= - \tfrac{2-\alpha}{\alpha}s + \left(\tfrac{2-\alpha}{\alpha}\right)^{\frac{\alpha}{2}}I^{\alpha-2}s\left(\tfrac{\alpha}{2} \left(\tfrac{2-\alpha}{\alpha}\right)^{1-\frac{\alpha}{2}}\left(\tfrac{2}{2-\alpha}\right)I^{2-\alpha}+ \left(\tfrac{2-\alpha}{\alpha}\right)^{1-\frac{\alpha}{2}}I^{2-\alpha}\right) \\ &= s.\qedhere\end{aligned}$$ ◻ With this, we can now prove Proposition [\[boundonsubordinator\]](#boundonsubordinator){reference-type="ref" reference="boundonsubordinator"}. [\[boundonsubordinator\]]{#boundonsubordinator label="boundonsubordinator"} Let $k\in \mathbb{N}$ and $(R^\varepsilon_s)_{s\geq 0}$ be as above. 
There exists $\varepsilon_k>0$ such that, for all $\varepsilon\in (0,\varepsilon_k)$ and $s\leq \varepsilon^2|\log\varepsilon|$, $$\mathbb{P}\left[|R^\varepsilon_s-s| \geq (k+1)I(\varepsilon)^2|\log\varepsilon|\right]\leq \varepsilon^k.$$ *Proof.* Let $\varepsilon>0$. As before we set $I:= I(\varepsilon)$. Fix $k\in \mathbb{N}$ and $s\leq \varepsilon^2 |\log\varepsilon|$. Note that $$\begin{aligned} \label{edsheeran}& \mathbb{P}\left[|R^\varepsilon_s-s|\geq (k+1)I^2|\log\varepsilon|\right]\nonumber \\ &=\mathbb{P}\left[R^\varepsilon_s\geq (k+1)I^2|\log\varepsilon|+s\right] + \mathbb{P}\left[R^\varepsilon_s \leq s-(k+1)I^2|\log\varepsilon|\right]. \end{aligned}$$ By Assumption [\[assumptions2\]](#assumptions2){reference-type="ref" reference="assumptions2"} (b), $\varepsilon I^{-1}\to 0$ as $\varepsilon\to 0$, and since $s\leq \varepsilon^2|\log \varepsilon|$, we may decrease $\varepsilon$ as necessary to ensure that $s-(k+1)I^2|\log\varepsilon|\leq 0$, and the second term in ([\[edsheeran\]](#edsheeran){reference-type="ref" reference="edsheeran"}) equals zero.\ To bound the first term in ([\[edsheeran\]](#edsheeran){reference-type="ref" reference="edsheeran"}), we note that, by [@sato1999levy Theorem 25.17], and since the Lévy measure of $R^\varepsilon_s$ has compact support, the Laplace transform $\phi(\lambda)$ from ([\[laplace_transform_R\]](#laplace_transform_R){reference-type="ref" reference="laplace_transform_R"}) can be extended to all of $\mathbb{R}$, and the exponential moments of $R^\varepsilon_s$ exist and satisfy $\mathbb{E}\left[\exp\left(\lambda R^\varepsilon_s\right)\right] = \phi(\lambda)$ for all $\lambda\in \mathbb{R}$. 
It is straightforward to verify using Lemma [\[expectedvalueofR\]](#expectedvalueofR){reference-type="ref" reference="expectedvalueofR"} that $\mathbb{E}\left[R_s^\varepsilon\right]$ satisfies $$\mathbb{E}\left[R_s^\varepsilon\right] = \tfrac{\alpha}{2}\left(\tfrac{2-\alpha}{\alpha}\right)^{\frac{\alpha}{2}} I^{\alpha-2}s\int_0^{\frac{2-\alpha}{\alpha}I^2} {y^{-\frac{\alpha}{2}}}dy,$$ therefore for any $s, \lambda\geq 0$, by Taylor's theorem $$\begin{aligned} \label{exp_bound} {\mathbb{E}}\left[\exp\left({\lambda R^\varepsilon_s - \lambda \mathbb{E}[R^\varepsilon_s]}\right)\right]\nonumber &= \exp\left( \tfrac{\alpha}{2}\left(\tfrac{2-\alpha}{\alpha}\right)^{\frac{\alpha}{2}} I^{\alpha-2}s\int_0^{\frac{2-\alpha}{\alpha}I^2} \frac{e^{\lambda v } - 1 -\lambda v}{v^{1+\frac{\alpha}{2}}} dv \right) \\ \nonumber &\leq \exp\left( \tfrac{\alpha}{4} \left(\tfrac{2-\alpha}{\alpha}\right)^{\frac{\alpha}{2}} I^{\alpha-2} e^{\frac{2-\alpha}{\alpha}I^2 \lambda}\lambda^2 s \int_0^{\frac{2-\alpha}{\alpha}I^2} v^{1-\frac{\alpha}{2}} dv\right) \\&= \exp\left(\tfrac{1}{2}\left(\tfrac{2-\alpha}{\alpha}\right)^2\tfrac{\alpha}{4-\alpha} e^{\frac{2-\alpha}{\alpha}I^2 \lambda} I^{2} \lambda^2 s\right). \end{aligned}$$ Choose $\lambda = I^{-2}$. 
Then ([\[exp_bound\]](#exp_bound){reference-type="ref" reference="exp_bound"}) becomes $$\begin{aligned} {\mathbb{E}}\left[\exp\left({ I^{-2} R^\varepsilon_s - I^{-2} \mathbb{E}[R^\varepsilon_s]}\right)\right]&\leq \exp\left(\tfrac{1}{2}\left(\tfrac{2-\alpha}{\alpha}\right)^2\tfrac{\alpha}{4-\alpha} e^{\frac{2-\alpha}{\alpha}} I^{-2} s\right)\\ &\leq \exp\left(\tfrac{1}{2}\left(\tfrac{2-\alpha}{\alpha}\right)^2\tfrac{\alpha}{4-\alpha} e^{\frac{2-\alpha}{\alpha}} \frac{\varepsilon^2|\log \varepsilon|}{I^2}\right).\end{aligned}$$ This, together with Lemma [\[expectedvalueofR\]](#expectedvalueofR){reference-type="ref" reference="expectedvalueofR"} and Markov's inequality gives us $$\begin{aligned} {\mathbb{P}}\left[R^\varepsilon_s \geq (k+1)I^2|\log\varepsilon| + s \right] & = {\mathbb{P}}\left[R^\varepsilon_s - \mathbb{E}[R^\varepsilon_s] \geq (k+1)I^2|\log\varepsilon|\right] \\ &= {\mathbb{P}}\left[\exp\left({ I^{-2} R^\varepsilon_s - I^{-2}\mathbb{E}[R^\varepsilon_s]}\right) \geq \varepsilon^{-k-1} \right] \\ & \leq \varepsilon^{k+1}\exp\left(\tfrac{1}{2}\left(\tfrac{2-\alpha}{\alpha}\right)^2\tfrac{\alpha}{4-\alpha} e^{\frac{2-\alpha}{\alpha}}\frac{\varepsilon^2|\log \varepsilon|}{I^2}\right).\end{aligned}$$ By Assumption [\[assumptions2\]](#assumptions2){reference-type="ref" reference="assumptions2"} [\[assumptions2_B\]](#assumptions2_B){reference-type="ref" reference="assumptions2_B"}, $\frac{\varepsilon^2|\log \varepsilon|}{I^2}\to 0$ as $\varepsilon\to 0,$ so the previous inequality implies that, for $\varepsilon$ sufficiently small, $${\mathbb{P}}\left[R^\varepsilon_s > (k+1)I^2|\log\varepsilon| + s \right] \leq \varepsilon^k,$$ which, by ([\[edsheeran\]](#edsheeran){reference-type="ref" reference="edsheeran"}) gives us the result. ◻ [\[newLemma:NegMomentsForR\]]{#newLemma:NegMomentsForR label="newLemma:NegMomentsForR"} Let $(R^\varepsilon_s)_{s\geq 0}$ be as above. 
Then, for any $q, s>0$, there exists $D_1 = D_1(q,\alpha)>0$ and $D_2 = D_2(q,\alpha)>0$ such that $$\mathbb{E}\left[(R^\varepsilon_{s})^{-q}\right] \leq e^s (D_1+ D_2s^{-\frac{2q}{\alpha}}).$$ *Proof.* Fix $s\geq 0$, $\varepsilon>0$ and $I:= I(\varepsilon)$. Let $\gamma$ be the lower incomplete gamma function. The Laplace transform ([\[laplace_transform_R\]](#laplace_transform_R){reference-type="ref" reference="laplace_transform_R"}) of $R^\varepsilon_s$ can be written as $$\begin{aligned} \phi(\lambda) &= \exp\left(- I^{-2}s\left(e^{- \frac{2-\alpha}{\alpha}\lambda I^2}-1\right) - \left({\tfrac{2-\alpha}{\alpha}}\right)^{\frac{\alpha}{2}}I^{\alpha-2}\lambda^{\frac{\alpha}{2}} s\gamma\left(1-\tfrac{\alpha}{2}, \tfrac{2-\alpha}{\alpha}\lambda I^2\right) \right)\\ &= \exp\left(I^{-2}s\int_0^{\frac{2-\alpha}{\alpha}\lambda I^2}e^{-z}dz - \left({\tfrac{2-\alpha}{\alpha}}\right)^{\frac{\alpha}{2}}I^{\alpha-2}\lambda^{\frac{\alpha}{2}} s\int_0^{\frac{2-\alpha}{\alpha}\lambda I^2}z^{-\frac{\alpha}{2}}e^{-z}dz \right)\\ &= \exp\left(\tfrac{2-\alpha}{\alpha}s\int_0^{\lambda }e^{-\frac{2-\alpha}{\alpha} I^2 z}\left(1- \lambda^{\frac{\alpha}{2}}z^{-\frac{\alpha}{2}}\right)dz \right).\end{aligned}$$ By [@schurger2002laplace Theorem 1.1], if $X$ is a non-negative random variable with Laplace transform $\rho$, then for any $q>0$, $\mathbb{E}\left[X^{-q}\right] = \frac{1}{q\Gamma(q)}\int_0^ \infty \rho\left(t^{\frac{1}{q}}\right) dt.$ Therefore, using the above expression for the Laplace transform of $R^\varepsilon_s,$ for any $q>0$ $$\begin{aligned} \mathbb{E}\left[(R^\varepsilon_s)^{-q}\right] &= \frac{1}{q\Gamma(q)}\int_0^ \infty \exp\left(\tfrac{2-\alpha}{\alpha}s\int_0^{\lambda^{\frac{1}{q}} }e^{-\frac{2-\alpha}{\alpha} I^2 z}\left(1- \lambda^{\frac{\alpha}{2q}}z^{-\frac{\alpha}{2}}\right)dz \right)d\lambda.\end{aligned}$$ When $\lambda \in [0,1],$ $$\begin{aligned} \int_0^{\lambda^{\frac{1}{q}} }e^{-\frac{2-\alpha}{\alpha} I^2 z}\left(1- 
\lambda^{\frac{\alpha}{2q}}z^{-\frac{\alpha}{2}}\right)dz \leq 1,\end{aligned}$$ and if $\lambda>1$, since the integrand is negative, $$\begin{aligned} & \int_0^{\lambda^{\frac{1}{q}} }e^{-\frac{2-\alpha}{\alpha} I^2 z}\left(1- \lambda^{\frac{\alpha}{2q}}z^{-\frac{\alpha}{2}}\right)dz \leq \int_0^{1 }\left(1- \lambda^{\frac{\alpha}{2q}}z^{-\frac{\alpha}{2}}\right) dz = 1-\tfrac{2}{2-\alpha}\lambda^{\frac{\alpha}{2q}}.\end{aligned}$$ Therefore $$\begin{aligned} \mathbb{E}\left[(R^\varepsilon_s)^{-q}\right] &\leq \frac{1}{q\Gamma(q)}e^{\frac{2-\alpha}{\alpha}s}\left(1+\int_1^ \infty \exp\left(- \tfrac{2}{\alpha}s \lambda^{\frac{\alpha}{2q}}\right)d\lambda\right).\end{aligned}$$ Let $\Gamma(s,x):= \int_x^\infty t^{s-1}e^{-t}dt$ be the upper incomplete gamma function. Then it is straightforward to show that the above is equivalent to $$\begin{aligned} \mathbb{E}\left[(R^\varepsilon_s)^{-q}\right] &\leq\frac{1}{q\Gamma(q)}e^{\frac{2-\alpha}{\alpha}s}\left(1+ \frac{2q}{\alpha}\left(\frac{\alpha}{2s}\right)^{\frac{2q}{\alpha}}\Gamma\left(\tfrac{2q}{\alpha}, \tfrac{2s}{\alpha}\right)\right)\\ &\leq \frac{1}{q\Gamma(q)}e^{\frac{2-\alpha}{\alpha}s}\left(1+ \frac{2q}{\alpha}\left(\frac{\alpha}{2s}\right)^{\frac{2q}{\alpha}}\Gamma\left(\tfrac{2q}{\alpha}\right)\right).\end{aligned}$$ Since $\tfrac{2-\alpha}{\alpha}\leq 1,$ the result follows. ◻ S. M. Allen and J. W. Cahn. A microscopic theory for antiphase boundary motion and its application to antiphase domain coarsening. *Acta Metall.*, 27(6):1085--1095, 1979. D. G. Aronson and H. F. Weinberger. Multidimensional nonlinear diffusion arising in population genetics. , 30(1):33--76, 1978. S. C. H. Barrett. The reproductive biology and genetics of island plants. , 351(1341):725--733, 1996. R. F. Bass. Regularity results for stable-like operators. , 257(8):2693--2722, 2009. N. H. Barton and G. M. Hewitt. Adaptation, speciation and hybrid zones. , 341(6242):497--503, 1989. D.
Brockmann and D. Helbing. The Hidden Geometry of Complex, Network-Driven Contagion Phenomena. , 342(6164):1337--1342, 2013. L. Bronsard and R. V. Kohn. On the slowness of phase boundary motion in one space dimension. , 43(8):983--997, 1990. L. Bronsard and R. V. Kohn. Motion by mean curvature as the singular limit of Ginzburg-Landau dynamics. , 90(2):211--237, 1991. M. Bramson and J. L. Lebowitz. Asymptotic behavior of densities for two-particle annihilating random walks. , 62(1-2):297--372, 1991. H. Berestycki, B. Larrouturou, and P.-L. Lions. Multi-dimensional travelling-wave solutions of a flame propagation model. , 111(1):33--49, 1990. H. Berestycki, B. Nicolaenko, and B. Scheurer. Traveling wave solutions to combustion models and their singular limits. , 16(6):1207--1242, 1985. J. Buschbom. Migration between continents: Geographical structure and long-distance gene flow in Porpidia flavicunda (lichen-forming Ascomycota). , 16(9):1835--1846, 2007. S. Carlquist. The Biota of Long-Distance Dispersal. V. Plant Dispersal to Pacific Islands. , 94(3):129--162, 1967. X. Chen. Generation and propagation of interfaces for reaction-diffusion equations. , 96(1):116--141, 1992. Z.-Q. Chen and T. Kumagai. Heat kernel estimates for stable-like processes on $d$-sets. , 108(1):27--62, 2003. J. S. Clark, M. Lewis, J. S. McLachlan, and J. HilleRisLambers. Estimating population spread: what can we forecast and how well? , 84(8):1979--1988, 2003. S. A. Cannas, D. E. Marco, and M. A. Montemurro. Long range dispersal and spatial pattern formation in biological invasions. , 203(2):155--170, 2006. M. L. Cain, B. G. Milligan, and A. E. Strand. Long-distance seed dispersal in plant populations. , 87(9):1217--1227, 2000. S. Cohen and J. Rosiński. Gaussian approximation of multivariate Lévy processes with applications to simulation of tempered stable processes. , 13(1):195--210, 2007. X. Cabré and J.-M. Roquejoffre. The influence of fractional diffusion in Fisher-KPP equations.
, 320(3):679--722, 2013. L. A. Caffarelli and P. E. Souganidis. Convergence of nonlocal threshold dynamics approximations to front propagation. , 195(1):1--23, 2010. X. Cabré and Y. Sire. Nonlinear equations for fractional Laplacians, I: Regularity, maximum principles, and Hamiltonian estimates. , 31(1):23--53, 2014. X. Cabré and Y. Sire. Nonlinear equations for fractional Laplacians II: Existence, uniqueness, and qualitative properties of solutions. , 367(2):911--941, 2015. D. del-Castillo-Negrete. Truncation effects in superdiffusive front propagation with Lévy flights. , 79(3):031120--1--031120--10, 2009. P. de Mottoni and M. Schatzman. Évolution géométrique d'interfaces. , 309(7):453--458, 1989. P. de Mottoni and M. Schatzman. Development of interfaces in ${\mathbbm{R}}^N$. , 116(3-4):207--220, 1990. S. S. Dragomir. . Nova Science Publishers, Inc., Hauppauge, 2003. A. Etheridge, N. Freeman, and S. Penington. Branching Brownian motion, mean curvature flow and the motion of hybrid zones. , 22:no. 103, 40 pp., 2017. A. M. Etheridge, M. D. Gooding, and I. Letter. On the effects of a wide opening in the domain of the (stochastic) Allen-Cahn equation and the motion of hybrid zones. , 27:no. 161, 53 pp., 2022. L. C. Evans, H. M. Soner, and P. E. Souganidis. Phase transitions and generalized motion by mean curvature. , 45(9):1097--1123, 1992. R. A. Fisher. The wave of advance of advantageous genes. , 7(4):355--369, 1937. C. Gui and M. Zhao. Traveling wave solutions of Allen-Cahn equation with a fractional Laplacian. , 32(4):785--812, 2015. X. Huang and R. Durrett. Motion by mean curvature in interacting particle systems. , 181(1-3):489--532, 2021. L. R. Heaney. Dynamic disequilibrium: a long-term, large-scale perspective on the equilibrium model of island biogeography. , 9(1):59--74, 2000. O. Hallatschek and D. S. Fisher. Acceleration of evolutionary spread by long-range dispersal. , 111(46):E4911--E4919, 2014. M. Henkel and H. Hinrichsen. 
The non-equilibrium phase transition of the pair-contact process with diffusion. , 37(28):R117--R159, 2004. W. G. Hunt and R. K. Selander. Biochemical genetics of hybridisation in European house mice. , 31(1):11--33, 1973. C. Imbert and P. E. Souganidis. Phasefield theory for fractional diffusion-reaction equations and applications. , 2009. A. N. Kolmogorov, I. G. Petrovskii, and N. S. Piskunov. Étude de l'équation de la diffusion avec croissance de la quantité de matière et son application à un problème biologique. , 1:1--25, 1937. J. An, C. Henderson and L. Ryzhik. Voting models and semilinear parabolic equations. 2022. S. Lamichhaney, F. Han, M. T. Webster, L. Andersson, B. R. Grant, and P. R. Grant. Rapid hybrid speciation in Darwin's finches. , 359(6372):224--228, 2018. H. P. McKean, Jr. Nagumo's equation. , 4:209--223, 1970. H. P. McKean. Application of Brownian motion to the equation of Kolmogorov-Petrovskii-Piskunov. , 28(3):323--331, 1975. H. Movljankulov and A. Filatov. Ob odnom približennom metode postroenija rešenii integral'nyh uravnenii, Tr. , 12:11--18, 1972. D. E. Marco, M. A. Montemurro, and S. A. Cannas. Comparing short and long-distance dispersal: modelling and field case studies. , 34(4):671--682, 2011. A. Mellet, S. Mischler, and C. Mouhot. Fractional diffusion limit for collisional kinetic equations. , 199(2):493--525, 2011. A. Mellet, J. Roquejoffre, and Y. Sire. Existence and asymptotics of fronts in non local combustion models. , 12(1):1--11, 2014. R. Mancinelli, D. Vergni, and A. Vulpiani. Front propagation in reactive systems with anomalous diffusion. , 185(3-4):175--195, 2003. J. Nagumo, S. Arimoto, and S. Yoshizawa. An active pulse transmission line simulating nerve axon. , 50(10):2061--2070, 1962. Z. O'Dowd. Branching Brownian Motion and Partial Differential Equations. Master's thesis, Oxford University, 2019. B. S. Reatini. . PhD thesis, The University of North Carolina at Chapel Hill, 2021. K.-I. Sato. Lévy Processes and Infinitely Divisible Distributions.
Cambridge University Press, Cambridge, 1999. K. Schürger. Laplace transforms and suprema of stochastic processes. In *Advances in finance and stochastics*, pages 285--294. Springer, Berlin, 2002. A. V. Skorokhod. Branching diffusion processes. , 9(3):445--449, 1964. D. P. L. Toews, I. J. Lovette, D. E. Irwin, and A. Brelsford. Similar hybrid composition among different age and sex classes in the Myrtle--Audubon's warbler hybrid zone. , 135(4):1133--1145, 2018. X. Wang. Metastability and stability of patterns in a convolution model for phase transitions. , 183(2):434--461, 2002. P. Weigelt, W. D. Kissling, Y. Kisel, S. A. Fritz, D. N. Karger, M. Kessler, S. Lehtonen, J. Svenning, and H. Kreft. Global patterns and drivers of phylogenetic structure in island floras. , 5(1):1--13, 2015.
{ "id": "2309.13899", "title": "Branching stable processes and motion by mean curvature flow", "authors": "Kimberly Becker, Alison Etheridge, Ian Letter", "categories": "math.PR", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | We consider products of matrices of the form $A_n=O_n+\epsilon N_n$ where $O_n$ is a sequence of $d\times d$ orthogonal matrices and $N_n$ has independent standard normal entries and the $(N_n)$ are mutually independent. We study the Lyapunov exponents of the cocycle as a function of $\epsilon$, giving an exact expression for the $j$th Lyapunov exponent in terms of the Gram-Schmidt orthogonalization of $I+\epsilon N$. Further, we study the asymptotics of these exponents, showing that $\lambda_j=(d-2j)\epsilon^2/2+O(\epsilon^4|\log\epsilon|^4)$. author: - Sam Bednarski and Anthony Quas title: Lyapunov exponents of orthogonal-plus-normal cocycles --- # Introduction and Statement of Results Lyapunov exponents play a highly important role in dynamical systems, allowing quantification of chaos, the development of a theory of hyperbolic and non-uniformly hyperbolic dynamical systems and much more. We work in the framework of multiplicative ergodic theory, where one has a base dynamical system $\sigma\colon\Omega\to\Omega$ preserving an ergodic measure $\mathbb P$ and a measurable map $A\colon \Omega\to M_{d\times d}(\mathbb{R})$. One then takes the cocycle of partial products $A^{(n)}_\omega$ of the sequence of $d\times d$ matrices and one studies the limiting growth rate of the $j$th singular value of the products. In the case $d=1$ this is often straightforward to calculate: the Lyapunov exponent is just $\int\log|A(\omega)|\,d\mathbb P(\omega)$. In higher dimensions, Lyapunov exponents tend to be much harder to calculate, and it is rare to be able to give exact expressions. In this paper, we are able to establish exact expressions for Lyapunov exponents for cocycles of a particular form, namely where the matrices $A_\omega$ are of the form $O_\omega+\epsilon N_\omega$, where the $O_\omega$ are orthogonal matrices and the $N_\omega$ are mutually independent Gaussian matrices with independent standard normal entries. 
We further assume that the $(N_{\sigma^n\omega})$ are independent of the cocycle $O_\omega$. We then interpret the cocycle as an additive noise perturbation of a cocycle of orthogonal matrices. Our main results are the following: **Theorem 1**. *Let $\sigma\colon\Omega\to\Omega$ be an ergodic transformation preserving an invariant measure $\mathbb P$ and let $O\colon\Omega\to O(d,\mathbb{R})$ be a measurable map into the $d\times d$ orthogonal matrices. Suppose that $N\colon\Omega\to M_{d\times d}(\mathbb{R})$ is measurable, and that, conditioned on $(O_{\sigma^n\omega})_{n\in\mathbb Z}$ and $(N_{\sigma^n\omega})_{n\ne 0}$, $N_\omega$ has independent standard normal entries. Then for all $\epsilon\in\mathbb{R}$, the Lyapunov exponents of the cocycle $A_\omega=O_\omega+\epsilon N_\omega$ are given by $$\lambda_j=\mathbb{E}\log\|c_j^\perp(I+\epsilon N)\|,$$ where $c_j^\perp(A)$ is the $j$th column of the Gram-Schmidt orthogonalization of $A$.* The following theorem describes the asymptotic behaviour of the exponents as $\epsilon$ tends to 0. **Theorem 2**. *Let the matrix cocycle be as above. Then the Lyapunov exponents satisfy $$\lambda_j(\epsilon)=(d-2j)\tfrac {\epsilon^2}2+O(\epsilon^4|\log\epsilon|^4)$$ as $\epsilon\to 0$.* We make the following conjecture. Let $\sigma$ be an ergodic measure-preserving transformation of a space $(\Omega,\mathbb P)$.
If $B\colon\Omega\to M_{d\times d}(\mathbb{R})$ is the generator of a matrix cocycle with the property that $\|B_\omega\|\le 1$ almost surely, and $N_\omega$ is Gaussian with the independence properties above, then setting $\lambda_j'(\epsilon)$ to be the sequence of Lyapunov exponents of the cocycle $A^\epsilon_\omega=B_\omega+\epsilon N_\omega$, one has $$\lambda_j'(\epsilon)-\lambda_{j+1}'(\epsilon)\ge \lambda_j(\epsilon)-\lambda_{j+1}(\epsilon) \text{ for all $\epsilon>0$,}$$ where $\lambda_j(\epsilon)$ are the Lyapunov exponents for the cocycle described in Theorem [Theorem 1](#thm:expression){reference-type="ref" reference="thm:expression"}. That is, we conjecture that there are universal lower bounds on the gaps between consecutive Lyapunov exponents of Gaussian perturbed cocycles of matrices where the matrices in the unperturbed cocycle have norm at most 1; and that these lower bounds are obtained in the case where all of the matrices are the identity matrix. The results in this paper are closely related to results in Newman [@Newman], where he gave a result similar to Theorem [Theorem 1](#thm:expression){reference-type="ref" reference="thm:expression"} for some i.i.d. cocycles involving Gaussian matrices and SDE flows on the space of non-singular matrices. Newman also re-derives an important result of Dynkin [@Dynkin] that also has intermediate proofs due to LeJan [@LeJan]; Baxendale and Harris [@BaxendaleHarris]; and Norris, Rogers and Williams [@NorrisRogersWilliams]. Dynkin's result concerns the Lyapunov exponents of a simple stochastic flow on $GL_d(\mathbb{R})$, the group of invertible $d\times d$ matrices, and identifies explicit exact Lyapunov exponents for the flow. Although this cocycle is not the same as ours, it is in the same spirit. 
The Lyapunov exponents in that paper have a similar form to ours and are given by $\lambda_k=(d-2k+1)\sigma^2/2$. # Definitions and Preliminary lemmas If $N$ is a $d\times d$ matrix-valued random variable whose entries are independent standard normal random variables, we will say that $N$ is the *standard Gaussian matrix random variable*. We will need the following property of the normal distribution: **Lemma 3**. *Let $U$ be an orthogonal matrix and let $N$ be a standard Gaussian matrix random variable of the same dimensions. Then the matrices $N$, $UN$ and $NU$ are equal in distribution.* This follows from a more general fact about the multivariate normal distribution. **Proposition 4**. *Let $X\sim N(\boldsymbol{\mu},\boldsymbol{\Sigma})$ be a $d$-dimensional multivariate normal distribution with mean vector $\boldsymbol{\mu}$ and covariance matrix $\boldsymbol{\Sigma}$. Suppose $V$ is an $n\times d$ matrix of rank $n$. Then $VX \sim N(V\boldsymbol{\mu},V\boldsymbol{\Sigma}V^T)$.* *Proof.* Recall that $X\sim N(\boldsymbol{\mu},\boldsymbol{\Sigma})$ if and only if $X\sim AZ +\boldsymbol{\mu}$ where $AA^T = \boldsymbol{\Sigma}$ and $Z \sim N(\boldsymbol{0},I_d)$. If $X = AZ+\boldsymbol{\mu}$, this implies that $$\begin{aligned} VX &= VAZ + V\boldsymbol{\mu} \\ &\sim N(V\boldsymbol{\mu},VA(VA)^T) \text{ by the fact above}\\ &\sim N(V\boldsymbol{\mu},VAA^TV^T) \\ &\sim N(V\boldsymbol{\mu},V\boldsymbol{\Sigma}V^T) \end{aligned}$$ ◻ **Lemma 5**. *Let $N$ be a standard normal random variable. Then for any $a$ and for $b\ne 0$, $\mathbb{E}\log^-|a+bN|\le \mathbb{E}\log^-|bN|<\infty$.* *Proof.* We have $$\begin{aligned} \mathbb{E}\log^-|a+bN|&=\int_0^\infty \mathbb P(\log^-|a+bN|>t)\,dt\\ &=\int_0^\infty\mathbb P\big(N\in[-\tfrac ab-e^{-t}/|b|,-\tfrac ab+e^{-t}/|b|]\big)\,dt\\ &\le\int_0^\infty \mathbb P\big(N\in [-e^{-t}/|b|,e^{-t}/|b|]\big)\,dt\\ &=\mathbb{E}\log^-|bN|.
\end{aligned}$$ where the inequality uses the symmetry and unimodality of the standard normal density. Since $\log^-|bN|\le\log^-|b|+\log^-|N|$, and $\mathbb{E}\log^-|N|=\frac{2}{\sqrt{2\pi}}\int_0^1 |\log x| e^{-x^2/2}\,dx$, it is easy to see $\mathbb{E}\log^-|bN|<\infty$. ◻ For a matrix $B$, let $c_j(B)$ denote the $j$th column of $B$, let $\theta_j(B)=\text{dist}\big(c_j(B),\operatorname{lin}(\{c_i(B)\colon i\ne j\})\big)$, and let $\Theta(B)=\min_j\theta_j(B)$. **Lemma 6**. *Let $A$ be an arbitrary matrix and let $\epsilon>0$. Let $Z$ denote a standard $d\times d$ Gaussian matrix random variable and $N$ denote a standard normal random variable. Then $\mathbb{E}\log^-\Theta(A+\epsilon Z)\le d\,\mathbb{E}\log^-|\epsilon N|<\infty$.* *Further, if $S$ is any event, $\mathbb{E}\big(\log^-\Theta(A+\epsilon Z)\,\mathbf 1_S\big) \le d\,\mathbb P(S)(1+\log^-(\epsilon\mathbb P(S)))$.* *Proof.* Fix $1\le j\le d$ and let $\mathcal F$ denote the $\sigma$-algebra generated by the columns of $Z$ except for the $j$th. Then $\mathbb{E}\log^-\theta_j(A+\epsilon Z)= \mathbb{E}\big(\mathbb{E}(\log^-\theta_j(A+\epsilon Z)|\mathcal F)\big)$. Let $\mathbf n$ be an $\mathcal F$-measurable unit normal to the subspace spanned by $(c_i(A+\epsilon Z))_{i\ne j}$ (this is almost surely unique up to a change of sign). Then $\theta_j(A+\epsilon Z)=|\langle \mathbf n,c_j(A+\epsilon Z)\rangle| =|\langle \mathbf n,c_j(A)\rangle+\epsilon\langle \mathbf n,c_j(Z)\rangle|$. Let $a=\langle \mathbf n,c_j(A)\rangle$ (an $\mathcal F$-measurable random variable) and note that since $c_j(Z)$ is independent of the unit vector $\mathbf n$, conditioned on $\mathcal F$, by Proposition [Proposition 4](#prop:MVN){reference-type="ref" reference="prop:MVN"}, $\langle \mathbf n,c_j(Z)\rangle$ is distributed as a standard normal random variable. 
Hence we have $\mathbb{E}\log^-\theta_j(A+\epsilon Z)=\mathbb{E}\big(\mathbb{E}(\log^-\theta_j(A+\epsilon Z)\big|\mathcal F)\big) =\mathbb{E}\big(\mathbb{E}(\log^-|a+\epsilon N|\big|\mathcal F)\big)\le\mathbb{E}\big(\mathbb{E}(\log^-|\epsilon N|\big|\mathcal F)\big) =\mathbb{E}\log^-|\epsilon N|$, which is finite by the lemma above. By definition, $\Theta(A+\epsilon Z)=\min_j\theta_j(A+\epsilon Z)$ so that $\log^-\Theta(A+\epsilon Z)=\max_j\log^-\theta_j(A+\epsilon Z)\le\sum_j\log^-\theta_j(A+\epsilon Z)$. By Lemma [Lemma 5](#lem:logNest){reference-type="ref" reference="lem:logNest"}, we see $\mathbb{E}\log^-\Theta(A+\epsilon Z)\le d\,\mathbb{E}\log^-|\epsilon N|<\infty$ as required. Now if $S$ is any set, we have $$\begin{aligned} \mathbb{E}\big(\log^-\theta_j(A+\epsilon Z)\,\mathbf 1_S\big)&= \int_0^\infty \mathbb P(S\cap\{\log^-\theta_j(A+\epsilon Z)>t\})\,dt\\ &\le \int_0^\infty \min\big(\mathbb P(S),\mathbb P(\theta_j(A+\epsilon Z)<e^{-t})\big)\,dt\\ &\le \int_0^\infty \min\big(\mathbb P(S),\mathbb P(a+\epsilon N\in [-e^{-t},e^{-t}])\big)\,dt\\ &\le \int_0^\infty \min\big(\mathbb P(S),e^{-t}/\epsilon\big)\,dt, \end{aligned}$$ where in the third line, as above, $a$ is a random variable that is independent of $N$. For the fourth line, we used the fact that the density of a standard normal is bounded above by $(2\pi)^{-1/2}<\frac 12$. Separating the integration region into $[0,\log^-(\epsilon \mathbb P(S))]$ and $[\log^-(\epsilon\mathbb P(S)),\infty)$, we obtain $\mathbb{E}\big(\log^-\theta_j(A+\epsilon Z)\,\mathbf 1_S\big)\le \mathbb P(S)\log^-(\epsilon\mathbb P(S))+\mathbb P(S)$. Since $\log^-\Theta(B)\le\sum_{j=1}^d\log^-\theta_j(B)$, the result follows. ◻ For any vector $y$ in $\mathbb{R}^d$, $y$ has at least one coefficient of magnitude at least $\|y\|/\sqrt d$, say the $j$th, so $\|By\|\ge \|y_jc_j(B)+\sum_{i\ne j}y_ic_i(B)\|\ge |y_j|\theta_j(B)\ge (1/\sqrt d)\Theta(B)\|y\|$. If $B$ is invertible then $\Theta(B)$ is non-zero and substituting $y=B^{-1}x$ gives $\|B^{-1}\|\le \sqrt d/\Theta(B)$. 
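The inequality $\|B^{-1}\|\le\sqrt d/\Theta(B)$ is easy to test numerically. In the sketch below (assuming Python with `numpy`; the helper functions `theta` and `Theta` are our own names for the quantities $\theta_j$ and $\Theta$), the distance $\theta_j(B)$ is computed as a least-squares residual:

```python
import numpy as np

rng = np.random.default_rng(1)

def theta(B, j):
    # distance from column j of B to the linear span of the remaining
    # columns, computed as a least-squares residual norm
    cols = [i for i in range(B.shape[1]) if i != j]
    M, c = B[:, cols], B[:, j]
    coef, *_ = np.linalg.lstsq(M, c, rcond=None)
    return np.linalg.norm(c - M @ coef)

def Theta(B):
    return min(theta(B, j) for j in range(B.shape[1]))

d = 5
for _ in range(100):
    B = rng.standard_normal((d, d))
    # check ||B^{-1}|| <= sqrt(d) / Theta(B)  (operator norm)
    op_norm_inv = np.linalg.norm(np.linalg.inv(B), 2)
    assert op_norm_inv <= np.sqrt(d) / Theta(B) * (1 + 1e-9)
```

The small multiplicative slack `1e-9` only guards against floating-point rounding; the inequality itself is exact.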
**Corollary 7**. *Let $(A_n)$ denote an i.i.d. sequence of $d\times d$ random matrices where $A_n=I+\epsilon N_n$, and where $N_n$ is a $d\times d$ standard Gaussian matrix random variable. Then $(A_n)$ satisfies the following:* 1. *$A_n\in GL_d(\mathbb{R})$ a.s.;* 2. *the distribution of $A_n$ is fully supported in $GL_d(\mathbb{R})$: for any non-empty open set $U\subset GL_d(\mathbb{R})$, $\mathbb P(A_n\in U)>0$;* 3. *$\log\|A_n\|\in L^1(\Omega)$.* This corollary establishes that the sequence $(A_n)$ satisfies the conditions of the Gol'dsheid-Margulis theorem [@GoldsheidMargulis Theorem 5.4], which ensures that the Lyapunov exponents of the cocycle $A^{(n)}=(I+\epsilon N_n)\cdots (I+\epsilon N_1)$ are all distinct. *Proof.* The distribution of the matrices $A_i$ is mutually absolutely continuous with respect to Lebesgue measure. Since the zero locus of the polynomial $A\mapsto\det(A)$ has Lebesgue measure zero, the first and second conclusions are established. To show $\log\|A_i\|$ is integrable, we separately show that $\log^+\|A_i\|$ and $\log^-\|A_i\|$ are integrable. First, $$\mathbb{E}\log^+\left\lVert A_1\right\rVert \leq \mathbb{E}\left\lVert A_1\right\rVert \leq \mathbb{E}\sum_{1\leq i,j\leq d}|A_{ij}|,$$ where each $A_{ij}$ is an integrable normal random variable. Since $\left\lVert A_1\right\rVert\left\lVert A_1^{-1}\right\rVert\ge 1$, we have $\log^-\left\lVert A_1\right\rVert\le\log^+\left\lVert A_1^{-1}\right\rVert$. The fact that $\mathbb{E}\log^+\left\lVert A_1^{-1}\right\rVert<\infty$ follows from Lemma [Lemma 6](#lem:Thetaest){reference-type="ref" reference="lem:Thetaest"} and the observation that $\|A_1^{-1}\|\le \sqrt d/\Theta(A_1)$ made above. ◻ We make extensive use of the singular value decomposition in what follows. More information on this topic may be found in Horn and Johnson [@horn_johnson_1991] and Bhatia [@Bhatia]. For a $d\times d$ matrix $A$, a singular value decomposition is a triple $(L,D,R)$ where $L$ and $R$ are orthogonal matrices and $D$ is a diagonal matrix with non-negative entries such that $A=LDR$. 
We impose without loss of generality the requirement that the diagonal entries of $D$ are non-increasing. The matrix $D$ is uniquely determined by $A$ while there is some freedom in the choice of $L$ and $R$. The *singular values* of $A$ are denoted $s_i(A)$, where $s_i(A)$ is the $i$th entry of the diagonal of $D$. It is known (see for example Raghunathan [@Raghunathan Lemma 1]) that there exist measurable functions $L$, $D$ and $R$ mapping $M_d(\mathbb{R})$ to $O(d,\mathbb{R})$, $M_\text{diag}(d,\mathbb{R})$ and $O(d,\mathbb{R})$ respectively such that $A=L(A)D(A)R(A)$. It is well known that $|s_i(A)-s_i(B)|\le \|A-B\|$ where $\|\cdot\|$ is the standard operator norm on matrices. We also make use of the products $S_i^j(A)=s_i(A)\cdots s_j(A)$. These have an interpretation in terms of exterior algebra. We write $\bigwedge^k\mathbb{R}^d$ for the $k$th exterior power of $\mathbb{R}^d$ and equip it with the standard inner product coming from the Cauchy-Binet formula and the corresponding norm. In particular if $v_1,\ldots,v_d$ is an orthonormal basis for $\mathbb{R}^d$, then $\{v_{i_1}\wedge\cdots\wedge v_{i_k}\colon i_1<i_2<\ldots<i_k\}$ is an orthonormal basis for $\bigwedge^k\mathbb{R}^d$. With respect to the corresponding operator norm, it is well known that $\big\|A^{\wedge k}\big\|=S_1^k(A)$. If $(A_n)$ is an independent identically distributed sequence of random variables taking values in $GL_d(\mathbb{R})$ such that $\mathbb{E}\log\|A_1\|^{\pm 1}<\infty$, it was shown by Oseledets [@Oseledets] and Raghunathan [@Raghunathan] that the limits $$\lim_{n\to\infty}\frac 1n\log s_j(A_n\ldots A_1)$$ exist for almost every realization of $(A_n)$ and are almost surely constant. The almost sure limit is denoted $\lambda_j$ and the $(\lambda_j)$ are the *Lyapunov exponents* of the cocycle. # Exact expressions for Lyapunov exponents For $1\leq k \leq d$, we define $e_k$ to be the $k$th coordinate vector, so that, as previously defined, $c_k(A) := A e_k$ is the $k$th column of $A$. 
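The identity $\big\|A^{\wedge k}\big\|=S_1^k(A)$ recalled above can be verified directly: in the orthonormal basis described there, the matrix of $A^{\wedge k}$ is the $k$th compound matrix of $A$, whose entries are the $k\times k$ minors of $A$. A small sketch, assuming Python with `numpy` (the function `compound` is our own illustration, not from the paper):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)

def compound(A, k):
    # k-th compound matrix: the matrix of A^(wedge k) in the standard
    # basis, with entries the k x k minors indexed by k-element
    # row/column subsets in lexicographic order
    n = A.shape[0]
    idx = list(combinations(range(n), k))
    return np.array([[np.linalg.det(A[np.ix_(r, c)]) for c in idx] for r in idx])

d = 5
A = rng.standard_normal((d, d))
s = np.linalg.svd(A, compute_uv=False)       # singular values, decreasing
for k in range(1, d + 1):
    lhs = np.linalg.norm(compound(A, k), 2)  # operator norm of A^(wedge k)
    rhs = np.prod(s[:k])                     # S_1^k(A) = s_1(A) ... s_k(A)
    assert abs(lhs - rhs) < 1e-8 * max(1.0, rhs)
```

The check works because the singular values of the compound matrix are the products of $k$-subsets of the singular values of $A$, the largest being $s_1(A)\cdots s_k(A)$.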
Let $c_k^\perp(A)$ denote the component of the $k$th column of $A$ which is orthogonal to the first $k-1$ columns. That is, suppressing the matrix $A$ for brevity, $c_1^\perp = c_1$, and $$c_k^\perp := c_k-\sum_{1\leq j<k}\frac{\big\langle c_j^\perp,c_k\big\rangle}{\big\langle c_j^\perp,c_j^\perp\big\rangle}c_j^\perp.$$ We now prove Theorem [Theorem 1](#thm:expression){reference-type="ref" reference="thm:expression"} which we restate here for convenience. **Theorem 1**. *Let $(U_n)_{n\in\mathbb Z}$ be a sequence of $d\times d$ orthogonal matrices and let $(N_n)_{n\in\mathbb Z}$ be a sequence of independent $d\times d$ matrices, each with independent standard normal coefficients. Let $\epsilon>0$ and let $A^\epsilon_j=U_j+\epsilon N_j$. Then for $1\le k\le d$, the $k$th Lyapunov exponent of the cocycle $(A^\epsilon_{\sigma^{n-1}\omega}\cdots A^\epsilon_\omega)$ is given by $$\lambda_k=\mathbb{E}(\log\big\|c_k^\perp(I+\epsilon N)\big\|).$$* Fix $\epsilon>0$ and set $A_i:= U_i + \epsilon N_i$ for each $i$. To find the Lyapunov exponents of this sequence we work with the products $A^{(n)}=A_n\cdots A_1$. We now define $\Sigma_n=D(A^{(n)})$ and study the evolution of $\Sigma_n$. More precisely, we are interested in the stochastic process $(\Sigma_n)_{n\ge 0}$. To write $\Sigma_{n+1}$ in terms of $\Sigma_n$, we have $\Sigma_{n+1}=D\big(A_{n+1}L(A^{(n)})\Sigma_n R(A^{(n)})\big)$. The following lemma shows that this process $(\Sigma_n)$ is Markov and that the process has the same distribution as the simpler process $\Sigma'_{n+1}=D((I+\epsilon N_{n+1})\Sigma'_n)$. **Lemma 8**. *($(\Sigma_n)$ is a Markov process) Let the sequence of matrices $(A_i)$ be given by $U_i+\epsilon N_i$ as above and let $\Sigma_n=D(A^{(n)})$. 
Then $(\Sigma_n)$ is a Markov process: For any measurable set $F$ of diagonal matrices, $$\begin{aligned} \mathbb{P}(\Sigma_{n+1}\in F|\Sigma_n,\ldots,\Sigma_1)&= \mathbb{P}(\Sigma_{n+1}\in F|\Sigma_n)\\ &=\mathbb{P}(D((I+\epsilon N)\Sigma_n)\in F|\Sigma_n).\end{aligned}$$* *That is, the Markov process $(\Sigma_n)$ has the same distribution as the Markov process $(\Sigma'_n)$ where $\Sigma'_0=I$ and $\Sigma'_{n+1}=D(A'_{n+1}\Sigma'_n)$, where $(A'_n)$ is an independent sequence of matrices, each distributed as $I+\epsilon N$.* *Proof.* Let $\mathcal{F}_n$ denote the smallest $\sigma$-algebra with respect to which $N_1,\ldots,N_n$ are measurable. Let $\mathcal G_n$ be the smallest $\sigma$-algebra with respect to which $\Sigma_1,\ldots,\Sigma_n$ are measurable (so that $\mathcal G_n$ is a sub-$\sigma$-algebra of $\mathcal{F}_n$). As usual, we write $A^{(n)}$ for the product $A_n\ldots A_1$. Let $L_n=L(A^{(n)})$, $\Sigma_n=D(A^{(n)})$, $R_n=R(A^{(n)})$. Let $F$ be a measurable subset of the range of $D$. We compute $$\begin{aligned} \mathbb{P}(\Sigma_{n+1}\in F|\mathcal F_n)&=\mathbb{P}\Big(D(A_{n+1}L_n\Sigma_nR_n)\in F|\mathcal F_n\Big)\\ &=\mathbb{P}\Big(D(A_{n+1}L_n\Sigma_n)\in F|\mathcal F_n\Big)\\ &=\mathbb{P}\Big(D\big((U_{n+1}+\epsilon N_{n+1})L_n\Sigma_n\big)\in F|\mathcal F_n\Big)\\ &=\mathbb{P}\Big(D\big(U_{n+1}L_n(I+\epsilon L_n^{-1}U_{n+1}^{-1}N_{n+1}L_n)\Sigma_n\big)\in F|\mathcal F_n\Big)\\ &=\mathbb{P}\Big(D\big((I+\epsilon L_n^{-1}U_{n+1}^{-1}N_{n+1}L_n)\Sigma_n\big)\in F|\mathcal F_n\Big)\\ &=\mathbb{P}\Big(D\big((I+\epsilon N_{n+1})\Sigma_n\big)\in F|\mathcal F_n\Big),\end{aligned}$$ where the second and fifth lines follow from the fact that $D(A)=D(AU)=D(UA)$ for any matrix $A$ and any orthogonal matrix $U$. 
The sixth line uses the fact that $N_{n+1}$ is independent of $\mathcal F_n$ and Lemma [Lemma 3](#lem:samedist){reference-type="ref" reference="lem:samedist"} so that conditioned on $\mathcal F_n$, $L_n^{-1}U_{n+1}^{-1}N_{n+1}L_n$ has the same distribution as $N_{n+1}$. Since $N_{n+1}$ is independent of $\mathcal F_n$, this is equal to $\mathbb{P}\Big(D\big((I+\epsilon N_{n+1})\Sigma_n\big)\in F|\Sigma_n\Big )$. We have established that $$\mathbb{P}(\Sigma_{n+1}\in F|\mathcal F_n)=\mathbb{P}\Big(D\big((I+\epsilon N_{n+1})\Sigma_n\big)\in F|\Sigma_n\Big ).$$ Taking conditional expectations of both sides with respect to $\mathcal G_n$, we deduce $$\mathbb{P}(\Sigma_{n+1}\in F|\Sigma_n,\ldots,\Sigma_1)= \mathbb{P}(\Sigma_{n+1}\in F|\Sigma_n).$$ ◻ *Proof of Theorem [Theorem 1](#thm:expression){reference-type="ref" reference="thm:expression"}.* Fix $1\leq k \leq d$. We use the stochastic process described in Lemma [Lemma 8](#lem:orthoreduce){reference-type="ref" reference="lem:orthoreduce"}: let $A_n=I+\epsilon N_n$, $\Sigma_0=I$, $\Sigma_{n}=D(A_n\Sigma_{n-1}) =\text{diag}(s_1(A_n\Sigma_{n-1}),\ldots,s_d(A_n\Sigma_{n-1}))$. As before, we write $A^{(n)}=A_n\ldots A_1$. We note that $\Sigma_n$ is *not* equal to $D(A^{(n)})$, but using Lemma [Lemma 8](#lem:orthoreduce){reference-type="ref" reference="lem:orthoreduce"} the two processes $(\Sigma_n)_{n\ge 0}$ and $\big(D(A^{(n)})\big)_{n\ge 0}$ have the same distribution. Let $B_n = A_n\Sigma_{n-1}\begin{pmatrix} e_1\; \dots \; e_k \;0 \;\dots \; 0 \end{pmatrix}$. 
Then for all $1\leq j \leq k$, $$\begin{aligned} \big|s_j(\Sigma_{n})-s_j(B_n)\big| &=\big|s_j(A_n\Sigma_{n-1})-s_j(B_n)\big|\\ &\leq \big\|A_n\Sigma_{n-1}-B_n\big\| \\ &= \left\lVert A_{n}\Sigma_{n-1}\begin{pmatrix} 0\; \dots\; 0\; e_{k+1} \;\dots \; e_d\end{pmatrix}\right\rVert \\ &= \left\lVert A_{n}\,\text{diag}(0,\ldots, 0,s_{k+1}(\Sigma_{n-1}),\ldots ,s_d(\Sigma_{n-1})) \right\rVert \\ &\leq s_{k+1}(\Sigma_{n-1})\left\lVert A_{n}\right\rVert.\end{aligned}$$ Then we have $$\begin{aligned} \left|\frac{s_j(B_n)}{s_j(\Sigma_n)}-1\right| &=\left|\frac{s_j(B_n)-s_j(\Sigma_n)}{s_j(\Sigma_n)}\right| \\ &\leq \frac{s_{k+1}(\Sigma_{n-1})}{s_j(\Sigma_n)}\left\lVert A_{n}\right\rVert.\end{aligned}$$ By Gol'dsheid and Margulis [@GoldsheidMargulis Theorem 5.4], $\frac{1}{n}\log s_j(A^{(n)})\to \lambda_j$ and $\frac{1}{n}\log s_{k+1}(A^{(n)})\to \lambda_{k+1}$ almost surely for some $\lambda_j>\lambda_{k+1}$. Since the processes $\big(D(A^{(n)})\big)$ and $(\Sigma_n)$ have a common distribution, it follows that $\frac{1}{n}\log s_j(\Sigma_n)\to \lambda_j$ and $\frac{1}{n}\log s_{k+1}(\Sigma_n)\to \lambda_{k+1}$ almost surely. So $$\frac{1}{n}\log\left(\frac{s_{k+1}(\Sigma_{n-1})}{s_j(\Sigma_n)}\right)\to \lambda_{k+1}-\lambda_j<0$$ almost surely as $n\to \infty$. If this occurs, there is some $N\in \mathbb{N}$ such that $s_{k+1}(\Sigma_{n-1})/s_j(\Sigma_n)<e^{n(\lambda_{k+1}-\lambda_j)/2}$ for all $n\geq N$. A well-known consequence of the Strong Law of Large Numbers ensures that $C(\omega):=\sup_n \left\lVert A_n\right\rVert/n$ is finite a.s., so that $\left\lVert A_n\right\rVert\leq C(\omega)n$ for all $n$. For $n\geq N$ we then have $$\left|\frac{s_{k+1}(\Sigma_{n-1})}{s_j(\Sigma_n)}\right|\left\lVert A_{n}\right\rVert \leq C(\omega)n e^{n(\lambda_{k+1}-\lambda_j)/2}\to 0$$ as $n\to\infty$. Hence $$\label{eq:asymp} \frac{s_j(B_n)}{s_j(\Sigma_n)}\to 1 \text{ as $n\to \infty$}.$$ For a matrix $A$, let $s_1^k (A)=s_1(A)\cdots s_k(A)$. 
Since $B_n$ has $k$ non-zero columns, ${B_n}^{\wedge k}$ has rank one and we have $$\begin{aligned} s_1^k(B_n) &= \left\lVert B_ne_1 \wedge B_ne_2 \wedge\dots \wedge B_ne_k \right\rVert \\ &= \left\lVert(A_n\Sigma_{n-1})e_1 \wedge (A_n\Sigma_{n-1})e_2 \wedge\dots \wedge (A_n\Sigma_{n-1})e_k \right\rVert \\ &= \left\lVert(A_{n}e_1)s_1(\Sigma_{n-1}) \wedge (A_{n}e_2)s_2(\Sigma_{n-1}) \wedge\dots \wedge (A_{n}e_k)s_k(\Sigma_{n-1}) \right\rVert \\ &=s_1^k(\Sigma_{n-1}) \left\lVert c_1(A_n) \wedge c_2(A_n) \wedge\dots \wedge c_k(A_n)\right\rVert \\ &=s_1^k(\Sigma_{n-1}) \big\|c_1^\perp(A_n)\big\|\big\|c_2^\perp(A_n)\big\|\cdots \big\|c_k^\perp(A_n)\big\|,\end{aligned}$$ where $\wedge$ denotes the wedge product. For $n\in\mathbb{N}$ and $1\le k\le d$, let $X^k_{n}:=\big\|c_1^\perp(A_n)\big\| \big\|c_2^\perp(A_n)\big\|\cdots \big\|c_k^\perp(A_n)\big\|$. Then $X^k_1,X^k_2,\dots$ is a sequence of i.i.d. random variables. Since $\Theta(A)\le \|c_i^\perp(A)\|\le \|A\|$, we see, using Lemma [Lemma 6](#lem:Thetaest){reference-type="ref" reference="lem:Thetaest"} and Corollary [Corollary 7](#cor:Margulis-suff){reference-type="ref" reference="cor:Margulis-suff"} that $\log\|c_i^\perp(A)\|$ is integrable. We have $$\begin{aligned} s_1^k(\Sigma_n)&=\frac{s_1^k(\Sigma_n)}{s_1^k(B_n)}s_1^k(B_n)\\ &=\frac{s_1^k(\Sigma_n)}{s_1^k(B_n)}X_n^ks_1^k(\Sigma_{n-1}).\end{aligned}$$ Using induction, we obtain $$s_1^k(\Sigma_n)=\left(\prod_{j=1}^n\frac{s_1^k(\Sigma_j)}{s_1^k(B_j)}\right)X^k_1\ldots X^k_n.$$ Hence $$\begin{aligned} \tfrac 1n\log s_1^k(\Sigma_n)&= \frac 1n\sum_{j=1}^n\log\frac{s_1^k(\Sigma_j)}{s_1^k(B_j)}+\frac 1n\sum_{j=1}^n \log X^k_j.\end{aligned}$$ By [\[eq:asymp\]](#eq:asymp){reference-type="eqref" reference="eq:asymp"}, the first term on the right side converges almost surely to 0 and by the Strong Law of Large Numbers the second term converges almost surely to $\mathbb{E}\log X^k_1$. 
Hence we obtain $$\lambda_1+\ldots+\lambda_k=\mathbb{E}\big(\log \|c_1^\perp(I+\epsilon N)\|+\ldots+ \log\|c_k^\perp(I+\epsilon N)\|\big).$$ Subtracting the $(k-1)$-fold partial sum from the $k$-fold partial sum, we obtain $$\lambda_k=\mathbb{E}\log\|c_k^\perp(I+\epsilon N)\|,$$ as required. ◻ This gives us an explicit description of $\lambda_k$. However, it is difficult to compute for large matrices. In the next section we find an approximation for $\lambda_k$ which is easier to compute. # An approximation for $\lambda_j$ In this section we focus on the case where $A\sim I_d+\epsilon N$ and introduce the computationally simpler vectors $c_j'(A)$ approximating $c_j^\perp(A)$, defined by $c_1'(A)=c_1(A)$ and $$c_k'(A)=c_k(A)-\sum_{1\leq j<k}\left\langle c_j(A),c_k(A)\right\rangle c_j(A).$$ With the same setup as in the previous section, when $|\epsilon\log\epsilon|<(100d)^{-1}$, we have the following. **Theorem 9**. *For any $d\in\mathbb{N}$, if $A_1\sim I_d+\epsilon N$ and $1\leq k\leq d$ then $\mathbb{E}\log\big\|{c_k^\perp}\big\|=\mathbb{E}\log\big\|{c_k'}\big\|+O(\epsilon^{4}|\log\epsilon|^4)$.* We will say that $A=I+\epsilon N$ is *bad* if $|N_{ij}|>|\log\epsilon|$ for some $i,j$. Let $\mathsf{bad}$ denote the event that $A$ is bad. We first control the contribution to $\mathbb{E}\log\left\lVert c_k^\perp\right\rVert-\mathbb{E}\log\left\lVert c_k'\right\rVert$ coming from the bad set. **Lemma 10**. *Let $\epsilon>0$. Then $$\begin{aligned} \mathbb{E}\big(\mathbbm 1_{\mathsf{bad}}\big|\log\|c_j^\perp(I+\epsilon N)\|\big|\big)&=O(|\log\epsilon|e^{-(\log\epsilon)^2/2});\text{ and}\\ \mathbb{E}\big(\mathbbm 1_{\mathsf{bad}}\big|\log\|c_j'(I+\epsilon N)\|\big|\big)&=O(|\log\epsilon|e^{-(\log\epsilon)^2/2}). \end{aligned}$$* *Proof.* We write $c_j^\perp$ and $c_j'$ for $c_j^\perp(I+\epsilon N)$ and $c_j'(I+\epsilon N)$ respectively. We control the positive parts $\log^+\|c_j'\|$ and $\log^+\|c_j^\perp\|$, and the negative parts $\log^-\|c_j'\|$ and $\log^-\|c_j^\perp\|$. 
For the positive parts, notice that $\|c_j^\perp\|\le \|c_j\|\le \sum_{k,l}|a_{kl}|$ and $\|c_j'\|\le \Big(1+\sum_{k,l}|a_{kl}|\Big)^3$. The set $\mathsf{bad}$ is a union of $d^2$ parts of the form $\mathsf{bad}_{ij}=\{N\colon |N_{ij}|>|\log\epsilon|\}$. Using the bound $\log^+(x)\le x$, this gives $$\begin{aligned} \mathbb{E}\big(\mathbbm 1_{\mathsf{bad}}\log^+\|c_j^\perp\|\big)&\le \sum_{i,j}\int_{\mathsf{bad}_{i,j}}\Big(d+\epsilon\sum_{k,l}|x_{kl}|\Big)f_X\big((x_{kl})\big)d(x_{kl})\\ &\le d^2\int_{\mathsf{bad}_{1,1}}\left(d+\epsilon d^2|x_{11}|\right)f_{N}(x_{11})dx_{11}\\ &=O(\exp(-(\log\epsilon)^2/2)). \end{aligned}$$ A similar argument holds for $\mathbb{E}\big(\mathbbm 1_{\mathsf{bad}}\log^+\|c_j'\|\big)$. To control $\mathbb{E}\big(\mathbbm 1_{\mathsf{bad}}\log^-\|c_j^\perp\|\big)$ and $\mathbb{E}\big(\mathbbm 1_{\mathsf{bad}}\log^-\|c_j'\|\big)$, recall $\|c_j^\perp\|$ and $\|c_j'\|$ are bounded below by $\Theta(A)$. By standard estimates on the tail of the normal distribution, $\mathbb P(\mathsf{bad})=O(e^{-(\log\epsilon)^2/2}/|\log\epsilon|)$. We see from Lemma [Lemma 6](#lem:Thetaest){reference-type="ref" reference="lem:Thetaest"} that $\mathbb{E}\big(\log^-\Theta(I+\epsilon N)\mathbf 1_{\mathsf{bad}}\big)= O(|\log\epsilon|e^{-(\log\epsilon)^2/2})$, which gives the required estimates. ◻ We now give pointwise estimates for $\big|\log\|{c_k^\perp}\|-\log\left\lVert c_k'\right\rVert\big|$ when $A$ is not bad. That is, when $A=I+\epsilon N$ where $|N_{ij}|\leq |\log\epsilon|$ for all $i,j$. **Lemma 11**. *There exist $\epsilon_0>0$ and $C>0$ depending only on $d$ such that for all $\epsilon<\epsilon_0$, all matrices $A$ of the form $A=I+\epsilon X$ with $|X_{ij}|\le |\log\epsilon|$ for each $i,j$, and each $k$, $$\Big|\log\big\|c_k^\perp(A)\big\|-\log\big\|c_k'(A)\big\|\Big|\le C(\epsilon|\log\epsilon|)^4.$$* As usual, we write $c_j$, $c_j^\perp$ and $c_j'$ in place of $c_j(A)$, $c_j^\perp(A)$ and $c_j'(A)$ for brevity. 
We define $\alpha_i^j :=\frac{\left\langle c_i^\perp,c_j\right\rangle}{\left\lVert c_i^\perp\right\rVert^2}$ so that $c_j^\perp = c_j-\sum_{i<j}\alpha_i^j c_i^\perp$. Throughout the proof, we let $\eta=|\log\epsilon|$. We let $\epsilon_0$ be sufficiently small that $\epsilon\eta<1/(100d)$ for $\epsilon<\epsilon_0$. The proof makes use of a number of claims. **Claim 1**. *Let $A=I+\epsilon X$ where $|X_{ij}|\le\eta$ for all $i,j$.[\[claim:one\]]{#claim:one label="claim:one"} For all $1\leq n\leq d$, the following hold:* (i) *$|\left\lVert c_n\right\rVert^2-1|\leq 2\epsilon\eta+d\eta^2\epsilon^2\leq 3\epsilon\eta$;[\[it:claim1(i)\]]{#it:claim1(i) label="it:claim1(i)"}* (ii) *$|\left\lVert c_n^\perp\right\rVert^2-1|\leq 3\epsilon \eta$;[\[it:claim1(ii)\]]{#it:claim1(ii) label="it:claim1(ii)"}* (iii) *$|\alpha_i^k|\leq 6\epsilon\eta$ for all $i\le n$ and $k>i$;[\[it:claim1(iii)\]]{#it:claim1(iii) label="it:claim1(iii)"}* (iv) *$|\left\langle c_n^\perp,c_k\right\rangle|\leq 3\epsilon \eta$ for all $k>n$.[\[it:claim1(iv)\]]{#it:claim1(iv) label="it:claim1(iv)"}* *Proof.* Since $|X_{ij}|\le\eta$ for all $i,j$, for any $1\leq n\leq d$ and $i<j$ we have $$\begin{aligned} |\left\lVert c_n\right\rVert^2-1|&\leq 2\epsilon\eta+d\epsilon^2\eta^2\quad \text{and}\\ |\left\langle c_i,c_j\right\rangle|&\leq 2\epsilon \eta+d\epsilon^2\eta^2. \end{aligned}$$ This shows [\[it:claim1(i)\]](#it:claim1(i)){reference-type="eqref" reference="it:claim1(i)"} for all $n$, as well as [\[it:claim1(ii)\]](#it:claim1(ii)){reference-type="eqref" reference="it:claim1(ii)"}, [\[it:claim1(iii)\]](#it:claim1(iii)){reference-type="eqref" reference="it:claim1(iii)"} and [\[it:claim1(iv)\]](#it:claim1(iv)){reference-type="eqref" reference="it:claim1(iv)"} for $n=1$. Now suppose for some $2\le j\le d$, [\[it:claim1(ii)\]](#it:claim1(ii)){reference-type="eqref" reference="it:claim1(ii)"}--[\[it:claim1(iv)\]](#it:claim1(iv)){reference-type="eqref" reference="it:claim1(iv)"} each hold for all $n\leq j-1$. 
Then for all $k>j$ we have $$\begin{aligned} \big\langle c_j^\perp, c_k\big\rangle=\Big\langle c_j-\sum_{i<j}\alpha_i^j c_i^\perp, c_k\Big\rangle =\big\langle c_j, c_k\big\rangle-\sum_{i<j}\alpha_i^j\big\langle c_i^\perp, c_k\big\rangle. \end{aligned}$$ This implies $$\begin{aligned} \big|\big\langle c_j^\perp,c_k\big\rangle\big| &\leq |\big\langle c_j,c_k\big\rangle|+\sum_{i<j}|\alpha_i^j| \big|\big\langle c_i^\perp,c_k\big\rangle\big| \\ &\leq (2\epsilon \eta+d\epsilon^2\eta^2)+ d \cdot (6\epsilon \eta )(3\epsilon \eta)\\ &\leq 3\epsilon \eta, \end{aligned}$$ where we used [\[it:claim1(i)\]](#it:claim1(i)){reference-type="eqref" reference="it:claim1(i)"} and the induction hypotheses in the second line and the condition on $\epsilon_0$ in the third line. This establishes [\[it:claim1(iv)\]](#it:claim1(iv)){reference-type="eqref" reference="it:claim1(iv)"} for $n=j$. Since $c_1^\perp,\dots,c_j^\perp$ are mutually orthogonal, it follows that $$\begin{aligned} \big\|c_j\big\|^2 = \big\|c_j^\perp\big\|^2+\sum_{i<j}(\alpha_i^j)^2 \big\|c_i^\perp\big\|^2. \end{aligned}$$ Thus we have $$\begin{aligned} \Big|\big\|c_j^\perp\big\|^2-1\Big| &=\bigg|\big\|c_j\big\|^2-1-\sum_{i<j}(\alpha_i^j)^2 \big\|c_i^\perp\big\|^2 \bigg| \\ &\leq \Big|\big\|c_j\big\|^2-1\Big|+\sum_{i<j}(\alpha_i^j)^2 \big\|c_i^\perp\big\|^2 \\ &\leq(2\epsilon\eta+d\epsilon^2\eta^2)+ d(6\epsilon\eta)^2(1+3\epsilon\eta) \\ &\leq 3\epsilon\eta, \end{aligned}$$ establishing [\[it:claim1(ii)\]](#it:claim1(ii)){reference-type="eqref" reference="it:claim1(ii)"} for $n=j$. We show that [\[it:claim1(iii)\]](#it:claim1(iii)){reference-type="eqref" reference="it:claim1(iii)"} holds for $n=j$. Since by the induction hypothesis, $|\alpha_i^k|\le 6\epsilon\eta$ for all $i<j$ and $k>i$, it suffices to show that $|\alpha_j^k|\le 6\epsilon\eta$ for all $k>j$. 
For any $k>j$, using [\[it:claim1(iv)\]](#it:claim1(iv)){reference-type="eqref" reference="it:claim1(iv)"} and [\[it:claim1(ii)\]](#it:claim1(ii)){reference-type="eqref" reference="it:claim1(ii)"} (which gives $\big\|c_j^\perp\big\|^2\ge 1-3\epsilon\eta\ge\tfrac 12$), we have $$\begin{aligned} \big|\alpha_j^k\big|=\frac{|\big\langle c_j^\perp,c_k\big\rangle|}{\big\|c_j^\perp\big\|^2}\leq \frac{3\epsilon\eta}{1/2}=6\epsilon\eta, \end{aligned}$$ which shows that (iii) holds for $n=j$. ◻ **Claim 2**. *For each $1\leq n\leq d$, $c_n^\perp = c_n + \sum_{j<n} \beta_{j}^n c_j$ where $|\beta_j^n|<7\epsilon \eta$. [\[claim:two\]]{#claim:two label="claim:two"}* *Proof.* We use induction on $n$. The base case is $c_1^\perp=c_1$. Suppose $2\le j\le d$ and that the claim holds for all $n<j$. Then $$\begin{aligned} c_j^\perp &= c_j - \sum_{i<j}\alpha_i^j c_i^\perp\\ &=c_j - \sum_{i<j}\alpha_i^j \Big(c_i+\sum_{\ell<i}\beta_\ell^i c_\ell\Big)\\ &=c_j - \sum_{\ell<j}\alpha_\ell^j c_\ell-\sum_{i<j}\alpha_i^j\sum_{\ell<i}\beta_\ell^i c_\ell \\ &= c_j - \sum_{\ell<j}\alpha_\ell^j c_\ell -\sum_{\ell<j-1}\Big(\sum_{i=\ell+1}^{j-1}\alpha_{i}^j \beta_\ell^i\Big) c_\ell. \end{aligned}$$ For any $\ell<j$, the coefficient of $c_\ell$ in the above expression is bounded by $$|\alpha_\ell^j|+\sum_{i=\ell+1}^{j-1}|\alpha_i^j \beta_\ell^i| \leq 6\epsilon \eta + d(6\epsilon \eta)(7\epsilon\eta)\leq 7\epsilon\eta.$$ ◻ **Claim 3**. *For all $1\leq j\leq d$, $c_j' = c_j^\perp+\sum_{n<j}\gamma_n c_n$ where $\gamma_n=O(\epsilon^2\eta^2)$ and the implicit constant depends only on $d$.* *Proof.* For any such $j$ we have $$c_j'-c_j^\perp = \sum_{i<j} \bigg( \frac{\left\langle c_i^\perp,c_j\right\rangle}{\left\langle c_i^\perp,c_i^\perp\right\rangle}c_i^\perp-\left\langle c_i,c_j\right\rangle c_i\bigg).$$ We identify the coefficient of $c_\ell$ when $c_j'-c_j^\perp$ is expanded in the basis $(c_k)$. 
That coefficient may be seen to be $$\begin{aligned} &\frac{\left\langle c_\ell^\perp,c_j\right\rangle}{\left\langle c_\ell^\perp,c_\ell^\perp\right\rangle}-\left\langle c_\ell,c_j\right\rangle +\sum_{\ell<i< j}\frac{\left\langle c_i^\perp,c_j\right\rangle}{\left\langle c_i^\perp,c_i^\perp\right\rangle}\beta^i_\ell\\ =&\frac{\left\langle c_\ell^\perp,c_j\right\rangle-\left\langle c_\ell,c_j\right\rangle} {\left\langle c_\ell^\perp,c_\ell^\perp\right\rangle} +\frac{\left\langle c_\ell,c_j\right\rangle \big(1-\left\langle c_\ell^\perp,c_\ell^\perp\right\rangle\big)}{\left\langle c_\ell^\perp,c_\ell^\perp\right\rangle} +O(\epsilon^2\eta^2),\end{aligned}$$ where we added and subtracted $\left\langle c_\ell,c_j\right\rangle/\left\langle c_\ell^\perp,c_\ell^\perp\right\rangle$; and the estimate for the third term follows from Claims [\[claim:one\]](#claim:one){reference-type="ref" reference="claim:one"} and [\[claim:two\]](#claim:two){reference-type="ref" reference="claim:two"}. Since $\left\langle c_\ell^\perp,c_j\right\rangle-\left\langle c_\ell,c_j\right\rangle= -\left\langle\sum_{i<\ell}\beta^\ell_ic_i,c_j\right\rangle$, the estimates in Claims [\[claim:one\]](#claim:one){reference-type="ref" reference="claim:one"} and [\[claim:two\]](#claim:two){reference-type="ref" reference="claim:two"} show the first term is $O(\epsilon^2\eta^2)$. Finally since $\left\langle c_\ell,c_j\right\rangle=O(\epsilon\eta)$ and $1-\left\langle c_\ell^\perp,c_\ell^\perp\right\rangle$ is $O(\epsilon\eta)$ by Claim [\[claim:one\]](#claim:one){reference-type="ref" reference="claim:one"}, the middle term is also $O(\epsilon^2\eta^2)$. 
◻ *Proof of Lemma [Lemma 11](#lem:qdiff){reference-type="ref" reference="lem:qdiff"}.* By orthogonality, $$\begin{aligned} \big\|c_j'\big\|^2 = \bigg\|c_j^\perp+\sum_{n<j}\gamma_n c_n\bigg\|^2 = \big\|c_j^\perp\big\|^2+\bigg\|\sum_{n<j}\gamma_n c_n\bigg\|^2, \end{aligned}$$ where $\gamma_n$ is as in Claim [Claim 3](#claim:three){reference-type="ref" reference="claim:three"}. Since $\gamma_n=O(\epsilon^2\eta^2)$, we obtain $\big\|c_j'\big\|^2 = \big\|c_j^\perp\big\|^2+ O(\epsilon^4\eta^4)$. Since $\big\|c_j^\perp\big\|^2$ is in the range $(\frac 12,\frac 32)$, it follows that $\big|\log\big\|c_j'\big\|-\log\big\|c_j^\perp\big\|\big|=O(\epsilon^4\eta^4)$ as required. ◻ *Proof of Theorem [Theorem 9](#thm:lognormdiff){reference-type="ref" reference="thm:lognormdiff"}.* Lemma [Lemma 10](#lem:Enormdiffbad){reference-type="ref" reference="lem:Enormdiffbad"} shows that $$\begin{aligned} &\big|\mathbb{E}\big((\log\|c_k^\perp\|-\log\|c_k'\|)\mathbf 1_\mathsf{bad}\big)\big|\\ \le{} &\mathbb{E}\big(\big|\log\|{c_k^\perp}\|\big|\mathbf 1_\mathsf{bad}\big)+\mathbb{E}\big(\big|\log\|{c_k'}\|\big|\mathbf 1_\mathsf{bad}\big) \\ ={}&O(|\log\epsilon|e^{-(\log\epsilon)^2/2}), \end{aligned}$$ and Lemma [Lemma 11](#lem:qdiff){reference-type="ref" reference="lem:qdiff"} shows that $\big|\log\big\|{c_k^\perp}\big\|-\log\big\|{c_k'}\big\|\big|\mathbf 1_{\mathsf{bad}^c}=O(\epsilon^4|\log\epsilon|^4)$. Taking the expectation of this and combining the two estimates (note that $|\log\epsilon|e^{-(\log\epsilon)^2/2}=O(\epsilon^4|\log\epsilon|^4)$ as $\epsilon\to 0$) gives the theorem. ◻ # Computing $\mathbb{E}\log\|c_k'\|$ Finally, we find the dominant term in the asymptotic expansion for $\mathbb{E}\log\|c_j'\|$ in the same setup as the previous section. This is Theorem [Theorem 2](#thm:approx){reference-type="ref" reference="thm:approx"} which we restate here for convenience. **Theorem 2**. *Consider an orthogonal-plus-Gaussian cocycle as in Theorem [Theorem 1](#thm:expression){reference-type="ref" reference="thm:expression"}. 
Then the Lyapunov exponents satisfy $$\lambda_k(\epsilon)=(d-2k)\tfrac{\epsilon^2}2+O(\epsilon^4|\log\epsilon|^4) \text{ as $\epsilon\to 0$.}$$* As in the previous sections, let $A = I_d + \epsilon N$ where $N$ is a standard Gaussian matrix random variable. *Proof.* Let $\eta=|\log\epsilon|$ and let $\mathsf{bad}$ be defined as above. We assume $\epsilon$ is sufficiently small that $\|c_j'(I+\epsilon N)\|^2\in (\frac 12,\frac 32)$ for all $N\in\mathsf{bad}^c$. Expanding, we have that $$\begin{aligned} &\quad\|c_j'\|^2=\bigg\langle c_j-\sum_{i<j}\langle c_i,c_j\rangle c_i\,,\,c_j-\sum_{k<j}\langle c_k,c_j\rangle c_k \bigg\rangle\\ &=\|c_j\|^2 - 2\sum_{i<j}\langle c_i,c_j\rangle^2 +\sum_{i,k<j}\langle c_i,c_j\rangle\langle c_k,c_j\rangle \langle c_i,c_k\rangle\\ &=\|c_j\|^2 - 2\sum_{i<j}\langle c_i,c_j\rangle^2 +\sum_{i<j}\langle c_i,c_j\rangle^2\|c_i\|^2 +2\sum_{i<k<j}\langle c_i,c_j\rangle\langle c_k,c_j\rangle \langle c_i,c_k\rangle\\ &=\|c_j\|^2 - \sum_{i<j}\langle c_i,c_j\rangle^2(2-\|c_i\|^2) +2\sum_{i<k<j}\langle c_i,c_j\rangle\langle c_k,c_j\rangle \langle c_i,c_k\rangle, \end{aligned}$$ where to obtain the third line from the second, we separated the case $i=k$ from the case $i\ne k$. We take a finite Taylor expansion, valid for $t\in(-1,1)$: $\log(1+t) = t - \frac{t^2}{2}+\frac{t^3}3 - R(t)$ where $R(t)=\frac{1}{4} (1+\xi)^{-4} t^4$ for some $\xi$ with $|\xi|\leq |t|$. Let $X_j$ be the random variable $\|c_j'(I+\epsilon N)\|^2-1$. Notice from the above that $X_j$ is a polynomial of degree 6 (whose coefficients don't depend on $\epsilon$) in the entries of $\epsilon N$. If $N=0$, then $c_j'(I+\epsilon N)=e_j$ so that the constant term in the polynomial $X_j$ is 0. Notice also that by Claim [\[claim:one\]](#claim:one){reference-type="ref" reference="claim:one"}, on $\mathsf{bad}^c$, all terms other than the first term in the expression for $\|c_j'\|^2$ are $O(\epsilon^2|\log\epsilon|^2)$, while a calculation shows that $\|c_j\|^2=1+O(\epsilon|\log\epsilon|)$. 
Hence $X_j\mathbf 1_{\mathsf{bad}^c}=O(\epsilon|\log\epsilon|)$. Let $Y_j=X_j-\frac 12X_j^2+\frac 13X_j^3$, so that $Y_j$ is another polynomial in the entries of $\epsilon N$ with no constant term. Combining the above, on $\mathsf{bad}^c$ $$\log(\|c_j'(I+\epsilon N)\|^2)=\log(1+X_j)=Y_j+O(\epsilon^4|\log\epsilon|^4).$$ Then we have $$\label{eq:Eapprox} \begin{split} &\mathbb{E}\log(\|c_j'(I+\epsilon N)\|^2)\\ ={}&\mathbb{E}\big(\log(\|c_j'(I+\epsilon N)\|^2)\mathbf 1_{\mathsf{bad}^c}\big) +\mathbb{E}\big(\log(\|c_j'(I+\epsilon N)\|^2)\mathbf 1_\mathsf{bad}\big)\\ ={}&\mathbb{E}(Y_j\mathbf 1_{\mathsf{bad}^c})+O(\epsilon^4|\log\epsilon|^4)+\mathbb{E}\big(\log(\|c_j'(I+\epsilon N)\|^2)\mathbf 1_\mathsf{bad}\big)\\ ={}&\mathbb{E}Y_j-\mathbb{E}(Y_j\mathbf 1_\mathsf{bad})+\mathbb{E}\big(\log(\|c_j'(I+\epsilon N)\|^2)\mathbf 1_\mathsf{bad}\big) +O(\epsilon^4|\log\epsilon|^4). \end{split}$$ Since $Y_j$ is a fixed polynomial function of the entries of $\epsilon N$, and all monomials in the entries of $N$ have finite expectation, we see that $\mathbb{E}Y_j$ agrees up to order $\epsilon^4$ with the expectation of its terms of degree 3 or lower. Also, since the entries of $N$ are independent and each has a symmetric distribution, and the constant term of $Y_j$ is 0, the only terms of degree at most 3 that give a non-zero contribution to $\mathbb{E}Y_j$ are the terms of the form $N_{ab}^2$. Since $X_j$ has no constant term, $X_j^3$ contains only terms of degree at least 3, and so the terms of the form $N_{ab}^2$ in $Y_j$ are those appearing in $X_j$ and $-\frac 12X_j^2$. We established above $$X_j=\|c_j\|^2-1 - \sum_{i<j}\langle c_i,c_j\rangle^2(2-\|c_i\|^2) +2\sum_{i<k<j}\langle c_i,c_j\rangle\langle c_k,c_j\rangle \langle c_i,c_k\rangle.$$ We see that $\|c_j\|^2-1=2\epsilon N_{jj}+\epsilon^2\sum_i N_{ij}^2$ and $\langle c_i,c_j\rangle=\epsilon(N_{ij}+N_{ji})+\epsilon^2\sum_k N_{ki}N_{kj}$. 
Substituting these in the expression for $X_j$, we see $$\begin{aligned} \mathbb{E}X_j&=d\epsilon^2-\epsilon^2\sum_{i<j}\mathbb{E}(N_{ij}+N_{ji})^2+O(\epsilon^4)\\ &=(d-2j+2)\epsilon^2+O(\epsilon^4).\end{aligned}$$ We also see $\mathbb{E}X_j^2=4\epsilon^2\mathbb{E}N_{jj}^2+O(\epsilon^4)$. Combining these gives $$\mathbb{E}Y_j=\mathbb{E}(X_j-\tfrac 12X_j^2)+O(\epsilon^4)=(d-2j)\epsilon^2+O(\epsilon^4).$$ Therefore by [\[eq:Eapprox\]](#eq:Eapprox){reference-type="eqref" reference="eq:Eapprox"}, to finish the argument, it suffices to show $\mathbb{E}(Y_j\mathbf 1_\mathsf{bad})=O(\epsilon^4|\log\epsilon|^4)$ and $\mathbb{E}\log(\|c_j'(I+\epsilon N)\|^2\mathbf 1_{\mathsf{bad}})=O(\epsilon^4|\log\epsilon|^4)$. Since $\|c_j'(A)\|\ge \Theta(A)$, Lemma [Lemma 6](#lem:Thetaest){reference-type="ref" reference="lem:Thetaest"} shows $$\mathbb{E}(\log^-\|c_j'(I+\epsilon N)\|\mathbf 1_\mathsf{bad}) =O(|\log\epsilon|^2e^{-(\log\epsilon)^2/2}).$$ Since $\|c_j'(A)\|\le 2(\sum_{k,l}|A_{kl}|)^3$, we see $\mathbb{E}(\log^+\|c_j'(I+\epsilon N)\|^2\mathbf 1_\mathsf{bad})=O(\mathbb P(\mathsf{bad}))=O(e^{-|\log\epsilon|^2/2}/ |\log\epsilon|)$. Finally, for any of the (finitely many) monomial terms $M$ appearing in $Y_j$, we can check $\mathbb{E}M\mathbf 1_\mathsf{bad}=O(\mathbb P(\mathsf{bad}))=O(e^{-|\log\epsilon|^2/2}/ |\log\epsilon|)$. This completes the proof. ◻
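As a numerical sanity check of the asymptotics $\lambda_k(\epsilon)=(d-2k)\epsilon^2/2+O(\epsilon^4|\log\epsilon|^4)$ proved above, one can estimate the Lyapunov exponents of a product of independent copies of $I_d+\epsilon N$ by the standard QR re-orthonormalization method. This sketch is not part of the paper; the choices of $d$, $\epsilon$, the number of steps, and the seed are arbitrary illustrative values.

```python
import numpy as np

def lyapunov_exponents(d, eps, n_steps, seed=0):
    """Estimate the Lyapunov exponents of a product of i.i.d. matrices
    I_d + eps*N (N a standard Gaussian matrix) via repeated QR
    re-orthonormalization; the exponents are the running averages of
    log|R_kk|."""
    rng = np.random.default_rng(seed)
    Q = np.eye(d)
    sums = np.zeros(d)
    for _ in range(n_steps):
        A = np.eye(d) + eps * rng.standard_normal((d, d))
        Q, R = np.linalg.qr(A @ Q)
        # Signs of diag(R) are irrelevant for the exponents.
        sums += np.log(np.abs(np.diag(R)))
    return sums / n_steps

d, eps = 4, 0.05
est = lyapunov_exponents(d, eps, n_steps=40000)
# Leading-order prediction (d - 2k) * eps^2 / 2 for k = 1, ..., d.
pred = np.array([(d - 2 * k) * eps**2 / 2 for k in range(1, d + 1)])
print(est)
print(pred)
```

With these parameters the estimated exponents come out ordered and of size $\epsilon^2$, matching the predicted spectrum up to the statistical error of the Monte Carlo average and the $O(\epsilon^4|\log\epsilon|^4)$ correction.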
arxiv_math
{ "id": "2309.08193", "title": "Lyapunov exponents of orthogonal-plus-normal cocycles", "authors": "Sam Bednarski and Anthony Quas", "categories": "math.DS", "license": "http://creativecommons.org/licenses/by-sa/4.0/" }
--- abstract: | Mosconi [@mosconi2021] proved Liouville theorems for ancient solutions of subexponential growth to the heat equation on a manifold with Ricci curvature bounded below. We extend these results to graphs with bounded geometry: for a graph with bounded geometry, any nonnegative ancient solution of subexponential growth in space and time to the heat equation is stationary, and thus is a harmonic solution. **Keywords:** Liouville theorem, heat equation, ancient solutions, graphs with bounded geometry. address: - "Bobo Hua: School of Mathematical Sciences, LMNS, Fudan University, Shanghai 200433, China; Shanghai Center for Mathematical Sciences, Fudan University, Shanghai 200438, China." - "Wenhao Yang: School of Mathematical Sciences, Fudan University, Shanghai 200433, China." author: - Bobo Hua - Wenhao Yang bibliography: - Liouville_Ancient_Heat.bib title: Liouville theorems for ancient solutions of subexponential growth to the heat equation on graphs --- # Introduction In 1975, Yau [@Yau75] proved the celebrated Liouville theorem for harmonic functions: on a Riemannian manifold with nonnegative Ricci curvature any positive harmonic function is constant. Later, Cheng and Yau [@ChengYau75] obtained the gradient estimate for positive harmonic functions, which implies that any harmonic function of sublinear growth on such a manifold is constant. In 1997, Colding-Minicozzi [@ColdingMAnnals97] proved Yau's conjecture that the space of polynomial growth harmonic functions is finite dimensional. Liouville theorems for harmonic functions on manifolds have received much attention in the literature, e.g. [@Sullivan83; @Anderson83; @LiTam87; @Lyons87; @Grigoryan90; @Benjamini91; @Grigoryan91; @Saloff92; @WangFY02; @Erschler04; @Li12; @Brighton13]. In 1986, Li and Yau [@LiYau86] proved the gradient estimate for positive solutions to the heat equation, nowadays called the Li-Yau inequality. 
An ancient solution to the heat equation is a solution defined on the time interval $(-\infty, t_0)$ for some $t_0 \in \mathbb{R}$. As a corollary of the Li-Yau inequality, any bounded ancient solution on a manifold with nonnegative Ricci curvature is constant. Souplet and Zhang [@souplet2006sharp] proved interesting Liouville theorems for positive eternal solutions (i.e. solutions defined in all space and time) of subexponential growth and also for general eternal solutions of sublinear growth on a manifold with nonnegative Ricci curvature. For such a manifold, Lin and Zhang [@lin2019ancient] further proved that an ancient solution of polynomial growth to the heat equation is a polynomial in time. The latter was extended to manifolds with polynomial volume growth by Colding-Minicozzi [@colding2021optimal]. In 2020, Mosconi proved much more general Liouville theorems for nonnegative ancient solutions of subexponential growth to the heat equation on a manifold with Ricci curvature bounded below, which extended the result in [@souplet2006sharp]. **Theorem 1** ([@mosconi2021]). *Suppose that $M$ is a complete noncompact Riemannian manifold with $Ric_M \geq -K (K \geq 0)$. Let $u \in C^{\infty}\left(M \times\left(-\infty, t_0\right)\right)$ be a nonnegative ancient solution to the heat equation on $M$. Let $p \in M$ and $\rho$ be the distance function. Then the following hold:* *(1) For $K = 0$, if there exists $t \in\left(-\infty, t_0\right)$ such that $u(x, t) = e^{o(\rho(x, p))}$ as $\rho(x, p) \rightarrow+\infty$, then $u$ is constant.* *(2) For $K > 0$, if $u(x, t) = e^{o(\rho(x, p)-t)}$ as $\rho(x, p)\rightarrow+\infty$ and $t \rightarrow-\infty$, then $u$ is independent of $t,$ and is a harmonic function.* We aim to extend the above theorem to graphs. We first introduce the setting of graphs. Assume $G = (V,E)$ is a connected, locally finite, simple and undirected graph, consisting of the set of vertices $V$ and the set of edges $E$. 
Vertices $x$ and $y$ are called neighbors, denoted by $x \sim y$, if there is an edge connecting them, i.e. ${\{x,y\} \in E}$. For any vertex $x$, $\operatorname{deg} x$ is the number of neighbors of $x$. Let $$\begin{aligned} w: E & \rightarrow(0,+\infty), \\ e=\{x, y\} & \mapsto w_e=w_{xy}=w_{yx}, \end{aligned}$$ be the weight function on edges, and $$\begin{aligned} m: V & \rightarrow(0,+\infty), \\ x & \mapsto m_x, \end{aligned}$$ be the weight function on vertices. We call $G=(V,E,w,m)$ a weighted graph. For a weighted graph $G=(V,E,w,m)$, the Laplace operator $\Delta$ on $G$ is defined as follows: for any function $f : V \to \mathbb{R},$ $$\Delta f(x)=\sum_{\substack{y \sim x \\ y \in V}} \frac{w_{xy}}{m_x}(f(y)-f(x)), \ \text{for all} \ x \in V.$$ We denote by $\rho(x, y) =\min \{ l \mid x = z_0 \sim \cdots \sim z_l = y\}$ the combinatorial distance between vertices $x$ and $y,$ and by $B_n\left(x_0\right):=\left\{x \in V: \rho\left(x, x_0\right) \leq n\right\}$ and $\partial B_n\left(x_0\right):=\left\{x \in V : \rho\left(x, x_0\right) = n\right\}$ for $n\in {\mathbb N},$ the ball and sphere centered at $x_0$, respectively. We recall Liouville theorems for harmonic functions on graphs. The Liouville theorem for positive harmonic functions was proved by Delmotte [@DelmottePolynomial98] on a graph satisfying the volume doubling property and the Poincaré inequality, and by Bauer et al. [@bauer2015li; @HLLY19] on a graph with nonnegative curvature in the sense of CDE or CDE'. The Liouville theorem for bounded harmonic functions was proved by the first author [@hua2019Liouville] for a graph with nonnegative Bakry-Émery curvature, and by Jost, Münch and Rose [@jost2019liouville] for a graph with nonnegative Ollivier curvature. In this paper, we prove Liouville theorems for nonnegative ancient solutions of subexponential growth to the heat equation on graphs. We first introduce the definition of graphs with bounded geometry. **Definition 2**.
We say that a weighted graph $G=(V,E,w,m)$ has *bounded geometry* if there exists $c_0>0$ such that $$\left\{\begin{aligned} c_0^{-1} \leq w_e \leq c_0, \ & \forall e \in E , \\ c_0^{-1} \leq m_x \leq c_0, \ & \forall x \in V , \\ \operatorname{deg} x \leq c_0, \ & \forall x \in V. \end{aligned}\right.$$ The following is the main result. **Theorem 3**. *Let $G=(V,E,w,m)$ be an infinite graph with bounded geometry, and let $u \in C^{\infty}\left(V \times\left(-\infty, t_0\right)\right)$ be a nonnegative ancient solution to the heat equation on $G$. Assume $x_0 \in V$, and let $\lambda_1(G)$ be the bottom of the spectrum of $G$. Then the following hold:* *(1) If $\lambda_1(G)=0,$ and there exists $t \in\left(-\infty, t_0\right)$ such that $$u(x, t)=e^{o(\rho(x, x_0))},\rho(x, x_0) \rightarrow+\infty,$$ then $u$ is independent of $t,$ and is a harmonic function on $G$.* *(2) If $\lambda_1(G)>0,$ and there exist $t \in\left(-\infty, t_0\right)$ and $y_0 \in V$ such that $$u(x, t)=e^{o(\rho(x, x_0))},\rho(x, x_0)\rightarrow+\infty, \ {\rm and}\ u(y_0, t)=e^{o(-t)}, t \rightarrow-\infty,$$ then $u$ is independent of $t,$ and is a harmonic function on $G$.* **Remark 4**. 1. The above theorem reduces the Liouville property of ancient solutions of subexponential growth to that of harmonic functions. As a corollary, any bounded ancient solution is constant for a graph with nonnegative Bakry-Émery curvature or Ollivier curvature. 2. Compared with Theorem [Theorem 1](#thm:mos){reference-type="ref" reference="thm:mos"}, we use the bottom of the spectrum $\lambda_1(G),$ instead of the curvature lower bound, to classify the cases. We follow the proof strategy of Mosconi [@mosconi2021]: First, by the Choquet Representation Theorem for convex cones, nonnegative ancient solutions to the heat equation can be written as integrals over extreme points of the cone.
Mosconi used the Harnack inequality for positive solutions to the heat equation on manifolds with Ricci curvature bounded below to classify those extreme points. In particular, any extreme point can be expressed as $e^{\lambda t}w(x)$ in the form of separation of variables, where $w$ is a generalized eigenfunction satisfying $\Delta w = \lambda w$. Moreover, one can show that these generalized eigenfunctions $w(x)$ grow at least exponentially. This contradicts the assumption of subexponential growth of ancient solutions. The key point is to classify these extreme points. However, the Harnack inequality on graphs with curvature conditions remains open. In this paper, we replace the assumption of lower bounded curvature with the assumption of bounded geometry. In this case, we use the local Harnack inequality for the heat equation by Lin et al. [@lin2017gradient], which suffices to classify extreme points. Moreover, by the assumption of bounded geometry we obtain the exponential growth rate of generalized eigenfunctions on such graphs, and hence prove the theorem. # The Representation of Nonnegative Ancient Solutions to the Heat Equation First, we introduce some notation. Let $X$ be a real linear space and let $A \subset X$. $A$ is called a convex cone if it is a convex set satisfying $\lambda x \in A,$ for all $x \in A, \lambda > 0;$ $A$ is called a proper convex cone if it is a convex cone satisfying $A \cap -A = \{0\} \text{ or } \varnothing.$ A topological linear space is called a locally convex space provided that there exists a local base at the origin consisting of convex sets. Similar to extreme points of convex sets, we can define extreme points and extreme rays of convex cones. Let $A$ be a convex cone. For any $a \in A$, denote by $r_a = \{\lambda a: \lambda > 0\}$ the ray passing through $a$.
For $a \in A$, we call $r_a$ an extreme ray provided that $u \in r_a$, $v \in A$ and $u - v \in A$ imply $v = ku$ for some $k > 0.$ Denote by $\operatorname{Ext}(A)$ the set of extreme points of $A,$ which is the union of all extreme rays. Let $C:= \left\{u \in C^{\infty}\left(V \times \left(-\infty, t_0\right)\right): \Delta u = \partial_t u, u \geq 0\right\}$ be the set of nonnegative ancient solutions to the heat equation, which is a proper convex cone. The following is the local Harnack inequality for the heat equation on a graph with bounded geometry. **Lemma 5** (Local Harnack inequality, Theorem 3 in [@lin2017gradient]). *Let $G = (V, E, w, m)$ be a graph with bounded geometry. If $u \in C^{\infty}\left(V \times \left(-\infty, +\infty\right)\right)$ satisfies $\Delta u = \partial_t u$ and $u \geq 0$, then for all $x, y \in V$ and $t_1 < t_2$, $$u\left(x, t_1\right) \leq u\left(y, t_2\right) \exp \left\{ C_1\left(t_2 - t_1\right) +\frac{C_2\,\rho^2(x, y)}{t_2 - t_1} \right\},$$ where $C_1$ and $C_2$ are constants.* By the local Harnack inequality, one can classify extreme points of $C$. A function $w$ is called a (generalized) eigenfunction, or a $\lambda$-eigenfunction, if $\Delta w=\lambda w.$ **Proposition 6**. *If $0 \neq u \in \operatorname{Ext}(C)$, then there exists a unique representation $u(x, t) = e^{\lambda t} w(x)$, where $\lambda \in \mathbb{R}$ and $w$ is a $\lambda -$eigenfunction.* *Proof.* Note that $$\label{minimal} \operatorname{Ext}(C)=\{u \in C: \text{if } v \in C \text{ and } v \leq u, \text{ then } v=k u \text{ for some } k>0\}.$$ For $u\neq 0,$ by Lemma [Lemma 5](#Harnack){reference-type="ref" reference="Harnack"}, one can show that $u>0.$ Moreover, letting $x=y, t_2=t$, and $t_1=t-a \ (a>0)$ in Lemma [Lemma 5](#Harnack){reference-type="ref" reference="Harnack"}, we have $$u(x, t-a) \leq u(x, t) \cdot e^{C_1 a}.$$ Set $v(x, t)=u(x, t-a)$.
Clearly, $v \in C$, and it satisfies $$v(x, t) \leq e^{C_1 a} u(x, t).$$ By ([\[minimal\]](#minimal){reference-type="ref" reference="minimal"}), we have $v(x, t)=k(u, a) u(x, t)$, where the constant $k$ depends only on the function $u$ and $a$. Hence, $$\partial_t u(x, t)=\lim_{a\to 0} \frac{u(x, t)-u(x, t-a)}{t-(t-a)} =\lim_{a\to 0}\frac{1-k(u, a)}{a} u(x, t)=:\lambda u(x,t),$$ where $\lambda$ depends only on $u.$ By integrating over time, we have $$u(x, t) =e^{\lambda t} w(x).$$ By the heat equation of $u,$ $$\Delta w=\lambda w.$$ This yields the representation, and the uniqueness follows directly. ◻ To obtain the representation theorem, we refer to the Choquet Representation Theorem for convex cones. **Lemma 7** (Choquet Representation Theorem, Theorem 30.22 in [@choquet1969lectures]). *Let $X$ be a Hausdorff locally convex space, $A \subset X$ be a weakly complete, metrizable, and proper convex cone. For any $a \in A$, there exists a Borel probability measure $\mu$ with $\operatorname{supp} \mu \subseteq \operatorname{Ext}(A)$ such that $$f(a) = \int_{y \in \operatorname{Ext}(A)} f(y) \mathrm{d}\mu(y), \quad \text{for all } f \in X^*.$$* **Proposition 8**. *$C$ is a proper convex cone in $\mathbb{R}^{V \times(-\infty, t_0)}$ under the topology of pointwise convergence, and satisfies all the conditions of Lemma [Lemma 7](#Choquet){reference-type="ref" reference="Choquet"}.* *Proof.* (1) $\mathbb{R}^{V \times(-\infty, t_0)}$ is Hausdorff. The uniqueness of limits of sequences in this topology yields the result. \(2\) $\mathbb{R}^{V \times(-\infty, t_0)}$ is locally convex. For any $a \in V \times(-\infty, t_0)$ and $u \in \mathbb{R}^{V \times(-\infty, t_0)}$, let $p_a(u) = u(a)$. One easily checks that $p_a$ is a semi-norm on $\mathbb{R}^{V \times(-\infty, t_0)}$. Let $\mathscr{P} = \{p_a : a \in V \times(-\infty, t_0)\}$ be a family of semi-norms on $\mathbb{R}^{V \times(-\infty, t_0)}$.
One can show that $\mathscr{P}$ is a separating family, and by [@rudin1973functional Theorem 1.37], $\mathscr{P}$ generates a balanced and convex local base at the origin. \(3\) $C$ is weakly complete. Let $\{u_n\}$ be a weak Cauchy sequence in $C$, which means that $\{Tu_n\}$ is a Cauchy sequence for all $T \in C^*$. Fix $(x, t) \in V \times(-\infty, t_0)$, and define $T_{(x, t)}u = u(x, t),\forall u\in C$. Thus, $\{u_n(x, t)\}_{n=1}^{+\infty}$ is a Cauchy sequence. We define $$u_\infty(x,t):=\lim_{n\to \infty}u_n(x, t),\quad \forall (x, t) \in V \times(-\infty, t_0).$$ It suffices to show that $u_\infty \in C$. By the local Harnack inequality, Lemma [Lemma 5](#Harnack){reference-type="ref" reference="Harnack"}, for any $t\in(t_1,t_2)$ with $t_1<t_2<t_0$ and $k=0,1$ or $2,$ $$|\partial_t^k u_n(x,t)|=|\Delta^k u_n(x,t)|\leq C\max_{B_k(x)}u_n(\cdot, t)\leq Cu_n(x,t_2)\leq \tilde{C}<\infty.$$ By passing to a subsequence, we get $C^1$ convergence in time for $u_n,$ and hence $u_\infty$ is a solution to the heat equation. It is obvious that $u_\infty\in C.$ \(4\) $C$ is metrizable. Take a countable dense subset $D \subset V \times(-\infty, t_0).$ We can equip $C$ with a metric depending only on the values of functions on $D$ that preserves pointwise convergence on $D$. By the local Harnack inequality, Lemma [Lemma 5](#Harnack){reference-type="ref" reference="Harnack"}, in $C$, pointwise convergence on $D$ coincides with pointwise convergence on $V \times(-\infty, t_0)$, and thus we can define a metric on $C$ compatible with its original topology. ◻ **Proposition 9**.
*For $u \in C$, there exist a Borel probability measure $\nu$ on $\mathbb{R}$, and a family of nonnegative functions $\{w_\lambda\}$ (depending on $u$) satisfying $\Delta w_\lambda = \lambda w_\lambda$, such that $$\label{27} u(x, t) = \int_{\mathbb{R}} e^{\lambda t} w_\lambda(x) \mathrm{d}\nu(\lambda).$$* *Furthermore, the function $\lambda \mapsto w_\lambda(x)$ is a Borel function.* *Proof.* By Lemma [Lemma 7](#Choquet){reference-type="ref" reference="Choquet"}, for any $u \in C$, there exists a Borel probability measure $\mu$, with $\operatorname{supp}\mu \subseteq \operatorname{Ext}(C)$ such that for any continuous linear functional $f$, $$f(u)=\int_{v \in \operatorname{Ext}(C)} f(v) \mathrm{d}\mu(v).$$ Fix $(x, t) \in V \times\left(-\infty, t_0\right)$ and define $f(h) = h(x, t), \forall h\in C.$ Then we have $$u(x,t) = \int_{v \in \operatorname{Ext}(C)} v(x,t) \mathrm{d}\mu(v).$$ By Proposition [Proposition 6](#Ext){reference-type="ref" reference="Ext"}, $$u(x,t) = \int_{v \in \operatorname{Ext}(C)} e^{\lambda t} w(x) \mathrm{d}\mu(v).$$ Next, we want to rewrite the integral over elements of $\operatorname{Ext}(C)$ as an integral over $\lambda$. The idea is to classify the elements in $\operatorname{Ext}(C)$ based on $\lambda$ and construct a new measure and integral. We introduce the classification function $$\begin{aligned} \varphi: \operatorname{Ext}(C) & \rightarrow \mathbb{R}, \\ 0 \neq v(x, t) =e^{\lambda t} w(x) & \mapsto \lambda . \end{aligned}$$ In particular, we define $\varphi(0)=0,$ and by Proposition [Proposition 6](#Ext){reference-type="ref" reference="Ext"}, the function $\varphi$ is well-defined. Now we prove that $C$ is a Polish space. Since $C$ is metrizable, its weak completeness implies completeness. The metric mentioned in Proposition [Proposition 8](#Cond){reference-type="ref" reference="Cond"}(4) naturally yields second countability and separability, making $C$ a Polish space.
Moreover, $\operatorname{Ext}(C)$, being a $G_\delta$ subset of $C$, is also a Polish space and therefore a Radon space. Next, we will prove that $\varphi$ is continuous except at 0, hence measurable. Suppose $\{v_n\} \subset \operatorname{Ext}(C)$ converges to $v_0 \in \operatorname{Ext}(C)$ with $v_0 \neq 0$. By Proposition [Proposition 6](#Ext){reference-type="ref" reference="Ext"}, we can assume that for any $n \in {\mathbb N}_0$, $e^{\lambda_n t} \widetilde{w_n}(x)=v_n(x, t)$, where $\widetilde{w_n}$ satisfies $\Delta \widetilde{w_n}=\lambda_n \widetilde{w_n}$ and $\widetilde{w_0} \neq 0$. For any $x \in V$ and $t \in (-\infty, t_0)$, we have $$\label{lim1} e^{\lambda_n t} \widetilde{w_n}(x)=v_n(x, t) \rightarrow v_0(x, t)=e^{\lambda_0 t} \widetilde{w_0}(x) , n \rightarrow+\infty.$$ Clearly, ([\[lim1\]](#lim1){reference-type="ref" reference="lim1"}) still holds for $t-1$. Thus, $$\label{lim2} e^{\lambda_n (t-1)} \widetilde{w_n}(x) \rightarrow e^{\lambda_0 (t-1)} \widetilde{w_0}(x) , n \rightarrow+\infty.$$ Since $\widetilde{w_0} \neq 0$, dividing ([\[lim1\]](#lim1){reference-type="ref" reference="lim1"}) by ([\[lim2\]](#lim2){reference-type="ref" reference="lim2"}), we have $e^{\lambda_n} \rightarrow e^{\lambda_0} ,n \rightarrow+\infty,$ and thus $\lambda_n \rightarrow \lambda_0 ,n \rightarrow+\infty.$ Therefore, $\varphi$ is continuous except at 0, thus Borel measurable.
The measurable function $\varphi$ induces a decomposition $$\label{Deco} \operatorname{Ext}(C)=\bigcup_{\lambda \in \mathbb{R}}\varphi^{-1}(\lambda).$$ By the Disintegration Theorem, there exist conditional probability measures $\left\{\mu_\lambda\right\}$ with $\operatorname{supp} \mu_\lambda \subseteq \varphi^{-1}(\lambda)$, and a Borel probability measure $\nu$ on $\mathbb{R}$ such that $$\label{bs} u=\int_{\mathbb{R}} \int_{v \in\varphi^{-1}(\lambda)} v \mathrm{d}\mu_\lambda(v) \mathrm{d}\nu(\lambda).$$ Moreover, $\lambda \mapsto \mu_\lambda$ is a Borel function and if we define $u_\lambda:=\int_{v \in \varphi^{-1}(\lambda)} v \mathrm{d}\mu_\lambda(v) =\int_{v \in \operatorname{Ext}(C)} v \mathrm{d}\mu_\lambda(v),$ then $\lambda \mapsto u_\lambda$ is also a Borel function. Let $w_\lambda = e^{-\lambda t}u_\lambda$. Since $u_\lambda \mapsto e^{-\lambda t}u_\lambda$ is continuous in the topology of pointwise convergence, $\lambda \mapsto w_\lambda$ is Borel. By Proposition [Proposition 6](#Ext){reference-type="ref" reference="Ext"}, for any $v \in \varphi^{-1}(\lambda) \subset \operatorname{Ext}(C), e^{-\lambda t}v$ is independent of $t$. Thus, $$w_\lambda:= e^{-\lambda t}u_\lambda=e^{-\lambda t}\int_{v \in \varphi^{-1}(\lambda)} v \mathrm{d}\mu_\lambda(v)$$ is independent of $t.$ Moreover, by $\partial_t v=\lambda v,$ $$\Delta w_\lambda =e^{-\lambda t}\Delta u_\lambda=e^{-\lambda t}\int_{v \in \varphi^{-1}(\lambda)}{\Delta v \mathrm{d}\mu_\lambda(v)} =e^{-\lambda t}\int_{v \in \varphi^{-1}(\lambda)} \partial_t v \mathrm{d}\mu_\lambda(v) =\lambda w_\lambda.$$ This proves the proposition. ◻ **Remark 10**.
Furthermore, we can also require $$\operatorname{supp}\nu \subseteq \Lambda := \{ \lambda: \Delta w=\lambda w\ \mathrm{has\ a\ nonzero\ nonnegative\ solution\ on}\ G\}.$$ In fact, by Lemma [Lemma 13](#Spec){reference-type="ref" reference="Spec"} and Proposition [Proposition 12](#Zer){reference-type="ref" reference="Zer"}, $\Lambda =[-\lambda_1(G),+\infty).$ Hence, $\Lambda$ is a closed subset of $\mathbb{R}$, and is naturally a Radon space. By the definition of $\Lambda$, we know that $\forall \lambda \notin \Lambda, \ \varphi^{-1}(\lambda)=\varnothing$. Hence, we only need to modify the decomposition ([\[Deco\]](#Deco){reference-type="ref" reference="Deco"}) to $$\operatorname{Ext}(C)=\bigcup_{\lambda \in \Lambda}\varphi^{-1}(\lambda),$$ and change $\mathbb{R}$ to $\Lambda$. **Remark 11**. We can also require that $\forall \ \lambda \in \operatorname{supp} \nu, \ w_\lambda>0$. In fact, by Proposition [Proposition 12](#Zer){reference-type="ref" reference="Zer"} below, $\left\{\lambda: w_\lambda = 0\right\}^c=\left\{\lambda: w_\lambda \neq 0\right\} =\left\{\lambda: w_\lambda>0\right\}$ is measurable since $\lambda \mapsto w_\lambda$ is a Borel function. Thus, we can restrict $\nu$ to this set. # Estimates of the Growth Rate of Generalized Eigenfunctions The following proposition is elementary. **Proposition 12**. *On a weighted graph $G=(V,E,w,m)$, let $w$ be a nonnegative $\lambda-$eigenfunction on $G$, $\lambda \in \mathbb{R}.$ If there exists $x_0 \in V$ such that $w(x_0)=0$, then $w(x)=0, \forall x \in V.$* *Proof.* By the equation $\Delta w=\lambda w$ at the vertex $x_0,$ we get $w(y)=0, \forall y \sim x_0.$ Applying the same argument to these $y$ and iterating, we prove $w(x)=0, \forall x \in V$ by the connectedness of $G.$ ◻ We recall a well-known result. **Lemma 13** (Agmon-Allegretto-Piepenbrink Theorem, see e.g. Theorem 4.14 in [@keller2021graphs]). *Assume that the graph $G=(V,E,w,m)$ is infinite, locally finite and connected.
Let $\lambda_1(G)$ denote the bottom of the spectrum of the Laplacian on $G$ and $\lambda \in \mathbb{R}$. Then the following are equivalent:* *(1) $\lambda \geq -\lambda_1(G)$.* *(2) $\Delta w=\lambda w$ has a positive solution $w$.* The maximum principle is well-known. **Lemma 14** (Lemma 1.39 in [@grigor2018introduction]). *Assume $G=(V,E,w,m)$ is an infinite and connected graph, and $w$ is a subharmonic function, i.e. $\Delta w \geq 0$. Then $\forall x_0 \in V, n \in \mathbb{N}$, $$\label{Max} \max _{\partial B_n\left(x_0\right)} w=\max _{B_n\left(x_0\right)} w.$$* For $x_0 \in V, w:V \to \mathbb{R}$, we denote by $$M_n:=\max \limits _{B_n\left(x_0\right)} w.$$ We prove the estimates on the growth rate of $M_n$ for a nonnegative generalized eigenfunction. **Proposition 15**. *Assume that the graph $G=(V,E,w,m)$ has bounded geometry, $x_0 \in V$, $\lambda \geq 0$. There exists $C=C(\lambda,c_0)>1,$ where $c_0$ is the constant in Definition [Definition 2](#BG){reference-type="ref" reference="BG"}, such that for any positive solution $w$ of the equation $\Delta w=\lambda w$, $$\varlimsup _{n \rightarrow+\infty}\frac{\ln M_n}{n} = \varlimsup _{n \rightarrow+\infty}\max _{B_n\left(x_0\right)} \frac{\ln w}{n} \leq \ln C.$$* *Proof.* By the local Harnack inequality in [@keller2021graphs Theorem 4.1], there exists $C=C(\lambda,c_0) > 1$ such that for any $y \sim x$ with $w(y)>w(x)$, $$\begin{aligned} C^{-1}\leq \frac{w(y)}{w(x)} \leq C. \end{aligned}$$ This yields that $$\frac{M_{n+1}}{M_n} \leq C.$$ Iterating, one gets $M_n\leq C^{n-1} M_1.$ This proves the result. ◻ **Proposition 16**.
*Assume that the weighted graph $G=(V,E,w,m)$ has bounded geometry, $x_0 \in V$, $\lambda > 0.$ There exists $C=C(\lambda, c_0)>1$ such that for any positive non-constant $\lambda-$eigenfunction $w,$ $$\varliminf _{n \rightarrow+\infty}\frac{\ln M_n}{n} = \varliminf _{n \rightarrow+\infty} \max _{\partial B_n(x_0)} \frac{\ln w}{n} \geq \ln C.$$* *Proof.* For $n \in {\mathbb N},$ by the maximum principle, Lemma [Lemma 14](#Maximal){reference-type="ref" reference="Maximal"}, we choose $x_n \in \partial B_n\left(x_0\right)$ such that $w\left(x_n\right)= M_n$. By the equation of $w$ and the assumption of bounded geometry, $$\begin{aligned} \lambda w\left(x_n\right)\ & =\sum_{x \sim x_n} \frac{w_{x_n x}}{m_{x_n}}\left(w(x)-w\left(x_n\right)\right) \\ & \leq c_0^3 \left(\max _{B_{n+1}\left(x_0\right)}w-w\left(x_n\right)\right). \end{aligned}$$ Thus, $$\begin{aligned} \frac{M_{n+1}}{M_n} \geq \frac{c_0^3+\lambda}{c_0^3}=:C. \end{aligned}$$ Iterating, we prove the result. ◻ # Proof of the Main Result In this section, we prove Theorem [Theorem 3](#main){reference-type="ref" reference="main"}. *Proof.* Without loss of generality, we assume $u > 0$. By Proposition [Proposition 9](#Representation){reference-type="ref" reference="Representation"}, $$u(x, t)=\int_{\mathbb{R}} e^{\lambda t} w_\lambda(x) \mathrm{d}\nu(\lambda),$$ where $w_\lambda$ is a nonnegative solution of the equation $\Delta w=\lambda w$. By Remark [Remark 10](#Rem){reference-type="ref" reference="Rem"} and Lemma [Lemma 13](#Spec){reference-type="ref" reference="Spec"}, $$\operatorname{supp}\nu \subset \Lambda = [-\lambda_1(G),+\infty).$$ Therefore, $$u(x, t)=\int_{-\lambda_1(G)}^{+ \infty} e^{\lambda t} w_\lambda(x) \mathrm{d}\nu(\lambda).$$ Next, we argue by contradiction that $\operatorname{supp} \nu = \left\{0\right\}.$ Case 1: $\lambda_1(G)=0$.
Then $$u(x, t)=\int_{0}^{+ \infty} e^{\lambda t} w_\lambda(x) \mathrm{d}\nu(\lambda).$$ Suppose that there exists $[a,b] \subset (0,+\infty)$ such that $\nu([a,b]) > 0.$ By Proposition [Proposition 9](#Representation){reference-type="ref" reference="Representation"}, the function $\lambda \mapsto w_\lambda$ is Borel from $\mathbb{R}$ to $\mathbb{R}^{V}$ (the latter space is equipped with the topology of pointwise convergence). Let $\Pi=[a, b]\cap \{\lambda:w_\lambda \neq 0\}$, which is still a Borel set. The restriction of this function to $\Pi$ remains a Borel function. Moreover, by Remark [Remark 11](#24){reference-type="ref" reference="24"}, $\nu(\Pi)=\nu([a, b])>0$. Set $F:=\{\ln w: \Delta w=\lambda w, w>0, \lambda \in \Pi\}.$ By Remark [Remark 11](#24){reference-type="ref" reference="24"}, $F$ is not empty. Define the function $$\begin{aligned} \varphi: \Pi & \rightarrow F , \\ \lambda & \mapsto \varphi(\lambda)=\ln w_\lambda . \end{aligned}$$ By Proposition [Proposition 9](#Representation){reference-type="ref" reference="Representation"}, this function is well-defined. Since the function $w_\lambda \mapsto \ln w_\lambda$ is continuous from $\mathbb{R}^{V}\backslash\{0\} \rightarrow F$ (both with the topology of pointwise convergence), $\varphi$ is a Borel function with respect to the pointwise convergence topology. By Proposition [Proposition 15](#Grow0){reference-type="ref" reference="Grow0"}, we can define a new distance function on $F$ $$\rho_F(f, g):=\sup \limits _{n \in {\mathbb N}}\left\{\frac{1}{n} \max \limits _{B_n(x_0)}|f-g|\right\} < +\infty.$$ We claim that $\varphi$ is a Borel function in the new topology induced by the distance function $\rho_F$.
Note that for any $g\in F,$ $c>0,$ $$\varphi^{-1}\left(\left\{f\in F: \rho_F(f, g) \leq c\right\}\right)=\bigcap_{n \in {\mathbb N}}\left\{\lambda \in \mathbb{R}: \max _{B_n(x_0)}|\varphi(\lambda)-g| \leq cn \right\}.$$ One easily checks that the function $\varphi(\lambda) \mapsto \max \limits _{B_n\left(x_0\right)}|\varphi(\lambda)-g|$ is a continuous function from $F \rightarrow \mathbb{R}$ (where the topology of $F$ is induced by pointwise convergence). Thus the function $\lambda \mapsto \max \limits _{B_n\left(x_0\right)}|\varphi(\lambda)-g|$ is a Borel function, and hence $\varphi^{-1}\left(\left\{f \in F: \rho_F(f, g) \leq c\right\}\right)$ is Borel. This proves the claim. By Lusin's theorem, there exists a compact set $K \subset \Pi$ such that $\nu(K)>0$ and $\varphi |_K$ is a continuous function. Thus, there exists $\lambda_0 \in K$ such that $\nu\left(B_r\left(\lambda_0\right) \cap K\right)>0, \forall r>0$. Since $\varphi |_K$ is continuous at $\lambda_0$, $\forall \ \varepsilon>0$, $\exists \ \delta>0$ such that for any $\lambda \in B_\delta(\lambda_0)\cap K$, $$\frac{1}{n} \max _{ B_n\left(x_0\right)}\left|\varphi\left(\lambda_0\right)-\varphi(\lambda)\right| =\frac{1}{n} \max _{x\in B_n\left(x_0\right)}\left|\ln w_{\lambda_0}(x)-\ln w_\lambda(x)\right| \leq \varepsilon, \quad \forall n \in {\mathbb N}.$$ Hence for any $\lambda \in B_\delta(\lambda_0)\cap K$, $n \in {\mathbb N}$ and $x \in B_n\left(x_0\right)$, $$\ln w_{\lambda_0}(x)\leq \ln w_\lambda(x)+n \varepsilon.$$ Averaging over $\lambda \in B_\delta\left(\lambda_0\right) \cap K$ and using Jensen's inequality, $$\begin{aligned} \ln w_{\lambda_0}(x) &\leq \frac{1}{\nu\left(B_\delta\left(\lambda_0\right) \cap K\right)} \int_{B_\delta {\left(\lambda_0\right) \cap K}} \ln w_\lambda(x) \mathrm{d}\nu ( \lambda)+n \varepsilon,\\ & \leq \ln \left( \frac{1}{\nu\left(B_\delta\left(\lambda_0\right) \cap K \right)} \int_{B_\delta {\left(\lambda_0\right) \cap K}} w_\lambda(x) \mathrm{d}\nu ( \lambda)
\right) +n \varepsilon. \end{aligned}$$ For fixed $t \in (-\infty, t_0)$, we have $e^{-\lambda t} \leq C_t:=\max(e^{-a t}, e^{-b t})$ for $\lambda \in [a,b]$, so by Proposition [Proposition 9](#Representation){reference-type="ref" reference="Representation"}, $$\int_{B_\delta\left(\lambda_0\right) \cap K} w_\lambda(x) \mathrm{d}\nu(\lambda) =\int_{B_\delta\left(\lambda_0\right) \cap K} e^{-\lambda t} e^{\lambda t} w_\lambda(x) \mathrm{d}\nu(\lambda) \leq C_t \int_{\mathbb{R}} e^{\lambda t} w_\lambda(x) \mathrm{d}\nu(\lambda)=C_t\, u(x, t).$$ Hence, for all $x \in B_n\left(x_0\right)$, $$\ln w_{\lambda_0}(x) \leq -\ln \nu \left(B_\delta\left(\lambda_0\right) \cap K\right)+\ln C_t+\ln u(x, t) + n \varepsilon.$$ Dividing both sides by $n$, we get $$\frac{\ln w_{\lambda_0}(x)}{n} \leq \frac{\ln u\left(x, t\right)}{n}+\varepsilon +\frac{c}{n},$$ where $c$ is a constant. Letting $x=x_n\in \partial B_n(x_0)$ be a point at which $w_{\lambda_0}$ attains its maximum on $B_n(x_0),$ we have $$\frac{1}{n} \max_{B_n(x_0)} \ln w_{\lambda_0} \leq \frac{\ln u\left(x_n, t\right)}{n}+\varepsilon +\frac{c}{n} =\frac{\ln u\left(x_n, t\right)}{\rho(x_n,x_0)}+\varepsilon +\frac{c}{n} .$$ Letting $n \to \infty$, by Proposition [Proposition 16](#Grow){reference-type="ref" reference="Grow"}, and considering the assumption of spatial subexponential growth, we obtain $$0<\ln C \leq \varlimsup _{n \rightarrow +\infty} \frac{1}{n} \max _{B_n(x_0)} \ln w_{\lambda_0} \leq \varlimsup _{n \rightarrow +\infty} \frac{\ln u\left(x_n,t\right)}{\rho\left(x_n, x_0\right)}+\varepsilon = \varepsilon.$$ Since $\varepsilon$ is arbitrary, this yields a contradiction. Hence, $\operatorname{supp} \nu = \left\{0\right\}$, and consequently $$\begin{aligned} u(x, t) & =\int_{\mathbb{R}} e^{\lambda t} w_\lambda(x) \mathrm{d}\nu(\lambda) \\ & =e^{0 \cdot t} w_0(x) \\ & =w_0(x) . \end{aligned}$$ That is, $u$ is independent of $t$, and is a harmonic function on $G$ with respect to $x$. Case 2: $\lambda_1(G) > 0$.
Then $$u(x, t)=\int_{-\lambda_1}^{+ \infty} e^{\lambda t} w_\lambda(x) \mathrm{d}\nu(\lambda).$$ First, we argue by contradiction that $\operatorname{supp} \nu \subset [0,+\infty)$. Suppose this is not true; then there exists $[a, b] \subset (-\infty, 0)$ such that $\nu([a, b]) > 0$. Thus, for $t<0$, $$\begin{aligned} u(x, t) & \geq \int_a^b e^{\lambda t} w_\lambda(x) \mathrm{d}\nu(\lambda) \\ & \geq e^{b t} \int_a^b w_\lambda(x) \mathrm{d}\nu(\lambda). \end{aligned}$$ By using Lusin's theorem as in Case 1, one can show that $\int_a^b w_\lambda(x) \mathrm{d}\nu(\lambda) > 0$. Since $b<0$, the factor $e^{bt}$ grows exponentially as $t \to -\infty$, which for fixed $x$ contradicts the assumption of subexponential growth with respect to time. Hence, we conclude that $$\operatorname{supp} \nu \subset [0,+\infty).$$ The rest of the argument follows verbatim as in Case 1. ◻ **Acknowledgements.** We thank Professor Qi S. Zhang for his suggestion of the problem and many helpful discussions. B. Hua is supported by NSFC, no. 11831004, and by Shanghai Science and Technology Program \[Project No. 22JC1400100\].
{ "id": "2309.17250", "title": "Liouville theorems for ancient solutions of subexponential growth to the\n heat equation on graphs", "authors": "Bobo Hua and Wenhao Yang", "categories": "math.DG math.MG", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | We discuss two variations of Edwards' duality theorem. More precisely, we prove one version of the theorem for cones not necessarily containing all constant functions. In particular, we allow the functions in the cone to have a non-empty common zero set. In the second variation, we replace suprema of point evaluations and infima over Jensen measures by suprema of other continuous functionals and infima over a set of measures defined through a natural order relation induced by the cone. As applications, we give some results on propagation of discontinuities for Perron--Bremermann envelopes in hyperconvex domains as well as a characterization of minimal elements in the order relation mentioned above. address: | Center for Mathematical Sciences\ Lund University\ Box 118, SE-221 00 Lund, Sweden author: - Mårten Nilsson, Frank Wikström title: Variations on a theorem by Edwards --- # Introduction Let $X$ be a metric space, and let $\mathcal{F}$ be a cone of upper semicontinuous functions on $X$. We say that a compactly supported, positive measure $\mu$ is a *Jensen measure* for $\mathcal{F}$ with barycenter $x \in X$, denoted $\mu \in J_x^{\mathcal{F}}$, if $$u(x)\leq \int u \,d\mu, \quad\text{for all } u \in \mathcal{F}.$$ In many situations, it is possible to retrieve interesting information about $X$ and $\mathcal F$ from the sets $J_x^{\mathcal{F}}$. For example, the notions of hyperconvexity and B-regularity in pluripotential theory may be characterized in terms of Jensen measures of cones of plurisubharmonic functions [@Wikstrom]. Another application of Jensen measures to pluripotential theory is in the study of pointwise suprema over elements in $\mathcal{F}$ [@cole; @nilssonwik; @nilsson; @poletsky; @Wikstrom]. A general result is the following theorem due to Edwards [@edwards]: **Theorem 1** (Edwards' theorem).
*Suppose that $X$ is compact, that $\mathcal{F}$ contains all constant functions, and that $g:X \to (-\infty, \infty]$ is lower semicontinuous. Then $$\sup\big\{u(x) \mathrel{;}u\in \mathcal{F}, u \leq g\big\} = \inf\big\{\int g \, d\mu \mathrel{;}\mu \in J_x^{\mathcal{F}}\big\}.$$* In this note, we prove two variations of Edwards' theorem. In Section 2, we relax the conditions on $\mathcal{F}$ in a manner that allows us to apply the theorem to cones of negative plurisubharmonic functions in hyperconvex domains, paralleling the case when the domain is B-regular [@nilsson; @Wikstrom]. This is done by modifying Edwards' original proof, replacing the space $C(X)$ by a subspace $H \subset C(X)$ satisfying certain criteria. We then use this result to establish continuity of certain Perron--Bremermann envelopes. Section 3 and Section 4 provide the setting for the second variation of Edwards' theorem, where the Jensen measures are replaced by a set of measures given by an ordering induced by $\mathcal{F}$. Here, we are motivated by an ordering of positive measures induced by (negative) plurisubharmonic functions, introduced by Bengtson in [@Bengtson]. More precisely, **Definition 2**. Let $\Omega \subset \mathbb{C}^{n}$ be a domain, and let $\mu$ and $\nu$ be positive Radon measures supported on $\bar\Omega$. We say that $\mu \mathop{\mathrm{\le_\mathrm{psh}}}\nu$ if $$\int u\,\mathrm{d}\mu \ge \int u\,\mathrm{d}\nu$$ for all $u \in \mathcal{E}_0(\Omega)$. Recall that $$\mathcal{E}_0(\Omega) = \{ u \in \mathcal{PSH}(\Omega) : u < 0, \lim_{\zeta \to p \in \partial\Omega} u(\zeta) = 0, \int_\Omega (\mathop{\mathrm{dd^c}}u)^n < \infty \},$$ is the set of "plurisubharmonic test functions" introduced by Cegrell [@Cegrell]. Note that if $\mu$ is a Jensen measure with barycenter $z$, then $\mu \mathop{\mathrm{\le_\mathrm{psh}}} \delta_z$ (where $\delta_z$ denotes a point mass at $z$).
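A minimal concrete instance of this ordering (our illustration, not taken from [@Bengtson]) comes from the sub-mean value inequality for plurisubharmonic functions:

```latex
% Our example: normalized surface measure on a small sphere is
% plurisubharmonically smaller than the point mass at its center.
% Let \bar{B}(z,r) \subset \Omega and let \sigma_{z,r} denote normalized
% surface measure on \partial B(z,r). For every u \in \mathcal{E}_0(\Omega),
% the sub-mean value inequality gives
\int u\,\mathrm{d}\delta_z \;=\; u(z)
  \;\le\; \int_{\partial B(z,r)} u\,\mathrm{d}\sigma_{z,r}
  \;=\; \int u\,\mathrm{d}\sigma_{z,r},
% so \sigma_{z,r} \le_{\mathrm{psh}} \delta_z (note the reversed inequality
% in the definition); \sigma_{z,r} is in fact a Jensen measure with
% barycenter z.
```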
The fact that $\frac12\mu \mathop{\mathrm{\le_\mathrm{psh}}}\mu$ in Bengtson's ordering gives rise to a number of normalization issues, for example in the definition of maximal and/or minimal measures in [@Bengtson]. Inspired by these observations, we introduce a few (slightly) different notions of orderings induced by plurisubharmonic functions in Section 3, in which comparable measures have the same total mass. In Section 4, we continue our study, employing a result of Duval and Sibony [@Duval:95] which provides sufficient criteria for two comparable measures to coincide. Finally, in Section 5, we prove a version of Edwards' theorem adapted to this setting by letting a fixed measure $\mu$ play the rôle of $\delta_z$, and letting measures plurisubharmonically larger than $\mu$ play the part of the Jensen measures. We then apply the result to characterize the minimal elements in the different orderings. # Duality on hyperconvex domains In this section, we study envelopes over cones of negative plurisubharmonic functions on the closure of a bounded hyperconvex domain $\Omega$. Specifically, we consider the two cones $$\begin{aligned} &\mathcal{PSH}^0(\bar \Omega) := \{u \in \mathcal{USC}(\bar \Omega)\cap\mathcal{PSH}^-(\Omega) \mathrel{;}u\big|_{\partial\Omega} = 0\}, \\ &\mathcal{E}_0(\Omega) \cap C(\bar \Omega).\end{aligned}$$ By an approximation theorem of Cegrell [@Cegrell2], combined with the monotone convergence theorem, the Jensen measures of these two cones coincide. Denote this set of measures by $J^-_z$. We first aim to prove the following: **Theorem 3**. *Let $g: \bar \Omega \to \mathbb{R}$ be a lower bounded function such that $g_* \leq g$, with equality outside a pluripolar set $P$, and suppose that $g_*(\zeta)=0$ for all $\zeta \in \partial\Omega$.
Then $$\begin{aligned} \sup\big\{u(z) \mathrel{;}u\in \mathcal{E}_0(\Omega) \cap C(\bar \Omega), u \leq g\big\} &= \sup\big\{u(z) \mathrel{;}u\in \mathcal{PSH}^0(\bar \Omega) , u \leq g\big\} \\ &= \inf\big\{\int g \, d\mu \mathrel{;}\mu \in J_z^-\big\} \end{aligned}$$ holds outside the pluripolar hull of $P$. In particular, if $g$ is upper semicontinuous and $P$ is a closed, complete pluripolar set, then the envelopes are continuous outside $P$.* Before embarking on a proof, we begin by establishing a version of Edwards' theorem adapted to also work for cones $\mathcal{F}$ where the common zero set $$Z_{\mathcal{F}}:=\{x \in X \mathrel{;}\forall u \in \mathcal{F}, u(x)=0\}$$ is non-empty. Specifically, we will modify the aspects of the original proof that rely on the following properties of $C(X)$: - For each element $u \in \mathcal{USC}(X)$, there exists a decreasing sequence $g_i \in C(X)$ such that $g_i \searrow u$. In particular, this holds if $u \in \mathcal{F}$. - For each element $l \in \mathcal{LSC}(X)$, there exists an increasing sequence $g_i \in C(X)$ such that $g_i \nearrow l$. Instead, this rôle will be played by a subspace $H \subset C(X)$, for which the set $$Z_{H}=\{x \in X \mathrel{;}\forall h \in H, h(x)=0\}$$ is allowed to be non-empty as well. In order to do so, we make the following definitions. **Definition 4**. Let $H \subset C(X)$ be a vector space, and let $E: C_c( X \setminus Z_{H}) \to C(X)$ denote extension by zero. We say that $H$ is *total outside its zeros* if for each $\varphi \in E(C_c(X \setminus Z_{H}))$, there exists a decreasing sequence $h_i \in H$ such that $h_i \searrow \varphi.$ **Definition 5**. Let $H \subset C(X)$ be a vector space. We say that $\mathcal{F}$ is $H$*-admissible* if each element in $H$ has a minorant in $\mathcal{F}$, and for each element $u \in \mathcal{F}$, there exists a decreasing sequence $h_i \in H$ such that $h_i \searrow u$. *Remark 1*.
Note that a cone of upper semicontinuous functions containing constants is $C(X)$-admissible. For brevity of notation, we define the following two operators on the space of Borel functions on $X$: $$\begin{aligned} S_x(g) &= \begin{cases} \sup\{u(x) \mathrel{;}u\in \mathcal{F}, u \leq g\}, & \hfill \text{if }\exists u\in \mathcal{F} \mathrel{;}u \leq g, \\ - \infty, &\hfill \text{otherwise,} \end{cases} \\ I_x(g) &= \inf\Big\{\int g \, d\mu \mathrel{;}\mu \in J_x^{\mathcal{F}}\Big\}.\end{aligned}$$ Adapting Edwards' original proof, we get the following version of the theorem. **Theorem 6** (Edwards' theorem on $H$-admissible cones). *Let $X$ be compact, $H \subset C(X)$ be a subspace total outside its zeros, and suppose that $\mathcal{F}$ is $H$-admissible. Fix $x\in X$ and assume that there exists $C_x>0$ such that for each element $h \in H$, there exists $u\in \mathcal{F}$ with $u\leq h$ and $$u(x) \geq -C_x|h|_\infty.$$ Then for each non-decreasing sequence $H \ni h_i \nearrow h_\infty$, we have $S_x(h_\infty)=I_x(h_\infty)$.* *Proof.* We first consider the case when $h_\infty=h \in H$. Since there exists $u\in\mathcal{F}$ with $u \leq h$ and $u(x)\geq -C_x|h|_\infty$, we have $$\infty > h(x) \geq S_x(h) \geq u(x) > -\infty,$$ so $-S_x$ defines a sublinear map $H\to\mathbb{R}$. On the subspace generated by $h$, we now define a linear map $$-A(ch)=-cS_x(h), \quad c \in \mathbb{R},$$ which satisfies $-A\leq-S_x$ on the whole span of $h$. Using the Hahn--Banach theorem, we first extend $-A$ to a linear map $-A'$ on $H$ such that $$-A' \leq -S_x \text{ on }H,$$ and then extend $A'$ to a continuous linear functional $A''$ on $C(X)$ such that $$A''(\varphi) \leq C_x|\varphi|_\infty \text{ for all }\varphi \in C(X).$$ Since $X$ is compact, the Riesz representation theorem implies that $A''$ may be represented by a signed measure $\mu_h = \mu_h^+ - \mu_h^-$.
Using the fact that $H$ is total outside its zeros, we now note that $A''|_{E(C_c(X \setminus Z_{H}))}$ is positive, which implies that $\mu_h^-$ is supported on the closed set of common zeros of $H$. Hence, for any $g \in H$, $\mu_h^+$ satisfies $$\int_X g\, d\mu_h^+ = \int_X g\, d\mu_h \geq S_x(g),$$ with equality if $g=h$. Since $\mathcal{F}$ is $H$-admissible, the monotone convergence theorem (applied to a sequence $H \ni h_i \searrow u$) implies that $$\int_X u\, d\mu_h^+ \geq u(x)$$ for any $u \in \mathcal{F}$, and so $\mu_h^+$ is a Jensen measure with barycenter $x$. We conclude that $S_x(h)=I_x(h)$. For general $h_\infty$, associate to each $h_i$ a signed measure $\mu_{h_i}$ as above, and note that $$C_x \geq \int_X d\mu_{h_i} \geq -C_x.$$ By the Banach--Alaoglu theorem, we may thus extract a subsequence $\{\mu_{h_{i_n}}\}$ converging to a signed measure $\mu_{h_\infty}$ in the weak$^*$-topology. Fixing $k\in \mathbb{N}$, we have $$\begin{aligned} I_x(h_\infty) &\geq S_x(h_\infty) \geq \lim_{n\to \infty} S_x(h_{i_n}) = \lim_{n\to \infty} I_x(h_{i_n}) \\ &=\lim_{n \to \infty}\int h_{i_n} \, d\mu_{h_{i_n}}^+ =\lim_{n \to \infty}\int h_{i_n} \, d\mu_{h_{i_n}} \\ &\geq \lim_{n\to \infty}\int h_k \, d\mu_{h_{i_n}} = \int h_k \, d\mu_{h_\infty}=\int h_k \, d\mu_{h_\infty}^+, \end{aligned}$$ and letting $k \to \infty$ and applying the monotone convergence theorem, we get $S_x(h_\infty)=I_x(h_\infty)$. ◻ *Remark 2*. In Section 5, we will introduce another layer of generality by replacing point evaluations with integration against any positive Radon measure. Directly applying Theorem [Theorem 6](#edwards){reference-type="ref" reference="edwards"}, we get the following lemma. **Lemma 7**. *Suppose that $g: \bar \Omega \to \mathbb{R}$ is a lower semicontinuous function such that $g(\zeta)=0$ for all $\zeta \in \partial\Omega$.
Then $$\begin{aligned} \MoveEqLeft\sup\big\{u(z) \mathrel{;}u\in \mathcal{E}_0(\Omega) \cap C(\bar \Omega), u \leq g\big\} \\ &= \sup\big\{u(z) \mathrel{;}u\in \mathcal{PSH}^0(\bar \Omega) , u \leq g\big\} \\ &= \inf\Big\{\int g \, d\mu \mathrel{;}\mu \in J_z^- \Big\}. \end{aligned}$$* *Proof.* We first check that the conditions required for Theorem [Theorem 6](#edwards){reference-type="ref" reference="edwards"} are fulfilled with $$X= \bar \Omega, \quad H= \{ h \in C(\bar \Omega) \mathrel{;}h\big|_{\partial\Omega} =0 \}, \quad \mathcal{F}\in \{\mathcal{PSH}^0(\bar \Omega), \mathcal{E}_0(\Omega) \cap C(\bar \Omega)\}.$$ Clearly, $H$ is total outside its zeros, and $\mathcal{F}$ is $H$-admissible. For each $h \in H$, we may find $u\in \mathcal{F}$ such that $u\leq h$ and $u(z) \geq -|h|_\infty$. This follows for $\mathcal{F}=\mathcal{E}_0(\Omega) \cap C(\bar \Omega)$ since $$u \in \mathcal{E}_0(\Omega) \cap C(\bar \Omega) \implies \max \{u, -|h|_\infty\} \in \mathcal{E}_0(\Omega) \cap C(\bar \Omega)$$ by [@Cegrell Lemma 3.4]. It remains to show that $g$ may be approximated from below by elements in $H$. Pick $\varphi \in H$ such that $\varphi \leq g$ (such a function exists by the Katětov--Tong insertion theorem [@tong]) and let $g_i \in C(\bar \Omega)$ approximate $g$ pointwise from below. Then $\varphi_i :=\max\{\varphi, g_i\} \in H$, and $\varphi_i \nearrow g$ on $\bar \Omega$. ◻ *Proof of Theorem [Theorem 3](#thm:edwhypcon2){reference-type="ref" reference="thm:edwhypcon2"}.* Pick any point $z_0$ outside the pluripolar hull of $P$ and let $u \in \mathcal{PSH}^0(\bar \Omega)$ satisfy $u \big|_P = -\infty$, with $u(z_0)>-\infty$. Then for any $\mu \in J_{z_0}^-$, $$\int_P u \, d\mu \geq \int_{\bar \Omega} u \, d\mu \geq u(z_0) > -\infty,$$ and so $\mu(P)=0$. We conclude that $I_{z_0}(g)= I_{z_0}(g_*)$.
On the other hand, it is clear that $S_{z_0}(g)\leq I_{z_0}(g)$, which together with Lemma [Lemma 7](#thm:edwhypcon){reference-type="ref" reference="thm:edwhypcon"} implies that $$S_{z_0}(g)\leq I_{z_0}(g) = I_{z_0}(g_*) = S_{z_0}(g_*) \leq S_{z_0}(g)$$ for both cones. Since $z_0$ was an arbitrary point outside the pluripolar hull of $P$, this proves the first statement of the theorem. Now suppose that $g$ is upper semicontinuous, and that $P$ is a closed, complete pluripolar set. Then, we have an equality $$\sup\big\{u(x) \mathrel{;}u\in \mathcal{E}_0(\Omega) \cap C(\bar \Omega), u \leq g\big\} = \sup\big\{u(x) \mathrel{;}u\in \mathcal{PSH}^0(\bar \Omega) , u \leq g\big\}$$ between a lower semicontinuous function and an upper semicontinuous function on an open set, which implies that the envelopes must be continuous on that set. ◻ In certain situations, it is possible to guarantee that even non-pluripolar discontinuities do not propagate; the extremal functions of interior balls in hyperconvex domains are, for example, continuous everywhere [@kerzman]. The proof of this fact can be extended to the following gluing construction. **Theorem 8**. *Let $\Omega$ be a B-regular domain, and let $\Omega' \subset \Omega$ be a relatively compact subdomain with the property that for each $p \in \partial \Omega'$, there exists a ball $B \subset \Omega'$ with $p \in \partial B$. Let $h \in C (\bar \Omega)$, $u \in \mathcal{PSH}(\Omega)$ be such that $u \leq h$, and assume that $u$ is continuous on $\bar \Omega'$. Then the Perron--Bremermann envelope $P(h_{u, \Omega'})$ of $$h_{u, \Omega'}(z) := \begin{cases} u(z) \text{ if }z\in \bar{\Omega'}, \\ h(z) \text{ if }z\in \Omega \setminus \bar{\Omega'} \end{cases}$$ is continuous.* *Proof.* Since $h_{u, \Omega'}$ is upper semicontinuous, $P(h_{u, \Omega'})$ is a plurisubharmonic function.
Notice that $$\begin{aligned} P(h_{u, \Omega'})\big|_{\partial\Omega} &= h, \\ P(h_{u, \Omega'})\big|_{\Omega'} &= u, \end{aligned}$$ and that $P(h_{u, \Omega'})_*\big|_{\bar{\Omega'}}= u$ since $P(h_{u, \Omega'})\geq u$ and $u$ is continuous on $\bar \Omega'$. In order to show that $P(h_{u, \Omega'})$ is continuous outside $\Omega'$ as well, we will now show that $$P(h_{u, \Omega'})^*\big|_{\bar{\Omega'}}= u.$$ Fix $\varepsilon>0$ and pick $p \in \partial\Omega'$ and two concentric balls $B(\zeta, r) \subset B(\zeta, R)$, with $B(\zeta, r) \subset \Omega'$ and $B(\zeta, R) \subset \Omega$, such that $$p\in \partial B(\zeta, r), \quad |u(z) -u(p)| < \varepsilon \text{ on }\bar B(\zeta, r).$$ Then the Perron--Bremermann envelope $$\varphi_{r,R}(z) := \sup \{v(z) \mathrel{;}v \in \mathcal{PSH}^-(B(0, R)), v \big|_{B(0,r)} \leq u(p)+\varepsilon - \max_{\partial B(\zeta,R)} h\}$$ is toric, and thus continuous. Hence $$\limsup_{z \to p} P(h_{u, \Omega'}) \leq \limsup_{z \to p} \varphi_{r,R}(z-\zeta) + \max_{\partial B(\zeta,R)} h \leq u(p) + \varepsilon.$$ Letting $\varepsilon \to 0$, we conclude that $P(h_{u, \Omega'})$ is continuous on the boundary of $\Omega \setminus \bar{\Omega'}$, and by a theorem of Walsh [@walsh], continuous in the interior as well. ◻ *Remark 3*. If $h\big|_{\partial\Omega} =0$, it is enough to assume that $\Omega$ is hyperconvex. Finally, Theorem [Theorem 6](#edwards){reference-type="ref" reference="edwards"} implies the following statement for envelopes of a class of unbounded functions. **Theorem 9**. *Let $\Omega$ be a hyperconvex domain, and let $g: \bar \Omega \to [-\infty, 0]$ be such that $g^*=g_*$ on $\bar \Omega$, $g = 0$ on $\partial \Omega$. Furthermore, assume that there exists $v \in \mathcal{PSH}^-(\Omega)$ such that $$\begin{aligned} & g(z_0)=-\infty \implies v(z_0)=-\infty,\quad\text{and} \\ & \frac{v(z)}{g(z)} \to \infty \quad\text{as}\quad g(z) \to -\infty.
\end{aligned}$$ Then the envelope $$\sup \big\{u(z) \mathrel{;}u \in \mathcal{PSH}^-(\Omega), u^* \le g \big\}$$ is continuous on $\{z \in \bar\Omega \mathrel{;}v_*(z) \neq -\infty\}$.* *Proof.* The proof is completely analogous to the case when the domain is B-regular. See [@nilsson Section 4] for more details. ◻ # Plurisubharmonic orderings of measures Let us begin this section by giving two alternative orderings induced by plurisubharmonic functions. **Definition 10**. Let $\Omega \subset \mathbb{C}^{n}$ be a bounded domain, and let $\mathcal{M}(\bar\Omega)$ denote the set of finite positive Radon measures supported on $\bar\Omega$. Let $\mu, \nu \in \mathcal{M}(\bar\Omega)$. We say that $\mu \mathop{\mathrm{\le_\mathrm{c}}}\nu$ if $$\int u\,\mathrm{d}\mu \ge \int u\,\mathrm{d}\nu \quad\text{for all $u \in \mathcal{PSH}(\Omega) \cap C(\bar\Omega)$}.$$ Note the reversal of the inequality. We choose this notation to keep comparisons with Bengtson's ordering easier to formulate. Also note that we allow the measures to have positive mass on $\partial\Omega$. Similarly, we say that $\mu \mathop{\mathrm{\le_*}}\nu$ if $$\int u^*\,\mathrm{d}\mu \ge \int u^*\,\mathrm{d}\nu \quad\text{for all upper bounded $u \in \mathcal{PSH}(\Omega)$}.$$ By requiring the defining inequality to hold not only for negative plurisubharmonic functions, we ensure that comparable measures automatically have the same total mass. **Proposition 11**. *If $\mu \mathop{\mathrm{\le_\mathrm{c}}}\nu$ (or $\mu \mathop{\mathrm{\le_*}}\nu$), then $\mu(\bar\Omega) = \nu(\bar\Omega)$.* *Proof.* Apply the definition to the plurisubharmonic functions $u(z) \equiv 1$ and $u(z) \equiv -1$, respectively. ◻ Furthermore, for finite measures on $\Omega$ with equal total mass, Bengtson's ordering coincides with our ordering. **Proposition 12**.
*If $\mu \mathop{\mathrm{\le_\mathrm{psh}}}\nu$ (in the sense of Bengtson) and $\mu(\Omega) = \nu(\Omega) < \infty$, then $\mu \mathop{\mathrm{\le_*}}\nu$, and hence $\mu \mathop{\mathrm{\le_\mathrm{c}}}\nu$.* *Proof.* Assume that $\mu \mathop{\mathrm{\le_\mathrm{psh}}}\nu$. Then, in particular, the measures are carried by $\Omega$. By Bengtson [@Bengtson Prop. 3.2c], it follows that $$\int u\,d\mu \ge \int u\,d\nu$$ for all negative plurisubharmonic functions $u$, not just functions in $\mathcal{E}_0$. If $v \in \mathcal{PSH}(\Omega)$ is upper bounded, then $u = v-M < 0$ for $M$ large enough, and since $\mu(\Omega) = \nu(\Omega)$, it follows that $\int v\,d\mu \ge \int v\,d\nu$. ◻ Let us also remark that there is no difference between the orderings $\mathop{\mathrm{\le_\mathrm{c}}}$ and $\mathop{\mathrm{\le_*}}$ on B-regular domains. **Proposition 13**. *Assume that $\Omega$ is B-regular, and that $\mu$ and $\nu$ are positive Radon measures on $\bar\Omega$. Then $\mu \mathop{\mathrm{\le_\mathrm{c}}}\nu$ if and only if $\mu \mathop{\mathrm{\le_*}} \nu$.* *Proof.* Clearly, if $\mu \mathop{\mathrm{\le_*}}\nu$, then $\mu \mathop{\mathrm{\le_\mathrm{c}}}\nu$. Conversely, assume that $\mu \mathop{\mathrm{\le_\mathrm{c}}}\nu$ and let $u \in \mathcal{PSH}(\Omega)$ be an upper bounded plurisubharmonic function. Then, by an approximation theorem in [@Wikstrom], there is a decreasing sequence $u_j \in \mathcal{PSH}(\Omega) \cap C(\bar\Omega)$, such that $u_j \searrow u^*$ on $\bar\Omega$. Hence, by monotone convergence $$\int u^*\,d\nu = \lim_{j \to \infty} \int u_j\,d\nu \le \lim_{j \to \infty} \int u_j\,d\mu = \int u^*\,d\mu,$$ and consequently $\mu \mathop{\mathrm{\le_*}}\nu$. ◻ On B-regular domains, two measures supported on the boundary are comparable only if they are equal. **Proposition 14**. *Let $\Omega$ be a B-regular domain.
If $\mu \mathop{\mathrm{\le_\mathrm{c}}}\nu$ and both measures are supported on $\partial\Omega$, then $\mu = \nu$.* *Proof.* Let $\phi \in C(\partial\Omega)$. Since $\Omega$ is B-regular, $\phi$ extends to an element of $\mathcal{PSH}(\Omega) \cap C(\bar\Omega)$, and hence $$\int \phi\,d\nu \le \int \phi\,d\mu$$ for every $\phi \in C(\partial\Omega)$. In other words, $\sigma = \mu - \nu$ defines a positive measure with support on $\partial\Omega$. By Proposition [Proposition 11](#prop:equal_mass){reference-type="ref" reference="prop:equal_mass"}, $\sigma(\partial\Omega) = 0$, so $\mu = \nu$. ◻ *Remark 4*. The same is true for $\mathop{\mathrm{\le_*}}$, even if $\Omega$ is not B-regular, if we define the ordering via the cone $\mathcal{F} = \mathcal{USC}(\bar\Omega) \cap \mathcal{PSH}(\Omega)$. Every continuous function on $\partial\Omega$ extends to an element in $\mathcal{F}$. We also note that if $\Omega$ is hyperconvex, but not B-regular, the conclusion in Proposition [Proposition 14](#prop:boundary_measures){reference-type="ref" reference="prop:boundary_measures"} does not hold. In that case, there is a point $p \in \partial\Omega$ and a Jensen measure $\mu \in J_p^c$ with $\mathop{\mathrm{supp}}\mu \subset \partial\Omega$, $\mu \neq \delta_p$, see [@Wikstrom] for details. Then $\mu \mathop{\mathrm{\le_\mathrm{c}}}\delta_p$. # Hulls and positive currents {#sec:hulls} In a seminal paper by Duval and Sibony [@Duval:95], it is shown that polynomial hulls can be described by the support of positive currents of bidimension (1,1). In fact, several of their results may be localized to study hulls with respect to other classes of plurisubharmonic functions. (Recall that the polynomial hull is the same as the hull with respect to $\mathcal{PSH}(\mathbb{C}^{n})$.) To be more precise, let $\Omega$ be a bounded domain in $\mathbb{C}^{n}$ and let $\mathcal{E}' = \mathcal{E}'(\Omega)$ be the space of distributions with compact support in $\Omega$.
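As a standard illustration of the hulls just recalled (our example, not from [@Duval:95]), the polynomial hull of the unit sphere is the closed unit ball:

```latex
% For z \in \mathbb{B}^{n} and u \in \mathcal{PSH}(\mathbb{C}^{n}),
% restrict u to a complex line \ell through z; the restriction is
% subharmonic, and \ell \cap \bar{\mathbb{B}}^{n} is a closed disc whose
% boundary circle lies on \partial\mathbb{B}^{n}, so the maximum
% principle gives
u(z) \;\le\; \max_{\ell \cap \partial\mathbb{B}^{n}} u
     \;\le\; \max_{\partial\mathbb{B}^{n}} u .
% Hence \mathbb{B}^{n} is contained in the hull of \partial\mathbb{B}^{n};
% the reverse inclusion follows since the polynomial hull is contained in
% the convex hull, which is \bar{\mathbb{B}}^{n}.
```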
We will also need various notions of plurisubharmonic hulls. **Definition 15**. Let $\Omega \subset \mathbb{C}^{n}$, and let $K \mathop{\mathrm{\subset\!\subset}} \Omega$ be compact. We define the relative plurisubharmonic hull of $K$ in $\Omega$ by $$\label{eq:defhull} \mathop{\mathrm{hull}}(K,\Omega) = \{ z \in \Omega \mathrel{;}u(z) \le \sup_{x \in K} u(x) \text{ for every $u \in \mathcal{PSH}(\Omega)$} \}.$$ In a similar way we define the smooth relative plurisubharmonic hull of $K$ in $\Omega$, denoted by $\mathop{\mathrm{hull}}^\infty(K,\Omega)$ where we only require that the inequality in [\[eq:defhull\]](#eq:defhull){reference-type="eqref" reference="eq:defhull"} holds for functions in $\mathcal{PSH}(\Omega) \cap C^\infty(\Omega)$. *Remark 5*. Note that $\hat K = \mathop{\mathrm{hull}}(K,\mathbb{C}^{n}) = \mathop{\mathrm{hull}}^\infty(K,\mathbb{C}^{n})$. (Here, $\hat K$ denotes the polynomial hull of $K$.) Furthermore, if $\Omega$ is pseudoconvex, then any plurisubharmonic function on $\Omega$ can be written as the limit of a decreasing sequence of smooth plurisubharmonic functions, and from this it follows that $\mathop{\mathrm{hull}}(K,\Omega) = \mathop{\mathrm{hull}}^\infty(K,\Omega)$. Also, if $\Omega$ is a neighbourhood of $\hat K$, then it can be checked that $\hat K = \mathop{\mathrm{hull}}(K,\Omega)$. **Proposition 16**. *Let $\Omega$ be a domain in $\mathbb{C}^{n}$ and assume that $T$ is a positive current of bidimension (1,1) with $\mathop{\mathrm{supp}}T \mathop{\mathrm{\subset\!\subset}}\Omega$. Assume that $\mathop{\mathrm{dd^c}}T$ is negative on $\Omega \setminus K$ for some compact set $K \mathop{\mathrm{\subset\!\subset}}\Omega$. Then $\mathop{\mathrm{supp}}T \subset \mathop{\mathrm{hull}}^\infty(K,\Omega)$.* *Proof.* Assume that $x \in \mathop{\mathrm{supp}}T \setminus \mathop{\mathrm{hull}}^\infty(K,\Omega)$. 
Then by definition of the hull, there is a $\phi \in \mathcal{PSH}(\Omega) \cap C^\infty(\Omega)$ such that $\phi(x) > 0$ and $\phi \le 0$ on a neighbourhood of $K$; replacing $\phi$ by $\phi + \delta(|z|^2 - M)$ for $\delta > 0$ small and $M$ large enough, we may also assume that $\phi$ is strictly plurisubharmonic. Let $\psi = \max \{ \phi, 0 \}$ and define $u = \psi * \chi_\varepsilon$ where $\chi_\varepsilon$ is a standard radial positive smoothing kernel with support in $\mathbb{B}(0;\varepsilon)$. Then $u$ is plurisubharmonic and smooth on $\Omega_\varepsilon= \{ z \in \Omega : \mathop{\mathrm{dist}}(z,\partial\Omega) > \varepsilon\}$. Choose $\varepsilon$ so small that $\mathop{\mathrm{supp}}T \mathop{\mathrm{\subset\!\subset}}\Omega_\varepsilon$ and that $u = 0$ on $K$. Then, $u \ge 0$, $u$ vanishes near $K$ and is strictly plurisubharmonic on a neighbourhood of $x$. Hence $$0 < \langle T, \mathop{\mathrm{dd^c}}u \rangle = \langle \mathop{\mathrm{dd^c}}T, u \rangle \le 0,$$ which is a contradiction. ◻ **Proposition 17**. *Let $\Omega$ be a pseudoconvex domain in $\mathbb{C}^{n}$ and let $\nu \in \mathcal{E}'(\Omega)$. Assume that $\langle \nu,\phi \rangle \ge 0$ for every $\phi \in \mathcal{PSH}(\Omega) \cap C^\infty(\Omega)$. Then there exists a positive current $T$ of bidimension (1,1) such that $$\label{eq:ddc} \mathop{\mathrm{dd^c}}T = \nu.$$* *Proof.* Take an open set $U$ such that $\mathop{\mathrm{hull}}(\mathop{\mathrm{supp}}\nu, \Omega) \mathop{\mathrm{\subset\!\subset}}U \mathop{\mathrm{\subset\!\subset}}\Omega$. Let $\Gamma = \{ \mathop{\mathrm{dd^c}}T : T \ge 0 \text{ of bidimension (1,1)}, \mathop{\mathrm{supp}}T \subset \bar U \}$. Note that $\Gamma$ is a closed cone in $\mathcal{E}'(\Omega)$ (in the weak topology). Assume that [\[eq:ddc\]](#eq:ddc){reference-type="eqref" reference="eq:ddc"} has no solution $T$ with $\mathop{\mathrm{supp}}T \subset \bar U$.
Then by the Hahn--Banach theorem, $\nu$ is separated from the closed cone $\Gamma$, and there is a $\phi \in C^\infty(\Omega)$ and a constant $c$ such that $$\label{eq:ineqc} \langle \nu, \phi \rangle < c \le \langle T, \mathop{\mathrm{dd^c}}\phi \rangle$$ for every $T \ge 0$ of bidimension (1,1) with support in $\bar U$. Since [\[eq:ineqc\]](#eq:ineqc){reference-type="eqref" reference="eq:ineqc"} holds for $T$ replaced by $r T$ for every $r > 0$, we first note that $c \le 0$ (by taking $r$ sufficiently small), and secondly that $\langle T, \mathop{\mathrm{dd^c}}\phi \rangle \ge 0$ for every $T \ge 0$ (by taking $r$ sufficiently large). This implies that $\mathop{\mathrm{dd^c}}\phi \ge 0$ on $U$, i.e. that $\phi$ is plurisubharmonic on $U$. By modifying $\phi$ outside $\mathop{\mathrm{supp}}\nu$ (using the assumption that $\mathop{\mathrm{hull}}(\mathop{\mathrm{supp}}\nu, \Omega) \mathop{\mathrm{\subset\!\subset}}U$) we can ensure that $\phi \in \mathcal{PSH}(\Omega) \cap C^\infty(\Omega)$, but then $\langle \nu, \phi \rangle < 0$, which contradicts the assumption on $\nu$. ◻ **Theorem 18**. *Let $\mu_1$ and $\mu_2$ be compactly supported in $\Omega$. If $\mu_1 \mathop{\mathrm{\le_\mathrm{psh}}}\mu_2$, and $\int \varphi\,d\mu_1 = \int \varphi\,d\mu_2$ for some strictly plurisubharmonic function $\varphi$, then $\mu_1 = \mu_2$.* *Proof.* Define a distribution $\nu$ by $$\langle \nu, \phi \rangle = \int \phi\,d\mu_1 - \int \phi\,d\mu_2$$ for $\phi \in C_0^\infty$. Then $\nu$ is a compactly supported distribution that is non-negative on $\mathcal{PSH}(\Omega) \cap C^\infty(\Omega)$. By Proposition [Proposition 17](#prop:T){reference-type="ref" reference="prop:T"}, we can write $\nu = \mathop{\mathrm{dd^c}}T$ for some positive $T$ with compact support. By assumption $0 = \langle \nu, \varphi \rangle = \langle \mathop{\mathrm{dd^c}}T, \varphi \rangle = \langle T, \mathop{\mathrm{dd^c}}\varphi \rangle$.
Since $\varphi$ is strictly plurisubharmonic, $\mathop{\mathrm{dd^c}}\varphi > 0$ on $\Omega$. This implies that $T = 0$, since $T$ is positive. ◻ # Edwards' theorem for other functionals We have already seen that it is possible to weaken some of the hypotheses in Edwards' original formulation of his duality result. In this section, we will explore another variation. We begin by noting that any cone $\mathcal{F}$ of real-valued functions on $X$ induces an ordering on the measures supported on $X$: **Definition 19**. Let $\mu$ and $\nu$ be positive Radon measures supported on $X$. We say that $\mu \mathop{\mathrm{\le_{\mathcal{F}}}}\nu$ if $$\int u\,d\nu \le \int u\,d\mu$$ for all $u \in \mathcal{F}$. Note that, if $\mathcal{F}$ contains all constant functions, then $\mu \mathop{\mathrm{\le_{\mathcal{F}}}}\nu$ implies that $\mu$ and $\nu$ have the same total mass. This induced ordering allows us to reinterpret the original formulation of Edwards. Specifically, the operator $$Ig = \inf\Big\{ \int g\,d\nu \mathrel{;}\nu \in J^\mathcal{F}_x \Big\},$$ taking the infimum over the Jensen measures in $J^\mathcal{F}_x$, may be viewed as taking the infimum over all measures $\nu$ satisfying $\nu \mathop{\mathrm{\le_{\mathcal{F}}}}\delta_x$. On the other hand, in the definition of $$Sg = \sup\{ u(x) \mathrel{;}u \in \mathcal{F}, u \le g \},$$ $u(x)$ is the point evaluation $u \mapsto \int u\,d\delta_x$. This suggests replacing point evaluation by another positive functional, i.e. by integration against any positive Radon measure $\mu$ supported on $X$, yielding operators $$\begin{aligned} I_\mu g &= \inf \Big\{ \int g\,d\nu \mathrel{;}\nu \mathop{\mathrm{\le_{\mathcal{F}}}}\mu \Big\}, \shortintertext{and} S_\mu g &= \sup \Big\{ \int u\,d\mu \mathrel{;}u \in \mathcal{F}, u\le g\Big\}\end{aligned}$$ for any bounded Borel function $g$ on $X$.
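The elementary half of the duality holds for these operators with no assumptions beyond positivity (a one-line verification we record for convenience): $S_\mu g \le I_\mu g$ for every bounded Borel function $g$.

```latex
% If u \in \mathcal{F} satisfies u \le g, and \nu \le_{\mathcal{F}} \mu
% (so that \int u\,d\mu \le \int u\,d\nu for all u \in \mathcal{F}), then
\int u\,d\mu \;\le\; \int u\,d\nu \;\le\; \int g\,d\nu .
% Taking the supremum over such u and the infimum over such \nu gives
% S_\mu g \le I_\mu g; the content of Edwards-type theorems is the
% reverse inequality.
```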
Note that, in contrast to the usual formulation of Edwards' machinery, this setup will not give us functions $S_\mu$ and $I_\mu$ on $X$ unless we make a choice of functionals $x \mapsto \mu_x$. At the time of writing, we are not aware of any natural such choice other than $x \mapsto \delta_x$. Edwards' theorem continues to hold *mutatis mutandis* for these operators as well. **Theorem 20** (Edwards' theorem for positive functionals). *Let $X$ be a compact metric space, $H \subset C(X)$ be a subspace total outside its zeros, and suppose that $\mathcal{F}$ is $H$-admissible. Let $\mu$ be a positive Radon measure supported on $X$ and assume that there exists $C_\mu>0$ such that for each element $h \in H$, there exists $u\in \mathcal{F}$ with $u\leq h$ and $$\int u\,d\mu \geq -C_\mu |h|_\infty.$$ Then for each non-decreasing sequence $H \ni h_i \nearrow h_\infty$, we have $S_\mu(h_\infty)=I_\mu(h_\infty)$.* **Corollary 21**. *Let $X$ be a compact metric space, suppose that $\mathcal{F}$ contains all constant functions, and let $\mu$ be a positive Radon measure. If $g$ is lower semicontinuous on $X$, then $S_\mu g = I_\mu g$.* This version of Edwards' theorem can be used to understand minimal elements in the plurisubharmonic orderings of measures. **Definition 22**. Let $\mu$ be a positive Radon measure on $\bar\Omega$. We say that $\mu$ is *minimal* with respect to the ordering $\mathop{\mathrm{\le_\mathrm{c}}}$ induced by $\mathcal{PSH}(\Omega) \cap C(\bar\Omega)$ if $$\nu \mathop{\mathrm{\le_\mathrm{c}}}\mu \quad\to\quad \nu = \mu,$$ and similarly for the ordering $\mathop{\mathrm{\le_*}}$. Similarly, we say that $\mu$ is *maximal* if $\mu \mathop{\mathrm{\le_\mathrm{c}}}\nu$ implies that $\mu = \nu$. Bengtson [@Bengtson] studies the maximal elements in $\mathop{\mathrm{\le_\mathrm{psh}}}$, and by using Corollary [Corollary 21](#edwards:ordering){reference-type="ref" reference="edwards:ordering"}, we can characterize the minimal elements in our plurisubharmonic orderings of measures.
**Theorem 23**. *Let $\Omega \subset \mathbb{C}^{n}$ be a B-regular domain. A positive Radon measure $\mu$ on $\bar\Omega$ is minimal with respect to $\mathop{\mathrm{\le_\mathrm{c}}}$ if and only if $\mathop{\mathrm{supp}}\mu \subset \partial\Omega$.* *Proof.* Assume that $\mu$ is minimal with respect to $\mathop{\mathrm{\le_\mathrm{c}}}$. Let $g \in C(\bar\Omega)$. By Corollary [Corollary 21](#edwards:ordering){reference-type="ref" reference="edwards:ordering"}, $$S_\mu g = \sup \Big\{ \int u\,d\mu \mathrel{;}u \in \mathcal{PSH}(\Omega) \cap C(\bar\Omega), u\le g\Big\} = I_\mu g = \int g\,d\mu.$$ On the other hand, $$S_\mu g = \int (Pg)\,d\mu,$$ where $Pg(z) = \sup \{ u(z) : u \in \mathcal{PSH}(\Omega) \cap C(\bar\Omega), u \le g \}$ is the Perron--Bremermann envelope of $g$. (Since $g \in C(\bar\Omega)$, it follows that $(Pg)^* = Pg$ is plurisubharmonic and continuous on $\Omega$, and from the characterization of Jensen measures in [@Wikstrom], we see that it makes no difference whether the supremum defining $Pg$ is taken over $\mathcal{PSH}(\Omega)$ or $\mathcal{PSH}(\Omega) \cap C(\bar\Omega)$.) Hence, for every $g \in C(\bar\Omega)$, $\int (Pg)\,d\mu = \int g\,d\mu$, but since $Pg \le g$, we can conclude that $Pg = g$ almost everywhere $[\mu]$. By continuity, $Pg = g$ on $\mathop{\mathrm{supp}}\mu$. In particular, we can choose a continuous function $g \le 0$, such that $g^{-1}(0) = \mathop{\mathrm{supp}}\mu$. Then $Pg \le 0$, $Pg = g$ on $\mathop{\mathrm{supp}}\mu$ and by the maximum principle, $\mathop{\mathrm{supp}}\mu \subset \partial\Omega$. Conversely, assume that $\mathop{\mathrm{supp}}\mu \subset \partial \Omega$ and $\nu \mathop{\mathrm{\le_\mathrm{c}}} \mu$. Let $\phi \in \mathcal{PSH}(\Omega) \cap C(\bar\Omega)$ be a bounded plurisubharmonic exhaustion function for $\Omega$, i.e. $\phi|_{\partial\Omega} = 0$ and $\phi < 0$ on $\Omega$. 
Then $$0 \ge \int \phi\,d\nu \ge \int \phi\,d\mu = 0.$$ In particular $\phi = 0$ almost everywhere $[\nu]$, so $\mathop{\mathrm{supp}}\nu \subset \partial\Omega$. Proposition [Proposition 11](#prop:equal_mass){reference-type="ref" reference="prop:equal_mass"} shows that $\nu = \mu$, i.e. $\mu$ is minimal. ◻ In Bengtson's ordering, there are no minimal measures by a definition similar to the one above, since $\frac{1}{2}\mu \mathop{\mathrm{\le_\mathrm{psh}}}\mu$ holds for any positive Radon measure $\mu$. One way to circumvent this issue is to instead consider the weaker minimality property $$\nu \mathop{\mathrm{\le_\mathrm{psh}}}\mu \quad\to\quad \nu = k_\nu\mu$$ for some $k_\nu \in [0,1]$. Even with this weaker definition, there are no minimal elements. **Theorem 24**. *Let $\Omega \subset \mathbb{C}^{n}$ be a hyperconvex domain, and suppose that $\mu$ is a positive Radon measure on $\Omega$ satisfying the minimality property above. Then $\mu(\Omega)=0$.* *Proof.* Pick two precompact, disjoint open sets $A, B \subset \Omega$ and let $g \in C(\bar \Omega)$ satisfy $$g\leq 0, \qquad g\big|_{A}=g\big|_{\partial\Omega}=0, \qquad g\big|_{B} = -1.$$ By Theorem [Theorem 3](#thm:edwhypcon2){reference-type="ref" reference="thm:edwhypcon2"}, $Pg=\sup\big\{u(x) \mathrel{;}u\in \mathcal{E}_0(\Omega), u \leq g\big\}$ is plurisubharmonic and continuous since $$\begin{aligned} Pg &\geq \sup\big\{u(x) \mathrel{;}u\in \mathcal{E}_0(\Omega) \cap C(\bar \Omega), u \leq g\big\} \\ &= \sup\big\{u(x) \mathrel{;}u\in \mathcal{PSH}^0(\Omega) , u \leq g\big\} \\ &\geq Pg. \end{aligned}$$ Applying Theorem [Theorem 20](#edwards2){reference-type="ref" reference="edwards2"}, we have $$\int (Pg)\,d\mu=S_\mu g= I_\mu g = \inf_{k_\nu\in[0,1]}\Big\{k_\nu \int g\,d\mu\Big\} = \int g\,d\mu,$$ and as in the proof of Theorem [Theorem 23](#thm:minimal){reference-type="ref" reference="thm:minimal"}, it follows that $Pg = g$ on $\mathop{\mathrm{supp}}\mu$. 
By the maximum principle, $P(g)\big|_A < 0$, which implies that $\mu(A)=0$. Since $A$ was arbitrary, the conclusion of the theorem follows. ◻ Bengtson, B., *An ordering of measures induced by plurisubharmonic functions*, Ann. Polon. Math. **119** (2017), 221--237. Cegrell, U., *Pluricomplex energy*, Acta Math., **180** (1998), 187--217. Cegrell, U., *The general definition of the complex Monge-Ampère operator*, Ann. Inst. Fourier (Grenoble), **54** (2004), 159--179. Cole, B. J., Ransford, T. J., *Subharmonicity without upper semicontinuity*, J. Funct. Anal. **147** (1997), 420--442. Duval, J., Sibony, N., *Polynomial convexity, rational convexity, and currents*, Duke Math. J., **79** (1995), 487--513. Edwards, D. A., *Choquet boundary theory for certain spaces of lower semicontinuous functions*, International symposium on function algebras (Tulane Univ., 1965), Function algebras, Scott-Foresman, Chicago (1966), 300--309. Kerzman, N., Rosay, J.-P., *Fonctions plurisousharmoniques d'exhaustion bornées et domaines taut*, Math. Ann. **257** (1981), 171--184. Nilsson, M., Wikström, F., *Quasibounded plurisubharmonic functions*, Internat. J. Math. **32**, No. 9 (2021). Nilsson, M., *Continuity of envelopes of unbounded plurisubharmonic functions*, Math. Z. **301** (2022), 3959--3971. Poletsky, E. A., *Holomorphic currents*, Indiana Univ. Math. J. **42** (1993), 85--144. Tong, H., *Some characterizations of normal and perfectly normal spaces*, Duke Math. J. **19**, No. 2 (1952), 289--292. Walsh, J. B., *Continuity of envelopes of plurisubharmonic functions*, J. Math. Mech. **18** (1968/69), 143--148. Wikström, F., *Jensen measures and boundary values of plurisubharmonic functions*, Ark. Mat., **39** (2001), 181--200.
arxiv_math
{ "id": "2309.12762", "title": "Variations on a theorem by Edwards", "authors": "M{\\aa}rten Nilsson and Frank Wikstr\\\"om", "categories": "math.CV", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | Given a two-sided shift space on a finite alphabet and a continuous potential function, we give conditions under which an equilibrium measure can be described using a construction analogous to Hausdorff measure that goes back to the work of Bowen. This construction was previously applied to smooth uniformly and partially hyperbolic systems by the first author, Pesin, and Zelerowicz. Our results here apply to all subshifts of finite type and Hölder continuous potentials, but extend beyond this setting, and we also apply them to shift spaces with synchronizing words. address: - Dept. of Mathematics, University of Houston, Houston, TX 77204 - Dept. of Mathematics, University of Houston, Houston, TX 77204 author: - Vaughn Climenhaga - Jason Day bibliography: - es-formula-ref.bib title: Equilibrium measures for two-sided shift spaces via dimension theory --- [^1] # Introduction and main results ## Background In the study of hyperbolic dynamical systems, thermodynamic formalism allows us to identify certain *equilibrium* measures whose ergodic and statistical properties provide insight into the underlying dynamics. For Anosov and Axiom A systems equipped with Hölder continuous potential functions, and also for subshifts of finite type, it is known that the equilibrium measure is unique and has a product structure in terms of measures along stable and unstable leaves that satisfy certain scaling properties given by the potential function [@gM70; @nH94; @Lep]. Moreover, it is possible to describe these leaf measures as analogues of Hausdorff measure, where the "refining" in the definition of Hausdorff measure is done dynamically rather than geometrically. 
The idea of treating topological entropy and pressure as analogues of Hausdorff dimension goes back to Bowen [@rB73] and to Pesin and Pitskel' [@PP84]; the idea of constructing leaf measures of the measure of maximal entropy via a Hausdorff measure construction first appeared in [@uH89; @bH89]; the general theory was developed by the first author, Pesin, and Zelerowicz [@CPZ; @CPZ2; @direct-srb]. In this paper, we take some steps toward the setting of non-uniform hyperbolicity, or hyperbolicity with singularities, by considering two-sided shift spaces that are not necessarily subshifts of finite type. We give general conditions under which this dimension theoretic construction of leaf measures can still be used to construct an equilibrium measure with local product structure. Using these conditions, we obtain the following application (see §§[1.2](#sec:setting){reference-type="ref" reference="sec:setting"}--[1.4](#sec:classes){reference-type="ref" reference="sec:classes"} for precise definitions). **Theorem 1**. *Let $X$ be a two-sided shift space on a finite alphabet with the specification property, and let $\varphi\colon X\to\mathbb{R}$ be a continuous potential function with the Walters property. Then the unique equilibrium measure $\mu$ for $(X,\varphi)$ has local product structure in the sense of §[1.3](#sec:prod){reference-type="ref" reference="sec:prod"} below.* Theorem [Theorem 1](#thm:spec){reference-type="ref" reference="thm:spec"} follows from Theorem [Theorem 22](#thm:sync){reference-type="ref" reference="thm:sync"} in §[2.2](#sec:synchronizing-words){reference-type="ref" reference="sec:synchronizing-words"}, which applies to the broader class of shift spaces with synchronizing words satisfying a certain summability condition [\[eqn:sync-assumption\]](#eqn:sync-assumption){reference-type="eqref" reference="eqn:sync-assumption"}. 
In a forthcoming paper [@CD-billiards], we will explore applications of our general results to the measure of maximal entropy for dispersing billiards that was recently studied by Baladi and Demers [@BD20]. *Remark 1*. The conclusion of Theorem [Theorem 1](#thm:spec){reference-type="ref" reference="thm:spec"} can be deduced from results in the literature when $X$ is a mixing subshift of finite type and $\varphi$ is Hölder continuous [@Bow08; @nH94; @Lep] or at least has the (weaker) Walters property [@pW78], and also when $X$ has specification and $\varphi$ is Hölder continuous [@vC18]. The novelty here is the ability to simultaneously weaken SFT to specification, and Hölder to Walters, as well as to provide a framework for going beyond specification (see §[2.2](#sec:synchronizing-words){reference-type="ref" reference="sec:synchronizing-words"}). Uniqueness was already known in the setting of Theorem [Theorem 1](#thm:spec){reference-type="ref" reference="thm:spec"} [@rB745], but not local product structure. It is also worth mentioning [@BO22], which establishes a certain local product structure property for equilibrium states of non-uniformly hyperbolic diffeomorphisms with potentials satisfying a "Grassmann--Hölder" continuity condition. The main general results of this paper are the following. - §[1.5](#sec:build-leafs){reference-type="ref" reference="sec:build-leafs"}, [\[eqn:shift-mxu\]](#eqn:shift-mxu){reference-type="eqref" reference="eqn:shift-mxu"} and [\[eqn:shift-mxs\]](#eqn:shift-mxs){reference-type="eqref" reference="eqn:shift-mxs"}: Construction of the leaf measures $m^\mathrm{u}_x$ and $m^\mathrm{s}_x$. - §[1.6](#sec:finite){reference-type="ref" reference="sec:finite"}, Theorem [Theorem 5](#thm:finite){reference-type="ref" reference="thm:finite"}: If we have uniform bounds on partition sums, then the leaf measures are positive and finite. 
- §[1.7](#sec:dynamics){reference-type="ref" reference="sec:dynamics"}, Theorem [Theorem 6](#thm:dynamics){reference-type="ref" reference="thm:dynamics"}: The leaf measures scale by $e^{-\varphi+ P}$ under the action of $\sigma$ and $\sigma^{-1}$. - §[1.8](#sec:holonomies){reference-type="ref" reference="sec:holonomies"}, Theorem [Theorem 14](#thm:shift-cts){reference-type="ref" reference="thm:shift-cts"}: Given a family of rectangles on which the leaf measures are positive and vary uniformly continuously with respect to holonomies, we obtain a formula for the Radon--Nikodym derivative of the holonomy map.[^2] - §[1.9](#sec:product-measure){reference-type="ref" reference="sec:product-measure"}, Theorems [Theorem 16](#thm:q-indep){reference-type="ref" reference="thm:q-indep"}, [Theorem 17](#thm:return-map){reference-type="ref" reference="thm:return-map"}, and [Theorem 18](#thm:push-product){reference-type="ref" reference="thm:push-product"}: If $R$ is a finite union of rectangles as in the previous item, the leaf measures can be used to build a product measure $\lambda$ on $R$ that induces an invariant measure $\mu$ on $X$ with local product structure; if the return time to $R$ is integrable (in $\lambda$), then the normalization of $\mu$ is an equilibrium measure. - §[1.10](#sec:gibbs){reference-type="ref" reference="sec:gibbs"}, Theorem [Theorem 19](#thm:gibbs){reference-type="ref" reference="thm:gibbs"}: The leaf measures satisfy Gibbs-type estimates with constants given in terms of partition sum bounds; similar bounds hold for the measure $\mu$ constructed above. The proofs of the general results are contained in §§[3](#sec:shift-mxu-pf){reference-type="ref" reference="sec:shift-mxu-pf"}--[6](#sec:es-two-sided){reference-type="ref" reference="sec:es-two-sided"}. 
## Description of setting {#sec:setting} Let $A$ be a finite set (the *alphabet*), and equip the set $A^\mathbb{Z}$ of all bi-infinite sequences over $A$ with the metric $d(x,y) = 2^{-\min \{|n| : x_n \neq y_n\}}$. The *shift map* $\sigma \colon A^\mathbb{Z}\to A^\mathbb{Z}$ is defined by $(\sigma x)_n = x_{n+1}$. A *(two-sided) shift space* is a closed $\sigma$-invariant set $X\subset A^\mathbb{Z}$. Given such a shift space $X$, we will study the thermodynamic formalism of the homeomorphism $\sigma \colon X\to X$ equipped with a continuous *potential function* $\varphi\colon X\to \mathbb{R}$. To describe this, we first need some more terminology and notation. A *word* over $A$ is a finite string of symbols $w\in A^n$ for some $n$; we write $|w| = n$ for the *length* of $w$. (The case $n=0$ gives the *empty word*.) Given $x\in A^\mathbb{Z}$ and $i,j\in \mathbb{Z}$, we write $x_{[i,j)}$ for the word $x_i x_{i+1} \cdots x_{j-1}$, and similarly for $(i,j]$, $(i,j)$, and $[i,j]$. Given a shift space $X$ and a word $w\in A^n$, the *(forward) cylinder* of $w$ is $$[w]^+ := \{x\in X : x_{[0,n)} = w\},$$ and the *language* of $X$ is $$\mathcal{L}:= \bigcup_{n=0}^\infty \mathcal{L}_n, \quad\text{where } \mathcal{L}_n := \{w\in A^n : [w]^+ \neq \emptyset \}.$$ Given a potential function $\varphi\colon X\to \mathbb{R}$, we assign a weight to each word $w\in \mathcal{L}$ by $$\label{eqn:Phi} \Phi(w) := \sup_{x\in [w]^+} S_n\varphi(x), \quad\text{where } S_n\varphi(x) := \sum_{k=0}^{n-1} \varphi(\sigma^k x)$$ The *$n$th partition sum* associated to $(X,\sigma,\varphi)$ is $$\label{eqn:Lambda} \Lambda_n := \sum_{w\in \mathcal{L}_n} e^{\Phi(w)}.$$ Given $m,n\in \mathbb{N}$ one quickly sees that $$\Lambda_{m+n} = \sum_{u\in \mathcal{L}_m} \sum_{\substack{v\in \mathcal{L}_n \\ uv\in \mathcal{L}_{m+n}}} e^{\Phi(uv)} \leq \sum_{u\in \mathcal{L}_m} \sum_{\substack{v\in \mathcal{L}_n \\ uv\in \mathcal{L}_{m+n}}} e^{\Phi(u) + \Phi(v)} \leq \Lambda_m \Lambda_n,$$ and so Fekete's lemma 
guarantees that the following limit exists: $$\label{eqn:P} P := \lim_{n\to\infty} \frac 1n \log \Lambda_n.$$ The number $P = P(X,\sigma,\varphi)$ is the *topological pressure*. Writing $M_\sigma(X)$ for the space of $\sigma$-invariant Borel probability measures on $X$, and $h\colon M_\sigma(X) \to [0,\infty)$ for the measure-theoretic entropy function, the variational principle [@pW82 Theorem 9.10] states that $$P = \sup_{\mu \in M_\sigma(X)} \Big( h(\mu) + \int \varphi\,d\mu\Big).$$ A measure achieving this supremum is an *equilibrium measure*. There is always at least one equilibrium measure because the entropy function is upper semi-continuous. ## Local product structure {#sec:prod} We will be concerned with the structure and properties of equilibrium measures, and in particular with their product structure in terms of stable and unstable sets. Given a shift space $X$, we say that $R\subset X$ has *product structure* (or is a *rectangle*) if for every $x,y\in R$, the *bracket* $[x,y]$ given by $$\label{eqn:lps} [x,y]_n := \begin{cases} x_n & n \leq 0, \\ y_n & n\geq 0 \end{cases}$$ is defined (which requires $x_0 = y_0$) and lies in $R$. In this case we fix $q\in R$, consider the sets $$\label{eqn:WRq} \begin{aligned} W^\mathrm{u}_R(q) &:= \{x\in R : x_n = q_n \text{ for all } n\leq 0 \}, \\ W^\mathrm{s}_R(q) &:= \{x\in R : x_n = q_n \text{ for all } n\geq 0 \}, \end{aligned}$$ and then consider the bijection $$\label{eqn:iota} \begin{aligned} \iota_{q} \colon W^\mathrm{u}_R(q) \times W^\mathrm{s}_{R}(q) &\to R, \\ (x,y) &\mapsto [y,x]. \end{aligned}$$ We say that a measure $m$ on $R$ is a *product measure* if $m = (\iota_q)_* (m^\mathrm{u}_q \times m^\mathrm{s}_q)$ for some measures $m^\mathrm{u}_q$ on $W^\mathrm{u}_R(q)$ and $m^\mathrm{s}_q$ on $W^\mathrm{s}_R(q)$. **Definition 2**. 
A probability measure $\mu$ on $X$ has *local product structure* if there exists a sequence of product measures $m_n$ on rectangles $R_n$ such that we have - $\mu|_{R_n} \ll m_n$ for every $n\in \mathbb{N}$; and - $\mu(\bigcup_{n\in\mathbb{N}} R_n) = 1$. ## Classes of shift spaces and potentials {#sec:classes} To go beyond mere existence and deduce uniqueness, local product structure, or other stronger properties for equilibrium measures, one needs to know more about the shift space $X$ and the potential $\varphi$. We gather some basic definitions and facts here. - The *$n$th variation* of $\varphi$ is $V_n(\varphi) := \sup \{ |\varphi(x) - \varphi(y)| : x_{(-n,n)} = y_{(-n,n)} \}$. - Every continuous $\varphi\colon X\to\mathbb{R}$ is uniformly continuous: $V_n(\varphi)\to 0$ as $n\to\infty$. - $\varphi$ is *Hölder continuous* if $V_n(\varphi)$ decays exponentially fast: there are $C,\alpha>0$ such that $V_n(\varphi) \leq C e^{-\alpha n}$ for all $n\in\mathbb{N}$. - $\varphi$ is *Dini continuous* (or has *summable variations*) if $\sum_n V_n(\varphi)<\infty$. - $\varphi$ has the *Bowen property* if there is $C>0$ such that for every $n\in \mathbb{N}$ and every $x,y\in X$ with $x_{[0,n)} = y_{[0,n)}$, we have $|S_n\varphi(x) - S_n\varphi(y)| \leq C$. - $\varphi$ has the *Walters property* if for every $\epsilon>0$, there is $k\in\mathbb{N}$ such that for all $n\in \mathbb{N}$ and $x,y\in X$ with $x_{[-k,n+k)} = y_{[-k,n+k)}$, we have $|S_n\varphi(x) - S_n\varphi(y)| \leq \epsilon$. 
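For a concrete instance of these regularity classes (a toy example of ours, not one taken from the paper), consider the full shift on $\{0,1\}$ and the potential $\varphi(x) = \sum_{k\in\mathbb{Z}} 2^{-|k|} x_k$. If $x_{(-n,n)} = y_{(-n,n)}$, then $\varphi(x)-\varphi(y)$ is supported on the coordinates $|k|\geq n$, each contributing at most $2^{-|k|}$, and this bound is attained; hence $V_n(\varphi) = \sum_{|k|\geq n} 2^{-|k|} = 4\cdot 2^{-n}$ for $n\geq 1$, so $\varphi$ is Hölder continuous.

```python
import math

# V_n for the toy potential phi(x) = sum_k 2^{-|k|} x_k on the full
# 2-shift.  Coordinates inside (-n, n) are pinned, and phi is a sum of
# independent coordinate terms, so V_n(phi) = sum_{|k| >= n} 2^{-|k|}.

def V(n, cutoff=60):
    # Truncate the bi-infinite sum; the discarded tail is ~2^{-59}.
    return sum(2.0 ** (-abs(k)) for k in range(-cutoff, cutoff + 1)
               if abs(k) >= n)

# Exponential decay with C = 4, alpha = log 2, as in the Holder class.
print(all(math.isclose(V(n), 4 * 2.0 ** (-n), rel_tol=1e-9)
          for n in range(1, 12)))  # True
```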
These properties have the following relationships: $$\text{H\"older} \quad\Rightarrow\quad \text{Dini} \quad\Rightarrow\quad \text{Walters} \quad\Rightarrow\quad \text{Bowen}.$$ The Bowen property has the following immediate consequence: $$\label{eqn:almost-add} \Phi(v) + \Phi(w) - C \leq \Phi(vw) \leq \Phi(v) + \Phi(w) \text{ for all } v,w\in \mathcal{L}\text{ with } vw\in \mathcal{L}.$$ A shift space $X\subset A^\mathbb{Z}$ is a *subshift of finite type* (SFT) if there is a finite set of *forbidden words* $\mathcal{F}\subset A^* := \bigcup_{n=0}^\infty A^n$ such that $$\label{eqn:SFT} X = \{ x\in A^\mathbb{Z}: x_{[i,j]} \notin \mathcal{F}\text{ for all } i,j\in \mathbb{Z}\}.$$ Every topologically mixing SFT has the following *specification property*: there is $\tau\in\mathbb{N}$ such that for all $v,w\in \mathcal{L}$, there is $u\in \mathcal{L}_\tau$ such that $vuw\in \mathcal{L}$. As mentioned in Remark [Remark 1](#rmk:known){reference-type="ref" reference="rmk:known"}, the following facts were already known. - If $X$ has specification and $\varphi$ has the Bowen property, then there is a unique equilibrium measure [@rB745], and it has the K property [@fL77; @bC22]. - If $X$ is a mixing SFT and $\varphi$ has the Walters property, then the unique equilibrium measure has local product structure and the Bernoulli property; see [@Bow08; @nH94; @Lep] for Hölder continuous $\varphi$, and [@pW78] for the extension to the Walters property. - If $X$ has specification and $\varphi$ is Hölder continuous, then the unique equilibrium measure has local product structure and the Bernoulli property [@vC18]. Theorem [Theorem 1](#thm:spec){reference-type="ref" reference="thm:spec"} extends the "local product structure" result in the last item to the case when $\varphi$ has the Walters property. We do not address the question of whether the unique equilibrium measure is Bernoulli in this setting. 
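The notions introduced so far can be made concrete on the golden-mean shift, the SFT on $\{0,1\}$ with the single forbidden word $11$ — a standard toy example; the computation below is ours, not part of the paper. With $\varphi \equiv 0$ every weight is $e^{\Phi(w)} = 1$, so $\Lambda_n = \#\mathcal{L}_n$ is a Fibonacci number and $P = \log((1+\sqrt 5)/2)$ is the topological entropy; the shift also has specification with gap $\tau = 1$, since a single $0$ glues any two admissible words.

```python
from itertools import product
import math

# Golden-mean shift: alphabet {0,1}, forbidden word "11".  With phi = 0,
# e^{Phi(w)} = 1 for every word, so Lambda_n = #L_n and the pressure P
# is the topological entropy log((1 + sqrt 5)/2).

def language(n):
    """All admissible words of length n (no "11" factor)."""
    words = ("".join(w) for w in product("01", repeat=n))
    return [w for w in words if "11" not in w]

print([len(language(n)) for n in range(1, 9)])  # [2, 3, 5, 8, 13, 21, 34, 55]

# Fekete / submultiplicativity: Lambda_n >= e^{nP} for every n.
P = math.log((1 + math.sqrt(5)) / 2)
print(all(math.log(len(language(n))) / n >= P for n in range(1, 9)))  # True

# Specification with tau = 1: inserting "0" glues any two admissible words.
print(all("11" not in v + "0" + w
          for v in language(4) for w in language(4)))  # True
```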
In the classical results above, an important role is played by the fact that the partition sums in [\[eqn:Lambda\]](#eqn:Lambda){reference-type="eqref" reference="eqn:Lambda"} admit *uniform counting bounds* $e^{nP} \leq \Lambda_n \leq Q e^{nP}$, where $Q<\infty$ is independent of $n$, as well as by the existence of "large" sets with product structure. An important feature of our approach is that our general results rely only on these rather flexible assumptions, rather than more restrictive assumptions on the structure of $X$. ## Construction of the leaf measures {#sec:build-leafs} The definition of topological pressure in [\[eqn:P\]](#eqn:P){reference-type="eqref" reference="eqn:P"} gave it as an exponential growth rate, analogous to box (capacity) dimension. In dimension theory one can also study Hausdorff dimension, which is a critical value rather than a growth rate; more precisely, one defines a one-parameter family of Hausdorff measures, whose total weight jumps from $\infty$ to $0$ at a particular value of the parameter. This value is the Hausdorff dimension; Pesin and Pitskel' [@PP84] gave an analogous definition of topological pressure, building on earlier work of Bowen [@rB73] for topological entropy. In the symbolic setting, this definition goes as follows. 
Given $N\in \mathbb{N}$, let $\mathcal{L}_{\geq N} = \{w\in \mathcal{L}: |w| \geq N\}$ denote the set of words of length at least $N$, and given $Z\subset X$, consider the following set of covers of $Z$ by cylinders corresponding to such words: $$\label{eqn:E+x} \mathbb{E}^+(Z,N) := \Big\{ \mathcal{E}\subset \mathcal{L}_{\geq N} : Z \subset \bigcup_{w\in \mathcal{E}} [w]^+ \Big\}.$$ Then to each $\alpha\in \mathbb{R}$, associate an outer measure $m_\alpha$ on $X$ defined by $$\label{eqn:ma} m_\alpha(Z) = \lim_{N\to\infty} \inf \Big\{ \sum_{w\in \mathcal{E}} e^{\Phi(w) - |w|\alpha} : \mathcal{E}\in \mathbb{E}^+(Z,N) \Big\}.$$ The topological pressure $P\in \mathbb{R}$ is the unique real number with the property that $m_\alpha(X) = \infty$ for all $\alpha < P$, and $m_\alpha(X) = 0$ for all $\alpha > P$. Now one may naturally ask whether $m_P$ has anything to do with the equilibrium measure(s) of $(X,\sigma,\varphi)$. There are two problems that arise immediately: 1. a priori we could have $m_P(X) = 0$ or $m_P(X) = \infty$; 2. the outer measure $m_P$ does not restrict to a Borel measure unless $m_P \equiv 0$. Indeed, if we consider for each $x\in X$ the *local unstable set* (or *leaf*) $$W^\mathrm{u}_\mathrm{loc}(x) = \{y\in X : y_k = x_k \text{ for all } k\leq 0\},$$ then $\mathbb{E}^+(W^\mathrm{u}_\mathrm{loc}(x),N) = \mathbb{E}^+([x_0]^+,N)$ for all $N$, and thus $m_P(W^\mathrm{u}_\mathrm{loc}(x)) = m_P([x_0]^+)$, from which one can quickly deduce that $m_P$ is not additive on the Borel $\sigma$-algebra. The first problem is addressed in §[1.6](#sec:finite){reference-type="ref" reference="sec:finite"}. For now we address the second problem, by defining a measure not on all of $X$, but on a local unstable set. 
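Before doing so, the jump of $m_\alpha$ at $\alpha = P$ can be seen numerically in the simplest possible case (a toy computation of ours): the full shift on two symbols with $\varphi \equiv 0$, where $\Phi(w) = 0$ and $P = \log 2$. We restrict attention to covers of $X$ by all cylinders of a single fixed length $n \geq N$, which is enough for the illustration; general covers are not optimized over.

```python
import math

# Critical-value behaviour of m_alpha for the full shift on {0,1} with
# phi = 0, so P = log 2.  Covering X by all 2^n cylinders of one fixed
# length n >= N costs 2^n * e^{-n alpha}; we take the min over a finite
# range of n as a stand-in for the infimum.

def cost(alpha, N, depth=50):
    """min over n in [N, N + depth) of the uniform-cover cost."""
    return min(2 ** n * math.exp(-n * alpha) for n in range(N, N + depth))

# alpha < P = log 2: the infimum grows without bound as N -> infinity.
print(cost(0.5, 10) < cost(0.5, 20) < cost(0.5, 30))    # True
# alpha = P: the cost is identically 1.
print(abs(cost(math.log(2), 30) - 1.0) < 1e-9)          # True
# alpha > P: the infimum tends to 0.
print(cost(0.9, 10) > cost(0.9, 20) > cost(0.9, 30))    # True
```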
We modify the definition in [\[eqn:ma\]](#eqn:ma){reference-type="eqref" reference="eqn:ma"} slightly: instead of using weights $\Phi(w)$, we fix $x\in X$ and consider for each $w\in \mathcal{L}$ the quantity[^3] $$\label{eqn:P+x} \Phi_x^+(w) = \sup \{ S_{|w|}\varphi(y) : y\in W^\mathrm{u}_\mathrm{loc}(x) \cap [w]^+ \}.$$ Then define the unstable leaf measure as $$\label{eqn:shift-mxu} m^\mathrm{u}_x(Z) = \lim_{N\to\infty} \inf \Big\{ \sum_{w\in \mathcal{E}} e^{\Phi_x^+(w)-|w|P} : \mathcal{E}\in \mathbb{E}^+(Z,N) \Big\}.$$ Given $w\in \mathcal{L}_{\geq N}$, the set $W^\mathrm{u}_\mathrm{loc}(x) \cap [w]^+$ has diameter $\leq 2^{-N}$, which goes to $0$ as $N\to\infty$, and thus [@direct-srb Lemma 2.14] implies that $m^\mathrm{u}_x$ defines a Borel measure on $W^\mathrm{u}_\mathrm{loc}(x)$. By reversing the direction of everything, we can analogously define a Borel measure $m^\mathrm{s}_x$ on the *local stable set* (or *leaf*) $$W^\mathrm{s}_\mathrm{loc}(x) = \{y\in X : y_k = x_k \text{ for all } k\geq 0\}.$$ For the definition of $m^\mathrm{s}_x$ we work with the backwards cylinders corresponding to words $w\in A^n$: $$[w]^- = \{x\in X : x_{(-n,0]} = w\}.$$ Given $Z \subset W^\mathrm{s}_\mathrm{loc}(x)$ and $N\in \mathbb{N}$, we consider the collection of covers $$\label{eqn:E-x} \mathbb{E}^-(Z,N) = \Big\{ \mathcal{E}\subset \mathcal{L}_{\geq N} : Z \subset \bigcup_{w\in \mathcal{E}} [w]^- \Big\}.$$ Using the weight function $$\label{eqn:P-x} \Phi_x^-(w) = \sup \Big\{ \sum_{k=0}^{|w|-1} \varphi(\sigma^{-k} y) : y\in W^\mathrm{s}_\mathrm{loc}(x) \cap [w]^- \Big\},$$ we define the leaf measure $m^\mathrm{s}_x$ by $$\label{eqn:shift-mxs} m^\mathrm{s}_x(Z) = \lim_{N\to\infty} \inf \Big\{ \sum_{w\in \mathcal{E}} e^{\Phi_x^-(w) - |w|P} : \mathcal{E}\in \mathbb{E}^-(Z,N) \Big\}.$$ Once again, [@direct-srb Lemma 2.14] guarantees that $m^\mathrm{s}_x$ is a Borel measure on $W^\mathrm{s}_\mathrm{loc}(x)$. 
Now we must provide some way of guaranteeing that the leaf measures $m^\mathrm{u}_x$ and $m^\mathrm{s}_x$ are positive and finite; we will do this in the next section. We conclude this section by observing that the set $W^\mathrm{u}_\mathrm{loc}(x)$ and the measure $m^\mathrm{u}_x$ only depend on $x_{(-\infty,0]}$; similarly, $W^\mathrm{s}_\mathrm{loc}(x)$ and $m^\mathrm{s}_x$ depend only on $x_{[0,\infty)}$. This is immediate from the definitions but deserves to be recorded: **Proposition 3**. *If $x_{(-\infty,0]} = y_{(-\infty,0]}$ then $W^\mathrm{u}_\mathrm{loc}(x) = W^\mathrm{u}_\mathrm{loc}(y)$ and $m^\mathrm{u}_x = m^\mathrm{u}_y$. If $x_{[0,\infty)} = y_{[0,\infty)}$ then $W^\mathrm{s}_\mathrm{loc}(x) = W^\mathrm{s}_\mathrm{loc}(y)$ and $m^\mathrm{s}_x = m^\mathrm{s}_y$.* *Remark 4*. We will occasionally abuse notation slightly by using the notation $W^\mathrm{u}_\mathrm{loc}(x), m^\mathrm{u}_x$ when $x = \cdots x_{-2} x_{-1} x_0$ is a backward-infinite sequence, and $W^\mathrm{s}_\mathrm{loc}(x), m^\mathrm{s}_x$ when $x = x_0 x_1 x_2 \cdots$ is forward-infinite. By Proposition [Proposition 3](#prop:xy){reference-type="ref" reference="prop:xy"}, the meaning of this notation is well-defined by extending such an $x$ to any bi-infinite sequence in $X$. ## Conditions for finiteness {#sec:finite} As the arguments in [@CPZ] reveal, the key property needed to guarantee positivity and finiteness of the leaf measures is uniform control of various versions of the partition sums $\Lambda_n$ from [\[eqn:Lambda\]](#eqn:Lambda){reference-type="eqref" reference="eqn:Lambda"}. Start by observing that [\[eqn:P\]](#eqn:P){reference-type="eqref" reference="eqn:P"} means that $\Lambda_n \approx e^{nP}$ for large $n$, but leaves open the possibility that $\Lambda_n e^{-nP}$ can approach $0$ or $\infty$ subexponentially. 
In fact, by Fekete's lemma, the submultiplicativity property $\Lambda_{m+n} \leq \Lambda_m \Lambda_n$ guarantees that $$\label{eqn:Ln-geq} \Lambda_n \geq e^{nP} \text{ for all } n;$$ however, there is no universal upper bound. We define $$\label{eqn:Q} \overline{Q}:= \varlimsup_{n\to\infty} \Lambda_n e^{-nP} \in [1,\infty].$$ To obtain good bounds on $m^\mathrm{u}_x$, we will need $\overline{Q}<\infty$. This is known to be true for transitive SFTs with Hölder continuous potentials, and in many other settings as well; see for example §[2](#sec:spec){reference-type="ref" reference="sec:spec"} and [@CT13 Proposition 5.3]. We will also need good control of partition sums restricted to $W^\mathrm{u}_\mathrm{loc}(x)$: let $$\label{eqn:Lambda+} \Lambda_n^+(x) := \sum_{w\in \mathcal{L}_n} e^{\Phi_x^+(w)},$$ recalling that words with $W^\mathrm{u}_\mathrm{loc}(x) \cap [w]^+ = \emptyset$ make no contribution to the sum, and then define $$\label{eqn:Q+} \overline{Q}^+_x := \varlimsup_{n\to\infty} \Lambda_n^+(x) e^{-nP}, \qquad \underline{Q}^+_x := \varliminf_{n\to\infty} \Lambda_n^+(x) e^{-nP}.$$ Clearly we have $\underline{Q}^+_x \leq \overline{Q}^+_x \leq \overline{Q}$, but we have no a priori lower bound on $\underline{Q}^+_x$. Replacing $+$ with $-$ in [\[eqn:Lambda+\]](#eqn:Lambda+){reference-type="eqref" reference="eqn:Lambda+"} and [\[eqn:Q+\]](#eqn:Q+){reference-type="eqref" reference="eqn:Q+"} yields analogous quantities $\underline{Q}^-_x \leq \overline{Q}^-_x \leq \overline{Q}$. The following result is proved in §[3](#sec:shift-mxu-pf){reference-type="ref" reference="sec:shift-mxu-pf"}. **Theorem 5**. *Let $X$ be a two-sided shift space on a finite alphabet with language $\mathcal{L}$, and $\varphi\colon X\to \mathbb{R}$ a continuous potential. 
For each $x\in X$, define a Borel measure $m^\mathrm{u}_x$ on $W^\mathrm{u}_\mathrm{loc}(x)$ by [\[eqn:shift-mxu\]](#eqn:shift-mxu){reference-type="eqref" reference="eqn:shift-mxu"} and a Borel measure $m^\mathrm{s}_x$ on $W^\mathrm{s}_\mathrm{loc}(x)$ by [\[eqn:shift-mxs\]](#eqn:shift-mxs){reference-type="eqref" reference="eqn:shift-mxs"}. Then we have $$\begin{aligned} \label{eqn:finite-u} \overline{Q}^+_x / \overline{Q}&\leq m^\mathrm{u}_x(W^\mathrm{u}_\mathrm{loc}(x)) \leq \underline{Q}^+_x, \\ \label{eqn:finite-s} \overline{Q}^-_x / \overline{Q}&\leq m^\mathrm{s}_x(W^\mathrm{s}_\mathrm{loc}(x)) \leq \underline{Q}^-_x.\end{aligned}$$* Theorem [Theorem 5](#thm:finite){reference-type="ref" reference="thm:finite"} guarantees that the measures $m^\mathrm{u}_x$ and $m^\mathrm{s}_x$ are finite as soon as $\overline{Q}<\infty$, but leaves open the possibility that they may vanish for some $x$. For a shift with specification and a potential with the Bowen property, there is $c>0$ such that $\underline{Q}_x^{\pm} \geq c$ for all $x\in X$ [@rB745], so that in particular all leaf measures are nonzero. See §[2](#sec:spec){reference-type="ref" reference="sec:spec"} for a more general statement. ## Scaling under dynamics {#sec:dynamics} A crucial property of $\alpha$-dimensional Hausdorff measure is the way it transforms under geometric scaling: if two sets are related by a transformation that scales distances by a factor of $r$, then their Hausdorff measures are related by a factor of $r^\alpha$. The corresponding property for the measures $m^\mathrm{u}_x$ and $m^\mathrm{s}_x$ concerns how they transform under the dynamics of $\sigma$; they scale by a factor of $e^{P-\varphi}$, as described in the following result, which is proved in §[4](#sec:shift-mxs-pf){reference-type="ref" reference="sec:shift-mxs-pf"}. **Theorem 6**. *Let $X$ be a two-sided shift space on a finite alphabet with language $\mathcal{L}$, and $\varphi\colon X\to \mathbb{R}$ a continuous potential. 
Let $m^\mathrm{u}_x$ and $m^\mathrm{s}_x$ be the families of leaf measures defined by [\[eqn:shift-mxu\]](#eqn:shift-mxu){reference-type="eqref" reference="eqn:shift-mxu"} and [\[eqn:shift-mxs\]](#eqn:shift-mxs){reference-type="eqref" reference="eqn:shift-mxs"}. Then if $Z\subset W^\mathrm{u}_\mathrm{loc}(x)$ is a Borel set such that $\sigma(Z) \subset W^\mathrm{u}_\mathrm{loc}(\sigma x)$, we have $$\label{eqn:shift-mxu-scales} m^\mathrm{u}_{\sigma x}(\sigma Z) = \int_Z e^{-\varphi(y)+P} \,dm^\mathrm{u}_x(y).$$ Similarly, given a Borel set $Z\subset W^\mathrm{s}_\mathrm{loc}(x)$, we have $$\label{eqn:shift-mxs-scales} m^\mathrm{s}_{\sigma^{-1} x}(\sigma^{-1} Z) = \int_Z e^{-\varphi(y)+P} \,dm^\mathrm{s}_x(y).$$* *Remark 7*. It is occasionally useful to write the scaling property in [\[eqn:shift-mxu-scales\]](#eqn:shift-mxu-scales){reference-type="eqref" reference="eqn:shift-mxu-scales"} in one of the following equivalent forms, valid whenever $\sigma y \in W^\mathrm{u}_\mathrm{loc}(\sigma x)$ (that is, $y_{(-\infty,1]} = x_{(-\infty,1]}$): $$\label{eqn:mxu-RN} \frac{dm^\mathrm{u}_{\sigma x}}{d(\sigma_* m^\mathrm{u}_x)}(\sigma y) = e^{-\varphi(y) + P}, %\\ %\label{eqn:mxu-RN-2} \quad\text{or}\quad \frac{d m^\mathrm{u}_x}{d(\sigma^*m^\mathrm{u}_{\sigma x})}(y) = e^{\varphi(\sigma^{-1}y) - P}.$$ If $y_{(-\infty,n]} = x_{(-\infty,n]}$, then iterating the second of these gives $$\label{eqn:mxu-RN-n} \frac{d m^\mathrm{u}_x}{d((\sigma^*)^n m^\mathrm{u}_{\sigma^n x})}(y) = e^{\sum_{k=1}^{n} \varphi(\sigma^{-k}y) - nP}.$$ Analogous formulas hold for $m^\mathrm{s}_x$. In particular $$\label{eqn:mxs-RN-n} \frac{dm^\mathrm{s}_{\sigma^{-n}x}}{d((\sigma^n)_* m^\mathrm{s}_x)}(y)=e^{-\sum_{k=0}^{n-1}\varphi(\sigma^{-k}y)+nP}.$$ In §[1.10](#sec:gibbs){reference-type="ref" reference="sec:gibbs"} we will use these scaling properties to obtain Gibbs-type estimates. 
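Returning to the quantities of §[1.6](#sec:finite){reference-type="ref" reference="sec:finite"}, the restricted partition sums $\Lambda_n^+(x)$ can be computed explicitly in a toy case (the same golden-mean example as before, with $\varphi \equiv 0$; the computation is ours). There the past $x_{(-\infty,0]}$ constrains the future only through the symbol $x_0$, so $\Lambda_n^+(x)$ counts admissible words of length $n$ beginning with $x_0$; the normalized sums $\Lambda_n^+(x)e^{-nP}$ stay in a compact subinterval of $(0,\infty)$, so $\underline{Q}^+_x$ and $\overline{Q}^+_x$ are positive and finite, uniformly in $x$.

```python
from itertools import product
import math

# Restricted partition sums for the golden-mean shift (forbidden word
# "11") with phi = 0.  Here W^u_loc(x) only remembers x_0, so
# Lambda_n^+(x) = #{admissible words of length n starting with x_0}.

def lambda_plus(x0, n):
    return sum(1 for w in product("01", repeat=n)
               if "11" not in "".join(w) and w[0] == x0)

P = math.log((1 + math.sqrt(5)) / 2)

# Lambda_n^+(x) e^{-nP} is trapped in (0, 2) for both pasts and all n,
# so the liminf/limsup quantities Q^+_x are positive and finite.
print(all(0 < lambda_plus(x0, n) * math.exp(-n * P) < 2
          for x0 in "01" for n in range(1, 14)))  # True
```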
## Scaling under holonomies {#sec:holonomies} Recalling §[1.3](#sec:prod){reference-type="ref" reference="sec:prod"}, we say $R \subset X$ has *product structure*, and refer to $R$ as a *rectangle*, if for every $x,y\in R$, the bracket $[x,y]$ from [\[eqn:lps\]](#eqn:lps){reference-type="eqref" reference="eqn:lps"} is defined and lies in $R$. Note that every rectangle must be contained in a 1-cylinder: if $R$ is a rectangle, then there is $a\in A$ such that $R\subset [a]^+ = [a]^-$. Given a rectangle $R$ and a point $x\in R$, we will write (see [\[eqn:WRq\]](#eqn:WRq){reference-type="eqref" reference="eqn:WRq"}) $$W^\mathrm{u}_R(x) := W^\mathrm{u}_\mathrm{loc}(x) \cap R \quad\text{and}\quad W^\mathrm{s}_R(x) := W^\mathrm{s}_\mathrm{loc}(x) \cap R.$$ Given $z,y\in R$, the *(stable) holonomy map* $\pi^s_{z,y} : W^\mathrm{u}_R(z) \to W^\mathrm{u}_R(y)$ is defined by $$\label{eqn:pi-u} \pi_{z,y}^s(x) = [y,x],$$ so that $\{\pi_{z,y}^s(x)\} = W^\mathrm{s}_\mathrm{loc}(x) \cap W^\mathrm{u}_\mathrm{loc}(y)$; the map $\pi^s_{z,y}$ "slides" a point $x$ along its stable leaf until it reaches $W^\mathrm{u}_\mathrm{loc}(y)$. We will be interested in rectangles on which the unstable leaf measures $m^\mathrm{u}_x$ transform nicely under holonomies. **Definition 8**. Given a collection $\mathcal{R}$ of rectangles, the family of unstable leaf measures has *uniformly continuous holonomies* on $\mathcal{R}$ if for every $\epsilon>0$, there exists $\delta>0$ such that given any $R\in \mathcal{R}$ and $y,z\in R$ such that $d(y,z)<\delta$, we have $m^\mathrm{u}_y(Y) = e^{\pm\epsilon} m^\mathrm{u}_z(\pi_{y,z}^s(Y))$ for all measurable $Y\subset W^\mathrm{u}_R(y)$.[^4] We prove the following in §[5](#sec:shift-cts-pf){reference-type="ref" reference="sec:shift-cts-pf"}. **Proposition 9**. *If $\varphi$ has the Walters property, then the leaf measures $m^\mathrm{u}_x$ have uniformly continuous holonomies on the family of open rectangles.* **Proposition 10**. 
*If $\varphi\equiv 0$, then the leaf measures $m^\mathrm{u}_x$ have uniformly continuous holonomies on the family of all rectangles.* In the next section, it will be important to have a formula for the Radon--Nikodym derivative of the holonomy map. To give this, we start with the following direct consequence of [\[eqn:lps\]](#eqn:lps){reference-type="eqref" reference="eqn:lps"}: $$\label{eqn:shift-bracket} \text{if $x_{[0,n]} = y_{[0,n]}$, then $[\sigma^n x,\sigma^n y] = \sigma^n[x,y]$}.$$ With [\[eqn:shift-bracket\]](#eqn:shift-bracket){reference-type="eqref" reference="eqn:shift-bracket"} in mind, the following is immediate. **Lemma 11**. *Given any rectangle $R$, any $n\in \mathbb{N}$, and any $w\in \mathcal{L}_{n+1}$, the sets $\sigma^n(R\cap [w]^+) = \sigma^n(R) \cap [w]^-$ and $\sigma^{-n}(R\cap [w]^-) = \sigma^{-n}(R) \cap [w]^+$ are rectangles as well. (They may be empty.)* **Definition 12**. A family $\mathcal{R}$ of rectangles is *$\mathcal{L}$-invariant* if given any $R\in \mathcal{R}$, $n\in \mathbb{N}$, and $w\in \mathcal{L}_{n+1}$, the rectangles $\sigma^{n}(R\cap [w]^+)$ and $\sigma^{-n}(R \cap [w]^-)$ are contained in $\mathcal{R}$. The families of rectangles in Propositions [Proposition 9](#prop:open-hol){reference-type="ref" reference="prop:open-hol"} and [Proposition 10](#prop:zero-hol){reference-type="ref" reference="prop:zero-hol"} are both $\mathcal{L}$-invariant. **Definition 13**. The family of leaf measures $m^\mathrm{u}_x$ is *positive* on a family $\mathcal{R}$ of rectangles if for every $R\in \mathcal{R}$ there exists $x\in R$ such that $m^\mathrm{u}_x(W^\mathrm{u}_R(x))>0$. **Theorem 14**. *Let $\mathcal{R}$ be an $\mathcal{L}$-invariant family of rectangles, and suppose that the family of leaf measures $m^\mathrm{u}_x$ is positive on $\mathcal{R}$. Then the following are equivalent.* 1. *The leaf measures $m^\mathrm{u}_x$ have uniformly continuous holonomies on $\mathcal{R}$.* 2. 
*The sum $$\label{eqn:Deltas} \Delta^s(x',x) := \sum_{n=0}^{\infty} \varphi(\sigma^{n} x') - \varphi(\sigma^{n} x)$$ converges uniformly on $\{ (x,x') \in R^2 : R\in \mathcal{R}, x' \in W^\mathrm{s}_R(x)\}$, is uniformly bounded on this set, and for every $R\in \mathcal{R}$ and $y,z\in R$, we have $$\label{eqn:shift-RN} \frac{d(\pi^s_{z,y})_*m^\mathrm{u}_z}{dm^\mathrm{u}_y}(x) = e^{\Delta^s((\pi^s_{z,y})^{-1}(x),x)}. %\text{ where } %x' = (\pi^u_{z,u})^{-1}(x).$$* The proof of this result is given in §[5](#sec:shift-cts-pf){reference-type="ref" reference="sec:shift-cts-pf"}. An analogous result is true if we reverse the roles of stable and unstable and change the sign of each instance of $n$. *Remark 15*. It is worth observing that [\[eqn:shift-RN\]](#eqn:shift-RN){reference-type="eqref" reference="eqn:shift-RN"} gives $$m^\mathrm{u}_z(W^\mathrm{u}_R(z)) = m^\mathrm{u}_z((\pi^s_{z,y})^{-1}(W^\mathrm{u}_R(y))) = \int_{W^\mathrm{u}_R(y)} e^{\Delta^s((\pi^s_{z,y})^{-1}(x),x)} \,dm^\mathrm{u}_y(x).$$ Thus if the conditions of Theorem [Theorem 14](#thm:shift-cts){reference-type="ref" reference="thm:shift-cts"} are satisfied, we actually have a stronger version of positivity: $m^\mathrm{u}_y(W^\mathrm{u}_R(y))>0$ for every $y\in R\in \mathcal{R}$. ## Product measure construction of an equilibrium measure {#sec:product-measure} To go from the leaf measures $m^\mathrm{u}_x$ and $m^\mathrm{s}_x$ to an equilibrium measure for $(X,\sigma,\varphi)$, we start with a product construction. Suppose that $\mathcal{R}$ is an $\mathcal{L}$-invariant family of rectangles on which the families of leaf measures $m^\mathrm{u}_x$ and $m^\mathrm{s}_x$ both have uniformly continuous holonomies and are positive in the sense of Definition [Definition 13](#def:pos-meas){reference-type="ref" reference="def:pos-meas"}.
Given $R\in \mathcal{R}$ and $q\in R$, consider the bijection $\iota_q\colon W^\mathrm{u}_R(q) \times W^\mathrm{s}_R(q) \to R$ from [\[eqn:iota\]](#eqn:iota){reference-type="eqref" reference="eqn:iota"}, defined by $\iota_q(x,y) = [y,x]$, and the product measure $m_{R,q} := (\iota_{q})_* (m^\mathrm{u}_{q} \times m^\mathrm{s}_{q})$, for which $$\label{eqn:mxm} m_{R,q}(Z) = \int_{W^\mathrm{u}_R(q)}\int_{W^\mathrm{s}_R(q)}\mathbf{1}_{Z}([y,z])\,dm^\mathrm{s}_{q}(y)\,dm^\mathrm{u}_{q}(z).$$ Let $\lambda_R \ll m_{R,q}$ be the measure on $R$ defined by $$\label{eqn:def-lambda-R} \frac{d\lambda_R}{dm_{R,q}}([y,z]) = e^{\Delta^u([y,z],y)} e^{\Delta^s([y,z],z)}, \qquad y\in W^\mathrm{s}_R(q),\ z\in W^\mathrm{u}_R(q).$$ The following is proved in §[6](#sec:es-two-sided){reference-type="ref" reference="sec:es-two-sided"} and justifies the suppression of $q$ in the notation $\lambda_R$. **Theorem 16**. *Let $X$ be a two-sided shift space and $\varphi\colon X\to \mathbb{R}$ be continuous. Suppose that $\mathcal{R}$ is an $\mathcal{L}$-invariant family of rectangles on which the families of leaf measures $m^\mathrm{u}_x$ and $m^\mathrm{s}_x$ both have uniformly continuous holonomies and are positive. Then for every $R\in \mathcal{R}$, the measure $\lambda_R$ defined in [\[eqn:mxm\]](#eqn:mxm){reference-type="eqref" reference="eqn:mxm"} and [\[eqn:def-lambda-R\]](#eqn:def-lambda-R){reference-type="eqref" reference="eqn:def-lambda-R"} is independent of the choice of $q\in R$, and is nonzero.* Given disjoint $R_1,\dots, R_\ell \in \mathcal{R}$, define a measure $\lambda_R$ on the union $R=\bigcup_{i=1}^\ell R_i$ by $$\label{eqn:def-lambda} \lambda_R=\sum_{i=1}^\ell\lambda_{R_i}.$$ The first return time function $\tau \colon R\to \mathbb{N}\cup \{\infty\}$ is given by $$\tau(x) = \min \{n\geq 1 : \sigma^n(x) \in R\}.$$ If $\lambda_R(\{x \in R : \tau(x)=\infty\}) = 0$, then the first return map $T \colon R\to R$ given by $T(x) = \sigma^{\tau(x)}(x)$ is defined $\lambda_R$-a.e.
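As a toy illustration of this induced-map setup (an elementary example, not part of the formal development): in the full $2$-shift with the Bernoulli$(\tfrac12,\tfrac12)$ measure $\mu$ and $R=[0]^+$, the sets $\{x\in R : \tau(x)>n\}$ have measure $2^{-(n+1)}$, and summing these tail measures recovers Kac's formula $\int_R\tau\,d\mu = \mu(X)=1$. A quick exact computation:

```python
from fractions import Fraction

# Full shift on {0,1} with the Bernoulli(1/2,1/2) measure mu, and R = [0]^+.
# For x in R, tau(x) > n exactly when x_1 = ... = x_n = 1, so
#   mu({x in R : tau(x) > n}) = 2^{-(n+1)}.
def mu_tau_greater(n):
    return Fraction(1, 2 ** (n + 1))

# Summing the tail measures gives the integral of tau over R:
#   sum_{n >= 0} mu(tau > n) = 1 = mu(X).
tail_sum = sum(mu_tau_greater(n) for n in range(60))
print(float(tail_sum))  # -> 1.0 (up to a tail of size 2^-60)

# Kac's formula: the expected return time with respect to the normalized
# measure mu_R = mu|_R / mu(R) is 1/mu(R) = 2.
expected_return = tail_sum / Fraction(1, 2)
print(float(expected_return))  # -> 2.0
```

This is exactly the mechanism exploited below: control of the tail sets $\{\tau>n\}$ yields integrability of $\tau$, hence a finite total measure after inducing.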
In §[6](#sec:es-two-sided){reference-type="ref" reference="sec:es-two-sided"} we prove the following. **Theorem 17**. *Let $X,\varphi,\mathcal{R}$ be as in Theorem [Theorem 16](#thm:q-indep){reference-type="ref" reference="thm:q-indep"}, and $R,\tau,T$ as above. If $\lambda_R(\{x\in R : \tau(x) = \infty\}) = 0$, then $\lambda_R$ is $T$-invariant.* The procedure for passing from a $T$-invariant measure on $R$ to a $\sigma$-invariant measure on $X$ is standard: under the hypotheses of Theorem [Theorem 17](#thm:return-map){reference-type="ref" reference="thm:return-map"}, write $Y_n := \{y\in R : \tau(y) > n \}$, and consider the measure $$\label{eqn:es-mu} \mu := \sum_{n= 0}^\infty\sigma_*^n \lambda_R|_{Y_n}.$$ Clearly $\mu|_R = \lambda_R$. In §[6](#sec:es-two-sided){reference-type="ref" reference="sec:es-two-sided"} we prove the following result. **Theorem 18**. *Let $X,\varphi,\mathcal{R},R,\tau,T$ be as in Theorems [Theorem 16](#thm:q-indep){reference-type="ref" reference="thm:q-indep"} and [Theorem 17](#thm:return-map){reference-type="ref" reference="thm:return-map"}. Suppose that $\lambda_R$ is positive, that $\int \tau \,d\lambda_R < \infty$, and that $\overline{Q}<\infty$. Then the measure $\mu$ defined in [\[eqn:es-mu\]](#eqn:es-mu){reference-type="eqref" reference="eqn:es-mu"} is positive, finite, and $\sigma$-invariant; moreover, its normalization $\mu/\mu(X)$ is an equilibrium measure with local product structure.* ## Gibbs-type estimates {#sec:gibbs} We conclude our general results with a description of how combining Theorem [Theorem 5](#thm:finite){reference-type="ref" reference="thm:finite"} with the scaling estimates in [\[eqn:mxu-RN-n\]](#eqn:mxu-RN-n){reference-type="eqref" reference="eqn:mxu-RN-n"} provides Gibbs-type estimates on the weight that the leaf measures give cylinders. 
These lead in turn to Gibbs-type bounds for the measures $\lambda_R$ from [\[eqn:mxm\]](#eqn:mxm){reference-type="eqref" reference="eqn:mxm"}--[\[eqn:def-lambda-R\]](#eqn:def-lambda-R){reference-type="eqref" reference="eqn:def-lambda-R"}. Given $x\in X$ and $w\in \mathcal{L}_n$ such that $w_1 = x_0$, we adopt the notation $$\label{eqn:x.w} x.w := \cdots x_{-2} x_{-1} w_1 w_2 w_3 \cdots w_n.$$ That is, $x.w$ is the left-infinite sequence obtained by taking the negative half of $x$ and appending $w$. (The result may or may not be legal in $X$: it is legal if and only if $W^\mathrm{u}_\mathrm{loc}(x) \cap [w]^+ \neq \emptyset$.) The sequence $x.w$ is indexed by $(-\infty,0] \cap \mathbb{Z}$, so $(x.w)_0 = w_n$, $(x.w)_{-1} = w_{n-1}$, and so on. Now observe that $$\sigma^n(W^\mathrm{u}_\mathrm{loc}(x) \cap [w]^+) = \bigcup_{a\in A} %[x.wa]^-, W^\mathrm{u}_\mathrm{loc}(x.wa),$$ where as in Remark [Remark 4](#rmk:1-sided){reference-type="ref" reference="rmk:1-sided"} we abuse notation slightly by using the notation $W^\mathrm{u}_\mathrm{loc}$ when $x.wa$ is only a one-sided infinite sequence; this is justified by Proposition [Proposition 3](#prop:xy){reference-type="ref" reference="prop:xy"}. Some of the sets $W^\mathrm{u}_\mathrm{loc}(x.wa)$ may be empty. 
Consider the quantity $$\label{eqn:Vxw} V_x^+(w) := \sup \{ |S_n \varphi(y) - S_n\varphi(z)| : y,z \in W^\mathrm{u}_\mathrm{loc}(x) \cap [w]^+ \}.$$ We see immediately that given any $y\in W^\mathrm{u}_\mathrm{loc}(x) \cap [w]^+$, we have $$\label{eqn:Vxw-2} \Phi_x^+(w) - V_x^+(w) \leq S_n\varphi(y) \leq \Phi_x^+(w).$$ Given $a\in A$ such that $W^\mathrm{u}_\mathrm{loc}(x.wa) \neq \emptyset$, write $m^\mathrm{u}_{x.wa}$ for the corresponding leaf measure; then [\[eqn:mxu-RN-n\]](#eqn:mxu-RN-n){reference-type="eqref" reference="eqn:mxu-RN-n"}, [\[eqn:Vxw-2\]](#eqn:Vxw-2){reference-type="eqref" reference="eqn:Vxw-2"}, and Proposition [Proposition 3](#prop:xy){reference-type="ref" reference="prop:xy"} give $$e^{\Phi_x^+(w) - nP - V_x^+(w)} \leq \frac{d m^\mathrm{u}_x}{d((\sigma^*)^n m^\mathrm{u}_{x.wa})}(y) \leq e^{\Phi_x^+(w) - nP}$$ for each $y\in W^\mathrm{u}_\mathrm{loc}(x) \cap [wa]^+$. Writing $\|m^\mathrm{u}_{x.wa}\| := m_{x.wa}(W^\mathrm{u}_\mathrm{loc}(x.wa))$ for the total weight of the measure $m^\mathrm{u}_{x.wa}$, we obtain $$e^{-V_x^+(w)} \sum_a \|m^\mathrm{u}_{x.wa}\| %\mmu_{x.wa}(\Wul(x.wa)) \leq \frac{m^\mathrm{u}_x([w]^+)}{e^{\Phi_x^+(w) - nP}} \leq \sum_a %\mmu_{x.wa}(\Wul(x.wa)), \|m^\mathrm{u}_{x.wa}\|,$$ and Theorem [Theorem 5](#thm:finite){reference-type="ref" reference="thm:finite"} lets us conclude the following Gibbs-type estimate: **Theorem 19**. *Let $X$ be a two-sided shift space on a finite alphabet with language $\mathcal{L}$, and $\varphi\colon X\to \mathbb{R}$ a continuous potential. 
Then given any $x\in X$ and $w\in \mathcal{L}_n$, we have $$\label{eqn:gibbs+} \frac{\sum_{a\in A} \overline{Q}^+_{x.wa}}{e^{V_x^+(w)} \overline{Q}} \leq \frac{m^\mathrm{u}_x([w]^+)}{e^{\Phi_x^+(w) - nP}} \leq \sum_{a\in A} \underline{Q}^+_{x.wa}.$$ An analogous bound holds for $m^\mathrm{s}_x([w]^-)$: $$\label{eqn:gibbs-} \frac{\sum_{a\in A}\overline{Q}^-_{aw.x}}{e^{V_x^-(w)}\overline{Q}} \leq \frac{m^\mathrm{s}_x([w]^-)}{e^{\Phi_{x}^-(w)-nP}} \leq\sum_{a\in A}\underline{Q}^-_{aw.x}.$$* Once again we point out that if $X$ has the specification property and $\varphi$ has the Bowen property, then the quantities in [\[eqn:gibbs+\]](#eqn:gibbs+){reference-type="eqref" reference="eqn:gibbs+"} and [\[eqn:gibbs-\]](#eqn:gibbs-){reference-type="eqref" reference="eqn:gibbs-"} are all bounded away from $0$ and $\infty$ independently of $x$ and $w$, yielding the usual Gibbs property. Theorem [Theorem 19](#thm:gibbs){reference-type="ref" reference="thm:gibbs"} yields an upper Gibbs bound for the product measures used in §[1.9](#sec:product-measure){reference-type="ref" reference="sec:product-measure"}; the constant involved is given in terms of $\underline{Q}$, and so for this bound to be useful we will need to know that $\underline{Q}<\infty$. **Corollary 20**. *Let $X$ be a two-sided shift space on a finite alphabet and $\varphi\colon X\to \mathbb{R}$ be continuous. Suppose that $\mathcal{R}$ is an $\mathcal{L}$-invariant family of rectangles on which the families of leaf measures $m^\mathrm{u}_x$ and $m^\mathrm{s}_x$ both have uniformly continuous holonomies. Let $C = \max(\|\Delta^s\|,\|\Delta^u\|)$, which is finite by Theorem [Theorem 14](#thm:shift-cts){reference-type="ref" reference="thm:shift-cts"}, and let $K = (\#A) \underline{Q}^2 e^{2C}$. 
Then for every $R\in \mathcal{R}$, the measure $\lambda_R$ defined in [\[eqn:mxm\]](#eqn:mxm){reference-type="eqref" reference="eqn:mxm"} and [\[eqn:def-lambda-R\]](#eqn:def-lambda-R){reference-type="eqref" reference="eqn:def-lambda-R"} satisfies the following bound for all $w\in \mathcal{L}$: $$\label{eqn:lambda-Gibbs} \lambda_R([w]^+) \leq K e^{\Phi(w) - |w|P}.$$* *Proof.* If $[w]^+\cap R = \emptyset$, there is nothing to prove, so assume the intersection is nonempty. By Theorem [Theorem 16](#thm:q-indep){reference-type="ref" reference="thm:q-indep"}, we can define $\lambda_R$ using $q\in [w]^+ \cap R$. We get $$\begin{aligned} \lambda_{R}([w]^+) &=\int_{W^\mathrm{u}_{R}(q)}\int_{W^\mathrm{s}_{R}(q)}e^{\Delta^u([y,z],y)}e^{\Delta^s([y,z],z)}\mathbf{1}_{[w]^+}([y,z])\,dm^\mathrm{s}_{q}(y)dm^\mathrm{u}_{q}(z)\\ &\leq e^{2C}\int_{W^\mathrm{u}_\mathrm{loc}(q)}\int_{W^\mathrm{s}_\mathrm{loc}(q)}\mathbf{1}_{[w]^+}([y,z])\,dm^\mathrm{s}_{q}(y)dm^\mathrm{u}_{q}(z)\\ & = e^{2C}\int_{W^\mathrm{u}_\mathrm{loc}(q)\cap [w]^+}\int_{W^\mathrm{s}_\mathrm{loc}(q)}\,dm^\mathrm{s}_{q}(y)dm^\mathrm{u}_{q}(z)\\ & = e^{2C}m^\mathrm{u}_{q}([w]^+)m^\mathrm{s}_{q}(W^\mathrm{s}_\mathrm{loc}(q)). \end{aligned}$$ By Theorems [Theorem 5](#thm:finite){reference-type="ref" reference="thm:finite"} and [Theorem 19](#thm:gibbs){reference-type="ref" reference="thm:gibbs"} we have $$m^\mathrm{u}_{q}([w]^+)m^\mathrm{s}_{q}(W^\mathrm{s}_\mathrm{loc}(q))\leq (\#A)\underline{Q}^2e^{\Phi(w)-|w|P},$$ which proves the corollary. ◻ # Examples {#sec:spec} ## Subshifts of finite type Let $X$ be a topologically mixing subshift of finite type on a finite alphabet $A$, determined by a finite set $\mathcal{F}$ of forbidden words. Let $\varphi\colon X\to\mathbb{R}$ have the Walters property.
Then it is well-known [@rB745; @pW78] that $(X,\varphi)$ has a unique equilibrium measure, and that $\overline{Q}<\infty$ and $\inf_x \underline{Q}^+_x>0$, $\inf_x \underline{Q}^-_x>0$, so the leaf measures $m^\mathrm{u}_x$ and $m^\mathrm{s}_x$ are uniformly positive and finite by Theorem [Theorem 5](#thm:finite){reference-type="ref" reference="thm:finite"}. The product measure construction in §[1.9](#sec:product-measure){reference-type="ref" reference="sec:product-measure"} can be carried out as follows. Let $k=\max\{|u|:u\in\mathcal{F}\}$, and observe that for all $w\in\mathcal{L}$ with $|w|\geq k$, the cylinder $[w]^+$ has product structure: indeed, given any $x,y \in [w]^+$, any subword of $[x,y]$ with length $\leq k$ must be a subword of either $x$ or $y$, and thus cannot lie in $\mathcal{F}$ (since $x,y\in X$), allowing us to conclude that $[x,y] \in X$ and hence $[x,y]\in [w]^+$. By Proposition [Proposition 9](#prop:open-hol){reference-type="ref" reference="prop:open-hol"}, the leaf measures $m^\mathrm{u}_x$ and $m^\mathrm{s}_x$ have uniformly continuous holonomies on the family $\mathcal{R}$ of open rectangles. By the previous paragraph, this family contains $[w]^+$ for every $w\in \mathcal{L}_k$.
So we can write $X$ as the finite union of open rectangles: $$X=\bigcup_{w\in\mathcal{L}_k}[w]^+.$$ On each $[w]^+$, [\[eqn:mxm\]](#eqn:mxm){reference-type="eqref" reference="eqn:mxm"} and [\[eqn:def-lambda-R\]](#eqn:def-lambda-R){reference-type="eqref" reference="eqn:def-lambda-R"} produce a positive and finite measure $\lambda_w$, and taking $R = X$ we have return time $\tau\equiv 1$, so the construction in [\[eqn:def-lambda\]](#eqn:def-lambda){reference-type="eqref" reference="eqn:def-lambda"} and [\[eqn:es-mu\]](#eqn:es-mu){reference-type="eqref" reference="eqn:es-mu"} reduces to $$\mu = \sum_{w\in \mathcal{L}_k} \lambda_w.$$ The return time is integrable (since it is bounded and each $\lambda_w$ is finite), so by Theorem [Theorem 18](#thm:push-product){reference-type="ref" reference="thm:push-product"}, $\mu/\mu(X)$ is the unique equilibrium measure. ## Synchronizing words {#sec:synchronizing-words} The result for subshifts of finite type can be extended to shift spaces with a synchronizing word and the appropriate counting estimates. **Definition 21**. A word $v$ is said to be *synchronizing* if for every pair $u,w\in\mathcal{L}$ such that $uv,vw\in\mathcal{L}$, we have that $uvw\in\mathcal{L}$ as well. Equivalently, $v$ is synchronizing if the cylinder $[v]^+$ has product structure; in particular, $X$ has a synchronizing word if and only if the family $\mathcal{R}$ of open rectangles (as in Proposition [Proposition 9](#prop:open-hol){reference-type="ref" reference="prop:open-hol"}) is nonempty. In this section we give a condition for our main results to apply to such shift spaces. In a separate paper [@CD-billiards], we will study shift spaces that code dispersing billiards and do not have any open rectangles. We remark that a shift space is a subshift of finite type if and only if all sufficiently long words are synchronizing. To go beyond SFTs, we need to control the set of long words that do not contain a synchronizing word. 
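By way of illustration (a folklore example, recorded here as a sanity check rather than as part of the development): in the even shift, where a finite word is legal if and only if every maximal block of $0$s lying strictly between two $1$s has even length, the single letter $1$ is a synchronizing word, and the only legal length-$n$ word avoiding it is $0^n$, so the set of words containing no synchronizing word is as small as it can possibly be. A brute-force verification:

```python
from itertools import product

def is_legal(w):
    # Even shift: every maximal run of 0s strictly between two 1s has even length.
    ones = [i for i, c in enumerate(w) if c == "1"]
    return all((j - i - 1) % 2 == 0 for i, j in zip(ones, ones[1:]))

def legal_words(n):
    return ["".join(p) for p in product("01", repeat=n) if is_legal(p)]

# v = "1" is synchronizing: if u+"1" and "1"+w are legal then so is u+"1"+w,
# because no run of 0s can straddle the middle "1".
short_words = [w for n in range(6) for w in legal_words(n)]
for u in short_words:
    for w in short_words:
        if is_legal(u + "1") and is_legal("1" + w):
            assert is_legal(u + "1" + w)

# The only legal length-n word avoiding "1" is 0^n, while the language
# itself grows exponentially.
for n in range(1, 9):
    assert [w for w in legal_words(n) if "1" not in w] == ["0" * n]
print([len(legal_words(n)) for n in range(1, 7)])
```

The even shift is synchronized but not of finite type, since its description requires the arbitrarily long forbidden words $10^{2k+1}1$; this is the kind of example the present section is designed to cover.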
To this end, let $\mathcal{L}$ be the language of a shift space $X$ with a synchronizing word $v\in \mathcal{L}$, and consider the collection $$\mathcal{B}:=\{u\in\mathcal{L}:u \text{ does not contain }v\}.$$ Let $\varphi\colon X\to \mathbb{R}$ be a potential with the Walters property (so leaf measures have uniformly continuous holonomies on open rectangles by Proposition [Proposition 9](#prop:open-hol){reference-type="ref" reference="prop:open-hol"}), and consider for each $n\in \mathbb{N}$ the quantity $$\label{eqn:QnB} Q_n(\mathcal{B}) := \sum_{w\in \mathcal{B}_n} e^{\Phi(w) - nP}.$$ **Theorem 22**. *With $X,\mathcal{L},v,\mathcal{B},\varphi,Q_n$ as above, let $R = [v]^+$ and let $\tau\colon R\to \mathbb{N}\cup\{\infty\}$ be the first return time function. Suppose that we have $$\label{eqn:sync-assumption} \sum_{n=1}^\infty Q_n(\mathcal{B})<\infty.$$ Then the following are true.* 1. *$(X,\sigma,\varphi)$ has a unique equilibrium measure.* 2. *The measure $\lambda = \lambda_R$ defined by [\[eqn:mxm\]](#eqn:mxm){reference-type="eqref" reference="eqn:mxm"}--[\[eqn:def-lambda-R\]](#eqn:def-lambda-R){reference-type="eqref" reference="eqn:def-lambda-R"} is nonzero and $\int \tau \,d\lambda < \infty$.* 3. *The $\sigma$-invariant measure $\mu$ defined in [\[eqn:es-mu\]](#eqn:es-mu){reference-type="eqref" reference="eqn:es-mu"} is a scalar multiple of the unique equilibrium measure.* *Remark 23*. If $X$ is a shift space with the specification property, then it has a synchronizing word [@aB88], and [\[eqn:sync-assumption\]](#eqn:sync-assumption){reference-type="eqref" reference="eqn:sync-assumption"} holds for every potential with the Bowen property. This last assertion can be deduced using the fact that $(X,\varphi)$ has a unique equilibrium measure $\mu$, and that $\mu$ is fully supported [@rB745], so that in particular $\mu([v]^+)>0$. 
Consequently, writing $Y = X \setminus \bigcup_{n\in\mathbb{Z}} \sigma^n([v]^+)$, we can observe that $\mathcal{B}= \mathcal{L}(Y)$ and that $(Y,\varphi)$ has an equilibrium measure $\nu \neq \mu$, for which $P(Y,\varphi) = h(\nu) + \int \varphi\,d\nu < P(X,\varphi)$, so that $\lim_{n\to\infty} \frac 1n \log Q_n(\mathcal{B}) = \lim_{n\to\infty} \frac 1n \log \Lambda_n(Y) e^{-nP} < 0$, proving [\[eqn:sync-assumption\]](#eqn:sync-assumption){reference-type="eqref" reference="eqn:sync-assumption"}. The remainder of this section is devoted to the proof of Theorem [Theorem 22](#thm:sync){reference-type="ref" reference="thm:sync"}, which will proceed along the following lines. 1. Use the synchronizing property together with [\[eqn:sync-assumption\]](#eqn:sync-assumption){reference-type="eqref" reference="eqn:sync-assumption"} to prove that the set $\mathcal{G}\subset \mathcal{L}$ of words that start and end with $v$ has the specification property. 2. Use this specification property and [\[eqn:sync-assumption\]](#eqn:sync-assumption){reference-type="eqref" reference="eqn:sync-assumption"} to apply results from [@CT13; @PYY22], deducing uniqueness and uniform counting bounds on partition sums. 3. Use these counting bounds together with Theorem [Theorem 5](#thm:finite){reference-type="ref" reference="thm:finite"} to obtain lower bounds on $m^\mathrm{u}_x$ and $m^\mathrm{s}_x$ whenever $x\in [v]^+$, thus deducing that $\lambda([v]^+)>0$. 4. Use [\[eqn:sync-assumption\]](#eqn:sync-assumption){reference-type="eqref" reference="eqn:sync-assumption"} and the Gibbs bounds from §[1.10](#sec:gibbs){reference-type="ref" reference="sec:gibbs"} to show that $\int \tau\,d\lambda<\infty$. 5. Apply Theorem [Theorem 18](#thm:push-product){reference-type="ref" reference="thm:push-product"} to deduce that $\mu/\mu(X)$ is an equilibrium measure. For Step 1, we need the following lemma. **Lemma 24**. 
*If equation [\[eqn:sync-assumption\]](#eqn:sync-assumption){reference-type="eqref" reference="eqn:sync-assumption"} holds, then there exists $u\in\mathcal{L}$ such that $vuv\in\mathcal{L}$.* *Proof.* Recalling the notation $\Lambda_n=\sum_{|w|=n}e^{\Phi(w)}$ from [\[eqn:Lambda\]](#eqn:Lambda){reference-type="eqref" reference="eqn:Lambda"}, given $\mathcal{D}\subset \mathcal{L}$ we write $$\Lambda_n(\mathcal{D})=\sum_{w\in\mathcal{D}_n}e^{\Phi(w)}.$$ Let $\mathcal{D}= \mathcal{B}\cup ((\mathcal{B}v \mathcal{B}) \cap \mathcal{L})$ be the set of words in $\mathcal{L}$ that do not contain two disjoint copies of $v$. Then we have $$\label{eqn:LnD-leq} \Lambda_n(\mathcal{D}) \leq \Lambda_n(\mathcal{B})+ e^{\Phi(v)}\sum_{i=0}^{n-|v|} \Lambda_i(\mathcal{B}) \Lambda_{n-i-|v|}(\mathcal{B}).$$ Recall also from §[1.6](#sec:finite){reference-type="ref" reference="sec:finite"} that $\Lambda_n = \Lambda_n(\mathcal{L}) \geq e^{nP}$, so [\[eqn:LnD-leq\]](#eqn:LnD-leq){reference-type="eqref" reference="eqn:LnD-leq"} gives $$\label{eqn:LnLn} \frac{\Lambda_n(\mathcal{D})}{\Lambda_n(\mathcal{L})} \leq \Lambda_n(\mathcal{D}) e^{-nP} \leq Q_n(\mathcal{B})+e^{\Phi(v)-|v|P} %\sum_{\substack{i,j\geq 0\\ i+j = n-|v|}} \sum_{i=0}^{n-|v|} Q_i(\mathcal{B})Q_{n-|v|-i}(\mathcal{B}).$$ Writing $c_i = Q_i(\mathcal{B})$, observe that [\[eqn:sync-assumption\]](#eqn:sync-assumption){reference-type="eqref" reference="eqn:sync-assumption"} gives $$\label{eqn:ci-trick} \sum_{\ell=0}^\infty \sum_{i=0}^\ell c_i c_{\ell-i} = \Big( \sum_{k=0}^\infty c_k \Big)^2 < \infty,$$ so $\sum_{i=0}^\ell c_i c_{\ell-i} \to 0$ as $\ell\to \infty$. Thus the right-hand side of [\[eqn:LnLn\]](#eqn:LnLn){reference-type="eqref" reference="eqn:LnLn"} converges to $0$ as $n\to\infty$. For sufficiently large $n$ this gives $\Lambda_n(\mathcal{D}) < \Lambda_n(\mathcal{L})$, so $\mathcal{D}\neq \mathcal{L}$, which proves Lemma [Lemma 24](#lem:sync-spec){reference-type="ref" reference="lem:sync-spec"}. ◻ **Proposition 25**. 
*Writing $\mathcal{G}= \mathcal{L}\cap v\mathcal{L}\cap \mathcal{L}v$ for the set of words in $\mathcal{L}$ that both start and end with $v$, the following are true whenever [\[eqn:sync-assumption\]](#eqn:sync-assumption){reference-type="eqref" reference="eqn:sync-assumption"} holds.* - *$\mathcal{B}\mathcal{G}\mathcal{B}$ is a decomposition of the language $\mathcal{L}$: for every $w\in \mathcal{L}$ there are $u^p, u^s \in \mathcal{B}$ and $u^g \in \mathcal{G}$ such that $w = u^p u^g u^s$.* - *$\mathcal{G}$ has the specification property: there is $\tau\in \mathbb{N}$ such that for every $w,w'\in \mathcal{G}$, there is $u\in \mathcal{L}_\tau$ such that $wuw'\in \mathcal{G}$.* *Proof.* For the first claim, if $w$ does not contain $v$ then we take $u^p = w$ and set $u^g$ and $u^s$ to be the empty word. If $w$ does contain $v$, then we take $u^p$ to be the initial segment of $w$ preceding the first appearance of $v$, we take $u^s$ to be the final segment of $w$ following the last appearance of $v$, and we take $u^g$ to be the subword of $w$ lying between $u^p$ and $u^s$. For the second claim, let $u\in \mathcal{L}$ be provided by Lemma [Lemma 24](#lem:sync-spec){reference-type="ref" reference="lem:sync-spec"} and let $\tau = |u|$. Then given any $w,w'\in \mathcal{G}$, we claim that $wuw'\in \mathcal{G}$. Indeed, by the definition of $\mathcal{G}$ we have $w = qv$ and $w' = vq'$ for some $q,q'\in \mathcal{L}$, and thus $wuw' = qvuvq'$. By the synchronizing property, since $qv\in \mathcal{L}$ and $vuv\in \mathcal{L}$ we deduce that $qvuv\in \mathcal{L}$, and since $vq'\in \mathcal{L}$ we apply the property again to get $wuw' = qvuvq'\in \mathcal{L}$. This word starts and ends with $v$ since $w$ and $w'$ do, so it lies in $\mathcal{G}$. ◻ Now we are in a position to carry out Step 2 and obtain uniqueness and uniform counting bounds. 
Start by observing that by Proposition [Proposition 25](#prop:decomp-spec){reference-type="ref" reference="prop:decomp-spec"} and [\[eqn:sync-assumption\]](#eqn:sync-assumption){reference-type="eqref" reference="eqn:sync-assumption"}, the conditions of [@CT13 Theorem C] are satisfied, with one exception: Condition (I) there asks that specification hold not just for $\mathcal{G}$, but also for the sets $\mathcal{G}(M)=\{pwq\in\mathcal{L}: p,q\in\mathcal{B},\,w\in\mathcal{G},\,|p|,|q|\leq M\}$, for each $M$. However, it was later proved by Pacifico, Yang, and Yang [@PYY22] that it suffices to have specification for $\mathcal{G}$ itself, and in particular [@CT13 Theorem C] applies to our setting,[^5] proving that there is a unique equilibrium measure. Moreover, [@CT13 Proposition 5.3] establishes the upper counting bound $\overline{Q}<\infty$. We will also need uniform lower counting bounds on $\mathcal{G}$; these follow from similar arguments to those in [@CT13], but cannot be deduced directly from what is written there, so we provide a proof. **Proposition 26**. 
*If equation [\[eqn:sync-assumption\]](#eqn:sync-assumption){reference-type="eqref" reference="eqn:sync-assumption"} holds, then $\overline{Q}(\mathcal{G}) := \varlimsup_{n\to\infty} Q_n(\mathcal{G}) > 0$.* *Proof.* Using the decomposition $\mathcal{B}\mathcal{G}\mathcal{B}$ provided by Proposition [Proposition 25](#prop:decomp-spec){reference-type="ref" reference="prop:decomp-spec"}, we have $$\label{eqn:QnL} Q_n(\mathcal{L}) \leq \sum_{i+j+k=n} Q_i(\mathcal{B})Q_j(\mathcal{G})Q_k(\mathcal{B}) =\sum_{\ell=0}^n Q_{n-\ell}(\mathcal{G}) \sum_{i=0}^\ell Q_i(\mathcal{B})Q_{\ell-i}(\mathcal{B}).$$ Let $R_\ell(\mathcal{B}) := \sum_{i=0}^\ell Q_i(\mathcal{B}) Q_{\ell-i}(\mathcal{B})$, and observe that just as in [\[eqn:ci-trick\]](#eqn:ci-trick){reference-type="eqref" reference="eqn:ci-trick"}, we have $$\label{eqn:Rell} \sum_{\ell=0}^\infty R_\ell(\mathcal{B}) = \Big( \sum_{i=0}^\infty Q_i(\mathcal{B}) \Big)^2 < \infty.$$ Thus $S := \max_\ell R_\ell(\mathcal{B}) < \infty$. Since $\overline{Q}<\infty$, we have $K := \max_j Q_j(\mathcal{G}) < \infty$. By [\[eqn:Rell\]](#eqn:Rell){reference-type="eqref" reference="eqn:Rell"}, there is $N\in \mathbb{N}$ such that $\sum_{\ell=N}^\infty R_\ell(\mathcal{B}) < 1/(2K)$, and so returning to [\[eqn:QnL\]](#eqn:QnL){reference-type="eqref" reference="eqn:QnL"} and recalling [\[eqn:Ln-geq\]](#eqn:Ln-geq){reference-type="eqref" reference="eqn:Ln-geq"}, we obtain $$\begin{aligned} 1 &\leq Q_n(\mathcal{L}) \leq \sum_{\ell=0}^n Q_{n-\ell}(\mathcal{G}) R_\ell(\mathcal{B}) \leq \sum_{\ell=0}^{N-1} Q_{n-\ell}(\mathcal{G}) S + \sum_{\ell=N}^n K R_\ell(\mathcal{B}) \\ &\leq SN \max \{Q_j(\mathcal{G}) : n-N < j \leq n \} + K/(2K).\end{aligned}$$ The last term is equal to $\frac 12$; subtracting it from both sides and dividing by $SN$ gives $$\max\{Q_j(\mathcal{G}) : n-N < j \leq n \} \geq 1/(2SN).$$ Since $n$ was arbitrary this proves Proposition [Proposition 26](#prop:lambda-snyc-pos){reference-type="ref" reference="prop:lambda-snyc-pos"}. 
◻ We have now completed Step 2 of the proof, and proceed to Step 3, the task of obtaining lower bounds on $m^\mathrm{u}_x$ and $m^\mathrm{s}_x$ when $x\in [v]^+$. With Proposition [Proposition 26](#prop:lambda-snyc-pos){reference-type="ref" reference="prop:lambda-snyc-pos"} in hand, we can obtain these using Theorem [Theorem 5](#thm:finite){reference-type="ref" reference="thm:finite"}; to show that both of these measures are nonzero, it suffices to show that $\overline{Q}_x^{\pm}>0$ for all $x\in [v]^+$. Fix $x\in [v]^+$, and let $u\in \mathcal{L}$ be as in Lemma [Lemma 24](#lem:sync-spec){reference-type="ref" reference="lem:sync-spec"} (so $vuv\in \mathcal{L}$). Given $w\in \mathcal{G}$, let $y_w$ be the periodic sequence $\cdots uwu.wuwu\cdots$, and let $z_w = [x,y_w] \in W^\mathrm{u}_\mathrm{loc}(x) \cap [w]^+$. Thus $$\label{eqn:Phix} \Phi^+_x(w) \geq S_{|w|} \varphi(z_w) \geq \Phi(w) - C,$$ where $C$ is the constant from the Bowen property. From the definitions of $\Lambda_n^+(x)$ and $\overline{Q}_x^+$ in [\[eqn:Lambda+\]](#eqn:Lambda+){reference-type="eqref" reference="eqn:Lambda+"} and [\[eqn:Q+\]](#eqn:Q+){reference-type="eqref" reference="eqn:Q+"} we now have $$\overline{Q}_x^+ = \varlimsup_{n\to\infty} \sum_{w\in \mathcal{L}_n} e^{\Phi_x^+(w) - nP} \geq \varlimsup_{n\to\infty} \sum_{w\in \mathcal{G}_n} e^{\Phi(w) - C - nP} = e^{-C} \overline{Q}(\mathcal{G}) > 0.$$ By Theorem [Theorem 5](#thm:finite){reference-type="ref" reference="thm:finite"}, this implies that $m^\mathrm{u}_x(W^\mathrm{u}_\mathrm{loc}(x)) \geq e^{-C} \overline{Q}(\mathcal{G})/\overline{Q}> 0$. The argument for $m^\mathrm{s}_x$ is similar: fix $x\in [v]^+$ and given $w\in \mathcal{G}$, let $z_w = [y_w,x] \in W^\mathrm{s}_\mathrm{loc}(x) \cap [wu]^-$. 
In place of [\[eqn:Phix\]](#eqn:Phix){reference-type="eqref" reference="eqn:Phix"} we have $$\Phi_x^-(wu) \geq S_{|wu|}\varphi(\sigma^{-|wu|}(z_w)) \geq \Phi(wu) - C,$$ and we get $$\begin{aligned} \overline{Q}_x^- &\geq \varlimsup_{n\to\infty} \sum_{w\in \mathcal{G}_n} e^{\Phi(wu) - C - (n+|u|)P} \\ &\geq \varlimsup_{n\to\infty} \sum_{w\in \mathcal{G}_n} e^{\Phi(w) + \Phi(u) - 2C - nP - |u|P} \geq e^{\Phi(u) - 2C - |u|P} \overline{Q}(\mathcal{G}) > 0.\end{aligned}$$ This completes Step 3 of the proof; we have now shown that $\lambda([v]^+)>0$. Proceeding to Step 4, we show integrability of the return time by using [\[eqn:sync-assumption\]](#eqn:sync-assumption){reference-type="eqref" reference="eqn:sync-assumption"} and the Gibbs bound [\[eqn:lambda-Gibbs\]](#eqn:lambda-Gibbs){reference-type="eqref" reference="eqn:lambda-Gibbs"} from Corollary [Corollary 20](#cor:lambda-Gibbs){reference-type="ref" reference="cor:lambda-Gibbs"}; note that the constant $K = (\#A) e^{2C} \underline{Q}^2$ appearing there is finite because $\overline{Q}<\infty$. Writing $Y_n=\{x\in [v]^+:\tau(x)>n\}$, we observe that $$Y_n \subset \bigcup_{u\in \mathcal{B}_n} [vu]^+$$ (note that the cylinder is empty if $vu\notin \mathcal{L}$), and thus [\[eqn:lambda-Gibbs\]](#eqn:lambda-Gibbs){reference-type="eqref" reference="eqn:lambda-Gibbs"} gives $$\begin{aligned} \lambda(Y_n) &\leq \sum_{u\in \mathcal{B}_n} \lambda([vu]^+) \leq \sum_{u\in \mathcal{B}_n} K e^{\Phi(vu) - |vu|P} \\ &\leq \sum_{u\in\mathcal{B}_n} K e^{\Phi(v) + \Phi(u) - |v|P - |u|P} = K e^{\Phi(v) - |v|P} Q_n(\mathcal{B}).\end{aligned}$$ Thus the integral of the return time is $$\int \tau\,d\lambda = \sum_{n=0}^\infty \lambda(Y_n) \leq K e^{\Phi(v) - |v|P} \sum_{n=0}^\infty Q_n(\mathcal{B}) < \infty.$$ This completes Step 4. For Step 5, which completes the proof, it suffices to check the conditions of Theorem [Theorem 18](#thm:push-product){reference-type="ref" reference="thm:push-product"}.
We showed in Step 3 that $\lambda$ is positive, in Step 4 that $\int\tau\,d\lambda<\infty$, and in Step 2 that $\overline{Q}<\infty$. Thus Theorem [Theorem 18](#thm:push-product){reference-type="ref" reference="thm:push-product"} applies, showing that the measure $\mu$ defined in [\[eqn:es-mu\]](#eqn:es-mu){reference-type="eqref" reference="eqn:es-mu"} is positive, finite, $\sigma$-invariant, and its normalization is an equilibrium measure with local product structure. Since we proved earlier that there is a unique equilibrium measure, this completes the proof of Theorem [Theorem 22](#thm:sync){reference-type="ref" reference="thm:sync"}. # Weights for leaf measures in Theorem [Theorem 5](#thm:finite){reference-type="ref" reference="thm:finite"} {#sec:shift-mxu-pf} Observe that for any potential $\varphi$, if $\bar{\varphi}=\varphi-P(\varphi)$, then $P(\bar{\varphi})=0$. Moreover, under this construction $\varphi$ and $\bar{\varphi}$ produce the same equilibrium measure, since for $w\in\mathcal{L}_n$ $$\Phi^+_x(w)-nP(\varphi)=\bar\Phi^+_x(w)=\sup\{S_{|w|}\bar\varphi(y):y\in[w]^+\cap W^\mathrm{u}_\mathrm{loc}(x)\}.$$ To simplify the notation in the proofs we will replace $\varphi$ with $\bar{\varphi}$ and make the following assumption for the remainder of the paper. $$\label{eqn:0-P-assumption} \textbf{Standing Assumption: }P=0$$ For nonzero pressure the arguments are similar, with an additional pressure term: wherever $\Phi^{\pm}_x(w)$ or $\varphi(x)$ appears, one would write $\Phi^{\pm}_x(w)-|w|P$ or $\varphi(x)-P$, respectively. Equations [\[eqn:finite-u\]](#eqn:finite-u){reference-type="eqref" reference="eqn:finite-u"} and [\[eqn:finite-s\]](#eqn:finite-s){reference-type="eqref" reference="eqn:finite-s"} are equivalent: once one is proved, the other follows by reversing the direction of the shift.
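As a concrete sanity check on this normalization, the following sketch (a toy example of our own, not from the text: it assumes a full shift on three symbols with a potential depending only on the zeroth coordinate, so that the supremum defining $\Phi(w)$ over a cylinder is attained exactly) verifies that after replacing $\varphi$ by $\varphi-P$ the partition sums $\sum_{w\in\mathcal{L}_n}e^{\Phi(w)}$ are identically $1$:

```python
import itertools
import math

# Toy model: full shift on alphabet {0, 1, 2} with the locally constant
# potential phi(x) = weights[x_0].  For such a potential the supremum over
# a cylinder [w]^+ is attained exactly, so Phi(w) = sum(weights[a] for a in w).
weights = {0: 0.3, 1: -0.5, 2: 1.1}
alphabet = list(weights)

# Pressure of a locally constant potential on the full shift.
P = math.log(sum(math.exp(v) for v in weights.values()))

def Q(n, shift=0.0):
    """Partition sum: sum over words w of length n of e^{Phi(w) - n*shift}."""
    return sum(
        math.exp(sum(weights[a] for a in w) - n * shift)
        for w in itertools.product(alphabet, repeat=n)
    )

for n in range(1, 6):
    print(n, round(Q(n), 4), round(Q(n, shift=P), 10))  # raw vs normalized
```

For a locally constant potential the normalized sums factor exactly, which is why they equal $1$; for a genuinely Hölder potential one would only expect the normalized sums to stay bounded.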
In this section we will prove the bounds on the weights of the unstable leaf measure in [\[eqn:finite-u\]](#eqn:finite-u){reference-type="eqref" reference="eqn:finite-u"}. To obtain these bounds, we first need to establish a pair of inequalities, which should be compared to [\[eqn:almost-add\]](#eqn:almost-add){reference-type="eqref" reference="eqn:almost-add"}. First, recall from [\[eqn:x.w\]](#eqn:x.w){reference-type="eqref" reference="eqn:x.w"} that for $x\in X$ and $w\in\mathcal{L}_n$ with $[w]^+ \cap W^\mathrm{u}_\mathrm{loc}(x) \neq\emptyset$, we write $x.w=\dots x_{-2} x_{-1} w_1 w_2\dots w_n$, and define $$\Phi^+_{x.wu_1}(u)=\sup\{S_{|u|}\varphi(y): y\in W^\mathrm{u}_\mathrm{loc}(x.wu_1)\cap[u]^+\}.$$ **Lemma 27**. *For $w,u,v\in\mathcal{L}$ such that $wu\in\mathcal{L}$ and $uv\in\mathcal{L}$, we have $$\begin{aligned} \Phi^+_{x}(wu)&\leq \Phi^+_{x}(w)+\Phi_{x.wu_1}^+(u)\label{eqn:upper-add-mxu}\\ \Phi^+_{x}(uv)&\geq \Phi^+_{x}(u)+\Phi_{x.uv_1}^+(v)-V_x^+(u) \label{eqn:lower-add-mxu} \end{aligned}$$ where $V^+_x(u)=\sup_{y,z\in W^\mathrm{u}_\mathrm{loc}(x)\cap[u]^+}(S_{| u|}\varphi(y)-S_{| u|}\varphi(z))$.* *Proof.* Let $y\in W^\mathrm{u}_\mathrm{loc}(x)\cap [wu]^+$ and observe that $$S_{|wu|}\varphi(y)=S_{|w|}\varphi(y)+S_{| u|}\varphi(\sigma^{| w| }y).$$ If $y'\in W^\mathrm{u}_\mathrm{loc}(x)\cap [w]^+$ and $z\in W^\mathrm{u}_\mathrm{loc}(x.wu_1)\cap [u]^+$, then taking supremums yields $$\begin{aligned} \Phi_x^+(wu)&=\sup_{y}S_{|wu|}\varphi(y)\\ &\leq \sup_{y',z}S_{|w|}\varphi(y')+S_{| u|}\varphi(z) =\Phi_{x}^+(w)+\Phi_{x.wu_1}^+(u).
\end{aligned}$$ Similarly, if $y\in W^\mathrm{u}_\mathrm{loc}(x)\cap[uv]^+$ and $y'\in W^\mathrm{u}_\mathrm{loc}(x)\cap[u]^+$, we have $$\begin{aligned} S_{|u|}\varphi(y)+S_{|v|}\varphi(\sigma^{|u| }y)&=S_{|u|}\varphi(y)-S_{|u|}\varphi(y') + S_{|u|}\varphi(y')+S_{|v|}\varphi(\sigma^{|u| }y)\\ &\geq -V_x^+(u)+ S_{|u|}\varphi(y')+S_{|v|}\varphi(\sigma^{|u|}y). \end{aligned}$$ If $z\in W^\mathrm{u}_\mathrm{loc}(x.uv_1)\cap [v]^+$, then taking supremums gives us $$\begin{aligned} \Phi_x^+(uv)&=\sup_{y}S_{|u|}\varphi(y)+S_{|v|}\varphi(\sigma^{|u|}y)\\ &\geq \sup_{y',z}-V_x^+(u)+ S_{|u|}\varphi(y')+S_{|v|}\varphi(z) = \Phi_{x}^+(u)+\Phi_{x.uv_1}^+(v)-V_x^+(u), \end{aligned}$$ where the final equality holds because every $z\in W^\mathrm{u}_\mathrm{loc}(x.uv_1)\cap[v]^+$ is equal to $\sigma^{|u|}y$ for some $y\in W^\mathrm{u}_\mathrm{loc}(x)\cap [uv]^+$. ◻ Now we prove the inequalities in Theorem [Theorem 5](#thm:finite){reference-type="ref" reference="thm:finite"} for unstable leaves. *Proof.* We begin with the proof of the upper bound in [\[eqn:finite-u\]](#eqn:finite-u){reference-type="eqref" reference="eqn:finite-u"}. For all $n\in\mathbb{N}$, we have $W^\mathrm{u}_\mathrm{loc}(x) \subset X = \bigcup_{u\in \mathcal{L}_n} [u]^+$, so $\mathcal{L}_n \in \mathbb{E}^+(W^\mathrm{u}_\mathrm{loc}(x),N)$ whenever $n\geq N$. Thus $$\label{eqn:2-Phi-upper} \inf\left\{\sum_{u\in\mathcal{E}}e^{\Phi^+_{x }(u)}:\mathcal{E}\in\mathbb{E}^+(W^\mathrm{u}_\mathrm{loc}(x),N)\right\}\leq \sum_{u\in\mathcal{L}_{n}}e^{\Phi^+_{x}(u)} = \Lambda_n^+(x).$$ Sending $n\to\infty$ along a subsequence realizing the lower limit and recalling [\[eqn:Q+\]](#eqn:Q+){reference-type="eqref" reference="eqn:Q+"}, this gives $$m^\mathrm{u}_x(W^\mathrm{u}_\mathrm{loc}(x))\leq \underline{Q}^+_{x},$$ which proves the upper bound in equation [\[eqn:finite-u\]](#eqn:finite-u){reference-type="eqref" reference="eqn:finite-u"}.
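The identity $S_{|wu|}\varphi(y)=S_{|w|}\varphi(y)+S_{|u|}\varphi(\sigma^{|w|}y)$ at the heart of Lemma 27 is the cocycle property of Birkhoff sums. A minimal finite check (with a hypothetical two-coordinate potential of our own choosing, used only for illustration) is:

```python
# Finite check of the Birkhoff cocycle identity
#   S_{m+n} phi(y) = S_m phi(y) + S_n phi(sigma^m y),
# evaluated on a word long enough that every term is defined.

def phi(word, i):
    # Hypothetical potential reading two symbols: phi(sigma^i y) = f(y_i, y_{i+1}).
    return 0.7 * word[i] - 0.4 * word[i + 1]

def S(word, n, start=0):
    """Birkhoff sum of phi along the shifted word sigma^start(y)."""
    return sum(phi(word, start + k) for k in range(n))

y = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1]
m, n = 4, 5
lhs = S(y, m + n)                  # S_{m+n} phi(y)
rhs = S(y, m) + S(y, n, start=m)   # S_m phi(y) + S_n phi(sigma^m y)
assert abs(lhs - rhs) < 1e-12
print(lhs, rhs)
```

Taking supremums over the two summands separately is what turns this exact identity into the almost-additivity inequalities of the lemma.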
For the lower bound in [\[eqn:finite-u\]](#eqn:finite-u){reference-type="eqref" reference="eqn:finite-u"}, we begin with an inequality that lets us express a fixed cylinder as a disjoint union of longer cylinders of uniform length. Write $\mathcal{S}_n(x.u):=\{v\in\mathcal{L}_n:[v]^+ \cap W^\mathrm{u}_\mathrm{loc}(x.u) \neq\emptyset\}$; then by inequality [\[eqn:upper-add-mxu\]](#eqn:upper-add-mxu){reference-type="eqref" reference="eqn:upper-add-mxu"}, we have $$\begin{aligned} \sum_{v\in\mathcal{S}_n(x.u)}e^{\Phi^+_{x}(uv)} & \leq \sum_{v\in\mathcal{S}_n(x.u)}e^{\Phi^+_{x}(u)+\Phi^+_{x.uv_1}(v)} = e^{\Phi^+_{x}(u)} \sum_{v\in\mathcal{S}_n(x.u)}e^{\Phi^+_{x.uv_1}(v)} \\ & \leq e^{\Phi^+_{x}(u)}\sum_{v\in\mathcal{L}_n}e^{\Phi(v)} = e^{\Phi^+_{x}(u)}Q_n. \end{aligned}$$ This implies that $$\label{eqn:unfm-cyl-len} \left(Q_n\right)^{-1}\sum_{v\in\mathcal{S}_n(x.u)}e^{\Phi^+_{x}(uv)}\leq e^{\Phi^+_{x}(u)}.$$ Next, we will take a cover of $W^\mathrm{u}_\mathrm{loc}(x)$ and lengthen the words of the cylinders so they are all the same length, then apply the bound in [\[eqn:unfm-cyl-len\]](#eqn:unfm-cyl-len){reference-type="eqref" reference="eqn:unfm-cyl-len"}. By compactness of $W^\mathrm{u}_\mathrm{loc}(x)$, it suffices to consider only finite covers of $W^\mathrm{u}_\mathrm{loc}(x)$ in estimating $m^\mathrm{u}_x(W^\mathrm{u}_\mathrm{loc}(x))$. Given such a finite cover $\mathcal{E}$ and any $n \geq \max \{ |u| : u\in \mathcal{E}\}$ (which is finite since $\mathcal{E}$ is), we produce a new cover $\{uv : u \in \mathcal{E}, v\in \mathcal{S}_{n-|u|}(x.u)\}$. All words of this new cover have length $n$. Now [\[eqn:unfm-cyl-len\]](#eqn:unfm-cyl-len){reference-type="eqref" reference="eqn:unfm-cyl-len"} gives $$\label{eqn:2-w-factor} \sum_{u\in\mathcal{E}} e^{\Phi^+_{x}(u)}\geq \sum_{u\in\mathcal{E}}\left(Q_{n-|u|}\right)^{-1}\sum_{v\in\mathcal{S}_{n-| u|}(x.u)}e^{\Phi^+_{x}(uv)}.$$ We now bound $Q_{n-|u|}$ uniformly so that it can be factored out of the sum.
For all $Q>\overline{Q}$ there exists an $n_0$ such that $Q>Q_k$ for all $k\geq n_0$. If we restrict $n$ to be greater than $n_0+\max_{u\in\mathcal{E}}|u|$, then [\[eqn:2-w-factor\]](#eqn:2-w-factor){reference-type="eqref" reference="eqn:2-w-factor"} implies that $$\label{eqn:2-w-arb} \sum_{u\in\mathcal{E}}e^{\Phi^+_{x}(u)}\geq {Q}^{-1}\sum_{u\in\mathcal{E}}\sum_{v\in\mathcal{S}_{n-|u|}(x.u)}e^{\Phi^+_{x}(uv)} \geq{Q}^{-1}\sum_{w\in\mathcal{L}_n}e^{\Phi^+_{x}(w)}.$$ The new index in the last step follows from the fact that for each $u$, the word $uv$ has length $n$, and the collection of $[uv]^+$ covers $W^\mathrm{u}_\mathrm{loc}(x)$. Additionally, any word $w\in\mathcal{L}_n$ such that $W^\mathrm{u}_\mathrm{loc}(x)\cap [w]^+=\emptyset$ does not contribute to the sum. Taking a subsequence $n_k\to\infty$ realizing the upper limit in [\[eqn:2-w-arb\]](#eqn:2-w-arb){reference-type="eqref" reference="eqn:2-w-arb"}, we obtain $$m^\mathrm{u}_x(W^\mathrm{u}_\mathrm{loc}(x))\geq{Q}^{-1}\overline{Q}^+_{x}.$$ Since $Q>\overline{Q}$ was arbitrary, this proves the lower bound in [\[eqn:finite-u\]](#eqn:finite-u){reference-type="eqref" reference="eqn:finite-u"}. ◻ The proof for [\[eqn:finite-s\]](#eqn:finite-s){reference-type="eqref" reference="eqn:finite-s"} is analogous. # Scaling of leaf measures in Theorem [Theorem 6](#thm:dynamics){reference-type="ref" reference="thm:dynamics"} {#sec:shift-mxs-pf} We will prove the scaling property in Theorem [Theorem 6](#thm:dynamics){reference-type="ref" reference="thm:dynamics"} for unstable leaves by first proving a preliminary lemma. The approach for the stable setting is identical after reversing the direction of the shift. We will use the notation $\varphi(x)=c\pm\epsilon$ as a shorthand for $c-\epsilon\leq \varphi(x) \leq c+\epsilon$. **Lemma 28**. *Let $Z\subset W^\mathrm{u}_\mathrm{loc}(x)$ be such that $\sigma Z\subset W^\mathrm{u}_\mathrm{loc}(\sigma x)$.
Assume further that there are a constant $c\in\mathbb{R}$, an $\epsilon>0$, and $N_0\in\mathbb{N}$ such that for any $w\in\mathcal{L}_{\geq N_0}$ with $[w]^+\cap Z\neq\emptyset$ we have $\varphi(y)=c\pm\epsilon$ for all $y\in[w]^+ \cap W^\mathrm{u}_\mathrm{loc}(x)$. Then we have $$m^\mathrm{u}_{\sigma x}(\sigma Z)=e^{\pm2\epsilon}\int_Ze^{-\varphi(y)}\,dm^\mathrm{u}_x(y).$$* *Proof.* If $|w|\geq N_0$ and $[w]^+\cap Z\neq\emptyset$, then $\varphi=c\pm\epsilon$ on $[w]^+\cap W^\mathrm{u}_\mathrm{loc}(x)$, so $$\begin{split}\label{eqn:Phi-plus-cont} \Phi_x^+(w)&=\sup\left\{S_{|w|}\varphi(y): y\in [w]^+\cap W^\mathrm{u}_\mathrm{loc}(x)\right\}\\ & = \sup\left\{S_{|w|-1}\varphi\left(\sigma y\right)+c\pm\epsilon: y\in [w]^+\cap W^\mathrm{u}_\mathrm{loc}(x)\right\}\\ &=c\pm\epsilon+\sup\left\{S_{|w|-1}\varphi(y'): y'\in [\sigma w]^+\cap W^\mathrm{u}_\mathrm{loc}(\sigma x)\right\}\\ &=c\pm\epsilon+\Phi_{\sigma x}^+(\sigma w). \end{split}$$ Because $\sigma Z\subset W^\mathrm{u}_\mathrm{loc}(\sigma x)$, every $z\in Z$ satisfies $z_{1}=x_{1}$. This implies that for a sufficiently long word $w$ with $[w]^+\cap Z\neq\emptyset$, the map $\sigma$ is a bijection between $[w]^+\cap W^\mathrm{u}_\mathrm{loc}(x)$ and $[\sigma w]^+\cap W^\mathrm{u}_\mathrm{loc}(\sigma x)$. Moreover, $\sigma$ acts as a bijection between covers in $\mathbb{E}^+(Z,N)$ and $\mathbb{E}^+(\sigma Z,N-1)$. That is, if $w\in\mathcal{E}$ for some cover in $\mathbb{E}^+(Z,N)$, then $[\sigma w]^+\cap \sigma Z\neq\emptyset$. Likewise, if $w\in\mathcal{E}'$ for some cover in $\mathbb{E}^+(\sigma Z,N-1)$, then $[x_0w]^+\cap Z\neq\emptyset$.
Using equation [\[eqn:Phi-plus-cont\]](#eqn:Phi-plus-cont){reference-type="eqref" reference="eqn:Phi-plus-cont"} in the definition of $m^\mathrm{u}_x(Z)$ for $N> N_0$, we get $$\label{eqn:shift-mxu-bound} \begin{split} m^\mathrm{u}_x(Z)&=\lim_{N\to\infty}\inf\left\{\sum_{w\in\mathcal{E}}e^{\Phi_{x}^+(w)}: \mathcal{E}\in\mathbb{E}^+(Z, N)\right\}\\ &=\lim_{N\to\infty}\inf\left\{\sum_{w\in\mathcal{E}}e^{c\pm\epsilon+\Phi_{\sigma x}^+(\sigma w)}:\mathcal{E}\in\mathbb{E}^+(Z,N)\right\} \\ &=e^{c\pm\epsilon}\lim_{N\to\infty}\inf\left\{\sum_{u\in\mathcal{E}'}e^{\Phi_{\sigma x}^+(u)}: \mathcal{E}'\in\mathbb{E}^+(\sigma Z, N-1)\right\}\\ & = e^{c\pm\epsilon}m^\mathrm{u}_{\sigma x}(\sigma Z) \end{split}$$ where $u=\sigma w$. Additionally, because $\varphi(y) =c\pm\epsilon$ on $Z$, we have that $$\label{eqn:shift-mxu-int} \int_{Z}e^{-\varphi(y)}\,dm^\mathrm{u}_x(y)=\int_{Z}e^{-c\pm\epsilon}\,dm^\mathrm{u}_x(y)=e^{-c\pm\epsilon}m^\mathrm{u}_{x}(Z).$$ Combining the result from equation [\[eqn:shift-mxu-bound\]](#eqn:shift-mxu-bound){reference-type="eqref" reference="eqn:shift-mxu-bound"} and [\[eqn:shift-mxu-int\]](#eqn:shift-mxu-int){reference-type="eqref" reference="eqn:shift-mxu-int"} we get $$\begin{aligned} \int_{Z}e^{-\varphi(y)}\,dm^\mathrm{u}_x(y)&=e^{-c\pm\epsilon}m^\mathrm{u}_x(Z)\\ &=e^{-c\pm\epsilon}e^{c\pm\epsilon}m^\mathrm{u}_{\sigma x}(\sigma Z) =e^{\pm2\epsilon}m^\mathrm{u}_{\sigma x}(\sigma Z), \end{aligned}$$ which completes the proof of the lemma. ◻ To prove Theorem [Theorem 6](#thm:dynamics){reference-type="ref" reference="thm:dynamics"}, we consider a set $Z\subset W^\mathrm{u}_\mathrm{loc}(x)$ satisfying $\sigma Z\subset W^\mathrm{u}_\mathrm{loc}(\sigma x)$. For $\epsilon>0$, let $Z_n=Z\cap \varphi^{-1}([n\epsilon/2,(n+1)\epsilon/2))$ and note that the sets $Z_n$ are measurable, mutually disjoint, and $Z=\bigcup_{n\in\mathbb{Z}}Z_n$. 
Moreover, since $\varphi$ is uniformly continuous, there is $N\in\mathbb{N}$ such that $|\varphi(y) - \varphi(z)|<\epsilon/2$ whenever $y_{[-N,N]} = z_{[-N,N]}$, and in particular given any $w\in \mathcal{L}$ with $|w|\geq N$ and $[w]^+ \cap Z_n \neq\emptyset$, we can fix $y\in [w]^+\cap Z_n$ and observe that any $z\in [w]^+ \cap W^\mathrm{u}_\mathrm{loc}(x)$ has $$|\varphi(z) - n\epsilon/2| \leq |\varphi(z) - \varphi(y)| + |\varphi(y) - n\epsilon/2| \leq \epsilon/2 + \epsilon/2 = \epsilon.$$ Thus each $Z_n$ satisfies the conditions of Lemma [Lemma 28](#lem:shift-mxu-scales){reference-type="ref" reference="lem:shift-mxu-scales"}, so $$\begin{aligned} m^\mathrm{u}_{\sigma x}(\sigma Z)&=\sum_{n\in\mathbb{Z}}m^\mathrm{u}_{\sigma x}(\sigma Z_n)\\ &= e^{\pm2\epsilon}\sum_{n\in\mathbb{Z}}\int_{Z_n}e^{-\varphi(y)}\,dm^\mathrm{u}_x(y) = e^{\pm2\epsilon}\int_Ze^{-\varphi(y)}\,dm^\mathrm{u}_x(y).\end{aligned}$$ Letting $\epsilon\to0$ proves [\[eqn:shift-mxu-scales\]](#eqn:shift-mxu-scales){reference-type="eqref" reference="eqn:shift-mxu-scales"}. The proof of [\[eqn:shift-mxs-scales\]](#eqn:shift-mxs-scales){reference-type="eqref" reference="eqn:shift-mxs-scales"} is analogous. # Uniformly continuous holonomies; proof of Theorem [Theorem 14](#thm:shift-cts){reference-type="ref" reference="thm:shift-cts"} {#sec:shift-cts-pf} In this section we prove the results in §[1.8](#sec:holonomies){reference-type="ref" reference="sec:holonomies"} about uniformly continuous holonomies, starting with Propositions [Proposition 9](#prop:open-hol){reference-type="ref" reference="prop:open-hol"} and [Proposition 10](#prop:zero-hol){reference-type="ref" reference="prop:zero-hol"}, then proceeding to Theorem [Theorem 14](#thm:shift-cts){reference-type="ref" reference="thm:shift-cts"}. 
## Examples with uniformly continuous holonomies Propositions [Proposition 9](#prop:open-hol){reference-type="ref" reference="prop:open-hol"} and [Proposition 10](#prop:zero-hol){reference-type="ref" reference="prop:zero-hol"} both rely on the following fact. **Lemma 29**. *Given any rectangle $R$, any $y,z\in R$, and any $Y \subset W^\mathrm{u}_R(y)$, we have $\mathbb{E}^+(Y,N) = \mathbb{E}^+(\pi^s_{y,z}(Y),N)$ for all $N\in\mathbb{N}$.* *Proof.* Given any $N\in \mathbb{N}$ and $\mathcal{E}\in \mathbb{E}^+(Y,N)$, for every $x\in \pi^s_{y,z}(Y)$ we have $x' = (\pi^s_{y,z})^{-1}(x) \in Y$ and thus $x' \in [w]^+$ for some $w\in \mathcal{E}$, but since $x' \in W^\mathrm{s}_\mathrm{loc}(x)$ this implies that $x \in [w]^+$ as well, so $\mathcal{E}\in \mathbb{E}^+(\pi^s_{y,z}(Y),N)$. This proves one inclusion, and the other follows by symmetry. ◻ With this lemma in hand, Proposition [Proposition 10](#prop:zero-hol){reference-type="ref" reference="prop:zero-hol"} (the case $\varphi\equiv 0$) is easy to prove. *Proof of Proposition [Proposition 10](#prop:zero-hol){reference-type="ref" reference="prop:zero-hol"}.* By Lemma [Lemma 29](#lem:same-E){reference-type="ref" reference="lem:same-E"}, we have $\mathbb{E}^+(Y,N) = \mathbb{E}^+(\pi^s_{y,z}(Y),N)$. For every $\mathcal{E}\in \mathbb{E}^+(Y,N)$ and every $w\in \mathcal{E}$, we have $\Phi^+_y(w) = \Phi^+_z(w) = -|w| P$ since $\varphi\equiv 0$, and thus $\sum_{w\in \mathcal{E}} e^{\Phi^+_y(w)} = \sum_{w\in \mathcal{E}} e^{\Phi^+_z(w)}$. Taking an infimum over all such $\mathcal{E}$ gives $m^\mathrm{u}_y(Y) = m^\mathrm{u}_z(\pi^s_{y,z}(Y))$. ◻ Now we prove uniform continuity of holonomies for open rectangles when $\varphi$ has the Walters property. 
*Proof of Proposition [Proposition 9](#prop:open-hol){reference-type="ref" reference="prop:open-hol"}.* By the Walters property, given $\epsilon>0$, there is $k\in \mathbb{N}$ such that for all $y,z\in R$ with $y_i = z_i$ for all $i\geq -k$, we have $|S_n\varphi(y) - S_n\varphi(z)| \leq \epsilon$ for all $n\in\mathbb{N}$. We claim that $\delta = 2^{-k}$ satisfies the required conclusion in Definition [Definition 8](#def:cts-hol){reference-type="ref" reference="def:cts-hol"}; that is, that for any open rectangle $R$ and any $y,z\in R$ with $d(y,z) < \delta$, we have $m^\mathrm{u}_y(Y) = e^{\pm \epsilon} m^\mathrm{u}_z(\pi_{y,z}^s(Y))$ for all measurable $Y\subset W^\mathrm{u}_R(y)$. It suffices to show this conclusion in the case when $R = [u]^- \cap [v]^+$ for some $u,v\in \mathcal{L}$. Indeed, *any* open rectangle $R$ can be written as a disjoint union $R = \bigsqcup_{i=1}^\infty R_i$, where each $R_i$ is of the form $[u]^- \cap [v]^+ \cap R = [u]^- \cap [v]^+$, and then if we have the result for each $R_i$, we can deduce that if $d(y,z) < \delta$, then there are $y_i \in W^\mathrm{u}_R(y) \cap R_i$ and $z_i \in W^\mathrm{u}_R(z) \cap R_i$ such that $d(y_i,z_i) < \delta$, so $$m^\mathrm{u}_y(Y) = \sum_{i=1}^\infty m^\mathrm{u}_{y_i}(Y\cap R_i) = \sum_{i=1}^\infty e^{\pm \epsilon} m^\mathrm{u}_{z_i}(\pi_{y,z}^s(Y) \cap R_i) = e^{\pm \epsilon} m^\mathrm{u}_z(\pi_{y,z}^s(Y)).$$ To prove the conclusion when $R = [u]^- \cap [v]^+$, we observe that given any $w\in \mathcal{L}$ with $|w| \geq |v|$ and $[w]^+ \cap R \neq\emptyset$, we in fact have $[u]^- \cap [w]^+ \subset R$. Given any $x\in W^\mathrm{u}_R(y) \cap [w]^+$, we have $x' := \pi^s_{y,z}(x) \in W^\mathrm{u}_R(z) \cap [w]^+$ and so $x_i = x_i'$ for all $i\geq -k$.
By the Walters property, this implies that $$|S_n\varphi(x) - S_n\varphi(x')| \leq \epsilon\text{ for all } n\in \mathbb{N}.$$ Taking a supremum over all $x\in W^\mathrm{u}_R(y) \cap [w]^+$ gives $\Phi_y^+(w) \leq \Phi_{z}^+(w) + \epsilon$, and by the symmetrical result with the roles of $y$ and $z$ reversed we get $$\label{eqn:Phixx'} \Phi_y^+(w) = \Phi_{z}^+(w) \pm \epsilon.$$ (Note that this argument would fail if we did not have $[u]^- \cap [w]^+ \subset R$, since then the holonomy map would only be defined on $W^\mathrm{u}_\mathrm{loc}(y) \cap [w]^+ \cap R$, which might not be all of $W^\mathrm{u}_\mathrm{loc}(y) \cap [w]^+$.) Fix $Y\subset W^\mathrm{u}_R(y)$ and let $N \geq |w|$. Then any cover $\mathcal{E}\in \mathbb{E}^+(Y,N)$ is also a cover of $\pi^s_{y,z}(Y) \subset W^\mathrm{u}_R(z)$, and vice versa, by the product structure, and [\[eqn:Phixx\'\]](#eqn:Phixx'){reference-type="eqref" reference="eqn:Phixx'"} gives $$\begin{aligned} \sum_{w\in\mathcal{E}}e^{\Phi^+_y(w)}=\sum_{w\in\mathcal{E}}e^{\Phi^+_z(w)\pm\epsilon}. \end{aligned}$$ Taking an infimum over covers in $\mathbb{E}^+(Y,N)$ (and hence also over $\mathbb{E}^+(\pi^s_{y,z}(Y),N)$) and then sending $N\to\infty$ gives $m^\mathrm{u}_y(Y)=e^{\pm\epsilon}m^\mathrm{u}_z(\pi^s_{y,z}(Y))$. This proves the conclusion when $R = [u]^- \cap [v]^+$, and by the discussion at the start of the proof, taking countable unions of such rectangles gives the result for arbitrary open rectangles. ◻ ## Proof of Theorem [Theorem 14](#thm:shift-cts){reference-type="ref" reference="thm:shift-cts"} {#proof-of-theorem-thmshift-cts} Now we prove Theorem [Theorem 14](#thm:shift-cts){reference-type="ref" reference="thm:shift-cts"}, that uniform continuity of holonomies is equivalent to uniform convergence and boundedness of $\Delta^s$ together with the formula [\[eqn:shift-RN\]](#eqn:shift-RN){reference-type="eqref" reference="eqn:shift-RN"} for the Radon--Nikodym derivative of the holonomy map.
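The series defining $\Delta^s(x,x')=\sum_{k\geq0}\bigl(\varphi(\sigma^kx)-\varphi(\sigma^kx')\bigr)$ converges when $x'$ lies on the stable leaf of $x$ and $\varphi$ has decaying dependence on far coordinates. The following sketch (a toy potential of our own construction, not the paper's setting) checks numerically that the partial sums $\Delta_N$ are Cauchy, with geometrically decreasing increments:

```python
import random

# Toy check that the partial sums Delta_N(x, x') = S_N phi(x) - S_N phi(x')
# are Cauchy when x and x' agree in all coordinates i >= 0 (same stable
# leaf).  We use a hypothetical potential with geometrically decaying
# dependence on far coordinates, phi(x) ~ sum_k 3^{-|k|} x_k, truncated
# to a finite window so it is computable.

W = 60  # truncation window for the sum defining phi

def phi(x, center):
    """Potential evaluated at the shifted sequence sigma^center(x)."""
    return sum(3 ** (-abs(k)) * x.get(center + k, 0) for k in range(-W, W + 1))

def S(x, N):
    """Birkhoff sum S_N phi(x)."""
    return sum(phi(x, n) for n in range(N))

random.seed(0)
future = {i: random.randint(0, 1) for i in range(200)}
x = {**future, **{i: 0 for i in range(-200, 0)}}    # one past
xp = {**future, **{i: 1 for i in range(-200, 0)}}   # a different past

deltas = [S(x, N) - S(xp, N) for N in range(1, 25)]
gaps = [abs(b - a) for a, b in zip(deltas, deltas[1:])]
# |Delta_{N+1} - Delta_N| = |phi(sigma^N x) - phi(sigma^N x')| decays
# geometrically, since the two sequences differ only in the distant past.
print(gaps[0], gaps[10], gaps[20])
```

The geometric decay of the increments is exactly what fails for a merely continuous potential, which is why uniform convergence of $\Delta^s$ appears as a hypothesis rather than a fact in Theorem 14.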
Before proving the equivalence, we observe an important consequence of the positivity condition in Definition [Definition 13](#def:pos-meas){reference-type="ref" reference="def:pos-meas"}: given any $R\in \mathcal{R}$, any $x\in R$, and any $w\in \mathcal{L}$ such that $[w]^+ \cap W^\mathrm{u}_R(x) \neq \emptyset$, we have $\sigma^n(x) \in \sigma^n(R\cap [w]^+)$, which is itself a rectangle in $\mathcal{R}$ by $\mathcal{L}$-invariance, and thus $$m^\mathrm{u}_{\sigma^n(x)} (\sigma^n(R\cap [w]^+)) > 0$$ by positivity. Applying [\[eqn:mxu-RN-n\]](#eqn:mxu-RN-n){reference-type="eqref" reference="eqn:mxu-RN-n"}, we see that $m^\mathrm{u}_x([w]^+)>0$, and conclude that $m^\mathrm{u}_x$ is fully supported in the following sense. **Lemma 30**. *If $\mathcal{R}$ is an $\mathcal{L}$-invariant family of rectangles on which the family of leaf measures $m^\mathrm{u}_x$ is positive, then for every $R\in \mathcal{R}$ and $x\in R$, the measure $m^\mathrm{u}_x$ gives positive weight to every (relatively) open set in $W^\mathrm{u}_R(x)$.* Now we prove the equivalence of the two conditions in Theorem [Theorem 14](#thm:shift-cts){reference-type="ref" reference="thm:shift-cts"}. Start by assuming that $\Delta^s$ converges uniformly and that [\[eqn:shift-RN\]](#eqn:shift-RN){reference-type="eqref" reference="eqn:shift-RN"} is the Radon--Nikodym derivative.[^6] We must prove uniform continuity of holonomies. Observe that each partial sum $\sum_{n=0}^N (\varphi(\sigma^n x') - \varphi(\sigma^n x))$ is uniformly continuous, and thus by uniform convergence, $\Delta^s$ is uniformly continuous as well. Since $\Delta^s(x,x) = 0$, this implies that given $\delta>0$, there exists $N\in \mathbb{N}$ such that given any $x,x'\in R\in\mathcal{R}$ satisfying $x_i=x'_i$ for all $i \geq -N$, we have $|\Delta^s(x',x)| < \delta$.
Now given any $y,z\in R$ with $d(y,z) < 2^{-N}$, we have $y_i = z_i$ for all $|i|\leq N$, and thus given any $x\in W^\mathrm{u}_R(y)$ and $x' = \pi^s_{y,z}(x)$, we have $x_i = x'_i$ for all $i\geq -N$. Using [\[eqn:shift-RN\]](#eqn:shift-RN){reference-type="eqref" reference="eqn:shift-RN"}, it follows that for every $Y\subset W^\mathrm{u}_R(y)$, we have $$m^\mathrm{u}_z(\pi_{y,z}^s(Y))=\int_{Y}e^{\Delta^s([z,x],x)}\,dm^\mathrm{u}_y(x) =\int_Ye^{\pm\delta}\,dm^\mathrm{u}_y(x) =e^{\pm\delta}\,m^\mathrm{u}_y(Y).$$ Conversely, suppose that the leaf measures have uniformly continuous holonomies on $\mathcal{R}$. Let $y\in R$ and fix $\delta>0$. There exists an $N_0\in \mathbb{N}$ such that if $y'_i=y_i$ for all $i\geq -N_0$, then $$\label{eqn:pmdelta} m^\mathrm{u}_y(Y)=e^{\pm\delta}m^\mathrm{u}_{y'}(\pi_{y,y'}^s(Y)).$$ Choose $Y\subset W^\mathrm{u}_R(y)$ such that $\sigma^{N}(Y)\subset W^\mathrm{u}_\mathrm{loc}(\sigma^{N}y)$. For simplicity, consider $z\in W^\mathrm{s}_R(y)$, and note that given any $N\geq N_0$, we have $(\sigma^{N}z)_{i}=(\sigma^{N}y)_{i}$ for all $i\geq -N$, and hence for all $i\geq -N_0$.
Thus, [\[eqn:pmdelta\]](#eqn:pmdelta){reference-type="eqref" reference="eqn:pmdelta"} gives $$\label{eqn:pmdelta-again} m^\mathrm{u}_{\sigma^{N}y}(\sigma^{N}Y)=e^{\pm\delta}m^\mathrm{u}_{\sigma^{N}z}(\pi^s_{y,z}(\sigma^{N}Y)).$$ Using [\[eqn:shift-mxu-scales\]](#eqn:shift-mxu-scales){reference-type="eqref" reference="eqn:shift-mxu-scales"} twice and [\[eqn:pmdelta-again\]](#eqn:pmdelta-again){reference-type="eqref" reference="eqn:pmdelta-again"} once, we get $$\begin{aligned} m^\mathrm{u}_z(\pi^s_{y,z}(Y))&=\int_{\sigma^{N}(\pi^s_{y,z}(Y))}e^{S_N\varphi(\sigma^{-N}x)}\,dm^\mathrm{u}_{\sigma^{N}z}(x)\\ &=e^{\pm\delta}\int_{\sigma^N(Y)}e^{S_N\varphi(\sigma^{-N}[\sigma^Nz,x])}\,dm^\mathrm{u}_{\sigma^{N}y}(x)\\ &=e^{\pm\delta}\int_{Y}e^{S_N\varphi(\sigma^{-N}[\sigma^{N}z,\sigma^Nx])-S_N\varphi(x)}\,dm^\mathrm{u}_{y}(x)\\ &=e^{\pm\delta}\int_{Y}e^{S_N\varphi([z,x])-S_N\varphi(x)}\,dm^\mathrm{u}_{y}(x). \end{aligned}$$ The last line follows because for any $x\in Y$, we have that $x_{[0,N]}=y_{[0,N]}=z_{[0,N]}$, as $\sigma^{N}(Y)\subset W^\mathrm{u}_\mathrm{loc}(\sigma^{N}y)$ and $z\in W^\mathrm{s}_R(y)$. Thus $\sigma^{-N}[\sigma^{N}z,\sigma^{N}x]=[z,x]$. Since every $Y\subset W^\mathrm{u}_R(y)$ can be decomposed as a disjoint union of sets that lie in a cylinder of length $N$, we conclude that with $N_0 = N_0(\delta)$ as above, if we write $\Delta_N(x',x) := S_N\varphi(x') - S_N\varphi(x)$ then we have $$\label{eqn:muY} m^\mathrm{u}_z(\pi^s_{y,z}(Y)) = e^{\pm \delta} \int_{Y}e^{\Delta_N([z,x],x)}\,dm^\mathrm{u}_{y}(x)$$ for every such $y,z$, every measurable $Y\subset W^\mathrm{u}_R(y)$, and every $N\geq N_0$.
Thus for every $N,N'\geq N_0$ we have $$\label{eqn:int-close} \int_Y e^{\Delta_N([z,x],x)} \,dm^\mathrm{u}_y(x) = e^{\pm 2\delta} \int_Y e^{\Delta_{N'}([z,x],x)} \,dm^\mathrm{u}_y(x).$$ Since $m^\mathrm{u}_y$ gives positive weight to every relatively open set in $W^\mathrm{u}_R(y)$ by Lemma [Lemma 30](#lem:full-support){reference-type="ref" reference="lem:full-support"}, a standard argument shows that $\Delta_N([z,x],x) = \Delta_{N'}([z,x],x) \pm 2\delta$ for all $x\in W^\mathrm{u}_R(y)$; indeed, if this inequality fails at any point, then by continuity there is a relatively open set $Y$ on which it fails uniformly, and since $m^\mathrm{u}_y(Y)>0$, this would violate the integral estimates in [\[eqn:int-close\]](#eqn:int-close){reference-type="eqref" reference="eqn:int-close"}. We have proved that $\Delta_N$ is uniformly Cauchy: for every $\delta>0$ there is $N_0\in \mathbb{N}$ such that for every $R\in \mathcal{R}$, every $y,z\in R$, every $x\in W^\mathrm{u}_R(y)$, and every $N,N'\geq N_0$, we have $$|\Delta_N([z,x],x) - \Delta_{N'}([z,x],x)| \leq 2\delta.$$ In particular, given any $x,x'\in R$ with $x' \in W^\mathrm{s}_R(x)$, we can choose $y=x$ and $z=x'$ to deduce that $$|\Delta_N(x',x) - \Delta_{N'}(x',x)| \leq 2\delta.$$ This proves that $\Delta_N$ converges uniformly to $\Delta^s$ on $\{(x,x') \in R^2 : R\in \mathcal{R}, x' \in W^\mathrm{s}_R(x)\}$, and then [\[eqn:muY\]](#eqn:muY){reference-type="eqref" reference="eqn:muY"} gives $$m^\mathrm{u}_z(\pi^s_{y,z}(Y)) = e^{\pm \delta} \int_{Y}e^{\Delta^s([z,x],x)}\,dm^\mathrm{u}_{y}(x).$$ Since $\delta>0$ is arbitrary, [\[eqn:shift-RN\]](#eqn:shift-RN){reference-type="eqref" reference="eqn:shift-RN"} follows. Finally, we observe that uniform convergence of $\Delta_N$ implies uniform boundedness, by the following lemma. **Lemma 31**.
*If the sums in the definitions of $\Delta^{s,u}$ converge uniformly, then there is $C>0$ such that $|\Delta^s(x,x')| \leq C$ for all $R\in \mathcal{R}$ and $x,x'\in R$ with $x'\in W^\mathrm{s}_R(x)$, and similarly for $\Delta^u$.* *Proof.* By uniform convergence, there exists $N\in\mathbb{N}$ such that $$\Big|\sum_{k=N}^\infty\big(\varphi(\sigma^{k}x)-\varphi(\sigma^{k}x')\big)\Big| \leq 1$$ for all $x,x'$ as in the statement. Thus $$|\Delta^s(x,x')|\leq \sum_{k=0}^{N-1}|\varphi(\sigma^{k}x)-\varphi(\sigma^{k}x')|+\Big|\sum_{k=N}^\infty \big(\varphi(\sigma^{k}x)-\varphi(\sigma^{k}x')\big)\Big| \leq 2N\|\varphi\| + 1.$$ The proof for $\Delta^u$ is analogous. ◻ # Equilibrium states using product structure {#sec:es-two-sided} In this section we prove Theorems [Theorem 16](#thm:q-indep){reference-type="ref" reference="thm:q-indep"}, [Theorem 17](#thm:return-map){reference-type="ref" reference="thm:return-map"}, and [Theorem 18](#thm:push-product){reference-type="ref" reference="thm:push-product"} from §[1.9](#sec:product-measure){reference-type="ref" reference="sec:product-measure"}. Throughout this section, we will assume that $\mathcal{R}$ is an $\mathcal{L}$-invariant family of rectangles on which the families of leaf measures $m^\mathrm{u}_x$ and $m^\mathrm{s}_x$ both have uniformly continuous holonomies and are positive in the sense of Definition [Definition 13](#def:pos-meas){reference-type="ref" reference="def:pos-meas"}. ## Proof of Theorem [Theorem 16](#thm:q-indep){reference-type="ref" reference="thm:q-indep"} {#proof-of-theorem-thmq-indep} By Remark [Remark 15](#rmk:leafs-related){reference-type="ref" reference="rmk:leafs-related"}, the assumption of positivity together with the holonomy results from Theorem [Theorem 14](#thm:shift-cts){reference-type="ref" reference="thm:shift-cts"} guarantees that $m^\mathrm{u}_q(R)>0$ and $m^\mathrm{s}_q(R)>0$ for every $q\in R$.
This is enough to imply that $m_{R,q}$ from [\[eqn:mxm\]](#eqn:mxm){reference-type="eqref" reference="eqn:mxm"} is nonzero for every $q\in R$, and since $\Delta^{u,s}$ are uniformly bounded by Theorem [Theorem 14](#thm:shift-cts){reference-type="ref" reference="thm:shift-cts"}, this in turn implies that $\lambda_R$ is nonzero. It remains to show that $\lambda_R$ is independent of our choice of $q\in R$. To this end, let $q,q'\in R$. Recalling that $$\Delta^u(x',x) := \sum_{n=0}^\infty \varphi(\sigma^{-n} x') - \varphi(\sigma^{-n} x),$$ we see that $\Delta^u$ has the following additivity property: given any $y\in W^\mathrm{s}_R(q)$ and $z\in W^\mathrm{u}_R(q)$, we have $$\Delta^u([y,z],y)= \Delta^u([y,q'],y) +\Delta^u([y,z],[y,q']).$$ Using this and (the $m^\mathrm{s}$-version of) Theorem [Theorem 14](#thm:shift-cts){reference-type="ref" reference="thm:shift-cts"} we have $$\begin{aligned} \int_{W^\mathrm{s}_R(q)}&e^{\Delta^u([y,z],y)}e^{\Delta^s([y,z],z)}\mathbf{1}_{Z}([y,z])\,dm^\mathrm{s}_q(y)\\ &=\int_{W^\mathrm{s}_R(q)}e^{\Delta^u([y,z],[y,q'])+\Delta^u([y,q'],y)}e^{\Delta^s([y,z],z)}\mathbf{1}_{Z}([y,z])\,dm^\mathrm{s}_q(y)\\ &=\int_{W^\mathrm{s}_R(q)}e^{\Delta^u([y,z],[y,q'])}e^{\Delta^s([y,z],z)}\mathbf{1}_{Z}([y,z])\,d((\pi^u_{q',q})_*m^\mathrm{s}_{q'})(y). \end{aligned}$$ Given $y\in W^\mathrm{s}_R(q)$, let $y' = (\pi_{q',q}^u)^{-1}(y)$; then the above equation gives $$\begin{gathered} \label{eqn:stable-leaf-prod} \int_{W^\mathrm{s}_R(q)}e^{\Delta^u([y,z],y)}e^{\Delta^s([y,z],z)}\mathbf{1}_{Z}([y,z])\,dm^\mathrm{s}_q(y)\\ =\int_{W^\mathrm{s}_R(q')}e^{\Delta^u([y',z],y')}e^{\Delta^s([y',z],z)}\mathbf{1}_{Z}([y',z])\,dm^\mathrm{s}_{q'}(y').
\end{gathered}$$ Using [\[eqn:stable-leaf-prod\]](#eqn:stable-leaf-prod){reference-type="eqref" reference="eqn:stable-leaf-prod"} and Fubini's theorem we get $$\begin{aligned} \lambda_R(Z) &= \int_{W^\mathrm{u}_R(q)} \int_{W^\mathrm{s}_R(q)} e^{\Delta^u([y,z],y)}e^{\Delta^s([y,z],z)}\mathbf{1}_{Z}([y,z])\,dm^\mathrm{s}_q(y) \,dm^\mathrm{u}_q(z) \\ &=\int_{W^\mathrm{s}_R(q')} \int_{W^\mathrm{u}_R(q)} e^{\Delta^u([y',z],y')}e^{\Delta^s([y',z],z)}\mathbf{1}_{Z}([y',z])\,dm^\mathrm{u}_q(z)\,dm^\mathrm{s}_{q'}(y').\end{aligned}$$ The argument leading to [\[eqn:stable-leaf-prod\]](#eqn:stable-leaf-prod){reference-type="eqref" reference="eqn:stable-leaf-prod"} lets us replace $q$ with $q'$ in the integral over $W^\mathrm{s}$, provided we also replace $y$ with $y' = (\pi_{q',q}^u)^{-1}(y)$. A completely analogous argument for the integral over $W^\mathrm{u}$ lets us deduce that $$\lambda_R(Z) = \int_{W^\mathrm{s}_R(q')} \int_{W^\mathrm{u}_R(q')} e^{\Delta^u([y',z'],y')}e^{\Delta^s([y',z'],z')}\mathbf{1}_{Z}([y',z'])\,dm^\mathrm{u}_{q'}(z')\,dm^\mathrm{s}_{q'}(y'),$$ where $z'$ and $z$ are related by $z=\pi^s_{q',q}(z')$. We see that using $q'$ instead of $q$ in [\[eqn:mxm\]](#eqn:mxm){reference-type="eqref" reference="eqn:mxm"} and [\[eqn:def-lambda-R\]](#eqn:def-lambda-R){reference-type="eqref" reference="eqn:def-lambda-R"} would lead to exactly this formula for $\lambda_R(Z)$, and thus we have proved that $\lambda_R$ is independent of the choice of $q\in R$. ## Proof of Theorem [Theorem 17](#thm:return-map){reference-type="ref" reference="thm:return-map"} {#proof-of-theorem-thmreturn-map} Now we assume that $R$ is the disjoint union of finitely many rectangles $R_1,\dots, R_\ell\in \mathcal{R}$, and let $\lambda_R = \sum_{i=1}^\ell \lambda_{R_i}$ as in [\[eqn:def-lambda\]](#eqn:def-lambda){reference-type="eqref" reference="eqn:def-lambda"}.
Let $R^{(1)} = \{x\in R : \tau(x) < \infty\}$ be the set on which $T$ is defined; observe that $\lambda_R(R\setminus R^{(1)}) = 0$ by hypothesis. Define $R^{(n)}$ iteratively by $R^{(n+1)} = R^{(1)} \cap T^{-1}(R^{(n)})$; again we have $\lambda_R(R\setminus R^{(n)}) = 0$ for all $n$. Let $R^{(\infty)} = \bigcap_{n=1}^\infty R^{(n)}$, so we have $T\colon R^{(\infty)} \to R^{(\infty)}$, and $\lambda_R(R\setminus R^{(\infty)})=0$. We will show that $\lambda_R$ is $T$-invariant on $R^{(\infty)}$. Given $n\in \mathbb{N}$ and $i,j\in \{1,\dots, \ell\}$, let $$E_{ij}^n := \tau^{-1}(n) \cap R_i \cap \sigma^{-n}(R_j).$$ We will prove the following. **Lemma 32**. *Given any $i,j,n$ as above, any $w\in \mathcal{L}_n$, and any measurable $Z \subset E_{ij}^n \cap [w]^+$, we have $\lambda_R(T(Z)) = \lambda_R(Z)$.* Once the lemma is proved, observe that for any measurable $Z\subset R^{(\infty)}$ we have $Z = \bigcup_{i,j,n,w} Z \cap E_{ij}^n \cap [w]^+$, and $T(Z) = \bigcup_{i,j,n,w} T(Z\cap E_{ij}^n \cap [w]^+)$, and thus $$\lambda_R(T(Z)) = \sum_{i,j,n,w} \lambda_R(T(Z\cap E_{ij}^n \cap [w]^+)) = \sum_{i,j,n,w} \lambda_R(Z \cap E_{ij}^n \cap [w]^+) = \lambda_R(Z),$$ which proves Theorem [Theorem 17](#thm:return-map){reference-type="ref" reference="thm:return-map"}. So it only remains to prove Lemma [Lemma 32](#lem:Eijn){reference-type="ref" reference="lem:Eijn"}. To this end, fix $i,j,n,w$ as above, and let $Z\subset E_{ij}^n \cap [w]^+$ be measurable. Then $Z\subset R_i \cap [w]^+$, and since $\tau(x)=n$ for all $x\in Z$, we have $T(Z) = \sigma^n(Z) \subset R_j$. 
Since we assumed that the family $\mathcal{R}$ of rectangles is $\mathcal{L}$-invariant as in Definition [Definition 12](#def:L-inv){reference-type="ref" reference="def:L-inv"}, the rectangles $\sigma^n(R_i \cap [w]^+)$ and $\sigma^{-n}(R_j \cap [w]^-)$ both lie in $\mathcal{R}$, and hence by Theorem [Theorem 14](#thm:shift-cts){reference-type="ref" reference="thm:shift-cts"} we can use [\[eqn:shift-RN\]](#eqn:shift-RN){reference-type="eqref" reference="eqn:shift-RN"} for holonomies between unstable leaf measures, as well as its analogue for the stable leaf measures. Recalling [\[eqn:shift-bracket\]](#eqn:shift-bracket){reference-type="eqref" reference="eqn:shift-bracket"}, given any $y,z\in E_{ij}^n \cap [w]^+$, we have $\sigma^n y,\sigma^nz \in R_j$, and $[\sigma^n y, \sigma^n z] = \sigma^n [y,z]$. Fix a reference point $q\in E_{ij}^n \cap [w]^+$. As a consequence of equation [\[eqn:shift-mxs-scales\]](#eqn:shift-mxs-scales){reference-type="eqref" reference="eqn:shift-mxs-scales"}, we have $$\label{eqn:mxs-f-scales} \begin{split} \int_{W^\mathrm{s}_{R_j}(\sigma^nq)\cap [wq_n]^-} f(y)&e^{-\sum_{k=0}^{n-1}\varphi(\sigma^{-k}(y))}\,dm^\mathrm{s}_{\sigma^nq}(y)\\ &=\int_{W^\mathrm{s}_{R_i}(q)} f(\sigma^n(y))\,dm^\mathrm{s}_{q}(y) \end{split}$$ where $f$ is integrable. Motivated by this expression, observe that if $y\in W^\mathrm{s}_{R_j}(\sigma^n q)$ and $z\in W^\mathrm{u}_{R_j}(\sigma^n q)$, then $$\label{eqn:Deltau-pushforward} \begin{split} \Delta^u([y,z], y)&+\sum_{k=0}^{n-1}\varphi(\sigma^{-k} y)\\ &=\sum_{k=0}^{n-1}\varphi(\sigma^{-k}[ y,z])+\Delta^u(\sigma^{-n}[ y,z],\sigma^{-n}y). 
\end{split}$$ Similarly, equation [\[eqn:shift-mxu-scales\]](#eqn:shift-mxu-scales){reference-type="eqref" reference="eqn:shift-mxu-scales"} implies $$\label{eqn:mxu-f-scales} \int_{W^\mathrm{u}_{R_j}(\sigma^n q)}f(z)\,dm^\mathrm{u}_{\sigma^n q}(z)=\int_{W^\mathrm{u}_{R_{i}}(q)}f(\sigma^ny)e^{-S_n\varphi(y)}\,dm^\mathrm{u}_q(y).$$ Moreover, $$\label{eqn:Deltas-pushforward} \begin{split} \Delta^s([y,\sigma^n z],\sigma^n z)&-S_n\varphi(z)\\ &=\Delta^s(\sigma^{-n}[y,\sigma^n z],z)-\sum_{k=0}^{n-1}\varphi(\sigma^{-k}[y,\sigma^n z]). \end{split}$$ Using [\[eqn:Deltau-pushforward\]](#eqn:Deltau-pushforward){reference-type="eqref" reference="eqn:Deltau-pushforward"} followed by [\[eqn:mxs-f-scales\]](#eqn:mxs-f-scales){reference-type="eqref" reference="eqn:mxs-f-scales"} on the integral $$\int_{W^\mathrm{u}_{R_j}(Tq)}\int_{W^\mathrm{s}_{R_j}(Tq)} e^{\Delta^u([y,z],y)+\Delta^s([y,z],z)}\mathbf{1}_{T(Z)}([y,z])\,dm^\mathrm{s}_{Tq}(y)\,dm^u_{Tq}(z),$$ the term in the exponent becomes $$\Delta^u(\sigma^{-n}[\sigma^n y,z],y)+\sum_{k=0}^{n-1}\varphi(\sigma^{-k}[\sigma^n y,z]) + \Delta^s([\sigma^ny,z],z).$$ Then applying [\[eqn:mxu-f-scales\]](#eqn:mxu-f-scales){reference-type="eqref" reference="eqn:mxu-f-scales"} and then [\[eqn:Deltas-pushforward\]](#eqn:Deltas-pushforward){reference-type="eqref" reference="eqn:Deltas-pushforward"}, the exponent becomes $$\begin{gathered} \Delta^u(\sigma^{-n}([\sigma^n y, \sigma^n z]), y)+\sum_{k=0}^{n-1}\varphi(\sigma^{-k}[\sigma^n y,\sigma^n z])\\ +\Delta^s(\sigma^{-n}([\sigma^n y, \sigma^n z]), z)-\sum_{k=0}^{n-1}\varphi(\sigma^{-k}[\sigma^n y,\sigma^n z]). 
\end{gathered}$$ Since $\sigma^{-n}([\sigma^n y,\sigma^n z])=[y,z]$ this term simplifies to $$\Delta^u([y, z], y)+\Delta^s([y,z], z).$$ For convenience, let $$\rho(y,z)=e^{\Delta^u([y, z], y)+\Delta^s([y,z], z)}.$$ Altogether, we have shown that given $Z\subset E_{ij}^n \cap [w]^+$, we have $$\begin{aligned} \lambda_R(TZ) & = \int_{W^\mathrm{u}_{R_j}(T q)}\int_{W^\mathrm{s}_{R_j}(Tq)} \rho(y',z')\mathbf{1}_{TZ}[y', z']\,dm^\mathrm{s}_{Tq}(y')\,dm^u_{Tq}(z')\\ &=\int_{W^\mathrm{u}_{R_i}(q)}\int_{W^\mathrm{s}_{R_i}(q)} \rho(y,z)\mathbf{1}_{TZ}[\sigma^n y,\sigma^n z]\,dm^\mathrm{s}_q(y)\,dm^u_q(z)\\ & = \int_{W^\mathrm{u}_{R_i}(q)}\int_{W^\mathrm{s}_{R_i}(q)} \rho(y,z)\mathbf{1}_{Z}[y,z]\,dm^\mathrm{s}_q(y)\,dm^u_q(z) = \lambda_R(Z). \end{aligned}$$ This proves Lemma [Lemma 32](#lem:Eijn){reference-type="ref" reference="lem:Eijn"}, and as explained in the paragraph following the lemma, we deduce that $\lambda_R$ is invariant with respect to the return map on $R^{(\infty)}$. ## Proof of Theorem [Theorem 18](#thm:push-product){reference-type="ref" reference="thm:push-product"} {#proof-of-theorem-thmpush-product} Now we assume that in addition to the conditions of the previous sections, we have $\int \tau\,d\lambda_R < \infty$ and $\overline{Q}<\infty$. We must show that the measure $\mu$ in [\[eqn:es-mu\]](#eqn:es-mu){reference-type="eqref" reference="eqn:es-mu"} is positive, finite, and $\sigma$-invariant, and that $\mu/\mu(X)$ is an equilibrium measure with local product structure. Positivity of $\mu$ is immediate from positivity of $\lambda_R$. Finiteness of $\mu$ follows from the integrability assumption: $$\mu(X) = \sum_{n=0}^\infty \lambda_R(Y_n) = \sum_{n=0}^\infty \lambda_R( \{y\in R : \tau(y) > n \}) = \int \tau \,d\lambda_R.$$ Shift-invariance of $\mu$ follows immediately from $T$-invariance of $\lambda_R$ by the usual argument. 
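The finiteness computation above is an instance of Kac-type reasoning: integrating the return time over the base recovers the total mass of the tower. As an illustrative aside (not part of the argument of this paper), the same mechanism can be checked numerically on a toy model, an i.i.d. Bernoulli$(1/2)$ shift with base $R=\{x : x_0=0\}$, where the conditional expected first-return time should equal $1/\lambda(R)=2$:

```python
import random

random.seed(0)

def bernoulli():
    # i.i.d. fair coin flips x_1, x_2, ... (the symbol x_0 = 0 is implicit,
    # since we start from a point of the base R = {x : x_0 = 0}).
    while True:
        yield random.randint(0, 1)

def first_return_time():
    # tau = min {n >= 1 : x_n = 0}; for i.i.d. flips this is geometric(1/2).
    flips = bernoulli()
    n = 1
    while next(flips) != 0:
        n += 1
    return n

trials = 100_000
mean_tau = sum(first_return_time() for _ in range(trials)) / trials
# Kac's formula predicts a conditional mean return time of 1/lambda(R) = 2.
print(mean_tau)  # close to 2
```

The sample mean of the return time approximates $\int_R \tau\,d\lambda_R/\lambda_R(R)$, matching the identity $\mu(X)=\int\tau\,d\lambda_R$ used in the finiteness computation.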
Finally, local product structure in the sense of Definition [Definition 2](#def:lps){reference-type="ref" reference="def:lps"} follows by considering the rectangles $R_i$ together with the rectangles $\sigma^k(E_{ij}^n \cap [w]^+)$ for $w\in \mathcal{L}_n$ and $1\leq k < n$. So it only remains to show that $\mu/\mu(X)$ is an equilibrium measure. For this we need the upper Gibbs bound in Corollary [Corollary 20](#cor:lambda-Gibbs){reference-type="ref" reference="cor:lambda-Gibbs"}, which implies that for every $w\in \mathcal{L}$ and $1\leq i\leq \ell$, we have $\lambda_{R_i}([w]^+) \leq K e^{\Phi(w)}$ (recall we are assuming $P=0$), and thus $$\label{eqn:lambda-w-upper-bound} \lambda_R([w]^+)\leq \ell K e^{\Phi(w)}.$$ In order to prove that $\mu$ is an equilibrium measure, we will need one more preliminary result. Let $V_j(\varphi)=\sup\{|\varphi(y)-\varphi(z)|:y_{[0,j)}=z_{[0,j)}\}$. Recall that $\tau\colon R^{(\infty)} \to\mathbb{N}$ is the first return time to $R$, and let $\tau_n(x)$ denote the $n^{\mathrm{th}}$-return time of $x$ to $R$, defined as $$\tau_n(x)=\sum_{j=0}^{n-1}\tau(T^jx).$$ Note that the dependence on $x$ in the following lemma is in the upper index of the sum and not in the terms of the summand itself, as $V_j(\varphi)$ is defined globally. **Lemma 33**. *If $\tau$ is integrable with respect to $\lambda_R$, then $$\lim_{n\to\infty}\frac{1}{n}\sum_{j=0}^{\tau_n(x)-1}V_j(\varphi)=0$$ for $\lambda_R$-a.e. $x\in R$.* *Proof.* Fix $\epsilon>0$. By uniform continuity, there exists an $N_1$ such that $V_n(\varphi)<\epsilon$ for all $n\geq N_1$. By the Birkhoff ergodic theorem, for $\lambda_R$-a.e. $x$, the limit $$\lim_{n\to\infty}\frac{\tau_n(x)}{n}=:\tau_\infty(x)$$ exists and defines a function $\tau_\infty\in L^1(\lambda_R)$, so there exists an $N_2$ such that for $n\geq N_2$, we have that $0\leq \tau_n(x)/n<\tau_\infty(x)+\epsilon$. Additionally, for $\lambda_R$-a.e. 
$x$, we have that $\tau_\infty(x)<\infty$, so if $n\geq \max\{N_1,N_2\}$, $$\begin{aligned} \frac{1}{n}\sum_{j=0}^{\tau_n(x)-1}V_j(\varphi)&=\frac{1}{n}\sum_{j=0}^{N_1-1}V_j(\varphi)+\frac{1}{n}\sum_{j=N_1}^{\tau_n(x)-1}V_j(\varphi) \leq \frac{1}{n}\sum_{j=0}^{N_1-1}V_j(\varphi)+\frac{1}{n}\sum_{j=N_1}^{\tau_n(x)-1}\epsilon\\ & \leq \frac{1}{n}\sum_{j=0}^{N_1-1}V_j(\varphi)+\frac{\tau_n(x)}{n}\epsilon < \frac{1}{n}\sum_{j=0}^{N_1-1}V_j(\varphi)+\epsilon(\tau_\infty(x) +\epsilon). \end{aligned}$$ Letting $n\to\infty$ we get $$\varlimsup_{n\to\infty}\frac{1}{n}\sum_{j=0}^{\tau_n(x)-1}V_j(\varphi)\leq\epsilon(\tau_\infty(x)+\epsilon).$$ Since $\epsilon$ was arbitrary and the summands are non-negative, we get that for $\lambda_R$-a.e. $x\in R$, $$\lim_{n\to\infty}\frac{1}{n}\sum_{j=0}^{\tau_n(x)-1}V_j(\varphi)=0.\qedhere$$ ◻ To show that $\mu$ as defined in [\[eqn:es-mu\]](#eqn:es-mu){reference-type="eqref" reference="eqn:es-mu"} is an equilibrium measure, we will first show that the measure-theoretic pressure for $\lambda_R$ is 0 for the induced system when $\varphi$ is normalized to have 0 topological pressure. This will imply that the measure-theoretic pressure for $\mu$ is also 0. We now present the proof of Theorem [Theorem 18](#thm:push-product){reference-type="ref" reference="thm:push-product"}. *Proof.* Let $\tilde\varphi(x)=S_{\tau(x)}\varphi(x)$. If $X_n=\bigcup_{j=0}^{n-1}\sigma^j Z_n$, then $$\mu(X)\int_{X_n}\varphi\,d\mu=\int_{Z_n}S_n\varphi(x)\,d\lambda_R=\int_{Z_n}\tilde\varphi\,d\lambda_R.$$ Hence, $\mu(X)\int_{X}\varphi\,d\mu=\int_{R}\tilde\varphi\,d\lambda_R.$ Similarly, $\mu(X) h_{\mu}(\sigma)=h_{\lambda_R}(T)$ by applying Kac's formula, so we have that $$\mu(X) \left(h_{\mu}(\sigma)+\int_X\varphi\,d\mu\right)=h_{\lambda_R}(T)+\int_R{\tilde{\varphi}}\,d\lambda_R.$$ Since we are assuming that the topological pressure $P(\varphi)=0$, it is sufficient to show that $h_{\lambda_R}(T)+\int_R{\tilde{\varphi}}\,d\lambda_R\geq 0$. Now we use the Shannon--McMillan--Breiman theorem [@Glasner p. 
265] to deduce that if we write $$I_n^{\lambda_R}(x):=-\log(\lambda_R([x_0\dots x_{\tau_n(x)-1}]^+)),$$ then there exists an $L^1(\lambda_R)$ function $J$ such that $$\frac{1}{n}I_n^{\lambda_R}(x)\to J(x)$$ for $\lambda_R$-a.e. $x$, and $$\int_R J\,d\lambda_R=h_{\lambda_R}(T).$$ By the Birkhoff ergodic theorem, there exists a function $\tilde\varphi_\infty\in L^1(\lambda_R)$ such that $$\frac{1}{n}S_n\tilde\varphi(x)\to\tilde\varphi_{\infty}(x)$$ for $\lambda_R$-a.e. $x$, and $$\int_R\tilde\varphi\,d\lambda_R=\int_R\tilde\varphi_{\infty}\,d\lambda_R.$$ Using [\[eqn:lambda-w-upper-bound\]](#eqn:lambda-w-upper-bound){reference-type="eqref" reference="eqn:lambda-w-upper-bound"}, we get $$I_n^{\lambda_R}(x)\geq -\log(\ell K)-\Phi(x_0\dots x_{\tau_n(x)-1}),$$ and rearranging terms, we obtain $$I_n^{\lambda_R}(x)+\Phi(x_0\dots x_{\tau_n(x)-1})\geq-\log(\ell K),$$ so that in particular, $$\varliminf_{n\to\infty}\left(\frac{1}{n}I_n^{\lambda_R}(x)+\frac{1}{n}\Phi(x_0\dots x_{\tau_n(x)-1})\right)\geq0.$$ Lastly, we must show that $\frac{1}{n}\Phi(x_0\dots x_{\tau_n(x)-1})\to\tilde\varphi_\infty$ a.e. We estimate $$\begin{aligned} \Phi(x_0\dots x_{\tau_n(x)-1})-S_{\tau_n(x)}\varphi(x) & = \sup_{y\in[x_0\dots x_{\tau_n(x)-1}]^+}(S_{\tau_n(x)}\varphi(y)-S_{\tau_n(x)}\varphi(x))\\ & \leq \sum_{j=0}^{\tau_n(x)-1}V_j(\varphi). \end{aligned}$$ By Lemma [Lemma 33](#lem:Phi-Sn-lim){reference-type="ref" reference="lem:Phi-Sn-lim"}, for $\lambda_R$-a.e. $x\in R$ we have $$\lim_{n\to\infty}\frac{1}{n}\left(\Phi(x_0\dots x_{\tau_n(x)-1})-S_{\tau_n(x)}\varphi(x)\right)=0,$$ which, since $S_{\tau_n(x)}\varphi(x)=S_n\tilde\varphi(x)$, means that $\frac{1}{n}\Phi(x_0\dots x_{\tau_n(x)-1})\to\tilde\varphi_\infty$. We conclude that $J+\tilde\varphi_{\infty}\geq 0$ holds $\lambda_R$-a.e., and hence $h_{\lambda_R}(T)+\int \tilde\varphi\,d\lambda_R=\int_R(J+\tilde\varphi_{\infty})\,d\lambda_R\geq 0$, so $\mu$ is an equilibrium measure. ◻ [^1]: The authors were partially supported by NSF grants DMS-1554794 and DMS-2154378. VC was partially supported by a Simons Foundation Fellowship. 
[^2]: The continuity condition holds if all rectangles in the family are open and the potential has the Walters property (Proposition [Proposition 9](#prop:open-hol){reference-type="ref" reference="prop:open-hol"}), or for an arbitrary family of rectangles if the potential is constant (Proposition [Proposition 10](#prop:zero-hol){reference-type="ref" reference="prop:zero-hol"}). [^3]: If $W^\mathrm{u}_\mathrm{loc}(x) \cap [w]^+ = \emptyset$, then $\Phi_x^+(w) = -\infty$ and $e^{\Phi_x^+(w)} = 0$. [^4]: We use the notation $A = e^{\pm C} B$ to mean that $e^{-C}B\leq A \leq e^C B$. [^5]: Although the results in [@PYY22] are stated for flows rather than for shift spaces, their arguments work in this symbolic setting as well; see [@blog] for further details. [^6]: In fact, uniform convergence of $\Delta^s$ implies boundedness, see Lemma [Lemma 31](#lem:Delta-bdd){reference-type="ref" reference="lem:Delta-bdd"}.
--- abstract: | In this note, we prove the quantitative observability with an explicit control cost for the 1D Schrödinger equation over ${\mathbb{R} }$ with real-valued, bounded continuous potential on thick sets. Our proof relies on different techniques for low-frequency and high-frequency estimates. In particular, we extend the large time observability result for the 1D free Schrödinger equation in Theorem 1.1 of Huang-Wang-Wang [@HWW] to any short time. As another byproduct, we extend the spectral inequality of Lebeau-Moyano [@LeM] for real-analytic potentials to bounded continuous potentials in the one-dimensional case. address: - Department of Mathematical Analysis, Faculty of Mathematics and Physics, Charles University, Sokolovská 83, 186 75 Praha 8, Czech Republic. - CNRS, Université Paris-Est Créteil, Laboratoire d'Analyse et de Mathématiques appliquées, UMR 8050 du CNRS, 94010 Créteil cedex, France. - Department of Mathematics, The Chinese University of Hong Kong, Shatin, N.T., Hong Kong, P.R. China. author: - Pei Su - Chenmin Sun - Xu Yuan title: Quantitative observability for one-dimensional Schrödinger equations with potentials --- # Introduction ## Main result Consider the 1D Schrödinger equation, $$\label{equ:LS} \begin{aligned} i\partial_{t}u-\partial_{x}^{2}u+V(x)u=0,\quad u_{|t=0}=u_{0}\in L^{2}({\mathbb{R} }), \end{aligned}$$ where the potential $V$ is a real-valued, continuous and bounded function. We are interested in the observability of the Schrödinger equation [\[equ:LS\]](#equ:LS){reference-type="eqref" reference="equ:LS"} on rough sets, which concerns the following inequality, $$\label{def:ob} \|u_0\|_{L^2({\mathbb{R} })}^2 \leq C(T,V,\Omega) \int_0^T \|u(t)\|_{L^2(\Omega)}^2 {\rm{d}}t,$$ for all solutions $u(t)$ of [\[equ:LS\]](#equ:LS){reference-type="eqref" reference="equ:LS"} where $\Omega \subset {\mathbb{R} }$ is a measurable subset. 
The inequality [\[def:ob\]](#def:ob){reference-type="eqref" reference="def:ob"} measures how solutions of Schrödinger equations can concentrate on subsets of the domain. Such a property is linked to the high-frequency wave propagation phenomenon and to concentration properties for quasimodes of the Schrödinger operator. The results are sensitive to the underlying manifold and the corresponding Schrödinger operator. Another motivation for establishing the observability estimate [\[def:ob\]](#def:ob){reference-type="eqref" reference="def:ob"} is to prove the exact controllability for the associated control system. See Corollary [Corollary 4](#cor:control){reference-type="ref" reference="cor:control"} for the precise statement. In the general framework, there are three parameters that affect the observability estimates for Schrödinger type equations. These are the underlying geometry (the background manifold on which the equation is posed and the associated Schrödinger operator), the control region $\Omega$, and the time $T>0$ to achieve the observability. When observability holds for any time $T>0$, the control cost, i.e., the blow-up rate of the optimal constant $C(T,V,\Omega)$, is also an object of study. In this note, we address the observability problem for the 1D Schrödinger equation on an unbounded set by measurable control regions. To the best of our knowledge, this setup is much less studied in the literature. To state the main result, we recall the thickness condition for the control region. **Definition 1**. Let $0<\zeta<1$ and $0<L<\infty$. We say that $\Omega\subset {\mathbb{R} }$ is an $(L,\zeta)$-thick set if $\Omega$ is a measurable set and satisfies $$\left|\Omega\cap [x,x+L]\right|\ge \zeta L,\quad \mbox{for any} \ x\in {\mathbb{R} }.$$ The main result of this note is the following quantitative observability for [\[equ:LS\]](#equ:LS){reference-type="eqref" reference="equ:LS"}. **Theorem 2**. 
*Let $\Omega$ be an $(L,\zeta)$-thick set and let $V\in C({\mathbb{R} })\cap L^{\infty}({\mathbb{R} })$. Then there exists a constant $C=C(V,L,\zeta)>0$, depending only on $V$, $L$ and $\zeta$, such that for any $T>0$ and any solution $u$ of [\[equ:LS\]](#equ:LS){reference-type="eqref" reference="equ:LS"} we have $$\label{est:Obser} \|u_{0}\|_{L^{2}({\mathbb{R} })}^{2}\le C e^{\frac{C}{T^{2}}} \int_{0}^{T}\left\|u(t)\right\|^{2}_{L^{2}(\Omega)}{\rm{d}}t.$$* **Remark 3**. In [@HWW Theorem 1.1], the authors considered the 1D free Schrödinger equation. They showed that [\[def:ob\]](#def:ob){reference-type="eqref" reference="def:ob"} holding for some time $T>0$ is equivalent to the control region $\Omega$ being thick. Theorem [Theorem 2](#thm:main1){reference-type="ref" reference="thm:main1"} extends their results to any observability time $T>0$ for general 1D Schrödinger operators with bounded and continuous potential. In particular, we answer the question raised in Remark (a1) below [@HWW Theorem 1.1] concerning the short time observability for the 1D Schrödinger equation. Moreover, the proof of Theorem [Theorem 2](#thm:main1){reference-type="ref" reference="thm:main1"} is purely quantitative, which provides an upper bound for the control cost in terms of $T>0$ and the parameters defining the thickness. See Section [5](#SS:MAIN){reference-type="ref" reference="SS:MAIN"} for more precise comments on the proof. As a consequence of the classical HUM method (see [@Li]), we have the following exact controllability result for 1D Schrödinger equations. **Corollary 4**. *Let $\Omega$ be an $(L,\zeta)$-thick set and let $V\in C({\mathbb{R} })\cap L^{\infty}({\mathbb{R} })$. 
For any $T>0$ and any $(u_0,u_1)\in L^2({\mathbb{R} })\times L^{2}({\mathbb{R} })$, there exists a control $f\in L^2((0,T)\times{\mathbb{R} })$ such that the unique solution of the 1D inhomogeneous Schrödinger equation $$i\partial_t u-\partial_x^2 u+V(x)u={\mathbf{1}}_{\Omega}f$$ with initial data $u(0)=u_{0}$ satisfies $u(T)=u_1$.* ## Previous results In the existing literature, observability for Schrödinger equations on compact manifolds and bounded domains has been extensively studied. Sufficient geometric conditions are often imposed on the control region $\Omega$. When $\Omega$ is an open set satisfying the geometric control condition (GCC), Lebeau [@Le] proved that observability is true for an arbitrarily short time $T>0$. The essential point behind Lebeau's theorem is the infinite speed of propagation of singularities for high-frequency wave-packets of solutions to the Schrödinger equation. The condition GCC is in general not necessary. The stability (or instability) property of the geodesic flow of the underlying manifold plays a decisive role. Specifically, it has been shown (though this list is not exhaustive) that observability is valid for any $T>0$ and for any non-empty open control region if the underlying manifold is a torus [@AM14; @BBZ; @BZ4; @Ja], or compact hyperbolic surfaces [@Ji] (see also [@AR] for negatively curved manifolds), or the disk [@ALM14], provided the control region encompasses a neighborhood of some portion of the boundary. We also mention here that for certain subelliptic Schrödinger equations, the minimal observability time is strictly positive (see [@BuSun; @FeLe]). In recent years, there has been a growing focus on observability for Schrödinger equations by rough control regions or in non-compact settings [@BZ5; @HWW; @MLe; @Pr; @WWZ]. For the Schrödinger equation on two-dimensional tori, [@BZ5] showed that the short time observability result holds for any measurable control region with a positive measure. 
Very recently, this result has been extended in [@MLe] to Schrödinger equations (with periodic potential) on ${\mathbb{R} }^2$ with a periodic control region. It is worth mentioning that the thickness condition (see Definition [Definition 1](#def:thick){reference-type="ref" reference="def:thick"}) in the one-dimensional context is a version of GCC on rough sets. The interest of the result in [@MLe] lies in the fact that the control region, though $2\pi \mathbb{Z}^2$-periodic, might not satisfy GCC. Lastly, we refer to [@HWW; @Pr] for discussions on the observability of Schrödinger operators on ${\mathbb{R} }^{d}$ with confined potentials. ## Comments on the proof Our proof of Theorem [Theorem 2](#thm:main1){reference-type="ref" reference="thm:main1"} relies on three steps: a spectral inequality for the low-frequency part, a high-frequency observability estimate via a resolvent estimate (a Hautus-type inequality), and a quantitative gluing argument. The first part of our analysis is the proof of the low-frequency observability by establishing the spectral inequality for 1D Schrödinger operators with continuous bounded potentials (see Lemma [Lemma 12](#le:spectra){reference-type="ref" reference="le:spectra"}). This particularly generalizes the spectral inequality in [@LeM] for real-analytic potentials to bounded continuous potentials in the one-dimensional context. Our proof of the spectral inequality leverages a very recent work [@ZHU] on the propagation of smallness for elliptic equations on the plane. Second, the resolvent estimate for the high-frequency part is essentially not new and has already been obtained in [@Gr Proposition 1] for the 1D free Schrödinger operator (i.e. the case of $V=0$). The proof in [@Gr] utilizes the Logvinenko-Sereda uncertainty principle (see [@Ko Theorem 2]), which exploits the straightforward geometric feature of the 1D Fourier space. 
More precisely, in the 1D case, the measure of the set of $\xi$ such that $||\xi|-\lambda|\leq O(1)$ is bounded by $O(1)$, independently of the position of $\lambda$. Instead, we provide a different, more elementary proof (see Lemma [Lemma 17](#le:resolvent){reference-type="ref" reference="le:resolvent"}) of the high-frequency resolvent estimate for 1D Schrödinger operators. Another delicate issue in the analysis is gluing low-frequency and high-frequency estimates to obtain the full observability estimate. We briefly revisit the classical compactness-uniqueness method that is extensively used in the literature in the compact setting (in the context where the Schrödinger operator has a compact resolvent). The strategy is due to Bardos-Lebeau-Rauch [@BLR] (see also [@TuWe Chapter 6] for a similar abstract framework). Consider a general Schrödinger type equation associated with a self-adjoint operator $\mathcal{A}$ on a Hilbert space $\mathcal{H}$ and a control operator $\mathcal{C}$: $$i\partial_tu=\mathcal{A}u,\quad u_{|t=0}=u_0\in \mathcal{H}.$$ We are concerned with the following observability inequality: $$\begin{aligned} \label{ob:abstract} \|u_0\|_{\mathcal{H}}^2\leq C\int_0^T\|\mathcal{C}u(t)\|_{\mathcal{H}}^2{\rm{d}}t.\end{aligned}$$ The compactness-uniqueness strategy can be summarized as follows. Let us first assume that we have already established the high-frequency observability estimate, $$\begin{aligned} \label{ob:highabstract} \|u_0\|_{\mathcal{H}}^2\leq C\int_0^T\|\mathcal{C}u(t)\|_{\mathcal{H}}^2{\rm{d}}t+C\|\mathcal{A}^{-1}u_0\|_{\mathcal{H}}^2.\end{aligned}$$ Here, for the sake of clarity, we assume that $0\notin \mathrm{Spec}(\mathcal{A})$. 
Given $T>0$, we define $$\mathcal{N}_T:=\{u_0\in \mathcal{H}:\; \mathcal{C}u(t)\equiv 0 \text{ in } L^2((0,T);\mathcal{H})\}.$$ When $\mathcal{A}$ has a compact resolvent (thus $\mathrm{Spec}(\mathcal{A})$ is discrete), the compact embedding and [\[ob:highabstract\]](#ob:highabstract){reference-type="eqref" reference="ob:highabstract"} ensure that $\mathcal{N}_T$ is a finite-dimensional linear subspace of $\mathcal{H}$. Furthermore, it can be deduced that the restriction of $\mathcal{A}$ to $\mathcal{N}_T$ is a well-defined linear operator. By taking an eigenfunction $\phi \neq 0$ of $\mathcal{A}$ in $\mathcal{N}_T$, we find that $\mathcal{C}\phi\equiv 0$. Thus, the observability [\[ob:abstract\]](#ob:abstract){reference-type="eqref" reference="ob:abstract"} results from [\[ob:highabstract\]](#ob:highabstract){reference-type="eqref" reference="ob:highabstract"} coupled with the unique continuation property of eigenfunctions: $$\mathcal{A}\phi=\lambda \phi\ \ \mbox{and}\ \ \mathcal{C}\phi\equiv 0 \Longrightarrow \phi\equiv 0.$$ However, when considering the non-compact setting where $\mathcal{A}$ does not have a compact resolvent, we are not aware of any compactness-uniqueness type argument to deduce [\[ob:abstract\]](#ob:abstract){reference-type="eqref" reference="ob:abstract"} from [\[ob:highabstract\]](#ob:highabstract){reference-type="eqref" reference="ob:highabstract"}. Actually, in [@MLe], this issue does not arise, as the problem can be reduced to the compact setting $\mathbb{T}^2$, thanks to the periodicity of both the control region and the potential. Here, we prove a quantitative unique continuation property for Schrödinger equations, which makes it possible to glue the high-frequency observability with the low-frequency estimates. This idea was introduced in Phung [@Phung01] for the Schrödinger equation on bounded domains of ${\mathbb{R} }^{d}$. 
Following this strategy, we begin by establishing the backward-in-time observability for the associated heat semigroup (see Proposition [Proposition 14](#prop:heat){reference-type="ref" reference="prop:heat"}). This allows us to deduce a quantitative unique continuation estimate for Schrödinger equations by taking the Fourier--Bros--Iagolnitzer (FBI) transformation in time. We refer to [@BBZ Appendix A] where another quantitative uniqueness-compactness argument was introduced in the compact setting. Though the passage from the observability of the heat semigroup and the high-frequency observability to the full observability is quite robust in both the compact and non-compact settings, it is worth mentioning that this zigzag path exceeds what is essentially required to ensure the Schrödinger observability estimate. For instance, consider the half Laplacian $\mathcal{A}=|\nabla|$ on the 1D torus $\mathbb{T}$. It is a known fact that for any non-empty open set $\omega\subset \mathbb{T}$, the associated heat semigroup is not observable by the control operator $\mathcal{C}=\mathbf{1}_{\omega}$ (see [@Koe Subsection 2.1]). However, the Schrödinger semigroup $e^{\pm it\mathcal{A}}$ (which is the half-wave operator) is observable by the control operator $\mathcal{C}=\mathbf{1}_{\omega}$ for a certain $T>0$ (this is a very special case of the exact controllability for the wave equation under the geometric control condition). This discussion raises a natural question of extending the compactness-uniqueness approach, especially concerning the observability estimates of Schrödinger type equations in non-compact domains. We expect that such an extension would be potentially useful when addressing observability estimates for many other Schrödinger type equations on non-compact domains. The note is organized as follows: First, in Section [2](#SS:Nota){reference-type="ref" reference="SS:Nota"}, we state notation and conventions. 
Second, in Section [3](#SS:SPEC){reference-type="ref" reference="SS:SPEC"}, we derive the propagation of smallness for the 2D elliptic equation and then establish the spectral inequality for the Schrödinger operator $H$ as a consequence. Third, in Section [4](#SS:RESO){reference-type="ref" reference="SS:RESO"}, we deduce the high-frequency observability based on the resolvent estimate. Then, in Section [5](#SS:MAIN){reference-type="ref" reference="SS:MAIN"}, we complete the proof of Theorem [Theorem 2](#thm:main1){reference-type="ref" reference="thm:main1"} based on the gluing of the low-frequency and high-frequency estimates. Last, to keep the note self-contained, we give detailed proofs of Theorem [Theorem 7](#thm:ZHU){reference-type="ref" reference="thm:ZHU"} and Proposition [Proposition 14](#prop:heat){reference-type="ref" reference="prop:heat"} in Appendices [6](#AppA){reference-type="ref" reference="AppA"} and [7](#AppB){reference-type="ref" reference="AppB"}, respectively. ## Acknowledgments {#acknowledgments .unnumbered} The author P. S. is supported by the ERC-CZ Grant CONTACT LL2105 funded by the Ministry of Education, Youth and Sport of the Czech Republic. The author C. S. is partially supported by the PEPS-JCJC and ANR project SmoothANR-22CE40-0017. The author X. Y. would like to thank Gengsheng Wang and Yubiao Zhang for early discussions on the observability of Schrödinger equations. # Notation and conventions {#SS:Nota} The Fourier transform of a function $f\in L^{1}({\mathbb{R} })$, denoted by $\widehat{f}$, is defined as: $$\widehat{f}(\xi)=\frac{1}{2\pi}\int_{{\mathbb{R} }}e^{- i x\xi}f(x){\rm{d}}x,\quad \mbox{for any}\ \xi\in {\mathbb{R} }.$$ Recall that the Fourier transform defines a linear bounded operator from $L^{1}({\mathbb{R} })\cap L^{2}({\mathbb{R} })$ to $L^{2}({\mathbb{R} })$. Moreover, this operator is, up to a normalization constant, an isometry, and so there is a unique bounded extension $\mathcal{F}$ defined on all of $L^{2}({\mathbb{R} })$. 
The operator $\mathcal{F}$ is called the Fourier transform in $L^{2}({\mathbb{R} })$. To shorten notation, we denote $\widehat{f}=\mathcal{F}f$ for $f\in L^{2}({\mathbb{R} })$. Note that if $u(t)$ is a solution for [\[equ:LS\]](#equ:LS){reference-type="eqref" reference="equ:LS"}, then for any $\theta>0$, $U(t)=e^{i\theta t}u(t)$ is a solution for $$i\partial_{t}U-\partial_{x}^{2}U+V(x)U+\theta U=0,\quad U_{|t=0}=u_{0}\in L^{2}({\mathbb{R} }).$$ This gauge transformation leaves the observability inequality [\[est:Obser\]](#est:Obser){reference-type="eqref" reference="est:Obser"} invariant. Therefore, without loss of generality, we can assume the potential $V$ is a continuous bounded function with $V\ge 1$ throughout the article. To shorten notation, for a given potential $V\in C({\mathbb{R} })\cap L^{\infty}({\mathbb{R} })$ with $V\ge 1$, we denote $$\|V\|_{\infty}=\|V\|_{L^{\infty}({\mathbb{R} })}\quad \mbox{and}\quad {H}=-\partial_{x}^{2}+V\quad \mbox{with domain}\ H^{2}({\mathbb{R} }).$$ Note also that the operator ${H}$ on $L^{2}({\mathbb{R} })$ with domain $H^{2}({\mathbb{R} })$ is self-adjoint with non-negative continuous spectrum. Based on the spectral theorem (see for instance [@DAVIES Section 2.5]), there exists a spectral measure ${\rm{d}}m_{\lambda}$ such that $$F(\sqrt{{H}})=\int_{0}^{\infty}F(\lambda){\rm{d}}m_{\lambda},\quad \mbox{for any bounded function} \ F.$$ Moreover, for any bounded functions $F$ and $G$, we have $$\left(F(\sqrt{{H}})f,G(\sqrt{{H}})f\right)_{L^{2}({\mathbb{R} })}=\int_{0}^{\infty}F(\lambda)\overline{G(\lambda)}({\rm{d}}m_{\lambda}f,f)_{L^{2}({\mathbb{R} })},\quad \mbox{for}\ f\in L^{2}({\mathbb{R} }).$$ Here, ${\rm{d}}m_{\lambda}$ is the spectral measure of the operator $\sqrt{{H}}$. 
In particular, for $\mu>0$, the spectral projector $\Pi_{\mu}$, associated with the function $F(\lambda)=\textbf{1}_{\{\lambda\le \mu\}}$, is defined by $$\Pi_{\mu}=\textbf{1}_{\left\{\sqrt{{H}}\le \mu\right\}}=\int_{0}^{\mu}{\rm{d}}m_{\lambda}.$$ On the other hand, the operator $i{H}$ generates a unitary group $e^{it{H}}$ and so the solution of [\[equ:LS\]](#equ:LS){reference-type="eqref" reference="equ:LS"} can be written as $u(t)=e^{it{H}}u_{0}\in L^{2}({\mathbb{R} })$. For future reference, we define $$D_{1}={\mathbb{R} }\times \left[-\frac{1}{2},\frac{1}{2}\right],\quad D_{2}={\mathbb{R} }\times \left[-\frac{3}{2},\frac{3}{2}\right] \quad \mbox{and}\ \ D_{3}={\mathbb{R} }\times \left[-\frac{5}{2},\frac{5}{2}\right].$$ Next, for any $\ell\in \mathbb{Z}$, we define $$I_{1\ell}=[\ell,\ell+1],\quad I_{2\ell}=[\ell-1,\ell+2]\quad \mbox{and}\ \ I_{3\ell}=[\ell-2,\ell+3].$$ Moreover, for any $\ell\in \mathbb{Z}$, we set $$D_{1\ell}=I_{1\ell}\times \left[-\frac{1}{2},\frac{1}{2}\right],\quad D_{2\ell}=I_{2\ell}\times \left[-\frac{3}{2},\frac{3}{2}\right] \quad \mbox{and}\ \ D_{3\ell}=I_{3\ell}\times \left[-\frac{5}{2},\frac{5}{2}\right].$$ Without loss of generality, we can assume that, for a given thick set $\Omega\subset {\mathbb{R} }$, there exists a constant $0<\zeta< 1$ such that $$\left|\Omega\cap [x,x+1]\right|\ge \zeta \quad \mbox{for any}\ x\in {\mathbb{R} },$$ that is, the constant $L=1$ in Definition [Definition 1](#def:thick){reference-type="ref" reference="def:thick"}. For a given thick set $\Omega$, we set $$\Omega_{\ell}=\Omega \cap I_{1\ell}=\Omega\cap [\ell,\ell+1],\ \ \mbox{for any}\ \ell\in \mathbb{Z}.$$ By abuse of notation, in this article, we use the same letters $\alpha$ and $C$ for some small or large constants and state their dependency on other parameters. For $a\in {\mathbb{R} }^{2}$ and $r>0$, we denote by $B_{r}(a)$ the ball of ${\mathbb{R} }^{2}$ of center $a$ and of radius $r$. 
To simplify notation, we also denote by $B_{r}$ the ball in ${\mathbb{R} }^{2}$ centred at the origin with radius $r>0$. For $\delta>0$, we denote by $\mathcal{H}_{\delta}$ the $\delta$-dimensional Hausdorff content, that is, for a subset $E\subset \mathbb{R}^{2}$, we define $$\mathcal{H}_{\delta}(E)=\inf\left\{\sum_{n=1}^{\infty}r_{n}^{\delta}:E\subset \bigcup_{n=1}^{\infty}B_{r_{n}}(a_{n}),\ a_{n}\in {\mathbb{R} }^{2}\right\}.$$ Let $\omega\subset B_{1}\cap{\ell_{0}}$ satisfy $|\omega|>0$ for some line $\ell_{0}$ in ${\mathbb{R} }^{2}$. From the definition of 1D Lebesgue measure and $\delta$-Hausdorff content, we have $$\label{equ:measu} |\omega|=\inf\left\{\sum_{n=1}^{\infty}|I_{n}|:\omega\subset\bigcup_{n=1}^{\infty}I_{n},\ I_{n}\subset \ell_{0} \right\}=2\mathcal{H}_{1}(\omega).$$ # Spectral inequality {#SS:SPEC} ## Propagation of smallness In this subsection, we introduce the propagation of smallness for solutions of elliptic equations in ${\mathbb{R} }^{2}$. We start with the following technical lemma for the second-order ODE $$\label{equ:ode} -\varphi''(x)+V(x)\varphi(x)=0.$$ **Lemma 5**. *Let $I=[a,b]$ be a finite interval and let $V\in C({\mathbb{R} })\cap L^{\infty}({\mathbb{R} })$ with $V\ge 1$. There exists a $C^{2}$ positive solution $\varphi$ of [\[equ:ode\]](#equ:ode){reference-type="eqref" reference="equ:ode"} on the interval $I$ such that $$1\le \varphi(x)\le e^{(b-a)^{2}\|V\|_{{\infty}}},\quad \mbox{for any}\ x\in I.$$* **Remark 6**. The continuity of the potential $V$ is to ensure the existence and regularity of the solution for the second-order ODE [\[equ:ode\]](#equ:ode){reference-type="eqref" reference="equ:ode"}. Such a regular solution can help us to reduce the elliptic equation [\[equ:ellV\]](#equ:ellV){reference-type="eqref" reference="equ:ellV"} of non-divergence form to divergence form [\[equ:2Dell\]](#equ:2Dell){reference-type="eqref" reference="equ:2Dell"}. This is the only reason why we assume $V\in C({\mathbb{R} })$. 
Actually, we expect Theorem [Theorem 2](#thm:main1){reference-type="ref" reference="thm:main1"} to still hold true for any potential $V\in L^{\infty}({\mathbb{R} })$. *Proof of Lemma [Lemma 5](#le:ODE){reference-type="ref" reference="le:ODE"}.* Consider the following initial-value problem $$\label{equ:ODEcau} -\varphi''(x)+V(x)\varphi(x)=0\quad \mbox{with}\ \ (\varphi(a),\varphi'(a))=(1,0).$$ Based on Picard's existence theorem and $V\in C({\mathbb{R} })\cap L^{\infty}({\mathbb{R} })$, there exists a unique $C^{2}$ solution $\varphi$ to the initial-value problem [\[equ:ODEcau\]](#equ:ODEcau){reference-type="eqref" reference="equ:ODEcau"}. We now establish the lower and upper bounds for $\varphi$ on the interval $I$. First, from [\[equ:ODEcau\]](#equ:ODEcau){reference-type="eqref" reference="equ:ODEcau"} and $V\in C({\mathbb{R} })\cap L^{\infty}({\mathbb{R} })$ with $V\ge 1$, we see that $\varphi(x)\ge 0$ on $I$: as long as $\varphi$ remains non-negative, it is convex with $\varphi(a)=1$ and $\varphi'(a)=0$, so it cannot cross zero. Therefore, for any $x\in I$, we have $$\varphi'(x)=\int_{a}^{x}V(s)\varphi(s){\rm{d}}s\ge 0\Longrightarrow \varphi(x)\ge \varphi(a)=1.$$ Second, using again [\[equ:ODEcau\]](#equ:ODEcau){reference-type="eqref" reference="equ:ODEcau"} and $V\in C({\mathbb{R} })\cap L^{\infty}({\mathbb{R} })$ with $V\ge 1$, $$\varphi(x)\le 1+ (b-a)\|V\|_{\infty}\int_{a}^{x}\varphi(s){\rm{d}}s,\quad \mbox{for any}\ x\in I.$$ It follows from Grönwall's inequality that $$\varphi(x)\le e^{\|V\|_{\infty}\int_{a}^{x}(b-a){\rm{d}}s}\le e^{(b-a)^{2}\|V\|_{{\infty}}},\quad \mbox{for any}\ x\in I.$$ Combining the above inequalities, we complete the proof of Lemma [Lemma 5](#le:ODE){reference-type="ref" reference="le:ODE"}.
◻ Second, we consider the 2D elliptic equation in divergence form $$\label{equ:2Dell} \nabla \cdot \left(A(z)\nabla \phi(z)\right)=0\quad \mbox{in} \ B_{4}.$$ Here $z=(x,y)\in {\mathbb{R} }^{2}$, and the real symmetric matrix $A(z)=(a_{jk}(z))_{2\times2}$ is elliptic, that is, there exists a constant $\Lambda>1$ such that $$\label{est:ell} \Lambda^{-1}|\xi|^{2}\le \xi^{T}A(z)\xi\le \Lambda|\xi|^{2},\quad \mbox{for any}\ \xi\in {\mathbb{R} }^{2} \ \mbox{and}\ z\in B_{4}.$$ We recall the following propagation of smallness for solutions to [\[equ:2Dell\]](#equ:2Dell){reference-type="eqref" reference="equ:2Dell"} from [@ZHU]. **Theorem 7** ([@ZHU]). *Let $\omega\subset B_{1}\cap \ell_{0}$ satisfy $|\omega|>0$ for some line $\ell_{0}$ in ${\mathbb{R} }^{2}$ with the normal vector $\boldsymbol{\rm{e}}_{0}$. There exist some constants $\alpha=\alpha(\Lambda, |\omega|)\in(0,1)$ and $C=C(\Lambda,|\omega|)>0$, depending only on $\Lambda$ and $|\omega|$, such that for any real-valued $H_{loc}^{2}$ solution $\phi$ of [\[equ:2Dell\]](#equ:2Dell){reference-type="eqref" reference="equ:2Dell"} with $A\nabla \phi\cdot \boldsymbol{\rm{e}}_{0}=0$ on $B_{1}\cap \ell_{0}$, we have $$\label{eq:smallness} \sup_{B_{1}}|\phi|\le C\left(\sup_{\omega}|\phi|^{\alpha}\right)\left(\sup_{B_{2}}|\phi|^{1-\alpha}\right).$$* **Remark 8**. By scaling and translation, the interpolation inequality [\[eq:smallness\]](#eq:smallness){reference-type="eqref" reference="eq:smallness"} remains true if we replace $B_1,B_2$ by balls $B_r(a),B_{2r}(a)$ for $H_{loc}^2$ solutions to [\[equ:2Dell\]](#equ:2Dell){reference-type="eqref" reference="equ:2Dell"} in $B_{4r}(a)$, where $a \in {\mathbb{R} }^2$ and $r>0$, and the constant $C$ depends only on $\Lambda,|\omega|$ and $r>0$. **Remark 9**. We mention here that Theorem [Theorem 7](#thm:ZHU){reference-type="ref" reference="thm:ZHU"} is just a special version of [@ZHU Theorem 1.1].
Actually, [@ZHU Theorem 1.1] shows the propagation of smallness for solutions from any $\omega\subset B_{1}$ lying on a line with $\mathcal{H}_{\delta}(\omega)>0$. Here $\mathcal{H}_{\delta}(\omega)$ denotes the $\delta$-dimensional Hausdorff content of $\omega$. For the sake of completeness and the readers' convenience, a sketch of the proof of Theorem [Theorem 7](#thm:ZHU){reference-type="ref" reference="thm:ZHU"} is given in Appendix [6](#AppA){reference-type="ref" reference="AppA"}. Finally, we introduce the $L^{2}$-propagation of smallness for $H_{loc}^{2}$ solutions of the following 2D elliptic equation in nondivergence form $$\label{equ:ellV} -\Delta \phi(z)+V(x)\phi (z)=0\quad \mbox{with}\ \ \partial_{y}\phi_{|y=0}=0.$$ Following [@LOGUNOV Section 2], we see that the equation [\[equ:ellV\]](#equ:ellV){reference-type="eqref" reference="equ:ellV"} in nondivergence form can be reduced to the divergence form [\[equ:2Dell\]](#equ:2Dell){reference-type="eqref" reference="equ:2Dell"}. More precisely, for the given potential $V\in C({\mathbb{R} })\cap L^{\infty}({\mathbb{R} })$ with $V\ge 1$, we first consider the positive $C^{2}$ solution $\varphi$ constructed in Lemma [Lemma 5](#le:ODE){reference-type="ref" reference="le:ODE"} for the second-order ODE [\[equ:ode\]](#equ:ode){reference-type="eqref" reference="equ:ode"}. Then, by an elementary computation, on $[a,b]\times {\mathbb{R} }$, we deduce that $$\label{equ:reduc} \left\{\begin{aligned} \partial_{y}\phi_{|y=0}=0 &\Longrightarrow \partial_{y}\left(\frac{\phi(z)}{\varphi(x)}\right)_{|y=0}=0,\\ -\Delta \phi(z)+V(x)\phi(z)=0&\Longrightarrow \nabla \cdot \left(\varphi^{2}(x)\nabla\left(\frac{\phi(z)}{\varphi(x)}\right) \right)=0.
\end{aligned}\right.$$ Combining the above reduction with Theorem [Theorem 7](#thm:ZHU){reference-type="ref" reference="thm:ZHU"}, we now establish the following $L^{2}$-propagation of smallness for $H_{loc}^{2}$ solutions of the 2D elliptic equation [\[equ:ellV\]](#equ:ellV){reference-type="eqref" reference="equ:ellV"}. The proof is inspired by the techniques developed in Burq-Moyano [@BurqMoyano Section 2]. **Proposition 10**. *Let $\ell\in \mathbb{Z}$ and let $V\in C({\mathbb{R} })\cap L^{\infty}({\mathbb{R} })$ with $V\ge 1$. Then for any measurable set $\omega\subset I_{1\ell}$ with $|\omega|>0$ and any real-valued $H_{loc}^{2}$ solution $\phi$ of [\[equ:ellV\]](#equ:ellV){reference-type="eqref" reference="equ:ellV"}, we have $$\|\phi\|_{L^{2}(D_{1\ell})}\le C\|\phi\|_{L^{2}(\omega)}^{\alpha}\left(\sup_{D_{2\ell}}|\phi|^{1-\alpha}\right),$$ where $\alpha=\alpha(V,|\omega|)\in (0,1)$ and $C=C(V,|\omega|)>0$ depend only on $V$ and $|\omega|$.* **Remark 11**. In the spirit of Burq-Moyano [@BurqMoyano] (see also [@BurqMoyanoJEMS] in a similar context), the $L^{2}$-propagation of smallness can be used to establish the spectral inequality, which in turn yields the observability estimate for the 1D homogeneous heat equation with a potential (see more details in §[3.2](#SS:LOW){reference-type="ref" reference="SS:LOW"}). *Proof of Proposition [Proposition 10](#pro:L2){reference-type="ref" reference="pro:L2"}.* **Step 1.** $L^{\infty}$-propagation of smallness. We claim that, for any measurable set $\omega\subset I_{1\ell}$ with $|\omega|>0$ and any $H_{loc}^{2}$ solution $\phi$ of [\[equ:ellV\]](#equ:ellV){reference-type="eqref" reference="equ:ellV"}, $$\label{est:Linfty} \sup_{D_{1\ell}}|\phi|\le C\left(\sup_{\omega}|\phi|^{\alpha}\right)\left(\sup_{D_{2\ell}}|\phi|^{1-\alpha}\right),$$ where $\alpha=\alpha(V,|\omega|)\in (0,1)$ and $C=C(V,|\omega|)>0$ depend only on $V$ and $|\omega|$.
Indeed, we first consider the positive $C^{2}$ solution $\varphi$ constructed in Lemma [Lemma 5](#le:ODE){reference-type="ref" reference="le:ODE"} for the second-order ODE [\[equ:ode\]](#equ:ode){reference-type="eqref" reference="equ:ode"} on $[\ell-5,\ell+5]$. Then, from the reduction [\[equ:reduc\]](#equ:reduc){reference-type="eqref" reference="equ:reduc"}, we can apply Theorem [Theorem 7](#thm:ZHU){reference-type="ref" reference="thm:ZHU"} to $(\phi/\varphi)$ with $\ell_{0}=\left\{(x,y)\in {\mathbb{R} }^{2}:y=0\right\}$. Hence, by scaling and translation, there exist some constants $\alpha=\alpha(\varphi,|\omega|)\in (0,1)$ and $C=C(\varphi,|\omega|)>0$, depending only on $|\omega|$ and the lower and upper bounds of $\varphi$ on $[\ell-5,\ell+5]$, such that for any $H_{loc}^{2}$ solution $\phi$ of [\[equ:ellV\]](#equ:ellV){reference-type="eqref" reference="equ:ellV"}, we have $$\sup_{B_{1\ell}}\left|\frac{\phi}{\varphi}\right|\le C\left(\sup_{\omega}\left|\frac{\phi}{\varphi}\right|^{\alpha}\right)\left(\sup_{B_{2\ell}}\left|\frac{\phi}{\varphi}\right|^{1-\alpha}\right),$$ where $B_{1\ell}$ and $B_{2\ell}$ are defined by $$B_{1\ell}=B_{\frac{\sqrt{2}}{2}} \left(\left(\ell+\frac{1}{2},0\right)\right)\quad \mbox{and}\quad B_{2\ell}=B_{\sqrt{2}}\left(\left(\ell+\frac{1}{2},0\right)\right).$$ It follows directly from Lemma [Lemma 5](#le:ODE){reference-type="ref" reference="le:ODE"} that $$\sup_{B_{1\ell}}\left|\frac{\phi}{\varphi}\right|\le C_{1}\left(\sup_{\omega}\left|{\phi}\right|^{\alpha}\right)\left(\sup_{B_{2\ell}}\left|{\phi}\right|^{1-\alpha}\right),$$ where $C_{1}=C_{1}(V,|\omega|)>0$ is a constant depending only on $V$ and $|\omega|$.
On the other hand, from the definition of $D_{1\ell}$, $D_{2\ell}$, $B_{1\ell}$ and $B_{2\ell}$, we observe that $$D_{1\ell}\subset B_{1\ell}\subset B_{2\ell}\subset D_{2\ell},\quad \mbox{for all}\ \ell\in \mathbb{Z}.$$ Therefore, using again Lemma [Lemma 5](#le:ODE){reference-type="ref" reference="le:ODE"}, we conclude that $$\begin{aligned} \sup_{D_{1\ell}}|\phi| &\le \left(\sup_{I_{1\ell}}\left|\varphi\right|\right) \left(\sup_{B_{1\ell}}\left|\frac{\phi}{\varphi}\right|\right)\\ &\le C_{1}e^{100\|V\|_{\infty}}\left(\sup_{\omega}\left|{\phi}\right|^{\alpha}\right)\left(\sup_{D_{2\ell}}\left|{\phi}\right|^{1-\alpha}\right). \end{aligned}$$ This completes the proof of [\[est:Linfty\]](#est:Linfty){reference-type="eqref" reference="est:Linfty"}. **Step 2.** Replacing the $L^{\infty}$ norm with the $L^{2}$ norm. From [\[est:Linfty\]](#est:Linfty){reference-type="eqref" reference="est:Linfty"}, there exist some constants $\alpha_{1}=\alpha_{1}(V,|\omega|)\in (0,1)$ and $C_{2}=C_{2}(V,|\omega|)>0$, depending only on $V$ and $|\omega|$, such that for any $\widetilde{\omega}\subset \omega$ with $\frac{1}{2}|\omega|\le |\widetilde{\omega}|\le |\omega|$, we have $$\label{est:Linfty2} \sup_{D_{1\ell}}|\phi|\le C_{2}\left(\sup_{\widetilde{\omega}}|\phi|^{\alpha_{1}}\right)\left(\sup_{D_{2\ell}}|\phi|^{1-\alpha_{1}}\right).$$ Assume $\phi \not\equiv 0$ on $D_{1\ell}$. Let $0<\varepsilon<1$ be a small constant to be chosen later. Define $$\delta=\varepsilon\left(\big(\sup\limits_{D_{1\ell}}|\phi|^{\frac{1}{\alpha_{1}}}\big)/\big(\sup\limits_{D_{2\ell}}|\phi|^{\frac{1}{\alpha_{1}}-1}\big)\right)\quad \mbox{and}\quad \omega_{\delta}=\left\{x\in \omega:|\phi(x,0)|\le \delta\right\}\subset \omega.$$ We claim that there exists a constant $\varepsilon=\varepsilon(V,|\omega|)$, depending only on $V$ and $|\omega|$, such that $|\omega_{\delta}|\le \frac{1}{2}|\omega|$.
Indeed, otherwise, the inequality [\[est:Linfty2\]](#est:Linfty2){reference-type="eqref" reference="est:Linfty2"} would hold with $\widetilde{\omega}$ replaced by $\omega_{\delta}$ (with same constants $\alpha_{1}$ and $C_{2}$). Hence, from the definition of $\omega_{\delta}$, $$\sup_{D_{1\ell}}|\phi|\le C_{2}\left(\sup_{\omega_{\delta}}|\phi|^{\alpha_{1}}\right) \left(\sup_{D_{2\ell}}|\phi|^{1-\alpha_{1}}\right) \le C_{2}\delta^{\alpha_{1}} \left(\sup_{D_{2\ell}}|\phi|^{1-\alpha_{1}}\right) \le C_{2}\varepsilon^{\alpha_{1}}\sup_{D_{1\ell}}|\phi|.$$ This is a contradiction with $\phi\not\equiv 0$ on $D_{1\ell}$ for $\varepsilon$ small enough such that $C_{2}\varepsilon^{\alpha_{1}}<1$, and so we have $|\omega_{\delta}|\le \frac{1}{2}|\omega|$ for a constant $\varepsilon=\varepsilon(V,|\omega|)\in (0,1)$ which depends only on $V$ and $|\omega|$. In conclusion, using again the definition of $\omega_{\delta}$, we obtain $$\|\phi\|^{2}_{L^{2}(\omega)}\ge \|\phi\|^{2}_{L^{2}(\omega\setminus \omega_{\delta})} \ge \frac{\delta^{2}}{2}|\omega|\ge \frac{1}{2}\varepsilon^{2}|\omega|\left(\big(\sup\limits_{D_{1\ell}}|\phi|^{\frac{2}{\alpha_{1}}}\big)/\big(\sup\limits_{D_{2\ell}}|\phi|^{\frac{2}{\alpha_{1}}-2}\big)\right),$$ which implies $$\|\phi\|_{L^{2}(D_{1\ell})}\le \sup_{D_{1\ell}}|\phi|\le 2(\varepsilon^{2}|\omega|)^{-\frac{\alpha_{1}}{2}}\|\phi\|_{L^{2}(\omega)}^{\alpha_{1}}\left(\sup_{D_{2\ell}}|\phi|^{1-\alpha_{1}}\right).$$ The proof of Proposition [Proposition 10](#pro:L2){reference-type="ref" reference="pro:L2"} is complete. ◻ ## Spectral inequality {#SS:LOW} In this subsection, we deduce the spectral inequality and then establish the observability estimate for the 1D heat equation as a consequence. Following Burq-Moyano [@BurqMoyano], we first establish the spectral inequality for the low-frequency part from $L^{2}$-propagation of smallness. **Lemma 12**. 
*Let $\Omega$ be a $(1,\zeta)$-thick set and let $V\in C({\mathbb{R} })\cap L^{\infty}({\mathbb{R} })$ with $V\ge 1$. Then there exists a constant $C=C(V,\zeta)>0$, depending only on $V$ and $\zeta$, such that for any $\mu>0$ and any $f\in L^{2}({\mathbb{R} })$, we have $$\left\|\Pi_{\mu}f\right\|_{L^{2}({\mathbb{R} })}\le Ce^{3\mu}\|\Pi_{\mu}f\|_{L^{2}(\Omega)}.$$* *Proof.* Without loss of generality, we assume that $f\in L^2({\mathbb{R} })$ is real-valued. For $\mu>0$, $(x,y)\in D_{3}$ and $f\in L^{2}({\mathbb{R} })$, we set $$F_{\mu}(x,y)=\int_{0}^{\mu}\cosh(y\lambda){\rm{d}}m_{\lambda}f(x).$$ Using the fact that $(\cosh{s})''=\cosh s$ and the definition of ${\rm{d}}m_{\lambda}$, $$\partial_{y}^{2}F_{\mu}={H}F_{\mu}=\int_{0}^{\mu}\lambda^{2}\cosh(y\lambda){\rm{d}}m_{\lambda}f.$$ It follows that $$-\Delta F_{\mu}+V(x)F_{\mu}=-\partial_{y}^{2}F_{\mu}+{H}F_{\mu}=0.$$ On the other hand, in the case $y=0$, using the fact that $(\cosh s)'=\sinh s$, $${\partial_{y}F_{\mu}}_{|y=0}=\int_{0}^{\mu}\lambda \sinh(0\lambda){\rm{d}}m_{\lambda}f=0.$$ Hence, the function $F_{\mu}$ is an $H_{loc}^{2}$ solution for [\[equ:ellV\]](#equ:ellV){reference-type="eqref" reference="equ:ellV"} with $F_{\mu}(x,0)=\Pi_{\mu}f(x)$ on $\mathbb{R}$. We now apply Proposition [Proposition 10](#pro:L2){reference-type="ref" reference="pro:L2"} to $F_{\mu}$, and thus, for any $\ell\in \mathbb{Z}$, we obtain $$\|F_{\mu}\|_{L^{2}(D_{1\ell})}\le C\|\Pi_{\mu}f\|_{L^{2}(\Omega_{\ell})}^{\alpha}\left(\sup_{D_{2\ell}}|F_{\mu}|^{1-\alpha}\right),$$ where $C=C(V,\zeta)>0$ is a constant depending only on $V$ and $\zeta$. Therefore, from Young's inequality for products, for any $\varepsilon$ small enough, we have $$\|F_{\mu}\|^{2}_{L^{2}(D_{1\ell})}\le \frac{C_{1}}{\varepsilon}\|\Pi_{\mu}f\|^{2}_{L^{2}(\Omega_{\ell})}+C_{1}\varepsilon\|F_{\mu}\|^{2}_{L^{\infty}{(D_{2\ell})}},$$ where $C_{1}=C_{1}(V,\zeta)>0$ depends only on $V$ and $\zeta$.
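For clarity, the application of Young's inequality for products here can be spelled out as follows: for $Y,Z\ge 0$, $\eta>0$ and $\alpha\in(0,1)$, $$Y^{2\alpha}Z^{2(1-\alpha)}\le \alpha\,\eta^{-\frac{1-\alpha}{\alpha}}Y^{2}+(1-\alpha)\,\eta\, Z^{2},$$ applied with $Y=\|\Pi_{\mu}f\|_{L^{2}(\Omega_{\ell})}$ and $Z=\|F_{\mu}\|_{L^{\infty}(D_{2\ell})}$; renaming the small parameter and enlarging the constant then gives the stated bound, and the precise power of $\varepsilon$ in the first term plays no role in what follows.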
Summing over $\ell\in \mathbb{Z}$, we find $$\label{est:FmuL2} \|F_{\mu}\|^{2}_{L^{2}(D_{1})}\le \frac{C_{1}}{\varepsilon}\|\Pi_{\mu}f\|^{2}_{L^{2}(\Omega)}+C_{1}\varepsilon\sum_{\ell\in \mathbb{Z}}\|F_{\mu}\|^{2}_{L^{\infty}{(D_{2\ell})}}.$$ We then fix suitable cut-off functions. Let $\chi:{\mathbb{R} }^{2}\to {\mathbb{R} }$ be a $C^{2}$ function such that $$\chi\equiv 1\ \ \mbox{on}\ \left[-\frac{3}{2},\frac{3}{2}\right]^{2}\quad \mbox{and}\quad \mbox{supp}\chi\subset \left[-\frac{5}{2},\frac{5}{2}\right]^{2}.$$ For any $\ell\in \mathbb{Z}$, we set $$\chi_{\ell}(x,y)=\chi\left(x-\ell-\frac{1}{2},y\right) \Longrightarrow \chi_{\ell}\equiv 1\ \ \mbox{on}\ D_{2\ell}\quad \mbox{and}\quad \mbox{supp}\chi_{\ell}\subset D_{3\ell}.$$ Therefore, using the 2D Sobolev embedding theorem, we deduce that $$\label{est:FmuLinfty} \begin{aligned} \sum_{\ell\in \mathbb{Z}}\|F_{\mu}\|^{2}_{L^{\infty}(D_{2\ell})} &\le \pi\sum_{\ell\in \mathbb{Z}}\|\chi_{\ell} F_{\mu}\|^{2}_{H^{2}({\mathbb{R} }^{2})}\\ &\le {{C}_{2}}\|F_{\mu}\|^{2}_{H^{2}(D_{3})}\le C_{3}(1+\mu^{4})\|F_{\mu}\|^{2}_{L^{2}(D_{3})}, \end{aligned}$$ where ${C}_{2}>0$ and $C_{3}>0$ depend only on $V$ and the choice of $\chi$.
Combining [\[est:FmuL2\]](#est:FmuL2){reference-type="eqref" reference="est:FmuL2"} and [\[est:FmuLinfty\]](#est:FmuLinfty){reference-type="eqref" reference="est:FmuLinfty"}, for any $\varepsilon$ small enough, we have $$\|F_{\mu}\|^{2}_{L^{2}(D_{1})}\le \frac{C_{1}}{\varepsilon}\|\Pi_{\mu}f\|_{L^{2}(\Omega)}^{2}+C_{1}C_{3}\varepsilon(1+\mu^{4})\|F_{\mu}\|_{L^{2}(D_{3})}^{2}.$$ On the other hand, from the definition of the spectral projector $\Pi_{\mu}$, $$\begin{aligned} \|F_{\mu}\|_{L^{2}(D_{1})}^{2} &=\int_{-\frac{1}{2}}^{\frac{1}{2}}\int_{0}^{\mu}\cosh^{2}(y\lambda)({\rm{d}}m_{\lambda}f,f)_{L^{2}({\mathbb{R} })}{\rm{d}}y\ge \|\Pi_{\mu}f\|_{L^{2}({\mathbb{R} })}^{2},\\ \|F_{\mu}\|_{L^{2}(D_{3})}^{2} &=\int_{-\frac{5}{2}}^{\frac{5}{2}}\int_{0}^{\mu}\cosh^{2}(y\lambda)({\rm{d}}m_{\lambda}f,f)_{L^{2}({\mathbb{R} })}{\rm{d}}y\le 5e^{5\mu}\|\Pi_{\mu}f\|_{L^{2}({\mathbb{R} })}^{2}. \end{aligned}$$ Gathering the above three inequalities, we obtain $$\|\Pi_{\mu}f\|_{L^{2}({\mathbb{R} })}^{2}\le \frac{C_{1}}{\varepsilon}\|\Pi_{\mu}f\|_{L^{2}(\Omega)}^{2} +5C_{1}C_{3}\varepsilon e^{5\mu}(1+\mu^{4})\|\Pi_{\mu}f\|_{L^{2}({\mathbb{R} })}^{2},$$ which completes the proof of Lemma [Lemma 12](#le:spectra){reference-type="ref" reference="le:spectra"} by taking $\varepsilon$ small enough. ◻ **Remark 13**. The assumption $V\geq 1$ can be removed in Lemma [Lemma 12](#le:spectra){reference-type="ref" reference="le:spectra"}, thus extending the result of Lebeau-Moyano [@LeM] to bounded potentials. Here, we consider the spectral measure for $H$ instead of $\sqrt{H}$. The additional argument can be sketched as follows. Although $V(x)$ may take negative values, by Sturm-Liouville theory, we are still able to construct $\varphi(x)$, bounded from below and above as in Lemma [Lemma 5](#le:ODE){reference-type="ref" reference="le:ODE"}, by restricting the size of the interval $I$ so that $|I|\leq 2\sigma_0$, where $\sigma_0=1/4\pi\sqrt{\|V\|_{L^{\infty}}}$.
Consequently, the analogue of Proposition [Proposition 10](#pro:L2){reference-type="ref" reference="pro:L2"} remains true if we replace $D_{1\ell}, D_{2\ell}$ with boxes of the form $I_1\times\left[-\frac{1}{2},\frac{1}{2}\right], I_2\times\left[-\frac{3}{2},\frac{3}{2}\right]$ such that $I_1=[x_0,x_0+\sigma_0], I_2=[x_0-\sigma_0, x_0+2\sigma_0]$. By dividing the interval $[\ell,\ell+1]$ into at most $\left\lfloor1/\sigma_0\right\rfloor+1$ intervals of size at most $\sigma_0$ and using the pigeonhole principle, we are finally able to obtain the same statement as Proposition [Proposition 10](#pro:L2){reference-type="ref" reference="pro:L2"}. Note that, from Lemma [Lemma 12](#le:spectra){reference-type="ref" reference="le:spectra"} and a standard argument, we directly have the observability estimate for the 1D homogeneous heat equation (not necessarily real-valued) $$\label{equ:1D heat} \partial_{t}u-\partial_{x}^{2}u+V(x)u=0,\quad u_{|t=0}=u_{0}\in L^{2}({\mathbb{R} }).$$ Recall that the operator $-H$ generates a semigroup $e^{-tH}$ and so the solution for [\[equ:1D heat\]](#equ:1D heat){reference-type="eqref" reference="equ:1D heat"} can be written as $u(t)=e^{-t{H}}u_{0}\in L^{2}({\mathbb{R} })$. **Proposition 14**. *Let $\Omega$ be a $(1,\zeta)$-thick set and let $V\in C({\mathbb{R} })\cap L^{\infty}({\mathbb{R} })$ with $V\ge 1$. Then there exists a constant $C=C(V,\zeta)>0$, depending only on $V$ and $\zeta$, such that for any $T>0$ and any solution $u$ of [\[equ:1D heat\]](#equ:1D heat){reference-type="eqref" reference="equ:1D heat"} we have $$\|u(T)\|_{L^{2}({\mathbb{R} })}^{2}\le Ce^{\frac{C}{T}}\int_{0}^{T}\left\|u(t)\right\|^{2}_{L^{2}(\Omega)}{\rm{d}}t.$$* *Proof.* For the sake of completeness and the readers' convenience, the details of the proof of Proposition [Proposition 14](#prop:heat){reference-type="ref" reference="prop:heat"} are given in Appendix [7](#AppB){reference-type="ref" reference="AppB"}.
◻ Thanks to the above proposition, we obtain the observability estimate for the 1D inhomogeneous heat equation $$\label{equ:1Dinhomoheat} \partial_{t}u-\partial_{x}^{2}u+V(x)u=F\in L^{2}((0,\infty);L^{2}({\mathbb{R} })),\quad u_{|t=0}=u_{0}\in H^{2}({\mathbb{R} }).$$ **Corollary 15**. *Let $\Omega$ be a $(1,\zeta)$-thick set and let $V\in C({\mathbb{R} })\cap L^{\infty}({\mathbb{R} })$ with $V\ge 1$. Then there exists a constant $C=C(V,\zeta)>0$, depending only on $V$ and $\zeta$, such that for any $T>0$ and any solution $u$ of [\[equ:1Dinhomoheat\]](#equ:1Dinhomoheat){reference-type="eqref" reference="equ:1Dinhomoheat"} we have $$\|u(T)\|_{L^{2}({\mathbb{R} })}^{2}\le Ce^{\frac{C}{T}}\int_{0}^{T}\left(\left\|Hu(t)\right\|^{2}_{L^{2}(\Omega)}+\|F(t)\|^{2}_{L^{2}({\mathbb{R} })}\right){\rm{d}}t.$$* *Proof.* We decompose the solution $u$ as $$u(t,x)=u_{1}(t,x)+u_{2}(t,x),\quad \mbox{on}\ [0,T]\times {\mathbb{R} },$$ where $u_{1}$ and $u_{2}$ are the solutions of the following homogeneous and inhomogeneous 1D heat equations, respectively: $$\label{equ:u1u2} \left\{ \begin{aligned} \partial_{t}u_{1}-\partial_{x}^{2}u_{1}+V(x)u_{1}&=0,\quad {u_{1}}_{|t=0}=u_{0},\\ \partial_{t}u_{2}-\partial_{x}^{2}u_{2}+V(x)u_{2}&=F,\quad {u_{2}}_{|t=0}=0.\end{aligned} \right.$$ First, from $V\in C({\mathbb{R} })\cap L^{\infty}({\mathbb{R} })$ with $V\ge 1$, we have $$H\ge {\rm{Id}}\Longrightarrow \left\|Hu_{1}\right\|_{L^{2}({\mathbb{R} })}\ge \|u_{1}\|_{L^{2}({\mathbb{R} })}.$$ It follows from Proposition [Proposition 14](#prop:heat){reference-type="ref" reference="prop:heat"} (applied to $Hu_{1}$, which also solves [\[equ:1D heat\]](#equ:1D heat){reference-type="eqref" reference="equ:1D heat"}) that $$\|u_{1}(T)\|_{L^{2}({\mathbb{R} })}^{2}\le \|Hu_{1}(T)\|_{L^{2}({\mathbb{R} })}^{2}\le Ce^{\frac{C}{T}}\int_{0}^{T}\|Hu_{1}(t)\|^{2}_{L^{2}(\Omega)}{\rm{d}}t,$$ where $C=C(V,\zeta)>0$ depends only on $V$ and $\zeta$.
Note that, from [\[equ:u1u2\]](#equ:u1u2){reference-type="eqref" reference="equ:u1u2"}, the term $Hu_{1}$ can be rewritten as $Hu_{1}=Hu+\partial_{t}u_{2}-F$ and so we obtain $$\label{est:u1} \begin{aligned} \|u_{1}(T)\|_{L^{2}({\mathbb{R} })}^{2} &\le 3Ce^{\frac{C}{T}}\int_{0}^{T}\|Hu(t)\|^{2}_{L^{2}(\Omega)}{\rm{d}}t\\ &+3Ce^{\frac{C}{T}}\int_{0}^{T}\left(\|\partial_{t}u_{2}(t)\|^{2}_{L^{2}({\mathbb{R} })}+\|F(t)\|^{2}_{L^{2}({\mathbb{R} })} \right){\rm{d}}t. \end{aligned}$$ Second, using again [\[equ:u1u2\]](#equ:u1u2){reference-type="eqref" reference="equ:u1u2"}, we directly have $$(\partial_{t}u_{2})^{2}+\frac{1}{2}\partial_{t}\left((\partial_{x}u_{2})^{2}+V(x)u_{2}^{2}\right)-\partial_{x}\left((\partial_{t}u_{2})(\partial_{x}u_{2})\right)=F\partial_{t}u_{2}.$$ Integrating the above identity over $[0,T]\times {\mathbb{R} }$, and then using the Cauchy-Schwarz inequality, we see that $$\label{est:u2} \|u_{2}(T)\|_{L^{2}({\mathbb{R} })}^{2}+\int_{0}^{T}\|\partial_{t}u_{2}(t)\|_{L^{2}({\mathbb{R} })}^{2}{\rm{d}}t\le \int_{0}^{T}\|F(t)\|_{L^{2}({\mathbb{R} })}^{2}{\rm{d}}t.$$ Here, we used the fact that $H\ge {\rm{Id}}$ and the zero initial condition of $u_2$ in $H^2({\mathbb{R} })$. Combining [\[est:u1\]](#est:u1){reference-type="eqref" reference="est:u1"} and [\[est:u2\]](#est:u2){reference-type="eqref" reference="est:u2"} with the Cauchy-Schwarz inequality, we obtain $$\begin{aligned} \|u(T)\|_{L^{2}({\mathbb{R} })}^{2} &\le 2\left(\|u_{1}(T)\|_{L^{2}({\mathbb{R} })}^{2}+\|u_{2}(T)\|_{L^{2}({\mathbb{R} })}^{2}\right)\\ &\le (6C+1)e^{\frac{C}{T}}\int_{0}^{T}\left(\|Hu(t)\|^{2}_{L^{2}(\Omega)}+2\|F(t)\|^{2}_{L^{2}({\mathbb{R} })}\right){\rm{d}}t. \end{aligned}$$ The proof of Corollary 15 is complete.
◻ For the notational convenience of introducing the FBI transform later, we can reverse time, replacing $t$ by $T-t$, to obtain the following observability estimate for the 1D inhomogeneous backward heat equation $$\label{equ:1Dinhoback} \partial_{t}u+\partial_{x}^{2}u-V(x)u=F\in L^{2}((0,\infty);L^{2}({\mathbb{R} })),\quad u_{|t=T}=u_{T}\in H^{2}({\mathbb{R} }).$$ **Corollary 16**. *Let $\Omega$ be a $(1,\zeta)$-thick set and let $V\in C({\mathbb{R} })\cap L^{\infty}({\mathbb{R} })$ with $V\ge 1$. Then there exists a constant $C=C(V,\zeta)>0$, depending only on $V$ and $\zeta$, such that for any $T>0$ and any solution $u$ of [\[equ:1Dinhoback\]](#equ:1Dinhoback){reference-type="eqref" reference="equ:1Dinhoback"} we have $$\|u(0)\|_{L^{2}({\mathbb{R} })}^{2}\le Ce^{\frac{C}{T}}\int_{0}^{T}\left(\left\|Hu(t)\right\|^{2}_{L^{2}(\Omega)}+\|F(t)\|^{2}_{L^{2}({\mathbb{R} })}\right){\rm{d}}t.$$* # Resolvent estimate {#SS:RESO} In this section, we recall the resolvent estimate for the 1D Schrödinger operator $H$ and then deduce the observability estimate for the high-frequency part. Recall that the resolvent estimate for the free operator $-\partial_{x}^{2}$ (that is, $H$ with $V=0$) was first given in [@Gr Proposition 1]. **Lemma 17** ([@Gr]). *Let $\Omega$ be a $(1,\zeta)$-thick set and let $V\in L^{\infty}({\mathbb{R} })$. Then there exist some constants $\mu_{0}=\mu_{0}(V,\zeta)$ and $C=C(V,\zeta)$, depending only on $V$ and $\zeta$, such that for any $\mu>\mu_{0}$ and any $f\in H^{2}({\mathbb{R} })$, we have $$\|f\|_{L^{2}({\mathbb{R} })}^{2}\le \frac{C}{\mu}\|({H}-\mu)f\|_{L^{2}({\mathbb{R} })}^{2}+C\|f\|_{L^{2}(\Omega)}^{2}.$$* For the reader's convenience, we will provide a proof of Lemma [Lemma 17](#le:resolvent){reference-type="ref" reference="le:resolvent"}, different from the one given in [@Gr]. We first need the following technical estimate. **Lemma 18**. *Let $0<\zeta<1$.
Then there exists a constant $c=c(\zeta)>0$, depending only on $\zeta$, such that for any measurable set $\omega\subset (0,1)$ with $\zeta\le |\omega|\le 1$ and any $\lambda>1$, we have $$\label{est:A} \inf_{x_{0}\in {\mathbb{R} }} \int_{\omega}\cos^{2}(\lambda(x-x_{0})){\rm{d}}x\ge c(\zeta).$$* **Remark 19**. We mention here that Lemma [Lemma 18](#le:technical){reference-type="ref" reference="le:technical"} is not a direct consequence of the Riemann-Lebesgue lemma, and its proof requires a more quantitative analysis, since the lower bound in the estimate [\[est:A\]](#est:A){reference-type="eqref" reference="est:A"} depends only on the measure of the set $\omega$, unlike the statement of the Riemann-Lebesgue lemma. *Proof of Lemma [Lemma 18](#le:technical){reference-type="ref" reference="le:technical"}.* Using the structure of measurable sets in ${\mathbb{R} }$ (approximation by finite unions of intervals), for any $0<\varepsilon<1$, there exists a finite union of disjoint open intervals $$U=\bigcup\limits_{n=1}^{N}I_{n}\quad \mbox{with}\quad I_{n}=(a_{n},b_{n})\subset (0,1) \ \ \mbox{for any}\ n\in \left\{1,\dots,N\right\},$$ such that $|\omega \setminus U|+|U\setminus \omega |<\varepsilon$ and $|U|\ge \zeta-\varepsilon$. It follows that $$\label{est:cos2} \begin{aligned} \int_{\omega}\cos^{2}(\lambda(x-x_{0})){\rm{d}}x &\ge \sum_{n=1}^{N}\int_{a_{n}}^{b_{n}}\cos^{2}(\lambda(x-x_{0})){\rm{d}}x-\varepsilon. \end{aligned}$$ By an elementary computation, on any finite interval $I_{n}=(a_{n},b_{n})$, we have $$\label{est:ajbj} \begin{aligned} &\int_{a_{n}}^{b_{n}}\cos^{2}(\lambda(x-x_{0})){\rm{d}}x\\ &=\frac{1}{2}\int_{a_{n}}^{b_{n}}(1+\cos (2\lambda(x-x_{0}))){\rm{d}}x\\ &=\frac{1}{2}\left((b_{n}-a_{n})+\frac{1}{\lambda}\sin \left(\lambda(b_{n}-a_{n})\right)\cos \left(\lambda(b_{n}+a_{n}-2x_{0})\right)\right). \end{aligned}$$ Let $0<\delta\ll 1$ be a small constant to be chosen later.
For any $m\in \mathbb{Z}$, we set $$J_{m,\delta}=\left(x_{0}+\frac{m \pi }{2\lambda}-\frac{\delta}{2\lambda},x_{0}+\frac{m \pi }{2\lambda}+\frac{\delta}{2\lambda}\right)\cap (0,1)\ \ \mbox{and}\ \ S=\bigcup\limits_{m\in \mathbb{Z}}J_{m,\delta}.$$ Note that $\left\{J_{m,\delta}\right\}_{m\in \mathbb{Z}}$ are disjoint sets. For further reference, we consider $$\mathcal{A}=\left\{m\in \mathbb{Z}:J_{m,\delta}\ne \emptyset\right\},\quad \mbox{and thus we have}\quad \# \mathcal{A}\le 3\lambda.$$ We split the index set $\left\{1,\dots,N\right\}$ into the following three cases, according to the size and position of the finite interval $I_{n}$, and then establish an estimate in each case. **Case 1.** Let $(b_{n}-a_{n})\ge \frac{\delta}{\lambda}$. From the fact that $\frac{\sin x}{x}$ is decreasing on $[\delta,\pi)$, for $0<\delta\ll 1$, $$\sup_{[\delta,\infty)}\left|\frac{\sin x}{x}\right|\le \frac{\sin \delta}{\delta} \Longrightarrow \frac{1}{\lambda}\left|\sin \left(\lambda(b_{n}-a_{n})\right)\right|\le \frac{\sin \delta}{\delta}(b_{n}-a_{n}).$$ Based on the above estimate and [\[est:ajbj\]](#est:ajbj){reference-type="eqref" reference="est:ajbj"}, we obtain $$\label{est:case1} \int_{a_{n}}^{b_{n}}\cos^{2}(\lambda(x-x_{0})){\rm{d}}x \ge \frac{1}{2}\left(1-\frac{\sin \delta}{\delta}\right)(b_{n}-a_{n}).$$ **Case 2.** Let $0<(b_{n}-a_{n})< \frac{\delta}{\lambda}$ with $S\cap I_{n}= \emptyset$. First, from the definition of $S$ and $S\cap I_{n}=\emptyset$, there exists $m\in \mathbb{Z}$ such that $$I_{n}\subset \left(x_{0}+\frac{m \pi }{2\lambda}+\frac{\delta}{2\lambda},x_{0}+\frac{ (m+1)\pi}{2\lambda}-\frac{\delta}{2\lambda}\right),$$ and thus, from $\sin^{2}x+\cos^{2}x=1$, we find $$\begin{aligned} \mbox{dist}(\lambda(b_{n}+a_{n}-2x_{0}),\pi\mathbb{Z})\ge \delta\Longrightarrow \left|\cos (\lambda(b_{n}+a_{n}-2x_{0}))\right|\le \sqrt{1-\sin^{2}\delta}.
\end{aligned}$$ Therefore, using again [\[est:ajbj\]](#est:ajbj){reference-type="eqref" reference="est:ajbj"} and $|\sin x|\le |x|$, we obtain $$\label{est:case2} \int_{a_{n}}^{b_{n}}\cos^{2}(\lambda(x-x_{0})){\rm{d}}x\ge \frac{1}{2}\left(1-\sqrt{1-\sin^{2}\delta}\right)(b_{n}-a_{n}).$$ **Case 3.** We now consider the last case, that is, the case of $0<(b_{n}-a_{n})< \frac{\delta}{\lambda}$ with $S\cap I_{n}\ne \emptyset$. To simplify notation, we denote $$\mathcal{B}=\left\{n\in\left\{1,\dots,N\right\}:0<(b_{n}-a_{n})< \frac{\delta}{\lambda}\ \ \mbox{with} \ \ S\cap I_{n}\ne \emptyset\right\}.$$ Note that, for any $n\in \mathcal{B}$, there exists $m\in\mathcal{A}$ such that $$I_{n}\subset {J}_{m,3\delta}\quad \mbox{where}\ \ {J}_{m,3\delta}=\left(x_{0}+\frac{m \pi }{2\lambda}-\frac{3\delta}{2\lambda},x_{0}+\frac{m \pi }{2\lambda}+\frac{3\delta}{2\lambda}\right)\cap(0,1).$$ Next, from $0<\delta\ll 1$, for any $(m,m')\in \mathbb{Z}^{2}$ with $m\ne m'$, we find ${J}_{m,3\delta}\cap {J}_{m',3\delta}=\emptyset$. 
Therefore, using the fact that $\left\{I_{n}\right\}_{n=1}^{N}$ are disjoint intervals and $\# \mathcal{A}\le 3\lambda$, we obtain $$\label{est:case3} \sum_{n\in \mathcal{B}}(b_{n}-a_{n})=\bigg|\bigcup\limits_{n\in \mathcal{B}}I_{n}\bigg|\le \bigg|\bigcup\limits_{m\in \mathcal{A}}{J}_{m,3\delta}\bigg|\le \frac{3\delta}{\lambda}\#\mathcal{A}\le 9\delta.$$ Combining [\[est:cos2\]](#est:cos2){reference-type="eqref" reference="est:cos2"}, [\[est:case1\]](#est:case1){reference-type="eqref" reference="est:case1"}, [\[est:case2\]](#est:case2){reference-type="eqref" reference="est:case2"} and [\[est:case3\]](#est:case3){reference-type="eqref" reference="est:case3"} with $|U|\ge \zeta-\varepsilon$, we conclude that $$\inf_{x_{0}\in {\mathbb{R} }}\int_{\omega}\cos^{2}(\lambda(x-x_{0})){\rm{d}}x\ge \frac{1}{2}\left(1-\max\left(\frac{\sin \delta}{\delta},\sqrt{1-\sin^{2}\delta}\right)\right)(\zeta-\varepsilon-9\delta)-\varepsilon.$$ We see that [\[est:A\]](#est:A){reference-type="eqref" reference="est:A"} follows from the above estimate for $\varepsilon$ and $\delta$ small enough. ◻ We now give an alternative proof of Lemma [Lemma 17](#le:resolvent){reference-type="ref" reference="le:resolvent"} for the reader's convenience. *Proof of Lemma [Lemma 17](#le:resolvent){reference-type="ref" reference="le:resolvent"}.* **Step 1.** Estimate for the flat case. Let $V=0$ and $\ell\in \mathbb{Z}$.
For $\mu>1$, we denote $$F=-\partial_{x}^{2}f-\mu f\quad \mbox{on}\ {\mathbb{R} }.$$ For any $s\in I_{1\ell}$, the function $f$ can be expressed by $$f(x)=\cos (\sqrt{\mu}(x-s))f(s)+\frac{\sin (\sqrt{\mu}(x-s))}{\sqrt{\mu}}f'(s)-\int_{s}^{x}\frac{\sin (\sqrt{\mu}(x-y))}{\sqrt{\mu}}F(y){\rm{d}}y.$$ Note that, in the above identity, the sum of the first two terms can be rewritten as $$\cos (\sqrt{\mu}(x-s))f(s)+\frac{\sin (\sqrt{\mu}(x-s))}{\sqrt{\mu}}f'(s)=r\cos \left(\sqrt{\mu}\left(x-s-\frac{\theta}{\sqrt{\mu}}\right)\right),$$ where $$\theta\in [0,2\pi)\quad \mbox{and}\quad r=\sqrt{|f(s)|^{2}+\mu^{-1}|f'(s)|^{2}}.$$ Therefore, from Lemma [Lemma 18](#le:technical){reference-type="ref" reference="le:technical"} and $\zeta\le |\Omega_{\ell}|\le 1$, there exists a constant $c=c(\zeta)>0$, depending only on $\zeta$, such that $$c|f(s)|^{2}\le \left\| \cos (\sqrt{\mu}(x-s))f(s)+\frac{\sin (\sqrt{\mu}(x-s))}{\sqrt{\mu}}f'(s)\right\|_{L^{2}(\Omega_{\ell})}^{2}.$$ Combining the above estimate with the representation of $f(x)$ above, we find $$\begin{aligned} c|f(s)|^{2}&\le \left\|f(x)+\int_{s}^{x}\frac{\sin (\sqrt{\mu}(x-y))}{\sqrt{\mu}}F(y){\rm{d}}y\right\|^{2}_{L^{2}(\Omega_{\ell})}\\ &\le 2\left\|f\right\|_{L^{2}(\Omega_{\ell})}^{2}+\frac{2}{\mu}\left\|\int_{s}^{x}|F(y)|{\rm{d}}y\right\|^{2}_{L^{2}(\Omega_{\ell})}\\ &\le 2\|f\|_{L^{2}(\Omega_{\ell})}^{2}+\frac{2}{\mu}\|F\|^{2}_{L^{2}(I_{1\ell})}. \end{aligned}$$ Integrating the above estimate with respect to the variable $s$ over $I_{1\ell}$ and then summing over $\ell\in \mathbb{Z}$, we conclude that $$\label{est:V=0} \|f\|_{L^{2}({\mathbb{R} })}^{2}\le \frac{C}{\mu}\|(-\partial_{x}^{2}-\mu)f\|_{L^{2}({\mathbb{R} })}^{2}+C\|f\|_{L^{2}(\Omega)}^{2},$$ where $C=C(\zeta)>0$ is a constant depending only on $\zeta$. **Step 2.** Conclusion.
Note that, from [\[est:V=0\]](#est:V=0){reference-type="eqref" reference="est:V=0"} and $V\in L^{\infty}({\mathbb{R} })$, we obtain $$\begin{aligned} \|f\|_{L^{2}({\mathbb{R} })}^{2} &\le \frac{C}{\mu}\|(-\partial_{x}^{2}-\mu)f\|_{L^{2}({\mathbb{R} })}^{2}+C\|f\|_{L^{2}(\Omega)}^{2}\\ &\le \frac{C}{\mu}\|(H-\mu)f\|_{L^{2}({\mathbb{R} })}^{2} +\frac{C}{\mu}\|V\|_{\infty}^{2}\|f\|^{2}_{L^{2}({\mathbb{R} })} +C\|f\|_{L^{2}(\Omega)}^{2}, \end{aligned}$$ which completes the proof of Lemma [Lemma 17](#le:resolvent){reference-type="ref" reference="le:resolvent"} by taking $\mu$ large enough. ◻ Combining the above resolvent estimate with an argument in [@BZ1 Section 3], we obtain the following observability inequality for the high-frequency part. **Corollary 20**. *Let $\Omega$ be a $(1,\zeta)$-thick set and let $V\in L^{\infty}({\mathbb{R} })$ with $V\ge 1$. Then there exist some constants $\mu_{1}=\mu_{1}(V,\zeta)$ and $C=C(V,\zeta)$, depending only on $V$ and $\zeta$, such that for any $T>0$, $\mu>\mu_{1}\left(1+T^{-\frac{1}{2}}\right)$ and $f\in L^{2}({\mathbb{R} })$, we have $$\|(1-\Pi_{\mu})f\|_{L^{2}({\mathbb{R} })}^{2}\le \frac{C}{T}\int_{0}^{T}\left\|e^{it{H}}(1-\Pi_{\mu})f\right\|_{L^{2}(\Omega)}^{2}{\rm{d}}t.$$* *Proof.* Let $f\in H^{2}({\mathbb{R} })$ and let $F$ be the solution of the 1D Schrödinger equation $$i\partial_{t}F-\partial_{x}^{2}F+V(x)F=0\quad \mbox{with}\ F_{|t=0}=(1-\Pi_{\mu})f\in H^{2}({\mathbb{R} }).$$ We fix a cut-off $C^{2}$ function $\chi:{\mathbb{R} }\to [0,1]$ satisfying $$\chi\equiv 1 \ \mbox{on}\ \left[\frac{1}{4},\frac{3}{4}\right],\quad \mbox{supp}\chi\subset[0,1] \quad \mbox{and}\quad \chi'\in [-5,5].$$ For time $T>0$, we consider a new function $$\Psi(t,x)=\chi\left(\frac{t}{T}\right)F(t,x) \Longrightarrow i\partial_{t}\Psi-\partial_{x}^{2}\Psi+V(x)\Psi=\frac{i}{T}\chi'\left(\frac{t}{T}\right)F.$$ Taking the Fourier transform in the above equation with respect to $t$, $$\label{equ:H2pixi} (H- 
\xi)\widehat{\Psi}(\xi,x)=\frac{i}{T}\mathcal{F}_{t\to \xi}\left({\chi'\left(\frac{t}{T}\right)F}\right)(\xi,x).$$ Let $\mu>\mu_{0}+2$ where $\mu_{0}$ is the parameter appearing in Lemma [Lemma 17](#le:resolvent){reference-type="ref" reference="le:resolvent"}. On the one hand, we apply Lemma [Lemma 17](#le:resolvent){reference-type="ref" reference="le:resolvent"} to $\widehat{\Psi}$. Hence, based on the identity [\[equ:H2pixi\]](#equ:H2pixi){reference-type="eqref" reference="equ:H2pixi"}, for $\xi>\mu>\mu_{0}+2$, we directly have $$\label{est:Psi1} \left\|\widehat{\Psi}(\xi,x)\right\|^{2}_{L_{x}^{2}({\mathbb{R} })}\le \frac{C}{\mu}\left\|\mathcal{F}_{t\to \xi}\left({\chi'\left(\frac{t}{T}\right)F}\right)(\xi,x)\right\|_{L_{x}^{2}({\mathbb{R} })}^{2}+C\left\|\widehat{\Psi}(\xi,x)\right\|^{2}_{L_{x}^{2}(\Omega)},$$ where $C=C(V,\zeta)$ is a constant depending only on $V$ and $\zeta$. On the other hand, for $\xi\le \mu$, we estimate [\[equ:H2pixi\]](#equ:H2pixi){reference-type="eqref" reference="equ:H2pixi"} directly using the fact that $(H-\xi)(\mathrm{Id}-\Pi_{\mu})$ is invertible. More precisely, from the definition of ${\rm{d}}m_{\lambda}$ and $\Psi$, we have $$(H- \xi)\widehat{\Psi}(\xi,x)=\int_{\mu}^{\infty}(\lambda^{2}- \xi){\rm{d}}m_{\lambda}\widehat{\Psi}(\xi,x).$$ Observe that $\frac{\mu^2}{2}<\lambda^2-\mu$ for $\lambda\geq \mu$, since $\mu>\mu_{0}+2>2$. Therefore, for any $\xi\le \mu$, we see that $$\begin{aligned} \frac{1}{2}\mu^{2}\left\|\widehat{\Psi}(\xi,x)\right\|^{2}_{L_{x}^{2}({\mathbb{R} })} &\le \int_{\mu}^{\infty}(\lambda^{2}-\mu)\left({\rm{d}}m_{\lambda}\widehat{\Psi}(\xi,x),\widehat{\Psi}(\xi,x)\right)_{L_{x}^{2}({\mathbb{R} })} \\ &\le \left((H- \xi)\widehat{\Psi}(\xi,x),\widehat{\Psi}(\xi,x)\right)_{L_{x}^{2}({\mathbb{R} })}. 
\end{aligned}$$ Combining the above inequality with [\[equ:H2pixi\]](#equ:H2pixi){reference-type="eqref" reference="equ:H2pixi"}, for $\xi \le \mu$, we obtain $$\label{est:Psi2} \left\|\widehat{\Psi}(\xi,x)\right\|^{2}_{L_{x}^{2}({\mathbb{R} })}\le \frac{4}{\mu^{4} T^{2}}\left\|\mathcal{F}_{t\to \xi}\left({\chi'\left(\frac{t}{T}\right)F}\right)(\xi,x)\right\|_{L_{x}^{2}({\mathbb{R} })}^{2}.$$ Gathering [\[est:Psi1\]](#est:Psi1){reference-type="eqref" reference="est:Psi1"} and [\[est:Psi2\]](#est:Psi2){reference-type="eqref" reference="est:Psi2"}, and then integrating over ${\mathbb{R} }$ for the variable $\xi$, we find $$\begin{aligned} \left\|\widehat{\Psi}(\xi,x)\right\|^{2}_{L_{}^{2}({\mathbb{R} }^{2})} &\le \frac{C}{\mu}\left\|\mathcal{F}_{t\to \xi}\left({\chi'\left(\frac{t}{T}\right)F}\right)(\xi,x)\right\|_{L^{2}({\mathbb{R} }^{2})}^{2}\\ &+\frac{1}{\mu^{4}T^{2}}\left\|\mathcal{F}_{t\to \xi}\left({\chi'\left(\frac{t}{T}\right)F}\right)(\xi,x)\right\|_{L^{2}({\mathbb{R} }^{2})}^{2} +C\left\|\widehat{\Psi}(\xi,x)\right\|^{2}_{L^{2}({\mathbb{R} }\times\Omega)}. \end{aligned}$$ Then, using the Plancherel theorem for the variables $t$ and $\xi$, $$\left\|\Psi(t,x)\right\|_{L^{2}({\mathbb{R} }^{2})}^{2}\le \left(\frac{C}{\mu}+\frac{4}{\mu^{4}T^{2}}\right)\left\|\chi'\left(\frac{t}{T}\right)F(t,x)\right\|_{L^{2}({\mathbb{R} }^{2})}^{2}+ C\left\|\Psi(t,x)\right\|_{L^{2}({\mathbb{R} }\times \Omega)}^{2}.$$ Therefore, by the conservation of $L_{x}^{2}$ for $F$ and the definition of $\chi$ and $\Psi$, we obtain $$\begin{aligned} \|(1-\Pi_{\mu})f\|_{L^{2}({\mathbb{R} })}^{2} &\le 50\left(\frac{C}{\mu}+\frac{4}{\mu^{4}T^{2}}\right)\|(1-\Pi_{\mu})f\|_{L^{2}({\mathbb{R} })}^{2}\\ &+\frac{2C}{T}\int_{0}^{T}\|e^{itH}(1-\Pi_{\mu})f\|_{L^{2}(\Omega)}^{2}{\rm{d}}t, \end{aligned}$$ which completes the proof for $f\in H^{2}({\mathbb{R} })$ by taking $\mu$ large enough.
Last, using a density argument, we complete the proof of Corollary [Corollary 20](#coro:High){reference-type="ref" reference="coro:High"} for any $f\in L^{2}({\mathbb{R} })$. ◻ # Proof of Theorem [Theorem 2](#thm:main1){reference-type="ref" reference="thm:main1"} {#SS:MAIN} In this section, we prove Theorem [Theorem 2](#thm:main1){reference-type="ref" reference="thm:main1"}. The proof is based on the general strategy introduced in Phung [@Phung01] (inspired by Lebeau-Robbiano [@LeRo]) for the Schrödinger equation in a similar context. We start with the following quantitative unique continuation estimate for the 1D Schrödinger equation which plays a crucial role in our proof of Theorem [Theorem 2](#thm:main1){reference-type="ref" reference="thm:main1"}. The key idea of the proof is to take an FBI transformation that transfers the 1D Schrödinger equation to the 1D heat equation. **Proposition 21**. *Let $\Omega$ be a $(1,\zeta)$-thick set and let $V\in C({\mathbb{R} })\cap L^{\infty}({\mathbb{R} })$ with $V\ge 1$. Then there exist some constants $h_{0}=h_{0}(V,\zeta)\in (0,1)$ and $C=C(V,\zeta)>0$, depending only on $V$ and $\zeta$, such that for any $T>0$, $0<h<h_{0}\left(1+T^{-3}\right)^{-1}$ and $f\in H^{2}({\mathbb{R} })$, we have $$\label{est:Key} \begin{aligned} \|f\|_{L^{2}({\mathbb{R} })}^{2} &\le Ch\left(1+\frac{1}{T^{2}}\right)\|Hf\|_{L^{2}({\mathbb{R} })}^{2}\\ &+Ce^{\frac{2T^{2}}{h}+\frac{C}{T}}\int_{0}^{T}\left\|He^{itH}f\right\|_{L^{2}(\Omega)}^2{\rm{d}}t. \end{aligned}$$* *Proof.* **Step 1.** FBI transformation. Following Zworski [@Zworski], we introduce the definition of the FBI transformation.
For $0<h<1$, $z=\tau+is\in \mathbb{C}$ and a regular $L^{2}$-valued function $\Gamma(t)$, we define $$\mathcal{T}_{h}\Gamma(z)=\frac{2^{\frac{1}{4}}}{(2\pi h)^{\frac{3}{4}}}\int_{{\mathbb{R} }}e^{-\frac{(z+t)^{2}}{2h}}\Gamma(t){\rm{d}}t.$$ By an elementary computation and integration by parts, we directly have $$\label{equ:heatSchro} (\partial_{s}+\partial_{x}^{2}-V(x))\mathcal{T}_{h}\Gamma(z)= -\mathcal{T}_{h}\left(\left(i\partial_{t}-\partial_{x}^{2}+V(x)\right)\Gamma\right)(z).$$ This is the key point to transfer the observability estimate for the 1D Schrödinger equation to the observability estimate for the 1D heat equation. Fix $T>0$. We define a cut-off $C^{1}$ function $\chi:{\mathbb{R} }\to [0,1]$ satisfying $$\label{def:chi} \chi\equiv 1\ \mbox{on}\ [2T,8T],\quad \mbox{supp}\chi\subset [0,10T]\quad \mbox{and} \quad \chi' \in \left[-\frac{2}{T},\frac{2}{T}\right].$$ To simplify notation, we will prove that [\[est:Key\]](#est:Key){reference-type="eqref" reference="est:Key"} holds with $T$ replaced by $10T$; since $T>0$ is arbitrary, this completes the proof of [\[est:Key\]](#est:Key){reference-type="eqref" reference="est:Key"} for any $T>0$. For any $f\in H^{2}({\mathbb{R} })$, we denote $F=e^{itH}f$ and $\widetilde{F}=\chi F$. We directly have $$i\partial_{t}\widetilde{F}-\partial_{x}^{2}\widetilde{F}+V(x)\widetilde{F}=i\chi'(t)F.$$ Taking the FBI transformation on both sides of the above identity and then using [\[equ:heatSchro\]](#equ:heatSchro){reference-type="eqref" reference="equ:heatSchro"}, we obtain $$\partial_{s}W+\partial_{x}^{2}W-V(x)W=G,$$ where $W=\mathcal{T}_{h}\widetilde{F}$ and $G=-\mathcal{T}_{h}(i\chi'F)$.
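The intertwining identity [\[equ:heatSchro\]](#equ:heatSchro){reference-type="eqref" reference="equ:heatSchro"} can be sanity-checked numerically in the scalar case, taking $V=0$ and $\Gamma$ independent of $x$ so that the $\partial_{x}^{2}$ and $V$ terms drop out and the identity reduces to $\partial_{s}\mathcal{T}_{h}\Gamma=-\mathcal{T}_{h}(i\partial_{t}\Gamma)$. A sketch (the test function $\Gamma(t)=e^{-t^{2}}$ and all parameters are arbitrary choices):

```python
import cmath

def fbi(gamma, z, h, L=8.0, n=4000):
    # T_h Gamma(z) = 2^{1/4} (2 pi h)^{-3/4} \int e^{-(z+t)^2/(2h)} Gamma(t) dt,
    # approximated by the midpoint rule on [-L, L] (the integrand decays fast).
    c = 2 ** 0.25 / (2 * cmath.pi * h) ** 0.75
    dt = 2 * L / n
    total = 0j
    for k in range(n):
        t = -L + (k + 0.5) * dt
        total += cmath.exp(-(z + t) ** 2 / (2 * h)) * gamma(t)
    return c * total * dt

h = 0.5
gamma = lambda t: cmath.exp(-t ** 2)            # scalar test function Gamma
dgamma = lambda t: -2 * t * cmath.exp(-t ** 2)  # its t-derivative

z, eps = 0.2 + 0.3j, 1e-5
# d/ds T_h Gamma at z = tau + i*s, by central differences in s:
lhs = (fbi(gamma, z + 1j * eps, h) - fbi(gamma, z - 1j * eps, h)) / (2 * eps)
rhs = -1j * fbi(dgamma, z, h)                   # equals -T_h(i d/dt Gamma)(z)
assert abs(lhs - rhs) < 1e-6
```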
From Corollary [Corollary 16](#coro:1Dheatback){reference-type="ref" reference="coro:1Dheatback"}, there exists a constant $C=C(V,\zeta)$, depending only on $V$ and $\zeta$, such that for any $\tau\in {\mathbb{R} }$, we have $$\label{est:W} \begin{aligned} \left\|W(\tau)\right\|_{L^{2}({\mathbb{R} })}^{2} &\le Ce^{\frac{C}{T}}\int_{0}^{T}\left\|HW(\tau+is)\right\|_{L^{2}(\Omega)}^{2}{\rm{d}}s\\ &+Ce^{\frac{C}{T}} \int_{0}^{T}\|G(\tau+is)\|^{2}_{L^{2}({\mathbb{R} })}{\rm{d}}s. \end{aligned}$$ **Step 2.** $L_{s}^{1}L_{x}^{2}$ estimates on $HW$ and $G$. First, from the definition of the FBI transformation and $W(\tau+is)$, we have $$HW(\tau+is)=\frac{2^{\frac{1}{4}}}{(2\pi h)^{\frac{3}{4}}}e^{\frac{s^{2}}{2h}}\int_{{\mathbb{R} }}e^{-\frac{(\tau+t)^{2}}{2h}}e^{-i\frac{s(\tau+t)}{h}}\chi(t)HF(t){\rm{d}}t.$$ It follows from the Cauchy-Schwarz inequality that $$\left\|HW(\tau+is)\right\|_{L^{2}(\Omega)}^{2}\le \frac{20T}{(2\pi h)^{\frac{3}{2}}}e^{\frac{s^{2}}{h}}\int_{0}^{10T}\|HF(t)\|_{L^{2}(\Omega)}^{2}{\rm{d}}t.$$ Integrating the above inequality on $[0,T]$, we see that $$\label{est:HW} \sup_{\tau\in {\mathbb{R} }}\int_{0}^{T}\left\|HW(\tau+is)\right\|_{L^{2}(\Omega)}^{2}{\rm{d}}s\le \frac{20T^{2}}{(2\pi h)^{\frac{3}{2}}}e^{\frac{T^{2}}{h}}\int_{0}^{10T}\|HF(t)\|_{L^{2}(\Omega)}^{2}{\rm{d}}t.$$ Second, using again the definition of the FBI transformation, $$G(\tau+is)=-i\frac{2^{\frac{1}{4}}}{(2\pi h)^{\frac{3}{4}}}e^{\frac{s^{2}}{2h}}\int_{{\mathbb{R} }}e^{-\frac{(\tau+t)^{2}}{2h}}e^{-i\frac{s(\tau+t)}{h}}\chi'(t)F(t){\rm{d}}t.$$ Note that, from the definition of $\chi$ in [\[def:chi\]](#def:chi){reference-type="eqref" reference="def:chi"}, we infer that $$|\tau+t|\ge 2T,\ \ \mbox{for any}\ (\tau,t)\in [-6T,-4T]\times \mbox{supp}\chi'.$$ It follows from $\|F(t)\|_{L^{2}({\mathbb{R} })}=\|e^{itH}f\|_{L^{2}({\mathbb{R} })}=\|f\|_{L^{2}({\mathbb{R} })}$ that $$\max_{\tau\in [-6T,-4T]}\|G(\tau+is)\|_{L^{2}({\mathbb{R} })}^{2}\le \frac{128}{(2\pi
h)^{\frac{3}{2}}}e^{\frac{s^{2}}{h}}e^{-\frac{4T^{2}}{h}}\|f\|_{L^{2}({\mathbb{R} })}^{2}.$$ Integrating the above inequality on $[0,T]$, we see that $$\label{est:G} \max_{\tau \in [-6T,-4T]}\int_{0}^{T}\|G(\tau+is)\|_{L^{2}({\mathbb{R} })}^{2}{\rm{d}}s \le \frac{128T}{(2\pi h)^{\frac{3}{2}}}e^{-\frac{3T^{2}}{h}}\|f\|_{L^{2}({\mathbb{R} })}^{2}.$$ **Step 3.** Conclusion. Combining [\[est:W\]](#est:W){reference-type="eqref" reference="est:W"} and [\[est:G\]](#est:G){reference-type="eqref" reference="est:G"} with [\[est:HW\]](#est:HW){reference-type="eqref" reference="est:HW"}, we obtain $$\label{est:W2} \begin{aligned} \max_{\tau \in [-6T,-4T]}\left\|W(\tau)\right\|_{L^{2}({\mathbb{R} })}^{2} &\le\frac{128CT}{(2\pi h)^{\frac{3}{2}}}e^{-\frac{3T^{2}}{h}+\frac{C}{T}}\|f\|_{L^{2}({\mathbb{R} })}^{2}\\ &+\frac{20CT^{2}}{(2\pi h)^{\frac{3}{2}}}e^{\frac{T^{2}}{h}+\frac{C}{T}}\int_{0}^{10T}\|HF(t)\|_{L^{2}(\Omega)}^{2}{\rm{d}}t. \end{aligned}$$ On the other hand, using again $\|F(t)\|_{L^{2}({\mathbb{R} })}=\|e^{itH}f\|_{L^{2}({\mathbb{R} })}=\|f\|_{L^{2}({\mathbb{R} })}$, $$\|f\|_{L^{2}({\mathbb{R} })}^{2}=\frac{1}{2T}\int_{4T}^{6T}\|F(\tau)\|_{L^{2}({\mathbb{R} })}^{2}{\rm{d}}\tau\le I_{1}+I_{2},$$ where $$\begin{aligned} I_{1}&=\frac{1}{T}\int_{4T}^{6T}\left\|\frac{1}{\sqrt{2\pi h}}\int_{{\mathbb{R} }}e^{-\frac{(\tau-t)^{2}}{2h}}\widetilde{F}(t){\rm{d}}t\right\|_{L^{2}({\mathbb{R} })}^{2}{\rm{d}}\tau,\\ I_{2}&= \frac{1}{T}\int_{4T}^{6T}\left\|F(\tau) -\frac{1}{\sqrt{2\pi h}}\int_{{\mathbb{R} }}e^{-\frac{(\tau-t)^{2}}{2h}}\widetilde{F}(t){\rm{d}}t \right\|_{L^{2}({\mathbb{R} })}^{2}{\rm{d}}\tau. 
\end{aligned}$$ Based on the definition of FBI transformation and $W(\tau)$, we have $$I_{1}=\frac{1}{T}\int_{-6T}^{-4T}\left\|\frac{1}{\sqrt{2\pi h}}\int_{{\mathbb{R} }}e^{-\frac{(\tau+t)^{2}}{2h}}\widetilde{F}(t){\rm{d}}t\right\|_{L^{2}({\mathbb{R} })}^{2}{\rm{d}}\tau=\frac{\sqrt{\pi h}}{T}\int_{-6T}^{-4T}\|W(\tau)\|_{L^{2}({\mathbb{R} })}^{2}{\rm{d}}\tau.$$ It follows from [\[est:W2\]](#est:W2){reference-type="eqref" reference="est:W2"} that $$\label{est:I1} \begin{aligned} I_{1} &\le \frac{128CT}{{\pi h}}e^{-\frac{3T^{2}}{h}+\frac{C}{T}}\|f\|_{L^{2}({\mathbb{R} })}^{2}\\ &+\frac{20CT^{2}}{\pi h}e^{\frac{T^{2}}{h}+\frac{C}{T}} \int_{0}^{10T}\left\|HF(t)\right\|_{L^{2}(\Omega)}^{2}{\rm{d}}t. \end{aligned}$$ Next, using the fact that $\int_{{\mathbb{R} }}e^{-x^{2}}{\rm{d}}x=\sqrt{\pi}$ and the definition of $\chi$, we rewrite $I_{2}$ as $$I_{2}=\frac{1}{T}\int_{4T}^{6T}\left\| \frac{1}{\sqrt{2\pi h}}\int_{{\mathbb{R} }}e^{-\frac{t^{2}}{2h}}\left(F(\tau)-\chi(\tau-t)F(\tau-t)\right){\rm{d}}t \right\|_{L^{2}({\mathbb{R} })}^{2}{\rm{d}}\tau=I_{21}+I_{22},$$ where $$\begin{aligned} I_{21}&=\frac{1}{T}\int_{4T}^{6T}\left\| \frac{1}{\sqrt{2\pi h}}\int_{|t|\ge 6T}e^{-\frac{t^{2}}{2h}}F(\tau){\rm{d}}t \right\|_{L^{2}({\mathbb{R} })}^{2}{\rm{d}}\tau,\\ I_{22}&=\frac{1}{T}\int_{4T}^{6T}\left\| \frac{1}{\sqrt{2\pi h}}\int_{-6T}^{6T}e^{-\frac{t^{2}}{2h}}\left(F(\tau)-\chi(\tau-t)F(\tau-t)\right){\rm{d}}t \right\|_{L^{2}({\mathbb{R} })}^{2}{\rm{d}}\tau. 
\end{aligned}$$ Using again $\|F(t)\|_{L^{2}({\mathbb{R} })}=\|e^{itH}f\|_{L^{2}({\mathbb{R} })}=\|f\|_{L^{2}({\mathbb{R} })}$, we have $$I_{21}\le \frac{1}{\pi h}\left(\int_{|t|\ge 6T}e^{-\frac{t^{2}}{2h}}{\rm{d}}t\right)^{2}\|f\|_{L^{2}({\mathbb{R} })}^{2} \le 2 e^{-\frac{18T^{2}}{h}}\|f\|_{L^{2}({\mathbb{R} })}^{2}.$$ Note that, from $f\in H^{2}({\mathbb{R} })$ and $H\ge \rm{Id}$, we have $$\begin{aligned} \|F(t)\|_{L^{2}({\mathbb{R} })}&\le \|HF(t)\|_{L^{2}({\mathbb{R} })}=\|Hf\|_{L^{2}({\mathbb{R} })},\\ \|\partial_{t}F(t)\|_{L^{2}({\mathbb{R} })}&=\|HF(t)\|_{L^{2}({\mathbb{R} })}=\|Hf\|_{L^{2}({\mathbb{R} })}. \end{aligned}$$ Therefore, from the mean-value theorem, for any $\tau\in (4T,6T)$, we have $$\begin{aligned} &\left\|F(\tau)-\chi(\tau-t)F(\tau-t)\right\|_{L^{2}({\mathbb{R} })}\\ &\le \|F(\tau)-F(\tau-t)\|_{L^{2}({\mathbb{R} })}+|\chi(\tau)-\chi(\tau-t)|\|F(\tau-t)\|_{L^{2}({\mathbb{R} })}\\ &\le |t|\left(\|Hf\|_{L^{2}({\mathbb{R} })}+\|\chi'\|_{L^{\infty}({\mathbb{R} })}\|f\|_{L^{2}({\mathbb{R} })}\right)\le |t|\left(1+\frac{2}{T}\right)\|Hf\|_{L^{2}({\mathbb{R} })}. 
\end{aligned}$$ Based on the above inequality and the Minkowski inequality, we directly have $$I_{22}\le \frac{4}{\pi h}\left(1+\frac{2}{T}\right)^{2}\left(\int_{0}^{6T}e^{-\frac{t^{2}}{2h}}t{\rm{d}}t\right)^{2}\|Hf\|_{L^{2}({\mathbb{R} })}^{2}\le \frac{32h}{\pi}\left(1+\frac{1}{T^{2}}\right)\|Hf\|_{L^{2}({\mathbb{R} })}^{2}.$$ By the above estimates for $I_{21}$ and $I_{22}$, we see that $$\label{est:I2} I_{2}\le I_{21}+I_{22}\le 2e^{-\frac{18T^{2}}{h}}\|f\|_{L^{2}({\mathbb{R} })}^{2}+\frac{32h}{\pi}\left(1+\frac{1}{T^{2}}\right)\|Hf\|_{L^{2}({\mathbb{R} })}^{2}.$$ Gathering [\[est:I1\]](#est:I1){reference-type="eqref" reference="est:I1"} and [\[est:I2\]](#est:I2){reference-type="eqref" reference="est:I2"} together, we conclude that $$\begin{aligned} \|f\|_{L^{2}({\mathbb{R} })}^{2} &\le \frac{32h}{\pi}\left(1+\frac{1}{T^{2}}\right)\|Hf\|_{L^{2}({\mathbb{R} })}^{2}\\ &+ \frac{20CT^{2}}{{\pi h}}e^{\frac{T^{2}}{h}+\frac{C}{T}}\int_{0}^{10T}\left\|HF(t)\right\|_{L^{2}(\Omega)}^{2}{\rm{d}}t\\ &+\left(\frac{128CT}{{\pi h}}e^{-\frac{3T^{2}}{h}+\frac{C}{T}}+2e^{-\frac{18T^{2}}{h}}\right)\|f\|_{L^{2}({\mathbb{R} })}^{2}. \end{aligned}$$ This completes the proof of Proposition [Proposition 21](#prop:key){reference-type="ref" reference="prop:key"} by taking $h_{0}$ small enough, where $h$ satisfies $0<h<h_{0}(1+T^{-3})^{-1}$. ◻ We are in a position to complete the proof of Theorem [Theorem 2](#thm:main1){reference-type="ref" reference="thm:main1"}. *End of the proof of Theorem [Theorem 2](#thm:main1){reference-type="ref" reference="thm:main1"}.* Recall that, without loss of generality, we assume $V\in C({\mathbb{R} })\cap L^{\infty}({\mathbb{R} })$ with $V\ge 1$. It follows that the operator $H$ satisfies $H\ge {\rm{{Id}}}$, and thus $H$ is invertible and is a bijection from $H^{2}({\mathbb{R} })$ to $L^{2}({\mathbb{R} })$.
For any $u_{0}\in L^{2}({\mathbb{R} })$, we denote $U_{0}=H^{-1}u_{0}\in H^{2}({\mathbb{R} })$ and let $u(t)=e^{itH}u_{0}$ be the corresponding solution of [\[equ:LS\]](#equ:LS){reference-type="eqref" reference="equ:LS"}. First, from Proposition [Proposition 21](#prop:key){reference-type="ref" reference="prop:key"}, there exist some constants $h_{0}=h_{0}(V,\zeta)\in (0,1)$ and $C=C(V,\zeta)>0$ depending only on $V$ and $\zeta$, such that for any $T>0$ and any $0<h<h_{0}(1+T^{-3})^{-1}$, we have $$\label{est:U0} \begin{aligned} \|U_{0}\|_{L^{2}({\mathbb{R} })}^{2} &\le Ch\left(1+\frac{1}{T^{2}}\right)\|u_{0}\|_{L^{2}({\mathbb{R} })}^{2}\\ &+Ce^{\frac{2T^{2}}{h}+\frac{C}{T}}\int_{0}^{T}\|u(t)\|_{L^{2}(\Omega)}^{2}{\rm{d}}t. \end{aligned}$$ Second, using Corollary [Corollary 20](#coro:High){reference-type="ref" reference="coro:High"} and the triangle inequality, there exist $\mu_{1}=\mu_{1}(V,\zeta)>0$ and $C_{1}=C_{1}(V,\zeta)>0$, depending only on $V$ and $\zeta$, such that for any $T>0$ and $\mu>\mu_{1}(1+T^{-\frac{1}{2}})$, we have $$\|u_{0}\|_{L^{2}({\mathbb{R} })}^{2}\le \frac{C_{1}}{T}\int_{0}^{T}\|u(t)\|_{L^{2}(\Omega)}^{2}{\rm{d}}t+(C_{1}+1)\|\Pi_{\mu} u_{0}\|_{L^{2}({\mathbb{R} })}^{2}.$$ Note that, from the definition of $U_{0}$ and $\Pi_{\mu}$, we find $$\|\Pi_{\mu} u_{0}\|_{L^{2}({\mathbb{R} })}^{2}=\int_{0}^{\mu}({\rm{d}}m_{\lambda}u_{0},u_{0})_{L^{2}({\mathbb{R} })}=\int_{0}^{\mu}\lambda^{4}\left({\rm{d}}m_{\lambda} U_{0},U_{0}\right)_{L^{2}({\mathbb{R} })}\le \mu^{4}\left\|U_{0}\right\|_{L^{2}({\mathbb{R} })}^{2},$$ which implies $$\label{est:u0} \|u_{0}\|_{L^{2}({\mathbb{R} })}^{2}\le \frac{C_{1}}{T}\int_{0}^{T}\|u(t)\|_{L^{2}(\Omega)}^{2}{\rm{d}}t+(C_{1}+1)\mu^{4}\|U_{0}\|_{L^{2}({\mathbb{R} })}^{2}.$$ Combining [\[est:U0\]](#est:U0){reference-type="eqref" reference="est:U0"} and [\[est:u0\]](#est:u0){reference-type="eqref" reference="est:u0"}, we conclude that $$\begin{aligned} \|u_{0}\|_{L^{2}({\mathbb{R} })}^{2} &\le
C(C_{1}+1)\mu^{4}h\left(1+\frac{1}{T^{2}}\right)\|u_{0}\|_{L^{2}({\mathbb{R} })}^{2}\\ &+\left(C(C_{1}+1)\mu^{4}e^{\frac{2T^{2}}{h}+\frac{C}{T}}+\frac{C_{1}}{T}\right)\int_{0}^{T}\|u(t)\|_{L^{2}(\Omega)}^{2}{\rm{d}}t. \end{aligned}$$ This completes the proof of Theorem [Theorem 2](#thm:main1){reference-type="ref" reference="thm:main1"} by taking $\mu=2\mu_{1}(1+T^{-\frac{1}{2}})$ and $h=\varepsilon(1+T^{-4})^{-1}$ with $\varepsilon$ small enough. ◻ # Proof of Theorem [Theorem 7](#thm:ZHU){reference-type="ref" reference="thm:ZHU"} {#AppA} In this appendix, we reproduce the proof of Theorem [Theorem 7](#thm:ZHU){reference-type="ref" reference="thm:ZHU"} from [@ZHU], which is based on complex analysis. First, we recall the following Jensen's formula from [@Rudin Theorem 15.18]. **Lemma 22** (Jensen's formula). *Let $0<r_{1}<r_{2}<\infty$. Let $f$ be a holomorphic function in ${B}_{r_{2}}$ with $f(0)\ne 0$, and let $a_{1}, a_{2},\dots,a_{N}$ be the zeros of $f$ in $B_{r_{1}}$ $($repeated according to their respective multiplicity$)$. Then we have $$\log |f(0)|=\sum_{n=1}^{N}\log \left(\frac{|a_{n}|}{r_{1}}\right)+\frac{1}{2\pi }\int_{0}^{2\pi}\log\left|f(r_{1}e^{i\theta})\right|{\rm{d}}\theta.$$* Note that Jensen's formula can be used to estimate the number of zeros of a holomorphic function in an open disk. **Corollary 23**. *Let $0<r_{1}<r_{2}<r_{3}<\infty$ and $a\in \mathbb{C}$. Let $f$ be a holomorphic function in ${B}_{r_{3}}(a)$ with $f(a)\ne 0$, and let $a_{1}, a_{2},\dots,a_{N}$ be the zeros of $f$ in $B_{r_{1}}(a)$ $($repeated according to their respective multiplicity$)$. Then we have $$N\le \frac{\log M-\log |f(a)|}{\log r_{2}-\log r_{1}},\quad \mbox{where}\ M=\max_{|z-a|=r_{2}}|f(z)|.$$* Second, we recall Hadamard's three-circle theorem from [@Rudin Page 264]. **Lemma 24**. *Let $f$ be a holomorphic function in $B_{R}$ for $0<R<\infty$.
Let $M(r)$ be the maximum of $|f(z)|$ on the circle $|z|=r$ for $0<r<R$. Then we have $$\log\left(\frac{r_{3}}{r_{1}}\right)\log M(r_{2})\le \log\left(\frac{r_{3}}{r_{2}}\right)\log M(r_{1}) +\log\left(\frac{r_{2}}{r_{1}}\right)\log M(r_{3}),$$ for any three concentric circles of radii $0<r_{1}<r_{2}<r_{3}<R$.* Third, following [@FY Theorem 4.3], we recall the Remez-type inequality for holomorphic polynomials for further reference. **Lemma 25** ([@FY]). *Let $P(z)$ be a holomorphic polynomial of degree $N$. Let $E\subset B_{1}$. Then for any $\delta>0$, we have $$\sup_{B_{1}}\left|P(z)\right|\le \left(\frac{6e}{\mathcal{H}_{\delta}(E)}\right)^{\frac{N}{\delta}}\sup_{E}\left|P(z)\right|.$$ Here, $\mathcal{H}_{\delta}(E)$ denotes the $\delta$-dimensional Hausdorff content of $E$.* Now we recall some elementary properties of quasiconformal mappings; the presentation is close to [@AIM]. For a complex function $f$, we write the derivative as $$Df(z)h=\frac{\partial f}{\partial z}(z)h+\frac{\partial f}{\partial \bar{z}}(z)\overline{h},\quad \mbox{for}\ h\in \mathbb{C}.$$ The norm of the derivative and the Jacobian are given by $$|Df(z)|=\left|\frac{\partial f}{\partial z}(z)\right|+\left|\frac{\partial f}{\partial \bar{z}}(z)\right|\quad \mbox{and}\quad Jf(z)=\left|\frac{\partial f}{\partial z}(z)\right|^{2}-\left|\frac{\partial f}{\partial \bar{z}}(z)\right|^{2}.$$ **Definition 26**. Let $U$ and $V$ be open sets of $\mathbb{C}$ and take $K\ge 1$.
\(i\) An orientation-preserving mapping $f:U\to V$ is a $K$-quasiregular mapping if $$f\in W_{loc}^{1,2}(U)\quad \mbox{and}\quad \left|Df(z)\right|^{2}\le K Jf(z),\quad \mbox{for almost every}\ z\in U.$$ \(ii\) An orientation-preserving homeomorphism $f:U\to V$ is $K$-quasiconformal if $$f\in W_{loc}^{1,2}(U)\quad \mbox{and}\quad \left|Df(z)\right|^{2}\le K Jf(z),\quad \mbox{for almost every}\ z\in U.$$ Following [@AIM Chapter 16], we denote $$*=\begin{pmatrix} 0 &-1\\ 1 &0 \end{pmatrix}:{\mathbb{R} }^{2}\to {\mathbb{R} }^{2}\quad \mbox{and}\ **=-{\rm{Id}}.$$ For any solution $\phi$ of [\[equ:2Dell\]](#equ:2Dell){reference-type="eqref" reference="equ:2Dell"}, we find the field $(*A\nabla \phi)$ is curl-free, and thus, by the Poincaré lemma, there exists a Sobolev function $\psi\in W_{loc}^{1,2}(B_{4})$ such that $$\nabla \psi= *A\nabla \phi=\begin{pmatrix} 0 &-1\\ 1 &0 \end{pmatrix}A\nabla \phi.$$ Note that the function $\psi$ is unique up to an additive constant; it is called the $A$-harmonic conjugate of $\phi$. For any $a\in B_{1}$, we consider the complex function $f_{a}=\phi+i\psi_{a}$ where $\psi_{a}$ is the $A$-harmonic conjugate with $\psi_{a}(a)=0$. By an elementary computation and the definition of $\psi_{a}$, $$|Df_{a}|^{2}\le |\nabla \phi|^{2}+|\nabla \psi_{a}|^{2}\le \left(\Lambda+\Lambda^{-1}\right)\nabla \phi\cdot A\nabla \phi \le \left(\Lambda+\Lambda^{-1}\right)\nabla \phi\cdot (-*\nabla \psi_{a}).$$ Note that, from the definition of $*$, we find $$\nabla \phi\cdot (-*\nabla \psi_{a})=(\partial_{x}\phi)(\partial_{y}\psi_{a})-(\partial_{y}\phi)(\partial_{x}\psi_{a})= Jf_{a}.$$ Based on the above inequalities, we obtain $$|Df_{a}(z)|^{2}\le (\Lambda+\Lambda^{-1})Jf_{a}(z),\quad \mbox{for almost every}\ z\in B_{4},$$ which means that $f_{a}$ is a $(\Lambda+\Lambda^{-1})$-quasiregular mapping.
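The quasiregularity bound just obtained can be illustrated on a constant-coefficient example: with $A=\mathrm{diag}(2,\frac{1}{2})$ (so $\Lambda=2$, assuming the ellipticity normalization $\Lambda^{-1}|\xi|^{2}\le \xi\cdot A\xi\le \Lambda|\xi|^{2}$), $\phi(x,y)=x$ is a solution and $\nabla\psi=*A\nabla\phi=(0,2)$ gives the $A$-harmonic conjugate $\psi(x,y)=2y$. A numerical sketch checking $|Df|^{2}\le (\Lambda+\Lambda^{-1})Jf$ for $f=\phi+i\psi$, with the Wirtinger derivatives computed by finite differences (the sample points are arbitrary):

```python
Lam = 2.0  # ellipticity constant of A = diag(2, 1/2)

def f(x, y):
    return complex(x, 2.0 * y)  # f = phi + i*psi with phi = x, psi = 2y

eps = 1e-6
for (x, y) in [(0.0, 0.0), (0.3, -0.7), (1.1, 0.4)]:
    fx = (f(x + eps, y) - f(x - eps, y)) / (2 * eps)
    fy = (f(x, y + eps) - f(x, y - eps)) / (2 * eps)
    fz = 0.5 * (fx - 1j * fy)   # Wirtinger derivative d/dz
    fzb = 0.5 * (fx + 1j * fy)  # Wirtinger derivative d/dzbar
    norm_Df = abs(fz) + abs(fzb)
    Jf = abs(fz) ** 2 - abs(fzb) ** 2
    # Quasiregularity bound |Df|^2 <= (Lambda + 1/Lambda) * Jf:
    assert norm_Df ** 2 <= (Lam + 1.0 / Lam) * Jf + 1e-9
```

Here $f_z=\frac{3}{2}$ and $f_{\bar z}=-\frac{1}{2}$, so $|Df|^{2}=4$ and $(\Lambda+\Lambda^{-1})Jf=5$, consistent with the bound.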
It then follows from the representation theorem (see [@AEESAIM Section 2] and [@AIM Corollary 5.5.3]) that $f_{a}$ can be written as $$f_{a}=F\circ G,\quad \mbox{on}\ B_{2},$$ where $F$ is holomorphic in $B_{2}$, and $G$ is a $(\Lambda+\Lambda^{-1})$-quasiconformal homeomorphism from $B_{2}$ onto $B_{2}$ which verifies $G(0)=0$ and $$\label{est:Holder} C^{-1}|z_{2}-z_{1}|^{\frac{1}{\alpha}}\le |G(z_{2})-G({z_{1}})|\le C|z_{2}-z_{1}|^{\alpha},\ \ \mbox{when}\ (z_{1},z_{2})\in B_{2}\times B_{2},$$ for some constants $\alpha=\alpha(\Lambda)\in (0,1)$ and $C=C(\Lambda)>0$ depending only on $\Lambda$. Moreover, from [@AIM Corollary 5.9.2], we know that $G$ has a continuous homeomorphic extension to the boundary $\partial B_{2}$. Therefore, from [\[est:Holder\]](#est:Holder){reference-type="eqref" reference="est:Holder"}, there exists a constant $r=r(\Lambda)\in (0,2)$, depending only on $\Lambda$, such that $$\label{est:dist} 0<2-r\le {\rm{dist}}\left(G(B_{1}),\partial B_{2}\right)\Longrightarrow G(B_{1})\subset B_{r}.$$ For any $a\in B_{1}$, we consider the holomorphic automorphism of $B_{2}$: $$R_{a}(z)=4\frac{z-G(a)}{4-\overline{G(a)}z}: B_{2}\to B_{2}.$$ Based on [@AIM Theorem 3.1.2], we know that $f_{a}$ can be rewritten as $$\label{equ:fFaGa} f_{a}=\left(F\circ R_{a}^{-1}\right)\circ (R_{a}\circ G):=F_{a}\circ G_{a},\quad \mbox{on}\ B_{2},$$ where $F_{a}$ is holomorphic in $B_{2}$, and $G_{a}$ is a $(\Lambda+\Lambda^{-1})$-quasiconformal homeomorphism from $B_{2}$ onto $B_{2}$ which verifies $G_{a}(a)=0$. Moreover, from [\[est:Holder\]](#est:Holder){reference-type="eqref" reference="est:Holder"} and [\[est:dist\]](#est:dist){reference-type="eqref" reference="est:dist"}, $$\label{est:Ga} C^{-1}|z_{2}-z_{1}|^{\frac{1}{\alpha}}\le |G_{a}(z_{2})-G_{a}({z_{1}})|\le C|z_{2}-z_{1}|^{\alpha},\ \ \mbox{when}\ (z_{1},z_{2})\in B_{2}\times B_{2},$$ where $\alpha=\alpha(\Lambda)\in (0,1)$ and $C=C(\Lambda)>0$ are constants depending only on $\Lambda$.
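That $R_{a}$ indeed maps $B_{2}$ onto $B_{2}$ can be verified directly: for $|z|=2$ one has $4-\overline{G(a)}z=z(\bar{z}-\overline{G(a)})$, so $|R_{a}(z)|=2$, and clearly $R_{a}(G(a))=0$. A short numerical confirmation (the point standing in for $G(a)$ is an arbitrary choice):

```python
import cmath

def R(z, w):
    # R_a(z) = 4 (z - G(a)) / (4 - conj(G(a)) z), with w standing in for G(a).
    return 4 * (z - w) / (4 - w.conjugate() * z)

w = 0.6 - 0.9j               # arbitrary point of B_2
assert abs(R(w, w)) < 1e-12  # R_a sends G(a) to 0
for k in range(12):
    z = 2 * cmath.exp(2j * cmath.pi * k / 12)
    # |z| = 2 implies 4 - conj(w) z = z (conj(z) - conj(w)), hence |R(z, w)| = 2:
    assert abs(abs(R(z, w)) - 2) < 1e-12
```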
For any $a\in B_{1}$, using [\[est:Ga\]](#est:Ga){reference-type="eqref" reference="est:Ga"}, we know that [\[est:dist\]](#est:dist){reference-type="eqref" reference="est:dist"} is also true for $G_{a}$ with a possibly different constant $r=r(\Lambda)$, still depending only on $\Lambda$. The following proposition plays a crucial role in our proof of Theorem [Theorem 7](#thm:ZHU){reference-type="ref" reference="thm:ZHU"}. **Proposition 27**. *Let $\omega\subset B_{1}\cap \ell_{0}$ satisfy $|\omega|>0$ for some line $\ell_{0}\subset \mathbb{R}^{2}$. Then there exist $z_{0}\in \omega$ and some constants $\alpha=\alpha(\Lambda,|\omega|)\in (0,1)$ and $C=C(\Lambda,|\omega|)>0$, depending only on $\Lambda$ and $|\omega|$, such that for any $H^{1}_{loc}$ solution $\phi$ of [\[equ:2Dell\]](#equ:2Dell){reference-type="eqref" reference="equ:2Dell"} with its $A$-harmonic conjugate satisfying $\psi_{z_{0}}(z_{0})=0$, we have $$\sup_{B_{1}}|\phi|\le C\left(\sup_{\omega}|\phi+i\psi_{z_{0}}|^{\alpha}\right)\left(\sup_{B_{2}}|\phi|^{1-\alpha}\right).$$* *Proof.* From $\omega \subset B_{1}\cap \ell_{0}$ and [\[est:Ga\]](#est:Ga){reference-type="eqref" reference="est:Ga"}, there exist $r_{1}=r_{1}(|\omega|)\in (0,1)$ depending only on $|\omega|$ and $r_{2}=r_{2}(\Lambda,|\omega|)\in (0,\frac{1}{6})$ depending only on $\Lambda$ and $|\omega|$ such that $$\label{est:12omega} \frac{1}{2}|\omega|\le \left|\omega\cap B_{r_{1}}\right|\quad \mbox{and}\quad B_{6r_{2}}\subset \bigcap_{a\in B_{r_{1}}}G_{a}\left(B_{1}\right).$$ Moreover, from [\[est:Ga\]](#est:Ga){reference-type="eqref" reference="est:Ga"} and [\[est:12omega\]](#est:12omega){reference-type="eqref" reference="est:12omega"}, there exist $z_{0}\in \omega\cap B_{r_{1}}$ and $r_{3}=r_{3}(\Lambda,|\omega|)\in (0,1)$ depending only on $\Lambda$ and $|\omega|$ such that $$\label{est:omega} c|\omega|\le |\omega\cap B_{r_{3}}(z_{0})|\quad \mbox{and}\quad G_{z_{0}}\left(B_{r_{3}}(z_{0})\right)\subset B_{r_{2}},$$ where
$c=c(\Lambda,|\omega|)$ depends only on $\Lambda$ and $|\omega|$. We denote $\widetilde{\omega}=\omega\cap B_{r_{3}}(z_{0})$. Let $a_{1},a_{2},\dots,a_{N}$ be the zeros of $F_{z_{0}}$ in $B_{2r_{2}}$ $($repeated according to their respective multiplicity$)$. Let $a_{0}\in \partial B_{r_{2}}$ be such that $|F_{z_{0}}(a_{0})|$ is the maximum of $|F_{z_{0}}(z)|$ on $B_{r_{2}}$. We consider the following complex polynomial $P_{z_{0}}(z)$ and holomorphic non-vanishing function $h_{z_{0}}(z)$ on $B_{6r_{2}}$, $$P_{z_{0}}(z)=\prod_{n=1}^{N}(z-a_{n})\quad \mbox{and}\quad h_{z_{0}}(z)=\frac{F_{z_{0}}(z)}{P_{z_{0}}(z)}.$$ On the one hand, we denote by $\widetilde{N}$ the number of zeros of $F_{z_{0}}$ in $B_{3r_{2}}(a_{0})$. From $B_{2r_{2}}\subset B_{3r_{2}}(a_{0})$, $B_{\frac{7}{2}r_{2}}(a_{0})\subset B_{5r_{2}}$ and Corollary [Corollary 23](#coro:zero){reference-type="ref" reference="coro:zero"}, $$\label{est:N} \begin{aligned} N\le \widetilde{N} &\le \frac{1}{\log \frac{7}{6}}\left(\log \sup_{B_{\frac{7}{2}r_{2}}(a_{0})}|F_{z_{0}}|-\log |F_{z_{0}}(a_{0})|\right)\\ &\le \frac{1}{\log \frac{7}{6}} \left(\log \sup_{B_{5r_{2}}}|F_{z_{0}}|-\log\sup_{B_{r_{2}}} |F_{z_{0}}|\right). \end{aligned}$$ Then, using the definition of $h_{z_{0}}$ and $P_{z_{0}}$, we see that $$\label{est:F1} \sup_{B_{r_{2}}}|F_{z_{0}}|=|F_{z_{0}}(a_{0})|\le |h_{z_{0}}(a_{0})||P_{z_{0}}(a_{0})|\le (3r_{2})^{N}|h_{z_{0}}(a_{0})|.$$ Using the maximum modulus principle on $B_{5r_{2}}$, we find $$\label{est:h1} \sup_{B_{5r_{2}}}|h_{z_{0}}|\le \left(\sup_{\partial{B_{5r_{2}}}}|F_{z_{0}}|\right)\left(\sup_{\partial{B_{5r_{2}}}}|P_{z_{0}}^{-1}|\right)\le \left(\frac{1}{3r_{2}}\right)^{N} \sup_{{B_{5r_{2}}}}|F_{z_{0}}|.$$ Since $h_{z_{0}}$ is a holomorphic non-vanishing function, $\log |h_{z_{0}}|$ is a harmonic function on $B_{2r_{2}}$. 
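The zero-counting bound of Corollary [Corollary 23](#coro:zero){reference-type="ref" reference="coro:zero"} used in [\[est:N\]](#est:N){reference-type="eqref" reference="est:N"} can be illustrated numerically in the centered case $a=0$: for a polynomial with $N$ zeros in $B_{r_{1}}$, the quotient $(\log M-\log|f(0)|)/(\log r_{2}-\log r_{1})$ dominates $N$. A sketch (the zeros and radii below are arbitrary choices):

```python
import cmath, math

zeros = [0.1, -0.2, 0.05 - 0.1j]  # three zeros inside B_{r1}, r1 = 0.5

def f(z):
    p = 1
    for a_n in zeros:
        p *= z - a_n
    return p

r1, r2 = 0.5, 1.5
# M = max_{|z| = r2} |f(z)|, approximated on 1000 sample points of the circle:
M = max(abs(f(r2 * cmath.exp(2j * cmath.pi * k / 1000))) for k in range(1000))
bound = (math.log(M) - math.log(abs(f(0)))) / (math.log(r2) - math.log(r1))
assert len(zeros) <= bound  # N <= (log M - log|f(0)|) / (log r2 - log r1)
```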
From Harnack's inequality for harmonic functions, there exists a constant $C=C(\Lambda)>1$, depending only on $\Lambda$, such that $$\sup_{B_{r_{2}}}\left(\sup_{B_{5r_{2}}}\log |h_{z_{0}}|-\log |h_{z_{0}}(z)|\right)\le C\inf_{B_{r_{2}}}\left(\sup_{B_{5r_{2}}}\log |h_{z_{0}}|-\log |h_{z_{0}}(z)|\right).$$ It follows that $$|h_{z_{0}}(a_{0})|^{C}\sup_{B_{5r_{2}}}|h_{z_{0}}|\le \left(\sup_{B_{5r_{2}}}|h_{z_{0}}|^{C}\right) \left(\inf_{B_{r_{2}}}|h_{z_{0}}|\right).$$ Combining [\[est:F1\]](#est:F1){reference-type="eqref" reference="est:F1"} and [\[est:h1\]](#est:h1){reference-type="eqref" reference="est:h1"} with the above inequality, we obtain $$\label{est:F} \left(\sup_{B_{r_{2}}}|F_{z_{0}}|^{C}\right)\left(\sup_{B_{5r_{2}}}|h_{z_{0}}|\right)\le \left(\sup_{B_{5r_{2}}}|F_{z_{0}}|^{C}\right)\left(\inf_{B_{r_{2}}}|h_{z_{0}}|\right).$$ On the other hand, using Lemma [Lemma 25](#le:poly){reference-type="ref" reference="le:poly"}, we find $$\sup_{B_{r_{2}}}|P_{z_{0}}|\le \left(\frac{6e}{\mathcal{H}_{\alpha}(G_{z_{0}}(\widetilde{\omega}))}\right)^{\frac{N}{\alpha}}\sup_{G_{z_{0}}(\widetilde{\omega})}|P_{z_{0}}|.$$ From [\[equ:measu\]](#equ:measu){reference-type="eqref" reference="equ:measu"}, [\[est:Ga\]](#est:Ga){reference-type="eqref" reference="est:Ga"} and [\[est:omega\]](#est:omega){reference-type="eqref" reference="est:omega"}, there exists a constant $C_{1}=C_{1}(\Lambda)>0$, depending only on $\Lambda$, such that $$\label{est:P} c|\omega|\le |\widetilde{\omega}|\le C_{1}\mathcal{H}_{\alpha}(G_{z_{0}}(\widetilde{\omega}))\Longrightarrow \sup_{B_{r_{2}}}|P_{z_{0}}|\le \left(\frac{6C_{1}e}{c|{\omega}|}\right)^{\frac{N}{\alpha}}\sup_{G_{z_{0}}(\widetilde{\omega})}|P_{z_{0}}|.$$ Thanks to [\[est:omega\]](#est:omega){reference-type="eqref" reference="est:omega"}, [\[est:F\]](#est:F){reference-type="eqref" reference="est:F"} and [\[est:P\]](#est:P){reference-type="eqref" reference="est:P"}, we find $$\begin{aligned} \sup_{B_{r_{2}}}|F_{z_{0}}|^{1+C} &\le
\left(\sup_{B_{r_{2}}}|F_{z_{0}}|^{C}\right)\left(\sup_{B_{5r_{2}}}|h_{z_{0}}|\right)\left(\sup_{B_{r_{2}}}|P_{z_{0}}|\right)\\ &\le C^{N}_{2}\left(\sup_{B_{5r_{2}}}|F_{z_{0}}|^{C}\right)\left(\sup_{G_{z_{0}}({\widetilde{\omega}})}|P_{z_{0}}|\right) \left(\inf_{B_{r_{2}}}|h_{z_{0}}|\right)\\ &\le C^{N}_{2}\left(\sup_{B_{5r_{2}}}|F_{z_{0}}|^{C}\right)\left(\sup_{G_{z_{0}}({\widetilde{\omega}})}|F_{z_{0}}|\right), \end{aligned}$$ where $C_{2}=C_{2}(\Lambda,|\omega|)>1$ is a constant depending only on $\Lambda$ and $|\omega|$. Combining the above inequality with [\[est:N\]](#est:N){reference-type="eqref" reference="est:N"}, there exists a constant $C_{3}=C_{3}(\Lambda,|\omega|)>0$, depending only on $\Lambda$ and $|\omega|$, such that $$\sup_{B_{r_{2}}}|F_{z_{0}}|^{1+C+C_{3}}\le \left(\sup_{B_{5r_{2}}}|F_{z_{0}}|^{C+ C_{3} }\right)\left(\sup_{G_{z_{0}}({\widetilde{\omega}})}|F_{z_{0}}|\right),$$ and thus we have $$\sup_{B_{r_{2}}}|F_{z_{0}}|\le \left(\sup_{B_{5r_{2}}}|F_{z_{0}}|^{1-\alpha}\right) \left(\sup_{G_{z_{0}}(\widetilde{\omega})}|F_{z_{0}}|^{\alpha}\right),\quad \mbox{where}\ \alpha=\frac{1}{1+C+C_{3}}\in (0,1).$$ Therefore, from [\[est:dist\]](#est:dist){reference-type="eqref" reference="est:dist"} and Lemma [Lemma 24](#le:Hada){reference-type="ref" reference="le:Hada"}, there exists a constant $r=r(\Lambda)\in (0,2)$, depending only on $\Lambda$, such that $$\label{est:F11} \begin{aligned} \sup_{G_{z_{0}}(B_{1})}|F_{z_{0}}|\le \sup_{B_{r}}|F_{z_{0}}| &\le \left(\sup_{B_{r_{2}}}|F_{z_{0}}|^{\alpha_{1}} \right) \left( \sup_{B_{\frac{2+r}{2}}}|F_{z_{0}}|^{1-\alpha_{1}}\right)\\ &\le \left(\sup_{G_{z_{0}}(\widetilde{\omega})}|F_{z_{0}}|^{\alpha\alpha_{1}}\right) \left(\sup_{B_{\frac{2+r}{2}}}|F_{z_{0}}|^{1-\alpha\alpha_{1}}\right), \end{aligned}$$ where $\alpha_{1}=\alpha_{1}(\Lambda,|\omega|)\in (0,1)$ is a constant depending only on $\Lambda$ and $|\omega|$. 
From the definition of $F_{z_{0}}$, the holomorphic function $F_{z_{0}}$ can be written as $$\label{equ:Fz0} F_{z_{0}}=f_{z_{0}}\circ G_{z_{0}}^{-1}=\phi\circ G_{z_{0}}^{-1}+i\psi_{z_{0}}\circ G_{z_{0}}^{-1}.$$ Therefore, using the Cauchy-Riemann equations and $\psi_{z_{0}}\circ G_{z_{0}}^{-1}(0)=0$, $$\psi_{z_{0}}\circ G_{z_{0}}^{-1}(x,y)=\int_{0}^{1}(y,-x)\cdot\nabla(\phi\circ G_{z_{0}}^{-1})(x\sigma,y\sigma){\rm{d}}\sigma,\ \mbox{on}\ B_{2},$$ and thus, from interior estimates of derivatives of harmonic functions (see for instance [@GN Theorem 2.10]), there exists a constant $C_{4}=C_{4}(\Lambda)$, depending only on $\Lambda$, such that $$\sup_{B_{\frac{2+r}{2}}} \left|\psi_{z_{0}}\circ G_{z_{0}}^{-1} \right| \le 2\sup_{B_{\frac{2+r}{2}}} \left|\nabla (\phi\circ G_{z_{0}}^{-1}) \right|\le C_{4}\sup_{B_{\frac{6+r}{4}}} \left| \phi\circ G_{z_{0}}^{-1} \right|.$$ Hence, using again [\[equ:Fz0\]](#equ:Fz0){reference-type="eqref" reference="equ:Fz0"}, we find $$\label{est:F2} \begin{aligned} \sup_{B_{\frac{2+r}{2}}}|F_{z_{0}}| &\le \sup_{B_{\frac{2+r}{2}}}|\phi\circ G_{z_{0}}^{-1}|+ \sup_{B_{\frac{2+r}{2}}}|\psi_{z_{0}}\circ G_{z_{0}}^{-1}| \\ & \le (1+C_{4})\sup_{B_{\frac{6+r}{4}}} \left| \phi\circ G_{z_{0}}^{-1} \right| \le (1+C_{4})\sup_{B_{2}}|\phi|.
\end{aligned}$$ Combining [\[est:F11\]](#est:F11){reference-type="eqref" reference="est:F11"} and [\[est:F2\]](#est:F2){reference-type="eqref" reference="est:F2"} with $\widetilde{\omega}\subset \omega$, we conclude that $$\begin{aligned} \sup_{G_{z_{0}}(B_{1})}|F_{z_{0}}| &\le (1+C_{4})^{1-\alpha\alpha_{1}}\left(\sup_{G_{z_{0}}(\widetilde{\omega})}|F_{z_{0}}|^{\alpha\alpha_{1}}\right) \left(\sup_{B_{2}}|\phi|^{1-\alpha\alpha_{1}}\right)\\ &\le (1+C_{4})^{1-\alpha\alpha_{1}}\left(\sup_{G_{z_{0}}({\omega})}|F_{z_{0}}|^{\alpha\alpha_{1}}\right) \left(\sup_{B_{2}}|\phi|^{1-\alpha\alpha_{1}}\right), \end{aligned}$$ which completes the proof of Proposition [Proposition 27](#prop:ZHU){reference-type="ref" reference="prop:ZHU"} ◻ We are in a position to complete the proof of Theorem [Theorem 7](#thm:ZHU){reference-type="ref" reference="thm:ZHU"}. *End of the proof of Theorem [Theorem 7](#thm:ZHU){reference-type="ref" reference="thm:ZHU"}.* Let $z_{0}\in\omega$ be the point appearing in Proposition [Proposition 27](#prop:ZHU){reference-type="ref" reference="prop:ZHU"} and $\psi_{z_{0}}$ be the $A$-harmonic conjugate of $\phi$ satisfying $\psi_{z_{0}}(z_{0})=0$. Note that, from $A\nabla\phi\cdot\textbf{e}_{0}=0$ and the definition of $\psi_{z_{0}}$, we obtain $$\nabla \psi_{z_{0}} \cdot \textbf{e}_{0}^{\perp}= * A\nabla \phi \cdot \textbf{e}_{0}^{\perp}=A\nabla \phi \cdot (-* \textbf{e}_{0}^{\perp})=A\nabla \phi\cdot {\textbf{e}_{0}}=0.$$ It follows that $\psi_{z_{0}}(z)=\psi_{z_{0}}(z_{0})=0$ on $\ell_{0}$. 
Therefore, from $\omega\subset B_{1}\cap \ell_{0}$ and Proposition [Proposition 27](#prop:ZHU){reference-type="ref" reference="prop:ZHU"}, we conclude that there exist constants $\alpha=\alpha(\Lambda,|\omega|)\in (0,1)$ and $C=C(\Lambda,|\omega|)>0$, depending only on $\Lambda$ and $|\omega|$, such that $$\sup_{B_{1}}|\phi|\le C\left(\sup_{\omega}|\phi+i\psi_{z_{0}}|^{\alpha}\right)\left(\sup_{B_{2}}|\phi|^{1-\alpha}\right)\le C\left(\sup_{\omega}|\phi|^{\alpha}\right)\left(\sup_{B_{2}}|\phi|^{1-\alpha}\right).$$ The proof of Theorem [Theorem 7](#thm:ZHU){reference-type="ref" reference="thm:ZHU"} is complete. ◻

# Proof of Proposition [Proposition 14](#prop:heat){reference-type="ref" reference="prop:heat"} {#AppB}

That the spectral inequality implies backward observability for the heat equation is standard. Here we closely follow the strategy of [@BurqMoyanoJEMS Theorem 8 and Corollary 4.1], which builds upon the early work in [@AEWZJEMS]. The key point is the following interpolation estimate for solutions of the heat equation.

**Lemma 28**. *Let $\Omega$ be a $(1,\zeta)$-thick set and let $V\in C({\mathbb{R} })\cap L^{\infty}({\mathbb{R} })$ with $V\ge 1$. Then there exists a constant $C=C(V,\zeta)>0$, depending only on $V$ and $\zeta$, such that for any $\alpha\in (0,1)$, $0\le s<t<\infty$ and $f\in L^{2}({\mathbb{R} })$, we have $$\|e^{-tH}f\|_{L^{2}({\mathbb{R} })}\le Ce^{\frac{3}{\alpha(t-s)}}\|e^{-tH}f\|_{L^{2}(\Omega)}^{1-\alpha}\|e^{-sH}f\|_{L^{2}({\mathbb{R} })}^{\alpha}.$$*

*Proof.* First, for any $\mu>0$ and $0\le s<t<\infty$, we have $$\label{est:tH} \begin{aligned} \|e^{-tH}(1-\Pi_{\mu})f\|_{L^{2}({\mathbb{R} })}^{2} &=\int_{\mu}^{\infty}e^{-2t\lambda^{2}}({\rm{d}}m_{\lambda}f,f)_{L^{2}({\mathbb{R} })}\\ & \le e^{-2(t-s)\mu^{2}}\|e^{-sH}f\|_{L^{2}({\mathbb{R} })}^{2}.
\end{aligned}$$ It follows from Lemma [Lemma 12](#le:spectra){reference-type="ref" reference="le:spectra"} that $$\begin{aligned} \|e^{-tH}\Pi_{\mu}f\|_{L^{2}({\mathbb{R} })} &\le Ce^{3\mu}\left(\|e^{-tH}f\|_{L^{2}(\Omega)}+ \|e^{-tH}(1-\Pi_{\mu})f\|_{L^{2}({\mathbb{R} })} \right)\\ &\le Ce^{3\mu}\left( \|e^{-tH}f\|_{L^{2}(\Omega)}+e^{-(t-s)\mu^{2}} \|e^{-sH}f\|_{L^{2}({\mathbb{R} })} \right). \end{aligned}$$ From the AM-GM inequality, for any $\mu>0$, $\alpha\in (0,1)$ and $0\le s<t<\infty$, we obtain $$3\mu\le \alpha(t-s)\mu^{2}+\frac{3}{\alpha(t-s)}\Longrightarrow 3\mu- \alpha(t-s)\mu^{2}\le\frac{3}{\alpha(t-s)},$$ which implies $$\label{est:etHOR} \begin{aligned} \|e^{-tH}\Pi_{\mu}f\|_{L^{2}({\mathbb{R} })} &\le Ce^{\frac{3}{\alpha(t-s)} } e^{\alpha(t-s)\mu^{2}} \|e^{-tH}f\|_{L^{2}(\Omega)}\\ &+ Ce^{\frac{3}{\alpha(t-s)} } e^{-(1-\alpha)(t-s)\mu^{2}} \|e^{-sH}f\|_{L^{2}({\mathbb{R} })}. \end{aligned}$$ Now we choose $\mu \in(0,\infty)$ such that $$e^{(t-s)\mu^{2}}=\frac{ \|e^{-sH}f\|_{L^{2}({\mathbb{R} })}}{ \|e^{-tH}f\|_{L^{2}(\Omega)}}\in (1,\infty).$$ This is always possible since for $f\ne 0$, we find $$0<\|e^{-tH}f\|_{L^{2}(\Omega)}\le\|e^{-tH}f\|_{L^{2}({\mathbb{R} })}\le \|e^{-sH}f\|_{L^{2}({\mathbb{R} })}.$$ Therefore, from [\[est:tH\]](#est:tH){reference-type="eqref" reference="est:tH"} and [\[est:etHOR\]](#est:etHOR){reference-type="eqref" reference="est:etHOR"}, we conclude that $$\|e^{-tH}f\|_{L^{2}({\mathbb{R} })}\le (2C+1)e^{\frac{3}{\alpha(t-s)}}\|e^{-tH}f\|_{L^{2}(\Omega)}^{1-\alpha}\|e^{-sH}f\|_{L^{2}({\mathbb{R} })}^{\alpha}.$$ The proof of Lemma [Lemma 28](#le:inter){reference-type="ref" reference="le:inter"} is complete. ◻ We are in a position to complete the proof of Proposition [Proposition 14](#prop:heat){reference-type="ref" reference="prop:heat"}. *End of the proof of Proposition [Proposition 14](#prop:heat){reference-type="ref" reference="prop:heat"}.* We split the proof into the following two steps.
**Step 1.** Recursive estimates. First, from Lemma [Lemma 28](#le:inter){reference-type="ref" reference="le:inter"}, for $0<t_{1}<t_{2}\le T<\infty$, $t\in \left(\frac{t_{1}+t_{2}}{2},t_{2}\right)$ and $\alpha=\frac{1}{4}$, we have $$\begin{aligned} \|u(t_{2})\|_{L^{2}({\mathbb{R} })}\le \|u(t)\|_{L^{2}({\mathbb{R} })} &\le Ce^{\frac{12}{t-t_{1}}}\|u(t)\|_{L^{2}(\Omega)}^{\frac{3}{4}}\|u(t_{1})\|_{L^{2}({\mathbb{R} })}^{\frac{1}{4}}\\ &\le Ce^{\frac{24}{t_{2}-t_{1}}}\|u(t)\|_{L^{2}(\Omega)}^{\frac{3}{4}}\|u(t_{1})\|_{L^{2}({\mathbb{R} })}^{\frac{1}{4}}, \end{aligned}$$ where $C=C(V,\zeta)>2$ is a constant depending only on $V$ and $\zeta$. Integrating the above inequality over $\left(\frac{t_{1}+t_{2}}{2},t_{2}\right)$ and then using the Hölder inequality, we see that $$\|u(t_{2})\|_{L^{2}({\mathbb{R} })}^{2} \le C^{2}e^{\frac{48}{t_{2}-t_{1}}}\left(\frac{t_{2}-t_{1}}{2}\right)^{-\frac{3}{4}}\|u(t_{1})\|_{L^{2}({\mathbb{R} })}^{\frac{1}{2}}\left(\int_{t_{1}}^{t_{2}}\|u(t)\|_{L^{2}(\Omega)}^{2}{\rm{d}}t\right)^{\frac{3}{4}}.$$ It follows that $$\begin{aligned} \|u(t_{2})\|_{L^{2}({\mathbb{R} })}^{2}e^{-\frac{99}{t_{2}-t_{1}}} \le& C^{3} \left({t_{2}-t_{1}}\right)^{-\frac{3}{4}}\left(\|u(t_{1})\|_{L^{2}({\mathbb{R} })}^{2}e^{-\frac{198}{t_{2}-t_{1}}} \right)^{\frac{1}{4}} \\ &\times e^{-\frac{3}{2(t_{2}-t_{1})}}\left(\int_{t_{1}}^{t_{2}}\|u(t)\|_{L^{2}(\Omega)}^{2}{\rm{d}}t\right)^{\frac{3}{4}}\\ \le &\left(\|u(t_{1})\|_{L^{2}({\mathbb{R} })}^{2}e^{-\frac{198}{t_{2}-t_{1}}} \right)^{\frac{1}{4}}\left(C^{4}(t_{2}-t_{1}) \int_{t_{1}}^{t_{2}}\|u(t)\|_{L^{2}(\Omega)}^{2}{\rm{d}}t \right)^{\frac{3}{4}}. \end{aligned}$$ Based on the above inequality and Young's inequality, we conclude that $$\label{est:u2u1} \|u(t_{2})\|_{L^{2}({\mathbb{R} })}^{2}e^{-\frac{99}{t_{2}-t_{1}}} \le \frac{1}{4}\|u(t_{1})\|_{L^{2}({\mathbb{R} })}^{2}e^{-\frac{198}{t_{2}-t_{1}}} + C^{4}(t_{2}-t_{1}) \int_{t_{1}}^{t_{2}}\|u(t)\|_{L^{2}(\Omega)}^{2}{\rm{d}}t.$$ **Step 2.** Conclusion.
For $T>0$ and $n\in \mathbb{N}$, we set $$S_{n}=\frac{T}{2^{n}}\quad \mbox{and}\quad a_{n}=\left\|u(S_{n})\right\|_{L^{2}({\mathbb{R} })}^{2}e^{-\frac{99}{S_{n}-S_{n+1}}}.$$ By an elementary computation, $$S_{n+1}-S_{n+2} = \frac{S_{n}-S_{n+1}}{2} \Longrightarrow a_{n+1}=\left\|u(S_{n+1})\right\|_{L^{2}({\mathbb{R} })}^{2}e^{-\frac{198}{S_{n}-S_{n+1}}}.$$ Therefore, from [\[est:u2u1\]](#est:u2u1){reference-type="eqref" reference="est:u2u1"}, for any $n\in \mathbb{N}$, we have $$a_{n}\le \frac{1}{4}a_{n+1}+C^{4}T\int_{\frac{T}{2^{n+1}}}^{\frac{T}{2^{n}}}\|u(t)\|_{L^{2}(\Omega)}^{2}{\rm{d}}t,$$ which implies $$\|u(T)\|_{L^{2}({\mathbb{R} })}^{2}e^{-\frac{198}{T}} = a_{0}\le \frac{1}{4}a_{n}+C^{4}T\int_{0}^{T}\|u(t)\|_{L^{2}(\Omega)}^{2}{\rm{d}}t.$$ Using the fact that $a_{n}\to 0$ as $n\to\infty$, we complete the proof of Proposition [Proposition 14](#prop:heat){reference-type="ref" reference="prop:heat"}. ◻

G. Alessandrini and L. Escauriaza. Null-controllability of one-dimensional parabolic equations. *ESAIM Control Optim. Calc. Var.* **14** (2008), no. 2, 284--293. N. Anantharaman and F. Macià. Semiclassical measures for the Schrödinger equation on the torus. *J. Eur. Math. Soc. (JEMS)* **16** (2014), no. 6, 1253--1288. N. Anantharaman, M. Léautaud and F. Macià. Wigner measures and observability for the Schrödinger equation on the disk. *Invent. Math.* **206** (2016), no. 2, 485--599. N. Anantharaman and G. Rivière. Dispersion and controllability for the Schrödinger equation on negatively curved manifolds. *Anal. PDE* **5** (2012), no. 2, 313--338. J. Apraiz, L. Escauriaza, G. Wang and C. Zhang. Observability inequalities and measurable sets. *J. Eur. Math. Soc. (JEMS)* **16** (2014), no. 11, 2433--2475. K. Astala, T. Iwaniec and G. Martin. *Elliptic partial differential equations and quasiconformal mappings in the plane*. Princeton Mathematical Series, 48. Princeton University Press, Princeton, NJ, 2009. xviii+677 pp. C. Bardos, G. Lebeau and J. Rauch.
Sharp sufficient conditions for the observation, control, and stabilization of waves from the boundary. *SIAM J. Control Optim.* **30** (1992), no. 5, 1024--1065. J. Bourgain, N. Burq and M. Zworski. Control for Schrödinger operators on 2-tori: rough potentials. *J. Eur. Math. Soc. (JEMS)* **15** (2013), no. 5, 1597--1628. N. Burq and I. Moyano. Propagation of smallness and control for heat equations. *J. Eur. Math. Soc. (JEMS)* **25** (2023), no. 4, 1349--1377. N. Burq and I. Moyano. Propagation of smallness and spectral estimates. Preprint, arXiv:2109.06654. N. Burq and C. Sun. Time optimal observability for the Grushin Schrödinger equation. *Anal. PDE* **15** (2022), no. 6, 1487--1530. N. Burq and M. Zworski. Control in the presence of a black box. *J. Amer. Math. Soc.* **17** (2004), no. 2, 443--471. N. Burq and M. Zworski. Control for Schrödinger operators on tori. *Math. Res. Lett.* **19** (2012), no. 2, 309--324. N. Burq and M. Zworski. Rough controls for Schrödinger operators on 2-tori. *Ann. H. Lebesgue* **2** (2019), 331--347. E. B. Davies. *Spectral theory and differential operators.* Cambridge Studies in Advanced Mathematics, 42. Cambridge University Press, Cambridge, 1995. x+182 pp. C. Fermanian Kammerer and C. Letrouit. Observability and controllability for the Schrödinger equation on quotients of groups of Heisenberg type. *J. Éc. Polytech. Math.* **8** (2021), 1459--1513. O. Friedland and Y. Yomdin. $(s,p)$-valent functions. *Geometric aspects of functional analysis*, 123--136, Lecture Notes in Math., 2169, Springer, Cham, 2017. D. Gilbarg and N.S. Trudinger. *Elliptic partial differential equations of second order*. Reprint of the 1998 edition. Classics in Mathematics. Springer-Verlag, Berlin, 2001. xiv+517 pp. W. Green. On the energy decay rate of the fractional wave equation on ${\mathbb{R} }$ with relatively dense damping. *Proc. Amer. Math. Soc.* **148** (2020), no. 11, 4745--4753. S. Huang, G. Wang and M. Wang. 
Observable Sets, Potentials and Schrödinger Equations. *Comm. Math. Phys.* **395** (2022), no. 3, 1297--1343. S. Jaffard. Contrôle interne exact des vibrations d'une plaque rectangulaire. *Portugal. Math.* **47** (1990), no. 4, 423--429. L. Jin. Control for Schrödinger equation on hyperbolic surfaces. *Math. Res. Lett.* **25** (2018), no. 6, 1865--1877. A. Koenig. Non-null-controllability of the Grushin operator in 2D. *C. R. Math. Acad. Sci. Paris* **355** (2017), no. 12, 1215--1235. O. Kovrijkine. Some results related to the Logvinenko--Sereda theorem. *Proc. Amer. Math. Soc.* **129** (2001), no. 10, 3037--3047. K. Le Balc'h and J. Martin. Observability estimates for the Schrödinger equation in the plane with periodic bounded potentials from measurable sets. Preprint, arXiv:2304.08050. G. Lebeau. Contrôle de l'équation de Schrödinger. *J. Math. Pures Appl.* **(9)** 71 (1992), no. 3, 267--291. G. Lebeau and I. Moyano. Spectral inequalities for the Schrödinger operator. To appear in *Anal. PDE*. arXiv:1901.03513. G. Lebeau and L. Robbiano. Stabilisation de l'équation des ondes par le bord. *Duke Math. J.* **86** (1997), no. 3, 465--491. J.-L. Lions. *Contrôlabilité exacte, stabilisation et perturbations de systemes distribués*. Tome 1. Contrôlabilité exact. Recherches en Mathématiques Appliquées, 8. Masson, Paris, 1988. x+541 pp. A. Logunov, E. Malinnikova, N. Nadirashvili and F. Nazarov. The Landis conjecture on exponential decay. Preprint, arXiv:2007.07034. K.-D. Phung. Observability and control of Schrödinger equations. *SIAM J. Control Optim.* **40** (2001), no. 1, 211--230. A. Prouff. Observability of the Schrödinger equation with subquadratic confining potential in the Euclidean space. Preprint, arXiv:2307.00839. W. Rudin. *Real and complex analysis.* Third edition. McGraw-Hill Book Co., New York, 1987. xiv+416 pp. M. Tucsnak and G. Weiss. *Observation and Control for Operator Semigroups.* Birkhäuser Advanced Texts: Basler Lehrbücher. 
Birkhäuser Verlag, Basel, 2009. xii+483 pp. G. Wang, M. Wang and Y. Zhang. Observability and unique continuation inequalities for the Schrödinger equation. *J. Eur. Math. Soc. (JEMS)* **21** (2019), no. 11, 3513--3572. Y. Zhu. Remarks on propagation of smallness for solutions of elliptic equations in the plane. Preprint, arXiv: 2304.09800. M. Zworski. *Semiclassical analysis*. Graduate Studies in Mathematics, **138**. American Mathematical Society, Providence, RI, 2012. xii+431 pp.
{ "id": "2309.00963", "title": "Quantitative observability for one-dimensional Schr\\\"odinger equations\n with potentials", "authors": "Pei Su, Chenmin Sun, Xu Yuan", "categories": "math.AP math.OC", "license": "http://creativecommons.org/licenses/by/4.0/" }
---
abstract: |
  In this note, we discuss and motivate the use of the terminology "median graphs" in place of "CAT(0) cube complexes" in geometric group theory.
author:
- Anthony Genevois
bibliography:
- CCvsMedian.bib
title: Why CAT(0) cube complexes should be replaced with median graphs
---

A *CAT(0) cube complex* refers to a cube complex (i.e. a cellular complex obtained by gluing cubes of various dimensions along faces) whose length metric, obtained by extending the Euclidean metrics on its cubes (by convention with sides of length one), is CAT(0). They first appeared in [@MR0919829], where they are introduced as a convenient source of CAT(0) spaces. Indeed, while it is in general a difficult task to determine whether or not a given geodesic metric space is CAT(0), CAT(0) cube complexes can be identified thanks to a simple local criterion:

**Theorem 1** ([@MR0919829; @MR3029427]). *A cube complex is CAT(0) if and only if it is simply connected and all its vertices have flag simplicial links.*

Roughly speaking, the *link* of a vertex corresponds to a small sphere around it endowed with the complex structure induced by the cubical structure of the whole complex. Thus, vertices (resp. edges, triangles, $n$-simplices) in the link correspond to edges (resp. squares, $3$-cubes, $(n+1)$-cubes) in the cube complex. A simplicial complex is *flag* if any set of pairwise adjacent vertices spans a simplex. A typical example where the local criterion provided by Theorem [Theorem 1](#thm:Local){reference-type="ref" reference="thm:Local"} fails is an $n$-cube minus its $n$-cell (so, topologically, an $(n-1)$-sphere). An important feature of CAT(0) cube complexes is highlighted in [@MR1347406]: CAT(0) cube complexes admit natural "codimension-one separating subspaces", called *hyperplanes*. The graph-metric on the one-skeleton of a CAT(0) cube complex, which is quasi-isometric to the CAT(0) metric in the finite-dimensional case, turns out to be tightly connected to hyperplanes.
For instance, a combinatorial path between vertices in the one-skeleton is a geodesic (with respect to the graph-metric) if and only if it crosses each hyperplane at most once. As a consequence, the graph-distance between two vertices coincides with the number of hyperplanes separating them. Since then, the combinatorics of hyperplanes, and consequently the graph-metric, has been a central tool in the study of CAT(0) cube complexes. Interestingly, the one-skeletons of CAT(0) cube complexes coincide with *median graphs*, a well-known family of graphs, formally introduced in the 1960s [@MR0125807; @MR0286705] but with even older roots (see for instance [@MR2405677; @MR2798499] and references therein).

**Theorem 2** ([@MR1663779; @MR1748966; @Roller]). *The one-skeleton of a CAT(0) cube complex is a median graph. Conversely, the cube-completion of a median graph (i.e. the cube complex obtained by filling with cubes all the subgraphs isomorphic to one-skeletons of cubes) is a CAT(0) cube complex.*

Recall that a graph is *median* if, for any three vertices $x_1,x_2,x_3$, there exists a unique vertex $m$ (the *median point*) that lies at the intersection of three geodesics connecting $x_1,x_2,x_3$, i.e. satisfying $d(x_i,x_j) = d(x_i,m)+d(m,x_j)$ for all distinct $i,j \in \{1,2,3\}$. Despite the fact that CAT(0) cube complexes and median graphs essentially define the same objects, and the fact that CAT(0) cube complexes are now mainly studied through their one-skeletons, the terminology used in geometric group theory remains "CAT(0) cube complexes" in the literature. In this note, we would like to motivate the use of "median graphs" instead.

#### CAT(0) geometry is not necessary.

Very few articles actually use CAT(0) geometry in an essential way to prove results about CAT(0) cube complexes. Instead, they mainly use the graph-metric and the combinatorics of hyperplanes.
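Since the median condition recalled above is a finite, purely metric condition, it can be checked by brute force on small graphs. The following sketch is purely illustrative (the encoding and function names are ours, not taken from the references): it confirms that the one-skeleton of the $3$-cube is median, while the $6$-cycle is not.

```python
from collections import deque
from itertools import combinations

def bfs_dist(adj, src):
    """Shortest-path distances from src in a connected unweighted graph."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def is_median(adj):
    """A graph is median if every triple x, y, z admits a unique vertex m
    with d(a, b) = d(a, m) + d(m, b) for every pair a, b of the triple."""
    d = {v: bfs_dist(adj, v) for v in adj}
    for x, y, z in combinations(adj, 3):
        medians = [m for m in adj
                   if d[x][m] + d[m][y] == d[x][y]
                   and d[x][m] + d[m][z] == d[x][z]
                   and d[y][m] + d[m][z] == d[y][z]]
        if len(medians) != 1:
            return False
    return True

# One-skeleton of the 3-cube: vertices are 3-bit integers, edges flip one bit.
cube = {v: [v ^ (1 << i) for i in range(3)] for v in range(8)}
# The 6-cycle: the triple {0, 2, 4} admits no median point.
hexagon = {v: [(v - 1) % 6, (v + 1) % 6] for v in range(6)}

print(is_median(cube))     # True
print(is_median(hexagon))  # False
```

On the $3$-cube (and more generally on any hypercube) the unique median of a triple is the coordinatewise majority vote, in line with the hyperplane description of the graph-metric above.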
Nevertheless, before replacing "CAT(0) cube complexes" with "median graphs", at the very least we have to verify that the central results about the structure of CAT(0) cube complexes and about group actions used in geometric group theory can be easily and naturally proved by using the point of view of median graphs. This approach is the leitmotiv of the project [@Book]. Personally, I am not aware of a single theorem about CAT(0) cube complexes that cannot be naturally proved thanks to median graphs.

#### CAT(0) geometry is not easily accessible.

Many group actions on CAT(0) cube complexes are obtained by cubulating a space with walls (e.g. Coxeter groups [@MR1983376], small cancellation groups [@MR2053602], hyperbolic $3$-manifold groups [@MR2931226], one-relator groups with torsion [@MR3118410]); by taking the universal cover of a nonpositively curved cube complex (e.g. right-angled Artin groups [@MR1368655], groups of alternating links [@MR1707529], graph braid groups [@MR2701024]); by directly constructing the cube complex (e.g. graph products [@MR1389635], diagram groups [@MR1978047], Artin groups of infinite type [@MR3993762], Cremona groups [@MR4340723], Neretin groups [@Neretin]); or by finding a commensurating action (e.g. groups of local similarities [@MR2486801], some topological full groups [@MR3377390], Neretin groups [@Boudec], birational groups [@MR4014636], piecewise smooth homeomorphism groups [@MR4100128], Grigorchuk groups [@GrigorCC]). In all these cases, computing the CAT(0) distance between two arbitrary points, or even vertices, is often out of reach. The graph-distance between two vertices, however, may be computable and even meaningful. For right-angled Artin groups, the median graphs on which they act are the Cayley graphs given by their canonical generating sets. Therefore, the distance between vertices coincides with the corresponding word-distance.
For Coxeter groups, the median graphs are obtained by cubulating the canonical wallspace structures on their Cayley graphs. Since the distance in the median graph of two points coming from the wallspace coincides with the number of walls separating them, it follows that the distance in the median graph of two points coming from the Coxeter group agrees with the word-distance.

As a more interesting example, consider the group $\mathrm{PDiff}(\mathbb{S}^1)$ of piecewise differentiable homeomorphisms $\mathbb{S}^1 \to \mathbb{S}^1$. We fix an orientation on $\mathbb{S}^1$, and, for all $g \in \mathrm{PDiff}(\mathbb{S}^1)$ and $x \in \mathbb{S}^1$, we denote by $g'(x^-)$ (resp. $g'(x^+)$) the left-derivative (resp. right-derivative) of $g$ at $x$. One can make $\mathrm{PDiff}(\mathbb{S}^1)$ act on $\mathbb{S}^1 \times (0,+ \infty)$ via $$g \cdot (x,r) := \left( g(x), g'(x^-)rg'(x^+)^{-1} \right), \ g \in \mathrm{PDiff}(\mathbb{S}^1), (x,r) \in \mathbb{S}^1 \times (0,+\infty).$$ A key observation is that $\mathrm{PDiff}(\mathbb{S}^1)$ commensurates $S:= \mathbb{S}^1 \times \{1\}$, i.e. the symmetric difference $S \triangle gS$ is finite for every $g \in \mathrm{PDiff}(\mathbb{S}^1)$. This follows from the observation that $$S \backslash g^{-1}S = \{ (x,1) \mid x \in \mathbb{S}^1 \text{ such that } g'(x^-) \neq g'(x^+) \}$$ for every $g \in \mathrm{PDiff}(\mathbb{S}^1)$. As a consequence, $\mathrm{PDiff}(\mathbb{S}^1)$ admits an action on a median graph $X$ such that, for some vertex $o \in X$, the equalities $$d_X(o,g \cdot o ) = \#\left( S \triangle gS \right) = 2 \# \underset{=: \mathrm{Sing}(g)}{\underbrace{\{ \text{points at which $g$ is not differentiable} \} }}$$ hold for every $g \in \mathrm{PDiff}(\mathbb{S}^1)$.
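For completeness, one can check that the displayed formula really does define an action; this routine verification is not spelled out in the references, so we restrict to orientation-preserving elements, for which the one-sided chain rule gives $(g \circ h)'(x^{\pm}) = g'(h(x)^{\pm})\, h'(x^{\pm})$:

```latex
\begin{aligned}
g \cdot \big( h \cdot (x,r) \big)
&= \Big( g(h(x)),\ g'(h(x)^-)\, h'(x^-)\, r\, h'(x^+)^{-1}\, g'(h(x)^+)^{-1} \Big) \\
&= \Big( (g \circ h)(x),\ (g \circ h)'(x^-)\, r\, (g \circ h)'(x^+)^{-1} \Big)
 = (gh) \cdot (x,r),
\end{aligned}
```

where the one-sided derivatives and $r$, being positive real numbers, commute freely in the second coordinate.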
Because every isometry of a median graph with unbounded orbits admits (up to taking a convenient subdivision) a bi-infinite geodesic on which it acts as a translation, it follows that, for every $g \in \mathrm{PDiff}(\mathbb{S}^1)$, either $(\# \mathrm{Sing}(g^n))_{n \geq 1}$ is bounded or there exists a positive integer $K$ such that $$\# \mathrm{Sing}(g^n)= \frac{K}{2} \cdot n + O(1).$$ The constant $K$ corresponds to the translation length of $g$ in the median graph.

In the same spirit, consider, given a field $k$, the *$d$-dimensional Cremona group $\mathrm{Cr}_k^d$ over $k$*, i.e. the group of birational transformations $\mathbb{P}_k^d \dashrightarrow \mathbb{P}_k^d$. In [@MR4340723], a median graph on which $\mathrm{Cr}_k^d$ acts is explicitly constructed. The distance between vertices can be computed [@MR4340723 Lemma 4.11], and, combined with some median geometry, a linear lower bound on the asymptotic growth of $(\mathrm{deg}(f^n))_n$ can be deduced for non-pseudo-regularisable birational transformations $f : \mathbb{P}_k^d \dashrightarrow \mathbb{P}_k^d$. (Recall that $f$ has *degree $m$* if one can write $f : [x_0,\ldots, x_d] \dashrightarrow [g_0, \ldots, g_d]$ for homogeneous polynomials $g_0, \ldots, g_d$ of degree $m$ without a non-constant common factor.)

In the same way that distances with respect to the CAT(0) metric are usually not computable, the solutions known for various algorithmic problems in CAT(0) groups, such as the word and conjugacy problems, do not provide explicit procedures in practice. However, in some cubulable groups for which the structure of the median graph is tightly connected to the algebraic structure of the group (e.g. when the median graph is a Cayley graph), it may be possible to exploit the median geometry in order to explicitly solve some algorithmic problems. This is illustrated, for instance, by the solution of the conjugacy problem in cactus groups in [@Cactus].

#### Median geometry may be enlightening.
Using the point of view of median graphs may also lead to the introduction of new concepts, which would not have been accessible from CAT(0) geometry. This can be illustrated by the *Roller boundary*, which can be described as follows [@MR4071367]. Let $X$ be a countable median graph. Fix a basepoint $o \in X$. The Roller boundary $\mathfrak{R}X$ is the graph

- whose vertices are classes of geodesic rays starting from $o$, where two rays are equivalent whenever they cross exactly the same hyperplanes;

- whose edges connect two (classes of) rays whenever there exists exactly one hyperplane that is crossed by one but not by the other.

The Roller boundary is usually not connected, but each connected component turns out to be a median graph. Thus, one gets the same structure on the space and on its boundary[^1], contrasting with visual boundaries of CAT(0) spaces. This property may be useful in some inductive arguments: if a group acts on a median graph $X$ (of finite cubical dimension) and stabilises a component $Y$ of $\mathfrak{R}X$, then it acts on a new median graph $Y$ (of smaller cubical dimension). See for instance [@MR4062290] for an application of this idea.

Let us illustrate how the Roller boundary may be useful, and conceptually satisfying, by sketching a proof of the Tits alternative extracted from [@MR2827012][^2]. So let $G$ be a group acting on a median graph $X$ with finite cubical dimension. We claim that $G$ either contains a non-abelian free subgroup or is virtually (locally $X$-elliptic)-by-(free abelian).

- A hyperplane $J$ is *$G$-flippable* if there exists $g \in G$ such that $gD \subset D^c$ for some halfspace $D$ delimited by $J$. By an easy ping-pong argument, we know that, if $G$ acts on a median graph with a facing triple (i.e. three hyperplanes none of which separates the other two) of $G$-flippable hyperplanes, then $G$ contains a non-abelian free subgroup.
- If some hyperplane $J$ is not $G$-flippable, then, fixing a halfspace $D$ delimited by $J$, the halfspaces $gD$ with $g \in G$ pairwise intersect. The intersection $\bigcap_{g \in G} gD$ either is non-empty, providing a $G$-invariant convex subgraph; or accumulates to the Roller boundary, providing a $G$-invariant component of $\mathfrak{R}X$. By iterating the argument, one finds a $G$-invariant convex subgraph $Y$ in $\overline{X}:= X \cup \mathfrak{R}X$ all of whose hyperplanes are $G$-flippable.

- If $Y$ contains a facing triple, then we know from the first item that $G$ contains a non-abelian free subgroup. Otherwise, we can show that $Y$ must have only finitely many components in its Roller boundary (like the infinite grid $\mathbb{Z}^n$), which implies that $G$ virtually fixes a point $\xi$ in $\mathfrak{R}X$.

- Let $Z_1, \ldots, Z_k$ denote the components of $\mathfrak{R}X$ containing $\xi$ or containing $\xi$ in their boundaries. We can prove that there are only finitely many such components. Because $G$ virtually fixes $\xi$, it contains a finite-index subgroup $H$ stabilising each $Z_i$. For every $1 \leq i \leq k$, we can define a morphism $H \to \mathbb{Z}$ by $$\mathfrak{h}_i : h \mapsto |\mathcal{W}_i\backslash h \mathcal{W}_i|- |h \mathcal{W}_i \backslash \mathcal{W}_i|, \ \mathcal{W}_i:= \{ \text{hyperplanes separating $o$ from $Z_i$} \}$$ where $o \in X$ is a fixed basepoint. Finally, we prove that all the elements in the kernel of $\bigoplus_i \mathfrak{h}_i : H \to \mathbb{Z}^k$ are elliptic.

#### CAT(0) geometry may be inefficient.

Among the results coming from CAT(0) geometry that are sometimes applied to CAT(0) cube complexes, we can mention the structure of minimising sets of loxodromic isometries [@MR1744486 Proposition II.6.2] and the flat torus theorem [@MR1744486 Theorem II.7.1].
Most of the time, the cube complex is supposed to be finite-dimensional in order to avoid parabolic isometries (which may exist in infinite-dimensional CAT(0) cube complexes; see for instance [@MR3198728] for a natural example, though parabolic isometries can already be found in more elementary examples, such as the lamplighter group $\mathbb{Z} \wr \mathbb{Z}$). However, this assumption is often not necessary in the results thus obtained. This is essentially due to the fact that, up to a subdivision, an isometry of a median graph either fixes a vertex (*elliptic*) or acts as a translation on some bi-infinite geodesic (*loxodromic*) [@Axis]. This dichotomy holds regardless of the dimension of the underlying cube complex.

As an example, consider the mapping class group $\mathrm{MCG}_g$ of a closed surface of genus $g \geq 3$. It is known that $\mathrm{MCG}_g$ cannot act properly on a CAT(0) space by semisimple isometries [@MR1411351], and, more precisely, that Dehn twists must be elliptic for every action of $\mathrm{MCG}_g$ on a CAT(0) space by semisimple isometries [@MR2665003]. Because there are no parabolic isometries in finite-dimensional CAT(0) cube complexes, one obtains severe restrictions on possible actions of $\mathrm{MCG}_g$ on finite-dimensional CAT(0) cube complexes[^3]. It turns out that, by following the same arguments but with a median perspective, the same conclusions can be obtained without any restriction on the dimension: Dehn twists are elliptic for every action of $\mathrm{MCG}_g$ on a median graph [@MR4574362]. The argument goes as follows. Let $G$ be an arbitrary group acting on a median graph $X$. Assume that $G$ contains a central element $z$ having unbounded orbits in $X$.

- Up to subdividing $X$, we assume that $z$ admits an axis $\gamma$. For every $g\in G$, $g \gamma$ is an axis for $gzg^{-1}=z$.
But different axes for the same element cross exactly the same hyperplanes, which implies that all the $g \gamma$ have a point at infinity in the same component $Y$ of the Roller boundary of $X$. This component must be stabilised by $G$. - Then we get a morphism $\Theta : G \to \mathbb{Z}$ defined by $$g \mapsto |\mathcal{W}(o|Y) \backslash \mathcal{W}(go|Y)| - |\mathcal{W}(go|Y) \backslash \mathcal{W}(o|Y)|,$$ where $\mathcal{W}(\cdot | \cdot)$ denotes the set of the hyperplanes separating the two subsets under consideration and where $o \in X$ is a fixed basepoint. - By taking $o$ on $\gamma$, one sees that $\Theta(z)$ coincides with the translation length of $z$. In particular, $\Theta(z) \neq 0$. Thus, we have proved that the central element $z$ survives in some quotient $G \twoheadrightarrow \mathbb{Z}$. Now, let $\mathrm{MCG}_g$ act on a median graph and let $\tau$ be a Dehn twist along a simple closed curve $c$. If $\tau$ has unbounded orbits in the median graph, then it follows from what we have just said that the centraliser $C(\tau)$ of $\tau$ must surject onto $\mathbb{Z}$ such that $\tau$ has a non-trivial image. But the mapping class group of the surface cut along $c$ surjects onto $C(\tau)$ and has finite abelianisation. So $\tau$ must have bounded orbits. As another example of how median graphs can be used to obtain more efficient statements, consider the local criterion provided by Theorem [Theorem 1](#thm:Local){reference-type="ref" reference="thm:Local"}. Given a cube complex $X$ and a vertex $v \in X$, for all neighbours $v_1, \ldots, v_n \in X$ such that the edges $[v,v_1], \ldots, [v,v_n]$ pairwise span squares, we need to verify that $[v,v_1], \ldots, [v,v_n]$ span an $n$-cube. Therefore, we need to understand, at least a little bit, the ball centred at $v$ of radius $n$. In fact, from the point of view of median graphs, it suffices to investigate balls of radius $3$: **Theorem 3**. *[^4] Let $X$ be a (simplicial) graph. 
Then $X$ is median if and only if the following conditions hold:* - *the square-completion of $X$ (i.e. the square complex obtained from $X$ by filling in all the $4$-cycles) is simply connected;* - *for every vertex $v \in X$ and all neighbours $a,b \in X$, the edges $[v,a],[v,b]$ span at most one $4$-cycle;* - *for every vertex $v \in X$ and all neighbours $a,b,c \in X$, if the edges $[v,a],[v,b],[v,c]$ pairwise span a $4$-cycle then they globally span (the one-skeleton of) a $3$-cube.* For explicit constructions of median graphs, this may significantly simplify the verification of the local criterion. (For instance, we applied this criterion in [@ARMCG2].) This observation may also be useful in cubulating quotients, since the local criterion is preserved under quotients by subgroups with large injectivity radius, even if the cube complex is infinite-dimensional [@FixedPoint]. #### Median geometry is conceptually important. Recently, median geometry and its related combinatorics of hyperplanes have been adapted in several directions. Variations around median geometry include coarse median spaces [@MR3037559], as well as the subfamilies given by hierarchically hyperbolic spaces [@MR3650081; @MR3956144] and locally quasi-cubical spaces [@LQC]. They explore similarities between median graphs and mapping class groups. The machinery of hyperplanes has also found applications beyond the scope of median graphs, including CAT(0) spaces themselves [@HypModel; @ZSurvey]. All these developments are fairly recent and remain to be investigated further, but they highlight the conceptual potential of median geometry and its related concepts. #### Median graphs or median cube complexes? So far, our goal was to promote the use of "median graphs" instead of "CAT(0) cube complexes". However, another alternative, not mentioned so far, could be "median cube complexes", in order to keep the higher-dimensional cellular structure.
Here, a *median cube complex* would refer to a cube complex such that the length metric extending the $\ell^1$-metrics on each cube (whose sides, by convention, have length one) is median. Even in the study of median graphs, cubes are fundamental. (It is understood that, when dealing with graphs, a cube always refers to a (sub)graph isomorphic to a product of pairs of adjacent edges (in other words, to the one-skeleton of a topological cube).) For instance: - In a median graph, every group of isometries with bounded orbits stabilises a cube. - Given a median graph of finite cubical dimension, there is a natural bijection between the maximal cubes and the maximal collections of pairwise transverse hyperplanes. - Median graphs coincide with retracts of infinite cubes. Moreover, cube-completions of median graphs are necessary to study topological and cohomological properties of groups, such as finiteness properties (especially when combined with Morse theory [@MR1465330]). They are also useful to recognise or construct examples of groups satisfying fixed-point properties, because the Helly property can be applied to fixed-point sets in cube-completions (which are contractible). Another argument in favour of keeping actual cubes in our spaces comes from the fact that, as already mentioned, some group actions on median graphs are constructed by taking the universal covers of cube complexes satisfying the local criterion given by Theorem [Theorem 1](#thm:Local){reference-type="ref" reference="thm:Local"} (which could be called *locally median cube complexes*). On the other hand, many other group actions are obtained by constructing the median graph directly. Then, adding the cubes would seem inefficient (for instance, the action would not be minimal, since the one-skeleton would provide an invariant median subspace). Moreover, as mentioned in §2, it is the distances between vertices that carry the interesting information.
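This distance-based viewpoint is easy to make concrete: the median of a triple of vertices can be computed purely from graph distances, with no reference to any cube structure. A minimal sketch (the adjacency-list encoding and helper names below are ours, not from any library):

```python
from collections import deque

def distances(adj, src):
    """BFS distances from src in an unweighted graph (adjacency dict)."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def medians(adj, x, y, z):
    """Vertices lying on the metric interval between each pair among x, y, z.
    A graph is median exactly when this set is a singleton for every triple."""
    dx, dy, dz = distances(adj, x), distances(adj, y), distances(adj, z)
    return [m for m in adj
            if dx[m] + dy[m] == dx[y]
            and dy[m] + dz[m] == dy[z]
            and dx[m] + dz[m] == dx[z]]

# The one-skeleton of the 3-cube: vertices are bit-triples, edges flip one bit.
verts = [(i, j, k) for i in (0, 1) for j in (0, 1) for k in (0, 1)]
cube = {u: [v for v in verts if sum(a != b for a, b in zip(u, v)) == 1]
        for u in verts}
print(medians(cube, (0, 0, 0), (1, 1, 0), (1, 0, 1)))  # [(1, 0, 0)]
```

On the $3$-cube the median of a triple is the bitwise majority, as the check above illustrates; no cubical filling enters the computation.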
In conclusion, even though CAT(0) geometry is not really relevant in the study of CAT(0) cube complexes, it is not always desirable, and sometimes not even pertinent at all, to avoid the cube complex structure. This is why both median graphs and their cube-completions (which can be referred to as *median cube complexes*) have to be considered. #### CAT(0) geometry somewhere? In order to justify the replacement of CAT(0) cube complexes with median graphs, we do not have to prove that CAT(0) geometry will never be relevant. Actually, there may (and probably do) exist a few specific cases where CAT(0) geometry on cube complexes is superior from some perspective. But it suffices to show, as attempted in this note, that median geometry is more natural, efficient, and satisfying in most cases. Also, our comments here deal with geometric group theory only. In other contexts, the CAT(0) geometry may be the most relevant. For instance, this is the case in applications to robotics. Indeed, state spaces of some robots often turn out to be CAT(0) cube complexes (see for instance [@MR2301699]). Moving such a robot efficiently amounts to finding a geodesic between two states in the cube complex with respect to the CAT(0) metric. Considering the median metric here would not be reasonable. For instance, given an articulated arm, moving one articulation and then another could define a median-geodesic, while a CAT(0)-geodesic would require moving the two articulations simultaneously (which corresponds to what we have in mind as an efficient move). [University of Montpellier\ Institut Mathématiques Alexander Grothendieck\ Place Eugène Bataillon\ 34090 Montpellier (France)]{.smallcaps} *E-mail address*: `anthony.genevois@umontpellier.fr` [^1]: Even better, the components are organised along a median graph! More precisely, let $X$ be a median graph.
Consider the Hasse diagram $\mathfrak{C}X$ of the poset whose elements are the components of the Roller completion $\overline{X}:= X \cup \mathfrak{R}X$, and whose order is defined as follows. Given two components $Y,Z$ of $\overline{X}$, $Y \leq Z$ if $Y$ is contained in the image of the Roller completion $\overline{Z}$ of $Z$ in $\overline{X}$. Then every connected component of $\mathfrak{C}X$ is a median graph. If $X$ has finite cubical dimension, $\mathfrak{C}X$ is connected and bounded. [^2]: Here, we follow [@Book]. In particular, we avoid the use of the visual and simplicial boundaries. See also [@MR3959849] for a similar approach applied to median spaces of finite rank. [^3]: Actions of mapping class groups on CAT(0) cube complexes are of interest in particular due to connections to other well-known open questions: constructing actions on CAT(0) cube complexes with no global fixed points would imply that mapping class groups do not satisfy Kazhdan's property (T); and proving the fixed-point property on finite-dimensional CAT(0) cube complexes would imply that mapping class groups do not virtually surject onto $\mathbb{Z}$. [^4]: *The theorem is essentially contained in the proof of [@MR1748966 Theorem 6.1]. The philosophy underlying the statement is probably well-known to specialists. For instance, the approach to CAT(0) cube complexes based on disc diagrams described in [@MR4298722] only uses the $2$-skeletons of cube complexes.*
--- abstract: | We prove that $3$-dimensional ellipsoids invariant under a $2$-torus action contain infinitely many distinct immersed minimal tori, with at most one exception. These minimal tori bifurcate from the $2$-torus orbit of largest volume at a dense set of eccentricities, and remain invariant under a circle. address: Universidade de São Paulo Departamento de Matemática Rua do Matão, 1010 São Paulo, SP, 05508-090, Brazil author: - Renato G. Bettiol - Paolo Piccione title: Bifurcations of Clifford tori in ellipsoids --- # Introduction Despite recent spectacular advances in the existence theory of minimal surfaces, e.g., obtained via min-max theory and doubling constructions, many fundamental questions remain unanswered. In particular, while min-max theory applies broadly, yielding infinitely many embedded minimal surfaces on closed manifolds [@song], there is usually not much control on the topology of these minimal surfaces. For example, it is still unknown if the set of embedded minimal surfaces of a given genus $g\geq2$ in the round $\mathds{S}^3$ is finite up to congruence, a question raised by Yau [@yau-prob]. For genus $g=0$ and $g=1$, there is only one such minimal surface up to congruence, as shown by Almgren [@almgren] and Brendle [@brendle], respectively. Recently, Ketover [@ketover] proved that the number of such minimal surfaces diverges as $g\nearrow+\infty$. Another intriguing open question is whether every metric on $\mathds{S}^3$ admits at least 5 embedded minimal tori, as conjectured by White [@white-tori], who verified this for almost round metrics. In the same paper, White showed that metrics on $\mathds{S}^3$ with $\operatorname{Ric}\succ0$ carry at least one such torus. A natural approach to produce minimal surfaces with prescribed topology is to use bifurcation theory. 
Namely, given a $1$-parameter family $\Sigma_a$ of minimal surfaces in $(M^3,\mathrm g_a)$, a jump in the Morse index of $\Sigma_a$ as $a$ crosses a degeneracy instant $a_*$ generally corresponds to the existence of a bifurcation branch of other (isotopic, but noncongruent) minimal surfaces in $(M^3,\mathrm g_a)$ for $a$ near $a_*$. Under some conditions, this branch is not only local, but global, i.e., continues to exist for $a$ far from $a_*$. The typical setup to implement this approach is given by deformations $(M^3,\mathrm g_a)$ of manifolds with many symmetries, which retain a family of minimal surfaces $\Sigma_a$. For instance, ellipsoidal deformations $(\mathds{S}^3,\mathrm g_a)\subset \mathds R^4$ of the round sphere retain 4 "planar" minimal spheres given by the intersection with a coordinate hyperplane in $\mathds R^4$. If the minimal spheres $\Sigma_a$ in one of these families are rotationally invariant, then they bifurcate into arbitrarily many *nonplanar* pairwise noncongruent minimal spheres as the ellipsoid $(\mathds{S}^3,\mathrm g_a)$ becomes sufficiently elongated [@ellipsoids]. Similarly, certain ellipsoidal deformations retain a family of minimal tori. Namely, consider $$\mathds{S}^3(a,b):=\left\{(z,w)\in\mathds C^2: \frac{|z|^2}{a^2}+\frac{|w|^2}{b^2}=1\right\},$$ with the isometric action of the torus $\mathsf{T}^2=\mathsf{S}^1 \times \mathsf{S}^1$, where each factor $\mathsf{S}^1\times \{1\}$ and $\{1\}\times \mathsf{S}^1$ acts via multiplication by unit complex numbers, on $z$ and $w$, respectively. In view of the case $a=b$ of the round sphere, we shall refer to the "middle" $\mathsf{T}^2$-orbit $$\Sigma(a,b):=\left\{(z,w)\in \mathds{S}^3(a,b) :\frac{|z|^2}{a^2}=\frac{|w|^2}{b^2}=\frac12 \right\},$$ as the *Clifford torus* in $\mathds{S}^3(a,b)$.
For all $a,b>0$, the torus $\Sigma(a,b)$ has maximal volume among principal $\mathsf{T}^2$-orbits in $\mathds{S}^3(a,b)$, and is hence a minimal surface. In this paper, we investigate minimal tori in $\mathds{S}^3(a,b)$ that bifurcate from $\Sigma(a,b)$, or from one of its finite coverings, as the eccentricity $a/b$ varies. Without loss of generality, as minimal surfaces in $\mathds{S}^3(a,b)$ can be identified with those in $\mathds{S}^3(\lambda a,\lambda b)$ for all $\lambda>0$, we henceforth fix $b=1$, and use $a>0$ as the eccentricity parameter. To simplify notation, we write $\mathds{S}^3_a:=\mathds{S}^3(a,1)$ and $\Sigma_a:=\Sigma(a,1)$. Our main result is: **Theorem 1**. *For $a_*\in \mathfrak b=\{q/\sqrt{4-q^2} : q\in\mathds Q\cap (0,2) \}$, which is dense in $(0,+\infty)$, a bifurcation branch of $\mathsf{S}^1$-invariant immersed minimal tori in $\mathds{S}^3_a$ stems from the Clifford torus $\Sigma_{a_*}$. Each such branch persists for all $a$ up to $0$ or $+\infty$, and different branches contain pairwise noncongruent tori in $\mathds{S}^3_a$. The bifurcation branch that issues at $a_*=\frac{1}{\sqrt3}$ contains only embedded minimal tori, and persists until $a=0$.* Many geometric properties of these minimal tori in $\mathds{S}^3_a$ can be inferred from the instant at which they bifurcate from $\Sigma_a$. Given $q\in\mathds Q\cap (0,2)$, let $0<j<2k$ be the unique integers such that $\gcd(j,k)=1$ and $q=j/k$, and let $\mathcal B_{(j,k)}$ be the branch of minimal tori that bifurcate from $\Sigma_a$ at $a_* = q/\sqrt{4-q^2}$. (More precisely, the tori in $\mathcal B_{(j,k)}$ bifurcate from a $k$-fold covering of $\Sigma_a$.) Each minimal immersion $\mathsf{T}^2\hookrightarrow \mathds{S}^3_a$ in $\mathcal B_{(j,k)}$ maps the parallel circles $\{e^{i\theta}\}\times \mathsf{S}^1\subset\mathsf{T}^2$ into $\{1\}\times \mathsf{S}^1$-orbits in $\mathds{S}^3_a$. 
Tori in $\mathcal B_{(j,k)}$, other than $\Sigma_{a_*}$, intersect $\Sigma_a$ along the image of $2j$ such circles, and have linking number $k$ with the closed geodesic $\{(0,w) \in \mathds{S}^3_a : |w|=1\}$. This implies that the branches $\mathcal B_{(j,k)}$ are pairwise disjoint, see for details. All tori in $\mathcal B_{(1,1)}$ are embedded, but none of the tori in $\mathcal B_{(j,k)}$ with $(j,k)\neq (1,1)$ are embedded. The latter are (Alexandrov) immersed and self-intersect along the image of $k-1$ circles contained in the $2$-sphere $\mathds{S}^3_a \cap\{\operatorname{Im}z=0\}$, which are $\{1\}\times\mathsf{S}^1$-orbits. Owing to our $\mathsf{S}^1$-invariant setup, all branches $\mathcal B_{(j,k)}$ can be realized as closed connected subsets of the strip $\{(a,s)\in (0,+\infty)\times (-1,1)\}$, where $s\in (-1,1)$ parametrizes $\{1\}\times\mathsf{S}^1$-invariant minimal immersions $\mathds R\times \mathsf{S}^1 \hookrightarrow \mathds{S}^3_a$ that coincide with the Clifford torus $\Sigma_a$ if $s=0$, close up as tori if $(a,s)\in \mathcal B_{(j,k)}$ for some $j,k$, and degenerate to the totally geodesic $2$-sphere $\mathds{S}^3_a\cap \{\operatorname{Re} z =0\}$ as $s\searrow -1$, and to the closed geodesic $\{(z,0) \in \mathds{S}^3_a : |z|=a\}$ as $s\nearrow1$. Using this, we prove that $\mathcal B_{(j,k)}$ are noncompact and contain points $(a,s)$ with $a$ arbitrarily close to $0$ or $+\infty$. The round sphere $\mathds{S}^3_1$ contains infinitely many pairwise noncongruent immersed minimal tori which are $\mathsf{S}^1$-invariant, see e.g. [@lawson Thm. 3] or [@brendle-survey Thm. 1.4]. This can be generalized to the ellipsoids $\mathds{S}^3_a$ as a consequence of the above Theorem: **Corollary 1**. *The ellipsoid $\mathds{S}^3_a$ contains infinitely many pairwise noncongruent $\mathsf{S}^1$-invariant immersed minimal tori for all $a \in (0,+\infty)$ except possibly for one value in $\big(\frac{1}{\sqrt3},+\infty\big)$. 
If $a\in \big(0,\frac{1}{\sqrt3}\big)$, then at least one of these minimal tori is embedded and not congruent to the Clifford torus $\Sigma_a$.* The value $a=\frac{1}{\sqrt3}$ has a special role because tori in $\mathcal B_{(1,1)}$ are embedded. Namely, $\mathcal B_{(1,1)}$ cannot enter the region $a>1$, otherwise it would yield an embedded minimal torus in the round sphere $\mathds{S}^3_1$ not congruent to the Clifford torus, in contradiction with Brendle's proof [@brendle] of the Lawson conjecture. In other words, $a=1$ is a barrier for $\mathcal B_{(1,1)}$, and $\mathcal B_{(1,1)}$ is then itself a barrier for all $\mathcal B_{(j,k)}$ that bifurcate at $a_*<\frac{1}{\sqrt3}$. Thus, these are nested in the strip $\{(a,s)\in (0,+\infty)\times (-1,1)\}$, and so contain points with $a$ arbitrarily close to $0$. Either this is the case for all branches (which is natural to expect), or else a branch bifurcating at some $a_*>\frac{1}{\sqrt3}$, and hence all subsequent branches, do not contain points with $a$ close to $0$, but instead have points with arbitrarily large $a$. If the latter occurs, in principle, there could be (at most) one value of $a\in \big(\frac{1}{\sqrt3},+\infty\big)$ for which $\{a\}\times (-1,1)$ does not intersect infinitely many branches $\mathcal B_{(j,k)}$. Because of this possibility, the above Corollary states *except possibly for one value in* $\big(\frac{1}{\sqrt3},+\infty\big)$. Our analysis also yields a local uniqueness counterpart to the above Theorem. Given $k\in \mathds N$, let $\mathfrak b_{k}:=\{(j/k)/\sqrt{4-(j/k)^2} : 0<j<2k,\, \gcd(j,k)=1 \}$, and note that $\mathfrak b=\bigcup_{k\in\mathds N}\mathfrak b_{k}$. If $a\notin \mathfrak b_{k}$, then the $k$-fold covering of the Clifford torus $\Sigma_a$ is locally unique among $\{1\}\times\mathsf{S}^1$-invariant immersed minimal tori in $\mathds{S}^3_a$ up to congruence. 
In particular, since $\mathfrak b_{1}=\{\frac{1}{\sqrt3}\}$, we conclude that the Clifford torus $\Sigma_a$, $a\neq \frac{1}{\sqrt3}$, is locally unique among $\{1\}\times\mathsf{S}^1$-invariant embedded minimal tori in $\mathds{S}^3_a$ up to congruence. For details, see (ii). Beyond ellipsoidal deformations, there are other natural families of deformations of the round $\mathds{S}^3$ that retain several minimal surfaces, and are hence well-suited for the bifurcation approach put forth in this paper. For instance, Berger spheres $\mathds{S}^3(\tau)$, $\tau>0$, are homogeneous deformations of the round sphere $\mathds{S}^3(1)$ obtained by scaling the Hopf circles $\mathsf{S}^1\to \mathds{S}^3\to \mathds CP^1$ by $\tau>0$. Their isometry group $\mathsf U(2)$ contains a torus $\mathsf{T}^2$ that acts by cohomogeneity one, and the $\mathsf{T}^2$-orbit of maximal volume in $\mathds{S}^3(\tau)$ is an embedded minimal torus, which coincides with the Clifford torus for $\tau=1$. These minimal tori bifurcate infinitely many times as $\tau\nearrow+\infty$, as shown in [@LLP], though in a local "cluster-point" sense that is weaker than the one used here. On the other hand, some families of minimal surfaces never bifurcate; e.g., as $\mathds{S}^3(\tau)$ are homogeneous, they each admit a *unique* (up to congruence) immersed minimal $2$-sphere [@mmpr2]. This paper is organized as follows. In , we use the symmetry reduction method of Hsiang--Lawson [@hsiang-lawson] to identify $\{1\}\times\mathsf{S}^1$-invariant minimal tori in $\mathds{S}^3_a$ with closed geodesics on a certain Riemannian $2$-disk $\Omega_a$ with singular boundary. The second variation of (iterates of) the closed geodesic $\gamma_{a,0}$ in $\Omega_a$ that corresponds to the Clifford torus $\Sigma_a$ is analyzed in .
We restrict to perturbations of $\gamma_{a,0}$ that are invariant under a reflection in $\Omega_a$ to avoid multiplicity issues, allowing us to use the simple eigenvalue bifurcation theorem of Crandall--Rabinowitz [@crandall-rabinowitz] for the local result (). Combining this local result with the discrete-valued invariant in and the global bifurcation theorem of Rabinowitz [@rabinowitz], we prove the Theorem and Corollary stated above. ## Acknowledgements The first-named author was supported by the National Science Foundation CAREER grant DMS-2142575. The second-named author was supported by grants from CNPq and Fapesp (2022/16097-2, 2022/14254-3), Brazil. # Geometric setup {#sec:setup} ## Symmetry reduction Consider the isometric $\mathsf{S}^1$-action on $\mathds{S}^3_a\subset \mathds C^2$ given by complex multiplication in the second coordinate, i.e., the action of $\{1\}\times\mathsf{S}^1\subset\mathsf{T}^2$. The orbit through $(z,w)\in \mathds{S}^3_a$ is a circle of radius $r=|w|=\sqrt{1-\frac{|z|^2}{a^2}}$ if $|w|>0$, and a fixed point if $|w|=0$. Moreover, the orbit space is isometric to $$\label{eq:quotient} \mathds{S}^3_a/(\mathsf{S}^1) = \left\{ (z,r)\in\mathds C\times\mathds R: \frac{|z|^2}{a^2}+ r^2 =1, \, r\geq 0 \right\},$$ with boundary $\partial(\mathds{S}^3_a/\mathsf{S}^1)=\{(z,0)\in \mathds{S}^3_a/ \mathsf{S}^1\}$ given by the circle of $\mathsf{S}^1$-fixed points in $\mathds{S}^3_a$, and the volume function of $\mathsf{S}^1$-orbits is given by $$\label{eq:vol-fct} V\colon \mathds{S}^3_a/ \mathsf{S}^1\longrightarrow\mathds R, \quad V(z,r)=2\pi r=2\pi \sqrt{1- |z|^2/a^2}.$$ Consider the Riemannian $2$-disk endowed with the conformal metric $$\Omega_{a}:=\big(\mathds{S}^3_a/ \mathsf{S}^1, V^2\, \check{\mathrm g}_{a}\big),$$ where $\check{\mathrm g}_{a}$ is the Riemannian metric of [\[eq:quotient\]](#eq:quotient){reference-type="eqref" reference="eq:quotient"}. The symmetry reduction procedure of Hsiang--Lawson [@hsiang-lawson], cf.
[@ellipsoids Prop. 3.1], readily implies the following: **Proposition 1**. *An $\{1\}\times\mathsf{S}^1$-invariant surface $\Sigma$ in $\mathds{S}^3_a$ is minimal if and only if its quotient $\Sigma/\mathsf{S}^1$ is a geodesic in $\Omega_{a}$. In particular, $\Sigma$ is an $\{1\}\times\mathsf{S}^1$-invariant minimal torus in $\mathds{S}^3_a$ if and only if $\Sigma/\mathsf{S}^1$ is a closed geodesic in the interior of $\Omega_{a}$, and $\Sigma$ is embedded if and only if $\Sigma/\mathsf{S}^1$ is embedded (i.e., a simple closed geodesic).* ## Geodesics in $\Omega_{a}$ Since the isometric actions on $\mathds{S}^3_a$ of the circle subgroups $\mathsf{S}^1\times \{1\}$ and $\{1\}\times\mathsf{S}^1$ of $\mathsf{T}^2$ commute, and [\[eq:vol-fct\]](#eq:vol-fct){reference-type="eqref" reference="eq:vol-fct"} is $\mathsf{S}^1\times \{1\}$-invariant, it follows that the metric $V^2\, \check{\mathrm g}_{a}$ of $\Omega_{a}$ is invariant with respect to the induced $\mathsf{S}^1\times \{1\}$-action. In order to write this metric in polar coordinates $(\rho,\theta)$ as $\mathrm{d}\rho^2+\varphi(\rho)^2\mathrm{d}\theta^2$, we parametrize the orbit space [\[eq:quotient\]](#eq:quotient){reference-type="eqref" reference="eq:quotient"} with $X(\phi,\theta)=\big(x(\phi,\theta),y(\phi,\theta),r(\phi,\theta)\big)$, where $$x(\phi,\theta)=a\cos\theta\sin\phi, \quad y(\phi,\theta)=a\sin\theta\sin\phi, \quad r(\phi,\theta)=\cos\phi,$$ and $\phi\in \big[0,\frac\pi2\big], \, \theta\in[0,2\pi]$. 
Thus, from [\[eq:vol-fct\]](#eq:vol-fct){reference-type="eqref" reference="eq:vol-fct"}, we have that the metric of $\Omega_{a}$ is $$\label{eq:conf-metric-polarcoord} \begin{aligned} V^2\, \check{\mathrm g}_{a}&=4\pi^2\cos^2\phi\,(\mathrm{d}x^2+\mathrm{d}y^2 +\mathrm{d}r^2) \\ &=4\pi^2\cos^2\phi\,\big((a^2\cos^2\phi+\sin^2\phi)\,\mathrm{d}\phi^2 + a^2\sin^2\phi\,\mathrm{d}\theta^2\big)\\ &=\mathrm{d}\rho^2+ \varphi(\rho)^2\mathrm{d}\theta^2, \end{aligned}$$ where $$\label{eq:drho-varphi} \mathrm{d}\rho=2\pi \cos(\phi)\sqrt{a^2\cos^2\phi+\sin^2\phi}\,\mathrm{d}\phi, \;\; \text{and} \;\; \varphi(\rho)=\pi a \sin(2\phi(\rho)).$$ Here, $\phi(\rho)$ is the inverse of the arclength function $\rho\colon\left[0,\frac\pi2\right]\to [0,L_a]$, given by $\rho(\phi)=2\pi \int_0^\phi \cos(\xi)\sqrt{a^2\cos^2\xi+\sin^2\xi}\,\mathrm{d}\xi$, with $L_a=\rho(\frac\pi2)$. Clearly, $\varphi(\rho)>0$ for all $\rho \in (0,L_a)$, and $\varphi(0)=\varphi(L_a)=0$. Since $\rho'(0)=2\pi a$ and $\lim\limits_{\phi\nearrow \frac\pi2}\rho'(\phi)=0$, we have that $\varphi'(0)=1$ and $\lim\limits_{\rho\nearrow L_a}\varphi'(\rho)=-\infty$, corresponding to the fact that [\[eq:conf-metric-polarcoord\]](#eq:conf-metric-polarcoord){reference-type="eqref" reference="eq:conf-metric-polarcoord"} is smooth at the central point $O=\{\rho=0\}$ but singular at $\partial \Omega_{a}=\{\rho=L_a\}$. 
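These boundary behaviours of $\varphi$ can be verified symbolically via the chain rule $\varphi'(\rho)=(\mathrm{d}\varphi/\mathrm{d}\phi)\big/(\mathrm{d}\rho/\mathrm{d}\phi)$. A short sketch using sympy (the variable names are ours):

```python
import sympy as sp

a, phi = sp.symbols('a phi', positive=True)

# varphi and d(rho)/d(phi), as functions of phi, from the parametrization above
varphi = sp.pi * a * sp.sin(2 * phi)
drho_dphi = 2 * sp.pi * sp.cos(phi) * sp.sqrt(a**2 * sp.cos(phi)**2 + sp.sin(phi)**2)

# chain rule: varphi'(rho) = (d varphi / d phi) / (d rho / d phi)
dvarphi_drho = sp.simplify(sp.diff(varphi, phi) / drho_dphi)

print(dvarphi_drho.subs(phi, 0))                    # 1: smooth at the centre O
print(sp.limit(dvarphi_drho, phi, sp.pi / 2, '-'))  # -oo: singular at the boundary
print(dvarphi_drho.subs(phi, sp.pi / 4))            # 0: vanishes at phi = pi/4
```

The last line anticipates the Clifford geodesic below: the unique interior critical point of $\varphi$ occurs at $\phi=\frac{\pi}{4}$.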
### Geodesic equation A routine computation with [\[eq:conf-metric-polarcoord\]](#eq:conf-metric-polarcoord){reference-type="eqref" reference="eq:conf-metric-polarcoord"} shows that the curve $\gamma(t)=(\rho(t),\theta(t))$ is a geodesic in $\Omega_{a}$ if and only if it satisfies the system of ODEs $$\label{eq:geod-eqn} \ddot\rho -\varphi(\rho) \varphi'(\rho)\,\dot\theta^2=0 \quad \text{and}\quad \ddot\theta+2 \frac{ \varphi'(\rho)}{\varphi(\rho)}\,\dot\rho\,\dot\theta =0.$$ Integrating the second equation above yields a conserved quantity, whose constant value depends on the geodesic $\gamma$, given by $$\label{eq:conserved} \dot\theta\,\varphi(\rho)^2=c(\gamma),$$ which corresponds to the fact that $\frac{\partial}{\partial \theta}$ is a Killing field in $\Omega_{a}$. Thus, along a geodesic, $\theta$ is either *constant* (radial geodesic) or *monotonic*. This allows us to reparametrize nonradial geodesics as $\gamma(\theta)=(\rho(\theta),\theta)$, though we shall not make use of this fact. For each $\theta_*\in [0,2\pi)$, we define the *radial geodesic segment* issuing from $O$, $$\label{eq:radialgeod} \sigma(\theta_*):=\{ (\rho,\theta_*)\in\Omega_{a} : 0<\rho <L_a\},$$ and, for each $\theta_*\in [0,\pi)$, the *diameter* $$\label{eq:diam} D(\theta_*):=\overline{\sigma(\theta_*)\cup\sigma(\theta_*+\pi)}.$$ Note that any geodesic in $\Omega_a$ going through $O$ must be a diameter; by [@ellipsoids Thm. 3.6] these are the only geodesics to reach $\partial\Omega_a$. The reflection $\tau_{\theta_*}\colon\Omega_a\to\Omega_a$ about $D(\theta_*)$ is an isometry, and the preimage of $D(\theta_*)$ under the quotient map $\mathds{S}^3_a\to\Omega_a$ is a totally geodesic (planar) $2$-sphere in $\mathds{S}^3_a$. From [\[eq:conserved\]](#eq:conserved){reference-type="eqref" reference="eq:conserved"}, every nonradial geodesic crosses (transversely) each $\sigma(\theta_*)$, see . 
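As a numerical sanity check on [\[eq:geod-eqn\]](#eq:geod-eqn){reference-type="eqref" reference="eq:geod-eqn"} and [\[eq:conserved\]](#eq:conserved){reference-type="eqref" reference="eq:conserved"}, one can integrate the geodesic system for a warped metric $\mathrm{d}\rho^2+\varphi(\rho)^2\mathrm{d}\theta^2$ and monitor the Clairaut-type constant. Since the ellipsoidal $\varphi$ is only given through the arclength inversion, the sketch below uses the stand-in $\varphi(\rho)=\sin\rho$ (the round $2$-sphere); the conservation law holds for any warping function:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Warping function and its derivative; stand-in choice varphi(rho) = sin(rho).
varphi, dvarphi = np.sin, np.cos

def geodesic(t, y):
    # Geodesic ODEs for d rho^2 + varphi(rho)^2 d theta^2, as in the text:
    #   rho'' = varphi varphi' theta'^2,   theta'' = -2 (varphi'/varphi) rho' theta'
    rho, theta, drho, dtheta = y
    return [drho, dtheta,
            varphi(rho) * dvarphi(rho) * dtheta**2,
            -2 * dvarphi(rho) / varphi(rho) * drho * dtheta]

# Initial data chosen so the trajectory stays away from the poles varphi = 0.
sol = solve_ivp(geodesic, (0, 10), [1.0, 0.0, 0.3, 1.0], rtol=1e-10, atol=1e-12)

c = sol.y[3] * varphi(sol.y[0])**2   # the conserved quantity  theta' varphi(rho)^2
print(np.ptp(c))                     # spread of c: zero up to integration error
```

The spread of `c` along the computed trajectory is of the order of the integration tolerance, as [\[eq:conserved\]](#eq:conserved){reference-type="eqref" reference="eq:conserved"} predicts.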
### Clifford geodesic {#subsec:cliffordgeod} According to [\[eq:geod-eqn\]](#eq:geod-eqn){reference-type="eqref" reference="eq:geod-eqn"}, a curve $\gamma(t)=(\rho(t),\theta(t))$ such that $\rho(t)\equiv\rho_{a,0}$ is constant, i.e., a circle of latitude $\rho_{a,0}$, is a geodesic in $\Omega_a$ if and only if $\varphi'(\rho_{a,0})=0$. As a consequence of [\[eq:drho-varphi\]](#eq:drho-varphi){reference-type="eqref" reference="eq:drho-varphi"}, there is only one such $\rho_{a,0}\in (0,L_a)$, namely the unique solution to $\phi(\rho_{a,0})=\frac\pi4$, since $\phi(\rho)$ is monotonic. The corresponding simple closed geodesic $\gamma_{a,0}(t):=(\rho_{a,0},t)$ is the image of the Clifford torus $\Sigma_a\subset \mathds{S}^3_a$ under the quotient map $\mathds{S}^3_a\to\Omega_{a}$, so we call it the *Clifford geodesic*. Note that $\gamma_{a,0}$ has length $|\Sigma_a|=2\pi^2a$ and, from [\[eq:drho-varphi\]](#eq:drho-varphi){reference-type="eqref" reference="eq:drho-varphi"}, $$\label{eq:valuesvarphi} \varphi(\rho_{a,0})=\pi a, \quad \text{ and }\quad \varphi''(\rho_{a,0})=-\tfrac{4 a}{\pi (a^2+1)}.$$ ### Geodesics $\gamma_{a,s}$ Fix a real analytic function $\beta_a\colon (-1,1)\to (0,L_a)$ such that $\beta_a(0)=\rho_{a,0}$ for all $a$ and $\beta_a'(s)>0$. In particular, $(-1,1)\ni s\mapsto (\beta_a(s),0)\in\Omega_a$ is a parametrization of $\sigma(0)$ starting at $O$ and ending at $\partial\Omega_a$. Let $$\label{eq:gamma_as} \gamma_{a,s}(t)=\big(\rho_{a,s}(t),\theta_{a,s}(t)\big)$$ be the (maximal) geodesic in $\Omega_{a}$ starting orthogonally to $\sigma(0)$ at $\gamma_{a,s}(0)=(\beta_a(s),0)$ with $\dot\gamma_{a,s}(0)=(0,1)$. Since the reflection $\tau_0$ about $D(0)$ maps $\dot\gamma_{a,s}(0)$ to $-\dot\gamma_{a,s}(0)$, it follows that $\tau_0(\gamma_{a,s}(t))=\gamma_{a,s}(-t)$ for all $t\in\mathds R$. Note that, for $s=0$, we have that $\rho_{a,0}(t)\equiv\rho_{a,0}$ is constant and $\theta_{a,0}(t)=t$, since $\gamma_{a,0}$ is the Clifford geodesic. *Remark 2*. 
Every closed geodesic $\gamma(t)=(\rho(t),\theta(t))$ in $\Omega_a$ intersects some radial geodesic $\sigma(\theta_*)$ orthogonally, e.g., at the point where $\rho(t)$ is maximal, and hence is congruent to $\gamma_{a,s}$ via clockwise $\mathsf{S}^1\times \{1\}$-rotation by $\theta_*$. Using the variation $s\mapsto \gamma_{a,s}$ of $\gamma_{a,0}$ by geodesics, we linearize [\[eq:geod-eqn\]](#eq:geod-eqn){reference-type="eqref" reference="eq:geod-eqn"} and [\[eq:conserved\]](#eq:conserved){reference-type="eqref" reference="eq:conserved"}, obtaining the Jacobi equation for $(R_a,T_a)=\frac{\mathrm{d}}{\mathrm{d}s}(\rho_{a,s},\theta_{a,s})|_{s=0}$, which, in view of [\[eq:valuesvarphi\]](#eq:valuesvarphi){reference-type="eqref" reference="eq:valuesvarphi"}, is the following system of ODEs with constant coefficients: $$\label{eq:jacobi-eqn-quotient} \ddot R_a+\tfrac{4a^2}{a^2+1}\,R_a=0 \quad \text{and}\quad (\pi a)^2 \,\dot T_a=\tfrac{\mathrm{d}}{\mathrm{d}s}c(\gamma_{a,s})\big|_{s=0}.$$ # Bifurcations of the Clifford geodesic {#sec:bif} In this section, we classify the closed geodesics in $\Omega_a$ that bifurcate from the Clifford geodesic $\gamma_{a,0}$ as the parameter $a$ varies. By , this corresponds to classifying the $\{1\}\times\mathsf{S}^1$-invariant minimal tori in $\mathds{S}^3_a$ that bifurcate from $\Sigma_a$. **Lemma 3**. *Each $\gamma_{a,s}$ intersects (transversely) all radial geodesics $\sigma(k\pi)$, $k\in\mathds N$.* *Proof.* If $\gamma_{a,s}$ did not intersect $\sigma(k\pi)$, then $\lim_{t\to +\infty} \theta_{a,s}(t)=\theta_{\mathrm{max}}\in (0,k\pi]$ and $\lim_{t\to +\infty } \rho_{a,s}(t)=L_a$, i.e., $\gamma_{a,s}(t)$ converges to a point in $\partial \Omega_{a}$ as $t\nearrow + \infty$, hence $\varphi(\rho_{a,s}(t))\searrow 0$. This implies $\dot\theta \nearrow +\infty$ by [\[eq:conserved\]](#eq:conserved){reference-type="eqref" reference="eq:conserved"}, contradicting $\theta_{\mathrm{max}}\in (0,k\pi]$. 
◻ By the lemma above and the Implicit Function Theorem, we have: **Proposition 4**. *For all $k\in\mathds N$, there exists a real analytic positive function $\ell_k(a,s)$ such that $\theta_{a,s}(\ell_k(a,s))=k\pi$. In particular, $\gamma_{a,s}(\ell_k(a,s))\in D(0)$.* *Remark 5*. We have that $\ell_k(a,0)=k\pi$ for all $a>0$, since $\theta_{a,0}(t)=t$. This also follows from $\ell_k(a,0)=\frac{k|\Sigma_a|}{2\|\dot\gamma_{a,0}\|}$ as $|\Sigma_a|=2\pi^2a$ and $\|\dot\gamma_{a,0}\|=\pi a$ by [\[eq:valuesvarphi\]](#eq:valuesvarphi){reference-type="eqref" reference="eq:valuesvarphi"}. For each $k\in\mathds N$, consider the real analytic function $$\label{eq:f} f_k\colon (0,+\infty) \times (-1,1) \longrightarrow \mathds R, \quad f_k(a,s):= \dot\rho_{a,s}(\ell_k(a,s)),$$ and note that $f_k(a,0)=0$ for all $a>0$, see . **Proposition 6**. *The following hold:* 1. *The geodesic $\gamma_{a,s}$ is closed if and only if $f_k(a,s)=0$ for some $k\in\mathds N$;* 2. *The geodesic $\gamma_{a,s}$ is a simple closed geodesic if and only if $f_1(a,s)=0$;* 3. *The geodesic segment $\gamma_{a,s}([-\ell_k(a,s),\ell_k(a,s)])$ is a primitive closed geodesic (i.e., traces its image exactly once) with winding number $k$ around $O$ if and only if $f_k(a,s)=0$ and $f_{k'}(a,s)\neq 0$ for all $0<k'<k$.* *If $f_k(a,s)=0$, the preimage of $\gamma_{a,s}$ under the quotient map $\mathds{S}^3_a\to\Omega_a$ is an $\{1\}\times\mathsf{S}^1$-invariant immersed minimal torus in $\mathds{S}^3_a$, which is embedded if and only if $k=1$, and is congruent to (a covering of) the Clifford torus $\Sigma_a$ if and only if $s=0$.* *Proof.* Recall that $\tau_0(\gamma_{a,s}(t))=\gamma_{a,s}(-t)$ for all $t$, so $\gamma_{a,s}(\mathds R)$ is invariant under $\tau_0$. If $f_k(a,s)=0$, then $\gamma_{a,s}([0,\ell_k(a,s)])$ is a geodesic segment that starts orthogonally from $\sigma(0)$ and ends orthogonally at $\sigma(k\pi)$, so $\gamma_{a,s}$ is a closed geodesic.
Conversely, if $\gamma_{a,s}$ is a closed geodesic, then any $\ell>0$ such that $\gamma_{a,s}(-\ell)=\gamma_{a,s}(\ell)$ and $\dot\gamma_{a,s}(-\ell)=\dot\gamma_{a,s}(\ell)$ must satisfy $\theta_{a,s}(\ell)=k\pi$ for some $k\in\mathds N$, i.e., $\ell=\ell_k(a,s)$, because $\gamma_{a,s}(\ell)$ is a fixed point of $\tau_0$ hence lies on $D(0)$. Since $\gamma_{a,s}$ is smooth at that point, $\dot\gamma_{a,s}(\ell)$ must be orthogonal to $D(0)$, i.e., $f_k(a,s)=0$. This concludes the proof of (i), and shows that if $f_k(a,s)=0$ then $f_{mk}(a,s)=0$ for all $m\in\mathds N$. By the same arguments, $\ell=\ell_k(a,s)$ is the *smallest* $\ell>0$ as above if and only if $\gamma_{a,s}([-\ell,\ell])$ is a *primitive* closed geodesic with winding number $k$ around $O$, which proves (ii) and (iii). The claims about the preimage of $\gamma_{a,s}$ in $\mathds{S}^3_a$ follow from the results of the previous section, and from the correspondence between congruence classes of $\{1\}\times\mathsf{S}^1$-invariant surfaces in $\mathds{S}^3_a$ and congruence classes of their images in $\Omega_a$, i.e., orbits under $\mathsf S^1\times\{1\}$-rotations and reflections $\tau_{\theta_*}$, all of which map $\gamma_{a,s}$ to $\gamma_{a,0}$ if and only if $s=0$. ◻ *Remark 7*. If $f_1(a,s)=0$, then $\gamma_{a,s}$ is not only a *simple* closed geodesic but also meets each radial geodesic $\sigma(\theta_*)$ exactly *once*, i.e., bounds a star-shaped domain. *Remark 8*. The minimal tori above are *Alexandrov immersed*, i.e., the immersion $\mathsf{T}^2\hookrightarrow \mathds{S}^3_a$ is the restriction of an immersion $\mathsf{S}^1\times \overline{B^2}\hookrightarrow \mathds{S}^3_a$ to $\partial (\mathsf{S}^1\times \overline{B^2})=\mathsf{T}^2$. Namely, $\mathsf{T}^2\hookrightarrow \mathds{S}^3_a$ can be extended to an immersion $\mathsf{S}^1\times \overline{B^2}\hookrightarrow \mathds{S}^3_a$ by filling in the corresponding $\{1\}\times\mathsf{S}^1$-orbits in $\mathds{S}^3_a$ with $\{1\}\times\mathsf{S}^1$-invariant $2$-disks.
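Since the linearization [\[eq:jacobi-eqn-quotient\]](#eq:jacobi-eqn-quotient){reference-type="eqref" reference="eq:jacobi-eqn-quotient"} has constant coefficients, its radial component can be probed numerically. The following minimal sketch (the normalization $R_a(0)=1$, i.e. $\beta'_a(0)=1$, and the sample value $a=0.7$ are illustrative choices, not taken from the text) integrates $\ddot R_a+\frac{4a^2}{a^2+1}R_a=0$ with a standard RK4 scheme, confirms the closed form $R_a(t)=\cos\big(\frac{2a}{\sqrt{a^2+1}}t\big)$, and checks that $\dot R_a(\pi)$ vanishes at $a=1/\sqrt3$, where $\frac{2a}{\sqrt{a^2+1}}=1$.

```python
import numpy as np

def rk4(F, y0, t0, t1, n=4000):
    """Classical fourth-order Runge-Kutta integrator for y' = F(t, y)."""
    h = (t1 - t0) / n
    t, y = t0, np.array(y0, dtype=float)
    for _ in range(n):
        k1 = F(t, y)
        k2 = F(t + h/2, y + h/2*k1)
        k3 = F(t + h/2, y + h/2*k2)
        k4 = F(t + h, y + h*k3)
        y = y + h/6*(k1 + 2*k2 + 2*k3 + k4)
        t += h
    return y

def jacobi_R(a, t):
    # Radial Jacobi equation: R'' + (4a^2/(a^2+1)) R = 0, R(0)=1, R'(0)=0.
    w2 = 4*a**2 / (a**2 + 1)
    F = lambda s, y: np.array([y[1], -w2*y[0]])
    return rk4(F, [1.0, 0.0], 0.0, t)

# Closed form: R(t) = cos(w t) with w = 2a/sqrt(a^2+1).
a = 0.7
R, Rdot = jacobi_R(a, np.pi)
w = 2*a / np.sqrt(a**2 + 1)
err = max(abs(R - np.cos(w*np.pi)), abs(Rdot + w*np.sin(w*np.pi)))

# Degeneracy at a = 1/sqrt(3): there w = 1, so R'(pi) = -w sin(w pi) = 0.
_, Rdot_deg = jacobi_R(1/np.sqrt(3), np.pi)
```

For generic $a$ the derivative $\dot R_a(\pi)$ is bounded away from zero, in line with the analysis of the zero sets $f_k^{-1}(0)$ carried out below.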
In what follows, we study the zero sets $f_k^{-1}(0)\subset (0,+\infty) \times (-1,1)$, which contain the *trivial branch* $\mathcal B_{\mathrm{triv}}=\{(a,0) : a>0\}$ corresponding to the Clifford tori $\Sigma_a$. If $(a_*,0)$ is in the closure of $f_k^{-1}(0)\setminus\mathcal B_{\mathrm{triv}}$, we say $a=a_*$ is a *bifurcation instant* for $f_k$, and the connected component of the closure of $f_k^{-1}(0)\setminus\mathcal B_{\mathrm{triv}}$ containing $(a_*,0)$ is called the *bifurcation branch* for $f_k$ issuing from $(a_*,0)$. In particular, $\frac{\partial f_k}{\partial s}(a_*,0)=0$ by the Implicit Function Theorem, for otherwise points in $\mathcal B_{\mathrm{triv}}$ would be locally unique as solutions to $f_k(a,s)=0$ near $a=a_*$. However, in general, this necessary condition is *not* sufficient for bifurcation to occur at $a=a_*$. ## Local bifurcation A sufficient condition for $a=a_*$ to be a bifurcation instant is given by a classical theorem of Crandall and Rabinowitz [@crandall-rabinowitz], see [@ellipsoids Thm 2.2] for a formulation adapted to the present setup, which yields: **Proposition 9**. *Let $k\in\mathds N$ be fixed. For each integer $0<j<2k$, let $$\label{eq:ajk} a^j_k:=\frac{(j/k)}{\sqrt{4-(j/k)^2}}.$$* 1. *There exists an open neighborhood $U^j_k$ of $\big(a^j_k,0\big)$ in $(0,+\infty)\times (-1,1)$ and a real analytic curve $(-\varepsilon,\varepsilon)\ni t\mapsto \big(a^j_k(t),s^j_k(t)\big)\in U^j_k$ with $\big(a^j_k(0),s^j_k(0)\big)=\big(a^j_k,0\big)$ and $(s^j_k)'(0)>0$ such that $$\label{eq:local-form-bifbranch} f_k^{-1}(0)\cap U^j_k= (\mathcal B_{\mathrm{triv}}\cap U^j_k)\cup\left\{\big(a^j_k(t),s^j_k(t)\big) : t\in (-\varepsilon,\varepsilon) \right\}.$$* 2. 
*For all $a\neq a^j_k$, the trivial solution $(a,0)$ is locally unique for $f_k$, i.e., if $a\neq a^j_k$, there exists an open neighborhood $V_k$ of $(a,0)$ in $(0,+\infty)\times (-1,1)$ such that $f_k^{-1}(0)\cap V_k= \mathcal B_{\mathrm{triv}}\cap V_k$.* *Proof.* Differentiating [\[eq:f\]](#eq:f){reference-type="eqref" reference="eq:f"} in $s$ at $(a,0)$, we have $$\tfrac{\partial f_k}{\partial s}(a,0) = \tfrac{\partial}{\partial s}\dot\rho_{a,s}\big|_{s=0}(\ell_k(a,0))+\dot\rho_{a,0}(\ell_k(a,0))\tfrac{\partial\ell_k}{\partial s}(a,0)=\dot R_a(k\pi),$$ because $\ell_k(a,0)=k\pi$ by , and $\dot\rho_{a,0}(\ell_k(a,0))=f_k(a,0)=0$. Recall that, for all $a>0$, the variational field $(R_a,T_a)=\frac{\mathrm{d}}{\mathrm{d}s}(\rho_{a,s},\theta_{a,s})\big|_{s=0}$ solves the ODE system [\[eq:jacobi-eqn-quotient\]](#eq:jacobi-eqn-quotient){reference-type="eqref" reference="eq:jacobi-eqn-quotient"} with initial conditions $$\begin{aligned} (R_a(0),T_a(0))&=(\beta'_a(0),0), \\ (\dot R_a(0),\dot T_a(0))&=\big(0,(\pi a)^{-2} \, \tfrac{\mathrm{d}}{\mathrm{d}s}c(\gamma_{a,s})\big|_{s=0}\big).\end{aligned}$$ Thus, we have that $R_a(t)=\beta'_a(0)\cos\left(\frac{2a}{\sqrt{a^2+1}} \,t\right)$, and hence $\dot R_a(k\pi)=0$ if and only if $\frac{2ak}{\sqrt{a^2+1}}\in\mathds Z$. Since $0 < \frac{a}{\sqrt{a^2+1}}<1$ for all $a>0$, the only possible integer values for $\frac{2ak}{\sqrt{a^2+1}}$ are $j=1,\dots, 2k-1$, which correspond to $a=a^j_k$. Altogether, $\tfrac{\partial f_k}{\partial s}(a,0) =0$ if and only if $a=a^j_k$ for some integer $0<j<2k$. Thus, (ii) follows from the Implicit Function Theorem, while (i) follows from [@ellipsoids Thm 2.2] since $$\tfrac{\partial^2 f_k}{\partial a\partial s}\big(a^j_k,0\big) =\tfrac{\partial}{\partial a} \dot R_a(k \pi) \Big|_{a=a^j_k} =\beta_{a^j_k}'(0) \frac{(-1)^{j+1}(4k^2-j^2)^{3/2}\pi j}{4k^3}\neq 0.\qedhere$$ ◻ *Remark 10*. 
The induced metric on $\Sigma_a\subset\mathds{S}^3_a$ is flat, and isometric to $\mathds R^2/\Gamma$ where $\Gamma$ is the integer lattice generated by $\left({\sqrt2}\pi a,0\right)$ and $\left(0,{\sqrt2}\pi \right)$. The Laplace spectrum of the $(k,l)$-fold covering of such a $2$-torus consists of the eigenvalues $2\left(\frac{j^2}{k^2a^2}+\frac{m^2}{l^2}\right)$, where $j,m\in\mathds Z$. Moreover, we have $\operatorname{Ric}(\vec n)+\|A\|^2=\frac{8}{a^2+1}$, so the eigenvalues of the Jacobi operator of the $(k,l)$-covering of $\Sigma_a\subset\mathds{S}^3_a$ are: $$\label{eq:eigenv-jacobi} \phantom{, \qquad j,m\in\mathds Z.} \textstyle \lambda^{k,l}(j,m)=2\left(\frac{j^2}{k^2a^2}+\frac{m^2}{l^2}\right)-\frac{8}{a^2+1}, \qquad j,m\in\mathds Z.$$ The eigenfunctions with eigenvalue $\lambda^{k,l}(j,m)$ are $\{1\}\times\mathsf{S}^1$-invariant if and only if $m=0$, from which one recovers the degeneracy instants $a^j_k$ as the values of $a>0$ such that $\lambda^{k,l}(j,0)=0$. Moreover, $\frac{\partial^2 f_k}{\partial a\partial s}\big(a^j_k,0\big)=(-1)^j\,C\,\frac{\partial}{\partial a} \lambda^{k,l}(j,0)\big|_{a=a^j_k}$, for some $C>0$. Analyzing [\[eq:eigenv-jacobi\]](#eq:eigenv-jacobi){reference-type="eqref" reference="eq:eigenv-jacobi"}, we see that if $\Sigma_a\subset\mathds{S}^3_a$ is degenerate as an embedded minimal surface, i.e., $\lambda^{1,1}(j,m)=0$, then either $a = 1$, or the corresponding Jacobi field is $\mathsf{S}^1\times \{1\}$-invariant ($j=0$, $m=\pm1$, $a=\sqrt3$) or $\{1\}\times\mathsf{S}^1$-invariant ($j=\pm1$, $m=0$, $a=\frac{1}{\sqrt3}$). Meanwhile, if $k\geq 2$ or $l\geq 2$, then $\lambda^{k,l}(j,m)=0$ has non-$\mathsf{S}^1$-invariant solutions, i.e., with both $m\neq 0$ and $j\neq 0$, besides those with $a=1$. 
Thus, our $\mathsf{S}^1$-invariant setup detects all possible bifurcations of $\Sigma_a\subset\mathds{S}^3_a$ through *embedded* minimal tori, but there may be bifurcations through immersed minimal tori (that fail to be embedded) besides the $\mathsf{S}^1$-invariant ones studied here. In fact, since there is a jump in the Morse index of $\Sigma_a$ at such degeneracy instants $a=a_*$, the abstract bifurcation criterion in [@g-bif] implies that there are sequences $a_n\to a_*$ and $\Sigma_{(n)}\hookrightarrow \mathds{S}^3_{a_n}$ of immersed minimal tori not congruent to $\Sigma_{a_n}$ that accumulate on $\Sigma_{a_*}$. However, without the $\mathsf{S}^1$-invariant setup, it is unclear how to extract global (in $a>0$) consequences from these local "cluster-point" bifurcations of $\Sigma_a$. *Remark 11*. The value $a=1$ is not a bifurcation instant for any $f_k$, since, by [\[eq:ajk\]](#eq:ajk){reference-type="eqref" reference="eq:ajk"}, $a^j_k=1$ if and only if $\frac{j}{k}=\sqrt 2$, which is impossible for integers $j,k$. Thus, given $k_0\in\mathds N$, it follows from Proposition 9 (ii) that $\gamma_{a,0}$ and its first $k_0$ iterates are locally unique as closed geodesics in $\Omega_a$ for $a$ near $1$, since $\bigcap_{k\leq k_0} V_k$ is open. However, note that $\bigcap_{k\in\mathds N} V_k$ need not be open. ## Bifurcation branches It follows from [\[eq:ajk\]](#eq:ajk){reference-type="eqref" reference="eq:ajk"} that the collection $$\label{eq:allbif} \mathfrak b=\bigcup_{k\in \mathds N} \{a^j_k : 0 < j < 2k \}$$ of bifurcation instants forms a countable dense subset of $(0,+\infty)$, since it is the image of $(0,2)\cap \mathds Q$ under the diffeomorphism $(0,2)\ni q\mapsto q/\sqrt{4-q^2}\in(0,+\infty)$, and $a^j_k = a^{j'}_{k'}$ if and only if $\frac{j}{k}=\frac{j'}{k'}$. In order to uniquely identify each $a_*\in \mathfrak b$ as $a_*=a^j_k$ for some $j$ and $k$, we shall henceforth assume that $\gcd(j,k)=1$.
In other words, we shall only consider $(a_*,0)$ as a bifurcation point for $f_k$ with the smallest possible $k$. By Proposition 6 (iii), this means considering only bifurcations of the closed geodesic $\gamma_{a,0}$ through primitive closed geodesics, i.e., disregarding iterates. Let $\mathcal B_{(j,k)}$ be the bifurcation branch for $f_k$ issuing from $(a^j_k,0)$, where $0<j<2k$ and $\gcd(j,k)=1$ per the convention above. In order to distinguish branches, we now determine a discrete-valued invariant for $f_k(a,s)=0$, see [@ellipsoids Sec. 2.2.3]. **Proposition 12**. *If $(a,s)\in\mathcal B_{(j,k)}\setminus\mathcal B_{\mathrm{triv}}$, then $\gamma_{a,s}([-\ell_k(a,s),\ell_k(a,s)])$ is a closed geodesic in $\Omega_a$ that intersects the Clifford geodesic $\gamma_{a,0}$ at $2j$ instants, has winding number $k$ around $O$, and self-intersects along $D(0)$ at $k-1$ instants. In particular, bifurcation branches are pairwise disjoint: $\mathcal B_{(j,k)}\cap \mathcal B_{(j',k')}=\emptyset$ if $(j,k)\neq(j',k')$.* *Proof.* If $(a,s)\in\mathcal B_{(j,k)}$, $s\neq 0$, is close to $(a^j_k,0)$, then $\rho_{a,s}-\rho_0$ on $[0,\ell_k(a,s)]$ is well-approximated to first order by $s\,R_{a^j_k}$ on $[0,k\pi]$, where $R_{a^j_k}(t)=\beta'_{a^j_k}(0)\cos\big(\frac{j}{k}t\big)$ and $\rho_0$ denotes the (constant) radial coordinate of the Clifford geodesic $\gamma_{a,0}$. In particular, $\rho_{a,s}(t)-\rho_0$ has exactly $j$ simple zeros on $[0,\ell_k(a,s)]$, i.e., $\gamma_{a,s}([0,\ell_k(a,s)])$ intersects $\gamma_{a,0}$ exactly $j$ times, so $\gamma_{a,s}([-\ell_k(a,s),\ell_k(a,s)])$ intersects it $2j$ times. Since intersections between distinct geodesics must be transverse, this remains true for all $(a,s)\in \mathcal B_{(j,k)}\setminus\mathcal B_{\mathrm{triv}}$, proving the first assertion. The second assertion is a consequence of $\gamma_{a,s}\colon [-\ell_k(a,s),\ell_k(a,s)]\to\Omega_a\setminus\{O\}$ being regularly homotopic to $\gamma_{a,0}\colon [-k\pi,k\pi]\to\Omega_a\setminus\{O\}$, whose winding number around $O$ is equal to $k$, cf. 
(iii); the identities $\gamma_{a,s}(-\ell_{k'}(a,s))=\gamma_{a,s}(\ell_{k'}(a,s))$, $0<k'<k$, give rise to the $k-1$ self-intersections along $D(0)$. If $(j,k)\neq(j',k')$, then $\mathcal B_{(j,k)}$ and $\mathcal B_{(j',k')}$ cannot intersect at $(a,s)$ with $s\neq 0$ by the above, while intersection at $(a,0)$ would violate [\[eq:local-form-bifbranch\]](#eq:local-form-bifbranch){reference-type="eqref" reference="eq:local-form-bifbranch"} since $f^{-1}_{\max\{k,k'\}}(0)\cap U^j_k\cap U^{j'}_{k'}$ consists of a single curve besides $\mathcal B_{\mathrm{triv}}$. ◻ The subsets $\mathcal B_{(j,k)}^\pm := \{(a,s)\in \mathcal B_{(j,k)} : \pm s>0\}$ are connected, by [\[eq:local-form-bifbranch\]](#eq:local-form-bifbranch){reference-type="eqref" reference="eq:local-form-bifbranch"} and $\mathcal B_{(j,k)} \cap\mathcal B_{\mathrm{triv}}=\big\{\big(a^j_k,0\big)\big\}$. For each $k\in\mathds N$, there is an involutive homeomorphism $$\iota_k\colon f_k^{-1}(0)\to f_k^{-1}(0)$$ such that, for all $t\in [0,\ell_k(a,s)]$ and $(a,s)\in f_k^{-1}(0)$, $$\gamma_{\iota_k(a,s)}(t)=\big(\rho_{a,s}(\ell_k(a,s)-t),k\pi-\theta_{a,s}(\ell_k(a,s)-t)\big).$$ Clearly, $\gamma_{a,s}\big([0,\ell_k(a,s)]\big)$ and $\gamma_{\iota_k(a,s)}\big([0,\ell_k(a,s)]\big)$ are congruent geodesic segments, since they are mapped to one another by the reflection $\tau_{\frac\pi2}$. Note that $\ell_k\big(\iota_k(a,s)\big)=\ell_k(a,s)$ for all $(a,s)\in f_k^{-1}(0)$, and $\Pi(\iota_k(a,s))=a$ where $\Pi(a,s)=a$ is the projection onto the first factor. Moreover, $\iota_k$ fixes $\mathcal B_{\mathrm{triv}}$ pointwise, i.e., $\iota_k(a,0)=(a,0)$ for all $a>0$. Since $\mathcal B_{(j,k)}$ are connected and $\mathcal B_{(j,k)}\cap\mathcal B_{\mathrm{triv}}=\big\{\big(a^j_k,0\big)\big\}$ by Proposition 12, we have that $\iota_k$ maps $\mathcal B_{(j,k)}$ to itself. Thus, either $\iota_k(\mathcal B^\pm_{(j,k)})=\mathcal B^\mp_{(j,k)}$ or $\iota_k(\mathcal B^\pm_{(j,k)})=\mathcal B^\pm_{(j,k)}$. **Proposition 13**.
*Let $k\in\mathds N$ and $0<j<2k$ with $\gcd(j,k)=1$. If $j$ is even, then $\iota_k(\mathcal B^\pm_{(j,k)})=\mathcal B^\pm_{(j,k)}$; if $j$ is odd, then $\iota_k(\mathcal B^\pm_{(j,k)})=\mathcal B^\mp_{(j,k)}$.* *Proof.* By the proof of Proposition 12, if $(a,s)\in \mathcal B_{(j,k)}$, then $\gamma_{a,s}\big([0,\ell_k(a,s)]\big)$ intersects $\gamma_{a,0}$ exactly $j$ times, so the endpoints of $\gamma_{a,s}([0,\ell_k(a,s)])$ lie in the same component of $\Omega_a\setminus\gamma_{a,0}$ if and only if $j$ is even. In other words, if $(a,s)\in \mathcal B_{(j,k)}$, then $\rho_{a,s}(0)-\rho_0>0$ if and only if $(-1)^j(\rho_{a,s}(\ell_k(a,s))-\rho_0)>0$, hence if and only if $(-1)^j(\rho_{\iota_k(a,s)}(0)-\rho_0)>0$. On the other hand, if $(a,s)\in \mathcal B^\pm_{(j,k)}$ is close to $\big(a^j_k,0\big)$, then $\pm (\rho_{a,s}(0)-\rho_0)=\pm (\beta_a(s)-\beta_a(0))>0$, because $\beta'_a(0)>0$. ◻ ## Global behavior The only points $(a,s)\in \mathcal B_{(j,k)}$ near the boundary of the strip $(0,+\infty)\times(-1,1)$ must have $a\approx 0$ or $a\approx+\infty$ as a consequence of the next proposition: **Proposition 14**. *If a convergent sequence $(a_n,s_n)\in\mathcal B_{(j,k)}$ has $|s_n|\nearrow1$, then either $a_n\searrow0$ or $a_n\nearrow+\infty$. In particular, the restriction to $\mathcal B_{(j,k)}$ of the projection $\Pi\colon (0,+\infty)\times (-1,1)\to(0,+\infty)$, $\Pi(a,s)=a$, is a proper map.* *Proof.* The only geodesics in $\Omega_a$ that converge to the boundary $\partial\Omega_a$ or that pass through the central point $O$ are the diameters [\[eq:diam\]](#eq:diam){reference-type="eqref" reference="eq:diam"}. Thus, if a sequence of points $(a_n,s_n)\in\mathcal B_{(j,k)}$ has $a_n\to a_\infty\in (0,+\infty)$ and $s_n\nearrow 1$, respectively $s_n\searrow-1$, then the corresponding closed geodesics $\gamma_{a_n,s_n}([-\ell_k(a_n,s_n),\ell_k(a_n,s_n)])$ converge graphically to the diameter $D(0)$, respectively $D(\frac\pi2)$, with multiplicity $2k$ in $\Omega_{a_\infty}$.
So, for large $n$, the Clifford geodesic $\gamma_{a_n,0}$ intersects these closed geodesics $4k$ times, but this contradicts Proposition 12, because $2j<4k$. ◻ *Remark 15*. The case $j=k=1$ of Proposition 14 can be alternatively proved using the compactness theorem of Choi--Schoen [@choi-schoen]. Indeed, a convergent sequence $(a_n,s_n)\in\mathcal B_{(1,1)}$ with $a_n\to a_\infty\in (0,+\infty)$ and $|s_n|\nearrow1$ would correspond to a sequence of embedded minimal tori in $\mathds{S}^3_{a_n}$ that converges to a planar minimal sphere in $\mathds{S}^3_{a_\infty}$ with multiplicity $2$, which is impossible by [@choi-schoen]. We are now ready to prove the Theorem and Corollary from the Introduction. *Proof of Theorem.* By Proposition 9 (i) and [\[eq:allbif\]](#eq:allbif){reference-type="eqref" reference="eq:allbif"}, the collection $\mathfrak b$ of bifurcation instants for $\Sigma_a$ forms a dense subset of $(0,+\infty)$. We then apply the Rabinowitz global bifurcation theorem [@rabinowitz], see [@ellipsoids Thm 2.5] for a formulation adapted to the present setup, to each bifurcation instant $a_*=a^j_k$. Hypotheses (1) and (2) of [@ellipsoids Thm 2.5] are satisfied as a consequence of Proposition 9, so the curve $(-\varepsilon,\varepsilon)\ni t\mapsto \big(a^j_k(t),s^j_k(t)\big)\in (0,+\infty)\times (-1,1)$ in [\[eq:local-form-bifbranch\]](#eq:local-form-bifbranch){reference-type="eqref" reference="eq:local-form-bifbranch"} can be extended to a piecewise real analytic curve $\mathds R\ni t\mapsto \big(a^j_k(t),s^j_k(t)\big)\in (0,+\infty)\times (-1,1)$ with $s^j_k(t)=0$ if and only if $t=0$, whose restriction to $(0,+\infty)$ takes values in $\mathcal B^+_{(j,k)}$. Moreover, [@ellipsoids Thm 2.5] yields a dichotomy for the global behavior of branches. Alternative (I) is reattachment to the trivial branch; by Proposition 9 (ii), it would have to occur at some other bifurcation instant $a_{**}=a^{j'}_{k'}$, with $(j,k)\neq (j',k')$, in contradiction to Proposition 12. Thus, alternative (II) holds, i.e., $\mathcal B_{(j,k)}$ are noncompact.
In view of Proposition 14, this implies either $\lim_{t\to+\infty} a^j_k(t)=0$ or $\lim_{t\to+\infty} a^j_k(t)=+\infty$. The claim that minimal tori bifurcating at different values of $a\in\mathfrak b$ are pairwise noncongruent follows from Proposition 12, and embeddedness of those bifurcating at $a^1_1=\frac{1}{\sqrt3}$ follows from Proposition 6 (ii). ◻ *Proof of Corollary.* The $\{1\}\times\mathsf{S}^1$-invariant immersed minimal tori in $\mathds{S}^3_a$ given by the preimage of the geodesic $\gamma_{a,s}$ with $(a,s)\in\mathcal B_{(j,k)}\setminus \{(a^j_k,0)\}$ are not congruent to the Clifford torus $\Sigma_a$ by Proposition 6. If $j=k=1$, these tori are embedded and hence $\lim_{t\to+\infty} a^1_1(t)=0$, for otherwise $\lim_{t\to+\infty} a^1_1(t)=+\infty$ so $a^1_1(t_*)=1$ for some $t_*>0$, in contradiction to the fact that the unique embedded minimal torus in the round sphere $\mathds{S}^3_1$ is the Clifford torus [@brendle]. The branches $\mathcal B_{(j,k)}$ are pairwise disjoint by Proposition 12, so $\lim_{t\to+\infty} a^j_k(t)=0$ for all $0<j<k$, which implies that every ellipsoid $\mathds{S}^3_a$ with $a<a^1_1=\frac{1}{\sqrt3}$ contains infinitely many pairwise noncongruent $\{1\}\times\mathsf{S}^1$-invariant immersed minimal tori, including the embedded minimal torus given by the preimage of $\gamma_{a^1_1(t),s^1_1(t)}$. For $k<j<2k$, either $\lim_{t\to+\infty} a^j_k(t)=0$ for all such $j,k$, hence there are infinitely many pairwise noncongruent $\{1\}\times\mathsf{S}^1$-invariant immersed minimal tori in $\mathds{S}^3_a$ for all $a>0$, or there exists a smallest $j_+/k_+\in (1,2)\cap\mathds Q$ such that $\lim_{t\to+\infty} a^{j_+}_{k_+}(t)=+\infty$. In that case, disjointness of the branches (Proposition 12) implies $\lim_{t\to+\infty} a^{j}_{k}(t)=+\infty$ for all $j,k$ with $a^j_k>a^{j_+}_{k_+}>\frac{1}{\sqrt3}$, so the same conclusion holds except possibly at $a=a^{j_+}_{k_+}$. ◻ *Remark 16*.
If $a<\frac{1}{\sqrt3}$, then the vertical line $\{a\}\times (-1,1)$ intersects $\mathcal B_{(1,1)}$ in at least two points $(a,s_\pm)$, where $s_-<0<s_+$, but we cannot exclude the possibility that the corresponding embedded minimal tori in $\mathds{S}^3_a$ are congruent. In fact, if $(a,s_-)=\iota_1(a,s_+)$, then $\gamma_{a,s_\pm}$ are congruent via the reflection $\tau_{\frac\pi2}$, see the discussion preceding Proposition 13. *Remark 17*. While $a=1$ is a barrier for the bifurcation branch $\mathcal B_{(1,1)}$ of embedded minimal tori in $\mathds{S}^3_a$, it is not a barrier for $\mathcal B_{(j,k)}$, $(j,k)\neq (1,1)$. In fact, infinitely many such branches cross $a=1$ as a consequence of the Corollary. By a result of Brendle [@brendle-mrl], all Alexandrov immersed minimal tori in the round sphere $\mathds{S}^3_1$ are rotationally invariant. There are infinitely many such tori, see [@lawson Thm. 3] or [@brendle-survey Thm. 1.4], and all must be congruent to tori in some $\mathcal B_{(j,k)}$ by the above. *Remark 18*. For odd $k\in\mathds N$, the minimal tori in $\mathcal B_{(j,k)}$ are invariant under the isometry of $\mathds{S}^3_a$ given by $(z,w)\mapsto (\bar z,w)$, so they yield free boundary immersed minimal annuli in the ellipsoidal hemispheres $(\mathds{S}^3_a)^\pm =\{(z,w)\in \mathds{S}^3_a : \pm\operatorname{Im} z\geq 0\}$. F. J. Almgren, Jr., *Some interior regularity theorems for minimal surfaces and an extension of Bernstein's theorem*, Ann. of Math. (2) **84** (1966), 277--292. MR 0200816 R. G. Bettiol and P. Piccione, *Nonplanar minimal spheres in ellipsoids of revolution*, arXiv:2111.14995. R. G. Bettiol, P. Piccione, and G. Siciliano, *Equivariant bifurcation in geometric variational problems*, Analysis and topology in nonlinear differential equations, Progr. Nonlinear Differential Equations Appl., vol. 85, Birkhäuser/Springer, Cham, 2014, pp. 103--133. MR 3330725 S. Brendle, *Alexandrov immersed minimal tori in $S^3$*, Math. Res. Lett. **20** (2013), no. 3, 459--464.
MR 3162839 ———, *Embedded minimal tori in $S^3$ and the Lawson conjecture*, Acta Math. **211** (2013), no. 2, 177--190. MR 3143888 ———, *Minimal surfaces in $S^3$: a survey of recent results*, Bull. Math. Sci. **3** (2013), no. 1, 133--171. MR 3061135 M. G. Crandall and P. H. Rabinowitz, *Bifurcation from simple eigenvalues*, J. Functional Analysis **8** (1971), 321--340. MR 0288640 H. I. Choi and R. Schoen, *The space of minimal embeddings of a surface into a three-dimensional manifold of positive Ricci curvature*, Invent. Math. **81** (1985), no. 3, 387--394. MR 807063 L. L. De Lima, J. H. De Lira, and P. Piccione, *Bifurcation of Clifford tori in Berger 3-spheres*, Q. J. Math. **65** (2014), no. 4, 1345--1362. MR 3285774 W.-Y. Hsiang and H. B. Lawson, Jr., *Minimal submanifolds of low cohomogeneity*, J. Differential Geometry **5** (1971), 1--38. MR 298593 D. Ketover, *Flipping Heegaard splittings and minimal surfaces*, arXiv:2211.03745. H. B. Lawson, Jr., *Complete minimal surfaces in $S^3$*, Ann. of Math. (2) **92** (1970), 335--374. MR 270280 W. H. Meeks, III, P. Mira, J. Pérez, and A. Ros, *Constant mean curvature spheres in homogeneous three-manifolds*, Invent. Math. **224** (2021), no. 1, 147--244. MR 4228502 P. H. Rabinowitz, *Some global results for nonlinear eigenvalue problems*, J. Functional Analysis **7** (1971), 487--513. MR 0301587 A. Song, *Existence of infinitely many minimal hypersurfaces in closed manifolds*, Ann. of Math. (2) **197** (2023), no. 3, 859--895. MR 4564260 B. White, *Every three-sphere of positive Ricci curvature contains a minimal embedded torus*, Bull. Amer. Math. Soc. (N.S.) **21** (1989), no. 1, 71--75. MR 994891 S.-T. Yau, *Nonlinear analysis in geometry*, Enseign. Math. (2) **33** (1987), no. 1-2, 109--158. MR 896385
arxiv_math
{ "id": "2309.13758", "title": "Bifurcations of Clifford tori in ellipsoids", "authors": "Renato G. Bettiol, Paolo Piccione", "categories": "math.DG", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | We construct a divergence-free velocity field $u:[0,T] \times \mathbb{T}^2 \to \mathbb{R}^2$ satisfying $$u \in C^\infty([0,T];C^\alpha(\mathbb{T}^2)) \quad \forall \alpha \in [0,1)$$ such that the corresponding drift-diffusion equation exhibits anomalous dissipation for every smooth initial data. We also show that, given any $\alpha_0 < 1$, the flow can be modified such that it is uniformly bounded only in $C^{\alpha_0}(\mathbb{T}^2)$ and the regularity of solutions satisfies sharp (time-integrated) bounds predicted by the Obukhov-Corrsin theory. The proof is based on a general principle implying $H^1$ growth for all solutions to the transport equation, which may be of independent interest. author: - Tarek M. Elgindi and Kyle Liss bibliography: - ADbib.bib title: Norm Growth, Non-uniqueness, and Anomalous Dissipation in Passive Scalars --- # Introduction We consider the advection-diffusion equation on $\mathbb{T}^2 = \mathbb{R}^2 \slash (2\pi\mathbb{Z}^2)$: $$\label{eq:ADE} \begin{cases} \partial_t f^\kappa + u \cdot \nabla f^\kappa = \kappa \Delta f^\kappa, \\ f^\kappa|_{t=0} = f_0. \end{cases}$$ Here, for some time $T > 0$, $u:[0,T]\times \mathbb{T}^2 \to \mathbb{R}^2$ is a given divergence-free velocity field, $f^\kappa:[0,T]\times \mathbb{T}^2 \to \mathbb{R}$ is a passive scalar representing, for instance, temperature or concentration, $\kappa > 0$ is a small constant, and $f_0 \in L^2$ is a mean-free initial data. The vector field $u$ may be prescribed as a solution to some hydrodynamical equation, like the Euler or Navier-Stokes equations, or it may simply be imposed.
Since $u$ is divergence free, the $L^2$ energy of a solution to [\[eq:ADE\]](#eq:ADE){reference-type="eqref" reference="eq:ADE"} is monotone decreasing and governed by the energy balance $$\label{eq:energybalance} \frac{d}{dt} \|f^\kappa(t)\|_{L^2}^2 = -2\kappa \|\nabla f^\kappa(t)\|_{L^2}^2,$$ or equivalently $$\|f^\kappa(t)\|_{L^2}^2=\|f^\kappa(0)\|_{L^2}^2-2\kappa\int_{0}^t\|\nabla f^\kappa(s)\|_{L^2}^2\mathrm{d}s,$$ on any interval $[0,t]$ where $f^\kappa$ is sufficiently smooth. In particular, the quantity $$\mathcal{E}_\kappa(t):=2\kappa\int_{0}^t\|\nabla f^\kappa(s)\|_{L^2}^2 \mathrm{d}s$$ determines the energy dissipation of a solution. The size of this quantity is, in turn, related to the distribution of the solution in Fourier space or its average length scale in physical space. Even though the velocity field does not enter directly into [\[eq:energybalance\]](#eq:energybalance){reference-type="eqref" reference="eq:energybalance"}, it generally plays an important role in the rate of energy dissipation. Indeed, advection typically contributes to the formation of small spatial scales and consequently enhances the $L^2$ decay of the scalar. In fact, if $u$ is rougher than Lipschitz (as is expected in regimes of turbulent advection), then it is possible for this effect to be so dramatic that a fixed amount of dissipation can occur with arbitrarily small diffusion and in a $\kappa$-independent length of time. That is, one can have *anomalous dissipation*: $$\label{Anomaly}\liminf_{\kappa\rightarrow 0} \mathcal{E}_{\kappa}(t)=c_0>0$$ for some $t > 0$. 
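For illustration, the balance [\[eq:energybalance\]](#eq:energybalance){reference-type="eqref" reference="eq:energybalance"} and the functional $\mathcal{E}_\kappa$ are easy to observe in a simple simulation. The sketch below (the steady shear $u=(\sin y,0)$, the data $f_0=\cos x$, and all numerical parameters are illustrative choices, unrelated to the construction of this paper) integrates [\[eq:ADE\]](#eq:ADE){reference-type="eqref" reference="eq:ADE"} pseudo-spectrally with explicit Euler steps and verifies that $\|f^\kappa(t)\|_{L^2}^2 \approx \|f_0\|_{L^2}^2 - \mathcal{E}_\kappa(t)$ up to time-discretization error.

```python
import numpy as np

# Pseudo-spectral integration of  d/dt f = -u.grad(f) + kappa*Lap(f)  on T^2
# for the shear flow u = (sin y, 0).  All parameters are illustrative.
N, kappa = 64, 0.05
dt, steps = 2e-4, 2500                    # integrate up to t = 0.5
g = np.arange(N) * 2*np.pi / N
X, Y = np.meshgrid(g, g, indexing="ij")
k = np.fft.fftfreq(N, d=1.0/N)            # integer wavenumbers on [0, 2*pi)
KX, KY = np.meshgrid(k, k, indexing="ij")
lap = -(KX**2 + KY**2)

f = np.cos(X)                             # mean-zero initial data
u = np.sin(Y)                             # horizontal component of the shear
E0, diss = np.mean(f**2), 0.0
for _ in range(steps):
    fh = np.fft.fft2(f)
    fx = np.real(np.fft.ifft2(1j*KX*fh))
    fy = np.real(np.fft.ifft2(1j*KY*fh))
    diss += 2*kappa*np.mean(fx**2 + fy**2)*dt   # accumulates E_kappa(t)
    f = f + dt*(-u*fx + kappa*np.real(np.fft.ifft2(lap*fh)))
E = np.mean(f**2)
residual = E0 - diss - E                  # defect in the energy identity
```

The defect `residual` is $O(\mathrm{d}t)$ and vanishes under time-step refinement, while the strict decrease of `E` reflects the monotonicity of the $L^2$ energy.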
The term anomalous dissipation refers to how [\[Anomaly\]](#Anomaly){reference-type="eqref" reference="Anomaly"} implies that solutions to [\[eq:ADE\]](#eq:ADE){reference-type="eqref" reference="eq:ADE"} dissipate energy at a rate that is independent of the diffusivity constant $\kappa>0.$ The presence of anomalous dissipation in the advection-diffusion and Navier-Stokes equations is a central assumption in the phenomenological theories of turbulence, playing a fundamental role in the Obukhov-Corrsin theory for passive scalars [@Obukhov49; @Corrsin51; @Sr19; @SS2000] as well as the K41 hydrodynamical theory [@K41a; @K41b; @Frisch]. The predicted dissipation anomalies for both turbulent fluids and scalars advected by them are well supported by numerics and experiments [@Sr19; @DSrY05; @Kaneda03; @Pearson02]. It is easy to achieve anomalous dissipation mathematically if the initial data $f_0=f_0^\kappa$ becomes rough as $\kappa\rightarrow 0$ (even with $u\equiv 0$) and it is also easy to achieve when the velocity field becomes unbounded pointwise, even if $f_0$ is fixed. Certainly, if $f_0$ is chosen independent of $\kappa>0$ and if $u$ is smooth, [\[Anomaly\]](#Anomaly){reference-type="eqref" reference="Anomaly"} is impossible for finite $t>0.$ This can be seen easily since smoothness of $u$ propagates smoothness of $f_0$ ($L^2$ compactness of the solution being the property of interest). Anomalous dissipation is similarly impossible when $u$ belongs to certain DiPerna-Lions classes [@DL] or, more generally, when the transport equation ([\[eq:ADE\]](#eq:ADE){reference-type="eqref" reference="eq:ADE"} with $\kappa=0$) has unique $L^2$ solutions. An example of a deterministic velocity field that exhibits (genuine) anomalous dissipation for a large class of initial data was constructed in [@DEIJ22] using alternating "sawtooth" shear flows.
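The small-scale production driving such examples can be seen already at the level of the inviscid transport equation. The sketch below (sinusoidal shears of frequency $2^k$ acting for unit times, with data $f_0=\sin x$, are toy choices and not the sawtooth flows of [@DEIJ22]) evaluates the exact transport solution under alternating shears by back-tracing characteristics and records the growth of $\|\nabla f\|_{L^2}$ with the number of shear stages.

```python
import numpy as np

def backtrace(x, y, n, t=1.0):
    # Invert the flow of n alternating shear stages: stage k is the shear
    # u = (sin(2^k y), 0) for odd k and u = (0, sin(2^k x)) for even k,
    # each acting for time t.  Inverse maps are applied in reverse order.
    for k in range(n, 0, -1):
        m = 2**k
        if k % 2 == 1:
            x = x - t*np.sin(m*y)
        else:
            y = y - t*np.sin(m*x)
    return x, y

def scalar(x, y, n):
    # Exact transport solution after n shear stages with data f0 = sin(x).
    xb, _ = backtrace(x, y, n)
    return np.sin(xb)

N = 512
g = np.arange(N) * 2*np.pi / N
X, Y = np.meshgrid(g, g, indexing="ij")
h = 1e-5                                  # step for centered differences
norms = []
for n in range(5):
    fx = (scalar(X + h, Y, n) - scalar(X - h, Y, n)) / (2*h)
    fy = (scalar(X, Y + h, n) - scalar(X, Y - h, n)) / (2*h)
    norms.append(np.sqrt(np.mean(fx**2 + fy**2)))
# norms[n] approximates the L^2 norm of the gradient after n stages; it
# grows rapidly with n, reflecting the transfer of the scalar to small scales.
```

With small but positive diffusivity, it is this gradient growth that feeds the dissipation term $2\kappa\|\nabla f^\kappa\|_{L^2}^2$.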
The main purpose of this paper is to revisit the idea of [@DEIJ22] and show that alternating shears can in fact be used to achieve anomalous dissipation *for all* sufficiently smooth initial data, while at the same time improving upon the regularity of the earlier construction. ## Main results Our first result is an explicit example of a divergence-free velocity field $u:[0,T] \times \mathbb{T}^2 \to \mathbb{R}^2$ that is uniformly smooth in time in $C^\alpha(\mathbb{T}^2)$ for every $\alpha < 1$ and for which the solution of [\[eq:ADE\]](#eq:ADE){reference-type="eqref" reference="eq:ADE"} exhibits anomalous dissipation for every mean-zero and smooth initial data. **Theorem 1**. *Fix $T > 0$. There exists a divergence-free velocity field $u:[0,T] \times \mathbb{T}^2 \to \mathbb{R}^2$ with $$\label{eq:generalizedregularity} u \in C^\infty([0,T]; C^\alpha(\mathbb{T}^2)) \quad \forall \alpha \in (0,1)$$ such that, for every mean-zero and smooth initial data $f_0,$ the solutions $f^\kappa$ exhibit anomalous dissipation on $[0,T]$. Solutions to the corresponding transport equation are non-unique while $u$ satisfies $$\label{eq:velocityregularity}|u(t,z)-u(t,z')|\leq C\omega(|z-z'|)$$ for all $t\in [0,T]$ and $z,z'\in\mathbb{T}^2$, with $\omega(s)=s(1+|\log(s)|^4)$ and $C>0$ a universal constant.* We now make a few remarks on the above results. **Remark 1**. (Modulus of continuity of the velocity field) It is straightforward to see from the proof that the power 4 in the definition of $\omega$ can be replaced with any $p > 3$, but we do not do this for simplicity of presentation. Modifying slightly the proof, we can bring the power on the logarithm down to any $p>2$, at least for suitable initial data. It seems that our proof cannot go all the way down to the Osgood threshold. As anomalous dissipation is known to imply non-uniqueness for the underlying transport equation with $\kappa = 0$ (see e.g. 
[@DEIJ22]), an interesting open question is whether, for any non-Osgood modulus of continuity, one can construct a velocity field enjoying that modulus of continuity uniformly in time and such that the corresponding drift-diffusion equation exhibits anomalous dissipation or such that the corresponding transport equation exhibits non-uniqueness. **Remark 2**. (Regularity of the initial data) [\[rem:constants\]]{#rem:constants label="rem:constants"} It is not required in Theorem [Theorem 1](#thrm:main){reference-type="ref" reference="thrm:main"} that $f_0$ be smooth. All that we require is $f_0 \in H^{1+s} \cap W^{1,\infty}$ for some $s > 2/5$. Moreover, for a fixed such $s$, the amount of energy dissipated depends only on upper bounds for $$\|f_0\|_{W^{1,\infty}}/\|f_0\|_{L^2} \quad \text{and} \quad \frac{\|f_0\|_{L^2}^s \|f_0\|_{\dot{H}^{1+s}}}{\|f_0\|_{H^1}^{1+s}}.$$ For any $\delta > 0$ and with the same proof, the requirement $s > 2/5$ can be weakened to $s>\delta$ by allowing the power 4 in the definition of $\omega$ to be sufficiently large depending on $\delta$. **Remark 3**. (Dimensions $d\geq 2$) Theorem [Theorem 1](#thrm:main){reference-type="ref" reference="thrm:main"} provides, as an immediate corollary, anomalous dissipation in any dimension $d \ge 2$. Indeed, one can lift the velocity field to $\mathbb{T}^d$, switching its orientation in space $d-1$ times as time evolves, to construct a divergence-free velocity field $u \in C^\infty([0,T]; C^\alpha(\mathbb{T}^d))$ that exhibits anomalous dissipation for every mean-zero $f_0 \in C^\infty(\mathbb{T}^d)$. **Remark 4**. (Euler and Navier-Stokes) Let us remark finally that the velocity field of Theorem [Theorem 1](#thrm:main){reference-type="ref" reference="thrm:main"} solves the 2d Euler equation with a force that is uniformly smooth in time with values in $C^\alpha.$ This is simply because the velocity field is just a shear flow for each $t\in [0,T]$, so the force is just $\partial_t u$.
Note that, upon inspecting the various parameters in the proof, it is easy to use this to construct a so-called $2\frac{1}{2}$-dimensional solution to the 3d Navier-Stokes system that gives anomalous dissipation. By analogy with the scaling of velocity increments over the inertial range predicted by the K41 theory and its connection with the Onsager Hölder-1/3 regularity threshold for the conservation of kinetic energy in the Euler equations [@Eyink94; @CET94; @Ons49], the scaling of structure functions over the inertial-convective range within the Obukhov-Corrsin theory of scalar turbulence [@Obukhov49; @Corrsin51] underlies a regularity threshold for anomalous dissipation in the advection-diffusion equation. Specifically, if the advecting velocity field satisfies $u \in L^\infty([0,T];C^\alpha)$ for some $\alpha \in [0,1)$ and the family of solutions $\{f^\kappa\}_{\kappa > 0}$ to [\[eq:ADE\]](#eq:ADE){reference-type="eqref" reference="eq:ADE"} remain uniformly bounded in $L^2([0,T];C^\beta)$, then heuristic scaling arguments suggest that $$\label{eq:regthreshold} \limsup_{\kappa \to 0} \kappa \int_0^T \|\nabla f^\kappa(s)\|_{L^2}^2\mathrm{d}s = 0 \quad \text{unless} \quad \beta \le \frac{1-\alpha}{2}.$$ This statement generalizes to different time integrability exponents (see e.g. the introduction of [@CCS]) and can be proven in a similar fashion to the rigidity side of Onsager's conjecture (see [@DEIJ22]). Since the velocity field of Theorem [Theorem 1](#thrm:main){reference-type="ref" reference="thrm:main"} belongs to $L^\infty([0,T];C^\alpha(\mathbb{T}^2))$ for every $\alpha \in (0,1)$, it follows from [\[eq:regthreshold\]](#eq:regthreshold){reference-type="eqref" reference="eq:regthreshold"} that the associated solutions $f^\kappa$ cannot retain any degree of Hölder regularity (even in a time integrated sense) uniformly as $\kappa \to 0$. 
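For context, the threshold in [\[eq:regthreshold\]](#eq:regthreshold){reference-type="eqref" reference="eq:regthreshold"} comes from an Onsager-type flux estimate; the following is only a heuristic sketch, with constants and error terms glossed over. Writing $\delta_\ell g(z) = g(z+\ell e)-g(z)$ for increments at scale $\ell$, the dissipation is balanced, for $\ell$ in the dissipative range, by the flux of $(f^\kappa)^2$ across scale $\ell$, so that schematically $$2\kappa \int_0^T \|\nabla f^\kappa(s)\|_{L^2}^2\,\mathrm{d}s \;\approx\; \frac{C}{\ell}\int_0^T\!\int_{\mathbb{T}^2} |\delta_\ell u|\,|\delta_\ell f^\kappa|^2 \,\mathrm{d}z\,\mathrm{d}s \;\lesssim\; \|u\|_{L^\infty C^\alpha}\,\|f^\kappa\|_{L^2 C^\beta}^2\, \ell^{\alpha+2\beta-1}.$$ If the right-hand side is bounded uniformly in $\kappa$, then letting $\ell \to 0$ forces the dissipation to vanish unless $\alpha+2\beta-1 \le 0$, i.e., $\beta \le \frac{1-\alpha}{2}$.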
In our second result, we show that we can modify the velocity field from Theorem [Theorem 1](#thrm:main){reference-type="ref" reference="thrm:main"} so that it remains uniformly bounded only in some fixed Hölder class and the scalar regularity gets arbitrarily close to the threshold set by [\[eq:regthreshold\]](#eq:regthreshold){reference-type="eqref" reference="eq:regthreshold"}. **Theorem 2**. *Fix $T > 0$, $\alpha \in (0,1)$, and $\beta < (1-\alpha)/2$. There exists a divergence-free velocity field $u:[0,T] \times \mathbb{T}^2 \to \mathbb{R}^2$ with $u \in C([0,T];C^\alpha(\mathbb{T}^2))$ such that for every mean-zero $f_0 \in C^\infty(\mathbb{T}^2)$ the solutions $f^\kappa$ of [\[eq:ADE\]](#eq:ADE){reference-type="eqref" reference="eq:ADE"} exhibit anomalous dissipation on $[0,T]$ and satisfy $$\label{eq:scalarregularity} \sup_{\kappa \in [0,1] }\|f^\kappa\|_{L^2([0,T];C^\beta(\mathbb{T}^2))} < \infty.$$* ## Previous works and discussion We now provide an overview of known results concerning anomalous dissipation and discuss the present contribution within the context of the previous literature. ### Previous work {#sec:previousresults} There have been a number of recent works that consider anomalous dissipation for the advection-diffusion equation with divergence free drift. The first example of a deterministic vector field that exhibits anomalous dissipation was given in [@DEIJ22]. Specifically, for any $\alpha \in (0,1)$ and $d\ge 2$, the authors construct a velocity field $u \in L^1([0,1];C^\alpha(\mathbb{T}^d)) \cap L^\infty([0,1]\times \mathbb{T}^d)$ that yields anomalous dissipation for a large class of data. The example is based on alternating approximately piece-wise linear shear flows with rapidly increasing frequencies up to a singular time. One can view the result of [@DEIJ22] as being a continuation of previous works on enhanced dissipation [@CKRZ; @CZDE]. 
Building off the idea of [@DEIJ22], Brué and De Lellis [@BDL] exhibited an example of anomalous dissipation in the forced 3d Navier-Stokes equation. Thereafter, Colombo, Crippa, and Sorella [@CCS] revisited the problem of anomalous dissipation and made a number of new contributions. Namely, they showed that vanishing viscosity is *not* a selection criterion for uniqueness in the transport equation. Moreover, they showed that the velocity field can be significantly more regular than the advertised regularity of [@DEIJ22] while maintaining the corresponding sharp upper bounds on the scalar regularity. The example of [@CCS] is based on using examples of finite-time mixers constructed, for example, in [@ACM]. See also [@BCCDS; @JS] for follow-up works. Very recently, Armstrong and Vicol [@AV] introduced a different mechanism for anomalous dissipation that is not based on a "finite-time singularity" in the velocity field but on a continuous cascade to high frequencies in the advection-diffusion equation; this is made rigorous using ideas from quantitative homogenization. A key advance in the construction of [@AV] is that, for their choice of $u$, *all* solutions to [\[eq:ADE\]](#eq:ADE){reference-type="eqref" reference="eq:ADE"} with sufficiently smooth initial data exhibit anomalous dissipation. Additionally, the dissipative anomaly is spread out over time; in particular, the energy of the solution is continuous uniformly in $\kappa$. Finally, in an interesting paper of Huysmans and Titi [@HT], another example of anomalous dissipation is given based on mixing in finite time. A surprising consequence of the analysis in [@HT] is the existence of a solution to the transport equation with energy that jumps down and then up again, while also being a limit of vanishing viscosity.
Anomalous dissipation has also been recently established by Bedrossian, Blumenthal and Punshon-Smith for passive scalars driven by a spatially smooth, white-in-time stochastic forcing and advected by velocity fields solving various stochastic fluid models [@BBPSBatchelor] (see also [@BBPS21; @BBPS22] for the earlier works of the same authors on mixing and enhanced dissipation used importantly in [@BBPSBatchelor]). The velocity realizations are almost surely uniformly bounded in $C^1$ on every finite time interval, so anomalous dissipation in the sense of [\[Anomaly\]](#Anomaly){reference-type="eqref" reference="Anomaly"} is impossible. In this setting, anomalous dissipation refers to a constant, non-vanishing flux of scalar $L^2$ energy from low to high frequencies in statistical equilibrium and the convergence of solutions as $\kappa \to 0$ to a statistically stationary solution of the *forced* transport equation that lives in a regularity class just below some Onsager critical space. ### Discussion Let us now take a moment to reflect upon the place of this work in the context of previous works. First, the construction we use here is based on alternating shear flows, inspired by [@DEIJ22]. While the velocity field is *not* mixing, solutions to the transport equation do lose compactness in $L^2$ (see Figure [1](#fig1){reference-type="ref" reference="fig1"}). Second, as compared to [@CCS], we are able to recover the previous results on the regularity of the velocity field and the passive scalar and this is done *for all* sufficiently smooth data. The velocity field constructed here is also more regular in space and time than the one constructed in [@AV], where the velocity is $C_{t,x}^{1/3-}.$ We do not consider the question of selection or the lack of selection in the vanishing diffusivity limit. 
Also, since in our example the energy dissipates only at a single point in time as $\kappa \to 0$, a drawback of our result is that the energy is discontinuous in the limit (as compared to [@AV], where the energy dissipates continuously). It is possible that suitable modifications of our arguments could give continuous energy dissipation, though it seems to be more difficult to give a construction that yields continuous and strictly decreasing energy in the limit (we are not aware of a construction giving this latter property). ![Numerical simulation of the transport equation [\[eq:AE\]](#eq:AE){reference-type="eqref" reference="eq:AE"} with $u$ a regularized version of the alternating shear flow used in the proof of Theorem [Theorem 1](#thrm:main){reference-type="ref" reference="thrm:main"} and initial condition $f_0(x,y) = \sin(x)$, displayed in the upper left image. Moving from left to right, the scalar is shown after each successive shearing direction is applied.](AD0.png "fig:"){#fig1 width="5.25cm"}\ ## Main ideas of the proof The main idea of [@DEIJ22] was that anomalous dissipation (or sharp enhanced dissipation) in the advection-diffusion equation can be deduced under the condition that solutions to just the advection equation $$\label{eq:AE} \begin{cases} \partial_t f + u \cdot \nabla f = 0, \\ f|_{t=0} = f_0 \end{cases}$$ satisfy a bound of the form $$\label{EnergySpectrum}\frac{\|\mathbb{P}_{>N}f\|_{L^2}}{\|f\|_{L^2}}\geq c$$ with $N=c \|f\|_{H^1}$ for some fixed constant $c>0$ (where $\mathbb{P}_{>N}$ is a Fourier projection).
Estimate [\[EnergySpectrum\]](#EnergySpectrum){reference-type="eqref" reference="EnergySpectrum"} implies that if the $H^1$ norm of $f$ becomes large, then the energy spectrum of $f$ contains some bump localized around frequencies comparable to $\|f\|_{H^1}.$ It is not difficult to show (see [@DEIJ22]) that the condition $$\label{H1Growth}\lim_{t\rightarrow T_*}\int_0^{t} \|f(s)\|_{H^1}^2\,\mathrm{d}s=+\infty$$ is *equivalent* to anomalous dissipation when [\[EnergySpectrum\]](#EnergySpectrum){reference-type="eqref" reference="EnergySpectrum"} is satisfied for a uniform $c>0$ and all $t\in [0,T_*)$. The key is thus to construct a velocity field with the property that all solutions to [\[eq:AE\]](#eq:AE){reference-type="eqref" reference="eq:AE"} satisfy [\[EnergySpectrum\]](#EnergySpectrum){reference-type="eqref" reference="EnergySpectrum"}-[\[H1Growth\]](#H1Growth){reference-type="eqref" reference="H1Growth"}. It is easy to imagine that advecting a scalar by a rougher and rougher velocity field leads to unbounded $H^1$ growth, even fast enough to satisfy [\[H1Growth\]](#H1Growth){reference-type="eqref" reference="H1Growth"}, but it is not so clear how to construct that velocity field in such a way that this holds while also maintaining [\[EnergySpectrum\]](#EnergySpectrum){reference-type="eqref" reference="EnergySpectrum"} for *every* smooth solution to [\[eq:AE\]](#eq:AE){reference-type="eqref" reference="eq:AE"}. We achieve this by using a variant of the following general lemma. **Lemma 1**.
*(Forwards-Backwards Principle) Assume that $\Phi$ is a volume preserving and bi-Lipschitz map of a smooth manifold $\Omega.$ Assume that for every mean-zero $f\in H^1(\Omega)$, we have that $$\label{fwdbckwd}\frac{\|f\circ\Phi\|_{\dot{H}^1}^2+\|f\circ\Phi^{-1}\|_{\dot{H}^1}^2}{2}\geq K\|f\|_{\dot{H}^1}^2,$$ for some $K>1.$ Then, there exists $c=c(f)$ so that $$\|f\circ\Phi^{n}\|_{H^1}\geq K^{|n|} c \|f\|_{H^1},$$ for all $n\in\mathbb{Z}.$* The proof simply follows from the fact that $\|f \circ \Phi^n\|_{\dot{H}^1} \ge \|f\|_{L^2}$ for every $n \in \mathbb{Z}$ by the Poincaré inequality and that if we have $\|f\circ\Phi^{-(n_0-1)}\|_{H^1}\leq \|f\circ \Phi^{-n_0}\|_{H^1}$ for some $n_0 \in \mathbb{Z}$, then the assumption implies that $\|f\circ\Phi^{-n}\|_{H^1}$ is increasing for $n\ge n_0$. Indeed, one can interpret [\[fwdbckwd\]](#fwdbckwd){reference-type="eqref" reference="fwdbckwd"} as a type of (discrete) convexity assumption on the sequence $\{\|f\circ\Phi^{n}\|_{H^1}\}_{n\in\mathbb{Z}}.$ A small technical difficulty in our proof is that we must apply a version of the above lemma to the composition of different mappings (but that all belong to some class allowing for similar argumentation). This idea is partially inspired by our work with J. Mattingly on enhanced dissipation and mixing for (time-periodic) alternating shear flows [@ELM] and the previous work [@CEIM]. The velocity fields we construct here are all alternating shear flows consisting of a succession of sawtooth shear flows of rapidly increasing frequency and rapidly decaying amplitude defined on a time interval $[0,T]$. There are thus three parameters that define our flows: the amplitudes $\alpha_j$, frequencies $N_j$, and the runtimes $t_j$ of the successive shear flows.
The full Lagrangian flow-map associated to the velocity fields can thus be written as a composition of piece-wise linear maps (each of which depends on the triple $(\alpha_j, N_j, t_j)$): $$\Phi(t)=\mathcal{U}_1\circ\mathcal{U}_2\circ\cdots\circ\mathcal{U}_{N-1}\circ\mathcal{U}_{N}(t),$$ for any $t\in [0,T).$ For piece-wise linear maps, it turns out that the assumption [\[fwdbckwd\]](#fwdbckwd){reference-type="eqref" reference="fwdbckwd"} (and its multiple map variant, given in Lemma [Lemma 4](#lem:fwdbwd){reference-type="ref" reference="lem:fwdbwd"}) can be checked simply by computing the singular values of a $4\times 2$ constant matrix. From there, we argue relatively softly that the expected growth of the $H^1$ norm of solutions occurs for all smooth initial data *at a universal rate*. This universality as well as the growth condition given in [\[H1Growth\]](#H1Growth){reference-type="eqref" reference="H1Growth"} imposes one condition on the triple $(\alpha_j, N_j, t_j)$, which essentially requires $N_j$ to grow sufficiently rapidly. To show that [\[EnergySpectrum\]](#EnergySpectrum){reference-type="eqref" reference="EnergySpectrum"} holds, we observe as in [@DEIJ22] that this is implied by a "balanced growth" condition. Namely, if we can prove that solutions to [\[eq:AE\]](#eq:AE){reference-type="eqref" reference="eq:AE"} satisfy a reverse interpolation estimate: $$\|f\|_{H^\sigma}\leq C\frac{\|f\|_{H^1}^{\sigma}}{\|f\|_{L^2}^{\sigma-1}}$$ for some fixed $\sigma>1$ and $C>0$, then [\[EnergySpectrum\]](#EnergySpectrum){reference-type="eqref" reference="EnergySpectrum"} holds automatically. Establishing this is relatively straightforward and this imposes another condition on $(\alpha_j, N_j,t_j)$. Finally, ensuring that the velocity field (and/or the scalar) satisfies the correct regularity bounds imposes a final condition on the parameters.
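To illustrate the singular-value computation just mentioned, the following numerical sketch (our own illustration, not part of the proof) builds the $4\times 2$ matrix $Q_{j,j}$ that appears in the proof of Lemma [Lemma 4](#lem:fwdbwd){reference-type="ref" reference="lem:fwdbwd"} below, with the sawtooth slopes $a_1,\ldots,a_4 \in \{\pm 1\}$ and a sample shear strength $K$, and checks that its smallest squared singular value is $K^4(1+O(1/K))$ uniformly over all sign patterns:

```python
import itertools

import numpy as np

def Q_matrix(a1, a2, a3, a4, K):
    """The 4x2 matrix Q_{j,j} from the proof of Lemma 4 (case k = l = j)."""
    A = np.array([[1.0, K * a1],
                  [K * a2, 1.0 + K**2 * a1 * a2]])
    B = np.array([[1.0 + K**2 * a3 * a4, -K * a4],
                  [-K * a3, 1.0]])
    return np.vstack([A, B])

K = 100.0  # sample value; the proof takes K_j = alpha_j N_j t_j large
for signs in itertools.product([-1, 1], repeat=4):
    s_min = np.linalg.svd(Q_matrix(*signs, K), compute_uv=False).min()
    # smallest eigenvalue of Q^T Q deviates from K^4 by at most O(K^3)
    assert abs(s_min**2 - K**4) <= 5 * K**3
```

Uniformity over the sign patterns is what makes the lower bound of Lemma [Lemma 4](#lem:fwdbwd){reference-type="ref" reference="lem:fwdbwd"} hold on the full measure set where the derivatives of $S$ are defined.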
## Sobolev space and Fourier analysis conventions {#sec:Fourier} Some of the exact constants are important in the proofs (e.g., the fact that the constant prefactor in the first term on the right-hand side of [\[eq:upper2\]](#eq:upper2){reference-type="eqref" reference="eq:upper2"} is exactly one), and so before proceeding we define precisely the Sobolev norms that we are using. We identify $\mathbb{T}^2$ with $[-\pi,\pi)^2$. For $f \in L^2(\mathbb{T}^2)$, we define its Fourier series $\hat{f}:\mathbb{Z}^2 \to \mathbb{C}$ by $$\hat{f}(k,\ell) = \frac{1}{2\pi} \int_{\mathbb{T}^2} e^{-i(kx + \ell y)}f(x,y)\mathrm{d} x\mathrm{d}y.$$ Then, $f$ is recovered by the Fourier inversion formula $$f(x,y) = \frac{1}{2\pi} \sum_{k,\ell \in \mathbb{Z}} e^{i(kx + \ell y)} \hat{f}(k,\ell)$$ and with our normalization conventions Plancherel's theorem reads $$\|f\|_{L^2}^2 = \sum_{(k, \ell) \in \mathbb{Z}^2}|\hat{f}(k,\ell)|^2.$$ For $\sigma \ge 0$, the Sobolev space $H^\sigma$ is defined by $$H^\sigma = \left\{f \in L^2: \|f\|_{H^\sigma} < \infty\right\}, \quad \|f\|_{H^\sigma}: = \left(\sum_{(k,\ell)\in \mathbb{Z}^2} (1+|k|^2 + |\ell|^2)^\sigma|\hat{f}(k,\ell)|^2\right)^{1/2}$$ and for $f \in H^\sigma$, the homogeneous Sobolev seminorm is defined by $$\|f\|_{\dot{H}^\sigma}: = \left(\sum_{(k,\ell)\in \mathbb{Z}^2\setminus (0,0)} (|k|^2 + |\ell|^2)^\sigma|\hat{f}(k,\ell)|^2\right)^{1/2}.$$ For $s \ge 0$ we write $D^s = (-\Delta)^{s/2}$. That is, $D^s$ is the Fourier multiplier with symbol $(|k|^2 + |\ell|^2)^{s/2}$. We also define $D_x^s = (-\partial_{xx})^{s/2}$ and $D_y^s=(-\partial_{yy})^{s/2}$ to be the Fourier multipliers with symbols $|k|^s$ and $|\ell|^s$, respectively. In Appendix [4](#appendix){reference-type="ref" reference="appendix"}, we recall some Sobolev interpolation and commutator inequalities that will be required in the proof.
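As a quick numerical sanity check of these conventions (our own illustration; the test function and grid size are arbitrary), one can discretize the integral defining $\hat{f}$ and verify Plancherel's theorem and the identity $\|f\|_{\dot{H}^1}^2 = \|\nabla f\|_{L^2}^2$ on a trigonometric polynomial; equispaced Riemann sums are exact for band-limited integrands, so both identities hold to machine precision:

```python
import numpy as np

# Equispaced grid on T^2 = [-pi, pi)^2; Riemann sums are exact here for
# trigonometric polynomials of degree < n.
n = 64
x = -np.pi + 2 * np.pi * np.arange(n) / n
X, Y = np.meshgrid(x, x, indexing="ij")
dA = (2 * np.pi / n) ** 2

f = np.sin(X) * np.cos(2 * Y)  # sample mean-zero test function

def fhat(k, l):
    """Fourier coefficient with the 1/(2 pi) normalization used in the text."""
    return np.sum(f * np.exp(-1j * (k * X + l * Y))) * dA / (2 * np.pi)

modes = [(k, l) for k in (-1, 1) for l in (-2, 2)]  # support of f
coeffs = {m: fhat(*m) for m in modes}

L2_sq = np.sum(np.abs(f) ** 2) * dA
assert abs(L2_sq - sum(abs(c) ** 2 for c in coeffs.values())) < 1e-10

H1_sq = sum((k**2 + l**2) * abs(coeffs[(k, l)]) ** 2 for (k, l) in modes)
grad_sq = np.sum(np.cos(X)**2 * np.cos(2*Y)**2
                 + 4 * np.sin(X)**2 * np.sin(2*Y)**2) * dA
assert abs(H1_sq - grad_sq) < 1e-10
```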
# Proof of main theorems The proof of Theorem [Theorem 1](#thrm:main){reference-type="ref" reference="thrm:main"} is based on constructing a velocity field that satisfies the hypotheses of the abstract criterion for anomalous dissipation given in [@DEIJ22 Proposition 1.3] for every smooth initial data. We begin in Section [2.1](#sec:criteria){reference-type="ref" reference="sec:criteria"} by recalling a version of this criterion, Proposition [Proposition 2](#prop:ADcriteria){reference-type="ref" reference="prop:ADcriteria"} below, that is suitable for our setting. Then, in Section [2.2](#sec:udefinition){reference-type="ref" reference="sec:udefinition"} we define the velocity field used to prove Theorem [Theorem 1](#thrm:main){reference-type="ref" reference="thrm:main"}. The bulk of the paper consists of Sections  [2.3](#sec:lowerbounds){reference-type="ref" reference="sec:lowerbounds"} and [2.4](#sec:upperbounds){reference-type="ref" reference="sec:upperbounds"}. Here, we prove the upper and lower bounds on the growth of Sobolev norms for the solution of [\[eq:AE\]](#eq:AE){reference-type="eqref" reference="eq:AE"} needed to apply Proposition [Proposition 2](#prop:ADcriteria){reference-type="ref" reference="prop:ADcriteria"}. Finally, in Section [2.5](#sec:finish){reference-type="ref" reference="sec:finish"} we conclude the proof of Theorem [Theorem 1](#thrm:main){reference-type="ref" reference="thrm:main"} and then in Section [2.6](#sec:thrm2){reference-type="ref" reference="sec:thrm2"} make the appropriate modifications to prove Theorem [Theorem 2](#thrm:2){reference-type="ref" reference="thrm:2"}. ## Criteria for anomalous dissipation {#sec:criteria} We begin with a criterion for anomalous dissipation which is a modified version of [@DEIJ22 Corollary 1.5] with $H^2$ replaced by $H^\sigma$ for some $\sigma \in (1,2]$. 
The proof is exactly the same as in [@DEIJ22] after noting that the balanced growth condition of [@DEIJ22 Lemma 1.4] holds just as well with the reverse interpolation $$\|f\|_{L^2}\|f\|_{\dot{H}^2} \le C\|f\|_{\dot{H}^1}^2$$ for some $C \ge 1$ replaced by $$\|f\|_{L^2}^{\sigma-1}\|f\|_{\dot{H}^\sigma} \le C\|f\|_{\dot{H}^1}^\sigma$$ for any $\sigma > 1$. We use a criterion that allows for fractional Sobolev regularity because it is most convenient in the proof of norm growth to use a velocity field which is only $H^{3/2-}$. **Proposition 2**. *Fix $T > 0$, $\sigma \in (1,2]$, and let $u \in L_{\text{loc}}^\infty([0,T);W^{1,\infty}(\mathbb{T}^2))$ be a divergence-free velocity field. Let $f_0 \in H^\sigma$ be a mean-zero initial data and suppose that there exists $C > 1$ such that the solution to the transport equation [\[eq:AE\]](#eq:AE){reference-type="eqref" reference="eq:AE"} satisfies the following two hypotheses:* 1. *$\int_0^T \|\nabla f(t)\|_{L^2}^2 \mathrm{d}t= \infty$,* 2. *$\|f(t)\|_{L^2}^{\sigma - 1}\|f(t)\|_{\dot{H}^\sigma} \le C\|f(t)\|_{\dot{H}^1}^\sigma$ for every $t \in [0,T)$.* *Then, for every $\kappa \in (0,1)$ the solution of [\[eq:ADE\]](#eq:ADE){reference-type="eqref" reference="eq:ADE"} with the same initial data $f_0$ satisfies $$\kappa\int_0^T \|\nabla f^\kappa(t)\|_{L^2}^2 \mathrm{d}t\ge \chi \|f_0\|_{L^2}^2, \quad \text{where} \quad \chi = \frac{1}{16}\left(\frac{1}{1+C^{\frac{1}{\sigma-1}}}\right)^{\frac{2\sigma}{\sigma-1}}.$$* ## Construction and regularity of the velocity field {#sec:udefinition} For a particular $T_* > 0$, we now define the divergence-free velocity field $u:[0,T_*] \times \mathbb{T}^2 \to \mathbb{R}^2$ that we will use to prove Theorem [Theorem 1](#thrm:main){reference-type="ref" reference="thrm:main"}. The fact that this is sufficient to prove the result for general $T > 0$ follows from a simple scaling argument by defining $\tilde{u}(t) = (T_*/T) u(T_*t/T)$.
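The scaling argument can be spelled out (a routine check, recorded here for convenience; $\tilde{f}$ and $\tilde{\kappa}$ are notation introduced for this computation). If $f^{\kappa}$ solves [\[eq:ADE\]](#eq:ADE){reference-type="eqref" reference="eq:ADE"} with velocity $u$ on $[0,T_*]$ and we set $\tilde{f}(t,x) = f^{\kappa}(T_* t/T, x)$, then $$\partial_t \tilde{f} + \tilde{u}\cdot\nabla\tilde{f} = \frac{T_*}{T}\left(\partial_t f^{\kappa} + u\cdot\nabla f^{\kappa}\right)\Big|_{T_* t /T} = \tilde{\kappa}\,\Delta\tilde{f}, \qquad \tilde{\kappa} := \frac{T_*}{T}\,\kappa,$$ and a change of variables gives $\tilde{\kappa}\int_0^{T}\|\nabla\tilde{f}(t)\|_{L^2}^2\,\mathrm{d}t = \kappa\int_0^{T_*}\|\nabla f^{\kappa}(s)\|_{L^2}^2\,\mathrm{d}s$. Since $\kappa \mapsto \tilde{\kappa}$ is a bijection of $(0,\infty)$, anomalous dissipation for $\tilde{u}$ on $[0,T]$ follows from anomalous dissipation for $u$ on $[0,T_*]$.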
### Definition of $u:[0,T_*]\times \mathbb{T}^2 \to \mathbb{R}^2$ Recall that we identify $\mathbb{T}^2$ with $[-\pi,\pi)^2$. Define $S:\mathbb{T}\to \mathbb{R}$ by $$S(x) = |x| .$$ For parameters $\alpha, N \in \mathbb{R}$ let $H_{\alpha,N}$ and $V_{\alpha,N}$ denote the shear flows $$H_{\alpha,N}(x,y) = \begin{pmatrix} \alpha S(Ny) \\ 0 \end{pmatrix}, \quad V_{\alpha,N}(x,y) = \begin{pmatrix} 0 \\ \alpha S(Nx) \end{pmatrix}.$$ Our chosen velocity field $u$ will alternate in time along a sequence of decreasing time steps between $H_{\alpha,N}$ and $V_{\alpha,N}$ for appropriately chosen parameters $\alpha$ and $N$ that vary at each step. Below we will specify a suitable sequence of frequencies $\{N_j\}_{j=1}^\infty$, amplitudes $\{\alpha_j\}_{j=1}^\infty$, and time steps $\{t_j\}_{j=1}^\infty$ with $\sum_{j=1}^\infty t_j < \infty$. For the time steps $\{t_j\}_{j=1}^\infty$ to be defined, let $T_0 = 0$ and $T_j = 2\sum_{n=1}^j t_n$ for $j \in \mathbb{N}$. Then, the terminal time $T_* > 0$ is defined by $$T_* := 2\sum_{j=1}^\infty t_j.$$ Let $\psi \in C_c^\infty((0,1))$ be a smooth function satisfying $\psi \ge 0$ and $\int_0^1 \psi(t) \mathrm{d}t= 1$. Then, we define $u:[0,T_*]\times \mathbb{T}^2 \to \mathbb{R}^2$ for $t \in [T_{j-1}, T_j)$ by $$u(t) = \begin{cases} \psi\left(\frac{t-T_{j-1}}{t_j}\right) H_{\alpha_j, N_j} & t\in [T_{j-1}, T_{j-1}+t_j), \\ \psi\left(\frac{t-T_{j-1}-t_j}{t_j}\right) V_{\alpha_j, N_j} & t \in [T_{j-1}+t_j, T_j). \end{cases}$$ Note that since $\int_0^1 \psi(t)\mathrm{d}t= 1$, the flow map associated with $u$ at the discrete times $T_j$ and $T_j + t_j$ is the same as it would be if $\psi$ were removed from the definition. The time dependence involving $\psi$ is included so that $u$ can be regular in time.
Ignoring $\psi$, the schematic for how the velocity field alternates in time is $$\underbrace{H_{\alpha_1, N_1}, V_{\alpha_1, N_1}}_{t \in [0,T_1)}, \underbrace{H_{\alpha_2, N_2}, V_{\alpha_2, N_2}}_{t\in [T_1,T_2)}, \underbrace{H_{\alpha_3, N_3}, V_{\alpha_3, N_3}}_{t\in [T_2, T_3)}, \ldots,$$ where each $H_{\alpha_j,N_j}$ and $V_{\alpha_j,N_j}$ runs for time $t_j$. ### Choice of parameters and regularity {#sec:parameters} With the construction above it is clear that $u \in L^\infty_{\text{loc}}([0,T_*); W^{1,\infty}(\mathbb{T}^2))$, as is required to apply Proposition [Proposition 2](#prop:ADcriteria){reference-type="ref" reference="prop:ADcriteria"}. Additionally, one can easily check that a sufficient condition to have $u \in C^\infty([0,T];C^\alpha(\mathbb{T}^2))$ as well as the regularity claimed in [\[eq:velocityregularity\]](#eq:velocityregularity){reference-type="eqref" reference="eq:velocityregularity"} is $$\label{eq:velconditions} \sup_{j\in \mathbb{N}}\frac{\alpha_j N_j^\alpha}{t_j^m} + \sup_{j \in \mathbb{N}}\frac{\alpha_j N_j}{1+|\log(N_j)|^4} < \infty,$$ for all $m\in\mathbb{N}$ and $\alpha<1.$ The bound on the first term gives the time regularity, while the bound on the second term implies that $u$ possesses the modulus of continuity $\omega(s) = s(1+(\log(s))^4)$ uniformly in time. For $M \ge 2$ to be chosen sufficiently large, we choose the parameters $N_j = 2^j$, $$\alpha_j = 2^{-j}(1+|\log(N_j)|^4) = 2^{-j}\left(1 + j^4 |\log(2)|^4\right),$$ and $t_j = 2\lceil M j^{5/2}\rceil /(\alpha_j N_j)$, where $\lceil x \rceil$ denotes the smallest integer greater than or equal to $x$. Then, $T_* = 2\sum_{j}t_j < \infty$ and it is easy to check that [\[eq:velconditions\]](#eq:velconditions){reference-type="eqref" reference="eq:velconditions"} holds. For convenience of notation, we define $K_j : = \alpha_j N_j t_j = 2\lceil M j^{5/2}\rceil$.
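The arithmetic behind these choices can be checked directly (our own illustration; `M = 2` is a sample value and the truncation points are arbitrary): the identity $K_j = \alpha_j N_j t_j = 2\lceil M j^{5/2}\rceil$ holds exactly, and since $t_j = O(j^{-3/2})$, the partial sums defining $T_*$ stay bounded:

```python
import math

M = 2  # sample value; the proof takes M sufficiently large
L4 = math.log(2) ** 4

def params(j):
    """Return (N_j, alpha_j, t_j) as chosen in Section 2.2.2."""
    N = 2 ** j
    alpha = 2 ** (-j) * (1 + j ** 4 * L4)
    t = 2 * math.ceil(M * j ** 2.5) / (alpha * N)
    return N, alpha, t

T_partial = 0.0
for j in range(1, 401):
    N, alpha, t = params(j)
    # K_j = alpha_j N_j t_j = 2*ceil(M j^{5/2}) up to floating point rounding
    assert abs(alpha * N * t - 2 * math.ceil(M * j ** 2.5)) <= 1e-9 * alpha * N * t
    T_partial += 2 * t

assert T_partial < 100.0               # partial sums for T_* remain bounded
assert params(400)[2] < params(2)[2]   # time steps eventually decrease
```

Note also that with these choices $\alpha_j N_j = 1 + |\log(N_j)|^4$ exactly, so the second supremum in [\[eq:velconditions\]](#eq:velconditions){reference-type="eqref" reference="eq:velconditions"} equals one.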
## Norm growth {#sec:lowerbounds} From here until Section [2.6](#sec:thrm2){reference-type="ref" reference="sec:thrm2"}, $N_j$, $\alpha_j$, and $t_j$ denote the parameter choices of Section [2.2.2](#sec:parameters){reference-type="ref" reference="sec:parameters"} for some $M \ge 2$ to be chosen sufficiently large. For $j \in \mathbb{N}$ define the Lebesgue measure preserving homeomorphisms $$\phi_j(x,y) = \begin{pmatrix} x + \alpha_j t_j S(N_j y) \\ y \end{pmatrix}, \quad \psi_j(x,y) = \begin{pmatrix} x \\ y + t_j \alpha_j S(N_j x) \end{pmatrix}, \quad \Phi_j = \psi_j \circ \phi_j.$$ For a solution of [\[eq:AE\]](#eq:AE){reference-type="eqref" reference="eq:AE"} we will write $f_j = f(T_j)$. Note that the definitions above are such that $$f_j = f_0 \circ \Phi_1^{-1}\circ \Phi_2^{-1} \circ \ldots \circ \Phi_j^{-1}.$$ A crucial step in verifying both hypotheses of Proposition [Proposition 2](#prop:ADcriteria){reference-type="ref" reference="prop:ADcriteria"} is obtaining essentially sharp lower bounds on the exponential growth of the $H^1$ norm of solutions to [\[eq:AE\]](#eq:AE){reference-type="eqref" reference="eq:AE"}. Since, defining $\bar{f}_j = f_{j-1} \circ \phi_j^{-1}$, we have $$\begin{aligned} \partial_y\bar{f}_j &= (\partial_y f_{j-1}) \circ \phi_{j}^{-1} - K_j S'(N_j y)\, (\partial_x f_{j-1})\circ \phi_{j}^{-1}, \\ \partial_x f_j &= (\partial_x \bar{f}_j)\circ \psi_j^{-1} - K_j S'(N_j x)\, (\partial_y \bar{f}_j)\circ \psi_j^{-1},\end{aligned}$$ and $|S'| = 1$ almost everywhere, one expects that over the time interval $[T_{j-1},T_j]$ the $H^1$ norm of a solution is amplified by the factor $K_j^2$ (if possible cancellations can be ignored). Lemma [Lemma 3](#lem:H1growth){reference-type="ref" reference="lem:H1growth"} below shows that this is indeed the case.
There is a small complication with the preceding idea due to the fact that we need to consider all $t\in [0,T_*)$ and not just $T_j.$ To deal with the growth between the discrete times $T_j$, for $j \in \mathbb{N}$ we define the increasing function $\zeta_j:[0,t_j]\to [0,1]$ with $\zeta_j(0) = 0$ and $\zeta_j(t_j) = 1$ by $$\label{eq:zetaj} \zeta_j(t) = \int_0^{t/t_j}\psi(\tau)\mathrm{d}\tau,$$ where $\psi$ is as defined in Section [2.2](#sec:udefinition){reference-type="ref" reference="sec:udefinition"}. Then, let $$\tilde{h}_j(t) = \begin{cases} K_j \zeta_j(t-T_{j-1}) & t \in [T_{j-1}, T_{j-1}+t_j], \\ K_j^2 \zeta_j(t-T_{j-1}-t_j) & t\in [T_{j-1}+t_j, T_{j}] \end{cases}$$ and define $h_j:[T_{j-1},T_j]\to [0,\infty)$ by $$\label{eq:hj} h_j(t) = \begin{cases} \max(\tilde{h}_j(t),1) & t \in [T_{j-1}, T_{j-1}+t_j], \\ \max(\tilde{h}_j(t),K_j) & t\in [T_{j-1}+t_j, T_{j}]. \end{cases}$$ Our main lower bound on the $H^1$ growth of solutions of [\[eq:AE\]](#eq:AE){reference-type="eqref" reference="eq:AE"} is then given as follows. **Lemma 3** ($H^1$ growth). *Let $M$ be as defined in Section [2.2.2](#sec:parameters){reference-type="ref" reference="sec:parameters"} and chosen sufficiently large. For every mean-zero $f_0 \in H^1$ there exists a constant $c$ depending only on an upper bound for $\|f_0\|_{\dot{H}^1}/\|f_0\|_{L^2}$ such that for every $j \in \mathbb{N}$ and $t \in [T_{j-1},T_j]$ the solution of [\[eq:AE\]](#eq:AE){reference-type="eqref" reference="eq:AE"} satisfies $$\label{eq:H1growthlemma} \|f(t)\|_{\dot{H}^1} \ge c h_j(t) \|f_0\|_{\dot{H}^1}\prod_{n=1}^{j-1}K_n^2,$$ where $h_j$ is as defined above in [\[eq:hj\]](#eq:hj){reference-type="eqref" reference="eq:hj"}. 
In particular, for any mean-zero and nontrivial initial data $f_0$ we have $$\label{eq:blowup} \int_0^{T_*}\|\nabla f(t)\|_{L^2}^2 \mathrm{d}t\ge c\|f_0\|_{\dot{H}^1}^2 \sum_{j=1}^\infty t_j \prod_{n=1}^{j-1}K_n^4 = \infty,$$ where the divergence of the sum follows easily from the definitions of $\alpha_j$, $N_j$, and $t_j$.* ### Proof of discrete-time $H^1$ growth We will first obtain a lower bound on the $\|f_j\|_{\dot{H}^1}$ and then upgrade to continuous time. The key lemma needed to prove the growth at discrete times, which one should view as a generalization of [\[fwdbckwd\]](#fwdbckwd){reference-type="eqref" reference="fwdbckwd"}, is the following. **Lemma 4**. *There exist constants $c,C > 0$ so that if $M$ is sufficiently large then for every mean-zero $g \in H^1$ and $j \in \mathbb{N}$ we have the estimates $$\begin{aligned} \|g \circ \Phi_{j+1}^{-1}\|_{\dot{H}^1}^2 + \|g \circ \Phi_{j}\|_{\dot{H}^1}^2 &\ge cK_j^4\|g\|_{\dot{H}^1}^2 \label{eq:FBdiff} \\ \|g \circ \Phi_{j+2}^{-1}\|_{\dot{H}^1}^2 + \|g \circ \Phi_{j+2}\|_{\dot{H}^1}^2 &\ge (K_{j+2}^4 - CK_{j+2}^3)\|g\|_{\dot{H}^1}^2. \label{eq:FBsame}\end{aligned}$$* *Proof.* Both of the estimates require us to bound from below $$\|\nabla(g \circ \Phi_{k})\|_{L^2}^2 + \|\nabla(g\circ \Phi_{\ell}^{-1})\|_{L^2}^2$$ for some choices for $k,\ell \in \mathbb{N}$. 
By the chain rule and the fact that $\Phi_{k}$ is area preserving we have $$\|\nabla(g \circ \Phi_{k})\|_{L^2}^2 = \int_{\mathbb{T}^2} |(\nabla\Phi_{k})^T (\nabla g) \circ \Phi_{k}|^2 = \int_{\mathbb{T}^2} |A_{k} \nabla g|^2,$$ where $A_{k}: \mathbb{T}^2 \to \mathbb{R}^{2\times 2}$ is the matrix valued function given by $$A_{k} = (\nabla\Phi_{k})^T \circ \Phi^{-1}_{k}.$$ A direct computation shows that $$A_{k}(x,y) = \begin{pmatrix} 1 & K_k S'(N_k x) \\ K_k S'(N_k(y-\alpha_k t_k S(N_k x))) & 1 + K^2_k S'(N_k x)S'(N_k(y-\alpha_k t_k S(N_k x))) \end{pmatrix}.$$ A similar calculation yields $$\|\nabla(g \circ \Phi_\ell^{-1})\|_{L^2}^2 = \int_{\mathbb{T}^2}|B_{\ell} \nabla g|^2,$$ where $$B_{\ell}(x,y) = \begin{pmatrix} 1 + K_\ell^2 S'(N_\ell y) S'(N_\ell(x+\alpha_\ell t_\ell S(N_\ell y))) & -K_\ell S'(N_\ell(x+\alpha_\ell t_\ell S(N_\ell y))) \\ -K_\ell S'(N_\ell y) & 1 \end{pmatrix}.$$ Define $Q_{k,\ell}:\mathbb{T}^2 \to \mathbb{R}^{4\times 2}$ by $$Q_{k,\ell}(x,y) = \begin{pmatrix} A_{k}(x,y) \\ B_{\ell}(x,y) \end{pmatrix}.$$ Then, $$\label{eq:H1sum} \|g\circ \Phi_{k}\|_{\dot{H}^1}^2 + \|g \circ \Phi_{\ell}^{-1}\|_{\dot{H}^1}^2 = \int_{\mathbb{T}^2}|Q_{k,\ell}\nabla g|^2 = \int_{\mathbb{T}^2}(Q_{k,\ell}^T Q_{k,\ell} \nabla g, \nabla g)$$ and to prove [\[eq:FBdiff\]](#eq:FBdiff){reference-type="eqref" reference="eq:FBdiff"} and [\[eq:FBsame\]](#eq:FBsame){reference-type="eqref" reference="eq:FBsame"} it suffices to suitably bound from below $$Q_{k,\ell}(x,y)^T Q_{k,\ell}(x,y) v \cdot v$$ for general $v \in \mathbb{R}^2$ in the cases $(k,\ell) = (j,j+1)$ and $(k,\ell) = (j,j)$ uniformly on the full measure set where the derivatives in $A_{k}$ and $B_{\ell}$ are all defined. For any $(x,y) \in \mathbb{T}^2$ where all of the derivatives are defined, let $$\begin{aligned} a_1 &= S'(N_k x), \\ a_2 &= S'(N_k(y-\alpha_k t_k S(N_kx))), \\ a_3 &= S'(N_\ell y), \\ a_4 & = S'(N_\ell(x+\alpha_\ell t_\ell S(N_\ell y))).
\\\end{aligned}$$ Then, $a_j \in \{1,-1\}$ and we have $$Q_{k,\ell} = \begin{pmatrix} 1 & K_k a_1 \\ K_k a_2 & 1 + K^2_k a_1 a_2 \\ 1 + K_\ell^2 a_3 a_4 & -K_\ell a_4 \\ -K_\ell a_3 & 1 \end{pmatrix}.$$ We first let $(k,\ell) = (j,j)$ for some $j \in \mathbb{N}$ and prove [\[eq:FBsame\]](#eq:FBsame){reference-type="eqref" reference="eq:FBsame"}. In this case it is straightforward to check that there exists a matrix $P_1 \in \mathbb{R}^{2\times 2}$ with each entry bounded by a constant $C_1$ that does not depend on $j$ such that $$Q_{j,j}^T Q_{j,j} = K_j^4\left(\mathbf{1} + \frac{P_1}{K_j}\right).$$ This implies that $$Q_{j,j}^T Q_{j,j} v \cdot v \ge (K_j^4 - 2C_1K_j^3)|v|^2$$ for every $v \in \mathbb{R}^2$, which together with [\[eq:H1sum\]](#eq:H1sum){reference-type="eqref" reference="eq:H1sum"} implies [\[eq:FBsame\]](#eq:FBsame){reference-type="eqref" reference="eq:FBsame"}. For [\[eq:FBdiff\]](#eq:FBdiff){reference-type="eqref" reference="eq:FBdiff"}, we consider the case $(k,\ell) = (j,j+1)$ and compute that $$Q_{j,j+1}^T Q_{j,j+1} = \begin{pmatrix} K_{j+1}^4 & 0 \\ 0 & K_j^4 \end{pmatrix} + (K_j^3 + K_{j+1}^3) P_2$$ for a matrix $P_2 \in \mathbb{R}^{2\times 2}$ with again entries uniformly bounded by a constant $C_2$ that does not depend on $j$. Since $1 \le K_{j+1}/K_j \le 17$ for any $j$, it follows then that for any $v \in \mathbb{R}^2$ we have $$\label{eq:jjplus1} Q_{j,j+1}^T Q_{j,j+1} v \cdot v \ge (K_j^4 - 2(1+17^3)C_2K_j^3)|v|^2 \ge \frac{1}{2}K_j^4 |v|^2,$$ where the second inequality follows from $K_j \ge M$ by choosing $M$ sufficiently large. Combining [\[eq:jjplus1\]](#eq:jjplus1){reference-type="eqref" reference="eq:jjplus1"} with [\[eq:H1sum\]](#eq:H1sum){reference-type="eqref" reference="eq:H1sum"} proves [\[eq:FBdiff\]](#eq:FBdiff){reference-type="eqref" reference="eq:FBdiff"}.
◻ We can now prove the sharp $H^1$ growth at the discrete times $T_j$ by using an argument based on the idea of Lemma [Lemma 1](#fwdbckwdp){reference-type="ref" reference="fwdbckwdp"} discussed in the introduction. **Lemma 5**. *Let $M$ be as defined in Section [2.2.2](#sec:parameters){reference-type="ref" reference="sec:parameters"} and chosen sufficiently large. For every mean-zero $f_0 \in H^1$ there exists a constant $c$ depending only on an upper bound for $\|f_0\|_{\dot{H}^1}/\|f_0\|_{L^2}$ such that for every $j \in \mathbb{N}$ the solution of [\[eq:AE\]](#eq:AE){reference-type="eqref" reference="eq:AE"} satisfies $$\|f_j\|_{\dot{H}^1} \ge c\|f_0\|_{\dot{H}^1}\prod_{n=1}^{j}K_n^2.$$* *Proof.* By linearity and the fact that [\[eq:AE\]](#eq:AE){reference-type="eqref" reference="eq:AE"} conserves the $L^2$ norm we may assume without loss of generality that $\|f_0\|_{L^2} = 1$. Throughout this proof, $c$ and $C$ denote the constants from Lemma [Lemma 4](#lem:fwdbwd){reference-type="ref" reference="lem:fwdbwd"}. **Claim 1**: There exists $J \in \mathbb{N}$ depending only on $\|f_0\|_{\dot{H}^1}$ such that for some $j_0 \in \mathbb{N}\cup \{0\}$ with $j_0 \le J$ there holds $$\label{eq:growth1} \|f_{j_0+1}\|_{\dot{H}^1} \ge \|f_{j_0}\|_{\dot{H}^1}.$$ In fact, one can take $J = \lceil \log(\|f_0\|_{\dot{H}^1})\rceil$. *Proof of Claim 1.* Let $j \in \mathbb{N}$ be such that $\|f_{j+1}\|_{\dot{H}^1} \le \|f_j\|_{\dot{H}^1}$.
By [\[eq:FBdiff\]](#eq:FBdiff){reference-type="eqref" reference="eq:FBdiff"} applied with $g = f_j$ we have $$\label{eq:growth2} \|f_{j+1}\|_{\dot{H}^1}^2 + \|f_{j-1}\|_{\dot{H}^1}^2 \ge c K_j^4 \|f_j\|_{\dot{H}^1}^2.$$ Since $\|f_{j+1}\|_{\dot{H}^1} \le \|f_j\|_{\dot{H}^1}$, assuming $M$ is large enough so that $cK_j^4/2 \ge 1$ for all $j$, it follows that $$\|f_j\|_{\dot{H}^1}^2 \le \frac{2}{c K_j^4}\|f_{j-1}\|_{\dot{H}^1}^2.$$ Therefore, if [\[eq:growth1\]](#eq:growth1){reference-type="eqref" reference="eq:growth1"} fails for all $0 \le j_0 \le J$ and $M$ is large enough so that $2/(cK_j^4) \le e^{-2}$ for every $j$, we have $$\|f_J\|_{\dot{H}^1}^2 \le \|f_0\|_{\dot{H}^1}^2 \prod_{j=1}^J \frac{2}{cK_j^4} \le e^{-2J}\|f_0\|_{\dot{H}^1}^2.$$ It follows then by the Poincaré inequality that $$1 = \|f_0\|_{L^2}^2 = \|f_J\|_{L^2}^2 \le \|f_J\|_{\dot{H}^1}^2 \le e^{-2J}\|f_0\|_{\dot{H}^1}^2,$$ and so $$J \le \log(\|f_0\|_{\dot{H}^1}).$$ ◻ **Claim 2**: There exists a constant $C_1 > 0$ so that if $\|f_{j}\|_{\dot{H}^1} \ge \|f_{j-1}\|_{\dot{H}^1}$ for some $j \in \mathbb{N}$, then $$\begin{aligned} \|f_{j+1}\|_{\dot{H}^1} &\ge \|f_{j}\|_{\dot{H}^1}, \label{eq:growth4}\\ \|f_{j+2}\|_{\dot{H}^1}^2 &\ge (K_{j+2}^4 - C_1K_{j+2}^3)\|f_{j+1}\|_{\dot{H}^1}^2. \label{eq:growth5}\end{aligned}$$ *Proof of Claim 2.* By [\[eq:FBdiff\]](#eq:FBdiff){reference-type="eqref" reference="eq:FBdiff"} applied with $g = f_j$ we have $$\|f_{j-1}\|_{\dot{H}^1}^2 + \|f_{j+1}\|_{\dot{H}^1}^2 \ge c K_j^4 \|f_j\|_{\dot{H}^1}^2.$$ Since $\|f_j\|_{\dot{H}^1} \ge \|f_{j-1}\|_{\dot{H}^1}$ it follows that $$\label{eq:growth6} \|f_j\|_{\dot{H}^1}^2 \le \frac{2}{cK_j^4} \|f_{j+1}\|_{\dot{H}^1}^2,$$ which implies [\[eq:growth4\]](#eq:growth4){reference-type="eqref" reference="eq:growth4"} and will also be crucial to obtaining [\[eq:growth5\]](#eq:growth5){reference-type="eqref" reference="eq:growth5"}.
To prove [\[eq:growth5\]](#eq:growth5){reference-type="eqref" reference="eq:growth5"} we begin by applying [\[eq:FBsame\]](#eq:FBsame){reference-type="eqref" reference="eq:FBsame"} with $g = f_{j+1}$ to get $$\label{eq:growth10} \|f_{j+2}\|_{\dot{H}^1}^2 + \|f_{j+1}\circ \Phi_{j+2}\|_{\dot{H}^1}^2 \ge (K_{j+2}^4 - CK_{j+2}^3)\|f_{j+1}\|_{\dot{H}^1}^2.$$ Observe now that $$f_{j+1} \circ \Phi_{j+2} = f_j \circ (\Phi_{j+1}^{-1} \circ \Phi_{j+2}) = f_j \circ (\phi_{j+1}^{-1} \circ \psi_{j+1}^{-1} \circ \psi_{j+2} \circ \phi_{j+2})$$ and so by the fact that $$\nabla(\psi_{j+1}^{-1} \circ \psi_{j+2})(x,y) = \begin{pmatrix} 1 & 0 \\ K_{j+2}S'(N_{j+2}x) - K_{j+1}S'(N_{j+1}x) & 1 \end{pmatrix}$$ we have $$\|\nabla(\phi_{j+1}^{-1} \circ \psi_{j+1}^{-1} \circ \psi_{j+2} \circ \phi_{j+2})\|_{L^\infty} \le \|\nabla\phi_{j+1}^{-1}\|_{L^\infty} \|\nabla(\psi_{j+1}^{-1} \circ \psi_{j+2})\|_{L^\infty} \|\nabla\phi_{j+2}\|_{L^\infty} \le C_2K_{j+2}^3$$ for some constant $C_2$ that does not depend on $j$. Employing [\[eq:growth6\]](#eq:growth6){reference-type="eqref" reference="eq:growth6"} we deduce that there is $C_3$ independent of $j$ such that $$\label{eq:growth9} \|f_{j+1} \circ \Phi_{j+2}\|_{\dot{H}^1}^2 \le C_2^2 K_{j+2}^6\|f_{j}\|_{\dot{H}^1}^2 \le C_3K_{j+2}^2 \|f_{j+1}\|_{\dot{H}^1}^2.$$ Putting [\[eq:growth9\]](#eq:growth9){reference-type="eqref" reference="eq:growth9"} into [\[eq:growth10\]](#eq:growth10){reference-type="eqref" reference="eq:growth10"} completes the proof of [\[eq:growth5\]](#eq:growth5){reference-type="eqref" reference="eq:growth5"}. ◻ We are now ready to complete the proof of the lemma. By Claim 1, there exists $j_0 \in \mathbb{N}$ with $j_0 \le \lceil\log(\|f_0\|_{\dot{H}^1})\rceil + 1$ such that $$\|f_{j_0}\|_{\dot{H}^1} \ge \|f_{j_0-1}\|_{\dot{H}^1}.$$ Iterating [\[eq:growth4\]](#eq:growth4){reference-type="eqref" reference="eq:growth4"} of Claim 2 it follows that $\|f_{j+1}\|_{\dot{H}^1} \ge \|f_j\|_{\dot{H}^1}$ for every $j \ge j_0$. 
Therefore, by Claim 2, [\[eq:growth5\]](#eq:growth5){reference-type="eqref" reference="eq:growth5"} holds for every $j \ge j_0$. Given $n \ge j_0+2$ we iterate [\[eq:growth5\]](#eq:growth5){reference-type="eqref" reference="eq:growth5"} over $j_0 \le j \le n$ and use also $\|f_{j_0+1}\|_{\dot{H}^1} \ge \|f_{j_0}\|_{\dot{H}^1}$ to conclude $$\|f_n\|_{\dot{H}^1}^2 \ge \|f_{j_0}\|_{\dot{H}^1}^2 \prod_{j={j_0+2}}^n (K_j^4 - C_1 K_j^3)\ge c_1 \|f_0\|_{\dot{H}^1}^2 \prod_{j=1}^n (K_j^4 - C_1 K_j^3),$$ where $$c_1 = \frac{1}{\|f_0\|_{\dot{H}^1}^2} \left(\prod_{j=1}^{\lceil \log(\|f_0\|_{\dot{H}^1})\rceil + 2}K_j^4\right)^{-1}.$$ The result then follows from $$\prod_{j=1}^n(K_j^4 - C_1 K_j^3) = \left(\prod_{j=1}^n K_j^4\right)\left(\prod_{j=1}^n(1-C_1K_j^{-1})\right)$$ and the fact that $$\prod_{j=1}^\infty(1-C_1K_j^{-1}) > 0$$ because $\sum_{j=1}^\infty K_j^{-1} < \infty$ due to $K_j \ge j^{5/2}$. ◻ ### Proof of continuous time $H^1$ growth We now upgrade Lemma [Lemma 5](#lem:discretegrowth){reference-type="ref" reference="lem:discretegrowth"} to continuous time and complete the proof of Lemma [Lemma 3](#lem:H1growth){reference-type="ref" reference="lem:H1growth"}. *Proof of Lemma [Lemma 3](#lem:H1growth){reference-type="ref" reference="lem:H1growth"}.* As before, we may assume without loss of generality that $\|f_0\|_{L^2} = 1$. Moreover, it is sufficient to prove the estimate for all $j \ge j_0$ with $j_0$ depending only on an upper bound for $\|f_0\|_{\dot{H}^1}$. Throughout this proof, $c > 0$ denotes the constant from Lemma [Lemma 5](#lem:discretegrowth){reference-type="ref" reference="lem:discretegrowth"}. We begin by using Lemma [Lemma 5](#lem:discretegrowth){reference-type="ref" reference="lem:discretegrowth"} to obtain lower bounds on specific derivatives of $f_j$ as well as the solution at the intermediate time $f(T_j + t_{j+1}) = f_j \circ \phi_{j+1}^{-1}$. 
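The positivity of the infinite product at the end of the argument can also be seen numerically. The sketch below uses the hypothetical sample choices $K_j = Mj^{5/2}$ with $M=4$ and $C_1=1$, consistent with the lower bound $K_j \ge j^{5/2}$ assumed in the text but not the actual constants of the lemma; the partial products stabilize at a positive value rather than decaying to zero.

```python
def partial_product(n, C1=1.0, M=4.0):
    """Partial product prod_{j<=n} (1 - C1/K_j) with K_j = M * j**2.5."""
    p = 1.0
    for j in range(1, n + 1):
        p *= 1.0 - C1 / (M * j ** 2.5)
    return p

# Since sum_j 1/K_j < infinity, the partial products converge to a
# strictly positive limit instead of tending to zero.
p_small, p_large = partial_product(10**3), partial_product(10**5)
print(p_small, p_large)
```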
A straightforward computation with the chain rule shows that for any $g \in H^1$ and $j \in \mathbb{N}$ we have $$\label{eq:trivialchain} \|\nabla(g\circ \phi_j^{-1})\|_{L^2} \le (K_j+2)\|\nabla g\|_{L^2}$$ as well as the same bound with $\phi_j^{-1}$ replaced by $\psi_j^{-1}$. Thus, $$\|\nabla(g \circ \Phi_j^{-1})\|_{L^2} \le (K_j+2)^2\|\nabla g\|_{L^2} \le (K_j^2 + 5K_j)\|\nabla g\|_{L^2}.$$ Iterating this estimate and using $\sum_{j=1}^\infty K_j^{-1} < \infty$ we see that there is $C_1$ independent of $j$ such that $$\begin{aligned} \|f_j\|_{\dot{H}^1} & \le C_1 \|f_0\|_{\dot{H}^1} \prod_{n=1}^j K_n^2, \label{eq:H1ubd1}\\ \|f_j \circ \phi_{j+1}^{-1}\|_{\dot{H}^1} &\le C_1 \|f_0\|_{\dot{H}^1} K_{j+1} \prod_{n=1}^j K_n^2. \label{eq:H1ubd2}\end{aligned}$$ For simplicity of notation, let $\bar{f}_{j+1} = f_j \circ \phi_{j+1}^{-1}$. Since $$\nabla f_{j+1} (x,y) = \begin{pmatrix} (\partial_x \bar{f}_{j+1}) \circ \psi_{j+1}^{-1}(x,y) - K_{j+1}S'(N_{j+1}x)(\partial_y \bar{f}_{j+1})\circ \psi_{j+1}^{-1}(x,y) \\ (\partial_y \bar{f}_{j+1})\circ \psi_{j+1}^{-1}(x,y) \end{pmatrix},$$ it follows from Lemma [Lemma 5](#lem:discretegrowth){reference-type="ref" reference="lem:discretegrowth"} and [\[eq:H1ubd2\]](#eq:H1ubd2){reference-type="eqref" reference="eq:H1ubd2"} that $$c \|f_0\|_{\dot{H}^1} K_{j+1}^2\prod_{n=1}^{j} K_n^2 \le \|\nabla f_{j+1}\|_{L^2} \le K_{j+1}\|\partial_y \bar{f}_{j+1}\|_{L^2} + 2C_1\|f_0\|_{\dot{H}^1}K_{j+1}\prod_{n=1}^j K_n^2.$$ Thus, if $j$ is large enough so that $cK_{j+1}^2/2 \ge 2C_1 K_{j+1}$, then $$\label{eq:partialybound} \|\partial_y \bar{f}_{j+1}\|_{L^2} \ge \frac{c}{2}\|f_0\|_{\dot{H}^1}K_{j+1}\prod_{n=1}^j K_n^2.$$ Note that since $c$ depends only on an upper bound for $\|f_0\|_{\dot{H}^1}$, so does this choice of $j$. 
A similar argument using [\[eq:partialybound\]](#eq:partialybound){reference-type="eqref" reference="eq:partialybound"} and [\[eq:H1ubd1\]](#eq:H1ubd1){reference-type="eqref" reference="eq:H1ubd1"} shows that for $j$ sufficiently large we also have $$\label{eq:partialxbound} \|\partial_x f_j\|_{L^2} \ge \frac{c}{4}\|f_0\|_{\dot{H}^1} \prod_{n=1}^j K_n^2.$$ We are now ready to complete the proof. Let $j_0$ be large enough so that both [\[eq:partialybound\]](#eq:partialybound){reference-type="eqref" reference="eq:partialybound"} and [\[eq:partialxbound\]](#eq:partialxbound){reference-type="eqref" reference="eq:partialxbound"} hold for all $j \ge j_0 - 1$. Fix $j \in \mathbb{N}$ with $j \ge j_0$ and $t \in [T_{j-1},T_{j-1}+t_j]$. Then, defining $\phi_{j,t}(x,y) = (x+\alpha_j t_j \zeta_j(t-T_{j-1})S(N_j y), y)$, for such $t$ we have $$\nabla f(t,x,y) = \begin{pmatrix} (\partial_x f_{j-1}) \circ \phi_{j,t}^{-1}(x,y) \\ (\partial_y f_{j-1})\circ \phi_{j,t}^{-1}(x,y) - K_j \zeta_j(t-T_{j-1})S'(N_j y) (\partial_x f_{j-1}) \circ \phi_{j,t}^{-1}(x,y) \end{pmatrix},$$ where $\zeta_j:[0,t_j]\to [0,1]$ is as defined in [\[eq:zetaj\]](#eq:zetaj){reference-type="eqref" reference="eq:zetaj"}. Thus, by [\[eq:partialxbound\]](#eq:partialxbound){reference-type="eqref" reference="eq:partialxbound"} there holds $$\label{eq:continuoustime1} \|\nabla f(t)\|_{L^2} \ge \frac{c}{4}\|f_0\|_{\dot{H}^1}\prod_{n=1}^{j-1} K_n^2$$ for all $t \in [T_{j-1},T_{j-1}+t_j]$. 
On the other hand, if $t$ is such that $K_j \zeta_j(t-T_{j-1}) \ge 8C_1/c$, then by [\[eq:H1ubd1\]](#eq:H1ubd1){reference-type="eqref" reference="eq:H1ubd1"} and [\[eq:partialxbound\]](#eq:partialxbound){reference-type="eqref" reference="eq:partialxbound"} we have $$\begin{aligned} \|\nabla f(t)\|_{L^2} &\ge K_j \zeta_j(t-T_{j-1})\|\partial_x f_{j-1}\|_{L^2} - \|f_{j-1}\|_{\dot{H}^1} \\ &\ge \frac{1}{2}K_j \zeta_j(t-T_{j-1})\|\partial_x f_{j-1}\|_{L^2} \\ & \ge \frac{c}{8}K_j \zeta_j(t-T_{j-1})\|f_0\|_{\dot{H}^1}\prod_{n=1}^{j-1} K_n^2. \label{eq:continuoustime2}\end{aligned}$$ Combining [\[eq:continuoustime1\]](#eq:continuoustime1){reference-type="eqref" reference="eq:continuoustime1"} and [\[eq:continuoustime2\]](#eq:continuoustime2){reference-type="eqref" reference="eq:continuoustime2"} proves [\[eq:H1growthlemma\]](#eq:H1growthlemma){reference-type="eqref" reference="eq:H1growthlemma"} for $t \in [T_{j-1},T_{j-1}+t_j]$. The estimate on the other half of the time interval $[T_{j-1},T_j]$ follows in a similar way using [\[eq:H1ubd2\]](#eq:H1ubd2){reference-type="eqref" reference="eq:H1ubd2"} and [\[eq:partialybound\]](#eq:partialybound){reference-type="eqref" reference="eq:partialybound"}. ◻ ## Balanced growth {#sec:upperbounds} Lemma [Lemma 3](#lem:H1growth){reference-type="ref" reference="lem:H1growth"} establishes the first hypothesis of Proposition [Proposition 2](#prop:ADcriteria){reference-type="ref" reference="prop:ADcriteria"}, and moreover shows that in order to obtain the balanced growth hypothesis we need to prove that for some $s \in (0,1]$ the $\dot{H}^{1+s}$ norm of $f_j$ grows at most like $\prod_{n=1}^{j} K_n^{2s+2}$ with $j$. This is the content of the next lemma. **Lemma 6** (Upper bound in $H^{\sigma}$, $\sigma > 1$). *Fix $s \in (2/5,1/2)$ and let $h_j$ be as in the statement of Lemma [Lemma 3](#lem:H1growth){reference-type="ref" reference="lem:H1growth"}.
For every mean-zero $f_0 \in H^{1+s} \cap W^{1,\infty}$ there exists a constant $C > 0$ depending only on $s$ and an upper bound for $\|f_0\|_{W^{1,\infty}}/\|f_0\|_{L^2}$ such that for every $j \in \mathbb{N}$ and $t \in [T_{j-1},T_j]$ the solution of [\[eq:AE\]](#eq:AE){reference-type="eqref" reference="eq:AE"} satisfies $$\|f(t)\|_{\dot{H}^{1+s}} \le C h_j^{1+s}(t) \|f_0\|_{\dot{H}^{1+s}}\prod_{n=1}^{j-1}K_n^{2+2s}.$$* Unlike in the proof of Lemma [Lemma 3](#lem:H1growth){reference-type="ref" reference="lem:H1growth"}, the extension from discrete to continuous time is immediate and would only serve to complicate the notation. Thus, for simplicity we will prove the bound only for $t \in \{T_j\}_{j=1}^\infty$. In particular, in the notation of Lemma [Lemma 6](#lem:uppermain){reference-type="ref" reference="lem:uppermain"}, we prove in this section that there is a constant $C$ depending only on an upper bound for $\|f_0\|_{\dot{H}^1}/\|f_0\|_{L^2}$ such that $$\label{eq:upperdiscrete} \|f_j\|_{\dot{H}^{1+s}} \le \exp\left(C\left(1+\frac{\|f_0\|_{W^{1,\infty}}}{\|f_0\|_{\dot{H}^1}}\right)\right)\|f_0\|_{\dot{H}^{1+s}}\prod_{n=1}^{j}K_n^{2+2s}$$ for every $j \in \mathbb{N}$. ### Proof of discrete-time upper bound We will bound $\|f_j\|_{\dot{H}^{1+s}}$ by computing $\nabla(f_{j-1}\circ \Phi_j^{-1})$ and then estimating the $\dot{H}^s$ norm of the result. We thus begin with a bound for $\|f \circ \Phi_j^{-1}\|_{\dot{H}^s}$ that follows easily from interpolation theory. **Lemma 7**. *Let $s \in (0,1/2)$. For every mean-zero $f \in H^{s}$ we have $$\begin{aligned} \|(f \circ \phi^{-1}_j)\|_{\dot{H}^s} \le (K_j+2)^{s} \|f\|_{\dot{H}^s}.
\end{aligned}$$ Moreover, the same estimate holds with $\phi_j^{-1}$ replaced with $\psi_j^{-1}$.* *Proof.* Since $\|f\circ \phi_{j}^{-1}\|_{L^2} = \|f\|_{L^2}$ the result follows immediately from [\[eq:trivialchain\]](#eq:trivialchain){reference-type="eqref" reference="eq:trivialchain"} and the interpolation theorem Lemma [Lemma 9](#lem:interp){reference-type="ref" reference="lem:interp"}. ◻ Now we use Lemma [Lemma 7](#lem:interp1){reference-type="ref" reference="lem:interp1"} and the commutator estimate of Lemma [Lemma 10](#lem:Kato){reference-type="ref" reference="lem:Kato"} to bound $\|f \circ \phi_{j}^{-1}\|_{\dot{H}^{1+s}}$. **Lemma 8**. *For any $s \in (0,1/2)$ there exists a constant $C$ depending only on $s$ such that for every $f \in \dot{H}^{s+1} \cap W^{1,\infty}$ there holds $$\label{eq:upper2} \|f \circ \phi_j^{-1}\|_{\dot{H}^{s+1}} \le K_j^{s+1}\|f\|_{\dot{H}^{s+1}} + C(K_j \|f\|_{\dot{H}^{s+1}}+ K_j N_j^s \|f\|_{W^{1,\infty}}).$$ Moreover, the same estimate holds with $\phi_j^{-1}$ replaced by $\psi_j^{-1}$.* *Proof.* Recall the notation and conventions of Section [1.4](#sec:Fourier){reference-type="ref" reference="sec:Fourier"}. First, note that by the definition of $\|\cdot\|_{\dot{H}^{s+1}}$ and the triangle inequality, for any $g \in \dot{H}^{s+1}$ we have $$\|g\|_{\dot{H}^{s+1}} = \|D^s \nabla g\|_{L^2} \le \|D^s \partial_x g\|_{L^2} + \|D^s \partial_y g\|_{L^2}.$$ We will estimate each term for $g = f\circ \phi_j^{-1}$. 
For the first term, we note that since $\partial_x (f\circ \phi_j^{-1}) = (\partial_x f)\circ \phi_j^{-1}$ it follows from Lemma [Lemma 7](#lem:interp1){reference-type="ref" reference="lem:interp1"} that there is a constant $C_1$ such that $$\|D^s \partial_x (f\circ \phi_j^{-1})\|_{L^2} \le (K_j+2)^s\|\partial_x f\|_{\dot{H}^s} \le C_1(K_j+2)^s\|f\|_{\dot{H}^{1+s}}.$$ For the second term, we begin by computing $$\partial_y (f\circ \phi_j^{-1}) = -K_j S'(N_j y)(\partial_x f)\circ \phi_j^{-1} + (\partial_y f)\circ \phi_j^{-1}.$$ Applying $D^s$ and introducing a commutator term we have $$\label{eq:fourterms} \begin{aligned} D^s \partial_y (f\circ \phi_j^{-1}) &= -K_jS'(N_j y) D^s ((\partial_x f)\circ \phi_j^{-1}) \\ & \quad+ [K_j S'(N_j y) D^s((\partial_x f)\circ \phi_j^{-1}) - D^s(K_j S'(N_j y)(\partial_x f) \circ \phi_j^{-1})] \\ & \quad + D^s ((\partial_y f)\circ \phi_j^{-1}). \end{aligned}$$ By Lemma [Lemma 7](#lem:interp1){reference-type="ref" reference="lem:interp1"}, we have $$\|K_j S'(N_j \cdot) D^s ((\partial_x f)\circ \phi_j^{-1})\|_{L^2} = K_j \|(\partial_x f) \circ \phi_j^{-1}\|_{\dot{H}^s} \le K_j (K_j + 2)^s \|f\|_{\dot{H}^{s+1}}\le (K_j^{s+1} + 2^s K_j)\|f\|_{\dot{H}^{s+1}}.$$ For the commutator term, we use the homogeneous Kato-Ponce inequality given in Lemma [Lemma 10](#lem:Kato){reference-type="ref" reference="lem:Kato"} to obtain $$\begin{aligned} \|K_j S'(N_j \cdot) D^s((\partial_x f)\circ \phi_j^{-1}) - D^s(K_j S'(N_j \cdot)(\partial_x f) \circ \phi_j^{-1})\|_{L^2} \le C_2 K_j \|D_y^s S'(N_j \cdot)\|_{L^2}\|(\partial_x f)\circ \phi_j^{-1}\|_{L^\infty}\end{aligned}$$ for some constant $C_2$ depending on $s$. Using that $S' \in \dot{H}^s(\mathbb{T})$ since $s < 1/2$ and the fact that $$\widehat{S'(N_j\cdot)}(\ell) = \begin{cases} \widehat{S'}(\ell/N_j) & \ell/N_j \in \mathbb{Z}\\ 0 & \ell/N_j \not \in \mathbb{Z} \end{cases}$$ because $N_j$ is an integer, one can show that $\|D_y^s S'(N_j \cdot)\|_{L^2} \le C_3 N_j^s$ for $C_3 > 0$ depending on $s$. 
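The frequency-rescaling fact just used, that for integer $N$ the composition with $x \mapsto Nx$ moves each Fourier mode $k$ to mode $Nk$ and creates no others (which is what produces the factor $N_j^s$), can be checked numerically. The sketch below uses an illustrative trigonometric polynomial, not the paper's $S'$:

```python
import numpy as np

L = 64                    # sample points on the torus [0, 2*pi)
N = 4                     # integer frequency dilation
x = 2 * np.pi * np.arange(L) / L
f = np.cos(3 * x) + 0.5 * np.sin(5 * x)          # modes k = +-3, +-5
g = np.cos(3 * N * x) + 0.5 * np.sin(5 * N * x)  # the composition f(N x)

fhat = np.fft.fft(f) / L
ghat = np.fft.fft(g) / L
support_f = {k for k in range(L) if abs(fhat[k]) > 1e-10}
support_g = {k for k in range(L) if abs(ghat[k]) > 1e-10}
# Every mode of f(N x) sits at N times a mode of f (indices taken mod L),
# and no new modes appear.
print(sorted(support_f), sorted(support_g))
```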
Thus, $$\|K_j S'(N_j \cdot) D^s((\partial_x f)\circ \phi_j^{-1}) - D^s(K_j S'(N_j \cdot)(\partial_x f) \circ \phi_j^{-1})\|_{L^2} \le C_2 C_3 K_j N_j^s \|f\|_{W^{1,\infty}}.$$ Bounding the final term in [\[eq:fourterms\]](#eq:fourterms){reference-type="eqref" reference="eq:fourterms"} using Lemma [Lemma 7](#lem:interp1){reference-type="ref" reference="lem:interp1"} and combining all of our other estimates completes the proof of the estimate of $\|f\circ \phi_j^{-1}\|_{\dot{H}^{s+1}}$. The estimate with $\psi_j^{-1}$ is obtained in the same way by reversing the roles of $x$ and $y$. ◻ *Proof of [\[eq:upperdiscrete\]](#eq:upperdiscrete){reference-type="eqref" reference="eq:upperdiscrete"}.* Applying Lemma [Lemma 8](#lem:upper2){reference-type="ref" reference="lem:upper2"} twice we deduce that there exists a constant $C_1$ depending only on $s$ such that for every mean-zero $f_0 \in H^{1+s} \cap W^{1,\infty}$ and $j \in \mathbb{N}$ we have $$\label{eq:fullcomp} \|f_{j}\|_{\dot{H}^{s+1}} \le K_j^{2s+2}\|f_{j-1}\|_{\dot{H}^{s+1}}\left(1 + C_1K_j^{-s} + C_1K_j^{-s}N_j^s \frac{\|f_{j-1}\|_{W^{1,\infty}}}{\|f_{j-1}\|_{\dot{H}^{s+1}}}\right).$$ Next, note that straightforward estimates similar to previous computations yield $$\|f_{j-1}\|_{W^{1,\infty}} \le C_2\|f_0\|_{W^{1,\infty}}\prod_{n=1}^{j-1} K_n^2,$$ while interpolation and Lemma [Lemma 5](#lem:discretegrowth){reference-type="ref" reference="lem:discretegrowth"} give $$\|f_{j-1}\|_{\dot{H}^{1+s}} \ge \frac{\|f_{j-1}\|_{\dot{H}^1}^{1+s}}{\|f_{j-1}\|_{L^2}^{s}} = \frac{\|f_{j-1}\|_{\dot{H}^1}^{1+s}}{\|f_0\|_{L^2}^{s}} \ge c_1 \|f_0\|_{\dot{H}^1}\prod_{n=1}^{j-1} K_n^{2 + 2s}$$ for a constant $c_1$ that depends only on an upper bound for $\|f_0\|_{\dot{H}^1}/\|f_0\|_{L^2}$. 
Thus, there is a constant $C_3$ depending only on an upper bound for $\|f_0\|_{\dot{H}^1}/\|f_0\|_{L^2}$ such that $$\frac{\|f_{j-1}\|_{W^{1,\infty}}}{\|f_{j-1}\|_{\dot{H}^{1+s}}} \le C_3\frac{\|f_0\|_{W^{1,\infty}}}{\|f_0\|_{\dot{H}^1}}\prod_{n=1}^{j-1} K_n^{-2s}.$$ Putting this bound into [\[eq:fullcomp\]](#eq:fullcomp){reference-type="eqref" reference="eq:fullcomp"} we obtain $$\label{eq:iterate1} \|f_{j}\|_{\dot{H}^{s+1}} \le K_j^{2s+2}\|f_{j-1}\|_{\dot{H}^{s+1}}\left(1+C_1K_j^{-s} + C_3\frac{\|f_0\|_{W^{1,\infty}}}{\|f_0\|_{\dot{H}^1}}\left(N_j\prod_{n=1}^{j-1} K_n^{-2}\right)^s\right).$$ Since $K_j \ge j^{5/2}$ and $s > 2/5$, we have that $\sum_{j=1}^\infty K_j^{-s} < \infty$. Moreover, as $N_j = 2^j$ and $\prod_{n=1}^{j-1} K_n^2 \ge (j-1)!$ we clearly have $$\sum_{j=1}^\infty \left(N_j \prod_{n=1}^{j-1} K_n^{-2}\right)^s < \infty.$$ Thus, by iterating [\[eq:iterate1\]](#eq:iterate1){reference-type="eqref" reference="eq:iterate1"} we see that there is a constant $C_4$ depending only on $s$ and an upper bound for $\|f_0\|_{\dot{H}^1}/\|f_0\|_{L^2}$ such that for every $j \in \mathbb{N}$ we have $$\|f_{j}\|_{\dot{H}^{s+1}} \le \exp\left(C_4\left(1+\frac{\|f_0\|_{W^{1,\infty}}}{\|f_0\|_{\dot{H}^1}}\right)\right)\|f_0\|_{\dot{H}^{s+1}}\prod_{n=1}^j K_n^{2s+2}.$$ ◻ ## Concluding the proof of Theorem [Theorem 1](#thrm:main){reference-type="ref" reference="thrm:main"} {#sec:finish} With Lemmas [Lemma 3](#lem:H1growth){reference-type="ref" reference="lem:H1growth"} and [Lemma 6](#lem:uppermain){reference-type="ref" reference="lem:uppermain"} at hand, the proof of Theorem [Theorem 1](#thrm:main){reference-type="ref" reference="thrm:main"} is essentially immediate from Proposition [Proposition 2](#prop:ADcriteria){reference-type="ref" reference="prop:ADcriteria"}.
*Proof of Theorem [Theorem 1](#thrm:main){reference-type="ref" reference="thrm:main"}.* It suffices to prove the anomalous dissipation portion of the statement, as once this is established the non-uniqueness for a suitably modified velocity field follows as in [@DEIJ22]. Let $u:[0,T_*] \times \mathbb{T}^2 \to \mathbb{R}^2$ be as defined in Section [2.2](#sec:udefinition){reference-type="ref" reference="sec:udefinition"} with the parameters chosen as in Section [2.2.2](#sec:parameters){reference-type="ref" reference="sec:parameters"} and the constant $M \ge 2$ picked large enough so that the conclusion of Lemma [Lemma 3](#lem:H1growth){reference-type="ref" reference="lem:H1growth"} holds. Recall that by a simple scaling argument it is sufficient to prove Theorem [Theorem 1](#thrm:main){reference-type="ref" reference="thrm:main"} for the particular time $T = T_*$. As described in Section [2.2.2](#sec:parameters){reference-type="ref" reference="sec:parameters"}, the fact that $u$ has the regularity claimed in Theorem [Theorem 1](#thrm:main){reference-type="ref" reference="thrm:main"} follows easily from the sufficient condition [\[eq:velconditions\]](#eq:velconditions){reference-type="eqref" reference="eq:velconditions"} and the definitions of the parameters $\alpha_j$, $N_j$, and $t_j$. Fix any $s \in (2/5,1/2)$ and mean-zero initial data $f_0 \in H^{1+s} \cap W^{1,\infty}$. We apply Proposition [Proposition 2](#prop:ADcriteria){reference-type="ref" reference="prop:ADcriteria"} with $\sigma = 1+s$. The fact that the first hypothesis holds is the content of Lemma [Lemma 3](#lem:H1growth){reference-type="ref" reference="lem:H1growth"}, specifically [\[eq:blowup\]](#eq:blowup){reference-type="eqref" reference="eq:blowup"}. 
For the second hypothesis, it follows from Lemmas [Lemma 3](#lem:H1growth){reference-type="ref" reference="lem:H1growth"} and [Lemma 6](#lem:uppermain){reference-type="ref" reference="lem:uppermain"} that for all $t \in [0,T_*)$ the solution of [\[eq:AE\]](#eq:AE){reference-type="eqref" reference="eq:AE"} satisfies $$\|f(t)\|_{L^2}^s \|f(t)\|_{\dot{H}^{1+s}} \le C_1 \|f(t)\|_{\dot{H}^1}^{1+s} \quad \text{with} \quad C_1 = \frac{C}{c^{1+s}}\frac{\|f_0\|^s_{L^2}\|f_0\|_{\dot{H}^{1+s}}}{\|f_0\|^{1+s}_{\dot{H}^{1}}},$$ where $c$ and $C$ are as in the statements of the lemmas, and in particular only depend on an upper bound for $\|f_0\|_{W^{1,\infty}}/{\|f_0\|_{L^2}}$. Thus, Theorem [Theorem 1](#thrm:main){reference-type="ref" reference="thrm:main"} follows from Proposition [Proposition 2](#prop:ADcriteria){reference-type="ref" reference="prop:ADcriteria"} and there is a lower bound for the amount of energy dissipated on $[0,T_*]$ with the dependencies claimed in Remark [\[rem:constants\]](#rem:constants){reference-type="ref" reference="rem:constants"}. ◻ ## Proof of Theorem [Theorem 2](#thrm:2){reference-type="ref" reference="thrm:2"} {#sec:thrm2} We now prove Theorem [Theorem 2](#thrm:2){reference-type="ref" reference="thrm:2"}, which amounts to appropriately choosing the parameters $(\alpha_j, N_j, t_j)$ and estimating the Lipschitz norm of the solution to the full advection-diffusion equation. 
Before beginning the proof, we notice that a careful reading of the proof of Theorem [Theorem 1](#thrm:main){reference-type="ref" reference="thrm:main"} above shows that the only properties of the parameters defining the velocity that we needed to deduce anomalous dissipation for every mean-zero $f_0 \in H^{s+1}$ and some fixed $s \in (0,1/2)$ were the following: - $\sum_{j=1}^\infty t_j < \infty$; - $\sum_{j=1}^\infty t_j \prod_{n=1}^{j-1}K_n^4 = \infty$; - $\sum_{j=1}^\infty K_j^{-s} < \infty$; - $\sum_{j=1}^\infty \left(N_j \prod_{n=1}^{j-1} K_n^{-2}\right)^s < \infty$; - there exists $C \ge 1$ such that $1 \le K_{j+1}/K_j \le C$ and for some $M$ sufficiently large $K_j \ge M$; - $N_j$ is an integer. *Proof of Theorem [Theorem 2](#thrm:2){reference-type="ref" reference="thrm:2"}.* Fix $\alpha \in (0,1)$ and $0 \le \beta < (1-\alpha)/2$. For $\epsilon \in (0,1/6)$ to be chosen sufficiently small depending on the gap between $\beta$ and $(1-\alpha)/2$, we apply the proof of Theorem [Theorem 1](#thrm:main){reference-type="ref" reference="thrm:main"} with $s = 1/2 - \epsilon$ and the parameters $(\alpha_j, N_j, t_j)$ chosen as $$N_j = j^{4j}, \quad \alpha_j = N_j^{-\alpha}, \quad \text{and} \quad t_j = Mj^{\frac{2}{1-3\epsilon}} j^{-4(1-\alpha)j},$$ where $M$ is taken sufficiently large. We define our velocity field as in Section [2.2](#sec:udefinition){reference-type="ref" reference="sec:udefinition"} with the parameters as given above. We have $K_j = Mj^{\frac{2}{1-3\epsilon}}$ and it is straightforward to check using Stirling's formula that each of the six conditions stated before the start of the proof is satisfied. The anomalous dissipation claimed in Theorem [Theorem 2](#thrm:2){reference-type="ref" reference="thrm:2"} then follows from the proof of Theorem [Theorem 1](#thrm:main){reference-type="ref" reference="thrm:main"}.
It remains only to verify the regularity of the solutions $f^\kappa$ claimed in [\[eq:scalarregularity\]](#eq:scalarregularity){reference-type="eqref" reference="eq:scalarregularity"}. To estimate $\|f^\kappa(t)\|_{C^\beta}$ we will bound $\|\nabla f^\kappa(t)\|_{L^\infty}$ and then interpolate with $\|f^\kappa(t)\|_{L^\infty}$. Let $\{T_j\}_{j \ge 0}$ be as defined in the proof of Theorem [Theorem 1](#thrm:main){reference-type="ref" reference="thrm:main"}. For $j \in \mathbb{N}$ and $t \in [T_{j-1}, T_{j-1} + t_j)$ we have $$\begin{aligned} & \partial_t (\partial_x f^\kappa) + \alpha_j \psi\left(\frac{t-T_{j-1}}{t_j}\right) S(N_j y)\partial_x (\partial_x f^\kappa) = \kappa \Delta (\partial_x f^\kappa), \label{eq:viscous1} \\ & \partial_t (\partial_y f^\kappa) + \alpha_j \psi\left(\frac{t-T_{j-1}}{t_j}\right) S(N_j y)\partial_x (\partial_y f^\kappa) = \kappa \Delta (\partial_y f^\kappa) - \frac{1}{t_j}\psi\left(\frac{t-T_{j-1}}{t_j}\right) K_j S'(N_j y)\partial_x f^\kappa. \label{eq:viscous2}\end{aligned}$$ From [\[eq:viscous1\]](#eq:viscous1){reference-type="eqref" reference="eq:viscous1"} and the maximum principle it follows that $$\label{eq:viscous3} \sup_{t \in [T_{j-1},T_{j-1}+t_j]}\|\partial_x f^\kappa(t)\|_{L^\infty} \le \|\partial_x f^\kappa(T_{j-1})\|_{L^\infty}.$$ Then, treating the term involving $\partial_x f^\kappa$ on the right-hand side of [\[eq:viscous2\]](#eq:viscous2){reference-type="eqref" reference="eq:viscous2"} as a forcing term and using [\[eq:viscous3\]](#eq:viscous3){reference-type="eqref" reference="eq:viscous3"} together with the maximum principle again we obtain $$\label{eq:viscous4} \sup_{t \in [T_{j-1},T_{j-1}+t_j]} \|\partial_y f^\kappa(t)\|_{L^\infty} \le \|\partial_y f^{\kappa}(T_{j-1})\|_{L^\infty} + K_j \|\partial_x f^\kappa(T_{j-1})\|_{L^\infty}.$$ Combining the previous two bounds we have $$\sup_{t \in [T_{j-1},T_{j-1} + t_j]}\|\nabla f^\kappa(t)\|_{L^\infty} \le (2+K_j)\|\nabla f^{\kappa}(T_{j-1})\|_{L^\infty}.$$ Applying the 
same argument on the time interval $[T_{j-1} + t_j, T_{j})$ gives $$\label{eq:viscous5} \sup_{t \in [T_{j-1},T_{j}]}\|\nabla f^\kappa(t)\|_{L^\infty} \le (K_j^2 + C_1K_j)\|\nabla f^{\kappa}(T_{j-1})\|_{L^\infty}$$ for some $C_1 > 0$. Since $\sum_{j=1}^\infty K_j^{-1} < \infty$, by iterating [\[eq:viscous5\]](#eq:viscous5){reference-type="eqref" reference="eq:viscous5"} and then interpolating the resulting estimate with $\|f^\kappa(t)\|_{L^\infty} \le \|f_0\|_{L^\infty}$ we conclude there is $C_2 > 0$ depending on $f_0$ such that $$\label{eq:viscous6} \sup_{t \in [T_{j-1},T_{j}]}\|f^\kappa(t)\|_{C^\beta(\mathbb{T}^2)} \le C_2 \prod_{n=1}^j K_n^{2\beta} = C_2 M^{2\beta j}(j!)^\frac{4\beta}{1-3\epsilon}.$$ Thus, we have $$\|f^\kappa\|_{L^2([0,T_*];C^\beta(\mathbb{T}^2))}^2 \le 2 C_2^2\sum_{j=1}^\infty t_j M^{4\beta j} (j!)^\frac{8\beta}{1-3\epsilon} \le 2MC_2^2 \sum_{j=1}^\infty j^4 M^{4\beta j} j^{-4(1-\alpha)j}(j!)^\frac{8\beta}{1-3\epsilon} < \infty$$ provided that $\epsilon > 0$ is chosen small enough so that $$\frac{1}{1-3\epsilon} < \frac{1-\alpha}{2\beta}.$$ ◻ # General Remarks and Questions Let us close by giving a few questions for further investigation that might be interesting. ## Uniqueness Threshold? Using the construction here, it appears that the best modulus of continuity that the velocity can attain is $x|\log(x)|^2$. It is not clear whether one can reach the Osgood threshold using this technique, or whether doing so is possible at all. While it is clear that anomalous dissipation is impossible when the velocity field satisfies the Osgood condition, it is not clear that the Osgood condition is really necessary even for uniqueness in the PDE setting. Resolving this gap, for non-uniqueness and/or anomalous dissipation, may be of great theoretical interest. ## Autonomous flows The example given here relies on the existence of a "singular time" in the velocity field.
While this may be interesting for the purposes of studying anomalous dissipation coming from a potential finite-time singularity in the fluid equations, it is of great mathematical, and possibly physical, interest to construct autonomous flows that give anomalous dissipation. There are two relevant directions that one can think about here. In two dimensions, it is possible that the examples given by Alberti, Bianchini, and Crippa [@ABC] or similarly constructed flows can give anomalous dissipation for some or all data. In three dimensions and higher, the existence of an autonomous flow giving anomalous dissipation (and non-uniqueness) for a single initial datum or a large class of data is immediate from the construction here and earlier works, for instance by treating the third dimension as a time variable. There is, however, no example of an autonomous flow in three dimensions giving anomalous dissipation for *all* smooth data. It is possible that one can lift versions of the flow constructed here to serve this purpose. ## Forwards-Backwards Principle Lemma [Lemma 1](#fwdbckwdp){reference-type="ref" reference="fwdbckwdp"} gives a sufficient condition for exponential growth of solutions to the transport equation with a time-periodic velocity (the mapping $\Phi$ can be taken to be the associated Lagrangian flow at $t=T_*$, the period of the velocity field). It is not clear whether there exists any time-periodic and *smooth* velocity field for which such an inequality holds. # Acknowledgements {#acknowledgements .unnumbered} T.M.E. acknowledges funding from NSF DMS-2043024 and the Alfred P. Sloan Foundation. K.L. acknowledges funding from NSF DMS-2038056. The authors thank T. Drivas for helpful conversations and suggestions. # Interpolation and commutator estimates {#appendix} In this section we recall some Sobolev interpolation and commutator estimates that are needed in the proof of Lemma [Lemma 6](#lem:uppermain){reference-type="ref" reference="lem:uppermain"}.
The result from interpolation theory that we require is as follows. For a proof, see e.g. [@chandler2015interpolation Corollary 3.2]. **Lemma 9** (Sobolev space interpolation). *Fix $0 \le s_0 < s_1 < \infty$ and let $T:\dot{H}^\sigma(\mathbb{T}^2) \to \dot{H}^{\sigma}(\mathbb{T}^2)$ be a bounded, linear operator for $\sigma \in \{s_0, s_1\}$. Then, for every $\theta \in (0,1)$, defining $s_\theta = \theta s_0 + (1-\theta)s_1$ we have that $T:\dot{H}^{s_\theta}(\mathbb{T}^2)\to \dot{H}^{s_\theta}(\mathbb{T}^2)$ is also bounded and $$\|T\|_{\dot{H}^{s_\theta} \to \dot{H}^{s_\theta}} \le \|T\|_{\dot{H}^{s_0}\to \dot{H}^{s_0}}^\theta \|T\|_{\dot{H}^{s_1}\to \dot{H}^{s_1}}^{1-\theta}.$$* Next, we have a homogeneous Kato-Ponce inequality; see for instance [@MuscaluSchlag Problem 2.7]. **Lemma 10**. *Fix $s \in (0,1)$. There exists a constant $C_s$ such that for every $f,g \in C^\infty(\mathbb{T}^2)$ there holds $$\|D^s(f g) - f D^s g\|_{L^2} \le C_s \|D^s f\|_{L^2}\|g\|_{L^\infty}.$$*
{ "id": "2309.08576", "title": "Norm Growth, Non-uniqueness, and Anomalous Dissipation in Passive\n Scalars", "authors": "Tarek M. Elgindi and Kyle Liss", "categories": "math.AP physics.flu-dyn", "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/" }
--- abstract: | The directed Oberwolfach problem OP$^\ast(m_1,\ldots,m_k)$ asks whether the complete symmetric digraph $K_n^\ast$, assuming $n=m_1+\ldots +m_k$, admits a decomposition into spanning subdigraphs, each a disjoint union of $k$ directed cycles of lengths $m_1,\ldots,m_k$. We hereby describe a method for constructing a solution to OP$^\ast(m_1,\ldots,m_k)$ given a solution to OP$^\ast(m_1,\ldots,m_\ell)$, for some $\ell<k$, if certain conditions on $m_1,\ldots,m_k$ are satisfied. This approach enables us to extend a solution for OP$^\ast(m_1,\ldots,m_\ell)$ into a solution for OP$^\ast(m_1,\ldots,m_\ell,t)$, as well as into a solution for OP$^\ast(m_1,\ldots,m_\ell,2^{\langle t \rangle})$, where $2^{\langle t \rangle}$ denotes $t$ copies of 2, provided $t$ is sufficiently large. In particular, our recursive construction allows us to effectively address the two-table directed Oberwolfach problem. We show that OP$^\ast(m_1,m_2)$ has a solution for all $2 \le m_1\le m_2$, with a definite exception of $m_1=m_2=3$ and a possible exception in the case that $m_1 \in \{ 4,6 \}$, $m_2$ is even, and $m_1+m_2 \ge 14$. It has been shown previously that OP$^\ast(m_1,m_2)$ has a solution if $m_1+m_2$ is odd, and that OP$^\ast(m,m)$ has a solution if and only if $m \ne 3$. In addition to solving many other cases of OP$^\ast$, we show that when $2 \le m_1+\ldots +m_k \le 13$, OP$^\ast(m_1,\ldots,m_k)$ has a solution if and only if $(m_1,\ldots,m_k) \not\in \{ (4),(6),(3,3) \}$. Complete symmetric digraph, directed Oberwolfach problem, directed 2-factorization, recursive construction. 
author: - | Suzan Kadri and Mateja Šajna[^1]\ University of Ottawa title: | The directed Oberwolfach problem\ with variable cycle lengths: a recursive construction --- # Introduction The celebrated Oberwolfach problem (OP), posed by Ringel in 1967, asks whether $n$ participants at a conference can be seated at $k$ round tables of sizes $m_1, \ldots, m_k$ for several nights in a row so that each participant sits next to everybody else exactly once. The assumption is that $n$ is odd and $n=m_1+\ldots+m_k$. In graph-theoretic terms, OP$(m_1, \ldots, m_k)$ asks whether $K_n$ admits a decomposition into 2-factors, each a disjoint union of cycles of lengths $m_1, \ldots, m_k$. When $n$ is even, the complete graph minus a 1-factor, $K_n-I$, is considered instead [@HuaKot]. OP has been solved completely in the case that $m_1=\ldots=m_k$ [@AlsHag; @AlsSch; @Hag; @HofSch], and in many other special cases (for example, for $m_1, \ldots, m_k$ all even [@BryDan; @Hag], and for $n$ sufficiently large [@GloJoo]), but is in general still open. Most pertinent for this paper are the following results on OP. **Theorem 1**. *[@AdaBry; @Dez; @FraHol; @FraRos; @SalDra][\[thm:OPsmall\]]{#thm:OPsmall label="thm:OPsmall"} Let $3\le m_1 \le \ldots \le m_k$ be integers with $m_1+\ldots + m_k \le 60$. Then OP$(m_1, \ldots, m_k)$ has a solution if and only if $(m_1, \ldots, m_k) \not\in \{ (3,3),(4,5),(3,3,5),$ $(3,3,3,3) \}$.* **Theorem 2**. *[@BryDan; @Gvo; @Hag; @Tra][\[thm:Tra\]]{#thm:Tra label="thm:Tra"} Let $m_1$ and $m_2$ be integers such that $3 \le m_1 \le m_2$. Then OP$(m_1,m_2)$ has a solution if and only if $(m_1,m_2) \not\in \{ (3,3),(4,5) \}$.* The directed Oberwolfach problem, OP$^\ast(m_1,\ldots,m_k)$, was introduced in [@BurSaj]. It asks whether $n$ participants can be seated at $k$ round tables of sizes $m_1, \ldots, m_k$ (where $n=m_1+\ldots+m_k$) for several nights in a row so that each person sits *to the right* of everybody else exactly once.
Such a seating is equivalent to a decomposition of $K_n^*$, the complete symmetric digraph of order $n$, into subdigraphs isomorphic to a disjoint union of $k$ directed cycles of lengths $m_1, \ldots, m_k$. The directed Oberwolfach problem with uniform cycle lengths, that is, with $m_1=\ldots=m_k=m$, is denoted OP$^\ast(n;m)$. The solution to OP$^\ast(n;m)$ has been completed very recently, as seen below. **Theorem 3**. *[@AdaBry1; @BenZha; @BerGerSot; @BurFraSaj; @BurSaj; @Lac; @Til][\[thm:Kn\*\]]{#thm:Kn* label="thm:Kn*"} Let $m \ge 2$ and $n \ge 2$. Then OP$^\ast(n;m)$ has a solution if and only if $m|n$ and $(n, m) \not\in\{(4,4),(6,6),(6,3)\}$.* For $n$ odd, it is easy to see that a solution for OP$(m_1,\ldots,m_k)$ gives rise to a solution for OP$^\ast(m_1,\ldots,m_k)$; we simply take two copies of each 2-factor and direct the two copies of each cycle in two different ways. We thus have the following immediate corollary to Theorems [\[thm:OPsmall\]](#thm:OPsmall){reference-type="ref" reference="thm:OPsmall"} and [\[thm:Tra\]](#thm:Tra){reference-type="ref" reference="thm:Tra"}. **Corollary 4**. *[@AdaBry; @BryDan; @Dez; @FraHol; @FraRos; @Gvo; @Hag; @SalDra; @Tra][\[cor:OP\]]{#cor:OP label="cor:OP"} Let $3\le m_1 \le \ldots \le m_k$ be integers such that $n=m_1+\ldots + m_k$ is odd. If $(m_1, \ldots, m_k) \not\in \{ (4,5),(3,3,5) \}$, and $n < 60$ or $k=2$, then OP$^\ast(m_1,\ldots,m_k)$ has a solution.* Very little else is known about the non-uniform case; as far as we know, the only other previous results are solutions to OP$^\ast(4,5)$, OP$^\ast(3,3,5)$, and OP$^\ast(2^{\langle b \rangle},3)$ when $2b+3 \equiv 1,3$ or $7 \pmod{8}$ by Shabani and the second author [@ShaSaj]. The latter of these results will be subsumed by the much more general results of this paper.
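The doubling argument behind this corollary is fully constructive: each 2-factor of $K_n$ is taken twice, with every cycle traversed in one direction in the first copy and in the opposite direction in the second. A short sketch (cycles written as vertex lists; the function name is ours):

```python
def double_and_direct(two_factorization):
    """From a 2-factorization of K_n (each 2-factor a list of cycles,
    each cycle a vertex list), build the directed 2-factorization of
    K_n^* obtained by directing two copies of each cycle both ways."""
    directed = []
    for factor in two_factorization:
        directed.append([list(c) for c in factor])            # forward copy
        directed.append([list(reversed(c)) for c in factor])  # reversed copy
    return directed

# The unique 2-factorization of K_3 consists of a single triangle:
print(double_and_direct([[[0, 1, 2]]]))  # [[[0, 1, 2]], [[2, 1, 0]]]
```

Each ordered pair then occurs exactly once, because each unordered pair occurred exactly once in the undirected factorization.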
In this paper, we describe a method for constructing a solution to OP$^\ast(m_1,\ldots,m_k)$ given a solution to OP$^\ast(m_1,\ldots,m_\ell)$, for some $\ell<k$, if certain conditions on $m_1,\ldots,m_k$ are satisfied. Such an extension is the simplest in the bipartite case; that is, when all cycle lengths are even. **Theorem 5**. *Let $\ell$ and $k$ be integers such that $1 \le \ell < k$, and let $m_1,\ldots, m_k$ be even positive integers such that $m_1+\ldots+m_{\ell}=m_{\ell+1}+ \ldots + m_k$.* *If OP$^\ast(m_1,\ldots,m_{\ell})$ and OP$^\ast(m_{\ell+1},\ldots,m_k)$ both have solutions, then OP$^\ast(m_1,\ldots,m_k)$ has a solution.* More sophisticated conditions need to be imposed when the directed 2-factor contains odd cycles or the extension does not exactly double the number of vertices; see Proposition [Proposition 16](#prop:main){reference-type="ref" reference="prop:main"} and Corollary [Corollary 17](#cor:main){reference-type="ref" reference="cor:main"}. This more general approach enables us to extend a solution for OP$^\ast(m_1,\ldots,m_\ell)$ into solutions for OP$^\ast(m_1,\ldots,m_{\ell},t)$ and OP$^\ast(m_1,\ldots,m_{\ell},2^{\langle \frac{t}{2} \rangle})$, as elaborated below. **Theorem 6**. *Let $\ell$ and $m_1,\ldots, m_{\ell}$ be positive integers such that $m_i \ge 2$ for $i=1,\ldots, \ell$. Let $s=m_1+\ldots+m_{\ell}$ and let $t$ be an integer, $t>s$. Furthermore, let $a$ be the number of odd integers in the multiset $\{ \!\! \{ m_1,\ldots,m_{\ell} \} \!\! \}$, and assume that $a \le 2\lfloor \frac{t}{2} \rfloor -s$.* *If OP$^\ast(m_1,\ldots,m_{\ell})$ has a solution, then the following also have a solution:* (1) *OP$^\ast(m_1,\ldots,m_{\ell},t)$; and* (2) *if $t$ is even, OP$^\ast(m_1,\ldots,m_{\ell},2^{\langle \frac{t}{2} \rangle})$.* In particular, this approach allows us to obtain an almost-complete solution to the directed Oberwolfach problem with two tables. **Theorem 7**. *Let $m_1$ and $m_2$ be integers such that $2 \le m_1 \le m_2$.
Then OP$^\ast(m_1,m_2)$ has a solution if and only if $(m_1,m_2) \ne (3,3)$, with a possible exception in the case that $m_1 \in \{ 4,6 \}$, $m_2$ is even, and $m_1+m_2 \ge 14$.* This paper is organized as follows. In Section 2 we give the necessary terminology and other prerequisites, and in Section 3 we solve some special small cases of the problem that will be required to fill in the gaps in later, more general constructions. In the next two sections we introduce the main recursive approach to constructing solutions to larger cases of OP$^\ast$ from solutions to smaller cases; for bipartite directed 2-factors (Theorem [Theorem 5](#thm:main-even){reference-type="ref" reference="thm:main-even"}) in Section 4, and for general directed 2-factors (Proposition [Proposition 16](#prop:main){reference-type="ref" reference="prop:main"} and Corollary [Corollary 17](#cor:main){reference-type="ref" reference="cor:main"}) in Section 5. In Sections 6 and 7, we perform the heavy technical work that allows us to prove our main extension result, Theorem [Theorem 6](#the:main-ext){reference-type="ref" reference="the:main-ext"}, in Section 8. This theorem gives rise to many explicit existence results, including Theorem [Theorem 7](#thm:main2){reference-type="ref" reference="thm:main2"}, which are also presented in this section. Finally, in Section 9, we completely solve the directed Oberwolfach problem for orders up to 13. Solutions to small cases that do not arise from other results and are not used anywhere else are given in the appendix. # Prerequisites We use the symbol $\{ \!\! \{ . \} \!\! \}$ to denote multisets, and $\langle . \rangle$ in the exponent to denote the multiplicity of an element in a multiset or sequence. Thus, for example $\{ \!\! \{ 2,2,2,4 \} \!\! \}=\{ \!\! \{ 2^{\langle 3 \rangle},4^{\langle 1 \rangle} \} \!\! \}=\{ \!\! \{ 2^{\langle 3 \rangle},4 \} \!\! \}$ and $(2,2,2,4)=(2^{\langle 3 \rangle},4^{\langle 1 \rangle})=(2^{\langle 3 \rangle},4)$. 
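For readers who wish to experiment with the constructions computationally, this multiset notation corresponds directly to an element-to-multiplicity map; a small illustration (of the notation only):

```python
from collections import Counter

# The multiset {{2,2,2,4}} = {{2^<3>, 4}}: element 2 has multiplicity 3,
# and element 4 has multiplicity 1.
multiset = Counter([2, 2, 2, 4])
print(multiset[2], multiset[4])  # 3 1
```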
As usual, the vertex set and arc set of a directed graph (*digraph* for short) $D$ will be denoted $V(D)$ and $A(D)$, respectively. All digraphs in this paper are strict, that is, they have no loops and no parallel arcs. By $K_n$, $\bar{K}_n$, and $C_m$ we denote the complete graph of order $n$, the empty graph of order $n$, and the cycle of length $m$ ($m$-cycle), respectively. Analogously, by $K_n^*$ and $\vec{C}_m$ we denote the complete symmetric digraph of order $n$ and the directed cycle of length $m$ (directed $m$-cycle), respectively, while the symbol $\vec{P}_m$ will denote a directed path of length $m$; that is, a directed path with $m$ arcs. A disjoint union of digraphs $D_1$ and $D_2$ is denoted by $D_1 \dot{\cup} D_2$. A *decomposition* of a digraph $D$ is a set $\{ D_1, \ldots, D_k \}$ of subdigraphs of $D$ such that $\{ A(D_1), \ldots, A(D_k) \}$ is a partition of $A(D)$; in this case, we write $D= D_1 \oplus \ldots \oplus D_k$. A *$D'$-decomposition* of $D$, where $D'$ is a subdigraph of $D$, is a decomposition into subdigraphs isomorphic to $D'$. A *directed 2-factor* of a digraph $D$ is a spanning subdigraph of $D$ that is a disjoint union of directed cycles. In particular, a *$(\vec{C}_{m_1},\ldots,\vec{C}_{m_k})$-factor* of $D$ is a directed 2-factor of $D$ that is a disjoint union of $k$ directed cycles of lengths $m_1,\ldots,m_k$, respectively. A *$(\vec{C}_{m_1},\ldots,\vec{C}_{m_k})$-factorization* of $D$ is a decomposition of $D$ into $(\vec{C}_{m_1},\ldots,\vec{C}_{m_k})$-factors. A decomposition, 2-factor, $(C_{m_1},\ldots,C_{m_k})$-factor, and $(C_{m_1},\ldots,C_{m_k})$-factorization of a graph are defined analogously. If $D$ is a digraph, and $D_1,\ldots,D_k$ its subdigraphs, then we define a *$(D_1,\ldots,D_k)$-subdigraph* of $D$ as a subdigraph of $D$ that is a disjoint union of $k$ digraphs isomorphic to $D_1,\ldots,D_k$, respectively.
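As a sanity check on these definitions, the following sketch (helper name ours) decides whether a family of directed cycles, given as vertex lists, forms a $(\vec{C}_{m_1},\ldots,\vec{C}_{m_k})$-factor of a digraph on vertex set $\{0,\ldots,n-1\}$: it must span all vertices with pairwise disjoint cycles of exactly the prescribed lengths.

```python
def is_cycle_factor(n, lengths, cycles):
    """A (C_{m_1},...,C_{m_k})-factor must span all n vertices with
    pairwise disjoint directed cycles whose lengths are exactly
    the prescribed multiset `lengths`."""
    vertices = [v for cycle in cycles for v in cycle]
    if sorted(vertices) != list(range(n)):  # spanning and disjoint
        return False
    return sorted(len(c) for c in cycles) == sorted(lengths)

# A (C_2, C_4)-factor of a digraph on 6 vertices:
print(is_cycle_factor(6, [2, 4], [[1, 4], [0, 5, 2, 3]]))  # True
# Same cycles, wrong prescribed lengths:
print(is_cycle_factor(6, [3, 3], [[1, 4], [0, 5, 2, 3]]))  # False
```

(The check deliberately ignores which ambient digraph the arcs come from; it tests only the shape of the factor.)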
The *join* of vertex-disjoint digraphs $D_1$ and $D_2$, denoted $D_1 \bowtie D_2$, is the digraph with vertex set $V(D_1 \bowtie D_2)=V(D_1) \dot{\cup} V(D_2)$ and arc set $A(D_1 \bowtie D_2)= A(D_1) \cup A(D_2) \cup \{ (u_1,u_2),(u_2,u_1): u_1 \in V(D_1), u_2 \in V(D_2) \}$. Let $n$ be a positive integer and $S \subseteq \mathbb{Z}_n^\ast$. The *directed circulant* of order $n$ with connection set $S$, denoted $\overrightarrow{\rm Circ}(n;S)$, is the digraph with vertex set $V=\{ u_i: i \in \mathbb{Z}_n \}$ and arc set $A=\{ (u_i,u_{i+d}): i \in \mathbb{Z}_n, d \in S \}$. An arc of the form $(u_i,u_{i+d})$ in $\overrightarrow{\rm Circ}(n;S)$ is said to be of *difference* $d$. A subdigraph of $\overrightarrow{\rm Circ}(n;S)$ is called *$S$-orthogonal* if it contains exactly one arc of each difference in $S$. If $S=-S$, then ${\rm Circ}(n;S)$ will denote the corresponding undirected circulant graph. The following previous results will be used in our constructions. **Theorem 8**. *[@BerFavMah][\[the:BerFavMah\]]{#the:BerFavMah label="the:BerFavMah"} Let $n \in \mathbb{Z}^+$, and $a,b \in \mathbb{Z}_n^\ast$. If the circulant graph ${\rm Circ}(n;\{ \pm a,\pm b \})$ is connected and of degree $4$, then it admits a decomposition into two Hamilton cycles.* **Theorem 9**. *[@Wes][\[the:Wes\]]{#the:Wes label="the:Wes"} Let $n \in \mathbb{Z}^+$ be even, and $a,b,c \in \mathbb{Z}_n^\ast$. If $\gcd(n,a,b)\gcd(n,c)=2$ and the circulant graph ${\rm Circ}(n;\{ \pm a,\pm b, \pm c \})$ is of degree 6, then it admits a decomposition into three Hamilton cycles.* # 1-rotational constructions: special cases In this section, we exhibit direct constructions of solutions to some special cases of OP$^\ast$ that will be required in more general results before Section [9](#sec:small){reference-type="ref" reference="sec:small"}. The next definition and lemma provide a framework for a generalized version of the well-known 1-rotational construction. **Definition 10**.
* Let $q$ and $n$ be positive integers such that $q|(n-1)$, and let $K_n^\ast=K_{n-1}^\ast \bowtie K_1$, where $V(K_{n-1})=\{ u_i: i \in \mathbb{Z}_{n-1} \}$ and $V(K_1)=\{ u_{\infty} \}$. The *base-$q$ differences* of arcs $(u_i,u_j)$, $(u_i,u_{\infty})$, and $(u_{\infty},u_i)$ are defined to be $d_r$, $\infty_r$, and $-\infty_r$, respectively, where $d \in \mathbb{Z}_{n-1}$ and $r \in \mathbb{Z}_q$ are such that $j-i \equiv d \pmod{n-1}$ and $i \equiv r \pmod{q}$. When $q=1$, we write simply $d$, $\infty$, and $-\infty$ instead of $d_0$, $\infty_0$, and $-\infty_0$, respectively.* **Lemma 11**. *For $i=1,\ldots,k$, let $m_i \ge 2$ be an integer, and let $n=m_1+\ldots +m_k$. Furthermore, let $q$ be a positive divisor of $n-1$.* *If $K_{n-1}^\ast \bowtie K_1$ admits $(\vec{C}_{m_1},\ldots,\vec{C}_{m_k})$-factors $F_0,F_1,\ldots,F_{q-1}$ that jointly contain exactly one arc of each base-$q$ difference in the set $\{ d_r: d \in \mathbb{Z}_{n-1}^\ast \cup \{ \infty, -\infty \}, r \in \mathbb{Z}_q \}$, then $K_n^\ast$ admits a $(\vec{C}_{m_1},\ldots,\vec{C}_{m_k})$-factorization.* . With the set-up of Definition [Definition 10](#def:base=q){reference-type="ref" reference="def:base=q"}, let $\rho$ be the permutation $\rho=( u_0 \, u_1 \, \ldots \, u_{n-2}) ( u_{\infty})$. Since $\rho^q$ preserves base-$q$ differences of the arcs, we can easily see that the $(\vec{C}_{m_1},\ldots,\vec{C}_{m_k})$-factors in $\{ \rho^{qi}(F_j): i \in\mathbb{Z}_{\frac{n-1}{q}}, j \in \mathbb{Z}_{q} \}$ jointly contain each arc of $K_n^\ast$ exactly once, and the result follows. Note that directed 2-factors $F_0,F_1,\ldots,F_{q-1}$ from Lemma [Lemma 11](#lem:1-rotational-q){reference-type="ref" reference="lem:1-rotational-q"} are often referred to as *starter 2-factors* of the $(\vec{C}_{m_1},\ldots,\vec{C}_{m_k})$-factorization. **Lemma 12**. 
*The following problems have a solution:* - *OP$^\ast(4,5)$, OP$^\ast(4,6)$, OP$^\ast(2^{\langle 3 \rangle},4)$, OP$^\ast(4,8)$, and OP$^\ast(2,4,6)$;* - *OP$^\ast(2^{\langle b \rangle},3^{\langle 2 \rangle})$ for $1 \le b \le 5$; and* - *OP$^\ast(2^{\langle b \rangle},6)$ for $2 \le b \le 4$.* . We shall use the set-up from Definition [Definition 10](#def:base=q){reference-type="ref" reference="def:base=q"}. In all cases except for OP$^\ast(4,6)$ and OP$^\ast(4,8)$, we have $q=1$. It thus suffices to exhibit a $(\vec{C}_{m_1},\ldots,\vec{C}_{m_k})$-factor that contains exactly one arc of each difference in $\mathbb{Z}_{n-1}^\ast \cup \{ \infty, -\infty \}$. The following are - a $(\vec{C}_{4},\vec{C}_{5})$-factor of $K_9^\ast$: $u_0 \, u_1 \, u_5 \, u_3 \, u_0 \; \cup \; u_2 \, u_4 \, u_7 \, u_6 \, u_{\infty} u_2$; - a $(\vec{C}_{2}^{\langle 3 \rangle},\vec{C}_{4})$-factor of $K_{10}^\ast$: $u_0 \, u_{\infty} \, u_0 \; \cup \; u_1 \, u_3 \, u_1 \; \cup \; u_5 \, u_6 \, u_5 \; \cup \; u_2 \, u_7 \, u_4 \, u_8 \, u_2$; - a $(\vec{C}_{2},\vec{C}_{4},\vec{C}_{6})$-factor of $K_{12}^\ast$: $u_0 \, u_{\infty} \, u_0 \; \cup \; u_3 \, u_7 \, u_9 \, u_8 \, u_3 \; \cup \; u_1 \, u_2 \, u_5 \; u_{10} \, u_6 \, u_4 \, u_1$; - a $(\vec{C}_{2},\vec{C}_{3}^{\langle 2 \rangle})$-factor of $K_8^\ast$: $u_0 \, u_{\infty} \, u_0 \; \cup \; u_1 \, u_2 \, u_4 \, u_1 \; \cup \; u_6 \, u_5 \, u_3 \, u_6$; - a $(\vec{C}_{2}^{\langle 2 \rangle},\vec{C}_{3}^{\langle 2 \rangle})$-factor of $K_{10}^\ast$: $u_0 \, u_{\infty} \, u_0 \; \cup \; u_2 \, u_7 \, u_2 \; \cup \; u_1 \, u_3 \, u_4 \, u_1 \; \cup \; u_5 \, u_8 \, u_6 \, u_5$; - a $(\vec{C}_{2}^{\langle 3 \rangle}, \vec{C}_{3}^{\langle 2 \rangle})$-factor of $K_{12}^\ast$: $u_0 \, u_{\infty} \, u_0 \; \cup \; u_1 \, u_4 \, u_1 \; \cup \; u_5 \, u_6 \, u_5 \; \cup \; u_3 \, u_{10} \, u_8 \, u_3 \; \cup \; u_2 \, u_7 \, u_9 \, u_2$; - a $(\vec{C}_{2}^{\langle 4 \rangle}, \vec{C}_{3}^{\langle 2 \rangle})$-factor of $K_{14}^\ast$:\ $u_0 \, 
u_{\infty} \, u_0 \; \cup \; u_6 \, u_7 \, u_6 \; \cup \; u_1 \, u_5 \, u_1 \; \cup \; u_2 \, u_{12} \, u_2 \; \cup \; u_3 \, u_{10} \, u_8 \, u_3 \; \cup \; u_4 \, u_{9} \, u_{11} \, u_4$; - a $(\vec{C}_{2}^{\langle 5 \rangle}, \vec{C}_{3}^{\langle 2 \rangle})$-factor of $K_{16}^\ast$:\ $u_0 \, u_{\infty} \, u_0 \; \cup \; u_4 \, u_8 \, u_4 \; \cup \; u_7 \, u_{12} \, u_7 \; \cup \; u_3 \, u_{6} \, u_3 \; \cup \; u_9 \, u_{11} \, u_9 \; \cup \; u_1 \, u_{10} \, u_2 \, u_1 \; \cup \; u_5 \, u_{13} \, u_{14} \, u_5$; - a $(\vec{C}_{2}^{\langle 2 \rangle},\vec{C}_{6})$-factor of $K_{10}^\ast$: $u_0 \, u_{\infty} \, u_0 \; \cup \; u_1 \, u_6 \, u_1 \; \cup \; u_2 \, u_3 \, u_5 \; u_8 \, u_7 \, u_4 \, u_2$; - a $(\vec{C}_{2}^{\langle 3 \rangle},\vec{C}_{6})$-factor of $K_{12}^\ast$: $u_0 \, u_{\infty} \, u_0 \; \cup \; u_1 \, u_6 \, u_1 \; \cup \; u_7 \, u_{10} \, u_7 \; \cup \; u_2 \, u_3 \, u_5 \; u_9 \, u_8 \, u_4 \, u_2$; - a $(\vec{C}_{2}^{\langle 4 \rangle},\vec{C}_{6})$-factor of $K_{14}^\ast$:\ $u_0 \, u_{\infty} \, u_0 \; \cup \; u_1 \, u_6 \, u_1 \; \cup \; u_3 \, u_{9} \, u_3 \; \cup \; u_{12} \, u_{2} \, u_{12} \; \cup \; u_4 \, u_5 \, u_7 \; u_{11} \, u_{10} \, u_8 \, u_4$, all with the required properties. For OP$^\ast(4,6)$, we have $n=10$, and we choose $q=3$. By Lemma [Lemma 11](#lem:1-rotational-q){reference-type="ref" reference="lem:1-rotational-q"}, it suffices to find three directed 2-factors that jointly contain exactly one arc of each difference in $\{ d_r: d \in \{ \pm 1, \pm 2, \pm 3, \pm 4, \pm \infty \}, r \in \mathbb{Z}_3 \}$. 
It is not difficult to verify that the following $(\vec{C}_{4},\vec{C}_{6})$-factors of $K_{10}^\ast$ satisfy the requirement: $$\begin{aligned} F_0 &=& u_1 \, u_5 \, u_8 \, u_6 \, u_1 \; \cup \; u_0 \, u_2 \, u_4 \, u_3 \, u_{\infty} \, u_7 \, u_0, \\ F_1 &=& u_2 \, u_7 \, u_3 \, u_6 \, u_2 \; \cup \; u_0 \, u_8 \, u_5 \, u_4 \, u_1 \, u_{\infty} \, u_0, \\ F_2 &=& u_0 \, u_7 \, u_1 \, u_8 \, u_0 \; \cup \; u_2 \, u_6 \, u_3 \, u_4 \, u_5 \, u_{\infty} \, u_2. \end{aligned}$$ For OP$^\ast(4,8)$, no 1-rotational solution with $q=1$ exists (see the paragraphs following the proof of Theorem [Theorem 28](#thm:small){reference-type="ref" reference="thm:small"}). The following directed 2-factors form a solution without symmetry: $$\begin{aligned} F_0 &=& u_3 \, u_{10} \, u_4 \, u_{\infty} \, u_3 \; \cup \; u_0 \, u_2 \, u_9 \, u_5 \, u_ 1 \, u_7 \, u_6 \, u_8 \, u_0, \\ F_1 &=& u_1 \, u_4 \, u_7 \, u_3 \, u_1 \; \cup \; u_0 \, u_{\infty} \, u_5 \, u_9 \, u_ 2 \, u_{10} \, u_8 \, u_6 \, u_0, \\ F_2 &=& u_0 \, u_7 \, u_5 \, u_3 \, u_0 \; \cup \; u_1 \, u_2 \, u_{\infty} \, u_9 \, u_{10} \, u_6 \, u_4 \, u_8 \, u_1, \\ F_3 &=& u_3 \, u_8 \, u_4 \, u_6 \, u_3 \; \cup \; u_0 \, u_5 \, u_{10} \, u_2 \, u_ 1 \, u_9 \, u_{\infty} \, u_7 \, u_0, \\ F_4 &=& u_0 \, u_3 \, u_{\infty} \, u_1 \, u_0 \; \cup \; u_2 \, u_6 \, u_9 \, u_8 \, u_ 7 \, u_{10} \, u_5 \, u_4 \, u_2, \\ F_5 &=& u_0 \, u_{10} \, u_3 \, u_9 \, u_0 \; \cup \; u_1 \, u_5 \, u_8 \, u_{\infty} \, u_6 \, u_7 \, u_2 \, u_4 \, u_1, \\ F_6 &=& u_0 \, u_8 \, u_{10} \, u_{\infty} \, u_0 \; \cup \; u_1 \, u_3 \, u_4 \, u_5 \, u_6 \, u_2 \, u_7 \, u_9 \, u_1, \\ F_7 &=& u_3 \, u_5 \, u_7 \, u_8 \, u_3 \; \cup \; u_0 \, u_9 \, u_4 \, u_{10} \, u_ 1 \, u_6 \, u_{\infty} \, u_2 \, u_0, \\ F_8 &=& u_4 \, u_9 \, u_7 \, u_{\infty} \, u_4 \; \cup \; u_0 \, u_1 \, u_8 \, u_5 \, u_ 2 \, u_3 \, u_6 \, u_{10} \, u_0, \\ F_9 &=& u_2 \, u_5 \, u_{\infty} \, u_8 \, u_2 \; \cup \; u_0 \, u_6 \, u_1 \, u_{10} \, u_9 \, u_3 \, u_7 \, u_4 \, 
u_0, \\ F_{10} &= u_1 \, u_{\infty} \, u_{10} \, u_7 \, u_1 \; \cup \; u_0 \, u_4 \, u_3 \, u_2 \, u_8 \, u_9 \, u_6 \, u_5 \, u_0.\end{aligned}$$ We remark that a solution to OP$^\ast(4,5)$ (as well as to OP$^\ast(3,3,5)$, given in Appendix [11.1](#app1){reference-type="ref" reference="app1"}) has been previously constructed by Shabani and the second author [@ShaSaj], while a solution to OP$^\ast(4,6)$ of a different form has been obtained by Lacaze-Masmonteil [@Lac2]. # A recursive construction of solutions to OP$^\ast$: the bipartite case In this section, we begin to describe a method for constructing solutions to larger cases of OP$^\ast$ from solutions to smaller cases. This method is particularly simple and effective when the directed 2-factor is bipartite; that is, when all its cycle lengths are even. We re-state Theorem [Theorem 5](#thm:main-even){reference-type="ref" reference="thm:main-even"} for convenience. Let $\ell$ and $k$ be integers such that $1 \le \ell < k$, and let $m_1,\ldots, m_k$ be even positive integers such that $m_1+\ldots+m_{\ell}=m_{\ell+1}+ \ldots + m_k$. If OP$^\ast(m_1,\ldots,m_{\ell})$ and OP$^\ast(m_{\ell+1},\ldots,m_k)$ both have solutions, then OP$^\ast(m_1,\ldots,m_k)$ has a solution. Let $n=m_1+\ldots+m_{\ell}=m_{\ell+1}+ \ldots + m_k$. We need to show that $K_{2n}^\ast$ admits a $(\vec{C}_{m_1},\ldots,\vec{C}_{m_k})$-factorization. Let $D=K_{2n}^\ast= D_1 \bowtie D_2$, where $D_1 \cong D_2 \cong K_n^\ast$, and $$V(D_1)=X=\{ x_i: i \in \mathbb{Z}_n \} \quad \mbox{ and } \quad V(D_2)=Y=\{ y_i: i \in \mathbb{Z}_n \}.$$ Let $\rho$ be the cyclic permutation $\rho=(y_0 \, y_1 \, \ldots \, y_{n-1} )$ on $X \cup Y$ that fixes the vertices in $X$ pointwise. We first decompose $D= (D_1 \dot{\cup} D_2) \oplus (\bar{D}_1 \bowtie\bar{D}_2)$.
By assumption, digraphs $D_1$ and $D_2$ admit a $(\vec{C}_{m_1},\ldots,\vec{C}_{m_{\ell}})$-factorization and a $(\vec{C}_{m_{\ell+1}},\ldots,\vec{C}_{m_k})$-factorization, respectively, each with $n-1$ directed 2-factors. Hence $D_1 \dot{\cup} D_2$ admits a $(\vec{C}_{m_{1}},\ldots,\vec{C}_{m_k})$-factorization, and it remains only to construct a $(\vec{C}_{m_{1}},\ldots,\vec{C}_{m_k})$-factorization of $\bar{D_1} \bowtie\bar{D_2}$. Let ${\mathcal{P}}_1=\{ X_1,\ldots,X_k \}$ and ${\mathcal{P}}_2=\{ Y_1,\ldots,Y_k \}$ be partitions of $X$ and $Y$, respectively, such that $|X_i|=|Y_i|=\frac{m_i}{2}$ for $i=1,\ldots,k$. For each $i$, relabel the vertices in $X_i$ and $Y_i$ as $X_i=\{ x_1^{(i)},\ldots, x_{\frac{m_i}{2}}^{(i)} \}$ and $Y_i=\{ y_1^{(i)},\ldots, y_{\frac{m_i}{2}}^{(i)} \}$, and let $$C^{(i)}= x_1^{(i)} \, y_1^{(i)} \, x_2^{(i)} \, y_2^{(i)} \, \ldots \, x_{\frac{m_i}{2}}^{(i)} \, y_{\frac{m_i}{2}}^{(i)} \, x_1^{(i)}.$$ So $C^{(i)}$ is a directed $m_i$-cycle with arcs in $(X \times Y) \cup (Y \times X)$, and $F=C^{(1)} \cup \ldots \cup C^{(k)}$ is a $(\vec{C}_{m_1},\ldots,\vec{C}_{m_k})$-factor of $\bar{D_1} \bowtie\bar{D_2}$. Since every vertex in $X$ is the tail and the head of exactly one arc of $F$, the directed 2-factors in ${\mathcal{D}}=\{ \rho^i(F): i \in \mathbb{Z}_n \}$ jointly contain each arc in $(X \times Y) \cup (Y \times X)$ exactly once. Hence ${\mathcal{D}}$ is a $(\vec{C}_{m_{1}},\ldots,\vec{C}_{m_k})$-factorization of $\bar{D_1} \bowtie\bar{D_2}$. **Corollary 13**. *Let $a$ and $b$ be positive integers, and $s$ and $t$ be even positive integers such that $as=bt$. Then OP$^\ast( s^{\langle a \rangle}, t^{\langle b \rangle} )$ has a solution.* . Assume first that $(a,s),(b,t) \not\in \{ (1,4),(1,6) \}$, so that by Theorem [\[thm:Kn\*\]](#thm:Kn*){reference-type="ref" reference="thm:Kn*"}, both OP$^\ast(as;s)$ and OP$^\ast(bt;t)$ have solutions.
Since $as=bt$ and $s,t$ are even, it follows from Theorem [Theorem 5](#thm:main-even){reference-type="ref" reference="thm:main-even"} that OP$^\ast( s^{\langle a \rangle}, t^{\langle b \rangle} )$ has a solution. The remaining cases are OP$^\ast(4,4)$ and OP$^\ast(6,6)$, which have solutions by Theorem [\[thm:Kn\*\]](#thm:Kn*){reference-type="ref" reference="thm:Kn*"}, as well as OP$^\ast(2^{\langle 2 \rangle},4)$ and OP$^\ast(2^{\langle 3 \rangle},6)$, which have solutions by Lemmas [Lemma 18](#lem:(2,2,4)){reference-type="ref" reference="lem:(2,2,4)"} and [Lemma 12](#lem:special){reference-type="ref" reference="lem:special"}, respectively. A repeated application of Theorem [Theorem 5](#thm:main-even){reference-type="ref" reference="thm:main-even"} and Corollary [Corollary 13](#cor:even){reference-type="ref" reference="cor:even"}, respectively, immediately yields the following two results. **Corollary 14**. *Let $M=\{ \!\! \{ m_1,\ldots,m_k \} \!\! \}$ be a multiset of even positive integers with a partition into multisets $P_1,\ldots,P_{\ell}$ with the following properties:* (i) *$\ell=2^a$ for some $a \in \mathbb{Z}^+$;* (ii) *$\sum_{m \in P_i} m=\sum_{m \in P_j} m$ for all $i,j \in \{ 1,2,\ldots,\ell\}$; and* (iii) *for each $i=1,2,\ldots ,\ell$, if $P_i=\{ \!\! \{ m_{1,i},\ldots,m_{k_i,i} \} \!\! \}$, then OP$^\ast(m_{1,i},\ldots,m_{k_i,i})$ has a solution.* *Then OP$^\ast(m_1,\ldots,m_k)$ has a solution.* **Corollary 15**. *Let $\ell=2^a$ for $a \in \mathbb{Z}^+$, and for each $i=1,2,\ldots,\ell$, let $a_i$ and $s_i$ be positive integers with $s_i$ even. If $a_is_i=a_js_j$ for all $i$ and $j$, then OP$^\ast( s_1^{\langle a_1 \rangle}, \ldots, s_{\ell}^{\langle a_{\ell} \rangle} )$ has a solution.* # A recursive construction of solutions to OP$^\ast$: the general case In this section, we shall generalize the idea of constructing solutions to larger cases of OP$^\ast$ from smaller ones by allowing cycles of odd length. 
In this case, the two "parts" of the construction will necessarily be of unequal size. **Proposition 16**. *Let $\ell$, $k$, and $m_1,\ldots, m_k$ be integers such that $1 \le \ell < k$ and $m_i \ge 2$ for $i=1,\ldots, k$. Let $s=m_1+\ldots+m_{\ell}$ and $t=m_{\ell+1}+\ldots+m_{k}$, and assume $s<t$.* *Then OP$^\ast(m_1,\ldots,m_k)$ has a solution if the following conditions all hold.* (1) *OP$^\ast(m_1,\ldots,m_{\ell})$ has a solution.* (2) *There exist a decomposition $\{D_1,D_2\}$ of $K_t^\ast$ and non-negative integers $s_1, \ldots, s_k, t_1, \ldots, t_k$ such that:* (a) *$D_1$ admits a $(\vec{C}_{m_{\ell+1}},\ldots,\vec{C}_{m_{k}})$-factorization with exactly $s-1$ directed 2-factors;* (b) *$s_1+ \ldots + s_k=s$;* (c) *$m_i=2s_i+t_i$ for $i=1,2,\ldots,k$; and* (d) *$D_2$ admits a decomposition ${\mathcal{D}}=\{ H_0,\ldots,H_{t-1} \}$ such that for all $j \in \mathbb{Z}_t$,* - *$V(H_j)=\rho^j(V(H_0))$, where $\rho$ is a cyclic permutation of order $t$ on $V(K_t^\ast)$;* - *$H_j= D_1^{(j)} \dot{\cup} \ldots \dot{\cup} D_k^{(j)}$, where for each $i=1,\ldots,k$,* 1. *$D_i^{(j)} \cong \vec{P}_{t_i}$ if $t_i<m_i$, and $D_i^{(j)} \cong \vec{C}_{m_i}$ if $t_i=m_i$;* 2. *if $D_i^{(0)}$ is a directed $(x,y)$-path for some vertices $x$ and $y$, then $D_i^{(j)}$ is a directed $(\rho^j(x),\rho^j(y))$-path.* . Assume Conditions (1) and (2) hold. We need to show that $K_{s+t}^\ast$ admits a $(\vec{C}_{m_1},\ldots,\vec{C}_{m_k})$-factorization. Let $D=K_{s+t}^\ast= K_s^\ast \bowtie K_t^\ast$, where $$V(K_s^\ast)=X=\{ x_i: i \in \mathbb{Z}_s \} \quad \mbox{ and } \quad V(K_t^\ast)=Y=\{ y_i: i \in \mathbb{Z}_t \}.$$ Let $\rho$ be the cyclic permutation $\rho=(y_0 \, y_1 \, \ldots \, y_{t-1} )$ on $X \cup Y$ that fixes the vertices in $X$ pointwise. Construct subdigraphs $L_1,\ldots,L_k$ of $D$ recursively as follows. Assuming $L_1,\ldots,L_{i-1}$ have already been constructed, we use the subdigraph $D_i^{(0)}$ of $H_0$ to construct $L_i$ as follows. 
(i) If $t_i=m_i$, then $D_i^{(0)} \cong \vec{C}_{m_i}$, and we let $L_i=D_i^{(0)}$. (ii) If $t_i=0$, then $D_i^{(0)} \cong \vec{P}_{0}$. Choose any vertices $u_0,\ldots,u_{s_i-1} \in X - \bigcup_{j=1}^{i-1} V(L_j)$ and $v_0,\ldots,v_{s_i-1} \in Y - \bigcup_{j=1}^{i-1} V(L_j) - \bigcup_{j=i+1}^k V(D_j^{(0)})$. Then let $L_i$ be the directed cycle $$L_i=u_0 \, v_0 \, u_1 \, v_1 \, \ldots \, u_{s_i-1} \, v_{s_i-1} \, u_0.$$ (iii) Otherwise, we have $0 < t_i < m_i$, and $D_i^{(0)}$ is a directed $t_i$-path, say $D_i^{(0)}=v_0 \, v_1 \, \ldots \, v_{t_i}$, for some $v_0, v_1, \ldots, v_{t_i} \in Y$. Choose any vertices $u_0,\ldots,u_{s_i-1} \in X - \bigcup_{j=1}^{i-1} V(L_j)$ and $v_{t_i+1}, v_{t_i+2}, \ldots, v_{t_i+s_i-1} \in Y - \bigcup_{j=1}^{i-1} V(L_j) - \bigcup_{j=i}^k V(D_j^{(0)})$. Then let $L_i$ be the directed cycle $$L_i= D_i^{(0)} \, v_{t_i} \, u_0 \, v_{t_i+1} \, u_1 \, v_{t_i+2} \, u_2 \, \ldots \, v_{t_i+s_i-1} \, u_{s_i-1} \, v_0.$$ Note that since $\sum_{i=1}^k s_i=s$ and $$\sum_{i=1}^k (s_i+t_i)= \sum_{i=1}^k (m_i-s_i)= \sum_{i=1}^k m_i - \sum_{i=1}^k s_i=(s+t)-s=t,$$ the required vertices can indeed be found. Observe that in all three cases, for each $i=1,2,\ldots,k$, the constructed digraph $L_i$ is a directed $m_i$-cycle with $s_i$ vertices in $X$ and $s_i+t_i$ vertices in $Y$. The digraphs $L_1,\ldots,L_k$ are pairwise disjoint, so $F_0=L_1 \cup \ldots \cup L_k$ is a $(\vec{C}_{m_1},\ldots,\vec{C}_{m_k})$-factor of $D$. For $j \in \mathbb{Z}_t$, obtain $F_j$ from $F_0$ by first applying $\rho^j$ to $F_0$, and then for each $i=1,2,\ldots,k$ such that $t_i>0$, replacing $\rho^j(D_i^{(0)})$ with $D_i^{(j)}$. By Assumption (d), without loss of generality, we have that $V(D_i^{(j)})=V(\rho^j(D_i^{(0)}))$ for each $i$, and if $D_i^{(0)}$ is a directed path, then $D_i^{(j)}$ is a directed path with the same source and the same terminus as $\rho^j(D_i^{(0)})$. It follows that $F_j$ is also a $(\vec{C}_{m_1},\ldots,\vec{C}_{m_k})$-factor of $D$.
We claim that ${\mathcal{F}}=\{ F_j: j \in \mathbb{Z}_t \}$ is a $(\vec{C}_{m_1},\ldots,\vec{C}_{m_k})$-factorization of $\bar{K}_s \bowtie D_2$. Observe that for each $j \in \mathbb{Z}_t$, $$A(F_j)= \rho^j( A_0) \cup A(H_j),$$ where $A_0= \left( A(F_0) \cap (X \times Y) \right) \cup \left( A(F_0) \cap (Y \times X) \right).$ For any vertex $x \in X$, the indegree and outdegree of $x$ in $F_0$ is 1. Therefore each arc incident with $x$ is covered exactly once in $\bigcup_{j=0}^{t-1} \rho^j(A_0)$. Since by assumption ${\mathcal{D}}=\{ H_0,\ldots,H_{t-1} \}$ decomposes $D_2$, it follows that ${\mathcal{F}}=\{ F_j: j \in \mathbb{Z}_t \}$ decomposes $\bar{K}_s \bowtie D_2$. Finally, let $\{ F_1^{[s]},\ldots, F_{s-1}^{[s]} \}$ and $\{ F_1^{[t]},\ldots, F_{s-1}^{[t]} \}$ be a $(\vec{C}_{m_1},\ldots,\vec{C}_{m_{\ell}})$-factorization of $K_s^\ast$ and a $(\vec{C}_{m_{\ell+1}},\ldots,\vec{C}_{m_k})$-factorization of $D_1$, respectively. Then $${\mathcal{F}}'=\{ F_i^{[s]} \cup F_i^{[t]}: i=1,2,\ldots,s-1 \}$$ is a $(\vec{C}_{m_{1}},\ldots,\vec{C}_{m_k})$-factorization of $K_s^\ast \dot{\cup} D_1$, and ${\mathcal{F}}\cup {\mathcal{F}}'$ is a $(\vec{C}_{m_{1}},\ldots,\vec{C}_{m_k})$-factorization of $D$. Corollary [Corollary 17](#cor:main){reference-type="ref" reference="cor:main"} below is a simpler (but slightly more limited) version of Proposition [Proposition 16](#prop:main){reference-type="ref" reference="prop:main"}. For most of our recursive constructions, Corollary [Corollary 17](#cor:main){reference-type="ref" reference="cor:main"} will suffice; however, to solve OP$^\ast(2,2,4)$ (see Lemma [Lemma 18](#lem:(2,2,4)){reference-type="ref" reference="lem:(2,2,4)"} below), Proposition [Proposition 16](#prop:main){reference-type="ref" reference="prop:main"} will be required in full generality. **Corollary 17**. *Let $\ell$, $k$, and $m_1,\ldots, m_k$ be integers such that $1 \le \ell < k$ and $m_i \ge 2$ for $i=1,\ldots, k$.
Let $s=m_1+\ldots+m_{\ell}$ and $t=m_{\ell+1}+\ldots+m_{k}$, and assume $s<t$.* *Then OP$^\ast(m_1,\ldots,m_k)$ has a solution if the following conditions all hold.* (1) *OP$^\ast(m_1,\ldots,m_{\ell})$ has a solution.* (2) *There exist a set $S \subseteq \mathbb{Z}_t^\ast$ and non-negative integers $s_1, \ldots, s_k, t_1, \ldots, t_k$ such that:* (a) *$\overrightarrow{\rm Circ}(t;\mathbb{Z}_t^\ast -S)$ admits a $(\vec{C}_{m_{\ell+1}},\ldots,\vec{C}_{m_{k}})$-factorization;* (b) *$s_1+ \ldots + s_k=s$;* (c) *$m_i=2s_i+t_i$ for $i=1,2,\ldots,k$; and* (d) *$\overrightarrow{\rm Circ}(t;S)$ admits an $S$-orthogonal $(D_1,\ldots,D_k)$-subdigraph such that, for all $i=1,2,\ldots,k$, we have that $D_i \cong \vec{P}_{t_i}$ if $t_i<m_i$, and $D_i \cong \vec{C}_{m_i}$ if $t_i=m_i$.* . Assume Conditions (1) and (2) hold, label the vertices of $D=K_s^\ast \bowtie K_t^\ast$ as in the proof of Proposition [Proposition 16](#prop:main){reference-type="ref" reference="prop:main"}, and let $\rho=(y_0 \, y_1 \, \ldots \, y_{t-1} )$. Let $D_1=\overrightarrow{\rm Circ}(t;\mathbb{Z}_t^\ast -S)$ and $D_2=\overrightarrow{\rm Circ}(t;S)$, so $K_t^\ast=D_1 \oplus D_2$. Let $D'$ be an $S$-orthogonal $(D_1,\ldots,D_k)$-subdigraph of $\overrightarrow{\rm Circ}(t;S)$ satisfying Condition (2d). Then ${\mathcal{D}}=\{ \rho^j(D'): j \in \mathbb{Z}_t \}$ is a decomposition of $\overrightarrow{\rm Circ}(t;S)$ satisfying Condition (2d) of Proposition [Proposition 16](#prop:main){reference-type="ref" reference="prop:main"}. Observe that $$|A(D')|=\sum_{i=1}^k t_i=\sum_{i=1}^k (m_i-2s_i)=\sum_{i=1}^k m_i -2\sum_{i=1}^k s_i= (s+t)-2s=t-s,$$ and so $|S|=t-s$ and $|\mathbb{Z}_t^\ast-S|=t-1-(t-s)=s-1$. Thus any $(\vec{C}_{m_{\ell+1}},\ldots,\vec{C}_{m_{k}})$-factorization of $D_1$ indeed contains exactly $s-1$ directed 2-factors. By Proposition [Proposition 16](#prop:main){reference-type="ref" reference="prop:main"}, it follows that OP$^\ast(m_1,\ldots,m_k)$ has a solution. **Lemma 18**.
*OP$^\ast(2,2,4)$ has a solution.* . We use Proposition [Proposition 16](#prop:main){reference-type="ref" reference="prop:main"} with $m_1=m_2=2$, $m_3=4$, $\ell=1$ and $k=3$. Hence $s=2$ and $t=6$. By Proposition [Proposition 16](#prop:main){reference-type="ref" reference="prop:main"}, since OP$^\ast(2)$ has a solution, it suffices to find a decomposition $\{ D_1,D_2\}$ of $K_6^\ast$ and non-negative integers $s_1,s_2,s_3,t_1,t_2,t_3$ satisfying Condition (2). We take $s_1=s_2=1$, $s_3=0$, $t_1=t_2=0$, and $t_3=4$, so Conditions (2b) and (2c) hold. Let $V(K_6^\ast)=\{ y_0,\ldots,y_5 \}$ and $\rho=( y_0 \, y_1 \, \ldots \, y_5)$. Let $$D_1=y_1 \, y_4 \, y_1 \; \cup \; y_0 \, y_5 \, y_2 \, y_3 \, y_0,$$ and let $D_2$ be its complement in $K_6^\ast$. Note that $D_1$ is a $(\vec{C}_2,\vec{C}_4)$-factor of $K_6^\ast$, thus satisfying Condition (2a). Define the following $(\vec{P}_0,\vec{P}_0,\vec{C}_4)$-subdigraphs of $D_2$: $$\begin{aligned} H_0 &= y_0 \;\cup\; y_1 \;\cup\; y_2 \, y_5 \, y_3 \, y_4 \, y_2, \qquad\qquad\qquad H_3 &=\rho(H_2), \\ H_1 &= y_1 \;\cup\; y_2 \;\cup\; y_3 \, y_5 \, y_4 \, y_0 \, y_3, \qquad\qquad\qquad H_4 &=\rho^2(H_2), \\ H_2 &= y_2 \;\cup\; y_3 \;\cup\; y_4 \, y_5 \, y_1 \, y_0 \, y_4, \qquad\qquad\qquad H_5 &=\rho^3(H_2).\end{aligned}$$ It is not difficult to verify that $\{ H_i: i \in \mathbb{Z}_6 \}$ is a decomposition of $D_2$ satisfying Condition (2d) of Proposition [Proposition 16](#prop:main){reference-type="ref" reference="prop:main"}. Hence OP$^\ast(2,2,4)$ has a solution. # Extending the 2-factor by a single cycle: technical lemmas In this section, we shall accomplish the heavy technical work that, using Corollary [Corollary 17](#cor:main){reference-type="ref" reference="cor:main"}, will allow us to obtain a solution to OP$^\ast(m_1,\ldots,m_{\ell},t)$ from a solution to OP$^\ast(m_1,\ldots,m_{\ell})$ when $t$ is sufficiently large; see Theorem [Theorem 6](#the:main-ext){reference-type="ref" reference="the:main-ext"}(1).
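The circulant machinery used from this point on is also easy to experiment with. The sketch below is a direct rendering of the definitions of $\overrightarrow{\rm Circ}(n;S)$ and $S$-orthogonality (function names are ours); it is an illustration only, not part of any proof:

```python
def circulant_arcs(n, S):
    """Arc set of the directed circulant Circ(n; S): one arc of each
    difference d in S out of every vertex."""
    return {(i, (i + d) % n) for i in range(n) for d in S}

def is_S_orthogonal(n, S, arcs):
    """S-orthogonal: the subdigraph contains exactly one arc of each
    difference in S (differences read modulo n)."""
    if not set(arcs) <= circulant_arcs(n, S):
        return False
    differences = sorted((v - u) % n for (u, v) in arcs)
    return differences == sorted(d % n for d in S)

# In Circ(6; {1, 2}), the directed path 0 -> 1 -> 3 is {1,2}-orthogonal:
print(is_S_orthogonal(6, {1, 2}, [(0, 1), (1, 3)]))  # True
# The arcs (0,1) and (2,3) both have difference 1, so orthogonality fails:
print(is_S_orthogonal(6, {1, 2}, [(0, 1), (2, 3)]))  # False
```

Reading differences modulo $n$ lets the same check handle negative elements of a connection set such as $\{-1\} \cup \{\pm 2, \ldots\}$.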
**Lemma 19**. *Let $s$ and $t$ be integers such that $2 \le s < t$, and $s \ne 4$ if $t$ is even. Furthermore, let $$\textstyle D=\{ \pm 1, \pm 2, \ldots, \pm \frac{s-1}{2} \}$$ if $s$ is odd, and $$\textstyle D=\{ -1 \} \cup \{ \pm 2, \pm 3, \ldots, \pm \frac{s}{2} \}$$ if $s$ is even. Then $\overrightarrow{\rm Circ}(t;D)$ admits a $\vec{C}_t$-factorization.* . Let $k=\lfloor \frac{s}{2} \rfloor$ and $T=\{ 2,3, \ldots, k \}$. If $k \ge 3$, then $T$ has a partition $$\textstyle {\mathcal{P}}= \left\{ \{ 2,3 \}, \{ 4,5 \}, \ldots, \{ k-1, k \} \right\}$$ or $$\textstyle {\mathcal{P}}= \left\{ \{ 2,3,4 \}, \{ 5,6 \}, \ldots, \{ k-1, k \} \right\}.$$ In either case, for each $S \in {\mathcal{P}}$, the circulant graph ${\rm Circ}(t;S \cup (-S))$ has a decomposition into Hamilton cycles. This conclusion follows from Theorem [\[the:BerFavMah\]](#the:BerFavMah){reference-type="ref" reference="the:BerFavMah"} if $|S|=2$, and from Theorem [\[the:Wes\]](#the:Wes){reference-type="ref" reference="the:Wes"} if $|S|=3$ and $t$ is even. If $|S|=3$ and $t$ is odd, then we first write $\{ 2,3,4 \}=\{ 2 \} \cup \{ 3,4 \}$, and use a similar reasoning. Directing each $t$-cycle in this decomposition of ${\rm Circ}(t;S \cup (-S))$ in both directions, we obtain a $\vec{C}_t$-factorization of $\overrightarrow{\rm Circ}(t;S \cup (-S))$. Since $\overrightarrow{\rm Circ}(t;D)$ decomposes into directed circulants of this form and either $\overrightarrow{\rm Circ}(t; \{ -1 \})$ or $\overrightarrow{\rm Circ}(t; \{ \pm 1 \})$, each of which also admits a $\vec{C}_t$-factorization, the result follows. If $k \le 2$, then $2 \le s \le 5$. Since $t$ is odd if $s=4$, it can be established similarly that in each of these cases $\overrightarrow{\rm Circ}(t;D)$ admits a $\vec{C}_t$-factorization. **Lemma 20**. *Let $a$, $s$, and $t$ be integers such that $2 \le s < t$, $s \ne 3$ if $t$ is even, $a \le \min \{ \lfloor \frac{s}{3} \rfloor, 2\lfloor \frac{t}{2} \rfloor -s \}$, and $a \equiv s \pmod{2}$. 
Furthermore, let $$\textstyle S=\{ \pm \frac{s+1}{2}, \pm \frac{s+3}{2}, \ldots, \pm \lfloor \frac{t}{2} \rfloor \}$$ if $s$ is odd, and $$\textstyle S=\{ 1 \} \cup \{ \pm (\frac{s}{2}+1), \pm (\frac{s}{2}+2), \ldots, \pm \lfloor \frac{t}{2} \rfloor \}$$ if $s$ is even.* *Then $\overrightarrow{\rm Circ}(t;S)$ admits an $S$-orthogonal $(\vec{P}_1^{\langle a \rangle},\vec{P}_{t-s-a})$-subdigraph.* . First observe that in all cases, $|S|=t-s$. Let the vertex set of the circulant digraph $\overrightarrow{\rm Circ}(t;S)$ be $V=\{ y_i: i \in \mathbb{Z}_t \}$. For any $0 \le \ell \le \frac{t-s-1}{2}$, let $R$ be any subset of $S$ of the form $$\textstyle R=\{ \pm d_i: i=1,2,\ldots,\ell \} \cup \{ \lfloor \frac{t}{2} \rfloor \} \quad \mbox{with } 1 \le d_1 < d_2 < \ldots < d_\ell < \lfloor \frac{t}{2} \rfloor. \eqno(\star)$$ For any $k \in \mathbb{Z}_t$, we then define a directed walk $P(R,y_k)$ as follows: $$\begin{aligned} P(R,y_k) &=& y_k \, y_{k+d_1} \, y_{k+d_1-d_2} \, y_{k+d_1-d_2+d_3} \, y_{k+d_1-d_2+d_3-d_4} \, \ldots \, y_{k+\sum_{i=1}^\ell (-1)^{i-1} d_i} \, \\ && y_{k+\sum_{i=1}^\ell (-1)^{i-1} d_i+ \lfloor \frac{t}{2} \rfloor} \, y_{k+\sum_{i=1}^{\ell-1} (-1)^{i-1} d_i+\lfloor \frac{t}{2} \rfloor} \, y_{k+\sum_{i=1}^{\ell-2} (-1)^{i-1} d_i+\lfloor \frac{t}{2} \rfloor} \ldots \, y_{k+d_1+\lfloor \frac{t}{2} \rfloor} \, y_{k+\lfloor \frac{t}{2} \rfloor}.\end{aligned}$$ Observe that $P(R,y_k)$ is actually a directed path that successively traverses arcs of the following differences: $$\textstyle d_1, -d_2, d_3, -d_4, \ldots, (-1)^{\ell-1}d_{\ell}, \lfloor \frac{t}{2} \rfloor, (-1)^{\ell}d_{\ell}, (-1)^{\ell-1}d_{\ell-1}, \ldots, d_2, -d_1.$$ Moreover, its vertex set is contained in the set $$\textstyle \{ y_i: k+d_1 \le i \le k+\lfloor \frac{t}{2} \rfloor \} \cup \{ y_i: k+d_1-\lceil \frac{t}{2} \rceil \le i \le k \}.$$ Case 1: $t$ and $s$ are both even. 
Then $S=\{ 1 , \pm (\frac{s}{2}+1), \pm (\frac{s}{2}+2), \ldots, \pm (\frac{t}{2}-1), \frac{t}{2}\}.$ For $i=-\frac{a}{2}+1, -\frac{a}{2}+2,\ldots,0,1,\ldots,\frac{a}{2}$, define the directed 1-path $$\textstyle A_i=y_i\, y_{\frac{t}{2}-i+1}.$$ Observe that the arc in $A_i$ is of difference $\frac{t}{2}-(2i-1)$ if $i > 0$, and of difference $-(\frac{t}{2}+(2i-1))$ if $i \le 0$. Thus $A_{-\frac{a}{2}+1},\ldots,A_{\frac{a}{2}}$ jointly contain exactly one arc of each difference in $$\textstyle T=\{ \pm (\frac{t}{2}-1), \pm (\frac{t}{2}-3),\ldots, \pm (\frac{t}{2}-(a-1)) \}.$$ Moreover, the $A_i$ are pairwise disjoint, so $A= \bigcup_{i=-\frac{a}{2}+1}^{\frac{a}{2}} A_i$ is a $T$-orthogonal $(\vec{P}_1^{\langle a \rangle})$-subdigraph of $\overrightarrow{\rm Circ}(t; \mathbb{Z}_t^\ast)$. Its vertex set is $$\textstyle V(A)=\{ y_i: -\frac{a}{2}+1 \le i \le \frac{a}{2} \} \cup \{ y_i: \frac{t}{2}-\frac{a}{2}+1 \le i \le \frac{t}{2}+\frac{a}{2} \}.$$ ![Lemma [Lemma 20](#lem:long2){reference-type="ref" reference="lem:long2"}, Subcase 1.1. (Only the subscripts of the vertices are specified.)](Long-even-even-1.pdf){#fig:Lee1} Subcase 1.1: $a \le \frac{t-s}{2}-1$. If $a>0$, then $T \subseteq S$ and $\frac{s+2}{2} \not\in T$. Let $R=S-T-\{ 1 \}$. Then a directed path $P(R,y_{-\lceil \frac{s+2}{4} \rceil})$ is well defined, with $d_1=\frac{s+2}{2}$. Its vertex set is contained in $$\textstyle \{ y_i: \lfloor \frac{s+2}{4} \rfloor \le i \le \frac{t}{2}-\lceil \frac{s+2}{4} \rceil\} \cup \{ y_i: \frac{t}{2}+\lfloor \frac{s+2}{4} \rfloor \le i \le t-\lceil \frac{s+2}{4} \rceil\},$$ and its terminus is $y_{\frac{t}{2}-\lceil \frac{s+2}{4} \rceil}$. Let $P=P(R,y_{-\lceil \frac{s+2}{4} \rceil})\, y_{\frac{t}{2}-\lceil \frac{s+2}{4} \rceil}\, y_{\frac{t}{2}-\lceil \frac{s+2}{4} \rceil+1}$. Since $a \le \frac{s}{3}$, we have $\frac{a}{2} < \lfloor \frac{s+2}{4} \rfloor$.
It follows that $P$ and $A$ are disjoint, and that $D=A \cup P$ is an $S$-orthogonal $(\vec{P}_1^{\langle a \rangle},\vec{P}_{t-s-a})$-subdigraph of $\overrightarrow{\rm Circ}(t;S)$. See Figure [1](#fig:Lee1){reference-type="ref" reference="fig:Lee1"}. If $a=0$, then $T=\emptyset$ and $D=P$ is the required $S$-orthogonal $(\vec{P}_{t-s})$-subdigraph of $\overrightarrow{\rm Circ}(t;S)$. ![Lemma [Lemma 20](#lem:long2){reference-type="ref" reference="lem:long2"}, Subcase 1.2, for $|R|>1$. (Only the subscripts of the vertices are specified.)](Long-even-even-2.pdf){#fig:Lee2} Subcase 1.2: $\frac{t-s}{2} \le a < t-s$. Let $a'=\lfloor \frac{t-s}{4} \rfloor$ and $d_A=\frac{t}{2}-(2a'-1)$. Observe that $d_A=\frac{s+2}{2}$ if $t \equiv s \pmod{4}$, and $d_A=\frac{s+4}{2}$ otherwise. Let $$\textstyle T'=\{ \pm (\frac{t}{2}-1), \pm (\frac{t}{2}-3),\ldots, \pm d_A \},$$ so that $|T'|=2a'$. Let $A'=\bigcup_{i=-a'+1}^{a'} A_i$. Then $A'$ is a $T'$-orthogonal $(\vec{P}_1^{\langle 2a' \rangle})$-subdigraph of $\overrightarrow{\rm Circ}(t;T')$. Its vertex set is $$\textstyle V(A')=\{ y_i: -a'+1 \le i \le a' \} \cup \{ y_i: \frac{t}{2}-a'+1 \le i \le \frac{t}{2}+a' \}.$$ Let $b=a-2a'$, and note that $b$ is even. Let $d_B=\frac{s+4}{2}$ if $t \equiv s \pmod{4}$, and $d_B=\frac{s+2}{2}$ otherwise, so that $\{ d_A,d_B \} = \{ \frac{s+2}{2}, \frac{s+4}{2} \}$ in both cases. Let $$R'=\{ \pm d_B, \pm (d_B+2), \ldots, \pm (d_B+b-2) \},$$ so that $R' \cap T' =\emptyset$. Since $a<t-s$, we know that $d_B+b-2 < \frac{t}{2}$ and $|R'|=b$. For each $d \in R'$ such that $d>0$, define the directed 1-paths $$\textstyle B_d=y_{-\lceil \frac{d}{2} \rceil} \, y_{\lfloor \frac{d}{2} \rfloor} \qquad \mbox{ and }\qquad B_{-d}=y_{\frac{t}{2}+\lfloor \frac{d}{2} \rfloor} \, y_{\frac{t}{2}-\lceil \frac{d}{2} \rceil}.$$ Observe that the arcs in $B_d$ and $B_{-d}$ are of difference $d$ and $-d$, respectively. 
Thus, the directed 1-paths in $\{ B_i: i \in R' \}$ jointly contain exactly one arc of each difference in $R'$. Moreover, the digraphs $B_i$ are pairwise disjoint, so that $B= \bigcup_{i \in R'} B_i$ is an $R'$-orthogonal $(\vec{P}_1^{\langle b \rangle})$-subdigraph of $\overrightarrow{\rm Circ}(t; R')$. Let $R=S-T'-R'-\{ 1 \}$, and observe that $\frac{t}{2} \in R$ so that the directed path $P(R,y_{-\lceil \frac{d_1}{2} \rceil})$ is well defined. Assume first that $|R|>1$. With the notation ($\star)$, we have $d_1=d_B+b$. If $\ell \ge 2$, then $d_2 =d_1+2$, and if $\ell=1$, then $\frac{t}{2}=d_1+2$. In either case, vertex $y_{-\lceil \frac{d_1}{2} \rceil-1}$ is not on $P(R,y_{-\lceil \frac{d_1}{2} \rceil})$, and $P=y_{-\lceil \frac{d_1}{2} \rceil-1} \, y_{-\lceil \frac{d_1}{2} \rceil} \, P(R,y_{-\lceil \frac{d_1}{2} \rceil})$ is a directed path. Moreover, $P$ is disjoint from $B$, and $V(B \cup P)$ is contained in $$\textstyle \{ y_i: \lfloor \frac{d_B}{2} \rfloor \le i \le \frac{t}{2} - \lceil \frac{d_B}{2} \rceil \} \cup \{ y_i: \frac{t}{2}+ \lfloor \frac{d_B}{2} \rfloor \le i \le t-\lceil \frac{d_B}{2} \rceil\}.$$ Since $\frac{t-s}{2} \le a \le \frac{s}{3}$, it is not difficult to show that $a' < \lfloor \frac{d_B}{2} \rfloor$. It follows that $B \cup P$ and $A'$ are disjoint, and that $D=A' \cup B \cup P$ is an $S$-orthogonal $(\vec{P}_1^{\langle a \rangle},\vec{P}_{t-s-a})$-subdigraph of $\overrightarrow{\rm Circ}(t;S)$. See Figure [2](#fig:Lee2){reference-type="ref" reference="fig:Lee2"}. If, however, $|R|=1$, then $P=y_{-\lceil \frac{t}{4} \rceil} \, y_{\lfloor \frac{t}{4} \rfloor}$ is a directed 1-path, and $t-s-a=2$. Let $A''= \bigcup_{i=-a'+2}^{a'} A_i$ and $P'=y_{-a'} \, y_{-a'+1} A_{-a'+1}$. As $a' < \lceil \frac{d_B}{2} \rceil$, we have that $P'$ is a directed 2-path disjoint from $A'' \cup B \cup P$. 
It follows that $D=A'' \cup B \cup P \cup P'$ is an $S$-orthogonal $(\vec{P}_1^{\langle a \rangle},\vec{P}_{t-s-a})$-subdigraph of $\overrightarrow{\rm Circ}(t;S)$. Subcase 1.3: $a = t-s$. This case is very similar to Subcase 1.2, except that $|R'|=b-1$ and $R=\emptyset$. We define the unions $A'$ and $B$ of directed 1-paths exactly as in Subcase 1.2, while the last directed 1-path will be $P=y_{-a'-1} \, y_{-a'}$. As $t-s=a\le\frac{s}{3}<s$, it can be shown that $a'+1 < \lceil \frac{d_B}{2} \rceil$. It follows that $P$ is disjoint from $A' \cup B$, and $D=A' \cup B \cup P$ is an $S$-orthogonal $(\vec{P}_1^{\langle a \rangle})$-subdigraph of $\overrightarrow{\rm Circ}(t;S)$. Hence $\overrightarrow{\rm Circ}(t;S)$ admits an $S$-orthogonal $(\vec{P}_1^{\langle a \rangle},\vec{P}_0)$-subdigraph as well. Case 2: $t$ is even and $s$ is odd. Then $S=\{ \pm \frac{s+1}{2}, \pm \frac{s+3}{2}, \ldots, \pm (\frac{t}{2}-1), \frac{t}{2}\}.$ By the assumption, we have $s \ne 3$. ![Lemma [Lemma 20](#lem:long2){reference-type="ref" reference="lem:long2"}, Subcase 2.1. (Only the subscripts of the vertices are specified.)](Long-even-odd-1.pdf){#fig:Leo1} Subcase 2.1: $a \le \frac{t-s-1}{2}$ and $(s,a) \not\in \{ (9,3),(5,1) \}$. Define directed 1-paths $$\textstyle A_{i}=y_{i}\, y_{\frac{t}{2}-i-1} \quad \mbox{ for } i=-1,-2,\ldots,-\frac{a-1}{2},$$ and $$\textstyle A_{i}=y_{i}\, y_{\frac{t}{2}-i+1} \quad \mbox{ for } i=2,3,\ldots,\frac{a-1}{2}.$$ In addition, let $$A_{1}= y_{\frac{s-a}{2}-1} \, y_{\frac{t}{2}+\frac{s-a}{2}-2} \qquad \mbox{ and } \qquad A_{-\frac{a+1}{2}}= y_{\frac{t}{2}-\frac{a-1}{2}}\, y_{\frac{a+1}{2}}.$$ Observe that the arc in $A_i$ is of difference $\frac{t}{2}-(2i-1)$ if $i>0$, and of difference $-(\frac{t}{2}+(2i+1))$ if $i < 0$. 
Thus the $A_{i}$, for $i \in \{ \pm 1, \ldots,\pm \frac{a-1}{2}, -\frac{a+1}{2} \}$, jointly contain exactly one arc of each difference in $$\textstyle T=\{ \pm (\frac{t}{2}-1), \pm (\frac{t}{2}-3),\ldots, \pm (\frac{t}{2}-(a-2)), -(\frac{t}{2}-a) \}.$$ Moreover, since $a \le \frac{s}{3}$ and $(s,a) \not\in \{ (9,3),(5,1),(3,1) \}$, we find that $\frac{a+1}{2} < \frac{s-a}{2}-1$. Consequently, the $A_i$ are pairwise disjoint, and $A= \bigcup_{i=1}^{\frac{a+1}{2}} A_{-i} \cup \bigcup_{i=1}^{\frac{a-1}{2}} A_{i}$ is a $T$-orthogonal $(\vec{P}_1^{\langle a \rangle})$-subdigraph of $\overrightarrow{\rm Circ}(t; T)$. Its vertex set is contained in $$\textstyle \{ y_i: -\frac{a-1}{2} \le i \le \frac{s-a}{2}-1 \} \cup \{ y_i: \frac{t}{2}-\frac{a-1}{2} \le i \le \frac{t}{2}+\frac{a-3}{2} \} \cup \{ y_{\frac{t}{2}+\frac{s-a}{2}-2} \}.$$ Since $a \le \frac{t-s-1}{2}$, we know that $T \subseteq S$. Let $R=S-T-\{ \frac{t}{2}-a \}$. Then a directed path $P(R,y_{- \frac{a+1}{2}})$ is well defined, with $d_1 \ge \frac{s+1}{2}$. Its vertex set is contained in $$\textstyle \{ y_i: d_1-\frac{a+1}{2} \le i \le \frac{t}{2}- \frac{a+1}{2} \} \cup \{ y_i: \frac{t}{2}+ d_1-\frac{a+1}{2} \le i \le t- \frac{a+1}{2} \}.$$ Let $P=y_{\frac{t}{2}+ \frac{a-1}{2}} \, y_{- \frac{a+1}{2}} \, P(R,y_{- \frac{a+1}{2}})$. Observe that $\frac{s-a}{2}= \frac{s+1}{2}-\frac{a+1}{2} \le d_1 -\frac{a+1}{2}$. It follows that $P$ is an $(R \cup \{ \frac{t}{2}-a \})$-orthogonal directed path, and that $P$ and $A$ are disjoint. Hence $D=A \cup P$ is an $S$-orthogonal $(\vec{P}_1^{\langle a \rangle},\vec{P}_{t-s-a})$-subdigraph of $\overrightarrow{\rm Circ}(t;S)$. See Figure [3](#fig:Leo1){reference-type="ref" reference="fig:Leo1"}. Subcase 2.2: $(s,a) \in \{ (9,3),(5,1) \}$. If $s=9$ and $a=3$, then $S=\{ \pm 5, \pm 6, \ldots, \pm (\frac{t}{2}-1), \frac{t}{2}\}$. Construct directed paths $A_{-1}$, $A_{2}$, and $P$ as in Subcase 2.1, but let $A_1=y_{\frac{t}{2}+2} \, y_1$.
The rest of the construction is completed as in Subcase 2.1. If $s=5$ and $a=1$, then $S=\{ \pm 3, \pm 4, \ldots, \pm (\frac{t}{2}-1), \frac{t}{2}\}$. Define directed paths $A=y_0 \, y_{\frac{t}{2}+1}$ and $P=y_{\frac{t}{2}} \, y_{-1} \, P(R,y_{-1})$, where $R=S- \{ \pm(\frac{t}{2}-1) \}$, and complete the proof as in Subcase 2.1. ![Lemma [Lemma 20](#lem:long2){reference-type="ref" reference="lem:long2"}, Subcase 2.3, with $|R|>1$. (Only the subscripts of the vertices are specified.)](Long-even-odd-2.pdf){#fig:Leo2} Subcase 2.3: $\frac{t-s+1}{2} \le a < t-s$. Let $a'=\lfloor \frac{t-s+1}{4} \rfloor$ and $d_A=\frac{t}{2}-(2a'-1)$. Observe that $d_A=\frac{s+1}{2}$ if $t \equiv s+3 \pmod{4}$, and $d_A=\frac{s+3}{2}$ otherwise. Let $$\textstyle T'=\{ \pm (\frac{t}{2}-1), \pm (\frac{t}{2}-3),\ldots, \pm d_A \},$$ so that $|T'|=2a'$. Define directed 1-paths $$\textstyle A_{i}=y_{i}\, y_{\frac{t}{2}-i-1} \quad \mbox{ for } i=-a',-a'+1,\ldots, 0,1,\ldots,a'-1.$$ Observe that the arc in $A_i$ is of difference $\frac{t}{2}-(2i+1)$ if $i \ge 0$, and of difference $-(\frac{t}{2}+(2i+1))$ if $i < 0$. Thus the $A_{i}$, for $i \in \{ 0, \pm 1, \ldots,\pm (a'-1),-a' \}$, jointly contain exactly one arc of each difference in $T'$. The $A_i$ are clearly pairwise disjoint, and $A= \bigcup_{i=-a'}^{a'-1} A_{i}$ is a $T'$-orthogonal $(\vec{P}_1^{\langle 2a' \rangle})$-subdigraph of $\overrightarrow{\rm Circ}(t; T')$. Its vertex set is $$\textstyle V(A)=\{ y_i: -a' \le i \le a'-1 \} \cup \{ y_i: \frac{t}{2}-a' \le i \le \frac{t}{2}+a'-1 \}.$$ Let $b=a-2a'$, and note that $b$ is odd, and $b \ge 0$ as $a \ge \frac{t-s+1}{2}$. Let $d_B=\frac{s+1}{2}$ if $t \equiv s+1 \pmod{4}$, and $d_B=\frac{s+3}{2}$ otherwise, so that $\{ d_A,d_B \} = \{ \frac{s+1}{2}, \frac{s+3}{2} \}$ in both cases. Let $$R'=\{ \pm d_B, \pm (d_B+2), \ldots, \pm (d_B+b-1) \},$$ so that $R' \cap T' =\emptyset$. As $a< t-s$, we can see that $d_B+b-1<\frac{t}{2}$. Hence $|R'|=b+1$.
For $d \in R'$ such that $d>0$, define directed 1-paths $$\textstyle B_{-d}=y_{\lfloor \frac{d}{2} \rfloor} \, y_{-\lceil \frac{d}{2} \rceil} \qquad \mbox{ and }\qquad B_{d}= y_{\frac{t}{2}-\lceil \frac{d+2}{2} \rceil} \, y_{\frac{t}{2}+\lfloor \frac{d-2}{2} \rfloor}.$$ Observe that the arcs in $B_{-d}$ and $B_{d}$ are of difference $-d$ and $d$, respectively. Thus, the directed 1-paths in $\{ B_i: i \in R' \}$ jointly contain exactly one arc of each difference in $R'$. Moreover, the digraphs $B_i$ are pairwise disjoint. Let $R''=R'-\{ d_B+b-1\}$. Then $B= \bigcup_{i \in R''} B_i$ is an $R''$-orthogonal $(\vec{P}_1^{\langle b \rangle})$-subdigraph of $\overrightarrow{\rm Circ}(t; R'')$. Let $R=S-T'-R'$, and observe that $\frac{t}{2} \in R$. If $|R|>1$, then with the notation $(\star)$, we have $d_1=d_B+b+1$, and the directed path $P(R,y_{-\lceil \frac{d_1}{2} \rceil})$ is well defined. Its last two vertices are $y_{\frac{t}{2}+\lfloor \frac{d_1}{2} \rfloor}$ and $y_{\frac{t}{2}-\lceil \frac{d_1}{2} \rceil}$. Hence $P(R,y_{-\lceil \frac{d_1}{2} \rceil})$ is disjoint from $B$, but shares the vertex $y_{\frac{t}{2}-\lceil \frac{d_1}{2} \rceil}=y_{\frac{t}{2}-\lceil \frac{d_B+b+1}{2} \rceil}$ with $B_{d_B+b-1}$. Hence $P=P(R,y_{-\lceil \frac{d_1}{2} \rceil})\, y_{\frac{t}{2}-\lceil \frac{d_B+b+1}{2} \rceil}\, y_{\frac{t}{2}+\lfloor \frac{d_B+b-3}{2} \rfloor}$ is a directed path. Note that $V(B \cup P)$ is contained in $$\textstyle \{ y_i: \lfloor \frac{d_B}{2} \rfloor \le i \le \frac{t}{2} - \lceil \frac{d_B+2}{2} \rceil \} \cup \{ y_i: \frac{t}{2}+ \lfloor \frac{d_B-2}{2} \rfloor \le i \le t-\lceil \frac{d_B}{2} \rceil\}.$$ Since $\frac{t-s+1}{2} \le a \le \frac{s}{3}$, it is not difficult to show that $t-s+1 < s+3$, and hence $a' < \lceil \frac{d_B}{2} \rceil$. It follows that $B \cup P$ and $A$ are disjoint, and that $D=A \cup B \cup P$ is an $S$-orthogonal $(\vec{P}_1^{\langle a \rangle},\vec{P}_{t-s-a})$-subdigraph of $\overrightarrow{\rm Circ}(t;S)$.
See Figure [4](#fig:Leo2){reference-type="ref" reference="fig:Leo2"}. If $|R|=1$, then $t-s-a=2$, $R=\{ \frac{t}{2} \}$, and $d_B+b-1=\frac{t}{2}-2$. Let $P= y_{\frac{t}{2}+\lfloor \frac{t}{4} \rfloor} \, y_{\frac{t}{2}-\lceil \frac{t}{4} \rceil} \, y_{\frac{t}{2}+\lfloor \frac{t}{4} \rfloor-2}$, so that $P$ is a directed 2-path disjoint from $A$ and $B$, and $D=A \cup B \cup P$ is an $S$-orthogonal $(\vec{P}_1^{\langle a \rangle},\vec{P}_{2})$-subdigraph of $\overrightarrow{\rm Circ}(t;S)$. Subcase 2.4: $a=t-s$. This case is similar to Subcase 2.3, so we only highlight the differences. Define $A$ exactly as in Subcase 2.3, but let $$\textstyle R'=S-T'=\{ \pm d_B, \pm (d_B+2),\ldots,\pm (\frac{t}{2}-2), \frac{t}{2} \}.$$ For $d \in R'$ such that $d>0$, define directed 1-paths $$\textstyle B_{-d}=y_{\lfloor \frac{d}{2} \rfloor} \, y_{-\lceil \frac{d}{2} \rceil} \qquad \mbox{ and }\qquad B_{d}= y_{\frac{t}{2}-\lceil \frac{d}{2} \rceil} \, y_{\frac{t}{2}+\lfloor \frac{d}{2} \rfloor},$$ so that the arcs in $B_{-d}$ and $B_{d}$ are of difference $-d$ and $d$, respectively. Then $B= \bigcup_{i \in R'} B_i$ is an $R'$-orthogonal $(\vec{P}_1^{\langle a-2a' \rangle})$-subdigraph of $\overrightarrow{\rm Circ}(t; R')$. Its vertex set is contained in $$\textstyle \{ y_i: \lfloor \frac{d_B}{2} \rfloor \le i \le \frac{t}{2} - \lceil \frac{d_B}{2} \rceil \} \cup \{ y_i: \frac{t}{2}+ \lfloor \frac{d_B}{2} \rfloor \le i \le t-\lceil \frac{d_B}{2} \rceil\},$$ and as before, it can be shown that $A$ and $B$ are disjoint. Hence $D=A \cup B$ is an $S$-orthogonal $(\vec{P}_1^{\langle a \rangle})$-subdigraph of $\overrightarrow{\rm Circ}(t;S)$. Case 3: $t$ and $s$ are both odd.
Then $S=\{ \pm \frac{s+1}{2}, \pm \frac{s+3}{2}, \ldots, \pm \frac{t-1}{2}\}.$ Define directed 1-paths $$\textstyle A_{i}=y_{i}\, y_{\frac{t-1}{2}-i} \qquad \mbox{ for } i=1,2,\ldots,\frac{a-1}{2},$$ and $$\textstyle A_{i}=y_{i}\, y_{\frac{t-1}{2}-i+1}\qquad \mbox{ for } i=-\frac{a-1}{2},-\frac{a-3}{2},\ldots,0.$$ Observe that the arc in $A_i$ is of difference $\frac{t-1}{2}-2i$ if $i>0$, and of difference $-(\frac{t-1}{2}+2i)$ if $i \le 0$. Thus the $A_{i}$, for $i \in \{ 0, \pm 1, \ldots,\pm \frac{a-1}{2} \}$, jointly contain exactly one arc of each difference in $$\textstyle T=\{ - \frac{t-1}{2}, \pm (\frac{t-1}{2}-2),\ldots, \pm (\frac{t-1}{2}-(a-1)) \}.$$ Moreover, the $A_i$ are pairwise disjoint, and $A= \bigcup_{i=-\frac{a-1}{2}}^{\frac{a-1}{2}} A_{i}$ is a $T$-orthogonal $(\vec{P}_1^{\langle a \rangle})$-subdigraph of $\overrightarrow{\rm Circ}(t; T)$. Its vertex set is contained in $$\textstyle \{ y_i: -\frac{a-1}{2} \le i \le \frac{a-1}{2} \} \cup \{ y_i: \frac{t-1}{2}-\frac{a-1}{2} \le i \le \frac{t-1}{2}+\frac{a+1}{2} \}.$$ ![Lemma [Lemma 20](#lem:long2){reference-type="ref" reference="lem:long2"}, Subcase 3.1. (Only the subscripts of the vertices are specified.)](Long-odd-odd-1.pdf){#fig:Loo1} Subcase 3.1: $a \le \frac{t-s-2}{2}$ and $(s,a) \not\in \{ (9,3),(5,1),(3,1) \}$. Note that $\frac{t-1}{2}-(a-1) \ge \frac{s+3}{2}$, so $\frac{s+1}{2} \not\in T$. Let $R=S-T$. As $\frac{t-1}{2} \in R$, the directed path $P=P(R,y_{- \lceil \frac{s+1}{4} \rceil})$ is well defined, and with the notation $(\star)$, we have $d_1=\frac{s+1}{2}$. Its vertex set is contained in $$\textstyle \{ y_i: \lfloor \frac{s+1}{4} \rfloor \le i \le \frac{t-1}{2}- \lceil \frac{s+1}{4} \rceil \} \cup \{ y_i: \frac{t-1}{2}+\lfloor \frac{s+1}{4} \rfloor \le i \le t- \lceil \frac{s+1}{4} \rceil \}.$$ Since $a \le \frac{s}{3}$ and $(s,a) \not\in \{ (9,3),(5,1),(3,1) \}$, we have that $\frac{a+1}{2}< \lfloor \frac{s+1}{4} \rfloor$, implying that $P$ is disjoint from $A$. 
It follows that $D=A \cup P$ is an $S$-orthogonal $(\vec{P}_1^{\langle a \rangle},\vec{P}_{t-s-a})$-subdigraph of $\overrightarrow{\rm Circ}(t;S)$. See Figure [5](#fig:Loo1){reference-type="ref" reference="fig:Loo1"}. Subcase 3.2: $(s,a) \in \{ (9,3),(5,1),(3,1) \}$. Use $P$ as in Subcase 3.1, but replace $A$ as follows. If $(s,a)=(9,3)$, let $A$ be the union of $A_{-1}=y_{-2} \, y_{\frac{t+1}{2}}$, $A_{0}=y_{-1} \, y_{\frac{t-1}{2}}$, and $A_{1}=y_{0} \, y_{\frac{t-5}{2}}$. If $(s,a) \in \{ (3,1),(5,1) \}$, let $A=y_{\frac{t-1}{2}}\, y_{0}$. Then complete the proof as in Subcase 3.1. ![Lemma [Lemma 20](#lem:long2){reference-type="ref" reference="lem:long2"}, Subcase 3.3, and Subcase 4.3 with $B_0$ not shown. (Only the subscripts of the vertices are specified.)](Long-odd-odd-2.pdf){#fig:Loo2} Subcase 3.3: $a \ge \frac{t-s}{2}$. Let $a'=\lfloor \frac{t-s-2}{4} \rfloor$ and $d_A=\frac{t-1}{2}-2a'$. Observe that $d_A=\frac{s+1}{2}$ if $t \equiv s+2 \pmod{4}$, and $d_A=\frac{s+3}{2}$ otherwise. Let $$\textstyle T'=\{ -\frac{t-1}{2}, \pm (\frac{t-1}{2}-2),\ldots, \pm d_A \},$$ so that $|T'|=2a'+1$. Let $A'=\bigcup_{i=-a'}^{a'} A_i$. Then $A'$ is a $T'$-orthogonal $(\vec{P}_1^{\langle 2a'+1 \rangle})$-subdigraph of $\overrightarrow{\rm Circ}(t;T')$. Its vertex set is contained in $$\textstyle V(A')=\{ y_i: -a' \le i \le a' \} \cup \{ y_i: \frac{t-1}{2}-a' \le i \le \frac{t-1}{2}+a'+1 \}.$$ Let $b=a-2a'-1$, and note that $b$ is even, $b \ge 0$. Let $d_B=\frac{s+3}{2}$ if $t \equiv s+2 \pmod{4}$, and $d_B=\frac{s+1}{2}$ otherwise, so that $\{ d_A,d_B \} = \{ \frac{s+1}{2}, \frac{s+3}{2} \}$ in both cases. Let $$R'=\{ \pm d_B, \pm (d_B+2), \ldots, \pm (d_B+b-2) \},$$ so that $|R'|=b$ and $R' \cap T' =\emptyset$. Since $a$ is odd, note that $a<t-s$, and hence $d_B+b-2 < \frac{t-1}{2}$. 
For $d \in R'$ such that $d>0$, define directed 1-paths $$\textstyle B_{-d}=y_{\lfloor \frac{d}{2} \rfloor} \, y_{-\lceil \frac{d}{2} \rceil} \qquad \mbox{ and }\qquad B_{d}= y_{\frac{t-1}{2}-\lceil \frac{d}{2} \rceil} \, y_{\frac{t-1}{2}+\lfloor \frac{d}{2} \rfloor}.$$ Observe that the arcs in $B_{-d}$ and $B_{d}$ are of difference $-d$ and $d$, respectively. Thus, the directed 1-paths in $\{ B_i: i \in R' \}$ jointly contain exactly one arc of each difference in $R'$. Moreover, the digraphs $B_i$ are pairwise disjoint. Thus $B= \bigcup_{i \in R'} B_i$ is an $R'$-orthogonal $(\vec{P}_1^{\langle b \rangle})$-subdigraph of $\overrightarrow{\rm Circ}(t; R')$. Let $R=S-T'-R'$, and as $\frac{t-1}{2} \in R$, observe that the directed path $P=P(R,y_{-\lceil \frac{d_1}{2} \rceil})$ is well defined, with $d_1=d_B+b$, assuming Notation $(\star)$. Moreover, $P$ is disjoint from $B$, and $V(B \cup P)$ is contained in $$\textstyle \{ y_i: \lfloor \frac{d_B}{2} \rfloor \le i \le \frac{t-1}{2} - \lceil \frac{d_B}{2} \rceil \} \cup \{ y_i: \frac{t-1}{2}+ \lfloor \frac{d_B}{2} \rfloor \le i \le t-\lceil \frac{d_B}{2} \rceil\}.$$ Since $\frac{t-s}{2} \le a \le \frac{s}{3}$, we have $t-s \le \frac{2}{3}s<s-1$, and hence $a'+1 < \lfloor \frac{d_B}{2} \rfloor$, unless $s=3$. It follows that $B \cup P$ and $A'$ are disjoint, and that $D=A' \cup B \cup P$ is an $S$-orthogonal $(\vec{P}_1^{\langle a \rangle},\vec{P}_{t-s-a})$-subdigraph of $\overrightarrow{\rm Circ}(t;S)$. See Figure [6](#fig:Loo2){reference-type="ref" reference="fig:Loo2"}. If $s=3$, then $a=1$, $t=5$, and $D=y_0 \, y_{-2} \cup y_{-1} \, y_1$ is the desired $S$-orthogonal $(\vec{P}_1^{\langle 2 \rangle})$-subdigraph of $\overrightarrow{\rm Circ}(t;S)$. Case 4: $t$ is odd and $s$ is even. Then $S=\{ 1, \pm \frac{s+2}{2}, \pm \frac{s+4}{2}, \ldots, \pm \frac{t-1}{2}\}.$ In particular, $S=\{ 1 \}$ if $t-s=1$.
In this case, the required $S$-orthogonal $(\vec{P}_1)$-subdigraph of $\overrightarrow{\rm Circ}(t;S)$ is easy to find. Hence we may assume $t-s \ge 3$ and $\pm \frac{t-1}{2} \in S$. If $a \ge 2$, define directed 1-paths $$\textstyle A_{i}=y_{i}\, y_{\frac{t-1}{2}-i} \qquad \mbox{ for } i=1,2,\ldots,\frac{a-2}{2},$$ and $$\textstyle A_{i}=y_{i}\, y_{\frac{t-1}{2}-i+1} \qquad \mbox{ for } i=-\frac{a-2}{2},-\frac{a-4}{2}, \ldots,0.$$ Observe that the arc in $A_i$ is of difference $\frac{t-1}{2}-2i$ if $i>0$, and of difference $-(\frac{t-1}{2}+2i)$ if $i \le 0$. Thus the $A_{i}$, for $i \in \{ 0, \pm 1, \ldots,\pm \frac{a-2}{2} \}$, jointly contain exactly one arc of each difference in $$\textstyle T=\{ - \frac{t-1}{2}, \pm (\frac{t-1}{2}-2),\ldots, \pm (\frac{t-1}{2}-(a-2)) \}.$$ Moreover, the $A_i$ are pairwise disjoint, and $A= \bigcup_{i=-\frac{a-2}{2}}^{\frac{a-2}{2}} A_{i}$ is a $T$-orthogonal $(\vec{P}_1^{\langle a-1 \rangle})$-subdigraph of $\overrightarrow{\rm Circ}(t; T)$. Its vertex set is contained in $$\textstyle \{ y_i: -\frac{a-2}{2} \le i \le \frac{a-2}{2} \} \cup \{ y_i: \frac{t-1}{2}-\frac{a-2}{2} \le i \le \frac{t-1}{2}+\frac{a}{2} \}.$$ ![Lemma [Lemma 20](#lem:long2){reference-type="ref" reference="lem:long2"}, Subcase 4.1. (Only the subscripts of the vertices are specified.)](Long-odd-even-1.pdf){#fig:Loe1} Subcase 4.1: $2 \le a \le \frac{t-s-1}{2}$. Note that $\frac{t-1}{2}-(a-2) \ge \frac{s+4}{2}$, so $\frac{s+2}{2} \not\in T$. Let $R=S-T-\{ 1 \}$. As $\frac{t-1}{2} \in R$, the directed path $P=P(R,y_{- \lceil \frac{s+2}{4} \rceil})$ is well defined, and with Notation $(\star)$, we have $d_1=\frac{s+2}{2}$. 
Note that $V(P)$ is contained in $$\textstyle \{ y_i: \lfloor \frac{s+2}{4} \rfloor \le i \le \frac{t-1}{2}- \lceil \frac{s+2}{4} \rceil \} \cup \{ y_i: \frac{t-1}{2}+\lfloor \frac{s+2}{4} \rfloor \le i \le t- \lceil \frac{s+2}{4} \rceil \}.$$ Additionally, define the directed 1-path $B_0=y_{1-\lceil \frac{s+2}{4} \rceil} \, y_{2-\lceil \frac{s+2}{4} \rceil}$, and observe that $B_0$ is disjoint from $P$. If $(s,a) \ne (6,2)$, since $2 \le a \le \frac{s}{3}$, we have that $\frac{a-2}{2}-1< \lceil \frac{s+2}{4} \rceil -2$, implying that $B_0 \cup P$ is disjoint from $A$. It follows that $D=A \cup B_0 \cup P$ is an $S$-orthogonal $(\vec{P}_1^{\langle a \rangle},\vec{P}_{t-s-a})$-subdigraph of $\overrightarrow{\rm Circ}(t;S)$. See Figure [7](#fig:Loe1){reference-type="ref" reference="fig:Loe1"}. If $(s,a) = (6,2)$, replace $A$ with the directed 1-path $A'=y_{\frac{t+1}{2}} \, y_1$, and complete as above. Subcase 4.2: $a=0$. Let $R=S-\{ 1, -\frac{t-1}{2} \}$. Then $P=P(R,y_{- \lceil \frac{s+2}{4} \rceil})$ is well defined, with $d_1=\frac{s+2}{2}$, and $P \, y_{\frac{t-1}{2}-\lceil \frac{s+2}{4} \rceil} \, y_{\frac{t-1}{2}-\lceil \frac{s+2}{4} \rceil+1} \, y_{1- \lceil \frac{s+2}{4} \rceil}$ is the required $S$-orthogonal directed $(t-s)$-path in $\overrightarrow{\rm Circ}(t;S)$. Subcase 4.3: $a \ge \frac{t-s+1}{2}$. Let $a'=\lfloor \frac{t-s-3}{4} \rfloor$ and $d_A=\frac{t-1}{2}-2a'$. Observe that $d_A=\frac{s+2}{2}$ if $t \equiv s+3 \pmod{4}$, and $d_A=\frac{s+4}{2}$ otherwise. Let $$\textstyle T'=\{ -\frac{t-1}{2}, \pm (\frac{t-1}{2}-2),\ldots, \pm d_A \},$$ so that $|T'|=2a'+1$. Let $A'=\bigcup_{i=-a'}^{a'} A_i$. Then $A'$ is a $T'$-orthogonal $(\vec{P}_1^{\langle 2a'+1 \rangle})$-subdigraph of $\overrightarrow{\rm Circ}(t;T')$. Its vertex set is contained in $$\textstyle V(A')=\{ y_i: -a' \le i \le a' \} \cup \{ y_i: \frac{t-1}{2}-a' \le i \le \frac{t-1}{2}+a'+1 \}.$$ Let $b=a-2a'-2$, and note that $b$ is even and $b \ge 0$.
Let $d_B=\frac{s+4}{2}$ if $t \equiv s+3 \pmod{4}$, and $d_B=\frac{s+2}{2}$ otherwise, so that $\{ d_A,d_B \} = \{ \frac{s+2}{2}, \frac{s+4}{2} \}$ in both cases. Let $$\textstyle R'=\{ \pm d_B, \pm (d_B+2), \ldots, \pm (d_B+b-2) \},$$ so that $|R'|=b$ and $R' \cap T' =\emptyset$. Note that since $a<t-s$, we have that $d_B+b-2 < \frac{t-1}{2}$. For $d \in R'$ such that $d>0$, define directed 1-paths $$\textstyle B_{-d}=y_{\lfloor \frac{d}{2} \rfloor} \, y_{-\lceil \frac{d}{2} \rceil} \qquad \mbox{ and }\qquad B_{d}= y_{\frac{t-1}{2}-\lceil \frac{d}{2} \rceil} \, y_{\frac{t-1}{2}+\lfloor \frac{d}{2} \rfloor}.$$ Observe that the arcs in $B_{-d}$ and $B_{d}$ are of difference $-d$ and $d$, respectively. Thus, the directed 1-paths in $\{ B_i: i \in R' \}$ jointly contain exactly one arc of each difference in $R'$. Moreover, the digraphs $B_i$ are pairwise disjoint. Thus $B= \bigcup_{i \in R'} B_i$ is an $R'$-orthogonal $(\vec{P}_1^{\langle b \rangle})$-subdigraph of $\overrightarrow{\rm Circ}(t; R')$. Let $R=S-T'-R'-\{ 1 \}$, and as $\frac{t-1}{2} \in R$, observe that the directed path $P=P(R,y_{-\lceil \frac{d_1}{2} \rceil})$ is well defined, with $d_1=d_B+b$ using Notation $(\star)$. Moreover, $P$ is disjoint from $B$, and $V(B \cup P)$ is contained in $$\textstyle \{ y_i: \lfloor \frac{d_B}{2} \rfloor \le i \le \frac{t-1}{2} - \lceil \frac{d_B}{2} \rceil \} \cup \{ y_i: \frac{t-1}{2}+ \lfloor \frac{d_B}{2} \rfloor \le i \le t-\lceil \frac{d_B}{2} \rceil\}.$$ Additionally, define the directed 1-path $B_0=y_{1-\lceil \frac{d_B}{2} \rceil} \, y_{2-\lceil \frac{d_B}{2} \rceil}$, and observe that $B_0$ is disjoint from $B \cup P$. Since $\frac{t-s+1}{2} \le a \le \frac{s}{3}$, we have $t-s+1\le \frac{2}{3}s<s$, and hence $a'+2 < \lceil \frac{d_B}{2} \rceil$. It follows that $B \cup B_0 \cup P$ and $A'$ are disjoint, and that $D=A' \cup B \cup B_0 \cup P$ is an $S$-orthogonal $(\vec{P}_1^{\langle a \rangle},\vec{P}_{t-s-a})$-subdigraph of $\overrightarrow{\rm Circ}(t;S)$.
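Each of the cases above ends with the same verification: the constructed directed paths must be pairwise vertex-disjoint and must jointly realize every difference in $S$ exactly once. The following Python sketch is our own checking aid; the function names and the toy instance (a path $P(R,y_0)$ from $(\star)$ with $t=11$, $\ell=1$, $d_1=3$) are illustrative and not taken from the proof.

```python
def diff(u, v, t):
    # difference of the arc (y_u, y_v) in Circ(t; .), as a residue in (-t/2, t/2]
    d = (v - u) % t
    return d if d <= t // 2 else d - t

def check_orthogonal(t, S, paths):
    # each path is a list of vertex subscripts in Z_t; the system is S-orthogonal
    # iff the paths are pairwise vertex-disjoint and the multiset of their arc
    # differences is exactly S
    verts = [v for p in paths for v in p]
    if len(verts) != len(set(verts)):
        return False
    diffs = [diff(p[i], p[i + 1], t) for p in paths for i in range(len(p) - 1)]
    return sorted(diffs) == sorted(S)

# toy instance: t = 11, R = {+-3, 5}, so P(R, y_0) = y_0 y_3 y_8 y_5
t = 11
P = [0, 3, 8, 5]            # arc differences 3, 5, -3
A = [[9, 10]]               # one additional directed 1-path, of difference 1
assert check_orthogonal(t, [3, -3, 5], [P])
assert check_orthogonal(t, [1, 3, -3, 5], [P] + A)
print("orthogonality checks passed")
```

The same helper applies verbatim to directed 1-paths and to the longer paths $P(R,y_k)$, since both are passed simply as vertex lists.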
Figure [6](#fig:Loo2){reference-type="ref" reference="fig:Loo2"} shows $A \cup B \cup P$ only. This completes all cases. In the next lemma, we provide constructions for the special cases when either Lemma [Lemma 19](#lem:long1){reference-type="ref" reference="lem:long1"} or Lemma [Lemma 20](#lem:long2){reference-type="ref" reference="lem:long2"} does not apply. **Lemma 21**. *Let $t$ be an even integer.* (a) *If $t\ge 4$, then OP$^\ast(3,t)$ has a solution.* (b) *If $t\ge 8$, then OP$^\ast(2,2,t)$ has a solution.* . Since OP$^\ast(3)$ and OP$^\ast(2,2)$ have solutions by Theorem [\[thm:Kn\*\]](#thm:Kn*){reference-type="ref" reference="thm:Kn*"}, it suffices to show that Condition (2) of Corollary  [Corollary 17](#cor:main){reference-type="ref" reference="cor:main"} is satisfied. (a) Using Corollary  [Corollary 17](#cor:main){reference-type="ref" reference="cor:main"}, we have $\ell=1$, $k=2$, $s=m_1=3$, and $m_2=t$. We take $s_1=t_1=1$, $s_2=2$, and $t_2=t-4$, and it then suffices to find $S \subset \mathbb{Z}_t^\ast$ such that $\overrightarrow{\rm Circ}(t;\mathbb{Z}_t^\ast-S)$ admits a $\vec{C}_t$-factorization and $\overrightarrow{\rm Circ}(t;S)$ admits an $S$-orthogonal $(\vec{P}_1,\vec{P}_{t-4})$-subdigraph. First, let $t \ge 8$, and assume $t \equiv 0 \pmod{4}$. Let $D=\{ -1, \frac{t}{2}-1 \}$. Since $\gcd(t,\frac{t}{2}-1)=1$ and $\frac{t}{2}-1>1$, it is easy to see that $\overrightarrow{\rm Circ}(t;D)$ admits a $\vec{C}_t$-factorization with two $\vec{C}_t$-factors. Let $S=\mathbb{Z}_t^\ast-D=\{ 1, \pm 2, \pm 3, \ldots, \pm (\frac{t}{2}-2), -(\frac{t}{2}-1), \frac{t}{2} \}$, and let the vertex set of $\overrightarrow{\rm Circ}(t;S)$ be $\{ y_i: i \in \mathbb{Z}_t \}$. 
Construct the following directed paths in $\overrightarrow{\rm Circ}(t;S)$: $$D_1= y_0 \, y_2 \, y_{-1} \, y_{3} \, y_{-2} \, \ldots \, y_{-(\frac{t}{4}-2)} \, y_{\frac{t}{4}} \, y_{-(\frac{t}{4}-1)} \, y_{\frac{t}{4}+3} \, y_{-\frac{t}{4}} \, y_{\frac{t}{4}+4} \, \ldots \, y_{-(\frac{t}{2}-3)} \, y_{\frac{t}{2}+1} \, y_1$$ and $D_2=y_{\frac{t}{4}+1} \, y_{\frac{t}{4}+2}.$ Note that the differences of the arcs in $D_1$ are, in order, $$\textstyle 2, -3, 4, -5, \ldots, \frac{t}{2}-2, -(\frac{t}{2}-1), -(\frac{t}{2}-2), \frac{t}{2}-3, \ldots, -2,\frac{t}{2},$$ while the arc of $D_2$ is of difference 1. Hence $D_1 \cup D_2$ is an $S$-orthogonal $(\vec{P}_1,\vec{P}_{t-4})$-subdigraph of $\overrightarrow{\rm Circ}(t;S)$. Assume now that $t \equiv 2 \pmod{4}$, and let $D=\{ \pm (\frac{t}{2}-2) \}$. Since $\gcd(t,\frac{t}{2}-2)=1$, we know that $\overrightarrow{\rm Circ}(t;D)$ admits a $\vec{C}_t$-factorization. Note that $\frac{t}{2}-2>1$ by the assumption. Let $S=\mathbb{Z}_t^\ast-D=\{ \pm 1, \pm 2, \ldots, \pm (\frac{t}{2}-3), \pm(\frac{t}{2}-1), \frac{t}{2} \}$, and let the vertex set of $\overrightarrow{\rm Circ}(t;S)$ be $\{ y_i: i \in \mathbb{Z}_t \}$. Construct the following directed paths in $\overrightarrow{\rm Circ}(t;S)$: $$D_1= y_0 \, y_2 \, y_{-1} \, y_{3} \, y_{-2} \, \ldots \, y_{-\frac{t-10}{4}} \, y_{\frac{t-2}{4}} \, y_{-\frac{t-2}{4}} \, y_{-\frac{t+2}{4}} \, y_{\frac{t+10}{4}} \, y_{-\frac{t+6}{4}} \, \ldots \, y_{-\frac{t-4}{2}} \, y_{\frac{t}{2}} \, y_{\frac{t}{2}+1} \, y_1$$ and $D_2=y_{-\frac{t-6}{4}} \, y_{\frac{t+2}{4}}.$ Note that the differences of the arcs in $D_1$ are, in order, $$\textstyle 2,-3, 4, -5, \ldots, \frac{t}{2}-3, -(\frac{t}{2}-1), -1, -(\frac{t}{2}-3), \frac{t}{2}-4, \ldots, -2, 1, \frac{t}{2},$$ while $D_2$ contains an arc of difference $\frac{t}{2}-1$. Hence $D_1 \cup D_2$ is an $S$-orthogonal $(\vec{P}_1,\vec{P}_{t-4})$-subdigraph. For $t=4$, let $D=\{ \pm 1 \}$ and $S=\{2\}$. 
For $t=6$, let $D=\{ \pm 1 \}$ and $S=\{ \pm 2, 3 \}$. In both cases, it is easy to verify that $\overrightarrow{\rm Circ}(t;D)$ admits a $\vec{C}_t$-factorization and that $\overrightarrow{\rm Circ}(t;S)$ admits an $S$-orthogonal $(\vec{P}_1,\vec{P}_{t-4})$-subdigraph. In all cases, by Corollary [Corollary 17](#cor:main){reference-type="ref" reference="cor:main"}, it follows that OP$^\ast(3,t)$ has a solution. (b) Using Corollary  [Corollary 17](#cor:main){reference-type="ref" reference="cor:main"}, we now have $\ell=2$, $k=3$, $m_1=m_2=2$, $s=4$, and $m_3=t$. We take $s_1=s_2=1$, $t_1=t_2=0$, $s_3=2$, and $t_3=t-4$. It then suffices to find $S \subset \mathbb{Z}_t^\ast$ such that $\overrightarrow{\rm Circ}(t;\mathbb{Z}_t^\ast-S)$ admits a $\vec{C}_t$-factorization and $\overrightarrow{\rm Circ}(t;S)$ admits an $S$-orthogonal $(\vec{P}_{t-4})$-subdigraph. Recall that $t \ge 8$. If $t \equiv 0 \pmod{4}$, let $D=\{ -1, \pm (\frac{t}{2}-1) \}$. Since $\gcd(t,\frac{t}{2}-1)=1$ and $\frac{t}{2}-1>1$, it is easy to see that $\overrightarrow{\rm Circ}(t;D)$ admits a $\vec{C}_t$-factorization with three $\vec{C}_t$-factors. Let $S=\mathbb{Z}_t^\ast-D=\{ 1, \pm 2, \pm 3, \ldots, \pm (\frac{t}{2}-2), \frac{t}{2} \}$, and let the vertex set of $\overrightarrow{\rm Circ}(t;S)$ be $\{ y_i: i \in \mathbb{Z}_t \}$. Construct the following directed path in $\overrightarrow{\rm Circ}(t;S)$: $$P= y_0 \, y_2 \, y_{-1} \, y_{3} \, y_{-2} \, \ldots \, y_{-(\frac{t}{4}-2)} \, y_{\frac{t}{4}} \, y_{-\frac{t}{4}} \, y_{\frac{t}{4}+2} \, y_{-(\frac{t}{4}+1)} \, y_{\frac{t}{4}+3} \, \ldots \, y_{-(\frac{t}{2}-2)} \, y_{\frac{t}{2}} \, y_{\frac{t}{2}+1}.$$ Note that the differences of the arcs in $P$ are, in order, $$\textstyle 2, -3, 4, -5, \ldots, \frac{t}{2}-2, \frac{t}{2}, -(\frac{t}{2}-2), \frac{t}{2}-3, \ldots, -2,1,$$ so that $P$ is an $S$-orthogonal $(\vec{P}_{t-4})$-subdigraph. If $t \equiv 2 \pmod{4}$, let $D=\{ -1, \pm (\frac{t}{2}-2) \}$. 
Since $\gcd(t,\frac{t}{2}-2)=1$ and $\frac{t}{2}-2>1$, we know that $\overrightarrow{\rm Circ}(t;D)$ admits a $\vec{C}_t$-factorization with three $\vec{C}_t$-factors. Let $S=\mathbb{Z}_t^\ast-D=\{ 1, \pm 2, \ldots, \pm (\frac{t}{2}-3), \pm(\frac{t}{2}-1), \frac{t}{2} \}$, and let the vertex set of $\overrightarrow{\rm Circ}(t;S)$ be $\{ y_i: i \in \mathbb{Z}_t \}$. Construct the following directed path in $\overrightarrow{\rm Circ}(t;S)$: $$P= y_0 \, y_2 \, y_{-1} \, y_{3} \, y_{-2} \, \ldots \, y_{-\frac{t-10}{4}} \, y_{\frac{t-2}{4}} \, y_{-\frac{t-2}{4}} \, y_{\frac{t+2}{4}} \, y_{-\frac{t+2}{4}} \,y_{\frac{t+10}{4}} \, y_{-\frac{t+6}{4}} \, \ldots \, y_{-\frac{t-4}{2}} \, y_{\frac{t}{2}} \, y_{\frac{t}{2}+1}.$$ Note that the differences of the arcs in $P$ are, in order, $$\textstyle 2,-3, 4, -5, \ldots, \frac{t}{2}-3, -(\frac{t}{2}-1), \frac{t}{2}, \frac{t}{2}-1, -(\frac{t}{2}-3), \frac{t}{2}-4, \ldots, -2, 1,$$ so that $P$ is an $S$-orthogonal $(\vec{P}_{t-4})$-subdigraph. In both cases, by Corollary [Corollary 17](#cor:main){reference-type="ref" reference="cor:main"}, it follows that OP$^\ast(2,2,t)$ has a solution. # Extending the 2-factor by 2-cycles: technical lemmas Analogously to the previous section, we shall now perform the majority of the work required to prove Theorem [Theorem 6](#the:main-ext){reference-type="ref" reference="the:main-ext"}(2); that is, we present the constructions that, with the help of Corollary [Corollary 17](#cor:main){reference-type="ref" reference="cor:main"}, enable us to obtain a solution to OP$^\ast(m_1,\ldots,m_{\ell},2^{\langle b \rangle})$ from a solution to OP$^\ast(m_1,\ldots,m_{\ell})$ when $b$ is sufficiently large. **Lemma 22**. *Let $s$ and $t$ be integers such that $t$ is even and $2 \le s < t$. Furthermore, let $$\textstyle D=\{ \pm 1, \pm 2, \ldots, \pm \frac{s-1}{2} \}$$ if $s$ is odd, and $$\textstyle D=\{ \pm 1, \pm 2, \ldots, \pm (\frac{s}{2}-1)\} \cup \{ \frac{t}{2} \}$$ if $s$ is even. 
Then $\overrightarrow{\rm Circ}(t;D)$ admits a $\vec{C}_2$-factorization.* *Proof.* Let $k=\lfloor \frac{s-1}{2} \rfloor$ and $T=\{ 1,2, \ldots, k\}$. Note that $T$ has a partition $$\textstyle {\mathcal{P}}= \left\{ \{ 1 \}, \{ 2,3 \}, \ldots, \{ k-1, k \} \right\}$$ or $$\textstyle {\mathcal{P}}= \left\{ \{ 1,2 \}, \{ 3,4 \}, \ldots, \{ k-1, k \} \right\}.$$ In either case, for each $S \in {\mathcal{P}}$, the circulant graph ${\rm Circ}(t;S \cup (-S))$ is either a cycle or has a decomposition into Hamilton cycles by Theorem [\[the:BerFavMah\]](#the:BerFavMah){reference-type="ref" reference="the:BerFavMah"}. Hence ${\rm Circ}(t;T \cup (-T))$ admits a $C_t$-factorization, and since $t$ is even, it admits a 1-factorization. It follows that $\overrightarrow{\rm Circ}(t;T \cup (-T))$ admits a $\vec{C}_2$-factorization. When $s$ is odd, we have $D=T \cup (-T)$, so the result follows directly. When $s$ is even, we observe that $\overrightarrow{\rm Circ}(t;D)=\overrightarrow{\rm Circ}(t;T \cup (-T)) \oplus \overrightarrow{\rm Circ}(t;\{ \frac{t}{2} \})$, and that $\overrightarrow{\rm Circ}(t;\{ \frac{t}{2} \})$ is a $\vec{C}_2$-factor of $\overrightarrow{\rm Circ}(t;D)$. Hence $\overrightarrow{\rm Circ}(t;D)$ admits a $\vec{C}_2$-factorization in both cases. **Lemma 23**. *Let $a$, $s$, and $t$ be integers such that $t$ is even, $2 \le s < t$, $a \le \min \{ \lfloor \frac{s}{3} \rfloor, t-s \}$, and $a \equiv s \pmod{2}$. Furthermore, let $$\textstyle S=\{ \pm \frac{s+1}{2}, \pm \frac{s+3}{2}, \ldots, \pm (\frac{t}{2}-1)\} \cup \{ \frac{t}{2} \}$$ if $s$ is odd, and $$\textstyle S=\{ \pm \frac{s}{2}, \pm (\frac{s}{2}+1), \ldots, \pm (\frac{t}{2}-1) \}$$ if $s$ is even. Then $\overrightarrow{\rm Circ}(t;S)$ admits an $S$-orthogonal $(\vec{P}_1^{\langle a \rangle},\vec{C}_{2}^{\langle \frac{t-s-a}{2} \rangle})$-subdigraph.* *Proof.* Note that in both cases, $|S|=t-s$. Let the vertex set of the circulant digraph $\overrightarrow{\rm Circ}(t;S)$ be $V=\{ y_i: i \in \mathbb{Z}_t \}$.
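As a computational aside (not part of either proof), the matching argument of Lemma [Lemma 22](#lem:2-cycles1){reference-type="ref" reference="lem:2-cycles1"} is easy to spot-check at the small instance $t=8$, $s=4$, where $D=\{ \pm 1 \} \cup \{ 4 \}$: the underlying cycle ${\rm Circ}(8;\{\pm 1\})$ splits into two perfect matchings, each of which doubles into a $\vec{C}_2$-factor, and the difference-$\frac{t}{2}$ edges form a third. The Python sketch below verifies exactly this for the chosen parameters.

```python
t = 8  # Lemma 22 with s = 4, so D = {±1} ∪ {t/2}

# the two 1-factors of the underlying cycle Circ(8; {±1}) ...
M1 = [(i, (i + 1) % t) for i in range(0, t, 2)]
M2 = [(i, (i + 1) % t) for i in range(1, t, 2)]
# ... and the edges of difference t/2, which already form a 1-factor
M3 = [(i, i + t // 2) for i in range(t // 2)]

# each matching covers every vertex exactly once, so replacing each of
# its edges {u, v} by the directed 2-cycle through u and v gives a
# C2-factor of the circulant digraph
for M in (M1, M2, M3):
    assert sorted(v for edge in M for v in edge) == list(range(t))

# together, the three matchings use every edge of Circ(8; {±1, 4}) once
edges = {frozenset(e) for M in (M1, M2, M3) for e in M}
assert len(edges) == 3 * t // 2
```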
![Lemma [Lemma 23](#lem:2-cycles2){reference-type="ref" reference="lem:2-cycles2"}, Case 1, with $d_A>d_B$ --- digraph $A$ (black arcs) and sets $B_i$ (thick black lines), $E_i$ (grey lines), and $C_i$ (dashed lines). (Only the subscripts of the vertices are specified.)](Short-odd.pdf){#fig:So} Case 1: $s$ is odd. Then $S=\{ \pm \frac{s+1}{2}, \pm \frac{s+3}{2}, \ldots, \pm \frac{t-2}{2}, \frac{t}{2} \}$. For any $c \in \{ 0, 1, \ldots, \lfloor \frac{t-2}{4} \rfloor \}$ and $i=0, \pm 1, \pm 2, \ldots, \pm c$, define the directed 1-path $$\textstyle A_{i}=y_{i}\, y_{\frac{t}{2}-i}.$$ Observe that the arc in $A_i$ is of difference $\frac{t}{2}-2i$ if $i \ge 0$, and of difference $-(\frac{t}{2}+2i)$ if $i < 0$. Thus the $A_{i}$, for $i \in \{ 0, \pm 1, \ldots,\pm c \}$, jointly contain exactly one arc of each difference in $$\textstyle T_c=\{ \frac{t}{2}, \pm (\frac{t}{2}-2), \pm (\frac{t}{2}-4),\ldots, \pm (\frac{t}{2}-2c) \}.$$ Moreover, the $A_i$ are pairwise disjoint, and $A= \bigcup_{i=-c}^{c} A_{i}$ is a $T_c$-orthogonal $(\vec{P}_1^{\langle 2c+1 \rangle})$-subdigraph of $\overrightarrow{\rm Circ}(t; T_c)$. Its vertex set is $$\textstyle V(A)=\{ y_i: -c \le i \le c \} \cup \{ y_i: \frac{t}{2}-c \le i \le \frac{t}{2}+c \}.$$ Let $d_A=\frac{s+1}{2}$ and $d_B=\frac{s+3}{2}$ if $t \equiv s+1 \pmod{4}$, and $d_A=\frac{s+3}{2}$ and $d_B=\frac{s+1}{2}$ otherwise. Thus $d_A \equiv \frac{t}{2} \pmod{2}$ and $d_B \equiv \frac{t}{2}-1 \pmod{2}$ in both cases. Let $I=\{ d_B, d_B+2, \ldots, \frac{t}{2}-1 \}$ and $J=\{ d_A, d_A+2, \ldots, \frac{t}{2}-2 \}$, and note that $S=I \dot{\cup} J \dot{\cup} \{ \frac{t}{2} \}$. 
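As a quick numerical sanity check (not part of the proof), these Case 1 definitions can be instantiated at, say, $t=14$ and $s=7$; here $t \not\equiv s+1 \pmod{4}$, so $d_A=\frac{s+3}{2}=5$ and $d_B=\frac{s+1}{2}=4$. The Python sketch below confirms that the $A_i$ realise the claimed differences $T_c$ and that $I$, $J$, and $\frac{t}{2}$ account for every difference in $S$.

```python
t, s = 14, 7  # a concrete instance of Case 1 (s odd); t ≢ s+1 (mod 4)
dA, dB = (s + 3) // 2, (s + 1) // 2          # here d_A = 5, d_B = 4
assert dA % 2 == (t // 2) % 2 and dB % 2 == (t // 2 - 1) % 2

def rep(d):
    """Difference of an arc, as a representative in {-(t/2-1), ..., t/2}."""
    d %= t
    return d if d <= t // 2 else d - t

# the directed 1-paths A_i = y_i y_{t/2 - i}, for i = 0, ±1, ..., ±c
c = 1
A = [(i, t // 2 - i) for i in range(-c, c + 1)]
assert sorted(rep(v - u) for u, v in A) == sorted(
    [t // 2, t // 2 - 2, -(t // 2 - 2)])     # the claimed set T_c

# I, J, and {t/2} tile the differences S = {±(s+1)/2, ..., ±(t/2-1), t/2}
I = list(range(dB, t // 2, 2))
J = list(range(dA, t // 2 - 1, 2))
S = {d for i in range((s + 1) // 2, t // 2) for d in (i, -i)} | {t // 2}
assert {d for i in I + J for d in (i, -i)} | {t // 2} == S
```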
For $i \in I$, define the sets $$B_i= \{ y_{-\lceil \frac{i}{2} \rceil}, y_{\lfloor \frac{i}{2} \rfloor} \} \qquad \mbox{ and } \qquad E_i= \{ y_{\frac{t}{2}-\lceil \frac{i}{2} \rceil}, y_{\frac{t}{2}+\lfloor \frac{i}{2} \rfloor} \}.$$ For $i \in J$, define the set $$C_i= \{ y_{\frac{t}{2}-\lceil \frac{i-1}{2} \rceil }, y_{\frac{t}{2}+\lfloor \frac{i+1}{2} \rfloor} \}.$$ See Figure [8](#fig:So){reference-type="ref" reference="fig:So"}. Observe that the sets $B_i$, for $i \in I$, are pairwise disjoint, as are the sets $E_i$, for $i \in I$, and the sets $C_i$, for $i \in J$. Moreover, each of the sets $B_i$, $C_i$, and $E_i$ accommodates arcs of difference $\pm i$. Let $B=\bigcup_{i \in I} B_i$, $C=\bigcup_{j \in J} C_j$, and $E=\bigcup_{i \in I} E_i$, and observe that $B \cap C= \emptyset$, $B \cap E =\emptyset$, $$\textstyle B \cup C \subseteq \{ y_i: \lfloor \frac{d_B}{2} \rfloor \le i \le \frac{t}{2}- \lceil \frac{d_A-1}{2} \rceil \} \cup \{ y_i: \frac{t}{2}+\lfloor \frac{d_A+1}{2} \rfloor \le i \le t- \lceil \frac{d_B}{2} \rceil \},$$ and $$\textstyle B \cup E \subseteq \{ y_i: \lfloor \frac{d_B}{2} \rfloor \le i \le \frac{t}{2}- \lceil \frac{d_B}{2} \rceil \} \cup \{ y_i: \frac{t}{2}+\lfloor \frac{d_B}{2} \rfloor \le i \le t- \lceil \frac{d_B}{2} \rceil \}.$$ Subcase 1.1: $a \le \frac{t-s-1}{2}$. Let $c=\frac{a-1}{2}$ and $A= \bigcup_{i=-c}^{c} A_i$, so $A$ is a $(\vec{P}_1^{\langle a \rangle})$-subdigraph of $\overrightarrow{\rm Circ}(t; T)$ for $T=\{ \frac{t}{2}, \pm (\frac{t}{2}-2), \pm (\frac{t}{2}-4),\ldots, \pm (\frac{t}{2}-2c) \}$. Note that, since $a \le \frac{t-s-1}{2}$, we have that $\frac{t}{2}-2c \ge d_A$. Since $a \le \frac{s}{3}$, it is not difficult to show that $c < \min\{ \lceil \frac{d_A-1}{2} \rceil, \lfloor \frac{d_B}{2} \rfloor\}$. It follows that $V(A)$ is disjoint from $B \cup C$. For $i \in I$, let $D_i$ be the directed 2-cycle with vertex set $B_i$, and for $j \in J-T$, let $D_j$ be the directed 2-cycle with vertex set $C_j$.
Let $F=\bigcup_{i \in I} D_i \cup \bigcup_{j \in J-T} D_j$. Then $D=A \cup F$ is the desired $S$-orthogonal $(\vec{P}_1^{\langle a \rangle},\vec{C}_{2}^{\langle \frac{t-s-a}{2} \rangle})$-subdigraph of $\overrightarrow{\rm Circ}(t;S)$. Subcase 1.2: $a \ge \frac{t-s+1}{2}$. Let $c=\lfloor \frac{t-s-1}{4} \rfloor$, and $A= \bigcup_{i=-c}^{c} A_i$, so $A$ is a $(\vec{P}_1^{\langle 2c+1 \rangle})$-subdigraph of $\overrightarrow{\rm Circ}(t; T)$ for $T=\{ \frac{t}{2}, \pm (\frac{t}{2}-2), \pm (\frac{t}{2}-4),\ldots, \pm d_A \}$. As $\frac{t-s+1}{2} \le a \le \frac{s}{3}$, we have that $t-s+1 \le \frac{2}{3}s<s$, and hence $c < \lfloor \frac{d_B}{2} \rfloor$. It follows that $V(A)$ is disjoint from $B \cup E$. Let $b=a-(2c+1)$, and observe that $b$ is even and $b \ge 0$. Let $I' \subseteq I$ such that $|I'|=\frac{b}{2}$. For $i \in I-I'$, let $D_i$ be the directed 2-cycle with vertex set $B_i$, and for $j \in I'$, let $P_j$ and $P_j'$ be the disjoint directed 1-paths with vertex sets in $B_j$ and $E_j$ that contain arcs of difference $j$ and $-j$, respectively. Let $F=\bigcup_{i \in I-I'} D_i \cup \bigcup_{j \in I'} (P_j \cup P_j')$. Then $D=A \cup F$ is the desired $S$-orthogonal $(\vec{P}_1^{\langle a \rangle},\vec{C}_{2}^{\langle \frac{t-s-a}{2} \rangle})$-subdigraph of $\overrightarrow{\rm Circ}(t;S)$. ![Lemma [Lemma 23](#lem:2-cycles2){reference-type="ref" reference="lem:2-cycles2"}, Case 2, with $d_A>d_B$ --- digraph $A$ (black arcs) and sets $B_i$ (thick black lines), $E_i$ (grey lines), and $C_i$ (dashed lines). (Only the subscripts of the vertices are specified.)](Short-even.pdf){#fig:Se} Case 2: $s$ is even. Then $S=\{ \pm \frac{s}{2}, \pm \frac{s+2}{2}, \ldots, \pm \frac{t-2}{2}\}$.
For any $c \in \{ 1,2, \ldots, \lfloor \frac{t}{4} \rfloor \}$ and $i=-c, -c+1, \ldots,c-1$, define the directed 1-path $$\textstyle A_{i}=y_{i}\, y_{\frac{t}{2}-i-1}.$$ Observe that the arc in $A_i$ is of difference $\frac{t}{2}-(2i+1)$ if $i \ge 0$, and of difference $-(\frac{t}{2}+(2i+1))$ if $i < 0$. Thus the $A_{i}$, for $i \in \{ -c, -c+1, \ldots,c-1 \}$, jointly contain exactly one arc of each difference in $$\textstyle T_c=\{ \pm(\frac{t}{2}-1), \pm (\frac{t}{2}-3), \ldots, \pm (\frac{t}{2}-(2c-1)) \}.$$ Moreover, the $A_i$ are pairwise disjoint, and $A= \bigcup_{i=-c}^{c-1} A_{i}$ is a $T_c$-orthogonal $(\vec{P}_1^{\langle 2c \rangle})$-subdigraph of $\overrightarrow{\rm Circ}(t; T_c)$. Its vertex set is $$\textstyle V(A)=\{ y_i: -c \le i \le c-1 \} \cup \{ y_i: \frac{t}{2}-c \le i \le \frac{t}{2}+c-1 \}.$$ Let $d_A=\frac{s}{2}$ and $d_B=\frac{s+2}{2}$ if $t \equiv s+2 \pmod{4}$, and $d_A=\frac{s+2}{2}$ and $d_B=\frac{s}{2}$ otherwise. Note that in both cases, $d_A \equiv \frac{t}{2}-1 \pmod{2}$ and $d_B \equiv \frac{t}{2} \pmod{2}$. Let $I=\{ d_B, d_B+2, \ldots, \frac{t}{2}-2 \}$ and $J=\{ d_A, d_A+2, \ldots, \frac{t}{2}-1 \}$, so that $S=I \dot{\cup} J$. For $i \in I$, define the sets $$B_i= \{ y_{-\lceil \frac{i}{2} \rceil}, y_{\lfloor \frac{i}{2} \rfloor} \} \qquad \mbox{ and } \qquad E_i= \{ y_{\frac{t}{2}-\lceil \frac{i}{2} \rceil}, y_{\frac{t}{2}+\lfloor \frac{i}{2} \rfloor} \}.$$ For each $i \in J$, define the set $$C_i= \{ y_{\frac{t}{2}-\lceil \frac{i+1}{2} \rceil}, y_{\frac{t}{2}+\lfloor \frac{i-1}{2} \rfloor} \}.$$ See Figure [9](#fig:Se){reference-type="ref" reference="fig:Se"}. Observe that the sets $B_i$, for $i \in I$, are pairwise disjoint, as are the sets $E_i$, for $i \in I$, and the sets $C_i$, for $i \in J$. Moreover, each of the sets $B_i$, $C_i$, and $E_i$ accommodates arcs of difference $\pm i$.
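Again as a numerical sanity check (not part of the proof), the Case 2 sets can be instantiated at, say, $t=12$ and $s=4$; here $t \not\equiv s+2 \pmod{4}$, so $d_A=\frac{s+2}{2}=3$ and $d_B=\frac{s}{2}=2$. The Python sketch below confirms the disjointness observations just made.

```python
import math

t, s = 12, 4  # a concrete instance of Case 2 (s even); t ≢ s+2 (mod 4)
dA, dB = (s + 2) // 2, s // 2                # here d_A = 3, d_B = 2
I = list(range(dB, t // 2 - 1, 2))           # I = {2, 4}
J = list(range(dA, t // 2, 2))               # J = {3, 5}

# the sets B_i, E_i (i in I) and C_i (i in J), with subscripts mod t
B = {i: {(-math.ceil(i / 2)) % t, (i // 2) % t} for i in I}
E = {i: {(t // 2 - math.ceil(i / 2)) % t, (t // 2 + i // 2) % t} for i in I}
C = {i: {(t // 2 - math.ceil((i + 1) / 2)) % t,
         (t // 2 + (i - 1) // 2) % t} for i in J}

# the B_i are pairwise disjoint, as are the E_i and the C_i
for fam in (B, E, C):
    assert len(set().union(*fam.values())) == 2 * len(fam)

# moreover, B is disjoint from both C and E (C and E may intersect)
Bv = set().union(*B.values())
assert Bv.isdisjoint(set().union(*C.values()))
assert Bv.isdisjoint(set().union(*E.values()))
```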
Let $B=\bigcup_{i \in I} B_i$, $C=\bigcup_{j \in J} C_j$, and $E=\bigcup_{i \in I} E_i$, and observe that $B \cap C= \emptyset$, $B \cap E =\emptyset$, $$\textstyle B \cup C \subseteq \{ y_i: \lfloor \frac{d_B}{2} \rfloor \le i \le \frac{t}{2}- \lceil \frac{d_A+1}{2} \rceil \} \cup \{ y_i: \frac{t}{2}+\lfloor \frac{d_A-1}{2} \rfloor \le i \le t- \lceil \frac{d_B}{2} \rceil \},$$ and $$\textstyle B \cup E \subseteq \{ y_i: \lfloor \frac{d_B}{2} \rfloor \le i \le \frac{t}{2}- \lceil \frac{d_B}{2} \rceil \} \cup \{ y_i: \frac{t}{2}+\lfloor \frac{d_B}{2} \rfloor \le i \le t- \lceil \frac{d_B}{2} \rceil \}.$$ Subcase 2.1: $2 \le a \le \frac{t-s+2}{2}$. Let $c=\frac{a}{2}$ and $A= \bigcup_{i=-c}^{c-1} A_i$, so $A$ is a $(\vec{P}_1^{\langle a \rangle})$-subdigraph of $\overrightarrow{\rm Circ}(t; T)$ for $T=\{ \pm (\frac{t}{2}-1), \pm (\frac{t}{2}-3),\ldots, \pm (\frac{t}{2}-(2c-1)) \}$. Observe that $\frac{t}{2}-(2c-1)\ge d_A$ as $a \le \lfloor \frac{t-s+2}{2} \rfloor$. Since $a \le \frac{s}{3}$, it is not difficult to show that $c < \min\{ \lceil \frac{d_A}{2} \rceil, \lceil \frac{d_B}{2} \rceil \}$. It follows that $V(A)$ is disjoint from $B \cup C$. For $i \in I$, let $D_i$ be the directed 2-cycle with vertex set $B_i$, and for $j \in J-T$, let $D_j$ be the directed 2-cycle with vertex set $C_j$. Let $F=\bigcup_{i \in I} D_i \cup \bigcup_{j \in J-T} D_j$. Then $D=A \cup F$ is the desired $S$-orthogonal $(\vec{P}_1^{\langle a \rangle},\vec{C}_{2}^{\langle \frac{t-s-a}{2} \rangle})$-subdigraph of $\overrightarrow{\rm Circ}(t;S)$. Subcase 2.2: $a=0$. With $T=\emptyset$, construct $F$ as in Subcase 2.1. Then $F$ is the desired $S$-orthogonal $(\vec{C}_{2}^{\langle \frac{t-s}{2} \rangle})$-subdigraph of $\overrightarrow{\rm Circ}(t;S)$. Subcase 2.3: $a \ge \frac{t-s+4}{2}$. 
Let $c=\lfloor \frac{t-s+2}{4} \rfloor$, and $A= \bigcup_{i=-c}^{c-1} A_i$, so $A$ is a $T$-orthogonal $(\vec{P}_1^{\langle 2c \rangle})$-subdigraph of $\overrightarrow{\rm Circ}(t; T)$ for $T=\{ \pm (\frac{t}{2}-1), \pm (\frac{t}{2}-3),\ldots, \pm d_A \}$. As $\frac{t-s+4}{2} \le a \le \frac{s}{3}$, we have $t-s+4 \le \frac{2}{3}s<s$, and hence $c < \lceil \frac{d_B}{2} \rceil$. It follows that $V(A)$ is disjoint from $B \cup E$. Let $b=a-2c$, and observe that $b$ is even and $b \ge 2$. Let $I' \subseteq I$ such that $|I'|=\frac{b}{2}$. For $i \in I-I'$, let $D_i$ be the directed 2-cycle with vertex set $B_i$, and for $j \in I'$, let $P_j$ and $P_j'$ be the disjoint directed 1-paths with vertex sets in $B_j$ and $E_j$ that contain arcs of difference $j$ and $-j$, respectively. Let $F=\bigcup_{i \in I-I'} D_i \cup \bigcup_{j \in I'} (P_j \cup P_j')$. Then $D=A \cup F$ is the desired $S$-orthogonal $(\vec{P}_1^{\langle a \rangle},\vec{C}_{2}^{\langle \frac{t-s-a}{2} \rangle})$-subdigraph of $\overrightarrow{\rm Circ}(t;S)$. This completes all cases.

# Main extension results

We are now ready to sum up the proof of our main extension theorem, which we re-state below for convenience. Let $\ell$ and $m_1,\ldots, m_{\ell}$ be positive integers such that $m_i \ge 2$ for $i=1,\ldots, \ell$. Let $s=m_1+\ldots+m_{\ell}$ and let $t$ be an integer, $t>s$. Furthermore, let $a$ be the number of odd integers in the multiset $\{ \!\! \{ m_1,\ldots,m_{\ell} \} \!\! \}$, and assume that $a \le 2\lfloor \frac{t}{2} \rfloor -s$. If OP$^\ast(m_1,\ldots,m_{\ell})$ has a solution, then the following also have a solution: (1) OP$^\ast(m_1,\ldots,m_{\ell},t)$; and (2) if $t$ is even, OP$^\ast(m_1,\ldots,m_{\ell},2^{\langle \frac{t}{2} \rangle})$. *Proof.* In both cases, for $i=1,\ldots,\ell$, let $s_i=\lfloor \frac{m_i}{2} \rfloor$, and $t_i=m_i-2s_i$. Observe that for each $i$, we have that $t_i=1$ if $m_i$ is odd, and $t_i=0$ if $m_i$ is even, so that $\sum_{i=1}^\ell t_i=a$.
Since $\sum_{i=1}^\ell m_i =s$, it follows that $a \equiv s \pmod{2}$, and as $m_i \ge 3$ if $t_i=1$, it follows that $a \le \lfloor \frac{s}{3} \rfloor$. Let $r=s-\sum_{i=1}^\ell s_i$, and observe that $r= s-\frac{1}{2}\sum_{i=1}^\ell (m_i-t_i)=s-\frac{s}{2}+\frac{a}{2} =\frac{s+a}{2}$. Hence $r > 0$, and by the assumption on $a$, it follows that $r \le \lfloor \frac{t}{2} \rfloor$. (1) Assume first that $s \not\in \{ 3, 4 \}$ whenever $t$ is even. Let $m_{\ell+1}=t$, $s_{\ell +1}=r$, and $t_{\ell +1}=m_{\ell +1}-2s_{\ell +1}$. By the above observation, $0< s_{\ell +1} \le \lfloor \frac{t}{2} \rfloor$ and hence $t_{\ell +1} \ge 0$. Define $D \subseteq \mathbb{Z}_t^\ast$ as in Lemma [Lemma 19](#lem:long1){reference-type="ref" reference="lem:long1"}, and let $S=\mathbb{Z}_t^\ast-D$. Then $\overrightarrow{\rm Circ}(t;D)$ admits a $\vec{C}_t$-factorization by Lemma [Lemma 19](#lem:long1){reference-type="ref" reference="lem:long1"}. By Lemma [Lemma 20](#lem:long2){reference-type="ref" reference="lem:long2"}, $\overrightarrow{\rm Circ}(t;S)$ admits an $S$-orthogonal $(\vec{P}_1^{\langle a \rangle},\vec{P}_{t-s-a})$-subdigraph, and hence also an $S$-orthogonal $(\vec{P}_0^{\langle \ell-a \rangle},\vec{P}_1^{\langle a \rangle},\vec{P}_{t-s-a})$-subdigraph. It follows that the conditions of Corollary [Corollary 17](#cor:main){reference-type="ref" reference="cor:main"}, with $k=\ell+1$, are satisfied, and we conclude that OP$^\ast(m_1,\ldots,m_{\ell},t)$ has a solution. If $s=3$ and $t$ is even, then $(m_1,\ldots,m_{\ell})=(3)$, and OP$^\ast(3,t)$ has a solution by Lemma [Lemma 21](#lem:long-special){reference-type="ref" reference="lem:long-special"}(a). If $s=4$ and $t$ is even, then we may assume that $(m_1,\ldots,m_{\ell})=(2,2)$ because OP$^\ast(4)$ does not have a solution. 
By Lemma [Lemma 21](#lem:long-special){reference-type="ref" reference="lem:long-special"}(b), OP$^\ast(2,2,t)$ has a solution whenever $t \ge 8$, while OP$^\ast(2,2,6)$ has a solution by Lemma [Lemma 12](#lem:special){reference-type="ref" reference="lem:special"}. (2) Let $k=\ell+\frac{t}{2}$. As seen above, $0 < r \le \frac{t}{2}$. Let $s_i=1$ for $i=\ell+1,\ldots,\ell+r$, and $s_i=0$ for $i=\ell+r+1,\ldots,k$. For all $i=\ell+1,\ldots,k$, let $m_i=2$ and $t_i=m_i-2s_i$. Define $D \subseteq \mathbb{Z}_t^\ast$ as in Lemma [Lemma 22](#lem:2-cycles1){reference-type="ref" reference="lem:2-cycles1"}, and let $S=\mathbb{Z}_t^\ast-D$. Then $\overrightarrow{\rm Circ}(t;D)$ admits a $\vec{C}_2$-factorization by Lemma [Lemma 22](#lem:2-cycles1){reference-type="ref" reference="lem:2-cycles1"}. By Lemma [Lemma 23](#lem:2-cycles2){reference-type="ref" reference="lem:2-cycles2"}, $\overrightarrow{\rm Circ}(t;S)$ admits an $S$-orthogonal $(\vec{P}_1^{\langle a \rangle},\vec{C}_{2}^{\langle \frac{t-s-a}{2} \rangle})$-subdigraph, and hence also an $S$-orthogonal $(\vec{P}_0^{\langle \ell-a+r \rangle},\vec{P}_1^{\langle a \rangle},\vec{C}_{2}^{\langle \frac{t-s-a}{2} \rangle})$-subdigraph. It follows that the conditions of Corollary [Corollary 17](#cor:main){reference-type="ref" reference="cor:main"} are satisfied, and we conclude that OP$^\ast(m_1,\ldots,m_{\ell},2^{\langle \frac{t}{2} \rangle})$ has a solution. In the rest of the section, we present some explicit existence results that are the outgrowth of Theorem [Theorem 6](#the:main-ext){reference-type="ref" reference="the:main-ext"}. **Corollary 24**. *The following problems have a solution:* (a) *OP$^\ast(m_1,m_2)$ if $2 \le m_1 < m_2$ and $m_1 \not\in \{ 4,6 \}$;* (b) *OP$^\ast(2^{\langle b \rangle},m)$ if $b \ge 1$.* *Proof.* (a) We use Theorem [Theorem 6](#the:main-ext){reference-type="ref" reference="the:main-ext"}(1) with $\ell=1$, $s=m_1$, and $t=m_2$.
Observe that the condition $a \le 2\lfloor \frac{t}{2} \rfloor -s$ is indeed satisfied, and that OP$^\ast(m_1)$ has a solution by Theorem [\[thm:Kn\*\]](#thm:Kn*){reference-type="ref" reference="thm:Kn*"}. (b) If $2b<m$, we use Theorem [Theorem 6](#the:main-ext){reference-type="ref" reference="the:main-ext"}(1) with $(m_1,\ldots,m_\ell)=(2^{\langle b \rangle})$ and $t=m$. Note that $a=0$ and that OP$^\ast(2^{\langle b \rangle})$ has a solution by Theorem [\[thm:Kn\*\]](#thm:Kn*){reference-type="ref" reference="thm:Kn*"}. If $m=2b$, the result follows from Corollary [Corollary 13](#cor:even){reference-type="ref" reference="cor:even"}. Hence assume $m<2b$. If $m \not\in \{ 4,6 \}$, then OP$^\ast(m)$ has a solution by Theorem [\[thm:Kn\*\]](#thm:Kn*){reference-type="ref" reference="thm:Kn*"}, and we can use Theorem [Theorem 6](#the:main-ext){reference-type="ref" reference="the:main-ext"}(2) with $\ell=1$, $s=m_1=m$, and $t=2b$. Note that $a=0$ if $s$ is even, and $a=1$ if $s$ is odd. In either case, $a \le 2 \lfloor \frac{t}{2} \rfloor -s$. Hence OP$^\ast(m,2^{\langle b \rangle})$ has a solution. Let $m \in \{ 4,6 \}$, so that $b \ge \frac{m}{2}+1$. Now OP$^\ast(m,2^{\langle \frac{m}{2}+1 \rangle})$ has a solution by Lemma [Lemma 12](#lem:special){reference-type="ref" reference="lem:special"}. Since OP$^\ast(2,m)$ and OP$^\ast(2^{\langle \frac{m}{2}+1 \rangle})$ have solutions by (a) and Theorem [\[thm:Kn\*\]](#thm:Kn*){reference-type="ref" reference="thm:Kn*"}, respectively, OP$^\ast(m,2^{\langle \frac{m}{2}+2 \rangle})$ has a solution by Theorem [Theorem 5](#thm:main-even){reference-type="ref" reference="thm:main-even"}. Finally, since OP$^\ast(2,m)$ has a solution, OP$^\ast(m,2^{\langle b \rangle})$ has a solution for $b\ge \frac{m}{2}+3$ by Theorem [Theorem 6](#the:main-ext){reference-type="ref" reference="the:main-ext"}(2). We are now ready to summarize the proof of our almost-complete solution to the directed Oberwolfach problem with two tables.
Let $m_1$ and $m_2$ be integers such that $2 \le m_1 \le m_2$. Then OP$^\ast(m_1,m_2)$ has a solution if and only if $(m_1,m_2) \ne (3,3)$, with a possible exception in the case that $m_1 \in \{ 4,6 \}$, $m_2$ is even, and $m_1+m_2 \ge 14$. *Proof.* Assuming $m_1 \le m_2$, OP$^\ast(m_1,m_2)$ has a solution in the case that $m_1 \ge 3$, $m_1+m_2$ is odd, and $(m_1,m_2) \ne (4,5)$ by Corollary [\[cor:OP\]](#cor:OP){reference-type="ref" reference="cor:OP"}; for $m_1=m_2 \ne 3$ by Theorem [\[thm:Kn\*\]](#thm:Kn*){reference-type="ref" reference="thm:Kn*"}; for $2 \le m_1 < m_2$ and $m_1 \not\in \{ 4,6 \}$ by Corollary [Corollary 24](#cor:ext){reference-type="ref" reference="cor:ext"}(a); and for $(m_1,m_2) \in \{ (4,6),(4,8) \}$ by Lemma [Lemma 12](#lem:special){reference-type="ref" reference="lem:special"}. Hence the result. In the remaining corollaries in this section, we use the following notation: for an integer $m$, we let $\delta(m)=0$ if $m$ is even, and $\delta(m)=1$ if $m$ is odd. **Corollary 25**. *Let $2 \le m_1 \le m_2 \le m_3$ be integers such that OP$^\ast(m_1,m_2)$ has a solution, and assume $m_3 \ge m_1+m_2+\delta(m_1)+\delta(m_2)$.* *Then OP$^\ast(m_1,m_2,m_3)$ has a solution.* *Proof.* We attempt to use Theorem [Theorem 6](#the:main-ext){reference-type="ref" reference="the:main-ext"} with $\ell=2$, $s=m_1+m_2$, and $t=m_3$, observing that $a=\delta(m_1)+\delta(m_2)$. Since $m_3 \ge m_1+m_2+\delta(m_1)+\delta(m_2)$ by assumption, and $m_1+m_2+\delta(m_1)+\delta(m_2)$ is even, we have that $$\textstyle 2 \lfloor \frac{t}{2} \rfloor = m_3-\delta(m_3) \ge m_1+m_2+\delta(m_1)+\delta(m_2)=s+a.$$ It follows that $a \le 2 \lfloor \frac{t}{2} \rfloor-s$, and if $t>s$, then OP$^\ast(m_1,m_2,m_3)$ has a solution by Theorem [Theorem 6](#the:main-ext){reference-type="ref" reference="the:main-ext"}(1). Hence we may assume that $t=s$. Then $m_1$, $m_2$, and $m_3$ are all even, and $m_3=m_1+m_2$.
If $m_3 \not\in \{ 4,6 \}$, then OP$^\ast(m_3)$ has a solution, and hence OP$^\ast(m_1,m_2,m_3)$ has a solution by Theorem [Theorem 5](#thm:main-even){reference-type="ref" reference="thm:main-even"}. If $m_3=4$, then $(m_1,m_2)=(2,2)$, and if $m_3=6$, then $(m_1,m_2)=(2,4)$. Since OP$^\ast(2,2,4)$ and OP$^\ast(2,4,6)$ have solutions by Lemmas [Lemma 18](#lem:(2,2,4)){reference-type="ref" reference="lem:(2,2,4)"} and [Lemma 12](#lem:special){reference-type="ref" reference="lem:special"}, respectively, the proof is complete. We remark that for $n=m_1+m_2+m_3<60$ with $n$ odd, more existence results on OP$^\ast(m_1,m_2,m_3)$ follow from Corollary [\[cor:OP\]](#cor:OP){reference-type="ref" reference="cor:OP"}. **Corollary 26**. *Let $3 \le m_1 \le m_2$ and $b \ge 1$ be integers. Then OP$^\ast(m_1,m_2,2^{\langle b \rangle})$ has a solution in each of the following cases:* (i) *OP$^\ast(m_1,m_2)$ has a solution, and $2b \ge m_1+m_2+\delta(m_1)+\delta(m_2)$ or $2b \le m_2-m_1-\delta(m_1)$;* (ii) *$(m_1,m_2) =(3,3)$;* (iii) *$m_1 \in \{ 4,6 \}$, $m_2$ is even, $m_1+m_2 \ge 14$, and $2b \ge m_1+m_2+4$ or $2b \le m_2-m_1$.* *Proof.* (i) If $2b \ge m_1+m_2+\delta(m_1)+\delta(m_2)$, then we attempt to use Theorem [Theorem 6](#the:main-ext){reference-type="ref" reference="the:main-ext"}(2) with $\ell=2$, $s=m_1+m_2$, and $t=2b$. Note that $a=\delta(m_1)+\delta(m_2) \le 2b-s$. If $t>s$, then OP$^\ast(m_1,m_2,2^{\langle b \rangle})$ has a solution by Theorem [Theorem 6](#the:main-ext){reference-type="ref" reference="the:main-ext"}(2). If, however, $t=s$, then $m_1$ and $m_2$ are both even and $2b=m_1+m_2$, and since OP$^\ast(2^{\langle b \rangle})$ and OP$^\ast(m_1,m_2)$ both have solutions, the result follows from Theorem [Theorem 5](#thm:main-even){reference-type="ref" reference="thm:main-even"}. If $2b \le m_2-m_1-\delta(m_1)$, then we attempt to use Theorem [Theorem 6](#the:main-ext){reference-type="ref" reference="the:main-ext"}(1) with $\ell=b+1$, $s=m_1+2b$, and $t=m_2$.
Note that $a=\delta(m_1) \le t-s$. If $t>s$, then OP$^\ast(m_1,2^{\langle b \rangle},m_2)$ has a solution by Theorem [Theorem 6](#the:main-ext){reference-type="ref" reference="the:main-ext"}(1). If, however, $t=s$, then $m_1$ is even and $m_2=m_1+2b$. If $m_2 \not \in \{ 4,6 \}$, then OP$^\ast(m_2)$ has a solution by Theorem [\[thm:Kn\*\]](#thm:Kn*){reference-type="ref" reference="thm:Kn*"}, as does OP$^\ast(m_1,2^{\langle b \rangle})$ by Corollary [Corollary 24](#cor:ext){reference-type="ref" reference="cor:ext"}(b), and the result follows from Theorem [Theorem 5](#thm:main-even){reference-type="ref" reference="thm:main-even"}. Since $m_1 \ge 3$, we know $m_2 \ne 4$. Finally, if $m_2=6$, then $m_1=4$ and $b=1$, and OP$^\ast(2,4,6)$ has a solution by Lemma [Lemma 12](#lem:special){reference-type="ref" reference="lem:special"}. (ii) OP$^\ast(2,3,3)$ has a solution by Lemma [Lemma 12](#lem:special){reference-type="ref" reference="lem:special"}. Using Theorem [Theorem 6](#the:main-ext){reference-type="ref" reference="the:main-ext"}(2) with $\ell=3$, $s=8$, and $t=2(b-1)$, assuming $b \ge 6$, we have that $2=a \le t-s$, and OP$^\ast(3,3,2^{\langle b \rangle})$ has a solution. For $2 \le b \le 5$, the result follows from Lemma [Lemma 12](#lem:special){reference-type="ref" reference="lem:special"}. (iii) By Corollary [Corollary 24](#cor:ext){reference-type="ref" reference="cor:ext"}(b), OP$^\ast(2,m_1)$ has a solution, and since $m_2 \ge 2+m_1$, we know that OP$^\ast(2,m_1,m_2)$ has a solution by Corollary [Corollary 25](#cor:ext2){reference-type="ref" reference="cor:ext2"}. If $2b \ge m_1+m_2+4$, then we attempt to use Theorem [Theorem 6](#the:main-ext){reference-type="ref" reference="the:main-ext"}(2) with $\ell=3$, $s=2+m_1+m_2$, and $t=2(b-1)$. Note that $0=a \le t-s$.
If $t>s$, then OP$^\ast(m_1,m_2,2^{\langle b \rangle})$ has a solution by Theorem [Theorem 6](#the:main-ext){reference-type="ref" reference="the:main-ext"}(2), and if $t=s$, then the result follows from Theorem [Theorem 5](#thm:main-even){reference-type="ref" reference="thm:main-even"}. If $2b \le m_2-m_1$, then we attempt to use Theorem [Theorem 6](#the:main-ext){reference-type="ref" reference="the:main-ext"}(1) with $\ell=b+1$, $s=m_1+2b$, and $t=m_2$. Note that $0=a\le t-s$. If $t>s$, then OP$^\ast(m_1,2^{\langle b \rangle},m_2)$ has a solution by Theorem [Theorem 6](#the:main-ext){reference-type="ref" reference="the:main-ext"}(1), and if $t=s$, then the result follows from Theorem [Theorem 5](#thm:main-even){reference-type="ref" reference="thm:main-even"} because OP$^\ast(m_1,2^{\langle b \rangle})$ and OP$^\ast(m_2)$ have solutions by Corollary [Corollary 24](#cor:ext){reference-type="ref" reference="cor:ext"}(b) and Theorem [\[thm:Kn\*\]](#thm:Kn*){reference-type="ref" reference="thm:Kn*"}, respectively. Repeatedly applying Theorem [Theorem 6](#the:main-ext){reference-type="ref" reference="the:main-ext"}(1) we obtain the following. **Corollary 27**. *For $i=1,2,\ldots,k$, let $m_i \ge 2$. Assume $m_1 \not \in \{ 4,6 \}$ and that for each $\ell<k$, we have $m_{\ell+1} -\delta(m_{\ell+1}) > \sum_{i=1}^{\ell} (m_i + \delta(m_i))$. Then OP$^\ast(m_1,\ldots,m_k)$ has a solution.* *Proof.* Since $m_1 \not \in \{ 4,6 \}$, by Theorem [\[thm:Kn\*\]](#thm:Kn*){reference-type="ref" reference="thm:Kn*"}, OP$^\ast(m_1)$ has a solution. Fix any $\ell <k$, and assume OP$^\ast(m_1,\ldots,m_{\ell})$ has a solution. Let $a$ be the number of odd integers in the multiset $\{ \!\! \{ m_1,\ldots,m_{\ell} \} \!\! \}$, let $s=\sum_{i=1}^{\ell} m_i$ and $t=m_{\ell+1}$.
Then $a=\sum_{i=1}^{\ell} \delta(m_i) < m_{\ell+1} -\delta(m_{\ell+1}) - \sum_{i=1}^{\ell} m_i=2 \lfloor \frac{t}{2} \rfloor -s$, so the conditions of Theorem [Theorem 6](#the:main-ext){reference-type="ref" reference="the:main-ext"}(1) are satisfied. It follows that OP$^\ast(m_1,\ldots,m_{\ell},m_{\ell+1})$ has a solution. The result follows by induction.

# Solutions to small cases of OP$^\ast$ {#sec:small}

In this section, we solve the directed Oberwolfach problem for orders up to 13.

**Theorem 28**. *Let $2 \le m_1 \le \ldots \le m_k$ be integers, and assume that $n=m_1+ \ldots + m_k \le 13$. Then OP$^\ast(m_1,\ldots,m_k)$ has a solution if and only if $(m_1,\ldots,m_k) \not\in \{ (4),(6),(3,3) \}$.*

*Proof.* For the uniform case, that is, when $m_1=\ldots = m_k$, the result follows from Theorem [\[thm:Kn\*\]](#thm:Kn*){reference-type="ref" reference="thm:Kn*"}, while for $n$ odd, $m_1\ge3$, and $(m_1,\ldots,m_k) \not\in \{ (4,5),(3,3,5) \}$, the result follows from Corollary [\[cor:OP\]](#cor:OP){reference-type="ref" reference="cor:OP"}. The remaining cases are listed in the tables below.

| $n=m_1+ \ldots + m_k$ | $(m_1,\ldots,m_k)$ | OP$^\ast(m_1,\ldots,m_k)$ has a solution by\... |
|---|---|---|
| 5 | $(2,3)$ | Theorem [Theorem 7](#thm:main2){reference-type="ref" reference="thm:main2"} |
| 6 | $(2,4)$ | Theorem [Theorem 7](#thm:main2){reference-type="ref" reference="thm:main2"} |
| 7 | $(2,5)$ | Theorem [Theorem 7](#thm:main2){reference-type="ref" reference="thm:main2"} |
|   | $(2,2,3)$ | Corollary [Corollary 24](#cor:ext){reference-type="ref" reference="cor:ext"}(b) |
| 8 | $(2,6)$ | Theorem [Theorem 7](#thm:main2){reference-type="ref" reference="thm:main2"} |
|   | $(3,5)$ | Theorem [Theorem 7](#thm:main2){reference-type="ref" reference="thm:main2"} |
|   | $(2,2,4)$ | Lemma [Lemma 18](#lem:(2,2,4)){reference-type="ref" reference="lem:(2,2,4)"} |
|   | $(2,3,3)$ | Lemma [Lemma 12](#lem:special){reference-type="ref" reference="lem:special"} |
| 9 | $(2,7)$ | Theorem [Theorem 7](#thm:main2){reference-type="ref" reference="thm:main2"} |
|   | $(4,5)$ | Theorem [Theorem 7](#thm:main2){reference-type="ref" reference="thm:main2"} |
|   | $(2,2,5)$ | Corollary [Corollary 24](#cor:ext){reference-type="ref" reference="cor:ext"}(b) |
|   | $(2,3,4)$ | Lemma [Lemma 11](#lem:1-rotational-q){reference-type="ref" reference="lem:1-rotational-q"} and Appendix [11.1](#app1){reference-type="ref" reference="app1"} |
|   | $(2,2,2,3)$ | Corollary [Corollary 24](#cor:ext){reference-type="ref" reference="cor:ext"}(b) |
| 10 | $(2,8)$ | Theorem [Theorem 7](#thm:main2){reference-type="ref" reference="thm:main2"} |
|   | $(3,7)$ | Theorem [Theorem 7](#thm:main2){reference-type="ref" reference="thm:main2"} |
|   | $(4,6)$ | Theorem [Theorem 7](#thm:main2){reference-type="ref" reference="thm:main2"} |
|   | $(2,2,6)$ | Corollary [Corollary 24](#cor:ext){reference-type="ref" reference="cor:ext"}(b) |
|   | $(2,3,5)$ | Lemma [Lemma 11](#lem:1-rotational-q){reference-type="ref" reference="lem:1-rotational-q"} and Appendix [11.1](#app1){reference-type="ref" reference="app1"} |
|   | $(2,4,4)$ | Lemma [Lemma 11](#lem:1-rotational-q){reference-type="ref" reference="lem:1-rotational-q"} and Appendix [11.1](#app1){reference-type="ref" reference="app1"} |
|   | $(3,3,4)$ | Lemma [Lemma 11](#lem:1-rotational-q){reference-type="ref" reference="lem:1-rotational-q"} and Appendix [11.2](#app3){reference-type="ref" reference="app3"} |
|   | $(2,2,2,4)$ | Corollary [Corollary 24](#cor:ext){reference-type="ref" reference="cor:ext"}(b) |
|   | $(2,2,3,3)$ | Lemma [Lemma 12](#lem:special){reference-type="ref" reference="lem:special"} |
| 11 | $(2,9)$ | Theorem [Theorem 7](#thm:main2){reference-type="ref" reference="thm:main2"} |
|   | $(2,2,7)$ | Corollary [Corollary 24](#cor:ext){reference-type="ref" reference="cor:ext"}(b) |
|   | $(2,3,6)$ | Theorems [Theorem 7](#thm:main2){reference-type="ref" reference="thm:main2"} and [Theorem 6](#the:main-ext){reference-type="ref" reference="the:main-ext"}(1) |
|   | $(2,4,5)$ | Lemma [Lemma 11](#lem:1-rotational-q){reference-type="ref" reference="lem:1-rotational-q"} and Appendix [11.1](#app1){reference-type="ref" reference="app1"} |
|   | $(3,3,5)$ | Lemma [Lemma 11](#lem:1-rotational-q){reference-type="ref" reference="lem:1-rotational-q"} and Appendix [11.1](#app1){reference-type="ref" reference="app1"} |
|   | $(2,2,2,5)$ | Corollary [Corollary 24](#cor:ext){reference-type="ref" reference="cor:ext"}(b) |
|   | $(2,2,3,4)$ | Lemma [Lemma 11](#lem:1-rotational-q){reference-type="ref" reference="lem:1-rotational-q"} and Appendix [11.1](#app1){reference-type="ref" reference="app1"} |
|   | $(2,3,3,3)$ | Lemma [Lemma 11](#lem:1-rotational-q){reference-type="ref" reference="lem:1-rotational-q"} and Appendix [11.1](#app1){reference-type="ref" reference="app1"} |
|   | $(2,2,2,2,3)$ | Corollary [Corollary 24](#cor:ext){reference-type="ref" reference="cor:ext"}(b) |
| 12 | $(2,10)$ | Theorem [Theorem 7](#thm:main2){reference-type="ref" reference="thm:main2"} |
|   | $(3,9)$ | Theorem [Theorem 7](#thm:main2){reference-type="ref" reference="thm:main2"} |
|   | $(4,8)$ | Lemma [Lemma 12](#lem:special){reference-type="ref" reference="lem:special"} |
|   | $(5,7)$ | Theorem [Theorem 7](#thm:main2){reference-type="ref" reference="thm:main2"} |
|   | $(2,2,8)$ | Corollary [Corollary 24](#cor:ext){reference-type="ref" reference="cor:ext"}(b) |
|   | $(2,3,7)$ | Theorems [Theorem 7](#thm:main2){reference-type="ref" reference="thm:main2"} and [Theorem 6](#the:main-ext){reference-type="ref" reference="the:main-ext"}(1) |
|   | $(2,4,6)$ | Lemma [Lemma 12](#lem:special){reference-type="ref" reference="lem:special"} |
|   | $(2,5,5)$ | Lemma [Lemma 11](#lem:1-rotational-q){reference-type="ref" reference="lem:1-rotational-q"} and Appendix [11.1](#app1){reference-type="ref" reference="app1"} |
|   | $(3,3,6)$ | Appendix [11.3](#appq){reference-type="ref" reference="appq"} |
|   | $(3,4,5)$ | Appendix [11.3](#appq){reference-type="ref" reference="appq"} |
|   | $(2,2,2,6)$ | Corollary [Corollary 24](#cor:ext){reference-type="ref" reference="cor:ext"}(b) |
|   | $(2,2,3,5)$ | Lemma [Lemma 11](#lem:1-rotational-q){reference-type="ref" reference="lem:1-rotational-q"} and Appendix [11.1](#app1){reference-type="ref" reference="app1"} |
|   | $(2,2,4,4)$ | Theorems [Theorem 7](#thm:main2){reference-type="ref" reference="thm:main2"} and [Theorem 5](#thm:main-even){reference-type="ref" reference="thm:main-even"} |
|   | $(2,3,3,4)$ | Lemma [Lemma 11](#lem:1-rotational-q){reference-type="ref" reference="lem:1-rotational-q"} and Appendix [11.1](#app1){reference-type="ref" reference="app1"} |
|   | $(2,2,2,2,4)$ | Corollary [Corollary 24](#cor:ext){reference-type="ref" reference="cor:ext"}(b) |
|   | $(2,2,2,3,3)$ | Lemma [Lemma 12](#lem:special){reference-type="ref" reference="lem:special"} |

$n=m_1+ \ldots + m_k$ $(m_1,\ldots,m_k)$ OP$^\ast(m_1,\ldots,m_k)$ has a solution by\...
----------------------- -------------------- --------------------------------------------------------------------------------------------------------------------------------------------------------------- 13 (2,11) Theorem [Theorem 7](#thm:main2){reference-type="ref" reference="thm:main2"} (2,2,9) Corollary [Corollary 24](#cor:ext){reference-type="ref" reference="cor:ext"}(b) (2,3,8) Theorems [Theorem 7](#thm:main2){reference-type="ref" reference="thm:main2"} and [Theorem 6](#the:main-ext){reference-type="ref" reference="the:main-ext"}(1) (2,4,7) Theorems [Theorem 7](#thm:main2){reference-type="ref" reference="thm:main2"} and [Theorem 6](#the:main-ext){reference-type="ref" reference="the:main-ext"}(1) (2,5,6) Lemma [Lemma 11](#lem:1-rotational-q){reference-type="ref" reference="lem:1-rotational-q"} and Appendix [11.1](#app1){reference-type="ref" reference="app1"} (2,2,2,7) Corollary [Corollary 24](#cor:ext){reference-type="ref" reference="cor:ext"}(b) (2,2,3,6) Lemma [Lemma 11](#lem:1-rotational-q){reference-type="ref" reference="lem:1-rotational-q"} and Appendix [11.1](#app1){reference-type="ref" reference="app1"} (2,2,4,5) Lemma [Lemma 11](#lem:1-rotational-q){reference-type="ref" reference="lem:1-rotational-q"} and Appendix [11.1](#app1){reference-type="ref" reference="app1"} (2,3,3,5) Lemma [Lemma 11](#lem:1-rotational-q){reference-type="ref" reference="lem:1-rotational-q"} and Appendix [11.1](#app1){reference-type="ref" reference="app1"} (2,3,4,4) Lemma [Lemma 11](#lem:1-rotational-q){reference-type="ref" reference="lem:1-rotational-q"} and Appendix [11.1](#app1){reference-type="ref" reference="app1"} (2,2,2,2,5) Corollary [Corollary 24](#cor:ext){reference-type="ref" reference="cor:ext"}(b) (2,2,2,3,4) Lemma [Lemma 11](#lem:1-rotational-q){reference-type="ref" reference="lem:1-rotational-q"} and Appendix [11.1](#app1){reference-type="ref" reference="app1"} (2,2,3,3,3) Lemma [Lemma 11](#lem:1-rotational-q){reference-type="ref" reference="lem:1-rotational-q"} 
and Appendix [11.1](#app1){reference-type="ref" reference="app1"} (2,2,2,2,2,3) Corollary [Corollary 24](#cor:ext){reference-type="ref" reference="cor:ext"}(b) For $14 \le n \le 16$, we have obtained similar, though incomplete results. Namely, we showed OP$^\ast(m_1,\ldots,m_k)$ has a solution, except possibly for $(m_1,\ldots,m_k) \in \{ (4,10), (6,8),$ $(3,3,8), (3,4,7),(3,5,6),(4,4,6),(4,5,5),(3,3,3,5),(3,3,4,4)\}$. The difficulty with these outstanding cases (all with $n=14$) is as follows. First, these cases do not fall into any of the families solved by the results of this paper, so a direct construction is needed. A simple 1-rotational approach with $q=1$ described in Lemma [Lemma 11](#lem:1-rotational-q){reference-type="ref" reference="lem:1-rotational-q"} cannot possibly work when $n$ is even and all $m_i \ge 3$. Namely, if in this case, a $(\vec{C}_{m_1},\ldots,\vec{C}_{m_k})$-factor $F$ satisfying the conditions of Lemma [Lemma 11](#lem:1-rotational-q){reference-type="ref" reference="lem:1-rotational-q"} with $q=1$ existed, then $\overrightarrow{\rm Circ}(n-1;\mathbb{Z}_{n-1}^\ast)$ would admit a $\mathbb{Z}_{n-1}^\ast$-orthogonal subdigraph consisting of disjoint directed cycles and a directed path. Since the differences of the arcs in each directed cycle add up to 0 modulo $n-1$, and the differences in $\mathbb{Z}_{n-1}^\ast$ add up to 0 as well, the differences of the arcs in the remaining directed path would have to add up to 0 as well, which is a contradiction. If $n-1$ has a small divisor $q >1$, then the next simplest approach would be a base-$q$ 1-rotational construction from Lemma [Lemma 11](#lem:1-rotational-q){reference-type="ref" reference="lem:1-rotational-q"}. Unfortunately, for $n=14$, we have that $n-1$ is a prime, so the smallest possible $q=13$. This essentially means that we are looking for a directed 2-factorization without symmetry, which is virtually impossible by hand, and computationally hard. 
Finally, applying Theorem [Theorem 6](#the:main-ext){reference-type="ref" reference="the:main-ext"} to Theorem [Theorem 28](#thm:small){reference-type="ref" reference="thm:small"}, we immediately obtain the following. **Corollary 29**. *Let $2 \le m_1 \le \ldots \le m_{\ell}<t$ be integers such that $s=m_1+ \ldots + m_{\ell} \le 13$. Furthermore, assume that the following all hold:* (i) *$(m_1,\ldots,m_{\ell}) \not\in \{ (4),(6),(3,3) \}$;* (ii) *$t>s$; and* (iii) *$\delta(m_1)+\ldots+\delta(m_{\ell}) \le t-\delta(t)-s$.* *Then OP$^\ast(m_1,\ldots,m_{\ell},t)$, as well as OP$^\ast(m_1,\ldots,m_{\ell},2^{\langle \frac{t}{2} \rangle})$ if $t$ is even, has a solution.* # Conclusion In this paper, we introduced a recursive method that allows us to construct solutions to larger cases of OP$^\ast$ from solutions to smaller cases. In particular, we showed how to extend the directed 2-factor in a known solution by a long cycle, as well as by a relatively large number of 2-cycles. Among many other explicit infinite families of cases that we were able to successfully address, this method yields an almost-complete solution to the two-table directed Oberwolfach problem. We were also able to settle all cases with up to 13 vertices. In closing, we would like to propose several future research directions. The most compelling would be to complete the solution to the two-table OP$^\ast$, which would be analogous to Theorem [\[thm:Tra\]](#thm:Tra){reference-type="ref" reference="thm:Tra"} [@BryDan; @Gvo; @Hag; @Tra]. Since the outstanding cases are all bipartite, such a solution could perhaps be obtained as a special case of the solution to the general bipartite case, analogous to the results of [@BryDan; @Hag] for OP. 
Another natural direction would be to push the upper bound on $n$ in Theorem [Theorem 28](#thm:small){reference-type="ref" reference="thm:small"} in order to obtain a result more comparable to that of Theorem [\[thm:OPsmall\]](#thm:OPsmall){reference-type="ref" reference="thm:OPsmall"}. While larger cases in Lemma [Lemma 12](#lem:special){reference-type="ref" reference="lem:special"} and Appendix [11](#app){reference-type="ref" reference="app"} were obtained computationally, more sophisticated algorithms and more computing power will be needed to solve cases with $n \ge 14$. It is well known and easy to see that a solution to OP$^\ast(m_1,\ldots,m_t)$ gives rise to a solution to OP$\!_2(m_1,\ldots,m_t)$, which asks for a $(C_{m_1},\ldots,C_{m_t})$-factorization of $2K_n$, the two-fold complete graph with $n=m_1+\ldots+m_t$ vertices. Thus both the explicit solutions to OP$^\ast$, as well as the recursive constructions presented in this paper, extend to OP$\!_2$. It would be of interest to collect the known results on OP$\!_2$ (of which there are many more than for OP$^\ast$), and use these solutions as base cases for our recursive method to see what explicit new solutions can be obtained. While it is generally believed that the directed version of OP is harder than the undirected version, the discovery of our new recursive method, which in its present form does not apply to undirected simple graphs, may cast a shadow of doubt on this belief. On the other hand, we propose that it might be possible to develop a more limited version of our recursive construction that would, in fact, apply to simple graphs and therefore lead to new solutions for OP. ## Acknowledgements {#acknowledgements .unnumbered} Both authors wish to thank Kelsey Gasior for mentoring and financially supporting the first author. The first author would like to thank Allah for the knowledge, ideas, and opportunities He has given her. 
She would also like to thank the second author for her continuous kindness and guidance. The second author gratefully acknowledges support from the Natural Sciences and Engineering Research Council of Canada (NSERC), Discovery Grant RGPIN-2022-02994.

- P. Adams, D. Bryant, Resolvable directed cycle systems of all indices for cycle length 3 and 4, unpublished.
- P. Adams, D. Bryant, Two-factorisations of complete graphs of orders fifteen and seventeen, *Australas. J. Combin.* **35** (2006), 113--118.
- B. Alspach, H. Gavlas, M. Šajna, H. Verrall, Cycle decompositions IV: Complete directed graphs and fixed length directed cycles, *J. Combin. Theory Ser. A* **103** (2003), 165--208.
- B. Alspach, R. Häggkvist, Some observations on the Oberwolfach problem, *J. Graph Theory* **9** (1985), 177--187.
- B. Alspach, P. J. Schellenberg, D. R. Stinson, D. Wagner, The Oberwolfach problem and factors of uniform odd length cycles, *J. Combin. Theory Ser. A* **52** (1989), 20--43.
- F. E. Bennett, X. Zhang, Resolvable Mendelsohn designs with block size $4$, *Aequationes Math.* **40** (1990), 248--260.
- Resolvable Mendelsohn triple systems with equal sized holes, *J. Combin. Des.* **5** (1997), 329--340.
- J.-C. Bermond, O. Favaron, M. Mahéo, Hamiltonian decomposition of Cayley graphs of degree 4, *J. Combin. Theory Ser. B* **46** (1989), 142--153.
- J.-C. Bermond, A. Germa, D. Sotteau, Resolvable decomposition of $K_n^\ast$, *J. Combin. Theory Ser. A* **26** (1979), 179--185.
- D. Bryant, P. Danziger, On bipartite 2-factorizations of $K_n-I$ and the Oberwolfach problem, *J. Graph Theory* **68** (2011), 22--37.
- A. Burgess, N. Francetić, M. Šajna, On the directed Oberwolfach Problem with equal cycle lengths: the odd case, *Australas. J. Combin.* **71** (2018), 272--292.
- A. Burgess, M. Šajna, On the directed Oberwolfach problem with equal cycle lengths, *Electron. J. Combin.* **21** (2014), Paper 1.15, 14 pp.
- A. Deza, F. Franek, W. Hua, M. Meszka, A. Rosa, Solutions to the Oberwolfach problem for orders 18 to 40, *J. Combin. Math. Combin. Comput.* **74** (2010), 95--102.
- F. Franek, J. Holub, A. Rosa, Two-factorizations of small complete graphs II: The case of 13 vertices, *J. Combin. Math. Combin. Comput.* **51** (2004), 89--94.
- F. Franek, A. Rosa, Two-factorizations of small complete graphs, *J. Statist. Plann. Inference* **86** (2000), 435--442.
- S. Glock, F. Joos, J. Kim, D. Kühn, D. Osthus, Resolution of the Oberwolfach problem, *J. Eur. Math. Soc.* **23** (2021), 2511--2547.
- P. Gvozdjak, On the Oberwolfach problem for cycles with multiple lengths, PhD Thesis, Simon Fraser University, 2004.
- R. Häggkvist, A lemma on cycle decompositions, *Ann. Discrete Math.* **27** (1985), 227--232.
- D. G. Hoffman, P. J. Schellenberg, The existence of $C_k$-factorizations of $K_{2n}-F$, *Discrete Math.* **97** (1991), 243--250.
- C. Huang, A. Kotzig, A. Rosa, On a variation of the Oberwolfach problem, *Discrete Math.* **27** (1979), 261--277.
- A. Lacaze-Masmonteil, Completing the solution of the directed Oberwolfach problem with cycles of equal length, *J. Comb. Des.*, accepted Sep. 2023, arXiv:2212.12072.
- A. Lacaze-Masmonteil, private communication, Aug. 2023.
- F. Salassa, G. Dragotto, T. Traetta, M. Buratti, F. Della Croce, Merging combinatorial design and optimization: the Oberwolfach problem, *Australas. J. Combin.* **79** (2021), 141--166.
- E. Shabani, M. Šajna, On the Directed Oberwolfach Problem with variable cycle lengths, unpublished, arXiv:2009.08731 (2020).
- T. W. Tillson, A hamiltonian decomposition of $K_{2m}^\ast$, $2m \ge 8$, *J. Comb. Theory Ser. B* **29** (1980), 69--74.
- T. Traetta, A complete solution to the two-table Oberwolfach problems, *J. Combin. Theory Ser. A* **120** (2013), 984--997.
- E. E. Westlund, Hamilton decompositions of certain 6-regular Cayley graphs on Abelian groups with a cyclic subgroup of index two, *Discrete Math.* **312** (2012), 3228--3235.
# Solutions to special small cases {#app} ## Solutions with a single starter {#app1} In each of the following problems, we give a starter 2-factor in a base-1 1-rotational solution, as described in Lemma [Lemma 11](#lem:1-rotational-q){reference-type="ref" reference="lem:1-rotational-q"}. - OP$^\ast(2,3,4)$: $u_1 \, u_7 \, u_1 \; \cup \; u_0 \, u_4 \, u_{\infty} \, u_0 \; \cup \; u_2 \, u_3 \, u_6 \, u_5 \, u_2$; - OP$^\ast(2,3,5)$: $u_0 \, u_{\infty} \, u_0 \; \cup \; u_2 \, u_3 \, u_5 \, u_2 \; \cup \; u_1 \, u_6 \, u_4 \; u_8 \, u_7 \, u_1$; - OP$^\ast(2,4,4)$: $u_0 \, u_{\infty} \, u_0 \; \cup \; u_1 \, u_7 \, u_5 \, u_6 \, u_1 \; \cup \; u_2 \, u_4 \, u_3 \, u_8 \, u_2$; - OP$^\ast(2,4,5)$: $u_1 \, u_2 \, u_1 \; \cup \; u_3 \, u_{\infty} \, u_8 \, u_6 \, u_3 \; \cup \; u_0 \, u_4 \, u_7 \, u_9 \, u_5 \, u_0$; - OP$^\ast(3,3,5)$: $u_0 \, u_{\infty} \, u_5 \, u_0 \; \cup \; u_3 \, u_4 \, u_6 \, u_3 \; \cup \; u_1 \, u_9 \, u_2 \; u_8 \, u_7 \, u_1$; - OP$^\ast(2,2,3,4)$: $u_1 \, u_9 \, u_1 \; \cup \; u_2 \, u_8 \, u_2 \; \cup \; u_0 \, u_{\infty} \, u_5 \, u_0 \; \cup \; u_3 \, u_4 \, u_7 \, u_6 \, u_3$; - OP$^\ast(2,3,3,3)$: $u_3 \, u_7 \, u_3 \; \cup \; u_0 \, u_{\infty} \, u_5 \, u_0 \; \cup \; u_1 \, u_2 \, u_4 \, u_1 \; \cup \; u_6 \, u_9 \, u_8 \, u_6$; - OP$^\ast(2,5,5)$: $u_0 \, u_{\infty} \, u_0 \; \cup \; u_1 \, u_8 \, u_2 \, u_4 \, u_3 \, u_1 \; \cup \; u_5 \, u_9 \, u_6 \, u_7 \; u_{10} \, u_5$; - OP$^\ast(2,2,3,5)$: $u_0 \, u_{\infty} \, u_0 \; \cup \; u_5 \, u_6 \, u_5 \; \cup \; u_2 \, u_7 \, u_4 \, u_2 \; \cup \; u_1 \, u_8 \, u_{10} \, u_3 \; u_9 \, u_1$; - OP$^\ast(2,3,3,4)$: $u_0 \, u_{\infty} \, u_0 \; \cup \; u_1 \, u_5 \, u_2 \, u_1 \; \cup \; u_4 \, u_{10} \, u_8 \, u_4 \; \cup \; u_3 \, u_6 \, u_7 \; u_9 \, u_3$; - OP$^\ast(2,5,6)$: $u_3 \, u_6 \, u_3 \; \cup \; u_0 \, u_7 \, u_5 \, u_{11} \, u_1 \, u_0 \; \cup \; u_2 \, u_{10} \, u_{\infty} \, u_4 \, u_8 \, u_9 \, u_2$; - OP$^\ast(2,2,3,6)$: $u_5 \, u_7 \, u_5 \; \cup \; u_{10} \, u_{11} 
\, u_{10} \; \cup \; u_1 \, u_4 \, u_8 \, u_1 \; \cup \; u_0 \, u_{\infty} \, u_6 \, u_2 \, u_9 \, u_3 \, u_0$; - OP$^\ast(2,2,4,5)$: $u_0 \, u_{11} \, u_0 \; \cup \; u_1 \, u_5 \, u_1 \; \cup \; u_2 \, u_7 \, u_{10} \, u_4 \, u_2 \; \cup \; u_3 \, u_{\infty} \, u_9 \, u_6 \, u_8 \, u_3$; - OP$^\ast(2,3,3,5)$: $u_2 \, u_4 \, u_2 \; \cup \; u_0 \, u_1 \, u_7 \, u_0 \; \cup \; u_3 \, u_{10} \, u_6 \, u_3 \; \cup \; u_5 \, u_9 \, u_8 \, u_{11} \, u_{\infty} \, u_5$; - OP$^\ast(2,3,4,4)$: $u_6 \, u_9 \, u_6 \; \cup \; u_1 \, u_8 \, u_2 \, u_1 \; \cup \; u_0 \, u_{10} \, u_3 \, u_4 \, u_0 \; \cup \; u_5 \, u_7 \, u_{11} \, u_{\infty} \, u_5$; - OP$^\ast(2,2,2,3,4)$: $u_1 \, u_{10} \, u_1 \; \cup \; u_4 \, u_{11} \, u_4 \; \cup \; u_5 \, u_6 \, u_5 \; \cup \; u_3 \, u_7 \, u_9 \, u_3 \; \cup \; u_0 \, u_8 \, u_{\infty} \, u_2 \, u_0$; - OP$^\ast(2,2,3,3,3)$: $u_0 \, u_5 \, u_0 \; \cup \; u_4 \, u_6 \, u_4 \; \cup \; u_1 \, u_{10} \, u_2 \, u_1 \; \cup \; u_3 \, u_9 \, u_{\infty} \, u_3 \; \cup \; u_7 \, u_8 \, u_{11} \, u_7$. ## Solutions with three starters {#app3} In the following problem, we give three starter 2-factors in a base-3 1-rotational solution, as described in Lemma [Lemma 11](#lem:1-rotational-q){reference-type="ref" reference="lem:1-rotational-q"}. - OP$^\ast(3,3,4)$: $$\begin{aligned} F_0 &=& u_0 \, u_5 \, u_2 \, u_0 \; \cup \; u_1 \, u_{\infty} \, u_3 \, u_1 \; \cup \; u_4 \, u_6 \, u_8 \, u_7 \, u_4, \\ F_1 &=& u_0 \, u_3 \, u_4 \, u_0 \; \cup \; u_5 \, u_6 \, u_{\infty} \, u_5 \; \cup \; u_1 \, u_8 \, u_2 \, u_7 \, u_1, \\ F_2 &=& u_2 \, u_4 \, u_3 \, u_2 \; \cup \; u_7 \, u_8 \, u_{\infty} \, u_7 \; \cup \; u_0 \, u_6 \, u_1 \, u_5 \, u_0. \end{aligned}$$ ## Solutions without symmetry {#appq} In each of the following problems, we list the directed 2-factors that form a solution.
- OP$^\ast(3,3,6)$: $$\begin{aligned} F_0 &=& u_3 \, u_{\infty} \, u_{10} \, u_3 \; \cup \; u_4 \, u_6 \, u_7 \, u_4 \; \cup \; u_0 \, u_8 \, u_9 \, u_2 \, u_5 \, u_1 \, u_0, \\ F_1 &=& u_1 \, u_2 \, u_{10} \, u_1 \; \cup \; u_3 \, u_6 \, u_8 \, u_3 \; \cup \; u_0 \, u_4 \, u_7 \, u_9 \, u_{\infty} \, u_5 \, u_0, \\ F_2 &=& u_1 \, u_{10} \, u_4 \, u_1 \; \cup \; u_7 \, u_{\infty} \, u_8 \, u_7 \; \cup \; u_0 \, u_6 \, u_2 \, u_9 \, u_5 \, u_3 \, u_0, \\ F_3 &=& u_0 \, u_{10} \, u_9 \, u_0 \; \cup \; u_7 \, u_8 \, u_{\infty} \, u_7 \; \cup \; u_1 \, u_5 \, u_4 \, u_2 \, u_6 \, u_3 \, u_1, \\ F_4 &=& u_1 \, u_6 \, u_{\infty} \, u_1 \; \cup \; u_5 \, u_9 \, u_7 \, u_5 \; \cup \; u_0 \, u_3 \, u_2 \, u_8 \, u_4 \, u_{10} \, u_0, \\ F_5 &=& u_4 \, u_5 \, u_{\infty} \, u_4 \; \cup \; u_3 \, u_{10} \, u_7 \, u_3 \; \cup \; u_0 \, u_2 \, u_1 \, u_9 \, u_8 \, u_6 \, u_0, \\ F_6 &=& u_2 \, u_7 \, u_{10} \, u_2 \; \cup \; u_4 \, u_{9} \, u_6 \, u_4 \; \cup \; u_0 \, u_1 \, u_{\infty} \, u_3 \, u_5 \, u_8 \, u_0, \\ F_7 &=& u_2 \, u_3 \, u_7 \, u_2 \; \cup \; u_5 \, u_{10} \, u_8 \, u_5 \; \cup \; u_0 \, u_{\infty} \, u_6 \, u_9 \, u_1 \, u_4 \, u_0, \\ F_8 &=& u_3 \, u_9 \, u_4 \, u_3 \; \cup \; u_5 \, u_6 \, u_{10} \, u_5 \; \cup \; u_0 \, u_7 \, u_1 \, u_8 \, u_2 \, u_{\infty} \, u_0, \\ F_9 &=& u_1 \, u_3 \, u_8 \, u_1 \; \cup \; u_2 \, u_4 \, u_{\infty} \, u_2 \; \cup \; u_0 \, u_9 \, u_{10} \, u_6 \, u_5 \, u_7 \, u_0, \\ F_{10} &=& u_0 \, u_5 \, u_2 \, u_0 \; \cup \; u_1 \, u_7 \, u_6 \, u_1 \; \cup \; u_3 \, u_4 \, u_8 \, u_{10} \, u_{\infty} \, u_9 \, u_3; \end{aligned}$$ - OP$^\ast(3, 4, 5)$: $$\begin{aligned} F_0 &=& u_9 \, u_{10} \, u_{\infty} \, u_9 \; \cup \; u_3 \, u_7 \, u_4 \, u_5 \, u_3 \; \cup \; u_0 \, u_6 \, u_8 \, u_1 \, u_2 \, u_0, \\ F_1 &=& u_7 \, u_9 \, u_{\infty} \, u_7 \; \cup \; u_0 \, u_{10} \, u_3 \, u_8 \, u_0 \; \cup \; u_1 \, u_4 \, u_6 \, u_5 \, u_2 \, u_1, \\ F_2 &=& u_2 \, u_7 \, u_{10} \, u_2 \; \cup \; u_1 \, u_{\infty} \, u_4 \, u_9 \, 
u_1 \; \cup \; u_0 \, u_8 \, u_6 \, u_3 \, u_5 \, u_0, \\ F_3 &=& u_1 \, u_9 \, u_4 \, u_1 \; \cup \; u_2 \, u_{10} \, u_8 \, u_7 \, u_2 \; \cup \; u_0 \, u_5 \, u_6 \, u_{\infty} \, u_3 \, u_0, \\ F_4 &=& u_0 \, u_2 \, u_{\infty} \, u_0 \; \cup \; u_5 \, u_9 \, u_8 \, u_{10} \, u_5 \; \cup \; u_1 \, u_6 \, u_4 \, u_7 \, u_3 \, u_1, \\ F_5 &=& u_5 \, u_{10} \, u_9 \, u_5 \; \cup \; u_2 \, u_3 \, u_4 \, u_8 \, u_2 \; \cup \; u_0 \, u_{\infty} \, u_6 \, u_1 \, u_7 \, u_0, \\ F_6 &=& u_2 \, u_8 \, u_4 \, u_2 \; \cup \; u_0 \, u_3 \, u_9 \, u_6 \, u_0 \; \cup \; u_1 \, u_5 \, u_7 \, u_{\infty} \, u_{10} \, u_1, \\ F_7 &=& u_2 \, u_9 \, u_3 \, u_2 \; \cup \; u_1 \, u_8 \, u_5 \, u_{\infty} \, u_1 \; \cup \; u_0 \, u_7 \, u_6 \, u_{10} \, u_4 \, u_0, \\ F_8 &=& u_3 \, u_{\infty} \, u_8 \, u_3 \; \cup \; u_2 \, u_4 \, u_{10} \, u_6 \, u_2 \; \cup \; u_0 \, u_9 \, u_7 \, u_5 \, u_1 \, u_0, \\ F_9 &=& u_1 \, u_{10} \, u_7 \, u_1 \; \cup \; u_2 \, u_5 \, u_8 \, u_{\infty} \, u_2 \; \cup \; u_0 \, u_4 \, u_3 \, u_6 \, u_9 \, u_0, \\ F_{10} &=& u_4 \, u_{\infty} \, u_5 \, u_4 \; \cup \; u_0 \, u_1 \, u_3 \, u_{10} \, u_0 \; \cup \; u_2 \, u_6 \, u_7 \, u_8 \, u_9 \, u_2.\end{aligned}$$ [^1]: Email: msajna\@uottawa.ca. Phone: +1-613-562-5800 ext. 3522. Mailing address: Department of Mathematics and Statistics, University of Ottawa, 150 Louis-Pasteur Private, Ottawa, ON, K1N 6N5,Canada.
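The base-1 1-rotational starters of Appendix [11.1](#app1){reference-type="ref" reference="app1"} can be checked mechanically. The following sketch (an illustration, assuming, as in Lemma [Lemma 11](#lem:1-rotational-q){reference-type="ref" reference="lem:1-rotational-q"}, the vertex set $\mathbb{Z}_8 \cup \{\infty\}$ for $n=9$ and the rotation $x \mapsto x+1$ fixing $\infty$) verifies that developing the OP$^\ast(2,3,4)$ starter 2-factor covers every arc of $K_9^\ast$ exactly once:

```python
# Verify that developing the OP*(2,3,4) starter 2-factor under
# x -> x+1 (mod 8), with the point "inf" fixed, uses every arc of
# the complete symmetric digraph K_9^* exactly once.
INF = "inf"
VERTS = list(range(8)) + [INF]

# Starter from Appendix 11.1: u1 u7 u1  U  u0 u4 u_inf u0  U  u2 u3 u6 u5 u2
starter_cycles = [[1, 7], [0, 4, INF], [2, 3, 6, 5]]

def arcs(cycles):
    # directed arcs of a union of directed cycles
    out = []
    for c in cycles:
        out += [(c[i], c[(i + 1) % len(c)]) for i in range(len(c))]
    return out

def rotate(v, k):
    return v if v == INF else (v + k) % 8

all_arcs = set()
for k in range(8):  # develop the starter over Z_8
    factor = [[rotate(v, k) for v in c] for c in starter_cycles]
    for a in arcs(factor):
        assert a not in all_arcs, "arc repeated"
        all_arcs.add(a)

expected = {(u, v) for u in VERTS for v in VERTS if u != v}
print(all_arcs == expected)  # True: all 72 arcs, each exactly once
```

The check succeeds because the seven finite-to-finite arcs of the starter realize each difference of $\mathbb{Z}_8^\ast$ exactly once, while the arcs through $\infty$ sweep out all arcs in and out of $\infty$ under rotation.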
--- abstract: | In this note we study countable subgroups of the full group of a measure preserving equivalence relation. We provide various constraints on the group structure, the nature of the action, and on the measure of fixed point sets, that imply that the subgroup topology is not discrete. We mention various conjectures about discrete subgroups of full groups. address: - Vadim Alekseev, TU Dresden, Germany - Alessandro Carderi - Andreas Thom, TU Dresden, Germany - Robin Tucker-Drob, University of Florida, USA author: - Vadim Alekseev - Alessandro Carderi - Andreas Thom - Robin Tucker-Drob title: About discrete subgroups of full groups of measure preserving equivalence relations --- # Introduction In this note we continue a study that was started in [@andreasvadim]. It is our attempt to understand the structure of discrete subgroups of full groups of equivalence relations induced by a probability measure preserving action of a countable group. Let $\Gamma$ be a countable (discrete) group and let $\Gamma \curvearrowright (X,\mu)$ be an ergodic measure preserving action. Let $G:=[\Gamma \curvearrowright (X,\mu)]$ be the full group of the associated measurable equivalence relation. For any $g \in G,$ we denote by $S(g)$ its support, i.e., $S(g):=\{x \in X \mid g.x \neq x\}.$ The group $G$ is endowed with a natural bi-invariant metric associated with the conjugation invariant length function $\ell(g):= \mu(S(g))$, i.e., $d(g,h):= \ell(gh^{-1})$. For any subgroup $\Lambda \leq G$, we set $$\delta(\Lambda) := \inf \{ \mu (S(g)) \mid g\in \Lambda \setminus \{ 1_\Lambda \} \}$$ and call it the *modulus of discreteness* of $\Lambda.$ We say that $\Lambda$ is $\delta$-discrete if $\delta(\Lambda) \geq \delta.$ Note that $\delta(\Lambda)=1$ if and only if the action of $\Lambda$ is essentially free. Moreover, $\delta(\Lambda)>0$ if and only if $\Lambda$ is a discrete subgroup of $G$, i.e., the subgroup topology is discrete. 
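In the toy setting of a finite set $X$ with uniform measure, where the full group is simply the symmetric group of $X$ (an illustrative finite analogue, not the measured setting of this note), the modulus of discreteness of a finitely generated permutation group can be computed by brute force; all names below are ours:

```python
def compose(p, q):
    # (p o q)(x) = p(q(x)); permutations stored as tuples
    return tuple(p[q[x]] for x in range(len(q)))

def generate(gens):
    # BFS closure of <gens> inside Sym(n); finite, so products suffice
    n = len(gens[0])
    identity = tuple(range(n))
    group, frontier = {identity}, [identity]
    while frontier:
        nxt = []
        for g in frontier:
            for s in gens:
                h = compose(s, g)
                if h not in group:
                    group.add(h)
                    nxt.append(h)
        frontier = nxt
    return group

def delta(gens):
    # modulus of discreteness: minimal normalized support of a
    # non-trivial element of the generated group
    n = len(gens[0])
    identity = tuple(range(n))
    supports = [sum(1 for x in range(n) if g[x] != x) / n
                for g in generate(gens) if g != identity]
    return min(supports) if supports else 1.0

a = (1, 0, 2, 3)      # transposition (0 1)
b = (1, 2, 3, 0)      # 4-cycle (0 1 2 3)
print(delta([a, b]))  # 0.5: <a, b> = Sym(4), shortest elements move 2 of 4 points
print(delta([b]))     # 1.0: the cyclic group <b> acts freely
```

The second value illustrates the remark above: $\delta(\Lambda)=1$ exactly when the action of $\Lambda$ is (essentially) free.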
We say that $\Gamma \curvearrowright (X,\mu)$ is *maximal among discrete actions with the same orbits* or just *maximal discrete* if there is no non-trivial discrete subgroup $\Lambda$ of $G$ that contains $\Gamma$ properly. It is natural to ask whether $G$ contains maximal discrete subgroups at all, and whether a given discrete subgroup is itself maximal discrete or at least contained in a maximal discrete subgroup. In order to state our main results, we have to say a few words about MIF groups. A group $\Gamma$ is said to be *mixed identity free* (or MIF for short) if there exists no non-trivial word $w \in \Gamma \ast \mathbb Z$ such that the associated word map $w \colon \Gamma \to \Gamma$, given by evaluation of the variable, satisfies $w(g)=1_\Gamma$ for all $g \in \Gamma.$ MIF groups have been studied extensively in recent years, see for example [@bst; @MR3556970; @MR4193626] and the references therein. Natural examples of MIF groups include free groups and ${\rm PSL}_n(\mathbb Z)$. More generally, by results of Hull and Osin, acylindrically hyperbolic groups with trivial finite radical are MIF [@MR3556970], as are Zariski-dense subgroups of particular simple Lie groups, see for example [@MR728928]. We have three main results covering various natural problems in this general area:

**Theorem 1**. *[\[sequence\]]{#sequence label="sequence"} Let $\Gamma$ be a group and $\Gamma \curvearrowright (X,\mu)$ be a faithful p.m.p. action on a standard probability space, such that at least one of the following conditions is satisfied:*

1. *The group $\Gamma$ is MIF, and the action is weakly mixing and essentially free.*

2. *The group $\Gamma$ is torsion-free and the action is mixing.*

*If $g \in G$ is such that $\mu(S(g))<1/3$, then the group $\langle \Gamma,g \rangle$ is not discrete.
In particular, $\Gamma$ is contained in a maximal discrete subgroup.*

By a result of Jacobson [@MR4193626], there exists an elementary amenable MIF group, so the above theorem implies that the full group of the hyperfinite equivalence relation admits maximal discrete subgroups, see Remark [Remark 21](#jacobs){reference-type="ref" reference="jacobs"}. The second result concerns the discreteness of an arbitrary p.m.p. ergodic action of a MIF group.

**Theorem 2**. *Let $\Gamma$ be a MIF group and let $\Gamma \curvearrowright (X,\mu )$ be a faithful and ergodic p.m.p. action. If $\Gamma$ is discrete, then it is $1/2$-discrete.*

Note that Theorem [Theorem 1](#main1){reference-type="ref" reference="main1"} is not a consequence of this result, since we do not know a priori whether $\langle \Gamma,g\rangle$ is MIF or not. Our third result is concerned with compact actions, where the situation is much easier.

**Theorem 3**. *Let $\Gamma$ be a MIF group and let $\Gamma \curvearrowright (X,\mu )$ be a faithful, p.m.p., ergodic and compact action. If $\Gamma$ is discrete, then the action is essentially free.*

The MIF assumption is essential in Theorems [Theorem 2](#main2){reference-type="ref" reference="main2"} and [Theorem 3](#thm:compact){reference-type="ref" reference="thm:compact"}; in fact, we show that for a group $\Gamma$ acting essentially freely on its profinite completion, there exists an ascending sequence of groups containing $\Gamma$ as a finite index subgroup whose union is dense, see Section [4.1](#profinite){reference-type="ref" reference="profinite"}. An example of a different kind, based on restricted wreath products, can be found in Remark [Remark 19](#rem:example2){reference-type="ref" reference="rem:example2"}. The gist of the proofs is different from the arguments in [@andreasvadim]. This time the contraction principle is derived from Khintchine's inequality and the various mixing conditions that are assumed, respectively. The rough idea of the proof is as follows.
A more or less classical observation from the theory of permutation groups yields that $$\mu(S([g,hgh^{-1}])) \leq 3 \mu(S(g) \cap hS(g)).$$ Now, ergodicity implies that for a given $g \in \Gamma$, there exists $h \in \Gamma$ such that $$\mu(S(g) \cap hS(g)) \leq \mu(S(g))^2 + \varepsilon.$$ Thus, we can construct shorter and shorter elements, provided there exists $g \in \Gamma$ with $\mu(S(g))<1/3$. The remaining assumptions are needed to overcome the problem that the iterated commutators could in fact become trivial in $G$ and thus not be useful for the purpose of proving non-discreteness. This gives the idea of a proof of Theorem [Theorem 1](#main1){reference-type="ref" reference="main1"} and of Theorem [Theorem 2](#main2){reference-type="ref" reference="main2"} with $1/2$ replaced by $1/3$ -- while the case $1/2$ needs a more sophisticated approach. These arguments and the proof of Theorem [Theorem 3](#thm:compact){reference-type="ref" reference="thm:compact"} make use of a finer study of the support of the commutator, see Proposition [Proposition 6](#prop:subset){reference-type="ref" reference="prop:subset"}. We would like to mention that, to the best of our knowledge, there is no known example of a discrete ergodic action of a MIF group that is not essentially free. Thus, we put forward the following question:

**Question 4**. *Is every faithful, discrete, ergodic and p.m.p. action of a countable MIF group essentially free?*

We heard this question in some form from Yair Glasner 15 years ago (for free groups), but maybe its origins go back even further. A positive answer amounts to replacing $1/2$ by $1$ in Theorem [Theorem 2](#main2){reference-type="ref" reference="main2"}. A related question, whose negative answer would be a consequence of a positive answer to the previous question, is the following:
*Does the full group of a hyperfinite equivalence relation contain a discrete free subgroup?*

In [@MR3568978 Section 1.2], a negative answer to this question appeared as a conjecture attributed to the third-named author. Related to this, it might be that every discrete subgroup of the full group of the hyperfinite equivalence relation is amenable. Note that it was proved that discrete free subgroups exist in the group of invertible elements of the mod-$p$ analogue of the hyperfinite II$_1$-factor, see [@MR3795479]. The analogous question for the unitary group of the actual hyperfinite II$_1$-factor is open.

# Contraction from equidistribution for mixing actions {#contraction}

## Preliminaries on actions and mixed identities

Let $\Gamma$ be a group. For any $g,h\in \Gamma$ define $[g,h] := ghg^{-1}h^{-1}$, so that $[g,h]^{-1}=[h,g]$. If $\Gamma\curvearrowright X$ is an action of $\Gamma$ on a set $X$ then for $g\in \Gamma$ define the sets $$S(g) \coloneqq \{ x \in X \mid gx \neq x \} \quad \text{and} \quad F(g) \coloneqq \{ x \in X \mid gx=x \} = X\setminus S(g) .$$ Then $S(g)=S(g^{-1})$ and $S(ghg^{-1})=gS(h)$ (and similarly for $F$ in place of $S$).

**Proposition 6**. *Let $\Gamma \curvearrowright X$ be an action of a group on a set $X$. Let $g,h\in \Gamma$ and let $A_{g,h}= S(g)\cap S(h)$. Then we have $$S([g,h]) \subseteq A_{g,h}\cup gA_{g,h}\cup hA_{g,h} .$$ Moreover, if $\mu$ is a $\Gamma$-invariant probability measure on $X$ then $$\mu (S([g,h])) \leq 3\mu (A_{g,h}) - \mu (gA_{g,h}\cap A_{g,h}) - \mu (hA_{g,h}\cap A_{g,h}) .$$*

*Proof.* We begin with the containment. Suppose that $x\in S([g,h])$, i.e., suppose that $ghg^{-1}h^{-1}x\neq x$. It is clear that this implies that $x\in S(g)\cup S(h) = (S(g)\cap S(h))\cup (S(g)\setminus S(h)) \cup (S(h)\setminus S(g))$.
If $x\in S(g)\cap S(h)$ then we are done, so we just need to consider the two cases (1) $x\in S(g)\setminus S(h)$, and (2) $x\in S(h)\setminus S(g)$. In case (1) we have $x\neq ghg^{-1}h^{-1}x=ghg^{-1}x$, so $x\in S(ghg^{-1})$, and hence $x \in S(g)\cap S(ghg^{-1})=g(S(g)\cap S(h))$, as desired. In case (2), since $S([g,h])=S([h,g])$, the situation is symmetric to case (1) but with the roles of $g$ and $h$ interchanged, and hence we have $x\in h(S(g)\cap S(h))$.

[\[venn4\]]{#venn4 label="venn4"} ![Venn diagram of four sets](venn_diagram.png "fig:"){#venn4 width="220pt"}

The inequality now follows from applying the inclusion-exclusion principle and noting that two of the terms cancel out, see Figure [1](#venn4){reference-type="ref" reference="venn4"}. ◻

**Lemma 7** ([@MR1774423]). *Let $1\geq c>0$ and $1\geq \epsilon >0$ be given and let $n$ be an integer with $n>\frac{1-c}{c\epsilon} + 1$. Let $(Y, \nu )$ be a probability space, let $a\geq c$, and suppose that $A_0,\dots , A_{n-1}$ are measurable subsets of $Y$ each having measure $a$. Then there exist distinct $0\leq i,j < n$ with $\nu(A_i\cap A_j)> a^2(1-\epsilon )$.*

*In particular, if $T:(Y,\nu )\rightarrow (Y,\nu )$ is an invertible measure preserving transformation and $A\subseteq Y$ is measurable with $\nu (A)\geq c$, then there exists some $1\leq j<n$ such that $\nu (T^jA\cap A)>\nu (A)^2(1-\epsilon )$.*

*Proof.* Assume that $\nu (A_i\cap A_j)\leq a^2(1-\epsilon )$ for all distinct $i,j$, and we will show that $n\leq \frac{1-a}{a\epsilon} + 1$, which will give a contradiction since $\frac{1-a}{a\epsilon}+1\leq \frac{1-c}{c\epsilon}+1$. By Cauchy-Schwarz we have $$\begin{aligned} n^2a^2 = \big( \int \sum _{i<n}1_{A_i}\, d\nu \big) ^2 \leq \int \big( \sum _{i<n}1_{A_i}\big) ^2 \, d\nu &= \sum _{i<n}\nu (A_i) + \sum _{i<n}\sum _{j\neq i} \nu (A_i\cap A_j) \\ &\leq na + n(n-1)a^2(1-\epsilon),\end{aligned}$$ and solving for $n$ shows $n\leq \frac{1-a}{a\epsilon} + 1$.
◻ We need the following proposition: **Proposition 8**. *Let $\Gamma$ be a group and suppose there exist non-trivial $w_1,\dots,w_k \in \Gamma \ast \mathbb Z$ such that for every $g \in \Gamma$ there exists $1 \leq i \leq k$ with $w_i(g)=1_{\Gamma}.$ Then, $\Gamma$ satisfies a non-trivial mixed identity.* *Proof.* This is a standard exercise using iterated commutators. See for example the proof of [@MR3451381 Lemma 2.2]. ◻ ## Weakly mixing actions Let's recall that a subset $T \subset \Gamma$ is called *syndetic* if there exists a finite subset $S \subset \Gamma$ with $TS=\Gamma$. Recall Khintchine's lemma [@MR1556883] from 1934, which is a straightforward quantitative form of Poincaré's recurrence theorem. **Lemma 9** (Khintchine). *Let $\Gamma$ be a group and $\Gamma \curvearrowright (X,\mu)$ be a p.m.p. ergodic action on a standard probability space. Let $A,B \subset X$ be measurable subsets. For every $\varepsilon>0$, there exists $g \in \Gamma$ such that $$\mu(A)\mu(B) - \varepsilon < \mu(gA \cap B).$$ Moreover, the set $T:=\{g \in \Gamma \mid \mu(A)\mu(B) - \varepsilon < \mu(gA \cap B) \}$ is syndetic.* It is easy to see that a similar lemma holds when studying the bound $\mu(gA \cap B) < \mu(A)\mu(B) + \varepsilon$, i.e. the set of $g\in \Gamma$ for which it holds is syndetic. It is worth noting that in general, it is not possible to have both bounds at the same time -- just assuming ergodicity. However, assuming that the action is weakly mixing, we have the following result: **Theorem 10** (Bergelson-Rosenblatt, [@MR921351]). *Let $\Gamma$ be a group and $\Gamma \curvearrowright (X,\mu)$ be a p.m.p. weakly mixing action on a standard probability space. Let $A,B \subset X$ be measurable subsets. For every $\varepsilon>0$, the set $$T:=\{g \in \Gamma \mid |\mu(A)\mu(B) - \mu(gA \cap B)|< \varepsilon\}$$ is syndetic.* *Proof.* Apply [@MR921351 Corollary 1.5] to the function $(1-\mu(A))\chi_A - \mu(A)\chi_{A^c}$.
◻ *Proof of Theorem [Theorem 1](#main1){reference-type="ref" reference="main1"}(1).* We denote the generator of $\mathbb Z$ by $t$. Let $\ell := \mu(S(g))$ and $\delta \in (0,1/3 - \ell)$. We define $\alpha_n:= (\ell- \delta)^{2^n}$ and $\gamma_n:= (3 \cdot\ell + \delta)^{2^n}/3$. We will define a sequence of elements $(w_n)_n$ in $\Gamma \ast \mathbb Z$ and denote the natural image of $w_n$ in $\langle \Gamma,g\rangle$ by $v_n:=w_n(g)$. Our construction is such that $\alpha_n < \mu(S(v_n)) < \gamma_n$ for all $n \in \mathbb N.$ In particular, $(v_n)_n$ is a sequence of elements with $v_n \neq 1_X$ for all $n \in \mathbb N$ that converges to $1_X$. It follows that $\langle \Gamma,g\rangle$ is not a discrete subgroup of $G$. We start the inductive construction with $w_0=t$ and $v_0=g$, which clearly satisfies the claim. Let's assume that we constructed $w_n \in \Gamma \ast \mathbb Z$ already. By inductive assumption, $v_n \in \langle \Gamma,g\rangle$ satisfies $\alpha:=\mu(S(v_n)) \in(\alpha_n, \gamma_n).$ We set $A := S(v_n)$ and $$\varepsilon:= \min\{\alpha(\gamma_n - \alpha), \alpha^2 - \alpha_n^2 \}>0.$$ Then, by Theorem [Theorem 10](#equid){reference-type="ref" reference="equid"}, the set $T:=\{h \in \Gamma \mid |\mu(A)^2 - \mu(hA \cap A)|< \varepsilon\}$ is syndetic. Let's consider some $h \in T.$ By Proposition [Proposition 6](#prop:subset){reference-type="ref" reference="prop:subset"}, we have $$\mu(\{x \mid [hv_nh^{-1},v_n]x \neq x\}) < 3 (\alpha^2 + \varepsilon).$$ By our choice of $\varepsilon$, we get $$3 (\alpha^2 + \varepsilon) \leq 3 \alpha^2 + 3\alpha(\gamma_n - \alpha) = 3 \alpha \gamma_n < 3 \gamma_n^2 = \gamma_{n+1}.$$ We would like to set $$w_{n+1}(t) := [hw_n(t)h^{-1},w_n(t)]$$ and obtain $v_{n+1} = [hv_nh^{-1},v_n]$. However, it remains to ensure the lower bound on $\mu(S(v_{n+1}))$.
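For completeness, the identities $\alpha_{n+1}=\alpha_n^2$ and $\gamma_{n+1}=3\gamma_n^2$ used in the estimates above follow directly from the definitions of the two bounding sequences:

```latex
\begin{aligned}
\alpha_{n+1} &= (\ell-\delta)^{2^{n+1}}
              = \bigl( (\ell-\delta)^{2^{n}} \bigr)^{2} = \alpha_n^2 , \\
\gamma_{n+1} &= \frac{(3\ell+\delta)^{2^{n+1}}}{3}
              = 3 \left( \frac{(3\ell+\delta)^{2^{n}}}{3} \right)^{\!2} = 3\gamma_n^2 .
\end{aligned}
```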
We claim that if $\Gamma$ is ${\rm MIF}$, then there exists a choice for $h \in T$ with $$\mu(S(v_{n+1}))=\mu(S([hv_nh^{-1},v_n])) \geq \mu(S(v_n))^2 - \varepsilon.$$ Let's assume for a moment that $g \in G$ has a finite decomposition. It follows that $v_n$ has a finite decomposition, say using group elements from a finite set $E \subset \Gamma.$ Moreover, there exists a finite set of elements $F \subset \Gamma$, such that $\cup_{s \in F} Ts^{-1} = \Gamma$. Let now $q \in \Gamma$ be arbitrary and $s \in F$ such that $h:=qs \in T$. The element $[hv_nh^{-1},v_n]$ acts on $x \in X$ as $he^{-1}_1h^{-1}e_2^{-1}he_3h^{-1}e_4$, where $e_1,e_2,e_3,e_4 \in E \cup \{e\}$. If $x \in hA \cap A$, then $e_3,e_4 \neq e$ and the word $w(t):=te^{-1}_1t^{-1}e_2^{-1}te_3t^{-1}e_4 \in \Gamma \ast \mathbb Z$ is not conjugate to an element of $\Gamma$. Now, if the complement of the support of $[hv_nh^{-1},v_n]$ in $hA \cap A$ has positive measure, this element acts trivially on a set of positive measure, and hence $he^{-1}_1h^{-1}e_2^{-1}he_3h^{-1}e_4=1_\Gamma$ or in other words $w(q)=1_\Gamma$ for $$w(t) = tse^{-1}_1(ts)^{-1}e_2^{-1}tse_3(ts)^{-1}e_4$$ since the action is assumed to be essentially free. Let us assume that this happens for all $h \in T.$ Then, each $q \in \Gamma$ satisfies a mixed identity with constants from $E \cup F$ as above. Since there are only finitely many such identities, Proposition [Proposition 8](#prop:mif){reference-type="ref" reference="prop:mif"} implies that $\Gamma$ is not MIF. Thus, there exists $h \in T$ with $$\mu(S(v_{n+1})) \geq \mu(hA \cap A) > \alpha^2 - \varepsilon \geq \alpha^2_n = \alpha_{n+1}.$$ This finishes the proof in the case where $g$ has a finite decomposition and $\Gamma$ is MIF. Note that it proves a uniform lower bound on $\mu(S(v_n))$ depending only on $\mu(S(g))$ and not on the complexity of the decomposition of $g$.
Using this, we can now approximate an arbitrary $g \in G$ with $\mu(S(g))<1/3$ within $\delta'>0$ by an element $g' \in G$ of finite decomposition in order to obtain a word $w_n(t) \in \Gamma \ast \mathbb Z$ of controlled length and uniform bounds $$\alpha_n < \mu(S(w_n(g'))) < \gamma_n.$$ Taking $\delta'>0$ small enough shows that arbitrarily small non-trivial elements also exist in $\langle \Gamma,g \rangle.$ This finishes the proof. ◻ We would like to record another consequence of Khintchine's lemma for the structure of non-discrete subgroups of full groups. **Proposition 11**. *Let $\Gamma \curvearrowright (X,\mu)$ be an ergodic action and let $\Lambda \subset [\Gamma \curvearrowright (X,\mu)]$ be a non-discrete subgroup containing $\Gamma$. Then, $\{\mu(S(g)) \mid g \in \Lambda \} \subset [0,1]$ is dense in $[0,1]$.* *Proof.* Suppose that there is a $\delta$-gap in $\{\mu(S(g)) \mid g \in \Lambda \} \subset [0,1]$, i.e., there exists $\alpha \in (0,1)$ with $(\alpha,\alpha + \delta) \cap \{\mu(S(g)) \mid g \in \Lambda \} = \varnothing$ and $\alpha \leq 1 - \delta$. Let us also assume that $\alpha$ lies in the closure of $\{\mu(S(g)) \mid g \in \Lambda \}.$ If $\Lambda$ is not discrete, there is an element $g \in \Lambda$ with non-trivial support of measure less than $\delta$.
Let $h \in \Lambda$ with $\mu(S(h)) \in [\alpha-\delta \mu(S(g))/3,\alpha].$ Now, using the variant of Khintchine's lemma from above, we can find $t \in \Gamma$ such that the overlap satisfies $\mu(S(h) \cap tS(g)) < \mu(S(g)) \mu(S(h)) + \delta \mu(S(g))/3$, so that $htgt^{-1}$ has support at most $$\mu(S(h)) + \mu(S(g)) < \alpha + \delta$$ and at least $$\begin{aligned} &\mu(S(h)) + \mu(S(g)) - \mu(S(h) \cap tS(g))\\ &\quad\geq \alpha - \delta \mu(S(g))/3 + \mu(S(g)) - (\mu(S(g)) \mu(S(h)) + \delta \mu(S(g))/3)\\ &\quad\geq \alpha - \delta \mu(S(g))/3 + \mu(S(g)) - \mu(S(g)) (1 - \delta) - \delta \mu(S(g))/3\\ &\quad=\alpha + \delta \mu(S(g))/3.\end{aligned}$$ This is a contradiction to the assumption that there was no value in the interval $(\alpha,\alpha + \delta).$ This finishes the proof. ◻ ## Mixing actions We recall that a p.m.p. action of $\Gamma$ on $(X,\mu)$ is *mixing* if for all measurable subsets $A,B\subseteq X$ and $\varepsilon>0$, there is a finite set $F\subset \Gamma$ such that for every $\gamma\notin F$ we have $$\left|\mu(\gamma A\cap B)-\mu(A)\mu(B)\right|<\varepsilon.$$ For this we follow [@MR3616077 Definition 2.11] -- note that this notion is sometimes called *strong mixing*. Let us recall that factors of weakly mixing actions are weakly mixing. **Lemma 12**. *Let $\Gamma$ be a countable group and let $\Lambda\leq \Gamma$ be a finite index subgroup. Consider a weakly mixing action $\Gamma\curvearrowright X$. Then the restriction of the action to $\Lambda$ is ergodic.* *Proof.* Since $\Lambda$ contains a finite index normal subgroup, we can assume that $\Lambda$ is normal. Let us denote by $\mathcal A$ the $\sigma$-algebra of subsets fixed by $\Lambda$. Note that since $\Lambda$ is normal in $\Gamma$, the algebra $\mathcal A$ is $\Gamma$-invariant. We hence obtain a $\Gamma$-factor on which $\Lambda$ acts trivially. Since this factor is also weakly mixing, the only possibility is that it is the trivial factor, that is, $\mathcal A$ is the trivial $\sigma$-algebra. ◻ **Lemma 13**.
*Consider a weakly mixing action $\mathbb Z\curvearrowright X$ and let $g$ be a measure preserving transformation of $(X,\mu)$. Suppose that there is a positive measure subset $A\subseteq X$ and $z \in \mathbb Z\setminus\{0\}$ such that $gx=z x$ for all $x\in A$. If there is $k\in\mathbb Z \setminus \{0\}$ such that for every $n\in\mathbb Z$, $[g,z^{nk}g z^{-nk}]$ is trivial, then $S(g)$ has full measure.* *Proof.* If $u,v\in\mathrm{Aut}(X,\mu)$ are such that $[u,v]$ is trivial, then $S(v)$ is $u$-invariant. Indeed, if $[u,v]$ is trivial, then for every $x\in X$, $uvu^{-1}x=vx$, that is $uF(v)=F(v)$ and hence $S(v)$ is $u$-invariant. Fix $k\in\mathbb Z \setminus \{0\}$; we will show that if for every $n\in\mathbb Z$, the set $z^{nk}S(g)$ is invariant under $g$, then $g$ has full support. Let $\Lambda$ be the group generated by $z^k$. Denote by $\mathcal A$ the $\sigma$-algebra generated by $\{\lambda S(g);\ \lambda\in\Lambda\}$. Clearly $\mathcal A$ is $\Lambda$-invariant and $g$ acts trivially on it. Consider the $\Lambda$-factor $\pi\colon (X,\mu)\rightarrow (Y,\nu)$ associated to it. Remark that $g$ induces the trivial action on $(Y,\nu)$, that is $\pi(gx)=\pi(x)$ for every $x\in X$. In particular, we have that $\pi(z x)=\pi(x)$ for every $x\in A$. Since by the previous lemma the action of $\Lambda$ is ergodic, for almost every $x\in X$, there is $\lambda\in \Lambda$ such that $\lambda x\in A$. Then $$\pi(z x)=\pi(\lambda^{-1}z \lambda x)=\lambda^{-1}\pi(z \lambda x)=\lambda^{-1}\pi(\lambda x)=\pi(x)$$ that is, $\pi(z x)=\pi(x)$ for almost every $x\in X$. This implies that the action of $\Lambda$ on $Y$ is trivial. However since the action of $\Lambda$ on $X$ is ergodic, this can only happen when $Y$ is the one-point space, that is, when $S(g)$ has full measure. ◻ **Proposition 14**. *Let $\Gamma$ be a countable torsion free group and consider a p.m.p. mixing action of $\Gamma$ on $(X,\mu)$.
Then for every $\varepsilon>0$ and $g\in[\Gamma\curvearrowright (X,\mu)]$ non-trivial and not of full support, there is $h\in\Gamma$ such that $[g,h gh^{-1}]$ is non-trivial and $$\mu(S([g,h gh^{-1}]))<3\mu(S(g))^2+\varepsilon.$$* *Proof.* There is $\lambda\in\Gamma$ and a positive measure subset $A\subseteq X$ such that $gx=\lambda x$ for every $x\in A$. Since the action of $\Lambda:= \langle \lambda \rangle$ is mixing, for every $\varepsilon>0$ there is $n_0\in\mathbb N$ such that for every $n\geq n_0$ we have that $$\left|\mu(S(g)\cap \lambda^nS(g))-\mu(S(g))^2\right|<\frac\varepsilon 3.$$ Proposition [Proposition 6](#prop:subset){reference-type="ref" reference="prop:subset"} then implies that $$\mu(S([g,\lambda^n g\lambda^{-n}]))\leq 3\mu(S(g)\cap \lambda^nS(g))<3\mu(S(g))^2+\varepsilon.$$ Since $g$ is not of full support, Lemma [Lemma 13](#lem: if T commutes then support){reference-type="ref" reference="lem: if T commutes then support"} yields, for $k:=n_0$, some $n\in\mathbb Z\setminus\{0\}$ such that $[g,\lambda^{nk}g\lambda^{-nk}]$ is not trivial. Since $|nk|\geq n_0$, and the mixing estimate above also holds for exponents $\leq -n_0$ by invariance of $\mu$, we can set $h:=\lambda^{nk}$. ◻ *Proof of Theorem [Theorem 1](#main1){reference-type="ref" reference="main1"}(2).* This is now immediate from the previous proposition. We define a sequence $(g_n)_n$ with $g_0=g$ and $g_{n+1} := [g_n,hg_nh^{-1}]$ for suitable $h \in \Gamma.$ If $\mu(S(g_0))<1/3,$ then the sequence $(g_n)_n$ consists of non-trivial elements and converges to the identity in the full group. ◻ *Remark 15*. Let $\Lambda$ be any countable group and set $\Gamma:=\Lambda\times \mathbb Z$. Let us denote by $z$ a generator of $\mathbb Z$. Let us consider the Bernoulli actions of $\Lambda$ on $(\{0,1\}^\Lambda,\nu_\Lambda)$ and of $\Gamma$ on $(\{0,1\}^\Gamma,\nu_\Gamma)$.
Since $\Gamma/\mathbb Z=\Lambda$, we will sometimes consider the former action as a non-faithful action of $\Gamma$ and since $\Lambda\leq \Gamma$ we will consider the latter also as a $\Lambda$ action. Consider now the space $$X:=\{0,1\}^\Lambda\times\{0,1\}^{\Gamma}=\{0,1\}^{\Lambda\sqcup \Gamma}$$ equipped with the product measure $\mu:=\nu_\Lambda\times \nu_\Gamma$. Then $\Gamma$ acts on $(X,\mu)$ diagonally and the action preserves the measure. We remark that the action of $\Lambda$ on $X$ is a generalized Bernoulli shift and that the action of $\Lambda$ on $\Lambda\sqcup \Gamma$ is free. Moreover the product action of $\Gamma$ on $X\times X$ is again a generalized Bernoulli shift and hence it is ergodic. Therefore the action of $\Gamma$ on $X$ is weakly mixing, see [@MR3616077 Theorem 2.25]. Let $A\subset \{0,1\}^\Lambda$ be a positive measure subset and set $B:=A\times \{0,1\}^{\Gamma}$. Clearly $\mu(B)=\nu_\Lambda(A)$ and hence $B\subseteq X$ has positive measure. Remark also that $B$ is invariant by the $\mathbb Z$-action and denote by $g \in [\Gamma\curvearrowright (X,\mu)]$ the restriction of the generator $z$ of $\mathbb Z$ on the $\mathbb Z$-invariant set $B$. Fix now $h=(\lambda,z^k)\in\Gamma$. Then for every $x\in \lambda A$, we have $$h gh^{-1}(x,y)=h g(\lambda^{-1}x,z^{-k}y)=h (\lambda^{-1}x,z^{1-k}y)=(x,zy).$$ If $x\notin \lambda A$, then $\lambda^{-1} x\notin A$ and hence $h gh^{-1}(x,y)=(x,y)$. That is, the element $h gh^{-1}$ is the restriction of the generator $z$ of $\mathbb Z$ on the $\mathbb Z$-invariant set $h B=\lambda A\times \{0,1\}^\Gamma$. In particular, $g$ and $h gh^{-1}$ commute. Therefore the strategy we use in our results cannot be applied to this action. Observe that the action of $\Gamma$ is weakly mixing and not mixing and that $\Gamma$ is not MIF, so neither of our two results applies. # Ergodic actions of MIF groups The aim of this section is to prove Theorem [Theorem 2](#main2){reference-type="ref" reference="main2"}.
We start with some preparations. Let $1>a_0>0$. In the proof of Lemma [Lemma 17](#lem:MIFrecurrence){reference-type="ref" reference="lem:MIFrecurrence"} below we will use some properties of the family of sequences $(a_{m,\delta} )_{m=0}^{\infty}$ defined for $\delta \in [0,1)$ by $$\begin{aligned} a_{0,\delta} &=a_0 \\ \text{and } \ a_{m+1,\delta}&=\frac{1}{2-a_{m,\delta}(1-\delta )}.\end{aligned}$$ We write $a_m$ for $a_{m,0}$. **Proposition 16**. *Let $a_{m,\delta}$ and $a_m$ be defined as above. Then for all $m\geq 0$ we have:* 1. *$a_m=1- \frac{1-a_0}{m(1-a_0)+1}$, and hence $\lim _{m\rightarrow\infty} a_m = 1$.* 2. *$0<a_{m,\delta _1}\leq a_{m,\delta _0} <1$ whenever $1> \delta _1\geq \delta _0\geq 0$.* 3. *$\lim _{\delta \rightarrow 0^{+}} a_{m,\delta}=a_m$.* 4. *If $N_0\geq 0$ is an integer and $(b_m)_{m=0}^{N_0}$ is a sequence with $1\geq b_0\geq a_0$ and $1\geq b_{m+1}\geq \frac{1}{2-b_m(1-\delta)}$ for all $0\leq m<N_0$, then $b_m\geq a_{m,\delta}$ for all $0\leq m\leq N_0$.* 5. *If $a_0\leq \frac{1}{1+\sqrt{\delta}}$ then $a_{0,\delta}\leq a_{1,\delta}\leq a_{2,\delta}\leq \cdots$, and $\lim _{m\rightarrow\infty}a_{m,\delta} = \frac{1}{1+\sqrt{\delta}}$.* *Proof.* Items (1), (2), (3), and (4) all follow by induction on $m$. For instance, for (1), writing $c_m:=m(1-a_0)+1$ the inductive step reads $2-a_m=1+\frac{1-a_0}{c_m}=\frac{c_{m+1}}{c_m}$, and hence $a_{m+1}=\frac{1}{2-a_m}=\frac{c_m}{c_{m+1}}=1-\frac{1-a_0}{c_{m+1}}$, as claimed. For (5), assume that $a_0\leq \frac{1}{1+\sqrt{\delta}}$. Then induction on $m$ shows that $a_{m,\delta}\leq \frac{1}{1+\sqrt{\delta}}$ for all $m$, i.e., $\delta \leq \frac{(1-a_{m,\delta})^2}{a_{m,\delta}^2}$ for all $m$. We have $a_{m,\delta}\leq a_{m+1,\delta}$ if and only if $(1-\delta)a_{m,\delta}^2-2a_{m,\delta}+1\geq 0$, which is seen to hold for all $m$ using that $\delta \leq \frac{(1-a_{m,\delta})^2}{a_{m,\delta}^2}$. The sequence $(a_{m,\delta})_{m\geq 0}$ is monotone nondecreasing and its limit $L_{\delta}$ is bounded above by $\frac{1}{1+\sqrt{\delta}}$ and satisfies $L_{\delta}=\frac{1}{2-L_{\delta}(1-\delta)}$, and hence $L_{\delta} = \frac{1}{1+\sqrt{\delta}}$. ◻ **Lemma 17**.
*Let $\Gamma\curvearrowright (X,\mu )$ be a p.m.p. action of a MIF group $\Gamma$, and let $g$ be a nonidentity element of $\Gamma$. Given $\epsilon >0$ and a natural number $N$, there exists an element $k\in \Gamma$ with $\mu (kS(g)\triangle S(g))<\epsilon$ such that $[k,g]$ has order at least $N$ and does not commute with $g$.* *Proof.* We may assume that $0<\mu (S(g))<1$. Let $a_0 := \mu (S(g))$ and let $a_m$ and $a_{m,\delta}$ be defined as above. Fix a natural number $N_0$ with $a_{N_0}>1 -\epsilon /2$. By parts (3) and (5) of Proposition [Proposition 16](#prop:amdelta){reference-type="ref" reference="prop:amdelta"} we can find some $1>\delta >0$ such that $a_0=a_{0,\delta}\leq a_{1,\delta}\leq\cdots \leq a_{N_0,\delta}$ and $a_{N_0,\delta}>1-\epsilon /2$. Let $N_1>N$ be larger than $\frac{1-\mu (S(g))}{\mu (S(g))\delta} + 1$. For $h\in \Gamma$ and positive integers $i_0,i_1, i_2,\dots$ we define $w_{i_0}(h):= h^{i_0}$, and $$w_{i_0,\dots ,i_{m-1},i_m}(h) := [w_{i_0,\dots , i_{m-1}}(h),g]^{i_m}$$ for each $m\geq 1$. Since $\Gamma$ is MIF, using Proposition [Proposition 8](#prop:mif){reference-type="ref" reference="prop:mif"}, we may find some $h\in \Gamma$ such that for all $i_0,\dots , i_{N_0}\in \{ 1,\dots , N_1 \}$ the group element $$[w_{i_0,\dots , i_{N_0}}(h),g] = [[\cdots [[[h^{i_0},g]^{i_1},g]^{i_2},g]^{i_3}\cdots ,g]^{i_{N_0}},g]$$ is nontrivial. In particular, given any choice of $i_0,\dots , i_{N_0-1}\in \{ 1,\dots , N_1 \}$, each of the group elements $h,[w_{i_0}(h),g], [w_{i_0,i_1}(h) ,g],\dots , [w_{i_0,\dots , i_{N_0-1}}(h),g]$ has order strictly greater than $N_1$. For each non-null subset $B$ of $X$ let $\mu _B$ denote the normalized restriction of $\mu$ to $B$. 
We will recursively define sets $X_0\supseteq X_1\supseteq \cdots \supseteq X_{N_0}\supseteq S(g)$, and $j_0,j_1,\dots ,j_{N_0 -1}\in \{ 1,\dots ,N_1 \}$ such that $X_m$ contains $S([w_{j_0,\dots , j_{m-1}}(h),g])$, and $$\mu _{X_{m+1}}(S(g)) >\frac{1}{2-\mu _{X_m}(S(g))(1-\delta )}$$ for all $m=0,\dots , N_0-1$. We define $X_0=X$. By Lemma [Lemma 7](#lem:recurrence){reference-type="ref" reference="lem:recurrence"} and our choice of $N_1$ we can find some $1\leq j_0\leq N_1$ such that $\mu _{X_0} (h^{j_0}S(g)\cap S(g)) > \mu _{X_0}(S(g))^2(1-\delta )$. Let $X_1=S(g)\cup h^{j_0}S(g)\subseteq X_0$, so that $\mu _{X_0}(X_1)<2\mu _{X_0}(S(g))-\mu _{X_0}(S(g))^2(1-\delta )$ and $$\mu _{X_1}(S(g)) > \frac{\mu (S(g))}{2\mu _{X_0}(S(g))-\mu _{X_0}(S(g))^2(1-\delta )} = \frac{1}{2-\mu _{X_0}(S(g))(1-\delta )} .$$ The set $X_1$ contains both $S(g)$ and $h^{j_0}S(g)$, hence it contains $S([h^{j_0},g])=S([w_{j_0}(h),g])$. Since $X_1$ contains $S([w_{j_0}(h),g])$ it is invariant under $[w_{j_0}(h),g]$. Therefore, we may apply Lemma [Lemma 7](#lem:recurrence){reference-type="ref" reference="lem:recurrence"} to the transformation $[w_{j_0}(h),g]$ of $(X_1,\mu _{X_1})$ to find some $1\leq j_1\leq N_1$ such that $\mu _{X_1}([w_{j_0}(h),g]^{j_1}S(g)\cap S(g))>\mu _{X_1}(S(g))^2(1-\delta )$. Let $X_2 = S(g)\cup [w_{j_0}(h),g]^{j_1}S(g) \subseteq X_1$, so that $\mu _{X_1} (X_2) <2\mu _{X_1}(S(g))-\mu _{X_1}(S(g))^2(1-\delta )$ and $$\begin{aligned} \mu _{X_2}(S(g)) = \frac{\mu _{X_1}(S(g))}{\mu _{X_1}(X_2)}&>\frac{\mu _{X_1}(S(g))}{2\mu _{X_1}(S(g))-\mu _{X_1}(S(g))^2(1-\delta)} \\ &= \frac{1}{2-\mu _{X_1}(S(g))(1-\delta )}.\end{aligned}$$ The set $X_2$ contains both $S(g)$ and $w_{j_0,j_1}(h)S(g)$ hence it contains $S([w_{j_0,j_1}(h),g])$.
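The containment used repeatedly here follows from the elementary fact $S(uv)\subseteq S(u)\cup S(v)$ (if $x$ lies in neither support, then $uvx=ux=x$), applied to the two factors of the commutator:

```latex
% For u := w_{j_0,\dots ,j_m}(h), write the commutator as a product of two factors:
[u,g] = (ugu^{-1}) \cdot g^{-1},
\qquad\text{so}\qquad
S([u,g]) \subseteq S(ugu^{-1}) \cup S(g^{-1}) = uS(g) \cup S(g) .
```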
We continue this process: in general, if $2\leq m<N_0$ and we have already defined $j_0,\dots , j_{m-1}$, and $X_0\supseteq \cdots \supseteq X_{m}\supseteq S(g)$ with $X_{m}\supseteq S([w_{j_0,\dots , j_{m-1}}(h),g])$, we apply Lemma [Lemma 7](#lem:recurrence){reference-type="ref" reference="lem:recurrence"} again to find some $1\leq j_m\leq N_1$ such that $$\mu _{X_m}(w_{j_0,\dots ,j_m}(h)S(g)\cap S(g)) > \mu _{X_m}(S(g))^2(1-\delta ).$$ We take $X_{m+1}= S(g)\cup w_{j_0,\dots ,j_m}(h)S(g)\subseteq X_m$ so that $\mu _{X_m}(X_{m+1}) < 2\mu _{X_m}(S(g))- \mu _{X_m}(S(g))^2(1-\delta )$ and $$\begin{aligned} \mu _{X_{m+1}}(S(g)) = \frac{\mu _{X_m}(S(g))}{\mu _{X_m}(X_{m+1})} &>\frac{\mu _{X_m}(S(g))}{2\mu _{X_m}(S(g))-\mu _{X_m}(S(g))^2(1-\delta)} \\ &= \frac{1}{2-\mu _{X_m}(S(g))(1-\delta )} .\end{aligned}$$ Since $X_{m+1}$ contains both $S(g)$ and $w_{j_0,\dots ,j_m}(h)S(g)$, it contains $S([w_{j_0,\dots , j_{m}}(h),g])$. We let $k:=w_{j_0,j_1,\dots , j_{N_0-1}}(h)$, so that both $k$ and $[k,g]$ have order at least $N_1>N$, and $X_{N_0}=S(g)\cup kS(g)$. It remains to show that our choice of $\delta$ implies that $\mu (kS(g)\triangle S(g))<\epsilon$. By (4) of Proposition [Proposition 16](#prop:amdelta){reference-type="ref" reference="prop:amdelta"} we have $\mu _{X_{m}}(S(g))\geq a_{m,\delta}$ for all $0\leq m\leq N_0$, and hence $\mu _{X_{N_0}}(S(g))\geq a_{N_0,\delta}>1-\epsilon /2$. Since $X_{N_0}=S(g)\cup kS(g)$, this means that $\mu (S(g)\cup kS(g))< \mu (S(g)) + \epsilon /2$ and hence $\mu (S(g)\triangle kS(g))<\epsilon$. ◻ *Proof of Theorem [Theorem 2](#main2){reference-type="ref" reference="main2"}.* If $t=1$ then there is nothing to prove, so we may assume that $t<1$. Fix $1\geq \epsilon _0>0$, and find $0< \epsilon <\epsilon _0$ so small that $t(1-2\epsilon ) - t^2(1+\epsilon )^3 >\epsilon /\epsilon _0$. 
By Lemma [Lemma 7](#lem:recurrence){reference-type="ref" reference="lem:recurrence"} there exists an integer $N >0$ such that if $T$ is a measure preserving transformation on a probability space $(Y,\nu )$ and $C$ is a measurable subset of $Y$ with $\nu (C)\geq 1/6$ then there is some $1\leq i < N$ such that $\nu (T^iC\cap C)>\nu (C)^2(1-\epsilon ^2)$. Let $g_0\in \Gamma\setminus \{ 1 \}$ be such that $t\leq \mu (S(g_0))<t(1 + \epsilon )$. Then, by Lemma [Lemma 17](#lem:MIFrecurrence){reference-type="ref" reference="lem:MIFrecurrence"} we can find an element $g\in \Gamma$ of order greater than $N$ with $\mu (S(g))<t(1+\epsilon )$. By ergodicity of the action $\Gamma \curvearrowright (X,\mu )$, there is a syndetic subset $D$ of $\Gamma$ such that for all $k\in D$ we have $\mu (S(g)\cap kS(g)) < \mu (S(g))^2(1+\epsilon )<t^2(1+\epsilon )^3$. For a non-null $B\subseteq X$ let $\mu _B$ denote the normalized restriction of $\mu$ to $B$. *Claim 18*. There exists some $k\in D$ such that for all $1\leq i,j<N$ the group elements $g^i$ and $kg^jk^{-1}$ do not commute. Moreover, for every such $k$ we have both $$\begin{aligned} \mu _{S(g)}(S(g)\cap S(kgk^{-1})) &\geq 1/6 \\ \text{ and } \ \mu _{S(kgk^{-1})}(S(g)\cap S(kgk^{-1}))&\geq 1/6.\end{aligned}$$ *Proof of Claim [Claim 18](#claim:MIFnoncommute){reference-type="ref" reference="claim:MIFnoncommute"}.* Fix a finite subset $F_D\subseteq \Gamma$ such that $DF_D=\Gamma$. Suppose no such $k$ exists as in the first statement in the claim. Then for every $h\in \Gamma$ there exists some $s\in F_D$ and some $1\leq i,j< N$ such that $h$ satisfies the nontrivial mixed identity $[g^i,hs^{-1}g^jsh^{-1}] =e$; the mixed identity is nontrivial since $g$ has order greater than $N$. Since there are only finitely many such triples $(s,i,j)$, Proposition [Proposition 8](#prop:mif){reference-type="ref" reference="prop:mif"} shows that $\Gamma$ satisfies a nontrivial mixed identity, a contradiction.
For the moreover statement, given such a $k$, Proposition [Proposition 6](#prop:subset){reference-type="ref" reference="prop:subset"} applied to the non-commuting elements $g$ and $kgk^{-1}$ shows that $$t\leq \mu (S([g,kgk^{-1}]))\leq 3\mu (S(g)\cap S(kgk^{-1})),$$ and hence $\frac{\mu (S(g)\cap S(kgk^{-1}))}{\mu (S(g))}\geq\frac{t}{3t(1+\epsilon )} \geq 1/6$. Since $S(kgk^{-1})=kS(g)$ we likewise have $$\frac{\mu (S(g)\cap S(kgk^{-1}))}{\mu (S(kgk^{-1}))}\geq 1/6.$$ This finishes the proof of Claim [Claim 18](#claim:MIFnoncommute){reference-type="ref" reference="claim:MIFnoncommute"}. ◻ Fix now $k\in D$ as in Claim [Claim 18](#claim:MIFnoncommute){reference-type="ref" reference="claim:MIFnoncommute"} and let $h=kgk^{-1}$. Since $S(g)$ is invariant under the cyclic group $\langle g\rangle$, by applying our choice of $N$ to the action $\langle g\rangle \curvearrowright (S(g), \mu _{S(g)})$ and the subset $S(g)\cap S(h)$ of $S(g)$, we obtain some $1\leq i<N$ such that $\mu _{S(g)}(g^i (S(g)\cap S(h))\cap S(g)\cap S(h)) \geq \mu _{S(g)}(S(g)\cap S(h))^2 (1- \epsilon ^2)$, i.e., $$\mu (g^i (S(g)\cap S(h))\cap S(g)\cap S(h)) \geq \frac{\mu (S(g)\cap S(h))^2}{\mu (S(g))} (1- \epsilon ^2) \geq \frac{\mu (S(g)\cap S(h))^2}{t} (1-\epsilon ) .$$ Likewise, applying our choice of $N$ to the action $\langle h\rangle \curvearrowright (S(h),\mu _{S(h)})$ and the subset $S(g)\cap S(h)$, we obtain some $1\leq j< N$ such that $$\mu (h^j (S(g)\cap S(h))\cap S(g)\cap S(h)) \geq \frac{\mu (S(g)\cap S(h))^2}{t} (1-\epsilon ) .$$ Since $S(g^i)\subseteq S(g)$ and $t\leq \mu (S(g^i)) \leq \mu (S(g))\leq t(1+\epsilon )$, we have $\mu (S(g^i) \triangle S(g))< \epsilon t$. Likewise, $S(h^j)\subseteq S(h)$ and $\mu (S(h^j)\triangle S(h))<\epsilon t$. 
Therefore $$\begin{aligned} \label{eqn:gn} \mu (g^i (S(g^i)\cap S(h^j))\cap S(g^i)\cap S(h^j)) &\geq \frac{\mu (S(g^i)\cap S(h^j))^2}{t} (1-\epsilon ) - 4\epsilon t \\ \label{eqn:hm} \mu (h^j (S(g^i)\cap S(h^j))\cap S(g^i)\cap S(h^j)) &\geq \frac{\mu (S(g^i)\cap S(h^j))^2}{t} (1-\epsilon ) - 4\epsilon t\end{aligned}$$ Let $A=S(g^i)\cap S(h^j)$. Observe that $\mu (A)\leq \mu (S(g)\cap S(h)) + 2\epsilon t < t^2(1+\epsilon )^3 + 2\epsilon t$, so by our choice of $\epsilon$ we have $$\label{eqn:tA} \frac{\epsilon}{\epsilon _0} < t- \mu (A) .$$ Our choice of $k$ ensures that $g^i$ and $h^j$ do not commute. Therefore, applying Proposition [Proposition 6](#prop:subset){reference-type="ref" reference="prop:subset"} to $[g^i,h^j]$ and using [\[eqn:gn\]](#eqn:gn){reference-type="eqref" reference="eqn:gn"} and [\[eqn:hm\]](#eqn:hm){reference-type="eqref" reference="eqn:hm"}, we obtain $$\begin{aligned} t \leq \mu (S([g^i,h^j])) &\leq 3\mu (A) - 2\frac{\mu (A)^2}{t} (1-\epsilon ) + 8\epsilon t \\ &=3\mu (A) - 2\frac{\mu (A)^2}{t} + 2\frac{\mu (A)^2}{t}\epsilon + 8\epsilon t .\end{aligned}$$ Multiplying this inequality by $t$ (which by assumption is strictly positive) and rearranging gives $t^2 - 3\mu (A)t + 2\mu (A)^2 \leq 2\mu (A)^2\epsilon + 8\epsilon t^2 \leq 10\epsilon$ and hence $$\begin{aligned} (t-2\mu (A)) (t-\mu (A)) & \leq 10\epsilon .\end{aligned}$$ If $t-2\mu (A)> 0$, then multiplying [\[eqn:tA\]](#eqn:tA){reference-type="eqref" reference="eqn:tA"} by $t-2\mu (A)$ shows that $(t-2\mu (A))\frac{\epsilon}{\epsilon _0} < 10 \epsilon$, and hence $$t\leq 2\mu (A) + 10\epsilon _0 \leq 2t^2(1+\epsilon )^3 + 4\epsilon t + 10 \epsilon _0 .$$ If $t- 2\mu (A)\leq 0$, then this last inequality holds trivially. In either case, this last displayed inequality holds, so since $\epsilon _0>0$ was arbitrary, and since $\epsilon \rightarrow 0$ as $\epsilon _0\rightarrow 0$, it follows that $t\leq 2t^2$, and therefore $t\geq 1/2$. ◻ *Remark 19*.
Here is a first example, which shows that the assumption that $\Gamma$ is MIF is necessary in Theorem [Theorem 2](#main2){reference-type="ref" reference="main2"}. Given any infinite group $H$, consider a $\mathbb Z/2\mathbb Z$-vector space $V$ equipped with a basis $(\delta _h)_{h\in H}$ that is in bijection with $H$. The left translation action of $H$ permutes this basis, inducing an action by automorphisms on $V$, and we identify $V$ and $H$ naturally with subgroups of the associated semidirect product $V\rtimes H$ (which is isomorphic to the restricted regular wreath product $(\mathbb Z/2\mathbb Z)\wr H$). Independently assign to each basis element a uniformly distributed label in $[0,1]$, and for $t\in [0,1]$ let $V_t$ be the (random) subspace of $V$ generated by those basis elements with label at most $t$. Then $V_t$ is an ergodic invariant random subgroup of $V\rtimes H$, hence by [@MR3165420], $V_t$ is the stabilizer distribution of some ergodic p.m.p. action of $V\rtimes H$. Under this action, fixed point sets of group elements not lying in $V$ have measure zero, and the measure of the fixed point set of a vector $v\in V$ of the form $v=\sum _{h\in Q}\delta _h$ is exactly the probability that $v$ belongs to $V_t$, which is $t^{|Q|}$. Thus, when $t>1/2$ this gives an ergodic action of $V\rtimes H$ for which $V\rtimes H$ is discrete, but not $1/2$-discrete. # Discrete groups and compact actions A p.m.p. action $\Gamma\curvearrowright (X,\mu )$ is *compact* if the image of $\Gamma$ in $\mathrm{Aut}(X,\mu )$ is precompact in the weak topology, i.e., the usual Polish group topology on $\mathrm{Aut}(X,\mu )$. ## Profinite actions {#profinite} Let $\Gamma$ be a discrete group and let $(\Gamma_n)_n$ be a descending sequence of finite index normal subgroups with $\Gamma=\Gamma_0$ and $\bigcap_n \Gamma_n = \{e\}$. We consider the corresponding profinite completion $\widehat \Gamma$ with its Haar measure $\mu_H$.
We denote the closure of $\Gamma_n$ in $\widehat \Gamma$ by $\widehat \Gamma_n$. Let $k:= [\Gamma:\Gamma_n]$ and let $g_1,\dots,g_k$ be a set of representatives of $\Gamma_n$-cosets, i.e. $\Gamma = \bigsqcup_{i=1}^k g_i\Gamma_n$. It follows that $$\widehat \Gamma = \bigsqcup_{i=1}^k g_i \widehat \Gamma_n = [k] \times \widehat\Gamma_n.$$ Let $G:=[\Gamma \curvearrowright (\widehat{\Gamma},\mu_H)]$, and let $T\in G$ be compatible with this decomposition, that is, $T$ acts by left multiplication with $h_i \in \Gamma$ on $g_i\widehat \Gamma_n$. Then $T$ is identified with the self-map of $[k] \times \widehat\Gamma_n$ that sends the $i$-th copy of $\widehat \Gamma_n$ to the $j$-th copy of $\widehat \Gamma_n$ by left-multiplication with $\gamma \in \Gamma_n$ for the unique $j \in [k]$, $\gamma \in \Gamma_n$ with $h_ig_i = g_j \gamma$. As this is in particular true for $T\in\Gamma$, we obtain a chain of inclusions $$\Gamma \leq \Gamma_n^\vee:= (\Gamma_n)^k \rtimes {\rm Sym}(k) \leq G=[\Gamma \curvearrowright (\widehat{\Gamma},\mu_H)],$$ where the group in the middle is the permutational wreath product. In particular, $\Gamma$ is contained in a discrete subgroup which contains elements whose support has measure $1/k$. The sequence of wreath products $(\Gamma_n^\vee)_n$ is an increasing sequence of subgroups of $G$ containing $\Gamma$, whose union is dense in $G$: indeed, for $T\in G$ and $\varepsilon> 0$ arbitrary we can find $n$ large enough and $T'\in G$ such that $d(T,T')<\varepsilon$ and $T'$ decomposes over $\widehat \Gamma = \bigsqcup_{i=1}^k g_i \widehat\Gamma_n$. By the analysis above $T'$ belongs to $(\Gamma_n)^k \rtimes {\rm Sym}(k)$. Let's summarize the result in the following proposition: **Proposition 20**. *Let $\Gamma$ be a countable residually finite group and consider the action on its profinite completion $\Gamma \curvearrowright (\widehat{\Gamma},\mu_H)$.
There exists an increasing chain of discrete subgroups $\Gamma =: \Lambda_0 \leq \Lambda_1 \leq \cdots \leq G:= [\Gamma \curvearrowright (\widehat{\Gamma},\mu_H)]$ whose union is dense.* *Remark 21*. Let us now come back to the example of Jacobson mentioned already in the introduction. Jacobson showed that there exists a group $\Gamma$, which is elementary amenable and MIF, see [@MR4193626]. It was shown in [@MR2507115] that this group is also residually finite. Thus, we obtain that the full group $G$ of the unique hyperfinite equivalence relation contains two $1$-discrete copies of $\Gamma$, one contained in a maximal discrete subgroup (by Theorem [Theorem 1](#main1){reference-type="ref" reference="main1"} applied to the Bernoulli action) and the other contained in an infinite chain of discrete overgroups, whose union is dense (by Proposition [Proposition 20](#dense){reference-type="ref" reference="dense"} applied to the action on the profinite completion). ## Compact actions This section contains the proof of Theorem [Theorem 3](#thm:compact){reference-type="ref" reference="thm:compact"}. By assumption, the infimum $$t:= \inf \{ \mu (S(g)) : g\in \Gamma\setminus \{ e \} \}$$ is strictly greater than $0$. Our goal is to show that $t=1$. In what follows we will identify $\Gamma$ with its image in $\mathrm{Aut}(X,\mu )$. Let $K$ denote the closure of $\Gamma$ in $\mathrm{Aut}(X,\mu )$, which is compact by assumption. We begin with the following claim. *Claim 22*. For every $g_0\in \Gamma \setminus \{ 1 \}$, $\epsilon >0$, and natural number $n$, there exists some $g_1\in \Gamma$ of order greater than $n$ such that $\mu (S(g_1)\setminus S(g_0))<\epsilon$. *Proof of Claim [Claim 22](#claim:largeorder){reference-type="ref" reference="claim:largeorder"}.* Let $V$ be an open identity neighborhood in $K$ satisfying $$\mu (kS(g_0)\triangle S(g_0))<\epsilon$$ for all $k\in V$. Then there must be some $k\in V\cap \Gamma$ such that $[k,g_0]$ has order greater than $n$.
For suppose otherwise. Since $K$ is compact and $\Gamma$ is dense in $K$ there is some finite $F\subseteq \Gamma$ such that $VF=K$. Then for every $h\in \Gamma$ there is some $s\in F$ with $hs^{-1}\in V\cap \Gamma$ and hence some $1\leq i \leq n$ such that $[hs^{-1},g_0]^i = e$, so $\Gamma$ satisfies a nontrivial mixed identity by Proposition [Proposition 8](#prop:mif){reference-type="ref" reference="prop:mif"}, a contradiction. Let $k\in V\cap \Gamma$ be such that $g_1:= [k,g_0]$ has order greater than $n$. Then $S(g_1)$ is contained in $kS(g_0)\cup S(g_0)$, hence $\mu (S(g_1)\setminus S(g_0))\leq \mu (kS(g_0)\setminus S(g_0))<\epsilon$. This finishes the proof of Claim [Claim 22](#claim:largeorder){reference-type="ref" reference="claim:largeorder"}. ◻ Fix $1\geq \epsilon >0$, and let $g_0\in \Gamma\setminus \{ 1 \}$ be such that $\mu (S(g_0))<t + \epsilon$. Since $K$ is compact we can find a $K$-conjugation-invariant identity neighborhood $U$ in $K$ such that $\mu (kS(g_0)\triangle S(g_0))<\epsilon$ for all $k\in U$. By compactness again there is a natural number $n\geq 1$ such that for every $k\in K$ there is some $1\leq i< n$ with $k^i\in U$. By Claim [Claim 22](#claim:largeorder){reference-type="ref" reference="claim:largeorder"} there exists some $g_1\in \Gamma$ of order greater than $n$ such that $\mu (S(g_1)\setminus S(g_0))<\epsilon$. Let $1\leq i< n$ be such that $g_1^i \in U$ and let $g:=g_1^i$. We have $S(g)\subseteq S(g_1)$, and since $g\neq e$ we have $t\leq \mu (S(g))$, and therefore $\mu (S(g)\triangle S(g_0))< 2\epsilon$. It follows that for all $k\in U$ we have $\mu (kS(g)\triangle S(g))<5\epsilon$. By ergodicity there exists a syndetic subset $D$ of $\Gamma$ such that for all $k\in D$ we have $$\mu (S(g)\cap kS(g)) < \mu (S(g))^2+\epsilon < t^2 + 9\epsilon .$$ Then, arguing as in Claim [Claim 18](#claim:MIFnoncommute){reference-type="ref" reference="claim:MIFnoncommute"}, there exists some $k\in D$ such that $g$ and $kgk^{-1}$ do not commute. 
Let $h := kgk^{-1}$. Then $S(h)=kS(g)$, so $\mu (S(h))=\mu (S(g))$ and $\mu (S(g)\cap S(h) ) < t^2 +9\epsilon$. Since $U$ is conjugation invariant and $g$ belongs to $U$, both of the conjugates $kgk^{-1}$ and $k^{-1}gk$ belong to $U$ as well, hence $$\begin{aligned} \mu (hS(g)\triangle S(g)) &= \mu (kgk^{-1}S(g)\triangle S(g) )<5\epsilon , \\ \text{and } \ \mu (gS(h)\triangle S(h)) &= \mu (k^{-1}gkS(g)\triangle S(g))<5\epsilon .\end{aligned}$$ Letting $A:=S(g)\cap S(h)$, it follows that $\mu (g A \cap A ) \geq \mu (A) - 5\epsilon$ and $\mu (hA\cap A)\geq \mu (A)-5\epsilon$. Since $g$ and $h$ do not commute we have $t\leq \mu (S([g,h]))$, so applying Proposition [Proposition 6](#prop:subset){reference-type="ref" reference="prop:subset"} we obtain $$\begin{aligned} t\leq \mu (S([g,h])) &\leq 3\mu (A) - \mu (gA\cap A)-\mu (hA\cap A) \\ &\leq \mu (A) + 10\epsilon \\ &\leq t^2 + 19\epsilon .\end{aligned}$$ Since $\epsilon >0$ was arbitrary this shows that $t\leq t^2$, hence $t=1$. This finishes the proof of Theorem [Theorem 3](#thm:compact){reference-type="ref" reference="thm:compact"}. **Corollary 23**. *Let $\Gamma=\Gamma_0\geq \Gamma_1\geq \Gamma_2\cdots$ be a chain of finite index subgroups of a MIF group $\Gamma$. For $g\in \Gamma$ let $$t_g := \lim _{n\rightarrow\infty} \frac{ | \{ x\in \Gamma/\Gamma_n \, : \, gx = x \} |}{[\Gamma:\Gamma_n]} .$$ Suppose that there is some nonidentity element $h\in \Gamma$ for which $t_h >0$. Then for any $\epsilon >0$ there exists some nonidentity element $g\in \Gamma$ such that $t_g>1-\epsilon$.* Thus, in terms of residual chains of finite index subgroups, we arrive at a dichotomy: Either the chain $(\Gamma_n)_n$ is a Farber chain (see [@MR1625742 Theorem 0.3]), i.e. 
$t_g=0$ for all $g\neq 1$, or the opposite holds: for any $\varepsilon>0$, there exists $g \in \Gamma$ such that the probability that $g$ is contained in a random conjugate of $\Gamma_n$ is at least $1-\varepsilon$.

Theorem [Theorem 3](#thm:compact){reference-type="ref" reference="thm:compact"} implies that for every nonfree ergodic compact action of an MIF group $\Gamma$ on a probability space $(X,\mu )$, there exists a sequence $(g_n)_{n\geq 0}$ of nonidentity elements of $\Gamma$ satisfying $\mu (F(g_n))\rightarrow 1$. It is unclear whether this sequence $(g_n)_{n\geq 0}$ can be chosen independently of the nonfree ergodic compact action of $\Gamma$, even in the case where $\Gamma$ is a free group. Let us record this as a question:

**Question 24**. *Let $\Gamma$ be a nonabelian free group. Does there exist a sequence $(g_n)_{n\geq 0}$ of nonidentity elements of $\Gamma$ such that for every ergodic compact p.m.p. action $\Gamma\curvearrowright (X,\mu )$ which is not essentially free we have $\mu (\{ x: g_nx=x \} )\rightarrow 1$ as $n\rightarrow \infty$?*

This should be compared to the results in [@MR3043070]. By results from that paper, there exists a sequence $(g_n)_{n \geq 0}$ in every non-abelian free group such that $g_n \to 1$ in the weak topology for every compact action (regardless of whether the action is essentially free). A potential strategy to arrive at a positive answer to Question [Question 24](#convseq){reference-type="ref" reference="convseq"} is to enumerate all non-trivial elements of $F_2$ in a sequence $(h_n)_{n \geq 0}$ and consider a sequence $(g_n)_{n \geq 0}$ as above. One could then start taking commutators of conjugates in some determined iteration scheme (compare to the proof of the main result in [@MR3043070]).
# A case study building on work of Choi and Blackadar

In this section, we analyze a natural example of a free subgroup of the full group of a hyperfinite equivalence relation provided by a combination of the work of Choi [@MR540914] and Blackadar [@MR808296]. Choi proved that the following unitaries in $M_2(\mathcal O_2)\cong \mathcal O_2$ satisfy $U^2 = 1$, $V^3 = 1$ and generate a copy of $C^*_r(\mathop{\mathrm{PSL}}(2,\mathbb Z))$ [@MR540914]: $$U = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix},\quad V = \begin{pmatrix} 0 & S_2^* \\ S_1 & S_2 S_1^*\end{pmatrix},$$ where $S_1$ and $S_2$ are canonical isometries generating $\mathcal O_2$. Blackadar, see [@MR808296], then produced an explicit $C^*$-subalgebra of the UHF algebra $\bigotimes_{k\in\mathbb Z} M_2(\mathbb{C})$ which surjects onto $\mathcal O_2$. In what follows, we describe his construction with a slight change of notation. Consider the crossed product $B=\left(\bigotimes_{k\in\mathbb Z} M_2(\mathbb C)\right)\rtimes \mathbb Z$, where $\mathbb Z$ acts by shifting the tensor factors. It is generated by the canonical unitary $z$ implementing the shift and a copy of $M_2(\mathbb C)$ in the $0$-th entry. We let the latter be generated by a projection $e_0$ and a unitary $t_0$ which maps $e_0$ to its complement: $t_0 e_0 t_0^* = 1-e_0$. Explicitly, we can take $$e_0=\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix},\quad t_0 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \in M_2(\mathbb C).$$ The shifted copies of $e_0$ and $t_0$ will be denoted by $e_k= z^k e_0 z^{-k}$ resp. $t_k=z^k t_0 z^{-k}$, $k\in \mathbb Z$. We let $e\coloneqq e_{0}$. Blackadar then introduces the following elements: $$s_1\coloneqq z(1-e)=(1-e_1)z,\quad s_2 = t_1s_1 = t_1z(1-e) = zt_0(1-e) = t_1(1-e_1)z.$$ They satisfy $$s_1^*s_1 = s_2^*s_2 = 1-e,\quad s_1s_1^* = 1-e_1, \quad s_2s_2^* = e_1$$ and therefore their images $S_1$ and $S_2$ in the quotient by a suitable ideal containing $e$ generate the Cuntz algebra $\mathcal O_2$.
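For the reader's convenience, the displayed relations follow by a direct computation from $z^*z=zz^*=1$, $t_1^*t_1=1$, and $t_1e_1t_1^* = 1-e_1$ (the shift of $t_0e_0t_0^*=1-e_0$): $$\begin{aligned}
s_1^*s_1 &= (1-e)z^*z(1-e) = 1-e,\\
s_2^*s_2 &= s_1^*t_1^*t_1s_1 = s_1^*s_1 = 1-e,\\
s_1s_1^* &= z(1-e)z^* = 1-e_1,\\
s_2s_2^* &= t_1s_1s_1^*t_1^* = t_1(1-e_1)t_1^* = e_1.\end{aligned}$$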
**Lemma 25**. *The following elements are unitary lifts of $U$ and $V$ into $M_2(B)$: $$\widehat U = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix},\quad \widehat V = \begin{pmatrix} e & s_2^* \\ s_1 & s_2 s_1^*\end{pmatrix}$$ which generate a copy of $\mathop{\mathrm{PSL}}(2,\mathbb Z)$ inside ${\rm U}(M_2(B))$.*

*Proof.* Indeed, $$\widehat V^* = \begin{pmatrix} e & s_1^* \\ s_2 & s_1 s_2^*\end{pmatrix},$$ and so $$\widehat V^* \widehat V = \begin{pmatrix} e+s_1^*s_1 & es_2^* + s_1^*s_2s_1^* \\ s_2e+s_1s_2^*s_1 & s_2s_2^* + s_1s_2^*s_2s_1^* \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix},$$ since $s_1^*s_1 = 1-e$, $s_2^*s_1 = 0$, $s_2 e = 0$ and $$s_2s_2^* + s_1s_2^*s_2s_1^* = e_1 + s_1(1-e)s_1^* = 1 - s_1es_1^* = 1.$$ Moreover, $$\widehat V^2 = \begin{pmatrix} e+s_2^*s_1 & es_2^* + s_2^*s_2s_1^* \\ s_1e+s_2s_1^*s_1 & s_1s_2^* + s_2s_1^*s_2s_1^* \end{pmatrix} = \widehat V^*.$$ This finishes the proof. ◻

We equip $B$ with the canonical normalized trace $\tau$ coming from the crossed product structure using the standard tensor product trace on $\bigotimes_{k\in \mathbb Z} M_2(\mathbb C)$. We then canonically extend $\tau$ to a normalized trace on $M_2(B)$. In view of the existence of the surjection of the subalgebra generated by $\widehat U$ and $\widehat V$ onto $C^*_r({\rm PSL}_2(\mathbb Z))$, this representation of ${\rm PSL}_2(\mathbb Z)$ into ${\rm U}(M_{2^{\infty}}(\mathbb C))$ is "as non-amenable as it gets" and provides a promising candidate for a free copy of ${\rm PSL}_2(\mathbb Z)$ (and hence $F_2$) in ${\rm U}(R)$ with the $2$-norm. In the next step, we observe that this copy of ${\rm PSL}_2(\mathbb Z)$ actually sits inside the full group of a natural hyperfinite equivalence relation. Indeed, consider the Cantor space $X=\{0,1\}^\mathbb Z\times \{0,1\}$ with the natural product measure giving each bit weight $1/2$.
We will interpret it as the state space of a Turing machine with a bi-infinite tape with zeroes and ones and an additional state (or signal) that can take values in $\mathbb Z/2$. We identify the algebra $C(X)$ with the diagonal subalgebra in $\bigotimes_{k\in \mathbb Z} M_2(\mathbb{C})$; the trace $\tau$ then corresponds to the aforementioned product measure. Now, the unitary $z$ corresponds to the tape shift, and $t_0$ corresponds to switching the signal. Therefore the unitaries $\widehat U$ and $\widehat V$ are elements of the full group $[\mathcal R]$ of the hyperfinite equivalence relation given by the natural measure preserving action of $(\mathbb Z/2\mathbb Z\wr \mathbb Z)\times \mathbb Z/2\mathbb Z$ on $X$, where the first factor acts by shifting and changing entries on the tape, and the second factor acts by changing the signal. It is routine to check that under this identification the trace $\tau$ of an element $g\in[\mathcal R]$ is equal to the measure of the set of fixed points of $g$. To make the computations easier, we will conjugate $\widehat U$ and $\widehat V^*$ to the elements $$u= \mathop{\mathrm{diag}}(1,z^*)\cdot \widehat U\cdot \mathop{\mathrm{diag}}(1,z)= \begin{pmatrix} 0 & z \\ z^* & 0 \end{pmatrix}$$ and $$v= \mathop{\mathrm{diag}}(1,z^*)\cdot \widehat V^*\cdot \mathop{\mathrm{diag}}(1,z)= \begin{pmatrix} e & s_1^*z \\ z^*s_2 & z^*s_1 s_2^*z\end{pmatrix} = \mathop{\mathrm{diag}}(1,t_0)\cdot\begin{pmatrix} e & 1-e \\ 1-e & e \end{pmatrix}.$$ Now, $u$ has the interpretation that it changes the signal and shifts the tape according to the signal: to the right if the signal was $1$, or to the left if the signal was $0$. In particular, it does not change the tape -- it only changes the signal and shifts the tape. The element $v$ has the interpretation that it changes the signal iff we read a $1$ at the zeroth entry and afterwards, if the signal is $1$, it also changes the zeroth entry of the tape.
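The combinatorial description of $u$ and $v$ can be checked by a short simulation. The following Python sketch (a hypothetical re-implementation for illustration, independent of the [Magma]{.smallcaps} code in the appendix) encodes a machine state as a triple (signal, tape, head offset) and verifies that $u$ and $v$ have orders $2$ and $3$:

```python
from itertools import product

def u(state):
    # Toggle the signal; shift the tape left (offset +1) if the signal
    # was 0, right (offset -1) if it was 1.  The tape itself is unchanged.
    S, tape, pos = state
    return (1 - S, dict(tape), pos + 1 if S == 0 else pos - 1)

# v permutes the pairs (x_0, S): (0,0) is fixed, and
# (0,1) -> (1,1) -> (1,0) -> (0,1) is a 3-cycle; the tape is not shifted.
V_CYCLE = {(0, 0): (0, 0), (0, 1): (1, 1), (1, 1): (1, 0), (1, 0): (0, 1)}

def v(state):
    S, tape, pos = state
    tape = dict(tape)
    tape[pos], S = V_CYCLE[(tape[pos], S)]
    return (S, tape, pos)

# u has order 2 and v has order 3 on every state, as claimed.
for S, bits in product((0, 1), product((0, 1), repeat=3)):
    tape = {i - 1: b for i, b in enumerate(bits)}
    state = (S, tape, 0)
    assert u(u(state)) == state
    assert v(v(v(state))) == state
```

Here the head offset plays the role of the position counter in the Magma program below; only finitely many tape cells need to be stored, since a word $w$ in $u,v$ moves the head at most $|w|_u$ steps, where $|w|_u$ is the number of occurrences of $u$ in $w$.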
In particular, the operation $v$ does not shift the tape. We summarize the above observations as follows:

**Proposition 26**. *Consider the Cantor space $X=\{0,1\}^\mathbb Z\times \{ 0,1\}$ with the natural product measure $\mu$ assigning weight $1/2$ to every bit, understood as the state space of a Turing machine whose tape is bi-infinite with entries from the alphabet $\{0,1\}$ and whose only internal state, the signal $S$, takes values in $\mathbb Z/2$. Let $x_0$ denote the 0-th entry on the tape.*

*Consider the following commands of this Turing machine:*

- *if $S = 0$, then shift the tape to the left and set $S\coloneqq 1$; if $S = 1$, shift the tape to the right and set $S\coloneqq 0$;*

- *permute the pairs $(x_0,S)$ as follows: $(0,0)\mapsto (0,0)$, $(0,1)\mapsto(1,1)\mapsto(1,0)\mapsto(0,1)$ without shifting the tape.*

*Then $u$ and $v$ act on $X$ as probability measure-preserving automorphisms of order $2$ and $3$ respectively, generating a copy of $\mathop{\mathrm{PSL}}(2,\mathbb Z)$ inside the full group of a hyperfinite equivalence relation on $X$.*

Since Theorem [Theorem 2](#main2){reference-type="ref" reference="main2"} is only true for ergodic actions, we have to investigate the ergodicity of the above action. It turns out that it is not ergodic as such, but its ergodic components admit a transparent description. We let $\pi_0\colon \{0,1\}^{\mathbb Z}\to \{0,1\}^{-\mathbb{N}}$ be the natural projection discarding everything to the right of the $0$-th entry and $\pi_{-1}\colon \{0,1\}^{\mathbb Z}\to \{0,1\}^{-\mathbb{N}}$ be the natural projection discarding everything to the right of the $(-1)$-st entry. We furthermore let $L\colon \{0,1\}^{-\mathbb{N}}\setminus\{(\dots,1,1,1)\}\to \{0,1\}^{-\mathbb{N}}$ be the map that discards the rightmost zero in the sequence and all entries to the right of it.

**Proposition 27**.
*The following map $$p\colon (X,\mu)\to (\{0,1\}^{-\mathbb{N}},p_*\mu),$$ $$p(x,0) = (L\circ \pi_0)(x),\quad p(x,1) = (L\circ \pi_{-1})(x),$$ is the ergodic decomposition of the action $\mathop{\mathrm{PSL}}(2,\mathbb Z)\curvearrowright (X,\mu)$. All ergodic components of this action are isomorphic; in particular, for each $g\in \mathop{\mathrm{PSL}}(2,\mathbb Z)$ the measure of the fixed point set is a.e. constant on the space of ergodic components.*

*Proof.* The key observation here is the following: if $(x,S)$ is such that $(x_0,S) = (0,0)$, then $v$ does not change the pair $(x_0,S)$, and then $u$ is the only nontrivial operation, which necessarily shifts the tape to the left. Furthermore, in an arbitrary state $u$ is the only operation which can shift the tape, and after shifting it to the right the signal $S$ is set to $0$. Therefore, if $x_{-1} = 0$ and we shift the tape to the right (by applying $u$), then we necessarily get $(x_0,S) = (0,0)$, from which we can only shift the tape to the left. Together, this implies the following claim: given an initial state $(x,S)$, we can never shift the tape beyond the right-most zero in $\pi_0(x)$ if $S=0$ or beyond the right-most zero in $\pi_{-1}(x)$ if $S=1$ (we thus refer to this zero on the tape as the "stopping zero"). Now, the map $p$ is exactly the map which discards the stopping zero and the half-tape to the right of it (ignoring the null set of states where no zero occurs on the strictly negative half of the tape). By the above considerations, each fiber $p^{-1}(y)$ is invariant under the action, being exactly the set of states which have the prefix $y$ to the left of the stopping zero. By disintegration of measures, it comes naturally equipped with the probability measure $\nu_y$ which can be described as follows.
Consider the product measure $\theta$ on $\{0, 1\} ^\mathbb{N}$ and the maps $$\varphi_y \colon \{ 0, 1\} ^\mathbb{N}\setminus \{ (1,1,1,\dots ) \} \to \{0,1\}^{\mathbb Z}\times\{0\},$$ $$1^k0x \mapsto (y01^kx, 0),$$ where the leftmost bit of $x$ occupies coordinate $1$ in the string $y01^kx$, and $$\psi_y \colon \{ 0, 1\} ^\mathbb{N}\setminus \{ (1,1,1,\dots ) \} \to \{0,1\}^{\mathbb Z}\times\{1\},$$ $$1^k0x \mapsto (y01^kx, 1),$$ where the leftmost bit of $x$ occupies coordinate $0$ in the string $y01^kx$. Then $\nu_y = \frac12(\varphi_y)_*\theta + \frac12(\psi_y)_*\theta$. Equivalently, $\nu _y =\tfrac{1}{2}\nu _y ^0 + \tfrac{1}{2}u_*\nu _y^0$, where $\nu _y^0 = (\varphi_y)_*\theta$. Let us check that the action of $\mathop{\mathrm{PSL}}(2,\mathbb Z)$ on $(p^{-1}(y),\nu_y)$ is ergodic. We first prove the following claim: given an arbitrary initial state of the form $(z,S)=(y01^jtx,S)\in p^{-1}(y)$ (where $x$ is the infinite tail and $t$ is a finite string of length $k$ whose leftmost bit occupies coordinate $1$ if $S=0$, and coordinate $0$ if $S=1$) together with an arbitrary $S'\in \{0,1\}$, there is an element $g\in \mathop{\mathrm{PSL}}(2,\mathbb Z)$ such that $g(z,S)=(y01^{j+k}x,S')$, where each bit of $t$ is replaced by a $1$ and the leftmost bit of $x$ occupies coordinate $1$. If $S=0$, we first apply $u$ to shift the tape to the left. Then we repeat the following procedure $k$ times: apply a power of $v$ to get $(z_0,S) = (1,0)$ (this is possible because now $S = 1$), then apply $u$ again to shift the tape further to the left. Finally, we can apply $v$ if necessary to change $S$ to $S'$, retaining $1$ on the tape, finishing the proof of the claim. Let $E_y$ denote the orbit equivalence relation of the action of $\mathop{\mathrm{PSL}}(2,\mathbb Z)$ on $p^{-1}(y)$, and let $E_y^0$ denote its restriction to $p^{-1}(y)\cap \{ 0, 1 \}^{\mathbb Z} \times \{ 0 \}$. 
It follows from the claim that the map $\varphi _y$ defined above gives an isomorphism from the equivalence relation of eventual equality on $\{ 0, 1 \} ^{\mathbb{N}}$ (equipped with product measure) to $E_y^0$; to see that $\varphi _y$ gives an isomorphism with $E_y^0$ and not just with one of its subequivalence relations, observe that the map $p^{-1}(y)\rightarrow \{ 0, 1\} ^{\mathbb{N}}$, given by $(z,0)\mapsto \varphi _y^{-1}(z,0)$ and $(z,1)\mapsto \psi _y ^{-1}(z,1)$, maps $E_y$-equivalent points to eventually equal sequences. Since $\nu _y =\tfrac{1}{2}\nu _y ^0 + \tfrac{1}{2}u_*\nu _y^0$, this implies that the action of $\mathop{\mathrm{PSL}}(2,\mathbb Z)$ on $(p^{-1}(y),\nu _y )$ is ergodic. Finally, it is easy to see that the actions of $\mathop{\mathrm{PSL}}(2,\mathbb Z)$ on any two fibers $p^{-1}(y)$ and $p^{-1}(y')$ are isomorphic through the obvious map which replaces the prefix $y$ to the left of the stopping zero with the prefix $y'$. ◻

We are thus interested in understanding the measures of fixed point sets of words in $u$ and $v$. The above proposition suggests an efficient method to evaluate them on a computer. Indeed, it is obvious that for every initial state of the Turing machine a word $w=w(u,v)$ in $u$ and $v$ can shift the tape at most by the number of occurrences of $u$ in $w$, which we denote by $|w|_u$. Therefore, it is enough to evaluate the word $w$ on $2^{2|w|_u + 2}$ possible initial configurations of the Turing machine ($2|w|_u + 1$ bits on the tape and 1 bit of the signal), checking whether the Turing machine returns to the initial configuration; the proportion of the fixed configurations is exactly the measure of the fixed point set. Moreover, it turns out that on most of the initial configurations the tape will actually be shifted by far less than $|w|_u$ in one direction.
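The exhaustive evaluation just described is easy to sketch. The following self-contained Python fragment (a hypothetical illustration re-declaring the two generators from Proposition 26; the actual computation in the appendix is done in [Magma]{.smallcaps}) evaluates a word on all initial configurations in a window of radius $m\geq |w|_u$ and returns the proportion of fixed ones:

```python
from itertools import product

def u(state):
    # Toggle the signal and shift the tape (offset +1 if S was 0, else -1).
    S, tape, pos = state
    return (1 - S, dict(tape), pos + 1 if S == 0 else pos - 1)

V_CYCLE = {(0, 0): (0, 0), (0, 1): (1, 1), (1, 1): (1, 0), (1, 0): (0, 1)}

def v(state):
    # Permute (x_0, S) by the 3-cycle above; the tape is not shifted.
    S, tape, pos = state
    tape = dict(tape)
    tape[pos], S = V_CYCLE[(tape[pos], S)]
    return (S, tape, pos)

def trace(word, m):
    # Proportion of initial configurations (2m+1 tape bits plus the
    # signal bit) sent back to themselves by the word; m must be at
    # least |w|_u, so that the head never leaves the stored window.
    fixed, total = 0, 0
    for S, bits in product((0, 1), product((0, 1), repeat=2 * m + 1)):
        tape = dict(zip(range(-m, m + 1), bits))
        state = (S, dict(tape), 0)
        for g in word:
            state = g(state)
        total += 1
        fixed += state == (S, tape, 0)
    return fixed / total
```

For instance, `trace([u, u], 1)` returns `1.0`, while `trace([v], 1)` returns `0.25`: the only pair fixed by $v$ is $(x_0,S)=(0,0)$, which carries measure $1/4$.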
Thus, to estimate the measure of the fixed points from below, it is enough to bound the tape displacement by some number $\ell$, discarding an initial configuration as "possibly non-fixed" once the tape displacement exceeds $\ell$ in the process of applying $w(u,v)$. This reduces the number of initial configurations to be checked to $2^{2\ell + 2}$. Using computer search and the idea of applying iterated commutators, we were able to identify the following elements with corresponding lower trace estimates with displacement bound $\ell \coloneqq 7$ (see the [Magma]{.smallcaps} code in the Appendix). We let $a\coloneqq uv$, $b\coloneqq uv^2$, $$g_1\coloneqq [a^{14} v a^{-14},v], \quad \tau(g_1)\geq 0.53,$$ $$g_2\coloneqq [a^9 g_1a^{-9},g_1],\quad \tau(g_2)\geq 0.64,$$ $$g_3\coloneqq [b^2g_2b^{-2},g_2], \quad \tau(g_3)\geq 0.69.$$ One quick way to see that ${\rm PSL}_2(\mathbb Z)$ is MIF is to note that the shortest mixed identity of ${\rm PSL}_2(p)$ is of length $\Omega(p)$, see [@bst]. Applying Proposition [Proposition 27](#prop:isomorphic-ergodic-components){reference-type="ref" reference="prop:isomorphic-ergodic-components"} and Theorem [Theorem 2](#main2){reference-type="ref" reference="main2"}, we thus obtain the following:

**Theorem 28**. *The above copy of ${\rm PSL}_2(\mathbb Z)$ is not discrete as a subgroup of $([\mathcal R],d)$ resp. $({\rm U}(R),\lVert{\cdot}\rVert_2)$.*

We include this example so that it becomes obvious that Theorem [Theorem 2](#main2){reference-type="ref" reference="main2"} is actually quite useful when studying concrete examples. We take that example as evidence that the answer to Question [Question 5](#q2){reference-type="ref" reference="q2"} might be negative.

# Computation of the trace

This is the source code of a [Magma]{.smallcaps} program giving the estimates for the elements in Section 5.
It can be successfully executed on the free [Magma]{.smallcaps} online calculator <http://magma.maths.usyd.edu.au/calc/> if you restrict the computation to the first element (see the end of the code).

```
QQ:=RationalField();
RR:=RealField(4);
ZZ:=Integers();

tape0:=AssociativeArray(ZZ);
signal0:=0;
state0:=[*signal0,tape0,0*];

function u(state,init)
    res:=state;
    if state[1] eq 0 then
        res[1]:=1; res[3]:=res[3]+1;
    end if;
    if state[1] eq 1 then
        res[1]:=0; res[3]:=res[3]-1;
    end if;
    return res,init;
end function;

function v(state,init)
    res:=state; rinit:=init;
    if res[1] eq 0 and res[2][state[3]] eq 0 then
        return res,rinit;
    end if;
    if res[1] eq 0 and res[2][state[3]] eq 1 then
        res[1]:=1; res[2][state[3]]:=0;
        return res,rinit;
    end if;
    if res[1] eq 1 and res[2][state[3]] eq 0 then
        res[1]:=1; res[2][state[3]]:=1;
        return res,rinit;
    end if;
    if res[1] eq 1 and res[2][state[3]] eq 1 then
        res[1]:=0; res[2][state[3]]:=1;
        return res,rinit;
    end if;
end function;

function vv(state,init)
    res:=state; rinit:=init;
    if res[1] eq 0 and res[2][state[3]] eq 0 then
        return res,rinit;
    end if;
    if res[1] eq 0 and res[2][state[3]] eq 1 then
        res[1]:=1; res[2][state[3]]:=1;
        return res,rinit;
    end if;
    if res[1] eq 1 and res[2][state[3]] eq 1 then
        res[1]:=1; res[2][state[3]]:=0;
        return res,rinit;
    end if;
    if res[1] eq 1 and res[2][state[3]] eq 0 then
        res[1]:=0; res[2][state[3]]:=1;
        return res,rinit;
    end if;
end function;

function iseq(state,init)
    if state[3] ne 0 then return false; end if;
    if state[1] ne init[1] then return false; end if;
    if #Keys(init[2]) ne #Keys(state[2]) then return false; end if;
    for k in Keys(init[2]) do
        if init[2][k] ne state[2][k] then return false; end if;
    end for;
    return true;
end function;

function apply_generator(gen,state,init)
    if gen eq 1 or gen eq -1 then return u(state,init); end if;
    if gen eq 2 then return v(state,init); end if;
    if gen eq -2 then return vv(state,init); end if;
end function;

function precise_trace(h,prec)
    tr:=0; badness:=0;
    state:=[*0,AssociativeArray(ZZ),0*];
    bits:=[];
    pwrs:=ElementToSequence(h);
    for k in [0..2^(2*prec+2)-1] do
        if (k mod 2^(2*prec-2)) eq 0 then
            printf "prc progress = %o trace = %o badness = %o\n",
                RR! k/2^(2*prec+2), RR ! tr, RR ! badness;
        end if;
        bits:=IntegerToSequence(k,2);
        for i in [#bits+1..2*prec+2] do Append(~bits,0); end for;
        state[1]:=bits[1];
        for l in [-prec..prec] do state[2][l]:=bits[prec+l+2]; end for;
        state[3]:=0;
        init:=[*state[1],state[2]*];
        for i in [1..#pwrs] do
            state,init:=apply_generator(pwrs[i],state,init);
            if state[3] gt prec or state[3] lt -prec then
                badness:=badness+1/2^(2*prec+2);
                continue k;
            end if;
        end for;
        if iseq(state,init) then tr:=tr+1/2^(2*prec+2); end if;
    end for;
    return tr,badness;
end function;

G<u,v>:=FPGroup<u,v | u^2 = v^3 = 1>;
a:=v*u;
b:=v^2*u;
comm:=[];

comm[1]:=(a^14*v*a^-14,v); //~0.53
print "prc",RR ! precise_trace(comm[1],7);

//Comment the remaining lines out if you want to run the code
//on free Magma online calculator http://magma.maths.usyd.edu.au/calc/

comm[2]:=(a^9*comm[1]*a^-9,comm[1]); //~0.64
print "prc",RR ! precise_trace(comm[2],7);

comm[3]:=(b^2*comm[2]*b^-2,comm[2]); //~0.69
print "prc",RR ! precise_trace(comm[3],7);
```

# Acknowledgments {#acknowledgments .unnumbered}

A manuscript written by the third-named author containing the ideas outlined in the introduction circulated in 2015. The first-named and the third-named author acknowledge funding by the Deutsche Forschungsgemeinschaft (SPP 2026). The fourth-named author was supported in part by NSF grant DMS 2246684.
--- abstract: | We initiate the study of the spectrum $\mathop{\mathrm{Vspec}}(\kappa)$ of sets that can be realized as the vanishing levels $V(\mathbf T)$ of a normal $\kappa$-tree $\mathbf T$. The latter is an invariant in the sense that if $\mathbf T$ and $\mathbf T'$ are club-isomorphic, then $V(\mathbf T)\mathbin{\bigtriangleup}V(\mathbf T')$ is nonstationary. Additional features of this invariant imply that $\mathop{\mathrm{Vspec}}(\kappa)$ is closed under finite unions and intersections. The set $V(\mathbf T)$ must be stationary for an homogeneous normal $\kappa$-Aronszajn tree $\mathbf T$, and if there exists a special $\kappa$-Aronszajn tree, then there exists one $\mathbf T$ that is homogeneous and satisfies $V(\mathbf T)=\kappa$ (modulo clubs). It is consistent (from large cardinals) that there is an $\aleph_2$-Souslin tree, and yet $V(\mathbf T)$ is co-stationary for every $\aleph_2$-tree $\mathbf T$. Both $V(\mathbf T)=\emptyset$ and $V(\mathbf T)=\kappa$ (modulo clubs) are shown to be feasible using $\kappa$-Souslin trees, even at some large cardinal close to a weakly compact. It is also possible to have a family of $2^\kappa$ many $\kappa$-Souslin trees for which the corresponding family of vanishing levels forms an antichain modulo clubs. address: - Department of Mathematics, Bar-Ilan University, Ramat-Gan 52900, Israel. - Department of Mathematics, Bar-Ilan University, Ramat-Gan 52900, Israel. - Department of Mathematics, Bar-Ilan University, Ramat-Gan 52900, Israel. author: - Assaf Rinot - Shira Yadai - Zhixing You date: "Preprint as of September 26, 2023. For updates, visit [http://p.assafrinot.com/58]{.sans-serif}." title: The vanishing levels of a tree --- # Introduction Throughout this paper, $\kappa$ denotes a regular uncountable cardinal. Recall that a poset $\mathbf T=(T,{<_T})$ is a *$\kappa$-tree* iff all of the following hold: 1. 
For every $x\in T$, the set $x_\downarrow:=\{ y\in T\mathrel{|}\allowbreak y<_T x\}$ is well-ordered by $<_T$. Hereafter, write $\mathop{\mathrm{ht}}(x):=\mathop{\mathrm{otp}}(x_\downarrow,<_T)$; 2. For every ordinal $\alpha<\kappa$, the set $T_\alpha:=\{ x\in T\mathrel{|}\allowbreak\mathop{\mathrm{ht}}(x)=\alpha\}$ is nonempty and has size less than $\kappa$, and the set $T_\kappa$ is empty. A subset $B\subseteq T$ is an *$\alpha$-branch* iff $(B,<_T)$ is linearly ordered and $\{\mathop{\mathrm{ht}}(x)\mathrel{|}\allowbreak x\in B\}=\alpha$; it is said to be *vanishing* iff it has no upper bound in $\mathbf T$. **Definition 1** (Vanishing levels). For a $\kappa$-tree $\mathbf T=(T,<_T)$, let $V(\mathbf T)$ denote the set of all $\alpha\in\mathop{\mathrm{acc}}(\kappa)$ such that for any $x\in T$ with $\mathop{\mathrm{ht}}(x)<\alpha$ there exists a vanishing $\alpha$-branch containing $x$.[^1] The above is an invariant of trees in the sense that if two $\kappa$-trees $\mathbf T,\mathbf T'$ are isomorphic on a club, then $V(\mathbf T)$ is equal to $V(\mathbf T')$ modulo a club. It also satisfies that $V(\mathbf T\otimes \mathbf T')=V(\mathbf T)\cup V(\mathbf T')$ and $V(\mathbf T+\mathbf T')=V(\mathbf T)\cap V(\mathbf T')$ for any two normal $\kappa$-trees $\mathbf T,\mathbf T'$. The importance of this invariant became apparent in [@paper48], where it was shown that if $\mathbf T$ is a $\kappa$-Souslin tree, i.e., a $\kappa$-tree with no $\kappa$-branches and no $\kappa$-sized antichains, then the combinatorial principle $\clubsuit_{\mathop{\mathrm{AD}}}(S)$ holds for some subset $S\subseteq\kappa$ that is equal to $V(\mathbf T)$ modulo a club.[^2] In particular, if $V(\mathbf T)$ is stationary, then a nontrivial instance of $\clubsuit_{\mathop{\mathrm{AD}}}$ holds true, and this has important applications in set-theoretic topology. Surprisingly enough, the first main result of this paper shows that $V(\mathbf T)$ need not be stationary. 
This is demonstrated in Gödel's constructible universe, $\mathsf{L}$, where we obtain the following characterization: **Theorem 1**. *In $\mathsf{L}$, for every (regular uncountable cardinal) $\kappa$ that is not weakly compact, the following are equivalent:* - *there exists a $\kappa$-Souslin tree $\mathbf T$ such that $V(\mathbf T)=\emptyset$;* - *there exists a normal and splitting $\kappa$-tree $\mathbf T$ such that $V(\mathbf T)=\emptyset$;* - *$\kappa$ is not the successor of a cardinal of countable cofinality.* On the other extreme, it is possible to have a $\kappa$-Souslin tree $\mathbf T$ with $V(\mathbf T)$ as large as possible. Again, we obtain a complete characterization: **Theorem 2**. *In $\mathsf{L}$, for every (regular uncountable cardinal) $\kappa$ that is not weakly compact, the following are equivalent:* - *there exists a $\kappa$-Souslin tree $\mathbf T$ such that $V(\mathbf T)=\mathop{\mathrm{acc}}(\kappa)$;* - *there exists a $\kappa$-tree $\mathbf T$ such that $V(\mathbf T)=\mathop{\mathrm{acc}}(\kappa)$;* - *$\kappa$ is not subtle.* An interesting feature of the proof of Theorem [Theorem 2](#thmb){reference-type="ref" reference="thmb"} is that it goes through a pump-up theorem generating $\kappa$-Souslin trees from other input trees with weaker properties. For a $\kappa$-tree $\mathbf T$, let $V^-(\mathbf T)$ denote the set of all $\alpha\in\mathop{\mathrm{acc}}(\kappa)$ such that there exists a vanishing $\alpha$-branch. If $\mathbf T$ is homogeneous, then $V^-(\mathbf T)$ coincides with $V(\mathbf T)$, but in contrast with Theorem [Theorem 1](#thma){reference-type="ref" reference="thma"}, for every normal $\kappa$-Aronszajn tree $\mathbf T$, the set $V^-(\mathbf T)$ is necessarily stationary.[^3] Our first pump-up theorem asserts that the existence of a special $\kappa$-Aronszajn tree $\mathbf T$ is equivalent to the existence of one with $V(\mathbf T)=\mathop{\mathrm{acc}}(\kappa)$. 
Our second pump-up theorem asserts that for every $\kappa$-tree $\mathbf K$ there exists a $\kappa$-tree $\mathbf T$ such that $V^-(\mathbf K)\setminus V(\mathbf T)$ is nonstationary. Our third pump-up theorem asserts that assuming an instance of the proxy principle $\mathop{\mathrm{P}}(\ldots)$ from [@paper22],[^4] the corresponding tree $\mathbf T$ may moreover be made to be $\kappa$-Souslin: **Theorem 3**. *Suppose that $\mathop{\mathrm{P}}(\kappa,2,{\sqsubseteq^*},1)$ holds. Then:* 1. *For every $\kappa$-tree $\mathbf K$, there exists a $\kappa$-Souslin tree $\mathbf T$ such that $V^-(\mathbf K)\setminus V(\mathbf T)$ is nonstationary. In particular:* 2. *There exists a $\kappa$-Souslin tree $\mathbf T$ such that $V(\mathbf T)$ is stationary.* The preceding addresses the problem of ensuring that $V(\mathbf T)$ covers some stationary set $S$. The next theorem addresses the dual problem. Along the way, it provides a cheap way to obtain a family of $2^\kappa$-many $\kappa$-Souslin trees that are not pairwise club-isomorphic. **Theorem 4**. *If $\diamondsuit(S)$ holds for some nonreflecting stationary subset $S$ of a strongly inaccessible cardinal $\kappa$, then there is an almost disjoint family $\mathcal S$ of $2^\kappa$ many stationary subsets of $S$ such that, for each $S'\in\mathcal S$, there is a $\kappa$-Souslin tree $\mathbf T$ with $V(\mathbf T)=S'$.* Let us now come back to the motivating problem of getting instances of $\clubsuit_{\mathop{\mathrm{AD}}}$. By [@paper48 Theorem 2.30], if $\kappa$ is weakly compact, then $\clubsuit_{\mathop{\mathrm{AD}}}(S)$ fails for every $S$ with $\mathop{\mathrm{Reg}}(\kappa)\subseteq S\subseteq\kappa$. This raises the question as to whether $\clubsuit_{\mathop{\mathrm{AD}}}(S)$ may hold over a large subset $S$ of a cardinal $\kappa$ that is close to being weakly compact. We answer this question in the affirmative: **Theorem 5**.
*Assuming the consistency of a weakly compact cardinal, it is consistent that for some strongly inaccessible cardinal $\kappa$ satisfying $\chi(\kappa)=\omega$,[^5] there is a $\kappa$-Souslin tree $\mathbf T$ such that $V(\mathbf T)=\mathop{\mathrm{acc}}(\kappa)$.* In the appendix to this paper, we improve a result from [@paper48] concerning the connection between Ostaszewski's principle $\clubsuit$ and the principle $\clubsuit_{\mathop{\mathrm{AD}}}$. As a byproduct, we obtain the following unexpected result: **Theorem 6**. *If $\clubsuit(S)$ holds over a nonreflecting stationary $S\subseteq\kappa$, then there exists a Dowker space of size $\kappa$.* ## Organization of this paper In Section [2](#sect2){reference-type="ref" reference="sect2"}, we develop the basic theory of vanishing levels of trees. It is proved that if $\kappa$ is not a strong limit, then $V^-(\mathbf T)$ is stationary for every normal and splitting $\kappa$-tree $\mathbf T$. It is proved that for every $\kappa$-tree $\mathbf K$, there exists a $\kappa$-tree $\mathbf T$ such that $V^-(\mathbf K)\setminus V(\mathbf T)$ is nonstationary, and that the existence of a special $\kappa$-Aronszajn tree $\mathbf T$ is equivalent to the existence of an homogeneous one with $V(\mathbf T)=\mathop{\mathrm{acc}}(\kappa)$. In Section [3](#sec4){reference-type="ref" reference="sec4"}, we prove Theorem [Theorem 3](#thmc){reference-type="ref" reference="thmc"} and some variations of it. As a corollary, we get Theorem [Theorem 2](#thmb){reference-type="ref" reference="thmb"} and infer that if $\square_\lambda\mathrel{+}\diamondsuit(\lambda^+)$ holds for an infinite cardinal $\lambda$, or if $\square(\lambda^+)\mathrel{+}\textsf{\textup{GCH}}$ holds for a regular uncountable $\lambda$, then there exists a $\lambda^+$-Souslin tree $\mathbf T$ with $V(\mathbf T)=\mathop{\mathrm{acc}}(\lambda^+)$. 
In Section [4](#sect5){reference-type="ref" reference="sect5"}, we address the problem of realizing a given nonreflecting stationary subset of $\kappa$ as $V(\mathbf T)$ for some $\kappa$-Souslin tree $\mathbf T$. The proof of Theorem [Theorem 4](#thmd){reference-type="ref" reference="thmd"} will be found there. In Section [5](#sect6){reference-type="ref" reference="sect6"}, we address the problem of constructing an homogeneous $\kappa$-Souslin tree $\mathbf T$ such that $V(\mathbf T)=\{ \alpha<\kappa\mathrel{|}\allowbreak\mathop{\mathrm{cf}}(\alpha)\in x\}$ for a prescribed nonempty finite set $x\subseteq\mathop{\mathrm{Reg}}(\kappa)$. In particular, this is shown to be feasible in $\mathsf L$ whenever $\kappa$ is ${<}\max(x)$-inaccessible. The proof of Theorem [Theorem 1](#thma){reference-type="ref" reference="thma"} will be found there. In Section [6](#sect7){reference-type="ref" reference="sect7"}, we deal with Souslin trees admitting an ascent path. It is proved that for every uncountable cardinal $\lambda$, $\square_\lambda+\textsf{\textup{GCH}}$ entails that for every $\mu\in\mathop{\mathrm{Reg}}(\mathop{\mathrm{cf}}(\lambda))$ there exists a $\lambda^+$-Souslin tree $\mathbf T$ with a $\mu$-ascent path such that $V(\mathbf T)=\mathop{\mathrm{acc}}(\lambda^+)$. The proof of Theorem [Theorem 5](#thme){reference-type="ref" reference="thme"} will be found there. Section [7](#secA){reference-type="ref" reference="secA"} is a short appendix where we improve [@paper48 Lemma 2.10], from which we obtain the proof of Theorem [Theorem 6](#thmf){reference-type="ref" reference="thmf"}. ## Notation and conventions {#nocon} $H_\kappa$ denotes the collection of all sets of hereditary cardinality less than $\kappa$. $\mathop{\mathrm{Reg}}(\kappa)$ denotes the set of all infinite regular cardinals $<\kappa$. 
For $\chi\in\mathop{\mathrm{Reg}}(\kappa)$, $E^\kappa_\chi$ denotes the set $\{\alpha < \kappa \mathrel{|}\allowbreak\mathop{\mathrm{cf}}(\alpha) = \chi\}$, and $E^\kappa_{\geq \chi}$, $E^\kappa_{<\chi}$, $E^\kappa_{\neq\chi}$, are defined analogously. For a set of ordinals $C$, we write $\mathop{\mathrm{ssup}}(C) := \sup\{\alpha + 1 \mathrel{|}\allowbreak\alpha \in C\}$, $\mathop{\mathrm{acc}}^+(C) := \{\alpha < \mathop{\mathrm{ssup}}(C) \mathrel{|}\allowbreak\sup(C \cap \alpha) = \alpha > 0\}$, $\mathop{\mathrm{acc}}(C) := C \cap \mathop{\mathrm{acc}}^+(C)$, and $\mathop{\mathrm{nacc}}(C) := C \setminus \mathop{\mathrm{acc}}(C)$. For a set $S$, we write $[S]^{\chi}$ for $\{A\subseteq S\mathrel{|}\allowbreak|A|=\chi\}$, and $[S]^{<\chi}$ is defined analogously. For a set of ordinals $S$, we identify $[S]^2$ with $\{ (\alpha,\beta)\mathrel{|}\allowbreak\alpha,\beta\in S, \alpha<\beta\}$, and we let $\mathop{\mathrm{Tr}}(S):=\{ \beta<\mathop{\mathrm{ssup}}(S)\mathrel{|}\allowbreak\mathop{\mathrm{cf}}(\beta)>\omega\ \&\ S \cap \beta \text{ is stationary in } \beta\}$. We define four binary relations over sets of ordinals, as follows: - $D\sqsubseteq C$ iff there exists some ordinal $\beta$ such that $D = C \cap \beta$; - $D\sqsubseteq^* C$ iff $D \setminus \varepsilon\sqsubseteq C \setminus \varepsilon$ for some $\varepsilon < \sup(D)$; - $D\mathrel{^{S}{\sqsubseteq}} C$ iff $D\sqsubseteq C$ and $\sup(D)\notin S$; - $D \mathrel{_{\chi}{\sqsubseteq}} C$ iff $D \sqsubseteq C$ or $\mathop{\mathrm{cf}}(\sup(D))<\chi$. A *list* over a set of ordinals $S$ is a sequence $\vec A=\langle A_\alpha\mathrel{|}\allowbreak\alpha\in S\rangle$ such that, for each $\alpha\in S$, $A_\alpha$ is a subset of $\alpha$. It is said to be *thin* if $|\{ A_\alpha\cap\varepsilon\mathrel{|}\allowbreak\alpha\in S\}|<\mathop{\mathrm{ssup}}(S)$ for every $\varepsilon<\mathop{\mathrm{ssup}}(S)$. It is said to be *$\xi$-bounded* if $\mathop{\mathrm{otp}}(A_\alpha)\le\xi$ for all $\alpha\in S$. 
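To illustrate these conventions with a toy example (the particular sets below are chosen purely for illustration), consider $C:=\{ n\mathrel{|}\allowbreak 0<n<\omega\}\cup\{\omega\}$. Then $$\mathop{\mathrm{ssup}}(C)=\omega+1,\qquad \mathop{\mathrm{acc}}^+(C)=\mathop{\mathrm{acc}}(C)=\{\omega\},\qquad \mathop{\mathrm{nacc}}(C)=\{ n\mathrel{|}\allowbreak 0<n<\omega\}.$$ Furthermore, $D:=\{1,2\}$ satisfies $D\sqsubseteq C$, as witnessed by $\beta:=3$, whereas $D':=C\setminus\{1\}$ satisfies $D'\not\sqsubseteq C$ and yet $D'\sqsubseteq^* C$, as witnessed by $\varepsilon:=2$. 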
A *ladder system* over $S$ is a list $\vec A=\langle A_\alpha\mathrel{|}\allowbreak\alpha\in S\rangle$ such that $\sup(A_\alpha)=\sup(\alpha)$ for every $\alpha\in S$. It is said to be *almost disjoint* if $\sup(A_\alpha\cap A_{\alpha'})<\alpha$ for all $\alpha\neq\alpha'$ in $S$. A *$C$-sequence* over $S$ is a ladder system $\vec C=\langle C_\alpha\mathrel{|}\allowbreak\alpha\in S\rangle$ such that each $C_\alpha$ is a closed subset of $\alpha$. Finally, a (resp. thin/$\xi$-bounded/almost-disjoint) *$\mathcal C$-sequence* over $S$ is a sequence $\vec{\mathcal C}=\langle\mathcal C_\alpha\mathrel{|}\allowbreak\alpha\in S\rangle$ of nonempty sets such that every element of $\prod_{\alpha\in S}\mathcal C_\alpha$ is a (resp. thin/$\xi$-bounded/almost-disjoint) $C$-sequence. # The basic theory of vanishing levels {#sect2} **Definition 1**. A tree $\mathbf T=(T,<_T)$ is said to be: - *Hausdorff* iff for every limit ordinal $\alpha$ and all $x,y\in T_\alpha$, if $x_\downarrow=y_\downarrow$, then $x=y$; - *normal* iff for every pair $\alpha<\beta$ of ordinals, if $T_\beta\neq\emptyset$, then for every $x\in T_\alpha$ there exists $y\in T_\beta$ with $x<_T y$; - *$\chi$-complete* iff any $<_T$-increasing sequence of elements of $\mathbf T$, and of length $<\chi$, has an upper bound in $\mathbf T$; - *$\varsigma$-splitting* iff every node of $\mathbf T$ admits at least $\varsigma$-many immediate successors, that is, for every $x\in T$, $|\{ y\in T\mathrel{|}\allowbreak x<_T y, \mathop{\mathrm{ht}}(y)=\mathop{\mathrm{ht}}(x)+1\}|\ge\varsigma$. By *splitting*, we mean $2$-splitting; - *$\kappa$-Aronszajn* iff $\mathbf T$ is a $\kappa$-tree with no $\kappa$-branches; - *special $\kappa$-Aronszajn tree* iff it is a $\kappa$-Aronszajn and there exists a map $\rho:T\rightarrow T$ satisfying the following: - for every non-minimal $x\in T$, $\rho(x)<_T x$; - for every $y\in T$, $\rho^{-1}\{y\}$ is covered by less than $\kappa$ many antichains. *Remark 2*. 
All the $\kappa$-Souslin trees constructed in this paper will be Hausdorff, normal and splitting. **Definition 3**. For a $\kappa$-tree $\mathbf T=(T,<_T)$: 1. $V^-(\mathbf T)$ denotes the set of all $\alpha\in\mathop{\mathrm{acc}}(\kappa)$ such that there exists a vanishing $\alpha$-branch; 2. $V(\mathbf T)$ denotes the set of all $\alpha\in\mathop{\mathrm{acc}}(\kappa)$ such that for every $x\in T$ with $\mathop{\mathrm{ht}}(x)<\alpha$ there exists a vanishing $\alpha$-branch containing $x$; 3. $\mathop{\mathrm{Vspec}}(\kappa):=\{ V(\mathbf T)\mathrel{|}\allowbreak\mathbf T\text{ is a normal }\kappa\text{-tree}\}$; 4. For $A\subseteq\kappa$, we write $T\mathbin\upharpoonright A:=\{ x\in T\mathrel{|}\allowbreak\mathop{\mathrm{ht}}(x)\in A\}$. Note that if $\mathbf T$ is a $\kappa$-tree such that $V(\mathbf T)$ is cofinal in $\kappa$, then $\mathbf T$ is normal. **Lemma 4**. *Suppose that $\mathbf T$ is a $\kappa$-tree such that $V^-(\mathbf T)$ (resp. $V(\mathbf T)$) covers a club in $\kappa$. Then there exists a subtree $\mathbf T'$ of $\mathbf T$ such that $V^-(\mathbf T')$ (resp. $V(\mathbf T')$) is equal to $\mathop{\mathrm{acc}}(\kappa)$.* *Proof.* Let $D\subseteq\kappa$ be a club as in the hypothesis. Then $\mathbf T':=(T\mathbin\upharpoonright D,<_T)$ is a subtree as sought. ◻ **Proposition 5**. *For a $\kappa$-tree $\mathbf T=(T,<_T)$:* 1. *If $\mathbf T$ is a normal $\kappa$-Aronszajn tree, then $V^-(\mathbf T)$ is stationary;* 2. *If $\mathbf T$ is homogeneous,[^6] then $V^-(\mathbf T)=V(\mathbf T)$.* *Proof.* (1) Suppose not, and fix a club $D\subseteq\kappa$ disjoint from $V^{-}(\mathbf T)$. We shall construct a $<_T$-increasing sequence $\langle t_\alpha\mathrel{|}\allowbreak\alpha\in D\rangle$ in such a way that $t_\alpha\in T_\alpha$ for all $\alpha\in D$, contradicting the fact that $\mathbf T$ is $\kappa$-Aronszajn. We start by letting $t_{\min(D)}$ be an arbitrary element of $T_{\min(D)}$. 
Next, for every $\alpha\in D$ such that $t_\alpha$ has already been successfully defined, we set $\beta:=\min(D\setminus(\alpha+1))$, and use the normality of $\mathbf T$ to pick $t_\beta$ in $T_\beta$ extending $t_\alpha$. For every $\alpha\in\mathop{\mathrm{acc}}(D)$ such that $\langle t_\epsilon\mathrel{|}\allowbreak\epsilon\in D\cap\alpha\rangle$ has already been defined, the latter clearly induces an $\alpha$-branch, so the fact that $\alpha\notin V^-(\mathbf T)$ implies that there exists some $t_\alpha\in T_\alpha$ such that $t_\epsilon<_T t_\alpha$ for all $\epsilon\in D\cap\alpha$. This completes the description of the recursion. \(2\) Suppose that $\mathbf T$ is homogeneous. Let $\alpha\in V^-(\mathbf T)$, and fix a vanishing $\alpha$-branch $b$. Now, given a node $x$ of $\mathbf T$ of height less than $\alpha$, let $y$ be the unique element of $b$ to have the same height as $x$. Since $\mathbf T$ is homogeneous, there exists an automorphism $\pi$ of $\mathbf T$ sending $y$ to $x$, and it is clearly the case that $\pi[b]$ is a vanishing $\alpha$-branch through $x$. ◻ **Proposition 6**. *If $\square(\kappa)$ holds, then there exists a $\kappa$-Aronszajn tree $\mathbf T$ such that $V(\mathbf T)=E^\kappa_\omega$.* *Proof.* By [@MR2013395 Theorem 3.9], $\square(\kappa)$ yields a sequence of functions $\langle f_\beta:\beta\rightarrow\beta\mathrel{|}\allowbreak\beta\in\mathop{\mathrm{acc}}(\kappa)\rangle$ such that: - for every $(\beta,\gamma)\in[\mathop{\mathrm{acc}}(\kappa)]^2$, $\{\alpha<\beta\mathrel{|}\allowbreak f_\beta(\alpha)\neq f_\gamma(\alpha)\}$ is finite; - there is no cofinal $B\subseteq\mathop{\mathrm{acc}}(\kappa)$ such that $\{ f_\beta\mathrel{|}\allowbreak\beta\in B\}$ is linearly ordered by $\subseteq$. Set $T:=\{ f\in{}^\alpha\alpha\mathrel{|}\allowbreak\alpha\le\beta<\kappa,\ \{\varepsilon<\alpha\mathrel{|}\allowbreak f(\varepsilon)\neq f_\beta(\varepsilon)\}\text{ is finite}\}$. Then $\mathbf T=(T,{\subseteq})$ is a uniformly coherent $\kappa$-Aronszajn tree. 
By [@paper48 Remark 2.20], then, $V(\mathbf T)=E^\kappa_\omega$. ◻ **Definition 7**. For a $\kappa$-tree $\mathbf T=(T,<_T)$ and a subset $S\subseteq\kappa$, we say that $\mathbf T$ is *$S$-regressive* iff there exists a map $\rho:T\mathbin\upharpoonright S\rightarrow T$ satisfying the following: - for every $x\in T\mathbin\upharpoonright S$, $\rho(x)<_T x$; - for all $\alpha\in S$ and $x,y\in T_\alpha$, if $\rho(x)<_T y$ and $\rho(y)<_T x$, then $x=y$. *Remark 8*. If $\rho$ is as above, then every map $\varrho:T\mathbin\upharpoonright S\rightarrow T$ satisfying $\rho(x)\le_T \varrho(x)<_T x$ for all $x\in T\mathbin\upharpoonright S$ is as well a witness to $\mathbf T$ being $S$-regressive. The next lemma generalizes [@paper48 Lemmas 2.19 and 2.21]. **Lemma 9**. *Suppose that:* - *$\mathbf T$ is a normal, $\varsigma$-splitting $\kappa$-tree, for some fixed cardinal $\varsigma<\kappa$;* - *$S\subseteq E^\kappa_\chi$ is stationary for some fixed regular cardinal $\chi<\kappa$;* - *At least one of the following holds:* 1. *$\varsigma^\chi\ge\kappa$;* 2. *$\mathbf T$ is $S$-regressive and $\varsigma^{<\chi}<\varsigma^\chi$;* 3. *$\mathbf T$ is $S$-regressive, $\chi=\varsigma$ and there exists a weak $\chi$-Kurepa tree.[^7]* *Then, for every $\alpha\in S$, either $\alpha\in V(\mathbf T)$ or ($\mathop{\mathrm{cf}}(\alpha)>\omega$ and) $V^-(\mathbf T)\cap\alpha$ is stationary in $\alpha$. In particular, $V^-(\mathbf T)\cap E^\kappa_{\le\chi}$ is stationary.* *Proof.* Write $\mathbf T=(T,{<_T})$. Towards a contradiction, suppose that $\alpha\in S$ is a counterexample. As $\alpha\notin V(\mathbf T)$, we may fix $x\in T$ with $\mathop{\mathrm{ht}}(x)<\alpha$ such that every $\alpha$-branch $B$ with $x\in B$ has an upper bound in $\mathbf T$. 
Since either $\mathop{\mathrm{cf}}(\alpha)\le\omega$ or $V^-(\mathbf T)\cap\alpha$ is nonstationary in $\alpha$, we may fix a club $C$ in $\alpha$ of order-type $\chi$ such that $\min(C)=\mathop{\mathrm{ht}}(x)$ and such that $\mathop{\mathrm{acc}}(C)\cap V^-(\mathbf T)=\emptyset$. Let $\langle \alpha_i\mathrel{|}\allowbreak i<\chi\rangle$ denote the increasing enumeration of $C$. We shall recursively construct an array of nodes $\langle t_s\mathrel{|}\allowbreak s\in{}^{<\chi}\varsigma\rangle$ in such a way that $t_s\in T_{\alpha_{\mathop{\mathrm{dom}}(s)}}$. Set $t_\emptyset:=x$. For every $i<\chi$ and every $s:i\rightarrow\varsigma$ such that $t_s$ has already been defined, since $T$ is normal and $\varsigma$-splitting, we may find an injective sequence $\langle t_{s{}^\smallfrown\langle j\rangle}\mathrel{|}\allowbreak j<\varsigma\rangle$ of nodes of $T_{\alpha_{i+1}}$ all extending $t_s$. For every $i\in\mathop{\mathrm{acc}}(\chi)$ such that $\langle t_s\mathrel{|}\allowbreak s\in{}^{<i}\varsigma\rangle$ has already been defined, for every $s:i\rightarrow\varsigma$, since $\{ t_{s\mathbin\upharpoonright\iota}\mathrel{|}\allowbreak\iota<i\}$ induces an $\alpha_i$-branch, the fact that $\alpha_i\notin V^-(\mathbf T)$ implies that we may find $t_s\in T_{\alpha_i}$ that is a limit of that $\alpha_i$-branch. This completes the recursive construction of our array. For every $s\in{}^{\chi}\varsigma$, $B_s:=\{ t\in T\mathrel{|}\allowbreak\exists i<\chi\,(t<_T t_{s\mathbin\upharpoonright i})\}$ is an $\alpha$-branch containing $x$, and hence there must be some $b_s\in T_\alpha$ extending all elements of $B_s$. Our construction also ensures that $B_s\neq B_{s'}$ whenever $s\neq s'$. We now consider a few options: 1. Suppose that $\varsigma^\chi\ge\kappa$. Then $|T_\alpha|\ge|\{ b_s\mathrel{|}\allowbreak s\in{}^\chi\varsigma\}|=\varsigma^\chi\ge\kappa$. This is a contradiction. 2. 
Suppose that $\mathbf T$ is $S$-regressive, as witnessed by $\rho:T\mathbin\upharpoonright S\rightarrow T$. For every $s\in{}^{\chi}\varsigma$, $\rho(b_s)$ belongs to $B_s$, but by Remark [Remark 8](#rmk25){reference-type="ref" reference="rmk25"}, we may assume that $\rho(b_s)= t_{s\mathbin\upharpoonright i}$ for some $i<\chi$. - If $\varsigma^{<\chi}<\varsigma^\chi$, then we may now find $s\neq s'$ in ${}^\chi\varsigma$ such that $\rho(b_s)=\rho(b_{s'})$. Then, $\rho(b_{s'})<_T b_s$ and $\rho(b_s)<_T b_{s'}$, so $S$-regressivity yields $b_s=b_{s'}$, contradicting the fact that $b_s\neq b_{s'}$. - If $\chi=\varsigma$ and there exists a weak $\chi$-Kurepa tree, then this may be witnessed by a tree of the form $(K,{\subseteq})$ for some $K\subseteq{}^{<\chi}\varsigma$. Let $\langle s_\beta\mathrel{|}\allowbreak\beta<\chi^+\rangle$ be an injective enumeration of branches through $(K,{\subseteq})$. Since $|K|\le\chi$, there must exist $\beta\neq\beta'$ such that $\rho(b_{s_\beta})=\rho(b_{s_{\beta'}})$, which yields a contradiction as in the previous case.  ◻ **Corollary 10**. *If $\kappa$ is not a strong limit, then for every normal and splitting $\kappa$-tree $\mathbf T$, $V^-(\mathbf T)$ is stationary.* *Proof.* Suppose that $\kappa$ is not a strong limit. It is not hard to see that there exists some infinite cardinal $\varsigma<\kappa$ for which there exists a regular cardinal $\chi<\kappa$ such that $\varsigma^\chi\ge\kappa$. Now, given a normal and splitting $\kappa$-tree $\mathbf T=(T,<_T)$, as shown in the proof of [@paper48 Proposition 2.16], the club $D:=\{\alpha<\kappa\mathrel{|}\allowbreak\alpha=\varsigma^\alpha\}$ satisfies that $\mathbf T'=(T\mathbin\upharpoonright D,{<_T})$ is normal and $\varsigma$-splitting. By Lemma [Lemma 9](#cor53){reference-type="ref" reference="cor53"}, $V^-(\mathbf T')$ is stationary. As $D$ is a club in $\kappa$, this means that $V^-(\mathbf T)$ is stationary, as well. ◻ **Corollary 11**. 
*If $\kappa=\lambda^+$ is a successor cardinal and $\lambda^{\aleph_0}\ge\kappa$, then for every normal and splitting $\kappa$-tree $\mathbf T$, $E^\kappa_\omega\setminus V(\mathbf T)$ is nonstationary.* *Proof.* Suppose that $\kappa$ and $\lambda$ are as above. Now, given a normal and splitting $\kappa$-tree $\mathbf T=(T,<_T)$, the club $D:=\{\alpha<\kappa\mathrel{|}\allowbreak\alpha=\lambda^\alpha\}$ satisfies that $\mathbf T'=(T\mathbin\upharpoonright D,{<_T})$ is normal and $\lambda$-splitting. By Lemma [Lemma 9](#cor53){reference-type="ref" reference="cor53"}, $V(\mathbf T')\supseteq E^\kappa_\omega$. As $D$ is a club in $\kappa$, this means that $E^\kappa_\omega\setminus V(\mathbf T)$ is nonstationary. ◻ **Definition 12** ([@paper23]). A *streamlined $\kappa$-tree* is a subset $T\subseteq{}^{<\kappa}H_\kappa$ such that the following two conditions are satisfied: 1. $T$ is downward-closed, i.e., for every $t\in T$, $\{ t\mathbin\upharpoonright\alpha\mathrel{|}\allowbreak\alpha<\kappa\}\subseteq T$; 2. for every $\alpha<\kappa$, the set $T_\alpha:=T\cap{}^\alpha H_\kappa$ is nonempty and has size $<\kappa$. For every $\alpha\le\kappa$, we denote $\mathcal B(T\mathbin\upharpoonright\alpha):=\{f\in{}^\alpha H_\kappa\mathrel{|}\allowbreak\forall\beta<\alpha\,(f\mathbin\upharpoonright\beta\in T)\}$. Note that every streamlined tree is Hausdorff. *Convention 13*. We identify a streamlined tree $T$ with the poset $\mathbf T=(T,{\subseteq})$. **Definition 14**. For two elements $s,t$ of $H_\kappa$, we define $s*t$ to be the empty set, unless $s,t\in{}^{<\kappa}H_\kappa$ with $\mathop{\mathrm{dom}}(s)\le\mathop{\mathrm{dom}}(t)$, in which case $s*t:\mathop{\mathrm{dom}}(t)\rightarrow H_\kappa$ is defined by stipulating: $$(s*t)(\beta):=\begin{cases}s(\beta),&\text{if }\beta\in\mathop{\mathrm{dom}}(s);\\ t(\beta),&\text{otherwise.}\end{cases}$$ **Definition 15**. 
A streamlined $\kappa$-tree $T$ is *uniformly homogeneous* iff for all $\alpha<\beta<\kappa$, $s\in T_\alpha$ and $t\in T_\beta$, $s*t$ is in $T$. The next proposition should be clear, but we include a proof sketch. **Proposition 16**. *Suppose that $T$ is a streamlined $\kappa$-tree that is uniformly homogeneous. Then $T$ is indeed homogeneous.* *Proof.* Let $\alpha<\kappa$ and $s,s'\in T_\alpha$. Define $\pi:T\rightarrow T$ via: $$\pi(t):=\begin{cases} s'\mathbin\upharpoonright\mathop{\mathrm{dom}}(t),&\text{if }t\subseteq s;\\ s\mathbin\upharpoonright\mathop{\mathrm{dom}}(t),&\text{if }t\subseteq s';\\ s'*t,&\text{if }t\supseteq s;\\ s*t,&\text{if }t\supseteq s';\\ t,&\text{otherwise}. \end{cases}$$ Then $\pi$ is a well-defined automorphism of $T$, sending $s$ to $s'$. ◻ **Lemma 17**. *For a stationary $S\subseteq\kappa$, the following are equivalent:* 1. *There exist a club $D\subseteq\kappa$ and a thin ladder system $\langle A_\alpha\mathrel{|}\allowbreak\alpha\in S\cap D\rangle$ such that, for every $(\alpha,\beta)\in [S\cap D]^2$, $\sup(A_\alpha\cap A_\beta)<\alpha$;* 2. *There exist a club $D\subseteq\kappa$ and a thin ladder system $\langle A_\alpha\mathrel{|}\allowbreak\alpha\in S\cap D\rangle$ such that, for every $(\alpha,\beta)\in [S\cap D]^2$, $A_\alpha\neq A_\beta\cap\alpha$;* 3. *There exist a club $D\subseteq\kappa$ and a uniformly homogeneous streamlined $\kappa$-tree $T$ such that $V(T)\supseteq S\cap D$;* 4. *There exist a club $D\subseteq\kappa$ and a $\kappa$-tree $\mathbf T$ such that $V^-(\mathbf T)\supseteq S\cap D$.* *Proof.* $(1)\implies(2)$: This is immediate. $(2)\implies(3)$: Suppose that $D$ and $\langle A_\alpha\mathrel{|}\allowbreak\alpha\in S\cap D\rangle$ are as in (2). Let $\langle x_i\mathrel{|}\allowbreak i<\kappa\rangle$ be an injective enumeration of the set $\{ A_\alpha\cap\varepsilon\mathrel{|}\allowbreak\varepsilon<\alpha,\ \alpha\in S\cap D\}$. 
For each $\alpha\in S\cap D$, let $k_\alpha:\alpha\rightarrow\kappa$ be the unique function to satisfy for all $\varepsilon<\alpha$: $$A_\alpha\cap\varepsilon=x_{k_\alpha(\varepsilon)}.$$ Define first an auxiliary collection $K$ by letting $$K:=\{ k_\beta\mathbin\upharpoonright\alpha\mathrel{|}\allowbreak\alpha<\beta, \beta\in S\cap D\}.$$ Note that $\{ \mathop{\mathrm{dom}}(y)\mathrel{|}\allowbreak y\in K\}=\kappa$ and that $K$ is closed under taking initial segments. So $K$ is a streamlined $\kappa$-tree because otherwise there must exist some $\varepsilon<\kappa$ such that $\{ k_\beta\mathbin\upharpoonright\varepsilon\mathrel{|}\allowbreak\beta\in S\cap D\}$ has size $\kappa$, contradicting the fact that $\langle A_\beta\mathrel{|}\allowbreak\beta\in S\cap D\rangle$ is thin. We shall use $K$ to construct a uniformly homogeneous streamlined $\kappa$-tree $T$ by defining its levels $T_\alpha$ by recursion on $\alpha<\kappa$. Start by letting $T_0:=K_0$. Clearly, $T_0=\{\emptyset\}$, so that $|T_0|<\kappa$. Next, for every nonzero $\alpha<\kappa$ such that $T\mathbin\upharpoonright\alpha$ has already been defined and has size less than $\kappa$, let $$T_{\alpha}:=\{ x*y\mathrel{|}\allowbreak x\in T\mathbin\upharpoonright\alpha,\, y\in K_\alpha\}$$ and note that $|T_\alpha|<\kappa$. Altogether, $T$ is a streamlined $\kappa$-tree. **Claim 1**. *$T$ is uniformly homogeneous.* *Proof.* We prove that $x*y\in T$ for all $x,y\in T$ with $\mathop{\mathrm{dom}}(x)<\mathop{\mathrm{dom}}(y)$. The proof is by induction on $\mathop{\mathrm{dom}}(y)$. So suppose that $\alpha<\kappa$ is such that for all $x,y\in T$ with $\mathop{\mathrm{dom}}(x)<\mathop{\mathrm{dom}}(y)<\alpha$, it is the case that $x*y\in T$, and let $x,y\in T$ with $\mathop{\mathrm{dom}}(x)<\mathop{\mathrm{dom}}(y)=\alpha$. Recalling the definition of $T_\alpha$, pick $x'\in T\mathbin\upharpoonright\alpha$ and $y'\in K_\alpha$ such that $y=x'*y'$. 
$\blacktriangleright$ If $\mathop{\mathrm{dom}}(x)<\mathop{\mathrm{dom}}(x')$, then $x*y=x*(x'*y')=(x*x')*y'$. As $\mathop{\mathrm{dom}}(x)<\mathop{\mathrm{dom}}(x')<\alpha$, the induction hypothesis implies that $x*x'\in T\mathbin\upharpoonright\alpha$, and then the definition of $T_\alpha$ implies that $(x*x')*y'$ is in $T$. $\blacktriangleright$ If $\mathop{\mathrm{dom}}(x)\ge\mathop{\mathrm{dom}}(x')$, then $x*y=x*(x'*y')=x*y'$, and then the definition of $T_\alpha$ implies that $x*y'$ is in $T$. ◻ By the preceding claim together with Proposition [Proposition 5](#prop22){reference-type="ref" reference="prop22"}, it now suffices to prove that $V^{-}(T)\supseteq S\cap D\cap\mathop{\mathrm{acc}}(\kappa)$. To this end, let $\alpha\in S\cap D\cap\mathop{\mathrm{acc}}(\kappa)$. Clearly, $b:=\{ k_\alpha\mathbin\upharpoonright\varepsilon\mathrel{|}\allowbreak\varepsilon<\alpha\}$ is an $\alpha$-branch in $K$ and hence in $T$. If $b$ is not vanishing in $T$, then we may find $x\in T\mathbin\upharpoonright\alpha$ and $y\in K_\alpha$ such that $x*y=k_\alpha$. Recalling the definition of $K_\alpha$, we may pick $\beta\in S\cap D$ above $\alpha$ such that $y=k_\beta\mathbin\upharpoonright\alpha$. As $\alpha<\beta$, it is the case that $A_\alpha\neq A_{\beta}\cap\alpha$, so we may pick $\delta\in A_\alpha\Delta(A_{\beta}\cap\alpha)$. Then $\varepsilon:=\max\{\delta,\mathop{\mathrm{dom}}(x)\}+1$ is smaller than $\alpha$ and satisfies $k_\alpha(\varepsilon)\neq k_{\beta}(\varepsilon)$, contradicting the fact that $k_\alpha(\varepsilon)=(x*y)(\varepsilon)=y(\varepsilon)=k_\beta(\varepsilon)$. $(3)\implies(4)$: This is immediate. $(4)\implies(1)$ Every $\kappa$-tree is order-isomorphic to an ordinal-based tree (see, e.g., [@paper48 Proposition 2.16]), so we may assume that we are given a tree $\mathbf T$ of the form $(\kappa,<_T)$ and a club $D\subseteq\kappa$ such that $V^-(\mathbf T)\supseteq S\cap D$. 
By possibly shrinking $D$, we may also assume that $D\subseteq\mathop{\mathrm{acc}}\{\beta<\kappa\mathrel{|}\allowbreak T\mathbin\upharpoonright\beta=\beta\}$. It follows that for every $\alpha\in D$, every $\alpha$-branch is a cofinal subset of $\alpha$. For every $\alpha\in S\cap D$, let $A_\alpha$ be a vanishing $\alpha$-branch. As $\mathbf T$ is a $\kappa$-tree, the ladder system $\langle A_\alpha\mathrel{|}\allowbreak\alpha\in S\cap D\rangle$ is thin. In addition, for every $(\alpha,\beta)\in [S\cap D]^2$, if it were the case that $\sup(A_\beta\cap A_\alpha)=\alpha$, then $\min(A_\beta\setminus A_\alpha)$ is a node extending all elements of $A_\alpha$, contradicting the fact that $A_\alpha$ is vanishing. So, $\sup(A_\beta\cap A_\alpha)<\alpha$. ◻ When $S$ is a club, the preceding is related to the subtle tree property: **Definition 18** (Weiß, [@Chris10]). $\kappa$ has the *subtle tree property* ($\kappa$-$\textsf{\textup{STP}}$ for short) iff for every thin list $\langle A_\alpha \mathrel{|}\allowbreak\alpha\in D\rangle$ over a club $D \subseteq \kappa$, there exists a pair $(\alpha,\beta)\in [D]^2$ such that $A_\alpha=A_\beta \cap \alpha$. **Corollary 19**. *All of the following are equivalent:* - *$\kappa$-$\textsf{\textup{STP}}$ fails;* - *there is a $\kappa$-tree $\mathbf T$ with $V^-(\mathbf T)=\mathop{\mathrm{acc}}(\kappa)$;* - *there is an homogeneous $\kappa$-tree $\mathbf T$ with $V(\mathbf T)=\mathop{\mathrm{acc}}(\kappa)$;* - *there is a uniformly homogeneous streamlined $\kappa$-tree $T$ such that $V(T)$ covers a club in $\kappa$.* *Proof.* By Lemmas [Lemma 17](#lemma311){reference-type="ref" reference="lemma311"} and [Lemma 4](#lemma33){reference-type="ref" reference="lemma33"}. ◻ *Remark 20*. By [@Chris10 Theorem 3.2.5], $\textsf{\textup{PFA}}$ implies that $\aleph_2$-$\textsf{\textup{STP}}$ holds. 
By [@HS20 Theorem 1.2], if $\lambda$ is the singular limit of supercompact cardinals then $\lambda^+$-$\textsf{\textup{STP}}$ fails.[^8] **Corollary 21**. *Assuming the consistency of a subtle cardinal, it is consistent that the conjunction of the following holds true:* - *there exists an $\aleph_2$-Souslin tree;* - *for every normal and splitting $\aleph_2$-tree $\mathbf T$, $E^{\aleph_2}_{\aleph_1}\setminus V(\mathbf T)$ is stationary.* *Proof.* Fix a subtle cardinal $\kappa$ that is not weakly compact in $\mathsf L$, and work in the forcing extension by Mitchell's forcing of length $\kappa$. By [@Chris10 Theorem 2.3.1], $\aleph_2$-$\textsf{\textup{STP}}$ holds, and hence, for every $\aleph_2$-tree $\mathbf T$, $V(\mathbf T)$ cannot cover a club. In addition, this is a model in which $2^{\aleph_0}=\aleph_2$ and hence Corollary [Corollary 11](#cor27){reference-type="ref" reference="cor27"} implies that $E^{\aleph_2}_{\aleph_0}\setminus V(\mathbf T)$ is nonstationary for every normal and splitting $\aleph_2$-tree $\mathbf T$. Therefore, $E^{\aleph_2}_{\aleph_1}\setminus V(\mathbf T)$ is stationary for every normal and splitting $\aleph_2$-tree $\mathbf T$. In addition, this is a model in which $\mathfrak b=\aleph_1$, $2^{\aleph_1}=\aleph_2$, and (since $\kappa$ is not weakly compact in $\mathsf{L}$) $\square(\aleph_2)$ holds. So, by [@paper51 Theorem A], there exists an $\aleph_2$-Souslin tree. ◻ **Corollary 22**. *Suppose that $S$ is a stationary subset of a strongly inaccessible $\kappa$. Then there exists a $\kappa$-tree $\mathbf T$ such that $V(\mathbf T)\cap S$ is stationary.* *Proof.* By Lemma [Lemma 17](#lemma311){reference-type="ref" reference="lemma311"}, it suffices to find a stationary $S^-\subseteq S$ that carries a thin almost disjoint $C$-sequence. 
We consider two cases: $\blacktriangleright$ If $S\cap E^\kappa_\omega$ is stationary, then set $S^-:=S\cap E^\kappa_\omega$, and let $\langle C_\alpha\mathrel{|}\allowbreak\alpha\in S^-\rangle$ be some $\omega$-bounded $C$-sequence over $S^-$. $\blacktriangleright$ Otherwise, let $S^-:=S\setminus(E^\kappa_\omega\cup\mathop{\mathrm{Tr}}(S))$. Then $S^-$ is stationary, and for every $\alpha\in S^-$, we may pick a club $C_\alpha$ in $\alpha$ that is disjoint from $S$. Evidently, $\sup(C_{\alpha'}\cap C_\alpha)<\alpha$ for every $(\alpha,\alpha')\in[S^-]^2$. ◻ **Lemma 23**. *If $\theta\in\mathop{\mathrm{Reg}}(\kappa)$ is such that $\lambda^{<\theta}<\kappa$ for all $\lambda<\kappa$, then there exists an almost disjoint thin $C$-sequence over $E^\kappa_\theta$.* *Proof.* Just take a $\theta$-bounded $C$-sequence over $E^\kappa_\theta$. ◻ Building on the work of Todorčević [@MR2355670] and Krueger [@MR3078820], we obtain the following pump-up theorem for special $\kappa$-Aronszajn trees. **Theorem 24**. 
*The following are equivalent:* (i) *There exists a special $\kappa$-Aronszajn tree;* (ii) *There exists a streamlined $\kappa$-Aronszajn tree $K$, a club $D\subseteq\mathop{\mathrm{acc}}(\kappa)$ and a function $f:K\mathbin\upharpoonright D\rightarrow\kappa$ such that all of the following hold:* - *$V^-(K)\supseteq D$;* - *$f(x)<\mathop{\mathrm{dom}}(x)$ for all $x\in K\mathbin\upharpoonright D$;* - *$f(x)\neq f(y)$ for every pair $x\subsetneq y$ of nodes from $K\mathbin\upharpoonright D$;* - *for all $x,y\in K$ and $\varepsilon\in\mathop{\mathrm{dom}}(x)\cap\mathop{\mathrm{dom}}(y)$, if $x(\varepsilon)=y(\varepsilon)$, then $x\mathbin\upharpoonright\varepsilon=y\mathbin\upharpoonright\varepsilon$.* (iii) *There exists a streamlined uniformly homogeneous special $\kappa$-Aronszajn tree $T$ for which $V(T)$ covers a club in $\kappa$;* (iv) *There exists an homogeneous special $\kappa$-Aronszajn tree $\mathbf T$ with $V(\mathbf T)=\mathop{\mathrm{acc}}(\kappa)$.* *Proof.* $(i)\implies(ii)$: Assuming that there exists a special $\kappa$-Aronszajn tree, by [@MR3078820 Lemma 1.2 and Theorem 2.5], we may fix a $C$-sequence $\vec C=\langle C_\beta\mathrel{|}\allowbreak\beta<\kappa\rangle$ and a club $C\subseteq\mathop{\mathrm{acc}}(\kappa)$ satisfying the following: 1. for every $\beta\in C$, $\min(C_\beta)>\mathop{\mathrm{otp}}(C_\beta)$; 2. for every $\beta\in\mathop{\mathrm{acc}}(\kappa)\setminus C$, $\min(C_\beta)>\sup(C\cap\beta)$; 3. for every $\epsilon<\kappa$, $|\{ C_\beta\cap\epsilon\mathrel{|}\allowbreak\beta<\kappa\}|<\kappa$. Consider the following additional requirement: 4. $\min(C_\beta)=\mathop{\mathrm{otp}}(C_\beta)+1$ for every $\beta\in C$. **Claim 2**. *We may moreover assume that Clause (4) holds.* *Proof.* For every $\beta\in C$, let $C_\beta^\bullet:=C_\beta\cup\{\mathop{\mathrm{otp}}(C_\beta)+1\}$, and for every $\beta\in\kappa\setminus C$, let $C_\beta^\bullet:=C_\beta$. 
We just need to verify that $|\{ C^\bullet_\beta\cap\epsilon\mathrel{|}\allowbreak\beta<\kappa\}|<\kappa$ for every $\epsilon<\kappa$. Towards a contradiction, suppose that $\epsilon$ is a counterexample. From $(3)$, it follows that we may fix $B\in[C]^\kappa$ on which the map $\beta\mapsto C_\beta^\bullet\cap\epsilon$ is injective. We may moreover assume that $\beta\mapsto C_\beta\cap\epsilon$ is constant over $B$. By possibly removing one element of $B$, we may assume that $C_\beta^\bullet\cap\epsilon$ is nonempty for all $\beta\in B$. So, we may moreover assume the existence of $\tau<\epsilon$ such that $\min(C^\bullet_\beta)=\tau$ for every $\beta\in B$. But then $C_\beta^\bullet\cap\epsilon=(C_\beta\cap\epsilon)\cup\{\tau\}$ for every $\beta\in B$, so that the map $\beta\mapsto C_\beta^\bullet\cap\epsilon$ is constant over $B$, contradicting its injectivity. ◻ Now, let $\rho_0$ be the characteristic function from [@MR2355670 §6] obtained by walking along $\vec C$ satisfying (1)--(4), and consider the following streamlined $\kappa$-tree $$T(\rho_0):=\{ \rho_{0\beta}\mathbin\upharpoonright\alpha\mathrel{|}\allowbreak\alpha\le\beta<\kappa\}.$$ Using (1)--(3), the proof of [@MR3078820 Theorem 4.4] provides a club $D\subseteq C$ and a function $g:T(\rho_0)\mathbin\upharpoonright D\rightarrow\kappa$ satisfying the following two: - $g(t)<\mathop{\mathrm{dom}}(t)$ for all $t\in T(\rho_0)\mathbin\upharpoonright D$; - for every pair $s\subsetneq t$ of nodes from $T(\rho_0)\mathbin\upharpoonright D$, $g(s)\neq g(t)$. Next, consider the following subfamily of $T(\rho_0)$: $$T:=\{ \rho_{0\beta}\mathbin\upharpoonright\alpha\mathrel{|}\allowbreak\alpha<\beta<\kappa\}.$$ Clearly, $T$ is downward-closed and $\{\mathop{\mathrm{dom}}(y)\mathrel{|}\allowbreak y\in T\}=\kappa$, so that $T$ is a streamlined $\kappa$-Aronszajn subtree of $T(\rho_0)$. **Claim 3**. *$T\cap\{\rho_{0\alpha}\mathrel{|}\allowbreak\alpha\in C\}=\emptyset$. 
In particular, $V^-(T)\supseteq C\supseteq D$.* *Proof.* The "in particular" part will follow from the fact that $\{ \rho_{0\alpha}\mathbin\upharpoonright\epsilon\mathrel{|}\allowbreak\epsilon<\alpha\}$ is an $\alpha$-branch of $T$ for every $\alpha<\kappa$. Thus, let $\alpha\in C$; we shall prove that $\rho_{0\alpha}\notin T$. Suppose not, and pick some $\beta>\alpha$ such that $\rho_{0\alpha}=\rho_{0\beta}\mathbin\upharpoonright\alpha$. Recall that for every $\gamma<\kappa$, $$C_\gamma=\{ \xi<\gamma\mathrel{|}\allowbreak\rho_{0\gamma}(\xi)\text{ is a sequence of length }1\}.$$ In particular, $\min(C_\alpha)=\min(C_\beta)$. As $\sup(C\cap\beta)\ge\alpha>\min(C_\alpha)$, it follows from Clause (2) that $\beta\in C$. So, by Clause (4), $\mathop{\mathrm{otp}}(C_\alpha)=\mathop{\mathrm{otp}}(C_\beta)$. It follows that we may fix some $\delta\in C_\alpha\setminus C_\beta$ (indeed, $C_\alpha\subseteq C_\beta$ would imply $\mathop{\mathrm{otp}}(C_\beta)>\mathop{\mathrm{otp}}(C_\alpha)$, since $C_\beta$ is unbounded in $\beta>\alpha$). But then $\rho_{0\alpha}(\delta)$ is a sequence of length $1$, whereas $\rho_{0\beta}(\delta)$ is a longer sequence. This is a contradiction. ◻ For every $t\in T\mathbin\upharpoonright\mathop{\mathrm{acc}}(\kappa)$, define a function $k_t:\mathop{\mathrm{dom}}(t)\rightarrow T$ via $$k_t(\varepsilon):=t\mathbin\upharpoonright\varepsilon.$$ Let $K$ be the following downward-closed subfamily of ${}^{<\kappa}H_\kappa$: $$K:=\{ k_t\mathbin\upharpoonright\alpha\mathrel{|}\allowbreak\alpha\le\mathop{\mathrm{dom}}(t), t\in T\mathbin\upharpoonright\mathop{\mathrm{acc}}(\kappa)\}.$$ Evidently, for all $x,y\in K$ and $\varepsilon\in\mathop{\mathrm{dom}}(x)\cap\mathop{\mathrm{dom}}(y)$, if $x(\varepsilon)=y(\varepsilon)$, then $x\mathbin\upharpoonright\varepsilon=y\mathbin\upharpoonright\varepsilon$. In addition, $t\mapsto k_t$ constitutes an isomorphism between $(T\mathbin\upharpoonright\mathop{\mathrm{acc}}(\kappa),{\subseteq})$ and $(K\mathbin\upharpoonright\mathop{\mathrm{acc}}(\kappa),{\subseteq})$, and hence $K$ is a streamlined $\kappa$-Aronszajn tree with $V^-(K)\supseteq D$.
The fact that the above map is an isomorphism also implies that a function $f:K\mathbin\upharpoonright D\rightarrow\kappa$ defined via $f(k_t):=g(t)$ satisfies that $f(x)<\mathop{\mathrm{dom}}(x)$ for all $x\in K\mathbin\upharpoonright D$, and that $f(x)\neq f(y)$ for every pair $x\subsetneq y$ of nodes from $K\mathbin\upharpoonright D$. $(ii)\implies(iii)$: Suppose that $K$ and $f:K\mathbin\upharpoonright D\rightarrow\kappa$ are as in Clause (ii). By possibly shrinking $D$, we may assume that for all $\beta\in D$ and $\alpha<\beta$, it is the case that $\omega\cdot\alpha<\beta$. The operation of Definition [Definition 14](#regdef){reference-type="ref" reference="regdef"} is associative, so we may define a family $T$ to be the collection of all elements of the form $x_0*\cdots*x_n$ where[^9] - $n<\omega$, - $x_i\in K$ for all $i\le n$, and - $\mathop{\mathrm{dom}}(x_i)<\mathop{\mathrm{dom}}(x_{i+1})$ for all $i<n$. It is clear that $t\mathbin\upharpoonright\alpha\in T$ for all $t\in T$ and $\alpha<\kappa$. Thus, recalling the proof of Claim [Claim 1](#c2171){reference-type="ref" reference="c2171"}, to establish that $T$ is a uniformly homogeneous streamlined $\kappa$-tree, it suffices to prove the following claim. **Claim 4**. *$T_0=\{\emptyset\}$ and $T_\alpha=\{ x*y\mathrel{|}\allowbreak x\in T\mathbin\upharpoonright\alpha, y\in K_\alpha\}$ for every nonzero $\alpha<\kappa$.* *Proof.* Suppose that $\alpha$ is a nonzero ordinal such that $T_\epsilon=\{ x*y\mathrel{|}\allowbreak x\in T\mathbin\upharpoonright\epsilon, y\in K_\epsilon\}$ for every $\epsilon<\alpha$. Let $t\in T_\alpha$. Pick a sequence $(x_0,\ldots,x_n)$ satisfying (a)--(c) for which $t=x_0*\cdots*x_n$. $\blacktriangleright$ If $n=0$, then $t=\emptyset*x_0$ with $\emptyset\in T\mathbin\upharpoonright\alpha$ and $x_0\in K_\alpha$. $\blacktriangleright$ If $n=m+1$ for some $m<\omega$, then $t=x*y$ with $x:=x_0*\cdots*x_m$ in $T\mathbin\upharpoonright\alpha$ and $y:=x_{m+1}$ in $K_\alpha$.
◻ For each node $t\in T$, we define $n(t)$ and $x(t)$ by first letting $n(t)$ denote the least $n$ for which there exists a sequence $(x_0,\ldots,x_n)$ satisfying (a)--(c) for which $t=x_0*\cdots*x_n$, and then letting $x(t)$ be such an $x_n$. Note that $\mathop{\mathrm{dom}}(x(t))=\mathop{\mathrm{dom}}(t)$, and that $K=\{ t\in T\mathrel{|}\allowbreak n(t)=0\}$. Define a function $g:T\mathbin\upharpoonright D\rightarrow\kappa$ via $$g(t):=(\omega\cdot f(x(t)))+n(t).$$ **Claim 5**. 1. *$g(t)<\mathop{\mathrm{dom}}(t)$ for all $t\in T\mathbin\upharpoonright D$;* 2. *Let $s\subsetneq t$ be a pair of nodes from $T\mathbin\upharpoonright D$. Then $g(s)\neq g(t)$.* *Proof.* (1) Since $\omega\cdot\alpha<\beta$ for all $\beta\in D$ and $\alpha<\beta$. \(2\) Suppose not. Let $\tau<\kappa$ and $n<\omega$ be such that $f(x(s))=\tau=f(x(t))$ and $n(s)=n=n(t)$. By the choice of $f$ it follows that $x(s)\nsubseteq x(t)$, so since $s\subsetneq t$, it must be the case that $n=m+1$ for some $m<\omega$. Fix a sequence $(x_0,\ldots,x_m,x_{m+1})$ of nodes from $K$ such that $s=x_0*\cdots * x_m* x_{m+1}$ and $x_{m+1}=x(s)$. Likewise, fix a sequence $(y_0,\ldots,y_m,y_{m+1})$ of nodes from $K$ such that $t=y_0*\cdots * y_m* y_{m+1}$ and $y_{m+1}=x(t)$. $\blacktriangleright$ As $x_{m+1}\nsubseteq y_{m+1}$, we may fix $\delta\in\mathop{\mathrm{dom}}(x_{m+1})$ such that $x_{m+1}(\delta)\neq y_{m+1}(\delta)$. $\blacktriangleright$ As $s\subseteq t= y_0*\cdots * y_m* y_{m+1}$ and $n(s)>m$, it must be the case that $\mathop{\mathrm{dom}}(y_m)<\mathop{\mathrm{dom}}(s)$. Altogether, $\varepsilon:=\max\{\delta+1,\mathop{\mathrm{dom}}(x_m),\mathop{\mathrm{dom}}(y_m)\}$ is an ordinal less than $\mathop{\mathrm{dom}}(s)$, satisfying $x_{m+1}(\varepsilon)=s(\varepsilon)=t(\varepsilon)=y_{m+1}(\varepsilon)$, but then $x_{m+1}\mathbin\upharpoonright\varepsilon=y_{m+1}\mathbin\upharpoonright\varepsilon$, contradicting the fact that $\delta<\varepsilon$. 
◻ It is easy to see that the two features of $g$ together imply that $T$ admits no $\kappa$-branch. The beginning of the proof of [@MR3078820 Theorem 4.4] shows furthermore that $T$ must be a special $\kappa$-Aronszajn tree. **Claim 6**. *$V(T)\supseteq D$.* *Proof.* Let $\alpha\in D$. As $D\subseteq V^-(K)$, we may fix a function $t:\alpha\rightarrow H_\kappa$ such that $\{ t\mathbin\upharpoonright\epsilon\mathrel{|}\allowbreak\epsilon<\alpha\}\subseteq K$, but $t\notin K$. As $K\subseteq T$, it thus suffices to prove that $t\notin T$. Towards a contradiction, suppose that $t\in T$. In particular, $n(t)>0$. Fix $m<\omega$ and a sequence $(x_0,\ldots,x_m,x_{m+1})$ of nodes from $K$ such that $t=x_0*\cdots * x_m*x_{m+1}$. As $x_{m+1}\neq t$, we may fix some $\delta<\alpha$ such that $t(\delta)\neq x_{m+1}(\delta)$. Pick $\varepsilon<\alpha$ above $\max\{\delta,\mathop{\mathrm{dom}}(x_m)\}$. Then $t(\varepsilon)=x_{m+1}(\varepsilon)$. But $t\mathbin\upharpoonright(\varepsilon+1)$ and $x_{m+1}\mathbin\upharpoonright(\varepsilon+1)$ are two nodes in $K$ that agree on $\varepsilon$ and hence $t\mathbin\upharpoonright(\varepsilon+1)=x_{m+1}\mathbin\upharpoonright(\varepsilon+1)$, contradicting the fact that $\delta<\varepsilon$. ◻ The implication $(iii)\implies(iv)$ follows from the proof of Lemma [Lemma 4](#lemma33){reference-type="ref" reference="lemma33"} and the implication $(iv)\implies(i)$ is trivial. ◻ **Definition 25** (Products). For a sequence of $\kappa$-trees $\langle \mathbf{T}^i \mathrel{|}\allowbreak i<\tau \rangle$ with $\mathbf T^i = (T^i, {<_{T^i}})$ for each $i<\tau$, the product $\bigotimes_{i<\tau} \mathbf{T}^i$ is defined to be the tree $\mathbf T=({T}, {<_{{T}}})$, where: - $T=\bigcup\{\prod_{i<\tau}T_\alpha^i\mathrel{|}\allowbreak\alpha<\kappa\}$; - $\vec{s} <_{{T}} \vec{t}$ iff $\vec s(i) <_{T^i} \vec t(i)$ for every $i<\tau$. **Proposition 26**. 
*For a sequence $\langle \mathbf{T}^i \mathrel{|}\allowbreak i<\tau \rangle$ of normal $\kappa$-trees, if $\lambda^\tau<\kappa$ for all $\lambda<\kappa$, then:* 1. *$\bigotimes_{i<\tau} \mathbf{T}^i$ is a normal $\kappa$-tree;* 2. *$V(\bigotimes_{i<\tau} \mathbf{T}^i)=\bigcup\{V(\mathbf T^i)\mathrel{|}\allowbreak i<\tau\}$;* 3. *$V^-(\bigotimes_{i<\tau} \mathbf{T}^i)=\bigcup\{V^-(\mathbf T^i)\mathrel{|}\allowbreak i<\tau\}$.* *Proof.* Left to the reader. ◻ **Definition 27** (Sums). The *disjoint sum* $\sum\mathcal P$ of a family of posets $\mathcal P$ is the poset $(A,<_A)$ defined as follows: - $A:=\{ ((P,<_P),x)\mathrel{|}\allowbreak(P,<_P)\in\mathcal P, x\in P\}$; - $((P,<_P),x)<_A ((Q,<_Q),y)$ iff $(P,<_P)=(Q,<_Q)$ and $x<_Py$. In the special case of a doubleton, we write $\mathbf T+\mathbf S$ instead of $\sum\{\mathbf T,\mathbf S\}$. **Proposition 28**. *Suppose that $\mathcal T$ is a family of less than $\kappa$ many $\kappa$-trees. Then:* 1. *$\sum\mathcal T$ is a $\kappa$-tree;* 2. *$V(\sum\mathcal T)=\bigcap\{V(\mathbf T)\mathrel{|}\allowbreak\mathbf T\in \mathcal T\}$;* 3. *$V^-(\sum\mathcal T)=\bigcup \{V^-(\mathbf T)\mathrel{|}\allowbreak\mathbf T\in \mathcal T\}$.* *Proof.* Left to the reader. ◻ It follows from Propositions [Proposition 26](#products){reference-type="ref" reference="products"} and [Proposition 28](#sums){reference-type="ref" reference="sums"} that $\mathop{\mathrm{Vspec}}(\kappa)$ is closed under finite unions and intersections. **Corollary 29**. *Suppose $\chi\in\mathop{\mathrm{Reg}}(\kappa)$ is such that $\lambda^{<\chi}<\kappa$ for all $\lambda<\kappa$. Then there exists a $\kappa$-tree $\mathbf T$ with $V^-(\mathbf T)\supseteq \mathop{\mathrm{acc}}(\kappa)\cap E^\kappa_{\leq \chi }$.* *Proof.* Denote $\Theta:=\mathop{\mathrm{Reg}}(\chi+1)$.
By Lemmas [Lemma 23](#l22){reference-type="ref" reference="l22"} and [Lemma 17](#lemma311){reference-type="ref" reference="lemma311"}, for every $\theta\in\Theta$, we may pick a $\kappa$-tree $\mathbf T^\theta$ such that $V^-(\mathbf T^\theta)$ covers $E^\kappa_{\theta}$ modulo a club. In fact, the proof of $(2)\implies(3)$ of Lemma [Lemma 17](#lemma311){reference-type="ref" reference="lemma311"} shows that we may secure $V^-(\mathbf T^\theta)\supseteq E^\kappa_{\theta}$. Let $\mathbf T:=\sum\{\mathbf T^\theta\mathrel{|}\allowbreak\theta\in\Theta\}$ be the disjoint sum of these trees. By Proposition [Proposition 28](#sums){reference-type="ref" reference="sums"}, $V^-(\mathbf T)=\bigcup_{\theta\in\Theta}V^-(\mathbf T^\theta)\supseteq\bigcup_{\theta\in\Theta}E^\kappa_\theta= \mathop{\mathrm{acc}}(\kappa)\cap E^\kappa_{\leq \chi }$. ◻ *Remark 30*. In Section [5](#sect6){reference-type="ref" reference="sect6"}, we provide sufficient conditions for getting an homogeneous $\kappa$-Souslin tree $\mathbf T$ with $V(\mathbf T)=\bigcup_{\chi\in x}E^\kappa_\chi$ for a prescribed finite and nonempty $x\subseteq\mathop{\mathrm{Reg}}(\kappa)$. **Question 31**. Is it consistent that for some regular uncountable cardinal $\kappa$, there are $\kappa$-Souslin trees, but $V(\mathbf T)$ is nonstationary for every $\kappa$-Souslin tree $\mathbf T$? By Proposition [Proposition 5](#prop22){reference-type="ref" reference="prop22"}, Corollary [Corollary 10](#cor26){reference-type="ref" reference="cor26"} and [@paper20 Lemma 2.4], in such a model there cannot be an homogeneous $\kappa$-Souslin tree. A model with an $\aleph_1$-Souslin tree but no homogeneous one was constructed by Abraham and Shelah in [@Sh:403]. # Consulting another tree {#sec4} The main result of this section is Theorem [Theorem 38](#thm41){reference-type="ref" reference="thm41"} below. A sample corollary of it reads as follows. **Corollary 32**. *Suppose that $\kappa=\lambda^+$ for an infinite cardinal $\lambda$.* 1. 
*If $\square_\lambda\mathrel{+}\diamondsuit(\kappa)$ holds, then there exists a $\kappa$-Souslin tree $\mathbf T$ with $V(\mathbf T)=\mathop{\mathrm{acc}}(\kappa)$;* 2. *If $\square(\kappa)$ holds and $\aleph_0<\lambda^{<\lambda}<\lambda^+=2^\lambda$, then there exists a $\kappa$-Souslin tree $\mathbf T$ with $V(\mathbf T)=\mathop{\mathrm{acc}}(\kappa)$;* 3. *If $\mathop{\mathrm{P}}_\lambda(\kappa,\kappa,{\sqsubseteq},1)$ holds, then there exists a $\kappa$-Souslin tree $\mathbf T$ such that $V(\mathbf T)\supseteq E^\kappa_{>\omega}$.* *Proof.* (1) $\diamondsuit(\aleph_1)$ implies the existence of a normal and splitting $\aleph_1$-Souslin tree $\mathbf T$, and by Corollary [Corollary 11](#cor27){reference-type="ref" reference="cor27"}, $V(\mathbf T)=\mathop{\mathrm{acc}}(\aleph_1)$. For $\lambda\ge\aleph_1$, by [@paper22 Corollary 3.9], $\square_\lambda+\mathop{\mathrm{CH}}_\lambda$ is equivalent to $\mathop{\mathrm{P}}_\lambda(\kappa,2,{\sqsubseteq},1)$. In addition, by a theorem of Jensen, $\square_\lambda$ gives rise to a special $\lambda^+$-Aronszajn tree. Thus, we infer from Theorem [Theorem 24](#lemma39){reference-type="ref" reference="lemma39"} the existence of a $\kappa$-tree $\mathbf K$ for which $V^-(\mathbf K)=\mathop{\mathrm{acc}}(\kappa)$. It thus follows from Theorem [Theorem 38](#thm41){reference-type="ref" reference="thm41"}(1) below that there exists a $\kappa$-Souslin tree $\mathbf T$ for which $V(\mathbf T)$ is a club in $\kappa$. Finally, appeal to Lemma [Lemma 4](#lemma33){reference-type="ref" reference="lemma33"}. \(2\) By [@paper24 Corollary 4.4], the hypothesis implies that $\mathop{\mathrm{P}}^-(\kappa,2,{\sqsubseteq},1)$ holds. In addition, by a theorem of Specker, $\lambda=\lambda^{<\lambda}$ implies the existence of a special $\lambda^+$-Aronszajn tree. Now, continue as in the proof of Clause (1). \(3\) Similar to the proof of Clause (1), using Theorem [Theorem 38](#thm41){reference-type="ref" reference="thm41"}(2) instead.
◻ *Remark 33*. Sufficient conditions for $\mathop{\mathrm{P}}_\lambda(\kappa,\kappa,{\sqsubseteq},1)$ to hold are given by Corollaries 3.15 and 3.24 of [@paper32]. Before turning to the proofs of the main results of this section, we provide a few preliminaries. **Definition 34** (Proxy principle, [@paper22; @paper23]). Suppose that $\mu,\theta\le\kappa$ are cardinals, $\xi\le\kappa$ is an ordinal, $\mathcal R$ is a binary relation over $[\kappa]^{<\kappa}$ and $\mathcal S$ is a collection of stationary subsets of $\kappa$. The principle $\mathop{\mathrm{P}}_\xi^-(\kappa,\mu,\mathcal{R},\theta,\mathcal{S})$ asserts the existence of a $\xi$-bounded $\mathcal C$-sequence $\langle \mathcal{C}_\alpha\mathrel{|}\allowbreak\alpha<\kappa\rangle$ such that: - for every $\alpha<\kappa$, $|\mathcal C_\alpha|<\mu$; - for all $\alpha<\kappa$, $C\in \mathcal{C}_\alpha$, and $\bar{\alpha}\in\mathop{\mathrm{acc}}(C)$, there exists some $D\in \mathcal{C}_{\bar{\alpha}}$ such that $D\mathrel{\mathcal{R}}C$; - for every sequence $\langle B_i\mathrel{|}\allowbreak i<\theta\rangle$ of cofinal subsets of $\kappa$, and every $S\in\mathcal{S}$, there are stationarily many $\alpha\in S$ such that for all $C\in\mathcal C_\alpha$ and $i<\min\{\alpha,\theta\}$, $\sup(\mathop{\mathrm{nacc}}(C)\cap B_i)=\alpha$. *Convention 35*. We write $\mathop{\mathrm{P}}_\xi(\kappa, \mu, \mathcal R, \theta, \mathcal S)$ to assert that $\mathop{\mathrm{P}}^-_\xi(\kappa, \mu, \mathcal R, \theta, \mathcal S)$ and $\diamondsuit(\kappa)$ both hold. *Convention 36*. If we omit $\xi$, then we mean $\xi:=\kappa$. If we omit $\mathcal S$, then we mean $\mathcal S:=\{\kappa\}$. In the case $\mu=2$, we identify $\langle \mathcal{C}_\alpha\mathrel{|}\allowbreak\alpha<\kappa\rangle$ with the unique element $\langle C_\alpha \mathrel{|}\allowbreak\alpha < \kappa\rangle$ of $\prod_{\alpha<\kappa}\mathcal{C}_\alpha$. **Fact 37** ([@paper22 Lemma 2.2]). *The following are equivalent:* 1. 
*$\diamondsuit(\kappa)$, i.e., there is a sequence $\langle f_\beta\mathrel{|}\allowbreak\beta<\kappa\rangle$ such that for every function $f:\kappa\rightarrow\kappa$, the set $\{ \beta<\kappa\mathrel{|}\allowbreak f\mathbin\upharpoonright\beta=f_\beta\}$ is stationary in $\kappa$.* 2. *$\diamondsuit^-(H_\kappa)$, i.e., there is a sequence $\langle \Omega_\beta \mathrel{|}\allowbreak\beta < \kappa \rangle$ such that for all $p\in H_{\kappa^{+}}$ and $\Omega \subseteq H_\kappa$, there exists an elementary submodel $\mathcal M\prec H_{\kappa^{+}}$ such that:* - *$p\in\mathcal M$;* - *$\mathcal M\cap\kappa\in\kappa$;* - *$\mathcal M\cap \Omega=\Omega_{\mathcal M\cap\kappa}$.* 3. *$\diamondsuit(H_\kappa)$, i.e., there are a partition $\langle R_i \mathrel{|}\allowbreak i < \kappa \rangle$ of $\kappa$ and a sequence $\langle \Omega_\beta \mathrel{|}\allowbreak\beta < \kappa \rangle$ such that for all $p\in H_{\kappa^{+}}$, $\Omega \subseteq H_\kappa$, and $i<\kappa$, there exists an elementary submodel $\mathcal M\prec H_{\kappa^{+}}$ such that:* - *$p\in\mathcal M$;* - *$\mathcal M\cap\kappa\in R_i$;* - *$\mathcal M\cap \Omega=\Omega_{\mathcal M\cap\kappa}$.* **Theorem 38**. *Suppose that $K$ is some streamlined $\kappa$-tree.* 1. *If $\mathop{\mathrm{P}}(\kappa,2,{\sqsubseteq^*},1)$ holds, then there exists a normal and splitting streamlined $\kappa$-Souslin tree $T$ such that $V(T)\supseteq V^-(K)$;* 2. *If $\mathop{\mathrm{P}}(\kappa,\kappa,{\sqsubseteq},1)$ holds, then there exists a normal and splitting streamlined $\kappa$-Souslin tree $T$ such that $V(T)\supseteq V^-(K)\cap E^\kappa_{>\omega}$.* *Proof.* Fix a well-ordering $\vartriangleleft$ of $H_\kappa$, and a sequence $\langle \Omega_\beta\mathrel{|}\allowbreak\beta<\kappa\rangle$ witnessing $\diamondsuit^-(H_\kappa)$. 
If $\mathop{\mathrm{P}}^-(\kappa,\kappa,{\sqsubseteq},1)$ holds, then let $\vec {\mathcal C}=\langle \mathcal C_\alpha\mathrel{|}\allowbreak\alpha<\kappa\rangle$ be any $\mathop{\mathrm{P}}^-(\kappa,\kappa,{\sqsubseteq},1)$-sequence. If $\mathop{\mathrm{P}}^-(\kappa,2,{\sqsubseteq^*},1)$ holds, then, by [@paper23 Theorem 4.39], we may let $\vec {\mathcal C}=\langle \mathcal C_\alpha\mathrel{|}\allowbreak\alpha<\kappa\rangle$ be a $\mathop{\mathrm{P}}^-(\kappa,\kappa,{\sqsubseteq},1)$-sequence with the added feature that for every $\alpha\in\mathop{\mathrm{acc}}(\kappa)$ and all $C,D\in\mathcal C_\alpha$, $\sup(C\mathbin{\bigtriangleup}D)<\alpha$. Following the proof of [@paper26 Proposition 2.2], we shall recursively construct a sequence $\langle T_\alpha\mathrel{|}\allowbreak\alpha<\kappa\rangle$ such that $T:=\bigcup_{\alpha<\kappa}T_\alpha$ will constitute the tree of interest whose $\alpha^{\text{th}}$-level is $T_\alpha$. We start by letting $T_0:=\{\emptyset\}$, and once $T_\alpha$ has already been defined, we let $$T_{\alpha+1}:=\{t{}^\smallfrown\langle 0\rangle,t{}^\smallfrown\langle 1\rangle,t{}^\smallfrown\langle \eta\rangle\mathrel{|}\allowbreak t\in T_\alpha, \eta\in K_\alpha\}.$$ Next, suppose that $\alpha\in \mathop{\mathrm{acc}}(\kappa)$ is such that $T\mathbin\upharpoonright\alpha$ has already been defined. For all $C\in\mathcal C_\alpha$ and $x\in T\mathbin\upharpoonright C$, we shall identify a set of potential nodes $\{\mathbf b_x^{C,\eta}\mathrel{|}\allowbreak\eta\in\mathcal B(K\mathbin\upharpoonright\alpha)\}$ and then let $$\tag{$\star$}\label{promise0}T_\alpha:=\{\mathbf b_x^{C,\eta}\mathrel{|}\allowbreak C\in\mathcal C_\alpha,\eta\in K_\alpha, x\in T\mathbin\upharpoonright C\}.$$ To this end, fix $C\in\mathcal C_\alpha$, $x\in T\mathbin\upharpoonright C$ and $\eta\in\mathcal B(K\mathbin\upharpoonright\alpha)$.
The node $\mathbf b_x^{C,\eta}$ will be obtained as the limit $\bigcup\mathop{\mathrm{Im}}(b_x^{C,\eta})$ of a sequence $b_x^{C,\eta}\in\prod_{\beta\in C\setminus\mathop{\mathrm{dom}}(x)}T_\beta$, as follows: - Let $b_x^{C,\eta}(\mathop{\mathrm{dom}}(x)):=x$. - For every $\beta\in \mathop{\mathrm{nacc}}(C)$ above $\mathop{\mathrm{dom}}(x)$ such that $b_x^{C,\eta}(\beta^-)$ has already been defined for $\beta^-:=\sup(C\cap \beta)$, let $$Q^{C, \eta}_x(\beta) := \{ t\in T_\beta\mathrel{|}\allowbreak\exists s\in \Omega_{\beta}[ (s\cup(b_x^{C,\eta}(\beta^-){}^\smallfrown\langle \eta\mathbin\upharpoonright\beta^-\rangle))\subseteq t]\}.$$ Now, consider the two possibilities: - If $Q^{C,\eta}_x(\beta) \neq \emptyset$, then let $b^{C,\eta}_x(\beta)$ be its $\lhd$-least element; - Otherwise, let $b^{C,\eta}_x(\beta)$ be the $\lhd$-least element of $T_\beta$ that extends $b_x^{C,\eta}(\beta^-){}^\smallfrown\langle \eta\mathbin\upharpoonright\beta^-\rangle$. Such an element must exist, as the level $T_\beta$ was constructed so as to preserve normality. - For every $\beta\in \mathop{\mathrm{acc}}(C\setminus \mathop{\mathrm{dom}}(x))$ such that $b_x^{C,\eta}\mathbin\upharpoonright\beta$ has already been defined, let $b_x^{C,\eta}(\beta):=\bigcup \mathop{\mathrm{Im}}(b_x^{C,\eta}\mathbin\upharpoonright\beta)$. For the last case, we need to argue that $b_x^{C,\eta}(\beta)$ is indeed an element of $T_\beta$. As $\vec{\mathcal C}$ is $\sqsubseteq$-coherent, the set $\bar C:=C\cap \beta$ is in $\mathcal C_\beta$. Also, $K$ is a tree and hence $\bar\eta:=\eta\mathbin\upharpoonright\beta$ is in $K_\beta$. So, since $\mathbf b_x^{\bar C,\eta\mathbin\upharpoonright\beta}\in T_\beta$, to show that $b_x^{C,\eta}(\beta)\in T_\beta$, it suffices to prove the following. **Claim 7**. 
*$b_x^{C,\eta}(\beta)=\mathbf b_x^{{\bar C},\bar\eta}$.* *Proof.* Clearly, $\mathop{\mathrm{dom}}(b_x^{C,\eta}\mathbin\upharpoonright\beta)=C\cap \beta\setminus \mathop{\mathrm{dom}}(x)={\bar C}\setminus\mathop{\mathrm{dom}}(x)=\mathop{\mathrm{dom}}(b_x^{{\bar C},\bar\eta})$. So, we are left with showing that $b_x^{C,\eta}(\delta)=b_x^{{\bar C},\bar\eta}(\delta)$ for all $\delta\in {\bar C}\setminus\mathop{\mathrm{dom}}(x)$. The proof is by induction on $\delta\in {\bar C}\setminus\mathop{\mathrm{dom}}(x)$: - For $\delta=\mathop{\mathrm{dom}}(x)$, we have that $b_x^{C,\eta}(\delta)=x=b_x^{{\bar C},\bar\eta}(\delta)$. - Given $\delta\in \mathop{\mathrm{nacc}}({\bar C})$ above $\mathop{\mathrm{dom}}(x)$ such that $b_x^{C,\eta}(\delta^-)=b_x^{{\bar C},\bar\eta}(\delta^-)$ for $\delta^-:=\sup({\bar C}\cap\delta)$, we argue as follows. Since $$b_x^{C,\eta}(\delta^-){}^\smallfrown\langle \eta\mathbin\upharpoonright\delta^-\rangle=b_x^{{\bar C},\bar \eta}(\delta^-){}^\smallfrown\langle \bar\eta\mathbin\upharpoonright\delta^-\rangle,$$ the definitions of $b_x^{C,\eta}(\delta)$ and $b_x^{{\bar C},\bar\eta}(\delta)$ coincide. - If $\delta\in\mathop{\mathrm{acc}}({\bar C}\setminus\mathop{\mathrm{dom}}(x))$, then both values are obtained as the limit of the same sequence, and are hence identical.  ◻ This completes the definition of $b_x^{C,\eta}$. For all $\eta\in\mathcal B(K\mathbin\upharpoonright\alpha)$, let $\mathbf b_x^{C,\eta}:=\bigcup\mathop{\mathrm{Im}}(b_x^{C,\eta})$, and then we define $T_\alpha$ as promised in [\[promise0\]](#promise0){reference-type="eqref" reference="promise0"}. Clearly, $T:=\bigcup_{\alpha<\kappa}T_\alpha$ is a normal and splitting $\kappa$-tree. The verification of Souslin-ness is standard (see [@paper26 Claims 2.2.2 and 2.2.3]). **Claim 8**. *Suppose that $\alpha\in V^-(K)$ is such that $\sup(C\cap D)=\alpha$ for all $C,D\in\mathcal C_\alpha$.
Then $\alpha\in V(T)$.* *Proof.* As $\alpha\in V^-(K)$, we may fix $\eta\in\mathcal B(K\mathbin\upharpoonright\alpha)\setminus K_\alpha$. Let $x\in T\mathbin\upharpoonright\alpha$, and we shall find a vanishing $\alpha$-branch through $x$ in $T$. First fix $C\in\mathcal C_\alpha$. Using normality and by possibly extending $x$, we may assume that $x\in T\mathbin\upharpoonright C$. We have already established that $\{ \mathbf b_x^{C,\eta}\mathbin\upharpoonright\epsilon\mathrel{|}\allowbreak\epsilon<\alpha\}$ is an $\alpha$-branch through $x$. Towards a contradiction, suppose that it is not vanishing, so that $\bigcup\mathop{\mathrm{Im}}(b_x^{C,\eta})$ is in $T_\alpha$. It follows from [\[promise0\]](#promise0){reference-type="eqref" reference="promise0"} that we may pick $D\in\mathcal C_\alpha$, $y\in T\mathbin\upharpoonright D$ and $\xi\in K_\alpha$ such that $\bigcup\mathop{\mathrm{Im}}(b_x^{C,\eta})=\mathbf b_y^{D,\xi}$. Fix $\beta\in C\cap D$ large enough such that $\beta>\max\{\mathop{\mathrm{dom}}(x),\mathop{\mathrm{dom}}(y)\}$ and $\eta\mathbin\upharpoonright\beta\neq\xi\mathbin\upharpoonright\beta$. In particular, $\beta\in \mathop{\mathrm{dom}}(b_x^{C,\eta})\cap \mathop{\mathrm{dom}}(b_y^{D,\xi})$. Consider $\beta^C:=\min(C\setminus(\beta+1))$, the successor of $\beta$ in $C$, and $\beta^D:=\min(D\setminus(\beta+1))$, the successor of $\beta$ in $D$. Then the definition of the successor stage of $b_x^{C,\eta}$ ensures that $b_x^{C,\eta}(\beta^C)$ extends $b_x^{C,\eta}(\beta){}^\smallfrown\langle \eta\mathbin\upharpoonright\beta\rangle$, so that $b_x^{C,\eta}(\beta^C)(\beta)=\eta\mathbin\upharpoonright\beta$. Likewise, $b_y^{D,\xi}(\beta^D)(\beta)=\xi\mathbin\upharpoonright\beta$. From $\mathbf b_x^{C,\eta}=\mathbf b_y^{D,\xi}$, we infer that $b_x^{C,\eta}(\beta^C)(\beta)=\mathbf b_x^{C,\eta}(\beta)=\mathbf b_y^{D,\xi}(\beta)=b_y^{D,\xi}(\beta^D)(\beta)$, contradicting the fact that $\eta\mathbin\upharpoonright\beta\neq\xi\mathbin\upharpoonright\beta$.
◻ This completes the proof. ◻ We now arrive at Theorem [Theorem 3](#thmc){reference-type="ref" reference="thmc"}: **Corollary 39**. *Suppose that $\mathop{\mathrm{P}}(\kappa,2,{\sqsubseteq^*},1)$ holds. Then:* 1. *For every $\chi\in\mathop{\mathrm{Reg}}(\kappa)$ such that $\lambda^{<\chi}<\kappa$ for all $\lambda<\kappa$, and every $\kappa$-tree $\mathbf K$, there exists a $\kappa$-Souslin tree $\mathbf T$ such that $(E^\kappa_{\le\chi}\cup V^-(\mathbf K))\setminus V(\mathbf T)$ is nonstationary;* 2. *There exists a $\kappa$-Souslin tree $\mathbf T$ such that $V(\mathbf T)$ is stationary.* *Proof.* (1) Suppose $\chi$ and $\mathbf K$ are as above. By Corollary [Corollary 29](#cor224){reference-type="ref" reference="cor224"}, we may fix a $\kappa$-tree $\mathbf H$ with $V^-(\mathbf H)\supseteq \mathop{\mathrm{acc}}(\kappa)\cap E^\kappa_{\leq \chi }$. By Proposition [Proposition 28](#sums){reference-type="ref" reference="sums"}, $\mathbf K+\mathbf H$ is a $\kappa$-tree with $V^-(\mathbf K+\mathbf H)=V^-(\mathbf K)\cup V^-(\mathbf H)$. By [@paper23 Lemma 2.5], we may fix a streamlined $\kappa$-tree $K$ that is club-isomorphic to $\mathbf K+\mathbf H$. Now, appeal to Theorem [Theorem 38](#thm41){reference-type="ref" reference="thm41"}(1) with $K$. \(2\) Appeal to Clause (1) with $\chi=\omega$. ◻ **Definition 40** (Jensen-Kunen, [@jensen1969some]). A cardinal $\kappa$ is *subtle* iff for every list $\langle A_\alpha\mathrel{|}\allowbreak\alpha\in D\rangle$ over a club $D\subseteq\kappa$, there is a pair $(\alpha,\beta)\in[D]^2$ such that $A_\alpha=A_\beta\cap\alpha$. We now arrive at Theorem [Theorem 2](#thmb){reference-type="ref" reference="thmb"}: **Corollary 41**. *We have $(1)\implies(2)\implies(3)\implies(4)$:* 1. *there exists a $\kappa$-Souslin tree $\mathbf T$ such that $V(\mathbf T)=\mathop{\mathrm{acc}}(\kappa)$;* 2. *there exists a $\kappa$-tree $\mathbf T$ such that $V(\mathbf T)=\mathop{\mathrm{acc}}(\kappa)$;* 3.
*there exists a $\kappa$-tree $\mathbf T$ such that $V^-(\mathbf T)$ contains a club in $\kappa$;* 4. *$\kappa$ is not subtle.* *In addition, in $\mathsf{L}$, for $\kappa$ not weakly compact, $(4)\implies(1)$.* *Proof.* $(1)\implies(2)\implies(3)$: This is immediate. $(3)\implies(4)$: By Lemma [Lemma 17](#lemma311){reference-type="ref" reference="lemma311"}. Next, work in $\mathsf L$ and suppose that $\kappa$ is a regular uncountable cardinal that is not subtle and not weakly compact. If $\kappa$ is a successor cardinal, then by Corollary [Corollary 32](#cor31){reference-type="ref" reference="cor31"}(1), Clause (1) holds, so assume that $\kappa$ is inaccessible. By $\textsf{\textup{GCH}}$, $\kappa$ is moreover strongly inaccessible, and then Lemma [Lemma 17](#lemma311){reference-type="ref" reference="lemma311"} yields that Clause (3) holds. Since we work in $\mathsf{L}$ and $\kappa$ is not weakly compact, by [@paper22 Theorem 3.12], $\mathop{\mathrm{P}}(\kappa,2,{\sqsubseteq},1)$ holds. So by Corollary [Corollary 39](#cor310){reference-type="ref" reference="cor310"}(1), Clause (3) yields a $\kappa$-Souslin tree $\mathbf T$ such that $V(\mathbf T)$ covers a club in $\kappa$. Now, appeal to Lemma [Lemma 4](#lemma33){reference-type="ref" reference="lemma33"}. ◻ **Corollary 42**. *In $\mathsf{L}$, if $\kappa$ is not weakly compact, then for every stationary $S\subseteq\kappa$, there exists a $\kappa$-Souslin tree $\mathbf T$ for which $V(\mathbf T)\cap S$ is stationary.* *Proof.* By Corollary [Corollary 32](#cor31){reference-type="ref" reference="cor31"}(1), we may assume that $\kappa$ is (strongly) inaccessible. By Corollary [Corollary 22](#cor213){reference-type="ref" reference="cor213"}, we may fix a $\kappa$-tree $\mathbf K$ such that $V^-(\mathbf K)\cap S$ is stationary. By [@paper22 Theorem 3.12], $\mathop{\mathrm{P}}(\kappa,2,{\sqsubseteq},1)$ holds. Finally, appeal to Corollary [Corollary 39](#cor310){reference-type="ref" reference="cor310"}(1). 
◻ # Realizing a nonreflecting stationary set {#sect5} In this section, we provide conditions concerning a set $S\subseteq\kappa$ sufficient to ensure the existence of a $\kappa$-Souslin tree $\mathbf T$ with $V(\mathbf T)\supseteq S$ and possibly $V(\mathbf T)=S$. As a corollary, we obtain Theorem [Theorem 4](#thmd){reference-type="ref" reference="thmd"}: **Corollary 43**. *If $\diamondsuit(S)$ holds for some nonreflecting stationary subset $S$ of a strongly inaccessible cardinal $\kappa$, then there is an almost disjoint family $\mathcal S$ of $2^\kappa$ many stationary subsets of $S$ such that, for every $S'\in\mathcal S$, there is a $\kappa$-Souslin tree $\mathbf T$ with $V^-(\mathbf T)=V(\mathbf T)=S'$.* *Proof.* By Corollary [Corollary 51](#cor55){reference-type="ref" reference="cor55"} below, it suffices to prove that there exists a family $\mathcal S$ of $2^\kappa$ many stationary subsets of $S$ such that: - for every $S'\in\mathcal S$, $\diamondsuit(S')$ holds. - $|S'\cap S''|<\kappa$ for all $S'\neq S''$ from $\mathcal S$. Now, as $\diamondsuit(S)$ holds, we may easily fix a sequence $\langle (A_\beta,B_\beta)\mathrel{|}\allowbreak\beta\in S\rangle$ such that, for all $A,B\in\mathcal P(\kappa)$, the following set is stationary $$G_A(B):=\{\beta\in S\mathrel{|}\allowbreak A\cap\beta=A_\beta\ \&\ B\cap\beta=B_\beta\}.$$ Set $\mathcal S:=\{ S_A\mathrel{|}\allowbreak A\in\mathcal P(\kappa)\}$, where $S_A:=\{\beta\in S\mathrel{|}\allowbreak A\cap\beta=A_\beta\}$. Then $\mathcal S$ is an almost disjoint family of $2^\kappa$ many stationary subsets of $S$, and for every $S'\in\mathcal S$, $\diamondsuit(S')$ holds, as witnessed by $\langle B_\beta\mathrel{|}\allowbreak\beta\in S'\rangle$. ◻ **Definition 44** ([@paper22]). A streamlined tree $T\subseteq{}^{<\kappa}H_\kappa$ is *prolific* iff for all $\alpha<\kappa$ and $t\in T_\alpha$, $\{ t{}^\smallfrown\langle i\rangle\mathrel{|}\allowbreak i<\max\{\omega,\alpha\}\}\subseteq T$. 
A prolific tree is clearly splitting. **Theorem 45**. *Suppose that $\mathop{\mathrm{P}}(\kappa,\kappa,\mathrel{^{S}{\sqsubseteq}},1)$ holds for a given $S\subseteq\mathop{\mathrm{acc}}(\kappa)$. Then there exists a normal, prolific, streamlined $\kappa$-Souslin tree $T$ such that $V(T)\supseteq S$.* *Proof.* Fix a well-ordering $\vartriangleleft$ of $H_\kappa$, a sequence $\langle \Omega_\beta\mathrel{|}\allowbreak\beta<\kappa\rangle$ witnessing $\diamondsuit^-(H_\kappa)$, and a sequence $\vec{\mathcal C}=\langle \mathcal C_\alpha\mathrel{|}\allowbreak\alpha<\kappa\rangle$ witnessing $\mathop{\mathrm{P}}^-(\kappa,\kappa,{\mathrel{^{S}{\sqsubseteq}}},2)$. By $\mathrel{^{S}{\sqsubseteq}}$-coherence, we may assume that for every $\alpha\in S$, $\mathcal C_\alpha$ is a singleton. Following the proof of [@paper26 Proposition 2.2], we shall recursively construct a sequence $\langle T_\alpha\mathrel{|}\allowbreak\alpha<\kappa\rangle$ such that $T:=\bigcup_{\alpha<\kappa}T_\alpha$ will constitute a normal, prolific, streamlined $\kappa$-Souslin tree whose $\alpha^{\text{th}}$-level is $T_\alpha$. Let $T_0:=\{\emptyset\}$, and for all $\alpha<\kappa$ let $$T_{\alpha+1}:=\{t{}^\smallfrown\langle i\rangle\mathrel{|}\allowbreak t\in T_\alpha, i<\max\{\omega,\alpha\}\}.$$ Next, suppose that $\alpha\in\mathop{\mathrm{acc}}(\kappa)$ is such that $T\mathbin\upharpoonright\alpha$ has already been defined. Constructing the level $T_\alpha$ involves deciding which branches through $T\mathbin\upharpoonright\alpha$ will have their limits placed into our tree.
For all $C\in\mathcal C_\alpha$ and $x\in T\mathbin\upharpoonright C$, we first define two $\alpha$-branches $\mathbf b_x^C$ and $\mathbf d_x^C$ such that $\{\mathbf b_x^C\mathrel{|}\allowbreak x\in T\mathbin\upharpoonright C\}\cap\{\mathbf d_x^C\mathrel{|}\allowbreak x\in T\mathbin\upharpoonright C\}=\emptyset$, and then we shall let: $$\tag{$\star$}\label{promise2}T_\alpha:=\begin{cases}\{\mathbf{b}_x^{C}\phantom{,\mathbf{d}_x^C}\mathrel{|}\allowbreak C\in\mathcal C_\alpha, x\in T\mathbin\upharpoonright C\},&\text{if }\alpha\in S;\\ \{\mathbf{b}_x^C,\mathbf{d}_x^C\mathrel{|}\allowbreak C\in\mathcal C_\alpha, x\in T\mathbin\upharpoonright C\},&\text{otherwise}. \end{cases}$$ For every $\alpha\in S$, since $|\mathcal C_\alpha|=1$, this ensures that $\alpha\in V(T)$. Let $C\in\mathcal C_\alpha$ and $x\in T\mathbin\upharpoonright C$. We start by defining $\mathbf b_x^C$. It will be the limit $\bigcup\mathop{\mathrm{Im}}(b_x^C)$ of a sequence $b_x^C\in\prod_{\beta\in C\setminus\mathop{\mathrm{dom}}(x)}T_\beta$ obtained by recursion, as follows. Set $b_x^C(\mathop{\mathrm{dom}}(x)):=x$. At a successor step, for every $\beta\in C\setminus(\mathop{\mathrm{dom}}(x)+1)$ such that $b_x^C(\beta^-)$ has already been defined with $\beta^-:=\sup(C\cap\beta)$, we consult the following set: $$Q^{C, \beta}_{x,0} := \{ t\in T_\beta\mathrel{|}\allowbreak\exists s\in \Omega_{\beta}[ (s\cup (b^{C}_x(\beta^-){}^\smallfrown\langle0\rangle))\subseteq t]\}.$$ Now, consider the two possibilities: - If $Q^{C,\beta}_{x,0} \neq \emptyset$, then let $b^C_x(\beta)$ be its $\lhd$-least element; - Otherwise, let $b^C_x(\beta)$ be the $\lhd$-least element of $T_\beta$ that extends $b^C_x(\beta^-){}^\smallfrown\langle0\rangle$. Such an element must exist, as the tree constructed so far is prolific and normal.
Finally, for every $\beta\in\mathop{\mathrm{acc}}(C\setminus\mathop{\mathrm{dom}}(x))$ such that $b_x^C\mathbin\upharpoonright\beta$ has already been defined, we let $b_x^C(\beta)=\bigcup\mathop{\mathrm{Im}}(b_x^C\mathbin\upharpoonright\beta)$. By [\[promise2\]](#promise2){reference-type="eqref" reference="promise2"}, $\mathrel{^{S}{\sqsubseteq}}$-coherence and the same argument as in [@paper26 Claim 2.2.1], $b_x^C(\beta)$ is indeed in $T_\beta$. Next, we define $\mathbf d_x^C$ as the limit of a sequence $d_x^C\in\prod_{\beta\in C\setminus\mathop{\mathrm{dom}}(x)}T_\beta$ obtained by recursion, as follows. Set $d_x^C(\mathop{\mathrm{dom}}(x)):=x$. At a successor step, for every $\beta\in C\setminus(\mathop{\mathrm{dom}}(x)+1)$ such that $d_x^C(\beta^-)$ has already been defined with $\beta^-:=\sup(C\cap\beta)$, we consult the following set: $$Q^{C, \beta}_{x,1} := \{ t\in T_\beta\mathrel{|}\allowbreak\exists s\in \Omega_{\beta}[ (s\cup (d^{C}_x(\beta^-){}^\smallfrown\langle1\rangle))\subseteq t]\}.$$ Now, consider the two possibilities: - If $Q^{C,\beta}_{x,1} \neq \emptyset$, then let $d^C_x(\beta)$ be its $\lhd$-least element; - Otherwise, let $d^C_x(\beta)$ be the $\lhd$-least element of $T_\beta\setminus\{b_x^C(\beta)\}$ that extends $d^C_x(\beta^-){}^\smallfrown\langle1\rangle$. Such an element must exist, as the tree constructed so far is prolific and normal. Finally, for every $\beta\in\mathop{\mathrm{acc}}(C\setminus\mathop{\mathrm{dom}}(x))$ such that $d_x^C\mathbin\upharpoonright\beta$ has already been defined, we let $d_x^C(\beta)=\bigcup\mathop{\mathrm{Im}}(d_x^C\mathbin\upharpoonright\beta)$. By [\[promise2\]](#promise2){reference-type="eqref" reference="promise2"}, $\mathrel{^{S}{\sqsubseteq}}$-coherence and the same argument as in [@paper26 Claim 2.2.1], $d_x^C(\beta)$ is indeed in $T_\beta$. **Claim 9**.
*For every $C\in\mathcal C_\alpha$, $\{\mathbf b_x^C\mathrel{|}\allowbreak x\in T\mathbin\upharpoonright C\}\cap \{\mathbf d_x^C\mathrel{|}\allowbreak x\in T\mathbin\upharpoonright C\}=\emptyset$.* *Proof.* Let $C\in\mathcal C_\alpha$ and $x,y\in T\mathbin\upharpoonright C$. Fix a large enough $\beta\in\mathop{\mathrm{nacc}}(C)$ for which $\beta^-:=\sup(C\cap\beta)$ is bigger than $\max\{\mathop{\mathrm{dom}}(x),\mathop{\mathrm{dom}}(y)\}$. By the definitions of $b_x^C$ and $d_y^C$, - $b_x^C(\beta)(\beta^-)=0$, and - $d_y^C(\beta)(\beta^-)=1$. In particular, $\mathbf b_x^C\neq \mathbf d_y^C$. ◻ This finishes the construction of $T_\alpha$. Finally, by [@paper26 Claims 2.2.2 and 2.2.3], $T:=\bigcup_{\alpha<\kappa}T_\alpha$ is a $\kappa$-Souslin tree. ◻ **Theorem 46**. *Suppose that $\chi$ is a cardinal such that $\lambda^\chi<\kappa$ for all $\lambda<\kappa$, and that $\mathop{\mathrm{P}}(\kappa,\kappa,\mathrel{^{S}{\sqsubseteq}},1,\{S\cup E^\kappa_{>\chi}\})$ holds for a given $S\subseteq\mathop{\mathrm{acc}}(\kappa)\cap E^\kappa_{\le\chi}$. Then there exists a normal, prolific, streamlined $\kappa$-Souslin tree $T$ such that $V^-(T)\cap E^\kappa_{\le\chi}=V(T)\cap E^\kappa_{\le\chi}=S$.* *Proof.* The proof is almost identical to that of Theorem [Theorem 45](#thm52){reference-type="ref" reference="thm52"}, where the only change is that now, the definition of $T_\alpha$ for a limit $\alpha$ splits into three: $$T_\alpha:=\begin{cases}\{\mathbf{b}_x^{C}\phantom{,\mathbf{d}_x^C}\mathrel{|}\allowbreak C\in\mathcal C_\alpha, x\in T\mathbin\upharpoonright C\},&\text{if }\alpha\in S;\\ \{\mathbf{b}_x^C,\mathbf{d}_x^C\mathrel{|}\allowbreak C\in\mathcal C_\alpha, x\in T\mathbin\upharpoonright C\},&\text{if }\alpha\in E^\kappa_{>\chi};\\ \mathcal B(T\mathbin\upharpoonright\alpha),&\text{otherwise}.\\ \end{cases}$$ The details are left to the reader. ◻ *Remark 47*.
Sufficient conditions for the existence of $S\subseteq\kappa$ for which $\mathop{\mathrm{P}}(\kappa,\kappa,\mathrel{^{S}{\sqsubseteq}},1,\{S\})$ holds are given by [@paper23 Corollary 4.22] and [@paper23 Theorem 4.28]. In particular, for every (nonreflecting) stationary $E\subseteq\kappa$, if $\square(E)$ and $\diamondsuit(E)$ both hold, then there exists a stationary $S\subseteq E$ such that $\mathop{\mathrm{P}}(\kappa,\kappa,\mathrel{^{S}{\sqsubseteq}},1,\{S\})$ holds. **Corollary 48**. *Suppose that $2^{2^{\aleph_0}}=\aleph_2$, and that $S$ is a nonreflecting stationary subset of $E^{\aleph_2}_{\aleph_0}$. Then there exists a normal prolific streamlined $\aleph_2$-Souslin tree $T$ such that $V(T)=S\cup E^{\aleph_2}_{\aleph_1}$.* *Proof.* By [@paper32 Lemma 3.2], the hypotheses imply that $\mathop{\mathrm{P}}(\aleph_2,\aleph_2,{\mathrel{^{S}{\sqsubseteq}}},\allowbreak1,\{S\})$ holds. Appealing to Theorem [Theorem 46](#thm53){reference-type="ref" reference="thm53"} with $(\kappa,\chi):=(\aleph_2,\aleph_0)$ provides us with a normal, prolific, streamlined $\aleph_2$-Souslin tree $T$ such that $V^-(T)\cap E^{\aleph_2}_{\aleph_0}=V(T)\cap E^{\aleph_2}_{\aleph_0}=S$. As $V^-(T)\cap E^{\aleph_2}_{\aleph_0}$ is a nonreflecting stationary set, Lemma [Lemma 9](#cor53){reference-type="ref" reference="cor53"}(1) (using $(\varsigma,\chi,\kappa):=(2,\aleph_1,\aleph_2)$) implies that $V(T)\cap E^{\aleph_2}_{\aleph_1}=E^{\aleph_2}_{\aleph_1}$. ◻ **Corollary 49**. *Suppose $\mathop{\mathrm{CH}}$ and $\framebox[3.0mm][l]{$\diamondsuit$}\hspace{0.5mm}{}_{\aleph_1}$ both hold.
For every stationary $S\subseteq E^{\aleph_2}_{\aleph_0}$, there exists an $\aleph_2$-Souslin tree $\mathbf T$ such that $V(\mathbf T)$ is a stationary subset of $S$.* *Proof.* $\framebox[3.0mm][l]{$\diamondsuit$}\hspace{0.5mm}{}_{\aleph_1}$ implies $\square_{\aleph_1}$ which implies that for every stationary $S\subseteq E^{\aleph_2}_{\aleph_0}$ there exists a stationary $R\subseteq S$ that is nonreflecting. It thus follows from Corollary [Corollary 48](#cor46){reference-type="ref" reference="cor46"} that for every stationary $S\subseteq E^{\aleph_2}_{\aleph_0}$ there exist a stationary $R\subseteq S$ and an $\aleph_2$-Souslin tree $\mathbf T$ such that $V(\mathbf T)=R\cup E^{\aleph_2}_{\aleph_1}$. In addition, $\framebox[3.0mm][l]{$\diamondsuit$}\hspace{0.5mm}{}_{\aleph_1}$ yields a uniformly coherent $\aleph_2$-Souslin tree $\mathbf S$ (see [@MR830071 Theorem 7] or [@paper22 Proposition 2.5 and Theorem 3.6]). By [@paper48 Remark 2.20], then, $V(\mathbf S)=E^{\aleph_2}_{\aleph_0}$. Clearly, $\mathbf T+\mathbf S$ is an $\aleph_2$-Souslin tree, and, by Proposition [Proposition 28](#sums){reference-type="ref" reference="sums"}(2), $V(\mathbf T+\mathbf S)=R$. ◻ **Theorem 50**. *Suppose that $\kappa$ is a strongly inaccessible cardinal, and that $\mathop{\mathrm{P}}(\kappa,\kappa,\mathrel{^{S}{\sqsubseteq}},1,\{S\})$ holds for a given $S\subseteq\mathop{\mathrm{acc}}(\kappa)$. Then there exists a normal, prolific, streamlined $\kappa$-Souslin tree $T$ such that $V^-(T)=V(T)=S$.* *Proof.* The proof is almost identical to that of Theorem [Theorem 45](#thm52){reference-type="ref" reference="thm52"}, where the only change is that now, the definition of $T_\alpha$ for a limit $\alpha$ does not explicitly mention the $\mathbf{d}_x^C$'s. 
Instead, it is: $$T_\alpha:=\begin{cases}\{\mathbf{b}_x^{C}\mathrel{|}\allowbreak C\in\mathcal C_\alpha, x\in T\mathbin\upharpoonright C\},&\text{if }\alpha\in S;\\ \mathcal B(T\mathbin\upharpoonright\alpha),&\text{otherwise}.\\ \end{cases}$$ The details are left to the reader. ◻ **Corollary 51**. *Suppose that $\kappa$ is a strongly inaccessible cardinal, and $S$ is a nonreflecting stationary subset of $\mathop{\mathrm{acc}}(\kappa)$ on which $\diamondsuit$ holds. Then there exists a normal prolific streamlined $\kappa$-Souslin tree $T$ such that $V^-(T)=V(T)=S$.* *Proof.* By Theorem [Theorem 50](#thm54){reference-type="ref" reference="thm54"} together with [@paper23 Theorem 4.26]. ◻ # Realizing all points of some fixed cofinality {#sect6} The main result of this section is Theorem [Theorem 60](#thm58){reference-type="ref" reference="thm58"} below. A sample corollary of it reads as follows. **Corollary 52**. *In $\mathsf{L}$, for every regular uncountable cardinal $\kappa$ that is not weakly compact, for every finite nonempty $x\subseteq\mathop{\mathrm{Reg}}(\kappa)$ with $\max(x)\le\mathop{\mathrm{cf}}(\sup(\mathop{\mathrm{Reg}}(\kappa)))$, there exists a uniformly homogeneous $\kappa$-Souslin tree $\mathbf T$ such that $V^-(\mathbf T)=\bigcup_{\chi\in x}E^\kappa_\chi$.* *Proof.* Work in $\mathsf{L}$. Let $\kappa$ be a regular uncountable cardinal that is not weakly compact, and let $\langle \chi_i\mathrel{|}\allowbreak i\le n\rangle$ be a strictly increasing finite sequence of regular cardinals with $\chi_n\le\mathop{\mathrm{cf}}(\sup(\mathop{\mathrm{Reg}}(\kappa)))$. By [@paper22 Theorem 3.6] and [@paper29 Corollary 4.12], $\mathop{\mathrm{P}}(\kappa,2,{\sqsubseteq},\kappa,\{E^\kappa_{\ge\chi_n}\})$ holds. By $\textsf{\textup{GCH}}$, $\lambda^{<\chi_n}<\kappa$ for all $\lambda<\kappa$.
So, by Theorem [Theorem 60](#thm58){reference-type="ref" reference="thm58"} below, using $S:={}^{<\kappa}1$, we may pick a streamlined, normal, $2$-splitting, uniformly homogeneous, $\chi_0$-complete, $\chi_0$-coherent, $E^\kappa_{\ge\chi_0}$-regressive $\kappa$-Souslin tree $T^0$. Furthermore, $T^0$ is $\mathop{\mathrm{P}}^-(\kappa,2,\sqsubseteq,\kappa,\{E^\kappa_{\geq\chi_n}\})$-respecting. **Claim 10**. *$V^-(T^0)=E^\kappa_{\chi_0}$.* *Proof.* Since $T^0$ is $\chi_0$-complete, $V^-(T^0)\cap E^\kappa_{<{\chi_0}}=\emptyset$, so that $\mathop{\mathrm{Tr}}(\kappa\setminus V^-(T^0))$ covers $E^\kappa_{\ge{\chi_0}}$. By $\textsf{\textup{GCH}}$, $2^{<\chi_0}<2^{\chi_0}$. Together with the fact that $T^0$ is $E^\kappa_{\chi_0}$-regressive, it follows from Lemma [Lemma 9](#cor53){reference-type="ref" reference="cor53"}(2) that $E^\kappa_{\chi_0}\subseteq V^-(T^0)$. Finally, since $T^0$ is ${\chi_0}$-coherent and uniformly homogeneous, we get from Lemma [Lemma 54](#lemma64){reference-type="ref" reference="lemma64"} below that $V^-(T^0)\cap E^\kappa_{>{\chi_0}}=\emptyset$. ◻ If $n=0$, then our proof is complete. Otherwise, one can continue by recursion, where the recursive step is as follows: Suppose that $i<n$ is such that $\bigotimes_{j\le i}T^j$ is a streamlined uniformly homogeneous normal $\kappa$-Souslin tree that is $\mathop{\mathrm{P}}^-(\kappa,2,{\sqsubseteq},\allowbreak\kappa,\{E^\kappa_{\geq\chi_n}\})$-respecting, and that $V(\bigotimes_{j\le i}T^j)=\bigcup_{j\le i}E^\kappa_{\chi_j}$. By Theorem [Theorem 60](#thm58){reference-type="ref" reference="thm58"} below, using $S:=\bigotimes_{j\le i}T^j$, we may pick a streamlined, normal, $2$-splitting, uniformly homogeneous, $\chi_{i+1}$-complete, $\chi_{i+1}$-coherent, $E^\kappa_{\ge\chi_{i+1}}$-regressive $\kappa$-Souslin tree $T^{i+1}$. Furthermore, $S\otimes T^{i+1}$ is a normal $\mathop{\mathrm{P}}^-(\kappa,2,\sqsubseteq,\kappa,\allowbreak\{E^\kappa_{\geq\chi_n}\})$-respecting $\kappa$-Souslin tree.
By an analysis similar to that of Claim [Claim 10](#claim511){reference-type="ref" reference="claim511"}, $V^-(T^{i+1})=E^\kappa_{\chi_{i+1}}$. Therefore, $\bigotimes_{j\le i+1}T^j$ is a uniformly homogeneous normal $\kappa$-Souslin tree that is $\mathop{\mathrm{P}}^-(\kappa,2,{\sqsubseteq},\allowbreak\kappa,\{E^\kappa_{\geq\chi_n}\})$-respecting. In addition, by Proposition [Proposition 26](#products){reference-type="ref" reference="products"}(2), $V(\bigotimes_{j\le i+1}T^j)=\bigcup_{j\le i+1}E^\kappa_{\chi_j}$. ◻ We start by giving a definition. **Definition 53**. A streamlined $\kappa$-tree $T$ is *$\chi$-coherent* iff for all $s,t\in T$, $\{ \xi\in\mathop{\mathrm{dom}}(s)\cap\mathop{\mathrm{dom}}(t)\mathrel{|}\allowbreak s(\xi)\neq t(\xi)\}$ has size $<\chi$. **Lemma 54**. *Suppose that $\chi<\kappa$ is a cardinal, and that $T$ is a streamlined, $\chi$-coherent uniformly homogeneous $\kappa$-tree. Then $V^-(T)\subseteq E^\kappa_{\le\chi}$.* *Proof.* Let $\alpha\in E^\kappa_{>\chi}$. Suppose that $B\subseteq T$ is an $\alpha$-branch, and we shall show it is not vanishing. For every $\beta<\alpha$, let $t_\beta$ denote the unique element of $T_\beta\cap B$. Fix a node $t\in T_\alpha$. For every $\beta\in E^\alpha_\chi$, by $\chi$-coherence, the following ordinal is smaller than $\beta$: $$\epsilon_\beta:=\sup\{ \xi<\beta\mathrel{|}\allowbreak t_\beta(\xi)\neq t(\xi)\}.$$ As $\mathop{\mathrm{cf}}(\alpha)>\chi$, $E^\alpha_\chi$ is a stationary subset of $\alpha$, so we may fix a large enough $\epsilon<\alpha$ for which $R:=\{\beta\in E^\alpha_\chi\mathrel{|}\allowbreak\epsilon_\beta<\epsilon\}$ is stationary. As $T$ is uniformly homogeneous, $t_\epsilon*t$ is in $T_\alpha$. For every $\beta\in R$, $t_\beta=(t_\epsilon*t)\mathbin\upharpoonright\beta$. But since $R$ is cofinal in $\alpha$, it is the case that $t_\epsilon*t$ constitutes a limit for $B$. Therefore, $B$ is not vanishing. 
◻ In the context of streamlined $\kappa$-trees, there is a neater way of presenting the operation of product (compare with Definition [Definition 25](#def-classical-product-tree){reference-type="ref" reference="def-classical-product-tree"}): **Definition 55** ([@paper23 §6.7]). For every function $x:\alpha\rightarrow{}^\tau H_\kappa$ and every $i<\tau$, we let $(x)_i:\alpha\rightarrow H_\kappa$ be $\langle x(\beta)(i)\mathrel{|}\allowbreak\beta<\alpha\rangle$. Using this notation, for every sequence $\langle T^i\mathrel{|}\allowbreak i<\tau\rangle$ of streamlined $\kappa$-trees, one may identify $\bigotimes_{i<\tau}T^i$ with the streamlined tree $T:=\{ x\in{}^{<\kappa}({}^\tau H_\kappa)\mathrel{|}\allowbreak\forall i<\tau\,[(x)_i\in T^i]\}$. *Remark 56*. The product of two uniformly homogeneous $\kappa$-trees is uniformly homogeneous. Before we can state the main result of this section, we need one more definition. **Definition 57** ([@paper20]). A streamlined $\kappa$-tree $X$ is *$\mathop{\mathrm{P}}_\xi^-(\kappa, \mu,\mathcal R, \theta, \mathcal S)$-respecting* if there exists a subset $\S\subseteq\kappa$ and a sequence of mappings $\langle d ^C:(X\mathbin\upharpoonright C)\rightarrow {}^\alpha H_\kappa\cup\{\emptyset\}\mathrel{|}\allowbreak\alpha<\kappa, C\in\mathcal C_\alpha\rangle$ such that: 1. [\[respectingonto\]]{#respectingonto label="respectingonto"} for all $\alpha\in\S$ and $C\in\mathcal C_\alpha$, $X_\alpha\subseteq\mathop{\mathrm{Im}}(d^C)$; 2. $\vec{\mathcal C}=\langle \mathcal C_\alpha\mathrel{|}\allowbreak\alpha<\kappa\rangle$ witnesses $\mathop{\mathrm{P}}_\xi^-(\kappa, \mu,\mathcal R, \theta, \{S\cap\S\mathrel{|}\allowbreak S\in \mathcal S\})$; 3. [\[respectcohere\]]{#respectcohere label="respectcohere"} for all sets $D\sqsubseteq C$ from $\vec{\mathcal C}$ and $x\in X\mathbin\upharpoonright D$, $d^D(x)=d^C(x)\mathbin\upharpoonright\sup(D)$. *Remark 58*. 1. 
If $\mathop{\mathrm{P}}_\xi^-(\kappa, \mu,\mathcal R, \theta, \mathcal S)$ holds, then the normal streamlined $\kappa$-tree $X:={}^{<\kappa}1$ is $\mathop{\mathrm{P}}_\xi^-(\kappa, \mu,\mathcal R, \theta, \mathcal S)$-respecting; 2. If $\kappa=\lambda^+$ for an infinite regular cardinal $\lambda$, and $\mathop{\mathrm{P}}_\lambda^-(\kappa, \mu,\allowbreak\mathrel{_{\lambda}{\sqsubseteq}}, \theta, \{E^\kappa_\lambda\})$ holds, then every $\kappa$-tree is $\mathop{\mathrm{P}}_\lambda^-(\kappa, \mu,\allowbreak\mathrel{_{\lambda}{\sqsubseteq}}, \theta, \{E^\kappa_\lambda\})$-respecting. **Lemma 59**. *Suppose that:* - *$X$ is a streamlined $\kappa$-tree that is $\mathop{\mathrm{P}}_\xi^-(\kappa, \mu,\mathcal R, \kappa, \mathcal S)$-respecting, as witnessed by some $\vec{\mathcal C}$ and $\S$;* - *$Y$ is a streamlined $\kappa$-tree that is $\mathop{\mathrm{P}}_\xi^-(\kappa, \mu,\mathcal R, \kappa, \{S\cap\S\mathrel{|}\allowbreak S\in \mathcal S\})$-respecting, as witnessed by the same $\vec{\mathcal C}$.* *Then the product $X\otimes Y$ is $\mathop{\mathrm{P}}_\xi^-(\kappa, \mu,\mathcal R, \kappa, \mathcal S)$-respecting.* *Proof.* In view of Definition [Definition 55](#def54){reference-type="ref" reference="def54"}, for every two functions $x,y$ from an ordinal $\alpha<\kappa$ to $H_\kappa$, we denote by ${}^\ulcorner (x,y){}^\urcorner$ the unique function $p:\alpha\rightarrow{}^2H_\kappa$ such that $(p)_0=x$ and $(p)_1=y$. Note that $X\otimes Y=\bigcup_{\alpha<\kappa}\{ {}^\ulcorner (x,y){}^\urcorner \mathrel{|}\allowbreak(x,y)\in X_\alpha\times Y_\alpha\}$. Write $\vec{\mathcal C}$ as $\langle \mathcal C_\alpha\mathrel{|}\allowbreak\alpha<\kappa\rangle$. Fix a sequence of mappings $\langle d^C:(X\mathbin\upharpoonright C)\rightarrow {}^\alpha H_\kappa\cup\{\emptyset\}\mathrel{|}\allowbreak\alpha<\kappa, C\in\mathcal C_\alpha\rangle$ such that: 1. for all $\alpha\in\S$ and $C\in\mathcal C_\alpha$, $X_\alpha\subseteq\mathop{\mathrm{Im}}(d^C)$; 2. 
$\vec{\mathcal C}=\langle \mathcal C_\alpha\mathrel{|}\allowbreak\alpha<\kappa\rangle$ witnesses $\mathop{\mathrm{P}}_\xi^-(\kappa, \mu,\mathcal R, \kappa, \{S\cap\S\mathrel{|}\allowbreak S\in \mathcal S\})$; 3. for all sets $D\sqsubseteq C$ from $\vec{\mathcal C}$ and $x\in X\mathbin\upharpoonright D$, $d^D(x)=d^C(x)\mathbin\upharpoonright\sup(D)$. Fix a stationary $\S'\subseteq\S$ and a sequence of mappings $\langle e^C:(Y\mathbin\upharpoonright C)\rightarrow {}^\alpha H_\kappa\cup\{\emptyset\}\mathrel{|}\allowbreak\alpha<\kappa, C\in\mathcal C_\alpha\rangle$ such that: 1. for all $\alpha\in\S'$ and $C\in\mathcal C_\alpha$, $Y_\alpha\subseteq\mathop{\mathrm{Im}}(e^C)$; 2. $\vec{\mathcal C}=\langle \mathcal C_\alpha\mathrel{|}\allowbreak\alpha<\kappa\rangle$ witnesses $\mathop{\mathrm{P}}_\xi^-(\kappa, \mu,\mathcal R, \kappa, \{S\cap\S'\mathrel{|}\allowbreak S\in \mathcal S\})$; 3. for all sets $D\sqsubseteq C$ from $\vec{\mathcal C}$ and $y\in Y\mathbin\upharpoonright D$, $e^D(y)=e^C(y)\mathbin\upharpoonright\sup(D)$. Let $\vec B=\langle B_{x,y}\mathrel{|}\allowbreak(x,y)\in X\times Y\rangle$ be a partition of $\kappa$ into cofinal subsets of $\kappa$. Define a sequence of mappings $\langle b^C:(X\otimes Y)\mathbin\upharpoonright C\rightarrow {}^\alpha H_\kappa\cup\{\emptyset\}\mathrel{|}\allowbreak\alpha<\kappa, C\in\mathcal C_\alpha\rangle$, as follows. Let $\alpha<\kappa$ and $C\in\mathcal C_\alpha$. $\blacktriangleright$ For every $\beta\in C$, if there are $x\in X\mathbin\upharpoonright(C\cap\beta)$ and $y\in Y\mathbin\upharpoonright(C\cap\beta)$ such that $\beta\in B_{x,y}$, then since $\vec B$ is a sequence of pairwise disjoint sets, this pair $(x,y)$ is unique, and we let $b^C(p):={}^\ulcorner (d^C(x),e^C(y)){}^\urcorner$ for every $p\in(X\otimes Y)_\beta$. $\blacktriangleright$ For every $\beta\in C$ for which there is no such pair $(x,y)$, we let $b^C(p):=\emptyset$ for every $p\in(X\otimes Y)_\beta$. **Claim 11**. 
*Suppose $D\sqsubseteq C$ are sets from $\vec{\mathcal C}$. For every $p\in (X\otimes Y)\mathbin\upharpoonright D$, $b^D(p)=b^C(p)\mathbin\upharpoonright\sup(D)$.* *Proof.* Given $p\in (X\otimes Y)\mathbin\upharpoonright D$. Denote $\beta:=\mathop{\mathrm{dom}}(p)$. Note that $D\cap\beta=C\cap\beta$. Now, there are two options: $\blacktriangleright$ There are $x\in X\mathbin\upharpoonright(C\cap\beta)$ and $y\in Y\mathbin\upharpoonright(C\cap\beta)$ such that $\beta\in B_{x,y}$. Then $b^D(p)={}^\ulcorner (d^D(x),e^D(y)){}^\urcorner$ and $b^C(p)={}^\ulcorner (d^C(x),e^C(y)){}^\urcorner$. Since $D\sqsubseteq C$, we know that $d^D(x)=d^C(x)\mathbin\upharpoonright\sup(D)$ and $e^D(y)=e^C(y)\mathbin\upharpoonright\sup(D)$. Therefore, $b^D(p)=b^C(p)\mathbin\upharpoonright\sup(D)$. $\blacktriangleright$ There are no such $x$ and $y$. Then $b^D(p)=\emptyset=b^C(p)$. ◻ Consider the following set: $$\S'':=\{\alpha\in\S'\mathrel{|}\allowbreak\forall C\in\mathcal C_\alpha\forall x\in(X\mathbin\upharpoonright\alpha)\forall y\in(Y\mathbin\upharpoonright\alpha)\,[\sup(C\cap B_{x,y})=\alpha]\}.$$ **Claim 12**. *$\vec{\mathcal C}=\langle \mathcal C_\alpha\mathrel{|}\allowbreak\alpha<\kappa\rangle$ witnesses $\mathop{\mathrm{P}}_\xi^-(\kappa, \mu,\mathcal R, \kappa, \{S\cap\S''\mathrel{|}\allowbreak S\in \mathcal S\})$.* *Proof.* Let $\langle B_i\mathrel{|}\allowbreak i<\kappa\rangle$ be a given sequence of cofinal subsets of $\kappa$. Let $\pi:\kappa\leftrightarrow\kappa\uplus(X\times Y)$ be a bijection. As $X$ and $Y$ are $\kappa$-trees, the set $D:=\{ \alpha<\kappa\mathrel{|}\allowbreak\pi[\alpha]=\alpha\uplus((X\mathbin\upharpoonright\alpha)\times(Y\mathbin\upharpoonright\alpha))\}$ is a club in $\kappa$. By Clause (5), then, for every $S\in\mathcal{S}$, there are stationarily many $\alpha\in S\cap\S'\cap D$ such that for all $C\in\mathcal C_\alpha$ and $i<\alpha$, $\sup(\mathop{\mathrm{nacc}}(C)\cap B_{\pi(i)})=\alpha$.
In particular, for every $S\in\mathcal{S}$, there are stationarily many $\alpha\in S\cap\S''$ such that for all $C\in\mathcal C_\alpha$ and $i<\alpha$, $\sup(\mathop{\mathrm{nacc}}(C)\cap B_i)=\alpha$. ◻ **Claim 13**. *Let $\alpha\in\S''$ and $C\in\mathcal C_\alpha$. Then $(X\otimes Y)_\alpha\subseteq\mathop{\mathrm{Im}}(b^C)$.* *Proof.* Let $(s,t)\in X_\alpha \times Y_\alpha$. As $\S''\subseteq\S'\subseteq\S$, using Clauses (1) and (4), we may fix $x\in X\mathbin\upharpoonright C$ and $y\in Y\mathbin\upharpoonright C$ such that $d^C(x)=s$ and $e^C(y)=t$. As $\alpha\in\S''$, we may pick $\beta\in C\cap B_{x,y}$ above $\max\{\mathop{\mathrm{dom}}(x),\mathop{\mathrm{dom}}(y)\}$. Let $p$ be an arbitrary element of $(X\otimes Y)_\beta$. Then $b^C(p)={}^\ulcorner (d^C(x),e^C(y)){}^\urcorner ={}^\ulcorner (s,t){}^\urcorner$. ◻ This completes the proof. ◻ **Theorem 60**. *Suppose that:* - *$\varsigma<\kappa$ is a cardinal;* - *$\nu\le\chi<\kappa$ are cardinals such that $\lambda^{<\chi}<\kappa$ for all $\lambda<\kappa$;* - *$S$ is a $\mathop{\mathrm{P}}^-(\kappa,2,{\mathrel{_{\nu}{\sqsubseteq}}},\kappa,\{E^\kappa_{\geq\chi}\})$-respecting streamlined normal $\kappa$-tree with no $\kappa$-sized antichains;* - *$\diamondsuit(\kappa)$ holds.* *Then there exists a streamlined, normal, $\varsigma$-splitting, prolific, uniformly homogeneous, $\chi$-complete, $\chi$-coherent, $E^\kappa_{\ge\chi}$-regressive $\kappa$-Souslin tree $T$ such that $S\otimes T$ is a normal $\mathop{\mathrm{P}}^-(\kappa,2,{\mathrel{_{\nu}{\sqsubseteq}}},\kappa,\{E^\kappa_{\geq\chi}\})$-respecting $\kappa$-Souslin tree.* *Proof.* Fix a stationary $\S\subseteq\kappa$ and a sequence $\langle d^\alpha:S\mathbin\upharpoonright C_\alpha\rightarrow {}^\alpha H_\kappa\cup\{\emptyset\}\mathrel{|}\allowbreak\alpha<\kappa\rangle$ such that: 1. [\[respectingonto\]]{#respectingonto label="respectingonto"} for all $\alpha\in\S$, $S_\alpha\subseteq\mathop{\mathrm{Im}}(d^\alpha)$; 2.
$\vec C:=\langle C_\alpha\mathrel{|}\allowbreak\alpha<\kappa\rangle$ witnesses $\mathop{\mathrm{P}}^-(\kappa, 2,{\mathrel{_{\nu}{\sqsubseteq}}}, \kappa, \{\S\})$; 3. for all $\alpha<\beta<\kappa$, if $C_\alpha\sqsubseteq C_\beta$, then $d^\alpha(x)=d^\beta(x)\mathbin\upharpoonright\alpha$ for every $x\in S\mathbin\upharpoonright C_\alpha$. Without loss of generality, we may assume that $0\in C_\alpha$ for all nonzero $\alpha<\kappa$. The upcoming construction follows the proof of [@paper22 Proposition 2.5]. Let $\langle R_i \mathrel{|}\allowbreak i<\kappa\rangle$ and $\langle \Omega_\beta\mathrel{|}\allowbreak\beta<\kappa\rangle$ together witness $\diamondsuit(H_\kappa)$. Let $\pi:\kappa\rightarrow\kappa$ be such that $\alpha\in R_{\pi(\alpha)}$ for all $\alpha<\kappa$. From $\diamondsuit(\kappa)$, we have $\left|H_\kappa\right| =\kappa$, thus let $\lhd$ be some well-ordering of $H_\kappa$ of order-type $\kappa$, and let $\phi:\kappa\leftrightarrow H_\kappa$ witness the isomorphism $(\kappa,\in)\cong(H_\kappa,\lhd)$. Put $\psi:=\phi\circ\pi$. We now recursively construct a sequence $\langle T_\alpha\mathrel{|}\allowbreak\alpha<\kappa\rangle$ of levels whose union will ultimately be the desired tree $T$. Let $T_0:=\{\emptyset\}$, and for all $\alpha<\kappa$, let $$T_{\alpha+1}:=\{ t{}^\smallfrown\langle i\rangle\mathrel{|}\allowbreak t\in T_\alpha, i<\max\{\varsigma,\omega,\alpha\}\}.$$ Next, suppose that $\alpha\in\mathop{\mathrm{acc}}(\kappa)$, and that $\langle T_\beta\mathrel{|}\allowbreak\beta<\alpha\rangle$ has already been defined. We shall identify some $\mathbf b^\alpha\in\mathcal B(T\mathbin\upharpoonright\alpha)$, and then define the $\alpha^{\text{th}}$-level, as follows: $$\tag{$\star$}\label{promise3} T_\alpha:=\begin{cases} \mathcal B(T\mathbin\upharpoonright\alpha),&\text{if }\alpha\in E^\kappa_{<\chi};\\ \{x*\mathbf b^\alpha\mathrel{|}\allowbreak x\in T\mathbin\upharpoonright\alpha\},&\text{if }\alpha\in E^\kappa_{\ge\chi}. 
\end{cases}$$ We shall obtain $\mathbf b^\alpha$ as a limit $\bigcup\mathop{\mathrm{Im}}(b^\alpha)$ of a sequence $b^\alpha\in\prod_{\beta\in C_\alpha}T_\beta$ that we define recursively, as follows. Let $b^\alpha(0):=\emptyset$. Next, suppose $\beta^-<\beta$ are two successive points of $C_\alpha$, and that $b^\alpha(\beta^-)$ has already been defined. There are two possible options: $\blacktriangleright$ If $\psi(\beta)$ happens to be a pair $(y,x)$ lying in $(S\mathbin\upharpoonright\beta^-)\times (T\mathbin\upharpoonright\beta^-)$, and the following set happens to be nonempty: $$Q^{\alpha,\beta}:=\{t\in T_\beta\mathrel{|}\allowbreak\exists(\bar s,\bar t)\in \Omega_\beta\,[\bar s\subseteq d^\alpha(y)\mathbin\upharpoonright\beta\ \&\ (\bar t\cup(x*b^\alpha(\beta^-)))\subseteq t] \},$$ then let $t$ denote its $\lhd$-least element, and put $b^\alpha(\beta):=b^\alpha(\beta^-)* t$. $\blacktriangleright$ Otherwise, let $b^\alpha(\beta)$ be the $\lhd$-least element of $T_\beta$ that extends $b^\alpha(\beta^-)$. As always, for all $\beta \in \mathop{\mathrm{acc}}(C_\alpha)$ such that $b^\alpha\mathbin\upharpoonright\beta$ has already been defined, we let $b^\alpha(\beta):=\bigcup\mathop{\mathrm{Im}}(b^\alpha\mathbin\upharpoonright\beta)$ and infer that it belongs to $T_\beta$. Indeed, either $\mathop{\mathrm{cf}}(\beta)<\chi$, and then $b^\alpha(\beta)\in\mathcal B(T\mathbin\upharpoonright\beta)=T_\beta$, or $\mathop{\mathrm{cf}}(\beta)\ge\chi\ge\nu$, and then $C_\beta=C_\alpha\cap\beta$ from which it follows that $b^\alpha(\beta)=\mathbf b^\beta\in T_\beta$. This completes the definition of $b^\alpha$, hence also that of $\mathbf b^\alpha$. Finally, let $T_\alpha$ be defined as promised in [\[promise3\]](#promise3){reference-type="eqref" reference="promise3"}. It is clear that $T := \bigcup_{\alpha < \kappa} T_\alpha$ is a streamlined, normal, $\varsigma$-splitting, prolific, uniformly homogeneous, $\chi$-complete $\kappa$-tree. **Claim 14**. 
*$T$ is $\chi$-coherent.* *Proof.* Suppose not, and let $\alpha$ be the least ordinal to accommodate $s,t\in T_\alpha$ such that $s$ differs from $t$ on a set of size $\ge\chi$. Clearly, $\alpha\in E^\kappa_{\ge\chi}$. So $s=x*\mathbf b^\alpha$ and $t=y*\mathbf b^\alpha$ for nodes $x,y\in T\mathbin\upharpoonright\alpha$, and hence $x$ and $y$ differ on a set of size $\ge\chi$, contradicting the minimality of $\alpha$. ◻ **Claim 15**. *$T$ is $E^\kappa_{\ge\chi}$-regressive.* *Proof.* To define $\rho:T\mathbin\upharpoonright E^\kappa_{\ge\chi}\rightarrow T$, let $\alpha\in E^\kappa_{\ge\chi}$. By the definition of $T_\alpha$, for every $t\in T_\alpha$, there exists some $x\in T\mathbin\upharpoonright\alpha$ such that $t=x*\mathbf b^\alpha$, so we let $\rho(t)$ be an element of $T\mathbin\upharpoonright\alpha$ such that $t=\rho(t)*\mathbf b^\alpha$. Now, if $s,t\in T_\alpha$ are such that $\rho(t)\subseteq s$ and $\rho(s)\subseteq t$, then $\rho(t)\subseteq\rho(s)*\mathbf b^\alpha$ and $\rho(s)\subseteq\rho(t)*\mathbf b^\alpha$. In particular, $\rho(s)$ is compatible with $\rho(t)$. Without loss of generality, $\rho(s)\subseteq\rho(t)$. Then $t=\rho(s)*\mathbf b^\alpha=s$. ◻ **Claim 16**. *$T$ is $\mathop{\mathrm{P}}^-(\kappa,2,{\mathrel{_{\nu}{\sqsubseteq}}},\kappa,\{\S\})$-respecting, as witnessed by $\vec C$.* *Proof.* Define $\langle e^\alpha:T\mathbin\upharpoonright C_\alpha\rightarrow T_\alpha\mathrel{|}\allowbreak\alpha<\kappa\rangle$ via: $$e^\alpha(x):=x*\mathbf b^\alpha.$$ The second part of [\[promise3\]](#promise3){reference-type="eqref" reference="promise3"} implies that $T_\alpha=\mathop{\mathrm{Im}}(e^\alpha)$ for all $\alpha\in E^\kappa_{\ge\chi}\supseteq\S$. In addition, it is clear that for all $\alpha<\beta<\kappa$, if $C_\alpha\sqsubseteq C_\beta$, then ${\mathbf b}^\alpha={\mathbf b}^\beta\mathbin\upharpoonright\alpha$, and hence $e^\alpha(x)=e^\beta(x)\mathbin\upharpoonright\alpha$ for every $x\in T\mathbin\upharpoonright C_\alpha$.
◻ It thus follows from Lemma [Lemma 59](#lemma58){reference-type="ref" reference="lemma58"} that $S\otimes T$ is $\mathop{\mathrm{P}}^-(\kappa,2,{\mathrel{_{\nu}{\sqsubseteq}}},\kappa,\{E^\kappa_{\geq\chi}\})$-respecting. It is clear that $S\otimes T$ is normal, thus we are left with verifying that it is Souslin. To this end, let $A$ be a maximal antichain in $S\otimes T$. As both $S$ and $T$ are normal, it follows that for every $z\in T$, the following (upward-closed) set is cofinal in $S$: $$D_z:=\{s\in S\mathrel{|}\allowbreak\exists (\bar s,\bar t)\in A \exists t\in T\cap z^\uparrow\,[\mathop{\mathrm{dom}}(s)=\mathop{\mathrm{dom}}(t), \bar s\subseteq s, \bar t\subseteq t]\}.$$ As an application of $\diamondsuit(H_\kappa)$, using the parameter $p:=\{\phi,S\otimes T, A, \langle D_z\mathrel{|}\allowbreak z\in T\rangle\}$, we get that for every $i<\kappa$, the following set is cofinal (in fact, stationary) in $\kappa$: $$B_i:=\{\beta\in R_i\mathrel{|}\allowbreak\exists \mathcal M\prec H_{\kappa^+}\,(p\in \mathcal M, \mathcal M\cap\kappa=\beta, \Omega_\beta=A\cap \mathcal M)\}.$$ Note that $(S\mathbin\upharpoonright\beta)\otimes (T\mathbin\upharpoonright\beta)\subseteq\phi[\beta]$ for every $\beta\in \bigcup_{i<\kappa}B_i$. Now, as $\vec C$ witnesses $\mathop{\mathrm{P}}^-(\kappa, 2,{\mathrel{_{\nu}{\sqsubseteq}}}, \kappa, \{\S\})$, we may fix some $\alpha\in \S$ such that, for all $i<\alpha$, $$\sup (\mathop{\mathrm{nacc}}(C_\alpha)\cap B_i)=\alpha.$$ In particular, $(S\mathbin\upharpoonright\alpha)\otimes (T\mathbin\upharpoonright\alpha)\subseteq\phi[\alpha]$. As $\alpha\in\S$, we also know that $S_\alpha\subseteq\mathop{\mathrm{Im}}(d^\alpha)$ and that $\mathop{\mathrm{cf}}(\alpha)\ge\chi$. **Claim 17**. *$A\subseteq(S\otimes T)\mathbin\upharpoonright\alpha$. In particular, $|A|<\kappa$.* *Proof.* As $A$ is an antichain, it suffices to prove that every element of $(S\otimes T)_\alpha$ extends some element of $A$. To this end, fix $(s',t')\in (S\otimes T)_\alpha$. 
Since $S_\alpha\subseteq\mathop{\mathrm{Im}}(d^\alpha)$, we may fix a $y\in S\mathbin\upharpoonright C_\alpha$ such that $d^\alpha(y)=s'$. Recalling [\[promise3\]](#promise3){reference-type="eqref" reference="promise3"}, we may also fix some $x\in T\mathbin\upharpoonright C_\alpha$ such that $t'=x*\mathbf b^\alpha$. As the pair $(y,x)$ is an element of $(S\mathbin\upharpoonright\alpha)\times (T\mathbin\upharpoonright\alpha)$, we may find an $i<\alpha$ such that $\phi(i)=(y,x)$, and then find a $\beta\in \mathop{\mathrm{nacc}}(C_\alpha)\cap B_i$ such that $\beta^-:=\sup(C_\alpha\cap\beta)$ is greater than $\max\{\mathop{\mathrm{dom}}(y),\mathop{\mathrm{dom}}(x)\}$. Note that $\psi(\beta)=\phi(\pi(\beta))=\phi(i)=(y,x)$. **Subclaim 1**. *$\Omega_\beta=A\cap ((S\otimes T)\mathbin\upharpoonright\beta)$, and $Q^{\alpha,\beta}\neq\emptyset$.* *Proof.* As $\beta\in B_i$, we may fix $\mathcal M\prec H_{\kappa^+}$ such that all of the following hold: - $\{\phi,S\otimes T, A, \langle D_z\mathrel{|}\allowbreak z\in T\rangle\}\in \mathcal M$; - $\mathcal M\cap\kappa=\beta$; - $\Omega_\beta=A\cap \mathcal M$. By elementarity, $(S\otimes T)\cap \mathcal M=(S\otimes T)\mathbin\upharpoonright\beta$, and $\Omega_\beta=A\cap \mathcal M=A\cap ((S\otimes T)\mathbin\upharpoonright\beta)$. Then $z:=t'\mathbin\upharpoonright\beta^-$ is in $\mathcal M$, and hence, so is $D_{z}$. Pick in $\mathcal M$ a maximal antichain $\bar D$ in $D_z$. Since $D_z$ is cofinal in $S$, $\bar D$ is a maximal antichain in $S$. Since $S$ has no $\kappa$-sized antichains, we may find a large enough $\gamma\in\mathcal M\cap\kappa$ such that $\bar D\subseteq S\mathbin\upharpoonright\gamma$. It thus follows that $s'\mathbin\upharpoonright\gamma$ extends an element of $\bar D$, but since $D_z$ is upward-closed, $s:=s'\mathbin\upharpoonright\gamma$ is in $D_z$. It follows that we may fix $(\bar s,\bar t)\in A$ and $t\in T_\gamma\cap z^\uparrow$ such that $\bar s\subseteq s$ and $\bar t\subseteq t$. 
As $\Omega_\beta=A\cap ((S\otimes T)\mathbin\upharpoonright\beta)$, $(d^\alpha(y)\mathbin\upharpoonright\beta)\mathbin\upharpoonright\gamma=s$ and $x*b^\alpha(\beta^-)=z\subseteq t$, we infer that $t\in Q^{\alpha,\beta}$. ◻ It follows that $b^\alpha(\beta)=b^\alpha(\beta^-)* t$ for some $t\in Q^{\alpha,\beta}$. This means that we may pick $(\bar s,\bar t)\in \Omega_\beta\subseteq A$ such that $\bar s\subseteq s'\mathbin\upharpoonright\beta$ and $\bar t\cup(x*b^\alpha(\beta^-))\subseteq t$. Therefore, $\bar t\subseteq x*b^\alpha(\beta)$. Altogether, $(\bar s,\bar t)\in A$, $\bar s\subseteq s'$ and $\bar t\subseteq t'$. ◻ This completes the proof. ◻ We now arrive at the proof of Theorem [Theorem 1](#thma){reference-type="ref" reference="thma"}: **Theorem 61**. *We have $(1)\implies(2)\implies(3)$:* 1. *there exists a $\kappa$-Souslin tree $\mathbf T$ such that $V(\mathbf T)=\emptyset$;* 2. *there exists a normal and splitting $\kappa$-tree $\mathbf T$ such that $V(\mathbf T)$ is nonstationary;* 3. *$\kappa$ is not the successor of a cardinal of countable cofinality.* *In addition, in $\mathsf{L}$, for $\kappa$ not weakly compact, $(3)\implies(1)$.* *Proof.* $(1)\implies(2)$: If $\mathbf T=(T,<_T)$ is a $\kappa$-Souslin tree, then a standard argument (see [@paper20 Lemma 2.4]) shows that for some club $D\subseteq\kappa$, $\mathbf T'=(T\mathbin\upharpoonright D,{<_T})$ is normal and splitting. Clearly, if $V(\mathbf T)=\emptyset$, then $V(\mathbf T')=\emptyset$, as well. $(2)\implies(3)$: Suppose that $\mathbf T$ is a normal and splitting $\kappa$-tree. If $\kappa$ is the successor of a cardinal of countable cofinality then by Corollary [Corollary 11](#cor27){reference-type="ref" reference="cor27"}, $V(\mathbf T)$ covers the stationary set $E^\kappa_\omega$. Hereafter, work in $\mathsf L$, and suppose that $\kappa$ is a regular uncountable cardinal that is not weakly compact and not the successor of a cardinal of countable cofinality. 
Then by Corollary [Corollary 52](#cor61){reference-type="ref" reference="cor61"} together with Proposition [Proposition 5](#prop22){reference-type="ref" reference="prop22"}(2), there are $\kappa$-Souslin trees $\mathbf T^0,\mathbf T^1$ such that $V(\mathbf T^0)=E^\kappa_\omega$ and $V(\mathbf T^1)=E^\kappa_{\omega_1}$. The disjoint sum of the two, $\mathbf T:=\sum\{\mathbf T^0,\mathbf T^1\}$, is clearly $\kappa$-Souslin. In addition, by Proposition [Proposition 28](#sums){reference-type="ref" reference="sums"}(2), $V(\mathbf T)=V(\mathbf T^0)\cap V(\mathbf T^1)=\emptyset$. ◻ *Remark 62*. The $\kappa$-Souslin tree $\mathbf T$ constructed in the preceding proof satisfies $V(\mathbf T)=\emptyset$, yet it has a $\kappa$-Souslin subtree $\mathbf T'$ for which $V(\mathbf T')$ is stationary. A $\kappa$-tree $\mathbf T$ is said to be *full* iff for every $\alpha\in\mathop{\mathrm{acc}}(\kappa)$, there is no more than one vanishing $\alpha$-branch in $\mathbf T$. It is clear that if $\mathbf T$ is a full $\kappa$-tree that is splitting (resp. Aronszajn), then $V(\mathbf T)$ is empty (resp. nonstationary). In [@paper62], we construct full $\kappa$-Souslin trees, thus giving an example of a $\kappa$-Souslin tree $\mathbf T$ such that $V(\mathbf T')$ is nonstationary for all of its $\kappa$-subtrees $\mathbf T'$. We conclude this section by pointing out that by using [@paper22 Theorem 3.6] and a proof similar to that of Theorem [Theorem 61](#thm511){reference-type="ref" reference="thm511"}, we get more information on the model studied in Corollary [Corollary 49](#cor47){reference-type="ref" reference="cor47"}. **Corollary 63**. *Suppose that $\mathop{\mathrm{CH}}$ and $\framebox[3.0mm][l]{$\diamondsuit$}\hspace{0.5mm}{}_{\aleph_1}$ both hold. 
Then there are $\aleph_2$-Souslin trees $\mathbf T^0,\mathbf T^1,\mathbf T^2,\mathbf T^3$ such that:* - *$V(\mathbf T^0)=\emptyset$;* - *$V(\mathbf T^1)=E^{\aleph_2}_{\aleph_0}$;* - *$V(\mathbf T^2)=E^{\aleph_2}_{\aleph_1}$;* - *$V(\mathbf T^3)=\mathop{\mathrm{acc}}(\aleph_2)$. ◻* # Souslin trees with an ascent path {#sect7} The subject matter of this section is the following definition. **Definition 64** (Laver). Suppose that $\mathbf{T} = (T, <_T)$ is a tree of some height $\kappa$. A *$\mu$-ascent path* through $\mathbf{T}$ is a sequence $\vec f=\langle f_\alpha \mathrel{|}\allowbreak\alpha < \kappa \rangle$ such that: - for every $\alpha < \kappa$, $f_\alpha:\mu \rightarrow T_\alpha$ is a function; - for all $\alpha < \beta < \kappa$, there is an $i < \mu$ such that $f_\alpha(j) <_T f_\beta(j)$ whenever $i\le j<\mu$. We will show that Souslin trees having a large set of vanishing levels are compatible with carrying an ascent path. For this, we shall make use of the following strengthening of $\mathop{\mathrm{P}}_\xi^-(\kappa,\mu^+,{\sqsubseteq},\theta,\mathcal{S})$: **Definition 65** ([@paper23 §4.6]). 
The principle $\mathop{\mathrm{P}}_\xi^-(\kappa, \mu^{\textup{ind}},\allowbreak{\sqsubseteq},\theta,\mathcal{S})$ asserts the existence of a $\xi$-bounded $\mathcal C$-sequence $\langle \mathcal{C}_\alpha\mathrel{|}\allowbreak\alpha<\kappa\rangle$ together with a sequence $\langle i(\alpha)\mathrel{|}\allowbreak\alpha<\kappa\rangle$ of ordinals in $\mu$, such that: - for every $\alpha<\kappa$, there exists a canonical enumeration $\langle C_{\alpha,i}\mathrel{|}\allowbreak i(\alpha)\le i<\mu\rangle$ of $\mathcal C_\alpha$ satisfying that the sequence $\langle \mathop{\mathrm{acc}}({C}_{\alpha,i})\mathrel{|}\allowbreak i(\alpha)\le i<\mu\rangle$ is $\subseteq$-increasing with $\bigcup_{i\in[i(\alpha),\mu)}\mathop{\mathrm{acc}}(C_{\alpha,i})=\mathop{\mathrm{acc}}(\alpha)$; - for all $\alpha<\kappa$, $i\in[i(\alpha),\mu)$ and $\bar\alpha\in\mathop{\mathrm{acc}}({C}_{\alpha,i})$, it is the case that $i\ge i(\bar\alpha)$ and $C_{\bar\alpha,i}\sqsubseteq C_{\alpha,i}$; - for every sequence $\langle B_\tau\mathrel{|}\allowbreak\tau<\theta\rangle$ of cofinal subsets of $\kappa$, and every $S\in\mathcal{S}$, there are stationarily many $\alpha\in S$ such that for all $C\in\mathcal C_\alpha$ and $\tau<\min\{\alpha,\theta\}$, $\sup(\mathop{\mathrm{nacc}}(C)\cap B_\tau)=\alpha$. Conventions [Convention 35](#proxydef2){reference-type="ref" reference="proxydef2"} and [Convention 36](#conv35){reference-type="ref" reference="conv35"} apply to the preceding, as well. **Lemma 66**. *Suppose that:* - *$\mu<\kappa$ is an infinite cardinal;* - *$K$ is a streamlined $\kappa$-tree;* - *$\mathop{\mathrm{P}}(\kappa, \mu^{\textup{ind}},\allowbreak{\sqsubseteq},1)$ holds.* *Then there exists a normal and splitting streamlined $\kappa$-Souslin tree $T$ with $V(T)\supseteq V^-(K)$ such that $T$ admits a $\mu$-ascent path.* *Proof.* As a preparatory step, we shall need the following simple claim. **Claim 18**. 
*We may assume that $\mathcal B(K)\neq\emptyset$.* *Proof.* For every $\eta\in K$, define a function $\eta':\mathop{\mathrm{dom}}(\eta)\rightarrow H_\kappa$ via $\eta'(\alpha):=(\eta(\alpha),0)$. Then $K':=\{ \eta'\mathrel{|}\allowbreak\eta\in K\}\uplus{}^{<\kappa}1$ is a streamlined $\kappa$-tree with $V^-(K')=V^-(K)$ and, in addition, $\mathcal B(K')\neq\emptyset$. ◻ Let $\vec{\mathcal C}=\langle \mathcal{C}_\alpha\mathrel{|}\allowbreak\alpha<\kappa\rangle$ and $\langle i(\alpha)\mathrel{|}\allowbreak\alpha<\kappa\rangle$ witness together that $\mathop{\mathrm{P}}^-(\kappa, \mu^{\textup{ind}},\allowbreak{\sqsubseteq},1)$ holds. In particular, $\vec{\mathcal C}$ is a $\mathop{\mathrm{P}}^-(\kappa,\kappa,{\sqsubseteq},1)$-sequence satisfying that, for all $\alpha\in\mathop{\mathrm{acc}}(\kappa)$ and $C,D\in\mathcal C_\alpha$, $\sup(C\cap D)=\alpha$. As always, we may also assume that $0\in \bigcap_{0<\alpha<\kappa}\bigcap\mathcal C_\alpha$. Using $\vec{\mathcal C}$ and $K$, construct the sequence of levels $\langle T_\alpha\mathrel{|}\allowbreak\alpha<\kappa\rangle$ exactly as in the proof of Theorem [Theorem 38](#thm41){reference-type="ref" reference="thm41"}, so that $T:=\bigcup_{\alpha<\kappa}T_\alpha$ is a normal and splitting streamlined $\kappa$-Souslin tree. From Claim [Claim 8](#c372){reference-type="ref" reference="c372"}, we infer that $V(T)\supseteq V^-(K)$. In addition, the construction of Theorem [Theorem 38](#thm41){reference-type="ref" reference="thm41"} ensures that for every $\alpha\in \mathop{\mathrm{acc}}(\kappa)$, it is the case that $$T_\alpha=\{\mathbf b_x^{C,\eta}\mathrel{|}\allowbreak C\in\mathcal C_\alpha,\eta\in K_\alpha, x\in T\mathbin\upharpoonright C\}.$$ Fix $\zeta\in\mathcal B(K)$. 
For every $\alpha\in\mathop{\mathrm{acc}}(\kappa)$, using the canonical enumeration $\langle C_{\alpha,i}\mathrel{|}\allowbreak i(\alpha)\le i<\mu\rangle$ of $\mathcal C_\alpha$, we define a function $f_\alpha:\mu\rightarrow T_\alpha$ via $$f_\alpha(j):=\mathbf{b}_\emptyset^{C_{\alpha,\max\{j,i(\alpha)\}},\zeta\mathbin\upharpoonright\alpha}.$$ **Claim 19**. *Let $\beta<\alpha$ be a pair of ordinals in $\mathop{\mathrm{acc}}(\kappa)$. Then there exists an $i<\mu$ such that $f_\beta(j)\subseteq f_\alpha(j)$ whenever $i\le j<\mu$.* *Proof.* Note that by Claim [Claim 7](#c371){reference-type="ref" reference="c371"}, for all $C\in\mathcal C_\alpha$, $\eta\in K_\alpha$, and $x\in T\mathbin\upharpoonright(C\cap\beta)$, if $\beta\in\mathop{\mathrm{acc}}(C)$, then $\mathbf b_x^{C,\eta}\mathbin\upharpoonright\beta=\mathbf b_x^{C\cap\beta,\eta\mathbin\upharpoonright\beta}$. Now, by Definition [Definition 65](#indexedP){reference-type="ref" reference="indexedP"}, we may fix a large enough $i\in[i(\alpha),\mu)$ such that $\beta\in\mathop{\mathrm{acc}}(C_{\alpha,j})$ whenever $i\le j<\mu$. Let $j$ be such an ordinal. Then $j\ge i(\beta)$ and $C_{\alpha,j}\cap\beta=C_{\beta,j}$, so that $$f_\beta(j)=\mathbf{b}_\emptyset^{C_{\beta,j},\zeta\mathbin\upharpoonright\beta}=\mathbf{b}_\emptyset^{C_{\alpha,j},\zeta\mathbin\upharpoonright\alpha}\mathbin\upharpoonright\beta=f_\alpha(j)\mathbin\upharpoonright\beta,$$ as sought. ◻ It now easily follows that $T$ admits a $\mu$-ascent path. ◻ **Corollary 67**. *Suppose that:* - *$\lambda$ is an uncountable cardinal satisfying $\square_\lambda$ and $2^\lambda=\lambda^+$;* - *$\mu<\lambda$ is an infinite regular cardinal satisfying $\lambda^\mu=\lambda$.* *Then there exists a streamlined $\lambda^+$-Souslin tree $T$ with $V(T)=\mathop{\mathrm{acc}}(\lambda^+)$ such that $T$ admits a $\mu$-ascent path.* *Proof.* By [@lh_lucke Theorem 3.4], in particular, $\square^{\textup{ind}}(\lambda^+,\mu)$ holds. 
Then, by [@paper23 Theorem 4.44], $\mathop{\mathrm{P}}^-(\lambda^+, \mu^{\textup{ind}},\allowbreak{\sqsubseteq},1)$ holds. By Shelah's theorem, $2^\lambda=\lambda^+$ implies $\diamondsuit(\lambda^+)$, so that, altogether, $\mathop{\mathrm{P}}(\lambda^+, \mu^{\textup{ind}},\allowbreak{\sqsubseteq},1)$ holds. In addition, it is a classical theorem of Jensen that $\square_\lambda$ gives a special $\lambda^+$-Aronszajn tree, so by Theorem [Theorem 24](#lemma39){reference-type="ref" reference="lemma39"}, $\mathop{\mathrm{acc}}(\lambda^+)\in\mathop{\mathrm{Vspec}}(\lambda^+)$. It now follows from Lemma [Lemma 66](#l63){reference-type="ref" reference="l63"} that there exists a normal and splitting streamlined $\lambda^+$-Souslin tree $T$ such that $V(T)$ covers a club in $\lambda^+$ and such that $T$ admits a $\mu$-ascent path. Finally, the proof of Lemma [Lemma 4](#lemma33){reference-type="ref" reference="lemma33"} completes this proof. ◻ *Remark 68*. The conclusion of the preceding remains valid after relaxing $\square_\lambda$ to $\square_\lambda(\sqsubseteq_\mu)$. In particular, the conclusion of the preceding is compatible with $\mu$ being supercompact. We now turn to combine the preceding construction with the study of large cardinals. The following cardinal characteristic $\chi(\kappa)$ provides a measure of how far $\kappa$ is from being weakly compact. **Definition 69** (The $C$-sequence number of $\kappa$, [@paper35]). If $\kappa$ is weakly compact, then let $\chi(\kappa):=0$. Otherwise, let $\chi(\kappa)$ denote the least cardinal $\chi\le\kappa$ such that, for every $C$-sequence $\langle C_\beta\mathrel{|}\allowbreak\beta<\kappa\rangle$, there exist $\Delta\in[\kappa]^\kappa$ and $b:\kappa\rightarrow[\kappa]^{\chi}$ with $\Delta\cap\alpha\subseteq\bigcup_{\beta\in b(\alpha)}C_\beta$ for every $\alpha<\kappa$. By [@paper35 Lemma 2.12(1)], if $\kappa$ is an inaccessible cardinal satisfying $\chi(\kappa)<\kappa$, then $\kappa$ is $\omega$-Mahlo. 
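Since the forthcoming theorem concerns trees carrying $\omega$-ascent paths, it may be worth recording a toy sanity check on Definition 64. The following is our own illustration, not taken from the source, and the notation $b(\alpha)$ for the node of a branch at level $\alpha$ is ours:

```latex
% A toy illustration of Definition 64 (our own; not from the source).
% Assumption: b is a cofinal branch through T, meeting each level
% T_alpha in a unique node, denoted b(alpha).
\begin{remark}
  Define $f_\alpha:\mu\rightarrow T_\alpha$ by letting
  $f_\alpha(j):=b(\alpha)$ for all $j<\mu$. Given
  $\alpha<\beta<\kappa$, we have $b(\alpha)<_T b(\beta)$, and hence
  $f_\alpha(j)<_T f_\beta(j)$ for every $j<\mu$, so that the second
  clause of Definition~64 is satisfied with the witness $i:=0$.
  Thus, a $\mu$-ascent path is a weakening of a cofinal branch, and
  the interest lies in branchless (e.g., Aronszajn) trees that
  nevertheless carry one.
\end{remark}
```

In particular, an ascent path through a $\kappa$-Aronszajn tree may be viewed as a coherent "$\mu$-wide branch" surviving in a tree that admits no genuine cofinal branch.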
The following is an expanded form of Theorem [Theorem 5](#thme){reference-type="ref" reference="thme"}. **Theorem 70**. *Assuming the consistency of a weakly compact cardinal, it is consistent that for some strongly inaccessible cardinal $\kappa$ satisfying $\chi(\kappa)=\omega$, the following two hold:* - *Every $\kappa$-Aronszajn tree admits an $\omega$-ascent path;* - *There is a $\kappa$-Souslin tree $\mathbf T$ such that $V(\mathbf T)=\mathop{\mathrm{acc}}(\kappa)$.* *Proof.* Suppose that $\kappa$ is a non-subtle weakly compact cardinal. By possibly using a preparatory forcing, we may assume that the non-subtle weak compactness of $\kappa$ is indestructible under forcing with $\mathrm{Add}(\kappa,1)$. Following the proof of [@paper35 Theorem 3.4], let $\mathbb{P}$ be the standard forcing to add $\square^{\textup{ind}}(\kappa,\omega)$-sequence by closed initial segments, let $G$ be $\mathbb{P}$-generic, and let $\vec{\mathcal{C}}=\langle C_{\alpha,i} \mathrel{|}\allowbreak\alpha<\kappa,~i(\alpha) \leq i<\omega \rangle$ denote the generically-added $\square^{\textup{ind}}(\kappa,\omega)$-sequence. Work in $V[G]$. By Clauses (1),(2) and (4) of [@paper35 Theorem 3.4], $\kappa$ is strongly inaccessible, $\chi(\kappa)=\omega$, and every $\kappa$-Aronszajn tree admits an $\omega$-ascent path. For every $\alpha\in \mathop{\mathrm{acc}}(\kappa)$, let $$B_\alpha:=\{ \beta\in C_{\alpha,i(\alpha)}\mathrel{|}\allowbreak\forall l<\omega\,[\min(C_{\alpha,i(\alpha)}\setminus\beta+1)+l\in C_{\alpha,i(\alpha)}]\}.$$ **Claim 20**. *For every cofinal $B\subseteq\kappa$, there exist $\alpha\in E^\kappa_\omega$ and $\epsilon<\alpha$ such that $(B_\alpha\setminus\epsilon)\subseteq B$, $i(\alpha)=0$ and $\sup(\mathop{\mathrm{nacc}}(C_{\alpha,i})\cap B_\alpha)=\alpha$ for all $i<\omega$.* *Proof.* We follow the proof of [@Chris17 Lemma 3.9]. Work in $V$. For every $\alpha\in\mathop{\mathrm{acc}}(\kappa)$, let $\dot{B_\alpha}$ be the canonical $\mathbb P$-name for $B_\alpha$. 
Next, let $\dot{B}$ be a $\mathbb P$-name for a cofinal subset of $\kappa$, and let $p_0$ be an arbitrary condition in $\mathbb{P}$. By possibly extending $p_0$, we may assume that $i(\gamma^{p_0})^{p_0}=0$. We shall recursively define a decreasing sequence of conditions $\langle p_n \mathrel{|}\allowbreak n<\omega \rangle$, and an increasing sequence of ordinals $\langle \beta_n \mathrel{|}\allowbreak n<\omega \rangle$ such that for every $n<\omega$, all of the following hold: 1. [\[dec1\]]{#dec1 label="dec1"} $p_{n+1}\leq p_n$; 2. $i(\gamma^{p_{n+1}})^{p_{n+1}}=0$; 3. $p_{n+1} \Vdash ``\beta_n \in \dot{B}\text{ and }\dot B_{\gamma^{p_{n+1}}}\setminus(\gamma^{p_n}+1)=\{\beta_n \}"$; 4. For every $i\le n$, $\beta_n\in\mathop{\mathrm{nacc}}(C_{\gamma^{p_{n+1}},i}^{p_{n+1}})$; 5. [\[dec4\]]{#dec4 label="dec4"} For every $i<\omega$, $C_{\gamma^{p_{n+1}},i}^{p_{n+1}} \cap (\gamma^{p_n}+1)=C_{\gamma^{p_n},i}^{p_{n}}\cup \{\gamma^{p_n}\}$. Suppose $n<\omega$ is such that $\langle p_m \mathrel{|}\allowbreak m\leq n \rangle$ and $\langle \beta_m \mathrel{|}\allowbreak m< n \rangle$ have already been successfully defined. Find a $p^*_n \leq p_n$ and a $\beta_n>\gamma^{p_n}$ such that $p^*_n \Vdash ``\beta_n \in \dot{B}"$. Without loss of generality, $\gamma^{p^*_n}>\beta_n$. Now, let $\gamma:=\gamma^{p^*_n}+\omega$, so that $$\gamma^{p_n}<\beta_n<\gamma^{p_n^*}<\gamma^{p^*_n}+\omega=\gamma.$$ Let $m <\omega$ be the least such that $m\geq \max\{n, i(\gamma^{p^*_n})^{p^*_n}\}$ and $\gamma^{p_n} \in \mathop{\mathrm{acc}}(C_{\gamma^{p_n^*},m}^{p_n^*})$. 
Then let $p_{n+1}$ be the unique extension of $p_n^*$ with $\gamma^{p_{n+1}}=\gamma$ and $i(\gamma)^{p_{n+1}}=0$ to satisfy the following for all $i<\omega$: $$C_{\gamma,i}^{p_{n+1}}:=\begin{cases} C_{\gamma^{p_n},i}^{p_{n}}\cup \{ \gamma^{p_n},\beta_n \}\cup \{ \gamma^{p_n^*}+l \mathrel{|}\allowbreak l<\omega \},&\text{if }i\leq m;\\ C_{\gamma^{p_n^*},i}^{p^*_{n}}\cup \{ \gamma^{p_n^*}+l \mathrel{|}\allowbreak l<\omega \},&\text{otherwise}. \end{cases}$$ Thus, we have maintained requirements [\[dec1\]](#dec1){reference-type="eqref" reference="dec1"}--[\[dec4\]](#dec4){reference-type="eqref" reference="dec4"}. Once completing the above recursion, we obtain a decreasing sequence of conditions $\langle p_n \mathrel{|}\allowbreak n<\omega \rangle$. Let $\alpha:=\sup\{ \gamma^{p_n} \mathrel{|}\allowbreak n<\omega\}$, and let $p$ be the unique lower bound of $\langle p_n \mathrel{|}\allowbreak n<\omega \rangle$ to satisfy $\gamma^{p}=\alpha$, $i(\alpha)^{p}=0$, and $C^{p}_{\alpha,i}=\bigcup_{n<\omega}C^{p_n}_{\gamma^{p_n},i}$ for every $i<\omega$. Then $p$ is a legitimate condition satisfying $p\Vdash``\dot B_\alpha\setminus(\gamma^{p_0}+1)=\{\beta_n \mathrel{|}\allowbreak n<\omega \} \subseteq\dot{B}"$. In addition, for all $i<\omega$, $\{ \beta_n \mathrel{|}\allowbreak i \leq n<\omega\} \subseteq\mathop{\mathrm{nacc}}(C_{\alpha,i}^{p})$. So we are done. ◻ We claim that $\vec{\mathcal{C}}$ is a $\mathop{\mathrm{P}}^-(\kappa, \omega^{\textup{ind}},\allowbreak{\sqsubseteq},1)$-sequence. As we already know that $\vec{\mathcal{C}}$ is an $\square^{\textup{ind}}(\kappa,\omega)$-sequence, we just need to verify that it satisfies the last bullet of Definition [Definition 65](#indexedP){reference-type="ref" reference="indexedP"} with $\theta:=1$ and $\mathcal S:=\{\kappa\}$. 
But, by the same argument from the proof of [@paper23 Corollary 3.4], this boils down to showing that for every cofinal $B\subseteq\kappa$, there exists at least one $\alpha\in\mathop{\mathrm{acc}}(\kappa)$ such that $\sup(\mathop{\mathrm{nacc}}(C_{\alpha,i})\cap B)=\alpha$ for all $i\in[i(\alpha),\omega)$. This is covered by Claim [Claim 20](#671){reference-type="ref" reference="671"}. **Claim 21**. *$\diamondsuit(E^\kappa_\omega)$ holds.* *Proof.* This is a standard consequence of Claim [Claim 20](#671){reference-type="ref" reference="671"} together with the fact that $\kappa^{<\kappa}=\kappa$, but we give the details. Let $\vec X=\langle X_\beta\mathrel{|}\allowbreak\beta<\kappa\rangle$ be a repetitive enumeration of $[\kappa]^{<\kappa}$ such that each set appears cofinally often. Let us say that an ordinal $\alpha\in E^\kappa_\omega$ is *informative* if $\sup(B_\alpha)=\alpha$ and there are $\epsilon<\alpha$ and a subset $A_\alpha\subseteq\alpha$ such that $A_\alpha\cap\gamma=X_\beta\cap\gamma$ for every pair $\gamma<\beta$ of ordinals from $B_\alpha\setminus\epsilon$. Note that if $\alpha$ is informative, then the set $A_\alpha$ is uniquely determined. For a noninformative $\alpha\in E^\kappa_\omega$, we let $A_\alpha:=\emptyset$. To verify that $\langle A_\alpha\mathrel{|}\allowbreak\alpha\in E^\kappa_\omega\rangle$ witnesses $\diamondsuit(E^\kappa_\omega)$, let $A$ be a subset of $\kappa$ and let $C$ be a club in $\kappa$, and we shall find an $\alpha\in C\cap E^\kappa_\omega$ such that $A\cap\alpha=A_\alpha$. By the choice of $\vec X$, we may fix a strictly increasing function $f:\kappa\rightarrow\kappa$ satisfying that $A\cap\xi=X_{f(\xi)}$ for every $\xi<\kappa$. Consider the club $D:=\{\delta\in C\mathrel{|}\allowbreak f[\delta]\subseteq\delta\}$. Let $B$ be some cofinal subset of $\mathop{\mathrm{Im}}(f)$ sparse enough to satisfy that for every pair $\gamma<\beta$ of ordinals from $B$, there exists a $\delta\in D$ with $\gamma<\delta<\beta$. 
Using Claim [Claim 20](#671){reference-type="ref" reference="671"}, fix $\alpha\in E^\kappa_\omega$ and $\epsilon<\alpha$ such that $(B_\alpha\setminus\epsilon)\subseteq B$ and $\sup(B_\alpha)=\alpha$. Now, let $\gamma<\beta$ be a pair of ordinals in $B_\alpha\setminus\epsilon$. As $\gamma,\beta\in B$, we may pick a $\delta\in D$ with $\gamma<\delta<\beta$. As $\beta\in B\subseteq\mathop{\mathrm{Im}}(f)$, we may also pick a $\xi<\kappa$ such that $\beta=f(\xi)$. Since $f[\delta]\subseteq\delta\subseteq\beta$, it must be the case that $\xi\ge\delta>\gamma$. So $A\cap\gamma=(A\cap\xi)\cap\gamma=X_\beta\cap\gamma$. Thus, we showed that $A\cap\gamma=X_\beta\cap\gamma$ for every pair $\gamma<\beta$ of ordinals in $B_\alpha\setminus\epsilon$, and hence $\alpha$ is informative and $A_\alpha=A\cap\alpha$. In addition, for every pair $\gamma<\beta$ of ordinals in $B_\alpha\setminus\epsilon$, there exists $\delta\in D$ with $\gamma<\delta<\beta$, and hence $\alpha\in\mathop{\mathrm{acc}}^+(D)\subseteq C$. ◻ Altogether, $\mathop{\mathrm{P}}(\kappa, \omega^{\textup{ind}},\allowbreak{\sqsubseteq},1)$ holds. Since $\kappa$ is a strongly inaccessible cardinal that is non-subtle, Corollary [Corollary 19](#stp){reference-type="ref" reference="stp"} implies that there exists a streamlined $\kappa$-tree $K$ such that $V^-(K)$ covers a club in $\kappa$. So by appealing to Lemma [Lemma 66](#l63){reference-type="ref" reference="l63"} and then to Lemma [Lemma 4](#lemma33){reference-type="ref" reference="lemma33"}, we infer that there exists a $\kappa$-Souslin tree $\mathbf T$ with $V(\mathbf T)=\mathop{\mathrm{acc}}(\kappa)$. ◻ By [@paper48 Theorem 2.30], $\chi(\kappa)=0$ refutes $\clubsuit_{\mathop{\mathrm{AD}}}(\mathop{\mathrm{Reg}}(\kappa))$. An easy variant of that proof yields that $\chi(\kappa)=0$ furthermore refutes $\clubsuit_{\mathop{\mathrm{AD}}}(\mathop{\mathrm{Reg}}(\kappa)\cap D)$ for every club $D\subseteq\kappa$. 
It follows from the preceding theorem together with the proof of [@paper48 Theorem 2.23] that $\chi(\kappa)=\omega$ is compatible with $\clubsuit_{\mathop{\mathrm{AD}}}(D)$ holding for some club $D\subseteq\kappa$. Whether this can be improved to $\chi(\kappa)=1$ remains an open problem. # A new sufficient condition for a Dowker space {#secA} **Definition 71** ([@paper48]). Let $\mathcal S$ be a collection of stationary subsets of a regular uncountable cardinal $\kappa$, and $\mu,\theta$ be nonzero cardinals below $\kappa$. The principle $\clubsuit_{\mathop{\mathrm{AD}}}(\mathcal S,\mu,\theta)$ asserts the existence of a sequence $\langle \mathcal A_\alpha\mathrel{|}\allowbreak\alpha\in\bigcup\mathcal S\rangle$ such that: 1. For every $\alpha\in\mathop{\mathrm{acc}}(\kappa)\cap\bigcup\mathcal S$, $\mathcal A_\alpha$ is a pairwise disjoint family of $\mu$ many cofinal subsets of $\alpha$; 2. For every $\mathcal B\subseteq[\kappa]^\kappa$ of size $\theta$, for every $S\in\mathcal S$, there are stationarily many $\alpha\in S$ such that $\sup(A\cap B)=\alpha$ for all $A\in\mathcal A_\alpha$ and $B\in\mathcal B$;[^10] 3. For all $A\neq A'$ from $\bigcup_{S\in\mathcal S}\bigcup_{\alpha\in S}\mathcal A_\alpha$, $\sup(A\cap A')<\sup(A)$. *Remark 72*. The variation $\clubsuit_{\mathop{\mathrm{AD}}}(\mathcal S,\mu,{<}\theta)$ asserts the existence of a sequence simultaneously witnessing $\clubsuit_{\mathop{\mathrm{AD}}}(\mathcal S,\mu,\vartheta)$ for all $\vartheta<\theta$. By [@paper48 Lemma 2.10], for a pair $\chi<\kappa$ of infinite regular cardinals, for a stationary subset $S$ of $E^\kappa_\chi$, Ostaszewski's principle $\clubsuit(S)$ implies $\clubsuit_{\mathop{\mathrm{AD}}}(\mathcal S,\chi,{<}\omega)$ for some partition $\mathcal S$ of $S$ into $\kappa$ many stationary sets. The next lemma reduces the hypothesis "$S\subseteq E^\kappa_\chi$" down to "$S\cap\mathop{\mathrm{Tr}}(S)=\emptyset$". **Lemma 73**. 
*Suppose:* - *$\mu,\theta<\kappa=\kappa^{<\theta}$ are infinite cardinals;* - *$S\subseteq E^\kappa_{\ge\max\{\mu,\theta\}}$ is stationary and $\mathop{\mathrm{Tr}}(S)\cap S=\emptyset$;* - *$\clubsuit(S)$ holds.* *Then $\clubsuit_{\mathop{\mathrm{AD}}}(\mathcal S,\mu,{<}\theta)$ holds for some partition $\mathcal S$ of $S$ into $\kappa$ many stationary sets. More generally, for every $Z\subseteq\kappa$ such that $S\subseteq\mathop{\mathrm{acc}}^+(Z)$, there exists a matrix $\langle A_{\delta,i}\mathrel{|}\allowbreak\delta\in S, i<\mu\rangle$ and a partition $\mathcal S$ of $S$ into $\kappa$ many pairwise disjoint stationary sets such that:* 1. *For all $\delta\in S$, $\langle A_{\delta,i}\mathrel{|}\allowbreak i<\mu\rangle$ is a sequence of pairwise disjoint subsets of $Z\cap\delta$, and $\sup(A_{\delta,i})=\delta$;* 2. *For every $(\gamma,\delta)\in[S]^2$, for all $i,j<\mu$, $\sup(A_{\gamma,i}\cap A_{\delta,j})<\gamma$;* 3. *For every $\vartheta<\theta$, every sequence $\langle B_\tau\mathrel{|}\allowbreak\tau<\vartheta\rangle$ of cofinal subsets of $Z$ and every $S'\in\mathcal S$, there exists $\delta\in S'$ such that $\sup(A_{\delta,i}\cap B_\tau)=\delta$ for all $i<\mu$ and $\tau<\vartheta$.* *Proof.* By [@paper23 Theorem 3.7], since $\clubsuit(S)$ holds, we may find a partition $\langle S_{\vartheta,\iota}\mathrel{|}\allowbreak\vartheta<\theta, \iota<\kappa\rangle$ of $S$ into stationary sets such that $\clubsuit(S_{\vartheta,\iota})$ holds for all $\vartheta<\theta$ and $\iota<\kappa$. 
For all $\vartheta<\theta$ and $\iota<\kappa$, since $\clubsuit(S_{\vartheta,\iota})$ holds and $\kappa^{\vartheta}=\kappa$, by [@paper48 Lemma 3.5], we may fix a matrix $\langle X_\delta^\tau\mathrel{|}\allowbreak\delta\in S_{\vartheta,\iota}, \tau<\vartheta\rangle$ such that, for every sequence $\langle X^\tau\mathrel{|}\allowbreak\tau<\vartheta\rangle$ of cofinal subsets of $\kappa$, there are stationarily many $\delta\in S_{\vartheta,\iota}$, such that, for all $\tau<\vartheta$, $X^\tau_\delta\subseteq X^\tau\cap\delta$ and $\sup(X^\tau_\delta)=\delta$. Now, let $Z\subseteq\kappa$ with $S\subseteq\mathop{\mathrm{acc}}^+(Z)$ be given. For all $\vartheta<\theta$, $\iota<\kappa$, $\delta\in S_{\vartheta,\iota}$ and $\tau<\vartheta$, we do the following: - if $X^\tau_\delta\cap Z$ is a cofinal subset of $\delta$, then let $Y^\tau_\delta:=X^\tau_\delta\cap Z$. Otherwise, let $Y^\tau_\delta$ be an arbitrary cofinal subset of $Z\cap\delta$; - since $\delta\in S\subseteq\kappa\setminus\mathop{\mathrm{Tr}}(S)$, we may fix a club $C_\delta\subseteq\delta$ disjoint from $S$, and then, by [@paper23 Lemma 3.3], we may find a cofinal subset $Z^\tau_\delta$ of $Y^\tau_\delta$ such that in-between any two points of $Z^\tau_\delta$ there exists a point of $C_\delta$, so that $\mathop{\mathrm{acc}}^+(Z^\tau_\delta)\cap S=\emptyset$. As $\mathop{\mathrm{cf}}(\delta)\ge\theta>\vartheta$ and by possibly thinning out, we may assume that $\langle Z^\tau_\delta\mathrel{|}\allowbreak\tau<\vartheta\rangle$ consists of pairwise disjoint cofinal subsets of $Z\cap\delta$. As $\mathop{\mathrm{cf}}(\delta)\ge\mu$, for every $\tau<\vartheta$, we may fix a partition $\langle Z_\delta^{\tau,i}\mathrel{|}\allowbreak i<\mu\rangle$ of $Z_\delta^{\tau}$ into cofinal subsets of $\delta$. 
For every $i<\mu$, let $$A_{\delta,i}:=\bigcup_{\tau<\vartheta}Z_\delta^{\tau,i}.$$ For every $i<\mu$, since $\mathop{\mathrm{acc}}^+(Z_\delta^{\tau,i})\cap S\subseteq\mathop{\mathrm{acc}}^+(Z^\tau_\delta)\cap S=\emptyset$ , and since $\delta\in S\subseteq E^\kappa_{>\vartheta}$, we get that $\mathop{\mathrm{acc}}^+(A_{\delta,i})\cap S=\emptyset$. So $\langle A_{\delta,i}\mathrel{|}\allowbreak i<\mu\rangle$ is a sequence of pairwise disjoint cofinal subsets of $\delta$, and for every $\gamma\in S\cap\delta$ and every cofinal subset $A\subseteq\gamma$, $\sup(A\cap A_{\delta,i})<\gamma$. Thus, we have already taken care of Clauses (1) and (2). Next, consider $\mathcal S:=\{ \bigcup_{\vartheta<\theta}S_{\vartheta,\iota}\mathrel{|}\allowbreak\iota<\kappa\}$ which is a partition of $S$ into $\kappa$ many stationary sets. Now, given $\vartheta<\theta$, a sequence $\langle B_\tau\mathrel{|}\allowbreak\tau<\vartheta\rangle$ of cofinal subsets of $Z$, and some $S'\in\mathcal S$, we may find $\iota<\kappa$ such that $S'\supseteq S_{\vartheta,\iota}$, and find $\delta\in S_{\vartheta,\iota}$ such that, for all $\tau<\vartheta$, $X^\tau_\delta\subseteq B_\tau\cap\delta$ and $\sup(X^\tau_\delta)=\delta$. In particular, for all $\tau<\vartheta$ and $i<\mu$, $Z_\delta^{\tau,i}\subseteq Z^\tau_\delta\subseteq Y^\tau_\delta=X^\tau_\delta\cap Z\subseteq B_\tau$. Therefore, for all $\tau<\vartheta$ and $i<\mu$, $\sup(A_{\delta,i}\cap B_\tau)=\delta$. ◻ **Corollary 74**. *Suppose that $\clubsuit(S)$ holds for some nonreflecting stationary subset $S$ of $\kappa$. 
Then $\clubsuit_{\mathop{\mathrm{AD}}}(\mathcal S,\omega,{<}\omega)$ holds for some partition $\mathcal S$ of $S$ into $\kappa$ many stationary sets. ◻* The preceding yields the proof of Theorem [Theorem 6](#thmf){reference-type="ref" reference="thmf"}, which in turn extends an old result of Good [@MR1216813], who obtained a Dowker space of size $\lambda^+$ from $\clubsuit(S)$ holding over a nonreflecting stationary $S\subseteq E^{\lambda^+}_\omega$.[^11] **Corollary 75**. *If $\clubsuit(S)$ holds over a nonreflecting stationary $S\subseteq\kappa$, then there are $2^\kappa$ many pairwise nonhomeomorphic Dowker spaces of size $\kappa$.* *Proof.* By [@paper54 Theorem A.1], if $\clubsuit_{\mathop{\mathrm{AD}}}(\mathcal S,1,2)$ holds for a partition $\mathcal S$ of a nonreflecting stationary subset of $\kappa$ into $\kappa$ many stationary sets, then there are $2^\kappa$ many pairwise nonhomeomorphic Dowker spaces of size $\kappa$. ◻ Our last corollary deals with the problem of having $\clubsuit_{\mathop{\mathrm{AD}}}$ hold over a club subset of a successor cardinal. **Corollary 76**. *Suppose that $\kappa=\lambda^+$ for some infinite cardinal $\lambda$, and that $\clubsuit(E^\kappa_\theta)$ holds for every $\theta\in\mathop{\mathrm{Reg}}(\kappa)$. Then there exists a partition $\mathcal S$ of some club subset $D\subseteq\mathop{\mathrm{acc}}(\kappa)$ into $\kappa$ many sets such that $\clubsuit_{\mathop{\mathrm{AD}}}(\mathcal S,\omega,1)$ holds. Furthermore, there is a matrix $\langle A_{\delta,i}\mathrel{|}\allowbreak\delta\in D, i<\mathop{\mathrm{cf}}(\delta)\rangle$ such that:* 1. *For every $\delta\in D$, $\langle A_{\delta,i}\mathrel{|}\allowbreak i<\mathop{\mathrm{cf}}(\delta)\rangle$ is a sequence of pairwise disjoint cofinal subsets of $\delta$;* 2. *For all $A\neq A'$ from $\{ A_{\delta,i}\mathrel{|}\allowbreak\delta\in D, i<\mathop{\mathrm{cf}}(\delta)\}$, $\sup(A\cap A')<\sup(A)$;* 3. 
*For every cofinal $B\subseteq\kappa$, for every $S\in\mathcal S$, there are stationarily many $\delta\in S$ such that $\sup(A_{\delta,i}\cap B)=\delta$ for all $i<\mathop{\mathrm{cf}}(\delta)$.* *Proof.* Let $\langle Z_\mu\mathrel{|}\allowbreak\mu\in\mathop{\mathrm{Reg}}(\kappa)\rangle$ be a partition of $\kappa$ into cofinal sets. Let $D:=\bigcap_{\mu\in\mathop{\mathrm{Reg}}(\kappa)}\mathop{\mathrm{acc}}^+(Z_\mu)$. For every $\mu\in\mathop{\mathrm{Reg}}(\kappa)$, by Lemma [Lemma 73](#lem7.1){reference-type="ref" reference="lem7.1"}, we may fix a matrix $\langle A_{\delta,i}\mathrel{|}\allowbreak\delta\in E^\kappa_\mu, i<\mu\rangle$ and a partition $\langle S_{\mu,\iota}\mathrel{|}\allowbreak\iota<\kappa\rangle$ of $E^\kappa_\mu$ into $\kappa$ many pairwise disjoint stationary sets such that: - For all $\delta\in E^\kappa_\mu$, $\langle A_{\delta,i}\mathrel{|}\allowbreak i<\mu\rangle$ is a sequence of pairwise disjoint subsets of $Z_\mu\cap\delta$, and $\sup(A_{\delta,i})=\delta$; - For every $(\gamma,\delta)\in[E^\kappa_\mu]^2$, for all $i,j<\mu$, $\sup(A_{\gamma,i}\cap A_{\delta,j})<\gamma$; - For every cofinal $B\subseteq Z_\mu$, for every $\iota<\kappa$, there exists $\delta\in S_{\mu,\iota}$ such that $\sup(A_{\delta,i}\cap B)=\delta$ for all $i<\mu$. Putting these matrices together, we get a matrix $\langle A_{\delta,i}\mathrel{|}\allowbreak\delta\in D, i<\mathop{\mathrm{cf}}(\delta)\rangle$ satisfying Clause (1). In addition, since $Z_\mu\cap Z_{\mu'}=\emptyset$ for $\mu\neq\mu'$, Clause (2) is satisfied. Now, $\mathcal S:=\{ \bigcup_{\mu\in\mathop{\mathrm{Reg}}(\kappa)}S_{\mu,\iota}\mathrel{|}\allowbreak\iota<\kappa\}$ is a partition of $D$ into $\kappa$ many stationary sets. By the pigeonhole principle, for every cofinal $B\subseteq\kappa$, there exists some $\mu\in\mathop{\mathrm{Reg}}(\kappa)$ such that $B\cap Z_\mu$ is cofinal in $\kappa$.
So, for every $S\in\mathcal S$, there exist $\iota<\kappa$ and $\delta\in S_{\mu,\iota}\subseteq S$ such that $\sup(A_{\delta,i}\cap B)=\delta$ for all $i<\mathop{\mathrm{cf}}(\delta)$. ◻ # Acknowledgments {#acknowledgments .unnumbered} The first and second author were supported by the Israel Science Foundation (grant agreement 203/22). The first and third author were supported by the European Research Council (grant agreement ERC-2018-StG 802756). Some of the results of this paper were presented by the second author in a poster session at the *Young Set Theory Workshop* in Novi Sad, August 2022. Additional results were presented by the first author as part of a graduate course on *Set Theory, Algebra and Analysis* at the Fields Institute for Research in Mathematical Sciences during the spring semester of 2023. We thank the corresponding organizers for the opportunity to present this work and the participants for their feedback. RYY23 Uri Abraham and Saharon Shelah. . , 59(1):1--32, 1993. Ari Meir Brodsky and Assaf Rinot. A microscopic approach to Souslin-tree constructions. Part I. , 168(11):1949--2007, 2017. Ari Meir Brodsky and Assaf Rinot. Reduced powers of Souslin trees. , 5(e2):1--82, 2017. Ari Meir Brodsky and Assaf Rinot. Distributive Aronszajn trees. , 245(3):217--291, 2019. Ari Meir Brodsky and Assaf Rinot. More notions of forcing add a Souslin tree. , 60(3):437--455, 2019. Ari Meir Brodsky and Assaf Rinot. A remark on Schimmerling's question. , 36(3):525--561, 2019. Ari Meir Brodsky and Assaf Rinot. A microscopic approach to Souslin-tree constructions. Part II. , 172(5):Paper No. 102904, 65, 2021. Chris Good. Large cardinals and small Dowker spaces. , 123(1):263--272, 1995. Sherwood Hachtman and Dima Sinapova. The super tree property at the successor of a singular. , 236(1):473--500, 2020. Ronald Jensen and Kenneth Kunen. Some combinatorial properties of L and V. , 1969. Bernhard König. Local coherence. , 124(1-3):107--139, 2003. John Krueger. 
Weak square sequences and special Aronszajn trees. , 221(3):267--284, 2013. Chris Lambie-Hanson. Aronszajn trees, square principles, and stationary reflection. , 63(3-4):265--281, 2017. Chris Lambie-Hanson and Philipp Lücke. Squares, ascent paths, and chain conditions. , 83(4):1512--1538, 2018. Chris Lambie-Hanson and Assaf Rinot. Knaster and friends II: The C-sequence number. , 21(1):2150002, 54, 2021. Assaf Rinot. Higher Souslin trees and the GCH, revisited. , 311(C):510--531, 2017. Assaf Rinot. On the ideal ${J}[\kappa]$. , 173(2):Paper No. 103055, 13pp, 2022. Assaf Rinot and Roy Shalev. A guessing principle from a Souslin tree, with applications to topology. , 323(C):Paper No. 108296, 29pp, 2023. Assaf Rinot, Roy Shalev, and Stevo Todorcevic. A new small Dowker space. , to appear. `https://doi.org/10.1007/s10998-023-00541-6`. Assaf Rinot, Shira Yadai, and Zhixing You. Full Souslin trees at small cardinals, submitted July 2023. `http://assafrinot.com/paper/62`. Stevo Todorcevic. , volume 263 of *Progress in Mathematics*. Birkhäuser Verlag, Basel, 2007. Boban Veličković. Jensen's $\square$ principles and the Novák number of partially ordered sets. , 51(1):47--58, 1986. Christoph Weiß. Subtle and ineffable tree properties. , 2010. [^1]: The definition of $\mathop{\mathrm{acc}}(\kappa)$ may be found in Subsection [1.2](#nocon){reference-type="ref" reference="nocon"} below. [^2]: The definition of $\clubsuit_{\mathop{\mathrm{AD}}}$ may be found in the paper's Appendix. [^3]: Note that any $\kappa$-Souslin must be normal on a tail end. [^4]: See Definitions [Definition 34](#proxydef){reference-type="ref" reference="proxydef"} and [Convention 35](#proxydef2){reference-type="ref" reference="proxydef2"} below. 
[^5]: *$\chi(\kappa)$ can be understood as measuring how far $\kappa$ is from being weakly compact; see Definition [Definition 69](#defcnm){reference-type="ref" reference="defcnm"} below.* [^6]: *That is, for all $\alpha<\kappa$ and $s,t\in T_\alpha$, there is an automorphism of $\mathbf T$ sending $s$ to $t$.* [^7]: *That is, a tree of height and size $\chi$ admitting at least $\chi^+$-many branches.* [^8]: The statement of the theorem in [@HS20] is limited to countable cofinality, but the proof works unconditionally. [^9]: To clarify, in the special case that $n=0$, $x_0*\cdots*x_n$ stands for $x_0$. [^10]: Note that the existence of stationarily many such $\alpha\in S$ is no stronger than the existence of just one $\alpha\in S$. See [@paper23 Corollary 3.4] for the prototype argument. [^11]: Strictly speaking, the hypothesis in [@MR1216813] is $\clubsuit_{\lambda^+}(S,2)$, but [@paper23 Lemma 3.5] shows that this is no stronger than the vanilla $\clubsuit(S)$.
--- abstract: | The alpha complex is a fundamental data structure from computational geometry, which encodes the topological type of a union of balls $B(x;r) \subset \mathbb{R}^m$ for $x\in S$, including a weighted version that allows for varying radii. It consists of the collection of "simplices" $\sigma=\{x_0,...,x_k\} \subset S$, which correspond to nonempty $(k+1)$-fold intersections of cells in a radius-restricted version of the Voronoi diagram $\mathop{\mathrm{Vor}}(S,r)$. Existing algorithms for computing the alpha complex require that the points reside in low dimension because they begin by computing the entire Delaunay complex, which rapidly becomes intractable, even when the alpha complex is of a reasonable size. This paper presents a method for computing the alpha complex without computing the full Delaunay triangulation by applying Lagrangian duality, specifically an algorithm based on dual quadratic programming that seeks to rule simplices out rather than ruling them in. author: - Erik Carlsson - John Carlsson bibliography: - refs.bib title: Computing the alpha complex using dual active set quadratic programming --- # Introduction Given a point cloud and a threshold radius, the alpha complex is a simplicial complex whose simplices correspond to relationships between points that are relevant in the sense that they are not too far apart. It is a generalization of the Delaunay triangulation, another fundamental computational geometric structure, which is the dual graph of the Voronoi diagram. Alpha complexes are used in a diverse range of application areas to study the shape of datasets, such as molecular biology [@liang1998analytical], crystallography [@stukowski2014computational], shape reconstruction [@1044617], and persistent homology [@otter2015roadmap]. Formally, let $S=\{x_{1},...,x_{N}\}\subset\mathbb{R}^{m}$ be a set of points, and let $r\geq 0$ be a nonnegative real number.
The *radius-restricted Voronoi diagram* is the collection $$\mathop{\mathrm{Vor}}(S,r)=\{V_{x}(r):x\in S\}\,,$$ where $V_{x}(r)=V_{x}\cap B(x;r)\subset\mathbb{R}^{m}$ is the intersection of the usual Voronoi cell $$V_{x}=\{y\in\mathbb{R}^{m}:\|y-x\|\leq\|y-x'\|\,\forall x'\in S\}$$ with the ball $B(x;r)$ of radius $r$ centered about $x$. The alpha complex is the *nerve* of $\mathop{\mathrm{Vor}}(S,r)$, that is, the simplicial complex defined as the collection $$\mathop{\mathrm{Alpha}}(S,r)=\left\{ \sigma\subset S:\bigcap_{x\in\sigma}V_{x}(r)\neq\emptyset\right\} .$$ Simply put, a subset $\sigma=\{x_{i_{0}},\dots,x_{i_{k}}\}\subset S$ belongs to $\mathop{\mathrm{Alpha}}(S,r)$ if there exists a point $y\in\mathbb{R}^{m}$ that is equidistant from every member of $\sigma$, i.e. $\rho:=\|y-x_{i_{0}}\|=\cdots=\|y-x_{i_{k}}\|\leq r$, and furthermore, $\|y-x\|\geq\rho$ for all $x\in S$. A given subset $\sigma$ of size $k+1$ is called a $k$-dimensional simplex of $\mathop{\mathrm{Alpha}}(S,r)$. The alpha complex is a subcomplex of $\mathop{\mathrm{Delaunay}}(S)$, which is the nerve of the full Voronoi diagram $\mathop{\mathrm{Vor}}(S)$, and which agrees with the Delaunay triangulation when the points of $S$ are planar points in general position. Figure [\[fig:voronoi-delaunay-alpha\]](#fig:voronoi-delaunay-alpha){reference-type="ref" reference="fig:voronoi-delaunay-alpha"} shows one of these constructions. ![Voronoi diagram](figures/intro-pic-voronoi.pdf){#fig:Voronoi-diagram} ![Delaunay triangulation](figures/intro-pic-delaunay.pdf){#fig:Delaunay-triangulation} ![Alpha complex](figures/intro-pic-alpha.pdf){#fig:Alpha-complex} More generally, there exists an extension of the alpha complex known as the weighted alpha complex, in which the Voronoi cells are replaced by a power diagram, which allows for balls of different radii.
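To make the membership condition concrete in the unweighted planar case, here is a minimal pure-Python sketch for $1$-simplices: if the midpoint of an edge is at least as close to its two endpoints as to every other point of $S$, then the midpoint itself is the witness $y$, and the edge appears at radius equal to half the edge length. The name `gabriel_edge` and the restriction to this midpoint-witness (Gabriel) case are our own simplifications for illustration; when the check fails, the edge may still belong to $\mathop{\mathrm{Alpha}}(S,r)$ via a witness elsewhere on the bisector.

```python
from math import dist

def gabriel_edge(a, b, S, r):
    """Sufficient test for the edge {a, b} to lie in Alpha(S, r):
    the midpoint m of [a, b] is equidistant from a and b, and if no
    other point of S is strictly closer to m, then m is a witness."""
    m = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
    rho = dist(m, a)                      # common distance to a and b
    if rho > r:
        return False, rho
    # witness condition: ||m - x|| >= rho for every other x in S
    ok = all(dist(m, x) >= rho for x in S if x != a and x != b)
    return ok, rho

S = [(0.0, 0.0), (2.0, 0.0), (1.0, 3.0)]
in_alpha, rho = gabriel_edge(S[0], S[1], S, r=1.5)  # witness (1, 0), rho = 1
```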
An example of a power diagram and its associated weighted alpha complex is shown in Figure [5](#fig:nerve){reference-type="ref" reference="fig:nerve"} below. The weighted alpha complex gives rise to the *alpha shapes* [@edelsbrunner1992weighted], whose applications include the aforementioned study of biomolecules. A further extension is the wrap complex, which is used in surface modeling [@edelsbrunner2003wrap; @bauer2014morse]. In terms of computational topology, the alpha complex is homotopy equivalent to the union of the cells $$A=\bigcup_{x\in S}V_{x}(r)= \bigcup_{x\in S}B(x;r)\subset\mathbb{R}^{m}.$$ It can therefore be used to compute the topological type of a space which can be covered by balls from the combinatorial data of which cells intersect nontrivially. This is also true of the *Čech complex* $\mathop{\mathrm{\check{C}ech}}(S,r)$, defined as the nerve of the covering by the balls as on the right, which satisfies $$H_{*}(\mathop{\mathrm{Alpha}}(S,r))\cong H_{*}(\mathop{\mathrm{\check{C}ech}}(S,r)),$$ both sides being isomorphic to $H_{*}(A)$. The alpha complex is by definition a subcomplex of the Čech complex $\mathop{\mathrm{Alpha}}(S,r)\subset\mathop{\mathrm{\check{C}ech}}(S,r)$, and it typically has far fewer simplices. This is advantageous, for instance, for computing *persistent homology* [@edelsbrunner2002topological; @zomorodian2004computing], noting that both the Čech and alpha complexes give rise to a family of complexes, which are "filtered" by varying the radius $r$. Perhaps the most common construction for computing persistent homology is the Vietoris-Rips construction [@hausmann1995vietoris], especially through a highly efficient open source software tool known as Ripser [@bauer2021ripser]. Čech and alpha complexes can also be used for this purpose and have advantages over Vietoris-Rips in that they typically have far fewer simplices.
Moreover, because of theoretical guarantees stemming from the nerve theorem, their homology groups may be calculated exactly from usual, non-persistent homology, which requires only Gaussian elimination. One reason Čech and alpha complexes are not as commonly used is that Vietoris-Rips allows for non-Euclidean metrics, but a more crucial reason is the poor scalability of the Delaunay construction in dimensions greater than three. Most methods for computing the alpha complex begin by computing the full Delaunay complex, and removing simplices which do not come from a Voronoi face which has the restricted radius property [@cgal:dy-as3-23a; @edelsbrunner1992weighted; @edelsbrunner1994three; @tralie2021cechmate]. There are a wide range of highly efficient algorithms for computing the Delaunay complex in dimensions $m\leq3$, many of which exploit the empty circumsphere property, which states that a $3$-simplex (tetrahedron) belongs to $\mathop{\mathrm{Delaunay}}(S)$ in $\mathbb{R}^{3}$ if and only if its circumsphere contains no other points of $S$ [@bose2018flipping; @hurtado1996flipping; @aurenhammer1984optimal; @klein1993randomized; @bowyer1981computing; @watson1981delaunay]. In dimension $m>3$, one can still compute $\mathop{\mathrm{Delaunay}}(S)$ by applying flipping methods [@edelsbrunner1992incremental], or by reducing the problem to finding a convex hull in $\mathbb{R}^{m+1}$ as in [@edelsbrunner1985voronoi]. In terms of computing persistent homology, other authors have used a combination of the alpha complex and the Vietoris-Rips construction to improve efficiency [@mishra2023stability]. In higher dimensions, it is often the case that the alpha complex has a reasonable number of simplices, but the full Delaunay complex is far too large to be computed, having on the order of $O(N^{\lceil m/2\rceil})$ simplices. In this situation, the general pipeline of the previous paragraph must be replaced by one that does not compute the full complex.
One algorithm that takes this into account is given in [@sheehy2015output], whose complexity depends on the total size of the output, and on bounds relating the pairwise distance between points and the upper bound on the radius $r$. As an additional reduction, one is often only interested in the subcomplex $\mathop{\mathrm{Alpha}}_{\leq d}(S,r)$ consisting of simplices of dimension at most $d$. A brute-force approach would be to formulate the existence of each individual simplex $\sigma\in\mathop{\mathrm{Alpha}}(S,r)$ as an optimization problem $$\begin{aligned} \min_{y\in\mathbb{R}^{m}}\quad & \lVert y-x\rVert^{2}\\ \text{subject to}\quad & \lVert y-x\rVert=\lVert y-x'\rVert\quad\text{for all }x'\in\sigma,\\ & \lVert y-x\rVert\leq\lVert y-x'\rVert\quad\text{for all }x'\in S,\end{aligned}$$ [\[mini:qpdel\]]{#mini:qpdel label="mini:qpdel"} where $x$ is any particular element of $\sigma$, all choices yielding the same result. Specifically, we can conclude that a given simplex $\sigma$ is in $\mathop{\mathrm{Alpha}}(S,r)$ when the constraints are feasible, and the minimizing value is at most $r^{2}$. By squaring the conditions, expanding and canceling terms, we see that the above inequalities and equalities are in fact just linear constraints, making [\[mini:qpdel\]](#mini:qpdel){reference-type="eqref" reference="mini:qpdel"} into a constrained (convex) quadratic program. Computing $\mathop{\mathrm{Alpha}}_{\leq d}(S,r)$ would thus require solving $$\binom{N}{1}+\cdots+\binom{N}{d+1}$$ such quadratic programs. In reality, many of these simplices may be ruled out, including any simplex $\sigma$ which is not an element of the Čech complex, or one whose faces have been determined not to exist, assuming we are proceeding in order of increasing dimension. Additionally, we only need to consider those constraints in ([\[mini:qpdel\]](#mini:qpdel){reference-type="ref" reference="mini:qpdel"}) coming from vertices $x_{j}$ which are neighbors in the one-skeleton of $\mathop{\mathrm{\check{C}ech}}(S,r)$. Despite these reductions, computing the alpha complex directly in this way is too burdensome for large values of $N$ to be of practical value.
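The count above grows quickly even for modest parameters, which is what makes the brute-force approach untenable; a standard-library one-liner makes the scale explicit:

```python
from math import comb

def qp_count(N, d):
    """Number of candidate simplices of dimension at most d on N points:
    binom(N, 1) + ... + binom(N, d + 1)."""
    return sum(comb(N, k) for k in range(1, d + 2))

n_small = qp_count(100, 2)    # 100 + 4950 + 161700 candidate QPs
n_large = qp_count(1000, 2)   # over 1.6 * 10^8 candidate QPs
```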
The point of this paper is to show that this approach becomes practical, provided that we use Lagrangian duality to solve ([\[mini:qpdel\]](#mini:qpdel){reference-type="ref" reference="mini:qpdel"}) instead of directly attacking the original primal problem. The fact that any feasible point in the dual problem determines a lower bound for the optimum of the primal problem is well-suited for this purpose because it allows the algorithm to terminate whenever a dual feasible point is found with an objective value greater than $r^{2}$, which in many cases will happen at an early stage. Furthermore, the form of this particular dual problem, shown in ([\[maxi:dual\]](#maxi:dual){reference-type="ref" reference="maxi:dual"}), has the property that the zero vector $\lambda=0$ is always feasible. Thus, there is no startup cost associated with identifying an initial feasible solution. Another crucial benefit is that while the size of the alpha complex is related to the rough dimension of the space traversed by the point cloud $S$, there is essentially no dependence on the embedding, because dual programming algorithms depend only on the respective dot products. Our method, which is straightforward to describe, is implemented for the more general weighted alpha complex, and is described in Algorithm [\[alg:dualalpha\]](#alg:dualalpha){reference-type="ref" reference="alg:dualalpha"} below. Beyond using dual programming as described, we have taken advantage of one further observation: the minimization problem ([\[mini:qpdel\]](#mini:qpdel){reference-type="ref" reference="mini:qpdel"}) for a given face is the same as that of the full cell $V_{x}$, except that those inequalities determined by faces are replaced by the corresponding equalities.
The main loop of Algorithm [\[alg:dualalpha\]](#alg:dualalpha){reference-type="ref" reference="alg:dualalpha"} is written in a way so that the coefficients are only calculated once per vertex, rather than once per potential simplex, which would otherwise be a major computational cost. Algorithm [\[alg:dualalpha\]](#alg:dualalpha){reference-type="ref" reference="alg:dualalpha"} was written in MAPLE, and is available at the first author's webpage: <https://www.math.ucdavis.edu/~ecarlsson/>. This includes an implementation of an elegant recent dual active set method due to [@arnstrom2022dual], which we used to solve the dual quadratic programs. In Section [4](#sec:examples){reference-type="ref" reference="sec:examples"}, we illustrate our algorithm in several examples which we validated using homology calculations, and which are also available online as MAPLE worksheets. We compared our answer against the output of persistence calculations which we carried out in Ripser. In some of those examples there appears to be a potential computational advantage to using the alpha complex via our algorithm, as there often is for existing algorithms for the alpha complex in two or three dimensions [@somasundaram2021benchmarking]. However, the goal in comparing those answers is not to show a speed boost in persistent homology calculations, but rather to give a rigorous test of the correctness of the algorithm, which would fail to capture the correct homology if even a single simplex is incorrect. We make no comparison of the running time of our homology calculations, for which we used a general sparse matrix rank algorithm due to Dumas and Villard [@dumas2002sparse] instead of specialized methods. Intuitively, homology and persistent homology calculations of alpha complexes are expected to be faster than Vietoris-Rips calculations once the alpha complex has been computed, as the former is a subcomplex of the latter.
As we described above, the alpha complex has far-reaching applications beyond persistent homology computations, firstly in that it produces concrete geometric models, which give rise to the alpha shapes. In terms of homology, it is also useful that the alpha complex provides exact answers rather than persistence diagrams, which we use in Section [4.5](#sec:config){reference-type="ref" reference="sec:config"} to carry out an interesting calculation from geometric representation theory. Another recent application is due to the present authors, who discovered a hidden family of alpha complexes associated to the super-level sets of an arbitrary kernel density estimator in [@carlsson2023witness]. Implementing this construction in a way that does not scale poorly with the embedding dimension was the motivation behind our main algorithm. ## Acknowledgments Both authors were supported by the Office of Naval Research (ONR) N00014-20-S-B001 during this project, which they gratefully acknowledge. # Preliminaries on computational topology We set some notation and background about filtered simplicial complexes, referring to [@edelsbrunner2010computational] for more details. ## Computational topology {#sec:comptop} Let $S$ be a set of size $N$, which we assume is totally ordered. **Definition 1**. A simplicial complex $X$ on a vertex set $S$ is a collection of nonempty subsets of $S$ which is closed under taking nonempty subsets. The elements of $X$ are called simplices, and are denoted $\sigma=[\sigma_{0},...,\sigma_{k}]$, using closed brackets to indicate that the elements are distinct and written in order. The number $k$ is called the dimension of $\sigma$, and the set of simplices of dimension $k$ is denoted $X_k$. If $\sigma\in X$ is a simplex, then the subsets $\sigma' \subset \sigma$ are called the faces of $\sigma$. The collection of simplices of dimension at most $k$ is a subcomplex called the $k$-skeleton of $X$, which is denoted $\mathop{\mathrm{Skel}}_k(X)$.
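As a quick illustration of Definition 1, with simplices encoded as sorted tuples (the encoding and the helper names are our own):

```python
from itertools import combinations

def is_complex(X):
    """Definition 1: every nonempty proper subset of a simplex is a simplex."""
    sims = {frozenset(s) for s in X}
    return all(frozenset(f) in sims
               for s in sims
               for k in range(1, len(s))
               for f in combinations(s, k))

def skel(X, k):
    """The k-skeleton: simplices of dimension at most k."""
    return [s for s in X if len(s) <= k + 1]

# hollow triangle: three vertices and three edges, no 2-simplex
X = [(0,), (1,), (2,), (0, 1), (0, 2), (1, 2)]
```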
For instance, $\mathop{\mathrm{Skel}}_1(X)$ is a complex with only zero and one-dimensional simplices, which is the same data as a graph. The following definition will also appear in Algorithm [\[alg:dualalpha\]](#alg:dualalpha){reference-type="ref" reference="alg:dualalpha"} below. **Definition 2**. Let $X$ be a complex. We define $\mathop{\mathrm{Lazy}}_k(X)$ to be the largest simplicial complex $Z$ on the vertex set $S$ for which $\mathop{\mathrm{Skel}}_k(Z)=\mathop{\mathrm{Skel}}_k(X)$. For instance, we have that $\mathop{\mathrm{Lazy}}_{-1}(X)$ is the complete complex on the vertex set $S$, whereas $\mathop{\mathrm{Lazy}}_0(X)$ is similar, but contains only those vertices which are in $X_0$, which need not be all of $S$. The one-dimensional lazy construction $\mathop{\mathrm{Lazy}}_1(X)$ appears in the definition of the Vietoris-Rips and lazy Witness complexes [@desilva2008weak; @hausmann1995vietoris; @10.2312:SPBG:SPBG04:157-166], which are widely used to compute persistent homology. **Definition 3**. The Barycentric subdivision $\mathop{\mathrm{Sd}}(X)$ is the complex whose vertices are the simplices in $X$, and whose $k$-simplices consist of strictly increasing flags $\sigma^{(0)}\subset \cdots \subset \sigma^{(k)}$ of elements of $X$. **Definition 4**. If the vertex set $S$ is equipped with a map to $\mathbb{R}^m$, then the geometric realization is defined by $$\label{eq:geom} |X|=\bigcup_{\sigma \in X} \mathop{\mathrm{conv}}(\sigma) \subset \mathbb{R}^m$$ where $\mathop{\mathrm{conv}}(\sigma)$ is the convex hull of the images of the vertices. If no such map is given, the geometric realization is defined to be the standard one in which the $i$th element of $S$ is sent to the unit vector $e_i\in \mathbb{R}^N$. Combining the two definitions, we see that a function $\Phi: X \rightarrow \mathbb{R}^m$ determines a linear map $|\mathop{\mathrm{Sd}}(X)|\rightarrow \mathbb{R}^m$. 
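In the case $k=1$, Definition 2 says that $\mathop{\mathrm{Lazy}}_1(X)$ is the flag (clique) complex of the graph $\mathop{\mathrm{Skel}}_1(X)$: a subset spans a simplex exactly when all of its pairs are edges. A small sketch (names ours):

```python
from itertools import combinations

def lazy1(vertices, edges, d):
    """Simplices of Lazy_1 up to dimension d: every clique of the graph
    (vertices, edges) spans a simplex."""
    E = {frozenset(e) for e in edges}
    X = [(v,) for v in vertices]
    for k in range(2, d + 2):
        for sigma in combinations(vertices, k):
            if all(frozenset(pair) in E for pair in combinations(sigma, 2)):
                X.append(sigma)
    return X

# square with one diagonal: exactly two triangles appear in Lazy_1
V = [0, 1, 2, 3]
E = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
X = lazy1(V, E, d=2)
triangles = [s for s in X if len(s) == 3]
```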
In persistent homology, one is interested in a nested family of complexes, depending on a real parameter $a$: **Definition 5**. A filtered complex is a pair $(X,w)$ consisting of a simplicial complex $X$, and a weight function $w:X\rightarrow \mathbb{R}$, which has the property that the subset $X(a)=w^{-1}(-\infty,a]\subset X$ is a complex for every $a$. The data of a pair $(X,w)$ and the corresponding nested collection of complexes $X(a)$ are interchangeable. If $X$ is filtered by $w$, then there are induced filtrations on $X(a)$, $\mathop{\mathrm{Skel}}_k(X)$, and $\mathop{\mathrm{Lazy}}_k(X)$. The first two are simply by restriction, while the filtration on $\mathop{\mathrm{Lazy}}_k(X)$ is the one for which $w(\sigma)$ is the max of $w(\sigma')$ as $\sigma'$ ranges over all elements of $\mathop{\mathrm{Skel}}_k(X)$ which are faces of $\sigma$. If $X$ is a complex then the chain group is the vector space of all formal linear combinations $$C_k=\left\{ \sum_{\sigma\in X_k} c_{\sigma} \sigma \right\}$$ where we will always take coefficients to be elements of a finite field $c_{\sigma} \in \mathbb{F}_p$ for $p$ a prime. The $k$th homology group is given by $$\label{eq:homology} H_k(X,\mathbb{F}_p)=Z_k/B_k=\ker \partial_k/\mathop{\mathrm{Im}}\partial_{k+1}$$ where $\partial_k:C_{k}\rightarrow C_{k-1}$ is defined on each basis vector $\sigma=[x_0,...,x_k]$ by $$\partial_k(\sigma)=\sum_{i=0}^k (-1)^i [x_{0},...,\widehat{x_i},...,x_k]$$ The $k$th Betti number $\beta_k(X,\mathbb{F}_p)$ is the dimension of $H_k(X,\mathbb{F}_p)$. Filtered complexes have the additional structure of persistent homology groups, which assemble the individual homology groups $H_k(X(a))$ for each value of $a$ into a family of filtered homology groups. Instead of individual Betti numbers, one has a collection of persistence intervals often called a barcode diagram, such as the Ripser outputs shown in Section [4](#sec:examples){reference-type="ref" reference="sec:examples"}.
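Since $\beta_k = \dim C_k - \mathop{\mathrm{rank}}\partial_k - \mathop{\mathrm{rank}}\partial_{k+1}$, the Betti numbers in [\[eq:homology\]](#eq:homology){reference-type="eqref" reference="eq:homology"} come down to matrix ranks over $\mathbb{F}_p$, computable by Gaussian elimination. A minimal dense sketch with simplices as sorted tuples (this is an illustration, not the sparse rank algorithm of [@dumas2002sparse]):

```python
def rank_mod_p(rows, p):
    """Row rank of an integer matrix over the field F_p (p prime)."""
    A = [[x % p for x in row] for row in rows]
    ncols = len(A[0]) if A else 0
    rank, col = 0, 0
    while rank < len(A) and col < ncols:
        piv = next((r for r in range(rank, len(A)) if A[r][col]), None)
        if piv is None:
            col += 1
            continue
        A[rank], A[piv] = A[piv], A[rank]
        inv = pow(A[rank][col], p - 2, p)        # inverse mod prime p
        A[rank] = [(x * inv) % p for x in A[rank]]
        for r in range(len(A)):
            if r != rank and A[r][col]:
                c = A[r][col]
                A[r] = [(a - c * b) % p for a, b in zip(A[r], A[rank])]
        rank += 1
        col += 1
    return rank

def boundary_matrix(k_simplices, km1_simplices):
    """Matrix of the boundary map, columns indexed by the k-simplices."""
    index = {s: i for i, s in enumerate(km1_simplices)}
    M = [[0] * len(k_simplices) for _ in km1_simplices]
    for j, s in enumerate(k_simplices):
        for i in range(len(s)):
            M[index[s[:i] + s[i + 1:]]][j] = (-1) ** i
    return M

def betti(X, k, p=2):
    """beta_k = dim C_k - rank d_k - rank d_(k+1) over F_p."""
    Ck = X.get(k, [])
    dk = rank_mod_p(boundary_matrix(Ck, X[k - 1]), p) if k > 0 and Ck else 0
    Ck1 = X.get(k + 1, [])
    dk1 = rank_mod_p(boundary_matrix(Ck1, Ck), p) if Ck1 else 0
    return len(Ck) - dk - dk1

# hollow triangle: one connected component, one loop
hollow = {0: [(0,), (1,), (2,)], 1: [(0, 1), (0, 2), (1, 2)]}
```

Filling in the $2$-simplex kills the loop, as the second assertion below checks.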
Roughly speaking, one can infer the Betti numbers of a point cloud by counting the significant intervals in that diagram. For an introduction to persistent homology, we refer to [@edelsbrunner2010computational]. ## The Nerve theorem {#sec:nerve} Let $\mathcal{U}=\{U_x:x\in S\}$ be a collection of subsets of $\mathbb{R}^m$ with index set $S$. **Definition 6**. The Čech nerve, written $\mathop{\mathrm{Nrv}}(\mathcal{U})$, is the complex with vertex set $S$, and for which $$\label{eq:nerve} [x_0,...,x_k]\in \mathop{\mathrm{Nrv}}(\mathcal{U}) \Longleftrightarrow U_{x_0} \cap \cdots \cap U_{x_k} \neq \emptyset$$ Then the classical nerve theorem of Leray [@Leray1950LanneauSE] states: **Theorem 1** (Leray). *Suppose that $\mathcal{U}$ has the property that any $k$-fold intersection of the $U_x$ is contractible (which occurs, for instance, if every $U_x$ is convex). Then $\mathop{\mathrm{Nrv}}(\mathcal{U})$ is homotopy equivalent to the union $\bigcup \mathcal{U}\subset \mathbb{R}^m$.* In some cases, the nerve equivalence has an explicit form. If $\mathcal{U}$ is a covering by balls, then the linear map $|\mathop{\mathrm{Nrv}}(\mathcal{U})|\rightarrow \bigcup \mathcal{U}$ determined by sending each vertex to the corresponding center induces the nerve equivalence. More generally, if every $U_x$ is convex, and we select any representatives $x_\sigma \in U_{\sigma_0} \cap \cdots \cap U_{\sigma_k}$, we have an induced map $\Phi:|\mathop{\mathrm{Sd}}(X)|\rightarrow \bigcup \mathcal{U}$, where $X=\mathop{\mathrm{Nrv}}(\mathcal{U})$, which also induces the nerve equivalence via the equivalence of $|X|$ with the subdivision $|\mathop{\mathrm{Sd}}(X)|$. See [@bauer2023unified] for a proof, and more on the general setup of nerve theorems. ## Power diagrams {#sec:powdiag} Suppose that $S\subset \mathbb{R}^m$, and let $p:S\rightarrow \mathbb{R}$ be a function, called the weight map.
We now have a function $\pi: \mathbb{R}^m \rightarrow \mathbb{R}$ given by $$\label{eq:weightmap} \pi(y)=\min_{x\in S} \pi_x(y),\quad \pi_x(y)=\lVert y-x\rVert^2-p(x).$$ **Definition 7**. Let $S,p$ be as above and let $a\in \mathbb{R}$. Then the weighted ball cover denoted $\mathcal{U}=\mathop{\mathrm{PowCov}}(S,p,a)$ is given by $\mathcal{U}=\{U_x:x\in S\}$, where $$\label{eq:powcov} U_x=\left\{y\in \mathbb{R}^m: \pi_x(y)\leq a\right\}$$ is either a closed ball, or is empty. **Definition 8**. The weighted power diagram is the covering $\mathop{\mathrm{PowDiag}}(S,p,a) =\{U_x\cap V_x:x\in S\}$ where $$\label{eq:powdiag}V_x=\left\{y:\pi_x(y)\leq \pi_{x'}(y) \mbox{ for all $x'\in S$}\right\},$$ and $U_x \in \mathop{\mathrm{PowCov}}(S,p,a)$ ranges over the corresponding elements in the weighted ball cover. The Čech and alpha complexes are the filtered complexes which are the nerves of the weighted ball cover and the weighted power diagram, respectively: **Definition 9**. The Čech complex denoted $(X,w)=\mathop{\mathrm{\check{C}ech}}(S,p)$ is the filtered complex determined by $X(a)=\mathop{\mathrm{Nrv}}(\mathop{\mathrm{PowCov}}(S,p,a))$. We let $\mathop{\mathrm{\check{C}ech}}(S,p,a_1)$ be the filtered subcomplex $X(a_1)$ which is cut off at weight $a_1$. **Definition 10**. The weighted alpha complex $(X,w)=\mathop{\mathrm{Alpha}}(S,p)$ is the filtered complex for which $X(a)=\mathop{\mathrm{Nrv}}(\mathop{\mathrm{PowDiag}}(S,p,a))$, with a similar definition of $\mathop{\mathrm{Alpha}}(S,p,a_1)$. Said another way, we have a simplex $\sigma=[x_0,...,x_k] \in X(a)$ if the weighted Voronoi face $V_\sigma=V_{\sigma_0}\cap \cdots \cap V_{\sigma_k}$ is nonempty, and there exists a point $x\in V_\sigma$ satisfying $\pi_{x_i}(x)\leq a$ for any $i$, noticing that the $\pi_{x_i}$ all become equal when restricted to $V_\sigma$. Adopting the terminology of the witness complex, such a point $x$ is called a *witness* for $\sigma$ because its existence determines that $\sigma \in X(a)$.
Since $\pi_{x_i}$ is a quadratic function and $V_\sigma$ is convex, we have a unique minimizer $x_\sigma$ for every $\sigma \in X(a)$. The collection of these points is described as a map: **Definition 11**. Let $X=\mathop{\mathrm{Alpha}}(S,p)$. The *witness map* is the function $\Phi:X \rightarrow \mathbb{R}^m$ which carries each simplex $\sigma=[x_{0},...,x_{k}]$ to the unique element $x_\sigma \in V_{\sigma}$ that minimizes $\pi_{x_j}(x)$, which is independent of the choice of $j$. In particular, by restricting to $X(a)$, we obtain a linear map $$|\mathop{\mathrm{Sd}}(X(a))|\rightarrow \bigcup \mathop{\mathrm{PowDiag}}(S,p,a)$$ by the discussion in Section [2.2](#sec:nerve){reference-type="ref" reference="sec:nerve"}. An example of a power diagram, its alpha complex, and the associated witness map is shown in Figure [5](#fig:nerve){reference-type="ref" reference="fig:nerve"}. ![On the left, a randomly generated power diagram in the plane, cut off at some weight $a$. On the right, the Barycentric subdivision of its associated alpha complex mapped into $\mathbb{R}^2$ using the witness map $\Phi$, which induces the nerve isomorphism.](figures/pdiag3.pdf){#fig:nerve} ![On the left, a randomly generated power diagram in the plane, cut off at some weight $a$. On the right, the Barycentric subdivision of its associated alpha complex mapped into $\mathbb{R}^2$ using the witness map $\Phi$, which induces the nerve isomorphism.](figures/pdiag4.pdf){#fig:nerve} The usual (unweighted) alpha complex and Voronoi diagram from the introduction are given by $$\mathop{\mathrm{Alpha}}(S,r)=\mathop{\mathrm{Alpha}}(S,p,r^2),\ \mathop{\mathrm{Vor}}(S,r)=\mathop{\mathrm{PowDiag}}(S,p,r^2)$$ in which $p(x)=0$ for all $x$. Notice that the full vertex set for $X=\mathop{\mathrm{Alpha}}(S,r)$ is given by $X_0(a)=S$ for $a\geq 0$, and is empty for $a<0$, whereas the vertices appear at different times in the weighted case.
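The basic quantities of this section are easy to evaluate directly; here is a small planar sketch of the function $\pi$ from [\[eq:weightmap\]](#eq:weightmap){reference-type="eqref" reference="eq:weightmap"}, the cell assignment of Definition 8, and membership in the union of the weighted ball cover (helper names are ours):

```python
from math import dist

def power(y, x, p):
    """pi_x(y) = ||y - x||^2 - p(x), with p a dict of weights."""
    return dist(y, x) ** 2 - p[x]

def power_cell(y, S, p):
    """The site x whose weighted cell V_x contains y (argmin of pi_x(y))."""
    return min(S, key=lambda x: power(y, x, p))

def in_cover(y, S, p, a):
    """Is y in the union of the weighted ball cover PowCov(S, p, a)?"""
    return min(power(y, x, p) for x in S) <= a

S = [(0.0, 0.0), (3.0, 0.0)]
p = {S[0]: 1.0, S[1]: 0.0}
# the weight p shifts the bisector from x = 1.5 to x = 5/3, toward
# the lighter site
```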
# Algorithm for computing the alpha complex We recall some facts about dual quadratic programming, and present our main algorithm. ## Dual programming {#sec:dualprog} Let $A$ be an $n\times m$ matrix, writing $A_i$ for the $i$th row. Let $V\in \mathbb{R}^n$, $x\in \mathbb{R}^m$, and let $J\subset \{1,...,n\}$ be a subset. Let $(y^*,c^*)=\mathop{\mathrm{PrimalQP}}(x,A,V,J)$ be the optimal solution and objective value to the quadratic program $$\begin{aligned} \min_{y\in\mathbb{R}^{m}}\quad & \tfrac{1}{2}\lVert y-x\rVert^{2}\\ \text{subject to}\quad & A_{j}y=V_{j}\quad\text{for }j\in J,\\ & A_{i}y\leq V_{i}\quad\text{for }i\notin J.\end{aligned}$$ [\[mini:primal\]]{#mini:primal label="mini:primal"} To indicate that there is no feasible solution, the routine will return $c^*=\infty$, and an arbitrary value of $y^*$. The dual quadratic program is $$\begin{aligned} \max_{\lambda\in\mathbb{R}^{n}}\quad & -\tfrac{1}{2}\lambda^{t}B\lambda+U^{t}\lambda\\ \text{subject to}\quad & \lambda_{i}\geq 0\quad\text{for }i\notin J,\end{aligned}$$ [\[maxi:dual\]]{#maxi:dual label="maxi:dual"} where $$\label{eq:primtodual} B=AA^{t},\quad U=A x-V.$$ We will denote its solution by $(\lambda^*,c^*)=\mathop{\mathrm{DualQP}}(B,U,J,c_1)$, where $c_1$ is an upper bound on the allowable value of $c^*$. If we find that $c^*>c_1$, then the algorithm will terminate early and return $c^*=\infty$, together with an arbitrary value $\lambda^*$. If an optimum is obtained, the minimizing solution of the primal QP is determined by the KKT conditions, and is given by $$\label{eq:kkt} y^*=x-A^t \lambda^*.$$ **Example 1**. For every feasible point of [\[maxi:dual\]](#maxi:dual){reference-type="eqref" reference="maxi:dual"}, weak duality states that the corresponding value of the objective function is a lower bound on the solution in [\[mini:primal\]](#mini:primal){reference-type="eqref" reference="mini:primal"}. Suppose for some $i\in J$ we have that $A_{i}x< V_i$, meaning that $x$ does not satisfy the $i$th equality constraint in [\[mini:primal\]](#mini:primal){reference-type="eqref" reference="mini:primal"}. Then the maximizer of [\[maxi:dual\]](#maxi:dual){reference-type="eqref" reference="maxi:dual"} along the line where $\lambda_i=t$ and all other $\lambda_j$ are zero occurs at $t=U_i/B_{i,i}$.
Substituting this into [\[maxi:dual\]](#maxi:dual){reference-type="eqref" reference="maxi:dual"} gives $$-\frac{1}{2} B_{i,i}t^2+U_it= \frac{U_i^2}{2B_{i,i}}=\frac{(A_{i}x-V_i)^2}{2\lVert A_{i}\rVert^2},$$ which is the lower bound corresponding to the point on the plane $A_{i}y=V_i$ which is as close as possible to $x$. To solve [\[maxi:dual\]](#maxi:dual){reference-type="eqref" reference="maxi:dual"}, we have incorporated a compiled MAPLE implementation of a highly efficient recent active set method due to [@arnstrom2022dual]. ## The alpha complex as a quadratic program Consider the filtered alpha complex $X=\mathop{\mathrm{Alpha}}(S,p,a_1)$ for $S\subset \mathbb{R}^m$, and let $w:X\rightarrow \mathbb{R}$ be the weight map. Let $\sigma=[x_0,...,x_k] \subset S$ be a subset which may or may not define a simplex in $X$, and select any particular vertex, say $x=x_0$. Then the problem of determining whether $\sigma$ determines a simplex in $X$ amounts to solving the following constrained quadratic optimization problem: $$\min_{y\in \mathbb{R}^m} \ \pi_x(y) \quad \mbox{subject to}\quad \pi_x(y)\leq \pi_{x_i}(y) \ \mbox{for } x_i\in S\setminus \sigma, \qquad \pi_x(y)= \pi_{x_i}(y) \ \mbox{for } x_i\in \sigma\setminus\{x\}.$$ [\[mini:pow\]]{#mini:pow label="mini:pow"} Specifically, we have a simplex $\sigma \in X$ if and only if [\[mini:pow\]](#mini:pow){reference-type="eqref" reference="mini:pow"} is feasible and the optimal solution $a^*$ satisfies $a^*\leq a_1$. By convexity, if the problem is feasible, then there is a unique minimizer $y^*$ with corresponding objective value $a^*$, and we have $w(\sigma)=a^*$, and $\Phi(\sigma)=y^*$. This can be formulated in terms of [\[mini:primal\]](#mini:primal){reference-type="eqref" reference="mini:primal"}. Let us write $S-\{x\}=\{x_1,...,x_{n}\}$, and let $J=\{j_1,...,j_k\}$ be those labels so that $\sigma-\{x\}=\{x_{j_1},...,x_{j_k}\}$. The constraints may be written as $$A_{i}= (x_i-x)^t,\quad V_i= \frac{1}{2}\left(\lVert x_i\rVert^2-\lVert{x}\rVert^2-p(x_i)+p(x)\right)$$ for $i\in \{1,...,n\}$.
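The constraint matrix above can be checked numerically: a point $y$ lies in the cell $V_x$ exactly when all of the inequalities $A_i y\leq V_i$ hold. A small sketch (the sign convention $\pi_x(y)=\lVert y-x\rVert^2-p(x)$ is an assumption inferred from the surrounding formulas):

```python
import numpy as np

rng = np.random.default_rng(0)
S = rng.normal(size=(6, 2))           # sites
p = rng.uniform(0.0, 0.5, size=6)     # power function values

x, px = S[0], p[0]
A = S[1:] - x                                   # rows A_i = (x_i - x)^t
V = 0.5 * ((S[1:]**2).sum(1) - (x**2).sum()     # V_i as in the text
           - p[1:] + px)

def nearest_site(y):
    """Index of the site minimizing the power distance ||y - x_i||^2 - p(x_i)."""
    return np.argmin(((S - y)**2).sum(1) - p)

# y lies in the cell V_x exactly when all inequalities A_i y <= V_i hold;
# points extremely close to a cell boundary are skipped to avoid fp ties.
for y in rng.normal(size=(200, 2)):
    margin = (V - A @ y).min()
    if abs(margin) < 1e-9:
        continue
    assert (margin >= 0) == (nearest_site(y) == 0)
print("cell membership matches the power-diagram definition")
```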
Using [\[eq:primtodual\]](#eq:primtodual){reference-type="eqref" reference="eq:primtodual"}, the dual problem is determined by $$B_{i,j}=(x_i-x)^t (x_j-x),\quad U_i=\frac{1}{2}\left(p(x_i)-p(x)-\lVert x_i-x\rVert^2\right).$$ Thus, if we set $(\lambda^*,c^*)=\mathop{\mathrm{DualQP}}(B,U,J,c_1)$ for $c_1=(a_1+p(x))/2$, then $\sigma$ determines a simplex if $c^*\leq c_1$, and its weight is given by $w(\sigma)=2c^*-p(x)$. The witness is given by $\Phi(\sigma)=y^*$, where by the KKT conditions [\[eq:kkt\]](#eq:kkt){reference-type="eqref" reference="eq:kkt"} we have $$y^*=x-\sum_{i} \lambda^*_i(x_i-x).$$ ## Description of the main algorithm We present our main algorithm, which computes the weighted alpha complex using dual programming. Specifically, the input consists of the data of a power diagram $(S,p,a_1)$ as in Section [2.3](#sec:powdiag){reference-type="ref" reference="sec:powdiag"}, together with a nonnegative integer $d\geq 0$. The output is the $d$-skeleton $X=\mathop{\mathrm{Skel}}_d(\mathop{\mathrm{Alpha}}(S,p,a_1))$, which contains the simplices of the alpha complex up to dimension $d$, as well as the associated witness map $\Phi:X\rightarrow \mathbb{R}^m$. We now describe the procedure. As a preprocessing step, we begin by computing the Čech graph $G$ of the weighted ball cover $\mathop{\mathrm{PowCov}}(S,p,a_1)=\{U_x:x\in S\}$, which is the one-skeleton of the Čech complex $\mathop{\mathrm{\check{C}ech}}(S,p,a_1)$. In other words, $G$ is the graph whose vertices are those elements $x\in S$ for which $U_x$ is nonempty, i.e. $-p(x)\leq a_1$, and which has an edge connecting $x$ to $y$ if $U_x\cap U_y \neq \emptyset$. This is important for efficiency for the following reason: let $x\in S$ and suppose $S'=\{x\} \cup N_G(x)\subset S$ contains $x$ and its neighbors in $G$. Then any face $V_\sigma$ of $V_x \in \mathop{\mathrm{PowDiag}}(S,p,a_1)$ is nonempty if and only if the corresponding face is nonempty in $\mathop{\mathrm{PowDiag}}(S',p,a_1)$.
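When no inequality constraint is active, the dual reduces to a single linear solve over the equality indices $J$, and the weight and witness follow from the formulas above. A sketch under that assumption (the helper name is hypothetical; the example is an unweighted triangle, where the witness is the circumcenter and the weight the squared circumradius):

```python
import numpy as np

def simplex_weight(S, p, idx):
    """Weight and witness of sigma = S[idx], assuming only the equality
    constraints in J are active (no other site interferes)."""
    x, px = S[idx[0]], p[idx[0]]
    others = idx[1:]
    D = S[others] - x                        # rows x_i - x
    B = D @ D.T                              # B_ij = (x_i - x).(x_j - x)
    U = 0.5 * (p[others] - px - (D**2).sum(1))
    lam = np.linalg.solve(B, U)              # equality-constrained optimum
    c = -0.5 * lam @ B @ lam + U @ lam       # dual objective value c*
    w = 2.0 * c - px                         # weight w(sigma) = 2c* - p(x)
    y = x - lam @ D                          # KKT: y* = x - A^t lambda*
    return w, y

# Unweighted right triangle: circumcenter (0.5, 0.5), circumradius^2 = 0.5.
S = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
p = np.zeros(3)
w, y = simplex_weight(S, p, [0, 1, 2])
print(w, y)   # 0.5, [0.5 0.5]
```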
This reduces the number of inequalities that need to be considered to only the necessary ones. We then proceed to compute the alpha complex, beginning with the zero simplices, and work upwards until we reach dimension $d$. For each dimension $k\leq d$, we assume we have already computed the simplices of dimension up to $k-1$, described by a $(k-1)$-dimensional complex $X$, starting with the empty complex. At step $k$, the algorithm may be described as follows. 1. Compute the set $\Sigma_k$ consisting of all potential simplices to be tested according to the following cases: 1. If $k=0$, then $\Sigma_k$ corresponds to the vertices of $G$. In other words, it is the set of simplices $[x]$ where $x\in S$ satisfies $-p(x)\leq a_1$. 2. If $k=1$, then $\Sigma_k$ is the set of edges of $G$. 3. If $k\geq 2$, then $\Sigma_k$ is the set of $k$-dimensional simplices in the lazy construction, $\Sigma_k=(\mathop{\mathrm{Lazy}}_{k-1}(X))_k$. 2. For each $x\in S$, do the following: 1. Let $S'=\{x,x_1,...,x_n\}$ consist of $x$ together with its neighbors in $G$. Determine the coefficients $(B,U,c_1)$ for the dual program that describes the cell $V_x \in \mathop{\mathrm{PowDiag}}(S',p,a_1)$ as in Section [3.1](#sec:dualprog){reference-type="ref" reference="sec:dualprog"}. The equations defining every face of $V_x$ have the same coefficients, but with different sets $J$ that label the equality constraints. 2. For every potential face $V_\sigma$ for $\sigma=[x,x_{j_1},...,x_{j_k}]\in \Sigma_k$, do the following, noting that we are only considering simplices $x\leq x_{j_1}\leq \cdots \leq x_{j_k}$, as the other orders have already been encountered: 1. [\[dual-active-set\]]{#dual-active-set label="dual-active-set"} Let $J=\{j_1,...,j_k\}$ be the indices of the equality constraints which determine the face $V_\sigma$. Solve the corresponding quadratic program using a dual active set method such as [@arnstrom2022dual], terminating early if the upper bound $c_1$ is exceeded. 2.
If the quadratic program is feasible and the optimal value satisfies $c^*\leq c_1$, then add the simplex $\sigma$ to $X_k$ with the desired weight $w(\sigma)$ as determined by $c^*$. Compute the corresponding minimizer $x^*$ using the KKT equations, and update the witness map by setting $\Phi(\sigma)=x^*$. By using a dual active set method in step [\[dual-active-set\]](#dual-active-set){reference-type="ref" reference="dual-active-set"}, we can often rule out a potential simplex using a small subset of points, as shown in Figure [\[fig:active-set-pic\]](#fig:active-set-pic){reference-type="ref" reference="fig:active-set-pic"}, rather than all $N$ of them. The process of solving problem ([\[maxi:dual\]](#maxi:dual){reference-type="ref" reference="maxi:dual"}) involves sequentially inserting and removing iterates $\lambda_i$, which are dual variables corresponding to the data points $x_i$, and this insertion and removal is equivalent to efficiently selecting a (typically small) subset of data points whose existence rules out a potential simplex. The pseudo-code for this algorithm is given in Algorithm [\[alg:dualalpha\]](#alg:dualalpha){reference-type="ref" reference="alg:dualalpha"}.
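The role of $\mathop{\mathrm{DualQP}}$ in this step can be illustrated with a naive enumeration of candidate active sets, a toy stand-in for the actual dual active set solver of [@arnstrom2022dual] (for convex QPs the KKT conditions are necessary and sufficient, so the unique candidate passing both feasibility checks is the optimum):

```python
import numpy as np
from itertools import combinations

def dual_qp_brute(B, U, J, c1):
    """Toy stand-in for DualQP(B, U, J, c1): enumerate active sets containing J,
    solve the equality-constrained KKT system, and return the candidate
    satisfying dual feasibility (lambda_i >= 0 off J) and primal feasibility
    ((B lam - U)_i >= 0 off the active set)."""
    n = len(U)
    free = [i for i in range(n) if i not in J]
    for r in range(len(free) + 1):
        for extra in combinations(free, r):
            active = sorted(set(J) | set(extra))
            lam = np.zeros(n)
            if active:
                try:
                    lam[active] = np.linalg.solve(B[np.ix_(active, active)], U[active])
                except np.linalg.LinAlgError:
                    continue
            if any(lam[i] < -1e-9 for i in free):
                continue
            g = B @ lam - U                  # g_i >= 0 encodes A_i y* <= V_i
            if any(g[i] < -1e-9 for i in range(n) if i not in active):
                continue
            c = -0.5 * lam @ B @ lam + U @ lam
            return (lam, c) if c <= c1 else (lam, np.inf)
    return (np.zeros(n), np.inf)             # primal infeasible / dual unbounded

# Edge between x = (0,0) and x1 = (2,0), with a third site x2 = (1, 0.1)
# squeezing the face: the witness slides down the bisector until x2's
# constraint becomes active.
x = np.array([0.0, 0.0])
nbrs = np.array([[2.0, 0.0], [1.0, 0.1]])
D = nbrs - x
B = D @ D.T
U = 0.5 * (0.0 - 0.0 - (D**2).sum(1))        # unweighted: p = 0
lam, c = dual_qp_brute(B, U, [0], c1=np.inf)
y = x - lam @ D                              # witness via the KKT conditions
print(y, 2 * c)                              # witness (1, -4.95), weight 25.5025
```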
![](figures/active-set-1.pdf){#fig:3-points} ![](figures/active-set-2.pdf){#fig:4-points-1} ![](figures/active-set-3.pdf){#fig:4-points-2} ![](figures/active-set-4.pdf){#fig:5-points} ------------ ------------------------------------ ---------------------------------------------------- **Input** $S\subset \mathbb{R}^m$ Vertices of the power diagram $p:S\rightarrow \mathbb{R}$ Power function $a_1\in\mathbb{R}$ Maximum allowable power $d\geq 0$ Dimension of output **Output** $X\subset \mathcal{P}(S)$ $d$-skeleton of $\mathop{\mathrm{Alpha}}(S,p,a_1)$ $w:X\rightarrow (-\infty,a_1]$ Weight function $\Phi: X \rightarrow \mathbb{R}^m$ Table of representatives ------------ ------------------------------------ ---------------------------------------------------- $Y\gets \mathop{\mathrm{Skel}}_1(\mathop{\mathrm{\check{C}ech}}(S,p,a_1))$; $G \gets \mathop{\mathrm{Graph}}(Y)$ [\[line:graph\]]{#line:graph label="line:graph"}; $X \gets \emptyset$. For $k=0,\dots,d$: set $\Sigma_k \gets Y_k$ if $k\leq 1$, and $\Sigma_k \gets (\mathop{\mathrm{Lazy}}_{k-1}(X))_k$ otherwise. For each vertex $x$ of $G$: [\[line:setb\]]{#line:setb label="line:setb"} $\{x_1,...,x_n\} \gets N_G(x)$; $B \gets ((x_{i}-x)^t(x_j-x))_{i,j=1}^{n}$ [\[line:coeffs\]]{#line:coeffs label="line:coeffs"}; [\[line:setu\]]{#line:setu label="line:setu"} $U\gets \frac{1}{2}(p(x_i)-p(x)-\lVert x_{i}-x \rVert^2)_{i=1}^{n}$; $c_1\gets (a_1+p(x))/2$. For each $\sigma=[x,x_{j_1},...,x_{j_k}]\in \Sigma_k$ with least vertex $x$: $J\gets \{j_{1},...,j_{k}\}$; $(\lambda^*,c^*) \gets \mathop{\mathrm{DualQP}}(B,U,J,c_1)$; if $c^*\leq c_1$ then $X \gets X \cup \{\sigma\}$, $w(\sigma)\gets 2c^*-p(x)$, $\Phi(\sigma)\gets x-\sum_{j=1}^{n} \lambda^*_j (x_j-x)$. We summarize the above discussion in a proposition, which is evident: **Proposition 1**. *Algorithm [\[alg:dualalpha\]](#alg:dualalpha){reference-type="ref" reference="alg:dualalpha"} computes the alpha complex.* # Examples and Applications {#sec:examples} We illustrate Algorithm [\[alg:dualalpha\]](#alg:dualalpha){reference-type="ref" reference="alg:dualalpha"} in several examples.
In the first, we apply the algorithm to a standard three-dimensional mesh generation example. We find that Algorithm [\[alg:dualalpha\]](#alg:dualalpha){reference-type="ref" reference="alg:dualalpha"} is not as fast as existing methods that are specialized to three dimensions. In the second, we generate 1000 random points in $\mathbb{R}^{10}$ and compute the alpha complex up to the three-dimensional simplices with a relatively large radius. In contrast with the three-dimensional example, this could not be done by computing the full Delaunay triangulation. In the last two examples, we use the alpha complex to compute the homology of some interesting topological spaces from a sampling of landmark points. While the main loop of Algorithm [\[alg:dualalpha\]](#alg:dualalpha){reference-type="ref" reference="alg:dualalpha"} can be done fully in parallel, we have not used any parallelism in our computations. The homology groups below were calculated using a MAPLE implementation of an algorithm of Dumas and Villard [@dumas2002sparse] for computing the ranks of sparse matrices mod $p$. For comparison, we also compute the persistent homology groups using Ripser. ## Three-dimensional mesh generation We first apply Algorithm [\[alg:dualalpha\]](#alg:dualalpha){reference-type="ref" reference="alg:dualalpha"} to triangulating a three-dimensional data set consisting of 5000 points sampled from the surface of the Stanford bunny [@turk1994zippered], downloaded from the CGAL website [@cgal:eb-23a]. We chose a radius size of approximately 1/15 of the diameter, which led to a Čech graph with a maximum vertex degree of 200 in the graph of line [\[line:graph\]](#line:graph){reference-type="ref" reference="line:graph"}. The resulting alpha complex, whose one-skeleton is shown in Figure [10](#fig:bunny){reference-type="ref" reference="fig:bunny"}, had sizes of $(|X_k|)_{k=0}^3=(5000,23830,29995,11163)$ simplices in each dimension. The full computation took approximately 12 seconds.
This would be faster to compute using either specialized methods for three dimensions, or using Delaunay software such as qhull [@qhull]. Notice that the Euler characteristic gives the value of 2, indicating that the corresponding covering is homotopy equivalent to the sphere. ![Alpha complex of a Stanford bunny data set with 5000 sites.](figures/bunny2.PNG){#fig:bunny} ## Random points in higher dimension The next example could not be done by computing the full Delaunay triangulation. We chose 1000 points in $\mathbb{R}^{10}$ by selecting each coordinate uniformly at random from the interval $[-1,1]$, producing a point cloud $S\subset \mathbb{R}^{10}$. We then computed the alpha complex $X=\mathop{\mathrm{Alpha}}(S,1.0)$ up to the 3-simplices. The radius was such that the maximum degree in the Čech graph was about half the size of the data set, in our example 451. The full calculation took approximately 5 minutes, resulting in a complex of sizes $(|X_k|)_{k=0}^3=(1000,64785,560459,1783194)$. ## Two persistence examples We next tested our algorithm on two well-studied data sets from persistent homology. Both are freely available online, and are explained in Henry Adams' tutorial on topological data analysis and Ripser [@adams2000wiki]. We find that the first one, which is a 24-dimensional data set consisting of conformations of the cyclooctane molecule, is well-suited for the alpha complex because it tends to lie near the surface of a lower-dimensional space. The second one, which is a $9$-dimensional database of optical image patches, leads to a complex with more simplices, because it has thickness in more dimensions, despite being embedded in lower dimensions. Our first example is a data set consisting of 6040 points $S\subset \mathbb{R}^{24}$ in 24 dimensions, which correspond to conformations of the cyclooctane molecule, introduced in [@martin2010topology].
The authors found that the set of conformations tends to lie on a 2-dimensional topological subspace $X$ which is an interesting union of a Klein bottle and a sphere. Because of its interesting topological type, it is a well-suited use case for persistent homology, in particular Ripser. By running Ripser on the full data set up to a cutoff distance of $.5$, we obtain a diagram of persistence intervals as shown in Figure [11](#fig:cyclorips){reference-type="ref" reference="fig:cyclorips"}, which agrees with the desired Betti numbers of $(\beta_k(X))_{k=0}^2=(1,1,2)$. Ripser took approximately 48 seconds to complete this calculation. We then computed the alpha complex $\mathop{\mathrm{Alpha}}(S,.25)$ up to the $3$-simplices, as required to compute up to the second Betti number. Notice that our cutoff of $a_1=.25$ is half the cutoff used for Ripser, because the minimum radius at which two balls intersect is half the distance between the centers. This computation took approximately 7 seconds, producing a complex of sizes $(|X_k|)_{k=0}^3=(6040,24646,28352,11858)$. We checked that it produced the desired Betti numbers exactly, without any persistence. Despite the much smaller size of the alpha complex as compared with the Vietoris-Rips construction, our homology calculation took longer, as we made no effort to use specialized algorithms. ![Persistence intervals of the cyclooctane data set as computed by Ripser. Many intervals not shown for viewability.](figures/cyclorips.png){#fig:cyclorips} We then applied the same procedure to a nine-dimensional data set studied in [@lee2003nonlinear], consisting of normalized $3\times 3$ patches taken from the van Hateren and van der Schaaf image database [@vanhateren1998independent]. Certain high-density subsets were studied using persistent homology in [@carlsson2008klein], revealing the topological type of certain subspaces of a parametrized Klein bottle.
We apply our algorithm to a subset $S\subset \mathbb{R}^9$ of size 1000 which is known to have the topological type of a circle, and which is denoted $X(300,30)$ in [@carlsson2008klein]. We computed $\mathop{\mathrm{Alpha}}(S,.5)$ up to the 2-dimensional simplices, obtaining a complex of sizes $(|X_k|)_{k=0}^2=(1000,44080,454843)$ in about 38 seconds, and computed Betti numbers of $(1,1)$, which agree with those of the circle. In this example, Ripser took only 1.5 seconds to obtain persistence intervals indicating these numbers. ## Spherical images from different angles We next consider the alpha complex of a data set consisting of $28\times 28$ color images of a coloring of the surface of the sphere from different angles, viewed as vectors in dimension $28\times28\times 3$. This example could not be accomplished using an algorithm that begins by computing the full Delaunay triangulation due to the prohibitively high dimension. This illustrates the point that in dual programming the high dimensionality of the ambient space is not a direct factor, as the input is a function only of the pairwise dot products. Indeed, the only part of Algorithm [\[alg:dualalpha\]](#alg:dualalpha){reference-type="ref" reference="alg:dualalpha"} that depends explicitly on the dimension is line [\[line:coeffs\]](#line:coeffs){reference-type="ref" reference="line:coeffs"}, in which the coefficients of the quadratic program are computed, which only happens once per vertex in each dimension. Fix a function $\varphi : S^2\rightarrow \mathbb{R}^3$ thought of as a coloring of the surface of a sphere. We will be interested in the following two choices: $$\varphi_1(x,y,z)=(x,y,z),\quad \varphi_2(x,y,z)=(tx,ty,tz)$$ where $t=\max(z,0)$, so that one hemisphere is sent to the origin. We then have a map $F_{\varphi}:SO(3)\rightarrow \mathbb{R}^m$ for $m=28\cdot 28\cdot 3$ defined by projecting $\varphi\circ R^{-1}$ onto the $xy$-plane and discretizing the result into a $28\times 28$ image.
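A minimal rendering of such a map might look like the following sketch (the resolution and orthographic projection follow the description above; the exact rotation handling and sampling grid are assumptions):

```python
import numpy as np

def render(phi, R, res=28):
    """Orthographic image of the unit sphere colored by phi, viewed after
    rotating the sphere by R (equivalently, the camera by R^{-1})."""
    img = np.zeros((res, res, 3))
    c = np.linspace(-1.0, 1.0, res)
    for i, u in enumerate(c):
        for j, v in enumerate(c):
            r2 = u * u + v * v
            if r2 <= 1.0:                                # visible disk
                q = np.array([u, v, np.sqrt(1.0 - r2)])  # front hemisphere
                img[i, j] = phi(R.T @ q)                 # pull back by R^{-1}
    return img

phi1 = lambda q: q                        # color = coordinates (RGB)
img = render(phi1, np.eye(3))
print(img.shape, img[13, 13])             # center pixel is nearly (0, 0, 1)
```

Flattening such images gives the vectors in $\mathbb{R}^{28\cdot 28\cdot 3}$ that the point cloud consists of.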
In other words, we take an image using a camera with position defined by $R$, with no perspective warping. A collection of these images is shown in Figure [19](#fig:so3){reference-type="ref" reference="fig:so3"}. ![Top row: pictures of a sphere colored by $\varphi_1$ from different angles, with the RGB values corresponding to the three axes. Bottom row: the same for $\varphi_2$, displayed using a cylindrical HSV coloring scheme.](figures/ballrgb1.png){#fig:so3} ![](figures/ballrgb2.png) ![](figures/ballrgb3.png) ![](figures/ballrgb4.png) ![](figures/ballhsv1.png) ![](figures/ballhsv2.png) ![](figures/ballhsv3.png) ![](figures/ballhsv4.png) For $\varphi=\varphi_1$, we randomly generated $N=1000000$ images, producing a point cloud $D\subset \mathbb{R}^m$. We then selected landmarks $S\subset D$, until every element of $D$ was within a distance of 8.0 of some element of $S$, giving a size of $|S|=809$. Assuming that a covering by balls of radius $10.0$ with centers at the points of $S$ would cover the entire image of $F_{\varphi_1}$, we proceeded to compute $X=\mathop{\mathrm{Alpha}}(S,10.0)$ up to the $4$-simplices. The resulting complex took approximately 20 seconds in each dimension, and had sizes of $$(|X_k|)_{k=0}^4=(809,7717,17694,15490,5599).$$ We then computed the Betti numbers up to dimension 3 over the fields $\mathbb{F}_2,\mathbb{F}_3$, giving $$(\beta_k(X,\mathbb{F}_2))=(1,1,1,1),\quad (\beta_k(X,\mathbb{F}_3))=(1,0,0,1).$$ These are the desired Betti numbers of $SO(3)\cong \mathbb{RP}^3$, which is to be expected if $F_{\varphi_1}$ is a reasonable embedding of $SO(3)$ in $\mathbb{R}^m$. We then performed a similar calculation for $\varphi=\varphi_2$. Since distances are smaller for this embedding, we used a minimum distance of $3.0$ in selecting the landmark points, yielding a set $S$ of size $2361$. We then computed $\mathop{\mathrm{Alpha}}(S,3.5)$ up to the 4-simplices, which took approximately fifteen minutes, yielding a complex $X$ with sizes $$(|X_k|)_{k=0}^4=(2340,24463,68150,102772,128302).$$ We first notice that the number of simplices increases more rapidly with degree, whereas in the previous example the counts began to decrease above the $2$-simplices.
This is because the image $F_{\varphi_2}(SO(3))$ is pinched at the angle which is entirely on the dark side, resulting in a higher-dimensional tangent space. This makes the alpha complex effectively four-dimensional near the singularity. For instance, a single vertex corresponding to a nearly entirely black image had $(1,105,1756,11776,39376)$ simplices in each degree containing it as a face. The Betti numbers were then calculated as above, yielding $$(\beta_k(X,\mathbb{F}_p))=(1,0,1,1)$$ for all primes $p$. In fact, these are the desired Betti numbers resulting from collapsing the circle $S^1\subset SO(3)$ to a point, where the circle is the stabilizer of the image that is completely on the dark side of the coloring. This agrees with what may be computed using the long exact sequence for relative homology groups $H_k(SO(3),S^1)$. In the case of $p=2$, we must use the fact that the map $H_1(S^1,\mathbb{F}_2)\rightarrow H_1(SO(3),\mathbb{F}_2)$ is nonzero, so that the connecting homomorphism is trivial. This results from the fact that $S^1$ corresponds to a generator of the fundamental group $\pi_1(SO(3))=\mathbb{Z}_2$. We then uploaded the second example into Ripser up to a distance cutoff of 7.0. The results, which took approximately 30 seconds to compute, are shown in Figure [20](#fig:so3rips){reference-type="ref" reference="fig:so3rips"}. ![Results of applying Ripser to samples from the data set of spherical images with one side colored entirely black. The intervals which extend to the rightmost endpoint indicate Betti numbers of $(1,0,1,1)$.](figures/so3rips.png){#fig:so3rips} ## Homology groups of configuration spaces {#sec:config} In the next example, we use the alpha complex to carry out a more involved homology calculation, namely the homology groups of a compact form of the ordered configuration space $\mathop{\mathrm{Conf}}_n(\mathbb{R}^2)$ of $n$ points in the plane for $n=3,4$.
Additionally, $\mathop{\mathrm{Conf}}_n(\mathbb{R}^2)$ is acted on freely by the symmetric group $S_n$ by relabeling, which induces an action on homology. By selecting landmark points in a symmetric way, we generalize the Betti number computation and produce the decomposition into irreducible characters of the corresponding group representation, which is an interesting object in geometric representation theory. This makes use of the theoretically sound nature of the alpha complex, which computes homology exactly. Let $\mathop{\mathrm{Conf}}_n(M)$ denote the configuration space of $n$ points in $M$ $$\mathop{\mathrm{Conf}}_n(M)=\left\{ (x_1,...,x_n) \in M^n: \mbox{$x_i\neq x_j$ for all $i\neq j$}\right\}.$$ Its homology and cohomology have been well-studied, see [@cohen1995configuration]. Of particular interest is the case of $\mathop{\mathrm{Conf}}_n(\mathbb{R}^2)$, in which case the cohomology groups are the same as the cohomology of the pure Artin braid group [@vainshtein1978cohomology]. The Betti numbers are given in characteristic zero in this case by the formula $$\label{eq:sterling} \sum_{k\geq 0} (-t)^k \beta_k(\mathop{\mathrm{Conf}}_n(\mathbb{R}^2),\mathbb{Q})= (1-t)(1-2t)\cdots (1-(n-1)t).$$ For instance, the first three Betti numbers of $\mathop{\mathrm{Conf}}_3(\mathbb{R}^2)$ would be $(1,3,2)$ with all higher Betti numbers being zero, and would be $(1,6,11,6)$ in the case of $n=4$. These numbers are known as Stirling numbers of the first kind. We also consider the action of the symmetric group by reordering the labels of the points, $$\sigma\cdot (x_1,...,x_n)=(x_{\sigma_1},...,x_{\sigma_n}).$$ The induced action makes both homology and cohomology into representations of the symmetric group. They turn out to be graded versions of the regular representation after twisting by the sign representation in odd degree, noticing that the total dimensions from the previous paragraph sum to $n!$.
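The generating function can be expanded directly to read off these Betti numbers; a quick sketch:

```python
def betti_conf_plane(n):
    """Betti numbers of Conf_n(R^2) from (1-t)(1-2t)...(1-(n-1)t),
    reading |coefficient of t^k| as beta_k (unsigned Stirling numbers)."""
    poly = [1]                                   # coefficients in t
    for a in range(1, n):
        # multiply poly by (1 - a t)
        poly = [c - (a * poly[k - 1] if k > 0 else 0)
                for k, c in enumerate(poly + [0])]
    return [abs(c) for c in poly]

print(betti_conf_plane(3), betti_conf_plane(4))  # [1, 3, 2] [1, 6, 11, 6]
```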
A formula for the character of this action over complex coefficients is a special case of a result of Lehrer and Solomon [@lehrer1986action]. In this case, it says that the trace of the action of a permutation $\sigma$ on cohomology is given by $$\label{eq:lehrersolomon} \sum_{i} (-t)^i \mathop{\mathrm{Tr}}(\sigma,H^i(\mathop{\mathrm{Conf}}_n(\mathbb{R}^2),\mathbb{C}))= t^n \prod_{i=1}^{\infty} \prod_{j=1}^{m_i(\lambda)} (\alpha_i(t^{-1})-(j-1)i)$$ where $\lambda=(\lambda_1,...,\lambda_l)$ is a Young diagram describing the cycle type of $\sigma$, $m_i(\lambda)$ is the number of times that $i$ appears among the parts of $\lambda$, and $$\alpha_j(t)=\sum_{d|j} t^d\mu(j/d)$$ where $\mu$ is the usual Möbius function. In particular, we recover formula [\[eq:sterling\]](#eq:sterling){reference-type="eqref" reference="eq:sterling"} by taking $\sigma$ to be the identity permutation. A priori, $\mathop{\mathrm{Conf}}_n(\mathbb{R}^2)$ is not well-suited to being triangulated because the excluded set where $x_i=x_j$ has measure zero, and so would not be detected by distance measurements. We will instead replace the full configuration space with the compact subspace $C_n\subset \mathop{\mathrm{Conf}}_n(\mathbb{R}^2)$, which consists of all points $(x_1,...,x_n)\in (\mathbb{R}^2)^n$ satisfying: - The points are mean centered, i.e. $x_1+\cdots +x_n=(0,0)$. - For every $i\neq j$, we have that $\lVert x_i-x_j\rVert\geq 1$. - The graph $G$ on $n$ vertices which contains an edge connecting $i$ and $j$ whenever we have equality $\lVert x_i-x_j\rVert=1$ is connected. We illustrate some typical points in Figure [\[fig:config\]](#fig:config){reference-type="ref" reference="fig:config"}, showing the graph $G$ in dashed lines, which is generically a tree. While $\mathop{\mathrm{Conf}}_n(\mathbb{R}^2)$ is smooth but not compact of dimension $2n$, it is not hard to see that $C_n$ is singular but compact of dimension $n-1$.
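The character formula above can be sanity-checked numerically; the sketch below implements the Möbius sum $\alpha_j$ naively and verifies that the identity permutation (cycle type $1+1+1+1$ in $S_4$) recovers the Betti number generating function:

```python
from math import prod

def mobius(n):
    """Naive Mobius function."""
    result, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0                     # squared prime factor
            result = -result
        d += 1
    return -result if n > 1 else result

def alpha(j, t):
    """alpha_j(t) = sum over divisors d of j of t^d mu(j/d)."""
    return sum(t**d * mobius(j // d) for d in range(1, j + 1) if j % d == 0)

def trace_gen(cycle_type, t):
    """Right-hand side of the Lehrer-Solomon formula, evaluated at a real t."""
    n = sum(cycle_type)
    m = {}
    for part in cycle_type:
        m[part] = m.get(part, 0) + 1
    return t**n * prod(alpha(i, 1.0 / t) - (j - 1) * i
                       for i, mi in m.items() for j in range(1, mi + 1))

# Identity in S_4 recovers (1 - t)(1 - 2t)(1 - 3t).
t = 0.25
lhs = trace_gen((1, 1, 1, 1), t)
rhs = (1 - t) * (1 - 2 * t) * (1 - 3 * t)
print(lhs, rhs)                              # both 0.09375
```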
We also observe that $C_n$ is preserved by the $S_n$ action on $\mathop{\mathrm{Conf}}_n(M)$. In a similar way as in the previous example, we selected landmark points $S^{(n)}$ from $C_n$ for the values of $n=3,4$, using a distance cutoff of $.3$ in both cases. In order to maintain an $S_n$-action on the complex itself, we sampled in such a way that whenever a single point was added, we also added its $S_n$-orbit. The resulting sizes were $|S^{(3)}|=328,|S^{(4)}|=20232$, noticing that the sizes are multiples of 6 and 24 respectively. We first attempted to compute the homology groups using Ripser, by uploading $S^{(n)}$ to Ripser live, with a cutoff distance of $.7$, which is more than the lower bound of .6 for the distance between any two points. Not surprisingly, the case of $n=3$ was trivial. Ripser was able to compute the homology groups up to the second Betti number for $S^{(4)}$ in 30-40 minutes, though it did not manage to compute the third Betti number. The results are shown in Figure [21](#fig:configrips){reference-type="ref" reference="fig:configrips"}. ![Results of applying Ripser to samples from the configuration space $C_4$, with a very large number of shorter length persistence intervals excluded for viewability. We see the first three Betti numbers of $(1,6,11)$.](figures/configrips.png){#fig:configrips} We then computed the full alpha complexes to top dimension, with a distance cutoff of .35, yielding filtered complexes $X^{(n)}=\mathop{\mathrm{Alpha}}(S^{(n)},.35,4)$. The first one $X^{(3)}$ took under one second to compute, whereas computing $X^{(4)}$ took approximately 4 minutes. The resulting sizes were $$\label{eq1} \begin{split} (|X^{(3)}_k|)_{k=0}^4 & = (328,1245,1201,312,28),\\ (|X^{(4)}_k|)_{k=0}^6 & = (20232,228072,608928,665208,345888,92040,10272), \end{split}$$ all other sizes being zero because the spaces are embedded into dimension $2n-2$ by the mean-centering condition.
Notice that the Euler characteristics are zero, and that each number is a multiple of $n!$. Some typical two and three-simplices are shown on the right side of Figure [\[fig:config\]](#fig:config){reference-type="ref" reference="fig:config"}. By the way we chose the landmark points, we have an action of the symmetric group on each $X^{(n)}_k$, and the boundary maps commute with this action. The two spaces had Betti numbers that agreed with [\[eq:sterling\]](#eq:sterling){reference-type="eqref" reference="eq:sterling"}. To compute the character of the $S_n$-representation, we applied Maschke's theorem to convert the boundary operator $\partial_k : C_k(X^{(n)},\mathbb{F}_5)\rightarrow C_{k-1}(X^{(n)},\mathbb{F}_5)$ into block-diagonal form, with one component for each irreducible representation of $S_n$, noting that the coefficient field $\mathbb{F}_p$ satisfies $p>n$. The homology computation took a very long time for the $n=4$ case, several hours for each block in each dimension, though this could be made much more efficient by incorporating group actions into state of the art methods for homology calculations. We found for $n=4$ that $$\begin{aligned} \sum_{k=0}^4 (-t)^k \mathop{\mathrm{ch}}H_k(X^{(4)},\mathbb{F}_5)&= \chi_{4}-(\chi_{4}+\chi_{3,1}+\chi_{2,2})t\\ &\quad+(2\chi_{3,1}+\chi_{2,2}+\chi_{2,1,1})t^2-(\chi_{3,1}+\chi_{2,1,1})t^3,\end{aligned}$$ where $\chi_{\lambda}$ denotes the irreducible character of $S_4$ for a given Young diagram $\lambda$. This agrees with the predicted value for the full space $\mathop{\mathrm{Conf}}_n(\mathbb{R}^2)$ that one would obtain from equation [\[eq:lehrersolomon\]](#eq:lehrersolomon){reference-type="eqref" reference="eq:lehrersolomon"}. Replacing each irreducible character with its corresponding dimension $$(\chi_4,\chi_{3,1},\chi_{2,2},\chi_{2,1,1},\chi_{1,1,1,1})\mapsto (1,3,2,3,1),$$ we recover the expected value of $1-6t+11t^2-6t^3$.
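The block-diagonalization step relies on the standard character projectors onto isotypic components. A small self-contained illustration, with $S_3$ acting on $\mathbb{R}^3$ by permuting coordinates rather than on the actual chain complexes of the paper (the permutation representation splits as trivial $\oplus$ standard, so the isotypic dimensions are $1$, $0$, $2$):

```python
import numpy as np
from itertools import permutations

def perm_matrix(sigma):
    """Matrix of the permutation action e_i -> e_{sigma(i)}."""
    n = len(sigma)
    M = np.zeros((n, n))
    for i, j in enumerate(sigma):
        M[j, i] = 1.0
    return M

def parity(sigma):
    """Sign of a permutation via its cycle decomposition."""
    sign, seen = 1, [False] * len(sigma)
    for i in range(len(sigma)):
        if not seen[i]:
            j, length = i, 0
            while not seen[j]:
                seen[j], j, length = True, sigma[j], length + 1
            sign *= (-1) ** (length - 1)
    return sign

group = list(permutations(range(3)))
# Characters of S_3 evaluated on each element: trivial, sign, standard.
chars = {
    "trivial":  [1 for s in group],
    "sign":     [parity(s) for s in group],
    "standard": [np.trace(perm_matrix(s)) - 1 for s in group],
}
iso = {}
for name, chi in chars.items():
    dim = chi[group.index((0, 1, 2))]
    # Character projector P = (dim/|G|) sum_g chi(g^{-1}) rho(g); the
    # characters here are real, so chi(g^{-1}) = chi(g).
    P = dim / 6.0 * sum(c * perm_matrix(s) for c, s in zip(chi, group))
    iso[name] = round(np.trace(P))   # trace of the projector = isotypic dimension
print(iso)
```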
--- abstract: | For each integer $k>0$, let $n_k$ and $m_k$ be integers such that $n_k\geq 2, m_k\geq 2$, and let $\mathcal{D}_k$ be a subset of $\{0,\dots,n_k-1\}\times \{0,\dots,m_k-1\}$. For each $w=(i,j)\in \mathcal{D}_k$, we define an affine transformation on $\mathbb{R}^2$ by $$\label{eq:Sk} \Psi_w(x)=\Phi_k(x+w), \qquad w\in\mathcal{D}_k,$$ where $\Phi_k=\operatorname{diag}(n_k^{-1},m_k^{-1})$. The non-empty compact set $$\label{attractor} E=\bigcap\nolimits_{k=1}^{\infty}\bigcup\nolimits_{(w_1w_2\ldots w_k)\in \prod_{i=1}^k\mathcal{D}_i} \Psi_{w_1}\circ \Psi_{w_2}\circ \ldots\circ \Psi_{w_k}([0,1]^2)$$ is called a *self-affine Moran set*. In the paper, we investigate the fine multifractal spectrum of a class of self-affine Moran sets with fixed frequencies, and we prove that under certain separation conditions, the fine multifractal spectrum $H(\alpha)$ is given by the formula $$H(\alpha)=\inf_{-\infty<t<+\infty} \{\alpha t+\beta(t)\}.$$ address: - Department of Mathematics, East China Normal University, No. 500, Dongchuan Road, Shanghai 200241, P. R. China - College of Mathematics Sciences, Xinjiang Normal University, Urumqi, Xinjiang, 830054, P. R. China - Department of Mathematics, East China Normal University, No. 500, Dongchuan Road, Shanghai 200241, P. R. China author: - Yifei Gu - Chuanyan Hou - Jun Jie Miao title: Multifractal analysis of a class of self-affine Moran sets --- # Introduction ## Background Let $\mu$ be a Borel regular measure on $\mathbb{R}^d$ with $0<\mu(\mathbb{R}^d)<\infty$ and let $B(x,r)$ be a ball at center $x$ with radius $r$. The *local dimension* of $\mu$ at $x$ is given by $$\mbox{\rm dim}_{\rm loc}\,\mu(x) = \lim_{r\to 0} \frac{\log \mu(B(x,r))}{\log r},$$ provided the limit exists.
For $\alpha\geq 0$ we write $$E_\alpha=\{x\in \mathbb{R}^d: \dim_{\mathrm{loc}}\mu(x)=\alpha\}.$$ The *fine multifractal spectrum or singularity spectrum* of $\mu$ is defined by $$\label{HHE} H(\alpha)=\mbox{\rm dim}_{\rm H}\,E_\alpha,$$ where $\mbox{\rm dim}_{\rm H}\,$ denotes the Hausdorff dimension. We refer readers to [@Bk_KJF2; @Olsen95] for background reading. One main question in multifractal analysis is to investigate the fine multifractal spectrum, the Rényi dimensions and their relations [@AttSel21; @Bk_KJF2; @Feng12; @Olsen95]. There has been enormous interest in finding the fine multifractal spectra of fractal measures, such as self-similar measures [@ArbPat96; @Falco94; @FenLa09; @OlsSn11], self-conformal measures [@Patzsc97], self-affine measures [@FKJ99; @FKJ10; @King95; @Olsen98; @Olsen11], Gibbs measures [@BJMM07; @FKJ99] and Moran measures [@Wumin05; @WuXiao11]. The Bedford-McMullen carpets [@Bedfo84; @McMul84] are one of the simplest classes of self-affine fractals and are often used as a testing ground for questions and conjectures about fractals. In [@King95], King computed the fine multifractal spectrum for self-affine measures supported on Bedford-McMullen carpets, and he proved that the fine multifractal formula [\[HHE\]](#HHE){reference-type="eqref" reference="HHE"} holds under a certain separation condition, which was later removed by Jordan and Rams in [@JorRam11]. Olsen generalised King's work to $\mathbb{R}^d$, and he studied the fine multifractal spectrum for measures supported on self-affine sponges and random self-affine sponges, see [@Olsen98; @Olsen11]. Recently, in [@GM22; @GHM23], the authors introduced a new class of fractals called self-affine Moran sets, which generalise Bedford-McMullen carpets, and they studied the dimension theory of these sets and the properties of the self-affine Moran measures supported on them. It is natural to investigate multifractal analysis on such sets. 
In this paper, we study the fine multifractal spectrum for measures supported on self-affine Moran sets. First we review the definitions of self-affine Moran sets and self-affine Moran measures. Then we state our main conclusions on the fine multifractal spectrum for self-affine Moran measures in Section [2](#sec_SCMR){reference-type="ref" reference="sec_SCMR"}. The proofs are given in Section [3](#sec_pf){reference-type="ref" reference="sec_pf"}. ## Self-affine Moran sets and measures {#sec_SAMS} Let $\{(n_k,m_k)\}_{k=1}^\infty$ be a sequence of integer pairs such that $n_k\geq 2$ and $m_k\geq 2$. For each integer $k>0$, let $\mathcal{D}_k$ be a subset of $\{0,\dots,n_k-1\}\times\{0,\dots,m_k-1\}$, and we write $r_k=\textrm{card}(\mathcal{D}_k)$. We always assume that $r_k\geq 2$. We write $$\Sigma^{k} = \prod_{j=1}^k\mathcal{D}_j ,\qquad \Sigma^{\infty} = \prod_{j=1}^\infty\mathcal{D}_j, \qquad \Sigma^*=\bigcup_{k=0}^\infty\Sigma^k.$$ For $\mathbf{w}=w_1\cdots w_k\in\Sigma^k$, $\mathbf{v}=v_1\cdots v_l\in\Sigma^l$, write $\mathbf{w}\ast\mathbf{v}= w_1\cdots w_k v_1\cdots v_l\in\Sigma^{k+l}$. We write $\mathbf{v}|k = (v_1\cdots v_k)$ for the *curtailment* after $k$ terms of $\mathbf{v} = (v_1 v_2\cdots)\in \Sigma^{\infty}$. We write $\mathbf{w} \preceq \mathbf{v}$ if $\mathbf{w}$ is a curtailment of $\mathbf{v}$. We call the set $[\mathbf{w}] = \{\mathbf{v}\in\Sigma^{\infty} : \mathbf{w} \preceq \mathbf{v}\}$ the *cylinder* of $\mathbf{w}$, where $\mathbf{w}\in \Sigma^*$. If $\mathbf{w}=\emptyset$, its cylinder is $[\mathbf{w}]=\Sigma^{\infty}$. 
Given $k>0$, for each $w=(i,j)\in \mathcal{D}_k$, we write $\Phi_k=\operatorname{diag}(n_k^{-1},m_k^{-1})$ for the diagonal matrix, and we define an affine transformation on $\mathbb{R}^2$ by $$\label{eq:Sk} \Psi_w(x)=\Phi_k(x+w), \qquad w\in\mathcal{D}_k.$$ For each $\mathbf{w}=(w_1w_2\ldots w_k)\in \Sigma^k$, we write $$\Psi_{\mathbf{w}}=\Psi_{w_1}\circ \Psi_{w_2}\circ \ldots\circ \Psi_{w_k}.$$ Suppose that $J=[0,1]^2\subset \mathbb{R}^{2}$. For each integer $k>0$, let $\{\Psi_w\}_{w\in\mathcal{D}_k}$ be the self-affine IFS as in [\[eq:Sk\]](#eq:Sk){reference-type="eqref" reference="eq:Sk"}. For each $\mathbf{w}\in \Sigma^k$, we write $J_{\mathbf{w}}=\Psi_{\mathbf{w}}(J)$, i.e. $J_{\mathbf{w}}$ is an affine copy of $J$. Then we call the non-empty compact set $$\label{attractor} E=\bigcap\nolimits_{k=1}^{\infty}\bigcup\nolimits_{\mathbf{w}\in \Sigma^{k}} J_{\mathbf{w}}$$ a *self-affine Moran set or self-affine Moran carpet*, where the elements $J_{\mathbf{w}}$ are called *$k$th-level basic sets* of $E$, see [@GM22; @GHM23] for details. We denote the projection $\Pi: \Sigma^\infty \rightarrow \mathbb{R}^2$ by $$\Pi(\mathbf{w})= \sum_{k=1}^{\infty} \mathrm{diag} \left(\prod_{h=1}^{k} n_h^{-1}, \prod_{h=1}^{k} m_h^{-1}\right)w_k.$$ Note that the projection $\Pi$ maps $\Sigma^\infty$ onto $E$, that is, $E=\Pi(\Sigma^\infty)$ and the self-affine Moran set is the image of $\Sigma^\infty$ under $\Pi$. For each $\delta>0$, let $k=k(\delta)$ be the unique integer satisfying $$\label{def_k} \qquad\frac{1}{m_1}\frac{1}{m_2}\ldots \frac{1}{m_k}\leq \delta<\frac{1}{m_1}\frac{1}{m_2}\ldots \frac{1}{m_{k-1}}.$$ If no positive integer satisfies the above inequality, we always write $k=1$. 
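For a finite word, the series defining $\Pi$ can be summed directly; a minimal sketch (the helper name `project` is ours), using exact rational arithmetic so truncation is the only approximation:

```python
from fractions import Fraction

def project(word, dims):
    """Partial sum of Pi(w) for a finite word w = [(i_1, j_1), ...],
    where dims = [(n_1, m_1), ...] gives the grid sizes per level."""
    x = y = Fraction(0)
    px = py = Fraction(1)  # running products 1/(n_1...n_k), 1/(m_1...m_k)
    for (i, j), (n, m) in zip(word, dims):
        px /= n
        py /= m
        x += px * i
        y += py * j
    return x, y
```

For example, the one-letter word $(1,0)$ on a $2\times 2$ grid lands at $(1/2,\,0)$.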
For each integer $k>0$, let $l=l(k)$ be the unique integer satisfying $$\label{def_l} \frac{1}{n_1}\frac{1}{n_2}\ldots \frac{1}{n_l}\leq \frac{1}{m_1}\frac{1}{m_2}\ldots \frac{1}{m_{k}}<\frac{1}{n_1}\frac{1}{n_2}\ldots \frac{1}{n_{l-1}}.$$ We sometimes write $l(\delta)$ for $l(k)$ if $k=k(\delta)$ is given by [\[def_k\]](#def_k){reference-type="eqref" reference="def_k"}. If there is no ambiguity in the context, we just write $l$ instead of $l(k)$ for simplicity. For each $\delta>0$ and every $\mathbf{w}=w_1w_2\ldots w_n\ldots \in \Sigma^\infty$, where $w_n=(i_n,j_n)$, we write $$U(\delta, \mathbf{w})=\Big\{\mathbf{v}=v_1v_2\ldots v_n \ldots\in\Sigma^\infty \text{ with } v_n=(i_n',j_n'): \begin{array}{ll} i_n'=i_n,&n=1,\ldots,l(\delta), \\ j_n'=j_n,&n=1,\ldots, k(\delta) \end{array} \Big\},$$ and we write $\mathcal{U}_\delta$ for the collection of all such sets, i.e. $$\mathcal{U}_\delta=\{U(\delta, \mathbf{w}): \mathbf{w}\in \Sigma^\infty\}.$$ We write $$\label{appsquare} \mathcal{S}_\delta=\{\Pi(U): U\in \mathcal{U}_\delta\}.$$ The elements $S$ of $\mathcal{S}_\delta$ are called the *$\delta$-approximate squares*. Approximate squares are an essential tool in studying self-affine fractals, see [@Baran07; @Bedfo84; @LalGa92; @McMul84], and we may also apply this tool to explore the fine multifractal spectrum. To study the multifractal spectrum, we need to define self-affine Moran measures on $E$. For each integer $k>0$, let $\big(p_k(w)>0\big)_{w\in \mathcal{D}_k}$ be a probability vector. For each $\mathbf{w}=w_{1} w_2 \cdots w_{k}\in \Sigma^k$, we write $$\label{mu} \widetilde{\mu}([\mathbf{w}])=p_{\mathbf{w}}=p_1(w_{1}) p_2(w_2) \cdots p_k(w_{k}).$$ Then $\widetilde{\mu}$ is a Borel measure on $\Sigma^\infty$. It is clear that $$\label{projmu} \mu (A)=\widetilde{\mu}(\Pi^{-1}A)$$ is a Borel probability measure on $E$, and we call it a *self-affine Moran measure* on $E$. 
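The indices $k(\delta)$ and $l(k)$ defined above are found by scanning partial products; a minimal sketch for finite prefixes of the sequences (function names are ours):

```python
def k_of_delta(delta, ms):
    """Smallest k with 1/(m_1...m_k) <= delta; the text's convention
    returns 1 when no such k exists within the given prefix."""
    prod = 1.0
    for k, m in enumerate(ms, start=1):
        prod /= m
        if prod <= delta:
            return k
    return 1

def l_of_k(k, ns, ms):
    """Smallest l with 1/(n_1...n_l) <= 1/(m_1...m_k)."""
    target = 1.0
    for m in ms[:k]:
        target /= m
    prod = 1.0
    for l, n in enumerate(ns, start=1):
        prod /= n
        if prod <= target:
            return l
    raise ValueError("prefix of ns is too short")
```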
For each $k>0$, we write that, for $w=(i,j)\in \mathcal{D}_k$, $$q_k(w)=q_k(j)=\sum_{i\colon(i,j)\in \mathcal{D}_k} p_k(i,j),\qquad \widehat{q}_k(w)=\widehat{q}_k(i)=\sum_{j\colon(i,j)\in \mathcal{D}_k} p_k(i,j).$$ For each $S(\delta, x)\in \mathcal{S}_\delta$ where $x\in S(\delta,x)\cap E$, there exists a sequence $\mathbf{w}$ such that $\Pi(\mathbf{w})=x$ and $\Pi(U(\delta, \mathbf{w}))= S(\delta, x)$. Then $$\label{muas} \mu(S(\delta, x))=\left\{ \begin{array}{lcl} p_1(w_1)\ldots p_l(w_l) q_{l+1}(w_{l+1})\ldots q_k(w_k), & \ & l \leq k, \\ p_1(w_1)\ldots p_k(w_k)\widehat{q}_{k+1}(w_{k+1}) \cdots \widehat{q}_l(w_l), & \ & l > k. \end{array} \right.$$ The measure distributed on approximate squares is key to studying the fine multifractal spectrum in this paper. # Notation and Main Results {#sec_SCMR} ## Self-affine Moran sets with fixed frequencies In [@GM22; @GHM23], the authors study the dimension theory of self-affine Moran sets and measures under the assumption $$\label{UB} N^+=\sup\{n_k, m_k: k=1,2,\ldots\}<\infty.$$ Since $N^+<\infty$, the patterns in the given sequence $\{(n_k,m_k,\mathcal{D}_k)\}_{k=1}^\infty$ are actually finite, and we write $\Gamma$ for the collection of these finite patterns. Therefore, for all integers $k>0$, $$(n_k,m_k,\mathcal{D}_k)\in \Gamma,$$ and $\operatorname{card} \Gamma <\infty$. Note that if we impose a Bernoulli measure on $\Gamma^\infty$, by ergodic theory, for almost all sequences in $\Gamma^\infty$, every element in $\Gamma$ appearing in the sequence has a fixed frequency. Therefore, we investigate the fine multifractal spectrum of self-affine Moran sets with fixed frequencies. Suppose that for each pattern $\gamma=(n_\gamma,m_\gamma,\mathcal{D}_\gamma)\in \Gamma$, the limit $$\label{freq} \lim_{n\to\infty}\frac{\operatorname{card}\{k: (n_k,m_k,\mathcal{D}_k)=\gamma, k=1,2,\ldots, n \}}{n}$$ exists, denoted by $f_\gamma$, and $\sum_{\gamma\in\Gamma} f_\gamma=1$. 
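The row and column marginals $q_k$ and $\widehat{q}_k$ are simple aggregations over the digit set; a minimal sketch (the dictionary encoding of the probability vector is ours):

```python
def marginals(p):
    """Given p: {(i, j): p_k(i, j)}, return the row marginals
    q[j] = sum over i of p(i, j) and the column marginals
    qhat[i] = sum over j of p(i, j)."""
    q, qhat = {}, {}
    for (i, j), w in p.items():
        q[j] = q.get(j, 0.0) + w
        qhat[i] = qhat.get(i, 0.0) + w
    return q, qhat
```

For instance, for $\mathcal{D}_2=\{(0,1),(1,1),(2,1)\}$ from the example below with weights $0.2, 0.3, 0.5$, the single occupied row $j=1$ has $q(1)=1$.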
We call $E$ the *self-affine Moran set with frequency $\mathbf{f}=\{f_\gamma\}_{\gamma\in \Gamma}$.* Let $\mu$ be the self-affine Moran measure given by [\[projmu\]](#projmu){reference-type="eqref" reference="projmu"}. It is clear that for the given sequence $\{\big(p_k(w)>0\big)_{w\in \mathcal{D}_k}\}_{k=1}^\infty$ of probability vectors, for each $k>0$, there exists $\gamma\in \Gamma$ such that $\big(p_k(w)\big)_{w\in \mathcal{D}_k}=\big(p_\gamma(w)\big)_{w\in \mathcal{D}_\gamma}$, and the frequency of $\big(p_\gamma(w)\big)_{w\in \mathcal{D}_\gamma}$ appearing in the sequence is also $f_\gamma$. From now on, both notations are used in the context for simplicity. For the given probability vector $\mathbf{f}=\{f_\gamma\}_{\gamma\in\Gamma}$, we write $$\label{def_zeta} \zeta:=\frac{\sum_{\gamma\in\Gamma} f_\gamma \log m_\gamma}{\sum_{\gamma\in\Gamma} f_\gamma \log n_\gamma}.$$ For each $\gamma\in\Gamma$, let $\big(p_\gamma(w)>0\big)_{w\in \mathcal{D}_\gamma}$ be a probability vector, and we define $\beta_\gamma(t)$ to be the unique solution to $$\begin{aligned} \label{begat} m_\gamma^{-\beta_\gamma(t)}\sum_{(i,j)\in \mathcal{D}_\gamma}p^t_{\gamma}(ij) u_{\gamma}^{1-\zeta}(j)&=&1, \qquad \textit{for $\zeta\leq 1$; } \\ n_\gamma^{-\beta_\gamma(t)}\sum_{(i,j)\in \mathcal{D}_\gamma}p^t_{\gamma}(ij) \widehat{u}_{\gamma}^{1-\zeta}(i)&=&1,\qquad \textit{for $\zeta > 1$, } \nonumber\end{aligned}$$ where $u_{\gamma}(j)=\frac{q_{\gamma}^t(j)}{\sum_{(i,j)\in \mathcal{D}_\gamma} p_{\gamma}^t(i,j)}$ and $\widehat{u}_{\gamma}(i)=\frac{\widehat{q}_{\gamma}^t(i)}{\sum_{(i,j)\in \mathcal{D}_\gamma} p_{\gamma}^t(i,j)}$, and we write $$\label{betat} \beta(t)=\left\{ \begin{array}{ll} \frac{\sum_{\gamma\in \Gamma}f_\gamma\beta_\gamma(t) \log m_\gamma}{\sum_{\gamma\in\Gamma}f_\gamma \log m_\gamma} & \textit{ for }\zeta \leq 1,\\ \frac{\sum_{\gamma\in \Gamma}f_\gamma\beta_\gamma(t) \log n_\gamma}{\sum_{\gamma\in\Gamma}f_\gamma \log n_\gamma} & \textit{ for }\zeta> 1. 
\end{array} \right.$$ To study the fine multifractal spectrum, the following geometric separation conditions play important roles in the proofs; they are also frequently used to study the geometric and topological properties of self-affine sets with grid structures, see [@HM; @LMR22]. We say $E$ satisfies the *Row separation condition* (RSC) if for all $\gamma\in \Gamma$ and for all distinct $(i,j),(i',j')\in D_\gamma$, we have $\Psi_{i,j}(J) \cap \Psi_{i',j'}(J)=\emptyset$ and $|j-j'|\neq 1$. We say $E$ satisfies the *Top or bottom separation condition* (TBSC) if there exists a pattern $\gamma\in \Gamma$ with frequency $f_\gamma>0$ such that at least one of the following conditions holds: - For all $(i,j)\in D_\gamma$, $j \neq 0$. - For all $(i,j)\in D_\gamma$, $j \neq m_\gamma-1$. Note that RSC and TBSC are crucial to the proof of the fine multifractal spectrum for the case $\zeta \leq 1$. Similarly, we may define the column separation condition (CSC) and the left or right separation condition (LRSC), which are crucial for the case $\zeta>1$. More details about these conditions are discussed in Example [Example 1](#eg){reference-type="ref" reference="eg"}. In fact, for $\zeta>1$, the conclusion follows just by exchanging the roles of the $x$- and $y$-axes. Therefore we always assume that $\zeta \leq 1$ throughout the paper and omit the proof for $\zeta >1$. Let $$\begin{aligned} \alpha_{\min}&=&\frac{\sum_{\gamma\in\Gamma} f_\gamma \min_{w\in \mathcal{D}_\gamma}\big\{-\zeta\log p_{\gamma}(w) - (1-\zeta)\log q_{\gamma}(w)\big\}}{\sum_{\gamma\in\Gamma}f_\gamma \log m_\gamma}, \\ \alpha_{\max}&=&\frac{\sum_{\gamma\in\Gamma} f_\gamma \max_{w\in \mathcal{D}_\gamma} \big\{-\zeta\log p_{\gamma}(w) - (1-\zeta)\log q_{\gamma}(w)\big\}}{\sum_{\gamma\in\Gamma}f_\gamma \log m_\gamma}.\end{aligned}$$ The interval $(\alpha_{\mathrm{min}},\alpha_{\mathrm{max}})$ gives the proper domain on which $H(\alpha)$ is valid. ## Main conclusions We have the following conclusions for self-affine Moran measures. 
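Since the left-hand side of the defining equation for $\beta_\gamma(t)$ is strictly monotone in the exponent, $\beta_\gamma(t)$ has the closed form $\beta_\gamma(t)=\log S/\log m_\gamma$, where $S$ is the sum in [\[begat\]](#begat){reference-type="eqref" reference="begat"}. A minimal numerical sketch for the case $\zeta\leq 1$ (function name and dictionary encoding are ours):

```python
import math

def beta_gamma(t, m, p, zeta):
    """Solve m**(-beta) * sum_{(i,j)} p(i,j)**t * u(j)**(1-zeta) = 1
    for beta, where u(j) = q(j)**t / sum p**t and q(j) is the row
    marginal.  p is a dict {(i, j): p_gamma(i, j)}; assumes zeta <= 1."""
    denom = sum(w ** t for w in p.values())
    q = {}
    for (i, j), w in p.items():
        q[j] = q.get(j, 0.0) + w
    s = sum(w ** t * (q[j] ** t / denom) ** (1.0 - zeta)
            for (i, j), w in p.items())
    return math.log(s) / math.log(m)
```

As sanity checks, $\beta_\gamma(1)=0$ for any probability vector, and for $\zeta=1$ and $t=0$ the solution reduces to $\log r_\gamma/\log m_\gamma$.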
**Theorem 1**. *Let $E$ be the self-affine Moran set satisfying either RSC or TBSC with the frequency $\mathbf{f}=\{f_\gamma\}_{\gamma\in\Gamma}$ and $\zeta\leq 1$. Let $\mu$ be the self-affine Moran measure given by [\[projmu\]](#projmu){reference-type="eqref" reference="projmu"} with $\big(p_\gamma(w)>0\big)_{w\in \mathcal{D}_\gamma}$. Then for every $\alpha\in (\alpha_{\mathrm{min}},\alpha_{\mathrm{max}})$, we have that $$H(\alpha)=\inf_t \{\alpha t+\beta(t)\}.$$ Furthermore, $H(\alpha)$ is differentiable with respect to $\alpha$ and is concave.* Note that (1) for $\zeta >1$, replacing RSC and TBSC by CSC and LRSC, the fine multifractal spectrum formula is still valid; (2) RSC and TBSC are required only for the proof of the upper bound. In fact, we prove that the upper bound holds in a more general setting, see Theorem [Theorem 15](#thmub){reference-type="ref" reference="thmub"}, where RSC and TBSC are replaced by the replica condition in Section [3.2](#sec_ub){reference-type="ref" reference="sec_ub"}. It would be very interesting if one could prove the conclusion without the replica condition. For each $\gamma\in \Gamma$, we write $$\begin{aligned} r_\gamma(j)&=&\operatorname{card}\{i\colon (i,j)\in \mathcal{D}_\gamma \} \textrm{ for each } j, \\ \widehat{r}_\gamma (i)&=&\operatorname{card}\{j\colon (i,j)\in \mathcal{D}_\gamma \} \textrm{ for each } i.\end{aligned}$$ Instead of using these geometric separation conditions, we may alternatively make some assumptions on $r_\gamma$ and $p_\gamma$, and the fine multifractal formula is still valid. **Theorem 2**. *Let $E$ be the self-affine Moran set with the frequency $\mathbf{f}=\{f_\gamma\}_{\gamma\in\Gamma}$ and $\zeta\leq 1$. Let $\mu$ be the self-affine Moran measure given by [\[projmu\]](#projmu){reference-type="eqref" reference="projmu"} with $\big(p_\gamma(w)>0\big)_{w\in \mathcal{D}_\gamma}$. 
Suppose that, for all $\gamma\neq \gamma'\in \Gamma$,* - *$r_{\gamma}(0)=r_{\gamma'}(0)$, and $r_{\gamma}(m_{\gamma}-1)=r_{\gamma'}(m_{\gamma'}-1);$* - *$\{p_{\gamma'}(i,0)\}_{(i,0)\in \mathcal{D}_{\gamma'}}$ and $\{p_{\gamma'}(i,m_{\gamma'}-1)\}_{(i,m_{\gamma'}-1)\in \mathcal{D}_{\gamma'}}$ are permutations of $\{p_{\gamma}(i,0)\}_{(i,0)\in \mathcal{D}_{\gamma}}$ and $\{p_{\gamma}(i,m_{\gamma}-1)\}_{(i,m_{\gamma}-1)\in \mathcal{D}_{\gamma}}$, respectively.* *Then for every $\alpha\in (\alpha_{\mathrm{min}},\alpha_{\mathrm{max}})$, we have that $$H(\alpha)=\inf_t \{\alpha t+\beta(t)\}.$$* The fine multifractal spectrum of a Bedford-McMullen set is a direct consequence of Theorem [Theorem 2](#cor2){reference-type="ref" reference="cor2"}; it was first studied by King [@King95] and improved by Jordan and Rams [@JorRam11]. **Corollary 3**. *Let $\mu$ be a self-affine measure supported on a Bedford-McMullen set $E$. Then for every $\alpha\in (\alpha_{\mathrm{min}},\alpha_{\mathrm{max}})$, we have that $$H(\alpha)=\inf_t \{\alpha t+\beta(t)\}.$$* Finally, we give an example to illustrate the separation conditions. **Example 1**. ![](figure4.png "fig:"){#fig_cdt width="\\textwidth"} Let $(n_1,m_1)=(5,4)$, $(n_2,m_2)=(4,3)$, $(n_3,m_3)=(4,5)$, $(n_4,m_4)=(3,4)$, $(n_5,m_5)=(5,3)$, $(n_6,m_6)=(4,3)$ and $$\begin{aligned} \mathcal{D}_1&=&\{(1,0),(1,2),(2,1),(2,2),(3,1),(4,0)\}, \quad \ \mathcal{D}_2=\{(0,1),(1,1),(2,1)\}, \\ \mathcal{D}_3&=&\{(0,2),(1,0),(1,4),(3,0),(3,2)\}, \qquad\qquad \mathcal{D}_4=\{(0,1),(1,3),(2,1),(2,3)\}, \\ \mathcal{D}_5&=&\{(0,2),(1,0),(2,2),(3,0),(3,2)\},\qquad\qquad \mathcal{D}_6=\{(0,2),(1,0),(2,1),(3,2)\},\end{aligned}$$ see Figure [1](#fig_cdt){reference-type="ref" reference="fig_cdt"}. (1) Let $\Gamma=\{\mathcal{D}_1,\mathcal{D}_3,\mathcal{D}_6\}$, and $f_1=\frac{1}{4},f_3=\frac{1}{2},f_6=\frac{1}{4}$. 
Then by [\[def_zeta\]](#def_zeta){reference-type="eqref" reference="def_zeta"}, $$\zeta= \frac{0.25\log 4+0.5\log 5 + 0.25\log 3}{0.25\log 5+0.5\log 4 +0.25\log 4}\approx 0.9888<1.$$ Since the top row of $\mathcal{D}_1$ is empty and $f_1>0$, $E$ satisfies the top or bottom separation condition. If we set $f_1=\frac{1}{10},f_3=\frac{4}{5},f_6=\frac{1}{10}$, we have $\zeta\approx 1.090>1$, and $E$ satisfies the left or right separation condition. \(2\) Suppose that $\Gamma=\{\mathcal{D}_2,\mathcal{D}_3,\mathcal{D}_5\}$, and $f_2=\frac{1}{4},f_3=\frac{1}{2},f_5=\frac{1}{4}$. Then $\zeta\approx 0.9767<1$, and $E$ satisfies both RSC and TBSC. \(3\) Suppose that $\Gamma=\{\mathcal{D}_3,\mathcal{D}_4,\mathcal{D}_5\}$, and $f_3=\frac{1}{5},f_4=\frac{1}{5},f_5=\frac{3}{5}$. Then $\zeta\approx 0.860 <1$, and $E$ satisfies the Row separation condition. We also compute its fine multifractal spectrum as a simple example to illustrate our main conclusion. Let $\beta_3(t)$, $\beta_4(t)$ and $\beta_5(t)$ be the solutions to $$\begin{aligned} 5^{-\beta_3(t)}\sum_{(i,j)\in \mathcal{D}_3}p^t_3(ij) u_3^{1-\zeta}(j)=1, && \qquad 4^{-\beta_4(t)}\sum_{(i,j)\in \mathcal{D}_4}p^t_4(ij) u_4^{1-\zeta}(j)=1, \\ 3^{-\beta_5(t)}\sum_{(i,j)\in \mathcal{D}_5}p^t_5(ij) u_5^{1-\zeta}(j)=1. &&\end{aligned}$$ By [\[betat\]](#betat){reference-type="eqref" reference="betat"}, we have that $$\beta(t)=\frac{\beta_3(t)\log 5+2\beta_4(t)\log 2+3\beta_5(t)\log 3}{2\log 2+3\log 3+\log 5}.$$ By Theorem [Theorem 1](#thm_mfa){reference-type="ref" reference="thm_mfa"}, for every $\alpha\in (\alpha_{\mathrm{min}},\alpha_{\mathrm{max}})$, the fine multifractal spectrum is given by $H(\alpha)=\inf_t \{\alpha t+\beta(t)\}.$ # Multifractal spectrum of self-affine Moran measures {#sec_pf} In this section, we investigate the fine multifractal spectrum of self-affine Moran measures. 
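The value of $\zeta$ in case (1) of the example above is easy to check numerically from [\[def_zeta\]](#def_zeta){reference-type="eqref" reference="def_zeta"}; a short sketch:

```python
import math

# (n_gamma, m_gamma) -> frequency, for Gamma = {D_1, D_3, D_6}
# with frequencies 1/4, 1/2, 1/4 as in case (1) of the example.
freqs = {(5, 4): 0.25, (4, 5): 0.5, (4, 3): 0.25}
zeta = (sum(f * math.log(m) for (n, m), f in freqs.items())
        / sum(f * math.log(n) for (n, m), f in freqs.items()))
print(round(zeta, 4))  # 0.9888
```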
The proof is divided into two parts: one shows that $\inf_t\{t\alpha+\beta(t)\}$ is a lower bound, and the other shows that it is an upper bound. To study the lower bound of the multifractal spectrum of self-affine Moran measures, we need the following two conclusions. The first is a version of the law of large numbers, and we refer the readers to [@CYS78 Theorem 1 in section 5.2] for details. **Theorem 4**. *Let $\{X_n\}_{n=1}^\infty$ be a sequence of independent random variables satisfying that $$\sum_{n=1}^\infty \frac{\mathbb{E} (|X_n|^{\alpha_n} )}{n^{\alpha_n}}<\infty$$ for some choice of $\alpha_n$ in $(0,2],$ where $\mathbb{E} (X_n)=0$ whenever $1\leq \alpha_n\leq 2$. Then the sequence $\frac{1}{n}\sum_{i=1}^n X_i$ converges to 0 almost surely.* The next is a version of the Frostman lemma, which is useful in finding the dimension of measures, see [@Falco97] for details. **Theorem 5**. *Let $\nu$ be a finite Borel measure on $\mathbb{R}^d$. If $\liminf_{r\to 0} \frac{\log \nu(B(x,r))}{\log r} =s$ for $\nu$-almost every $x$, then $\mbox{\rm dim}_{\rm H}\,\nu= s.$\ * For $\delta=\frac{1}{m_1 m_2\ldots m_k}$, we simply write $\mathcal{S}_k= \mathcal{S}_\delta$. For each $\mathbf{w}\in \Sigma^\infty$, there exists $U(\delta, \mathbf{w})\in \mathcal{U}_\delta$, and we write $S_k(\mathbf{w})$ for the approximate square in $\mathcal{S}_k$ containing $\Pi(\mathbf{w})$, i.e. $$S_k(\mathbf{w})=\Pi(U(\delta, \mathbf{w}))\in\mathcal{S}_k.$$ Recall that, for the sequence $\big\{\big(p_k(w)>0\big)_{w\in \mathcal{D}_k}\big\}_{k=1}^\infty$ of probability vectors and $w=(i,j)\in\mathcal{D}_k$, $$q_k(w)=q_k(j)=\sum_{i\colon(i,j)\in \mathcal{D}_k} p_k(i,j),\qquad \widehat{q}_k(w)=\widehat{q}_k(i)=\sum_{j\colon(i,j)\in \mathcal{D}_k} p_k(i,j)$$ and $$\label{eq_uk} u_k(w)=u_k(j)=\frac{q_k^t(j)}{\sum_{(i,j)\in \mathcal{D}_k} p_k^t(i,j)}.$$ ## Lower Bound We need to define a new measure to study the lower bound. 
For each $k>0$, given a real number $t$, we define $\beta_k(t)$ as the solution to $$m_k^{-\beta_k(t)}\sum_{(i,j)\in \mathcal{D}_k}p^t_k(ij) u_k^{1-\zeta}(j)=1.$$ We write $$P_k(w)=P_k(i,j)=m_k^{- \beta_k(t)}p^t_k(i,j) u_k^{1-\zeta}(j),$$ and $$Q_k(w)=Q_k(j)=\sum_{i\colon(i,j)\in \mathcal{D}_k} P_k(i,j), \qquad \widehat{Q}_k(w)=\widehat{Q}_k(i)=\sum_{j\colon(i,j)\in \mathcal{D}_k} P_k(i,j).$$ It is clear that $(P_k(w))_{w\in \mathcal{D}_k}$, $(Q_k(j))_{j}$ and $(\widehat{Q}_k(i))_{i}$ are probability vectors. Let $\widetilde{\mu}_t$ be the product Borel probability measure on $\Sigma^\infty$ defined by $$\label{defmut} \widetilde{\mu}_t([w_1w_2 \ldots w_k])=P_1(w_1)P_2(w_2)\ldots P_k(w_k),$$ and let $\mu_t$ be the image measure of $\widetilde{\mu}_t$ under $\Pi$, i.e. $$\label{muttil} \mu_t=\widetilde{\mu}_t \circ \Pi^{-1}.$$ Similar to [\[muas\]](#muas){reference-type="eqref" reference="muas"}, for each $S_k(\mathbf{w})\in \mathcal{S}_k$, the $\mu_t$ measure distributed on $S_k(\mathbf{w})$ is given by $$\label{mutas} \mu_t(S_k(\mathbf{w}))=\left\{ \begin{array}{lcl} P_1(w_1)\ldots P_l(w_l) Q_{l+1}(w_{l+1})\ldots Q_k(w_k), & \ & l \leq k, \\ P_1(w_1)\ldots P_k(w_k) \widehat{Q}_{k+1}(w_{k+1})\ldots \widehat{Q}_l(w_l), & \ & l > k. \end{array} \right.$$ Recall that $E$ is the self-affine Moran set with frequency $\{f_\gamma\}_{\gamma\in \Gamma}$. We write $$\label{def_alpha} \alpha(t)=\frac{\sum_{\gamma\in\Gamma} f_\gamma \sum_{w\in\mathcal{D}_\gamma}P_{\gamma}(w)\Big(-\zeta\log p_{\gamma}(w) - (1-\zeta)\log q_{\gamma}(w)\Big)}{\sum_{\gamma\in\Gamma}f_\gamma \log m_\gamma}.$$ **Lemma 6**. *Let $E$ be the self-affine Moran set with the frequency $\{f_\gamma\}_{\gamma\in \Gamma}$. Let $\mu$ and $\widetilde{\mu}_t$ be the measures given by [\[projmu\]](#projmu){reference-type="eqref" reference="projmu"} and [\[defmut\]](#defmut){reference-type="eqref" reference="defmut"} respectively. 
Then for $\widetilde{\mu}_t$-almost all $\mathbf{w}$ we have that $$\lim_{k\to \infty}\frac{\log \mu(S_k(\mathbf{w}))}{-\log m_1 m_2\ldots m_k}=\alpha(t).$$* *Proof.* Let $\{X_i\}_{i=1}^\infty$ be a sequence of independent random variables on $(\Sigma^\infty,\mathcal{B},\widetilde{\mu}_t)$ given by $$X_i(\mathbf{w})=\log p_i(w_i) - \sum_{w'\in\mathcal{D}_i} P_i(w') \log p_i(w'), \quad \mathbf{w}=w_1w_2 \ldots w_i \ldots \in \Sigma^\infty.$$ Since there are only finitely many patterns in $\Gamma$, a simple calculation shows that $$\mathbb{E}(X_i)=0 \qquad \text{and} \qquad \sum_{i=1}^{\infty}\frac{\mathbb{E}(X_i^2)}{i^2}<\infty.$$ Hence, by Theorem [Theorem 4](#lln){reference-type="ref" reference="lln"}, we have that $$\label{EX} \lim_{k\to \infty} \frac{1}{k} \sum_{i=1}^k X_i(\mathbf{w})=0,$$ for $\widetilde{\mu}_t$-a.e. $\mathbf{w}\in \Sigma^\infty$. Similarly, by Theorem [Theorem 4](#lln){reference-type="ref" reference="lln"}, for the random variables $$Y_i(\mathbf{w})=\log q_i(w_i) - \sum_{j'=0}^{m_i-1} Q_i(j') \log q_i(j'), \qquad w_i=(i',j')\in \mathcal{D}_i,$$ we have that, for $\widetilde{\mu}_t$-a.e. 
$\mathbf{w}\in \Sigma^\infty$, $$\label{EY} \lim_{k\to \infty} \frac{1}{k} \sum_{i=1}^k Y_i(\mathbf{w})=0.$$ Therefore, by [\[muas\]](#muas){reference-type="eqref" reference="muas"}, for each integer $k>0$, we obtain that $$\log \mu(S_k(\mathbf{w}))=\left\{\begin{array}{l}\sum_{i=1}^{l} X_i(\mathbf{w}) + \sum_{i=1}^{l} \sum_{w'\in \mathcal{D}_{i}} P_{i}(w') \log p_{i}(w') \\ \hspace{0.5cm}+ \sum_{i=l+1}^{k} Y_i(\mathbf{w}) + \sum_{i=l+1}^{k} \sum_{w'\in \mathcal{D}_{i}} P_{i}(w') \log q_{i}(w') , \qquad l\leq k\\ \sum_{i=1}^{k} X_i(\mathbf{w}) + \sum_{i=1}^{k} \sum_{w'\in \mathcal{D}_{i}} P_{i}(w') \log p_{i}(w') \\ \hspace{0.5cm}+\sum_{i=k+1}^{l} Y_i(\mathbf{w}) + \sum_{i=k+1}^{l} \sum_{w'\in \mathcal{D}_{i}} P_{i}(w') \log \widehat{q}_{i}(w'), \qquad l>k, \end{array} \right.$$ where $l=l(k)$ is given by [\[def_l\]](#def_l){reference-type="eqref" reference="def_l"}. For each $\gamma\in \Gamma$, we write $$c_\gamma(n)=\operatorname{card}\{k: (n_k,m_k,\mathcal{D}_k)=\gamma, k=1,2,\ldots, n \},$$ and it is clear that $f_\gamma=\lim_{n\to \infty} \frac{c_\gamma(n)}{n}$. It is obvious that $$\label{denmk} \lim_{k\to\infty } \frac{\log m_1\ldots m_k}{k}=\sum_{\gamma\in \Gamma} f_\gamma \log m_\gamma.$$ By [\[def_l\]](#def_l){reference-type="eqref" reference="def_l"} and [\[def_zeta\]](#def_zeta){reference-type="eqref" reference="def_zeta"}, we have that $$\label{lim_kl} \lim_{k\to\infty } \frac{l}{k}=\zeta.$$ For $\zeta<1$, there exists $K>0$ such that for all $k>K$ we have that $l<k$. 
Combining with [\[EX\]](#EX){reference-type="eqref" reference="EX"} and [\[EY\]](#EY){reference-type="eqref" reference="EY"}, we have that $$\begin{aligned} &&\hspace{-0.8cm} \lim_{k\to \infty} \frac{\log \mu(S_k(\mathbf{w}))}{k} = \lim_{k\to \infty} \left(\frac{1}{k}\sum_{i=1}^{l} \sum_{w'\in \mathcal{D}_{i}} P_{i}(w') \log p_{i}(w') + \frac{1}{k}\sum_{i=l+1}^{k} \sum_{w'\in \mathcal{D}_{i}} P_{i}(w') \log q_{i}(w')\right) \\ &&= \lim_{k\to \infty} \sum_{\gamma\in\Gamma}\left(\frac{c_\gamma(l)}{k} \sum_{w\in\mathcal{D}_\gamma}P_{\gamma}(w)\log p_{\gamma}(w) + \frac{c_\gamma(k)-c_\gamma(l)}{k} \sum_{w\in\mathcal{D}_\gamma}P_{\gamma}(w)\log q_{\gamma}(w)\right)\\ &&= \sum_{\gamma\in\Gamma} f_\gamma\sum_{w\in\mathcal{D}_\gamma}P_{\gamma}(w)\Big(\zeta\log p_{\gamma}(w) + (1-\zeta)\log q_{\gamma}(w)\Big),\end{aligned}$$ for $\widetilde{\mu}_t$-a.e. $\mathbf{w}\in \Sigma^\infty$. Combining with [\[denmk\]](#denmk){reference-type="eqref" reference="denmk"}, we have that $$\lim_{k\to \infty}\frac{\log \mu(S_k(\mathbf{w}))}{-\log m_1\ldots m_k} =\frac{\sum_{\gamma\in\Gamma} f_\gamma\sum_{w\in\mathcal{D}_\gamma}P_{\gamma}(w)\Big(-\zeta\log p_{\gamma}(w) - (1-\zeta)\log q_{\gamma}(w)\Big)}{\sum_{\gamma\in\Gamma}f_\gamma \log m_\gamma}.$$ For $\zeta=1$, by [\[lim_kl\]](#lim_kl){reference-type="eqref" reference="lim_kl"}, we have that $\lim_{k\to\infty } \frac{l}{k}=1$. Therefore, both $l\leq k$ and $l>k$ may occur alternately, and we only estimate the following for $l>k$ (the other case is identical). 
$$\begin{aligned} &&\left|\sum_{\gamma\in\Gamma}f_\gamma \sum_{w\in\mathcal{D}_\gamma}P_{\gamma}(w)\log p_{\gamma}(w) - \frac{\log \mu(S_k(\mathbf{w}))}{k}\right| \\ &&=\bigg|\sum_{\gamma\in\Gamma}f_\gamma \sum_{w\in\mathcal{D}_\gamma}P_{\gamma}(w)\log p_{\gamma}(w)- \frac{1}{k} \sum_{i=1}^k X_i(\mathbf{w})- \frac{1}{k}\sum_{i=1}^{k} \sum_{w'\in \mathcal{D}_{i}} P_{i}(w') \log p_{i}(w') \\ &&\hspace{6.5cm} - \frac{1}{k} \sum_{i=k+1}^l Y_i(\mathbf{w}) - \frac{1}{k} \sum_{i=k+1}^{l} \sum_{i'=0}^{n_i-1} \widehat{Q}_{i}(i') \log \widehat{q}_{i}(i') \bigg| \\ &&\leq \bigg| \frac{1}{k} \sum_{i=1}^k X_i(\mathbf{w}) + \frac{1}{k} \sum_{i=k+1}^{l} Y_i(\mathbf{w}) \bigg| +\bigg|\sum_{\gamma\in\Gamma}f_\gamma \sum_{w\in\mathcal{D}_\gamma} P_{\gamma}(w) \log p_{\gamma}(w) \\ &&\hspace{2cm} -\sum_{\gamma\in\Gamma} \bigg(\frac{c_\gamma(k)}{k} \sum_{w\in\mathcal{D}_\gamma}P_{\gamma}(w)\log p_{\gamma}(w) + \frac{c_\gamma(l)-c_\gamma(k)}{k} \sum_{i'=0}^{n_\gamma-1} \widehat{Q}_{\gamma}(i') \log \widehat{q}_{\gamma}(i')\bigg)\bigg|.\end{aligned}$$ Combining [\[EX\]](#EX){reference-type="eqref" reference="EX"}, [\[EY\]](#EY){reference-type="eqref" reference="EY"} and [\[lim_kl\]](#lim_kl){reference-type="eqref" reference="lim_kl"} with $f_\gamma=\lim_{k\to \infty} \frac{c_\gamma(k)}{k}$ and $\lim_{k\to\infty } \frac{l}{k}=1$, we have that $$\begin{aligned} \hspace{-0.8cm} \lim_{k\to \infty} \frac{\log \mu(S_k(\mathbf{w}))}{k} &=& \sum_{\gamma\in\Gamma}f_\gamma \sum_{w\in\mathcal{D}_\gamma}P_{\gamma}(w)\log p_{\gamma}(w),\end{aligned}$$ for $\widetilde{\mu}_t$-a.e. $\mathbf{w}\in \Sigma^\infty$. 
Combining with [\[denmk\]](#denmk){reference-type="eqref" reference="denmk"}, we have that $$\lim_{k\to \infty}\frac{\log \mu(S_k(\mathbf{w}))}{-\log m_1\ldots m_k} =\frac{-\sum_{\gamma\in\Gamma} f_\gamma\sum_{w\in\mathcal{D}_\gamma}P_{\gamma}(w) \log p_{\gamma}(w) }{\sum_{\gamma\in\Gamma}f_\gamma \log m_\gamma}.$$ Therefore, for $\zeta\leq 1$, by [\[def_alpha\]](#def_alpha){reference-type="eqref" reference="def_alpha"}, it follows that $$\begin{aligned} \lim_{k\to \infty}\frac{\log \mu(S_k(\mathbf{w}))}{-\log m_1\ldots m_k} &=&\alpha(t),\end{aligned}$$ and the conclusion holds. ◻ **Lemma 7**. *Let $E$ be the self-affine Moran set with the frequency $\mathbf{f}=\{f_\gamma\}_{\gamma\in\Gamma}$ and $\zeta\leq 1$. Let $\mu$ and $\mu_t$ be given by [\[projmu\]](#projmu){reference-type="eqref" reference="projmu"} and [\[muttil\]](#muttil){reference-type="eqref" reference="muttil"}. Then, for $\mu_t$-almost all $x$, $$\mbox{\rm dim}_{\rm loc}\,\mu(x) =\alpha(t).$$* *Proof.* For each integer $k>0$, we write $$A_k=\{x\in E: B\big(x,(m_1\ldots m_k)^{-1}e^{-\sqrt{k}}\big)\cap E \subset S_k(\mathbf{w}), \text{ for some }\mathbf{w}\in\Pi^{-1}(\{x\})\}.$$ For each $x\in E$, choose $\mathbf{w}=(i_1,j_1)(i_2,j_2)\ldots\in\Pi^{-1}(\{x\})$. Let $\widetilde{S}_k(\mathbf{w})$ be the rectangle with height $(m_1 \ldots m_k)^{-1}$ and width $(n_1 \ldots n_l)^{-1}$ given by $$\widetilde{S}_k(\mathbf{w})=\Big[\sum_{h=1}^l\frac{i_h}{n_1 \ldots n_h}, \sum_{h=1}^l\frac{i_h}{n_1 \ldots n_h}+\frac{1}{n_1 \ldots n_l}\Big]\times \Big[\sum_{h=1}^k\frac{j_h}{m_1 \ldots m_h}, \sum_{h=1}^k\frac{j_h}{m_1 \ldots m_h}+\frac{1}{m_1 \ldots m_k}\Big].$$ It is clear that $\widetilde{S}_k(\mathbf{w}) \cap E = S_k(\mathbf{w})$. Let $L_k^B$ (resp. $L_k^T$, $L_k^L$, $L_k^R$) be the collection of $x\in E$ such that the distance from $x$ to the bottom (resp. top, left, right) side of $\widetilde{S}_k(\mathbf{w})$ is less than $(m_1\ldots m_k)^{-1}e^{-\sqrt{k}}$, for some $\mathbf{w}\in\Pi^{-1}(\{x\})$. 
It is clear that $$A_k^c \subset L_k^B \cup L_k^T \cup L_k^L \cup L_k^R.$$ For each $x\in L_k^B$, it is clear that $j_{k+1} = \ldots =j_{k+[\sqrt{k}/\log N^+]}=0$, and the measure of $L_k^B$ is bounded by $$\mu_t(L_k^B) \leq Q_{k+1}(0)\ldots Q_{k+[\sqrt{k}/\log N^+]}(0).$$ By the same argument, we have that $$\begin{aligned} \mu_t(L_k^T) &\leq& Q_{k+1}(m_{k+1}-1)\ldots Q_{k+[\sqrt{k}/\log N^+]}(m_{k+[\sqrt{k}/\log N^+]}-1);\\ \mu_t(L_k^L) &\leq& \widehat{Q}_{l+1}(0)\ldots \widehat{Q}_{l+[\sqrt{l}/\log N^+]}(0);\\ \mu_t(L_k^R) &\leq& \widehat{Q}_{l+1}(n_{l+1}-1)\ldots \widehat{Q}_{l+[\sqrt{l}/\log N^+]}(n_{l+[\sqrt{l}/\log N^+]}-1).\end{aligned}$$ Let $$Q^+=\Big(\max_{\gamma\in\Gamma}\big\{Q_{\gamma}(0),\, Q_{\gamma}(m_\gamma-1),\, \widehat{Q}_\gamma(0),\, \widehat{Q}_\gamma(n_\gamma-1) \,:\, \text{the value is} <1\big\}\Big)^{1/(\log N^+ +1)}.$$ Since $\operatorname{card}\Gamma<\infty$, it is clear that $Q^+<1$. If $Q_{k'}(0)\neq 1$ for all $k+1\leq k'\leq k+[\sqrt{k}/\log N^+]$, the measure of $L_k^B$ is bounded by $$\mu_t(L_k^B) \leq Q_{k+1}(0)\ldots Q_{k+[\sqrt{k}/\log N^+]}(0) \leq (Q^+)^{\sqrt{k}}.$$ Otherwise, there exists an integer $k_0$, $k+1\leq k_0 \leq k+[\sqrt{k}/\log N^+]$, such that $Q_{k_0}(0)=1$. This implies that $r_{k_0}(m_{k_0}-1)=0$, that is, $L_k^T$ is empty, and for each $x\in L_k^B$, there exists $\mathbf{w}\in \Pi^{-1}(\{x\})$ such that $$\begin{aligned} B\big(x,(m_1\ldots m_k)^{-1}e^{-\sqrt{k}}\big)\cap E &\subset& B\big(x,(m_1\ldots m_{k_0})^{-1}\big)\cap E \\ &\subset& \bigcup\{S_{k}(\mathbf{w}')\colon j_1'=j_1, \ldots, j_{k}'=j_{k}\}.\end{aligned}$$ Hence, for each $x \in L_k^B \backslash (L_k^L\cup L_k^R)$, we have that $B\big(x,(m_1\ldots m_k)^{-1}e^{-\sqrt{k}}\big)\cap E \subset S_k(\mathbf{w})$ for some $\mathbf{w}\in \Pi^{-1}(\{x\})$, that is to say, $x\in A_k$. Therefore, we obtain that $A_k^c \subset L_k^L \cup L_k^R$. A similar argument applies to each of the other three sides. For each given $k>0$, we estimate the measure distributed on $A_k^c$ in the following four cases. 
(1) For all $k+1\leq k'\leq k+[\sqrt{k}/\log N^+]$ and all $l+1 \leq l'\leq l+[\sqrt{l}/\log N^+]$, we have that $Q_{k'}(0),Q_{k'}(m_{k'}-1), \widehat{Q}_{l'}(0), \widehat{Q}_{l'}(n_{l'}-1) < 1$. Since $\mu_t(L_k^*)\leq (Q^+)^{\sqrt{k}}$ where $*= B, T, L, R$, it is clear that $$\mu_t(A_k^c) \leq \mu_t(L_k^B) + \mu_t(L_k^T) + \mu_t(L_k^L) + \mu_t(L_k^R) \leq 4(Q^+)^{\sqrt{k}}.$$ (2) There exists an integer $k+1\leq k_0 \leq k+[\sqrt{k}/\log N^+]$ such that $Q_{k_0}(0)=1$ or $Q_{k_0}(m_{k_0}-1)=1$, and for all $l+1 \leq l'\leq l+[\sqrt{l}/\log N^+]$, $\widehat{Q}_{l'}(0), \widehat{Q}_{l'}(n_{l'}-1) <1$. Since $A_k^c \subset L_k^L \cup L_k^R$ and $\mu_t(L_k^*)\leq (Q^+)^{\sqrt{k}}$ where $*=L, R$, it is clear that $$\mu_t(A_k^c) \leq \mu_t(L_k^L) + \mu_t(L_k^R) \leq 2(Q^+)^{\sqrt{k}}.$$ (3) There exists an integer $l+1\leq l_0 \leq l+[\sqrt{l}/\log N^+]$ such that $\widehat{Q}_{l_0}(0)=1$ or $\widehat{Q}_{l_0}(n_{l_0}-1)=1$, and for all $k+1 \leq k'\leq k+[\sqrt{k}/\log N^+]$, $Q_{k'}(0), Q_{k'}(m_{k'}-1) < 1$. Similar to (2), we have that $$\mu_t(A_k^c) \leq \mu_t(L_k^B) + \mu_t(L_k^T) \leq 2(Q^+)^{\sqrt{k}}.$$ (4) There exists $k+1\leq k_0 \leq k+[\sqrt{k}/\log N^+]$ such that $Q_{k_0}(0)=1$ or $Q_{k_0}(m_{k_0}-1)=1$, and $l+1\leq l_0 \leq l+[\sqrt{l}/\log N^+]$ such that $\widehat{Q}_{l_0}(0)=1$ or $\widehat{Q}_{l_0}(n_{l_0}-1)=1$. At least one of $L_k^B$ and $L_k^T$ is empty, and at least one of $L_k^L$ and $L_k^R$ is empty. This implies that for each $x\in E$, there exists $\mathbf{w}\in \Pi^{-1}(\{x\})$ such that $B\big(x,(m_1\ldots m_k)^{-1}e^{-\sqrt{k}}\big)\cap E \subset S_k(\mathbf{w})$, and it immediately follows that $E \subset A_k$. Hence $A_k^c \cap E=\emptyset$, and thus $\mu_t(A_k^c)=0$. Therefore the measure of $A_k^c$ is bounded by $\mu_t(A_k^c)\leq 4(Q^+)^{\sqrt{k}}$.
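The Borel--Cantelli step that follows relies on the convergence of $\sum_{k\geq 1}(Q^+)^{\sqrt{k}}$ for $0<Q^+<1$: writing $c=-\log q$, the integral test gives $\sum_{k\geq 1} q^{\sqrt{k}} \leq \int_0^\infty e^{-c\sqrt{x}}\,dx = 2/c^2$. As an aside, here is a minimal numerical sanity check of this bound (illustrative only; the value $q=0.9$ is an arbitrary stand-in for $Q^+$, not taken from the paper):

```python
import math

def partial_sum(q, K):
    # partial sum of the series sum_{k=1}^{K} q**sqrt(k), for 0 < q < 1
    return sum(q ** math.sqrt(k) for k in range(1, K + 1))

q = 0.9                        # arbitrary illustrative stand-in for Q^+
c = -math.log(q)
integral_bound = 2.0 / c ** 2  # integral test: int_0^inf e^{-c*sqrt(x)} dx = 2/c^2

# partial sums increase with K but never exceed the integral bound
sums = [partial_sum(q, K) for K in (10, 100, 1000, 10000)]
assert all(s < integral_bound for s in sums)
assert all(a < b for a, b in zip(sums, sums[1:]))
```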
Since $Q^+<1$, it follows that $$\sum_{k=1}^\infty \mu_t(A_k^c) \leq 4 \sum_{k=1}^\infty (Q^+)^{\sqrt{k}}<\infty.$$ By the Borel--Cantelli lemma, it follows that $$\mu_t(A_k^c\,i.o.)=0.$$ Therefore, for $\mu_t$-almost all $x$, we have that $$\mu\Big(B\big(x, (m_1\ldots m_k)^{-1}e^{-\sqrt{k}}\big)\Big)\leq \mu(S_k(\mathbf{w})),$$ for sufficiently large $k$. For each $r>0$, there exists a unique integer $k$ such that $$(m_1\ldots m_{k+1})^{-1}e^{-\sqrt{k+1}}\leq r<(m_1\ldots m_k)^{-1}e^{-\sqrt{k}},$$ which implies that $\mu(B(x,r))\leq \mu(S_k(\mathbf{w}))$. Therefore, we have that $$\liminf_{r\to 0}\frac{\log \mu(B(x,r))}{\log r}\geq \liminf_{k\to \infty}\frac{\log \mu(S_k(\mathbf{w}))}{-\log m_1 m_2\ldots m_k}.$$ Similarly, we have that $$\limsup_{r\to 0}\frac{\log \mu(B(x,r))}{\log r}\leq \limsup_{k\to \infty}\frac{\log \mu(S_k(\mathbf{w}))}{-\log m_1 m_2\ldots m_k}.$$ By Lemma [Lemma 6](#lb){reference-type="ref" reference="lb"}, the conclusion holds. ◻ **Lemma 8**. *Given a probability vector $\mathbf{f}=\{f_\gamma\}_{\gamma\in\Gamma}$ with $\zeta\leq 1$. Let $E$ be the self-affine Moran set with the frequency $\{f_\gamma\}_{\gamma\in \Gamma}$. Let $\mu$ and $\mu_t$ be given by [\[mu\]](#mu){reference-type="eqref" reference="mu"} and [\[muttil\]](#muttil){reference-type="eqref" reference="muttil"}. Then, for $\mu_t$-almost all $x$, we have that $$\lim_{r\to 0}\frac{\log \mu(B(x,r))}{\log r}=\alpha(t).$$* *Proof.* For each integer $k>0$, we write $$A_k=\{x=\Pi(\mathbf{w})\in E: B\big(x,(m_1\ldots m_k)^{-1}e^{-\sqrt{k}}\big)\cap E \subset S_k(\mathbf{w})\}.$$ Let $L_k$ be the collection of $x\in E$ such that the distance from $x$ to the bottom side of $S_k(\mathbf{w})$ is less than $(m_1\ldots m_k)^{-1}e^{-\sqrt{k}}$, where $x=\Pi(\mathbf{w})$, $\mathbf{w}=(i_1,j_1)\ldots(i_k,j_k)\ldots\in\Sigma^\infty$. It is clear that $j_{k+1}= \ldots =j_{k+[\sqrt{k}/\log N^+]}=0$.
Hence, the measure of $L_k$ is bounded by $$\mu_t(L_k)\leq Q_{k+1}(0)\ldots Q_{{k+[\sqrt{k}/\log N^+]}}(0) \leq (Q^+)^{\sqrt{k}},$$ where $Q^+=\max_{\gamma\in\Gamma} \{Q_{\gamma}(0),Q_{\gamma}(m_\gamma-1),\widehat{Q}_\gamma(0), \widehat{Q}_\gamma(n_\gamma-1)\}^{1/(\log N^+ +1)}<1$. A similar argument applies to each of the other three sides, and we obtain that $$\sum_{k=1}^\infty \mu_t(A_k^c) < 4 \sum_{k=1}^\infty (Q^+)^{\sqrt{k}}<\infty.$$ By the Borel--Cantelli lemma, it follows that $$\mu_t(A_k^c\,i.o.)=0.$$ Therefore, for $\mu_t$-almost all $x$, we have that $$\mu\Big(B\big(x, (m_1\ldots m_k)^{-1}e^{-\sqrt{k}}\big)\Big)\leq \mu(S_k(\mathbf{w})),$$ for sufficiently large $k$. For each $r>0$, there exists a unique integer $k$ such that $$(m_1\ldots m_{k+1})^{-1}e^{-\sqrt{k+1}}\leq r<(m_1\ldots m_k)^{-1}e^{-\sqrt{k}},$$ which implies that $\mu(B(x,r))\leq \mu(S_k(\mathbf{w}))$. Therefore, we have that $$\liminf_{r\to 0}\frac{\log \mu(B(x,r))}{\log r}\geq \liminf_{k\to \infty}\frac{\log \mu(S_k(\mathbf{w}))}{-\log m_1 m_2\ldots m_k}.$$ Similarly, we have that $$\limsup_{r\to 0}\frac{\log \mu(B(x,r))}{\log r}\leq \limsup_{k\to \infty}\frac{\log \mu(S_k(\mathbf{w}))}{-\log m_1 m_2\ldots m_k}.$$ By Lemma [Lemma 6](#lb){reference-type="ref" reference="lb"}, the conclusion holds. ◻ **Lemma 9**. *Let $E$ be the self-affine Moran set with the frequency $\mathbf{f}=\{f_\gamma\}_{\gamma\in\Gamma}$ and $\zeta\leq 1$. Let $\widetilde{\mu}_t$ and $\mu_t$ be defined by [\[defmut\]](#defmut){reference-type="eqref" reference="defmut"} and [\[muttil\]](#muttil){reference-type="eqref" reference="muttil"}. If $$\liminf_{k\to \infty}\frac{\log \mu_t(S_k(\mathbf{w}))}{-\log m_1 m_2\ldots m_k}=\delta,$$ for $\widetilde{\mu}_t$-a.e.
$\mathbf{w}\in \Sigma^\infty$, then $\mbox{\rm dim}_{\rm H}\,\mu_t=\delta$.* *Proof.* Using the same argument as in Lemma [Lemma 8](#lem_ld){reference-type="ref" reference="lem_ld"}, we have that $$\lim_{r\to 0}\frac{\log \mu_t(B(x,r))}{\log r} = \lim_{k\to \infty}\frac{\log \mu_t(S_k(\mathbf{w}))}{-\log m_1 m_2\ldots m_k},$$ for $\mu_t$-a.e. $x\in E$. By Theorem [Theorem 5](#lem_frost){reference-type="ref" reference="lem_frost"}, we conclude that $\mbox{\rm dim}_{\rm H}\,\mu_t=\delta$. ◻ **Lemma 10**. *Let $E$ be the self-affine Moran set with the frequency $\mathbf{f}=\{f_\gamma\}_{\gamma\in\Gamma}$ and $\zeta\leq 1$. Let $\mu_t$ be defined by [\[muttil\]](#muttil){reference-type="eqref" reference="muttil"}, then $$\mbox{\rm dim}_{\rm H}\,\mu_t = t\alpha(t)+\beta(t).$$* *Proof.* As in the proof of Lemma [Lemma 6](#lb){reference-type="ref" reference="lb"}, for each integer $i>0$, let $X_i$ and $Y_i$ be the independent random variables on $(\Sigma^\infty,\mathcal{B},\mu_t)$ given by $$\begin{aligned} X_i(\mathbf{w})&=&\log P_i(w_i) - \sum_{w'\in\mathcal{D}_i} P_i(w') \log P_i(w'), \\ Y_i(\mathbf{w})&=&\log Q_i(w_i) - \sum_{w'\in\mathcal{D}_i} P_i(w') \log Q_i(w'),\end{aligned}$$ where $\mathbf{w}=w_1w_2 \ldots w_i \ldots \in \Sigma^\infty.$ By Theorem [Theorem 4](#lln){reference-type="ref" reference="lln"}, we have that $$\lim_{k\to \infty} \frac{1}{k} \sum_{i=1}^k X_i(\mathbf{w})=0, \qquad \lim_{k\to \infty} \frac{1}{k} \sum_{i=1}^k Y_i(\mathbf{w})=0,$$ for $\widetilde{\mu}_t$-a.e. $\mathbf{w}\in \Sigma^\infty$.
By the same argument as in Lemma [Lemma 6](#lb){reference-type="ref" reference="lb"}, we have that $$\begin{aligned} \lim_{k\to \infty}\frac{\log \mu_t(S_k(\mathbf{w}))}{\log r_k} &=& \frac{\sum_{\gamma\in\Gamma} f_\gamma\sum_{w\in\mathcal{D}_\gamma}P_{\gamma}(w)\Big(-\zeta\log P_{\gamma}(w) - (1-\zeta)\log Q_{\gamma}(w)\Big)}{\sum_{\gamma\in\Gamma}f_\gamma \log m_\gamma} \\ &=& t\alpha(t)+\beta(t).\end{aligned}$$ Then by Lemma [Lemma 9](#dimmut){reference-type="ref" reference="dimmut"}, we have that $\mbox{\rm dim}_{\rm H}\,\mu_t = t\alpha(t)+\beta(t).$ ◻ **Theorem 11**. *Let $E$ be the self-affine Moran set with the frequency $\mathbf{f}=\{f_\gamma\}_{\gamma\in\Gamma}$ and $\zeta\leq 1$. Let $\mu$ be the self-affine Moran measure given by [\[projmu\]](#projmu){reference-type="eqref" reference="projmu"}. Then $$H(\alpha) \geq \inf_t\{t\alpha+\beta(t)\}.$$* *Proof.* For each $\alpha\in(\alpha_{\min}, \alpha_{\max})$, by [\[def_alpha\]](#def_alpha){reference-type="eqref" reference="def_alpha"}, there exists $t>0$ such that $\alpha=\alpha(t)$. Let $E'=\{x\in E_{\alpha(t)} : \mbox{\rm dim}_{\rm loc}\,\mu(x)=\alpha(t)\}$. By Lemma [Lemma 8](#lem_ld){reference-type="ref" reference="lem_ld"}, $\mu_t(E')=1$. Since $$\mbox{\rm dim}_{\rm H}\,E_{\alpha(t)} \geq \mbox{\rm dim}_{\rm H}\,E' \geq \mbox{\rm dim}_{\rm H}\,\mu_t,$$ by Lemma [Lemma 10](#mut){reference-type="ref" reference="mut"}, we have that $$\mbox{\rm dim}_{\rm H}\,E_{\alpha(t)} \geq \mbox{\rm dim}_{\rm H}\,\mu_t = t\alpha(t)+\beta(t) \geq \inf_t \{t\alpha+\beta(t)\},$$ and the conclusion holds. ◻ ## Upper Bound {#sec_ub} For $\mathbf{w}\in \Sigma^\infty$ we write $$\label{defB} I_k(\mathbf{w})=\frac{1}{k} \sum_{i=1}^{k}\log u_i(w_i) ,\qquad D_k(\mathbf{w})=I_l(\mathbf{w})-I_k(\mathbf{w}),$$ where $l=l(k)$ is given by [\[def_l\]](#def_l){reference-type="eqref" reference="def_l"}, and $u_i$ is given by [\[eq_uk\]](#eq_uk){reference-type="eqref" reference="eq_uk"}.
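As a reading aid (a routine observation, spelled out here rather than part of the original argument): since $I_k(\mathbf{w})$ is the average of $\log u_1(w_1),\ldots,\log u_k(w_k)$, whenever $u_i(w_i)=u_{k+1}(w_{k+1})$ for all $k<i\leq k'$ one has the elementary averaging identity

$$I_{k'}(\mathbf{w})=\frac{1}{k'}\sum_{i=1}^{k'}\log u_i(w_i) =\frac{k\,I_k(\mathbf{w})+(k'-k)\log u_{k+1}(w_{k+1})}{k'}.$$

This identity underlies the computation of $D_{k+V_k(\mathbf{w})}(\mathbf{w})$ in the proof of Theorem [Theorem 2](#cor2){reference-type="ref" reference="cor2"} below.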
We define $$\begin{aligned} &&\hspace{-1cm}\partial E=\left\{x\in E \ \colon \exists \mathbf{w}=(i_1, j_1) (i_2,j_2) \ldots \in \Pi^{-1} (x), \exists K>0, j_k=0, \forall k>K\right\}\\ &&\bigcup \left\{x\in E \ \colon \exists \mathbf{w}=(i_1, j_1) (i_2,j_2) \ldots \in \Pi^{-1} (x), \exists K>0, j_k=m_k-1, \forall k>K\right\}.\end{aligned}$$ To study the multifractal spectrum on $E$, we need the following technical condition. **Definition 1**. We say $E$ satisfies *the replica condition* (RC) if for all $\epsilon>0$ and all $x \in E\backslash\partial E$, for each $\mathbf{w}\in \Pi^{-1}(x)$, there exists a sequence $\{c_{k_i}\}_{i=1}^\infty$ such that $$c_{k_i} \geq (m_1 m_{2}\ldots m_{k_i})^{-(1+\epsilon/2)}$$ satisfying that $$B(x,c_{k_i})\cap E \subset \bigcup\{S_{k_i}(\mathbf{w}')\colon j_1'=j_1, \ldots, j_{k_i}'=j_{k_i}\} \qquad \textit{and} \qquad D_{k_i}(\mathbf{w}) > -\epsilon.$$ In fact, we prove that the replica condition implies the upper bound. For $\alpha>0$ and $\epsilon>0$, we write $$\begin{aligned} Y(\epsilon, k)&=&\{S\in \mathcal{S}_k: (m_1\ldots m_{k})^{-\alpha(1+\epsilon)} < \mu(S)<(m_1\ldots m_{k})^{-\alpha(1-\epsilon)}\},\\ G(\epsilon,k) &=& Y(\epsilon,k) \cap \{S_k(\mathbf{w}):D_k(\mathbf{w})\geq -\epsilon, \ \mathbf{w}\in\Sigma^\infty\}.\end{aligned}$$ We denote the distance from $x$ to a set $A$ by $d(A,x)=\inf\{|x-y| : y\in A\}$. **Lemma 12**. *Let $E$ be the self-affine Moran set with the frequency $\mathbf{f}=\{f_\gamma\}_{\gamma\in\Gamma}$ and $\zeta\leq 1$. For all $\epsilon>0$ and $x\in E_{\alpha}$, there exists $K\in\mathbb{N}$ such that for every $k\geq K$ there exists $\mathbf{w}'\in \Sigma^\infty$ such that $S_k(\mathbf{w}')\in Y(\epsilon, k)$ satisfying $$d(S_k(\mathbf{w}'),x)\leq (m_1 m_{2}\ldots m_{k})^{-1}.$$* *Proof.* Fix $\epsilon>0$.
Since $x\in E_{\alpha}$, there exists a real $R>0$ such that for all $r<R$, we have that $$\alpha\left(1-\frac{\epsilon}{3}\right) \leq \frac{\log \mu(B(x,r))}{\log r} \leq \alpha\left(1+\frac{\epsilon}{3}\right).$$ Let $K>0$ be a sufficiently large integer such that $$\frac{1}{m_1 m_{2}\ldots m_{K}} < \frac{R}{2C_1} , \quad \frac{(2C_1)^{\alpha(1-\frac{\epsilon}{3})}}{ (m_1\ldots m_K)^{\frac{2\alpha\epsilon}{3}}}<1 \quad \textit{and } \quad \frac{(m_1\ldots m_K)^{2\alpha\epsilon/3}}{6N^+}>1,$$ where $C_1=(1+(N^+)^2)^{1/2}$. For all $k\geq K$ and all $\mathbf{w}'$ such that $d(S_k(\mathbf{w}'),x)\leq (m_1 m_{2}\ldots m_{k})^{-1}$ we have that $$\label{musu} \mu(S_k(\mathbf{w}'))\leq (2C_1(m_1 m_{2}\ldots m_{k})^{-1})^{\alpha(1-\frac{\epsilon}{3})}\leq (m_1 m_{2}\ldots m_{k})^{-\alpha(1-\epsilon)},$$ where the second inequality uses the choice of $K$: indeed $(2C_1)^{\alpha(1-\frac{\epsilon}{3})}\leq (m_1\ldots m_k)^{\frac{2\alpha\epsilon}{3}}$ and $-\alpha(1-\frac{\epsilon}{3})+\frac{2\alpha\epsilon}{3}=-\alpha(1-\epsilon)$. On the other hand, there are at most $6N^+$ approximate squares $S_k(\mathbf{w}')$ such that $d(S_k(\mathbf{w}'),x)\leq (m_1 m_{2}\ldots m_{k})^{-1}$. It follows that at least one of these approximate squares must satisfy $$\mu(S_k(\mathbf{w}'))\geq \frac{1}{6N^+}(m_1 m_{2}\ldots m_{k})^{-\alpha(1+\frac{\epsilon}{3})}\geq (m_1 m_{2}\ldots m_{k})^{-\alpha(1+\epsilon)}.$$ Therefore $S_k(\mathbf{w}')\in Y(\epsilon, k)$, and the conclusion holds. ◻ **Lemma 13**. *Let $E$ be the self-affine Moran set with the frequency $\mathbf{f}=\{f_\gamma\}_{\gamma\in\Gamma}$ and $\zeta\leq 1$. Suppose $E$ satisfies the replica condition. Then for all $\epsilon>0$ and all $x\in E_{\alpha}$, there exist a sequence $\{k_i\}$ and a sequence $\{\mathbf{w}'_i\}\subset \Sigma^\infty$ such that* - *$d(S_{k_i}(\mathbf{w}'_i),x)\leq (m_1 m_{2}\ldots m_{k_i})^{-1}$,* - *$S_{k_i}(\mathbf{w}'_i)\in Y(\epsilon, k_i)$,* - *$D_{k_i}(\mathbf{w}'_i) \geq -\epsilon$, for each $\mathbf{w}\in \Pi^{-1} (x)$.* - *if $x \notin \partial E$ then $j_1'=j_1, \ldots, j'_{k_i}=j_{k_i}$ for each $\mathbf{w}\in \Pi^{-1} (x)$.* *Proof.* Fix $\epsilon >0$ and $x\in E_\alpha$.
For each $\mathbf{w}\in \Pi^{-1} (x)$, by Lemma [Lemma 12](#lem12){reference-type="ref" reference="lem12"}, items $(1)$ and $(2)$ hold for all sufficiently large $k$. For $x \in \partial E$, we have that $$\lim_{k\to\infty} I_k(\mathbf{w})=\sum_{\gamma\in \Gamma}f_\gamma \log u_{\gamma}(0) \quad \text{or} \quad \lim_{k\to\infty} I_k(\mathbf{w})=\sum_{\gamma \in \Gamma} f_\gamma \log u_\gamma(m_\gamma-1).$$ Thus $\lim_{k\to\infty} D_k(\mathbf{w})=0$, and there exists $K>0$ such that $D_k(\mathbf{w})>-\epsilon$ for $k>K$. Hence the conclusion holds for $x \in \partial E$. If $x \notin \partial E$, by the replica condition, for each integer $i>0$, the ball $B(x,c_{k_i})$ is contained in the union of at most $N^+$ approximate squares $S_{k_i}(\mathbf{w}')$ satisfying $j'_1=j_1, \ldots, j'_{k_i}=j_{k_i}$, and $D_{k_i}(\mathbf{w}') =D_{k_i}(\mathbf{w}) \geq -\epsilon$. Moreover $$\mu(B(x,c_{k_i})) \geq c_{k_i}^{\alpha(1+\frac{\epsilon}{3})} \geq (m_1 m_{2}\ldots m_{k_i})^{-\alpha\left(1+\frac{\epsilon}{2}\right) \left(1+\frac{\epsilon}{3}\right)}.$$ This implies that the measure on at least one of the approximate squares is no less than $(m_1 m_{2}\ldots m_{k_i})^{-\alpha(1+\epsilon)}$, and we denote such a square by $S_{k_i}(\mathbf{w}'_i)$, with $\mu(S_{k_i}(\mathbf{w}'_i))\geq (m_1\ldots m_{k_i})^{-\alpha(1+\epsilon)}$. Together with  [\[musu\]](#musu){reference-type="eqref" reference="musu"}, we have that $S_{k_i}(\mathbf{w}'_i)\in Y(\epsilon, k_i)$. ◻ Given $\mathbf{w}\in \Sigma^\infty$, for integers $0<l<k$, we write $$\Sigma_l^k(\mathbf{w})=\{\mathbf{w}'\in\Sigma^k \colon w_i'=w_i , i=1,2,\ldots, l; j'_i=j_i,i=l+1,\ldots,k\}.$$ **Lemma 14**.
*For each $\epsilon>0$, there exists an integer $K>0$ such that, for all $k\geq K$, we have that $$\label{eqmu1} \mu(S_k(\mathbf{w}))^t \leq \left\{\begin{array}{ll} e^{2\epsilon k}\sum_{\mathbf{w}'\in \Sigma_l^k(\mathbf{w})}\Big(p_1(w'_1)\ldots p_k(w'_k)\Big)^t \Big(u_1(w'_1)\ldots u_k(w'_k)\Big)^{1-\zeta}, & \zeta <1;\\ e^{\epsilon k}\Big(p_1(w_1)\ldots p_k(w_k)\Big)^t , & \zeta =1, \end{array}\right.$$ for all $S_k(\mathbf{w})\in G(\epsilon,k)$.* *Proof.* First, we prove the inequality for $\zeta<1$. For each $S_k(\mathbf{w})\in \mathcal{S}_k$, by [\[muas\]](#muas){reference-type="eqref" reference="muas"} and [\[eq_uk\]](#eq_uk){reference-type="eqref" reference="eq_uk"}, we have that $$\begin{aligned} &&\mu(S_k(\mathbf{w}))^t = (p_1(w_1)\ldots p_l(w_l) q_{l+1}(w_{l+1})\ldots q_k(w_k))^t\\ &=& \Big(p_1(w_1)\ldots p_l(w_l)\Big)^t u_{l+1}(w_{l+1})\Big(\sum_{w_{l+1}'\in D_{l+1} \atop j'_{l+1}=j_{l+1}}p^t_{l+1}(w'_{l+1})\Big)\ldots u_k(w_k)\Big(\sum_{w_{k}'\in D_{k} \atop j'_k=j_k}p^t_k(w'_k)\Big)\\ &=& \Big(p_1(w_1)\ldots p_l(w_l)\Big)^t u_{l+1}(w_{l+1})\ldots u_k(w_k)\left(\sum_{\mathbf{w}'\in \Sigma_l^k(\mathbf{w})}\Big(p_{l+1}(w'_{l+1})\ldots p_k(w'_k)\Big)^t\right)\\ &=&\sum_{\mathbf{w}'\in \Sigma_l^k(\mathbf{w})}\Big(p_1(w'_1)\ldots p_k(w'_k)\Big)^t u_{l+1}(w'_{l+1})\ldots u_k(w'_k).\end{aligned}$$ Since $D_k(\mathbf{w})\geq -\epsilon$, by [\[defB\]](#defB){reference-type="eqref" reference="defB"}, we have that for $\mathbf{w}'\in \Sigma_l^k(\mathbf{w})$, $$\begin{aligned} \frac{u_{l+1}(w'_{l+1})\ldots u_k(w'_k)}{(u_1(w'_1)\ldots u_k(w'_k))^{1-\zeta}} &=& \frac{(u_1(w_1)\ldots u_k(w_k))^\zeta}{u_1(w_1)\ldots u_l(w_l)} \\ &\leq& e^{\zeta k I_k(\mathbf{w})-l I_l(\mathbf{w})}\\ &\leq& e^{(\zeta k-l)I_k(\mathbf{w})}\cdot e^{\epsilon l}.
\end{aligned}$$ Since $I_k(\mathbf{w})$ is uniformly bounded by some constant $C$ and $\lim_{k\to \infty}\frac{l}{k}=\zeta<1$, there exists $K$ such that for $k\geq K$ we have that $(\zeta k-l)I_k(\mathbf{w})\leq \epsilon l$. Hence $$\begin{aligned} \frac{u_{l+1}(w'_{l+1})\ldots u_k(w'_k)}{(u_1(w'_1)\ldots u_k(w'_k))^{1-\zeta}} &\leq& e^{2\epsilon l}\leq e^{2\epsilon k}, \end{aligned}$$ which is equivalent to $$u_{l+1}(w'_{l+1})\ldots u_k(w'_k)\leq e^{2\epsilon k} \big(u_1(w'_1)\ldots u_k(w'_k)\big)^{1-\zeta}.$$ Applying this in the expansion of $\mu(S_k(\mathbf{w}))^t$ above, the conclusion follows. For $\zeta =1$, fix $\mathbf{w}$ and $k$; we have that $$\mu(S_k(\mathbf{w}))^t=\left\{ \begin{array}{lcl} p_1^t(w_1)\ldots p_l^t(w_l)q_{l+1}^t(w_{l+1})\ldots q_k^t(w_k), & \ & l \leq k, \\ p_1^t(w_1)\ldots p_k^t(w_k)q_{k+1}^t(w_{k+1})\ldots q_l^t(w_l), & \ & l > k. \end{array} \right.$$ Thus $$\mu(S_k(\mathbf{w}))^t \leq C^{|(k-l)t|} (p_1(w_1)\ldots p_k(w_k))^t,$$ where $C=\max_\gamma \left\{\max_{(i,j)\in \mathcal{D}_\gamma}\left\{\frac{q_{\gamma }(j)}{p_{\gamma}(i,j)}, 2\right\}\right\}.$ Since $\zeta=1$, $\lim_{k\to \infty}\frac{l}{k}=1$. There exists an integer $K>0$ such that for $k\geq K$, we have that $$|k-l| \leq \frac{\epsilon k}{|t|\log C},$$ so that $C^{|(k-l)t|}=e^{|k-l|\,|t|\log C}\leq e^{\epsilon k}$. Therefore the conclusion holds. ◻ **Theorem 15**. *Let $E$ be the self-affine Moran set with the frequency $\mathbf{f}=\{f_\gamma\}_{\gamma\in\Gamma}$ and $\zeta\leq 1$. Suppose that $E$ satisfies the replica condition. Then for each $\alpha\in(\alpha_{\mathrm{min}},\alpha_{\mathrm{max}})$ we have that $$H(\alpha)\leq \inf_t \{\alpha t+\beta(t)\}.$$* *Proof.* First, we prove the conclusion for $\zeta <1$.
Since $E$ satisfies the replica condition, by Lemma [Lemma 13](#cover){reference-type="ref" reference="cover"}, we have that for all integer $K\in \mathbb{N}$ and all real $\epsilon>0$, $$E_{\alpha} \subseteq \bigcup_{k>K}\bigcup_{S_k(\mathbf{w})\in G(\epsilon,k)} \widehat{S}_k(\mathbf{w}),$$ where $\widehat{S}_k(\mathbf{w})$ is the rectangle with the same centre as $S_k(\mathbf{w})$ but $N^+$ times greater. Fix a real $t$. By [\[freq\]](#freq){reference-type="eqref" reference="freq"} and [\[betat\]](#betat){reference-type="eqref" reference="betat"}, we have that $$\begin{aligned} \lim_{k\to \infty} \Bigg(\frac{m_1^{\beta_1(t)}\ldots m_k^{\beta_k(t)}}{(m_1\ldots m_k)^{\beta(t)}}\Bigg)^\frac{1}{k} &=& \lim_{k\to \infty} \Bigg(\frac{\prod_{\gamma\in\Gamma} m_\gamma^{c_\gamma(k)\beta_\gamma(t)}}{\big(\prod_{\gamma\in \Gamma}m_\gamma^{c_\gamma(k)} \big)^{\beta(t)}}\Bigg)^\frac{1}{k} = 1.\end{aligned}$$ Choose $\epsilon>0$ arbitrarily; then there exists $K_1\in \mathbb{N}$ such that for $k\geq K_1$, $$\label{eqM} \frac{m_1^{\beta_1(t)}\ldots m_k^{\beta_k(t)}}{(m_1\ldots m_k)^{\beta(t)}}\leq (1+\epsilon)^k.$$ Let $K_0=\max\{K_1, K_2\}$, where $K_2$ is given by Lemma [Lemma 14](#lem_estmu){reference-type="ref" reference="lem_estmu"}.
By Lemma [Lemma 14](#lem_estmu){reference-type="ref" reference="lem_estmu"} and [\[betat\]](#betat){reference-type="eqref" reference="betat"}, we have that for $K>K_0$, $$\begin{aligned} &&\sum_{S_k(\mathbf{w})\in G(\epsilon,k)} (m_1 m_{2}\ldots m_{k})^{-\beta(t)}\mu(S_k(\mathbf{w}))^t \\ &&\leq \sum_{S_k(\mathbf{w})\in G(\epsilon,k)} (m_1\ldots m_{k})^{-\beta(t)} e^{2\epsilon k}\sum_{\mathbf{w}'\in \Sigma_l^k(\mathbf{w})}\Big(p_1(w'_1)\ldots p_k(w'_k)\Big)^t \Big(u_1(w'_1)\ldots u_k(w'_k)\Big)^{1-\zeta}\\ &&\leq e^{2\epsilon k} \frac{m_1^{\beta_1(t)}\ldots m_k^{\beta_k(t)}}{(m_1\ldots m_k)^{\beta(t)}}\sum_{S_k(\mathbf{w})\in G(\epsilon,k)} \sum_{\mathbf{w}'\in \Sigma_l^k(\mathbf{w})} \prod_{i=1}^k\Big(m_i^{-\beta_i(t)}p_i(w'_i)^t u_i(w'_i)^{1-\zeta}\Big)\\ &&\leq e^{2\epsilon k}(1+\epsilon)^k \sum_{\mathbf{w}'\in \Sigma^k} \prod_{i=1}^k\Big(m_i^{-\beta_i(t)}p_i^t(w'_i) u_i^{1-\zeta}(w'_i)\Big)\\ &&\leq e^{2\epsilon k}(1+\epsilon)^k \prod_{i=1}^k\sum_{w_i'\in \mathcal{D}_i}\Big(m_i^{-\beta_i(t)}p_i^t(w'_i) u_i^{1-\zeta}(w'_i)\Big)\\ &&\leq e^{2\epsilon k}(1+\epsilon)^k\\ &&\leq e^{3\epsilon k}.\end{aligned}$$ Let $r_K=3C_1(m_1m_2\ldots m_K)^{-1}$, for integer $K>0$. For all $\delta>\epsilon(\alpha|t|+5)$, since $m_k\geq 2$, for $K\geq K_0$, we have that $$\begin{aligned} &&\hspace{-2cm}\mathcal{H}_{r_K}^{t\alpha+\beta(t)+\delta}( E_{\alpha}) \leq \sum_{k\geq K} \sum_{S_k(\mathbf{w})\in G(\epsilon,k)} |\widehat{S}_k(\mathbf{w})|^{t\alpha+\beta(t)+\delta}\\ &\leq& (N^+C_1)^{t\alpha+\beta(t)+\delta}\sum_{k\geq K} \sum_{S_k(\mathbf{w})\in G(\epsilon,k)} (m_1 m_{2}\ldots m_{k})^{-(\beta(t)+5\epsilon)}\mu(S_k(\mathbf{w}))^t\\ &\leq& C_2 \sum_{k\geq K_0} 2^{-5\epsilon k} e^{3\epsilon k} \\ &<& \infty,\end{aligned}$$ where $C_1=(1+(N^+)^2)^\frac{1}{2}$, and $C_2=(N^+C_1)^{t\alpha+\beta(t)+\delta}$.
This implies that $$\mbox{\rm dim}_{\rm H}\,E_{\alpha} \leq t\alpha+\beta(t)+\delta.$$ Since $\epsilon$ is arbitrary, $\delta$ can be taken arbitrarily small, and we have that $$\mbox{\rm dim}_{\rm H}\,E_{\alpha}\leq t\alpha+\beta(t)$$ for all $t$. For the case $\zeta=1$, the proof is almost identical, and we omit it. Therefore the conclusion holds. ◻ By Theorem [Theorem 11](#thmlb){reference-type="ref" reference="thmlb"} and Theorem [Theorem 15](#thmub){reference-type="ref" reference="thmub"}, we immediately have the following conclusion. **Theorem 16**. *Let $E$ be the self-affine Moran set with the frequency $\mathbf{f}=\{f_\gamma\}_{\gamma\in\Gamma}$ and $\zeta\leq 1$. Suppose that $E$ satisfies the replica condition. Then for every $\alpha\in (\alpha_{\mathrm{min}},\alpha_{\mathrm{max}})$, we have that $$H(\alpha)=\inf_t \{\alpha t+\beta(t)\}.$$ Furthermore, $H(\alpha)$ is differentiable with respect to $\alpha$ and is concave.* *The proof of Theorem [Theorem 1](#thm_mfa){reference-type="ref" reference="thm_mfa"}.* Suppose that $E$ satisfies the row separation condition. For all $\epsilon >0$ and all $x\in E\backslash \partial E$, arbitrarily choose $\mathbf{w}=(i_1,j_1)(i_2,j_2)\ldots(i_k,j_k)\ldots\in \Pi^{-1}(x)$, and let $c_k=\frac{1}{2}(m_1 m_2\ldots m_k)^{-1}$. It is clear that, for all sufficiently large $k$, $$c_k \geq \big(m_1 m_2\ldots m_k\big)^{-1-\frac{\epsilon}{2}}.$$ Since the row separation condition holds, the distance between any two rows at the $k$-th level that intersect $E$ is at least $2c_k$.
Therefore, we have that $$B(x,c_k)\cap E \subset \bigcup\{S_k(\mathbf{w}') \colon j_1'=j_1, \ldots, j_k'=j_k\}.$$ Note that, for all $\mathbf{w}\in \Pi^{-1}(x)$, we have that $$\begin{aligned} \limsup_{k\to \infty}D_k(\mathbf{w}) &\geq& \limsup_{l\to \infty}I_l(\mathbf{w})-\limsup_{k\to \infty}I_k(\mathbf{w})= 0.\end{aligned}$$ Hence, there exists a sequence $\{k_i\}$ such that $D_{k_i}(\mathbf{w}) > -\epsilon$, and this implies that $E$ satisfies the replica condition. By Theorem [Theorem 16](#thm1){reference-type="ref" reference="thm1"}, the conclusion holds. Suppose $E$ satisfies the top and bottom separation condition. There exists $\gamma \in \Gamma$ with $f_\gamma>0$ such that at least one of the following conditions holds: - For all $(i,j)\in \mathcal{D}_\gamma$, $j \neq 0$. - For all $(i,j)\in \mathcal{D}_\gamma$, $j \neq m_\gamma-1$. This implies that either the top row or the bottom row of the pattern $(n_\gamma, m_\gamma, \mathcal{D}_\gamma)$ is empty. For all $\epsilon >0$ and all $x\in E\backslash \partial E$, arbitrarily choose $\mathbf{w}\in \Pi^{-1}(x)$ where $\mathbf{w}=(i_1,j_1)(i_2,j_2)\ldots(i_k,j_k)\ldots$. Let $\xi= \frac{\epsilon\log 2}{2 \log N^+}$, and $c_k=(m_1 \ldots m_{[(1+\xi)k]})^{-1}$. For each given $k>0$, since $m_k\geq 2$, it follows that $$\begin{aligned} c_k &\geq& (N^+)^{-\xi k}(m_1\ldots m_k)^{-1}\geq 2^{-\frac{\epsilon k}{2}} (m_1\ldots m_k)^{-1}\geq (m_1 m_2\ldots m_k)^{-1-\frac{\epsilon}{2}}.\end{aligned}$$ Here the first inequality holds because there are at most $\xi k$ factors $m_h\leq N^+$ between levels $k$ and $[(1+\xi)k]$, the second because $(N^+)^{-\xi k}=e^{-\xi k\log N^+}=2^{-\frac{\epsilon k}{2}}$ by the choice of $\xi$, and the third because $m_1\ldots m_k\geq 2^k$. Let $\xi'= \frac{\xi}{4+2\xi}$.
Since $$\lim_{n\to\infty}\frac{\operatorname{card}\{k': (n_{k'},m_{k'},\mathcal{D}_{k'})=\gamma, {k'}=1,2,\ldots, n \}}{n} = f_\gamma > 0,$$ there exists $K_\xi>0$ such that for $n>K_\xi$, $$(1-\xi')f_\gamma<\frac{\operatorname{card}\{k': (n_{k'},m_{k'},\mathcal{D}_{k'})=\gamma, {k'}=1,2,\ldots, n \}}{n}<(1+\xi')f_\gamma.$$ This implies that for all $k>K_\xi$, $$\operatorname{card}\{k':(n_{k'},m_{k'},\mathcal{D}_{k'})=\gamma,\ k<k'<[(1+\xi)k]\}\geq 1.$$ Let $k_0$ be an integer satisfying $k<k_0<[(1+\xi)k]$ and $\mathcal{D}_{k_0}=\mathcal{D}_\gamma$. Then $$(m_1 \ldots m_{k_0})^{-1} > (m_1 \ldots m_{[(1+\xi)k]})^{-1} = c_k.$$ Since $\mathcal{D}_{k_0}=\mathcal{D}_\gamma$, either the top row or the bottom row of $\mathcal{D}_{k_0}$ is empty, which implies that $$B(x,c_k)\cap E \subset B(x,(m_1 \ldots m_{k_0})^{-1})\cap E \subset \bigcup\{S_k(\mathbf{w}') \colon j_1'=j_1, \ldots, j_k'=j_k\}.$$ Since $\limsup_{k\to \infty}D_k(\mathbf{w}) \geq 0$, there exists a sequence $\{k_i\}$ such that $D_{k_i}(\mathbf{w}) > -\epsilon$. Hence $E$ satisfies the replica condition. By Theorem [Theorem 16](#thm1){reference-type="ref" reference="thm1"}, the conclusion holds. ◻ *The proof of Theorem [Theorem 2](#cor2){reference-type="ref" reference="cor2"}.* By Theorem [Theorem 16](#thm1){reference-type="ref" reference="thm1"}, it is sufficient to prove that $E$ satisfies the replica condition. Fix $\epsilon>0$ and $x \in E\backslash\partial E$, and choose $\mathbf{w}\in \Pi^{-1}(x)$.
For $c_k=(N^+)^{-2}\big(m_1 m_2\ldots m_k \big)^{-1}$, we have that, for all sufficiently large $k$, $$c_k \geq (m_1 m_2\ldots m_k)^{-1-\frac{\epsilon}{2}}.$$ It remains to show that there exists a sequence $\{k_i\}$ such that $$\label{eqBcover} B(x,c_{k_i})\cap E \subset \bigcup\{S_{k_i}(\mathbf{w}') \colon j_1'=j_1, \ldots, j_{k_i}'=j_{k_i}\},$$ and $D_{k_i}(\mathbf{w}) > -\epsilon.$ For each integer $k>0$, we write $$V_k(\mathbf{w})=\inf\Big\{k'>k: j_{k'}\notin \{0,m_{k'}-1\}\quad \text{or}\quad \frac{j_{k'+1}}{m_{k'+1}-1}\neq \frac{j_{k'}}{m_{k'}-1} \Big\}-k-1.$$ Roughly speaking, $V_k(\mathbf{w})$ is the number of consecutive indices $j_{k'}$ after level $k$ that are constantly chosen from the bottom row or constantly chosen from the top row of the patterns. Since $x\notin\partial E$, for each $k$, $V_k(\mathbf{w})< \infty$. By the assumptions (1) and (2) in Theorem [Theorem 2](#cor2){reference-type="ref" reference="cor2"}, for $k<k'\leq k+V_k(\mathbf{w})$, we have that $u_{k'}(w_{k'})=u_{k+1}(w_{k+1})$, where $j_{k+1}=0$ or $m_{k+1}-1$, $w_{k+1}=(i_{k+1},j_{k+1})$. We prove this by contradiction. We assume that there exists $K>0$ such that for $k>K$, we have that $V_k(\mathbf{w})>0$ or $D_k(\mathbf{w})<-\epsilon$. Note that $V_k(\mathbf{w})=0$ implies that either $j_{k+1}\notin\{0, m_{k+1}-1\}$, or $j_{k+1}\in \{0, m_{k+1}-1\}$ and $\frac{j_{k+2}}{m_{k+2}-1}\neq \frac{j_{k+1}}{m_{k+1}-1}$. Both cases imply  [\[eqBcover\]](#eqBcover){reference-type="eqref" reference="eqBcover"}. Recall that $l(k)$ is a function of $k$ defined by [\[def_l\]](#def_l){reference-type="eqref" reference="def_l"}. To avoid confusion, we write $l_{k+V_k(\mathbf{w})}$ instead of $l(k+V_k(\mathbf{w}))$ in the following calculation. For each $k>0$, write $\zeta_k=\frac{l_{k+V_k(\mathbf{w})}}{k+V_k(\mathbf{w})}$. Since $\lim_{k\to\infty} \frac{l(k)}{k}=\zeta$, it is clear that $\lim_{k\to \infty}\zeta_k=\zeta.$ Hence there exists $C_0>0$ such that $\frac{1-\zeta_k}{\zeta_k}\leq C_0$ for all $k>0$. Next we show that $\frac{V_k(\mathbf{w})}{k}$ is bounded by a constant $C$.
Suppose $\frac{V_k(\mathbf{w})}{k}>C_0>\frac{1-\zeta_k}{\zeta_k}$ (otherwise $\frac{V_k(\mathbf{w})}{k}$ is bounded by the constant $C_0$). Then $k<l_{k+V_k(\mathbf{w})}$. Since $V_{k+V_k(\mathbf{w})}(\mathbf{w})=0$, we have that $D_{k+V_k(\mathbf{w})}(\mathbf{w})<-\epsilon$ and $$\begin{aligned} &&D_{k+V_k(\mathbf{w})}(\mathbf{w}) = I_{l_{k+V_k(\mathbf{w})}}(\mathbf{w}) -I_{k+V_k(\mathbf{w})}(\mathbf{w})\\ &&= \frac{k I_k(\mathbf{w})+ (l_{k+V_k(\mathbf{w})}-k)\log u_{k+1}(w_{k+1})}{l_{k+V_k(\mathbf{w})}} - \frac{k I_k(\mathbf{w})+ V_k(\mathbf{w})\log u_{k+1}(w_{k+1})}{k+V_k(\mathbf{w})}\\ &&=\frac{(1-\zeta_k)k}{\zeta_k(k+ V_k(\mathbf{w}))}(I_k(\mathbf{w})-\log u_{k+1}).\end{aligned}$$ Since the set $\Gamma$ is finite, there exist constants $C_1$ and $C_2$ such that $C_1\leq \log u_k\leq C_2$, and $C_1\leq I_k(\mathbf{w}) \leq C_2$ for all $k$. Hence $$\frac{V_k(\mathbf{w})}{k} \leq \frac{\zeta_k-1}{ \epsilon \cdot \zeta_k}\Big(I_k(\mathbf{w})-\log u_{k+1}\Big) -1 \leq \frac{C_0(C_2-C_1)}{\epsilon} -1.$$ Taking $C=\max\{C_0,\frac{C_0(C_2-C_1)}{\epsilon} -1\}$, we have that $$\label{VKK} \frac{V_k(\mathbf{w})}{k} \leq C,$$ for all $k\geq K$. Choose a large integer $h$ such that $h\cdot \epsilon>C_2-C_1$, and choose $K'>K$ such that $\zeta_k>\frac{\zeta}{2}$ for all $k>K'$ and $\frac{C_2-C_1}{K'}<\frac{\epsilon}{2}$. Thus for $k>K'$, $$\label{bk} |I_k(\mathbf{w})-I_{k+1}(\mathbf{w})|=\frac{1}{k}|I_{k+1}(\mathbf{w}) -\log u_{k+1}|<\frac{\epsilon}{2}.$$ Let $C'=\frac{2(C+1)}{\zeta}$, where $C$ is the constant in [\[VKK\]](#VKK){reference-type="eqref" reference="VKK"}. Since $V_k(\mathbf{w})<\infty$ for all $k>0$, we choose $n_0>K'(C')^{2(h+1)}$ such that $V_{n_0}(\mathbf{w})=0$. We inductively define a sequence $\{n_j\}_{j=1}^{2h}$ such that $D_{n_j}(\mathbf{w}) < -\epsilon$.
Assume that $D_{n_j}(\mathbf{w}) < -\epsilon$, which implies that $$\label{Iln} I_{l(n_j)}< I_{n_j}-\epsilon.$$ If $D_{l(n_j)}(\mathbf{w}) < -\epsilon$, by setting $n_{j+1}=l(n_j)$, the inequality $D_{n_{j+1}}(\mathbf{w}) < -\epsilon$ holds. Otherwise if $D_{l(n_j)}(\mathbf{w}) \geq -\epsilon$, we have that $V_{l(n_j)}(\mathbf{w})>0$. Let $$\lambda_1=\max\{\lambda<l(n_j):V_{\lambda}(\mathbf{w})=0\}\quad \text{and}\quad \lambda_2=\min\{\lambda>l(n_j):V_{\lambda}(\mathbf{w})=0\}.$$ Then we choose $$n_{j+1}=\left\{\begin{array}{ll} \lambda_1, & \text{for}\ I_{\lambda_1}(\mathbf{w})\leq I_{\lambda_2}(\mathbf{w});\\ \lambda_2, & \text{for}\ I_{\lambda_1}(\mathbf{w})> I_{\lambda_2}(\mathbf{w}). \end{array}\right.$$ Since $I_\lambda(\mathbf{w})$ is monotonic for $\lambda_1+1\leq \lambda\leq \lambda_2$, either $I_{\lambda_1+1}(\mathbf{w})$ or $I_{\lambda_2}(\mathbf{w})$ is not greater than $I_{l(n_j)}(\mathbf{w})$. Suppose $n_{j+1}=\lambda_1$. It is clear that $I_{\lambda_1+1}\leq I_{l(n_j)}$. Therefore, we obtain that $$\begin{aligned} I_{n_{j+1}}(\mathbf{w}) &\leq& I_{\lambda_1 +1} +\frac{\epsilon}{2} \qquad \qquad \textit{ by } \eqref{bk}\\ &\leq& I_{l(n_j)}(\mathbf{w})+\frac{\epsilon}{2}\\ &\leq& I_{n_j}(\mathbf{w})-\frac{\epsilon}{2}. \qquad\quad \textit{ by } \eqref{Iln} \end{aligned}$$ See Figure [2](#figP){reference-type="ref" reference="figP"} for the relations between these terms. ![](figPosition.png){#figP width="80%"} Since $l(n_j)\leq n_{j+1}+V_{n_{j+1}}(\mathbf{w})$, by [\[VKK\]](#VKK){reference-type="eqref" reference="VKK"}, we have that $n_{j+1}\geq\frac{\zeta_{n_j}n_j}{C+1}\geq n_j (C')^{-1}$. Hence $n_j\geq K'$ for $j=0,1,\ldots,2h$. Since $I_{n_{j+1}}(\mathbf{w}) \leq I_{n_j}(\mathbf{w})-\frac{\epsilon}{2}$ for each $j$, by the fact $h\cdot \epsilon>C_2-C_1$, we have that $$I_{n_{2h}}(\mathbf{w})< I_{n_0}(\mathbf{w})-2h\cdot \frac{\epsilon}{2} \leq C_2-h\epsilon <C_1,$$ which contradicts the fact that $C_1\leq I_k(\mathbf{w}) \leq C_2$ for all $k$.
Hence $E$ satisfies the replica condition, and by Theorem [Theorem 16](#thm1){reference-type="ref" reference="thm1"}, the conclusion holds. ◻
--- abstract: | We provide a complete characterization of theories of tracial von Neumann algebras that admit quantifier elimination. We also show that the theory of a separable tracial von Neumann algebra $\mathcal{N}$ is never model complete if its direct integral decomposition contains $\mathrm{II}_1$ factors $\mathcal{M}$ such that $M_2(\mathcal{M})$ embeds into an ultrapower of $\mathcal{M}$. The proof in the case of $\mathrm{II}_1$ factors uses an explicit construction based on random matrices and quantum expanders. address: - Department of Mathematics and Statistics, York University, Keele Street, Toronto, Ontario, Canada, M3J 1P3 and Matematički Institut SANU Kneza Mihaila 36, 11 000 Beograd, p.p. 367 Serbia - Fields Institute for Research in Mathematical Sciences, College St, Toronto, Ontario, Canada, M5T 3J1 - "Department of Mathematics, University of California, Irvine, Rowland Hall (Bldg.\\# 400), Irvine, CA 92697-3875" author: - Ilijas Farah, David Jekel, Jennifer Pi bibliography: - QEMC.bib title: Quantum expanders and quantifier reduction for tracial von Neumann algebras --- [^1] # Introduction One objection to the model-theoretic approach to operator algebras is that in order to understand a theory one needs to consider formulas with an arbitrarily large number of alternations of quantifiers. Since a typical human mind has difficulties parsing formulas such as $(\forall x_1)(\exists x_2)(\forall x_3)(\exists x_4)(\forall x_5)\psi(x_1,x_2,x_3,x_4,x_5)$ for a nontrivial $\psi$, elimination of quantifiers has been isolated as a desirable property of theories from the very beginnings of model theory and its early applications to algebra. As pointed out in [@chang1990model §5.1], ``Each time the method is applied to a new theory we must start from scratch in the proofs, because there are few opportunities to use general theorems about models. 
On the other hand, the method is extremely valuable when we want to beat a particular theory into the ground.'' Our first task is to characterize which theories of tracial von Neumann algebras can be 'beaten into the ground' by quantifier elimination (Theorem [Theorem 1](#thm: main QE){reference-type="ref" reference="thm: main QE"}). Unlike the case of $\mathrm{C}^*$-algebras, where only finitely many theories admit quantifier elimination [@EFKV2017], we will see that the set of tracial von Neumann algebras that admit quantifier elimination is dense among type $\mathrm{I}$ algebras with respect to the logic topology (Proposition [Proposition 24](#P.dense){reference-type="ref" reference="P.dense"}), and its closure contains some theories of $\mathrm{II}_1$ factors such as matrix ultraproducts. Unfortunately---or fortunately, depending on one's disposition---all tracial von Neumann algebras whose theories admit quantifier elimination are of type I (i.e. a direct sum of matrix algebras). Experts in operator algebras will not find it surprising that no II$_1$ factor has a theory that can be beaten into the ground. This accomplished, we move on to study model completeness. This property of theories, introduced already by Abraham Robinson, can be viewed as a poor man's version of quantifier elimination. A theory is *model complete* if every embedding between its models is elementary. Operator algebraists will recognize this property as a generalization of the property of the hyperfinite II$_1$ factor $\mathcal{R}$, that every embedding of $\mathcal{R}$ into its ultrapower is unitarily equivalent to the diagonal embedding (the latter is elementary by Łoś's Theorem). By a standard ultrapower argument, this implies that every embedding of $\mathcal{R}$ into a model of its theory, $\mathop{\mathrm{Th}}(\mathcal{R})$, is elementary; this property was studied in [@AGKE2022] under the name of "generalized Jung property." 
We prove that if $\mathcal{M}$ has $\mathrm{II}_1$ factors $\mathcal{N}$ in its direct integral decomposition such that $M_2(\mathcal{N})$ embeds into an ultrapower of $\mathcal{N}$, then $\mathcal{M}$ is not model complete (Theorem [Theorem 2](#thm: main MC){reference-type="ref" reference="thm: main MC"}). The assumption of the theorem applies to $\mathcal{R}$ and to every Connes-embeddable $\mathrm{II}_1$ factor (by Connes-embeddable, we mean embeddable into the ultrapower $\mathcal{R}^\mathcal{U}$). Note that this does not contradict the fact that every embedding of $\mathcal{R}$ into every model of $\mathop{\mathrm{Th}}(\mathcal{R})$ is elementary --- it only shows the existence of non-hyperfinite models of $\mathop{\mathrm{Th}}(\mathcal{R})$ and non-elementary embeddings between them. This particular consequence was already known from [@GHS2013], whose proof relies on property (T) groups. We prove Theorem [Theorem 2](#thm: main MC){reference-type="ref" reference="thm: main MC"} by using Hastings's quantum expanders. 
(For a survey on the model theory of probability spaces, see [@BH2023] and for the $\mathrm{W}^*$-setting see [@JekelModelEntropy §2.3].) For matrix algebras, the multivariable Specht's theorem classifies when two matrix tuples are unitarily conjugate [@Jing2015], and this implies that $(M_n(\mathbb{C}),\mathop{\mathrm{tr}}_n)$ admits quantifier elimination [@EFKV2017 end of §2]. In the case of $\mathrm{II}_1$ factors, [@GHS2013 §2] showed that the hyperfinite factor $\mathcal{R}$ does not admit quantifier elimination, and this argument was observed in [@GH2023] to generalize to McDuff factors. Furthermore, [@GHS2013 §3] implies that Connes-embeddable factors not elementarily equivalent to $\mathcal{R}$ are not model complete, hence also do not admit quantifier elimination. Recently, the first author [@Farah2023] extended this argument to refute quantifier elimination for $\mathrm{II}_1$ factors in general, and showed that tracial von Neumann algebras with a type $\mathrm{II}_1$ summand never admit quantifier elimination. Moreover, he conjectured that type $\mathrm{I}$ algebras admit quantifier elimination if and only if any two projections of the same trace are automorphically conjugate. We shall see this conjecture is true: **Theorem 1**. *Let $\mathcal{M}= (M,\tau)$ be a WOT-separable tracial von Neumann algebra. Then the following are equivalent.* (1) *$\mathop{\mathrm{Th}}(\mathcal{M})$ admits quantifier elimination.* (2) *$\mathcal{M}$ is type I and any two projections $p$ and $q$ in $\mathcal{M}$ with $\tau(p) = \tau(q)$ are conjugate by an automorphism of $\mathcal{M}$.* Thus the same algebra may or may not admit quantifier elimination, depending on the choice of the trace. We remark that since the quantifier-free type of a projection is determined by its trace, (2) asserts that projections with the same quantifier-free type are conjugate by an automorphism. 
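As a worked illustration of condition (2), consider how the same type $\mathrm{I}$ algebra can pass or fail the projection condition depending on the trace. The following sketch is ours, not taken from the paper; the algebra and weights are our choice, and the case check for generic $\lambda$ is claimed under that assumption.

```latex
% A hedged illustration (our choice of algebra and weights).
Let $\mathcal{M}= \mathbb{C}\oplus M_2(\mathbb{C})$ with trace
$\tau_\lambda(a,b) = \lambda a + (1-\lambda)\mathop{\mathrm{tr}}_2(b)$
for a parameter $\lambda \in (0,1)$. For $\lambda = 1/3$, the projections
\[
  p = (1,0), \qquad
  q = \left(0, \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}\right)
\]
satisfy $\tau_{1/3}(p) = \tau_{1/3}(q) = 1/3$, but $p$ is central while $q$
is not, so no automorphism conjugates them, and quantifier elimination
fails. For $\lambda \notin \{1/3, 1/2\}$, a case check of the finitely many
trace values of projections shows that projections of equal trace always lie
in the same summand pattern, hence are conjugate by an inner automorphism;
condition (2) then holds.
```

This matches the remark above that the same algebra may or may not admit quantifier elimination depending on the choice of the trace.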
## On Model Completeness A theory $\mathrm{T}$ is said to be *model complete* if every embedding $\mathcal{M}\to \mathcal{N}$ between models of $\mathrm{T}$ is elementary. Another characterization, under the assumption of the Continuum Hypothesis, is that $\mathop{\mathrm{Th}}(\mathcal{M})$ is model complete if and only if, for every $\mathcal{A}$ and $\mathcal{B}$ elementarily equivalent to $\mathcal{M}$, every embedding $\mathcal{A}\to \mathcal{B}$ extends to an isomorphism $\mathcal{A}^{\mathcal{U}} \to \mathcal{B}^{\mathcal{U}}$ for some ultrafilter $\mathcal{U}$ [@Fa:STCstar Corollary 16.6.5]. The use of the Continuum Hypothesis, while necessary for this formulation, is innocuous and removable at the expense of having a more complicated (but equally useful) formulation in terms of a $\sigma$-complete back-and-forth system of partial isomorphisms between separable subalgebras of $A$ and $B$ (see [@Fa:STCstar Theorem 16.6.4]). Model completeness is a weaker condition than quantifier elimination, and both conditions are special cases of quantifier reduction of arbitrary formulas (to quantifier-free formulas and to existential formulas, respectively); see §[2](#sec: preliminaries){reference-type="ref" reference="sec: preliminaries"}. As mentioned above, Goldbring, Hart, and Sinclair showed that the only possible model complete theory for Connes-embeddable $\mathrm{II}_1$ factors is $\mathop{\mathrm{Th}}(\mathcal{R})$ [@GHS2013 Proposition 3.2], and moreover that if the Connes embedding problem has a positive solution, then there is no model-complete theory of a $\mathrm{II}_1$ factor [@GHS2013 Corollary 3.4]. However, Ji, Natarajan, Vidick, Wright, and Yuen announced a negative solution of the Connes embedding problem in [@JNVWY2020]. Hence, the question of model completeness for arbitrary $\mathrm{II}_1$ factors (and more generally tracial von Neumann algebras) remained open. 
The first author [@Farah2023] showed that type $\mathrm{I}$ tracial von Neumann algebras are model complete, and conjectured that algebras with a type $\mathrm{II}_1$ summand are never model complete. We are able to prove this conjecture under the additional assumption that, for sufficiently many $\mathrm{II}_1$ factors $\mathcal{M}_\omega$ in the decomposition, the algebra of $2 \times 2$ matrices over $\mathcal{M}_\omega$ embeds into an ultrapower of $\mathcal{M}_\omega$. **Theorem 2**. *If $\mathcal{M}$ is a $\mathrm{II}_1$ factor such that $M_2(\mathcal{M})$ embeds into $\mathcal{M}^{\mathcal{U}}$ for some ultrafilter $\mathcal{U}$, then $\mathop{\mathrm{Th}}(\mathcal{M})$ is not model complete.* *More generally, let $\mathcal{M}$ be a separable tracial von Neumann algebra with direct integral decomposition $\int_\Omega^{\oplus} (\mathcal{M}_\omega,\tau_\omega)\,d\omega$. Suppose that on a positive measure set, $\mathcal{M}_\omega$ is a $\mathrm{II}_1$ factor such that $M_2(\mathcal{M}_\omega)$ embeds into $\mathcal{M}_\omega^{\mathcal{U}}$ for some ultrafilter $\mathcal{U}$. Then $\mathop{\mathrm{Th}}(\mathcal{M})$ is not model complete.* The assumption that $M_2(\mathcal{M})$ embeds into an ultrapower of $\mathcal{M}$ is closely related to [@GH2017 Proposition 4.17], and is immediate in several cases of interest. For instance if $\mathcal{M}$ is Connes embeddable this holds because $M_2(\mathcal{M})$ embeds into $\mathcal{R}^{\mathcal{U}}$ and hence into $\mathcal{M}^{\mathcal{U}}$. While Theorem [Theorem 2](#thm: main MC){reference-type="ref" reference="thm: main MC"} was already known in the Connes embeddable case by [@GHS2013], our argument is new even in that case. Another case where this condition is automatic is if $\mathcal{M}$ is existentially closed in the class of $\mathrm{II}_1$ factors, since by definition there is an embedding of $M_2(\mathcal{M})$ into $\mathcal{M}^{\mathcal{U}}$ extending the diagonal embedding. 
The condition also holds automatically if $\mathcal{M}$ is McDuff, and more generally if its fundamental group is nontrivial; see §[6.2](#sec: amplification){reference-type="ref" reference="sec: amplification"}. Although Popa and Vaes showed that there are $\mathrm{II}_1$ factors such that $M_2(\mathcal{M})$ does not embed into $\mathcal{M}$ [@PopaVaes2022 Theorem C], it is unknown at this point whether there exists any $\mathrm{II}_1$ factor such that $M_2(\mathcal{M})$ does not embed into $\mathcal{M}^{\mathcal{U}}$. Since such an object would not be Connes-embeddable, it would no doubt be difficult to construct (or perhaps if it exists, the proof would rely on the same machinery as the refutation of the Connes embedding problem). In §[6.2](#sec: amplification){reference-type="ref" reference="sec: amplification"}, we will discuss several equivalent conditions to $M_2(\mathcal{M})$ embedding into $\mathcal{M}^{\mathcal{U}}$ in hope of future progress on this question. The proof of Theorem [Theorem 2](#thm: main MC){reference-type="ref" reference="thm: main MC"} is divided into two parts. In the case of a $\mathrm{II}_1$ factor, we use a random matrix construction to create two tuples with similar behavior for their one-quantifier types, while their full types are distinguished by one having factorial commutant when the other does not. In fact, this approach gives explicit sentences distinguishing their types (see §[4.4](#sec: MC factor proof conclusion){reference-type="ref" reference="sec: MC factor proof conclusion"}). The matrix construction shares some common ideas with [@Farah2023], but also uses more substantial random matrix results such as Hastings's quantum expanders [@Hastings2007] and concentration of measure for random unitaries. Thus, this is a first application of the combination of model theory and random matrix theory as proposed in [@JekelModelEntropy §6]. 
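In the McDuff case mentioned above, the embedding can be written out in one line. This is a sketch under the standard identification $\mathcal{R}\cong \overline{\bigotimes}_{k=1}^{\infty} M_2(\mathbb{C})$, not a construction from this paper:

```latex
% Sketch for the McDuff case, where M \cong M \bar\otimes R:
\[
  M_2(\mathcal{M}) \;\cong\; \mathcal{M}\mathbin{\bar{\otimes}} M_2(\mathbb{C})
  \;\hookrightarrow\; \mathcal{M}\mathbin{\bar{\otimes}} \mathcal{R}
  \;\cong\; \mathcal{M}
  \;\hookrightarrow\; \mathcal{M}^{\mathcal{U}},
\]
% where $M_2(\mathbb{C})\to\mathcal{R}$, $x\mapsto x\otimes 1\otimes 1\otimes\cdots$,
% is trace-preserving, and the last arrow is the diagonal embedding.
```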
To extend to the case of general tracial von Neumann algebras, there are two more ingredients. In the case of a direct integral over a diffuse space, there is a direct argument to show the failure of model completeness when $M_2(\mathcal{M}_\omega)$ embeds into $\mathcal{M}_\omega^{\mathcal{U}}$ (Lemma [Lemma 22](#lem: diffuse MC){reference-type="ref" reference="lem: diffuse MC"}). The remaining piece is the observation that if $\mathcal{M}_1 \oplus \mathcal{M}_2$ is model complete, then both $\mathcal{M}_1$ and $\mathcal{M}_2$ are model complete (Lemma [Lemma 20](#lem: direct sum){reference-type="ref" reference="lem: direct sum"}). We remark that in [@FG2023] it was shown that the model-theoretic behavior of a direct integral is determined by the behavior of the integrands; the reverse problem of determining the model-theoretic behavior of the direct integrands from the model-theoretic behavior of the direct integral is an important open question for continuous model theory (see [@FG2023 Conjecture 4.5]). The observation that model completeness passes to direct summands is a small step toward addressing this question. We remark that a similar argument shows that quantifier elimination passes to direct summands (Remark [Remark 21](#rem: QE direct sums){reference-type="ref" reference="rem: QE direct sums"}). ## Organization of this paper The paper is organized as follows. In §[2](#sec: preliminaries){reference-type="ref" reference="sec: preliminaries"}, we recall background on tracial von Neumann algebras and continuous model theory, including specific tests for quantifier elimination and model completeness. We also give some preliminary information on spectral gap, quantum expanders, and how the two are related for matrix algebras. 
In §[3.1](#sec: QE proof){reference-type="ref" reference="sec: QE proof"}, we prove Theorem [Theorem 1](#thm: main QE){reference-type="ref" reference="thm: main QE"}, and in §[3.2](#sec: QE tests){reference-type="ref" reference="sec: QE tests"}, we give several more explicit tests for quantifier elimination. Proposition [Proposition 9](#prop: QE obstructions){reference-type="ref" reference="prop: QE obstructions"} gives a complete list of obstructions to quantifier elimination and Proposition [Proposition 10](#prop: QE linear inequality){reference-type="ref" reference="prop: QE linear inequality"} gives a condition for quantifier elimination directly in terms of the weights in the direct sum decomposition of $\mathcal{M}$. In §[4](#sec: MC factor proof){reference-type="ref" reference="sec: MC factor proof"}, we prove Theorem [Theorem 2](#thm: main MC){reference-type="ref" reference="thm: main MC"} in the case of $\mathrm{II}_1$ factors. Then in §[5](#sec: MC general case){reference-type="ref" reference="sec: MC general case"}, we prove the general case, relying on the fact that model completeness passes to direct summands (§[5.1](#sec: MC direct sums){reference-type="ref" reference="sec: MC direct sums"}). In the final section we give closing remarks: in §[6.1](#sec: topological){reference-type="ref" reference="sec: topological"} we discuss topological properties of theories that extend theories of tracial von Neumann algebras studied in earlier sections, §[6.2](#sec: amplification){reference-type="ref" reference="sec: amplification"} is about the condition of $M_2(\mathcal{M})$ embedding to $\mathcal{M}^{\mathcal{U}}$, and §[6.3](#sec: nontracial){reference-type="ref" reference="sec: nontracial"} is about quantifier elimination and model completeness in the non-tracial setting. 
## Acknowledgements {#acknowledgements .unnumbered} We are grateful to the Fields Institute for hosting all three authors during the Thematic Program on Operator Algebras in Fall 2023 (IF as an organizer, DJ as a postdoc, and JP as a visitor). We are grateful to Adrian Ioana for suggesting an argument that simplified the proof of Lemma [Lemma 17](#lem: second distance estimate){reference-type="ref" reference="lem: second distance estimate"}, and Ben Hayes for the suggestion of the argument given in §[4.5.5](#sec: BC proof){reference-type="ref" reference="sec: BC proof"}. We thank Brent Nelson, Narutaka Ozawa, Isaac Goldbring, and Hiroshi Ando for discussions about type $\mathrm{III}$ factors. # Preliminaries {#sec: preliminaries} ## Basic prerequisites ### Tracial von Neumann algebras We assume familiarity with tracial von Neumann algebras, and recommend [@Ioana2023] for an introduction to the topic, as well as the following standard reference books [@Blackadar2006; @Dixmier1969; @KadisonRingroseI; @Sakai1971; @TakesakiI; @Zhu1993]. In particular, we use the following notions and conventions: - A *tracial von Neumann algebra* is a finite von Neumann algebra with a specified tracial state. - The tracial state on $\mathcal{M}$ will usually be denoted by $\tau$ or $\tau_{\mathcal{M}}$. - The normalized trace on $M_n(\mathbb{C})$ will be denoted by $\mathop{\mathrm{tr}}_n$. - We also write $\left\lVert x \right\rVert_2 = \tau(x^*x)^{1/2}$ when $x$ is an element of a tracial von Neumann algebra, and in particular when $x$ is a matrix, $\left\lVert x \right\rVert_2 = \mathop{\mathrm{tr}}_n(x^*x)^{1/2}$ is the normalized Hilbert-Schmidt norm. - The completion of $\mathcal{M}$ with respect to $2$-norm is denoted $L^2(\mathcal{M})$. - Inclusions and embeddings of tracial von Neumann algebras $\mathcal{N}\subseteq \mathcal{M}$ are assumed to be trace-preserving $*$-homomorphisms. 
- If $\mathcal{N}\subseteq \mathcal{M}$, we denote by $E_{\mathcal{N}}: \mathcal{M}\to \mathcal{N}$ the canonical conditional expectation; there is a unique conditional expectation that preserves the trace, and it is the restriction of the orthogonal projection $L^2(\mathcal{M}) \to L^2(\mathcal{N})$. ### Continuous model theory {#sec: model theory prelims} We also assume some familiarity with continuous model theory, specifically model theory for metric structures; see e.g. [@BYBHU2008; @Hart2023]. In particular: - The structures under consideration are metric spaces, and the metric $d$ is one of the symbols in the language. The structure can have multiple sorts; for instance, for a von Neumann algebra, there is one sort for each operator norm ball. - Relation symbols are $\mathbb{R}$-valued, so in particular formulas will take values in $\mathbb{R}$ rather than evaluating to true/false. The relation symbols and function symbols are required to be uniformly continuous across all models. - Formulas are created in the usual recursive fashion with connectives from classical model theory replaced by continuous functions on $\mathbb{R}$, and the quantifiers $\forall$ and $\exists$ replaced with $\sup$ and $\inf$ (over appropriate bounded subsets of the von Neumann algebra). - For a language $\mathcal{L}$, and an $\mathcal{L}$-structure $\mathcal{M}$, by the *theory* of $\mathcal{M}$ (denoted $\mathrm{Th}(\mathcal{M})$) we mean the set of all $\mathcal{L}$-sentences $\varphi$ such that $\varphi^\mathcal{M}= 0$, except in §[6.1](#sec: topological){reference-type="ref" reference="sec: topological"}, where it is more convenient to consider the theory as a bounded functional on the algebra of all formulas into $\mathbb R$. 
- For an $n$-tuple $\mathbf{a}$ coming from a structure $\mathcal{M}$, the *type* of $\mathbf{a}$ is the map $\mathop{\mathrm{tp}}^{\mathcal{M}}(\mathbf{a}): \varphi\mapsto \varphi^\mathcal{M}(\mathbf{a})$ which assigns to each $\mathcal{L}$-formula $\varphi(x_1, \ldots, x_n)$ the value of $\varphi^\mathcal{M}(\mathbf{a})$. More generally, we say that any map $\mu$ which assigns a value $\varphi(\mu) \in \mathbb{R}$ to each $\mathcal{L}$-formula $\varphi$ in $n$ variables is an $n$-type. For any fixed $n$, the space of all $n$-types is denoted $\mathbb{S}_n$. Moreover, for a theory $\mathrm{T}$, by $\mathbb{S}_n(\mathrm{T})$ we denote the space of $n$-types that arise in models of $\mathrm{T}$. - Quantifier-free formulas are those constructed recursively using connectives but no quantifiers. The quantifier-free type $\mathop{\mathrm{qftp}}^{\mathcal{M}}(\mathbf{a})$ is the restriction of $\mathop{\mathrm{tp}}^{\mathcal{M}}(\mathbf{a})$ to quantifier-free formulas. - The set $\mathbb{S}_n(\mathrm{T})$ is equipped with the *logic topology*, which is the topology of pointwise convergence on $\mathcal{L}$-formulas, i.e. the weak$^*$-topology. This makes $\mathbb{S}_n(\mathrm{T})$ into a compact Hausdorff space. Dually, each formula $\varphi$ defines a continuous function on $\mathbb{S}_n(\mathrm{T})$. - For any cardinal $\kappa$, we recall that a structure $\mathcal{M}$ is *$\kappa$-saturated* if every consistent type with parameters from a set $A \subseteq M$ with $|A| \leq \kappa$ is realized by some tuple $\mathbf{a}$ from $\mathcal{M}$. (For operator algebraists, we note that a type is consistent with the theory of $\mathcal{M}$ if it is in the weak$^*$-closure of the maps $\mathop{\mathrm{tp}}^{\mathcal{M}}(\mathbf{a})$ for tuples $\mathbf{a} \in \mathcal{M}$. Thus, countable ultraproducts of structures are countably saturated). 
The language for tracial von Neumann algebras as metric structures was developed in [@FHS2014], and other useful references include [@JekelCoveringEntropy §2] and [@GH2023]. The sorts in this language are operator norm balls, the functions are addition, multiplication, scalar multiplication, and adjoint, and the relation symbols are $\mathop{\mathrm{Re}}\mathop{\mathrm{tr}}$ and the distance $d(x,y) = \left\lVert x - y \right\rVert_2$. All ultraproducts considered in this work are tracial; see [@FHS2014b §2.2] for a formal construction of tracial ultraproducts, and [@Fa:STCstar §16] or [@Hart2023 §2, §6] for more background on ultrafilters and ultraproducts in continuous model theory. ### Definable Sets {#sec: definable} Lastly, in many arguments below we will need the notion of a definable set. These are sets that we are able to quantify over, without formally being a part of our language; see for instance [@BYBHU2008 Theorem 9.17] and [@FHLRTVW2021 Definition 3.2.3 and Lemma 3.2.5]. In particular, when $a$ is a definable element in some structure, then we can refer to it as if it were an interpretation of a constant symbol in our language. We will use the following characterization of definable sets over a subset $A$ relative to a structure $\mathcal{M}$, and refer the reader to [@BYBHU2008 §9], [@Goldbring2023spectralgap §2], and [@FHLRTVW2021 §3] for more information on definability. **Fact 1**. Fix a structure $\mathcal{M}$ and some subset $A \subseteq M$. Suppose $Z \subseteq M^n$ is a closed subset. Then $Z$ is a definable set in $\mathcal{M}$ over $A$ if and only if for every $\epsilon > 0$, there exists some $\delta > 0$ and some formula $\varphi(x_1, \ldots, x_n)$, possibly using parameters from $A$, such that for any $\mathbf{x} \in M^n$, $$\varphi^\mathcal{M}(\mathbf{x}) < \delta \implies d(\mathbf{x}, Z) \leq \epsilon.$$ If we say a set is definable in $\mathcal{M}$, then we mean it is definable in $\mathcal{M}$ over the empty set. 
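As a concrete instance of Fact 1, we recall a standard example (not taken from this paper, and stated here as a sketch): the unitary group of a tracial von Neumann algebra is definable.

```latex
% Standard example: $U(\mathcal{M})$ is definable in $\mathcal{M}$.
% On the operator-norm unit ball, take
\[
  \varphi(x) = \lVert x^*x - 1 \rVert_2 .
\]
% If $x = u\lvert x\rvert$ is the polar decomposition, then $u$ may be
% taken unitary because $\mathcal{M}$ is finite, and since
% $\lvert t-1\rvert \le \lvert t^2-1\rvert$ for $t \ge 0$, functional
% calculus gives
\[
  \lVert x - u \rVert_2
  = \bigl\lVert\, \lvert x\rvert - 1 \,\bigr\rVert_2
  \le \bigl\lVert\, \lvert x\rvert^2 - 1 \,\bigr\rVert_2
  = \lVert x^*x - 1 \rVert_2 .
\]
% Hence $\varphi^{\mathcal{M}}(x) < \epsilon$ implies
% $d(x, U(\mathcal{M})) \le \epsilon$, and $\varphi$ vanishes on
% $U(\mathcal{M})$, so the criterion of Fact 1 applies with
% $\delta = \epsilon$.
```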
## Background on quantifier elimination and model completeness Recall that a theory $\mathrm{T}$ is said to admit *quantifier elimination* if every $\mathcal{L}$-formula $\varphi$ can be approximated uniformly across all models of $\mathrm{T}$ by a quantifier-free $\mathcal{L}$-formula. We will use the following characterization of quantifier elimination in terms of types. Note a closely related statement for positive bounded logic is given in [@HIKO2003 Proposition 14.21]. **Lemma 2** ([@JekelModelEntropy Lemma 2.14]). *Let $\mathrm{T}$ be an $\mathcal{L}$-theory. Then the following are equivalent:* 1. *$\mathrm{T}$ admits quantifier elimination.* 2. *For every $n$ and every $\mu, \nu \in \mathbb{S}_n(\mathrm{T})$, if $\mu$ and $\nu$ agree on quantifier-free formulas, then $\mu = \nu$.* There is an analogous characterization for model completeness, which can be regarded as a folklore result since it closely parallels what happens in discrete model theory (see e.g. [@Hirschfeld Theorem 2.2]), although we are not aware of any proof in the literature for the case of metric structures. Recall that an *inf formula*, or *existential formula*, is a formula obtained by preceding a quantifier-free formula with one or more $\inf$-quantifiers. **Lemma 3**. *Let $\mathrm{T}$ be an $\mathcal{L}$-theory. Then the following are equivalent:* 1. *For every $\mathcal{L}$-formula $\varphi$ and $\epsilon > 0$, there exists an inf-formula $\psi$ such that $|\varphi- \psi| < \epsilon$ (on the appropriate sort or domain) for all models of $\mathrm{T}$.* 2. *$\mathrm{T}$ is model complete, i.e. if $\mathcal{M}$ and $\mathcal{N}$ are models of $\mathrm{T}$, then every embedding $\mathcal{M}\to \mathcal{N}$ of $\mathcal{L}$-structures is an elementary embedding.* 3. *For every $n$ and every pair $\mu, \nu \in \mathbb{S}_n(\mathrm{T})$, if $\psi(\mu) \leq \psi(\nu)$ for every $\inf$-formula $\psi$, then $\mu = \nu$.* *Proof.* (1) $\implies$ (2). Assume that (1) holds. 
Let $\mathcal{M}\to \mathcal{N}$ be an inclusion of models of $\mathrm{T}$. Let $\varphi$ be an $n$-variable formula and let $\mathbf{x} = (x_1,\dots,x_n)$ be a tuple of the appropriate sort from $\mathcal{M}$. Let $\epsilon > 0$. Then by (1), there exist inf-formulas $\psi_1$ and $\psi_2$ such that $|\psi_1 - \varphi| < \epsilon$ and $|\psi_2 - (-\varphi)| < \epsilon$ in all models of $\mathrm{T}$. In particular, $$\varphi^{\mathcal{N}}(\mathbf{x}) \leq \psi_1^{\mathcal{N}}(\mathbf{x}) + \epsilon \leq \psi_1^{\mathcal{M}}(\mathbf{x}) + \epsilon \leq \varphi^{\mathcal{M}}(\mathbf{x}) + 2 \epsilon,$$ and symmetrically $-\varphi^{\mathcal{N}}(\mathbf{x}) \leq -\varphi^{\mathcal{M}}(\mathbf{x}) + 2 \epsilon$. Together this implies $|\varphi^{\mathcal{N}}(\mathbf{x}) - \varphi^{\mathcal{M}}(\mathbf{x})| \leq 2 \epsilon$, and since $\epsilon$ was arbitrary, we have $\varphi^{\mathcal{M}}(\mathbf{x}) = \varphi^{\mathcal{N}}(\mathbf{x})$. Therefore, the embedding $\mathcal{M}\to \mathcal{N}$ is elementary. \(2\) $\implies$ (3). Suppose $\mathrm{T}$ is model complete. Let $\mu$ and $\nu$ be $n$-types satisfying the hypothesis for (3). Let $\kappa$ be the density character of $\mathcal{L}$, and fix a $\kappa^+$-saturated model $\mathcal{M}$ of $\mathrm{T}$. Then $\mathcal{M}$ contains some $\mathbf{x}$ with type $\mu$ and some $\mathbf{y}$ with type $\nu$. By the downward Löwenheim-Skolem theorem [@BYBHU2008 Proposition 7.3], there exists an elementary substructure $\mathcal{N}\preceq \mathcal{M}$ containing $\mathbf{y}$ with density character at most $\kappa$. Let $\mathbf{z}$ be a family indexed by some set $I$ of cardinality $\kappa$ that is dense in $\mathcal{N}$. 
For every finite $F \subseteq I$, every $k\geq 1$, and every $k$-tuple of quantifier-free formulas $\varphi_1, \dots, \varphi_k$ in $n + |F|$ variables, consider the formula $$\psi(u_1,\dots,u_n) = \inf_{(v_i)_{i \in F}} \max_{j=1,\dots,k} |\varphi_j(u_1,\dots,u_n, (v_i)_{i \in F}) - \varphi_j^{\mathcal{M}}(y_1,\dots,y_n,(z_i)_{i \in F})|.$$ By assumption $\psi^{\mathcal{M}}(x_1,\dots,x_n)\leq \psi^{\mathcal{M}}(y_1,\dots,y_n) = 0$. Therefore, for any $\epsilon > 0$, there exists $(w_i)_{i \in F}$ such that $|\varphi_j^{\mathcal{M}}(x_1,\dots,x_n,(w_i)_{i \in F}) - \varphi_j^{\mathcal{M}}(y_1,\dots,y_n,(z_i)_{i \in F})| < \epsilon$ for all $j = 1, \ldots, k$. By saturation, this implies that there exists a family $\mathbf{w}$ indexed by $I$ in $\mathcal{M}$ such that $(\mathbf{x},\mathbf{w})$ has the same quantifier-free type as $(\mathbf{y},\mathbf{z})$. In particular, the substructure $\tilde{\mathcal{N}}$ of $\mathcal{M}$ generated by $(\mathbf{x},\mathbf{w})$ is isomorphic to the substructure $\mathcal{N}$ generated by $(\mathbf{y},\mathbf{z})$. So $\tilde{\mathcal{N}}$ is a model of $\mathrm{T}$ and by model completeness the inclusion $\tilde{\mathcal{N}} \to \mathcal{M}$ is elementary. Therefore, $$\mathop{\mathrm{tp}}^{\mathcal{M}}(\mathbf{x}) = \mathop{\mathrm{tp}}^{\tilde{\mathcal{N}}}(\mathbf{x}) = \mathop{\mathrm{tp}}^{\mathcal{N}}(\mathbf{y}) = \mathop{\mathrm{tp}}^{\mathcal{M}}(\mathbf{y}),$$ and $\mu = \nu$ as desired. \(3\) $\implies$ (1). The argument uses point-set topology on $\mathbb{S}_n(\mathrm{T})$, similar to the proof of Urysohn's lemma or the Stone-Weierstrass theorem. We divide it into several steps. **Step 1:** We claim that for every type $\mu$ and neighborhood $\mathcal{O}$ of $\mu$, there exist inf-formulas $\psi_1,\dots, \psi_k$ and $\delta > 0$ such that for types $\nu$, if $\psi_j(\nu) > \psi_j(\mu) - \delta$ for $j = 1, \dots, k$, then $\nu \in \mathcal{O}$. 
To prove this, fix $\mu$ and a neighborhood $\mathcal{O}$, and suppose for contradiction that no such inf-formulas exist. Then for every $\delta > 0$ and any finite collection of inf-formulas $\psi_1$, ..., $\psi_k$, there exists some type $\nu \in \mathbb{S}_n(\mathrm{T}) \setminus \mathcal{O}$ satisfying $\psi_j(\nu) > \psi_j(\mu) - \delta$ for $j = 1, \dots, k$. Since $\mathbb{S}_n(\mathrm{T}) \setminus \mathcal{O}$ is compact, there exists some $\nu \in \mathbb{S}_n(\mathrm{T}) \setminus \mathcal{O}$ satisfying $\psi(\nu) \geq \psi(\mu)$ for all inf-formulas $\psi$. By (3), this implies $\nu = \mu$, which contradicts $\nu$ being in $\mathbb{S}_n(\mathrm{T}) \setminus \mathcal{O}$. **Step 2:** We claim that for every type $\mu$ and neighborhood $\mathcal{O}$, there exists a nonnegative inf-formula $\psi$ taking values in $[0,1]$ such that $\psi(\mu)>0$ and for all types $\nu$, if $\psi(\nu) > 0$, then $\nu \in \mathcal{O}$. To prove this, let $\psi_1$, ..., $\psi_k$ and $\delta$ be as in Step 1, and set $$\psi = \min\left(1,\, \min_j \left( \psi_j - \psi_j(\mu) + \delta \right)^+\right),$$ where $+$ denotes the positive part. Since minimum and $^+$ are increasing functions, this is an inf-formula and by the choice of $\delta$ it satisfies the requirements. **Step 3:** We claim that if $\mathcal{E}_0$ and $\mathcal{E}_1$ are disjoint closed subsets of $\mathbb{S}_n(\mathrm{T})$, then there exists an inf-formula $\psi$ taking values in $[0,1]$ such that $\psi|_{\mathcal{E}_0} = 0$ and $\psi|_{\mathcal{E}_1} = 1$. By Step 2, for each $\mu \in \mathcal{E}_1$, there exists a nonnegative inf-formula $\psi_\mu$ such that $\psi_\mu(\mu)>0$ and if $\psi_\mu(\nu) > 0$, then $\nu \in \mathbb{S}_n(\mathrm{T}) \setminus \mathcal{E}_0$. Let $\mathcal{O}_\mu = \{\nu: \psi_\mu(\nu) > 0\}$. These neighborhoods form an open cover of the compact set $\mathcal{E}_1$, and hence $\mathcal{E}_1$ can be covered by finitely many of these neighborhoods, say $\mathcal{O}_{\mu_1}$, ..., $\mathcal{O}_{\mu_k}$. 
Thus, $\sum_{j=1}^k \psi_j$ is strictly positive on $\mathcal{E}_1$ and attains some minimum $\delta > 0$ on this set. Let $$\psi = \min \left( 1, \frac{1}{\delta} \sum_{j=1}^k \psi_j \right).$$ Then $\psi$ is an inf-formula with the desired properties. **Step 4:** We claim that for every formula $\varphi$ and $\epsilon > 0$, there exists an inf-formula $\psi$ such that $|\varphi- \psi| < \epsilon$ in every model of $\mathrm{T}$. By applying an affine transformation to $\varphi$, we can assume without loss of generality that $0 \leq \varphi\leq 1$. Fix $k \in \mathbb{N}$ such that $1/k < \epsilon$. For each $j = 1$, ..., $k$, $\{\varphi\leq (j-1)/k\}$ and $\{ \varphi\geq j/k\}$ define disjoint closed sets in $\mathbb{S}_n(\mathrm{T})$, and therefore by Step 3, there exists an inf-formula $\psi_j$ such that $0 \leq \psi_j \leq 1$ and for $\nu \in \mathbb{S}_n(\mathrm{T})$, $$\varphi(\nu) \leq (j-1)/k \implies \psi_j(\nu) = 0, \qquad \varphi(\nu) \geq j/k \implies \psi_j(\nu) = 1.$$ Let $$\psi = \frac{1}{k} \sum_{j=1}^k \psi_j.$$ Then for types $\nu$, if $\varphi(\nu) \in [(j-1)/k,j/k]$, then $\psi_1(\nu)$, ..., $\psi_{j-1}(\nu)$ are $1$ and $\psi_{j+1}(\nu)$, ..., $\psi_k(\nu)$ are zero, so that $\psi(\nu) \in [(j-1)/k,j/k]$. Hence, $|\varphi(\nu) - \psi(\nu)| \leq 1/k < \epsilon$ for all $\nu \in \mathbb{S}_n(\mathrm{T})$. Since $\varphi$ and $\epsilon>0$ were arbitrary, this proves (1). ◻ *Remark 4*. It is immediate from the characterization of model completeness in Lemma [Lemma 3](#lem: type MC){reference-type="ref" reference="lem: type MC"} (3) that quantifier elimination implies model completeness since every quantifier-free formula counts as an inf-formula. Alternatively, condition (2) of Lemma [Lemma 2](#lem: type QE){reference-type="ref" reference="lem: type QE"} immediately implies condition (3) of Lemma [Lemma 3](#lem: type MC){reference-type="ref" reference="lem: type MC"}.
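Step 4 above has a simple scalar analogue that can be checked numerically: replacing each $\psi_j$ by a cutoff that is $0$ below $(j-1)/k$ and $1$ above $j/k$, the average $\frac1k\sum_j\psi_j$ reproduces the original function within $1/k$. A minimal numpy sketch (the hard thresholds stand in for the continuous $\psi_j$ of the proof; the function names are illustrative, not from the paper):

```python
import numpy as np

def staircase_approx(phi_vals, k):
    """Average of k cutoffs, mimicking psi = (1/k) * sum_j psi_j in Step 4.

    Here psi_j is taken to be the indicator of {phi >= j/k}: it is 0 where
    phi <= (j-1)/k and 1 where phi >= j/k, as required of psi_j (the actual
    psi_j in the proof are continuous inf-formulas; the indicator is simply
    the easiest function with these two properties).
    """
    psi = np.zeros_like(phi_vals)
    for j in range(1, k + 1):
        psi += (phi_vals >= j / k).astype(float)
    return psi / k

x = np.linspace(0.0, 1.0, 1001)
phi = np.sin(np.pi * x) ** 2      # any test function with values in [0, 1]
for k in (4, 16, 64):
    err = np.max(np.abs(phi - staircase_approx(phi, k)))
    print(k, err <= 1.0 / k)      # the proof's bound |phi - psi| <= 1/k
```

The printed checks confirm the $1/k$ error bound from the proof, with the approximation improving as $k$ grows.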
## Background on spectral gap and quantum expanders {#sec: sg preliminaries} In Sections [4](#sec: MC factor proof){reference-type="ref" reference="sec: MC factor proof"} and [6](#sec: further remarks){reference-type="ref" reference="sec: further remarks"}, we use notions of spectral gap for tracial von Neumann algebras. Let $\mathcal{N}\subseteq \mathcal{M}$ be an inclusion of tracial von Neumann algebras. Let $d \in \mathbb{N}$ and $C > 0$. We say that $\mathcal{N}\subseteq \mathcal{M}$ has *$(C,d)$-spectral gap* if there exist $x_1$, ..., $x_d$ in the unit ball $B_1^{\mathcal{N}}$ such that $$\label{eqn: spectral gap def} d(y,\mathcal{N}' \cap \mathcal{M})^2 \leq C \sum_{j=1}^d \left\lVert [x_j,y] \right\rVert_2^2 \text{ for } y \in \mathcal{M},$$ where $\mathcal{N}' \cap \mathcal{M}= \{z \in \mathcal{M}: [z,x] = 0 \text{ for } x \in \mathcal{N}\}$. If this is true for some $d$ and $C$, we say that $\mathcal{N}\subseteq \mathcal{M}$ has *spectral gap*. In the case $\mathcal{N}= \mathcal{M}$, note that $\mathcal{N}' \cap \mathcal{M}$ reduces to the center $Z(\mathcal{M})$, and in this case, we will say simply that $\mathcal{M}$ has spectral gap. Goldbring gave a model-theoretic interpretation of spectral gap: $\mathcal{N}\subseteq \mathcal{M}$ having spectral gap implies that $\mathcal{N}' \cap \mathcal{M}$ is a definable set with parameters from $\mathcal{N}$ [@Goldbring2023spectralgap]. To motivate the definition of quantum expanders, we recall the following well-known characterization of unitaries that witness spectral gap. **Lemma 5**. *Let $\mathcal{N}\subseteq \mathcal{M}$ be an inclusion of tracial von Neumann algebras, let $\epsilon > 0$, and let $u_1$, ..., $u_d$ be unitaries in $\mathcal{M}$.
Then the following are equivalent:* (1) *For $a \in \mathcal{M}$, $$\left\lVert a - E_{\mathcal{N}}(a) \right\rVert_2^2 \leq \frac{1}{\epsilon} \sum_{j=1}^d \left\lVert [u_j,a] \right\rVert_2^2.$$* (2) *For $a \in \mathcal{M}$, $$\left\lVert \sum_{j=1}^d u_j(a - E_{\mathcal{N}}(a)) u_j^* + u_j^*(a - E_{\mathcal{N}}(a))u_j \right\rVert_2 \leq (2d - \epsilon) \left\lVert a - E_{\mathcal{N}}(a) \right\rVert_2.$$* *Proof.* Define $T: L^2(\mathcal{M}) \to L^2(\mathcal{M})^d$ by $$T(a) = \begin{bmatrix} [u_1,a] \\ \vdots \\ [u_d,a] \end{bmatrix}.$$ Then $$T^*T(a) = 2d\,a - \sum_{j=1}^d u_jau_j^* - \sum_{j=1}^d u_j^*au_j.$$ Note that $T$ maps the orthogonal complement $\mathcal{N}^{\perp}$ into $(\mathcal{N}^{\perp})^d$. Condition (1) can thus be restated as $$\epsilon \left\lVert a \right\rVert_2^2 \leq \left\lVert T(a) \right\rVert_2^2 = \left\langle a,T^*T(a) \right\rangle \text{ for } a \in \mathcal{N}^{\perp},$$ which is equivalent to the spectrum of $T^*T |_{\mathcal{N}^{\perp}}$ being contained in $[\epsilon,\infty)$. Meanwhile, condition (2) can be restated as $$\left\lVert (2d - T^*T)|_{\mathcal{N}^{\perp}} \right\rVert \leq 2d - \epsilon,$$ which is equivalent to (1). ◻ Quantum expanders are defined as follows. For $\epsilon > 0$ and $d \geq 2$, a *$(d,\epsilon)$-quantum expander* is a sequence of $d$-tuples of $n \times n$ unitaries $U_1^{(n)}$, ..., $U_d^{(n)}$ such that for $A \in M_n(\mathbb{C})$, $$\label{eq.4.10} \left\lVert \sum_{j=1}^d U_j^{(n)} (A - \mathop{\mathrm{tr}}_n(A)) (U_j^{(n)})^* \right\rVert_2 \leq (d - \epsilon) \left\lVert A - \mathop{\mathrm{tr}}_n(A) \right\rVert_2.$$ We also apply the same terminology even when $U_j^{(n)}$ is only defined for a subsequence of values of $n$. 
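The identity for $T^*T$ in the proof of Lemma 5 is easy to confirm numerically. In the sketch below (numpy; the Haar sampler via a phase-corrected QR decomposition is a standard recipe, not from the paper), the adjoint $T^*$ sends $(b_1,\dots,b_d)$ to $\sum_j (u_j^* b_j - b_j u_j^*)$, so applying it to $T(a)$ should reproduce $2d\,a - \sum_j u_j a u_j^* - \sum_j u_j^* a u_j$:

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_unitary(n):
    # QR of a complex Ginibre matrix; correcting by the phases of diag(R)
    # makes the distribution of Q exactly Haar.
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    dg = np.diagonal(r)
    return q * (dg / np.abs(dg))

n, d = 5, 3
us = [haar_unitary(n) for _ in range(d)]
a = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# T(a) = ([u_1, a], ..., [u_d, a]); its adjoint sends (b_j)_j to
# sum_j (u_j^* b_j - b_j u_j^*), so T^*T(a) can be computed directly:
commutators = [u @ a - a @ u for u in us]
tstar_t = sum(u.conj().T @ b - b @ u.conj().T for u, b in zip(us, commutators))

# The closed form from the proof: 2d*a - sum u_j a u_j^* - sum u_j^* a u_j
closed_form = 2 * d * a - sum(u @ a @ u.conj().T for u in us) \
                        - sum(u.conj().T @ a @ u for u in us)

print(np.allclose(tstar_t, closed_form))   # True
```

Any unitaries would do here; Haar random ones are used only because they reappear in Section 4.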
By applying Lemma [Lemma 5](#lem: spectral gap){reference-type="ref" reference="lem: spectral gap"} with $\mathcal{N}= \mathcal{M}= M_n(\mathbb{C})$ and $\mathcal{N}' \cap \mathcal{M}= \mathbb{C}1$, we have the following relationship between spectral gap and quantum expanders. **Corollary 6**. *Unitaries $U_1^{(n)}$, ..., $U_d^{(n)}$ witness $(1/\epsilon,d)$-spectral gap for $M_n(\mathbb{C})$ if and only if $(U_1^{(n)},\dots,U_d^{(n)},(U_1^{(n)})^*,\dots,(U_d^{(n)})^*)$ is a $(2d,\epsilon)$-quantum expander.* Note that ${(U_1^{(n)}, \dots, U_d^{(n)}, (U_1^{(n)})^*, \dots, (U_d^{(n)})^*)}$ is a $(2d,2\epsilon)$-quantum expander whenever $(U_1^{(n)}, \dots, U_d^{(n)})$ is a $(d,\epsilon)$-quantum expander. This follows because the adjoint of the map $A \mapsto \sum_{j=1}^d U_j^{(n)} A (U_j^{(n)})^*$ is the map $A \mapsto \sum_{j=1}^d (U_j^{(n)})^* A U_j^{(n)}$. Hence, there is often no loss of generality in restricting to quantum expanders given by a tuple of matrices and their adjoints. Several constructions of quantum expanders are known; see [@BST2010] for a survey. - Hastings [@Hastings2007] showed that tuples of Haar random unitaries give quantum expanders (see Theorem [Theorem 14](#thm: Hastings){reference-type="ref" reference="thm: Hastings"} below), and Pisier proved a similar result in [@PisierExpanders]. - A deterministic construction based on discrete Fourier transforms on non-abelian groups was given implicitly by Ambainis and Smith [@AS2004] and explicitly by Ben-Aroya, Schwartz, and Ta-Shma in [@BT2007] and [@BST2010]. - Another explicit construction was given by Gross and Eisert in [@GE2008] based on Margulis's construction of classical expanders. - Harrow [@Harrow2008] showed how to produce a quantum expander from any classical expander. - A zig-zag construction is given by Ben-Aroya, Schwartz, and Ta-Shma in [@BST2010 §4].
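Hastings's result can be illustrated numerically: for independent Haar unitaries, the norm of $A \mapsto \sum_j U_j A U_j^*$ restricted to traceless matrices is typically close to $2\sqrt{d-1}$, well below the trivial bound $d$. A small sketch (assumes numpy; the map is represented on vectorized matrices via Kronecker products, and the traceless subspace via a rank-one projection — these representational choices are ours, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(1)

def haar_unitary(n):
    # Phase-corrected QR of a complex Ginibre matrix gives a Haar unitary.
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    dg = np.diagonal(r)
    return q * (dg / np.abs(dg))

n, d = 16, 4
us = [haar_unitary(n) for _ in range(d)]

# vec(U A U^*) = (conj(U) kron U) vec(A) for column-stacking vec, so the
# map Phi(A) = sum_j U_j A U_j^* acts on vec'd matrices as this n^2 x n^2 matrix:
phi = sum(np.kron(u.conj(), u) for u in us)

# Orthogonal projection onto traceless matrices (Phi preserves this subspace,
# and fixes the multiples of the identity up to the factor d).
v = np.eye(n).reshape(-1, 1) / np.sqrt(n)
proj = np.eye(n * n) - v @ v.T

norm_restricted = np.linalg.norm(phi @ proj, 2)
print(norm_restricted < d)   # a (d, eps)-quantum expander with eps = d - norm
```

For $d = 4$ one expects the restricted norm to be near $2\sqrt{3} \approx 3.46$, giving a visible expansion gap even at modest $n$.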
Moreover, if $G$ is a group with property (T) with generators $g_1$, ..., $g_d$, and $(\pi_j)_{j \in \mathbb{N}}$ is a sequence of irreducible unitary representations of $G$ on $\mathbb{C}^{n_j}$, then $(\pi_j(g_1),\dots,\pi_j(g_d))$ is a $(d,\epsilon)$-quantum expander where $\epsilon$ is related to the Kazhdan constant (for background on property (T), see [@BdlHVpropertyT]). Indeed, irreducibility of the representation means that the commutant of $\pi_j(g_1)$, ..., $\pi_j(g_d)$ cannot contain any nontrivial projections and hence is $\mathbb{C}1$, and then property (T) implies that $\pi_j(g_1)$, ..., $\pi_j(g_d)$ witness spectral gap for some $\epsilon > 0$. For example, $G = SL_3(\mathbb{Z})$ is a group with property (T) that has irreducible representations on $\mathbb{C}^n$ for infinitely many $n$ (for instance, those obtained from the representations of $SL_3(\mathbb{Z}/ q\mathbb{Z})$ for $q$ prime; see [@SF1973]). Property (T) groups and quantum expanders are interchangeable in many applications; see for example the two proofs of Lemma 4.3 offered in [@ioana2021almost]. # Complete Characterization of Quantifier Elimination {#sec: QE} ## Proof of Theorem [Theorem 1](#thm: main QE){reference-type="ref" reference="thm: main QE"} {#sec: QE proof} Toward the proof of Theorem [Theorem 1](#thm: main QE){reference-type="ref" reference="thm: main QE"}, first note that we can restrict our attention to type I algebras. Indeed, the first author already showed that any tracial von Neumann algebra with a type II$_1$ summand does not admit quantifier elimination [@Farah2023 Theorem 1] (another argument is given in Remark [Remark 23](#rem: alternate QE proof){reference-type="ref" reference="rem: alternate QE proof"} below). 
The next lemma will similarly allow us to eliminate summands of the form $M_n(\mathbb{C}) \otimes L^\infty[0,1]$ with $n \geq 2$, by showing that if either (1) or (2) in Theorem [Theorem 1](#thm: main QE){reference-type="ref" reference="thm: main QE"} holds, then there can be no such summands. **Lemma 7**. *Suppose that $\mathcal{M}$ is a tracial von Neumann algebra whose theory admits quantifier elimination or which has the property that any two projections of the same trace are conjugate by an automorphism of $\mathcal{M}$. Then $\mathcal{M}$ cannot have a direct summand of the form $M_n(\mathbb{C}) \otimes L^\infty[0,1]$ for $n \geq 2$.* *Proof.* Arguing by contrapositive, suppose that $\mathcal{M}$ has a direct summand of the form $M_n(\mathbb{C}) \otimes L^\infty[0,1]$. In $M_n(\mathbb{C}) \otimes L^\infty[0,1]$, consider the projections $p = 1 \otimes \mathbf{1}_{[0,1/n]}$ and $q = E_{1,1} \otimes 1$, where $E_{1,1}$ is the canonical matrix unit in $M_n(\mathbb{C})$. These two projections have the same trace, hence they have the same $*$-moments, i.e. the same quantifier-free type. However, they do not have the same type because $p$ is central and $q$ is not central in $M_n(\mathbb{C}) \otimes L^\infty[0,1]$, and hence also in $\mathcal{M}$. This shows that $\mathcal{M}$ cannot admit quantifier elimination. Furthermore, since $p$ and $q$ do not have the same type, they cannot be conjugate by an automorphism of $\mathcal{M}$. ◻ Therefore, it suffices to prove Theorem [Theorem 1](#thm: main QE){reference-type="ref" reference="thm: main QE"} in the case where $\mathcal{M}$ is a direct sum of an optional $L^\infty[0,1]$ term and matrix algebras. Let us decompose $\mathcal{M}$ as follows: $$\mathcal{M}= (L^\infty[0,1], \alpha_0) \oplus \left( \bigoplus_{j \in J} (M_{n_j}(\mathbb{C}), \alpha_j) \right).$$ Here $\alpha_j$, for $j\in \{0\}\sqcup J$, are the weights of the direct summands. Thus $\alpha_0+\sum_{j\in J} \alpha_j=1$.
We rely on the following classification of the automorphisms of $\mathcal{M}$ (for background on the structure theory for finite-dimensional algebras, see e.g. [@Davidson1996 §3.1], [@JonesSunder1997 §3.2]). Every automorphism of $\mathcal{M}$ is a composition of 1. A direct sum of automorphisms of each component (a measure-space automorphism of $L^\infty[0,1]$ and a unitary conjugation of each $M_n(\mathbb{C})$ term); 2. Swaps of matrix algebras $M_n(\mathbb{C})$ of the same dimension and the same weight. We first focus on the atomic portion. **Lemma 8**. *Suppose that $\mathcal{M}$ is a tracial von Neumann algebra such that any two projections of the same trace are conjugate by an automorphism of $\mathcal{M}$. Then any two matrix summands of $\mathcal{M}$ with a common dimension greater than or equal to 2 must have different weights.* *Proof.* Suppose there are $j, k \in J$ such that $n_j = n_k \geq 2$ and $\alpha_j = \alpha_k$. Let $p$ be a projection of rank $2$ in the $M_{n_j}(\mathbb{C})$ summand, and let $q$ be a projection of rank $1$ in both the $M_{n_j}(\mathbb{C})$ and $M_{n_k}(\mathbb{C})$ summands (and $p$, $q$ are both $0$ in all other summands). Then $\tau(p) = \tau(q) = \frac{2 \alpha_j}{n_j}$, but $p$ and $q$ are not conjugate by any automorphism. ◻ *Proof of Theorem [Theorem 1](#thm: main QE){reference-type="ref" reference="thm: main QE"}.* (1) $\implies$ (2). Suppose that $\mathcal{M}$ admits elimination of quantifiers. In order to deal with the diffuse $L^\infty$ term and the atomic terms separately, we first show that the central projection $1_{L^\infty}$ is a definable element (see subsection [2.1.3](#sec: definable){reference-type="ref" reference="sec: definable"}). Note that for each $k$, the set $$S_k = \{e_1,\dots,e_k \in P(\mathcal{M}) \cap Z(\mathcal{M}): e_i e_j = 0 \text{ for } i \neq j, \ \tau(e_j) = \alpha_0/k \text{ for } j = 1, \dots, k \}$$ is definable using the stability of projections.
Moreover, if $x$ is any element satisfying $$\inf_{(e_1,\dots,e_k) \in S_k} d\left(x,\sum_{j=1}^k e_j \right) \leq \epsilon,$$ then $x$ is $\epsilon$-close to a central projection that is divisible into $k$ central projections of trace $\alpha_0/k$. If $k$ is large enough, then the sum of the weights of the discrete summands with weight at most $\alpha_0/k$ will be less than $\epsilon^2$. Hence, $\sum_{j=1}^k e_j$ will be $2\epsilon$-close to $1_{L^\infty}$. So $1_{L^\infty}$ is definable. Let $p, q$ be two projections with the same trace. As noted in the proof of Lemma [Lemma 7](#lem: eliminate diffuse matrix term){reference-type="ref" reference="lem: eliminate diffuse matrix term"}, $p$ and $q$ then have the same quantifier-free type and hence they have the same type. Because $1_{L^\infty}$ is definable, every formula over $L^\infty$ and every formula over $\mathcal{N}:= \mathcal{M}\ominus L^\infty$ can be expressed as a definable predicate over $\mathcal{M}$. Thus, $1_{L^\infty} p$ and $1_{L^\infty}q$ have the same type in $L^\infty[0,1]$, and $1_\mathcal{N} p$ and $1_\mathcal{N} q$ have the same type in $\mathcal{N}$. In particular, $1_{L^\infty} p$ and $1_{L^\infty}q$ are two projections of the same trace in $L^{\infty}[0,1]$ and therefore conjugate by an automorphism. Meanwhile, since $1_\mathcal{N} p$ and $1_\mathcal{N} q$ have the same type in $\mathcal{N}$, they are conjugate by an automorphism in some elementary extension of $\mathcal{N}$. But $\mathcal{N}$ is type I and atomic, so any ultrapower of $\mathcal{N}$ is isomorphic to $\mathcal{N}$; hence any elementary extension of $\mathcal{N}$ is isomorphic to $\mathcal{N}$ since it embeds into an ultrapower and contains the diagonal. It follows that $1_\mathcal{N} p$ and $1_\mathcal{N} q$ are conjugate by an automorphism of $\mathcal{N}$, and so $p$ and $q$ are conjugate by an automorphism of $\mathcal{M}$. \(2\) $\implies$ (1) Let $\mathrm{T} := \text{Th}(\mathcal{M})$.
We must check that every $\mathrm{T}$ type is determined by its quantifier-free type. First note that all $\mathrm{T}$ types can be realized in $\mathcal{M}$; indeed, $\mathcal{M}^{\mathcal{U}}$ is countably saturated (see §[2.1.2](#sec: model theory prelims){reference-type="ref" reference="sec: model theory prelims"}) and is a direct sum of $L^\infty[0,1]^{\mathcal{U}}$ and $(\mathbb{C}^{n_j})^{\mathcal{U}} = \mathbb{C}^{n_j}$ and ${M_{n_j}(\mathbb{C})^{\mathcal{U}} = M_{n_j}(\mathbb{C})}$. Any tuple of elements in $L^\infty[0,1]^{\mathcal{U}}$ has the same type as some tuple in $L^\infty[0,1]$, and swapping out the element in the $L^\infty[0,1]^{\mathcal{U}}$ summand for one of the same type will not change the type of the overall element in $\mathcal{M}^{\mathcal{U}}$. Fix some $\mathbf{x} = (x_1,\dots,x_k)$ and $\mathbf{y} = (y_1,\dots,y_k)$ in $\mathcal{M}$ with the same quantifier-free type. We shall build a sequence of automorphisms $\sigma_n$ of $\mathcal{M}$ such that $\sigma_n(\mathbf{x}) \rightarrow \mathbf{y}$, so $\mathop{\mathrm{tp}}^\mathcal{M}(\mathbf{x}) = \mathop{\mathrm{tp}}^\mathcal{M}(\mathbf{y})$. Note that by Lemma [Lemma 8](#lem: no identical matrix summands){reference-type="ref" reference="lem: no identical matrix summands"}, along with the classification of automorphisms of $\mathcal{M}$, the only possible automorphisms of $\mathcal{M}$ are those which are a direct sum of automorphisms of each component, possibly composed with swaps of copies of $\mathbb{C}$ which have the same weight. 
This motivates the following decomposition of $\mathcal{M}$, where we group together copies of $\mathbb{C}$ which have the same weight:[^2] $$\label{eq.alpha-j} \mathcal{M}= (L^\infty[0,1], \alpha_0) \oplus \left( \bigoplus_{j \in J_1} (\mathbb{C}, \alpha_j)^{\oplus n_j} \right) \oplus \left( \bigoplus_{j \in J_2} (M_{n_j}(\mathbb{C}), \alpha_j) \right).$$ We build a sequence of automorphisms in each summand separately, by showing that the quantifier-free types of the compressions of $\mathbf{x}$ and $\mathbf{y}$ to each summand agree. Let us start with the matrix summands: let $p_j$, $j \in J_2$, be the central projection onto the $j$th summand $M_{n_j}(\mathbb{C})$, where $n_j \geq 2$. We claim that $p_j\mathbf{x} = (p_j x_1,\dots,p_j x_k)$ and ${p_j \mathbf{y} = (p_j y_1,\dots,p_j y_k)}$ have the same quantifier-free type in $M_{n_j}(\mathbb{C})$. Let $f$ be a self-adjoint non-commutative $*$-polynomial and note that $p_j f(\mathbf{x}) = f(p_j \mathbf{x})$ and $p_j f(\mathbf{y}) = f(p_j \mathbf{y})$ because $p_j$ is a central projection. Let $E \subseteq \mathbb{R}$ be Borel. Then $\tau(1_E(f(\mathbf{x}))) = \tau(1_E(f(\mathbf{y})))$, so there is some automorphism $\sigma$ conjugating $1_E(f(\mathbf{x}))$ to $1_E(f(\mathbf{y}))$. Because no two copies of $M_n(\mathbb{C})$ in the direct sum decomposition have the same weight by Lemma [Lemma 8](#lem: no identical matrix summands){reference-type="ref" reference="lem: no identical matrix summands"}, $\sigma$ must map $p_j 1_E(f(\mathbf{x}))$ to $p_j 1_E(f(\mathbf{y}))$, and thus $\mathop{\mathrm{tr}}_n(p_j 1_E(f(\mathbf{x}))) = \mathop{\mathrm{tr}}_n(p_j 1_E(f(\mathbf{y})))$. Equivalently, ${\mathop{\mathrm{tr}}_n(1_E(f(p_j \mathbf{x}))) = \mathop{\mathrm{tr}}_n(1_E(f(p_j \mathbf{y})))}$.
Because this holds for every Borel set $E$, $f(p_j \mathbf{x})$ and $f(p_j \mathbf{y})$ have the same eigenvalues with the same multiplicity, so we have ${\mathop{\mathrm{tr}}_n(f(p_j \mathbf{x})) = \mathop{\mathrm{tr}}_n(f(p_j \mathbf{y}))}$. As this is true for all self-adjoint non-commutative $*$-polynomials $f$, $$\mathop{\mathrm{qftp}}^{M_{n_j}(\mathbb{C})}(p_j \mathbf{x}) = \mathop{\mathrm{qftp}}^{M_{n_j}(\mathbb{C})}(p_j \mathbf{y}).$$ By the multivariate Specht's theorem [@Jing2015], there is some unitary $u_j \in M_{n_j}(\mathbb{C})$ so that ${u_j p_j\mathbf{x}u_j^* = p_j \mathbf{y}}$. The same argument as in the matrix case shows that when $p_j$ for $j \in J_1$ is the central projection onto some summand of the form $\mathbb{C}^{n_j}$, $n_j \geq 1$, with each copy of $\mathbb{C}$ having the same weight $\alpha_j$, we obtain that $\mathop{\mathrm{qftp}}^{\mathbb{C}^{n_j}} (p_j \mathbf{x}) = \mathop{\mathrm{qftp}}^{\mathbb{C}^{n_j}}(p_j \mathbf{y})$, so some automorphism $\pi_j$ of $\mathbb{C}^{n_j}$ sends $p_j \mathbf{x}$ to $p_j \mathbf{y}$. Finally, let $p_0$ be the central projection onto the $L^\infty[0,1]$ summand. Then ${p_0 = 1 - \sum_{j \in J_1 \sqcup J_2} p_j}$, where $p_j$ is the central projection onto the $j$th summand of $\mathcal{M}$. Hence, for any non-commutative $*$-polynomial $f$, $$\tau(p_0 f(\mathbf{x})) = \tau(f(\mathbf{x})) - \sum_{j \in J_1 \sqcup J_2} \tau(p_j f(\mathbf{x})) = \tau(f(\mathbf{y})) - \sum_{j \in J_1 \sqcup J_2} \tau(p_j f(\mathbf{y})) = \tau(p_0 f(\mathbf{y})),$$ so we again obtain that $\mathop{\mathrm{qftp}}^{L^\infty}(p_0 \mathbf{x}) = \mathop{\mathrm{qftp}}^{L^\infty} (p_0 \mathbf{y})$. By [@JekelModelEntropy Lemma 2.16], there is a sequence of automorphisms $\alpha_n$ of $L^\infty[0,1]$ such that $\alpha_n(p_0 \mathbf{x}) \to p_0 \mathbf{y}$.
By setting $\sigma_n$ to be the direct sum of the automorphisms in each summand of $\mathcal{M}$ given by the arguments above, that is, $$\sigma_n = \alpha_n \oplus \bigoplus_{j \in J_1} \pi_j \oplus \bigoplus_{j \in J_2} \operatorname{Ad}_{u_j},$$ we obtain that $\sigma_n(\mathbf{x}) \to \mathbf{y}$, so $\mathop{\mathrm{tp}}^{\mathcal{M}}(\mathbf{x}) = \mathop{\mathrm{tp}}^{\mathcal{M}}(\mathbf{y})$. We conclude that $\mathcal{M}$ admits elimination of quantifiers. ◻ ## Tests for quantifier elimination {#sec: QE tests} While Theorem [Theorem 1](#thm: main QE){reference-type="ref" reference="thm: main QE"} is an elegant characterization of quantifier elimination, it is not a very explicit criterion to decide if a given type I algebra admits quantifier elimination or not. Therefore, in this section we will state some more explicit conditions. The first proposition gives a characterization of quantifier elimination by listing possible obstructions. **Proposition 9**. *A separable tracial von Neumann algebra $\mathcal{M}$ admits quantifier elimination if and only if all the following conditions hold:* (1) *$\mathcal{M}$ is type I.* (2) *$\mathcal{M}$ has no summands of the form $M_n(\mathbb{C}) \otimes L^\infty[0,1]$.* (3) *If $\mathcal{M}$ has an $L^\infty[0,1]$ summand with weight $\alpha_0$, and if $p$ and $q$ are two projections in the atomic part, then either $\tau(p) = \tau(q)$ or $|\tau(p) - \tau(q)| > \alpha_0$.* (4) *If $p$ and $q$ are two projections in the atomic part with $\tau(p) = \tau(q)$, then we have (letting $E_ {Z(\mathcal{M})}$ denote the center-valued trace in $\mathcal{M}$) $E_{Z(\mathcal{M})}[p] = \sigma \circ E_{Z(\mathcal{M})}[q]$ where $\sigma$ is an automorphism of $\mathcal{M}$ given by a permutation of one-dimensional summands with the same weight.* *Proof.* Suppose that $\mathcal{M}$ admits quantifier elimination. 
Then [@Farah2023 Theorem 1] shows that (1) holds and Lemma [Lemma 7](#lem: eliminate diffuse matrix term){reference-type="ref" reference="lem: eliminate diffuse matrix term"} shows that (2) holds. For (3), suppose for contradiction that there are two projections $p$ and $q$ in the atomic part with $0 < |\tau(p) - \tau(q)| \leq \alpha_0$, and without loss of generality suppose that $\tau(p) < \tau(q)$. Let $p'$ be a projection in $L^\infty[0,1]$ such that $\tau(p') = \tau(q) - \tau(p)$. Then $q$ and $p' + p$ have the same trace but are not equivalent by an automorphism because the $L^\infty[0,1]$ summand must be invariant under any automorphism. Hence, by Theorem [Theorem 1](#thm: main QE){reference-type="ref" reference="thm: main QE"}, $\mathcal{M}$ does not have quantifier elimination. For (4), let $p$ and $q$ be projections in the atomic part with $\tau(p) = \tau(q)$. By Theorem [Theorem 1](#thm: main QE){reference-type="ref" reference="thm: main QE"}, $p$ and $q$ are conjugate by an automorphism. Hence also $E_{Z(\mathcal{M})}[p]$ and $E_{Z(\mathcal{M})}[q]$ are conjugate by an automorphism. Lemma [Lemma 8](#lem: no identical matrix summands){reference-type="ref" reference="lem: no identical matrix summands"} shows that there cannot be two copies of $M_n(\mathbb{C})$ with the same weight for $n \geq 2$, and hence any automorphism must fix the central projections associated to $M_n(\mathbb{C})$ terms for $n \geq 2$. Thus, $E_{Z(\mathcal{M})}[p]$ and $E_{Z(\mathcal{M})}[q]$ must have equal components in each of the $M_n(\mathbb{C})$ summands for $n \geq 2$. So they differ by an automorphism that merely permutes the one-dimensional summands. Conversely, suppose that (1)--(4) hold. Let $p$ and $q$ be two projections of the same trace. Using (3), the traces of $p$ and $q$ in the $L^\infty[0,1]$ summand must agree, so there is an automorphism $\alpha$ of $\mathcal{M}$ such that $\alpha(p) - q$ is in the atomic part of $\mathcal{M}$.
Hence, assume without loss of generality that $p$ and $q$ are in the atomic part. Up to an automorphism we can assume that $E_{Z(\mathcal{M})}[p] = E_{Z(\mathcal{M})}[q]$. This means that for each direct summand $M_n(\mathbb{C})$ in the decomposition of $\mathcal{M}$ (where $n \geq 1$), the components of $p$ and $q$ in the same summand have the same rank. Therefore, these components are conjugate by a unitary in $M_n(\mathbb{C})$. Overall, $p$ and $q$ are conjugate by an automorphism. Thus, by Theorem [Theorem 1](#thm: main QE){reference-type="ref" reference="thm: main QE"}, $\mathcal{M}$ admits quantifier elimination. ◻ Next, let us make the conditions on the atomic part even more explicit. In Lemma [Lemma 8](#lem: no identical matrix summands){reference-type="ref" reference="lem: no identical matrix summands"}, we showed that two matrix algebras of the same dimension cannot have the same weight, but there are many more constraints of a similar nature. For instance, if $$\mathcal{M}= (\mathbb{C},1/2) \oplus (\mathbb{C},1/3) \oplus (\mathbb{C},1/6),$$ then $1 \oplus 0 \oplus 0$ and $0 \oplus 1 \oplus 1$ have the same trace but are not automorphically conjugate. Another example is if $$\mathcal{M}= (\mathbb{C},2/5) \oplus (M_3(\mathbb{C}),3/5),$$ then $\mathcal{M}$ does not admit quantifier elimination since a rank $2$ projection in the second summand has the same trace as $1$ in the first summand. Overall, we have to consider linear combinations of ranks of projections in the different summands that could possibly add up to zero. Moreover, as in Proposition [Proposition 9](#prop: QE obstructions){reference-type="ref" reference="prop: QE obstructions"} (3), if the linear combinations add up to something smaller than $\alpha_0$, that would pose an obstacle to quantifier elimination just as well. 
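These obstructions can be found mechanically: it suffices to search over the finitely many integer vectors $(r_j)$ with $|r_j| \leq n_j$ for one whose combined trace contribution is within $\alpha_0$ of zero. A brute-force sketch with exact rational arithmetic (the encoding of summands by their rank bound and per-rank trace is our bookkeeping choice; function names are illustrative):

```python
from fractions import Fraction as F
from itertools import product

def trace_collision(alpha0, summands):
    """Search for a nonzero (r_j) with |r_j| <= n_j and |sum_j r_j * q_j| <= alpha0,
    the obstruction to quantifier elimination from the discussion above.

    summands: list of (n_j, q_j), where n_j bounds the rank and q_j is the
    trace of a rank-one projection: a group of n one-dimensional summands of
    weight a is encoded as (n, a); a summand (M_n(C), a) as (n, a / n).
    """
    for r in product(*(range(-n, n + 1) for n, _ in summands)):
        if any(r) and abs(sum(F(rj) * q for rj, (_, q) in zip(r, summands))) <= alpha0:
            return r
    return None

# (C,1/2) + (C,1/3) + (C,1/6): 1/2 = 1/3 + 1/6, so QE fails
print(trace_collision(F(0), [(1, F(1, 2)), (1, F(1, 3)), (1, F(1, 6))]))
# (C,2/5) + (M_3(C),3/5): trace of 1 in the first summand = trace of a rank-2
# projection in the second, so QE fails
print(trace_collision(F(0), [(1, F(2, 5)), (3, F(3, 5) / 3)]))
# (C,3/4) + (M_2(C),1/4): no collision, so no obstruction of this kind
print(trace_collision(F(0), [(1, F(3, 4)), (2, F(1, 4) / 2)]))
```

The first two calls return a witnessing vector for the two example algebras above, while the third returns `None`.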
We also remark that the behavior of the one-dimensional summands is somewhat different from that of the $M_n(\mathbb{C})$ summands for $n \geq 2$ since Lemma [Lemma 8](#lem: no identical matrix summands){reference-type="ref" reference="lem: no identical matrix summands"} only applies when $n \geq 2$. These considerations motivate the proposition below, especially the formulation of (3). **Proposition 10**. *Let $\mathcal{M}$ be a separable tracial von Neumann algebra. Then $\mathcal{M}$ admits quantifier elimination if and only if $\mathcal{M}$ has a decomposition of the form: $$\mathcal{M}= (L^\infty[0,1], \alpha_0) \oplus \left( \bigoplus_{j \in J_1} (\mathbb{C}, \alpha_j)^{\oplus n_j} \right) \oplus \left( \bigoplus_{j \in J_2} (M_{n_j}(\mathbb{C}), \alpha_j) \right),$$ where* (1) *The weights $\alpha_0 \geq 0$ and $\alpha_j > 0$ for $j \in J_1 \cup J_2$. The weights sum to $1$.* (2) *The weights $\alpha_j$ for $j \in J_1$ are distinct, that is, we have grouped together *all* one-dimensional summands of the same weight in our decomposition.* (3) *For all choices of integers $|r_j| \leq n_j$ for $j \in J_1 \cup J_2$ which are not all zero, we have $$\left| \sum_{j \in J_1} r_j \alpha_j + \sum_{j \in J_2} \frac{r_j \alpha_j}{n_j} \right| > \alpha_0.$$* *Proof.* Suppose $\mathcal{M}$ admits quantifier elimination. We already know $\mathcal{M}$ decomposes into an optional $L^\infty[0,1]$ term and an atomic part. By grouping the one-dimensional terms with the same weight, we obtain a direct sum decomposition satisfying conditions (1) and (2). It remains to check condition (3).
By contrapositive, suppose that there exist integers $|r_j| \leq n_j$ satisfying $$\left| \sum_{j \in J_1} r_j \alpha_j + \sum_{j \in J_2} \frac{r_j \alpha_j}{n_j} \right| \leq \alpha_0.$$ For $j \in J_1$, let $p_j$ and $q_j$ be projections in $(\mathbb{C},\alpha_j)^{\oplus n_j}$ such that $$\mathop{\mathrm{rank}}(p_j) = \max(r_j,0), \qquad \mathop{\mathrm{rank}}(q_j) = \max(-r_j,0).$$ Similarly, for $j \in J_2$, let $p_j$ and $q_j$ be projections in $(M_{n_j}(\mathbb{C}),\alpha_j)$ with the same rank conditions. Thus, $\mathop{\mathrm{rank}}(p_j) - \mathop{\mathrm{rank}}(q_j) = r_j$. Finally, let $$t = \sum_{j \in J_1} r_j \alpha_j + \sum_{j \in J_2} \frac{r_j \alpha_j}{n_j},$$ and let $p_0$ and $q_0$ be projections in $(L^\infty[0,1],\alpha_0)$ such that $\tau(p_0) = \max(-t,0)$ and $\tau(q_0) = \max(t,0)$, so that $\tau(p_0) - \tau(q_0) = -t$. Let $$p = p_0 \oplus \bigoplus_{j \in J_1} p_j \oplus \bigoplus_{j \in J_2} p_j, \qquad q = q_0 \oplus \bigoplus_{j \in J_1} q_j \oplus \bigoplus_{j \in J_2} q_j.$$ By construction, $$\tau(p) - \tau(q) = \tau(p_0) - \tau(q_0) + \sum_{j \in J_1} \alpha_j r_j + \sum_{j \in J_2} \frac{\alpha_j r_j}{n_j} = 0.$$ However, $p$ and $q$ are not automorphically conjugate. Indeed, $r_j$ is nonzero for some $j$. If $j \in J_1$, the components of $p$ and $q$ in the central summand $(\mathbb{C},\alpha_j)^{\oplus n_j}$ have different ranks, and $(\mathbb{C},\alpha_j)^{\oplus n_j}$ is invariant under automorphisms because we grouped together all the terms with the same weight, hence $p$ cannot be transformed to $q$ by automorphism. Similarly, if $j \in J_2$, then the components of $p$ and $q$ in $(M_{n_j}(\mathbb{C}),\alpha_j)$ have different ranks, and by Lemma [Lemma 8](#lem: no identical matrix summands){reference-type="ref" reference="lem: no identical matrix summands"}, $(M_{n_j}(\mathbb{C}),\alpha_j)$ must be invariant under automorphisms since there is only one summand with a given dimension and weight. 
Hence, if (3) does not hold, then $\mathcal{M}$ cannot admit quantifier elimination. Conversely, suppose $\mathcal{M}$ has a decomposition satisfying (1)--(3), and we will show that $\mathcal{M}$ admits quantifier elimination. Consider two projections $p = p_0 \oplus \bigoplus_{j \in J_1 \sqcup J_2} p_j$ and $q= q_0 \oplus \bigoplus_{j \in J_1 \sqcup J_2} q_j$ in $\mathcal{M}$ with the same trace. Then $$\tau(p_0) - \tau(q_0) = \sum_{j \in J_1} \alpha_j (\operatorname{rank}(q_j) - \operatorname{rank}(p_j)) + \sum_{j \in J_2} \frac{\alpha_j (\operatorname{rank}(q_j) - \operatorname{rank}(p_j))}{n_j}.$$ Hence, $$\left| \sum_{j \in J_1} \alpha_j (\operatorname{rank}(q_j) - \operatorname{rank}(p_j)) + \sum_{j \in J_2} \frac{\alpha_j (\operatorname{rank}(q_j) - \operatorname{rank}(p_j))}{n_j} \right| = |\tau(p_0) - \tau(q_0)| \leq \alpha_0.$$ By condition (3), this forces $\mathop{\mathrm{rank}}(p_j) = \mathop{\mathrm{rank}}(q_j)$ for all $j \in J_1 \sqcup J_2$. In particular, for $j \in J_1$, $p_j$ and $q_j$ are projections in $(\mathbb{C},\alpha_j)^{\oplus n_j}$ with the same rank and hence conjugate by an automorphism permuting the summands. Moreover, for $j \in J_2$, $p_j$ and $q_j$ are projections in $M_{n_j}(\mathbb{C})$ with the same rank, hence they are unitarily conjugate. Finally, since $p_j$ and $q_j$ have the same trace for $j \in J_1 \sqcup J_2$, we deduce that $p_0$ and $q_0$ have the same trace in $L^\infty[0,1]$ and hence they are conjugate by a measure-preserving transformation. Patching the automorphisms on each summand together, $p$ and $q$ are automorphically conjugate. Thus, by Theorem [Theorem 1](#thm: main QE){reference-type="ref" reference="thm: main QE"}, $\mathcal{M}$ has quantifier elimination. ◻ # Model completeness for $\mathrm{II}_1$ factors {#sec: MC factor proof} Our goal in this section is to prove Theorem [Theorem 2](#thm: main MC){reference-type="ref" reference="thm: main MC"} in the case of a $\mathrm{II}_1$ factor $\mathcal{M}$.
The proof is a much more sophisticated variant of [@Farah2023 Lemma 2.1], which was in turn based on [@Bro:Topological Corollary 6.11]. Our construction is based on random matrix theory. Let $\mathbb{U}_n$ denote the unitary group of $M_n(\mathbb C)$. As a compact Lie group, $\mathbb{U}_n$ has a unique left-invariant probability measure, called the *Haar measure*. By a *Haar random unitary*, we mean a $\mathbb{U}_n$-valued random variable $U^{(n)}$ whose probability distribution is the Haar measure, i.e., $\mathbb{E}[f(U^{(n)})] = \int_{\mathbb{U}_n} f(u)\,d\operatorname{Haar}(u)$ for every continuous function $f$ on $\mathbb{U}_n$. Let $U_1^{(n)}$, $U_2^{(n)}$, $U_3^{(n)}$, and $U_4^{(n)}$ be independent Haar random unitaries. Consider the decomposition $\mathcal{M}\cong M_n(\mathbb{C}) \otimes \mathcal{M}^{1/n}$, where $\mathcal{M}^{1/n}$ is the $1/n$ compression of $\mathcal{M}$ [@MvNROO4 §2.6-2.8]; for each $n$, we fix a decomposition for the entire argument, and write $\mathcal{M}= M_n(\mathbb{C}) \otimes \mathcal{M}^{1/n}$. We set $$\mathbf{X}^{(n)} = (X_1^{(n)},X_2^{(n)},X_3^{(n)}) = (U_1^{(n)} \otimes 1_{\mathcal{M}^{1/n}},U_2^{(n)} \otimes 1_{\mathcal{M}^{1/n}}, U_3^{(n)} \otimes 1_{\mathcal{M}^{1/n}})$$ and $$\mathbf{Y}^{(n)} = (Y_1^{(n)},Y_2^{(n)},Y_3^{(n)}) = ((U_1^{(n)} \oplus U_1^{(n)}) \otimes 1_{\mathcal{M}^{1/2n}}, (U_2^{(n)} \oplus U_2^{(n)}) \otimes 1_{\mathcal{M}^{1/2n}}, (U_3^{(n)} \oplus U_4^{(n)}) \otimes 1_{\mathcal{M}^{1/2n}}).$$ Fix a free ultrafilter $\mathcal{U}$ on $\mathbb{N}$ and consider $\mathbf{X} = [\mathbf{X}^{(n)}]_{n \in \mathbb{N}}$ and $\mathbf{Y} = [\mathbf{Y}^{(n)}]_{n \in \mathbb{N}}$ as random elements of $\mathcal{M}^{\mathcal{U}}$, that is, elements that depend on the particular outcome $\omega$ in the implicit probability space $\Omega$.
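Haar random unitaries can be sampled in practice by applying a QR decomposition to a complex Ginibre matrix and correcting by the phases of $\operatorname{diag}(R)$ (a standard recipe, not specific to this paper). As a sanity check, for a Haar unitary $U \in \mathbb{U}_n$ one has $\mathbb{E}[|\operatorname{Tr} U|^2] = 1$ for every $n$, which the sketch below verifies by Monte Carlo (numpy; sample sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

def haar_unitary(n):
    # QR of a Ginibre matrix alone is NOT Haar-distributed; multiplying by
    # the phases of diag(R) (so that R has positive diagonal) fixes this.
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    dg = np.diagonal(r)
    return q * (dg / np.abs(dg))

n, samples = 8, 4000
traces = np.array([np.trace(haar_unitary(n)) for _ in range(samples)])

u = haar_unitary(n)
print(np.allclose(u.conj().T @ u, np.eye(n)))          # unitarity
print(abs(np.mean(np.abs(traces) ** 2) - 1.0) < 0.1)   # E|Tr U|^2 = 1
```

The phase correction matters: without it, the eigenvalue distribution of the sampled matrices is visibly biased away from Haar.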
## Outline of the proof {#S.Outline} The outline of the argument is as follows: (1) [\[item1\]]{#item1 label="item1"} Almost surely, for every $\inf$-formula $\varphi$, $\varphi^{\mathcal{M}^\mathcal{U}}(\mathbf{Y}) \leq \varphi^{\mathcal{M}^{\mathcal{U}}}(\mathbf{X})$. (2) Almost surely, the commutant $\mathbf{X}' \cap \mathcal{M}^{\mathcal{U}}$ is given by $$\mathcal{A}= \prod_{n \to \mathcal{U}} (\mathbb{C}1_{M_n(\mathbb{C})} \otimes \mathcal{M}^{1/n}) \subseteq \prod_{n \to \mathcal{U}} (M_n(\mathbb{C}) \otimes \mathcal{M}^{1/n}).$$ (3) Almost surely, the commutant $\mathbf{Y}' \cap \mathcal{M}^{\mathcal{U}}$ is given by $$\mathcal{B}= \prod_{n \to \mathcal{U}} [(\mathbb{C}1_{M_n(\mathbb{C})} \oplus \mathbb{C}1_{M_n(\mathbb{C})}) \otimes \mathcal{M}^{1/2n}] \subseteq \prod_{n \to \mathcal{U}} (M_{2n}(\mathbb{C}) \otimes \mathcal{M}^{1/2n}).$$ (4) Consequently, $\mathbf{X}' \cap \mathcal{M}^{\mathcal{U}}$ has trivial center but $\mathbf{Y}' \cap \mathcal{M}^{\mathcal{U}}$ does not and so $\mathbf{X}$ and $\mathbf{Y}$ do not have the same type. (5) By Lemma [Lemma 3](#lem: type MC){reference-type="ref" reference="lem: type MC"} together with (1) and (4), $\mathop{\mathrm{Th}}(\mathcal{M})$ is not model complete. The notation explained above will be fixed throughout the section. Moreover, we continue with the standing assumption that $M_2(\mathcal{M})$ embeds into $\mathcal{M}^{\mathcal{U}}$ for some ultrafilter $\mathcal{U}$, but this will only be used in the proof of [\[item1\]](#item1){reference-type="eqref" reference="item1"}, in Lemma [Lemma 13](#lem: inf formula limit){reference-type="ref" reference="lem: inf formula limit"}. ## Concentration of measure and approximate embedding For step (1), we use the following concentration of measure estimate which is based on the log-Sobolev inequality of Gross [@Gross1975]. 
The application of concentration in random matrix theory is due to Ben Arous and Guionnet [@BAG1997]; see also [@Guionnet2009] and [@AGZ2009 §2.3 and 4.4]. **Proposition 11** (See [@AGZ2009 §4.4 and Appendix F.6] and [@Meckes2019 Theorem 5.16-5.17]). *Let ${f: \mathbb{U}_n^{\times m} \to \mathbb{R}}$ be an $L$-Lipschitz function with respect to $\left\lVert \cdot \right\rVert_2$. Let $\mathbf{U}^{(n)}$ be a random element of $\mathbb{U}_n^{\times m}$ with probability distribution given by the Haar measure. Then for some positive constant $c$ independent of $n$, for all $\delta > 0$, $$\mathbb{P}( |f(\mathbf{U}^{(n)}) - \mathbb{E}[f(\mathbf{U}^{(n)})]| \geq \delta) \leq e^{-cn^2 \delta^2 / L^2}.$$* **Lemma 12**. *For every $3$-variable formula $\varphi$, $$\label{eq: type convergence} \lim_{n \to \mathcal{U}} \varphi^{\mathcal{M}}(\mathbf{X}^{(n)})$$ is almost surely constant. In particular, $\lim_{n \to \mathcal{U}} \mathop{\mathrm{tp}}^{\mathcal{M}}(\mathbf{X}^{(n)})$ is almost surely constant.* *Proof.* To prove the claims, it suffices to show [\[eq: type convergence\]](#eq: type convergence){reference-type="eqref" reference="eq: type convergence"} holds almost surely for each $\varphi$ in a countable dense set of formulas (as usual in measure theory, "almost surely" distributes over countable conjunctions). In fact, the dense set of formulas can be chosen to be Lipschitz. Indeed, a formula will be Lipschitz as long as the atomic formulas and the connectives used are all Lipschitz; the quantifiers do not cause any issue since the supremum of a family of $L$-Lipschitz functions is $L$-Lipschitz. The atomic formulas are traces of non-commutative polynomials, and for every non-commutative polynomial $p$ and $R > 0$, there is some $L$ such that $\tau(p)$ is $L$-Lipschitz with respect to $\left\lVert \cdot \right\rVert_2$ on each operator norm ball of radius $R$.
The connectives in the language are continuous functions $\mathbb{R}^m \to \mathbb{R}$, which can all be approximated on compact sets by Lipschitz functions. So assume that $\varphi$ is an $L$-Lipschitz formula in three variables. Note that $\mathbf{X}^{(n)}$ depends in a Lipschitz manner upon $\mathbf{U}^{(n)} = (U_1^{(n)},U_2^{(n)},U_3^{(n)})$; indeed, the mapping $M_n(\mathbb{C}) \to \mathcal{M}$ given by $u \mapsto u \otimes 1_{\mathcal{M}^{1/n}}$ is $1$-Lipschitz. In particular, $\varphi^{\mathcal{M}}(\mathbf{X}^{(n)})$ is an $L$-Lipschitz function of $\mathbf{U}^{(n)}$. Therefore, applying Proposition [Proposition 11](#prop: concentration){reference-type="ref" reference="prop: concentration"} with $\delta = 1/\sqrt{n}$, $$\mathbb{P}( |\varphi^{\mathcal{M}}(\mathbf{X}^{(n)}) - \mathbb{E}[\varphi^{\mathcal{M}}(\mathbf{X}^{(n)})]| \geq 1/\sqrt{n}) \leq e^{-cn / L^2}.$$ By the Borel-Cantelli lemma, this implies that almost surely $$\lim_{n \to \infty} |\varphi^{\mathcal{M}}(\mathbf{X}^{(n)}) - \mathbb{E}[\varphi^{\mathcal{M}}(\mathbf{X}^{(n)})]| = 0, \quad \text{hence} \quad \lim_{n \to \mathcal{U}} \varphi^{\mathcal{M}}(\mathbf{X}^{(n)}) = \lim_{n \to \mathcal{U}} \mathbb{E}[\varphi^{\mathcal{M}}(\mathbf{X}^{(n)})].\qedhere$$ ◻ **Lemma 13**. *Almost surely, for every $\inf$-formula $\varphi$ in three variables, $$\label{eq: inf formula inequality} \lim_{n \to \mathcal{U}} \varphi^{\mathcal{M}}(\mathbf{Y}^{(n)}) \leq \lim_{n \to \mathcal{U}} \varphi^{\mathcal{M}}(\mathbf{X}^{(n)}).$$* *Proof.* Let $\tilde{\mathbf{X}}^{(n)}$ be defined analogously to $\mathbf{X}^{(n)}$ but with $U_4^{(n)}$ in place of $U_3^{(n)}$, that is, $\tilde{\mathbf{X}}^{(n)} = (U_1^{(n)} \otimes 1_{\mathcal{M}^{1/n}}, U_2^{(n)} \otimes 1_{\mathcal{M}^{1/n}}, U_4^{(n)} \otimes 1_{\mathcal{M}^{1/n}})$.
Since $\tilde{\mathbf{X}}^{(n)}$ has the same probability distribution as $\mathbf{X}^{(n)}$, the almost sure limit of $\mathop{\mathrm{tp}}^{\mathcal{M}}(\tilde{\mathbf{X}}^{(n)})$ agrees with that of $\mathbf{X}^{(n)}$. In the following, we fix an outcome $\omega$ in the probability space such that the limits as $n \to \mathcal{U}$ of the types of $\mathbf{X}^{(n)}$ and $\tilde{\mathbf{X}}^{(n)}$ at $\omega$ agree with the almost sure limit. Let $\varphi$ be an existential formula. Then $\varphi$ can be expressed as $$\varphi(x_1,x_2, x_3) = \inf_{z_1, \dots, z_k} \psi(x_1,x_2, x_3, z_1,\dots,z_k),$$ where $\psi$ is a quantifier-free formula and each $z_j$ ranges over the unit ball. Since $\mathcal{M}^{\mathcal{U}}$ is countably saturated (see §[2.1.2](#sec: model theory prelims){reference-type="ref" reference="sec: model theory prelims"}), there exists some $\mathbf{Z} \in (\mathcal{M}^{\mathcal{U}})_1^k$ such that $\varphi^{\mathcal{M}^{\mathcal{U}}}(\mathbf{X}) = \psi^{\mathcal{M}^{\mathcal{U}}}(\mathbf{X},\mathbf{Z})$. Now because $\mathbf{X}$ and $\tilde{\mathbf{X}}$ have the same type in $\mathcal{M}^{\mathcal{U}}$, there also exists some $\tilde{\mathbf{Z}} \in (\mathcal{M}^{\mathcal{U}})_1^k$ such that $(\mathbf{X},\mathbf{Z})$ and $(\tilde{\mathbf{X}},\tilde{\mathbf{Z}})$ have the same quantifier-free type.
In the hypotheses of Theorem [Theorem 2](#thm: main MC){reference-type="ref" reference="thm: main MC"}, we assumed there is an embedding $i: M_2(\mathcal{M}) \to \mathcal{M}^{\mathcal{V}}$ for some ultrafilter $\mathcal{V}$.[^3] Let $i^{(n)}$ be the corresponding embedding $$i^{(n)}: \mathcal{M}^{1/n} = M_2(\mathcal{M})^{1/2n} \to (\mathcal{M}^{\mathcal{V}})^{1/2n} \cong (\mathcal{M}^{1/2n})^{\mathcal{V}}.$$ Then let $$i^{\mathcal{U}} = \prod_{n \to \mathcal{U}} (\mathop{\mathrm{id}}_{M_n(\mathbb{C})} \otimes i^{(n)}): \mathcal{M}^{\mathcal{U}} = \prod_{n \to \mathcal{U}} (M_n(\mathbb{C}) \otimes \mathcal{M}^{1/n}) \to \prod_{n \to \mathcal{U}} (M_n(\mathbb{C}) \otimes (\mathcal{M}^{1/2n})^{\mathcal{V}}) \cong ((\mathcal{M}^{1/2})^{\mathcal{V}})^{\mathcal{U}}.$$ Consider $i^{\mathcal{U}}(\mathbf{X}) \oplus i^{\mathcal{U}}(\tilde{\mathbf{X}})$ and $i^{\mathcal{U}}(\mathbf{Z}) \oplus i^{\mathcal{U}}(\tilde{\mathbf{Z}})$ as elements of $$M_2(((\mathcal{M}^{1/2})^{\mathcal{V}})^{\mathcal{U}}) = (\mathcal{M}^{\mathcal{V}})^{\mathcal{U}} = (\mathcal{M}^{\mathcal{U}})^{\mathcal{V}}.$$ Note that $(i^{\mathcal{U}}(\mathbf{X}) \oplus i^{\mathcal{U}}(\tilde{\mathbf{X}}), \, i^{\mathcal{U}}(\mathbf{Z}) \oplus i^{\mathcal{U}}(\tilde{\mathbf{Z}}))$ has the same quantifier-free type as $(\mathbf{X},\mathbf{Z})$, and in particular, $$\varphi^{(\mathcal{M}^{\mathcal{U}})^{\mathcal{V}}}(i^{\mathcal{U}}(\mathbf{X}) \oplus i^{\mathcal{U}}(\tilde{\mathbf{X}})) \leq \psi^{(\mathcal{M}^{\mathcal{U}})^{\mathcal{V}}}(i^{\mathcal{U}}(\mathbf{X}) \oplus i^{\mathcal{U}}(\tilde{\mathbf{X}}), \, i^{\mathcal{U}}(\mathbf{Z}) \oplus i^{\mathcal{U}}(\tilde{\mathbf{Z}})) = \varphi^{\mathcal{M}^{\mathcal{U}}}(\mathbf{X}).$$ On the other hand, $$i^{\mathcal{U}}(\mathbf{X}) \oplus i^{\mathcal{U}}(\tilde{\mathbf{X}}) = j(\mathbf{Y}),$$ where $j$ is the diagonal embedding $$j: \mathcal{M}^{\mathcal{U}} \to (\mathcal{M}^{\mathcal{U}})^{\mathcal{V}} \text{ or equivalently } \prod_{n \to \mathcal{U}}
(M_{2n}(\mathbb{C}) \otimes \mathcal{M}^{1/2n}) \to \prod_{n \to \mathcal{U}} (M_{2n}(\mathbb{C}) \otimes (\mathcal{M}^{1/2n})^{\mathcal{V}}).$$ Hence, $$\varphi^{\mathcal{M}^{\mathcal{U}}}(\mathbf{Y}) = \varphi^{(\mathcal{M}^{\mathcal{U}})^{\mathcal{V}}}(j(\mathbf{Y})) = \varphi^{(\mathcal{M}^{\mathcal{U}})^{\mathcal{V}}}(i^{\mathcal{U}}(\mathbf{X}) \oplus i^{\mathcal{U}}(\tilde{\mathbf{X}})) \leq \varphi^{\mathcal{M}^{\mathcal{U}}}(\mathbf{X}).$$ This proves the asserted inequality [\[eq: inf formula inequality\]](#eq: inf formula inequality){reference-type="eqref" reference="eq: inf formula inequality"}. ◻ ## Quantum expanders and spectral gap For steps (2) and (3), we rely on the fact that random unitary tuples give quantum expanders, which was first proved by Hastings [@Hastings2007 top of second page]. A similar result (upgraded to the operator space setting but with a less sharp constant than in the original setting) was shown by Pisier in [@PisierExpanders]. Moreover, a generalization of this result for representations of the unitary group was proved in [@BC2022]. **Theorem 14** (Hastings [@Hastings2007], see also [@PisierExpanders Lemma 1.8]). *Let $U_1^{(n)}$, ..., $U_d^{(n)}$ be independent Haar random unitary matrices, and consider the (random) map $\Phi^{(n)}: M_n(\mathbb{C}) \to M_n(\mathbb{C})$, $$\Phi^{(n)}(A) = \frac{1}{2d} \sum_{j=1}^d (U_j^{(n)} A (U_j^{(n)})^* + (U_j^{(n)})^*AU_j^{(n)}).$$ Let $\lambda_1^{(n)} \geq \lambda_2^{(n)} \geq \dots$ be the eigenvalues of $\Phi^{(n)}$ (here $\lambda_1^{(n)} = 1$ with eigenspace the span of the identity matrix). Then almost surely $$\lim_{n \to \infty} \lambda^{(n)}_2 = \frac{\sqrt{2d - 1}}{d}.$$* *Proof.* The situation above is the Hermitian case with $D = 2d$ in Hastings's terminology. Hastings [@Hastings2007] at the top of the second page asserts convergence in probability of $\lambda_2^{(n)}$. Hastings's arguments in fact yield almost sure convergence.
Indeed, $\liminf_{n \to \infty} \lambda_2^{(n)} \geq \sqrt{2d-1} / d$ follows from a deterministic lower bound on $\lambda_2^{(n)}$ in [@Hastings2007 eq (12)] which gives (using $\lambda_H=2\sqrt{D-1}/D=\sqrt{2d-1}/d$, see [@Hastings2007 (3)]), $\lambda_2^{(n)} \geq \frac{\sqrt{2d-1}}{d}\left(1-O(\ln(\ln(n))/\ln(n))\right)$. For the converse inequality, at the end of §II.F, Hastings shows that for $c >1$, the probability that $\lambda_2^{(n)}$ is greater than $c \sqrt{2d - 1}/ d$ is bounded by $$c^{-(1/4)n^{2/15}} (1 + O(\log(n)n^{-2/15})).$$ Because this is summable, the Borel-Cantelli lemma implies that almost surely we have $\limsup_{n \to \infty} \lambda_2^{(n)} \leq c \lambda_H$. Since $c > 1$ was arbitrary, this yields almost sure convergence. ◻ **Corollary 15**. *Let $U_1^{(n)}$, ..., $U_d^{(n)}$ be independent Haar random unitary matrices. Then almost surely, for sufficiently large $n$, we have for all $A \in M_n(\mathbb{C})$, $$\label{eq: matrix spectral gap} \left\lVert A - \mathop{\mathrm{tr}}_n(A) \right\rVert_2^2 < \frac{d}{(d-1)^2} \sum_{j=1}^d \left\lVert [A,U_j^{(n)}] \right\rVert_2^2.$$* *Proof.* Let $\Phi^{(n)}$ be as in the previous theorem. Note that $\ker(\mathop{\mathrm{tr}}_n)$ is the orthogonal complement of $\mathbb{C}1$, which is the $\lambda_1^{(n)}$-eigenspace of $\Phi^{(n)}$.
Hence, $$\left\lVert \sum_{j=1}^d (U_j^{(n)} (A - \mathop{\mathrm{tr}}_n(A)) (U_j^{(n)})^* + (U_j^{(n)})^* (A - \mathop{\mathrm{tr}}_n(A)) U_j^{(n)}) \right\rVert_2 \leq 2d \lambda_2^{(n)} \left\lVert A - \mathop{\mathrm{tr}}_n(A) \right\rVert_2.$$ Hence, by Lemma [Lemma 5](#lem: spectral gap){reference-type="ref" reference="lem: spectral gap"} / Corollary [Corollary 6](#cor: quantum expanders){reference-type="ref" reference="cor: quantum expanders"}, $$\left\lVert A - \mathop{\mathrm{tr}}_n(A) \right\rVert_2^2 \leq \frac{1}{2d(1 - \lambda_2^{(n)})} \sum_{j=1}^d \left\lVert [A,U_j^{(n)}] \right\rVert_2^2.$$ By Hastings's theorem, almost surely, $$\frac{1}{2d(1 - \lambda_2^{(n)})} \to \frac{1}{2d - 2 \sqrt{2d-1}} = \frac{d + \sqrt{2d-1}}{2(d^2 - 2d + 1)}.$$ For $d > 1$, we have $\sqrt{2d - 1} < d$ (for instance, by squaring both sides). Hence, for sufficiently large $n$, $$\frac{1}{2d(1 - \lambda_2^{(n)})} < \frac{2d}{2(d - 1)^2} = \frac{d}{(d-1)^2}. \qedhere$$ ◻ **Lemma 16**. *Let $\mathcal{A}_n = 1_{M_n(\mathbb{C})} \otimes \mathcal{M}^{1/n}$ and let $\mathcal{A}= \prod_{n \to \mathcal{U}} \mathcal{A}_n$. Then almost surely, for all $a \in \mathcal{M}^{\mathcal{U}}$, $$d(a,\mathcal{A})^2 \leq \frac{3}{4} \sum_{j=1}^3 \left\lVert [X_j,a] \right\rVert_2^2.$$ In particular, $\mathcal{A}= \{\mathbf{X}\}' \cap \mathcal{M}^{\mathcal{U}}$.* *Proof.* By the previous corollary with $d = 3$, almost surely, for sufficiently large $n$, for $A \in M_n(\mathbb{C})$, we have $$\left\lVert A - \mathop{\mathrm{tr}}_n(A) \right\rVert_2^2 \leq \frac{3}{4} \sum_{j=1}^3 \left\lVert [A,U_j^{(n)}] \right\rVert_2^2.$$ Because this is an inequality of linear operators on a Hilbert space, we may tensorize with the identity on $L^2(\mathcal{M}^{1/n})$ (see e.g.
[@GJKEP2023 Lemma 4.18]), to obtain for $a \in M_n(\mathbb{C}) \otimes \mathcal{M}^{1/n} = \mathcal{M}$ that $$\left\lVert a - E_{\mathcal{A}_n}[a] \right\rVert_2^2 \leq \frac{3}{4} \sum_{j=1}^3 \left\lVert [a,X_j^{(n)}] \right\rVert_2^2 \text{ for } a \in \mathcal{M}.$$ Then in the ultralimit, we obtain $$\left\lVert a - E_{\mathcal{A}}[a] \right\rVert_2^2 \leq \frac{3}{4} \sum_{j=1}^3 \left\lVert [a,X_j] \right\rVert_2^2 \text{ for } a \in \mathcal{M}^{\mathcal{U}}$$ since conditional expectations commute with ultraproducts. This is the desired estimate for $\mathcal{A}$. For the final claim, $\mathcal{A}\subseteq \{\mathbf{X}\}' \cap \mathcal{M}^\mathcal{U}$ is immediate from the construction of $\mathbf{X}$, and the opposite inclusion follows from the spectral gap estimate that we just proved. ◻ The analogous statement for $\mathbf{Y}$ is more delicate, and this is where we use the specific way that $\mathbf{X}$ and $\mathbf{Y}$ were constructed from $U_1^{(n)}$, ..., $U_4^{(n)}$; this part of the argument was simplified due to the suggestion of Adrian Ioana and it is a close relative to the proof of [@ioana2021almost Lemma 4.6]. **Lemma 17**. 
*For a II$_1$ factor $\mathcal{M}$, with $\mathcal{B}$ and $\mathbf{Y}$ as defined in §[4.1](#S.Outline){reference-type="ref" reference="S.Outline"}, almost surely, for $b \in \mathcal{M}^{\mathcal{U}}$, $$d(b,\mathcal{B})^2 \leq 7 \sum_{j=1}^3 \left\lVert [Y_j,b] \right\rVert_2^2.$$ In particular, $\{\mathbf{Y}\}' \cap \mathcal{M}^{\mathcal{U}} = \mathcal{B}$.* *Proof.* To prove the estimate for $\mathcal{B}$, the same tensorization and ultralimit argument as in the proof of Lemma [Lemma 16](#lem: first distance estimate){reference-type="ref" reference="lem: first distance estimate"} applies, and so it suffices to show that for $B = \begin{bmatrix} B_{1,1} & B_{1,2} \\ B_{2,1} & B_{2,2} \end{bmatrix} \in M_{2n}(\mathbb{C})$, we have $$\begin{aligned} \left\lVert \begin{bmatrix} B_{1,1} & B_{1,2} \\ B_{2,1} & B_{2,2} \end{bmatrix} - \begin{bmatrix} \mathop{\mathrm{tr}}_n(B_{1,1}) & 0 \\ 0 & \mathop{\mathrm{tr}}_n(B_{2,2}) \end{bmatrix} \right\rVert_2^2 &\leq 7 \sum_{j=1}^3 \left\lVert [B,Y_j^{(n)}] \right\rVert_2^2 \\ &= 7 \Bigg( \sum_{j=1}^2 \left\lVert \begin{bmatrix} U_j^{(n)} B_{1,1} - B_{1,1} U_j^{(n)} & U_j^{(n)} B_{1,2} - B_{1,2} U_j^{(n)} \\ U_j^{(n)} B_{2,1} - B_{2,1} U_j^{(n)} & U_j^{(n)} B_{2,2} - B_{2,2} U_j^{(n)} \end{bmatrix} \right\rVert_2^2 \\ &+ \left\lVert \begin{bmatrix} U_3^{(n)} B_{1,1} - B_{1,1} U_3^{(n)} & U_3^{(n)} B_{1,2} - B_{1,2} U_4^{(n)} \\ U_4^{(n)} B_{2,1} - B_{2,1} U_3^{(n)} & U_4^{(n)} B_{2,2} - B_{2,2} U_4^{(n)} \end{bmatrix} \right\rVert_2^2 \Bigg). \end{aligned}$$ Equivalently, we want to show that $$\begin{aligned} \left\lVert B_{1,1} - \mathop{\mathrm{tr}}_n(B_{1,1}) \right\rVert_2^2 &+ \left\lVert B_{2,2} - \mathop{\mathrm{tr}}_n(B_{2,2}) \right\rVert_2^2 + \left\lVert B_{1,2} \right\rVert_2^2 + \left\lVert B_{2,1} \right\rVert_2^2 \\ &\leq 7 \Big( \sum_{j=1}^3 \left\lVert [B_{1,1},U_j^{(n)}] \right\rVert_2^2 + \sum_{j=1}^3 \left\lVert [B_{2,2},U_j^{(n)}] \right\rVert_2^2 \\ & \quad + \sum_{j=1}^2 \left\lVert [B_{1,2},U_j^{(n)}]
\right\rVert_2^2 + \left\lVert U_3^{(n)} B_{1,2} - B_{1,2} U_4^{(n)} \right\rVert_2^2 \\ & \quad + \sum_{j=1}^2 \left\lVert [B_{2,1},U_j^{(n)}] \right\rVert_2^2 + \left\lVert U_4^{(n)} B_{2,1} - B_{2,1} U_3^{(n)} \right\rVert_2^2 \Big) \end{aligned}$$ From Corollary [Corollary 15](#cor: Hastings spectral gap){reference-type="ref" reference="cor: Hastings spectral gap"}, we already know $$\left\lVert B_{1,1} - \mathop{\mathrm{tr}}_n(B_{1,1}) \right\rVert_2^2 \leq \frac{3}{4} \sum_{j=1}^3 \left\lVert [B_{1,1},U_j^{(n)}] \right\rVert_2^2,$$ and similarly for the $B_{2,2}$ term. Thus, it remains to estimate the $B_{1,2}$ and $B_{2,1}$ terms. We will handle the $B_{1,2}$ term and show that $$\label{eq: desired B12 estimate} \left\lVert B_{1,2} \right\rVert_2^2 \leq 7 \left( \sum_{j=1}^2 \left\lVert [B_{1,2},U_j^{(n)}] \right\rVert_2^2 + \left\lVert U_3^{(n)} B_{1,2} - B_{1,2} U_4^{(n)} \right\rVert_2^2 \right);$$ the argument for the $B_{2,1}$ term is symmetrical. First, we note that by Corollary [Corollary 15](#cor: Hastings spectral gap){reference-type="ref" reference="cor: Hastings spectral gap"} with $d = 2$, we have almost surely for sufficiently large $n$, $$\label{eq.B12} \left\lVert B_{1,2} - \mathop{\mathrm{tr}}_n(B_{1,2}) \right\rVert_2^2 \leq 2 \sum_{j=1}^2 \left\lVert [B_{1,2},U_j^{(n)}] \right\rVert_2^2.$$ Thus, it remains to estimate $\mathop{\mathrm{tr}}_n(B_{1,2})$. 
We note that $$\begin{aligned} |\mathop{\mathrm{tr}}_n(B_{1,2})| \left\lVert U_3^{(n)} - U_4^{(n)} \right\rVert_2 &= \left\lVert U_3^{(n)} \mathop{\mathrm{tr}}_n(B_{1,2}) - \mathop{\mathrm{tr}}_n(B_{1,2}) U_4^{(n)} \right\rVert_2 \\ &\leq \left\lVert U_3^{(n)}(B_{1,2} - \mathop{\mathrm{tr}}_n(B_{1,2})) - (B_{1,2} - \mathop{\mathrm{tr}}_n(B_{1,2})) U_4^{(n)} \right\rVert_2 \\ & \quad + \left\lVert U_3^{(n)} B_{1,2} - B_{1,2} U_4^{(n)} \right\rVert_2 \\ &\leq 2 \left\lVert B_{1,2} - \mathop{\mathrm{tr}}_n(B_{1,2}) \right\rVert_2 + \left\lVert U_3^{(n)} B_{1,2} - B_{1,2} U_4^{(n)} \right\rVert_2. \end{aligned}$$ Note that $\mathbb{E} \mathop{\mathrm{tr}}_n((U_3^{(n)})^* U_4^{(n)}) = 0$, and so by Proposition [Proposition 11](#prop: concentration){reference-type="ref" reference="prop: concentration"}, we have $\mathop{\mathrm{tr}}_n((U_3^{(n)})^* U_4^{(n)}) \to 0$ almost surely, and thus $\left\lVert U_3^{(n)} - U_4^{(n)} \right\rVert_2^2=\left\lVert 1-(U_3^{(n)})^* U_4^{(n)} \right\rVert_2^2 \to 2$ almost surely, and hence is eventually larger than $9/5$. 
Hence, we have that for sufficiently large $n$, $$|\mathop{\mathrm{tr}}_n(B_{1,2})| \leq \sqrt{5/9} \left( 2 \left\lVert B_{1,2} - \mathop{\mathrm{tr}}_n(B_{1,2}) \right\rVert_2 + \left\lVert U_3^{(n)} B_{1,2} - B_{1,2} U_4^{(n)} \right\rVert_2 \right).$$ By the Cauchy-Schwarz inequality and our previous estimate for $\left\lVert B_{1,2} - \mathop{\mathrm{tr}}_n(B_{1,2}) \right\rVert_2$, $$\begin{aligned} |\mathop{\mathrm{tr}}_n(B_{1,2})|^2 &\leq \frac{5}{9} (1 + 1/8) \left( 4 \left\lVert B_{1,2} - \mathop{\mathrm{tr}}_n(B_{1,2}) \right\rVert_2^2 + 8 \left\lVert U_3^{(n)} B_{1,2} - B_{1,2} U_4^{(n)} \right\rVert_2^2 \right) \\ &\leq \frac{5}{8} \left( 4 \cdot 2 \sum_{j=1}^2 \left\lVert [B_{1,2},U_j^{(n)}] \right\rVert_2^2 + 8 \left\lVert U_3^{(n)} B_{1,2} - B_{1,2} U_4^{(n)} \right\rVert_2^2 \right) \\ &= 5 \sum_{j=1}^2 \left\lVert [B_{1,2},U_j^{(n)}] \right\rVert_2^2 + 5 \left\lVert U_3^{(n)} B_{1,2} - B_{1,2} U_4^{(n)} \right\rVert_2^2. \end{aligned}$$ Hence, using this and [\[eq.B12\]](#eq.B12){reference-type="eqref" reference="eq.B12"}, $$\begin{aligned} \left\lVert B_{1,2} \right\rVert_2^2 &= \left\lVert B_{1,2} - \mathop{\mathrm{tr}}_n(B_{1,2}) \right\rVert_2^2 + |\mathop{\mathrm{tr}}_n(B_{1,2})|^2 \\ &\leq \left(2 + 5 \right) \sum_{j=1}^2 \left\lVert [B_{1,2},U_j^{(n)}] \right\rVert_2^2 + 5 \left\lVert U_3^{(n)} B_{1,2} - B_{1,2} U_4^{(n)} \right\rVert_2^2 \\ &\leq 7 \left( \sum_{j=1}^2 \left\lVert [B_{1,2},U_j^{(n)}] \right\rVert_2^2 + \left\lVert U_3^{(n)} B_{1,2} - B_{1,2} U_4^{(n)} \right\rVert_2^2 \right) \end{aligned}$$ as desired. ◻ *Remark 18*.
In the spirit of Goldbring's work on spectral gap and definability [@Goldbring2023spectralgap], our bound on the distance to the relative commutant in Lemma [Lemma 16](#lem: first distance estimate){reference-type="ref" reference="lem: first distance estimate"} shows that $\mathcal{A}= \{\mathbf{X}\}' \cap \mathcal{M}^\mathcal{U}$ is a definable set in parameters $\mathbf{X}$ (see §[2.1.3](#sec: definable){reference-type="ref" reference="sec: definable"} and §[2.3](#sec: sg preliminaries){reference-type="ref" reference="sec: sg preliminaries"}). Similarly, Lemma [Lemma 17](#lem: second distance estimate){reference-type="ref" reference="lem: second distance estimate"} implies that $\{\mathbf{Y}\}' \cap \mathcal{M}^{\mathcal{U}}$ is definable in parameters $\mathbf{Y}$. ## Conclusion of the proof of Theorem [Theorem 2](#thm: main MC){reference-type="ref" reference="thm: main MC"} in the $\mathrm{II}_1$ factor case {#sec: MC factor proof conclusion} *Proof of Theorem [Theorem 2](#thm: main MC){reference-type="ref" reference="thm: main MC"} in the $\mathrm{II}_1$ factor case.* Referring to the outline of the proof stated in §[4.1](#S.Outline){reference-type="ref" reference="S.Outline"}, we have shown (1) in Lemma [Lemma 13](#lem: inf formula limit){reference-type="ref" reference="lem: inf formula limit"}, (2) in Lemma [Lemma 16](#lem: first distance estimate){reference-type="ref" reference="lem: first distance estimate"}, and (3) in Lemma [Lemma 17](#lem: second distance estimate){reference-type="ref" reference="lem: second distance estimate"}. Item (1) shows that, almost surely, $\varphi^{\mathcal{M}^{\mathcal{U}}}(\mathbf{Y}) \leq \varphi^{\mathcal{M}^{\mathcal{U}}}(\mathbf{X})$ for all $\inf$-formulas. If $\mathcal{M}$ were model complete, then $\mathbf{X}$ and $\mathbf{Y}$ would have the same type by Lemma [Lemma 3](#lem: type MC){reference-type="ref" reference="lem: type MC"}.
Hence, to finish the argument, it suffices to show that $\mathbf{X}$ and $\mathbf{Y}$ do not have the same type. In fact, we claim that $\mathbf{X}$ and $\mathbf{Y}$ do not even have the same two-quantifier type. Consider the formula $$\psi(x_1, x_2, x_3) = \inf_{z_1} \left[ 1 - \left\lVert z_1 \right\rVert_2^2 + |\mathop{\mathrm{tr}}(z_1)|^2 + 7 \sum_{j=1}^3 \left\lVert [x_j,z_1] \right\rVert_2^2 + \sup_{z_2} \left[ \left\lVert [z_1,z_2] \right\rVert_2^2 \dot{-} 28 \sum_{j=1}^3 \left\lVert [x_j,z_2] \right\rVert_2^2 \right] \right],$$ where $z_1$ and $z_2$ range over the unit ball. Then the condition $\psi(x_1,x_2, x_3)=0$ attempts to assert the existence of $z_1$ with $\|z_1\|_2=1$ and $\mathop{\mathrm{tr}}(z_1)=0$ such that $z_1$ commutes with $x_j$ for $j = 1,2,3$ and also commutes with every $z_2$ in the relative commutant of $\{x_1,x_2, x_3\}$.[^4] We will find a self-adjoint unitary $z_1$ that commutes with $Y_j$ for $j = 1,2,3$, has zero trace, and commutes with everything in the relative commutant of $\{Y_1,Y_2, Y_3\}$; this will suffice to show that $\psi^{\mathcal{M}^\mathcal{U}}(\mathbf{Y})=0$. Indeed, $\{\mathbf{Y}\}' = \mathcal{B}$ is a direct sum of two copies of $\prod_{n \to \mathcal{U}} \mathcal{M}^{1/2n}$ so it has a central projection $p$ of trace $1/2$. Let $z_1 = 2p - 1$, so that $\left\lVert z_1 \right\rVert_2^2 = 1$ and $\mathop{\mathrm{tr}}(z_1) = 0$ and $\sum_{j=1}^3 \left\lVert [Y_j,z_1] \right\rVert_2 = 0$.
Also for every $z_2$, we have $$\left\lVert [z_1,z_2] \right\rVert_2^2 \leq \left( \left\lVert [z_1,E_{\mathcal{B}}[z_2]] \right\rVert_2 + 2 \left\lVert z_1 \right\rVert d(z_2,\mathcal{B}) \right)^2 = 4 d(z_2,\mathcal{B})^2 \leq 28 \sum_{j=1}^3 \left\lVert [Y_j,z_2] \right\rVert_2^2,$$ because of Lemma [Lemma 17](#lem: second distance estimate){reference-type="ref" reference="lem: second distance estimate"} and the fact that $[z_1,E_{\mathcal{B}}[z_2]] = 0$. On the other hand, we claim that $\psi^{\mathcal{M}^{\mathcal{U}}}(\mathbf{X}) = 1$. Because the ultraproduct $\mathcal{M}^{\mathcal{U}}$ is countably saturated (see §[2.1.2](#sec: model theory prelims){reference-type="ref" reference="sec: model theory prelims"}), there is some $z_1 \in \mathcal{M}^{\mathcal{U}}$ that attains the infimum in the formula. Let $z_1' = E_{\mathcal{A}}[z_1]$. Because $\mathcal{A}$ is a factor, a Dixmier averaging argument (see e.g., [@FHS2013 Lemma 4.2]) shows that $$\left\lVert z_1' \right\rVert_2^2 - |\mathop{\mathrm{tr}}(z_1')|^2 =\|z_1'-\mathop{\mathrm{tr}}(z_1')\|_2^2\leq \sup_{z_2 \in \mathcal{A}_1} \left\lVert [z_1',z_2] \right\rVert_2^2,$$ where $\mathcal{A}_1$ is the unit ball of $\mathcal{A}$. Using choices of $z_2 \in \mathcal{A}$ witnessing this inequality as candidates for the supremum in $\psi$, we conclude $$\psi^{\mathcal{M}^{\mathcal{U}}}(\mathbf{X}) \geq 1 - \left\lVert z_1 \right\rVert_2^2 + |\mathop{\mathrm{tr}}(z_1)|^2 + d(z_1,\mathcal{A})^2 + \left[ \left\lVert z_1' \right\rVert_2^2 - |\mathop{\mathrm{tr}}(z_1')|^2 + 0 \right],$$ where we have also applied the spectral gap inequality from Lemma [Lemma 16](#lem: first distance estimate){reference-type="ref" reference="lem: first distance estimate"} to get the $d(z_1, \mathcal{A})^2$ term.
Noting that $d(z_1,\mathcal{A})^2 = \left\lVert z_1 - z_1' \right\rVert_2^2 = \left\lVert z_1 \right\rVert_2^2 - \left\lVert z_1' \right\rVert_2^2$ and that $\mathop{\mathrm{tr}}(z_1') = \mathop{\mathrm{tr}}(z_1)$, the entire expression evaluates to $1$. For the upper bound $\psi^{\mathcal{M}^{\mathcal{U}}}(\mathbf{X}) \leq 1$, simply take $z_1 = 0$. ◻ *Remark 19*. Our argument also gives another proof of [@Farah2023 Theorem 1], that a $\mathrm{II}_1$ factor never admits quantifier elimination, even without the assumption that $M_2(\mathcal{M})$ embeds into $\mathcal{M}^{\mathcal{U}}$. Indeed, this assumption was only used to relate the existential types of $\mathbf{X}$ and $\mathbf{Y}$. It is immediate from Lemma [Lemma 12](#lem: type convergence){reference-type="ref" reference="lem: type convergence"} that the quantifier-free type of $(U_1^{(n)},U_2^{(n)},U_3^{(n)})$ converges almost surely as $n \to \mathcal{U}$, and the quantifier-free type of $(U_1^{(n)},U_2^{(n)},U_4^{(n)})$ converges to the same limit, hence so does the quantifier-free type of $(U_1^{(n)}\oplus U_1^{(n)},U_2^{(n)} \oplus U_2^{(n)},U_3^{(n)} \oplus U_4^{(n)})$. Therefore, $\mathbf{X}$ and $\mathbf{Y}$ have the same quantifier-free type. In fact, by Voiculescu's asymptotic freeness theory [@Voiculescu1991; @Voiculescu1998], $\mathbf{X}$ is a triple of freely independent Haar unitaries and so is $\mathbf{Y}$. However, $\mathbf{X}$ and $\mathbf{Y}$ do not have the same type, so that $\mathcal{M}$ does not admit quantifier elimination. ## Alternative approaches to the proof Here we remark on several variants of the random matrix argument, showing how a variety of different tools can be used to obtain the same result according to the reader's taste. In particular, we show how Pisier's estimate or property (T) groups could be used in place of Hastings's estimate in our argument, and we will sketch the original argument we had used before Adrian Ioana's suggestions. 
### Replacing $U_1^{(n)}$ and $U_2^{(n)}$ by deterministic quantum expanders {#sec: general expander} Let $U_1^{(n)}$, ..., $U_d^{(n)}$ be a sequence of deterministic matrices such that $U_1^{(n)}$, ..., $U_d^{(n)}$ and their adjoints are a $(2d,\epsilon)$-quantum expander (see §[2.3](#sec: sg preliminaries){reference-type="ref" reference="sec: sg preliminaries"} for background). Let $U_{d+1}^{(n)}$ and $U_{d+2}^{(n)}$ be independent Haar random unitaries. Then the above argument for Theorem [Theorem 2](#thm: main MC){reference-type="ref" reference="thm: main MC"} in the factor case could also be done using $$\mathbf{X}^{(n)} = (U_1^{(n)} \otimes 1_{\mathcal{M}^{1/n}}, \dots, U_d^{(n)} \otimes 1_{\mathcal{M}^{1/n}}, U_{d+1}^{(n)} \otimes 1_{\mathcal{M}^{1/n}}),$$ and $$\mathbf{Y}^{(n)} = ((U_1^{(n)} \oplus U_1^{(n)}) \otimes 1_{\mathcal{M}^{1/2n}}, \dots, (U_d^{(n)} \oplus U_d^{(n)}) \otimes 1_{\mathcal{M}^{1/2n}}, (U_{d+1}^{(n)} \oplus U_{d+2}^{(n)}) \otimes 1_{\mathcal{M}^{1/2n}}).$$ Indeed, concentration of measure (Proposition [Proposition 11](#prop: concentration){reference-type="ref" reference="prop: concentration"} and the proof of Lemma [Lemma 12](#lem: type convergence){reference-type="ref" reference="lem: type convergence"}) still applies to a mixture of deterministic matrices and random Haar unitaries, and hence Lemma [Lemma 13](#lem: inf formula limit){reference-type="ref" reference="lem: inf formula limit"} still goes through. Similarly, the arguments for Lemma [Lemma 16](#lem: first distance estimate){reference-type="ref" reference="lem: first distance estimate"} and [Lemma 17](#lem: second distance estimate){reference-type="ref" reference="lem: second distance estimate"} only use the fact that $U_1^{(n)}$, ..., $U_d^{(n)}$ is an expander and that $\left\lVert U_{d+1}^{(n)} - U_{d+2}^{(n)} \right\rVert_2$ converges to $\sqrt{2}$ as $n \to \infty$. Hence, they would also work for an appropriate choice of constants (depending on $\epsilon$).
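The second ingredient just mentioned, that $\left\lVert U_{d+1}^{(n)} - U_{d+2}^{(n)} \right\rVert_2 \to \sqrt{2}$ for independent Haar unitaries (equivalently, $\mathop{\mathrm{tr}}_n((U_{d+1}^{(n)})^* U_{d+2}^{(n)}) \to 0$, as in the proof of Lemma 17), is easy to observe numerically. The following is a sketch, not part of the proof; `haar_unitary` is our own helper and the tolerances are illustrative only:

```python
import numpy as np

def haar_unitary(n, rng):
    # QR of a complex Ginibre matrix, with phase-corrected diagonal of R
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))

rng = np.random.default_rng(7)
n = 300
U3, U4 = haar_unitary(n, rng), haar_unitary(n, rng)

t = np.trace(U3.conj().T @ U4) / n      # tr_n(U3^* U4), concentrates at 0
dist2 = 2 - 2 * t.real                  # ||U3 - U4||_2^2 w.r.t. the normalized trace

# tr_n of a product of independent Haar unitaries has O(1/n) fluctuations,
# so at n = 300 these quantities sit well inside the loose tolerances below.
assert abs(t) < 0.1
assert abs(dist2 - 2) < 0.2
```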
### Proof from Pisier's estimate Pisier's work [@PisierExpanders Theorem 1.3] shows the existence of separated quantum expanders, and the key estimate is [@PisierExpanders Theorem 4.2]. This result generalizes Hastings's estimate to the operator space setting with a shorter argument, but when restricted to Hastings's original setting it gives a weaker constant. In particular, Pisier's Theorem 4.2 shows that independent Haar random unitaries give quantum expanders with high probability provided that $d$ is sufficiently large. However, existence of $(d,\epsilon)$-quantum expanders for some $d$ and $\epsilon$ is enough to make the argument of §[4.5.1](#sec: general expander){reference-type="ref" reference="sec: general expander"} work. ### Proof from Property (T) groups {#sec: property T} As noted in §[2.3](#sec: sg preliminaries){reference-type="ref" reference="sec: sg preliminaries"}, another way to obtain quantum expanders is from irreducible representations of a property (T) group. The argument of §[4.5.1](#sec: general expander){reference-type="ref" reference="sec: general expander"} goes through in this case (provided that we use an ultrafilter $\mathcal{U}$ on the set of dimensions of irreducible representations rather than all of $\mathbb{N}$). Indeed, Farah used property (T) groups in a similar way to show a lack of quantifier elimination for $\mathrm{II}_1$ factors in [@Farah2023 Lemma 2.1].
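As a numerical companion to the expander discussion above, the map $\Phi^{(n)}$ from Theorem 14 can be represented as an $n^2 \times n^2$ Hermitian matrix and its second eigenvalue compared with $\sqrt{2d-1}/d$; moreover, given the computed $\lambda_2^{(n)}$, the spectral gap inequality behind Corollary 15 holds deterministically for any test matrix. This is a sketch, not part of the proof; the helper names are ours and we assume NumPy:

```python
import numpy as np

def haar_unitary(n, rng):
    # QR of a complex Ginibre matrix, with phase-corrected diagonal of R
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))

n, d = 25, 3
rng = np.random.default_rng(1)
U = [haar_unitary(n, rng) for _ in range(d)]

# Matrix of Phi(A) = (1/2d) sum_j (U_j A U_j^* + U_j^* A U_j) on M_n(C) = C^{n^2},
# using column-stacking vec: vec(A X B) = (B^T kron A) vec(X).  The two kron
# terms are mutual adjoints, so Phi is Hermitian and eigvalsh applies.
Phi = sum(np.kron(u.conj(), u) + np.kron(u.T, u.conj().T) for u in U) / (2 * d)
eigs = np.sort(np.linalg.eigvalsh(Phi))[::-1]
lam2 = eigs[1]
assert abs(eigs[0] - 1) < 1e-8       # top eigenvalue 1, eigenvector vec(identity)
print(lam2, np.sqrt(2 * d - 1) / d)  # lam2 approaches sqrt(5)/3 for large n

# Spectral gap inequality for the sampled unitaries and a random test matrix A:
# ||A - tr_n(A) 1||_2^2 <= (1 / (2d (1 - lam2))) sum_j ||[A, U_j]||_2^2
tr_n = lambda a: np.trace(a) / n
hs2 = lambda a: np.trace(a.conj().T @ a).real / n   # normalized ||.||_2 squared
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
lhs = hs2(A - tr_n(A) * np.eye(n))
rhs = sum(hs2(u @ A - A @ u) for u in U) / (2 * d * (1 - lam2))
assert lhs <= rhs * (1 + 1e-10)
```

The last inequality is exact (not merely asymptotic) once $\lambda_2^{(n)}$ is computed from the sampled unitaries, since it follows from the spectral decomposition of $\Phi^{(n)}$ on the orthogonal complement of the identity.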
### Alternative random matrix argument {#sec: alternative} One natural approach to Theorem [Theorem 2](#thm: main MC){reference-type="ref" reference="thm: main MC"}, which we had originally taken for the proof, is to consider independent Haar random unitaries $U_1^{(n)}$, $U_2^{(n)}$, $V_1^{(n)}$, and $V_2^{(n)}$ and set $$\mathbf{X}^{(n)} = (U_1^{(n)} \otimes 1_{\mathcal{M}^{1/n}},U_2^{(n)} \otimes 1_{\mathcal{M}^{1/n}})$$ and $$\mathbf{Y}^{(n)} = ((U_1^{(n)} \oplus V_1^{(n)}) \otimes 1_{\mathcal{M}^{1/2n}}, (U_2^{(n)} \oplus V_2^{(n)}) \otimes 1_{\mathcal{M}^{1/2n}}).$$ With these operators, the proof of Lemma [Lemma 13](#lem: inf formula limit){reference-type="ref" reference="lem: inf formula limit"} and Lemma [Lemma 16](#lem: first distance estimate){reference-type="ref" reference="lem: first distance estimate"} proceeds in the same way. The challenge is to prove Lemma [Lemma 17](#lem: second distance estimate){reference-type="ref" reference="lem: second distance estimate"}. Considering a $2 \times 2$ block matrix $B$, we need an estimate for the off-diagonal terms $B_{1,2}$ and $B_{2,1}$. Specifically, we want some constant $C$ such that $$\label{eq: desired B12 estimate 2} \left\lVert B_{1,2} \right\rVert_2^2 \leq C \sum_{j=1}^2 \left\lVert U_j^{(n)} B_{1,2} - B_{1,2} V_j^{(n)} \right\rVert_2^2;$$ the argument for the $B_{2,1}$ term is symmetrical. By polar decomposition, $B_{1,2} = W |B_{1,2}|$ where $W$ is unitary and $|B_{1,2}| = (B_{1,2}^* B_{1,2})^{1/2}$. We first want to eliminate the positive part $|B_{1,2}|$ by showing it is close to a multiple of the identity.
Observe that $$\left\lVert B_{1,2}^*U_j^{(n)} - V_j^{(n)} B_{1,2}^* \right\rVert_2 = \left\lVert B_{1,2} (V_j^{(n)})^* - (U_j^{(n)})^*B_{1,2} \right\rVert_2 = \left\lVert U_j^{(n)} B_{1,2} - B_{1,2} V_j^{(n)} \right\rVert_2.$$ We get by the triangle inequality and non-commutative Hölder's inequality that $$\left\lVert [V_j^{(n)}, B_{1,2}^* B_{1,2}] \right\rVert_1 \leq \operatorname{constant} \left\lVert B_{1,2} \right\rVert_2 \left\lVert U_j^{(n)} B_{1,2} - B_{1,2} V_j^{(n)} \right\rVert_2.$$ After applying the Powers-Størmer inequality [@PS1970 Lemma 4.1] to the positive operators $|B_{1,2}|$ and $V_j^{(n)} |B_{1,2}| (V_j^{(n)})^*$, we obtain $$\left\lVert [V_j^{(n)}, |B_{1,2}|] \right\rVert_2^2 \leq \operatorname{constant} \left\lVert B_{1,2} \right\rVert_2 \left\lVert U_j^{(n)} B_{1,2} - B_{1,2} V_j^{(n)} \right\rVert_2.$$ Then because $(V_1^{(n)}, V_2^{(n)})$ is a quantum expander, we get $$\left\lVert |B_{1,2}| - \mathop{\mathrm{tr}}_n(|B_{1,2}|) \right\rVert_2 \leq \operatorname{constant} \left\lVert B_{1,2} \right\rVert_2 \sum_{j=1}^2 \left\lVert U_j^{(n)} B_{1,2} - B_{1,2} V_j^{(n)} \right\rVert_2.$$ Now that $|B_{1,2}|$ is close to $\mathop{\mathrm{tr}}_n(|B_{1,2}|)$, we obtain from the triangle inequality that $$\mathop{\mathrm{tr}}_n(|B_{1,2}|) \left\lVert U_j^{(n)} W - W V_j^{(n)} \right\rVert_2 \leq 2 \left\lVert |B_{1,2}| - \mathop{\mathrm{tr}}_n(|B_{1,2}|) \right\rVert_2 + \left\lVert U_j^{(n)}B_{1,2} - B_{1,2} V_j^{(n)} \right\rVert_2,$$ and hence from our previous estimates $$\mathop{\mathrm{tr}}_n(|B_{1,2}|) \sum_{j=1}^2 \left\lVert U_j^{(n)} W - W V_j^{(n)} \right\rVert_2 \leq \operatorname{constant} \left\lVert B_{1,2} \right\rVert_2 \sum_{j=1}^2 \left\lVert U_j^{(n)} B_{1,2} - B_{1,2} V_j^{(n)} \right\rVert_2.$$ Therefore, in order to estimate $\mathop{\mathrm{tr}}_n(|B_{1,2}|)$ and hence estimate $\left\lVert B_{1,2} \right\rVert_2$, it suffices to show that $$\label{eq: orbital distance estimate} \inf_{W \in \mathbb{U}_n} \left\lVert U_j^{(n)} 
W - W V_j^{(n)} \right\rVert_2 = \inf_{W \in \mathbb{U}_n} \left\lVert W^* U_j^{(n)} W - V_j^{(n)} \right\rVert_2 > \operatorname{constant},$$ for some positive constant. Note that the unitary group is a definable set, and hence the above expression is in fact a definable predicate in the language of tracial von Neumann algebras, or more precisely it is an inf-formula in the expanded language where we add a sort for the unitary group. It also has an almost sure limit as $n \to \infty$ by Lemma [Lemma 12](#lem: type convergence){reference-type="ref" reference="lem: type convergence"}. To show that the limit is positive, we argue using covering numbers and volume estimates, in a similar vein to [@PisierExpanders Lemma 1.9, 1.10, 1.11] or [@JekelModelEntropy Lemma 5.7]. The first ingredient is the fact that for any ball $B(U,\delta)$ in $\mathbb{U}_n$ with respect to $\left\lVert \cdot \right\rVert_2$, we have $$\label{eq: ball measure} \operatorname{Haar}(B(U,\delta)) \leq (C \delta)^{n^2}$$ for some constant $C$. This follows from standard computations of the volume of the unitary group [@ShiZhou2014 Proposition 2.3] and of balls in Euclidean space, and use of the exponential map as in [@Szarek]. There are related estimates for the number of balls needed to cover $\mathbb{U}_n$. For $S \subseteq M_n(\mathbb{C})$, let $K_\delta(S)$ be the minimum number of $\delta$-balls with respect to $\left\lVert \cdot \right\rVert_2$ needed to cover $S$. Then $$\label{eq: unitary covering} \left( \frac{C_1}{\delta} \right)^{n^2} \leq K_\delta(\mathbb{U}_n) \leq \left( \frac{C_2}{\delta} \right)^{n^2},$$ for some positive constants $C_1$ and $C_2$. The upper bound follows similarly to Szarek's work [@Szarek], and the lower bound of [\[eq: unitary covering\]](#eq: unitary covering){reference-type="eqref" reference="eq: unitary covering"} follows from [\[eq: ball measure\]](#eq: ball measure){reference-type="eqref" reference="eq: ball measure"} by a packing argument. 
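For the reader's convenience, the volume-comparison step behind the lower bound on $K_\delta(\mathbb{U}_n)$ can be written out in one line (a standard computation):

```latex
% If U_n is covered by K_delta(U_n) balls of radius delta, comparing Haar
% measures with the ball estimate gives
\begin{align*}
1 = \operatorname{Haar}(\mathbb{U}_n)
  \leq K_\delta(\mathbb{U}_n) \cdot \sup_{U \in \mathbb{U}_n} \operatorname{Haar}(B(U,\delta))
  \leq K_\delta(\mathbb{U}_n)\, (C\delta)^{n^2},
\end{align*}
% so that
\begin{align*}
K_\delta(\mathbb{U}_n) \geq (C\delta)^{-n^2} = \left( \frac{1/C}{\delta} \right)^{n^2},
\qquad \text{i.e., one may take } C_1 = 1/C.
\end{align*}
```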
Now we can verify [\[eq: orbital distance estimate\]](#eq: orbital distance estimate){reference-type="eqref" reference="eq: orbital distance estimate"}. In the following, we use the norm $\max(\left\lVert A_1 \right\rVert_2,\dots,\left\lVert A_m \right\rVert_2)$ for $A \in M_n(\mathbb{C})^m$, and the induced metric on the unitary group $\mathbb{U}_n^{\times m}$. With this metric, the balls are products of balls in each of the individual spaces. Consider the map $$F \colon \mathbb{U}_n^{\times 3} \to \mathbb{U}_n^{\times 4} \colon (U_1,U_2,W) \mapsto (U_1,U_2,WU_1W^*,WU_2W^*).$$ Elementary bounds with the triangle inequality and Hölder's inequality show that $F$ is $3$-Lipschitz. Therefore, $$K_\delta(F(\mathbb{U}_n^{\times 3})) \leq K_{\delta/3}(\mathbb{U}_n^{\times 3}) \leq \left( \frac{3C_2}{\delta} \right)^{3n^2}.$$ Let $$\varphi(U_1,U_2,V_1,V_2) = \inf_{W \in \mathbb{U}_n} \max_{j=1,2} \left\lVert W U_j W^* - V_j \right\rVert_2.$$ If $\varphi(U_1,U_2,V_1,V_2) < \delta$, then $(U_1,U_2,V_1,V_2)$ is in the $\delta$-neighborhood of $F(\mathbb{U}_n^{\times 3})$. Hence, $$K_{2\delta}(\{\varphi< \delta\}) \leq K_\delta(F(\mathbb{U}_n^{\times 3})) \leq \left( \frac{3C_2}{\delta} \right)^{3n^2}.$$ Using [\[eq: ball measure\]](#eq: ball measure){reference-type="eqref" reference="eq: ball measure"}, $$\operatorname{Haar}^{\times 4}(\{\varphi< \delta\}) \leq \left( 2C \delta \right)^{4n^2} K_{2 \delta}(\{\varphi< \delta\}) \leq \left( 2C \delta \right)^{4n^2} \left( \frac{3C_2}{\delta} \right)^{3n^2} = \left( C_3 \delta \right)^{n^2}$$ for some constant $C_3$. Hence, if $\delta < 1/C_3$, then the Haar measure (i.e. the probability) of the event $\varphi(U_1,U_2,V_1,V_2) < \delta$ is summable in $n$, and therefore by the Borel-Cantelli lemma we obtain that almost surely the $\liminf$ in [\[eq: orbital distance estimate\]](#eq: orbital distance estimate){reference-type="eqref" reference="eq: orbital distance estimate"} is bounded below by $1/C_3$. 
### Proof from strong convergence of tensors of Haar unitaries {#sec: BC proof} Here we consider the same setup as §[4.5.4](#sec: alternative){reference-type="ref" reference="sec: alternative"}, and explain an argument for [\[eq: desired B12 estimate 2\]](#eq: desired B12 estimate 2){reference-type="eqref" reference="eq: desired B12 estimate 2"} communicated to us by Ben Hayes. With $U_j^{(n)}$ and $V_j^{(n)}$ as above, let $$T^{(n)} = \begin{bmatrix} U_1^{(n)} \otimes 1 - 1 \otimes V_1^{(n)} \\ U_2^{(n)} \otimes 1 - 1 \otimes V_2^{(n)} \end{bmatrix} \in M_{2n,n}(\mathbb{C}) \otimes M_n(\mathbb{C}).$$ The work of Bordenave and Collins [@BC2023 Theorem 9.2][^5] implies that almost surely the spectrum of $(T^{(n)})^* T^{(n)}$ in $M_n(\mathbb{C}) \otimes M_n(\mathbb{C})$ converges in Hausdorff distance to that of $T^* T$, where $$T = \begin{bmatrix} u_1 \otimes 1 - 1 \otimes u_1 \\ u_2 \otimes 1 - 1 \otimes u_2 \end{bmatrix} \in M_{2,1}(\mathrm{C}_{\operatorname{red}}^*(F_2) \otimes_{\min} \mathrm{C}_{\operatorname{red}}^*(F_2));$$ here $\otimes_{\min}$ denotes the minimal/spatial $\mathrm{C}^*$ tensor product and $u_1$ and $u_2$ are the unitaries corresponding to the generators of the free group $F_2$. Now we observe that $$T^*T = 4(1 \otimes 1) - \sum_{j=1}^2 (u_j \otimes u_j^* + u_j^* \otimes u_j).$$ Luckily, the spectral measure of $T^*T$ is something that we can compute using computations in $F_2$ rather than $F_2 \times F_2$. Indeed, letting $g_1$ and $g_2$ be the generators of $F_2$, consider the injective group homomorphism $F_2 \to F_2 \times F_2$ given by $g_j \mapsto (g_j,g_j^{-1})$; this induces a map $\mathrm{C}_{\operatorname{red}}^*(F_2) \to \mathrm{C}_{\operatorname{red}}^*(F_2 \times F_2) \cong \mathrm{C}_{\operatorname{red}}^*(F_2) \otimes_{\min} \mathrm{C}_{\operatorname{red}}^*(F_2)$ satisfying $u_j \mapsto u_j \otimes u_j^*$. 
Hence, the spectrum of $T^*T$ is the same as that of $4 - u_1 - u_1^* - u_2 - u_2^*$ in $\mathrm{C}_{\operatorname{red}}^*(F_2)$. By a well-known computation of Kesten [@Kesten1959] and McKay [@McKay1981], the spectrum is $[4 - 2 \sqrt{3},4+2 \sqrt{3}]$. In particular, $$\lim_{n \to \infty} \min \operatorname{Spec}(T_j^{(n)} T_j^{(n)}) = 4 - 2 \sqrt{3}.$$ This implies that for every $\delta > 0$, when $n$ is sufficiently large, we have for $B_{1,2} \in M_n(\mathbb{C})$, $$\left\lVert B_{1,2} \right\rVert_2^2 \leq \frac{1}{4 - 2 \sqrt{3} - \delta} \left\lVert T^{(n)}( B_{1,2}) \right\rVert_2^2 = \frac{1}{4 - 2 \sqrt{3} - \delta} \sum_{j=1}^2 \left\lVert U_j^{(n)} B_{1,2} - B_{1,2} V_j^{(n)} \right\rVert_2^2,$$ which implies [\[eq: desired B12 estimate 2\]](#eq: desired B12 estimate 2){reference-type="eqref" reference="eq: desired B12 estimate 2"}. This method also gives a sharper and more explicit bound in [\[eq: orbital distance estimate\]](#eq: orbital distance estimate){reference-type="eqref" reference="eq: orbital distance estimate"}. Indeed, for a unitary matrix $W$, we have $$(4 - 2 \sqrt{3} - \delta) = (4 - 2 \sqrt{3} - \delta) \left\lVert W \right\rVert_2^2 \leq \sum_{j=1}^2 \left\lVert U_j^{(n)} W - W V_j^{(n)} \right\rVert_2^2 = \sum_{j=1}^2 \left\lVert W^* U_j^{(n)} W - V_j^{(n)} \right\rVert_2^2.$$ Hence, we have almost surely $$\liminf_{n \to \infty} \inf_{W \in \mathbb{U}_n} \sum_{j=1}^2 \left\lVert W^* U_j^{(n)} W - V_j^{(n)} \right\rVert_2^2 \geq 4 - 2 \sqrt{3}.$$ The exact $\liminf$ and $\limsup$ of the formula $\inf_{W \in \mathbb{U}_n} \sum_{j=1}^2 \left\lVert W^* U_j^{(n)} W - V_j^{(n)} \right\rVert_2^2$ as $n \to \infty$ are not yet known, and this would be an interesting example for the development of model-theoretic random matrix theory proposed in [@JekelModelEntropy §6]. 
# Model completeness for tracial von Neumann algebras {#sec: MC general case} It is not surprising that extending the conclusion of our main result from II$_1$ factors to arbitrary type II$_1$ von Neumann algebras is straightforward. ## Model completeness and direct sums {#sec: MC direct sums} **Lemma 20**. *If the theory of a tracial von Neumann algebra $\mathcal{M}$ is model-complete, then the theory of every direct summand of $\mathcal{M}$ is model-complete.* *Proof.* Let $\mathcal{M}$ be a tracial von Neumann algebra which decomposes as a direct sum $\mathcal{M}_1 \oplus \mathcal{M}_2$ with weights $\alpha$ and $1 - \alpha$. Assume the theory of $\mathcal{M}$ is model complete; we will prove that the theory of each one of $\mathcal{M}_1$ and $\mathcal{M}_2$ is model complete. Let $\mathcal{N}_1 \equiv \mathcal{M}_1$ and $\mathcal{N}_2 \equiv \mathcal{M}_2$, and $\iota_1: \mathcal{M}_1 \to \mathcal{N}_1$ and $\iota_2: \mathcal{M}_2 \to \mathcal{N}_2$ be trace-preserving $*$-homomorphisms; we need to show that $\iota_1$ and $\iota_2$ are elementary. Let $\mathcal{N}$ be the direct sum of $\mathcal{N}_1$ and $\mathcal{N}_2$ with weights $\alpha$ and $1 - \alpha$. Note that by [@FG2023], $\mathcal{N}\equiv \mathcal{M}$ since the theory of $\mathcal{N}$ is uniquely determined by the theories of the direct summands. By model completeness of $\mathcal{M}$, the map $\iota = \iota_1 \oplus \iota_2: \mathcal{M}\to \mathcal{N}$ is elementary. Let $\varphi(x_1,\dots,x_n)$ be a $\mathcal{L}_{\mathop{\mathrm{tr}}}$-formula, and we will show that $\varphi^{\mathcal{N}_1}(\iota_1(\mathbf{a})) = \varphi^{\mathcal{M}_1}(\mathbf{a})$ for ${\mathbf{a} = (a_1,\dots,a_n) \in \mathcal{M}_1^n}$. 
Because prenex formulas are dense in the space of all formulas [@BYBHU2008 §6], assume without loss of generality that $$\varphi(x_1,\dots,x_n) = \inf_{y_1} \sup_{y_2} \dots \inf_{y_{2m-1}} \sup_{y_{2m}} F(\mathop{\mathrm{Re}}\mathop{\mathrm{tr}}(p_1(\mathbf{x},\mathbf{y})),\dots,\mathop{\mathrm{Re}}\mathop{\mathrm{tr}}(p_k(\mathbf{x},\mathbf{y})))$$ where $y_1$, ..., $y_{2m}$ are variables in the unit ball, $F: \mathbb{R}^k \to \mathbb{R}$ is continuous, and $p_1$, ..., $p_k$ are non-commutative $*$-polynomials. Define $$\psi(x_1,\dots,\tilde{x}_n,z) = \inf_{y_1} \sup_{y_2} \dots \inf_{y_{2m-1}} \sup_{y_{2m}} F\left(\frac{1}{\alpha} \mathop{\mathrm{Re}}\mathop{\mathrm{tr}}(p_1(\mathbf{x},z\mathbf{y})),\dots, \frac{1}{\alpha} \mathop{\mathrm{Re}}\mathop{\mathrm{tr}}(p_k(\mathbf{x},z\mathbf{y})) \right),$$ where $z\mathbf{y} = (zy_1,\dots,zy_{2m})$. Observe that $$\varphi^{\mathcal{M}_1}(a_1,\dots,a_n) = \psi^{\mathcal{M}}(a_1 \oplus 0,\dots,a_n \oplus 0, 1 \oplus 0),$$ because $(1 \oplus 0)(y \oplus y') = y \oplus 0$. Similarly, $$\varphi^{\mathcal{N}_1}(\iota_1(a_1),\dots,\iota_1(a_n)) = \psi^{\mathcal{N}}(\iota(a_1 \oplus 0),\dots,\iota(a_n \oplus 0), \iota(1 \oplus 0)).$$ The mapping $\iota: \mathcal{M}\to \mathcal{N}$ is elementary, and hence $$\psi^{\mathcal{N}}(\iota(a_1 \oplus 0),\dots,\iota(a_n \oplus 0), \iota(1 \oplus 0)) = \psi^{\mathcal{M}}(a_1 \oplus 0,\dots,a_n \oplus 0, 1 \oplus 0).$$ This shows $\varphi^{\mathcal{N}_1}(\iota_1(\mathbf{a})) = \varphi^{\mathcal{M}_1}(\mathbf{a})$, so the mapping $\iota_1$ is elementary as desired. The same argument applies to $\iota_2$. Therefore, $\mathcal{M}_1$ and $\mathcal{M}_2$ are model complete. ◻ *Remark 21*. Similarly, if $\mathcal{M}= (\mathcal{M}_1,\alpha) \oplus (\mathcal{M}_2,1-\alpha)$ and if $\mathop{\mathrm{Th}}(\mathcal{M})$ admits quantifier elimination, then $\mathop{\mathrm{Th}}(\mathcal{M}_j)$ admits quantifier elimination for $j = 1$, $2$. 
To see this, consider $n$-tuples $\mathbf{x}$ and $\mathbf{y}$ in $\mathcal{M}_1$ that have the same quantifier-free type in $\mathcal{M}_1$ (i.e. they have the same $*$-moments). Then $(x_1 \oplus 0, \dots, x_n \oplus 0, 1 \oplus 0)$ and $(y_1 \oplus 0, \dots, y_n \oplus 0,1\oplus 0)$ have the same quantifier-free type in $\mathcal{M}$. Therefore, by Lemma [Lemma 2](#lem: type QE){reference-type="ref" reference="lem: type QE"}, they have the same type in $\mathcal{M}$. As we saw above, for each formula $\varphi$, there exists $\psi$ such that $\varphi^{\mathcal{M}_1}(x_1,\dots,x_n) = \psi^{\mathcal{M}}(x_1 \oplus 0, \dots, x_n \oplus 0, 1 \oplus 0)$ (and similarly for the $y_j$'s), and hence $\mathbf{x}$ and $\mathbf{y}$ have the same type in $\mathcal{M}_1$, and so $\mathop{\mathrm{Th}}(\mathcal{M}_1)$ has quantifier elimination by Lemma [Lemma 2](#lem: type QE){reference-type="ref" reference="lem: type QE"}. ## Conclusion of the proof of Theorem [Theorem 2](#thm: main MC){reference-type="ref" reference="thm: main MC"} {#conclusion-of-the-proof-of-theorem-thm-main-mc} By Lemma [Lemma 20](#lem: direct sum){reference-type="ref" reference="lem: direct sum"}, because we already proved Theorem [Theorem 2](#thm: main MC){reference-type="ref" reference="thm: main MC"} in the case of $\mathrm{II}_1$ factors, we can eliminate any direct summands that are $\mathrm{II}_1$ factors satisfying that $M_2(\mathcal{M}_\omega)$ embeds into $\mathcal{M}_\omega^{\mathcal{U}}$. It remains to handle the diffuse part of the direct integral decomposition for $\mathcal{M}$, which actually turns out to be much easier. **Lemma 22**. *Let $(\mathcal{M},\tau) = \int_{[0,1]} (\mathcal{M}_\omega,\tau_\omega)\,d\omega$, where $\mathcal{M}_\omega$ is a separable $\mathrm{II}_1$ factor such that $M_2(\mathcal{M}_\omega)$ embeds into $\mathcal{M}_\omega^{\mathcal{U}}$. Then $\mathcal{M}$ is not model complete.* *Proof.* Let $\mathcal{N}= L^\infty[0,1] \otimes \mathcal{M}$. 
Note that $$\mathcal{N}= \int_{[0,1]^2} \mathcal{M}_\omega \,d\omega \,d\omega'.$$ Thus, the distribution of $\mathop{\mathrm{Th}}(\mathcal{M}_\omega)$ over $[0,1]^2$ is the same as the distribution of the $\mathop{\mathrm{Th}}(\mathcal{M}_\omega)$ over $[0,1]$. Therefore, it follows from [@FG2023 Theorem 2.3] that $\mathcal{M}\equiv \mathcal{N}$. Moreover, $\mathcal{N}\oplus \mathcal{N}\cong \mathcal{N}$. Now fix an ultrafilter $\mathcal{U}$ on $\mathbb{N}$ and note that $M_2(\mathcal{M}_\omega)$ embeds into $\mathcal{M}_\omega^{\mathcal{U}}$ for all $\omega$, hence $M_2(\mathcal{N})$ embeds into $\mathcal{N}^{\mathcal{U}}$. Consider a trace preserving $*$-homomorphism $$\mathcal{N}\to \mathcal{N}\oplus \mathcal{N}\to M_2(\mathcal{N}) \to \mathcal{N}^{\mathcal{U}},$$ where the first map is an isomorphism and the second map is the block diagonal embedding. Then $1 \oplus 0$ is central in $\mathcal{N}\oplus \mathcal{N}$ but $1 \oplus 0$ is not central in $M_2(\mathcal{N})$. Hence, our homomorphism does not map $Z(\mathcal{N})$ into $Z(\mathcal{N}^{\mathcal{U}})$, so it is not elementary. ◻ *Proof of Theorem [Theorem 2](#thm: main MC){reference-type="ref" reference="thm: main MC"}.* Suppose $\mathcal{M}$ has a direct integral decomposition where $\mathcal{M}_\omega$ is a $\mathrm{II}_1$ factor such that $M_2(\mathcal{M}_\omega)$ embeds $\mathcal{M}_{\omega}^{\mathcal{U}}$, for $\omega$ in some positive measure set. If the positive measure set has an atom, then $\mathcal{M}$ has a direct summand $\mathcal{N}$ which is a $\mathrm{II}_1$ factor such that $M_2(\mathcal{N})$ embeds into $\mathcal{N}^{\mathcal{U}}$. The results of the previous section show that $\mathcal{N}$ is not model complete, hence by Lemma [Lemma 20](#lem: direct sum){reference-type="ref" reference="lem: direct sum"}, $\mathcal{M}$ is not model complete. 
If there is no atom in our positive measure set, then $\mathcal{M}$ has a direct summand of the form $\mathcal{N}= \int_{[0,1]} \mathcal{N}_\alpha\,d\alpha$ where the integral occurs with respect to Lebesgue measure and $\mathcal{N}_\alpha$ is a $\mathrm{II}_1$ factor such that $M_2(\mathcal{N}_\alpha)$ embeds into $\mathcal{N}_\alpha^{\mathcal{U}}$. Hence, by Lemma [Lemma 22](#lem: diffuse MC){reference-type="ref" reference="lem: diffuse MC"}, $\mathcal{N}$ is not model complete, and so by Lemma [Lemma 20](#lem: direct sum){reference-type="ref" reference="lem: direct sum"}, $\mathcal{M}$ is not model complete. ◻ *Remark 23*. A similar argument recovers the result of the first author that the theory of any separable tracial von Neumann algebra with a type $\mathrm{II}_1$ summand never admits quantifier elimination [@Farah2023]. An algebra satisfying the assumptions of Theorem [Theorem 2](#thm: main MC){reference-type="ref" reference="thm: main MC"} either has a $\mathrm{II}_1$ factor as a direct summand, or it has a type $\mathrm{II}_1$ direct summand with diffuse center. If there is a type $\mathrm{II}_1$ direct summand $\mathcal{N}$, then $\mathop{\mathrm{Th}}(\mathcal{N})$ does not have quantifier elimination by Remark [Remark 19](#rem: II1 QE){reference-type="ref" reference="rem: II1 QE"} and hence by Remark [Remark 21](#rem: QE direct sums){reference-type="ref" reference="rem: QE direct sums"}, $\mathop{\mathrm{Th}}(\mathcal{M})$ does not have quantifier elimination. On the other hand, suppose $\mathcal{N}$ is a type $\mathrm{II}_1$ direct summand of $\mathcal{M}$ with diffuse center. In this case, we argue similarly to Lemma [Lemma 7](#lem: eliminate diffuse matrix term){reference-type="ref" reference="lem: eliminate diffuse matrix term"}; $\mathcal{N}$ has a central projection of trace $1/2$, and also a non-central projection of trace $1/2$, and hence $\mathop{\mathrm{Th}}(\mathcal{N})$ does not have quantifier elimination. 
So by Remark [Remark 21](#rem: QE direct sums){reference-type="ref" reference="rem: QE direct sums"}, $\mathop{\mathrm{Th}}(\mathcal{M})$ does not have quantifier elimination. # Further remarks {#sec: further remarks} ## Topological properties {#sec: topological} In this section, we study the topological properties of the set of theories that admit quantifier elimination (and those that are model complete), and in particular we will see that quantifier elimination is generic among purely atomic tracial von Neumann algebras (though a lack of quantifier elimination is generic for tracial von Neumann algebras in general). There is a natural topology on the space of complete theories, where basic open sets have the form $$\{ \mathrm{T} \models |\varphi_1 - c_1| < \epsilon_1, \dots, |\varphi_k - c_k| < \epsilon_k \}$$ for some finite list of formulas $\varphi_1$, ..., $\varphi_k$, real numbers $c_1$, ..., $c_k$, and positive $\epsilon_1$, ..., $\epsilon_k$. In fact, this topology can be understood in functional analytic terms as follows. The sentences of a fixed language $\mathcal{L}$ form a real algebra that has a natural norm (see the last sentence of [@Fa:STCstar Definition D.2.4]). A complete theory in language $\mathcal{L}$ is naturally identified with a bounded homomorphism from this algebra into $\mathbb{R}$ ([@Fa:STCstar Definition D.2.8]), and the topology on the space of complete theories then agrees with the weak-$*$ topology. The space of theories is metrizable whenever the language $\mathcal{L}$ is separable (which is the case for tracial von Neumann algebras). Moreover, if $\mathcal{C}$ is a class of $\mathcal{L}$-structures that is closed under elementary equivalence, then $\mathcal{C}$ is axiomatizable if and only if $\mathop{\mathrm{Th}}_{\mathcal{C}}=\{\mathop{\mathrm{Th}}(\mathcal{M}): \mathcal{M}\in \mathcal{C}\}$ is a closed set and every model of some theory in $\mathop{\mathrm{Th}}_{\mathcal{C}}$ belongs to $\mathcal{C}$. 
A very basic observation is that quantifier elimination and model completeness define sets that are neither open nor closed in the space of theories of tracial von Neumann algebras. **Proposition 24**. *The following sets of theories of tracial von Neumann algebras are not closed (equivalently, the corresponding classes are not axiomatizable):* (1) *Those which admit quantifier elimination.* (2) *Those which do not admit quantifier elimination.* (3) *Those which are model complete.* (4) *Those which are not model complete.* *Proof.* We will use the following observation several times throughout the proof: For any two tracial von Neumann algebras $\mathcal{M}_0$ and $\mathcal{M}_1$, the theory of $\mathcal{M}_\alpha = (\mathcal{M}_0,1-\alpha) \oplus (\mathcal{M}_1,\alpha)$ depends continuously on $\alpha \in [0,1]$. This idea was used in [@GH2016 Proposition 5.1]. To prove it, one can show by induction that for each formula $\varphi$, the quantity $\varphi^{\mathcal{M}_\alpha}(x_1 \oplus x_1', \dots, x_n \oplus x_n')$ is continuous in $\alpha$ uniformly over $x_j$ and $x_j'$ in the unit ball. Now we proceed to the main claims: (1) $M_n(\mathbb{C})$ admits quantifier elimination. Fixing an ultrafilter $\mathcal{U}$ on the natural numbers, $\lim_{n \to \mathcal{U}} \mathop{\mathrm{Th}}(M_n(\mathbb{C})) = \mathop{\mathrm{Th}}(\prod_{n \to \mathcal{U}} M_n(\mathbb{C}))$, which does not admit quantifier elimination by [@Farah2023] since the matrix ultraproduct is a $\mathrm{II}_1$ factor.[^6] (2) Consider $(M_n(\mathbb{C}),1-\alpha) \oplus (\mathcal{R},\alpha)$. This does not admit quantifier elimination when $\alpha > 0$ but does admit quantifier elimination when $\alpha = 0$. (3) This follows from the same argument as (1). (4) This follows from the same argument as (2) since $(M_n(\mathbb{C}),1-\alpha) \oplus (\mathcal{R},\alpha)$ is not model complete by Theorem [Theorem 2](#thm: main MC){reference-type="ref" reference="thm: main MC"}.  
◻ Although these sets are not closed, the set of theories that admit quantifier elimination and the set of theories that are model complete are $G_\delta$ sets (countable intersections of open sets). This holds in general for separable metric languages. **Proposition 25**. *Let $\mathcal{L}$ be a separable language of metric structures. Both the set of complete theories that admit quantifier elimination and the set of complete theories that are model complete are $G_\delta$ sets.* *Proof.* First, consider quantifier elimination. Since the language is separable, choose for each $n$ a countable dense set $\mathcal{F}_n$ of formulas in $n$ variables (if there are multiple sorts, then we choose such a set for each tuple of sorts). For each $n$ and $\varphi\in \mathcal{F}_n$, for each $k \geq 1$, let $G_{\varphi,k}$ be the set of complete theories $\mathrm{T}$ such that there exists a quantifier-free formula $\psi$ such that $\mathrm{T}$ models $$\sup_{x_1,\dots,x_n} |\varphi(x_1,\dots,x_n) - \psi(x_1,\dots,x_n)| < \frac{1}{k}.$$ Note that $G_{\varphi,k}$ is open. Moreover, $\bigcap_{\varphi, k} G_{\varphi,k}$ is precisely the set of theories that admit quantifier elimination (since being able to approximate a dense set of formulas by quantifier-free formulas is equivalent to being able to approximate all formulas by quantifier-free formulas). The argument for model completeness works the same way, using the characterization of model completeness in Lemma [Lemma 3](#lem: type MC){reference-type="ref" reference="lem: type MC"} (1). ◻ *Remark 26*. It is not difficult to see that the analog of Proposition [Proposition 25](#P.Gdelta){reference-type="ref" reference="P.Gdelta"} holds for theories in classical (discrete) languages in general, and Proposition [Proposition 24](#P.dense){reference-type="ref" reference="P.dense"} is true for theories in certain classical languages. 
Thus the passage from discrete to continuous model theory does not increase the descriptive complexity of the sets of theories considered in the present paper. This stands in stark contrast with the descriptive complexity of sets of types omissible in the model of a theory in a countable language where the move from discrete to continuous results in a bizarre increase in complexity; see the introduction to [@farah2018omitting]. Despite being a $G_\delta$ set, the set of theories of tracial von Neumann algebras that admit quantifier elimination is meager. Rather than proving this directly, we will prove that algebras with spectral gap are meager. Note that Hastings's result (quoted as Theorem [Theorem 14](#thm: Hastings){reference-type="ref" reference="thm: Hastings"} and Corollary [Corollary 15](#cor: Hastings spectral gap){reference-type="ref" reference="cor: Hastings spectral gap"} above) shows that matrix algebras $M_n(\mathbb{C})$ have spectral gap, and in fact we can take $d = 3$ and $C$ uniformly bounded ($C < 3/4$ for sufficiently large $n$). **Lemma 27**. *Let $d \in \mathbb{N}$ and $C > 0$. The property that $\mathcal{M}^{\mathcal{U}}$ has $(C,d)$ spectral gap (for some or every ultrafilter on $\mathbb{N}$) is axiomatizable for tracial von Neumann algebras. Moreover, the complete theories of tracial von Neumann algebras with $(C,d)$-spectral gap form a closed set with dense complement.* *Proof.* By [@FHS2013 Lemma 4.2], the center $Z(\mathcal{M})$ is definable relative to the theory of tracial von Neumann algebras. Hence, similar to [@FHLRTVW2021 Definition 3.2.3; Lemma 3.2.5] in the $\mathrm{C}^*$-algebra case, $d(y,Z(\mathcal{M}))^2$ is a definable predicate (or it is a formula in an expanded language with a sort added for $Z(\mathcal{M})$). 
Thus, consider the sentence $$\inf_{x_1,\dots,x_d \in B_1^{\mathcal{M}}} \sup_{y \in B_1^{\mathcal{M}}} \left( d(y,Z(\mathcal{M}))^2 \dot{-} C \sum_{j=1}^d \left\lVert [x_j,y] \right\rVert_2^2 \right) = 0.$$ If $\mathcal{M}$ has $(C,d)$-spectral gap, then $\mathcal{M}$ satisfies this sentence. Moreover, if $\mathcal{M}$ satisfies this sentence and is countably saturated (for instance, if $\mathcal{M}$ is a nontrivial ultraproduct), then the infimum over $x_1$, ..., $x_d$ above is achieved, and hence $\mathcal{M}$ has $(C,d)$-spectral gap. In particular, $\mathcal{M}^{\mathcal{U}}$ has $(C,d)$-spectral gap if and only if it satisfies this sentence. Moreover, the set of theories of tracial von Neumann algebras with $(C,d)$-spectral gap is equal to the set of theories satisfying this sentence, and hence is closed. To see that its complement is dense, fix $\mathcal{M}$. Then $(\mathcal{M},1-\alpha) \oplus (\mathcal{R},\alpha)$ does not have spectral gap for $\alpha \in (0,1)$, because $\mathcal{R}$ does not have spectral gap, and $\mathop{\mathrm{Th}}((\mathcal{M},1-\alpha) \oplus (\mathcal{R},\alpha)) \to \mathcal{M}$ as $\alpha \to 0$, hence, $\mathop{\mathrm{Th}}(\mathcal{M})$ is a limit of theories which lack $(C,d)$-spectral gap. ◻ **Proposition 28**. *The following properties define meager sets in the space of complete theories of tracial von Neumann algebras.* (1) *Von Neumann algebras with spectral gap.* (2) *Type $\mathrm{I}$ tracial von Neumann algebras.* (3) *Tracial von Neumann algebras whose theory admits quantifier elimination.* *Proof.* (1) We saw above that for each $C$ and $d$, the $(C,d)$-spectral gap property defines a closed set whose complement is dense. Letting $C$ and $d$ range over $\mathbb{N}$, we see that von Neumann algebras with spectral gap yield a meager $F_\sigma$ set in the space of theories. \(2\) Each matrix algebra has $(2,2)$-spectral gap for some uniform constant $C$. 
Moreover, it is straightforward to check that a direct integral of tracial von Neumann algebras with $(2,2)$-spectral gap also has $(2,2)$-spectral gap. Hence, all type $\mathrm{I}$ tracial von Neumann algebras have $(2,2)$-spectral gap, and so their theories are contained in the meager set from (1), and in fact in a co-dense closed set. \(3\) By [@Farah2023 Theorem 1] or by Remark [Remark 23](#rem: alternate QE proof){reference-type="ref" reference="rem: alternate QE proof"}, only type $\mathrm{I}$ tracial von Neumann algebras can admit quantifier elimination. ◻ Since quantifier elimination is a meager property in the space of all theories, it makes sense to ask how generic it is within a smaller ambient space that removes some of the most obvious obstructions. The tracial von Neumann algebras $\mathcal{M}$ whose theories admit quantifier elimination come in two varieties, those with a nontrivial $L^\infty[0,1]$ summand and those without. First, those $\mathcal{M}$ with a nontrivial diffuse direct summand have finite-dimensional atomic part; this is because otherwise there would be an atomic projection whose weight would be smaller than the weight of the diffuse part, violating condition (3) of Proposition [Proposition 9](#prop: QE obstructions){reference-type="ref" reference="prop: QE obstructions"}. If we fix $k$ and numbers $n_1$, ..., $n_k$, and consider an algebra of the form $$\mathcal{M}= (L^\infty[0,1],\alpha_0) \oplus \bigoplus_{j=1}^k (M_{n_j}(\mathbb{C}),\alpha_j),$$ then set of weights $(\alpha_0,\dots,\alpha_k)$ such that $\mathcal{M}$ admits quantifier elimination is an open subset of the $k$-simplex, as we can see from Proposition [Proposition 10](#prop: QE linear inequality){reference-type="ref" reference="prop: QE linear inequality"}. 
However, it is not dense since $\alpha_j < \alpha_0$ for some $j \geq 1$ precludes quantifier elimination by Proposition [Proposition 9](#prop: QE obstructions){reference-type="ref" reference="prop: QE obstructions"} (3). Second, we have purely atomic $\mathcal{M}$. As noted in [@FG2023 §3], purely atomic algebras can be parameterized by $\rho_{\mathcal{M}}(m,n)$ for $m, n \geq 1$, where for each $m \in \mathbb{N}$, the values $\rho_{\mathcal{M}}(m,1) \geq \rho_{\mathcal{M}}(m,2) \geq \dots$ are the weights of the central projections associated to $M_m(\mathbb{C})$ terms in the direct sum decomposition. If there are only finitely many $M_m(\mathbb{C})$ terms, we set $\rho_{\mathcal{M}}(m,n) = 0$ for $n$ larger than the number of such terms. Let $$\Delta = \left\{ (\alpha_{m,n})_{m,n\geq 1}: \alpha_{m,n} \geq \alpha_{m,n+1} \geq 0, \sum_{m,n \geq 1} \alpha_{m,n} = 1 \right\}.$$ We view $\Delta$ as a metric space with respect to the $L^1$ metric. Note that on $\Delta$, the topology of pointwise convergence agrees with the $L^1$ topology; however, $\Delta$ is not compact because elements of $\Delta$ can converge pointwise to zero. We note the following consequence of Farah and Ghasemi's work. **Lemma 29**. *For $\vec{\alpha} = (\alpha_{m,n})_{m,n \geq 1}$, let $$\mathcal{M}_{\vec{\alpha}} = \bigoplus_{m,n \geq 1} (M_m(\mathbb{C}), \alpha_{m,n})$$ be the associated purely atomic tracial von Neumann algebra. The map $\vec{\alpha} \mapsto \mathop{\mathrm{Th}}(\mathcal{M}_{\vec{\alpha}})$ is a homeomorphism onto its image.* *Proof.* [@FG2023 Theorem 2.3] implies that the theory of $\mathcal{M}_{\vec{\alpha}}$ depends continuously on the weights $\vec{\alpha}$. The construction in [@FG2023 Lemma 3.2] shows that $\alpha_{m,n} = \rho_{\mathcal{M}_{\vec{\alpha}}}(m,n)$ can be recovered from $\mathop{\mathrm{Th}}(\mathcal{M}_{\vec{\alpha}})$. 
In particular, one can see from this that for each $m, n \geq 1$, if $\vec{\alpha} \in \Delta$ and the theory of $\mathcal{N}$ is sufficiently close to that of $\mathcal{M}_{\vec{\alpha}}$, then $\rho_{\mathcal{N}}(m,n)$ will be close to $\alpha_{m,n}$. ◻ **Proposition 30**. *The set of $\vec{\alpha} \in \Delta$ such that $\mathcal{M}_{\vec{\alpha}}$ admits quantifier elimination is comeager.* *Proof.* Let $\mathcal{M}_{\vec{\alpha},k} = \bigoplus_{1 \leq m,n \leq k} (M_m(\mathbb{C}),\alpha_{m,n}) \subseteq \mathcal{M}_{\vec{\alpha}}$, and let $\mathcal{M}_{\vec{\alpha},k}^{\perp}$ be the direct sum over the complementary indices. Let $\tau_{\vec{\alpha}}$ be the trace on $\mathcal{M}_{\vec{\alpha}}$. Let $$\epsilon_k(\vec{\alpha}) = \min \{ |\tau_{\vec{\alpha}}(p) - \tau_{\vec{\alpha}}(q)|: p, q \text{ projections in } \mathcal{M}_{\vec{\alpha},k} \text{ with } \tau(p) \neq \tau(q) \}.$$ Let $$G_k = \{ \vec{\alpha} \in \Delta: 1 - \sum_{1 \leq m,n \leq k} \alpha_{m,n} < \epsilon_k(\vec{\alpha}) \}.$$ Note that $G_k$ is open in $\Delta$ and it contains the set of $\vec{\alpha}$ such that $\vec{\alpha}$ is supported on $\{1,\dots,k\}^2$. Therefore, for each $\ell \in \mathbb{N}$, the set $\bigcup_{k \geq \ell} G_k$ is open (as a union of open sets) and dense in $\Delta$ (because it contains finitely supported $\vec{\alpha}$). Therefore, $$G = \bigcap_{\ell \in \mathbb{N}} \bigcup_{k \geq \ell} G_k$$ is comeager. Furthermore, $$F = \{\vec{\alpha} \in \Delta: \alpha_{m,n} \text{ are linearly independent over } \mathbb{Q}\}$$ is comeager because there are only countably many finite $\mathbb{Q}$-linear combinations to test, and each linear combination being nonzero is an open condition. Hence, $F \cap G$ is comeager. It remains to check that if $\vec{\alpha} \in F \cap G$, then $\mathcal{M}_{\vec{\alpha}}$ admits quantifier elimination. Let $p$ and $q$ be projections of the same trace in $\mathcal{M}_{\vec{\alpha}}$. 
For each $k$, write $p = p_k \oplus p_k^{\perp}$ and $q = q_k \oplus q_k^{\perp}$ with respect to the decomposition $\mathcal{M}_{\vec{\alpha}} = \mathcal{M}_{\vec{\alpha},k} \oplus \mathcal{M}_{\vec{\alpha},k}^{\perp}$. If $\vec{\alpha} \in G_k$, then by construction of $G_k$, we have $$|\tau_{\vec{\alpha}}(p_k) - \tau_{\vec{\alpha}}(q_k)| = |\tau_{\vec{\alpha}}(p_k^{\perp}) - \tau_{\vec{\alpha}}(q_k^{\perp})| < \epsilon_k(\vec{\alpha}),$$ which forces $\tau_{\vec{\alpha}}(p_k) = \tau_{\vec{\alpha}}(q_k)$ by definition of $\epsilon_k(\vec{\alpha})$. Now let $p_{m,n}$ and $q_{m,n}$ be the components of $p$ and $q$ respectively in the direct summand $(M_m(\mathbb{C}),\alpha_{m,n})$. Because the $\alpha_{m,n}$'s are linearly independent over $\mathbb{Q}$, the condition that $\tau_{\vec{\alpha}}(p_k) = \tau_{\vec{\alpha}}(q_k)$ forces that $\mathop{\mathrm{tr}}_m(p_{m,n}) = \mathop{\mathrm{tr}}_m(q_{m,n})$ for $m, n \leq k$. Because $\vec{\alpha} \in G$, we know that $\vec{\alpha} \in G_k$ for infinitely many $k$, and thus $\mathop{\mathrm{tr}}_m(p_{m,n}) = \mathop{\mathrm{tr}}_m(q_{m,n})$ for all $m, n$, which means that $p$ and $q$ are conjugate by an automorphism. Therefore, by Theorem [Theorem 1](#thm: main QE){reference-type="ref" reference="thm: main QE"}, $\mathcal{M}_{\vec{\alpha}}$ admits quantifier elimination. ◻ ## Matrix amplification and approximate embedding {#sec: amplification} In Theorem [Theorem 2](#thm: main MC){reference-type="ref" reference="thm: main MC"}, we assumed the condition that $M_2(\mathcal{M})$ embeds into $\mathcal{M}^{\mathcal{U}}$. While this condition holds automatically if $\mathcal{M}$ is Connes-embeddable or if $\mathcal{M}$ is existentially closed, we do not know if it holds for all $\mathrm{II}_1$ factors. In this section, we investigate this problem by giving a series of equivalent conditions. 
This expands upon the results about the "universal fundamental group" by Goldbring and Hart [@GH2017 Proposition 4.17].[^7] Recall that for $\mathrm{II}_1$ factors $\mathcal{M}$ and $\mathcal{N}$, the statement $\mathop{\mathrm{Th}}_\exists(\mathcal{M}) = \mathop{\mathrm{Th}}_\exists(\mathcal{N})$ means that for every $\inf$-sentence $\varphi$, we have $\varphi^{\mathcal{M}} = \varphi^{\mathcal{N}}$. An equivalent statement is that for some ultrafilter $\mathcal{U}$, we have that $\mathcal{M}$ embeds into $\mathcal{N}^{\mathcal{U}}$ and $\mathcal{N}$ embeds into $\mathcal{M}^{\mathcal{U}}$. For instance, when $\mathcal{M}$ is Connes-embeddable, then $\mathop{\mathrm{Th}}_\exists(\mathcal{M}) = \mathop{\mathrm{Th}}_\exists(\mathcal{R})$. We will show that the condition of $M_2(\mathcal{M})$ embedding into $\mathcal{M}^{\mathcal{U}}$ is equivalent to asking whether $\mathop{\mathrm{Th}}_\exists(\mathcal{M}^t) = \mathop{\mathrm{Th}}_\exists(\mathcal{M})$ for some or all $t \in (0,\infty) \setminus \{1\}$, where $\mathcal{M}^t$ is the $t$th compression/amplification of $\mathcal{M}$. **Proposition 31**. *Let $\mathcal{M}$ be a $\mathrm{II}_1$ factor. Then $$\begin{aligned} \lim_{t \to \infty} \mathop{\mathrm{Th}}_\exists(\mathcal{M}^t) &= \mathop{\mathrm{Th}}_\exists(\mathcal{M}\otimes \mathcal{R}), \\ \lim_{t \to 0} \mathop{\mathrm{Th}}_\exists(\mathcal{M}^t) & \text{ exists.}\end{aligned}$$* *Proof.* Consider an existential sentence $\varphi= \inf_{x_1,\dots,x_n} \psi(x_1,\dots,x_n)$ where $\psi$ is a quantifier-free formula and $x_j$ ranges over the unit ball. We can express $$\psi(\mathbf{x}) = F(\mathop{\mathrm{Re}}\mathop{\mathrm{tr}}(p_1(\mathbf{x})), \dots, \mathop{\mathrm{Re}}\mathop{\mathrm{tr}}(p_k(\mathbf{x})))$$ for some non-commutative $*$-polynomials $p_j$. By rescaling the input variables to $F$, assume without loss of generality that $\left\lVert p_j(\mathbf{x}) \right\rVert \leq 1$ when $x_1$, ..., $x_n$ are in the unit ball. 
Let $\omega_F$ be the modulus of continuity of $F$ with respect to the $\ell^\infty$-norm on $[-1,1]^k$. Suppose that $s < t$. Write $$t = ms + \epsilon, \text{ where } m \in \mathbb{N}\text{ and } \epsilon \in [0,s).$$ Let $\iota_{s,t}: \mathcal{M}^s \to \mathcal{M}^t$ be the non-unital $*$-homomorphism $$\iota_{s,t}(x) = x^{\oplus m} \oplus 0_{\mathcal{M}^\epsilon}.$$ Then $$\begin{aligned} \psi^{\mathcal{M}^t}(\iota_{s,t}(\mathbf{x})) &= F(\mathop{\mathrm{Re}}\mathop{\mathrm{tr}}(p_1(\iota_{s,t}(\mathbf{x}))), \dots, \mathop{\mathrm{Re}}\mathop{\mathrm{tr}}(p_k(\iota_{s,t}(\mathbf{x})))) \\ &= F\left(\frac{ms}{t} \mathop{\mathrm{Re}}\mathop{\mathrm{tr}}(p_1(\mathbf{x})), \dots, \frac{ms}{t} \mathop{\mathrm{Re}}\mathop{\mathrm{tr}}(p_k(\mathbf{x})) \right) \\ &\leq F(\mathop{\mathrm{Re}}\mathop{\mathrm{tr}}(p_1(\mathbf{x})), \dots, \mathop{\mathrm{Re}}\mathop{\mathrm{tr}}(p_k(\mathbf{x}))) + \omega_F\left( 1 - \frac{ms}{t} \right) \\ &\leq \psi^{\mathcal{M}^s}(\mathbf{x}) + \omega_F(s/t).\end{aligned}$$ This implies that $$\varphi^{\mathcal{M}^t} \leq \varphi^{\mathcal{M}^s} + \omega_F(s/t).$$ For each $s \in (0,\infty)$, we have $$\limsup_{t \to \infty} \varphi^{\mathcal{M}^t} \leq \liminf_{t \to \infty} \left[ \varphi^{\mathcal{M}^s} + \omega_F(s/t) \right] = \varphi^{\mathcal{M}^s}.$$ Hence, $\limsup_{t \to \infty} \varphi^{\mathcal{M}^t} \leq \inf_{s \in (0,\infty)} \varphi^{\mathcal{M}^s}$, which implies that $$\lim_{t \to \infty} \varphi^{\mathcal{M}^t} = \inf_{t \in (0,\infty)} \varphi^{\mathcal{M}^t}.$$ Similarly, for $t \in (0,\infty)$, $$\liminf_{s \to 0} \varphi^{\mathcal{M}^s} = \liminf_{s \to 0} \left[ \varphi^{\mathcal{M}^s} + \omega_F(s/t) \right] \geq \varphi^{\mathcal{M}^t},$$ hence, $\liminf_{s \to 0} \varphi^{\mathcal{M}^s} \geq \sup_{t \in (0,\infty)} \varphi^{\mathcal{M}^t}$, so $$\lim_{t \to 0} \varphi^{\mathcal{M}^t} = \sup_{t \in (0,\infty)} \varphi^{\mathcal{M}^t}.$$ It remains to show that the limit as $t \to \infty$ agrees with 
$\mathop{\mathrm{Th}}_\exists(\mathcal{M}\otimes \mathcal{R})$. First, note that $\mathcal{M}$ embeds into $\mathcal{M}\otimes \mathcal{R}$, so also $\mathcal{M}^t$ embeds into $(\mathcal{M}\otimes \mathcal{R})^t = \mathcal{M}\otimes \mathcal{R}^t \cong \mathcal{M}\otimes \mathcal{R}$. Thus, for each $\inf$-sentence $\varphi$, we have $\varphi^{\mathcal{M}\otimes \mathcal{R}} \leq \lim_{t \to \infty} \varphi^{\mathcal{M}^t}$. For the opposite inequality, note that $\mathcal{M}\otimes \mathcal{R}$ embeds into $\mathcal{N}= \prod_{n \to \mathcal{U}} \mathcal{M}\otimes M_n(\mathbb{C})$. Hence, $$\varphi^{\mathcal{M}\otimes \mathcal{R}} \geq \varphi^{\mathcal{N}} = \lim_{n \to \infty} \varphi^{M_n(\mathcal{M})} = \lim_{t \to \infty} \varphi^{\mathcal{M}^t}. \qedhere$$ ◻ **Proposition 32**. *Let $\mathcal{M}$ be a $\mathrm{II}_1$ factor. Then the following are equivalent:* (1) *$M_2(\mathcal{M})$ embeds into $\mathcal{M}^{\mathcal{U}}$ for some ultrafilter $\mathcal{U}$.* (2) *$\alpha \mathcal{M}\oplus (1 - \alpha) \mathcal{M}$ embeds into $\mathcal{M}^{\mathcal{U}}$ for some $\alpha \in (0,1)$ and some ultrafilter $\mathcal{U}$.* (3) *$\mathop{\mathrm{Th}}_\exists(\mathcal{M}) = \mathop{\mathrm{Th}}_\exists(\mathcal{M}\otimes \mathcal{R})$.* (4) *$\mathop{\mathrm{Th}}_\exists(\mathcal{M}^t) = \mathop{\mathrm{Th}}_\exists(\mathcal{M})$ for all $t \neq 1$.* (5) *$\mathop{\mathrm{Th}}_\exists(\mathcal{M}^t) = \mathop{\mathrm{Th}}_\exists(\mathcal{M})$ for some $t \neq 1$.* (6) *$\lim_{t \to \infty} \mathop{\mathrm{Th}}_\exists(\mathcal{M}^t) = \lim_{t \to 0} \mathop{\mathrm{Th}}_\exists(\mathcal{M}^t)$.* (7) *There exists a McDuff $\mathrm{II}_1$ factor $\mathcal{N}$ such that $\mathop{\mathrm{Th}}_\exists(\mathcal{M}) = \mathop{\mathrm{Th}}_\exists(\mathcal{N})$.* (8) *There exists a Gamma $\mathrm{II}_1$ factor $\mathcal{N}$ such that $\mathop{\mathrm{Th}}_\exists(\mathcal{M}) = \mathop{\mathrm{Th}}_\exists(\mathcal{N})$.* *Proof.* (1) $\implies$ (2) because $(1/2) \mathcal{M}\oplus (1/2) 
\mathcal{M}$ is contained in $M_2(\mathcal{M})$. \(2\) $\implies$ (3). Let $\iota: \alpha \mathcal{M}\oplus (1 - \alpha) \mathcal{M}\to \mathcal{M}^{\mathcal{U}}$ be an embedding where $\mathcal{U}$ is an ultrafilter on index set $I$. Let $p = \iota(1 \oplus 0)$. Let $\Delta: \mathcal{M}\to \alpha \mathcal{M}\oplus (1 - \alpha) \mathcal{M}$ be the diagonal map. Then $\Delta(\mathcal{M})$ commutes with $p$ and hence $\operatorname{Ad}_p \circ \iota \circ \Delta$ gives an embedding $\mathcal{M}\to p(\mathcal{M}^{\mathcal{U}})p$ since $\mathcal{M}$ is a $\mathrm{II}_1$ factor. Now $p$ lifts to a family of projections $(p_i)_{i \in I}$ with $\mathop{\mathrm{tr}}^{\mathcal{M}}(p_i) = \mathop{\mathrm{tr}}^{\mathcal{M}^{\mathcal{U}}}(p) = \alpha$. Since $p_i$ are unitarily conjugate to some fixed projection $p_0 \in \mathcal{M}$, $p \mathcal{M}^{\mathcal{U}} p = \prod_{i \to \mathcal{U}} p_i \mathcal{M}p_i = (p_0 \mathcal{M}p_0)^{\mathcal{U}}$. In other words, $\mathcal{M}$ embeds into an ultraproduct of $\mathcal{M}^{\alpha}$. This also implies that $\mathcal{M}^t$ embeds into an ultraproduct of $\mathcal{M}^{t\alpha}$ for each $t \in (0,\infty)$. Hence, $\mathcal{M}^{1/\alpha^k}$ embeds into an ultraproduct of $\mathcal{M}$ for each $k \in \mathbb{N}$. Thus, for an $\inf$-formula $\varphi$, $$\varphi^{\mathcal{M}\otimes \mathcal{R}} = \lim_{t \to \infty} \varphi^{\mathcal{M}^t} = \lim_{k \to \infty} \varphi^{\mathcal{M}^{1/\alpha^k}} \leq \varphi^{\mathcal{M}} \leq \varphi^{\mathcal{M}\otimes \mathcal{R}}.$$ Hence, $\mathop{\mathrm{Th}}_{\exists}(\mathcal{M}) = \mathop{\mathrm{Th}}_{\exists}(\mathcal{M}\otimes \mathcal{R})$. \(3\) $\implies$ (1). Note $M_2(\mathcal{M})$ embeds into $\mathcal{M}\otimes \mathcal{R}$, which embeds into $\mathcal{M}^{\mathcal{U}}$. \(3\) $\iff$ (4). 
When (3) holds, $\mathcal{M}$ and $\mathcal{M}\otimes \mathcal{R}$ are embeddable into each other's ultrapowers, which implies that $\mathcal{M}^t$ and $(\mathcal{M}\otimes \mathcal{R})^t \cong \mathcal{M}\otimes \mathcal{R}^t \cong \mathcal{M}\otimes \mathcal{R}$ are embeddable into each other's ultrapowers. Hence, $\mathop{\mathrm{Th}}_{\exists}(\mathcal{M}^t) = \mathop{\mathrm{Th}}_{\exists}(\mathcal{M}\otimes \mathcal{R}) = \mathop{\mathrm{Th}}_{\exists}(\mathcal{M})$ for all $t \in (0,\infty)$. Conversely, if $\mathop{\mathrm{Th}}_{\exists}(\mathcal{M}) = \mathop{\mathrm{Th}}_{\exists}(\mathcal{M}^t)$ for all $t$, then we have $\mathop{\mathrm{Th}}_{\exists}(\mathcal{M}\otimes \mathcal{R}) = \lim_{t \to \infty} \mathop{\mathrm{Th}}_{\exists}(\mathcal{M}^t) = \mathop{\mathrm{Th}}_{\exists}(\mathcal{M})$. \(4\) $\implies$ (5) is immediate. \(5\) $\implies$ (6). As in the previous lemma or in (2) $\implies$ (3), since $\mathcal{M}^t$ and $\mathcal{M}$ embed into each other's ultrapowers, the same holds for $\mathcal{M}^{t^k}$ for each $k \in \mathbb{Z}$, which implies (6). \(6\) $\implies$ (4). This follows immediately from the fact that for any $\inf$-sentence $\varphi$, we have $\lim_{t \to \infty} \varphi^{\mathcal{M}^t} = \inf_{t \in (0,\infty)} \varphi^{\mathcal{M}^t}$ and $\lim_{t \to 0} \varphi^{\mathcal{M}^t} = \sup_{t \in (0,\infty)} \varphi^{\mathcal{M}^t}$, which we showed in the proof of the previous lemma. \(3\) $\implies$ (7) $\implies$ (8) is immediate by definition. \(8\) $\implies$ (2). By assumption $\mathcal{M}$ embeds into $\mathcal{N}^{\mathcal{U}}$. Since $\mathcal{N}$ has property Gamma, there exists a projection $p \in \mathcal{N}^{\mathcal{U}}$ that commutes with the image of $\mathcal{M}$ (provided that ultrafilter $\mathcal{U}$ is on a sufficiently large index set). 
Then $\mathcal{M}$ and $p$ generate a copy of $\alpha \mathcal{M}\oplus (1 - \alpha) \mathcal{M}$ in $\mathcal{N}^{\mathcal{U}}$, where $\alpha = \mathop{\mathrm{tr}}^{\mathcal{N}^{\mathcal{U}}}(p)$. Finally, $\mathcal{N}^{\mathcal{U}}$ embeds into $\mathcal{M}^{\mathcal{V}}$ for some ultrafilter $\mathcal{V}$, hence (2) holds. ◻ *Remark 33*. By the usual arguments, if $\mathcal{M}$ is separable, then it suffices to consider some or all free ultrafilters on $\mathbb{N}$ for conditions (1) and (2). *Remark 34*. Similar to the proof of (6) $\implies$ (4), we can see that if $\mathcal{M}^t$ embeds into $\mathcal{M}^{\mathcal{U}}$ for some $t > 1$, then $\mathcal{M}\models \mathop{\mathrm{Th}}_{\exists}(\mathcal{M}\otimes \mathcal{R})$, and hence $\mathop{\mathrm{Th}}_{\exists}(\mathcal{M}) = \mathop{\mathrm{Th}}_{\exists}(\mathcal{M}\otimes \mathcal{R})$. Therefore, if these conditions fail, then $\mathcal{M}^s$ does not embed into $(\mathcal{M}^t)^{\mathcal{U}}$ for any $s > t$. Thus, all the existential theories of $\mathcal{M}^t$ for $t \in \mathbb{R}_+$ are distinct and the first-order fundamental group is trivial. Compare [@GH2017 Proposition 4.16], which showed that if the first-order fundamental group of $\mathcal{M}$ is not all of $\mathbb{R}_+$, then it is countable, and hence there are continuum many pairwise non-elementarily-equivalent matrix amplifications of $\mathcal{M}$. The same argument of course applies to the fundamental group for the existential theory. Note also from [@GH2016 Proposition 5.1] that a negative solution to Connes embedding immediately implies the existence of continuum many existential theories of type $\mathrm{II}_1$ algebras (but not factors). ## The non-tracial setting {#sec: nontracial} What major elementary classes of self-adjoint operator algebras admit quantifier elimination? 
The question for $\mathrm{C}^*$-algebras (both unital and non-unital) has been resolved in [@EFKV2017], and the results of the present paper, together with [@Farah2023], resolve the question in the case of tracial von Neumann algebras. What remains is the case of von Neumann algebras with arbitrary faithful normal states, in particular type $\mathrm{III}$ von Neumann algebras. Several different ultraproducts exist for non-tracial von Neumann algebras, and the relationship between them is fairly well understood [@AH2014; @Ando2023]. Metric languages for the non-tracial setting were given in [@Dabrowski2019; @GHScorrespondences], and this is an area of ongoing research. One of the difficulties in the non-tracial setting is that quantifier elimination and model completeness could a priori depend on the choice of state. This makes a difference even for finite-dimensional algebras. Indeed, consider $M_3(\mathbb{C})$ with a state given by $\varphi(A) = \mathop{\mathrm{tr}}_3(AH)$ where $H = \mathop{\mathrm{diag}}(h_1,h_2,h_3)$, and suppose $h_1 > h_2 > h_3$. Let $t \in (0,1)$ be such that $h_2 = th_1 + (1-t)h_3$, and let $$P = \begin{bmatrix} t & 0 & t^{1/2}(1-t)^{1/2} \\ 0 & 0 & 0 \\ t^{1/2}(1-t)^{1/2} & 0 & 1 - t \end{bmatrix}, \qquad Q = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix}.$$ Then $P$ and $Q$ are projections and $\varphi(P) = \varphi(Q)$, but they are not conjugate by a state-preserving automorphism of $M_3(\mathbb{C})$. Since the unit ball of $M_3(\mathbb C)$ is compact, it is isomorphic to its ultraproducts, and the theory of $(M_3(\mathbb{C}),\varphi)$ does not admit quantifier elimination. 
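This obstruction is easy to verify numerically. The following is a minimal sketch in which the eigenvalues $h_1 > h_2 > h_3$ are our own illustrative choice, scaled so that $\mathrm{tr}_3(H) = 1$; it confirms that $P$ is a projection with $\varphi(P) = \varphi(Q) = h_2/3$:

```python
import math

# Illustrative (assumed) eigenvalues h1 > h2 > h3 for H = diag(h1, h2, h3),
# scaled so the normalized trace tr_3(H) = 1, making phi(A) = tr_3(AH) a state.
h1, h2, h3 = 1.5, 1.0, 0.5
t = (h2 - h3) / (h1 - h3)          # solves h2 = t*h1 + (1 - t)*h3
s = math.sqrt(t * (1 - t))

P = [[t, 0.0, s], [0.0, 0.0, 0.0], [s, 0.0, 1.0 - t]]
Q = [[0.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 0.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def phi(A):
    # phi(A) = tr_3(A H) with H diagonal, i.e. a weighted diagonal sum
    return (A[0][0] * h1 + A[1][1] * h2 + A[2][2] * h3) / 3

PP = matmul(P, P)
assert all(abs(PP[i][j] - P[i][j]) < 1e-12 for i in range(3) for j in range(3))
assert abs(phi(P) - phi(Q)) < 1e-12    # equal state values, h2 / 3 each
```

Since $P$ is supported on $\mathrm{span}\{e_1,e_3\}$ while $Q = e_2 e_2^*$, any automorphism carrying $P$ to $Q$ must mix the eigenspaces of $H$, so it cannot preserve $\varphi$.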
However, in the type $\mathrm{III}_1$ setting, the Connes-Størmer transitivity theorem [@CS1978] implies that all states are approximately unitarily equivalent, and hence for any two states the associated Ocneanu ultraproducts $(\mathcal{M},\varphi)^{\mathcal{U}}$ and $(\mathcal{M},\psi)^{\mathcal{U}}$ are isomorphic, and so $(\mathcal{M},\varphi)$ and $(\mathcal{M},\psi)$ are elementarily equivalent. In fact, we believe the random matrix argument given here likely will adapt to the type $\mathrm{III}_1$ setting. Indeed, let $\mathrm{T}$ be the theory of some type $\mathrm{III}_1$ factor $(\mathcal{M},\varphi)$. Since $\mathcal{M}$ is type $\mathrm{III}$, we have $\mathcal{M}\cong \mathcal{M}\otimes M_n(\mathbb{C})$. Thus, the ultraproduct $(\mathcal{N},\psi) = \prod_{n \to \mathcal{U}} (M_n(\mathbb{C}),\mathop{\mathrm{tr}}_n) \otimes (\mathcal{M},\varphi)$ is a model of $\mathrm{T}$. The random matrix construction of §[4](#sec: MC factor proof){reference-type="ref" reference="sec: MC factor proof"} yields two elements $\mathbf{X}$ and $\mathbf{Y}$ in this ultraproduct such that $f^{\mathcal{N},\psi}(\mathbf{Y}) \leq f^{\mathcal{N},\psi}(\mathbf{X})$ for inf-formulas $f$, $\{\mathbf{X}\}'$ and $\{\mathbf{Y}\}'$ are definable sets with respect to parameters $\mathbf{X}$ and $\mathbf{Y}$ respectively,[^8] and $\{\mathbf{X}\}'$ is a $\mathrm{III}_1$ factor and $\{\mathbf{Y}\}'$ is not. Because $\mathrm{III}_1$ factors are an axiomatizable class [@GHScorrespondences Proposition 6.5.7], this means that $\mathbf{X}$ and $\mathbf{Y}$ cannot have the same type. In the type $\mathrm{III}_\lambda$ setting for $\lambda \in (0,1)$, we do not know if this argument goes through because we would have to pay more attention to the choice of state, and the random matrix argument requires having models with a tensor product decomposition as $(M_n(\mathbb{C}),\mathop{\mathrm{tr}}_n) \otimes (\mathcal{M},\varphi)$. 
Note that the state on $M_n(\mathbb{C})$ being the normalized trace is essential because the mapping $M_n(\mathbb{C}) \to M_n(\mathbb{C}): A \mapsto \frac{1}{2d} \sum_{j=1}^d (U_j A U_j^* + U_j^*AU_j)$ is not necessarily self-adjoint with respect to the inner product associated to an arbitrary state $\varphi$, and hence a bound on the eigenvalues of such a map does not automatically produce an estimate like Lemma [Lemma 16](#lem: first distance estimate){reference-type="ref" reference="lem: first distance estimate"}. In the type $\mathrm{III}_0$ and type $\mathrm{II}_\infty$ setting, another issue arises, namely that type $\mathrm{III}_0$ and type $\mathrm{II}_\infty$ factors are not axiomatizable classes [@GHScorrespondences Proposition 6.5.3 and Fact 6.5.4], so examining factoriality of the relative commutant of $\mathbf{X}$ and $\mathbf{Y}$ may not distinguish their types. Likely, a different approach is needed in these cases. [^1]: IF and DJ are partially supported by the Natural Sciences and Engineering Research Council (Canada). JP is partially supported by the National Science Foundation (US), grant DMS-2054477. [^2]: By [@FG2023 Lemma 3.2], the data used in [\[eq.alpha-j\]](#eq.alpha-j){reference-type="eqref" reference="eq.alpha-j"} is computable from the theory of $\mathcal{M}$. For the reader's convenience we provide a translation. In the terminology of [@FG2023], $\alpha_0=\rho_{\mathcal{M}}(1,0)$, $\rho_{\mathcal{M}}(m,0)=0$ for $m\geq 2$, $\rho_{\mathcal{M}}(1,k)$, for $k\geq 1$, is the sequence in which each $\alpha_j$, for $j\in J_1$, appears $n_j$ times, arranged in decreasing order. Finally, $\rho_{\mathcal{M}}(n_j,1)=\alpha_j$ and $\rho_{\mathcal{M}}(n,k)=0$ if $n\neq n_j$ for all $j$ or if $k\geq 2$. [^3]: By standard methods, one can choose $\mathcal{V}=\mathcal{U}$ (see [@Fa:STCstar Theorem 16.7.4]), but this is beside the point. 
[^4]: The statement does not literally assert this, but it asserts the first two statements in an approximate sense, and the last part is necessarily imperfect because there is no implication in continuous logic, but we will see that it serves the purpose. [^5]: A similar result for GUE matrices was given by Belinschi and Capitaine [@BelCap2022]. This could be used to give a version of our construction with GUE matrices rather than Haar unitaries. [^6]: This also follows from [@GHS2013 §3] since the matrix ultraproduct is Connes embeddable and not elementarily equivalent to $\mathcal{R}$, because it does not have property Gamma. [^7]: The reader should be warned that in this proposition, clauses (1) and (2) should start with 'For any II$_1$ factor $\mathcal{M}$,...'. [^8]: Technically, one has to check that appropriate sets of left/right bounded elements in the commutant are definable sets, which could require a small additional argument.
--- abstract: | In this paper we analyze the probability distributions associated with rolling (possibly unfair) dice infinitely often. Specifically, given a $q$-sided die, if $x_i\in\{0,\ldots,q-1\}$ denotes the outcome of the $i^{\text{th}}$ toss, then the distribution function is $F(x)=\mathbb{P}[X\leq x]$, where $X = \sum_{i=1}^\infty x_i q^{-i}$. We show that $F$ is singular and establish a piecewise linear, iterative construction for it. We investigate two ways of comparing $F$ to the fair distribution---one using supremum norms and another using arclength. In the case of coin flips, we also address the case where each independent flip could come from a different distribution. In part, this work aims to address outstanding claims in the literature on Bernoulli schemes. The results herein are motivated by emerging needs, desires, and opportunities in computation to leverage physical stochasticity in microelectronic devices for random number generation. author: - Douglas T. Pfeffer, J. Darby Smith, William Severa bibliography: - flip.bib title: "**Induced Distributions from Generalized Unfair Dice**" --- # Introduction[\[sec:intro\]]{#sec:intro label="sec:intro"} Contemporary computing approaches are dominated by deterministic operations. In terms of both the algorithmic approach and the underlying computing devices, determinism is deeply woven into our computing mindset. At the device level, stochastic behavior is often seen as a defect. Noise and fluctuations have been eliminated or constrained wherever possible. This is often beneficial; resistance to noise is one of the key benefits of *digital* electronics. However, a direct consequence is that our everyday computers are deterministic. Of course, randomness plays a role in many algorithms, including those from scientific computing and cryptography. 
For example, particle methods and other probabilistic approaches are often applied to high-dimensional physics problems where direct numerical solutions can be intractable. However, there is an inherent misalignment between stochastic behavior and deterministic hardware. Well-distributed and difficult-to-predict numbers can be generated by a Pseudo-Random Number Generator (pRNG). These methods (generally) take a *seed* value and generate a sequence of corresponding numbers through iteration. The sequence of values can appear to be random, but are entirely determined by the seed and the pRNG. The quality of the 'random' numbers is dependent on the quality of the pRNG. While this method is sufficient for many applications, deficiencies in either the seed setting or the pRNG can be disastrous.[^1] Sources of noise can be used to help improve the quality of pRNG number generation. Small timing differences in keyboard input and mouse input or fluctuations in measured quantities, such as WiFi signal, can be used. In modern approaches, these noisy signals feed what is called an *entropy pool*. This entropy pool (e.g. via `/dev/random/` on Linux) can then be combined, hashed, and otherwise manipulated to produce yet more unpredictable "random" numbers. Unfortunately, this entropy pool approach has three main challenges: 1. The sources of noise may not be truly random. 2. The pRNGs still produce non-random numbers. 3. The entropy pool can be depleted. We believe these motivate the study of probabilistic computing devices and, consequently, the study of how to best use a naturally stochastic computing device. This motivation is shared by those in the computing field, where probabilistic devices and true Random Number Generators (tRNGs) are an area of active study [@misra2022probabilistic; @chowdhury2023accelerated; @chowdhury2023full; @aadit2022massively; @kaiser2021probabilistic]. 
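The determinism described above is easy to see in practice: two pRNG instances initialized with the same seed emit identical "random" streams. A minimal illustration (the seed values are arbitrary):

```python
import random

# Two pRNG instances with the same seed produce the same sequence:
# the output is a deterministic function of the seed.
a = random.Random(12345)
b = random.Random(12345)

seq_a = [a.random() for _ in range(5)]
seq_b = [b.random() for _ in range(5)]
assert seq_a == seq_b          # identical streams, hence fully predictable

# A different seed gives a different, but equally deterministic, stream.
c = random.Random(54321)
assert [c.random() for _ in range(5)] != seq_a
```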
Certain devices, such as magnetic tunnel junctions [@rehm2022stochastic; @liu2022random] or tunnel diodes [@bernardo2017extracting], can be made to behave as an effective Bernoulli random variable (hereafter, a coin flip). These devices have some probability $p$ of returning 1 and a probability $1-p$ of returning 0. Simple random variables have great utility and can be exploited to return samples from a variety of distributions [@gryszka2021biased; @flegal2012exact]. Furthermore, when used as input to novel and emerging hardware, like spiking neurons in a neuromorphic computer [@schuman2022opportunities; @young2019review; @aimone2021roadmap], such simple stochastic input can be used to find approximate solutions to NP-hard problems such as MAXCUT [@theilman2022stochastic; @theilman2023goemans]. These formulations tend to model coin flips as precisely that---a two-outcome, even draw. We note that these devices conceivably behave not just as coins but as $q$-sided dice. Indeed, current devices considered for other purposes can be in one of five states [@leonard2023stochastic; @leonard2022shape Fig. 2e]. Though the bulk of current investigations examine coin-like devices, such as $p$-bits [@camsari2017stochastic], in the future we may find that dice-like devices are more attractive for physical reasons (more stable or more efficient) or perhaps even for computational ones (reduced burden or circuitry). An additional complication is that devices may not be fair, and it is critical to understand any departure from uniformity in the general setting. To address these questions, we examine binary encoded distributions from Bernoulli coin flips and $q$-ary encoded distributions from $q$-sided die rolls. We revisit classic results in the Bernoulli case for a class of singular measures and state a 'folk theorem' on the extension to $n$-sided dice in Section [2](#sec:bernoulli){reference-type="ref" reference="sec:bernoulli"}. 
We establish this extension and provide a proof of the 'folk theorem' in Section [3](#sec:qsideddie){reference-type="ref" reference="sec:qsideddie"}. Finally, in Section [4](#sec:arclength){reference-type="ref" reference="sec:arclength"}, we seek to address the question 'Given an unfair $q$-sided die, how does its CDF compare to the uniform, fair die case?' To this end, we provide two comparative tools one can use. In Section [4.1](#supcompare){reference-type="ref" reference="supcompare"}, Theorem [Theorem 5](#theorem: uniformcomp){reference-type="ref" reference="theorem: uniformcomp"} provides an easily calculable upper bound on $\|x-F(x)\|_\infty$, and in Section [4.2](#comparearclength){reference-type="ref" reference="comparearclength"}, Theorem [Theorem 6](#theorem:arclength){reference-type="ref" reference="theorem:arclength"} provides a formula for calculating the arclength of $F(x)$ after finitely many dice rolls (which can then be compared to the fair arclength of $\sqrt{2}$). These formulas are of interest in their own right and stand to provide a basis for error analysis in the future of probabilistic computing. We conclude our effort with a discussion in Section [\[sec:conclusion\]](#sec:conclusion){reference-type="ref" reference="sec:conclusion"} that seeks to frame these results in the context of novel mathematical directions and future stochastic computing. # Existing results on Bernoulli schemes {#sec:bernoulli} Consider a series of Bernoulli outcomes $x_i$ on $\{0,1\}$. As coin flips, we will call $1$ 'heads' and $0$ 'tails'. In the independent and identically distributed (i.i.d.) case, let $\mathbb{P}[x_i=0]=p_0$ and, consequently, $\mathbb{P}\left[x_i=1\right]=1-p_0=p_1$. If $p_0=0.5$, we say we are flipping a 'fair' coin. 
Given $n$ outcomes, define $$X_n := \frac{x_1}{2^1} + \frac{x_2}{2^2} + \cdots + \frac{x_n}{2^n} = \sum_{i=1}^n \frac{x_i}{2^i}.$$ Each $X_n$ is a number in $[0,1]$ formed from a binary encoding of the $n$ outcomes $\{x_i\}_{i=1}^n$. Let $X := \lim_{n\to\infty} X_n$ be the encoded value in $[0,1]$ obtained by flipping our coin infinitely many times. We ask, given $y\in[0,1]$, what is $\mathbb{P}[X\leq y]$? That is, what is the cumulative distribution function (CDF) of $X$? Consider first the case that $p_0=0.5$. For a finite number of flips $N$, the probability of getting any single value of $X$ is the probability of getting a particular string of outcomes. In this fair case, that probability is $1/2^N$. Extending this probability mass function to a probability density function on the real line, we need to divide this probability by the width of the unit of mass it represents. Given the binary encoding, this width is $1/2^N$. Hence the probability density induced on the real line by $N$ flips is 1. The limit in $N$ is still 1, and the associated probability density function (PDF) of $X$ in the fair case is the uniform PDF. Therefore the CDF of $X$ in the fair case is given by $$\label{eq:uniform} F(x) = \int_{-\infty}^x \mathbbm{1}_{[0,1]} \mathop{d\mu} = \begin{cases}0 &\text{if} \ x<0\\ x &\text{if} \ 0\leq x \leq 1\\ 1 &\text{if} \ x>1 \end{cases}.$$ We remark that the uniform measure (the PDF) on $[0,1]$ coincides with Lebesgue measure restricted to $[0,1]$. What happens when $p_0\neq 0.5$? [@billingsley Example 31.1] shows that the distribution turns out to be *singular*---a probability distribution concentrated on a set of Lebesgue measure zero. This is observed by showing that the cumulative distribution function $F(x)$ for $X$ in the non-fair case is continuous, strictly increasing, and has $F'(x)=0$ almost everywhere. A related concept is the Cantor distribution, a probability distribution whose cumulative distribution function is the Cantor function. 
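Both regimes can be explored with a short Monte Carlo experiment. The sketch below (helper names are ours) truncates $X$ at $n$ flips and estimates $F(y)=\mathbb{P}[X\leq y]$: for the fair coin the estimates track the uniform CDF, while for any coin $F(1/2)=p_0$, since $X \leq 1/2$ essentially forces $x_1 = 0$:

```python
import random

def sample_X(p0, n_flips=53, rng=random):
    """One draw of X_n = sum_i x_i 2^{-i}, where x_i = 1 ('heads') with
    probability 1 - p0 and x_i = 0 ('tails') with probability p0."""
    return sum(2.0 ** -(i + 1) for i in range(n_flips) if rng.random() >= p0)

def empirical_F(p0, y, n_samples=20000, seed=0):
    """Monte Carlo estimate of F(y) = P[X <= y]."""
    rng = random.Random(seed)
    return sum(sample_X(p0, rng=rng) <= y for _ in range(n_samples)) / n_samples

# Fair coin: the empirical CDF tracks the uniform CDF, F(y) = y.
for y in (0.25, 0.5, 0.75):
    print(y, empirical_F(0.5, y))

# Unfair coin: F(1/2) equals p0, since X <= 1/2 (essentially) forces x_1 = 0.
print(empirical_F(0.15, 0.5))
```

With $20{,}000$ samples, each fair-coin estimate agrees with $y$ to within a few standard errors (about $0.004$).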
After demonstrating that the cumulative distribution function, $F(x)$, for the unfair coin is singular, Billingsley establishes a recursive definition for it. With $\mathbb{P}[x_i=0]=p_0$ and $p_1 = 1-p_0$, $$\label{unfairrecursive} F(x) = \begin{cases} p_0F(2x) &\text{if} \ 0\leq x \leq \textstyle \frac{1}{2}\\ p_0 + p_1F(2x-1) &\text{if} \ \textstyle \frac{1}{2} \leq x \leq 1 \end{cases}.$$ As examples, we graph this recursive function for four different coins $p_0 = .15, .25, .40, \text{ and } .49$ in Figure [1](#fig:unfair_example){reference-type="ref" reference="fig:unfair_example"}. ![The CDF $F(x)$ shown for various unfair coins with values for $p_0$ indicated by color. In each case, $p_1 = 1 - p_0$](figures/cdf_plot.pdf){#fig:unfair_example width=".65\\textwidth"} As any unfair coin produces a singular measure, all such unfair coin measures are singular with respect to Lebesgue measure (on $[0,1]$) and therefore singular with respect to the uniform measure on $[0,1]$. In the sequel, we will extend this classic CDF result into a series of results on unfair $q$-sided dice. This natural extension of the CDF formula to unfair dice is conjectured by Billingsley as an exercise and, to our knowledge, no proof of this result exists in the literature. The following is a reinterpretation of this conjecture: **Conjecture 1** (Problem 31.1 in [@billingsley]). Let $p_0, \ldots, p_{q-1}$ be non-negative numbers adding to 1, where $q\geq 2$; suppose there is no $j$ such that $p_j=1$. Let $x_1, x_2,\ldots$ be independent, identically distributed random variables such that $P[x_i=j]=p_j$, $0\leq j < q$, and put $X = \sum_{i=1}^\infty x_i q^{-i}$. 
If $F(x)=\mathbb{P}[X\leq x]$ is the distribution function of $X$, then (i) $F$ is continuous, [\[billingsleyi\]]{#billingsleyi label="billingsleyi"} (ii) $F$ is strictly increasing on $[0,1]$ if and only if $p_j>0$ for all $j$,[\[billingsleyii\]]{#billingsleyii label="billingsleyii"} (iii) if $p_j = q^{-1}$ for all $j$, then $F(x)=x$ on $[0,1]$; and [\[billingsleyiii\]]{#billingsleyiii label="billingsleyiii"} (iv) if $p_j\neq q^{-1}$ for some $j$, then $F$ is singular. [\[billingsleyiv\]]{#billingsleyiv label="billingsleyiv"} In addition to the preceding proposition on a single (unfair) die, recent research has referenced pairs of dice as well; notably, [@cornean2022characterization] and [@cornean2022singular]. While their work is focused on the generalized *stationarity* setting, they provide a discussion of the problem's history in the i.i.d. case---a so-called 'Bernoulli scheme'. They identify the $q=2$ CDF as a 'Riesz-Nagy' function, and explicitly examine the Cantor function for $q=3$ (Problem 31.2 in [@billingsley]). In [@cornean2022characterization §1.1], the authors go on to make two claims without proof: I. The measures $\text{d}F=\mu$ in all the Bernoulli schemes for any $q$ are again all singular with respect to one another. [\[folklore1\]]{#folklore1 label="folklore1"} II. Only one measure is absolutely continuous relative to Lebesgue measure; namely, that in which all $j\in\{0,\ldots,q-1\}$ are equally likely. In this case, $\text{d}F=\lambda$ is Lebesgue measure itself on $[0,1]$.[\[folklore2\]]{#folklore2 label="folklore2"} They refer to the first of these claims as a 'folk theorem'. While they refer the reader to Section 14 of [@billingsleyergodic] for a discussion on the matter, we were unable to reproduce these observations from this text. That said, [@billingsleyergodic Example 3.5] does allude to the base-$q$ case for $q\geq 2$. Here, questions similar to those in [@billingsley Problems 31.1 and 32.1] are posed, but still no proofs are given.
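Although the proofs below are analytic, the recursion ([\[unfairrecursive\]](#unfairrecursive){reference-type="ref" reference="unfairrecursive"}) is also easy to sanity-check numerically. The following sketch (Python; the function name, truncation depth, and tolerances are our own choices) unwinds the recursion to a fixed depth, incurring a truncation error of at most $\max(p_0,p_1)^{\text{depth}}$, and compares the result with an empirical CDF built from simulated flips:

```python
import random

def F(x, p0, depth=40):
    """CDF of the unfair coin via the recursion, unwound 'depth' times;
    the truncation error is at most max(p0, 1 - p0)**depth."""
    if x <= 0:
        return 0.0
    if x >= 1:
        return 1.0
    if depth == 0:
        return x  # any value in [0, 1] works at the cutoff
    if x <= 0.5:
        return p0 * F(2 * x, p0, depth - 1)
    return p0 + (1 - p0) * F(2 * x - 1, p0, depth - 1)

# Empirical check: encode 50 simulated flips per trial and compare the
# empirical CDF with the recursion at a few points.
rng = random.Random(1)
p0, flips, trials = 0.25, 50, 50_000
xs = [sum((rng.random() >= p0) / 2**i for i in range(1, flips + 1))
      for _ in range(trials)]
for y in (0.2, 0.5, 0.8):
    empirical = sum(x <= y for x in xs) / trials
    assert abs(empirical - F(y, p0)) < 0.01
```

Here `rng.random() >= p0` is the event $x_i=1$, which has probability $1-p_0$; the seed is fixed so the check is reproducible.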
In part, the next section will provide proofs of the aforementioned facts. Specifically, we prove Conjecture [Conjecture 1](#conj:billingsleydice){reference-type="ref" reference="conj:billingsleydice"} in Theorems [Theorem 1](#maintheorem1){reference-type="ref" reference="maintheorem1"} and [Theorem 2](#maintheorem2){reference-type="ref" reference="maintheorem2"}. We prove claims ([\[folklore1\]](#folklore1){reference-type="ref" reference="folklore1"}) and ([\[folklore2\]](#folklore2){reference-type="ref" reference="folklore2"}) in Theorems [Theorem 2](#maintheorem2){reference-type="ref" reference="maintheorem2"} and [Theorem 3](#maintheorem3){reference-type="ref" reference="maintheorem3"}. Afterward, in Theorem [Theorem 4](#thm:noniidcoinlips){reference-type="ref" reference="thm:noniidcoinlips"}, we provide a novel analogous result to [@billingsley Example 31.1] for independent, but not identically distributed, coin flip sequences. # Results on infinite (unfair) dice rolling. {#sec:qsideddie} In this section we begin by establishing qualitative results for the cumulative distribution functions associated with sequences obtained from unfair dice rolls. We then follow this discussion with the development of some machinery one can use to compare the unfair cumulative distribution function with the uniform distribution. ## Analysis of the CDF. Our first goal is to prove parts ([\[billingsleyi\]](#billingsleyi){reference-type="ref" reference="billingsleyi"}) and ([\[billingsleyii\]](#billingsleyii){reference-type="ref" reference="billingsleyii"}) of Conjecture [Conjecture 1](#conj:billingsleydice){reference-type="ref" reference="conj:billingsleydice"}: **Theorem 1**. Consider a $q$-sided die, where $x_i\in\{0,\ldots,q-1\}$ denotes the outcome of the $i^\text{th}$ toss and $p_j:=\mathbb{P}[x_i=j]$ for $j\in\{0,\ldots,q-1\}$.
Given $X = \sum_{i=1}^\infty x_iq^{-i}$, if $F(x)=\mathbb{P}[X\leq x]$ is the distribution function obtained after tossing the die an infinite number of times, then (i) $F$ is continuous; (ii) $F$ is strictly increasing on $[0,1]$ if and only if $p_j>0$ for all $j$; (iii) $F'$ exists almost everywhere in $[0,1]$; *Proof.* For an arbitrary sequence $(u_1,u_2,\ldots)$ taken from $\{0,1,\ldots, q-1\}$, let $p_{u_i}:= \mathbb{P}[x_i=u_i]$. Since each $p_{u_i}<1$, we have $$\label{zerolimit} \mathbb{P}[x_i = u_i, \ i=1,2\ldots] = \lim_{n\to \infty} p_{u_1}\cdot p_{u_2}\cdot\ldots\cdot p_{u_n} = 0.$$ Letting $x = \sum_{i=1}^\infty \frac{u_i}{q^i}$ be the (essentially) unique base-$q$ expansion for a number in $[0,1]$, we see immediately that $\mathbb{P}[X = x]=0$. Hence, $$\mathbb{P}[X \leq x] = \mathbb{P}[X < x] + \mathbb{P}[X=x]=\mathbb{P}[X<x].$$ It follows that $F$ is left-continuous. As a distribution, $F$ must be right-continuous. Therefore $F$ is everywhere continuous. Now let $k\in \mathbb{N}$ so that $0\leq \frac{k}{q^n}\leq 1$. We can see that $$\frac{k}{q^n} = \sum_{i=1}^n \frac{u_i}{q^i}$$ for some $u_i\in \{0,1,\ldots,q-1\}$. Since $F$ is continuous, $$\begin{aligned} \begin{split} F\left( \frac{k+1}{q^n}\right) - F\left( \frac{k}{q^n}\right) &= \mathbb{P}\left[X < \frac{k+1}{q^n}\right] - \mathbb{P}\left[X < \frac{k}{q^n}\right] \\ &=\mathbb{P}\left[\frac{k}{q^n} < X < \frac{k+1}{q^n}\right]\\ &=\mathbb{P}[x_i = u_i, \ i=1,2,\ldots,n] \\ &= p_{u_1}\cdot\ldots\cdot p_{u_n}. \end{split} \label{eq:interval}\end{aligned}$$ Therefore, since base-$q$ expansions are dense in $[0,1]$, $F$ is strictly increasing on $[0,1]$ if and only if $p_j>0$ for all $j$. In any case, $F$ is non-decreasing and therefore, by Theorem 31.2 in [@billingsley], $F'$ exists almost everywhere in $[0,1]$. ◻ To proceed, we require two results on the frequency of digits in our base-$q$ expansions (both of which are due to Émile Borel). 
To discuss both, we fix the following notation: Given a finite set $B$ of $b$ digits and an infinite sequence $\omega$ taken from $B$, let $\#_\omega(a,n)$ denote the number of times $a$ shows up in the first $n$ terms of $\omega$. The first result we need is known as *Borel's law of large numbers* (see [@2011aaf1-ab6b-3b14-bb2c-7b5bffa2d318] for an analytic proof) which states that if $S_n$, $n\geq 1$, is the number of successes in the first $n$ independent repetitions of a Bernoulli trial with success probability $p$, $0<p<1$, then $$\mathbb{P}\left( \lim_{n\to\infty} \frac{S_n}{n} =p \right) =1.$$ In the context of our paper, our Bernoulli trial is the tossing of a $q$-sided die. Borel's law of large numbers then asserts that, with probability 1, the frequency of each individual outcome tends toward its probability. From this, Lemma [Lemma 1](#lemma:blln){reference-type="ref" reference="lemma:blln"} directly follows. **Lemma 1**. Consider a $q$-sided die, where $x_i\in\{0,\ldots,q-1\}$ denotes the outcome of the $i^\text{th}$ toss and $p_j:=\mathbb{P}[x_i=j]$ for $j\in\{0,\ldots,q-1\}$. Given $X = \sum_{i=1}^\infty x_iq^{-i}$, if $F(x)=\mathbb{P}[X\leq x]$ is the distribution function obtained after tossing the die an infinite number of times with $\mu$ as its associated probability measure, and $\omega_q = (d_1(x), d_2(x), \ldots)$ is the sequence of digits in the non-terminating base-$q$ expansion of an $x\in[0,1]$, then $$\mu\left( \left\{ x\in (0,1]\colon \lim_{n\to\infty} \frac{\#_{\omega_q}(j,n)}{n} = p_j \right\} \right) = 1.$$ The second result we need is Borel's *Normal Number Theorem*. While this result was originally established in [@mileBorelLesPD], we refer the reader to [@kuipers2012uniform Chapter 8] for more details. By definition, given a finite set $B$ of $b$ digits, an infinite sequence $\omega$ on this set is **(simply) normal** if $$\lim_{n\to \infty} \frac{\#_\omega(a,n)}{n} = \frac{1}{b}$$ for any $a\in B$.
Thus, a sequence $\omega$ is normal for a set $B$ if the relative frequency of each item in $B$ is 'fair'. The normal number theorem says that almost every real number $x$ is normal in any integral base $b>1$. Utilizing our notation, we formally write **Lemma 2** (Normal Number Theorem). Let $x\in [0,1]$, $b>1$ be an integer, and $B=\{0,\ldots,b-1\}$. If $\omega_b$ is the sequence of digits from $B$ that form the base-$b$ expansion of $x$, then $$\lambda\left( \left\{ x\in [0,1]\colon \lim_{n\to\infty} \frac{\#_{\omega_b}(j,n)}{n} = \frac{1}{b} \right\} \right) = 1$$ for all $j\in B$, where $\lambda$ is Lebesgue measure on $[0,1]$. Next, we record a measure-theoretic definition and proposition that have been localized to $[0,1]$. Both are reproduced directly from [@billingsley pg.410]. **Definition 1**. Two measures $\mu$ and $\lambda$ on $[0,1]$ have *disjoint supports* if there exist Borel sets $S_\mu$ and $S_\lambda$ such that $$\mu([0,1]\setminus S_\mu) = 0, \hspace{.25in} \lambda([0,1]\setminus S_\lambda)=0, \hspace{.25in} \text{and} \hspace{.25in} S_\mu \cap S_\lambda=\emptyset.$$ **Proposition 1**. If $F\colon [0,1]\to [0,1]$ is a differentiable function for which $\mu((a,b]) = F(b)-F(a)$, then $\mu$ and Lebesgue measure $\lambda$ have disjoint supports if and only if $F'(x)=0$ except on a set of Lebesgue measure 0. We may now state and prove our second main result which proves parts ([\[billingsleyiii\]](#billingsleyiii){reference-type="ref" reference="billingsleyiii"}) and ([\[billingsleyiv\]](#billingsleyiv){reference-type="ref" reference="billingsleyiv"}) of Conjecture [Conjecture 1](#conj:billingsleydice){reference-type="ref" reference="conj:billingsleydice"} and simultaneously proves claim ([\[folklore2\]](#folklore2){reference-type="ref" reference="folklore2"}): **Theorem 2**. Consider a $q$-sided die, where $x_i\in\{0,\ldots,q-1\}$ denotes the outcome of the $i^\text{th}$ toss and $p_j:=\mathbb{P}[x_i=j]$ for $j\in\{0,\ldots,q-1\}$.
Given $X = \sum_{i=1}^\infty x_iq^{-i}$, if $F(x)=\mathbb{P}[X\leq x]$ is the cumulative distribution function obtained after tossing the die an infinite number of times, then (i) If $p_j= \textstyle \frac{1}{q}$ for all $j$, then $F(x)=x$ on $[0,1]$ and [\[maintheorem2item1\]]{#maintheorem2item1 label="maintheorem2item1"} (ii) If $p_k\neq \textstyle \frac{1}{q}$ for some $k$, then $F$ is singular.[\[maintheorem2item2\]]{#maintheorem2item2 label="maintheorem2item2"} In either case, $F$ is given by the following recursion formula: $$\label{qsideddierecursion} F(x) = \begin{cases} p_0F(qx) &\text{if} \ 0\leq x \leq \textstyle \frac{1}{q}\\ p_0+p_1F(qx-1) &\text{if} \ \textstyle \frac{1}{q} \leq x \leq \frac{2}{q}\\ \ \ \ \ \ \ \ \ \vdots &\ \ \ \ \ \ \ \ \ \vdots\\ (p_0+p_1+\ldots+p_{q-2}) + p_{q-1}F(qx-(q-1)) &\text{if} \ \textstyle \frac{q-1}{q} \leq x \leq 1\\ \end{cases}.$$ *Proof.* While ([\[maintheorem2item1\]](#maintheorem2item1){reference-type="ref" reference="maintheorem2item1"}) will follow from the recursion formula established independently, we can also use the setting detailed in [\[eq:interval\]](#eq:interval){reference-type="eqref" reference="eq:interval"} to observe that, if $p_j=\textstyle \frac{1}{q}$ for all $j$, then $$F\left( \frac{k}{q^n}+\frac{1}{q^n}\right) - F\left( \frac{k}{q^n}\right) = F\left( \frac{k+1}{q^n}\right) - F\left( \frac{k}{q^n}\right) = p_{u_1}\cdot\ldots\cdot p_{u_n} = \frac{1}{q^n}.$$ Hence, due to the density of base-$q$ expansions in $[0,1]$, $F(x)=x$. We now establish ([\[maintheorem2item2\]](#maintheorem2item2){reference-type="ref" reference="maintheorem2item2"}). Take $x\in(0,1]$ and let $\omega_q = (d_1(x), d_2(x),\ldots)$ be the sequence of digits in its non-terminating base-$q$ expansion. If $\mu$ represents our probability measure, then $\mu[x\colon d_i(x)=j]=p_j$ for $j\in\{0,\ldots,q-1\}$. 
For every $j$, form $$S_j:= \left\{ x\in (0,1]\colon \lim_{n\to\infty} \frac{\#_{\omega_q}(j,n)}{n} = p_j \right\} \hspace{.25in} \text{and consider} \hspace{.25in} \widetilde{S}:=\bigcap_{j=0}^{q-1} S_j.$$ Lemma [Lemma 1](#lemma:blln){reference-type="ref" reference="lemma:blln"} asserts that $\mu(S_j)=1$ for every $j$. Thus, by subadditivity of the measure $\mu$, $$\mu((\widetilde{S})^c) = \mu\left( \bigcup_{j=0}^{q-1} S_j^c \right) \leq \sum_{j=0}^{q-1} \mu(S_j^c) = 0,$$ and therefore $\mu(\widetilde{S})=1$. Similarly, for every $j$ form $$T_j:= \left\{ x\in (0,1]\colon \lim_{n\to\infty} \frac{\#_{\omega_q}(j,n)}{n} = \frac{1}{q} \right\} \hspace{.25in} \text{and consider} \hspace{.25in} \widetilde{T}:=\bigcap_{j=0}^{q-1} T_j.$$ Lemma [Lemma 2](#lemma:bnnt){reference-type="ref" reference="lemma:bnnt"} asserts that $\lambda(T_j)=1$ for every $j$, where $\lambda$ is Lebesgue measure. Thus, again by the subadditivity of $\lambda$, $\lambda(\widetilde{T})=1$. Now suppose that $p_k\neq \frac{1}{q}$ for some $k$. Then, by the uniqueness of limits, $S_k\cap T_k = \emptyset$ and therefore $\widetilde{S}\cap \widetilde{T}=\emptyset$. By Definition [Definition 1](#def:disjointsupp){reference-type="ref" reference="def:disjointsupp"}, $\mu$ and $\lambda$ are seen to have disjoint supports. It now follows from Proposition [Proposition 1](#prop:disjointsupports){reference-type="ref" reference="prop:disjointsupports"} that $F'(x)=0$ except on a set of Lebesgue measure 0 and therefore $F$ is singular. Finally, we establish the recursion formula given in ([\[qsideddierecursion\]](#qsideddierecursion){reference-type="ref" reference="qsideddierecursion"}). Note that $[0,1]$ can be divided into $q$ intervals $[0, \textstyle \frac{1}{q}], [\textstyle \frac{1}{q}, \frac{2}{q}],\ldots, [\textstyle \frac{q-1}{q},1]$ --- the so-called base-$q$ intervals of rank 1. All cases in the recursion proceed in an identical fashion. 
As such, we provide an explicit proof of the last case of the recursion formula only. Suppose $x\in [\textstyle \frac{q-1}{q},1]$, the $q^\text{th}$ base-$q$ interval of rank 1. Here, $X \leq x$ can occur in $q$ different ways. Specifically, either $X$ lies in one of the previous base-$q$ intervals, or it lies in the last interval with $x$. Thus, $$\begin{aligned} \mathbb{P}[X \leq x] &= \mathbb{P}\left[ x_1=0 \ \text{or} \ \ldots \ \text{or} \ x_{1}=q-2 \ \text{or} \ \left(x_1=q-1 \ \text{and} \ \frac{q-1}{q} + \sum_{i=2}^\infty \frac{x_i}{q^i} \leq x\right)\right]\\ &= (p_0 + \ldots + p_{q-2}) + p_{q-1}\mathbb{P}\left[q-1 + \sum_{i=2}^\infty \frac{x_i}{q^{i-1}} \leq qx\right]\\ &= (p_0 + \ldots + p_{q-2}) + p_{q-1}\mathbb{P}\left[\sum_{i=1}^\infty \frac{x_{i+1}}{q^{i}} \leq qx-(q-1)\right]\\ &= (p_0 + \ldots + p_{q-2}) + p_{q-1}\mathbb{P}\left[X \leq qx-(q-1)\right]\\ &= (p_0 + \ldots + p_{q-2}) + p_{q-1}F(qx-(q-1)).\end{aligned}$$ Therefore, when $x\in [\textstyle \frac{q-1}{q},1]$, we have the recursion $$F(x) = (p_0 + \ldots + p_{q-2}) + p_{q-1}F(qx-(q-1)).$$ The rest of the cases follow similarly. ◻ **Remark 1**. Note that the *Cantor distribution* is the probability distribution whose cumulative distribution function is the Cantor function. This distribution is often given as an example of a singular distribution. As a result, it is worth noting that the singular distribution obtained in Theorem [Theorem 2](#maintheorem2){reference-type="ref" reference="maintheorem2"} is a generalization of the Cantor distribution. 
Indeed, if we let $q=3$, $p_0=p_2=0.5$, and $p_1=0$, Theorem [Theorem 2](#maintheorem2){reference-type="ref" reference="maintheorem2"} states that the resulting cumulative distribution function, $F(x)$, is singular and is given by the following recursion formula: $$F(x) = \begin{cases} \frac{1}{2}F(3x) &\text{if} \ 0\leq x \leq \frac{1}{3}\\ \frac{1}{2}&\text{if} \ \frac{1}{3}\leq x \leq \frac{2}{3}\\ \frac{1}{2} + \frac{1}{2}F(3x-2) &\text{if} \ \frac{2}{3}\leq x \leq 1\\ \end{cases}$$ By comparing this formula to that given in [@Dobos] and [@cantor1 pg.9], we see that this formula exactly defines the Cantor distribution whose graph is the 'Devil's Staircase'. Notably, the proof given in Theorem [Theorem 2](#maintheorem2){reference-type="ref" reference="maintheorem2"} can be modified to form a stronger conclusion on singularity. Specifically, we can compare the probability measures obtained from differently weighted dice. The following result proves claim ([\[folklore1\]](#folklore1){reference-type="ref" reference="folklore1"}): **Theorem 3**. Consider two $q$-sided dice with sides taken from $Q:=\{0, \ldots, q-1\}$. Let $x_i\in Q$ denote the outcome of the $i^\text{th}$ toss of one die with corresponding probabilities $p_j:=\mathbb{P}[x_i=j]$ for $j\in Q$, and let $\widetilde{x}_k$ and $\widetilde{p}_k$ be defined similarly for the second die (with $k\in Q)$. Given $X = \sum_{i=1}^\infty x_iq^{-i}$ and $\widetilde{X} = \sum_{i=1}^\infty \widetilde{x}_iq^{-i}$, put $F(x)=\mathbb{P}[X\leq x]$ and $\widetilde{F}(x)=\mathbb{P}[\widetilde{X}\leq x]$ as the respective cumulative distribution functions obtained after tossing the corresponding die an infinite number of times. If there exists an outcome $t\in Q$ so that $p_t \neq \widetilde{p}_t$, then the associated probability measures, $\mu$ and $\widetilde{\mu}$, are mutually singular. [\[twodice1\]]{#twodice1 label="twodice1"} *Proof.* It suffices to show that $\mu$ and $\widetilde{\mu}$ have disjoint supports. 
We will essentially use the argument given in the proof of Theorem [Theorem 2](#maintheorem2){reference-type="ref" reference="maintheorem2"}, but with two probability measures in place of one probability measure and Lebesgue measure. Take $x\in(0,1]$ and let $\omega_q = (d_1(x), d_2(x),\ldots)$ be the sequence of digits in its non-terminating base-$q$ expansion. If $\mu$ and $\widetilde{\mu}$ represent our two probability measures, then $\mu[x\colon d_i(x)=j]=p_j$ and $\widetilde{\mu}[x\colon d_i(x)=k]=\widetilde{p}_k$ for $j,k\in Q$. For every $j$, form $$S_j:= \left\{ x\in (0,1]\colon \lim_{n\to\infty} \frac{\#_{\omega_q}(j,n)}{n} = p_j \right\} \hspace{.25in} \text{and} \hspace{.25in} S:=\bigcap_{j\in Q} S_j.$$ Lemma [Lemma 1](#lemma:blln){reference-type="ref" reference="lemma:blln"} asserts that $\mu(S_j)=1$ for every $j$. Thus, by subadditivity of $\mu$, we have $\mu(S)=1$. Similarly, for every $k$, form $$\widetilde{S_k}:= \left\{ x\in (0,1]\colon \lim_{n\to\infty} \frac{\#_{\omega_{q}}(k,n)}{n} = \widetilde{p_k} \right\} \hspace{.25in} \text{and} \hspace{.25in} \widetilde{S}:=\bigcap_{k\in Q} \widetilde{S_k}.$$ Again, by Lemma [Lemma 1](#lemma:blln){reference-type="ref" reference="lemma:blln"} and subadditivity of $\widetilde{\mu}$, we have $\widetilde{\mu}(\widetilde{S})=1$. By assumption, there exists an outcome $t$ for which $p_t\neq \widetilde{p}_t$. Thus, by the uniqueness of limits, we have $S_t\cap \widetilde{S_t} = \emptyset$ and therefore $S\cap \widetilde{S} = \emptyset$. Hence, $\mu$ and $\widetilde{\mu}$ have disjoint supports. ◻ We have so far only addressed i.i.d. random dice rolls and coin flips. In the case of coin flips, we can say a little bit more in the independent, but not necessarily identically distributed, setting. **Theorem 4**. Suppose we flip an infinite number of 2-sided unfair coins, each of which may have a different weighting.
Specifically, let $x_i\in\{0,1\}$ denote the outcome of the $i^\text{th}$ flip and suppose $0<\mathbb{P}[x_i=0]=:p_{i;0}<1$ with $p_{i;0}\neq 0.5$. If $(p_{i;0})\not\to 0.5$, then $F'(x)=0$ almost everywhere and therefore $F$ is singular. *Proof.* Analogous arguments to those in Theorem [Theorem 1](#maintheorem1){reference-type="ref" reference="maintheorem1"} demonstrate that $F$ is well-defined, continuous, and increasing, and therefore $F'$ exists almost everywhere in $[0,1]$. Suppose that $(p_{i;0})\not\to0.5$. We will demonstrate that $F'(x)=0$ almost everywhere. Let $k\in \mathbb{N}$ so that $0\leq \frac{k}{2^n}\leq 1$. Then, $\textstyle \frac{k}{2^n} = \sum_{i=1}^n \frac{u_i}{2^i}$ for some $u_i\in \{0,1\}$ and $$\begin{aligned} F\left( \frac{k+1}{2^n}\right) - F\left( \frac{k}{2^n}\right) &= \mathbb{P}\left[X < \frac{k+1}{2^n}\right] - \mathbb{P}\left[X < \frac{k}{2^n}\right]\\ &=\mathbb{P}\left[\frac{k}{2^n} < X < \frac{k+1}{2^n}\right]\\ &=\mathbb{P}[x_i = u_i, \ i=1,2,\ldots,n]\\ &= p_{u_1}\cdot\ldots\cdot p_{u_n}.\end{aligned}$$ Let $x$ be given and for each $m\in\mathbb{N}$, choose $k_m$ so that $x \in I_m$, where $$I_m = \left( \frac{k_m}{2^m}, \frac{k_m+1}{2^m}\right)$$ is the dyadic interval of rank $m$ that contains $x$. Whenever $F'(x)$ exists (which is almost everywhere), it follows that $$\lim_{m\to\infty} \frac{ \mathbb{P}[X \in I_m]}{2^{-m}} = \lim_{m\to\infty} \frac{ F\left( \frac{k_m+1}{2^m}\right) - F\left( \frac{k_m}{2^m}\right) }{2^{-m}} = F'(x).$$ Therefore, if we suppose, for the sake of contradiction, that $F'(x)\neq 0$, then on one hand we obtain the following: $$\lim_{m\to\infty} \frac{\frac{ \mathbb{P}[X \in I_{m+1}]}{2^{-(m+1)}}}{ \frac{ \mathbb{P}[X \in I_{m}]}{2^{-m}} } = \frac{F'(x)}{F'(x)}=1.$$ Thus, $$\label{1/qcont} \lim_{m\to\infty} \frac{ \mathbb{P}[X \in I_{m+1}]}{ \mathbb{P}[X\in I_{m}] } = \frac{1}{2}.$$ On the other hand, we know that $I_m$ consists of those numbers in $[0,1]$ whose dyadic expansions match $x$'s for the first $m$ terms.
Thus, if $X \in I_m$, then $$X = \sum_{i=1}^m \frac{u_i}{2^i} + \sum_{i=m+1}^\infty \frac{x_i}{2^i}.$$ This implies that $\mathbb{P}[X \in I_m] = p_{u_1}\cdot p_{u_2}\cdot\ldots \cdot p_{u_m}$. Therefore, $$\frac{ \mathbb{P}[X \in I_{m+1}]}{ \mathbb{P}[X \in I_{m}] } = \frac{ p_{u_1}\cdot p_{u_2}\cdot\ldots \cdot p_{u_m} \cdot p_{u_{m+1}}}{p_{u_1}\cdot p_{u_2}\cdot\ldots \cdot p_{u_m}} = p_{ u_{m+1} }.$$ Since $(p_{i;0})\not\to 0.5$, the sequence $(p_{u_{m+1}})$ cannot converge to $\frac{1}{2}$, and so the ratio $$\frac{ \mathbb{P}[X \in I_{m+1}]}{ \mathbb{P}[X \in I_{m}] } = p_{ u_{m+1} }$$ does not converge to $\frac{1}{2}$ either. This contradicts the conclusion ([\[1/qcont\]](#1/qcont){reference-type="ref" reference="1/qcont"}). Thus, $F'(x) = 0$ almost everywhere and therefore $F$ is singular. ◻ # Comparisons of distributions to uniform. {#sec:arclength} If computational devices can be used to create distributions based on possibly unfair coin tosses or die rolls, it is natural to ask how far the resulting distribution can be expected to be from uniform. In practice, such comparisons are likely to be statistical in nature. However, given the results of the previous sections, we now have firm distributional objects with which to make comparisons. In this section we offer two analytic ways to compare an unfair distribution to the uniform and fair one. The first is done in the infinite toss limit and utilizes the sup-norm. The second addresses the practical, finite setting and compares a finite number of rolls or tosses to uniform through arclength. ## Comparison to Uniform under $\|\cdot\|_\infty$ {#supcompare} In this section we establish a method to compare the (possibly unfair) distribution $F(x)$ with the uniform (fair) distribution using the sup-norm.
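Before introducing the operator-theoretic machinery, we note that such sup-norm comparisons can already be estimated numerically by unwinding the recursion ([\[qsideddierecursion\]](#qsideddierecursion){reference-type="ref" reference="qsideddierecursion"}) from Theorem [Theorem 2](#maintheorem2){reference-type="ref" reference="maintheorem2"} on a grid. The following sketch (Python; the iterative evaluator, grid size, and cutoff depth are our own choices) does this for a loaded 3-sided die:

```python
def die_cdf(x, probs, depth=30):
    """Evaluate F via the q-branch recursion, unwound 'depth' times;
    the truncation error is at most max(probs)**depth."""
    q = len(probs)
    acc, scale = 0.0, 1.0
    for _ in range(depth):
        if x <= 0:
            return acc
        if x >= 1:
            return acc + scale
        k = min(int(x * q), q - 1)     # rank-1 interval containing x
        acc += scale * sum(probs[:k])  # p_0 + ... + p_{k-1}
        scale *= probs[k]              # p_k multiplies the inner F
        x = q * x - k                  # pass qx - k down the recursion
    return acc + scale * x             # cutoff

grid = [i / 729 for i in range(730)]
# A fair die reproduces the uniform CDF F(x) = x ...
assert max(abs(die_cdf(x, [1/3] * 3) - x) for x in grid) < 1e-9
# ... while a loaded die sits a positive sup-norm distance from it.
gap = max(abs(die_cdf(x, [0.6, 0.1, 0.3]) - x) for x in grid)
assert gap > 0.2
```

For the loaded die the gap is already visible at $x=1/3$, where $F(1/3)=p_0=0.6$.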
To start, consider the operator $T\colon C[0,1]\to C[0,1]$ defined by $$\label{seqdef} (Tf)(x) = \begin{cases} p_0f(qx) &\text{if} \ 0\leq x \leq \textstyle \frac{1}{q}\\ p_0+p_1f(qx-1) &\text{if} \ \textstyle \frac{1}{q} \leq x \leq \frac{2}{q}\\ \ \ \ \ \ \ \ \ \vdots &\ \ \ \ \ \ \ \ \ \vdots\\ (p_0+p_1+\ldots+p_{q-2}) + p_{q-1}f(qx-(q-1)) &\text{if} \ \textstyle \frac{q-1}{q} \leq x \leq 1\\ \end{cases}$$ **Lemma 3**. Put $p_{\text{max}} = \max\{p_j\}_{j=0}^{q-1}$. If $f,g\in C[0,1]$, then $\|Tf-Tg\|_\infty \leq p_{\text{max}}\|f-g\|_\infty$, and therefore, since $p_{\text{max}}<1$, $T$ defines a contraction mapping. *Proof.* Suppose $\frac{q-1}{q} \leq x \leq 1$. Then, $$\begin{aligned} Tf-Tg &= \left(\sum_{j=0}^{q-2}p_j + p_{q-1}f(qx-(q-1))\right) - \left(\sum_{j=0}^{q-2}p_j + p_{q-1}g(qx-(q-1))\right)\\ &= p_{q-1}(f(qx-(q-1)) - g(qx-(q-1))). \end{aligned}$$ Therefore $\|Tf-Tg\|_\infty = p_{q-1}\|f-g\|_\infty$ when restricted to $\left[\frac{q-1}{q},1\right]$. Similar conclusions hold for the other subsets of the domain. Putting them all together (and taking the supremum over all of $[0,1]$), we conclude $$\|Tf-Tg\|_\infty \leq p_{\text{max}}\|f-g\|_\infty.$$ ◻ **Theorem 5**. Given the sequence of functions $(f_n)_{n=0}^\infty$ defined by $f_0=x$ and $f_{n+1}= Tf_n$ and the distribution function $F$ in Theorem [Theorem 2](#maintheorem2){reference-type="ref" reference="maintheorem2"}, (i) $f_n \to F$ uniformly on $[0,1]$ (ii) $\|x-F(x)\|_\infty \leq \displaystyle\left( \frac{1}{1-p_{\text{max}}}\right)\|x-f_1(x)\|_\infty$ *Proof.* Lemma [Lemma 3](#lemma:contraction){reference-type="ref" reference="lemma:contraction"} showed that the operator $T\colon C[0,1]\to C[0,1]$ is a contraction mapping. Thus, since $(C[0,1],\|\cdot\|_\infty)$ is a (non-empty) complete metric space, the Banach Fixed Point Theorem guarantees that $T$ admits a unique fixed point---a function $G\in C[0,1]$ such that $TG=G$, where $G=\lim f_n$.
This function is exactly the distribution $F$ in Theorem [Theorem 2](#maintheorem2){reference-type="ref" reference="maintheorem2"}. The Banach Fixed Point Theorem also guarantees that the same fixed point is reached regardless of the initial function (i.e., the choice of $f_0\in C[0,1]$ is arbitrary). As such, we can choose our initial starting function as $f_0=x$ and denote $F(x) =: f_\infty(x)$. Now, since $x-F(x) = f_0-f_\infty$ can be written as a telescoping series, we have the following: $$\|x-F(x)\| = \|f_0 - f_\infty\| = \left\| \sum_{n=0}^\infty (f_n-f_{n+1}) \right\| \leq \sum_{n=0}^\infty \|f_n-f_{n+1}\|.$$ By Lemma [Lemma 3](#lemma:contraction){reference-type="ref" reference="lemma:contraction"} and the fact that $p_{\text{max}}<1$, this yields $$\|x-F(x)\| \leq \sum_{n=0}^\infty (p_{\text{max}})^n\|f_0-f_1\| = \displaystyle\left( \frac{1}{1-p_{\text{max}}}\right)\|f_0-f_1\| = \displaystyle\left( \frac{1}{1-p_{\text{max}}}\right)\|x-f_1\|,$$ where all norms are sup-norms over $[0,1]$. ◻ Thus, to understand the sup-norm difference between the distribution $F(x)$ and the uniform distribution $y=x$, it suffices to understand the quantity $\|x-f_1\|$, where $f_1$ is a piece-wise linear function with no fractal-like components. ## Comparisons via Arclength {#comparearclength} The previous section's result gave comparative information about the full singular distribution---one obtained after we roll our unfair die infinitely often. What if we wanted to compare our (possibly unfair) distribution after finitely many rolls? One route is to look at their arclengths. We start by recording a small result from [@cantor1 Theorem 6.22]: **Lemma 4**. Let $F\colon [0,1]\to \mathbb{R}$ be a continuous, increasing function for which $F(0)=0$ and $F(1)=1$. Then the following two statements are equivalent (i) The length of the arc $y=F(x)$ on $[0,1]$ is $2$. (ii) The function $F$ is singular. **Proposition 2**.
Consider a $q$-sided die, where $x_i\in\{0,\ldots,q-1\}$ denotes the outcome of the $i^\text{th}$ toss and $p_j:=\mathbb{P}[x_i=j]$ for $j\in\{0,\ldots,q-1\}$. Given $X = \sum_{i=1}^\infty x_iq^{-i}$, if $F(x)=\mathbb{P}[X\leq x]$ is the cumulative distribution function obtained after tossing the die an infinite number of times, then (i) If $p_j = \frac{1}{q}$ for all $j$, then the arclength of $F(x)$ on $[0,1]$ is $\sqrt{2}$. [\[item1:prop:arclengthbounds\]]{#item1:prop:arclengthbounds label="item1:prop:arclengthbounds"} (ii) If $p_j\neq \frac{1}{q}$ for some $j$, then the arclength of $F(x)$ on $[0,1]$ is $2$. *Proof.* By Theorem [Theorem 2](#maintheorem2){reference-type="ref" reference="maintheorem2"}, if $p_j = \frac{1}{q}$ for all $j$, then $F(x)=x$ on $[0,1]$ and therefore its length is $\sqrt{2}$. If, on the other hand, we have that $p_j\neq \frac{1}{q}$ for some $j$, then Theorem [Theorem 2](#maintheorem2){reference-type="ref" reference="maintheorem2"} guarantees that $F(x)$ is singular. Moreover, by Theorem [Theorem 1](#maintheorem1){reference-type="ref" reference="maintheorem1"}, we know that $F$ is both continuous and increasing on $[0,1]$. Finally, we observe that $F(0)=0$ and $F(1)=1$ (this can be seen, for example, by using the recursion formula in Theorem [Theorem 2](#maintheorem2){reference-type="ref" reference="maintheorem2"}). Thus, by Lemma [Lemma 4](#lemma:arclength){reference-type="ref" reference="lemma:arclength"}, the arclength of $F$ on $[0,1]$ is equal to 2. ◻ This proposition tells us the arclength for the cumulative distribution function after we roll our die infinitely often. If, however, we are given a die and roll it $n$ times, we can ask: Are we getting closer to a fair distribution, or an unfair one? One way to answer this is to look at the $n^{\text{th}}$ iterate of our recursion formula.
That is, in the language of Theorem [Theorem 5](#theorem: uniformcomp){reference-type="ref" reference="theorem: uniformcomp"}, we consider $f_n = T^nx$ and look at its arclength as $n$ gets larger. We fix some notation first. Given our $q$-sided die, where $x_i\in\{0,\ldots,q-1\}$ denotes the outcome of the $i^\text{th}$ toss and $p_j:=\mathbb{P}[x_i=j]$ for $j\in\{0,\ldots,q-1\}$, we will put $P = \{p_j\}_{j=0}^{q-1}$ and denote the set of its $n$-tuples by $P^n$. Order this finite set lexicographically and put $P^n =\{v_\ell\}_{\ell=0}^{q^n-1}$ so that, for example, $v_0 = (p_0,p_0,\ldots,p_0)$, $v_1=(p_0,\ldots,p_0,p_1)$, etc. Let $\Pi_n\colon P^n\to \mathbb{R}$ be the mapping that multiplies the coordinates of a given tuple from $P^n$. For example, if $v_\ell = (p_0,p_1,p_0,p_2,\ldots,p_3)$, then $\Pi_n(v_\ell) = p_0\cdot p_1\cdot p_0\cdot p_2\cdot \ldots \cdot p_3$. **Remark 2**. The tuple of probabilities associated with a given $v_\ell$, say $(p_{\ell_1},\ldots,p_{\ell_n})$, provides a *unique* 'tag' by which we can locate the $q^n$ base-$q$ intervals of rank $n$. For example, if we let $q=4$ so that our probabilities are $p_0,p_1,p_2$, and $p_3$, and we consider the base-$4$ intervals of rank $n=2$, then $$P^2 = \{(p_i,p_j) \ : \ i,j\in\{0,1,2,3\}\}.$$ Now, the interval $\left[ \frac{14}{4^2}, \frac{15}{4^2}\right]$ is uniquely associated with the tuple $v_{14}=(p_3,p_2)$ in the following way: Start with $[0,1]$, zoom in on the fourth subinterval $\left[ \frac{3}{4}, 1\right]$ and further zoom in on *its* third subinterval to yield $\left[ \frac{14}{4^2}, \frac{15}{4^2}\right]$. Note that order matters. For example, the tuple $v_{11}=(p_2,p_3)$ corresponds to the interval $\left[ \frac{11}{4^2}, \frac{12}{4^2}\right]$. In either case, $\Pi_2((p_3,p_2))=\Pi_2((p_2,p_3)) = p_3p_2$. **Theorem 6**. Consider a $q$-sided die, where $x_i\in\{0,\ldots,q-1\}$ denotes the outcome of the $i^\text{th}$ toss and $p_j:=\mathbb{P}[x_i=j]$ for $j\in\{0,\ldots,q-1\}$.
Let $f_n = T^nx$ be the piece-wise linear function described in Theorem [Theorem 5](#theorem: uniformcomp){reference-type="ref" reference="theorem: uniformcomp"} and let $P^n =\{v_i\}_{i=0}^{q^n-1}$ be the set of $n$-tuples described in the preceding discussion. Then $$\text{Arclength of} \ f_n = \sum_{i=0}^{q^n-1} \sqrt{ \left( \frac{1}{q^n} \right)^2 + (\Pi_n(v_{i}))^2 }.$$ *Proof.* The function $f_n$ is piece-wise linear on the base-$q$ intervals of rank $n$: $$\label{eq:ranknints} \left[0, \frac{1}{q^n}\right], \left[\frac{1}{q^n}, \frac{2}{q^n}\right],\ldots, \left[ \frac{q^n-1}{q^n},1\right].$$ As such, the arclength of $f_n$ is equal to the sum of the lengths of the linear components on each of these intervals. Let $[\textstyle \frac{i}{q^n}, \frac{i+1}{q^n}]$ be an arbitrary such interval and consider the points $f_n(\frac{i}{q^n})$ and $f_n(\frac{i+1}{q^n})$. Note that, if $F$ is the full cumulative distribution function, then $F$ and $f_n$ agree on the endpoints of every base-$q$ interval of rank $n$. (In fact, this is true for any rank of base-$q$ intervals.) Thus, we can use the recursive definition for $F$ given in Theorem [Theorem 2](#maintheorem2){reference-type="ref" reference="maintheorem2"} to evaluate the endpoints $f_n(\frac{i}{q^n})$ and $f_n(\frac{i+1}{q^n})$. To see this, note that the points $\frac{i}{q^n}$ and $\frac{i+1}{q^n}$ must both live in some base-$q$ interval of rank $1$. That is, they must both live in one of $[0, \textstyle \frac{1}{q}], [\textstyle \frac{1}{q}, \frac{2}{q}],\ldots, [\textstyle \frac{q-1}{q},1]$. (Here we are allowing the possibility that one of these points is the endpoint of an interval). Suppose the two points live in the interval $[\textstyle \frac{k}{q}, \frac{k+1}{q}]$ for some $k$.
Then $$\begin{aligned} f_n\left(\frac{i}{q^n}\right) = F\left(\frac{i}{q^n}\right) &= (p_0+p_1+\ldots+p_{k-1})+p_{k}F\left(q\cdot \frac{i}{q^n} - k \right) \\ &= (p_0+p_1+\ldots+p_{k-1})+p_{k}F\left(\frac{i - kq^{n-1}}{q^{n-1}}\right)\end{aligned}$$ and $$\begin{aligned} f_n\left(\frac{i+1}{q^n}\right) = F\left(\frac{i+1}{q^n}\right) &= (p_0+p_1+\ldots+p_{k-1})+p_{k}F\left(q\cdot \frac{i+1}{q^n} - k \right) \\ &=(p_0+p_1+\ldots+p_{k-1})+p_{k}F\left(\frac{(i - kq^{n-1})+1}{q^{n-1}}\right)\end{aligned}$$ so that $$\label{eq:functionheight} f_n\left(\frac{i+1}{q^n}\right) - f_n\left(\frac{i}{q^n}\right) = p_k\left( F\left(\frac{(i - kq^{n-1})+1}{q^{n-1}}\right) - F\left(\frac{i - kq^{n-1}}{q^{n-1}}\right) \right).$$ Notably, we are now looking at the endpoints of the base-$q$ interval of rank $n-1$ given by $\left[ \textstyle \frac{i - kq^{n-1}}{q^{n-1}}, \frac{(i - kq^{n-1})+1}{q^{n-1}} \right]$. Thus, ([\[eq:functionheight\]](#eq:functionheight){reference-type="ref" reference="eq:functionheight"}) shows that when we run the difference $f_n\left(\frac{i+1}{q^n}\right) - f_n\left(\frac{i}{q^n}\right)$ through an iteration of the recursive formula for $F$, it returns the probability $p_k$ (where $k$ indexes the rank-$1$ interval that the rank-$n$ interval $\left[ \textstyle \frac{i}{q^n},\frac{i+1}{q^n}\right]$ landed in) times another difference of $F$-function evaluations at the endpoints of a rank $n-1$ interval. Repeating this process $n$ times, we get that $$f_n\left(\frac{i+1}{q^n}\right) - f_n\left(\frac{i}{q^n}\right) = \Pi_n(v_{i})(F(1)-F(0)) = \Pi_n(v_{i})(1-0) = \Pi_n(v_{i}),$$ where, in light of Remark [Remark 2](#remark:tags){reference-type="ref" reference="remark:tags"}, $v_{i}$ is exactly the unique $n$-tuple 'tag' for the interval $\left[ \textstyle \frac{i}{q^n},\frac{i+1}{q^n}\right]$. 
Since $f_n$ is piece-wise linear on the base-$q$ intervals of rank $n$, its arclength over $\left[ \textstyle \frac{i}{q^n},\frac{i+1}{q^n}\right]$ is equal to $$\sqrt{\left( \frac{i+1}{q^n} - \frac{i}{q^n} \right)^2 + \left(f_n\left(\frac{i+1}{q^n}\right) - f_n\left(\frac{i}{q^n}\right)\right)^2} = \sqrt{\left( \frac{1}{q^n}\right)^2 + \left(\Pi_n(v_{i})\right)^2}.$$ Computing this for each interval in ([\[eq:ranknints\]](#eq:ranknints){reference-type="ref" reference="eq:ranknints"}) gives the total arclength as $$\text{Arclength of} \ f_n = \sum_{i=0}^{q^n-1} \sqrt{ \left( \frac{1}{q^n} \right)^2 + (\Pi_n(v_{i}))^2 }.$$ ◻ Observe that, indeed, if $p_j=\textstyle\frac{1}{q}$ for all $j$, then $\Pi_n(v_i)=\textstyle \frac{1}{q^n}$ for all $v_i\in P^n$ so that $$\begin{aligned} \text{Arclength of} \ F \ \text{on} \ [0,1] &= \lim_{n\to \infty} \sum_{i=0}^{q^n-1} \sqrt{ \left( \frac{1}{q^n} \right)^2 + (\Pi_n(v_{i}))^2 }\\ &= \lim_{n\to \infty} \sum_{i=0}^{q^n-1} \sqrt{ \left( \frac{1}{q^n} \right)^2 + \left( \frac{1}{q^n} \right)^2 }\\ &= \sqrt{2}.\end{aligned}$$ Thus, this formula now provides an alternate way to recover ([\[item1:prop:arclengthbounds\]](#item1:prop:arclengthbounds){reference-type="ref" reference="item1:prop:arclengthbounds"}) from Proposition [Proposition 2](#prop:arclengthbounds){reference-type="ref" reference="prop:arclengthbounds"}. We also have the following corollary: **Corollary 1**. If $p_j\neq \textstyle \frac{1}{q}$ for some $j$, then $$\text{Arclength of} \ F \ \text{on} \ [0,1] = \lim_{n\to\infty} \sum_{i=0}^{q^n-1} \sqrt{ \left( \frac{1}{q^n} \right)^2 + (\Pi_n(v_{i}))^2 } = 2.$$ *Proof.* The fact that $p_j\neq \textstyle \frac{1}{q}$ implies, by Proposition [Proposition 2](#prop:arclengthbounds){reference-type="ref" reference="prop:arclengthbounds"}, that the arclength of $F$ is 2. Using the formula in Theorem [Theorem 6](#theorem:arclength){reference-type="ref" reference="theorem:arclength"} gives the result. 
◻ In general, the authors know of no way to compute this limit directly. This might be interesting due to the fact that the result can be rephrased as $$\lim_{n\to\infty} \left(\underset{v_i\in P^n}{\text{Average}}\left\{\sqrt{ 1 + (q^n\Pi_n(v_i))^2 }\right\}\right) = 2,$$ where we can view $\log(p_i)$ as being the weights on a complete $q$-ary tree and the mapping $\Pi_n$ as computing the lengths of the paths (in a graph-theoretic sense) through the tree. The expression above is then taking a limit of the average distance, over all paths in the tree, between 1 and the perturbation-from-fair that a particular path yields. **Remark 3**. In light of Remark [Remark 1](#cantor){reference-type="ref" reference="cantor"}, we know that the Cantor function is obtained when $q=3$ and our probabilities are chosen so that $p_0=p_2=0.5$ and $p_1=0$. Proposition [Proposition 2](#prop:arclengthbounds){reference-type="ref" reference="prop:arclengthbounds"} therefore guarantees that its arclength on $[0,1]$ is equal to 2 and hence, we know that the limit considered in Corollary [Corollary 1](#corr:unfairarc){reference-type="ref" reference="corr:unfairarc"} is equal to 2. Interestingly, this is one of the few instances in which one *can* compute this limit directly. See [@cantor2 pg.4] for details. # Discussion [\[sec:conclusion\]]{#sec:conclusion label="sec:conclusion"} Motivated by the need to understand probabilistic computing devices and their inherent randomness, our paper aimed to investigate the distribution function associated with rolling a (possibly unfair) $q$-sided die. While the literature covers the $q=2$ coin flip case extensively, the full $q$-sided die case had not yet been addressed. 
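The arclength formula of Theorem [Theorem 6](#theorem:arclength){reference-type="ref" reference="theorem:arclength"} is also easy to probe numerically. The following sketch (ours, not code from the paper; the function name `arclength` and the biased-coin probabilities $(0.9,0.1)$ are illustrative choices) evaluates the rank-$n$ sum directly by enumerating the $q^n$ tuples of $P^n$:

```python
from itertools import product
from math import isclose, prod, sqrt

def arclength(probs, n):
    """Arclength of the rank-n piecewise-linear approximant f_n,
    computed as the sum in Theorem 6: one term per base-q interval
    of rank n, tagged by its tuple v in P^n."""
    q = len(probs)
    dx = q ** (-n)  # width of each rank-n base-q interval
    # prod(v) is Pi_n(v): the product of the probabilities in the tag.
    return sum(sqrt(dx**2 + prod(v)**2) for v in product(probs, repeat=n))

# Fair coin: every term equals sqrt(2)/2**n, so the sum is sqrt(2) for all n.
assert isclose(arclength([0.5, 0.5], 8), sqrt(2))

# Biased coin: the arclength grows with n toward the limit 2 of Corollary 1.
for n in (2, 6, 10):
    print(n, arclength([0.9, 0.1], n))
```

For the fair coin this recovers the $\sqrt{2}$ computed above exactly; for the biased coin the printed values increase with $n$ toward, but never reach, the limiting arclength of 2.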
Theorems [Theorem 1](#maintheorem1){reference-type="ref" reference="maintheorem1"} and [Theorem 2](#maintheorem2){reference-type="ref" reference="maintheorem2"} helped answer Conjecture [Conjecture 1](#conj:billingsleydice){reference-type="ref" reference="conj:billingsleydice"}, while Theorems [Theorem 2](#maintheorem2){reference-type="ref" reference="maintheorem2"} and [Theorem 3](#maintheorem3){reference-type="ref" reference="maintheorem3"} addressed recent 'folk lore' claims (reproduced here as claims ([\[folklore1\]](#folklore1){reference-type="ref" reference="folklore1"}) and ([\[folklore2\]](#folklore2){reference-type="ref" reference="folklore2"})). In the spirit of these results, Theorem [Theorem 4](#thm:noniidcoinlips){reference-type="ref" reference="thm:noniidcoinlips"} provided a novel analogous result to [@billingsley Example 31.1] for independent, but not identically distributed, coin flip sequences. Adding to these investigations, we have provided two theoretical tools to compare an unfair distribution, $F(x)$, to a fair one, $y=x$. Both use the iterative construction of the distribution given in ([\[seqdef\]](#seqdef){reference-type="ref" reference="seqdef"}). Theorem [Theorem 5](#theorem: uniformcomp){reference-type="ref" reference="theorem: uniformcomp"} in Section [4.1](#supcompare){reference-type="ref" reference="supcompare"} provided an upper bound on $\|x-F(x)\|_\infty$, and Theorem [Theorem 6](#theorem:arclength){reference-type="ref" reference="theorem:arclength"} in Section [4.2](#comparearclength){reference-type="ref" reference="comparearclength"} provided a formula for calculating the arclength of $F(x)$ after finitely many dice rolls. ## Future Mathematical Work In this paper, we investigated the analytic properties of distributions associated with unfair dice. 
We looked at a single die in Theorems [Theorem 1](#maintheorem1){reference-type="ref" reference="maintheorem1"} and [Theorem 2](#maintheorem2){reference-type="ref" reference="maintheorem2"} and a pair of same-sided dice in Theorem [Theorem 3](#maintheorem3){reference-type="ref" reference="maintheorem3"}. It is reasonable to ask about analytic comparisons between pairs of dice with differing numbers of sides. Consider a $q$-sided and $\widetilde{q}$-sided pair of dice. Let $x_i\in\{0,\ldots,q-1\}$ denote the outcome of the $i^\text{th}$ toss of the $q$-sided die with corresponding probabilities $p_j:=\mathbb{P}[x_i=j]$ for $j\in\{0,\ldots,q-1\}$, and let $\widetilde{x}_k$ and $\widetilde{p}_k$ be defined similarly for the second die (with $k\in\{0,\ldots,\widetilde{q}-1\})$. Given $X = \sum_{i=1}^\infty x_iq^{-i}$ and $\widetilde{X} = \sum_{i=1}^\infty \widetilde{x}_i\widetilde{q}^{-i}$, put $F(x)=\mathbb{P}[X\leq x]$ and $\widetilde{F}(x)=\mathbb{P}[\widetilde{X}\leq x]$ as the respective cumulative distribution functions obtained after tossing the corresponding die an infinite number of times. Let $\mu$ and $\widetilde{\mu}$ be the associated probability measures, respectively. **Question 1**. If $q\neq \widetilde{q}$, then what can be said of $\mu$ and $\widetilde{\mu}$? As far as we can tell, this situation is more nuanced. It seems possible to show that when $q>\widetilde{q}$, and the $q$-sided die has only $\widetilde{q}$ possible outcomes (with the remaining $q-\widetilde{q}$ outcomes having zero probability), the associated measures are mutually singular. However, when $q$ and $\widetilde{q}$ are not relatively prime (for example, when $q=2$ and $\widetilde{q}=4$), it may be the case that, for some specific choices of probabilities, the support of $\mu$ and $\widetilde{\mu}$ could be the same. It is further unclear what happens when $q$ and $\widetilde{q}$ *are* taken to be relatively prime. 
Tackling this generalization would require a more thorough understanding of how the measures $\mu$ and $\widetilde{\mu}$ interact with base expansions different from their own. In Corollary [Corollary 1](#corr:unfairarc){reference-type="ref" reference="corr:unfairarc"}, we showed that when $p_j\neq \textstyle \frac{1}{q}$ for some $j$, $$\text{Arclength of} \ F \ \text{on} \ [0,1] = \lim_{n\to\infty} \sum_{i=0}^{q^n-1} \sqrt{ \left( \frac{1}{q^n} \right)^2 + (\Pi_n(v_{i}))^2 } = 2.$$ We proved this by showing the distribution is singular. Ideally, we would like a direct proof of this fact to facilitate a deeper understanding of the iteration approach given by the recursive formula in ([\[seqdef\]](#seqdef){reference-type="ref" reference="seqdef"}). Specifically, we would like to rigorously quantify how much additional arclength is witnessed every time we iterate. ## Future Computational Work The largely theoretical results shown here inform us about how distributions should look when unfair coins or dice are thrown. Of immediate computational interest is the formula for arclength after $n$ tosses given in Theorem [Theorem 6](#theorem:arclength){reference-type="ref" reference="theorem:arclength"}. Given precisely tuned devices returning weighted-dice or coin-flip results, one could feasibly form a sort of hypothesis test on how many flips one must take before noticing a deviation from a uniform distribution. Verifying the arclength formula in device simulation and devising such a test is the subject of ongoing work. Beyond the fast application of the comparison metrics, the form of the singular measures themselves provides an inspiration point for future microelectronic design and verification. In our proof of the folk theorem, Theorem [Theorem 2](#maintheorem2){reference-type="ref" reference="maintheorem2"}, an argument is made based on a set of full measure for one weighting being a set of zero measure for all other weightings. 
This result then provides a basis for verifying the distribution of an array of $p$-weighted coin-like devices. If one had the probability measure induced by each device and had the set of all binary numbers with density $p$, then the measure of that set would determine whether the device is correctly weighted. Obviously, such a distributional object does not exist. However, this theory touchpoint can guide the discovery of future approximate methods and heuristics. Future work will see us put these comparative tools into practice via simulation. We will analyze the resulting data, discuss their merits and difficulties, and offer additional refinements to their implementation. # Acknowledgements {#acknowledgements .unnumbered} This article has been authored by an employee of National Technology & Engineering Solutions of Sandia, LLC under Contract No. DE-NA0003525 with the U.S. Department of Energy (DOE). The employee owns all right, title and interest in and to the article and is solely responsible for its contents. The United States Government retains and the publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this article or allow others to do so, for United States Government purposes. The DOE will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan <https://www.energy.gov/downloads/doe-public-access-plan>. The authors acknowledge support from the DOE Office of Science (ASCR/BES) Microelectronics Co-Design project COINFLIPS. Douglas T. Pfeffer  [Department of Mathematics, University of Tampa, 401 W. Kennedy Blvd. Tampa, FL 33606]{.smallcaps} e-mail: `dpfeffer@ut.edu` J. Darby Smith  [Neural Exploration and Research Laboratory, Center for Computing Research, Sandia National Laboratories, 1515 Eubank Blvd. 
SE, Albuquerque, NM 87123]{.smallcaps} e-mail: `jsmit16@sandia.gov` William Severa  [Neural Exploration and Research Laboratory, Center for Computing Research, Sandia National Laboratories, 1515 Eubank Blvd. SE, Albuquerque, NM 87123]{.smallcaps} e-mail: `wmsever@sandia.gov` [^1]: We provide two examples, though we encourage the reader to explore the fascinating world of deficient pRNGs. The first is of historical notoriety: The RANDU generator's iterates in three dimensions fall along planes [@markowsky2014sad], making predictions trivially easy in this case. The second is that system time is a common seed value. Knowing the system time at seed setting allows players to predict future events in games such as Pokémon [@pika].